kernel: bump 5.15 to 5.15.108
target/linux/layerscape/patches-5.15/701-staging-add-fsl_ppfe-driver.patch
From 15eff6c8760a76619654464103314bba9fdda0af Mon Sep 17 00:00:00 2001
From: Pawel Dembicki <paweldembicki@gmail.com>
Date: Sat, 19 Oct 2022 07:05:49 +0200
Subject: [PATCH] staging: add fsl_ppfe driver
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

This is a squash of all commits with the ppfe driver, taken from the NXP 5.15 tree:
https://source.codeaurora.org/external/qoriq/qoriq-components/linux/log/drivers/staging/fsl_ppfe?h=lf-5.15.y

List of original commits:

net: fsl_ppfe: dts binding for ppfe

Signed-off-by: Calvin Johnson <calvin.johnson@nxp.com>
Signed-off-by: Anjaneyulu Jagarlmudi <anji.jagarlmudi@nxp.com>

staging: fsl_ppfe/eth: header files for pfe driver

This patch has all pfe header files.

Signed-off-by: Calvin Johnson <calvin.johnson@nxp.com>
Signed-off-by: Anjaneyulu Jagarlmudi <anji.jagarlmudi@nxp.com>

staging: fsl_ppfe/eth: introduce pfe driver

This patch introduces Linux support for NXP's LS1012A Packet
Forwarding Engine (pfe_eth). LS1012A uses a hardware packet forwarding
engine to provide high-performance Ethernet interfaces. The device
includes two Ethernet ports.

Signed-off-by: Calvin Johnson <calvin.johnson@nxp.com>
Signed-off-by: Anjaneyulu Jagarlmudi <anji.jagarlmudi@nxp.com>

staging: fsl_ppfe/eth: fix RGMII tx delay issue

Recently, the logic to enable the RGMII tx delay was changed by the
patch below.

https://patchwork.kernel.org/patch/9447581/

Based on that patch, the appropriate change is made in the PFE driver.

Signed-off-by: Calvin Johnson <calvin.johnson@nxp.com>
Signed-off-by: Anjaneyulu Jagarlmudi <anji.jagarlmudi@nxp.com>

staging: fsl_ppfe/eth: remove unused functions

Remove the unused functions hif_xmit_pkt & hif_lib_xmit_pkt.

Signed-off-by: Calvin Johnson <calvin.johnson@nxp.com>

staging: fsl_ppfe/eth: fix read/write/ack idx issue

While fixing checkpatch errors, some of the index increments
were commented out. They are re-enabled here.

Signed-off-by: Calvin Johnson <calvin.johnson@nxp.com>

staging: fsl_ppfe/eth: Make phy_ethtool_ksettings_get return void

Make the return value void, since the function never returns a
meaningful value.

Signed-off-by: Calvin Johnson <calvin.johnson@nxp.com>

staging: fsl_ppfe/eth: add function to update tmu credits

The __hif_lib_update_credit function is used to update the TMU credits.
If tx_qos is set, the TMU credit is updated based on the number of
packets transmitted by the TMU.

Signed-off-by: Calvin Johnson <calvin.johnson@nxp.com>
Signed-off-by: Anjaneyulu Jagarlmudi <anji.jagarlmudi@nxp.com>

staging: fsl_ppfe/eth: Avoid packet drop at TMU queues

Added flow control between the TMU queues and the PFE Linux driver,
based on TMU credit availability.
Added a tx_qos module parameter to control this behavior.
Use queue-0 as the default queue to transmit packets.

Signed-off-by: Calvin Johnson <calvin.johnson@nxp.com>
Signed-off-by: Akhila Kavi <akhila.kavi@nxp.com>
Signed-off-by: Anjaneyulu Jagarlmudi <anji.jagarlmudi@nxp.com>

staging: fsl_ppfe/eth: Enable PFE in clause 45 mode

When we operate in clause 45 mode, we need to call the function
get_phy_device() with its 3rd argument set to "true"; the resulting
phy device then needs to be registered with the phy layer via
phy_device_register().

Signed-off-by: Bhaskar Upadhaya <Bhaskar.Upadhaya@nxp.com>

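A minimal sketch of that registration sequence (illustrative only; the
helper name pfe_c45_register is hypothetical, not the exact driver code):

    #include <linux/phy.h>

    /* Probe a clause-45 PHY: the third argument of get_phy_device()
     * selects C45 access, and the device must then be registered.
     */
    static int pfe_c45_register(struct mii_bus *bus, int addr)
    {
            struct phy_device *phydev;

            phydev = get_phy_device(bus, addr, true); /* true => C45 */
            if (IS_ERR(phydev))
                    return PTR_ERR(phydev);

            return phy_device_register(phydev);
    }
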
staging: fsl_ppfe/eth: Disable autonegotiation for 2.5G SGMII

The PCS initialization sequence for the 2.5G SGMII interface requires
auto-negotiation to be disabled.

Signed-off-by: Bhaskar Upadhaya <Bhaskar.Upadhaya@nxp.com>

staging: fsl_ppfe/eth: calculate PFE_PKT_SIZE with SKB_DATA_ALIGN

The pfe packet size was calculated without considering skb data
alignment, and this resulted in jumbo frames crashing the kernel when
the cacheline size increased from 64 to 128 bytes with
commit 97303480753e ("arm64: Increase the max granular size").

Modify the pfe packet size calculation to include the skb data
alignment of sizeof(struct skb_shared_info).

Signed-off-by: Calvin Johnson <calvin.johnson@nxp.com>

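The shape of that calculation, as a sketch (the buffer and headroom
macros here are illustrative, not the driver's exact names):

    #include <linux/skbuff.h>

    #define PFE_BUF_SIZE      2048  /* hypothetical buffer size */
    #define PFE_PKT_HEADROOM   128  /* hypothetical headroom */

    /* Reserve room for the shared info, rounded up with
     * SKB_DATA_ALIGN(), so a 128-byte cacheline no longer lets
     * jumbo frames overflow the buffer.
     */
    #define PFE_PKT_SIZE      (PFE_BUF_SIZE - PFE_PKT_HEADROOM - \
                               SKB_DATA_ALIGN(sizeof(struct skb_shared_info)))
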
staging: fsl_ppfe/eth: support for userspace networking

This patch adds userspace mode support to the fsl_ppfe network driver.
In the new mode, basic hardware initialization is performed in the
kernel, while the datapath and HIF handling are the responsibility of
userspace.

A new command line parameter is added to initialize the ppfe module
in userspace mode. By default the module remains in kernelspace
networking mode.
To enable userspace mode, use "insmod pfe.ko us=1"

Signed-off-by: Akhil Goyal <akhil.goyal@nxp.com>
Signed-off-by: Gagandeep Singh <g.singh@nxp.com>

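The parameter plumbing presumably looks like this sketch (the variable
name comes from the insmod example above; the rest is illustrative):

    #include <linux/module.h>

    static int us;  /* 0 = kernelspace networking (default) */
    module_param(us, int, 0444);
    MODULE_PARM_DESC(us, "0: kernel-mode datapath, 1: userspace datapath");
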
staging: fsl_ppfe/eth: unregister netdev after pfe_phy_exit

rmmod pfe.ko throws the below warning:

kernfs: can not remove 'phydev', no directory
------------[ cut here ]------------
WARNING: CPU: 0 PID: 2230 at fs/kernfs/dir.c:1481
kernfs_remove_by_name_ns+0x90/0xa0

This is caused when the unregistered netdev structure is accessed to
disconnect the phy.

Resolve the issue by unregistering the netdev after disconnecting the
phy.

Signed-off-by: Calvin Johnson <calvin.johnson@nxp.com>

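In other words, the teardown order becomes (a sketch; the helper and
field names are assumptions, not the exact driver code):

    /* phy_disconnect() touches the netdev's sysfs entries, so it has
     * to run while the netdev is still registered.
     */
    static void pfe_eth_exit_one(struct pfe_eth_priv_s *priv)
    {
            pfe_phy_exit(priv);             /* disconnect the phy first */
            unregister_netdev(priv->ndev);  /* ...then drop the netdev */
            free_netdev(priv->ndev);
    }
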
staging: fsl_ppfe/eth: HW parse results for DPDK

HW parse results are included in the packet headroom.
The length and offset calculation now accommodates the parse info size.

Signed-off-by: Archana Madhavan <archana.madhavan@nxp.com>

staging: fsl_ppfe/eth: reorganize pfe_netdev_ops

Reorganize the members of struct pfe_netdev_ops to match the order of
members in struct net_device_ops defined in include/linux/netdevice.h.

Signed-off-by: Calvin Johnson <calvin.johnson@nxp.com>

staging: fsl_ppfe/eth: use mask for rx max frame len

Define and use PFE_RCR_MAX_FL_MASK to properly set the Rx max frame
length field of the MAC Receive Control Register.

Signed-off-by: Calvin Johnson <calvin.johnson@nxp.com>

staging: fsl_ppfe/eth: define pfe ndo_change_mtu function

Define the ndo_change_mtu function for pfe. This sets the max Rx frame
length to the new mtu.

Signed-off-by: Calvin Johnson <calvin.johnson@nxp.com>

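A sketch of what such a hook looks like (only the ndo_change_mtu shape
is fixed by the API; the register helper here is hypothetical):

    static int pfe_eth_change_mtu(struct net_device *ndev, int new_mtu)
    {
            struct pfe_eth_priv_s *priv = netdev_priv(ndev);

            ndev->mtu = new_mtu;
            /* program MAX_FL in the MAC receive control register */
            pfe_set_rx_max_fl(priv, new_mtu + ETH_HLEN + ETH_FCS_LEN);

            return 0;
    }
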
staging: fsl_ppfe/eth: remove jumbo frame enable from gemac init

The MAC Receive Control Register was configured to allow jumbo frames.
This is removed, as jumbo frames can be supported at any time by
changing the mtu, which will in turn modify the MAX_FL field of the
MAC RCR.
Jumbo frames caused pfe to hang on LS1012A rev 1.0 silicon due to
erratum A-010897.

Signed-off-by: Calvin Johnson <calvin.johnson@nxp.com>

staging: fsl_ppfe/eth: disable CRC removal

Disable CRC removal from the packet, so that packets are forwarded
as-is to Linux.
The CRC configuration in the MAC will be reflected in the packets
received by Linux.

Signed-off-by: Calvin Johnson <calvin.johnson@nxp.com>

staging: fsl_ppfe/eth: handle ls1012a errata_a010897

On LS1012A rev 1.0, jumbo frames are not supported, as they cause
the PFE controller to hang. A reset of the entire chip is required
to resume normal operation.

To handle this erratum, frames with length > 1900 are truncated on
rev 1.0 of LS1012A.

Signed-off-by: Calvin Johnson <calvin.johnson@nxp.com>

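A sketch of the workaround (EMAC_TRUNC_FL is the MAC truncation
register defined later in this patch; the flag and limit names are
assumed, not the driver's exact ones):

    #define MAX_TRUNC_LEN 1900  /* hypothetical name for the limit */

    /* On rev 1.0 silicon, clamp receive frames in the MAC so
     * oversized frames can never reach the PFE and trigger the hang.
     */
    if (pfe_errata_a010897)  /* assumed rev-1.0 detection flag */
            writel(MAX_TRUNC_LEN, base + EMAC_TRUNC_FL);
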
staging: fsl_ppfe/eth: replace magic numbers

Replace magic numbers, plus some cosmetic changes.

Signed-off-by: Calvin Johnson <calvin.johnson@nxp.com>

staging: fsl_ppfe/eth: resolve indentation warning

Resolve the following indentation warning:

drivers/staging/fsl_ppfe/pfe_ls1012a_platform.c:
In function ‘pfe_get_gemac_if_proprties’:
drivers/staging/fsl_ppfe/pfe_ls1012a_platform.c:96:2:
warning: this ‘else’ clause does not guard...
[-Wmisleading-indentation]
  else
  ^~~~
drivers/staging/fsl_ppfe/pfe_ls1012a_platform.c:98:3:
note: ...this statement, but the latter is misleadingly indented as
if it were guarded by the ‘else’
   pdata->ls1012a_eth_pdata[port].mdio_muxval = phy_id;
   ^~~~~

Signed-off-by: Calvin Johnson <calvin.johnson@nxp.com>

staging: fsl_ppfe/eth: add fixed-link support

In cases where the MAC is not connected to a normal MDIO-managed PHY
device, but instead to a switch, it is configured as a "fixed-link".
Code to handle this scenario is added here.

phy_node in the dtb is checked to identify a fixed-link.
On identification of a fixed-link, it is registered and connected.

Signed-off-by: Calvin Johnson <calvin.johnson@nxp.com>

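The usual shape of that handling (a sketch along the lines of other MAC
drivers, assuming the of_* fixed-link helpers; np is the port's DT node):

    #include <linux/of_mdio.h>

    if (!phy_node && of_phy_is_fixed_link(np)) {
            /* registers a software PHY that always reports link up */
            if (of_phy_register_fixed_link(np) < 0)
                    return -ENODEV;
            phy_node = of_node_get(np);
    }
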
staging: fsl_ppfe: add support for a char dev for link status

Read and IOCTL support is added. An application would need to open and
read/ioctl the /dev/pfe_us_cdev device.
select support is pending, as it requires a wait_queue.

Signed-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>
Signed-off-by: Calvin Johnson <calvin.johnson@nxp.com>

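From userspace, usage would look roughly like this sketch (the device
node comes from the text above; the record layout is an assumption):

    #include <fcntl.h>
    #include <unistd.h>

    int fd = open("/dev/pfe_us_cdev", O_RDONLY);
    char link_state[16];  /* hypothetical link-status record */

    if (fd >= 0)
            read(fd, link_state, sizeof(link_state));
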
staging: fsl_ppfe: enable hif event from userspace

HIF interrupts are enabled using an ioctl from user space,
and an epoll wait from user space wakes up when there is an HIF
interrupt.

Signed-off-by: Akhil Goyal <akhil.goyal@nxp.com>

staging: fsl_ppfe: performance tuning for user space

Interrupt coalescing of 100 usec is added.

Signed-off-by: Akhil Goyal <akhil.goyal@nxp.com>
Signed-off-by: Sachin Saxena <sachin.saxena@nxp.com>

staging: fsl_ppfe/eth: Update to use SPDX identifiers

Replace license text with the corresponding SPDX identifiers and update
the format of existing SPDX identifiers to follow the new guideline
Documentation/process/license-rules.rst.

Signed-off-by: Calvin Johnson <calvin.johnson@nxp.com>

staging: fsl_ppfe/eth: misc clean up

- remove redundant hwfeature init
- remove unused vars from ls1012a_eth_platform_data
- To handle ls1012a errata_a010897, the PPFE driver requires the GUTS
  driver to be compiled in. Select FSL_GUTS when the PPFE driver is
  compiled.

Signed-off-by: Calvin Johnson <calvin.johnson@nxp.com>

staging: fsl_ppfe/eth: reorganize platform phy parameters

- Use "phy-handle" and of_* functions to get the phy node and
  fixed-link parameters

- Reorganize phy parameters and initialize them only if phy-handle
  or fixed-link is defined in the dtb.

- correct the typo pfe_get_gemac_if_proprties to
  pfe_get_gemac_if_properties

Signed-off-by: Calvin Johnson <calvin.johnson@nxp.com>

staging: fsl_ppfe/eth: support single interface initialization

- arrange members of struct mii_bus in a sequence matching phy.h
- if an mdio node is defined, use of_mdiobus_register to register
  the child nodes (phy devices) available on the mdio bus.
- remove of_phy_register_fixed_link from pfe_phy_init, as it is being
  handled in pfe_get_gemac_if_properties
- remove the mdio enabled check
- skip phy init if there is no PHY or fixed-link

Signed-off-by: Calvin Johnson <calvin.johnson@nxp.com>

net: fsl_ppfe: update dts properties for phy

Use the commonly used phy-handle property and mdio subnode to handle
phy properties.

Deprecate the bindings fsl,gemac-phy-id & fsl,pfe-phy-if-flags.

Signed-off-by: Calvin Johnson <calvin.johnson@nxp.com>

staging: fsl_ppfe/eth: remove unused code

- remove gemac-bus-id related code that is unused.
- remove the unused prototype gemac_set_mdc_div.

Signed-off-by: Calvin Johnson <calvin.johnson@nxp.com>

staging: fsl_ppfe/eth: separate mdio init from mac init

- separate mdio initialization from mac initialization
- Define the pfe_mdio_priv_s structure to hold the mii_bus structure
  and other related data.
- Modify functions to work with the separated mdio init model.

Signed-off-by: Calvin Johnson <calvin.johnson@nxp.com>

staging: fsl_ppfe/eth: adapt to link mode based phydev changes

Setting link mode bits has changed with the integration of
commit (3c1bcc8 net: ethernet: Convert phydev advertize and
supported from u32 to link mode). Adapt to the new method of
setting and clearing the link mode bits.

Signed-off-by: Calvin Johnson <calvin.johnson@nxp.com>

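The adaptation looks roughly like this (a sketch of the bitmap API
introduced by that commit; the chosen modes are only examples):

    #include <linux/linkmode.h>

    /* Old u32 style:
     *   phydev->supported &= ~SUPPORTED_10baseT_Half;
     * New bitmap style:
     */
    linkmode_clear_bit(ETHTOOL_LINK_MODE_10baseT_Half_BIT,
                       phydev->supported);
    linkmode_set_bit(ETHTOOL_LINK_MODE_Autoneg_BIT,
                     phydev->supported);
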
staging: fsl_ppfe/eth: use generic soc_device infra instead of fsl_guts_get_svr()

Commit ("soc: fsl: guts: make fsl_guts_get_svr() static") has
made fsl_guts_get_svr() static, hence use the generic soc_device
infrastructure to check the SoC revision.

Signed-off-by: Calvin Johnson <calvin.johnson@nxp.com>

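A sketch of such a check (the family/revision strings and the flag name
are assumptions about the match keys, not taken from the driver):

    #include <linux/sys_soc.h>

    static const struct soc_device_attribute ls1012a_rev1[] = {
            { .family = "QorIQ LS1012A", .revision = "1.0" },
            { /* sentinel */ },
    };

    /* replaces the old fsl_guts_get_svr() revision test */
    if (soc_device_match(ls1012a_rev1))
            pfe_errata_a010897 = true;  /* assumed flag name */
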
staging: fsl_ppfe/eth: use memremap() to map RAM area used by PFE

The RAM area used by PFE should be mapped using memremap() instead of
directly translating the physical address to a virtual one. This
ensures proper checks are done before the area is used.

Signed-off-by: Calvin Johnson <calvin.johnson@nxp.com>

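Roughly (a sketch; the field names on pfe are assumptions):

    #include <linux/io.h>

    /* map the reserved DDR region write-back instead of using
     * phys_to_virt() on an address the kernel never mapped
     */
    pfe->ddr_baseaddr = memremap(pfe->ddr_phys_baseaddr,
                                 pfe->ddr_size, MEMREMAP_WB);
    if (!pfe->ddr_baseaddr)
            return -ENOMEM;
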
staging: fsl_ppfe/eth: remove 'fallback' argument from dev->ndo_select_queue()

To be consistent with the upstream API change.

Signed-off-by: Li Yang <leoyang.li@nxp.com>

staging: fsl_ppfe/eth: prefix header search paths with $(srctree)/

The rules for configuring search paths in Kbuild have
changed: https://lkml.org/lkml/2019/5/13/37

This leads to the below error:

fatal error: pfe/pfe.h: No such file or directory

Fix it by adding the $(srctree)/ prefix to the search paths.

Signed-off-by: Ting Liu <ting.liu@nxp.com>

staging: fsl_ppfe/eth: add pfe support to Kconfig and Makefile

Signed-off-by: Calvin Johnson <calvin.johnson@nxp.com>
[ Aisheng: fix minor conflict due to removed VBOXSF_FS ]
Signed-off-by: Dong Aisheng <aisheng.dong@nxp.com>

staging: fsl_ppfe/eth: Disable termination of CRC fwd.

The LS1012A MAC PCS block has an erratum that is seen with the specific
PHY AR803x. The issue is triggered by the (spec-compliant) operation of
the AR803x PHY on the LS1012A-FRWY board. Due to this, good FCS packets
are reported as error packets by the MAC, so for these error packets the
FCS should be validated and only real error packets discarded in the PFE
Rx packet path.

Signed-off-by: Nagesh Koneti <koneti.nagesh@nxp.com>

net: ppfe: Cope with of_get_phy_mode() API change

Signed-off-by: Li Yang <leoyang.li@nxp.com>

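For reference, the API moved from returning the mode to filling it in
(a sketch of the adaptation; np is the port's DT node):

    phy_interface_t interface;
    int err;

    /* old: interface = of_get_phy_mode(np); (negative errno on error)
     * new: the mode is returned through a pointer argument
     */
    err = of_get_phy_mode(np, &interface);
    if (err)
            interface = PHY_INTERFACE_MODE_NA;
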
staging: fsl_ppfe/eth: Enhance error checking in platform probe

Fix the kernel crash when the MAC address is not passed in the dtb.

Signed-off-by: Anji Jagarlmudi <anji.jagarlmudi@nxp.com>

staging: fsl_ppfe/eth: reject unsupported coalescing params

Set ethtool_ops->supported_coalesce_params to let
the core reject unsupported coalescing parameters.

Signed-off-by: Anji Jagarlmudi <anji.jagarlmudi@nxp.com>

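A sketch of that declaration (the driver coalesces on rx usecs per the
tuning commit above; the exact flag set is an assumption):

    static const struct ethtool_ops pfe_ethtool_ops = {
            /* everything else is rejected with -EOPNOTSUPP */
            .supported_coalesce_params = ETHTOOL_COALESCE_RX_USECS,
            /* ... remaining ops elided ... */
    };
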
staging: fsl_ppfe/eth: check "reg" property before pfe_get_gemac_if_properties()

It has been observed that the function pfe_get_gemac_if_properties() is
called blindly for the next two child nodes. There might be some cases
where this goes wrong and leads to missing interfaces.
With these changes it is ensured that's not the case.

Signed-off-by: Chaitanya Sakinam <chaitanya.sakinam@nxp.com>
Signed-off-by: Anji J <anji.jagarlmudi@nxp.com>

staging: fsl_ppfe/eth: "struct firmware" dereference is reduced in many functions

The firmware structure's data member is the actual ELF data. It was
dereferenced in multiple functions, and this has been reduced.

Signed-off-by: Chaitanya Sakinam <chaitanya.sakinam@nxp.com>
Signed-off-by: Anji J <anji.jagarlmudi@nxp.com>

staging: fsl_ppfe/eth: LF-27 load pfe binaries from FDT

The FDT prepared in U-Boot now has the pfe firmware as part of it.
These changes read the firmware from it by default and try to load
the ELF into the PFE PEs. This helps build the pfe driver as part of
the kernel.

Signed-off-by: Chaitanya Sakinam <chaitanya.sakinam@nxp.com>
Signed-off-by: Anji J <anji.jagarlmudi@nxp.com>

staging: fsl_ppfe/eth: proper handling for RGMII delay mode

The correct setting for the RGMII ports on LS1012ARDB is to
enable delay on both Tx and Rx. So the phy mode to be matched
is PHY_INTERFACE_MODE_RGMII_ID.

Signed-off-by: Chaitanya Sakinam <chaitanya.sakinam@nxp.com>
Signed-off-by: Anji Jagarlmudi <anji.jagarlmudi@nxp.com>

LF-1762-2 staging: fsl_ppfe: replace '---help---' in Kconfig files with 'help'

Update Kconfig to cope with the upstream change
commit 84af7a6194e4 ("checkpatch: kconfig: prefer 'help' over
'---help---'").

Signed-off-by: Dong Aisheng <aisheng.dong@nxp.com>

staging: fsl_ppfe/eth: Nesting level does not match indentation

Corrected the nesting level.
LF-1661 and Coverity CID: 8879316

Signed-off-by: Chaitanya Sakinam <chaitanya.sakinam@nxp.com>

staging: fsl_ppfe/eth: Initialized scalar variable

Proper initialization of a scalar variable.
LF-1657 and Coverity CID: 3335133

Signed-off-by: Chaitanya Sakinam <chaitanya.sakinam@nxp.com>

staging: fsl_ppfe/eth: misspelt variable name

Corrected a variable name.
LF-1656 and Coverity CID: 3335119

Signed-off-by: Chaitanya Sakinam <chaitanya.sakinam@nxp.com>

staging: fsl_ppfe/eth: Avoiding out-of-bound writes

Avoid out-of-bound writes with proper error handling.
LF-1654, LF-1652 and Coverity CID: 3335106, 3335090

Signed-off-by: Chaitanya Sakinam <chaitanya.sakinam@nxp.com>

staging: fsl_ppfe/eth: Initializing scalar variable

Proper initialization of a scalar variable.
LF-1653 and Coverity CID: 3335101

Signed-off-by: Chaitanya Sakinam <chaitanya.sakinam@nxp.com>

staging: fsl_ppfe/eth: checking return value

Proper checks added and handled for the return value.
LF-1644 and Coverity CID: 241888

Signed-off-by: Chaitanya Sakinam <chaitanya.sakinam@nxp.com>

staging: fsl_ppfe/eth: Avoid out-of-bound access

Proper handling to avoid out-of-bound access.
LF-1642, LF-1641 and Coverity CID: 240910, 240891

Signed-off-by: Chaitanya Sakinam <chaitanya.sakinam@nxp.com>

staging: fsl_ppfe/eth: return value init in error case

Proper error return in the error case.
LF-1806 and Coverity CID: 10468592

Signed-off-by: Chaitanya Sakinam <chaitanya.sakinam@nxp.com>

staging: fsl_ppfe/eth: Avoid recursion in header inclusion

Avoid header inclusions that are not necessary and that cause
header inclusion recursion.

LF-2102 and Coverity CID: 240838

Signed-off-by: Chaitanya Sakinam <chaitanya.sakinam@nxp.com>

staging: fsl_ppfe/eth: Avoiding return value overwrite

Avoid overwriting the return value at the end of the function.
LF-2136, LF-2137 and Coverity CID: 8879341, 8879364

Signed-off-by: Chaitanya Sakinam <chaitanya.sakinam@nxp.com>

staging: fsl_ppfe/eth: LF-27 enabling PFE firmware load from FDT

The macro "LOAD_PFEFIRMWARE_FROM_FILESYSTEM" is disabled so that the
firmware is loaded from the FDT by default. Enabling the macro will
load the firmware from the filesystem.

Also, the Makefile is now tuned to build pfe as per the config option.

Signed-off-by: Chaitanya Sakinam <chaitanya.sakinam@nxp.com>

staging: fsl_ppfe/eth: Ethtool stats correction for IEEE_rx_drop counter

Due to a carrier-extension bug, the phy's IEEE_rx_drop counter is
sometimes incremented and the phy reports the packet as having a CRC
error. Because of this, PFE revalidates all the packets that are
marked as CRC errors by the phy. The counter the phy reports is still
bogus, so this patch decrements the counter by the number of packets
that PFE revalidated (and that are CRC-OK).

Signed-off-by: Chaitanya Sakinam <chaitanya.sakinam@nxp.com>

staging: fsl_ppfe/eth: PFE firmware load enhancements

PFE driver enhancements to load the PE firmware from the filesystem
when the firmware is not found in the FDT.

Signed-off-by: Chaitanya Sakinam <chaitanya.sakinam@nxp.com>

staging: fsl_ppfe: deal with upstream API change of of_get_mac_address()

Upstream commit 83216e398 changed the of_get_mac_address() API; update
the user accordingly.

Signed-off-by: Li Yang <leoyang.li@nxp.com>

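The change in shape, as a sketch (ndev/np are illustrative locals):

    u8 addr[ETH_ALEN];

    /* old: const void *mac = of_get_mac_address(np); (ERR_PTR on
     * failure); new: the address is copied into a caller buffer and
     * an int is returned
     */
    if (!of_get_mac_address(np, addr))
            ether_addr_copy(ndev->dev_addr, addr);
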
staging: fsl_ppfe: update coalesce setting uAPI usage

The API changed with:
f3ccfda19319 ("ethtool: extend coalesce setting uAPI with CQE mode")

Signed-off-by: Dong Aisheng <aisheng.dong@nxp.com>

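That commit widens the ethtool coalesce hooks; drivers that do not
support CQE mode can ignore the two new arguments (sketch):

    static int pfe_eth_get_coalesce(struct net_device *ndev,
                                    struct ethtool_coalesce *ec,
                                    struct kernel_ethtool_coalesce *kec,
                                    struct netlink_ext_ack *extack)
    {
            /* fill ec from driver state as before */
            return 0;
    }
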
Signed-off-by: Pawel Dembicki <paweldembicki@gmail.com>
---
 .../devicetree/bindings/net/fsl_ppfe/pfe.txt  |  199 ++
 MAINTAINERS                                   |    8 +
 drivers/staging/Kconfig                       |    2 +
 drivers/staging/Makefile                      |    1 +
 drivers/staging/fsl_ppfe/Kconfig              |   21 +
 drivers/staging/fsl_ppfe/Makefile             |   20 +
 drivers/staging/fsl_ppfe/TODO                 |    2 +
 drivers/staging/fsl_ppfe/include/pfe/cbus.h   |   78 +
 .../staging/fsl_ppfe/include/pfe/cbus/bmu.h   |   55 +
 .../fsl_ppfe/include/pfe/cbus/class_csr.h     |  289 ++
 .../fsl_ppfe/include/pfe/cbus/emac_mtip.h     |  242 ++
 .../staging/fsl_ppfe/include/pfe/cbus/gpi.h   |   86 +
 .../staging/fsl_ppfe/include/pfe/cbus/hif.h   |  100 +
 .../fsl_ppfe/include/pfe/cbus/hif_nocpy.h     |   50 +
 .../fsl_ppfe/include/pfe/cbus/tmu_csr.h       |  168 ++
 .../fsl_ppfe/include/pfe/cbus/util_csr.h      |   61 +
 drivers/staging/fsl_ppfe/include/pfe/pfe.h    |  372 +++
 drivers/staging/fsl_ppfe/pfe_cdev.c           |  258 ++
 drivers/staging/fsl_ppfe/pfe_cdev.h           |   41 +
 drivers/staging/fsl_ppfe/pfe_ctrl.c           |  226 ++
 drivers/staging/fsl_ppfe/pfe_ctrl.h           |  100 +
 drivers/staging/fsl_ppfe/pfe_debugfs.c        |   99 +
 drivers/staging/fsl_ppfe/pfe_debugfs.h        |   13 +
 drivers/staging/fsl_ppfe/pfe_eth.c            | 2591 +++++++++++++++++
 drivers/staging/fsl_ppfe/pfe_eth.h            |  175 ++
 drivers/staging/fsl_ppfe/pfe_firmware.c       |  398 +++
 drivers/staging/fsl_ppfe/pfe_firmware.h       |   21 +
 drivers/staging/fsl_ppfe/pfe_hal.c            | 1517 ++++++++++
 drivers/staging/fsl_ppfe/pfe_hif.c            | 1064 +++++++
 drivers/staging/fsl_ppfe/pfe_hif.h            |  199 ++
 drivers/staging/fsl_ppfe/pfe_hif_lib.c        |  628 ++++
 drivers/staging/fsl_ppfe/pfe_hif_lib.h        |  229 ++
 drivers/staging/fsl_ppfe/pfe_hw.c             |  164 ++
 drivers/staging/fsl_ppfe/pfe_hw.h             |   15 +
 .../staging/fsl_ppfe/pfe_ls1012a_platform.c   |  383 +++
 drivers/staging/fsl_ppfe/pfe_mod.c            |  158 +
 drivers/staging/fsl_ppfe/pfe_mod.h            |  103 +
 drivers/staging/fsl_ppfe/pfe_perfmon.h        |   26 +
 drivers/staging/fsl_ppfe/pfe_sysfs.c          |  840 ++++++
 drivers/staging/fsl_ppfe/pfe_sysfs.h          |   17 +
 40 files changed, 11019 insertions(+)
 create mode 100644 Documentation/devicetree/bindings/net/fsl_ppfe/pfe.txt
 create mode 100644 drivers/staging/fsl_ppfe/Kconfig
 create mode 100644 drivers/staging/fsl_ppfe/Makefile
 create mode 100644 drivers/staging/fsl_ppfe/TODO
 create mode 100644 drivers/staging/fsl_ppfe/include/pfe/cbus.h
 create mode 100644 drivers/staging/fsl_ppfe/include/pfe/cbus/bmu.h
 create mode 100644 drivers/staging/fsl_ppfe/include/pfe/cbus/class_csr.h
 create mode 100644 drivers/staging/fsl_ppfe/include/pfe/cbus/emac_mtip.h
 create mode 100644 drivers/staging/fsl_ppfe/include/pfe/cbus/gpi.h
 create mode 100644 drivers/staging/fsl_ppfe/include/pfe/cbus/hif.h
 create mode 100644 drivers/staging/fsl_ppfe/include/pfe/cbus/hif_nocpy.h
 create mode 100644 drivers/staging/fsl_ppfe/include/pfe/cbus/tmu_csr.h
 create mode 100644 drivers/staging/fsl_ppfe/include/pfe/cbus/util_csr.h
 create mode 100644 drivers/staging/fsl_ppfe/include/pfe/pfe.h
 create mode 100644 drivers/staging/fsl_ppfe/pfe_cdev.c
 create mode 100644 drivers/staging/fsl_ppfe/pfe_cdev.h
 create mode 100644 drivers/staging/fsl_ppfe/pfe_ctrl.c
 create mode 100644 drivers/staging/fsl_ppfe/pfe_ctrl.h
 create mode 100644 drivers/staging/fsl_ppfe/pfe_debugfs.c
 create mode 100644 drivers/staging/fsl_ppfe/pfe_debugfs.h
 create mode 100644 drivers/staging/fsl_ppfe/pfe_eth.c
 create mode 100644 drivers/staging/fsl_ppfe/pfe_eth.h
 create mode 100644 drivers/staging/fsl_ppfe/pfe_firmware.c
 create mode 100644 drivers/staging/fsl_ppfe/pfe_firmware.h
 create mode 100644 drivers/staging/fsl_ppfe/pfe_hal.c
 create mode 100644 drivers/staging/fsl_ppfe/pfe_hif.c
 create mode 100644 drivers/staging/fsl_ppfe/pfe_hif.h
 create mode 100644 drivers/staging/fsl_ppfe/pfe_hif_lib.c
 create mode 100644 drivers/staging/fsl_ppfe/pfe_hif_lib.h
 create mode 100644 drivers/staging/fsl_ppfe/pfe_hw.c
 create mode 100644 drivers/staging/fsl_ppfe/pfe_hw.h
 create mode 100644 drivers/staging/fsl_ppfe/pfe_ls1012a_platform.c
 create mode 100644 drivers/staging/fsl_ppfe/pfe_mod.c
 create mode 100644 drivers/staging/fsl_ppfe/pfe_mod.h
 create mode 100644 drivers/staging/fsl_ppfe/pfe_perfmon.h
 create mode 100644 drivers/staging/fsl_ppfe/pfe_sysfs.c
 create mode 100644 drivers/staging/fsl_ppfe/pfe_sysfs.h

--- /dev/null
+++ b/Documentation/devicetree/bindings/net/fsl_ppfe/pfe.txt
@@ -0,0 +1,199 @@
+=============================================================================
+NXP Programmable Packet Forwarding Engine Device Bindings
+
+CONTENTS
+ - PFE Node
+ - Ethernet Node
+
+=============================================================================
+PFE Node
+
+DESCRIPTION
+
+PFE Node has all the properties associated with Packet Forwarding Engine block.
+
+PROPERTIES
+
+- compatible
+ Usage: required
+ Value type: <stringlist>
+ Definition: Must include "fsl,pfe"
+
+- reg
+ Usage: required
+ Value type: <prop-encoded-array>
+ Definition: A standard property.
+ Specifies the offset of the following registers:
+ - PFE configuration registers
+ - DDR memory used by PFE
+
+- fsl,pfe-num-interfaces
+ Usage: required
+ Value type: <u32>
+ Definition: Must be present. Value can be either one or two.
+
+- interrupts
+ Usage: required
+ Value type: <prop-encoded-array>
+ Definition: Three interrupts are specified in this property.
+ - HIF interrupt
+ - HIF NO COPY interrupt
+ - Wake On LAN interrupt
+
+- interrupt-names
+ Usage: required
+ Value type: <stringlist>
+ Definition: Following strings are defined for the 3 interrupts.
+ "pfe_hif" - HIF interrupt
+ "pfe_hif_nocpy" - HIF NO COPY interrupt
+ "pfe_wol" - Wake On LAN interrupt
+
+- memory-region
+ Usage: required
+ Value type: <phandle>
+ Definition: phandle to a node describing reserved memory used by pfe.
+ Refer:- Documentation/devicetree/bindings/reserved-memory/reserved-memory.txt
+
+- fsl,pfe-scfg
+ Usage: required
+ Value type: <phandle>
+ Definition: phandle for scfg.
+
+- fsl,rcpm-wakeup
+ Usage: required
+ Value type: <phandle>
+ Definition: phandle for rcpm.
+
+- clocks
+ Usage: required
+ Value type: <phandle>
+ Definition: phandle for clockgen.
+
+- clock-names
+ Usage: required
+ Value type: <string>
+ Definition: phandle for clock name.
+
+EXAMPLE
+
+pfe: pfe@04000000 {
+ compatible = "fsl,pfe";
+ reg = <0x0 0x04000000 0x0 0xc00000>, /* AXI 16M */
+ <0x0 0x83400000 0x0 0xc00000>; /* PFE DDR 12M */
+ reg-names = "pfe", "pfe-ddr";
+ fsl,pfe-num-interfaces = <0x2>;
+ interrupts = <0 172 0x4>, /* HIF interrupt */
+ <0 173 0x4>, /*HIF_NOCPY interrupt */
+ <0 174 0x4>; /* WoL interrupt */
+ interrupt-names = "pfe_hif", "pfe_hif_nocpy", "pfe_wol";
+ memory-region = <&pfe_reserved>;
+ fsl,pfe-scfg = <&scfg 0>;
+ fsl,rcpm-wakeup = <&rcpm 0xf0000020>;
+ clocks = <&clockgen 4 0>;
+ clock-names = "pfe";
+
+ status = "okay";
+ pfe_mac0: ethernet@0 {
+ };
+
+ pfe_mac1: ethernet@1 {
+ };
+};
+
+=============================================================================
+Ethernet Node
+
+DESCRIPTION
+
+Ethernet Node has all the properties associated with PFE used by platforms to
+connect to PHY:
+
+PROPERTIES
+
+- compatible
+ Usage: required
+ Value type: <stringlist>
+ Definition: Must include "fsl,pfe-gemac-port"
+
+- reg
+ Usage: required
+ Value type: <prop-encoded-array>
+ Definition: A standard property.
+ Specifies the gemacid of the interface.
+
+- fsl,gemac-bus-id
+ Usage: required
+ Value type: <u32>
+ Definition: Must be present. Value should be the id of the bus
+ connected to gemac.
+
+- fsl,gemac-phy-id (deprecated binding)
+ Usage: required
+ Value type: <u32>
+ Definition: This binding shouldn't be used with new platforms.
+ Must be present. Value should be the id of the phy
+ connected to gemac.
+
+- fsl,mdio-mux-val
+ Usage: required
+ Value type: <u32>
+ Definition: Must be present. Value can be either 0 or 2 or 3.
+ This value is used to configure the mux to enable mdio.
+
+- phy-mode
+ Usage: required
+ Value type: <string>
+ Definition: Must include "sgmii"
+
+- fsl,pfe-phy-if-flags (deprecated binding)
+ Usage: required
+ Value type: <u32>
+ Definition: This binding shouldn't be used with new platforms.
+ Must be present. Value should be 0 by default.
809 +
+
+- phy-handle
+ Usage: optional
+ Value type: <phandle>
+ Definition: phandle to the PHY device connected to this device.
+
+- mdio : A required subnode which specifies the mdio bus in the PFE and used as
+a container for phy nodes according to ../phy.txt.
+
+EXAMPLE
+
+ethernet@0 {
+ compatible = "fsl,pfe-gemac-port";
+ #address-cells = <1>;
+ #size-cells = <0>;
+ reg = <0x0>; /* GEM_ID */
+ fsl,gemac-bus-id = <0x0>; /* BUS_ID */
+ fsl,mdio-mux-val = <0x0>;
+ phy-mode = "sgmii";
+ phy-handle = <&sgmii_phy1>;
+};
+
+
+ethernet@1 {
+ compatible = "fsl,pfe-gemac-port";
+ #address-cells = <1>;
+ #size-cells = <0>;
+ reg = <0x1>; /* GEM_ID */
+ fsl,gemac-bus-id = <0x1>; /* BUS_ID */
+ fsl,mdio-mux-val = <0x0>;
+ phy-mode = "sgmii";
+ phy-handle = <&sgmii_phy2>;
+};
+
+mdio@0 {
+ #address-cells = <1>;
+ #size-cells = <0>;
+
+ sgmii_phy1: ethernet-phy@2 {
+ reg = <0x2>;
+ };
+
+ sgmii_phy2: ethernet-phy@1 {
+ reg = <0x1>;
+ };
+};
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -7526,6 +7526,14 @@ F: drivers/ptp/ptp_qoriq.c
 F: drivers/ptp/ptp_qoriq_debugfs.c
 F: include/linux/fsl/ptp_qoriq.h
 
+FREESCALE QORIQ PPFE ETHERNET DRIVER
+M: Anji Jagarlmudi <anji.jagarlmudi@nxp.com>
+M: Calvin Johnson <calvin.johnson@nxp.com>
+L: netdev@vger.kernel.org
+S: Maintained
+F: drivers/staging/fsl_ppfe
+F: Documentation/devicetree/bindings/net/fsl_ppfe/pfe.txt
+
 FREESCALE QUAD SPI DRIVER
 M: Han Xu <han.xu@nxp.com>
 L: linux-spi@vger.kernel.org
--- a/drivers/staging/Kconfig
+++ b/drivers/staging/Kconfig
@@ -102,4 +102,6 @@ source "drivers/staging/qlge/Kconfig"
 
 source "drivers/staging/wfx/Kconfig"
 
+source "drivers/staging/fsl_ppfe/Kconfig"
+
 endif # STAGING
--- a/drivers/staging/Makefile
+++ b/drivers/staging/Makefile
@@ -41,3 +41,4 @@ obj-$(CONFIG_XIL_AXIS_FIFO) += axis-fifo
 obj-$(CONFIG_FIELDBUS_DEV) += fieldbus/
 obj-$(CONFIG_QLGE) += qlge/
 obj-$(CONFIG_WFX) += wfx/
+obj-$(CONFIG_FSL_PPFE) += fsl_ppfe/
--- /dev/null
+++ b/drivers/staging/fsl_ppfe/Kconfig
@@ -0,0 +1,21 @@
+#
+# Freescale Programmable Packet Forwarding Engine driver
+#
+config FSL_PPFE
+ tristate "Freescale PPFE Driver"
+ select FSL_GUTS
+ default n
+ help
+ Freescale LS1012A SoC has a Programmable Packet Forwarding Engine.
+ It provides two high performance ethernet interfaces.
+ This driver initializes, programs and controls the PPFE.
+ Use this driver to enable network connectivity on LS1012A platforms.
+
+if FSL_PPFE
+
+config FSL_PPFE_UTIL_DISABLED
+ bool "Disable PPFE UTIL Processor Engine"
+ help
+ UTIL PE has to be enabled only if required.
+
+endif # FSL_PPFE
--- /dev/null
+++ b/drivers/staging/fsl_ppfe/Makefile
@@ -0,0 +1,20 @@
+#
+# Makefile for Freescale PPFE driver
+#
+
+ccflags-y += -I $(srctree)/$(src)/include -I $(srctree)/$(src)
+
+obj-$(CONFIG_FSL_PPFE) += pfe.o
+
+pfe-y += pfe_mod.o \
+ pfe_hw.o \
+ pfe_firmware.o \
+ pfe_ctrl.o \
+ pfe_hif.o \
+ pfe_hif_lib.o\
+ pfe_eth.o \
+ pfe_sysfs.o \
+ pfe_debugfs.o \
+ pfe_ls1012a_platform.o \
+ pfe_hal.o \
+ pfe_cdev.o
--- /dev/null
+++ b/drivers/staging/fsl_ppfe/TODO
@@ -0,0 +1,2 @@
+TODO:
+ - provide pfe pe monitoring support
--- /dev/null
+++ b/drivers/staging/fsl_ppfe/include/pfe/cbus.h
@@ -0,0 +1,78 @@
+/*
+ * Copyright 2015-2016 Freescale Semiconductor, Inc.
+ * Copyright 2017 NXP
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program. If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#ifndef _CBUS_H_
+#define _CBUS_H_
+
+#define EMAC1_BASE_ADDR (CBUS_BASE_ADDR + 0x200000)
+#define EGPI1_BASE_ADDR (CBUS_BASE_ADDR + 0x210000)
+#define EMAC2_BASE_ADDR (CBUS_BASE_ADDR + 0x220000)
+#define EGPI2_BASE_ADDR (CBUS_BASE_ADDR + 0x230000)
+#define BMU1_BASE_ADDR (CBUS_BASE_ADDR + 0x240000)
+#define BMU2_BASE_ADDR (CBUS_BASE_ADDR + 0x250000)
+#define ARB_BASE_ADDR (CBUS_BASE_ADDR + 0x260000)
+#define DDR_CONFIG_BASE_ADDR (CBUS_BASE_ADDR + 0x270000)
+#define HIF_BASE_ADDR (CBUS_BASE_ADDR + 0x280000)
+#define HGPI_BASE_ADDR (CBUS_BASE_ADDR + 0x290000)
+#define LMEM_BASE_ADDR (CBUS_BASE_ADDR + 0x300000)
+#define LMEM_SIZE 0x10000
+#define LMEM_END (LMEM_BASE_ADDR + LMEM_SIZE)
+#define TMU_CSR_BASE_ADDR (CBUS_BASE_ADDR + 0x310000)
+#define CLASS_CSR_BASE_ADDR (CBUS_BASE_ADDR + 0x320000)
+#define HIF_NOCPY_BASE_ADDR (CBUS_BASE_ADDR + 0x350000)
+#define UTIL_CSR_BASE_ADDR (CBUS_BASE_ADDR + 0x360000)
+#define CBUS_GPT_BASE_ADDR (CBUS_BASE_ADDR + 0x370000)
+
+/*
+ * defgroup XXX_MEM_ACCESS_ADDR PE memory access through CSR
+ * XXX_MEM_ACCESS_ADDR register bit definitions.
+ */
+#define PE_MEM_ACCESS_WRITE BIT(31) /* Internal Memory Write. */
+#define PE_MEM_ACCESS_IMEM BIT(15)
+#define PE_MEM_ACCESS_DMEM BIT(16)
+
+/* Byte Enables of the Internal memory access. These are interpreted in BE */
+#define PE_MEM_ACCESS_BYTE_ENABLE(offset, size) \
+ ({ typeof(size) size_ = (size); \
+ (((BIT(size_) - 1) << (4 - (offset) - (size_))) & 0xf) << 24; })
+
+#include "cbus/emac_mtip.h"
+#include "cbus/gpi.h"
+#include "cbus/bmu.h"
+#include "cbus/hif.h"
+#include "cbus/tmu_csr.h"
+#include "cbus/class_csr.h"
+#include "cbus/hif_nocpy.h"
+#include "cbus/util_csr.h"
+
+/* PFE cores states */
+#define CORE_DISABLE 0x00000000
+#define CORE_ENABLE 0x00000001
+#define CORE_SW_RESET 0x00000002
+
+/* LMEM defines */
+#define LMEM_HDR_SIZE 0x0010
+#define LMEM_BUF_SIZE_LN2 0x7
+#define LMEM_BUF_SIZE BIT(LMEM_BUF_SIZE_LN2)
+
+/* DDR defines */
+#define DDR_HDR_SIZE 0x0100
+#define DDR_BUF_SIZE_LN2 0xb
+#define DDR_BUF_SIZE BIT(DDR_BUF_SIZE_LN2)
+
+#endif /* _CBUS_H_ */
--- /dev/null
+++ b/drivers/staging/fsl_ppfe/include/pfe/cbus/bmu.h
@@ -0,0 +1,55 @@
+/*
+ * Copyright 2015-2016 Freescale Semiconductor, Inc.
+ * Copyright 2017 NXP
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program. If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#ifndef _BMU_H_
+#define _BMU_H_
+
+#define BMU_VERSION 0x000
+#define BMU_CTRL 0x004
+#define BMU_UCAST_CONFIG 0x008
+#define BMU_UCAST_BASE_ADDR 0x00c
+#define BMU_BUF_SIZE 0x010
+#define BMU_BUF_CNT 0x014
+#define BMU_THRES 0x018
+#define BMU_INT_SRC 0x020
+#define BMU_INT_ENABLE 0x024
+#define BMU_ALLOC_CTRL 0x030
+#define BMU_FREE_CTRL 0x034
+#define BMU_FREE_ERR_ADDR 0x038
+#define BMU_CURR_BUF_CNT 0x03c
+#define BMU_MCAST_CNT 0x040
+#define BMU_MCAST_ALLOC_CTRL 0x044
+#define BMU_REM_BUF_CNT 0x048
+#define BMU_LOW_WATERMARK 0x050
+#define BMU_HIGH_WATERMARK 0x054
+#define BMU_INT_MEM_ACCESS 0x100
+
+struct BMU_CFG {
+ unsigned long baseaddr;
+ u32 count;
+ u32 size;
+ u32 low_watermark;
+ u32 high_watermark;
+};
+
+#define BMU1_BUF_SIZE LMEM_BUF_SIZE_LN2
+#define BMU2_BUF_SIZE DDR_BUF_SIZE_LN2
+
+#define BMU2_MCAST_ALLOC_CTRL (BMU2_BASE_ADDR + BMU_MCAST_ALLOC_CTRL)
+
+#endif /* _BMU_H_ */
--- /dev/null
+++ b/drivers/staging/fsl_ppfe/include/pfe/cbus/class_csr.h
@@ -0,0 +1,289 @@
+/*
+ * Copyright 2015-2016 Freescale Semiconductor, Inc.
+ * Copyright 2017 NXP
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program. If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#ifndef _CLASS_CSR_H_
+#define _CLASS_CSR_H_
+
+/* @file class_csr.h.
+ * class_csr - block containing all the classifier control and status register.
+ * Mapped on CBUS and accessible from all PE's and ARM.
+ */
+#define CLASS_VERSION (CLASS_CSR_BASE_ADDR + 0x000)
+#define CLASS_TX_CTRL (CLASS_CSR_BASE_ADDR + 0x004)
+#define CLASS_INQ_PKTPTR (CLASS_CSR_BASE_ADDR + 0x010)
+
+/* (ddr_hdr_size[24:16], lmem_hdr_size[5:0]) */
+#define CLASS_HDR_SIZE (CLASS_CSR_BASE_ADDR + 0x014)
+
+/* LMEM header size for the Classifier block.\ Data in the LMEM
+ * is written from this offset.
+ */
+#define CLASS_HDR_SIZE_LMEM(off) ((off) & 0x3f)
+
+/* DDR header size for the Classifier block.\ Data in the DDR
+ * is written from this offset.
+ */
+#define CLASS_HDR_SIZE_DDR(off) (((off) & 0x1ff) << 16)
+
+#define CLASS_PE0_QB_DM_ADDR0 (CLASS_CSR_BASE_ADDR + 0x020)
+
+/* DMEM address of first [15:0] and second [31:16] buffers on QB side. */
+#define CLASS_PE0_QB_DM_ADDR1 (CLASS_CSR_BASE_ADDR + 0x024)
+
+/* DMEM address of third [15:0] and fourth [31:16] buffers on QB side. */
+#define CLASS_PE0_RO_DM_ADDR0 (CLASS_CSR_BASE_ADDR + 0x060)
+
+/* DMEM address of first [15:0] and second [31:16] buffers on RO side. */
+#define CLASS_PE0_RO_DM_ADDR1 (CLASS_CSR_BASE_ADDR + 0x064)
+
+/* DMEM address of third [15:0] and fourth [31:16] buffers on RO side. */
+
+/* @name Class PE memory access. Allows external PE's and HOST to
+ * read/write PMEM/DMEM memory ranges for each classifier PE.
+ */
+/* {sr_pe_mem_cmd[31], csr_pe_mem_wren[27:24], csr_pe_mem_addr[23:0]},
+ * See \ref XXX_MEM_ACCESS_ADDR for details.
+ */
+#define CLASS_MEM_ACCESS_ADDR (CLASS_CSR_BASE_ADDR + 0x100)
+
+/* Internal Memory Access Write Data [31:0] */
+#define CLASS_MEM_ACCESS_WDATA (CLASS_CSR_BASE_ADDR + 0x104)
+
+/* Internal Memory Access Read Data [31:0] */
+#define CLASS_MEM_ACCESS_RDATA (CLASS_CSR_BASE_ADDR + 0x108)
+#define CLASS_TM_INQ_ADDR (CLASS_CSR_BASE_ADDR + 0x114)
+#define CLASS_PE_STATUS (CLASS_CSR_BASE_ADDR + 0x118)
+
+#define CLASS_PHY1_RX_PKTS (CLASS_CSR_BASE_ADDR + 0x11c)
+#define CLASS_PHY1_TX_PKTS (CLASS_CSR_BASE_ADDR + 0x120)
+#define CLASS_PHY1_LP_FAIL_PKTS (CLASS_CSR_BASE_ADDR + 0x124)
+#define CLASS_PHY1_INTF_FAIL_PKTS (CLASS_CSR_BASE_ADDR + 0x128)
+#define CLASS_PHY1_INTF_MATCH_PKTS (CLASS_CSR_BASE_ADDR + 0x12c)
+#define CLASS_PHY1_L3_FAIL_PKTS (CLASS_CSR_BASE_ADDR + 0x130)
+#define CLASS_PHY1_V4_PKTS (CLASS_CSR_BASE_ADDR + 0x134)
+#define CLASS_PHY1_V6_PKTS (CLASS_CSR_BASE_ADDR + 0x138)
+#define CLASS_PHY1_CHKSUM_ERR_PKTS (CLASS_CSR_BASE_ADDR + 0x13c)
+#define CLASS_PHY1_TTL_ERR_PKTS (CLASS_CSR_BASE_ADDR + 0x140)
+#define CLASS_PHY2_RX_PKTS (CLASS_CSR_BASE_ADDR + 0x144)
+#define CLASS_PHY2_TX_PKTS (CLASS_CSR_BASE_ADDR + 0x148)
+#define CLASS_PHY2_LP_FAIL_PKTS (CLASS_CSR_BASE_ADDR + 0x14c)
+#define CLASS_PHY2_INTF_FAIL_PKTS (CLASS_CSR_BASE_ADDR + 0x150)
+#define CLASS_PHY2_INTF_MATCH_PKTS (CLASS_CSR_BASE_ADDR + 0x154)
+#define CLASS_PHY2_L3_FAIL_PKTS (CLASS_CSR_BASE_ADDR + 0x158)
+#define CLASS_PHY2_V4_PKTS (CLASS_CSR_BASE_ADDR + 0x15c)
+#define CLASS_PHY2_V6_PKTS (CLASS_CSR_BASE_ADDR + 0x160)
+#define CLASS_PHY2_CHKSUM_ERR_PKTS (CLASS_CSR_BASE_ADDR + 0x164)
+#define CLASS_PHY2_TTL_ERR_PKTS (CLASS_CSR_BASE_ADDR + 0x168)
+#define CLASS_PHY3_RX_PKTS (CLASS_CSR_BASE_ADDR + 0x16c)
+#define CLASS_PHY3_TX_PKTS (CLASS_CSR_BASE_ADDR + 0x170)
+#define CLASS_PHY3_LP_FAIL_PKTS (CLASS_CSR_BASE_ADDR + 0x174)
+#define CLASS_PHY3_INTF_FAIL_PKTS (CLASS_CSR_BASE_ADDR + 0x178)
+#define CLASS_PHY3_INTF_MATCH_PKTS (CLASS_CSR_BASE_ADDR + 0x17c)
+#define CLASS_PHY3_L3_FAIL_PKTS (CLASS_CSR_BASE_ADDR + 0x180)
+#define CLASS_PHY3_V4_PKTS (CLASS_CSR_BASE_ADDR + 0x184)
+#define CLASS_PHY3_V6_PKTS (CLASS_CSR_BASE_ADDR + 0x188)
+#define CLASS_PHY3_CHKSUM_ERR_PKTS (CLASS_CSR_BASE_ADDR + 0x18c)
+#define CLASS_PHY3_TTL_ERR_PKTS (CLASS_CSR_BASE_ADDR + 0x190)
+#define CLASS_PHY1_ICMP_PKTS (CLASS_CSR_BASE_ADDR + 0x194)
+#define CLASS_PHY1_IGMP_PKTS (CLASS_CSR_BASE_ADDR + 0x198)
+#define CLASS_PHY1_TCP_PKTS (CLASS_CSR_BASE_ADDR + 0x19c)
+#define CLASS_PHY1_UDP_PKTS (CLASS_CSR_BASE_ADDR + 0x1a0)
+#define CLASS_PHY2_ICMP_PKTS (CLASS_CSR_BASE_ADDR + 0x1a4)
+#define CLASS_PHY2_IGMP_PKTS (CLASS_CSR_BASE_ADDR + 0x1a8)
+#define CLASS_PHY2_TCP_PKTS (CLASS_CSR_BASE_ADDR + 0x1ac)
+#define CLASS_PHY2_UDP_PKTS (CLASS_CSR_BASE_ADDR + 0x1b0)
+#define CLASS_PHY3_ICMP_PKTS (CLASS_CSR_BASE_ADDR + 0x1b4)
+#define CLASS_PHY3_IGMP_PKTS (CLASS_CSR_BASE_ADDR + 0x1b8)
+#define CLASS_PHY3_TCP_PKTS (CLASS_CSR_BASE_ADDR + 0x1bc)
+#define CLASS_PHY3_UDP_PKTS (CLASS_CSR_BASE_ADDR + 0x1c0)
+#define CLASS_PHY4_ICMP_PKTS (CLASS_CSR_BASE_ADDR + 0x1c4)
+#define CLASS_PHY4_IGMP_PKTS (CLASS_CSR_BASE_ADDR + 0x1c8)
+#define CLASS_PHY4_TCP_PKTS (CLASS_CSR_BASE_ADDR + 0x1cc)
+#define CLASS_PHY4_UDP_PKTS (CLASS_CSR_BASE_ADDR + 0x1d0)
+#define CLASS_PHY4_RX_PKTS (CLASS_CSR_BASE_ADDR + 0x1d4)
+#define CLASS_PHY4_TX_PKTS (CLASS_CSR_BASE_ADDR + 0x1d8)
+#define CLASS_PHY4_LP_FAIL_PKTS (CLASS_CSR_BASE_ADDR + 0x1dc)
+#define CLASS_PHY4_INTF_FAIL_PKTS (CLASS_CSR_BASE_ADDR + 0x1e0)
+#define CLASS_PHY4_INTF_MATCH_PKTS (CLASS_CSR_BASE_ADDR + 0x1e4)
+#define CLASS_PHY4_L3_FAIL_PKTS (CLASS_CSR_BASE_ADDR + 0x1e8)
+#define CLASS_PHY4_V4_PKTS (CLASS_CSR_BASE_ADDR + 0x1ec)
+#define CLASS_PHY4_V6_PKTS (CLASS_CSR_BASE_ADDR + 0x1f0)
+#define CLASS_PHY4_CHKSUM_ERR_PKTS (CLASS_CSR_BASE_ADDR + 0x1f4)
+#define CLASS_PHY4_TTL_ERR_PKTS (CLASS_CSR_BASE_ADDR + 0x1f8)
+
+#define CLASS_PE_SYS_CLK_RATIO (CLASS_CSR_BASE_ADDR + 0x200)
+#define CLASS_AFULL_THRES (CLASS_CSR_BASE_ADDR + 0x204)
+#define CLASS_GAP_BETWEEN_READS (CLASS_CSR_BASE_ADDR + 0x208)
+#define CLASS_MAX_BUF_CNT (CLASS_CSR_BASE_ADDR + 0x20c)
+#define CLASS_TSQ_FIFO_THRES (CLASS_CSR_BASE_ADDR + 0x210)
+#define CLASS_TSQ_MAX_CNT (CLASS_CSR_BASE_ADDR + 0x214)
+#define CLASS_IRAM_DATA_0 (CLASS_CSR_BASE_ADDR + 0x218)
+#define CLASS_IRAM_DATA_1 (CLASS_CSR_BASE_ADDR + 0x21c)
+#define CLASS_IRAM_DATA_2 (CLASS_CSR_BASE_ADDR + 0x220)
+#define CLASS_IRAM_DATA_3 (CLASS_CSR_BASE_ADDR + 0x224)
+
+#define CLASS_BUS_ACCESS_ADDR (CLASS_CSR_BASE_ADDR + 0x228)
+
+#define CLASS_BUS_ACCESS_WDATA (CLASS_CSR_BASE_ADDR + 0x22c)
+#define CLASS_BUS_ACCESS_RDATA (CLASS_CSR_BASE_ADDR + 0x230)
+
+/* (route_entry_size[9:0], route_hash_size[23:16]
+ * (this is actually ln2(size)))
+ */
+#define CLASS_ROUTE_HASH_ENTRY_SIZE (CLASS_CSR_BASE_ADDR + 0x234)
+
+#define CLASS_ROUTE_ENTRY_SIZE(size) ((size) & 0x1ff)
+#define CLASS_ROUTE_HASH_SIZE(hash_bits) (((hash_bits) & 0xff) << 16)
+
+#define CLASS_ROUTE_TABLE_BASE (CLASS_CSR_BASE_ADDR + 0x238)
+
+#define CLASS_ROUTE_MULTI (CLASS_CSR_BASE_ADDR + 0x23c)
+#define CLASS_SMEM_OFFSET (CLASS_CSR_BASE_ADDR + 0x240)
+#define CLASS_LMEM_BUF_SIZE (CLASS_CSR_BASE_ADDR + 0x244)
+#define CLASS_VLAN_ID (CLASS_CSR_BASE_ADDR + 0x248)
+#define CLASS_BMU1_BUF_FREE (CLASS_CSR_BASE_ADDR + 0x24c)
+#define CLASS_USE_TMU_INQ (CLASS_CSR_BASE_ADDR + 0x250)
+#define CLASS_VLAN_ID1 (CLASS_CSR_BASE_ADDR + 0x254)
+
+#define CLASS_BUS_ACCESS_BASE (CLASS_CSR_BASE_ADDR + 0x258)
+#define CLASS_BUS_ACCESS_BASE_MASK (0xFF000000)
+/* bit 31:24 of PE peripheral address are stored in CLASS_BUS_ACCESS_BASE */
+
+#define CLASS_HIF_PARSE (CLASS_CSR_BASE_ADDR + 0x25c)
+
+#define CLASS_HOST_PE0_GP (CLASS_CSR_BASE_ADDR + 0x260)
+#define CLASS_PE0_GP (CLASS_CSR_BASE_ADDR + 0x264)
+#define CLASS_HOST_PE1_GP (CLASS_CSR_BASE_ADDR + 0x268)
+#define CLASS_PE1_GP (CLASS_CSR_BASE_ADDR + 0x26c)
+#define CLASS_HOST_PE2_GP (CLASS_CSR_BASE_ADDR + 0x270)
+#define CLASS_PE2_GP (CLASS_CSR_BASE_ADDR + 0x274)
+#define CLASS_HOST_PE3_GP (CLASS_CSR_BASE_ADDR + 0x278)
+#define CLASS_PE3_GP (CLASS_CSR_BASE_ADDR + 0x27c)
+#define CLASS_HOST_PE4_GP (CLASS_CSR_BASE_ADDR + 0x280)
+#define CLASS_PE4_GP (CLASS_CSR_BASE_ADDR + 0x284)
+#define CLASS_HOST_PE5_GP (CLASS_CSR_BASE_ADDR + 0x288)
+#define CLASS_PE5_GP (CLASS_CSR_BASE_ADDR + 0x28c)
+
+#define CLASS_PE_INT_SRC (CLASS_CSR_BASE_ADDR + 0x290)
+#define CLASS_PE_INT_ENABLE (CLASS_CSR_BASE_ADDR + 0x294)
+
+#define CLASS_TPID0_TPID1 (CLASS_CSR_BASE_ADDR + 0x298)
+#define CLASS_TPID2 (CLASS_CSR_BASE_ADDR + 0x29c)
+
+#define CLASS_L4_CHKSUM_ADDR (CLASS_CSR_BASE_ADDR + 0x2a0)
+
+#define CLASS_PE0_DEBUG (CLASS_CSR_BASE_ADDR + 0x2a4)
+#define CLASS_PE1_DEBUG (CLASS_CSR_BASE_ADDR + 0x2a8)
+#define CLASS_PE2_DEBUG (CLASS_CSR_BASE_ADDR + 0x2ac)
+#define CLASS_PE3_DEBUG (CLASS_CSR_BASE_ADDR + 0x2b0)
+#define CLASS_PE4_DEBUG (CLASS_CSR_BASE_ADDR + 0x2b4)
+#define CLASS_PE5_DEBUG (CLASS_CSR_BASE_ADDR + 0x2b8)
+
+#define CLASS_STATE (CLASS_CSR_BASE_ADDR + 0x2bc)
+
+/* CLASS defines */
+#define CLASS_PBUF_SIZE 0x100 /* Fixed by hardware */
+#define CLASS_PBUF_HEADER_OFFSET 0x80 /* Can be configured */
+
+/* Can be configured */
+#define CLASS_PBUF0_BASE_ADDR 0x000
+/* Can be configured */
+#define CLASS_PBUF1_BASE_ADDR (CLASS_PBUF0_BASE_ADDR + CLASS_PBUF_SIZE)
+/* Can be configured */
+#define CLASS_PBUF2_BASE_ADDR (CLASS_PBUF1_BASE_ADDR + CLASS_PBUF_SIZE)
+/* Can be configured */
+#define CLASS_PBUF3_BASE_ADDR (CLASS_PBUF2_BASE_ADDR + CLASS_PBUF_SIZE)
+
+#define CLASS_PBUF0_HEADER_BASE_ADDR (CLASS_PBUF0_BASE_ADDR + \
+ CLASS_PBUF_HEADER_OFFSET)
+#define CLASS_PBUF1_HEADER_BASE_ADDR (CLASS_PBUF1_BASE_ADDR + \
+ CLASS_PBUF_HEADER_OFFSET)
+#define CLASS_PBUF2_HEADER_BASE_ADDR (CLASS_PBUF2_BASE_ADDR + \
+ CLASS_PBUF_HEADER_OFFSET)
+#define CLASS_PBUF3_HEADER_BASE_ADDR (CLASS_PBUF3_BASE_ADDR + \
+ CLASS_PBUF_HEADER_OFFSET)
+
+#define CLASS_PE0_RO_DM_ADDR0_VAL ((CLASS_PBUF1_BASE_ADDR << 16) | \
+ CLASS_PBUF0_BASE_ADDR)
+#define CLASS_PE0_RO_DM_ADDR1_VAL ((CLASS_PBUF3_BASE_ADDR << 16) | \
+ CLASS_PBUF2_BASE_ADDR)
+
+#define CLASS_PE0_QB_DM_ADDR0_VAL ((CLASS_PBUF1_HEADER_BASE_ADDR << 16) |\
+ CLASS_PBUF0_HEADER_BASE_ADDR)
+#define CLASS_PE0_QB_DM_ADDR1_VAL ((CLASS_PBUF3_HEADER_BASE_ADDR << 16) |\
+ CLASS_PBUF2_HEADER_BASE_ADDR)
+
+#define CLASS_ROUTE_SIZE 128
+#define CLASS_MAX_ROUTE_SIZE 256
+#define CLASS_ROUTE_HASH_BITS 20
+#define CLASS_ROUTE_HASH_MASK (BIT(CLASS_ROUTE_HASH_BITS) - 1)
+
+/* Can be configured */
+#define CLASS_ROUTE0_BASE_ADDR 0x400
+/* Can be configured */
+#define CLASS_ROUTE1_BASE_ADDR (CLASS_ROUTE0_BASE_ADDR + CLASS_ROUTE_SIZE)
+/* Can be configured */
+#define CLASS_ROUTE2_BASE_ADDR (CLASS_ROUTE1_BASE_ADDR + CLASS_ROUTE_SIZE)
+/* Can be configured */
+#define CLASS_ROUTE3_BASE_ADDR (CLASS_ROUTE2_BASE_ADDR + CLASS_ROUTE_SIZE)
+
+#define CLASS_SA_SIZE 128
+#define CLASS_IPSEC_SA0_BASE_ADDR 0x600
+/* not used */
+#define CLASS_IPSEC_SA1_BASE_ADDR (CLASS_IPSEC_SA0_BASE_ADDR + CLASS_SA_SIZE)
+/* not used */
+#define CLASS_IPSEC_SA2_BASE_ADDR (CLASS_IPSEC_SA1_BASE_ADDR + CLASS_SA_SIZE)
+/* not used */
+#define CLASS_IPSEC_SA3_BASE_ADDR (CLASS_IPSEC_SA2_BASE_ADDR + CLASS_SA_SIZE)
+
+/* generic purpose free dmem buffer, last portion of 2K dmem pbuf */
+#define CLASS_GP_DMEM_BUF_SIZE (2048 - (CLASS_PBUF_SIZE * 4) - \
+ (CLASS_ROUTE_SIZE * 4) - (CLASS_SA_SIZE))
+#define CLASS_GP_DMEM_BUF ((void *)(CLASS_IPSEC_SA0_BASE_ADDR + \
+ CLASS_SA_SIZE))
+
+#define TWO_LEVEL_ROUTE BIT(0)
+#define PHYNO_IN_HASH BIT(1)
+#define HW_ROUTE_FETCH BIT(3)
+#define HW_BRIDGE_FETCH BIT(5)
+#define IP_ALIGNED BIT(6)
+#define ARC_HIT_CHECK_EN BIT(7)
+#define CLASS_TOE BIT(11)
+#define HASH_NORMAL (0 << 12)
+#define HASH_CRC_PORT BIT(12)
+#define HASH_CRC_IP (2 << 12)
+#define HASH_CRC_PORT_IP (3 << 12)
+#define QB2BUS_LE BIT(15)
+
+#define TCP_CHKSUM_DROP BIT(0)
+#define UDP_CHKSUM_DROP BIT(1)
+#define IPV4_CHKSUM_DROP BIT(9)
+
+/*CLASS_HIF_PARSE bits*/
+#define HIF_PKT_CLASS_EN BIT(0)
+#define HIF_PKT_OFFSET(ofst) (((ofst) & 0xF) << 1)
+
+struct class_cfg {
+ u32 toe_mode;
+ unsigned long route_table_baseaddr;
+ u32 route_table_hash_bits;
+ u32 pe_sys_clk_ratio;
+ u32 resume;
+};
+
+#endif /* _CLASS_CSR_H_ */
1371 --- /dev/null
1372 +++ b/drivers/staging/fsl_ppfe/include/pfe/cbus/emac_mtip.h
1373 @@ -0,0 +1,242 @@
1374 +/*
1375 + * Copyright 2015-2016 Freescale Semiconductor, Inc.
1376 + * Copyright 2017 NXP
1377 + *
1378 + * This program is free software; you can redistribute it and/or modify
1379 + * it under the terms of the GNU General Public License as published by
1380 + * the Free Software Foundation; either version 2 of the License, or
1381 + * (at your option) any later version.
1382 + *
1383 + * This program is distributed in the hope that it will be useful,
1384 + * but WITHOUT ANY WARRANTY; without even the implied warranty of
1385 + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
1386 + * GNU General Public License for more details.
1387 + *
1388 + * You should have received a copy of the GNU General Public License
1389 + * along with this program. If not, see <http://www.gnu.org/licenses/>.
1390 + */
1391 +
1392 +#ifndef _EMAC_H_
1393 +#define _EMAC_H_
1394 +
1395 +#include <linux/ethtool.h>
1396 +
1397 +#define EMAC_IEVENT_REG 0x004
1398 +#define EMAC_IMASK_REG 0x008
1399 +#define EMAC_R_DES_ACTIVE_REG 0x010
1400 +#define EMAC_X_DES_ACTIVE_REG 0x014
1401 +#define EMAC_ECNTRL_REG 0x024
1402 +#define EMAC_MII_DATA_REG 0x040
1403 +#define EMAC_MII_CTRL_REG 0x044
1404 +#define EMAC_MIB_CTRL_STS_REG 0x064
1405 +#define EMAC_RCNTRL_REG 0x084
1406 +#define EMAC_TCNTRL_REG 0x0C4
1407 +#define EMAC_PHY_ADDR_LOW 0x0E4
1408 +#define EMAC_PHY_ADDR_HIGH 0x0E8
1409 +#define EMAC_GAUR 0x120
1410 +#define EMAC_GALR 0x124
1411 +#define EMAC_TFWR_STR_FWD 0x144
1412 +#define EMAC_RX_SECTION_FULL 0x190
1413 +#define EMAC_RX_SECTION_EMPTY 0x194
1414 +#define EMAC_TX_SECTION_EMPTY 0x1A0
1415 +#define EMAC_TRUNC_FL 0x1B0
1416 +
1417 +#define RMON_T_DROP 0x200 /* Count of frames not cntd correctly */
1418 +#define RMON_T_PACKETS 0x204 /* RMON TX packet count */
1419 +#define RMON_T_BC_PKT 0x208 /* RMON TX broadcast pkts */
1420 +#define RMON_T_MC_PKT 0x20c /* RMON TX multicast pkts */
1421 +#define RMON_T_CRC_ALIGN 0x210 /* RMON TX pkts with CRC align err */
1422 +#define RMON_T_UNDERSIZE 0x214 /* RMON TX pkts < 64 bytes, good CRC */
1423 +#define RMON_T_OVERSIZE 0x218 /* RMON TX pkts > MAX_FL bytes good CRC */
1424 +#define RMON_T_FRAG 0x21c /* RMON TX pkts < 64 bytes, bad CRC */
1425 +#define RMON_T_JAB 0x220 /* RMON TX pkts > MAX_FL bytes, bad CRC */
1426 +#define RMON_T_COL 0x224 /* RMON TX collision count */
1427 +#define RMON_T_P64 0x228 /* RMON TX 64 byte pkts */
1428 +#define RMON_T_P65TO127 0x22c /* RMON TX 65 to 127 byte pkts */
1429 +#define RMON_T_P128TO255 0x230 /* RMON TX 128 to 255 byte pkts */
1430 +#define RMON_T_P256TO511 0x234 /* RMON TX 256 to 511 byte pkts */
1431 +#define RMON_T_P512TO1023 0x238 /* RMON TX 512 to 1023 byte pkts */
1432 +#define RMON_T_P1024TO2047 0x23c /* RMON TX 1024 to 2047 byte pkts */
1433 +#define RMON_T_P_GTE2048 0x240 /* RMON TX pkts > 2048 bytes */
1434 +#define RMON_T_OCTETS 0x244 /* RMON TX octets */
1435 +#define IEEE_T_DROP 0x248 /* Count of frames not counted crtly */
1436 +#define IEEE_T_FRAME_OK 0x24c /* Frames tx'd OK */
1437 +#define IEEE_T_1COL 0x250 /* Frames tx'd with single collision */
1438 +#define IEEE_T_MCOL 0x254 /* Frames tx'd with multiple collision */
1439 +#define IEEE_T_DEF 0x258 /* Frames tx'd after deferral delay */
1440 +#define IEEE_T_LCOL 0x25c /* Frames tx'd with late collision */
1441 +#define IEEE_T_EXCOL 0x260 /* Frames tx'd with excesv collisions */
1442 +#define IEEE_T_MACERR 0x264 /* Frames tx'd with TX FIFO underrun */
1443 +#define IEEE_T_CSERR 0x268 /* Frames tx'd with carrier sense err */
1444 +#define IEEE_T_SQE 0x26c /* Frames tx'd with SQE err */
1445 +#define IEEE_T_FDXFC 0x270 /* Flow control pause frames tx'd */
1446 +#define IEEE_T_OCTETS_OK 0x274 /* Octet count for frames tx'd w/o err */
1447 +#define RMON_R_PACKETS 0x284 /* RMON RX packet count */
1448 +#define RMON_R_BC_PKT 0x288 /* RMON RX broadcast pkts */
1449 +#define RMON_R_MC_PKT 0x28c /* RMON RX multicast pkts */
1450 +#define RMON_R_CRC_ALIGN 0x290 /* RMON RX pkts with CRC alignment err */
1451 +#define RMON_R_UNDERSIZE 0x294 /* RMON RX pkts < 64 bytes, good CRC */
1452 +#define RMON_R_OVERSIZE 0x298 /* RMON RX pkts > MAX_FL bytes, good CRC */
1453 +#define RMON_R_FRAG 0x29c /* RMON RX pkts < 64 bytes, bad CRC */
1454 +#define RMON_R_JAB 0x2a0 /* RMON RX pkts > MAX_FL bytes, bad CRC */
1455 +#define RMON_R_RESVD_O 0x2a4 /* Reserved */
1456 +#define RMON_R_P64 0x2a8 /* RMON RX 64 byte pkts */
1457 +#define RMON_R_P65TO127 0x2ac /* RMON RX 65 to 127 byte pkts */
1458 +#define RMON_R_P128TO255 0x2b0 /* RMON RX 128 to 255 byte pkts */
1459 +#define RMON_R_P256TO511 0x2b4 /* RMON RX 256 to 511 byte pkts */
1460 +#define RMON_R_P512TO1023 0x2b8 /* RMON RX 512 to 1023 byte pkts */
1461 +#define RMON_R_P1024TO2047 0x2bc /* RMON RX 1024 to 2047 byte pkts */
1462 +#define RMON_R_P_GTE2048 0x2c0 /* RMON RX pkts > 2048 bytes */
1463 +#define RMON_R_OCTETS 0x2c4 /* RMON RX octets */
1464 +#define IEEE_R_DROP 0x2c8 /* Count frames not counted correctly */
1465 +#define IEEE_R_FRAME_OK 0x2cc /* Frames rx'd OK */
1466 +#define IEEE_R_CRC 0x2d0 /* Frames rx'd with CRC err */
1467 +#define IEEE_R_ALIGN 0x2d4 /* Frames rx'd with alignment err */
1468 +#define IEEE_R_MACERR 0x2d8 /* Receive FIFO overflow count */
1469 +#define IEEE_R_FDXFC 0x2dc /* Flow control pause frames rx'd */
1470 +#define IEEE_R_OCTETS_OK 0x2e0 /* Octet cnt for frames rx'd w/o err */
1471 +
1472 +#define EMAC_SMAC_0_0 0x500 /* Supplemental MAC Address 0 (RW), low */
1473 +#define EMAC_SMAC_0_1 0x504 /* Supplemental MAC Address 0 (RW), high */
1474 +
1475 +/* GEMAC definitions and settings */
1476 +
1477 +#define EMAC_PORT_0 0
1478 +#define EMAC_PORT_1 1
1479 +
1480 +/* GEMAC Bit definitions */
1481 +#define EMAC_IEVENT_HBERR 0x80000000
1482 +#define EMAC_IEVENT_BABR 0x40000000
1483 +#define EMAC_IEVENT_BABT 0x20000000
1484 +#define EMAC_IEVENT_GRA 0x10000000
1485 +#define EMAC_IEVENT_TXF 0x08000000
1486 +#define EMAC_IEVENT_TXB 0x04000000
1487 +#define EMAC_IEVENT_RXF 0x02000000
1488 +#define EMAC_IEVENT_RXB 0x01000000
1489 +#define EMAC_IEVENT_MII 0x00800000
1490 +#define EMAC_IEVENT_EBERR 0x00400000
1491 +#define EMAC_IEVENT_LC 0x00200000
1492 +#define EMAC_IEVENT_RL 0x00100000
1493 +#define EMAC_IEVENT_UN 0x00080000
1494 +
1495 +#define EMAC_IMASK_HBERR 0x80000000
1496 +#define EMAC_IMASK_BABR 0x40000000
1497 +#define EMAC_IMASKT_BABT 0x20000000
1498 +#define EMAC_IMASK_GRA 0x10000000
1499 +#define EMAC_IMASKT_TXF 0x08000000
1500 +#define EMAC_IMASK_TXB 0x04000000
1501 +#define EMAC_IMASKT_RXF 0x02000000
1502 +#define EMAC_IMASK_RXB 0x01000000
1503 +#define EMAC_IMASK_MII 0x00800000
1504 +#define EMAC_IMASK_EBERR 0x00400000
1505 +#define EMAC_IMASK_LC 0x00200000
1506 +#define EMAC_IMASKT_RL 0x00100000
1507 +#define EMAC_IMASK_UN 0x00080000
1508 +
1509 +#define EMAC_RCNTRL_MAX_FL_SHIFT 16
1510 +#define EMAC_RCNTRL_LOOP 0x00000001
1511 +#define EMAC_RCNTRL_DRT 0x00000002
1512 +#define EMAC_RCNTRL_MII_MODE 0x00000004
1513 +#define EMAC_RCNTRL_PROM 0x00000008
1514 +#define EMAC_RCNTRL_BC_REJ 0x00000010
1515 +#define EMAC_RCNTRL_FCE 0x00000020
1516 +#define EMAC_RCNTRL_RGMII 0x00000040
1517 +#define EMAC_RCNTRL_SGMII 0x00000080
1518 +#define EMAC_RCNTRL_RMII 0x00000100
1519 +#define EMAC_RCNTRL_RMII_10T 0x00000200
1520 +#define EMAC_RCNTRL_CRC_FWD 0x00004000
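+
+/*
+ * Illustrative sketch, not part of the original driver: assuming a 1536 byte
+ * maximum frame length on an RGMII link with flow control, an RCNTRL value
+ * could be composed from the bits above as:
+ *
+ *	val = (1536 << EMAC_RCNTRL_MAX_FL_SHIFT) | EMAC_RCNTRL_RGMII |
+ *	      EMAC_RCNTRL_FCE | EMAC_RCNTRL_MII_MODE;
+ */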
1521 +
1522 +#define EMAC_TCNTRL_GTS 0x00000001
1523 +#define EMAC_TCNTRL_HBC 0x00000002
1524 +#define EMAC_TCNTRL_FDEN 0x00000004
1525 +#define EMAC_TCNTRL_TFC_PAUSE 0x00000008
1526 +#define EMAC_TCNTRL_RFC_PAUSE 0x00000010
1527 +
1528 +#define EMAC_ECNTRL_RESET 0x00000001 /* reset the EMAC */
1529 +#define EMAC_ECNTRL_ETHER_EN 0x00000002 /* enable the EMAC */
1530 +#define EMAC_ECNTRL_MAGIC_ENA 0x00000004
1531 +#define EMAC_ECNTRL_SLEEP 0x00000008
1532 +#define EMAC_ECNTRL_SPEED 0x00000020
1533 +#define EMAC_ECNTRL_DBSWAP 0x00000100
1534 +
1535 +#define EMAC_X_WMRK_STRFWD 0x00000100
1536 +
1537 +#define EMAC_X_DES_ACTIVE_TDAR 0x01000000
1538 +#define EMAC_R_DES_ACTIVE_RDAR 0x01000000
1539 +
1540 +#define EMAC_RX_SECTION_EMPTY_V 0x00010006
1541 +/*
1542 + * The possible operating speeds of the MAC: 10, 100 and 1000 Mbps, plus a
1543 + * 1000 Mbps PCS mode (SPEED_1000M_PCS).
1544 + */
1545 +enum mac_speed {SPEED_10M, SPEED_100M, SPEED_1000M, SPEED_1000M_PCS};
1546 +
1547 +/* MII-related definitions */
1548 +#define EMAC_MII_DATA_ST 0x40000000 /* Start of frame delimiter */
1549 +#define EMAC_MII_DATA_OP_RD 0x20000000 /* Perform a read operation */
1550 +#define EMAC_MII_DATA_OP_CL45_RD 0x30000000 /* Perform a Clause 45 read */
1551 +#define EMAC_MII_DATA_OP_WR 0x10000000 /* Perform a write operation */
1552 +#define EMAC_MII_DATA_OP_CL45_WR 0x10000000 /* Perform a Clause 45 write */
1553 +#define EMAC_MII_DATA_PA_MSK 0x0f800000 /* PHY Address field mask */
1554 +#define EMAC_MII_DATA_RA_MSK 0x007c0000 /* PHY Register field mask */
1555 +#define EMAC_MII_DATA_TA 0x00020000 /* Turnaround */
1556 +#define EMAC_MII_DATA_DATAMSK 0x0000ffff /* PHY data field */
1557 +
1558 +#define EMAC_MII_DATA_RA_SHIFT 18 /* MII Register address bits */
1559 +#define EMAC_MII_DATA_RA_MASK 0x1F /* MII Register address mask */
1560 +#define EMAC_MII_DATA_PA_SHIFT 23 /* MII PHY address bits */
1561 +#define EMAC_MII_DATA_PA_MASK 0x1F /* MII PHY address mask */
1562 +
1563 +#define EMAC_MII_DATA_RA(v) (((v) & EMAC_MII_DATA_RA_MASK) << \
1564 + EMAC_MII_DATA_RA_SHIFT)
1565 +#define EMAC_MII_DATA_PA(v) (((v) & EMAC_MII_DATA_PA_MASK) << \
1566 + EMAC_MII_DATA_PA_SHIFT)
1567 +#define EMAC_MII_DATA(v) ((v) & 0xffff)
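+
+/*
+ * Illustrative sketch, not part of the original driver: a Clause 22 MII read
+ * of register 'ra' on PHY 'pa' would compose a command word from the fields
+ * above, write it to EMAC_MII_DATA_REG, wait for EMAC_IEVENT_MII, then take
+ * the result with EMAC_MII_DATA():
+ *
+ *	cmd = EMAC_MII_DATA_ST | EMAC_MII_DATA_OP_RD | EMAC_MII_DATA_PA(pa) |
+ *	      EMAC_MII_DATA_RA(ra) | EMAC_MII_DATA_TA;
+ */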
1568 +
1569 +#define EMAC_MII_SPEED_SHIFT 1
1570 +#define EMAC_HOLDTIME_SHIFT 8
1571 +#define EMAC_HOLDTIME_MASK 0x7
1572 +#define EMAC_HOLDTIME(v) (((v) & EMAC_HOLDTIME_MASK) << \
1573 + EMAC_HOLDTIME_SHIFT)
1574 +
1575 +/*
1576 + * The address organisation for the MAC device. All addresses are split into
1577 + * two 32-bit register fields. The first one (bottom) holds the lower 32 bits
1578 + * of the address and the other holds the high-order bits - this may be 16
1579 + * bits in the case of MAC addresses, or 32 bits for the hash address.
1580 + * In terms of memory storage, the first item (bottom) is assumed to sit at a
1581 + * lower address location than 'top', i.e. 'top' should be at the address of
1582 + * 'bottom' + 4 bytes.
1583 + */
1584 +struct pfe_mac_addr {
1585 + u32 bottom; /* Lower 32-bits of address. */
1586 + u32 top; /* Upper 32-bits of address. */
1587 +};
1588 +
1589 +/*
1590 + * The following is the organisation of the address filters section of the MAC
1591 + * registers. The Cadence MAC contains four possible specific address match
1592 + * addresses; if an incoming frame corresponds to any one of these four
1593 + * addresses then the frame will be copied to memory.
1594 + * It is not necessary for all four of the address match registers to be
1595 + * programmed; this is application dependent.
1596 + */
1597 +struct spec_addr {
1598 + struct pfe_mac_addr one; /* Specific address register 1. */
1599 + struct pfe_mac_addr two; /* Specific address register 2. */
1600 + struct pfe_mac_addr three; /* Specific address register 3. */
1601 + struct pfe_mac_addr four; /* Specific address register 4. */
1602 +};
1603 +
1604 +struct gemac_cfg {
1605 + u32 mode;
1606 + u32 speed;
1607 + u32 duplex;
1608 +};
1609 +
1610 +/* EMAC Hash size */
1611 +#define EMAC_HASH_REG_BITS 64
1612 +
1613 +#define EMAC_SPEC_ADDR_MAX 4
1614 +
1615 +#endif /* _EMAC_H_ */
1616 --- /dev/null
1617 +++ b/drivers/staging/fsl_ppfe/include/pfe/cbus/gpi.h
1618 @@ -0,0 +1,86 @@
1619 +/*
1620 + * Copyright 2015-2016 Freescale Semiconductor, Inc.
1621 + * Copyright 2017 NXP
1622 + *
1623 + * This program is free software; you can redistribute it and/or modify
1624 + * it under the terms of the GNU General Public License as published by
1625 + * the Free Software Foundation; either version 2 of the License, or
1626 + * (at your option) any later version.
1627 + *
1628 + * This program is distributed in the hope that it will be useful,
1629 + * but WITHOUT ANY WARRANTY; without even the implied warranty of
1630 + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
1631 + * GNU General Public License for more details.
1632 + *
1633 + * You should have received a copy of the GNU General Public License
1634 + * along with this program. If not, see <http://www.gnu.org/licenses/>.
1635 + */
1636 +
1637 +#ifndef _GPI_H_
1638 +#define _GPI_H_
1639 +
1640 +#define GPI_VERSION 0x00
1641 +#define GPI_CTRL 0x04
1642 +#define GPI_RX_CONFIG 0x08
1643 +#define GPI_HDR_SIZE 0x0c
1644 +#define GPI_BUF_SIZE 0x10
1645 +#define GPI_LMEM_ALLOC_ADDR 0x14
1646 +#define GPI_LMEM_FREE_ADDR 0x18
1647 +#define GPI_DDR_ALLOC_ADDR 0x1c
1648 +#define GPI_DDR_FREE_ADDR 0x20
1649 +#define GPI_CLASS_ADDR 0x24
1650 +#define GPI_DRX_FIFO 0x28
1651 +#define GPI_TRX_FIFO 0x2c
1652 +#define GPI_INQ_PKTPTR 0x30
1653 +#define GPI_DDR_DATA_OFFSET 0x34
1654 +#define GPI_LMEM_DATA_OFFSET 0x38
1655 +#define GPI_TMLF_TX 0x4c
1656 +#define GPI_DTX_ASEQ 0x50
1657 +#define GPI_FIFO_STATUS 0x54
1658 +#define GPI_FIFO_DEBUG 0x58
1659 +#define GPI_TX_PAUSE_TIME 0x5c
1660 +#define GPI_LMEM_SEC_BUF_DATA_OFFSET 0x60
1661 +#define GPI_DDR_SEC_BUF_DATA_OFFSET 0x64
1662 +#define GPI_TOE_CHKSUM_EN 0x68
1663 +#define GPI_OVERRUN_DROPCNT 0x6c
1664 +#define GPI_CSR_MTIP_PAUSE_REG 0x74
1665 +#define GPI_CSR_MTIP_PAUSE_QUANTUM 0x78
1666 +#define GPI_CSR_RX_CNT 0x7c
1667 +#define GPI_CSR_TX_CNT 0x80
1668 +#define GPI_CSR_DEBUG1 0x84
1669 +#define GPI_CSR_DEBUG2 0x88
1670 +
1671 +struct gpi_cfg {
1672 + u32 lmem_rtry_cnt;
1673 + u32 tmlf_txthres;
1674 + u32 aseq_len;
1675 + u32 mtip_pause_reg;
1676 +};
1677 +
1678 +/* GPI common defines */
1679 +#define GPI_LMEM_BUF_EN 0x1
1680 +#define GPI_DDR_BUF_EN 0x1
1681 +
1682 +/* EGPI 1 defines */
1683 +#define EGPI1_LMEM_RTRY_CNT 0x40
1684 +#define EGPI1_TMLF_TXTHRES 0xBC
1685 +#define EGPI1_ASEQ_LEN 0x50
1686 +
1687 +/* EGPI 2 defines */
1688 +#define EGPI2_LMEM_RTRY_CNT 0x40
1689 +#define EGPI2_TMLF_TXTHRES 0xBC
1690 +#define EGPI2_ASEQ_LEN 0x40
1691 +
1692 +/* EGPI 3 defines */
1693 +#define EGPI3_LMEM_RTRY_CNT 0x40
1694 +#define EGPI3_TMLF_TXTHRES 0xBC
1695 +#define EGPI3_ASEQ_LEN 0x40
1696 +
1697 +/* HGPI defines */
1698 +#define HGPI_LMEM_RTRY_CNT 0x40
1699 +#define HGPI_TMLF_TXTHRES 0xBC
1700 +#define HGPI_ASEQ_LEN 0x40
1701 +
1702 +#define EGPI_PAUSE_TIME 0x000007D0
1703 +#define EGPI_PAUSE_ENABLE 0x40000000
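+
+/*
+ * Illustrative sketch, not part of the original driver: EGPI1 would be
+ * configured from its per-instance defines above (the base address symbol is
+ * assumed to come from cbus.h):
+ *
+ *	struct gpi_cfg cfg = {
+ *		.lmem_rtry_cnt = EGPI1_LMEM_RTRY_CNT,
+ *		.tmlf_txthres = EGPI1_TMLF_TXTHRES,
+ *		.aseq_len = EGPI1_ASEQ_LEN,
+ *	};
+ *	gpi_init(EGPI1_BASE_ADDR, &cfg);
+ */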
1704 +#endif /* _GPI_H_ */
1705 --- /dev/null
1706 +++ b/drivers/staging/fsl_ppfe/include/pfe/cbus/hif.h
1707 @@ -0,0 +1,100 @@
1708 +/*
1709 + * Copyright 2015-2016 Freescale Semiconductor, Inc.
1710 + * Copyright 2017 NXP
1711 + *
1712 + * This program is free software; you can redistribute it and/or modify
1713 + * it under the terms of the GNU General Public License as published by
1714 + * the Free Software Foundation; either version 2 of the License, or
1715 + * (at your option) any later version.
1716 + *
1717 + * This program is distributed in the hope that it will be useful,
1718 + * but WITHOUT ANY WARRANTY; without even the implied warranty of
1719 + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
1720 + * GNU General Public License for more details.
1721 + *
1722 + * You should have received a copy of the GNU General Public License
1723 + * along with this program. If not, see <http://www.gnu.org/licenses/>.
1724 + */
1725 +
1726 +#ifndef _HIF_H_
1727 +#define _HIF_H_
1728 +
1729 +/* @file hif.h
1730 + * hif - PFE HIF block control and status registers.
1731 + * Mapped on CBUS and accessible from all PE's and the ARM.
1732 + */
1733 +#define HIF_VERSION (HIF_BASE_ADDR + 0x00)
1734 +#define HIF_TX_CTRL (HIF_BASE_ADDR + 0x04)
1735 +#define HIF_TX_CURR_BD_ADDR (HIF_BASE_ADDR + 0x08)
1736 +#define HIF_TX_ALLOC (HIF_BASE_ADDR + 0x0c)
1737 +#define HIF_TX_BDP_ADDR (HIF_BASE_ADDR + 0x10)
1738 +#define HIF_TX_STATUS (HIF_BASE_ADDR + 0x14)
1739 +#define HIF_RX_CTRL (HIF_BASE_ADDR + 0x20)
1740 +#define HIF_RX_BDP_ADDR (HIF_BASE_ADDR + 0x24)
1741 +#define HIF_RX_STATUS (HIF_BASE_ADDR + 0x30)
1742 +#define HIF_INT_SRC (HIF_BASE_ADDR + 0x34)
1743 +#define HIF_INT_ENABLE (HIF_BASE_ADDR + 0x38)
1744 +#define HIF_POLL_CTRL (HIF_BASE_ADDR + 0x3c)
1745 +#define HIF_RX_CURR_BD_ADDR (HIF_BASE_ADDR + 0x40)
1746 +#define HIF_RX_ALLOC (HIF_BASE_ADDR + 0x44)
1747 +#define HIF_TX_DMA_STATUS (HIF_BASE_ADDR + 0x48)
1748 +#define HIF_RX_DMA_STATUS (HIF_BASE_ADDR + 0x4c)
1749 +#define HIF_INT_COAL (HIF_BASE_ADDR + 0x50)
1750 +
1751 +/* HIF_INT_SRC / HIF_INT_ENABLE control bits */
1752 +#define HIF_INT BIT(0)
1753 +#define HIF_RXBD_INT BIT(1)
1754 +#define HIF_RXPKT_INT BIT(2)
1755 +#define HIF_TXBD_INT BIT(3)
1756 +#define HIF_TXPKT_INT BIT(4)
1757 +
1758 +/* HIF_TX_CTRL bits */
1759 +#define HIF_CTRL_DMA_EN BIT(0)
1760 +#define HIF_CTRL_BDP_POLL_CTRL_EN BIT(1)
1761 +#define HIF_CTRL_BDP_CH_START_WSTB BIT(2)
1762 +
1763 +/* HIF_RX_STATUS bits */
1764 +#define BDP_CSR_RX_DMA_ACTV BIT(16)
1765 +
1766 +/* HIF_INT_ENABLE bits */
1767 +#define HIF_INT_EN BIT(0)
1768 +#define HIF_RXBD_INT_EN BIT(1)
1769 +#define HIF_RXPKT_INT_EN BIT(2)
1770 +#define HIF_TXBD_INT_EN BIT(3)
1771 +#define HIF_TXPKT_INT_EN BIT(4)
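+
+/*
+ * Illustrative sketch, not part of the original driver: RX packet interrupts
+ * would typically be armed together with the global enable bit:
+ *
+ *	writel(HIF_INT_EN | HIF_RXPKT_INT_EN, HIF_INT_ENABLE);
+ */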
1772 +
1773 +/* HIF_POLL_CTRL bits */
1774 +#define HIF_RX_POLL_CTRL_CYCLE 0x0400
1775 +#define HIF_TX_POLL_CTRL_CYCLE 0x0400
1776 +
1777 +/* HIF_INT_COAL bits */
1778 +#define HIF_INT_COAL_ENABLE BIT(31)
1779 +
1780 +/* Buffer descriptor control bits */
1781 +#define BD_CTRL_BUFLEN_MASK 0x3fff
1782 +#define BD_BUF_LEN(x) ((x) & BD_CTRL_BUFLEN_MASK)
1783 +#define BD_CTRL_CBD_INT_EN BIT(16)
1784 +#define BD_CTRL_PKT_INT_EN BIT(17)
1785 +#define BD_CTRL_LIFM BIT(18)
1786 +#define BD_CTRL_LAST_BD BIT(19)
1787 +#define BD_CTRL_DIR BIT(20)
1788 +#define BD_CTRL_LMEM_CPY BIT(21) /* Valid only for HIF_NOCPY */
1789 +#define BD_CTRL_PKT_XFER BIT(24)
1790 +#define BD_CTRL_DESC_EN BIT(31)
1791 +#define BD_CTRL_PARSE_DISABLE BIT(25)
1792 +#define BD_CTRL_BRFETCH_DISABLE BIT(26)
1793 +#define BD_CTRL_RTFETCH_DISABLE BIT(27)
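+
+/*
+ * Illustrative sketch, not part of the original driver: a single-fragment TX
+ * descriptor control word combines the buffer length with the enable and
+ * last-fragment flags:
+ *
+ *	ctrl = BD_CTRL_DESC_EN | BD_CTRL_LIFM | BD_BUF_LEN(len);
+ */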
1794 +
1795 +/* Buffer descriptor status bits */
1796 +#define BD_STATUS_CONN_ID(x) ((x) & 0xffff)
1797 +#define BD_STATUS_DIR_PROC_ID BIT(16)
1798 +#define BD_STATUS_CONN_ID_EN BIT(17)
1799 +#define BD_STATUS_PE2PROC_ID(x) (((x) & 7) << 18)
1800 +#define BD_STATUS_LE_DATA BIT(21)
1801 +#define BD_STATUS_CHKSUM_EN BIT(22)
1802 +
1803 +/* HIF Buffer descriptor status bits */
1804 +#define DIR_PROC_ID BIT(16)
1805 +#define PROC_ID(id) ((id) << 18)
1806 +
1807 +#endif /* _HIF_H_ */
1808 --- /dev/null
1809 +++ b/drivers/staging/fsl_ppfe/include/pfe/cbus/hif_nocpy.h
1810 @@ -0,0 +1,50 @@
1811 +/*
1812 + * Copyright 2015-2016 Freescale Semiconductor, Inc.
1813 + * Copyright 2017 NXP
1814 + *
1815 + * This program is free software; you can redistribute it and/or modify
1816 + * it under the terms of the GNU General Public License as published by
1817 + * the Free Software Foundation; either version 2 of the License, or
1818 + * (at your option) any later version.
1819 + *
1820 + * This program is distributed in the hope that it will be useful,
1821 + * but WITHOUT ANY WARRANTY; without even the implied warranty of
1822 + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
1823 + * GNU General Public License for more details.
1824 + *
1825 + * You should have received a copy of the GNU General Public License
1826 + * along with this program. If not, see <http://www.gnu.org/licenses/>.
1827 + */
1828 +
1829 +#ifndef _HIF_NOCPY_H_
1830 +#define _HIF_NOCPY_H_
1831 +
1832 +#define HIF_NOCPY_VERSION (HIF_NOCPY_BASE_ADDR + 0x00)
1833 +#define HIF_NOCPY_TX_CTRL (HIF_NOCPY_BASE_ADDR + 0x04)
1834 +#define HIF_NOCPY_TX_CURR_BD_ADDR (HIF_NOCPY_BASE_ADDR + 0x08)
1835 +#define HIF_NOCPY_TX_ALLOC (HIF_NOCPY_BASE_ADDR + 0x0c)
1836 +#define HIF_NOCPY_TX_BDP_ADDR (HIF_NOCPY_BASE_ADDR + 0x10)
1837 +#define HIF_NOCPY_TX_STATUS (HIF_NOCPY_BASE_ADDR + 0x14)
1838 +#define HIF_NOCPY_RX_CTRL (HIF_NOCPY_BASE_ADDR + 0x20)
1839 +#define HIF_NOCPY_RX_BDP_ADDR (HIF_NOCPY_BASE_ADDR + 0x24)
1840 +#define HIF_NOCPY_RX_STATUS (HIF_NOCPY_BASE_ADDR + 0x30)
1841 +#define HIF_NOCPY_INT_SRC (HIF_NOCPY_BASE_ADDR + 0x34)
1842 +#define HIF_NOCPY_INT_ENABLE (HIF_NOCPY_BASE_ADDR + 0x38)
1843 +#define HIF_NOCPY_POLL_CTRL (HIF_NOCPY_BASE_ADDR + 0x3c)
1844 +#define HIF_NOCPY_RX_CURR_BD_ADDR (HIF_NOCPY_BASE_ADDR + 0x40)
1845 +#define HIF_NOCPY_RX_ALLOC (HIF_NOCPY_BASE_ADDR + 0x44)
1846 +#define HIF_NOCPY_TX_DMA_STATUS (HIF_NOCPY_BASE_ADDR + 0x48)
1847 +#define HIF_NOCPY_RX_DMA_STATUS (HIF_NOCPY_BASE_ADDR + 0x4c)
1848 +#define HIF_NOCPY_RX_INQ0_PKTPTR (HIF_NOCPY_BASE_ADDR + 0x50)
1849 +#define HIF_NOCPY_RX_INQ1_PKTPTR (HIF_NOCPY_BASE_ADDR + 0x54)
1850 +#define HIF_NOCPY_TX_PORT_NO (HIF_NOCPY_BASE_ADDR + 0x60)
1851 +#define HIF_NOCPY_LMEM_ALLOC_ADDR (HIF_NOCPY_BASE_ADDR + 0x64)
1852 +#define HIF_NOCPY_CLASS_ADDR (HIF_NOCPY_BASE_ADDR + 0x68)
1853 +#define HIF_NOCPY_TMU_PORT0_ADDR (HIF_NOCPY_BASE_ADDR + 0x70)
1854 +#define HIF_NOCPY_TMU_PORT1_ADDR (HIF_NOCPY_BASE_ADDR + 0x74)
1855 +#define HIF_NOCPY_TMU_PORT2_ADDR (HIF_NOCPY_BASE_ADDR + 0x7c)
1856 +#define HIF_NOCPY_TMU_PORT3_ADDR (HIF_NOCPY_BASE_ADDR + 0x80)
1857 +#define HIF_NOCPY_TMU_PORT4_ADDR (HIF_NOCPY_BASE_ADDR + 0x84)
1858 +#define HIF_NOCPY_INT_COAL (HIF_NOCPY_BASE_ADDR + 0x90)
1859 +
1860 +#endif /* _HIF_NOCPY_H_ */
1861 --- /dev/null
1862 +++ b/drivers/staging/fsl_ppfe/include/pfe/cbus/tmu_csr.h
1863 @@ -0,0 +1,168 @@
1864 +/*
1865 + * Copyright 2015-2016 Freescale Semiconductor, Inc.
1866 + * Copyright 2017 NXP
1867 + *
1868 + * This program is free software; you can redistribute it and/or modify
1869 + * it under the terms of the GNU General Public License as published by
1870 + * the Free Software Foundation; either version 2 of the License, or
1871 + * (at your option) any later version.
1872 + *
1873 + * This program is distributed in the hope that it will be useful,
1874 + * but WITHOUT ANY WARRANTY; without even the implied warranty of
1875 + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
1876 + * GNU General Public License for more details.
1877 + *
1878 + * You should have received a copy of the GNU General Public License
1879 + * along with this program. If not, see <http://www.gnu.org/licenses/>.
1880 + */
1881 +
1882 +#ifndef _TMU_CSR_H_
1883 +#define _TMU_CSR_H_
1884 +
1885 +#define TMU_VERSION (TMU_CSR_BASE_ADDR + 0x000)
1886 +#define TMU_INQ_WATERMARK (TMU_CSR_BASE_ADDR + 0x004)
1887 +#define TMU_PHY_INQ_PKTPTR (TMU_CSR_BASE_ADDR + 0x008)
1888 +#define TMU_PHY_INQ_PKTINFO (TMU_CSR_BASE_ADDR + 0x00c)
1889 +#define TMU_PHY_INQ_FIFO_CNT (TMU_CSR_BASE_ADDR + 0x010)
1890 +#define TMU_SYS_GENERIC_CONTROL (TMU_CSR_BASE_ADDR + 0x014)
1891 +#define TMU_SYS_GENERIC_STATUS (TMU_CSR_BASE_ADDR + 0x018)
1892 +#define TMU_SYS_GEN_CON0 (TMU_CSR_BASE_ADDR + 0x01c)
1893 +#define TMU_SYS_GEN_CON1 (TMU_CSR_BASE_ADDR + 0x020)
1894 +#define TMU_SYS_GEN_CON2 (TMU_CSR_BASE_ADDR + 0x024)
1895 +#define TMU_SYS_GEN_CON3 (TMU_CSR_BASE_ADDR + 0x028)
1896 +#define TMU_SYS_GEN_CON4 (TMU_CSR_BASE_ADDR + 0x02c)
1897 +#define TMU_TEQ_DISABLE_DROPCHK (TMU_CSR_BASE_ADDR + 0x030)
1898 +#define TMU_TEQ_CTRL (TMU_CSR_BASE_ADDR + 0x034)
1899 +#define TMU_TEQ_QCFG (TMU_CSR_BASE_ADDR + 0x038)
1900 +#define TMU_TEQ_DROP_STAT (TMU_CSR_BASE_ADDR + 0x03c)
1901 +#define TMU_TEQ_QAVG (TMU_CSR_BASE_ADDR + 0x040)
1902 +#define TMU_TEQ_WREG_PROB (TMU_CSR_BASE_ADDR + 0x044)
1903 +#define TMU_TEQ_TRANS_STAT (TMU_CSR_BASE_ADDR + 0x048)
1904 +#define TMU_TEQ_HW_PROB_CFG0 (TMU_CSR_BASE_ADDR + 0x04c)
1905 +#define TMU_TEQ_HW_PROB_CFG1 (TMU_CSR_BASE_ADDR + 0x050)
1906 +#define TMU_TEQ_HW_PROB_CFG2 (TMU_CSR_BASE_ADDR + 0x054)
1907 +#define TMU_TEQ_HW_PROB_CFG3 (TMU_CSR_BASE_ADDR + 0x058)
1908 +#define TMU_TEQ_HW_PROB_CFG4 (TMU_CSR_BASE_ADDR + 0x05c)
1909 +#define TMU_TEQ_HW_PROB_CFG5 (TMU_CSR_BASE_ADDR + 0x060)
1910 +#define TMU_TEQ_HW_PROB_CFG6 (TMU_CSR_BASE_ADDR + 0x064)
1911 +#define TMU_TEQ_HW_PROB_CFG7 (TMU_CSR_BASE_ADDR + 0x068)
1912 +#define TMU_TEQ_HW_PROB_CFG8 (TMU_CSR_BASE_ADDR + 0x06c)
1913 +#define TMU_TEQ_HW_PROB_CFG9 (TMU_CSR_BASE_ADDR + 0x070)
1914 +#define TMU_TEQ_HW_PROB_CFG10 (TMU_CSR_BASE_ADDR + 0x074)
1915 +#define TMU_TEQ_HW_PROB_CFG11 (TMU_CSR_BASE_ADDR + 0x078)
1916 +#define TMU_TEQ_HW_PROB_CFG12 (TMU_CSR_BASE_ADDR + 0x07c)
1917 +#define TMU_TEQ_HW_PROB_CFG13 (TMU_CSR_BASE_ADDR + 0x080)
1918 +#define TMU_TEQ_HW_PROB_CFG14 (TMU_CSR_BASE_ADDR + 0x084)
1919 +#define TMU_TEQ_HW_PROB_CFG15 (TMU_CSR_BASE_ADDR + 0x088)
1920 +#define TMU_TEQ_HW_PROB_CFG16 (TMU_CSR_BASE_ADDR + 0x08c)
1921 +#define TMU_TEQ_HW_PROB_CFG17 (TMU_CSR_BASE_ADDR + 0x090)
1922 +#define TMU_TEQ_HW_PROB_CFG18 (TMU_CSR_BASE_ADDR + 0x094)
1923 +#define TMU_TEQ_HW_PROB_CFG19 (TMU_CSR_BASE_ADDR + 0x098)
1924 +#define TMU_TEQ_HW_PROB_CFG20 (TMU_CSR_BASE_ADDR + 0x09c)
1925 +#define TMU_TEQ_HW_PROB_CFG21 (TMU_CSR_BASE_ADDR + 0x0a0)
1926 +#define TMU_TEQ_HW_PROB_CFG22 (TMU_CSR_BASE_ADDR + 0x0a4)
1927 +#define TMU_TEQ_HW_PROB_CFG23 (TMU_CSR_BASE_ADDR + 0x0a8)
1928 +#define TMU_TEQ_HW_PROB_CFG24 (TMU_CSR_BASE_ADDR + 0x0ac)
1929 +#define TMU_TEQ_HW_PROB_CFG25 (TMU_CSR_BASE_ADDR + 0x0b0)
1930 +#define TMU_TDQ_IIFG_CFG (TMU_CSR_BASE_ADDR + 0x0b4)
1931 +/* [9:0] Scheduler Enable for each of the schedulers in the TDQ.
1932 + * This is a global Enable for all schedulers in PHY0
1933 + */
1934 +#define TMU_TDQ0_SCH_CTRL (TMU_CSR_BASE_ADDR + 0x0b8)
1935 +
1936 +#define TMU_LLM_CTRL (TMU_CSR_BASE_ADDR + 0x0bc)
1937 +#define TMU_LLM_BASE_ADDR (TMU_CSR_BASE_ADDR + 0x0c0)
1938 +#define TMU_LLM_QUE_LEN (TMU_CSR_BASE_ADDR + 0x0c4)
1939 +#define TMU_LLM_QUE_HEADPTR (TMU_CSR_BASE_ADDR + 0x0c8)
1940 +#define TMU_LLM_QUE_TAILPTR (TMU_CSR_BASE_ADDR + 0x0cc)
1941 +#define TMU_LLM_QUE_DROPCNT (TMU_CSR_BASE_ADDR + 0x0d0)
1942 +#define TMU_INT_EN (TMU_CSR_BASE_ADDR + 0x0d4)
1943 +#define TMU_INT_SRC (TMU_CSR_BASE_ADDR + 0x0d8)
1944 +#define TMU_INQ_STAT (TMU_CSR_BASE_ADDR + 0x0dc)
1945 +#define TMU_CTRL (TMU_CSR_BASE_ADDR + 0x0e0)
1946 +
1947 +/* [31] Mem Access Command: 0 = Internal Memory Read, 1 = Internal Memory
1948 + * Write. [27:24] Byte Enables of the internal memory access. [23:0] Address
1949 + * of the internal memory. This address is used to access both the PM and DM
1950 + * of all the PE's
1951 + */
1952 +#define TMU_MEM_ACCESS_ADDR (TMU_CSR_BASE_ADDR + 0x0e4)
1953 +
1954 +/* Internal Memory Access Write Data */
1955 +#define TMU_MEM_ACCESS_WDATA (TMU_CSR_BASE_ADDR + 0x0e8)
1956 +/* Internal Memory Access Read Data. The commands are blocked
1957 + * at the mem_access only
1958 + */
1959 +#define TMU_MEM_ACCESS_RDATA (TMU_CSR_BASE_ADDR + 0x0ec)
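+
+/*
+ * Illustrative sketch, not part of the original driver: following the bit
+ * layout described above, a 4-byte write of 'val' to internal address 'addr'
+ * would be:
+ *
+ *	writel(val, TMU_MEM_ACCESS_WDATA);
+ *	writel(BIT(31) | (0xf << 24) | (addr & 0xffffff), TMU_MEM_ACCESS_ADDR);
+ */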
1960 +
1961 +/* [31:0] PHY0 in queue address (must be initialized with one of the
1962 + * xxx_INQ_PKTPTR cbus addresses)
1963 + */
1964 +#define TMU_PHY0_INQ_ADDR (TMU_CSR_BASE_ADDR + 0x0f0)
1965 +/* [31:0] PHY1 in queue address (must be initialized with one of the
1966 + * xxx_INQ_PKTPTR cbus addresses)
1967 + */
1968 +#define TMU_PHY1_INQ_ADDR (TMU_CSR_BASE_ADDR + 0x0f4)
1969 +/* [31:0] PHY2 in queue address (must be initialized with one of the
1970 + * xxx_INQ_PKTPTR cbus addresses)
1971 + */
1972 +#define TMU_PHY2_INQ_ADDR (TMU_CSR_BASE_ADDR + 0x0f8)
1973 +/* [31:0] PHY3 in queue address (must be initialized with one of the
1974 + * xxx_INQ_PKTPTR cbus addresses)
1975 + */
1976 +#define TMU_PHY3_INQ_ADDR (TMU_CSR_BASE_ADDR + 0x0fc)
1977 +#define TMU_BMU_INQ_ADDR (TMU_CSR_BASE_ADDR + 0x100)
1978 +#define TMU_TX_CTRL (TMU_CSR_BASE_ADDR + 0x104)
1979 +
1980 +#define TMU_BUS_ACCESS_WDATA (TMU_CSR_BASE_ADDR + 0x108)
1981 +#define TMU_BUS_ACCESS (TMU_CSR_BASE_ADDR + 0x10c)
1982 +#define TMU_BUS_ACCESS_RDATA (TMU_CSR_BASE_ADDR + 0x110)
1983 +
1984 +#define TMU_PE_SYS_CLK_RATIO (TMU_CSR_BASE_ADDR + 0x114)
1985 +#define TMU_PE_STATUS (TMU_CSR_BASE_ADDR + 0x118)
1986 +#define TMU_TEQ_MAX_THRESHOLD (TMU_CSR_BASE_ADDR + 0x11c)
1987 +/* [31:0] PHY4 in queue address (must be initialized with one of the
1988 + * xxx_INQ_PKTPTR cbus addresses)
1989 + */
1990 +#define TMU_PHY4_INQ_ADDR (TMU_CSR_BASE_ADDR + 0x134)
1991 +/* [9:0] Scheduler Enable for each of the schedulers in the TDQ.
1992 + * This is a global Enable for all schedulers in PHY1
1993 + */
1994 +#define TMU_TDQ1_SCH_CTRL (TMU_CSR_BASE_ADDR + 0x138)
1995 +/* [9:0] Scheduler Enable for each of the schedulers in the TDQ.
1996 + * This is a global Enable for all schedulers in PHY2
1997 + */
1998 +#define TMU_TDQ2_SCH_CTRL (TMU_CSR_BASE_ADDR + 0x13c)
1999 +/* [9:0] Scheduler Enable for each of the schedulers in the TDQ.
2000 + * This is a global Enable for all schedulers in PHY3
2001 + */
2002 +#define TMU_TDQ3_SCH_CTRL (TMU_CSR_BASE_ADDR + 0x140)
2003 +#define TMU_BMU_BUF_SIZE (TMU_CSR_BASE_ADDR + 0x144)
2004 +/* [31:0] PHY5 in queue address (must be initialized with one of the
2005 + * xxx_INQ_PKTPTR cbus addresses)
2006 + */
2007 +#define TMU_PHY5_INQ_ADDR (TMU_CSR_BASE_ADDR + 0x148)
2008 +
2009 +#define SW_RESET BIT(0) /* Global software reset */
2010 +#define INQ_RESET BIT(2)
2011 +#define TEQ_RESET BIT(3)
2012 +#define TDQ_RESET BIT(4)
2013 +#define PE_RESET BIT(5)
2014 +#define MEM_INIT BIT(6)
2015 +#define MEM_INIT_DONE BIT(7)
2016 +#define LLM_INIT BIT(8)
2017 +#define LLM_INIT_DONE BIT(9)
2018 +#define ECC_MEM_INIT_DONE BIT(10)
2019 +
2020 +struct tmu_cfg {
2021 + u32 pe_sys_clk_ratio;
2022 + unsigned long llm_base_addr;
2023 + u32 llm_queue_len;
2024 +};
2025 +
2026 +/* Not HW related - pfe_ctrl / pfe common defines */
2027 +#define DEFAULT_MAX_QDEPTH 80
2028 +#define DEFAULT_Q0_QDEPTH 511 /* We keep one large queue for host tx qos */
2029 +#define DEFAULT_TMU3_QDEPTH 127
2030 +
2031 +#endif /* _TMU_CSR_H_ */
2032 --- /dev/null
2033 +++ b/drivers/staging/fsl_ppfe/include/pfe/cbus/util_csr.h
2034 @@ -0,0 +1,61 @@
2035 +/*
2036 + * Copyright 2015-2016 Freescale Semiconductor, Inc.
2037 + * Copyright 2017 NXP
2038 + *
2039 + * This program is free software; you can redistribute it and/or modify
2040 + * it under the terms of the GNU General Public License as published by
2041 + * the Free Software Foundation; either version 2 of the License, or
2042 + * (at your option) any later version.
2043 + *
2044 + * This program is distributed in the hope that it will be useful,
2045 + * but WITHOUT ANY WARRANTY; without even the implied warranty of
2046 + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
2047 + * GNU General Public License for more details.
2048 + *
2049 + * You should have received a copy of the GNU General Public License
2050 + * along with this program. If not, see <http://www.gnu.org/licenses/>.
2051 + */
2052 +
2053 +#ifndef _UTIL_CSR_H_
2054 +#define _UTIL_CSR_H_
2055 +
2056 +#define UTIL_VERSION (UTIL_CSR_BASE_ADDR + 0x000)
2057 +#define UTIL_TX_CTRL (UTIL_CSR_BASE_ADDR + 0x004)
2058 +#define UTIL_INQ_PKTPTR (UTIL_CSR_BASE_ADDR + 0x010)
2059 +
2060 +#define UTIL_HDR_SIZE (UTIL_CSR_BASE_ADDR + 0x014)
2061 +
2062 +#define UTIL_PE0_QB_DM_ADDR0 (UTIL_CSR_BASE_ADDR + 0x020)
2063 +#define UTIL_PE0_QB_DM_ADDR1 (UTIL_CSR_BASE_ADDR + 0x024)
2064 +#define UTIL_PE0_RO_DM_ADDR0 (UTIL_CSR_BASE_ADDR + 0x060)
2065 +#define UTIL_PE0_RO_DM_ADDR1 (UTIL_CSR_BASE_ADDR + 0x064)
2066 +
2067 +#define UTIL_MEM_ACCESS_ADDR (UTIL_CSR_BASE_ADDR + 0x100)
2068 +#define UTIL_MEM_ACCESS_WDATA (UTIL_CSR_BASE_ADDR + 0x104)
2069 +#define UTIL_MEM_ACCESS_RDATA (UTIL_CSR_BASE_ADDR + 0x108)
2070 +
2071 +#define UTIL_TM_INQ_ADDR (UTIL_CSR_BASE_ADDR + 0x114)
2072 +#define UTIL_PE_STATUS (UTIL_CSR_BASE_ADDR + 0x118)
2073 +
2074 +#define UTIL_PE_SYS_CLK_RATIO (UTIL_CSR_BASE_ADDR + 0x200)
2075 +#define UTIL_AFULL_THRES (UTIL_CSR_BASE_ADDR + 0x204)
2076 +#define UTIL_GAP_BETWEEN_READS (UTIL_CSR_BASE_ADDR + 0x208)
2077 +#define UTIL_MAX_BUF_CNT (UTIL_CSR_BASE_ADDR + 0x20c)
2078 +#define UTIL_TSQ_FIFO_THRES (UTIL_CSR_BASE_ADDR + 0x210)
2079 +#define UTIL_TSQ_MAX_CNT (UTIL_CSR_BASE_ADDR + 0x214)
2080 +#define UTIL_IRAM_DATA_0 (UTIL_CSR_BASE_ADDR + 0x218)
2081 +#define UTIL_IRAM_DATA_1 (UTIL_CSR_BASE_ADDR + 0x21c)
2082 +#define UTIL_IRAM_DATA_2 (UTIL_CSR_BASE_ADDR + 0x220)
2083 +#define UTIL_IRAM_DATA_3 (UTIL_CSR_BASE_ADDR + 0x224)
2084 +
2085 +#define UTIL_BUS_ACCESS_ADDR (UTIL_CSR_BASE_ADDR + 0x228)
2086 +#define UTIL_BUS_ACCESS_WDATA (UTIL_CSR_BASE_ADDR + 0x22c)
2087 +#define UTIL_BUS_ACCESS_RDATA (UTIL_CSR_BASE_ADDR + 0x230)
2088 +
2089 +#define UTIL_INQ_AFULL_THRES (UTIL_CSR_BASE_ADDR + 0x234)
2090 +
2091 +struct util_cfg {
2092 + u32 pe_sys_clk_ratio;
2093 +};
2094 +
2095 +#endif /* _UTIL_CSR_H_ */
2096 --- /dev/null
2097 +++ b/drivers/staging/fsl_ppfe/include/pfe/pfe.h
2098 @@ -0,0 +1,372 @@
2099 +/*
2100 + * Copyright 2015-2016 Freescale Semiconductor, Inc.
2101 + * Copyright 2017 NXP
2102 + *
2103 + * This program is free software; you can redistribute it and/or modify
2104 + * it under the terms of the GNU General Public License as published by
2105 + * the Free Software Foundation; either version 2 of the License, or
2106 + * (at your option) any later version.
2107 + *
2108 + * This program is distributed in the hope that it will be useful,
2109 + * but WITHOUT ANY WARRANTY; without even the implied warranty of
2110 + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
2111 + * GNU General Public License for more details.
2112 + *
2113 + * You should have received a copy of the GNU General Public License
2114 + * along with this program. If not, see <http://www.gnu.org/licenses/>.
2115 + */
2116 +
2117 +#ifndef _PFE_H_
2118 +#define _PFE_H_
2119 +
2120 +#include "cbus.h"
2121 +
2122 +#define CLASS_DMEM_BASE_ADDR(i) (0x00000000 | ((i) << 20))
2123 +/*
2124 + * Only valid for mem access register interface
2125 + */
2126 +#define CLASS_IMEM_BASE_ADDR(i) (0x00000000 | ((i) << 20))
2127 +#define CLASS_DMEM_SIZE 0x00002000
2128 +#define CLASS_IMEM_SIZE 0x00008000
2129 +
2130 +#define TMU_DMEM_BASE_ADDR(i) (0x00000000 + ((i) << 20))
2131 +/*
2132 + * Only valid for mem access register interface
2133 + */
2134 +#define TMU_IMEM_BASE_ADDR(i) (0x00000000 + ((i) << 20))
2135 +#define TMU_DMEM_SIZE 0x00000800
2136 +#define TMU_IMEM_SIZE 0x00002000
2137 +
2138 +#define UTIL_DMEM_BASE_ADDR 0x00000000
2139 +#define UTIL_DMEM_SIZE 0x00002000
2140 +
2141 +#define PE_LMEM_BASE_ADDR 0xc3010000
2142 +#define PE_LMEM_SIZE 0x8000
2143 +#define PE_LMEM_END (PE_LMEM_BASE_ADDR + PE_LMEM_SIZE)
2144 +
2145 +#define DMEM_BASE_ADDR 0x00000000
2146 +#define DMEM_SIZE 0x2000 /* TMU has less... */
2147 +#define DMEM_END (DMEM_BASE_ADDR + DMEM_SIZE)
2148 +
2149 +#define PMEM_BASE_ADDR 0x00010000
2150 +#define PMEM_SIZE 0x8000 /* TMU has less... */
2151 +#define PMEM_END (PMEM_BASE_ADDR + PMEM_SIZE)
2152 +
2153 +/* These macros check memory ranges from the PE point of view / memory map */
2154 +#define IS_DMEM(addr, len) \
2155 + ({ typeof(addr) addr_ = (addr); \
2156 + ((unsigned long)(addr_) >= DMEM_BASE_ADDR) && \
2157 + (((unsigned long)(addr_) + (len)) <= DMEM_END); })
2158 +
2159 +#define IS_PMEM(addr, len) \
2160 + ({ typeof(addr) addr_ = (addr); \
2161 + ((unsigned long)(addr_) >= PMEM_BASE_ADDR) && \
2162 + (((unsigned long)(addr_) + (len)) <= PMEM_END); })
2163 +
2164 +#define IS_PE_LMEM(addr, len) \
2165 + ({ typeof(addr) addr_ = (addr); \
2166 + ((unsigned long)(addr_) >= \
2167 + PE_LMEM_BASE_ADDR) && \
2168 + (((unsigned long)(addr_) + \
2169 + (len)) <= PE_LMEM_END); })
2170 +
2171 +#define IS_PFE_LMEM(addr, len) \
2172 + ({ typeof(addr) addr_ = (addr); \
2173 + ((unsigned long)(addr_) >= \
2174 + CBUS_VIRT_TO_PFE(LMEM_BASE_ADDR)) && \
2175 + (((unsigned long)(addr_) + (len)) <= \
2176 + CBUS_VIRT_TO_PFE(LMEM_END)); })
2177 +
2178 +#define __IS_PHYS_DDR(addr, len) \
2179 + ({ typeof(addr) addr_ = (addr); \
2180 + ((unsigned long)(addr_) >= \
2181 + DDR_PHYS_BASE_ADDR) && \
2182 + (((unsigned long)(addr_) + (len)) <= \
2183 + DDR_PHYS_END); })
2184 +
2185 +#define IS_PHYS_DDR(addr, len) __IS_PHYS_DDR(DDR_PFE_TO_PHYS(addr), len)
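+
+/*
+ * Illustrative sketch, not part of the original driver: a caller copying
+ * 'len' bytes to a PE address would validate the target range first:
+ *
+ *	if (!IS_DMEM(addr, len) && !IS_PMEM(addr, len))
+ *		return -EINVAL;
+ */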
2186 +
2187 +/*
2188 + * If using a run-time virtual address for the cbus base address use this code
2189 + */
2190 +extern void *cbus_base_addr;
2191 +extern void *ddr_base_addr;
2192 +extern unsigned long ddr_phys_base_addr;
2193 +extern unsigned int ddr_size;
2194 +
2195 +#define CBUS_BASE_ADDR cbus_base_addr
2196 +#define DDR_PHYS_BASE_ADDR ddr_phys_base_addr
2197 +#define DDR_BASE_ADDR ddr_base_addr
2198 +#define DDR_SIZE ddr_size
2199 +
2200 +#define DDR_PHYS_END (DDR_PHYS_BASE_ADDR + DDR_SIZE)
2201 +
2202 +#define LS1012A_PFE_RESET_WA /*
2203 + * PFE doesn't have a global reset; re-init
2204 + * must take care of a few things to make
2205 + * the PFE functional after reset
2206 + */
2207 +/* CBUS physical base address as seen by PE's. */
2208 +#define PFE_CBUS_PHYS_BASE_ADDR 0xc0000000
2209 +#define PFE_CBUS_PHYS_BASE_ADDR_FROM_PFE 0xc0000000
2212 +
2213 +#define DDR_PHYS_TO_PFE(p) (((unsigned long int)(p)) & 0x7FFFFFFF)
2214 +#define DDR_PFE_TO_PHYS(p) (((unsigned long int)(p)) | 0x80000000)
2215 +#define CBUS_PHYS_TO_PFE(p) (((p) - PFE_CBUS_PHYS_BASE_ADDR) + \
2216 + PFE_CBUS_PHYS_BASE_ADDR_FROM_PFE)
2217 +/* Translates to PFE address map */
2218 +
2219 +#define DDR_PHYS_TO_VIRT(p) (((p) - DDR_PHYS_BASE_ADDR) + DDR_BASE_ADDR)
2220 +#define DDR_VIRT_TO_PHYS(v) (((v) - DDR_BASE_ADDR) + DDR_PHYS_BASE_ADDR)
2221 +#define DDR_VIRT_TO_PFE(p) (DDR_PHYS_TO_PFE(DDR_VIRT_TO_PHYS(p)))
2222 +
2223 +#define CBUS_VIRT_TO_PFE(v) (((v) - CBUS_BASE_ADDR) + \
2224 + PFE_CBUS_PHYS_BASE_ADDR)
2225 +#define CBUS_PFE_TO_VIRT(p) (((unsigned long int)(p) - \
2226 + PFE_CBUS_PHYS_BASE_ADDR) + CBUS_BASE_ADDR)
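+
+/*
+ * Illustrative sketch, not part of the original driver: a host virtual DDR
+ * pointer handed to the PFE would be translated and sanity-checked as:
+ *
+ *	u32 pfe_addr = DDR_VIRT_TO_PFE(vaddr);
+ *
+ *	if (!IS_PHYS_DDR(pfe_addr, len))
+ *		return -EINVAL;
+ */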
2227 +
2228 +/* The below part of the code is used by the QoS control driver on the host */
2229 +#define TMU_APB_BASE_ADDR 0xc1000000 /* TMU base address as seen by
2230 + * PE's
2231 + */
2232 +
2233 +enum {
2234 + CLASS0_ID = 0,
2235 + CLASS1_ID,
2236 + CLASS2_ID,
2237 + CLASS3_ID,
2238 + CLASS4_ID,
2239 + CLASS5_ID,
2240 + TMU0_ID,
2241 + TMU1_ID,
2242 + TMU2_ID,
2243 + TMU3_ID,
2244 +#if !defined(CONFIG_FSL_PPFE_UTIL_DISABLED)
2245 + UTIL_ID,
2246 +#endif
2247 + MAX_PE
2248 +};
2249 +
2250 +#define CLASS_MASK (BIT(CLASS0_ID) | BIT(CLASS1_ID) |\
2251 + BIT(CLASS2_ID) | BIT(CLASS3_ID) |\
2252 + BIT(CLASS4_ID) | BIT(CLASS5_ID))
2253 +#define CLASS_MAX_ID CLASS5_ID
2254 +
2255 +#define TMU_MASK (BIT(TMU0_ID) | BIT(TMU1_ID) |\
2256 + BIT(TMU3_ID))
2257 +
2258 +#define TMU_MAX_ID TMU3_ID
2259 +
2260 +#if !defined(CONFIG_FSL_PPFE_UTIL_DISABLED)
2261 +#define UTIL_MASK BIT(UTIL_ID)
2262 +#endif
2263 +
2264 +struct pe_status {
2265 + u32 cpu_state;
2266 + u32 activity_counter;
2267 + u32 rx;
2268 + union {
2269 + u32 tx;
2270 + u32 tmu_qstatus;
2271 + };
2272 + u32 drop;
2273 +#if defined(CFG_PE_DEBUG)
2274 + u32 debug_indicator;
2275 + u32 debug[16];
2276 +#endif
2277 +} __aligned(16);
2278 +
2279 +struct pe_sync_mailbox {
2280 + u32 stop;
2281 + u32 stopped;
2282 +};
2283 +
2284 +/* Drop counter definitions */
2285 +
2286 +#define CLASS_NUM_DROP_COUNTERS 13
2287 +#define UTIL_NUM_DROP_COUNTERS 8
2288 +
2289 +/* PE information.
2290 + * Structure containing PE-specific information. It is used to create
2291 + * generic C functions common to all PE's.
2292 + * Before using the library functions this structure needs to be initialized
2293 + * with the virtual addresses of the different registers
2294 + * (according to the ARM MMU mapping). The default initialization supports a
2295 + * virtual == physical mapping.
2296 + */
2297 +struct pe_info {
2298 + u32 dmem_base_addr; /* PE's dmem base address */
2299 + u32 pmem_base_addr; /* PE's pmem base address */
2300 + u32 pmem_size; /* PE's pmem size */
2301 +
2302 + void *mem_access_wdata; /* PE's _MEM_ACCESS_WDATA register
2303 + * address
2304 + */
2305 + void *mem_access_addr; /* PE's _MEM_ACCESS_ADDR register
2306 + * address
2307 + */
2308 + void *mem_access_rdata; /* PE's _MEM_ACCESS_RDATA register
2309 + * address
2310 + */
2311 +};
2312 +
2313 +void pe_lmem_read(u32 *dst, u32 len, u32 offset);
2314 +void pe_lmem_write(u32 *src, u32 len, u32 offset);
2315 +
2316 +void pe_dmem_memcpy_to32(int id, u32 dst, const void *src, unsigned int len);
2317 +void pe_pmem_memcpy_to32(int id, u32 dst, const void *src, unsigned int len);
2318 +
2319 +u32 pe_pmem_read(int id, u32 addr, u8 size);
2320 +
2321 +void pe_dmem_write(int id, u32 val, u32 addr, u8 size);
2322 +u32 pe_dmem_read(int id, u32 addr, u8 size);
2323 +void class_pe_lmem_memcpy_to32(u32 dst, const void *src, unsigned int len);
2324 +void class_pe_lmem_memset(u32 dst, int val, unsigned int len);
2325 +void class_bus_write(u32 val, u32 addr, u8 size);
2326 +u32 class_bus_read(u32 addr, u8 size);
2327 +
2328 +#define class_bus_readl(addr) class_bus_read(addr, 4)
2329 +#define class_bus_readw(addr) class_bus_read(addr, 2)
2330 +#define class_bus_readb(addr) class_bus_read(addr, 1)
2331 +
2332 +#define class_bus_writel(val, addr) class_bus_write(val, addr, 4)
2333 +#define class_bus_writew(val, addr) class_bus_write(val, addr, 2)
2334 +#define class_bus_writeb(val, addr) class_bus_write(val, addr, 1)
2335 +
2336 +#define pe_dmem_readl(id, addr) pe_dmem_read(id, addr, 4)
2337 +#define pe_dmem_readw(id, addr) pe_dmem_read(id, addr, 2)
2338 +#define pe_dmem_readb(id, addr) pe_dmem_read(id, addr, 1)
2339 +
2340 +#define pe_dmem_writel(id, val, addr) pe_dmem_write(id, val, addr, 4)
2341 +#define pe_dmem_writew(id, val, addr) pe_dmem_write(id, val, addr, 2)
2342 +#define pe_dmem_writeb(id, val, addr) pe_dmem_write(id, val, addr, 1)
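+
+/*
+ * Illustrative sketch, not part of the original driver: PE memory holds
+ * values big endian (the mailbox accesses in pfe_ctrl.c assume this), so a
+ * DMEM word read from TMU0 would be converted with:
+ *
+ *	u32 val = be32_to_cpu(pe_dmem_readl(TMU0_ID, addr));
+ */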
2343 +
2345 +int pe_load_elf_section(int id, const void *data, struct elf32_shdr *shdr,
2346 + struct device *dev);
2347 +
2348 +void pfe_lib_init(void *cbus_base, void *ddr_base, unsigned long ddr_phys_base,
2349 + unsigned int ddr_size);
2350 +void bmu_init(void *base, struct BMU_CFG *cfg);
2351 +void bmu_reset(void *base);
2352 +void bmu_enable(void *base);
2353 +void bmu_disable(void *base);
2354 +void bmu_set_config(void *base, struct BMU_CFG *cfg);
2355 +
2356 +/*
2357 + * An enumerated type for loopback values. This can be one of three values: no
2358 + * loopback (normal operation), local loopback with the internal loopback
2359 + * module of the MAC, or PHY loopback through the external PHY.
2360 + */
2361 +#ifndef __MAC_LOOP_ENUM__
2362 +#define __MAC_LOOP_ENUM__
2363 +enum mac_loop {LB_NONE, LB_EXT, LB_LOCAL};
2364 +#endif
2365 +
2366 +void gemac_init(void *base, void *config);
2367 +void gemac_disable_rx_checksum_offload(void *base);
2368 +void gemac_enable_rx_checksum_offload(void *base);
2369 +void gemac_set_speed(void *base, enum mac_speed gem_speed);
2370 +void gemac_set_duplex(void *base, int duplex);
2371 +void gemac_set_mode(void *base, int mode);
2372 +void gemac_enable(void *base);
2373 +void gemac_tx_disable(void *base);
2374 +void gemac_tx_enable(void *base);
2375 +void gemac_disable(void *base);
2376 +void gemac_reset(void *base);
2377 +void gemac_set_address(void *base, struct spec_addr *addr);
2378 +struct spec_addr gemac_get_address(void *base);
2379 +void gemac_set_loop(void *base, enum mac_loop gem_loop);
2380 +void gemac_set_laddr1(void *base, struct pfe_mac_addr *address);
2381 +void gemac_set_laddr2(void *base, struct pfe_mac_addr *address);
2382 +void gemac_set_laddr3(void *base, struct pfe_mac_addr *address);
2383 +void gemac_set_laddr4(void *base, struct pfe_mac_addr *address);
2384 +void gemac_set_laddrN(void *base, struct pfe_mac_addr *address,
2385 + unsigned int entry_index);
2386 +void gemac_clear_laddr1(void *base);
2387 +void gemac_clear_laddr2(void *base);
2388 +void gemac_clear_laddr3(void *base);
2389 +void gemac_clear_laddr4(void *base);
2390 +void gemac_clear_laddrN(void *base, unsigned int entry_index);
2391 +struct pfe_mac_addr gemac_get_hash(void *base);
2392 +void gemac_set_hash(void *base, struct pfe_mac_addr *hash);
2393 +struct pfe_mac_addr gem_get_laddr1(void *base);
2394 +struct pfe_mac_addr gem_get_laddr2(void *base);
2395 +struct pfe_mac_addr gem_get_laddr3(void *base);
2396 +struct pfe_mac_addr gem_get_laddr4(void *base);
2397 +struct pfe_mac_addr gem_get_laddrN(void *base, unsigned int entry_index);
2398 +void gemac_set_config(void *base, struct gemac_cfg *cfg);
2399 +void gemac_allow_broadcast(void *base);
2400 +void gemac_no_broadcast(void *base);
2401 +void gemac_enable_1536_rx(void *base);
2402 +void gemac_disable_1536_rx(void *base);
2403 +void gemac_set_rx_max_fl(void *base, int mtu);
2404 +void gemac_enable_rx_jmb(void *base);
2405 +void gemac_disable_rx_jmb(void *base);
2406 +void gemac_enable_stacked_vlan(void *base);
2407 +void gemac_disable_stacked_vlan(void *base);
2408 +void gemac_enable_pause_rx(void *base);
2409 +void gemac_disable_pause_rx(void *base);
2410 +void gemac_enable_copy_all(void *base);
2411 +void gemac_disable_copy_all(void *base);
2412 +void gemac_set_bus_width(void *base, int width);
2413 +void gemac_set_wol(void *base, u32 wol_conf);
2414 +
2415 +void gpi_init(void *base, struct gpi_cfg *cfg);
2416 +void gpi_reset(void *base);
2417 +void gpi_enable(void *base);
2418 +void gpi_disable(void *base);
2419 +void gpi_set_config(void *base, struct gpi_cfg *cfg);
2420 +
2421 +void class_init(struct class_cfg *cfg);
2422 +void class_reset(void);
2423 +void class_enable(void);
2424 +void class_disable(void);
2425 +void class_set_config(struct class_cfg *cfg);
2426 +
2427 +void tmu_reset(void);
2428 +void tmu_init(struct tmu_cfg *cfg);
2429 +void tmu_enable(u32 pe_mask);
2430 +void tmu_disable(u32 pe_mask);
2431 +u32 tmu_qstatus(u32 if_id);
2432 +u32 tmu_pkts_processed(u32 if_id);
2433 +
2434 +void util_init(struct util_cfg *cfg);
2435 +void util_reset(void);
2436 +void util_enable(void);
2437 +void util_disable(void);
2438 +
2439 +void hif_init(void);
2440 +void hif_tx_enable(void);
2441 +void hif_tx_disable(void);
2442 +void hif_rx_enable(void);
2443 +void hif_rx_disable(void);
2444 +
2445 +/* Get chip revision level */
2448 +static inline unsigned int CHIP_REVISION(void)
2449 +{
2450 + /* For LS1012A always return 1 */
2451 + return 1;
2452 +}
2453 +
2454 +/* Start HIF RX DMA */
2457 +static inline void hif_rx_dma_start(void)
2458 +{
2459 + writel(HIF_CTRL_DMA_EN | HIF_CTRL_BDP_CH_START_WSTB, HIF_RX_CTRL);
2460 +}
2461 +
2462 +/* Start HIF TX DMA */
2465 +static inline void hif_tx_dma_start(void)
2466 +{
2467 + writel(HIF_CTRL_DMA_EN | HIF_CTRL_BDP_CH_START_WSTB, HIF_TX_CTRL);
2468 +}
2469 +
2470 +#endif /* _PFE_H_ */
2471 --- /dev/null
2472 +++ b/drivers/staging/fsl_ppfe/pfe_cdev.c
2473 @@ -0,0 +1,258 @@
2474 +// SPDX-License-Identifier: GPL-2.0+
2475 +/*
2476 + * Copyright 2018 NXP
2477 + */
2478 +
2479 +/* @pfe_cdev.c
2480 + * Dummy device representing the PFE userspace (US) interface.
2481 + * - used for interacting with the kernel layer for link status
2482 + */
2483 +
2484 +#include <linux/eventfd.h>
2485 +#include <linux/irqreturn.h>
2486 +#include <linux/io.h>
2487 +#include <asm/irq.h>
2488 +
2489 +#include "pfe_cdev.h"
2490 +#include "pfe_mod.h"
2491 +
2492 +static int pfe_majno;
2493 +static struct class *pfe_char_class;
2494 +static struct device *pfe_char_dev;
2495 +struct eventfd_ctx *g_trigger;
2496 +
2497 +struct pfe_shared_info link_states[PFE_CDEV_ETH_COUNT];
2498 +
2499 +static int pfe_cdev_open(struct inode *inp, struct file *fp)
2500 +{
2501 + pr_debug("PFE CDEV device opened.\n");
2502 + return 0;
2503 +}
2504 +
2505 +static ssize_t pfe_cdev_read(struct file *fp, char *buf,
2506 + size_t len, loff_t *off)
2507 +{
2508 + int ret = 0;
2509 +
2510 + pr_info("PFE CDEV attempt copying (%lu) size of user.\n",
2511 + sizeof(link_states));
2512 +
2513 + pr_debug("Dump link_state on screen before copy_to_user\n");
2514 + for (; ret < PFE_CDEV_ETH_COUNT; ret++) {
2515 + pr_debug("%u %u", link_states[ret].phy_id,
2516 + link_states[ret].state);
2517 + pr_debug("\n");
2518 + }
2519 +
2520 + /* Copy to user the value in buffer sized len */
2521 + ret = copy_to_user(buf, &link_states, sizeof(link_states));
2522 + if (ret != 0) {
2523 + pr_err("Failed to send (%d)bytes of (%lu) requested.\n",
2524 + ret, len);
2525 + return -EFAULT;
2526 + }
2527 +
2528 + /* offset set back to 0 as there is no contextual read offset */
2529 + *off = 0;
2530 + pr_debug("Read of (%lu) bytes performed.\n", sizeof(link_states));
2531 +
2532 + return sizeof(link_states);
2533 +}
2534 +
2535 +/**
2536 + * This function is for getting commands from userspace through a non-IOCTL
2537 + * channel. It can be used to configure the device.
2538 + * TODO: To be filled in future, if duplex communication with userspace is
2539 + * required.
2540 + */
2541 +static ssize_t pfe_cdev_write(struct file *fp, const char *buf,
2542 + size_t len, loff_t *off)
2543 +{
2544 + pr_info("PFE CDEV Write operation not supported!\n");
2545 +
2546 + return -EFAULT;
2547 +}
2548 +
2549 +static int pfe_cdev_release(struct inode *inp, struct file *fp)
2550 +{
2551 + if (g_trigger) {
2552 + free_irq(pfe->hif_irq, g_trigger);
2553 + eventfd_ctx_put(g_trigger);
2554 + g_trigger = NULL;
2555 + }
2556 +
2557 + pr_info("PFE_CDEV: Device successfully closed\n");
2558 + return 0;
2559 +}
2560 +
2561 +/*
2562 + * hif_us_isr -
2563 + * This ISR routine processes Rx done interrupts from the HIF hardware block
2564 + */
2565 +static irqreturn_t hif_us_isr(int irq, void *arg)
2566 +{
2567 + struct eventfd_ctx *trigger = (struct eventfd_ctx *)arg;
2568 + int int_status;
2569 + int int_enable_mask;
2570 +
2571 + /* Read HIF interrupt source register */
2572 + int_status = readl_relaxed(HIF_INT_SRC);
2573 + int_enable_mask = readl_relaxed(HIF_INT_ENABLE);
2574 +
2575 + if ((int_status & HIF_INT) == 0)
2576 + return IRQ_NONE;
2577 +
2578 + if (int_status & HIF_RXPKT_INT) {
2579 + int_enable_mask &= ~(HIF_RXPKT_INT);
2580 + /* Disable interrupts, they will be enabled after
2581 + * they are serviced
2582 + */
2583 + writel_relaxed(int_enable_mask, HIF_INT_ENABLE);
2584 +
2585 + eventfd_signal(trigger, 1);
2586 + }
2587 +
2588 + return IRQ_HANDLED;
2589 +}
2590 +
2591 +#define PFE_INTR_COAL_USECS 100
2592 +static long pfe_cdev_ioctl(struct file *fp, unsigned int cmd,
2593 + unsigned long arg)
2594 +{
2595 + int ret = -EFAULT;
2596 + int efd;
2597 + int __user *argp = (int __user *)arg;
2597 +
2598 + pr_debug("PFE CDEV IOCTL Called with cmd=(%u)\n", cmd);
2599 +
2600 + switch (cmd) {
2601 + case PFE_CDEV_ETH0_STATE_GET:
2602 + /* Return an unsigned int (link state) for ETH0 */
2603 + if (put_user((int)link_states[0].state, argp))
2604 + break;
2605 + pr_debug("Returning state=%u for ETH0\n", link_states[0].state);
2605 + ret = 0;
2606 + break;
2607 + case PFE_CDEV_ETH1_STATE_GET:
2608 + /* Return an unsigned int (link state) for ETH1 */
2609 + if (put_user((int)link_states[1].state, argp))
2610 + break;
2611 + pr_debug("Returning state=%u for ETH1\n", link_states[1].state);
2611 + ret = 0;
2612 + break;
2613 + case PFE_CDEV_HIF_INTR_EN:
2614 + /* Return success/failure */
2615 + if (get_user(efd, argp))
2616 + break;
2617 + g_trigger = eventfd_ctx_fdget(efd);
2616 + if (IS_ERR(g_trigger))
2617 + return PTR_ERR(g_trigger);
2618 + ret = request_irq(pfe->hif_irq, hif_us_isr, 0, "pfe_hif",
2619 + g_trigger);
2620 + if (ret) {
2621 + pr_err("%s: failed to get the hif IRQ = %d\n",
2622 + __func__, pfe->hif_irq);
2623 + eventfd_ctx_put(g_trigger);
2624 + g_trigger = NULL;
2625 + break;
2626 + }
2626 + writel((PFE_INTR_COAL_USECS * (pfe->ctrl.sys_clk / 1000)) |
2627 + HIF_INT_COAL_ENABLE, HIF_INT_COAL);
2628 +
2629 + pr_debug("request_irq for hif interrupt: %d\n", pfe->hif_irq);
2630 + ret = 0;
2631 + break;
2632 + default:
2633 + pr_info("Unsupport cmd (%d) for PFE CDEV.\n", cmd);
2634 + break;
2635 + }
2636 +
2637 + return ret;
2638 +}
2639 +
2640 +static unsigned int pfe_cdev_poll(struct file *fp,
2641 + struct poll_table_struct *wait)
2642 +{
2643 + pr_info("PFE CDEV poll method not supported\n");
2644 + return 0;
2645 +}
2646 +
2647 +static const struct file_operations pfe_cdev_fops = {
2648 + .open = pfe_cdev_open,
2649 + .read = pfe_cdev_read,
2650 + .write = pfe_cdev_write,
2651 + .release = pfe_cdev_release,
2652 + .unlocked_ioctl = pfe_cdev_ioctl,
2653 + .poll = pfe_cdev_poll,
2654 +};
2655 +
2656 +int pfe_cdev_init(void)
2657 +{
2658 + int ret;
2659 +
2660 + pr_debug("PFE CDEV initialization begin\n");
2661 +
2662 + /* Register the major number for the device */
2663 + pfe_majno = register_chrdev(0, PFE_CDEV_NAME, &pfe_cdev_fops);
2664 + if (pfe_majno < 0) {
2665 + pr_err("Unable to register PFE CDEV. PFE CDEV not available\n");
2666 + ret = pfe_majno;
2667 + goto cleanup;
2668 + }
2669 +
2670 + pr_debug("PFE CDEV assigned major number: %d\n", pfe_majno);
2671 +
2672 + /* Register the class for the device */
2673 + pfe_char_class = class_create(THIS_MODULE, PFE_CLASS_NAME);
2674 + if (IS_ERR(pfe_char_class)) {
2675 + pr_err(
2676 + "Failed to init class for PFE CDEV. PFE CDEV not available.\n");
2677 + ret = PTR_ERR(pfe_char_class);
2678 + goto cleanup;
2679 + }
2680 +
2681 + pr_debug("PFE CDEV Class created successfully.\n");
2682 +
2683 + /* Create the device without any parent and without any callback data */
2684 + pfe_char_dev = device_create(pfe_char_class, NULL,
2685 + MKDEV(pfe_majno, 0), NULL,
2686 + PFE_CDEV_NAME);
2687 + if (IS_ERR(pfe_char_dev)) {
2688 + pr_err("Unable to PFE CDEV device. PFE CDEV not available.\n");
2689 + ret = PTR_ERR(pfe_char_dev);
2690 + goto cleanup;
2691 + }
2692 +
2693 + /* Information structure being shared with the userspace */
2694 + memset(link_states, 0, sizeof(struct pfe_shared_info) *
2695 + PFE_CDEV_ETH_COUNT);
2696 +
2697 + pr_info("PFE CDEV created: %s\n", PFE_CDEV_NAME);
2698 +
2699 + ret = 0;
2700 + return ret;
2701 +
2702 +cleanup:
2703 + if (!IS_ERR(pfe_char_class))
2704 + class_destroy(pfe_char_class);
2705 +
2706 + if (pfe_majno > 0)
2707 + unregister_chrdev(pfe_majno, PFE_CDEV_NAME);
2708 +
2709 + return ret;
2710 +}
2711 +
2712 +void pfe_cdev_exit(void)
2713 +{
2714 + if (!IS_ERR(pfe_char_dev))
2715 + device_destroy(pfe_char_class, MKDEV(pfe_majno, 0));
2716 +
2717 + if (!IS_ERR(pfe_char_class)) {
2718 + class_unregister(pfe_char_class);
2719 + class_destroy(pfe_char_class);
2720 + }
2721 +
2722 + if (pfe_majno > 0)
2723 + unregister_chrdev(pfe_majno, PFE_CDEV_NAME);
2724 +
2725 + /* reset the variables */
2726 + pfe_majno = 0;
2727 + pfe_char_class = NULL;
2728 + pfe_char_dev = NULL;
2729 +
2730 + pr_info("PFE CDEV Removed.\n");
2731 +}
2732 --- /dev/null
2733 +++ b/drivers/staging/fsl_ppfe/pfe_cdev.h
2734 @@ -0,0 +1,41 @@
2735 +/* SPDX-License-Identifier: GPL-2.0+ */
2736 +/*
2737 + * Copyright 2018 NXP
2738 + */
2739 +
2740 +#ifndef _PFE_CDEV_H_
2741 +#define _PFE_CDEV_H_
2742 +
2743 +#include <linux/init.h>
2744 +#include <linux/device.h>
2745 +#include <linux/err.h>
2746 +#include <linux/kernel.h>
2747 +#include <linux/fs.h>
2748 +#include <linux/uaccess.h>
2749 +#include <linux/poll.h>
2750 +
2751 +#define PFE_CDEV_NAME "pfe_us_cdev"
2752 +#define PFE_CLASS_NAME "ppfe_us"
2753 +
2754 +/* Extracted from ls1012a_pfe_platform_data: there are 3 interfaces which are
2755 + * supported by the PFE driver. Should be updated if the number of eth devices
2756 + * changes.
2757 + */
2758 +#define PFE_CDEV_ETH_COUNT 3
2759 +
2760 +struct pfe_shared_info {
2761 + uint32_t phy_id; /* Link phy ID */
2762 + uint8_t state; /* Link state: either 0 or 1 */
2763 +};
2764 +
2765 +extern struct pfe_shared_info link_states[PFE_CDEV_ETH_COUNT];
2766 +
2767 +/* IOCTL Commands */
2768 +#define PFE_CDEV_ETH0_STATE_GET _IOR('R', 0, int)
2769 +#define PFE_CDEV_ETH1_STATE_GET _IOR('R', 1, int)
2770 +#define PFE_CDEV_HIF_INTR_EN _IOWR('R', 2, int)
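+
+/*
+ * Illustrative userspace sketch, not part of the original driver (the device
+ * node is created as /dev/pfe_us_cdev by pfe_cdev_init()):
+ *
+ *	int state, fd = open("/dev/pfe_us_cdev", O_RDONLY);
+ *
+ *	if (fd >= 0 && !ioctl(fd, PFE_CDEV_ETH0_STATE_GET, &state))
+ *		printf("eth0 link %s\n", state ? "up" : "down");
+ */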
2771 +
2772 +int pfe_cdev_init(void);
2773 +void pfe_cdev_exit(void);
2774 +
2775 +#endif /* _PFE_CDEV_H_ */
2776 --- /dev/null
2777 +++ b/drivers/staging/fsl_ppfe/pfe_ctrl.c
2778 @@ -0,0 +1,226 @@
2779 +// SPDX-License-Identifier: GPL-2.0+
2780 +/*
2781 + * Copyright 2015-2016 Freescale Semiconductor, Inc.
2782 + * Copyright 2017 NXP
2783 + */
2784 +
2785 +#include <linux/kernel.h>
2786 +#include <linux/sched.h>
2787 +#include <linux/module.h>
2788 +#include <linux/list.h>
2789 +#include <linux/kthread.h>
2790 +
2791 +#include "pfe_mod.h"
2792 +#include "pfe_ctrl.h"
2793 +
2794 +#define TIMEOUT_MS 1000
2795 +
2796 +int relax(unsigned long end)
2797 +{
2798 + if (time_after(jiffies, end)) {
2799 + if (time_after(jiffies, end + (TIMEOUT_MS * HZ) / 1000))
2800 + return -1;
2801 +
2802 + if (need_resched())
2803 + schedule();
2804 + }
2805 +
2806 + return 0;
2807 +}
2808 +
2809 +void pfe_ctrl_suspend(struct pfe_ctrl *ctrl)
2810 +{
2811 + int id;
2812 +
2813 + mutex_lock(&ctrl->mutex);
2814 +
2815 + for (id = CLASS0_ID; id <= CLASS_MAX_ID; id++)
2816 + pe_dmem_write(id, cpu_to_be32(0x1), CLASS_DM_RESUME, 4);
2817 +
2818 + for (id = TMU0_ID; id <= TMU_MAX_ID; id++) {
2819 + if (id == TMU2_ID)
2820 + continue;
2821 + pe_dmem_write(id, cpu_to_be32(0x1), TMU_DM_RESUME, 4);
2822 + }
2823 +
2824 +#if !defined(CONFIG_FSL_PPFE_UTIL_DISABLED)
2825 + pe_dmem_write(UTIL_ID, cpu_to_be32(0x1), UTIL_DM_RESUME, 4);
2826 +#endif
2827 + mutex_unlock(&ctrl->mutex);
2828 +}
2829 +
2830 +void pfe_ctrl_resume(struct pfe_ctrl *ctrl)
2831 +{
2832 + int pe_mask = CLASS_MASK | TMU_MASK;
2833 +
2834 +#if !defined(CONFIG_FSL_PPFE_UTIL_DISABLED)
2835 + pe_mask |= UTIL_MASK;
2836 +#endif
2837 + mutex_lock(&ctrl->mutex);
2838 + pe_start(&pfe->ctrl, pe_mask);
2839 + mutex_unlock(&ctrl->mutex);
2840 +}
2841 +
2842 +/* PE sync stop.
2843 + * Stops packet processing for a list of PE's (specified using a bitmask).
2844 + * The caller must hold ctrl->mutex.
2845 + *
2846 + * @param ctrl Control context
2847 + * @param pe_mask Mask of PE id's to stop
2848 + *
2849 + */
2850 +int pe_sync_stop(struct pfe_ctrl *ctrl, int pe_mask)
2851 +{
2852 + struct pe_sync_mailbox *mbox;
2853 + int pe_stopped = 0;
2854 + unsigned long end = jiffies + 2;
2855 + int i;
2856 +
2857 + pe_mask &= 0x2FF; /*Exclude Util + TMU2 */
2858 +
2859 + for (i = 0; i < MAX_PE; i++)
2860 + if (pe_mask & (1 << i)) {
2861 + mbox = (void *)ctrl->sync_mailbox_baseaddr[i];
2862 +
2863 + pe_dmem_write(i, cpu_to_be32(0x1), (unsigned
2864 + long)&mbox->stop, 4);
2865 + }
2866 +
2867 + while (pe_stopped != pe_mask) {
2868 + for (i = 0; i < MAX_PE; i++)
2869 + if ((pe_mask & (1 << i)) && !(pe_stopped & (1 << i))) {
2870 + mbox = (void *)ctrl->sync_mailbox_baseaddr[i];
2871 +
2872 + if (pe_dmem_read(i, (unsigned
2873 + long)&mbox->stopped, 4) &
2874 + cpu_to_be32(0x1))
2875 + pe_stopped |= (1 << i);
2876 + }
2877 +
2878 + if (relax(end) < 0)
2879 + goto err;
2880 + }
2881 +
2882 + return 0;
2883 +
2884 +err:
2885 + pr_err("%s: timeout, %x %x\n", __func__, pe_mask, pe_stopped);
2886 +
2887 + for (i = 0; i < MAX_PE; i++)
2888 + if (pe_mask & (1 << i)) {
2889 + mbox = (void *)ctrl->sync_mailbox_baseaddr[i];
2890 +
2891 + pe_dmem_write(i, cpu_to_be32(0x0), (unsigned
2892 + long)&mbox->stop, 4);
2893 + }
2894 +
2895 + return -EIO;
2896 +}
2897 +
2898 +/* PE start.
2899 + * Starts packet processing for a list of PE's (specified using a bitmask).
2900 + * The caller must hold ctrl->mutex.
2901 + *
2902 + * @param ctrl Control context
2903 + * @param pe_mask Mask of PE id's to start
2904 + *
2905 + */
2906 +void pe_start(struct pfe_ctrl *ctrl, int pe_mask)
2907 +{
2908 + struct pe_sync_mailbox *mbox;
2909 + int i;
2910 +
2911 + for (i = 0; i < MAX_PE; i++)
2912 + if (pe_mask & (1 << i)) {
2913 + mbox = (void *)ctrl->sync_mailbox_baseaddr[i];
2914 +
2915 + pe_dmem_write(i, cpu_to_be32(0x0), (unsigned
2916 + long)&mbox->stop, 4);
2917 + }
2918 +}
2919 +
2920 +/* This function will ensure all PEs are put into idle state */
2921 +int pe_reset_all(struct pfe_ctrl *ctrl)
2922 +{
2923 + struct pe_sync_mailbox *mbox;
2924 + int pe_stopped = 0;
2925 + unsigned long end = jiffies + 2;
2926 + int i;
2927 + int pe_mask = CLASS_MASK | TMU_MASK;
2928 +
2929 +#if !defined(CONFIG_FSL_PPFE_UTIL_DISABLED)
2930 + pe_mask |= UTIL_MASK;
2931 +#endif
2932 +
2933 + for (i = 0; i < MAX_PE; i++)
2934 + if (pe_mask & (1 << i)) {
2935 + mbox = (void *)ctrl->sync_mailbox_baseaddr[i];
2936 +
2937 + pe_dmem_write(i, cpu_to_be32(0x2), (unsigned
2938 + long)&mbox->stop, 4);
2939 + }
2940 +
2941 + while (pe_stopped != pe_mask) {
2942 + for (i = 0; i < MAX_PE; i++)
2943 + if ((pe_mask & (1 << i)) && !(pe_stopped & (1 << i))) {
2944 + mbox = (void *)ctrl->sync_mailbox_baseaddr[i];
2945 +
2946 + if (pe_dmem_read(i, (unsigned long)
2947 + &mbox->stopped, 4) &
2948 + cpu_to_be32(0x1))
2949 + pe_stopped |= (1 << i);
2950 + }
2951 +
2952 + if (relax(end) < 0)
2953 + goto err;
2954 + }
2955 +
2956 + return 0;
2957 +
2958 +err:
2959 + pr_err("%s: timeout, %x %x\n", __func__, pe_mask, pe_stopped);
2960 + return -EIO;
2961 +}
2962 +
2963 +int pfe_ctrl_init(struct pfe *pfe)
2964 +{
2965 + struct pfe_ctrl *ctrl = &pfe->ctrl;
2966 + int id;
2967 +
2968 + pr_info("%s\n", __func__);
2969 +
2970 + mutex_init(&ctrl->mutex);
2971 + spin_lock_init(&ctrl->lock);
2972 +
2973 + for (id = CLASS0_ID; id <= CLASS_MAX_ID; id++) {
2974 + ctrl->sync_mailbox_baseaddr[id] = CLASS_DM_SYNC_MBOX;
2975 + ctrl->msg_mailbox_baseaddr[id] = CLASS_DM_MSG_MBOX;
2976 + }
2977 +
2978 + for (id = TMU0_ID; id <= TMU_MAX_ID; id++) {
2979 + if (id == TMU2_ID)
2980 + continue;
2981 + ctrl->sync_mailbox_baseaddr[id] = TMU_DM_SYNC_MBOX;
2982 + ctrl->msg_mailbox_baseaddr[id] = TMU_DM_MSG_MBOX;
2983 + }
2984 +
2985 +#if !defined(CONFIG_FSL_PPFE_UTIL_DISABLED)
2986 + ctrl->sync_mailbox_baseaddr[UTIL_ID] = UTIL_DM_SYNC_MBOX;
2987 + ctrl->msg_mailbox_baseaddr[UTIL_ID] = UTIL_DM_MSG_MBOX;
2988 +#endif
2989 +
2990 + ctrl->hash_array_baseaddr = pfe->ddr_baseaddr + ROUTE_TABLE_BASEADDR;
2991 + ctrl->hash_array_phys_baseaddr = pfe->ddr_phys_baseaddr +
2992 + ROUTE_TABLE_BASEADDR;
2993 +
2994 + ctrl->dev = pfe->dev;
2995 +
2996 + pr_info("%s finished\n", __func__);
2997 +
2998 + return 0;
2999 +}
3000 +
3001 +void pfe_ctrl_exit(struct pfe *pfe)
3002 +{
3003 + pr_info("%s\n", __func__);
3004 +}
3005 --- /dev/null
3006 +++ b/drivers/staging/fsl_ppfe/pfe_ctrl.h
3007 @@ -0,0 +1,100 @@
3008 +/* SPDX-License-Identifier: GPL-2.0+ */
3009 +/*
3010 + * Copyright 2015-2016 Freescale Semiconductor, Inc.
3011 + * Copyright 2017 NXP
3012 + */
3013 +
3014 +#ifndef _PFE_CTRL_H_
3015 +#define _PFE_CTRL_H_
3016 +
3017 +#include <linux/dmapool.h>
3018 +
3019 +#include "pfe/pfe.h"
3020 +
3021 +#define DMA_BUF_SIZE_128	0x80	/* enough for 1 conntrack */
3022 +#define DMA_BUF_SIZE_256	0x100
3023 +/* enough for 2 conntracks, 1 bridge entry or 1 multicast entry */
3024 +#define DMA_BUF_SIZE_512	0x200
3025 +/* 512-byte dma allocated buffers used by the rtp relay feature */
3026 +#define DMA_BUF_MIN_ALIGNMENT	8
3027 +#define DMA_BUF_BOUNDARY	(4 * 1024)
3028 +/* bursts cannot cross a 4k boundary */
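+
+/*
+ * These sizes and constraints are intended for dma_pool_create()-backed
+ * pools (see the dma_pool members of struct pfe_ctrl below); e.g. a
+ * hypothetical pool created as dma_pool_create("pfe", dev,
+ * DMA_BUF_SIZE_128, DMA_BUF_MIN_ALIGNMENT, DMA_BUF_BOUNDARY) hands out
+ * buffers that never cross a 4 KiB boundary.
+ */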
3029 +
3030 +#define CMD_TX_ENABLE 0x0501
3031 +#define CMD_TX_DISABLE 0x0502
3032 +
3033 +#define CMD_RX_LRO 0x0011
3034 +#define CMD_PKTCAP_ENABLE 0x0d01
3035 +#define CMD_QM_EXPT_RATE 0x020c
3036 +
3037 +#define CLASS_DM_SH_STATIC (0x800)
3038 +#define CLASS_DM_CPU_TICKS (CLASS_DM_SH_STATIC)
3039 +#define CLASS_DM_SYNC_MBOX (0x808)
3040 +#define CLASS_DM_MSG_MBOX (0x810)
3041 +#define CLASS_DM_DROP_CNTR (0x820)
3042 +#define CLASS_DM_RESUME (0x854)
3043 +#define CLASS_DM_PESTATUS (0x860)
3044 +#define CLASS_DM_CRC_VALIDATED (0x14b0)
3045 +
3046 +#define TMU_DM_SH_STATIC (0x80)
3047 +#define TMU_DM_CPU_TICKS (TMU_DM_SH_STATIC)
3048 +#define TMU_DM_SYNC_MBOX (0x88)
3049 +#define TMU_DM_MSG_MBOX (0x90)
3050 +#define TMU_DM_RESUME (0xA0)
3051 +#define TMU_DM_PESTATUS (0xB0)
3052 +#define TMU_DM_CONTEXT (0x300)
3053 +#define TMU_DM_TX_TRANS (0x480)
3054 +
3055 +#define UTIL_DM_SH_STATIC (0x0)
3056 +#define UTIL_DM_CPU_TICKS (UTIL_DM_SH_STATIC)
3057 +#define UTIL_DM_SYNC_MBOX (0x8)
3058 +#define UTIL_DM_MSG_MBOX (0x10)
3059 +#define UTIL_DM_DROP_CNTR (0x20)
3060 +#define UTIL_DM_RESUME (0x40)
3061 +#define UTIL_DM_PESTATUS (0x50)
3062 +
3063 +struct pfe_ctrl {
3064 + struct mutex mutex; /* to serialize pfe control access */
3065 + spinlock_t lock;
3066 +
3067 + void *dma_pool;
3068 + void *dma_pool_512;
3069 + void *dma_pool_128;
3070 +
3071 + struct device *dev;
3072 +
3073 + void *hash_array_baseaddr; /*
3074 + * Virtual base address of
3075 + * the conntrack hash array
3076 + */
3077 + unsigned long hash_array_phys_baseaddr; /*
3078 + * Physical base address of
3079 + * the conntrack hash array
3080 + */
3081 +
3082 + int (*event_cb)(u16, u16, u16*);
3083 +
3084 + unsigned long sync_mailbox_baseaddr[MAX_PE]; /*
3085 + * Sync mailbox PFE
3086 + * internal address,
3087 + * initialized
3088 + * when parsing elf images
3089 + */
3090 + unsigned long msg_mailbox_baseaddr[MAX_PE]; /*
3091 + * Msg mailbox PFE internal
3092 + * address, initialized
3093 + * when parsing elf images
3094 + */
3095 + unsigned int sys_clk; /* AXI clock value, in KHz */
3096 +};
3097 +
3098 +int pfe_ctrl_init(struct pfe *pfe);
3099 +void pfe_ctrl_exit(struct pfe *pfe);
3100 +int pe_sync_stop(struct pfe_ctrl *ctrl, int pe_mask);
3101 +void pe_start(struct pfe_ctrl *ctrl, int pe_mask);
3102 +int pe_reset_all(struct pfe_ctrl *ctrl);
3103 +void pfe_ctrl_suspend(struct pfe_ctrl *ctrl);
3104 +void pfe_ctrl_resume(struct pfe_ctrl *ctrl);
3105 +int relax(unsigned long end);
3106 +
3107 +#endif /* _PFE_CTRL_H_ */
3108 --- /dev/null
3109 +++ b/drivers/staging/fsl_ppfe/pfe_debugfs.c
3110 @@ -0,0 +1,99 @@
3111 +// SPDX-License-Identifier: GPL-2.0+
3112 +/*
3113 + * Copyright 2015-2016 Freescale Semiconductor, Inc.
3114 + * Copyright 2017 NXP
3115 + */
3116 +
3117 +#include <linux/module.h>
3118 +#include <linux/debugfs.h>
3119 +#include <linux/platform_device.h>
3120 +
3121 +#include "pfe_mod.h"
3122 +
3123 +static int dmem_show(struct seq_file *s, void *unused)
3124 +{
3125 + u32 dmem_addr, val;
3126 + int id = (long int)s->private;
3127 + int i;
3128 +
3129 + for (dmem_addr = 0; dmem_addr < CLASS_DMEM_SIZE; dmem_addr += 8 * 4) {
3130 + seq_printf(s, "%04x:", dmem_addr);
3131 +
3132 + for (i = 0; i < 8; i++) {
3133 + val = pe_dmem_read(id, dmem_addr + i * 4, 4);
3134 + seq_printf(s, " %02x %02x %02x %02x", val & 0xff,
3135 + (val >> 8) & 0xff, (val >> 16) & 0xff,
3136 + (val >> 24) & 0xff);
3137 + }
3138 +
3139 + seq_puts(s, "\n");
3140 + }
3141 +
3142 + return 0;
3143 +}
3144 +
3145 +static int dmem_open(struct inode *inode, struct file *file)
3146 +{
3147 + return single_open(file, dmem_show, inode->i_private);
3148 +}
3149 +
3150 +static const struct file_operations dmem_fops = {
3151 + .open = dmem_open,
3152 + .read = seq_read,
3153 + .llseek = seq_lseek,
3154 + .release = single_release,
3155 +};
3156 +
3157 +int pfe_debugfs_init(struct pfe *pfe)
3158 +{
3159 + struct dentry *d;
3160 +
3161 + pr_info("%s\n", __func__);
3162 +
3163 + pfe->dentry = debugfs_create_dir("pfe", NULL);
3164 + if (IS_ERR_OR_NULL(pfe->dentry))
3165 + goto err_dir;
3166 +
3167 + d = debugfs_create_file("pe0_dmem", 0444, pfe->dentry, (void *)0,
3168 + &dmem_fops);
3169 + if (IS_ERR_OR_NULL(d))
3170 + goto err_pe;
3171 +
3172 + d = debugfs_create_file("pe1_dmem", 0444, pfe->dentry, (void *)1,
3173 + &dmem_fops);
3174 + if (IS_ERR_OR_NULL(d))
3175 + goto err_pe;
3176 +
3177 + d = debugfs_create_file("pe2_dmem", 0444, pfe->dentry, (void *)2,
3178 + &dmem_fops);
3179 + if (IS_ERR_OR_NULL(d))
3180 + goto err_pe;
3181 +
3182 + d = debugfs_create_file("pe3_dmem", 0444, pfe->dentry, (void *)3,
3183 + &dmem_fops);
3184 + if (IS_ERR_OR_NULL(d))
3185 + goto err_pe;
3186 +
3187 + d = debugfs_create_file("pe4_dmem", 0444, pfe->dentry, (void *)4,
3188 + &dmem_fops);
3189 + if (IS_ERR_OR_NULL(d))
3190 + goto err_pe;
3191 +
3192 + d = debugfs_create_file("pe5_dmem", 0444, pfe->dentry, (void *)5,
3193 + &dmem_fops);
3194 + if (IS_ERR_OR_NULL(d))
3195 + goto err_pe;
3196 +
3197 + return 0;
3198 +
3199 +err_pe:
3200 + debugfs_remove_recursive(pfe->dentry);
3201 +
3202 +err_dir:
3203 + return -1;
3204 +}
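+
+/*
+ * With debugfs mounted at the usual location, each PE's data memory can
+ * be inspected with e.g. "cat /sys/kernel/debug/pfe/pe0_dmem".
+ */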
3205 +
3206 +void pfe_debugfs_exit(struct pfe *pfe)
3207 +{
3208 + debugfs_remove_recursive(pfe->dentry);
3209 +}
3210 --- /dev/null
3211 +++ b/drivers/staging/fsl_ppfe/pfe_debugfs.h
3212 @@ -0,0 +1,13 @@
3213 +/* SPDX-License-Identifier: GPL-2.0+ */
3214 +/*
3215 + * Copyright 2015-2016 Freescale Semiconductor, Inc.
3216 + * Copyright 2017 NXP
3217 + */
3218 +
3219 +#ifndef _PFE_DEBUGFS_H_
3220 +#define _PFE_DEBUGFS_H_
3221 +
3222 +int pfe_debugfs_init(struct pfe *pfe);
3223 +void pfe_debugfs_exit(struct pfe *pfe);
3224 +
3225 +#endif /* _PFE_DEBUGFS_H_ */
3226 --- /dev/null
3227 +++ b/drivers/staging/fsl_ppfe/pfe_eth.c
3228 @@ -0,0 +1,2591 @@
3229 +// SPDX-License-Identifier: GPL-2.0+
3230 +/*
3231 + * Copyright 2015-2016 Freescale Semiconductor, Inc.
3232 + * Copyright 2017 NXP
3233 + */
3234 +
3235 +/* pfe_eth.c
3236 + * Ethernet driver handling the exception path for the PFE.
3237 + * - uses HIF functions to send/receive packets.
3238 + * - uses ctrl functions to start/stop interfaces.
3239 + * - uses direct register accesses to control PHY operation.
3240 + */
3241 +#include <linux/version.h>
3242 +#include <linux/kernel.h>
3243 +#include <linux/interrupt.h>
3244 +#include <linux/dma-mapping.h>
3245 +#include <linux/dmapool.h>
3246 +#include <linux/netdevice.h>
3247 +#include <linux/etherdevice.h>
3248 +#include <linux/ethtool.h>
3249 +#include <linux/mii.h>
3250 +#include <linux/phy.h>
3251 +#include <linux/timer.h>
3252 +#include <linux/hrtimer.h>
3253 +#include <linux/platform_device.h>
3254 +
3255 +#include <net/ip.h>
3256 +#include <net/sock.h>
3257 +
3258 +#include <linux/of.h>
3259 +#include <linux/of_mdio.h>
3260 +
3261 +#include <linux/io.h>
3262 +#include <asm/irq.h>
3263 +#include <linux/delay.h>
3264 +#include <linux/regmap.h>
3265 +#include <linux/i2c.h>
3266 +#include <linux/sys_soc.h>
3267 +
3268 +#if defined(CONFIG_NF_CONNTRACK_MARK)
3269 +#include <net/netfilter/nf_conntrack.h>
3270 +#endif
3271 +
3272 +#include "pfe_mod.h"
3273 +#include "pfe_eth.h"
3274 +#include "pfe_cdev.h"
3275 +
3276 +#define LS1012A_REV_1_0 0x87040010
3277 +
3278 +bool pfe_use_old_dts_phy;
3279 +bool pfe_errata_a010897;
3280 +
3281 +static void *cbus_emac_base[3];
3282 +static void *cbus_gpi_base[3];
3283 +
3284 +/* Forward Declaration */
3285 +static void pfe_eth_exit_one(struct pfe_eth_priv_s *priv);
3286 +static void pfe_eth_flush_tx(struct pfe_eth_priv_s *priv);
3287 +static void pfe_eth_flush_txQ(struct pfe_eth_priv_s *priv, int tx_q_num, int
3288 + from_tx, int n_desc);
3289 +
3290 +/* MDIO registers */
3291 +#define MDIO_SGMII_CR 0x00
3292 +#define MDIO_SGMII_SR 0x01
3293 +#define MDIO_SGMII_DEV_ABIL_SGMII 0x04
3294 +#define MDIO_SGMII_LINK_TMR_L 0x12
3295 +#define MDIO_SGMII_LINK_TMR_H 0x13
3296 +#define MDIO_SGMII_IF_MODE 0x14
3297 +
3298 +/* SGMII Control defines */
3299 +#define SGMII_CR_RST 0x8000
3300 +#define SGMII_CR_AN_EN 0x1000
3301 +#define SGMII_CR_RESTART_AN 0x0200
3302 +#define SGMII_CR_FD 0x0100
3303 +#define SGMII_CR_SPEED_SEL1_1G 0x0040
3304 +#define SGMII_CR_DEF_VAL (SGMII_CR_AN_EN | SGMII_CR_FD | \
3305 + SGMII_CR_SPEED_SEL1_1G)
3306 +
3307 +/* SGMII IF Mode */
3308 +#define SGMII_DUPLEX_HALF 0x10
3309 +#define SGMII_SPEED_10MBPS 0x00
3310 +#define SGMII_SPEED_100MBPS 0x04
3311 +#define SGMII_SPEED_1GBPS 0x08
3312 +#define SGMII_USE_SGMII_AN 0x02
3313 +#define SGMII_EN 0x01
3314 +
3315 +/* SGMII Device Ability for SGMII */
3316 +#define SGMII_DEV_ABIL_ACK 0x4000
3317 +#define SGMII_DEV_ABIL_EEE_CLK_STP_EN 0x0100
3318 +#define SGMII_DEV_ABIL_SGMII 0x0001
3319 +
3320 +unsigned int gemac_regs[] = {
3321 + 0x0004, /* Interrupt event */
3322 + 0x0008, /* Interrupt mask */
3323 + 0x0024, /* Ethernet control */
3324 + 0x0064, /* MIB Control/Status */
3325 + 0x0084, /* Receive control/status */
3326 + 0x00C4, /* Transmit control */
3327 + 0x00E4, /* Physical address low */
3328 + 0x00E8, /* Physical address high */
3329 + 0x0144, /* Transmit FIFO Watermark and Store and Forward Control*/
3330 + 0x0190, /* Receive FIFO Section Full Threshold */
3331 + 0x01A0, /* Transmit FIFO Section Empty Threshold */
3332 + 0x01B0, /* Frame Truncation Length */
3333 +};
3334 +
3335 +const struct soc_device_attribute ls1012a_rev1_soc_attr[] = {
3336 + { .family = "QorIQ LS1012A",
3337 + .soc_id = "svr:0x87040010",
3338 + .revision = "1.0",
3339 + .data = NULL },
3340 + { },
3341 +};
3342 +
3343 +/********************************************************************/
3344 +/* SYSFS INTERFACE */
3345 +/********************************************************************/
3346 +
3347 +#ifdef PFE_ETH_NAPI_STATS
3348 +/*
3349 + * pfe_eth_show_napi_stats
3350 + */
3351 +static ssize_t pfe_eth_show_napi_stats(struct device *dev,
3352 + struct device_attribute *attr,
3353 + char *buf)
3354 +{
3355 + struct pfe_eth_priv_s *priv = netdev_priv(to_net_dev(dev));
3356 + ssize_t len = 0;
3357 +
3358 + len += sprintf(buf + len, "sched: %u\n",
3359 + priv->napi_counters[NAPI_SCHED_COUNT]);
3360 + len += sprintf(buf + len, "poll: %u\n",
3361 + priv->napi_counters[NAPI_POLL_COUNT]);
3362 + len += sprintf(buf + len, "packet: %u\n",
3363 + priv->napi_counters[NAPI_PACKET_COUNT]);
3364 + len += sprintf(buf + len, "budget: %u\n",
3365 + priv->napi_counters[NAPI_FULL_BUDGET_COUNT]);
3366 + len += sprintf(buf + len, "desc: %u\n",
3367 + priv->napi_counters[NAPI_DESC_COUNT]);
3368 +
3369 + return len;
3370 +}
3371 +
3372 +/*
3373 + * pfe_eth_set_napi_stats
3374 + */
3375 +static ssize_t pfe_eth_set_napi_stats(struct device *dev,
3376 + struct device_attribute *attr,
3377 + const char *buf, size_t count)
3378 +{
3379 + struct pfe_eth_priv_s *priv = netdev_priv(to_net_dev(dev));
3380 +
3381 + memset(priv->napi_counters, 0, sizeof(priv->napi_counters));
3382 +
3383 + return count;
3384 +}
3385 +#endif
3386 +#ifdef PFE_ETH_TX_STATS
3387 +/* pfe_eth_show_tx_stats
3388 + *
3389 + */
3390 +static ssize_t pfe_eth_show_tx_stats(struct device *dev,
3391 + struct device_attribute *attr,
3392 + char *buf)
3393 +{
3394 + struct pfe_eth_priv_s *priv = netdev_priv(to_net_dev(dev));
3395 + ssize_t len = 0;
3396 + int i;
3397 +
3398 + len += sprintf(buf + len, "TX queues stats:\n");
3399 +
3400 + for (i = 0; i < emac_txq_cnt; i++) {
3401 + struct netdev_queue *tx_queue = netdev_get_tx_queue(priv->ndev,
3402 + i);
3403 +
3404 + len += sprintf(buf + len, "\n");
3405 + __netif_tx_lock_bh(tx_queue);
3406 +
3407 + hif_tx_lock(&pfe->hif);
3408 + len += sprintf(buf + len,
3409 + "Queue %2d : credits = %10d\n"
3410 + , i, hif_lib_tx_credit_avail(pfe, priv->id, i));
3411 + len += sprintf(buf + len,
3412 + " tx packets = %10d\n"
3413 + , pfe->tmu_credit.tx_packets[priv->id][i]);
3414 + hif_tx_unlock(&pfe->hif);
3415 +
3416 +		/* Don't output additional stats if the queue was never used */
3417 + if (!pfe->tmu_credit.tx_packets[priv->id][i])
3418 + goto skip;
3419 +
3420 + len += sprintf(buf + len,
3421 + " clean_fail = %10d\n"
3422 + , priv->clean_fail[i]);
3423 + len += sprintf(buf + len,
3424 + " stop_queue = %10d\n"
3425 + , priv->stop_queue_total[i]);
3426 + len += sprintf(buf + len,
3427 + " stop_queue_hif = %10d\n"
3428 + , priv->stop_queue_hif[i]);
3429 + len += sprintf(buf + len,
3430 + " stop_queue_hif_client = %10d\n"
3431 + , priv->stop_queue_hif_client[i]);
3432 + len += sprintf(buf + len,
3433 + " stop_queue_credit = %10d\n"
3434 + , priv->stop_queue_credit[i]);
3435 +skip:
3436 + __netif_tx_unlock_bh(tx_queue);
3437 + }
3438 + return len;
3439 +}
3440 +
3441 +/* pfe_eth_set_tx_stats
3442 + *
3443 + */
3444 +static ssize_t pfe_eth_set_tx_stats(struct device *dev,
3445 + struct device_attribute *attr,
3446 + const char *buf, size_t count)
3447 +{
3448 + struct pfe_eth_priv_s *priv = netdev_priv(to_net_dev(dev));
3449 + int i;
3450 +
3451 + for (i = 0; i < emac_txq_cnt; i++) {
3452 + struct netdev_queue *tx_queue = netdev_get_tx_queue(priv->ndev,
3453 + i);
3454 +
3455 + __netif_tx_lock_bh(tx_queue);
3456 + priv->clean_fail[i] = 0;
3457 + priv->stop_queue_total[i] = 0;
3458 + priv->stop_queue_hif[i] = 0;
3459 + priv->stop_queue_hif_client[i] = 0;
3460 + priv->stop_queue_credit[i] = 0;
3461 + __netif_tx_unlock_bh(tx_queue);
3462 + }
3463 +
3464 + return count;
3465 +}
3466 +#endif
3467 +/* pfe_eth_show_txavail
3468 + *
3469 + */
3470 +static ssize_t pfe_eth_show_txavail(struct device *dev,
3471 + struct device_attribute *attr,
3472 + char *buf)
3473 +{
3474 + struct pfe_eth_priv_s *priv = netdev_priv(to_net_dev(dev));
3475 + ssize_t len = 0;
3476 + int i;
3477 +
3478 + for (i = 0; i < emac_txq_cnt; i++) {
3479 + struct netdev_queue *tx_queue = netdev_get_tx_queue(priv->ndev,
3480 + i);
3481 +
3482 + __netif_tx_lock_bh(tx_queue);
3483 +
3484 + len += sprintf(buf + len, "%d",
3485 + hif_lib_tx_avail(&priv->client, i));
3486 +
3487 + __netif_tx_unlock_bh(tx_queue);
3488 +
3489 + if (i == (emac_txq_cnt - 1))
3490 + len += sprintf(buf + len, "\n");
3491 + else
3492 + len += sprintf(buf + len, " ");
3493 + }
3494 +
3495 + return len;
3496 +}
3497 +
3498 +/* pfe_eth_show_default_priority
3499 + *
3500 + */
3501 +static ssize_t pfe_eth_show_default_priority(struct device *dev,
3502 + struct device_attribute *attr,
3503 + char *buf)
3504 +{
3505 + struct pfe_eth_priv_s *priv = netdev_priv(to_net_dev(dev));
3506 + unsigned long flags;
3507 + int rc;
3508 +
3509 + spin_lock_irqsave(&priv->lock, flags);
3510 + rc = sprintf(buf, "%d\n", priv->default_priority);
3511 + spin_unlock_irqrestore(&priv->lock, flags);
3512 +
3513 + return rc;
3514 +}
3515 +
3516 +/* pfe_eth_set_default_priority
3517 + *
3518 + */
3519 +
3520 +static ssize_t pfe_eth_set_default_priority(struct device *dev,
3521 + struct device_attribute *attr,
3522 + const char *buf, size_t count)
3523 +{
3524 + struct pfe_eth_priv_s *priv = netdev_priv(to_net_dev(dev));
3525 + unsigned long flags;
3526 +
3527 + spin_lock_irqsave(&priv->lock, flags);
3528 +	priv->default_priority = simple_strtoul(buf, NULL, 0);
3529 + spin_unlock_irqrestore(&priv->lock, flags);
3530 +
3531 + return count;
3532 +}
3533 +
3534 +static DEVICE_ATTR(txavail, 0444, pfe_eth_show_txavail, NULL);
3535 +static DEVICE_ATTR(default_priority, 0644, pfe_eth_show_default_priority,
3536 + pfe_eth_set_default_priority);
3537 +
3538 +#ifdef PFE_ETH_NAPI_STATS
3539 +static DEVICE_ATTR(napi_stats, 0644, pfe_eth_show_napi_stats,
3540 + pfe_eth_set_napi_stats);
3541 +#endif
3542 +
3543 +#ifdef PFE_ETH_TX_STATS
3544 +static DEVICE_ATTR(tx_stats, 0644, pfe_eth_show_tx_stats,
3545 + pfe_eth_set_tx_stats);
3546 +#endif
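+
+/*
+ * The attributes above appear under the netdev's sysfs directory, e.g.
+ * (interface name illustrative):
+ *
+ *	cat /sys/class/net/eth0/txavail
+ *	echo 3 > /sys/class/net/eth0/default_priority
+ */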
3547 +
3548 +/*
3549 + * pfe_eth_sysfs_init
3550 + *
3551 + */
3552 +static int pfe_eth_sysfs_init(struct net_device *ndev)
3553 +{
3554 + struct pfe_eth_priv_s *priv = netdev_priv(ndev);
3555 + int err;
3556 +
3557 + /* Initialize the default values */
3558 +
3559 + /*
3560 + * By default, packets without conntrack will use this default low
3561 + * priority queue
3562 + */
3563 + priv->default_priority = 0;
3564 +
3565 + /* Create our sysfs files */
3566 + err = device_create_file(&ndev->dev, &dev_attr_default_priority);
3567 + if (err) {
3568 + netdev_err(ndev,
3569 +			   "failed to create default_priority sysfs file\n");
3570 + goto err_priority;
3571 + }
3572 +
3573 + err = device_create_file(&ndev->dev, &dev_attr_txavail);
3574 + if (err) {
3575 + netdev_err(ndev,
3576 +			   "failed to create txavail sysfs file\n");
3577 + goto err_txavail;
3578 + }
3579 +
3580 +#ifdef PFE_ETH_NAPI_STATS
3581 + err = device_create_file(&ndev->dev, &dev_attr_napi_stats);
3582 + if (err) {
3583 + netdev_err(ndev, "failed to create napi stats sysfs files\n");
3584 + goto err_napi;
3585 + }
3586 +#endif
3587 +
3588 +#ifdef PFE_ETH_TX_STATS
3589 + err = device_create_file(&ndev->dev, &dev_attr_tx_stats);
3590 + if (err) {
3591 + netdev_err(ndev, "failed to create tx stats sysfs files\n");
3592 + goto err_tx;
3593 + }
3594 +#endif
3595 +
3596 + return 0;
3597 +
3598 +#ifdef PFE_ETH_TX_STATS
3599 +err_tx:
3600 +#endif
3601 +#ifdef PFE_ETH_NAPI_STATS
3602 + device_remove_file(&ndev->dev, &dev_attr_napi_stats);
3603 +
3604 +err_napi:
3605 +#endif
3606 + device_remove_file(&ndev->dev, &dev_attr_txavail);
3607 +
3608 +err_txavail:
3609 + device_remove_file(&ndev->dev, &dev_attr_default_priority);
3610 +
3611 +err_priority:
3612 + return -1;
3613 +}
3614 +
3615 +/* pfe_eth_sysfs_exit
3616 + *
3617 + */
3618 +void pfe_eth_sysfs_exit(struct net_device *ndev)
3619 +{
3620 +#ifdef PFE_ETH_TX_STATS
3621 + device_remove_file(&ndev->dev, &dev_attr_tx_stats);
3622 +#endif
3623 +
3624 +#ifdef PFE_ETH_NAPI_STATS
3625 + device_remove_file(&ndev->dev, &dev_attr_napi_stats);
3626 +#endif
3627 + device_remove_file(&ndev->dev, &dev_attr_txavail);
3628 + device_remove_file(&ndev->dev, &dev_attr_default_priority);
3629 +}
3630 +
3631 +/*************************************************************************/
3632 +/*			ETHTOOL INTERFACE			*/
3633 +/*************************************************************************/
3634 +
3635 +/* MTIP GEMAC */
3636 +static const struct fec_stat {
3637 + char name[ETH_GSTRING_LEN];
3638 + u16 offset;
3639 +} fec_stats[] = {
3640 + /* RMON TX */
3641 + { "tx_dropped", RMON_T_DROP },
3642 + { "tx_packets", RMON_T_PACKETS },
3643 + { "tx_broadcast", RMON_T_BC_PKT },
3644 + { "tx_multicast", RMON_T_MC_PKT },
3645 + { "tx_crc_errors", RMON_T_CRC_ALIGN },
3646 + { "tx_undersize", RMON_T_UNDERSIZE },
3647 + { "tx_oversize", RMON_T_OVERSIZE },
3648 + { "tx_fragment", RMON_T_FRAG },
3649 + { "tx_jabber", RMON_T_JAB },
3650 + { "tx_collision", RMON_T_COL },
3651 + { "tx_64byte", RMON_T_P64 },
3652 + { "tx_65to127byte", RMON_T_P65TO127 },
3653 + { "tx_128to255byte", RMON_T_P128TO255 },
3654 + { "tx_256to511byte", RMON_T_P256TO511 },
3655 + { "tx_512to1023byte", RMON_T_P512TO1023 },
3656 + { "tx_1024to2047byte", RMON_T_P1024TO2047 },
3657 + { "tx_GTE2048byte", RMON_T_P_GTE2048 },
3658 + { "tx_octets", RMON_T_OCTETS },
3659 +
3660 + /* IEEE TX */
3661 + { "IEEE_tx_drop", IEEE_T_DROP },
3662 + { "IEEE_tx_frame_ok", IEEE_T_FRAME_OK },
3663 + { "IEEE_tx_1col", IEEE_T_1COL },
3664 + { "IEEE_tx_mcol", IEEE_T_MCOL },
3665 + { "IEEE_tx_def", IEEE_T_DEF },
3666 + { "IEEE_tx_lcol", IEEE_T_LCOL },
3667 + { "IEEE_tx_excol", IEEE_T_EXCOL },
3668 + { "IEEE_tx_macerr", IEEE_T_MACERR },
3669 + { "IEEE_tx_cserr", IEEE_T_CSERR },
3670 + { "IEEE_tx_sqe", IEEE_T_SQE },
3671 + { "IEEE_tx_fdxfc", IEEE_T_FDXFC },
3672 + { "IEEE_tx_octets_ok", IEEE_T_OCTETS_OK },
3673 +
3674 + /* RMON RX */
3675 + { "rx_packets", RMON_R_PACKETS },
3676 + { "rx_broadcast", RMON_R_BC_PKT },
3677 + { "rx_multicast", RMON_R_MC_PKT },
3678 + { "rx_crc_errors", RMON_R_CRC_ALIGN },
3679 + { "rx_undersize", RMON_R_UNDERSIZE },
3680 + { "rx_oversize", RMON_R_OVERSIZE },
3681 + { "rx_fragment", RMON_R_FRAG },
3682 + { "rx_jabber", RMON_R_JAB },
3683 + { "rx_64byte", RMON_R_P64 },
3684 + { "rx_65to127byte", RMON_R_P65TO127 },
3685 + { "rx_128to255byte", RMON_R_P128TO255 },
3686 + { "rx_256to511byte", RMON_R_P256TO511 },
3687 + { "rx_512to1023byte", RMON_R_P512TO1023 },
3688 + { "rx_1024to2047byte", RMON_R_P1024TO2047 },
3689 + { "rx_GTE2048byte", RMON_R_P_GTE2048 },
3690 + { "rx_octets", RMON_R_OCTETS },
3691 +
3692 + /* IEEE RX */
3693 + { "IEEE_rx_drop", IEEE_R_DROP },
3694 + { "IEEE_rx_frame_ok", IEEE_R_FRAME_OK },
3695 + { "IEEE_rx_crc", IEEE_R_CRC },
3696 + { "IEEE_rx_align", IEEE_R_ALIGN },
3697 + { "IEEE_rx_macerr", IEEE_R_MACERR },
3698 + { "IEEE_rx_fdxfc", IEEE_R_FDXFC },
3699 + { "IEEE_rx_octets_ok", IEEE_R_OCTETS_OK },
3700 +};
3701 +
3702 +static void pfe_eth_fill_stats(struct net_device *ndev, struct ethtool_stats
3703 + *stats, u64 *data)
3704 +{
3705 + struct pfe_eth_priv_s *priv = netdev_priv(ndev);
3706 + int i;
3707 + u64 pfe_crc_validated = 0;
3708 + int id;
3709 +
3710 + for (id = CLASS0_ID; id <= CLASS_MAX_ID; id++) {
3711 + pfe_crc_validated += be32_to_cpu(pe_dmem_read(id,
3712 + CLASS_DM_CRC_VALIDATED + (priv->id * 4), 4));
3713 + }
3714 +
3715 + for (i = 0; i < ARRAY_SIZE(fec_stats); i++) {
3716 + data[i] = readl(priv->EMAC_baseaddr + fec_stats[i].offset);
3717 +
3718 + if (fec_stats[i].offset == IEEE_R_DROP)
3719 + data[i] -= pfe_crc_validated;
3720 + }
3721 +}
3722 +
3723 +static void pfe_eth_gstrings(struct net_device *netdev,
3724 + u32 stringset, u8 *data)
3725 +{
3726 + int i;
3727 +
3728 + switch (stringset) {
3729 + case ETH_SS_STATS:
3730 + for (i = 0; i < ARRAY_SIZE(fec_stats); i++)
3731 + memcpy(data + i * ETH_GSTRING_LEN,
3732 + fec_stats[i].name, ETH_GSTRING_LEN);
3733 + break;
3734 + }
3735 +}
3736 +
3737 +static int pfe_eth_stats_count(struct net_device *ndev, int sset)
3738 +{
3739 + switch (sset) {
3740 + case ETH_SS_STATS:
3741 + return ARRAY_SIZE(fec_stats);
3742 + default:
3743 + return -EOPNOTSUPP;
3744 + }
3745 +}
3746 +
3747 +/*
3748 + * pfe_eth_gemac_reglen - Return the length of the register structure.
3749 + *
3750 + */
3751 +static int pfe_eth_gemac_reglen(struct net_device *ndev)
3752 +{
3753 + pr_info("%s()\n", __func__);
3754 +	return sizeof(gemac_regs);	/* ethtool expects a byte count */
3755 +}
3756 +
3757 +/*
3758 + * pfe_eth_gemac_get_regs - Return the gemac register structure.
3759 + *
3760 + */
3761 +static void pfe_eth_gemac_get_regs(struct net_device *ndev, struct ethtool_regs
3762 + *regs, void *regbuf)
3763 +{
3764 + int i;
3765 +
3766 + struct pfe_eth_priv_s *priv = netdev_priv(ndev);
3767 + u32 *buf = (u32 *)regbuf;
3768 +
3769 + pr_info("%s()\n", __func__);
3770 + for (i = 0; i < sizeof(gemac_regs) / sizeof(u32); i++)
3771 + buf[i] = readl(priv->EMAC_baseaddr + gemac_regs[i]);
3772 +}
3773 +
3774 +/*
3775 + * pfe_eth_set_wol - Set the magic packet option, in WoL register.
3776 + *
3777 + */
3778 +static int pfe_eth_set_wol(struct net_device *ndev, struct ethtool_wolinfo *wol)
3779 +{
3780 + struct pfe_eth_priv_s *priv = netdev_priv(ndev);
3781 +
3782 + if (wol->wolopts & ~WAKE_MAGIC)
3783 + return -EOPNOTSUPP;
3784 +
3785 + /* for MTIP we store wol->wolopts */
3786 + priv->wol = wol->wolopts;
3787 +
3788 + device_set_wakeup_enable(&ndev->dev, wol->wolopts & WAKE_MAGIC);
3789 +
3790 + return 0;
3791 +}
3792 +
3793 +/*
3794 + *
3795 + * pfe_eth_get_wol - Get the WoL options.
3796 + *
3797 + */
3798 +static void pfe_eth_get_wol(struct net_device *ndev, struct ethtool_wolinfo
3799 + *wol)
3800 +{
3801 + struct pfe_eth_priv_s *priv = netdev_priv(ndev);
3802 +
3803 + wol->supported = WAKE_MAGIC;
3804 + wol->wolopts = 0;
3805 +
3806 + if (priv->wol & WAKE_MAGIC)
3807 + wol->wolopts = WAKE_MAGIC;
3808 +
3809 + memset(&wol->sopass, 0, sizeof(wol->sopass));
3810 +}
3811 +
3812 +/*
3813 + * pfe_eth_get_drvinfo - Fills in the drvinfo structure with some basic info
3814 + *
3815 + */
3816 +static void pfe_eth_get_drvinfo(struct net_device *ndev, struct ethtool_drvinfo
3817 + *drvinfo)
3818 +{
3819 + strlcpy(drvinfo->driver, DRV_NAME, sizeof(drvinfo->driver));
3820 + strlcpy(drvinfo->version, DRV_VERSION, sizeof(drvinfo->version));
3821 + strlcpy(drvinfo->fw_version, "N/A", sizeof(drvinfo->fw_version));
3822 + strlcpy(drvinfo->bus_info, "N/A", sizeof(drvinfo->bus_info));
3823 +}
3824 +
3825 +/*
3826 + * pfe_eth_set_settings - Used to send commands to PHY.
3827 + *
3828 + */
3829 +static int pfe_eth_set_settings(struct net_device *ndev,
3830 + const struct ethtool_link_ksettings *cmd)
3831 +{
3832 + struct pfe_eth_priv_s *priv = netdev_priv(ndev);
3833 + struct phy_device *phydev = priv->phydev;
3834 +
3835 + if (!phydev)
3836 + return -ENODEV;
3837 +
3838 + return phy_ethtool_ksettings_set(phydev, cmd);
3839 +}
3840 +
3841 +/*
3842 + * pfe_eth_get_settings - Return the current link settings in the
3843 + * ethtool_link_ksettings structure.
3844 + *
3845 + */
3846 +static int pfe_eth_get_settings(struct net_device *ndev,
3847 + struct ethtool_link_ksettings *cmd)
3848 +{
3849 + struct pfe_eth_priv_s *priv = netdev_priv(ndev);
3850 + struct phy_device *phydev = priv->phydev;
3851 +
3852 + if (!phydev)
3853 + return -ENODEV;
3854 +
3855 + phy_ethtool_ksettings_get(phydev, cmd);
3856 +
3857 + return 0;
3858 +}
3859 +
3860 +/*
3861 + * pfe_eth_get_msglevel - Gets the debug message mask.
3862 + *
3863 + */
3864 +static uint32_t pfe_eth_get_msglevel(struct net_device *ndev)
3865 +{
3866 + struct pfe_eth_priv_s *priv = netdev_priv(ndev);
3867 +
3868 + return priv->msg_enable;
3869 +}
3870 +
3871 +/*
3872 + * pfe_eth_set_msglevel - Sets the debug message mask.
3873 + *
3874 + */
3875 +static void pfe_eth_set_msglevel(struct net_device *ndev, uint32_t data)
3876 +{
3877 + struct pfe_eth_priv_s *priv = netdev_priv(ndev);
3878 +
3879 + priv->msg_enable = data;
3880 +}
3881 +
3882 +#define HIF_RX_COAL_MAX_CLKS (~(1 << 31))
3883 +#define HIF_RX_COAL_CLKS_PER_USEC (pfe->ctrl.sys_clk / 1000)
3884 +#define HIF_RX_COAL_MAX_USECS (HIF_RX_COAL_MAX_CLKS / \
3885 + HIF_RX_COAL_CLKS_PER_USEC)
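+
+/*
+ * Worked example (illustrative clock rate): with a 250 MHz AXI clock,
+ * sys_clk is 250000 (kHz), so HIF_RX_COAL_CLKS_PER_USEC is 250 and the
+ * longest programmable delay is HIF_RX_COAL_MAX_CLKS / 250, roughly
+ * 8.6 seconds.
+ */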
3886 +
3887 +/*
3888 + * pfe_eth_set_coalesce - Sets rx interrupt coalescing timer.
3889 + *
3890 + */
3891 +static int pfe_eth_set_coalesce(struct net_device *ndev,
3892 + struct ethtool_coalesce *ec,
3893 + struct kernel_ethtool_coalesce *kernel_coal,
3894 + struct netlink_ext_ack *extack)
3895 +{
3896 + if (ec->rx_coalesce_usecs > HIF_RX_COAL_MAX_USECS)
3897 + return -EINVAL;
3898 +
3899 + if (!ec->rx_coalesce_usecs) {
3900 + writel(0, HIF_INT_COAL);
3901 + return 0;
3902 + }
3903 +
3904 + writel((ec->rx_coalesce_usecs * HIF_RX_COAL_CLKS_PER_USEC) |
3905 + HIF_INT_COAL_ENABLE, HIF_INT_COAL);
3906 +
3907 + return 0;
3908 +}
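+
+/*
+ * From user space this is reached through the standard ethtool coalescing
+ * interface, e.g. "ethtool -C <iface> rx-usecs 100", which arrives here
+ * as ec->rx_coalesce_usecs == 100.
+ */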
3909 +
3910 +/*
3911 + * pfe_eth_get_coalesce - Gets rx interrupt coalescing timer value.
3912 + *
3913 + */
3914 +static int pfe_eth_get_coalesce(struct net_device *ndev,
3915 + struct ethtool_coalesce *ec,
3916 + struct kernel_ethtool_coalesce *kernel_coal,
3917 + struct netlink_ext_ack *extack)
3918 +{
3919 + int reg_val = readl(HIF_INT_COAL);
3920 +
3921 + if (reg_val & HIF_INT_COAL_ENABLE)
3922 + ec->rx_coalesce_usecs = (reg_val & HIF_RX_COAL_MAX_CLKS) /
3923 + HIF_RX_COAL_CLKS_PER_USEC;
3924 + else
3925 + ec->rx_coalesce_usecs = 0;
3926 +
3927 + return 0;
3928 +}
3929 +
3930 +/*
3931 + * pfe_eth_set_pauseparam - Sets pause parameters
3932 + *
3933 + */
3934 +static int pfe_eth_set_pauseparam(struct net_device *ndev,
3935 + struct ethtool_pauseparam *epause)
3936 +{
3937 + struct pfe_eth_priv_s *priv = netdev_priv(ndev);
3938 +
3939 + if (epause->tx_pause != epause->rx_pause) {
3940 + netdev_info(ndev,
3941 +			    "hardware can only enable/disable both tx and rx pause together\n");
3942 + return -EINVAL;
3943 + }
3944 +
3945 + priv->pause_flag = 0;
3946 + priv->pause_flag |= epause->rx_pause ? PFE_PAUSE_FLAG_ENABLE : 0;
3947 + priv->pause_flag |= epause->autoneg ? PFE_PAUSE_FLAG_AUTONEG : 0;
3948 +
3949 + if (epause->rx_pause || epause->autoneg) {
3950 + gemac_enable_pause_rx(priv->EMAC_baseaddr);
3951 + writel((readl(priv->GPI_baseaddr + GPI_TX_PAUSE_TIME) |
3952 + EGPI_PAUSE_ENABLE),
3953 + priv->GPI_baseaddr + GPI_TX_PAUSE_TIME);
3954 + if (priv->phydev) {
3955 + linkmode_set_bit(ETHTOOL_LINK_MODE_Pause_BIT,
3956 + priv->phydev->supported);
3957 + linkmode_set_bit(ETHTOOL_LINK_MODE_Asym_Pause_BIT,
3958 + priv->phydev->supported);
3959 + linkmode_set_bit(ETHTOOL_LINK_MODE_Pause_BIT,
3960 + priv->phydev->advertising);
3961 + linkmode_set_bit(ETHTOOL_LINK_MODE_Asym_Pause_BIT,
3962 + priv->phydev->advertising);
3963 + }
3964 + } else {
3965 + gemac_disable_pause_rx(priv->EMAC_baseaddr);
3966 + writel((readl(priv->GPI_baseaddr + GPI_TX_PAUSE_TIME) &
3967 + ~EGPI_PAUSE_ENABLE),
3968 + priv->GPI_baseaddr + GPI_TX_PAUSE_TIME);
3969 + if (priv->phydev) {
3970 + linkmode_clear_bit(ETHTOOL_LINK_MODE_Pause_BIT,
3971 + priv->phydev->supported);
3972 + linkmode_clear_bit(ETHTOOL_LINK_MODE_Asym_Pause_BIT,
3973 + priv->phydev->supported);
3974 + linkmode_clear_bit(ETHTOOL_LINK_MODE_Pause_BIT,
3975 + priv->phydev->advertising);
3976 + linkmode_clear_bit(ETHTOOL_LINK_MODE_Asym_Pause_BIT,
3977 + priv->phydev->advertising);
3978 + }
3979 + }
3980 +
3981 + return 0;
3982 +}
3983 +
3984 +/*
3985 + * pfe_eth_get_pauseparam - Gets pause parameters
3986 + *
3987 + */
3988 +static void pfe_eth_get_pauseparam(struct net_device *ndev,
3989 + struct ethtool_pauseparam *epause)
3990 +{
3991 + struct pfe_eth_priv_s *priv = netdev_priv(ndev);
3992 +
3993 + epause->autoneg = (priv->pause_flag & PFE_PAUSE_FLAG_AUTONEG) != 0;
3994 + epause->tx_pause = (priv->pause_flag & PFE_PAUSE_FLAG_ENABLE) != 0;
3995 + epause->rx_pause = epause->tx_pause;
3996 +}
3997 +
3998 +/*
3999 + * pfe_eth_get_hash
4000 + */
4001 +#define PFE_HASH_BITS 6 /* #bits in hash */
4002 +#define CRC32_POLY 0xEDB88320
4003 +
4004 +static int pfe_eth_get_hash(u8 *addr)
4005 +{
4006 + unsigned int i, bit, data, crc, hash;
4007 +
4008 + /* calculate crc32 value of mac address */
4009 + crc = 0xffffffff;
4010 +
4011 + for (i = 0; i < 6; i++) {
4012 + data = addr[i];
4013 + for (bit = 0; bit < 8; bit++, data >>= 1) {
4014 + crc = (crc >> 1) ^
4015 + (((crc ^ data) & 1) ? CRC32_POLY : 0);
4016 + }
4017 + }
4018 +
4019 + /*
4020 + * only upper 6 bits (PFE_HASH_BITS) are used
4021 + * which point to specific bit in the hash registers
4022 + */
4023 + hash = (crc >> (32 - PFE_HASH_BITS)) & 0x3f;
4024 +
4025 + return hash;
4026 +}
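+
+/*
+ * The 6-bit result is intended to index the GEMAC's 64-bit group-address
+ * hash table; typically bit 5 selects the upper or lower 32-bit hash
+ * register and bits 4:0 give the bit position, though the exact register
+ * layout is GEMAC-specific.
+ */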
4027 +
4028 +const struct ethtool_ops pfe_ethtool_ops = {
4029 + .supported_coalesce_params = ETHTOOL_COALESCE_RX_USECS,
4030 + .get_drvinfo = pfe_eth_get_drvinfo,
4031 + .get_regs_len = pfe_eth_gemac_reglen,
4032 + .get_regs = pfe_eth_gemac_get_regs,
4033 + .get_link = ethtool_op_get_link,
4034 + .get_wol = pfe_eth_get_wol,
4035 + .set_wol = pfe_eth_set_wol,
4036 + .set_pauseparam = pfe_eth_set_pauseparam,
4037 + .get_pauseparam = pfe_eth_get_pauseparam,
4038 + .get_strings = pfe_eth_gstrings,
4039 + .get_sset_count = pfe_eth_stats_count,
4040 + .get_ethtool_stats = pfe_eth_fill_stats,
4041 + .get_msglevel = pfe_eth_get_msglevel,
4042 + .set_msglevel = pfe_eth_set_msglevel,
4043 + .set_coalesce = pfe_eth_set_coalesce,
4044 + .get_coalesce = pfe_eth_get_coalesce,
4045 + .get_link_ksettings = pfe_eth_get_settings,
4046 + .set_link_ksettings = pfe_eth_set_settings,
4047 +};
4048 +
4049 +/* pfe_eth_mdio_reset
4050 + */
4051 +int pfe_eth_mdio_reset(struct mii_bus *bus)
4052 +{
4053 + struct pfe_mdio_priv_s *priv = (struct pfe_mdio_priv_s *)bus->priv;
4054 + u32 phy_speed;
4055 +
4056 +
4057 + mutex_lock(&bus->mdio_lock);
4058 +
4059 + /*
4060 + * Set MII speed to 2.5 MHz (= clk_get_rate() / 2 * phy_speed)
4061 + *
4062 + * The formula for FEC MDC is 'ref_freq / (MII_SPEED x 2)' while
4063 + * for ENET-MAC is 'ref_freq / ((MII_SPEED + 1) x 2)'.
4064 + */
4065 + phy_speed = (DIV_ROUND_UP((pfe->ctrl.sys_clk * 1000), 4000000)
4066 + << EMAC_MII_SPEED_SHIFT);
4067 + phy_speed |= EMAC_HOLDTIME(0x5);
4068 + __raw_writel(phy_speed, priv->mdio_base + EMAC_MII_CTRL_REG);
4069 +
4070 + mutex_unlock(&bus->mdio_lock);
4071 +
4072 + return 0;
4073 +}
4074 +
4075 +/* pfe_eth_mdio_timeout
4076 + *
4077 + */
4078 +static int pfe_eth_mdio_timeout(struct pfe_mdio_priv_s *priv, int timeout)
4079 +{
4080 + while (!(__raw_readl(priv->mdio_base + EMAC_IEVENT_REG) &
4081 + EMAC_IEVENT_MII)) {
4082 + if (timeout-- <= 0)
4083 + return -1;
4084 + usleep_range(10, 20);
4085 + }
4086 + __raw_writel(EMAC_IEVENT_MII, priv->mdio_base + EMAC_IEVENT_REG);
4087 + return 0;
4088 +}
4089 +
4090 +static int pfe_eth_mdio_mux(u8 muxval)
4091 +{
4092 + struct i2c_adapter *a;
4093 + struct i2c_msg msg;
4094 + unsigned char buf[2];
4095 + int ret;
4096 +
4097 + a = i2c_get_adapter(0);
4098 + if (!a)
4099 + return -ENODEV;
4100 +
4101 +	/* write the mux value to register 0x54 of the chip at 0x66 */
4102 + buf[0] = 0x54; /* reg number */
4103 + buf[1] = (muxval << 6) | 0x3; /* data */
4104 + msg.addr = 0x66;
4105 + msg.buf = buf;
4106 + msg.len = 2;
4107 + msg.flags = 0;
4108 + ret = i2c_transfer(a, &msg, 1);
4109 + i2c_put_adapter(a);
4110 + if (ret != 1)
4111 + return -ENODEV;
4112 + return 0;
4113 +}
4114 +
4115 +static int pfe_eth_mdio_write_addr(struct mii_bus *bus, int mii_id,
4116 + int dev_addr, int regnum)
4117 +{
4118 + struct pfe_mdio_priv_s *priv = (struct pfe_mdio_priv_s *)bus->priv;
4119 +
4120 + __raw_writel(EMAC_MII_DATA_PA(mii_id) |
4121 + EMAC_MII_DATA_RA(dev_addr) |
4122 + EMAC_MII_DATA_TA | EMAC_MII_DATA(regnum),
4123 + priv->mdio_base + EMAC_MII_DATA_REG);
4124 +
4125 + if (pfe_eth_mdio_timeout(priv, EMAC_MDIO_TIMEOUT)) {
4126 + dev_err(&bus->dev, "phy MDIO address write timeout\n");
4127 + return -1;
4128 + }
4129 +
4130 + return 0;
4131 +}
4132 +
4133 +static int pfe_eth_mdio_write(struct mii_bus *bus, int mii_id, int regnum,
4134 + u16 value)
4135 +{
4136 + struct pfe_mdio_priv_s *priv = (struct pfe_mdio_priv_s *)bus->priv;
4137 +
4138 +	/* To access external PHYs on the QDS board the mux must be configured */
4139 + if ((mii_id) && (pfe->mdio_muxval[mii_id]))
4140 + pfe_eth_mdio_mux(pfe->mdio_muxval[mii_id]);
4141 +
4142 + if (regnum & MII_ADDR_C45) {
4143 + pfe_eth_mdio_write_addr(bus, mii_id, (regnum >> 16) & 0x1f,
4144 + regnum & 0xffff);
4145 + __raw_writel(EMAC_MII_DATA_OP_CL45_WR |
4146 + EMAC_MII_DATA_PA(mii_id) |
4147 + EMAC_MII_DATA_RA((regnum >> 16) & 0x1f) |
4148 + EMAC_MII_DATA_TA | EMAC_MII_DATA(value),
4149 + priv->mdio_base + EMAC_MII_DATA_REG);
4150 + } else {
4151 + /* start a write op */
4152 + __raw_writel(EMAC_MII_DATA_ST | EMAC_MII_DATA_OP_WR |
4153 + EMAC_MII_DATA_PA(mii_id) |
4154 + EMAC_MII_DATA_RA(regnum) |
4155 + EMAC_MII_DATA_TA | EMAC_MII_DATA(value),
4156 + priv->mdio_base + EMAC_MII_DATA_REG);
4157 + }
4158 +
4159 + if (pfe_eth_mdio_timeout(priv, EMAC_MDIO_TIMEOUT)) {
4160 + dev_err(&bus->dev, "%s: phy MDIO write timeout\n", __func__);
4161 + return -1;
4162 + }
4163 + return 0;
4164 +}
4165 +
4166 +static int pfe_eth_mdio_read(struct mii_bus *bus, int mii_id, int regnum)
4167 +{
4168 + struct pfe_mdio_priv_s *priv = (struct pfe_mdio_priv_s *)bus->priv;
4169 + u16 value = 0;
4170 +
4171 +	/* To access external PHYs on the QDS board the mux must be configured */
4172 + if ((mii_id) && (pfe->mdio_muxval[mii_id]))
4173 + pfe_eth_mdio_mux(pfe->mdio_muxval[mii_id]);
4174 +
4175 + if (regnum & MII_ADDR_C45) {
4176 + pfe_eth_mdio_write_addr(bus, mii_id, (regnum >> 16) & 0x1f,
4177 + regnum & 0xffff);
4178 + __raw_writel(EMAC_MII_DATA_OP_CL45_RD |
4179 + EMAC_MII_DATA_PA(mii_id) |
4180 + EMAC_MII_DATA_RA((regnum >> 16) & 0x1f) |
4181 + EMAC_MII_DATA_TA,
4182 + priv->mdio_base + EMAC_MII_DATA_REG);
4183 + } else {
4184 + /* start a read op */
4185 + __raw_writel(EMAC_MII_DATA_ST | EMAC_MII_DATA_OP_RD |
4186 + EMAC_MII_DATA_PA(mii_id) |
4187 + EMAC_MII_DATA_RA(regnum) |
4188 + EMAC_MII_DATA_TA, priv->mdio_base +
4189 + EMAC_MII_DATA_REG);
4190 + }
4191 +
4192 + if (pfe_eth_mdio_timeout(priv, EMAC_MDIO_TIMEOUT)) {
4193 + dev_err(&bus->dev, "%s: phy MDIO read timeout\n", __func__);
4194 + return -1;
4195 + }
4196 +
4197 + value = EMAC_MII_DATA(__raw_readl(priv->mdio_base +
4198 + EMAC_MII_DATA_REG));
4199 + return value;
4200 +}
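+
+/*
+ * Note on the clause-45 paths above: the kernel encodes the MMD device
+ * address into the register number, roughly
+ *
+ *	regnum = MII_ADDR_C45 | (devad << 16) | (reg & 0xffff);
+ *
+ * which is why the helpers extract (regnum >> 16) & 0x1f.
+ */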
4201 +
4202 +static int pfe_eth_mdio_init(struct pfe *pfe,
4203 + struct ls1012a_pfe_platform_data *pfe_info,
4204 + int ii)
4205 +{
4206 + struct pfe_mdio_priv_s *priv = NULL;
4207 + struct ls1012a_mdio_platform_data *mdio_info;
4208 + struct mii_bus *bus;
4209 + struct device_node *mdio_node;
4210 + int rc = 0;
4211 +
4212 + mdio_info = (struct ls1012a_mdio_platform_data *)
4213 + pfe_info->ls1012a_mdio_pdata;
4214 + mdio_info->id = ii;
4215 +
4216 + bus = mdiobus_alloc_size(sizeof(struct pfe_mdio_priv_s));
4217 + if (!bus) {
4218 + pr_err("mdiobus_alloc() failed\n");
4219 + rc = -ENOMEM;
4220 + goto err_mdioalloc;
4221 + }
4222 +
4223 + bus->name = "ls1012a MDIO Bus";
4224 + snprintf(bus->id, MII_BUS_ID_SIZE, "ls1012a-%x", mdio_info->id);
4225 +
4226 + bus->read = &pfe_eth_mdio_read;
4227 + bus->write = &pfe_eth_mdio_write;
4228 + bus->reset = &pfe_eth_mdio_reset;
4229 + bus->parent = pfe->dev;
4230 + bus->phy_mask = mdio_info->phy_mask;
4231 + bus->irq[0] = mdio_info->irq[0];
4232 + priv = bus->priv;
4233 + priv->mdio_base = cbus_emac_base[ii];
4234 +
4235 + priv->mdc_div = mdio_info->mdc_div;
4236 + if (!priv->mdc_div)
4237 + priv->mdc_div = 64;
4238 +
4239 + dev_info(bus->parent, "%s: mdc_div: %d, phy_mask: %x\n",
4240 + __func__, priv->mdc_div, bus->phy_mask);
4241 + mdio_node = of_get_child_by_name(pfe->dev->of_node, "mdio");
4242 + if ((mdio_info->id == 0) && mdio_node) {
4243 + rc = of_mdiobus_register(bus, mdio_node);
4244 + of_node_put(mdio_node);
4245 + } else {
4246 + rc = mdiobus_register(bus);
4247 + }
4248 +
4249 + if (rc) {
4250 + dev_err(bus->parent, "mdiobus_register(%s) failed\n",
4251 + bus->name);
4252 + goto err_mdioregister;
4253 + }
4254 +
4255 + priv->mii_bus = bus;
4256 + pfe->mdio.mdio_priv[ii] = priv;
4257 +
4258 + pfe_eth_mdio_reset(bus);
4259 +
4260 + return 0;
4261 +
4262 +err_mdioregister:
4263 + mdiobus_free(bus);
4264 +err_mdioalloc:
4265 + return rc;
4266 +}
4267 +
4268 +/* pfe_eth_mdio_exit
4269 + */
4270 +static void pfe_eth_mdio_exit(struct pfe *pfe,
4271 + int ii)
4272 +{
4273 + struct pfe_mdio_priv_s *mdio_priv = pfe->mdio.mdio_priv[ii];
4274 + struct mii_bus *bus = mdio_priv->mii_bus;
4275 +
4276 + if (!bus)
4277 + return;
4278 + mdiobus_unregister(bus);
4279 + mdiobus_free(bus);
4280 +}
4281 +
4282 +/* pfe_get_phydev_speed
4283 + */
4284 +static int pfe_get_phydev_speed(struct phy_device *phydev)
4285 +{
4286 + switch (phydev->speed) {
4287 + case 10:
4288 + return SPEED_10M;
4289 + case 100:
4290 + return SPEED_100M;
4291 + case 1000:
4292 + default:
4293 + return SPEED_1000M;
4294 + }
4295 +}
4296 +
4297 +/* pfe_set_rgmii_speed
4298 + */
4299 +#define RGMIIPCR 0x434
4300 +/* RGMIIPCR bit definitions*/
4301 +#define SCFG_RGMIIPCR_EN_AUTO (0x00000008)
4302 +#define SCFG_RGMIIPCR_SETSP_1000M (0x00000004)
4303 +#define SCFG_RGMIIPCR_SETSP_100M (0x00000000)
4304 +#define SCFG_RGMIIPCR_SETSP_10M (0x00000002)
4305 +#define SCFG_RGMIIPCR_SETFD (0x00000001)
4306 +
4307 +#define MDIOSELCR 0x484
4308 +#define MDIOSEL_SERDES 0x0
4309 +#define MDIOSEL_EXTPHY 0x80000000
4310 +
4311 +static void pfe_set_rgmii_speed(struct phy_device *phydev)
4312 +{
4313 + u32 rgmii_pcr;
4314 +
4315 + regmap_read(pfe->scfg, RGMIIPCR, &rgmii_pcr);
4316 + rgmii_pcr &= ~(SCFG_RGMIIPCR_SETSP_1000M | SCFG_RGMIIPCR_SETSP_10M);
4317 +
4318 + switch (phydev->speed) {
4319 + case 10:
4320 + rgmii_pcr |= SCFG_RGMIIPCR_SETSP_10M;
4321 + break;
4322 + case 1000:
4323 + rgmii_pcr |= SCFG_RGMIIPCR_SETSP_1000M;
4324 + break;
4325 + case 100:
4326 + default:
4327 + /* Default is 100M */
4328 + break;
4329 + }
4330 + regmap_write(pfe->scfg, RGMIIPCR, rgmii_pcr);
4331 +}
4332 +
4333 +/* pfe_get_phydev_duplex
4334 + */
4335 +static int pfe_get_phydev_duplex(struct phy_device *phydev)
4336 +{
4337 + /*return (phydev->duplex == DUPLEX_HALF) ? DUP_HALF:DUP_FULL ; */
4338 + return DUPLEX_FULL;
4339 +}
4340 +
4341 +/* pfe_eth_adjust_link
4342 + */
4343 +static void pfe_eth_adjust_link(struct net_device *ndev)
4344 +{
4345 + struct pfe_eth_priv_s *priv = netdev_priv(ndev);
4346 + unsigned long flags;
4347 + struct phy_device *phydev = priv->phydev;
4348 + int new_state = 0;
4349 +
4350 + netif_info(priv, drv, ndev, "%s\n", __func__);
4351 +
4352 + spin_lock_irqsave(&priv->lock, flags);
4353 +
4354 + if (phydev->link) {
4355 + /*
4356 + * Now we make sure that we can be in full duplex mode.
4357 + * If not, we operate in half-duplex mode.
4358 + */
4359 + if (phydev->duplex != priv->oldduplex) {
4360 + new_state = 1;
4361 + gemac_set_duplex(priv->EMAC_baseaddr,
4362 + pfe_get_phydev_duplex(phydev));
4363 + priv->oldduplex = phydev->duplex;
4364 + }
4365 +
4366 + if (phydev->speed != priv->oldspeed) {
4367 + new_state = 1;
4368 + gemac_set_speed(priv->EMAC_baseaddr,
4369 + pfe_get_phydev_speed(phydev));
4370 + if (priv->einfo->mii_config ==
4371 + PHY_INTERFACE_MODE_RGMII_ID)
4372 + pfe_set_rgmii_speed(phydev);
4373 + priv->oldspeed = phydev->speed;
4374 + }
4375 +
4376 + if (!priv->oldlink) {
4377 + new_state = 1;
4378 + priv->oldlink = 1;
4379 + }
4380 +
4381 + } else if (priv->oldlink) {
4382 + new_state = 1;
4383 + priv->oldlink = 0;
4384 + priv->oldspeed = 0;
4385 + priv->oldduplex = -1;
4386 + }
4387 +
4388 + if (new_state && netif_msg_link(priv))
4389 + phy_print_status(phydev);
4390 +
4391 + spin_unlock_irqrestore(&priv->lock, flags);
4392 +
4393 + /* Now, dump the details to the cdev.
4394 +	 * XXX: is locking required here (on a uniprocessor arch)?
4395 +	 * Or maybe move this under the spinlock above.
4396 + */
4397 + if (us && priv->einfo->gem_id < PFE_CDEV_ETH_COUNT) {
4398 + pr_debug("Changing link state from (%u) to (%u) for ID=(%u)\n",
4399 + link_states[priv->einfo->gem_id].state,
4400 + phydev->link,
4401 + priv->einfo->gem_id);
4402 + link_states[priv->einfo->gem_id].phy_id = priv->einfo->gem_id;
4403 + link_states[priv->einfo->gem_id].state = phydev->link;
4404 + }
4405 +}
4406 +
4407 +/* pfe_phy_exit
4408 + */
4409 +static void pfe_phy_exit(struct net_device *ndev)
4410 +{
4411 + struct pfe_eth_priv_s *priv = netdev_priv(ndev);
4412 +
4413 + netif_info(priv, drv, ndev, "%s\n", __func__);
4414 +
4415 + phy_disconnect(priv->phydev);
4416 + priv->phydev = NULL;
4417 +}
4418 +
4419 +/* pfe_eth_stop
4420 + */
4421 +static void pfe_eth_stop(struct net_device *ndev, int wake)
4422 +{
4423 + struct pfe_eth_priv_s *priv = netdev_priv(ndev);
4424 +
4425 + netif_info(priv, drv, ndev, "%s\n", __func__);
4426 +
4427 + if (wake) {
4428 + gemac_tx_disable(priv->EMAC_baseaddr);
4429 + } else {
4430 + gemac_disable(priv->EMAC_baseaddr);
4431 + gpi_disable(priv->GPI_baseaddr);
4432 +
4433 + if (priv->phydev)
4434 + phy_stop(priv->phydev);
4435 + }
4436 +}
4437 +
4438 +/* pfe_eth_start
4439 + */
4440 +static int pfe_eth_start(struct pfe_eth_priv_s *priv)
4441 +{
4442 + netif_info(priv, drv, priv->ndev, "%s\n", __func__);
4443 +
4444 + if (priv->phydev)
4445 + phy_start(priv->phydev);
4446 +
4447 + gpi_enable(priv->GPI_baseaddr);
4448 + gemac_enable(priv->EMAC_baseaddr);
4449 +
4450 + return 0;
4451 +}
4452 +
4453 +/*
4454 + * Configure on chip serdes through mdio
4455 + */
4456 +static void ls1012a_configure_serdes(struct net_device *ndev)
4457 +{
4458 + struct pfe_eth_priv_s *eth_priv = netdev_priv(ndev);
4459 + struct pfe_mdio_priv_s *mdio_priv = pfe->mdio.mdio_priv[eth_priv->id];
4460 + int sgmii_2500 = 0;
4461 + struct mii_bus *bus = mdio_priv->mii_bus;
4462 + u16 value = 0;
4463 +
4464 + if (eth_priv->einfo->mii_config == PHY_INTERFACE_MODE_2500SGMII)
4465 + sgmii_2500 = 1;
4466 +
4467 + netif_info(eth_priv, drv, ndev, "%s\n", __func__);
4468 + /* PCS configuration done with corresponding GEMAC */
4469 +
4470 + pfe_eth_mdio_read(bus, 0, MDIO_SGMII_CR);
4471 + pfe_eth_mdio_read(bus, 0, MDIO_SGMII_SR);
4472 +
4473 + pfe_eth_mdio_write(bus, 0, MDIO_SGMII_CR, SGMII_CR_RST);
4474 +
4475 + if (sgmii_2500) {
4476 + pfe_eth_mdio_write(bus, 0, MDIO_SGMII_IF_MODE, SGMII_SPEED_1GBPS
4477 + | SGMII_EN);
4478 + pfe_eth_mdio_write(bus, 0, MDIO_SGMII_DEV_ABIL_SGMII,
4479 + SGMII_DEV_ABIL_ACK | SGMII_DEV_ABIL_SGMII);
4480 + pfe_eth_mdio_write(bus, 0, MDIO_SGMII_LINK_TMR_L, 0xa120);
4481 + pfe_eth_mdio_write(bus, 0, MDIO_SGMII_LINK_TMR_H, 0x7);
4482 +		/* Autonegotiation needs to be disabled for 2.5G SGMII mode */
4483 + value = SGMII_CR_FD | SGMII_CR_SPEED_SEL1_1G;
4484 + pfe_eth_mdio_write(bus, 0, MDIO_SGMII_CR, value);
4485 + } else {
4486 + pfe_eth_mdio_write(bus, 0, MDIO_SGMII_IF_MODE,
4487 + SGMII_SPEED_1GBPS
4488 + | SGMII_USE_SGMII_AN
4489 + | SGMII_EN);
4490 + pfe_eth_mdio_write(bus, 0, MDIO_SGMII_DEV_ABIL_SGMII,
4491 + SGMII_DEV_ABIL_EEE_CLK_STP_EN
4492 + | 0xa0
4493 + | SGMII_DEV_ABIL_SGMII);
4494 + pfe_eth_mdio_write(bus, 0, MDIO_SGMII_LINK_TMR_L, 0x400);
4495 + pfe_eth_mdio_write(bus, 0, MDIO_SGMII_LINK_TMR_H, 0x0);
4496 + value = SGMII_CR_AN_EN | SGMII_CR_FD | SGMII_CR_SPEED_SEL1_1G;
4497 + pfe_eth_mdio_write(bus, 0, MDIO_SGMII_CR, value);
4498 + }
4499 +}
4500 +
4501 +/*
4502 + * pfe_phy_init
4503 + *
4504 + */
4505 +static int pfe_phy_init(struct net_device *ndev)
4506 +{
4507 + struct pfe_eth_priv_s *priv = netdev_priv(ndev);
4508 + struct phy_device *phydev;
4509 + char phy_id[MII_BUS_ID_SIZE + 3];
4510 + char bus_id[MII_BUS_ID_SIZE];
4511 + phy_interface_t interface;
4512 +
4513 + priv->oldlink = 0;
4514 + priv->oldspeed = 0;
4515 + priv->oldduplex = -1;
4516 +
4517 + snprintf(bus_id, MII_BUS_ID_SIZE, "ls1012a-%d", 0);
4518 + snprintf(phy_id, MII_BUS_ID_SIZE + 3, PHY_ID_FMT, bus_id,
4519 + priv->einfo->phy_id);
4520 + netif_info(priv, drv, ndev, "%s: %s\n", __func__, phy_id);
4521 + interface = priv->einfo->mii_config;
4522 + if ((interface == PHY_INTERFACE_MODE_SGMII) ||
4523 + (interface == PHY_INTERFACE_MODE_2500SGMII)) {
4524 +		/* Configure SGMII PCS */
4525 + if (pfe->scfg) {
4526 + /* Config MDIO from serdes */
4527 + regmap_write(pfe->scfg, MDIOSELCR, MDIOSEL_SERDES);
4528 + }
4529 + ls1012a_configure_serdes(ndev);
4530 + }
4531 +
4532 + if (pfe->scfg) {
4533 +		/* Config MDIO from PAD */
4534 + regmap_write(pfe->scfg, MDIOSELCR, MDIOSEL_EXTPHY);
4535 + }
4536 +
4537 + priv->oldlink = 0;
4538 + priv->oldspeed = 0;
4539 + priv->oldduplex = -1;
4540 + pr_info("%s interface %x\n", __func__, interface);
4541 +
4542 + if (priv->phy_node) {
4543 + phydev = of_phy_connect(ndev, priv->phy_node,
4544 + pfe_eth_adjust_link, 0,
4545 + priv->einfo->mii_config);
4546 + if (!(phydev)) {
4547 + netdev_err(ndev, "Unable to connect to phy\n");
4548 + return -ENODEV;
4549 + }
4550 +
4551 + } else {
4552 + phydev = phy_connect(ndev, phy_id,
4553 + &pfe_eth_adjust_link, interface);
4554 + if (IS_ERR(phydev)) {
4555 + netdev_err(ndev, "Unable to connect to phy\n");
4556 + return PTR_ERR(phydev);
4557 + }
4558 + }
4559 +
4560 + priv->phydev = phydev;
4561 + phydev->irq = PHY_POLL;
4562 +
4563 + return 0;
4564 +}
4565 +
4566 +/* pfe_gemac_init
4567 + */
4568 +static int pfe_gemac_init(struct pfe_eth_priv_s *priv)
4569 +{
4570 + struct gemac_cfg cfg;
4571 +
4572 + netif_info(priv, ifup, priv->ndev, "%s\n", __func__);
4573 +
4574 + cfg.mode = 0;
4575 + cfg.speed = SPEED_1000M;
4576 + cfg.duplex = DUPLEX_FULL;
4577 +
4578 + gemac_set_config(priv->EMAC_baseaddr, &cfg);
4579 + gemac_allow_broadcast(priv->EMAC_baseaddr);
4580 + gemac_enable_1536_rx(priv->EMAC_baseaddr);
4581 + gemac_enable_stacked_vlan(priv->EMAC_baseaddr);
4582 + gemac_enable_pause_rx(priv->EMAC_baseaddr);
4583 + gemac_set_bus_width(priv->EMAC_baseaddr, 64);
4584 +
4585 +	/* GEM will perform checksum verification */
4586 + if (priv->ndev->features & NETIF_F_RXCSUM)
4587 + gemac_enable_rx_checksum_offload(priv->EMAC_baseaddr);
4588 + else
4589 + gemac_disable_rx_checksum_offload(priv->EMAC_baseaddr);
4590 +
4591 + return 0;
4592 +}
4593 +
4594 +/* pfe_eth_event_handler
4595 + */
4596 +static int pfe_eth_event_handler(void *data, int event, int qno)
4597 +{
4598 + struct pfe_eth_priv_s *priv = data;
4599 +
4600 + switch (event) {
4601 + case EVENT_RX_PKT_IND:
4602 +
4603 + if (qno == 0) {
4604 + if (napi_schedule_prep(&priv->high_napi)) {
4605 + netif_info(priv, intr, priv->ndev,
4606 + "%s: schedule high prio poll\n"
4607 + , __func__);
4608 +
4609 +#ifdef PFE_ETH_NAPI_STATS
4610 + priv->napi_counters[NAPI_SCHED_COUNT]++;
4611 +#endif
4612 +
4613 + __napi_schedule(&priv->high_napi);
4614 + }
4615 + } else if (qno == 1) {
4616 + if (napi_schedule_prep(&priv->low_napi)) {
4617 + netif_info(priv, intr, priv->ndev,
4618 + "%s: schedule low prio poll\n"
4619 + , __func__);
4620 +
4621 +#ifdef PFE_ETH_NAPI_STATS
4622 + priv->napi_counters[NAPI_SCHED_COUNT]++;
4623 +#endif
4624 + __napi_schedule(&priv->low_napi);
4625 + }
4626 + } else if (qno == 2) {
4627 + if (napi_schedule_prep(&priv->lro_napi)) {
4628 + netif_info(priv, intr, priv->ndev,
4629 + "%s: schedule lro prio poll\n"
4630 + , __func__);
4631 +
4632 +#ifdef PFE_ETH_NAPI_STATS
4633 + priv->napi_counters[NAPI_SCHED_COUNT]++;
4634 +#endif
4635 + __napi_schedule(&priv->lro_napi);
4636 + }
4637 + }
4638 +
4639 + break;
4640 +
4641 + case EVENT_TXDONE_IND:
4642 + pfe_eth_flush_tx(priv);
4643 + hif_lib_event_handler_start(&priv->client, EVENT_TXDONE_IND, 0);
4644 + break;
4645 + case EVENT_HIGH_RX_WM:
4646 + default:
4647 + break;
4648 + }
4649 +
4650 + return 0;
4651 +}
4652 +
4653 +static int pfe_eth_change_mtu(struct net_device *ndev, int new_mtu)
4654 +{
4655 + struct pfe_eth_priv_s *priv = netdev_priv(ndev);
4656 +
4657 + ndev->mtu = new_mtu;
4658 + new_mtu += ETH_HLEN + ETH_FCS_LEN;
4659 + gemac_set_rx_max_fl(priv->EMAC_baseaddr, new_mtu);
4660 +
4661 + return 0;
4662 +}
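+
+/*
+ * Example: for the default MTU of 1500 the GEMAC max frame length becomes
+ * 1500 + ETH_HLEN (14) + ETH_FCS_LEN (4) = 1518 bytes.
+ */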
4663 +
4664 +/* pfe_eth_open
4665 + */
4666 +static int pfe_eth_open(struct net_device *ndev)
4667 +{
4668 + struct pfe_eth_priv_s *priv = netdev_priv(ndev);
4669 + struct hif_client_s *client;
4670 + int rc;
4671 +
4672 + netif_info(priv, ifup, ndev, "%s\n", __func__);
4673 +
4674 + /* Register client driver with HIF */
4675 + client = &priv->client;
4676 + memset(client, 0, sizeof(*client));
4677 + client->id = PFE_CL_GEM0 + priv->id;
4678 + client->tx_qn = emac_txq_cnt;
4679 + client->rx_qn = EMAC_RXQ_CNT;
4680 + client->priv = priv;
4681 + client->pfe = priv->pfe;
4682 + client->event_handler = pfe_eth_event_handler;
4683 +
4684 + client->tx_qsize = EMAC_TXQ_DEPTH;
4685 + client->rx_qsize = EMAC_RXQ_DEPTH;
4686 +
4687 + rc = hif_lib_client_register(client);
4688 + if (rc) {
4689 + netdev_err(ndev, "%s: hif_lib_client_register(%d) failed\n",
4690 + __func__, client->id);
4691 + goto err0;
4692 + }
4693 +
4694 + netif_info(priv, drv, ndev, "%s: registered client: %p\n", __func__,
4695 + client);
4696 +
4697 + pfe_gemac_init(priv);
4698 +
4699 + if (!is_valid_ether_addr(ndev->dev_addr)) {
4700 + netdev_err(ndev, "%s: invalid MAC address\n", __func__);
4701 + rc = -EADDRNOTAVAIL;
4702 + goto err1;
4703 + }
4704 +
4705 + gemac_set_laddrN(priv->EMAC_baseaddr,
4706 + (struct pfe_mac_addr *)ndev->dev_addr, 1);
4707 +
4708 + napi_enable(&priv->high_napi);
4709 + napi_enable(&priv->low_napi);
4710 + napi_enable(&priv->lro_napi);
4711 +
4712 + rc = pfe_eth_start(priv);
4713 +
4714 + netif_tx_wake_all_queues(ndev);
4715 +
4716 + return rc;
4717 +
4718 +err1:
4719 + hif_lib_client_unregister(&priv->client);
4720 +
4721 +err0:
4722 + return rc;
4723 +}
4724 +
4725 +/*
4726 + * pfe_eth_shutdown
4727 + */
4728 +int pfe_eth_shutdown(struct net_device *ndev, int wake)
4729 +{
4730 + struct pfe_eth_priv_s *priv = netdev_priv(ndev);
4731 + int i, qstatus, id;
4732 + unsigned long next_poll = jiffies + 1, end = jiffies +
4733 + (TX_POLL_TIMEOUT_MS * HZ) / 1000;
4734 + int tx_pkts, prv_tx_pkts;
4735 +
4736 + netif_info(priv, ifdown, ndev, "%s\n", __func__);
4737 +
4738 + for (i = 0; i < emac_txq_cnt; i++)
4739 + hrtimer_cancel(&priv->fast_tx_timeout[i].timer);
4740 +
4741 + netif_tx_stop_all_queues(ndev);
4742 +
4743 + do {
4744 + tx_pkts = 0;
4745 + pfe_eth_flush_tx(priv);
4746 +
4747 + for (i = 0; i < emac_txq_cnt; i++)
4748 + tx_pkts += hif_lib_tx_pending(&priv->client, i);
4749 +
4750 + if (tx_pkts) {
4751 + /*Don't wait forever, break if we cross max timeout */
4752 + if (time_after(jiffies, end)) {
4753 + pr_err(
4754 +				       "(%s) TX not complete after %d msec\n",
4755 + ndev->name, TX_POLL_TIMEOUT_MS);
4756 + break;
4757 + }
4758 +
4759 + pr_info("%s : (%s) Waiting for tx packets to free. Pending tx pkts = %d.\n"
4760 + , __func__, ndev->name, tx_pkts);
4761 + if (need_resched())
4762 + schedule();
4763 + }
4764 +
4765 + } while (tx_pkts);
4766 +
4767 + end = jiffies + (TX_POLL_TIMEOUT_MS * HZ) / 1000;
4768 +
4769 + prv_tx_pkts = tmu_pkts_processed(priv->id);
4770 +	/*
4771 +	 * Wait till the TMU transmits all pending packets:
4772 +	 * poll tmu_qstatus and the TMU packet counter once per jiffy and
4773 +	 * consider the TMU busy if its queue is pending or it has
4774 +	 * processed any packets since the last poll.
4775 +	 */
4776 + while (1) {
4777 + if (time_after(jiffies, next_poll)) {
4778 + tx_pkts = tmu_pkts_processed(priv->id);
4779 + qstatus = tmu_qstatus(priv->id) & 0x7ffff;
4780 +
4781 + if (!qstatus && (tx_pkts == prv_tx_pkts))
4782 + break;
4783 + /* Don't wait forever, break if we cross max
4784 + * timeout(TX_POLL_TIMEOUT_MS)
4785 + */
4786 + if (time_after(jiffies, end)) {
4787 + pr_err("TMU%d is busy after %dmsec\n",
4788 + priv->id, TX_POLL_TIMEOUT_MS);
4789 + break;
4790 + }
4791 + prv_tx_pkts = tx_pkts;
4792 + next_poll++;
4793 + }
4794 + if (need_resched())
4795 + schedule();
4796 + }
4797 + /* Wait for some more time to complete transmitting packet if any */
4798 + next_poll = jiffies + 1;
4799 + while (1) {
4800 + if (time_after(jiffies, next_poll))
4801 + break;
4802 + if (need_resched())
4803 + schedule();
4804 + }
4805 +
4806 + pfe_eth_stop(ndev, wake);
4807 +
4808 + napi_disable(&priv->lro_napi);
4809 + napi_disable(&priv->low_napi);
4810 + napi_disable(&priv->high_napi);
4811 +
4812 + for (id = CLASS0_ID; id <= CLASS_MAX_ID; id++) {
4813 + pe_dmem_write(id, 0, CLASS_DM_CRC_VALIDATED
4814 + + (priv->id * 4), 4);
4815 + }
4816 +
4817 + hif_lib_client_unregister(&priv->client);
4818 +
4819 + return 0;
4820 +}
4821 +
4822 +/* pfe_eth_close
4823 + *
4824 + */
4825 +static int pfe_eth_close(struct net_device *ndev)
4826 +{
4827 + pfe_eth_shutdown(ndev, 0);
4828 +
4829 + return 0;
4830 +}
4831 +
4832 +/* pfe_eth_suspend
4833 + *
4834 + * return value : 1 if netdevice is configured to wakeup system
4835 + * 0 otherwise
4836 + */
4837 +int pfe_eth_suspend(struct net_device *ndev)
4838 +{
4839 + struct pfe_eth_priv_s *priv = netdev_priv(ndev);
4840 + int retval = 0;
4841 +
4842 + if (priv->wol) {
4843 + gemac_set_wol(priv->EMAC_baseaddr, priv->wol);
4844 + retval = 1;
4845 + }
4846 + pfe_eth_shutdown(ndev, priv->wol);
4847 +
4848 + return retval;
4849 +}
4850 +
4851 +/* pfe_eth_resume
4852 + *
4853 + */
4854 +int pfe_eth_resume(struct net_device *ndev)
4855 +{
4856 + struct pfe_eth_priv_s *priv = netdev_priv(ndev);
4857 +
4858 + if (priv->wol)
4859 + gemac_set_wol(priv->EMAC_baseaddr, 0);
4860 + gemac_tx_enable(priv->EMAC_baseaddr);
4861 +
4862 + return pfe_eth_open(ndev);
4863 +}
4864 +
4865 +/* pfe_eth_get_queuenum
4866 + */
4867 +static int pfe_eth_get_queuenum(struct pfe_eth_priv_s *priv, struct sk_buff
4868 + *skb)
4869 +{
4870 + int queuenum = 0;
4871 + unsigned long flags;
4872 +
4873 +	/* Get the Fast Path queue number */
4874 +	/*
4875 +	 * Use the conntrack mark (if conntrack exists), then the packet mark
4876 +	 * (if any), then fall back to the default priority.
4877 +	 */
4878 +#if defined(CONFIG_IP_NF_CONNTRACK_MARK) || defined(CONFIG_NF_CONNTRACK_MARK)
4879 + if (skb->_nfct) {
4880 + enum ip_conntrack_info cinfo;
4881 + struct nf_conn *ct;
4882 +
4883 + ct = nf_ct_get(skb, &cinfo);
4884 +
4885 + if (ct) {
4886 + u32 connmark;
4887 +
4888 + connmark = ct->mark;
4889 +
4890 + if ((connmark & 0x80000000) && priv->id != 0)
4891 + connmark >>= 16;
4892 +
4893 + queuenum = connmark & EMAC_QUEUENUM_MASK;
4894 + }
4895 + } else {/* continued after #endif ... */
4896 +#endif
4897 + if (skb->mark) {
4898 + queuenum = skb->mark & EMAC_QUEUENUM_MASK;
4899 + } else {
4900 + spin_lock_irqsave(&priv->lock, flags);
4901 + queuenum = priv->default_priority & EMAC_QUEUENUM_MASK;
4902 + spin_unlock_irqrestore(&priv->lock, flags);
4903 + }
4904 +#if defined(CONFIG_IP_NF_CONNTRACK_MARK) || defined(CONFIG_NF_CONNTRACK_MARK)
4905 + }
4906 +#endif
4907 + return queuenum;
4908 +}
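
To make the selection order concrete: EMAC_QUEUENUM_MASK is (emac_txq_cnt - 1) per pfe_eth.h, so with the default 16 TX queues a mark of 0x12 maps to queue 2. Below is a minimal sketch of the same precedence; the helper name is hypothetical and only mirrors the driver's logic:

	/* Hypothetical condensed form of pfe_eth_get_queuenum()'s precedence:
	 * conntrack mark first, then skb mark, then the configured default.
	 */
	static inline int example_select_queue(u32 ct_mark, u32 skb_mark,
					       u32 default_priority, u32 queue_mask)
	{
		if (ct_mark)
			return ct_mark & queue_mask;
		if (skb_mark)
			return skb_mark & queue_mask;
		return default_priority & queue_mask;
	}
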
4909 +
4910 +/* pfe_eth_might_stop_tx
4911 + *
4912 + */
4913 +static int pfe_eth_might_stop_tx(struct pfe_eth_priv_s *priv, int queuenum,
4914 + struct netdev_queue *tx_queue,
4915 + unsigned int n_desc,
4916 + unsigned int n_segs)
4917 +{
4918 + ktime_t kt;
4919 + int tried = 0;
4920 +
4921 +try_again:
4922 + if (unlikely((__hif_tx_avail(&pfe->hif) < n_desc) ||
4923 + (hif_lib_tx_avail(&priv->client, queuenum) < n_desc) ||
4924 + (hif_lib_tx_credit_avail(pfe, priv->id, queuenum) < n_segs))) {
4925 + if (!tried) {
4926 + __hif_lib_update_credit(&priv->client, queuenum);
4927 + tried = 1;
4928 + goto try_again;
4929 + }
4930 +#ifdef PFE_ETH_TX_STATS
4931 + if (__hif_tx_avail(&pfe->hif) < n_desc) {
4932 + priv->stop_queue_hif[queuenum]++;
4933 + } else if (hif_lib_tx_avail(&priv->client, queuenum) < n_desc) {
4934 + priv->stop_queue_hif_client[queuenum]++;
4935 + } else if (hif_lib_tx_credit_avail(pfe, priv->id, queuenum) <
4936 + n_segs) {
4937 + priv->stop_queue_credit[queuenum]++;
4938 + }
4939 + priv->stop_queue_total[queuenum]++;
4940 +#endif
4941 + netif_tx_stop_queue(tx_queue);
4942 +
4943 + kt = ktime_set(0, LS1012A_TX_FAST_RECOVERY_TIMEOUT_MS *
4944 + NSEC_PER_MSEC);
4945 + hrtimer_start(&priv->fast_tx_timeout[queuenum].timer, kt,
4946 + HRTIMER_MODE_REL);
4947 + return -1;
4948 + } else {
4949 + return 0;
4950 + }
4951 +}
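
The guard above gates transmission on three independent resources and, when short, stops the queue and arms a 3 ms hrtimer (LS1012A_TX_FAST_RECOVERY_TIMEOUT_MS) so it is rewoken even without further TX completions. A condensed sketch of just the availability check; the helper is illustrative, not driver API:

	/* Illustrative form of the three availability checks made above. */
	static inline bool example_tx_can_proceed(int hif_free, int client_free,
						  int credits, int n_desc,
						  int n_segs)
	{
		return hif_free >= n_desc &&	/* shared HIF descriptor ring */
		       client_free >= n_desc &&	/* per-queue client ring */
		       credits >= n_segs;	/* TMU flow-control credits */
	}
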
4952 +
4953 +#define SA_MAX_OP 2
4954 +/* pfe_hif_send_packet
4955 + *
4956 + * At this level, if TX fails, we drop the packet
4957 + */
4958 +static void pfe_hif_send_packet(struct sk_buff *skb, struct pfe_eth_priv_s
4959 + *priv, int queuenum)
4960 +{
4961 + struct skb_shared_info *sh = skb_shinfo(skb);
4962 + unsigned int nr_frags;
4963 + u32 ctrl = 0;
4964 +
4965 + netif_info(priv, tx_queued, priv->ndev, "%s\n", __func__);
4966 +
4967 + if (skb_is_gso(skb)) {
4968 + priv->stats.tx_dropped++;
4969 + return;
4970 + }
4971 +
4972 + if (skb->ip_summed == CHECKSUM_PARTIAL)
4973 + ctrl = HIF_CTRL_TX_CHECKSUM;
4974 +
4975 + nr_frags = sh->nr_frags;
4976 +
4977 + if (nr_frags) {
4978 + skb_frag_t *f;
4979 + int i;
4980 +
4981 + __hif_lib_xmit_pkt(&priv->client, queuenum, skb->data,
4982 + skb_headlen(skb), ctrl, HIF_FIRST_BUFFER,
4983 + skb);
4984 +
4985 + for (i = 0; i < nr_frags - 1; i++) {
4986 + f = &sh->frags[i];
4987 + __hif_lib_xmit_pkt(&priv->client, queuenum,
4988 + skb_frag_address(f),
4989 + skb_frag_size(f),
4990 + 0x0, 0x0, skb);
4991 + }
4992 +
4993 + f = &sh->frags[i];
4994 +
4995 + __hif_lib_xmit_pkt(&priv->client, queuenum,
4996 + skb_frag_address(f), skb_frag_size(f),
4997 + 0x0, HIF_LAST_BUFFER | HIF_DATA_VALID,
4998 + skb);
4999 +
5000 + netif_info(priv, tx_queued, priv->ndev,
5001 + "%s: pkt sent successfully skb:%p nr_frags:%d len:%d\n",
5002 + __func__, skb, nr_frags, skb->len);
5003 + } else {
5004 + __hif_lib_xmit_pkt(&priv->client, queuenum, skb->data,
5005 + skb->len, ctrl, HIF_FIRST_BUFFER |
5006 + HIF_LAST_BUFFER | HIF_DATA_VALID,
5007 + skb);
5008 + netif_info(priv, tx_queued, priv->ndev,
5009 + "%s: pkt sent successfully skb:%p len:%d\n",
5010 + __func__, skb, skb->len);
5011 + }
5012 + hif_tx_dma_start();
5013 + priv->stats.tx_packets++;
5014 + priv->stats.tx_bytes += skb->len;
5015 + hif_lib_tx_credit_use(pfe, priv->id, queuenum, 1);
5016 +}
5017 +
5018 +/* pfe_eth_flush_txQ
5019 + */
5020 +static void pfe_eth_flush_txQ(struct pfe_eth_priv_s *priv, int tx_q_num, int
5021 + from_tx, int n_desc)
5022 +{
5023 + struct sk_buff *skb;
5024 + struct netdev_queue *tx_queue = netdev_get_tx_queue(priv->ndev,
5025 + tx_q_num);
5026 + unsigned int flags;
5027 +
5028 + netif_info(priv, tx_done, priv->ndev, "%s\n", __func__);
5029 +
5030 + if (!from_tx)
5031 + __netif_tx_lock_bh(tx_queue);
5032 +
5033 + /* Clean HIF and client queue */
5034 + while ((skb = hif_lib_tx_get_next_complete(&priv->client,
5035 + tx_q_num, &flags,
5036 + HIF_TX_DESC_NT))) {
5037 + if (flags & HIF_DATA_VALID)
5038 + dev_kfree_skb_any(skb);
5039 + }
5040 + if (!from_tx)
5041 + __netif_tx_unlock_bh(tx_queue);
5042 +}
5043 +
5044 +/* pfe_eth_flush_tx
5045 + */
5046 +static void pfe_eth_flush_tx(struct pfe_eth_priv_s *priv)
5047 +{
5048 + int ii;
5049 +
5050 + netif_info(priv, tx_done, priv->ndev, "%s\n", __func__);
5051 +
5052 + for (ii = 0; ii < emac_txq_cnt; ii++) {
5053 + pfe_eth_flush_txQ(priv, ii, 0, 0);
5054 + __hif_lib_update_credit(&priv->client, ii);
5055 + }
5056 +}
5057 +
5058 +void pfe_tx_get_req_desc(struct sk_buff *skb, unsigned int *n_desc, unsigned int
5059 + *n_segs)
5060 +{
5061 + struct skb_shared_info *sh = skb_shinfo(skb);
5062 +
5063 + /* Scattered data */
5064 + if (sh->nr_frags) {
5065 + *n_desc = sh->nr_frags + 1;
5066 + *n_segs = 1;
5067 + /* Regular case */
5068 + } else {
5069 + *n_desc = 1;
5070 + *n_segs = 1;
5071 + }
5072 +}
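
For example, under the rule above an skb with three page fragments needs four HIF descriptors (one for the linear head plus one per fragment) but still counts as a single TMU credit segment:

	unsigned int n_desc, n_segs;

	/* For an skb with skb_shinfo(skb)->nr_frags == 3: */
	pfe_tx_get_req_desc(skb, &n_desc, &n_segs);	/* n_desc == 4, n_segs == 1 */
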
5073 +
5074 +/* pfe_eth_send_packet
5075 + */
5076 +static int pfe_eth_send_packet(struct sk_buff *skb, struct net_device *ndev)
5077 +{
5078 + struct pfe_eth_priv_s *priv = netdev_priv(ndev);
5079 + int tx_q_num = skb_get_queue_mapping(skb);
5080 + int n_desc, n_segs;
5081 + struct netdev_queue *tx_queue = netdev_get_tx_queue(priv->ndev,
5082 + tx_q_num);
5083 +
5084 + netif_info(priv, tx_queued, ndev, "%s\n", __func__);
5085 +
5086 + if ((!skb_is_gso(skb)) && (skb_headroom(skb) < (PFE_PKT_HEADER_SZ +
5087 + sizeof(unsigned long)))) {
5088 + netif_warn(priv, tx_err, priv->ndev, "%s: copying skb\n",
5089 + __func__);
5090 +
5091 +		if (pskb_expand_head(skb, PFE_PKT_HEADER_SZ +
5092 +				     sizeof(unsigned long), 0, GFP_ATOMIC)) {
5093 +			/* No need to re-transmit, no way to recover */
5094 + kfree_skb(skb);
5095 + priv->stats.tx_dropped++;
5096 + return NETDEV_TX_OK;
5097 + }
5098 + }
5099 +
5100 + pfe_tx_get_req_desc(skb, &n_desc, &n_segs);
5101 +
5102 + hif_tx_lock(&pfe->hif);
5103 + if (unlikely(pfe_eth_might_stop_tx(priv, tx_q_num, tx_queue, n_desc,
5104 + n_segs))) {
5105 +#ifdef PFE_ETH_TX_STATS
5106 + if (priv->was_stopped[tx_q_num]) {
5107 + priv->clean_fail[tx_q_num]++;
5108 + priv->was_stopped[tx_q_num] = 0;
5109 + }
5110 +#endif
5111 + hif_tx_unlock(&pfe->hif);
5112 + return NETDEV_TX_BUSY;
5113 + }
5114 +
5115 + pfe_hif_send_packet(skb, priv, tx_q_num);
5116 +
5117 + hif_tx_unlock(&pfe->hif);
5118 +
5119 + tx_queue->trans_start = jiffies;
5120 +
5121 +#ifdef PFE_ETH_TX_STATS
5122 + priv->was_stopped[tx_q_num] = 0;
5123 +#endif
5124 +
5125 + return NETDEV_TX_OK;
5126 +}
5127 +
5128 +/* pfe_eth_select_queue
5129 + *
5130 + */
5131 +static u16 pfe_eth_select_queue(struct net_device *ndev, struct sk_buff *skb,
5132 + struct net_device *sb_dev)
5133 +{
5134 + struct pfe_eth_priv_s *priv = netdev_priv(ndev);
5135 +
5136 + return pfe_eth_get_queuenum(priv, skb);
5137 +}
5138 +
5139 +/* pfe_eth_get_stats
5140 + */
5141 +static struct net_device_stats *pfe_eth_get_stats(struct net_device *ndev)
5142 +{
5143 + struct pfe_eth_priv_s *priv = netdev_priv(ndev);
5144 +
5145 + netif_info(priv, drv, ndev, "%s\n", __func__);
5146 +
5147 + return &priv->stats;
5148 +}
5149 +
5150 +/* pfe_eth_set_mac_address
5151 + */
5152 +static int pfe_eth_set_mac_address(struct net_device *ndev, void *addr)
5153 +{
5154 + struct pfe_eth_priv_s *priv = netdev_priv(ndev);
5155 + struct sockaddr *sa = addr;
5156 +
5157 + netif_info(priv, drv, ndev, "%s\n", __func__);
5158 +
5159 + if (!is_valid_ether_addr(sa->sa_data))
5160 + return -EADDRNOTAVAIL;
5161 +
5162 + memcpy(ndev->dev_addr, sa->sa_data, ETH_ALEN);
5163 +
5164 + gemac_set_laddrN(priv->EMAC_baseaddr,
5165 + (struct pfe_mac_addr *)ndev->dev_addr, 1);
5166 +
5167 + return 0;
5168 +}
5169 +
5170 +/* pfe_eth_enet_addr_byte_mac
5171 + */
5172 +int pfe_eth_enet_addr_byte_mac(u8 *enet_byte_addr,
5173 + struct pfe_mac_addr *enet_addr)
5174 +{
5175 + if (!enet_byte_addr || !enet_addr) {
5176 + return -1;
5177 +
5178 + } else {
5179 + enet_addr->bottom = enet_byte_addr[0] |
5180 + (enet_byte_addr[1] << 8) |
5181 + (enet_byte_addr[2] << 16) |
5182 + (enet_byte_addr[3] << 24);
5183 + enet_addr->top = enet_byte_addr[4] |
5184 + (enet_byte_addr[5] << 8);
5185 + return 0;
5186 + }
5187 +}
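
A worked example of the packing above: the address 00:11:22:33:44:55 packs little-end-first into each register, giving bottom = 0x33221100 and top = 0x00005544:

	u8 mac[ETH_ALEN] = { 0x00, 0x11, 0x22, 0x33, 0x44, 0x55 };
	struct pfe_mac_addr packed;

	pfe_eth_enet_addr_byte_mac(mac, &packed);
	/* packed.bottom == 0x33221100, packed.top == 0x00005544 */
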
5188 +
5189 +/* pfe_eth_set_multi
5190 + */
5191 +static void pfe_eth_set_multi(struct net_device *ndev)
5192 +{
5193 + struct pfe_eth_priv_s *priv = netdev_priv(ndev);
5194 + struct pfe_mac_addr hash_addr; /* hash register structure */
5195 + /* specific mac address register structure */
5196 + struct pfe_mac_addr spec_addr;
5197 + int result; /* index into hash register to set.. */
5198 + int uc_count = 0;
5199 + struct netdev_hw_addr *ha;
5200 +
5201 + if (ndev->flags & IFF_PROMISC) {
5202 + netif_info(priv, drv, ndev, "entering promiscuous mode\n");
5203 +
5204 + priv->promisc = 1;
5205 + gemac_enable_copy_all(priv->EMAC_baseaddr);
5206 + } else {
5207 + priv->promisc = 0;
5208 + gemac_disable_copy_all(priv->EMAC_baseaddr);
5209 + }
5210 +
5211 + /* Enable broadcast frame reception if required. */
5212 + if (ndev->flags & IFF_BROADCAST) {
5213 + gemac_allow_broadcast(priv->EMAC_baseaddr);
5214 + } else {
5215 + netif_info(priv, drv, ndev,
5216 + "disabling broadcast frame reception\n");
5217 +
5218 + gemac_no_broadcast(priv->EMAC_baseaddr);
5219 + }
5220 +
5221 + if (ndev->flags & IFF_ALLMULTI) {
5222 + /* Set the hash to rx all multicast frames */
5223 + hash_addr.bottom = 0xFFFFFFFF;
5224 + hash_addr.top = 0xFFFFFFFF;
5225 + gemac_set_hash(priv->EMAC_baseaddr, &hash_addr);
5226 + netdev_for_each_uc_addr(ha, ndev) {
5227 + if (uc_count >= MAX_UC_SPEC_ADDR_REG)
5228 + break;
5229 + pfe_eth_enet_addr_byte_mac(ha->addr, &spec_addr);
5230 + gemac_set_laddrN(priv->EMAC_baseaddr, &spec_addr,
5231 + uc_count + 2);
5232 + uc_count++;
5233 + }
5234 + } else if ((netdev_mc_count(ndev) > 0) || (netdev_uc_count(ndev))) {
5235 + u8 *addr;
5236 +
5237 + hash_addr.bottom = 0;
5238 + hash_addr.top = 0;
5239 +
5240 + netdev_for_each_mc_addr(ha, ndev) {
5241 + addr = ha->addr;
5242 +
5243 + netif_info(priv, drv, ndev,
5244 +				   "adding multicast address %02x:%02x:%02x:%02x:%02x:%02x to gem filter\n",
5245 + addr[0], addr[1], addr[2],
5246 + addr[3], addr[4], addr[5]);
5247 +
5248 + result = pfe_eth_get_hash(addr);
5249 +
5250 + if (result < EMAC_HASH_REG_BITS) {
5251 + if (result < 32)
5252 + hash_addr.bottom |= (1 << result);
5253 + else
5254 + hash_addr.top |= (1 << (result - 32));
5255 + } else {
5256 + break;
5257 + }
5258 + }
5259 +
5260 + uc_count = -1;
5261 + netdev_for_each_uc_addr(ha, ndev) {
5262 + addr = ha->addr;
5263 +
5264 + if (++uc_count < MAX_UC_SPEC_ADDR_REG) {
5265 + netdev_info(ndev,
5266 + "adding unicast address %02x:%02x:%02x:%02x:%02x:%02x to gem filter\n",
5267 + addr[0], addr[1], addr[2],
5268 + addr[3], addr[4], addr[5]);
5269 + pfe_eth_enet_addr_byte_mac(addr, &spec_addr);
5270 + gemac_set_laddrN(priv->EMAC_baseaddr,
5271 + &spec_addr, uc_count + 2);
5272 + } else {
5273 + netif_info(priv, drv, ndev,
5274 + "adding unicast address %02x:%02x:%02x:%02x:%02x:%02x to gem hash\n",
5275 + addr[0], addr[1], addr[2],
5276 + addr[3], addr[4], addr[5]);
5277 +
5278 + result = pfe_eth_get_hash(addr);
5279 + if (result >= EMAC_HASH_REG_BITS) {
5280 + break;
5281 +
5282 + } else {
5283 + if (result < 32)
5284 + hash_addr.bottom |= (1 <<
5285 + result);
5286 + else
5287 + hash_addr.top |= (1 <<
5288 + (result - 32));
5289 + }
5290 + }
5291 + }
5292 +
5293 + gemac_set_hash(priv->EMAC_baseaddr, &hash_addr);
5294 + }
5295 +
5296 +	if (netdev_uc_count(ndev) < MAX_UC_SPEC_ADDR_REG) {
5297 + /*
5298 + * Check if there are any specific address HW registers that
5299 + * need to be flushed
5300 + */
5301 + for (uc_count = netdev_uc_count(ndev); uc_count <
5302 + MAX_UC_SPEC_ADDR_REG; uc_count++)
5303 + gemac_clear_laddrN(priv->EMAC_baseaddr, uc_count + 2);
5304 + }
5305 +
5306 + if (ndev->flags & IFF_LOOPBACK)
5307 + gemac_set_loop(priv->EMAC_baseaddr, LB_LOCAL);
5308 +}
5309 +
5310 +/* pfe_eth_set_features
5311 + */
5312 +static int pfe_eth_set_features(struct net_device *ndev, netdev_features_t
5313 + features)
5314 +{
5315 + struct pfe_eth_priv_s *priv = netdev_priv(ndev);
5316 + int rc = 0;
5317 +
5318 + if (features & NETIF_F_RXCSUM)
5319 + gemac_enable_rx_checksum_offload(priv->EMAC_baseaddr);
5320 + else
5321 + gemac_disable_rx_checksum_offload(priv->EMAC_baseaddr);
5322 + return rc;
5323 +}
5324 +
5325 +/* pfe_eth_fast_tx_timeout
5326 + */
5327 +static enum hrtimer_restart pfe_eth_fast_tx_timeout(struct hrtimer *timer)
5328 +{
5329 + struct pfe_eth_fast_timer *fast_tx_timeout = container_of(timer, struct
5330 + pfe_eth_fast_timer,
5331 + timer);
5332 + struct pfe_eth_priv_s *priv = container_of(fast_tx_timeout->base,
5333 + struct pfe_eth_priv_s,
5334 + fast_tx_timeout);
5335 + struct netdev_queue *tx_queue = netdev_get_tx_queue(priv->ndev,
5336 + fast_tx_timeout->queuenum);
5337 +
5338 + if (netif_tx_queue_stopped(tx_queue)) {
5339 +#ifdef PFE_ETH_TX_STATS
5340 + priv->was_stopped[fast_tx_timeout->queuenum] = 1;
5341 +#endif
5342 + netif_tx_wake_queue(tx_queue);
5343 + }
5344 +
5345 + return HRTIMER_NORESTART;
5346 +}
5347 +
5348 +/* pfe_eth_fast_tx_timeout_init
5349 + */
5350 +static void pfe_eth_fast_tx_timeout_init(struct pfe_eth_priv_s *priv)
5351 +{
5352 + int i;
5353 +
5354 + for (i = 0; i < emac_txq_cnt; i++) {
5355 + priv->fast_tx_timeout[i].queuenum = i;
5356 + hrtimer_init(&priv->fast_tx_timeout[i].timer, CLOCK_MONOTONIC,
5357 + HRTIMER_MODE_REL);
5358 + priv->fast_tx_timeout[i].timer.function =
5359 + pfe_eth_fast_tx_timeout;
5360 + priv->fast_tx_timeout[i].base = priv->fast_tx_timeout;
5361 + }
5362 +}
5363 +
5364 +static struct sk_buff *pfe_eth_rx_skb(struct net_device *ndev,
5365 + struct pfe_eth_priv_s *priv,
5366 + unsigned int qno)
5367 +{
5368 + void *buf_addr;
5369 + unsigned int rx_ctrl;
5370 + unsigned int desc_ctrl = 0;
5371 + struct hif_ipsec_hdr *ipsec_hdr = NULL;
5372 + struct sk_buff *skb;
5373 + struct sk_buff *skb_frag, *skb_frag_last = NULL;
5374 + int length = 0, offset;
5375 +
5376 + skb = priv->skb_inflight[qno];
5377 +
5378 + if (skb) {
5379 + skb_frag_last = skb_shinfo(skb)->frag_list;
5380 + if (skb_frag_last) {
5381 + while (skb_frag_last->next)
5382 + skb_frag_last = skb_frag_last->next;
5383 + }
5384 + }
5385 +
5386 + while (!(desc_ctrl & CL_DESC_LAST)) {
5387 + buf_addr = hif_lib_receive_pkt(&priv->client, qno, &length,
5388 + &offset, &rx_ctrl, &desc_ctrl,
5389 + (void **)&ipsec_hdr);
5390 + if (!buf_addr)
5391 + goto incomplete;
5392 +
5393 +#ifdef PFE_ETH_NAPI_STATS
5394 + priv->napi_counters[NAPI_DESC_COUNT]++;
5395 +#endif
5396 +
5397 + /* First frag */
5398 + if (desc_ctrl & CL_DESC_FIRST) {
5399 + skb = build_skb(buf_addr, 0);
5400 + if (unlikely(!skb))
5401 + goto pkt_drop;
5402 +
5403 + skb_reserve(skb, offset);
5404 + skb_put(skb, length);
5405 + skb->dev = ndev;
5406 +
5407 + if ((ndev->features & NETIF_F_RXCSUM) && (rx_ctrl &
5408 + HIF_CTRL_RX_CHECKSUMMED))
5409 + skb->ip_summed = CHECKSUM_UNNECESSARY;
5410 + else
5411 + skb_checksum_none_assert(skb);
5412 +
5413 + } else {
5414 + /* Next frags */
5415 + if (unlikely(!skb)) {
5416 + pr_err("%s: NULL skb_inflight\n",
5417 + __func__);
5418 + goto pkt_drop;
5419 + }
5420 +
5421 + skb_frag = build_skb(buf_addr, 0);
5422 +
5423 + if (unlikely(!skb_frag)) {
5424 + kfree(buf_addr);
5425 + goto pkt_drop;
5426 + }
5427 +
5428 + skb_reserve(skb_frag, offset);
5429 + skb_put(skb_frag, length);
5430 +
5431 + skb_frag->dev = ndev;
5432 +
5433 + if (skb_shinfo(skb)->frag_list)
5434 + skb_frag_last->next = skb_frag;
5435 + else
5436 + skb_shinfo(skb)->frag_list = skb_frag;
5437 +
5438 + skb->truesize += skb_frag->truesize;
5439 + skb->data_len += length;
5440 + skb->len += length;
5441 + skb_frag_last = skb_frag;
5442 + }
5443 + }
5444 +
5445 + priv->skb_inflight[qno] = NULL;
5446 + return skb;
5447 +
5448 +incomplete:
5449 + priv->skb_inflight[qno] = skb;
5450 + return NULL;
5451 +
5452 +pkt_drop:
5453 + priv->skb_inflight[qno] = NULL;
5454 +
5455 + if (skb)
5456 + kfree_skb(skb);
5457 + else
5458 + kfree(buf_addr);
5459 +
5460 + priv->stats.rx_errors++;
5461 +
5462 + return NULL;
5463 +}
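
The receive path above reassembles multi-buffer frames by chaining continuation buffers onto skb_shinfo(skb)->frag_list. A minimal sketch of just that chaining step, with a hypothetical helper and error handling omitted:

	/* Hypothetical helper mirroring the frag_list chaining done above. */
	static void example_chain_frag(struct sk_buff *head, struct sk_buff *frag,
				       struct sk_buff **last)
	{
		if (skb_shinfo(head)->frag_list)
			(*last)->next = frag;			/* append to chain */
		else
			skb_shinfo(head)->frag_list = frag;	/* first fragment */

		head->truesize += frag->truesize;	/* account buffer memory */
		head->data_len += frag->len;
		head->len += frag->len;
		*last = frag;
	}
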
5464 +
5465 +/* pfe_eth_poll
5466 + */
5467 +static int pfe_eth_poll(struct pfe_eth_priv_s *priv, struct napi_struct *napi,
5468 + unsigned int qno, int budget)
5469 +{
5470 + struct net_device *ndev = priv->ndev;
5471 + struct sk_buff *skb;
5472 + int work_done = 0;
5473 + unsigned int len;
5474 +
5475 + netif_info(priv, intr, priv->ndev, "%s\n", __func__);
5476 +
5477 +#ifdef PFE_ETH_NAPI_STATS
5478 + priv->napi_counters[NAPI_POLL_COUNT]++;
5479 +#endif
5480 +
5481 + do {
5482 + skb = pfe_eth_rx_skb(ndev, priv, qno);
5483 +
5484 + if (!skb)
5485 + break;
5486 +
5487 + len = skb->len;
5488 +
5489 + /* Packet will be processed */
5490 + skb->protocol = eth_type_trans(skb, ndev);
5491 +
5492 + netif_receive_skb(skb);
5493 +
5494 + priv->stats.rx_packets++;
5495 + priv->stats.rx_bytes += len;
5496 +
5497 + work_done++;
5498 +
5499 +#ifdef PFE_ETH_NAPI_STATS
5500 + priv->napi_counters[NAPI_PACKET_COUNT]++;
5501 +#endif
5502 +
5503 + } while (work_done < budget);
5504 +
5505 +	/*
5506 +	 * If no Rx nor cleanup work was done, exit polling mode.
5507 +	 * No netif_running(dev) check is required here, as this is
5508 +	 * already checked in net/core/dev.c (2.6.33.5 kernel specific).
5509 +	 */
5510 + if (work_done < budget) {
5511 + napi_complete(napi);
5512 +
5513 + hif_lib_event_handler_start(&priv->client, EVENT_RX_PKT_IND,
5514 + qno);
5515 + }
5516 +#ifdef PFE_ETH_NAPI_STATS
5517 + else
5518 + priv->napi_counters[NAPI_FULL_BUDGET_COUNT]++;
5519 +#endif
5520 +
5521 + return work_done;
5522 +}
5523 +
5524 +/*
5525 + * pfe_eth_lro_poll
5526 + */
5527 +static int pfe_eth_lro_poll(struct napi_struct *napi, int budget)
5528 +{
5529 + struct pfe_eth_priv_s *priv = container_of(napi, struct pfe_eth_priv_s,
5530 + lro_napi);
5531 +
5532 + netif_info(priv, intr, priv->ndev, "%s\n", __func__);
5533 +
5534 + return pfe_eth_poll(priv, napi, 2, budget);
5535 +}
5536 +
5537 +/* pfe_eth_low_poll
5538 + */
5539 +static int pfe_eth_low_poll(struct napi_struct *napi, int budget)
5540 +{
5541 + struct pfe_eth_priv_s *priv = container_of(napi, struct pfe_eth_priv_s,
5542 + low_napi);
5543 +
5544 + netif_info(priv, intr, priv->ndev, "%s\n", __func__);
5545 +
5546 + return pfe_eth_poll(priv, napi, 1, budget);
5547 +}
5548 +
5549 +/* pfe_eth_high_poll
5550 + */
5551 +static int pfe_eth_high_poll(struct napi_struct *napi, int budget)
5552 +{
5553 + struct pfe_eth_priv_s *priv = container_of(napi, struct pfe_eth_priv_s,
5554 + high_napi);
5555 +
5556 + netif_info(priv, intr, priv->ndev, "%s\n", __func__);
5557 +
5558 + return pfe_eth_poll(priv, napi, 0, budget);
5559 +}
5560 +
5561 +static const struct net_device_ops pfe_netdev_ops = {
5562 + .ndo_open = pfe_eth_open,
5563 + .ndo_stop = pfe_eth_close,
5564 + .ndo_start_xmit = pfe_eth_send_packet,
5565 + .ndo_select_queue = pfe_eth_select_queue,
5566 + .ndo_set_rx_mode = pfe_eth_set_multi,
5567 + .ndo_set_mac_address = pfe_eth_set_mac_address,
5568 + .ndo_validate_addr = eth_validate_addr,
5569 + .ndo_change_mtu = pfe_eth_change_mtu,
5570 + .ndo_get_stats = pfe_eth_get_stats,
5571 + .ndo_set_features = pfe_eth_set_features,
5572 +};
5573 +
5574 +/* pfe_eth_init_one
5575 + */
5576 +static int pfe_eth_init_one(struct pfe *pfe,
5577 + struct ls1012a_pfe_platform_data *pfe_info,
5578 + int id)
5579 +{
5580 + struct net_device *ndev = NULL;
5581 + struct pfe_eth_priv_s *priv = NULL;
5582 + struct ls1012a_eth_platform_data *einfo;
5583 + int err;
5584 +
5585 + einfo = (struct ls1012a_eth_platform_data *)
5586 + pfe_info->ls1012a_eth_pdata;
5587 +
5588 +	/* einfo should never be NULL, but there is no harm in checking */
5589 + if (!einfo) {
5590 +		pr_err(
5591 +			"%s: pfe missing additional gemacs platform data\n",
5592 +			__func__);
5593 + err = -ENODEV;
5594 + goto err0;
5595 + }
5596 +
5597 + if (us)
5598 + emac_txq_cnt = EMAC_TXQ_CNT;
5599 + /* Create an ethernet device instance */
5600 + ndev = alloc_etherdev_mq(sizeof(*priv), emac_txq_cnt);
5601 +
5602 + if (!ndev) {
5603 + pr_err("%s: gemac %d device allocation failed\n",
5604 + __func__, einfo[id].gem_id);
5605 + err = -ENOMEM;
5606 + goto err0;
5607 + }
5608 +
5609 + priv = netdev_priv(ndev);
5610 + priv->ndev = ndev;
5611 + priv->id = einfo[id].gem_id;
5612 + priv->pfe = pfe;
5613 + priv->phy_node = einfo[id].phy_node;
5614 +
5615 + SET_NETDEV_DEV(priv->ndev, priv->pfe->dev);
5616 +
5617 + pfe->eth.eth_priv[id] = priv;
5618 +
5619 + /* Set the info in the priv to the current info */
5620 + priv->einfo = &einfo[id];
5621 + priv->EMAC_baseaddr = cbus_emac_base[id];
5622 + priv->GPI_baseaddr = cbus_gpi_base[id];
5623 +
5624 + spin_lock_init(&priv->lock);
5625 +
5626 + pfe_eth_fast_tx_timeout_init(priv);
5627 +
5628 +	/* Copy the station address into the dev structure */
5629 + memcpy(ndev->dev_addr, einfo[id].mac_addr, ETH_ALEN);
5630 +
5631 + if (us)
5632 + goto phy_init;
5633 +
5634 + ndev->mtu = 1500;
5635 +
5636 + /* Set MTU limits */
5637 + ndev->min_mtu = ETH_MIN_MTU;
5638 +
5639 +	/*
5640 +	 * Jumbo frames are not supported on LS1012A rev-1.0,
5641 +	 * so the max MTU must be restricted to the supported frame length.
5642 +	 */
5643 + if (pfe_errata_a010897)
5644 + ndev->max_mtu = JUMBO_FRAME_SIZE_V1 - ETH_HLEN - ETH_FCS_LEN;
5645 + else
5646 + ndev->max_mtu = JUMBO_FRAME_SIZE_V2 - ETH_HLEN - ETH_FCS_LEN;
5647 +
5648 +	/* Enable after checksum offload is validated */
5649 + ndev->hw_features = NETIF_F_RXCSUM | NETIF_F_IP_CSUM |
5650 + NETIF_F_IPV6_CSUM | NETIF_F_SG;
5651 +
5652 + /* enabled by default */
5653 + ndev->features = ndev->hw_features;
5654 +
5655 + priv->usr_features = ndev->features;
5656 +
5657 + ndev->netdev_ops = &pfe_netdev_ops;
5658 +
5659 + ndev->ethtool_ops = &pfe_ethtool_ops;
5660 +
5661 + /* Enable basic messages by default */
5662 + priv->msg_enable = NETIF_MSG_IFUP | NETIF_MSG_IFDOWN | NETIF_MSG_LINK |
5663 + NETIF_MSG_PROBE;
5664 +
5665 + netif_napi_add(ndev, &priv->low_napi, pfe_eth_low_poll,
5666 + HIF_RX_POLL_WEIGHT - 16);
5667 + netif_napi_add(ndev, &priv->high_napi, pfe_eth_high_poll,
5668 + HIF_RX_POLL_WEIGHT - 16);
5669 + netif_napi_add(ndev, &priv->lro_napi, pfe_eth_lro_poll,
5670 + HIF_RX_POLL_WEIGHT - 16);
5671 +
5672 + err = register_netdev(ndev);
5673 + if (err) {
5674 + netdev_err(ndev, "register_netdev() failed\n");
5675 + goto err1;
5676 + }
5677 +
5678 + if ((!(pfe_use_old_dts_phy) && !(priv->phy_node)) ||
5679 + ((pfe_use_old_dts_phy) &&
5680 + (priv->einfo->phy_flags & GEMAC_NO_PHY))) {
5681 + pr_info("%s: No PHY or fixed-link\n", __func__);
5682 + goto skip_phy_init;
5683 + }
5684 +
5685 +phy_init:
5686 + device_init_wakeup(&ndev->dev, WAKE_MAGIC);
5687 +
5688 + err = pfe_phy_init(ndev);
5689 + if (err) {
5690 + netdev_err(ndev, "%s: pfe_phy_init() failed\n",
5691 + __func__);
5692 + goto err2;
5693 + }
5694 +
5695 + if (us) {
5696 + if (priv->phydev)
5697 + phy_start(priv->phydev);
5698 + return 0;
5699 + }
5700 +
5701 + netif_carrier_on(ndev);
5702 +
5703 +skip_phy_init:
5704 + /* Create all the sysfs files */
5705 + if (pfe_eth_sysfs_init(ndev))
5706 + goto err3;
5707 +
5708 + netif_info(priv, probe, ndev, "%s: created interface, baseaddr: %p\n",
5709 + __func__, priv->EMAC_baseaddr);
5710 +
5711 + return 0;
5712 +
5713 +err3:
5714 + pfe_phy_exit(priv->ndev);
5715 +err2:
5716 + if (us)
5717 + goto err1;
5718 + unregister_netdev(ndev);
5719 +err1:
5720 + free_netdev(priv->ndev);
5721 +err0:
5722 + return err;
5723 +}
5724 +
5725 +/* pfe_eth_init
5726 + */
5727 +int pfe_eth_init(struct pfe *pfe)
5728 +{
5729 + int ii = 0;
5730 + int err;
5731 + struct ls1012a_pfe_platform_data *pfe_info;
5732 +
5733 + pr_info("%s\n", __func__);
5734 +
5735 + cbus_emac_base[0] = EMAC1_BASE_ADDR;
5736 + cbus_emac_base[1] = EMAC2_BASE_ADDR;
5737 +
5738 + cbus_gpi_base[0] = EGPI1_BASE_ADDR;
5739 + cbus_gpi_base[1] = EGPI2_BASE_ADDR;
5740 +
5741 + pfe_info = (struct ls1012a_pfe_platform_data *)
5742 + pfe->dev->platform_data;
5743 + if (!pfe_info) {
5744 + pr_err("%s: pfe missing additional platform data\n", __func__);
5745 + err = -ENODEV;
5746 + goto err_pdata;
5747 + }
5748 +
5749 + for (ii = 0; ii < NUM_GEMAC_SUPPORT; ii++) {
5750 + err = pfe_eth_mdio_init(pfe, pfe_info, ii);
5751 + if (err) {
5752 + pr_err("%s: pfe_eth_mdio_init() failed\n", __func__);
5753 + goto err_mdio_init;
5754 + }
5755 + }
5756 +
5757 + if (soc_device_match(ls1012a_rev1_soc_attr))
5758 + pfe_errata_a010897 = true;
5759 + else
5760 + pfe_errata_a010897 = false;
5761 +
5762 + for (ii = 0; ii < NUM_GEMAC_SUPPORT; ii++) {
5763 + err = pfe_eth_init_one(pfe, pfe_info, ii);
5764 + if (err)
5765 + goto err_eth_init;
5766 + }
5767 +
5768 + return 0;
5769 +
5770 +err_eth_init:
5771 + while (ii--) {
5772 + pfe_eth_exit_one(pfe->eth.eth_priv[ii]);
5773 + pfe_eth_mdio_exit(pfe, ii);
5774 + }
5775 +
5776 +err_mdio_init:
5777 +err_pdata:
5778 + return err;
5779 +}
5780 +
5781 +/* pfe_eth_exit_one
5782 + */
5783 +static void pfe_eth_exit_one(struct pfe_eth_priv_s *priv)
5784 +{
5785 + netif_info(priv, probe, priv->ndev, "%s\n", __func__);
5786 +
5787 + if (!us)
5788 + pfe_eth_sysfs_exit(priv->ndev);
5789 +
5790 + if ((!(pfe_use_old_dts_phy) && !(priv->phy_node)) ||
5791 + ((pfe_use_old_dts_phy) &&
5792 + (priv->einfo->phy_flags & GEMAC_NO_PHY))) {
5793 + pr_info("%s: No PHY or fixed-link\n", __func__);
5794 + goto skip_phy_exit;
5795 + }
5796 +
5797 + pfe_phy_exit(priv->ndev);
5798 +
5799 +skip_phy_exit:
5800 + if (!us)
5801 + unregister_netdev(priv->ndev);
5802 +
5803 + free_netdev(priv->ndev);
5804 +}
5805 +
5806 +/* pfe_eth_exit
5807 + */
5808 +void pfe_eth_exit(struct pfe *pfe)
5809 +{
5810 + int ii;
5811 +
5812 + pr_info("%s\n", __func__);
5813 +
5814 + for (ii = NUM_GEMAC_SUPPORT - 1; ii >= 0; ii--)
5815 + pfe_eth_exit_one(pfe->eth.eth_priv[ii]);
5816 +
5817 + for (ii = NUM_GEMAC_SUPPORT - 1; ii >= 0; ii--)
5818 + pfe_eth_mdio_exit(pfe, ii);
5819 +}
5820 --- /dev/null
5821 +++ b/drivers/staging/fsl_ppfe/pfe_eth.h
5822 @@ -0,0 +1,175 @@
5823 +/* SPDX-License-Identifier: GPL-2.0+ */
5824 +/*
5825 + * Copyright 2015-2016 Freescale Semiconductor, Inc.
5826 + * Copyright 2017 NXP
5827 + */
5828 +
5829 +#ifndef _PFE_ETH_H_
5830 +#define _PFE_ETH_H_
5831 +#include <linux/kernel.h>
5832 +#include <linux/netdevice.h>
5833 +#include <linux/etherdevice.h>
5834 +#include <linux/ethtool.h>
5835 +#include <linux/mii.h>
5836 +#include <linux/phy.h>
5837 +#include <linux/clk.h>
5838 +#include <linux/interrupt.h>
5839 +#include <linux/time.h>
5840 +
5841 +#define PFE_ETH_NAPI_STATS
5842 +#define PFE_ETH_TX_STATS
5843 +
5844 +#define PFE_ETH_FRAGS_MAX (65536 / HIF_RX_PKT_MIN_SIZE)
5845 +#define LRO_LEN_COUNT_MAX 32
5846 +#define LRO_NB_COUNT_MAX 32
5847 +
5848 +#define PFE_PAUSE_FLAG_ENABLE 1
5849 +#define PFE_PAUSE_FLAG_AUTONEG 2
5850 +
5851 +/* GEMAC configured by SW */
5852 +/* GEMAC configured by phy lines (not for MII/GMII) */
5853 +
5854 +#define GEMAC_SW_FULL_DUPLEX BIT(9)
5855 +#define GEMAC_SW_SPEED_10M (0 << 12)
5856 +#define GEMAC_SW_SPEED_100M BIT(12)
5857 +#define GEMAC_SW_SPEED_1G (2 << 12)
5858 +
5859 +#define GEMAC_NO_PHY BIT(0)
5860 +
5861 +struct ls1012a_eth_platform_data {
5862 + /* board specific information */
5863 + phy_interface_t mii_config;
5864 + u32 phy_flags;
5865 + u32 gem_id;
5866 + u32 phy_id;
5867 + u32 mdio_muxval;
5868 + u8 mac_addr[ETH_ALEN];
5869 + struct device_node *phy_node;
5870 +};
5871 +
5872 +struct ls1012a_mdio_platform_data {
5873 + int id;
5874 + int irq[32];
5875 + u32 phy_mask;
5876 + int mdc_div;
5877 +};
5878 +
5879 +struct ls1012a_pfe_platform_data {
5880 + struct ls1012a_eth_platform_data ls1012a_eth_pdata[3];
5881 + struct ls1012a_mdio_platform_data ls1012a_mdio_pdata[3];
5882 +};
5883 +
5884 +#define NUM_GEMAC_SUPPORT 2
5885 +#define DRV_NAME "pfe-eth"
5886 +#define DRV_VERSION "1.0"
5887 +
5888 +#define LS1012A_TX_FAST_RECOVERY_TIMEOUT_MS 3
5889 +#define TX_POLL_TIMEOUT_MS 1000
5890 +
5891 +#define EMAC_TXQ_CNT 16
5892 +#define EMAC_TXQ_DEPTH (HIF_TX_DESC_NT)
5893 +
5894 +#define JUMBO_FRAME_SIZE_V1 1900
5895 +#define JUMBO_FRAME_SIZE_V2 10258
5896 +/*
5897 + * Client Tx queue threshold, for txQ flush condition.
5898 + * It must be smaller than the queue size (in case we ever change it in the
5899 + * future).
5900 + */
5901 +#define HIF_CL_TX_FLUSH_MARK 32
5902 +
5903 +/*
5904 + * Max number of TX resources (HIF descriptors or skbs) that will be released
5905 + * in a single go during batch recycling.
5906 + * Should be lower than the flush mark so the SW can provide the HW with a
5907 + * continuous stream of packets instead of bursts.
5908 + */
5909 +#define TX_FREE_MAX_COUNT 16
5910 +#define EMAC_RXQ_CNT 3
5911 +#define EMAC_RXQ_DEPTH HIF_RX_DESC_NT
5912 +/* make sure clients can receive a full burst of packets */
5913 +#define EMAC_RMON_TXBYTES_POS 0x00
5914 +#define EMAC_RMON_RXBYTES_POS 0x14
5915 +
5916 +#define EMAC_QUEUENUM_MASK (emac_txq_cnt - 1)
5917 +#define EMAC_MDIO_TIMEOUT 1000
5918 +#define MAX_UC_SPEC_ADDR_REG 31
5919 +
5920 +struct pfe_eth_fast_timer {
5921 + int queuenum;
5922 + struct hrtimer timer;
5923 + void *base;
5924 +};
5925 +
5926 +struct pfe_eth_priv_s {
5927 + struct pfe *pfe;
5928 + struct hif_client_s client;
5929 + struct napi_struct lro_napi;
5930 + struct napi_struct low_napi;
5931 + struct napi_struct high_napi;
5932 + int low_tmu_q;
5933 + int high_tmu_q;
5934 + struct net_device_stats stats;
5935 + struct net_device *ndev;
5936 + int id;
5937 + int promisc;
5938 + unsigned int msg_enable;
5939 + unsigned int usr_features;
5940 +
5941 + spinlock_t lock; /* protect member variables */
5942 + unsigned int event_status;
5943 + int irq;
5944 + void *EMAC_baseaddr;
5945 + void *GPI_baseaddr;
5946 + /* PHY stuff */
5947 + struct phy_device *phydev;
5948 + int oldspeed;
5949 + int oldduplex;
5950 + int oldlink;
5951 + struct device_node *phy_node;
5952 + struct clk *gemtx_clk;
5953 + int wol;
5954 + int pause_flag;
5955 +
5956 + int default_priority;
5957 + struct pfe_eth_fast_timer fast_tx_timeout[EMAC_TXQ_CNT];
5958 +
5959 + struct ls1012a_eth_platform_data *einfo;
5960 + struct sk_buff *skb_inflight[EMAC_RXQ_CNT + 6];
5961 +
5962 +#ifdef PFE_ETH_TX_STATS
5963 + unsigned int stop_queue_total[EMAC_TXQ_CNT];
5964 + unsigned int stop_queue_hif[EMAC_TXQ_CNT];
5965 + unsigned int stop_queue_hif_client[EMAC_TXQ_CNT];
5966 + unsigned int stop_queue_credit[EMAC_TXQ_CNT];
5967 + unsigned int clean_fail[EMAC_TXQ_CNT];
5968 + unsigned int was_stopped[EMAC_TXQ_CNT];
5969 +#endif
5970 +
5971 +#ifdef PFE_ETH_NAPI_STATS
5972 + unsigned int napi_counters[NAPI_MAX_COUNT];
5973 +#endif
5974 + unsigned int frags_inflight[EMAC_RXQ_CNT + 6];
5975 +};
5976 +
5977 +struct pfe_eth {
5978 + struct pfe_eth_priv_s *eth_priv[3];
5979 +};
5980 +
5981 +struct pfe_mdio_priv_s {
5982 + void __iomem *mdio_base;
5983 + int mdc_div;
5984 + struct mii_bus *mii_bus;
5985 +};
5986 +
5987 +struct pfe_mdio {
5988 + struct pfe_mdio_priv_s *mdio_priv[3];
5989 +};
5990 +
5991 +int pfe_eth_init(struct pfe *pfe);
5992 +void pfe_eth_exit(struct pfe *pfe);
5993 +int pfe_eth_suspend(struct net_device *dev);
5994 +int pfe_eth_resume(struct net_device *dev);
5995 +int pfe_eth_mdio_reset(struct mii_bus *bus);
5996 +
5997 +#endif /* _PFE_ETH_H_ */
5998 --- /dev/null
5999 +++ b/drivers/staging/fsl_ppfe/pfe_firmware.c
6000 @@ -0,0 +1,398 @@
6001 +// SPDX-License-Identifier: GPL-2.0+
6002 +/*
6003 + * Copyright 2015-2016 Freescale Semiconductor, Inc.
6004 + * Copyright 2017 NXP
6005 + */
6006 +
6007 +/*
6008 + * @file
6009 + * Contains all the functions to handle parsing and loading of PE firmware
6010 + * files.
6011 + */
6012 +#include <linux/firmware.h>
6013 +
6014 +#include "pfe_mod.h"
6015 +#include "pfe_firmware.h"
6016 +#include "pfe/pfe.h"
6017 +#include <linux/of_platform.h>
6018 +#include <linux/of_address.h>
6019 +
6020 +static struct elf32_shdr *get_elf_section_header(const u8 *fw,
6021 + const char *section)
6022 +{
6023 + struct elf32_hdr *elf_hdr = (struct elf32_hdr *)fw;
6024 + struct elf32_shdr *shdr;
6025 + struct elf32_shdr *shdr_shstr;
6026 + Elf32_Off e_shoff = be32_to_cpu(elf_hdr->e_shoff);
6027 + Elf32_Half e_shentsize = be16_to_cpu(elf_hdr->e_shentsize);
6028 + Elf32_Half e_shnum = be16_to_cpu(elf_hdr->e_shnum);
6029 + Elf32_Half e_shstrndx = be16_to_cpu(elf_hdr->e_shstrndx);
6030 + Elf32_Off shstr_offset;
6031 + Elf32_Word sh_name;
6032 + const char *name;
6033 + int i;
6034 +
6035 + /* Section header strings */
6036 + shdr_shstr = (struct elf32_shdr *)((u8 *)elf_hdr + e_shoff + e_shstrndx
6037 + * e_shentsize);
6038 + shstr_offset = be32_to_cpu(shdr_shstr->sh_offset);
6039 +
6040 + for (i = 0; i < e_shnum; i++) {
6041 + shdr = (struct elf32_shdr *)((u8 *)elf_hdr + e_shoff
6042 + + i * e_shentsize);
6043 +
6044 + sh_name = be32_to_cpu(shdr->sh_name);
6045 +
6046 + name = (const char *)((u8 *)elf_hdr + shstr_offset + sh_name);
6047 +
6048 + if (!strcmp(name, section))
6049 + return shdr;
6050 + }
6051 +
6052 + pr_err("%s: didn't find section %s\n", __func__, section);
6053 +
6054 + return NULL;
6055 +}
6056 +
6057 +#if defined(CFG_DIAGS)
6058 +static int pfe_get_diags_info(const u8 *fw, struct pfe_diags_info
6059 + *diags_info)
6060 +{
6061 + struct elf32_shdr *shdr;
6062 + unsigned long offset, size;
6063 +
6064 + shdr = get_elf_section_header(fw, ".pfe_diags_str");
6065 + if (shdr) {
6066 + offset = be32_to_cpu(shdr->sh_offset);
6067 + size = be32_to_cpu(shdr->sh_size);
6068 + diags_info->diags_str_base = be32_to_cpu(shdr->sh_addr);
6069 + diags_info->diags_str_size = size;
6070 +		diags_info->diags_str_array = kmemdup(fw + offset, size, GFP_KERNEL);
6071 +		if (!diags_info->diags_str_array)
6072 +			return -ENOMEM;
6072 +
6073 + return 0;
6074 + } else {
6075 + return -1;
6076 + }
6077 +}
6078 +#endif
6079 +
6080 +static void pfe_check_version_info(const u8 *fw)
6081 +{
6083 + const u8 *elf_data = fw;
6084 + static char *version;
6085 +
6086 + struct elf32_shdr *shdr = get_elf_section_header(fw, ".version");
6087 +
6088 + if (shdr) {
6089 + if (!version) {
6090 + /*
6091 + * this is the first fw we load, use its version
6092 + * string as reference (whatever it is)
6093 + */
6094 + version = (char *)(elf_data +
6095 + be32_to_cpu(shdr->sh_offset));
6096 +
6097 + pr_info("PFE binary version: %s\n", version);
6098 + } else {
6099 +			/*
6100 +			 * We have already loaded at least one firmware, so the
6101 +			 * version check can run now.
6102 +			 */
6103 + if (strcmp(version, (char *)(elf_data +
6104 + be32_to_cpu(shdr->sh_offset)))) {
6105 + pr_info(
6106 + "WARNING: PFE firmware binaries from incompatible version\n");
6107 + }
6108 + }
6109 + } else {
6110 +		/*
6111 +		 * The version cannot be verified, a potential issue that
6112 +		 * should be reported.
6113 +		 */
6114 +		pr_info(
6115 +			"WARNING: PFE firmware version cannot be verified\n");
6116 + }
6117 +}
6118 +
6119 +/* PFE elf firmware loader.
6120 + * Loads an elf firmware image into a list of PE's (specified using a bitmask)
6121 + *
6122 + * @param pe_mask Mask of PE id's to load firmware to
6123 + * @param fw Pointer to the firmware image
6124 + *
6125 + * @return 0 on success, a negative value on error
6126 + *
6127 + */
6128 +int pfe_load_elf(int pe_mask, const u8 *fw, struct pfe *pfe)
6129 +{
6130 + struct elf32_hdr *elf_hdr = (struct elf32_hdr *)fw;
6131 + Elf32_Half sections = be16_to_cpu(elf_hdr->e_shnum);
6132 + struct elf32_shdr *shdr = (struct elf32_shdr *)(fw +
6133 + be32_to_cpu(elf_hdr->e_shoff));
6134 + int id, section;
6135 + int rc;
6136 +
6137 + pr_info("%s\n", __func__);
6138 +
6139 + /* Some sanity checks */
6140 + if (strncmp(&elf_hdr->e_ident[EI_MAG0], ELFMAG, SELFMAG)) {
6141 + pr_err("%s: incorrect elf magic number\n", __func__);
6142 + return -EINVAL;
6143 + }
6144 +
6145 + if (elf_hdr->e_ident[EI_CLASS] != ELFCLASS32) {
6146 + pr_err("%s: incorrect elf class(%x)\n", __func__,
6147 + elf_hdr->e_ident[EI_CLASS]);
6148 + return -EINVAL;
6149 + }
6150 +
6151 + if (elf_hdr->e_ident[EI_DATA] != ELFDATA2MSB) {
6152 + pr_err("%s: incorrect elf data(%x)\n", __func__,
6153 + elf_hdr->e_ident[EI_DATA]);
6154 + return -EINVAL;
6155 + }
6156 +
6157 + if (be16_to_cpu(elf_hdr->e_type) != ET_EXEC) {
6158 + pr_err("%s: incorrect elf file type(%x)\n", __func__,
6159 + be16_to_cpu(elf_hdr->e_type));
6160 + return -EINVAL;
6161 + }
6162 +
6163 + for (section = 0; section < sections; section++, shdr++) {
6164 + if (!(be32_to_cpu(shdr->sh_flags) & (SHF_WRITE | SHF_ALLOC |
6165 + SHF_EXECINSTR)))
6166 + continue;
6167 +
6168 + for (id = 0; id < MAX_PE; id++)
6169 + if (pe_mask & (1 << id)) {
6170 + rc = pe_load_elf_section(id, elf_hdr, shdr,
6171 + pfe->dev);
6172 + if (rc < 0)
6173 + goto err;
6174 + }
6175 + }
6176 +
6177 + pfe_check_version_info(fw);
6178 +
6179 + return 0;
6180 +
6181 +err:
6182 + return rc;
6183 +}
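
A typical call, matching how pfe_firmware_init() below uses it, loads one elf image into every PE of a group:

	/* Load the class firmware image into all class PEs. */
	rc = pfe_load_elf(CLASS_MASK, class_elf_fw, pfe);
	if (rc < 0)
		pr_err("%s: class firmware load failed\n", __func__);
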
6184 +
6185 +int get_firmware_in_fdt(const u8 **pe_fw, const char *name)
6186 +{
6187 + struct device_node *np;
6188 + const unsigned int *len;
6189 + const void *data;
6190 +
6191 + if (!strcmp(name, CLASS_FIRMWARE_FILENAME)) {
6192 + /* The firmware should be inside the device tree. */
6193 + np = of_find_compatible_node(NULL, NULL,
6194 + "fsl,pfe-class-firmware");
6195 + if (!np) {
6196 +			pr_info("Failed to find the fsl,pfe-class-firmware node\n");
6197 + return -ENOENT;
6198 + }
6199 +
6200 + data = of_get_property(np, "fsl,class-firmware", NULL);
6201 + if (data) {
6202 + len = of_get_property(np, "length", NULL);
6203 + pr_info("CLASS fw of length %d bytes loaded from FDT.\n",
6204 + be32_to_cpu(*len));
6205 + } else {
6206 +			pr_info("fsl,class-firmware not found\n");
6207 + return -ENOENT;
6208 + }
6209 + of_node_put(np);
6210 + *pe_fw = data;
6211 + } else if (!strcmp(name, TMU_FIRMWARE_FILENAME)) {
6212 + np = of_find_compatible_node(NULL, NULL,
6213 + "fsl,pfe-tmu-firmware");
6214 + if (!np) {
6215 +			pr_info("Failed to find the fsl,pfe-tmu-firmware node\n");
6216 + return -ENOENT;
6217 + }
6218 +
6219 + data = of_get_property(np, "fsl,tmu-firmware", NULL);
6220 + if (data) {
6221 + len = of_get_property(np, "length", NULL);
6222 + pr_info("TMU fw of length %d bytes loaded from FDT.\n",
6223 + be32_to_cpu(*len));
6224 + } else {
6225 +			pr_info("fsl,tmu-firmware not found\n");
6226 + return -ENOENT;
6227 + }
6228 + of_node_put(np);
6229 + *pe_fw = data;
6230 + } else if (!strcmp(name, UTIL_FIRMWARE_FILENAME)) {
6231 + np = of_find_compatible_node(NULL, NULL,
6232 + "fsl,pfe-util-firmware");
6233 + if (!np) {
6234 +			pr_info("Failed to find the fsl,pfe-util-firmware node\n");
6235 + return -ENOENT;
6236 + }
6237 +
6238 + data = of_get_property(np, "fsl,util-firmware", NULL);
6239 + if (data) {
6240 + len = of_get_property(np, "length", NULL);
6241 + pr_info("UTIL fw of length %d bytes loaded from FDT.\n",
6242 + be32_to_cpu(*len));
6243 + } else {
6244 +			pr_info("fsl,util-firmware not found\n");
6245 + return -ENOENT;
6246 + }
6247 + of_node_put(np);
6248 + *pe_fw = data;
6249 + } else {
6250 + pr_err("firmware:%s not known\n", name);
6251 + return -EINVAL;
6252 + }
6253 +
6254 + return 0;
6255 +}
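
Caller-side, the FDT path pairs the lookup with the loader; a minimal sketch mirroring pfe_firmware_init() below:

	const u8 *class_elf_fw;

	/* Use the class PE image embedded in the device tree, if present. */
	if (!get_firmware_in_fdt(&class_elf_fw, CLASS_FIRMWARE_FILENAME))
		rc = pfe_load_elf(CLASS_MASK, class_elf_fw, pfe);
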
6256 +
6257 +/* PFE firmware initialization.
6258 + * Loads the different firmware files from the device tree or filesystem.
6259 + * Initializes PE IMEM/DMEM and UTIL-PE DDR.
6260 + * Initializes control path symbol addresses (by looking them up in the elf
6261 + * firmware files).
6262 + * Takes the PEs out of reset.
6263 + *
6264 + * @return 0 on success, a negative value on error
6265 + *
6266 + */
6267 +int pfe_firmware_init(struct pfe *pfe)
6268 +{
6269 + const struct firmware *class_fw, *tmu_fw;
6270 + const u8 *class_elf_fw, *tmu_elf_fw;
6271 + int rc = 0, fs_load = 0;
6272 +#if !defined(CONFIG_FSL_PPFE_UTIL_DISABLED)
6273 + const struct firmware *util_fw;
6274 + const u8 *util_elf_fw;
6275 +
6276 +#endif
6277 +
6278 + pr_info("%s\n", __func__);
6279 +
6280 +#if !defined(CONFIG_FSL_PPFE_UTIL_DISABLED)
6281 + if (get_firmware_in_fdt(&class_elf_fw, CLASS_FIRMWARE_FILENAME) ||
6282 + get_firmware_in_fdt(&tmu_elf_fw, TMU_FIRMWARE_FILENAME) ||
6283 + get_firmware_in_fdt(&util_elf_fw, UTIL_FIRMWARE_FILENAME))
6284 +#else
6285 + if (get_firmware_in_fdt(&class_elf_fw, CLASS_FIRMWARE_FILENAME) ||
6286 + get_firmware_in_fdt(&tmu_elf_fw, TMU_FIRMWARE_FILENAME))
6287 +#endif
6288 + {
6289 +		pr_info("%s: PFE firmware not found in FDT\n", __func__);
6290 +		pr_info("%s: Trying to load firmware from the filesystem\n", __func__);
6291 +
6292 +		/* Look for the firmware in the filesystem */
6293 + fs_load = 1;
6294 + if (request_firmware(&class_fw, CLASS_FIRMWARE_FILENAME, pfe->dev)) {
6295 + pr_err("%s: request firmware %s failed\n", __func__,
6296 + CLASS_FIRMWARE_FILENAME);
6297 + rc = -ETIMEDOUT;
6298 + goto err0;
6299 + }
6300 + class_elf_fw = class_fw->data;
6301 +
6302 + if (request_firmware(&tmu_fw, TMU_FIRMWARE_FILENAME, pfe->dev)) {
6303 + pr_err("%s: request firmware %s failed\n", __func__,
6304 + TMU_FIRMWARE_FILENAME);
6305 + rc = -ETIMEDOUT;
6306 + goto err1;
6307 + }
6308 + tmu_elf_fw = tmu_fw->data;
6309 +
6310 +#if !defined(CONFIG_FSL_PPFE_UTIL_DISABLED)
6311 + if (request_firmware(&util_fw, UTIL_FIRMWARE_FILENAME, pfe->dev)) {
6312 + pr_err("%s: request firmware %s failed\n", __func__,
6313 + UTIL_FIRMWARE_FILENAME);
6314 + rc = -ETIMEDOUT;
6315 + goto err2;
6316 + }
6317 + util_elf_fw = util_fw->data;
6318 +#endif
6319 + }
6320 +
6321 + rc = pfe_load_elf(CLASS_MASK, class_elf_fw, pfe);
6322 + if (rc < 0) {
6323 + pr_err("%s: class firmware load failed\n", __func__);
6324 + goto err3;
6325 + }
6326 +
6327 +#if defined(CFG_DIAGS)
6328 + rc = pfe_get_diags_info(class_elf_fw, &pfe->diags.class_diags_info);
6329 + if (rc < 0) {
6330 + pr_warn(
6331 + "PFE diags won't be available for class PEs\n");
6332 + rc = 0;
6333 + }
6334 +#endif
6335 +
6336 + rc = pfe_load_elf(TMU_MASK, tmu_elf_fw, pfe);
6337 + if (rc < 0) {
6338 + pr_err("%s: tmu firmware load failed\n", __func__);
6339 + goto err3;
6340 + }
6341 +
6342 +#if !defined(CONFIG_FSL_PPFE_UTIL_DISABLED)
6343 + rc = pfe_load_elf(UTIL_MASK, util_elf_fw, pfe);
6344 + if (rc < 0) {
6345 + pr_err("%s: util firmware load failed\n", __func__);
6346 + goto err3;
6347 + }
6348 +
6349 +#if defined(CFG_DIAGS)
6350 + rc = pfe_get_diags_info(util_elf_fw, &pfe->diags.util_diags_info);
6351 + if (rc < 0) {
6352 + pr_warn(
6353 + "PFE diags won't be available for util PE\n");
6354 + rc = 0;
6355 + }
6356 +#endif
6357 +
6358 + util_enable();
6359 +#endif
6360 +
6361 + tmu_enable(0xf);
6362 + class_enable();
6363 +
6364 +err3:
6365 +#if !defined(CONFIG_FSL_PPFE_UTIL_DISABLED)
6366 + if (fs_load)
6367 + release_firmware(util_fw);
6368 +err2:
6369 +#endif
6370 + if (fs_load)
6371 + release_firmware(tmu_fw);
6372 +
6373 +err1:
6374 + if (fs_load)
6375 + release_firmware(class_fw);
6376 +
6377 +err0:
6378 + return rc;
6379 +}
6380 +
6381 +/* PFE firmware cleanup
6382 + * Puts the PEs in reset
6383 + */
6386 +void pfe_firmware_exit(struct pfe *pfe)
6387 +{
6388 + pr_info("%s\n", __func__);
6389 +
6390 + if (pe_reset_all(&pfe->ctrl) != 0)
6391 + pr_err("Error: Failed to stop PEs, PFE reload may not work correctly\n");
6392 +
6393 + class_disable();
6394 + tmu_disable(0xf);
6395 +#if !defined(CONFIG_FSL_PPFE_UTIL_DISABLED)
6396 + util_disable();
6397 +#endif
6398 +}
6399 --- /dev/null
6400 +++ b/drivers/staging/fsl_ppfe/pfe_firmware.h
6401 @@ -0,0 +1,21 @@
6402 +/* SPDX-License-Identifier: GPL-2.0+ */
6403 +/*
6404 + * Copyright 2015-2016 Freescale Semiconductor, Inc.
6405 + * Copyright 2017 NXP
6406 + */
6407 +
6408 +#ifndef _PFE_FIRMWARE_H_
6409 +#define _PFE_FIRMWARE_H_
6410 +
6411 +#define CLASS_FIRMWARE_FILENAME "ppfe_class_ls1012a.elf"
6412 +#define TMU_FIRMWARE_FILENAME "ppfe_tmu_ls1012a.elf"
6413 +#define UTIL_FIRMWARE_FILENAME "ppfe_util_ls1012a.elf"
6414 +
6415 +#define PFE_FW_CHECK_PASS 0
6416 +#define PFE_FW_CHECK_FAIL 1
6417 +#define NUM_PFE_FW 3
6418 +
6419 +int pfe_firmware_init(struct pfe *pfe);
6420 +void pfe_firmware_exit(struct pfe *pfe);
6421 +
6422 +#endif /* _PFE_FIRMWARE_H_ */
6423 --- /dev/null
6424 +++ b/drivers/staging/fsl_ppfe/pfe_hal.c
6425 @@ -0,0 +1,1517 @@
6426 +// SPDX-License-Identifier: GPL-2.0+
6427 +/*
6428 + * Copyright 2015-2016 Freescale Semiconductor, Inc.
6429 + * Copyright 2017 NXP
6430 + */
6431 +
6432 +#include "pfe_mod.h"
6433 +#include "pfe/pfe.h"
6434 +
6435 +/* A-010897: Jumbo frame is not supported */
6436 +extern bool pfe_errata_a010897;
6437 +
6438 +#define PFE_RCR_MAX_FL_MASK 0xC000FFFF
6439 +
6440 +void *cbus_base_addr;
6441 +void *ddr_base_addr;
6442 +unsigned long ddr_phys_base_addr;
6443 +unsigned int ddr_size;
6444 +
6445 +static struct pe_info pe[MAX_PE];
6446 +
6447 +/* Initializes the PFE library.
6448 + * Must be called before using any of the library functions.
6449 + *
6450 + * @param[in] cbus_base CBUS virtual base address (as mapped in
6451 + * the host CPU address space)
6452 + * @param[in] ddr_base PFE DDR range virtual base address (as
6453 + * mapped in the host CPU address space)
6454 + * @param[in] ddr_phys_base PFE DDR range physical base address (as
6455 + * mapped in platform)
6456 + * @param[in] size PFE DDR range size (as defined by the host
6457 + * software)
6458 + */
6459 +void pfe_lib_init(void *cbus_base, void *ddr_base, unsigned long ddr_phys_base,
6460 + unsigned int size)
6461 +{
6462 + cbus_base_addr = cbus_base;
6463 + ddr_base_addr = ddr_base;
6464 + ddr_phys_base_addr = ddr_phys_base;
6465 + ddr_size = size;
6466 +
6467 + pe[CLASS0_ID].dmem_base_addr = CLASS_DMEM_BASE_ADDR(0);
6468 + pe[CLASS0_ID].pmem_base_addr = CLASS_IMEM_BASE_ADDR(0);
6469 + pe[CLASS0_ID].pmem_size = CLASS_IMEM_SIZE;
6470 + pe[CLASS0_ID].mem_access_wdata = CLASS_MEM_ACCESS_WDATA;
6471 + pe[CLASS0_ID].mem_access_addr = CLASS_MEM_ACCESS_ADDR;
6472 + pe[CLASS0_ID].mem_access_rdata = CLASS_MEM_ACCESS_RDATA;
6473 +
6474 + pe[CLASS1_ID].dmem_base_addr = CLASS_DMEM_BASE_ADDR(1);
6475 + pe[CLASS1_ID].pmem_base_addr = CLASS_IMEM_BASE_ADDR(1);
6476 + pe[CLASS1_ID].pmem_size = CLASS_IMEM_SIZE;
6477 + pe[CLASS1_ID].mem_access_wdata = CLASS_MEM_ACCESS_WDATA;
6478 + pe[CLASS1_ID].mem_access_addr = CLASS_MEM_ACCESS_ADDR;
6479 + pe[CLASS1_ID].mem_access_rdata = CLASS_MEM_ACCESS_RDATA;
6480 +
6481 + pe[CLASS2_ID].dmem_base_addr = CLASS_DMEM_BASE_ADDR(2);
6482 + pe[CLASS2_ID].pmem_base_addr = CLASS_IMEM_BASE_ADDR(2);
6483 + pe[CLASS2_ID].pmem_size = CLASS_IMEM_SIZE;
6484 + pe[CLASS2_ID].mem_access_wdata = CLASS_MEM_ACCESS_WDATA;
6485 + pe[CLASS2_ID].mem_access_addr = CLASS_MEM_ACCESS_ADDR;
6486 + pe[CLASS2_ID].mem_access_rdata = CLASS_MEM_ACCESS_RDATA;
6487 +
6488 + pe[CLASS3_ID].dmem_base_addr = CLASS_DMEM_BASE_ADDR(3);
6489 + pe[CLASS3_ID].pmem_base_addr = CLASS_IMEM_BASE_ADDR(3);
6490 + pe[CLASS3_ID].pmem_size = CLASS_IMEM_SIZE;
6491 + pe[CLASS3_ID].mem_access_wdata = CLASS_MEM_ACCESS_WDATA;
6492 + pe[CLASS3_ID].mem_access_addr = CLASS_MEM_ACCESS_ADDR;
6493 + pe[CLASS3_ID].mem_access_rdata = CLASS_MEM_ACCESS_RDATA;
6494 +
6495 + pe[CLASS4_ID].dmem_base_addr = CLASS_DMEM_BASE_ADDR(4);
6496 + pe[CLASS4_ID].pmem_base_addr = CLASS_IMEM_BASE_ADDR(4);
6497 + pe[CLASS4_ID].pmem_size = CLASS_IMEM_SIZE;
6498 + pe[CLASS4_ID].mem_access_wdata = CLASS_MEM_ACCESS_WDATA;
6499 + pe[CLASS4_ID].mem_access_addr = CLASS_MEM_ACCESS_ADDR;
6500 + pe[CLASS4_ID].mem_access_rdata = CLASS_MEM_ACCESS_RDATA;
6501 +
6502 + pe[CLASS5_ID].dmem_base_addr = CLASS_DMEM_BASE_ADDR(5);
6503 + pe[CLASS5_ID].pmem_base_addr = CLASS_IMEM_BASE_ADDR(5);
6504 + pe[CLASS5_ID].pmem_size = CLASS_IMEM_SIZE;
6505 + pe[CLASS5_ID].mem_access_wdata = CLASS_MEM_ACCESS_WDATA;
6506 + pe[CLASS5_ID].mem_access_addr = CLASS_MEM_ACCESS_ADDR;
6507 + pe[CLASS5_ID].mem_access_rdata = CLASS_MEM_ACCESS_RDATA;
6508 +
6509 + pe[TMU0_ID].dmem_base_addr = TMU_DMEM_BASE_ADDR(0);
6510 + pe[TMU0_ID].pmem_base_addr = TMU_IMEM_BASE_ADDR(0);
6511 + pe[TMU0_ID].pmem_size = TMU_IMEM_SIZE;
6512 + pe[TMU0_ID].mem_access_wdata = TMU_MEM_ACCESS_WDATA;
6513 + pe[TMU0_ID].mem_access_addr = TMU_MEM_ACCESS_ADDR;
6514 + pe[TMU0_ID].mem_access_rdata = TMU_MEM_ACCESS_RDATA;
6515 +
6516 + pe[TMU1_ID].dmem_base_addr = TMU_DMEM_BASE_ADDR(1);
6517 + pe[TMU1_ID].pmem_base_addr = TMU_IMEM_BASE_ADDR(1);
6518 + pe[TMU1_ID].pmem_size = TMU_IMEM_SIZE;
6519 + pe[TMU1_ID].mem_access_wdata = TMU_MEM_ACCESS_WDATA;
6520 + pe[TMU1_ID].mem_access_addr = TMU_MEM_ACCESS_ADDR;
6521 + pe[TMU1_ID].mem_access_rdata = TMU_MEM_ACCESS_RDATA;
6522 +
6523 + pe[TMU3_ID].dmem_base_addr = TMU_DMEM_BASE_ADDR(3);
6524 + pe[TMU3_ID].pmem_base_addr = TMU_IMEM_BASE_ADDR(3);
6525 + pe[TMU3_ID].pmem_size = TMU_IMEM_SIZE;
6526 + pe[TMU3_ID].mem_access_wdata = TMU_MEM_ACCESS_WDATA;
6527 + pe[TMU3_ID].mem_access_addr = TMU_MEM_ACCESS_ADDR;
6528 + pe[TMU3_ID].mem_access_rdata = TMU_MEM_ACCESS_RDATA;
6529 +
6530 +#if !defined(CONFIG_FSL_PPFE_UTIL_DISABLED)
6531 + pe[UTIL_ID].dmem_base_addr = UTIL_DMEM_BASE_ADDR;
6532 + pe[UTIL_ID].mem_access_wdata = UTIL_MEM_ACCESS_WDATA;
6533 + pe[UTIL_ID].mem_access_addr = UTIL_MEM_ACCESS_ADDR;
6534 + pe[UTIL_ID].mem_access_rdata = UTIL_MEM_ACCESS_RDATA;
6535 +#endif
6536 +}
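
Callers map the CBUS and PFE DDR ranges first, then hand both the virtual and physical addresses to the library. A sketch follows; the struct pfe field names are assumed from the rest of this driver and should be treated as illustrative:

	/* Initialize the HAL with the already-mapped ranges (assumed fields). */
	pfe_lib_init(pfe->cbus_baseaddr, pfe->ddr_baseaddr,
		     pfe->ddr_phys_baseaddr, pfe->ddr_size);
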
6537 +
6538 +/* Writes a buffer to PE internal memory from the host
6539 + * through indirect access registers.
6540 + *
6541 + * @param[in] id PE identification (CLASS0_ID, ..., TMU0_ID,
6542 + * ..., UTIL_ID)
6543 + * @param[in] src Buffer source address
6544 + * @param[in] mem_access_addr	PE memory destination address (must be
6545 + *				32bit aligned)
6546 + * @param[in] len Number of bytes to copy
6547 + */
6548 +void pe_mem_memcpy_to32(int id, u32 mem_access_addr, const void *src,
6549 +			unsigned int len)
6550 +{
6551 + u32 offset = 0, val, addr;
6552 + unsigned int len32 = len >> 2;
6553 + int i;
6554 +
6555 + addr = mem_access_addr | PE_MEM_ACCESS_WRITE |
6556 + PE_MEM_ACCESS_BYTE_ENABLE(0, 4);
6557 +
6558 + for (i = 0; i < len32; i++, offset += 4, src += 4) {
6559 + val = *(u32 *)src;
6560 + writel(cpu_to_be32(val), pe[id].mem_access_wdata);
6561 + writel(addr + offset, pe[id].mem_access_addr);
6562 + }
6563 +
6564 + len = (len & 0x3);
6565 + if (len) {
6566 + val = 0;
6567 +
6568 + addr = (mem_access_addr | PE_MEM_ACCESS_WRITE |
6569 + PE_MEM_ACCESS_BYTE_ENABLE(0, len)) + offset;
6570 +
6571 + for (i = 0; i < len; i++, src++)
6572 + val |= (*(u8 *)src) << (8 * i);
6573 +
6574 + writel(cpu_to_be32(val), pe[id].mem_access_wdata);
6575 + writel(addr, pe[id].mem_access_addr);
6576 + }
6577 +}
6578 +
6579 +/* Writes a buffer to PE internal data memory (DMEM) from the host
6580 + * through indirect access registers.
6581 + * @param[in] id PE identification (CLASS0_ID, ..., TMU0_ID,
6582 + * ..., UTIL_ID)
6583 + * @param[in] src Buffer source address
6584 + * @param[in] dst DMEM destination address (must be 32bit
6585 + * aligned)
6586 + * @param[in] len Number of bytes to copy
6587 + */
6588 +void pe_dmem_memcpy_to32(int id, u32 dst, const void *src, unsigned int len)
6589 +{
6590 + pe_mem_memcpy_to32(id, pe[id].dmem_base_addr | dst |
6591 + PE_MEM_ACCESS_DMEM, src, len);
6592 +}
6593 +
6594 +/* Writes a buffer to PE internal program memory (PMEM) from the host
6595 + * through indirect access registers.
6596 + * @param[in] id PE identification (CLASS0_ID, ..., TMU0_ID,
6597 + * ..., TMU3_ID)
6598 + * @param[in] src Buffer source address
6599 + * @param[in] dst PMEM destination address (must be 32bit
6600 + * aligned)
6601 + * @param[in] len Number of bytes to copy
6602 + */
6603 +void pe_pmem_memcpy_to32(int id, u32 dst, const void *src, unsigned int len)
6604 +{
6605 + pe_mem_memcpy_to32(id, pe[id].pmem_base_addr | (dst & (pe[id].pmem_size
6606 + - 1)) | PE_MEM_ACCESS_IMEM, src, len);
6607 +}
6608 +
6609 +/* Reads PE internal program memory (IMEM) from the host
6610 + * through indirect access registers.
6611 + * @param[in] id PE identification (CLASS0_ID, ..., TMU0_ID,
6612 + * ..., TMU3_ID)
6613 + * @param[in] addr PMEM read address (must be aligned on size)
6614 + * @param[in] size Number of bytes to read (maximum 4, must not
6615 + * cross 32bit boundaries)
6616 + * @return the data read (in PE endianness, i.e BE).
6617 + */
6618 +u32 pe_pmem_read(int id, u32 addr, u8 size)
6619 +{
6620 + u32 offset = addr & 0x3;
6621 + u32 mask = 0xffffffff >> ((4 - size) << 3);
6622 + u32 val;
6623 +
6624 + addr = pe[id].pmem_base_addr | ((addr & ~0x3) & (pe[id].pmem_size - 1))
6625 + | PE_MEM_ACCESS_IMEM | PE_MEM_ACCESS_BYTE_ENABLE(offset, size);
6626 +
6627 + writel(addr, pe[id].mem_access_addr);
6628 + val = be32_to_cpu(readl(pe[id].mem_access_rdata));
6629 +
6630 + return (val >> (offset << 3)) & mask;
6631 +}
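
A worked example of the extraction math above: reading size = 2 at addr = 0x102 gives offset = 2 and mask = 0xffff, so the halfword comes from the upper 16 bits of the word read back:

	u32 addr = 0x102, size = 2;
	u32 offset = addr & 0x3;			/* 2 */
	u32 mask = 0xffffffff >> ((4 - size) << 3);	/* 0x0000ffff */
	u32 val = 0xaabbccdd;				/* word as read back */
	u32 out = (val >> (offset << 3)) & mask;	/* 0x0000aabb */
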
6632 +
6633 +/* Writes PE internal data memory (DMEM) from the host
6634 + * through indirect access registers.
6635 + * @param[in] id PE identification (CLASS0_ID, ..., TMU0_ID,
6636 + * ..., UTIL_ID)
6637 + * @param[in] addr DMEM write address (must be aligned on size)
6638 + * @param[in] val Value to write (in PE endianness, i.e BE)
6639 + * @param[in] size Number of bytes to write (maximum 4, must not
6640 + * cross 32bit boundaries)
6641 + */
6642 +void pe_dmem_write(int id, u32 val, u32 addr, u8 size)
6643 +{
6644 + u32 offset = addr & 0x3;
6645 +
6646 + addr = pe[id].dmem_base_addr | (addr & ~0x3) | PE_MEM_ACCESS_WRITE |
6647 + PE_MEM_ACCESS_DMEM | PE_MEM_ACCESS_BYTE_ENABLE(offset, size);
6648 +
6649 + /* Indirect access interface is byte swapping data being written */
6650 + writel(cpu_to_be32(val << (offset << 3)), pe[id].mem_access_wdata);
6651 + writel(addr, pe[id].mem_access_addr);
6652 +}
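
For example, the shutdown path earlier in this patch clears a per-interface 4-byte DMEM counter in each class PE with exactly this accessor (id and priv as in pfe_eth_shutdown()):

	/* Clear the CRC-validated counter for this interface in class PE 'id'. */
	pe_dmem_write(id, 0, CLASS_DM_CRC_VALIDATED + (priv->id * 4), 4);
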
6653 +
6654 +/* Reads PE internal data memory (DMEM) from the host
6655 + * through indirect access registers.
6656 + * @param[in] id PE identification (CLASS0_ID, ..., TMU0_ID,
6657 + * ..., UTIL_ID)
6658 + * @param[in] addr DMEM read address (must be aligned on size)
6659 + * @param[in] size Number of bytes to read (maximum 4, must not
6660 + * cross 32bit boundaries)
6661 + * @return the data read (in PE endianness, i.e BE).
6662 + */
6663 +u32 pe_dmem_read(int id, u32 addr, u8 size)
6664 +{
6665 + u32 offset = addr & 0x3;
6666 + u32 mask = 0xffffffff >> ((4 - size) << 3);
6667 + u32 val;
6668 +
6669 + addr = pe[id].dmem_base_addr | (addr & ~0x3) | PE_MEM_ACCESS_DMEM |
6670 + PE_MEM_ACCESS_BYTE_ENABLE(offset, size);
6671 +
6672 + writel(addr, pe[id].mem_access_addr);
6673 +
6674 + /* Indirect access interface is byte swapping data being read */
6675 + val = be32_to_cpu(readl(pe[id].mem_access_rdata));
6676 +
6677 + return (val >> (offset << 3)) & mask;
6678 +}
6679 +
6680 +/* This function is used to write to CLASS internal bus peripherals (ccu,
6681 + * pe-lem) from the host
6682 + * through indirect access registers.
6683 + * @param[in] val value to write
6684 + * @param[in] addr Address to write to (must be aligned on size)
6685 + * @param[in] size Number of bytes to write (1, 2 or 4)
6686 + *
6687 + */
6688 +void class_bus_write(u32 val, u32 addr, u8 size)
6689 +{
6690 + u32 offset = addr & 0x3;
6691 +
6692 + writel((addr & CLASS_BUS_ACCESS_BASE_MASK), CLASS_BUS_ACCESS_BASE);
6693 +
6694 + addr = (addr & ~CLASS_BUS_ACCESS_BASE_MASK) | PE_MEM_ACCESS_WRITE |
6695 + (size << 24);
6696 +
6697 + writel(cpu_to_be32(val << (offset << 3)), CLASS_BUS_ACCESS_WDATA);
6698 + writel(addr, CLASS_BUS_ACCESS_ADDR);
6699 +}
6700 +
6701 +/* Reads from CLASS internal bus peripherals (ccu, pe-lem) from the host
6702 + * through indirect access registers.
6703 + * @param[in] addr Address to read from (must be aligned on size)
6704 + * @param[in] size Number of bytes to read (1, 2 or 4)
6705 + * @return the read data
6706 + *
6707 + */
6708 +u32 class_bus_read(u32 addr, u8 size)
6709 +{
6710 + u32 offset = addr & 0x3;
6711 + u32 mask = 0xffffffff >> ((4 - size) << 3);
6712 + u32 val;
6713 +
6714 + writel((addr & CLASS_BUS_ACCESS_BASE_MASK), CLASS_BUS_ACCESS_BASE);
6715 +
6716 + addr = (addr & ~CLASS_BUS_ACCESS_BASE_MASK) | (size << 24);
6717 +
6718 + writel(addr, CLASS_BUS_ACCESS_ADDR);
6719 + val = be32_to_cpu(readl(CLASS_BUS_ACCESS_RDATA));
6720 +
6721 + return (val >> (offset << 3)) & mask;
6722 +}
6723 +
6724 +/* Writes data to the cluster memory (PE_LMEM)
6725 + * @param[in] dst PE LMEM destination address (must be 32bit aligned)
6726 + * @param[in] src Buffer source address
6727 + * @param[in] len Number of bytes to copy
6728 + */
6729 +void class_pe_lmem_memcpy_to32(u32 dst, const void *src, unsigned int len)
6730 +{
6731 + u32 len32 = len >> 2;
6732 + int i;
6733 +
6734 + for (i = 0; i < len32; i++, src += 4, dst += 4)
6735 + class_bus_write(*(u32 *)src, dst, 4);
6736 +
6737 + if (len & 0x2) {
6738 + class_bus_write(*(u16 *)src, dst, 2);
6739 + src += 2;
6740 + dst += 2;
6741 + }
6742 +
6743 + if (len & 0x1) {
6744 + class_bus_write(*(u8 *)src, dst, 1);
6745 + src++;
6746 + dst++;
6747 + }
6748 +}
6749 +
6750 +/* Writes value to the cluster memory (PE_LMEM)
6751 + * @param[in] dst PE LMEM destination address (must be 32bit aligned)
6752 + * @param[in] val Value to write
6753 + * @param[in] len Number of bytes to write
6754 + */
6755 +void class_pe_lmem_memset(u32 dst, int val, unsigned int len)
6756 +{
6757 + u32 len32 = len >> 2;
6758 + int i;
6759 +
6760 + val = val | (val << 8) | (val << 16) | (val << 24);
6761 +
6762 + for (i = 0; i < len32; i++, dst += 4)
6763 + class_bus_write(val, dst, 4);
6764 +
6765 + if (len & 0x2) {
6766 + class_bus_write(val, dst, 2);
6767 + dst += 2;
6768 + }
6769 +
6770 + if (len & 0x1) {
6771 + class_bus_write(val, dst, 1);
6772 + dst++;
6773 + }
6774 +}
6775 +
6776 +#if !defined(CONFIG_FSL_PPFE_UTIL_DISABLED)
6777 +
6778 +/* Writes UTIL program memory (DDR) from the host.
6779 + *
6780 + * @param[in] addr Address to write (virtual, must be aligned on size)
6781 + * @param[in] val		Value to write (in PE endianness, i.e. BE)
6782 + * @param[in] size Number of bytes to write (2 or 4)
6783 + */
6784 +static void util_pmem_write(u32 val, void *addr, u8 size)
6785 +{
6786 + void *addr64 = (void *)((unsigned long)addr & ~0x7);
6787 + unsigned long off = 8 - ((unsigned long)addr & 0x7) - size;
6788 +
6789 + /*
6790 + * IMEM should be loaded as a 64bit swapped value in a 64bit aligned
6791 + * location
6792 + */
6793 + if (size == 4)
6794 + writel(be32_to_cpu(val), addr64 + off);
6795 + else
6796 + writew(be16_to_cpu((u16)val), addr64 + off);
6797 +}
6798 +
6799 +/* Writes a buffer to UTIL program memory (DDR) from the host.
6800 + *
6801 + * @param[in] dst Address to write (virtual, must be at least 16bit
6802 + * aligned)
6803 + * @param[in] src		Buffer to write (in PE endianness, i.e. BE, must have
6804 + * same alignment as dst)
6805 + * @param[in] len Number of bytes to write (must be at least 16bit
6806 + * aligned)
6807 + */
6808 +static void util_pmem_memcpy(void *dst, const void *src, unsigned int len)
6809 +{
6810 + unsigned int len32;
6811 + int i;
6812 +
6813 + if ((unsigned long)src & 0x2) {
6814 + util_pmem_write(*(u16 *)src, dst, 2);
6815 + src += 2;
6816 + dst += 2;
6817 + len -= 2;
6818 + }
6819 +
6820 + len32 = len >> 2;
6821 +
6822 + for (i = 0; i < len32; i++, dst += 4, src += 4)
6823 + util_pmem_write(*(u32 *)src, dst, 4);
6824 +
6825 + if (len & 0x2)
6826 + util_pmem_write(*(u16 *)src, dst, len & 0x2);
6827 +}
6828 +#endif
6829 +
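+/* Worked example (illustrative) of the util_pmem_write() offset
+ * arithmetic above: IMEM expects 64-bit swapped values at 64-bit
+ * aligned locations, so off = 8 - (addr & 0x7) - size mirrors the byte
+ * position within each aligned 8-byte word. For a 4-byte write:
+ *
+ *	addr & 0x7 == 0  ->  off = 8 - 0 - 4 = 4
+ *	addr & 0x7 == 4  ->  off = 8 - 4 - 4 = 0
+ */
+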
6830 +/* Loads an elf section into pmem
6831 + * Code needs to be at least 16bit aligned and only PROGBITS sections are
6832 + * supported
6833 + *
6834 + * @param[in] id PE identification (CLASS0_ID, ..., TMU0_ID, ...,
6835 + * TMU3_ID)
6836 + * @param[in] data pointer to the elf firmware
6837 + * @param[in] shdr pointer to the elf section header
6838 + *
6839 + */
6840 +static int pe_load_pmem_section(int id, const void *data,
6841 + struct elf32_shdr *shdr)
6842 +{
6843 + u32 offset = be32_to_cpu(shdr->sh_offset);
6844 + u32 addr = be32_to_cpu(shdr->sh_addr);
6845 + u32 size = be32_to_cpu(shdr->sh_size);
6846 + u32 type = be32_to_cpu(shdr->sh_type);
6847 +
6848 +#if !defined(CONFIG_FSL_PPFE_UTIL_DISABLED)
6849 + if (id == UTIL_ID) {
6850 + pr_err("%s: unsupported pmem section for UTIL\n",
6851 + __func__);
6852 + return -EINVAL;
6853 + }
6854 +#endif
6855 +
6856 + if (((unsigned long)(data + offset) & 0x3) != (addr & 0x3)) {
6857 + pr_err(
6858 + "%s: load address(%x) and elf file address(%lx) don't have the same alignment\n"
6859 + , __func__, addr, (unsigned long)data + offset);
6860 +
6861 + return -EINVAL;
6862 + }
6863 +
6864 + if (addr & 0x1) {
6865 + pr_err("%s: load address(%x) is not 16bit aligned\n",
6866 + __func__, addr);
6867 + return -EINVAL;
6868 + }
6869 +
6870 + if (size & 0x1) {
6871 + pr_err("%s: load size(%x) is not 16bit aligned\n",
6872 + __func__, size);
6873 + return -EINVAL;
6874 + }
6875 +
6876 + switch (type) {
6877 + case SHT_PROGBITS:
6878 + pe_pmem_memcpy_to32(id, addr, data + offset, size);
6879 +
6880 + break;
6881 +
6882 + default:
6883 + pr_err("%s: unsupported section type(%x)\n", __func__,
6884 + type);
6885 + return -EINVAL;
6886 + }
6887 +
6888 + return 0;
6889 +}
6890 +
6891 +/* Loads an elf section into dmem
6892 + * Data needs to be at least 32bit aligned, NOBITS sections are correctly
6893 + * initialized to 0
6894 + *
6895 + * @param[in] id PE identification (CLASS0_ID, ..., TMU0_ID,
6896 + * ..., UTIL_ID)
6897 + * @param[in] data pointer to the elf firmware
6898 + * @param[in] shdr pointer to the elf section header
6899 + *
6900 + */
6901 +static int pe_load_dmem_section(int id, const void *data,
6902 + struct elf32_shdr *shdr)
6903 +{
6904 + u32 offset = be32_to_cpu(shdr->sh_offset);
6905 + u32 addr = be32_to_cpu(shdr->sh_addr);
6906 + u32 size = be32_to_cpu(shdr->sh_size);
6907 + u32 type = be32_to_cpu(shdr->sh_type);
6908 + u32 size32 = size >> 2;
6909 + int i;
6910 +
6911 + if (((unsigned long)(data + offset) & 0x3) != (addr & 0x3)) {
6912 + pr_err(
6913 + "%s: load address(%x) and elf file address(%lx) don't have the same alignment\n",
6914 + __func__, addr, (unsigned long)data + offset);
6915 +
6916 + return -EINVAL;
6917 + }
6918 +
6919 + if (addr & 0x3) {
6920 + pr_err("%s: load address(%x) is not 32bit aligned\n",
6921 + __func__, addr);
6922 + return -EINVAL;
6923 + }
6924 +
6925 + switch (type) {
6926 + case SHT_PROGBITS:
6927 + pe_dmem_memcpy_to32(id, addr, data + offset, size);
6928 + break;
6929 +
6930 + case SHT_NOBITS:
6931 + for (i = 0; i < size32; i++, addr += 4)
6932 + pe_dmem_write(id, 0, addr, 4);
6933 +
6934 + if (size & 0x3)
6935 + pe_dmem_write(id, 0, addr, size & 0x3);
6936 +
6937 + break;
6938 +
6939 + default:
6940 + pr_err("%s: unsupported section type(%x)\n", __func__,
6941 + type);
6942 + return -EINVAL;
6943 + }
6944 +
6945 + return 0;
6946 +}
6947 +
6948 +/* Loads an elf section into DDR
6949 + * Data needs to be at least 32bit aligned, NOBITS sections are correctly
6950 + * initialized to 0
6951 + *
6952 + * @param[in] id PE identification (CLASS0_ID, ..., TMU0_ID,
6953 + * ..., UTIL_ID)
6954 + * @param[in] data pointer to the elf firmware
6955 + * @param[in] shdr pointer to the elf section header
6956 + *
6957 + */
6958 +static int pe_load_ddr_section(int id, const void *data,
6959 + struct elf32_shdr *shdr,
6960 + struct device *dev) {
6961 + u32 offset = be32_to_cpu(shdr->sh_offset);
6962 + u32 addr = be32_to_cpu(shdr->sh_addr);
6963 + u32 size = be32_to_cpu(shdr->sh_size);
6964 + u32 type = be32_to_cpu(shdr->sh_type);
6965 + u32 flags = be32_to_cpu(shdr->sh_flags);
6966 +
6967 + switch (type) {
6968 + case SHT_PROGBITS:
6969 + if (flags & SHF_EXECINSTR) {
6970 + if (id <= CLASS_MAX_ID) {
6971 +			/* Do the loading only once in DDR */
6972 + if (id == CLASS0_ID) {
6973 + pr_err(
6974 + "%s: load address(%x) and elf file address(%lx) rcvd\n",
6975 + __func__, addr,
6976 + (unsigned long)data + offset);
6977 + if (((unsigned long)(data + offset)
6978 + & 0x3) != (addr & 0x3)) {
6979 + pr_err(
6980 + "%s: load address(%x) and elf file address(%lx) don't have the same alignment\n"
6981 + , __func__, addr,
6982 + (unsigned long)data + offset);
6983 +
6984 + return -EINVAL;
6985 + }
6986 +
6987 + if (addr & 0x1) {
6988 + pr_err(
6989 + "%s: load address(%x) is not 16bit aligned\n"
6990 + , __func__, addr);
6991 + return -EINVAL;
6992 + }
6993 +
6994 + if (size & 0x1) {
6995 + pr_err(
6996 + "%s: load length(%x) is not 16bit aligned\n"
6997 + , __func__, size);
6998 + return -EINVAL;
6999 + }
7000 + memcpy(DDR_PHYS_TO_VIRT(
7001 + DDR_PFE_TO_PHYS(addr)),
7002 + data + offset, size);
7003 + }
7004 +#if !defined(CONFIG_FSL_PPFE_UTIL_DISABLED)
7005 + } else if (id == UTIL_ID) {
7006 + if (((unsigned long)(data + offset) & 0x3)
7007 + != (addr & 0x3)) {
7008 + pr_err(
7009 + "%s: load address(%x) and elf file address(%lx) don't have the same alignment\n"
7010 + , __func__, addr,
7011 + (unsigned long)data + offset);
7012 +
7013 + return -EINVAL;
7014 + }
7015 +
7016 + if (addr & 0x1) {
7017 + pr_err(
7018 + "%s: load address(%x) is not 16bit aligned\n"
7019 + , __func__, addr);
7020 + return -EINVAL;
7021 + }
7022 +
7023 + if (size & 0x1) {
7024 + pr_err(
7025 + "%s: load length(%x) is not 16bit aligned\n"
7026 + , __func__, size);
7027 + return -EINVAL;
7028 + }
7029 +
7030 + util_pmem_memcpy(DDR_PHYS_TO_VIRT(
7031 + DDR_PFE_TO_PHYS(addr)),
7032 + data + offset, size);
7033 + }
7034 +#endif
7035 + } else {
7036 + pr_err(
7037 + "%s: unsupported ddr section type(%x) for PE(%d)\n"
7038 + , __func__, type, id);
7039 + return -EINVAL;
7040 + }
7041 +
7042 + } else {
7043 + memcpy(DDR_PHYS_TO_VIRT(DDR_PFE_TO_PHYS(addr)), data
7044 + + offset, size);
7045 + }
7046 +
7047 + break;
7048 +
7049 + case SHT_NOBITS:
7050 + memset(DDR_PHYS_TO_VIRT(DDR_PFE_TO_PHYS(addr)), 0, size);
7051 +
7052 + break;
7053 +
7054 + default:
7055 + pr_err("%s: unsupported section type(%x)\n", __func__,
7056 + type);
7057 + return -EINVAL;
7058 + }
7059 +
7060 + return 0;
7061 +}
7062 +
7063 +/* Loads an elf section into pe lmem
7064 + * Data needs to be at least 32bit aligned, NOBITS sections are correctly
7065 + * initialized to 0
7066 + *
7067 + * @param[in] id PE identification (CLASS0_ID,..., CLASS5_ID)
7068 + * @param[in] data pointer to the elf firmware
7069 + * @param[in] shdr pointer to the elf section header
7070 + *
7071 + */
7072 +static int pe_load_pe_lmem_section(int id, const void *data,
7073 + struct elf32_shdr *shdr)
7074 +{
7075 + u32 offset = be32_to_cpu(shdr->sh_offset);
7076 + u32 addr = be32_to_cpu(shdr->sh_addr);
7077 + u32 size = be32_to_cpu(shdr->sh_size);
7078 + u32 type = be32_to_cpu(shdr->sh_type);
7079 +
7080 + if (id > CLASS_MAX_ID) {
7081 + pr_err(
7082 + "%s: unsupported pe-lmem section type(%x) for PE(%d)\n",
7083 + __func__, type, id);
7084 + return -EINVAL;
7085 + }
7086 +
7087 + if (((unsigned long)(data + offset) & 0x3) != (addr & 0x3)) {
7088 + pr_err(
7089 + "%s: load address(%x) and elf file address(%lx) don't have the same alignment\n",
7090 + __func__, addr, (unsigned long)data + offset);
7091 +
7092 + return -EINVAL;
7093 + }
7094 +
7095 + if (addr & 0x3) {
7096 + pr_err("%s: load address(%x) is not 32bit aligned\n",
7097 + __func__, addr);
7098 + return -EINVAL;
7099 + }
7100 +
7101 + switch (type) {
7102 + case SHT_PROGBITS:
7103 + class_pe_lmem_memcpy_to32(addr, data + offset, size);
7104 + break;
7105 +
7106 + case SHT_NOBITS:
7107 + class_pe_lmem_memset(addr, 0, size);
7108 + break;
7109 +
7110 + default:
7111 + pr_err("%s: unsupported section type(%x)\n", __func__,
7112 + type);
7113 + return -EINVAL;
7114 + }
7115 +
7116 + return 0;
7117 +}
7118 +
7119 +/* Loads an elf section into a PE
7120 + * For now only supports loading a section to dmem (all PEs), pmem (class and
7121 + * tmu PEs) or
7122 + * DDR (util PE code)
7123 + *
7124 + * @param[in] id PE identification (CLASS0_ID, ..., TMU0_ID,
7125 + * ..., UTIL_ID)
7126 + * @param[in] data pointer to the elf firmware
7127 + * @param[in] shdr pointer to the elf section header
7128 + *
7129 + */
7130 +int pe_load_elf_section(int id, const void *data, struct elf32_shdr *shdr,
7131 + struct device *dev) {
7132 + u32 addr = be32_to_cpu(shdr->sh_addr);
7133 + u32 size = be32_to_cpu(shdr->sh_size);
7134 +
7135 + if (IS_DMEM(addr, size))
7136 + return pe_load_dmem_section(id, data, shdr);
7137 + else if (IS_PMEM(addr, size))
7138 + return pe_load_pmem_section(id, data, shdr);
7139 + else if (IS_PFE_LMEM(addr, size))
7140 + return 0;
7141 + else if (IS_PHYS_DDR(addr, size))
7142 + return pe_load_ddr_section(id, data, shdr, dev);
7143 + else if (IS_PE_LMEM(addr, size))
7144 + return pe_load_pe_lmem_section(id, data, shdr);
7145 +
7146 + pr_err("%s: unsupported memory range(%x)\n", __func__,
7147 + addr);
7148 + return 0;
7149 +}
7150 +
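+/* Illustrative caller sketch (not part of this file): loading every
+ * section of a PE firmware image by walking the big-endian ELF section
+ * header table. `fw`, `id` and `dev` are assumed to be validated by the
+ * caller.
+ *
+ *	struct elf32_hdr *ehdr = (struct elf32_hdr *)fw->data;
+ *	struct elf32_shdr *shdr = (struct elf32_shdr *)(fw->data +
+ *						be32_to_cpu(ehdr->e_shoff));
+ *	int i;
+ *
+ *	for (i = 0; i < be16_to_cpu(ehdr->e_shnum); i++, shdr++)
+ *		if (pe_load_elf_section(id, fw->data, shdr, dev) < 0)
+ *			return -EIO;
+ */
+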
7151 +/**************************** BMU ***************************/
7152 +
7153 +/* Initializes a BMU block.
7154 + * @param[in] base BMU block base address
7155 + * @param[in] cfg BMU configuration
7156 + */
7157 +void bmu_init(void *base, struct BMU_CFG *cfg)
7158 +{
7159 + bmu_disable(base);
7160 +
7161 + bmu_set_config(base, cfg);
7162 +
7163 + bmu_reset(base);
7164 +}
7165 +
7166 +/* Resets a BMU block.
7167 + * @param[in] base BMU block base address
7168 + */
7169 +void bmu_reset(void *base)
7170 +{
7171 + writel(CORE_SW_RESET, base + BMU_CTRL);
7172 +
7173 + /* Wait for self clear */
7174 + while (readl(base + BMU_CTRL) & CORE_SW_RESET)
7175 + ;
7176 +}
7177 +
7178 +/* Enables a BMU block.
7179 + * @param[in] base BMU block base address
7180 + */
7181 +void bmu_enable(void *base)
7182 +{
7183 + writel(CORE_ENABLE, base + BMU_CTRL);
7184 +}
7185 +
7186 +/* Disables a BMU block.
7187 + * @param[in] base BMU block base address
7188 + */
7189 +void bmu_disable(void *base)
7190 +{
7191 + writel(CORE_DISABLE, base + BMU_CTRL);
7192 +}
7193 +
7194 +/* Sets the configuration of a BMU block.
7195 + * @param[in] base BMU block base address
7196 + * @param[in] cfg BMU configuration
7197 + */
7198 +void bmu_set_config(void *base, struct BMU_CFG *cfg)
7199 +{
7200 + writel(cfg->baseaddr, base + BMU_UCAST_BASE_ADDR);
7201 + writel(cfg->count & 0xffff, base + BMU_UCAST_CONFIG);
7202 + writel(cfg->size & 0xffff, base + BMU_BUF_SIZE);
7203 +
7204 + /* Interrupts are never used */
7205 + writel(cfg->low_watermark, base + BMU_LOW_WATERMARK);
7206 + writel(cfg->high_watermark, base + BMU_HIGH_WATERMARK);
7207 + writel(0x0, base + BMU_INT_ENABLE);
7208 +}
7209 +
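+/* Illustrative usage (a sketch, not part of the original driver):
+ * bringing up a BMU instance. The pool address, buffer count/size and
+ * watermarks are made-up example values; the real values and field
+ * encodings come from the platform configuration.
+ *
+ *	struct BMU_CFG cfg = {
+ *		.baseaddr = pool_base,	// hypothetical PFE-visible pool address
+ *		.count = 1024,		// written to BMU_UCAST_CONFIG
+ *		.size = 2048,		// written to BMU_BUF_SIZE
+ *		.low_watermark = 10,
+ *		.high_watermark = 15,
+ *	};
+ *
+ *	bmu_init(BMU2_BASE_ADDR, &cfg);
+ *	bmu_enable(BMU2_BASE_ADDR);
+ */
+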
7210 +/**************************** MTIP GEMAC ***************************/
7211 +
7212 +/* Enable Rx Checksum Engine. With this enabled, frames with bad IP,
7213 + * TCP or UDP checksums are discarded
7214 + *
7215 + * @param[in] base GEMAC base address.
7216 + */
7217 +void gemac_enable_rx_checksum_offload(void *base)
7218 +{
7219 +	/* No configuration found to do this */
7220 +}
7221 +
7222 +/* Disable Rx Checksum Engine.
7223 + *
7224 + * @param[in] base GEMAC base address.
7225 + */
7226 +void gemac_disable_rx_checksum_offload(void *base)
7227 +{
7228 +	/* No configuration found to do this */
7229 +}
7230 +
7231 +/* GEMAC set speed.
7232 + * @param[in] base GEMAC base address
7233 + * @param[in] speed GEMAC speed (10, 100 or 1000 Mbps)
7234 + */
7235 +void gemac_set_speed(void *base, enum mac_speed gem_speed)
7236 +{
7237 + u32 ecr = readl(base + EMAC_ECNTRL_REG) & ~EMAC_ECNTRL_SPEED;
7238 + u32 rcr = readl(base + EMAC_RCNTRL_REG) & ~EMAC_RCNTRL_RMII_10T;
7239 +
7240 + switch (gem_speed) {
7241 + case SPEED_10M:
7242 + rcr |= EMAC_RCNTRL_RMII_10T;
7243 + break;
7244 +
7245 + case SPEED_1000M:
7246 + ecr |= EMAC_ECNTRL_SPEED;
7247 + break;
7248 +
7249 + case SPEED_100M:
7250 + default:
7251 +		/* 100M mode: both speed bits left cleared */
7252 + break;
7253 + }
7254 + writel(ecr, (base + EMAC_ECNTRL_REG));
7255 + writel(rcr, (base + EMAC_RCNTRL_REG));
7256 +}
7257 +
7258 +/* GEMAC set duplex.
7259 + * @param[in] base GEMAC base address
7260 + * @param[in] duplex GEMAC duplex mode (Full, Half)
7261 + */
7262 +void gemac_set_duplex(void *base, int duplex)
7263 +{
7264 + if (duplex == DUPLEX_HALF) {
7265 + writel(readl(base + EMAC_TCNTRL_REG) & ~EMAC_TCNTRL_FDEN, base
7266 + + EMAC_TCNTRL_REG);
7267 + writel(readl(base + EMAC_RCNTRL_REG) | EMAC_RCNTRL_DRT, (base
7268 + + EMAC_RCNTRL_REG));
7269 +	} else {
7270 + writel(readl(base + EMAC_TCNTRL_REG) | EMAC_TCNTRL_FDEN, base
7271 + + EMAC_TCNTRL_REG);
7272 + writel(readl(base + EMAC_RCNTRL_REG) & ~EMAC_RCNTRL_DRT, (base
7273 + + EMAC_RCNTRL_REG));
7274 + }
7275 +}
7276 +
7277 +/* GEMAC set mode.
7278 + * @param[in] base GEMAC base address
7279 + * @param[in] mode GEMAC operation mode (MII, RMII, RGMII, SGMII)
7280 + */
7281 +void gemac_set_mode(void *base, int mode)
7282 +{
7283 + u32 val = readl(base + EMAC_RCNTRL_REG);
7284 +
7285 +	/* Remove loopback */
7286 + val &= ~EMAC_RCNTRL_LOOP;
7287 +
7288 +	/* Enable flow control and MII mode. PFE firmware always expects
7289 +	 * the MAC to forward the CRC so it can be validated in software. */
7290 + val |= (EMAC_RCNTRL_FCE | EMAC_RCNTRL_MII_MODE);
7291 +
7292 + writel(val, base + EMAC_RCNTRL_REG);
7293 +}
7294 +
7295 +/* GEMAC enable function.
7296 + * @param[in] base GEMAC base address
7297 + */
7298 +void gemac_enable(void *base)
7299 +{
7300 + writel(readl(base + EMAC_ECNTRL_REG) | EMAC_ECNTRL_ETHER_EN, base +
7301 + EMAC_ECNTRL_REG);
7302 +}
7303 +
7304 +/* GEMAC disable function.
7305 + * @param[in] base GEMAC base address
7306 + */
7307 +void gemac_disable(void *base)
7308 +{
7309 + writel(readl(base + EMAC_ECNTRL_REG) & ~EMAC_ECNTRL_ETHER_EN, base +
7310 + EMAC_ECNTRL_REG);
7311 +}
7312 +
7313 +/* GEMAC TX disable function.
7314 + * @param[in] base GEMAC base address
7315 + */
7316 +void gemac_tx_disable(void *base)
7317 +{
7318 + writel(readl(base + EMAC_TCNTRL_REG) | EMAC_TCNTRL_GTS, base +
7319 + EMAC_TCNTRL_REG);
7320 +}
7321 +
7322 +void gemac_tx_enable(void *base)
7323 +{
7324 + writel(readl(base + EMAC_TCNTRL_REG) & ~EMAC_TCNTRL_GTS, base +
7325 + EMAC_TCNTRL_REG);
7326 +}
7327 +
7328 +/* Sets the hash register of the MAC.
7329 + * This register is used for matching unicast and multicast frames.
7330 + *
7331 + * @param[in] base GEMAC base address.
7332 + * @param[in] hash 64-bit hash to be configured.
7333 + */
7334 +void gemac_set_hash(void *base, struct pfe_mac_addr *hash)
7335 +{
7336 + writel(hash->bottom, base + EMAC_GALR);
7337 + writel(hash->top, base + EMAC_GAUR);
7338 +}
7339 +
7340 +void gemac_set_laddrN(void *base, struct pfe_mac_addr *address,
7341 + unsigned int entry_index)
7342 +{
7343 + if ((entry_index < 1) || (entry_index > EMAC_SPEC_ADDR_MAX))
7344 + return;
7345 +
7346 + entry_index = entry_index - 1;
7347 + if (entry_index < 1) {
7348 + writel(htonl(address->bottom), base + EMAC_PHY_ADDR_LOW);
7349 + writel((htonl(address->top) | 0x8808), base +
7350 + EMAC_PHY_ADDR_HIGH);
7351 + } else {
7352 + writel(htonl(address->bottom), base + ((entry_index - 1) * 8)
7353 + + EMAC_SMAC_0_0);
7354 + writel((htonl(address->top) | 0x8808), base + ((entry_index -
7355 + 1) * 8) + EMAC_SMAC_0_1);
7356 + }
7357 +}
7358 +
7359 +void gemac_clear_laddrN(void *base, unsigned int entry_index)
7360 +{
7361 + if ((entry_index < 1) || (entry_index > EMAC_SPEC_ADDR_MAX))
7362 + return;
7363 +
7364 + entry_index = entry_index - 1;
7365 + if (entry_index < 1) {
7366 + writel(0, base + EMAC_PHY_ADDR_LOW);
7367 + writel(0, base + EMAC_PHY_ADDR_HIGH);
7368 + } else {
7369 + writel(0, base + ((entry_index - 1) * 8) + EMAC_SMAC_0_0);
7370 + writel(0, base + ((entry_index - 1) * 8) + EMAC_SMAC_0_1);
7371 + }
7372 +}
7373 +
7374 +/* Set the loopback mode of the MAC. This can be either no loopback for
7375 + * normal operation, local loopback through MAC internal loopback module or PHY
7376 + * loopback for external loopback through a PHY. This asserts the external
7377 + * loop pin.
7378 + *
7379 + * @param[in] base GEMAC base address.
7380 + * @param[in] gem_loop Loopback mode to be enabled. LB_LOCAL - MAC
7381 + * Loopback,
7382 + * LB_EXT - PHY Loopback.
7383 + */
7384 +void gemac_set_loop(void *base, enum mac_loop gem_loop)
7385 +{
7386 + pr_info("%s()\n", __func__);
7387 + writel(readl(base + EMAC_RCNTRL_REG) | EMAC_RCNTRL_LOOP, (base +
7388 + EMAC_RCNTRL_REG));
7389 +}
7390 +
7391 +/* GEMAC allow all frames (promiscuous mode).
7392 + * @param[in] base GEMAC base address
7393 + */
7394 +void gemac_enable_copy_all(void *base)
7395 +{
7396 + writel(readl(base + EMAC_RCNTRL_REG) | EMAC_RCNTRL_PROM, (base +
7397 + EMAC_RCNTRL_REG));
7398 +}
7399 +
7400 +/* GEMAC disable promiscuous mode (do not allow all frames).
7401 + * @param[in] base GEMAC base address
7402 + */
7403 +void gemac_disable_copy_all(void *base)
7404 +{
7405 + writel(readl(base + EMAC_RCNTRL_REG) & ~EMAC_RCNTRL_PROM, (base +
7406 + EMAC_RCNTRL_REG));
7407 +}
7408 +
7409 +/* GEMAC allow broadcast function.
7410 + * @param[in] base GEMAC base address
7411 + */
7412 +void gemac_allow_broadcast(void *base)
7413 +{
7414 + writel(readl(base + EMAC_RCNTRL_REG) & ~EMAC_RCNTRL_BC_REJ, base +
7415 + EMAC_RCNTRL_REG);
7416 +}
7417 +
7418 +/* GEMAC no broadcast function.
7419 + * @param[in] base GEMAC base address
7420 + */
7421 +void gemac_no_broadcast(void *base)
7422 +{
7423 + writel(readl(base + EMAC_RCNTRL_REG) | EMAC_RCNTRL_BC_REJ, base +
7424 + EMAC_RCNTRL_REG);
7425 +}
7426 +
7427 +/* GEMAC enable 1536 rx function.
7428 + * @param[in] base GEMAC base address
7429 + */
7430 +void gemac_enable_1536_rx(void *base)
7431 +{
7432 + /* Set 1536 as Maximum frame length */
7433 + writel((readl(base + EMAC_RCNTRL_REG) & PFE_RCR_MAX_FL_MASK)
7434 + | (1536 << 16), base + EMAC_RCNTRL_REG);
7435 +}
7436 +
7437 +/* GEMAC set rx Max frame length.
7438 + * @param[in] base GEMAC base address
7439 + * @param[in] mtu new mtu
7440 + */
7441 +void gemac_set_rx_max_fl(void *base, int mtu)
7442 +{
7443 + /* Set mtu as Maximum frame length */
7444 + writel((readl(base + EMAC_RCNTRL_REG) & PFE_RCR_MAX_FL_MASK)
7445 + | (mtu << 16), base + EMAC_RCNTRL_REG);
7446 +}
7447 +
7448 +/* GEMAC enable stacked vlan function.
7449 + * @param[in] base GEMAC base address
7450 + */
7451 +void gemac_enable_stacked_vlan(void *base)
7452 +{
7453 + /* MTIP doesn't support stacked vlan */
7454 +}
7455 +
7456 +/* GEMAC enable pause rx function.
7457 + * @param[in] base GEMAC base address
7458 + */
7459 +void gemac_enable_pause_rx(void *base)
7460 +{
7461 + writel(readl(base + EMAC_RCNTRL_REG) | EMAC_RCNTRL_FCE,
7462 + base + EMAC_RCNTRL_REG);
7463 +}
7464 +
7465 +/* GEMAC disable pause rx function.
7466 + * @param[in] base GEMAC base address
7467 + */
7468 +void gemac_disable_pause_rx(void *base)
7469 +{
7470 + writel(readl(base + EMAC_RCNTRL_REG) & ~EMAC_RCNTRL_FCE,
7471 + base + EMAC_RCNTRL_REG);
7472 +}
7473 +
7474 +/* GEMAC enable pause tx function.
7475 + * @param[in] base GEMAC base address
7476 + */
7477 +void gemac_enable_pause_tx(void *base)
7478 +{
7479 + writel(EMAC_RX_SECTION_EMPTY_V, base + EMAC_RX_SECTION_EMPTY);
7480 +}
7481 +
7482 +/* GEMAC disable pause tx function.
7483 + * @param[in] base GEMAC base address
7484 + */
7485 +void gemac_disable_pause_tx(void *base)
7486 +{
7487 + writel(0x0, base + EMAC_RX_SECTION_EMPTY);
7488 +}
7489 +
7490 +/* GEMAC wol configuration
7491 + * @param[in] base GEMAC base address
7492 + * @param[in] wol_conf WoL register configuration
7493 + */
7494 +void gemac_set_wol(void *base, u32 wol_conf)
7495 +{
7496 + u32 val = readl(base + EMAC_ECNTRL_REG);
7497 +
7498 + if (wol_conf)
7499 + val |= (EMAC_ECNTRL_MAGIC_ENA | EMAC_ECNTRL_SLEEP);
7500 + else
7501 + val &= ~(EMAC_ECNTRL_MAGIC_ENA | EMAC_ECNTRL_SLEEP);
7502 + writel(val, base + EMAC_ECNTRL_REG);
7503 +}
7504 +
7505 +/* Sets the GEMAC bus width (empty on this platform).
7506 + * @param[in] base	GEMAC base address
7507 + * @param[in] width	GEMAC bus width to be set; possible values are 32/64/128
7508 + */
7509 +void gemac_set_bus_width(void *base, int width)
7510 +{
7511 +}
7512 +
7513 +/* Sets Gemac configuration.
7514 + * @param[in] base GEMAC base address
7515 + * @param[in] cfg GEMAC configuration
7516 + */
7517 +void gemac_set_config(void *base, struct gemac_cfg *cfg)
7518 +{
7519 + /*GEMAC config taken from VLSI */
7520 + writel(0x00000004, base + EMAC_TFWR_STR_FWD);
7521 + writel(0x00000005, base + EMAC_RX_SECTION_FULL);
7522 +
7523 + if (pfe_errata_a010897)
7524 + writel(0x0000076c, base + EMAC_TRUNC_FL);
7525 + else
7526 + writel(0x00003fff, base + EMAC_TRUNC_FL);
7527 +
7528 + writel(0x00000030, base + EMAC_TX_SECTION_EMPTY);
7529 + writel(0x00000000, base + EMAC_MIB_CTRL_STS_REG);
7530 +
7531 + gemac_set_mode(base, cfg->mode);
7532 +
7533 + gemac_set_speed(base, cfg->speed);
7534 +
7535 + gemac_set_duplex(base, cfg->duplex);
7536 +}
7537 +
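+/* Illustrative bring-up order (a sketch, not the driver's exact path):
+ * the MAC is configured while disabled and enabled afterwards. The
+ * struct values are examples; `mode` in particular is board-specific.
+ *
+ *	struct gemac_cfg cfg = {
+ *		.mode = 0,		// board-specific, example only
+ *		.speed = SPEED_1000M,
+ *		.duplex = DUPLEX_FULL,
+ *	};
+ *
+ *	gemac_disable(base);
+ *	gemac_set_config(base, &cfg);
+ *	gemac_enable(base);
+ */
+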
7538 +/**************************** GPI ***************************/
7539 +
7540 +/* Initializes a GPI block.
7541 + * @param[in] base GPI base address
7542 + * @param[in] cfg GPI configuration
7543 + */
7544 +void gpi_init(void *base, struct gpi_cfg *cfg)
7545 +{
7546 + gpi_reset(base);
7547 +
7548 + gpi_disable(base);
7549 +
7550 + gpi_set_config(base, cfg);
7551 +}
7552 +
7553 +/* Resets a GPI block.
7554 + * @param[in] base GPI base address
7555 + */
7556 +void gpi_reset(void *base)
7557 +{
7558 + writel(CORE_SW_RESET, base + GPI_CTRL);
7559 +}
7560 +
7561 +/* Enables a GPI block.
7562 + * @param[in] base GPI base address
7563 + */
7564 +void gpi_enable(void *base)
7565 +{
7566 + writel(CORE_ENABLE, base + GPI_CTRL);
7567 +}
7568 +
7569 +/* Disables a GPI block.
7570 + * @param[in] base GPI base address
7571 + */
7572 +void gpi_disable(void *base)
7573 +{
7574 + writel(CORE_DISABLE, base + GPI_CTRL);
7575 +}
7576 +
7577 +/* Sets the configuration of a GPI block.
7578 + * @param[in] base GPI base address
7579 + * @param[in] cfg GPI configuration
7580 + */
7581 +void gpi_set_config(void *base, struct gpi_cfg *cfg)
7582 +{
7583 + writel(CBUS_VIRT_TO_PFE(BMU1_BASE_ADDR + BMU_ALLOC_CTRL), base
7584 + + GPI_LMEM_ALLOC_ADDR);
7585 + writel(CBUS_VIRT_TO_PFE(BMU1_BASE_ADDR + BMU_FREE_CTRL), base
7586 + + GPI_LMEM_FREE_ADDR);
7587 + writel(CBUS_VIRT_TO_PFE(BMU2_BASE_ADDR + BMU_ALLOC_CTRL), base
7588 + + GPI_DDR_ALLOC_ADDR);
7589 + writel(CBUS_VIRT_TO_PFE(BMU2_BASE_ADDR + BMU_FREE_CTRL), base
7590 + + GPI_DDR_FREE_ADDR);
7591 + writel(CBUS_VIRT_TO_PFE(CLASS_INQ_PKTPTR), base + GPI_CLASS_ADDR);
7592 + writel(DDR_HDR_SIZE, base + GPI_DDR_DATA_OFFSET);
7593 + writel(LMEM_HDR_SIZE, base + GPI_LMEM_DATA_OFFSET);
7594 + writel(0, base + GPI_LMEM_SEC_BUF_DATA_OFFSET);
7595 + writel(0, base + GPI_DDR_SEC_BUF_DATA_OFFSET);
7596 + writel((DDR_HDR_SIZE << 16) | LMEM_HDR_SIZE, base + GPI_HDR_SIZE);
7597 + writel((DDR_BUF_SIZE << 16) | LMEM_BUF_SIZE, base + GPI_BUF_SIZE);
7598 +
7599 + writel(((cfg->lmem_rtry_cnt << 16) | (GPI_DDR_BUF_EN << 1) |
7600 + GPI_LMEM_BUF_EN), base + GPI_RX_CONFIG);
7601 + writel(cfg->tmlf_txthres, base + GPI_TMLF_TX);
7602 + writel(cfg->aseq_len, base + GPI_DTX_ASEQ);
7603 + writel(1, base + GPI_TOE_CHKSUM_EN);
7604 +
7605 + if (cfg->mtip_pause_reg) {
7606 + writel(cfg->mtip_pause_reg, base + GPI_CSR_MTIP_PAUSE_REG);
7607 + writel(EGPI_PAUSE_TIME, base + GPI_TX_PAUSE_TIME);
7608 + }
7609 +}
7610 +
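+/* Illustrative usage (a sketch, not part of the original driver): a GPI
+ * block is configured while disabled and only enabled once the rest of
+ * the datapath is ready. `egpi1_cfg` is a hypothetical configuration.
+ *
+ *	gpi_init(EGPI1_BASE_ADDR, &egpi1_cfg);	// reset, disable, config
+ *	gpi_enable(EGPI1_BASE_ADDR);		// after datapath setup
+ */
+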
7611 +/**************************** CLASSIFIER ***************************/
7612 +
7613 +/* Initializes CLASSIFIER block.
7614 + * @param[in] cfg CLASSIFIER configuration
7615 + */
7616 +void class_init(struct class_cfg *cfg)
7617 +{
7618 + class_reset();
7619 +
7620 + class_disable();
7621 +
7622 + class_set_config(cfg);
7623 +}
7624 +
7625 +/* Resets CLASSIFIER block.
7626 + *
7627 + */
7628 +void class_reset(void)
7629 +{
7630 + writel(CORE_SW_RESET, CLASS_TX_CTRL);
7631 +}
7632 +
7633 +/* Enables all CLASS-PE's cores.
7634 + *
7635 + */
7636 +void class_enable(void)
7637 +{
7638 + writel(CORE_ENABLE, CLASS_TX_CTRL);
7639 +}
7640 +
7641 +/* Disables all CLASS-PE's cores.
7642 + *
7643 + */
7644 +void class_disable(void)
7645 +{
7646 + writel(CORE_DISABLE, CLASS_TX_CTRL);
7647 +}
7648 +
7649 +/*
7650 + * Sets the configuration of the CLASSIFIER block.
7651 + * @param[in] cfg CLASSIFIER configuration
7652 + */
7653 +void class_set_config(struct class_cfg *cfg)
7654 +{
7655 + u32 val;
7656 +
7657 + /* Initialize route table */
7658 + if (!cfg->resume)
7659 + memset(DDR_PHYS_TO_VIRT(cfg->route_table_baseaddr), 0, (1 <<
7660 + cfg->route_table_hash_bits) * CLASS_ROUTE_SIZE);
7661 +
7662 +#if !defined(LS1012A_PFE_RESET_WA)
7663 + writel(cfg->pe_sys_clk_ratio, CLASS_PE_SYS_CLK_RATIO);
7664 +#endif
7665 +
7666 + writel((DDR_HDR_SIZE << 16) | LMEM_HDR_SIZE, CLASS_HDR_SIZE);
7667 + writel(LMEM_BUF_SIZE, CLASS_LMEM_BUF_SIZE);
7668 + writel(CLASS_ROUTE_ENTRY_SIZE(CLASS_ROUTE_SIZE) |
7669 + CLASS_ROUTE_HASH_SIZE(cfg->route_table_hash_bits),
7670 + CLASS_ROUTE_HASH_ENTRY_SIZE);
7671 + writel(HIF_PKT_CLASS_EN | HIF_PKT_OFFSET(sizeof(struct hif_hdr)),
7672 + CLASS_HIF_PARSE);
7673 +
7674 + val = HASH_CRC_PORT_IP | QB2BUS_LE;
7675 +
7676 +#if defined(CONFIG_IP_ALIGNED)
7677 + val |= IP_ALIGNED;
7678 +#endif
7679 +
7680 + /*
7681 + * Class PE packet steering will only work if TOE mode, bridge fetch or
7682 + * route fetch are enabled (see class/qb_fet.v). Route fetch would
7683 + * trigger additional memory copies (likely from DDR because of hash
7684 + * table size, which cannot be reduced because PE software still
7685 + * relies on hash value computed in HW), so when not in TOE mode we
7686 + * simply enable HW bridge fetch even though we don't use it.
7687 + */
7688 + if (cfg->toe_mode)
7689 + val |= CLASS_TOE;
7690 + else
7691 + val |= HW_BRIDGE_FETCH;
7692 +
7693 + writel(val, CLASS_ROUTE_MULTI);
7694 +
7695 + writel(DDR_PHYS_TO_PFE(cfg->route_table_baseaddr),
7696 + CLASS_ROUTE_TABLE_BASE);
7697 + writel(CLASS_PE0_RO_DM_ADDR0_VAL, CLASS_PE0_RO_DM_ADDR0);
7698 + writel(CLASS_PE0_RO_DM_ADDR1_VAL, CLASS_PE0_RO_DM_ADDR1);
7699 + writel(CLASS_PE0_QB_DM_ADDR0_VAL, CLASS_PE0_QB_DM_ADDR0);
7700 + writel(CLASS_PE0_QB_DM_ADDR1_VAL, CLASS_PE0_QB_DM_ADDR1);
7701 + writel(CBUS_VIRT_TO_PFE(TMU_PHY_INQ_PKTPTR), CLASS_TM_INQ_ADDR);
7702 +
7703 + writel(23, CLASS_AFULL_THRES);
7704 + writel(23, CLASS_TSQ_FIFO_THRES);
7705 +
7706 + writel(24, CLASS_MAX_BUF_CNT);
7707 + writel(24, CLASS_TSQ_MAX_CNT);
7708 +}
7709 +
7710 +/**************************** TMU ***************************/
7711 +
7712 +void tmu_reset(void)
7713 +{
7714 + writel(SW_RESET, TMU_CTRL);
7715 +}
7716 +
7717 +/* Initializes TMU block.
7718 + * @param[in] cfg TMU configuration
7719 + */
7720 +void tmu_init(struct tmu_cfg *cfg)
7721 +{
7722 + int q, phyno;
7723 +
7724 + tmu_disable(0xF);
7725 + mdelay(10);
7726 +
7727 +#if !defined(LS1012A_PFE_RESET_WA)
7728 + /* keep in soft reset */
7729 + writel(SW_RESET, TMU_CTRL);
7730 +#endif
7731 + writel(0x3, TMU_SYS_GENERIC_CONTROL);
7732 + writel(750, TMU_INQ_WATERMARK);
7733 + writel(CBUS_VIRT_TO_PFE(EGPI1_BASE_ADDR +
7734 + GPI_INQ_PKTPTR), TMU_PHY0_INQ_ADDR);
7735 + writel(CBUS_VIRT_TO_PFE(EGPI2_BASE_ADDR +
7736 + GPI_INQ_PKTPTR), TMU_PHY1_INQ_ADDR);
7737 + writel(CBUS_VIRT_TO_PFE(HGPI_BASE_ADDR +
7738 + GPI_INQ_PKTPTR), TMU_PHY3_INQ_ADDR);
7739 + writel(CBUS_VIRT_TO_PFE(HIF_NOCPY_RX_INQ0_PKTPTR), TMU_PHY4_INQ_ADDR);
7740 + writel(CBUS_VIRT_TO_PFE(UTIL_INQ_PKTPTR), TMU_PHY5_INQ_ADDR);
7741 + writel(CBUS_VIRT_TO_PFE(BMU2_BASE_ADDR + BMU_FREE_CTRL),
7742 + TMU_BMU_INQ_ADDR);
7743 +
7744 +	/* Enable all 10 schedulers [9:0] of each TDQ */
7745 +	writel(0x3FF, TMU_TDQ0_SCH_CTRL);
7748 + writel(0x3FF, TMU_TDQ1_SCH_CTRL);
7749 + writel(0x3FF, TMU_TDQ3_SCH_CTRL);
7750 +
7751 +#if !defined(LS1012A_PFE_RESET_WA)
7752 + writel(cfg->pe_sys_clk_ratio, TMU_PE_SYS_CLK_RATIO);
7753 +#endif
7754 +
7755 +#if !defined(LS1012A_PFE_RESET_WA)
7756 + writel(DDR_PHYS_TO_PFE(cfg->llm_base_addr), TMU_LLM_BASE_ADDR);
7757 + /* Extra packet pointers will be stored from this address onwards */
7758 +
7759 + writel(cfg->llm_queue_len, TMU_LLM_QUE_LEN);
7760 + writel(5, TMU_TDQ_IIFG_CFG);
7761 + writel(DDR_BUF_SIZE, TMU_BMU_BUF_SIZE);
7762 +
7763 + writel(0x0, TMU_CTRL);
7764 +
7765 + /* MEM init */
7766 + pr_info("%s: mem init\n", __func__);
7767 + writel(MEM_INIT, TMU_CTRL);
7768 +
7769 + while (!(readl(TMU_CTRL) & MEM_INIT_DONE))
7770 + ;
7771 +
7772 + /* LLM init */
7773 + pr_info("%s: lmem init\n", __func__);
7774 + writel(LLM_INIT, TMU_CTRL);
7775 +
7776 + while (!(readl(TMU_CTRL) & LLM_INIT_DONE))
7777 + ;
7778 +#endif
7779 + /* set up each queue for tail drop */
7780 + for (phyno = 0; phyno < 4; phyno++) {
7781 + if (phyno == 2)
7782 + continue;
7783 + for (q = 0; q < 16; q++) {
7784 + u32 qdepth;
7785 +
7786 + writel((phyno << 8) | q, TMU_TEQ_CTRL);
7787 + writel(1 << 22, TMU_TEQ_QCFG); /*Enable tail drop */
7788 +
7789 + if (phyno == 3)
7790 + qdepth = DEFAULT_TMU3_QDEPTH;
7791 + else
7792 + qdepth = (q == 0) ? DEFAULT_Q0_QDEPTH :
7793 + DEFAULT_MAX_QDEPTH;
7794 +
7795 + /* LOG: 68855 */
7796 + /*
7797 + * The following is a workaround for the reordered
7798 + * packet and BMU2 buffer leakage issue.
7799 + */
7800 + if (CHIP_REVISION() == 0)
7801 + qdepth = 31;
7802 +
7803 + writel(qdepth << 18, TMU_TEQ_HW_PROB_CFG2);
7804 + writel(qdepth >> 14, TMU_TEQ_HW_PROB_CFG3);
7805 + }
7806 + }
7807 +
7808 +#ifdef CFG_LRO
7809 + /* Set TMU-3 queue 5 (LRO) in no-drop mode */
7810 + writel((3 << 8) | TMU_QUEUE_LRO, TMU_TEQ_CTRL);
7811 + writel(0, TMU_TEQ_QCFG);
7812 +#endif
7813 +
7814 + writel(0x05, TMU_TEQ_DISABLE_DROPCHK);
7815 +
7816 + writel(0x0, TMU_CTRL);
7817 +}
7818 +
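+/* Worked example (illustrative) of the tail-drop depth split in
+ * tmu_init() above: the queue depth straddles two registers, with the
+ * low 14 bits of qdepth shifted into TMU_TEQ_HW_PROB_CFG2 at bit 18 and
+ * the remaining high bits going to TMU_TEQ_HW_PROB_CFG3. For
+ * qdepth = 256 (0x100):
+ *
+ *	TMU_TEQ_HW_PROB_CFG2 = 0x100 << 18 = 0x04000000
+ *	TMU_TEQ_HW_PROB_CFG3 = 0x100 >> 14 = 0
+ */
+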
7819 +/* Enables TMU-PE cores.
7820 + * @param[in] pe_mask TMU PE mask
7821 + */
7822 +void tmu_enable(u32 pe_mask)
7823 +{
7824 + writel(readl(TMU_TX_CTRL) | (pe_mask & 0xF), TMU_TX_CTRL);
7825 +}
7826 +
7827 +/* Disables TMU cores.
7828 + * @param[in] pe_mask TMU PE mask
7829 + */
7830 +void tmu_disable(u32 pe_mask)
7831 +{
7832 + writel(readl(TMU_TX_CTRL) & ~(pe_mask & 0xF), TMU_TX_CTRL);
7833 +}
7834 +
7835 +/* Returns the TMU queue status.
7836 + * @param[in] if_id gem interface id or TMU index
7837 + * @return returns the bit mask of busy queues, zero means all
7838 + * queues are empty
7839 + */
7840 +u32 tmu_qstatus(u32 if_id)
7841 +{
7842 + return cpu_to_be32(pe_dmem_read(TMU0_ID + if_id, TMU_DM_PESTATUS +
7843 + offsetof(struct pe_status, tmu_qstatus), 4));
7844 +}
7845 +
7846 +u32 tmu_pkts_processed(u32 if_id)
7847 +{
7848 + return cpu_to_be32(pe_dmem_read(TMU0_ID + if_id, TMU_DM_PESTATUS +
7849 + offsetof(struct pe_status, rx), 4));
7850 +}
7851 +
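+/* Illustrative usage (a sketch, not part of the original driver):
+ * waiting for all TMU queues of an interface to drain, e.g. before a
+ * controlled stop. The retry/timeout policy is a made-up example.
+ *
+ *	int retries = 100;
+ *
+ *	while (tmu_qstatus(if_id) && --retries)
+ *		usleep_range(100, 150);
+ */
+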
7852 +/**************************** UTIL ***************************/
7853 +
7854 +/* Resets UTIL block.
7855 + */
7856 +void util_reset(void)
7857 +{
7858 + writel(CORE_SW_RESET, UTIL_TX_CTRL);
7859 +}
7860 +
7861 +/* Initializes UTIL block.
7862 + * @param[in] cfg UTIL configuration
7863 + */
7864 +void util_init(struct util_cfg *cfg)
7865 +{
7866 + writel(cfg->pe_sys_clk_ratio, UTIL_PE_SYS_CLK_RATIO);
7867 +}
7868 +
7869 +/* Enables UTIL-PE core.
7870 + *
7871 + */
7872 +void util_enable(void)
7873 +{
7874 + writel(CORE_ENABLE, UTIL_TX_CTRL);
7875 +}
7876 +
7877 +/* Disables UTIL-PE core.
7878 + *
7879 + */
7880 +void util_disable(void)
7881 +{
7882 + writel(CORE_DISABLE, UTIL_TX_CTRL);
7883 +}
7884 +
7885 +/**************************** HIF ***************************/
7886 +/* Initializes HIF copy block.
7887 + *
7888 + */
7889 +void hif_init(void)
7890 +{
7891 + /*Initialize HIF registers*/
7892 + writel((HIF_RX_POLL_CTRL_CYCLE << 16) | HIF_TX_POLL_CTRL_CYCLE,
7893 + HIF_POLL_CTRL);
7894 +}
7895 +
7896 +/* Enable hif tx DMA and interrupt
7897 + *
7898 + */
7899 +void hif_tx_enable(void)
7900 +{
7901 + writel(HIF_CTRL_DMA_EN, HIF_TX_CTRL);
7902 + writel((readl(HIF_INT_ENABLE) | HIF_INT_EN | HIF_TXPKT_INT_EN),
7903 + HIF_INT_ENABLE);
7904 +}
7905 +
7906 +/* Disable hif tx DMA and interrupt
7907 + *
7908 + */
7909 +void hif_tx_disable(void)
7910 +{
7911 + u32 hif_int;
7912 +
7913 + writel(0, HIF_TX_CTRL);
7914 +
7915 + hif_int = readl(HIF_INT_ENABLE);
7916 +	hif_int &= ~HIF_TXPKT_INT_EN;
7917 + writel(hif_int, HIF_INT_ENABLE);
7918 +}
7919 +
7920 +/* Enable hif rx DMA and interrupt
7921 + *
7922 + */
7923 +void hif_rx_enable(void)
7924 +{
7925 + hif_rx_dma_start();
7926 + writel((readl(HIF_INT_ENABLE) | HIF_INT_EN | HIF_RXPKT_INT_EN),
7927 + HIF_INT_ENABLE);
7928 +}
7929 +
7930 +/* Disable hif rx DMA and interrupt
7931 + *
7932 + */
7933 +void hif_rx_disable(void)
7934 +{
7935 + u32 hif_int;
7936 +
7937 + writel(0, HIF_RX_CTRL);
7938 +
7939 + hif_int = readl(HIF_INT_ENABLE);
7940 +	hif_int &= ~HIF_RXPKT_INT_EN;
7941 + writel(hif_int, HIF_INT_ENABLE);
7942 +}
7943 --- /dev/null
7944 +++ b/drivers/staging/fsl_ppfe/pfe_hif.c
7945 @@ -0,0 +1,1064 @@
7946 +// SPDX-License-Identifier: GPL-2.0+
7947 +/*
7948 + * Copyright 2015-2016 Freescale Semiconductor, Inc.
7949 + * Copyright 2017 NXP
7950 + */
7951 +
7952 +#include <linux/kernel.h>
7953 +#include <linux/interrupt.h>
7954 +#include <linux/dma-mapping.h>
7955 +#include <linux/dmapool.h>
7956 +#include <linux/sched.h>
7957 +#include <linux/module.h>
7958 +#include <linux/list.h>
7959 +#include <linux/kthread.h>
7960 +#include <linux/slab.h>
7961 +
7962 +#include <linux/io.h>
7963 +#include <asm/irq.h>
7964 +
7965 +#include "pfe_mod.h"
7966 +
7967 +#define HIF_INT_MASK (HIF_INT | HIF_RXPKT_INT | HIF_TXPKT_INT)
7968 +
7969 +unsigned char napi_first_batch;
7970 +
7971 +static void pfe_tx_do_cleanup(unsigned long data);
7972 +
7973 +static int pfe_hif_alloc_descr(struct pfe_hif *hif)
7974 +{
7975 + void *addr;
7976 + dma_addr_t dma_addr;
7977 + int err = 0;
7978 +
7979 + pr_info("%s\n", __func__);
7980 + addr = dma_alloc_coherent(pfe->dev,
7981 + HIF_RX_DESC_NT * sizeof(struct hif_desc) +
7982 + HIF_TX_DESC_NT * sizeof(struct hif_desc),
7983 + &dma_addr, GFP_KERNEL);
7984 +
7985 + if (!addr) {
7986 + pr_err("%s: Could not allocate buffer descriptors!\n"
7987 + , __func__);
7988 + err = -ENOMEM;
7989 + goto err0;
7990 + }
7991 +
7992 + hif->descr_baseaddr_p = dma_addr;
7993 + hif->descr_baseaddr_v = addr;
7994 + hif->rx_ring_size = HIF_RX_DESC_NT;
7995 + hif->tx_ring_size = HIF_TX_DESC_NT;
7996 +
7997 + return 0;
7998 +
7999 +err0:
8000 + return err;
8001 +}
8002 +
8003 +#if defined(LS1012A_PFE_RESET_WA)
8004 +static void pfe_hif_disable_rx_desc(struct pfe_hif *hif)
8005 +{
8006 + int ii;
8007 + struct hif_desc *desc = hif->rx_base;
8008 +
8009 + /*Mark all descriptors as LAST_BD */
8010 + for (ii = 0; ii < hif->rx_ring_size; ii++) {
8011 + desc->ctrl |= BD_CTRL_LAST_BD;
8012 + desc++;
8013 + }
8014 +}
8015 +
8016 +struct class_rx_hdr_t {
8017 + u32 next_ptr; /* ptr to the start of the first DDR buffer */
8018 + u16 length; /* total packet length */
8019 + u16 phyno; /* input physical port number */
8020 + u32 status; /* gemac status bits */
8021 + u32 status2; /* reserved for software usage */
8022 +};
8023 +
8024 +/* STATUS_BAD_FRAME_ERR is set for all errors (including checksums if enabled)
8025 + * except overflow
8026 + */
8027 +#define STATUS_BAD_FRAME_ERR BIT(16)
8028 +#define STATUS_LENGTH_ERR BIT(17)
8029 +#define STATUS_CRC_ERR BIT(18)
8030 +#define STATUS_TOO_SHORT_ERR BIT(19)
8031 +#define STATUS_TOO_LONG_ERR BIT(20)
8032 +#define STATUS_CODE_ERR BIT(21)
8033 +#define STATUS_MC_HASH_MATCH BIT(22)
8034 +#define STATUS_CUMULATIVE_ARC_HIT BIT(23)
8035 +#define STATUS_UNICAST_HASH_MATCH BIT(24)
8036 +#define STATUS_IP_CHECKSUM_CORRECT BIT(25)
8037 +#define STATUS_TCP_CHECKSUM_CORRECT BIT(26)
8038 +#define STATUS_UDP_CHECKSUM_CORRECT BIT(27)
8039 +#define STATUS_OVERFLOW_ERR BIT(28) /* GPI error */
8040 +#define MIN_PKT_SIZE 64
8041 +
8042 +static inline void copy_to_lmem(u32 *dst, u32 *src, int len)
8043 +{
8044 + int i;
8045 +
8046 + for (i = 0; i < len; i += sizeof(u32)) {
8047 + *dst = htonl(*src);
8048 + dst++; src++;
8049 + }
8050 +}
8051 +
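+/* Note (illustrative): copy_to_lmem() stores each 32-bit word in
+ * big-endian order because the classifier parses LMEM as BE. E.g. on a
+ * little-endian host, a source word 0x11223344 ends up in LMEM as the
+ * byte sequence 11 22 33 44.
+ */
+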
8052 +static void send_dummy_pkt_to_hif(void)
8053 +{
8054 + void *lmem_ptr, *ddr_ptr, *lmem_virt_addr;
8055 + u32 physaddr;
8056 + struct class_rx_hdr_t local_hdr;
8057 + static u32 dummy_pkt[] = {
8058 + 0x33221100, 0x2b785544, 0xd73093cb, 0x01000608,
8059 + 0x04060008, 0x2b780200, 0xd73093cb, 0x0a01a8c0,
8060 + 0x33221100, 0xa8c05544, 0x00000301, 0x00000000,
8061 + 0x00000000, 0x00000000, 0x00000000, 0xbe86c51f };
8062 +
8063 + ddr_ptr = (void *)((u64)readl(BMU2_BASE_ADDR + BMU_ALLOC_CTRL));
8064 + if (!ddr_ptr)
8065 + return;
8066 +
8067 + lmem_ptr = (void *)((u64)readl(BMU1_BASE_ADDR + BMU_ALLOC_CTRL));
8068 + if (!lmem_ptr)
8069 + return;
8070 +
8071 + pr_info("Sending a dummy pkt to HIF %p %p\n", ddr_ptr, lmem_ptr);
8072 + physaddr = (u32)DDR_VIRT_TO_PFE(ddr_ptr);
8073 +
8074 + lmem_virt_addr = (void *)CBUS_PFE_TO_VIRT((unsigned long int)lmem_ptr);
8075 +
8076 + local_hdr.phyno = htons(0); /* RX_PHY_0 */
8077 + local_hdr.length = htons(MIN_PKT_SIZE);
8078 +
8079 + local_hdr.next_ptr = htonl((u32)physaddr);
8080 +	/* Mark checksums as correct */
8081 + local_hdr.status = htonl((STATUS_IP_CHECKSUM_CORRECT |
8082 + STATUS_UDP_CHECKSUM_CORRECT |
8083 + STATUS_TCP_CHECKSUM_CORRECT |
8084 + STATUS_UNICAST_HASH_MATCH |
8085 + STATUS_CUMULATIVE_ARC_HIT));
8086 + local_hdr.status2 = 0;
8087 +
8088 + copy_to_lmem((u32 *)lmem_virt_addr, (u32 *)&local_hdr,
8089 + sizeof(local_hdr));
8090 +
8091 + copy_to_lmem((u32 *)(lmem_virt_addr + LMEM_HDR_SIZE), (u32 *)dummy_pkt,
8092 + 0x40);
8093 +
8094 + writel((unsigned long int)lmem_ptr, CLASS_INQ_PKTPTR);
8095 +}
8096 +
8097 +void pfe_hif_rx_idle(struct pfe_hif *hif)
8098 +{
8099 + int hif_stop_loop = 10;
8100 + u32 rx_status;
8101 +
8102 + pfe_hif_disable_rx_desc(hif);
8103 + pr_info("Bringing hif to idle state...");
8104 + writel(0, HIF_INT_ENABLE);
8105 + /*If HIF Rx BDP is busy send a dummy packet */
8106 + do {
8107 + rx_status = readl(HIF_RX_STATUS);
8108 + if (rx_status & BDP_CSR_RX_DMA_ACTV)
8109 + send_dummy_pkt_to_hif();
8110 +
8111 + usleep_range(100, 150);
8112 + } while (--hif_stop_loop);
8113 +
8114 + if (readl(HIF_RX_STATUS) & BDP_CSR_RX_DMA_ACTV)
8115 + pr_info("Failed\n");
8116 + else
8117 + pr_info("Done\n");
8118 +}
8119 +#endif
8120 +
8121 +static void pfe_hif_free_descr(struct pfe_hif *hif)
8122 +{
8123 + pr_info("%s\n", __func__);
8124 +
8125 + dma_free_coherent(pfe->dev,
8126 + hif->rx_ring_size * sizeof(struct hif_desc) +
8127 + hif->tx_ring_size * sizeof(struct hif_desc),
8128 + hif->descr_baseaddr_v, hif->descr_baseaddr_p);
8129 +}
8130 +
8131 +void pfe_hif_desc_dump(struct pfe_hif *hif)
8132 +{
8133 + struct hif_desc *desc;
8134 + unsigned long desc_p;
8135 + int ii = 0;
8136 +
8137 + pr_info("%s\n", __func__);
8138 +
8139 + desc = hif->rx_base;
8140 + desc_p = (u32)((u64)desc - (u64)hif->descr_baseaddr_v +
8141 + hif->descr_baseaddr_p);
8142 +
8143 + pr_info("HIF Rx desc base %p physical %x\n", desc, (u32)desc_p);
8144 + for (ii = 0; ii < hif->rx_ring_size; ii++) {
8145 + pr_info("status: %08x, ctrl: %08x, data: %08x, next: %x\n",
8146 + readl(&desc->status), readl(&desc->ctrl),
8147 + readl(&desc->data), readl(&desc->next));
8148 + desc++;
8149 + }
8150 +
8151 + desc = hif->tx_base;
8152 + desc_p = ((u64)desc - (u64)hif->descr_baseaddr_v +
8153 + hif->descr_baseaddr_p);
8154 +
8155 + pr_info("HIF Tx desc base %p physical %x\n", desc, (u32)desc_p);
8156 + for (ii = 0; ii < hif->tx_ring_size; ii++) {
8157 + pr_info("status: %08x, ctrl: %08x, data: %08x, next: %x\n",
8158 + readl(&desc->status), readl(&desc->ctrl),
8159 + readl(&desc->data), readl(&desc->next));
8160 + desc++;
8161 + }
8162 +}
8163 +
8164 +/* pfe_hif_release_buffers */
8165 +static void pfe_hif_release_buffers(struct pfe_hif *hif)
8166 +{
8167 + struct hif_desc *desc;
8168 + int i = 0;
8169 +
8170 + hif->rx_base = hif->descr_baseaddr_v;
8171 +
8172 + pr_info("%s\n", __func__);
8173 +
8174 + /*Free Rx buffers */
8175 + desc = hif->rx_base;
8176 + for (i = 0; i < hif->rx_ring_size; i++) {
8177 + if (readl(&desc->data)) {
8178 + if ((i < hif->shm->rx_buf_pool_cnt) &&
8179 + (!hif->shm->rx_buf_pool[i])) {
8180 + /*
8181 + * dma_unmap_single(hif->dev, desc->data,
8182 + * hif->rx_buf_len[i], DMA_FROM_DEVICE);
8183 + */
8184 + dma_unmap_single(hif->dev,
8185 + DDR_PFE_TO_PHYS(
8186 + readl(&desc->data)),
8187 + hif->rx_buf_len[i],
8188 + DMA_FROM_DEVICE);
8189 + hif->shm->rx_buf_pool[i] = hif->rx_buf_addr[i];
8190 + } else {
8191 + pr_err("%s: buffer pool already full\n"
8192 + , __func__);
8193 + }
8194 + }
8195 +
8196 + writel(0, &desc->data);
8197 + writel(0, &desc->status);
8198 + writel(0, &desc->ctrl);
8199 + desc++;
8200 + }
8201 +}
8202 +
8203 +/*
8204 + * pfe_hif_init_buffers
8205 + * This function initializes the HIF Rx/Tx ring descriptors and
8206 + * fills the Rx queue with buffers.
8207 + */
8208 +static int pfe_hif_init_buffers(struct pfe_hif *hif)
8209 +{
8210 + struct hif_desc *desc, *first_desc_p;
8211 + u32 data;
8212 + int i = 0;
8213 +
8214 + pr_info("%s\n", __func__);
8215 +
8216 +	/* Check that enough Rx buffers are available in the shared memory */
8217 + if (hif->shm->rx_buf_pool_cnt < hif->rx_ring_size)
8218 + return -ENOMEM;
8219 +
8220 + hif->rx_base = hif->descr_baseaddr_v;
8221 + memset(hif->rx_base, 0, hif->rx_ring_size * sizeof(struct hif_desc));
8222 +
8223 + /*Initialize Rx descriptors */
8224 + desc = hif->rx_base;
8225 + first_desc_p = (struct hif_desc *)hif->descr_baseaddr_p;
8226 +
8227 + for (i = 0; i < hif->rx_ring_size; i++) {
8228 + /* Initialize Rx buffers from the shared memory */
8229 +
8230 + data = (u32)dma_map_single(hif->dev, hif->shm->rx_buf_pool[i],
8231 + pfe_pkt_size, DMA_FROM_DEVICE);
8232 + hif->rx_buf_addr[i] = hif->shm->rx_buf_pool[i];
8233 + hif->rx_buf_len[i] = pfe_pkt_size;
8234 + hif->shm->rx_buf_pool[i] = NULL;
8235 +
8236 + if (likely(dma_mapping_error(hif->dev, data) == 0)) {
8237 + writel(DDR_PHYS_TO_PFE(data), &desc->data);
8238 + } else {
8239 + pr_err("%s : low on mem\n", __func__);
8240 +
8241 + goto err;
8242 + }
8243 +
8244 + writel(0, &desc->status);
8245 +
8246 + /*
8247 + * Ensure everything else is written to DDR before
8248 + * writing bd->ctrl
8249 + */
8250 + wmb();
8251 +
8252 + writel((BD_CTRL_PKT_INT_EN | BD_CTRL_LIFM
8253 + | BD_CTRL_DIR | BD_CTRL_DESC_EN
8254 + | BD_BUF_LEN(pfe_pkt_size)), &desc->ctrl);
8255 +
8256 + /* Chain descriptors */
8257 + writel((u32)DDR_PHYS_TO_PFE(first_desc_p + i + 1), &desc->next);
8258 + desc++;
8259 + }
8260 +
8261 + /* Overwrite last descriptor to chain it to first one*/
8262 + desc--;
8263 + writel((u32)DDR_PHYS_TO_PFE(first_desc_p), &desc->next);
8264 +
8265 + hif->rxtoclean_index = 0;
8266 +
8267 + /*Initialize Rx buffer descriptor ring base address */
8268 + writel(DDR_PHYS_TO_PFE(hif->descr_baseaddr_p), HIF_RX_BDP_ADDR);
8269 +
8270 + hif->tx_base = hif->rx_base + hif->rx_ring_size;
8271 + first_desc_p = (struct hif_desc *)hif->descr_baseaddr_p +
8272 + hif->rx_ring_size;
8273 + memset(hif->tx_base, 0, hif->tx_ring_size * sizeof(struct hif_desc));
8274 +
8275 + /*Initialize tx descriptors */
8276 + desc = hif->tx_base;
8277 +
8278 + for (i = 0; i < hif->tx_ring_size; i++) {
8279 + /* Chain descriptors */
8280 + writel((u32)DDR_PHYS_TO_PFE(first_desc_p + i + 1), &desc->next);
8281 + writel(0, &desc->ctrl);
8282 + desc++;
8283 + }
8284 +
8285 + /* Overwrite last descriptor to chain it to first one */
8286 + desc--;
8287 + writel((u32)DDR_PHYS_TO_PFE(first_desc_p), &desc->next);
8288 + hif->txavail = hif->tx_ring_size;
8289 + hif->txtosend = 0;
8290 + hif->txtoclean = 0;
8291 + hif->txtoflush = 0;
8292 +
8293 + /*Initialize Tx buffer descriptor ring base address */
8294 + writel((u32)DDR_PHYS_TO_PFE(first_desc_p), HIF_TX_BDP_ADDR);
8295 +
8296 + return 0;
8297 +
8298 +err:
8299 + pfe_hif_release_buffers(hif);
8300 + return -ENOMEM;
8301 +}
8302 +
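+/* Note (illustrative): both rings built above are circular -- each
+ * descriptor's `next` field points to the following descriptor and the
+ * last one is rewritten to point back to the first, so the BDP can keep
+ * walking the ring without software intervention. For a 4-entry ring:
+ *
+ *	desc[0] -> desc[1] -> desc[2] -> desc[3] -> desc[0] -> ...
+ */
+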
8303 +/*
8304 + * pfe_hif_client_register
8305 + *
8306 + * This function is used to register a client driver with the HIF driver.
8307 + *
8308 + * Return value:
8309 + * 0 - on successful registration
8310 + */
8311 +static int pfe_hif_client_register(struct pfe_hif *hif, u32 client_id,
8312 + struct hif_client_shm *client_shm)
8313 +{
8314 + struct hif_client *client = &hif->client[client_id];
8315 + u32 i, cnt;
8316 + struct rx_queue_desc *rx_qbase;
8317 + struct tx_queue_desc *tx_qbase;
8318 + struct hif_rx_queue *rx_queue;
8319 + struct hif_tx_queue *tx_queue;
8320 + int err = 0;
8321 +
8322 + pr_info("%s\n", __func__);
8323 +
8324 + spin_lock_bh(&hif->tx_lock);
8325 +
8326 + if (test_bit(client_id, &hif->shm->g_client_status[0])) {
8327 + pr_err("%s: client %d already registered\n",
8328 + __func__, client_id);
8329 + err = -1;
8330 + goto unlock;
8331 + }
8332 +
8333 + memset(client, 0, sizeof(struct hif_client));
8334 +
8335 + /* Initialize client Rx queues baseaddr, size */
8336 +
8337 + cnt = CLIENT_CTRL_RX_Q_CNT(client_shm->ctrl);
8338 +	/* Check if client is requesting more queues than supported */
8339 + if (cnt > HIF_CLIENT_QUEUES_MAX)
8340 + cnt = HIF_CLIENT_QUEUES_MAX;
8341 +
8342 + client->rx_qn = cnt;
8343 + rx_qbase = (struct rx_queue_desc *)client_shm->rx_qbase;
8344 + for (i = 0; i < cnt; i++) {
8345 + rx_queue = &client->rx_q[i];
8346 + rx_queue->base = rx_qbase + i * client_shm->rx_qsize;
8347 + rx_queue->size = client_shm->rx_qsize;
8348 + rx_queue->write_idx = 0;
8349 + }
8350 +
8351 + /* Initialize client Tx queues baseaddr, size */
8352 + cnt = CLIENT_CTRL_TX_Q_CNT(client_shm->ctrl);
8353 +
8354 +	/* Check if client is requesting more queues than supported */
8355 + if (cnt > HIF_CLIENT_QUEUES_MAX)
8356 + cnt = HIF_CLIENT_QUEUES_MAX;
8357 +
8358 + client->tx_qn = cnt;
8359 + tx_qbase = (struct tx_queue_desc *)client_shm->tx_qbase;
8360 + for (i = 0; i < cnt; i++) {
8361 + tx_queue = &client->tx_q[i];
8362 + tx_queue->base = tx_qbase + i * client_shm->tx_qsize;
8363 + tx_queue->size = client_shm->tx_qsize;
8364 + tx_queue->ack_idx = 0;
8365 + }
8366 +
8367 + set_bit(client_id, &hif->shm->g_client_status[0]);
8368 +
8369 +unlock:
8370 + spin_unlock_bh(&hif->tx_lock);
8371 +
8372 + return err;
8373 +}
8374 +
8375 +/*
8376 + * pfe_hif_client_unregister
8377 + *
8378 + * This function is used to unregister a client from the HIF driver.
8379 + *
8380 + */
8381 +static void pfe_hif_client_unregister(struct pfe_hif *hif, u32 client_id)
8382 +{
8383 + pr_info("%s\n", __func__);
8384 +
8385 + /*
8386 + * Mark client as no longer available (which prevents further packet
8387 + * receive for this client)
8388 + */
8389 + spin_lock_bh(&hif->tx_lock);
8390 +
8391 + if (!test_bit(client_id, &hif->shm->g_client_status[0])) {
8392 + pr_err("%s: client %d not registered\n", __func__,
8393 + client_id);
8394 +
8395 + spin_unlock_bh(&hif->tx_lock);
8396 + return;
8397 + }
8398 +
8399 + clear_bit(client_id, &hif->shm->g_client_status[0]);
8400 +
8401 + spin_unlock_bh(&hif->tx_lock);
8402 +}
8403 +
8404 +/*
8405 + * client_put_rxpacket-
8406 + * This function puts the Rx pkt in the given client Rx queue.
8407 + * It actually swaps the Rx pkt into the client Rx descriptor buffer
8408 + * and returns the free buffer from it.
8409 + *
8410 + * A NULL return value means the client Rx queue is full and the
8411 + * packet couldn't be sent to the client queue.
8412 + */
8413 +static void *client_put_rxpacket(struct hif_rx_queue *queue, void *pkt, u32 len,
8414 + u32 flags, u32 client_ctrl, u32 *rem_len)
8415 +{
8416 + void *free_pkt = NULL;
8417 + struct rx_queue_desc *desc = queue->base + queue->write_idx;
8418 +
8419 + if (readl(&desc->ctrl) & CL_DESC_OWN) {
8420 + if (page_mode) {
8421 + int rem_page_size = PAGE_SIZE -
8422 + PRESENT_OFST_IN_PAGE(pkt);
8423 + int cur_pkt_size = ROUND_MIN_RX_SIZE(len +
8424 + pfe_pkt_headroom);
8425 + *rem_len = (rem_page_size - cur_pkt_size);
8426 + if (*rem_len) {
8427 + free_pkt = pkt + cur_pkt_size;
8428 + get_page(virt_to_page(free_pkt));
8429 + } else {
8430 + free_pkt = (void
8431 + *)__get_free_page(GFP_ATOMIC | GFP_DMA_PFE);
8432 + *rem_len = pfe_pkt_size;
8433 + }
8434 + } else {
8435 + free_pkt = kmalloc(PFE_BUF_SIZE, GFP_ATOMIC |
8436 + GFP_DMA_PFE);
8437 + *rem_len = PFE_BUF_SIZE - pfe_pkt_headroom;
8438 + }
8439 +
8440 + if (free_pkt) {
8441 + desc->data = pkt;
8442 + desc->client_ctrl = client_ctrl;
8443 + /*
8444 + * Ensure everything else is written to DDR before
8445 + * writing bd->ctrl
8446 + */
8447 + smp_wmb();
8448 + writel(CL_DESC_BUF_LEN(len) | flags, &desc->ctrl);
8449 + queue->write_idx = (queue->write_idx + 1)
8450 + & (queue->size - 1);
8451 +
8452 + free_pkt += pfe_pkt_headroom;
8453 + }
8454 + }
8455 +
8456 + return free_pkt;
8457 +}
8458 +
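+/* Note (illustrative): the client queue indices wrap with a
+ * power-of-two mask, so queue->size must be a power of two. E.g. with
+ * size = 256:
+ *
+ *	write_idx = (write_idx + 1) & (256 - 1);	// 255 wraps to 0
+ */
+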
8459 +/*
8460 + * pfe_hif_rx_process-
8461 + * This function does PFE HIF Rx queue processing:
8462 + * it dequeues packets from the Rx queue and sends them to the corresponding client queue.
8463 + */
8464 +static int pfe_hif_rx_process(struct pfe_hif *hif, int budget)
8465 +{
8466 + struct hif_desc *desc;
8467 + struct hif_hdr *pkt_hdr;
8468 + struct __hif_hdr hif_hdr;
8469 + void *free_buf;
8470 + int rtc, len, rx_processed = 0;
8471 + struct __hif_desc local_desc;
8472 + int flags;
8473 + unsigned int desc_p;
8474 + unsigned int buf_size = 0;
8475 +
8476 + spin_lock_bh(&hif->lock);
8477 +
8478 + rtc = hif->rxtoclean_index;
8479 +
8480 + while (rx_processed < budget) {
8481 + desc = hif->rx_base + rtc;
8482 +
8483 + __memcpy12(&local_desc, desc);
8484 +
8485 + /* ACK pending Rx interrupt */
8486 + if (local_desc.ctrl & BD_CTRL_DESC_EN) {
8487 + writel(HIF_INT | HIF_RXPKT_INT, HIF_INT_SRC);
8488 +
8489 + if (rx_processed == 0) {
8490 + if (napi_first_batch == 1) {
8491 + desc_p = hif->descr_baseaddr_p +
8492 + ((unsigned long int)(desc) -
8493 + (unsigned long
8494 + int)hif->descr_baseaddr_v);
8495 + napi_first_batch = 0;
8496 + }
8497 + }
8498 +
8499 + __memcpy12(&local_desc, desc);
8500 +
8501 + if (local_desc.ctrl & BD_CTRL_DESC_EN)
8502 + break;
8503 + }
8504 +
8505 + napi_first_batch = 0;
8506 +
8507 +#ifdef HIF_NAPI_STATS
8508 + hif->napi_counters[NAPI_DESC_COUNT]++;
8509 +#endif
8510 + len = BD_BUF_LEN(local_desc.ctrl);
8511 + /*
8512 + * dma_unmap_single(hif->dev, DDR_PFE_TO_PHYS(local_desc.data),
8513 + * hif->rx_buf_len[rtc], DMA_FROM_DEVICE);
8514 + */
8515 + dma_unmap_single(hif->dev, DDR_PFE_TO_PHYS(local_desc.data),
8516 + hif->rx_buf_len[rtc], DMA_FROM_DEVICE);
8517 +
8518 + pkt_hdr = (struct hif_hdr *)hif->rx_buf_addr[rtc];
8519 +
8520 +		/* Start of a new packet: parse the HIF header */
8521 + if (!hif->started) {
8522 + hif->started = 1;
8523 +
8524 + __memcpy8(&hif_hdr, pkt_hdr);
8525 +
8526 + hif->qno = hif_hdr.hdr.q_num;
8527 + hif->client_id = hif_hdr.hdr.client_id;
8528 + hif->client_ctrl = (hif_hdr.hdr.client_ctrl1 << 16) |
8529 + hif_hdr.hdr.client_ctrl;
8530 + flags = CL_DESC_FIRST;
8531 +
8532 + } else {
8533 + flags = 0;
8534 + }
8535 +
8536 + if (local_desc.ctrl & BD_CTRL_LIFM)
8537 + flags |= CL_DESC_LAST;
8538 +
8539 +		/* Check that the client id is valid and still registered */
8540 + if ((hif->client_id >= HIF_CLIENTS_MAX) ||
8541 + !(test_bit(hif->client_id,
8542 + &hif->shm->g_client_status[0]))) {
8543 + printk_ratelimited("%s: packet with invalid client id %d q_num %d\n",
8544 + __func__,
8545 + hif->client_id,
8546 + hif->qno);
8547 +
8548 + free_buf = pkt_hdr;
8549 +
8550 + goto pkt_drop;
8551 + }
8552 +
8553 +		/* Check for a valid queue number */
8554 + if (hif->client[hif->client_id].rx_qn <= hif->qno) {
8555 + pr_info("%s: packet with invalid queue: %d\n"
8556 + , __func__, hif->qno);
8557 + hif->qno = 0;
8558 + }
8559 +
8560 + free_buf =
8561 + client_put_rxpacket(&hif->client[hif->client_id].rx_q[hif->qno],
8562 + (void *)pkt_hdr, len, flags,
8563 + hif->client_ctrl, &buf_size);
8564 +
8565 + hif_lib_indicate_client(hif->client_id, EVENT_RX_PKT_IND,
8566 + hif->qno);
8567 +
8568 + if (unlikely(!free_buf)) {
8569 +#ifdef HIF_NAPI_STATS
8570 + hif->napi_counters[NAPI_CLIENT_FULL_COUNT]++;
8571 +#endif
8572 + /*
8573 + * If we want to keep in polling mode to retry later,
8574 + * we need to tell napi that we consumed
8575 + * the full budget or we will hit a livelock scenario.
8576 + * The core code keeps this napi instance
8577 + * at the head of the list and none of the other
8578 + * instances get to run
8579 + */
8580 + rx_processed = budget;
8581 +
8582 + if (flags & CL_DESC_FIRST)
8583 + hif->started = 0;
8584 +
8585 + break;
8586 + }
8587 +
8588 +pkt_drop:
8589 + /*Fill free buffer in the descriptor */
8590 + hif->rx_buf_addr[rtc] = free_buf;
8591 + hif->rx_buf_len[rtc] = min(pfe_pkt_size, buf_size);
8592 + writel((DDR_PHYS_TO_PFE
8593 + ((u32)dma_map_single(hif->dev,
8594 + free_buf, hif->rx_buf_len[rtc], DMA_FROM_DEVICE))),
8595 + &desc->data);
8596 + /*
8597 + * Ensure everything else is written to DDR before
8598 + * writing bd->ctrl
8599 + */
8600 + wmb();
8601 + writel((BD_CTRL_PKT_INT_EN | BD_CTRL_LIFM | BD_CTRL_DIR |
8602 + BD_CTRL_DESC_EN | BD_BUF_LEN(hif->rx_buf_len[rtc])),
8603 + &desc->ctrl);
8604 +
8605 + rtc = (rtc + 1) & (hif->rx_ring_size - 1);
8606 +
8607 + if (local_desc.ctrl & BD_CTRL_LIFM) {
8608 + if (!(hif->client_ctrl & HIF_CTRL_RX_CONTINUED)) {
8609 + rx_processed++;
8610 +
8611 +#ifdef HIF_NAPI_STATS
8612 + hif->napi_counters[NAPI_PACKET_COUNT]++;
8613 +#endif
8614 + }
8615 + hif->started = 0;
8616 + }
8617 + }
8618 +
8619 + hif->rxtoclean_index = rtc;
8620 + spin_unlock_bh(&hif->lock);
8621 +
8622 + /* we made some progress, re-start rx dma in case it stopped */
8623 + hif_rx_dma_start();
8624 +
8625 + return rx_processed;
8626 +}
8627 +
8628 +/*
8629 + * client_ack_txpacket-
8630 + * This function acks the Tx packet in the given client Tx queue by resetting
8631 + * the ownership bit in the descriptor.
8632 + */
8633 +static int client_ack_txpacket(struct pfe_hif *hif, unsigned int client_id,
8634 + unsigned int q_no)
8635 +{
8636 + struct hif_tx_queue *queue = &hif->client[client_id].tx_q[q_no];
8637 + struct tx_queue_desc *desc = queue->base + queue->ack_idx;
8638 +
8639 + if (readl(&desc->ctrl) & CL_DESC_OWN) {
8640 + writel((readl(&desc->ctrl) & ~CL_DESC_OWN), &desc->ctrl);
8641 + queue->ack_idx = (queue->ack_idx + 1) & (queue->size - 1);
8642 +
8643 + return 0;
8644 +
8645 + } else {
8646 + /*This should not happen */
8647 + pr_err("%s: %d %d %d %d %d %p %d\n", __func__,
8648 + hif->txtosend, hif->txtoclean, hif->txavail,
8649 + client_id, q_no, queue, queue->ack_idx);
8650 + WARN(1, "%s: doesn't own this descriptor", __func__);
8651 + return 1;
8652 + }
8653 +}
8654 +
8655 +void __hif_tx_done_process(struct pfe_hif *hif, int count)
8656 +{
8657 + struct hif_desc *desc;
8658 + struct hif_desc_sw *desc_sw;
8659 + int ttc, tx_avl;
8660 + int pkts_done[HIF_CLIENTS_MAX] = {0, 0};
8661 +
8662 + ttc = hif->txtoclean;
8663 + tx_avl = hif->txavail;
8664 +
8665 + while ((tx_avl < hif->tx_ring_size) && count--) {
8666 + desc = hif->tx_base + ttc;
8667 +
8668 + if (readl(&desc->ctrl) & BD_CTRL_DESC_EN)
8669 + break;
8670 +
8671 + desc_sw = &hif->tx_sw_queue[ttc];
8672 +
8673 + if (desc_sw->data) {
8678 + dma_unmap_single(hif->dev, desc_sw->data,
8679 + desc_sw->len, DMA_TO_DEVICE);
8680 + }
8681 +
8682 + if (desc_sw->client_id >= HIF_CLIENTS_MAX) {
8683 + pr_err("Invalid cl id %d\n", desc_sw->client_id);
8684 + break;
8685 + }
8686 +
8687 + pkts_done[desc_sw->client_id]++;
8688 +
8689 + client_ack_txpacket(hif, desc_sw->client_id, desc_sw->q_no);
8690 +
8691 + ttc = (ttc + 1) & (hif->tx_ring_size - 1);
8692 + tx_avl++;
8693 + }
8694 +
8695 + if (pkts_done[0])
8696 + hif_lib_indicate_client(0, EVENT_TXDONE_IND, 0);
8697 + if (pkts_done[1])
8698 + hif_lib_indicate_client(1, EVENT_TXDONE_IND, 0);
8699 +
8700 + hif->txtoclean = ttc;
8701 + hif->txavail = tx_avl;
8702 +
8703 + if (!count) {
8704 + tasklet_schedule(&hif->tx_cleanup_tasklet);
8705 + } else {
8706 + /*Enable Tx done interrupt */
8707 + writel(readl_relaxed(HIF_INT_ENABLE) | HIF_TXPKT_INT,
8708 + HIF_INT_ENABLE);
8709 + }
8710 +}
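
__hif_tx_done_process() reclaims at most `count` descriptors per call; when the budget runs out it reschedules the cleanup tasklet instead of re-enabling the Tx done interrupt. A schematic of that pattern; the helper names are generic stand-ins, not driver symbols:

	/* Schematic bounded-reclaim loop (illustrative only). */
	static void reclaim_bounded(int budget)
	{
		while (have_completed_descriptors() && budget-- > 0)
			reclaim_one_descriptor();

		if (budget <= 0)
			schedule_cleanup_again();	/* work still pending */
		else
			reenable_txdone_interrupt();	/* ring drained for now */
	}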
8711 +
8712 +static void pfe_tx_do_cleanup(unsigned long data)
8713 +{
8714 + struct pfe_hif *hif = (struct pfe_hif *)data;
8715 +
8716 + writel(HIF_INT | HIF_TXPKT_INT, HIF_INT_SRC);
8717 +
8718 + hif_tx_done_process(hif, 64);
8719 +}
8720 +
8721 +/*
8722 + * __hif_xmit_pkt -
8723 + * This function puts one packet in the HIF Tx queue
8724 + */
8725 +void __hif_xmit_pkt(struct pfe_hif *hif, unsigned int client_id, unsigned int
8726 + q_no, void *data, u32 len, unsigned int flags)
8727 +{
8728 + struct hif_desc *desc;
8729 + struct hif_desc_sw *desc_sw;
8730 +
8731 + desc = hif->tx_base + hif->txtosend;
8732 + desc_sw = &hif->tx_sw_queue[hif->txtosend];
8733 +
8734 + desc_sw->len = len;
8735 + desc_sw->client_id = client_id;
8736 + desc_sw->q_no = q_no;
8737 + desc_sw->flags = flags;
8738 +
8739 + if (flags & HIF_DONT_DMA_MAP) {
8740 + desc_sw->data = 0;
8741 + writel((u32)DDR_PHYS_TO_PFE(data), &desc->data);
8742 + } else {
8743 + desc_sw->data = dma_map_single(hif->dev, data, len,
8744 + DMA_TO_DEVICE);
8745 + writel((u32)DDR_PHYS_TO_PFE(desc_sw->data), &desc->data);
8746 + }
8747 +
8748 + hif->txtosend = (hif->txtosend + 1) & (hif->tx_ring_size - 1);
8749 + hif->txavail--;
8750 +
8751 + if ((!((flags & HIF_DATA_VALID) && (flags &
8752 + HIF_LAST_BUFFER))))
8753 + goto skip_tx;
8754 +
8755 + /*
8756 + * Ensure everything else is written to DDR before
8757 + * writing bd->ctrl
8758 + */
8759 + wmb();
8760 +
8761 + do {
8762 + desc_sw = &hif->tx_sw_queue[hif->txtoflush];
8763 + desc = hif->tx_base + hif->txtoflush;
8764 +
8765 + if (desc_sw->flags & HIF_LAST_BUFFER) {
8766 + writel((BD_CTRL_LIFM |
8767 + BD_CTRL_BRFETCH_DISABLE | BD_CTRL_RTFETCH_DISABLE
8768 + | BD_CTRL_PARSE_DISABLE | BD_CTRL_DESC_EN |
8769 + BD_CTRL_PKT_INT_EN | BD_BUF_LEN(desc_sw->len)),
8770 + &desc->ctrl);
8771 + } else {
8772 + writel((BD_CTRL_DESC_EN |
8773 + BD_BUF_LEN(desc_sw->len)), &desc->ctrl);
8774 + }
8775 + hif->txtoflush = (hif->txtoflush + 1) & (hif->tx_ring_size - 1);
8776 +	} while (hif->txtoflush != hif->txtosend);
8779 +
8780 +skip_tx:
8781 + return;
8782 +}
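
Note that descriptors queued here are only armed once a buffer carrying both HIF_DATA_VALID and HIF_LAST_BUFFER is submitted, so a multi-buffer packet is a flagged sequence of calls. A hedged usage sketch (hdr_buf/payload_buf and their lengths are hypothetical; the lock helpers come from pfe_hif.h):

	/* Illustrative two-buffer transmit under the HIF tx lock. */
	hif_tx_lock(hif);
	if (__hif_tx_avail(hif) >= 2) {
		__hif_xmit_pkt(hif, client_id, qno, hdr_buf, hdr_len,
			       HIF_FIRST_BUFFER);
		__hif_xmit_pkt(hif, client_id, qno, payload_buf, payload_len,
			       HIF_DATA_VALID | HIF_LAST_BUFFER);
	}
	hif_tx_unlock(hif);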
8783 +
8784 +static irqreturn_t wol_isr(int irq, void *dev_id)
8785 +{
8786 + pr_info("WoL\n");
8787 + gemac_set_wol(EMAC1_BASE_ADDR, 0);
8788 + gemac_set_wol(EMAC2_BASE_ADDR, 0);
8789 + return IRQ_HANDLED;
8790 +}
8791 +
8792 +/*
8793 + * hif_isr-
8794 + * This ISR routine processes Rx/Tx done interrupts from the HIF hardware block
8795 + */
8796 +static irqreturn_t hif_isr(int irq, void *dev_id)
8797 +{
8798 + struct pfe_hif *hif = (struct pfe_hif *)dev_id;
8799 + int int_status;
8800 + int int_enable_mask;
8801 +
8802 + /*Read hif interrupt source register */
8803 + int_status = readl_relaxed(HIF_INT_SRC);
8804 + int_enable_mask = readl_relaxed(HIF_INT_ENABLE);
8805 +
8806 + if ((int_status & HIF_INT) == 0)
8807 + return IRQ_NONE;
8808 +
8809 + int_status &= ~(HIF_INT);
8810 +
8811 + if (int_status & HIF_RXPKT_INT) {
8812 + int_status &= ~(HIF_RXPKT_INT);
8813 + int_enable_mask &= ~(HIF_RXPKT_INT);
8814 +
8815 + napi_first_batch = 1;
8816 +
8817 + if (napi_schedule_prep(&hif->napi)) {
8818 +#ifdef HIF_NAPI_STATS
8819 + hif->napi_counters[NAPI_SCHED_COUNT]++;
8820 +#endif
8821 + __napi_schedule(&hif->napi);
8822 + }
8823 + }
8824 +
8825 + if (int_status & HIF_TXPKT_INT) {
8826 + int_status &= ~(HIF_TXPKT_INT);
8827 + int_enable_mask &= ~(HIF_TXPKT_INT);
8828 +		/* Schedule tx cleanup tasklet */
8829 + tasklet_schedule(&hif->tx_cleanup_tasklet);
8830 + }
8831 +
8832 + /*Disable interrupts, they will be enabled after they are serviced */
8833 + writel_relaxed(int_enable_mask, HIF_INT_ENABLE);
8834 +
8835 + if (int_status) {
8836 + pr_info("%s : Invalid interrupt : %d\n", __func__,
8837 + int_status);
8838 + writel(int_status, HIF_INT_SRC);
8839 + }
8840 +
8841 + return IRQ_HANDLED;
8842 +}
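
The ISR follows the usual mask-then-schedule shape: the triggering source is masked in HIF_INT_ENABLE, and the deferred context (NAPI poll or tasklet) re-enables it once the work is drained. The Rx half, schematically (RX_SOURCE and INT_ENABLE_REG are stand-ins):

	/* Schematic Rx half of a mask-then-schedule ISR (illustrative). */
	if (status & RX_SOURCE) {
		mask &= ~RX_SOURCE;	/* no more Rx IRQs while polling */
		if (napi_schedule_prep(&hif->napi))
			__napi_schedule(&hif->napi); /* poll re-enables it */
	}
	writel_relaxed(mask, INT_ENABLE_REG);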
8843 +
8844 +void hif_process_client_req(struct pfe_hif *hif, int req, int data1, int data2)
8845 +{
8846 + unsigned int client_id = data1;
8847 +
8848 + if (client_id >= HIF_CLIENTS_MAX) {
8849 + pr_err("%s: client id %d out of bounds\n", __func__,
8850 + client_id);
8851 + return;
8852 + }
8853 +
8854 + switch (req) {
8855 + case REQUEST_CL_REGISTER:
8856 +		/* Request to register a client */
8857 + pr_info("%s: register client_id %d\n",
8858 + __func__, client_id);
8859 + pfe_hif_client_register(hif, client_id, (struct
8860 + hif_client_shm *)&hif->shm->client[client_id]);
8861 + break;
8862 +
8863 + case REQUEST_CL_UNREGISTER:
8864 + pr_info("%s: unregister client_id %d\n",
8865 + __func__, client_id);
8866 +
8867 +		/* Request to unregister a client */
8868 + pfe_hif_client_unregister(hif, client_id);
8869 +
8870 + break;
8871 +
8872 + default:
8873 + pr_err("%s: unsupported request %d\n",
8874 + __func__, req);
8875 + break;
8876 + }
8877 +
8878 + /*
8879 + * Process client Tx queues
8880 +	 * Currently we do not check for pending Tx here
8881 + */
8882 +}
8883 +
8884 +/*
8885 + * pfe_hif_rx_poll
8886 + * This is the NAPI poll function that processes the HIF Rx queue.
8887 + */
8888 +static int pfe_hif_rx_poll(struct napi_struct *napi, int budget)
8889 +{
8890 + struct pfe_hif *hif = container_of(napi, struct pfe_hif, napi);
8891 + int work_done;
8892 +
8893 +#ifdef HIF_NAPI_STATS
8894 + hif->napi_counters[NAPI_POLL_COUNT]++;
8895 +#endif
8896 +
8897 + work_done = pfe_hif_rx_process(hif, budget);
8898 +
8899 + if (work_done < budget) {
8900 + napi_complete(napi);
8901 + writel(readl_relaxed(HIF_INT_ENABLE) | HIF_RXPKT_INT,
8902 + HIF_INT_ENABLE);
8903 + }
8904 +#ifdef HIF_NAPI_STATS
8905 + else
8906 + hif->napi_counters[NAPI_FULL_BUDGET_COUNT]++;
8907 +#endif
8908 +
8909 + return work_done;
8910 +}
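
The return value follows the standard NAPI contract, which is what the livelock comment in pfe_hif_rx_process() relies on. As a schematic sketch (do_rx_work() and reenable_rx_irq() are stand-ins):

	/* NAPI poll contract: full budget keeps polling, less completes. */
	static int poll_sketch(struct napi_struct *napi, int budget)
	{
		int done = do_rx_work(napi, budget);

		if (done < budget) {
			napi_complete(napi);
			reenable_rx_irq();
		}
		return done;	/* == budget -> core calls poll again */
	}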
8911 +
8912 +/*
8913 + * pfe_hif_init
8914 + * This function initializes the base addresses, IRQs, etc.
8915 + */
8916 +int pfe_hif_init(struct pfe *pfe)
8917 +{
8918 + struct pfe_hif *hif = &pfe->hif;
8919 + int err;
8920 +
8921 + pr_info("%s\n", __func__);
8922 +
8923 + hif->dev = pfe->dev;
8924 + hif->irq = pfe->hif_irq;
8925 +
8926 + err = pfe_hif_alloc_descr(hif);
8927 + if (err)
8928 + goto err0;
8929 +
8930 + if (pfe_hif_init_buffers(hif)) {
8931 +		pr_err("%s: Could not initialize buffer descriptors\n",
8932 +		       __func__);
8933 + err = -ENOMEM;
8934 + goto err1;
8935 + }
8936 +
8937 + /* Initialize NAPI for Rx processing */
8938 + init_dummy_netdev(&hif->dummy_dev);
8939 + netif_napi_add(&hif->dummy_dev, &hif->napi, pfe_hif_rx_poll,
8940 + HIF_RX_POLL_WEIGHT);
8941 + napi_enable(&hif->napi);
8942 +
8943 + spin_lock_init(&hif->tx_lock);
8944 + spin_lock_init(&hif->lock);
8945 +
8946 + hif_init();
8947 + hif_rx_enable();
8948 + hif_tx_enable();
8949 +
8950 + /* Disable tx done interrupt */
8951 + writel(HIF_INT_MASK, HIF_INT_ENABLE);
8952 +
8953 + gpi_enable(HGPI_BASE_ADDR);
8954 +
8955 + err = request_irq(hif->irq, hif_isr, 0, "pfe_hif", hif);
8956 + if (err) {
8957 + pr_err("%s: failed to get the hif IRQ = %d\n",
8958 + __func__, hif->irq);
8959 + goto err1;
8960 + }
8961 +
8962 + err = request_irq(pfe->wol_irq, wol_isr, 0, "pfe_wol", pfe);
8963 +	if (err) {
8964 +		pr_err("%s: failed to get the wol IRQ = %d\n",
8965 +			__func__, pfe->wol_irq);
8966 +		free_irq(hif->irq, hif); /* undo the hif IRQ request above */
8967 +		goto err1;
8968 +	}
8968 +
8969 + tasklet_init(&hif->tx_cleanup_tasklet,
8970 + (void(*)(unsigned long))pfe_tx_do_cleanup,
8971 + (unsigned long)hif);
8972 +
8973 + return 0;
8974 +err1:
8975 + pfe_hif_free_descr(hif);
8976 +err0:
8977 + return err;
8978 +}
8979 +
8980 +/* pfe_hif_exit- */
8981 +void pfe_hif_exit(struct pfe *pfe)
8982 +{
8983 + struct pfe_hif *hif = &pfe->hif;
8984 +
8985 + pr_info("%s\n", __func__);
8986 +
8987 + tasklet_kill(&hif->tx_cleanup_tasklet);
8988 +
8989 + spin_lock_bh(&hif->lock);
8990 + hif->shm->g_client_status[0] = 0;
8991 + /* Make sure all clients are disabled*/
8992 + hif->shm->g_client_status[1] = 0;
8993 +
8994 + spin_unlock_bh(&hif->lock);
8995 +
8996 + /*Disable Rx/Tx */
8997 + gpi_disable(HGPI_BASE_ADDR);
8998 + hif_rx_disable();
8999 + hif_tx_disable();
9000 +
9001 + napi_disable(&hif->napi);
9002 + netif_napi_del(&hif->napi);
9003 +
9004 + free_irq(pfe->wol_irq, pfe);
9005 + free_irq(hif->irq, hif);
9006 +
9007 + pfe_hif_release_buffers(hif);
9008 + pfe_hif_free_descr(hif);
9009 +}
9010 --- /dev/null
9011 +++ b/drivers/staging/fsl_ppfe/pfe_hif.h
9012 @@ -0,0 +1,199 @@
9013 +/* SPDX-License-Identifier: GPL-2.0+ */
9014 +/*
9015 + * Copyright 2015-2016 Freescale Semiconductor, Inc.
9016 + * Copyright 2017 NXP
9017 + */
9018 +
9019 +#ifndef _PFE_HIF_H_
9020 +#define _PFE_HIF_H_
9021 +
9022 +#include <linux/netdevice.h>
9023 +
9024 +#define HIF_NAPI_STATS
9025 +
9026 +#define HIF_CLIENT_QUEUES_MAX 16
9027 +#define HIF_RX_POLL_WEIGHT 64
9028 +
9029 +#define HIF_RX_PKT_MIN_SIZE 0x800 /* 2KB */
9030 +#define HIF_RX_PKT_MIN_SIZE_MASK ~(HIF_RX_PKT_MIN_SIZE - 1)
9031 +#define ROUND_MIN_RX_SIZE(_sz) (((_sz) + (HIF_RX_PKT_MIN_SIZE - 1)) \
9032 + & HIF_RX_PKT_MIN_SIZE_MASK)
9033 +#define PRESENT_OFST_IN_PAGE(_buf) (((unsigned long int)(_buf) & (PAGE_SIZE \
9034 + - 1)) & HIF_RX_PKT_MIN_SIZE_MASK)
9035 +
9036 +enum {
9037 + NAPI_SCHED_COUNT = 0,
9038 + NAPI_POLL_COUNT,
9039 + NAPI_PACKET_COUNT,
9040 + NAPI_DESC_COUNT,
9041 + NAPI_FULL_BUDGET_COUNT,
9042 + NAPI_CLIENT_FULL_COUNT,
9043 + NAPI_MAX_COUNT
9044 +};
9045 +
9046 +/*
9047 + * HIF_TX_DESC_NT should always be greater than 4,
9048 + * otherwise HIF_TX_POLL_MARK will become zero.
9049 + */
9050 +#define HIF_RX_DESC_NT 256
9051 +#define HIF_TX_DESC_NT 2048
9052 +
9053 +#define HIF_FIRST_BUFFER BIT(0)
9054 +#define HIF_LAST_BUFFER BIT(1)
9055 +#define HIF_DONT_DMA_MAP BIT(2)
9056 +#define HIF_DATA_VALID BIT(3)
9057 +#define HIF_TSO BIT(4)
9058 +
9059 +enum {
9060 + PFE_CL_GEM0 = 0,
9061 + PFE_CL_GEM1,
9062 + HIF_CLIENTS_MAX
9063 +};
9064 +
9065 +/*structure to store client queue info */
9066 +struct hif_rx_queue {
9067 + struct rx_queue_desc *base;
9068 + u32 size;
9069 + u32 write_idx;
9070 +};
9071 +
9072 +struct hif_tx_queue {
9073 + struct tx_queue_desc *base;
9074 + u32 size;
9075 + u32 ack_idx;
9076 +};
9077 +
9078 +/*Structure to store the client info */
9079 +struct hif_client {
9080 + int rx_qn;
9081 + struct hif_rx_queue rx_q[HIF_CLIENT_QUEUES_MAX];
9082 + int tx_qn;
9083 + struct hif_tx_queue tx_q[HIF_CLIENT_QUEUES_MAX];
9084 +};
9085 +
9086 +/*HIF hardware buffer descriptor */
9087 +struct hif_desc {
9088 + u32 ctrl;
9089 + u32 status;
9090 + u32 data;
9091 + u32 next;
9092 +};
9093 +
9094 +struct __hif_desc {
9095 + u32 ctrl;
9096 + u32 status;
9097 + u32 data;
9098 +};
9099 +
9100 +struct hif_desc_sw {
9101 + dma_addr_t data;
9102 + u16 len;
9103 + u8 client_id;
9104 + u8 q_no;
9105 + u16 flags;
9106 +};
9107 +
9108 +struct hif_hdr {
9109 + u8 client_id;
9110 + u8 q_num;
9111 + u16 client_ctrl;
9112 + u16 client_ctrl1;
9113 +};
9114 +
9115 +struct __hif_hdr {
9116 + union {
9117 + struct hif_hdr hdr;
9118 + u32 word[2];
9119 + };
9120 +};
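
The union overlays the HIF header on two 32-bit words, which is what lets the Rx path copy it with __memcpy8() and the Tx path pack it with single word writes. A standalone sketch of the layout; little-endian is assumed, as on LS1012A under Linux:

	#include <stdint.h>
	#include <stdio.h>

	struct hif_hdr {
		uint8_t  client_id;
		uint8_t  q_num;
		uint16_t client_ctrl;
		uint16_t client_ctrl1;
	};

	struct __hif_hdr {
		union {
			struct hif_hdr hdr;
			uint32_t word[2];
		};
	};

	int main(void)
	{
		struct __hif_hdr h = { .hdr = { .client_id = 1, .q_num = 2,
						.client_ctrl = 0xabcd } };

		/* On little-endian, word[0] packs client_ctrl << 16 |
		 * q_num << 8 | client_id, matching the aligned fast path
		 * in hif_hdr_write() in pfe_hif_lib.c.
		 */
		printf("word0=0x%08x\n", (unsigned int)h.word[0]); /* 0xabcd0201 */
		return 0;
	}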
9121 +
9122 +struct hif_ipsec_hdr {
9123 + u16 sa_handle[2];
9124 +} __packed;
9125 +
9126 +/* HIF_CTRL_TX... defines */
9127 +#define HIF_CTRL_TX_CHECKSUM BIT(2)
9128 +
9129 +/* HIF_CTRL_RX... defines */
9130 +#define HIF_CTRL_RX_OFFSET_OFST (24)
9131 +#define HIF_CTRL_RX_CHECKSUMMED BIT(2)
9132 +#define HIF_CTRL_RX_CONTINUED BIT(1)
9133 +
9134 +struct pfe_hif {
9135 + /* To store registered clients in hif layer */
9136 + struct hif_client client[HIF_CLIENTS_MAX];
9137 + struct hif_shm *shm;
9138 + int irq;
9139 +
9140 + void *descr_baseaddr_v;
9141 + unsigned long descr_baseaddr_p;
9142 +
9143 + struct hif_desc *rx_base;
9144 + u32 rx_ring_size;
9145 + u32 rxtoclean_index;
9146 + void *rx_buf_addr[HIF_RX_DESC_NT];
9147 + int rx_buf_len[HIF_RX_DESC_NT];
9148 + unsigned int qno;
9149 + unsigned int client_id;
9150 + unsigned int client_ctrl;
9151 + unsigned int started;
9152 +
9153 + struct hif_desc *tx_base;
9154 + u32 tx_ring_size;
9155 + u32 txtosend;
9156 + u32 txtoclean;
9157 + u32 txavail;
9158 + u32 txtoflush;
9159 + struct hif_desc_sw tx_sw_queue[HIF_TX_DESC_NT];
9160 +
9161 +/* tx_lock synchronizes hif packet tx as well as pfe_hif structure access */
9162 + spinlock_t tx_lock;
9163 +/* lock synchronizes hif rx queue processing */
9164 + spinlock_t lock;
9165 + struct net_device dummy_dev;
9166 + struct napi_struct napi;
9167 + struct device *dev;
9168 +
9169 +#ifdef HIF_NAPI_STATS
9170 + unsigned int napi_counters[NAPI_MAX_COUNT];
9171 +#endif
9172 + struct tasklet_struct tx_cleanup_tasklet;
9173 +};
9174 +
9175 +void __hif_xmit_pkt(struct pfe_hif *hif, unsigned int client_id, unsigned int
9176 + q_no, void *data, u32 len, unsigned int flags);
9177 +int hif_xmit_pkt(struct pfe_hif *hif, unsigned int client_id, unsigned int q_no,
9178 + void *data, unsigned int len);
9179 +void __hif_tx_done_process(struct pfe_hif *hif, int count);
9180 +void hif_process_client_req(struct pfe_hif *hif, int req, int data1, int
9181 + data2);
9182 +int pfe_hif_init(struct pfe *pfe);
9183 +void pfe_hif_exit(struct pfe *pfe);
9184 +void pfe_hif_rx_idle(struct pfe_hif *hif);
9185 +static inline void hif_tx_done_process(struct pfe_hif *hif, int count)
9186 +{
9187 + spin_lock_bh(&hif->tx_lock);
9188 + __hif_tx_done_process(hif, count);
9189 + spin_unlock_bh(&hif->tx_lock);
9190 +}
9191 +
9192 +static inline void hif_tx_lock(struct pfe_hif *hif)
9193 +{
9194 + spin_lock_bh(&hif->tx_lock);
9195 +}
9196 +
9197 +static inline void hif_tx_unlock(struct pfe_hif *hif)
9198 +{
9199 + spin_unlock_bh(&hif->tx_lock);
9200 +}
9201 +
9202 +static inline int __hif_tx_avail(struct pfe_hif *hif)
9203 +{
9204 + return hif->txavail;
9205 +}
9206 +
9207 +#define __memcpy8(dst, src) memcpy(dst, src, 8)
9208 +#define __memcpy12(dst, src) memcpy(dst, src, 12)
9209 +#define __memcpy(dst, src, len) memcpy(dst, src, len)
9210 +
9211 +#endif /* _PFE_HIF_H_ */
9212 --- /dev/null
9213 +++ b/drivers/staging/fsl_ppfe/pfe_hif_lib.c
9214 @@ -0,0 +1,628 @@
9215 +// SPDX-License-Identifier: GPL-2.0+
9216 +/*
9217 + * Copyright 2015-2016 Freescale Semiconductor, Inc.
9218 + * Copyright 2017 NXP
9219 + */
9220 +
9221 +#include <linux/version.h>
9222 +#include <linux/kernel.h>
9223 +#include <linux/slab.h>
9224 +#include <linux/interrupt.h>
9225 +#include <linux/workqueue.h>
9226 +#include <linux/dma-mapping.h>
9227 +#include <linux/dmapool.h>
9228 +#include <linux/sched.h>
9229 +#include <linux/skbuff.h>
9230 +#include <linux/moduleparam.h>
9231 +#include <linux/cpu.h>
9232 +
9233 +#include "pfe_mod.h"
9234 +#include "pfe_hif.h"
9235 +#include "pfe_hif_lib.h"
9236 +
9237 +unsigned int lro_mode;
9238 +unsigned int page_mode;
9239 +unsigned int tx_qos = 1;
9240 +module_param(tx_qos, uint, 0444);
9241 +MODULE_PARM_DESC(tx_qos, "0: disable,\n"
9242 + "1: enable (default), guarantee no packet drop at TMU level\n");
9243 +unsigned int pfe_pkt_size;
9244 +unsigned int pfe_pkt_headroom;
9245 +unsigned int emac_txq_cnt;
9246 +
9247 +/*
9248 + * pfe_hif_lib.c
9249 + * Common functions used by HIF client drivers
9250 + */
9251 +
9252 +/*HIF shared memory Global variable */
9253 +struct hif_shm ghif_shm;
9254 +
9255 +/* Cleanup the HIF shared memory, release HIF rx_buffer_pool.
9256 + * This function should be called after pfe_hif_exit
9257 + *
9258 + * @param[in] hif_shm Shared memory address location in DDR
9259 + */
9260 +static void pfe_hif_shm_clean(struct hif_shm *hif_shm)
9261 +{
9262 + int i;
9263 + void *pkt;
9264 +
9265 + for (i = 0; i < hif_shm->rx_buf_pool_cnt; i++) {
9266 + pkt = hif_shm->rx_buf_pool[i];
9267 + if (pkt) {
9268 + hif_shm->rx_buf_pool[i] = NULL;
9269 + pkt -= pfe_pkt_headroom;
9270 +
9271 + if (page_mode)
9272 + put_page(virt_to_page(pkt));
9273 + else
9274 + kfree(pkt);
9275 + }
9276 + }
9277 +}
9278 +
9279 +/* Initialize shared memory used between HIF driver and clients,
9280 + * allocate rx_buffer_pool required for HIF Rx descriptors.
9281 + * This function should be called before initializing HIF driver.
9282 + *
9283 + * @param[in] hif_shm Shared memory address location in DDR
9284 + * @return 0 on success, <0 on failure to initialize
9285 + */
9286 +static int pfe_hif_shm_init(struct hif_shm *hif_shm)
9287 +{
9288 + int i;
9289 + void *pkt;
9290 +
9291 + memset(hif_shm, 0, sizeof(struct hif_shm));
9292 + hif_shm->rx_buf_pool_cnt = HIF_RX_DESC_NT;
9293 +
9294 + for (i = 0; i < hif_shm->rx_buf_pool_cnt; i++) {
9295 + if (page_mode) {
9296 + pkt = (void *)__get_free_page(GFP_KERNEL |
9297 + GFP_DMA_PFE);
9298 + } else {
9299 + pkt = kmalloc(PFE_BUF_SIZE, GFP_KERNEL | GFP_DMA_PFE);
9300 + }
9301 +
9302 + if (pkt)
9303 + hif_shm->rx_buf_pool[i] = pkt + pfe_pkt_headroom;
9304 + else
9305 + goto err0;
9306 + }
9307 +
9308 + return 0;
9309 +
9310 +err0:
9311 + pr_err("%s Low memory\n", __func__);
9312 + pfe_hif_shm_clean(hif_shm);
9313 + return -ENOMEM;
9314 +}
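
The pool stores pointers already advanced by pfe_pkt_headroom, so the cleanup path above walks each pointer back before kfree()/put_page(). A userspace sketch of that bookkeeping (headroom value illustrative):

	#include <stdlib.h>

	#define HEADROOM 128	/* stand-in for pfe_pkt_headroom */

	int main(void)
	{
		void *raw = malloc(2048);	/* like kmalloc(PFE_BUF_SIZE) */
		void *pkt = (char *)raw + HEADROOM; /* what the pool stores */

		/* ... buffer in use at pkt ... */

		free((char *)pkt - HEADROOM);	/* must free the original */
		return 0;
	}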
9315 +
9316 +/* This function sends an indication to the HIF driver
9317 + *
9318 + * @param[in] hif hif context
9319 + */
9320 +static void hif_lib_indicate_hif(struct pfe_hif *hif, int req, int data1, int
9321 + data2)
9322 +{
9323 + hif_process_client_req(hif, req, data1, data2);
9324 +}
9325 +
9326 +void hif_lib_indicate_client(int client_id, int event_type, int qno)
9327 +{
9328 + struct hif_client_s *client = pfe->hif_client[client_id];
9329 +
9330 + if (!client || (event_type >= HIF_EVENT_MAX) || (qno >=
9331 + HIF_CLIENT_QUEUES_MAX))
9332 + return;
9333 +
9334 + if (!test_and_set_bit(qno, &client->queue_mask[event_type]))
9335 + client->event_handler(client->priv, event_type, qno);
9336 +}
9337 +
9338 +/* This function releases the Rx queue descriptor memory and pre-filled buffers
9339 + *
9340 + * @param[in] client hif_client context
9341 + */
9342 +static void hif_lib_client_release_rx_buffers(struct hif_client_s *client)
9343 +{
9344 + struct rx_queue_desc *desc;
9345 + int qno, ii;
9346 + void *buf;
9347 +
9348 + for (qno = 0; qno < client->rx_qn; qno++) {
9349 + desc = client->rx_q[qno].base;
9350 +
9351 + for (ii = 0; ii < client->rx_q[qno].size; ii++) {
9352 + buf = (void *)desc->data;
9353 + if (buf) {
9354 + buf -= pfe_pkt_headroom;
9355 +
9356 + if (page_mode)
9357 + free_page((unsigned long)buf);
9358 + else
9359 + kfree(buf);
9360 +
9361 + desc->ctrl = 0;
9362 + }
9363 +
9364 + desc++;
9365 + }
9366 + }
9367 +
9368 + kfree(client->rx_qbase);
9369 +}
9370 +
9371 +/* This function allocates memory for the Rx queue descriptors and pre-fills
9372 + * the Rx queues with buffers.
9373 + * @param[in] client client context
9374 + * @param[in] q_size size of the rxQ, all queues are of same size
9375 + */
9376 +static int hif_lib_client_init_rx_buffers(struct hif_client_s *client, int
9377 + q_size)
9378 +{
9379 + struct rx_queue_desc *desc;
9380 + struct hif_client_rx_queue *queue;
9381 + int ii, qno;
9382 +
9383 + /*Allocate memory for the client queues */
9384 + client->rx_qbase = kzalloc(client->rx_qn * q_size * sizeof(struct
9385 + rx_queue_desc), GFP_KERNEL);
9386 + if (!client->rx_qbase)
9387 + goto err;
9388 +
9389 + for (qno = 0; qno < client->rx_qn; qno++) {
9390 + queue = &client->rx_q[qno];
9391 +
9392 + queue->base = client->rx_qbase + qno * q_size * sizeof(struct
9393 + rx_queue_desc);
9394 + queue->size = q_size;
9395 + queue->read_idx = 0;
9396 + queue->write_idx = 0;
9397 +
9398 + pr_debug("rx queue: %d, base: %p, size: %d\n", qno,
9399 + queue->base, queue->size);
9400 + }
9401 +
9402 + for (qno = 0; qno < client->rx_qn; qno++) {
9403 + queue = &client->rx_q[qno];
9404 + desc = queue->base;
9405 +
9406 + for (ii = 0; ii < queue->size; ii++) {
9407 + desc->ctrl = CL_DESC_BUF_LEN(pfe_pkt_size) |
9408 + CL_DESC_OWN;
9409 + desc++;
9410 + }
9411 + }
9412 +
9413 + return 0;
9414 +
9415 +err:
9416 + return 1;
9417 +}
9418 +
9420 +static void hif_lib_client_cleanup_tx_queue(struct hif_client_tx_queue *queue)
9421 +{
9422 + pr_debug("%s\n", __func__);
9423 +
9424 + /*
9425 +	 * Check if there are any pending packets. The client must flush the tx
9426 +	 * queues before unregistering, by calling
9427 +	 * hif_lib_tx_get_next_complete().
9428 +	 *
9429 +	 * HIF no longer invokes this client since it is no longer registered.
9430 + */
9431 + if (queue->tx_pending)
9432 + pr_err("%s: pending transmit packets\n", __func__);
9433 +}
9434 +
9435 +static void hif_lib_client_release_tx_buffers(struct hif_client_s *client)
9436 +{
9437 + int qno;
9438 +
9439 + pr_debug("%s\n", __func__);
9440 +
9441 + for (qno = 0; qno < client->tx_qn; qno++)
9442 + hif_lib_client_cleanup_tx_queue(&client->tx_q[qno]);
9443 +
9444 + kfree(client->tx_qbase);
9445 +}
9446 +
9447 +static int hif_lib_client_init_tx_buffers(struct hif_client_s *client, int
9448 + q_size)
9449 +{
9450 + struct hif_client_tx_queue *queue;
9451 + int qno;
9452 +
9453 + client->tx_qbase = kzalloc(client->tx_qn * q_size * sizeof(struct
9454 + tx_queue_desc), GFP_KERNEL);
9455 + if (!client->tx_qbase)
9456 + return 1;
9457 +
9458 + for (qno = 0; qno < client->tx_qn; qno++) {
9459 + queue = &client->tx_q[qno];
9460 +
9461 + queue->base = client->tx_qbase + qno * q_size * sizeof(struct
9462 + tx_queue_desc);
9463 + queue->size = q_size;
9464 + queue->read_idx = 0;
9465 + queue->write_idx = 0;
9466 + queue->tx_pending = 0;
9467 + queue->nocpy_flag = 0;
9468 + queue->prev_tmu_tx_pkts = 0;
9469 + queue->done_tmu_tx_pkts = 0;
9470 +
9471 + pr_debug("tx queue: %d, base: %p, size: %d\n", qno,
9472 + queue->base, queue->size);
9473 + }
9474 +
9475 + return 0;
9476 +}
9477 +
9478 +static int hif_lib_event_dummy(void *priv, int event_type, int qno)
9479 +{
9480 + return 0;
9481 +}
9482 +
9483 +int hif_lib_client_register(struct hif_client_s *client)
9484 +{
9485 + struct hif_shm *hif_shm;
9486 + struct hif_client_shm *client_shm;
9487 + int err, i;
9489 +
9490 + pr_debug("%s\n", __func__);
9491 +
9492 + /*Allocate memory before spin_lock*/
9493 + if (hif_lib_client_init_rx_buffers(client, client->rx_qsize)) {
9494 + err = -ENOMEM;
9495 + goto err_rx;
9496 + }
9497 +
9498 + if (hif_lib_client_init_tx_buffers(client, client->tx_qsize)) {
9499 + err = -ENOMEM;
9500 + goto err_tx;
9501 + }
9502 +
9503 + spin_lock_bh(&pfe->hif.lock);
9504 + if (!(client->pfe) || (client->id >= HIF_CLIENTS_MAX) ||
9505 + (pfe->hif_client[client->id])) {
9506 + err = -EINVAL;
9507 + goto err;
9508 + }
9509 +
9510 + hif_shm = client->pfe->hif.shm;
9511 +
9512 + if (!client->event_handler)
9513 + client->event_handler = hif_lib_event_dummy;
9514 +
9515 + /*Initialize client specific shared memory */
9516 + client_shm = (struct hif_client_shm *)&hif_shm->client[client->id];
9517 + client_shm->rx_qbase = (unsigned long int)client->rx_qbase;
9518 + client_shm->rx_qsize = client->rx_qsize;
9519 + client_shm->tx_qbase = (unsigned long int)client->tx_qbase;
9520 + client_shm->tx_qsize = client->tx_qsize;
9521 + client_shm->ctrl = (client->tx_qn << CLIENT_CTRL_TX_Q_CNT_OFST) |
9522 + (client->rx_qn << CLIENT_CTRL_RX_Q_CNT_OFST);
9523 + /* spin_lock_init(&client->rx_lock); */
9524 +
9525 + for (i = 0; i < HIF_EVENT_MAX; i++) {
9526 +		/* By default all events are unmasked */
9527 +		client->queue_mask[i] = 0;
9530 + }
9531 +
9532 + /*Indicate to HIF driver*/
9533 + hif_lib_indicate_hif(&pfe->hif, REQUEST_CL_REGISTER, client->id, 0);
9534 +
9535 + pr_debug("%s: client: %p, client_id: %d, tx_qsize: %d, rx_qsize: %d\n",
9536 + __func__, client, client->id, client->tx_qsize,
9537 + client->rx_qsize);
9538 +
9539 + client->cpu_id = -1;
9540 +
9541 + pfe->hif_client[client->id] = client;
9542 + spin_unlock_bh(&pfe->hif.lock);
9543 +
9544 + return 0;
9545 +
9546 +err:
9547 + spin_unlock_bh(&pfe->hif.lock);
9548 + hif_lib_client_release_tx_buffers(client);
9549 +
9550 +err_tx:
9551 + hif_lib_client_release_rx_buffers(client);
9552 +
9553 +err_rx:
9554 + return err;
9555 +}
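
A hedged sketch of how an Ethernet client might fill in struct hif_client_s before registering; my_event_handler() and the queue depths are illustrative, the rest comes from pfe_hif_lib.h:

	static int my_event_handler(void *priv, int event, int qno)
	{
		/* dispatch EVENT_RX_PKT_IND / EVENT_TXDONE_IND here */
		return 0;
	}

	static int register_example(struct pfe *pfe_inst, void *priv)
	{
		static struct hif_client_s client;

		client.id = PFE_CL_GEM0;
		client.tx_qn = 1;
		client.rx_qn = 1;
		client.rx_qsize = 256;	/* example depths, power of two */
		client.tx_qsize = 256;
		client.event_handler = my_event_handler;
		client.pfe = pfe_inst;
		client.priv = priv;

		return hif_lib_client_register(&client);
	}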
9556 +
9557 +int hif_lib_client_unregister(struct hif_client_s *client)
9558 +{
9559 + struct pfe *pfe = client->pfe;
9560 + u32 client_id = client->id;
9561 +
9562 +	pr_info("%s : client: %p, client_id: %d, txQ_depth: %d, rxQ_depth: %d\n",
9563 +		__func__, client, client->id, client->tx_qsize,
9564 +		client->rx_qsize);
9566 +
9567 + spin_lock_bh(&pfe->hif.lock);
9568 + hif_lib_indicate_hif(&pfe->hif, REQUEST_CL_UNREGISTER, client->id, 0);
9569 +
9570 + hif_lib_client_release_tx_buffers(client);
9571 + hif_lib_client_release_rx_buffers(client);
9572 + pfe->hif_client[client_id] = NULL;
9573 + spin_unlock_bh(&pfe->hif.lock);
9574 +
9575 + return 0;
9576 +}
9577 +
9578 +int hif_lib_event_handler_start(struct hif_client_s *client, int event,
9579 + int qno)
9580 +{
9581 + struct hif_client_rx_queue *queue = &client->rx_q[qno];
9582 + struct rx_queue_desc *desc = queue->base + queue->read_idx;
9583 +
9584 + if ((event >= HIF_EVENT_MAX) || (qno >= HIF_CLIENT_QUEUES_MAX)) {
9585 + pr_debug("%s: Unsupported event : %d queue number : %d\n",
9586 + __func__, event, qno);
9587 + return -1;
9588 + }
9589 +
9590 + test_and_clear_bit(qno, &client->queue_mask[event]);
9591 +
9592 + switch (event) {
9593 + case EVENT_RX_PKT_IND:
9594 + if (!(desc->ctrl & CL_DESC_OWN))
9595 + hif_lib_indicate_client(client->id,
9596 + EVENT_RX_PKT_IND, qno);
9597 + break;
9598 +
9599 + case EVENT_HIGH_RX_WM:
9600 + case EVENT_TXDONE_IND:
9601 + default:
9602 + break;
9603 + }
9604 +
9605 + return 0;
9606 +}
9607 +
9608 +/*
9609 + * This function gets one packet from the specified client queue
9610 + * It also refills the rx buffer.
9611 + */
9612 +void *hif_lib_receive_pkt(struct hif_client_s *client, int qno, int *len, int
9613 + *ofst, unsigned int *rx_ctrl,
9614 + unsigned int *desc_ctrl, void **priv_data)
9615 +{
9616 + struct hif_client_rx_queue *queue = &client->rx_q[qno];
9617 + struct rx_queue_desc *desc;
9618 + void *pkt = NULL;
9619 +
9620 + /*
9621 +	 * The following lock protects rx queue access from
9622 +	 * hif_lib_event_handler_start.
9623 +	 * In general this lock is not required, because hif_lib_xmit_pkt and
9624 +	 * hif_lib_event_handler_start are called from napi poll, which is
9625 +	 * not re-entrant. But if some client uses them differently, this
9626 +	 * lock is required.
9627 + */
9628 + /*spin_lock_irqsave(&client->rx_lock, flags); */
9629 + desc = queue->base + queue->read_idx;
9630 + if (!(desc->ctrl & CL_DESC_OWN)) {
9631 + pkt = desc->data - pfe_pkt_headroom;
9632 +
9633 + *rx_ctrl = desc->client_ctrl;
9634 + *desc_ctrl = desc->ctrl;
9635 +
9636 + if (desc->ctrl & CL_DESC_FIRST) {
9637 + u16 size = *rx_ctrl >> HIF_CTRL_RX_OFFSET_OFST;
9638 +
9639 + if (size) {
9640 + size += PFE_PARSE_INFO_SIZE;
9641 + *len = CL_DESC_BUF_LEN(desc->ctrl) -
9642 + PFE_PKT_HEADER_SZ - size;
9643 + *ofst = pfe_pkt_headroom + PFE_PKT_HEADER_SZ
9644 + + size;
9645 + *priv_data = desc->data + PFE_PKT_HEADER_SZ;
9646 + } else {
9647 + *len = CL_DESC_BUF_LEN(desc->ctrl) -
9648 + PFE_PKT_HEADER_SZ - PFE_PARSE_INFO_SIZE;
9649 + *ofst = pfe_pkt_headroom
9650 + + PFE_PKT_HEADER_SZ
9651 + + PFE_PARSE_INFO_SIZE;
9652 + *priv_data = NULL;
9653 + }
9654 +
9655 + } else {
9656 + *len = CL_DESC_BUF_LEN(desc->ctrl);
9657 + *ofst = pfe_pkt_headroom;
9658 + }
9659 +
9660 + /*
9661 + * Needed so we don't free a buffer/page
9662 + * twice on module_exit
9663 + */
9664 + desc->data = NULL;
9665 +
9666 + /*
9667 + * Ensure everything else is written to DDR before
9668 + * writing bd->ctrl
9669 + */
9670 + smp_wmb();
9671 +
9672 + desc->ctrl = CL_DESC_BUF_LEN(pfe_pkt_size) | CL_DESC_OWN;
9673 + queue->read_idx = (queue->read_idx + 1) & (queue->size - 1);
9674 + }
9675 +
9676 + /*spin_unlock_irqrestore(&client->rx_lock, flags); */
9677 + return pkt;
9678 +}
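
A client's Rx event handler would typically drain a queue until this returns NULL; a hedged sketch (build_and_deliver_skb() is a hypothetical stand-in):

	static void drain_rx_queue(struct hif_client_s *client, int qno)
	{
		unsigned int rx_ctrl, desc_ctrl;
		void *priv_data;
		int len, ofst;
		void *pkt;

		while ((pkt = hif_lib_receive_pkt(client, qno, &len, &ofst,
						  &rx_ctrl, &desc_ctrl,
						  &priv_data))) {
			/* pkt is the buffer start; payload at pkt + ofst */
			build_and_deliver_skb(pkt, len, ofst);
		}
	}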
9679 +
9680 +static inline void hif_hdr_write(struct hif_hdr *pkt_hdr, unsigned int
9681 + client_id, unsigned int qno,
9682 + u32 client_ctrl)
9683 +{
9684 +	/* Optimize the write since the destination may be non-cacheable */
9685 + if (!((unsigned long)pkt_hdr & 0x3)) {
9686 + ((u32 *)pkt_hdr)[0] = (client_ctrl << 16) | (qno << 8) |
9687 + client_id;
9688 + } else {
9689 + ((u16 *)pkt_hdr)[0] = (qno << 8) | (client_id & 0xFF);
9690 + ((u16 *)pkt_hdr)[1] = (client_ctrl & 0xFFFF);
9691 + }
9692 +}
9693 +
9694 +/*This function puts the given packet in the specific client queue */
9695 +void __hif_lib_xmit_pkt(struct hif_client_s *client, unsigned int qno, void
9696 + *data, unsigned int len, u32 client_ctrl,
9697 + unsigned int flags, void *client_data)
9698 +{
9699 + struct hif_client_tx_queue *queue = &client->tx_q[qno];
9700 + struct tx_queue_desc *desc = queue->base + queue->write_idx;
9701 +
9702 + /* First buffer */
9703 + if (flags & HIF_FIRST_BUFFER) {
9704 + data -= sizeof(struct hif_hdr);
9705 + len += sizeof(struct hif_hdr);
9706 +
9707 + hif_hdr_write(data, client->id, qno, client_ctrl);
9708 + }
9709 +
9710 + desc->data = client_data;
9711 + desc->ctrl = CL_DESC_OWN | CL_DESC_FLAGS(flags);
9712 +
9713 + __hif_xmit_pkt(&pfe->hif, client->id, qno, data, len, flags);
9714 +
9715 + queue->write_idx = (queue->write_idx + 1) & (queue->size - 1);
9716 + queue->tx_pending++;
9717 + queue->jiffies_last_packet = jiffies;
9718 +}
9719 +
9720 +void *hif_lib_tx_get_next_complete(struct hif_client_s *client, int qno,
9721 + unsigned int *flags, int count)
9722 +{
9723 + struct hif_client_tx_queue *queue = &client->tx_q[qno];
9724 + struct tx_queue_desc *desc = queue->base + queue->read_idx;
9725 +
9726 + pr_debug("%s: qno : %d rd_indx: %d pending:%d\n", __func__, qno,
9727 + queue->read_idx, queue->tx_pending);
9728 +
9729 + if (!queue->tx_pending)
9730 + return NULL;
9731 +
9732 + if (queue->nocpy_flag && !queue->done_tmu_tx_pkts) {
9733 + u32 tmu_tx_pkts = be32_to_cpu(pe_dmem_read(TMU0_ID +
9734 + client->id, TMU_DM_TX_TRANS, 4));
9735 +
9736 + if (queue->prev_tmu_tx_pkts > tmu_tx_pkts)
9737 + queue->done_tmu_tx_pkts = UINT_MAX -
9738 + queue->prev_tmu_tx_pkts + tmu_tx_pkts;
9739 + else
9740 + queue->done_tmu_tx_pkts = tmu_tx_pkts -
9741 + queue->prev_tmu_tx_pkts;
9742 +
9743 + queue->prev_tmu_tx_pkts = tmu_tx_pkts;
9744 +
9745 + if (!queue->done_tmu_tx_pkts)
9746 + return NULL;
9747 + }
9748 +
9749 + if (desc->ctrl & CL_DESC_OWN)
9750 + return NULL;
9751 +
9752 + queue->read_idx = (queue->read_idx + 1) & (queue->size - 1);
9753 + queue->tx_pending--;
9754 +
9755 + *flags = CL_DESC_GET_FLAGS(desc->ctrl);
9756 +
9757 + if (queue->done_tmu_tx_pkts && (*flags & HIF_LAST_BUFFER))
9758 + queue->done_tmu_tx_pkts--;
9759 +
9760 + return desc->data;
9761 +}
9762 +
9763 +static void hif_lib_tmu_credit_init(struct pfe *pfe)
9764 +{
9765 + int i, q;
9766 +
9767 + for (i = 0; i < NUM_GEMAC_SUPPORT; i++)
9768 + for (q = 0; q < emac_txq_cnt; q++) {
9769 + pfe->tmu_credit.tx_credit_max[i][q] = (q == 0) ?
9770 + DEFAULT_Q0_QDEPTH : DEFAULT_MAX_QDEPTH;
9771 + pfe->tmu_credit.tx_credit[i][q] =
9772 + pfe->tmu_credit.tx_credit_max[i][q];
9773 + }
9774 +}
9775 +
9776 +/* __hif_lib_update_credit
9777 + *
9778 + * @param[in] client hif client context
9779 + * @param[in] queue queue number in match with TMU
9780 + */
9781 +void __hif_lib_update_credit(struct hif_client_s *client, unsigned int queue)
9782 +{
9783 + unsigned int tmu_tx_packets, tmp;
9784 +
9785 + if (tx_qos) {
9786 + tmu_tx_packets = be32_to_cpu(pe_dmem_read(TMU0_ID +
9787 + client->id, (TMU_DM_TX_TRANS + (queue * 4)), 4));
9788 +
9789 + /* tx_packets counter overflowed */
9790 + if (tmu_tx_packets >
9791 + pfe->tmu_credit.tx_packets[client->id][queue]) {
9792 + tmp = UINT_MAX - tmu_tx_packets +
9793 + pfe->tmu_credit.tx_packets[client->id][queue];
9794 +
9795 + pfe->tmu_credit.tx_credit[client->id][queue] =
9796 + pfe->tmu_credit.tx_credit_max[client->id][queue] - tmp;
9797 + } else {
9798 +			/* TMU tx <= pfe_eth tx, normal case, or both
9799 +			 * overflowed since last time
9800 + */
9801 + pfe->tmu_credit.tx_credit[client->id][queue] =
9802 + pfe->tmu_credit.tx_credit_max[client->id][queue] -
9803 + (pfe->tmu_credit.tx_packets[client->id][queue] -
9804 + tmu_tx_packets);
9805 + }
9806 + }
9807 +}
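
Both this function and hif_lib_tx_get_next_complete() take deltas of free-running 32-bit TMU counters, so wraparound is handled with an explicit branch. A standalone illustration of that branch (pure arithmetic; note that a plain modulo-2^32 unsigned subtraction `now - prev` is an alternative way to absorb the wrap):

	#include <stdint.h>
	#include <stdio.h>

	/* Wrap-aware delta in the style used above. */
	static uint32_t counter_delta(uint32_t prev, uint32_t now)
	{
		if (prev > now)	/* counter wrapped past zero */
			return UINT32_MAX - prev + now;
		return now - prev;
	}

	int main(void)
	{
		printf("%u\n", (unsigned int)counter_delta(100, 350));
		printf("%u\n", (unsigned int)counter_delta(0xfffffff0u, 0x10u));
		return 0;
	}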
9808 +
9809 +int pfe_hif_lib_init(struct pfe *pfe)
9810 +{
9811 + int rc;
9812 +
9813 + pr_info("%s\n", __func__);
9814 +
9815 + if (lro_mode) {
9816 + page_mode = 1;
9817 + pfe_pkt_size = min(PAGE_SIZE, MAX_PFE_PKT_SIZE);
9818 + pfe_pkt_headroom = 0;
9819 + } else {
9820 + page_mode = 0;
9821 + pfe_pkt_size = PFE_PKT_SIZE;
9822 + pfe_pkt_headroom = PFE_PKT_HEADROOM;
9823 + }
9824 +
9825 + if (tx_qos)
9826 + emac_txq_cnt = EMAC_TXQ_CNT / 2;
9827 + else
9828 + emac_txq_cnt = EMAC_TXQ_CNT;
9829 +
9830 + hif_lib_tmu_credit_init(pfe);
9831 + pfe->hif.shm = &ghif_shm;
9832 + rc = pfe_hif_shm_init(pfe->hif.shm);
9833 +
9834 + return rc;
9835 +}
9836 +
9837 +void pfe_hif_lib_exit(struct pfe *pfe)
9838 +{
9839 + pr_info("%s\n", __func__);
9840 +
9841 + pfe_hif_shm_clean(pfe->hif.shm);
9842 +}
9843 --- /dev/null
9844 +++ b/drivers/staging/fsl_ppfe/pfe_hif_lib.h
9845 @@ -0,0 +1,229 @@
9846 +/* SPDX-License-Identifier: GPL-2.0+ */
9847 +/*
9848 + * Copyright 2015-2016 Freescale Semiconductor, Inc.
9849 + * Copyright 2017 NXP
9850 + */
9851 +
9852 +#ifndef _PFE_HIF_LIB_H_
9853 +#define _PFE_HIF_LIB_H_
9854 +
9855 +#include "pfe_hif.h"
9856 +
9857 +#define HIF_CL_REQ_TIMEOUT 10
9858 +#define GFP_DMA_PFE 0
9859 +#define PFE_PARSE_INFO_SIZE 16
9860 +
9861 +enum {
9862 + REQUEST_CL_REGISTER = 0,
9863 + REQUEST_CL_UNREGISTER,
9864 + HIF_REQUEST_MAX
9865 +};
9866 +
9867 +enum {
9868 + /* Event to indicate that client rx queue is reached water mark level */
9869 + EVENT_HIGH_RX_WM = 0,
9870 + /* Event to indicate that, packet received for client */
9871 + EVENT_RX_PKT_IND,
9872 + /* Event to indicate that, packet tx done for client */
9873 + EVENT_TXDONE_IND,
9874 + HIF_EVENT_MAX
9875 +};
9876 +
9877 +/* Structure to store client queue info */
9880 +struct hif_client_rx_queue {
9881 + struct rx_queue_desc *base;
9882 + u32 size;
9883 + u32 read_idx;
9884 + u32 write_idx;
9885 +};
9886 +
9887 +struct hif_client_tx_queue {
9888 + struct tx_queue_desc *base;
9889 + u32 size;
9890 + u32 read_idx;
9891 + u32 write_idx;
9892 + u32 tx_pending;
9893 + unsigned long jiffies_last_packet;
9894 + u32 nocpy_flag;
9895 + u32 prev_tmu_tx_pkts;
9896 + u32 done_tmu_tx_pkts;
9897 +};
9898 +
9899 +struct hif_client_s {
9900 + int id;
9901 + int tx_qn;
9902 + int rx_qn;
9903 + void *rx_qbase;
9904 + void *tx_qbase;
9905 + int tx_qsize;
9906 + int rx_qsize;
9907 + int cpu_id;
9908 + struct hif_client_tx_queue tx_q[HIF_CLIENT_QUEUES_MAX];
9909 + struct hif_client_rx_queue rx_q[HIF_CLIENT_QUEUES_MAX];
9910 + int (*event_handler)(void *priv, int event, int data);
9911 + unsigned long queue_mask[HIF_EVENT_MAX];
9912 + struct pfe *pfe;
9913 + void *priv;
9914 +};
9915 +
9916 +/*
9917 + * Client specific shared memory
9918 + * It contains number of Rx/Tx queues, base addresses and queue sizes
9919 + */
9920 +struct hif_client_shm {
9921 + u32 ctrl; /*0-7: number of Rx queues, 8-15: number of tx queues */
9922 + unsigned long rx_qbase; /*Rx queue base address */
9923 + u32 rx_qsize; /*each Rx queue size, all Rx queues are of same size */
9924 + unsigned long tx_qbase; /* Tx queue base address */
9925 + u32 tx_qsize; /*each Tx queue size, all Tx queues are of same size */
9926 +};
9927 +
9928 +/*Client shared memory ctrl bit description */
9929 +#define CLIENT_CTRL_RX_Q_CNT_OFST 0
9930 +#define CLIENT_CTRL_TX_Q_CNT_OFST 8
9931 +#define CLIENT_CTRL_RX_Q_CNT(ctrl) (((ctrl) >> CLIENT_CTRL_RX_Q_CNT_OFST) \
9932 + & 0xFF)
9933 +#define CLIENT_CTRL_TX_Q_CNT(ctrl) (((ctrl) >> CLIENT_CTRL_TX_Q_CNT_OFST) \
9934 + & 0xFF)
9935 +
9936 +/*
9937 + * Shared memory used to communicate between HIF driver and host/client drivers
9938 + * Before starting the hif driver, rx_buf_pool and rx_buf_pool_cnt should be
9939 + * initialized with host buffers and buffers count in the pool.
9940 + * rx_buf_pool_cnt should be >= HIF_RX_DESC_NT.
9941 + *
9942 + */
9943 +struct hif_shm {
9944 + u32 rx_buf_pool_cnt; /*Number of rx buffers available*/
9945 + /*Rx buffers required to initialize HIF rx descriptors */
9946 + void *rx_buf_pool[HIF_RX_DESC_NT];
9947 + unsigned long g_client_status[2]; /*Global client status bit mask */
9948 + /* Client specific shared memory */
9949 + struct hif_client_shm client[HIF_CLIENTS_MAX];
9950 +};
9951 +
9952 +#define CL_DESC_OWN BIT(31)
9953 +/* This sets owner ship to HIF driver */
9954 +#define CL_DESC_LAST BIT(30)
9955 +/* This indicates last packet for multi buffers handling */
9956 +#define CL_DESC_FIRST BIT(29)
9957 +/* This indicates first packet for multi buffers handling */
9958 +
9959 +#define CL_DESC_BUF_LEN(x) ((x) & 0xFFFF)
9960 +#define CL_DESC_FLAGS(x) (((x) & 0xF) << 16)
9961 +#define CL_DESC_GET_FLAGS(x) (((x) >> 16) & 0xF)
9962 +
9963 +struct rx_queue_desc {
9964 + void *data;
9965 + u32 ctrl; /*0-15bit len, 16-20bit flags, 31bit owner*/
9966 + u32 client_ctrl;
9967 +};
9968 +
9969 +struct tx_queue_desc {
9970 + void *data;
9971 + u32 ctrl; /*0-15bit len, 16-20bit flags, 31bit owner*/
9972 +};
9973 +
9974 +/* HIF Rx is not working properly for 2-byte aligned buffers and
9975 + * the ip_header should be 4-byte aligned for better performance.
9976 + * "ip_header = 64 + 6 (hif_header) + 14 (MAC header)" will be 4-byte aligned.
9977 + */
9978 +#define PFE_PKT_HEADER_SZ sizeof(struct hif_hdr)
9979 +/* must be big enough for headroom, pkt size and skb shared info */
9980 +#define PFE_BUF_SIZE 2048
9981 +#define PFE_PKT_HEADROOM 128
9982 +
9983 +#define SKB_SHARED_INFO_SIZE SKB_DATA_ALIGN(sizeof(struct skb_shared_info))
9984 +#define PFE_PKT_SIZE (PFE_BUF_SIZE - PFE_PKT_HEADROOM \
9985 + - SKB_SHARED_INFO_SIZE)
9986 +#define MAX_L2_HDR_SIZE 14 /* Not correct for VLAN/PPPoE */
9987 +#define MAX_L3_HDR_SIZE 20 /* Not correct for IPv6 */
9988 +#define MAX_L4_HDR_SIZE 60 /* TCP with maximum options */
9989 +#define MAX_HDR_SIZE (MAX_L2_HDR_SIZE + MAX_L3_HDR_SIZE \
9990 + + MAX_L4_HDR_SIZE)
9991 +/* Used in page mode to clamp packet size to the maximum supported by the hif
9992 + * hw interface (<16KiB)
9993 + */
9994 +#define MAX_PFE_PKT_SIZE 16380UL
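
With the defaults above, the non-page-mode buffer budget works out roughly as follows; SKB_SHARED_INFO_SIZE depends on the kernel configuration, so the figure here is only illustrative:

	/* Illustrative budget of one 2 KiB HIF buffer (non-page mode):
	 *
	 *   PFE_BUF_SIZE            2048
	 * - PFE_PKT_HEADROOM       - 128
	 * - SKB_SHARED_INFO_SIZE   - ~320  (config dependent, example)
	 *   ------------------------------
	 *   PFE_PKT_SIZE           ~1600 bytes of packet data
	 */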
9995 +
9996 +extern unsigned int pfe_pkt_size;
9997 +extern unsigned int pfe_pkt_headroom;
9998 +extern unsigned int page_mode;
9999 +extern unsigned int lro_mode;
10000 +extern unsigned int tx_qos;
10001 +extern unsigned int emac_txq_cnt;
10002 +
10003 +int pfe_hif_lib_init(struct pfe *pfe);
10004 +void pfe_hif_lib_exit(struct pfe *pfe);
10005 +int hif_lib_client_register(struct hif_client_s *client);
10006 +int hif_lib_client_unregister(struct hif_client_s *client);
10007 +void __hif_lib_xmit_pkt(struct hif_client_s *client, unsigned int qno, void
10008 + *data, unsigned int len, u32 client_ctrl,
10009 + unsigned int flags, void *client_data);
10010 +int hif_lib_xmit_pkt(struct hif_client_s *client, unsigned int qno, void *data,
10011 + unsigned int len, u32 client_ctrl, void *client_data);
10012 +void hif_lib_indicate_client(int cl_id, int event, int data);
10013 +int hif_lib_event_handler_start(struct hif_client_s *client, int event, int
10014 + data);
10015 +int hif_lib_tmu_queue_start(struct hif_client_s *client, int qno);
10016 +int hif_lib_tmu_queue_stop(struct hif_client_s *client, int qno);
10017 +void *hif_lib_tx_get_next_complete(struct hif_client_s *client, int qno,
10018 + unsigned int *flags, int count);
10019 +void *hif_lib_receive_pkt(struct hif_client_s *client, int qno, int *len, int
10020 + *ofst, unsigned int *rx_ctrl,
10021 + unsigned int *desc_ctrl, void **priv_data);
10022 +void __hif_lib_update_credit(struct hif_client_s *client, unsigned int queue);
10023 +void hif_lib_set_rx_cpu_affinity(struct hif_client_s *client, int cpu_id);
10024 +void hif_lib_set_tx_queue_nocpy(struct hif_client_s *client, int qno, int
10025 + enable);
10026 +static inline int hif_lib_tx_avail(struct hif_client_s *client, unsigned int
10027 + qno)
10028 +{
10029 + struct hif_client_tx_queue *queue = &client->tx_q[qno];
10030 +
10031 + return (queue->size - queue->tx_pending);
10032 +}
10033 +
10034 +static inline int hif_lib_get_tx_wr_index(struct hif_client_s *client, unsigned
10035 + int qno)
10036 +{
10037 + struct hif_client_tx_queue *queue = &client->tx_q[qno];
10038 +
10039 + return queue->write_idx;
10040 +}
10041 +
10042 +static inline int hif_lib_tx_pending(struct hif_client_s *client, unsigned int
10043 + qno)
10044 +{
10045 + struct hif_client_tx_queue *queue = &client->tx_q[qno];
10046 +
10047 + return queue->tx_pending;
10048 +}
10049 +
10050 +#define hif_lib_tx_credit_avail(pfe, id, qno) \
10051 + ((pfe)->tmu_credit.tx_credit[id][qno])
10052 +
10053 +#define hif_lib_tx_credit_max(pfe, id, qno) \
10054 + ((pfe)->tmu_credit.tx_credit_max[id][qno])
10055 +
10059 +#define hif_lib_tx_credit_use(pfe, id, qno, credit) \
10060 + ({ typeof(pfe) pfe_ = pfe; \
10061 + typeof(id) id_ = id; \
10062 + typeof(qno) qno_ = qno; \
10063 + typeof(credit) credit_ = credit; \
10064 + do { \
10065 + if (tx_qos) { \
10066 + (pfe_)->tmu_credit.tx_credit[id_][qno_]\
10067 + -= credit_; \
10068 + (pfe_)->tmu_credit.tx_packets[id_][qno_]\
10069 + += credit_; \
10070 + } \
10071 + } while (0); \
10072 + })
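
hif_lib_tx_credit_use() is a GNU statement-expression macro; the typeof() copies guard against evaluating the arguments more than once. A hedged usage sketch from a transmit path (client and qno come from the caller's context):

	/* Illustrative: consume one credit per packet queued to the TMU. */
	if (tx_qos && hif_lib_tx_credit_avail(pfe, client->id, qno) > 0) {
		/* ... queue the packet ... */
		hif_lib_tx_credit_use(pfe, client->id, qno, 1);
	}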
10073 +
10074 +#endif /* _PFE_HIF_LIB_H_ */
10075 --- /dev/null
10076 +++ b/drivers/staging/fsl_ppfe/pfe_hw.c
10077 @@ -0,0 +1,164 @@
10078 +// SPDX-License-Identifier: GPL-2.0+
10079 +/*
10080 + * Copyright 2015-2016 Freescale Semiconductor, Inc.
10081 + * Copyright 2017 NXP
10082 + */
10083 +
10084 +#include "pfe_mod.h"
10085 +#include "pfe_hw.h"
10086 +
10087 +/* Functions to handle most of pfe hw register initialization */
10088 +int pfe_hw_init(struct pfe *pfe, int resume)
10089 +{
10090 + struct class_cfg class_cfg = {
10091 + .pe_sys_clk_ratio = PE_SYS_CLK_RATIO,
10092 + .route_table_baseaddr = pfe->ddr_phys_baseaddr +
10093 + ROUTE_TABLE_BASEADDR,
10094 + .route_table_hash_bits = ROUTE_TABLE_HASH_BITS,
10095 + };
10096 +
10097 + struct tmu_cfg tmu_cfg = {
10098 + .pe_sys_clk_ratio = PE_SYS_CLK_RATIO,
10099 + .llm_base_addr = pfe->ddr_phys_baseaddr + TMU_LLM_BASEADDR,
10100 + .llm_queue_len = TMU_LLM_QUEUE_LEN,
10101 + };
10102 +
10103 +#if !defined(CONFIG_FSL_PPFE_UTIL_DISABLED)
10104 + struct util_cfg util_cfg = {
10105 + .pe_sys_clk_ratio = PE_SYS_CLK_RATIO,
10106 + };
10107 +#endif
10108 +
10109 + struct BMU_CFG bmu1_cfg = {
10110 + .baseaddr = CBUS_VIRT_TO_PFE(LMEM_BASE_ADDR +
10111 + BMU1_LMEM_BASEADDR),
10112 + .count = BMU1_BUF_COUNT,
10113 + .size = BMU1_BUF_SIZE,
10114 + .low_watermark = 10,
10115 + .high_watermark = 15,
10116 + };
10117 +
10118 + struct BMU_CFG bmu2_cfg = {
10119 + .baseaddr = DDR_PHYS_TO_PFE(pfe->ddr_phys_baseaddr +
10120 + BMU2_DDR_BASEADDR),
10121 + .count = BMU2_BUF_COUNT,
10122 + .size = BMU2_BUF_SIZE,
10123 + .low_watermark = 250,
10124 + .high_watermark = 253,
10125 + };
10126 +
10127 + struct gpi_cfg egpi1_cfg = {
10128 + .lmem_rtry_cnt = EGPI1_LMEM_RTRY_CNT,
10129 + .tmlf_txthres = EGPI1_TMLF_TXTHRES,
10130 + .aseq_len = EGPI1_ASEQ_LEN,
10131 + .mtip_pause_reg = CBUS_VIRT_TO_PFE(EMAC1_BASE_ADDR +
10132 + EMAC_TCNTRL_REG),
10133 + };
10134 +
10135 + struct gpi_cfg egpi2_cfg = {
10136 + .lmem_rtry_cnt = EGPI2_LMEM_RTRY_CNT,
10137 + .tmlf_txthres = EGPI2_TMLF_TXTHRES,
10138 + .aseq_len = EGPI2_ASEQ_LEN,
10139 + .mtip_pause_reg = CBUS_VIRT_TO_PFE(EMAC2_BASE_ADDR +
10140 + EMAC_TCNTRL_REG),
10141 + };
10142 +
10143 + struct gpi_cfg hgpi_cfg = {
10144 + .lmem_rtry_cnt = HGPI_LMEM_RTRY_CNT,
10145 + .tmlf_txthres = HGPI_TMLF_TXTHRES,
10146 + .aseq_len = HGPI_ASEQ_LEN,
10147 + .mtip_pause_reg = 0,
10148 + };
10149 +
10150 + pr_info("%s\n", __func__);
10151 +
10152 +#if !defined(LS1012A_PFE_RESET_WA)
10153 + /* LS1012A needs this to make PE work correctly */
10154 + writel(0x3, CLASS_PE_SYS_CLK_RATIO);
10155 + writel(0x3, TMU_PE_SYS_CLK_RATIO);
10156 + writel(0x3, UTIL_PE_SYS_CLK_RATIO);
10157 + usleep_range(10, 20);
10158 +#endif
10159 +
10160 + pr_info("CLASS version: %x\n", readl(CLASS_VERSION));
10161 + pr_info("TMU version: %x\n", readl(TMU_VERSION));
10162 +
10163 + pr_info("BMU1 version: %x\n", readl(BMU1_BASE_ADDR +
10164 + BMU_VERSION));
10165 + pr_info("BMU2 version: %x\n", readl(BMU2_BASE_ADDR +
10166 + BMU_VERSION));
10167 +
10168 + pr_info("EGPI1 version: %x\n", readl(EGPI1_BASE_ADDR +
10169 + GPI_VERSION));
10170 + pr_info("EGPI2 version: %x\n", readl(EGPI2_BASE_ADDR +
10171 + GPI_VERSION));
10172 + pr_info("HGPI version: %x\n", readl(HGPI_BASE_ADDR +
10173 + GPI_VERSION));
10174 +
10175 + pr_info("HIF version: %x\n", readl(HIF_VERSION));
10176 +	pr_info("HIF NOCPY version: %x\n", readl(HIF_NOCPY_VERSION));
10177 +
10178 +#if !defined(CONFIG_FSL_PPFE_UTIL_DISABLED)
10179 + pr_info("UTIL version: %x\n", readl(UTIL_VERSION));
10180 +#endif
10181 + while (!(readl(TMU_CTRL) & ECC_MEM_INIT_DONE))
10182 + ;
10183 +
10184 + hif_rx_disable();
10185 + hif_tx_disable();
10186 +
10187 + bmu_init(BMU1_BASE_ADDR, &bmu1_cfg);
10188 +
10189 + pr_info("bmu_init(1) done\n");
10190 +
10191 + bmu_init(BMU2_BASE_ADDR, &bmu2_cfg);
10192 +
10193 + pr_info("bmu_init(2) done\n");
10194 +
10195 + class_cfg.resume = resume ? 1 : 0;
10196 +
10197 + class_init(&class_cfg);
10198 +
10199 + pr_info("class_init() done\n");
10200 +
10201 + tmu_init(&tmu_cfg);
10202 +
10203 + pr_info("tmu_init() done\n");
10204 +#if !defined(CONFIG_FSL_PPFE_UTIL_DISABLED)
10205 + util_init(&util_cfg);
10206 +
10207 + pr_info("util_init() done\n");
10208 +#endif
10209 + gpi_init(EGPI1_BASE_ADDR, &egpi1_cfg);
10210 +
10211 + pr_info("gpi_init(1) done\n");
10212 +
10213 + gpi_init(EGPI2_BASE_ADDR, &egpi2_cfg);
10214 +
10215 + pr_info("gpi_init(2) done\n");
10216 +
10217 + gpi_init(HGPI_BASE_ADDR, &hgpi_cfg);
10218 +
10219 + pr_info("gpi_init(hif) done\n");
10220 +
10221 + bmu_enable(BMU1_BASE_ADDR);
10222 +
10223 + pr_info("bmu_enable(1) done\n");
10224 +
10225 + bmu_enable(BMU2_BASE_ADDR);
10226 +
10227 + pr_info("bmu_enable(2) done\n");
10228 +
10229 + return 0;
10230 +}
10231 +
10232 +void pfe_hw_exit(struct pfe *pfe)
10233 +{
10234 + pr_info("%s\n", __func__);
10235 +
10236 + bmu_disable(BMU1_BASE_ADDR);
10237 + bmu_reset(BMU1_BASE_ADDR);
10238 +
10239 + bmu_disable(BMU2_BASE_ADDR);
10240 + bmu_reset(BMU2_BASE_ADDR);
10241 +}
10242 --- /dev/null
10243 +++ b/drivers/staging/fsl_ppfe/pfe_hw.h
10244 @@ -0,0 +1,15 @@
10245 +/* SPDX-License-Identifier: GPL-2.0+ */
10246 +/*
10247 + * Copyright 2015-2016 Freescale Semiconductor, Inc.
10248 + * Copyright 2017 NXP
10249 + */
10250 +
10251 +#ifndef _PFE_HW_H_
10252 +#define _PFE_HW_H_
10253 +
10254 +#define PE_SYS_CLK_RATIO 1 /* SYS/AXI = 250MHz, HFE = 500MHz */
10255 +
10256 +int pfe_hw_init(struct pfe *pfe, int resume);
10257 +void pfe_hw_exit(struct pfe *pfe);
10258 +
10259 +#endif /* _PFE_HW_H_ */
10260 --- /dev/null
10261 +++ b/drivers/staging/fsl_ppfe/pfe_ls1012a_platform.c
10262 @@ -0,0 +1,383 @@
10263 +// SPDX-License-Identifier: GPL-2.0+
10264 +/*
10265 + * Copyright 2015-2016 Freescale Semiconductor, Inc.
10266 + * Copyright 2017 NXP
10267 + */
10268 +
10269 +#include <linux/module.h>
10270 +#include <linux/device.h>
10271 +#include <linux/of.h>
10272 +#include <linux/of_net.h>
10273 +#include <linux/of_address.h>
10274 +#include <linux/of_mdio.h>
10275 +#include <linux/platform_device.h>
10276 +#include <linux/slab.h>
10277 +#include <linux/clk.h>
10278 +#include <linux/mfd/syscon.h>
10279 +#include <linux/regmap.h>
10280 +
10281 +#include "pfe_mod.h"
10282 +
10283 +extern bool pfe_use_old_dts_phy;
10284 +struct ls1012a_pfe_platform_data pfe_platform_data;
10285 +
10286 +static int pfe_get_gemac_if_properties(struct device_node *gem,
10287 + int port,
10288 + struct ls1012a_pfe_platform_data *pdata)
10289 +{
10290 + struct device_node *phy_node = NULL;
10291 + int size;
10292 + int phy_id = 0;
10293 + const u32 *addr;
10294 + int err;
10295 +
10296 + addr = of_get_property(gem, "reg", &size);
10297 + if (addr)
10298 + port = be32_to_cpup(addr);
10299 + else
10300 + goto err;
10301 +
10302 + pdata->ls1012a_eth_pdata[port].gem_id = port;
10303 +
10304 + err = of_get_mac_address(gem, pdata->ls1012a_eth_pdata[port].mac_addr);
10305 +
10306 + phy_node = of_parse_phandle(gem, "phy-handle", 0);
10307 + pdata->ls1012a_eth_pdata[port].phy_node = phy_node;
10308 + if (phy_node) {
10309 + pfe_use_old_dts_phy = false;
10310 + goto process_phynode;
10311 + } else if (of_phy_is_fixed_link(gem)) {
10312 + pfe_use_old_dts_phy = false;
10313 + if (of_phy_register_fixed_link(gem) < 0) {
10314 + pr_err("broken fixed-link specification\n");
10315 + goto err;
10316 + }
10317 + phy_node = of_node_get(gem);
10318 + pdata->ls1012a_eth_pdata[port].phy_node = phy_node;
10319 + } else if (of_get_property(gem, "fsl,pfe-phy-if-flags", &size)) {
10320 + pfe_use_old_dts_phy = true;
10321 + /* Use old dts properties for phy handling */
10322 + addr = of_get_property(gem, "fsl,pfe-phy-if-flags", &size);
10323 + pdata->ls1012a_eth_pdata[port].phy_flags = be32_to_cpup(addr);
10324 +
10325 + addr = of_get_property(gem, "fsl,gemac-phy-id", &size);
10326 + if (!addr) {
10327 + pr_err("%s:%d Invalid gemac-phy-id....\n", __func__,
10328 + __LINE__);
10329 + } else {
10330 + phy_id = be32_to_cpup(addr);
10331 + pdata->ls1012a_eth_pdata[port].phy_id = phy_id;
10332 + pdata->ls1012a_mdio_pdata[0].phy_mask &= ~(1 << phy_id);
10333 + }
10334 +
10335 + /* If PHY is enabled, read mdio properties */
10336 + if (pdata->ls1012a_eth_pdata[port].phy_flags & GEMAC_NO_PHY)
10337 + goto done;
10338 +
10339 + } else {
10340 + pr_info("%s: No PHY or fixed-link\n", __func__);
10341 + return 0;
10342 + }
10343 +
10344 +process_phynode:
10345 + err = of_get_phy_mode(gem, &pdata->ls1012a_eth_pdata[port].mii_config);
10346 + if (err)
10347 + pr_err("%s:%d Incorrect Phy mode....\n", __func__,
10348 + __LINE__);
10349 +
10350 + addr = of_get_property(gem, "fsl,mdio-mux-val", &size);
10351 + if (!addr) {
10352 + pr_err("%s: Invalid mdio-mux-val....\n", __func__);
10353 + } else {
10354 + phy_id = be32_to_cpup(addr);
10355 + pdata->ls1012a_eth_pdata[port].mdio_muxval = phy_id;
10356 + }
10357 +
10358 + if (pdata->ls1012a_eth_pdata[port].phy_id < 32)
10359 + pfe->mdio_muxval[pdata->ls1012a_eth_pdata[port].phy_id] =
10360 + pdata->ls1012a_eth_pdata[port].mdio_muxval;
10361 +
10363 + pdata->ls1012a_mdio_pdata[port].irq[0] = PHY_POLL;
10364 +
10365 +done:
10366 + return 0;
10367 +
10368 +err:
10369 + return -1;
10370 +}
10371 +
10372 +/*
10373 + * pfe_platform_probe - probe and initialize the PFE platform device
10374 + */
10378 +static int pfe_platform_probe(struct platform_device *pdev)
10379 +{
10380 + struct resource res;
10381 + int ii = 0, rc, interface_count = 0, size = 0;
10382 + const u32 *prop;
10383 + struct device_node *np, *gem = NULL;
10384 + struct clk *pfe_clk;
10385 +
10386 + np = pdev->dev.of_node;
10387 +
10388 + if (!np) {
10389 + pr_err("Invalid device node\n");
10390 + return -EINVAL;
10391 + }
10392 +
10393 + pfe = kzalloc(sizeof(*pfe), GFP_KERNEL);
10394 + if (!pfe) {
10395 + rc = -ENOMEM;
10396 + goto err_alloc;
10397 + }
10398 +
10399 + platform_set_drvdata(pdev, pfe);
10400 +
10401 + if (dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(32))) {
10402 + rc = -ENOMEM;
10403 + pr_err("unable to configure DMA mask.\n");
10404 + goto err_ddr;
10405 + }
10406 +
10407 + if (of_address_to_resource(np, 1, &res)) {
10408 + rc = -ENOMEM;
10409 + pr_err("failed to get ddr resource\n");
10410 + goto err_ddr;
10411 + }
10412 +
10413 + pfe->ddr_phys_baseaddr = res.start;
10414 + pfe->ddr_size = resource_size(&res);
10415 +
10416 + pfe->ddr_baseaddr = memremap(res.start, resource_size(&res),
10417 + MEMREMAP_WB);
10418 + if (!pfe->ddr_baseaddr) {
10419 + pr_err("memremap() ddr failed\n");
10420 + rc = -ENOMEM;
10421 + goto err_ddr;
10422 + }
10423 +
10424 + pfe->scfg =
10425 + syscon_regmap_lookup_by_phandle(pdev->dev.of_node,
10426 + "fsl,pfe-scfg");
10427 + if (IS_ERR(pfe->scfg)) {
10428 + dev_err(&pdev->dev, "No syscfg phandle specified\n");
10429 +		rc = PTR_ERR(pfe->scfg);
10429 +		goto err_axi;
10430 + }
10431 +
10432 + pfe->cbus_baseaddr = of_iomap(np, 0);
10433 + if (!pfe->cbus_baseaddr) {
10434 + rc = -ENOMEM;
10435 + pr_err("failed to get axi resource\n");
10436 + goto err_axi;
10437 + }
10438 +
10439 + pfe->hif_irq = platform_get_irq(pdev, 0);
10440 + if (pfe->hif_irq < 0) {
10441 + pr_err("platform_get_irq for hif failed\n");
10442 + rc = pfe->hif_irq;
10443 + goto err_hif_irq;
10444 + }
10445 +
10446 + pfe->wol_irq = platform_get_irq(pdev, 2);
10447 + if (pfe->wol_irq < 0) {
10448 + pr_err("platform_get_irq for WoL failed\n");
10449 + rc = pfe->wol_irq;
10450 + goto err_hif_irq;
10451 + }
10452 +
10453 + /* Read interface count */
10454 + prop = of_get_property(np, "fsl,pfe-num-interfaces", &size);
10455 + if (!prop) {
10456 + pr_err("Failed to read number of interfaces\n");
10457 + rc = -ENXIO;
10458 + goto err_prop;
10459 + }
10460 +
10461 + interface_count = be32_to_cpup(prop);
10462 + if (interface_count <= 0) {
10463 + pr_err("No ethernet interface count : %d\n",
10464 + interface_count);
10465 + rc = -ENXIO;
10466 + goto err_prop;
10467 + }
10468 +
10469 + pfe_platform_data.ls1012a_mdio_pdata[0].phy_mask = 0xffffffff;
10470 +
10471 + while ((gem = of_get_next_child(np, gem))) {
10472 + if (of_find_property(gem, "reg", &size)) {
10473 + pfe_get_gemac_if_properties(gem, ii,
10474 + &pfe_platform_data);
10475 + ii++;
10476 + }
10477 + }
10478 +
10479 + if (interface_count != ii)
10480 +		pr_info("missing some of the gemac interface properties.\n");
10481 +
10482 + pfe->dev = &pdev->dev;
10483 +
10484 + pfe->dev->platform_data = &pfe_platform_data;
10485 +
10486 + /* declare WoL capabilities */
10487 + device_init_wakeup(&pdev->dev, true);
10488 +
10489 + /* find the clocks */
10490 + pfe_clk = devm_clk_get(pfe->dev, "pfe");
10491 +	if (IS_ERR(pfe_clk)) {
10492 +		rc = PTR_ERR(pfe_clk);
10492 +		goto err_hif_irq;
10492 +	}
10493 +
10494 + /* PFE clock is (platform clock / 2) */
10495 + /* save sys_clk value as KHz */
10496 + pfe->ctrl.sys_clk = clk_get_rate(pfe_clk) / (2 * 1000);
10497 +
10498 + rc = pfe_probe(pfe);
10499 + if (rc < 0)
10500 + goto err_probe;
10501 +
10502 + return 0;
10503 +
10504 +err_probe:
10505 +err_prop:
10506 +err_hif_irq:
10507 + iounmap(pfe->cbus_baseaddr);
10508 +
10509 +err_axi:
10510 + memunmap(pfe->ddr_baseaddr);
10511 +
10512 +err_ddr:
10513 + platform_set_drvdata(pdev, NULL);
10514 +
10515 + kfree(pfe);
10516 +
10517 +err_alloc:
10518 + return rc;
10519 +}
10520 +
10521 +/*
10522 + * pfe_platform_remove -
10523 + */
10524 +static int pfe_platform_remove(struct platform_device *pdev)
10525 +{
10526 + struct pfe *pfe = platform_get_drvdata(pdev);
10527 + int rc;
10528 +
10529 + pr_info("%s\n", __func__);
10530 +
10531 + rc = pfe_remove(pfe);
10532 +
10533 + iounmap(pfe->cbus_baseaddr);
10534 +
10535 + memunmap(pfe->ddr_baseaddr);
10536 +
10537 + platform_set_drvdata(pdev, NULL);
10538 +
10539 + kfree(pfe);
10540 +
10541 + return rc;
10542 +}
10543 +
10544 +#ifdef CONFIG_PM
10545 +#ifdef CONFIG_PM_SLEEP
10546 +int pfe_platform_suspend(struct device *dev)
10547 +{
10548 + struct pfe *pfe = platform_get_drvdata(to_platform_device(dev));
10549 + struct net_device *netdev;
10550 + int i;
10551 +
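+ /* wake is set below if any running interface arms Wake-on-LAN */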
10552 + pfe->wake = 0;
10553 +
10554 + for (i = 0; i < (NUM_GEMAC_SUPPORT); i++) {
10555 + netdev = pfe->eth.eth_priv[i]->ndev;
10556 +
10557 + netif_device_detach(netdev);
10558 +
10559 + if (netif_running(netdev))
10560 + if (pfe_eth_suspend(netdev))
10561 + pfe->wake = 1;
10562 + }
10563 +
10564 + /* Shutdown PFE only if we're not waking up the system */
10565 + if (!pfe->wake) {
10566 +#if defined(LS1012A_PFE_RESET_WA)
10567 + pfe_hif_rx_idle(&pfe->hif);
10568 +#endif
10569 + pfe_ctrl_suspend(&pfe->ctrl);
10570 + pfe_firmware_exit(pfe);
10571 +
10572 + pfe_hif_exit(pfe);
10573 + pfe_hif_lib_exit(pfe);
10574 +
10575 + pfe_hw_exit(pfe);
10576 + }
10577 +
10578 + return 0;
10579 +}
10580 +
10581 +static int pfe_platform_resume(struct device *dev)
10582 +{
10583 + struct pfe *pfe = platform_get_drvdata(to_platform_device(dev));
10584 + struct net_device *netdev;
10585 + int i;
10586 +
10587 + if (!pfe->wake) {
10588 + pfe_hw_init(pfe, 1);
10589 + pfe_hif_lib_init(pfe);
10590 + pfe_hif_init(pfe);
10591 +#if !defined(CONFIG_FSL_PPFE_UTIL_DISABLED)
10592 + util_enable();
10593 +#endif
10594 + tmu_enable(0xf);
10595 + class_enable();
10596 + pfe_ctrl_resume(&pfe->ctrl);
10597 + }
10598 +
10599 + for (i = 0; i < (NUM_GEMAC_SUPPORT); i++) {
10600 + netdev = pfe->eth.eth_priv[i]->ndev;
10601 +
10602 + if (pfe->mdio.mdio_priv[i]->mii_bus)
10603 + pfe_eth_mdio_reset(pfe->mdio.mdio_priv[i]->mii_bus);
10604 +
10605 + if (netif_running(netdev))
10606 + pfe_eth_resume(netdev);
10607 +
10608 + netif_device_attach(netdev);
10609 + }
10610 + return 0;
10611 +}
10612 +#else
10613 +#define pfe_platform_suspend NULL
10614 +#define pfe_platform_resume NULL
10615 +#endif
10616 +
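+ /*
+  * Hook the sleep callbacks into the driver model; when CONFIG_PM_SLEEP
+  * is off they are defined away to NULL above.
+  */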
10617 +static const struct dev_pm_ops pfe_platform_pm_ops = {
10618 + SET_SYSTEM_SLEEP_PM_OPS(pfe_platform_suspend, pfe_platform_resume)
10619 +};
10620 +#endif
10621 +
10622 +static const struct of_device_id pfe_match[] = {
10623 + {
10624 + .compatible = "fsl,pfe",
10625 + },
10626 + {},
10627 +};
10628 +MODULE_DEVICE_TABLE(of, pfe_match);
10629 +
10630 +static struct platform_driver pfe_platform_driver = {
10631 + .probe = pfe_platform_probe,
10632 + .remove = pfe_platform_remove,
10633 + .driver = {
10634 + .name = "pfe",
10635 + .of_match_table = pfe_match,
10636 +#ifdef CONFIG_PM
10637 + .pm = &pfe_platform_pm_ops,
10638 +#endif
10639 + },
10640 +};
10641 +
10642 +module_platform_driver(pfe_platform_driver);
10643 +MODULE_LICENSE("GPL");
10644 +MODULE_DESCRIPTION("PFE Ethernet driver");
10645 +MODULE_AUTHOR("NXP DNCPE");
10646 --- /dev/null
10647 +++ b/drivers/staging/fsl_ppfe/pfe_mod.c
10648 @@ -0,0 +1,158 @@
10649 +// SPDX-License-Identifier: GPL-2.0+
10650 +/*
10651 + * Copyright 2015-2016 Freescale Semiconductor, Inc.
10652 + * Copyright 2017 NXP
10653 + */
10654 +
10655 +#include <linux/dma-mapping.h>
10656 +#include "pfe_mod.h"
10657 +#include "pfe_cdev.h"
10658 +
10659 +unsigned int us;
10660 +module_param(us, uint, 0444);
10661 +MODULE_PARM_DESC(us, "0: module enabled for kernel networking (DEFAULT)\n"
10662 + "1: module enabled for userspace networking\n");
10663 +struct pfe *pfe;
10664 +
10665 +/*
10666 + * pfe_probe -
10667 + */
10668 +int pfe_probe(struct pfe *pfe)
10669 +{
10670 + int rc;
10671 +
10672 + if (pfe->ddr_size < DDR_MAX_SIZE) {
10673 + pr_err("%s: required DDR memory (%x) above platform ddr memory (%x)\n",
10674 + __func__, (unsigned int)DDR_MAX_SIZE, pfe->ddr_size);
10675 + rc = -ENOMEM;
10676 + goto err_hw;
10677 + }
10678 +
10679 + if (((int)(pfe->ddr_phys_baseaddr + BMU2_DDR_BASEADDR) &
10680 + (8 * SZ_1M - 1)) != 0) {
10681 + pr_err("%s: BMU2 base address (0x%x) must be aligned on 8MB boundary\n",
10682 + __func__, (int)pfe->ddr_phys_baseaddr +
10683 + BMU2_DDR_BASEADDR);
10684 + rc = -ENOMEM;
10685 + goto err_hw;
10686 + }
10687 +
10688 + pr_info("cbus_baseaddr: %lx, ddr_baseaddr: %lx, ddr_phys_baseaddr: %lx, ddr_size: %x\n",
10689 + (unsigned long)pfe->cbus_baseaddr,
10690 + (unsigned long)pfe->ddr_baseaddr,
10691 + pfe->ddr_phys_baseaddr, pfe->ddr_size);
10692 +
10693 + pfe_lib_init(pfe->cbus_baseaddr, pfe->ddr_baseaddr,
10694 + pfe->ddr_phys_baseaddr, pfe->ddr_size);
10695 +
10696 + rc = pfe_hw_init(pfe, 0);
10697 + if (rc < 0)
10698 + goto err_hw;
10699 +
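+ /* In userspace-networking mode (us=1) the kernel HIF path is not used,
+  * so skip straight to firmware load.
+  */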
10700 + if (us)
10701 + goto firmware_init;
10702 +
10703 + rc = pfe_hif_lib_init(pfe);
10704 + if (rc < 0)
10705 + goto err_hif_lib;
10706 +
10707 + rc = pfe_hif_init(pfe);
10708 + if (rc < 0)
10709 + goto err_hif;
10710 +
10711 +firmware_init:
10712 + rc = pfe_firmware_init(pfe);
10713 + if (rc < 0)
10714 + goto err_firmware;
10715 +
10716 + rc = pfe_ctrl_init(pfe);
10717 + if (rc < 0)
10718 + goto err_ctrl;
10719 +
10720 + rc = pfe_eth_init(pfe);
10721 + if (rc < 0)
10722 + goto err_eth;
10723 +
10724 + rc = pfe_sysfs_init(pfe);
10725 + if (rc < 0)
10726 + goto err_sysfs;
10727 +
10728 + rc = pfe_debugfs_init(pfe);
10729 + if (rc < 0)
10730 + goto err_debugfs;
10731 +
10732 + if (us) {
10733 + /* Creating a character device */
10734 + rc = pfe_cdev_init();
10735 + if (rc < 0)
10736 + goto err_cdev;
10737 + }
10738 +
10739 + return 0;
10740 +
10741 +err_cdev:
10742 + pfe_debugfs_exit(pfe);
10743 +
10744 +err_debugfs:
10745 + pfe_sysfs_exit(pfe);
10746 +
10747 +err_sysfs:
10748 + pfe_eth_exit(pfe);
10749 +
10750 +err_eth:
10751 + pfe_ctrl_exit(pfe);
10752 +
10753 +err_ctrl:
10754 + pfe_firmware_exit(pfe);
10755 +
10756 +err_firmware:
10757 + if (us)
10758 + goto err_hif_lib;
10759 +
10760 + pfe_hif_exit(pfe);
10761 +
10762 +err_hif:
10763 + pfe_hif_lib_exit(pfe);
10764 +
10765 +err_hif_lib:
10766 + pfe_hw_exit(pfe);
10767 +
10768 +err_hw:
10769 + return rc;
10770 +}
10771 +
10772 +/*
10773 + * pfe_remove -
10774 + */
10775 +int pfe_remove(struct pfe *pfe)
10776 +{
10777 + pr_info("%s\n", __func__);
10778 +
10779 + if (us)
10780 + pfe_cdev_exit();
10781 +
10782 + pfe_debugfs_exit(pfe);
10783 +
10784 + pfe_sysfs_exit(pfe);
10785 +
10786 + pfe_eth_exit(pfe);
10787 +
10788 + pfe_ctrl_exit(pfe);
10789 +
10790 +#if defined(LS1012A_PFE_RESET_WA)
10791 + pfe_hif_rx_idle(&pfe->hif);
10792 +#endif
10793 + pfe_firmware_exit(pfe);
10794 +
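+ /* HIF was never brought up in userspace mode, so skip its teardown */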
10795 + if (us)
10796 + goto hw_exit;
10797 +
10798 + pfe_hif_exit(pfe);
10799 +
10800 + pfe_hif_lib_exit(pfe);
10801 +
10802 +hw_exit:
10803 + pfe_hw_exit(pfe);
10804 +
10805 + return 0;
10806 +}
10807 --- /dev/null
10808 +++ b/drivers/staging/fsl_ppfe/pfe_mod.h
10809 @@ -0,0 +1,103 @@
10810 +/* SPDX-License-Identifier: GPL-2.0+ */
10811 +/*
10812 + * Copyright 2015-2016 Freescale Semiconductor, Inc.
10813 + * Copyright 2017 NXP
10814 + */
10815 +
10816 +#ifndef _PFE_MOD_H_
10817 +#define _PFE_MOD_H_
10818 +
10819 +#include <linux/device.h>
10820 +#include <linux/elf.h>
10821 +
10822 +extern unsigned int us;
10823 +
10824 +struct pfe;
10825 +
10826 +#include "pfe_hw.h"
10827 +#include "pfe_firmware.h"
10828 +#include "pfe_ctrl.h"
10829 +#include "pfe_hif.h"
10830 +#include "pfe_hif_lib.h"
10831 +#include "pfe_eth.h"
10832 +#include "pfe_sysfs.h"
10833 +#include "pfe_perfmon.h"
10834 +#include "pfe_debugfs.h"
10835 +
10836 +#define PHYID_MAX_VAL 32
10837 +
10838 +struct pfe_tmu_credit {
10839 + /* Number of allowed TX packet in-flight, matches TMU queue size */
10840 + unsigned int tx_credit[NUM_GEMAC_SUPPORT][EMAC_TXQ_CNT];
10841 + unsigned int tx_credit_max[NUM_GEMAC_SUPPORT][EMAC_TXQ_CNT];
10842 + unsigned int tx_packets[NUM_GEMAC_SUPPORT][EMAC_TXQ_CNT];
10843 +};
10844 +
10845 +struct pfe {
10846 + struct regmap *scfg;
10847 + unsigned long ddr_phys_baseaddr;
10848 + void *ddr_baseaddr;
10849 + unsigned int ddr_size;
10850 + void *cbus_baseaddr;
10851 + void *apb_baseaddr;
10852 + unsigned long iram_phys_baseaddr;
10853 + void *iram_baseaddr;
10854 + unsigned long ipsec_phys_baseaddr;
10855 + void *ipsec_baseaddr;
10856 + int hif_irq;
10857 + int wol_irq;
10858 + int hif_client_irq;
10859 + struct device *dev;
10860 + struct dentry *dentry;
10861 + struct pfe_ctrl ctrl;
10862 + struct pfe_hif hif;
10863 + struct pfe_eth eth;
10864 + struct pfe_mdio mdio;
10865 + struct hif_client_s *hif_client[HIF_CLIENTS_MAX];
10866 +#if defined(CFG_DIAGS)
10867 + struct pfe_diags diags;
10868 +#endif
10869 + struct pfe_tmu_credit tmu_credit;
10870 + struct pfe_cpumon cpumon;
10871 + struct pfe_memmon memmon;
10872 + int wake;
10873 + int mdio_muxval[PHYID_MAX_VAL];
10874 + struct clk *hfe_clock;
10875 +};
10876 +
10877 +extern struct pfe *pfe;
10878 +
10879 +int pfe_probe(struct pfe *pfe);
10880 +int pfe_remove(struct pfe *pfe);
10881 +
10882 +/* DDR Mapping in reserved memory*/
10883 +#define ROUTE_TABLE_BASEADDR 0
10884 +#define ROUTE_TABLE_HASH_BITS 15 /* 32K entries */
10885 +#define ROUTE_TABLE_SIZE ((1 << ROUTE_TABLE_HASH_BITS) \
10886 + * CLASS_ROUTE_SIZE)
10887 +#define BMU2_DDR_BASEADDR (ROUTE_TABLE_BASEADDR + ROUTE_TABLE_SIZE)
10888 +#define BMU2_BUF_COUNT (4096 - 256)
10889 +/* This is to get a total DDR size of 12MiB */
10890 +#define BMU2_DDR_SIZE (DDR_BUF_SIZE * BMU2_BUF_COUNT)
10891 +#define UTIL_CODE_BASEADDR (BMU2_DDR_BASEADDR + BMU2_DDR_SIZE)
10892 +#define UTIL_CODE_SIZE (128 * SZ_1K)
10893 +#define UTIL_DDR_DATA_BASEADDR (UTIL_CODE_BASEADDR + UTIL_CODE_SIZE)
10894 +#define UTIL_DDR_DATA_SIZE (64 * SZ_1K)
10895 +#define CLASS_DDR_DATA_BASEADDR (UTIL_DDR_DATA_BASEADDR + UTIL_DDR_DATA_SIZE)
10896 +#define CLASS_DDR_DATA_SIZE (32 * SZ_1K)
10897 +#define TMU_DDR_DATA_BASEADDR (CLASS_DDR_DATA_BASEADDR + CLASS_DDR_DATA_SIZE)
10898 +#define TMU_DDR_DATA_SIZE (32 * SZ_1K)
10899 +#define TMU_LLM_BASEADDR (TMU_DDR_DATA_BASEADDR + TMU_DDR_DATA_SIZE)
10900 +#define TMU_LLM_QUEUE_LEN (8 * 512)
10901 +/* Must be power of two and at least 16 * 8 = 128 bytes */
10902 +#define TMU_LLM_SIZE (4 * 16 * TMU_LLM_QUEUE_LEN)
10903 +/* (4 TMU's x 16 queues x queue_len) */
10904 +
10905 +#define DDR_MAX_SIZE (TMU_LLM_BASEADDR + TMU_LLM_SIZE)
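+ /* The regions above are carved out back to back from the reserved DDR
+  * block, so DDR_MAX_SIZE is simply the end offset of the last region
+  * (the TMU LLM).
+  */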
10906 +
10907 +/* LMEM Mapping */
10908 +#define BMU1_LMEM_BASEADDR 0
10909 +#define BMU1_BUF_COUNT 256
10910 +#define BMU1_LMEM_SIZE (LMEM_BUF_SIZE * BMU1_BUF_COUNT)
10911 +
10912 +#endif /* _PFE_MOD_H_ */
10913 --- /dev/null
10914 +++ b/drivers/staging/fsl_ppfe/pfe_perfmon.h
10915 @@ -0,0 +1,26 @@
10916 +/* SPDX-License-Identifier: GPL-2.0+ */
10917 +/*
10918 + * Copyright 2015-2016 Freescale Semiconductor, Inc.
10919 + * Copyright 2017 NXP
10920 + */
10921 +
10922 +#ifndef _PFE_PERFMON_H_
10923 +#define _PFE_PERFMON_H_
10924 +
10925 +#include "pfe/pfe.h"
10926 +
10927 +#define CT_CPUMON_INTERVAL (1 * TIMER_TICKS_PER_SEC)
10928 +
10929 +struct pfe_cpumon {
10930 + u32 cpu_usage_pct[MAX_PE];
10931 + u32 class_usage_pct;
10932 +};
10933 +
10934 +struct pfe_memmon {
10935 + u32 kernel_memory_allocated;
10936 +};
10937 +
10938 +int pfe_perfmon_init(struct pfe *pfe);
10939 +void pfe_perfmon_exit(struct pfe *pfe);
10940 +
10941 +#endif /* _PFE_PERFMON_H_ */
10942 --- /dev/null
10943 +++ b/drivers/staging/fsl_ppfe/pfe_sysfs.c
10944 @@ -0,0 +1,840 @@
10945 +// SPDX-License-Identifier: GPL-2.0+
10946 +/*
10947 + * Copyright 2015-2016 Freescale Semiconductor, Inc.
10948 + * Copyright 2017 NXP
10949 + */
10950 +
10951 +#include <linux/module.h>
10952 +#include <linux/platform_device.h>
10953 +
10954 +#include "pfe_mod.h"
10955 +
10956 +#define PE_EXCEPTION_DUMP_ADDRESS 0x1fa8
10957 +#define NUM_QUEUES 16
10958 +
10959 +static char register_name[20][5] = {
10960 + "EPC", "ECAS", "EID", "ED",
10961 + "r0", "r1", "r2", "r3",
10962 + "r4", "r5", "r6", "r7",
10963 + "r8", "r9", "r10", "r11",
10964 + "r12", "r13", "r14", "r15",
10965 +};
10966 +
10967 +static char exception_name[14][20] = {
10968 + "Reset",
10969 + "HardwareFailure",
10970 + "NMI",
10971 + "InstBreakpoint",
10972 + "DataBreakpoint",
10973 + "Unsupported",
10974 + "PrivilegeViolation",
10975 + "InstBusError",
10976 + "DataBusError",
10977 + "AlignmentError",
10978 + "ArithmeticError",
10979 + "SystemCall",
10980 + "MemoryManagement",
10981 + "Interrupt",
10982 +};
10983 +
10984 +static unsigned long class_do_clear;
10985 +static unsigned long tmu_do_clear;
10986 +#if !defined(CONFIG_FSL_PPFE_UTIL_DISABLED)
10987 +static unsigned long util_do_clear;
10988 +#endif
10989 +
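+ /*
+  * Dump one PE's status block from DMEM: a four-character state string
+  * followed by counters (ctr, rx, tx or qstatus, drop), optional "DBUG"
+  * scratch words and, for a PE in the "DEAD" state, an exception
+  * register dump.
+  */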
10990 +static ssize_t display_pe_status(char *buf, int id, u32 dmem_addr, unsigned long
10991 + do_clear)
10992 +{
10993 + ssize_t len = 0;
10994 + u32 val;
10995 + char statebuf[5];
10996 + struct pfe_cpumon *cpumon = &pfe->cpumon;
10997 + u32 debug_indicator;
10998 + u32 debug[20];
10999 +
11000 + if (id < CLASS0_ID || id >= MAX_PE)
11001 + return len;
11002 +
11003 + *(u32 *)statebuf = pe_dmem_read(id, dmem_addr, 4);
11004 + dmem_addr += 4;
11005 +
11006 + statebuf[4] = '\0';
11007 + len += sprintf(buf + len, "state=%4s ", statebuf);
11008 +
11009 + val = pe_dmem_read(id, dmem_addr, 4);
11010 + dmem_addr += 4;
11011 + len += sprintf(buf + len, "ctr=%08x ", cpu_to_be32(val));
11012 +
11013 + val = pe_dmem_read(id, dmem_addr, 4);
11014 + if (do_clear && val)
11015 + pe_dmem_write(id, 0, dmem_addr, 4);
11016 + dmem_addr += 4;
11017 + len += sprintf(buf + len, "rx=%u ", cpu_to_be32(val));
11018 +
11019 + val = pe_dmem_read(id, dmem_addr, 4);
11020 + if (do_clear && val)
11021 + pe_dmem_write(id, 0, dmem_addr, 4);
11022 + dmem_addr += 4;
11023 + if (id >= TMU0_ID && id <= TMU_MAX_ID)
11024 + len += sprintf(buf + len, "qstatus=%x", cpu_to_be32(val));
11025 + else
11026 + len += sprintf(buf + len, "tx=%u", cpu_to_be32(val));
11027 +
11028 + val = pe_dmem_read(id, dmem_addr, 4);
11029 + if (do_clear && val)
11030 + pe_dmem_write(id, 0, dmem_addr, 4);
11031 + dmem_addr += 4;
11032 + if (val)
11033 + len += sprintf(buf + len, " drop=%u", cpu_to_be32(val));
11034 +
11035 + len += sprintf(buf + len, " load=%d%%", cpumon->cpu_usage_pct[id]);
11036 +
11037 + len += sprintf(buf + len, "\n");
11038 +
11039 + debug_indicator = pe_dmem_read(id, dmem_addr, 4);
11040 + dmem_addr += 4;
11041 + if (!strncmp((char *)&debug_indicator, "DBUG", 4)) {
11042 + int j, last = 0;
11043 +
11044 + for (j = 0; j < 16; j++) {
11045 + debug[j] = pe_dmem_read(id, dmem_addr, 4);
11046 + if (debug[j]) {
11047 + if (do_clear)
11048 + pe_dmem_write(id, 0, dmem_addr, 4);
11049 + last = j + 1;
11050 + }
11051 + dmem_addr += 4;
11052 + }
11053 + for (j = 0; j < last; j++) {
11054 + len += sprintf(buf + len, "%08x%s",
11055 + cpu_to_be32(debug[j]),
11056 + (j & 0x7) == 0x7 || j == last - 1 ? "\n" : " ");
11057 + }
11058 + }
11059 +
11060 + if (!strncmp(statebuf, "DEAD", 4)) {
11061 + u32 i, dump = PE_EXCEPTION_DUMP_ADDRESS;
11062 +
11063 + len += sprintf(buf + len, "Exception details:\n");
11064 + for (i = 0; i < 20; i++) {
11065 + debug[i] = pe_dmem_read(id, dump, 4);
11066 + dump += 4;
11067 + if (i == 2)
11068 + len += sprintf(buf + len, "%4s = %08x (=%s) ",
11069 + register_name[i], cpu_to_be32(debug[i]),
11070 + exception_name[min((u32)
11071 + cpu_to_be32(debug[i]), (u32)13)]);
11072 + else
11073 + len += sprintf(buf + len, "%4s = %08x%s",
11074 + register_name[i], cpu_to_be32(debug[i]),
11075 + (i & 0x3) == 0x3 || i == 19 ? "\n" : " ");
11076 + }
11077 + }
11078 +
11079 + return len;
11080 +}
11081 +
11082 +static ssize_t class_phy_stats(char *buf, int phy)
11083 +{
11084 + ssize_t len = 0;
11085 + int off1 = phy * 0x28;
11086 + int off2 = phy * 0x10;
11087 +
11088 + if (phy == 3)
11089 + off1 = CLASS_PHY4_RX_PKTS - CLASS_PHY1_RX_PKTS;
11090 +
11091 + len += sprintf(buf + len, "phy: %d\n", phy);
11092 + len += sprintf(buf + len,
11093 + " rx: %10u, tx: %10u, intf: %10u, ipv4: %10u, ipv6: %10u\n",
11094 + readl(CLASS_PHY1_RX_PKTS + off1),
11095 + readl(CLASS_PHY1_TX_PKTS + off1),
11096 + readl(CLASS_PHY1_INTF_MATCH_PKTS + off1),
11097 + readl(CLASS_PHY1_V4_PKTS + off1),
11098 + readl(CLASS_PHY1_V6_PKTS + off1));
11099 +
11100 + len += sprintf(buf + len,
11101 + " icmp: %10u, igmp: %10u, tcp: %10u, udp: %10u\n",
11102 + readl(CLASS_PHY1_ICMP_PKTS + off2),
11103 + readl(CLASS_PHY1_IGMP_PKTS + off2),
11104 + readl(CLASS_PHY1_TCP_PKTS + off2),
11105 + readl(CLASS_PHY1_UDP_PKTS + off2));
11106 +
11107 + len += sprintf(buf + len, " err\n");
11108 + len += sprintf(buf + len,
11109 + " lp: %10u, intf: %10u, l3: %10u, chcksum: %10u, ttl: %10u\n",
11110 + readl(CLASS_PHY1_LP_FAIL_PKTS + off1),
11111 + readl(CLASS_PHY1_INTF_FAIL_PKTS + off1),
11112 + readl(CLASS_PHY1_L3_FAIL_PKTS + off1),
11113 + readl(CLASS_PHY1_CHKSUM_ERR_PKTS + off1),
11114 + readl(CLASS_PHY1_TTL_ERR_PKTS + off1));
11115 +
11116 + return len;
11117 +}
11118 +
11119 +/* qm_read_drop_stat
11120 + * This function is used to read the drop statistics from the TMU
11121 + * hw drop counter. Since the hw counter is always cleared after
11122 + * reading, this function maintains the previous drop count, and
11123 + * adds the new value to it. That value can be retrieved by
11124 + * passing a pointer to it with the total_drops arg.
11125 + *
11126 + * @param tmu TMU number (0 - 3)
11127 + * @param queue queue number (0 - 15)
11128 + * @param total_drops pointer to location to store total drops (or NULL)
11129 + * @param do_reset if TRUE, clear total drops after updating
11130 + */
11131 +u32 qm_read_drop_stat(u32 tmu, u32 queue, u32 *total_drops, int do_reset)
11132 +{
11133 + static u32 qtotal[TMU_MAX_ID + 1][NUM_QUEUES];
11134 + u32 val;
11135 +
11136 + writel((tmu << 8) | queue, TMU_TEQ_CTRL);
11137 + writel((tmu << 8) | queue, TMU_LLM_CTRL);
11138 + val = readl(TMU_TEQ_DROP_STAT);
11139 + qtotal[tmu][queue] += val;
11140 + if (total_drops)
11141 + *total_drops = qtotal[tmu][queue];
11142 + if (do_reset)
11143 + qtotal[tmu][queue] = 0;
11144 + return val;
11145 +}
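+
+ /* Example: qm_read_drop_stat(0, 5, &total, 0) returns the drops newly
+  * counted on TMU0 queue 5 and accumulates them into total without
+  * resetting it.
+  */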
11146 +
11147 +static ssize_t tmu_queue_stats(char *buf, int tmu, int queue)
11148 +{
11149 + ssize_t len = 0;
11150 + u32 drops;
11151 +
11152 + len += sprintf(buf + len, "%d-%02d, ", tmu, queue);
11153 +
11154 + drops = qm_read_drop_stat(tmu, queue, NULL, 0);
11155 +
11156 + /* Select queue */
11157 + writel((tmu << 8) | queue, TMU_TEQ_CTRL);
11158 + writel((tmu << 8) | queue, TMU_LLM_CTRL);
11159 +
11160 + len += sprintf(buf + len,
11161 + "(teq) drop: %10u, tx: %10u (llm) head: %08x, tail: %08x, drop: %10u\n",
11162 + drops, readl(TMU_TEQ_TRANS_STAT),
11163 + readl(TMU_LLM_QUE_HEADPTR), readl(TMU_LLM_QUE_TAILPTR),
11164 + readl(TMU_LLM_QUE_DROPCNT));
11165 +
11166 + return len;
11167 +}
11168 +
11169 +static ssize_t tmu_queues(char *buf, int tmu)
11170 +{
11171 + ssize_t len = 0;
11172 + int queue;
11173 +
11174 + for (queue = 0; queue < 16; queue++)
11175 + len += tmu_queue_stats(buf + len, tmu, queue);
11176 +
11177 + return len;
11178 +}
11179 +
11180 +static ssize_t block_version(char *buf, void *addr)
11181 +{
11182 + ssize_t len = 0;
11183 + u32 val;
11184 +
11185 + val = readl(addr);
11186 + len += sprintf(buf + len, "revision: %x, version: %x, id: %x\n",
11187 + (val >> 24) & 0xff, (val >> 16) & 0xff, val & 0xffff);
11188 +
11189 + return len;
11190 +}
11191 +
11192 +static ssize_t bmu(char *buf, int id, void *base)
11193 +{
11194 + ssize_t len = 0;
11195 +
11196 + len += sprintf(buf + len, "%s: %d\n ", __func__, id);
11197 +
11198 + len += block_version(buf + len, base + BMU_VERSION);
11199 +
11200 + len += sprintf(buf + len, " buf size: %x\n", (1 << readl(base +
11201 + BMU_BUF_SIZE)));
11202 + len += sprintf(buf + len, " buf count: %x\n", readl(base +
11203 + BMU_BUF_CNT));
11204 + len += sprintf(buf + len, " buf rem: %x\n", readl(base +
11205 + BMU_REM_BUF_CNT));
11206 + len += sprintf(buf + len, " buf curr: %x\n", readl(base +
11207 + BMU_CURR_BUF_CNT));
11208 + len += sprintf(buf + len, " free err: %x\n", readl(base +
11209 + BMU_FREE_ERR_ADDR));
11210 +
11211 + return len;
11212 +}
11213 +
11214 +static ssize_t gpi(char *buf, int id, void *base)
11215 +{
11216 + ssize_t len = 0;
11217 + u32 val;
11218 +
11219 + len += sprintf(buf + len, "%s%d:\n ", __func__, id);
11220 + len += block_version(buf + len, base + GPI_VERSION);
11221 +
11222 + len += sprintf(buf + len, " tx under stick: %x\n", readl(base +
11223 + GPI_FIFO_STATUS));
11224 + val = readl(base + GPI_FIFO_DEBUG);
11225 + len += sprintf(buf + len, " tx pkts: %x\n", (val >> 23) &
11226 + 0x3f);
11227 + len += sprintf(buf + len, " rx pkts: %x\n", (val >> 18) &
11228 + 0x3f);
11229 + len += sprintf(buf + len, " tx bytes: %x\n", (val >> 9) &
11230 + 0x1ff);
11231 + len += sprintf(buf + len, " rx bytes: %x\n", (val >> 0) &
11232 + 0x1ff);
11233 + len += sprintf(buf + len, " overrun: %x\n", readl(base +
11234 + GPI_OVERRUN_DROPCNT));
11235 +
11236 + return len;
11237 +}
11238 +
11239 +static ssize_t pfe_set_class(struct device *dev, struct device_attribute *attr,
11240 + const char *buf, size_t count)
11241 +{
11242 + if (kstrtoul(buf, 0, &class_do_clear))
+ return -EINVAL;
11243 + return count;
11244 +}
11245 +
11246 +static ssize_t pfe_show_class(struct device *dev, struct device_attribute *attr,
11247 + char *buf)
11248 +{
11249 + ssize_t len = 0;
11250 + int id;
11251 + u32 val;
11252 + struct pfe_cpumon *cpumon = &pfe->cpumon;
11253 +
11254 + len += block_version(buf + len, CLASS_VERSION);
11255 +
11256 + for (id = CLASS0_ID; id <= CLASS_MAX_ID; id++) {
11257 + len += sprintf(buf + len, "%d: ", id - CLASS0_ID);
11258 +
11259 + val = readl(CLASS_PE0_DEBUG + id * 4);
11260 + len += sprintf(buf + len, "pc=1%04x ", val & 0xffff);
11261 +
11262 + len += display_pe_status(buf + len, id, CLASS_DM_PESTATUS,
11263 + class_do_clear);
11264 + }
11265 + len += sprintf(buf + len, "aggregate load=%d%%\n\n",
11266 + cpumon->class_usage_pct);
11267 +
11268 + len += sprintf(buf + len, "pe status: 0x%x\n",
11269 + readl(CLASS_PE_STATUS));
11270 + len += sprintf(buf + len, "max buf cnt: 0x%x afull thres: 0x%x\n",
11271 + readl(CLASS_MAX_BUF_CNT), readl(CLASS_AFULL_THRES));
11272 + len += sprintf(buf + len, "tsq max cnt: 0x%x tsq fifo thres: 0x%x\n",
11273 + readl(CLASS_TSQ_MAX_CNT), readl(CLASS_TSQ_FIFO_THRES));
11274 + len += sprintf(buf + len, "state: 0x%x\n", readl(CLASS_STATE));
11275 +
11276 + len += class_phy_stats(buf + len, 0);
11277 + len += class_phy_stats(buf + len, 1);
11278 + len += class_phy_stats(buf + len, 2);
11279 + len += class_phy_stats(buf + len, 3);
11280 +
11281 + return len;
11282 +}
11283 +
11284 +static ssize_t pfe_set_tmu(struct device *dev, struct device_attribute *attr,
11285 + const char *buf, size_t count)
11286 +{
11287 + if (kstrtoul(buf, 0, &tmu_do_clear))
+ return -EINVAL;
11288 + return count;
11289 +}
11290 +
11291 +static ssize_t pfe_show_tmu(struct device *dev, struct device_attribute *attr,
11292 + char *buf)
11293 +{
11294 + ssize_t len = 0;
11295 + int id;
11296 + u32 val;
11297 +
11298 + len += block_version(buf + len, TMU_VERSION);
11299 +
11300 + for (id = TMU0_ID; id <= TMU_MAX_ID; id++) {
11301 + if (id == TMU2_ID)
11302 + continue;
11303 + len += sprintf(buf + len, "%d: ", id - TMU0_ID);
11304 +
11305 + len += display_pe_status(buf + len, id, TMU_DM_PESTATUS,
11306 + tmu_do_clear);
11307 + }
11308 +
11309 + len += sprintf(buf + len, "pe status: %x\n", readl(TMU_PE_STATUS));
11310 + len += sprintf(buf + len, "inq fifo cnt: %x\n",
11311 + readl(TMU_PHY_INQ_FIFO_CNT));
11312 + val = readl(TMU_INQ_STAT);
11313 + len += sprintf(buf + len, "inq wr ptr: %x\n", val & 0x3ff);
11314 + len += sprintf(buf + len, "inq rd ptr: %x\n", val >> 10);
11315 +
11316 + return len;
11317 +}
11318 +
11319 +static unsigned long drops_do_clear;
11320 +static u32 class_drop_counter[CLASS_NUM_DROP_COUNTERS];
11321 +#if !defined(CONFIG_FSL_PPFE_UTIL_DISABLED)
11322 +static u32 util_drop_counter[UTIL_NUM_DROP_COUNTERS];
11323 +#endif
11324 +
11325 +char *class_drop_description[CLASS_NUM_DROP_COUNTERS] = {
11326 + "ICC",
11327 + "Host Pkt Error",
11328 + "Rx Error",
11329 + "IPsec Outbound",
11330 + "IPsec Inbound",
11331 + "EXPT IPsec Error",
11332 + "Reassembly",
11333 + "Fragmenter",
11334 + "NAT-T",
11335 + "Socket",
11336 + "Multicast",
11337 + "NAT-PT",
11338 + "Tx Disabled",
11339 +};
11340 +
11341 +#if !defined(CONFIG_FSL_PPFE_UTIL_DISABLED)
11342 +char *util_drop_description[UTIL_NUM_DROP_COUNTERS] = {
11343 + "IPsec Outbound",
11344 + "IPsec Inbound",
11345 + "IPsec Rate Limiter",
11346 + "Fragmenter",
11347 + "Socket",
11348 + "Tx Disabled",
11349 + "Rx Error",
11350 +};
11351 +#endif
11352 +
11353 +static ssize_t pfe_set_drops(struct device *dev, struct device_attribute *attr,
11354 + const char *buf, size_t count)
11355 +{
11356 + if (kstrtoul(buf, 0, &drops_do_clear))
+ return -EINVAL;
11357 + return count;
11358 +}
11359 +
11360 +static u32 tmu_drops[4][16];
11361 +static ssize_t pfe_show_drops(struct device *dev, struct device_attribute *attr,
11362 + char *buf)
11363 +{
11364 + ssize_t len = 0;
11365 + int id, dropnum;
11366 + int tmu, queue;
11367 + u32 val;
11368 + u32 dmem_addr;
11369 + int num_class_drops = 0, num_tmu_drops = 0, num_util_drops = 0;
11370 + struct pfe_ctrl *ctrl = &pfe->ctrl;
11371 +
11372 + memset(class_drop_counter, 0, sizeof(class_drop_counter));
11373 + for (id = CLASS0_ID; id <= CLASS_MAX_ID; id++) {
11374 + if (drops_do_clear)
11375 + pe_sync_stop(ctrl, (1 << id));
11376 + for (dropnum = 0; dropnum < CLASS_NUM_DROP_COUNTERS;
11377 + dropnum++) {
11378 + dmem_addr = CLASS_DM_DROP_CNTR + (dropnum * 4);
11379 + val = be32_to_cpu(pe_dmem_read(id, dmem_addr, 4));
11380 + class_drop_counter[dropnum] += val;
11381 + num_class_drops += val;
11382 + if (drops_do_clear)
11383 + pe_dmem_write(id, 0, dmem_addr, 4);
11384 + }
11385 + if (drops_do_clear)
11386 + pe_start(ctrl, (1 << id));
11387 + }
11388 +
11389 +#if !defined(CONFIG_FSL_PPFE_UTIL_DISABLED)
11390 + if (drops_do_clear)
11391 + pe_sync_stop(ctrl, (1 << UTIL_ID));
11392 + for (dropnum = 0; dropnum < UTIL_NUM_DROP_COUNTERS; dropnum++) {
11393 + dmem_addr = UTIL_DM_DROP_CNTR + (dropnum * 4);
11394 + val = be32_to_cpu(pe_dmem_read(UTIL_ID, dmem_addr, 4));
11395 + util_drop_counter[dropnum] = val;
11396 + num_util_drops += val;
11397 + if (drops_do_clear)
11398 + pe_dmem_write(UTIL_ID, 0, dmem_addr, 4);
11399 + }
11400 + if (drops_do_clear)
11401 + pe_start(ctrl, (1 << UTIL_ID));
11402 +#endif
11403 + for (tmu = 0; tmu < 4; tmu++) {
11404 + for (queue = 0; queue < 16; queue++) {
11405 + qm_read_drop_stat(tmu, queue, &tmu_drops[tmu][queue],
11406 + drops_do_clear);
11407 + num_tmu_drops += tmu_drops[tmu][queue];
11408 + }
11409 + }
11410 +
11411 + if (num_class_drops == 0 && num_util_drops == 0 && num_tmu_drops == 0)
11412 + len += sprintf(buf + len, "No PE drops\n\n");
11413 +
11414 + if (num_class_drops > 0) {
11415 + len += sprintf(buf + len, "Class PE drops --\n");
11416 + for (dropnum = 0; dropnum < CLASS_NUM_DROP_COUNTERS;
11417 + dropnum++) {
11418 + if (class_drop_counter[dropnum] > 0)
11419 + len += sprintf(buf + len, " %s: %d\n",
11420 + class_drop_description[dropnum],
11421 + class_drop_counter[dropnum]);
11422 + }
11423 + len += sprintf(buf + len, "\n");
11424 + }
11425 +
11426 +#if !defined(CONFIG_FSL_PPFE_UTIL_DISABLED)
11427 + if (num_util_drops > 0) {
11428 + len += sprintf(buf + len, "Util PE drops --\n");
11429 + for (dropnum = 0; dropnum < UTIL_NUM_DROP_COUNTERS; dropnum++) {
11430 + if (util_drop_counter[dropnum] > 0)
11431 + len += sprintf(buf + len, " %s: %d\n",
11432 + util_drop_description[dropnum],
11433 + util_drop_counter[dropnum]);
11434 + }
11435 + len += sprintf(buf + len, "\n");
11436 + }
11437 +#endif
11438 + if (num_tmu_drops > 0) {
11439 + len += sprintf(buf + len, "TMU drops --\n");
11440 + for (tmu = 0; tmu < 4; tmu++) {
11441 + for (queue = 0; queue < 16; queue++) {
11442 + if (tmu_drops[tmu][queue] > 0)
11443 + len += sprintf(buf + len,
11444 + " TMU%d-Q%d: %d\n"
11445 + , tmu, queue, tmu_drops[tmu][queue]);
11446 + }
11447 + }
11448 + len += sprintf(buf + len, "\n");
11449 + }
11450 +
11451 + return len;
11452 +}
11453 +
11454 +static ssize_t pfe_show_tmu0_queues(struct device *dev, struct device_attribute
11455 + *attr, char *buf)
11456 +{
11457 + return tmu_queues(buf, 0);
11458 +}
11459 +
11460 +static ssize_t pfe_show_tmu1_queues(struct device *dev, struct device_attribute
11461 + *attr, char *buf)
11462 +{
11463 + return tmu_queues(buf, 1);
11464 +}
11465 +
11466 +static ssize_t pfe_show_tmu2_queues(struct device *dev, struct device_attribute
11467 + *attr, char *buf)
11468 +{
11469 + return tmu_queues(buf, 2);
11470 +}
11471 +
11472 +static ssize_t pfe_show_tmu3_queues(struct device *dev, struct device_attribute
11473 + *attr, char *buf)
11474 +{
11475 + return tmu_queues(buf, 3);
11476 +}
11477 +
11478 +#if !defined(CONFIG_FSL_PPFE_UTIL_DISABLED)
11479 +static ssize_t pfe_set_util(struct device *dev, struct device_attribute *attr,
11480 + const char *buf, size_t count)
11481 +{
11482 + if (kstrtoul(buf, 0, &util_do_clear))
+ return -EINVAL;
11483 + return count;
11484 +}
11485 +
11486 +static ssize_t pfe_show_util(struct device *dev, struct device_attribute *attr,
11487 + char *buf)
11488 +{
11489 + ssize_t len = 0;
11490 + struct pfe_ctrl *ctrl = &pfe->ctrl;
11491 +
11492 + len += block_version(buf + len, UTIL_VERSION);
11493 +
11494 + pe_sync_stop(ctrl, (1 << UTIL_ID));
11495 + len += display_pe_status(buf + len, UTIL_ID, UTIL_DM_PESTATUS,
11496 + util_do_clear);
11497 + pe_start(ctrl, (1 << UTIL_ID));
11498 +
11499 + len += sprintf(buf + len, "pe status: %x\n", readl(UTIL_PE_STATUS));
11500 + len += sprintf(buf + len, "max buf cnt: %x\n",
11501 + readl(UTIL_MAX_BUF_CNT));
11502 + len += sprintf(buf + len, "tsq max cnt: %x\n",
11503 + readl(UTIL_TSQ_MAX_CNT));
11504 +
11505 + return len;
11506 +}
11507 +#endif
11508 +
11509 +static ssize_t pfe_show_bmu(struct device *dev, struct device_attribute *attr,
11510 + char *buf)
11511 +{
11512 + ssize_t len = 0;
11513 +
11514 + len += bmu(buf + len, 1, BMU1_BASE_ADDR);
11515 + len += bmu(buf + len, 2, BMU2_BASE_ADDR);
11516 +
11517 + return len;
11518 +}
11519 +
11520 +static ssize_t pfe_show_hif(struct device *dev, struct device_attribute *attr,
11521 + char *buf)
11522 +{
11523 + ssize_t len = 0;
11524 +
11525 + len += sprintf(buf + len, "hif:\n ");
11526 + len += block_version(buf + len, HIF_VERSION);
11527 +
11528 + len += sprintf(buf + len, " tx curr bd: %x\n",
11529 + readl(HIF_TX_CURR_BD_ADDR));
11530 + len += sprintf(buf + len, " tx status: %x\n",
11531 + readl(HIF_TX_STATUS));
11532 + len += sprintf(buf + len, " tx dma status: %x\n",
11533 + readl(HIF_TX_DMA_STATUS));
11534 +
11535 + len += sprintf(buf + len, " rx curr bd: %x\n",
11536 + readl(HIF_RX_CURR_BD_ADDR));
11537 + len += sprintf(buf + len, " rx status: %x\n",
11538 + readl(HIF_RX_STATUS));
11539 + len += sprintf(buf + len, " rx dma status: %x\n",
11540 + readl(HIF_RX_DMA_STATUS));
11541 +
11542 + len += sprintf(buf + len, "hif nocopy:\n ");
11543 + len += block_version(buf + len, HIF_NOCPY_VERSION);
11544 +
11545 + len += sprintf(buf + len, " tx curr bd: %x\n",
11546 + readl(HIF_NOCPY_TX_CURR_BD_ADDR));
11547 + len += sprintf(buf + len, " tx status: %x\n",
11548 + readl(HIF_NOCPY_TX_STATUS));
11549 + len += sprintf(buf + len, " tx dma status: %x\n",
11550 + readl(HIF_NOCPY_TX_DMA_STATUS));
11551 +
11552 + len += sprintf(buf + len, " rx curr bd: %x\n",
11553 + readl(HIF_NOCPY_RX_CURR_BD_ADDR));
11554 + len += sprintf(buf + len, " rx status: %x\n",
11555 + readl(HIF_NOCPY_RX_STATUS));
11556 + len += sprintf(buf + len, " rx dma status: %x\n",
11557 + readl(HIF_NOCPY_RX_DMA_STATUS));
11558 +
11559 + return len;
11560 +}
11561 +
11562 +static ssize_t pfe_show_gpi(struct device *dev, struct device_attribute *attr,
11563 + char *buf)
11564 +{
11565 + ssize_t len = 0;
11566 +
11567 + len += gpi(buf + len, 0, EGPI1_BASE_ADDR);
11568 + len += gpi(buf + len, 1, EGPI2_BASE_ADDR);
11569 + len += gpi(buf + len, 3, HGPI_BASE_ADDR);
11570 +
11571 + return len;
11572 +}
11573 +
11574 +static ssize_t pfe_show_pfemem(struct device *dev, struct device_attribute
11575 + *attr, char *buf)
11576 +{
11577 + ssize_t len = 0;
11578 + struct pfe_memmon *memmon = &pfe->memmon;
11579 +
11580 + len += sprintf(buf + len, "Kernel Memory: %d Bytes (%d KB)\n",
11581 + memmon->kernel_memory_allocated,
11582 + (memmon->kernel_memory_allocated + 1023) / 1024);
11583 +
11584 + return len;
11585 +}
11586 +
11587 +static ssize_t pfe_show_crc_revalidated(struct device *dev,
11588 + struct device_attribute *attr,
11589 + char *buf)
11590 +{
11591 + u64 crc_validated = 0;
11592 + ssize_t len = 0;
11593 + int id, phyid;
11594 +
11595 + len += sprintf(buf + len, "FCS re-validated by PFE:\n");
11596 +
11597 + for (phyid = 0; phyid < 2; phyid++) {
11598 + crc_validated = 0;
11599 + for (id = CLASS0_ID; id <= CLASS_MAX_ID; id++) {
11600 + crc_validated += be32_to_cpu(pe_dmem_read(id,
11601 + CLASS_DM_CRC_VALIDATED + (phyid * 4), 4));
11602 + }
11603 + len += sprintf(buf + len, "MAC %d:\n count:%10llu\n",
11604 + phyid, crc_validated);
11605 + }
11606 +
11607 + return len;
11608 +}
11609 +
11610 +#ifdef HIF_NAPI_STATS
11611 +static ssize_t pfe_show_hif_napi_stats(struct device *dev,
11612 + struct device_attribute *attr,
11613 + char *buf)
11614 +{
11615 + struct platform_device *pdev = to_platform_device(dev);
11616 + struct pfe *pfe = platform_get_drvdata(pdev);
11617 + ssize_t len = 0;
11618 +
11619 + len += sprintf(buf + len, "sched: %u\n",
11620 + pfe->hif.napi_counters[NAPI_SCHED_COUNT]);
11621 + len += sprintf(buf + len, "poll: %u\n",
11622 + pfe->hif.napi_counters[NAPI_POLL_COUNT]);
11623 + len += sprintf(buf + len, "packet: %u\n",
11624 + pfe->hif.napi_counters[NAPI_PACKET_COUNT]);
11625 + len += sprintf(buf + len, "budget: %u\n",
11626 + pfe->hif.napi_counters[NAPI_FULL_BUDGET_COUNT]);
11627 + len += sprintf(buf + len, "desc: %u\n",
11628 + pfe->hif.napi_counters[NAPI_DESC_COUNT]);
11629 + len += sprintf(buf + len, "full: %u\n",
11630 + pfe->hif.napi_counters[NAPI_CLIENT_FULL_COUNT]);
11631 +
11632 + return len;
11633 +}
11634 +
11635 +static ssize_t pfe_set_hif_napi_stats(struct device *dev,
11636 + struct device_attribute *attr,
11637 + const char *buf, size_t count)
11638 +{
11639 + struct platform_device *pdev = to_platform_device(dev);
11640 + struct pfe *pfe = platform_get_drvdata(pdev);
11641 +
11642 + memset(pfe->hif.napi_counters, 0, sizeof(pfe->hif.napi_counters));
11643 +
11644 + return count;
11645 +}
11646 +
11647 +static DEVICE_ATTR(hif_napi_stats, 0644, pfe_show_hif_napi_stats,
11648 + pfe_set_hif_napi_stats);
11649 +#endif
11650 +
11651 +static DEVICE_ATTR(class, 0644, pfe_show_class, pfe_set_class);
11652 +static DEVICE_ATTR(tmu, 0644, pfe_show_tmu, pfe_set_tmu);
11653 +#if !defined(CONFIG_FSL_PPFE_UTIL_DISABLED)
11654 +static DEVICE_ATTR(util, 0644, pfe_show_util, pfe_set_util);
11655 +#endif
11656 +static DEVICE_ATTR(bmu, 0444, pfe_show_bmu, NULL);
11657 +static DEVICE_ATTR(hif, 0444, pfe_show_hif, NULL);
11658 +static DEVICE_ATTR(gpi, 0444, pfe_show_gpi, NULL);
11659 +static DEVICE_ATTR(drops, 0644, pfe_show_drops, pfe_set_drops);
11660 +static DEVICE_ATTR(tmu0_queues, 0444, pfe_show_tmu0_queues, NULL);
11661 +static DEVICE_ATTR(tmu1_queues, 0444, pfe_show_tmu1_queues, NULL);
11662 +static DEVICE_ATTR(tmu2_queues, 0444, pfe_show_tmu2_queues, NULL);
11663 +static DEVICE_ATTR(tmu3_queues, 0444, pfe_show_tmu3_queues, NULL);
11664 +static DEVICE_ATTR(pfemem, 0444, pfe_show_pfemem, NULL);
11665 +static DEVICE_ATTR(fcs_revalidated, 0444, pfe_show_crc_revalidated, NULL);
11666 +
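+ /*
+  * The attributes below land under the PFE platform device's sysfs node
+  * (e.g. /sys/devices/platform/pfe/drops; the exact path depends on the
+  * device tree). Writing a non-zero value to a writable attribute makes
+  * the next read clear the corresponding counters.
+  */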
11667 +int pfe_sysfs_init(struct pfe *pfe)
11668 +{
11669 + if (device_create_file(pfe->dev, &dev_attr_class))
11670 + goto err_class;
11671 +
11672 + if (device_create_file(pfe->dev, &dev_attr_tmu))
11673 + goto err_tmu;
11674 +
11675 +#if !defined(CONFIG_FSL_PPFE_UTIL_DISABLED)
11676 + if (device_create_file(pfe->dev, &dev_attr_util))
11677 + goto err_util;
11678 +#endif
11679 +
11680 + if (device_create_file(pfe->dev, &dev_attr_bmu))
11681 + goto err_bmu;
11682 +
11683 + if (device_create_file(pfe->dev, &dev_attr_hif))
11684 + goto err_hif;
11685 +
11686 + if (device_create_file(pfe->dev, &dev_attr_gpi))
11687 + goto err_gpi;
11688 +
11689 + if (device_create_file(pfe->dev, &dev_attr_drops))
11690 + goto err_drops;
11691 +
11692 + if (device_create_file(pfe->dev, &dev_attr_tmu0_queues))
11693 + goto err_tmu0_queues;
11694 +
11695 + if (device_create_file(pfe->dev, &dev_attr_tmu1_queues))
11696 + goto err_tmu1_queues;
11697 +
11698 + if (device_create_file(pfe->dev, &dev_attr_tmu2_queues))
11699 + goto err_tmu2_queues;
11700 +
11701 + if (device_create_file(pfe->dev, &dev_attr_tmu3_queues))
11702 + goto err_tmu3_queues;
11703 +
11704 + if (device_create_file(pfe->dev, &dev_attr_pfemem))
11705 + goto err_pfemem;
11706 +
11707 + if (device_create_file(pfe->dev, &dev_attr_fcs_revalidated))
11708 + goto err_crc_revalidated;
11709 +
11710 +#ifdef HIF_NAPI_STATS
11711 + if (device_create_file(pfe->dev, &dev_attr_hif_napi_stats))
11712 + goto err_hif_napi_stats;
11713 +#endif
11714 +
11715 + return 0;
11716 +
11717 +#ifdef HIF_NAPI_STATS
11718 +err_hif_napi_stats:
11719 + device_remove_file(pfe->dev, &dev_attr_fcs_revalidated);
11720 +#endif
11721 +
11722 +err_crc_revalidated:
11723 + device_remove_file(pfe->dev, &dev_attr_pfemem);
11724 +
11725 +err_pfemem:
11726 + device_remove_file(pfe->dev, &dev_attr_tmu3_queues);
11727 +
11728 +err_tmu3_queues:
11729 + device_remove_file(pfe->dev, &dev_attr_tmu2_queues);
11730 +
11731 +err_tmu2_queues:
11732 + device_remove_file(pfe->dev, &dev_attr_tmu1_queues);
11733 +
11734 +err_tmu1_queues:
11735 + device_remove_file(pfe->dev, &dev_attr_tmu0_queues);
11736 +
11737 +err_tmu0_queues:
11738 + device_remove_file(pfe->dev, &dev_attr_drops);
11739 +
11740 +err_drops:
11741 + device_remove_file(pfe->dev, &dev_attr_gpi);
11742 +
11743 +err_gpi:
11744 + device_remove_file(pfe->dev, &dev_attr_hif);
11745 +
11746 +err_hif:
11747 + device_remove_file(pfe->dev, &dev_attr_bmu);
11748 +
11749 +err_bmu:
11750 +#if !defined(CONFIG_FSL_PPFE_UTIL_DISABLED)
11751 + device_remove_file(pfe->dev, &dev_attr_util);
11752 +
11753 +err_util:
11754 +#endif
11755 + device_remove_file(pfe->dev, &dev_attr_tmu);
11756 +
11757 +err_tmu:
11758 + device_remove_file(pfe->dev, &dev_attr_class);
11759 +
11760 +err_class:
11761 + return -1;
11762 +}
11763 +
11764 +void pfe_sysfs_exit(struct pfe *pfe)
11765 +{
11766 +#ifdef HIF_NAPI_STATS
11767 + device_remove_file(pfe->dev, &dev_attr_hif_napi_stats);
11768 +#endif
11769 + device_remove_file(pfe->dev, &dev_attr_fcs_revalidated);
11770 + device_remove_file(pfe->dev, &dev_attr_pfemem);
11771 + device_remove_file(pfe->dev, &dev_attr_tmu3_queues);
11772 + device_remove_file(pfe->dev, &dev_attr_tmu2_queues);
11773 + device_remove_file(pfe->dev, &dev_attr_tmu1_queues);
11774 + device_remove_file(pfe->dev, &dev_attr_tmu0_queues);
11775 + device_remove_file(pfe->dev, &dev_attr_drops);
11776 + device_remove_file(pfe->dev, &dev_attr_gpi);
11777 + device_remove_file(pfe->dev, &dev_attr_hif);
11778 + device_remove_file(pfe->dev, &dev_attr_bmu);
11779 +#if !defined(CONFIG_FSL_PPFE_UTIL_DISABLED)
11780 + device_remove_file(pfe->dev, &dev_attr_util);
11781 +#endif
11782 + device_remove_file(pfe->dev, &dev_attr_tmu);
11783 + device_remove_file(pfe->dev, &dev_attr_class);
11784 +}
11785 --- /dev/null
11786 +++ b/drivers/staging/fsl_ppfe/pfe_sysfs.h
11787 @@ -0,0 +1,17 @@
11788 +/* SPDX-License-Identifier: GPL-2.0+ */
11789 +/*
11790 + * Copyright 2015-2016 Freescale Semiconductor, Inc.
11791 + * Copyright 2017 NXP
11792 + */
11793 +
11794 +#ifndef _PFE_SYSFS_H_
11795 +#define _PFE_SYSFS_H_
11796 +
11797 +#include <linux/proc_fs.h>
11798 +
11799 +u32 qm_read_drop_stat(u32 tmu, u32 queue, u32 *total_drops, int do_reset);
11800 +
11801 +int pfe_sysfs_init(struct pfe *pfe);
11802 +void pfe_sysfs_exit(struct pfe *pfe);
11803 +
11804 +#endif /* _PFE_SYSFS_H_ */