target/linux/bcm27xx/patches-5.4/950-0037-Add-dwc_otg-driver.patch (openwrt/openwrt.git, commit bf4463ed5211800a8652b808e2a0e5ea9519404d)
1 From 4ece96fb20967c2723b0e3f9c915199b5817de7e Mon Sep 17 00:00:00 2001
2 From: popcornmix <popcornmix@gmail.com>
3 Date: Wed, 1 May 2013 19:46:17 +0100
4 Subject: [PATCH] Add dwc_otg driver
5 MIME-Version: 1.0
6 Content-Type: text/plain; charset=UTF-8
7 Content-Transfer-Encoding: 8bit
8
9 Signed-off-by: popcornmix <popcornmix@gmail.com>
10
11 usb: dwc: fix lockdep false positive
12
13 Signed-off-by: Kari Suvanto <karis79@gmail.com>
14
15 usb: dwc: fix inconsistent lock state
16
17 Signed-off-by: Kari Suvanto <karis79@gmail.com>
18
19 Add FIQ patch to dwc_otg driver. Enable with dwc_otg.fiq_fix_enable=1. Should give about 10% more ARM performance.
20 Thanks to Gordon and Costas
21
22 Avoid dynamic memory allocation for channel lock in USB driver. Thanks ddv2005.
23
24 Add NAK holdoff scheme. Enabled by default, disable with dwc_otg.nak_holdoff_enable=0. Thanks gsh
25
26 Make sure we wait for the reset to finish
27
28 dwc_otg: fix bug in dwc_otg_hcd.c resulting in silent kernel
29 memory corruption, escalating to OOPS under high USB load.
30
31 dwc_otg: Fix unsafe access of QTD during URB enqueue
32
33 In dwc_otg_hcd_urb_enqueue during qtd creation, it was possible that the
34 transaction could complete almost immediately after the qtd was assigned
35 to a host channel during URB enqueue, which meant the qtd pointer was no
36 longer valid having been completed and removed. Usually, this resulted in
37 an OOPS during URB submission. By predetermining whether transactions
38 need to be queued or not, this unsafe pointer access is avoided.
39
40 This bug was only evident on the Pi model A where a device was attached
41 that had no periodic endpoints (e.g. USB pendrive or some wlan devices).
42
43 dwc_otg: Fix incorrect URB allocation error handling
44
45 If the memory allocation for a dwc_otg_urb failed, the kernel would OOPS
46 because for some reason a member of the *unallocated* struct was set to
47 zero. Error handling changed to fail correctly.
48
49 dwc_otg: fix potential use-after-free case in interrupt handler
50
51 If a transaction had previously aborted, certain interrupts are
52 enabled to track error counts and reset where necessary. On IN
53 endpoints the host generates an ACK interrupt near-simultaneously
54 with completion of transfer. In the case where this transfer had
55 previously had an error, this results in a use-after-free on
56 the QTD memory space with a 1-byte length being overwritten to
57 0x00.
58
59 dwc_otg: add handling of SPLIT transaction data toggle errors
60
61 Previously a data toggle error on packets from a USB1.1 device behind
62 a TT would result in the Pi locking up as the driver never handled
63 the associated interrupt. Patch adds basic retry mechanism and
64 interrupt acknowledgement to cater for either a chance toggle error or
65 for devices that have a broken initial toggle state (FT8U232/FT232BM).
66
67 dwc_otg: implement tasklet for returning URBs to usbcore hcd layer
68
69 The dwc_otg driver interrupt handler for transfer completion will spend
70 a very long time with interrupts disabled when a URB is completed -
71 this is because usb_hcd_giveback_urb is called from within the handler
72 which for a USB device driver with complicated processing (e.g. webcam)
73 will take an exorbitant amount of time to complete. This results in
74 missed completion interrupts for other USB packets which lead to them
75 being dropped due to microframe overruns.
76
77 This patch splits returning the URB to the usb hcd layer into a
78 high-priority tasklet. This will have most benefit for isochronous IN
79 transfers but will also have incidental benefit where multiple periodic
80 devices are active at once.
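
A minimal sketch of the deferral pattern (illustrative names, not the driver's actual symbols); the URB is assumed to have already been unlinked from its endpoint under the HCD lock:

    #include <linux/usb/hcd.h>
    #include <linux/interrupt.h>
    #include <linux/spinlock.h>
    #include <linux/list.h>

    static LIST_HEAD(urb_return_list);            /* URBs waiting to be given back */
    static DEFINE_SPINLOCK(urb_return_lock);
    static struct tasklet_struct urb_return_tasklet;
    /* init once: tasklet_init(&urb_return_tasklet, urb_return_func, (unsigned long)hcd); */

    static void urb_return_func(unsigned long data)
    {
        struct usb_hcd *hcd = (struct usb_hcd *)data;
        struct urb *urb;
        unsigned long flags;

        /* Runs in softirq context, so a slow class-driver completion no
         * longer extends the time spent in the HCD interrupt handler. */
        spin_lock_irqsave(&urb_return_lock, flags);
        while (!list_empty(&urb_return_list)) {
            urb = list_first_entry(&urb_return_list, struct urb, urb_list);
            list_del_init(&urb->urb_list);
            spin_unlock_irqrestore(&urb_return_lock, flags);
            usb_hcd_giveback_urb(hcd, urb, urb->status); /* status recorded earlier */
            spin_lock_irqsave(&urb_return_lock, flags);
        }
        spin_unlock_irqrestore(&urb_return_lock, flags);
    }

    /* Called from the transfer-completion interrupt instead of giving the
     * URB back directly. */
    static void defer_urb_giveback(struct urb *urb)
    {
        unsigned long flags;

        spin_lock_irqsave(&urb_return_lock, flags);
        list_add_tail(&urb->urb_list, &urb_return_list);
        spin_unlock_irqrestore(&urb_return_lock, flags);
        tasklet_hi_schedule(&urb_return_tasklet);     /* high-priority tasklet */
    }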
81
82 dwc_otg: fix NAK holdoff and allow on split transactions only
83
84 This corrects a bug where if a single active non-periodic endpoint
85 had at least one transaction in its qh, on frnum == MAX_FRNUM the qh
86 would get skipped and never get queued again. This would result in
87 a silent device until error detection (automatic or otherwise) would
88 either reset the device or flush and requeue the URBs.
89
90 Additionally the NAK holdoff was enabled for all transactions - this
91 would potentially stall a HS endpoint for 1ms if a previous error state
92 enabled this interrupt and the next response was a NAK. Fix so that
93 only split transactions get held off.
94
95 dwc_otg: Call usb_hcd_unlink_urb_from_ep with lock held in completion handler
96
97 usb_hcd_unlink_urb_from_ep must be called with the HCD lock held. Calling it
98 asynchronously in the tasklet was not safe (regression in
99 c4564d4a1a0a9b10d4419e48239f5d99e88d2667).
100
101 This change unlinks it from the endpoint prior to queueing it for handling in
102 the tasklet, and also adds a check to ensure the urb is OK to be unlinked
103 before doing so.
104
105 NULL pointer dereference kernel oopses had been observed in usb_hcd_giveback_urb
106 when a USB device was unplugged/replugged during data transfer. This effect
107 was reproduced using automated USB port power control, hundreds of replug
108 events were performed during active transfers to confirm that the problem was
109 eliminated.
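
A sketch of the corrected ordering; apart from the usbcore helpers, the names are illustrative:

    #include <linux/usb/hcd.h>
    #include <linux/interrupt.h>
    #include <linux/list.h>

    /*
     * Called from the completion interrupt with the HCD spinlock already
     * held: unlink from the endpoint first, then hand the URB to the
     * giveback tasklet.
     */
    static void queue_urb_for_giveback(struct usb_hcd *hcd, struct urb *urb,
                                       struct list_head *giveback_list,
                                       struct tasklet_struct *giveback_tasklet)
    {
        if (!urb->hcpriv)                      /* already dequeued - nothing to do */
            return;

        usb_hcd_unlink_urb_from_ep(hcd, urb);  /* must happen under the HCD lock */
        list_add_tail(&urb->urb_list, giveback_list);
        tasklet_hi_schedule(giveback_tasklet); /* usb_hcd_giveback_urb runs there */
    }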
110
111 USB fix using a FIQ to implement split transactions
112
113 This commit adds a FIQ implementation that schedules
114 the split transactions using a FIQ, so we don't get
115 held off by the interrupt latency of Linux.
116
117 dwc_otg: fix device attributes and avoid kernel warnings on boot
118
119 dwc_otg: avoid logging function that can cause panics
120
121 See: https://github.com/raspberrypi/firmware/issues/21
122 Thanks to cleverca22 for fix
123
124 dwc_otg: mask correct interrupts after transaction error recovery
125
126 The dwc_otg driver will unmask certain interrupts on a transaction
127 that previously halted in the error state in order to reset the
128 QTD error count. The various fine-grained interrupt handlers do not
129 consider that other interrupts besides themselves were unmasked.
130
131 By disabling the two other interrupts only ever enabled in DMA mode
132 for this purpose, we can avoid unnecessary function calls in the
133 IRQ handler. This will also prevent an unnecessary FIQ interrupt
134 from being generated if the FIQ is enabled.
135
136 dwc_otg: fiq: prevent FIQ thrash and incorrect state passing to IRQ
137
138 In the case of a transaction to a device that had previously aborted
139 due to an error, several interrupts are enabled to reset the error
140 count when a device responds. This has the side-effect of making the
141 FIQ thrash because the hardware will generate multiple instances of
142 a NAK on an IN bulk/interrupt endpoint and multiple instances of ACK
143 on an OUT bulk/interrupt endpoint. Make the FIQ mask and clear the
144 associated interrupts.
145
146 Additionally, on non-split transactions make sure that only unmasked
147 interrupts are cleared. This caused a hard-to-trigger but serious
148 race condition when you had the combination of an endpoint awaiting
149 error recovery and a transaction completed on an endpoint - due to
150 the sequencing and timing of interrupts generated by the dwc_otg core,
151 it was possible to confuse the IRQ handler.
152
153 Fix function tracing
154
155 dwc_otg: whitespace cleanup in dwc_otg_urb_enqueue
156
157 dwc_otg: prevent OOPSes during device disconnects
158
159 The dwc_otg_urb_enqueue function is thread-unsafe. In particular the
160 access of urb->hcpriv, usb_hcd_link_urb_to_ep, dwc_otg_urb->qtd and
161 friends does not occur within a critical section and so if a device
162 was unplugged during activity there was a high chance that the
163 usbcore hub_thread would try to disable the endpoint with partially-
164 formed entries in the URB queue. This would result in BUG() or null
165 pointer dereferences.
166
167 Fix so that access of urb->hcpriv, enqueuing to the hardware and
168 adding to usbcore endpoint URB lists is contained within a single
169 critical section.
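
A sketch of the single critical section, assuming a plain HCD spinlock; queue_to_hardware() is an illustrative stand-in for the driver's hardware enqueue:

    #include <linux/usb/hcd.h>
    #include <linux/spinlock.h>

    /* stands in for creating the QTD and handing it to a host channel */
    static int queue_to_hardware(void *dwc_urb) { return 0; }

    static int enqueue_urb_locked(struct usb_hcd *hcd, struct urb *urb,
                                  spinlock_t *hcd_lock, void *dwc_urb)
    {
        unsigned long flags;
        int ret;

        spin_lock_irqsave(hcd_lock, flags);

        ret = usb_hcd_link_urb_to_ep(hcd, urb);   /* add to the usbcore ep list */
        if (ret)
            goto out;

        urb->hcpriv = dwc_urb;                    /* only now visible to dequeue */
        ret = queue_to_hardware(dwc_urb);
        if (ret) {
            usb_hcd_unlink_urb_from_ep(hcd, urb); /* roll back on failure */
            urb->hcpriv = NULL;
        }
    out:
        spin_unlock_irqrestore(hcd_lock, flags);
        return ret;
    }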
170
171 dwc_otg: prevent BUG() in TT allocation if hub address is > 16
172
173 A fixed-size array is used to track TT allocation. This was
174 previously set to 16 which caused a crash because
175 dwc_otg_hcd_allocate_port would read past the end of the array.
176
177 This was hit if a hub was plugged in which enumerated as addr > 16,
178 due to previous device resets or unplugs.
179
180 Also add #ifdef FIQ_DEBUG around hcd->hub_port_alloc[], which grows
181 to a large size if 128 hub addresses are supported. This field is
182 used for debugging only, to track which frame an allocation happened in.
183
184 dwc_otg: make channel halts with unknown state less damaging
185
186 If the IRQ handler received a channel halt interrupt through the FIQ
187 with no other bits set, it would not release the host
188 channel and the URB would never complete.
189
190 Add catchall handling to treat as a transaction error and retry.
191
192 dwc_otg: fiq_split: use TTs with more granularity
193
194 This fixes certain issues with split transaction scheduling.
195
196 - Isochronous multi-packet OUT transactions now hog the TT until
197 they are completed - this prevents hubs aborting transactions
198 if they get a periodic start-split out-of-order
199 - Don't perform TT allocation on non-periodic endpoints - this
200 allows simultaneous use of the TT's bulk/control and periodic
201 transaction buffers
202
203 This commit will mainly affect USB audio playback.
204
205 dwc_otg: fix potential sleep while atomic during urb enqueue
206
207 Fixes a regression introduced with eb1b482a. Kmalloc called from
208 dwc_otg_hcd_qtd_add / dwc_otg_hcd_qtd_create did not always have
209 the GFP_ATOMIC flag set. Force this flag when inside the larger
210 critical section.
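
A minimal illustration of the constraint (not the driver's actual qtd code): while a spinlock is held, the allocation must not sleep, so GFP_ATOMIC is required:

    #include <linux/slab.h>
    #include <linux/spinlock.h>

    struct example_qtd { int dummy; };    /* stands in for the driver's QTD */

    static DEFINE_SPINLOCK(example_lock);

    static struct example_qtd *example_qtd_create_locked(void)
    {
        struct example_qtd *qtd;
        unsigned long flags;

        spin_lock_irqsave(&example_lock, flags);
        /* GFP_KERNEL here could sleep and trigger "scheduling while atomic" */
        qtd = kzalloc(sizeof(*qtd), GFP_ATOMIC);
        spin_unlock_irqrestore(&example_lock, flags);
        return qtd;
    }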
211
212 dwc_otg: make fiq_split_enable imply fiq_fix_enable
213
214 Failing to set up the FIQ correctly would result in
215 "IRQ 32: nobody cared" errors in dmesg.
216
217 dwc_otg: prevent crashes on host port disconnects
218
219 Fix several issues resulting in crashes or inconsistent state
220 if a Model A root port was disconnected.
221
222 - Clean up queue heads properly in kill_urbs_in_qh_list by
223 removing the empty QHs from the schedule lists
224 - Set the halt status properly to prevent IRQ handlers from
225 using freed memory
226 - Add fiq_split related cleanup for saved registers
227 - Make microframe scheduling reclaim host channels if
228 active during a disconnect
229 - Abort URBs with -ESHUTDOWN status response, informing
230 device drivers so they respond in a more correct fashion
231 and don't try to resubmit URBs
232 - Prevent IRQ handlers from attempting to handle channel
233 interrupts if the associated URB was dequeued (and the
234 driver state was cleared)
235
236 dwc_otg: prevent leaking URBs during enqueue
237
238 A dwc_otg_urb would get leaked if the HCD enqueue function
239 failed for any reason. Free the URB at the appropriate points.
240
241 dwc_otg: Enable NAK holdoff for control split transactions
242
243 Certain low-speed devices take a very long time to complete a
244 data or status stage of a control transaction, producing NAK
245 responses until they complete internal processing - the USB2.0
246 spec limit is up to 500 ms. This causes the same type of interrupt
247 storm as seen with USB-serial dongles prior to c8edb238.
248
249 In certain circumstances, usually while booting, this interrupt
250 storm could cause SD card timeouts.
251
252 dwc_otg: Fix for occasional lockup on boot when doing a USB reset
253
254 dwc_otg: Don't issue traffic to LS devices in FS mode
255
256 Issuing low-speed packets when the root port is in full-speed mode
257 causes the root port to stop responding. Explicitly fail when
258 enqueuing URBs to a LS endpoint on a FS bus.
259
260 Fix ARM architecture issue with local_irq_restore()
261
262 If local_fiq_enable() is called before a local_irq_restore(flags) where
263 the flags variable has the F bit set, the FIQ will be erroneously disabled.
264
265 Fixup arch_local_irq_restore to avoid trampling the F bit in CPSR.
266
267 Also fix some of the hacks previously implemented for previous dwc_otg
268 incarnations.
269
270 dwc_otg: fiq_fsm: Base commit for driver rewrite
271
272 This commit removes the previous FIQ fixes entirely and adds fiq_fsm.
273
274 This rewrite features much more complete support for split transactions
275 and takes into account several OTG hardware bugs. High-speed
276 isochronous transactions are also capable of being performed by fiq_fsm.
277
278 All driver options have been removed and replaced with:
279 - dwc_otg.fiq_enable (bool)
280 - dwc_otg.fiq_fsm_enable (bool)
281 - dwc_otg.fiq_fsm_mask (bitmask)
282 - dwc_otg.nak_holdoff (unsigned int)
283
284 Defaults are specified such that fiq_fsm behaves similarly to the
285 previously implemented FIQ fixes.
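
A sketch of how such options are typically declared; the default values shown here are illustrative, not necessarily the shipped ones:

    #include <linux/module.h>

    static bool fiq_enable = true;        /* use the FIQ at all */
    static bool fiq_fsm_enable = true;    /* run transactions from the FIQ state machine */
    static ushort fiq_fsm_mask = 0x07;    /* which transaction types the FIQ handles */
    static uint nak_holdoff = 8;          /* holdoff, in microframes, after a NAK */

    module_param(fiq_enable, bool, 0444);
    module_param(fiq_fsm_enable, bool, 0444);
    module_param(fiq_fsm_mask, ushort, 0444);
    module_param(nak_holdoff, uint, 0644);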
286
287 fiq_fsm: Push error recovery into the FIQ when fiq_fsm is used
288
289 If the transfer associated with a QTD failed due to a bus error, the HCD
290 would retry the transfer up to 3 times (implementing the USB2.0
291 three-strikes retry in software).
292
293 Due to the masking mechanism used by fiq_fsm, it is only possible to pass
294 a single interrupt through to the HCD per-transfer.
295
296 In this instance host channels would fall off the radar because the error
297 reset would function, but the subsequent channel halt would be lost.
298
299 Push the error count reset into the FIQ handler.
300
301 fiq_fsm: Implement timeout mechanism
302
303 For full-speed endpoints with a large packet size, interrupt latency
304 runs the risk of the FIQ starting a transaction too late in a full-speed
305 frame. If the device is still transmitting data when EOF2 for the
306 downstream frame occurs, the hub will disable the port. This change is
307 not reflected in the hub status endpoint and the device becomes
308 unresponsive.
309
310 Prevent high-bandwidth transactions from being started too late in a
311 frame. The mechanism is not guaranteed: a combination of bit stuffing
312 and hub latency may still result in a device overrunning.
313
314 fiq_fsm: fix bounce buffer utilisation for Isochronous OUT
315
316 Multi-packet isochronous OUT transactions were subject to a few boundary
317 bugs. Fix them.
318
319 Audio playback is now much more robust: however, an issue stands with
320 devices that have adaptive sinks - ALSA plays samples too fast.
321
322 dwc_otg: Return full-speed frame numbers in HS mode
323
324 The frame counter increments on every *microframe* in high-speed mode.
325 Most device drivers expect this number to be in full-speed frames - this
326 caused considerable confusion to e.g. snd_usb_audio which uses the
327 frame counter to estimate the number of samples played.
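
An illustrative conversion, assuming eight 125 us microframes per 1 ms full-speed frame:

    #include <linux/types.h>

    static u32 example_frame_number(u32 hw_frame_counter, bool high_speed)
    {
        /* drivers such as snd_usb_audio expect full-speed (1 ms) frames */
        return high_speed ? hw_frame_counter >> 3 : hw_frame_counter;
    }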
328
329 fiq_fsm: save PID on completion of interrupt OUT transfers
330
331 Also add edge case handling for interrupt transports.
332
333 Note that for periodic split IN, data toggles are unimplemented in the
334 OTG host hardware - it unconditionally accepts any PID.
335
336 fiq_fsm: add missing case for fiq_fsm_tt_in_use()
337
338 Certain combinations of bitrate and endpoint activity could
339 result in a periodic transaction erroneously getting started
340 while the previous Isochronous OUT was still active.
341
342 fiq_fsm: clear hcintmsk for aborted transactions
343
344 Prevents the FIQ from erroneously handling interrupts
345 on a timed out channel.
346
347 fiq_fsm: enable by default
348
349 fiq_fsm: fix dequeues for non-periodic split transactions
350
351 If a dequeue happened between the SSPLIT and CSPLIT phases of the
352 transaction, the HCD would never receive an interrupt.
353
354 fiq_fsm: Disable by default
355
356 fiq_fsm: Handle HC babble errors
357
358 The HCTSIZ transfer size field raises a babble interrupt if
359 the counter wraps. Handle the resulting interrupt in this case.
360
361 dwc_otg: fix interrupt registration for fiq_enable=0
362
363 Additionally, make every access to hcd->fiq_state conditional on
364 the module parameter.
365
366 fiq_fsm: Enable by default
367
368 dwc_otg: Fix various issues with root port and transaction errors
369
370 Process the host port interrupts correctly (and don't trample them).
371 Root port hotplug now functional again.
372
373 Fix a few thinkos with the transaction error passthrough for fiq_fsm.
374
375 fiq_fsm: Implement hack for Split Interrupt transactions
376
377 Hubs aren't too picky about which endpoint we send Control type split
378 transactions to. By treating Interrupt transfers as Control, it is
379 possible to use the non-periodic queue in the OTG core as well as the
380 non-periodic FIFOs in the hub itself. This massively reduces the
381 microframe exclusivity/contention that periodic split transactions
382 otherwise have to enforce.
383
384 It goes without saying that this is a fairly egregious USB specification
385 violation, but it works.
386
387 Original idea by Hans Petter Selasky @ FreeBSD.org.
388
389 dwc_otg: FIQ support on SMP. Set up FIQ stack and handler on Core 0 only.
390
391 dwc_otg: introduce fiq_fsm_spin(un|)lock()
392
393 SMP safety for the FIQ relies on register read-modify write cycles being
394 completed in the correct order. Several places in the DWC code modify
395 registers also touched by the FIQ. Protect these by a bare-bones lock
396 mechanism.
397
398 This also makes it possible to run the FIQ and IRQ handlers on different
399 cores.
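
A sketch of the idea using generic kernel atomics rather than the driver's hand-rolled implementation:

    #include <linux/atomic.h>
    #include <asm/barrier.h>
    #include <asm/processor.h>    /* cpu_relax() */

    struct fiq_fsm_lock_example {
        atomic_t held;
    };

    static inline void example_fiq_lock(struct fiq_fsm_lock_example *l)
    {
        /* spin until the 0 -> 1 transition succeeds */
        while (atomic_cmpxchg(&l->held, 0, 1) != 0)
            cpu_relax();
        smp_mb();    /* register accesses ordered after acquiring the lock */
    }

    static inline void example_fiq_unlock(struct fiq_fsm_lock_example *l)
    {
        smp_mb();    /* ...and completed before the lock is released */
        atomic_set(&l->held, 0);
    }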
400
401 fiq_fsm: fix build on bcm2708 and bcm2709 platforms
402
403 dwc_otg: put some barriers back where they should be for UP
404
405 bcm2709/dwc_otg: Setup FIQ on core 1 if >1 core active
406
407 dwc_otg: fixup read-modify-write in critical paths
408
409 Be more careful about read-modify-write on registers that the FIQ
410 also touches.
411
412 Guard fiq_fsm_spin_lock with fiq_enable check
413
414 fiq_fsm: Falling out of the state machine isn't fatal
415
416 This edge case can be hit if the port is disabled while the FIQ is
417 in the middle of a transaction. Make the effects less severe.
418
419 Also get rid of the useless return value.
420
421 squash: dwc_otg: Allow to build without SMP
422
423 usb: core: make overcurrent messages more prominent
424
425 Hub overcurrent messages are more serious than "debug". Increase loglevel.
426
427 usb: dwc_otg: Don't use dma_to_virt()
428
429 Commit 6ce0d20 changes dma_to_virt() which breaks this driver.
430 Open code the old dma_to_virt() implementation to work around this.
431
432 Limit the use of __bus_to_virt() to cases where transfer_buffer_length
433 is set and transfer_buffer is not set. This is done to increase the
434 chance that this driver will also work on ARCH_BCM2835.
435
436 transfer_buffer should not be NULL if the length is set, but the
437 comment in the code indicates that there are situations where this
438 might happen. drivers/usb/isp1760/isp1760-hcd.c also has a similar
439 comment pointing to a possible: 'usb storage / SCSI bug'.
440
441 Signed-off-by: Noralf Trønnes <noralf@tronnes.org>
442
443 dwc_otg: Fix crash when fiq_enable=0
444
445 dwc_otg: fiq_fsm: Make high-speed isochronous strided transfers work properly
446
447 Certain low-bandwidth high-speed USB devices (specialist audio devices,
448 compressed-frame webcams) have packet intervals > 1 microframe.
449
450 Stride these transfers in the FIQ by using the start-of-frame interrupt
451 to restart the channel at the right time.
452
453 dwc_otg: Force host mode to fix incorrect compute module boards
454
455 dwc_otg: Add ARCH_BCM2835 support
456
457 Signed-off-by: Noralf Trønnes <noralf@tronnes.org>
458
459 dwc_otg: Simplify FIQ irq number code
460
461 Dropping ATAGS means we can simplify the FIQ irq number code.
462 Also add error checking on the returned irq number.
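
A sketch of the error-checked lookup, assuming the interrupt comes from a platform device resource:

    #include <linux/platform_device.h>

    static int example_get_dwc_irq(struct platform_device *pdev)
    {
        int irq = platform_get_irq(pdev, 0);    /* Device Tree only, no ATAGS fallback */

        if (irq < 0)
            dev_err(&pdev->dev, "missing IRQ resource (%d)\n", irq);

        return irq;    /* negative errno on failure */
    }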
463
464 Signed-off-by: Noralf Trønnes <noralf@tronnes.org>
465
466 dwc_otg: Remove duplicate gadget probe/unregister function
467
468 dwc_otg: Properly set the HFIR
469
470 Douglas Anderson reported:
471
472 According to the most up to date version of the dwc2 databook, the FRINT
473 field of the HFIR register should be programmed to:
474 * 125 us * (PHY clock freq for HS) - 1
475 * 1000 us * (PHY clock freq for FS/LS) - 1
476
477 This is opposed to older versions of the doc that claimed it should be:
478 * 125 us * (PHY clock freq for HS)
479 * 1000 us * (PHY clock freq for FS/LS)
480
481 and reported lower timing jitter on a USB analyser
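
A sketch of the corrected calculation, assuming the PHY clock is supplied in Hz:

    #include <linux/types.h>

    static u32 example_hfir_frint(unsigned long phy_clk_hz, bool high_speed)
    {
        /* 125 us frame interval for HS, 1000 us for FS/LS, minus one
         * per the newer databook */
        return high_speed ? (phy_clk_hz / 8000) - 1 : (phy_clk_hz / 1000) - 1;
    }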
482
483 dwc_otg: trim xfer length when buffer larger than allocated size is received
484
485 dwc_otg: Don't free qh align buffers in atomic context
486
487 dwc_otg: Enable the hack for Split Interrupt transactions by default
488
489 dwc_otg.fiq_fsm_mask=0xF has long been a suggestion for users with audio stutters or other USB bandwidth issues.
490 So far we are aware of many success stories but no failure caused by this setting.
491 Make it the default to learn more.
492
493 See: https://www.raspberrypi.org/forums/viewtopic.php?f=28&t=70437
494
495 Signed-off-by: popcornmix <popcornmix@gmail.com>
496
497 dwc_otg: Use kzalloc when suitable
498
499 dwc_otg: Pass struct device to dma_alloc*()
500
501 This makes it possible to get the bus address from Device Tree.
502
503 Signed-off-by: Noralf Trønnes <noralf@tronnes.org>
504
505 dwc_otg: fix summing of urb->actual_length for isochronous transfers
506
507 The kernel does not copy the input data of isochronous transfers to
508 userspace if actual_length is set only on the individual ISO frame
509 descriptors and not summed into urb->actual_length. Fixes raspberrypi/linux#903
510
511 fiq_fsm: Use correct states when starting isoc OUT transfers
512
513 In fiq_fsm_start_next_periodic() if an isochronous OUT transfer
514 was selected, no regard was given as to whether this was a single-packet
515 transfer or a multi-packet staged transfer.
516
517 For single-packet transfers, this had the effect of repeatedly sending
518 OUT packets with bogus data and lengths.
519
520 Eventually if the channel was repeatedly enabled enough times, this
521 would lock up the OTG core and no further bus transfers would happen.
522
523 Set the FSM state up properly if we select a single-packet transfer.
524
525 Fixes https://github.com/raspberrypi/linux/issues/1842
526
527 dwc_otg: make nak_holdoff work as intended with empty queues
528
529 If URBs reading from non-periodic split endpoints were dequeued and
530 the last transfer from the endpoint was a NAK handshake, the resulting
531 qh->nak_frame value was stale which would result in unnecessarily long
532 polling intervals for the first subsequent transfer with a fresh URB.
533
534 Fixup qh->nak_frame in dwc_otg_hcd_urb_dequeue and also guard against
535 a case where a single URB is submitted to the endpoint, a NAK was
536 received on the transfer immediately prior to receiving data and the
537 device subsequently resubmits another URB past the qh->nak_frame interval.
538
539 Fixes https://github.com/raspberrypi/linux/issues/1709
540
541 dwc_otg: fix split transaction data toggle handling around dequeues
542
543 See https://github.com/raspberrypi/linux/issues/1709
544
545 Fix several issues regarding endpoint state when URBs are dequeued
546 - If the HCD is disconnected, flush FIQ-enabled channels properly
547 - Save the data toggle state for bulk endpoints if the last transfer
548 from an endpoint where URBs were dequeued returned a data packet
549 - Reset hc->start_pkt_count properly in assign_and_init_hc()
550
551 dwc_otg: fix several potential crash sources
552
553 On root port disconnect events, the host driver state is cleared and
554 in-progress host channels are forcibly stopped. This doesn't play
555 well with the FIQ running in the background, so:
556 - Guard the disconnect callback with both the host spinlock and FIQ
557 spinlock
558 - Move qtd dereference in dwc_otg_handle_hc_fsm() after the early-out
559 so we don't dereference a qtd that has gone away
560 - Turn catch-all BUG()s in dwc_otg_handle_hc_fsm() into warnings.
561
562 dwc_otg: delete hcd->channel_lock
563
564 The lock serves no purpose as it is only held while the HCD spinlock
565 is already being held.
566
567 dwc_otg: remove unnecessary dma-mode channel halts on disconnect interrupt
568
569 Host channels are already halted in kill_urbs_in_qh_list() with the
570 subsequent interrupt processing behaving as if the URB was dequeued
571 via HCD callback.
572
573 There's no need to clobber the host channel registers a second time
574 as this exposes races between the driver and host channel resulting
575 in hcd->free_hc_list becoming corrupted.
576
577 dwcotg: Allow to build without FIQ on ARM64
578
579 Signed-off-by: popcornmix <popcornmix@gmail.com>
580
581 dwc_otg: make periodic scheduling behave properly for FS buses
582
583 If the root port is in full-speed mode, transfer times at 12mbit/s
584 would be calculated but matched against high-speed quotas.
585
586 Reinitialise hcd->frame_usecs[i] on each port enable event so that
587 full-speed bandwidth can be tracked sensibly.
588
589 Also, don't bother using the FIQ for transfers when in full-speed
590 mode - at the slower bus speed, interrupt frequency is reduced by
591 an order of magnitude.
592
593 Related issue: https://github.com/raspberrypi/linux/issues/2020
594
595 dwc_otg: fiq_fsm: Make isochronous compatibility checks work properly
596
597 Get rid of the spammy printk and local pointer mangling.
598 Also, there is a nominal benefit for using fiq_fsm for isochronous
599 transfers in FS mode (~1.1k IRQs per second vs 2.1k IRQs per second)
600 so remove the root port speed check.
601
602 dwc_otg: add module parameter int_ep_interval_min
603
604 Add a module parameter (defaulting to ignored) that clamps the polling rate
605 of high-speed Interrupt endpoints to a minimum microframe interval.
606
607 The parameter is modifiable at runtime as it is used when activating new
608 endpoints (such as on device connect).
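
A sketch of how such a clamp could look; the hook point and names are illustrative:

    #include <linux/module.h>
    #include <linux/usb.h>

    static uint int_ep_interval_min;               /* 0 = ignored (default) */
    module_param(int_ep_interval_min, uint, 0644); /* adjustable at runtime */

    static unsigned int example_clamp_interval(const struct usb_endpoint_descriptor *desc,
                                               enum usb_device_speed speed)
    {
        unsigned int interval = desc->bInterval;

        if (int_ep_interval_min && speed == USB_SPEED_HIGH &&
            usb_endpoint_xfer_int(desc) && interval < int_ep_interval_min)
            interval = int_ep_interval_min;        /* microframes */

        return interval;
    }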
609
610 dwc_otg: fiq_fsm: Add non-periodic TT exclusivity constraints
611
612 Certain hub types do not discriminate between pipe direction (IN or OUT)
613 when considering non-periodic transfers. Therefore these hubs get confused
614 if multiple transfers are issued in different directions with the same
615 device address and endpoint number.
616
617 Constrain queuing non-periodic split transactions so they are performed
618 serially in such cases.
619
620 Related: https://github.com/raspberrypi/linux/issues/2024
621
622 dwc_otg: Fixup change to DRIVER_ATTR interface
623
624 dwc_otg: Fix compilation warnings
625
626 Signed-off-by: Phil Elwell <phil@raspberrypi.org>
627
628 USB_DWCOTG: Disable building dwc_otg as a module (#2265)
629
630 When dwc_otg is built as a module, build will fail with the following
631 error:
632
633 ERROR: "DWC_TASK_HI_SCHEDULE" [drivers/usb/host/dwc_otg/dwc_otg.ko] undefined!
634 scripts/Makefile.modpost:91: recipe for target '__modpost' failed
635 make[1]: *** [__modpost] Error 1
636 Makefile:1199: recipe for target 'modules' failed
637 make: *** [modules] Error 2
638
639 Even if the error is solved by including the missing
640 DWC_TASK_HI_SCHEDULE function, the kernel will panic when loading
641 dwc_otg.
642
643 As a workaround, simply prevent the user from building dwc_otg as a
644 module, as the current kernel does not support it.
645
646 See: https://github.com/raspberrypi/linux/issues/2258
647
648 Signed-off-by: Malik Olivier Boussejra <malik@boussejra.com>
649
650 dwc_otg: New timer API
651
652 dwc_otg: Fix removed ACCESS_ONCE->READ_ONCE
653
654 dwc_otg: don't unconditionally force host mode in dwc_otg_cil_init()
655
656 Add the ability to disable force_host_mode for those that want to use
657 dwc_otg in both device and host modes.
658
659 dwc_otg: Fix a regression when dequeueing isochronous transfers
660
661 In 282bed95 (dwc_otg: make nak_holdoff work as intended with empty queues)
662 the dequeue mechanism was changed to leave FIQ-enabled transfers to run
663 to completion - to avoid leaving hub TT buffers with stale packets lying
664 around.
665
666 This broke FIQ-accelerated isochronous transfers, as this then meant that
667 dozens of transfers were performed after the dequeue function returned.
668
669 Restore the state machine fence for isochronous transfers.
670
671 fiq_fsm: rewind DMA pointer for OUT transactions that fail (#2288)
672
673 See: https://github.com/raspberrypi/linux/issues/2140
674
675 dwc_otg: add smp_mb() to prevent driver state corruption on boot
676
677 Occasional crashes have been seen where the FIQ code dereferences
678 invalid/random pointers immediately after being set up, leading to
679 panic on boot.
680
681 The crash occurs as the FIQ code races against hcd_init_fiq() and
682 the hcd_init_fiq() code races against the outstanding memory stores
683 from dwc_otg_hcd_init(). Use explicit barriers after touching
684 driver state.
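
A sketch of the ordering fix; enable_fiq() and the structures are illustrative stand-ins:

    #include <asm/barrier.h>

    struct example_hcd { void *fiq_state; };

    static void example_publish_fiq_state(struct example_hcd *hcd, void *state)
    {
        hcd->fiq_state = state;    /* plus the other initialisation stores */
        smp_mb();                  /* make them visible before the FIQ is armed */
        /* enable_fiq(irq);           the FIQ may now run, possibly on another core */
    }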
685
686 usb: dwc_otg: fix memory corruption in dwc_otg driver
687
688 [Upstream commit 51b1b6491752ac066ee8d32cc66042fcc955fef6]
689
690 The move from the staging tree to the main tree exposed a
691 longstanding memory corruption bug in the dwc2 driver. The
692 reordering of the driver initialization caused the dwc2 driver
693 to corrupt the initialization data of the sdhci driver on the
694 Raspberry Pi platform, which made the bug show up.
695
696 The error is in calling to_usb_device(hsotg->dev), since ->dev
697 is not a member of struct usb_device. The easiest fix is to
698 just remove the offending code, since it is not really needed.
699
700 Thanks to Stephen Warren for tracking down the cause of this.
701
702 Reported-by: Andre Heider <a.heider@gmail.com>
703 Tested-by: Stephen Warren <swarren@wwwdotorg.org>
704 Signed-off-by: Paul Zimmerman <paulz@synopsys.com>
705 Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
706 [lukas: port from upstream dwc2 to out-of-tree dwc_otg driver]
707 Signed-off-by: Lukas Wunner <lukas@wunner.de>
708
709 usb: dwc_otg: Fix unreachable switch statement warning
710
711 This warning appears with GCC 7.3.0 from toolchains.bootlin.com:
712
713 ../drivers/usb/host/dwc_otg/dwc_otg_fiq_fsm.c: In function ‘fiq_fsm_update_hs_isoc’:
714 ../drivers/usb/host/dwc_otg/dwc_otg_fiq_fsm.c:595:61: warning: statement will never be executed [-Wswitch-unreachable]
715 st->hctsiz_copy.b.xfersize = nrpackets * st->hcchar_copy.b.mps;
716 ~~~~~~~~~~~~~~~~~^~~~
717
718 Signed-off-by: Nathan Chancellor <natechancellor@gmail.com>
719
720 dwc_otg: fiq_fsm: fix incorrect DMA register offset calculation
721
722 Rationalise the offset and update all call sites.
723
724 Fixes https://github.com/raspberrypi/linux/issues/2408
725
726 dwc_otg: fix bug with port_addr assignment for single-TT hubs
727
728 See https://github.com/raspberrypi/linux/issues/2734
729
730 The "Hub Port" field in the split transaction packet was always set
731 to 1 for single-TT hubs. The majority of single-TT hub products
732 apparently ignore this field and broadcast to all downstream enabled
733 ports, which masked the issue. A subset of hub devices apparently
734 need the port number to be exact or split transactions will fail.
735
736 usb: dwc_otg: Clean up build warnings on 64bit kernels
737
738 No functional changes. Almost all are changes to logging lines.
739
740 Signed-off-by: Dave Stevenson <dave.stevenson@raspberrypi.org>
741
742 usb: dwc_otg: Use dma allocation for mphi dummy_send buffer
743
744 The FIQ driver used a kzalloc'ed buffer for dummy_send,
745 passing a kernel virtual address to the hardware block.
746 The buffer is only ever used for a dummy read, so it
747 should be harmless, but there is the chance that it will
748 cause exceptions.
749
750 Use a dma allocation so that we have a genuine bus address,
751 and read from that.
752 Free the allocation when done for good measure.
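
A sketch of the allocation change, with an illustrative size and device pointer:

    #include <linux/dma-mapping.h>
    #include <linux/gfp.h>

    static void example_mphi_dummy_send(struct device *dev)
    {
        dma_addr_t bus_addr;
        void *buf = dma_alloc_coherent(dev, 16, &bus_addr, GFP_KERNEL);

        if (!buf)
            return;

        /* ... program the MPHI block with bus_addr and perform the dummy read ... */

        dma_free_coherent(dev, 16, buf, bus_addr);    /* free once done */
    }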
753
754 Signed-off-by: Dave Stevenson <dave.stevenson@raspberrypi.org>
755
756 dwc_otg: only do_split when we actually need to do a split
757
758 The previous test would fail if the root port was in full-speed mode
759 and there was a hub between the FS device and the root port. While
760 the transfer worked, the schedule mangling performed for high-speed
761 split transfers would break, leading to an 8 ms polling interval.
762
763 dwc_otg: fix locking around dequeueing and killing URBs
764
765 kill_urbs_in_qh_list() is practically only ever called with the fiq lock
766 already held, so don't spinlock twice in the case where we need to cancel
767 an isochronous transfer.
768
769 Also fix up a case where the global interrupt register could be read with
770 the fiq lock not held.
771
772 Fixes the deadlock seen in https://github.com/raspberrypi/linux/issues/2907
773
774 ARM64/DWC_OTG: Port dwc_otg driver to ARM64
775
776 On ARM64, the FIQ mechanism used by this driver is not currently
777 implemented. As a workaround, a regular IRQ is used instead
778 of the FIQ.
779
780 In a separate change, the IRQ-CPU mapping is round-robined
781 on ARM64 to increase concurrency and allow multiple interrupts
782 to be serviced at a time. This reduces the need for the FIQ.
783
784 Tests Run:
785
786 This mechanism is most likely to break when multiple USB devices
787 are attached at the same time. So the system was tested under
788 stress.
789
790 Devices:
791
792 1. USB speakers playing back FLAC audio through VLC
793 at 96 kHz (higher than typical, but supported by my speakers).
794
795 2. sftp transferring large files through the built-in Ethernet
796 connection, which is connected through USB.
797
798 3. Keyboard and mouse attached and being used.
799
800 Although I do occasionally hear some glitches, the music seems to
801 play quite well.
802
803 Signed-off-by: Michael Zoran <mzoran@crowfest.net>
804
805 usb: dwc_otg: Clean up interrupt claiming code
806
807 The FIQ/IRQ interrupt number identification code is scattered through
808 the dwc_otg driver. Rationalise it, simplifying the code and solving
809 an existing issue.
810
811 See: https://github.com/raspberrypi/linux/issues/2612
812
813 Signed-off-by: Phil Elwell <phil@raspberrypi.org>
814
815 dwc_otg: Choose appropriate IRQ handover strategy
816
817 2711 has no MPHI peripheral, but the ARM Control block can fake
818 interrupts. Use the size of the DTB "mphi" reg block to determine
819 which is required.
820
821 Signed-off-by: Phil Elwell <phil@raspberrypi.org>
822
823 usb: host: dwc_otg: fix compiling in separate directory
824
825 The dwc_otg Makefile does not respect the O=path argument correctly:
826 include paths in CFLAGS are given relatively to object path, not source
827 path. Compiling in a separate directory yields #include errors.
828
829 Signed-off-by: Marek Behún <marek.behun@nic.cz>
830
831 dwc_otg: use align_buf for small IN control transfers (#3150)
832
833 The hardware will do a 4-byte write to memory on any IN packet received
834 that is between 1 and 3 bytes long. This tramples memory in the uvcvideo
835 driver, as it uses a sequence of 1- and 2-byte control transfers to
836 query the min/max/range/step of each individual camera control and
837 gives us buffers that are offsets into a struct.
838
839 Catch small control transfers in the data phase and use the align_buf
840 to bounce the correct number of bytes into the URB's buffer.
841
842 In general, short packets on non-control endpoints should be OK as URBs
843 should have enough buffer space for a wMaxPacket size transfer.
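
A sketch of the bounce copy, with illustrative names:

    #include <linux/kernel.h>
    #include <linux/string.h>
    #include <linux/usb.h>

    /* The controller always writes at least 4 bytes, so a short control IN
     * is received into a private aligned buffer and only the real byte
     * count is copied back into the URB's buffer. */
    static void example_copy_small_ctrl_in(struct urb *urb, const void *align_buf,
                                           unsigned int actual_len)
    {
        if (usb_pipecontrol(urb->pipe) && usb_pipein(urb->pipe) &&
            urb->transfer_buffer_length < 4)
            memcpy(urb->transfer_buffer, align_buf,
                   min_t(unsigned int, actual_len, urb->transfer_buffer_length));
    }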
844
845 See: https://github.com/raspberrypi/linux/issues/3148
846
847 Signed-off-by: Jonathan Bell <jonathan@raspberrypi.org>
848 ---
849 arch/arm/include/asm/irqflags.h | 16 +-
850 arch/arm/kernel/fiqasm.S | 4 +
851 drivers/usb/Makefile | 1 +
852 drivers/usb/core/generic.c | 1 +
853 drivers/usb/core/hub.c | 2 +-
854 drivers/usb/core/message.c | 79 +
855 drivers/usb/core/otg_whitelist.h | 114 +-
856 drivers/usb/gadget/file_storage.c | 3676 +++++++++
857 drivers/usb/host/Kconfig | 10 +
858 drivers/usb/host/Makefile | 1 +
859 drivers/usb/host/dwc_common_port/Makefile | 58 +
860 .../usb/host/dwc_common_port/Makefile.fbsd | 17 +
861 .../usb/host/dwc_common_port/Makefile.linux | 49 +
862 drivers/usb/host/dwc_common_port/changes.txt | 174 +
863 .../usb/host/dwc_common_port/doc/doxygen.cfg | 270 +
864 drivers/usb/host/dwc_common_port/dwc_cc.c | 532 ++
865 drivers/usb/host/dwc_common_port/dwc_cc.h | 224 +
866 .../host/dwc_common_port/dwc_common_fbsd.c | 1308 +++
867 .../host/dwc_common_port/dwc_common_linux.c | 1409 ++++
868 .../host/dwc_common_port/dwc_common_nbsd.c | 1275 +++
869 drivers/usb/host/dwc_common_port/dwc_crypto.c | 308 +
870 drivers/usb/host/dwc_common_port/dwc_crypto.h | 111 +
871 drivers/usb/host/dwc_common_port/dwc_dh.c | 291 +
872 drivers/usb/host/dwc_common_port/dwc_dh.h | 106 +
873 drivers/usb/host/dwc_common_port/dwc_list.h | 594 ++
874 drivers/usb/host/dwc_common_port/dwc_mem.c | 245 +
875 drivers/usb/host/dwc_common_port/dwc_modpow.c | 636 ++
876 drivers/usb/host/dwc_common_port/dwc_modpow.h | 34 +
877 .../usb/host/dwc_common_port/dwc_notifier.c | 319 +
878 .../usb/host/dwc_common_port/dwc_notifier.h | 122 +
879 drivers/usb/host/dwc_common_port/dwc_os.h | 1276 +++
880 drivers/usb/host/dwc_common_port/usb.h | 946 +++
881 drivers/usb/host/dwc_otg/Makefile | 85 +
882 drivers/usb/host/dwc_otg/doc/doxygen.cfg | 224 +
883 drivers/usb/host/dwc_otg/dummy_audio.c | 1574 ++++
884 drivers/usb/host/dwc_otg/dwc_cfi_common.h | 142 +
885 drivers/usb/host/dwc_otg/dwc_otg_adp.c | 854 ++
886 drivers/usb/host/dwc_otg/dwc_otg_adp.h | 80 +
887 drivers/usb/host/dwc_otg/dwc_otg_attr.c | 1212 +++
888 drivers/usb/host/dwc_otg/dwc_otg_attr.h | 89 +
889 drivers/usb/host/dwc_otg/dwc_otg_cfi.c | 1876 +++++
890 drivers/usb/host/dwc_otg/dwc_otg_cfi.h | 320 +
891 drivers/usb/host/dwc_otg/dwc_otg_cil.c | 7146 +++++++++++++++++
892 drivers/usb/host/dwc_otg/dwc_otg_cil.h | 1464 ++++
893 drivers/usb/host/dwc_otg/dwc_otg_cil_intr.c | 1601 ++++
894 drivers/usb/host/dwc_otg/dwc_otg_core_if.h | 705 ++
895 drivers/usb/host/dwc_otg/dwc_otg_dbg.h | 117 +
896 drivers/usb/host/dwc_otg/dwc_otg_driver.c | 1772 ++++
897 drivers/usb/host/dwc_otg/dwc_otg_driver.h | 86 +
898 drivers/usb/host/dwc_otg/dwc_otg_fiq_fsm.c | 1425 ++++
899 drivers/usb/host/dwc_otg/dwc_otg_fiq_fsm.h | 399 +
900 drivers/usb/host/dwc_otg/dwc_otg_fiq_stub.S | 80 +
901 drivers/usb/host/dwc_otg/dwc_otg_hcd.c | 4327 ++++++++++
902 drivers/usb/host/dwc_otg/dwc_otg_hcd.h | 870 ++
903 drivers/usb/host/dwc_otg/dwc_otg_hcd_ddma.c | 1134 +++
904 drivers/usb/host/dwc_otg/dwc_otg_hcd_if.h | 421 +
905 drivers/usb/host/dwc_otg/dwc_otg_hcd_intr.c | 2757 +++++++
906 drivers/usb/host/dwc_otg/dwc_otg_hcd_linux.c | 1083 +++
907 drivers/usb/host/dwc_otg/dwc_otg_hcd_queue.c | 970 +++
908 drivers/usb/host/dwc_otg/dwc_otg_os_dep.h | 199 +
909 drivers/usb/host/dwc_otg/dwc_otg_pcd.c | 2725 +++++++
910 drivers/usb/host/dwc_otg/dwc_otg_pcd.h | 273 +
911 drivers/usb/host/dwc_otg/dwc_otg_pcd_if.h | 361 +
912 drivers/usb/host/dwc_otg/dwc_otg_pcd_intr.c | 5148 ++++++++++++
913 drivers/usb/host/dwc_otg/dwc_otg_pcd_linux.c | 1262 +++
914 drivers/usb/host/dwc_otg/dwc_otg_regs.h | 2550 ++++++
915 drivers/usb/host/dwc_otg/test/Makefile | 16 +
916 drivers/usb/host/dwc_otg/test/dwc_otg_test.pm | 337 +
917 .../usb/host/dwc_otg/test/test_mod_param.pl | 133 +
918 drivers/usb/host/dwc_otg/test/test_sysfs.pl | 193 +
919 70 files changed, 60202 insertions(+), 16 deletions(-)
920 create mode 100644 drivers/usb/gadget/file_storage.c
921 create mode 100644 drivers/usb/host/dwc_common_port/Makefile
922 create mode 100644 drivers/usb/host/dwc_common_port/Makefile.fbsd
923 create mode 100644 drivers/usb/host/dwc_common_port/Makefile.linux
924 create mode 100644 drivers/usb/host/dwc_common_port/changes.txt
925 create mode 100644 drivers/usb/host/dwc_common_port/doc/doxygen.cfg
926 create mode 100644 drivers/usb/host/dwc_common_port/dwc_cc.c
927 create mode 100644 drivers/usb/host/dwc_common_port/dwc_cc.h
928 create mode 100644 drivers/usb/host/dwc_common_port/dwc_common_fbsd.c
929 create mode 100644 drivers/usb/host/dwc_common_port/dwc_common_linux.c
930 create mode 100644 drivers/usb/host/dwc_common_port/dwc_common_nbsd.c
931 create mode 100644 drivers/usb/host/dwc_common_port/dwc_crypto.c
932 create mode 100644 drivers/usb/host/dwc_common_port/dwc_crypto.h
933 create mode 100644 drivers/usb/host/dwc_common_port/dwc_dh.c
934 create mode 100644 drivers/usb/host/dwc_common_port/dwc_dh.h
935 create mode 100644 drivers/usb/host/dwc_common_port/dwc_list.h
936 create mode 100644 drivers/usb/host/dwc_common_port/dwc_mem.c
937 create mode 100644 drivers/usb/host/dwc_common_port/dwc_modpow.c
938 create mode 100644 drivers/usb/host/dwc_common_port/dwc_modpow.h
939 create mode 100644 drivers/usb/host/dwc_common_port/dwc_notifier.c
940 create mode 100644 drivers/usb/host/dwc_common_port/dwc_notifier.h
941 create mode 100644 drivers/usb/host/dwc_common_port/dwc_os.h
942 create mode 100644 drivers/usb/host/dwc_common_port/usb.h
943 create mode 100644 drivers/usb/host/dwc_otg/Makefile
944 create mode 100644 drivers/usb/host/dwc_otg/doc/doxygen.cfg
945 create mode 100644 drivers/usb/host/dwc_otg/dummy_audio.c
946 create mode 100644 drivers/usb/host/dwc_otg/dwc_cfi_common.h
947 create mode 100644 drivers/usb/host/dwc_otg/dwc_otg_adp.c
948 create mode 100644 drivers/usb/host/dwc_otg/dwc_otg_adp.h
949 create mode 100644 drivers/usb/host/dwc_otg/dwc_otg_attr.c
950 create mode 100644 drivers/usb/host/dwc_otg/dwc_otg_attr.h
951 create mode 100644 drivers/usb/host/dwc_otg/dwc_otg_cfi.c
952 create mode 100644 drivers/usb/host/dwc_otg/dwc_otg_cfi.h
953 create mode 100644 drivers/usb/host/dwc_otg/dwc_otg_cil.c
954 create mode 100644 drivers/usb/host/dwc_otg/dwc_otg_cil.h
955 create mode 100644 drivers/usb/host/dwc_otg/dwc_otg_cil_intr.c
956 create mode 100644 drivers/usb/host/dwc_otg/dwc_otg_core_if.h
957 create mode 100644 drivers/usb/host/dwc_otg/dwc_otg_dbg.h
958 create mode 100644 drivers/usb/host/dwc_otg/dwc_otg_driver.c
959 create mode 100644 drivers/usb/host/dwc_otg/dwc_otg_driver.h
960 create mode 100644 drivers/usb/host/dwc_otg/dwc_otg_fiq_fsm.c
961 create mode 100644 drivers/usb/host/dwc_otg/dwc_otg_fiq_fsm.h
962 create mode 100644 drivers/usb/host/dwc_otg/dwc_otg_fiq_stub.S
963 create mode 100644 drivers/usb/host/dwc_otg/dwc_otg_hcd.c
964 create mode 100644 drivers/usb/host/dwc_otg/dwc_otg_hcd.h
965 create mode 100644 drivers/usb/host/dwc_otg/dwc_otg_hcd_ddma.c
966 create mode 100644 drivers/usb/host/dwc_otg/dwc_otg_hcd_if.h
967 create mode 100644 drivers/usb/host/dwc_otg/dwc_otg_hcd_intr.c
968 create mode 100644 drivers/usb/host/dwc_otg/dwc_otg_hcd_linux.c
969 create mode 100644 drivers/usb/host/dwc_otg/dwc_otg_hcd_queue.c
970 create mode 100644 drivers/usb/host/dwc_otg/dwc_otg_os_dep.h
971 create mode 100644 drivers/usb/host/dwc_otg/dwc_otg_pcd.c
972 create mode 100644 drivers/usb/host/dwc_otg/dwc_otg_pcd.h
973 create mode 100644 drivers/usb/host/dwc_otg/dwc_otg_pcd_if.h
974 create mode 100644 drivers/usb/host/dwc_otg/dwc_otg_pcd_intr.c
975 create mode 100644 drivers/usb/host/dwc_otg/dwc_otg_pcd_linux.c
976 create mode 100644 drivers/usb/host/dwc_otg/dwc_otg_regs.h
977 create mode 100644 drivers/usb/host/dwc_otg/test/Makefile
978 create mode 100644 drivers/usb/host/dwc_otg/test/dwc_otg_test.pm
979 create mode 100644 drivers/usb/host/dwc_otg/test/test_mod_param.pl
980 create mode 100644 drivers/usb/host/dwc_otg/test/test_sysfs.pl
981
982 --- a/arch/arm/include/asm/irqflags.h
983 +++ b/arch/arm/include/asm/irqflags.h
984 @@ -163,13 +163,23 @@ static inline unsigned long arch_local_s
985 }
986
987 /*
988 - * restore saved IRQ & FIQ state
989 + * restore saved IRQ state
990 */
991 #define arch_local_irq_restore arch_local_irq_restore
992 static inline void arch_local_irq_restore(unsigned long flags)
993 {
994 - asm volatile(
995 - " msr " IRQMASK_REG_NAME_W ", %0 @ local_irq_restore"
996 + unsigned long temp = 0;
997 + flags &= ~(1 << 6);
998 + asm volatile (
999 + " mrs %0, cpsr"
1000 + : "=r" (temp)
1001 + :
1002 + : "memory", "cc");
1003 + /* Preserve FIQ bit */
1004 + temp &= (1 << 6);
1005 + flags = flags | temp;
1006 + asm volatile (
1007 + " msr cpsr_c, %0 @ local_irq_restore"
1008 :
1009 : "r" (flags)
1010 : "memory", "cc");
1011 --- a/arch/arm/kernel/fiqasm.S
1012 +++ b/arch/arm/kernel/fiqasm.S
1013 @@ -47,3 +47,7 @@ ENTRY(__get_fiq_regs)
1014 mov r0, r0 @ avoid hazard prior to ARMv4
1015 ret lr
1016 ENDPROC(__get_fiq_regs)
1017 +
1018 +ENTRY(__FIQ_Branch)
1019 + mov pc, r8
1020 +ENDPROC(__FIQ_Branch)
1021 --- a/drivers/usb/Makefile
1022 +++ b/drivers/usb/Makefile
1023 @@ -9,6 +9,7 @@ obj-$(CONFIG_USB_COMMON) += common/
1024 obj-$(CONFIG_USB) += core/
1025 obj-$(CONFIG_USB_SUPPORT) += phy/
1026
1027 +obj-$(CONFIG_USB_DWCOTG) += host/
1028 obj-$(CONFIG_USB_DWC3) += dwc3/
1029 obj-$(CONFIG_USB_DWC2) += dwc2/
1030 obj-$(CONFIG_USB_ISP1760) += isp1760/
1031 --- a/drivers/usb/core/generic.c
1032 +++ b/drivers/usb/core/generic.c
1033 @@ -190,6 +190,7 @@ int usb_choose_configuration(struct usb_
1034 dev_warn(&udev->dev,
1035 "no configuration chosen from %d choice%s\n",
1036 num_configs, plural(num_configs));
1037 + dev_warn(&udev->dev, "No support over %dmA\n", udev->bus_mA);
1038 }
1039 return i;
1040 }
1041 --- a/drivers/usb/core/hub.c
1042 +++ b/drivers/usb/core/hub.c
1043 @@ -5296,7 +5296,7 @@ static void port_event(struct usb_hub *h
1044 port_dev->over_current_count++;
1045 port_over_current_notify(port_dev);
1046
1047 - dev_dbg(&port_dev->dev, "over-current change #%u\n",
1048 + dev_notice(&port_dev->dev, "over-current change #%u\n",
1049 port_dev->over_current_count);
1050 usb_clear_port_feature(hdev, port1,
1051 USB_PORT_FEAT_C_OVER_CURRENT);
1052 --- a/drivers/usb/core/message.c
1053 +++ b/drivers/usb/core/message.c
1054 @@ -1993,6 +1993,85 @@ free_interfaces:
1055 if (cp->string == NULL &&
1056 !(dev->quirks & USB_QUIRK_CONFIG_INTF_STRINGS))
1057 cp->string = usb_cache_string(dev, cp->desc.iConfiguration);
1058 +/* Uncomment this define to enable the HS Electrical Test support */
1059 +#define DWC_HS_ELECT_TST 1
1060 +#ifdef DWC_HS_ELECT_TST
1061 + /* Here we implement the HS Electrical Test support. The
1062 + * tester uses a vendor ID of 0x1A0A to indicate we should
1063 + * run a special test sequence. The product ID tells us
1064 + * which sequence to run. We invoke the test sequence by
1065 + * sending a non-standard SetFeature command to our root
1066 + * hub port. Our dwc_otg_hcd_hub_control() routine will
1067 + * recognize the command and perform the desired test
1068 + * sequence.
1069 + */
1070 + if (dev->descriptor.idVendor == 0x1A0A) {
1071 + /* HSOTG Electrical Test */
1072 + dev_warn(&dev->dev, "VID from HSOTG Electrical Test Fixture\n");
1073 +
1074 + if (dev->bus && dev->bus->root_hub) {
1075 + struct usb_device *hdev = dev->bus->root_hub;
1076 + dev_warn(&dev->dev, "Got PID 0x%x\n", dev->descriptor.idProduct);
1077 +
1078 + switch (dev->descriptor.idProduct) {
1079 + case 0x0101: /* TEST_SE0_NAK */
1080 + dev_warn(&dev->dev, "TEST_SE0_NAK\n");
1081 + usb_control_msg(hdev, usb_sndctrlpipe(hdev, 0),
1082 + USB_REQ_SET_FEATURE, USB_RT_PORT,
1083 + USB_PORT_FEAT_TEST, 0x300, NULL, 0, HZ);
1084 + break;
1085 +
1086 + case 0x0102: /* TEST_J */
1087 + dev_warn(&dev->dev, "TEST_J\n");
1088 + usb_control_msg(hdev, usb_sndctrlpipe(hdev, 0),
1089 + USB_REQ_SET_FEATURE, USB_RT_PORT,
1090 + USB_PORT_FEAT_TEST, 0x100, NULL, 0, HZ);
1091 + break;
1092 +
1093 + case 0x0103: /* TEST_K */
1094 + dev_warn(&dev->dev, "TEST_K\n");
1095 + usb_control_msg(hdev, usb_sndctrlpipe(hdev, 0),
1096 + USB_REQ_SET_FEATURE, USB_RT_PORT,
1097 + USB_PORT_FEAT_TEST, 0x200, NULL, 0, HZ);
1098 + break;
1099 +
1100 + case 0x0104: /* TEST_PACKET */
1101 + dev_warn(&dev->dev, "TEST_PACKET\n");
1102 + usb_control_msg(hdev, usb_sndctrlpipe(hdev, 0),
1103 + USB_REQ_SET_FEATURE, USB_RT_PORT,
1104 + USB_PORT_FEAT_TEST, 0x400, NULL, 0, HZ);
1105 + break;
1106 +
1107 + case 0x0105: /* TEST_FORCE_ENABLE */
1108 + dev_warn(&dev->dev, "TEST_FORCE_ENABLE\n");
1109 + usb_control_msg(hdev, usb_sndctrlpipe(hdev, 0),
1110 + USB_REQ_SET_FEATURE, USB_RT_PORT,
1111 + USB_PORT_FEAT_TEST, 0x500, NULL, 0, HZ);
1112 + break;
1113 +
1114 + case 0x0106: /* HS_HOST_PORT_SUSPEND_RESUME */
1115 + dev_warn(&dev->dev, "HS_HOST_PORT_SUSPEND_RESUME\n");
1116 + usb_control_msg(hdev, usb_sndctrlpipe(hdev, 0),
1117 + USB_REQ_SET_FEATURE, USB_RT_PORT,
1118 + USB_PORT_FEAT_TEST, 0x600, NULL, 0, 40 * HZ);
1119 + break;
1120 +
1121 + case 0x0107: /* SINGLE_STEP_GET_DEVICE_DESCRIPTOR setup */
1122 + dev_warn(&dev->dev, "SINGLE_STEP_GET_DEVICE_DESCRIPTOR setup\n");
1123 + usb_control_msg(hdev, usb_sndctrlpipe(hdev, 0),
1124 + USB_REQ_SET_FEATURE, USB_RT_PORT,
1125 + USB_PORT_FEAT_TEST, 0x700, NULL, 0, 40 * HZ);
1126 + break;
1127 +
1128 + case 0x0108: /* SINGLE_STEP_GET_DEVICE_DESCRIPTOR execute */
1129 + dev_warn(&dev->dev, "SINGLE_STEP_GET_DEVICE_DESCRIPTOR execute\n");
1130 + usb_control_msg(hdev, usb_sndctrlpipe(hdev, 0),
1131 + USB_REQ_SET_FEATURE, USB_RT_PORT,
1132 + USB_PORT_FEAT_TEST, 0x800, NULL, 0, 40 * HZ);
1133 + }
1134 + }
1135 + }
1136 +#endif /* DWC_HS_ELECT_TST */
1137
1138 /* Now that the interfaces are installed, re-enable LPM. */
1139 usb_unlocked_enable_lpm(dev);
1140 --- a/drivers/usb/core/otg_whitelist.h
1141 +++ b/drivers/usb/core/otg_whitelist.h
1142 @@ -15,33 +15,82 @@
1143 static struct usb_device_id whitelist_table[] = {
1144
1145 /* hubs are optional in OTG, but very handy ... */
1146 +#define CERT_WITHOUT_HUBS
1147 +#if defined(CERT_WITHOUT_HUBS)
1148 +{ USB_DEVICE( 0x0000, 0x0000 ), }, /* Root HUB Only*/
1149 +#else
1150 { USB_DEVICE_INFO(USB_CLASS_HUB, 0, 0), },
1151 { USB_DEVICE_INFO(USB_CLASS_HUB, 0, 1), },
1152 +{ USB_DEVICE_INFO(USB_CLASS_HUB, 0, 2), },
1153 +#endif
1154
1155 #ifdef CONFIG_USB_PRINTER /* ignoring nonstatic linkage! */
1156 /* FIXME actually, printers are NOT supposed to use device classes;
1157 * they're supposed to use interface classes...
1158 */
1159 -{ USB_DEVICE_INFO(7, 1, 1) },
1160 -{ USB_DEVICE_INFO(7, 1, 2) },
1161 -{ USB_DEVICE_INFO(7, 1, 3) },
1162 +//{ USB_DEVICE_INFO(7, 1, 1) },
1163 +//{ USB_DEVICE_INFO(7, 1, 2) },
1164 +//{ USB_DEVICE_INFO(7, 1, 3) },
1165 #endif
1166
1167 #ifdef CONFIG_USB_NET_CDCETHER
1168 /* Linux-USB CDC Ethernet gadget */
1169 -{ USB_DEVICE(0x0525, 0xa4a1), },
1170 +//{ USB_DEVICE(0x0525, 0xa4a1), },
1171 /* Linux-USB CDC Ethernet + RNDIS gadget */
1172 -{ USB_DEVICE(0x0525, 0xa4a2), },
1173 +//{ USB_DEVICE(0x0525, 0xa4a2), },
1174 #endif
1175
1176 #if IS_ENABLED(CONFIG_USB_TEST)
1177 /* gadget zero, for testing */
1178 -{ USB_DEVICE(0x0525, 0xa4a0), },
1179 +//{ USB_DEVICE(0x0525, 0xa4a0), },
1180 #endif
1181
1182 +/* OPT Tester */
1183 +{ USB_DEVICE( 0x1a0a, 0x0101 ), }, /* TEST_SE0_NAK */
1184 +{ USB_DEVICE( 0x1a0a, 0x0102 ), }, /* Test_J */
1185 +{ USB_DEVICE( 0x1a0a, 0x0103 ), }, /* Test_K */
1186 +{ USB_DEVICE( 0x1a0a, 0x0104 ), }, /* Test_PACKET */
1187 +{ USB_DEVICE( 0x1a0a, 0x0105 ), }, /* Test_FORCE_ENABLE */
1188 +{ USB_DEVICE( 0x1a0a, 0x0106 ), }, /* HS_PORT_SUSPEND_RESUME */
1189 +{ USB_DEVICE( 0x1a0a, 0x0107 ), }, /* SINGLE_STEP_GET_DESCRIPTOR setup */
1190 +{ USB_DEVICE( 0x1a0a, 0x0108 ), }, /* SINGLE_STEP_GET_DESCRIPTOR execute */
1191 +
1192 +/* Sony cameras */
1193 +{ USB_DEVICE_VER(0x054c,0x0010,0x0410, 0x0500), },
1194 +
1195 +/* Memory Devices */
1196 +//{ USB_DEVICE( 0x0781, 0x5150 ), }, /* SanDisk */
1197 +//{ USB_DEVICE( 0x05DC, 0x0080 ), }, /* Lexar */
1198 +//{ USB_DEVICE( 0x4146, 0x9281 ), }, /* IOMEGA */
1199 +//{ USB_DEVICE( 0x067b, 0x2507 ), }, /* Hammer 20GB External HD */
1200 +{ USB_DEVICE( 0x0EA0, 0x2168 ), }, /* Ours Technology Inc. (BUFFALO ClipDrive)*/
1201 +//{ USB_DEVICE( 0x0457, 0x0150 ), }, /* Silicon Integrated Systems Corp. */
1202 +
1203 +/* HP Printers */
1204 +//{ USB_DEVICE( 0x03F0, 0x1102 ), }, /* HP Photosmart 245 */
1205 +//{ USB_DEVICE( 0x03F0, 0x1302 ), }, /* HP Photosmart 370 Series */
1206 +
1207 +/* Speakers */
1208 +//{ USB_DEVICE( 0x0499, 0x3002 ), }, /* YAMAHA YST-MS35D USB Speakers */
1209 +//{ USB_DEVICE( 0x0672, 0x1041 ), }, /* Labtec USB Headset */
1210 +
1211 { } /* Terminating entry */
1212 };
1213
1214 +static inline void report_errors(struct usb_device *dev)
1215 +{
1216 + /* OTG MESSAGE: report errors here, customize to match your product */
1217 + dev_info(&dev->dev, "device Vendor:%04x Product:%04x is not supported\n",
1218 + le16_to_cpu(dev->descriptor.idVendor),
1219 + le16_to_cpu(dev->descriptor.idProduct));
1220 + if (USB_CLASS_HUB == dev->descriptor.bDeviceClass){
1221 + dev_printk(KERN_CRIT, &dev->dev, "Unsupported Hub Topology\n");
1222 + } else {
1223 + dev_printk(KERN_CRIT, &dev->dev, "Attached Device is not Supported\n");
1224 + }
1225 +}
1226 +
1227 +
1228 static int is_targeted(struct usb_device *dev)
1229 {
1230 struct usb_device_id *id = whitelist_table;
1231 @@ -91,16 +140,57 @@ static int is_targeted(struct usb_device
1232 continue;
1233
1234 return 1;
1235 - }
1236 + /* NOTE: can't use usb_match_id() since interface caches
1237 + * aren't set up yet. this is cut/paste from that code.
1238 + */
1239 + for (id = whitelist_table; id->match_flags; id++) {
1240 +#ifdef DEBUG
1241 + dev_dbg(&dev->dev,
1242 + "ID: V:%04x P:%04x DC:%04x SC:%04x PR:%04x \n",
1243 + id->idVendor,
1244 + id->idProduct,
1245 + id->bDeviceClass,
1246 + id->bDeviceSubClass,
1247 + id->bDeviceProtocol);
1248 +#endif
1249
1250 - /* add other match criteria here ... */
1251 + if ((id->match_flags & USB_DEVICE_ID_MATCH_VENDOR) &&
1252 + id->idVendor != le16_to_cpu(dev->descriptor.idVendor))
1253 + continue;
1254 +
1255 + if ((id->match_flags & USB_DEVICE_ID_MATCH_PRODUCT) &&
1256 + id->idProduct != le16_to_cpu(dev->descriptor.idProduct))
1257 + continue;
1258 +
1259 + /* No need to test id->bcdDevice_lo != 0, since 0 is never
1260 + greater than any unsigned number. */
1261 + if ((id->match_flags & USB_DEVICE_ID_MATCH_DEV_LO) &&
1262 + (id->bcdDevice_lo > le16_to_cpu(dev->descriptor.bcdDevice)))
1263 + continue;
1264 +
1265 + if ((id->match_flags & USB_DEVICE_ID_MATCH_DEV_HI) &&
1266 + (id->bcdDevice_hi < le16_to_cpu(dev->descriptor.bcdDevice)))
1267 + continue;
1268 +
1269 + if ((id->match_flags & USB_DEVICE_ID_MATCH_DEV_CLASS) &&
1270 + (id->bDeviceClass != dev->descriptor.bDeviceClass))
1271 + continue;
1272 +
1273 + if ((id->match_flags & USB_DEVICE_ID_MATCH_DEV_SUBCLASS) &&
1274 + (id->bDeviceSubClass != dev->descriptor.bDeviceSubClass))
1275 + continue;
1276 +
1277 + if ((id->match_flags & USB_DEVICE_ID_MATCH_DEV_PROTOCOL) &&
1278 + (id->bDeviceProtocol != dev->descriptor.bDeviceProtocol))
1279 + continue;
1280
1281 + return 1;
1282 + }
1283 + }
1284
1285 - /* OTG MESSAGE: report errors here, customize to match your product */
1286 - dev_err(&dev->dev, "device v%04x p%04x is not supported\n",
1287 - le16_to_cpu(dev->descriptor.idVendor),
1288 - le16_to_cpu(dev->descriptor.idProduct));
1289 + /* add other match criteria here ... */
1290
1291 + report_errors(dev);
1292 return 0;
1293 }
1294
1295 --- /dev/null
1296 +++ b/drivers/usb/gadget/file_storage.c
1297 @@ -0,0 +1,3676 @@
1298 +/*
1299 + * file_storage.c -- File-backed USB Storage Gadget, for USB development
1300 + *
1301 + * Copyright (C) 2003-2008 Alan Stern
1302 + * All rights reserved.
1303 + *
1304 + * Redistribution and use in source and binary forms, with or without
1305 + * modification, are permitted provided that the following conditions
1306 + * are met:
1307 + * 1. Redistributions of source code must retain the above copyright
1308 + * notice, this list of conditions, and the following disclaimer,
1309 + * without modification.
1310 + * 2. Redistributions in binary form must reproduce the above copyright
1311 + * notice, this list of conditions and the following disclaimer in the
1312 + * documentation and/or other materials provided with the distribution.
1313 + * 3. The names of the above-listed copyright holders may not be used
1314 + * to endorse or promote products derived from this software without
1315 + * specific prior written permission.
1316 + *
1317 + * ALTERNATIVELY, this software may be distributed under the terms of the
1318 + * GNU General Public License ("GPL") as published by the Free Software
1319 + * Foundation, either version 2 of that License or (at your option) any
1320 + * later version.
1321 + *
1322 + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS
1323 + * IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
1324 + * THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
1325 + * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR
1326 + * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
1327 + * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
1328 + * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
1329 + * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
1330 + * LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
1331 + * NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
1332 + * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
1333 + */
1334 +
1335 +
1336 +/*
1337 + * The File-backed Storage Gadget acts as a USB Mass Storage device,
1338 + * appearing to the host as a disk drive or as a CD-ROM drive. In addition
1339 + * to providing an example of a genuinely useful gadget driver for a USB
1340 + * device, it also illustrates a technique of double-buffering for increased
1341 + * throughput. Last but not least, it gives an easy way to probe the
1342 + * behavior of the Mass Storage drivers in a USB host.
1343 + *
1344 + * Backing storage is provided by a regular file or a block device, specified
1345 + * by the "file" module parameter. Access can be limited to read-only by
1346 + * setting the optional "ro" module parameter. (For CD-ROM emulation,
1347 + * access is always read-only.) The gadget will indicate that it has
1348 + * removable media if the optional "removable" module parameter is set.
1349 + *
1350 + * The gadget supports the Control-Bulk (CB), Control-Bulk-Interrupt (CBI),
1351 + * and Bulk-Only (also known as Bulk-Bulk-Bulk or BBB) transports, selected
1352 + * by the optional "transport" module parameter. It also supports the
1353 + * following protocols: RBC (0x01), ATAPI or SFF-8020i (0x02), QIC-157 (0x03),
1354 + * UFI (0x04), SFF-8070i (0x05), and transparent SCSI (0x06), selected by
1355 + * the optional "protocol" module parameter. In addition, the default
1356 + * Vendor ID, Product ID, release number and serial number can be overridden.
1357 + *
1358 + * There is support for multiple logical units (LUNs), each of which has
1359 + * its own backing file. The number of LUNs can be set using the optional
1360 + * "luns" module parameter (anywhere from 1 to 8), and the corresponding
1361 + * files are specified using comma-separated lists for "file" and "ro".
1362 + * The default number of LUNs is taken from the number of "file" elements;
1363 + * it is 1 if "file" is not given. If "removable" is not set then a backing
1364 + * file must be specified for each LUN. If it is set, then an unspecified
1365 + * or empty backing filename means the LUN's medium is not loaded. Ideally
1366 + * each LUN would be settable independently as a disk drive or a CD-ROM
1367 + * drive, but currently all LUNs have to be the same type. The CD-ROM
1368 + * emulation includes a single data track and no audio tracks; hence there
1369 + * need be only one backing file per LUN.
1370 + *
1371 + * Requirements are modest; only a bulk-in and a bulk-out endpoint are
1372 + * needed (an interrupt-out endpoint is also needed for CBI). The memory
1373 + * requirement amounts to two 16K buffers, size configurable by a parameter.
1374 + * Support is included for both full-speed and high-speed operation.
1375 + *
1376 + * Note that the driver is slightly non-portable in that it assumes a
1377 + * single memory/DMA buffer will be useable for bulk-in, bulk-out, and
1378 + * interrupt-in endpoints. With most device controllers this isn't an
1379 + * issue, but there may be some with hardware restrictions that prevent
1380 + * a buffer from being used by more than one endpoint.
1381 + *
1382 + * Module options:
1383 + *
1384 + * file=filename[,filename...]
1385 + * Required if "removable" is not set, names of
1386 + * the files or block devices used for
1387 + * backing storage
1388 + * serial=HHHH... Required serial number (string of hex chars)
1389 + * ro=b[,b...] Default false, booleans for read-only access
1390 + * removable Default false, boolean for removable media
1391 + * luns=N Default N = number of filenames, number of
1392 + * LUNs to support
1393 + * nofua=b[,b...] Default false, booleans for ignore FUA flag
1394 + * in SCSI WRITE(10,12) commands
1395 + * stall Default determined according to the type of
1396 + * USB device controller (usually true),
1397 + * boolean to permit the driver to halt
1398 + * bulk endpoints
1399 + * cdrom Default false, boolean for whether to emulate
1400 + * a CD-ROM drive
1401 + * transport=XXX Default BBB, transport name (CB, CBI, or BBB)
1402 + * protocol=YYY Default SCSI, protocol name (RBC, 8020 or
1403 + * ATAPI, QIC, UFI, 8070, or SCSI;
1404 + * also 1 - 6)
1405 + * vendor=0xVVVV Default 0x0525 (NetChip), USB Vendor ID
1406 + * product=0xPPPP Default 0xa4a5 (FSG), USB Product ID
1407 + * release=0xRRRR Override the USB release number (bcdDevice)
1408 + * buflen=N Default N=16384, buffer size used (will be
1409 + * rounded down to a multiple of
1410 + * PAGE_CACHE_SIZE)
1411 + *
1412 + * If CONFIG_USB_FILE_STORAGE_TEST is not set, only the "file", "serial", "ro",
1413 + * "removable", "luns", "nofua", "stall", and "cdrom" options are available;
1414 + * default values are used for everything else.
1415 + *
1416 + * The pathnames of the backing files and the ro settings are available in
1417 + * the attribute files "file", "nofua", and "ro" in the lun<n> subdirectory of
1418 + * the gadget's sysfs directory. If the "removable" option is set, writing to
1419 + * these files will simulate ejecting/loading the medium (writing an empty
1420 + * line means eject) and adjusting a write-enable tab. Changes to the ro
1421 + * setting are not allowed when the medium is loaded or if CD-ROM emulation
1422 + * is being used.
1423 + *
1424 + * This gadget driver is heavily based on "Gadget Zero" by David Brownell.
1425 + * The driver's SCSI command interface was based on the "Information
1426 + * technology - Small Computer System Interface - 2" document from
1427 + * X3T9.2 Project 375D, Revision 10L, 7-SEP-93, available at
1428 + * <http://www.t10.org/ftp/t10/drafts/s2/s2-r10l.pdf>. The single exception
1429 + * is opcode 0x23 (READ FORMAT CAPACITIES), which was based on the
1430 + * "Universal Serial Bus Mass Storage Class UFI Command Specification"
1431 + * document, Revision 1.0, December 14, 1998, available at
1432 + * <http://www.usb.org/developers/devclass_docs/usbmass-ufi10.pdf>.
1433 + */
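+
+/*
+ * Illustrative example (not part of the original file): with the options
+ * above, a single read-write LUN backed by a disk image could be set up
+ * with something like
+ *
+ *	modprobe g_file_storage file=/srv/backing.img luns=1 stall=0
+ *
+ * where the image path and the option values are placeholders chosen for
+ * the example, not defaults.
+ */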
1434 +
1435 +
1436 +/*
1437 + * Driver Design
1438 + *
1439 + * The FSG driver is fairly straightforward. There is a main kernel
1440 + * thread that handles most of the work. Interrupt routines field
1441 + * callbacks from the controller driver: bulk- and interrupt-request
1442 + * completion notifications, endpoint-0 events, and disconnect events.
1443 + * Completion events are passed to the main thread by wakeup calls. Many
1444 + * ep0 requests are handled at interrupt time, but SetInterface,
1445 + * SetConfiguration, and device reset requests are forwarded to the
1446 + * thread in the form of "exceptions" using SIGUSR1 signals (since they
1447 + * should interrupt any ongoing file I/O operations).
1448 + *
1449 + * The thread's main routine implements the standard command/data/status
1450 + * parts of a SCSI interaction. It and its subroutines are full of tests
1451 + * for pending signals/exceptions -- all this polling is necessary since
1452 + * the kernel has no setjmp/longjmp equivalents. (Maybe this is an
1453 + * indication that the driver really wants to be running in userspace.)
1454 + * An important point is that so long as the thread is alive it keeps an
1455 + * open reference to the backing file. This will prevent unmounting
1456 + * the backing file's underlying filesystem and could cause problems
1457 + * during system shutdown, for example. To prevent such problems, the
1458 + * thread catches INT, TERM, and KILL signals and converts them into
1459 + * an EXIT exception.
1460 + *
1461 + * In normal operation the main thread is started during the gadget's
1462 + * fsg_bind() callback and stopped during fsg_unbind(). But it can also
1463 + * exit when it receives a signal, and there's no point leaving the
1464 + * gadget running when the thread is dead. So just before the thread
1465 + * exits, it deregisters the gadget driver. This makes things a little
1466 + * tricky: The driver is deregistered at two places, and the exiting
1467 + * thread can indirectly call fsg_unbind() which in turn can tell the
1468 + * thread to exit. The first problem is resolved through the use of the
1469 + * REGISTERED atomic bitflag; the driver will only be deregistered once.
1470 + * The second problem is resolved by having fsg_unbind() check
1471 + * fsg->state; it won't try to stop the thread if the state is already
1472 + * FSG_STATE_TERMINATED.
1473 + *
1474 + * To provide maximum throughput, the driver uses a circular pipeline of
1475 + * buffer heads (struct fsg_buffhd). In principle the pipeline can be
1476 + * arbitrarily long; in practice the benefits don't justify having more
1477 + * than 2 stages (i.e., double buffering). But it helps to think of the
1478 + * pipeline as being a long one. Each buffer head contains a bulk-in and
1479 + * a bulk-out request pointer (since the buffer can be used for both
1480 + * output and input -- directions always are given from the host's
1481 + * point of view) as well as a pointer to the buffer and various state
1482 + * variables.
1483 + *
1484 + * Use of the pipeline follows a simple protocol. There is a variable
1485 + * (fsg->next_buffhd_to_fill) that points to the next buffer head to use.
1486 + * At any time that buffer head may still be in use from an earlier
1487 + * request, so each buffer head has a state variable indicating whether
1488 + * it is EMPTY, FULL, or BUSY. Typical use involves waiting for the
1489 + * buffer head to be EMPTY, filling the buffer either by file I/O or by
1490 + * USB I/O (during which the buffer head is BUSY), and marking the buffer
1491 + * head FULL when the I/O is complete. Then the buffer will be emptied
1492 + * (again possibly by USB I/O, during which it is marked BUSY) and
1493 + * finally marked EMPTY again (possibly by a completion routine).
1494 + *
1495 + * A module parameter tells the driver to avoid stalling the bulk
1496 + * endpoints wherever the transport specification allows. This is
1497 + * necessary for some UDCs like the SuperH, which cannot reliably clear a
1498 + * halt on a bulk endpoint. However, under certain circumstances the
1499 + * Bulk-only specification requires a stall. In such cases the driver
1500 + * will halt the endpoint and set a flag indicating that it should clear
1501 + * the halt in software during the next device reset. Hopefully this
1502 + * will permit everything to work correctly. Furthermore, although the
1503 + * specification allows the bulk-out endpoint to halt when the host sends
1504 + * too much data, implementing this would cause an unavoidable race.
1505 + * The driver will always use the "no-stall" approach for OUT transfers.
1506 + *
1507 + * One subtle point concerns sending status-stage responses for ep0
1508 + * requests. Some of these requests, such as device reset, can involve
1509 + * interrupting an ongoing file I/O operation, which might take an
1510 + * arbitrarily long time. During that delay the host might give up on
1511 + * the original ep0 request and issue a new one. When that happens the
1512 + * driver should not notify the host about completion of the original
1513 + * request, as the host will no longer be waiting for it. So the driver
1514 + * assigns to each ep0 request a unique tag, and it keeps track of the
1515 + * tag value of the request associated with a long-running exception
1516 + * (device-reset, interface-change, or configuration-change). When the
1517 + * exception handler is finished, the status-stage response is submitted
1518 + * only if the current ep0 request tag is equal to the exception request
1519 + * tag. Thus only the most recently received ep0 request will get a
1520 + * status-stage response.
1521 + *
1522 + * Warning: This driver source file is too long. It ought to be split up
1523 + * into a header file plus about 3 separate .c files, to handle the details
1524 + * of the Gadget, USB Mass Storage, and SCSI protocols.
1525 + */
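+
+/*
+ * Illustrative sketch (not part of the original file): the buffer-head
+ * pipeline described above, reduced to plain C.  The example_* names are
+ * invented for the example; the real driver sleeps in sleep_thread()
+ * instead of spinning and runs the completion side from its
+ * request-completion callbacks.
+ */
+#if 0	/* example only, never compiled */
+enum example_buf_state { EXAMPLE_EMPTY, EXAMPLE_FULL, EXAMPLE_BUSY };
+
+struct example_buffhd {
+	enum example_buf_state	state;
+	struct example_buffhd	*next;		/* circular list */
+};
+
+/* Producer: wait for an EMPTY head, mark it BUSY while it is filled */
+static struct example_buffhd *example_start_fill(struct example_buffhd *bh)
+{
+	while (bh->state != EXAMPLE_EMPTY)
+		;				/* sleep_thread() in the driver */
+	bh->state = EXAMPLE_BUSY;
+	return bh;
+}
+
+/* Completion: the fill (file or USB I/O) finished, publish the data */
+static void example_fill_done(struct example_buffhd *bh)
+{
+	bh->state = EXAMPLE_FULL;
+}
+
+/* Consumer: drain a FULL head and hand it back to the producer */
+static void example_drain(struct example_buffhd *bh)
+{
+	/* ... consume bh's data (file write or USB submission) ... */
+	bh->state = EXAMPLE_EMPTY;
+}
+#endif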
1526 +
1527 +
1528 +/* #define VERBOSE_DEBUG */
1529 +/* #define DUMP_MSGS */
1530 +
1531 +
1532 +#include <linux/blkdev.h>
1533 +#include <linux/completion.h>
1534 +#include <linux/dcache.h>
1535 +#include <linux/delay.h>
1536 +#include <linux/device.h>
1537 +#include <linux/fcntl.h>
1538 +#include <linux/file.h>
1539 +#include <linux/fs.h>
1540 +#include <linux/kref.h>
1541 +#include <linux/kthread.h>
1542 +#include <linux/limits.h>
1543 +#include <linux/module.h>
1544 +#include <linux/rwsem.h>
1545 +#include <linux/slab.h>
1546 +#include <linux/spinlock.h>
1547 +#include <linux/string.h>
1548 +#include <linux/freezer.h>
1549 +#include <linux/utsname.h>
1550 +
1551 +#include <linux/usb/ch9.h>
1552 +#include <linux/usb/gadget.h>
1553 +
1554 +#include "gadget_chips.h"
1555 +
1556 +
1557 +
1558 +/*
1559 + * Kbuild is not very cooperative with respect to linking separately
1560 + * compiled library objects into one module. So for now we won't use
1561 + * separate compilation ... ensuring init/exit sections work to shrink
1562 + * the runtime footprint, and giving us at least some parts of what
1563 + * a "gcc --combine ... part1.c part2.c part3.c ... " build would.
1564 + */
1565 +#include "usbstring.c"
1566 +#include "config.c"
1567 +#include "epautoconf.c"
1568 +
1569 +/*-------------------------------------------------------------------------*/
1570 +
1571 +#define DRIVER_DESC "File-backed Storage Gadget"
1572 +#define DRIVER_NAME "g_file_storage"
1573 +#define DRIVER_VERSION "1 September 2010"
1574 +
1575 +static char fsg_string_manufacturer[64];
1576 +static const char fsg_string_product[] = DRIVER_DESC;
1577 +static const char fsg_string_config[] = "Self-powered";
1578 +static const char fsg_string_interface[] = "Mass Storage";
1579 +
1580 +
1581 +#include "storage_common.c"
1582 +
1583 +
1584 +MODULE_DESCRIPTION(DRIVER_DESC);
1585 +MODULE_AUTHOR("Alan Stern");
1586 +MODULE_LICENSE("Dual BSD/GPL");
1587 +
1588 +/*
1589 + * This driver assumes self-powered hardware and has no way for users to
1590 + * trigger remote wakeup. It uses autoconfiguration to select endpoints
1591 + * and endpoint addresses.
1592 + */
1593 +
1594 +
1595 +/*-------------------------------------------------------------------------*/
1596 +
1597 +
1598 +/* Encapsulate the module parameter settings */
1599 +
1600 +static struct {
1601 + char *file[FSG_MAX_LUNS];
1602 + char *serial;
1603 + bool ro[FSG_MAX_LUNS];
1604 + bool nofua[FSG_MAX_LUNS];
1605 + unsigned int num_filenames;
1606 + unsigned int num_ros;
1607 + unsigned int num_nofuas;
1608 + unsigned int nluns;
1609 +
1610 + bool removable;
1611 + bool can_stall;
1612 + bool cdrom;
1613 +
1614 + char *transport_parm;
1615 + char *protocol_parm;
1616 + unsigned short vendor;
1617 + unsigned short product;
1618 + unsigned short release;
1619 + unsigned int buflen;
1620 +
1621 + int transport_type;
1622 + char *transport_name;
1623 + int protocol_type;
1624 + char *protocol_name;
1625 +
1626 +} mod_data = { // Default values
1627 + .transport_parm = "BBB",
1628 + .protocol_parm = "SCSI",
1629 + .removable = 0,
1630 + .can_stall = 1,
1631 + .cdrom = 0,
1632 + .vendor = FSG_VENDOR_ID,
1633 + .product = FSG_PRODUCT_ID,
1634 + .release = 0xffff, // Use controller chip type
1635 + .buflen = 16384,
1636 + };
1637 +
1638 +
1639 +module_param_array_named(file, mod_data.file, charp, &mod_data.num_filenames,
1640 + S_IRUGO);
1641 +MODULE_PARM_DESC(file, "names of backing files or devices");
1642 +
1643 +module_param_named(serial, mod_data.serial, charp, S_IRUGO);
1644 +MODULE_PARM_DESC(serial, "USB serial number");
1645 +
1646 +module_param_array_named(ro, mod_data.ro, bool, &mod_data.num_ros, S_IRUGO);
1647 +MODULE_PARM_DESC(ro, "true to force read-only");
1648 +
1649 +module_param_array_named(nofua, mod_data.nofua, bool, &mod_data.num_nofuas,
1650 + S_IRUGO);
1651 +MODULE_PARM_DESC(nofua, "true to ignore SCSI WRITE(10,12) FUA bit");
1652 +
1653 +module_param_named(luns, mod_data.nluns, uint, S_IRUGO);
1654 +MODULE_PARM_DESC(luns, "number of LUNs");
1655 +
1656 +module_param_named(removable, mod_data.removable, bool, S_IRUGO);
1657 +MODULE_PARM_DESC(removable, "true to simulate removable media");
1658 +
1659 +module_param_named(stall, mod_data.can_stall, bool, S_IRUGO);
1660 +MODULE_PARM_DESC(stall, "false to prevent bulk stalls");
1661 +
1662 +module_param_named(cdrom, mod_data.cdrom, bool, S_IRUGO);
1663 +MODULE_PARM_DESC(cdrom, "true to emulate cdrom instead of disk");
1664 +
1665 +/* In the non-TEST version, only the module parameters listed above
1666 + * are available. */
1667 +#ifdef CONFIG_USB_FILE_STORAGE_TEST
1668 +
1669 +module_param_named(transport, mod_data.transport_parm, charp, S_IRUGO);
1670 +MODULE_PARM_DESC(transport, "type of transport (BBB, CBI, or CB)");
1671 +
1672 +module_param_named(protocol, mod_data.protocol_parm, charp, S_IRUGO);
1673 +MODULE_PARM_DESC(protocol, "type of protocol (RBC, 8020, QIC, UFI, "
1674 + "8070, or SCSI)");
1675 +
1676 +module_param_named(vendor, mod_data.vendor, ushort, S_IRUGO);
1677 +MODULE_PARM_DESC(vendor, "USB Vendor ID");
1678 +
1679 +module_param_named(product, mod_data.product, ushort, S_IRUGO);
1680 +MODULE_PARM_DESC(product, "USB Product ID");
1681 +
1682 +module_param_named(release, mod_data.release, ushort, S_IRUGO);
1683 +MODULE_PARM_DESC(release, "USB release number");
1684 +
1685 +module_param_named(buflen, mod_data.buflen, uint, S_IRUGO);
1686 +MODULE_PARM_DESC(buflen, "I/O buffer size");
1687 +
1688 +#endif /* CONFIG_USB_FILE_STORAGE_TEST */
1689 +
1690 +
1691 +/*
1692 + * These definitions will permit the compiler to avoid generating code for
1693 + * parts of the driver that aren't used in the non-TEST version. Even gcc
1694 + * can recognize when a test of a constant expression yields a dead code
1695 + * path.
1696 + */
1697 +
1698 +#ifdef CONFIG_USB_FILE_STORAGE_TEST
1699 +
1700 +#define transport_is_bbb() (mod_data.transport_type == USB_PR_BULK)
1701 +#define transport_is_cbi() (mod_data.transport_type == USB_PR_CBI)
1702 +#define protocol_is_scsi() (mod_data.protocol_type == USB_SC_SCSI)
1703 +
1704 +#else
1705 +
1706 +#define transport_is_bbb() 1
1707 +#define transport_is_cbi() 0
1708 +#define protocol_is_scsi() 1
1709 +
1710 +#endif /* CONFIG_USB_FILE_STORAGE_TEST */
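+
+/*
+ * Example (illustrative only; queue_status_interrupt is a hypothetical
+ * helper name): in the non-TEST build transport_is_cbi() is the constant
+ * 0, so a fragment such as
+ *
+ *	if (transport_is_cbi())
+ *		queue_status_interrupt();
+ *
+ * is recognized as dead code and generates no object code, while the TEST
+ * build keeps the branch and tests mod_data.transport_type at run time.
+ */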
1711 +
1712 +
1713 +/*-------------------------------------------------------------------------*/
1714 +
1715 +
1716 +struct fsg_dev {
1717 + /* lock protects: state, all the req_busy's, and cbbuf_cmnd */
1718 + spinlock_t lock;
1719 + struct usb_gadget *gadget;
1720 +
1721 + /* filesem protects: backing files in use */
1722 + struct rw_semaphore filesem;
1723 +
1724 + /* reference counting: wait until all LUNs are released */
1725 + struct kref ref;
1726 +
1727 + struct usb_ep *ep0; // Handy copy of gadget->ep0
1728 + struct usb_request *ep0req; // For control responses
1729 + unsigned int ep0_req_tag;
1730 + const char *ep0req_name;
1731 +
1732 + struct usb_request *intreq; // For interrupt responses
1733 + int intreq_busy;
1734 + struct fsg_buffhd *intr_buffhd;
1735 +
1736 + unsigned int bulk_out_maxpacket;
1737 + enum fsg_state state; // For exception handling
1738 + unsigned int exception_req_tag;
1739 +
1740 + u8 config, new_config;
1741 +
1742 + unsigned int running : 1;
1743 + unsigned int bulk_in_enabled : 1;
1744 + unsigned int bulk_out_enabled : 1;
1745 + unsigned int intr_in_enabled : 1;
1746 + unsigned int phase_error : 1;
1747 + unsigned int short_packet_received : 1;
1748 + unsigned int bad_lun_okay : 1;
1749 +
1750 + unsigned long atomic_bitflags;
1751 +#define REGISTERED 0
1752 +#define IGNORE_BULK_OUT 1
1753 +#define SUSPENDED 2
1754 +
1755 + struct usb_ep *bulk_in;
1756 + struct usb_ep *bulk_out;
1757 + struct usb_ep *intr_in;
1758 +
1759 + struct fsg_buffhd *next_buffhd_to_fill;
1760 + struct fsg_buffhd *next_buffhd_to_drain;
1761 +
1762 + int thread_wakeup_needed;
1763 + struct completion thread_notifier;
1764 + struct task_struct *thread_task;
1765 +
1766 + int cmnd_size;
1767 + u8 cmnd[MAX_COMMAND_SIZE];
1768 + enum data_direction data_dir;
1769 + u32 data_size;
1770 + u32 data_size_from_cmnd;
1771 + u32 tag;
1772 + unsigned int lun;
1773 + u32 residue;
1774 + u32 usb_amount_left;
1775 +
1776 + /* The CB protocol offers no way for a host to know when a command
1777 + * has completed. As a result the next command may arrive early,
1778 + * and we will still have to handle it. For that reason we need
1779 + * a buffer to store new commands when using CB (or CBI, which
1780 + * does not oblige a host to wait for command completion either). */
1781 + int cbbuf_cmnd_size;
1782 + u8 cbbuf_cmnd[MAX_COMMAND_SIZE];
1783 +
1784 + unsigned int nluns;
1785 + struct fsg_lun *luns;
1786 + struct fsg_lun *curlun;
1787 + /* Must be the last entry */
1788 + struct fsg_buffhd buffhds[];
1789 +};
1790 +
1791 +typedef void (*fsg_routine_t)(struct fsg_dev *);
1792 +
1793 +static int exception_in_progress(struct fsg_dev *fsg)
1794 +{
1795 + return (fsg->state > FSG_STATE_IDLE);
1796 +}
1797 +
1798 +/* Make bulk-out requests be divisible by the maxpacket size */
1799 +static void set_bulk_out_req_length(struct fsg_dev *fsg,
1800 + struct fsg_buffhd *bh, unsigned int length)
1801 +{
1802 + unsigned int rem;
1803 +
1804 + bh->bulk_out_intended_length = length;
1805 + rem = length % fsg->bulk_out_maxpacket;
1806 + if (rem > 0)
1807 + length += fsg->bulk_out_maxpacket - rem;
1808 + bh->outreq->length = length;
1809 +}
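+
+/*
+ * Worked example (not from the original source): with a 512-byte bulk-out
+ * maxpacket, a requested length of 1000 is padded to 1024 so the request
+ * never ends mid-packet; the original 1000 is kept in
+ * bulk_out_intended_length so completion handlers can still compare it
+ * with outreq->actual.
+ */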
1810 +
1811 +static struct fsg_dev *the_fsg;
1812 +static struct usb_gadget_driver fsg_driver;
1813 +
1814 +
1815 +/*-------------------------------------------------------------------------*/
1816 +
1817 +static int fsg_set_halt(struct fsg_dev *fsg, struct usb_ep *ep)
1818 +{
1819 + const char *name;
1820 +
1821 + if (ep == fsg->bulk_in)
1822 + name = "bulk-in";
1823 + else if (ep == fsg->bulk_out)
1824 + name = "bulk-out";
1825 + else
1826 + name = ep->name;
1827 + DBG(fsg, "%s set halt\n", name);
1828 + return usb_ep_set_halt(ep);
1829 +}
1830 +
1831 +
1832 +/*-------------------------------------------------------------------------*/
1833 +
1834 +/*
1835 + * DESCRIPTORS ... most are static, but strings and (full) configuration
1836 + * descriptors are built on demand. Also the (static) config and interface
1837 + * descriptors are adjusted during fsg_bind().
1838 + */
1839 +
1840 +/* There is only one configuration. */
1841 +#define CONFIG_VALUE 1
1842 +
1843 +static struct usb_device_descriptor
1844 +device_desc = {
1845 + .bLength = sizeof device_desc,
1846 + .bDescriptorType = USB_DT_DEVICE,
1847 +
1848 + .bcdUSB = cpu_to_le16(0x0200),
1849 + .bDeviceClass = USB_CLASS_PER_INTERFACE,
1850 +
1851 + /* The next three values can be overridden by module parameters */
1852 + .idVendor = cpu_to_le16(FSG_VENDOR_ID),
1853 + .idProduct = cpu_to_le16(FSG_PRODUCT_ID),
1854 + .bcdDevice = cpu_to_le16(0xffff),
1855 +
1856 + .iManufacturer = FSG_STRING_MANUFACTURER,
1857 + .iProduct = FSG_STRING_PRODUCT,
1858 + .iSerialNumber = FSG_STRING_SERIAL,
1859 + .bNumConfigurations = 1,
1860 +};
1861 +
1862 +static struct usb_config_descriptor
1863 +config_desc = {
1864 + .bLength = sizeof config_desc,
1865 + .bDescriptorType = USB_DT_CONFIG,
1866 +
1867 + /* wTotalLength computed by usb_gadget_config_buf() */
1868 + .bNumInterfaces = 1,
1869 + .bConfigurationValue = CONFIG_VALUE,
1870 + .iConfiguration = FSG_STRING_CONFIG,
1871 + .bmAttributes = USB_CONFIG_ATT_ONE | USB_CONFIG_ATT_SELFPOWER,
1872 + .bMaxPower = CONFIG_USB_GADGET_VBUS_DRAW / 2,
1873 +};
1874 +
1875 +
1876 +static struct usb_qualifier_descriptor
1877 +dev_qualifier = {
1878 + .bLength = sizeof dev_qualifier,
1879 + .bDescriptorType = USB_DT_DEVICE_QUALIFIER,
1880 +
1881 + .bcdUSB = cpu_to_le16(0x0200),
1882 + .bDeviceClass = USB_CLASS_PER_INTERFACE,
1883 +
1884 + .bNumConfigurations = 1,
1885 +};
1886 +
1887 +static int populate_bos(struct fsg_dev *fsg, u8 *buf)
1888 +{
1889 + memcpy(buf, &fsg_bos_desc, USB_DT_BOS_SIZE);
1890 + buf += USB_DT_BOS_SIZE;
1891 +
1892 + memcpy(buf, &fsg_ext_cap_desc, USB_DT_USB_EXT_CAP_SIZE);
1893 + buf += USB_DT_USB_EXT_CAP_SIZE;
1894 +
1895 + memcpy(buf, &fsg_ss_cap_desc, USB_DT_USB_SS_CAP_SIZE);
1896 +
1897 + return USB_DT_BOS_SIZE + USB_DT_USB_SS_CAP_SIZE
1898 + + USB_DT_USB_EXT_CAP_SIZE;
1899 +}
1900 +
1901 +/*
1902 + * Config descriptors must agree with the code that sets configurations
1903 + * and with code managing interfaces and their altsettings. They must
1904 + * also handle different speeds and other-speed requests.
1905 + */
1906 +static int populate_config_buf(struct usb_gadget *gadget,
1907 + u8 *buf, u8 type, unsigned index)
1908 +{
1909 + enum usb_device_speed speed = gadget->speed;
1910 + int len;
1911 + const struct usb_descriptor_header **function;
1912 +
1913 + if (index > 0)
1914 + return -EINVAL;
1915 +
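+	/* For an "other speed" request, describe the speed we are not
+	 * currently running at: swap USB_SPEED_FULL and USB_SPEED_HIGH */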
1916 + if (gadget_is_dualspeed(gadget) && type == USB_DT_OTHER_SPEED_CONFIG)
1917 + speed = (USB_SPEED_FULL + USB_SPEED_HIGH) - speed;
1918 + function = gadget_is_dualspeed(gadget) && speed == USB_SPEED_HIGH
1919 + ? (const struct usb_descriptor_header **)fsg_hs_function
1920 + : (const struct usb_descriptor_header **)fsg_fs_function;
1921 +
1922 + /* for now, don't advertise srp-only devices */
1923 + if (!gadget_is_otg(gadget))
1924 + function++;
1925 +
1926 + len = usb_gadget_config_buf(&config_desc, buf, EP0_BUFSIZE, function);
1927 + ((struct usb_config_descriptor *) buf)->bDescriptorType = type;
1928 + return len;
1929 +}
1930 +
1931 +
1932 +/*-------------------------------------------------------------------------*/
1933 +
1934 +/* These routines may be called in process context or in_irq */
1935 +
1936 +/* Caller must hold fsg->lock */
1937 +static void wakeup_thread(struct fsg_dev *fsg)
1938 +{
1939 + /* Tell the main thread that something has happened */
1940 + fsg->thread_wakeup_needed = 1;
1941 + if (fsg->thread_task)
1942 + wake_up_process(fsg->thread_task);
1943 +}
1944 +
1945 +
1946 +static void raise_exception(struct fsg_dev *fsg, enum fsg_state new_state)
1947 +{
1948 + unsigned long flags;
1949 +
1950 + /* Do nothing if a higher-priority exception is already in progress.
1951 + * If a lower-or-equal priority exception is in progress, preempt it
1952 + * and notify the main thread by sending it a signal. */
1953 + spin_lock_irqsave(&fsg->lock, flags);
1954 + if (fsg->state <= new_state) {
1955 + fsg->exception_req_tag = fsg->ep0_req_tag;
1956 + fsg->state = new_state;
1957 + if (fsg->thread_task)
1958 + send_sig_info(SIGUSR1, SEND_SIG_FORCED,
1959 + fsg->thread_task);
1960 + }
1961 + spin_unlock_irqrestore(&fsg->lock, flags);
1962 +}
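+
+/*
+ * Example (assumes the FSG_STATE_* enum in storage_common.c is ordered by
+ * priority): if the thread is still handling FSG_STATE_RESET when a cable
+ * pull raises FSG_STATE_DISCONNECT, the test above replaces the state and
+ * the thread restarts with the disconnect; a reset arriving while a
+ * disconnect is pending is ignored instead.
+ */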
1963 +
1964 +
1965 +/*-------------------------------------------------------------------------*/
1966 +
1967 +/* The disconnect callback and ep0 routines. These always run in_irq,
1968 + * except that ep0_queue() is called in the main thread to acknowledge
1969 + * completion of various requests: set config, set interface, and
1970 + * Bulk-only device reset. */
1971 +
1972 +static void fsg_disconnect(struct usb_gadget *gadget)
1973 +{
1974 + struct fsg_dev *fsg = get_gadget_data(gadget);
1975 +
1976 + DBG(fsg, "disconnect or port reset\n");
1977 + raise_exception(fsg, FSG_STATE_DISCONNECT);
1978 +}
1979 +
1980 +
1981 +static int ep0_queue(struct fsg_dev *fsg)
1982 +{
1983 + int rc;
1984 +
1985 + rc = usb_ep_queue(fsg->ep0, fsg->ep0req, GFP_ATOMIC);
1986 + if (rc != 0 && rc != -ESHUTDOWN) {
1987 +
1988 + /* We can't do much more than wait for a reset */
1989 + WARNING(fsg, "error in submission: %s --> %d\n",
1990 + fsg->ep0->name, rc);
1991 + }
1992 + return rc;
1993 +}
1994 +
1995 +static void ep0_complete(struct usb_ep *ep, struct usb_request *req)
1996 +{
1997 + struct fsg_dev *fsg = ep->driver_data;
1998 +
1999 + if (req->actual > 0)
2000 + dump_msg(fsg, fsg->ep0req_name, req->buf, req->actual);
2001 + if (req->status || req->actual != req->length)
2002 + DBG(fsg, "%s --> %d, %u/%u\n", __func__,
2003 + req->status, req->actual, req->length);
2004 + if (req->status == -ECONNRESET) // Request was cancelled
2005 + usb_ep_fifo_flush(ep);
2006 +
2007 + if (req->status == 0 && req->context)
2008 + ((fsg_routine_t) (req->context))(fsg);
2009 +}
2010 +
2011 +
2012 +/*-------------------------------------------------------------------------*/
2013 +
2014 +/* Bulk and interrupt endpoint completion handlers.
2015 + * These always run in_irq. */
2016 +
2017 +static void bulk_in_complete(struct usb_ep *ep, struct usb_request *req)
2018 +{
2019 + struct fsg_dev *fsg = ep->driver_data;
2020 + struct fsg_buffhd *bh = req->context;
2021 +
2022 + if (req->status || req->actual != req->length)
2023 + DBG(fsg, "%s --> %d, %u/%u\n", __func__,
2024 + req->status, req->actual, req->length);
2025 + if (req->status == -ECONNRESET) // Request was cancelled
2026 + usb_ep_fifo_flush(ep);
2027 +
2028 + /* Hold the lock while we update the request and buffer states */
2029 + smp_wmb();
2030 + spin_lock(&fsg->lock);
2031 + bh->inreq_busy = 0;
2032 + bh->state = BUF_STATE_EMPTY;
2033 + wakeup_thread(fsg);
2034 + spin_unlock(&fsg->lock);
2035 +}
2036 +
2037 +static void bulk_out_complete(struct usb_ep *ep, struct usb_request *req)
2038 +{
2039 + struct fsg_dev *fsg = ep->driver_data;
2040 + struct fsg_buffhd *bh = req->context;
2041 +
2042 + dump_msg(fsg, "bulk-out", req->buf, req->actual);
2043 + if (req->status || req->actual != bh->bulk_out_intended_length)
2044 + DBG(fsg, "%s --> %d, %u/%u\n", __func__,
2045 + req->status, req->actual,
2046 + bh->bulk_out_intended_length);
2047 + if (req->status == -ECONNRESET) // Request was cancelled
2048 + usb_ep_fifo_flush(ep);
2049 +
2050 + /* Hold the lock while we update the request and buffer states */
2051 + smp_wmb();
2052 + spin_lock(&fsg->lock);
2053 + bh->outreq_busy = 0;
2054 + bh->state = BUF_STATE_FULL;
2055 + wakeup_thread(fsg);
2056 + spin_unlock(&fsg->lock);
2057 +}
2058 +
2059 +
2060 +#ifdef CONFIG_USB_FILE_STORAGE_TEST
2061 +static void intr_in_complete(struct usb_ep *ep, struct usb_request *req)
2062 +{
2063 + struct fsg_dev *fsg = ep->driver_data;
2064 + struct fsg_buffhd *bh = req->context;
2065 +
2066 + if (req->status || req->actual != req->length)
2067 + DBG(fsg, "%s --> %d, %u/%u\n", __func__,
2068 + req->status, req->actual, req->length);
2069 + if (req->status == -ECONNRESET) // Request was cancelled
2070 + usb_ep_fifo_flush(ep);
2071 +
2072 + /* Hold the lock while we update the request and buffer states */
2073 + smp_wmb();
2074 + spin_lock(&fsg->lock);
2075 + fsg->intreq_busy = 0;
2076 + bh->state = BUF_STATE_EMPTY;
2077 + wakeup_thread(fsg);
2078 + spin_unlock(&fsg->lock);
2079 +}
2080 +
2081 +#else
2082 +static void intr_in_complete(struct usb_ep *ep, struct usb_request *req)
2083 +{}
2084 +#endif /* CONFIG_USB_FILE_STORAGE_TEST */
2085 +
2086 +
2087 +/*-------------------------------------------------------------------------*/
2088 +
2089 +/* Ep0 class-specific handlers. These always run in_irq. */
2090 +
2091 +#ifdef CONFIG_USB_FILE_STORAGE_TEST
2092 +static void received_cbi_adsc(struct fsg_dev *fsg, struct fsg_buffhd *bh)
2093 +{
2094 + struct usb_request *req = fsg->ep0req;
2095 + static u8 cbi_reset_cmnd[6] = {
2096 + SEND_DIAGNOSTIC, 4, 0xff, 0xff, 0xff, 0xff};
2097 +
2098 + /* Error in command transfer? */
2099 + if (req->status || req->length != req->actual ||
2100 + req->actual < 6 || req->actual > MAX_COMMAND_SIZE) {
2101 +
2102 + /* Not all controllers allow a protocol stall after
2103 + * receiving control-out data, but we'll try anyway. */
2104 + fsg_set_halt(fsg, fsg->ep0);
2105 + return; // Wait for reset
2106 + }
2107 +
2108 + /* Is it the special reset command? */
2109 + if (req->actual >= sizeof cbi_reset_cmnd &&
2110 + memcmp(req->buf, cbi_reset_cmnd,
2111 + sizeof cbi_reset_cmnd) == 0) {
2112 +
2113 + /* Raise an exception to stop the current operation
2114 + * and reinitialize our state. */
2115 + DBG(fsg, "cbi reset request\n");
2116 + raise_exception(fsg, FSG_STATE_RESET);
2117 + return;
2118 + }
2119 +
2120 + VDBG(fsg, "CB[I] accept device-specific command\n");
2121 + spin_lock(&fsg->lock);
2122 +
2123 + /* Save the command for later */
2124 + if (fsg->cbbuf_cmnd_size)
2125 + WARNING(fsg, "CB[I] overwriting previous command\n");
2126 + fsg->cbbuf_cmnd_size = req->actual;
2127 + memcpy(fsg->cbbuf_cmnd, req->buf, fsg->cbbuf_cmnd_size);
2128 +
2129 + wakeup_thread(fsg);
2130 + spin_unlock(&fsg->lock);
2131 +}
2132 +
2133 +#else
2134 +static void received_cbi_adsc(struct fsg_dev *fsg, struct fsg_buffhd *bh)
2135 +{}
2136 +#endif /* CONFIG_USB_FILE_STORAGE_TEST */
2137 +
2138 +
2139 +static int class_setup_req(struct fsg_dev *fsg,
2140 + const struct usb_ctrlrequest *ctrl)
2141 +{
2142 + struct usb_request *req = fsg->ep0req;
2143 + int value = -EOPNOTSUPP;
2144 + u16 w_index = le16_to_cpu(ctrl->wIndex);
2145 + u16 w_value = le16_to_cpu(ctrl->wValue);
2146 + u16 w_length = le16_to_cpu(ctrl->wLength);
2147 +
2148 + if (!fsg->config)
2149 + return value;
2150 +
2151 + /* Handle Bulk-only class-specific requests */
2152 + if (transport_is_bbb()) {
2153 + switch (ctrl->bRequest) {
2154 +
2155 + case US_BULK_RESET_REQUEST:
2156 + if (ctrl->bRequestType != (USB_DIR_OUT |
2157 + USB_TYPE_CLASS | USB_RECIP_INTERFACE))
2158 + break;
2159 + if (w_index != 0 || w_value != 0 || w_length != 0) {
2160 + value = -EDOM;
2161 + break;
2162 + }
2163 +
2164 + /* Raise an exception to stop the current operation
2165 + * and reinitialize our state. */
2166 + DBG(fsg, "bulk reset request\n");
2167 + raise_exception(fsg, FSG_STATE_RESET);
2168 + value = DELAYED_STATUS;
2169 + break;
2170 +
2171 + case US_BULK_GET_MAX_LUN:
2172 + if (ctrl->bRequestType != (USB_DIR_IN |
2173 + USB_TYPE_CLASS | USB_RECIP_INTERFACE))
2174 + break;
2175 + if (w_index != 0 || w_value != 0 || w_length != 1) {
2176 + value = -EDOM;
2177 + break;
2178 + }
2179 + VDBG(fsg, "get max LUN\n");
2180 + *(u8 *) req->buf = fsg->nluns - 1;
2181 + value = 1;
2182 + break;
2183 + }
2184 + }
2185 +
2186 + /* Handle CBI class-specific requests */
2187 + else {
2188 + switch (ctrl->bRequest) {
2189 +
2190 + case USB_CBI_ADSC_REQUEST:
2191 + if (ctrl->bRequestType != (USB_DIR_OUT |
2192 + USB_TYPE_CLASS | USB_RECIP_INTERFACE))
2193 + break;
2194 + if (w_index != 0 || w_value != 0) {
2195 + value = -EDOM;
2196 + break;
2197 + }
2198 + if (w_length > MAX_COMMAND_SIZE) {
2199 + value = -EOVERFLOW;
2200 + break;
2201 + }
2202 + value = w_length;
2203 + fsg->ep0req->context = received_cbi_adsc;
2204 + break;
2205 + }
2206 + }
2207 +
2208 + if (value == -EOPNOTSUPP)
2209 + VDBG(fsg,
2210 + "unknown class-specific control req "
2211 + "%02x.%02x v%04x i%04x l%u\n",
2212 + ctrl->bRequestType, ctrl->bRequest,
2213 + le16_to_cpu(ctrl->wValue), w_index, w_length);
2214 + return value;
2215 +}
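+
+/*
+ * Illustrative sketch (not part of the original file): the Bulk-Only
+ * "Get Max LUN" request handled above, as issued by the host.  Per the
+ * Mass Storage Bulk-Only spec this is an IN, class, interface request
+ * (bRequest 0xfe) with wLength 1; a gadget with two LUNs answers with the
+ * single byte 0x01.
+ */
+#if 0	/* example only, never compiled */
+static const struct usb_ctrlrequest example_get_max_lun = {
+	.bRequestType	= USB_DIR_IN | USB_TYPE_CLASS | USB_RECIP_INTERFACE,
+	.bRequest	= US_BULK_GET_MAX_LUN,
+	.wValue		= cpu_to_le16(0),
+	.wIndex		= cpu_to_le16(0),	/* interface number */
+	.wLength	= cpu_to_le16(1),
+};
+#endif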
2216 +
2217 +
2218 +/*-------------------------------------------------------------------------*/
2219 +
2220 +/* Ep0 standard request handlers. These always run in_irq. */
2221 +
2222 +static int standard_setup_req(struct fsg_dev *fsg,
2223 + const struct usb_ctrlrequest *ctrl)
2224 +{
2225 + struct usb_request *req = fsg->ep0req;
2226 + int value = -EOPNOTSUPP;
2227 + u16 w_index = le16_to_cpu(ctrl->wIndex);
2228 + u16 w_value = le16_to_cpu(ctrl->wValue);
2229 +
2230 + /* Usually this just stores reply data in the pre-allocated ep0 buffer,
2231 + * but config change events will also reconfigure hardware. */
2232 + switch (ctrl->bRequest) {
2233 +
2234 + case USB_REQ_GET_DESCRIPTOR:
2235 + if (ctrl->bRequestType != (USB_DIR_IN | USB_TYPE_STANDARD |
2236 + USB_RECIP_DEVICE))
2237 + break;
2238 + switch (w_value >> 8) {
2239 +
2240 + case USB_DT_DEVICE:
2241 + VDBG(fsg, "get device descriptor\n");
2242 + device_desc.bMaxPacketSize0 = fsg->ep0->maxpacket;
2243 + value = sizeof device_desc;
2244 + memcpy(req->buf, &device_desc, value);
2245 + break;
2246 + case USB_DT_DEVICE_QUALIFIER:
2247 + VDBG(fsg, "get device qualifier\n");
2248 + if (!gadget_is_dualspeed(fsg->gadget) ||
2249 + fsg->gadget->speed == USB_SPEED_SUPER)
2250 + break;
2251 + /*
2252 + * Assume ep0 uses the same maxpacket value for both
2253 + * speeds
2254 + */
2255 + dev_qualifier.bMaxPacketSize0 = fsg->ep0->maxpacket;
2256 + value = sizeof dev_qualifier;
2257 + memcpy(req->buf, &dev_qualifier, value);
2258 + break;
2259 +
2260 + case USB_DT_OTHER_SPEED_CONFIG:
2261 + VDBG(fsg, "get other-speed config descriptor\n");
2262 + if (!gadget_is_dualspeed(fsg->gadget) ||
2263 + fsg->gadget->speed == USB_SPEED_SUPER)
2264 + break;
2265 + goto get_config;
2266 + case USB_DT_CONFIG:
2267 + VDBG(fsg, "get configuration descriptor\n");
2268 +get_config:
2269 + value = populate_config_buf(fsg->gadget,
2270 + req->buf,
2271 + w_value >> 8,
2272 + w_value & 0xff);
2273 + break;
2274 +
2275 + case USB_DT_STRING:
2276 + VDBG(fsg, "get string descriptor\n");
2277 +
2278 + /* wIndex == language code */
2279 + value = usb_gadget_get_string(&fsg_stringtab,
2280 + w_value & 0xff, req->buf);
2281 + break;
2282 +
2283 + case USB_DT_BOS:
2284 + VDBG(fsg, "get bos descriptor\n");
2285 +
2286 + if (gadget_is_superspeed(fsg->gadget))
2287 + value = populate_bos(fsg, req->buf);
2288 + break;
2289 + }
2290 +
2291 + break;
2292 +
2293 + /* One config, two speeds */
2294 + case USB_REQ_SET_CONFIGURATION:
2295 + if (ctrl->bRequestType != (USB_DIR_OUT | USB_TYPE_STANDARD |
2296 + USB_RECIP_DEVICE))
2297 + break;
2298 + VDBG(fsg, "set configuration\n");
2299 + if (w_value == CONFIG_VALUE || w_value == 0) {
2300 + fsg->new_config = w_value;
2301 +
2302 + /* Raise an exception to wipe out previous transaction
2303 + * state (queued bufs, etc) and set the new config. */
2304 + raise_exception(fsg, FSG_STATE_CONFIG_CHANGE);
2305 + value = DELAYED_STATUS;
2306 + }
2307 + break;
2308 + case USB_REQ_GET_CONFIGURATION:
2309 + if (ctrl->bRequestType != (USB_DIR_IN | USB_TYPE_STANDARD |
2310 + USB_RECIP_DEVICE))
2311 + break;
2312 + VDBG(fsg, "get configuration\n");
2313 + *(u8 *) req->buf = fsg->config;
2314 + value = 1;
2315 + break;
2316 +
2317 + case USB_REQ_SET_INTERFACE:
2318 + if (ctrl->bRequestType != (USB_DIR_OUT | USB_TYPE_STANDARD |
2319 + USB_RECIP_INTERFACE))
2320 + break;
2321 + if (fsg->config && w_index == 0) {
2322 +
2323 + /* Raise an exception to wipe out previous transaction
2324 + * state (queued bufs, etc) and install the new
2325 + * interface altsetting. */
2326 + raise_exception(fsg, FSG_STATE_INTERFACE_CHANGE);
2327 + value = DELAYED_STATUS;
2328 + }
2329 + break;
2330 + case USB_REQ_GET_INTERFACE:
2331 + if (ctrl->bRequestType != (USB_DIR_IN | USB_TYPE_STANDARD |
2332 + USB_RECIP_INTERFACE))
2333 + break;
2334 + if (!fsg->config)
2335 + break;
2336 + if (w_index != 0) {
2337 + value = -EDOM;
2338 + break;
2339 + }
2340 + VDBG(fsg, "get interface\n");
2341 + *(u8 *) req->buf = 0;
2342 + value = 1;
2343 + break;
2344 +
2345 + default:
2346 + VDBG(fsg,
2347 + "unknown control req %02x.%02x v%04x i%04x l%u\n",
2348 + ctrl->bRequestType, ctrl->bRequest,
2349 + w_value, w_index, le16_to_cpu(ctrl->wLength));
2350 + }
2351 +
2352 + return value;
2353 +}
2354 +
2355 +
2356 +static int fsg_setup(struct usb_gadget *gadget,
2357 + const struct usb_ctrlrequest *ctrl)
2358 +{
2359 + struct fsg_dev *fsg = get_gadget_data(gadget);
2360 + int rc;
2361 + int w_length = le16_to_cpu(ctrl->wLength);
2362 +
2363 + ++fsg->ep0_req_tag; // Record arrival of a new request
2364 + fsg->ep0req->context = NULL;
2365 + fsg->ep0req->length = 0;
2366 + dump_msg(fsg, "ep0-setup", (u8 *) ctrl, sizeof(*ctrl));
2367 +
2368 + if ((ctrl->bRequestType & USB_TYPE_MASK) == USB_TYPE_CLASS)
2369 + rc = class_setup_req(fsg, ctrl);
2370 + else
2371 + rc = standard_setup_req(fsg, ctrl);
2372 +
2373 + /* Respond with data/status or defer until later? */
2374 + if (rc >= 0 && rc != DELAYED_STATUS) {
2375 + rc = min(rc, w_length);
2376 + fsg->ep0req->length = rc;
2377 + fsg->ep0req->zero = rc < w_length;
2378 + fsg->ep0req_name = (ctrl->bRequestType & USB_DIR_IN ?
2379 + "ep0-in" : "ep0-out");
2380 + rc = ep0_queue(fsg);
2381 + }
2382 +
2383 + /* Device either stalls (rc < 0) or reports success */
2384 + return rc;
2385 +}
2386 +
2387 +
2388 +/*-------------------------------------------------------------------------*/
2389 +
2390 +/* All the following routines run in process context */
2391 +
2392 +
2393 +/* Use this for bulk or interrupt transfers, not ep0 */
2394 +static void start_transfer(struct fsg_dev *fsg, struct usb_ep *ep,
2395 + struct usb_request *req, int *pbusy,
2396 + enum fsg_buffer_state *state)
2397 +{
2398 + int rc;
2399 +
2400 + if (ep == fsg->bulk_in)
2401 + dump_msg(fsg, "bulk-in", req->buf, req->length);
2402 + else if (ep == fsg->intr_in)
2403 + dump_msg(fsg, "intr-in", req->buf, req->length);
2404 +
2405 + spin_lock_irq(&fsg->lock);
2406 + *pbusy = 1;
2407 + *state = BUF_STATE_BUSY;
2408 + spin_unlock_irq(&fsg->lock);
2409 + rc = usb_ep_queue(ep, req, GFP_KERNEL);
2410 + if (rc != 0) {
2411 + *pbusy = 0;
2412 + *state = BUF_STATE_EMPTY;
2413 +
2414 + /* We can't do much more than wait for a reset */
2415 +
2416 + /* Note: currently the net2280 driver fails zero-length
2417 + * submissions if DMA is enabled. */
2418 + if (rc != -ESHUTDOWN && !(rc == -EOPNOTSUPP &&
2419 + req->length == 0))
2420 + WARNING(fsg, "error in submission: %s --> %d\n",
2421 + ep->name, rc);
2422 + }
2423 +}
2424 +
2425 +
2426 +static int sleep_thread(struct fsg_dev *fsg)
2427 +{
2428 + int rc = 0;
2429 +
2430 + /* Wait until a signal arrives or we are woken up */
2431 + for (;;) {
2432 + try_to_freeze();
2433 + set_current_state(TASK_INTERRUPTIBLE);
2434 + if (signal_pending(current)) {
2435 + rc = -EINTR;
2436 + break;
2437 + }
2438 + if (fsg->thread_wakeup_needed)
2439 + break;
2440 + schedule();
2441 + }
2442 + __set_current_state(TASK_RUNNING);
2443 + fsg->thread_wakeup_needed = 0;
2444 + return rc;
2445 +}
2446 +
2447 +
2448 +/*-------------------------------------------------------------------------*/
2449 +
2450 +static int do_read(struct fsg_dev *fsg)
2451 +{
2452 + struct fsg_lun *curlun = fsg->curlun;
2453 + u32 lba;
2454 + struct fsg_buffhd *bh;
2455 + int rc;
2456 + u32 amount_left;
2457 + loff_t file_offset, file_offset_tmp;
2458 + unsigned int amount;
2459 + ssize_t nread;
2460 +
2461 + /* Get the starting Logical Block Address and check that it's
2462 + * not too big */
2463 + if (fsg->cmnd[0] == READ_6)
2464 + lba = get_unaligned_be24(&fsg->cmnd[1]);
2465 + else {
2466 + lba = get_unaligned_be32(&fsg->cmnd[2]);
2467 +
2468 + /* We allow DPO (Disable Page Out = don't save data in the
2469 + * cache) and FUA (Force Unit Access = don't read from the
2470 + * cache), but we don't implement them. */
2471 + if ((fsg->cmnd[1] & ~0x18) != 0) {
2472 + curlun->sense_data = SS_INVALID_FIELD_IN_CDB;
2473 + return -EINVAL;
2474 + }
2475 + }
2476 + if (lba >= curlun->num_sectors) {
2477 + curlun->sense_data = SS_LOGICAL_BLOCK_ADDRESS_OUT_OF_RANGE;
2478 + return -EINVAL;
2479 + }
2480 + file_offset = ((loff_t) lba) << curlun->blkbits;
2481 +
2482 + /* Carry out the file reads */
2483 + amount_left = fsg->data_size_from_cmnd;
2484 + if (unlikely(amount_left == 0))
2485 + return -EIO; // No default reply
2486 +
2487 + for (;;) {
2488 +
2489 + /* Figure out how much we need to read:
2490 + * Try to read the remaining amount.
2491 + * But don't read more than the buffer size.
2492 + * And don't try to read past the end of the file.
2493 + */
2494 + amount = min((unsigned int) amount_left, mod_data.buflen);
2495 + amount = min((loff_t) amount,
2496 + curlun->file_length - file_offset);
2497 +
2498 + /* Wait for the next buffer to become available */
2499 + bh = fsg->next_buffhd_to_fill;
2500 + while (bh->state != BUF_STATE_EMPTY) {
2501 + rc = sleep_thread(fsg);
2502 + if (rc)
2503 + return rc;
2504 + }
2505 +
2506 + /* If we were asked to read past the end of file,
2507 + * end with an empty buffer. */
2508 + if (amount == 0) {
2509 + curlun->sense_data =
2510 + SS_LOGICAL_BLOCK_ADDRESS_OUT_OF_RANGE;
2511 + curlun->sense_data_info = file_offset >> curlun->blkbits;
2512 + curlun->info_valid = 1;
2513 + bh->inreq->length = 0;
2514 + bh->state = BUF_STATE_FULL;
2515 + break;
2516 + }
2517 +
2518 + /* Perform the read */
2519 + file_offset_tmp = file_offset;
2520 + nread = vfs_read(curlun->filp,
2521 + (char __user *) bh->buf,
2522 + amount, &file_offset_tmp);
2523 + VLDBG(curlun, "file read %u @ %llu -> %d\n", amount,
2524 + (unsigned long long) file_offset,
2525 + (int) nread);
2526 + if (signal_pending(current))
2527 + return -EINTR;
2528 +
2529 + if (nread < 0) {
2530 + LDBG(curlun, "error in file read: %d\n",
2531 + (int) nread);
2532 + nread = 0;
2533 + } else if (nread < amount) {
2534 + LDBG(curlun, "partial file read: %d/%u\n",
2535 + (int) nread, amount);
2536 + nread = round_down(nread, curlun->blksize);
2537 + }
2538 + file_offset += nread;
2539 + amount_left -= nread;
2540 + fsg->residue -= nread;
2541 +
2542 + /* Except at the end of the transfer, nread will be
2543 + * equal to the buffer size, which is divisible by the
2544 + * bulk-in maxpacket size.
2545 + */
2546 + bh->inreq->length = nread;
2547 + bh->state = BUF_STATE_FULL;
2548 +
2549 + /* If an error occurred, report it and its position */
2550 + if (nread < amount) {
2551 + curlun->sense_data = SS_UNRECOVERED_READ_ERROR;
2552 + curlun->sense_data_info = file_offset >> curlun->blkbits;
2553 + curlun->info_valid = 1;
2554 + break;
2555 + }
2556 +
2557 + if (amount_left == 0)
2558 + break; // No more left to read
2559 +
2560 + /* Send this buffer and go read some more */
2561 + bh->inreq->zero = 0;
2562 + start_transfer(fsg, fsg->bulk_in, bh->inreq,
2563 + &bh->inreq_busy, &bh->state);
2564 + fsg->next_buffhd_to_fill = bh->next;
2565 + }
2566 +
2567 + return -EIO; // No default reply
2568 +}
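+
+/*
+ * Worked example (illustrative, not from the original source): a READ(10)
+ * CDB of 28 00 00 00 08 00 00 00 10 00 carries LBA 0x800 in bytes 2..5
+ * and a transfer length of 0x10 blocks in bytes 7..8; with 512-byte
+ * blocks the loop above starts at file_offset 0x100000 and streams
+ * 8192 bytes to the host in buffer-sized chunks.
+ */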
2569 +
2570 +
2571 +/*-------------------------------------------------------------------------*/
2572 +
2573 +static int do_write(struct fsg_dev *fsg)
2574 +{
2575 + struct fsg_lun *curlun = fsg->curlun;
2576 + u32 lba;
2577 + struct fsg_buffhd *bh;
2578 + int get_some_more;
2579 + u32 amount_left_to_req, amount_left_to_write;
2580 + loff_t usb_offset, file_offset, file_offset_tmp;
2581 + unsigned int amount;
2582 + ssize_t nwritten;
2583 + int rc;
2584 +
2585 + if (curlun->ro) {
2586 + curlun->sense_data = SS_WRITE_PROTECTED;
2587 + return -EINVAL;
2588 + }
2589 + spin_lock(&curlun->filp->f_lock);
2590 + curlun->filp->f_flags &= ~O_SYNC; // Default is not to wait
2591 + spin_unlock(&curlun->filp->f_lock);
2592 +
2593 + /* Get the starting Logical Block Address and check that it's
2594 + * not too big */
2595 + if (fsg->cmnd[0] == WRITE_6)
2596 + lba = get_unaligned_be24(&fsg->cmnd[1]);
2597 + else {
2598 + lba = get_unaligned_be32(&fsg->cmnd[2]);
2599 +
2600 + /* We allow DPO (Disable Page Out = don't save data in the
2601 + * cache) and FUA (Force Unit Access = write directly to the
2602 + * medium). We don't implement DPO; we implement FUA by
2603 + * performing synchronous output. */
2604 + if ((fsg->cmnd[1] & ~0x18) != 0) {
2605 + curlun->sense_data = SS_INVALID_FIELD_IN_CDB;
2606 + return -EINVAL;
2607 + }
2608 + /* FUA */
2609 + if (!curlun->nofua && (fsg->cmnd[1] & 0x08)) {
2610 + spin_lock(&curlun->filp->f_lock);
2611 + curlun->filp->f_flags |= O_DSYNC;
2612 + spin_unlock(&curlun->filp->f_lock);
2613 + }
2614 + }
2615 + if (lba >= curlun->num_sectors) {
2616 + curlun->sense_data = SS_LOGICAL_BLOCK_ADDRESS_OUT_OF_RANGE;
2617 + return -EINVAL;
2618 + }
2619 +
2620 + /* Carry out the file writes */
2621 + get_some_more = 1;
2622 + file_offset = usb_offset = ((loff_t) lba) << curlun->blkbits;
2623 + amount_left_to_req = amount_left_to_write = fsg->data_size_from_cmnd;
2624 +
2625 + while (amount_left_to_write > 0) {
2626 +
2627 + /* Queue a request for more data from the host */
2628 + bh = fsg->next_buffhd_to_fill;
2629 + if (bh->state == BUF_STATE_EMPTY && get_some_more) {
2630 +
2631 + /* Figure out how much we want to get:
2632 + * Try to get the remaining amount,
2633 + * but not more than the buffer size.
2634 + */
2635 + amount = min(amount_left_to_req, mod_data.buflen);
2636 +
2637 + /* Beyond the end of the backing file? */
2638 + if (usb_offset >= curlun->file_length) {
2639 + get_some_more = 0;
2640 + curlun->sense_data =
2641 + SS_LOGICAL_BLOCK_ADDRESS_OUT_OF_RANGE;
2642 + curlun->sense_data_info = usb_offset >> curlun->blkbits;
2643 + curlun->info_valid = 1;
2644 + continue;
2645 + }
2646 +
2647 + /* Get the next buffer */
2648 + usb_offset += amount;
2649 + fsg->usb_amount_left -= amount;
2650 + amount_left_to_req -= amount;
2651 + if (amount_left_to_req == 0)
2652 + get_some_more = 0;
2653 +
2654 + /* Except at the end of the transfer, amount will be
2655 + * equal to the buffer size, which is divisible by
2656 + * the bulk-out maxpacket size.
2657 + */
2658 + set_bulk_out_req_length(fsg, bh, amount);
2659 + start_transfer(fsg, fsg->bulk_out, bh->outreq,
2660 + &bh->outreq_busy, &bh->state);
2661 + fsg->next_buffhd_to_fill = bh->next;
2662 + continue;
2663 + }
2664 +
2665 + /* Write the received data to the backing file */
2666 + bh = fsg->next_buffhd_to_drain;
2667 + if (bh->state == BUF_STATE_EMPTY && !get_some_more)
2668 + break; // We stopped early
2669 + if (bh->state == BUF_STATE_FULL) {
2670 + smp_rmb();
2671 + fsg->next_buffhd_to_drain = bh->next;
2672 + bh->state = BUF_STATE_EMPTY;
2673 +
2674 + /* Did something go wrong with the transfer? */
2675 + if (bh->outreq->status != 0) {
2676 + curlun->sense_data = SS_COMMUNICATION_FAILURE;
2677 + curlun->sense_data_info = file_offset >> curlun->blkbits;
2678 + curlun->info_valid = 1;
2679 + break;
2680 + }
2681 +
2682 + amount = bh->outreq->actual;
2683 + if (curlun->file_length - file_offset < amount) {
2684 + LERROR(curlun,
2685 + "write %u @ %llu beyond end %llu\n",
2686 + amount, (unsigned long long) file_offset,
2687 + (unsigned long long) curlun->file_length);
2688 + amount = curlun->file_length - file_offset;
2689 + }
2690 +
2691 + /* Don't accept excess data. The spec doesn't say
2692 + * what to do in this case. We'll ignore the error.
2693 + */
2694 + amount = min(amount, bh->bulk_out_intended_length);
2695 +
2696 + /* Don't write a partial block */
2697 + amount = round_down(amount, curlun->blksize);
2698 + if (amount == 0)
2699 + goto empty_write;
2700 +
2701 + /* Perform the write */
2702 + file_offset_tmp = file_offset;
2703 + nwritten = vfs_write(curlun->filp,
2704 + (char __user *) bh->buf,
2705 + amount, &file_offset_tmp);
2706 + VLDBG(curlun, "file write %u @ %llu -> %d\n", amount,
2707 + (unsigned long long) file_offset,
2708 + (int) nwritten);
2709 + if (signal_pending(current))
2710 + return -EINTR; // Interrupted!
2711 +
2712 + if (nwritten < 0) {
2713 + LDBG(curlun, "error in file write: %d\n",
2714 + (int) nwritten);
2715 + nwritten = 0;
2716 + } else if (nwritten < amount) {
2717 + LDBG(curlun, "partial file write: %d/%u\n",
2718 + (int) nwritten, amount);
2719 + nwritten = round_down(nwritten, curlun->blksize);
2720 + }
2721 + file_offset += nwritten;
2722 + amount_left_to_write -= nwritten;
2723 + fsg->residue -= nwritten;
2724 +
2725 + /* If an error occurred, report it and its position */
2726 + if (nwritten < amount) {
2727 + curlun->sense_data = SS_WRITE_ERROR;
2728 + curlun->sense_data_info = file_offset >> curlun->blkbits;
2729 + curlun->info_valid = 1;
2730 + break;
2731 + }
2732 +
2733 + empty_write:
2734 + /* Did the host decide to stop early? */
2735 + if (bh->outreq->actual < bh->bulk_out_intended_length) {
2736 + fsg->short_packet_received = 1;
2737 + break;
2738 + }
2739 + continue;
2740 + }
2741 +
2742 + /* Wait for something to happen */
2743 + rc = sleep_thread(fsg);
2744 + if (rc)
2745 + return rc;
2746 + }
2747 +
2748 + return -EIO; // No default reply
2749 +}
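+
+/*
+ * Example (illustrative): a WRITE(10) CDB with 0x08 set in byte 1
+ * requests FUA, so unless the LUN was loaded with nofua the code above
+ * switches the backing file to O_DSYNC and each vfs_write reaches stable
+ * storage before status is returned to the host.
+ */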
2750 +
2751 +
2752 +/*-------------------------------------------------------------------------*/
2753 +
2754 +static int do_synchronize_cache(struct fsg_dev *fsg)
2755 +{
2756 + struct fsg_lun *curlun = fsg->curlun;
2757 + int rc;
2758 +
2759 + /* We ignore the requested LBA and write out all file's
2760 + * dirty data buffers. */
2761 + rc = fsg_lun_fsync_sub(curlun);
2762 + if (rc)
2763 + curlun->sense_data = SS_WRITE_ERROR;
2764 + return 0;
2765 +}
2766 +
2767 +
2768 +/*-------------------------------------------------------------------------*/
2769 +
2770 +static void invalidate_sub(struct fsg_lun *curlun)
2771 +{
2772 + struct file *filp = curlun->filp;
2773 + struct inode *inode = filp->f_path.dentry->d_inode;
2774 + unsigned long rc;
2775 +
2776 + rc = invalidate_mapping_pages(inode->i_mapping, 0, -1);
2777 + VLDBG(curlun, "invalidate_mapping_pages -> %ld\n", rc);
2778 +}
2779 +
2780 +static int do_verify(struct fsg_dev *fsg)
2781 +{
2782 + struct fsg_lun *curlun = fsg->curlun;
2783 + u32 lba;
2784 + u32 verification_length;
2785 + struct fsg_buffhd *bh = fsg->next_buffhd_to_fill;
2786 + loff_t file_offset, file_offset_tmp;
2787 + u32 amount_left;
2788 + unsigned int amount;
2789 + ssize_t nread;
2790 +
2791 + /* Get the starting Logical Block Address and check that it's
2792 + * not too big */
2793 + lba = get_unaligned_be32(&fsg->cmnd[2]);
2794 + if (lba >= curlun->num_sectors) {
2795 + curlun->sense_data = SS_LOGICAL_BLOCK_ADDRESS_OUT_OF_RANGE;
2796 + return -EINVAL;
2797 + }
2798 +
2799 + /* We allow DPO (Disable Page Out = don't save data in the
2800 + * cache) but we don't implement it. */
2801 + if ((fsg->cmnd[1] & ~0x10) != 0) {
2802 + curlun->sense_data = SS_INVALID_FIELD_IN_CDB;
2803 + return -EINVAL;
2804 + }
2805 +
2806 + verification_length = get_unaligned_be16(&fsg->cmnd[7]);
2807 + if (unlikely(verification_length == 0))
2808 + return -EIO; // No default reply
2809 +
2810 + /* Prepare to carry out the file verify */
2811 + amount_left = verification_length << curlun->blkbits;
2812 + file_offset = ((loff_t) lba) << curlun->blkbits;
2813 +
2814 + /* Write out all the dirty buffers before invalidating them */
2815 + fsg_lun_fsync_sub(curlun);
2816 + if (signal_pending(current))
2817 + return -EINTR;
2818 +
2819 + invalidate_sub(curlun);
2820 + if (signal_pending(current))
2821 + return -EINTR;
2822 +
2823 + /* Just try to read the requested blocks */
2824 + while (amount_left > 0) {
2825 +
2826 + /* Figure out how much we need to read:
2827 + * Try to read the remaining amount, but not more than
2828 + * the buffer size.
2829 + * And don't try to read past the end of the file.
2830 + */
2831 + amount = min((unsigned int) amount_left, mod_data.buflen);
2832 + amount = min((loff_t) amount,
2833 + curlun->file_length - file_offset);
2834 + if (amount == 0) {
2835 + curlun->sense_data =
2836 + SS_LOGICAL_BLOCK_ADDRESS_OUT_OF_RANGE;
2837 + curlun->sense_data_info = file_offset >> curlun->blkbits;
2838 + curlun->info_valid = 1;
2839 + break;
2840 + }
2841 +
2842 + /* Perform the read */
2843 + file_offset_tmp = file_offset;
2844 + nread = vfs_read(curlun->filp,
2845 + (char __user *) bh->buf,
2846 + amount, &file_offset_tmp);
2847 + VLDBG(curlun, "file read %u @ %llu -> %d\n", amount,
2848 + (unsigned long long) file_offset,
2849 + (int) nread);
2850 + if (signal_pending(current))
2851 + return -EINTR;
2852 +
2853 + if (nread < 0) {
2854 + LDBG(curlun, "error in file verify: %d\n",
2855 + (int) nread);
2856 + nread = 0;
2857 + } else if (nread < amount) {
2858 + LDBG(curlun, "partial file verify: %d/%u\n",
2859 + (int) nread, amount);
2860 + nread = round_down(nread, curlun->blksize);
2861 + }
2862 + if (nread == 0) {
2863 + curlun->sense_data = SS_UNRECOVERED_READ_ERROR;
2864 + curlun->sense_data_info = file_offset >> curlun->blkbits;
2865 + curlun->info_valid = 1;
2866 + break;
2867 + }
2868 + file_offset += nread;
2869 + amount_left -= nread;
2870 + }
2871 + return 0;
2872 +}
2873 +
2874 +
2875 +/*-------------------------------------------------------------------------*/
2876 +
2877 +static int do_inquiry(struct fsg_dev *fsg, struct fsg_buffhd *bh)
2878 +{
2879 + u8 *buf = (u8 *) bh->buf;
2880 +
2881 + static char vendor_id[] = "Linux ";
2882 + static char product_disk_id[] = "File-Stor Gadget";
2883 + static char product_cdrom_id[] = "File-CD Gadget ";
2884 +
2885 + if (!fsg->curlun) { // Unsupported LUNs are okay
2886 + fsg->bad_lun_okay = 1;
2887 + memset(buf, 0, 36);
2888 + buf[0] = 0x7f; // Unsupported, no device-type
2889 + buf[4] = 31; // Additional length
2890 + return 36;
2891 + }
2892 +
2893 + memset(buf, 0, 8);
2894 + buf[0] = (mod_data.cdrom ? TYPE_ROM : TYPE_DISK);
2895 + if (mod_data.removable)
2896 + buf[1] = 0x80;
2897 + buf[2] = 2; // ANSI SCSI level 2
2898 + buf[3] = 2; // SCSI-2 INQUIRY data format
2899 + buf[4] = 31; // Additional length
2900 + // No special options
2901 + sprintf(buf + 8, "%-8s%-16s%04x", vendor_id,
2902 + (mod_data.cdrom ? product_cdrom_id :
2903 + product_disk_id),
2904 + mod_data.release);
2905 + return 36;
2906 +}
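+
+/*
+ * Worked example (not from the original source): for a removable disk LUN
+ * the reply above begins 00 80 02 02 1f 00 00 00, followed by the 8-byte
+ * vendor string "Linux   ", the 16-byte product string
+ * "File-Stor Gadget" and the release number as four hex digits.
+ */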
2907 +
2908 +
2909 +static int do_request_sense(struct fsg_dev *fsg, struct fsg_buffhd *bh)
2910 +{
2911 + struct fsg_lun *curlun = fsg->curlun;
2912 + u8 *buf = (u8 *) bh->buf;
2913 + u32 sd, sdinfo;
2914 + int valid;
2915 +
2916 + /*
2917 + * From the SCSI-2 spec., section 7.9 (Unit attention condition):
2918 + *
2919 + * If a REQUEST SENSE command is received from an initiator
2920 + * with a pending unit attention condition (before the target
2921 + * generates the contingent allegiance condition), then the
2922 + * target shall either:
2923 + * a) report any pending sense data and preserve the unit
2924 + * attention condition on the logical unit, or,
2925 + * b) report the unit attention condition, may discard any
2926 + * pending sense data, and clear the unit attention
2927 + * condition on the logical unit for that initiator.
2928 + *
2929 + * FSG normally uses option a); enable this code to use option b).
2930 + */
2931 +#if 0
2932 + if (curlun && curlun->unit_attention_data != SS_NO_SENSE) {
2933 + curlun->sense_data = curlun->unit_attention_data;
2934 + curlun->unit_attention_data = SS_NO_SENSE;
2935 + }
2936 +#endif
2937 +
2938 + if (!curlun) { // Unsupported LUNs are okay
2939 + fsg->bad_lun_okay = 1;
2940 + sd = SS_LOGICAL_UNIT_NOT_SUPPORTED;
2941 + sdinfo = 0;
2942 + valid = 0;
2943 + } else {
2944 + sd = curlun->sense_data;
2945 + sdinfo = curlun->sense_data_info;
2946 + valid = curlun->info_valid << 7;
2947 + curlun->sense_data = SS_NO_SENSE;
2948 + curlun->sense_data_info = 0;
2949 + curlun->info_valid = 0;
2950 + }
2951 +
2952 + memset(buf, 0, 18);
2953 + buf[0] = valid | 0x70; // Valid, current error
2954 + buf[2] = SK(sd);
2955 + put_unaligned_be32(sdinfo, &buf[3]); /* Sense information */
2956 + buf[7] = 18 - 8; // Additional sense length
2957 + buf[12] = ASC(sd);
2958 + buf[13] = ASCQ(sd);
2959 + return 18;
2960 +}
2961 +
2962 +
2963 +static int do_read_capacity(struct fsg_dev *fsg, struct fsg_buffhd *bh)
2964 +{
2965 + struct fsg_lun *curlun = fsg->curlun;
2966 + u32 lba = get_unaligned_be32(&fsg->cmnd[2]);
2967 + int pmi = fsg->cmnd[8];
2968 + u8 *buf = (u8 *) bh->buf;
2969 +
2970 + /* Check the PMI and LBA fields */
2971 + if (pmi > 1 || (pmi == 0 && lba != 0)) {
2972 + curlun->sense_data = SS_INVALID_FIELD_IN_CDB;
2973 + return -EINVAL;
2974 + }
2975 +
2976 + put_unaligned_be32(curlun->num_sectors - 1, &buf[0]);
2977 + /* Max logical block */
2978 + put_unaligned_be32(curlun->blksize, &buf[4]); /* Block length */
2979 + return 8;
2980 +}
2981 +
2982 +
2983 +static int do_read_header(struct fsg_dev *fsg, struct fsg_buffhd *bh)
2984 +{
2985 + struct fsg_lun *curlun = fsg->curlun;
2986 + int msf = fsg->cmnd[1] & 0x02;
2987 + u32 lba = get_unaligned_be32(&fsg->cmnd[2]);
2988 + u8 *buf = (u8 *) bh->buf;
2989 +
2990 + if ((fsg->cmnd[1] & ~0x02) != 0) { /* Mask away MSF */
2991 + curlun->sense_data = SS_INVALID_FIELD_IN_CDB;
2992 + return -EINVAL;
2993 + }
2994 + if (lba >= curlun->num_sectors) {
2995 + curlun->sense_data = SS_LOGICAL_BLOCK_ADDRESS_OUT_OF_RANGE;
2996 + return -EINVAL;
2997 + }
2998 +
2999 + memset(buf, 0, 8);
3000 + buf[0] = 0x01; /* 2048 bytes of user data, rest is EC */
3001 + store_cdrom_address(&buf[4], msf, lba);
3002 + return 8;
3003 +}
3004 +
3005 +
3006 +static int do_read_toc(struct fsg_dev *fsg, struct fsg_buffhd *bh)
3007 +{
3008 + struct fsg_lun *curlun = fsg->curlun;
3009 + int msf = fsg->cmnd[1] & 0x02;
3010 + int start_track = fsg->cmnd[6];
3011 + u8 *buf = (u8 *) bh->buf;
3012 +
3013 + if ((fsg->cmnd[1] & ~0x02) != 0 || /* Mask away MSF */
3014 + start_track > 1) {
3015 + curlun->sense_data = SS_INVALID_FIELD_IN_CDB;
3016 + return -EINVAL;
3017 + }
3018 +
3019 + memset(buf, 0, 20);
3020 + buf[1] = (20-2); /* TOC data length */
3021 + buf[2] = 1; /* First track number */
3022 + buf[3] = 1; /* Last track number */
3023 + buf[5] = 0x16; /* Data track, copying allowed */
3024 + buf[6] = 0x01; /* Only track is number 1 */
3025 + store_cdrom_address(&buf[8], msf, 0);
3026 +
3027 + buf[13] = 0x16; /* Lead-out track is data */
3028 + buf[14] = 0xAA; /* Lead-out track number */
3029 + store_cdrom_address(&buf[16], msf, curlun->num_sectors);
3030 + return 20;
3031 +}
3032 +
3033 +
3034 +static int do_mode_sense(struct fsg_dev *fsg, struct fsg_buffhd *bh)
3035 +{
3036 + struct fsg_lun *curlun = fsg->curlun;
3037 + int mscmnd = fsg->cmnd[0];
3038 + u8 *buf = (u8 *) bh->buf;
3039 + u8 *buf0 = buf;
3040 + int pc, page_code;
3041 + int changeable_values, all_pages;
3042 + int valid_page = 0;
3043 + int len, limit;
3044 +
3045 + if ((fsg->cmnd[1] & ~0x08) != 0) { // Mask away DBD
3046 + curlun->sense_data = SS_INVALID_FIELD_IN_CDB;
3047 + return -EINVAL;
3048 + }
3049 + pc = fsg->cmnd[2] >> 6;
3050 + page_code = fsg->cmnd[2] & 0x3f;
3051 + if (pc == 3) {
3052 + curlun->sense_data = SS_SAVING_PARAMETERS_NOT_SUPPORTED;
3053 + return -EINVAL;
3054 + }
3055 + changeable_values = (pc == 1);
3056 + all_pages = (page_code == 0x3f);
3057 +
3058 + /* Write the mode parameter header. Fixed values are: default
3059 + * medium type, no cache control (DPOFUA), and no block descriptors.
3060 + * The only variable value is the WriteProtect bit. We will fill in
3061 + * the mode data length later. */
3062 + memset(buf, 0, 8);
3063 + if (mscmnd == MODE_SENSE) {
3064 + buf[2] = (curlun->ro ? 0x80 : 0x00); // WP, DPOFUA
3065 + buf += 4;
3066 + limit = 255;
3067 + } else { // MODE_SENSE_10
3068 + buf[3] = (curlun->ro ? 0x80 : 0x00); // WP, DPOFUA
3069 + buf += 8;
3070 + limit = 65535; // Should really be mod_data.buflen
3071 + }
3072 +
3073 + /* No block descriptors */
3074 +
3075 + /* The mode pages, in numerical order. The only page we support
3076 + * is the Caching page. */
3077 + if (page_code == 0x08 || all_pages) {
3078 + valid_page = 1;
3079 + buf[0] = 0x08; // Page code
3080 + buf[1] = 10; // Page length
3081 + memset(buf+2, 0, 10); // None of the fields are changeable
3082 +
3083 + if (!changeable_values) {
3084 + buf[2] = 0x04; // Write cache enable,
3085 + // Read cache not disabled
3086 + // No cache retention priorities
3087 + put_unaligned_be16(0xffff, &buf[4]);
3088 + /* Don't disable prefetch */
3089 + /* Minimum prefetch = 0 */
3090 + put_unaligned_be16(0xffff, &buf[8]);
3091 + /* Maximum prefetch */
3092 + put_unaligned_be16(0xffff, &buf[10]);
3093 + /* Maximum prefetch ceiling */
3094 + }
3095 + buf += 12;
3096 + }
3097 +
3098 + /* Check that a valid page was requested and the mode data length
3099 + * isn't too long. */
3100 + len = buf - buf0;
3101 + if (!valid_page || len > limit) {
3102 + curlun->sense_data = SS_INVALID_FIELD_IN_CDB;
3103 + return -EINVAL;
3104 + }
3105 +
3106 + /* Store the mode data length */
3107 + if (mscmnd == MODE_SENSE)
3108 + buf0[0] = len - 1;
3109 + else
3110 + put_unaligned_be16(len - 2, buf0);
3111 + return len;
3112 +}
3113 +
3114 +
3115 +static int do_start_stop(struct fsg_dev *fsg)
3116 +{
3117 + struct fsg_lun *curlun = fsg->curlun;
3118 + int loej, start;
3119 +
3120 + if (!mod_data.removable) {
3121 + curlun->sense_data = SS_INVALID_COMMAND;
3122 + return -EINVAL;
3123 + }
3124 +
3125 + // int immed = fsg->cmnd[1] & 0x01;
3126 + loej = fsg->cmnd[4] & 0x02;
3127 + start = fsg->cmnd[4] & 0x01;
3128 +
3129 +#ifdef CONFIG_USB_FILE_STORAGE_TEST
3130 + if ((fsg->cmnd[1] & ~0x01) != 0 || // Mask away Immed
3131 + (fsg->cmnd[4] & ~0x03) != 0) { // Mask LoEj, Start
3132 + curlun->sense_data = SS_INVALID_FIELD_IN_CDB;
3133 + return -EINVAL;
3134 + }
3135 +
3136 + if (!start) {
3137 +
3138 + /* Are we allowed to unload the media? */
3139 + if (curlun->prevent_medium_removal) {
3140 + LDBG(curlun, "unload attempt prevented\n");
3141 + curlun->sense_data = SS_MEDIUM_REMOVAL_PREVENTED;
3142 + return -EINVAL;
3143 + }
3144 + if (loej) { // Simulate an unload/eject
3145 + up_read(&fsg->filesem);
3146 + down_write(&fsg->filesem);
3147 + fsg_lun_close(curlun);
3148 + up_write(&fsg->filesem);
3149 + down_read(&fsg->filesem);
3150 + }
3151 + } else {
3152 +
3153 + /* Our emulation doesn't support mounting; the medium is
3154 + * available for use as soon as it is loaded. */
3155 + if (!fsg_lun_is_open(curlun)) {
3156 + curlun->sense_data = SS_MEDIUM_NOT_PRESENT;
3157 + return -EINVAL;
3158 + }
3159 + }
3160 +#endif
3161 + return 0;
3162 +}
3163 +
3164 +
3165 +static int do_prevent_allow(struct fsg_dev *fsg)
3166 +{
3167 + struct fsg_lun *curlun = fsg->curlun;
3168 + int prevent;
3169 +
3170 + if (!mod_data.removable) {
3171 + curlun->sense_data = SS_INVALID_COMMAND;
3172 + return -EINVAL;
3173 + }
3174 +
3175 + prevent = fsg->cmnd[4] & 0x01;
3176 + if ((fsg->cmnd[4] & ~0x01) != 0) { // Mask away Prevent
3177 + curlun->sense_data = SS_INVALID_FIELD_IN_CDB;
3178 + return -EINVAL;
3179 + }
3180 +
3181 + if (curlun->prevent_medium_removal && !prevent)
3182 + fsg_lun_fsync_sub(curlun);
3183 + curlun->prevent_medium_removal = prevent;
3184 + return 0;
3185 +}
3186 +
3187 +
3188 +static int do_read_format_capacities(struct fsg_dev *fsg,
3189 + struct fsg_buffhd *bh)
3190 +{
3191 + struct fsg_lun *curlun = fsg->curlun;
3192 + u8 *buf = (u8 *) bh->buf;
3193 +
3194 + buf[0] = buf[1] = buf[2] = 0;
3195 + buf[3] = 8; // Only the Current/Maximum Capacity Descriptor
3196 + buf += 4;
3197 +
3198 + put_unaligned_be32(curlun->num_sectors, &buf[0]);
3199 + /* Number of blocks */
3200 + put_unaligned_be32(curlun->blksize, &buf[4]); /* Block length */
3201 + buf[4] = 0x02; /* Current capacity */
3202 + return 12;
3203 +}
3204 +
3205 +
3206 +static int do_mode_select(struct fsg_dev *fsg, struct fsg_buffhd *bh)
3207 +{
3208 + struct fsg_lun *curlun = fsg->curlun;
3209 +
3210 + /* We don't support MODE SELECT */
3211 + curlun->sense_data = SS_INVALID_COMMAND;
3212 + return -EINVAL;
3213 +}
3214 +
3215 +
3216 +/*-------------------------------------------------------------------------*/
3217 +
3218 +static int halt_bulk_in_endpoint(struct fsg_dev *fsg)
3219 +{
3220 + int rc;
3221 +
3222 + rc = fsg_set_halt(fsg, fsg->bulk_in);
3223 + if (rc == -EAGAIN)
3224 + VDBG(fsg, "delayed bulk-in endpoint halt\n");
3225 + while (rc != 0) {
3226 + if (rc != -EAGAIN) {
3227 + WARNING(fsg, "usb_ep_set_halt -> %d\n", rc);
3228 + rc = 0;
3229 + break;
3230 + }
3231 +
3232 + /* Wait for a short time and then try again */
3233 + if (msleep_interruptible(100) != 0)
3234 + return -EINTR;
3235 + rc = usb_ep_set_halt(fsg->bulk_in);
3236 + }
3237 + return rc;
3238 +}
3239 +
3240 +static int wedge_bulk_in_endpoint(struct fsg_dev *fsg)
3241 +{
3242 + int rc;
3243 +
3244 + DBG(fsg, "bulk-in set wedge\n");
3245 + rc = usb_ep_set_wedge(fsg->bulk_in);
3246 + if (rc == -EAGAIN)
3247 + VDBG(fsg, "delayed bulk-in endpoint wedge\n");
3248 + while (rc != 0) {
3249 + if (rc != -EAGAIN) {
3250 + WARNING(fsg, "usb_ep_set_wedge -> %d\n", rc);
3251 + rc = 0;
3252 + break;
3253 + }
3254 +
3255 + /* Wait for a short time and then try again */
3256 + if (msleep_interruptible(100) != 0)
3257 + return -EINTR;
3258 + rc = usb_ep_set_wedge(fsg->bulk_in);
3259 + }
3260 + return rc;
3261 +}
3262 +
3263 +static int throw_away_data(struct fsg_dev *fsg)
3264 +{
3265 + struct fsg_buffhd *bh;
3266 + u32 amount;
3267 + int rc;
3268 +
3269 + while ((bh = fsg->next_buffhd_to_drain)->state != BUF_STATE_EMPTY ||
3270 + fsg->usb_amount_left > 0) {
3271 +
3272 + /* Throw away the data in a filled buffer */
3273 + if (bh->state == BUF_STATE_FULL) {
3274 + smp_rmb();
3275 + bh->state = BUF_STATE_EMPTY;
3276 + fsg->next_buffhd_to_drain = bh->next;
3277 +
3278 + /* A short packet or an error ends everything */
3279 + if (bh->outreq->actual < bh->bulk_out_intended_length ||
3280 + bh->outreq->status != 0) {
3281 + raise_exception(fsg, FSG_STATE_ABORT_BULK_OUT);
3282 + return -EINTR;
3283 + }
3284 + continue;
3285 + }
3286 +
3287 + /* Try to submit another request if we need one */
3288 + bh = fsg->next_buffhd_to_fill;
3289 + if (bh->state == BUF_STATE_EMPTY && fsg->usb_amount_left > 0) {
3290 + amount = min(fsg->usb_amount_left,
3291 + (u32) mod_data.buflen);
3292 +
3293 + /* Except at the end of the transfer, amount will be
3294 + * equal to the buffer size, which is divisible by
3295 + * the bulk-out maxpacket size.
3296 + */
3297 + set_bulk_out_req_length(fsg, bh, amount);
3298 + start_transfer(fsg, fsg->bulk_out, bh->outreq,
3299 + &bh->outreq_busy, &bh->state);
3300 + fsg->next_buffhd_to_fill = bh->next;
3301 + fsg->usb_amount_left -= amount;
3302 + continue;
3303 + }
3304 +
3305 + /* Otherwise wait for something to happen */
3306 + rc = sleep_thread(fsg);
3307 + if (rc)
3308 + return rc;
3309 + }
3310 + return 0;
3311 +}
3312 +
3313 +
3314 +static int finish_reply(struct fsg_dev *fsg)
3315 +{
3316 + struct fsg_buffhd *bh = fsg->next_buffhd_to_fill;
3317 + int rc = 0;
3318 +
3319 + switch (fsg->data_dir) {
3320 + case DATA_DIR_NONE:
3321 + break; // Nothing to send
3322 +
3323 + /* If we don't know whether the host wants to read or write,
3324 + * this must be CB or CBI with an unknown command. We mustn't
3325 + * try to send or receive any data. So stall both bulk pipes
3326 + * if we can and wait for a reset. */
3327 + case DATA_DIR_UNKNOWN:
3328 + if (mod_data.can_stall) {
3329 + fsg_set_halt(fsg, fsg->bulk_out);
3330 + rc = halt_bulk_in_endpoint(fsg);
3331 + }
3332 + break;
3333 +
3334 + /* All but the last buffer of data must have already been sent */
3335 + case DATA_DIR_TO_HOST:
3336 + if (fsg->data_size == 0)
3337 + ; // Nothing to send
3338 +
3339 + /* If there's no residue, simply send the last buffer */
3340 + else if (fsg->residue == 0) {
3341 + bh->inreq->zero = 0;
3342 + start_transfer(fsg, fsg->bulk_in, bh->inreq,
3343 + &bh->inreq_busy, &bh->state);
3344 + fsg->next_buffhd_to_fill = bh->next;
3345 + }
3346 +
3347 + /* There is a residue. For CB and CBI, simply mark the end
3348 + * of the data with a short packet. However, if we are
3349 + * allowed to stall, there was no data at all (residue ==
3350 + * data_size), and the command failed (invalid LUN or
3351 + * sense data is set), then halt the bulk-in endpoint
3352 + * instead. */
3353 + else if (!transport_is_bbb()) {
3354 + if (mod_data.can_stall &&
3355 + fsg->residue == fsg->data_size &&
3356 + (!fsg->curlun || fsg->curlun->sense_data != SS_NO_SENSE)) {
3357 + bh->state = BUF_STATE_EMPTY;
3358 + rc = halt_bulk_in_endpoint(fsg);
3359 + } else {
3360 + bh->inreq->zero = 1;
3361 + start_transfer(fsg, fsg->bulk_in, bh->inreq,
3362 + &bh->inreq_busy, &bh->state);
3363 + fsg->next_buffhd_to_fill = bh->next;
3364 + }
3365 + }
3366 +
3367 + /*
3368 + * For Bulk-only, mark the end of the data with a short
3369 + * packet. If we are allowed to stall, halt the bulk-in
3370 + * endpoint. (Note: This violates the Bulk-Only Transport
3371 + * specification, which requires us to pad the data if we
3372 + * don't halt the endpoint. Presumably nobody will mind.)
3373 + */
3374 + else {
3375 + bh->inreq->zero = 1;
3376 + start_transfer(fsg, fsg->bulk_in, bh->inreq,
3377 + &bh->inreq_busy, &bh->state);
3378 + fsg->next_buffhd_to_fill = bh->next;
3379 + if (mod_data.can_stall)
3380 + rc = halt_bulk_in_endpoint(fsg);
3381 + }
3382 + break;
3383 +
3384 + /* We have processed all we want from the data the host has sent.
3385 + * There may still be outstanding bulk-out requests. */
3386 + case DATA_DIR_FROM_HOST:
3387 + if (fsg->residue == 0)
3388 + ; // Nothing to receive
3389 +
3390 + /* Did the host stop sending unexpectedly early? */
3391 + else if (fsg->short_packet_received) {
3392 + raise_exception(fsg, FSG_STATE_ABORT_BULK_OUT);
3393 + rc = -EINTR;
3394 + }
3395 +
3396 + /* We haven't processed all the incoming data. Even though
3397 + * we may be allowed to stall, doing so would cause a race.
3398 + * The controller may already have ACK'ed all the remaining
3399 + * bulk-out packets, in which case the host wouldn't see a
3400 + * STALL. Not realizing the endpoint was halted, it wouldn't
3401 + * clear the halt -- leading to problems later on. */
3402 +#if 0
3403 + else if (mod_data.can_stall) {
3404 + fsg_set_halt(fsg, fsg->bulk_out);
3405 + raise_exception(fsg, FSG_STATE_ABORT_BULK_OUT);
3406 + rc = -EINTR;
3407 + }
3408 +#endif
3409 +
3410 + /* We can't stall. Read in the excess data and throw it
3411 + * all away. */
3412 + else
3413 + rc = throw_away_data(fsg);
3414 + break;
3415 + }
3416 + return rc;
3417 +}
3418 +
3419 +
3420 +static int send_status(struct fsg_dev *fsg)
3421 +{
3422 + struct fsg_lun *curlun = fsg->curlun;
3423 + struct fsg_buffhd *bh;
3424 + int rc;
3425 + u8 status = US_BULK_STAT_OK;
3426 + u32 sd, sdinfo = 0;
3427 +
3428 + /* Wait for the next buffer to become available */
3429 + bh = fsg->next_buffhd_to_fill;
3430 + while (bh->state != BUF_STATE_EMPTY) {
3431 + rc = sleep_thread(fsg);
3432 + if (rc)
3433 + return rc;
3434 + }
3435 +
3436 + if (curlun) {
3437 + sd = curlun->sense_data;
3438 + sdinfo = curlun->sense_data_info;
3439 + } else if (fsg->bad_lun_okay)
3440 + sd = SS_NO_SENSE;
3441 + else
3442 + sd = SS_LOGICAL_UNIT_NOT_SUPPORTED;
3443 +
3444 + if (fsg->phase_error) {
3445 + DBG(fsg, "sending phase-error status\n");
3446 + status = US_BULK_STAT_PHASE;
3447 + sd = SS_INVALID_COMMAND;
3448 + } else if (sd != SS_NO_SENSE) {
3449 + DBG(fsg, "sending command-failure status\n");
3450 + status = US_BULK_STAT_FAIL;
3451 + VDBG(fsg, " sense data: SK x%02x, ASC x%02x, ASCQ x%02x;"
3452 + " info x%x\n",
3453 + SK(sd), ASC(sd), ASCQ(sd), sdinfo);
3454 + }
3455 +
3456 + if (transport_is_bbb()) {
3457 + struct bulk_cs_wrap *csw = bh->buf;
3458 +
3459 + /* Store and send the Bulk-only CSW */
3460 + csw->Signature = cpu_to_le32(US_BULK_CS_SIGN);
3461 + csw->Tag = fsg->tag;
3462 + csw->Residue = cpu_to_le32(fsg->residue);
3463 + csw->Status = status;
3464 +
3465 + bh->inreq->length = US_BULK_CS_WRAP_LEN;
3466 + bh->inreq->zero = 0;
3467 + start_transfer(fsg, fsg->bulk_in, bh->inreq,
3468 + &bh->inreq_busy, &bh->state);
3469 +
3470 + } else if (mod_data.transport_type == USB_PR_CB) {
3471 +
3472 + /* Control-Bulk transport has no status phase! */
3473 + return 0;
3474 +
3475 + } else { // USB_PR_CBI
3476 + struct interrupt_data *buf = bh->buf;
3477 +
3478 + /* Store and send the Interrupt data. UFI sends the ASC
3479 + * and ASCQ bytes. Everything else sends a Type (which
3480 + * is always 0) and the status Value. */
3481 + if (mod_data.protocol_type == USB_SC_UFI) {
3482 + buf->bType = ASC(sd);
3483 + buf->bValue = ASCQ(sd);
3484 + } else {
3485 + buf->bType = 0;
3486 + buf->bValue = status;
3487 + }
3488 + fsg->intreq->length = CBI_INTERRUPT_DATA_LEN;
3489 +
3490 + fsg->intr_buffhd = bh; // Point to the right buffhd
3491 + fsg->intreq->buf = bh->inreq->buf;
3492 + fsg->intreq->context = bh;
3493 + start_transfer(fsg, fsg->intr_in, fsg->intreq,
3494 + &fsg->intreq_busy, &bh->state);
3495 + }
3496 +
3497 + fsg->next_buffhd_to_fill = bh->next;
3498 + return 0;
3499 +}
3500 +
3501 +
3502 +/*-------------------------------------------------------------------------*/
3503 +
3504 +/* Check whether the command is properly formed and whether its data size
3505 + * and direction agree with the values we already have. */
3506 +static int check_command(struct fsg_dev *fsg, int cmnd_size,
3507 + enum data_direction data_dir, unsigned int mask,
3508 + int needs_medium, const char *name)
3509 +{
3510 + int i;
3511 + int lun = fsg->cmnd[1] >> 5;
3512 + static const char dirletter[4] = {'u', 'o', 'i', 'n'};
3513 + char hdlen[20];
3514 + struct fsg_lun *curlun;
3515 +
3516 + /* Adjust the expected cmnd_size for protocol encapsulation padding.
3517 + * Transparent SCSI doesn't pad. */
3518 + if (protocol_is_scsi())
3519 + ;
3520 +
3521 + /* There's some disagreement as to whether RBC pads commands or not.
3522 + * We'll play it safe and accept either form. */
3523 + else if (mod_data.protocol_type == USB_SC_RBC) {
3524 + if (fsg->cmnd_size == 12)
3525 + cmnd_size = 12;
3526 +
3527 + /* All the other protocols pad to 12 bytes */
3528 + } else
3529 + cmnd_size = 12;
3530 +
3531 + hdlen[0] = 0;
3532 + if (fsg->data_dir != DATA_DIR_UNKNOWN)
3533 + sprintf(hdlen, ", H%c=%u", dirletter[(int) fsg->data_dir],
3534 + fsg->data_size);
3535 + VDBG(fsg, "SCSI command: %s; Dc=%d, D%c=%u; Hc=%d%s\n",
3536 + name, cmnd_size, dirletter[(int) data_dir],
3537 + fsg->data_size_from_cmnd, fsg->cmnd_size, hdlen);
3538 +
3539 + /* We can't reply at all until we know the correct data direction
3540 + * and size. */
3541 + if (fsg->data_size_from_cmnd == 0)
3542 + data_dir = DATA_DIR_NONE;
3543 + if (fsg->data_dir == DATA_DIR_UNKNOWN) { // CB or CBI
3544 + fsg->data_dir = data_dir;
3545 + fsg->data_size = fsg->data_size_from_cmnd;
3546 +
3547 + } else { // Bulk-only
3548 + if (fsg->data_size < fsg->data_size_from_cmnd) {
3549 +
3550 + /* Host data size < Device data size is a phase error.
3551 + * Carry out the command, but only transfer as much
3552 + * as we are allowed. */
3553 + fsg->data_size_from_cmnd = fsg->data_size;
3554 + fsg->phase_error = 1;
3555 + }
3556 + }
3557 + fsg->residue = fsg->usb_amount_left = fsg->data_size;
3558 +
3559 + /* Conflicting data directions is a phase error */
3560 + if (fsg->data_dir != data_dir && fsg->data_size_from_cmnd > 0) {
3561 + fsg->phase_error = 1;
3562 + return -EINVAL;
3563 + }
3564 +
3565 + /* Verify the length of the command itself */
3566 + if (cmnd_size != fsg->cmnd_size) {
3567 +
3568 + /* Special case workaround: There are plenty of buggy SCSI
3569 + * implementations. Many have issues with cbw->Length
3570 + * field passing a wrong command size. For those cases we
3571 + * always try to work around the problem by using the length
3572 + * sent by the host side provided it is at least as large
3573 + * as the correct command length.
3574 + * Examples of such cases would be MS-Windows, which issues
3575 + * REQUEST SENSE with cbw->Length == 12 where it should
3576 + * be 6, and xbox360 issuing INQUIRY, TEST UNIT READY and
3577 + * REQUEST SENSE with cbw->Length == 10 where it should
3578 + * be 6 as well.
3579 + */
3580 + if (cmnd_size <= fsg->cmnd_size) {
3581 + DBG(fsg, "%s is buggy! Expected length %d "
3582 + "but we got %d\n", name,
3583 + cmnd_size, fsg->cmnd_size);
3584 + cmnd_size = fsg->cmnd_size;
3585 + } else {
3586 + fsg->phase_error = 1;
3587 + return -EINVAL;
3588 + }
3589 + }
3590 +
3591 + /* Check that the LUN values are consistent */
3592 + if (transport_is_bbb()) {
3593 + if (fsg->lun != lun)
3594 + DBG(fsg, "using LUN %d from CBW, "
3595 + "not LUN %d from CDB\n",
3596 + fsg->lun, lun);
3597 + }
3598 +
3599 + /* Check the LUN */
3600 + curlun = fsg->curlun;
3601 + if (curlun) {
3602 + if (fsg->cmnd[0] != REQUEST_SENSE) {
3603 + curlun->sense_data = SS_NO_SENSE;
3604 + curlun->sense_data_info = 0;
3605 + curlun->info_valid = 0;
3606 + }
3607 + } else {
3608 + fsg->bad_lun_okay = 0;
3609 +
3610 + /* INQUIRY and REQUEST SENSE commands are explicitly allowed
3611 + * to use unsupported LUNs; all others may not. */
3612 + if (fsg->cmnd[0] != INQUIRY &&
3613 + fsg->cmnd[0] != REQUEST_SENSE) {
3614 + DBG(fsg, "unsupported LUN %d\n", fsg->lun);
3615 + return -EINVAL;
3616 + }
3617 + }
3618 +
3619 + /* If a unit attention condition exists, only INQUIRY and
3620 + * REQUEST SENSE commands are allowed; anything else must fail. */
3621 + if (curlun && curlun->unit_attention_data != SS_NO_SENSE &&
3622 + fsg->cmnd[0] != INQUIRY &&
3623 + fsg->cmnd[0] != REQUEST_SENSE) {
3624 + curlun->sense_data = curlun->unit_attention_data;
3625 + curlun->unit_attention_data = SS_NO_SENSE;
3626 + return -EINVAL;
3627 + }
3628 +
3629 + /* Check that only command bytes listed in the mask are non-zero */
3630 + fsg->cmnd[1] &= 0x1f; // Mask away the LUN
3631 + for (i = 1; i < cmnd_size; ++i) {
3632 + if (fsg->cmnd[i] && !(mask & (1 << i))) {
3633 + if (curlun)
3634 + curlun->sense_data = SS_INVALID_FIELD_IN_CDB;
3635 + return -EINVAL;
3636 + }
3637 + }
3638 +
3639 + /* If the medium isn't mounted and the command needs to access
3640 + * it, return an error. */
3641 + if (curlun && !fsg_lun_is_open(curlun) && needs_medium) {
3642 + curlun->sense_data = SS_MEDIUM_NOT_PRESENT;
3643 + return -EINVAL;
3644 + }
3645 +
3646 + return 0;
3647 +}
3648 +
3649 +/* wrapper of check_command for data size in blocks handling */
3650 +static int check_command_size_in_blocks(struct fsg_dev *fsg, int cmnd_size,
3651 + enum data_direction data_dir, unsigned int mask,
3652 + int needs_medium, const char *name)
3653 +{
3654 + if (fsg->curlun)
3655 + fsg->data_size_from_cmnd <<= fsg->curlun->blkbits;
3656 + return check_command(fsg, cmnd_size, data_dir,
3657 + mask, needs_medium, name);
3658 +}
3659 +
3660 +static int do_scsi_command(struct fsg_dev *fsg)
3661 +{
3662 + struct fsg_buffhd *bh;
3663 + int rc;
3664 + int reply = -EINVAL;
3665 + int i;
3666 + static char unknown[16];
3667 +
3668 + dump_cdb(fsg);
3669 +
3670 + /* Wait for the next buffer to become available for data or status */
3671 + bh = fsg->next_buffhd_to_drain = fsg->next_buffhd_to_fill;
3672 + while (bh->state != BUF_STATE_EMPTY) {
3673 + rc = sleep_thread(fsg);
3674 + if (rc)
3675 + return rc;
3676 + }
3677 + fsg->phase_error = 0;
3678 + fsg->short_packet_received = 0;
3679 +
3680 + down_read(&fsg->filesem); // We're using the backing file
3681 + switch (fsg->cmnd[0]) {
3682 +
3683 + case INQUIRY:
3684 + fsg->data_size_from_cmnd = fsg->cmnd[4];
3685 + if ((reply = check_command(fsg, 6, DATA_DIR_TO_HOST,
3686 + (1<<4), 0,
3687 + "INQUIRY")) == 0)
3688 + reply = do_inquiry(fsg, bh);
3689 + break;
3690 +
3691 + case MODE_SELECT:
3692 + fsg->data_size_from_cmnd = fsg->cmnd[4];
3693 + if ((reply = check_command(fsg, 6, DATA_DIR_FROM_HOST,
3694 + (1<<1) | (1<<4), 0,
3695 + "MODE SELECT(6)")) == 0)
3696 + reply = do_mode_select(fsg, bh);
3697 + break;
3698 +
3699 + case MODE_SELECT_10:
3700 + fsg->data_size_from_cmnd = get_unaligned_be16(&fsg->cmnd[7]);
3701 + if ((reply = check_command(fsg, 10, DATA_DIR_FROM_HOST,
3702 + (1<<1) | (3<<7), 0,
3703 + "MODE SELECT(10)")) == 0)
3704 + reply = do_mode_select(fsg, bh);
3705 + break;
3706 +
3707 + case MODE_SENSE:
3708 + fsg->data_size_from_cmnd = fsg->cmnd[4];
3709 + if ((reply = check_command(fsg, 6, DATA_DIR_TO_HOST,
3710 + (1<<1) | (1<<2) | (1<<4), 0,
3711 + "MODE SENSE(6)")) == 0)
3712 + reply = do_mode_sense(fsg, bh);
3713 + break;
3714 +
3715 + case MODE_SENSE_10:
3716 + fsg->data_size_from_cmnd = get_unaligned_be16(&fsg->cmnd[7]);
3717 + if ((reply = check_command(fsg, 10, DATA_DIR_TO_HOST,
3718 + (1<<1) | (1<<2) | (3<<7), 0,
3719 + "MODE SENSE(10)")) == 0)
3720 + reply = do_mode_sense(fsg, bh);
3721 + break;
3722 +
3723 + case ALLOW_MEDIUM_REMOVAL:
3724 + fsg->data_size_from_cmnd = 0;
3725 + if ((reply = check_command(fsg, 6, DATA_DIR_NONE,
3726 + (1<<4), 0,
3727 + "PREVENT-ALLOW MEDIUM REMOVAL")) == 0)
3728 + reply = do_prevent_allow(fsg);
3729 + break;
3730 +
3731 + case READ_6:
3732 + i = fsg->cmnd[4];
3733 + fsg->data_size_from_cmnd = (i == 0) ? 256 : i;
3734 + if ((reply = check_command_size_in_blocks(fsg, 6,
3735 + DATA_DIR_TO_HOST,
3736 + (7<<1) | (1<<4), 1,
3737 + "READ(6)")) == 0)
3738 + reply = do_read(fsg);
3739 + break;
3740 +
3741 + case READ_10:
3742 + fsg->data_size_from_cmnd = get_unaligned_be16(&fsg->cmnd[7]);
3743 + if ((reply = check_command_size_in_blocks(fsg, 10,
3744 + DATA_DIR_TO_HOST,
3745 + (1<<1) | (0xf<<2) | (3<<7), 1,
3746 + "READ(10)")) == 0)
3747 + reply = do_read(fsg);
3748 + break;
3749 +
3750 + case READ_12:
3751 + fsg->data_size_from_cmnd = get_unaligned_be32(&fsg->cmnd[6]);
3752 + if ((reply = check_command_size_in_blocks(fsg, 12,
3753 + DATA_DIR_TO_HOST,
3754 + (1<<1) | (0xf<<2) | (0xf<<6), 1,
3755 + "READ(12)")) == 0)
3756 + reply = do_read(fsg);
3757 + break;
3758 +
3759 + case READ_CAPACITY:
3760 + fsg->data_size_from_cmnd = 8;
3761 + if ((reply = check_command(fsg, 10, DATA_DIR_TO_HOST,
3762 + (0xf<<2) | (1<<8), 1,
3763 + "READ CAPACITY")) == 0)
3764 + reply = do_read_capacity(fsg, bh);
3765 + break;
3766 +
3767 + case READ_HEADER:
3768 + if (!mod_data.cdrom)
3769 + goto unknown_cmnd;
3770 + fsg->data_size_from_cmnd = get_unaligned_be16(&fsg->cmnd[7]);
3771 + if ((reply = check_command(fsg, 10, DATA_DIR_TO_HOST,
3772 + (3<<7) | (0x1f<<1), 1,
3773 + "READ HEADER")) == 0)
3774 + reply = do_read_header(fsg, bh);
3775 + break;
3776 +
3777 + case READ_TOC:
3778 + if (!mod_data.cdrom)
3779 + goto unknown_cmnd;
3780 + fsg->data_size_from_cmnd = get_unaligned_be16(&fsg->cmnd[7]);
3781 + if ((reply = check_command(fsg, 10, DATA_DIR_TO_HOST,
3782 + (7<<6) | (1<<1), 1,
3783 + "READ TOC")) == 0)
3784 + reply = do_read_toc(fsg, bh);
3785 + break;
3786 +
3787 + case READ_FORMAT_CAPACITIES:
3788 + fsg->data_size_from_cmnd = get_unaligned_be16(&fsg->cmnd[7]);
3789 + if ((reply = check_command(fsg, 10, DATA_DIR_TO_HOST,
3790 + (3<<7), 1,
3791 + "READ FORMAT CAPACITIES")) == 0)
3792 + reply = do_read_format_capacities(fsg, bh);
3793 + break;
3794 +
3795 + case REQUEST_SENSE:
3796 + fsg->data_size_from_cmnd = fsg->cmnd[4];
3797 + if ((reply = check_command(fsg, 6, DATA_DIR_TO_HOST,
3798 + (1<<4), 0,
3799 + "REQUEST SENSE")) == 0)
3800 + reply = do_request_sense(fsg, bh);
3801 + break;
3802 +
3803 + case START_STOP:
3804 + fsg->data_size_from_cmnd = 0;
3805 + if ((reply = check_command(fsg, 6, DATA_DIR_NONE,
3806 + (1<<1) | (1<<4), 0,
3807 + "START-STOP UNIT")) == 0)
3808 + reply = do_start_stop(fsg);
3809 + break;
3810 +
3811 + case SYNCHRONIZE_CACHE:
3812 + fsg->data_size_from_cmnd = 0;
3813 + if ((reply = check_command(fsg, 10, DATA_DIR_NONE,
3814 + (0xf<<2) | (3<<7), 1,
3815 + "SYNCHRONIZE CACHE")) == 0)
3816 + reply = do_synchronize_cache(fsg);
3817 + break;
3818 +
3819 + case TEST_UNIT_READY:
3820 + fsg->data_size_from_cmnd = 0;
3821 + reply = check_command(fsg, 6, DATA_DIR_NONE,
3822 + 0, 1,
3823 + "TEST UNIT READY");
3824 + break;
3825 +
3826 + /* Although optional, this command is used by MS-Windows. We
3827 + * support a minimal version: BytChk must be 0. */
3828 + case VERIFY:
3829 + fsg->data_size_from_cmnd = 0;
3830 + if ((reply = check_command(fsg, 10, DATA_DIR_NONE,
3831 + (1<<1) | (0xf<<2) | (3<<7), 1,
3832 + "VERIFY")) == 0)
3833 + reply = do_verify(fsg);
3834 + break;
3835 +
3836 + case WRITE_6:
3837 + i = fsg->cmnd[4];
3838 + fsg->data_size_from_cmnd = (i == 0) ? 256 : i;
3839 + if ((reply = check_command_size_in_blocks(fsg, 6,
3840 + DATA_DIR_FROM_HOST,
3841 + (7<<1) | (1<<4), 1,
3842 + "WRITE(6)")) == 0)
3843 + reply = do_write(fsg);
3844 + break;
3845 +
3846 + case WRITE_10:
3847 + fsg->data_size_from_cmnd = get_unaligned_be16(&fsg->cmnd[7]);
3848 + if ((reply = check_command_size_in_blocks(fsg, 10,
3849 + DATA_DIR_FROM_HOST,
3850 + (1<<1) | (0xf<<2) | (3<<7), 1,
3851 + "WRITE(10)")) == 0)
3852 + reply = do_write(fsg);
3853 + break;
3854 +
3855 + case WRITE_12:
3856 + fsg->data_size_from_cmnd = get_unaligned_be32(&fsg->cmnd[6]);
3857 + if ((reply = check_command_size_in_blocks(fsg, 12,
3858 + DATA_DIR_FROM_HOST,
3859 + (1<<1) | (0xf<<2) | (0xf<<6), 1,
3860 + "WRITE(12)")) == 0)
3861 + reply = do_write(fsg);
3862 + break;
3863 +
3864 + /* Some mandatory commands that we recognize but don't implement.
3865 + * They don't mean much in this setting. It's left as an exercise
3866 + * for anyone interested to implement RESERVE and RELEASE in terms
3867 + * of Posix locks. */
3868 + case FORMAT_UNIT:
3869 + case RELEASE:
3870 + case RESERVE:
3871 + case SEND_DIAGNOSTIC:
3872 + // Fall through
3873 +
3874 + default:
3875 + unknown_cmnd:
3876 + fsg->data_size_from_cmnd = 0;
3877 + sprintf(unknown, "Unknown x%02x", fsg->cmnd[0]);
3878 + if ((reply = check_command(fsg, fsg->cmnd_size,
3879 + DATA_DIR_UNKNOWN, ~0, 0, unknown)) == 0) {
3880 + fsg->curlun->sense_data = SS_INVALID_COMMAND;
3881 + reply = -EINVAL;
3882 + }
3883 + break;
3884 + }
3885 + up_read(&fsg->filesem);
3886 +
3887 + if (reply == -EINTR || signal_pending(current))
3888 + return -EINTR;
3889 +
3890 + /* Set up the single reply buffer for finish_reply() */
3891 + if (reply == -EINVAL)
3892 + reply = 0; // Error reply length
3893 + if (reply >= 0 && fsg->data_dir == DATA_DIR_TO_HOST) {
3894 + reply = min((u32) reply, fsg->data_size_from_cmnd);
3895 + bh->inreq->length = reply;
3896 + bh->state = BUF_STATE_FULL;
3897 + fsg->residue -= reply;
3898 + } // Otherwise it's already set
3899 +
3900 + return 0;
3901 +}
3902 +
3903 +
3904 +/*-------------------------------------------------------------------------*/
3905 +
3906 +static int received_cbw(struct fsg_dev *fsg, struct fsg_buffhd *bh)
3907 +{
3908 + struct usb_request *req = bh->outreq;
3909 + struct bulk_cb_wrap *cbw = req->buf;
3910 +
3911 + /* Was this a real packet? Should it be ignored? */
3912 + if (req->status || test_bit(IGNORE_BULK_OUT, &fsg->atomic_bitflags))
3913 + return -EINVAL;
3914 +
3915 + /* Is the CBW valid? */
3916 + if (req->actual != US_BULK_CB_WRAP_LEN ||
3917 + cbw->Signature != cpu_to_le32(
3918 + US_BULK_CB_SIGN)) {
3919 + DBG(fsg, "invalid CBW: len %u sig 0x%x\n",
3920 + req->actual,
3921 + le32_to_cpu(cbw->Signature));
3922 +
3923 + /* The Bulk-only spec says we MUST stall the IN endpoint
3924 + * (6.6.1), so it's unavoidable. It also says we must
3925 + * retain this state until the next reset, but there's
3926 + * no way to tell the controller driver it should ignore
3927 + * Clear-Feature(HALT) requests.
3928 + *
3929 + * We aren't required to halt the OUT endpoint; instead
3930 + * we can simply accept and discard any data received
3931 + * until the next reset. */
3932 + wedge_bulk_in_endpoint(fsg);
3933 + set_bit(IGNORE_BULK_OUT, &fsg->atomic_bitflags);
3934 + return -EINVAL;
3935 + }
3936 +
3937 + /* Is the CBW meaningful? */
3938 + if (cbw->Lun >= FSG_MAX_LUNS || cbw->Flags & ~US_BULK_FLAG_IN ||
3939 + cbw->Length <= 0 || cbw->Length > MAX_COMMAND_SIZE) {
3940 + DBG(fsg, "non-meaningful CBW: lun = %u, flags = 0x%x, "
3941 + "cmdlen %u\n",
3942 + cbw->Lun, cbw->Flags, cbw->Length);
3943 +
3944 + /* We can do anything we want here, so let's stall the
3945 + * bulk pipes if we are allowed to. */
3946 + if (mod_data.can_stall) {
3947 + fsg_set_halt(fsg, fsg->bulk_out);
3948 + halt_bulk_in_endpoint(fsg);
3949 + }
3950 + return -EINVAL;
3951 + }
3952 +
3953 + /* Save the command for later */
3954 + fsg->cmnd_size = cbw->Length;
3955 + memcpy(fsg->cmnd, cbw->CDB, fsg->cmnd_size);
3956 + if (cbw->Flags & US_BULK_FLAG_IN)
3957 + fsg->data_dir = DATA_DIR_TO_HOST;
3958 + else
3959 + fsg->data_dir = DATA_DIR_FROM_HOST;
3960 + fsg->data_size = le32_to_cpu(cbw->DataTransferLength);
3961 + if (fsg->data_size == 0)
3962 + fsg->data_dir = DATA_DIR_NONE;
3963 + fsg->lun = cbw->Lun;
3964 + fsg->tag = cbw->Tag;
3965 + return 0;
3966 +}
3967 +
3968 +
3969 +static int get_next_command(struct fsg_dev *fsg)
3970 +{
3971 + struct fsg_buffhd *bh;
3972 + int rc = 0;
3973 +
3974 + if (transport_is_bbb()) {
3975 +
3976 + /* Wait for the next buffer to become available */
3977 + bh = fsg->next_buffhd_to_fill;
3978 + while (bh->state != BUF_STATE_EMPTY) {
3979 + rc = sleep_thread(fsg);
3980 + if (rc)
3981 + return rc;
3982 + }
3983 +
3984 + /* Queue a request to read a Bulk-only CBW */
3985 + set_bulk_out_req_length(fsg, bh, US_BULK_CB_WRAP_LEN);
3986 + start_transfer(fsg, fsg->bulk_out, bh->outreq,
3987 + &bh->outreq_busy, &bh->state);
3988 +
3989 + /* We will drain the buffer in software, which means we
3990 + * can reuse it for the next filling. No need to advance
3991 + * next_buffhd_to_fill. */
3992 +
3993 + /* Wait for the CBW to arrive */
3994 + while (bh->state != BUF_STATE_FULL) {
3995 + rc = sleep_thread(fsg);
3996 + if (rc)
3997 + return rc;
3998 + }
3999 + smp_rmb();
4000 + rc = received_cbw(fsg, bh);
4001 + bh->state = BUF_STATE_EMPTY;
4002 +
4003 + } else { // USB_PR_CB or USB_PR_CBI
4004 +
4005 + /* Wait for the next command to arrive */
4006 + while (fsg->cbbuf_cmnd_size == 0) {
4007 + rc = sleep_thread(fsg);
4008 + if (rc)
4009 + return rc;
4010 + }
4011 +
4012 + /* Is the previous status interrupt request still busy?
4013 + * The host is allowed to skip reading the status,
4014 + * so we must cancel it. */
4015 + if (fsg->intreq_busy)
4016 + usb_ep_dequeue(fsg->intr_in, fsg->intreq);
4017 +
4018 + /* Copy the command and mark the buffer empty */
4019 + fsg->data_dir = DATA_DIR_UNKNOWN;
4020 + spin_lock_irq(&fsg->lock);
4021 + fsg->cmnd_size = fsg->cbbuf_cmnd_size;
4022 + memcpy(fsg->cmnd, fsg->cbbuf_cmnd, fsg->cmnd_size);
4023 + fsg->cbbuf_cmnd_size = 0;
4024 + spin_unlock_irq(&fsg->lock);
4025 +
4026 + /* Use LUN from the command */
4027 + fsg->lun = fsg->cmnd[1] >> 5;
4028 + }
4029 +
4030 + /* Update current lun */
4031 + if (fsg->lun >= 0 && fsg->lun < fsg->nluns)
4032 + fsg->curlun = &fsg->luns[fsg->lun];
4033 + else
4034 + fsg->curlun = NULL;
4035 +
4036 + return rc;
4037 +}
4038 +
4039 +
4040 +/*-------------------------------------------------------------------------*/
4041 +
4042 +static int enable_endpoint(struct fsg_dev *fsg, struct usb_ep *ep,
4043 + const struct usb_endpoint_descriptor *d)
4044 +{
4045 + int rc;
4046 +
4047 + ep->driver_data = fsg;
4048 + ep->desc = d;
4049 + rc = usb_ep_enable(ep);
4050 + if (rc)
4051 + ERROR(fsg, "can't enable %s, result %d\n", ep->name, rc);
4052 + return rc;
4053 +}
4054 +
4055 +static int alloc_request(struct fsg_dev *fsg, struct usb_ep *ep,
4056 + struct usb_request **preq)
4057 +{
4058 + *preq = usb_ep_alloc_request(ep, GFP_ATOMIC);
4059 + if (*preq)
4060 + return 0;
4061 + ERROR(fsg, "can't allocate request for %s\n", ep->name);
4062 + return -ENOMEM;
4063 +}
4064 +
4065 +/*
4066 + * Reset interface setting and re-init endpoint state (toggle etc).
4067 + * Call with altsetting < 0 to disable the interface. The only other
4068 + * available altsetting is 0, which enables the interface.
4069 + */
4070 +static int do_set_interface(struct fsg_dev *fsg, int altsetting)
4071 +{
4072 + int rc = 0;
4073 + int i;
4074 + const struct usb_endpoint_descriptor *d;
4075 +
4076 + if (fsg->running)
4077 + DBG(fsg, "reset interface\n");
4078 +
4079 +reset:
4080 + /* Deallocate the requests */
4081 + for (i = 0; i < fsg_num_buffers; ++i) {
4082 + struct fsg_buffhd *bh = &fsg->buffhds[i];
4083 +
4084 + if (bh->inreq) {
4085 + usb_ep_free_request(fsg->bulk_in, bh->inreq);
4086 + bh->inreq = NULL;
4087 + }
4088 + if (bh->outreq) {
4089 + usb_ep_free_request(fsg->bulk_out, bh->outreq);
4090 + bh->outreq = NULL;
4091 + }
4092 + }
4093 + if (fsg->intreq) {
4094 + usb_ep_free_request(fsg->intr_in, fsg->intreq);
4095 + fsg->intreq = NULL;
4096 + }
4097 +
4098 + /* Disable the endpoints */
4099 + if (fsg->bulk_in_enabled) {
4100 + usb_ep_disable(fsg->bulk_in);
4101 + fsg->bulk_in_enabled = 0;
4102 + }
4103 + if (fsg->bulk_out_enabled) {
4104 + usb_ep_disable(fsg->bulk_out);
4105 + fsg->bulk_out_enabled = 0;
4106 + }
4107 + if (fsg->intr_in_enabled) {
4108 + usb_ep_disable(fsg->intr_in);
4109 + fsg->intr_in_enabled = 0;
4110 + }
4111 +
4112 + fsg->running = 0;
4113 + if (altsetting < 0 || rc != 0)
4114 + return rc;
4115 +
4116 + DBG(fsg, "set interface %d\n", altsetting);
4117 +
4118 + /* Enable the endpoints */
4119 + d = fsg_ep_desc(fsg->gadget,
4120 + &fsg_fs_bulk_in_desc, &fsg_hs_bulk_in_desc,
4121 + &fsg_ss_bulk_in_desc);
4122 + if ((rc = enable_endpoint(fsg, fsg->bulk_in, d)) != 0)
4123 + goto reset;
4124 + fsg->bulk_in_enabled = 1;
4125 +
4126 + d = fsg_ep_desc(fsg->gadget,
4127 + &fsg_fs_bulk_out_desc, &fsg_hs_bulk_out_desc,
4128 + &fsg_ss_bulk_out_desc);
4129 + if ((rc = enable_endpoint(fsg, fsg->bulk_out, d)) != 0)
4130 + goto reset;
4131 + fsg->bulk_out_enabled = 1;
4132 + fsg->bulk_out_maxpacket = usb_endpoint_maxp(d);
4133 + clear_bit(IGNORE_BULK_OUT, &fsg->atomic_bitflags);
4134 +
4135 + if (transport_is_cbi()) {
4136 + d = fsg_ep_desc(fsg->gadget,
4137 + &fsg_fs_intr_in_desc, &fsg_hs_intr_in_desc,
4138 + &fsg_ss_intr_in_desc);
4139 + if ((rc = enable_endpoint(fsg, fsg->intr_in, d)) != 0)
4140 + goto reset;
4141 + fsg->intr_in_enabled = 1;
4142 + }
4143 +
4144 + /* Allocate the requests */
4145 + for (i = 0; i < fsg_num_buffers; ++i) {
4146 + struct fsg_buffhd *bh = &fsg->buffhds[i];
4147 +
4148 + if ((rc = alloc_request(fsg, fsg->bulk_in, &bh->inreq)) != 0)
4149 + goto reset;
4150 + if ((rc = alloc_request(fsg, fsg->bulk_out, &bh->outreq)) != 0)
4151 + goto reset;
4152 + bh->inreq->buf = bh->outreq->buf = bh->buf;
4153 + bh->inreq->context = bh->outreq->context = bh;
4154 + bh->inreq->complete = bulk_in_complete;
4155 + bh->outreq->complete = bulk_out_complete;
4156 + }
4157 + if (transport_is_cbi()) {
4158 + if ((rc = alloc_request(fsg, fsg->intr_in, &fsg->intreq)) != 0)
4159 + goto reset;
4160 + fsg->intreq->complete = intr_in_complete;
4161 + }
4162 +
4163 + fsg->running = 1;
4164 + for (i = 0; i < fsg->nluns; ++i)
4165 + fsg->luns[i].unit_attention_data = SS_RESET_OCCURRED;
4166 + return rc;
4167 +}
4168 +
4169 +
4170 +/*
4171 + * Change our operational configuration. This code must agree with the code
4172 + * that returns config descriptors, and with interface altsetting code.
4173 + *
4174 + * It's also responsible for power management interactions. Some
4175 + * configurations might not work with our current power sources.
4176 + * For now we just assume the gadget is always self-powered.
4177 + */
4178 +static int do_set_config(struct fsg_dev *fsg, u8 new_config)
4179 +{
4180 + int rc = 0;
4181 +
4182 + /* Disable the single interface */
4183 + if (fsg->config != 0) {
4184 + DBG(fsg, "reset config\n");
4185 + fsg->config = 0;
4186 + rc = do_set_interface(fsg, -1);
4187 + }
4188 +
4189 + /* Enable the interface */
4190 + if (new_config != 0) {
4191 + fsg->config = new_config;
4192 + if ((rc = do_set_interface(fsg, 0)) != 0)
4193 + fsg->config = 0; // Reset on errors
4194 + else
4195 + INFO(fsg, "%s config #%d\n",
4196 + usb_speed_string(fsg->gadget->speed),
4197 + fsg->config);
4198 + }
4199 + return rc;
4200 +}
4201 +
4202 +
4203 +/*-------------------------------------------------------------------------*/
4204 +
4205 +static void handle_exception(struct fsg_dev *fsg)
4206 +{
4207 + siginfo_t info;
4208 + int sig;
4209 + int i;
4210 + int num_active;
4211 + struct fsg_buffhd *bh;
4212 + enum fsg_state old_state;
4213 + u8 new_config;
4214 + struct fsg_lun *curlun;
4215 + unsigned int exception_req_tag;
4216 + int rc;
4217 +
4218 + /* Clear the existing signals. Anything but SIGUSR1 is converted
4219 + * into a high-priority EXIT exception. */
4220 + for (;;) {
4221 + sig = dequeue_signal_lock(current, &current->blocked, &info);
4222 + if (!sig)
4223 + break;
4224 + if (sig != SIGUSR1) {
4225 + if (fsg->state < FSG_STATE_EXIT)
4226 + DBG(fsg, "Main thread exiting on signal\n");
4227 + raise_exception(fsg, FSG_STATE_EXIT);
4228 + }
4229 + }
4230 +
4231 + /* Cancel all the pending transfers */
4232 + if (fsg->intreq_busy)
4233 + usb_ep_dequeue(fsg->intr_in, fsg->intreq);
4234 + for (i = 0; i < fsg_num_buffers; ++i) {
4235 + bh = &fsg->buffhds[i];
4236 + if (bh->inreq_busy)
4237 + usb_ep_dequeue(fsg->bulk_in, bh->inreq);
4238 + if (bh->outreq_busy)
4239 + usb_ep_dequeue(fsg->bulk_out, bh->outreq);
4240 + }
4241 +
4242 + /* Wait until everything is idle */
4243 + for (;;) {
4244 + num_active = fsg->intreq_busy;
4245 + for (i = 0; i < fsg_num_buffers; ++i) {
4246 + bh = &fsg->buffhds[i];
4247 + num_active += bh->inreq_busy + bh->outreq_busy;
4248 + }
4249 + if (num_active == 0)
4250 + break;
4251 + if (sleep_thread(fsg))
4252 + return;
4253 + }
4254 +
4255 + /* Clear out the controller's fifos */
4256 + if (fsg->bulk_in_enabled)
4257 + usb_ep_fifo_flush(fsg->bulk_in);
4258 + if (fsg->bulk_out_enabled)
4259 + usb_ep_fifo_flush(fsg->bulk_out);
4260 + if (fsg->intr_in_enabled)
4261 + usb_ep_fifo_flush(fsg->intr_in);
4262 +
4263 + /* Reset the I/O buffer states and pointers, the SCSI
4264 + * state, and the exception. Then invoke the handler. */
4265 + spin_lock_irq(&fsg->lock);
4266 +
4267 + for (i = 0; i < fsg_num_buffers; ++i) {
4268 + bh = &fsg->buffhds[i];
4269 + bh->state = BUF_STATE_EMPTY;
4270 + }
4271 + fsg->next_buffhd_to_fill = fsg->next_buffhd_to_drain =
4272 + &fsg->buffhds[0];
4273 +
4274 + exception_req_tag = fsg->exception_req_tag;
4275 + new_config = fsg->new_config;
4276 + old_state = fsg->state;
4277 +
4278 + if (old_state == FSG_STATE_ABORT_BULK_OUT)
4279 + fsg->state = FSG_STATE_STATUS_PHASE;
4280 + else {
4281 + for (i = 0; i < fsg->nluns; ++i) {
4282 + curlun = &fsg->luns[i];
4283 + curlun->prevent_medium_removal = 0;
4284 + curlun->sense_data = curlun->unit_attention_data =
4285 + SS_NO_SENSE;
4286 + curlun->sense_data_info = 0;
4287 + curlun->info_valid = 0;
4288 + }
4289 + fsg->state = FSG_STATE_IDLE;
4290 + }
4291 + spin_unlock_irq(&fsg->lock);
4292 +
4293 + /* Carry out any extra actions required for the exception */
4294 + switch (old_state) {
4295 + default:
4296 + break;
4297 +
4298 + case FSG_STATE_ABORT_BULK_OUT:
4299 + send_status(fsg);
4300 + spin_lock_irq(&fsg->lock);
4301 + if (fsg->state == FSG_STATE_STATUS_PHASE)
4302 + fsg->state = FSG_STATE_IDLE;
4303 + spin_unlock_irq(&fsg->lock);
4304 + break;
4305 +
4306 + case FSG_STATE_RESET:
4307 + /* In case we were forced against our will to halt a
4308 + * bulk endpoint, clear the halt now. (The SuperH UDC
4309 + * requires this.) */
4310 + if (test_and_clear_bit(IGNORE_BULK_OUT, &fsg->atomic_bitflags))
4311 + usb_ep_clear_halt(fsg->bulk_in);
4312 +
4313 + if (transport_is_bbb()) {
4314 + if (fsg->ep0_req_tag == exception_req_tag)
4315 + ep0_queue(fsg); // Complete the status stage
4316 +
4317 + } else if (transport_is_cbi())
4318 + send_status(fsg); // Status by interrupt pipe
4319 +
4320 + /* Technically this should go here, but it would only be
4321 + * a waste of time. Ditto for the INTERFACE_CHANGE and
4322 + * CONFIG_CHANGE cases. */
4323 + // for (i = 0; i < fsg->nluns; ++i)
4324 + // fsg->luns[i].unit_attention_data = SS_RESET_OCCURRED;
4325 + break;
4326 +
4327 + case FSG_STATE_INTERFACE_CHANGE:
4328 + rc = do_set_interface(fsg, 0);
4329 + if (fsg->ep0_req_tag != exception_req_tag)
4330 + break;
4331 + if (rc != 0) // STALL on errors
4332 + fsg_set_halt(fsg, fsg->ep0);
4333 + else // Complete the status stage
4334 + ep0_queue(fsg);
4335 + break;
4336 +
4337 + case FSG_STATE_CONFIG_CHANGE:
4338 + rc = do_set_config(fsg, new_config);
4339 + if (fsg->ep0_req_tag != exception_req_tag)
4340 + break;
4341 + if (rc != 0) // STALL on errors
4342 + fsg_set_halt(fsg, fsg->ep0);
4343 + else // Complete the status stage
4344 + ep0_queue(fsg);
4345 + break;
4346 +
4347 + case FSG_STATE_DISCONNECT:
4348 + for (i = 0; i < fsg->nluns; ++i)
4349 + fsg_lun_fsync_sub(fsg->luns + i);
4350 + do_set_config(fsg, 0); // Unconfigured state
4351 + break;
4352 +
4353 + case FSG_STATE_EXIT:
4354 + case FSG_STATE_TERMINATED:
4355 + do_set_config(fsg, 0); // Free resources
4356 + spin_lock_irq(&fsg->lock);
4357 + fsg->state = FSG_STATE_TERMINATED; // Stop the thread
4358 + spin_unlock_irq(&fsg->lock);
4359 + break;
4360 + }
4361 +}
4362 +
4363 +
4364 +/*-------------------------------------------------------------------------*/
4365 +
4366 +static int fsg_main_thread(void *fsg_)
4367 +{
4368 + struct fsg_dev *fsg = fsg_;
4369 +
4370 + /* Allow the thread to be killed by a signal, but set the signal mask
4371 + * to block everything but INT, TERM, KILL, and USR1. */
4372 + allow_signal(SIGINT);
4373 + allow_signal(SIGTERM);
4374 + allow_signal(SIGKILL);
4375 + allow_signal(SIGUSR1);
4376 +
4377 + /* Allow the thread to be frozen */
4378 + set_freezable();
4379 +
4380 + /* Arrange for userspace references to be interpreted as kernel
4381 + * pointers. That way we can pass a kernel pointer to a routine
4382 + * that expects a __user pointer and it will work okay. */
4383 + set_fs(get_ds());
4384 +
4385 + /* The main loop */
4386 + while (fsg->state != FSG_STATE_TERMINATED) {
4387 + if (exception_in_progress(fsg) || signal_pending(current)) {
4388 + handle_exception(fsg);
4389 + continue;
4390 + }
4391 +
4392 + if (!fsg->running) {
4393 + sleep_thread(fsg);
4394 + continue;
4395 + }
4396 +
4397 + if (get_next_command(fsg))
4398 + continue;
4399 +
4400 + spin_lock_irq(&fsg->lock);
4401 + if (!exception_in_progress(fsg))
4402 + fsg->state = FSG_STATE_DATA_PHASE;
4403 + spin_unlock_irq(&fsg->lock);
4404 +
4405 + if (do_scsi_command(fsg) || finish_reply(fsg))
4406 + continue;
4407 +
4408 + spin_lock_irq(&fsg->lock);
4409 + if (!exception_in_progress(fsg))
4410 + fsg->state = FSG_STATE_STATUS_PHASE;
4411 + spin_unlock_irq(&fsg->lock);
4412 +
4413 + if (send_status(fsg))
4414 + continue;
4415 +
4416 + spin_lock_irq(&fsg->lock);
4417 + if (!exception_in_progress(fsg))
4418 + fsg->state = FSG_STATE_IDLE;
4419 + spin_unlock_irq(&fsg->lock);
4420 + }
4421 +
4422 + spin_lock_irq(&fsg->lock);
4423 + fsg->thread_task = NULL;
4424 + spin_unlock_irq(&fsg->lock);
4425 +
4426 + /* If we are exiting because of a signal, unregister the
4427 + * gadget driver. */
4428 + if (test_and_clear_bit(REGISTERED, &fsg->atomic_bitflags))
4429 + usb_gadget_unregister_driver(&fsg_driver);
4430 +
4431 + /* Let the unbind and cleanup routines know the thread has exited */
4432 + complete_and_exit(&fsg->thread_notifier, 0);
4433 +}
4434 +
4435 +
4436 +/*-------------------------------------------------------------------------*/
4437 +
4438 +
4439 +/* The write permissions and store_xxx pointers are set in fsg_bind() */
4440 +static DEVICE_ATTR(ro, 0444, fsg_show_ro, NULL);
4441 +static DEVICE_ATTR(nofua, 0644, fsg_show_nofua, NULL);
4442 +static DEVICE_ATTR(file, 0444, fsg_show_file, NULL);
4443 +
4444 +
4445 +/*-------------------------------------------------------------------------*/
4446 +
4447 +static void fsg_release(struct kref *ref)
4448 +{
4449 + struct fsg_dev *fsg = container_of(ref, struct fsg_dev, ref);
4450 +
4451 + kfree(fsg->luns);
4452 + kfree(fsg);
4453 +}
4454 +
4455 +static void lun_release(struct device *dev)
4456 +{
4457 + struct rw_semaphore *filesem = dev_get_drvdata(dev);
4458 + struct fsg_dev *fsg =
4459 + container_of(filesem, struct fsg_dev, filesem);
4460 +
4461 + kref_put(&fsg->ref, fsg_release);
4462 +}
4463 +
4464 +static void /* __init_or_exit */ fsg_unbind(struct usb_gadget *gadget)
4465 +{
4466 + struct fsg_dev *fsg = get_gadget_data(gadget);
4467 + int i;
4468 + struct fsg_lun *curlun;
4469 + struct usb_request *req = fsg->ep0req;
4470 +
4471 + DBG(fsg, "unbind\n");
4472 + clear_bit(REGISTERED, &fsg->atomic_bitflags);
4473 +
4474 + /* If the thread isn't already dead, tell it to exit now */
4475 + if (fsg->state != FSG_STATE_TERMINATED) {
4476 + raise_exception(fsg, FSG_STATE_EXIT);
4477 + wait_for_completion(&fsg->thread_notifier);
4478 +
4479 + /* The cleanup routine waits for this completion also */
4480 + complete(&fsg->thread_notifier);
4481 + }
4482 +
4483 + /* Unregister the sysfs attribute files and the LUNs */
4484 + for (i = 0; i < fsg->nluns; ++i) {
4485 + curlun = &fsg->luns[i];
4486 + if (curlun->registered) {
4487 + device_remove_file(&curlun->dev, &dev_attr_nofua);
4488 + device_remove_file(&curlun->dev, &dev_attr_ro);
4489 + device_remove_file(&curlun->dev, &dev_attr_file);
4490 + fsg_lun_close(curlun);
4491 + device_unregister(&curlun->dev);
4492 + curlun->registered = 0;
4493 + }
4494 + }
4495 +
4496 + /* Free the data buffers */
4497 + for (i = 0; i < fsg_num_buffers; ++i)
4498 + kfree(fsg->buffhds[i].buf);
4499 +
4500 + /* Free the request and buffer for endpoint 0 */
4501 + if (req) {
4502 + kfree(req->buf);
4503 + usb_ep_free_request(fsg->ep0, req);
4504 + }
4505 +
4506 + set_gadget_data(gadget, NULL);
4507 +}
4508 +
4509 +
4510 +static int __init check_parameters(struct fsg_dev *fsg)
4511 +{
4512 + int prot;
4513 + int gcnum;
4514 +
4515 + /* Store the default values */
4516 + mod_data.transport_type = USB_PR_BULK;
4517 + mod_data.transport_name = "Bulk-only";
4518 + mod_data.protocol_type = USB_SC_SCSI;
4519 + mod_data.protocol_name = "Transparent SCSI";
4520 +
4521 + /* Some peripheral controllers are known not to be able to
4522 + * halt bulk endpoints correctly. If one of them is present,
4523 + * disable stalls.
4524 + */
4525 + if (gadget_is_at91(fsg->gadget))
4526 + mod_data.can_stall = 0;
4527 +
4528 + if (mod_data.release == 0xffff) { // Parameter wasn't set
4529 + gcnum = usb_gadget_controller_number(fsg->gadget);
4530 + if (gcnum >= 0)
4531 + mod_data.release = 0x0300 + gcnum;
4532 + else {
4533 + WARNING(fsg, "controller '%s' not recognized\n",
4534 + fsg->gadget->name);
4535 + mod_data.release = 0x0399;
4536 + }
4537 + }
4538 +
4539 + prot = simple_strtol(mod_data.protocol_parm, NULL, 0);
4540 +
4541 +#ifdef CONFIG_USB_FILE_STORAGE_TEST
4542 + if (strnicmp(mod_data.transport_parm, "BBB", 10) == 0) {
4543 + ; // Use default setting
4544 + } else if (strnicmp(mod_data.transport_parm, "CB", 10) == 0) {
4545 + mod_data.transport_type = USB_PR_CB;
4546 + mod_data.transport_name = "Control-Bulk";
4547 + } else if (strnicmp(mod_data.transport_parm, "CBI", 10) == 0) {
4548 + mod_data.transport_type = USB_PR_CBI;
4549 + mod_data.transport_name = "Control-Bulk-Interrupt";
4550 + } else {
4551 + ERROR(fsg, "invalid transport: %s\n", mod_data.transport_parm);
4552 + return -EINVAL;
4553 + }
4554 +
4555 + if (strnicmp(mod_data.protocol_parm, "SCSI", 10) == 0 ||
4556 + prot == USB_SC_SCSI) {
4557 + ; // Use default setting
4558 + } else if (strnicmp(mod_data.protocol_parm, "RBC", 10) == 0 ||
4559 + prot == USB_SC_RBC) {
4560 + mod_data.protocol_type = USB_SC_RBC;
4561 + mod_data.protocol_name = "RBC";
4562 + } else if (strnicmp(mod_data.protocol_parm, "8020", 4) == 0 ||
4563 + strnicmp(mod_data.protocol_parm, "ATAPI", 10) == 0 ||
4564 + prot == USB_SC_8020) {
4565 + mod_data.protocol_type = USB_SC_8020;
4566 + mod_data.protocol_name = "8020i (ATAPI)";
4567 + } else if (strnicmp(mod_data.protocol_parm, "QIC", 3) == 0 ||
4568 + prot == USB_SC_QIC) {
4569 + mod_data.protocol_type = USB_SC_QIC;
4570 + mod_data.protocol_name = "QIC-157";
4571 + } else if (strnicmp(mod_data.protocol_parm, "UFI", 10) == 0 ||
4572 + prot == USB_SC_UFI) {
4573 + mod_data.protocol_type = USB_SC_UFI;
4574 + mod_data.protocol_name = "UFI";
4575 + } else if (strnicmp(mod_data.protocol_parm, "8070", 4) == 0 ||
4576 + prot == USB_SC_8070) {
4577 + mod_data.protocol_type = USB_SC_8070;
4578 + mod_data.protocol_name = "8070i";
4579 + } else {
4580 + ERROR(fsg, "invalid protocol: %s\n", mod_data.protocol_parm);
4581 + return -EINVAL;
4582 + }
4583 +
4584 + mod_data.buflen &= PAGE_CACHE_MASK;
4585 + if (mod_data.buflen <= 0) {
4586 + ERROR(fsg, "invalid buflen\n");
4587 + return -ETOOSMALL;
4588 + }
4589 +
4590 +#endif /* CONFIG_USB_FILE_STORAGE_TEST */
4591 +
4592 + /* Serial string handling.
4593 + * On a real device, the serial string would be loaded
4594 + * from permanent storage. */
4595 + if (mod_data.serial) {
4596 + const char *ch;
4597 + unsigned len = 0;
4598 +
4599 + /* Sanity check:
4600 + * The CB[I] specification limits the serial string to
4601 + * 12 uppercase hexadecimal characters.
4602 + * BBB needs at least 12 uppercase hexadecimal characters,
4603 + * with a maximum of 126. */
4604 + for (ch = mod_data.serial; *ch; ++ch) {
4605 + ++len;
4606 + if ((*ch < '0' || *ch > '9') &&
4607 + (*ch < 'A' || *ch > 'F')) { /* not uppercase hex */
4608 + WARNING(fsg,
4609 + "Invalid serial string character: %c\n",
4610 + *ch);
4611 + goto no_serial;
4612 + }
4613 + }
4614 + if (len > 126 ||
4615 + (mod_data.transport_type == USB_PR_BULK && len < 12) ||
4616 + (mod_data.transport_type != USB_PR_BULK && len > 12)) {
4617 + WARNING(fsg, "Invalid serial string length!\n");
4618 + goto no_serial;
4619 + }
4620 + fsg_strings[FSG_STRING_SERIAL - 1].s = mod_data.serial;
4621 + } else {
4622 + WARNING(fsg, "No serial-number string provided!\n");
4623 + no_serial:
4624 + device_desc.iSerialNumber = 0;
4625 + }
4626 +
4627 + return 0;
4628 +}
4629 +
4630 +
4631 +static int __init fsg_bind(struct usb_gadget *gadget)
4632 +{
4633 + struct fsg_dev *fsg = the_fsg;
4634 + int rc;
4635 + int i;
4636 + struct fsg_lun *curlun;
4637 + struct usb_ep *ep;
4638 + struct usb_request *req;
4639 + char *pathbuf, *p;
4640 +
4641 + fsg->gadget = gadget;
4642 + set_gadget_data(gadget, fsg);
4643 + fsg->ep0 = gadget->ep0;
4644 + fsg->ep0->driver_data = fsg;
4645 +
4646 + if ((rc = check_parameters(fsg)) != 0)
4647 + goto out;
4648 +
4649 + if (mod_data.removable) { // Enable the store_xxx attributes
4650 + dev_attr_file.attr.mode = 0644;
4651 + dev_attr_file.store = fsg_store_file;
4652 + if (!mod_data.cdrom) {
4653 + dev_attr_ro.attr.mode = 0644;
4654 + dev_attr_ro.store = fsg_store_ro;
4655 + }
4656 + }
4657 +
4658 + /* Only for removable media? */
4659 + dev_attr_nofua.attr.mode = 0644;
4660 + dev_attr_nofua.store = fsg_store_nofua;
4661 +
4662 + /* Find out how many LUNs there should be */
4663 + i = mod_data.nluns;
4664 + if (i == 0)
4665 + i = max(mod_data.num_filenames, 1u);
4666 + if (i > FSG_MAX_LUNS) {
4667 + ERROR(fsg, "invalid number of LUNs: %d\n", i);
4668 + rc = -EINVAL;
4669 + goto out;
4670 + }
4671 +
4672 + /* Create the LUNs, open their backing files, and register the
4673 + * LUN devices in sysfs. */
4674 + fsg->luns = kzalloc(i * sizeof(struct fsg_lun), GFP_KERNEL);
4675 + if (!fsg->luns) {
4676 + rc = -ENOMEM;
4677 + goto out;
4678 + }
4679 + fsg->nluns = i;
4680 +
4681 + for (i = 0; i < fsg->nluns; ++i) {
4682 + curlun = &fsg->luns[i];
4683 + curlun->cdrom = !!mod_data.cdrom;
4684 + curlun->ro = mod_data.cdrom || mod_data.ro[i];
4685 + curlun->initially_ro = curlun->ro;
4686 + curlun->removable = mod_data.removable;
4687 + curlun->nofua = mod_data.nofua[i];
4688 + curlun->dev.release = lun_release;
4689 + curlun->dev.parent = &gadget->dev;
4690 + curlun->dev.driver = &fsg_driver.driver;
4691 + dev_set_drvdata(&curlun->dev, &fsg->filesem);
4692 + dev_set_name(&curlun->dev,"%s-lun%d",
4693 + dev_name(&gadget->dev), i);
4694 +
4695 + kref_get(&fsg->ref);
4696 + rc = device_register(&curlun->dev);
4697 + if (rc) {
4698 + INFO(fsg, "failed to register LUN%d: %d\n", i, rc);
4699 + put_device(&curlun->dev);
4700 + goto out;
4701 + }
4702 + curlun->registered = 1;
4703 +
4704 + rc = device_create_file(&curlun->dev, &dev_attr_ro);
4705 + if (rc)
4706 + goto out;
4707 + rc = device_create_file(&curlun->dev, &dev_attr_nofua);
4708 + if (rc)
4709 + goto out;
4710 + rc = device_create_file(&curlun->dev, &dev_attr_file);
4711 + if (rc)
4712 + goto out;
4713 +
4714 + if (mod_data.file[i] && *mod_data.file[i]) {
4715 + rc = fsg_lun_open(curlun, mod_data.file[i]);
4716 + if (rc)
4717 + goto out;
4718 + } else if (!mod_data.removable) {
4719 + ERROR(fsg, "no file given for LUN%d\n", i);
4720 + rc = -EINVAL;
4721 + goto out;
4722 + }
4723 + }
4724 +
4725 + /* Find all the endpoints we will use */
4726 + usb_ep_autoconfig_reset(gadget);
4727 + ep = usb_ep_autoconfig(gadget, &fsg_fs_bulk_in_desc);
4728 + if (!ep)
4729 + goto autoconf_fail;
4730 + ep->driver_data = fsg; // claim the endpoint
4731 + fsg->bulk_in = ep;
4732 +
4733 + ep = usb_ep_autoconfig(gadget, &fsg_fs_bulk_out_desc);
4734 + if (!ep)
4735 + goto autoconf_fail;
4736 + ep->driver_data = fsg; // claim the endpoint
4737 + fsg->bulk_out = ep;
4738 +
4739 + if (transport_is_cbi()) {
4740 + ep = usb_ep_autoconfig(gadget, &fsg_fs_intr_in_desc);
4741 + if (!ep)
4742 + goto autoconf_fail;
4743 + ep->driver_data = fsg; // claim the endpoint
4744 + fsg->intr_in = ep;
4745 + }
4746 +
4747 + /* Fix up the descriptors */
4748 + device_desc.idVendor = cpu_to_le16(mod_data.vendor);
4749 + device_desc.idProduct = cpu_to_le16(mod_data.product);
4750 + device_desc.bcdDevice = cpu_to_le16(mod_data.release);
4751 +
4752 + i = (transport_is_cbi() ? 3 : 2); // Number of endpoints
4753 + fsg_intf_desc.bNumEndpoints = i;
4754 + fsg_intf_desc.bInterfaceSubClass = mod_data.protocol_type;
4755 + fsg_intf_desc.bInterfaceProtocol = mod_data.transport_type;
4756 + fsg_fs_function[i + FSG_FS_FUNCTION_PRE_EP_ENTRIES] = NULL;
4757 +
4758 + if (gadget_is_dualspeed(gadget)) {
4759 + fsg_hs_function[i + FSG_HS_FUNCTION_PRE_EP_ENTRIES] = NULL;
4760 +
4761 + /* Assume endpoint addresses are the same for both speeds */
4762 + fsg_hs_bulk_in_desc.bEndpointAddress =
4763 + fsg_fs_bulk_in_desc.bEndpointAddress;
4764 + fsg_hs_bulk_out_desc.bEndpointAddress =
4765 + fsg_fs_bulk_out_desc.bEndpointAddress;
4766 + fsg_hs_intr_in_desc.bEndpointAddress =
4767 + fsg_fs_intr_in_desc.bEndpointAddress;
4768 + }
4769 +
4770 + if (gadget_is_superspeed(gadget)) {
4771 + unsigned max_burst;
4772 +
4773 + fsg_ss_function[i + FSG_SS_FUNCTION_PRE_EP_ENTRIES] = NULL;
4774 +
4775 + /* Calculate bMaxBurst, we know packet size is 1024 */
4776 + max_burst = min_t(unsigned, mod_data.buflen / 1024, 15);
4777 +
4778 + /* Assume endpoint addresses are the same for both speeds */
4779 + fsg_ss_bulk_in_desc.bEndpointAddress =
4780 + fsg_fs_bulk_in_desc.bEndpointAddress;
4781 + fsg_ss_bulk_in_comp_desc.bMaxBurst = max_burst;
4782 +
4783 + fsg_ss_bulk_out_desc.bEndpointAddress =
4784 + fsg_fs_bulk_out_desc.bEndpointAddress;
4785 + fsg_ss_bulk_out_comp_desc.bMaxBurst = max_burst;
4786 + }
4787 +
4788 + if (gadget_is_otg(gadget))
4789 + fsg_otg_desc.bmAttributes |= USB_OTG_HNP;
4790 +
4791 + rc = -ENOMEM;
4792 +
4793 + /* Allocate the request and buffer for endpoint 0 */
4794 + fsg->ep0req = req = usb_ep_alloc_request(fsg->ep0, GFP_KERNEL);
4795 + if (!req)
4796 + goto out;
4797 + req->buf = kmalloc(EP0_BUFSIZE, GFP_KERNEL);
4798 + if (!req->buf)
4799 + goto out;
4800 + req->complete = ep0_complete;
4801 +
4802 + /* Allocate the data buffers */
4803 + for (i = 0; i < fsg_num_buffers; ++i) {
4804 + struct fsg_buffhd *bh = &fsg->buffhds[i];
4805 +
4806 + /* Allocate for the bulk-in endpoint. We assume that
4807 + * the buffer will also work with the bulk-out (and
4808 + * interrupt-in) endpoint. */
4809 + bh->buf = kmalloc(mod_data.buflen, GFP_KERNEL);
4810 + if (!bh->buf)
4811 + goto out;
4812 + bh->next = bh + 1;
4813 + }
4814 + fsg->buffhds[fsg_num_buffers - 1].next = &fsg->buffhds[0];
4815 +
4816 + /* This should reflect the actual gadget power source */
4817 + usb_gadget_set_selfpowered(gadget);
4818 +
4819 + snprintf(fsg_string_manufacturer, sizeof fsg_string_manufacturer,
4820 + "%s %s with %s",
4821 + init_utsname()->sysname, init_utsname()->release,
4822 + gadget->name);
4823 +
4824 + fsg->thread_task = kthread_create(fsg_main_thread, fsg,
4825 + "file-storage-gadget");
4826 + if (IS_ERR(fsg->thread_task)) {
4827 + rc = PTR_ERR(fsg->thread_task);
4828 + goto out;
4829 + }
4830 +
4831 + INFO(fsg, DRIVER_DESC ", version: " DRIVER_VERSION "\n");
4832 + INFO(fsg, "NOTE: This driver is deprecated. "
4833 + "Consider using g_mass_storage instead.\n");
4834 + INFO(fsg, "Number of LUNs=%d\n", fsg->nluns);
4835 +
4836 + pathbuf = kmalloc(PATH_MAX, GFP_KERNEL);
4837 + for (i = 0; i < fsg->nluns; ++i) {
4838 + curlun = &fsg->luns[i];
4839 + if (fsg_lun_is_open(curlun)) {
4840 + p = NULL;
4841 + if (pathbuf) {
4842 + p = d_path(&curlun->filp->f_path,
4843 + pathbuf, PATH_MAX);
4844 + if (IS_ERR(p))
4845 + p = NULL;
4846 + }
4847 + LINFO(curlun, "ro=%d, nofua=%d, file: %s\n",
4848 + curlun->ro, curlun->nofua, (p ? p : "(error)"));
4849 + }
4850 + }
4851 + kfree(pathbuf);
4852 +
4853 + DBG(fsg, "transport=%s (x%02x)\n",
4854 + mod_data.transport_name, mod_data.transport_type);
4855 + DBG(fsg, "protocol=%s (x%02x)\n",
4856 + mod_data.protocol_name, mod_data.protocol_type);
4857 + DBG(fsg, "VendorID=x%04x, ProductID=x%04x, Release=x%04x\n",
4858 + mod_data.vendor, mod_data.product, mod_data.release);
4859 + DBG(fsg, "removable=%d, stall=%d, cdrom=%d, buflen=%u\n",
4860 + mod_data.removable, mod_data.can_stall,
4861 + mod_data.cdrom, mod_data.buflen);
4862 + DBG(fsg, "I/O thread pid: %d\n", task_pid_nr(fsg->thread_task));
4863 +
4864 + set_bit(REGISTERED, &fsg->atomic_bitflags);
4865 +
4866 + /* Tell the thread to start working */
4867 + wake_up_process(fsg->thread_task);
4868 + return 0;
4869 +
4870 +autoconf_fail:
4871 + ERROR(fsg, "unable to autoconfigure all endpoints\n");
4872 + rc = -ENOTSUPP;
4873 +
4874 +out:
4875 + fsg->state = FSG_STATE_TERMINATED; // The thread is dead
4876 + fsg_unbind(gadget);
4877 + complete(&fsg->thread_notifier);
4878 + return rc;
4879 +}
4880 +
4881 +
4882 +/*-------------------------------------------------------------------------*/
4883 +
4884 +static void fsg_suspend(struct usb_gadget *gadget)
4885 +{
4886 + struct fsg_dev *fsg = get_gadget_data(gadget);
4887 +
4888 + DBG(fsg, "suspend\n");
4889 + set_bit(SUSPENDED, &fsg->atomic_bitflags);
4890 +}
4891 +
4892 +static void fsg_resume(struct usb_gadget *gadget)
4893 +{
4894 + struct fsg_dev *fsg = get_gadget_data(gadget);
4895 +
4896 + DBG(fsg, "resume\n");
4897 + clear_bit(SUSPENDED, &fsg->atomic_bitflags);
4898 +}
4899 +
4900 +
4901 +/*-------------------------------------------------------------------------*/
4902 +
4903 +static struct usb_gadget_driver fsg_driver = {
4904 + .max_speed = USB_SPEED_SUPER,
4905 + .function = (char *) fsg_string_product,
4906 + .unbind = fsg_unbind,
4907 + .disconnect = fsg_disconnect,
4908 + .setup = fsg_setup,
4909 + .suspend = fsg_suspend,
4910 + .resume = fsg_resume,
4911 +
4912 + .driver = {
4913 + .name = DRIVER_NAME,
4914 + .owner = THIS_MODULE,
4915 + // .release = ...
4916 + // .suspend = ...
4917 + // .resume = ...
4918 + },
4919 +};
4920 +
4921 +
4922 +static int __init fsg_alloc(void)
4923 +{
4924 + struct fsg_dev *fsg;
4925 +
4926 + fsg = kzalloc(sizeof *fsg +
4927 + fsg_num_buffers * sizeof *(fsg->buffhds), GFP_KERNEL);
4928 +
4929 + if (!fsg)
4930 + return -ENOMEM;
4931 + spin_lock_init(&fsg->lock);
4932 + init_rwsem(&fsg->filesem);
4933 + kref_init(&fsg->ref);
4934 + init_completion(&fsg->thread_notifier);
4935 +
4936 + the_fsg = fsg;
4937 + return 0;
4938 +}
4939 +
4940 +
4941 +static int __init fsg_init(void)
4942 +{
4943 + int rc;
4944 + struct fsg_dev *fsg;
4945 +
4946 + rc = fsg_num_buffers_validate();
4947 + if (rc != 0)
4948 + return rc;
4949 +
4950 + if ((rc = fsg_alloc()) != 0)
4951 + return rc;
4952 + fsg = the_fsg;
4953 + if ((rc = usb_gadget_probe_driver(&fsg_driver, fsg_bind)) != 0)
4954 + kref_put(&fsg->ref, fsg_release);
4955 + return rc;
4956 +}
4957 +module_init(fsg_init);
4958 +
4959 +
4960 +static void __exit fsg_cleanup(void)
4961 +{
4962 + struct fsg_dev *fsg = the_fsg;
4963 +
4964 + /* Unregister the driver iff the thread hasn't already done so */
4965 + if (test_and_clear_bit(REGISTERED, &fsg->atomic_bitflags))
4966 + usb_gadget_unregister_driver(&fsg_driver);
4967 +
4968 + /* Wait for the thread to finish up */
4969 + wait_for_completion(&fsg->thread_notifier);
4970 +
4971 + kref_put(&fsg->ref, fsg_release);
4972 +}
4973 +module_exit(fsg_cleanup);
4974 --- a/drivers/usb/host/Kconfig
4975 +++ b/drivers/usb/host/Kconfig
4976 @@ -712,6 +712,16 @@ config USB_RENESAS_USBHS_HCD
4977 To compile this driver as a module, choose M here: the
4978 module will be called renesas-usbhs.
4979
4980 +config USB_DWCOTG
4981 +	bool "Synopsys DWC host support"
4982 + depends on USB && (FIQ || ARM64)
4983 + help
4984 +	  The Synopsys DWC controller is a dual-role
4985 +	  host/peripheral/OTG ("On The Go") USB controller.
4986 +
4987 + Enable this option to support this IP in host controller mode.
4988 + If unsure, say N.
4989 +
4990 config USB_IMX21_HCD
4991 tristate "i.MX21 HCD support"
4992 depends on ARM && ARCH_MXC
4993 --- a/drivers/usb/host/Makefile
4994 +++ b/drivers/usb/host/Makefile
4995 @@ -79,6 +79,7 @@ obj-$(CONFIG_USB_SL811_HCD) += sl811-hcd
4996 obj-$(CONFIG_USB_SL811_CS) += sl811_cs.o
4997 obj-$(CONFIG_USB_U132_HCD) += u132-hcd.o
4998 obj-$(CONFIG_USB_R8A66597_HCD) += r8a66597-hcd.o
4999 +obj-$(CONFIG_USB_DWCOTG) += dwc_otg/ dwc_common_port/
5000 obj-$(CONFIG_USB_IMX21_HCD) += imx21-hcd.o
5001 obj-$(CONFIG_USB_FSL_USB2) += fsl-mph-dr-of.o
5002 obj-$(CONFIG_USB_EHCI_FSL) += fsl-mph-dr-of.o
5003 --- /dev/null
5004 +++ b/drivers/usb/host/dwc_common_port/Makefile
5005 @@ -0,0 +1,58 @@
5006 +#
5007 +# Makefile for DWC_common library
5008 +#
5009 +
5010 +ifneq ($(KERNELRELEASE),)
5011 +
5012 +ccflags-y += -DDWC_LINUX
5013 +#ccflags-y += -DDEBUG
5014 +#ccflags-y += -DDWC_DEBUG_REGS
5015 +#ccflags-y += -DDWC_DEBUG_MEMORY
5016 +
5017 +ccflags-y += -DDWC_LIBMODULE
5018 +ccflags-y += -DDWC_CCLIB
5019 +#ccflags-y += -DDWC_CRYPTOLIB
5020 +ccflags-y += -DDWC_NOTIFYLIB
5021 +ccflags-y += -DDWC_UTFLIB
5022 +
5023 +obj-$(CONFIG_USB_DWCOTG) += dwc_common_port_lib.o
5024 +dwc_common_port_lib-objs := dwc_cc.o dwc_modpow.o dwc_dh.o \
5025 + dwc_crypto.o dwc_notifier.o \
5026 + dwc_common_linux.o dwc_mem.o
5027 +
5028 +kernrelwd := $(subst ., ,$(KERNELRELEASE))
5029 +kernrel3 := $(word 1,$(kernrelwd)).$(word 2,$(kernrelwd)).$(word 3,$(kernrelwd))
5030 +
5031 +ifneq ($(kernrel3),2.6.20)
5032 +# grayg - I only know that we use ccflags-y in 2.6.31 actually
5033 +ccflags-y += $(CPPFLAGS)
5034 +endif
5035 +
5036 +else
5037 +
5038 +#ifeq ($(KDIR),)
5039 +#$(error Must give "KDIR=/path/to/kernel/source" on command line or in environment)
5040 +#endif
5041 +
5042 +ifeq ($(ARCH),)
5043 +$(error Must give "ARCH=<arch>" on command line or in environment. Also, if \
5044 + cross-compiling, must give "CROSS_COMPILE=/path/to/compiler/plus/tool-prefix-")
5045 +endif
5046 +
5047 +ifeq ($(DOXYGEN),)
5048 +DOXYGEN := doxygen
5049 +endif
5050 +
5051 +default:
5052 + $(MAKE) -C$(KDIR) M=$(PWD) ARCH=$(ARCH) CROSS_COMPILE=$(CROSS_COMPILE) modules
5053 +
5054 +docs: $(wildcard *.[hc]) doc/doxygen.cfg
5055 + $(DOXYGEN) doc/doxygen.cfg
5056 +
5057 +tags: $(wildcard *.[hc])
5058 + $(CTAGS) -e $(wildcard *.[hc]) $(wildcard linux/*.[hc]) $(wildcard $(KDIR)/include/linux/usb*.h)
5059 +
5060 +endif
5061 +
5062 +clean:
5063 + rm -rf *.o *.ko .*.cmd *.mod.c .*.o.d .*.o.tmp modules.order Module.markers Module.symvers .tmp_versions/
5064 --- /dev/null
5065 +++ b/drivers/usb/host/dwc_common_port/Makefile.fbsd
5066 @@ -0,0 +1,17 @@
5067 +CFLAGS += -I/sys/i386/compile/GENERIC -I/sys/i386/include -I/usr/include
5068 +CFLAGS += -DDWC_FREEBSD
5069 +CFLAGS += -DDEBUG
5070 +#CFLAGS += -DDWC_DEBUG_REGS
5071 +#CFLAGS += -DDWC_DEBUG_MEMORY
5072 +
5073 +#CFLAGS += -DDWC_LIBMODULE
5074 +#CFLAGS += -DDWC_CCLIB
5075 +#CFLAGS += -DDWC_CRYPTOLIB
5076 +#CFLAGS += -DDWC_NOTIFYLIB
5077 +#CFLAGS += -DDWC_UTFLIB
5078 +
5079 +KMOD = dwc_common_port_lib
5080 +SRCS = dwc_cc.c dwc_modpow.c dwc_dh.c dwc_crypto.c dwc_notifier.c \
5081 + dwc_common_fbsd.c dwc_mem.c
5082 +
5083 +.include <bsd.kmod.mk>
5084 --- /dev/null
5085 +++ b/drivers/usb/host/dwc_common_port/Makefile.linux
5086 @@ -0,0 +1,49 @@
5087 +#
5088 +# Makefile for DWC_common library
5089 +#
5090 +ifneq ($(KERNELRELEASE),)
5091 +
5092 +ccflags-y += -DDWC_LINUX
5093 +#ccflags-y += -DDEBUG
5094 +#ccflags-y += -DDWC_DEBUG_REGS
5095 +#ccflags-y += -DDWC_DEBUG_MEMORY
5096 +
5097 +ccflags-y += -DDWC_LIBMODULE
5098 +ccflags-y += -DDWC_CCLIB
5099 +ccflags-y += -DDWC_CRYPTOLIB
5100 +ccflags-y += -DDWC_NOTIFYLIB
5101 +ccflags-y += -DDWC_UTFLIB
5102 +
5103 +obj-m := dwc_common_port_lib.o
5104 +dwc_common_port_lib-objs := dwc_cc.o dwc_modpow.o dwc_dh.o \
5105 + dwc_crypto.o dwc_notifier.o \
5106 + dwc_common_linux.o dwc_mem.o
5107 +
5108 +else
5109 +
5110 +ifeq ($(KDIR),)
5111 +$(error Must give "KDIR=/path/to/kernel/source" on command line or in environment)
5112 +endif
5113 +
5114 +ifeq ($(ARCH),)
5115 +$(error Must give "ARCH=<arch>" on command line or in environment. Also, if \
5116 + cross-compiling, must give "CROSS_COMPILE=/path/to/compiler/plus/tool-prefix-")
5117 +endif
5118 +
5119 +ifeq ($(DOXYGEN),)
5120 +DOXYGEN := doxygen
5121 +endif
5122 +
5123 +default:
5124 + $(MAKE) -C$(KDIR) M=$(PWD) ARCH=$(ARCH) CROSS_COMPILE=$(CROSS_COMPILE) modules
5125 +
5126 +docs: $(wildcard *.[hc]) doc/doxygen.cfg
5127 + $(DOXYGEN) doc/doxygen.cfg
5128 +
5129 +tags: $(wildcard *.[hc])
5130 + $(CTAGS) -e $(wildcard *.[hc]) $(wildcard linux/*.[hc]) $(wildcard $(KDIR)/include/linux/usb*.h)
5131 +
5132 +endif
5133 +
5134 +clean:
5135 + rm -rf *.o *.ko .*.cmd *.mod.c .*.o.d .*.o.tmp modules.order Module.markers Module.symvers .tmp_versions/
5136 --- /dev/null
5137 +++ b/drivers/usb/host/dwc_common_port/changes.txt
5138 @@ -0,0 +1,174 @@
5139 +
5140 +dwc_read_reg32() and friends now take an additional parameter, a pointer to an
5141 +IO context struct. The IO context struct should live in an os-dependent struct
5142 +in your driver. As an example, the dwc_usb3 driver has an os-dependent struct
5143 +named 'os_dep' embedded in the main device struct. So there these calls look
5144 +like this:
5145 +
5146 + dwc_read_reg32(&usb3_dev->os_dep.ioctx, &pcd->dev_global_regs->dcfg);
5147 +
5148 + dwc_write_reg32(&usb3_dev->os_dep.ioctx,
5149 + &pcd->dev_global_regs->dcfg, 0);
5150 +
5151 +Note that for the existing Linux driver ports, it is not necessary to actually
5152 +define the 'ioctx' member in the os-dependent struct. Since Linux does not
5153 +require an IO context, its macros for dwc_read_reg32() and friends do not
5154 +use the context pointer, so it is optimized away by the compiler. But it is
5155 +necessary to add the pointer parameter to all of the call sites, to be ready
5156 +for any future ports (such as FreeBSD) which do require an IO context.
5157 +
5158 +
5159 +Similarly, dwc_alloc(), dwc_alloc_atomic(), dwc_strdup(), and dwc_free() now
5160 +take an additional parameter, a pointer to a memory context. Examples:
5161 +
5162 + addr = dwc_alloc(&usb3_dev->os_dep.memctx, size);
5163 +
5164 + dwc_free(&usb3_dev->os_dep.memctx, addr);
5165 +
5166 +Again, for the Linux ports, it is not necessary to actually define the memctx
5167 +member, but it is necessary to add the pointer parameter to all of the call
5168 +sites.
5169 +
5170 +
5171 +Same for dwc_dma_alloc() and dwc_dma_free(). Examples:
5172 +
5173 + virt_addr = dwc_dma_alloc(&usb3_dev->os_dep.dmactx, size, &phys_addr);
5174 +
5175 + dwc_dma_free(&usb3_dev->os_dep.dmactx, size, virt_addr, phys_addr);
5176 +
5177 +
5178 +Same for dwc_mutex_alloc() and dwc_mutex_free(). Examples:
5179 +
5180 + mutex = dwc_mutex_alloc(&usb3_dev->os_dep.mtxctx);
5181 +
5182 + dwc_mutex_free(&usb3_dev->os_dep.mtxctx, mutex);
5183 +
5184 +
5185 +Same for dwc_spinlock_alloc() and dwc_spinlock_free(). Examples:
5186 +
5187 + lock = dwc_spinlock_alloc(&usb3_dev->osdep.splctx);
5188 +
5189 + dwc_spinlock_free(&usb3_dev->osdep.splctx, lock);
5190 +
5191 +
5192 +Same for dwc_timer_alloc(). Example:
5193 +
5194 + timer = dwc_timer_alloc(&usb3_dev->os_dep.tmrctx, "dwc_usb3_tmr1",
5195 + cb_func, cb_data);
5196 +
5197 +
5198 +Same for dwc_waitq_alloc(). Example:
5199 +
5200 + waitq = dwc_waitq_alloc(&usb3_dev->os_dep.wtqctx);
5201 +
5202 +
5203 +Same for dwc_thread_run(). Example:
5204 +
5205 + thread = dwc_thread_run(&usb3_dev->os_dep.thdctx, func,
5206 + "dwc_usb3_thd1", data);
5207 +
5208 +
5209 +Same for dwc_workq_alloc(). Example:
5210 +
5211 + workq = dwc_workq_alloc(&usb3_dev->osdep.wkqctx, "dwc_usb3_wkq1");
5212 +
5213 +
5214 +Same for dwc_task_alloc(). Example:
5215 +
5216 + task = dwc_task_alloc(&usb3_dev->os_dep.tskctx, "dwc_usb3_tsk1",
5217 + cb_func, cb_data);
5218 +
5219 +
5220 +In addition to the context pointer additions, a few core functions have had
5221 +other changes made to their parameters:
5222 +
5223 +The 'flags' parameter to dwc_spinlock_irqsave() and dwc_spinunlock_irqrestore()
5224 +has been changed from a uint64_t to a dwc_irqflags_t.
5225 +
5226 +dwc_thread_should_stop() now takes a 'dwc_thread_t *' parameter, because the
5227 +FreeBSD equivalent of that function requires it.
5228 +
5229 +And, in addition to the context pointer, dwc_task_alloc() also adds a
5230 +'char *name' parameter, to be consistent with dwc_thread_run() and
5231 +dwc_workq_alloc(), and because the FreeBSD equivalent of that function
5232 +requires a unique name.
5233 +
5234 +
5235 +Here is a complete list of the core functions that now take a pointer to a
5236 +context as their first parameter:
5237 +
5238 + dwc_read_reg32
5239 + dwc_read_reg64
5240 + dwc_write_reg32
5241 + dwc_write_reg64
5242 + dwc_modify_reg32
5243 + dwc_modify_reg64
5244 + dwc_alloc
5245 + dwc_alloc_atomic
5246 + dwc_strdup
5247 + dwc_free
5248 + dwc_dma_alloc
5249 + dwc_dma_free
5250 + dwc_mutex_alloc
5251 + dwc_mutex_free
5252 + dwc_spinlock_alloc
5253 + dwc_spinlock_free
5254 + dwc_timer_alloc
5255 + dwc_waitq_alloc
5256 + dwc_thread_run
5257 + dwc_workq_alloc
5258 + dwc_task_alloc Also adds a 'char *name' as its 2nd parameter
5259 +
5260 +And here are the core functions that have other changes to their parameters:
5261 +
5262 + dwc_spinlock_irqsave 'flags' param is now a 'dwc_irqflags_t *'
5263 + dwc_spinunlock_irqrestore 'flags' param is now a 'dwc_irqflags_t'
5264 + dwc_thread_should_stop Adds a 'dwc_thread_t *' parameter
5265 +
5266 +
5267 +
5268 +The changes to the core functions also require some of the other library
5269 +functions to change:
5270 +
5271 + dwc_cc_if_alloc() and dwc_cc_if_free() now take a 'void *memctx'
5272 + (for memory allocation) as the 1st param and a 'void *mtxctx'
5273 + (for mutex allocation) as the 2nd param.
5274 +
5275 + dwc_cc_clear(), dwc_cc_add(), dwc_cc_change(), dwc_cc_remove(),
5276 + dwc_cc_data_for_save(), and dwc_cc_restore_from_data() now take a
5277 + 'void *memctx' as the 1st param.
5278 +
5279 + dwc_dh_modpow(), dwc_dh_pk(), and dwc_dh_derive_keys() now take a
5280 + 'void *memctx' as the 1st param.
5281 +
5282 + dwc_modpow() now takes a 'void *memctx' as the 1st param.
5283 +
5284 + dwc_alloc_notification_manager() now takes a 'void *memctx' as the
5285 + 1st param and a 'void *wkqctx' (for work queue allocation) as the 2nd
5286 + param, and also now returns an integer value that is non-zero if
5287 + allocation of its data structures or work queue fails.
5288 +
5289 + dwc_register_notifier() now takes a 'void *memctx' as the 1st param.
5290 +
5291 + dwc_memory_debug_start() now takes a 'void *mem_ctx' as the first
5292 + param, and also now returns an integer value that is non-zero if
5293 + allocation of its data structures fails.
5294 +
5295 +
5296 +
5297 +Other miscellaneous changes:
5298 +
5299 +The DEBUG_MEMORY and DEBUG_REGS #define's have been renamed to
5300 +DWC_DEBUG_MEMORY and DWC_DEBUG_REGS.
5301 +
5302 +The following #define's have been added to allow selectively compiling library
5303 +features:
5304 +
5305 + DWC_CCLIB
5306 + DWC_CRYPTOLIB
5307 + DWC_NOTIFYLIB
5308 + DWC_UTFLIB
5309 +
5310 +A DWC_LIBMODULE #define has also been added. If this is not defined, then the
5311 +module code in dwc_common_linux.c is not compiled in. This allows linking the
5312 +library code directly into a driver module, instead of as a standalone module.
5313 --- /dev/null
5314 +++ b/drivers/usb/host/dwc_common_port/doc/doxygen.cfg
5315 @@ -0,0 +1,270 @@
5316 +# Doxyfile 1.4.5
5317 +
5318 +#---------------------------------------------------------------------------
5319 +# Project related configuration options
5320 +#---------------------------------------------------------------------------
5321 +PROJECT_NAME = "Synopsys DWC Portability and Common Library for UWB"
5322 +PROJECT_NUMBER =
5323 +OUTPUT_DIRECTORY = doc
5324 +CREATE_SUBDIRS = NO
5325 +OUTPUT_LANGUAGE = English
5326 +BRIEF_MEMBER_DESC = YES
5327 +REPEAT_BRIEF = YES
5328 +ABBREVIATE_BRIEF = "The $name class" \
5329 + "The $name widget" \
5330 + "The $name file" \
5331 + is \
5332 + provides \
5333 + specifies \
5334 + contains \
5335 + represents \
5336 + a \
5337 + an \
5338 + the
5339 +ALWAYS_DETAILED_SEC = YES
5340 +INLINE_INHERITED_MEMB = NO
5341 +FULL_PATH_NAMES = NO
5342 +STRIP_FROM_PATH = ..
5343 +STRIP_FROM_INC_PATH =
5344 +SHORT_NAMES = NO
5345 +JAVADOC_AUTOBRIEF = YES
5346 +MULTILINE_CPP_IS_BRIEF = NO
5347 +DETAILS_AT_TOP = YES
5348 +INHERIT_DOCS = YES
5349 +SEPARATE_MEMBER_PAGES = NO
5350 +TAB_SIZE = 8
5351 +ALIASES =
5352 +OPTIMIZE_OUTPUT_FOR_C = YES
5353 +OPTIMIZE_OUTPUT_JAVA = NO
5354 +BUILTIN_STL_SUPPORT = NO
5355 +DISTRIBUTE_GROUP_DOC = NO
5356 +SUBGROUPING = NO
5357 +#---------------------------------------------------------------------------
5358 +# Build related configuration options
5359 +#---------------------------------------------------------------------------
5360 +EXTRACT_ALL = NO
5361 +EXTRACT_PRIVATE = NO
5362 +EXTRACT_STATIC = YES
5363 +EXTRACT_LOCAL_CLASSES = NO
5364 +EXTRACT_LOCAL_METHODS = NO
5365 +HIDE_UNDOC_MEMBERS = NO
5366 +HIDE_UNDOC_CLASSES = NO
5367 +HIDE_FRIEND_COMPOUNDS = NO
5368 +HIDE_IN_BODY_DOCS = NO
5369 +INTERNAL_DOCS = NO
5370 +CASE_SENSE_NAMES = YES
5371 +HIDE_SCOPE_NAMES = NO
5372 +SHOW_INCLUDE_FILES = NO
5373 +INLINE_INFO = YES
5374 +SORT_MEMBER_DOCS = NO
5375 +SORT_BRIEF_DOCS = NO
5376 +SORT_BY_SCOPE_NAME = NO
5377 +GENERATE_TODOLIST = YES
5378 +GENERATE_TESTLIST = YES
5379 +GENERATE_BUGLIST = YES
5380 +GENERATE_DEPRECATEDLIST= YES
5381 +ENABLED_SECTIONS =
5382 +MAX_INITIALIZER_LINES = 30
5383 +SHOW_USED_FILES = YES
5384 +SHOW_DIRECTORIES = YES
5385 +FILE_VERSION_FILTER =
5386 +#---------------------------------------------------------------------------
5387 +# configuration options related to warning and progress messages
5388 +#---------------------------------------------------------------------------
5389 +QUIET = YES
5390 +WARNINGS = YES
5391 +WARN_IF_UNDOCUMENTED = NO
5392 +WARN_IF_DOC_ERROR = YES
5393 +WARN_NO_PARAMDOC = YES
5394 +WARN_FORMAT = "$file:$line: $text"
5395 +WARN_LOGFILE =
5396 +#---------------------------------------------------------------------------
5397 +# configuration options related to the input files
5398 +#---------------------------------------------------------------------------
5399 +INPUT = .
5400 +FILE_PATTERNS = *.c \
5401 + *.cc \
5402 + *.cxx \
5403 + *.cpp \
5404 + *.c++ \
5405 + *.d \
5406 + *.java \
5407 + *.ii \
5408 + *.ixx \
5409 + *.ipp \
5410 + *.i++ \
5411 + *.inl \
5412 + *.h \
5413 + *.hh \
5414 + *.hxx \
5415 + *.hpp \
5416 + *.h++ \
5417 + *.idl \
5418 + *.odl \
5419 + *.cs \
5420 + *.php \
5421 + *.php3 \
5422 + *.inc \
5423 + *.m \
5424 + *.mm \
5425 + *.dox \
5426 + *.py \
5427 + *.C \
5428 + *.CC \
5429 + *.C++ \
5430 + *.II \
5431 + *.I++ \
5432 + *.H \
5433 + *.HH \
5434 + *.H++ \
5435 + *.CS \
5436 + *.PHP \
5437 + *.PHP3 \
5438 + *.M \
5439 + *.MM \
5440 + *.PY
5441 +RECURSIVE = NO
5442 +EXCLUDE =
5443 +EXCLUDE_SYMLINKS = NO
5444 +EXCLUDE_PATTERNS =
5445 +EXAMPLE_PATH =
5446 +EXAMPLE_PATTERNS = *
5447 +EXAMPLE_RECURSIVE = NO
5448 +IMAGE_PATH =
5449 +INPUT_FILTER =
5450 +FILTER_PATTERNS =
5451 +FILTER_SOURCE_FILES = NO
5452 +#---------------------------------------------------------------------------
5453 +# configuration options related to source browsing
5454 +#---------------------------------------------------------------------------
5455 +SOURCE_BROWSER = NO
5456 +INLINE_SOURCES = NO
5457 +STRIP_CODE_COMMENTS = YES
5458 +REFERENCED_BY_RELATION = YES
5459 +REFERENCES_RELATION = YES
5460 +USE_HTAGS = NO
5461 +VERBATIM_HEADERS = NO
5462 +#---------------------------------------------------------------------------
5463 +# configuration options related to the alphabetical class index
5464 +#---------------------------------------------------------------------------
5465 +ALPHABETICAL_INDEX = NO
5466 +COLS_IN_ALPHA_INDEX = 5
5467 +IGNORE_PREFIX =
5468 +#---------------------------------------------------------------------------
5469 +# configuration options related to the HTML output
5470 +#---------------------------------------------------------------------------
5471 +GENERATE_HTML = YES
5472 +HTML_OUTPUT = html
5473 +HTML_FILE_EXTENSION = .html
5474 +HTML_HEADER =
5475 +HTML_FOOTER =
5476 +HTML_STYLESHEET =
5477 +HTML_ALIGN_MEMBERS = YES
5478 +GENERATE_HTMLHELP = NO
5479 +CHM_FILE =
5480 +HHC_LOCATION =
5481 +GENERATE_CHI = NO
5482 +BINARY_TOC = NO
5483 +TOC_EXPAND = NO
5484 +DISABLE_INDEX = NO
5485 +ENUM_VALUES_PER_LINE = 4
5486 +GENERATE_TREEVIEW = YES
5487 +TREEVIEW_WIDTH = 250
5488 +#---------------------------------------------------------------------------
5489 +# configuration options related to the LaTeX output
5490 +#---------------------------------------------------------------------------
5491 +GENERATE_LATEX = NO
5492 +LATEX_OUTPUT = latex
5493 +LATEX_CMD_NAME = latex
5494 +MAKEINDEX_CMD_NAME = makeindex
5495 +COMPACT_LATEX = NO
5496 +PAPER_TYPE = a4wide
5497 +EXTRA_PACKAGES =
5498 +LATEX_HEADER =
5499 +PDF_HYPERLINKS = NO
5500 +USE_PDFLATEX = NO
5501 +LATEX_BATCHMODE = NO
5502 +LATEX_HIDE_INDICES = NO
5503 +#---------------------------------------------------------------------------
5504 +# configuration options related to the RTF output
5505 +#---------------------------------------------------------------------------
5506 +GENERATE_RTF = NO
5507 +RTF_OUTPUT = rtf
5508 +COMPACT_RTF = NO
5509 +RTF_HYPERLINKS = NO
5510 +RTF_STYLESHEET_FILE =
5511 +RTF_EXTENSIONS_FILE =
5512 +#---------------------------------------------------------------------------
5513 +# configuration options related to the man page output
5514 +#---------------------------------------------------------------------------
5515 +GENERATE_MAN = NO
5516 +MAN_OUTPUT = man
5517 +MAN_EXTENSION = .3
5518 +MAN_LINKS = NO
5519 +#---------------------------------------------------------------------------
5520 +# configuration options related to the XML output
5521 +#---------------------------------------------------------------------------
5522 +GENERATE_XML = NO
5523 +XML_OUTPUT = xml
5524 +XML_SCHEMA =
5525 +XML_DTD =
5526 +XML_PROGRAMLISTING = YES
5527 +#---------------------------------------------------------------------------
5528 +# configuration options for the AutoGen Definitions output
5529 +#---------------------------------------------------------------------------
5530 +GENERATE_AUTOGEN_DEF = NO
5531 +#---------------------------------------------------------------------------
5532 +# configuration options related to the Perl module output
5533 +#---------------------------------------------------------------------------
5534 +GENERATE_PERLMOD = NO
5535 +PERLMOD_LATEX = NO
5536 +PERLMOD_PRETTY = YES
5537 +PERLMOD_MAKEVAR_PREFIX =
5538 +#---------------------------------------------------------------------------
5539 +# Configuration options related to the preprocessor
5540 +#---------------------------------------------------------------------------
5541 +ENABLE_PREPROCESSING = YES
5542 +MACRO_EXPANSION = NO
5543 +EXPAND_ONLY_PREDEF = NO
5544 +SEARCH_INCLUDES = YES
5545 +INCLUDE_PATH =
5546 +INCLUDE_FILE_PATTERNS =
5547 +PREDEFINED = DEBUG DEBUG_MEMORY
5548 +EXPAND_AS_DEFINED =
5549 +SKIP_FUNCTION_MACROS = YES
5550 +#---------------------------------------------------------------------------
5551 +# Configuration::additions related to external references
5552 +#---------------------------------------------------------------------------
5553 +TAGFILES =
5554 +GENERATE_TAGFILE =
5555 +ALLEXTERNALS = NO
5556 +EXTERNAL_GROUPS = YES
5557 +PERL_PATH = /usr/bin/perl
5558 +#---------------------------------------------------------------------------
5559 +# Configuration options related to the dot tool
5560 +#---------------------------------------------------------------------------
5561 +CLASS_DIAGRAMS = YES
5562 +HIDE_UNDOC_RELATIONS = YES
5563 +HAVE_DOT = NO
5564 +CLASS_GRAPH = YES
5565 +COLLABORATION_GRAPH = YES
5566 +GROUP_GRAPHS = YES
5567 +UML_LOOK = NO
5568 +TEMPLATE_RELATIONS = NO
5569 +INCLUDE_GRAPH = NO
5570 +INCLUDED_BY_GRAPH = YES
5571 +CALL_GRAPH = NO
5572 +GRAPHICAL_HIERARCHY = YES
5573 +DIRECTORY_GRAPH = YES
5574 +DOT_IMAGE_FORMAT = png
5575 +DOT_PATH =
5576 +DOTFILE_DIRS =
5577 +MAX_DOT_GRAPH_DEPTH = 1000
5578 +DOT_TRANSPARENT = NO
5579 +DOT_MULTI_TARGETS = NO
5580 +GENERATE_LEGEND = YES
5581 +DOT_CLEANUP = YES
5582 +#---------------------------------------------------------------------------
5583 +# Configuration::additions related to the search engine
5584 +#---------------------------------------------------------------------------
5585 +SEARCHENGINE = NO
5586 --- /dev/null
5587 +++ b/drivers/usb/host/dwc_common_port/dwc_cc.c
5588 @@ -0,0 +1,532 @@
5589 +/* =========================================================================
5590 + * $File: //dwh/usb_iip/dev/software/dwc_common_port_2/dwc_cc.c $
5591 + * $Revision: #4 $
5592 + * $Date: 2010/11/04 $
5593 + * $Change: 1621692 $
5594 + *
5595 + * Synopsys Portability Library Software and documentation
5596 + * (hereinafter, "Software") is an Unsupported proprietary work of
5597 + * Synopsys, Inc. unless otherwise expressly agreed to in writing
5598 + * between Synopsys and you.
5599 + *
5600 + * The Software IS NOT an item of Licensed Software or Licensed Product
5601 + * under any End User Software License Agreement or Agreement for
5602 + * Licensed Product with Synopsys or any supplement thereto. You are
5603 + * permitted to use and redistribute this Software in source and binary
5604 + * forms, with or without modification, provided that redistributions
5605 + * of source code must retain this notice. You may not view, use,
5606 + * disclose, copy or distribute this file or any information contained
5607 + * herein except pursuant to this license grant from Synopsys. If you
5608 + * do not agree with this notice, including the disclaimer below, then
5609 + * you are not authorized to use the Software.
5610 + *
5611 + * THIS SOFTWARE IS BEING DISTRIBUTED BY SYNOPSYS SOLELY ON AN "AS IS"
5612 + * BASIS AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
5613 + * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
5614 + * FOR A PARTICULAR PURPOSE ARE HEREBY DISCLAIMED. IN NO EVENT SHALL
5615 + * SYNOPSYS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
5616 + * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
5617 + * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
5618 + * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY
5619 + * OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
5620 + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE
5621 + * USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH
5622 + * DAMAGE.
5623 + * ========================================================================= */
5624 +#ifdef DWC_CCLIB
5625 +
5626 +#include "dwc_cc.h"
5627 +
5628 +typedef struct dwc_cc
5629 +{
5630 + uint32_t uid;
5631 + uint8_t chid[16];
5632 + uint8_t cdid[16];
5633 + uint8_t ck[16];
5634 + uint8_t *name;
5635 + uint8_t length;
5636 + DWC_CIRCLEQ_ENTRY(dwc_cc) list_entry;
5637 +} dwc_cc_t;
5638 +
5639 +DWC_CIRCLEQ_HEAD(context_list, dwc_cc);
5640 +
5641 +/** The main structure for CC management. */
5642 +struct dwc_cc_if
5643 +{
5644 + dwc_mutex_t *mutex;
5645 + char *filename;
5646 +
5647 + unsigned is_host:1;
5648 +
5649 + dwc_notifier_t *notifier;
5650 +
5651 + struct context_list list;
5652 +};
5653 +
5654 +#ifdef DEBUG
5655 +static inline void dump_bytes(char *name, uint8_t *bytes, int len)
5656 +{
5657 + int i;
5658 + DWC_PRINTF("%s: ", name);
5659 + for (i=0; i<len; i++) {
5660 + DWC_PRINTF("%02x ", bytes[i]);
5661 + }
5662 + DWC_PRINTF("\n");
5663 +}
5664 +#else
5665 +#define dump_bytes(x...)
5666 +#endif
5667 +
5668 +static dwc_cc_t *alloc_cc(void *mem_ctx, uint8_t *name, uint32_t length)
5669 +{
5670 + dwc_cc_t *cc = dwc_alloc(mem_ctx, sizeof(dwc_cc_t));
5671 + if (!cc) {
5672 + return NULL;
5673 + }
5674 + DWC_MEMSET(cc, 0, sizeof(dwc_cc_t));
5675 +
5676 + if (name) {
5677 + cc->length = length;
5678 + cc->name = dwc_alloc(mem_ctx, length);
5679 + if (!cc->name) {
5680 + dwc_free(mem_ctx, cc);
5681 + return NULL;
5682 + }
5683 +
5684 + DWC_MEMCPY(cc->name, name, length);
5685 + }
5686 +
5687 + return cc;
5688 +}
5689 +
5690 +static void free_cc(void *mem_ctx, dwc_cc_t *cc)
5691 +{
5692 + if (cc->name) {
5693 + dwc_free(mem_ctx, cc->name);
5694 + }
5695 + dwc_free(mem_ctx, cc);
5696 +}
5697 +
5698 +static uint32_t next_uid(dwc_cc_if_t *cc_if)
5699 +{
5700 + uint32_t uid = 0;
5701 + dwc_cc_t *cc;
5702 + DWC_CIRCLEQ_FOREACH(cc, &cc_if->list, list_entry) {
5703 + if (cc->uid > uid) {
5704 + uid = cc->uid;
5705 + }
5706 + }
5707 +
5708 + if (uid == 0) {
5709 + uid = 255;
5710 + }
5711 +
5712 + return uid + 1;
5713 +}
5714 +
5715 +static dwc_cc_t *cc_find(dwc_cc_if_t *cc_if, uint32_t uid)
5716 +{
5717 + dwc_cc_t *cc;
5718 + DWC_CIRCLEQ_FOREACH(cc, &cc_if->list, list_entry) {
5719 + if (cc->uid == uid) {
5720 + return cc;
5721 + }
5722 + }
5723 + return NULL;
5724 +}
5725 +
5726 +static unsigned int cc_data_size(dwc_cc_if_t *cc_if)
5727 +{
5728 + unsigned int size = 0;
5729 + dwc_cc_t *cc;
5730 + DWC_CIRCLEQ_FOREACH(cc, &cc_if->list, list_entry) {
5731 + size += (48 + 1);
5732 + if (cc->name) {
5733 + size += cc->length;
5734 + }
5735 + }
5736 + return size;
5737 +}
5738 +
5739 +static uint32_t cc_match_chid(dwc_cc_if_t *cc_if, uint8_t *chid)
5740 +{
5741 + uint32_t uid = 0;
5742 + dwc_cc_t *cc;
5743 +
5744 + DWC_CIRCLEQ_FOREACH(cc, &cc_if->list, list_entry) {
5745 + if (DWC_MEMCMP(cc->chid, chid, 16) == 0) {
5746 + uid = cc->uid;
5747 + break;
5748 + }
5749 + }
5750 + return uid;
5751 +}
5752 +static uint32_t cc_match_cdid(dwc_cc_if_t *cc_if, uint8_t *cdid)
5753 +{
5754 + uint32_t uid = 0;
5755 + dwc_cc_t *cc;
5756 +
5757 + DWC_CIRCLEQ_FOREACH(cc, &cc_if->list, list_entry) {
5758 + if (DWC_MEMCMP(cc->cdid, cdid, 16) == 0) {
5759 + uid = cc->uid;
5760 + break;
5761 + }
5762 + }
5763 + return uid;
5764 +}
5765 +
5766 +/* Internal cc_add */
5767 +static int32_t cc_add(void *mem_ctx, dwc_cc_if_t *cc_if, uint8_t *chid,
5768 + uint8_t *cdid, uint8_t *ck, uint8_t *name, uint8_t length)
5769 +{
5770 + dwc_cc_t *cc;
5771 + uint32_t uid;
5772 +
5773 + if (cc_if->is_host) {
5774 + uid = cc_match_cdid(cc_if, cdid);
5775 + }
5776 + else {
5777 + uid = cc_match_chid(cc_if, chid);
5778 + }
5779 +
5780 + if (uid) {
5781 + DWC_DEBUGC("Replacing previous connection context id=%d name=%p name_len=%d", uid, name, length);
5782 + cc = cc_find(cc_if, uid);
5783 + }
5784 + else {
5785 + cc = alloc_cc(mem_ctx, name, length);
5786 + cc->uid = next_uid(cc_if);
5787 + DWC_CIRCLEQ_INSERT_TAIL(&cc_if->list, cc, list_entry);
5788 + }
5789 +
5790 + DWC_MEMCPY(&(cc->chid[0]), chid, 16);
5791 + DWC_MEMCPY(&(cc->cdid[0]), cdid, 16);
5792 + DWC_MEMCPY(&(cc->ck[0]), ck, 16);
5793 +
5794 + DWC_DEBUGC("Added connection context id=%d name=%p name_len=%d", cc->uid, name, length);
5795 + dump_bytes("CHID", cc->chid, 16);
5796 + dump_bytes("CDID", cc->cdid, 16);
5797 + dump_bytes("CK", cc->ck, 16);
5798 + return cc->uid;
5799 +}
5800 +
5801 +/* Internal cc_clear */
5802 +static void cc_clear(void *mem_ctx, dwc_cc_if_t *cc_if)
5803 +{
5804 + while (!DWC_CIRCLEQ_EMPTY(&cc_if->list)) {
5805 + dwc_cc_t *cc = DWC_CIRCLEQ_FIRST(&cc_if->list);
5806 + DWC_CIRCLEQ_REMOVE_INIT(&cc_if->list, cc, list_entry);
5807 + free_cc(mem_ctx, cc);
5808 + }
5809 +}
5810 +
5811 +dwc_cc_if_t *dwc_cc_if_alloc(void *mem_ctx, void *mtx_ctx,
5812 + dwc_notifier_t *notifier, unsigned is_host)
5813 +{
5814 + dwc_cc_if_t *cc_if = NULL;
5815 +
5816 + /* Allocate a common_cc_if structure */
5817 + cc_if = dwc_alloc(mem_ctx, sizeof(dwc_cc_if_t));
5818 +
5819 + if (!cc_if)
5820 + return NULL;
5821 +
5822 +#if (defined(DWC_LINUX) && defined(CONFIG_DEBUG_MUTEXES))
5823 + DWC_MUTEX_ALLOC_LINUX_DEBUG(cc_if->mutex);
5824 +#else
5825 + cc_if->mutex = dwc_mutex_alloc(mtx_ctx);
5826 +#endif
5827 + if (!cc_if->mutex) {
5828 + dwc_free(mem_ctx, cc_if);
5829 + return NULL;
5830 + }
5831 +
5832 + DWC_CIRCLEQ_INIT(&cc_if->list);
5833 + cc_if->is_host = is_host;
5834 + cc_if->notifier = notifier;
5835 + return cc_if;
5836 +}
5837 +
5838 +void dwc_cc_if_free(void *mem_ctx, void *mtx_ctx, dwc_cc_if_t *cc_if)
5839 +{
5840 +#if (defined(DWC_LINUX) && defined(CONFIG_DEBUG_MUTEXES))
5841 + DWC_MUTEX_FREE(cc_if->mutex);
5842 +#else
5843 + dwc_mutex_free(mtx_ctx, cc_if->mutex);
5844 +#endif
5845 + cc_clear(mem_ctx, cc_if);
5846 + dwc_free(mem_ctx, cc_if);
5847 +}
5848 +
5849 +static void cc_changed(dwc_cc_if_t *cc_if)
5850 +{
5851 + if (cc_if->notifier) {
5852 + dwc_notify(cc_if->notifier, DWC_CC_LIST_CHANGED_NOTIFICATION, cc_if);
5853 + }
5854 +}
5855 +
5856 +void dwc_cc_clear(void *mem_ctx, dwc_cc_if_t *cc_if)
5857 +{
5858 + DWC_MUTEX_LOCK(cc_if->mutex);
5859 + cc_clear(mem_ctx, cc_if);
5860 + DWC_MUTEX_UNLOCK(cc_if->mutex);
5861 + cc_changed(cc_if);
5862 +}
5863 +
5864 +int32_t dwc_cc_add(void *mem_ctx, dwc_cc_if_t *cc_if, uint8_t *chid,
5865 + uint8_t *cdid, uint8_t *ck, uint8_t *name, uint8_t length)
5866 +{
5867 + uint32_t uid;
5868 +
5869 + DWC_MUTEX_LOCK(cc_if->mutex);
5870 + uid = cc_add(mem_ctx, cc_if, chid, cdid, ck, name, length);
5871 + DWC_MUTEX_UNLOCK(cc_if->mutex);
5872 + cc_changed(cc_if);
5873 +
5874 + return uid;
5875 +}
5876 +
5877 +void dwc_cc_change(void *mem_ctx, dwc_cc_if_t *cc_if, int32_t id, uint8_t *chid,
5878 + uint8_t *cdid, uint8_t *ck, uint8_t *name, uint8_t length)
5879 +{
5880 + dwc_cc_t* cc;
5881 +
5882 + DWC_DEBUGC("Change connection context %d", id);
5883 +
5884 + DWC_MUTEX_LOCK(cc_if->mutex);
5885 + cc = cc_find(cc_if, id);
5886 + if (!cc) {
5887 + DWC_ERROR("Uid %d not found in cc list\n", id);
5888 + DWC_MUTEX_UNLOCK(cc_if->mutex);
5889 + return;
5890 + }
5891 +
5892 + if (chid) {
5893 + DWC_MEMCPY(&(cc->chid[0]), chid, 16);
5894 + }
5895 + if (cdid) {
5896 + DWC_MEMCPY(&(cc->cdid[0]), cdid, 16);
5897 + }
5898 + if (ck) {
5899 + DWC_MEMCPY(&(cc->ck[0]), ck, 16);
5900 + }
5901 +
5902 + if (name) {
5903 + if (cc->name) {
5904 + dwc_free(mem_ctx, cc->name);
5905 + }
5906 + cc->name = dwc_alloc(mem_ctx, length);
5907 + if (!cc->name) {
5908 + DWC_ERROR("Out of memory in dwc_cc_change()\n");
5909 + DWC_MUTEX_UNLOCK(cc_if->mutex);
5910 + return;
5911 + }
5912 + cc->length = length;
5913 + DWC_MEMCPY(cc->name, name, length);
5914 + }
5915 +
5916 + DWC_MUTEX_UNLOCK(cc_if->mutex);
5917 +
5918 + cc_changed(cc_if);
5919 +
5920 + DWC_DEBUGC("Changed connection context id=%d\n", id);
5921 + dump_bytes("New CHID", cc->chid, 16);
5922 + dump_bytes("New CDID", cc->cdid, 16);
5923 + dump_bytes("New CK", cc->ck, 16);
5924 +}
5925 +
5926 +void dwc_cc_remove(void *mem_ctx, dwc_cc_if_t *cc_if, int32_t id)
5927 +{
5928 + dwc_cc_t *cc;
5929 +
5930 + DWC_DEBUGC("Removing connection context %d", id);
5931 +
5932 + DWC_MUTEX_LOCK(cc_if->mutex);
5933 + cc = cc_find(cc_if, id);
5934 + if (!cc) {
5935 + DWC_ERROR("Uid %d not found in cc list\n", id);
5936 + DWC_MUTEX_UNLOCK(cc_if->mutex);
5937 + return;
5938 + }
5939 +
5940 + DWC_CIRCLEQ_REMOVE_INIT(&cc_if->list, cc, list_entry);
5941 + DWC_MUTEX_UNLOCK(cc_if->mutex);
5942 + free_cc(mem_ctx, cc);
5943 +
5944 + cc_changed(cc_if);
5945 +}
5946 +
5947 +uint8_t *dwc_cc_data_for_save(void *mem_ctx, dwc_cc_if_t *cc_if, unsigned int *length)
5948 +{
5949 + uint8_t *buf, *x;
5950 + uint8_t zero = 0;
5951 + dwc_cc_t *cc;
5952 +
5953 + DWC_MUTEX_LOCK(cc_if->mutex);
5954 + *length = cc_data_size(cc_if);
5955 + if (!(*length)) {
5956 + DWC_MUTEX_UNLOCK(cc_if->mutex);
5957 + return NULL;
5958 + }
5959 +
5960 + DWC_DEBUGC("Creating data for saving (length=%d)", *length);
5961 +
5962 + buf = dwc_alloc(mem_ctx, *length);
5963 + if (!buf) {
5964 + *length = 0;
5965 + DWC_MUTEX_UNLOCK(cc_if->mutex);
5966 + return NULL;
5967 + }
5968 +
5969 + x = buf;
5970 + DWC_CIRCLEQ_FOREACH(cc, &cc_if->list, list_entry) {
5971 + DWC_MEMCPY(x, cc->chid, 16);
5972 + x += 16;
5973 + DWC_MEMCPY(x, cc->cdid, 16);
5974 + x += 16;
5975 + DWC_MEMCPY(x, cc->ck, 16);
5976 + x += 16;
5977 + if (cc->name) {
5978 + DWC_MEMCPY(x, &cc->length, 1);
5979 + x += 1;
5980 + DWC_MEMCPY(x, cc->name, cc->length);
5981 + x += cc->length;
5982 + }
5983 + else {
5984 + DWC_MEMCPY(x, &zero, 1);
5985 + x += 1;
5986 + }
5987 + }
5988 + DWC_MUTEX_UNLOCK(cc_if->mutex);
5989 +
5990 + return buf;
5991 +}
5992 +
5993 +void dwc_cc_restore_from_data(void *mem_ctx, dwc_cc_if_t *cc_if, uint8_t *data, uint32_t length)
5994 +{
5995 + uint8_t name_length;
5996 + uint8_t *name;
5997 + uint8_t *chid;
5998 + uint8_t *cdid;
5999 + uint8_t *ck;
6000 + uint32_t i = 0;
6001 +
6002 + DWC_MUTEX_LOCK(cc_if->mutex);
6003 + cc_clear(mem_ctx, cc_if);
6004 +
6005 + while (i < length) {
6006 + chid = &data[i];
6007 + i += 16;
6008 + cdid = &data[i];
6009 + i += 16;
6010 + ck = &data[i];
6011 + i += 16;
6012 +
6013 + name_length = data[i];
6014 + i ++;
6015 +
6016 + if (name_length) {
6017 + name = &data[i];
6018 + i += name_length;
6019 + }
6020 + else {
6021 + name = NULL;
6022 + }
6023 +
6024 + /* check to see if we haven't overflown the buffer */
6025 + if (i > length) {
6026 + DWC_ERROR("Data format error while attempting to load CCs "
6027 + "(nlen=%d, iter=%d, buflen=%d).\n", name_length, i, length);
6028 + break;
6029 + }
6030 +
6031 + cc_add(mem_ctx, cc_if, chid, cdid, ck, name, name_length);
6032 + }
6033 + DWC_MUTEX_UNLOCK(cc_if->mutex);
6034 +
6035 + cc_changed(cc_if);
6036 +}
6037 +
6038 +uint32_t dwc_cc_match_chid(dwc_cc_if_t *cc_if, uint8_t *chid)
6039 +{
6040 + uint32_t uid = 0;
6041 +
6042 + DWC_MUTEX_LOCK(cc_if->mutex);
6043 + uid = cc_match_chid(cc_if, chid);
6044 + DWC_MUTEX_UNLOCK(cc_if->mutex);
6045 + return uid;
6046 +}
6047 +uint32_t dwc_cc_match_cdid(dwc_cc_if_t *cc_if, uint8_t *cdid)
6048 +{
6049 + uint32_t uid = 0;
6050 +
6051 + DWC_MUTEX_LOCK(cc_if->mutex);
6052 + uid = cc_match_cdid(cc_if, cdid);
6053 + DWC_MUTEX_UNLOCK(cc_if->mutex);
6054 + return uid;
6055 +}
6056 +
6057 +uint8_t *dwc_cc_ck(dwc_cc_if_t *cc_if, int32_t id)
6058 +{
6059 + uint8_t *ck = NULL;
6060 + dwc_cc_t *cc;
6061 +
6062 + DWC_MUTEX_LOCK(cc_if->mutex);
6063 + cc = cc_find(cc_if, id);
6064 + if (cc) {
6065 + ck = cc->ck;
6066 + }
6067 + DWC_MUTEX_UNLOCK(cc_if->mutex);
6068 +
6069 + return ck;
6070 +
6071 +}
6072 +
6073 +uint8_t *dwc_cc_chid(dwc_cc_if_t *cc_if, int32_t id)
6074 +{
6075 + uint8_t *retval = NULL;
6076 + dwc_cc_t *cc;
6077 +
6078 + DWC_MUTEX_LOCK(cc_if->mutex);
6079 + cc = cc_find(cc_if, id);
6080 + if (cc) {
6081 + retval = cc->chid;
6082 + }
6083 + DWC_MUTEX_UNLOCK(cc_if->mutex);
6084 +
6085 + return retval;
6086 +}
6087 +
6088 +uint8_t *dwc_cc_cdid(dwc_cc_if_t *cc_if, int32_t id)
6089 +{
6090 + uint8_t *retval = NULL;
6091 + dwc_cc_t *cc;
6092 +
6093 + DWC_MUTEX_LOCK(cc_if->mutex);
6094 + cc = cc_find(cc_if, id);
6095 + if (cc) {
6096 + retval = cc->cdid;
6097 + }
6098 + DWC_MUTEX_UNLOCK(cc_if->mutex);
6099 +
6100 + return retval;
6101 +}
6102 +
6103 +uint8_t *dwc_cc_name(dwc_cc_if_t *cc_if, int32_t id, uint8_t *length)
6104 +{
6105 + uint8_t *retval = NULL;
6106 + dwc_cc_t *cc;
6107 +
6108 + DWC_MUTEX_LOCK(cc_if->mutex);
6109 + *length = 0;
6110 + cc = cc_find(cc_if, id);
6111 + if (cc) {
6112 + *length = cc->length;
6113 + retval = cc->name;
6114 + }
6115 + DWC_MUTEX_UNLOCK(cc_if->mutex);
6116 +
6117 + return retval;
6118 +}
6119 +
6120 +#endif /* DWC_CCLIB */
6121 --- /dev/null
6122 +++ b/drivers/usb/host/dwc_common_port/dwc_cc.h
6123 @@ -0,0 +1,224 @@
6124 +/* =========================================================================
6125 + * $File: //dwh/usb_iip/dev/software/dwc_common_port_2/dwc_cc.h $
6126 + * $Revision: #4 $
6127 + * $Date: 2010/09/28 $
6128 + * $Change: 1596182 $
6129 + *
6130 + * Synopsys Portability Library Software and documentation
6131 + * (hereinafter, "Software") is an Unsupported proprietary work of
6132 + * Synopsys, Inc. unless otherwise expressly agreed to in writing
6133 + * between Synopsys and you.
6134 + *
6135 + * The Software IS NOT an item of Licensed Software or Licensed Product
6136 + * under any End User Software License Agreement or Agreement for
6137 + * Licensed Product with Synopsys or any supplement thereto. You are
6138 + * permitted to use and redistribute this Software in source and binary
6139 + * forms, with or without modification, provided that redistributions
6140 + * of source code must retain this notice. You may not view, use,
6141 + * disclose, copy or distribute this file or any information contained
6142 + * herein except pursuant to this license grant from Synopsys. If you
6143 + * do not agree with this notice, including the disclaimer below, then
6144 + * you are not authorized to use the Software.
6145 + *
6146 + * THIS SOFTWARE IS BEING DISTRIBUTED BY SYNOPSYS SOLELY ON AN "AS IS"
6147 + * BASIS AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
6148 + * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
6149 + * FOR A PARTICULAR PURPOSE ARE HEREBY DISCLAIMED. IN NO EVENT SHALL
6150 + * SYNOPSYS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
6151 + * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
6152 + * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
6153 + * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY
6154 + * OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
6155 + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE
6156 + * USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH
6157 + * DAMAGE.
6158 + * ========================================================================= */
6159 +#ifndef _DWC_CC_H_
6160 +#define _DWC_CC_H_
6161 +
6162 +#ifdef __cplusplus
6163 +extern "C" {
6164 +#endif
6165 +
6166 +/** @file
6167 + *
6168 + * This file defines the Connection Context library.
6169 + *
6170 + * The main data structure is dwc_cc_if_t which is returned by either the
6171 + * dwc_cc_if_alloc function or returned by the module to the user via a provided
6172 + * function. The data structure is opaque and should only be manipulated via the
6173 + * functions provided in this API.
6174 + *
6175 + * It manages a list of connection contexts and operations can be performed to
6176 + * add, remove, query, search, and change those contexts. Additionally,
6177 + * a dwc_notifier_t object can be requested from the manager so that
6178 + * the user can be notified whenever the context list has changed.
6179 + */
6180 +
6181 +#include "dwc_os.h"
6182 +#include "dwc_list.h"
6183 +#include "dwc_notifier.h"
6184 +
6185 +
6186 +/* Notifications */
6187 +#define DWC_CC_LIST_CHANGED_NOTIFICATION "DWC_CC_LIST_CHANGED_NOTIFICATION"
6188 +
6189 +struct dwc_cc_if;
6190 +typedef struct dwc_cc_if dwc_cc_if_t;
6191 +
6192 +
6193 +/** @name Connection Context Operations */
6194 +/** @{ */
6195 +
6196 +/** This function allocates memory for a dwc_cc_if_t structure, initializes
6197 + * fields to default values, and returns a pointer to the structure or NULL on
6198 + * error. */
6199 +extern dwc_cc_if_t *dwc_cc_if_alloc(void *mem_ctx, void *mtx_ctx,
6200 + dwc_notifier_t *notifier, unsigned is_host);
6201 +
6202 +/** Frees the memory for the specified CC structure allocated from
6203 + * dwc_cc_if_alloc(). */
6204 +extern void dwc_cc_if_free(void *mem_ctx, void *mtx_ctx, dwc_cc_if_t *cc_if);
6205 +
6206 +/** Removes all contexts from the connection context list */
6207 +extern void dwc_cc_clear(void *mem_ctx, dwc_cc_if_t *cc_if);
6208 +
6209 +/** Adds a connection context (CHID, CK, CDID, Name) to the connection context list.
6210 + * If a CHID already exists, the CK and name are overwritten. Statistics are
6211 + * not overwritten.
6212 + *
6213 + * @param cc_if The cc_if structure.
6214 + * @param chid A pointer to the 16-byte CHID. This value will be copied.
6215 + * @param ck A pointer to the 16-byte CK. This value will be copied.
6216 + * @param cdid A pointer to the 16-byte CDID. This value will be copied.
6217 + * @param name An optional host friendly name as defined in the association model
6218 + * spec. Must be a UTF16-LE unicode string. Can be NULL to indicate no name.
6219 + * @param length The length of the unicode string.
6220 + * @return A unique identifier used to refer to this context that is valid for
6221 + * as long as this context is still in the list. */
6222 +extern int32_t dwc_cc_add(void *mem_ctx, dwc_cc_if_t *cc_if, uint8_t *chid,
6223 + uint8_t *cdid, uint8_t *ck, uint8_t *name,
6224 + uint8_t length);
6225 +
6226 +/** Changes the CHID, CK, CDID, or Name values of a connection context in the
6227 + * list, preserving any accumulated statistics. This would typically be called
6228 + * if the host decides to change the context with a SET_CONNECTION request.
6229 + *
6230 + * @param cc_if The cc_if structure.
6231 + * @param id The identifier of the connection context.
6232 + * @param chid A pointer to the 16-byte CHID. This value will be copied. NULL
6233 + * indicates no change.
6234 + * @param cdid A pointer to the 16-byte CDID. This value will be copied. NULL
6235 + * indicates no change.
6236 + * @param ck A pointer to the 16-byte CK. This value will be copied. NULL
6237 + * indicates no change.
6238 + * @param name Host friendly name UTF16-LE. NULL indicates no change.
6239 + * @param length Length of name. */
6240 +extern void dwc_cc_change(void *mem_ctx, dwc_cc_if_t *cc_if, int32_t id,
6241 + uint8_t *chid, uint8_t *cdid, uint8_t *ck,
6242 + uint8_t *name, uint8_t length);
6243 +
6244 +/** Remove the specified connection context.
6245 + * @param cc_if The cc_if structure.
6246 + * @param id The identifier of the connection context to remove. */
6247 +extern void dwc_cc_remove(void *mem_ctx, dwc_cc_if_t *cc_if, int32_t id);
6248 +
6249 +/** Get a binary block of data for the connection context list and attributes.
6250 + * This data can be used by the OS specific driver to save the connection
6251 + * context list into non-volatile memory.
6252 + *
6253 + * @param cc_if The cc_if structure.
6254 + * @param length Return the length of the data buffer.
6255 + * @return A pointer to the data buffer. The memory for this buffer should be
6256 + * freed with DWC_FREE() after use. */
6257 +extern uint8_t *dwc_cc_data_for_save(void *mem_ctx, dwc_cc_if_t *cc_if,
6258 + unsigned int *length);
6259 +
6260 +/** Restore the connection context list from the binary data that was previously
6261 + * returned from a call to dwc_cc_data_for_save. This can be used by the OS specific
6262 + * driver to load a connection context list from non-volatile memory.
6263 + *
6264 + * @param cc_if The cc_if structure.
6265 + * @param data The data bytes as returned from dwc_cc_data_for_save.
6266 + * @param length The length of the data. */
6267 +extern void dwc_cc_restore_from_data(void *mem_ctx, dwc_cc_if_t *cc_if,
6268 + uint8_t *data, unsigned int length);
6269 +
6270 +/** Find the connection context from the specified CHID.
6271 + *
6272 + * @param cc_if The cc_if structure.
6273 + * @param chid A pointer to the CHID data.
6274 + * @return A non-zero identifier of the connection context if the CHID matches.
6275 + * Otherwise returns 0. */
6276 +extern uint32_t dwc_cc_match_chid(dwc_cc_if_t *cc_if, uint8_t *chid);
6277 +
6278 +/** Find the connection context from the specified CDID.
6279 + *
6280 + * @param cc_if The cc_if structure.
6281 + * @param cdid A pointer to the CDID data.
6282 + * @return A non-zero identifier of the connection context if the CDID matches.
6283 + * Otherwise returns 0. */
6284 +extern uint32_t dwc_cc_match_cdid(dwc_cc_if_t *cc_if, uint8_t *cdid);
6285 +
6286 +/** Retrieve the CK from the specified connection context.
6287 + *
6288 + * @param cc_if The cc_if structure.
6289 + * @param id The identifier of the connection context.
6290 + * @return A pointer to the CK data. The memory does not need to be freed. */
6291 +extern uint8_t *dwc_cc_ck(dwc_cc_if_t *cc_if, int32_t id);
6292 +
6293 +/** Retrieve the CHID from the specified connection context.
6294 + *
6295 + * @param cc_if The cc_if structure.
6296 + * @param id The identifier of the connection context.
6297 + * @return A pointer to the CHID data. The memory does not need to be freed. */
6298 +extern uint8_t *dwc_cc_chid(dwc_cc_if_t *cc_if, int32_t id);
6299 +
6300 +/** Retrieve the CDID from the specified connection context.
6301 + *
6302 + * @param cc_if The cc_if structure.
6303 + * @param id The identifier of the connection context.
6304 + * @return A pointer to the CDID data. The memory does not need to be freed. */
6305 +extern uint8_t *dwc_cc_cdid(dwc_cc_if_t *cc_if, int32_t id);
6306 +
6307 +extern uint8_t *dwc_cc_name(dwc_cc_if_t *cc_if, int32_t id, uint8_t *length);
6308 +
6309 +/** Checks a buffer for non-zero.
6310 + * @param id A pointer to a 16 byte buffer.
6311 + * @return true if the 16 byte value is non-zero. */
6312 +static inline unsigned dwc_assoc_is_not_zero_id(uint8_t *id) {
6313 + int i;
6314 + for (i=0; i<16; i++) {
6315 + if (id[i]) return 1;
6316 + }
6317 + return 0;
6318 +}
6319 +
6320 +/** Checks a buffer for zero.
6321 + * @param id A pointer to a 16 byte buffer.
6322 + * @return true if the 16 byte value is zero. */
6323 +static inline unsigned dwc_assoc_is_zero_id(uint8_t *id) {
6324 + return !dwc_assoc_is_not_zero_id(id);
6325 +}
6326 +
6327 +/** Prints an ASCII representation for the 16-byte chid, cdid, or ck, into
6328 + * buffer. */
6329 +static inline int dwc_print_id_string(char *buffer, uint8_t *id) {
6330 + char *ptr = buffer;
6331 + int i;
6332 + for (i=0; i<16; i++) {
6333 + ptr += DWC_SPRINTF(ptr, "%02x", id[i]);
6334 + if (i < 15) {
6335 + ptr += DWC_SPRINTF(ptr, " ");
6336 + }
6337 + }
6338 + return ptr - buffer;
6339 +}
6340 +
6341 +/** @} */
6342 +
6343 +#ifdef __cplusplus
6344 +}
6345 +#endif
6346 +
6347 +#endif /* _DWC_CC_H_ */
6348 --- /dev/null
6349 +++ b/drivers/usb/host/dwc_common_port/dwc_common_fbsd.c
6350 @@ -0,0 +1,1308 @@
6351 +#include "dwc_os.h"
6352 +#include "dwc_list.h"
6353 +
6354 +#ifdef DWC_CCLIB
6355 +# include "dwc_cc.h"
6356 +#endif
6357 +
6358 +#ifdef DWC_CRYPTOLIB
6359 +# include "dwc_modpow.h"
6360 +# include "dwc_dh.h"
6361 +# include "dwc_crypto.h"
6362 +#endif
6363 +
6364 +#ifdef DWC_NOTIFYLIB
6365 +# include "dwc_notifier.h"
6366 +#endif
6367 +
6368 +/* OS-Level Implementations */
6369 +
6370 +/* This is the FreeBSD 7.0 kernel implementation of the DWC platform library. */
6371 +
6372 +
6373 +/* MISC */
6374 +
6375 +void *DWC_MEMSET(void *dest, uint8_t byte, uint32_t size)
6376 +{
6377 + return memset(dest, byte, size);
6378 +}
6379 +
6380 +void *DWC_MEMCPY(void *dest, void const *src, uint32_t size)
6381 +{
6382 + return memcpy(dest, src, size);
6383 +}
6384 +
6385 +void *DWC_MEMMOVE(void *dest, void *src, uint32_t size)
6386 +{
6387 + bcopy(src, dest, size);
6388 + return dest;
6389 +}
6390 +
6391 +int DWC_MEMCMP(void *m1, void *m2, uint32_t size)
6392 +{
6393 + return memcmp(m1, m2, size);
6394 +}
6395 +
6396 +int DWC_STRNCMP(void *s1, void *s2, uint32_t size)
6397 +{
6398 + return strncmp(s1, s2, size);
6399 +}
6400 +
6401 +int DWC_STRCMP(void *s1, void *s2)
6402 +{
6403 + return strcmp(s1, s2);
6404 +}
6405 +
6406 +int DWC_STRLEN(char const *str)
6407 +{
6408 + return strlen(str);
6409 +}
6410 +
6411 +char *DWC_STRCPY(char *to, char const *from)
6412 +{
6413 + return strcpy(to, from);
6414 +}
6415 +
6416 +char *DWC_STRDUP(char const *str)
6417 +{
6418 + int len = DWC_STRLEN(str) + 1;
6419 + char *new = DWC_ALLOC_ATOMIC(len);
6420 +
6421 + if (!new) {
6422 + return NULL;
6423 + }
6424 +
6425 + DWC_MEMCPY(new, str, len);
6426 + return new;
6427 +}
6428 +
6429 +int DWC_ATOI(char *str, int32_t *value)
6430 +{
6431 + char *end = NULL;
6432 +
6433 + *value = strtol(str, &end, 0);
6434 + if (*end == '\0') {
6435 + return 0;
6436 + }
6437 +
6438 + return -1;
6439 +}
6440 +
6441 +int DWC_ATOUI(char *str, uint32_t *value)
6442 +{
6443 + char *end = NULL;
6444 +
6445 + *value = strtoul(str, &end, 0);
6446 + if (*end == '\0') {
6447 + return 0;
6448 + }
6449 +
6450 + return -1;
6451 +}
6452 +
6453 +
6454 +#ifdef DWC_UTFLIB
6455 +/* From usbstring.c */
6456 +
6457 +int DWC_UTF8_TO_UTF16LE(uint8_t const *s, uint16_t *cp, unsigned len)
6458 +{
6459 + int count = 0;
6460 + u8 c;
6461 + u16 uchar;
6462 +
6463 + /* this insists on correct encodings, though not minimal ones.
6464 + * BUT it currently rejects legit 4-byte UTF-8 code points,
6465 + * which need surrogate pairs. (Unicode 3.1 can use them.)
6466 + */
6467 + while (len != 0 && (c = (u8) *s++) != 0) {
6468 + if (unlikely(c & 0x80)) {
6469 + // 2-byte sequence:
6470 + // 00000yyyyyxxxxxx = 110yyyyy 10xxxxxx
6471 + if ((c & 0xe0) == 0xc0) {
6472 + uchar = (c & 0x1f) << 6;
6473 +
6474 + c = (u8) *s++;
6475 + if ((c & 0xc0) != 0xc0)
6476 + goto fail;
6477 + c &= 0x3f;
6478 + uchar |= c;
6479 +
6480 + // 3-byte sequence (most CJKV characters):
6481 + // zzzzyyyyyyxxxxxx = 1110zzzz 10yyyyyy 10xxxxxx
6482 + } else if ((c & 0xf0) == 0xe0) {
6483 + uchar = (c & 0x0f) << 12;
6484 +
6485 + c = (u8) *s++;
6486 + if ((c & 0xc0) != 0xc0)
6487 + goto fail;
6488 + c &= 0x3f;
6489 + uchar |= c << 6;
6490 +
6491 + c = (u8) *s++;
6492 + if ((c & 0xc0) != 0xc0)
6493 + goto fail;
6494 + c &= 0x3f;
6495 + uchar |= c;
6496 +
6497 + /* no bogus surrogates */
6498 + if (0xd800 <= uchar && uchar <= 0xdfff)
6499 + goto fail;
6500 +
6501 + // 4-byte sequence (surrogate pairs, currently rare):
6502 + // 11101110wwwwzzzzyy + 110111yyyyxxxxxx
6503 + // = 11110uuu 10uuzzzz 10yyyyyy 10xxxxxx
6504 + // (uuuuu = wwww + 1)
6505 + // FIXME accept the surrogate code points (only)
6506 + } else
6507 + goto fail;
6508 + } else
6509 + uchar = c;
6510 + put_unaligned (cpu_to_le16 (uchar), cp++);
6511 + count++;
6512 + len--;
6513 + }
6514 + return count;
6515 +fail:
6516 + return -1;
6517 +}
6518 +
6519 +#endif /* DWC_UTFLIB */
6520 +
6521 +
6522 +/* dwc_debug.h */
6523 +
6524 +dwc_bool_t DWC_IN_IRQ(void)
6525 +{
6526 +// return in_irq();
6527 + return 0;
6528 +}
6529 +
6530 +dwc_bool_t DWC_IN_BH(void)
6531 +{
6532 +// return in_softirq();
6533 + return 0;
6534 +}
6535 +
6536 +void DWC_VPRINTF(char *format, va_list args)
6537 +{
6538 + vprintf(format, args);
6539 +}
6540 +
6541 +int DWC_VSNPRINTF(char *str, int size, char *format, va_list args)
6542 +{
6543 + return vsnprintf(str, size, format, args);
6544 +}
6545 +
6546 +void DWC_PRINTF(char *format, ...)
6547 +{
6548 + va_list args;
6549 +
6550 + va_start(args, format);
6551 + DWC_VPRINTF(format, args);
6552 + va_end(args);
6553 +}
6554 +
6555 +int DWC_SPRINTF(char *buffer, char *format, ...)
6556 +{
6557 + int retval;
6558 + va_list args;
6559 +
6560 + va_start(args, format);
6561 + retval = vsprintf(buffer, format, args);
6562 + va_end(args);
6563 + return retval;
6564 +}
6565 +
6566 +int DWC_SNPRINTF(char *buffer, int size, char *format, ...)
6567 +{
6568 + int retval;
6569 + va_list args;
6570 +
6571 + va_start(args, format);
6572 + retval = vsnprintf(buffer, size, format, args);
6573 + va_end(args);
6574 + return retval;
6575 +}
6576 +
6577 +void __DWC_WARN(char *format, ...)
6578 +{
6579 + va_list args;
6580 +
6581 + va_start(args, format);
6582 + DWC_VPRINTF(format, args);
6583 + va_end(args);
6584 +}
6585 +
6586 +void __DWC_ERROR(char *format, ...)
6587 +{
6588 + va_list args;
6589 +
6590 + va_start(args, format);
6591 + DWC_VPRINTF(format, args);
6592 + va_end(args);
6593 +}
6594 +
6595 +void DWC_EXCEPTION(char *format, ...)
6596 +{
6597 + va_list args;
6598 +
6599 + va_start(args, format);
6600 + DWC_VPRINTF(format, args);
6601 + va_end(args);
6602 +// BUG_ON(1); ???
6603 +}
6604 +
6605 +#ifdef DEBUG
6606 +void __DWC_DEBUG(char *format, ...)
6607 +{
6608 + va_list args;
6609 +
6610 + va_start(args, format);
6611 + DWC_VPRINTF(format, args);
6612 + va_end(args);
6613 +}
6614 +#endif
6615 +
6616 +
6617 +/* dwc_mem.h */
6618 +
6619 +#if 0
6620 +dwc_pool_t *DWC_DMA_POOL_CREATE(uint32_t size,
6621 + uint32_t align,
6622 + uint32_t alloc)
6623 +{
6624 + struct dma_pool *pool = dma_pool_create("Pool", NULL,
6625 + size, align, alloc);
6626 + return (dwc_pool_t *)pool;
6627 +}
6628 +
6629 +void DWC_DMA_POOL_DESTROY(dwc_pool_t *pool)
6630 +{
6631 + dma_pool_destroy((struct dma_pool *)pool);
6632 +}
6633 +
6634 +void *DWC_DMA_POOL_ALLOC(dwc_pool_t *pool, uint64_t *dma_addr)
6635 +{
6636 +// return dma_pool_alloc((struct dma_pool *)pool, GFP_KERNEL, dma_addr);
6637 + return dma_pool_alloc((struct dma_pool *)pool, M_WAITOK, dma_addr);
6638 +}
6639 +
6640 +void *DWC_DMA_POOL_ZALLOC(dwc_pool_t *pool, uint64_t *dma_addr)
6641 +{
6642 + void *vaddr = DWC_DMA_POOL_ALLOC(pool, dma_addr);
6643 + memset(..);
6644 +}
6645 +
6646 +void DWC_DMA_POOL_FREE(dwc_pool_t *pool, void *vaddr, void *daddr)
6647 +{
6648 + dma_pool_free(pool, vaddr, daddr);
6649 +}
6650 +#endif
6651 +
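+/* bus_dmamap_load() callback: on success, stores the bus address of the
+ * single DMA segment in the bus_addr_t pointed to by arg. */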
6652 +static void dmamap_cb(void *arg, bus_dma_segment_t *segs, int nseg, int error)
6653 +{
6654 + if (error)
6655 + return;
6656 + *(bus_addr_t *)arg = segs[0].ds_addr;
6657 +}
6658 +
6659 +void *__DWC_DMA_ALLOC(void *dma_ctx, uint32_t size, dwc_dma_t *dma_addr)
6660 +{
6661 + dwc_dmactx_t *dma = (dwc_dmactx_t *)dma_ctx;
6662 + int error;
6663 +
6664 + error = bus_dma_tag_create(
6665 +#if __FreeBSD_version >= 700000
6666 + bus_get_dma_tag(dma->dev), /* parent */
6667 +#else
6668 + NULL, /* parent */
6669 +#endif
6670 + 4, 0, /* alignment, bounds */
6671 + BUS_SPACE_MAXADDR_32BIT, /* lowaddr */
6672 + BUS_SPACE_MAXADDR, /* highaddr */
6673 + NULL, NULL, /* filter, filterarg */
6674 + size, /* maxsize */
6675 + 1, /* nsegments */
6676 + size, /* maxsegsize */
6677 + 0, /* flags */
6678 + NULL, /* lockfunc */
6679 + NULL, /* lockarg */
6680 + &dma->dma_tag);
6681 + if (error) {
6682 + device_printf(dma->dev, "%s: bus_dma_tag_create failed: %d\n",
6683 + __func__, error);
6684 + goto fail_0;
6685 + }
6686 +
6687 + error = bus_dmamem_alloc(dma->dma_tag, &dma->dma_vaddr,
6688 + BUS_DMA_NOWAIT | BUS_DMA_COHERENT, &dma->dma_map);
6689 + if (error) {
6690 + device_printf(dma->dev, "%s: bus_dmamem_alloc(%ju) failed: %d\n",
6691 + __func__, (uintmax_t)size, error);
6692 + goto fail_1;
6693 + }
6694 +
6695 + dma->dma_paddr = 0;
6696 + error = bus_dmamap_load(dma->dma_tag, dma->dma_map, dma->dma_vaddr, size,
6697 + dmamap_cb, &dma->dma_paddr, BUS_DMA_NOWAIT);
6698 + if (error || dma->dma_paddr == 0) {
6699 + device_printf(dma->dev, "%s: bus_dmamap_load failed: %d\n",
6700 + __func__, error);
6701 + goto fail_2;
6702 + }
6703 +
6704 + *dma_addr = dma->dma_paddr;
6705 + return dma->dma_vaddr;
6706 +
6707 +fail_2:
6708 + bus_dmamap_unload(dma->dma_tag, dma->dma_map);
6709 +fail_1:
6710 + bus_dmamem_free(dma->dma_tag, dma->dma_vaddr, dma->dma_map);
6711 + bus_dma_tag_destroy(dma->dma_tag);
6712 +fail_0:
6713 + dma->dma_map = NULL;
6714 + dma->dma_tag = NULL;
6715 +
6716 + return NULL;
6717 +}
6718 +
6719 +void __DWC_DMA_FREE(void *dma_ctx, uint32_t size, void *virt_addr, dwc_dma_t dma_addr)
6720 +{
6721 + dwc_dmactx_t *dma = (dwc_dmactx_t *)dma_ctx;
6722 +
6723 + if (dma->dma_tag == NULL)
6724 + return;
6725 + if (dma->dma_map != NULL) {
6726 + bus_dmamap_sync(dma->dma_tag, dma->dma_map,
6727 + BUS_DMASYNC_POSTREAD | BUS_DMASYNC_POSTWRITE);
6728 + bus_dmamap_unload(dma->dma_tag, dma->dma_map);
6729 + bus_dmamem_free(dma->dma_tag, dma->dma_vaddr, dma->dma_map);
6730 + dma->dma_map = NULL;
6731 + }
6732 +
6733 + bus_dma_tag_destroy(dma->dma_tag);
6734 + dma->dma_tag = NULL;
6735 +}
6736 +
6737 +void *__DWC_ALLOC(void *mem_ctx, uint32_t size)
6738 +{
6739 + return malloc(size, M_DEVBUF, M_WAITOK | M_ZERO);
6740 +}
6741 +
6742 +void *__DWC_ALLOC_ATOMIC(void *mem_ctx, uint32_t size)
6743 +{
6744 + return malloc(size, M_DEVBUF, M_NOWAIT | M_ZERO);
6745 +}
6746 +
6747 +void __DWC_FREE(void *mem_ctx, void *addr)
6748 +{
6749 + free(addr, M_DEVBUF);
6750 +}
6751 +
6752 +
6753 +#ifdef DWC_CRYPTOLIB
6754 +/* dwc_crypto.h */
6755 +
6756 +void DWC_RANDOM_BYTES(uint8_t *buffer, uint32_t length)
6757 +{
6758 + get_random_bytes(buffer, length);
6759 +}
6760 +
6761 +int DWC_AES_CBC(uint8_t *message, uint32_t messagelen, uint8_t *key, uint32_t keylen, uint8_t iv[16], uint8_t *out)
6762 +{
6763 + struct crypto_blkcipher *tfm;
6764 + struct blkcipher_desc desc;
6765 + struct scatterlist sgd;
6766 + struct scatterlist sgs;
6767 +
6768 + tfm = crypto_alloc_blkcipher("cbc(aes)", 0, CRYPTO_ALG_ASYNC);
6769 + if (tfm == NULL) {
6770 + printk("failed to load transform for aes CBC\n");
6771 + return -1;
6772 + }
6773 +
6774 + crypto_blkcipher_setkey(tfm, key, keylen);
6775 + crypto_blkcipher_set_iv(tfm, iv, 16);
6776 +
6777 + sg_init_one(&sgd, out, messagelen);
6778 + sg_init_one(&sgs, message, messagelen);
6779 +
6780 + desc.tfm = tfm;
6781 + desc.flags = 0;
6782 +
6783 + if (crypto_blkcipher_encrypt(&desc, &sgd, &sgs, messagelen)) {
6784 + crypto_free_blkcipher(tfm);
6785 + DWC_ERROR("AES CBC encryption failed");
6786 + return -1;
6787 + }
6788 +
6789 + crypto_free_blkcipher(tfm);
6790 + return 0;
6791 +}
6792 +
6793 +int DWC_SHA256(uint8_t *message, uint32_t len, uint8_t *out)
6794 +{
6795 + struct crypto_hash *tfm;
6796 + struct hash_desc desc;
6797 + struct scatterlist sg;
6798 +
6799 + tfm = crypto_alloc_hash("sha256", 0, CRYPTO_ALG_ASYNC);
6800 + if (IS_ERR(tfm)) {
6801 + DWC_ERROR("Failed to load transform for sha256: %ld", PTR_ERR(tfm));
6802 + return 0;
6803 + }
6804 + desc.tfm = tfm;
6805 + desc.flags = 0;
6806 +
6807 + sg_init_one(&sg, message, len);
6808 + crypto_hash_digest(&desc, &sg, len, out);
6809 + crypto_free_hash(tfm);
6810 +
6811 + return 1;
6812 +}
6813 +
6814 +int DWC_HMAC_SHA256(uint8_t *message, uint32_t messagelen,
6815 + uint8_t *key, uint32_t keylen, uint8_t *out)
6816 +{
6817 + struct crypto_hash *tfm;
6818 + struct hash_desc desc;
6819 + struct scatterlist sg;
6820 +
6821 + tfm = crypto_alloc_hash("hmac(sha256)", 0, CRYPTO_ALG_ASYNC);
6822 + if (IS_ERR(tfm)) {
6823 + DWC_ERROR("Failed to load transform for hmac(sha256): %ld", PTR_ERR(tfm));
6824 + return 0;
6825 + }
6826 + desc.tfm = tfm;
6827 + desc.flags = 0;
6828 +
6829 + sg_init_one(&sg, message, messagelen);
6830 + crypto_hash_setkey(tfm, key, keylen);
6831 + crypto_hash_digest(&desc, &sg, messagelen, out);
6832 + crypto_free_hash(tfm);
6833 +
6834 + return 1;
6835 +}
6836 +
6837 +#endif /* DWC_CRYPTOLIB */
6838 +
6839 +
6840 +/* Byte Ordering Conversions */
6841 +
6842 +uint32_t DWC_CPU_TO_LE32(uint32_t *p)
6843 +{
6844 +#ifdef __LITTLE_ENDIAN
6845 + return *p;
6846 +#else
6847 + uint8_t *u_p = (uint8_t *)p;
6848 +
6849 + return (u_p[3] | (u_p[2] << 8) | (u_p[1] << 16) | (u_p[0] << 24));
6850 +#endif
6851 +}
6852 +
6853 +uint32_t DWC_CPU_TO_BE32(uint32_t *p)
6854 +{
6855 +#ifdef __BIG_ENDIAN
6856 + return *p;
6857 +#else
6858 + uint8_t *u_p = (uint8_t *)p;
6859 +
6860 + return (u_p[3] | (u_p[2] << 8) | (u_p[1] << 16) | (u_p[0] << 24));
6861 +#endif
6862 +}
6863 +
6864 +uint32_t DWC_LE32_TO_CPU(uint32_t *p)
6865 +{
6866 +#ifdef __LITTLE_ENDIAN
6867 + return *p;
6868 +#else
6869 + uint8_t *u_p = (uint8_t *)p;
6870 +
6871 + return (u_p[3] | (u_p[2] << 8) | (u_p[1] << 16) | (u_p[0] << 24));
6872 +#endif
6873 +}
6874 +
6875 +uint32_t DWC_BE32_TO_CPU(uint32_t *p)
6876 +{
6877 +#ifdef __BIG_ENDIAN
6878 + return *p;
6879 +#else
6880 + uint8_t *u_p = (uint8_t *)p;
6881 +
6882 + return (u_p[3] | (u_p[2] << 8) | (u_p[1] << 16) | (u_p[0] << 24));
6883 +#endif
6884 +}
6885 +
6886 +uint16_t DWC_CPU_TO_LE16(uint16_t *p)
6887 +{
6888 +#ifdef __LITTLE_ENDIAN
6889 + return *p;
6890 +#else
6891 + uint8_t *u_p = (uint8_t *)p;
6892 + return (u_p[1] | (u_p[0] << 8));
6893 +#endif
6894 +}
6895 +
6896 +uint16_t DWC_CPU_TO_BE16(uint16_t *p)
6897 +{
6898 +#ifdef __BIG_ENDIAN
6899 + return *p;
6900 +#else
6901 + uint8_t *u_p = (uint8_t *)p;
6902 + return (u_p[1] | (u_p[0] << 8));
6903 +#endif
6904 +}
6905 +
6906 +uint16_t DWC_LE16_TO_CPU(uint16_t *p)
6907 +{
6908 +#ifdef __LITTLE_ENDIAN
6909 + return *p;
6910 +#else
6911 + uint8_t *u_p = (uint8_t *)p;
6912 + return (u_p[1] | (u_p[0] << 8));
6913 +#endif
6914 +}
6915 +
6916 +uint16_t DWC_BE16_TO_CPU(uint16_t *p)
6917 +{
6918 +#ifdef __BIG_ENDIAN
6919 + return *p;
6920 +#else
6921 + uint8_t *u_p = (uint8_t *)p;
6922 + return (u_p[1] | (u_p[0] << 8));
6923 +#endif
6924 +}
6925 +
6926 +
6927 +/* Registers */
6928 +
6929 +uint32_t DWC_READ_REG32(void *io_ctx, uint32_t volatile *reg)
6930 +{
6931 + dwc_ioctx_t *io = (dwc_ioctx_t *)io_ctx;
6932 + bus_size_t ior = (bus_size_t)reg;
6933 +
6934 + return bus_space_read_4(io->iot, io->ioh, ior);
6935 +}
6936 +
6937 +#if 0
6938 +uint64_t DWC_READ_REG64(void *io_ctx, uint64_t volatile *reg)
6939 +{
6940 + dwc_ioctx_t *io = (dwc_ioctx_t *)io_ctx;
6941 + bus_size_t ior = (bus_size_t)reg;
6942 +
6943 + return bus_space_read_8(io->iot, io->ioh, ior);
6944 +}
6945 +#endif
6946 +
6947 +void DWC_WRITE_REG32(void *io_ctx, uint32_t volatile *reg, uint32_t value)
6948 +{
6949 + dwc_ioctx_t *io = (dwc_ioctx_t *)io_ctx;
6950 + bus_size_t ior = (bus_size_t)reg;
6951 +
6952 + bus_space_write_4(io->iot, io->ioh, ior, value);
6953 +}
6954 +
6955 +#if 0
6956 +void DWC_WRITE_REG64(void *io_ctx, uint64_t volatile *reg, uint64_t value)
6957 +{
6958 + dwc_ioctx_t *io = (dwc_ioctx_t *)io_ctx;
6959 + bus_size_t ior = (bus_size_t)reg;
6960 +
6961 + bus_space_write_8(io->iot, io->ioh, ior, value);
6962 +}
6963 +#endif
6964 +
6965 +void DWC_MODIFY_REG32(void *io_ctx, uint32_t volatile *reg, uint32_t clear_mask,
6966 + uint32_t set_mask)
6967 +{
6968 + dwc_ioctx_t *io = (dwc_ioctx_t *)io_ctx;
6969 + bus_size_t ior = (bus_size_t)reg;
6970 +
6971 + bus_space_write_4(io->iot, io->ioh, ior,
6972 + (bus_space_read_4(io->iot, io->ioh, ior) &
6973 + ~clear_mask) | set_mask);
6974 +}
6975 +
6976 +#if 0
6977 +void DWC_MODIFY_REG64(void *io_ctx, uint64_t volatile *reg, uint64_t clear_mask,
6978 + uint64_t set_mask)
6979 +{
6980 + dwc_ioctx_t *io = (dwc_ioctx_t *)io_ctx;
6981 + bus_size_t ior = (bus_size_t)reg;
6982 +
6983 + bus_space_write_8(io->iot, io->ioh, ior,
6984 + (bus_space_read_8(io->iot, io->ioh, ior) &
6985 + ~clear_mask) | set_mask);
6986 +}
6987 +#endif
6988 +
6989 +
6990 +/* Locking */
6991 +
6992 +dwc_spinlock_t *DWC_SPINLOCK_ALLOC(void)
6993 +{
6994 + struct mtx *sl = DWC_ALLOC(sizeof(*sl));
6995 +
6996 + if (!sl) {
6997 + DWC_ERROR("Cannot allocate memory for spinlock");
6998 + return NULL;
6999 + }
7000 +
7001 + mtx_init(sl, "dw3spn", NULL, MTX_SPIN);
7002 + return (dwc_spinlock_t *)sl;
7003 +}
7004 +
7005 +void DWC_SPINLOCK_FREE(dwc_spinlock_t *lock)
7006 +{
7007 + struct mtx *sl = (struct mtx *)lock;
7008 +
7009 + mtx_destroy(sl);
7010 + DWC_FREE(sl);
7011 +}
7012 +
7013 +void DWC_SPINLOCK(dwc_spinlock_t *lock)
7014 +{
7015 + mtx_lock_spin((struct mtx *)lock); // ???
7016 +}
7017 +
7018 +void DWC_SPINUNLOCK(dwc_spinlock_t *lock)
7019 +{
7020 + mtx_unlock_spin((struct mtx *)lock); // ???
7021 +}
7022 +
7023 +void DWC_SPINLOCK_IRQSAVE(dwc_spinlock_t *lock, dwc_irqflags_t *flags)
7024 +{
7025 + mtx_lock_spin((struct mtx *)lock);
7026 +}
7027 +
7028 +void DWC_SPINUNLOCK_IRQRESTORE(dwc_spinlock_t *lock, dwc_irqflags_t flags)
7029 +{
7030 + mtx_unlock_spin((struct mtx *)lock);
7031 +}
7032 +
7033 +dwc_mutex_t *DWC_MUTEX_ALLOC(void)
7034 +{
7035 + struct mtx *m;
7036 + dwc_mutex_t *mutex = (dwc_mutex_t *)DWC_ALLOC(sizeof(struct mtx));
7037 +
7038 + if (!mutex) {
7039 + DWC_ERROR("Cannot allocate memory for mutex");
7040 + return NULL;
7041 + }
7042 +
7043 + m = (struct mtx *)mutex;
7044 + mtx_init(m, "dw3mtx", NULL, MTX_DEF);
7045 + return mutex;
7046 +}
7047 +
7048 +#if (defined(DWC_LINUX) && defined(CONFIG_DEBUG_MUTEXES))
7049 +#else
7050 +void DWC_MUTEX_FREE(dwc_mutex_t *mutex)
7051 +{
7052 + mtx_destroy((struct mtx *)mutex);
7053 + DWC_FREE(mutex);
7054 +}
7055 +#endif
7056 +
7057 +void DWC_MUTEX_LOCK(dwc_mutex_t *mutex)
7058 +{
7059 + struct mtx *m = (struct mtx *)mutex;
7060 +
7061 + mtx_lock(m);
7062 +}
7063 +
7064 +int DWC_MUTEX_TRYLOCK(dwc_mutex_t *mutex)
7065 +{
7066 + struct mtx *m = (struct mtx *)mutex;
7067 +
7068 + return mtx_trylock(m);
7069 +}
7070 +
7071 +void DWC_MUTEX_UNLOCK(dwc_mutex_t *mutex)
7072 +{
7073 + struct mtx *m = (struct mtx *)mutex;
7074 +
7075 + mtx_unlock(m);
7076 +}
7077 +
7078 +
7079 +/* Timing */
7080 +
7081 +void DWC_UDELAY(uint32_t usecs)
7082 +{
7083 + DELAY(usecs);
7084 +}
7085 +
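+/* Busy-waits in 1 ms DELAY() steps; assumes msecs > 0 (a zero count would
+ * wrap the loop counter). */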
7086 +void DWC_MDELAY(uint32_t msecs)
7087 +{
7088 + do {
7089 + DELAY(1000);
7090 + } while (--msecs);
7091 +}
7092 +
7093 +void DWC_MSLEEP(uint32_t msecs)
7094 +{
7095 + struct timeval tv;
7096 +
7097 + tv.tv_sec = msecs / 1000;
7098 + tv.tv_usec = (msecs - tv.tv_sec * 1000) * 1000;
7099 + pause("dw3slp", tvtohz(&tv));
7100 +}
7101 +
7102 +uint32_t DWC_TIME(void)
7103 +{
7104 + struct timeval tv;
7105 +
7106 + microuptime(&tv); // or getmicrouptime? (less precise, but faster)
7107 + return tv.tv_sec * 1000 + tv.tv_usec / 1000;
7108 +}
7109 +
7110 +
7111 +/* Timers */
7112 +
7113 +struct dwc_timer {
7114 + struct callout t;
7115 + char *name;
7116 + dwc_spinlock_t *lock;
7117 + dwc_timer_callback_t cb;
7118 + void *data;
7119 +};
7120 +
7121 +dwc_timer_t *DWC_TIMER_ALLOC(char *name, dwc_timer_callback_t cb, void *data)
7122 +{
7123 + dwc_timer_t *t = DWC_ALLOC(sizeof(*t));
7124 +
7125 + if (!t) {
7126 + DWC_ERROR("Cannot allocate memory for timer");
7127 + return NULL;
7128 + }
7129 +
7130 + callout_init(&t->t, 1);
7131 +
7132 + t->name = DWC_STRDUP(name);
7133 + if (!t->name) {
7134 + DWC_ERROR("Cannot allocate memory for timer->name");
7135 + goto no_name;
7136 + }
7137 +
7138 + t->lock = DWC_SPINLOCK_ALLOC();
7139 + if (!t->lock) {
7140 + DWC_ERROR("Cannot allocate memory for lock");
7141 + goto no_lock;
7142 + }
7143 +
7144 + t->cb = cb;
7145 + t->data = data;
7146 +
7147 + return t;
7148 +
7149 + no_lock:
7150 + DWC_FREE(t->name);
7151 + no_name:
7152 + DWC_FREE(t);
7153 +
7154 + return NULL;
7155 +}
7156 +
7157 +void DWC_TIMER_FREE(dwc_timer_t *timer)
7158 +{
7159 + callout_stop(&timer->t);
7160 + DWC_SPINLOCK_FREE(timer->lock);
7161 + DWC_FREE(timer->name);
7162 + DWC_FREE(timer);
7163 +}
7164 +
7165 +void DWC_TIMER_SCHEDULE(dwc_timer_t *timer, uint32_t time)
7166 +{
7167 + struct timeval tv;
7168 +
7169 + tv.tv_sec = time / 1000;
7170 + tv.tv_usec = (time - tv.tv_sec * 1000) * 1000;
7171 + callout_reset(&timer->t, tvtohz(&tv), timer->cb, timer->data);
7172 +}
7173 +
7174 +void DWC_TIMER_CANCEL(dwc_timer_t *timer)
7175 +{
7176 + callout_stop(&timer->t);
7177 +}
7178 +
7179 +
7180 +/* Wait Queues */
7181 +
7182 +struct dwc_waitq {
7183 + struct mtx lock;
7184 + int abort;
7185 +};
7186 +
7187 +dwc_waitq_t *DWC_WAITQ_ALLOC(void)
7188 +{
7189 + dwc_waitq_t *wq = DWC_ALLOC(sizeof(*wq));
7190 +
7191 + if (!wq) {
7192 + DWC_ERROR("Cannot allocate memory for waitqueue");
7193 + return NULL;
7194 + }
7195 +
7196 + mtx_init(&wq->lock, "dw3wtq", NULL, MTX_DEF);
7197 + wq->abort = 0;
7198 +
7199 + return wq;
7200 +}
7201 +
7202 +void DWC_WAITQ_FREE(dwc_waitq_t *wq)
7203 +{
7204 + mtx_destroy(&wq->lock);
7205 + DWC_FREE(wq);
7206 +}
7207 +
7208 +int32_t DWC_WAITQ_WAIT(dwc_waitq_t *wq, dwc_waitq_condition_t cond, void *data)
7209 +{
7210 +// intrmask_t ipl;
7211 + int result = 0;
7212 +
7213 + mtx_lock(&wq->lock);
7214 +// ipl = splbio();
7215 +
7216 + /* Skip the sleep if already aborted or triggered */
7217 + if (!wq->abort && !cond(data)) {
7218 +// splx(ipl);
7219 + result = msleep(wq, &wq->lock, PCATCH, "dw3wat", 0); // infinite timeout
7220 +// ipl = splbio();
7221 + }
7222 +
7223 + if (result == ERESTART) { // signaled - restart
7224 + result = -DWC_E_RESTART;
7225 +
7226 + } else if (result == EINTR) { // signaled - interrupt
7227 + result = -DWC_E_ABORT;
7228 +
7229 + } else if (wq->abort) {
7230 + result = -DWC_E_ABORT;
7231 +
7232 + } else {
7233 + result = 0;
7234 + }
7235 +
7236 + wq->abort = 0;
7237 +// splx(ipl);
7238 + mtx_unlock(&wq->lock);
7239 + return result;
7240 +}
7241 +
7242 +int32_t DWC_WAITQ_WAIT_TIMEOUT(dwc_waitq_t *wq, dwc_waitq_condition_t cond,
7243 + void *data, int32_t msecs)
7244 +{
7245 + struct timeval tv, tv1, tv2;
7246 +// intrmask_t ipl;
7247 + int result = 0;
7248 +
7249 + tv.tv_sec = msecs / 1000;
7250 + tv.tv_usec = (msecs - tv.tv_sec * 1000) * 1000;
7251 +
7252 + mtx_lock(&wq->lock);
7253 +// ipl = splbio();
7254 +
7255 + /* Skip the sleep if already aborted or triggered */
7256 + if (!wq->abort && !cond(data)) {
7257 +// splx(ipl);
7258 + getmicrouptime(&tv1);
7259 + result = msleep(wq, &wq->lock, PCATCH, "dw3wto", tvtohz(&tv));
7260 + getmicrouptime(&tv2);
7261 +// ipl = splbio();
7262 + }
7263 +
7264 + if (result == 0) { // awoken
7265 + if (wq->abort) {
7266 + result = -DWC_E_ABORT;
7267 + } else {
7268 + tv2.tv_usec -= tv1.tv_usec;
7269 + if (tv2.tv_usec < 0) {
7270 + tv2.tv_usec += 1000000;
7271 + tv2.tv_sec--;
7272 + }
7273 +
7274 + tv2.tv_sec -= tv1.tv_sec;
7275 + result = tv2.tv_sec * 1000 + tv2.tv_usec / 1000;
7276 + result = msecs - result;
7277 + if (result <= 0)
7278 + result = 1;
7279 + }
7280 + } else if (result == ERESTART) { // signaled - restart
7281 + result = -DWC_E_RESTART;
7282 +
7283 + } else if (result == EINTR) { // signaled - interrupt
7284 + result = -DWC_E_ABORT;
7285 +
7286 + } else { // timed out
7287 + result = -DWC_E_TIMEOUT;
7288 + }
7289 +
7290 + wq->abort = 0;
7291 +// splx(ipl);
7292 + mtx_unlock(&wq->lock);
7293 + return result;
7294 +}
7295 +
7296 +void DWC_WAITQ_TRIGGER(dwc_waitq_t *wq)
7297 +{
7298 + wakeup(wq);
7299 +}
7300 +
7301 +void DWC_WAITQ_ABORT(dwc_waitq_t *wq)
7302 +{
7303 +// intrmask_t ipl;
7304 +
7305 + mtx_lock(&wq->lock);
7306 +// ipl = splbio();
7307 + wq->abort = 1;
7308 + wakeup(wq);
7309 +// splx(ipl);
7310 + mtx_unlock(&wq->lock);
7311 +}
7312 +
7313 +
7314 +/* Threading */
7315 +
7316 +struct dwc_thread {
7317 + struct proc *proc;
7318 + int abort;
7319 +};
7320 +
7321 +dwc_thread_t *DWC_THREAD_RUN(dwc_thread_function_t func, char *name, void *data)
7322 +{
7323 + int retval;
7324 + dwc_thread_t *thread = DWC_ALLOC(sizeof(*thread));
7325 +
7326 + if (!thread) {
7327 + return NULL;
7328 + }
7329 +
7330 + thread->abort = 0;
7331 + retval = kthread_create((void (*)(void *))func, data, &thread->proc,
7332 + RFPROC | RFNOWAIT, 0, "%s", name);
7333 + if (retval) {
7334 + DWC_FREE(thread);
7335 + return NULL;
7336 + }
7337 +
7338 + return thread;
7339 +}
7340 +
7341 +int DWC_THREAD_STOP(dwc_thread_t *thread)
7342 +{
7343 + int retval;
7344 +
7345 + thread->abort = 1;
7346 + retval = tsleep(&thread->abort, 0, "dw3stp", 60 * hz);
7347 +
7348 + if (retval == 0) {
7349 + /* DWC_THREAD_EXIT() will free the thread struct */
7350 + return 0;
7351 + }
7352 +
7353 + /* NOTE: We leak the thread struct if thread doesn't die */
7354 +
7355 + if (retval == EWOULDBLOCK) {
7356 + return -DWC_E_TIMEOUT;
7357 + }
7358 +
7359 + return -DWC_E_UNKNOWN;
7360 +}
7361 +
7362 +dwc_bool_t DWC_THREAD_SHOULD_STOP(dwc_thread_t *thread)
7363 +{
7364 + return thread->abort;
7365 +}
7366 +
7367 +void DWC_THREAD_EXIT(dwc_thread_t *thread)
7368 +{
7369 + wakeup(&thread->abort);
7370 + DWC_FREE(thread);
7371 + kthread_exit(0);
7372 +}
7373 +
7374 +
7375 +/* tasklets
7376 + - Runs in interrupt context (cannot sleep)
7377 + - Each tasklet runs on a single CPU [ How can we ensure this on FreeBSD? Does it matter? ]
7378 + - Different tasklets can be running simultaneously on different CPUs [ shouldn't matter ]
7379 + */
7380 +struct dwc_tasklet {
7381 + struct task t;
7382 + dwc_tasklet_callback_t cb;
7383 + void *data;
7384 +};
7385 +
7386 +static void tasklet_callback(void *data, int pending) // what to do with pending ???
7387 +{
7388 + dwc_tasklet_t *task = (dwc_tasklet_t *)data;
7389 +
7390 + task->cb(task->data);
7391 +}
7392 +
7393 +dwc_tasklet_t *DWC_TASK_ALLOC(char *name, dwc_tasklet_callback_t cb, void *data)
7394 +{
7395 + dwc_tasklet_t *task = DWC_ALLOC(sizeof(*task));
7396 +
7397 + if (task) {
7398 + task->cb = cb;
7399 + task->data = data;
7400 + TASK_INIT(&task->t, 0, tasklet_callback, task);
7401 + } else {
7402 + DWC_ERROR("Cannot allocate memory for tasklet");
7403 + }
7404 +
7405 + return task;
7406 +}
7407 +
7408 +void DWC_TASK_FREE(dwc_tasklet_t *task)
7409 +{
7410 + taskqueue_drain(taskqueue_fast, &task->t); // ???
7411 + DWC_FREE(task);
7412 +}
7413 +
7414 +void DWC_TASK_SCHEDULE(dwc_tasklet_t *task)
7415 +{
7416 + /* Uses predefined system queue */
7417 + taskqueue_enqueue_fast(taskqueue_fast, &task->t);
7418 +}
7419 +
7420 +
7421 +/* workqueues
7422 + - Runs in process context (can sleep)
7423 + */
7424 +typedef struct work_container {
7425 + dwc_work_callback_t cb;
7426 + void *data;
7427 + dwc_workq_t *wq;
7428 + char *name;
7429 + int hz;
7430 +
7431 +#ifdef DEBUG
7432 + DWC_CIRCLEQ_ENTRY(work_container) entry;
7433 +#endif
7434 + struct task task;
7435 +} work_container_t;
7436 +
7437 +#ifdef DEBUG
7438 +DWC_CIRCLEQ_HEAD(work_container_queue, work_container);
7439 +#endif
7440 +
7441 +struct dwc_workq {
7442 + struct taskqueue *taskq;
7443 + dwc_spinlock_t *lock;
7444 + dwc_waitq_t *waitq;
7445 + int pending;
7446 +
7447 +#ifdef DEBUG
7448 + struct work_container_queue entries;
7449 +#endif
7450 +};
7451 +
7452 +static void do_work(void *data, int pending) // what to do with pending ???
7453 +{
7454 + work_container_t *container = (work_container_t *)data;
7455 + dwc_workq_t *wq = container->wq;
7456 + dwc_irqflags_t flags;
7457 +
7458 + if (container->hz) {
7459 + pause("dw3wrk", container->hz);
7460 + }
7461 +
7462 + container->cb(container->data);
7463 + DWC_DEBUG("Work done: %s, container=%p", container->name, container);
7464 +
7465 + DWC_SPINLOCK_IRQSAVE(wq->lock, &flags);
7466 +
7467 +#ifdef DEBUG
7468 + DWC_CIRCLEQ_REMOVE(&wq->entries, container, entry);
7469 +#endif
7470 + if (container->name)
7471 + DWC_FREE(container->name);
7472 + DWC_FREE(container);
7473 + wq->pending--;
7474 + DWC_SPINUNLOCK_IRQRESTORE(wq->lock, flags);
7475 + DWC_WAITQ_TRIGGER(wq->waitq);
7476 +}
7477 +
7478 +static int work_done(void *data)
7479 +{
7480 + dwc_workq_t *workq = (dwc_workq_t *)data;
7481 +
7482 + return workq->pending == 0;
7483 +}
7484 +
7485 +int DWC_WORKQ_WAIT_WORK_DONE(dwc_workq_t *workq, int timeout)
7486 +{
7487 + return DWC_WAITQ_WAIT_TIMEOUT(workq->waitq, work_done, workq, timeout);
7488 +}
7489 +
7490 +dwc_workq_t *DWC_WORKQ_ALLOC(char *name)
7491 +{
7492 + dwc_workq_t *wq = DWC_ALLOC(sizeof(*wq));
7493 +
7494 + if (!wq) {
7495 + DWC_ERROR("Cannot allocate memory for workqueue");
7496 + return NULL;
7497 + }
7498 +
7499 + wq->taskq = taskqueue_create(name, M_NOWAIT, taskqueue_thread_enqueue, &wq->taskq);
7500 + if (!wq->taskq) {
7501 + DWC_ERROR("Cannot allocate memory for taskqueue");
7502 + goto no_taskq;
7503 + }
7504 +
7505 + wq->pending = 0;
7506 +
7507 + wq->lock = DWC_SPINLOCK_ALLOC();
7508 + if (!wq->lock) {
7509 + DWC_ERROR("Cannot allocate memory for spinlock");
7510 + goto no_lock;
7511 + }
7512 +
7513 + wq->waitq = DWC_WAITQ_ALLOC();
7514 + if (!wq->waitq) {
7515 + DWC_ERROR("Cannot allocate memory for waitqueue");
7516 + goto no_waitq;
7517 + }
7518 +
7519 + taskqueue_start_threads(&wq->taskq, 1, PWAIT, "%s taskq", "dw3tsk");
7520 +
7521 +#ifdef DEBUG
7522 + DWC_CIRCLEQ_INIT(&wq->entries);
7523 +#endif
7524 + return wq;
7525 +
7526 + no_waitq:
7527 + DWC_SPINLOCK_FREE(wq->lock);
7528 + no_lock:
7529 + taskqueue_free(wq->taskq);
7530 + no_taskq:
7531 + DWC_FREE(wq);
7532 +
7533 + return NULL;
7534 +}
7535 +
7536 +void DWC_WORKQ_FREE(dwc_workq_t *wq)
7537 +{
7538 +#ifdef DEBUG
7539 + dwc_irqflags_t flags;
7540 +
7541 + DWC_SPINLOCK_IRQSAVE(wq->lock, &flags);
7542 +
7543 + if (wq->pending != 0) {
7544 + struct work_container *container;
7545 +
7546 + DWC_ERROR("Destroying work queue with pending work");
7547 +
7548 + DWC_CIRCLEQ_FOREACH(container, &wq->entries, entry) {
7549 + DWC_ERROR("Work %s still pending", container->name);
7550 + }
7551 + }
7552 +
7553 + DWC_SPINUNLOCK_IRQRESTORE(wq->lock, flags);
7554 +#endif
7555 + DWC_WAITQ_FREE(wq->waitq);
7556 + DWC_SPINLOCK_FREE(wq->lock);
7557 + taskqueue_free(wq->taskq);
7558 + DWC_FREE(wq);
7559 +}
7560 +
7561 +void DWC_WORKQ_SCHEDULE(dwc_workq_t *wq, dwc_work_callback_t cb, void *data,
7562 + char *format, ...)
7563 +{
7564 + dwc_irqflags_t flags;
7565 + work_container_t *container;
7566 + static char name[128];
7567 + va_list args;
7568 +
7569 + va_start(args, format);
7570 + DWC_VSNPRINTF(name, 128, format, args);
7571 + va_end(args);
7572 +
7573 + DWC_SPINLOCK_IRQSAVE(wq->lock, &flags);
7574 + wq->pending++;
7575 + DWC_SPINUNLOCK_IRQRESTORE(wq->lock, flags);
7576 + DWC_WAITQ_TRIGGER(wq->waitq);
7577 +
7578 + container = DWC_ALLOC_ATOMIC(sizeof(*container));
7579 + if (!container) {
7580 + DWC_ERROR("Cannot allocate memory for container");
7581 + return;
7582 + }
7583 +
7584 + container->name = DWC_STRDUP(name);
7585 + if (!container->name) {
7586 + DWC_ERROR("Cannot allocate memory for container->name");
7587 + DWC_FREE(container);
7588 + return;
7589 + }
7590 +
7591 + container->cb = cb;
7592 + container->data = data;
7593 + container->wq = wq;
7594 + container->hz = 0;
7595 +
7596 + DWC_DEBUG("Queueing work: %s, container=%p", container->name, container);
7597 +
7598 + TASK_INIT(&container->task, 0, do_work, container);
7599 +
7600 +#ifdef DEBUG
7601 + DWC_CIRCLEQ_INSERT_TAIL(&wq->entries, container, entry);
7602 +#endif
7603 + taskqueue_enqueue_fast(wq->taskq, &container->task);
7604 +}
7605 +
7606 +void DWC_WORKQ_SCHEDULE_DELAYED(dwc_workq_t *wq, dwc_work_callback_t cb,
7607 + void *data, uint32_t time, char *format, ...)
7608 +{
7609 + dwc_irqflags_t flags;
7610 + work_container_t *container;
7611 + static char name[128];
7612 + struct timeval tv;
7613 + va_list args;
7614 +
7615 + va_start(args, format);
7616 + DWC_VSNPRINTF(name, 128, format, args);
7617 + va_end(args);
7618 +
7619 + DWC_SPINLOCK_IRQSAVE(wq->lock, &flags);
7620 + wq->pending++;
7621 + DWC_SPINUNLOCK_IRQRESTORE(wq->lock, flags);
7622 + DWC_WAITQ_TRIGGER(wq->waitq);
7623 +
7624 + container = DWC_ALLOC_ATOMIC(sizeof(*container));
7625 + if (!container) {
7626 + DWC_ERROR("Cannot allocate memory for container");
7627 + return;
7628 + }
7629 +
7630 + container->name = DWC_STRDUP(name);
7631 + if (!container->name) {
7632 + DWC_ERROR("Cannot allocate memory for container->name");
7633 + DWC_FREE(container);
7634 + return;
7635 + }
7636 +
7637 + container->cb = cb;
7638 + container->data = data;
7639 + container->wq = wq;
7640 +
7641 + tv.tv_sec = time / 1000;
7642 + tv.tv_usec = (time - tv.tv_sec * 1000) * 1000;
7643 + container->hz = tvtohz(&tv);
7644 +
7645 + DWC_DEBUG("Queueing work: %s, container=%p", container->name, container);
7646 +
7647 + TASK_INIT(&container->task, 0, do_work, container);
7648 +
7649 +#ifdef DEBUG
7650 + DWC_CIRCLEQ_INSERT_TAIL(&wq->entries, container, entry);
7651 +#endif
7652 + taskqueue_enqueue_fast(wq->taskq, &container->task);
7653 +}
7654 +
7655 +int DWC_WORKQ_PENDING(dwc_workq_t *wq)
7656 +{
7657 + return wq->pending;
7658 +}
7659 --- /dev/null
7660 +++ b/drivers/usb/host/dwc_common_port/dwc_common_linux.c
7661 @@ -0,0 +1,1409 @@
7662 +#include <linux/kernel.h>
7663 +#include <linux/init.h>
7664 +#include <linux/module.h>
7665 +#include <linux/kthread.h>
7666 +
7667 +#ifdef DWC_CCLIB
7668 +# include "dwc_cc.h"
7669 +#endif
7670 +
7671 +#ifdef DWC_CRYPTOLIB
7672 +# include "dwc_modpow.h"
7673 +# include "dwc_dh.h"
7674 +# include "dwc_crypto.h"
7675 +#endif
7676 +
7677 +#ifdef DWC_NOTIFYLIB
7678 +# include "dwc_notifier.h"
7679 +#endif
7680 +
7681 +/* OS-Level Implementations */
7682 +
7683 +/* This is the Linux kernel implementation of the DWC platform library. */
7684 +#include <linux/moduleparam.h>
7685 +#include <linux/ctype.h>
7686 +#include <linux/crypto.h>
7687 +#include <linux/delay.h>
7688 +#include <linux/device.h>
7689 +#include <linux/dma-mapping.h>
7690 +#include <linux/cdev.h>
7691 +#include <linux/errno.h>
7692 +#include <linux/interrupt.h>
7693 +#include <linux/jiffies.h>
7694 +#include <linux/list.h>
7695 +#include <linux/pci.h>
7696 +#include <linux/random.h>
7697 +#include <linux/scatterlist.h>
7698 +#include <linux/slab.h>
7699 +#include <linux/stat.h>
7700 +#include <linux/string.h>
7701 +#include <linux/timer.h>
7702 +#include <linux/usb.h>
7703 +
7704 +#include <linux/version.h>
7705 +
7706 +#if LINUX_VERSION_CODE >= KERNEL_VERSION(2,6,24)
7707 +# include <linux/usb/gadget.h>
7708 +#else
7709 +# include <linux/usb_gadget.h>
7710 +#endif
7711 +
7712 +#include <asm/io.h>
7713 +#include <asm/page.h>
7714 +#include <asm/uaccess.h>
7715 +#include <asm/unaligned.h>
7716 +
7717 +#include "dwc_os.h"
7718 +#include "dwc_list.h"
7719 +
7720 +
7721 +/* MISC */
7722 +
7723 +void *DWC_MEMSET(void *dest, uint8_t byte, uint32_t size)
7724 +{
7725 + return memset(dest, byte, size);
7726 +}
7727 +
7728 +void *DWC_MEMCPY(void *dest, void const *src, uint32_t size)
7729 +{
7730 + return memcpy(dest, src, size);
7731 +}
7732 +
7733 +void *DWC_MEMMOVE(void *dest, void *src, uint32_t size)
7734 +{
7735 + return memmove(dest, src, size);
7736 +}
7737 +
7738 +int DWC_MEMCMP(void *m1, void *m2, uint32_t size)
7739 +{
7740 + return memcmp(m1, m2, size);
7741 +}
7742 +
7743 +int DWC_STRNCMP(void *s1, void *s2, uint32_t size)
7744 +{
7745 + return strncmp(s1, s2, size);
7746 +}
7747 +
7748 +int DWC_STRCMP(void *s1, void *s2)
7749 +{
7750 + return strcmp(s1, s2);
7751 +}
7752 +
7753 +int DWC_STRLEN(char const *str)
7754 +{
7755 + return strlen(str);
7756 +}
7757 +
7758 +char *DWC_STRCPY(char *to, char const *from)
7759 +{
7760 + return strcpy(to, from);
7761 +}
7762 +
7763 +char *DWC_STRDUP(char const *str)
7764 +{
7765 + int len = DWC_STRLEN(str) + 1;
7766 + char *new = DWC_ALLOC_ATOMIC(len);
7767 +
7768 + if (!new) {
7769 + return NULL;
7770 + }
7771 +
7772 + DWC_MEMCPY(new, str, len);
7773 + return new;
7774 +}
7775 +
7776 +int DWC_ATOI(const char *str, int32_t *value)
7777 +{
7778 + char *end = NULL;
7779 +
7780 + *value = simple_strtol(str, &end, 0);
7781 + if (*end == '\0') {
7782 + return 0;
7783 + }
7784 +
7785 + return -1;
7786 +}
7787 +
7788 +int DWC_ATOUI(const char *str, uint32_t *value)
7789 +{
7790 + char *end = NULL;
7791 +
7792 + *value = simple_strtoul(str, &end, 0);
7793 + if (*end == '\0') {
7794 + return 0;
7795 + }
7796 +
7797 + return -1;
7798 +}
7799 +
7800 +
7801 +#ifdef DWC_UTFLIB
7802 +/* From usbstring.c */
7803 +
7804 +int DWC_UTF8_TO_UTF16LE(uint8_t const *s, uint16_t *cp, unsigned len)
7805 +{
7806 + int count = 0;
7807 + u8 c;
7808 + u16 uchar;
7809 +
7810 + /* this insists on correct encodings, though not minimal ones.
7811 + * BUT it currently rejects legit 4-byte UTF-8 code points,
7812 + * which need surrogate pairs. (Unicode 3.1 can use them.)
7813 + */
7814 + while (len != 0 && (c = (u8) *s++) != 0) {
7815 + if (unlikely(c & 0x80)) {
7816 + // 2-byte sequence:
7817 + // 00000yyyyyxxxxxx = 110yyyyy 10xxxxxx
7818 + if ((c & 0xe0) == 0xc0) {
7819 + uchar = (c & 0x1f) << 6;
7820 +
7821 + c = (u8) *s++;
7822 + if ((c & 0xc0) != 0xc0)
7823 + goto fail;
7824 + c &= 0x3f;
7825 + uchar |= c;
7826 +
7827 + // 3-byte sequence (most CJKV characters):
7828 + // zzzzyyyyyyxxxxxx = 1110zzzz 10yyyyyy 10xxxxxx
7829 + } else if ((c & 0xf0) == 0xe0) {
7830 + uchar = (c & 0x0f) << 12;
7831 +
7832 + c = (u8) *s++;
7833 + if ((c & 0xc0) != 0xc0)
7834 + goto fail;
7835 + c &= 0x3f;
7836 + uchar |= c << 6;
7837 +
7838 + c = (u8) *s++;
7839 + if ((c & 0xc0) != 0xc0)
7840 + goto fail;
7841 + c &= 0x3f;
7842 + uchar |= c;
7843 +
7844 + /* no bogus surrogates */
7845 + if (0xd800 <= uchar && uchar <= 0xdfff)
7846 + goto fail;
7847 +
7848 + // 4-byte sequence (surrogate pairs, currently rare):
7849 + // 11101110wwwwzzzzyy + 110111yyyyxxxxxx
7850 + // = 11110uuu 10uuzzzz 10yyyyyy 10xxxxxx
7851 + // (uuuuu = wwww + 1)
7852 + // FIXME accept the surrogate code points (only)
7853 + } else
7854 + goto fail;
7855 + } else
7856 + uchar = c;
7857 + put_unaligned (cpu_to_le16 (uchar), cp++);
7858 + count++;
7859 + len--;
7860 + }
7861 + return count;
7862 +fail:
7863 + return -1;
7864 +}
7865 +#endif /* DWC_UTFLIB */
7866 +
7867 +
7868 +/* dwc_debug.h */
7869 +
7870 +dwc_bool_t DWC_IN_IRQ(void)
7871 +{
7872 + return in_irq();
7873 +}
7874 +
7875 +dwc_bool_t DWC_IN_BH(void)
7876 +{
7877 + return in_softirq();
7878 +}
7879 +
7880 +void DWC_VPRINTF(char *format, va_list args)
7881 +{
7882 + vprintk(format, args);
7883 +}
7884 +
7885 +int DWC_VSNPRINTF(char *str, int size, char *format, va_list args)
7886 +{
7887 + return vsnprintf(str, size, format, args);
7888 +}
7889 +
7890 +void DWC_PRINTF(char *format, ...)
7891 +{
7892 + va_list args;
7893 +
7894 + va_start(args, format);
7895 + DWC_VPRINTF(format, args);
7896 + va_end(args);
7897 +}
7898 +
7899 +int DWC_SPRINTF(char *buffer, char *format, ...)
7900 +{
7901 + int retval;
7902 + va_list args;
7903 +
7904 + va_start(args, format);
7905 + retval = vsprintf(buffer, format, args);
7906 + va_end(args);
7907 + return retval;
7908 +}
7909 +
7910 +int DWC_SNPRINTF(char *buffer, int size, char *format, ...)
7911 +{
7912 + int retval;
7913 + va_list args;
7914 +
7915 + va_start(args, format);
7916 + retval = vsnprintf(buffer, size, format, args);
7917 + va_end(args);
7918 + return retval;
7919 +}
7920 +
7921 +void __DWC_WARN(char *format, ...)
7922 +{
7923 + va_list args;
7924 +
7925 + va_start(args, format);
7926 + DWC_PRINTF(KERN_WARNING);
7927 + DWC_VPRINTF(format, args);
7928 + va_end(args);
7929 +}
7930 +
7931 +void __DWC_ERROR(char *format, ...)
7932 +{
7933 + va_list args;
7934 +
7935 + va_start(args, format);
7936 + DWC_PRINTF(KERN_ERR);
7937 + DWC_VPRINTF(format, args);
7938 + va_end(args);
7939 +}
7940 +
7941 +void DWC_EXCEPTION(char *format, ...)
7942 +{
7943 + va_list args;
7944 +
7945 + va_start(args, format);
7946 + DWC_PRINTF(KERN_ERR);
7947 + DWC_VPRINTF(format, args);
7948 + va_end(args);
7949 + BUG_ON(1);
7950 +}
7951 +
7952 +#ifdef DEBUG
7953 +void __DWC_DEBUG(char *format, ...)
7954 +{
7955 + va_list args;
7956 +
7957 + va_start(args, format);
7958 + DWC_PRINTF(KERN_DEBUG);
7959 + DWC_VPRINTF(format, args);
7960 + va_end(args);
7961 +}
7962 +#endif
7963 +
7964 +
7965 +/* dwc_mem.h */
7966 +
7967 +#if 0
7968 +dwc_pool_t *DWC_DMA_POOL_CREATE(uint32_t size,
7969 + uint32_t align,
7970 + uint32_t alloc)
7971 +{
7972 + struct dma_pool *pool = dma_pool_create("Pool", NULL,
7973 + size, align, alloc);
7974 + return (dwc_pool_t *)pool;
7975 +}
7976 +
7977 +void DWC_DMA_POOL_DESTROY(dwc_pool_t *pool)
7978 +{
7979 + dma_pool_destroy((struct dma_pool *)pool);
7980 +}
7981 +
7982 +void *DWC_DMA_POOL_ALLOC(dwc_pool_t *pool, uint64_t *dma_addr)
7983 +{
7984 + return dma_pool_alloc((struct dma_pool *)pool, GFP_KERNEL, dma_addr);
7985 +}
7986 +
7987 +void *DWC_DMA_POOL_ZALLOC(dwc_pool_t *pool, uint64_t *dma_addr)
7988 +{
7989 + void *vaddr = DWC_DMA_POOL_ALLOC(pool, dma_addr);
7990 + memset(..);
7991 +}
7992 +
7993 +void DWC_DMA_POOL_FREE(dwc_pool_t *pool, void *vaddr, void *daddr)
7994 +{
7995 + dma_pool_free(pool, vaddr, daddr);
7996 +}
7997 +#endif
7998 +
7999 +void *__DWC_DMA_ALLOC(void *dma_ctx, uint32_t size, dwc_dma_t *dma_addr)
8000 +{
8001 + return dma_alloc_coherent(dma_ctx, size, dma_addr, GFP_KERNEL | GFP_DMA32);
8002 +}
8003 +
8004 +void *__DWC_DMA_ALLOC_ATOMIC(void *dma_ctx, uint32_t size, dwc_dma_t *dma_addr)
8005 +{
8006 + return dma_alloc_coherent(dma_ctx, size, dma_addr, GFP_ATOMIC);
8007 +}
8008 +
8009 +void __DWC_DMA_FREE(void *dma_ctx, uint32_t size, void *virt_addr, dwc_dma_t dma_addr)
8010 +{
8011 + dma_free_coherent(dma_ctx, size, virt_addr, dma_addr);
8012 +}
8013 +
8014 +void *__DWC_ALLOC(void *mem_ctx, uint32_t size)
8015 +{
8016 + return kzalloc(size, GFP_KERNEL);
8017 +}
8018 +
8019 +void *__DWC_ALLOC_ATOMIC(void *mem_ctx, uint32_t size)
8020 +{
8021 + return kzalloc(size, GFP_ATOMIC);
8022 +}
8023 +
8024 +void __DWC_FREE(void *mem_ctx, void *addr)
8025 +{
8026 + kfree(addr);
8027 +}
8028 +
8029 +
8030 +#ifdef DWC_CRYPTOLIB
8031 +/* dwc_crypto.h */
8032 +
8033 +void DWC_RANDOM_BYTES(uint8_t *buffer, uint32_t length)
8034 +{
8035 + get_random_bytes(buffer, length);
8036 +}
8037 +
8038 +int DWC_AES_CBC(uint8_t *message, uint32_t messagelen, uint8_t *key, uint32_t keylen, uint8_t iv[16], uint8_t *out)
8039 +{
8040 + struct crypto_blkcipher *tfm;
8041 + struct blkcipher_desc desc;
8042 + struct scatterlist sgd;
8043 + struct scatterlist sgs;
8044 +
8045 + tfm = crypto_alloc_blkcipher("cbc(aes)", 0, CRYPTO_ALG_ASYNC);
8046 + if (tfm == NULL) {
8047 + printk("failed to load transform for aes CBC\n");
8048 + return -1;
8049 + }
8050 +
8051 + crypto_blkcipher_setkey(tfm, key, keylen);
8052 + crypto_blkcipher_set_iv(tfm, iv, 16);
8053 +
8054 + sg_init_one(&sgd, out, messagelen);
8055 + sg_init_one(&sgs, message, messagelen);
8056 +
8057 + desc.tfm = tfm;
8058 + desc.flags = 0;
8059 +
8060 + if (crypto_blkcipher_encrypt(&desc, &sgd, &sgs, messagelen)) {
8061 + crypto_free_blkcipher(tfm);
8062 + DWC_ERROR("AES CBC encryption failed");
8063 + return -1;
8064 + }
8065 +
8066 + crypto_free_blkcipher(tfm);
8067 + return 0;
8068 +}
8069 +
8070 +int DWC_SHA256(uint8_t *message, uint32_t len, uint8_t *out)
8071 +{
8072 + struct crypto_hash *tfm;
8073 + struct hash_desc desc;
8074 + struct scatterlist sg;
8075 +
8076 + tfm = crypto_alloc_hash("sha256", 0, CRYPTO_ALG_ASYNC);
8077 + if (IS_ERR(tfm)) {
8078 + DWC_ERROR("Failed to load transform for sha256: %ld\n", PTR_ERR(tfm));
8079 + return 0;
8080 + }
8081 + desc.tfm = tfm;
8082 + desc.flags = 0;
8083 +
8084 + sg_init_one(&sg, message, len);
8085 + crypto_hash_digest(&desc, &sg, len, out);
8086 + crypto_free_hash(tfm);
8087 +
8088 + return 1;
8089 +}
8090 +
8091 +int DWC_HMAC_SHA256(uint8_t *message, uint32_t messagelen,
8092 + uint8_t *key, uint32_t keylen, uint8_t *out)
8093 +{
8094 + struct crypto_hash *tfm;
8095 + struct hash_desc desc;
8096 + struct scatterlist sg;
8097 +
8098 + tfm = crypto_alloc_hash("hmac(sha256)", 0, CRYPTO_ALG_ASYNC);
8099 + if (IS_ERR(tfm)) {
8100 + DWC_ERROR("Failed to load transform for hmac(sha256): %ld\n", PTR_ERR(tfm));
8101 + return 0;
8102 + }
8103 + desc.tfm = tfm;
8104 + desc.flags = 0;
8105 +
8106 + sg_init_one(&sg, message, messagelen);
8107 + crypto_hash_setkey(tfm, key, keylen);
8108 + crypto_hash_digest(&desc, &sg, messagelen, out);
8109 + crypto_free_hash(tfm);
8110 +
8111 + return 1;
8112 +}
8113 +#endif /* DWC_CRYPTOLIB */
8114 +
8115 +
8116 +/* Byte Ordering Conversions */
8117 +
8118 +uint32_t DWC_CPU_TO_LE32(uint32_t *p)
8119 +{
8120 +#ifdef __LITTLE_ENDIAN
8121 + return *p;
8122 +#else
8123 + uint8_t *u_p = (uint8_t *)p;
8124 +
8125 + return (u_p[3] | (u_p[2] << 8) | (u_p[1] << 16) | (u_p[0] << 24));
8126 +#endif
8127 +}
8128 +
8129 +uint32_t DWC_CPU_TO_BE32(uint32_t *p)
8130 +{
8131 +#ifdef __BIG_ENDIAN
8132 + return *p;
8133 +#else
8134 + uint8_t *u_p = (uint8_t *)p;
8135 +
8136 + return (u_p[3] | (u_p[2] << 8) | (u_p[1] << 16) | (u_p[0] << 24));
8137 +#endif
8138 +}
8139 +
8140 +uint32_t DWC_LE32_TO_CPU(uint32_t *p)
8141 +{
8142 +#ifdef __LITTLE_ENDIAN
8143 + return *p;
8144 +#else
8145 + uint8_t *u_p = (uint8_t *)p;
8146 +
8147 + return (u_p[3] | (u_p[2] << 8) | (u_p[1] << 16) | (u_p[0] << 24));
8148 +#endif
8149 +}
8150 +
8151 +uint32_t DWC_BE32_TO_CPU(uint32_t *p)
8152 +{
8153 +#ifdef __BIG_ENDIAN
8154 + return *p;
8155 +#else
8156 + uint8_t *u_p = (uint8_t *)p;
8157 +
8158 + return (u_p[3] | (u_p[2] << 8) | (u_p[1] << 16) | (u_p[0] << 24));
8159 +#endif
8160 +}
8161 +
8162 +uint16_t DWC_CPU_TO_LE16(uint16_t *p)
8163 +{
8164 +#ifdef __LITTLE_ENDIAN
8165 + return *p;
8166 +#else
8167 + uint8_t *u_p = (uint8_t *)p;
8168 + return (u_p[1] | (u_p[0] << 8));
8169 +#endif
8170 +}
8171 +
8172 +uint16_t DWC_CPU_TO_BE16(uint16_t *p)
8173 +{
8174 +#ifdef __BIG_ENDIAN
8175 + return *p;
8176 +#else
8177 + uint8_t *u_p = (uint8_t *)p;
8178 + return (u_p[1] | (u_p[0] << 8));
8179 +#endif
8180 +}
8181 +
8182 +uint16_t DWC_LE16_TO_CPU(uint16_t *p)
8183 +{
8184 +#ifdef __LITTLE_ENDIAN
8185 + return *p;
8186 +#else
8187 + uint8_t *u_p = (uint8_t *)p;
8188 + return (u_p[1] | (u_p[0] << 8));
8189 +#endif
8190 +}
8191 +
8192 +uint16_t DWC_BE16_TO_CPU(uint16_t *p)
8193 +{
8194 +#ifdef __BIG_ENDIAN
8195 + return *p;
8196 +#else
8197 + uint8_t *u_p = (uint8_t *)p;
8198 + return (u_p[1] | (u_p[0] << 8));
8199 +#endif
8200 +}
8201 +
8202 +
8203 +/* Registers */
8204 +
8205 +uint32_t DWC_READ_REG32(uint32_t volatile *reg)
8206 +{
8207 + return readl(reg);
8208 +}
8209 +
8210 +#if 0
8211 +uint64_t DWC_READ_REG64(uint64_t volatile *reg)
8212 +{
8213 +}
8214 +#endif
8215 +
8216 +void DWC_WRITE_REG32(uint32_t volatile *reg, uint32_t value)
8217 +{
8218 + writel(value, reg);
8219 +}
8220 +
8221 +#if 0
8222 +void DWC_WRITE_REG64(uint64_t volatile *reg, uint64_t value)
8223 +{
8224 +}
8225 +#endif
8226 +
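+/* Read-modify-write of a 32-bit register; the sequence is not atomic, so
+ * callers are expected to provide any locking they need. */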
8227 +void DWC_MODIFY_REG32(uint32_t volatile *reg, uint32_t clear_mask, uint32_t set_mask)
8228 +{
8229 + writel((readl(reg) & ~clear_mask) | set_mask, reg);
8230 +}
8231 +
8232 +#if 0
8233 +void DWC_MODIFY_REG64(uint64_t volatile *reg, uint64_t clear_mask, uint64_t set_mask)
8234 +{
8235 +}
8236 +#endif
8237 +
8238 +
8239 +/* Locking */
8240 +
8241 +dwc_spinlock_t *DWC_SPINLOCK_ALLOC(void)
8242 +{
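+	/* On non-SMP, non-preempt kernels the spin_lock()/spin_unlock() calls
+	 * in this file compile away, so a dummy non-NULL pointer is returned
+	 * to keep callers' NULL checks happy. */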
8243 + spinlock_t *sl = (spinlock_t *)1;
8244 +
8245 +#if defined(CONFIG_PREEMPT) || defined(CONFIG_SMP)
8246 + sl = DWC_ALLOC(sizeof(*sl));
8247 + if (!sl) {
8248 + DWC_ERROR("Cannot allocate memory for spinlock\n");
8249 + return NULL;
8250 + }
8251 +
8252 + spin_lock_init(sl);
8253 +#endif
8254 + return (dwc_spinlock_t *)sl;
8255 +}
8256 +
8257 +void DWC_SPINLOCK_FREE(dwc_spinlock_t *lock)
8258 +{
8259 +#if defined(CONFIG_PREEMPT) || defined(CONFIG_SMP)
8260 + DWC_FREE(lock);
8261 +#endif
8262 +}
8263 +
8264 +void DWC_SPINLOCK(dwc_spinlock_t *lock)
8265 +{
8266 +#if defined(CONFIG_PREEMPT) || defined(CONFIG_SMP)
8267 + spin_lock((spinlock_t *)lock);
8268 +#endif
8269 +}
8270 +
8271 +void DWC_SPINUNLOCK(dwc_spinlock_t *lock)
8272 +{
8273 +#if defined(CONFIG_PREEMPT) || defined(CONFIG_SMP)
8274 + spin_unlock((spinlock_t *)lock);
8275 +#endif
8276 +}
8277 +
8278 +void DWC_SPINLOCK_IRQSAVE(dwc_spinlock_t *lock, dwc_irqflags_t *flags)
8279 +{
8280 + dwc_irqflags_t f;
8281 +
8282 +#if defined(CONFIG_PREEMPT) || defined(CONFIG_SMP)
8283 + spin_lock_irqsave((spinlock_t *)lock, f);
8284 +#else
8285 + local_irq_save(f);
8286 +#endif
8287 + *flags = f;
8288 +}
8289 +
8290 +void DWC_SPINUNLOCK_IRQRESTORE(dwc_spinlock_t *lock, dwc_irqflags_t flags)
8291 +{
8292 +#if defined(CONFIG_PREEMPT) || defined(CONFIG_SMP)
8293 + spin_unlock_irqrestore((spinlock_t *)lock, flags);
8294 +#else
8295 + local_irq_restore(flags);
8296 +#endif
8297 +}
8298 +
8299 +dwc_mutex_t *DWC_MUTEX_ALLOC(void)
8300 +{
8301 + struct mutex *m;
8302 + dwc_mutex_t *mutex = (dwc_mutex_t *)DWC_ALLOC(sizeof(struct mutex));
8303 +
8304 + if (!mutex) {
8305 + DWC_ERROR("Cannot allocate memory for mutex\n");
8306 + return NULL;
8307 + }
8308 +
8309 + m = (struct mutex *)mutex;
8310 + mutex_init(m);
8311 + return mutex;
8312 +}
8313 +
8314 +#if (defined(DWC_LINUX) && defined(CONFIG_DEBUG_MUTEXES))
8315 +#else
8316 +void DWC_MUTEX_FREE(dwc_mutex_t *mutex)
8317 +{
8318 + mutex_destroy((struct mutex *)mutex);
8319 + DWC_FREE(mutex);
8320 +}
8321 +#endif
8322 +
8323 +void DWC_MUTEX_LOCK(dwc_mutex_t *mutex)
8324 +{
8325 + struct mutex *m = (struct mutex *)mutex;
8326 + mutex_lock(m);
8327 +}
8328 +
8329 +int DWC_MUTEX_TRYLOCK(dwc_mutex_t *mutex)
8330 +{
8331 + struct mutex *m = (struct mutex *)mutex;
8332 + return mutex_trylock(m);
8333 +}
8334 +
8335 +void DWC_MUTEX_UNLOCK(dwc_mutex_t *mutex)
8336 +{
8337 + struct mutex *m = (struct mutex *)mutex;
8338 + mutex_unlock(m);
8339 +}
8340 +
8341 +
8342 +/* Timing */
8343 +
8344 +void DWC_UDELAY(uint32_t usecs)
8345 +{
8346 + udelay(usecs);
8347 +}
8348 +
8349 +void DWC_MDELAY(uint32_t msecs)
8350 +{
8351 + mdelay(msecs);
8352 +}
8353 +
8354 +void DWC_MSLEEP(uint32_t msecs)
8355 +{
8356 + msleep(msecs);
8357 +}
8358 +
8359 +uint32_t DWC_TIME(void)
8360 +{
8361 + return jiffies_to_msecs(jiffies);
8362 +}
8363 +
8364 +
8365 +/* Timers */
8366 +
8367 +struct dwc_timer {
8368 + struct timer_list t;
8369 + char *name;
8370 + dwc_timer_callback_t cb;
8371 + void *data;
8372 + uint8_t scheduled;
8373 + dwc_spinlock_t *lock;
8374 +};
8375 +
8376 +static void timer_callback(struct timer_list *tt)
8377 +{
8378 + dwc_timer_t *timer = from_timer(timer, tt, t);
8379 + dwc_irqflags_t flags;
8380 +
8381 + DWC_SPINLOCK_IRQSAVE(timer->lock, &flags);
8382 + timer->scheduled = 0;
8383 + DWC_SPINUNLOCK_IRQRESTORE(timer->lock, flags);
8384 + DWC_DEBUGC("Timer %s callback", timer->name);
8385 + timer->cb(timer->data);
8386 +}
8387 +
8388 +dwc_timer_t *DWC_TIMER_ALLOC(char *name, dwc_timer_callback_t cb, void *data)
8389 +{
8390 + dwc_timer_t *t = DWC_ALLOC(sizeof(*t));
8391 +
8392 + if (!t) {
8393 + DWC_ERROR("Cannot allocate memory for timer");
8394 + return NULL;
8395 + }
8396 +
8397 + t->name = DWC_STRDUP(name);
8398 + if (!t->name) {
8399 + DWC_ERROR("Cannot allocate memory for timer->name");
8400 + goto no_name;
8401 + }
8402 +
8403 +#if (defined(DWC_LINUX) && defined(CONFIG_DEBUG_SPINLOCK))
8404 + DWC_SPINLOCK_ALLOC_LINUX_DEBUG(t->lock);
8405 +#else
8406 + t->lock = DWC_SPINLOCK_ALLOC();
8407 +#endif
8408 + if (!t->lock) {
8409 + DWC_ERROR("Cannot allocate memory for lock");
8410 + goto no_lock;
8411 + }
8412 +
8413 + t->scheduled = 0;
8414 + t->t.expires = jiffies;
8415 + timer_setup(&t->t, timer_callback, 0);
8416 +
8417 + t->cb = cb;
8418 + t->data = data;
8419 +
8420 + return t;
8421 +
8422 + no_lock:
8423 + DWC_FREE(t->name);
8424 + no_name:
8425 + DWC_FREE(t);
8426 + return NULL;
8427 +}
8428 +
8429 +void DWC_TIMER_FREE(dwc_timer_t *timer)
8430 +{
8431 + dwc_irqflags_t flags;
8432 +
8433 + DWC_SPINLOCK_IRQSAVE(timer->lock, &flags);
8434 +
8435 + if (timer->scheduled) {
8436 + del_timer(&timer->t);
8437 + timer->scheduled = 0;
8438 + }
8439 +
8440 + DWC_SPINUNLOCK_IRQRESTORE(timer->lock, flags);
8441 + DWC_SPINLOCK_FREE(timer->lock);
8442 + DWC_FREE(timer->name);
8443 + DWC_FREE(timer);
8444 +}
8445 +
8446 +void DWC_TIMER_SCHEDULE(dwc_timer_t *timer, uint32_t time)
8447 +{
8448 + dwc_irqflags_t flags;
8449 +
8450 + DWC_SPINLOCK_IRQSAVE(timer->lock, &flags);
8451 +
8452 + if (!timer->scheduled) {
8453 + timer->scheduled = 1;
8454 + DWC_DEBUGC("Scheduling timer %s to expire in +%d msec", timer->name, time);
8455 + timer->t.expires = jiffies + msecs_to_jiffies(time);
8456 + add_timer(&timer->t);
8457 + } else {
8458 + DWC_DEBUGC("Modifying timer %s to expire in +%d msec", timer->name, time);
8459 + mod_timer(&timer->t, jiffies + msecs_to_jiffies(time));
8460 + }
8461 +
8462 + DWC_SPINUNLOCK_IRQRESTORE(timer->lock, flags);
8463 +}
8464 +
8465 +void DWC_TIMER_CANCEL(dwc_timer_t *timer)
8466 +{
8467 + del_timer(&timer->t);
8468 +}
8469 +
8470 +
8471 +/* Wait Queues */
8472 +
8473 +struct dwc_waitq {
8474 + wait_queue_head_t queue;
8475 + int abort;
8476 +};
8477 +
8478 +dwc_waitq_t *DWC_WAITQ_ALLOC(void)
8479 +{
8480 + dwc_waitq_t *wq = DWC_ALLOC(sizeof(*wq));
8481 +
8482 + if (!wq) {
8483 + DWC_ERROR("Cannot allocate memory for waitqueue\n");
8484 + return NULL;
8485 + }
8486 +
8487 + init_waitqueue_head(&wq->queue);
8488 + wq->abort = 0;
8489 + return wq;
8490 +}
8491 +
8492 +void DWC_WAITQ_FREE(dwc_waitq_t *wq)
8493 +{
8494 + DWC_FREE(wq);
8495 +}
8496 +
8497 +int32_t DWC_WAITQ_WAIT(dwc_waitq_t *wq, dwc_waitq_condition_t cond, void *data)
8498 +{
8499 + int result = wait_event_interruptible(wq->queue,
8500 + cond(data) || wq->abort);
8501 + if (result == -ERESTARTSYS) {
8502 + wq->abort = 0;
8503 + return -DWC_E_RESTART;
8504 + }
8505 +
8506 + if (wq->abort == 1) {
8507 + wq->abort = 0;
8508 + return -DWC_E_ABORT;
8509 + }
8510 +
8511 + wq->abort = 0;
8512 +
8513 + if (result == 0) {
8514 + return 0;
8515 + }
8516 +
8517 + return -DWC_E_UNKNOWN;
8518 +}
8519 +
8520 +int32_t DWC_WAITQ_WAIT_TIMEOUT(dwc_waitq_t *wq, dwc_waitq_condition_t cond,
8521 + void *data, int32_t msecs)
8522 +{
8523 + int32_t tmsecs;
8524 + int result = wait_event_interruptible_timeout(wq->queue,
8525 + cond(data) || wq->abort,
8526 + msecs_to_jiffies(msecs));
8527 + if (result == -ERESTARTSYS) {
8528 + wq->abort = 0;
8529 + return -DWC_E_RESTART;
8530 + }
8531 +
8532 + if (wq->abort == 1) {
8533 + wq->abort = 0;
8534 + return -DWC_E_ABORT;
8535 + }
8536 +
8537 + wq->abort = 0;
8538 +
8539 + if (result > 0) {
8540 + tmsecs = jiffies_to_msecs(result);
8541 + if (!tmsecs) {
8542 + return 1;
8543 + }
8544 +
8545 + return tmsecs;
8546 + }
8547 +
8548 + if (result == 0) {
8549 + return -DWC_E_TIMEOUT;
8550 + }
8551 +
8552 + return -DWC_E_UNKNOWN;
8553 +}
8554 +
8555 +void DWC_WAITQ_TRIGGER(dwc_waitq_t *wq)
8556 +{
8557 + wq->abort = 0;
8558 + wake_up_interruptible(&wq->queue);
8559 +}
8560 +
8561 +void DWC_WAITQ_ABORT(dwc_waitq_t *wq)
8562 +{
8563 + wq->abort = 1;
8564 + wake_up_interruptible(&wq->queue);
8565 +}
8566 +
8567 +
8568 +/* Threading */
8569 +
8570 +dwc_thread_t *DWC_THREAD_RUN(dwc_thread_function_t func, char *name, void *data)
8571 +{
8572 + struct task_struct *thread = kthread_run(func, data, name);
8573 +
8574 + if (thread == ERR_PTR(-ENOMEM)) {
8575 + return NULL;
8576 + }
8577 +
8578 + return (dwc_thread_t *)thread;
8579 +}
8580 +
8581 +int DWC_THREAD_STOP(dwc_thread_t *thread)
8582 +{
8583 + return kthread_stop((struct task_struct *)thread);
8584 +}
8585 +
8586 +dwc_bool_t DWC_THREAD_SHOULD_STOP(void)
8587 +{
8588 + return kthread_should_stop();
8589 +}
8590 +
8591 +
8592 +/* tasklets
8593 + - run in interrupt context (cannot sleep)
8594 + - each tasklet runs on a single CPU
8595 + - different tasklets can be running simultaneously on different CPUs
8596 + */
8597 +struct dwc_tasklet {
8598 + struct tasklet_struct t;
8599 + dwc_tasklet_callback_t cb;
8600 + void *data;
8601 +};
8602 +
8603 +static void tasklet_callback(unsigned long data)
8604 +{
8605 + dwc_tasklet_t *t = (dwc_tasklet_t *)data;
8606 + t->cb(t->data);
8607 +}
8608 +
8609 +dwc_tasklet_t *DWC_TASK_ALLOC(char *name, dwc_tasklet_callback_t cb, void *data)
8610 +{
8611 + dwc_tasklet_t *t = DWC_ALLOC(sizeof(*t));
8612 +
8613 + if (t) {
8614 + t->cb = cb;
8615 + t->data = data;
8616 + tasklet_init(&t->t, tasklet_callback, (unsigned long)t);
8617 + } else {
8618 + DWC_ERROR("Cannot allocate memory for tasklet\n");
8619 + }
8620 +
8621 + return t;
8622 +}
8623 +
8624 +void DWC_TASK_FREE(dwc_tasklet_t *task)
8625 +{
8626 + DWC_FREE(task);
8627 +}
8628 +
8629 +void DWC_TASK_SCHEDULE(dwc_tasklet_t *task)
8630 +{
8631 + tasklet_schedule(&task->t);
8632 +}
8633 +
8634 +void DWC_TASK_HI_SCHEDULE(dwc_tasklet_t *task)
8635 +{
8636 + tasklet_hi_schedule(&task->t);
8637 +}
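+
+/*
+ * Minimal usage sketch for this tasklet wrapper (hypothetical callback and
+ * variable names, not part of the driver):
+ *
+ *	static void my_cb(void *data) { ... }
+ *
+ *	dwc_tasklet_t *t = DWC_TASK_ALLOC("my_task", my_cb, ctx);
+ *	if (t) {
+ *		DWC_TASK_SCHEDULE(t);	// my_cb(ctx) runs later in softirq context
+ *		...
+ *		DWC_TASK_FREE(t);	// only after no more schedules are pending
+ *	}
+ */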
8638 +
8639 +
8640 +/* workqueues
8641 + - run in process context (can sleep)
8642 + */
8643 +typedef struct work_container {
8644 + dwc_work_callback_t cb;
8645 + void *data;
8646 + dwc_workq_t *wq;
8647 + char *name;
8648 +
8649 +#ifdef DEBUG
8650 + DWC_CIRCLEQ_ENTRY(work_container) entry;
8651 +#endif
8652 + struct delayed_work work;
8653 +} work_container_t;
8654 +
8655 +#ifdef DEBUG
8656 +DWC_CIRCLEQ_HEAD(work_container_queue, work_container);
8657 +#endif
8658 +
8659 +struct dwc_workq {
8660 + struct workqueue_struct *wq;
8661 + dwc_spinlock_t *lock;
8662 + dwc_waitq_t *waitq;
8663 + int pending;
8664 +
8665 +#ifdef DEBUG
8666 + struct work_container_queue entries;
8667 +#endif
8668 +};
8669 +
8670 +static void do_work(struct work_struct *work)
8671 +{
8672 + dwc_irqflags_t flags;
8673 + struct delayed_work *dw = container_of(work, struct delayed_work, work);
8674 + work_container_t *container = container_of(dw, struct work_container, work);
8675 + dwc_workq_t *wq = container->wq;
8676 +
8677 + container->cb(container->data);
8678 +
8679 +#ifdef DEBUG
8680 + DWC_CIRCLEQ_REMOVE(&wq->entries, container, entry);
8681 +#endif
8682 + DWC_DEBUGC("Work done: %s, container=%p", container->name, container);
8683 + if (container->name) {
8684 + DWC_FREE(container->name);
8685 + }
8686 + DWC_FREE(container);
8687 +
8688 + DWC_SPINLOCK_IRQSAVE(wq->lock, &flags);
8689 + wq->pending--;
8690 + DWC_SPINUNLOCK_IRQRESTORE(wq->lock, flags);
8691 + DWC_WAITQ_TRIGGER(wq->waitq);
8692 +}
8693 +
8694 +static int work_done(void *data)
8695 +{
8696 + dwc_workq_t *workq = (dwc_workq_t *)data;
8697 + return workq->pending == 0;
8698 +}
8699 +
8700 +int DWC_WORKQ_WAIT_WORK_DONE(dwc_workq_t *workq, int timeout)
8701 +{
8702 + return DWC_WAITQ_WAIT_TIMEOUT(workq->waitq, work_done, workq, timeout);
8703 +}
8704 +
8705 +dwc_workq_t *DWC_WORKQ_ALLOC(char *name)
8706 +{
8707 + dwc_workq_t *wq = DWC_ALLOC(sizeof(*wq));
8708 +
8709 + if (!wq) {
8710 + return NULL;
8711 + }
8712 +
8713 + wq->wq = create_singlethread_workqueue(name);
8714 + if (!wq->wq) {
8715 + goto no_wq;
8716 + }
8717 +
8718 + wq->pending = 0;
8719 +
8720 +#if (defined(DWC_LINUX) && defined(CONFIG_DEBUG_SPINLOCK))
8721 + DWC_SPINLOCK_ALLOC_LINUX_DEBUG(wq->lock);
8722 +#else
8723 + wq->lock = DWC_SPINLOCK_ALLOC();
8724 +#endif
8725 + if (!wq->lock) {
8726 + goto no_lock;
8727 + }
8728 +
8729 + wq->waitq = DWC_WAITQ_ALLOC();
8730 + if (!wq->waitq) {
8731 + goto no_waitq;
8732 + }
8733 +
8734 +#ifdef DEBUG
8735 + DWC_CIRCLEQ_INIT(&wq->entries);
8736 +#endif
8737 + return wq;
8738 +
8739 + no_waitq:
8740 + DWC_SPINLOCK_FREE(wq->lock);
8741 + no_lock:
8742 + destroy_workqueue(wq->wq);
8743 + no_wq:
8744 + DWC_FREE(wq);
8745 +
8746 + return NULL;
8747 +}
8748 +
8749 +void DWC_WORKQ_FREE(dwc_workq_t *wq)
8750 +{
8751 +#ifdef DEBUG
8752 + if (wq->pending != 0) {
8753 + struct work_container *wc;
8754 + DWC_ERROR("Destroying work queue with pending work");
8755 + DWC_CIRCLEQ_FOREACH(wc, &wq->entries, entry) {
8756 + DWC_ERROR("Work %s still pending", wc->name);
8757 + }
8758 + }
8759 +#endif
8760 + destroy_workqueue(wq->wq);
8761 + DWC_SPINLOCK_FREE(wq->lock);
8762 + DWC_WAITQ_FREE(wq->waitq);
8763 + DWC_FREE(wq);
8764 +}
8765 +
8766 +void DWC_WORKQ_SCHEDULE(dwc_workq_t *wq, dwc_work_callback_t cb, void *data,
8767 + char *format, ...)
8768 +{
8769 + dwc_irqflags_t flags;
8770 + work_container_t *container;
8771 + static char name[128];
8772 + va_list args;
8773 +
8774 + va_start(args, format);
8775 + DWC_VSNPRINTF(name, 128, format, args);
8776 + va_end(args);
8777 +
8778 + DWC_SPINLOCK_IRQSAVE(wq->lock, &flags);
8779 + wq->pending++;
8780 + DWC_SPINUNLOCK_IRQRESTORE(wq->lock, flags);
8781 + DWC_WAITQ_TRIGGER(wq->waitq);
8782 +
8783 + container = DWC_ALLOC_ATOMIC(sizeof(*container));
8784 + if (!container) {
8785 + DWC_ERROR("Cannot allocate memory for container\n");
8786 + return;
8787 + }
8788 +
8789 + container->name = DWC_STRDUP(name);
8790 + if (!container->name) {
8791 + DWC_ERROR("Cannot allocate memory for container->name\n");
8792 + DWC_FREE(container);
8793 + return;
8794 + }
8795 +
8796 + container->cb = cb;
8797 + container->data = data;
8798 + container->wq = wq;
8799 + DWC_DEBUGC("Queueing work: %s, container=%p", container->name, container);
8800 + INIT_WORK(&container->work.work, do_work);
8801 +
8802 +#ifdef DEBUG
8803 + DWC_CIRCLEQ_INSERT_TAIL(&wq->entries, container, entry);
8804 +#endif
8805 + queue_work(wq->wq, &container->work.work);
8806 +}
8807 +
8808 +void DWC_WORKQ_SCHEDULE_DELAYED(dwc_workq_t *wq, dwc_work_callback_t cb,
8809 + void *data, uint32_t time, char *format, ...)
8810 +{
8811 + dwc_irqflags_t flags;
8812 + work_container_t *container;
8813 + static char name[128];
8814 + va_list args;
8815 +
8816 + va_start(args, format);
8817 + DWC_VSNPRINTF(name, 128, format, args);
8818 + va_end(args);
8819 +
8820 + DWC_SPINLOCK_IRQSAVE(wq->lock, &flags);
8821 + wq->pending++;
8822 + DWC_SPINUNLOCK_IRQRESTORE(wq->lock, flags);
8823 + DWC_WAITQ_TRIGGER(wq->waitq);
8824 +
8825 + container = DWC_ALLOC_ATOMIC(sizeof(*container));
8826 + if (!container) {
8827 + DWC_ERROR("Cannot allocate memory for container\n");
8828 + return;
8829 + }
8830 +
8831 + container->name = DWC_STRDUP(name);
8832 + if (!container->name) {
8833 + DWC_ERROR("Cannot allocate memory for container->name\n");
8834 + DWC_FREE(container);
8835 + return;
8836 + }
8837 +
8838 + container->cb = cb;
8839 + container->data = data;
8840 + container->wq = wq;
8841 + DWC_DEBUGC("Queueing work: %s, container=%p", container->name, container);
8842 + INIT_DELAYED_WORK(&container->work, do_work);
8843 +
8844 +#ifdef DEBUG
8845 + DWC_CIRCLEQ_INSERT_TAIL(&wq->entries, container, entry);
8846 +#endif
8847 + queue_delayed_work(wq->wq, &container->work, msecs_to_jiffies(time));
8848 +}
8849 +
8850 +int DWC_WORKQ_PENDING(dwc_workq_t *wq)
8851 +{
8852 + return wq->pending;
8853 +}
8854 +
8855 +
8856 +#ifdef DWC_LIBMODULE
8857 +
8858 +#ifdef DWC_CCLIB
8859 +/* CC */
8860 +EXPORT_SYMBOL(dwc_cc_if_alloc);
8861 +EXPORT_SYMBOL(dwc_cc_if_free);
8862 +EXPORT_SYMBOL(dwc_cc_clear);
8863 +EXPORT_SYMBOL(dwc_cc_add);
8864 +EXPORT_SYMBOL(dwc_cc_remove);
8865 +EXPORT_SYMBOL(dwc_cc_change);
8866 +EXPORT_SYMBOL(dwc_cc_data_for_save);
8867 +EXPORT_SYMBOL(dwc_cc_restore_from_data);
8868 +EXPORT_SYMBOL(dwc_cc_match_chid);
8869 +EXPORT_SYMBOL(dwc_cc_match_cdid);
8870 +EXPORT_SYMBOL(dwc_cc_ck);
8871 +EXPORT_SYMBOL(dwc_cc_chid);
8872 +EXPORT_SYMBOL(dwc_cc_cdid);
8873 +EXPORT_SYMBOL(dwc_cc_name);
8874 +#endif /* DWC_CCLIB */
8875 +
8876 +#ifdef DWC_CRYPTOLIB
8877 +# ifndef CONFIG_MACH_IPMATE
8878 +/* Modpow */
8879 +EXPORT_SYMBOL(dwc_modpow);
8880 +
8881 +/* DH */
8882 +EXPORT_SYMBOL(dwc_dh_modpow);
8883 +EXPORT_SYMBOL(dwc_dh_derive_keys);
8884 +EXPORT_SYMBOL(dwc_dh_pk);
8885 +# endif /* CONFIG_MACH_IPMATE */
8886 +
8887 +/* Crypto */
8888 +EXPORT_SYMBOL(dwc_wusb_aes_encrypt);
8889 +EXPORT_SYMBOL(dwc_wusb_cmf);
8890 +EXPORT_SYMBOL(dwc_wusb_prf);
8891 +EXPORT_SYMBOL(dwc_wusb_fill_ccm_nonce);
8892 +EXPORT_SYMBOL(dwc_wusb_gen_nonce);
8893 +EXPORT_SYMBOL(dwc_wusb_gen_key);
8894 +EXPORT_SYMBOL(dwc_wusb_gen_mic);
8895 +#endif /* DWC_CRYPTOLIB */
8896 +
8897 +/* Notification */
8898 +#ifdef DWC_NOTIFYLIB
8899 +EXPORT_SYMBOL(dwc_alloc_notification_manager);
8900 +EXPORT_SYMBOL(dwc_free_notification_manager);
8901 +EXPORT_SYMBOL(dwc_register_notifier);
8902 +EXPORT_SYMBOL(dwc_unregister_notifier);
8903 +EXPORT_SYMBOL(dwc_add_observer);
8904 +EXPORT_SYMBOL(dwc_remove_observer);
8905 +EXPORT_SYMBOL(dwc_notify);
8906 +#endif
8907 +
8908 +/* Memory Debugging Routines */
8909 +#ifdef DWC_DEBUG_MEMORY
8910 +EXPORT_SYMBOL(dwc_alloc_debug);
8911 +EXPORT_SYMBOL(dwc_alloc_atomic_debug);
8912 +EXPORT_SYMBOL(dwc_free_debug);
8913 +EXPORT_SYMBOL(dwc_dma_alloc_debug);
8914 +EXPORT_SYMBOL(dwc_dma_free_debug);
8915 +#endif
8916 +
8917 +EXPORT_SYMBOL(DWC_MEMSET);
8918 +EXPORT_SYMBOL(DWC_MEMCPY);
8919 +EXPORT_SYMBOL(DWC_MEMMOVE);
8920 +EXPORT_SYMBOL(DWC_MEMCMP);
8921 +EXPORT_SYMBOL(DWC_STRNCMP);
8922 +EXPORT_SYMBOL(DWC_STRCMP);
8923 +EXPORT_SYMBOL(DWC_STRLEN);
8924 +EXPORT_SYMBOL(DWC_STRCPY);
8925 +EXPORT_SYMBOL(DWC_STRDUP);
8926 +EXPORT_SYMBOL(DWC_ATOI);
8927 +EXPORT_SYMBOL(DWC_ATOUI);
8928 +
8929 +#ifdef DWC_UTFLIB
8930 +EXPORT_SYMBOL(DWC_UTF8_TO_UTF16LE);
8931 +#endif /* DWC_UTFLIB */
8932 +
8933 +EXPORT_SYMBOL(DWC_IN_IRQ);
8934 +EXPORT_SYMBOL(DWC_IN_BH);
8935 +EXPORT_SYMBOL(DWC_VPRINTF);
8936 +EXPORT_SYMBOL(DWC_VSNPRINTF);
8937 +EXPORT_SYMBOL(DWC_PRINTF);
8938 +EXPORT_SYMBOL(DWC_SPRINTF);
8939 +EXPORT_SYMBOL(DWC_SNPRINTF);
8940 +EXPORT_SYMBOL(__DWC_WARN);
8941 +EXPORT_SYMBOL(__DWC_ERROR);
8942 +EXPORT_SYMBOL(DWC_EXCEPTION);
8943 +
8944 +#ifdef DEBUG
8945 +EXPORT_SYMBOL(__DWC_DEBUG);
8946 +#endif
8947 +
8948 +EXPORT_SYMBOL(__DWC_DMA_ALLOC);
8949 +EXPORT_SYMBOL(__DWC_DMA_ALLOC_ATOMIC);
8950 +EXPORT_SYMBOL(__DWC_DMA_FREE);
8951 +EXPORT_SYMBOL(__DWC_ALLOC);
8952 +EXPORT_SYMBOL(__DWC_ALLOC_ATOMIC);
8953 +EXPORT_SYMBOL(__DWC_FREE);
8954 +
8955 +#ifdef DWC_CRYPTOLIB
8956 +EXPORT_SYMBOL(DWC_RANDOM_BYTES);
8957 +EXPORT_SYMBOL(DWC_AES_CBC);
8958 +EXPORT_SYMBOL(DWC_SHA256);
8959 +EXPORT_SYMBOL(DWC_HMAC_SHA256);
8960 +#endif
8961 +
8962 +EXPORT_SYMBOL(DWC_CPU_TO_LE32);
8963 +EXPORT_SYMBOL(DWC_CPU_TO_BE32);
8964 +EXPORT_SYMBOL(DWC_LE32_TO_CPU);
8965 +EXPORT_SYMBOL(DWC_BE32_TO_CPU);
8966 +EXPORT_SYMBOL(DWC_CPU_TO_LE16);
8967 +EXPORT_SYMBOL(DWC_CPU_TO_BE16);
8968 +EXPORT_SYMBOL(DWC_LE16_TO_CPU);
8969 +EXPORT_SYMBOL(DWC_BE16_TO_CPU);
8970 +EXPORT_SYMBOL(DWC_READ_REG32);
8971 +EXPORT_SYMBOL(DWC_WRITE_REG32);
8972 +EXPORT_SYMBOL(DWC_MODIFY_REG32);
8973 +
8974 +#if 0
8975 +EXPORT_SYMBOL(DWC_READ_REG64);
8976 +EXPORT_SYMBOL(DWC_WRITE_REG64);
8977 +EXPORT_SYMBOL(DWC_MODIFY_REG64);
8978 +#endif
8979 +
8980 +EXPORT_SYMBOL(DWC_SPINLOCK_ALLOC);
8981 +EXPORT_SYMBOL(DWC_SPINLOCK_FREE);
8982 +EXPORT_SYMBOL(DWC_SPINLOCK);
8983 +EXPORT_SYMBOL(DWC_SPINUNLOCK);
8984 +EXPORT_SYMBOL(DWC_SPINLOCK_IRQSAVE);
8985 +EXPORT_SYMBOL(DWC_SPINUNLOCK_IRQRESTORE);
8986 +EXPORT_SYMBOL(DWC_MUTEX_ALLOC);
8987 +
8988 +#if (!defined(DWC_LINUX) || !defined(CONFIG_DEBUG_MUTEXES))
8989 +EXPORT_SYMBOL(DWC_MUTEX_FREE);
8990 +#endif
8991 +
8992 +EXPORT_SYMBOL(DWC_MUTEX_LOCK);
8993 +EXPORT_SYMBOL(DWC_MUTEX_TRYLOCK);
8994 +EXPORT_SYMBOL(DWC_MUTEX_UNLOCK);
8995 +EXPORT_SYMBOL(DWC_UDELAY);
8996 +EXPORT_SYMBOL(DWC_MDELAY);
8997 +EXPORT_SYMBOL(DWC_MSLEEP);
8998 +EXPORT_SYMBOL(DWC_TIME);
8999 +EXPORT_SYMBOL(DWC_TIMER_ALLOC);
9000 +EXPORT_SYMBOL(DWC_TIMER_FREE);
9001 +EXPORT_SYMBOL(DWC_TIMER_SCHEDULE);
9002 +EXPORT_SYMBOL(DWC_TIMER_CANCEL);
9003 +EXPORT_SYMBOL(DWC_WAITQ_ALLOC);
9004 +EXPORT_SYMBOL(DWC_WAITQ_FREE);
9005 +EXPORT_SYMBOL(DWC_WAITQ_WAIT);
9006 +EXPORT_SYMBOL(DWC_WAITQ_WAIT_TIMEOUT);
9007 +EXPORT_SYMBOL(DWC_WAITQ_TRIGGER);
9008 +EXPORT_SYMBOL(DWC_WAITQ_ABORT);
9009 +EXPORT_SYMBOL(DWC_THREAD_RUN);
9010 +EXPORT_SYMBOL(DWC_THREAD_STOP);
9011 +EXPORT_SYMBOL(DWC_THREAD_SHOULD_STOP);
9012 +EXPORT_SYMBOL(DWC_TASK_ALLOC);
9013 +EXPORT_SYMBOL(DWC_TASK_FREE);
9014 +EXPORT_SYMBOL(DWC_TASK_SCHEDULE);
9015 +EXPORT_SYMBOL(DWC_WORKQ_WAIT_WORK_DONE);
9016 +EXPORT_SYMBOL(DWC_WORKQ_ALLOC);
9017 +EXPORT_SYMBOL(DWC_WORKQ_FREE);
9018 +EXPORT_SYMBOL(DWC_WORKQ_SCHEDULE);
9019 +EXPORT_SYMBOL(DWC_WORKQ_SCHEDULE_DELAYED);
9020 +EXPORT_SYMBOL(DWC_WORKQ_PENDING);
9021 +
9022 +static int dwc_common_port_init_module(void)
9023 +{
9024 + int result = 0;
9025 +
9026 + printk(KERN_DEBUG "Module dwc_common_port init\n" );
9027 +
9028 +#ifdef DWC_DEBUG_MEMORY
9029 + result = dwc_memory_debug_start(NULL);
9030 + if (result) {
9031 + printk(KERN_ERR
9032 + "dwc_memory_debug_start() failed with error %d\n",
9033 + result);
9034 + return result;
9035 + }
9036 +#endif
9037 +
9038 +#ifdef DWC_NOTIFYLIB
9039 + result = dwc_alloc_notification_manager(NULL, NULL);
9040 + if (result) {
9041 + printk(KERN_ERR
9042 + "dwc_alloc_notification_manager() failed with error %d\n",
9043 + result);
9044 + return result;
9045 + }
9046 +#endif
9047 + return result;
9048 +}
9049 +
9050 +static void dwc_common_port_exit_module(void)
9051 +{
9052 + printk(KERN_DEBUG "Module dwc_common_port exit\n" );
9053 +
9054 +#ifdef DWC_NOTIFYLIB
9055 + dwc_free_notification_manager();
9056 +#endif
9057 +
9058 +#ifdef DWC_DEBUG_MEMORY
9059 + dwc_memory_debug_stop();
9060 +#endif
9061 +}
9062 +
9063 +module_init(dwc_common_port_init_module);
9064 +module_exit(dwc_common_port_exit_module);
9065 +
9066 +MODULE_DESCRIPTION("DWC Common Library - Portable version");
9067 +MODULE_AUTHOR("Synopsys Inc.");
9068 +MODULE_LICENSE ("GPL");
9069 +
9070 +#endif /* DWC_LIBMODULE */
9071 --- /dev/null
9072 +++ b/drivers/usb/host/dwc_common_port/dwc_common_nbsd.c
9073 @@ -0,0 +1,1275 @@
9074 +#include "dwc_os.h"
9075 +#include "dwc_list.h"
9076 +
9077 +#ifdef DWC_CCLIB
9078 +# include "dwc_cc.h"
9079 +#endif
9080 +
9081 +#ifdef DWC_CRYPTOLIB
9082 +# include "dwc_modpow.h"
9083 +# include "dwc_dh.h"
9084 +# include "dwc_crypto.h"
9085 +#endif
9086 +
9087 +#ifdef DWC_NOTIFYLIB
9088 +# include "dwc_notifier.h"
9089 +#endif
9090 +
9091 +/* OS-Level Implementations */
9092 +
9093 +/* This is the NetBSD 4.0.1 kernel implementation of the DWC platform library. */
9094 +
9095 +
9096 +/* MISC */
9097 +
9098 +void *DWC_MEMSET(void *dest, uint8_t byte, uint32_t size)
9099 +{
9100 + return memset(dest, byte, size);
9101 +}
9102 +
9103 +void *DWC_MEMCPY(void *dest, void const *src, uint32_t size)
9104 +{
9105 + return memcpy(dest, src, size);
9106 +}
9107 +
9108 +void *DWC_MEMMOVE(void *dest, void *src, uint32_t size)
9109 +{
9110 + bcopy(src, dest, size);
9111 + return dest;
9112 +}
9113 +
9114 +int DWC_MEMCMP(void *m1, void *m2, uint32_t size)
9115 +{
9116 + return memcmp(m1, m2, size);
9117 +}
9118 +
9119 +int DWC_STRNCMP(void *s1, void *s2, uint32_t size)
9120 +{
9121 + return strncmp(s1, s2, size);
9122 +}
9123 +
9124 +int DWC_STRCMP(void *s1, void *s2)
9125 +{
9126 + return strcmp(s1, s2);
9127 +}
9128 +
9129 +int DWC_STRLEN(char const *str)
9130 +{
9131 + return strlen(str);
9132 +}
9133 +
9134 +char *DWC_STRCPY(char *to, char const *from)
9135 +{
9136 + return strcpy(to, from);
9137 +}
9138 +
9139 +char *DWC_STRDUP(char const *str)
9140 +{
9141 + int len = DWC_STRLEN(str) + 1;
9142 + char *new = DWC_ALLOC_ATOMIC(len);
9143 +
9144 + if (!new) {
9145 + return NULL;
9146 + }
9147 +
9148 + DWC_MEMCPY(new, str, len);
9149 + return new;
9150 +}
9151 +
9152 +int DWC_ATOI(char *str, int32_t *value)
9153 +{
9154 + char *end = NULL;
9155 +
9156 + /* NetBSD doesn't have 'strtol' in the kernel, but 'strtoul'
9157 + * should be equivalent on 2's complement machines
9158 + */
9159 + *value = strtoul(str, &end, 0);
9160 + if (*end == '\0') {
9161 + return 0;
9162 + }
9163 +
9164 + return -1;
9165 +}
9166 +
9167 +int DWC_ATOUI(char *str, uint32_t *value)
9168 +{
9169 + char *end = NULL;
9170 +
9171 + *value = strtoul(str, &end, 0);
9172 + if (*end == '\0') {
9173 + return 0;
9174 + }
9175 +
9176 + return -1;
9177 +}
9178 +
9179 +
9180 +#ifdef DWC_UTFLIB
9181 +/* From usbstring.c */
9182 +
9183 +int DWC_UTF8_TO_UTF16LE(uint8_t const *s, uint16_t *cp, unsigned len)
9184 +{
9185 + int count = 0;
9186 + u8 c;
9187 + u16 uchar;
9188 +
9189 + /* this insists on correct encodings, though not minimal ones.
9190 + * BUT it currently rejects legit 4-byte UTF-8 code points,
9191 + * which need surrogate pairs. (Unicode 3.1 can use them.)
9192 + */
9193 + while (len != 0 && (c = (u8) *s++) != 0) {
9194 + if (unlikely(c & 0x80)) {
9195 + // 2-byte sequence:
9196 + // 00000yyyyyxxxxxx = 110yyyyy 10xxxxxx
9197 + if ((c & 0xe0) == 0xc0) {
9198 + uchar = (c & 0x1f) << 6;
9199 +
9200 + c = (u8) *s++;
9201 +				if ((c & 0xc0) != 0x80)
9202 + goto fail;
9203 + c &= 0x3f;
9204 + uchar |= c;
9205 +
9206 + // 3-byte sequence (most CJKV characters):
9207 + // zzzzyyyyyyxxxxxx = 1110zzzz 10yyyyyy 10xxxxxx
9208 + } else if ((c & 0xf0) == 0xe0) {
9209 + uchar = (c & 0x0f) << 12;
9210 +
9211 + c = (u8) *s++;
9212 +				if ((c & 0xc0) != 0x80)
9213 + goto fail;
9214 + c &= 0x3f;
9215 + uchar |= c << 6;
9216 +
9217 + c = (u8) *s++;
9218 +				if ((c & 0xc0) != 0x80)
9219 + goto fail;
9220 + c &= 0x3f;
9221 + uchar |= c;
9222 +
9223 + /* no bogus surrogates */
9224 + if (0xd800 <= uchar && uchar <= 0xdfff)
9225 + goto fail;
9226 +
9227 + // 4-byte sequence (surrogate pairs, currently rare):
9228 + // 11101110wwwwzzzzyy + 110111yyyyxxxxxx
9229 + // = 11110uuu 10uuzzzz 10yyyyyy 10xxxxxx
9230 + // (uuuuu = wwww + 1)
9231 + // FIXME accept the surrogate code points (only)
9232 + } else
9233 + goto fail;
9234 + } else
9235 + uchar = c;
9236 + put_unaligned (cpu_to_le16 (uchar), cp++);
9237 + count++;
9238 + len--;
9239 + }
9240 + return count;
9241 +fail:
9242 + return -1;
9243 +}
9244 +
9245 +#endif /* DWC_UTFLIB */
9246 +
9247 +
9248 +/* dwc_debug.h */
9249 +
9250 +dwc_bool_t DWC_IN_IRQ(void)
9251 +{
9252 +// return in_irq();
9253 + return 0;
9254 +}
9255 +
9256 +dwc_bool_t DWC_IN_BH(void)
9257 +{
9258 +// return in_softirq();
9259 + return 0;
9260 +}
9261 +
9262 +void DWC_VPRINTF(char *format, va_list args)
9263 +{
9264 + vprintf(format, args);
9265 +}
9266 +
9267 +int DWC_VSNPRINTF(char *str, int size, char *format, va_list args)
9268 +{
9269 + return vsnprintf(str, size, format, args);
9270 +}
9271 +
9272 +void DWC_PRINTF(char *format, ...)
9273 +{
9274 + va_list args;
9275 +
9276 + va_start(args, format);
9277 + DWC_VPRINTF(format, args);
9278 + va_end(args);
9279 +}
9280 +
9281 +int DWC_SPRINTF(char *buffer, char *format, ...)
9282 +{
9283 + int retval;
9284 + va_list args;
9285 +
9286 + va_start(args, format);
9287 + retval = vsprintf(buffer, format, args);
9288 + va_end(args);
9289 + return retval;
9290 +}
9291 +
9292 +int DWC_SNPRINTF(char *buffer, int size, char *format, ...)
9293 +{
9294 + int retval;
9295 + va_list args;
9296 +
9297 + va_start(args, format);
9298 + retval = vsnprintf(buffer, size, format, args);
9299 + va_end(args);
9300 + return retval;
9301 +}
9302 +
9303 +void __DWC_WARN(char *format, ...)
9304 +{
9305 + va_list args;
9306 +
9307 + va_start(args, format);
9308 + DWC_VPRINTF(format, args);
9309 + va_end(args);
9310 +}
9311 +
9312 +void __DWC_ERROR(char *format, ...)
9313 +{
9314 + va_list args;
9315 +
9316 + va_start(args, format);
9317 + DWC_VPRINTF(format, args);
9318 + va_end(args);
9319 +}
9320 +
9321 +void DWC_EXCEPTION(char *format, ...)
9322 +{
9323 + va_list args;
9324 +
9325 + va_start(args, format);
9326 + DWC_VPRINTF(format, args);
9327 + va_end(args);
9328 +// BUG_ON(1); ???
9329 +}
9330 +
9331 +#ifdef DEBUG
9332 +void __DWC_DEBUG(char *format, ...)
9333 +{
9334 + va_list args;
9335 +
9336 + va_start(args, format);
9337 + DWC_VPRINTF(format, args);
9338 + va_end(args);
9339 +}
9340 +#endif
9341 +
9342 +
9343 +/* dwc_mem.h */
9344 +
9345 +#if 0
9346 +dwc_pool_t *DWC_DMA_POOL_CREATE(uint32_t size,
9347 + uint32_t align,
9348 + uint32_t alloc)
9349 +{
9350 + struct dma_pool *pool = dma_pool_create("Pool", NULL,
9351 + size, align, alloc);
9352 + return (dwc_pool_t *)pool;
9353 +}
9354 +
9355 +void DWC_DMA_POOL_DESTROY(dwc_pool_t *pool)
9356 +{
9357 + dma_pool_destroy((struct dma_pool *)pool);
9358 +}
9359 +
9360 +void *DWC_DMA_POOL_ALLOC(dwc_pool_t *pool, uint64_t *dma_addr)
9361 +{
9362 +// return dma_pool_alloc((struct dma_pool *)pool, GFP_KERNEL, dma_addr);
9363 + return dma_pool_alloc((struct dma_pool *)pool, M_WAITOK, dma_addr);
9364 +}
9365 +
9366 +void *DWC_DMA_POOL_ZALLOC(dwc_pool_t *pool, uint64_t *dma_addr)
9367 +{
9368 + void *vaddr = DWC_DMA_POOL_ALLOC(pool, dma_addr);
9369 +	return vaddr;	/* FIXME: zeroing needs the pool element size, which is not tracked here */
9370 +}
9371 +
9372 +void DWC_DMA_POOL_FREE(dwc_pool_t *pool, void *vaddr, void *daddr)
9373 +{
9374 + dma_pool_free(pool, vaddr, daddr);
9375 +}
9376 +#endif
9377 +
9378 +void *__DWC_DMA_ALLOC(void *dma_ctx, uint32_t size, dwc_dma_t *dma_addr)
9379 +{
9380 + dwc_dmactx_t *dma = (dwc_dmactx_t *)dma_ctx;
9381 + int error;
9382 +
9383 + error = bus_dmamem_alloc(dma->dma_tag, size, 1, size, dma->segs,
9384 + sizeof(dma->segs) / sizeof(dma->segs[0]),
9385 + &dma->nsegs, BUS_DMA_NOWAIT);
9386 + if (error) {
9387 + printf("%s: bus_dmamem_alloc(%ju) failed: %d\n", __func__,
9388 + (uintmax_t)size, error);
9389 + goto fail_0;
9390 + }
9391 +
9392 + error = bus_dmamem_map(dma->dma_tag, dma->segs, dma->nsegs, size,
9393 + (caddr_t *)&dma->dma_vaddr,
9394 + BUS_DMA_NOWAIT | BUS_DMA_COHERENT);
9395 + if (error) {
9396 + printf("%s: bus_dmamem_map failed: %d\n", __func__, error);
9397 + goto fail_1;
9398 + }
9399 +
9400 + error = bus_dmamap_create(dma->dma_tag, size, 1, size, 0,
9401 + BUS_DMA_NOWAIT, &dma->dma_map);
9402 + if (error) {
9403 + printf("%s: bus_dmamap_create failed: %d\n", __func__, error);
9404 + goto fail_2;
9405 + }
9406 +
9407 + error = bus_dmamap_load(dma->dma_tag, dma->dma_map, dma->dma_vaddr,
9408 + size, NULL, BUS_DMA_NOWAIT);
9409 + if (error) {
9410 + printf("%s: bus_dmamap_load failed: %d\n", __func__, error);
9411 + goto fail_3;
9412 + }
9413 +
9414 + dma->dma_paddr = (bus_addr_t)dma->segs[0].ds_addr;
9415 + *dma_addr = dma->dma_paddr;
9416 + return dma->dma_vaddr;
9417 +
9418 +fail_3:
9419 + bus_dmamap_destroy(dma->dma_tag, dma->dma_map);
9420 +fail_2:
9421 + bus_dmamem_unmap(dma->dma_tag, dma->dma_vaddr, size);
9422 +fail_1:
9423 + bus_dmamem_free(dma->dma_tag, dma->segs, dma->nsegs);
9424 +fail_0:
9425 + dma->dma_map = NULL;
9426 + dma->dma_vaddr = NULL;
9427 + dma->nsegs = 0;
9428 +
9429 + return NULL;
9430 +}
9431 +
9432 +void __DWC_DMA_FREE(void *dma_ctx, uint32_t size, void *virt_addr, dwc_dma_t dma_addr)
9433 +{
9434 + dwc_dmactx_t *dma = (dwc_dmactx_t *)dma_ctx;
9435 +
9436 + if (dma->dma_map != NULL) {
9437 + bus_dmamap_sync(dma->dma_tag, dma->dma_map, 0, size,
9438 + BUS_DMASYNC_POSTREAD | BUS_DMASYNC_POSTWRITE);
9439 + bus_dmamap_unload(dma->dma_tag, dma->dma_map);
9440 + bus_dmamap_destroy(dma->dma_tag, dma->dma_map);
9441 + bus_dmamem_unmap(dma->dma_tag, dma->dma_vaddr, size);
9442 + bus_dmamem_free(dma->dma_tag, dma->segs, dma->nsegs);
9443 + dma->dma_paddr = 0;
9444 + dma->dma_map = NULL;
9445 + dma->dma_vaddr = NULL;
9446 + dma->nsegs = 0;
9447 + }
9448 +}
9449 +
9450 +void *__DWC_ALLOC(void *mem_ctx, uint32_t size)
9451 +{
9452 + return malloc(size, M_DEVBUF, M_WAITOK | M_ZERO);
9453 +}
9454 +
9455 +void *__DWC_ALLOC_ATOMIC(void *mem_ctx, uint32_t size)
9456 +{
9457 + return malloc(size, M_DEVBUF, M_NOWAIT | M_ZERO);
9458 +}
9459 +
9460 +void __DWC_FREE(void *mem_ctx, void *addr)
9461 +{
9462 + free(addr, M_DEVBUF);
9463 +}
9464 +
9465 +
9466 +#ifdef DWC_CRYPTOLIB
9467 +/* dwc_crypto.h */
9468 +
9469 +void DWC_RANDOM_BYTES(uint8_t *buffer, uint32_t length)
9470 +{
9471 + get_random_bytes(buffer, length);
9472 +}
9473 +
9474 +int DWC_AES_CBC(uint8_t *message, uint32_t messagelen, uint8_t *key, uint32_t keylen, uint8_t iv[16], uint8_t *out)
9475 +{
9476 + struct crypto_blkcipher *tfm;
9477 + struct blkcipher_desc desc;
9478 + struct scatterlist sgd;
9479 + struct scatterlist sgs;
9480 +
9481 + tfm = crypto_alloc_blkcipher("cbc(aes)", 0, CRYPTO_ALG_ASYNC);
9482 + if (tfm == NULL) {
9483 + printk("failed to load transform for aes CBC\n");
9484 + return -1;
9485 + }
9486 +
9487 + crypto_blkcipher_setkey(tfm, key, keylen);
9488 + crypto_blkcipher_set_iv(tfm, iv, 16);
9489 +
9490 + sg_init_one(&sgd, out, messagelen);
9491 + sg_init_one(&sgs, message, messagelen);
9492 +
9493 + desc.tfm = tfm;
9494 + desc.flags = 0;
9495 +
9496 + if (crypto_blkcipher_encrypt(&desc, &sgd, &sgs, messagelen)) {
9497 + crypto_free_blkcipher(tfm);
9498 + DWC_ERROR("AES CBC encryption failed");
9499 + return -1;
9500 + }
9501 +
9502 + crypto_free_blkcipher(tfm);
9503 + return 0;
9504 +}
9505 +
9506 +int DWC_SHA256(uint8_t *message, uint32_t len, uint8_t *out)
9507 +{
9508 + struct crypto_hash *tfm;
9509 + struct hash_desc desc;
9510 + struct scatterlist sg;
9511 +
9512 + tfm = crypto_alloc_hash("sha256", 0, CRYPTO_ALG_ASYNC);
9513 + if (IS_ERR(tfm)) {
9514 + DWC_ERROR("Failed to load transform for sha256: %ld", PTR_ERR(tfm));
9515 + return 0;
9516 + }
9517 + desc.tfm = tfm;
9518 + desc.flags = 0;
9519 +
9520 + sg_init_one(&sg, message, len);
9521 + crypto_hash_digest(&desc, &sg, len, out);
9522 + crypto_free_hash(tfm);
9523 +
9524 + return 1;
9525 +}
9526 +
9527 +int DWC_HMAC_SHA256(uint8_t *message, uint32_t messagelen,
9528 + uint8_t *key, uint32_t keylen, uint8_t *out)
9529 +{
9530 + struct crypto_hash *tfm;
9531 + struct hash_desc desc;
9532 + struct scatterlist sg;
9533 +
9534 + tfm = crypto_alloc_hash("hmac(sha256)", 0, CRYPTO_ALG_ASYNC);
9535 + if (IS_ERR(tfm)) {
9536 + DWC_ERROR("Failed to load transform for hmac(sha256): %ld", PTR_ERR(tfm));
9537 + return 0;
9538 + }
9539 + desc.tfm = tfm;
9540 + desc.flags = 0;
9541 +
9542 + sg_init_one(&sg, message, messagelen);
9543 + crypto_hash_setkey(tfm, key, keylen);
9544 + crypto_hash_digest(&desc, &sg, messagelen, out);
9545 + crypto_free_hash(tfm);
9546 +
9547 + return 1;
9548 +}
9549 +
9550 +#endif /* DWC_CRYPTOLIB */
9551 +
9552 +
9553 +/* Byte Ordering Conversions */
9554 +
9555 +uint32_t DWC_CPU_TO_LE32(uint32_t *p)
9556 +{
9557 +#ifdef __LITTLE_ENDIAN
9558 + return *p;
9559 +#else
9560 + uint8_t *u_p = (uint8_t *)p;
9561 +
9562 + return (u_p[3] | (u_p[2] << 8) | (u_p[1] << 16) | (u_p[0] << 24));
9563 +#endif
9564 +}
9565 +
9566 +uint32_t DWC_CPU_TO_BE32(uint32_t *p)
9567 +{
9568 +#ifdef __BIG_ENDIAN
9569 + return *p;
9570 +#else
9571 + uint8_t *u_p = (uint8_t *)p;
9572 +
9573 + return (u_p[3] | (u_p[2] << 8) | (u_p[1] << 16) | (u_p[0] << 24));
9574 +#endif
9575 +}
9576 +
9577 +uint32_t DWC_LE32_TO_CPU(uint32_t *p)
9578 +{
9579 +#ifdef __LITTLE_ENDIAN
9580 + return *p;
9581 +#else
9582 + uint8_t *u_p = (uint8_t *)p;
9583 +
9584 + return (u_p[3] | (u_p[2] << 8) | (u_p[1] << 16) | (u_p[0] << 24));
9585 +#endif
9586 +}
9587 +
9588 +uint32_t DWC_BE32_TO_CPU(uint32_t *p)
9589 +{
9590 +#ifdef __BIG_ENDIAN
9591 + return *p;
9592 +#else
9593 + uint8_t *u_p = (uint8_t *)p;
9594 +
9595 + return (u_p[3] | (u_p[2] << 8) | (u_p[1] << 16) | (u_p[0] << 24));
9596 +#endif
9597 +}
9598 +
9599 +uint16_t DWC_CPU_TO_LE16(uint16_t *p)
9600 +{
9601 +#ifdef __LITTLE_ENDIAN
9602 + return *p;
9603 +#else
9604 + uint8_t *u_p = (uint8_t *)p;
9605 + return (u_p[1] | (u_p[0] << 8));
9606 +#endif
9607 +}
9608 +
9609 +uint16_t DWC_CPU_TO_BE16(uint16_t *p)
9610 +{
9611 +#ifdef __BIG_ENDIAN
9612 + return *p;
9613 +#else
9614 + uint8_t *u_p = (uint8_t *)p;
9615 + return (u_p[1] | (u_p[0] << 8));
9616 +#endif
9617 +}
9618 +
9619 +uint16_t DWC_LE16_TO_CPU(uint16_t *p)
9620 +{
9621 +#ifdef __LITTLE_ENDIAN
9622 + return *p;
9623 +#else
9624 + uint8_t *u_p = (uint8_t *)p;
9625 + return (u_p[1] | (u_p[0] << 8));
9626 +#endif
9627 +}
9628 +
9629 +uint16_t DWC_BE16_TO_CPU(uint16_t *p)
9630 +{
9631 +#ifdef __BIG_ENDIAN
9632 + return *p;
9633 +#else
9634 + uint8_t *u_p = (uint8_t *)p;
9635 + return (u_p[1] | (u_p[0] << 8));
9636 +#endif
9637 +}
9638 +
9639 +
9640 +/* Registers */
9641 +
9642 +uint32_t DWC_READ_REG32(void *io_ctx, uint32_t volatile *reg)
9643 +{
9644 + dwc_ioctx_t *io = (dwc_ioctx_t *)io_ctx;
9645 + bus_size_t ior = (bus_size_t)reg;
9646 +
9647 + return bus_space_read_4(io->iot, io->ioh, ior);
9648 +}
9649 +
9650 +#if 0
9651 +uint64_t DWC_READ_REG64(void *io_ctx, uint64_t volatile *reg)
9652 +{
9653 + dwc_ioctx_t *io = (dwc_ioctx_t *)io_ctx;
9654 + bus_size_t ior = (bus_size_t)reg;
9655 +
9656 + return bus_space_read_8(io->iot, io->ioh, ior);
9657 +}
9658 +#endif
9659 +
9660 +void DWC_WRITE_REG32(void *io_ctx, uint32_t volatile *reg, uint32_t value)
9661 +{
9662 + dwc_ioctx_t *io = (dwc_ioctx_t *)io_ctx;
9663 + bus_size_t ior = (bus_size_t)reg;
9664 +
9665 + bus_space_write_4(io->iot, io->ioh, ior, value);
9666 +}
9667 +
9668 +#if 0
9669 +void DWC_WRITE_REG64(void *io_ctx, uint64_t volatile *reg, uint64_t value)
9670 +{
9671 + dwc_ioctx_t *io = (dwc_ioctx_t *)io_ctx;
9672 + bus_size_t ior = (bus_size_t)reg;
9673 +
9674 + bus_space_write_8(io->iot, io->ioh, ior, value);
9675 +}
9676 +#endif
9677 +
9678 +void DWC_MODIFY_REG32(void *io_ctx, uint32_t volatile *reg, uint32_t clear_mask,
9679 + uint32_t set_mask)
9680 +{
9681 + dwc_ioctx_t *io = (dwc_ioctx_t *)io_ctx;
9682 + bus_size_t ior = (bus_size_t)reg;
9683 +
9684 + bus_space_write_4(io->iot, io->ioh, ior,
9685 + (bus_space_read_4(io->iot, io->ioh, ior) &
9686 + ~clear_mask) | set_mask);
9687 +}
9688 +
9689 +#if 0
9690 +void DWC_MODIFY_REG64(void *io_ctx, uint64_t volatile *reg, uint64_t clear_mask,
9691 + uint64_t set_mask)
9692 +{
9693 + dwc_ioctx_t *io = (dwc_ioctx_t *)io_ctx;
9694 + bus_size_t ior = (bus_size_t)reg;
9695 +
9696 + bus_space_write_8(io->iot, io->ioh, ior,
9697 + (bus_space_read_8(io->iot, io->ioh, ior) &
9698 + ~clear_mask) | set_mask);
9699 +}
9700 +#endif
9701 +
9702 +
9703 +/* Locking */
9704 +
9705 +dwc_spinlock_t *DWC_SPINLOCK_ALLOC(void)
9706 +{
9707 + struct simplelock *sl = DWC_ALLOC(sizeof(*sl));
9708 +
9709 + if (!sl) {
9710 + DWC_ERROR("Cannot allocate memory for spinlock");
9711 + return NULL;
9712 + }
9713 +
9714 + simple_lock_init(sl);
9715 + return (dwc_spinlock_t *)sl;
9716 +}
9717 +
9718 +void DWC_SPINLOCK_FREE(dwc_spinlock_t *lock)
9719 +{
9720 + struct simplelock *sl = (struct simplelock *)lock;
9721 +
9722 + DWC_FREE(sl);
9723 +}
9724 +
9725 +void DWC_SPINLOCK(dwc_spinlock_t *lock)
9726 +{
9727 + simple_lock((struct simplelock *)lock);
9728 +}
9729 +
9730 +void DWC_SPINUNLOCK(dwc_spinlock_t *lock)
9731 +{
9732 + simple_unlock((struct simplelock *)lock);
9733 +}
9734 +
9735 +void DWC_SPINLOCK_IRQSAVE(dwc_spinlock_t *lock, dwc_irqflags_t *flags)
9736 +{
9737 + simple_lock((struct simplelock *)lock);
9738 + *flags = splbio();
9739 +}
9740 +
9741 +void DWC_SPINUNLOCK_IRQRESTORE(dwc_spinlock_t *lock, dwc_irqflags_t flags)
9742 +{
9743 + splx(flags);
9744 + simple_unlock((struct simplelock *)lock);
9745 +}
9746 +
9747 +dwc_mutex_t *DWC_MUTEX_ALLOC(void)
9748 +{
9749 + dwc_mutex_t *mutex = DWC_ALLOC(sizeof(struct lock));
9750 +
9751 + if (!mutex) {
9752 + DWC_ERROR("Cannot allocate memory for mutex");
9753 + return NULL;
9754 + }
9755 +
9756 + lockinit((struct lock *)mutex, 0, "dw3mtx", 0, 0);
9757 + return mutex;
9758 +}
9759 +
9760 +#if (defined(DWC_LINUX) && defined(CONFIG_DEBUG_MUTEXES))
9761 +#else
9762 +void DWC_MUTEX_FREE(dwc_mutex_t *mutex)
9763 +{
9764 + DWC_FREE(mutex);
9765 +}
9766 +#endif
9767 +
9768 +void DWC_MUTEX_LOCK(dwc_mutex_t *mutex)
9769 +{
9770 + lockmgr((struct lock *)mutex, LK_EXCLUSIVE, NULL);
9771 +}
9772 +
9773 +int DWC_MUTEX_TRYLOCK(dwc_mutex_t *mutex)
9774 +{
9775 + int status;
9776 +
9777 + status = lockmgr((struct lock *)mutex, LK_EXCLUSIVE | LK_NOWAIT, NULL);
9778 + return status == 0;
9779 +}
9780 +
9781 +void DWC_MUTEX_UNLOCK(dwc_mutex_t *mutex)
9782 +{
9783 + lockmgr((struct lock *)mutex, LK_RELEASE, NULL);
9784 +}
9785 +
9786 +
9787 +/* Timing */
9788 +
9789 +void DWC_UDELAY(uint32_t usecs)
9790 +{
9791 + DELAY(usecs);
9792 +}
9793 +
9794 +void DWC_MDELAY(uint32_t msecs)
9795 +{
9796 + do {
9797 + DELAY(1000);
9798 + } while (--msecs);
9799 +}
9800 +
9801 +void DWC_MSLEEP(uint32_t msecs)
9802 +{
9803 + struct timeval tv;
9804 +
9805 + tv.tv_sec = msecs / 1000;
9806 + tv.tv_usec = (msecs - tv.tv_sec * 1000) * 1000;
9807 + tsleep(&tv, 0, "dw3slp", tvtohz(&tv));
9808 +}
9809 +
9810 +uint32_t DWC_TIME(void)
9811 +{
9812 + struct timeval tv;
9813 +
9814 + microuptime(&tv); // or getmicrouptime? (less precise, but faster)
9815 + return tv.tv_sec * 1000 + tv.tv_usec / 1000;
9816 +}
9817 +
9818 +
9819 +/* Timers */
9820 +
9821 +struct dwc_timer {
9822 + struct callout t;
9823 + char *name;
9824 + dwc_spinlock_t *lock;
9825 + dwc_timer_callback_t cb;
9826 + void *data;
9827 +};
9828 +
9829 +dwc_timer_t *DWC_TIMER_ALLOC(char *name, dwc_timer_callback_t cb, void *data)
9830 +{
9831 + dwc_timer_t *t = DWC_ALLOC(sizeof(*t));
9832 +
9833 + if (!t) {
9834 + DWC_ERROR("Cannot allocate memory for timer");
9835 + return NULL;
9836 + }
9837 +
9838 + callout_init(&t->t);
9839 +
9840 + t->name = DWC_STRDUP(name);
9841 + if (!t->name) {
9842 + DWC_ERROR("Cannot allocate memory for timer->name");
9843 + goto no_name;
9844 + }
9845 +
9846 + t->lock = DWC_SPINLOCK_ALLOC();
9847 + if (!t->lock) {
9848 + DWC_ERROR("Cannot allocate memory for timer->lock");
9849 + goto no_lock;
9850 + }
9851 +
9852 + t->cb = cb;
9853 + t->data = data;
9854 +
9855 + return t;
9856 +
9857 + no_lock:
9858 + DWC_FREE(t->name);
9859 + no_name:
9860 + DWC_FREE(t);
9861 +
9862 + return NULL;
9863 +}
9864 +
9865 +void DWC_TIMER_FREE(dwc_timer_t *timer)
9866 +{
9867 + callout_stop(&timer->t);
9868 + DWC_SPINLOCK_FREE(timer->lock);
9869 + DWC_FREE(timer->name);
9870 + DWC_FREE(timer);
9871 +}
9872 +
9873 +void DWC_TIMER_SCHEDULE(dwc_timer_t *timer, uint32_t time)
9874 +{
9875 + struct timeval tv;
9876 +
9877 + tv.tv_sec = time / 1000;
9878 + tv.tv_usec = (time - tv.tv_sec * 1000) * 1000;
9879 + callout_reset(&timer->t, tvtohz(&tv), timer->cb, timer->data);
9880 +}
9881 +
9882 +void DWC_TIMER_CANCEL(dwc_timer_t *timer)
9883 +{
9884 + callout_stop(&timer->t);
9885 +}
9886 +
9887 +
9888 +/* Wait Queues */
9889 +
9890 +struct dwc_waitq {
9891 + struct simplelock lock;
9892 + int abort;
9893 +};
9894 +
9895 +dwc_waitq_t *DWC_WAITQ_ALLOC(void)
9896 +{
9897 + dwc_waitq_t *wq = DWC_ALLOC(sizeof(*wq));
9898 +
9899 + if (!wq) {
9900 + DWC_ERROR("Cannot allocate memory for waitqueue");
9901 + return NULL;
9902 + }
9903 +
9904 + simple_lock_init(&wq->lock);
9905 + wq->abort = 0;
9906 +
9907 + return wq;
9908 +}
9909 +
9910 +void DWC_WAITQ_FREE(dwc_waitq_t *wq)
9911 +{
9912 + DWC_FREE(wq);
9913 +}
9914 +
9915 +int32_t DWC_WAITQ_WAIT(dwc_waitq_t *wq, dwc_waitq_condition_t cond, void *data)
9916 +{
9917 + int ipl;
9918 + int result = 0;
9919 +
9920 + simple_lock(&wq->lock);
9921 + ipl = splbio();
9922 +
9923 + /* Skip the sleep if already aborted or triggered */
9924 + if (!wq->abort && !cond(data)) {
9925 + splx(ipl);
9926 + result = ltsleep(wq, PCATCH, "dw3wat", 0, &wq->lock); // infinite timeout
9927 + ipl = splbio();
9928 + }
9929 +
9930 + if (result == 0) { // awoken
9931 + if (wq->abort) {
9932 + wq->abort = 0;
9933 + result = -DWC_E_ABORT;
9934 + } else {
9935 + result = 0;
9936 + }
9937 +
9938 + splx(ipl);
9939 + simple_unlock(&wq->lock);
9940 + } else {
9941 + wq->abort = 0;
9942 + splx(ipl);
9943 + simple_unlock(&wq->lock);
9944 +
9945 + if (result == ERESTART) { // signaled - restart
9946 + result = -DWC_E_RESTART;
9947 + } else { // signaled - must be EINTR
9948 + result = -DWC_E_ABORT;
9949 + }
9950 + }
9951 +
9952 + return result;
9953 +}
9954 +
9955 +int32_t DWC_WAITQ_WAIT_TIMEOUT(dwc_waitq_t *wq, dwc_waitq_condition_t cond,
9956 + void *data, int32_t msecs)
9957 +{
9958 + struct timeval tv, tv1, tv2;
9959 + int ipl;
9960 + int result = 0;
9961 +
9962 + tv.tv_sec = msecs / 1000;
9963 + tv.tv_usec = (msecs - tv.tv_sec * 1000) * 1000;
9964 +
9965 + simple_lock(&wq->lock);
9966 + ipl = splbio();
9967 +
9968 + /* Skip the sleep if already aborted or triggered */
9969 + if (!wq->abort && !cond(data)) {
9970 + splx(ipl);
9971 + getmicrouptime(&tv1);
9972 + result = ltsleep(wq, PCATCH, "dw3wto", tvtohz(&tv), &wq->lock);
9973 + getmicrouptime(&tv2);
9974 + ipl = splbio();
9975 + }
9976 +
9977 + if (result == 0) { // awoken
9978 + if (wq->abort) {
9979 + wq->abort = 0;
9980 + splx(ipl);
9981 + simple_unlock(&wq->lock);
9982 + result = -DWC_E_ABORT;
9983 + } else {
9984 + splx(ipl);
9985 + simple_unlock(&wq->lock);
9986 +
9987 + tv2.tv_usec -= tv1.tv_usec;
9988 + if (tv2.tv_usec < 0) {
9989 + tv2.tv_usec += 1000000;
9990 + tv2.tv_sec--;
9991 + }
9992 +
9993 + tv2.tv_sec -= tv1.tv_sec;
9994 + result = tv2.tv_sec * 1000 + tv2.tv_usec / 1000;
9995 + result = msecs - result;
9996 + if (result <= 0)
9997 + result = 1;
9998 + }
9999 + } else {
10000 + wq->abort = 0;
10001 + splx(ipl);
10002 + simple_unlock(&wq->lock);
10003 +
10004 + if (result == ERESTART) { // signaled - restart
10005 + result = -DWC_E_RESTART;
10006 +
10007 + } else if (result == EINTR) { // signaled - interrupt
10008 + result = -DWC_E_ABORT;
10009 +
10010 + } else { // timed out
10011 + result = -DWC_E_TIMEOUT;
10012 + }
10013 + }
10014 +
10015 + return result;
10016 +}
10017 +
10018 +void DWC_WAITQ_TRIGGER(dwc_waitq_t *wq)
10019 +{
10020 + wakeup(wq);
10021 +}
10022 +
10023 +void DWC_WAITQ_ABORT(dwc_waitq_t *wq)
10024 +{
10025 + int ipl;
10026 +
10027 + simple_lock(&wq->lock);
10028 + ipl = splbio();
10029 + wq->abort = 1;
10030 + wakeup(wq);
10031 + splx(ipl);
10032 + simple_unlock(&wq->lock);
10033 +}
10034 +
10035 +
10036 +/* Threading */
10037 +
10038 +struct dwc_thread {
10039 + struct proc *proc;
10040 + int abort;
10041 +};
10042 +
10043 +dwc_thread_t *DWC_THREAD_RUN(dwc_thread_function_t func, char *name, void *data)
10044 +{
10045 + int retval;
10046 + dwc_thread_t *thread = DWC_ALLOC(sizeof(*thread));
10047 +
10048 + if (!thread) {
10049 + return NULL;
10050 + }
10051 +
10052 + thread->abort = 0;
10053 + retval = kthread_create1((void (*)(void *))func, data, &thread->proc,
10054 + "%s", name);
10055 + if (retval) {
10056 + DWC_FREE(thread);
10057 + return NULL;
10058 + }
10059 +
10060 + return thread;
10061 +}
10062 +
10063 +int DWC_THREAD_STOP(dwc_thread_t *thread)
10064 +{
10065 + int retval;
10066 +
10067 + thread->abort = 1;
10068 + retval = tsleep(&thread->abort, 0, "dw3stp", 60 * hz);
10069 +
10070 + if (retval == 0) {
10071 + /* DWC_THREAD_EXIT() will free the thread struct */
10072 + return 0;
10073 + }
10074 +
10075 + /* NOTE: We leak the thread struct if thread doesn't die */
10076 +
10077 + if (retval == EWOULDBLOCK) {
10078 + return -DWC_E_TIMEOUT;
10079 + }
10080 +
10081 + return -DWC_E_UNKNOWN;
10082 +}
10083 +
10084 +dwc_bool_t DWC_THREAD_SHOULD_STOP(dwc_thread_t *thread)
10085 +{
10086 + return thread->abort;
10087 +}
10088 +
10089 +void DWC_THREAD_EXIT(dwc_thread_t *thread)
10090 +{
10091 + wakeup(&thread->abort);
10092 + DWC_FREE(thread);
10093 + kthread_exit(0);
10094 +}
10095 +
10096 +/* tasklets
10097 + - Runs in interrupt context (cannot sleep)
10098 + - Each tasklet runs on a single CPU
10099 + - Different tasklets can be running simultaneously on different CPUs
10100 +   [ On NetBSD there is no corresponding mechanism; drivers don't have bottom-
10101 + halves. So we just call the callback directly from DWC_TASK_SCHEDULE() ]
10102 + */
10103 +struct dwc_tasklet {
10104 + dwc_tasklet_callback_t cb;
10105 + void *data;
10106 +};
10107 +
10108 +static void tasklet_callback(void *data)
10109 +{
10110 + dwc_tasklet_t *task = (dwc_tasklet_t *)data;
10111 +
10112 + task->cb(task->data);
10113 +}
10114 +
10115 +dwc_tasklet_t *DWC_TASK_ALLOC(char *name, dwc_tasklet_callback_t cb, void *data)
10116 +{
10117 + dwc_tasklet_t *task = DWC_ALLOC(sizeof(*task));
10118 +
10119 + if (task) {
10120 + task->cb = cb;
10121 + task->data = data;
10122 + } else {
10123 + DWC_ERROR("Cannot allocate memory for tasklet");
10124 + }
10125 +
10126 + return task;
10127 +}
10128 +
10129 +void DWC_TASK_FREE(dwc_tasklet_t *task)
10130 +{
10131 + DWC_FREE(task);
10132 +}
10133 +
10134 +void DWC_TASK_SCHEDULE(dwc_tasklet_t *task)
10135 +{
10136 + tasklet_callback(task);
10137 +}
10138 +
10139 +
10140 +/* workqueues
10141 + - Runs in process context (can sleep)
10142 + */
10143 +typedef struct work_container {
10144 + dwc_work_callback_t cb;
10145 + void *data;
10146 + dwc_workq_t *wq;
10147 + char *name;
10148 + int hz;
10149 + struct work task;
10150 +} work_container_t;
10151 +
10152 +struct dwc_workq {
10153 + struct workqueue *taskq;
10154 + dwc_spinlock_t *lock;
10155 + dwc_waitq_t *waitq;
10156 + int pending;
10157 + struct work_container *container;
10158 +};
10159 +
10160 +static void do_work(struct work *task, void *data)
10161 +{
10162 + dwc_workq_t *wq = (dwc_workq_t *)data;
10163 + work_container_t *container = wq->container;
10164 + dwc_irqflags_t flags;
10165 +
10166 + if (container->hz) {
10167 + tsleep(container, 0, "dw3wrk", container->hz);
10168 + }
10169 +
10170 + container->cb(container->data);
10171 + DWC_DEBUG("Work done: %s, container=%p", container->name, container);
10172 +
10173 + DWC_SPINLOCK_IRQSAVE(wq->lock, &flags);
10174 + if (container->name)
10175 + DWC_FREE(container->name);
10176 + DWC_FREE(container);
10177 + wq->pending--;
10178 + DWC_SPINUNLOCK_IRQRESTORE(wq->lock, flags);
10179 + DWC_WAITQ_TRIGGER(wq->waitq);
10180 +}
10181 +
10182 +static int work_done(void *data)
10183 +{
10184 + dwc_workq_t *workq = (dwc_workq_t *)data;
10185 +
10186 + return workq->pending == 0;
10187 +}
10188 +
10189 +int DWC_WORKQ_WAIT_WORK_DONE(dwc_workq_t *workq, int timeout)
10190 +{
10191 + return DWC_WAITQ_WAIT_TIMEOUT(workq->waitq, work_done, workq, timeout);
10192 +}
10193 +
10194 +dwc_workq_t *DWC_WORKQ_ALLOC(char *name)
10195 +{
10196 + int result;
10197 + dwc_workq_t *wq = DWC_ALLOC(sizeof(*wq));
10198 +
10199 + if (!wq) {
10200 + DWC_ERROR("Cannot allocate memory for workqueue");
10201 + return NULL;
10202 + }
10203 +
10204 + result = workqueue_create(&wq->taskq, name, do_work, wq, 0 /*PWAIT*/,
10205 + IPL_BIO, 0);
10206 + if (result) {
10207 + DWC_ERROR("Cannot create workqueue");
10208 + goto no_taskq;
10209 + }
10210 +
10211 + wq->pending = 0;
10212 +
10213 + wq->lock = DWC_SPINLOCK_ALLOC();
10214 + if (!wq->lock) {
10215 + DWC_ERROR("Cannot allocate memory for spinlock");
10216 + goto no_lock;
10217 + }
10218 +
10219 + wq->waitq = DWC_WAITQ_ALLOC();
10220 + if (!wq->waitq) {
10221 + DWC_ERROR("Cannot allocate memory for waitqueue");
10222 + goto no_waitq;
10223 + }
10224 +
10225 + return wq;
10226 +
10227 + no_waitq:
10228 + DWC_SPINLOCK_FREE(wq->lock);
10229 + no_lock:
10230 + workqueue_destroy(wq->taskq);
10231 + no_taskq:
10232 + DWC_FREE(wq);
10233 +
10234 + return NULL;
10235 +}
10236 +
10237 +void DWC_WORKQ_FREE(dwc_workq_t *wq)
10238 +{
10239 +#ifdef DEBUG
10240 + dwc_irqflags_t flags;
10241 +
10242 + DWC_SPINLOCK_IRQSAVE(wq->lock, &flags);
10243 +
10244 + if (wq->pending != 0) {
10245 + struct work_container *container = wq->container;
10246 +
10247 + DWC_ERROR("Destroying work queue with pending work");
10248 +
10249 + if (container && container->name) {
10250 + DWC_ERROR("Work %s still pending", container->name);
10251 + }
10252 + }
10253 +
10254 + DWC_SPINUNLOCK_IRQRESTORE(wq->lock, flags);
10255 +#endif
10256 + DWC_WAITQ_FREE(wq->waitq);
10257 + DWC_SPINLOCK_FREE(wq->lock);
10258 + workqueue_destroy(wq->taskq);
10259 + DWC_FREE(wq);
10260 +}
10261 +
10262 +void DWC_WORKQ_SCHEDULE(dwc_workq_t *wq, dwc_work_callback_t cb, void *data,
10263 + char *format, ...)
10264 +{
10265 + dwc_irqflags_t flags;
10266 + work_container_t *container;
10267 + static char name[128];
10268 + va_list args;
10269 +
10270 + va_start(args, format);
10271 + DWC_VSNPRINTF(name, 128, format, args);
10272 + va_end(args);
10273 +
10274 + DWC_SPINLOCK_IRQSAVE(wq->lock, &flags);
10275 + wq->pending++;
10276 + DWC_SPINUNLOCK_IRQRESTORE(wq->lock, flags);
10277 + DWC_WAITQ_TRIGGER(wq->waitq);
10278 +
10279 + container = DWC_ALLOC_ATOMIC(sizeof(*container));
10280 + if (!container) {
10281 + DWC_ERROR("Cannot allocate memory for container");
10282 + return;
10283 + }
10284 +
10285 + container->name = DWC_STRDUP(name);
10286 + if (!container->name) {
10287 + DWC_ERROR("Cannot allocate memory for container->name");
10288 + DWC_FREE(container);
10289 + return;
10290 + }
10291 +
10292 + container->cb = cb;
10293 + container->data = data;
10294 + container->wq = wq;
10295 + container->hz = 0;
10296 + wq->container = container;
10297 +
10298 + DWC_DEBUG("Queueing work: %s, container=%p", container->name, container);
10299 + workqueue_enqueue(wq->taskq, &container->task);
10300 +}
10301 +
10302 +void DWC_WORKQ_SCHEDULE_DELAYED(dwc_workq_t *wq, dwc_work_callback_t cb,
10303 + void *data, uint32_t time, char *format, ...)
10304 +{
10305 + dwc_irqflags_t flags;
10306 + work_container_t *container;
10307 + static char name[128];
10308 + struct timeval tv;
10309 + va_list args;
10310 +
10311 + va_start(args, format);
10312 + DWC_VSNPRINTF(name, 128, format, args);
10313 + va_end(args);
10314 +
10315 + DWC_SPINLOCK_IRQSAVE(wq->lock, &flags);
10316 + wq->pending++;
10317 + DWC_SPINUNLOCK_IRQRESTORE(wq->lock, flags);
10318 + DWC_WAITQ_TRIGGER(wq->waitq);
10319 +
10320 + container = DWC_ALLOC_ATOMIC(sizeof(*container));
10321 + if (!container) {
10322 + DWC_ERROR("Cannot allocate memory for container");
10323 + return;
10324 + }
10325 +
10326 + container->name = DWC_STRDUP(name);
10327 + if (!container->name) {
10328 + DWC_ERROR("Cannot allocate memory for container->name");
10329 + DWC_FREE(container);
10330 + return;
10331 + }
10332 +
10333 + container->cb = cb;
10334 + container->data = data;
10335 + container->wq = wq;
10336 + tv.tv_sec = time / 1000;
10337 + tv.tv_usec = (time - tv.tv_sec * 1000) * 1000;
10338 + container->hz = tvtohz(&tv);
10339 + wq->container = container;
10340 +
10341 + DWC_DEBUG("Queueing work: %s, container=%p", container->name, container);
10342 + workqueue_enqueue(wq->taskq, &container->task);
10343 +}
10344 +
10345 +int DWC_WORKQ_PENDING(dwc_workq_t *wq)
10346 +{
10347 + return wq->pending;
10348 +}
10349 --- /dev/null
10350 +++ b/drivers/usb/host/dwc_common_port/dwc_crypto.c
10351 @@ -0,0 +1,308 @@
10352 +/* =========================================================================
10353 + * $File: //dwh/usb_iip/dev/software/dwc_common_port_2/dwc_crypto.c $
10354 + * $Revision: #5 $
10355 + * $Date: 2010/09/28 $
10356 + * $Change: 1596182 $
10357 + *
10358 + * Synopsys Portability Library Software and documentation
10359 + * (hereinafter, "Software") is an Unsupported proprietary work of
10360 + * Synopsys, Inc. unless otherwise expressly agreed to in writing
10361 + * between Synopsys and you.
10362 + *
10363 + * The Software IS NOT an item of Licensed Software or Licensed Product
10364 + * under any End User Software License Agreement or Agreement for
10365 + * Licensed Product with Synopsys or any supplement thereto. You are
10366 + * permitted to use and redistribute this Software in source and binary
10367 + * forms, with or without modification, provided that redistributions
10368 + * of source code must retain this notice. You may not view, use,
10369 + * disclose, copy or distribute this file or any information contained
10370 + * herein except pursuant to this license grant from Synopsys. If you
10371 + * do not agree with this notice, including the disclaimer below, then
10372 + * you are not authorized to use the Software.
10373 + *
10374 + * THIS SOFTWARE IS BEING DISTRIBUTED BY SYNOPSYS SOLELY ON AN "AS IS"
10375 + * BASIS AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
10376 + * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
10377 + * FOR A PARTICULAR PURPOSE ARE HEREBY DISCLAIMED. IN NO EVENT SHALL
10378 + * SYNOPSYS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
10379 + * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
10380 + * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
10381 + * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY
10382 + * OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
10383 + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE
10384 + * USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH
10385 + * DAMAGE.
10386 + * ========================================================================= */
10387 +
10388 +/** @file
10389 + * This file contains the WUSB cryptographic routines.
10390 + */
10391 +
10392 +#ifdef DWC_CRYPTOLIB
10393 +
10394 +#include "dwc_crypto.h"
10395 +#include "usb.h"
10396 +
10397 +#ifdef DEBUG
10398 +static inline void dump_bytes(char *name, uint8_t *bytes, int len)
10399 +{
10400 + int i;
10401 + DWC_PRINTF("%s: ", name);
10402 + for (i=0; i<len; i++) {
10403 + DWC_PRINTF("%02x ", bytes[i]);
10404 + }
10405 + DWC_PRINTF("\n");
10406 +}
10407 +#else
10408 +#define dump_bytes(x...)
10409 +#endif
10410 +
10411 +/* Display a block */
10412 +void show_block(const u8 *blk, const char *prefix, const char *suffix, int a)
10413 +{
10414 +#ifdef DWC_DEBUG_CRYPTO
10415 + int i, blksize = 16;
10416 +
10417 + DWC_DEBUG("%s", prefix);
10418 +
10419 + if (suffix == NULL) {
10420 + suffix = "\n";
10421 + blksize = a;
10422 + }
10423 +
10424 + for (i = 0; i < blksize; i++)
10425 + DWC_PRINT("%02x%s", *blk++, ((i & 3) == 3) ? " " : " ");
10426 + DWC_PRINT(suffix);
10427 +#endif
10428 +}
10429 +
10430 +/**
10431 + * Encrypts an array of bytes using the AES encryption engine.
10432 + * If <code>dst</code> == <code>src</code>, then the bytes will be encrypted
10433 + * in-place.
10434 + *
10435 + * @return 0 on success, negative error code on error.
10436 + */
10437 +int dwc_wusb_aes_encrypt(u8 *src, u8 *key, u8 *dst)
10438 +{
10439 + u8 block_t[16];
10440 + DWC_MEMSET(block_t, 0, 16);
10441 +
10442 + return DWC_AES_CBC(src, 16, key, 16, block_t, dst);
10443 +}
10444 +
10445 +/**
10446 + * The CCM-MAC-FUNCTION described in section 6.5 of the WUSB spec.
10447 + * This function takes a data string and returns the encrypted CBC
10448 + * Counter-mode MIC.
10449 + *
10450 + * @param key The 128-bit symmetric key.
10451 + * @param nonce The CCM nonce.
10452 + * @param label The unique 14-byte ASCII text label.
10453 + * @param bytes The byte array to be encrypted.
10454 + * @param len Length of the byte array.
10455 + * @param result Byte array to receive the 8-byte encrypted MIC.
10456 + */
10457 +void dwc_wusb_cmf(u8 *key, u8 *nonce,
10458 + char *label, u8 *bytes, int len, u8 *result)
10459 +{
10460 + u8 block_m[16];
10461 + u8 block_x[16];
10462 + u8 block_t[8];
10463 + int idx, blkNum;
10464 + u16 la = (u16)(len + 14);
10465 +
10466 + /* Set the AES-128 key */
10467 + //dwc_aes_setkey(tfm, key, 16);
10468 +
10469 + /* Fill block B0 from flags = 0x59, N, and l(m) = 0 */
10470 + block_m[0] = 0x59;
10471 + for (idx = 0; idx < 13; idx++)
10472 + block_m[idx + 1] = nonce[idx];
10473 + block_m[14] = 0;
10474 + block_m[15] = 0;
10475 +
10476 + /* Produce the CBC IV */
10477 + dwc_wusb_aes_encrypt(block_m, key, block_x);
10478 + show_block(block_m, "CBC IV in: ", "\n", 0);
10479 + show_block(block_x, "CBC IV out:", "\n", 0);
10480 +
10481 + /* Fill block B1 from l(a) = Blen + 14, and A */
10482 + block_x[0] ^= (u8)(la >> 8);
10483 + block_x[1] ^= (u8)la;
10484 + for (idx = 0; idx < 14; idx++)
10485 + block_x[idx + 2] ^= label[idx];
10486 + show_block(block_x, "After xor: ", "b1\n", 16);
10487 +
10488 + dwc_wusb_aes_encrypt(block_x, key, block_x);
10489 + show_block(block_x, "After AES: ", "b1\n", 16);
10490 +
10491 + idx = 0;
10492 + blkNum = 0;
10493 +
10494 + /* Fill remaining blocks with B */
10495 + while (len-- > 0) {
10496 + block_x[idx] ^= *bytes++;
10497 + if (++idx >= 16) {
10498 + idx = 0;
10499 + show_block(block_x, "After xor: ", "\n", blkNum);
10500 + dwc_wusb_aes_encrypt(block_x, key, block_x);
10501 + show_block(block_x, "After AES: ", "\n", blkNum);
10502 + blkNum++;
10503 + }
10504 + }
10505 +
10506 + /* Handle partial last block */
10507 + if (idx > 0) {
10508 + show_block(block_x, "After xor: ", "\n", blkNum);
10509 + dwc_wusb_aes_encrypt(block_x, key, block_x);
10510 + show_block(block_x, "After AES: ", "\n", blkNum);
10511 + }
10512 +
10513 + /* Save the MIC tag */
10514 + DWC_MEMCPY(block_t, block_x, 8);
10515 + show_block(block_t, "MIC tag : ", NULL, 8);
10516 +
10517 + /* Fill block A0 from flags = 0x01, N, and counter = 0 */
10518 + block_m[0] = 0x01;
10519 + block_m[14] = 0;
10520 + block_m[15] = 0;
10521 +
10522 + /* Encrypt the counter */
10523 + dwc_wusb_aes_encrypt(block_m, key, block_x);
10524 + show_block(block_x, "CTR[MIC] : ", NULL, 8);
10525 +
10526 + /* XOR with MIC tag */
10527 + for (idx = 0; idx < 8; idx++) {
10528 + block_t[idx] ^= block_x[idx];
10529 + }
10530 +
10531 + /* Return result to caller */
10532 + DWC_MEMCPY(result, block_t, 8);
10533 + show_block(result, "CCM-MIC : ", NULL, 8);
10534 +
10535 +}
10536 +
10537 +/**
10538 + * The PRF function described in section 6.5 of the WUSB spec. This function
10539 + * concatenates MIC values returned from dwc_cmf() to create a value of
10540 + * the requested length.
10541 + *
10542 + * @param prf_len Length of the PRF function in bits (64, 128, or 256).
10543 + * @param key, nonce, label, bytes, len Same as for dwc_cmf().
10544 + * @param result Byte array to receive the result.
10545 + */
10546 +void dwc_wusb_prf(int prf_len, u8 *key,
10547 + u8 *nonce, char *label, u8 *bytes, int len, u8 *result)
10548 +{
10549 + int i;
10550 +
10551 + nonce[0] = 0;
10552 + for (i = 0; i < prf_len >> 6; i++, nonce[0]++) {
10553 + dwc_wusb_cmf(key, nonce, label, bytes, len, result);
10554 + result += 8;
10555 + }
10556 +}
10557 +
10558 +/**
10559 + * Fills in CCM Nonce per the WUSB spec.
10560 + *
10561 + * @param[in] haddr Host address.
10562 + * @param[in] daddr Device address.
10563 + * @param[in] tkid Session Key(PTK) identifier.
10564 + * @param[out] nonce Pointer to where the CCM Nonce output is to be written.
10565 + */
10566 +void dwc_wusb_fill_ccm_nonce(uint16_t haddr, uint16_t daddr, uint8_t *tkid,
10567 + uint8_t *nonce)
10568 +{
10569 +
10570 + DWC_DEBUG("%s %x %x\n", __func__, daddr, haddr);
10571 +
10572 + DWC_MEMSET(&nonce[0], 0, 16);
10573 +
10574 + DWC_MEMCPY(&nonce[6], tkid, 3);
10575 + nonce[9] = daddr & 0xFF;
10576 + nonce[10] = (daddr >> 8) & 0xFF;
10577 + nonce[11] = haddr & 0xFF;
10578 + nonce[12] = (haddr >> 8) & 0xFF;
10579 +
10580 + dump_bytes("CCM nonce", nonce, 16);
10581 +}
10582 +
10583 +/**
10584 + * Generates a 16-byte cryptographic-grade random number for the Host/Device
10585 + * Nonce.
10586 + */
10587 +void dwc_wusb_gen_nonce(uint16_t addr, uint8_t *nonce)
10588 +{
10589 + uint8_t inonce[16];
10590 + uint32_t temp[4];
10591 +
10592 + /* Fill in the Nonce */
10593 + DWC_MEMSET(&inonce[0], 0, sizeof(inonce));
10594 + inonce[9] = addr & 0xFF;
10595 + inonce[10] = (addr >> 8) & 0xFF;
10596 + inonce[11] = inonce[9];
10597 + inonce[12] = inonce[10];
10598 +
10599 + /* Collect "randomness samples" */
10600 + DWC_RANDOM_BYTES((uint8_t *)temp, 16);
10601 +
10602 + dwc_wusb_prf_128((uint8_t *)temp, nonce,
10603 + "Random Numbers", (uint8_t *)temp, sizeof(temp),
10604 + nonce);
10605 +}
10606 +
10607 +/**
10608 + * Generates the Session Key (PTK) and Key Confirmation Key (KCK) per the
10609 + * WUSB spec.
10610 + *
10611 + * @param[in] ccm_nonce Pointer to CCM Nonce.
10612 + * @param[in] mk Master Key to derive the session from
10613 + * @param[in] hnonce Pointer to Host Nonce.
10614 + * @param[in] dnonce Pointer to Device Nonce.
10615 + * @param[out] kck Pointer to where the KCK output is to be written.
10616 + * @param[out] ptk Pointer to where the PTK output is to be written.
10617 + */
10618 +void dwc_wusb_gen_key(uint8_t *ccm_nonce, uint8_t *mk, uint8_t *hnonce,
10619 + uint8_t *dnonce, uint8_t *kck, uint8_t *ptk)
10620 +{
10621 + uint8_t idata[32];
10622 + uint8_t odata[32];
10623 +
10624 + dump_bytes("ck", mk, 16);
10625 + dump_bytes("hnonce", hnonce, 16);
10626 + dump_bytes("dnonce", dnonce, 16);
10627 +
10628 + /* The data is the HNonce and DNonce concatenated */
10629 + DWC_MEMCPY(&idata[0], hnonce, 16);
10630 + DWC_MEMCPY(&idata[16], dnonce, 16);
10631 +
10632 + dwc_wusb_prf_256(mk, ccm_nonce, "Pair-wise keys", idata, 32, odata);
10633 +
10634 + /* Low 16 bytes of the result is the KCK, high 16 is the PTK */
10635 + DWC_MEMCPY(kck, &odata[0], 16);
10636 + DWC_MEMCPY(ptk, &odata[16], 16);
10637 +
10638 + dump_bytes("kck", kck, 16);
10639 + dump_bytes("ptk", ptk, 16);
10640 +}
10641 +
10642 +/**
10643 + * Generates the Message Integrity Code over the Handshake data per the
10644 + * WUSB spec.
10645 + *
10646 + * @param ccm_nonce Pointer to CCM Nonce.
10647 + * @param kck Pointer to Key Confirmation Key.
10648 + * @param data Pointer to Handshake data to be checked.
10649 + * @param mic Pointer to where the MIC output is to be written.
10650 + */
10651 +void dwc_wusb_gen_mic(uint8_t *ccm_nonce, uint8_t *kck,
10652 + uint8_t *data, uint8_t *mic)
10653 +{
10654 +
10655 + dwc_wusb_prf_64(kck, ccm_nonce, "out-of-bandMIC",
10656 + data, WUSB_HANDSHAKE_LEN_FOR_MIC, mic);
10657 +}
10658 +
10659 +#endif /* DWC_CRYPTOLIB */
10660 --- /dev/null
10661 +++ b/drivers/usb/host/dwc_common_port/dwc_crypto.h
10662 @@ -0,0 +1,111 @@
10663 +/* =========================================================================
10664 + * $File: //dwh/usb_iip/dev/software/dwc_common_port_2/dwc_crypto.h $
10665 + * $Revision: #3 $
10666 + * $Date: 2010/09/28 $
10667 + * $Change: 1596182 $
10668 + *
10669 + * Synopsys Portability Library Software and documentation
10670 + * (hereinafter, "Software") is an Unsupported proprietary work of
10671 + * Synopsys, Inc. unless otherwise expressly agreed to in writing
10672 + * between Synopsys and you.
10673 + *
10674 + * The Software IS NOT an item of Licensed Software or Licensed Product
10675 + * under any End User Software License Agreement or Agreement for
10676 + * Licensed Product with Synopsys or any supplement thereto. You are
10677 + * permitted to use and redistribute this Software in source and binary
10678 + * forms, with or without modification, provided that redistributions
10679 + * of source code must retain this notice. You may not view, use,
10680 + * disclose, copy or distribute this file or any information contained
10681 + * herein except pursuant to this license grant from Synopsys. If you
10682 + * do not agree with this notice, including the disclaimer below, then
10683 + * you are not authorized to use the Software.
10684 + *
10685 + * THIS SOFTWARE IS BEING DISTRIBUTED BY SYNOPSYS SOLELY ON AN "AS IS"
10686 + * BASIS AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
10687 + * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
10688 + * FOR A PARTICULAR PURPOSE ARE HEREBY DISCLAIMED. IN NO EVENT SHALL
10689 + * SYNOPSYS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
10690 + * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
10691 + * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
10692 + * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY
10693 + * OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
10694 + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE
10695 + * USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH
10696 + * DAMAGE.
10697 + * ========================================================================= */
10698 +
10699 +#ifndef _DWC_CRYPTO_H_
10700 +#define _DWC_CRYPTO_H_
10701 +
10702 +#ifdef __cplusplus
10703 +extern "C" {
10704 +#endif
10705 +
10706 +/** @file
10707 + *
10708 + * This file contains declarations for the WUSB Cryptographic routines as
10709 + * defined in the WUSB spec. They are only to be used internally by the DWC UWB
10710 + * modules.
10711 + */
10712 +
10713 +#include "dwc_os.h"
10714 +
10715 +int dwc_wusb_aes_encrypt(u8 *src, u8 *key, u8 *dst);
10716 +
10717 +void dwc_wusb_cmf(u8 *key, u8 *nonce,
10718 + char *label, u8 *bytes, int len, u8 *result);
10719 +void dwc_wusb_prf(int prf_len, u8 *key,
10720 + u8 *nonce, char *label, u8 *bytes, int len, u8 *result);
10721 +
10722 +/**
10723 + * The PRF-64 function described in section 6.5 of the WUSB spec.
10724 + *
10725 + * @param key, nonce, label, bytes, len, result Same as for dwc_wusb_prf().
10726 + */
10727 +static inline void dwc_wusb_prf_64(u8 *key, u8 *nonce,
10728 + char *label, u8 *bytes, int len, u8 *result)
10729 +{
10730 + dwc_wusb_prf(64, key, nonce, label, bytes, len, result);
10731 +}
10732 +
10733 +/**
10734 + * The PRF-128 function described in section 6.5 of the WUSB spec.
10735 + *
10736 + * @param key, nonce, label, bytes, len, result Same as for dwc_wusb_prf().
10737 + */
10738 +static inline void dwc_wusb_prf_128(u8 *key, u8 *nonce,
10739 + char *label, u8 *bytes, int len, u8 *result)
10740 +{
10741 + dwc_wusb_prf(128, key, nonce, label, bytes, len, result);
10742 +}
10743 +
10744 +/**
10745 + * The PRF-256 function described in section 6.5 of the WUSB spec.
10746 + *
10747 + * @param key, nonce, label, bytes, len, result Same as for dwc_wusb_prf().
10748 + */
10749 +static inline void dwc_wusb_prf_256(u8 *key, u8 *nonce,
10750 + char *label, u8 *bytes, int len, u8 *result)
10751 +{
10752 + dwc_wusb_prf(256, key, nonce, label, bytes, len, result);
10753 +}
10754 +
10755 +
10756 +void dwc_wusb_fill_ccm_nonce(uint16_t haddr, uint16_t daddr, uint8_t *tkid,
10757 + uint8_t *nonce);
10758 +void dwc_wusb_gen_nonce(uint16_t addr,
10759 + uint8_t *nonce);
10760 +
10761 +void dwc_wusb_gen_key(uint8_t *ccm_nonce, uint8_t *mk,
10762 + uint8_t *hnonce, uint8_t *dnonce,
10763 + uint8_t *kck, uint8_t *ptk);
10764 +
10765 +
10766 +void dwc_wusb_gen_mic(uint8_t *ccm_nonce, uint8_t
10767 + *kck, uint8_t *data, uint8_t *mic);
10768 +
10769 +#ifdef __cplusplus
10770 +}
10771 +#endif
10772 +
10773 +#endif /* _DWC_CRYPTO_H_ */
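
For orientation, the fragment below shows how the PRF-256 helper declared above is exercised by the key-derivation routine earlier in this patch (dwc_wusb_gen_key() in dwc_crypto.c): the host and device nonces are concatenated and run through dwc_wusb_prf_256() under the "Pair-wise keys" label, and the 32-byte output is split into the KCK and the PTK. This is an illustrative sketch only; the function name is invented and it assumes the portability helpers from dwc_os.h.

#include "dwc_os.h"
#include "dwc_crypto.h"

/* Illustrative sketch: derive KCK/PTK the same way dwc_wusb_gen_key() does. */
static void example_derive_pairwise_keys(uint8_t *mk, uint8_t *ccm_nonce,
					 uint8_t *hnonce, uint8_t *dnonce,
					 uint8_t *kck, uint8_t *ptk)
{
	uint8_t in[32], out[32];

	DWC_MEMCPY(&in[0], hnonce, 16);		/* data = HNonce || DNonce */
	DWC_MEMCPY(&in[16], dnonce, 16);

	dwc_wusb_prf_256(mk, ccm_nonce, "Pair-wise keys", in, 32, out);

	DWC_MEMCPY(kck, &out[0], 16);		/* low 16 bytes: KCK */
	DWC_MEMCPY(ptk, &out[16], 16);		/* high 16 bytes: PTK */
}
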
10774 --- /dev/null
10775 +++ b/drivers/usb/host/dwc_common_port/dwc_dh.c
10776 @@ -0,0 +1,291 @@
10777 +/* =========================================================================
10778 + * $File: //dwh/usb_iip/dev/software/dwc_common_port_2/dwc_dh.c $
10779 + * $Revision: #3 $
10780 + * $Date: 2010/09/28 $
10781 + * $Change: 1596182 $
10782 + *
10783 + * Synopsys Portability Library Software and documentation
10784 + * (hereinafter, "Software") is an Unsupported proprietary work of
10785 + * Synopsys, Inc. unless otherwise expressly agreed to in writing
10786 + * between Synopsys and you.
10787 + *
10788 + * The Software IS NOT an item of Licensed Software or Licensed Product
10789 + * under any End User Software License Agreement or Agreement for
10790 + * Licensed Product with Synopsys or any supplement thereto. You are
10791 + * permitted to use and redistribute this Software in source and binary
10792 + * forms, with or without modification, provided that redistributions
10793 + * of source code must retain this notice. You may not view, use,
10794 + * disclose, copy or distribute this file or any information contained
10795 + * herein except pursuant to this license grant from Synopsys. If you
10796 + * do not agree with this notice, including the disclaimer below, then
10797 + * you are not authorized to use the Software.
10798 + *
10799 + * THIS SOFTWARE IS BEING DISTRIBUTED BY SYNOPSYS SOLELY ON AN "AS IS"
10800 + * BASIS AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
10801 + * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
10802 + * FOR A PARTICULAR PURPOSE ARE HEREBY DISCLAIMED. IN NO EVENT SHALL
10803 + * SYNOPSYS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
10804 + * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
10805 + * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
10806 + * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY
10807 + * OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
10808 + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE
10809 + * USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH
10810 + * DAMAGE.
10811 + * ========================================================================= */
10812 +#ifdef DWC_CRYPTOLIB
10813 +
10814 +#ifndef CONFIG_MACH_IPMATE
10815 +
10816 +#include "dwc_dh.h"
10817 +#include "dwc_modpow.h"
10818 +
10819 +#ifdef DEBUG
10820 +/* This function prints out a buffer in the format described in the Association
10821 + * Model specification. */
10822 +static void dh_dump(char *str, void *_num, int len)
10823 +{
10824 + uint8_t *num = _num;
10825 + int i;
10826 + DWC_PRINTF("%s\n", str);
10827 + for (i = 0; i < len; i ++) {
10828 + DWC_PRINTF("%02x", num[i]);
10829 + if (((i + 1) % 2) == 0) DWC_PRINTF(" ");
10830 + if (((i + 1) % 26) == 0) DWC_PRINTF("\n");
10831 + }
10832 +
10833 + DWC_PRINTF("\n");
10834 +}
10835 +#else
10836 +#define dh_dump(_x...) do {; } while(0)
10837 +#endif
10838 +
10839 +/* Constant g value */
10840 +static __u32 dh_g[] = {
10841 + 0x02000000,
10842 +};
10843 +
10844 +/* Constant p value */
10845 +static __u32 dh_p[] = {
10846 + 0xFFFFFFFF, 0xFFFFFFFF, 0xA2DA0FC9, 0x34C26821, 0x8B62C6C4, 0xD11CDC80, 0x084E0229, 0x74CC678A,
10847 + 0xA6BE0B02, 0x229B133B, 0x79084A51, 0xDD04348E, 0xB31995EF, 0x1B433ACD, 0x6D0A2B30, 0x37145FF2,
10848 + 0x6D35E14F, 0x45C2516D, 0x76B585E4, 0xC67E5E62, 0xE9424CF4, 0x6BED37A6, 0xB65CFF0B, 0xEDB706F4,
10849 + 0xFB6B38EE, 0xA59F895A, 0x11249FAE, 0xE61F4B7C, 0x51662849, 0x3D5BE4EC, 0xB87C00C2, 0x05BF63A1,
10850 + 0x3648DA98, 0x9AD3551C, 0xA83F1669, 0x5FCF24FD, 0x235D6583, 0x96ADA3DC, 0x56F3621C, 0xBB528520,
10851 + 0x0729D59E, 0x6D969670, 0x4E350C67, 0x0498BC4A, 0x086C74F1, 0x7C2118CA, 0x465E9032, 0x3BCE362E,
10852 + 0x2C779EE3, 0x03860E18, 0xA283279B, 0x8FA207EC, 0xF05DC5B5, 0xC9524C6F, 0xF6CB2BDE, 0x18175895,
10853 + 0x7C499539, 0xE56A95EA, 0x1826D215, 0x1005FA98, 0x5A8E7215, 0x2DC4AA8A, 0x0D1733AD, 0x337A5004,
10854 + 0xAB2155A8, 0x64BA1CDF, 0x0485FBEC, 0x0AEFDB58, 0x5771EA8A, 0x7D0C065D, 0x850F97B3, 0xC7E4E1A6,
10855 + 0x8CAEF5AB, 0xD73309DB, 0xE0948C1E, 0x9D61254A, 0x26D2E3CE, 0x6BEED21A, 0x06FA2FF1, 0x64088AD9,
10856 + 0x730276D8, 0x646AC83E, 0x182B1F52, 0x0C207B17, 0x5717E1BB, 0x6C5D617A, 0xC0880977, 0xE246D9BA,
10857 + 0xA04FE208, 0x31ABE574, 0xFC5BDB43, 0x8E10FDE0, 0x20D1824B, 0xCAD23AA9, 0xFFFFFFFF, 0xFFFFFFFF,
10858 +};
10859 +
10860 +static void dh_swap_bytes(void *_in, void *_out, uint32_t len)
10861 +{
10862 + uint8_t *in = _in;
10863 + uint8_t *out = _out;
10864 + int i;
10865 + for (i=0; i<len; i++) {
10866 + out[i] = in[len-1-i];
10867 + }
10868 +}
10869 +
10870 +/* Computes the modular exponentiation (num^exp % mod). num, exp, and mod are
10871 + * big-endian numbers of num_len, exp_len, and mod_len bytes respectively; each
10872 + * length must be a multiple of 4. */
10873 +int dwc_dh_modpow(void *mem_ctx, void *num, uint32_t num_len,
10874 + void *exp, uint32_t exp_len,
10875 + void *mod, uint32_t mod_len,
10876 + void *out)
10877 +{
10878 + /* modpow() takes little endian numbers. AM uses big-endian. This
10879 +	/* dwc_modpow() takes little-endian numbers while the Association Model (AM)
10880 +	 * uses big-endian, so this function byte-swaps each number before passing it on. */
10881 + int retval = 0;
10882 + uint32_t *result;
10883 +
10884 + uint32_t *bignum_num = dwc_alloc(mem_ctx, num_len + 4);
10885 + uint32_t *bignum_exp = dwc_alloc(mem_ctx, exp_len + 4);
10886 + uint32_t *bignum_mod = dwc_alloc(mem_ctx, mod_len + 4);
10887 +
10888 + dh_swap_bytes(num, &bignum_num[1], num_len);
10889 + bignum_num[0] = num_len / 4;
10890 +
10891 + dh_swap_bytes(exp, &bignum_exp[1], exp_len);
10892 + bignum_exp[0] = exp_len / 4;
10893 +
10894 + dh_swap_bytes(mod, &bignum_mod[1], mod_len);
10895 + bignum_mod[0] = mod_len / 4;
10896 +
10897 + result = dwc_modpow(mem_ctx, bignum_num, bignum_exp, bignum_mod);
10898 + if (!result) {
10899 + retval = -1;
10900 + goto dh_modpow_nomem;
10901 + }
10902 +
10903 + dh_swap_bytes(&result[1], out, result[0] * 4);
10904 + dwc_free(mem_ctx, result);
10905 +
10906 + dh_modpow_nomem:
10907 + dwc_free(mem_ctx, bignum_num);
10908 + dwc_free(mem_ctx, bignum_exp);
10909 + dwc_free(mem_ctx, bignum_mod);
10910 + return retval;
10911 +}
10912 +
10913 +
10914 +int dwc_dh_pk(void *mem_ctx, uint8_t nd, uint8_t *exp, uint8_t *pk, uint8_t *hash)
10915 +{
10916 + int retval;
10917 + uint8_t m3[385];
10918 +
10919 +#ifndef DH_TEST_VECTORS
10920 + DWC_RANDOM_BYTES(exp, 32);
10921 +#endif
10922 +
10923 + /* Compute the pkd */
10924 + if ((retval = dwc_dh_modpow(mem_ctx, dh_g, 4,
10925 + exp, 32,
10926 + dh_p, 384, pk))) {
10927 + return retval;
10928 + }
10929 +
10930 + m3[384] = nd;
10931 + DWC_MEMCPY(&m3[0], pk, 384);
10932 + DWC_SHA256(m3, 385, hash);
10933 +
10934 + dh_dump("PK", pk, 384);
10935 + dh_dump("SHA-256(M3)", hash, 32);
10936 + return 0;
10937 +}
10938 +
10939 +int dwc_dh_derive_keys(void *mem_ctx, uint8_t nd, uint8_t *pkh, uint8_t *pkd,
10940 + uint8_t *exp, int is_host,
10941 + char *dd, uint8_t *ck, uint8_t *kdk)
10942 +{
10943 + int retval;
10944 + uint8_t mv[784];
10945 + uint8_t sha_result[32];
10946 + uint8_t dhkey[384];
10947 + uint8_t shared_secret[384];
10948 + char *message;
10949 + uint32_t vd;
10950 +
10951 + uint8_t *pk;
10952 +
10953 + if (is_host) {
10954 + pk = pkd;
10955 + }
10956 + else {
10957 + pk = pkh;
10958 + }
10959 +
10960 + if ((retval = dwc_dh_modpow(mem_ctx, pk, 384,
10961 + exp, 32,
10962 + dh_p, 384, shared_secret))) {
10963 + return retval;
10964 + }
10965 + dh_dump("Shared Secret", shared_secret, 384);
10966 +
10967 + DWC_SHA256(shared_secret, 384, dhkey);
10968 + dh_dump("DHKEY", dhkey, 384);
10969 +
10970 + DWC_MEMCPY(&mv[0], pkd, 384);
10971 + DWC_MEMCPY(&mv[384], pkh, 384);
10972 + DWC_MEMCPY(&mv[768], "displayed digest", 16);
10973 + dh_dump("MV", mv, 784);
10974 +
10975 + DWC_SHA256(mv, 784, sha_result);
10976 + dh_dump("SHA-256(MV)", sha_result, 32);
10977 + dh_dump("First 32-bits of SHA-256(MV)", sha_result, 4);
10978 +
10979 + dh_swap_bytes(sha_result, &vd, 4);
10980 +#ifdef DEBUG
10981 + DWC_PRINTF("Vd (decimal) = %d\n", vd);
10982 +#endif
10983 +
10984 + switch (nd) {
10985 + case 2:
10986 + vd = vd % 100;
10987 + DWC_SPRINTF(dd, "%02d", vd);
10988 + break;
10989 + case 3:
10990 + vd = vd % 1000;
10991 + DWC_SPRINTF(dd, "%03d", vd);
10992 + break;
10993 + case 4:
10994 + vd = vd % 10000;
10995 + DWC_SPRINTF(dd, "%04d", vd);
10996 + break;
10997 + }
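	/* Editorial worked example: with nd == 4 and vd == 123400005, the switch
	 * above leaves vd == 5 and DWC_SPRINTF(dd, "%04d", vd) stores "0005" in dd. */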
10998 +#ifdef DEBUG
10999 + DWC_PRINTF("Display Digits: %s\n", dd);
11000 +#endif
11001 +
11002 + message = "connection key";
11003 + DWC_HMAC_SHA256(message, DWC_STRLEN(message), dhkey, 32, sha_result);
11004 + dh_dump("HMAC(SHA-256, DHKey, connection key)", sha_result, 32);
11005 + DWC_MEMCPY(ck, sha_result, 16);
11006 +
11007 + message = "key derivation key";
11008 + DWC_HMAC_SHA256(message, DWC_STRLEN(message), dhkey, 32, sha_result);
11009 + dh_dump("HMAC(SHA-256, DHKey, key derivation key)", sha_result, 32);
11010 + DWC_MEMCPY(kdk, sha_result, 32);
11011 +
11012 + return 0;
11013 +}
11014 +
11015 +
11016 +#ifdef DH_TEST_VECTORS
11017 +
11018 +static __u8 dh_a[] = {
11019 + 0x44, 0x00, 0x51, 0xd6,
11020 + 0xf0, 0xb5, 0x5e, 0xa9,
11021 + 0x67, 0xab, 0x31, 0xc6,
11022 + 0x8a, 0x8b, 0x5e, 0x37,
11023 + 0xd9, 0x10, 0xda, 0xe0,
11024 + 0xe2, 0xd4, 0x59, 0xa4,
11025 + 0x86, 0x45, 0x9c, 0xaa,
11026 + 0xdf, 0x36, 0x75, 0x16,
11027 +};
11028 +
11029 +static __u8 dh_b[] = {
11030 + 0x5d, 0xae, 0xc7, 0x86,
11031 + 0x79, 0x80, 0xa3, 0x24,
11032 + 0x8c, 0xe3, 0x57, 0x8f,
11033 + 0xc7, 0x5f, 0x1b, 0x0f,
11034 + 0x2d, 0xf8, 0x9d, 0x30,
11035 + 0x6f, 0xa4, 0x52, 0xcd,
11036 + 0xe0, 0x7a, 0x04, 0x8a,
11037 + 0xde, 0xd9, 0x26, 0x56,
11038 +};
11039 +
11040 +void dwc_run_dh_test_vectors(void *mem_ctx)
11041 +{
11042 + uint8_t pkd[384];
11043 + uint8_t pkh[384];
11044 + uint8_t hashd[32];
11045 + uint8_t hashh[32];
11046 + uint8_t ck[16];
11047 + uint8_t kdk[32];
11048 + char dd[5];
11049 +
11050 + DWC_PRINTF("\n\n\nDH_TEST_VECTORS\n\n");
11051 +
11052 + /* compute the PKd and SHA-256(PKd || Nd) */
11053 + DWC_PRINTF("Computing PKd\n");
11054 + dwc_dh_pk(mem_ctx, 2, dh_a, pkd, hashd);
11055 +
11056 +	/* compute the PKh and SHA-256(PKh || Nd) */
11057 + DWC_PRINTF("Computing PKh\n");
11058 + dwc_dh_pk(mem_ctx, 2, dh_b, pkh, hashh);
11059 +
11060 + /* compute the dhkey */
11061 + dwc_dh_derive_keys(mem_ctx, 2, pkh, pkd, dh_a, 0, dd, ck, kdk);
11062 +}
11063 +#endif /* DH_TEST_VECTORS */
11064 +
11065 +#endif /* !CONFIG_MACH_IPMATE */
11066 +
11067 +#endif /* DWC_CRYPTOLIB */
11068 --- /dev/null
11069 +++ b/drivers/usb/host/dwc_common_port/dwc_dh.h
11070 @@ -0,0 +1,106 @@
11071 +/* =========================================================================
11072 + * $File: //dwh/usb_iip/dev/software/dwc_common_port_2/dwc_dh.h $
11073 + * $Revision: #4 $
11074 + * $Date: 2010/09/28 $
11075 + * $Change: 1596182 $
11076 + *
11077 + * Synopsys Portability Library Software and documentation
11078 + * (hereinafter, "Software") is an Unsupported proprietary work of
11079 + * Synopsys, Inc. unless otherwise expressly agreed to in writing
11080 + * between Synopsys and you.
11081 + *
11082 + * The Software IS NOT an item of Licensed Software or Licensed Product
11083 + * under any End User Software License Agreement or Agreement for
11084 + * Licensed Product with Synopsys or any supplement thereto. You are
11085 + * permitted to use and redistribute this Software in source and binary
11086 + * forms, with or without modification, provided that redistributions
11087 + * of source code must retain this notice. You may not view, use,
11088 + * disclose, copy or distribute this file or any information contained
11089 + * herein except pursuant to this license grant from Synopsys. If you
11090 + * do not agree with this notice, including the disclaimer below, then
11091 + * you are not authorized to use the Software.
11092 + *
11093 + * THIS SOFTWARE IS BEING DISTRIBUTED BY SYNOPSYS SOLELY ON AN "AS IS"
11094 + * BASIS AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
11095 + * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
11096 + * FOR A PARTICULAR PURPOSE ARE HEREBY DISCLAIMED. IN NO EVENT SHALL
11097 + * SYNOPSYS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
11098 + * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
11099 + * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
11100 + * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY
11101 + * OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
11102 + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE
11103 + * USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH
11104 + * DAMAGE.
11105 + * ========================================================================= */
11106 +#ifndef _DWC_DH_H_
11107 +#define _DWC_DH_H_
11108 +
11109 +#ifdef __cplusplus
11110 +extern "C" {
11111 +#endif
11112 +
11113 +#include "dwc_os.h"
11114 +
11115 +/** @file
11116 + *
11117 + * This file defines the common functions on device and host for performing
11118 + * numeric association as defined in the WUSB spec. They are only to be
11119 + * used internally by the DWC UWB modules. */
11120 +
11121 +extern int dwc_dh_sha256(uint8_t *message, uint32_t len, uint8_t *out);
11122 +extern int dwc_dh_hmac_sha256(uint8_t *message, uint32_t messagelen,
11123 + uint8_t *key, uint32_t keylen,
11124 + uint8_t *out);
11125 +extern int dwc_dh_modpow(void *mem_ctx, void *num, uint32_t num_len,
11126 + void *exp, uint32_t exp_len,
11127 + void *mod, uint32_t mod_len,
11128 + void *out);
11129 +
11130 +/** Computes PKD or PKH, and SHA-256(PKd || Nd)
11131 + *
11132 + * PK = g^exp mod p.
11133 + *
11134 + * Input:
11135 + * Nd = Number of digits on the device.
11136 + *
11137 + * Output:
11138 + *  exp = A 32-byte buffer to be filled with a randomly generated number,
11139 + *        used as either A or B.
11140 + * pk = A 384-byte buffer to be filled with the PKH or PKD.
11141 + * hash = A 32-byte buffer to be filled with SHA-256(PK || ND).
11142 + */
11143 +extern int dwc_dh_pk(void *mem_ctx, uint8_t nd, uint8_t *exp, uint8_t *pk, uint8_t *hash);
11144 +
11145 +/** Computes the DHKEY, and VD.
11146 + *
11147 + * If called from the host, it will compute DHKEY = PKD^exp % p.
11148 + * If called from the device, it will compute DHKEY = PKH^exp % p.
11149 + *
11150 + * Input:
11151 + * pkd = The PKD value.
11152 + * pkh = The PKH value.
11153 + * exp = The A value (if device) or B value (if host) generated in dwc_wudev_dh_pk.
11154 + * is_host = Set to non zero if a WUSB host is calling this function.
11155 + *
11156 + * Output:
11157 + *
11158 + *  dd = A pointer to a buffer to be set to the display-digits string shown to the
11159 + *       user. The buffer should be at least 5 bytes long, to hold up to 4 digits
11160 + *       plus a null terminator, and can be used directly for display.
11161 + * ck = A 16-byte buffer to be filled with the CK.
11162 + * kdk = A 32-byte buffer to be filled with the KDK.
11163 + */
11164 +extern int dwc_dh_derive_keys(void *mem_ctx, uint8_t nd, uint8_t *pkh, uint8_t *pkd,
11165 + uint8_t *exp, int is_host,
11166 + char *dd, uint8_t *ck, uint8_t *kdk);
11167 +
11168 +#ifdef DH_TEST_VECTORS
11169 +extern void dwc_run_dh_test_vectors(void *mem_ctx);
11170 +#endif
11171 +
11172 +#ifdef __cplusplus
11173 +}
11174 +#endif
11175 +
11176 +#endif /* _DWC_DH_H_ */
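
To make the intended calling sequence concrete, here is a minimal host-side sketch built from the two entry points above; it mirrors dwc_run_dh_test_vectors() in dwc_dh.c. The function and variable names are invented for the example, the peer public key would really arrive over the wire, and the buffer sizes follow the documentation above (384-byte public keys, 32-byte exponent and hash, 16-byte CK, 32-byte KDK, 5-byte digit string).

#include "dwc_os.h"
#include "dwc_dh.h"

/* Illustrative host-side use of the numeric association helpers above. */
static int example_host_numeric_association(void *mem_ctx, uint8_t *pkd_from_device)
{
	uint8_t exp[32];	/* host secret, filled by dwc_dh_pk() */
	uint8_t pkh[384];	/* host public key */
	uint8_t hash[32];	/* SHA-256(PKh || Nd), per dwc_dh_pk() above */
	uint8_t ck[16];		/* connection key */
	uint8_t kdk[32];	/* key derivation key */
	char dd[5];		/* display digits shown to the user */
	int retval;

	/* Generate the host key pair and its hash (Nd = 4 digits). */
	retval = dwc_dh_pk(mem_ctx, 4, exp, pkh, hash);
	if (retval)
		return retval;

	/* Derive DHKEY, the display digits and CK/KDK (is_host = 1). */
	return dwc_dh_derive_keys(mem_ctx, 4, pkh, pkd_from_device, exp, 1,
				  dd, ck, kdk);
}
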
11177 --- /dev/null
11178 +++ b/drivers/usb/host/dwc_common_port/dwc_list.h
11179 @@ -0,0 +1,594 @@
11180 +/* $OpenBSD: queue.h,v 1.26 2004/05/04 16:59:32 grange Exp $ */
11181 +/* $NetBSD: queue.h,v 1.11 1996/05/16 05:17:14 mycroft Exp $ */
11182 +
11183 +/*
11184 + * Copyright (c) 1991, 1993
11185 + * The Regents of the University of California. All rights reserved.
11186 + *
11187 + * Redistribution and use in source and binary forms, with or without
11188 + * modification, are permitted provided that the following conditions
11189 + * are met:
11190 + * 1. Redistributions of source code must retain the above copyright
11191 + * notice, this list of conditions and the following disclaimer.
11192 + * 2. Redistributions in binary form must reproduce the above copyright
11193 + * notice, this list of conditions and the following disclaimer in the
11194 + * documentation and/or other materials provided with the distribution.
11195 + * 3. Neither the name of the University nor the names of its contributors
11196 + * may be used to endorse or promote products derived from this software
11197 + * without specific prior written permission.
11198 + *
11199 + * THIS SOFTWARE IS PROVIDED BY THE REGENTS AND CONTRIBUTORS ``AS IS'' AND
11200 + * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
11201 + * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
11202 + * ARE DISCLAIMED. IN NO EVENT SHALL THE REGENTS OR CONTRIBUTORS BE LIABLE
11203 + * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
11204 + * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
11205 + * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
11206 + * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
11207 + * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
11208 + * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
11209 + * SUCH DAMAGE.
11210 + *
11211 + * @(#)queue.h 8.5 (Berkeley) 8/20/94
11212 + */
11213 +
11214 +#ifndef _DWC_LIST_H_
11215 +#define _DWC_LIST_H_
11216 +
11217 +#ifdef __cplusplus
11218 +extern "C" {
11219 +#endif
11220 +
11221 +/** @file
11222 + *
11223 + * This file defines linked list operations. It is derived from BSD with
11224 + * only the MACRO names being prefixed with DWC_. This is because a few of
11225 + * these names conflict with those on Linux. For documentation on use, see the
11226 + * inline comments in the source code. The original license for this source
11227 + * code applies and is preserved in the dwc_list.h source file.
11228 + */
11229 +
11230 +/*
11231 + * This file defines five types of data structures: singly-linked lists,
11232 + * lists, simple queues, tail queues, and circular queues.
11233 + *
11234 + *
11235 + * A singly-linked list is headed by a single forward pointer. The elements
11236 + * are singly linked for minimum space and pointer manipulation overhead at
11237 + * the expense of O(n) removal for arbitrary elements. New elements can be
11238 + * added to the list after an existing element or at the head of the list.
11239 + * Elements being removed from the head of the list should use the explicit
11240 + * macro for this purpose for optimum efficiency. A singly-linked list may
11241 + * only be traversed in the forward direction. Singly-linked lists are ideal
11242 + * for applications with large datasets and few or no removals or for
11243 + * implementing a LIFO queue.
11244 + *
11245 + * A list is headed by a single forward pointer (or an array of forward
11246 + * pointers for a hash table header). The elements are doubly linked
11247 + * so that an arbitrary element can be removed without a need to
11248 + * traverse the list. New elements can be added to the list before
11249 + * or after an existing element or at the head of the list. A list
11250 + * may only be traversed in the forward direction.
11251 + *
11252 + * A simple queue is headed by a pair of pointers, one the head of the
11253 + * list and the other to the tail of the list. The elements are singly
11254 + * linked to save space, so elements can only be removed from the
11255 + * head of the list. New elements can be added to the list before or after
11256 + * an existing element, at the head of the list, or at the end of the
11257 + * list. A simple queue may only be traversed in the forward direction.
11258 + *
11259 + * A tail queue is headed by a pair of pointers, one to the head of the
11260 + * list and the other to the tail of the list. The elements are doubly
11261 + * linked so that an arbitrary element can be removed without a need to
11262 + * traverse the list. New elements can be added to the list before or
11263 + * after an existing element, at the head of the list, or at the end of
11264 + * the list. A tail queue may be traversed in either direction.
11265 + *
11266 + * A circle queue is headed by a pair of pointers, one to the head of the
11267 + * list and the other to the tail of the list. The elements are doubly
11268 + * linked so that an arbitrary element can be removed without a need to
11269 + * traverse the list. New elements can be added to the list before or after
11270 + * an existing element, at the head of the list, or at the end of the list.
11271 + * A circle queue may be traversed in either direction, but has a more
11272 + * complex end of list detection.
11273 + *
11274 + * For details on the use of these macros, see the queue(3) manual page.
11275 + */
11276 +
11277 +/*
11278 + * Double-linked List.
11279 + */
11280 +
11281 +typedef struct dwc_list_link {
11282 + struct dwc_list_link *next;
11283 + struct dwc_list_link *prev;
11284 +} dwc_list_link_t;
11285 +
11286 +#define DWC_LIST_INIT(link) do { \
11287 + (link)->next = (link); \
11288 + (link)->prev = (link); \
11289 +} while (0)
11290 +
11291 +#define DWC_LIST_FIRST(link) ((link)->next)
11292 +#define DWC_LIST_LAST(link) ((link)->prev)
11293 +#define DWC_LIST_END(link) (link)
11294 +#define DWC_LIST_NEXT(link) ((link)->next)
11295 +#define DWC_LIST_PREV(link) ((link)->prev)
11296 +#define DWC_LIST_EMPTY(link) \
11297 + (DWC_LIST_FIRST(link) == DWC_LIST_END(link))
11298 +#define DWC_LIST_ENTRY(link, type, field) \
11299 + (type *)((uint8_t *)(link) - (size_t)(&((type *)0)->field))
11300 +
11301 +#if 0
11302 +#define DWC_LIST_INSERT_HEAD(list, link) do { \
11303 + (link)->next = (list)->next; \
11304 + (link)->prev = (list); \
11305 + (list)->next->prev = (link); \
11306 + (list)->next = (link); \
11307 +} while (0)
11308 +
11309 +#define DWC_LIST_INSERT_TAIL(list, link) do { \
11310 + (link)->next = (list); \
11311 + (link)->prev = (list)->prev; \
11312 + (list)->prev->next = (link); \
11313 + (list)->prev = (link); \
11314 +} while (0)
11315 +#else
11316 +#define DWC_LIST_INSERT_HEAD(list, link) do { \
11317 + dwc_list_link_t *__next__ = (list)->next; \
11318 + __next__->prev = (link); \
11319 + (link)->next = __next__; \
11320 + (link)->prev = (list); \
11321 + (list)->next = (link); \
11322 +} while (0)
11323 +
11324 +#define DWC_LIST_INSERT_TAIL(list, link) do { \
11325 + dwc_list_link_t *__prev__ = (list)->prev; \
11326 + (list)->prev = (link); \
11327 + (link)->next = (list); \
11328 + (link)->prev = __prev__; \
11329 + __prev__->next = (link); \
11330 +} while (0)
11331 +#endif
11332 +
11333 +#if 0
11334 +static inline void __list_add(struct list_head *new,
11335 + struct list_head *prev,
11336 + struct list_head *next)
11337 +{
11338 + next->prev = new;
11339 + new->next = next;
11340 + new->prev = prev;
11341 + prev->next = new;
11342 +}
11343 +
11344 +static inline void list_add(struct list_head *new, struct list_head *head)
11345 +{
11346 + __list_add(new, head, head->next);
11347 +}
11348 +
11349 +static inline void list_add_tail(struct list_head *new, struct list_head *head)
11350 +{
11351 + __list_add(new, head->prev, head);
11352 +}
11353 +
11354 +static inline void __list_del(struct list_head * prev, struct list_head * next)
11355 +{
11356 + next->prev = prev;
11357 + prev->next = next;
11358 +}
11359 +
11360 +static inline void list_del(struct list_head *entry)
11361 +{
11362 + __list_del(entry->prev, entry->next);
11363 + entry->next = LIST_POISON1;
11364 + entry->prev = LIST_POISON2;
11365 +}
11366 +#endif
11367 +
11368 +#define DWC_LIST_REMOVE(link) do { \
11369 + (link)->next->prev = (link)->prev; \
11370 + (link)->prev->next = (link)->next; \
11371 +} while (0)
11372 +
11373 +#define DWC_LIST_REMOVE_INIT(link) do { \
11374 + DWC_LIST_REMOVE(link); \
11375 + DWC_LIST_INIT(link); \
11376 +} while (0)
11377 +
11378 +#define DWC_LIST_MOVE_HEAD(list, link) do { \
11379 + DWC_LIST_REMOVE(link); \
11380 + DWC_LIST_INSERT_HEAD(list, link); \
11381 +} while (0)
11382 +
11383 +#define DWC_LIST_MOVE_TAIL(list, link) do { \
11384 + DWC_LIST_REMOVE(link); \
11385 + DWC_LIST_INSERT_TAIL(list, link); \
11386 +} while (0)
11387 +
11388 +#define DWC_LIST_FOREACH(var, list) \
11389 + for((var) = DWC_LIST_FIRST(list); \
11390 + (var) != DWC_LIST_END(list); \
11391 + (var) = DWC_LIST_NEXT(var))
11392 +
11393 +#define DWC_LIST_FOREACH_SAFE(var, var2, list) \
11394 + for((var) = DWC_LIST_FIRST(list), (var2) = DWC_LIST_NEXT(var); \
11395 + (var) != DWC_LIST_END(list); \
11396 + (var) = (var2), (var2) = DWC_LIST_NEXT(var2))
11397 +
11398 +#define DWC_LIST_FOREACH_REVERSE(var, list) \
11399 + for((var) = DWC_LIST_LAST(list); \
11400 + (var) != DWC_LIST_END(list); \
11401 + (var) = DWC_LIST_PREV(var))
11402 +
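A quick editorial illustration (not part of the original header) of how the doubly-linked list above is meant to be used: embed a dwc_list_link_t in the element type and recover the containing structure with DWC_LIST_ENTRY. The struct and field names are invented for the example.

struct item {
	int value;
	dwc_list_link_t list_entry;	/* embedded link */
};

/* head is assumed to have been set up with DWC_LIST_INIT(head). */
static int example_sum_items(dwc_list_link_t *head)
{
	dwc_list_link_t *cur;
	int sum = 0;

	DWC_LIST_FOREACH(cur, head) {
		struct item *it = DWC_LIST_ENTRY(cur, struct item, list_entry);
		sum += it->value;
	}
	return sum;
}
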
11403 +/*
11404 + * Singly-linked List definitions.
11405 + */
11406 +#define DWC_SLIST_HEAD(name, type) \
11407 +struct name { \
11408 + struct type *slh_first; /* first element */ \
11409 +}
11410 +
11411 +#define DWC_SLIST_HEAD_INITIALIZER(head) \
11412 + { NULL }
11413 +
11414 +#define DWC_SLIST_ENTRY(type) \
11415 +struct { \
11416 + struct type *sle_next; /* next element */ \
11417 +}
11418 +
11419 +/*
11420 + * Singly-linked List access methods.
11421 + */
11422 +#define DWC_SLIST_FIRST(head) ((head)->slh_first)
11423 +#define DWC_SLIST_END(head) NULL
11424 +#define DWC_SLIST_EMPTY(head)	(DWC_SLIST_FIRST(head) == DWC_SLIST_END(head))
11425 +#define DWC_SLIST_NEXT(elm, field) ((elm)->field.sle_next)
11426 +
11427 +#define DWC_SLIST_FOREACH(var, head, field) \
11428 +	for((var) = DWC_SLIST_FIRST(head); \
11429 +	    (var) != DWC_SLIST_END(head); \
11430 +	    (var) = DWC_SLIST_NEXT(var, field))
11431 +
11432 +#define DWC_SLIST_FOREACH_PREVPTR(var, varp, head, field) \
11433 +	for((varp) = &DWC_SLIST_FIRST((head)); \
11434 +	    ((var) = *(varp)) != DWC_SLIST_END(head); \
11435 +	    (varp) = &DWC_SLIST_NEXT((var), field))
11436 +
11437 +/*
11438 + * Singly-linked List functions.
11439 + */
11440 +#define DWC_SLIST_INIT(head) { \
11441 +	DWC_SLIST_FIRST(head) = DWC_SLIST_END(head); \
11442 +}
11443 +
11444 +#define DWC_SLIST_INSERT_AFTER(slistelm, elm, field) do { \
11445 + (elm)->field.sle_next = (slistelm)->field.sle_next; \
11446 + (slistelm)->field.sle_next = (elm); \
11447 +} while (0)
11448 +
11449 +#define DWC_SLIST_INSERT_HEAD(head, elm, field) do { \
11450 + (elm)->field.sle_next = (head)->slh_first; \
11451 + (head)->slh_first = (elm); \
11452 +} while (0)
11453 +
11454 +#define DWC_SLIST_REMOVE_NEXT(head, elm, field) do { \
11455 + (elm)->field.sle_next = (elm)->field.sle_next->field.sle_next; \
11456 +} while (0)
11457 +
11458 +#define DWC_SLIST_REMOVE_HEAD(head, field) do { \
11459 + (head)->slh_first = (head)->slh_first->field.sle_next; \
11460 +} while (0)
11461 +
11462 +#define DWC_SLIST_REMOVE(head, elm, type, field) do { \
11463 + if ((head)->slh_first == (elm)) { \
11464 +		DWC_SLIST_REMOVE_HEAD((head), field); \
11465 + } \
11466 + else { \
11467 + struct type *curelm = (head)->slh_first; \
11468 + while( curelm->field.sle_next != (elm) ) \
11469 + curelm = curelm->field.sle_next; \
11470 + curelm->field.sle_next = \
11471 + curelm->field.sle_next->field.sle_next; \
11472 + } \
11473 +} while (0)
11474 +
11475 +/*
11476 + * Simple queue definitions.
11477 + */
11478 +#define DWC_SIMPLEQ_HEAD(name, type) \
11479 +struct name { \
11480 + struct type *sqh_first; /* first element */ \
11481 + struct type **sqh_last; /* addr of last next element */ \
11482 +}
11483 +
11484 +#define DWC_SIMPLEQ_HEAD_INITIALIZER(head) \
11485 + { NULL, &(head).sqh_first }
11486 +
11487 +#define DWC_SIMPLEQ_ENTRY(type) \
11488 +struct { \
11489 + struct type *sqe_next; /* next element */ \
11490 +}
11491 +
11492 +/*
11493 + * Simple queue access methods.
11494 + */
11495 +#define DWC_SIMPLEQ_FIRST(head) ((head)->sqh_first)
11496 +#define DWC_SIMPLEQ_END(head) NULL
11497 +#define DWC_SIMPLEQ_EMPTY(head)	(DWC_SIMPLEQ_FIRST(head) == DWC_SIMPLEQ_END(head))
11498 +#define DWC_SIMPLEQ_NEXT(elm, field) ((elm)->field.sqe_next)
11499 +
11500 +#define DWC_SIMPLEQ_FOREACH(var, head, field) \
11501 +	for((var) = DWC_SIMPLEQ_FIRST(head); \
11502 +	    (var) != DWC_SIMPLEQ_END(head); \
11503 +	    (var) = DWC_SIMPLEQ_NEXT(var, field))
11504 +
11505 +/*
11506 + * Simple queue functions.
11507 + */
11508 +#define DWC_SIMPLEQ_INIT(head) do { \
11509 + (head)->sqh_first = NULL; \
11510 + (head)->sqh_last = &(head)->sqh_first; \
11511 +} while (0)
11512 +
11513 +#define DWC_SIMPLEQ_INSERT_HEAD(head, elm, field) do { \
11514 + if (((elm)->field.sqe_next = (head)->sqh_first) == NULL) \
11515 + (head)->sqh_last = &(elm)->field.sqe_next; \
11516 + (head)->sqh_first = (elm); \
11517 +} while (0)
11518 +
11519 +#define DWC_SIMPLEQ_INSERT_TAIL(head, elm, field) do { \
11520 + (elm)->field.sqe_next = NULL; \
11521 + *(head)->sqh_last = (elm); \
11522 + (head)->sqh_last = &(elm)->field.sqe_next; \
11523 +} while (0)
11524 +
11525 +#define DWC_SIMPLEQ_INSERT_AFTER(head, listelm, elm, field) do { \
11526 + if (((elm)->field.sqe_next = (listelm)->field.sqe_next) == NULL)\
11527 + (head)->sqh_last = &(elm)->field.sqe_next; \
11528 + (listelm)->field.sqe_next = (elm); \
11529 +} while (0)
11530 +
11531 +#define DWC_SIMPLEQ_REMOVE_HEAD(head, field) do { \
11532 + if (((head)->sqh_first = (head)->sqh_first->field.sqe_next) == NULL) \
11533 + (head)->sqh_last = &(head)->sqh_first; \
11534 +} while (0)
11535 +
11536 +/*
11537 + * Tail queue definitions.
11538 + */
11539 +#define DWC_TAILQ_HEAD(name, type) \
11540 +struct name { \
11541 + struct type *tqh_first; /* first element */ \
11542 + struct type **tqh_last; /* addr of last next element */ \
11543 +}
11544 +
11545 +#define DWC_TAILQ_HEAD_INITIALIZER(head) \
11546 + { NULL, &(head).tqh_first }
11547 +
11548 +#define DWC_TAILQ_ENTRY(type) \
11549 +struct { \
11550 + struct type *tqe_next; /* next element */ \
11551 + struct type **tqe_prev; /* address of previous next element */ \
11552 +}
11553 +
11554 +/*
11555 + * tail queue access methods
11556 + */
11557 +#define DWC_TAILQ_FIRST(head) ((head)->tqh_first)
11558 +#define DWC_TAILQ_END(head) NULL
11559 +#define DWC_TAILQ_NEXT(elm, field) ((elm)->field.tqe_next)
11560 +#define DWC_TAILQ_LAST(head, headname) \
11561 + (*(((struct headname *)((head)->tqh_last))->tqh_last))
11562 +/* XXX */
11563 +#define DWC_TAILQ_PREV(elm, headname, field) \
11564 + (*(((struct headname *)((elm)->field.tqe_prev))->tqh_last))
11565 +#define DWC_TAILQ_EMPTY(head) \
11566 + (DWC_TAILQ_FIRST(head) == DWC_TAILQ_END(head))
11567 +
11568 +#define DWC_TAILQ_FOREACH(var, head, field) \
11569 + for ((var) = DWC_TAILQ_FIRST(head); \
11570 + (var) != DWC_TAILQ_END(head); \
11571 + (var) = DWC_TAILQ_NEXT(var, field))
11572 +
11573 +#define DWC_TAILQ_FOREACH_REVERSE(var, head, headname, field) \
11574 + for ((var) = DWC_TAILQ_LAST(head, headname); \
11575 + (var) != DWC_TAILQ_END(head); \
11576 + (var) = DWC_TAILQ_PREV(var, headname, field))
11577 +
11578 +/*
11579 + * Tail queue functions.
11580 + */
11581 +#define DWC_TAILQ_INIT(head) do { \
11582 + (head)->tqh_first = NULL; \
11583 + (head)->tqh_last = &(head)->tqh_first; \
11584 +} while (0)
11585 +
11586 +#define DWC_TAILQ_INSERT_HEAD(head, elm, field) do { \
11587 + if (((elm)->field.tqe_next = (head)->tqh_first) != NULL) \
11588 + (head)->tqh_first->field.tqe_prev = \
11589 + &(elm)->field.tqe_next; \
11590 + else \
11591 + (head)->tqh_last = &(elm)->field.tqe_next; \
11592 + (head)->tqh_first = (elm); \
11593 + (elm)->field.tqe_prev = &(head)->tqh_first; \
11594 +} while (0)
11595 +
11596 +#define DWC_TAILQ_INSERT_TAIL(head, elm, field) do { \
11597 + (elm)->field.tqe_next = NULL; \
11598 + (elm)->field.tqe_prev = (head)->tqh_last; \
11599 + *(head)->tqh_last = (elm); \
11600 + (head)->tqh_last = &(elm)->field.tqe_next; \
11601 +} while (0)
11602 +
11603 +#define DWC_TAILQ_INSERT_AFTER(head, listelm, elm, field) do { \
11604 + if (((elm)->field.tqe_next = (listelm)->field.tqe_next) != NULL)\
11605 + (elm)->field.tqe_next->field.tqe_prev = \
11606 + &(elm)->field.tqe_next; \
11607 + else \
11608 + (head)->tqh_last = &(elm)->field.tqe_next; \
11609 + (listelm)->field.tqe_next = (elm); \
11610 + (elm)->field.tqe_prev = &(listelm)->field.tqe_next; \
11611 +} while (0)
11612 +
11613 +#define DWC_TAILQ_INSERT_BEFORE(listelm, elm, field) do { \
11614 + (elm)->field.tqe_prev = (listelm)->field.tqe_prev; \
11615 + (elm)->field.tqe_next = (listelm); \
11616 + *(listelm)->field.tqe_prev = (elm); \
11617 + (listelm)->field.tqe_prev = &(elm)->field.tqe_next; \
11618 +} while (0)
11619 +
11620 +#define DWC_TAILQ_REMOVE(head, elm, field) do { \
11621 + if (((elm)->field.tqe_next) != NULL) \
11622 + (elm)->field.tqe_next->field.tqe_prev = \
11623 + (elm)->field.tqe_prev; \
11624 + else \
11625 + (head)->tqh_last = (elm)->field.tqe_prev; \
11626 + *(elm)->field.tqe_prev = (elm)->field.tqe_next; \
11627 +} while (0)
11628 +
11629 +#define DWC_TAILQ_REPLACE(head, elm, elm2, field) do { \
11630 + if (((elm2)->field.tqe_next = (elm)->field.tqe_next) != NULL) \
11631 + (elm2)->field.tqe_next->field.tqe_prev = \
11632 + &(elm2)->field.tqe_next; \
11633 + else \
11634 + (head)->tqh_last = &(elm2)->field.tqe_next; \
11635 + (elm2)->field.tqe_prev = (elm)->field.tqe_prev; \
11636 + *(elm2)->field.tqe_prev = (elm2); \
11637 +} while (0)
11638 +
11639 +/*
11640 + * Circular queue definitions.
11641 + */
11642 +#define DWC_CIRCLEQ_HEAD(name, type) \
11643 +struct name { \
11644 + struct type *cqh_first; /* first element */ \
11645 + struct type *cqh_last; /* last element */ \
11646 +}
11647 +
11648 +#define DWC_CIRCLEQ_HEAD_INITIALIZER(head) \
11649 + { DWC_CIRCLEQ_END(&head), DWC_CIRCLEQ_END(&head) }
11650 +
11651 +#define DWC_CIRCLEQ_ENTRY(type) \
11652 +struct { \
11653 + struct type *cqe_next; /* next element */ \
11654 + struct type *cqe_prev; /* previous element */ \
11655 +}
11656 +
11657 +/*
11658 + * Circular queue access methods
11659 + */
11660 +#define DWC_CIRCLEQ_FIRST(head) ((head)->cqh_first)
11661 +#define DWC_CIRCLEQ_LAST(head) ((head)->cqh_last)
11662 +#define DWC_CIRCLEQ_END(head) ((void *)(head))
11663 +#define DWC_CIRCLEQ_NEXT(elm, field) ((elm)->field.cqe_next)
11664 +#define DWC_CIRCLEQ_PREV(elm, field) ((elm)->field.cqe_prev)
11665 +#define DWC_CIRCLEQ_EMPTY(head) \
11666 + (DWC_CIRCLEQ_FIRST(head) == DWC_CIRCLEQ_END(head))
11667 +
11668 +#define DWC_CIRCLEQ_EMPTY_ENTRY(elm, field) (((elm)->field.cqe_next == NULL) && ((elm)->field.cqe_prev == NULL))
11669 +
11670 +#define DWC_CIRCLEQ_FOREACH(var, head, field) \
11671 + for((var) = DWC_CIRCLEQ_FIRST(head); \
11672 + (var) != DWC_CIRCLEQ_END(head); \
11673 + (var) = DWC_CIRCLEQ_NEXT(var, field))
11674 +
11675 +#define DWC_CIRCLEQ_FOREACH_SAFE(var, var2, head, field) \
11676 + for((var) = DWC_CIRCLEQ_FIRST(head), var2 = DWC_CIRCLEQ_NEXT(var, field); \
11677 + (var) != DWC_CIRCLEQ_END(head); \
11678 + (var) = var2, var2 = DWC_CIRCLEQ_NEXT(var, field))
11679 +
11680 +#define DWC_CIRCLEQ_FOREACH_REVERSE(var, head, field) \
11681 + for((var) = DWC_CIRCLEQ_LAST(head); \
11682 + (var) != DWC_CIRCLEQ_END(head); \
11683 + (var) = DWC_CIRCLEQ_PREV(var, field))
11684 +
11685 +/*
11686 + * Circular queue functions.
11687 + */
11688 +#define DWC_CIRCLEQ_INIT(head) do { \
11689 + (head)->cqh_first = DWC_CIRCLEQ_END(head); \
11690 + (head)->cqh_last = DWC_CIRCLEQ_END(head); \
11691 +} while (0)
11692 +
11693 +#define DWC_CIRCLEQ_INIT_ENTRY(elm, field) do { \
11694 + (elm)->field.cqe_next = NULL; \
11695 + (elm)->field.cqe_prev = NULL; \
11696 +} while (0)
11697 +
11698 +#define DWC_CIRCLEQ_INSERT_AFTER(head, listelm, elm, field) do { \
11699 + (elm)->field.cqe_next = (listelm)->field.cqe_next; \
11700 + (elm)->field.cqe_prev = (listelm); \
11701 + if ((listelm)->field.cqe_next == DWC_CIRCLEQ_END(head)) \
11702 + (head)->cqh_last = (elm); \
11703 + else \
11704 + (listelm)->field.cqe_next->field.cqe_prev = (elm); \
11705 + (listelm)->field.cqe_next = (elm); \
11706 +} while (0)
11707 +
11708 +#define DWC_CIRCLEQ_INSERT_BEFORE(head, listelm, elm, field) do { \
11709 + (elm)->field.cqe_next = (listelm); \
11710 + (elm)->field.cqe_prev = (listelm)->field.cqe_prev; \
11711 + if ((listelm)->field.cqe_prev == DWC_CIRCLEQ_END(head)) \
11712 + (head)->cqh_first = (elm); \
11713 + else \
11714 + (listelm)->field.cqe_prev->field.cqe_next = (elm); \
11715 + (listelm)->field.cqe_prev = (elm); \
11716 +} while (0)
11717 +
11718 +#define DWC_CIRCLEQ_INSERT_HEAD(head, elm, field) do { \
11719 + (elm)->field.cqe_next = (head)->cqh_first; \
11720 + (elm)->field.cqe_prev = DWC_CIRCLEQ_END(head); \
11721 + if ((head)->cqh_last == DWC_CIRCLEQ_END(head)) \
11722 + (head)->cqh_last = (elm); \
11723 + else \
11724 + (head)->cqh_first->field.cqe_prev = (elm); \
11725 + (head)->cqh_first = (elm); \
11726 +} while (0)
11727 +
11728 +#define DWC_CIRCLEQ_INSERT_TAIL(head, elm, field) do { \
11729 + (elm)->field.cqe_next = DWC_CIRCLEQ_END(head); \
11730 + (elm)->field.cqe_prev = (head)->cqh_last; \
11731 + if ((head)->cqh_first == DWC_CIRCLEQ_END(head)) \
11732 + (head)->cqh_first = (elm); \
11733 + else \
11734 + (head)->cqh_last->field.cqe_next = (elm); \
11735 + (head)->cqh_last = (elm); \
11736 +} while (0)
11737 +
11738 +#define DWC_CIRCLEQ_REMOVE(head, elm, field) do { \
11739 + if ((elm)->field.cqe_next == DWC_CIRCLEQ_END(head)) \
11740 + (head)->cqh_last = (elm)->field.cqe_prev; \
11741 + else \
11742 + (elm)->field.cqe_next->field.cqe_prev = \
11743 + (elm)->field.cqe_prev; \
11744 + if ((elm)->field.cqe_prev == DWC_CIRCLEQ_END(head)) \
11745 + (head)->cqh_first = (elm)->field.cqe_next; \
11746 + else \
11747 + (elm)->field.cqe_prev->field.cqe_next = \
11748 + (elm)->field.cqe_next; \
11749 +} while (0)
11750 +
11751 +#define DWC_CIRCLEQ_REMOVE_INIT(head, elm, field) do { \
11752 + DWC_CIRCLEQ_REMOVE(head, elm, field); \
11753 + DWC_CIRCLEQ_INIT_ENTRY(elm, field); \
11754 +} while (0)
11755 +
11756 +#define DWC_CIRCLEQ_REPLACE(head, elm, elm2, field) do { \
11757 + if (((elm2)->field.cqe_next = (elm)->field.cqe_next) == \
11758 + DWC_CIRCLEQ_END(head)) \
11759 +		(head)->cqh_last = (elm2);				\
11760 + else \
11761 + (elm2)->field.cqe_next->field.cqe_prev = (elm2); \
11762 + if (((elm2)->field.cqe_prev = (elm)->field.cqe_prev) == \
11763 + DWC_CIRCLEQ_END(head)) \
11764 +		(head)->cqh_first = (elm2);				\
11765 + else \
11766 + (elm2)->field.cqe_prev->field.cqe_next = (elm2); \
11767 +} while (0)
11768 +
11769 +#ifdef __cplusplus
11770 +}
11771 +#endif
11772 +
11773 +#endif /* _DWC_LIST_H_ */
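
The memory-debugging code in dwc_mem.c, the next file added by this patch, is the main in-tree user of the circular-queue macros; the sketch below condenses that usage pattern. The struct and function names are illustrative only.

#include "dwc_list.h"

struct node {
	int value;
	DWC_CIRCLEQ_ENTRY(node) entry;		/* embedded queue linkage */
};

DWC_CIRCLEQ_HEAD(node_queue, node);

static void example_circleq_usage(struct node_queue *q, struct node *n)
{
	struct node *cur;

	DWC_CIRCLEQ_INIT(q);			/* empty queue: first/last point back at the head */
	DWC_CIRCLEQ_INSERT_TAIL(q, n, entry);

	DWC_CIRCLEQ_FOREACH(cur, q, entry) {
		/* ... inspect cur->value ... */
	}

	DWC_CIRCLEQ_REMOVE(q, n, entry);
}
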
11774 --- /dev/null
11775 +++ b/drivers/usb/host/dwc_common_port/dwc_mem.c
11776 @@ -0,0 +1,245 @@
11777 +/* Memory Debugging */
11778 +#ifdef DWC_DEBUG_MEMORY
11779 +
11780 +#include "dwc_os.h"
11781 +#include "dwc_list.h"
11782 +
11783 +struct allocation {
11784 + void *addr;
11785 + void *ctx;
11786 + char *func;
11787 + int line;
11788 + uint32_t size;
11789 + int dma;
11790 + DWC_CIRCLEQ_ENTRY(allocation) entry;
11791 +};
11792 +
11793 +DWC_CIRCLEQ_HEAD(allocation_queue, allocation);
11794 +
11795 +struct allocation_manager {
11796 + void *mem_ctx;
11797 + struct allocation_queue allocations;
11798 +
11799 + /* statistics */
11800 + int num;
11801 + int num_freed;
11802 + int num_active;
11803 + uint32_t total;
11804 + uint32_t cur;
11805 + uint32_t max;
11806 +};
11807 +
11808 +static struct allocation_manager *manager = NULL;
11809 +
11810 +static int add_allocation(void *ctx, uint32_t size, char const *func, int line, void *addr,
11811 + int dma)
11812 +{
11813 + struct allocation *a;
11814 +
11815 + DWC_ASSERT(manager != NULL, "manager not allocated");
11816 +
11817 + a = __DWC_ALLOC_ATOMIC(manager->mem_ctx, sizeof(*a));
11818 + if (!a) {
11819 + return -DWC_E_NO_MEMORY;
11820 + }
11821 +
11822 + a->func = __DWC_ALLOC_ATOMIC(manager->mem_ctx, DWC_STRLEN(func) + 1);
11823 + if (!a->func) {
11824 + __DWC_FREE(manager->mem_ctx, a);
11825 + return -DWC_E_NO_MEMORY;
11826 + }
11827 +
11828 + DWC_MEMCPY(a->func, func, DWC_STRLEN(func) + 1);
11829 + a->addr = addr;
11830 + a->ctx = ctx;
11831 + a->line = line;
11832 + a->size = size;
11833 + a->dma = dma;
11834 + DWC_CIRCLEQ_INSERT_TAIL(&manager->allocations, a, entry);
11835 +
11836 + /* Update stats */
11837 + manager->num++;
11838 + manager->num_active++;
11839 + manager->total += size;
11840 + manager->cur += size;
11841 +
11842 + if (manager->max < manager->cur) {
11843 + manager->max = manager->cur;
11844 + }
11845 +
11846 + return 0;
11847 +}
11848 +
11849 +static struct allocation *find_allocation(void *ctx, void *addr)
11850 +{
11851 + struct allocation *a;
11852 +
11853 + DWC_CIRCLEQ_FOREACH(a, &manager->allocations, entry) {
11854 + if (a->ctx == ctx && a->addr == addr) {
11855 + return a;
11856 + }
11857 + }
11858 +
11859 + return NULL;
11860 +}
11861 +
11862 +static void free_allocation(void *ctx, void *addr, char const *func, int line)
11863 +{
11864 + struct allocation *a = find_allocation(ctx, addr);
11865 +
11866 + if (!a) {
11867 + DWC_ASSERT(0,
11868 + "Free of address %p that was never allocated or already freed %s:%d",
11869 + addr, func, line);
11870 + return;
11871 + }
11872 +
11873 + DWC_CIRCLEQ_REMOVE(&manager->allocations, a, entry);
11874 +
11875 + manager->num_active--;
11876 + manager->num_freed++;
11877 + manager->cur -= a->size;
11878 + __DWC_FREE(manager->mem_ctx, a->func);
11879 + __DWC_FREE(manager->mem_ctx, a);
11880 +}
11881 +
11882 +int dwc_memory_debug_start(void *mem_ctx)
11883 +{
11884 + DWC_ASSERT(manager == NULL, "Memory debugging has already started\n");
11885 +
11886 + if (manager) {
11887 + return -DWC_E_BUSY;
11888 + }
11889 +
11890 + manager = __DWC_ALLOC(mem_ctx, sizeof(*manager));
11891 + if (!manager) {
11892 + return -DWC_E_NO_MEMORY;
11893 + }
11894 +
11895 + DWC_CIRCLEQ_INIT(&manager->allocations);
11896 + manager->mem_ctx = mem_ctx;
11897 + manager->num = 0;
11898 + manager->num_freed = 0;
11899 + manager->num_active = 0;
11900 + manager->total = 0;
11901 + manager->cur = 0;
11902 + manager->max = 0;
11903 +
11904 + return 0;
11905 +}
11906 +
11907 +void dwc_memory_debug_stop(void)
11908 +{
11909 +	struct allocation *a, *a2;
11910 +
11911 + dwc_memory_debug_report();
11912 +
11913 +	DWC_CIRCLEQ_FOREACH_SAFE(a, a2, &manager->allocations, entry) {
11914 + DWC_ERROR("Memory leaked from %s:%d\n", a->func, a->line);
11915 + free_allocation(a->ctx, a->addr, NULL, -1);
11916 + }
11917 +
11918 + __DWC_FREE(manager->mem_ctx, manager);
11919 +}
11920 +
11921 +void dwc_memory_debug_report(void)
11922 +{
11923 + struct allocation *a;
11924 +
11925 + DWC_PRINTF("\n\n\n----------------- Memory Debugging Report -----------------\n\n");
11926 + DWC_PRINTF("Num Allocations = %d\n", manager->num);
11927 + DWC_PRINTF("Freed = %d\n", manager->num_freed);
11928 + DWC_PRINTF("Active = %d\n", manager->num_active);
11929 + DWC_PRINTF("Current Memory Used = %d\n", manager->cur);
11930 + DWC_PRINTF("Total Memory Used = %d\n", manager->total);
11931 + DWC_PRINTF("Maximum Memory Used at Once = %d\n", manager->max);
11932 + DWC_PRINTF("Unfreed allocations:\n");
11933 +
11934 + DWC_CIRCLEQ_FOREACH(a, &manager->allocations, entry) {
11935 + DWC_PRINTF(" addr=%p, size=%d from %s:%d, DMA=%d\n",
11936 + a->addr, a->size, a->func, a->line, a->dma);
11937 + }
11938 +}
11939 +
11940 +/* The replacement functions */
11941 +void *dwc_alloc_debug(void *mem_ctx, uint32_t size, char const *func, int line)
11942 +{
11943 + void *addr = __DWC_ALLOC(mem_ctx, size);
11944 +
11945 + if (!addr) {
11946 + return NULL;
11947 + }
11948 +
11949 + if (add_allocation(mem_ctx, size, func, line, addr, 0)) {
11950 + __DWC_FREE(mem_ctx, addr);
11951 + return NULL;
11952 + }
11953 +
11954 + return addr;
11955 +}
11956 +
11957 +void *dwc_alloc_atomic_debug(void *mem_ctx, uint32_t size, char const *func,
11958 + int line)
11959 +{
11960 + void *addr = __DWC_ALLOC_ATOMIC(mem_ctx, size);
11961 +
11962 + if (!addr) {
11963 + return NULL;
11964 + }
11965 +
11966 + if (add_allocation(mem_ctx, size, func, line, addr, 0)) {
11967 + __DWC_FREE(mem_ctx, addr);
11968 + return NULL;
11969 + }
11970 +
11971 + return addr;
11972 +}
11973 +
11974 +void dwc_free_debug(void *mem_ctx, void *addr, char const *func, int line)
11975 +{
11976 + free_allocation(mem_ctx, addr, func, line);
11977 + __DWC_FREE(mem_ctx, addr);
11978 +}
11979 +
11980 +void *dwc_dma_alloc_debug(void *dma_ctx, uint32_t size, dwc_dma_t *dma_addr,
11981 + char const *func, int line)
11982 +{
11983 + void *addr = __DWC_DMA_ALLOC(dma_ctx, size, dma_addr);
11984 +
11985 + if (!addr) {
11986 + return NULL;
11987 + }
11988 +
11989 + if (add_allocation(dma_ctx, size, func, line, addr, 1)) {
11990 + __DWC_DMA_FREE(dma_ctx, size, addr, *dma_addr);
11991 + return NULL;
11992 + }
11993 +
11994 + return addr;
11995 +}
11996 +
11997 +void *dwc_dma_alloc_atomic_debug(void *dma_ctx, uint32_t size,
11998 + dwc_dma_t *dma_addr, char const *func, int line)
11999 +{
12000 + void *addr = __DWC_DMA_ALLOC_ATOMIC(dma_ctx, size, dma_addr);
12001 +
12002 + if (!addr) {
12003 + return NULL;
12004 + }
12005 +
12006 + if (add_allocation(dma_ctx, size, func, line, addr, 1)) {
12007 + __DWC_DMA_FREE(dma_ctx, size, addr, *dma_addr);
12008 + return NULL;
12009 + }
12010 +
12011 + return addr;
12012 +}
12013 +
12014 +void dwc_dma_free_debug(void *dma_ctx, uint32_t size, void *virt_addr,
12015 + dwc_dma_t dma_addr, char const *func, int line)
12016 +{
12017 + free_allocation(dma_ctx, virt_addr, func, line);
12018 + __DWC_DMA_FREE(dma_ctx, size, virt_addr, dma_addr);
12019 +}
12020 +
12021 +#endif /* DWC_DEBUG_MEMORY */
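
A short, illustrative sketch of how the debug wrappers above are meant to be driven. In the full driver the generic allocation entry points are expected to be routed to these *_debug variants by dwc_os.h when DWC_DEBUG_MEMORY is defined (that mapping is outside this excerpt, so treat it as an assumption); the direct calls below just exercise the tracker end to end.

#include "dwc_os.h"

/* Illustrative sketch: exercise the allocation tracker directly. Assumes
 * dwc_os.h declares the *_debug prototypes when DWC_DEBUG_MEMORY is defined. */
static void example_memory_debugging(void *mem_ctx)
{
	void *p;

	if (dwc_memory_debug_start(mem_ctx))
		return;

	p = dwc_alloc_debug(mem_ctx, 128, __func__, __LINE__);
	if (p)
		dwc_free_debug(mem_ctx, p, __func__, __LINE__);

	dwc_memory_debug_report();	/* lists any allocations still live */
	dwc_memory_debug_stop();
}
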
12022 --- /dev/null
12023 +++ b/drivers/usb/host/dwc_common_port/dwc_modpow.c
12024 @@ -0,0 +1,636 @@
12025 +/* Bignum routines adapted from PUTTY sources. PuTTY copyright notice follows.
12026 + *
12027 + * PuTTY is copyright 1997-2007 Simon Tatham.
12028 + *
12029 + * Portions copyright Robert de Bath, Joris van Rantwijk, Delian
12030 + * Delchev, Andreas Schultz, Jeroen Massar, Wez Furlong, Nicolas Barry,
12031 + * Justin Bradford, Ben Harris, Malcolm Smith, Ahmad Khalifa, Markus
12032 + * Kuhn, and CORE SDI S.A.
12033 + *
12034 + * Permission is hereby granted, free of charge, to any person
12035 + * obtaining a copy of this software and associated documentation files
12036 + * (the "Software"), to deal in the Software without restriction,
12037 + * including without limitation the rights to use, copy, modify, merge,
12038 + * publish, distribute, sublicense, and/or sell copies of the Software,
12039 + * and to permit persons to whom the Software is furnished to do so,
12040 + * subject to the following conditions:
12041 + *
12042 + * The above copyright notice and this permission notice shall be
12043 + * included in all copies or substantial portions of the Software.
12044 +
12045 + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
12046 + * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
12047 + * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
12048 + * NONINFRINGEMENT. IN NO EVENT SHALL THE COPYRIGHT HOLDERS BE LIABLE
12049 + * FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF
12050 + * CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
12051 + * WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
12052 + *
12053 + */
12054 +#ifdef DWC_CRYPTOLIB
12055 +
12056 +#ifndef CONFIG_MACH_IPMATE
12057 +
12058 +#include "dwc_modpow.h"
12059 +
12060 +#define BIGNUM_INT_MASK 0xFFFFFFFFUL
12061 +#define BIGNUM_TOP_BIT 0x80000000UL
12062 +#define BIGNUM_INT_BITS 32
12063 +
12064 +
12065 +static void *snmalloc(void *mem_ctx, size_t n, size_t size)
12066 +{
12067 + void *p;
12068 + size *= n;
12069 + if (size == 0) size = 1;
12070 + p = dwc_alloc(mem_ctx, size);
12071 + return p;
12072 +}
12073 +
12074 +#define snewn(ctx, n, type) ((type *)snmalloc((ctx), (n), sizeof(type)))
12075 +#define sfree dwc_free
12076 +
12077 +/*
12078 + * Usage notes:
12079 + * * Do not call the DIVMOD_WORD macro with expressions such as array
12080 + * subscripts, as some implementations object to this (see below).
12081 + * * Note that none of the division methods below will cope if the
12082 + * quotient won't fit into BIGNUM_INT_BITS. Callers should be careful
12083 + * to avoid this case.
12084 + * If this condition occurs, in the case of the x86 DIV instruction,
12085 + * an overflow exception will occur, which (according to a correspondent)
12086 + * will manifest on Windows as something like
12087 + * 0xC0000095: Integer overflow
12088 + * The C variant won't give the right answer, either.
12089 + */
12090 +
12091 +#define MUL_WORD(w1, w2) ((BignumDblInt)w1 * w2)
12092 +
12093 +#if defined __GNUC__ && defined __i386__
12094 +#define DIVMOD_WORD(q, r, hi, lo, w) \
12095 + __asm__("div %2" : \
12096 + "=d" (r), "=a" (q) : \
12097 + "r" (w), "d" (hi), "a" (lo))
12098 +#else
12099 +#define DIVMOD_WORD(q, r, hi, lo, w) do { \
12100 + BignumDblInt n = (((BignumDblInt)hi) << BIGNUM_INT_BITS) | lo; \
12101 + q = n / w; \
12102 + r = n % w; \
12103 +} while (0)
12104 +#endif
12105 +
12106 +/* i.e. DIVMOD_WORD(q, r, hi, lo, w) computes q = n / w and r = n % w,
12107 + * where n = ((BignumDblInt)hi << BIGNUM_INT_BITS) | lo. */
12108 +
12109 +#define BIGNUM_INT_BYTES (BIGNUM_INT_BITS / 8)
12110 +
12111 +#define BIGNUM_INTERNAL
12112 +
12113 +static Bignum newbn(void *mem_ctx, int length)
12114 +{
12115 + Bignum b = snewn(mem_ctx, length + 1, BignumInt);
12116 + //if (!b)
12117 + //abort(); /* FIXME */
12118 + DWC_MEMSET(b, 0, (length + 1) * sizeof(*b));
12119 + b[0] = length;
12120 + return b;
12121 +}
12122 +
12123 +void freebn(void *mem_ctx, Bignum b)
12124 +{
12125 + /*
12126 + * Burn the evidence, just in case.
12127 + */
12128 + DWC_MEMSET(b, 0, sizeof(b[0]) * (b[0] + 1));
12129 + sfree(mem_ctx, b);
12130 +}
12131 +
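
/* Editorial note: a Bignum is an array of BignumInt words in which b[0] holds
 * the word count and b[1]..b[b[0]] hold the value, least-significant word
 * first; newbn() above allocates and zeroes that layout, and dwc_dh_modpow()
 * in dwc_dh.c builds its arguments the same way before calling dwc_modpow(). */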
12132 +/*
12133 + * Compute c = a * b.
12134 + * Input is in the first len words of a and b.
12135 + * Result is returned in the first 2*len words of c.
12136 + */
12137 +static void internal_mul(BignumInt *a, BignumInt *b,
12138 + BignumInt *c, int len)
12139 +{
12140 + int i, j;
12141 + BignumDblInt t;
12142 +
12143 + for (j = 0; j < 2 * len; j++)
12144 + c[j] = 0;
12145 +
12146 + for (i = len - 1; i >= 0; i--) {
12147 + t = 0;
12148 + for (j = len - 1; j >= 0; j--) {
12149 + t += MUL_WORD(a[i], (BignumDblInt) b[j]);
12150 + t += (BignumDblInt) c[i + j + 1];
12151 + c[i + j + 1] = (BignumInt) t;
12152 + t = t >> BIGNUM_INT_BITS;
12153 + }
12154 + c[i] = (BignumInt) t;
12155 + }
12156 +}
12157 +
12158 +static void internal_add_shifted(BignumInt *number,
12159 + unsigned n, int shift)
12160 +{
12161 + int word = 1 + (shift / BIGNUM_INT_BITS);
12162 + int bshift = shift % BIGNUM_INT_BITS;
12163 + BignumDblInt addend;
12164 +
12165 + addend = (BignumDblInt)n << bshift;
12166 +
12167 + while (addend) {
12168 + addend += number[word];
12169 + number[word] = (BignumInt) addend & BIGNUM_INT_MASK;
12170 + addend >>= BIGNUM_INT_BITS;
12171 + word++;
12172 + }
12173 +}
12174 +
12175 +/*
12176 + * Compute a = a % m.
12177 + * Input in first alen words of a and first mlen words of m.
12178 + * Output in first alen words of a
12179 + * (of which first alen-mlen words will be zero).
12180 + * The MSW of m MUST have its high bit set.
12181 + * Quotient is accumulated in the `quotient' array, which is a Bignum
12182 + * rather than the internal bigendian format. Quotient parts are shifted
12183 + * left by `qshift' before adding into quot.
12184 + */
12185 +static void internal_mod(BignumInt *a, int alen,
12186 + BignumInt *m, int mlen,
12187 + BignumInt *quot, int qshift)
12188 +{
12189 + BignumInt m0, m1;
12190 + unsigned int h;
12191 + int i, k;
12192 +
12193 + m0 = m[0];
12194 + if (mlen > 1)
12195 + m1 = m[1];
12196 + else
12197 + m1 = 0;
12198 +
12199 + for (i = 0; i <= alen - mlen; i++) {
12200 + BignumDblInt t;
12201 + unsigned int q, r, c, ai1;
12202 +
12203 + if (i == 0) {
12204 + h = 0;
12205 + } else {
12206 + h = a[i - 1];
12207 + a[i - 1] = 0;
12208 + }
12209 +
12210 + if (i == alen - 1)
12211 + ai1 = 0;
12212 + else
12213 + ai1 = a[i + 1];
12214 +
12215 + /* Find q = h:a[i] / m0 */
12216 + if (h >= m0) {
12217 + /*
12218 + * Special case.
12219 + *
12220 + * To illustrate it, suppose a BignumInt is 8 bits, and
12221 + * we are dividing (say) A1:23:45:67 by A1:B2:C3. Then
12222 + * our initial division will be 0xA123 / 0xA1, which
12223 + * will give a quotient of 0x100 and a divide overflow.
12224 + * However, the invariants in this division algorithm
12225 + * are not violated, since the full number A1:23:... is
12226 + * _less_ than the quotient prefix A1:B2:... and so the
12227 + * following correction loop would have sorted it out.
12228 + *
12229 + * In this situation we set q to be the largest
12230 + * quotient we _can_ stomach (0xFF, of course).
12231 + */
12232 + q = BIGNUM_INT_MASK;
12233 + } else {
12234 + /* Macro doesn't want an array subscript expression passed
12235 + * into it (see definition), so use a temporary. */
12236 + BignumInt tmplo = a[i];
12237 + DIVMOD_WORD(q, r, h, tmplo, m0);
12238 +
12239 + /* Refine our estimate of q by looking at
12240 + h:a[i]:a[i+1] / m0:m1 */
12241 + t = MUL_WORD(m1, q);
12242 + if (t > ((BignumDblInt) r << BIGNUM_INT_BITS) + ai1) {
12243 + q--;
12244 + t -= m1;
12245 + r = (r + m0) & BIGNUM_INT_MASK; /* overflow? */
12246 + if (r >= (BignumDblInt) m0 &&
12247 + t > ((BignumDblInt) r << BIGNUM_INT_BITS) + ai1) q--;
12248 + }
12249 + }
12250 +
12251 + /* Subtract q * m from a[i...] */
12252 + c = 0;
12253 + for (k = mlen - 1; k >= 0; k--) {
12254 + t = MUL_WORD(q, m[k]);
12255 + t += c;
12256 + c = (unsigned)(t >> BIGNUM_INT_BITS);
12257 + if ((BignumInt) t > a[i + k])
12258 + c++;
12259 + a[i + k] -= (BignumInt) t;
12260 + }
12261 +
12262 + /* Add back m in case of borrow */
12263 + if (c != h) {
12264 + t = 0;
12265 + for (k = mlen - 1; k >= 0; k--) {
12266 + t += m[k];
12267 + t += a[i + k];
12268 + a[i + k] = (BignumInt) t;
12269 + t = t >> BIGNUM_INT_BITS;
12270 + }
12271 + q--;
12272 + }
12273 + if (quot)
12274 + internal_add_shifted(quot, q, qshift + BIGNUM_INT_BITS * (alen - mlen - i));
12275 + }
12276 +}
12277 +
12278 +/*
12279 + * Compute p % mod.
12280 + * The most significant word of mod MUST be non-zero.
12281 + * We assume that the result array is the same size as the mod array.
12282 + * We optionally write out a quotient if `quotient' is non-NULL.
12283 + * We can avoid writing out the result if `result' is NULL.
12284 + */
12285 +void bigdivmod(void *mem_ctx, Bignum p, Bignum mod, Bignum result, Bignum quotient)
12286 +{
12287 + BignumInt *n, *m;
12288 + int mshift;
12289 + int plen, mlen, i, j;
12290 +
12291 + /* Allocate m of size mlen, copy mod to m */
12292 + /* We use big endian internally */
12293 + mlen = mod[0];
12294 + m = snewn(mem_ctx, mlen, BignumInt);
12295 + //if (!m)
12296 + //abort(); /* FIXME */
12297 + for (j = 0; j < mlen; j++)
12298 + m[j] = mod[mod[0] - j];
12299 +
12300 + /* Shift m left to make msb bit set */
12301 + for (mshift = 0; mshift < BIGNUM_INT_BITS-1; mshift++)
12302 + if ((m[0] << mshift) & BIGNUM_TOP_BIT)
12303 + break;
12304 + if (mshift) {
12305 + for (i = 0; i < mlen - 1; i++)
12306 + m[i] = (m[i] << mshift) | (m[i + 1] >> (BIGNUM_INT_BITS - mshift));
12307 + m[mlen - 1] = m[mlen - 1] << mshift;
12308 + }
12309 +
12310 + plen = p[0];
12311 + /* Ensure plen > mlen */
12312 + if (plen <= mlen)
12313 + plen = mlen + 1;
12314 +
12315 + /* Allocate n of size plen, copy p to n */
12316 + n = snewn(mem_ctx, plen, BignumInt);
12317 + //if (!n)
12318 + //abort(); /* FIXME */
12319 + for (j = 0; j < plen; j++)
12320 + n[j] = 0;
12321 + for (j = 1; j <= (int)p[0]; j++)
12322 + n[plen - j] = p[j];
12323 +
12324 + /* Main computation */
12325 + internal_mod(n, plen, m, mlen, quotient, mshift);
12326 +
12327 + /* Fixup result in case the modulus was shifted */
12328 + if (mshift) {
12329 + for (i = plen - mlen - 1; i < plen - 1; i++)
12330 + n[i] = (n[i] << mshift) | (n[i + 1] >> (BIGNUM_INT_BITS - mshift));
12331 + n[plen - 1] = n[plen - 1] << mshift;
12332 + internal_mod(n, plen, m, mlen, quotient, 0);
12333 + for (i = plen - 1; i >= plen - mlen; i--)
12334 + n[i] = (n[i] >> mshift) | (n[i - 1] << (BIGNUM_INT_BITS - mshift));
12335 + }
12336 +
12337 + /* Copy result to buffer */
12338 + if (result) {
12339 + for (i = 1; i <= (int)result[0]; i++) {
12340 + int j = plen - i;
12341 + result[i] = j >= 0 ? n[j] : 0;
12342 + }
12343 + }
12344 +
12345 + /* Free temporary arrays */
12346 + for (i = 0; i < mlen; i++)
12347 + m[i] = 0;
12348 + sfree(mem_ctx, m);
12349 + for (i = 0; i < plen; i++)
12350 + n[i] = 0;
12351 + sfree(mem_ctx, n);
12352 +}
12353 +
12354 +/*
12355 + * Simple remainder.
12356 + */
12357 +Bignum bigmod(void *mem_ctx, Bignum a, Bignum b)
12358 +{
12359 + Bignum r = newbn(mem_ctx, b[0]);
12360 + bigdivmod(mem_ctx, a, b, r, NULL);
12361 + return r;
12362 +}
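A quick orientation for readers of this patch (an illustrative sketch, not part of the driver source): Bignums here are plain word arrays where element 0 holds the word count and elements 1..n hold the value, least-significant word first, the same layout as the dh_* vectors in the UNITTEST block further down. A minimal, hypothetical call to the remainder helper, passing a NULL mem_ctx as the unit test does, might look like this:

/* Hypothetical example only: reduce (1 + 16 * 2^32) modulo 7. */
BignumInt a[] = { 2, 0x00000001, 0x00000010 };   /* length 2, least-significant word first */
BignumInt m[] = { 1, 0x00000007 };               /* modulus 7 */
Bignum r = bigmod(NULL, a, m);                   /* r[1] == 2, the remainder */
freebn(NULL, r);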
12363 +
12364 +/*
12365 + * Compute (base ^ exp) % mod.
12366 + */
12367 +Bignum dwc_modpow(void *mem_ctx, Bignum base_in, Bignum exp, Bignum mod)
12368 +{
12369 + BignumInt *a, *b, *n, *m;
12370 + int mshift;
12371 + int mlen, i, j;
12372 + Bignum base, result;
12373 +
12374 + /*
12375 + * The most significant word of mod needs to be non-zero. It
12376 + * should already be, but let's make sure.
12377 + */
12378 + //assert(mod[mod[0]] != 0);
12379 +
12380 + /*
12381 + * Make sure the base is smaller than the modulus, by reducing
12382 + * it modulo the modulus if not.
12383 + */
12384 + base = bigmod(mem_ctx, base_in, mod);
12385 +
12386 + /* Allocate m of size mlen, copy mod to m */
12387 + /* We use big endian internally */
12388 + mlen = mod[0];
12389 + m = snewn(mem_ctx, mlen, BignumInt);
12390 + //if (!m)
12391 + //abort(); /* FIXME */
12392 + for (j = 0; j < mlen; j++)
12393 + m[j] = mod[mod[0] - j];
12394 +
12395 + /* Shift m left to make msb bit set */
12396 + for (mshift = 0; mshift < BIGNUM_INT_BITS - 1; mshift++)
12397 + if ((m[0] << mshift) & BIGNUM_TOP_BIT)
12398 + break;
12399 + if (mshift) {
12400 + for (i = 0; i < mlen - 1; i++)
12401 + m[i] =
12402 + (m[i] << mshift) | (m[i + 1] >>
12403 + (BIGNUM_INT_BITS - mshift));
12404 + m[mlen - 1] = m[mlen - 1] << mshift;
12405 + }
12406 +
12407 + /* Allocate n of size mlen, copy base to n */
12408 + n = snewn(mem_ctx, mlen, BignumInt);
12409 + //if (!n)
12410 + //abort(); /* FIXME */
12411 + i = mlen - base[0];
12412 + for (j = 0; j < i; j++)
12413 + n[j] = 0;
12414 + for (j = 0; j < base[0]; j++)
12415 + n[i + j] = base[base[0] - j];
12416 +
12417 + /* Allocate a and b of size 2*mlen. Set a = 1 */
12418 + a = snewn(mem_ctx, 2 * mlen, BignumInt);
12419 + //if (!a)
12420 + //abort(); /* FIXME */
12421 + b = snewn(mem_ctx, 2 * mlen, BignumInt);
12422 + //if (!b)
12423 + //abort(); /* FIXME */
12424 + for (i = 0; i < 2 * mlen; i++)
12425 + a[i] = 0;
12426 + a[2 * mlen - 1] = 1;
12427 +
12428 + /* Skip leading zero bits of exp. */
12429 + i = 0;
12430 + j = BIGNUM_INT_BITS - 1;
12431 + while (i < exp[0] && (exp[exp[0] - i] & (1 << j)) == 0) {
12432 + j--;
12433 + if (j < 0) {
12434 + i++;
12435 + j = BIGNUM_INT_BITS - 1;
12436 + }
12437 + }
12438 +
12439 + /* Main computation */
12440 + while (i < exp[0]) {
12441 + while (j >= 0) {
12442 + internal_mul(a + mlen, a + mlen, b, mlen);
12443 + internal_mod(b, mlen * 2, m, mlen, NULL, 0);
12444 + if ((exp[exp[0] - i] & (1 << j)) != 0) {
12445 + internal_mul(b + mlen, n, a, mlen);
12446 + internal_mod(a, mlen * 2, m, mlen, NULL, 0);
12447 + } else {
12448 + BignumInt *t;
12449 + t = a;
12450 + a = b;
12451 + b = t;
12452 + }
12453 + j--;
12454 + }
12455 + i++;
12456 + j = BIGNUM_INT_BITS - 1;
12457 + }
12458 +
12459 + /* Fixup result in case the modulus was shifted */
12460 + if (mshift) {
12461 + for (i = mlen - 1; i < 2 * mlen - 1; i++)
12462 + a[i] =
12463 + (a[i] << mshift) | (a[i + 1] >>
12464 + (BIGNUM_INT_BITS - mshift));
12465 + a[2 * mlen - 1] = a[2 * mlen - 1] << mshift;
12466 + internal_mod(a, mlen * 2, m, mlen, NULL, 0);
12467 + for (i = 2 * mlen - 1; i >= mlen; i--)
12468 + a[i] =
12469 + (a[i] >> mshift) | (a[i - 1] <<
12470 + (BIGNUM_INT_BITS - mshift));
12471 + }
12472 +
12473 + /* Copy result to buffer */
12474 + result = newbn(mem_ctx, mod[0]);
12475 + for (i = 0; i < mlen; i++)
12476 + result[result[0] - i] = a[i + mlen];
12477 + while (result[0] > 1 && result[result[0]] == 0)
12478 + result[0]--;
12479 +
12480 + /* Free temporary arrays */
12481 + for (i = 0; i < 2 * mlen; i++)
12482 + a[i] = 0;
12483 + sfree(mem_ctx, a);
12484 + for (i = 0; i < 2 * mlen; i++)
12485 + b[i] = 0;
12486 + sfree(mem_ctx, b);
12487 + for (i = 0; i < mlen; i++)
12488 + m[i] = 0;
12489 + sfree(mem_ctx, m);
12490 + for (i = 0; i < mlen; i++)
12491 + n[i] = 0;
12492 + sfree(mem_ctx, n);
12493 +
12494 + freebn(mem_ctx, base);
12495 +
12496 + return result;
12497 +}
12498 +
12499 +
12500 +#ifdef UNITTEST
12501 +
12502 +static __u32 dh_p[] = {
12503 + 96,
12504 + 0xFFFFFFFF,
12505 + 0xFFFFFFFF,
12506 + 0xA93AD2CA,
12507 + 0x4B82D120,
12508 + 0xE0FD108E,
12509 + 0x43DB5BFC,
12510 + 0x74E5AB31,
12511 + 0x08E24FA0,
12512 + 0xBAD946E2,
12513 + 0x770988C0,
12514 + 0x7A615D6C,
12515 + 0xBBE11757,
12516 + 0x177B200C,
12517 + 0x521F2B18,
12518 + 0x3EC86A64,
12519 + 0xD8760273,
12520 + 0xD98A0864,
12521 + 0xF12FFA06,
12522 + 0x1AD2EE6B,
12523 + 0xCEE3D226,
12524 + 0x4A25619D,
12525 + 0x1E8C94E0,
12526 + 0xDB0933D7,
12527 + 0xABF5AE8C,
12528 + 0xA6E1E4C7,
12529 + 0xB3970F85,
12530 + 0x5D060C7D,
12531 + 0x8AEA7157,
12532 + 0x58DBEF0A,
12533 + 0xECFB8504,
12534 + 0xDF1CBA64,
12535 + 0xA85521AB,
12536 + 0x04507A33,
12537 + 0xAD33170D,
12538 + 0x8AAAC42D,
12539 + 0x15728E5A,
12540 + 0x98FA0510,
12541 + 0x15D22618,
12542 + 0xEA956AE5,
12543 + 0x3995497C,
12544 + 0x95581718,
12545 + 0xDE2BCBF6,
12546 + 0x6F4C52C9,
12547 + 0xB5C55DF0,
12548 + 0xEC07A28F,
12549 + 0x9B2783A2,
12550 + 0x180E8603,
12551 + 0xE39E772C,
12552 + 0x2E36CE3B,
12553 + 0x32905E46,
12554 + 0xCA18217C,
12555 + 0xF1746C08,
12556 + 0x4ABC9804,
12557 + 0x670C354E,
12558 + 0x7096966D,
12559 + 0x9ED52907,
12560 + 0x208552BB,
12561 + 0x1C62F356,
12562 + 0xDCA3AD96,
12563 + 0x83655D23,
12564 + 0xFD24CF5F,
12565 + 0x69163FA8,
12566 + 0x1C55D39A,
12567 + 0x98DA4836,
12568 + 0xA163BF05,
12569 + 0xC2007CB8,
12570 + 0xECE45B3D,
12571 + 0x49286651,
12572 + 0x7C4B1FE6,
12573 + 0xAE9F2411,
12574 + 0x5A899FA5,
12575 + 0xEE386BFB,
12576 + 0xF406B7ED,
12577 + 0x0BFF5CB6,
12578 + 0xA637ED6B,
12579 + 0xF44C42E9,
12580 + 0x625E7EC6,
12581 + 0xE485B576,
12582 + 0x6D51C245,
12583 + 0x4FE1356D,
12584 + 0xF25F1437,
12585 + 0x302B0A6D,
12586 + 0xCD3A431B,
12587 + 0xEF9519B3,
12588 + 0x8E3404DD,
12589 + 0x514A0879,
12590 + 0x3B139B22,
12591 + 0x020BBEA6,
12592 + 0x8A67CC74,
12593 + 0x29024E08,
12594 + 0x80DC1CD1,
12595 + 0xC4C6628B,
12596 + 0x2168C234,
12597 + 0xC90FDAA2,
12598 + 0xFFFFFFFF,
12599 + 0xFFFFFFFF,
12600 +};
12601 +
12602 +static __u32 dh_a[] = {
12603 + 8,
12604 + 0xdf367516,
12605 + 0x86459caa,
12606 + 0xe2d459a4,
12607 + 0xd910dae0,
12608 + 0x8a8b5e37,
12609 + 0x67ab31c6,
12610 + 0xf0b55ea9,
12611 + 0x440051d6,
12612 +};
12613 +
12614 +static __u32 dh_b[] = {
12615 + 8,
12616 + 0xded92656,
12617 + 0xe07a048a,
12618 + 0x6fa452cd,
12619 + 0x2df89d30,
12620 + 0xc75f1b0f,
12621 + 0x8ce3578f,
12622 + 0x7980a324,
12623 + 0x5daec786,
12624 +};
12625 +
12626 +static __u32 dh_g[] = {
12627 + 1,
12628 + 2,
12629 +};
12630 +
12631 +int main(void)
12632 +{
12633 + int i;
12634 + __u32 *k;
12635 + k = dwc_modpow(NULL, dh_g, dh_a, dh_p);
12636 +
12637 + printf("\n\n");
12638 + for (i=0; i<k[0]; i++) {
12639 + __u32 word32 = k[k[0] - i];
12640 + __u16 l = word32 & 0xffff;
12641 + __u16 m = (word32 & 0xffff0000) >> 16;
12642 + printf("%04x %04x ", m, l);
12643 + if (!((i + 1)%13)) printf("\n");
12644 + }
12645 + printf("\n\n");
12646 +
12647 + if ((k[0] == 0x60) && (k[1] == 0x28e490e5) && (k[0x60] == 0x5a0d3d4e)) {
12648 + printf("PASS\n\n");
12649 + }
12650 + else {
12651 + printf("FAIL\n\n");
12652 + }
12653 +
12654 +}
12655 +
12656 +#endif /* UNITTEST */
12657 +
12658 +#endif /* CONFIG_MACH_IPMATE */
12659 +
12660 +#endif /*DWC_CRYPTOLIB */
12661 --- /dev/null
12662 +++ b/drivers/usb/host/dwc_common_port/dwc_modpow.h
12663 @@ -0,0 +1,34 @@
12664 +/*
12665 + * dwc_modpow.h
12666 + * See dwc_modpow.c for license and changes
12667 + */
12668 +#ifndef _DWC_MODPOW_H
12669 +#define _DWC_MODPOW_H
12670 +
12671 +#ifdef __cplusplus
12672 +extern "C" {
12673 +#endif
12674 +
12675 +#include "dwc_os.h"
12676 +
12677 +/** @file
12678 + *
12679 + * This file defines the modular exponentiation function which is only used
12680 + * internally by the DWC UWB modules for calculation of PKs during numeric
12681 + * association. The routine is taken from PuTTY, an open source terminal
12682 + * emulator. The PuTTY license is preserved in the dwc_modpow.c file.
12683 + *
12684 + */
12685 +
12686 +typedef uint32_t BignumInt;
12687 +typedef uint64_t BignumDblInt;
12688 +typedef BignumInt *Bignum;
12689 +
12690 +/* Compute modular exponentiation */
12691 +extern Bignum dwc_modpow(void *mem_ctx, Bignum base_in, Bignum exp, Bignum mod);
12692 +
12693 +#ifdef __cplusplus
12694 +}
12695 +#endif
12696 +
12697 +#endif /* _DWC_MODPOW_H */
12698 --- /dev/null
12699 +++ b/drivers/usb/host/dwc_common_port/dwc_notifier.c
12700 @@ -0,0 +1,319 @@
12701 +#ifdef DWC_NOTIFYLIB
12702 +
12703 +#include "dwc_notifier.h"
12704 +#include "dwc_list.h"
12705 +
12706 +typedef struct dwc_observer {
12707 + void *observer;
12708 + dwc_notifier_callback_t callback;
12709 + void *data;
12710 + char *notification;
12711 + DWC_CIRCLEQ_ENTRY(dwc_observer) list_entry;
12712 +} observer_t;
12713 +
12714 +DWC_CIRCLEQ_HEAD(observer_queue, dwc_observer);
12715 +
12716 +typedef struct dwc_notifier {
12717 + void *mem_ctx;
12718 + void *object;
12719 + struct observer_queue observers;
12720 + DWC_CIRCLEQ_ENTRY(dwc_notifier) list_entry;
12721 +} notifier_t;
12722 +
12723 +DWC_CIRCLEQ_HEAD(notifier_queue, dwc_notifier);
12724 +
12725 +typedef struct manager {
12726 + void *mem_ctx;
12727 + void *wkq_ctx;
12728 + dwc_workq_t *wq;
12729 +// dwc_mutex_t *mutex;
12730 + struct notifier_queue notifiers;
12731 +} manager_t;
12732 +
12733 +static manager_t *manager = NULL;
12734 +
12735 +static int create_manager(void *mem_ctx, void *wkq_ctx)
12736 +{
12737 + manager = dwc_alloc(mem_ctx, sizeof(manager_t));
12738 + if (!manager) {
12739 + return -DWC_E_NO_MEMORY;
12740 + }
12741 +
12742 + DWC_CIRCLEQ_INIT(&manager->notifiers);
12743 +
12744 + manager->wq = dwc_workq_alloc(wkq_ctx, "DWC Notification WorkQ");
12745 + if (!manager->wq) {
12746 + return -DWC_E_NO_MEMORY;
12747 + }
12748 +
12749 + return 0;
12750 +}
12751 +
12752 +static void free_manager(void)
12753 +{
12754 + dwc_workq_free(manager->wq);
12755 +
12756 + /* All notifiers must have unregistered themselves before this module
12757 + * can be removed. Hitting this assertion indicates a programmer
12758 + * error. */
12759 + DWC_ASSERT(DWC_CIRCLEQ_EMPTY(&manager->notifiers),
12760 + "Notification manager being freed before all notifiers have been removed");
12761 + dwc_free(manager->mem_ctx, manager);
12762 +}
12763 +
12764 +#ifdef DEBUG
12765 +static void dump_manager(void)
12766 +{
12767 + notifier_t *n;
12768 + observer_t *o;
12769 +
12770 + DWC_ASSERT(manager, "Notification manager not found");
12771 +
12772 + DWC_DEBUG("List of all notifiers and observers:\n");
12773 + DWC_CIRCLEQ_FOREACH(n, &manager->notifiers, list_entry) {
12774 + DWC_DEBUG("Notifier %p has observers:\n", n->object);
12775 + DWC_CIRCLEQ_FOREACH(o, &n->observers, list_entry) {
12776 + DWC_DEBUG(" %p watching %s\n", o->observer, o->notification);
12777 + }
12778 + }
12779 +}
12780 +#else
12781 +#define dump_manager(...)
12782 +#endif
12783 +
12784 +static observer_t *alloc_observer(void *mem_ctx, void *observer, char *notification,
12785 + dwc_notifier_callback_t callback, void *data)
12786 +{
12787 + observer_t *new_observer = dwc_alloc(mem_ctx, sizeof(observer_t));
12788 +
12789 + if (!new_observer) {
12790 + return NULL;
12791 + }
12792 +
12793 + DWC_CIRCLEQ_INIT_ENTRY(new_observer, list_entry);
12794 + new_observer->observer = observer;
12795 + new_observer->notification = notification;
12796 + new_observer->callback = callback;
12797 + new_observer->data = data;
12798 + return new_observer;
12799 +}
12800 +
12801 +static void free_observer(void *mem_ctx, observer_t *observer)
12802 +{
12803 + dwc_free(mem_ctx, observer);
12804 +}
12805 +
12806 +static notifier_t *alloc_notifier(void *mem_ctx, void *object)
12807 +{
12808 + notifier_t *notifier;
12809 +
12810 + if (!object) {
12811 + return NULL;
12812 + }
12813 +
12814 + notifier = dwc_alloc(mem_ctx, sizeof(notifier_t));
12815 + if (!notifier) {
12816 + return NULL;
12817 + }
12818 +
12819 + DWC_CIRCLEQ_INIT(&notifier->observers);
12820 + DWC_CIRCLEQ_INIT_ENTRY(notifier, list_entry);
12821 +
12822 + notifier->mem_ctx = mem_ctx;
12823 + notifier->object = object;
12824 + return notifier;
12825 +}
12826 +
12827 +static void free_notifier(notifier_t *notifier)
12828 +{
12829 + observer_t *observer;
12830 +
12831 + DWC_CIRCLEQ_FOREACH(observer, &notifier->observers, list_entry) {
12832 + free_observer(notifier->mem_ctx, observer);
12833 + }
12834 +
12835 + dwc_free(notifier->mem_ctx, notifier);
12836 +}
12837 +
12838 +static notifier_t *find_notifier(void *object)
12839 +{
12840 + notifier_t *notifier;
12841 +
12842 + DWC_ASSERT(manager, "Notification manager not found");
12843 +
12844 + if (!object) {
12845 + return NULL;
12846 + }
12847 +
12848 + DWC_CIRCLEQ_FOREACH(notifier, &manager->notifiers, list_entry) {
12849 + if (notifier->object == object) {
12850 + return notifier;
12851 + }
12852 + }
12853 +
12854 + return NULL;
12855 +}
12856 +
12857 +int dwc_alloc_notification_manager(void *mem_ctx, void *wkq_ctx)
12858 +{
12859 + return create_manager(mem_ctx, wkq_ctx);
12860 +}
12861 +
12862 +void dwc_free_notification_manager(void)
12863 +{
12864 + free_manager();
12865 +}
12866 +
12867 +dwc_notifier_t *dwc_register_notifier(void *mem_ctx, void *object)
12868 +{
12869 + notifier_t *notifier;
12870 +
12871 + DWC_ASSERT(manager, "Notification manager not found");
12872 +
12873 + notifier = find_notifier(object);
12874 + if (notifier) {
12875 + DWC_ERROR("Notifier %p is already registered\n", object);
12876 + return NULL;
12877 + }
12878 +
12879 + notifier = alloc_notifier(mem_ctx, object);
12880 + if (!notifier) {
12881 + return NULL;
12882 + }
12883 +
12884 + DWC_CIRCLEQ_INSERT_TAIL(&manager->notifiers, notifier, list_entry);
12885 +
12886 + DWC_INFO("Notifier %p registered", object);
12887 + dump_manager();
12888 +
12889 + return notifier;
12890 +}
12891 +
12892 +void dwc_unregister_notifier(dwc_notifier_t *notifier)
12893 +{
12894 + DWC_ASSERT(manager, "Notification manager not found");
12895 +
12896 + if (!DWC_CIRCLEQ_EMPTY(&notifier->observers)) {
12897 + observer_t *o;
12898 +
12899 + DWC_ERROR("Notifier %p has active observers when removing\n", notifier->object);
12900 + DWC_CIRCLEQ_FOREACH(o, &notifier->observers, list_entry) {
12901 + DWC_DEBUGC(" %p watching %s\n", o->observer, o->notification);
12902 + }
12903 +
12904 + DWC_ASSERT(DWC_CIRCLEQ_EMPTY(&notifier->observers),
12905 + "Notifier %p has active observers when removing", notifier);
12906 + }
12907 +
12908 + DWC_CIRCLEQ_REMOVE_INIT(&manager->notifiers, notifier, list_entry);
12909 + free_notifier(notifier);
12910 +
12911 + DWC_INFO("Notifier unregistered");
12912 + dump_manager();
12913 +}
12914 +
12915 +/* Add an observer to observe the notifier for a particular state, event, or notification. */
12916 +int dwc_add_observer(void *observer, void *object, char *notification,
12917 + dwc_notifier_callback_t callback, void *data)
12918 +{
12919 + notifier_t *notifier = find_notifier(object);
12920 + observer_t *new_observer;
12921 +
12922 + if (!notifier) {
12923 + DWC_ERROR("Notifier %p is not found when adding observer\n", object);
12924 + return -DWC_E_INVALID;
12925 + }
12926 +
12927 + new_observer = alloc_observer(notifier->mem_ctx, observer, notification, callback, data);
12928 + if (!new_observer) {
12929 + return -DWC_E_NO_MEMORY;
12930 + }
12931 +
12932 + DWC_CIRCLEQ_INSERT_TAIL(&notifier->observers, new_observer, list_entry);
12933 +
12934 + DWC_INFO("Added observer %p to notifier %p observing notification %s, callback=%p, data=%p",
12935 + observer, object, notification, callback, data);
12936 +
12937 + dump_manager();
12938 + return 0;
12939 +}
12940 +
12941 +int dwc_remove_observer(void *observer)
12942 +{
12943 + notifier_t *n;
12944 +
12945 + DWC_ASSERT(manager, "Notification manager not found");
12946 +
12947 + DWC_CIRCLEQ_FOREACH(n, &manager->notifiers, list_entry) {
12948 + observer_t *o;
12949 + observer_t *o2;
12950 +
12951 + DWC_CIRCLEQ_FOREACH_SAFE(o, o2, &n->observers, list_entry) {
12952 + if (o->observer == observer) {
12953 + DWC_CIRCLEQ_REMOVE_INIT(&n->observers, o, list_entry);
12954 + DWC_INFO("Removing observer %p from notifier %p watching notification %s:",
12955 + o->observer, n->object, o->notification);
12956 + free_observer(n->mem_ctx, o);
12957 + }
12958 + }
12959 + }
12960 +
12961 + dump_manager();
12962 + return 0;
12963 +}
12964 +
12965 +typedef struct callback_data {
12966 + void *mem_ctx;
12967 + dwc_notifier_callback_t cb;
12968 + void *observer;
12969 + void *data;
12970 + void *object;
12971 + char *notification;
12972 + void *notification_data;
12973 +} cb_data_t;
12974 +
12975 +static void cb_task(void *data)
12976 +{
12977 + cb_data_t *cb = (cb_data_t *)data;
12978 +
12979 + cb->cb(cb->object, cb->notification, cb->observer, cb->notification_data, cb->data);
12980 + dwc_free(cb->mem_ctx, cb);
12981 +}
12982 +
12983 +void dwc_notify(dwc_notifier_t *notifier, char *notification, void *notification_data)
12984 +{
12985 + observer_t *o;
12986 +
12987 + DWC_ASSERT(manager, "Notification manager not found");
12988 +
12989 + DWC_CIRCLEQ_FOREACH(o, &notifier->observers, list_entry) {
12990 + int len = DWC_STRLEN(notification);
12991 +
12992 + if (DWC_STRLEN(o->notification) != len) {
12993 + continue;
12994 + }
12995 +
12996 + if (DWC_STRNCMP(o->notification, notification, len) == 0) {
12997 + cb_data_t *cb_data = dwc_alloc(notifier->mem_ctx, sizeof(cb_data_t));
12998 +
12999 + if (!cb_data) {
13000 + DWC_ERROR("Failed to allocate callback data\n");
13001 + return;
13002 + }
13003 +
13004 + cb_data->mem_ctx = notifier->mem_ctx;
13005 + cb_data->cb = o->callback;
13006 + cb_data->observer = o->observer;
13007 + cb_data->data = o->data;
13008 + cb_data->object = notifier->object;
13009 + cb_data->notification = notification;
13010 + cb_data->notification_data = notification_data;
13011 + DWC_DEBUGC("Observer found %p for notification %s\n", o->observer, notification);
13012 + DWC_WORKQ_SCHEDULE(manager->wq, cb_task, cb_data,
13013 + "Notify callback from %p for Notification %s, to observer %p",
13014 + cb_data->object, notification, cb_data->observer);
13015 + }
13016 + }
13017 +}
13018 +
13019 +#endif /* DWC_NOTIFYLIB */
13020 --- /dev/null
13021 +++ b/drivers/usb/host/dwc_common_port/dwc_notifier.h
13022 @@ -0,0 +1,122 @@
13023 +
13024 +#ifndef __DWC_NOTIFIER_H__
13025 +#define __DWC_NOTIFIER_H__
13026 +
13027 +#ifdef __cplusplus
13028 +extern "C" {
13029 +#endif
13030 +
13031 +#include "dwc_os.h"
13032 +
13033 +/** @file
13034 + *
13035 + * A simple implementation of the Observer pattern. Any "module" can
13036 + * register as an observer or notifier. The notion of "module" is abstract and
13037 + * can mean anything used to identify either an observer or notifier. Usually
13038 + * it will be a pointer to a data structure which contains some state, i.e. an
13039 + * object.
13040 + *
13041 + * Before any notifiers can be added, the global notification manager must be
13042 + * brought up with dwc_alloc_notification_manager().
13043 + * dwc_free_notification_manager() will bring it down and free all resources.
13044 + * These would typically be called upon module load and unload. The
13045 + * notification manager is a single global instance that handles all registered
13046 + * observable modules and observers so this should be done only once.
13047 + *
13048 + * A module can be observable by using Notifications to publicize some general
13049 + * information about its state or operation. It does not care who listens, or
13050 + * even if anyone listens, or what they do with the information. The observable
13051 + * module does not need to know any information about its observers or their
13052 + * interface, or their state or data.
13053 + *
13054 + * Any module can register to emit Notifications. It should publish a list of
13055 + * notifications that it can emit and their behavior, such as when they will get
13056 + * triggered, and what information will be provided to the observer. Then it
13057 + * should register itself as an observable module. See dwc_register_notifier().
13058 + *
13059 + * Any module can observe any observable, registered module, provided it has a
13060 + * handle to the other module and knows what notifications to observe. See
13061 + * dwc_add_observer().
13062 + *
13063 + * A function of type dwc_notifier_callback_t is called whenever a notification
13064 + * is triggered with one or more observers observing it. This function is
13065 + * called in its own process so it may sleep or block if needed. It is
13066 + * guaranteed to be called sometime after the notification has occurred and will
13067 + * be called once for each time the notification is triggered. It will NOT be
13068 + * called in the same process context used to trigger the notification.
13069 + *
13070 + * @section Limitations
13071 + *
13072 + * Keep in mind that Notifications triggered in rapid succession may schedule
13073 + * more processes than the system can handle. Be aware of this limitation when
13074 + * designing to use notifications, and only add notifications for appropriate
13075 + * observable information.
13076 + *
13077 + * Also, Notification callbacks are not synchronous. If you need to synchronize
13078 + * behavior between the module and its observers you must use other means, and
13079 + * perhaps that will mean Notifications are not the proper solution.
13080 + */
13081 +
13082 +struct dwc_notifier;
13083 +typedef struct dwc_notifier dwc_notifier_t;
13084 +
13085 +/** The callback function must be of this type.
13086 + *
13087 + * @param object This is the object that is being observed.
13088 + * @param notification This is the notification that was triggered.
13089 + * @param observer This is the observer
13090 + * @param notification_data This is notification-specific data that the notifier
13091 + * has included in this notification. The value of this should be published in
13092 + * the documentation of the observable module with the notifications.
13093 + * @param user_data This is any custom data that the observer provided when
13094 + * adding itself as an observer to the notification. */
13095 +typedef void (*dwc_notifier_callback_t)(void *object, char *notification, void *observer,
13096 + void *notification_data, void *user_data);
13097 +
13098 +/** Brings up the notification manager. */
13099 +extern int dwc_alloc_notification_manager(void *mem_ctx, void *wkq_ctx);
13100 +/** Brings down the notification manager. */
13101 +extern void dwc_free_notification_manager(void);
13102 +
13103 +/** This function registers an observable module. A dwc_notifier_t object is
13104 + * returned to the observable module. This is an opaque object that is used by
13105 + * the observable module to trigger notifications. This object should only be
13106 + * accessible to functions that are authorized to trigger notifications for this
13107 + * module. Observers do not need this object. */
13108 +extern dwc_notifier_t *dwc_register_notifier(void *mem_ctx, void *object);
13109 +
13110 +/** This function unregisters an observable module. All observers have to be
13111 + * removed prior to unregistration. */
13112 +extern void dwc_unregister_notifier(dwc_notifier_t *notifier);
13113 +
13114 +/** Add a module as an observer to the observable module. The observable module
13115 + * needs to have previously registered with the notification manager.
13116 + *
13117 + * @param observer The observer module
13118 + * @param object The module to observe
13119 + * @param notification The notification to observe
13120 + * @param callback The callback function to call
13121 + * @param user_data Any additional user data to pass into the callback function */
13122 +extern int dwc_add_observer(void *observer, void *object, char *notification,
13123 + dwc_notifier_callback_t callback, void *user_data);
13124 +
13125 +/** Removes the specified observer from all notifications that it is currently
13126 + * observing. */
13127 +extern int dwc_remove_observer(void *observer);
13128 +
13129 +/** This function triggers a Notification. It should be called by the
13130 + * observable module, or any module or library which the observable module
13131 + * allows to trigger notifications on its behalf, such as the dwc_cc_t.
13132 + *
13133 + * dwc_notify is a non-blocking function. Callbacks are scheduled to be called
13134 + * in their own process context for each trigger. Callbacks can be blocking.
13135 + * dwc_notify can be called from interrupt context if needed.
13136 + *
13137 + */
13138 +void dwc_notify(dwc_notifier_t *notifier, char *notification, void *notification_data);
13139 +
13140 +#ifdef __cplusplus
13141 +}
13142 +#endif
13143 +
13144 +#endif /* __DWC_NOTIFIER_H__ */
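To make the intended call sequence concrete, a short hypothetical sketch of a module using this API end to end is shown below; the my_device/my_observer pointers, the "connect" notification string and the NULL contexts are placeholders, and error checking is omitted:

#include "dwc_notifier.h"

/* Callback, run from the notification work queue rather than from the caller. */
static void on_connect(void *object, char *notification, void *observer,
                       void *notification_data, void *user_data)
{
        DWC_INFO("observer %p saw %s on object %p", observer, notification, object);
}

static void notifier_example(void *my_device, void *my_observer)
{
        dwc_notifier_t *n;

        dwc_alloc_notification_manager(NULL, NULL);   /* once, at module load */
        n = dwc_register_notifier(NULL, my_device);   /* make my_device observable */
        dwc_add_observer(my_observer, my_device, "connect", on_connect, NULL);

        dwc_notify(n, "connect", NULL);               /* callback runs asynchronously */

        dwc_remove_observer(my_observer);
        dwc_unregister_notifier(n);
        dwc_free_notification_manager();              /* once, at module unload */
}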
13145 --- /dev/null
13146 +++ b/drivers/usb/host/dwc_common_port/dwc_os.h
13147 @@ -0,0 +1,1276 @@
13148 +/* =========================================================================
13149 + * $File: //dwh/usb_iip/dev/software/dwc_common_port_2/dwc_os.h $
13150 + * $Revision: #14 $
13151 + * $Date: 2010/11/04 $
13152 + * $Change: 1621695 $
13153 + *
13154 + * Synopsys Portability Library Software and documentation
13155 + * (hereinafter, "Software") is an Unsupported proprietary work of
13156 + * Synopsys, Inc. unless otherwise expressly agreed to in writing
13157 + * between Synopsys and you.
13158 + *
13159 + * The Software IS NOT an item of Licensed Software or Licensed Product
13160 + * under any End User Software License Agreement or Agreement for
13161 + * Licensed Product with Synopsys or any supplement thereto. You are
13162 + * permitted to use and redistribute this Software in source and binary
13163 + * forms, with or without modification, provided that redistributions
13164 + * of source code must retain this notice. You may not view, use,
13165 + * disclose, copy or distribute this file or any information contained
13166 + * herein except pursuant to this license grant from Synopsys. If you
13167 + * do not agree with this notice, including the disclaimer below, then
13168 + * you are not authorized to use the Software.
13169 + *
13170 + * THIS SOFTWARE IS BEING DISTRIBUTED BY SYNOPSYS SOLELY ON AN "AS IS"
13171 + * BASIS AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
13172 + * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
13173 + * FOR A PARTICULAR PURPOSE ARE HEREBY DISCLAIMED. IN NO EVENT SHALL
13174 + * SYNOPSYS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
13175 + * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
13176 + * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
13177 + * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY
13178 + * OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
13179 + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE
13180 + * USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH
13181 + * DAMAGE.
13182 + * ========================================================================= */
13183 +#ifndef _DWC_OS_H_
13184 +#define _DWC_OS_H_
13185 +
13186 +#ifdef __cplusplus
13187 +extern "C" {
13188 +#endif
13189 +
13190 +/** @file
13191 + *
13192 + * DWC portability library, low level os-wrapper functions
13193 + *
13194 + */
13195 +
13196 +/* These basic types need to be defined by some OS header file or custom header
13197 + * file for your specific target architecture.
13198 + *
13199 + * uint8_t, int8_t, uint16_t, int16_t, uint32_t, int32_t, uint64_t, int64_t
13200 + *
13201 + * Any custom or alternate header file must be added and enabled here.
13202 + */
13203 +
13204 +#ifdef DWC_LINUX
13205 +# include <linux/types.h>
13206 +# ifdef CONFIG_DEBUG_MUTEXES
13207 +# include <linux/mutex.h>
13208 +# endif
13209 +# include <linux/spinlock.h>
13210 +# include <linux/errno.h>
13211 +# include <stdarg.h>
13212 +#endif
13213 +
13214 +#if defined(DWC_FREEBSD) || defined(DWC_NETBSD)
13215 +# include <os_dep.h>
13216 +#endif
13217 +
13218 +
13219 +/** @name Primitive Types and Values */
13220 +
13221 +/** We define a boolean type for consistency. Can be either YES or NO */
13222 +typedef uint8_t dwc_bool_t;
13223 +#define YES 1
13224 +#define NO 0
13225 +
13226 +#ifdef DWC_LINUX
13227 +
13228 +/** @name Error Codes */
13229 +#define DWC_E_INVALID EINVAL
13230 +#define DWC_E_NO_MEMORY ENOMEM
13231 +#define DWC_E_NO_DEVICE ENODEV
13232 +#define DWC_E_NOT_SUPPORTED EOPNOTSUPP
13233 +#define DWC_E_TIMEOUT ETIMEDOUT
13234 +#define DWC_E_BUSY EBUSY
13235 +#define DWC_E_AGAIN EAGAIN
13236 +#define DWC_E_RESTART ERESTART
13237 +#define DWC_E_ABORT ECONNABORTED
13238 +#define DWC_E_SHUTDOWN ESHUTDOWN
13239 +#define DWC_E_NO_DATA ENODATA
13240 +#define DWC_E_DISCONNECT ECONNRESET
13241 +#define DWC_E_UNKNOWN EINVAL
13242 +#define DWC_E_NO_STREAM_RES ENOSR
13243 +#define DWC_E_COMMUNICATION ECOMM
13244 +#define DWC_E_OVERFLOW EOVERFLOW
13245 +#define DWC_E_PROTOCOL EPROTO
13246 +#define DWC_E_IN_PROGRESS EINPROGRESS
13247 +#define DWC_E_PIPE EPIPE
13248 +#define DWC_E_IO EIO
13249 +#define DWC_E_NO_SPACE ENOSPC
13250 +
13251 +#else
13252 +
13253 +/** @name Error Codes */
13254 +#define DWC_E_INVALID 1001
13255 +#define DWC_E_NO_MEMORY 1002
13256 +#define DWC_E_NO_DEVICE 1003
13257 +#define DWC_E_NOT_SUPPORTED 1004
13258 +#define DWC_E_TIMEOUT 1005
13259 +#define DWC_E_BUSY 1006
13260 +#define DWC_E_AGAIN 1007
13261 +#define DWC_E_RESTART 1008
13262 +#define DWC_E_ABORT 1009
13263 +#define DWC_E_SHUTDOWN 1010
13264 +#define DWC_E_NO_DATA 1011
13265 +#define DWC_E_DISCONNECT 2000
13266 +#define DWC_E_UNKNOWN 3000
13267 +#define DWC_E_NO_STREAM_RES 4001
13268 +#define DWC_E_COMMUNICATION 4002
13269 +#define DWC_E_OVERFLOW 4003
13270 +#define DWC_E_PROTOCOL 4004
13271 +#define DWC_E_IN_PROGRESS 4005
13272 +#define DWC_E_PIPE 4006
13273 +#define DWC_E_IO 4007
13274 +#define DWC_E_NO_SPACE 4008
13275 +
13276 +#endif
13277 +
13278 +
13279 +/** @name Tracing/Logging Functions
13280 + *
13281 + * These functions provide the capability to add tracing, debugging, and error
13282 + * messages, as well as exceptions and assertions. The WUDEV uses these
13283 + * extensively. These could be logged to the main console, the serial port, an
13284 + * internal buffer, etc. These functions could also be no-ops if they are too
13285 + * expensive on your system. By default, undefining the DEBUG macro already
13286 + * no-ops some of these functions. */
13287 +
13288 +/** Returns non-zero if in interrupt context. */
13289 +extern dwc_bool_t DWC_IN_IRQ(void);
13290 +#define dwc_in_irq DWC_IN_IRQ
13291 +
13292 +/** Returns "IRQ" if DWC_IN_IRQ is true. */
13293 +static inline char *dwc_irq(void) {
13294 + return DWC_IN_IRQ() ? "IRQ" : "";
13295 +}
13296 +
13297 +/** Returns non-zero if in bottom-half context. */
13298 +extern dwc_bool_t DWC_IN_BH(void);
13299 +#define dwc_in_bh DWC_IN_BH
13300 +
13301 +/** Returns "BH" if DWC_IN_BH is true. */
13302 +static inline char *dwc_bh(void) {
13303 + return DWC_IN_BH() ? "BH" : "";
13304 +}
13305 +
13306 +/**
13307 + * A vprintf() clone. Just call vprintf if you've got it.
13308 + */
13309 +extern void DWC_VPRINTF(char *format, va_list args);
13310 +#define dwc_vprintf DWC_VPRINTF
13311 +
13312 +/**
13313 + * A vsnprintf() clone. Just call vsnprintf if you've got it.
13314 + */
13315 +extern int DWC_VSNPRINTF(char *str, int size, char *format, va_list args);
13316 +#define dwc_vsnprintf DWC_VSNPRINTF
13317 +
13318 +/**
13319 + * printf() clone. Just call printf if you've got it.
13320 + */
13321 +extern void DWC_PRINTF(char *format, ...)
13322 +/* This provides compiler level static checking of the parameters if you're
13323 + * using GCC. */
13324 +#ifdef __GNUC__
13325 + __attribute__ ((format(printf, 1, 2)));
13326 +#else
13327 + ;
13328 +#endif
13329 +#define dwc_printf DWC_PRINTF
13330 +
13331 +/**
13332 + * sprintf() clone. Just call sprintf if you've got it.
13333 + */
13334 +extern int DWC_SPRINTF(char *string, char *format, ...)
13335 +#ifdef __GNUC__
13336 + __attribute__ ((format(printf, 2, 3)));
13337 +#else
13338 + ;
13339 +#endif
13340 +#define dwc_sprintf DWC_SPRINTF
13341 +
13342 +/**
13343 + * snprintf() clone. Just call snprintf if you've got it.
13344 + */
13345 +extern int DWC_SNPRINTF(char *string, int size, char *format, ...)
13346 +#ifdef __GNUC__
13347 + __attribute__ ((format(printf, 3, 4)));
13348 +#else
13349 + ;
13350 +#endif
13351 +#define dwc_snprintf DWC_SNPRINTF
13352 +
13353 +/**
13354 + * Prints a WARNING message. On systems that don't differentiate between
13355 + * warnings and regular log messages, just print it. Indicates that something
13356 + * may be wrong with the driver. Works like printf().
13357 + *
13358 + * Use the DWC_WARN macro to call this function.
13359 + */
13360 +extern void __DWC_WARN(char *format, ...)
13361 +#ifdef __GNUC__
13362 + __attribute__ ((format(printf, 1, 2)));
13363 +#else
13364 + ;
13365 +#endif
13366 +
13367 +/**
13368 + * Prints an error message. On systems that don't differentiate between errors
13369 + * and regular log messages, just print it. Indicates that something went wrong
13370 + * with the driver. Works like printf().
13371 + *
13372 + * Use the DWC_ERROR macro to call this function.
13373 + */
13374 +extern void __DWC_ERROR(char *format, ...)
13375 +#ifdef __GNUC__
13376 + __attribute__ ((format(printf, 1, 2)));
13377 +#else
13378 + ;
13379 +#endif
13380 +
13381 +/**
13382 + * Prints an exception error message and takes some user-defined action such as
13383 + * print out a backtrace or trigger a breakpoint. Indicates that something went
13384 + * abnormally wrong with the driver such as programmer error, or other
13385 + * exceptional condition. It should not be ignored so even on systems without
13386 + * printing capability, some action should be taken to notify the developer of
13387 + * it. Works like printf().
13388 + */
13389 +extern void DWC_EXCEPTION(char *format, ...)
13390 +#ifdef __GNUC__
13391 + __attribute__ ((format(printf, 1, 2)));
13392 +#else
13393 + ;
13394 +#endif
13395 +#define dwc_exception DWC_EXCEPTION
13396 +
13397 +#ifndef DWC_OTG_DEBUG_LEV
13398 +#define DWC_OTG_DEBUG_LEV 0
13399 +#endif
13400 +
13401 +#ifdef DEBUG
13402 +/**
13403 + * Prints out a debug message. Used for logging/trace messages.
13404 + *
13405 + * Use the DWC_DEBUG macro to call this function
13406 + */
13407 +extern void __DWC_DEBUG(char *format, ...)
13408 +#ifdef __GNUC__
13409 + __attribute__ ((format(printf, 1, 2)));
13410 +#else
13411 + ;
13412 +#endif
13413 +#else
13414 +#define __DWC_DEBUG printk
13415 +#endif
13416 +
13417 +/**
13418 + * Prints out a Debug message.
13419 + */
13420 +#define DWC_DEBUG(_format, _args...) __DWC_DEBUG("DEBUG:%s:%s: " _format "\n", \
13421 + __func__, dwc_irq(), ## _args)
13422 +#define dwc_debug DWC_DEBUG
13423 +/**
13424 + * Prints out a Debug message if enabled at compile time.
13425 + */
13426 +#if DWC_OTG_DEBUG_LEV > 0
13427 +#define DWC_DEBUGC(_format, _args...) DWC_DEBUG(_format, ##_args )
13428 +#else
13429 +#define DWC_DEBUGC(_format, _args...)
13430 +#endif
13431 +#define dwc_debugc DWC_DEBUGC
13432 +/**
13433 + * Prints out an informative message.
13434 + */
13435 +#define DWC_INFO(_format, _args...) DWC_PRINTF("INFO:%s: " _format "\n", \
13436 + dwc_irq(), ## _args)
13437 +#define dwc_info DWC_INFO
13438 +/**
13439 + * Prints out an informative message if enabled at compile time.
13440 + */
13441 +#if DWC_OTG_DEBUG_LEV > 1
13442 +#define DWC_INFOC(_format, _args...) DWC_INFO(_format, ##_args )
13443 +#else
13444 +#define DWC_INFOC(_format, _args...)
13445 +#endif
13446 +#define dwc_infoc DWC_INFOC
13447 +/**
13448 + * Prints out a warning message.
13449 + */
13450 +#define DWC_WARN(_format, _args...) __DWC_WARN("WARN:%s:%s:%d: " _format "\n", \
13451 + dwc_irq(), __func__, __LINE__, ## _args)
13452 +#define dwc_warn DWC_WARN
13453 +/**
13454 + * Prints out an error message.
13455 + */
13456 +#define DWC_ERROR(_format, _args...) __DWC_ERROR("ERROR:%s:%s:%d: " _format "\n", \
13457 + dwc_irq(), __func__, __LINE__, ## _args)
13458 +#define dwc_error DWC_ERROR
13459 +
13460 +#define DWC_PROTO_ERROR(_format, _args...) __DWC_WARN("ERROR:%s:%s:%d: " _format "\n", \
13461 + dwc_irq(), __func__, __LINE__, ## _args)
13462 +#define dwc_proto_error DWC_PROTO_ERROR
13463 +
13464 +#ifdef DEBUG
13465 +/** Prints out an exception error message if the _expr expression fails. Disabled
13466 + * if DEBUG is not enabled. */
13467 +#define DWC_ASSERT(_expr, _format, _args...) do { \
13468 + if (!(_expr)) { DWC_EXCEPTION("%s:%s:%d: " _format "\n", dwc_irq(), \
13469 + __FILE__, __LINE__, ## _args); } \
13470 + } while (0)
13471 +#else
13472 +#define DWC_ASSERT(_x...)
13473 +#endif
13474 +#define dwc_assert DWC_ASSERT
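A brief, hypothetical sketch of how these logging helpers compose in driver code (the function and the channel-count values are made up; whether DWC_DEBUG and DWC_ASSERT do anything depends on the DEBUG and DWC_OTG_DEBUG_LEV settings above):

static int example_check_channels(int channels)
{
        DWC_DEBUG("probing with %d host channels", channels);  /* debug/trace message */
        if (channels == 0) {
                DWC_WARN("no host channels reported, assuming %d", 8);
                channels = 8;
        }
        if (channels > 16) {
                DWC_ERROR("invalid channel count %d", channels);
                return -DWC_E_INVALID;
        }
        DWC_ASSERT(channels > 0, "channel count must be positive");  /* no-op unless DEBUG is defined */
        return 0;
}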
13475 +
13476 +
13477 +/** @name Byte Ordering
13478 + * The following functions are for conversions between the processor's byte
13479 + * ordering and the specific ordering you want.
13480 + */
13481 +
13482 +/** Converts 32 bit data in CPU byte ordering to little endian. */
13483 +extern uint32_t DWC_CPU_TO_LE32(uint32_t *p);
13484 +#define dwc_cpu_to_le32 DWC_CPU_TO_LE32
13485 +
13486 +/** Converts 32 bit data in CPU byte ordering to big endian. */
13487 +extern uint32_t DWC_CPU_TO_BE32(uint32_t *p);
13488 +#define dwc_cpu_to_be32 DWC_CPU_TO_BE32
13489 +
13490 +/** Converts 32 bit little endian data to CPU byte ordering. */
13491 +extern uint32_t DWC_LE32_TO_CPU(uint32_t *p);
13492 +#define dwc_le32_to_cpu DWC_LE32_TO_CPU
13493 +
13494 +/** Converts 32 bit big endian data to CPU byte ordering. */
13495 +extern uint32_t DWC_BE32_TO_CPU(uint32_t *p);
13496 +#define dwc_be32_to_cpu DWC_BE32_TO_CPU
13497 +
13498 +/** Converts 16 bit data in CPU byte ordering to little endian. */
13499 +extern uint16_t DWC_CPU_TO_LE16(uint16_t *p);
13500 +#define dwc_cpu_to_le16 DWC_CPU_TO_LE16
13501 +
13502 +/** Converts 16 bit data in CPU byte ordering to big endian. */
13503 +extern uint16_t DWC_CPU_TO_BE16(uint16_t *p);
13504 +#define dwc_cpu_to_be16 DWC_CPU_TO_BE16
13505 +
13506 +/** Converts 16 bit little endian data to CPU byte ordering. */
13507 +extern uint16_t DWC_LE16_TO_CPU(uint16_t *p);
13508 +#define dwc_le16_to_cpu DWC_LE16_TO_CPU
13509 +
13510 +/** Converts 16 bit big endian data to CPU byte ordering. */
13511 +extern uint16_t DWC_BE16_TO_CPU(uint16_t *p);
13512 +#define dwc_be16_to_cpu DWC_BE16_TO_CPU
13513 +
13514 +
13515 +/** @name Register Read/Write
13516 + *
13517 + * The following six functions should be implemented to read/write registers of
13518 + * 32-bit and 64-bit sizes. All modules use this to read/write register values.
13519 + * The reg value is a pointer to the register calculated from the void *base
13520 + * variable passed into the driver when it is started. */
13521 +
13522 +#ifdef DWC_LINUX
13523 +/* Linux doesn't need any extra parameters for register read/write, so we
13524 + * just throw away the IO context parameter.
13525 + */
13526 +/** Reads the content of a 32-bit register. */
13527 +extern uint32_t DWC_READ_REG32(uint32_t volatile *reg);
13528 +#define dwc_read_reg32(_ctx_,_reg_) DWC_READ_REG32(_reg_)
13529 +
13530 +/** Reads the content of a 64-bit register. */
13531 +extern uint64_t DWC_READ_REG64(uint64_t volatile *reg);
13532 +#define dwc_read_reg64(_ctx_,_reg_) DWC_READ_REG64(_reg_)
13533 +
13534 +/** Writes to a 32-bit register. */
13535 +extern void DWC_WRITE_REG32(uint32_t volatile *reg, uint32_t value);
13536 +#define dwc_write_reg32(_ctx_,_reg_,_val_) DWC_WRITE_REG32(_reg_, _val_)
13537 +
13538 +/** Writes to a 64-bit register. */
13539 +extern void DWC_WRITE_REG64(uint64_t volatile *reg, uint64_t value);
13540 +#define dwc_write_reg64(_ctx_,_reg_,_val_) DWC_WRITE_REG64(_reg_, _val_)
13541 +
13542 +/**
13543 + * Modify bit values in a register. Using the
13544 + * algorithm: (reg_contents & ~clear_mask) | set_mask.
13545 + */
13546 +extern void DWC_MODIFY_REG32(uint32_t volatile *reg, uint32_t clear_mask, uint32_t set_mask);
13547 +#define dwc_modify_reg32(_ctx_,_reg_,_cmsk_,_smsk_) DWC_MODIFY_REG32(_reg_,_cmsk_,_smsk_)
13548 +extern void DWC_MODIFY_REG64(uint64_t volatile *reg, uint64_t clear_mask, uint64_t set_mask);
13549 +#define dwc_modify_reg64(_ctx_,_reg_,_cmsk_,_smsk_) DWC_MODIFY_REG64(_reg_,_cmsk_,_smsk_)
13550 +
13551 +#endif /* DWC_LINUX */
13552 +
13553 +#if defined(DWC_FREEBSD) || defined(DWC_NETBSD)
13554 +typedef struct dwc_ioctx {
13555 + struct device *dev;
13556 + bus_space_tag_t iot;
13557 + bus_space_handle_t ioh;
13558 +} dwc_ioctx_t;
13559 +
13560 +/** BSD needs two extra parameters for register read/write, so we pass
13561 + * them in using the IO context parameter.
13562 + */
13563 +/** Reads the content of a 32-bit register. */
13564 +extern uint32_t DWC_READ_REG32(void *io_ctx, uint32_t volatile *reg);
13565 +#define dwc_read_reg32 DWC_READ_REG32
13566 +
13567 +/** Reads the content of a 64-bit register. */
13568 +extern uint64_t DWC_READ_REG64(void *io_ctx, uint64_t volatile *reg);
13569 +#define dwc_read_reg64 DWC_READ_REG64
13570 +
13571 +/** Writes to a 32-bit register. */
13572 +extern void DWC_WRITE_REG32(void *io_ctx, uint32_t volatile *reg, uint32_t value);
13573 +#define dwc_write_reg32 DWC_WRITE_REG32
13574 +
13575 +/** Writes to a 64-bit register. */
13576 +extern void DWC_WRITE_REG64(void *io_ctx, uint64_t volatile *reg, uint64_t value);
13577 +#define dwc_write_reg64 DWC_WRITE_REG64
13578 +
13579 +/**
13580 + * Modify bit values in a register. Using the
13581 + * algorithm: (reg_contents & ~clear_mask) | set_mask.
13582 + */
13583 +extern void DWC_MODIFY_REG32(void *io_ctx, uint32_t volatile *reg, uint32_t clear_mask, uint32_t set_mask);
13584 +#define dwc_modify_reg32 DWC_MODIFY_REG32
13585 +extern void DWC_MODIFY_REG64(void *io_ctx, uint64_t volatile *reg, uint64_t clear_mask, uint64_t set_mask);
13586 +#define dwc_modify_reg64 DWC_MODIFY_REG64
13587 +
13588 +#endif /* DWC_FREEBSD || DWC_NETBSD */
13589 +
13590 +/** @cond */
13591 +
13592 +/** @name Some convenience MACROS used internally. Define DWC_DEBUG_REGS to log the
13593 + * register writes. */
13594 +
13595 +#ifdef DWC_LINUX
13596 +
13597 +# ifdef DWC_DEBUG_REGS
13598 +
13599 +#define dwc_define_read_write_reg_n(_reg,_container_type) \
13600 +static inline uint32_t dwc_read_##_reg##_n(_container_type *container, int num) { \
13601 + return DWC_READ_REG32(&container->regs->_reg[num]); \
13602 +} \
13603 +static inline void dwc_write_##_reg##_n(_container_type *container, int num, uint32_t data) { \
13604 + DWC_DEBUG("WRITING %8s[%d]: %p: %08x", #_reg, num, \
13605 + &(((uint32_t*)container->regs->_reg)[num]), data); \
13606 + DWC_WRITE_REG32(&(((uint32_t*)container->regs->_reg)[num]), data); \
13607 +}
13608 +
13609 +#define dwc_define_read_write_reg(_reg,_container_type) \
13610 +static inline uint32_t dwc_read_##_reg(_container_type *container) { \
13611 + return DWC_READ_REG32(&container->regs->_reg); \
13612 +} \
13613 +static inline void dwc_write_##_reg(_container_type *container, uint32_t data) { \
13614 + DWC_DEBUG("WRITING %11s: %p: %08x", #_reg, &container->regs->_reg, data); \
13615 + DWC_WRITE_REG32(&container->regs->_reg, data); \
13616 +}
13617 +
13618 +# else /* DWC_DEBUG_REGS */
13619 +
13620 +#define dwc_define_read_write_reg_n(_reg,_container_type) \
13621 +static inline uint32_t dwc_read_##_reg##_n(_container_type *container, int num) { \
13622 + return DWC_READ_REG32(&container->regs->_reg[num]); \
13623 +} \
13624 +static inline void dwc_write_##_reg##_n(_container_type *container, int num, uint32_t data) { \
13625 + DWC_WRITE_REG32(&(((uint32_t*)container->regs->_reg)[num]), data); \
13626 +}
13627 +
13628 +#define dwc_define_read_write_reg(_reg,_container_type) \
13629 +static inline uint32_t dwc_read_##_reg(_container_type *container) { \
13630 + return DWC_READ_REG32(&container->regs->_reg); \
13631 +} \
13632 +static inline void dwc_write_##_reg(_container_type *container, uint32_t data) { \
13633 + DWC_WRITE_REG32(&container->regs->_reg, data); \
13634 +}
13635 +
13636 +# endif /* DWC_DEBUG_REGS */
13637 +
13638 +#endif /* DWC_LINUX */
13639 +
13640 +#if defined(DWC_FREEBSD) || defined(DWC_NETBSD)
13641 +
13642 +# ifdef DWC_DEBUG_REGS
13643 +
13644 +#define dwc_define_read_write_reg_n(_reg,_container_type) \
13645 +static inline uint32_t dwc_read_##_reg##_n(void *io_ctx, _container_type *container, int num) { \
13646 + return DWC_READ_REG32(io_ctx, &container->regs->_reg[num]); \
13647 +} \
13648 +static inline void dwc_write_##_reg##_n(void *io_ctx, _container_type *container, int num, uint32_t data) { \
13649 + DWC_DEBUG("WRITING %8s[%d]: %p: %08x", #_reg, num, \
13650 + &(((uint32_t*)container->regs->_reg)[num]), data); \
13651 + DWC_WRITE_REG32(io_ctx, &(((uint32_t*)container->regs->_reg)[num]), data); \
13652 +}
13653 +
13654 +#define dwc_define_read_write_reg(_reg,_container_type) \
13655 +static inline uint32_t dwc_read_##_reg(void *io_ctx, _container_type *container) { \
13656 + return DWC_READ_REG32(io_ctx, &container->regs->_reg); \
13657 +} \
13658 +static inline void dwc_write_##_reg(void *io_ctx, _container_type *container, uint32_t data) { \
13659 + DWC_DEBUG("WRITING %11s: %p: %08x", #_reg, &container->regs->_reg, data); \
13660 + DWC_WRITE_REG32(io_ctx, &container->regs->_reg, data); \
13661 +}
13662 +
13663 +# else /* DWC_DEBUG_REGS */
13664 +
13665 +#define dwc_define_read_write_reg_n(_reg,_container_type) \
13666 +static inline uint32_t dwc_read_##_reg##_n(void *io_ctx, _container_type *container, int num) { \
13667 + return DWC_READ_REG32(io_ctx, &container->regs->_reg[num]); \
13668 +} \
13669 +static inline void dwc_write_##_reg##_n(void *io_ctx, _container_type *container, int num, uint32_t data) { \
13670 + DWC_WRITE_REG32(io_ctx, &(((uint32_t*)container->regs->_reg)[num]), data); \
13671 +}
13672 +
13673 +#define dwc_define_read_write_reg(_reg,_container_type) \
13674 +static inline uint32_t dwc_read_##_reg(void *io_ctx, _container_type *container) { \
13675 + return DWC_READ_REG32(io_ctx, &container->regs->_reg); \
13676 +} \
13677 +static inline void dwc_write_##_reg(void *io_ctx, _container_type *container, uint32_t data) { \
13678 + DWC_WRITE_REG32(io_ctx, &container->regs->_reg, data); \
13679 +}
13680 +
13681 +# endif /* DWC_DEBUG_REGS */
13682 +
13683 +#endif /* DWC_FREEBSD || DWC_NETBSD */
13684 +
13685 +/** @endcond */
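For readers unfamiliar with these generator macros, a hypothetical sketch of how a driver instantiates them follows (Linux variant; the my_regs layout and register names are invented for illustration and imply no hardware semantics). Each dwc_define_read_write_reg invocation emits a typed dwc_read_<reg>/dwc_write_<reg> pair built on DWC_READ_REG32/DWC_WRITE_REG32:

struct my_regs {
        uint32_t gctl;        /* a single 32-bit register */
        uint32_t haint[16];   /* an indexed register bank */
};

typedef struct my_core {
        struct my_regs *regs; /* the macros expect a 'regs' member */
} my_core_t;

dwc_define_read_write_reg(gctl, my_core_t)     /* defines dwc_read_gctl() / dwc_write_gctl() */
dwc_define_read_write_reg_n(haint, my_core_t)  /* defines dwc_read_haint_n() / dwc_write_haint_n() */

static void reg_example(my_core_t *core)
{
        uint32_t v = dwc_read_gctl(core);
        dwc_write_haint_n(core, 0, v);
        /* (val & ~clear_mask) | set_mask: set bit 0, leave the rest untouched */
        DWC_MODIFY_REG32(&core->regs->gctl, 0, 0x1);
}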
13686 +
13687 +
13688 +#ifdef DWC_CRYPTOLIB
13689 +/** @name Crypto Functions
13690 + *
13691 + * These are the low-level cryptographic functions used by the driver. */
13692 +
13693 +/** Perform AES CBC */
13694 +extern int DWC_AES_CBC(uint8_t *message, uint32_t messagelen, uint8_t *key, uint32_t keylen, uint8_t iv[16], uint8_t *out);
13695 +#define dwc_aes_cbc DWC_AES_CBC
13696 +
13697 +/** Fill the provided buffer with random bytes. These should be cryptographic grade random numbers. */
13698 +extern void DWC_RANDOM_BYTES(uint8_t *buffer, uint32_t length);
13699 +#define dwc_random_bytes DWC_RANDOM_BYTES
13700 +
13701 +/** Perform the SHA-256 hash function */
13702 +extern int DWC_SHA256(uint8_t *message, uint32_t len, uint8_t *out);
13703 +#define dwc_sha256 DWC_SHA256
13704 +
13705 +/** Calculates the HMAC-SHA256 */
13706 +extern int DWC_HMAC_SHA256(uint8_t *message, uint32_t messagelen, uint8_t *key, uint32_t keylen, uint8_t *out);
13707 +#define dwc_hmac_sha256 DWC_HMAC_SHA256
13708 +
13709 +#endif /* DWC_CRYPTOLIB */
13710 +
13711 +
13712 +/** @name Memory Allocation
13713 + *
13714 + * These functions provide access to memory allocation. There are only 2 DMA
13715 + * functions and 3 regular memory functions that need to be implemented. None
13716 + * of the memory debugging routines need to be implemented. The allocation
13717 + * routines all ZERO the contents of the memory.
13718 + *
13719 + * Defining DWC_DEBUG_MEMORY turns on memory debugging and statistic gathering.
13720 + * This checks for memory leaks, keeping track of alloc/free pairs. It also
13721 + * keeps track of how much memory the driver is using at any given time. */
13722 +
13723 +#define DWC_PAGE_SIZE 4096
13724 +#define DWC_PAGE_OFFSET(addr) (((uint32_t)addr) & 0xfff)
13725 +#define DWC_PAGE_ALIGNED(addr) ((((uint32_t)addr) & 0xfff) == 0)
13726 +
13727 +#define DWC_INVALID_DMA_ADDR 0x0
13728 +
13729 +#ifdef DWC_LINUX
13730 +/** Type for a DMA address */
13731 +typedef dma_addr_t dwc_dma_t;
13732 +#endif
13733 +
13734 +#if defined(DWC_FREEBSD) || defined(DWC_NETBSD)
13735 +typedef bus_addr_t dwc_dma_t;
13736 +#endif
13737 +
13738 +#ifdef DWC_FREEBSD
13739 +typedef struct dwc_dmactx {
13740 + struct device *dev;
13741 + bus_dma_tag_t dma_tag;
13742 + bus_dmamap_t dma_map;
13743 + bus_addr_t dma_paddr;
13744 + void *dma_vaddr;
13745 +} dwc_dmactx_t;
13746 +#endif
13747 +
13748 +#ifdef DWC_NETBSD
13749 +typedef struct dwc_dmactx {
13750 + struct device *dev;
13751 + bus_dma_tag_t dma_tag;
13752 + bus_dmamap_t dma_map;
13753 + bus_dma_segment_t segs[1];
13754 + int nsegs;
13755 + bus_addr_t dma_paddr;
13756 + void *dma_vaddr;
13757 +} dwc_dmactx_t;
13758 +#endif
13759 +
13760 +/* @todo these functions will be added in the future */
13761 +#if 0
13762 +/**
13763 + * Creates a DMA pool from which you can allocate DMA buffers. Buffers
13764 + * allocated from this pool will be guaranteed to meet the size, alignment, and
13765 + * boundary requirements specified.
13766 + *
13767 + * @param[in] size Specifies the size of the buffers that will be allocated from
13768 + * this pool.
13769 + * @param[in] align Specifies the byte alignment requirements of the buffers
13770 + * allocated from this pool. Must be a power of 2.
13771 + * @param[in] boundary Specifies the N-byte boundary that buffers allocated from
13772 + * this pool must not cross.
13773 + *
13774 + * @returns A pointer to an internal opaque structure which is not to be
13775 + * accessed outside of these library functions. Use this handle to specify
13776 + * which pools to allocate/free DMA buffers from and also to destroy the pool,
13777 + * when you are done with it.
13778 + */
13779 +extern dwc_pool_t *DWC_DMA_POOL_CREATE(uint32_t size, uint32_t align, uint32_t boundary);
13780 +
13781 +/**
13782 + * Destroy a DMA pool. All buffers allocated from that pool must be freed first.
13783 + */
13784 +extern void DWC_DMA_POOL_DESTROY(dwc_pool_t *pool);
13785 +
13786 +/**
13787 + * Allocate a buffer from the specified DMA pool and zeros its contents.
13788 + */
13789 +extern void *DWC_DMA_POOL_ALLOC(dwc_pool_t *pool, uint64_t *dma_addr);
13790 +
13791 +/**
13792 + * Free a previously allocated buffer from the DMA pool.
13793 + */
13794 +extern void DWC_DMA_POOL_FREE(dwc_pool_t *pool, void *vaddr, void *daddr);
13795 +#endif
13796 +
13797 +/** Allocates a DMA capable buffer and zeroes its contents. */
13798 +extern void *__DWC_DMA_ALLOC(void *dma_ctx, uint32_t size, dwc_dma_t *dma_addr);
13799 +
13800 +/** Allocates a DMA capable buffer and zeroes its contents in atomic context. */
13801 +extern void *__DWC_DMA_ALLOC_ATOMIC(void *dma_ctx, uint32_t size, dwc_dma_t *dma_addr);
13802 +
13803 +/** Frees a previously allocated buffer. */
13804 +extern void __DWC_DMA_FREE(void *dma_ctx, uint32_t size, void *virt_addr, dwc_dma_t dma_addr);
13805 +
13806 +/** Allocates a block of memory and zeroes its contents. */
13807 +extern void *__DWC_ALLOC(void *mem_ctx, uint32_t size);
13808 +
13809 +/** Allocates a block of memory and zeroes its contents, in an atomic manner
13810 + * which can be used inside interrupt context. The size should be sufficiently
13811 + * small, a few KB at most, such that failures are not likely to occur. Can just call
13812 + * __DWC_ALLOC if it is atomic. */
13813 +extern void *__DWC_ALLOC_ATOMIC(void *mem_ctx, uint32_t size);
13814 +
13815 +/** Frees a previously allocated buffer. */
13816 +extern void __DWC_FREE(void *mem_ctx, void *addr);
13817 +
13818 +#ifndef DWC_DEBUG_MEMORY
13819 +
13820 +#define DWC_ALLOC(_size_) __DWC_ALLOC(NULL, _size_)
13821 +#define DWC_ALLOC_ATOMIC(_size_) __DWC_ALLOC_ATOMIC(NULL, _size_)
13822 +#define DWC_FREE(_addr_) __DWC_FREE(NULL, _addr_)
13823 +
13824 +# ifdef DWC_LINUX
13825 +#define DWC_DMA_ALLOC(_dev, _size_, _dma_) __DWC_DMA_ALLOC(_dev, _size_, _dma_)
13826 +#define DWC_DMA_ALLOC_ATOMIC(_dev, _size_, _dma_) __DWC_DMA_ALLOC_ATOMIC(_dev, _size_, _dma_)
13827 +#define DWC_DMA_FREE(_dev, _size_,_virt_, _dma_) __DWC_DMA_FREE(_dev, _size_, _virt_, _dma_)
13828 +# endif
13829 +
13830 +# if defined(DWC_FREEBSD) || defined(DWC_NETBSD)
13831 +#define DWC_DMA_ALLOC __DWC_DMA_ALLOC
13832 +#define DWC_DMA_FREE __DWC_DMA_FREE
13833 +# endif
13834 +extern void *dwc_dma_alloc_atomic_debug(uint32_t size, dwc_dma_t *dma_addr, char const *func, int line);
13835 +
13836 +#else /* DWC_DEBUG_MEMORY */
13837 +
13838 +extern void *dwc_alloc_debug(void *mem_ctx, uint32_t size, char const *func, int line);
13839 +extern void *dwc_alloc_atomic_debug(void *mem_ctx, uint32_t size, char const *func, int line);
13840 +extern void dwc_free_debug(void *mem_ctx, void *addr, char const *func, int line);
13841 +extern void *dwc_dma_alloc_debug(void *dma_ctx, uint32_t size, dwc_dma_t *dma_addr,
13842 + char const *func, int line);
13843 +extern void *dwc_dma_alloc_atomic_debug(void *dma_ctx, uint32_t size, dwc_dma_t *dma_addr,
13844 + char const *func, int line);
13845 +extern void dwc_dma_free_debug(void *dma_ctx, uint32_t size, void *virt_addr,
13846 + dwc_dma_t dma_addr, char const *func, int line);
13847 +
13848 +extern int dwc_memory_debug_start(void *mem_ctx);
13849 +extern void dwc_memory_debug_stop(void);
13850 +extern void dwc_memory_debug_report(void);
13851 +
13852 +#define DWC_ALLOC(_size_) dwc_alloc_debug(NULL, _size_, __func__, __LINE__)
13853 +#define DWC_ALLOC_ATOMIC(_size_) dwc_alloc_atomic_debug(NULL, _size_, \
13854 + __func__, __LINE__)
13855 +#define DWC_FREE(_addr_) dwc_free_debug(NULL, _addr_, __func__, __LINE__)
13856 +
13857 +# ifdef DWC_LINUX
13858 +#define DWC_DMA_ALLOC(_dev, _size_, _dma_) \
13859 + dwc_dma_alloc_debug(_dev, _size_, _dma_, __func__, __LINE__)
13860 +#define DWC_DMA_ALLOC_ATOMIC(_dev, _size_, _dma_) \
13861 + dwc_dma_alloc_atomic_debug(_dev, _size_, _dma_, __func__, __LINE__)
13862 +#define DWC_DMA_FREE(_dev, _size_, _virt_, _dma_) \
13863 + dwc_dma_free_debug(_dev, _size_, _virt_, _dma_, __func__, __LINE__)
13864 +# endif
13865 +
13866 +# if defined(DWC_FREEBSD) || defined(DWC_NETBSD)
13867 +#define DWC_DMA_ALLOC(_ctx_,_size_,_dma_) dwc_dma_alloc_debug(_ctx_, _size_, \
13868 + _dma_, __func__, __LINE__)
13869 +#define DWC_DMA_FREE(_ctx_,_size_,_virt_,_dma_) dwc_dma_free_debug(_ctx_, _size_, \
13870 + _virt_, _dma_, __func__, __LINE__)
13871 +# endif
13872 +
13873 +#endif /* DWC_DEBUG_MEMORY */
13874 +
13875 +#define dwc_alloc(_ctx_,_size_) DWC_ALLOC(_size_)
13876 +#define dwc_alloc_atomic(_ctx_,_size_) DWC_ALLOC_ATOMIC(_size_)
13877 +#define dwc_free(_ctx_,_addr_) DWC_FREE(_addr_)
13878 +
13879 +#ifdef DWC_LINUX
13880 +/* Linux doesn't need any extra parameters for DMA buffer allocation, so we
13881 + * just throw away the DMA context parameter.
13882 + */
13883 +#define dwc_dma_alloc(_ctx_,_size_,_dma_) DWC_DMA_ALLOC(_size_, _dma_)
13884 +#define dwc_dma_alloc_atomic(_ctx_,_size_,_dma_) DWC_DMA_ALLOC_ATOMIC(_size_, _dma_)
13885 +#define dwc_dma_free(_ctx_,_size_,_virt_,_dma_) DWC_DMA_FREE(_size_, _virt_, _dma_)
13886 +#endif
13887 +
13888 +#if defined(DWC_FREEBSD) || defined(DWC_NETBSD)
13889 +/** BSD needs several extra parameters for DMA buffer allocation, so we pass
13890 + * them in using the DMA context parameter.
13891 + */
13892 +#define dwc_dma_alloc DWC_DMA_ALLOC
13893 +#define dwc_dma_free DWC_DMA_FREE
13894 +#endif
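+/* Illustrative usage sketch (not part of the original header; 'dev' is a
+ * hypothetical struct device pointer, as used by the DWC_LINUX variants):
+ *
+ *     dwc_dma_t dma_handle;
+ *     void *buf = DWC_DMA_ALLOC(dev, 512, &dma_handle);
+ *
+ *     if (buf) {
+ *             // program dma_handle into the controller; access buf from the CPU
+ *             DWC_DMA_FREE(dev, 512, buf, dma_handle);
+ *     }
+ */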
13895 +
13896 +
13897 +/** @name Memory and String Processing */
13898 +
13899 +/** memset() clone */
13900 +extern void *DWC_MEMSET(void *dest, uint8_t byte, uint32_t size);
13901 +#define dwc_memset DWC_MEMSET
13902 +
13903 +/** memcpy() clone */
13904 +extern void *DWC_MEMCPY(void *dest, void const *src, uint32_t size);
13905 +#define dwc_memcpy DWC_MEMCPY
13906 +
13907 +/** memmove() clone */
13908 +extern void *DWC_MEMMOVE(void *dest, void *src, uint32_t size);
13909 +#define dwc_memmove DWC_MEMMOVE
13910 +
13911 +/** memcmp() clone */
13912 +extern int DWC_MEMCMP(void *m1, void *m2, uint32_t size);
13913 +#define dwc_memcmp DWC_MEMCMP
13914 +
13915 +/** strcmp() clone */
13916 +extern int DWC_STRCMP(void *s1, void *s2);
13917 +#define dwc_strcmp DWC_STRCMP
13918 +
13919 +/** strncmp() clone */
13920 +extern int DWC_STRNCMP(void *s1, void *s2, uint32_t size);
13921 +#define dwc_strncmp DWC_STRNCMP
13922 +
13923 +/** strlen() clone, for NULL terminated ASCII strings */
13924 +extern int DWC_STRLEN(char const *str);
13925 +#define dwc_strlen DWC_STRLEN
13926 +
13927 +/** strcpy() clone, for NULL terminated ASCII strings */
13928 +extern char *DWC_STRCPY(char *to, const char *from);
13929 +#define dwc_strcpy DWC_STRCPY
13930 +
13931 +/** strdup() clone. If you wish to use memory allocation debugging, this
13932 + * implementation of strdup should use the DWC_* memory routines instead of
13933 + * calling a predefined strdup. Otherwise the memory allocated by this routine
13934 + * will not be seen by the debugging routines. */
13935 +extern char *DWC_STRDUP(char const *str);
13936 +#define dwc_strdup(_ctx_,_str_) DWC_STRDUP(_str_)
13937 +
13938 +/** NOT an atoi() clone. Read the description carefully. Returns an integer
13939 + * converted from the string str in base 10 unless the string begins with a "0x"
13940 + * in which case it is base 16. String must be a NULL terminated sequence of
13941 + * ASCII characters and may optionally begin with whitespace, a + or -, and a
13942 + * "0x" prefix if base 16. The remaining characters must be valid digits for
13943 + * the number and end with a NULL character. If any invalid characters are
13944 + * encountered, it returns a negative error code and the results of the
13945 + * conversion are undefined. On success it returns 0. Overflow conditions are
13946 + * undefined. An example implementation using atoi() can be referenced from the
13947 + * Linux implementation. */
13948 +extern int DWC_ATOI(const char *str, int32_t *value);
13949 +#define dwc_atoi DWC_ATOI
13950 +
13951 +/** Same as above but for unsigned. */
13952 +extern int DWC_ATOUI(const char *str, uint32_t *value);
13953 +#define dwc_atoui DWC_ATOUI
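+/* Illustrative usage sketch (not part of the original header; the values
+ * shown are hypothetical):
+ *
+ *     int32_t ival;
+ *     uint32_t uval;
+ *
+ *     if (DWC_ATOI("0x1f", &ival) == 0) {
+ *             // success: the "0x" prefix selects base 16, so ival == 31
+ *     }
+ *     if (DWC_ATOUI("42", &uval) == 0) {
+ *             // success: base 10, so uval == 42
+ *     }
+ */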
13954 +
13955 +#ifdef DWC_UTFLIB
13956 +/** This routine returns a UTF16LE unicode encoded string from a UTF8 string. */
13957 +extern int DWC_UTF8_TO_UTF16LE(uint8_t const *utf8string, uint16_t *utf16string, unsigned len);
13958 +#define dwc_utf8_to_utf16le DWC_UTF8_TO_UTF16LE
13959 +#endif
13960 +
13961 +
13962 +/** @name Wait queues
13963 + *
13964 + * Wait queues provide a means of synchronizing between threads or processes. A
13965 + * process can block on a waitq if some condition is not true, waiting for it to
13966 + * become true. When the waitq is triggered, all waiting processes get
13967 + * unblocked and the condition is checked again. Waitqs should be triggered
13968 + * every time a condition can potentially change. */
13969 +struct dwc_waitq;
13970 +
13971 +/** Type for a waitq */
13972 +typedef struct dwc_waitq dwc_waitq_t;
13973 +
13974 +/** The type of waitq condition callback function. This is called every time
13975 + * condition is evaluated. */
13976 +typedef int (*dwc_waitq_condition_t)(void *data);
13977 +
13978 +/** Allocate a waitq */
13979 +extern dwc_waitq_t *DWC_WAITQ_ALLOC(void);
13980 +#define dwc_waitq_alloc(_ctx_) DWC_WAITQ_ALLOC()
13981 +
13982 +/** Free a waitq */
13983 +extern void DWC_WAITQ_FREE(dwc_waitq_t *wq);
13984 +#define dwc_waitq_free DWC_WAITQ_FREE
13985 +
13986 +/** Check the condition and if it is false, block on the waitq. When unblocked, check the
13987 + * condition again. The function returns when the condition becomes true. The return value
13988 + * is 0 if the condition became true, DWC_WAITQ_ABORTED if aborted or killed, or DWC_WAITQ_UNKNOWN on error. */
13989 +extern int32_t DWC_WAITQ_WAIT(dwc_waitq_t *wq, dwc_waitq_condition_t cond, void *data);
13990 +#define dwc_waitq_wait DWC_WAITQ_WAIT
13991 +
13992 +/** Check the condition and if it is false, block on the waitq. When unblocked,
13993 + * check the condition again. The function returns when the condition becomes
13994 + * true or the timeout has passed. The return value is 0 on condition true or
13995 + * DWC_TIMED_OUT on timeout, or DWC_WAITQ_ABORTED, or DWC_WAITQ_UNKNOWN on
13996 + * error. */
13997 +extern int32_t DWC_WAITQ_WAIT_TIMEOUT(dwc_waitq_t *wq, dwc_waitq_condition_t cond,
13998 + void *data, int32_t msecs);
13999 +#define dwc_waitq_wait_timeout DWC_WAITQ_WAIT_TIMEOUT
14000 +
14001 +/** Trigger a waitq, unblocking all processes. This should be called whenever a condition
14002 + * has potentially changed. */
14003 +extern void DWC_WAITQ_TRIGGER(dwc_waitq_t *wq);
14004 +#define dwc_waitq_trigger DWC_WAITQ_TRIGGER
14005 +
14006 +/** Unblock all processes waiting on the waitq with an ABORTED result. */
14007 +extern void DWC_WAITQ_ABORT(dwc_waitq_t *wq);
14008 +#define dwc_waitq_abort DWC_WAITQ_ABORT
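+/* Illustrative usage sketch (not part of the original header; 'shared_ready'
+ * and 'check_ready' are hypothetical names). A waiter blocks until another
+ * context sets the flag and triggers the waitq:
+ *
+ *     static int shared_ready;
+ *
+ *     static int check_ready(void *data)
+ *     {
+ *             return shared_ready;            // condition: non-zero when true
+ *     }
+ *
+ *     // waiting context
+ *     dwc_waitq_t *wq = DWC_WAITQ_ALLOC();
+ *     DWC_WAITQ_WAIT(wq, check_ready, NULL);
+ *
+ *     // signalling context
+ *     shared_ready = 1;
+ *     DWC_WAITQ_TRIGGER(wq);
+ */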
14009 +
14010 +
14011 +/** @name Threads
14012 + *
14013 + * A thread must be explicitly stopped. It must check DWC_THREAD_SHOULD_STOP
14014 + * whenever it is woken up, and return once that becomes true. The
14015 + * DWC_THREAD_STOP function returns the value returned by the thread.
14016 + */
14017 +
14018 +struct dwc_thread;
14019 +
14020 +/** Type for a thread */
14021 +typedef struct dwc_thread dwc_thread_t;
14022 +
14023 +/** The thread function */
14024 +typedef int (*dwc_thread_function_t)(void *data);
14025 +
14026 +/** Create a thread and start it running the thread_function. Returns a handle
14027 + * to the thread */
14028 +extern dwc_thread_t *DWC_THREAD_RUN(dwc_thread_function_t func, char *name, void *data);
14029 +#define dwc_thread_run(_ctx_,_func_,_name_,_data_) DWC_THREAD_RUN(_func_, _name_, _data_)
14030 +
14031 +/** Stops a thread and returns the value returned by the thread, or
14032 + * DWC_ABORT if the thread never started. */
14033 +extern int DWC_THREAD_STOP(dwc_thread_t *thread);
14034 +#define dwc_thread_stop DWC_THREAD_STOP
14035 +
14036 +/** Returns true if the thread has been signalled to stop. */
14037 +#ifdef DWC_LINUX
14038 +/* Linux doesn't need any parameters for kthread_should_stop() */
14039 +extern dwc_bool_t DWC_THREAD_SHOULD_STOP(void);
14040 +#define dwc_thread_should_stop(_thrd_) DWC_THREAD_SHOULD_STOP()
14041 +
14042 +/* No thread_exit function in Linux */
14043 +#define dwc_thread_exit(_thrd_)
14044 +#endif
14045 +
14046 +#if defined(DWC_FREEBSD) || defined(DWC_NETBSD)
14047 +/** BSD needs the thread pointer for kthread_suspend_check() */
14048 +extern dwc_bool_t DWC_THREAD_SHOULD_STOP(dwc_thread_t *thread);
14049 +#define dwc_thread_should_stop DWC_THREAD_SHOULD_STOP
14050 +
14051 +/** The thread must call this to exit. */
14052 +extern void DWC_THREAD_EXIT(dwc_thread_t *thread);
14053 +#define dwc_thread_exit DWC_THREAD_EXIT
14054 +#endif
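+/* Illustrative usage sketch (hypothetical names; Linux-flavoured, where
+ * DWC_THREAD_SHOULD_STOP() takes no argument). The thread polls the stop
+ * flag and returns, and DWC_THREAD_STOP() collects its return value:
+ *
+ *     static int worker(void *data)
+ *     {
+ *             while (!DWC_THREAD_SHOULD_STOP()) {
+ *                     // ... periodic work ...
+ *                     DWC_MSLEEP(100);
+ *             }
+ *             return 0;
+ *     }
+ *
+ *     dwc_thread_t *t = DWC_THREAD_RUN(worker, "dwc_worker", NULL);
+ *     ...
+ *     int ret = DWC_THREAD_STOP(t);
+ */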
14055 +
14056 +
14057 +/** @name Work queues
14058 + *
14059 + * Workqs are used to queue a callback function to be called at some later time,
14060 + * in another thread. */
14061 +struct dwc_workq;
14062 +
14063 +/** Type for a workq */
14064 +typedef struct dwc_workq dwc_workq_t;
14065 +
14066 +/** The type of the callback function to be called. */
14067 +typedef void (*dwc_work_callback_t)(void *data);
14068 +
14069 +/** Allocate a workq */
14070 +extern dwc_workq_t *DWC_WORKQ_ALLOC(char *name);
14071 +#define dwc_workq_alloc(_ctx_,_name_) DWC_WORKQ_ALLOC(_name_)
14072 +
14073 +/** Free a workq. All work must be completed before the workq is freed. */
14074 +extern void DWC_WORKQ_FREE(dwc_workq_t *workq);
14075 +#define dwc_workq_free DWC_WORKQ_FREE
14076 +
14077 +/** Schedule a callback on the workq, passing in data. The function will be
14078 + * scheduled at some later time. */
14079 +extern void DWC_WORKQ_SCHEDULE(dwc_workq_t *workq, dwc_work_callback_t cb,
14080 + void *data, char *format, ...)
14081 +#ifdef __GNUC__
14082 + __attribute__ ((format(printf, 4, 5)));
14083 +#else
14084 + ;
14085 +#endif
14086 +#define dwc_workq_schedule DWC_WORKQ_SCHEDULE
14087 +
14088 +/** Schedule a callback on the workq that will not be called until at least
14089 + * the given number of milliseconds has passed. */
14090 +extern void DWC_WORKQ_SCHEDULE_DELAYED(dwc_workq_t *workq, dwc_work_callback_t cb,
14091 + void *data, uint32_t time, char *format, ...)
14092 +#ifdef __GNUC__
14093 + __attribute__ ((format(printf, 5, 6)));
14094 +#else
14095 + ;
14096 +#endif
14097 +#define dwc_workq_schedule_delayed DWC_WORKQ_SCHEDULE_DELAYED
14098 +
14099 +/** Returns the number of work items pending in the workq. */
14100 +extern int DWC_WORKQ_PENDING(dwc_workq_t *workq);
14101 +#define dwc_workq_pending DWC_WORKQ_PENDING
14102 +
14103 +/** Blocks until all the work in the workq is complete or the timeout expires.
14104 + * Returns < 0 on timeout. */
14105 +extern int DWC_WORKQ_WAIT_WORK_DONE(dwc_workq_t *workq, int timeout);
14106 +#define dwc_workq_wait_work_done DWC_WORKQ_WAIT_WORK_DONE
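+/* Illustrative usage sketch (hypothetical names): queue deferred callbacks
+ * and wait for them to complete before freeing the workq:
+ *
+ *     static void my_work(void *data)
+ *     {
+ *             // runs later, in thread context
+ *     }
+ *
+ *     dwc_workq_t *wq = DWC_WORKQ_ALLOC("dwc_wq");
+ *     DWC_WORKQ_SCHEDULE(wq, my_work, ctx, "work item %d", 1);
+ *     DWC_WORKQ_SCHEDULE_DELAYED(wq, my_work, ctx, 50, "delayed item");
+ *     DWC_WORKQ_WAIT_WORK_DONE(wq, 1000);
+ *     DWC_WORKQ_FREE(wq);
+ */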
14107 +
14108 +
14109 +/** @name Tasklets
14110 + *
14111 + * Tasklets defer a callback to run later in atomic (softirq-like) context. */
14112 +struct dwc_tasklet;
14113 +
14114 +/** Type for a tasklet */
14115 +typedef struct dwc_tasklet dwc_tasklet_t;
14116 +
14117 +/** The type of the callback function to be called */
14118 +typedef void (*dwc_tasklet_callback_t)(void *data);
14119 +
14120 +/** Allocates a tasklet */
14121 +extern dwc_tasklet_t *DWC_TASK_ALLOC(char *name, dwc_tasklet_callback_t cb, void *data);
14122 +#define dwc_task_alloc(_ctx_,_name_,_cb_,_data_) DWC_TASK_ALLOC(_name_, _cb_, _data_)
14123 +
14124 +/** Frees a tasklet */
14125 +extern void DWC_TASK_FREE(dwc_tasklet_t *task);
14126 +#define dwc_task_free DWC_TASK_FREE
14127 +
14128 +/** Schedules a tasklet to run */
14129 +extern void DWC_TASK_SCHEDULE(dwc_tasklet_t *task);
14130 +#define dwc_task_schedule DWC_TASK_SCHEDULE
14131 +
14132 +extern void DWC_TASK_HI_SCHEDULE(dwc_tasklet_t *task);
14133 +#define dwc_task_hi_schedule DWC_TASK_HI_SCHEDULE
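+/* Illustrative usage sketch (hypothetical names): defer interrupt-time work
+ * to a tasklet callback:
+ *
+ *     static void bottom_half(void *data)
+ *     {
+ *             // runs later, in atomic (softirq-like) context
+ *     }
+ *
+ *     dwc_tasklet_t *task = DWC_TASK_ALLOC("dwc_bh", bottom_half, ctx);
+ *     // typically from the interrupt handler:
+ *     DWC_TASK_SCHEDULE(task);
+ *     ...
+ *     DWC_TASK_FREE(task);
+ */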
14134 +
14135 +/** @name Timer
14136 + *
14137 + * Callbacks must be small and atomic.
14138 + */
14139 +struct dwc_timer;
14140 +
14141 +/** Type for a timer */
14142 +typedef struct dwc_timer dwc_timer_t;
14143 +
14144 +/** The type of the callback function to be called */
14145 +typedef void (*dwc_timer_callback_t)(void *data);
14146 +
14147 +/** Allocates a timer */
14148 +extern dwc_timer_t *DWC_TIMER_ALLOC(char *name, dwc_timer_callback_t cb, void *data);
14149 +#define dwc_timer_alloc(_ctx_,_name_,_cb_,_data_) DWC_TIMER_ALLOC(_name_,_cb_,_data_)
14150 +
14151 +/** Frees a timer */
14152 +extern void DWC_TIMER_FREE(dwc_timer_t *timer);
14153 +#define dwc_timer_free DWC_TIMER_FREE
14154 +
14155 +/** Schedules the timer to run 'time' ms from now, and to repeat every
14156 + * repeat_interval msec thereafter.
14157 + *
14158 + * If the timer is still awaiting execution, this modifies it to a new
14159 + * expiration time; the mod_time is added to the old time. */
14160 +extern void DWC_TIMER_SCHEDULE(dwc_timer_t *timer, uint32_t time);
14161 +#define dwc_timer_schedule DWC_TIMER_SCHEDULE
14162 +
14163 +/** Disables the timer from execution. */
14164 +extern void DWC_TIMER_CANCEL(dwc_timer_t *timer);
14165 +#define dwc_timer_cancel DWC_TIMER_CANCEL
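+/* Illustrative usage sketch (hypothetical names): arm a timer roughly 100 ms
+ * out and cancel it if it is no longer needed:
+ *
+ *     static void timeout_cb(void *data)
+ *     {
+ *             // keep this small and atomic, as noted above
+ *     }
+ *
+ *     dwc_timer_t *t = DWC_TIMER_ALLOC("dwc_timeout", timeout_cb, ctx);
+ *     DWC_TIMER_SCHEDULE(t, 100);
+ *     ...
+ *     DWC_TIMER_CANCEL(t);
+ *     DWC_TIMER_FREE(t);
+ */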
14166 +
14167 +
14168 +/** @name Spinlocks
14169 + *
14170 + * These locks are used when the work between the lock/unlock is atomic and
14171 + * short. Interrupts are also disabled during the lock/unlock, and thus the
14172 + * locks are suitable for locking between interrupt and non-interrupt context.
14173 + * They also lock between processes if you have multiple CPUs or preemption. If
14174 + * you don't have multiple CPUs or preemption, then you can simply implement
14175 + * DWC_SPINLOCK and DWC_SPINUNLOCK to disable and enable interrupts. Because
14176 + * the work between the lock/unlock is atomic, the process context will never
14177 + * change, and so you never have to lock between processes. */
14178 +
14179 +struct dwc_spinlock;
14180 +
14181 +/** Type for a spinlock */
14182 +typedef struct dwc_spinlock dwc_spinlock_t;
14183 +
14184 +/** Type for the 'flags' argument to spinlock functions */
14185 +typedef unsigned long dwc_irqflags_t;
14186 +
14187 +/** Returns an initialized lock variable. This function should allocate and
14188 + * initialize the OS-specific data structure used for locking. This data
14189 + * structure is to be used by the DWC_SPINLOCK and DWC_SPINUNLOCK functions and
14190 + * should be freed with DWC_SPINLOCK_FREE when it is no longer needed.
14191 + *
14192 + * For Linux spinlock debugging, make this a macro, because the debugging
14193 + * routines use the symbol name to determine recursive locking. Using a wrapper
14194 + * function makes them falsely report recursive locking. */
14195 +#if defined(DWC_LINUX) && defined(CONFIG_DEBUG_SPINLOCK)
14196 +#define DWC_SPINLOCK_ALLOC_LINUX_DEBUG(lock) ({ \
14197 + lock = DWC_ALLOC(sizeof(spinlock_t)); \
14198 + if (lock) { \
14199 + spin_lock_init((spinlock_t *)lock); \
14200 + } \
14201 +})
14202 +#else
14203 +extern dwc_spinlock_t *DWC_SPINLOCK_ALLOC(void);
14204 +#define dwc_spinlock_alloc(_ctx_) DWC_SPINLOCK_ALLOC()
14205 +#endif
14206 +
14207 +/** Frees an initialized lock variable. */
14208 +extern void DWC_SPINLOCK_FREE(dwc_spinlock_t *lock);
14209 +#define dwc_spinlock_free(_ctx_,_lock_) DWC_SPINLOCK_FREE(_lock_)
14210 +
14211 +/** Disables interrupts and blocks until it acquires the lock.
14212 + *
14213 + * @param lock Pointer to the spinlock.
14214 + * @param flags Unsigned long for irq flags storage.
14215 + */
14216 +extern void DWC_SPINLOCK_IRQSAVE(dwc_spinlock_t *lock, dwc_irqflags_t *flags);
14217 +#define dwc_spinlock_irqsave DWC_SPINLOCK_IRQSAVE
14218 +
14219 +/** Restores the interrupt state and releases the lock.
14220 + *
14221 + * @param lock Pointer to the spinlock.
14222 + * @param flags Unsigned long for irq flags storage. Must be the same value as
14223 + * was passed to DWC_SPINLOCK_IRQSAVE.
14224 + */
14225 +extern void DWC_SPINUNLOCK_IRQRESTORE(dwc_spinlock_t *lock, dwc_irqflags_t flags);
14226 +#define dwc_spinunlock_irqrestore DWC_SPINUNLOCK_IRQRESTORE
14227 +
14228 +/** Blocks until it acquires the lock.
14229 + *
14230 + * @param lock Pointer to the spinlock.
14231 + */
14232 +extern void DWC_SPINLOCK(dwc_spinlock_t *lock);
14233 +#define dwc_spinlock DWC_SPINLOCK
14234 +
14235 +/** Releases the lock.
14236 + *
14237 + * @param lock Pointer to the spinlock.
14238 + */
14239 +extern void DWC_SPINUNLOCK(dwc_spinlock_t *lock);
14240 +#define dwc_spinunlock DWC_SPINUNLOCK
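+/* Illustrative usage sketch (hypothetical names): protect a short critical
+ * section that is shared with interrupt context:
+ *
+ *     dwc_spinlock_t *lock = DWC_SPINLOCK_ALLOC();
+ *     dwc_irqflags_t flags;
+ *
+ *     DWC_SPINLOCK_IRQSAVE(lock, &flags);
+ *     // ... short, atomic work ...
+ *     DWC_SPINUNLOCK_IRQRESTORE(lock, flags);
+ *
+ *     DWC_SPINLOCK_FREE(lock);
+ */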
14241 +
14242 +
14243 +/** @name Mutexes
14244 + *
14245 + * Unlike spinlocks, mutexes lock only between processes, and the work between
14246 + * the lock/unlock CAN block; therefore a mutex CANNOT be used from interrupt context.
14247 + */
14248 +
14249 +struct dwc_mutex;
14250 +
14251 +/** Type for a mutex */
14252 +typedef struct dwc_mutex dwc_mutex_t;
14253 +
14254 +/* For Linux mutex debugging, make this a macro, because the debugging routines
14255 + * use the symbol name to determine recursive locking. Using a wrapper function
14256 + * would make them falsely report recursive locking. */
14257 +#if defined(DWC_LINUX) && defined(CONFIG_DEBUG_MUTEXES)
14258 +#define DWC_MUTEX_ALLOC_LINUX_DEBUG(__mutexp) ({ \
14259 + __mutexp = (dwc_mutex_t *)DWC_ALLOC(sizeof(struct mutex)); \
14260 + mutex_init((struct mutex *)__mutexp); \
14261 +})
14262 +#endif
14263 +
14264 +/** Allocate a mutex */
14265 +extern dwc_mutex_t *DWC_MUTEX_ALLOC(void);
14266 +#define dwc_mutex_alloc(_ctx_) DWC_MUTEX_ALLOC()
14267 +
14268 +/* For memory leak debugging when using Linux Mutex Debugging */
14269 +#if defined(DWC_LINUX) && defined(CONFIG_DEBUG_MUTEXES)
14270 +#define DWC_MUTEX_FREE(__mutexp) do { \
14271 + mutex_destroy((struct mutex *)__mutexp); \
14272 + DWC_FREE(__mutexp); \
14273 +} while(0)
14274 +#else
14275 +/** Free a mutex */
14276 +extern void DWC_MUTEX_FREE(dwc_mutex_t *mutex);
14277 +#define dwc_mutex_free(_ctx_,_mutex_) DWC_MUTEX_FREE(_mutex_)
14278 +#endif
14279 +
14280 +/** Lock a mutex */
14281 +extern void DWC_MUTEX_LOCK(dwc_mutex_t *mutex);
14282 +#define dwc_mutex_lock DWC_MUTEX_LOCK
14283 +
14284 +/** Non-blocking lock attempt; returns 1 on successful lock. */
14285 +extern int DWC_MUTEX_TRYLOCK(dwc_mutex_t *mutex);
14286 +#define dwc_mutex_trylock DWC_MUTEX_TRYLOCK
14287 +
14288 +/** Unlock a mutex */
14289 +extern void DWC_MUTEX_UNLOCK(dwc_mutex_t *mutex);
14290 +#define dwc_mutex_unlock DWC_MUTEX_UNLOCK
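+/* Illustrative usage sketch: serialize process-context work that may block:
+ *
+ *     dwc_mutex_t *m = DWC_MUTEX_ALLOC();
+ *
+ *     DWC_MUTEX_LOCK(m);
+ *     // ... work that may sleep ...
+ *     DWC_MUTEX_UNLOCK(m);
+ *
+ *     if (DWC_MUTEX_TRYLOCK(m)) {     // non-blocking attempt
+ *             DWC_MUTEX_UNLOCK(m);
+ *     }
+ *
+ *     DWC_MUTEX_FREE(m);
+ */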
14291 +
14292 +
14293 +/** @name Time */
14294 +
14295 +/** Microsecond delay.
14296 + *
14297 + * @param usecs Microseconds to delay.
14298 + */
14299 +extern void DWC_UDELAY(uint32_t usecs);
14300 +#define dwc_udelay DWC_UDELAY
14301 +
14302 +/** Millisecond delay.
14303 + *
14304 + * @param msecs Milliseconds to delay.
14305 + */
14306 +extern void DWC_MDELAY(uint32_t msecs);
14307 +#define dwc_mdelay DWC_MDELAY
14308 +
14309 +/** Non-busy waiting.
14310 + * Sleeps for the specified number of milliseconds.
14311 + *
14312 + * @param msecs Milliseconds to sleep.
14313 + */
14314 +extern void DWC_MSLEEP(uint32_t msecs);
14315 +#define dwc_msleep DWC_MSLEEP
14316 +
14317 +/**
14318 + * Returns the number of milliseconds since boot.
14319 + */
14320 +extern uint32_t DWC_TIME(void);
14321 +#define dwc_time DWC_TIME
14322 +
14323 +
14324 +
14325 +
14326 +/* @mainpage DWC Portability and Common Library
14327 + *
14328 + * This is the documentation for the DWC Portability and Common Library.
14329 + *
14330 + * @section intro Introduction
14331 + *
14332 + * The DWC Portability library consists of wrapper calls and data structures for
14333 + * all low-level functions which are typically provided by the OS. The WUDEV
14334 + * driver uses only these functions. In order to port the WUDEV driver, only
14335 + * the functions in this library need to be re-implemented, with the same
14336 + * behavior as documented here.
14337 + *
14338 + * The Common library consists of higher level functions, which rely only on
14339 + * calling the functions from the DWC Portability library. These common
14340 + * routines are shared across modules. Some of the common libraries, such as
14341 + * the parameter and notification libraries, need to be used directly by the
14342 + * driver programmer when porting WUDEV.
14343 + *
14344 + * @section low Portability Library OS Wrapper Functions
14345 + *
14346 + * Any function starting with DWC and in all CAPS is a low-level OS-wrapper that
14347 + * needs to be implemented when porting, for example DWC_MUTEX_ALLOC(). All of
14348 + * these functions are included in the dwc_os.h file.
14349 + *
14350 + * There are many functions here covering a wide array of OS services. Please
14351 + * see dwc_os.h for details, and implementation notes for each function.
14352 + *
14353 + * @section common Common Library Functions
14354 + *
14355 + * Any function starting with dwc and in all lowercase is a common library
14356 + * routine. These functions have a portable implementation and do not need to
14357 + * be reimplemented when porting. The common routines can be used by any
14358 + * driver, and some must be used by the end user to control the drivers. For
14359 + * example, you must use the Parameter common library in order to set the
14360 + * parameters in the WUDEV module.
14361 + *
14362 + * The common libraries consist of the following:
14363 + *
14364 + * - Connection Contexts - Used internally and can be used by end-user. See dwc_cc.h
14365 + * - Parameters - Used internally and can be used by end-user. See dwc_params.h
14366 + * - Notifications - Used internally and can be used by end-user. See dwc_notifier.h
14367 + * - Lists - Used internally and can be used by end-user. See dwc_list.h
14368 + * - Memory Debugging - Used internally and can be used by end-user. See dwc_os.h
14369 + * - Modpow - Used internally only. See dwc_modpow.h
14370 + * - DH - Used internally only. See dwc_dh.h
14371 + * - Crypto - Used internally only. See dwc_crypto.h
14372 + *
14373 + *
14374 + * @section prereq Prerequisites For dwc_os.h
14375 + * @subsection types Data Types
14376 + *
14377 + * The dwc_os.h file assumes that several low-level data types are predefined for the
14378 + * compilation environment. These data types are:
14379 + *
14380 + * - uint8_t - unsigned 8-bit data type
14381 + * - int8_t - signed 8-bit data type
14382 + * - uint16_t - unsigned 16-bit data type
14383 + * - int16_t - signed 16-bit data type
14384 + * - uint32_t - unsigned 32-bit data type
14385 + * - int32_t - signed 32-bit data type
14386 + * - uint64_t - unsigned 64-bit data type
14387 + * - int64_t - signed 64-bit data type
14388 + *
14389 + * Ensure that these are defined before using dwc_os.h. The easiest way to do
14390 + * that is to modify the top of the file to include the appropriate header.
14391 + * This is already done for the Linux environment. If the DWC_LINUX macro is
14392 + * defined, the correct header will be added. A standard header <stdint.h> is
14393 + * also used for environments where standard C headers are available.
14394 + *
14395 + * @subsection stdarg Variable Arguments
14396 + *
14397 + * Variable arguments are provided by the standard C header <stdarg.h>. It is
14398 + * available in both the Linux and ANSI C environments. An equivalent must be
14399 + * provided in your environment in order to use dwc_os.h with the debug and
14400 + * tracing message functionality.
14401 + *
14402 + * @subsection thread Threading
14403 + *
14404 + * WUDEV Core must be run on an operating system that provides for multiple
14405 + * threads/processes. Threading can be implemented in many ways, even in
14406 + * embedded systems without an operating system. At the bare minimum, the
14407 + * system should be able to start any number of processes at any time to handle
14408 + * special work. It need not be a pre-emptive system. Process context can
14409 + * change upon a call to a blocking function. The hardware interrupt context
14410 + * that calls the module's ISR() function must be differentiable from process
14411 + * context, even if your processes are implemented via a hardware interrupt.
14412 + * Furthermore, a locking mechanism between processes must exist (or be
14413 + * implemented), and process context must have a way to disable interrupts for
14414 + * a period of time to lock them out. If all of this exists, the functions in
14415 + * dwc_os.h related to threading can be implemented with the defined behavior.
14416 + *
14417 + */
14418 +
14419 +#ifdef __cplusplus
14420 +}
14421 +#endif
14422 +
14423 +#endif /* _DWC_OS_H_ */
14424 --- /dev/null
14425 +++ b/drivers/usb/host/dwc_common_port/usb.h
14426 @@ -0,0 +1,946 @@
14427 +/*
14428 + * Copyright (c) 1998 The NetBSD Foundation, Inc.
14429 + * All rights reserved.
14430 + *
14431 + * This code is derived from software contributed to The NetBSD Foundation
14432 + * by Lennart Augustsson (lennart@augustsson.net) at
14433 + * Carlstedt Research & Technology.
14434 + *
14435 + * Redistribution and use in source and binary forms, with or without
14436 + * modification, are permitted provided that the following conditions
14437 + * are met:
14438 + * 1. Redistributions of source code must retain the above copyright
14439 + * notice, this list of conditions and the following disclaimer.
14440 + * 2. Redistributions in binary form must reproduce the above copyright
14441 + * notice, this list of conditions and the following disclaimer in the
14442 + * documentation and/or other materials provided with the distribution.
14443 + * 3. All advertising materials mentioning features or use of this software
14444 + * must display the following acknowledgement:
14445 + * This product includes software developed by the NetBSD
14446 + * Foundation, Inc. and its contributors.
14447 + * 4. Neither the name of The NetBSD Foundation nor the names of its
14448 + * contributors may be used to endorse or promote products derived
14449 + * from this software without specific prior written permission.
14450 + *
14451 + * THIS SOFTWARE IS PROVIDED BY THE NETBSD FOUNDATION, INC. AND CONTRIBUTORS
14452 + * ``AS IS'' AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED
14453 + * TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
14454 + * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE FOUNDATION OR CONTRIBUTORS
14455 + * BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
14456 + * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
14457 + * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
14458 + * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
14459 + * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
14460 + * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
14461 + * POSSIBILITY OF SUCH DAMAGE.
14462 + */
14463 +
14464 +/* Modified by Synopsys, Inc, 12/12/2007 */
14465 +
14466 +
14467 +#ifndef _USB_H_
14468 +#define _USB_H_
14469 +
14470 +#ifdef __cplusplus
14471 +extern "C" {
14472 +#endif
14473 +
14474 +/*
14475 + * The USB records contain some unaligned little-endian word
14476 + * components. The U[SG]ETW macros take care of both the alignment
14477 + * and endian problem and should always be used to access non-byte
14478 + * values.
14479 + */
14480 +typedef u_int8_t uByte;
14481 +typedef u_int8_t uWord[2];
14482 +typedef u_int8_t uDWord[4];
14483 +
14484 +#define USETW2(w,h,l) ((w)[0] = (u_int8_t)(l), (w)[1] = (u_int8_t)(h))
14485 +#define UCONSTW(x) { (x) & 0xff, ((x) >> 8) & 0xff }
14486 +#define UCONSTDW(x) { (x) & 0xff, ((x) >> 8) & 0xff, \
14487 + ((x) >> 16) & 0xff, ((x) >> 24) & 0xff }
14488 +
14489 +#if 1
14490 +#define UGETW(w) ((w)[0] | ((w)[1] << 8))
14491 +#define USETW(w,v) ((w)[0] = (u_int8_t)(v), (w)[1] = (u_int8_t)((v) >> 8))
14492 +#define UGETDW(w) ((w)[0] | ((w)[1] << 8) | ((w)[2] << 16) | ((w)[3] << 24))
14493 +#define USETDW(w,v) ((w)[0] = (u_int8_t)(v), \
14494 + (w)[1] = (u_int8_t)((v) >> 8), \
14495 + (w)[2] = (u_int8_t)((v) >> 16), \
14496 + (w)[3] = (u_int8_t)((v) >> 24))
14497 +#else
14498 +/*
14499 + * On little-endian machines that can handle unaligned accesses
14500 + * (e.g. i386) these macros can be replaced by the following.
14501 + */
14502 +#define UGETW(w) (*(u_int16_t *)(w))
14503 +#define USETW(w,v) (*(u_int16_t *)(w) = (v))
14504 +#define UGETDW(w) (*(u_int32_t *)(w))
14505 +#define USETDW(w,v) (*(u_int32_t *)(w) = (v))
14506 +#endif
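+/* Illustrative usage sketch (not part of the original header): reading and
+ * writing an unaligned little-endian word with the macros above:
+ *
+ *     uWord w;
+ *     USETW(w, 0x1234);               // w[0] == 0x34, w[1] == 0x12
+ *     u_int16_t v = UGETW(w);         // v == 0x1234 on any host endianness
+ */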
14507 +
14508 +/*
14509 + * Macros for accessing UAS IU fields, which are big-endian
14510 + */
14511 +#define IUSETW2(w,h,l) ((w)[0] = (u_int8_t)(h), (w)[1] = (u_int8_t)(l))
14512 +#define IUCONSTW(x) { ((x) >> 8) & 0xff, (x) & 0xff }
14513 +#define IUCONSTDW(x) { ((x) >> 24) & 0xff, ((x) >> 16) & 0xff, \
14514 + ((x) >> 8) & 0xff, (x) & 0xff }
14515 +#define IUGETW(w) (((w)[0] << 8) | (w)[1])
14516 +#define IUSETW(w,v) ((w)[0] = (u_int8_t)((v) >> 8), (w)[1] = (u_int8_t)(v))
14517 +#define IUGETDW(w) (((w)[0] << 24) | ((w)[1] << 16) | ((w)[2] << 8) | (w)[3])
14518 +#define IUSETDW(w,v) ((w)[0] = (u_int8_t)((v) >> 24), \
14519 + (w)[1] = (u_int8_t)((v) >> 16), \
14520 + (w)[2] = (u_int8_t)((v) >> 8), \
14521 + (w)[3] = (u_int8_t)(v))
14522 +
14523 +#define UPACKED __attribute__((__packed__))
14524 +
14525 +typedef struct {
14526 + uByte bmRequestType;
14527 + uByte bRequest;
14528 + uWord wValue;
14529 + uWord wIndex;
14530 + uWord wLength;
14531 +} UPACKED usb_device_request_t;
14532 +
14533 +#define UT_GET_DIR(a) ((a) & 0x80)
14534 +#define UT_WRITE 0x00
14535 +#define UT_READ 0x80
14536 +
14537 +#define UT_GET_TYPE(a) ((a) & 0x60)
14538 +#define UT_STANDARD 0x00
14539 +#define UT_CLASS 0x20
14540 +#define UT_VENDOR 0x40
14541 +
14542 +#define UT_GET_RECIPIENT(a) ((a) & 0x1f)
14543 +#define UT_DEVICE 0x00
14544 +#define UT_INTERFACE 0x01
14545 +#define UT_ENDPOINT 0x02
14546 +#define UT_OTHER 0x03
14547 +
14548 +#define UT_READ_DEVICE (UT_READ | UT_STANDARD | UT_DEVICE)
14549 +#define UT_READ_INTERFACE (UT_READ | UT_STANDARD | UT_INTERFACE)
14550 +#define UT_READ_ENDPOINT (UT_READ | UT_STANDARD | UT_ENDPOINT)
14551 +#define UT_WRITE_DEVICE (UT_WRITE | UT_STANDARD | UT_DEVICE)
14552 +#define UT_WRITE_INTERFACE (UT_WRITE | UT_STANDARD | UT_INTERFACE)
14553 +#define UT_WRITE_ENDPOINT (UT_WRITE | UT_STANDARD | UT_ENDPOINT)
14554 +#define UT_READ_CLASS_DEVICE (UT_READ | UT_CLASS | UT_DEVICE)
14555 +#define UT_READ_CLASS_INTERFACE (UT_READ | UT_CLASS | UT_INTERFACE)
14556 +#define UT_READ_CLASS_OTHER (UT_READ | UT_CLASS | UT_OTHER)
14557 +#define UT_READ_CLASS_ENDPOINT (UT_READ | UT_CLASS | UT_ENDPOINT)
14558 +#define UT_WRITE_CLASS_DEVICE (UT_WRITE | UT_CLASS | UT_DEVICE)
14559 +#define UT_WRITE_CLASS_INTERFACE (UT_WRITE | UT_CLASS | UT_INTERFACE)
14560 +#define UT_WRITE_CLASS_OTHER (UT_WRITE | UT_CLASS | UT_OTHER)
14561 +#define UT_WRITE_CLASS_ENDPOINT (UT_WRITE | UT_CLASS | UT_ENDPOINT)
14562 +#define UT_READ_VENDOR_DEVICE (UT_READ | UT_VENDOR | UT_DEVICE)
14563 +#define UT_READ_VENDOR_INTERFACE (UT_READ | UT_VENDOR | UT_INTERFACE)
14564 +#define UT_READ_VENDOR_OTHER (UT_READ | UT_VENDOR | UT_OTHER)
14565 +#define UT_READ_VENDOR_ENDPOINT (UT_READ | UT_VENDOR | UT_ENDPOINT)
14566 +#define UT_WRITE_VENDOR_DEVICE (UT_WRITE | UT_VENDOR | UT_DEVICE)
14567 +#define UT_WRITE_VENDOR_INTERFACE (UT_WRITE | UT_VENDOR | UT_INTERFACE)
14568 +#define UT_WRITE_VENDOR_OTHER (UT_WRITE | UT_VENDOR | UT_OTHER)
14569 +#define UT_WRITE_VENDOR_ENDPOINT (UT_WRITE | UT_VENDOR | UT_ENDPOINT)
14570 +
14571 +/* Requests */
14572 +#define UR_GET_STATUS 0x00
14573 +#define USTAT_STANDARD_STATUS 0x00
14574 +#define WUSTAT_WUSB_FEATURE 0x01
14575 +#define WUSTAT_CHANNEL_INFO 0x02
14576 +#define WUSTAT_RECEIVED_DATA 0x03
14577 +#define WUSTAT_MAS_AVAILABILITY 0x04
14578 +#define WUSTAT_CURRENT_TRANSMIT_POWER 0x05
14579 +#define UR_CLEAR_FEATURE 0x01
14580 +#define UR_SET_FEATURE 0x03
14581 +#define UR_SET_AND_TEST_FEATURE 0x0c
14582 +#define UR_SET_ADDRESS 0x05
14583 +#define UR_GET_DESCRIPTOR 0x06
14584 +#define UDESC_DEVICE 0x01
14585 +#define UDESC_CONFIG 0x02
14586 +#define UDESC_STRING 0x03
14587 +#define UDESC_INTERFACE 0x04
14588 +#define UDESC_ENDPOINT 0x05
14589 +#define UDESC_SS_USB_COMPANION 0x30
14590 +#define UDESC_DEVICE_QUALIFIER 0x06
14591 +#define UDESC_OTHER_SPEED_CONFIGURATION 0x07
14592 +#define UDESC_INTERFACE_POWER 0x08
14593 +#define UDESC_OTG 0x09
14594 +#define WUDESC_SECURITY 0x0c
14595 +#define WUDESC_KEY 0x0d
14596 +#define WUD_GET_KEY_INDEX(_wValue_) ((_wValue_) & 0xf)
14597 +#define WUD_GET_KEY_TYPE(_wValue_) (((_wValue_) & 0x30) >> 4)
14598 +#define WUD_KEY_TYPE_ASSOC 0x01
14599 +#define WUD_KEY_TYPE_GTK 0x02
14600 +#define WUD_GET_KEY_ORIGIN(_wValue_) (((_wValue_) & 0x40) >> 6)
14601 +#define WUD_KEY_ORIGIN_HOST 0x00
14602 +#define WUD_KEY_ORIGIN_DEVICE 0x01
14603 +#define WUDESC_ENCRYPTION_TYPE 0x0e
14604 +#define WUDESC_BOS 0x0f
14605 +#define WUDESC_DEVICE_CAPABILITY 0x10
14606 +#define WUDESC_WIRELESS_ENDPOINT_COMPANION 0x11
14607 +#define UDESC_BOS 0x0f
14608 +#define UDESC_DEVICE_CAPABILITY 0x10
14609 +#define UDESC_CS_DEVICE 0x21 /* class specific */
14610 +#define UDESC_CS_CONFIG 0x22
14611 +#define UDESC_CS_STRING 0x23
14612 +#define UDESC_CS_INTERFACE 0x24
14613 +#define UDESC_CS_ENDPOINT 0x25
14614 +#define UDESC_HUB 0x29
14615 +#define UR_SET_DESCRIPTOR 0x07
14616 +#define UR_GET_CONFIG 0x08
14617 +#define UR_SET_CONFIG 0x09
14618 +#define UR_GET_INTERFACE 0x0a
14619 +#define UR_SET_INTERFACE 0x0b
14620 +#define UR_SYNCH_FRAME 0x0c
14621 +#define WUR_SET_ENCRYPTION 0x0d
14622 +#define WUR_GET_ENCRYPTION 0x0e
14623 +#define WUR_SET_HANDSHAKE 0x0f
14624 +#define WUR_GET_HANDSHAKE 0x10
14625 +#define WUR_SET_CONNECTION 0x11
14626 +#define WUR_SET_SECURITY_DATA 0x12
14627 +#define WUR_GET_SECURITY_DATA 0x13
14628 +#define WUR_SET_WUSB_DATA 0x14
14629 +#define WUDATA_DRPIE_INFO 0x01
14630 +#define WUDATA_TRANSMIT_DATA 0x02
14631 +#define WUDATA_TRANSMIT_PARAMS 0x03
14632 +#define WUDATA_RECEIVE_PARAMS 0x04
14633 +#define WUDATA_TRANSMIT_POWER 0x05
14634 +#define WUR_LOOPBACK_DATA_WRITE 0x15
14635 +#define WUR_LOOPBACK_DATA_READ 0x16
14636 +#define WUR_SET_INTERFACE_DS 0x17
14637 +
14638 +/* Feature numbers */
14639 +#define UF_ENDPOINT_HALT 0
14640 +#define UF_DEVICE_REMOTE_WAKEUP 1
14641 +#define UF_TEST_MODE 2
14642 +#define UF_DEVICE_B_HNP_ENABLE 3
14643 +#define UF_DEVICE_A_HNP_SUPPORT 4
14644 +#define UF_DEVICE_A_ALT_HNP_SUPPORT 5
14645 +#define WUF_WUSB 3
14646 +#define WUF_TX_DRPIE 0x0
14647 +#define WUF_DEV_XMIT_PACKET 0x1
14648 +#define WUF_COUNT_PACKETS 0x2
14649 +#define WUF_CAPTURE_PACKETS 0x3
14650 +#define UF_FUNCTION_SUSPEND 0
14651 +#define UF_U1_ENABLE 48
14652 +#define UF_U2_ENABLE 49
14653 +#define UF_LTM_ENABLE 50
14654 +
14655 +/* Class requests from the USB 2.0 hub spec, table 11-15 */
14656 +#define UCR_CLEAR_HUB_FEATURE (0x2000 | UR_CLEAR_FEATURE)
14657 +#define UCR_CLEAR_PORT_FEATURE (0x2300 | UR_CLEAR_FEATURE)
14658 +#define UCR_GET_HUB_DESCRIPTOR (0xa000 | UR_GET_DESCRIPTOR)
14659 +#define UCR_GET_HUB_STATUS (0xa000 | UR_GET_STATUS)
14660 +#define UCR_GET_PORT_STATUS (0xa300 | UR_GET_STATUS)
14661 +#define UCR_SET_HUB_FEATURE (0x2000 | UR_SET_FEATURE)
14662 +#define UCR_SET_PORT_FEATURE (0x2300 | UR_SET_FEATURE)
14663 +#define UCR_SET_AND_TEST_PORT_FEATURE (0xa300 | UR_SET_AND_TEST_FEATURE)
14664 +
14665 +#ifdef _MSC_VER
14666 +#include <pshpack1.h>
14667 +#endif
14668 +
14669 +typedef struct {
14670 + uByte bLength;
14671 + uByte bDescriptorType;
14672 + uByte bDescriptorSubtype;
14673 +} UPACKED usb_descriptor_t;
14674 +
14675 +typedef struct {
14676 + uByte bLength;
14677 + uByte bDescriptorType;
14678 +} UPACKED usb_descriptor_header_t;
14679 +
14680 +typedef struct {
14681 + uByte bLength;
14682 + uByte bDescriptorType;
14683 + uWord bcdUSB;
14684 +#define UD_USB_2_0 0x0200
14685 +#define UD_IS_USB2(d) (UGETW((d)->bcdUSB) >= UD_USB_2_0)
14686 + uByte bDeviceClass;
14687 + uByte bDeviceSubClass;
14688 + uByte bDeviceProtocol;
14689 + uByte bMaxPacketSize;
14690 + /* The fields below are not part of the initial descriptor. */
14691 + uWord idVendor;
14692 + uWord idProduct;
14693 + uWord bcdDevice;
14694 + uByte iManufacturer;
14695 + uByte iProduct;
14696 + uByte iSerialNumber;
14697 + uByte bNumConfigurations;
14698 +} UPACKED usb_device_descriptor_t;
14699 +#define USB_DEVICE_DESCRIPTOR_SIZE 18
14700 +
14701 +typedef struct {
14702 + uByte bLength;
14703 + uByte bDescriptorType;
14704 + uWord wTotalLength;
14705 + uByte bNumInterface;
14706 + uByte bConfigurationValue;
14707 + uByte iConfiguration;
14708 +#define UC_ATT_ONE (1 << 7) /* must be set */
14709 +#define UC_ATT_SELFPOWER (1 << 6) /* self powered */
14710 +#define UC_ATT_WAKEUP (1 << 5) /* can wakeup */
14711 +#define UC_ATT_BATTERY (1 << 4) /* battery powered */
14712 + uByte bmAttributes;
14713 +#define UC_BUS_POWERED 0x80
14714 +#define UC_SELF_POWERED 0x40
14715 +#define UC_REMOTE_WAKEUP 0x20
14716 + uByte bMaxPower; /* max current in 2 mA units */
14717 +#define UC_POWER_FACTOR 2
14718 +} UPACKED usb_config_descriptor_t;
14719 +#define USB_CONFIG_DESCRIPTOR_SIZE 9
14720 +
14721 +typedef struct {
14722 + uByte bLength;
14723 + uByte bDescriptorType;
14724 + uByte bInterfaceNumber;
14725 + uByte bAlternateSetting;
14726 + uByte bNumEndpoints;
14727 + uByte bInterfaceClass;
14728 + uByte bInterfaceSubClass;
14729 + uByte bInterfaceProtocol;
14730 + uByte iInterface;
14731 +} UPACKED usb_interface_descriptor_t;
14732 +#define USB_INTERFACE_DESCRIPTOR_SIZE 9
14733 +
14734 +typedef struct {
14735 + uByte bLength;
14736 + uByte bDescriptorType;
14737 + uByte bEndpointAddress;
14738 +#define UE_GET_DIR(a) ((a) & 0x80)
14739 +#define UE_SET_DIR(a,d) ((a) | (((d)&1) << 7))
14740 +#define UE_DIR_IN 0x80
14741 +#define UE_DIR_OUT 0x00
14742 +#define UE_ADDR 0x0f
14743 +#define UE_GET_ADDR(a) ((a) & UE_ADDR)
14744 + uByte bmAttributes;
14745 +#define UE_XFERTYPE 0x03
14746 +#define UE_CONTROL 0x00
14747 +#define UE_ISOCHRONOUS 0x01
14748 +#define UE_BULK 0x02
14749 +#define UE_INTERRUPT 0x03
14750 +#define UE_GET_XFERTYPE(a) ((a) & UE_XFERTYPE)
14751 +#define UE_ISO_TYPE 0x0c
14752 +#define UE_ISO_ASYNC 0x04
14753 +#define UE_ISO_ADAPT 0x08
14754 +#define UE_ISO_SYNC 0x0c
14755 +#define UE_GET_ISO_TYPE(a) ((a) & UE_ISO_TYPE)
14756 + uWord wMaxPacketSize;
14757 + uByte bInterval;
14758 +} UPACKED usb_endpoint_descriptor_t;
14759 +#define USB_ENDPOINT_DESCRIPTOR_SIZE 7
14760 +
14761 +typedef struct ss_endpoint_companion_descriptor {
14762 + uByte bLength;
14763 + uByte bDescriptorType;
14764 + uByte bMaxBurst;
14765 +#define USSE_GET_MAX_STREAMS(a) ((a) & 0x1f)
14766 +#define USSE_SET_MAX_STREAMS(a, b) ((a) | ((b) & 0x1f))
14767 +#define USSE_GET_MAX_PACKET_NUM(a) ((a) & 0x03)
14768 +#define USSE_SET_MAX_PACKET_NUM(a, b) ((a) | ((b) & 0x03))
14769 + uByte bmAttributes;
14770 + uWord wBytesPerInterval;
14771 +} UPACKED ss_endpoint_companion_descriptor_t;
14772 +#define USB_SS_ENDPOINT_COMPANION_DESCRIPTOR_SIZE 6
14773 +
14774 +typedef struct {
14775 + uByte bLength;
14776 + uByte bDescriptorType;
14777 + uWord bString[127];
14778 +} UPACKED usb_string_descriptor_t;
14779 +#define USB_MAX_STRING_LEN 128
14780 +#define USB_LANGUAGE_TABLE 0 /* # of the string language id table */
14781 +
14782 +/* Hub specific request */
14783 +#define UR_GET_BUS_STATE 0x02
14784 +#define UR_CLEAR_TT_BUFFER 0x08
14785 +#define UR_RESET_TT 0x09
14786 +#define UR_GET_TT_STATE 0x0a
14787 +#define UR_STOP_TT 0x0b
14788 +
14789 +/* Hub features */
14790 +#define UHF_C_HUB_LOCAL_POWER 0
14791 +#define UHF_C_HUB_OVER_CURRENT 1
14792 +#define UHF_PORT_CONNECTION 0
14793 +#define UHF_PORT_ENABLE 1
14794 +#define UHF_PORT_SUSPEND 2
14795 +#define UHF_PORT_OVER_CURRENT 3
14796 +#define UHF_PORT_RESET 4
14797 +#define UHF_PORT_L1 5
14798 +#define UHF_PORT_POWER 8
14799 +#define UHF_PORT_LOW_SPEED 9
14800 +#define UHF_PORT_HIGH_SPEED 10
14801 +#define UHF_C_PORT_CONNECTION 16
14802 +#define UHF_C_PORT_ENABLE 17
14803 +#define UHF_C_PORT_SUSPEND 18
14804 +#define UHF_C_PORT_OVER_CURRENT 19
14805 +#define UHF_C_PORT_RESET 20
14806 +#define UHF_C_PORT_L1 23
14807 +#define UHF_PORT_TEST 21
14808 +#define UHF_PORT_INDICATOR 22
14809 +
14810 +typedef struct {
14811 + uByte bDescLength;
14812 + uByte bDescriptorType;
14813 + uByte bNbrPorts;
14814 + uWord wHubCharacteristics;
14815 +#define UHD_PWR 0x0003
14816 +#define UHD_PWR_GANGED 0x0000
14817 +#define UHD_PWR_INDIVIDUAL 0x0001
14818 +#define UHD_PWR_NO_SWITCH 0x0002
14819 +#define UHD_COMPOUND 0x0004
14820 +#define UHD_OC 0x0018
14821 +#define UHD_OC_GLOBAL 0x0000
14822 +#define UHD_OC_INDIVIDUAL 0x0008
14823 +#define UHD_OC_NONE 0x0010
14824 +#define UHD_TT_THINK 0x0060
14825 +#define UHD_TT_THINK_8 0x0000
14826 +#define UHD_TT_THINK_16 0x0020
14827 +#define UHD_TT_THINK_24 0x0040
14828 +#define UHD_TT_THINK_32 0x0060
14829 +#define UHD_PORT_IND 0x0080
14830 + uByte bPwrOn2PwrGood; /* delay in 2 ms units */
14831 +#define UHD_PWRON_FACTOR 2
14832 + uByte bHubContrCurrent;
14833 + uByte DeviceRemovable[32]; /* max 255 ports */
14834 +#define UHD_NOT_REMOV(desc, i) \
14835 + (((desc)->DeviceRemovable[(i)/8] >> ((i) % 8)) & 1)
14836 + /* deprecated */ uByte PortPowerCtrlMask[1];
14837 +} UPACKED usb_hub_descriptor_t;
14838 +#define USB_HUB_DESCRIPTOR_SIZE 9 /* includes deprecated PortPowerCtrlMask */
14839 +
14840 +typedef struct {
14841 + uByte bLength;
14842 + uByte bDescriptorType;
14843 + uWord bcdUSB;
14844 + uByte bDeviceClass;
14845 + uByte bDeviceSubClass;
14846 + uByte bDeviceProtocol;
14847 + uByte bMaxPacketSize0;
14848 + uByte bNumConfigurations;
14849 + uByte bReserved;
14850 +} UPACKED usb_device_qualifier_t;
14851 +#define USB_DEVICE_QUALIFIER_SIZE 10
14852 +
14853 +typedef struct {
14854 + uByte bLength;
14855 + uByte bDescriptorType;
14856 + uByte bmAttributes;
14857 +#define UOTG_SRP 0x01
14858 +#define UOTG_HNP 0x02
14859 +} UPACKED usb_otg_descriptor_t;
14860 +
14861 +/* OTG feature selectors */
14862 +#define UOTG_B_HNP_ENABLE 3
14863 +#define UOTG_A_HNP_SUPPORT 4
14864 +#define UOTG_A_ALT_HNP_SUPPORT 5
14865 +
14866 +typedef struct {
14867 + uWord wStatus;
14868 +/* Device status flags */
14869 +#define UDS_SELF_POWERED 0x0001
14870 +#define UDS_REMOTE_WAKEUP 0x0002
14871 +/* Endpoint status flags */
14872 +#define UES_HALT 0x0001
14873 +} UPACKED usb_status_t;
14874 +
14875 +typedef struct {
14876 + uWord wHubStatus;
14877 +#define UHS_LOCAL_POWER 0x0001
14878 +#define UHS_OVER_CURRENT 0x0002
14879 + uWord wHubChange;
14880 +} UPACKED usb_hub_status_t;
14881 +
14882 +typedef struct {
14883 + uWord wPortStatus;
14884 +#define UPS_CURRENT_CONNECT_STATUS 0x0001
14885 +#define UPS_PORT_ENABLED 0x0002
14886 +#define UPS_SUSPEND 0x0004
14887 +#define UPS_OVERCURRENT_INDICATOR 0x0008
14888 +#define UPS_RESET 0x0010
14889 +#define UPS_PORT_POWER 0x0100
14890 +#define UPS_LOW_SPEED 0x0200
14891 +#define UPS_HIGH_SPEED 0x0400
14892 +#define UPS_PORT_TEST 0x0800
14893 +#define UPS_PORT_INDICATOR 0x1000
14894 + uWord wPortChange;
14895 +#define UPS_C_CONNECT_STATUS 0x0001
14896 +#define UPS_C_PORT_ENABLED 0x0002
14897 +#define UPS_C_SUSPEND 0x0004
14898 +#define UPS_C_OVERCURRENT_INDICATOR 0x0008
14899 +#define UPS_C_PORT_RESET 0x0010
14900 +} UPACKED usb_port_status_t;
14901 +
14902 +#ifdef _MSC_VER
14903 +#include <poppack.h>
14904 +#endif
14905 +
14906 +/* Device class codes */
14907 +#define UDCLASS_IN_INTERFACE 0x00
14908 +#define UDCLASS_COMM 0x02
14909 +#define UDCLASS_HUB 0x09
14910 +#define UDSUBCLASS_HUB 0x00
14911 +#define UDPROTO_FSHUB 0x00
14912 +#define UDPROTO_HSHUBSTT 0x01
14913 +#define UDPROTO_HSHUBMTT 0x02
14914 +#define UDCLASS_DIAGNOSTIC 0xdc
14915 +#define UDCLASS_WIRELESS 0xe0
14916 +#define UDSUBCLASS_RF 0x01
14917 +#define UDPROTO_BLUETOOTH 0x01
14918 +#define UDCLASS_VENDOR 0xff
14919 +
14920 +/* Interface class codes */
14921 +#define UICLASS_UNSPEC 0x00
14922 +
14923 +#define UICLASS_AUDIO 0x01
14924 +#define UISUBCLASS_AUDIOCONTROL 1
14925 +#define UISUBCLASS_AUDIOSTREAM 2
14926 +#define UISUBCLASS_MIDISTREAM 3
14927 +
14928 +#define UICLASS_CDC 0x02 /* communication */
14929 +#define UISUBCLASS_DIRECT_LINE_CONTROL_MODEL 1
14930 +#define UISUBCLASS_ABSTRACT_CONTROL_MODEL 2
14931 +#define UISUBCLASS_TELEPHONE_CONTROL_MODEL 3
14932 +#define UISUBCLASS_MULTICHANNEL_CONTROL_MODEL 4
14933 +#define UISUBCLASS_CAPI_CONTROLMODEL 5
14934 +#define UISUBCLASS_ETHERNET_NETWORKING_CONTROL_MODEL 6
14935 +#define UISUBCLASS_ATM_NETWORKING_CONTROL_MODEL 7
14936 +#define UIPROTO_CDC_AT 1
14937 +
14938 +#define UICLASS_HID 0x03
14939 +#define UISUBCLASS_BOOT 1
14940 +#define UIPROTO_BOOT_KEYBOARD 1
14941 +
14942 +#define UICLASS_PHYSICAL 0x05
14943 +
14944 +#define UICLASS_IMAGE 0x06
14945 +
14946 +#define UICLASS_PRINTER 0x07
14947 +#define UISUBCLASS_PRINTER 1
14948 +#define UIPROTO_PRINTER_UNI 1
14949 +#define UIPROTO_PRINTER_BI 2
14950 +#define UIPROTO_PRINTER_1284 3
14951 +
14952 +#define UICLASS_MASS 0x08
14953 +#define UISUBCLASS_RBC 1
14954 +#define UISUBCLASS_SFF8020I 2
14955 +#define UISUBCLASS_QIC157 3
14956 +#define UISUBCLASS_UFI 4
14957 +#define UISUBCLASS_SFF8070I 5
14958 +#define UISUBCLASS_SCSI 6
14959 +#define UIPROTO_MASS_CBI_I 0
14960 +#define UIPROTO_MASS_CBI 1
14961 +#define UIPROTO_MASS_BBB_OLD 2 /* Not in the spec anymore */
14962 +#define UIPROTO_MASS_BBB 80 /* 'P' for the Iomega Zip drive */
14963 +
14964 +#define UICLASS_HUB 0x09
14965 +#define UISUBCLASS_HUB 0
14966 +#define UIPROTO_FSHUB 0
14967 +#define UIPROTO_HSHUBSTT 0 /* Yes, same as previous */
14968 +#define UIPROTO_HSHUBMTT 1
14969 +
14970 +#define UICLASS_CDC_DATA 0x0a
14971 +#define UISUBCLASS_DATA 0
14972 +#define UIPROTO_DATA_ISDNBRI 0x30 /* Physical iface */
14973 +#define UIPROTO_DATA_HDLC 0x31 /* HDLC */
14974 +#define UIPROTO_DATA_TRANSPARENT 0x32 /* Transparent */
14975 +#define UIPROTO_DATA_Q921M 0x50 /* Management for Q921 */
14976 +#define UIPROTO_DATA_Q921 0x51 /* Data for Q921 */
14977 +#define UIPROTO_DATA_Q921TM 0x52 /* TEI multiplexer for Q921 */
14978 +#define UIPROTO_DATA_V42BIS 0x90 /* Data compression */
14979 +#define UIPROTO_DATA_Q931 0x91 /* Euro-ISDN */
14980 +#define UIPROTO_DATA_V120 0x92 /* V.24 rate adaption */
14981 +#define UIPROTO_DATA_CAPI 0x93 /* CAPI 2.0 commands */
14982 +#define UIPROTO_DATA_HOST_BASED 0xfd /* Host based driver */
14983 +#define UIPROTO_DATA_PUF 0xfe /* see Prot. Unit Func. Desc.*/
14984 +#define UIPROTO_DATA_VENDOR 0xff /* Vendor specific */
14985 +
14986 +#define UICLASS_SMARTCARD 0x0b
14987 +
14988 +/*#define UICLASS_FIRM_UPD 0x0c*/
14989 +
14990 +#define UICLASS_SECURITY 0x0d
14991 +
14992 +#define UICLASS_DIAGNOSTIC 0xdc
14993 +
14994 +#define UICLASS_WIRELESS 0xe0
14995 +#define UISUBCLASS_RF 0x01
14996 +#define UIPROTO_BLUETOOTH 0x01
14997 +
14998 +#define UICLASS_APPL_SPEC 0xfe
14999 +#define UISUBCLASS_FIRMWARE_DOWNLOAD 1
15000 +#define UISUBCLASS_IRDA 2
15001 +#define UIPROTO_IRDA 0
15002 +
15003 +#define UICLASS_VENDOR 0xff
15004 +
15005 +#define USB_HUB_MAX_DEPTH 5
15006 +
15007 +/*
15008 + * Minimum time a device needs to be powered down to go through
15009 + * a power cycle. XXX Are these times in the spec?
15010 + */
15011 +#define USB_POWER_DOWN_TIME 200 /* ms */
15012 +#define USB_PORT_POWER_DOWN_TIME 100 /* ms */
15013 +
15014 +#if 0
15015 +/* These are the values from the spec. */
15016 +#define USB_PORT_RESET_DELAY 10 /* ms */
15017 +#define USB_PORT_ROOT_RESET_DELAY 50 /* ms */
15018 +#define USB_PORT_RESET_RECOVERY 10 /* ms */
15019 +#define USB_PORT_POWERUP_DELAY 100 /* ms */
15020 +#define USB_SET_ADDRESS_SETTLE 2 /* ms */
15021 +#define USB_RESUME_DELAY (20*5) /* ms */
15022 +#define USB_RESUME_WAIT 10 /* ms */
15023 +#define USB_RESUME_RECOVERY 10 /* ms */
15024 +#define USB_EXTRA_POWER_UP_TIME 0 /* ms */
15025 +#else
15026 +/* Allow for marginal (i.e. non-conforming) devices. */
15027 +#define USB_PORT_RESET_DELAY 50 /* ms */
15028 +#define USB_PORT_ROOT_RESET_DELAY 250 /* ms */
15029 +#define USB_PORT_RESET_RECOVERY 250 /* ms */
15030 +#define USB_PORT_POWERUP_DELAY 300 /* ms */
15031 +#define USB_SET_ADDRESS_SETTLE 10 /* ms */
15032 +#define USB_RESUME_DELAY (50*5) /* ms */
15033 +#define USB_RESUME_WAIT 50 /* ms */
15034 +#define USB_RESUME_RECOVERY 50 /* ms */
15035 +#define USB_EXTRA_POWER_UP_TIME 20 /* ms */
15036 +#endif
15037 +
15038 +#define USB_MIN_POWER 100 /* mA */
15039 +#define USB_MAX_POWER 500 /* mA */
15040 +
15041 +#define USB_BUS_RESET_DELAY 100 /* ms XXX?*/
15042 +
15043 +#define USB_UNCONFIG_NO 0
15044 +#define USB_UNCONFIG_INDEX (-1)
15045 +
15046 +/*** ioctl() related stuff ***/
15047 +
15048 +struct usb_ctl_request {
15049 + int ucr_addr;
15050 + usb_device_request_t ucr_request;
15051 + void *ucr_data;
15052 + int ucr_flags;
15053 +#define USBD_SHORT_XFER_OK 0x04 /* allow short reads */
15054 + int ucr_actlen; /* actual length transferred */
15055 +};
15056 +
15057 +struct usb_alt_interface {
15058 + int uai_config_index;
15059 + int uai_interface_index;
15060 + int uai_alt_no;
15061 +};
15062 +
15063 +#define USB_CURRENT_CONFIG_INDEX (-1)
15064 +#define USB_CURRENT_ALT_INDEX (-1)
15065 +
15066 +struct usb_config_desc {
15067 + int ucd_config_index;
15068 + usb_config_descriptor_t ucd_desc;
15069 +};
15070 +
15071 +struct usb_interface_desc {
15072 + int uid_config_index;
15073 + int uid_interface_index;
15074 + int uid_alt_index;
15075 + usb_interface_descriptor_t uid_desc;
15076 +};
15077 +
15078 +struct usb_endpoint_desc {
15079 + int ued_config_index;
15080 + int ued_interface_index;
15081 + int ued_alt_index;
15082 + int ued_endpoint_index;
15083 + usb_endpoint_descriptor_t ued_desc;
15084 +};
15085 +
15086 +struct usb_full_desc {
15087 + int ufd_config_index;
15088 + u_int ufd_size;
15089 + u_char *ufd_data;
15090 +};
15091 +
15092 +struct usb_string_desc {
15093 + int usd_string_index;
15094 + int usd_language_id;
15095 + usb_string_descriptor_t usd_desc;
15096 +};
15097 +
15098 +struct usb_ctl_report_desc {
15099 + int ucrd_size;
15100 + u_char ucrd_data[1024]; /* filled data size will vary */
15101 +};
15102 +
15103 +typedef struct { u_int32_t cookie; } usb_event_cookie_t;
15104 +
15105 +#define USB_MAX_DEVNAMES 4
15106 +#define USB_MAX_DEVNAMELEN 16
15107 +struct usb_device_info {
15108 + u_int8_t udi_bus;
15109 + u_int8_t udi_addr; /* device address */
15110 + usb_event_cookie_t udi_cookie;
15111 + char udi_product[USB_MAX_STRING_LEN];
15112 + char udi_vendor[USB_MAX_STRING_LEN];
15113 + char udi_release[8];
15114 + u_int16_t udi_productNo;
15115 + u_int16_t udi_vendorNo;
15116 + u_int16_t udi_releaseNo;
15117 + u_int8_t udi_class;
15118 + u_int8_t udi_subclass;
15119 + u_int8_t udi_protocol;
15120 + u_int8_t udi_config;
15121 + u_int8_t udi_speed;
15122 +#define USB_SPEED_UNKNOWN 0
15123 +#define USB_SPEED_LOW 1
15124 +#define USB_SPEED_FULL 2
15125 +#define USB_SPEED_HIGH 3
15126 +#define USB_SPEED_VARIABLE 4
15127 +#define USB_SPEED_SUPER 5
15128 + int udi_power; /* power consumption in mA, 0 if selfpowered */
15129 + int udi_nports;
15130 + char udi_devnames[USB_MAX_DEVNAMES][USB_MAX_DEVNAMELEN];
15131 + u_int8_t udi_ports[16];/* hub only: addresses of devices on ports */
15132 +#define USB_PORT_ENABLED 0xff
15133 +#define USB_PORT_SUSPENDED 0xfe
15134 +#define USB_PORT_POWERED 0xfd
15135 +#define USB_PORT_DISABLED 0xfc
15136 +};
15137 +
15138 +struct usb_ctl_report {
15139 + int ucr_report;
15140 + u_char ucr_data[1024]; /* filled data size will vary */
15141 +};
15142 +
15143 +struct usb_device_stats {
15144 + u_long uds_requests[4]; /* indexed by transfer type UE_* */
15145 +};
15146 +
15147 +#define WUSB_MIN_IE 0x80
15148 +#define WUSB_WCTA_IE 0x80
15149 +#define WUSB_WCONNECTACK_IE 0x81
15150 +#define WUSB_WHOSTINFO_IE 0x82
15151 +#define WUHI_GET_CA(_bmAttributes_) ((_bmAttributes_) & 0x3)
15152 +#define WUHI_CA_RECONN 0x00
15153 +#define WUHI_CA_LIMITED 0x01
15154 +#define WUHI_CA_ALL 0x03
15155 +#define WUHI_GET_MLSI(_bmAttributes_) (((_bmAttributes_) & 0x38) >> 3)
15156 +#define WUSB_WCHCHANGEANNOUNCE_IE 0x83
15157 +#define WUSB_WDEV_DISCONNECT_IE 0x84
15158 +#define WUSB_WHOST_DISCONNECT_IE 0x85
15159 +#define WUSB_WRELEASE_CHANNEL_IE 0x86
15160 +#define WUSB_WWORK_IE 0x87
15161 +#define WUSB_WCHANNEL_STOP_IE 0x88
15162 +#define WUSB_WDEV_KEEPALIVE_IE 0x89
15163 +#define WUSB_WISOCH_DISCARD_IE 0x8A
15164 +#define WUSB_WRESETDEVICE_IE 0x8B
15165 +#define WUSB_WXMIT_PACKET_ADJUST_IE 0x8C
15166 +#define WUSB_MAX_IE 0x8C
15167 +
15168 +/* Device Notification Types */
15169 +
15170 +#define WUSB_DN_MIN 0x01
15171 +#define WUSB_DN_CONNECT 0x01
15172 +# define WUSB_DA_OLDCONN 0x00
15173 +# define WUSB_DA_NEWCONN 0x01
15174 +# define WUSB_DA_SELF_BEACON 0x02
15175 +# define WUSB_DA_DIR_BEACON 0x04
15176 +# define WUSB_DA_NO_BEACON 0x06
15177 +#define WUSB_DN_DISCONNECT 0x02
15178 +#define WUSB_DN_EPRDY 0x03
15179 +#define WUSB_DN_MASAVAILCHANGED 0x04
15180 +#define WUSB_DN_REMOTEWAKEUP 0x05
15181 +#define WUSB_DN_SLEEP 0x06
15182 +#define WUSB_DN_ALIVE 0x07
15183 +#define WUSB_DN_MAX 0x07
15184 +
15185 +#ifdef _MSC_VER
15186 +#include <pshpack1.h>
15187 +#endif
15188 +
15189 +/* WUSB Handshake Data. Used during the SET/GET HANDSHAKE requests */
15190 +typedef struct wusb_hndshk_data {
15191 + uByte bMessageNumber;
15192 + uByte bStatus;
15193 + uByte tTKID[3];
15194 + uByte bReserved;
15195 + uByte CDID[16];
15196 + uByte Nonce[16];
15197 + uByte MIC[8];
15198 +} UPACKED wusb_hndshk_data_t;
15199 +#define WUSB_HANDSHAKE_LEN_FOR_MIC 38
15200 +
15201 +/* WUSB Connection Context */
15202 +typedef struct wusb_conn_context {
15203 + uByte CHID [16];
15204 + uByte CDID [16];
15205 + uByte CK [16];
15206 +} UPACKED wusb_conn_context_t;
15207 +
15208 +/* WUSB Security Descriptor */
15209 +typedef struct wusb_security_desc {
15210 + uByte bLength;
15211 + uByte bDescriptorType;
15212 + uWord wTotalLength;
15213 + uByte bNumEncryptionTypes;
15214 +} UPACKED wusb_security_desc_t;
15215 +
15216 +/* WUSB Encryption Type Descriptor */
15217 +typedef struct wusb_encrypt_type_desc {
15218 + uByte bLength;
15219 + uByte bDescriptorType;
15220 +
15221 + uByte bEncryptionType;
15222 +#define WUETD_UNSECURE 0
15223 +#define WUETD_WIRED 1
15224 +#define WUETD_CCM_1 2
15225 +#define WUETD_RSA_1 3
15226 +
15227 + uByte bEncryptionValue;
15228 + uByte bAuthKeyIndex;
15229 +} UPACKED wusb_encrypt_type_desc_t;
15230 +
15231 +/* WUSB Key Descriptor */
15232 +typedef struct wusb_key_desc {
15233 + uByte bLength;
15234 + uByte bDescriptorType;
15235 + uByte tTKID[3];
15236 + uByte bReserved;
15237 + uByte KeyData[1]; /* variable length */
15238 +} UPACKED wusb_key_desc_t;
15239 +
15240 +/* WUSB BOS Descriptor (Binary device Object Store) */
15241 +typedef struct wusb_bos_desc {
15242 + uByte bLength;
15243 + uByte bDescriptorType;
15244 + uWord wTotalLength;
15245 + uByte bNumDeviceCaps;
15246 +} UPACKED wusb_bos_desc_t;
15247 +
15248 +#define USB_DEVICE_CAPABILITY_20_EXTENSION 0x02
15249 +typedef struct usb_dev_cap_20_ext_desc {
15250 + uByte bLength;
15251 + uByte bDescriptorType;
15252 + uByte bDevCapabilityType;
15253 +#define USB_20_EXT_LPM 0x02
15254 + uDWord bmAttributes;
15255 +} UPACKED usb_dev_cap_20_ext_desc_t;
15256 +
15257 +#define USB_DEVICE_CAPABILITY_SS_USB 0x03
15258 +typedef struct usb_dev_cap_ss_usb {
15259 + uByte bLength;
15260 + uByte bDescriptorType;
15261 + uByte bDevCapabilityType;
15262 +#define USB_DC_SS_USB_LTM_CAPABLE 0x02
15263 + uByte bmAttributes;
15264 +#define USB_DC_SS_USB_SPEED_SUPPORT_LOW 0x01
15265 +#define USB_DC_SS_USB_SPEED_SUPPORT_FULL 0x02
15266 +#define USB_DC_SS_USB_SPEED_SUPPORT_HIGH 0x04
15267 +#define USB_DC_SS_USB_SPEED_SUPPORT_SS 0x08
15268 + uWord wSpeedsSupported;
15269 + uByte bFunctionalitySupport;
15270 + uByte bU1DevExitLat;
15271 + uWord wU2DevExitLat;
15272 +} UPACKED usb_dev_cap_ss_usb_t;
15273 +
15274 +#define USB_DEVICE_CAPABILITY_CONTAINER_ID 0x04
15275 +typedef struct usb_dev_cap_container_id {
15276 + uByte bLength;
15277 + uByte bDescriptorType;
15278 + uByte bDevCapabilityType;
15279 + uByte bReserved;
15280 + uByte containerID[16];
15281 +} UPACKED usb_dev_cap_container_id_t;
15282 +
15283 +/* Device Capability Type Codes */
15284 +#define WUSB_DEVICE_CAPABILITY_WIRELESS_USB 0x01
15285 +
15286 +/* Device Capability Descriptor */
15287 +typedef struct wusb_dev_cap_desc {
15288 + uByte bLength;
15289 + uByte bDescriptorType;
15290 + uByte bDevCapabilityType;
15291 + uByte caps[1]; /* Variable length */
15292 +} UPACKED wusb_dev_cap_desc_t;
15293 +
15294 +/* Device Capability Descriptor */
15295 +typedef struct wusb_dev_cap_uwb_desc {
15296 + uByte bLength;
15297 + uByte bDescriptorType;
15298 + uByte bDevCapabilityType;
15299 + uByte bmAttributes;
15300 + uWord wPHYRates; /* Bitmap */
15301 + uByte bmTFITXPowerInfo;
15302 + uByte bmFFITXPowerInfo;
15303 + uWord bmBandGroup;
15304 + uByte bReserved;
15305 +} UPACKED wusb_dev_cap_uwb_desc_t;
15306 +
15307 +/* Wireless USB Endpoint Companion Descriptor */
15308 +typedef struct wusb_endpoint_companion_desc {
15309 + uByte bLength;
15310 + uByte bDescriptorType;
15311 + uByte bMaxBurst;
15312 + uByte bMaxSequence;
15313 + uWord wMaxStreamDelay;
15314 + uWord wOverTheAirPacketSize;
15315 + uByte bOverTheAirInterval;
15316 + uByte bmCompAttributes;
15317 +} UPACKED wusb_endpoint_companion_desc_t;
15318 +
15319 +/* Wireless USB Numeric Association M1 Data Structure */
15320 +typedef struct wusb_m1_data {
15321 + uByte version;
15322 + uWord langId;
15323 + uByte deviceFriendlyNameLength;
15324 + uByte sha_256_m3[32];
15325 + uByte deviceFriendlyName[256];
15326 +} UPACKED wusb_m1_data_t;
15327 +
15328 +typedef struct wusb_m2_data {
15329 + uByte version;
15330 + uWord langId;
15331 + uByte hostFriendlyNameLength;
15332 + uByte pkh[384];
15333 + uByte hostFriendlyName[256];
15334 +} UPACKED wusb_m2_data_t;
15335 +
15336 +typedef struct wusb_m3_data {
15337 + uByte pkd[384];
15338 + uByte nd;
15339 +} UPACKED wusb_m3_data_t;
15340 +
15341 +typedef struct wusb_m4_data {
15342 + uDWord _attributeTypeIdAndLength_1;
15343 + uWord associationTypeId;
15344 +
15345 + uDWord _attributeTypeIdAndLength_2;
15346 + uWord associationSubTypeId;
15347 +
15348 + uDWord _attributeTypeIdAndLength_3;
15349 + uDWord length;
15350 +
15351 + uDWord _attributeTypeIdAndLength_4;
15352 + uDWord associationStatus;
15353 +
15354 + uDWord _attributeTypeIdAndLength_5;
15355 + uByte chid[16];
15356 +
15357 + uDWord _attributeTypeIdAndLength_6;
15358 + uByte cdid[16];
15359 +
15360 + uDWord _attributeTypeIdAndLength_7;
15361 + uByte bandGroups[2];
15362 +} UPACKED wusb_m4_data_t;
15363 +
15364 +#ifdef _MSC_VER
15365 +#include <poppack.h>
15366 +#endif
15367 +
15368 +#ifdef __cplusplus
15369 +}
15370 +#endif
15371 +
15372 +#endif /* _USB_H_ */
15373 --- /dev/null
15374 +++ b/drivers/usb/host/dwc_otg/Makefile
15375 @@ -0,0 +1,85 @@
15376 +#
15377 +# Makefile for DWC_otg Highspeed USB controller driver
15378 +#
15379 +
15380 +ifneq ($(KERNELRELEASE),)
15381 +
15382 +# Use the BUS_INTERFACE variable to select the bus the driver is compiled for:
15383 +# PCI (PCI_INTERFACE), LM (LM_INTERFACE), or the platform bus (PLATFORM_INTERFACE, the default).
15384 +ifeq ($(BUS_INTERFACE),)
15385 +# BUS_INTERFACE = -DPCI_INTERFACE
15386 +# BUS_INTERFACE = -DLM_INTERFACE
15387 + BUS_INTERFACE = -DPLATFORM_INTERFACE
15388 +endif
15389 +
15390 +#ccflags-y += -DDEBUG
15391 +#ccflags-y += -DDWC_OTG_DEBUGLEV=1 # reduce common debug msgs
15392 +
15393 +# Use one of the following flags to compile the software in host-only or
15394 +# device-only mode.
15395 +#ccflags-y += -DDWC_HOST_ONLY
15396 +#ccflags-y += -DDWC_DEVICE_ONLY
15397 +
15398 +ccflags-y += -Dlinux -DDWC_HS_ELECT_TST
15399 +#ccflags-y += -DDWC_EN_ISOC
15400 +ccflags-y += -I$(srctree)/drivers/usb/host/dwc_common_port
15401 +#ccflags-y += -I$(PORTLIB)
15402 +ccflags-y += -DDWC_LINUX
15403 +ccflags-y += $(CFI)
15404 +ccflags-y += $(BUS_INTERFACE)
15405 +#ccflags-y += -DDWC_DEV_SRPCAP
15406 +
15407 +obj-$(CONFIG_USB_DWCOTG) += dwc_otg.o
15408 +
15409 +dwc_otg-objs := dwc_otg_driver.o dwc_otg_attr.o
15410 +dwc_otg-objs += dwc_otg_cil.o dwc_otg_cil_intr.o
15411 +dwc_otg-objs += dwc_otg_pcd_linux.o dwc_otg_pcd.o dwc_otg_pcd_intr.o
15412 +dwc_otg-objs += dwc_otg_hcd.o dwc_otg_hcd_linux.o dwc_otg_hcd_intr.o dwc_otg_hcd_queue.o dwc_otg_hcd_ddma.o
15413 +dwc_otg-objs += dwc_otg_adp.o
15414 +dwc_otg-objs += dwc_otg_fiq_fsm.o
15415 +ifneq ($(CONFIG_ARM64),y)
15416 +dwc_otg-objs += dwc_otg_fiq_stub.o
15417 +endif
15418 +
15419 +ifneq ($(CFI),)
15420 +dwc_otg-objs += dwc_otg_cfi.o
15421 +endif
15422 +
15423 +kernrelwd := $(subst ., ,$(KERNELRELEASE))
15424 +kernrel3 := $(word 1,$(kernrelwd)).$(word 2,$(kernrelwd)).$(word 3,$(kernrelwd))
15425 +
15426 +ifneq ($(kernrel3),2.6.20)
15427 +ccflags-y += $(CPPFLAGS)
15428 +endif
15429 +
15430 +else
15431 +
15432 +PWD := $(shell pwd)
15433 +PORTLIB := $(PWD)/../dwc_common_port
15434 +
15435 +# Command paths
15436 +CTAGS := $(CTAGS)
15437 +DOXYGEN := $(DOXYGEN)
15438 +
15439 +default: portlib
15440 + $(MAKE) -C$(KDIR) M=$(PWD) ARCH=$(ARCH) CROSS_COMPILE=$(CROSS_COMPILE) modules
15441 +
15442 +install: default
15443 + $(MAKE) -C$(KDIR) M=$(PORTLIB) modules_install
15444 + $(MAKE) -C$(KDIR) M=$(PWD) modules_install
15445 +
15446 +portlib:
15447 + $(MAKE) -C$(KDIR) M=$(PORTLIB) ARCH=$(ARCH) CROSS_COMPILE=$(CROSS_COMPILE) modules
15448 + cp $(PORTLIB)/Module.symvers $(PWD)/
15449 +
15450 +docs: $(wildcard *.[hc]) doc/doxygen.cfg
15451 + $(DOXYGEN) doc/doxygen.cfg
15452 +
15453 +tags: $(wildcard *.[hc])
15454 + $(CTAGS) -e $(wildcard *.[hc]) $(wildcard linux/*.[hc]) $(wildcard $(KDIR)/include/linux/usb*.h)
15455 +
15456 +
15457 +clean:
15458 + rm -rf *.o *.ko .*cmd *.mod.c .tmp_versions Module.symvers
15459 +
15460 +endif
15461 --- /dev/null
15462 +++ b/drivers/usb/host/dwc_otg/doc/doxygen.cfg
15463 @@ -0,0 +1,224 @@
15464 +# Doxyfile 1.3.9.1
15465 +
15466 +#---------------------------------------------------------------------------
15467 +# Project related configuration options
15468 +#---------------------------------------------------------------------------
15469 +PROJECT_NAME = "DesignWare USB 2.0 OTG Controller (DWC_otg) Device Driver"
15470 +PROJECT_NUMBER = v3.00a
15471 +OUTPUT_DIRECTORY = ./doc/
15472 +CREATE_SUBDIRS = NO
15473 +OUTPUT_LANGUAGE = English
15474 +BRIEF_MEMBER_DESC = YES
15475 +REPEAT_BRIEF = YES
15476 +ABBREVIATE_BRIEF = "The $name class" \
15477 + "The $name widget" \
15478 + "The $name file" \
15479 + is \
15480 + provides \
15481 + specifies \
15482 + contains \
15483 + represents \
15484 + a \
15485 + an \
15486 + the
15487 +ALWAYS_DETAILED_SEC = NO
15488 +INLINE_INHERITED_MEMB = NO
15489 +FULL_PATH_NAMES = NO
15490 +STRIP_FROM_PATH =
15491 +STRIP_FROM_INC_PATH =
15492 +SHORT_NAMES = NO
15493 +JAVADOC_AUTOBRIEF = YES
15494 +MULTILINE_CPP_IS_BRIEF = NO
15495 +INHERIT_DOCS = YES
15496 +DISTRIBUTE_GROUP_DOC = NO
15497 +TAB_SIZE = 8
15498 +ALIASES =
15499 +OPTIMIZE_OUTPUT_FOR_C = YES
15500 +OPTIMIZE_OUTPUT_JAVA = NO
15501 +SUBGROUPING = YES
15502 +#---------------------------------------------------------------------------
15503 +# Build related configuration options
15504 +#---------------------------------------------------------------------------
15505 +EXTRACT_ALL = NO
15506 +EXTRACT_PRIVATE = YES
15507 +EXTRACT_STATIC = YES
15508 +EXTRACT_LOCAL_CLASSES = YES
15509 +EXTRACT_LOCAL_METHODS = NO
15510 +HIDE_UNDOC_MEMBERS = NO
15511 +HIDE_UNDOC_CLASSES = NO
15512 +HIDE_FRIEND_COMPOUNDS = NO
15513 +HIDE_IN_BODY_DOCS = NO
15514 +INTERNAL_DOCS = NO
15515 +CASE_SENSE_NAMES = NO
15516 +HIDE_SCOPE_NAMES = NO
15517 +SHOW_INCLUDE_FILES = YES
15518 +INLINE_INFO = YES
15519 +SORT_MEMBER_DOCS = NO
15520 +SORT_BRIEF_DOCS = NO
15521 +SORT_BY_SCOPE_NAME = NO
15522 +GENERATE_TODOLIST = YES
15523 +GENERATE_TESTLIST = YES
15524 +GENERATE_BUGLIST = YES
15525 +GENERATE_DEPRECATEDLIST= YES
15526 +ENABLED_SECTIONS =
15527 +MAX_INITIALIZER_LINES = 30
15528 +SHOW_USED_FILES = YES
15529 +SHOW_DIRECTORIES = YES
15530 +#---------------------------------------------------------------------------
15531 +# configuration options related to warning and progress messages
15532 +#---------------------------------------------------------------------------
15533 +QUIET = YES
15534 +WARNINGS = YES
15535 +WARN_IF_UNDOCUMENTED = NO
15536 +WARN_IF_DOC_ERROR = YES
15537 +WARN_FORMAT = "$file:$line: $text"
15538 +WARN_LOGFILE =
15539 +#---------------------------------------------------------------------------
15540 +# configuration options related to the input files
15541 +#---------------------------------------------------------------------------
15542 +INPUT = .
15543 +FILE_PATTERNS = *.c \
15544 + *.h \
15545 + ./linux/*.c \
15546 + ./linux/*.h
15547 +RECURSIVE = NO
15548 +EXCLUDE = ./test/ \
15549 + ./dwc_otg/.AppleDouble/
15550 +EXCLUDE_SYMLINKS = YES
15551 +EXCLUDE_PATTERNS = *.mod.*
15552 +EXAMPLE_PATH =
15553 +EXAMPLE_PATTERNS = *
15554 +EXAMPLE_RECURSIVE = NO
15555 +IMAGE_PATH =
15556 +INPUT_FILTER =
15557 +FILTER_PATTERNS =
15558 +FILTER_SOURCE_FILES = NO
15559 +#---------------------------------------------------------------------------
15560 +# configuration options related to source browsing
15561 +#---------------------------------------------------------------------------
15562 +SOURCE_BROWSER = YES
15563 +INLINE_SOURCES = NO
15564 +STRIP_CODE_COMMENTS = YES
15565 +REFERENCED_BY_RELATION = NO
15566 +REFERENCES_RELATION = NO
15567 +VERBATIM_HEADERS = NO
15568 +#---------------------------------------------------------------------------
15569 +# configuration options related to the alphabetical class index
15570 +#---------------------------------------------------------------------------
15571 +ALPHABETICAL_INDEX = NO
15572 +COLS_IN_ALPHA_INDEX = 5
15573 +IGNORE_PREFIX =
15574 +#---------------------------------------------------------------------------
15575 +# configuration options related to the HTML output
15576 +#---------------------------------------------------------------------------
15577 +GENERATE_HTML = YES
15578 +HTML_OUTPUT = html
15579 +HTML_FILE_EXTENSION = .html
15580 +HTML_HEADER =
15581 +HTML_FOOTER =
15582 +HTML_STYLESHEET =
15583 +HTML_ALIGN_MEMBERS = YES
15584 +GENERATE_HTMLHELP = NO
15585 +CHM_FILE =
15586 +HHC_LOCATION =
15587 +GENERATE_CHI = NO
15588 +BINARY_TOC = NO
15589 +TOC_EXPAND = NO
15590 +DISABLE_INDEX = NO
15591 +ENUM_VALUES_PER_LINE = 4
15592 +GENERATE_TREEVIEW = YES
15593 +TREEVIEW_WIDTH = 250
15594 +#---------------------------------------------------------------------------
15595 +# configuration options related to the LaTeX output
15596 +#---------------------------------------------------------------------------
15597 +GENERATE_LATEX = NO
15598 +LATEX_OUTPUT = latex
15599 +LATEX_CMD_NAME = latex
15600 +MAKEINDEX_CMD_NAME = makeindex
15601 +COMPACT_LATEX = NO
15602 +PAPER_TYPE = a4wide
15603 +EXTRA_PACKAGES =
15604 +LATEX_HEADER =
15605 +PDF_HYPERLINKS = NO
15606 +USE_PDFLATEX = NO
15607 +LATEX_BATCHMODE = NO
15608 +LATEX_HIDE_INDICES = NO
15609 +#---------------------------------------------------------------------------
15610 +# configuration options related to the RTF output
15611 +#---------------------------------------------------------------------------
15612 +GENERATE_RTF = NO
15613 +RTF_OUTPUT = rtf
15614 +COMPACT_RTF = NO
15615 +RTF_HYPERLINKS = NO
15616 +RTF_STYLESHEET_FILE =
15617 +RTF_EXTENSIONS_FILE =
15618 +#---------------------------------------------------------------------------
15619 +# configuration options related to the man page output
15620 +#---------------------------------------------------------------------------
15621 +GENERATE_MAN = NO
15622 +MAN_OUTPUT = man
15623 +MAN_EXTENSION = .3
15624 +MAN_LINKS = NO
15625 +#---------------------------------------------------------------------------
15626 +# configuration options related to the XML output
15627 +#---------------------------------------------------------------------------
15628 +GENERATE_XML = NO
15629 +XML_OUTPUT = xml
15630 +XML_SCHEMA =
15631 +XML_DTD =
15632 +XML_PROGRAMLISTING = YES
15633 +#---------------------------------------------------------------------------
15634 +# configuration options for the AutoGen Definitions output
15635 +#---------------------------------------------------------------------------
15636 +GENERATE_AUTOGEN_DEF = NO
15637 +#---------------------------------------------------------------------------
15638 +# configuration options related to the Perl module output
15639 +#---------------------------------------------------------------------------
15640 +GENERATE_PERLMOD = NO
15641 +PERLMOD_LATEX = NO
15642 +PERLMOD_PRETTY = YES
15643 +PERLMOD_MAKEVAR_PREFIX =
15644 +#---------------------------------------------------------------------------
15645 +# Configuration options related to the preprocessor
15646 +#---------------------------------------------------------------------------
15647 +ENABLE_PREPROCESSING = YES
15648 +MACRO_EXPANSION = YES
15649 +EXPAND_ONLY_PREDEF = YES
15650 +SEARCH_INCLUDES = YES
15651 +INCLUDE_PATH =
15652 +INCLUDE_FILE_PATTERNS =
15653 +PREDEFINED = DEVICE_ATTR DWC_EN_ISOC
15654 +EXPAND_AS_DEFINED = DWC_OTG_DEVICE_ATTR_BITFIELD_SHOW DWC_OTG_DEVICE_ATTR_BITFIELD_STORE DWC_OTG_DEVICE_ATTR_BITFIELD_RW DWC_OTG_DEVICE_ATTR_BITFIELD_RO DWC_OTG_DEVICE_ATTR_REG_SHOW DWC_OTG_DEVICE_ATTR_REG_STORE DWC_OTG_DEVICE_ATTR_REG32_RW DWC_OTG_DEVICE_ATTR_REG32_RO DWC_EN_ISOC
15655 +SKIP_FUNCTION_MACROS = NO
15656 +#---------------------------------------------------------------------------
15657 +# Configuration::additions related to external references
15658 +#---------------------------------------------------------------------------
15659 +TAGFILES =
15660 +GENERATE_TAGFILE =
15661 +ALLEXTERNALS = NO
15662 +EXTERNAL_GROUPS = YES
15663 +PERL_PATH = /usr/bin/perl
15664 +#---------------------------------------------------------------------------
15665 +# Configuration options related to the dot tool
15666 +#---------------------------------------------------------------------------
15667 +CLASS_DIAGRAMS = YES
15668 +HIDE_UNDOC_RELATIONS = YES
15669 +HAVE_DOT = NO
15670 +CLASS_GRAPH = YES
15671 +COLLABORATION_GRAPH = YES
15672 +UML_LOOK = NO
15673 +TEMPLATE_RELATIONS = NO
15674 +INCLUDE_GRAPH = YES
15675 +INCLUDED_BY_GRAPH = YES
15676 +CALL_GRAPH = NO
15677 +GRAPHICAL_HIERARCHY = YES
15678 +DOT_IMAGE_FORMAT = png
15679 +DOT_PATH =
15680 +DOTFILE_DIRS =
15681 +MAX_DOT_GRAPH_DEPTH = 1000
15682 +GENERATE_LEGEND = YES
15683 +DOT_CLEANUP = YES
15684 +#---------------------------------------------------------------------------
15685 +# Configuration::additions related to the search engine
15686 +#---------------------------------------------------------------------------
15687 +SEARCHENGINE = NO
15688 --- /dev/null
15689 +++ b/drivers/usb/host/dwc_otg/dummy_audio.c
15690 @@ -0,0 +1,1574 @@
15691 +/*
15692 + * zero.c -- Gadget Zero, for USB development
15693 + *
15694 + * Copyright (C) 2003-2004 David Brownell
15695 + * All rights reserved.
15696 + *
15697 + * Redistribution and use in source and binary forms, with or without
15698 + * modification, are permitted provided that the following conditions
15699 + * are met:
15700 + * 1. Redistributions of source code must retain the above copyright
15701 + * notice, this list of conditions, and the following disclaimer,
15702 + * without modification.
15703 + * 2. Redistributions in binary form must reproduce the above copyright
15704 + * notice, this list of conditions and the following disclaimer in the
15705 + * documentation and/or other materials provided with the distribution.
15706 + * 3. The names of the above-listed copyright holders may not be used
15707 + * to endorse or promote products derived from this software without
15708 + * specific prior written permission.
15709 + *
15710 + * ALTERNATIVELY, this software may be distributed under the terms of the
15711 + * GNU General Public License ("GPL") as published by the Free Software
15712 + * Foundation, either version 2 of that License or (at your option) any
15713 + * later version.
15714 + *
15715 + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS
15716 + * IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
15717 + * THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
15718 + * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR
15719 + * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
15720 + * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
15721 + * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
15722 + * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
15723 + * LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
15724 + * NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
15725 + * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
15726 + */
15727 +
15728 +
15729 +/*
15730 + * Gadget Zero only needs two bulk endpoints, and is an example of how you
15731 + * can write a hardware-agnostic gadget driver running inside a USB device.
15732 + *
15733 + * Hardware details are visible (see CONFIG_USB_ZERO_* below) but don't
15734 + * affect most of the driver.
15735 + *
15736 + * Use it with the Linux host/master side "usbtest" driver to get a basic
15737 + * functional test of your device-side usb stack, or with "usb-skeleton".
15738 + *
15739 + * It supports two similar configurations. One sinks whatever the usb host
15740 + * writes, and in return sources zeroes. The other loops whatever the host
15741 + * writes back, so the host can read it. Module options include:
15742 + *
15743 + * buflen=N default N=4096, buffer size used
15744 + * qlen=N default N=32, how many buffers in the loopback queue
15745 + * loopdefault default false, list loopback config first
15746 + *
15747 + * Many drivers will only have one configuration, letting them be much
15748 + * simpler if they also don't support high speed operation (like this
15749 + * driver does).
15750 + */
15751 +
15752 +#include <linux/config.h>
15753 +#include <linux/module.h>
15754 +#include <linux/kernel.h>
15755 +#include <linux/delay.h>
15756 +#include <linux/ioport.h>
15757 +#include <linux/sched.h>
15758 +#include <linux/slab.h>
15759 +#include <linux/smp_lock.h>
15760 +#include <linux/errno.h>
15761 +#include <linux/init.h>
15762 +#include <linux/timer.h>
15763 +#include <linux/list.h>
15764 +#include <linux/interrupt.h>
15765 +#include <linux/uts.h>
15766 +#include <linux/version.h>
15767 +#include <linux/device.h>
15768 +#include <linux/moduleparam.h>
15769 +#include <linux/proc_fs.h>
15770 +
15771 +#include <asm/byteorder.h>
15772 +#include <asm/io.h>
15773 +#include <asm/irq.h>
15774 +#include <asm/system.h>
15775 +#include <asm/unaligned.h>
15776 +
15777 +#if LINUX_VERSION_CODE >= KERNEL_VERSION(2,6,21)
15778 +# include <linux/usb/ch9.h>
15779 +#else
15780 +# include <linux/usb_ch9.h>
15781 +#endif
15782 +
15783 +#include <linux/usb_gadget.h>
15784 +
15785 +
15786 +/*-------------------------------------------------------------------------*/
15787 +/*-------------------------------------------------------------------------*/
15788 +
15789 +
15790 +static int utf8_to_utf16le(const char *s, u16 *cp, unsigned len)
15791 +{
15792 + int count = 0;
15793 + u8 c;
15794 + u16 uchar;
15795 +
15796 + /* this insists on correct encodings, though not minimal ones.
15797 + * BUT it currently rejects legit 4-byte UTF-8 code points,
15798 + * which need surrogate pairs. (Unicode 3.1 can use them.)
15799 + */
15800 + while (len != 0 && (c = (u8) *s++) != 0) {
15801 + if (unlikely(c & 0x80)) {
15802 + // 2-byte sequence:
15803 + // 00000yyyyyxxxxxx = 110yyyyy 10xxxxxx
15804 + if ((c & 0xe0) == 0xc0) {
15805 + uchar = (c & 0x1f) << 6;
15806 +
15807 + c = (u8) *s++;
15808 +				if ((c & 0xc0) != 0x80)
15809 + goto fail;
15810 + c &= 0x3f;
15811 + uchar |= c;
15812 +
15813 + // 3-byte sequence (most CJKV characters):
15814 + // zzzzyyyyyyxxxxxx = 1110zzzz 10yyyyyy 10xxxxxx
15815 + } else if ((c & 0xf0) == 0xe0) {
15816 + uchar = (c & 0x0f) << 12;
15817 +
15818 + c = (u8) *s++;
15819 +				if ((c & 0xc0) != 0x80)
15820 + goto fail;
15821 + c &= 0x3f;
15822 + uchar |= c << 6;
15823 +
15824 + c = (u8) *s++;
15825 +				if ((c & 0xc0) != 0x80)
15826 + goto fail;
15827 + c &= 0x3f;
15828 + uchar |= c;
15829 +
15830 + /* no bogus surrogates */
15831 + if (0xd800 <= uchar && uchar <= 0xdfff)
15832 + goto fail;
15833 +
15834 + // 4-byte sequence (surrogate pairs, currently rare):
15835 + // 11101110wwwwzzzzyy + 110111yyyyxxxxxx
15836 + // = 11110uuu 10uuzzzz 10yyyyyy 10xxxxxx
15837 + // (uuuuu = wwww + 1)
15838 + // FIXME accept the surrogate code points (only)
15839 +
15840 + } else
15841 + goto fail;
15842 + } else
15843 + uchar = c;
15844 + put_unaligned (cpu_to_le16 (uchar), cp++);
15845 + count++;
15846 + len--;
15847 + }
15848 + return count;
15849 +fail:
15850 + return -1;
15851 +}
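As a worked example of the decoding above (editorial illustration only, not part of the patch; the helper below is hypothetical and never called by the driver), the two-byte UTF-8 sequence 0xC3 0xA9 (U+00E9, "é") collapses into a single little-endian UTF-16 code unit:

	/* Sketch only: demonstrates utf8_to_utf16le() on one 2-byte sequence. */
	static void __attribute__((unused)) utf8_to_utf16le_example(void)
	{
		u16 out[2];
		int n;

		/* 0xC3 = 110 00011 (lead byte), 0xA9 = 10 101001 (continuation) */
		n = utf8_to_utf16le("\xc3\xa9", out, 2);
		/* n == 1; out[0] holds cpu_to_le16(0x00e9), i.e. bytes E9 00 on the wire */
		(void) n;
	}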
15852 +
15853 +
15854 +/**
15855 + * usb_gadget_get_string - fill out a string descriptor
15856 + * @table: of c strings encoded using UTF-8
15857 + * @id: string id, from low byte of wValue in get string descriptor
15858 + * @buf: at least 256 bytes
15859 + *
15860 + * Finds the UTF-8 string matching the ID, and converts it into a
15861 + * string descriptor in utf16-le.
15862 + * Returns length of descriptor (always even) or negative errno
15863 + *
15864 + * If your driver needs strings in multiple languages, you'll probably
15865 + * "switch (wIndex) { ... }" in your ep0 string descriptor logic,
15866 + * using this routine after choosing which set of UTF-8 strings to use.
15867 + * Note that US-ASCII is a strict subset of UTF-8; any string bytes with
15868 + * the eighth bit set will be multibyte UTF-8 characters, not ISO-8859/1
15869 + * characters (which are also widely used in C strings).
15870 + */
15871 +int
15872 +usb_gadget_get_string (struct usb_gadget_strings *table, int id, u8 *buf)
15873 +{
15874 + struct usb_string *s;
15875 + int len;
15876 +
15877 + /* descriptor 0 has the language id */
15878 + if (id == 0) {
15879 + buf [0] = 4;
15880 + buf [1] = USB_DT_STRING;
15881 + buf [2] = (u8) table->language;
15882 + buf [3] = (u8) (table->language >> 8);
15883 + return 4;
15884 + }
15885 + for (s = table->strings; s && s->s; s++)
15886 + if (s->id == id)
15887 + break;
15888 +
15889 + /* unrecognized: stall. */
15890 + if (!s || !s->s)
15891 + return -EINVAL;
15892 +
15893 + /* string descriptors have length, tag, then UTF16-LE text */
15894 + len = min ((size_t) 126, strlen (s->s));
15895 + memset (buf + 2, 0, 2 * len); /* zero all the bytes */
15896 + len = utf8_to_utf16le(s->s, (u16 *)&buf[2], len);
15897 + if (len < 0)
15898 + return -EINVAL;
15899 + buf [0] = (len + 1) * 2;
15900 + buf [1] = USB_DT_STRING;
15901 + return buf [0];
15902 +}
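The kerneldoc above describes the intended ep0 usage; zero_setup() further down in this file does exactly this for USB_DT_STRING requests. A minimal sketch of that call pattern (editorial illustration only, not part of the patch; the helper name is invented):

	/* Sketch only: mirrors the USB_DT_STRING handling in zero_setup() below. */
	static int __attribute__((unused))
	example_fill_string_reply (struct usb_gadget_strings *tab,
				   const struct usb_ctrlrequest *ctrl, u8 *buf)
	{
		/* low byte of wValue selects the string id; id 0 yields the LANGID descriptor */
		int len = usb_gadget_get_string (tab, ctrl->wValue & 0xff, buf);

		if (len < 0)
			return len;			/* unknown id: caller stalls ep0 */
		return min (ctrl->wLength, (u16) len);	/* never send more than the host asked for */
	}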
15903 +
15904 +
15905 +/*-------------------------------------------------------------------------*/
15906 +/*-------------------------------------------------------------------------*/
15907 +
15908 +
15909 +/**
15910 + * usb_descriptor_fillbuf - fill buffer with descriptors
15911 + * @buf: Buffer to be filled
15912 + * @buflen: Size of buf
15913 + * @src: Array of descriptor pointers, terminated by null pointer.
15914 + *
15915 + * Copies descriptors into the buffer, returning the length or a
15916 + * negative error code if they can't all be copied. Useful when
15917 + * assembling descriptors for an associated set of interfaces used
15918 + * as part of configuring a composite device; or in other cases where
15919 + * sets of descriptors need to be marshaled.
15920 + */
15921 +int
15922 +usb_descriptor_fillbuf(void *buf, unsigned buflen,
15923 + const struct usb_descriptor_header **src)
15924 +{
15925 + u8 *dest = buf;
15926 +
15927 + if (!src)
15928 + return -EINVAL;
15929 +
15930 + /* fill buffer from src[] until null descriptor ptr */
15931 + for (; 0 != *src; src++) {
15932 + unsigned len = (*src)->bLength;
15933 +
15934 + if (len > buflen)
15935 + return -EINVAL;
15936 + memcpy(dest, *src, len);
15937 + buflen -= len;
15938 + dest += len;
15939 + }
15940 + return dest - (u8 *)buf;
15941 +}
15942 +
15943 +
15944 +/**
15945 + * usb_gadget_config_buf - builds a complete configuration descriptor
15946 + * @config: Header for the descriptor, including characteristics such
15947 + * as power requirements and number of interfaces.
15948 + * @desc: Null-terminated vector of pointers to the descriptors (interface,
15949 + * endpoint, etc) defining all functions in this device configuration.
15950 + * @buf: Buffer for the resulting configuration descriptor.
15951 + * @length: Length of buffer. If this is not big enough to hold the
15952 + * entire configuration descriptor, an error code will be returned.
15953 + *
15954 + * This copies descriptors into the response buffer, building a descriptor
15955 + * for that configuration. It returns the buffer length or a negative
15956 + * status code. The config.wTotalLength field is set to match the length
15957 + * of the result, but other descriptor fields (including power usage and
15958 + * interface count) must be set by the caller.
15959 + *
15960 + * Gadget drivers could use this when constructing a config descriptor
15961 + * in response to USB_REQ_GET_DESCRIPTOR. They will need to patch the
15962 + * resulting bDescriptorType value if USB_DT_OTHER_SPEED_CONFIG is needed.
15963 + */
15964 +int usb_gadget_config_buf(
15965 + const struct usb_config_descriptor *config,
15966 + void *buf,
15967 + unsigned length,
15968 + const struct usb_descriptor_header **desc
15969 +)
15970 +{
15971 + struct usb_config_descriptor *cp = buf;
15972 + int len;
15973 +
15974 + /* config descriptor first */
15975 + if (length < USB_DT_CONFIG_SIZE || !desc)
15976 + return -EINVAL;
15977 + *cp = *config;
15978 +
15979 + /* then interface/endpoint/class/vendor/... */
15980 + len = usb_descriptor_fillbuf(USB_DT_CONFIG_SIZE + (u8*)buf,
15981 + length - USB_DT_CONFIG_SIZE, desc);
15982 + if (len < 0)
15983 + return len;
15984 + len += USB_DT_CONFIG_SIZE;
15985 + if (len > 0xffff)
15986 + return -EINVAL;
15987 +
15988 + /* patch up the config descriptor */
15989 + cp->bLength = USB_DT_CONFIG_SIZE;
15990 + cp->bDescriptorType = USB_DT_CONFIG;
15991 + cp->wTotalLength = cpu_to_le16(len);
15992 + cp->bmAttributes |= USB_CONFIG_ATT_ONE;
15993 + return len;
15994 +}
15995 +
15996 +/*-------------------------------------------------------------------------*/
15997 +/*-------------------------------------------------------------------------*/
15998 +
15999 +
16000 +#define RBUF_LEN (1024*1024)
16001 +static int rbuf_start;
16002 +static int rbuf_len;
16003 +static __u8 rbuf[RBUF_LEN];
16004 +
16005 +/*-------------------------------------------------------------------------*/
16006 +
16007 +#define DRIVER_VERSION "St Patrick's Day 2004"
16008 +
16009 +static const char shortname [] = "zero";
16010 +static const char longname [] = "YAMAHA YST-MS35D USB Speaker ";
16011 +
16012 +static const char source_sink [] = "source and sink data";
16013 +static const char loopback [] = "loop input to output";
16014 +
16015 +/*-------------------------------------------------------------------------*/
16016 +
16017 +/*
16018 + * driver assumes self-powered hardware, and
16019 + * has no way for users to trigger remote wakeup.
16020 + *
16021 + * this version autoconfigures as much as possible,
16022 + * which is reasonable for most "bulk-only" drivers.
16023 + */
16024 +static const char *EP_IN_NAME; /* source */
16025 +static const char *EP_OUT_NAME; /* sink */
16026 +
16027 +/*-------------------------------------------------------------------------*/
16028 +
16029 +/* big enough to hold our biggest descriptor */
16030 +#define USB_BUFSIZ 512
16031 +
16032 +struct zero_dev {
16033 + spinlock_t lock;
16034 + struct usb_gadget *gadget;
16035 + struct usb_request *req; /* for control responses */
16036 +
16037 + /* when configured, we have one of two configs:
16038 + * - source data (in to host) and sink it (out from host)
16039 + * - or loop it back (out from host back in to host)
16040 + */
16041 + u8 config;
16042 + struct usb_ep *in_ep, *out_ep;
16043 +
16044 + /* autoresume timer */
16045 + struct timer_list resume;
16046 +};
16047 +
16048 +#define xprintk(d,level,fmt,args...) \
16049 + dev_printk(level , &(d)->gadget->dev , fmt , ## args)
16050 +
16051 +#ifdef DEBUG
16052 +#define DBG(dev,fmt,args...) \
16053 + xprintk(dev , KERN_DEBUG , fmt , ## args)
16054 +#else
16055 +#define DBG(dev,fmt,args...) \
16056 + do { } while (0)
16057 +#endif /* DEBUG */
16058 +
16059 +#ifdef VERBOSE
16060 +#define VDBG DBG
16061 +#else
16062 +#define VDBG(dev,fmt,args...) \
16063 + do { } while (0)
16064 +#endif /* VERBOSE */
16065 +
16066 +#define ERROR(dev,fmt,args...) \
16067 + xprintk(dev , KERN_ERR , fmt , ## args)
16068 +#define WARN(dev,fmt,args...) \
16069 + xprintk(dev , KERN_WARNING , fmt , ## args)
16070 +#define INFO(dev,fmt,args...) \
16071 + xprintk(dev , KERN_INFO , fmt , ## args)
16072 +
16073 +/*-------------------------------------------------------------------------*/
16074 +
16075 +static unsigned buflen = 4096;
16076 +static unsigned qlen = 32;
16077 +static unsigned pattern = 0;
16078 +
16079 +module_param (buflen, uint, S_IRUGO|S_IWUSR);
16080 +module_param (qlen, uint, S_IRUGO|S_IWUSR);
16081 +module_param (pattern, uint, S_IRUGO|S_IWUSR);
16082 +
16083 +/*
16084 + * if it's nonzero, autoresume says how many seconds to wait
16085 + * before trying to wake up the host after suspend.
16086 + */
16087 +static unsigned autoresume = 0;
16088 +module_param (autoresume, uint, 0);
16089 +
16090 +/*
16091 + * Normally the "loopback" configuration is second (index 1) so
16092 + * it's not the default. Here's where to change that order, to
16093 + * work better with hosts where config changes are problematic.
16094 + * Or controllers (like superh) that only support one config.
16095 + */
16096 +static int loopdefault = 0;
16097 +
16098 +module_param (loopdefault, bool, S_IRUGO|S_IWUSR);
16099 +
16100 +/*-------------------------------------------------------------------------*/
16101 +
16102 +/* Thanks to NetChip Technologies for donating this product ID.
16103 + *
16104 + * DO NOT REUSE THESE IDs with a protocol-incompatible driver!! Ever!!
16105 + * Instead: allocate your own, using normal USB-IF procedures.
16106 + */
16107 +#ifndef CONFIG_USB_ZERO_HNPTEST
16108 +#define DRIVER_VENDOR_NUM 0x0525 /* NetChip */
16109 +#define DRIVER_PRODUCT_NUM 0xa4a0 /* Linux-USB "Gadget Zero" */
16110 +#else
16111 +#define DRIVER_VENDOR_NUM 0x1a0a /* OTG test device IDs */
16112 +#define DRIVER_PRODUCT_NUM 0xbadd
16113 +#endif
16114 +
16115 +/*-------------------------------------------------------------------------*/
16116 +
16117 +/*
16118 + * DESCRIPTORS ... most are static, but strings and (full)
16119 + * configuration descriptors are built on demand.
16120 + */
16121 +
16122 +/*
16123 +#define STRING_MANUFACTURER 25
16124 +#define STRING_PRODUCT 42
16125 +#define STRING_SERIAL 101
16126 +*/
16127 +#define STRING_MANUFACTURER 1
16128 +#define STRING_PRODUCT 2
16129 +#define STRING_SERIAL 3
16130 +
16131 +#define STRING_SOURCE_SINK 250
16132 +#define STRING_LOOPBACK 251
16133 +
16134 +/*
16135 + * This device advertises two configurations; these numbers work
16136 + * on a pxa250 as well as more flexible hardware.
16137 + */
16138 +#define CONFIG_SOURCE_SINK 3
16139 +#define CONFIG_LOOPBACK 2
16140 +
16141 +/*
16142 +static struct usb_device_descriptor
16143 +device_desc = {
16144 + .bLength = sizeof device_desc,
16145 + .bDescriptorType = USB_DT_DEVICE,
16146 +
16147 + .bcdUSB = __constant_cpu_to_le16 (0x0200),
16148 + .bDeviceClass = USB_CLASS_VENDOR_SPEC,
16149 +
16150 + .idVendor = __constant_cpu_to_le16 (DRIVER_VENDOR_NUM),
16151 + .idProduct = __constant_cpu_to_le16 (DRIVER_PRODUCT_NUM),
16152 + .iManufacturer = STRING_MANUFACTURER,
16153 + .iProduct = STRING_PRODUCT,
16154 + .iSerialNumber = STRING_SERIAL,
16155 + .bNumConfigurations = 2,
16156 +};
16157 +*/
16158 +static struct usb_device_descriptor
16159 +device_desc = {
16160 + .bLength = sizeof device_desc,
16161 + .bDescriptorType = USB_DT_DEVICE,
16162 + .bcdUSB = __constant_cpu_to_le16 (0x0100),
16163 + .bDeviceClass = USB_CLASS_PER_INTERFACE,
16164 + .bDeviceSubClass = 0,
16165 + .bDeviceProtocol = 0,
16166 + .bMaxPacketSize0 = 64,
16167 + .bcdDevice = __constant_cpu_to_le16 (0x0100),
16168 + .idVendor = __constant_cpu_to_le16 (0x0499),
16169 + .idProduct = __constant_cpu_to_le16 (0x3002),
16170 + .iManufacturer = STRING_MANUFACTURER,
16171 + .iProduct = STRING_PRODUCT,
16172 + .iSerialNumber = STRING_SERIAL,
16173 + .bNumConfigurations = 1,
16174 +};
16175 +
16176 +static struct usb_config_descriptor
16177 +z_config = {
16178 + .bLength = sizeof z_config,
16179 + .bDescriptorType = USB_DT_CONFIG,
16180 +
16181 + /* compute wTotalLength on the fly */
16182 + .bNumInterfaces = 2,
16183 + .bConfigurationValue = 1,
16184 + .iConfiguration = 0,
16185 + .bmAttributes = 0x40,
16186 + .bMaxPower = 0, /* self-powered */
16187 +};
16188 +
16189 +
16190 +static struct usb_otg_descriptor
16191 +otg_descriptor = {
16192 + .bLength = sizeof otg_descriptor,
16193 + .bDescriptorType = USB_DT_OTG,
16194 +
16195 + .bmAttributes = USB_OTG_SRP,
16196 +};
16197 +
16198 +/* one interface in each configuration */
16199 +#ifdef CONFIG_USB_GADGET_DUALSPEED
16200 +
16201 +/*
16202 + * usb 2.0 devices need to expose both high speed and full speed
16203 + * descriptors, unless they only run at full speed.
16204 + *
16205 + * that means alternate endpoint descriptors (bigger packets)
16206 + * and a "device qualifier" ... plus more construction options
16207 + * for the config descriptor.
16208 + */
16209 +
16210 +static struct usb_qualifier_descriptor
16211 +dev_qualifier = {
16212 + .bLength = sizeof dev_qualifier,
16213 + .bDescriptorType = USB_DT_DEVICE_QUALIFIER,
16214 +
16215 + .bcdUSB = __constant_cpu_to_le16 (0x0200),
16216 + .bDeviceClass = USB_CLASS_VENDOR_SPEC,
16217 +
16218 + .bNumConfigurations = 2,
16219 +};
16220 +
16221 +
16222 +struct usb_cs_as_general_descriptor {
16223 + __u8 bLength;
16224 + __u8 bDescriptorType;
16225 +
16226 + __u8 bDescriptorSubType;
16227 + __u8 bTerminalLink;
16228 + __u8 bDelay;
16229 + __u16 wFormatTag;
16230 +} __attribute__ ((packed));
16231 +
16232 +struct usb_cs_as_format_descriptor {
16233 + __u8 bLength;
16234 + __u8 bDescriptorType;
16235 +
16236 + __u8 bDescriptorSubType;
16237 + __u8 bFormatType;
16238 + __u8 bNrChannels;
16239 + __u8 bSubframeSize;
16240 + __u8 bBitResolution;
16241 + __u8 bSamfreqType;
16242 + __u8 tLowerSamFreq[3];
16243 + __u8 tUpperSamFreq[3];
16244 +} __attribute__ ((packed));
16245 +
16246 +static const struct usb_interface_descriptor
16247 +z_audio_control_if_desc = {
16248 + .bLength = sizeof z_audio_control_if_desc,
16249 + .bDescriptorType = USB_DT_INTERFACE,
16250 + .bInterfaceNumber = 0,
16251 + .bAlternateSetting = 0,
16252 + .bNumEndpoints = 0,
16253 + .bInterfaceClass = USB_CLASS_AUDIO,
16254 + .bInterfaceSubClass = 0x1,
16255 + .bInterfaceProtocol = 0,
16256 + .iInterface = 0,
16257 +};
16258 +
16259 +static const struct usb_interface_descriptor
16260 +z_audio_if_desc = {
16261 + .bLength = sizeof z_audio_if_desc,
16262 + .bDescriptorType = USB_DT_INTERFACE,
16263 + .bInterfaceNumber = 1,
16264 + .bAlternateSetting = 0,
16265 + .bNumEndpoints = 0,
16266 + .bInterfaceClass = USB_CLASS_AUDIO,
16267 + .bInterfaceSubClass = 0x2,
16268 + .bInterfaceProtocol = 0,
16269 + .iInterface = 0,
16270 +};
16271 +
16272 +static const struct usb_interface_descriptor
16273 +z_audio_if_desc2 = {
16274 + .bLength = sizeof z_audio_if_desc,
16275 + .bDescriptorType = USB_DT_INTERFACE,
16276 + .bInterfaceNumber = 1,
16277 + .bAlternateSetting = 1,
16278 + .bNumEndpoints = 1,
16279 + .bInterfaceClass = USB_CLASS_AUDIO,
16280 + .bInterfaceSubClass = 0x2,
16281 + .bInterfaceProtocol = 0,
16282 + .iInterface = 0,
16283 +};
16284 +
16285 +static const struct usb_cs_as_general_descriptor
16286 +z_audio_cs_as_if_desc = {
16287 + .bLength = 7,
16288 + .bDescriptorType = 0x24,
16289 +
16290 + .bDescriptorSubType = 0x01,
16291 + .bTerminalLink = 0x01,
16292 + .bDelay = 0x0,
16293 + .wFormatTag = __constant_cpu_to_le16 (0x0001)
16294 +};
16295 +
16296 +
16297 +static const struct usb_cs_as_format_descriptor
16298 +z_audio_cs_as_format_desc = {
16299 + .bLength = 0xe,
16300 + .bDescriptorType = 0x24,
16301 +
16302 + .bDescriptorSubType = 2,
16303 + .bFormatType = 1,
16304 + .bNrChannels = 1,
16305 + .bSubframeSize = 1,
16306 + .bBitResolution = 8,
16307 + .bSamfreqType = 0,
16308 + .tLowerSamFreq = {0x7e, 0x13, 0x00},
16309 + .tUpperSamFreq = {0xe2, 0xd6, 0x00},
16310 +};
16311 +
16312 +static const struct usb_endpoint_descriptor
16313 +z_iso_ep = {
16314 + .bLength = 0x09,
16315 + .bDescriptorType = 0x05,
16316 + .bEndpointAddress = 0x04,
16317 + .bmAttributes = 0x09,
16318 + .wMaxPacketSize = 0x0038,
16319 + .bInterval = 0x01,
16320 + .bRefresh = 0x00,
16321 + .bSynchAddress = 0x00,
16322 +};
16323 +
16324 +static char z_iso_ep2[] = {0x07, 0x25, 0x01, 0x00, 0x02, 0x00, 0x02};
16325 +
16326 +// 9 bytes
16327 +static char z_ac_interface_header_desc[] =
16328 +{ 0x09, 0x24, 0x01, 0x00, 0x01, 0x2b, 0x00, 0x01, 0x01 };
16329 +
16330 +// 12 bytes
16331 +static char z_0[] = {0x0c, 0x24, 0x02, 0x01, 0x01, 0x01, 0x00, 0x02,
16332 + 0x03, 0x00, 0x00, 0x00};
16333 +// 13 bytes
16334 +static char z_1[] = {0x0d, 0x24, 0x06, 0x02, 0x01, 0x02, 0x15, 0x00,
16335 + 0x02, 0x00, 0x02, 0x00, 0x00};
16336 +// 9 bytes
16337 +static char z_2[] = {0x09, 0x24, 0x03, 0x03, 0x01, 0x03, 0x00, 0x02,
16338 + 0x00};
16339 +
16340 +static char za_0[] = {0x09, 0x04, 0x01, 0x02, 0x01, 0x01, 0x02, 0x00,
16341 + 0x00};
16342 +
16343 +static char za_1[] = {0x07, 0x24, 0x01, 0x01, 0x00, 0x01, 0x00};
16344 +
16345 +static char za_2[] = {0x0e, 0x24, 0x02, 0x01, 0x02, 0x01, 0x08, 0x00,
16346 + 0x7e, 0x13, 0x00, 0xe2, 0xd6, 0x00};
16347 +
16348 +static char za_3[] = {0x09, 0x05, 0x04, 0x09, 0x70, 0x00, 0x01, 0x00,
16349 + 0x00};
16350 +
16351 +static char za_4[] = {0x07, 0x25, 0x01, 0x00, 0x02, 0x00, 0x02};
16352 +
16353 +static char za_5[] = {0x09, 0x04, 0x01, 0x03, 0x01, 0x01, 0x02, 0x00,
16354 + 0x00};
16355 +
16356 +static char za_6[] = {0x07, 0x24, 0x01, 0x01, 0x00, 0x01, 0x00};
16357 +
16358 +static char za_7[] = {0x0e, 0x24, 0x02, 0x01, 0x01, 0x02, 0x10, 0x00,
16359 + 0x7e, 0x13, 0x00, 0xe2, 0xd6, 0x00};
16360 +
16361 +static char za_8[] = {0x09, 0x05, 0x04, 0x09, 0x70, 0x00, 0x01, 0x00,
16362 + 0x00};
16363 +
16364 +static char za_9[] = {0x07, 0x25, 0x01, 0x00, 0x02, 0x00, 0x02};
16365 +
16366 +static char za_10[] = {0x09, 0x04, 0x01, 0x04, 0x01, 0x01, 0x02, 0x00,
16367 + 0x00};
16368 +
16369 +static char za_11[] = {0x07, 0x24, 0x01, 0x01, 0x00, 0x01, 0x00};
16370 +
16371 +static char za_12[] = {0x0e, 0x24, 0x02, 0x01, 0x02, 0x02, 0x10, 0x00,
16372 + 0x73, 0x13, 0x00, 0xe2, 0xd6, 0x00};
16373 +
16374 +static char za_13[] = {0x09, 0x05, 0x04, 0x09, 0xe0, 0x00, 0x01, 0x00,
16375 + 0x00};
16376 +
16377 +static char za_14[] = {0x07, 0x25, 0x01, 0x00, 0x02, 0x00, 0x02};
16378 +
16379 +static char za_15[] = {0x09, 0x04, 0x01, 0x05, 0x01, 0x01, 0x02, 0x00,
16380 + 0x00};
16381 +
16382 +static char za_16[] = {0x07, 0x24, 0x01, 0x01, 0x00, 0x01, 0x00};
16383 +
16384 +static char za_17[] = {0x0e, 0x24, 0x02, 0x01, 0x01, 0x03, 0x14, 0x00,
16385 + 0x7e, 0x13, 0x00, 0xe2, 0xd6, 0x00};
16386 +
16387 +static char za_18[] = {0x09, 0x05, 0x04, 0x09, 0xa8, 0x00, 0x01, 0x00,
16388 + 0x00};
16389 +
16390 +static char za_19[] = {0x07, 0x25, 0x01, 0x00, 0x02, 0x00, 0x02};
16391 +
16392 +static char za_20[] = {0x09, 0x04, 0x01, 0x06, 0x01, 0x01, 0x02, 0x00,
16393 + 0x00};
16394 +
16395 +static char za_21[] = {0x07, 0x24, 0x01, 0x01, 0x00, 0x01, 0x00};
16396 +
16397 +static char za_22[] = {0x0e, 0x24, 0x02, 0x01, 0x02, 0x03, 0x14, 0x00,
16398 + 0x7e, 0x13, 0x00, 0xe2, 0xd6, 0x00};
16399 +
16400 +static char za_23[] = {0x09, 0x05, 0x04, 0x09, 0x50, 0x01, 0x01, 0x00,
16401 + 0x00};
16402 +
16403 +static char za_24[] = {0x07, 0x25, 0x01, 0x00, 0x02, 0x00, 0x02};
16404 +
16405 +
16406 +
16407 +static const struct usb_descriptor_header *z_function [] = {
16408 + (struct usb_descriptor_header *) &z_audio_control_if_desc,
16409 + (struct usb_descriptor_header *) &z_ac_interface_header_desc,
16410 + (struct usb_descriptor_header *) &z_0,
16411 + (struct usb_descriptor_header *) &z_1,
16412 + (struct usb_descriptor_header *) &z_2,
16413 + (struct usb_descriptor_header *) &z_audio_if_desc,
16414 + (struct usb_descriptor_header *) &z_audio_if_desc2,
16415 + (struct usb_descriptor_header *) &z_audio_cs_as_if_desc,
16416 + (struct usb_descriptor_header *) &z_audio_cs_as_format_desc,
16417 + (struct usb_descriptor_header *) &z_iso_ep,
16418 + (struct usb_descriptor_header *) &z_iso_ep2,
16419 + (struct usb_descriptor_header *) &za_0,
16420 + (struct usb_descriptor_header *) &za_1,
16421 + (struct usb_descriptor_header *) &za_2,
16422 + (struct usb_descriptor_header *) &za_3,
16423 + (struct usb_descriptor_header *) &za_4,
16424 + (struct usb_descriptor_header *) &za_5,
16425 + (struct usb_descriptor_header *) &za_6,
16426 + (struct usb_descriptor_header *) &za_7,
16427 + (struct usb_descriptor_header *) &za_8,
16428 + (struct usb_descriptor_header *) &za_9,
16429 + (struct usb_descriptor_header *) &za_10,
16430 + (struct usb_descriptor_header *) &za_11,
16431 + (struct usb_descriptor_header *) &za_12,
16432 + (struct usb_descriptor_header *) &za_13,
16433 + (struct usb_descriptor_header *) &za_14,
16434 + (struct usb_descriptor_header *) &za_15,
16435 + (struct usb_descriptor_header *) &za_16,
16436 + (struct usb_descriptor_header *) &za_17,
16437 + (struct usb_descriptor_header *) &za_18,
16438 + (struct usb_descriptor_header *) &za_19,
16439 + (struct usb_descriptor_header *) &za_20,
16440 + (struct usb_descriptor_header *) &za_21,
16441 + (struct usb_descriptor_header *) &za_22,
16442 + (struct usb_descriptor_header *) &za_23,
16443 + (struct usb_descriptor_header *) &za_24,
16444 + NULL,
16445 +};
16446 +
16447 +/* maxpacket and other transfer characteristics vary by speed. */
16448 +#define ep_desc(g,hs,fs) (((g)->speed==USB_SPEED_HIGH)?(hs):(fs))
16449 +
16450 +#else
16451 +
16452 +/* if there's no high speed support, maxpacket doesn't change. */
16453 +#define ep_desc(g,hs,fs) fs
16454 +
16455 +#endif /* !CONFIG_USB_GADGET_DUALSPEED */
16456 +
16457 +static char manufacturer [40];
16458 +//static char serial [40];
16459 +static char serial [] = "Ser 00 em";
16460 +
16461 +/* static strings, in UTF-8 */
16462 +static struct usb_string strings [] = {
16463 + { STRING_MANUFACTURER, manufacturer, },
16464 + { STRING_PRODUCT, longname, },
16465 + { STRING_SERIAL, serial, },
16466 + { STRING_LOOPBACK, loopback, },
16467 + { STRING_SOURCE_SINK, source_sink, },
16468 + { } /* end of list */
16469 +};
16470 +
16471 +static struct usb_gadget_strings stringtab = {
16472 + .language = 0x0409, /* en-us */
16473 + .strings = strings,
16474 +};
16475 +
16476 +/*
16477 + * config descriptors are also handcrafted. these must agree with code
16478 + * that sets configurations, and with code managing interfaces and their
16479 + * altsettings. other complexity may come from:
16480 + *
16481 + * - high speed support, including "other speed config" rules
16482 + * - multiple configurations
16483 + * - interfaces with alternate settings
16484 + * - embedded class or vendor-specific descriptors
16485 + *
16486 + * this handles high speed, and has a second config that could as easily
16487 + * have been an alternate interface setting (on most hardware).
16488 + *
16489 + * NOTE: to demonstrate (and test) more USB capabilities, this driver
16490 + * should include an altsetting to test interrupt transfers, including
16491 + * high bandwidth modes at high speed. (Maybe work like Intel's test
16492 + * device?)
16493 + */
16494 +static int
16495 +config_buf (struct usb_gadget *gadget, u8 *buf, u8 type, unsigned index)
16496 +{
16497 + int len;
16498 + const struct usb_descriptor_header **function;
16499 +
16500 + function = z_function;
16501 + len = usb_gadget_config_buf (&z_config, buf, USB_BUFSIZ, function);
16502 + if (len < 0)
16503 + return len;
16504 + ((struct usb_config_descriptor *) buf)->bDescriptorType = type;
16505 + return len;
16506 +}
16507 +
16508 +/*-------------------------------------------------------------------------*/
16509 +
16510 +static struct usb_request *
16511 +alloc_ep_req (struct usb_ep *ep, unsigned length)
16512 +{
16513 + struct usb_request *req;
16514 +
16515 + req = usb_ep_alloc_request (ep, GFP_ATOMIC);
16516 + if (req) {
16517 + req->length = length;
16518 + req->buf = usb_ep_alloc_buffer (ep, length,
16519 + &req->dma, GFP_ATOMIC);
16520 + if (!req->buf) {
16521 + usb_ep_free_request (ep, req);
16522 + req = NULL;
16523 + }
16524 + }
16525 + return req;
16526 +}
16527 +
16528 +static void free_ep_req (struct usb_ep *ep, struct usb_request *req)
16529 +{
16530 + if (req->buf)
16531 + usb_ep_free_buffer (ep, req->buf, req->dma, req->length);
16532 + usb_ep_free_request (ep, req);
16533 +}
16534 +
16535 +/*-------------------------------------------------------------------------*/
16536 +
16537 +/* optionally require specific source/sink data patterns */
16538 +
16539 +static int
16540 +check_read_data (
16541 + struct zero_dev *dev,
16542 + struct usb_ep *ep,
16543 + struct usb_request *req
16544 +)
16545 +{
16546 + unsigned i;
16547 + u8 *buf = req->buf;
16548 +
16549 + for (i = 0; i < req->actual; i++, buf++) {
16550 + switch (pattern) {
16551 + /* all-zeroes has no synchronization issues */
16552 + case 0:
16553 + if (*buf == 0)
16554 + continue;
16555 + break;
16556 + /* mod63 stays in sync with short-terminated transfers,
16557 + * or otherwise when host and gadget agree on how large
16558 + * each usb transfer request should be. resync is done
16559 + * with set_interface or set_config.
16560 + */
16561 + case 1:
16562 + if (*buf == (u8)(i % 63))
16563 + continue;
16564 + break;
16565 + }
16566 + ERROR (dev, "bad OUT byte, buf [%d] = %d\n", i, *buf);
16567 + usb_ep_set_halt (ep);
16568 + return -EINVAL;
16569 + }
16570 + return 0;
16571 +}
16572 +
16573 +/*-------------------------------------------------------------------------*/
16574 +
16575 +static void zero_reset_config (struct zero_dev *dev)
16576 +{
16577 + if (dev->config == 0)
16578 + return;
16579 +
16580 + DBG (dev, "reset config\n");
16581 +
16582 + /* just disable endpoints, forcing completion of pending i/o.
16583 + * all our completion handlers free their requests in this case.
16584 + */
16585 + if (dev->in_ep) {
16586 + usb_ep_disable (dev->in_ep);
16587 + dev->in_ep = NULL;
16588 + }
16589 + if (dev->out_ep) {
16590 + usb_ep_disable (dev->out_ep);
16591 + dev->out_ep = NULL;
16592 + }
16593 + dev->config = 0;
16594 + del_timer (&dev->resume);
16595 +}
16596 +
16597 +#define _write(f, buf, sz) (f->f_op->write(f, buf, sz, &f->f_pos))
16598 +
16599 +static void
16600 +zero_isoc_complete (struct usb_ep *ep, struct usb_request *req)
16601 +{
16602 + struct zero_dev *dev = ep->driver_data;
16603 + int status = req->status;
16604 + int i, j;
16605 +
16606 + switch (status) {
16607 +
16608 + case 0: /* normal completion? */
16609 + //printk ("\nzero ---------------> isoc normal completion %d bytes\n", req->actual);
16610 + for (i=0, j=rbuf_start; i<req->actual; i++) {
16611 + //printk ("%02x ", ((__u8*)req->buf)[i]);
16612 + rbuf[j] = ((__u8*)req->buf)[i];
16613 + j++;
16614 + if (j >= RBUF_LEN) j=0;
16615 + }
16616 + rbuf_start = j;
16617 + //printk ("\n\n");
16618 +
16619 + if (rbuf_len < RBUF_LEN) {
16620 + rbuf_len += req->actual;
16621 + if (rbuf_len > RBUF_LEN) {
16622 + rbuf_len = RBUF_LEN;
16623 + }
16624 + }
16625 +
16626 + break;
16627 +
16628 + /* this endpoint is normally active while we're configured */
16629 + case -ECONNABORTED: /* hardware forced ep reset */
16630 + case -ECONNRESET: /* request dequeued */
16631 + case -ESHUTDOWN: /* disconnect from host */
16632 + VDBG (dev, "%s gone (%d), %d/%d\n", ep->name, status,
16633 + req->actual, req->length);
16634 + if (ep == dev->out_ep)
16635 + check_read_data (dev, ep, req);
16636 + free_ep_req (ep, req);
16637 + return;
16638 +
16639 + case -EOVERFLOW: /* buffer overrun on read means that
16640 + * we didn't provide a big enough
16641 + * buffer.
16642 + */
16643 + default:
16644 +#if 1
16645 + DBG (dev, "%s complete --> %d, %d/%d\n", ep->name,
16646 + status, req->actual, req->length);
16647 +#endif
16648 + case -EREMOTEIO: /* short read */
16649 + break;
16650 + }
16651 +
16652 + status = usb_ep_queue (ep, req, GFP_ATOMIC);
16653 + if (status) {
16654 + ERROR (dev, "kill %s: resubmit %d bytes --> %d\n",
16655 + ep->name, req->length, status);
16656 + usb_ep_set_halt (ep);
16657 + /* FIXME recover later ... somehow */
16658 + }
16659 +}
16660 +
16661 +static struct usb_request *
16662 +zero_start_isoc_ep (struct usb_ep *ep, int gfp_flags)
16663 +{
16664 + struct usb_request *req;
16665 + int status;
16666 +
16667 + req = alloc_ep_req (ep, 512);
16668 + if (!req)
16669 + return NULL;
16670 +
16671 + req->complete = zero_isoc_complete;
16672 +
16673 + status = usb_ep_queue (ep, req, gfp_flags);
16674 + if (status) {
16675 + struct zero_dev *dev = ep->driver_data;
16676 +
16677 + ERROR (dev, "start %s --> %d\n", ep->name, status);
16678 + free_ep_req (ep, req);
16679 + req = NULL;
16680 + }
16681 +
16682 + return req;
16683 +}
16684 +
16685 +/* change our operational config. this code must agree with the code
16686 + * that returns config descriptors, and altsetting code.
16687 + *
16688 + * it's also responsible for power management interactions. some
16689 + * configurations might not work with our current power sources.
16690 + *
16691 + * note that some device controller hardware will constrain what this
16692 + * code can do, perhaps by disallowing more than one configuration or
16693 + * by limiting configuration choices (like the pxa2xx).
16694 + */
16695 +static int
16696 +zero_set_config (struct zero_dev *dev, unsigned number, int gfp_flags)
16697 +{
16698 + int result = 0;
16699 + struct usb_gadget *gadget = dev->gadget;
16700 + const struct usb_endpoint_descriptor *d;
16701 + struct usb_ep *ep;
16702 +
16703 + if (number == dev->config)
16704 + return 0;
16705 +
16706 + zero_reset_config (dev);
16707 +
16708 + gadget_for_each_ep (ep, gadget) {
16709 +
16710 + if (strcmp (ep->name, "ep4") == 0) {
16711 +
16712 +			d = (struct usb_endpoint_descriptor *)&za_23; // isoc ep desc for audio i/f alt setting 6
16713 + result = usb_ep_enable (ep, d);
16714 +
16715 + if (result == 0) {
16716 + ep->driver_data = dev;
16717 + dev->in_ep = ep;
16718 +
16719 + if (zero_start_isoc_ep (ep, gfp_flags) != 0) {
16720 +
16721 + dev->in_ep = ep;
16722 + continue;
16723 + }
16724 +
16725 + usb_ep_disable (ep);
16726 + result = -EIO;
16727 + }
16728 + }
16729 +
16730 + }
16731 +
16732 + dev->config = number;
16733 + return result;
16734 +}
16735 +
16736 +/*-------------------------------------------------------------------------*/
16737 +
16738 +static void zero_setup_complete (struct usb_ep *ep, struct usb_request *req)
16739 +{
16740 + if (req->status || req->actual != req->length)
16741 + DBG ((struct zero_dev *) ep->driver_data,
16742 + "setup complete --> %d, %d/%d\n",
16743 + req->status, req->actual, req->length);
16744 +}
16745 +
16746 +/*
16747 + * The setup() callback implements all the ep0 functionality that's
16748 + * not handled lower down, in hardware or the hardware driver (like
16749 + * device and endpoint feature flags, and their status). It's all
16750 + * housekeeping for the gadget function we're implementing. Most of
16751 + * the work is in config-specific setup.
16752 + */
16753 +static int
16754 +zero_setup (struct usb_gadget *gadget, const struct usb_ctrlrequest *ctrl)
16755 +{
16756 + struct zero_dev *dev = get_gadget_data (gadget);
16757 + struct usb_request *req = dev->req;
16758 + int value = -EOPNOTSUPP;
16759 +
16760 + /* usually this stores reply data in the pre-allocated ep0 buffer,
16761 + * but config change events will reconfigure hardware.
16762 + */
16763 + req->zero = 0;
16764 + switch (ctrl->bRequest) {
16765 +
16766 + case USB_REQ_GET_DESCRIPTOR:
16767 +
16768 + switch (ctrl->wValue >> 8) {
16769 +
16770 + case USB_DT_DEVICE:
16771 + value = min (ctrl->wLength, (u16) sizeof device_desc);
16772 + memcpy (req->buf, &device_desc, value);
16773 + break;
16774 +#ifdef CONFIG_USB_GADGET_DUALSPEED
16775 + case USB_DT_DEVICE_QUALIFIER:
16776 + if (!gadget->is_dualspeed)
16777 + break;
16778 + value = min (ctrl->wLength, (u16) sizeof dev_qualifier);
16779 + memcpy (req->buf, &dev_qualifier, value);
16780 + break;
16781 +
16782 + case USB_DT_OTHER_SPEED_CONFIG:
16783 + if (!gadget->is_dualspeed)
16784 + break;
16785 + // FALLTHROUGH
16786 +#endif /* CONFIG_USB_GADGET_DUALSPEED */
16787 + case USB_DT_CONFIG:
16788 + value = config_buf (gadget, req->buf,
16789 + ctrl->wValue >> 8,
16790 + ctrl->wValue & 0xff);
16791 + if (value >= 0)
16792 + value = min (ctrl->wLength, (u16) value);
16793 + break;
16794 +
16795 + case USB_DT_STRING:
16796 + /* wIndex == language code.
16797 + * this driver only handles one language, you can
16798 + * add string tables for other languages, using
16799 + * any UTF-8 characters
16800 + */
16801 + value = usb_gadget_get_string (&stringtab,
16802 + ctrl->wValue & 0xff, req->buf);
16803 + if (value >= 0) {
16804 + value = min (ctrl->wLength, (u16) value);
16805 + }
16806 + break;
16807 + }
16808 + break;
16809 +
16810 + /* currently two configs, two speeds */
16811 + case USB_REQ_SET_CONFIGURATION:
16812 + if (ctrl->bRequestType != 0)
16813 + goto unknown;
16814 +
16815 + spin_lock (&dev->lock);
16816 + value = zero_set_config (dev, ctrl->wValue, GFP_ATOMIC);
16817 + spin_unlock (&dev->lock);
16818 + break;
16819 + case USB_REQ_GET_CONFIGURATION:
16820 + if (ctrl->bRequestType != USB_DIR_IN)
16821 + goto unknown;
16822 + *(u8 *)req->buf = dev->config;
16823 + value = min (ctrl->wLength, (u16) 1);
16824 + break;
16825 +
16826 + /* until we add altsetting support, or other interfaces,
16827 + * only 0/0 are possible. pxa2xx only supports 0/0 (poorly)
16828 + * and already killed pending endpoint I/O.
16829 + */
16830 + case USB_REQ_SET_INTERFACE:
16831 +
16832 + if (ctrl->bRequestType != USB_RECIP_INTERFACE)
16833 + goto unknown;
16834 + spin_lock (&dev->lock);
16835 + if (dev->config) {
16836 + u8 config = dev->config;
16837 +
16838 + /* resets interface configuration, forgets about
16839 + * previous transaction state (queued bufs, etc)
16840 + * and re-inits endpoint state (toggle etc)
16841 + * no response queued, just zero status == success.
16842 + * if we had more than one interface we couldn't
16843 + * use this "reset the config" shortcut.
16844 + */
16845 + zero_reset_config (dev);
16846 + zero_set_config (dev, config, GFP_ATOMIC);
16847 + value = 0;
16848 + }
16849 + spin_unlock (&dev->lock);
16850 + break;
16851 + case USB_REQ_GET_INTERFACE:
16852 + if ((ctrl->bRequestType == 0x21) && (ctrl->wIndex == 0x02)) {
16853 + value = ctrl->wLength;
16854 + break;
16855 + }
16856 + else {
16857 + if (ctrl->bRequestType != (USB_DIR_IN|USB_RECIP_INTERFACE))
16858 + goto unknown;
16859 + if (!dev->config)
16860 + break;
16861 + if (ctrl->wIndex != 0) {
16862 + value = -EDOM;
16863 + break;
16864 + }
16865 + *(u8 *)req->buf = 0;
16866 + value = min (ctrl->wLength, (u16) 1);
16867 + }
16868 + break;
16869 +
16870 + /*
16871 + * These are the same vendor-specific requests supported by
16872 + * Intel's USB 2.0 compliance test devices. We exceed that
16873 + * device spec by allowing multiple-packet requests.
16874 + */
16875 + case 0x5b: /* control WRITE test -- fill the buffer */
16876 + if (ctrl->bRequestType != (USB_DIR_OUT|USB_TYPE_VENDOR))
16877 + goto unknown;
16878 + if (ctrl->wValue || ctrl->wIndex)
16879 + break;
16880 + /* just read that many bytes into the buffer */
16881 + if (ctrl->wLength > USB_BUFSIZ)
16882 + break;
16883 + value = ctrl->wLength;
16884 + break;
16885 + case 0x5c: /* control READ test -- return the buffer */
16886 + if (ctrl->bRequestType != (USB_DIR_IN|USB_TYPE_VENDOR))
16887 + goto unknown;
16888 + if (ctrl->wValue || ctrl->wIndex)
16889 + break;
16890 + /* expect those bytes are still in the buffer; send back */
16891 + if (ctrl->wLength > USB_BUFSIZ
16892 + || ctrl->wLength != req->length)
16893 + break;
16894 + value = ctrl->wLength;
16895 + break;
16896 +
16897 + case 0x01: // SET_CUR
16898 + case 0x02:
16899 + case 0x03:
16900 + case 0x04:
16901 + case 0x05:
16902 + value = ctrl->wLength;
16903 + break;
16904 + case 0x81:
16905 + switch (ctrl->wValue) {
16906 + case 0x0201:
16907 + case 0x0202:
16908 + ((u8*)req->buf)[0] = 0x00;
16909 + ((u8*)req->buf)[1] = 0xe3;
16910 + break;
16911 + case 0x0300:
16912 + case 0x0500:
16913 + ((u8*)req->buf)[0] = 0x00;
16914 + break;
16915 + }
16916 + //((u8*)req->buf)[0] = 0x81;
16917 + //((u8*)req->buf)[1] = 0x81;
16918 + value = ctrl->wLength;
16919 + break;
16920 + case 0x82:
16921 + switch (ctrl->wValue) {
16922 + case 0x0201:
16923 + case 0x0202:
16924 + ((u8*)req->buf)[0] = 0x00;
16925 + ((u8*)req->buf)[1] = 0xc3;
16926 + break;
16927 + case 0x0300:
16928 + case 0x0500:
16929 + ((u8*)req->buf)[0] = 0x00;
16930 + break;
16931 + }
16932 + //((u8*)req->buf)[0] = 0x82;
16933 + //((u8*)req->buf)[1] = 0x82;
16934 + value = ctrl->wLength;
16935 + break;
16936 + case 0x83:
16937 + switch (ctrl->wValue) {
16938 + case 0x0201:
16939 + case 0x0202:
16940 + ((u8*)req->buf)[0] = 0x00;
16941 + ((u8*)req->buf)[1] = 0x00;
16942 + break;
16943 + case 0x0300:
16944 + ((u8*)req->buf)[0] = 0x60;
16945 + break;
16946 + case 0x0500:
16947 + ((u8*)req->buf)[0] = 0x18;
16948 + break;
16949 + }
16950 + //((u8*)req->buf)[0] = 0x83;
16951 + //((u8*)req->buf)[1] = 0x83;
16952 + value = ctrl->wLength;
16953 + break;
16954 + case 0x84:
16955 + switch (ctrl->wValue) {
16956 + case 0x0201:
16957 + case 0x0202:
16958 + ((u8*)req->buf)[0] = 0x00;
16959 + ((u8*)req->buf)[1] = 0x01;
16960 + break;
16961 + case 0x0300:
16962 + case 0x0500:
16963 + ((u8*)req->buf)[0] = 0x08;
16964 + break;
16965 + }
16966 + //((u8*)req->buf)[0] = 0x84;
16967 + //((u8*)req->buf)[1] = 0x84;
16968 + value = ctrl->wLength;
16969 + break;
16970 + case 0x85:
16971 + ((u8*)req->buf)[0] = 0x85;
16972 + ((u8*)req->buf)[1] = 0x85;
16973 + value = ctrl->wLength;
16974 + break;
16975 +
16976 +
16977 + default:
16978 +unknown:
16979 + printk("unknown control req%02x.%02x v%04x i%04x l%d\n",
16980 + ctrl->bRequestType, ctrl->bRequest,
16981 + ctrl->wValue, ctrl->wIndex, ctrl->wLength);
16982 + }
16983 +
16984 + /* respond with data transfer before status phase? */
16985 + if (value >= 0) {
16986 + req->length = value;
16987 + req->zero = value < ctrl->wLength
16988 + && (value % gadget->ep0->maxpacket) == 0;
16989 + value = usb_ep_queue (gadget->ep0, req, GFP_ATOMIC);
16990 + if (value < 0) {
16991 + DBG (dev, "ep_queue < 0 --> %d\n", value);
16992 + req->status = 0;
16993 + zero_setup_complete (gadget->ep0, req);
16994 + }
16995 + }
16996 +
16997 + /* device either stalls (value < 0) or reports success */
16998 + return value;
16999 +}
17000 +
17001 +static void
17002 +zero_disconnect (struct usb_gadget *gadget)
17003 +{
17004 + struct zero_dev *dev = get_gadget_data (gadget);
17005 + unsigned long flags;
17006 +
17007 + spin_lock_irqsave (&dev->lock, flags);
17008 + zero_reset_config (dev);
17009 +
17010 + /* a more significant application might have some non-usb
17011 + * activities to quiesce here, saving resources like power
17012 + * or pushing the notification up a network stack.
17013 + */
17014 + spin_unlock_irqrestore (&dev->lock, flags);
17015 +
17016 + /* next we may get setup() calls to enumerate new connections;
17017 + * or an unbind() during shutdown (including removing module).
17018 + */
17019 +}
17020 +
17021 +static void
17022 +zero_autoresume (unsigned long _dev)
17023 +{
17024 + struct zero_dev *dev = (struct zero_dev *) _dev;
17025 + int status;
17026 +
17027 + /* normally the host would be woken up for something
17028 + * more significant than just a timer firing...
17029 + */
17030 + if (dev->gadget->speed != USB_SPEED_UNKNOWN) {
17031 + status = usb_gadget_wakeup (dev->gadget);
17032 + DBG (dev, "wakeup --> %d\n", status);
17033 + }
17034 +}
17035 +
17036 +/*-------------------------------------------------------------------------*/
17037 +
17038 +static void
17039 +zero_unbind (struct usb_gadget *gadget)
17040 +{
17041 + struct zero_dev *dev = get_gadget_data (gadget);
17042 +
17043 + DBG (dev, "unbind\n");
17044 +
17045 + /* we've already been disconnected ... no i/o is active */
17046 + if (dev->req)
17047 + free_ep_req (gadget->ep0, dev->req);
17048 + del_timer_sync (&dev->resume);
17049 + kfree (dev);
17050 + set_gadget_data (gadget, NULL);
17051 +}
17052 +
17053 +static int
17054 +zero_bind (struct usb_gadget *gadget)
17055 +{
17056 + struct zero_dev *dev;
17057 + //struct usb_ep *ep;
17058 +
17059 + printk("binding\n");
17060 + /*
17061 + * DRIVER POLICY CHOICE: you may want to do this differently.
17062 + * One thing to avoid is reusing a bcdDevice revision code
17063 + * with different host-visible configurations or behavior
17064 + * restrictions -- using ep1in/ep2out vs ep1out/ep3in, etc
17065 + */
17066 + //device_desc.bcdDevice = __constant_cpu_to_le16 (0x0201);
17067 +
17068 +
17069 + /* ok, we made sense of the hardware ... */
17070 + dev = kzalloc (sizeof *dev, SLAB_KERNEL);
17071 + if (!dev)
17072 + return -ENOMEM;
17073 + spin_lock_init (&dev->lock);
17074 + dev->gadget = gadget;
17075 + set_gadget_data (gadget, dev);
17076 +
17077 + /* preallocate control response and buffer */
17078 + dev->req = usb_ep_alloc_request (gadget->ep0, GFP_KERNEL);
17079 + if (!dev->req)
17080 + goto enomem;
17081 + dev->req->buf = usb_ep_alloc_buffer (gadget->ep0, USB_BUFSIZ,
17082 + &dev->req->dma, GFP_KERNEL);
17083 + if (!dev->req->buf)
17084 + goto enomem;
17085 +
17086 + dev->req->complete = zero_setup_complete;
17087 +
17088 + device_desc.bMaxPacketSize0 = gadget->ep0->maxpacket;
17089 +
17090 +#ifdef CONFIG_USB_GADGET_DUALSPEED
17091 + /* assume ep0 uses the same value for both speeds ... */
17092 + dev_qualifier.bMaxPacketSize0 = device_desc.bMaxPacketSize0;
17093 +
17094 + /* and that all endpoints are dual-speed */
17095 + //hs_source_desc.bEndpointAddress = fs_source_desc.bEndpointAddress;
17096 + //hs_sink_desc.bEndpointAddress = fs_sink_desc.bEndpointAddress;
17097 +#endif
17098 +
17099 + usb_gadget_set_selfpowered (gadget);
17100 +
17101 + init_timer (&dev->resume);
17102 + dev->resume.function = zero_autoresume;
17103 + dev->resume.data = (unsigned long) dev;
17104 +
17105 + gadget->ep0->driver_data = dev;
17106 +
17107 + INFO (dev, "%s, version: " DRIVER_VERSION "\n", longname);
17108 + INFO (dev, "using %s, OUT %s IN %s\n", gadget->name,
17109 + EP_OUT_NAME, EP_IN_NAME);
17110 +
17111 + snprintf (manufacturer, sizeof manufacturer,
17112 + UTS_SYSNAME " " UTS_RELEASE " with %s",
17113 + gadget->name);
17114 +
17115 + return 0;
17116 +
17117 +enomem:
17118 + zero_unbind (gadget);
17119 + return -ENOMEM;
17120 +}
17121 +
17122 +/*-------------------------------------------------------------------------*/
17123 +
17124 +static void
17125 +zero_suspend (struct usb_gadget *gadget)
17126 +{
17127 + struct zero_dev *dev = get_gadget_data (gadget);
17128 +
17129 + if (gadget->speed == USB_SPEED_UNKNOWN)
17130 + return;
17131 +
17132 + if (autoresume) {
17133 + mod_timer (&dev->resume, jiffies + (HZ * autoresume));
17134 + DBG (dev, "suspend, wakeup in %d seconds\n", autoresume);
17135 + } else
17136 + DBG (dev, "suspend\n");
17137 +}
17138 +
17139 +static void
17140 +zero_resume (struct usb_gadget *gadget)
17141 +{
17142 + struct zero_dev *dev = get_gadget_data (gadget);
17143 +
17144 + DBG (dev, "resume\n");
17145 + del_timer (&dev->resume);
17146 +}
17147 +
17148 +
17149 +/*-------------------------------------------------------------------------*/
17150 +
17151 +static struct usb_gadget_driver zero_driver = {
17152 +#ifdef CONFIG_USB_GADGET_DUALSPEED
17153 + .speed = USB_SPEED_HIGH,
17154 +#else
17155 + .speed = USB_SPEED_FULL,
17156 +#endif
17157 + .function = (char *) longname,
17158 + .bind = zero_bind,
17159 + .unbind = zero_unbind,
17160 +
17161 + .setup = zero_setup,
17162 + .disconnect = zero_disconnect,
17163 +
17164 + .suspend = zero_suspend,
17165 + .resume = zero_resume,
17166 +
17167 + .driver = {
17168 + .name = (char *) shortname,
17169 + // .shutdown = ...
17170 + // .suspend = ...
17171 + // .resume = ...
17172 + },
17173 +};
17174 +
17175 +MODULE_AUTHOR ("David Brownell");
17176 +MODULE_LICENSE ("Dual BSD/GPL");
17177 +
17178 +static struct proc_dir_entry *pdir, *pfile;
17179 +
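+/*
+ * procfs read handler: copies the captured isochronous data out of the
+ * circular rbuf[] into the supplied page, one chunk per call, and flags
+ * EOF once the whole buffer has been returned.
+ */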
17180 +static int isoc_read_data (char *page, char **start,
17181 + off_t off, int count,
17182 + int *eof, void *data)
17183 +{
17184 + int i;
17185 + static int c = 0;
17186 + static int done = 0;
17187 + static int s = 0;
17188 +
17189 +/*
17190 + printk ("\ncount: %d\n", count);
17191 + printk ("rbuf_start: %d\n", rbuf_start);
17192 + printk ("rbuf_len: %d\n", rbuf_len);
17193 + printk ("off: %d\n", off);
17194 + printk ("start: %p\n\n", *start);
17195 +*/
17196 + if (done) {
17197 + c = 0;
17198 + done = 0;
17199 + *eof = 1;
17200 + return 0;
17201 + }
17202 +
17203 + if (c == 0) {
17204 + if (rbuf_len == RBUF_LEN)
17205 + s = rbuf_start;
17206 + else s = 0;
17207 + }
17208 +
17209 + for (i=0; i<count && c<rbuf_len; i++, c++) {
17210 + page[i] = rbuf[(c+s) % RBUF_LEN];
17211 + }
17212 + *start = page;
17213 +
17214 + if (c >= rbuf_len) {
17215 + *eof = 1;
17216 + done = 1;
17217 + }
17218 +
17219 +
17220 + return i;
17221 +}
17222 +
17223 +static int __init init (void)
17224 +{
17225 +
17226 + int retval = 0;
17227 +
17228 + pdir = proc_mkdir("isoc_test", NULL);
17229 + if(pdir == NULL) {
17230 + retval = -ENOMEM;
17231 + printk("Error creating dir\n");
17232 + goto done;
17233 + }
17234 + pdir->owner = THIS_MODULE;
17235 +
17236 + pfile = create_proc_read_entry("isoc_data",
17237 + 0444, pdir,
17238 + isoc_read_data,
17239 + NULL);
17240 + if (pfile == NULL) {
17241 + retval = -ENOMEM;
17242 + printk("Error creating file\n");
17243 + goto no_file;
17244 + }
17245 + pfile->owner = THIS_MODULE;
17246 +
17247 + return usb_gadget_register_driver (&zero_driver);
17248 +
17249 + no_file:
17250 + remove_proc_entry("isoc_data", NULL);
17251 + done:
17252 + return retval;
17253 +}
17254 +module_init (init);
17255 +
17256 +static void __exit cleanup (void)
17257 +{
17258 +
17259 + usb_gadget_unregister_driver (&zero_driver);
17260 +
17261 + remove_proc_entry("isoc_data", pdir);
17262 + remove_proc_entry("isoc_test", NULL);
17263 +}
17264 +module_exit (cleanup);
17265 --- /dev/null
17266 +++ b/drivers/usb/host/dwc_otg/dwc_cfi_common.h
17267 @@ -0,0 +1,142 @@
17268 +/* ==========================================================================
17269 + * Synopsys HS OTG Linux Software Driver and documentation (hereinafter,
17270 + * "Software") is an Unsupported proprietary work of Synopsys, Inc. unless
17271 + * otherwise expressly agreed to in writing between Synopsys and you.
17272 + *
17273 + * The Software IS NOT an item of Licensed Software or Licensed Product under
17274 + * any End User Software License Agreement or Agreement for Licensed Product
17275 + * with Synopsys or any supplement thereto. You are permitted to use and
17276 + * redistribute this Software in source and binary forms, with or without
17277 + * modification, provided that redistributions of source code must retain this
17278 + * notice. You may not view, use, disclose, copy or distribute this file or
17279 + * any information contained herein except pursuant to this license grant from
17280 + * Synopsys. If you do not agree with this notice, including the disclaimer
17281 + * below, then you are not authorized to use the Software.
17282 + *
17283 + * THIS SOFTWARE IS BEING DISTRIBUTED BY SYNOPSYS SOLELY ON AN "AS IS" BASIS
17284 + * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
17285 + * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
17286 + * ARE HEREBY DISCLAIMED. IN NO EVENT SHALL SYNOPSYS BE LIABLE FOR ANY DIRECT,
17287 + * INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
17288 + * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
17289 + * SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
17290 + * CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
17291 + * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
17292 + * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH
17293 + * DAMAGE.
17294 + * ========================================================================== */
17295 +
17296 +#if !defined(__DWC_CFI_COMMON_H__)
17297 +#define __DWC_CFI_COMMON_H__
17298 +
17299 +//#include <linux/types.h>
17300 +
17301 +/**
17302 + * @file
17303 + *
17304 + * This file contains the CFI specific common constants, interfaces
17305 + * (functions and macros) and structures for Linux. No PCD specific
17306 + * data structure or definition is to be included in this file.
17307 + *
17308 + */
17309 +
17310 +/** This is a request for all Core Features */
17311 +#define VEN_CORE_GET_FEATURES 0xB1
17312 +
17313 +/** This is a request to get the value of a specific Core Feature */
17314 +#define VEN_CORE_GET_FEATURE 0xB2
17315 +
17316 +/** This command allows the host to set the value of a specific Core Feature */
17317 +#define VEN_CORE_SET_FEATURE 0xB3
17318 +
17319 +/** This command allows the host to set the default values of
17320 + * either all or any specific Core Feature
17321 + */
17322 +#define VEN_CORE_RESET_FEATURES 0xB4
17323 +
17324 +/** This command forces the PCD to write the deferred values of Core Features */
17325 +#define VEN_CORE_ACTIVATE_FEATURES 0xB5
17326 +
17327 +/** This request reads a DWORD value from a register at the specified offset */
17328 +#define VEN_CORE_READ_REGISTER 0xB6
17329 +
17330 +/** This request writes a DWORD value into a register at the specified offset */
17331 +#define VEN_CORE_WRITE_REGISTER 0xB7
17332 +
17333 +/** This structure is the header of the Core Features dataset returned to
17334 + * the Host
17335 + */
17336 +struct cfi_all_features_header {
17337 +/** The features header structure length in bytes */
17338 +#define CFI_ALL_FEATURES_HDR_LEN 8
17339 + /**
17340 + * The total length of the features dataset returned to the Host
17341 + */
17342 + uint16_t wTotalLen;
17343 +
17344 + /**
17345 + * CFI version number in Binary-Coded Decimal (i.e., 1.00 is 100H).
17346 + * This field identifies the version of the CFI Specification with which
17347 + * the device is compliant.
17348 + */
17349 + uint16_t wVersion;
17350 +
17351 + /** The ID of the Core */
17352 + uint16_t wCoreID;
17353 +#define CFI_CORE_ID_UDC 1
17354 +#define CFI_CORE_ID_OTG 2
17355 +#define CFI_CORE_ID_WUDEV 3
17356 +
17357 + /** Number of features returned by VEN_CORE_GET_FEATURES request */
17358 + uint16_t wNumFeatures;
17359 +} UPACKED;
17360 +
17361 +typedef struct cfi_all_features_header cfi_all_features_header_t;
17362 +
17363 +/** This structure is a header of the Core Feature descriptor dataset returned to
17364 + * the Host after the VEN_CORE_GET_FEATURES request
17365 + */
17366 +struct cfi_feature_desc_header {
17367 +#define CFI_FEATURE_DESC_HDR_LEN 8
17368 +
17369 + /** The feature ID */
17370 + uint16_t wFeatureID;
17371 +
17372 + /** Length of this feature descriptor in bytes - including the
17373 + * length of the feature name string
17374 + */
17375 + uint16_t wLength;
17376 +
17377 + /** The data length of this feature in bytes */
17378 + uint16_t wDataLength;
17379 +
17380 + /**
17381 + * Attributes of this feature
17382 + * D0: Access rights
17383 + * 0 - Read/Write
17384 + * 1 - Read only
17385 + */
17386 + uint8_t bmAttributes;
17387 +#define CFI_FEATURE_ATTR_RO 1
17388 +#define CFI_FEATURE_ATTR_RW 0
17389 +
17390 + /** Length of the feature name in bytes */
17391 + uint8_t bNameLen;
17392 +
17393 + /** The feature name buffer */
17394 + //uint8_t *name;
17395 +} UPACKED;
17396 +
17397 +typedef struct cfi_feature_desc_header cfi_feature_desc_header_t;
17398 +
17399 +/**
17400 + * This structure describes a NULL terminated string referenced by its id field.
17401 + * It is very similar to the usb_string structure but has the id field type set to 16-bit.
17402 + */
17403 +struct cfi_string {
17404 + uint16_t id;
17405 + const uint8_t *s;
17406 +};
17407 +typedef struct cfi_string cfi_string_t;
17408 +
17409 +#endif
17410 --- /dev/null
17411 +++ b/drivers/usb/host/dwc_otg/dwc_otg_adp.c
17412 @@ -0,0 +1,854 @@
17413 +/* ==========================================================================
17414 + * $File: //dwh/usb_iip/dev/software/otg/linux/drivers/dwc_otg_adp.c $
17415 + * $Revision: #12 $
17416 + * $Date: 2011/10/26 $
17417 + * $Change: 1873028 $
17418 + *
17419 + * Synopsys HS OTG Linux Software Driver and documentation (hereinafter,
17420 + * "Software") is an Unsupported proprietary work of Synopsys, Inc. unless
17421 + * otherwise expressly agreed to in writing between Synopsys and you.
17422 + *
17423 + * The Software IS NOT an item of Licensed Software or Licensed Product under
17424 + * any End User Software License Agreement or Agreement for Licensed Product
17425 + * with Synopsys or any supplement thereto. You are permitted to use and
17426 + * redistribute this Software in source and binary forms, with or without
17427 + * modification, provided that redistributions of source code must retain this
17428 + * notice. You may not view, use, disclose, copy or distribute this file or
17429 + * any information contained herein except pursuant to this license grant from
17430 + * Synopsys. If you do not agree with this notice, including the disclaimer
17431 + * below, then you are not authorized to use the Software.
17432 + *
17433 + * THIS SOFTWARE IS BEING DISTRIBUTED BY SYNOPSYS SOLELY ON AN "AS IS" BASIS
17434 + * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
17435 + * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
17436 + * ARE HEREBY DISCLAIMED. IN NO EVENT SHALL SYNOPSYS BE LIABLE FOR ANY DIRECT,
17437 + * INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
17438 + * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
17439 + * SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
17440 + * CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
17441 + * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
17442 + * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH
17443 + * DAMAGE.
17444 + * ========================================================================== */
17445 +
17446 +#include "dwc_os.h"
17447 +#include "dwc_otg_regs.h"
17448 +#include "dwc_otg_cil.h"
17449 +#include "dwc_otg_adp.h"
17450 +
17451 +/** @file
17452 + *
17453 + * This file contains most of the Attach Detect Protocol implementation for
17454 + * the driver to support OTG Rev2.0.
17455 + *
17456 + */
17457 +
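+/**
+ * Function is called to write ADP registers: the access request bit is
+ * set together with the new value and the register is polled until the
+ * core clears the bit again.
+ */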
17458 +void dwc_otg_adp_write_reg(dwc_otg_core_if_t * core_if, uint32_t value)
17459 +{
17460 + adpctl_data_t adpctl;
17461 +
17462 + adpctl.d32 = value;
17463 + adpctl.b.ar = 0x2;
17464 +
17465 + DWC_WRITE_REG32(&core_if->core_global_regs->adpctl, adpctl.d32);
17466 +
17467 + while (adpctl.b.ar) {
17468 + adpctl.d32 = DWC_READ_REG32(&core_if->core_global_regs->adpctl);
17469 + }
17470 +
17471 +}
17472 +
17473 +/**
17474 + * Function is called to read ADP registers
17475 + */
17476 +uint32_t dwc_otg_adp_read_reg(dwc_otg_core_if_t * core_if)
17477 +{
17478 + adpctl_data_t adpctl;
17479 +
17480 + adpctl.d32 = 0;
17481 + adpctl.b.ar = 0x1;
17482 +
17483 + DWC_WRITE_REG32(&core_if->core_global_regs->adpctl, adpctl.d32);
17484 +
17485 + while (adpctl.b.ar) {
17486 + adpctl.d32 = DWC_READ_REG32(&core_if->core_global_regs->adpctl);
17487 + }
17488 +
17489 + return adpctl.d32;
17490 +}
17491 +
17492 +/**
17493 + * Function is called to read ADPCTL register and filter Write-clear bits
17494 + */
17495 +uint32_t dwc_otg_adp_read_reg_filter(dwc_otg_core_if_t * core_if)
17496 +{
17497 + adpctl_data_t adpctl;
17498 +
17499 + adpctl.d32 = dwc_otg_adp_read_reg(core_if);
17500 + adpctl.b.adp_tmout_int = 0;
17501 + adpctl.b.adp_prb_int = 0;
17502 + adpctl.b.adp_sns_int = 0;
17503 +
17504 + return adpctl.d32;
17505 +}
17506 +
17507 +/**
17508 + * Function is called to modify ADP registers
17509 + */
17510 +void dwc_otg_adp_modify_reg(dwc_otg_core_if_t * core_if, uint32_t clr,
17511 + uint32_t set)
17512 +{
17513 + dwc_otg_adp_write_reg(core_if,
17514 + (dwc_otg_adp_read_reg(core_if) & (~clr)) | set);
17515 +}
17516 +
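+/**
+ * This function is called when the ADP sense timer expires: sensing is
+ * stopped and probing is restarted if ADP is still enabled.
+ */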
17517 +static void adp_sense_timeout(void *ptr)
17518 +{
17519 + dwc_otg_core_if_t *core_if = (dwc_otg_core_if_t *) ptr;
17520 + core_if->adp.sense_timer_started = 0;
17521 + DWC_PRINTF("ADP SENSE TIMEOUT\n");
17522 + if (core_if->adp_enable) {
17523 + dwc_otg_adp_sense_stop(core_if);
17524 + dwc_otg_adp_probe_start(core_if);
17525 + }
17526 +}
17527 +
17528 +/**
17529 + * This function is called when the ADP vbus timer expires. Timeout is 1.1s.
17530 + */
17531 +static void adp_vbuson_timeout(void *ptr)
17532 +{
17533 + gpwrdn_data_t gpwrdn;
17534 + dwc_otg_core_if_t *core_if = (dwc_otg_core_if_t *) ptr;
17535 + hprt0_data_t hprt0 = {.d32 = 0 };
17536 + pcgcctl_data_t pcgcctl = {.d32 = 0 };
17537 + DWC_PRINTF("%s: 1.1 seconds expire after turning on VBUS\n",__FUNCTION__);
17538 + if (core_if) {
17539 + core_if->adp.vbuson_timer_started = 0;
17540 + /* Turn off vbus */
17541 + hprt0.b.prtpwr = 1;
17542 + DWC_MODIFY_REG32(core_if->host_if->hprt0, hprt0.d32, 0);
17543 + gpwrdn.d32 = 0;
17544 +
17545 + /* Power off the core */
17546 + if (core_if->power_down == 2) {
17547 + /* Enable Wakeup Logic */
17548 +// gpwrdn.b.wkupactiv = 1;
17549 + gpwrdn.b.pmuactv = 0;
17550 + gpwrdn.b.pwrdnrstn = 1;
17551 + gpwrdn.b.pwrdnclmp = 1;
17552 + DWC_MODIFY_REG32(&core_if->core_global_regs->gpwrdn, 0,
17553 + gpwrdn.d32);
17554 +
17555 + /* Suspend the Phy Clock */
17556 + pcgcctl.b.stoppclk = 1;
17557 + DWC_MODIFY_REG32(core_if->pcgcctl, 0, pcgcctl.d32);
17558 +
17559 + /* Switch on VDD */
17560 +// gpwrdn.b.wkupactiv = 1;
17561 + gpwrdn.b.pmuactv = 1;
17562 + gpwrdn.b.pwrdnrstn = 1;
17563 + gpwrdn.b.pwrdnclmp = 1;
17564 + DWC_MODIFY_REG32(&core_if->core_global_regs->gpwrdn, 0,
17565 + gpwrdn.d32);
17566 + } else {
17567 + /* Enable Power Down Logic */
17568 + gpwrdn.b.pmuintsel = 1;
17569 + gpwrdn.b.pmuactv = 1;
17570 + DWC_MODIFY_REG32(&core_if->core_global_regs->gpwrdn, 0, gpwrdn.d32);
17571 + }
17572 +
17573 + /* Power off the core */
17574 + if (core_if->power_down == 2) {
17575 + gpwrdn.d32 = 0;
17576 + gpwrdn.b.pwrdnswtch = 1;
17577 + DWC_MODIFY_REG32(&core_if->core_global_regs->gpwrdn,
17578 + gpwrdn.d32, 0);
17579 + }
17580 +
17581 + /* Unmask SRP detected interrupt from Power Down Logic */
17582 + gpwrdn.d32 = 0;
17583 + gpwrdn.b.srp_det_msk = 1;
17584 + DWC_MODIFY_REG32(&core_if->core_global_regs->gpwrdn, 0, gpwrdn.d32);
17585 +
17586 + dwc_otg_adp_probe_start(core_if);
17587 + dwc_otg_dump_global_registers(core_if);
17588 + dwc_otg_dump_host_registers(core_if);
17589 + }
17590 +
17591 +}
17592 +
17593 +/**
17594 + * Start the ADP Initial Probe timer to detect if Port Connected interrupt is
17595 + * not asserted within 1.1 seconds.
17596 + *
17597 + * @param core_if the pointer to core_if structure.
17598 + */
17599 +void dwc_otg_adp_vbuson_timer_start(dwc_otg_core_if_t * core_if)
17600 +{
17601 + core_if->adp.vbuson_timer_started = 1;
17602 + if (core_if->adp.vbuson_timer)
17603 + {
17604 + DWC_PRINTF("SCHEDULING VBUSON TIMER\n");
17605 + /* 1.1 secs + 60ms necessary for cil_hcd_start*/
17606 + DWC_TIMER_SCHEDULE(core_if->adp.vbuson_timer, 1160);
17607 + } else {
17608 + DWC_WARN("VBUSON_TIMER = %p\n",core_if->adp.vbuson_timer);
17609 + }
17610 +}
17611 +
17612 +#if 0
17613 +/**
17614 + * Masks all DWC OTG core interrupts
17615 + *
17616 + */
17617 +static void mask_all_interrupts(dwc_otg_core_if_t * core_if)
17618 +{
17619 + int i;
17620 + gahbcfg_data_t ahbcfg = {.d32 = 0 };
17621 +
17622 + /* Mask Host Interrupts */
17623 +
17624 + /* Clear and disable HCINTs */
17625 + for (i = 0; i < core_if->core_params->host_channels; i++) {
17626 + DWC_WRITE_REG32(&core_if->host_if->hc_regs[i]->hcintmsk, 0);
17627 + DWC_WRITE_REG32(&core_if->host_if->hc_regs[i]->hcint, 0xFFFFFFFF);
17628 +
17629 + }
17630 +
17631 + /* Clear and disable HAINT */
17632 + DWC_WRITE_REG32(&core_if->host_if->host_global_regs->haintmsk, 0x0000);
17633 + DWC_WRITE_REG32(&core_if->host_if->host_global_regs->haint, 0xFFFFFFFF);
17634 +
17635 + /* Mask Device Interrupts */
17636 + if (!core_if->multiproc_int_enable) {
17637 + /* Clear and disable IN Endpoint interrupts */
17638 + DWC_WRITE_REG32(&core_if->dev_if->dev_global_regs->diepmsk, 0);
17639 + for (i = 0; i <= core_if->dev_if->num_in_eps; i++) {
17640 + DWC_WRITE_REG32(&core_if->dev_if->in_ep_regs[i]->
17641 + diepint, 0xFFFFFFFF);
17642 + }
17643 +
17644 + /* Clear and disable OUT Endpoint interrupts */
17645 + DWC_WRITE_REG32(&core_if->dev_if->dev_global_regs->doepmsk, 0);
17646 + for (i = 0; i <= core_if->dev_if->num_out_eps; i++) {
17647 + DWC_WRITE_REG32(&core_if->dev_if->out_ep_regs[i]->
17648 + doepint, 0xFFFFFFFF);
17649 + }
17650 +
17651 + /* Clear and disable DAINT */
17652 + DWC_WRITE_REG32(&core_if->dev_if->dev_global_regs->daint,
17653 + 0xFFFFFFFF);
17654 + DWC_WRITE_REG32(&core_if->dev_if->dev_global_regs->daintmsk, 0);
17655 + } else {
17656 + for (i = 0; i < core_if->dev_if->num_in_eps; ++i) {
17657 + DWC_WRITE_REG32(&core_if->dev_if->dev_global_regs->
17658 + diepeachintmsk[i], 0);
17659 + DWC_WRITE_REG32(&core_if->dev_if->in_ep_regs[i]->
17660 + diepint, 0xFFFFFFFF);
17661 + }
17662 +
17663 + for (i = 0; i < core_if->dev_if->num_out_eps; ++i) {
17664 + DWC_WRITE_REG32(&core_if->dev_if->dev_global_regs->
17665 + doepeachintmsk[i], 0);
17666 + DWC_WRITE_REG32(&core_if->dev_if->out_ep_regs[i]->
17667 + doepint, 0xFFFFFFFF);
17668 + }
17669 +
17670 + DWC_WRITE_REG32(&core_if->dev_if->dev_global_regs->deachintmsk,
17671 + 0);
17672 + DWC_WRITE_REG32(&core_if->dev_if->dev_global_regs->deachint,
17673 + 0xFFFFFFFF);
17674 +
17675 + }
17676 +
17677 + /* Disable interrupts */
17678 + ahbcfg.b.glblintrmsk = 1;
17679 + DWC_MODIFY_REG32(&core_if->core_global_regs->gahbcfg, ahbcfg.d32, 0);
17680 +
17681 + /* Disable all interrupts. */
17682 + DWC_WRITE_REG32(&core_if->core_global_regs->gintmsk, 0);
17683 +
17684 + /* Clear any pending interrupts */
17685 + DWC_WRITE_REG32(&core_if->core_global_regs->gintsts, 0xFFFFFFFF);
17686 +
17687 + /* Clear any pending OTG Interrupts */
17688 + DWC_WRITE_REG32(&core_if->core_global_regs->gotgint, 0xFFFFFFFF);
17689 +}
17690 +
17691 +/**
17692 + * Unmask Port Connection Detected interrupt
17693 + *
17694 + */
17695 +static void unmask_conn_det_intr(dwc_otg_core_if_t * core_if)
17696 +{
17697 + gintmsk_data_t gintmsk = {.d32 = 0,.b.portintr = 1 };
17698 +
17699 + DWC_WRITE_REG32(&core_if->core_global_regs->gintmsk, gintmsk.d32);
17700 +}
17701 +#endif
17702 +
17703 +/**
17704 + * Starts the ADP Probing
17705 + *
17706 + * @param core_if the pointer to core_if structure.
17707 + */
17708 +uint32_t dwc_otg_adp_probe_start(dwc_otg_core_if_t * core_if)
17709 +{
17710 +
17711 + adpctl_data_t adpctl = {.d32 = 0};
17712 + gpwrdn_data_t gpwrdn;
17713 +#if 0
17714 + adpctl_data_t adpctl_int = {.d32 = 0, .b.adp_prb_int = 1,
17715 + .b.adp_sns_int = 1, b.adp_tmout_int};
17716 +#endif
17717 + dwc_otg_disable_global_interrupts(core_if);
17718 + DWC_PRINTF("ADP Probe Start\n");
17719 + core_if->adp.probe_enabled = 1;
17720 +
17721 + adpctl.b.adpres = 1;
17722 + dwc_otg_adp_write_reg(core_if, adpctl.d32);
17723 +
17724 + while (adpctl.b.adpres) {
17725 + adpctl.d32 = dwc_otg_adp_read_reg(core_if);
17726 + }
17727 +
17728 + adpctl.d32 = 0;
17729 + gpwrdn.d32 = DWC_READ_REG32(&core_if->core_global_regs->gpwrdn);
17730 +
17731 + /* In Host mode unmask SRP detected interrupt */
17732 + gpwrdn.d32 = 0;
17733 + gpwrdn.b.sts_chngint_msk = 1;
17734 + if (!gpwrdn.b.idsts) {
17735 + gpwrdn.b.srp_det_msk = 1;
17736 + }
17737 + DWC_MODIFY_REG32(&core_if->core_global_regs->gpwrdn, 0, gpwrdn.d32);
17738 +
17739 + adpctl.b.adp_tmout_int_msk = 1;
17740 + adpctl.b.adp_prb_int_msk = 1;
17741 + adpctl.b.prb_dschg = 1;
17742 + adpctl.b.prb_delta = 1;
17743 + adpctl.b.prb_per = 1;
17744 + adpctl.b.adpen = 1;
17745 + adpctl.b.enaprb = 1;
17746 +
17747 + dwc_otg_adp_write_reg(core_if, adpctl.d32);
17748 + DWC_PRINTF("ADP Probe Finish\n");
17749 + return 0;
17750 +}
17751 +
17752 +/**
17753 + * Starts the ADP Sense timer to detect if ADP Sense interrupt is not asserted
17754 + * within 3 seconds.
17755 + *
17756 + * @param core_if the pointer to core_if structure.
17757 + */
17758 +void dwc_otg_adp_sense_timer_start(dwc_otg_core_if_t * core_if)
17759 +{
17760 + core_if->adp.sense_timer_started = 1;
17761 + DWC_TIMER_SCHEDULE(core_if->adp.sense_timer, 3000 /* 3 secs */ );
17762 +}
17763 +
17764 +/**
17765 + * Starts the ADP Sense
17766 + *
17767 + * @param core_if the pointer to core_if structure.
17768 + */
17769 +uint32_t dwc_otg_adp_sense_start(dwc_otg_core_if_t * core_if)
17770 +{
17771 + adpctl_data_t adpctl;
17772 +
17773 + DWC_PRINTF("ADP Sense Start\n");
17774 +
17775 + /* Unmask ADP sense interrupt and mask all other from the core */
17776 + adpctl.d32 = dwc_otg_adp_read_reg_filter(core_if);
17777 + adpctl.b.adp_sns_int_msk = 1;
17778 + dwc_otg_adp_write_reg(core_if, adpctl.d32);
17779 + dwc_otg_disable_global_interrupts(core_if); // vahrama
17780 +
17781 + /* Set ADP reset bit*/
17782 + adpctl.d32 = dwc_otg_adp_read_reg_filter(core_if);
17783 + adpctl.b.adpres = 1;
17784 + dwc_otg_adp_write_reg(core_if, adpctl.d32);
17785 +
17786 + while (adpctl.b.adpres) {
17787 + adpctl.d32 = dwc_otg_adp_read_reg(core_if);
17788 + }
17789 +
17790 + adpctl.b.adpres = 0;
17791 + adpctl.b.adpen = 1;
17792 + adpctl.b.enasns = 1;
17793 + dwc_otg_adp_write_reg(core_if, adpctl.d32);
17794 +
17795 + dwc_otg_adp_sense_timer_start(core_if);
17796 +
17797 + return 0;
17798 +}
17799 +
17800 +/**
17801 + * Stops the ADP Probing
17802 + *
17803 + * @param core_if the pointer to core_if structure.
17804 + */
17805 +uint32_t dwc_otg_adp_probe_stop(dwc_otg_core_if_t * core_if)
17806 +{
17807 +
17808 + adpctl_data_t adpctl;
17809 + DWC_PRINTF("Stop ADP probe\n");
17810 + core_if->adp.probe_enabled = 0;
17811 + core_if->adp.probe_counter = 0;
17812 + adpctl.d32 = dwc_otg_adp_read_reg(core_if);
17813 +
17814 + adpctl.b.adpen = 0;
17815 + adpctl.b.adp_prb_int = 1;
17816 + adpctl.b.adp_tmout_int = 1;
17817 + adpctl.b.adp_sns_int = 1;
17818 + dwc_otg_adp_write_reg(core_if, adpctl.d32);
17819 +
17820 + return 0;
17821 +}
17822 +
17823 +/**
17824 + * Stops the ADP Sensing
17825 + *
17826 + * @param core_if the pointer to core_if structure.
17827 + */
17828 +uint32_t dwc_otg_adp_sense_stop(dwc_otg_core_if_t * core_if)
17829 +{
17830 + adpctl_data_t adpctl;
17831 +
17832 + core_if->adp.sense_enabled = 0;
17833 +
17834 + adpctl.d32 = dwc_otg_adp_read_reg_filter(core_if);
17835 + adpctl.b.enasns = 0;
17836 + adpctl.b.adp_sns_int = 1;
17837 + dwc_otg_adp_write_reg(core_if, adpctl.d32);
17838 +
17839 + return 0;
17840 +}
17841 +
17842 +/**
17843 + * Called to turn on the VBUS after initial ADP probe in host mode.
17844 + * If port power was already enabled in the cil_hcd_start function then
17845 + * only the timer is scheduled.
17846 + *
17847 + * @param core_if the pointer to core_if structure.
17848 + */
17849 +void dwc_otg_adp_turnon_vbus(dwc_otg_core_if_t * core_if)
17850 +{
17851 + hprt0_data_t hprt0 = {.d32 = 0 };
17852 + hprt0.d32 = dwc_otg_read_hprt0(core_if);
17853 + DWC_PRINTF("Turn on VBUS for 1.1s, port power is %d\n", hprt0.b.prtpwr);
17854 +
17855 + if (hprt0.b.prtpwr == 0) {
17856 + hprt0.b.prtpwr = 1;
17857 + //DWC_WRITE_REG32(core_if->host_if->hprt0, hprt0.d32);
17858 + }
17859 +
17860 + dwc_otg_adp_vbuson_timer_start(core_if);
17861 +}
17862 +
17863 +/**
17864 + * Called right after driver is loaded
17865 + * to perform initial actions for ADP
17866 + *
17867 + * @param core_if the pointer to core_if structure.
17868 + * @param is_host - flag for current mode of operation either from GINTSTS or GPWRDN
17869 + */
17870 +void dwc_otg_adp_start(dwc_otg_core_if_t * core_if, uint8_t is_host)
17871 +{
17872 + gpwrdn_data_t gpwrdn;
17873 +
17874 + DWC_PRINTF("ADP Initial Start\n");
17875 + core_if->adp.adp_started = 1;
17876 +
17877 + DWC_WRITE_REG32(&core_if->core_global_regs->gintsts, 0xFFFFFFFF);
17878 + dwc_otg_disable_global_interrupts(core_if);
17879 + if (is_host) {
17880 + DWC_PRINTF("HOST MODE\n");
17881 + /* Enable Power Down Logic Interrupt*/
17882 + gpwrdn.d32 = 0;
17883 + gpwrdn.b.pmuintsel = 1;
17884 + gpwrdn.b.pmuactv = 1;
17885 + DWC_MODIFY_REG32(&core_if->core_global_regs->gpwrdn, 0, gpwrdn.d32);
17886 + /* Initialize first ADP probe to obtain Ramp Time value */
17887 + core_if->adp.initial_probe = 1;
17888 + dwc_otg_adp_probe_start(core_if);
17889 + } else {
17890 + gotgctl_data_t gotgctl;
17891 + gotgctl.d32 = DWC_READ_REG32(&core_if->core_global_regs->gotgctl);
17892 + DWC_PRINTF("DEVICE MODE\n");
17893 + if (gotgctl.b.bsesvld == 0) {
17894 + /* Enable Power Down Logic Interrupt*/
17895 + gpwrdn.d32 = 0;
17896 + DWC_PRINTF("VBUS is not valid - start ADP probe\n");
17897 + gpwrdn.b.pmuintsel = 1;
17898 + gpwrdn.b.pmuactv = 1;
17899 + DWC_MODIFY_REG32(&core_if->core_global_regs->gpwrdn, 0, gpwrdn.d32);
17900 + core_if->adp.initial_probe = 1;
17901 + dwc_otg_adp_probe_start(core_if);
17902 + } else {
17903 + DWC_PRINTF("VBUS is valid - initialize core as a Device\n");
17904 + core_if->op_state = B_PERIPHERAL;
17905 + dwc_otg_core_init(core_if);
17906 + dwc_otg_enable_global_interrupts(core_if);
17907 + cil_pcd_start(core_if);
17908 + dwc_otg_dump_global_registers(core_if);
17909 + dwc_otg_dump_dev_registers(core_if);
17910 + }
17911 + }
17912 +}
17913 +
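+/**
+ * Initializes the ADP state in core_if and allocates the ADP sense and
+ * VBUS on timers.
+ */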
17914 +void dwc_otg_adp_init(dwc_otg_core_if_t * core_if)
17915 +{
17916 + core_if->adp.adp_started = 0;
17917 + core_if->adp.initial_probe = 0;
17918 + core_if->adp.probe_timer_values[0] = -1;
17919 + core_if->adp.probe_timer_values[1] = -1;
17920 + core_if->adp.probe_enabled = 0;
17921 + core_if->adp.sense_enabled = 0;
17922 + core_if->adp.sense_timer_started = 0;
17923 + core_if->adp.vbuson_timer_started = 0;
17924 + core_if->adp.probe_counter = 0;
17925 + core_if->adp.gpwrdn = 0;
17926 + core_if->adp.attached = DWC_OTG_ADP_UNKOWN;
17927 + /* Initialize timers */
17928 + core_if->adp.sense_timer =
17929 + DWC_TIMER_ALLOC("ADP SENSE TIMER", adp_sense_timeout, core_if);
17930 + core_if->adp.vbuson_timer =
17931 + DWC_TIMER_ALLOC("ADP VBUS ON TIMER", adp_vbuson_timeout, core_if);
17932 + if (!core_if->adp.sense_timer || !core_if->adp.vbuson_timer)
17933 + {
17934 + DWC_ERROR("Could not allocate memory for ADP timers\n");
17935 + }
17936 +}
17937 +
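+/**
+ * Disables the Power Down Logic, stops any running ADP probe or sense
+ * and frees the ADP timers.
+ */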
17938 +void dwc_otg_adp_remove(dwc_otg_core_if_t * core_if)
17939 +{
17940 + gpwrdn_data_t gpwrdn = { .d32 = 0 };
17941 + gpwrdn.b.pmuintsel = 1;
17942 + gpwrdn.b.pmuactv = 1;
17943 + DWC_MODIFY_REG32(&core_if->core_global_regs->gpwrdn, gpwrdn.d32, 0);
17944 +
17945 + if (core_if->adp.probe_enabled)
17946 + dwc_otg_adp_probe_stop(core_if);
17947 + if (core_if->adp.sense_enabled)
17948 + dwc_otg_adp_sense_stop(core_if);
17949 + if (core_if->adp.sense_timer_started)
17950 + DWC_TIMER_CANCEL(core_if->adp.sense_timer);
17951 + if (core_if->adp.vbuson_timer_started)
17952 + DWC_TIMER_CANCEL(core_if->adp.vbuson_timer);
17953 + DWC_TIMER_FREE(core_if->adp.sense_timer);
17954 + DWC_TIMER_FREE(core_if->adp.vbuson_timer);
17955 +}
17956 +
17957 +/////////////////////////////////////////////////////////////////////
17958 +////////////// ADP Interrupt Handlers ///////////////////////////////
17959 +/////////////////////////////////////////////////////////////////////
17960 +/**
17961 + * This function sets Ramp Timer values
17962 + */
17963 +static uint32_t set_timer_value(dwc_otg_core_if_t * core_if, uint32_t val)
17964 +{
17965 + if (core_if->adp.probe_timer_values[0] == -1) {
17966 + core_if->adp.probe_timer_values[0] = val;
17967 + core_if->adp.probe_timer_values[1] = -1;
17968 + return 1;
17969 + } else {
17970 + core_if->adp.probe_timer_values[1] =
17971 + core_if->adp.probe_timer_values[0];
17972 + core_if->adp.probe_timer_values[0] = val;
17973 + return 0;
17974 + }
17975 +}
17976 +
17977 +/**
17978 + * This function compares Ramp Timer values
17979 + */
17980 +static uint32_t compare_timer_values(dwc_otg_core_if_t * core_if)
17981 +{
17982 + uint32_t diff;
17983 + if (core_if->adp.probe_timer_values[0]>=core_if->adp.probe_timer_values[1])
17984 + diff = core_if->adp.probe_timer_values[0]-core_if->adp.probe_timer_values[1];
17985 + else
17986 + diff = core_if->adp.probe_timer_values[1]-core_if->adp.probe_timer_values[0];
17987 + if(diff < 2) {
17988 + return 0;
17989 + } else {
17990 + return 1;
17991 + }
17992 +}
17993 +
17994 +/**
17995 + * This function handles ADP Probe Interrupts
17996 + */
17997 +static int32_t dwc_otg_adp_handle_prb_intr(dwc_otg_core_if_t * core_if,
17998 + uint32_t val)
17999 +{
18000 + adpctl_data_t adpctl = {.d32 = 0 };
18001 + gpwrdn_data_t gpwrdn, temp;
18002 + adpctl.d32 = val;
18003 +
18004 + temp.d32 = DWC_READ_REG32(&core_if->core_global_regs->gpwrdn);
18005 + core_if->adp.probe_counter++;
18006 + core_if->adp.gpwrdn = DWC_READ_REG32(&core_if->core_global_regs->gpwrdn);
18007 + if (adpctl.b.rtim == 0 && !temp.b.idsts){
18008 + DWC_PRINTF("RTIM value is 0\n");
18009 + goto exit;
18010 + }
18011 + if (set_timer_value(core_if, adpctl.b.rtim) &&
18012 + core_if->adp.initial_probe) {
18013 + core_if->adp.initial_probe = 0;
18014 + dwc_otg_adp_probe_stop(core_if);
18015 + gpwrdn.d32 = 0;
18016 + gpwrdn.b.pmuactv = 1;
18017 + gpwrdn.b.pmuintsel = 1;
18018 + DWC_MODIFY_REG32(&core_if->core_global_regs->gpwrdn, gpwrdn.d32, 0);
18019 + DWC_WRITE_REG32(&core_if->core_global_regs->gintsts, 0xFFFFFFFF);
18020 +
18021 + /* check which value is for device mode and which for Host mode */
18022 + if (!temp.b.idsts) { /* considered host mode value is 0 */
18023 + /*
18024 + * Turn on VBUS after initial ADP probe.
18025 + */
18026 + core_if->op_state = A_HOST;
18027 + dwc_otg_enable_global_interrupts(core_if);
18028 + DWC_SPINUNLOCK(core_if->lock);
18029 + cil_hcd_start(core_if);
18030 + dwc_otg_adp_turnon_vbus(core_if);
18031 + DWC_SPINLOCK(core_if->lock);
18032 + } else {
18033 + /*
18034 + * Initiate SRP after initial ADP probe.
18035 + */
18036 + dwc_otg_enable_global_interrupts(core_if);
18037 + dwc_otg_initiate_srp(core_if);
18038 + }
18039 + } else if (core_if->adp.probe_counter > 2){
18040 + gpwrdn.d32 = DWC_READ_REG32(&core_if->core_global_regs->gpwrdn);
18041 + if (compare_timer_values(core_if)) {
18042 + DWC_PRINTF("Difference in timer values !!! \n");
18043 +// core_if->adp.attached = DWC_OTG_ADP_ATTACHED;
18044 + dwc_otg_adp_probe_stop(core_if);
18045 +
18046 + /* Power on the core */
18047 + if (core_if->power_down == 2) {
18048 + gpwrdn.b.pwrdnswtch = 1;
18049 + DWC_MODIFY_REG32(&core_if->core_global_regs->
18050 + gpwrdn, 0, gpwrdn.d32);
18051 + }
18052 +
18053 + /* check which value is for device mode and which for Host mode */
18054 + if (!temp.b.idsts) { /* considered host mode value is 0 */
18055 + /* Disable Interrupt from Power Down Logic */
18056 + gpwrdn.d32 = 0;
18057 + gpwrdn.b.pmuintsel = 1;
18058 + gpwrdn.b.pmuactv = 1;
18059 + DWC_MODIFY_REG32(&core_if->core_global_regs->
18060 + gpwrdn, gpwrdn.d32, 0);
18061 +
18062 + /*
18063 + * Initialize the Core for Host mode.
18064 + */
18065 + core_if->op_state = A_HOST;
18066 + dwc_otg_core_init(core_if);
18067 + dwc_otg_enable_global_interrupts(core_if);
18068 + cil_hcd_start(core_if);
18069 + } else {
18070 + gotgctl_data_t gotgctl;
18071 + /* Mask SRP detected interrupt from Power Down Logic */
18072 + gpwrdn.d32 = 0;
18073 + gpwrdn.b.srp_det_msk = 1;
18074 + DWC_MODIFY_REG32(&core_if->core_global_regs->
18075 + gpwrdn, gpwrdn.d32, 0);
18076 +
18077 + /* Disable Power Down Logic */
18078 + gpwrdn.d32 = 0;
18079 + gpwrdn.b.pmuintsel = 1;
18080 + gpwrdn.b.pmuactv = 1;
18081 + DWC_MODIFY_REG32(&core_if->core_global_regs->
18082 + gpwrdn, gpwrdn.d32, 0);
18083 +
18084 + /*
18085 + * Initialize the Core for Device mode.
18086 + */
18087 + core_if->op_state = B_PERIPHERAL;
18088 + dwc_otg_core_init(core_if);
18089 + dwc_otg_enable_global_interrupts(core_if);
18090 + cil_pcd_start(core_if);
18091 +
18092 + gotgctl.d32 = DWC_READ_REG32(&core_if->core_global_regs->gotgctl);
18093 + if (!gotgctl.b.bsesvld) {
18094 + dwc_otg_initiate_srp(core_if);
18095 + }
18096 + }
18097 + }
18098 + if (core_if->power_down == 2) {
18099 + if (gpwrdn.b.bsessvld) {
18100 + /* Mask SRP detected interrupt from Power Down Logic */
18101 + gpwrdn.d32 = 0;
18102 + gpwrdn.b.srp_det_msk = 1;
18103 + DWC_MODIFY_REG32(&core_if->core_global_regs->gpwrdn, gpwrdn.d32, 0);
18104 +
18105 + /* Disable Power Down Logic */
18106 + gpwrdn.d32 = 0;
18107 + gpwrdn.b.pmuactv = 1;
18108 + DWC_MODIFY_REG32(&core_if->core_global_regs->gpwrdn, gpwrdn.d32, 0);
18109 +
18110 + /*
18111 + * Initialize the Core for Device mode.
18112 + */
18113 + core_if->op_state = B_PERIPHERAL;
18114 + dwc_otg_core_init(core_if);
18115 + dwc_otg_enable_global_interrupts(core_if);
18116 + cil_pcd_start(core_if);
18117 + }
18118 + }
18119 + }
18120 +exit:
18121 + /* Clear interrupt */
18122 + adpctl.d32 = dwc_otg_adp_read_reg(core_if);
18123 + adpctl.b.adp_prb_int = 1;
18124 + dwc_otg_adp_write_reg(core_if, adpctl.d32);
18125 +
18126 + return 0;
18127 +}
18128 +
18129 +/**
18130 + * This function handles the ADP Sense Interrupt
18131 + */
18132 +static int32_t dwc_otg_adp_handle_sns_intr(dwc_otg_core_if_t * core_if)
18133 +{
18134 + adpctl_data_t adpctl;
18135 + /* Stop ADP Sense timer */
18136 + DWC_TIMER_CANCEL(core_if->adp.sense_timer);
18137 +
18138 + /* Restart ADP Sense timer */
18139 + dwc_otg_adp_sense_timer_start(core_if);
18140 +
18141 + /* Clear interrupt */
18142 + adpctl.d32 = dwc_otg_adp_read_reg(core_if);
18143 + adpctl.b.adp_sns_int = 1;
18144 + dwc_otg_adp_write_reg(core_if, adpctl.d32);
18145 +
18146 + return 0;
18147 +}
18148 +
18149 +/**
18150 + * This function handles ADP Probe Timeout Interrupts
18151 + */
18152 +static int32_t dwc_otg_adp_handle_prb_tmout_intr(dwc_otg_core_if_t * core_if,
18153 + uint32_t val)
18154 +{
18155 + adpctl_data_t adpctl = {.d32 = 0 };
18156 + adpctl.d32 = val;
18157 + set_timer_value(core_if, adpctl.b.rtim);
18158 +
18159 + /* Clear interrupt */
18160 + adpctl.d32 = dwc_otg_adp_read_reg(core_if);
18161 + adpctl.b.adp_tmout_int = 1;
18162 + dwc_otg_adp_write_reg(core_if, adpctl.d32);
18163 +
18164 + return 0;
18165 +}
18166 +
18167 +/**
18168 + * ADP Interrupt handler.
18169 + *
18170 + */
18171 +int32_t dwc_otg_adp_handle_intr(dwc_otg_core_if_t * core_if)
18172 +{
18173 + int retval = 0;
18174 + adpctl_data_t adpctl = {.d32 = 0};
18175 +
18176 + adpctl.d32 = dwc_otg_adp_read_reg(core_if);
18177 + DWC_PRINTF("ADPCTL = %08x\n",adpctl.d32);
18178 +
18179 + if (adpctl.b.adp_sns_int & adpctl.b.adp_sns_int_msk) {
18180 + DWC_PRINTF("ADP Sense interrupt\n");
18181 + retval |= dwc_otg_adp_handle_sns_intr(core_if);
18182 + }
18183 + if (adpctl.b.adp_tmout_int & adpctl.b.adp_tmout_int_msk) {
18184 + DWC_PRINTF("ADP timeout interrupt\n");
18185 + retval |= dwc_otg_adp_handle_prb_tmout_intr(core_if, adpctl.d32);
18186 + }
18187 + if (adpctl.b.adp_prb_int & adpctl.b.adp_prb_int_msk) {
18188 + DWC_PRINTF("ADP Probe interrupt\n");
18189 + adpctl.b.adp_prb_int = 1;
18190 + retval |= dwc_otg_adp_handle_prb_intr(core_if, adpctl.d32);
18191 + }
18192 +
18193 +// dwc_otg_adp_modify_reg(core_if, adpctl.d32, 0);
18194 + //dwc_otg_adp_write_reg(core_if, adpctl.d32);
18195 + DWC_PRINTF("RETURN FROM ADP ISR\n");
18196 +
18197 + return retval;
18198 +}
18199 +
18200 +/**
18201 + * Handles the Session Request (SRP) Detected interrupt signalled by the Power Down Logic.
18202 + * @param core_if Programming view of DWC_otg controller.
18203 + */
18204 +int32_t dwc_otg_adp_handle_srp_intr(dwc_otg_core_if_t * core_if)
18205 +{
18206 +
18207 +#ifndef DWC_HOST_ONLY
18208 + hprt0_data_t hprt0;
18209 + gpwrdn_data_t gpwrdn;
18210 + DWC_DEBUGPL(DBG_ANY, "++ Power Down Logic Session Request Interrupt++\n");
18211 +
18212 + gpwrdn.d32 = DWC_READ_REG32(&core_if->core_global_regs->gpwrdn);
18213 + /* check which value is for device mode and which for Host mode */
18214 + if (!gpwrdn.b.idsts) { /* considered host mode value is 0 */
18215 + DWC_PRINTF("SRP: Host mode\n");
18216 +
18217 + if (core_if->adp_enable) {
18218 + dwc_otg_adp_probe_stop(core_if);
18219 +
18220 + /* Power on the core */
18221 + if (core_if->power_down == 2) {
18222 + gpwrdn.b.pwrdnswtch = 1;
18223 + DWC_MODIFY_REG32(&core_if->core_global_regs->
18224 + gpwrdn, 0, gpwrdn.d32);
18225 + }
18226 +
18227 + core_if->op_state = A_HOST;
18228 + dwc_otg_core_init(core_if);
18229 + dwc_otg_enable_global_interrupts(core_if);
18230 + cil_hcd_start(core_if);
18231 + }
18232 +
18233 + /* Turn on the port power bit. */
18234 + hprt0.d32 = dwc_otg_read_hprt0(core_if);
18235 + hprt0.b.prtpwr = 1;
18236 + DWC_WRITE_REG32(core_if->host_if->hprt0, hprt0.d32);
18237 +
18238 + /* Start the Connection timer so a message can be displayed
18239 + * if a connection does not occur within 10 seconds. */
18240 + cil_hcd_session_start(core_if);
18241 + } else {
18242 + DWC_PRINTF("SRP: Device mode %s\n", __FUNCTION__);
18243 + if (core_if->adp_enable) {
18244 + dwc_otg_adp_probe_stop(core_if);
18245 +
18246 + /* Power on the core */
18247 + if (core_if->power_down == 2) {
18248 + gpwrdn.b.pwrdnswtch = 1;
18249 + DWC_MODIFY_REG32(&core_if->core_global_regs->
18250 + gpwrdn, 0, gpwrdn.d32);
18251 + }
18252 +
18253 + gpwrdn.d32 = 0;
18254 + gpwrdn.b.pmuactv = 0;
18255 + DWC_MODIFY_REG32(&core_if->core_global_regs->gpwrdn, 0,
18256 + gpwrdn.d32);
18257 +
18258 + core_if->op_state = B_PERIPHERAL;
18259 + dwc_otg_core_init(core_if);
18260 + dwc_otg_enable_global_interrupts(core_if);
18261 + cil_pcd_start(core_if);
18262 + }
18263 + }
18264 +#endif
18265 + return 1;
18266 +}
18267 --- /dev/null
18268 +++ b/drivers/usb/host/dwc_otg/dwc_otg_adp.h
18269 @@ -0,0 +1,80 @@
18270 +/* ==========================================================================
18271 + * $File: //dwh/usb_iip/dev/software/otg/linux/drivers/dwc_otg_adp.h $
18272 + * $Revision: #7 $
18273 + * $Date: 2011/10/24 $
18274 + * $Change: 1871159 $
18275 + *
18276 + * Synopsys HS OTG Linux Software Driver and documentation (hereinafter,
18277 + * "Software") is an Unsupported proprietary work of Synopsys, Inc. unless
18278 + * otherwise expressly agreed to in writing between Synopsys and you.
18279 + *
18280 + * The Software IS NOT an item of Licensed Software or Licensed Product under
18281 + * any End User Software License Agreement or Agreement for Licensed Product
18282 + * with Synopsys or any supplement thereto. You are permitted to use and
18283 + * redistribute this Software in source and binary forms, with or without
18284 + * modification, provided that redistributions of source code must retain this
18285 + * notice. You may not view, use, disclose, copy or distribute this file or
18286 + * any information contained herein except pursuant to this license grant from
18287 + * Synopsys. If you do not agree with this notice, including the disclaimer
18288 + * below, then you are not authorized to use the Software.
18289 + *
18290 + * THIS SOFTWARE IS BEING DISTRIBUTED BY SYNOPSYS SOLELY ON AN "AS IS" BASIS
18291 + * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
18292 + * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
18293 + * ARE HEREBY DISCLAIMED. IN NO EVENT SHALL SYNOPSYS BE LIABLE FOR ANY DIRECT,
18294 + * INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
18295 + * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
18296 + * SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
18297 + * CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
18298 + * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
18299 + * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH
18300 + * DAMAGE.
18301 + * ========================================================================== */
18302 +
18303 +#ifndef __DWC_OTG_ADP_H__
18304 +#define __DWC_OTG_ADP_H__
18305 +
18306 +/**
18307 + * @file
18308 + *
18309 + * This file contains the Attach Detect Protocol interfaces and defines
18310 + * (functions) and structures for Linux.
18311 + *
18312 + */
18313 +
18314 +#define DWC_OTG_ADP_UNATTACHED 0
18315 +#define DWC_OTG_ADP_ATTACHED 1
18316 +#define DWC_OTG_ADP_UNKOWN 2
18317 +
18318 +typedef struct dwc_otg_adp {
18319 + uint32_t adp_started;
18320 + uint32_t initial_probe;
18321 + int32_t probe_timer_values[2];
18322 + uint32_t probe_enabled;
18323 + uint32_t sense_enabled;
18324 + dwc_timer_t *sense_timer;
18325 + uint32_t sense_timer_started;
18326 + dwc_timer_t *vbuson_timer;
18327 + uint32_t vbuson_timer_started;
18328 + uint32_t attached;
18329 + uint32_t probe_counter;
18330 + uint32_t gpwrdn;
18331 +} dwc_otg_adp_t;
18332 +
18333 +/**
18334 + * Attach Detect Protocol functions
18335 + */
18336 +
18337 +extern void dwc_otg_adp_write_reg(dwc_otg_core_if_t * core_if, uint32_t value);
18338 +extern uint32_t dwc_otg_adp_read_reg(dwc_otg_core_if_t * core_if);
18339 +extern uint32_t dwc_otg_adp_probe_start(dwc_otg_core_if_t * core_if);
18340 +extern uint32_t dwc_otg_adp_sense_start(dwc_otg_core_if_t * core_if);
18341 +extern uint32_t dwc_otg_adp_probe_stop(dwc_otg_core_if_t * core_if);
18342 +extern uint32_t dwc_otg_adp_sense_stop(dwc_otg_core_if_t * core_if);
18343 +extern void dwc_otg_adp_start(dwc_otg_core_if_t * core_if, uint8_t is_host);
18344 +extern void dwc_otg_adp_init(dwc_otg_core_if_t * core_if);
18345 +extern void dwc_otg_adp_remove(dwc_otg_core_if_t * core_if);
18346 +extern int32_t dwc_otg_adp_handle_intr(dwc_otg_core_if_t * core_if);
18347 +extern int32_t dwc_otg_adp_handle_srp_intr(dwc_otg_core_if_t * core_if);
18348 +
18349 +#endif //__DWC_OTG_ADP_H__
18350 --- /dev/null
18351 +++ b/drivers/usb/host/dwc_otg/dwc_otg_attr.c
18352 @@ -0,0 +1,1212 @@
18353 +/* ==========================================================================
18354 + * $File: //dwh/usb_iip/dev/software/otg/linux/drivers/dwc_otg_attr.c $
18355 + * $Revision: #44 $
18356 + * $Date: 2010/11/29 $
18357 + * $Change: 1636033 $
18358 + *
18359 + * Synopsys HS OTG Linux Software Driver and documentation (hereinafter,
18360 + * "Software") is an Unsupported proprietary work of Synopsys, Inc. unless
18361 + * otherwise expressly agreed to in writing between Synopsys and you.
18362 + *
18363 + * The Software IS NOT an item of Licensed Software or Licensed Product under
18364 + * any End User Software License Agreement or Agreement for Licensed Product
18365 + * with Synopsys or any supplement thereto. You are permitted to use and
18366 + * redistribute this Software in source and binary forms, with or without
18367 + * modification, provided that redistributions of source code must retain this
18368 + * notice. You may not view, use, disclose, copy or distribute this file or
18369 + * any information contained herein except pursuant to this license grant from
18370 + * Synopsys. If you do not agree with this notice, including the disclaimer
18371 + * below, then you are not authorized to use the Software.
18372 + *
18373 + * THIS SOFTWARE IS BEING DISTRIBUTED BY SYNOPSYS SOLELY ON AN "AS IS" BASIS
18374 + * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
18375 + * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
18376 + * ARE HEREBY DISCLAIMED. IN NO EVENT SHALL SYNOPSYS BE LIABLE FOR ANY DIRECT,
18377 + * INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
18378 + * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
18379 + * SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
18380 + * CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
18381 + * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
18382 + * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH
18383 + * DAMAGE.
18384 + * ========================================================================== */
18385 +
18386 +/** @file
18387 + *
18388 + * The diagnostic interface will provide access to the controller for
18389 + * bringing up the hardware and testing. The Linux driver attributes
18390 + * feature will be used to provide the Linux Diagnostic
18391 + * Interface. These attributes are accessed through sysfs.
18392 + */
18393 +
18394 +/** @page "Linux Module Attributes"
18395 + *
18396 + * The Linux module attributes feature is used to provide the Linux
18397 + * Diagnostic Interface. These attributes are accessed through sysfs.
18398 + * The diagnostic interface will provide access to the controller for
18399 + * bringing up the hardware and testing.
18400 +
18401 + The following table shows the attributes.
18402 + <table>
18403 + <tr>
18404 + <td><b> Name</b></td>
18405 + <td><b> Description</b></td>
18406 + <td><b> Access</b></td>
18407 + </tr>
18408 +
18409 + <tr>
18410 + <td> mode </td>
18411 + <td> Returns the current mode: 0 for device mode, 1 for host mode</td>
18412 + <td> Read</td>
18413 + </tr>
18414 +
18415 + <tr>
18416 + <td> hnpcapable </td>
18417 + <td> Gets or sets the "HNP-capable" bit in the Core USB Configuraton Register.
18418 + Read returns the current value.</td>
18419 + <td> Read/Write</td>
18420 + </tr>
18421 +
18422 + <tr>
18423 + <td> srpcapable </td>
18424 + <td> Gets or sets the "SRP-capable" bit in the Core USB Configuraton Register.
18425 + Read returns the current value.</td>
18426 + <td> Read/Write</td>
18427 + </tr>
18428 +
18429 + <tr>
18430 + <td> hsic_connect </td>
18431 + <td> Gets or sets the "HSIC-Connect" bit in the GLPMCFG Register.
18432 + Read returns the current value.</td>
18433 + <td> Read/Write</td>
18434 + </tr>
18435 +
18436 + <tr>
18437 + <td> inv_sel_hsic </td>
18438 + <td> Gets or sets the "Invert Select HSIC" bit in the GLPMFG Register.
18439 + Read returns the current value.</td>
18440 + <td> Read/Write</td>
18441 + </tr>
18442 +
18443 + <tr>
18444 + <td> hnp </td>
18445 + <td> Initiates the Host Negotiation Protocol. Read returns the status.</td>
18446 + <td> Read/Write</td>
18447 + </tr>
18448 +
18449 + <tr>
18450 + <td> srp </td>
18451 + <td> Initiates the Session Request Protocol. Read returns the status.</td>
18452 + <td> Read/Write</td>
18453 + </tr>
18454 +
18455 + <tr>
18456 + <td> buspower </td>
18457 + <td> Gets or sets the Power State of the bus (0 - Off or 1 - On)</td>
18458 + <td> Read/Write</td>
18459 + </tr>
18460 +
18461 + <tr>
18462 + <td> bussuspend </td>
18463 + <td> Suspends the USB bus.</td>
18464 + <td> Read/Write</td>
18465 + </tr>
18466 +
18467 + <tr>
18468 + <td> busconnected </td>
18469 + <td> Gets the connection status of the bus</td>
18470 + <td> Read</td>
18471 + </tr>
18472 +
18473 + <tr>
18474 + <td> gotgctl </td>
18475 + <td> Gets or sets the Core Control Status Register.</td>
18476 + <td> Read/Write</td>
18477 + </tr>
18478 +
18479 + <tr>
18480 + <td> gusbcfg </td>
18481 + <td> Gets or sets the Core USB Configuration Register</td>
18482 + <td> Read/Write</td>
18483 + </tr>
18484 +
18485 + <tr>
18486 + <td> grxfsiz </td>
18487 + <td> Gets or sets the Receive FIFO Size Register</td>
18488 + <td> Read/Write</td>
18489 + </tr>
18490 +
18491 + <tr>
18492 + <td> gnptxfsiz </td>
18493 + <td> Gets or sets the non-periodic Transmit Size Register</td>
18494 + <td> Read/Write</td>
18495 + </tr>
18496 +
18497 + <tr>
18498 + <td> gpvndctl </td>
18499 + <td> Gets or sets the PHY Vendor Control Register</td>
18500 + <td> Read/Write</td>
18501 + </tr>
18502 +
18503 + <tr>
18504 + <td> ggpio </td>
18505 + <td> Gets the value in the lower 16-bits of the General Purpose IO Register
18506 + or sets the upper 16 bits.</td>
18507 + <td> Read/Write</td>
18508 + </tr>
18509 +
18510 + <tr>
18511 + <td> guid </td>
18512 + <td> Gets or sets the value of the User ID Register</td>
18513 + <td> Read/Write</td>
18514 + </tr>
18515 +
18516 + <tr>
18517 + <td> gsnpsid </td>
18518 + <td> Gets the value of the Synopsys ID Register</td>
18519 + <td> Read</td>
18520 + </tr>
18521 +
18522 + <tr>
18523 + <td> devspeed </td>
18524 + <td> Gets or sets the device speed setting in the DCFG register</td>
18525 + <td> Read/Write</td>
18526 + </tr>
18527 +
18528 + <tr>
18529 + <td> enumspeed </td>
18530 + <td> Gets the device enumeration Speed.</td>
18531 + <td> Read</td>
18532 + </tr>
18533 +
18534 + <tr>
18535 + <td> hptxfsiz </td>
18536 + <td> Gets the value of the Host Periodic Transmit FIFO</td>
18537 + <td> Read</td>
18538 + </tr>
18539 +
18540 + <tr>
18541 + <td> hprt0 </td>
18542 + <td> Gets or sets the value in the Host Port Control and Status Register</td>
18543 + <td> Read/Write</td>
18544 + </tr>
18545 +
18546 + <tr>
18547 + <td> regoffset </td>
18548 + <td> Sets the register offset for the next Register Access</td>
18549 + <td> Read/Write</td>
18550 + </tr>
18551 +
18552 + <tr>
18553 + <td> regvalue </td>
18554 + <td> Gets or sets the value of the register at the offset in the regoffset attribute.</td>
18555 + <td> Read/Write</td>
18556 + </tr>
18557 +
18558 + <tr>
18559 + <td> remote_wakeup </td>
18560 + <td> On read, shows the status of Remote Wakeup. On write, initiates a remote
18561 + wakeup of the host. When bit 0 is 1 and Remote Wakeup is enabled, the Remote
18562 + Wakeup signalling bit in the Device Control Register is set for 1
18563 + milli-second.</td>
18564 + <td> Read/Write</td>
18565 + </tr>
18566 +
18567 + <tr>
18568 + <td> rem_wakeup_pwrdn </td>
18569 + <td> On read, shows the status of the core - hibernated or not. On write, initiates
18570 + a remote wakeup of the device from Hibernation. </td>
18571 + <td> Read/Write</td>
18572 + </tr>
18573 +
18574 + <tr>
18575 + <td> mode_ch_tim_en </td>
18576 + <td> This bit is used to enable or disable the host core to wait for 200 PHY
18577 + clock cycles at the end of Resume to change the opmode signal to the PHY to 00
18578 + after Suspend or LPM. </td>
18579 + <td> Read/Write</td>
18580 + </tr>
18581 +
18582 + <tr>
18583 + <td> fr_interval </td>
18584 + <td> On read, shows the value of HFIR Frame Interval. On write, dynamically
18585 + reload HFIR register during runtime. The application can write a value to this
18586 + register only after the Port Enable bit of the Host Port Control and Status
18587 + register (HPRT.PrtEnaPort) has been set </td>
18588 + <td> Read/Write</td>
18589 + </tr>
18590 +
18591 + <tr>
18592 + <td> disconnect_us </td>
18593 + <td> On read, shows the status of disconnect_device_us. On write, sets disconnect_us
18594 + which causes soft disconnect for 100us. Applicable only for device mode of operation.</td>
18595 + <td> Read/Write</td>
18596 + </tr>
18597 +
18598 + <tr>
18599 + <td> regdump </td>
18600 + <td> Dumps the contents of core registers.</td>
18601 + <td> Read</td>
18602 + </tr>
18603 +
18604 + <tr>
18605 + <td> spramdump </td>
18606 +     <td> Dumps the contents of the core SPRAM.</td>
18607 + <td> Read</td>
18608 + </tr>
18609 +
18610 + <tr>
18611 + <td> hcddump </td>
18612 + <td> Dumps the current HCD state.</td>
18613 + <td> Read</td>
18614 + </tr>
18615 +
18616 + <tr>
18617 + <td> hcd_frrem </td>
18618 + <td> Shows the average value of the Frame Remaining
18619 + field in the Host Frame Number/Frame Remaining register when an SOF interrupt
18620 + occurs. This can be used to determine the average interrupt latency. Also
18621 + shows the average Frame Remaining value for start_transfer and the "a" and
18622 + "b" sample points. The "a" and "b" sample points may be used during debugging
18623 +     to determine how long it takes to execute a section of the HCD code.</td>
18624 + <td> Read</td>
18625 + </tr>
18626 +
18627 + <tr>
18628 + <td> rd_reg_test </td>
18629 + <td> Displays the time required to read the GNPTXFSIZ register many times
18630 +     (the output shows the number of times the register is read).</td>
18631 + <td> Read</td>
18632 + </tr>
18633 +
18634 + <tr>
18635 + <td> wr_reg_test </td>
18636 + <td> Displays the time required to write the GNPTXFSIZ register many times
18637 +     (the output shows the number of times the register is written).</td>
18638 + <td> Read</td>
18639 + </tr>
18640 +
18641 + <tr>
18642 + <td> lpm_response </td>
18643 +     <td> Gets or sets the lpm_response mode. Applicable only in device mode.</td>
18644 +     <td> Read/Write</td>
18645 + </tr>
18646 +
18647 + <tr>
18648 + <td> sleep_status </td>
18649 +     <td> On read, shows the sleep status of the device. On write, in host mode initiates resume of a sleeping port.</td>
18650 +     <td> Read/Write</td>
18651 + </tr>
18652 +
18653 + </table>
18654 +
18655 + Example usage:
18656 + To get the current mode:
18657 + cat /sys/devices/lm0/mode
18658 +
18659 + To power down the USB:
18660 + echo 0 > /sys/devices/lm0/buspower
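+
+ To read an arbitrary core register through the regoffset/regvalue pair
+ (for example GUSBCFG at global-register offset 0x00C; the lm0 path follows
+ the examples above and may differ on other platforms):
+ echo 0x00C > /sys/devices/lm0/regoffset
+ cat /sys/devices/lm0/regvalue
+
+ To signal remote wakeup to the host (device mode only):
+ echo 1 > /sys/devices/lm0/remote_wakeup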
18661 + */
18662 +
18663 +#include "dwc_otg_os_dep.h"
18664 +#include "dwc_os.h"
18665 +#include "dwc_otg_driver.h"
18666 +#include "dwc_otg_attr.h"
18667 +#include "dwc_otg_core_if.h"
18668 +#include "dwc_otg_pcd_if.h"
18669 +#include "dwc_otg_hcd_if.h"
18670 +
18671 +/*
18672 + * MACROs for defining sysfs attribute
18673 + */
18674 +#ifdef LM_INTERFACE
18675 +
18676 +#define DWC_OTG_DEVICE_ATTR_BITFIELD_SHOW(_otg_attr_name_,_string_) \
18677 +static ssize_t _otg_attr_name_##_show (struct device *_dev, struct device_attribute *attr, char *buf) \
18678 +{ \
18679 + struct lm_device *lm_dev = container_of(_dev, struct lm_device, dev); \
18680 + dwc_otg_device_t *otg_dev = lm_get_drvdata(lm_dev); \
18681 + uint32_t val; \
18682 + val = dwc_otg_get_##_otg_attr_name_ (otg_dev->core_if); \
18683 + return sprintf (buf, "%s = 0x%x\n", _string_, val); \
18684 +}
18685 +#define DWC_OTG_DEVICE_ATTR_BITFIELD_STORE(_otg_attr_name_,_string_) \
18686 +static ssize_t _otg_attr_name_##_store (struct device *_dev, struct device_attribute *attr, \
18687 + const char *buf, size_t count) \
18688 +{ \
18689 + struct lm_device *lm_dev = container_of(_dev, struct lm_device, dev); \
18690 + dwc_otg_device_t *otg_dev = lm_get_drvdata(lm_dev); \
18691 + uint32_t set = simple_strtoul(buf, NULL, 16); \
18692 + dwc_otg_set_##_otg_attr_name_(otg_dev->core_if, set);\
18693 + return count; \
18694 +}
18695 +
18696 +#elif defined(PCI_INTERFACE)
18697 +
18698 +#define DWC_OTG_DEVICE_ATTR_BITFIELD_SHOW(_otg_attr_name_,_string_) \
18699 +static ssize_t _otg_attr_name_##_show (struct device *_dev, struct device_attribute *attr, char *buf) \
18700 +{ \
18701 + dwc_otg_device_t *otg_dev = dev_get_drvdata(_dev); \
18702 + uint32_t val; \
18703 + val = dwc_otg_get_##_otg_attr_name_ (otg_dev->core_if); \
18704 + return sprintf (buf, "%s = 0x%x\n", _string_, val); \
18705 +}
18706 +#define DWC_OTG_DEVICE_ATTR_BITFIELD_STORE(_otg_attr_name_,_string_) \
18707 +static ssize_t _otg_attr_name_##_store (struct device *_dev, struct device_attribute *attr, \
18708 + const char *buf, size_t count) \
18709 +{ \
18710 + dwc_otg_device_t *otg_dev = dev_get_drvdata(_dev); \
18711 + uint32_t set = simple_strtoul(buf, NULL, 16); \
18712 + dwc_otg_set_##_otg_attr_name_(otg_dev->core_if, set);\
18713 + return count; \
18714 +}
18715 +
18716 +#elif defined(PLATFORM_INTERFACE)
18717 +
18718 +#define DWC_OTG_DEVICE_ATTR_BITFIELD_SHOW(_otg_attr_name_,_string_) \
18719 +static ssize_t _otg_attr_name_##_show (struct device *_dev, struct device_attribute *attr, char *buf) \
18720 +{ \
18721 + struct platform_device *platform_dev = \
18722 + container_of(_dev, struct platform_device, dev); \
18723 + dwc_otg_device_t *otg_dev = platform_get_drvdata(platform_dev); \
18724 + uint32_t val; \
18725 + DWC_PRINTF("%s(%p) -> platform_dev %p, otg_dev %p\n", \
18726 + __func__, _dev, platform_dev, otg_dev); \
18727 + val = dwc_otg_get_##_otg_attr_name_ (otg_dev->core_if); \
18728 + return sprintf (buf, "%s = 0x%x\n", _string_, val); \
18729 +}
18730 +#define DWC_OTG_DEVICE_ATTR_BITFIELD_STORE(_otg_attr_name_,_string_) \
18731 +static ssize_t _otg_attr_name_##_store (struct device *_dev, struct device_attribute *attr, \
18732 + const char *buf, size_t count) \
18733 +{ \
18734 + struct platform_device *platform_dev = container_of(_dev, struct platform_device, dev); \
18735 + dwc_otg_device_t *otg_dev = platform_get_drvdata(platform_dev); \
18736 + uint32_t set = simple_strtoul(buf, NULL, 16); \
18737 + dwc_otg_set_##_otg_attr_name_(otg_dev->core_if, set);\
18738 + return count; \
18739 +}
18740 +#endif
18741 +
18742 +/*
18743 + * MACROs for defining sysfs attribute for 32-bit registers
18744 + */
18745 +#ifdef LM_INTERFACE
18746 +#define DWC_OTG_DEVICE_ATTR_REG_SHOW(_otg_attr_name_,_string_) \
18747 +static ssize_t _otg_attr_name_##_show (struct device *_dev, struct device_attribute *attr, char *buf) \
18748 +{ \
18749 + struct lm_device *lm_dev = container_of(_dev, struct lm_device, dev); \
18750 + dwc_otg_device_t *otg_dev = lm_get_drvdata(lm_dev); \
18751 + uint32_t val; \
18752 + val = dwc_otg_get_##_otg_attr_name_ (otg_dev->core_if); \
18753 + return sprintf (buf, "%s = 0x%08x\n", _string_, val); \
18754 +}
18755 +#define DWC_OTG_DEVICE_ATTR_REG_STORE(_otg_attr_name_,_string_) \
18756 +static ssize_t _otg_attr_name_##_store (struct device *_dev, struct device_attribute *attr, \
18757 + const char *buf, size_t count) \
18758 +{ \
18759 + struct lm_device *lm_dev = container_of(_dev, struct lm_device, dev); \
18760 + dwc_otg_device_t *otg_dev = lm_get_drvdata(lm_dev); \
18761 + uint32_t val = simple_strtoul(buf, NULL, 16); \
18762 + dwc_otg_set_##_otg_attr_name_ (otg_dev->core_if, val); \
18763 + return count; \
18764 +}
18765 +#elif defined(PCI_INTERFACE)
18766 +#define DWC_OTG_DEVICE_ATTR_REG_SHOW(_otg_attr_name_,_string_) \
18767 +static ssize_t _otg_attr_name_##_show (struct device *_dev, struct device_attribute *attr, char *buf) \
18768 +{ \
18769 + dwc_otg_device_t *otg_dev = dev_get_drvdata(_dev); \
18770 + uint32_t val; \
18771 + val = dwc_otg_get_##_otg_attr_name_ (otg_dev->core_if); \
18772 + return sprintf (buf, "%s = 0x%08x\n", _string_, val); \
18773 +}
18774 +#define DWC_OTG_DEVICE_ATTR_REG_STORE(_otg_attr_name_,_string_) \
18775 +static ssize_t _otg_attr_name_##_store (struct device *_dev, struct device_attribute *attr, \
18776 + const char *buf, size_t count) \
18777 +{ \
18778 + dwc_otg_device_t *otg_dev = dev_get_drvdata(_dev); \
18779 + uint32_t val = simple_strtoul(buf, NULL, 16); \
18780 + dwc_otg_set_##_otg_attr_name_ (otg_dev->core_if, val); \
18781 + return count; \
18782 +}
18783 +
18784 +#elif defined(PLATFORM_INTERFACE)
18785 +#include "dwc_otg_dbg.h"
18786 +#define DWC_OTG_DEVICE_ATTR_REG_SHOW(_otg_attr_name_,_string_) \
18787 +static ssize_t _otg_attr_name_##_show (struct device *_dev, struct device_attribute *attr, char *buf) \
18788 +{ \
18789 + struct platform_device *platform_dev = container_of(_dev, struct platform_device, dev); \
18790 + dwc_otg_device_t *otg_dev = platform_get_drvdata(platform_dev); \
18791 + uint32_t val; \
18792 + DWC_PRINTF("%s(%p) -> platform_dev %p, otg_dev %p\n", \
18793 + __func__, _dev, platform_dev, otg_dev); \
18794 + val = dwc_otg_get_##_otg_attr_name_ (otg_dev->core_if); \
18795 + return sprintf (buf, "%s = 0x%08x\n", _string_, val); \
18796 +}
18797 +#define DWC_OTG_DEVICE_ATTR_REG_STORE(_otg_attr_name_,_string_) \
18798 +static ssize_t _otg_attr_name_##_store (struct device *_dev, struct device_attribute *attr, \
18799 + const char *buf, size_t count) \
18800 +{ \
18801 + struct platform_device *platform_dev = container_of(_dev, struct platform_device, dev); \
18802 + dwc_otg_device_t *otg_dev = platform_get_drvdata(platform_dev); \
18803 + uint32_t val = simple_strtoul(buf, NULL, 16); \
18804 + dwc_otg_set_##_otg_attr_name_ (otg_dev->core_if, val); \
18805 + return count; \
18806 +}
18807 +
18808 +#endif
18809 +
18810 +#define DWC_OTG_DEVICE_ATTR_BITFIELD_RW(_otg_attr_name_,_string_) \
18811 +DWC_OTG_DEVICE_ATTR_BITFIELD_SHOW(_otg_attr_name_,_string_) \
18812 +DWC_OTG_DEVICE_ATTR_BITFIELD_STORE(_otg_attr_name_,_string_) \
18813 +DEVICE_ATTR(_otg_attr_name_,0644,_otg_attr_name_##_show,_otg_attr_name_##_store);
18814 +
18815 +#define DWC_OTG_DEVICE_ATTR_BITFIELD_RO(_otg_attr_name_,_string_) \
18816 +DWC_OTG_DEVICE_ATTR_BITFIELD_SHOW(_otg_attr_name_,_string_) \
18817 +DEVICE_ATTR(_otg_attr_name_,0444,_otg_attr_name_##_show,NULL);
18818 +
18819 +#define DWC_OTG_DEVICE_ATTR_REG32_RW(_otg_attr_name_,_addr_,_string_) \
18820 +DWC_OTG_DEVICE_ATTR_REG_SHOW(_otg_attr_name_,_string_) \
18821 +DWC_OTG_DEVICE_ATTR_REG_STORE(_otg_attr_name_,_string_) \
18822 +DEVICE_ATTR(_otg_attr_name_,0644,_otg_attr_name_##_show,_otg_attr_name_##_store);
18823 +
18824 +#define DWC_OTG_DEVICE_ATTR_REG32_RO(_otg_attr_name_,_addr_,_string_) \
18825 +DWC_OTG_DEVICE_ATTR_REG_SHOW(_otg_attr_name_,_string_) \
18826 +DEVICE_ATTR(_otg_attr_name_,0444,_otg_attr_name_##_show,NULL);
18827 +
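+/*
+ * Example: an invocation such as
+ *   DWC_OTG_DEVICE_ATTR_BITFIELD_RW(hnpcapable, "HNPCapable");
+ * generates hnpcapable_show() (prints "HNPCapable = 0x<val>" via
+ * dwc_otg_get_hnpcapable()), hnpcapable_store() (parses a hex value and
+ * calls dwc_otg_set_hnpcapable()), and DEVICE_ATTR(hnpcapable, 0644, ...),
+ * which provides the dev_attr_hnpcapable used by dwc_otg_attr_create() and
+ * dwc_otg_attr_remove() below.
+ */
+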
18828 +/** @name Functions for Show/Store of Attributes */
18829 +/**@{*/
18830 +
18831 +/**
18832 + * Helper function returning the otg_device structure of the given device
18833 + */
18834 +static dwc_otg_device_t *dwc_otg_drvdev(struct device *_dev)
18835 +{
18836 + dwc_otg_device_t *otg_dev;
18837 + DWC_OTG_GETDRVDEV(otg_dev, _dev);
18838 + return otg_dev;
18839 +}
18840 +
18841 +/**
18842 + * Show the register offset of the Register Access.
18843 + */
18844 +static ssize_t regoffset_show(struct device *_dev,
18845 + struct device_attribute *attr, char *buf)
18846 +{
18847 + dwc_otg_device_t *otg_dev = dwc_otg_drvdev(_dev);
18848 + return snprintf(buf, sizeof("0xFFFFFFFF\n") + 1, "0x%08x\n",
18849 + otg_dev->os_dep.reg_offset);
18850 +}
18851 +
18852 +/**
18853 + * Set the register offset for the next Register Access Read/Write
18854 + */
18855 +static ssize_t regoffset_store(struct device *_dev,
18856 + struct device_attribute *attr,
18857 + const char *buf, size_t count)
18858 +{
18859 + dwc_otg_device_t *otg_dev = dwc_otg_drvdev(_dev);
18860 + uint32_t offset = simple_strtoul(buf, NULL, 16);
18861 +#if defined(LM_INTERFACE) || defined(PLATFORM_INTERFACE)
18862 + if (offset < SZ_256K) {
18863 +#elif defined(PCI_INTERFACE)
18864 + if (offset < 0x00040000) {
18865 +#endif
18866 + otg_dev->os_dep.reg_offset = offset;
18867 + } else {
18868 + dev_err(_dev, "invalid offset\n");
18869 + }
18870 +
18871 + return count;
18872 +}
18873 +
18874 +DEVICE_ATTR(regoffset, S_IRUGO | S_IWUSR, regoffset_show, regoffset_store);
18875 +
18876 +/**
18877 + * Show the value of the register at the offset in the reg_offset
18878 + * attribute.
18879 + */
18880 +static ssize_t regvalue_show(struct device *_dev,
18881 + struct device_attribute *attr, char *buf)
18882 +{
18883 + dwc_otg_device_t *otg_dev = dwc_otg_drvdev(_dev);
18884 + uint32_t val;
18885 + volatile uint32_t *addr;
18886 +
18887 + if (otg_dev->os_dep.reg_offset != 0xFFFFFFFF && 0 != otg_dev->os_dep.base) {
18888 + /* Calculate the address */
18889 + addr = (uint32_t *) (otg_dev->os_dep.reg_offset +
18890 + (uint8_t *) otg_dev->os_dep.base);
18891 + val = DWC_READ_REG32(addr);
18892 + return snprintf(buf,
18893 + sizeof("Reg@0xFFFFFFFF = 0xFFFFFFFF\n") + 1,
18894 + "Reg@0x%06x = 0x%08x\n", otg_dev->os_dep.reg_offset,
18895 + val);
18896 + } else {
18897 + dev_err(_dev, "Invalid offset (0x%0x)\n", otg_dev->os_dep.reg_offset);
18898 + return sprintf(buf, "invalid offset\n");
18899 + }
18900 +}
18901 +
18902 +/**
18903 + * Store the value in the register at the offset in the reg_offset
18904 + * attribute.
18905 + *
18906 + */
18907 +static ssize_t regvalue_store(struct device *_dev,
18908 + struct device_attribute *attr,
18909 + const char *buf, size_t count)
18910 +{
18911 + dwc_otg_device_t *otg_dev = dwc_otg_drvdev(_dev);
18912 + volatile uint32_t *addr;
18913 + uint32_t val = simple_strtoul(buf, NULL, 16);
18914 + //dev_dbg(_dev, "Offset=0x%08x Val=0x%08x\n", otg_dev->reg_offset, val);
18915 + if (otg_dev->os_dep.reg_offset != 0xFFFFFFFF && 0 != otg_dev->os_dep.base) {
18916 + /* Calculate the address */
18917 + addr = (uint32_t *) (otg_dev->os_dep.reg_offset +
18918 + (uint8_t *) otg_dev->os_dep.base);
18919 + DWC_WRITE_REG32(addr, val);
18920 + } else {
18921 + dev_err(_dev, "Invalid Register Offset (0x%08x)\n",
18922 + otg_dev->os_dep.reg_offset);
18923 + }
18924 + return count;
18925 +}
18926 +
18927 +DEVICE_ATTR(regvalue, S_IRUGO | S_IWUSR, regvalue_show, regvalue_store);
18928 +
18929 +/*
18930 + * Attributes
18931 + */
18932 +DWC_OTG_DEVICE_ATTR_BITFIELD_RO(mode, "Mode");
18933 +DWC_OTG_DEVICE_ATTR_BITFIELD_RW(hnpcapable, "HNPCapable");
18934 +DWC_OTG_DEVICE_ATTR_BITFIELD_RW(srpcapable, "SRPCapable");
18935 +DWC_OTG_DEVICE_ATTR_BITFIELD_RW(hsic_connect, "HSIC Connect");
18936 +DWC_OTG_DEVICE_ATTR_BITFIELD_RW(inv_sel_hsic, "Invert Select HSIC");
18937 +
18938 +//DWC_OTG_DEVICE_ATTR_BITFIELD_RW(buspower,&(otg_dev->core_if->core_global_regs->gotgctl),(1<<8),8,"Mode");
18939 +//DWC_OTG_DEVICE_ATTR_BITFIELD_RW(bussuspend,&(otg_dev->core_if->core_global_regs->gotgctl),(1<<8),8,"Mode");
18940 +DWC_OTG_DEVICE_ATTR_BITFIELD_RO(busconnected, "Bus Connected");
18941 +
18942 +DWC_OTG_DEVICE_ATTR_REG32_RW(gotgctl, 0, "GOTGCTL");
18943 +DWC_OTG_DEVICE_ATTR_REG32_RW(gusbcfg,
18944 + &(otg_dev->core_if->core_global_regs->gusbcfg),
18945 + "GUSBCFG");
18946 +DWC_OTG_DEVICE_ATTR_REG32_RW(grxfsiz,
18947 + &(otg_dev->core_if->core_global_regs->grxfsiz),
18948 + "GRXFSIZ");
18949 +DWC_OTG_DEVICE_ATTR_REG32_RW(gnptxfsiz,
18950 + &(otg_dev->core_if->core_global_regs->gnptxfsiz),
18951 + "GNPTXFSIZ");
18952 +DWC_OTG_DEVICE_ATTR_REG32_RW(gpvndctl,
18953 + &(otg_dev->core_if->core_global_regs->gpvndctl),
18954 + "GPVNDCTL");
18955 +DWC_OTG_DEVICE_ATTR_REG32_RW(ggpio,
18956 + &(otg_dev->core_if->core_global_regs->ggpio),
18957 + "GGPIO");
18958 +DWC_OTG_DEVICE_ATTR_REG32_RW(guid, &(otg_dev->core_if->core_global_regs->guid),
18959 + "GUID");
18960 +DWC_OTG_DEVICE_ATTR_REG32_RO(gsnpsid,
18961 + &(otg_dev->core_if->core_global_regs->gsnpsid),
18962 + "GSNPSID");
18963 +DWC_OTG_DEVICE_ATTR_BITFIELD_RW(devspeed, "Device Speed");
18964 +DWC_OTG_DEVICE_ATTR_BITFIELD_RO(enumspeed, "Device Enumeration Speed");
18965 +
18966 +DWC_OTG_DEVICE_ATTR_REG32_RO(hptxfsiz,
18967 + &(otg_dev->core_if->core_global_regs->hptxfsiz),
18968 + "HPTXFSIZ");
18969 +DWC_OTG_DEVICE_ATTR_REG32_RW(hprt0, otg_dev->core_if->host_if->hprt0, "HPRT0");
18970 +
18971 +/**
18972 + * @todo Add code to initiate the HNP.
18973 + */
18974 +/**
18975 + * Show the HNP status bit
18976 + */
18977 +static ssize_t hnp_show(struct device *_dev,
18978 + struct device_attribute *attr, char *buf)
18979 +{
18980 + dwc_otg_device_t *otg_dev = dwc_otg_drvdev(_dev);
18981 + return sprintf(buf, "HstNegScs = 0x%x\n",
18982 + dwc_otg_get_hnpstatus(otg_dev->core_if));
18983 +}
18984 +
18985 +/**
18986 + * Set the HNP Request bit
18987 + */
18988 +static ssize_t hnp_store(struct device *_dev,
18989 + struct device_attribute *attr,
18990 + const char *buf, size_t count)
18991 +{
18992 + dwc_otg_device_t *otg_dev = dwc_otg_drvdev(_dev);
18993 + uint32_t in = simple_strtoul(buf, NULL, 16);
18994 + dwc_otg_set_hnpreq(otg_dev->core_if, in);
18995 + return count;
18996 +}
18997 +
18998 +DEVICE_ATTR(hnp, 0644, hnp_show, hnp_store);
18999 +
19000 +/**
19001 + * @todo Add code to initiate the SRP.
19002 + */
19003 +/**
19004 + * Show the SRP status bit
19005 + */
19006 +static ssize_t srp_show(struct device *_dev,
19007 + struct device_attribute *attr, char *buf)
19008 +{
19009 +#ifndef DWC_HOST_ONLY
19010 + dwc_otg_device_t *otg_dev = dwc_otg_drvdev(_dev);
19011 + return sprintf(buf, "SesReqScs = 0x%x\n",
19012 + dwc_otg_get_srpstatus(otg_dev->core_if));
19013 +#else
19014 + return sprintf(buf, "Host Only Mode!\n");
19015 +#endif
19016 +}
19017 +
19018 +/**
19019 + * Set the SRP Request bit
19020 + */
19021 +static ssize_t srp_store(struct device *_dev,
19022 + struct device_attribute *attr,
19023 + const char *buf, size_t count)
19024 +{
19025 +#ifndef DWC_HOST_ONLY
19026 + dwc_otg_device_t *otg_dev = dwc_otg_drvdev(_dev);
19027 + dwc_otg_pcd_initiate_srp(otg_dev->pcd);
19028 +#endif
19029 + return count;
19030 +}
19031 +
19032 +DEVICE_ATTR(srp, 0644, srp_show, srp_store);
19033 +
19034 +/**
19035 + * @todo Need to do more for power on/off?
19036 + */
19037 +/**
19038 + * Show the Bus Power status
19039 + */
19040 +static ssize_t buspower_show(struct device *_dev,
19041 + struct device_attribute *attr, char *buf)
19042 +{
19043 + dwc_otg_device_t *otg_dev = dwc_otg_drvdev(_dev);
19044 + return sprintf(buf, "Bus Power = 0x%x\n",
19045 + dwc_otg_get_prtpower(otg_dev->core_if));
19046 +}
19047 +
19048 +/**
19049 + * Set the Bus Power status
19050 + */
19051 +static ssize_t buspower_store(struct device *_dev,
19052 + struct device_attribute *attr,
19053 + const char *buf, size_t count)
19054 +{
19055 + dwc_otg_device_t *otg_dev = dwc_otg_drvdev(_dev);
19056 + uint32_t on = simple_strtoul(buf, NULL, 16);
19057 + dwc_otg_set_prtpower(otg_dev->core_if, on);
19058 + return count;
19059 +}
19060 +
19061 +DEVICE_ATTR(buspower, 0644, buspower_show, buspower_store);
19062 +
19063 +/**
19064 + * @todo Need to do more for suspend?
19065 + */
19066 +/**
19067 + * Show the Bus Suspend status
19068 + */
19069 +static ssize_t bussuspend_show(struct device *_dev,
19070 + struct device_attribute *attr, char *buf)
19071 +{
19072 + dwc_otg_device_t *otg_dev = dwc_otg_drvdev(_dev);
19073 + return sprintf(buf, "Bus Suspend = 0x%x\n",
19074 + dwc_otg_get_prtsuspend(otg_dev->core_if));
19075 +}
19076 +
19077 +/**
19078 + * Set the Bus Suspend status
19079 + */
19080 +static ssize_t bussuspend_store(struct device *_dev,
19081 + struct device_attribute *attr,
19082 + const char *buf, size_t count)
19083 +{
19084 + dwc_otg_device_t *otg_dev = dwc_otg_drvdev(_dev);
19085 + uint32_t in = simple_strtoul(buf, NULL, 16);
19086 + dwc_otg_set_prtsuspend(otg_dev->core_if, in);
19087 + return count;
19088 +}
19089 +
19090 +DEVICE_ATTR(bussuspend, 0644, bussuspend_show, bussuspend_store);
19091 +
19092 +/**
19093 + * Show the Mode Change Ready Timer status
19094 + */
19095 +static ssize_t mode_ch_tim_en_show(struct device *_dev,
19096 + struct device_attribute *attr, char *buf)
19097 +{
19098 + dwc_otg_device_t *otg_dev = dwc_otg_drvdev(_dev);
19099 + return sprintf(buf, "Mode Change Ready Timer Enable = 0x%x\n",
19100 + dwc_otg_get_mode_ch_tim(otg_dev->core_if));
19101 +}
19102 +
19103 +/**
19104 + * Set the Mode Change Ready Timer status
19105 + */
19106 +static ssize_t mode_ch_tim_en_store(struct device *_dev,
19107 + struct device_attribute *attr,
19108 + const char *buf, size_t count)
19109 +{
19110 + dwc_otg_device_t *otg_dev = dwc_otg_drvdev(_dev);
19111 + uint32_t in = simple_strtoul(buf, NULL, 16);
19112 + dwc_otg_set_mode_ch_tim(otg_dev->core_if, in);
19113 + return count;
19114 +}
19115 +
19116 +DEVICE_ATTR(mode_ch_tim_en, 0644, mode_ch_tim_en_show, mode_ch_tim_en_store);
19117 +
19118 +/**
19119 + * Show the value of HFIR Frame Interval bitfield
19120 + */
19121 +static ssize_t fr_interval_show(struct device *_dev,
19122 + struct device_attribute *attr, char *buf)
19123 +{
19124 + dwc_otg_device_t *otg_dev = dwc_otg_drvdev(_dev);
19125 + return sprintf(buf, "Frame Interval = 0x%x\n",
19126 + dwc_otg_get_fr_interval(otg_dev->core_if));
19127 +}
19128 +
19129 +/**
19130 + * Set the HFIR Frame Interval value
19131 + */
19132 +static ssize_t fr_interval_store(struct device *_dev,
19133 + struct device_attribute *attr,
19134 + const char *buf, size_t count)
19135 +{
19136 + dwc_otg_device_t *otg_dev = dwc_otg_drvdev(_dev);
19137 + uint32_t in = simple_strtoul(buf, NULL, 10);
19138 + dwc_otg_set_fr_interval(otg_dev->core_if, in);
19139 + return count;
19140 +}
19141 +
19142 +DEVICE_ATTR(fr_interval, 0644, fr_interval_show, fr_interval_store);
19143 +
19144 +/**
19145 + * Show the status of Remote Wakeup.
19146 + */
19147 +static ssize_t remote_wakeup_show(struct device *_dev,
19148 + struct device_attribute *attr, char *buf)
19149 +{
19150 +#ifndef DWC_HOST_ONLY
19151 + dwc_otg_device_t *otg_dev = dwc_otg_drvdev(_dev);
19152 +
19153 + return sprintf(buf,
19154 + "Remote Wakeup Sig = %d Enabled = %d LPM Remote Wakeup = %d\n",
19155 + dwc_otg_get_remotewakesig(otg_dev->core_if),
19156 + dwc_otg_pcd_get_rmwkup_enable(otg_dev->pcd),
19157 + dwc_otg_get_lpm_remotewakeenabled(otg_dev->core_if));
19158 +#else
19159 + return sprintf(buf, "Host Only Mode!\n");
19160 +#endif /* DWC_HOST_ONLY */
19161 +}
19162 +
19163 +/**
19164 + * Initiate a remote wakeup of the host. The Device control register
19165 + * Remote Wakeup Signal bit is written if the PCD Remote wakeup enable
19166 + * flag is set.
19167 + *
19168 + */
19169 +static ssize_t remote_wakeup_store(struct device *_dev,
19170 + struct device_attribute *attr,
19171 + const char *buf, size_t count)
19172 +{
19173 +#ifndef DWC_HOST_ONLY
19174 + dwc_otg_device_t *otg_dev = dwc_otg_drvdev(_dev);
19175 + uint32_t val = simple_strtoul(buf, NULL, 16);
19176 +
19177 + if (val & 1) {
19178 + dwc_otg_pcd_remote_wakeup(otg_dev->pcd, 1);
19179 + } else {
19180 + dwc_otg_pcd_remote_wakeup(otg_dev->pcd, 0);
19181 + }
19182 +#endif /* DWC_HOST_ONLY */
19183 + return count;
19184 +}
19185 +
19186 +DEVICE_ATTR(remote_wakeup, S_IRUGO | S_IWUSR, remote_wakeup_show,
19187 + remote_wakeup_store);
19188 +
19189 +/**
19190 + * Show whether the core is hibernated or not.
19191 + */
19192 +static ssize_t rem_wakeup_pwrdn_show(struct device *_dev,
19193 + struct device_attribute *attr, char *buf)
19194 +{
19195 +#ifndef DWC_HOST_ONLY
19196 + dwc_otg_device_t *otg_dev = dwc_otg_drvdev(_dev);
19197 +
19198 + if (dwc_otg_get_core_state(otg_dev->core_if)) {
19199 + DWC_PRINTF("Core is in hibernation\n");
19200 + } else {
19201 + DWC_PRINTF("Core is not in hibernation\n");
19202 + }
19203 +#endif /* DWC_HOST_ONLY */
19204 + return 0;
19205 +}
19206 +
19207 +extern int dwc_otg_device_hibernation_restore(dwc_otg_core_if_t * core_if,
19208 + int rem_wakeup, int reset);
19209 +
19210 +/**
19211 + * Initiate a remote wakeup of the device to exit from hibernation.
19212 + */
19213 +static ssize_t rem_wakeup_pwrdn_store(struct device *_dev,
19214 + struct device_attribute *attr,
19215 + const char *buf, size_t count)
19216 +{
19217 +#ifndef DWC_HOST_ONLY
19218 + dwc_otg_device_t *otg_dev = dwc_otg_drvdev(_dev);
19219 + dwc_otg_device_hibernation_restore(otg_dev->core_if, 1, 0);
19220 +#endif
19221 + return count;
19222 +}
19223 +
19224 +DEVICE_ATTR(rem_wakeup_pwrdn, S_IRUGO | S_IWUSR, rem_wakeup_pwrdn_show,
19225 + rem_wakeup_pwrdn_store);
19226 +
19227 +static ssize_t disconnect_us(struct device *_dev,
19228 + struct device_attribute *attr,
19229 + const char *buf, size_t count)
19230 +{
19231 +
19232 +#ifndef DWC_HOST_ONLY
19233 + dwc_otg_device_t *otg_dev = dwc_otg_drvdev(_dev);
19234 + uint32_t val = simple_strtoul(buf, NULL, 16);
19235 + DWC_PRINTF("The Passed value is %04x\n", val);
19236 +
19237 + dwc_otg_pcd_disconnect_us(otg_dev->pcd, 50);
19238 +
19239 +#endif /* DWC_HOST_ONLY */
19240 + return count;
19241 +}
19242 +
19243 +DEVICE_ATTR(disconnect_us, S_IWUSR, 0, disconnect_us);
19244 +
19245 +/**
19246 + * Dump global registers and either host or device registers (depending on the
19247 + * current mode of the core).
19248 + */
19249 +static ssize_t regdump_show(struct device *_dev,
19250 + struct device_attribute *attr, char *buf)
19251 +{
19252 + dwc_otg_device_t *otg_dev = dwc_otg_drvdev(_dev);
19253 +
19254 + dwc_otg_dump_global_registers(otg_dev->core_if);
19255 + if (dwc_otg_is_host_mode(otg_dev->core_if)) {
19256 + dwc_otg_dump_host_registers(otg_dev->core_if);
19257 + } else {
19258 + dwc_otg_dump_dev_registers(otg_dev->core_if);
19259 +
19260 + }
19261 + return sprintf(buf, "Register Dump\n");
19262 +}
19263 +
19264 +DEVICE_ATTR(regdump, S_IRUGO, regdump_show, 0);
19265 +
19266 +/**
19267 + * Dump the contents of the core SPRAM (the dump call below is currently
19268 + * compiled out).
19269 + */
19270 +static ssize_t spramdump_show(struct device *_dev,
19271 + struct device_attribute *attr, char *buf)
19272 +{
19273 +#if 0
19274 + dwc_otg_device_t *otg_dev = dwc_otg_drvdev(_dev);
19275 +
19276 + dwc_otg_dump_spram(otg_dev->core_if);
19277 +#endif
19278 +
19279 + return sprintf(buf, "SPRAM Dump\n");
19280 +}
19281 +
19282 +DEVICE_ATTR(spramdump, S_IRUGO, spramdump_show, 0);
19283 +
19284 +/**
19285 + * Dump the current hcd state.
19286 + */
19287 +static ssize_t hcddump_show(struct device *_dev,
19288 + struct device_attribute *attr, char *buf)
19289 +{
19290 +#ifndef DWC_DEVICE_ONLY
19291 + dwc_otg_device_t *otg_dev = dwc_otg_drvdev(_dev);
19292 + dwc_otg_hcd_dump_state(otg_dev->hcd);
19293 +#endif /* DWC_DEVICE_ONLY */
19294 + return sprintf(buf, "HCD Dump\n");
19295 +}
19296 +
19297 +DEVICE_ATTR(hcddump, S_IRUGO, hcddump_show, 0);
19298 +
19299 +/**
19300 + * Dump the average frame remaining at SOF. This can be used to
19301 + * determine average interrupt latency. Frame remaining is also shown for
19302 + * start transfer and two additional sample points.
19303 + */
19304 +static ssize_t hcd_frrem_show(struct device *_dev,
19305 + struct device_attribute *attr, char *buf)
19306 +{
19307 +#ifndef DWC_DEVICE_ONLY
19308 + dwc_otg_device_t *otg_dev = dwc_otg_drvdev(_dev);
19309 +
19310 + dwc_otg_hcd_dump_frrem(otg_dev->hcd);
19311 +#endif /* DWC_DEVICE_ONLY */
19312 + return sprintf(buf, "HCD Dump Frame Remaining\n");
19313 +}
19314 +
19315 +DEVICE_ATTR(hcd_frrem, S_IRUGO, hcd_frrem_show, 0);
19316 +
19317 +/**
19318 + * Displays the time required to read the GNPTXFSIZ register many times (the
19319 + * output shows the number of times the register is read).
19320 + */
19321 +#define RW_REG_COUNT 10000000
19322 +#define MSEC_PER_JIFFIE 1000/HZ
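+/* NB: MSEC_PER_JIFFIE expands without parentheses; the expression
+ * `time * MSEC_PER_JIFFIE` below evaluates left to right as (time * 1000) / HZ. */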
19323 +static ssize_t rd_reg_test_show(struct device *_dev,
19324 + struct device_attribute *attr, char *buf)
19325 +{
19326 + dwc_otg_device_t *otg_dev = dwc_otg_drvdev(_dev);
19327 + int i;
19328 + int time;
19329 + int start_jiffies;
19330 +
19331 + printk("HZ %d, MSEC_PER_JIFFIE %d, loops_per_jiffy %lu\n",
19332 + HZ, MSEC_PER_JIFFIE, loops_per_jiffy);
19333 + start_jiffies = jiffies;
19334 + for (i = 0; i < RW_REG_COUNT; i++) {
19335 + dwc_otg_get_gnptxfsiz(otg_dev->core_if);
19336 + }
19337 + time = jiffies - start_jiffies;
19338 + return sprintf(buf,
19339 + "Time to read GNPTXFSIZ reg %d times: %d msecs (%d jiffies)\n",
19340 + RW_REG_COUNT, time * MSEC_PER_JIFFIE, time);
19341 +}
19342 +
19343 +DEVICE_ATTR(rd_reg_test, S_IRUGO, rd_reg_test_show, 0);
19344 +
19345 +/**
19346 + * Displays the time required to write the GNPTXFSIZ register many times (the
19347 + * output shows the number of times the register is written).
19348 + */
19349 +static ssize_t wr_reg_test_show(struct device *_dev,
19350 + struct device_attribute *attr, char *buf)
19351 +{
19352 + dwc_otg_device_t *otg_dev = dwc_otg_drvdev(_dev);
19353 + uint32_t reg_val;
19354 + int i;
19355 + int time;
19356 + int start_jiffies;
19357 +
19358 + printk("HZ %d, MSEC_PER_JIFFIE %d, loops_per_jiffy %lu\n",
19359 + HZ, MSEC_PER_JIFFIE, loops_per_jiffy);
19360 + reg_val = dwc_otg_get_gnptxfsiz(otg_dev->core_if);
19361 + start_jiffies = jiffies;
19362 + for (i = 0; i < RW_REG_COUNT; i++) {
19363 + dwc_otg_set_gnptxfsiz(otg_dev->core_if, reg_val);
19364 + }
19365 + time = jiffies - start_jiffies;
19366 + return sprintf(buf,
19367 + "Time to write GNPTXFSIZ reg %d times: %d msecs (%d jiffies)\n",
19368 + RW_REG_COUNT, time * MSEC_PER_JIFFIE, time);
19369 +}
19370 +
19371 +DEVICE_ATTR(wr_reg_test, S_IRUGO, wr_reg_test_show, 0);
19372 +
19373 +#ifdef CONFIG_USB_DWC_OTG_LPM
19374 +
19375 +/**
19376 +* Show the lpm_response attribute.
19377 +*/
19378 +static ssize_t lpmresp_show(struct device *_dev,
19379 + struct device_attribute *attr, char *buf)
19380 +{
19381 + dwc_otg_device_t *otg_dev = dwc_otg_drvdev(_dev);
19382 +
19383 + if (!dwc_otg_get_param_lpm_enable(otg_dev->core_if))
19384 + return sprintf(buf, "** LPM is DISABLED **\n");
19385 +
19386 + if (!dwc_otg_is_device_mode(otg_dev->core_if)) {
19387 + return sprintf(buf, "** Current mode is not device mode\n");
19388 + }
19389 + return sprintf(buf, "lpm_response = %d\n",
19390 + dwc_otg_get_lpmresponse(otg_dev->core_if));
19391 +}
19392 +
19393 +/**
19394 +* Store the lpm_response attribute.
19395 +*/
19396 +static ssize_t lpmresp_store(struct device *_dev,
19397 + struct device_attribute *attr,
19398 + const char *buf, size_t count)
19399 +{
19400 + dwc_otg_device_t *otg_dev = dwc_otg_drvdev(_dev);
19401 + uint32_t val = simple_strtoul(buf, NULL, 16);
19402 +
19403 + if (!dwc_otg_get_param_lpm_enable(otg_dev->core_if)) {
19404 + return 0;
19405 + }
19406 +
19407 + if (!dwc_otg_is_device_mode(otg_dev->core_if)) {
19408 + return 0;
19409 + }
19410 +
19411 + dwc_otg_set_lpmresponse(otg_dev->core_if, val);
19412 + return count;
19413 +}
19414 +
19415 +DEVICE_ATTR(lpm_response, S_IRUGO | S_IWUSR, lpmresp_show, lpmresp_store);
19416 +
19417 +/**
19418 +* Show the sleep_status attribute.
19419 +*/
19420 +static ssize_t sleepstatus_show(struct device *_dev,
19421 + struct device_attribute *attr, char *buf)
19422 +{
19423 + dwc_otg_device_t *otg_dev = dwc_otg_drvdev(_dev);
19424 + return sprintf(buf, "Sleep Status = %d\n",
19425 + dwc_otg_get_lpm_portsleepstatus(otg_dev->core_if));
19426 +}
19427 +
19428 +/**
19429 + * Store the sleep_status attribute.
19430 + */
19431 +static ssize_t sleepstatus_store(struct device *_dev,
19432 + struct device_attribute *attr,
19433 + const char *buf, size_t count)
19434 +{
19435 + dwc_otg_device_t *otg_dev = dwc_otg_drvdev(_dev);
19436 + dwc_otg_core_if_t *core_if = otg_dev->core_if;
19437 +
19438 + if (dwc_otg_get_lpm_portsleepstatus(otg_dev->core_if)) {
19439 + if (dwc_otg_is_host_mode(core_if)) {
19440 +
19441 + DWC_PRINTF("Host initiated resume\n");
19442 + dwc_otg_set_prtresume(otg_dev->core_if, 1);
19443 + }
19444 + }
19445 +
19446 + return count;
19447 +}
19448 +
19449 +DEVICE_ATTR(sleep_status, S_IRUGO | S_IWUSR, sleepstatus_show,
19450 + sleepstatus_store);
19451 +
19452 +#endif /* CONFIG_USB_DWC_OTG_LPM */
19453 +
19454 +/**@}*/
19455 +
19456 +/**
19457 + * Create the device files
19458 + */
19459 +void dwc_otg_attr_create(
19460 +#ifdef LM_INTERFACE
19461 + struct lm_device *dev
19462 +#elif defined(PCI_INTERFACE)
19463 + struct pci_dev *dev
19464 +#elif defined(PLATFORM_INTERFACE)
19465 + struct platform_device *dev
19466 +#endif
19467 + )
19468 +{
19469 + int error;
19470 +
19471 + error = device_create_file(&dev->dev, &dev_attr_regoffset);
19472 + error = device_create_file(&dev->dev, &dev_attr_regvalue);
19473 + error = device_create_file(&dev->dev, &dev_attr_mode);
19474 + error = device_create_file(&dev->dev, &dev_attr_hnpcapable);
19475 + error = device_create_file(&dev->dev, &dev_attr_srpcapable);
19476 + error = device_create_file(&dev->dev, &dev_attr_hsic_connect);
19477 + error = device_create_file(&dev->dev, &dev_attr_inv_sel_hsic);
19478 + error = device_create_file(&dev->dev, &dev_attr_hnp);
19479 + error = device_create_file(&dev->dev, &dev_attr_srp);
19480 + error = device_create_file(&dev->dev, &dev_attr_buspower);
19481 + error = device_create_file(&dev->dev, &dev_attr_bussuspend);
19482 + error = device_create_file(&dev->dev, &dev_attr_mode_ch_tim_en);
19483 + error = device_create_file(&dev->dev, &dev_attr_fr_interval);
19484 + error = device_create_file(&dev->dev, &dev_attr_busconnected);
19485 + error = device_create_file(&dev->dev, &dev_attr_gotgctl);
19486 + error = device_create_file(&dev->dev, &dev_attr_gusbcfg);
19487 + error = device_create_file(&dev->dev, &dev_attr_grxfsiz);
19488 + error = device_create_file(&dev->dev, &dev_attr_gnptxfsiz);
19489 + error = device_create_file(&dev->dev, &dev_attr_gpvndctl);
19490 + error = device_create_file(&dev->dev, &dev_attr_ggpio);
19491 + error = device_create_file(&dev->dev, &dev_attr_guid);
19492 + error = device_create_file(&dev->dev, &dev_attr_gsnpsid);
19493 + error = device_create_file(&dev->dev, &dev_attr_devspeed);
19494 + error = device_create_file(&dev->dev, &dev_attr_enumspeed);
19495 + error = device_create_file(&dev->dev, &dev_attr_hptxfsiz);
19496 + error = device_create_file(&dev->dev, &dev_attr_hprt0);
19497 + error = device_create_file(&dev->dev, &dev_attr_remote_wakeup);
19498 + error = device_create_file(&dev->dev, &dev_attr_rem_wakeup_pwrdn);
19499 + error = device_create_file(&dev->dev, &dev_attr_disconnect_us);
19500 + error = device_create_file(&dev->dev, &dev_attr_regdump);
19501 + error = device_create_file(&dev->dev, &dev_attr_spramdump);
19502 + error = device_create_file(&dev->dev, &dev_attr_hcddump);
19503 + error = device_create_file(&dev->dev, &dev_attr_hcd_frrem);
19504 + error = device_create_file(&dev->dev, &dev_attr_rd_reg_test);
19505 + error = device_create_file(&dev->dev, &dev_attr_wr_reg_test);
19506 +#ifdef CONFIG_USB_DWC_OTG_LPM
19507 + error = device_create_file(&dev->dev, &dev_attr_lpm_response);
19508 + error = device_create_file(&dev->dev, &dev_attr_sleep_status);
19509 +#endif
19510 +}
19511 +
19512 +/**
19513 + * Remove the device files
19514 + */
19515 +void dwc_otg_attr_remove(
19516 +#ifdef LM_INTERFACE
19517 + struct lm_device *dev
19518 +#elif defined(PCI_INTERFACE)
19519 + struct pci_dev *dev
19520 +#elif defined(PLATFORM_INTERFACE)
19521 + struct platform_device *dev
19522 +#endif
19523 + )
19524 +{
19525 + device_remove_file(&dev->dev, &dev_attr_regoffset);
19526 + device_remove_file(&dev->dev, &dev_attr_regvalue);
19527 + device_remove_file(&dev->dev, &dev_attr_mode);
19528 + device_remove_file(&dev->dev, &dev_attr_hnpcapable);
19529 + device_remove_file(&dev->dev, &dev_attr_srpcapable);
19530 + device_remove_file(&dev->dev, &dev_attr_hsic_connect);
19531 + device_remove_file(&dev->dev, &dev_attr_inv_sel_hsic);
19532 + device_remove_file(&dev->dev, &dev_attr_hnp);
19533 + device_remove_file(&dev->dev, &dev_attr_srp);
19534 + device_remove_file(&dev->dev, &dev_attr_buspower);
19535 + device_remove_file(&dev->dev, &dev_attr_bussuspend);
19536 + device_remove_file(&dev->dev, &dev_attr_mode_ch_tim_en);
19537 + device_remove_file(&dev->dev, &dev_attr_fr_interval);
19538 + device_remove_file(&dev->dev, &dev_attr_busconnected);
19539 + device_remove_file(&dev->dev, &dev_attr_gotgctl);
19540 + device_remove_file(&dev->dev, &dev_attr_gusbcfg);
19541 + device_remove_file(&dev->dev, &dev_attr_grxfsiz);
19542 + device_remove_file(&dev->dev, &dev_attr_gnptxfsiz);
19543 + device_remove_file(&dev->dev, &dev_attr_gpvndctl);
19544 + device_remove_file(&dev->dev, &dev_attr_ggpio);
19545 + device_remove_file(&dev->dev, &dev_attr_guid);
19546 + device_remove_file(&dev->dev, &dev_attr_gsnpsid);
19547 + device_remove_file(&dev->dev, &dev_attr_devspeed);
19548 + device_remove_file(&dev->dev, &dev_attr_enumspeed);
19549 + device_remove_file(&dev->dev, &dev_attr_hptxfsiz);
19550 + device_remove_file(&dev->dev, &dev_attr_hprt0);
19551 + device_remove_file(&dev->dev, &dev_attr_remote_wakeup);
19552 + device_remove_file(&dev->dev, &dev_attr_rem_wakeup_pwrdn);
19553 + device_remove_file(&dev->dev, &dev_attr_disconnect_us);
19554 + device_remove_file(&dev->dev, &dev_attr_regdump);
19555 + device_remove_file(&dev->dev, &dev_attr_spramdump);
19556 + device_remove_file(&dev->dev, &dev_attr_hcddump);
19557 + device_remove_file(&dev->dev, &dev_attr_hcd_frrem);
19558 + device_remove_file(&dev->dev, &dev_attr_rd_reg_test);
19559 + device_remove_file(&dev->dev, &dev_attr_wr_reg_test);
19560 +#ifdef CONFIG_USB_DWC_OTG_LPM
19561 + device_remove_file(&dev->dev, &dev_attr_lpm_response);
19562 + device_remove_file(&dev->dev, &dev_attr_sleep_status);
19563 +#endif
19564 +}
19565 --- /dev/null
19566 +++ b/drivers/usb/host/dwc_otg/dwc_otg_attr.h
19567 @@ -0,0 +1,89 @@
19568 +/* ==========================================================================
19569 + * $File: //dwh/usb_iip/dev/software/otg/linux/drivers/dwc_otg_attr.h $
19570 + * $Revision: #13 $
19571 + * $Date: 2010/06/21 $
19572 + * $Change: 1532021 $
19573 + *
19574 + * Synopsys HS OTG Linux Software Driver and documentation (hereinafter,
19575 + * "Software") is an Unsupported proprietary work of Synopsys, Inc. unless
19576 + * otherwise expressly agreed to in writing between Synopsys and you.
19577 + *
19578 + * The Software IS NOT an item of Licensed Software or Licensed Product under
19579 + * any End User Software License Agreement or Agreement for Licensed Product
19580 + * with Synopsys or any supplement thereto. You are permitted to use and
19581 + * redistribute this Software in source and binary forms, with or without
19582 + * modification, provided that redistributions of source code must retain this
19583 + * notice. You may not view, use, disclose, copy or distribute this file or
19584 + * any information contained herein except pursuant to this license grant from
19585 + * Synopsys. If you do not agree with this notice, including the disclaimer
19586 + * below, then you are not authorized to use the Software.
19587 + *
19588 + * THIS SOFTWARE IS BEING DISTRIBUTED BY SYNOPSYS SOLELY ON AN "AS IS" BASIS
19589 + * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
19590 + * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
19591 + * ARE HEREBY DISCLAIMED. IN NO EVENT SHALL SYNOPSYS BE LIABLE FOR ANY DIRECT,
19592 + * INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
19593 + * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
19594 + * SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
19595 + * CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
19596 + * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
19597 + * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH
19598 + * DAMAGE.
19599 + * ========================================================================== */
19600 +
19601 +#if !defined(__DWC_OTG_ATTR_H__)
19602 +#define __DWC_OTG_ATTR_H__
19603 +
19604 +/** @file
19605 + * This file contains the interface to the Linux device attributes.
19606 + */
19607 +extern struct device_attribute dev_attr_regoffset;
19608 +extern struct device_attribute dev_attr_regvalue;
19609 +
19610 +extern struct device_attribute dev_attr_mode;
19611 +extern struct device_attribute dev_attr_hnpcapable;
19612 +extern struct device_attribute dev_attr_srpcapable;
19613 +extern struct device_attribute dev_attr_hnp;
19614 +extern struct device_attribute dev_attr_srp;
19615 +extern struct device_attribute dev_attr_buspower;
19616 +extern struct device_attribute dev_attr_bussuspend;
19617 +extern struct device_attribute dev_attr_mode_ch_tim_en;
19618 +extern struct device_attribute dev_attr_fr_interval;
19619 +extern struct device_attribute dev_attr_busconnected;
19620 +extern struct device_attribute dev_attr_gotgctl;
19621 +extern struct device_attribute dev_attr_gusbcfg;
19622 +extern struct device_attribute dev_attr_grxfsiz;
19623 +extern struct device_attribute dev_attr_gnptxfsiz;
19624 +extern struct device_attribute dev_attr_gpvndctl;
19625 +extern struct device_attribute dev_attr_ggpio;
19626 +extern struct device_attribute dev_attr_guid;
19627 +extern struct device_attribute dev_attr_gsnpsid;
19628 +extern struct device_attribute dev_attr_devspeed;
19629 +extern struct device_attribute dev_attr_enumspeed;
19630 +extern struct device_attribute dev_attr_hptxfsiz;
19631 +extern struct device_attribute dev_attr_hprt0;
19632 +#ifdef CONFIG_USB_DWC_OTG_LPM
19633 +extern struct device_attribute dev_attr_lpm_response;
19634 +extern struct device_attribute dev_attr_sleep_status;
19635 +#endif
19636 +
19637 +void dwc_otg_attr_create(
19638 +#ifdef LM_INTERFACE
19639 + struct lm_device *dev
19640 +#elif defined(PCI_INTERFACE)
19641 + struct pci_dev *dev
19642 +#elif defined(PLATFORM_INTERFACE)
19643 + struct platform_device *dev
19644 +#endif
19645 + );
19646 +
19647 +void dwc_otg_attr_remove(
19648 +#ifdef LM_INTERFACE
19649 + struct lm_device *dev
19650 +#elif defined(PCI_INTERFACE)
19651 + struct pci_dev *dev
19652 +#elif defined(PLATFORM_INTERFACE)
19653 + struct platform_device *dev
19654 +#endif
19655 + );
19656 +#endif
19657 --- /dev/null
19658 +++ b/drivers/usb/host/dwc_otg/dwc_otg_cfi.c
19659 @@ -0,0 +1,1876 @@
19660 +/* ==========================================================================
19661 + * Synopsys HS OTG Linux Software Driver and documentation (hereinafter,
19662 + * "Software") is an Unsupported proprietary work of Synopsys, Inc. unless
19663 + * otherwise expressly agreed to in writing between Synopsys and you.
19664 + *
19665 + * The Software IS NOT an item of Licensed Software or Licensed Product under
19666 + * any End User Software License Agreement or Agreement for Licensed Product
19667 + * with Synopsys or any supplement thereto. You are permitted to use and
19668 + * redistribute this Software in source and binary forms, with or without
19669 + * modification, provided that redistributions of source code must retain this
19670 + * notice. You may not view, use, disclose, copy or distribute this file or
19671 + * any information contained herein except pursuant to this license grant from
19672 + * Synopsys. If you do not agree with this notice, including the disclaimer
19673 + * below, then you are not authorized to use the Software.
19674 + *
19675 + * THIS SOFTWARE IS BEING DISTRIBUTED BY SYNOPSYS SOLELY ON AN "AS IS" BASIS
19676 + * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
19677 + * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
19678 + * ARE HEREBY DISCLAIMED. IN NO EVENT SHALL SYNOPSYS BE LIABLE FOR ANY DIRECT,
19679 + * INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
19680 + * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
19681 + * SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
19682 + * CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
19683 + * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
19684 + * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH
19685 + * DAMAGE.
19686 + * ========================================================================== */
19687 +
19688 +/** @file
19689 + *
19690 + * This file contains most of the CFI (Core Feature Interface)
19691 + * implementation for the OTG.
19692 + */
19693 +
19694 +#ifdef DWC_UTE_CFI
19695 +
19696 +#include "dwc_otg_pcd.h"
19697 +#include "dwc_otg_cfi.h"
19698 +
19699 +/** This definition should actually migrate to the Portability Library */
19700 +#define DWC_CONSTANT_CPU_TO_LE16(x) (x)
19701 +
19702 +extern dwc_otg_pcd_ep_t *get_ep_by_addr(dwc_otg_pcd_t * pcd, u16 wIndex);
19703 +
19704 +static int cfi_core_features_buf(uint8_t * buf, uint16_t buflen);
19705 +static int cfi_get_feature_value(uint8_t * buf, uint16_t buflen,
19706 + struct dwc_otg_pcd *pcd,
19707 + struct cfi_usb_ctrlrequest *ctrl_req);
19708 +static int cfi_set_feature_value(struct dwc_otg_pcd *pcd);
19709 +static int cfi_ep_get_sg_val(uint8_t * buf, struct dwc_otg_pcd *pcd,
19710 + struct cfi_usb_ctrlrequest *req);
19711 +static int cfi_ep_get_concat_val(uint8_t * buf, struct dwc_otg_pcd *pcd,
19712 + struct cfi_usb_ctrlrequest *req);
19713 +static int cfi_ep_get_align_val(uint8_t * buf, struct dwc_otg_pcd *pcd,
19714 + struct cfi_usb_ctrlrequest *req);
19715 +static int cfi_preproc_reset(struct dwc_otg_pcd *pcd,
19716 + struct cfi_usb_ctrlrequest *req);
19717 +static void cfi_free_ep_bs_dyn_data(cfi_ep_t * cfiep);
19718 +
19719 +static uint16_t get_dfifo_size(dwc_otg_core_if_t * core_if);
19720 +static int32_t get_rxfifo_size(dwc_otg_core_if_t * core_if, uint16_t wValue);
19721 +static int32_t get_txfifo_size(struct dwc_otg_pcd *pcd, uint16_t wValue);
19722 +
19723 +static uint8_t resize_fifos(dwc_otg_core_if_t * core_if);
19724 +
19725 +/** This is the header of the all features descriptor */
19726 +static cfi_all_features_header_t all_props_desc_header = {
19727 + .wVersion = DWC_CONSTANT_CPU_TO_LE16(0x100),
19728 + .wCoreID = DWC_CONSTANT_CPU_TO_LE16(CFI_CORE_ID_OTG),
19729 + .wNumFeatures = DWC_CONSTANT_CPU_TO_LE16(9),
19730 +};
19731 +
19732 +/** This is an array of statically allocated feature descriptors */
19733 +static cfi_feature_desc_header_t prop_descs[] = {
19734 +
19735 + /* FT_ID_DMA_MODE */
19736 + {
19737 + .wFeatureID = DWC_CONSTANT_CPU_TO_LE16(FT_ID_DMA_MODE),
19738 + .bmAttributes = CFI_FEATURE_ATTR_RW,
19739 + .wDataLength = DWC_CONSTANT_CPU_TO_LE16(1),
19740 + },
19741 +
19742 + /* FT_ID_DMA_BUFFER_SETUP */
19743 + {
19744 + .wFeatureID = DWC_CONSTANT_CPU_TO_LE16(FT_ID_DMA_BUFFER_SETUP),
19745 + .bmAttributes = CFI_FEATURE_ATTR_RW,
19746 + .wDataLength = DWC_CONSTANT_CPU_TO_LE16(6),
19747 + },
19748 +
19749 + /* FT_ID_DMA_BUFF_ALIGN */
19750 + {
19751 + .wFeatureID = DWC_CONSTANT_CPU_TO_LE16(FT_ID_DMA_BUFF_ALIGN),
19752 + .bmAttributes = CFI_FEATURE_ATTR_RW,
19753 + .wDataLength = DWC_CONSTANT_CPU_TO_LE16(2),
19754 + },
19755 +
19756 + /* FT_ID_DMA_CONCAT_SETUP */
19757 + {
19758 + .wFeatureID = DWC_CONSTANT_CPU_TO_LE16(FT_ID_DMA_CONCAT_SETUP),
19759 + .bmAttributes = CFI_FEATURE_ATTR_RW,
19760 + //.wDataLength = DWC_CONSTANT_CPU_TO_LE16(6),
19761 + },
19762 +
19763 + /* FT_ID_DMA_CIRCULAR */
19764 + {
19765 + .wFeatureID = DWC_CONSTANT_CPU_TO_LE16(FT_ID_DMA_CIRCULAR),
19766 + .bmAttributes = CFI_FEATURE_ATTR_RW,
19767 + .wDataLength = DWC_CONSTANT_CPU_TO_LE16(6),
19768 + },
19769 +
19770 + /* FT_ID_THRESHOLD_SETUP */
19771 + {
19772 + .wFeatureID = DWC_CONSTANT_CPU_TO_LE16(FT_ID_THRESHOLD_SETUP),
19773 + .bmAttributes = CFI_FEATURE_ATTR_RW,
19774 + .wDataLength = DWC_CONSTANT_CPU_TO_LE16(6),
19775 + },
19776 +
19777 + /* FT_ID_DFIFO_DEPTH */
19778 + {
19779 + .wFeatureID = DWC_CONSTANT_CPU_TO_LE16(FT_ID_DFIFO_DEPTH),
19780 + .bmAttributes = CFI_FEATURE_ATTR_RO,
19781 + .wDataLength = DWC_CONSTANT_CPU_TO_LE16(2),
19782 + },
19783 +
19784 + /* FT_ID_TX_FIFO_DEPTH */
19785 + {
19786 + .wFeatureID = DWC_CONSTANT_CPU_TO_LE16(FT_ID_TX_FIFO_DEPTH),
19787 + .bmAttributes = CFI_FEATURE_ATTR_RW,
19788 + .wDataLength = DWC_CONSTANT_CPU_TO_LE16(2),
19789 + },
19790 +
19791 + /* FT_ID_RX_FIFO_DEPTH */
19792 + {
19793 + .wFeatureID = DWC_CONSTANT_CPU_TO_LE16(FT_ID_RX_FIFO_DEPTH),
19794 + .bmAttributes = CFI_FEATURE_ATTR_RW,
19795 + .wDataLength = DWC_CONSTANT_CPU_TO_LE16(2),
19796 + }
19797 +};
19798 +
19799 +/** The table of feature names */
19800 +cfi_string_t prop_name_table[] = {
19801 + {FT_ID_DMA_MODE, "dma_mode"},
19802 + {FT_ID_DMA_BUFFER_SETUP, "buffer_setup"},
19803 + {FT_ID_DMA_BUFF_ALIGN, "buffer_align"},
19804 + {FT_ID_DMA_CONCAT_SETUP, "concat_setup"},
19805 + {FT_ID_DMA_CIRCULAR, "buffer_circular"},
19806 + {FT_ID_THRESHOLD_SETUP, "threshold_setup"},
19807 + {FT_ID_DFIFO_DEPTH, "dfifo_depth"},
19808 + {FT_ID_TX_FIFO_DEPTH, "txfifo_depth"},
19809 + {FT_ID_RX_FIFO_DEPTH, "rxfifo_depth"},
19810 + {}
19811 +};
19812 +
19813 +/************************************************************************/
19814 +
19815 +/**
19816 + * Returns the name of the feature by its ID
19817 + * or NULL if no feature ID matches.
19818 + *
19819 + */
19820 +const uint8_t *get_prop_name(uint16_t prop_id, int *len)
19821 +{
19822 + cfi_string_t *pstr;
19823 + *len = 0;
19824 +
19825 + for (pstr = prop_name_table; pstr && pstr->s; pstr++) {
19826 + if (pstr->id == prop_id) {
19827 + *len = DWC_STRLEN(pstr->s);
19828 + return pstr->s;
19829 + }
19830 + }
19831 + return NULL;
19832 +}
19833 +
19834 +/**
19835 + * This function handles all CFI specific control requests.
19836 + *
19837 + * Return a negative value to stall the DCE.
19838 + */
19839 +int cfi_setup(struct dwc_otg_pcd *pcd, struct cfi_usb_ctrlrequest *ctrl)
19840 +{
19841 + int retval = 0;
19842 + dwc_otg_pcd_ep_t *ep = NULL;
19843 + cfiobject_t *cfi = pcd->cfi;
19844 + struct dwc_otg_core_if *coreif = GET_CORE_IF(pcd);
19845 + uint16_t wLen = DWC_LE16_TO_CPU(&ctrl->wLength);
19846 + uint16_t wValue = DWC_LE16_TO_CPU(&ctrl->wValue);
19847 + uint16_t wIndex = DWC_LE16_TO_CPU(&ctrl->wIndex);
19848 + uint32_t regaddr = 0;
19849 + uint32_t regval = 0;
19850 +
19851 + /* Save this Control Request in the CFI object.
19852 + * The data field will be assigned in the data stage completion CB function.
19853 + */
19854 + cfi->ctrl_req = *ctrl;
19855 + cfi->ctrl_req.data = NULL;
19856 +
19857 + cfi->need_gadget_att = 0;
19858 + cfi->need_status_in_complete = 0;
19859 +
19860 + switch (ctrl->bRequest) {
19861 + case VEN_CORE_GET_FEATURES:
19862 + retval = cfi_core_features_buf(cfi->buf_in.buf, CFI_IN_BUF_LEN);
19863 + if (retval >= 0) {
19864 + //dump_msg(cfi->buf_in.buf, retval);
19865 + ep = &pcd->ep0;
19866 +
19867 + retval = min((uint16_t) retval, wLen);
19868 + /* Transfer this buffer to the host through the EP0-IN EP */
19869 + ep->dwc_ep.dma_addr = cfi->buf_in.addr;
19870 + ep->dwc_ep.start_xfer_buff = cfi->buf_in.buf;
19871 + ep->dwc_ep.xfer_buff = cfi->buf_in.buf;
19872 + ep->dwc_ep.xfer_len = retval;
19873 + ep->dwc_ep.xfer_count = 0;
19874 + ep->dwc_ep.sent_zlp = 0;
19875 + ep->dwc_ep.total_len = ep->dwc_ep.xfer_len;
19876 +
19877 + pcd->ep0_pending = 1;
19878 + dwc_otg_ep0_start_transfer(coreif, &ep->dwc_ep);
19879 + }
19880 + retval = 0;
19881 + break;
19882 +
19883 + case VEN_CORE_GET_FEATURE:
19884 + CFI_INFO("VEN_CORE_GET_FEATURE\n");
19885 + retval = cfi_get_feature_value(cfi->buf_in.buf, CFI_IN_BUF_LEN,
19886 + pcd, ctrl);
19887 + if (retval >= 0) {
19888 + ep = &pcd->ep0;
19889 +
19890 + retval = min((uint16_t) retval, wLen);
19891 + /* Transfer this buffer to the host through the EP0-IN EP */
19892 + ep->dwc_ep.dma_addr = cfi->buf_in.addr;
19893 + ep->dwc_ep.start_xfer_buff = cfi->buf_in.buf;
19894 + ep->dwc_ep.xfer_buff = cfi->buf_in.buf;
19895 + ep->dwc_ep.xfer_len = retval;
19896 + ep->dwc_ep.xfer_count = 0;
19897 + ep->dwc_ep.sent_zlp = 0;
19898 + ep->dwc_ep.total_len = ep->dwc_ep.xfer_len;
19899 +
19900 + pcd->ep0_pending = 1;
19901 + dwc_otg_ep0_start_transfer(coreif, &ep->dwc_ep);
19902 + }
19903 + CFI_INFO("VEN_CORE_GET_FEATURE=%d\n", retval);
19904 + dump_msg(cfi->buf_in.buf, retval);
19905 + break;
19906 +
19907 + case VEN_CORE_SET_FEATURE:
19908 + CFI_INFO("VEN_CORE_SET_FEATURE\n");
19909 + /* Set up an XFER to get the data stage of the control request,
19910 + * which is the new value of the feature to be modified.
19911 + */
19912 + ep = &pcd->ep0;
19913 + ep->dwc_ep.is_in = 0;
19914 + ep->dwc_ep.dma_addr = cfi->buf_out.addr;
19915 + ep->dwc_ep.start_xfer_buff = cfi->buf_out.buf;
19916 + ep->dwc_ep.xfer_buff = cfi->buf_out.buf;
19917 + ep->dwc_ep.xfer_len = wLen;
19918 + ep->dwc_ep.xfer_count = 0;
19919 + ep->dwc_ep.sent_zlp = 0;
19920 + ep->dwc_ep.total_len = ep->dwc_ep.xfer_len;
19921 +
19922 + pcd->ep0_pending = 1;
19923 + /* Read the control write's data stage */
19924 + dwc_otg_ep0_start_transfer(coreif, &ep->dwc_ep);
19925 + retval = 0;
19926 + break;
19927 +
19928 + case VEN_CORE_RESET_FEATURES:
19929 + CFI_INFO("VEN_CORE_RESET_FEATURES\n");
19930 + cfi->need_gadget_att = 1;
19931 + cfi->need_status_in_complete = 1;
19932 + retval = cfi_preproc_reset(pcd, ctrl);
19933 + CFI_INFO("VEN_CORE_RESET_FEATURES = (%d)\n", retval);
19934 + break;
19935 +
19936 + case VEN_CORE_ACTIVATE_FEATURES:
19937 + CFI_INFO("VEN_CORE_ACTIVATE_FEATURES\n");
19938 + break;
19939 +
19940 + case VEN_CORE_READ_REGISTER:
19941 + CFI_INFO("VEN_CORE_READ_REGISTER\n");
19942 + /* wValue optionally contains the HI WORD of the register offset and
19943 + * wIndex contains the LOW WORD of the register offset
19944 + */
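19944 +		/* e.g. wValue=0x0001, wIndex=0x0234 selects absolute address 0x00010234;
19944 +		 * with wValue==0, wIndex is an offset from the register base (base
19944 +		 * lookup is still a TODO below). */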
19945 + if (wValue == 0) {
19946 + /* @TODO - MAS - fix the access to the base field */
19947 + regaddr = 0;
19948 + //regaddr = (uint32_t) pcd->otg_dev->os_dep.base;
19949 + //GET_CORE_IF(pcd)->co
19950 + regaddr |= wIndex;
19951 + } else {
19952 + regaddr = (wValue << 16) | wIndex;
19953 + }
19954 +
19955 + /* Read a 32-bit value of the memory at the regaddr */
19956 + regval = DWC_READ_REG32((uint32_t *) regaddr);
19957 +
19958 + ep = &pcd->ep0;
19959 + dwc_memcpy(cfi->buf_in.buf, &regval, sizeof(uint32_t));
19960 + ep->dwc_ep.is_in = 1;
19961 + ep->dwc_ep.dma_addr = cfi->buf_in.addr;
19962 + ep->dwc_ep.start_xfer_buff = cfi->buf_in.buf;
19963 + ep->dwc_ep.xfer_buff = cfi->buf_in.buf;
19964 + ep->dwc_ep.xfer_len = wLen;
19965 + ep->dwc_ep.xfer_count = 0;
19966 + ep->dwc_ep.sent_zlp = 0;
19967 + ep->dwc_ep.total_len = ep->dwc_ep.xfer_len;
19968 +
19969 + pcd->ep0_pending = 1;
19970 + dwc_otg_ep0_start_transfer(coreif, &ep->dwc_ep);
19971 + cfi->need_gadget_att = 0;
19972 + retval = 0;
19973 + break;
19974 +
19975 + case VEN_CORE_WRITE_REGISTER:
19976 + CFI_INFO("VEN_CORE_WRITE_REGISTER\n");
19977 + /* Set up an XFER to get the data stage of the control request,
19978 + * which is the new value of the register to be modified.
19979 + */
19980 + ep = &pcd->ep0;
19981 + ep->dwc_ep.is_in = 0;
19982 + ep->dwc_ep.dma_addr = cfi->buf_out.addr;
19983 + ep->dwc_ep.start_xfer_buff = cfi->buf_out.buf;
19984 + ep->dwc_ep.xfer_buff = cfi->buf_out.buf;
19985 + ep->dwc_ep.xfer_len = wLen;
19986 + ep->dwc_ep.xfer_count = 0;
19987 + ep->dwc_ep.sent_zlp = 0;
19988 + ep->dwc_ep.total_len = ep->dwc_ep.xfer_len;
19989 +
19990 + pcd->ep0_pending = 1;
19991 + /* Read the control write's data stage */
19992 + dwc_otg_ep0_start_transfer(coreif, &ep->dwc_ep);
19993 + retval = 0;
19994 + break;
19995 +
19996 + default:
19997 + retval = -DWC_E_NOT_SUPPORTED;
19998 + break;
19999 + }
20000 +
20001 + return retval;
20002 +}
20003 +
20004 +/**
20005 + * This function prepares the core feature descriptors and copies their
20006 + * raw representation into the buffer <buf>.
20007 + *
20008 + * The buffer structure is as follows:
20009 + * all_features_header (8 bytes)
20010 + * features_#1 (8 bytes + feature name string length)
20011 + * features_#2 (8 bytes + feature name string length)
20012 + * .....
20013 + * features_#n - where n=the total count of feature descriptors
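+ *
+ * For example, assuming the 8-byte headers described above, the "dma_mode"
+ * feature (an 8-character name) occupies 8 + 8 = 16 bytes immediately after
+ * the all_features_header, and wTotalLen in that header covers the header
+ * plus every such per-feature entry.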
20014 + */
20015 +static int cfi_core_features_buf(uint8_t * buf, uint16_t buflen)
20016 +{
20017 + cfi_feature_desc_header_t *prop_hdr = prop_descs;
20018 + cfi_feature_desc_header_t *prop;
20019 + cfi_all_features_header_t *all_props_hdr = &all_props_desc_header;
20020 + cfi_all_features_header_t *tmp;
20021 + uint8_t *tmpbuf = buf;
20022 + const uint8_t *pname = NULL;
20023 + int i, j, namelen = 0, totlen;
20024 +
20025 + /* Prepare and copy the core features into the buffer */
20026 + CFI_INFO("%s:\n", __func__);
20027 +
20028 + tmp = (cfi_all_features_header_t *) tmpbuf;
20029 + *tmp = *all_props_hdr;
20030 + tmpbuf += CFI_ALL_FEATURES_HDR_LEN;
20031 +
20032 + j = sizeof(prop_descs) / sizeof(cfi_all_features_header_t);
20033 + for (i = 0; i < j; i++, prop_hdr++) {
20034 + pname = get_prop_name(prop_hdr->wFeatureID, &namelen);
20035 + prop = (cfi_feature_desc_header_t *) tmpbuf;
20036 + *prop = *prop_hdr;
20037 +
20038 + prop->bNameLen = namelen;
20039 + prop->wLength =
20040 + DWC_CONSTANT_CPU_TO_LE16(CFI_FEATURE_DESC_HDR_LEN +
20041 + namelen);
20042 +
20043 + tmpbuf += CFI_FEATURE_DESC_HDR_LEN;
20044 + dwc_memcpy(tmpbuf, pname, namelen);
20045 + tmpbuf += namelen;
20046 + }
20047 +
20048 + totlen = tmpbuf - buf;
20049 +
20050 + if (totlen > 0) {
20051 + tmp = (cfi_all_features_header_t *) buf;
20052 + tmp->wTotalLen = DWC_CONSTANT_CPU_TO_LE16(totlen);
20053 + }
20054 +
20055 + return totlen;
20056 +}
20057 +
20058 +/**
20059 + * This function releases all the dynamic memory in the CFI object.
20060 + */
20061 +static void cfi_release(cfiobject_t * cfiobj)
20062 +{
20063 + cfi_ep_t *cfiep;
20064 + dwc_list_link_t *tmp;
20065 +
20066 + CFI_INFO("%s\n", __func__);
20067 +
20068 + if (cfiobj->buf_in.buf) {
20069 + DWC_DMA_FREE(CFI_IN_BUF_LEN, cfiobj->buf_in.buf,
20070 + cfiobj->buf_in.addr);
20071 + cfiobj->buf_in.buf = NULL;
20072 + }
20073 +
20074 + if (cfiobj->buf_out.buf) {
20075 + DWC_DMA_FREE(CFI_OUT_BUF_LEN, cfiobj->buf_out.buf,
20076 + cfiobj->buf_out.addr);
20077 + cfiobj->buf_out.buf = NULL;
20078 + }
20079 +
20080 + /* Free the Buffer Setup values for each EP */
20081 + //list_for_each_entry(cfiep, &cfiobj->active_eps, lh) {
20082 + DWC_LIST_FOREACH(tmp, &cfiobj->active_eps) {
20083 + cfiep = DWC_LIST_ENTRY(tmp, struct cfi_ep, lh);
20084 + cfi_free_ep_bs_dyn_data(cfiep);
20085 + }
20086 +}
20087 +
20088 +/**
20089 + * This function frees the dynamically allocated EP buffer setup data.
20090 + */
20091 +static void cfi_free_ep_bs_dyn_data(cfi_ep_t * cfiep)
20092 +{
20093 + if (cfiep->bm_sg) {
20094 + DWC_FREE(cfiep->bm_sg);
20095 + cfiep->bm_sg = NULL;
20096 + }
20097 +
20098 + if (cfiep->bm_align) {
20099 + DWC_FREE(cfiep->bm_align);
20100 + cfiep->bm_align = NULL;
20101 + }
20102 +
20103 + if (cfiep->bm_concat) {
20104 + if (NULL != cfiep->bm_concat->wTxBytes) {
20105 + DWC_FREE(cfiep->bm_concat->wTxBytes);
20106 + cfiep->bm_concat->wTxBytes = NULL;
20107 + }
20108 + DWC_FREE(cfiep->bm_concat);
20109 + cfiep->bm_concat = NULL;
20110 + }
20111 +}
20112 +
20113 +/**
20114 + * This function initializes the default values of the features
20115 + * for a specific endpoint and should be called only once when
20116 + * the EP is enabled for the first time.
20117 + */
20118 +static int cfi_ep_init_defaults(struct dwc_otg_pcd *pcd, cfi_ep_t * cfiep)
20119 +{
20120 + int retval = 0;
20121 +
20122 + cfiep->bm_sg = DWC_ALLOC(sizeof(ddma_sg_buffer_setup_t));
20123 + if (NULL == cfiep->bm_sg) {
20124 + CFI_INFO("Failed to allocate memory for SG feature value\n");
20125 + return -DWC_E_NO_MEMORY;
20126 + }
20127 + dwc_memset(cfiep->bm_sg, 0, sizeof(ddma_sg_buffer_setup_t));
20128 +
20129 + /* For the Concatenation feature's default value we do not allocate
20130 + * memory for the wTxBytes field - it will be done in the set_feature_value
20131 + * request handler.
20132 + */
20133 + cfiep->bm_concat = DWC_ALLOC(sizeof(ddma_concat_buffer_setup_t));
20134 + if (NULL == cfiep->bm_concat) {
20135 + CFI_INFO
20136 + ("Failed to allocate memory for CONCATENATION feature value\n");
20137 + DWC_FREE(cfiep->bm_sg);
20138 + return -DWC_E_NO_MEMORY;
20139 + }
20140 + dwc_memset(cfiep->bm_concat, 0, sizeof(ddma_concat_buffer_setup_t));
20141 +
20142 + cfiep->bm_align = DWC_ALLOC(sizeof(ddma_align_buffer_setup_t));
20143 + if (NULL == cfiep->bm_align) {
20144 + CFI_INFO
20145 + ("Failed to allocate memory for Alignment feature value\n");
20146 + DWC_FREE(cfiep->bm_sg);
20147 + DWC_FREE(cfiep->bm_concat);
20148 + return -DWC_E_NO_MEMORY;
20149 + }
20150 + dwc_memset(cfiep->bm_align, 0, sizeof(ddma_align_buffer_setup_t));
20151 +
20152 + return retval;
20153 +}
20154 +
20155 +/**
20156 + * The callback function that notifies the CFI on the activation of
20157 + * an endpoint in the PCD. The following steps are done in this function:
20158 + *
20159 + * Create a dynamically allocated cfi_ep_t object (a CFI wrapper to the PCD's
20160 + * active endpoint)
20161 + * Create MAX_DMA_DESCS_PER_EP count DMA Descriptors for the EP
20162 + * Set the Buffer Mode to standard
20163 + * Initialize the default values for all EP modes (SG, Circular, Concat, Align)
20164 + * Add the cfi_ep_t object to the list of active endpoints in the CFI object
20165 + */
20166 +static int cfi_ep_enable(struct cfiobject *cfi, struct dwc_otg_pcd *pcd,
20167 + struct dwc_otg_pcd_ep *ep)
20168 +{
20169 + cfi_ep_t *cfiep;
20170 + int retval = -DWC_E_NOT_SUPPORTED;
20171 +
20172 + CFI_INFO("%s: epname=%s; epnum=0x%02x\n", __func__,
20173 + "EP_" /*ep->ep.name */ , ep->desc->bEndpointAddress);
20174 + /* MAS - Check whether this endpoint is already in the list */
20175 + cfiep = get_cfi_ep_by_pcd_ep(cfi, ep);
20176 +
20177 + if (NULL == cfiep) {
20178 + /* Allocate a cfi_ep_t object */
20179 + cfiep = DWC_ALLOC(sizeof(cfi_ep_t));
20180 + if (NULL == cfiep) {
20181 + CFI_INFO
20182 + ("Unable to allocate memory for <cfiep> in function %s\n",
20183 + __func__);
20184 + return -DWC_E_NO_MEMORY;
20185 + }
20186 + dwc_memset(cfiep, 0, sizeof(cfi_ep_t));
20187 +
20188 + /* Save the dwc_otg_pcd_ep pointer in the cfiep object */
20189 + cfiep->ep = ep;
20190 +
20191 + /* Allocate the DMA Descriptors chain of MAX_DMA_DESCS_PER_EP count */
20192 + ep->dwc_ep.descs =
20193 + DWC_DMA_ALLOC(MAX_DMA_DESCS_PER_EP *
20194 + sizeof(dwc_otg_dma_desc_t),
20195 + &ep->dwc_ep.descs_dma_addr);
20196 +
20197 + if (NULL == ep->dwc_ep.descs) {
20198 + DWC_FREE(cfiep);
20199 + return -DWC_E_NO_MEMORY;
20200 + }
20201 +
20202 + DWC_LIST_INIT(&cfiep->lh);
20203 +
20204 + /* Set the buffer mode to BM_STANDARD. It will be modified
20205 + * when building descriptors for a specific buffer mode */
20206 + ep->dwc_ep.buff_mode = BM_STANDARD;
20207 +
20208 + /* Create and initialize the default values for this EP's Buffer modes */
20209 + if ((retval = cfi_ep_init_defaults(pcd, cfiep)) < 0) {
+ /* Free the descriptor pool and the wrapper so they are not leaked */
+ DWC_DMA_FREE(MAX_DMA_DESCS_PER_EP *
+ sizeof(dwc_otg_dma_desc_t),
+ ep->dwc_ep.descs,
+ ep->dwc_ep.descs_dma_addr);
+ DWC_FREE(cfiep);
20210 + return retval;
+ }
20211 +
20212 + /* Add the cfi_ep_t object to the CFI object's list of active endpoints */
20213 + DWC_LIST_INSERT_TAIL(&cfi->active_eps, &cfiep->lh);
20214 + retval = 0;
20215 + } else { /* The EP is already in the list */
20216 + CFI_INFO("%s: The EP is already in the list\n",
20217 + __func__);
20218 + }
20219 +
20220 + return retval;
20221 +}
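+
+/*
+ * Note on the per-endpoint descriptor pool allocated in cfi_ep_enable():
+ * every CFI-enabled endpoint gets MAX_DMA_DESCS_PER_EP descriptors up front.
+ * Assuming, purely for illustration, a limit of 64 descriptors of 8 bytes
+ * each, that pins 512 bytes of coherent DMA memory per endpoint for as long
+ * as the EP stays active.  The pool is then reused by the cfi_build_*_descs()
+ * routines below, which mark only desc_count entries of it as ready for any
+ * given request.
+ */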
20222 +
20223 +/**
20224 + * This function is called when the data stage of a 3-stage Control Write request
20225 + * is complete.
20226 + *
20227 + */
20228 +static int cfi_ctrl_write_complete(struct cfiobject *cfi,
20229 + struct dwc_otg_pcd *pcd)
20230 +{
20231 + uint32_t addr, reg_value;
20232 + uint16_t wIndex, wValue;
20233 + uint8_t bRequest;
20234 + uint8_t *buf = cfi->buf_out.buf;
20235 + //struct usb_ctrlrequest *ctrl_req = &cfi->ctrl_req_saved;
20236 + struct cfi_usb_ctrlrequest *ctrl_req = &cfi->ctrl_req;
20237 + int retval = -DWC_E_NOT_SUPPORTED;
20238 +
20239 + CFI_INFO("%s\n", __func__);
20240 +
20241 + bRequest = ctrl_req->bRequest;
20242 + wIndex = DWC_CONSTANT_CPU_TO_LE16(ctrl_req->wIndex);
20243 + wValue = DWC_CONSTANT_CPU_TO_LE16(ctrl_req->wValue);
20244 +
20245 + /*
20246 + * Save the pointer to the data stage in the ctrl_req's <data> field.
20247 + * The request itself should already have been saved in the command stage.
20248 + */
20249 + ctrl_req->data = cfi->buf_out.buf;
20250 + cfi->need_status_in_complete = 0;
20251 + cfi->need_gadget_att = 0;
20252 +
20253 + switch (bRequest) {
20254 + case VEN_CORE_WRITE_REGISTER:
20255 + /* The buffer contains raw data of the new value for the register */
20256 + reg_value = *((uint32_t *) buf);
20257 + if (wValue == 0) {
20258 + addr = 0;
20259 + //addr = (uint32_t) pcd->otg_dev->os_dep.base;
20260 + addr += wIndex;
20261 + } else {
20262 + addr = (wValue << 16) | wIndex;
20263 + }
20264 +
20265 + //writel(reg_value, addr);
20266 +
20267 + retval = 0;
20268 + cfi->need_status_in_complete = 1;
20269 + break;
20270 +
20271 + case VEN_CORE_SET_FEATURE:
20272 + /* The buffer contains raw data of the new value of the feature */
20273 + retval = cfi_set_feature_value(pcd);
20274 + if (retval < 0)
20275 + return retval;
20276 +
20277 + cfi->need_status_in_complete = 1;
20278 + break;
20279 +
20280 + default:
20281 + break;
20282 + }
20283 +
20284 + return retval;
20285 +}
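+
+/*
+ * A short note on the VEN_CORE_WRITE_REGISTER address encoding handled above:
+ * when wValue is non-zero the 32-bit target address is (wValue << 16) | wIndex,
+ * i.e. wValue carries the upper and wIndex the lower half of the address.
+ * When wValue is zero, wIndex is meant as an offset from the controller base
+ * (the base lookup is commented out here).  The final writel() is commented
+ * out as well, so in this version the request is parsed and acknowledged but
+ * no register is actually written.
+ */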
20286 +
20287 +/**
20288 + * This function builds the DMA descriptors for the SG buffer mode.
20289 + */
20290 +static void cfi_build_sg_descs(struct cfiobject *cfi, cfi_ep_t * cfiep,
20291 + dwc_otg_pcd_request_t * req)
20292 +{
20293 + struct dwc_otg_pcd_ep *ep = cfiep->ep;
20294 + ddma_sg_buffer_setup_t *sgval = cfiep->bm_sg;
20295 + struct dwc_otg_dma_desc *desc = cfiep->ep->dwc_ep.descs;
20296 + struct dwc_otg_dma_desc *desc_last = cfiep->ep->dwc_ep.descs;
20297 + dma_addr_t buff_addr = req->dma;
20298 + int i;
20299 + uint32_t txsize, off;
20300 +
20301 + txsize = sgval->wSize;
20302 + off = sgval->bOffset;
20303 +
20304 +// CFI_INFO("%s: %s TXSIZE=0x%08x; OFFSET=0x%08x\n",
20305 +// __func__, cfiep->ep->ep.name, txsize, off);
20306 +
20307 + for (i = 0; i < sgval->bCount; i++) {
20308 + desc->status.b.bs = BS_HOST_BUSY;
20309 + desc->buf = buff_addr;
20310 + desc->status.b.l = 0;
20311 + desc->status.b.ioc = 0;
20312 + desc->status.b.sp = 0;
20313 + desc->status.b.bytes = txsize;
20314 + desc->status.b.bs = BS_HOST_READY;
20315 +
20316 + /* Set the next address of the buffer */
20317 + buff_addr += txsize + off;
20318 + desc_last = desc;
20319 + desc++;
20320 + }
20321 +
20322 + /* Set the last, ioc and sp bits on the Last DMA Descriptor */
20323 + desc_last->status.b.l = 1;
20324 + desc_last->status.b.ioc = 1;
20325 + desc_last->status.b.sp = ep->dwc_ep.sent_zlp;
20326 + /* Save the last DMA descriptor pointer */
20327 + cfiep->dma_desc_last = desc_last;
20328 + cfiep->desc_count = sgval->bCount;
20329 +}
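+
+/*
+ * Worked example of the SG chain built above, assuming bCount = 3,
+ * wSize = 512 and bOffset = 16:
+ *
+ *   desc[0].buf = req->dma          bytes = 512
+ *   desc[1].buf = req->dma + 528    bytes = 512
+ *   desc[2].buf = req->dma + 1056   bytes = 512   (L and IOC set)
+ *
+ * Each descriptor advances the buffer address by wSize + bOffset, and only
+ * the last descriptor gets the Last/IOC bits (plus SP when a ZLP was
+ * requested for the endpoint).
+ */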
20330 +
20331 +/**
20332 + * This function builds the DMA descriptors for the Concatenation buffer mode.
20333 + */
20334 +static void cfi_build_concat_descs(struct cfiobject *cfi, cfi_ep_t * cfiep,
20335 + dwc_otg_pcd_request_t * req)
20336 +{
20337 + struct dwc_otg_pcd_ep *ep = cfiep->ep;
20338 + ddma_concat_buffer_setup_t *concatval = cfiep->bm_concat;
20339 + struct dwc_otg_dma_desc *desc = cfiep->ep->dwc_ep.descs;
20340 + struct dwc_otg_dma_desc *desc_last = cfiep->ep->dwc_ep.descs;
20341 + dma_addr_t buff_addr = req->dma;
20342 + int i;
20343 + uint16_t *txsize;
20344 +
20345 + txsize = concatval->wTxBytes;
20346 +
20347 + for (i = 0; i < concatval->hdr.bDescCount; i++) {
20348 + desc->buf = buff_addr;
20349 + desc->status.b.bs = BS_HOST_BUSY;
20350 + desc->status.b.l = 0;
20351 + desc->status.b.ioc = 0;
20352 + desc->status.b.sp = 0;
20353 + desc->status.b.bytes = *txsize;
20354 + desc->status.b.bs = BS_HOST_READY;
20355 +
20356 + txsize++;
20357 + /* Set the next address of the buffer */
20358 + buff_addr += UGETW(ep->desc->wMaxPacketSize);
20359 + desc_last = desc;
20360 + desc++;
20361 + }
20362 +
20363 + /* Set the last, ioc and sp bits on the Last DMA Descriptor */
20364 + desc_last->status.b.l = 1;
20365 + desc_last->status.b.ioc = 1;
20366 + desc_last->status.b.sp = ep->dwc_ep.sent_zlp;
20367 + cfiep->dma_desc_last = desc_last;
20368 + cfiep->desc_count = concatval->hdr.bDescCount;
20369 +}
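+
+/*
+ * Worked example of the Concatenation chain built above, assuming
+ * bDescCount = 3, wTxBytes[] = {512, 512, 100} and wMaxPacketSize = 512:
+ * three descriptors are produced whose byte counts come from wTxBytes[],
+ * while the buffer address advances by one wMaxPacketSize per descriptor
+ * regardless of the segment length, so each segment occupies the start of
+ * its own max-packet-sized slot in the request buffer.
+ */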
20370 +
20371 +/**
20372 + * This function builds the DMA descriptors for the Circular buffer mode
20373 + */
20374 +static void cfi_build_circ_descs(struct cfiobject *cfi, cfi_ep_t * cfiep,
20375 + dwc_otg_pcd_request_t * req)
20376 +{
20377 + /* @todo: MAS - add implementation when this feature needs to be tested */
20378 +}
20379 +
20380 +/**
20381 + * This function builds the DMA descriptors for the Alignment buffer mode
20382 + */
20383 +static void cfi_build_align_descs(struct cfiobject *cfi, cfi_ep_t * cfiep,
20384 + dwc_otg_pcd_request_t * req)
20385 +{
20386 + struct dwc_otg_pcd_ep *ep = cfiep->ep;
20387 + ddma_align_buffer_setup_t *alignval = cfiep->bm_align;
20388 + struct dwc_otg_dma_desc *desc = cfiep->ep->dwc_ep.descs;
20389 + dma_addr_t buff_addr = req->dma;
20390 +
20391 + desc->status.b.bs = BS_HOST_BUSY;
20392 + desc->status.b.l = 1;
20393 + desc->status.b.ioc = 1;
20394 + desc->status.b.sp = ep->dwc_ep.sent_zlp;
20395 + desc->status.b.bytes = req->length;
20396 + /* Adjust the buffer alignment */
20397 + desc->buf = (buff_addr + alignval->bAlign);
20398 + desc->status.b.bs = BS_HOST_READY;
20399 + cfiep->dma_desc_last = desc;
20400 + cfiep->desc_count = 1;
20401 +}
20402 +
20403 +/**
20404 + * This function builds the DMA descriptors chain for different modes of the
20405 + * buffer setup of an endpoint.
20406 + */
20407 +static void cfi_build_descriptors(struct cfiobject *cfi,
20408 + struct dwc_otg_pcd *pcd,
20409 + struct dwc_otg_pcd_ep *ep,
20410 + dwc_otg_pcd_request_t * req)
20411 +{
20412 + cfi_ep_t *cfiep;
20413 +
20414 + /* Get the cfiep by the dwc_otg_pcd_ep */
20415 + cfiep = get_cfi_ep_by_pcd_ep(cfi, ep);
20416 + if (NULL == cfiep) {
20417 + CFI_INFO("%s: Unable to find a matching active endpoint\n",
20418 + __func__);
20419 + return;
20420 + }
20421 +
20422 + cfiep->xfer_len = req->length;
20423 +
20424 + /* Build the DMA descriptor chain according to the EP's buffer mode */
20425 + switch (cfiep->ep->dwc_ep.buff_mode) {
20426 + case BM_SG:
20427 + cfi_build_sg_descs(cfi, cfiep, req);
20428 + break;
20429 +
20430 + case BM_CONCAT:
20431 + cfi_build_concat_descs(cfi, cfiep, req);
20432 + break;
20433 +
20434 + case BM_CIRCULAR:
20435 + cfi_build_circ_descs(cfi, cfiep, req);
20436 + break;
20437 +
20438 + case BM_ALIGN:
20439 + cfi_build_align_descs(cfi, cfiep, req);
20440 + break;
20441 +
20442 + default:
20443 + break;
20444 + }
20445 +}
20446 +
20447 +/**
20448 + * Allocate DMA buffer for different Buffer modes.
20449 + */
20450 +static void *cfi_ep_alloc_buf(struct cfiobject *cfi, struct dwc_otg_pcd *pcd,
20451 + struct dwc_otg_pcd_ep *ep, dma_addr_t * dma,
20452 + unsigned size, gfp_t flags)
20453 +{
20454 + return DWC_DMA_ALLOC(size, dma);
20455 +}
20456 +
20457 +/**
20458 + * This function initializes the CFI object.
20459 + */
20460 +int init_cfi(cfiobject_t * cfiobj)
20461 +{
20462 + CFI_INFO("%s\n", __func__);
20463 +
20464 + /* Allocate a buffer for IN XFERs */
20465 + cfiobj->buf_in.buf =
20466 + DWC_DMA_ALLOC(CFI_IN_BUF_LEN, &cfiobj->buf_in.addr);
20467 + if (NULL == cfiobj->buf_in.buf) {
20468 + CFI_INFO("Unable to allocate buffer for INs\n");
20469 + return -DWC_E_NO_MEMORY;
20470 + }
20471 +
20472 + /* Allocate a buffer for OUT XFERs */
20473 + cfiobj->buf_out.buf =
20474 + DWC_DMA_ALLOC(CFI_OUT_BUF_LEN, &cfiobj->buf_out.addr);
20475 + if (NULL == cfiobj->buf_out.buf) {
20476 + CFI_INFO("Unable to allocate buffer for OUT\n");
20477 + return -DWC_E_NO_MEMORY;
20478 + }
20479 +
20480 + /* Initialize the callback function pointers */
20481 + cfiobj->ops.release = cfi_release;
20482 + cfiobj->ops.ep_enable = cfi_ep_enable;
20483 + cfiobj->ops.ctrl_write_complete = cfi_ctrl_write_complete;
20484 + cfiobj->ops.build_descriptors = cfi_build_descriptors;
20485 + cfiobj->ops.ep_alloc_buf = cfi_ep_alloc_buf;
20486 +
20487 + /* Initialize the list of active endpoints in the CFI object */
20488 + DWC_LIST_INIT(&cfiobj->active_eps);
20489 +
20490 + return 0;
20491 +}
20492 +
20493 +/**
20494 + * This function reads the required feature's current value into the buffer
20495 + *
20496 + * @retval: Returns negative as error, or the data length of the feature
20497 + */
20498 +static int cfi_get_feature_value(uint8_t * buf, uint16_t buflen,
20499 + struct dwc_otg_pcd *pcd,
20500 + struct cfi_usb_ctrlrequest *ctrl_req)
20501 +{
20502 + int retval = -DWC_E_NOT_SUPPORTED;
20503 + struct dwc_otg_core_if *coreif = GET_CORE_IF(pcd);
20504 + uint16_t dfifo, rxfifo, txfifo;
20505 +
20506 + switch (ctrl_req->wIndex) {
20507 + /* Whether the DDMA is enabled or not */
20508 + case FT_ID_DMA_MODE:
20509 + *buf = (coreif->dma_enable && coreif->dma_desc_enable) ? 1 : 0;
20510 + retval = 1;
20511 + break;
20512 +
20513 + case FT_ID_DMA_BUFFER_SETUP:
20514 + retval = cfi_ep_get_sg_val(buf, pcd, ctrl_req);
20515 + break;
20516 +
20517 + case FT_ID_DMA_BUFF_ALIGN:
20518 + retval = cfi_ep_get_align_val(buf, pcd, ctrl_req);
20519 + break;
20520 +
20521 + case FT_ID_DMA_CONCAT_SETUP:
20522 + retval = cfi_ep_get_concat_val(buf, pcd, ctrl_req);
20523 + break;
20524 +
20525 + case FT_ID_DMA_CIRCULAR:
20526 + CFI_INFO("GetFeature value (FT_ID_DMA_CIRCULAR)\n");
20527 + break;
20528 +
20529 + case FT_ID_THRESHOLD_SETUP:
20530 + CFI_INFO("GetFeature value (FT_ID_THRESHOLD_SETUP)\n");
20531 + break;
20532 +
20533 + case FT_ID_DFIFO_DEPTH:
20534 + dfifo = get_dfifo_size(coreif);
20535 + *((uint16_t *) buf) = dfifo;
20536 + retval = sizeof(uint16_t);
20537 + break;
20538 +
20539 + case FT_ID_TX_FIFO_DEPTH:
20540 + retval = get_txfifo_size(pcd, ctrl_req->wValue);
20541 + if (retval >= 0) {
20542 + txfifo = retval;
20543 + *((uint16_t *) buf) = txfifo;
20544 + retval = sizeof(uint16_t);
20545 + }
20546 + break;
20547 +
20548 + case FT_ID_RX_FIFO_DEPTH:
20549 + retval = get_rxfifo_size(coreif, ctrl_req->wValue);
20550 + if (retval >= 0) {
20551 + rxfifo = retval;
20552 + *((uint16_t *) buf) = rxfifo;
20553 + retval = sizeof(uint16_t);
20554 + }
20555 + break;
20556 + }
20557 +
20558 + return retval;
20559 +}
20560 +
20561 +/**
20562 + * This function resets the SG for the specified EP to its default value
20563 + */
20564 +static int cfi_reset_sg_val(cfi_ep_t * cfiep)
20565 +{
20566 + dwc_memset(cfiep->bm_sg, 0, sizeof(ddma_sg_buffer_setup_t));
20567 + return 0;
20568 +}
20569 +
20570 +/**
20571 + * This function resets the Alignment for the specified EP to its default value
20572 + */
20573 +static int cfi_reset_align_val(cfi_ep_t * cfiep)
20574 +{
20575 + dwc_memset(cfiep->bm_align, 0, sizeof(ddma_align_buffer_setup_t));
20576 + return 0;
20577 +}
20578 +
20579 +/**
20580 + * This function resets the Concatenation for the specified EP to its default value
20581 + * This function will also set the value of the wTxBytes field to NULL after
20582 + * freeing the memory previously allocated for this field.
20583 + */
20584 +static int cfi_reset_concat_val(cfi_ep_t * cfiep)
20585 +{
20586 + /* First we need to free the wTxBytes field */
20587 + if (cfiep->bm_concat->wTxBytes) {
20588 + DWC_FREE(cfiep->bm_concat->wTxBytes);
20589 + cfiep->bm_concat->wTxBytes = NULL;
20590 + }
20591 +
20592 + dwc_memset(cfiep->bm_concat, 0, sizeof(ddma_concat_buffer_setup_t));
20593 + return 0;
20594 +}
20595 +
20596 +/**
20597 + * This function resets all the buffer setups of the specified endpoint
20598 + */
20599 +static int cfi_ep_reset_all_setup_vals(cfi_ep_t * cfiep)
20600 +{
20601 + cfi_reset_sg_val(cfiep);
20602 + cfi_reset_align_val(cfiep);
20603 + cfi_reset_concat_val(cfiep);
20604 + return 0;
20605 +}
20606 +
20607 +static int cfi_handle_reset_fifo_val(struct dwc_otg_pcd *pcd, uint8_t ep_addr,
20608 + uint8_t rx_rst, uint8_t tx_rst)
20609 +{
20610 + int retval = -DWC_E_INVALID;
20611 + uint16_t tx_siz[15];
20612 + uint16_t rx_siz = 0;
20613 + dwc_otg_pcd_ep_t *ep = NULL;
20614 + dwc_otg_core_if_t *core_if = GET_CORE_IF(pcd);
20615 + dwc_otg_core_params_t *params = GET_CORE_IF(pcd)->core_params;
20616 +
20617 + if (rx_rst) {
20618 + rx_siz = params->dev_rx_fifo_size;
20619 + params->dev_rx_fifo_size = GET_CORE_IF(pcd)->init_rxfsiz;
20620 + }
20621 +
20622 + if (tx_rst) {
20623 + if (ep_addr == 0) {
20624 + int i;
20625 +
20626 + for (i = 0; i < core_if->hwcfg4.b.num_in_eps; i++) {
20627 + tx_siz[i] =
20628 + core_if->core_params->dev_tx_fifo_size[i];
20629 + core_if->core_params->dev_tx_fifo_size[i] =
20630 + core_if->init_txfsiz[i];
20631 + }
20632 + } else {
20633 +
20634 + ep = get_ep_by_addr(pcd, ep_addr);
20635 +
20636 + if (NULL == ep) {
20637 + CFI_INFO
20638 + ("%s: Unable to get the endpoint addr=0x%02x\n",
20639 + __func__, ep_addr);
20640 + return -DWC_E_INVALID;
20641 + }
20642 +
20643 + tx_siz[0] =
20644 + params->dev_tx_fifo_size[ep->dwc_ep.tx_fifo_num -
20645 + 1];
20646 + params->dev_tx_fifo_size[ep->dwc_ep.tx_fifo_num - 1] =
20647 + GET_CORE_IF(pcd)->init_txfsiz[ep->
20648 + dwc_ep.tx_fifo_num -
20649 + 1];
20650 + }
20651 + }
20652 +
20653 + if (resize_fifos(GET_CORE_IF(pcd))) {
20654 + retval = 0;
20655 + } else {
20656 + CFI_INFO
20657 + ("%s: Error resetting the feature Reset All(FIFO size)\n",
20658 + __func__);
20659 + if (rx_rst) {
20660 + params->dev_rx_fifo_size = rx_siz;
20661 + }
20662 +
20663 + if (tx_rst) {
20664 + if (ep_addr == 0) {
20665 + int i;
20666 + for (i = 0; i < core_if->hwcfg4.b.num_in_eps;
20667 + i++) {
20668 + core_if->
20669 + core_params->dev_tx_fifo_size[i] =
20670 + tx_siz[i];
20671 + }
20672 + } else {
20673 + params->dev_tx_fifo_size[ep->
20674 + dwc_ep.tx_fifo_num -
20675 + 1] = tx_siz[0];
20676 + }
20677 + }
20678 + retval = -DWC_E_INVALID;
20679 + }
20680 + return retval;
20681 +}
20682 +
20683 +static int cfi_handle_reset_all(struct dwc_otg_pcd *pcd, uint8_t addr)
20684 +{
20685 + int retval = 0;
20686 + cfi_ep_t *cfiep;
20687 + cfiobject_t *cfi = pcd->cfi;
20688 + dwc_list_link_t *tmp;
20689 +
20690 + retval = cfi_handle_reset_fifo_val(pcd, addr, 1, 1);
20691 + if (retval < 0) {
20692 + return retval;
20693 + }
20694 +
20695 + /* If the EP address is known then reset the features for only that EP */
20696 + if (addr) {
20697 + cfiep = get_cfi_ep_by_addr(pcd->cfi, addr);
20698 + if (NULL == cfiep) {
20699 + CFI_INFO("%s: Error getting the EP address 0x%02x\n",
20700 + __func__, addr);
20701 + return -DWC_E_INVALID;
20702 + }
20703 + retval = cfi_ep_reset_all_setup_vals(cfiep);
20704 + cfiep->ep->dwc_ep.buff_mode = BM_STANDARD;
20705 + }
20706 + /* Otherwise (wValue == 0), reset all features of all EP's */
20707 + else {
20708 + /* Traverse all the active EP's and reset the feature(s) value(s) */
20709 + //list_for_each_entry(cfiep, &cfi->active_eps, lh) {
20710 + DWC_LIST_FOREACH(tmp, &cfi->active_eps) {
20711 + cfiep = DWC_LIST_ENTRY(tmp, struct cfi_ep, lh);
20712 + retval = cfi_ep_reset_all_setup_vals(cfiep);
20713 + cfiep->ep->dwc_ep.buff_mode = BM_STANDARD;
20714 + if (retval < 0) {
20715 + CFI_INFO
20716 + ("%s: Error resetting the feature Reset All\n",
20717 + __func__);
20718 + return retval;
20719 + }
20720 + }
20721 + }
20722 + return retval;
20723 +}
20724 +
20725 +static int cfi_handle_reset_dma_buff_setup(struct dwc_otg_pcd *pcd,
20726 + uint8_t addr)
20727 +{
20728 + int retval = 0;
20729 + cfi_ep_t *cfiep;
20730 + cfiobject_t *cfi = pcd->cfi;
20731 + dwc_list_link_t *tmp;
20732 +
20733 + /* If the EP address is known then reset the features for only that EP */
20734 + if (addr) {
20735 + cfiep = get_cfi_ep_by_addr(pcd->cfi, addr);
20736 + if (NULL == cfiep) {
20737 + CFI_INFO("%s: Error getting the EP address 0x%02x\n",
20738 + __func__, addr);
20739 + return -DWC_E_INVALID;
20740 + }
20741 + retval = cfi_reset_sg_val(cfiep);
20742 + }
20743 + /* Otherwise (wValue == 0), reset all features of all EP's */
20744 + else {
20745 + /* Traverse all the active EP's and reset the feature(s) value(s) */
20746 + //list_for_each_entry(cfiep, &cfi->active_eps, lh) {
20747 + DWC_LIST_FOREACH(tmp, &cfi->active_eps) {
20748 + cfiep = DWC_LIST_ENTRY(tmp, struct cfi_ep, lh);
20749 + retval = cfi_reset_sg_val(cfiep);
20750 + if (retval < 0) {
20751 + CFI_INFO
20752 + ("%s: Error resetting the feature Buffer Setup\n",
20753 + __func__);
20754 + return retval;
20755 + }
20756 + }
20757 + }
20758 + return retval;
20759 +}
20760 +
20761 +static int cfi_handle_reset_concat_val(struct dwc_otg_pcd *pcd, uint8_t addr)
20762 +{
20763 + int retval = 0;
20764 + cfi_ep_t *cfiep;
20765 + cfiobject_t *cfi = pcd->cfi;
20766 + dwc_list_link_t *tmp;
20767 +
20768 + /* If the EP address is known then reset the features for only that EP */
20769 + if (addr) {
20770 + cfiep = get_cfi_ep_by_addr(pcd->cfi, addr);
20771 + if (NULL == cfiep) {
20772 + CFI_INFO("%s: Error getting the EP address 0x%02x\n",
20773 + __func__, addr);
20774 + return -DWC_E_INVALID;
20775 + }
20776 + retval = cfi_reset_concat_val(cfiep);
20777 + }
20778 + /* Otherwise (wValue == 0), reset all features of all EP's */
20779 + else {
20780 + /* Traverse all the active EP's and reset the feature(s) value(s) */
20781 + //list_for_each_entry(cfiep, &cfi->active_eps, lh) {
20782 + DWC_LIST_FOREACH(tmp, &cfi->active_eps) {
20783 + cfiep = DWC_LIST_ENTRY(tmp, struct cfi_ep, lh);
20784 + retval = cfi_reset_concat_val(cfiep);
20785 + if (retval < 0) {
20786 + CFI_INFO
20787 + ("%s: Error resetting the feature Concatenation Value\n",
20788 + __func__);
20789 + return retval;
20790 + }
20791 + }
20792 + }
20793 + return retval;
20794 +}
20795 +
20796 +static int cfi_handle_reset_align_val(struct dwc_otg_pcd *pcd, uint8_t addr)
20797 +{
20798 + int retval = 0;
20799 + cfi_ep_t *cfiep;
20800 + cfiobject_t *cfi = pcd->cfi;
20801 + dwc_list_link_t *tmp;
20802 +
20803 + /* If the EP address is known then reset the features for only that EP */
20804 + if (addr) {
20805 + cfiep = get_cfi_ep_by_addr(pcd->cfi, addr);
20806 + if (NULL == cfiep) {
20807 + CFI_INFO("%s: Error getting the EP address 0x%02x\n",
20808 + __func__, addr);
20809 + return -DWC_E_INVALID;
20810 + }
20811 + retval = cfi_reset_align_val(cfiep);
20812 + }
20813 + /* Otherwise (wValue == 0), reset all features of all EP's */
20814 + else {
20815 + /* Traverse all the active EP's and reset the feature(s) value(s) */
20816 + //list_for_each_entry(cfiep, &cfi->active_eps, lh) {
20817 + DWC_LIST_FOREACH(tmp, &cfi->active_eps) {
20818 + cfiep = DWC_LIST_ENTRY(tmp, struct cfi_ep, lh);
20819 + retval = cfi_reset_align_val(cfiep);
20820 + if (retval < 0) {
20821 + CFI_INFO
20822 + ("%s: Error resetting the feature Aliignment Value\n",
20823 + __func__);
20824 + return retval;
20825 + }
20826 + }
20827 + }
20828 + return retval;
20829 +
20830 +}
20831 +
20832 +static int cfi_preproc_reset(struct dwc_otg_pcd *pcd,
20833 + struct cfi_usb_ctrlrequest *req)
20834 +{
20835 + int retval = 0;
20836 +
20837 + switch (req->wIndex) {
20838 + case 0:
20839 + /* Reset all features */
20840 + retval = cfi_handle_reset_all(pcd, req->wValue & 0xff);
20841 + break;
20842 +
20843 + case FT_ID_DMA_BUFFER_SETUP:
20844 + /* Reset the SG buffer setup */
20845 + retval =
20846 + cfi_handle_reset_dma_buff_setup(pcd, req->wValue & 0xff);
20847 + break;
20848 +
20849 + case FT_ID_DMA_CONCAT_SETUP:
20850 + /* Reset the Concatenation buffer setup */
20851 + retval = cfi_handle_reset_concat_val(pcd, req->wValue & 0xff);
20852 + break;
20853 +
20854 + case FT_ID_DMA_BUFF_ALIGN:
20855 + /* Reset the Alignment buffer setup */
20856 + retval = cfi_handle_reset_align_val(pcd, req->wValue & 0xff);
20857 + break;
20858 +
20859 + case FT_ID_TX_FIFO_DEPTH:
20860 + retval =
20861 + cfi_handle_reset_fifo_val(pcd, req->wValue & 0xff, 0, 1);
20862 + pcd->cfi->need_gadget_att = 0;
20863 + break;
20864 +
20865 + case FT_ID_RX_FIFO_DEPTH:
20866 + retval = cfi_handle_reset_fifo_val(pcd, 0, 1, 0);
20867 + pcd->cfi->need_gadget_att = 0;
20868 + break;
20869 + default:
20870 + break;
20871 + }
20872 + return retval;
20873 +}
20874 +
20875 +/**
20876 + * This function sets a new value for the SG buffer setup.
20877 + */
20878 +static int cfi_ep_set_sg_val(uint8_t * buf, struct dwc_otg_pcd *pcd)
20879 +{
20880 + uint8_t inaddr, outaddr;
20881 + cfi_ep_t *epin, *epout;
20882 + ddma_sg_buffer_setup_t *psgval;
20883 + uint32_t desccount, size;
20884 +
20885 + CFI_INFO("%s\n", __func__);
20886 +
20887 + psgval = (ddma_sg_buffer_setup_t *) buf;
20888 + desccount = (uint32_t) psgval->bCount;
20889 + size = (uint32_t) psgval->wSize;
20890 +
20891 + /* Check the DMA descriptor count */
20892 + if ((desccount > MAX_DMA_DESCS_PER_EP) || (desccount == 0)) {
20893 + CFI_INFO
20894 + ("%s: The count of DMA Descriptors should be between 1 and %d\n",
20895 + __func__, MAX_DMA_DESCS_PER_EP);
20896 + return -DWC_E_INVALID;
20897 + }
20898 +
20899 + /* Check the transfer size */
20900 + if (size == 0) {
20901 + CFI_INFO("%s: The transfer size should be at least 1 byte\n",
20902 + __func__);
20903 + return -DWC_E_INVALID;
20904 + }
20909 +
20910 + inaddr = psgval->bInEndpointAddress;
20911 + outaddr = psgval->bOutEndpointAddress;
20912 +
20913 + epin = get_cfi_ep_by_addr(pcd->cfi, inaddr);
20914 + epout = get_cfi_ep_by_addr(pcd->cfi, outaddr);
20915 +
20916 + if (NULL == epin || NULL == epout) {
20917 + CFI_INFO
20918 + ("%s: Unable to get the endpoints inaddr=0x%02x outaddr=0x%02x\n",
20919 + __func__, inaddr, outaddr);
20920 + return -DWC_E_INVALID;
20921 + }
20922 +
20923 + epin->ep->dwc_ep.buff_mode = BM_SG;
20924 + dwc_memcpy(epin->bm_sg, psgval, sizeof(ddma_sg_buffer_setup_t));
20925 +
20926 + epout->ep->dwc_ep.buff_mode = BM_SG;
20927 + dwc_memcpy(epout->bm_sg, psgval, sizeof(ddma_sg_buffer_setup_t));
20928 +
20929 + return 0;
20930 +}
20931 +
20932 +/**
20933 + * This function sets a new value for the buffer Alignment setup.
20934 + */
20935 +static int cfi_ep_set_alignment_val(uint8_t * buf, struct dwc_otg_pcd *pcd)
20936 +{
20937 + cfi_ep_t *ep;
20938 + uint8_t addr;
20939 + ddma_align_buffer_setup_t *palignval;
20940 +
20941 + palignval = (ddma_align_buffer_setup_t *) buf;
20942 + addr = palignval->bEndpointAddress;
20943 +
20944 + ep = get_cfi_ep_by_addr(pcd->cfi, addr);
20945 +
20946 + if (NULL == ep) {
20947 + CFI_INFO("%s: Unable to get the endpoint addr=0x%02x\n",
20948 + __func__, addr);
20949 + return -DWC_E_INVALID;
20950 + }
20951 +
20952 + ep->ep->dwc_ep.buff_mode = BM_ALIGN;
20953 + dwc_memcpy(ep->bm_align, palignval, sizeof(ddma_align_buffer_setup_t));
20954 +
20955 + return 0;
20956 +}
20957 +
20958 +/**
20959 + * This function sets a new value for the Concatenation buffer setup.
20960 + */
20961 +static int cfi_ep_set_concat_val(uint8_t * buf, struct dwc_otg_pcd *pcd)
20962 +{
20963 + uint8_t addr;
20964 + cfi_ep_t *ep;
20965 + struct _ddma_concat_buffer_setup_hdr *pConcatValHdr;
20966 + uint16_t *pVals;
20967 + uint32_t desccount;
20968 + int i;
20969 + uint16_t mps;
20970 +
20971 + pConcatValHdr = (struct _ddma_concat_buffer_setup_hdr *)buf;
20972 + desccount = (uint32_t) pConcatValHdr->bDescCount;
20973 + pVals = (uint16_t *) (buf + BS_CONCAT_VAL_HDR_LEN);
20974 +
20975 + /* Check the DMA descriptor count */
20976 + if (desccount > MAX_DMA_DESCS_PER_EP) {
20977 + CFI_INFO("%s: Maximum DMA Descriptor count should be %d\n",
20978 + __func__, MAX_DMA_DESCS_PER_EP);
20979 + return -DWC_E_INVALID;
20980 + }
20981 +
20982 + addr = pConcatValHdr->bEndpointAddress;
20983 + ep = get_cfi_ep_by_addr(pcd->cfi, addr);
20984 + if (NULL == ep) {
20985 + CFI_INFO("%s: Unable to get the endpoint addr=0x%02x\n",
20986 + __func__, addr);
20987 + return -DWC_E_INVALID;
20988 + }
20989 +
20990 + mps = UGETW(ep->ep->desc->wMaxPacketSize);
20991 +
20992 +#if 0
20993 + for (i = 0; i < desccount; i++) {
20994 + CFI_INFO("%s: wTxSize[%d]=0x%04x\n", __func__, i, pVals[i]);
20995 + }
20996 + CFI_INFO("%s: epname=%s; mps=%d\n", __func__, ep->ep->ep.name, mps);
20997 +#endif
20998 +
20999 + /* Check that each wTxSize is less than or equal to the MPS */
21000 + for (i = 0; i < desccount; i++) {
21001 + if (pVals[i] > mps) {
21002 + CFI_INFO
21003 + ("%s: ERROR - the wTxSize[%d] should be <= MPS (wTxSize=%d)\n",
21004 + __func__, i, pVals[i]);
21005 + return -DWC_E_INVALID;
21006 + }
21007 + }
21008 +
21009 + ep->ep->dwc_ep.buff_mode = BM_CONCAT;
21010 + dwc_memcpy(ep->bm_concat, pConcatValHdr, BS_CONCAT_VAL_HDR_LEN);
21011 +
21012 + /* Free the previously allocated storage for the wTxBytes */
21013 + if (ep->bm_concat->wTxBytes) {
21014 + DWC_FREE(ep->bm_concat->wTxBytes);
21015 + }
21016 +
21017 + /* Allocate a new storage for the wTxBytes field */
21018 + ep->bm_concat->wTxBytes =
21019 + DWC_ALLOC(sizeof(uint16_t) * pConcatValHdr->bDescCount);
21020 + if (NULL == ep->bm_concat->wTxBytes) {
21021 + CFI_INFO("%s: Unable to allocate memory\n", __func__);
21022 + return -DWC_E_NO_MEMORY;
21023 + }
21024 +
21025 + /* Copy the new values into the wTxBytes field */
21026 + dwc_memcpy(ep->bm_concat->wTxBytes, buf + BS_CONCAT_VAL_HDR_LEN,
21027 + sizeof(uint16_t) * pConcatValHdr->bDescCount);
21028 +
21029 + return 0;
21030 +}
21031 +
21032 +/**
21033 + * This function calculates the total of all FIFO sizes
21034 + *
21035 + * @param core_if Programming view of DWC_otg controller
21036 + *
21037 + * @return The total of data FIFO sizes.
21038 + *
21039 + */
21040 +static uint16_t get_dfifo_size(dwc_otg_core_if_t * core_if)
21041 +{
21042 + dwc_otg_core_params_t *params = core_if->core_params;
21043 + uint16_t dfifo_total = 0;
21044 + int i;
21045 +
21046 + /* The shared RxFIFO size */
21047 + dfifo_total =
21048 + params->dev_rx_fifo_size + params->dev_nperio_tx_fifo_size;
21049 +
21050 + /* Add up each TxFIFO size to the total */
21051 + for (i = 0; i < core_if->hwcfg4.b.num_in_eps; i++) {
21052 + dfifo_total += params->dev_tx_fifo_size[i];
21053 + }
21054 +
21055 + return dfifo_total;
21056 +}
21057 +
21058 +/**
21059 + * This function returns Rx FIFO size
21060 + *
21061 + * @param core_if Programming view of DWC_otg controller
21062 + *
21063 + * @return The Rx FIFO depth, or a negative error code.
21064 + *
21065 + */
21066 +static int32_t get_rxfifo_size(dwc_otg_core_if_t * core_if, uint16_t wValue)
21067 +{
21068 + switch (wValue >> 8) {
21069 + case 0:
21070 + return (core_if->pwron_rxfsiz <
21071 + 32768) ? core_if->pwron_rxfsiz : 32768;
21072 + break;
21073 + case 1:
21074 + return core_if->core_params->dev_rx_fifo_size;
21075 + break;
21076 + default:
21077 + return -DWC_E_INVALID;
21078 + break;
21079 + }
21080 +}
21081 +
21082 +/**
21083 + * This function returns the Tx FIFO size for an IN EP
21084 + *
21085 + * @param pcd Programming view of the PCD
21086 + *
21087 + * @return The Tx FIFO depth for the IN endpoint, or a negative error code.
21088 + *
21089 + */
21090 +static int32_t get_txfifo_size(struct dwc_otg_pcd *pcd, uint16_t wValue)
21091 +{
21092 + dwc_otg_pcd_ep_t *ep;
21093 +
21094 + ep = get_ep_by_addr(pcd, wValue & 0xff);
21095 +
21096 + if (NULL == ep) {
21097 + CFI_INFO("%s: Unable to get the endpoint addr=0x%02x\n",
21098 + __func__, wValue & 0xff);
21099 + return -DWC_E_INVALID;
21100 + }
21101 +
21102 + if (!ep->dwc_ep.is_in) {
21103 + CFI_INFO
21104 + ("%s: No Tx FIFO assingned to the Out endpoint addr=0x%02x\n",
21105 + __func__, wValue & 0xff);
21106 + return -DWC_E_INVALID;
21107 + }
21108 +
21109 + switch (wValue >> 8) {
21110 + case 0:
21111 + return (GET_CORE_IF(pcd)->pwron_txfsiz
21112 + [ep->dwc_ep.tx_fifo_num - 1] <
21113 + 768) ? GET_CORE_IF(pcd)->pwron_txfsiz[ep->
21114 + dwc_ep.tx_fifo_num
21115 + - 1] : 32768;
21116 + break;
21117 + case 1:
21118 + return GET_CORE_IF(pcd)->core_params->
21119 + dev_tx_fifo_size[ep->dwc_ep.num - 1];
21120 + break;
21121 + default:
21122 + return -DWC_E_INVALID;
21123 + break;
21124 + }
21125 +}
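+
+/*
+ * Illustrative note on the wValue encoding used by the two FIFO-depth getters
+ * above: the high byte selects which value is reported (0 = the power-on
+ * hardware depth, 1 = the currently programmed depth, anything else is
+ * rejected with -DWC_E_INVALID), and for the Tx case the low byte selects the
+ * IN endpoint address.  For example, wValue = 0x0181 would request the
+ * currently programmed Tx FIFO depth of endpoint 0x81.
+ */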
21126 +
21127 +/**
21128 + * This function checks if the submitted combination of
21129 + * device mode FIFO sizes is possible or not.
21130 + *
21131 + * @param core_if Programming view of DWC_otg controller
21132 + *
21133 + * @return 1 if possible, 0 otherwise.
21134 + *
21135 + */
21136 +static uint8_t check_fifo_sizes(dwc_otg_core_if_t * core_if)
21137 +{
21138 + uint16_t dfifo_actual = 0;
21139 + dwc_otg_core_params_t *params = core_if->core_params;
21140 + uint16_t start_addr = 0;
21141 + int i;
21142 +
21143 + dfifo_actual =
21144 + params->dev_rx_fifo_size + params->dev_nperio_tx_fifo_size;
21145 +
21146 + for (i = 0; i < core_if->hwcfg4.b.num_in_eps; i++) {
21147 + dfifo_actual += params->dev_tx_fifo_size[i];
21148 + }
21149 +
21150 + if (dfifo_actual > core_if->total_fifo_size) {
21151 + return 0;
21152 + }
21153 +
21154 + if (params->dev_rx_fifo_size > 32768 || params->dev_rx_fifo_size < 16)
21155 + return 0;
21156 +
21157 + if (params->dev_nperio_tx_fifo_size > 32768
21158 + || params->dev_nperio_tx_fifo_size < 16)
21159 + return 0;
21160 +
21161 + for (i = 0; i < core_if->hwcfg4.b.num_in_eps; i++) {
21162 +
21163 + if (params->dev_tx_fifo_size[i] > 768
21164 + || params->dev_tx_fifo_size[i] < 4)
21165 + return 0;
21166 + }
21167 +
21168 + if (params->dev_rx_fifo_size > core_if->pwron_rxfsiz)
21169 + return 0;
21170 + start_addr = params->dev_rx_fifo_size;
21171 +
21172 + if (params->dev_nperio_tx_fifo_size > core_if->pwron_gnptxfsiz)
21173 + return 0;
21174 + start_addr += params->dev_nperio_tx_fifo_size;
21175 +
21176 + for (i = 0; i < core_if->hwcfg4.b.num_in_eps; i++) {
21177 +
21178 + if (params->dev_tx_fifo_size[i] > core_if->pwron_txfsiz[i])
21179 + return 0;
21180 + start_addr += params->dev_tx_fifo_size[i];
21181 + }
21182 +
21183 + return 1;
21184 +}
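+
+/*
+ * Example of the budget check performed by check_fifo_sizes(), with assumed
+ * sizes: a shared Rx FIFO of 1024, a non-periodic Tx FIFO of 1024 and three
+ * IN-endpoint Tx FIFOs of 512 each add up to 1024 + 1024 + 3*512 = 3584,
+ * which must not exceed core_if->total_fifo_size.  On top of that, every
+ * individual FIFO is bounded by its power-on (pwron_*) depth and by the fixed
+ * minimum/maximum limits checked above.
+ */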
21185 +
21186 +/**
21187 + * This function resizes Device mode FIFOs
21188 + *
21189 + * @param core_if Programming view of DWC_otg controller
21190 + *
21191 + * @return 1 if successful, 0 otherwise
21192 + *
21193 + */
21194 +static uint8_t resize_fifos(dwc_otg_core_if_t * core_if)
21195 +{
21196 + int i = 0;
21197 + dwc_otg_core_global_regs_t *global_regs = core_if->core_global_regs;
21198 + dwc_otg_core_params_t *params = core_if->core_params;
21199 + uint32_t rx_fifo_size;
21200 + fifosize_data_t nptxfifosize;
21201 + fifosize_data_t txfifosize[15];
21202 +
21203 + uint32_t rx_fsz_bak;
21204 + uint32_t nptxfsz_bak;
21205 + uint32_t txfsz_bak[15];
21206 +
21207 + uint16_t start_address;
21208 + uint8_t retval = 1;
21209 +
21210 + if (!check_fifo_sizes(core_if)) {
21211 + return 0;
21212 + }
21213 +
21214 + /* Configure data FIFO sizes */
21215 + if (core_if->hwcfg2.b.dynamic_fifo && params->enable_dynamic_fifo) {
21216 + rx_fsz_bak = DWC_READ_REG32(&global_regs->grxfsiz);
21217 + rx_fifo_size = params->dev_rx_fifo_size;
21218 + DWC_WRITE_REG32(&global_regs->grxfsiz, rx_fifo_size);
21219 +
21220 + /*
21221 + * Tx FIFOs These FIFOs are numbered from 1 to 15.
21222 + * Indexes of the FIFO size module parameters in the
21223 + * dev_tx_fifo_size array and the FIFO size registers in
21224 + * the dtxfsiz array run from 0 to 14.
21225 + */
21226 +
21227 + /* Non-periodic Tx FIFO */
21228 + nptxfsz_bak = DWC_READ_REG32(&global_regs->gnptxfsiz);
21229 + nptxfifosize.b.depth = params->dev_nperio_tx_fifo_size;
21230 + start_address = params->dev_rx_fifo_size;
21231 + nptxfifosize.b.startaddr = start_address;
21232 +
21233 + DWC_WRITE_REG32(&global_regs->gnptxfsiz, nptxfifosize.d32);
21234 +
21235 + start_address += nptxfifosize.b.depth;
21236 +
21237 + for (i = 0; i < core_if->hwcfg4.b.num_in_eps; i++) {
21238 + txfsz_bak[i] = DWC_READ_REG32(&global_regs->dtxfsiz[i]);
21239 +
21240 + txfifosize[i].b.depth = params->dev_tx_fifo_size[i];
21241 + txfifosize[i].b.startaddr = start_address;
21242 + DWC_WRITE_REG32(&global_regs->dtxfsiz[i],
21243 + txfifosize[i].d32);
21244 +
21245 + start_address += txfifosize[i].b.depth;
21246 + }
21247 +
21248 + /** Check if register values are set correctly */
21249 + if (rx_fifo_size != DWC_READ_REG32(&global_regs->grxfsiz)) {
21250 + retval = 0;
21251 + }
21252 +
21253 + if (nptxfifosize.d32 != DWC_READ_REG32(&global_regs->gnptxfsiz)) {
21254 + retval = 0;
21255 + }
21256 +
21257 + for (i = 0; i < core_if->hwcfg4.b.num_in_eps; i++) {
21258 + if (txfifosize[i].d32 !=
21259 + DWC_READ_REG32(&global_regs->dtxfsiz[i])) {
21260 + retval = 0;
21261 + }
21262 + }
21263 +
21264 + /** If register values are not set correctly, reset old values */
21265 + if (retval == 0) {
21266 + DWC_WRITE_REG32(&global_regs->grxfsiz, rx_fsz_bak);
21267 +
21268 + /* Non-periodic Tx FIFO */
21269 + DWC_WRITE_REG32(&global_regs->gnptxfsiz, nptxfsz_bak);
21270 +
21271 + for (i = 0; i < core_if->hwcfg4.b.num_in_eps; i++) {
21272 + DWC_WRITE_REG32(&global_regs->dtxfsiz[i],
21273 + txfsz_bak[i]);
21274 + }
21275 + }
21276 + } else {
21277 + return 0;
21278 + }
21279 +
21280 + /* Flush the FIFOs */
21281 + dwc_otg_flush_tx_fifo(core_if, 0x10); /* all Tx FIFOs */
21282 + dwc_otg_flush_rx_fifo(core_if);
21283 +
21284 + return retval;
21285 +}
21286 +
21287 +/**
21288 + * This function sets a new Tx FIFO depth for the specified IN endpoint.
21289 + */
21290 +static int cfi_ep_set_tx_fifo_val(uint8_t * buf, dwc_otg_pcd_t * pcd)
21291 +{
21292 + int retval;
21293 + uint32_t fsiz;
21294 + uint16_t size;
21295 + uint16_t ep_addr;
21296 + dwc_otg_pcd_ep_t *ep;
21297 + dwc_otg_core_params_t *params = GET_CORE_IF(pcd)->core_params;
21298 + tx_fifo_size_setup_t *ptxfifoval;
21299 +
21300 + ptxfifoval = (tx_fifo_size_setup_t *) buf;
21301 + ep_addr = ptxfifoval->bEndpointAddress;
21302 + size = ptxfifoval->wDepth;
21303 +
21304 + ep = get_ep_by_addr(pcd, ep_addr);
21305 +
21306 + if (NULL == ep) {
21307 + CFI_INFO("%s: Unable to get the endpoint addr=0x%02x\n",
21308 + __func__, ep_addr);
21309 + return -DWC_E_INVALID;
21310 + }
21311 +
21312 + CFI_INFO
21313 + ("%s: Set Tx FIFO size: endpoint addr=0x%02x, depth=%d, FIFO Num=%d\n",
21314 + __func__, ep_addr, size, ep->dwc_ep.tx_fifo_num);
21315 +
21316 + fsiz = params->dev_tx_fifo_size[ep->dwc_ep.tx_fifo_num - 1];
21317 + params->dev_tx_fifo_size[ep->dwc_ep.tx_fifo_num - 1] = size;
21318 +
21319 + if (resize_fifos(GET_CORE_IF(pcd))) {
21320 + retval = 0;
21321 + } else {
21322 + CFI_INFO
21323 + ("%s: Error setting the feature Tx FIFO Size for EP%d\n",
21324 + __func__, ep_addr);
21325 + params->dev_tx_fifo_size[ep->dwc_ep.tx_fifo_num - 1] = fsiz;
21326 + retval = -DWC_E_INVALID;
21327 + }
21328 +
21329 + return retval;
21330 +}
21331 +
21332 +/**
21333 + * This function sets a new Rx FIFO depth for the device.
21334 + */
21335 +static int cfi_set_rx_fifo_val(uint8_t * buf, dwc_otg_pcd_t * pcd)
21336 +{
21337 + int retval;
21338 + uint32_t fsiz;
21339 + uint16_t size;
21340 + dwc_otg_core_params_t *params = GET_CORE_IF(pcd)->core_params;
21341 + rx_fifo_size_setup_t *prxfifoval;
21342 +
21343 + prxfifoval = (rx_fifo_size_setup_t *) buf;
21344 + size = prxfifoval->wDepth;
21345 +
21346 + fsiz = params->dev_rx_fifo_size;
21347 + params->dev_rx_fifo_size = size;
21348 +
21349 + if (resize_fifos(GET_CORE_IF(pcd))) {
21350 + retval = 0;
21351 + } else {
21352 + CFI_INFO("%s: Error setting the feature Rx FIFO Size\n",
21353 + __func__);
21354 + params->dev_rx_fifo_size = fsiz;
21355 + retval = -DWC_E_INVALID;
21356 + }
21357 +
21358 + return retval;
21359 +}
21360 +
21361 +/**
21362 + * This function reads the SG buffer setup of an EP into the buffer <buf>
21363 + */
21364 +static int cfi_ep_get_sg_val(uint8_t * buf, struct dwc_otg_pcd *pcd,
21365 + struct cfi_usb_ctrlrequest *req)
21366 +{
21367 + int retval = -DWC_E_INVALID;
21368 + uint8_t addr;
21369 + cfi_ep_t *ep;
21370 +
21371 + /* The Low Byte of the wValue contains a non-zero address of the endpoint */
21372 + addr = req->wValue & 0xFF;
21373 + if (addr == 0) /* The address should be non-zero */
21374 + return retval;
21375 +
21376 + ep = get_cfi_ep_by_addr(pcd->cfi, addr);
21377 + if (NULL == ep) {
21378 + CFI_INFO("%s: Unable to get the endpoint address(0x%02x)\n",
21379 + __func__, addr);
21380 + return retval;
21381 + }
21382 +
21383 + dwc_memcpy(buf, ep->bm_sg, BS_SG_VAL_DESC_LEN);
21384 + retval = BS_SG_VAL_DESC_LEN;
21385 + return retval;
21386 +}
21387 +
21388 +/**
21389 + * This function reads the Concatenation value of an EP's buffer mode into
21390 + * the buffer buf
21391 + */
21392 +static int cfi_ep_get_concat_val(uint8_t * buf, struct dwc_otg_pcd *pcd,
21393 + struct cfi_usb_ctrlrequest *req)
21394 +{
21395 + int retval = -DWC_E_INVALID;
21396 + uint8_t addr;
21397 + cfi_ep_t *ep;
21398 + uint8_t desc_count;
21399 +
21400 + /* The Low Byte of the wValue contains a non-zero address of the endpoint */
21401 + addr = req->wValue & 0xFF;
21402 + if (addr == 0) /* The address should be non-zero */
21403 + return retval;
21404 +
21405 + ep = get_cfi_ep_by_addr(pcd->cfi, addr);
21406 + if (NULL == ep) {
21407 + CFI_INFO("%s: Unable to get the endpoint address(0x%02x)\n",
21408 + __func__, addr);
21409 + return retval;
21410 + }
21411 +
21412 + /* Copy the header to the buffer */
21413 + dwc_memcpy(buf, ep->bm_concat, BS_CONCAT_VAL_HDR_LEN);
21414 + /* Advance the buffer pointer by the header size */
21415 + buf += BS_CONCAT_VAL_HDR_LEN;
21416 +
21417 + desc_count = ep->bm_concat->hdr.bDescCount;
21418 + /* Copy all the wTxBytes to the buffer */
21419 + dwc_memcpy(buf, ep->bm_concat->wTxBytes, sizeof(uint16_t) * desc_count);
21420 +
21421 + retval = BS_CONCAT_VAL_HDR_LEN + sizeof(uint16_t) * desc_count;
21422 + return retval;
21423 +}
21424 +
21425 +/**
21426 + * This function reads the buffer Alignment value of an EP's buffer mode into
21427 + * the buffer buf
21428 + *
21429 + * @return The total number of bytes copied to the buffer or negative error code.
21430 + */
21431 +static int cfi_ep_get_align_val(uint8_t * buf, struct dwc_otg_pcd *pcd,
21432 + struct cfi_usb_ctrlrequest *req)
21433 +{
21434 + int retval = -DWC_E_INVALID;
21435 + uint8_t addr;
21436 + cfi_ep_t *ep;
21437 +
21438 + /* The Low Byte of the wValue contains a non-zero address of the endpoint */
21439 + addr = req->wValue & 0xFF;
21440 + if (addr == 0) /* The address should be non-zero */
21441 + return retval;
21442 +
21443 + ep = get_cfi_ep_by_addr(pcd->cfi, addr);
21444 + if (NULL == ep) {
21445 + CFI_INFO("%s: Unable to get the endpoint address(0x%02x)\n",
21446 + __func__, addr);
21447 + return retval;
21448 + }
21449 +
21450 + dwc_memcpy(buf, ep->bm_align, BS_ALIGN_VAL_HDR_LEN);
21451 + retval = BS_ALIGN_VAL_HDR_LEN;
21452 +
21453 + return retval;
21454 +}
21455 +
21456 +/**
21457 + * This function sets a new value for the specified feature
21458 + *
21459 + * @param pcd A pointer to the PCD object
21460 + *
21461 + * @return 0 if successful, negative error code otherwise to stall the DCE.
21462 + */
21463 +static int cfi_set_feature_value(struct dwc_otg_pcd *pcd)
21464 +{
21465 + int retval = -DWC_E_NOT_SUPPORTED;
21466 + uint16_t wIndex, wValue;
21467 + uint8_t bRequest;
21468 + struct dwc_otg_core_if *coreif;
21469 + cfiobject_t *cfi = pcd->cfi;
21470 + struct cfi_usb_ctrlrequest *ctrl_req;
21471 + uint8_t *buf;
21472 + ctrl_req = &cfi->ctrl_req;
21473 +
21474 + buf = pcd->cfi->ctrl_req.data;
21475 +
21476 + coreif = GET_CORE_IF(pcd);
21477 + bRequest = ctrl_req->bRequest;
21478 + wIndex = DWC_CONSTANT_CPU_TO_LE16(ctrl_req->wIndex);
21479 + wValue = DWC_CONSTANT_CPU_TO_LE16(ctrl_req->wValue);
21480 +
21481 + /* See which feature is to be modified */
21482 + switch (wIndex) {
21483 + case FT_ID_DMA_BUFFER_SETUP:
21484 + /* Modify the feature */
21485 + if ((retval = cfi_ep_set_sg_val(buf, pcd)) < 0)
21486 + return retval;
21487 +
21488 + /* And send this request to the gadget */
21489 + cfi->need_gadget_att = 1;
21490 + break;
21491 +
21492 + case FT_ID_DMA_BUFF_ALIGN:
21493 + if ((retval = cfi_ep_set_alignment_val(buf, pcd)) < 0)
21494 + return retval;
21495 + cfi->need_gadget_att = 1;
21496 + break;
21497 +
21498 + case FT_ID_DMA_CONCAT_SETUP:
21499 + /* Modify the feature */
21500 + if ((retval = cfi_ep_set_concat_val(buf, pcd)) < 0)
21501 + return retval;
21502 + cfi->need_gadget_att = 1;
21503 + break;
21504 +
21505 + case FT_ID_DMA_CIRCULAR:
21506 + CFI_INFO("FT_ID_DMA_CIRCULAR\n");
21507 + break;
21508 +
21509 + case FT_ID_THRESHOLD_SETUP:
21510 + CFI_INFO("FT_ID_THRESHOLD_SETUP\n");
21511 + break;
21512 +
21513 + case FT_ID_DFIFO_DEPTH:
21514 + CFI_INFO("FT_ID_DFIFO_DEPTH\n");
21515 + break;
21516 +
21517 + case FT_ID_TX_FIFO_DEPTH:
21518 + CFI_INFO("FT_ID_TX_FIFO_DEPTH\n");
21519 + if ((retval = cfi_ep_set_tx_fifo_val(buf, pcd)) < 0)
21520 + return retval;
21521 + cfi->need_gadget_att = 0;
21522 + break;
21523 +
21524 + case FT_ID_RX_FIFO_DEPTH:
21525 + CFI_INFO("FT_ID_RX_FIFO_DEPTH\n");
21526 + if ((retval = cfi_set_rx_fifo_val(buf, pcd)) < 0)
21527 + return retval;
21528 + cfi->need_gadget_att = 0;
21529 + break;
21530 + }
21531 +
21532 + return retval;
21533 +}
21534 +
21535 +#endif //DWC_UTE_CFI
21536 --- /dev/null
21537 +++ b/drivers/usb/host/dwc_otg/dwc_otg_cfi.h
21538 @@ -0,0 +1,320 @@
21539 +/* ==========================================================================
21540 + * Synopsys HS OTG Linux Software Driver and documentation (hereinafter,
21541 + * "Software") is an Unsupported proprietary work of Synopsys, Inc. unless
21542 + * otherwise expressly agreed to in writing between Synopsys and you.
21543 + *
21544 + * The Software IS NOT an item of Licensed Software or Licensed Product under
21545 + * any End User Software License Agreement or Agreement for Licensed Product
21546 + * with Synopsys or any supplement thereto. You are permitted to use and
21547 + * redistribute this Software in source and binary forms, with or without
21548 + * modification, provided that redistributions of source code must retain this
21549 + * notice. You may not view, use, disclose, copy or distribute this file or
21550 + * any information contained herein except pursuant to this license grant from
21551 + * Synopsys. If you do not agree with this notice, including the disclaimer
21552 + * below, then you are not authorized to use the Software.
21553 + *
21554 + * THIS SOFTWARE IS BEING DISTRIBUTED BY SYNOPSYS SOLELY ON AN "AS IS" BASIS
21555 + * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
21556 + * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
21557 + * ARE HEREBY DISCLAIMED. IN NO EVENT SHALL SYNOPSYS BE LIABLE FOR ANY DIRECT,
21558 + * INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
21559 + * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
21560 + * SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
21561 + * CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
21562 + * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
21563 + * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH
21564 + * DAMAGE.
21565 + * ========================================================================== */
21566 +
21567 +#if !defined(__DWC_OTG_CFI_H__)
21568 +#define __DWC_OTG_CFI_H__
21569 +
21570 +#include "dwc_otg_pcd.h"
21571 +#include "dwc_cfi_common.h"
21572 +
21573 +/**
21574 + * @file
21575 + * This file contains the CFI related OTG PCD specific common constants,
21576 + * interfaces (functions and macros) and data structures. The CFI Protocol is an
21577 + * optional interface for internal testing purposes that a DUT may implement to
21578 + * support testing of configurable features.
21579 + *
21580 + */
21581 +
21582 +struct dwc_otg_pcd;
21583 +struct dwc_otg_pcd_ep;
21584 +
21585 +/** OTG CFI Features (properties) ID constants */
21586 +/** This is a request for all Core Features */
21587 +#define FT_ID_DMA_MODE 0x0001
21588 +#define FT_ID_DMA_BUFFER_SETUP 0x0002
21589 +#define FT_ID_DMA_BUFF_ALIGN 0x0003
21590 +#define FT_ID_DMA_CONCAT_SETUP 0x0004
21591 +#define FT_ID_DMA_CIRCULAR 0x0005
21592 +#define FT_ID_THRESHOLD_SETUP 0x0006
21593 +#define FT_ID_DFIFO_DEPTH 0x0007
21594 +#define FT_ID_TX_FIFO_DEPTH 0x0008
21595 +#define FT_ID_RX_FIFO_DEPTH 0x0009
21596 +
21597 +/**********************************************************/
21598 +#define CFI_INFO_DEF
21599 +
21600 +#ifdef CFI_INFO_DEF
21601 +#define CFI_INFO(fmt...) DWC_PRINTF("CFI: " fmt);
21602 +#else
21603 +#define CFI_INFO(fmt...)
21604 +#endif
21605 +
21606 +#define min(x,y) ({ \
21607 + x < y ? x : y; })
21608 +
21609 +#define max(x,y) ({ \
21610 + x > y ? x : y; })
21611 +
21612 +/**
21613 + * Descriptor DMA SG Buffer setup structure (SG buffer). This structure is
21614 + * also used for setting up a buffer for Circular DDMA.
21615 + */
21616 +struct _ddma_sg_buffer_setup {
21617 +#define BS_SG_VAL_DESC_LEN 6
21618 + /* The OUT EP address */
21619 + uint8_t bOutEndpointAddress;
21620 + /* The IN EP address */
21621 + uint8_t bInEndpointAddress;
21622 + /* Number of bytes to put between transfer segments (must be DWORD boundaries) */
21623 + uint8_t bOffset;
21624 + /* The number of transfer segments (one DMA descriptor per segment) */
21625 + uint8_t bCount;
21626 + /* Size (in byte) of each transfer segment */
21627 + uint16_t wSize;
21628 +} __attribute__ ((packed));
21629 +typedef struct _ddma_sg_buffer_setup ddma_sg_buffer_setup_t;
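+
+/*
+ * Example of the 6-byte SG setup payload a host could send in the data stage
+ * of a SetFeature(FT_ID_DMA_BUFFER_SETUP) control write (values assumed):
+ *
+ *   bOutEndpointAddress = 0x01, bInEndpointAddress = 0x81,
+ *   bOffset = 0, bCount = 4, wSize = 512
+ *
+ * This asks both endpoints to split each request into four back-to-back
+ * 512-byte segments, one DMA descriptor per segment.
+ */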
21630 +
21631 +/** Descriptor DMA Concatenation Buffer setup structure */
21632 +struct _ddma_concat_buffer_setup_hdr {
21633 +#define BS_CONCAT_VAL_HDR_LEN 4
21634 + /* The endpoint for which the buffer is to be set up */
21635 + uint8_t bEndpointAddress;
21636 + /* The count of descriptors to be used */
21637 + uint8_t bDescCount;
21638 + /* The total size of the transfer */
21639 + uint16_t wSize;
21640 +} __attribute__ ((packed));
21641 +typedef struct _ddma_concat_buffer_setup_hdr ddma_concat_buffer_setup_hdr_t;
21642 +
21643 +/** Descriptor DMA Concatenation Buffer setup structure */
21644 +struct _ddma_concat_buffer_setup {
21645 + /* The SG header */
21646 + ddma_concat_buffer_setup_hdr_t hdr;
21647 +
21648 + /* The XFER sizes pointer (allocated dynamically) */
21649 + uint16_t *wTxBytes;
21650 +} __attribute__ ((packed));
21651 +typedef struct _ddma_concat_buffer_setup ddma_concat_buffer_setup_t;
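+
+/*
+ * Example Concatenation setup payload (values assumed): a 4-byte header
+ * { bEndpointAddress = 0x81, bDescCount = 3, wSize = 1124 } followed by three
+ * 16-bit wTxBytes entries { 512, 512, 100 }.  The PCD copies the header into
+ * bm_concat and allocates a separate bDescCount-sized array for the wTxBytes
+ * values (see cfi_ep_set_concat_val() in dwc_otg_cfi.c).
+ */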
21652 +
21653 +/** Descriptor DMA Alignment Buffer setup structure */
21654 +struct _ddma_align_buffer_setup {
21655 +#define BS_ALIGN_VAL_HDR_LEN 2
21656 + uint8_t bEndpointAddress;
21657 + uint8_t bAlign;
21658 +} __attribute__ ((packed));
21659 +typedef struct _ddma_align_buffer_setup ddma_align_buffer_setup_t;
21660 +
21661 +/** Transmit FIFO Size setup structure */
21662 +struct _tx_fifo_size_setup {
21663 + uint8_t bEndpointAddress;
21664 + uint16_t wDepth;
21665 +} __attribute__ ((packed));
21666 +typedef struct _tx_fifo_size_setup tx_fifo_size_setup_t;
21667 +
21668 +/** Receive FIFO Size setup structure */
21669 +struct _rx_fifo_size_setup {
21670 + uint16_t wDepth;
21671 +} __attribute__ ((packed));
21672 +typedef struct _rx_fifo_size_setup rx_fifo_size_setup_t;
21673 +
21674 +/**
21675 + * struct cfi_usb_ctrlrequest - the CFI implementation of the struct usb_ctrlrequest
21676 + * This structure encapsulates the standard usb_ctrlrequest and adds a pointer
21677 + * to the data returned in the data stage of a 3-stage Control Write request.
21678 + */
21679 +struct cfi_usb_ctrlrequest {
21680 + uint8_t bRequestType;
21681 + uint8_t bRequest;
21682 + uint16_t wValue;
21683 + uint16_t wIndex;
21684 + uint16_t wLength;
21685 + uint8_t *data;
21686 +} UPACKED;
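+
+/*
+ * Typical flow for a 3-stage Control Write using this wrapper (sketch): the
+ * setup packet is expected to be saved into cfiobject.ctrl_req during the
+ * command stage, the OUT data stage lands in buf_out, and
+ * cfi_ctrl_write_complete() then points ctrl_req.data at buf_out.buf before
+ * dispatching on bRequest, so the SetFeature handlers read the new value
+ * through ctrl_req->data.
+ */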
21687 +
21688 +/*---------------------------------------------------------------------------*/
21689 +
21690 +/**
21691 + * The CFI wrapper of the enabled and activated dwc_otg_pcd_ep structures.
21692 + * This structure is used to store the buffer setup data for any
21693 + * enabled endpoint in the PCD.
21694 + */
21695 +struct cfi_ep {
21696 + /* Entry for the list container */
21697 + dwc_list_link_t lh;
21698 + /* Pointer to the active PCD endpoint structure */
21699 + struct dwc_otg_pcd_ep *ep;
21700 + /* The last descriptor in the chain of DMA descriptors of the endpoint */
21701 + struct dwc_otg_dma_desc *dma_desc_last;
21702 + /* The SG feature value */
21703 + ddma_sg_buffer_setup_t *bm_sg;
21704 + /* The Circular feature value */
21705 + ddma_sg_buffer_setup_t *bm_circ;
21706 + /* The Concatenation feature value */
21707 + ddma_concat_buffer_setup_t *bm_concat;
21708 + /* The Alignment feature value */
21709 + ddma_align_buffer_setup_t *bm_align;
21710 + /* XFER length */
21711 + uint32_t xfer_len;
21712 + /*
21713 + * Count of DMA descriptors currently used.
21714 + * The total should not exceed the MAX_DMA_DESCS_PER_EP value
21715 + * defined in the dwc_otg_cil.h
21716 + */
21717 + uint32_t desc_count;
21718 +};
21719 +typedef struct cfi_ep cfi_ep_t;
21720 +
21721 +typedef struct cfi_dma_buff {
21722 +#define CFI_IN_BUF_LEN 1024
21723 +#define CFI_OUT_BUF_LEN 1024
21724 + dma_addr_t addr;
21725 + uint8_t *buf;
21726 +} cfi_dma_buff_t;
21727 +
21728 +struct cfiobject;
21729 +
21730 +/**
21731 + * This is the interface for the CFI operations.
21732 + *
21733 + * @param ep_enable Called when any endpoint is enabled and activated.
21734 + * @param release Called when the CFI object is released and it needs to correctly
21735 + * deallocate the dynamic memory
21736 + * @param ctrl_write_complete Called when the data stage of the request is complete
21737 + */
21738 +typedef struct cfi_ops {
21739 + int (*ep_enable) (struct cfiobject * cfi, struct dwc_otg_pcd * pcd,
21740 + struct dwc_otg_pcd_ep * ep);
21741 + void *(*ep_alloc_buf) (struct cfiobject * cfi, struct dwc_otg_pcd * pcd,
21742 + struct dwc_otg_pcd_ep * ep, dma_addr_t * dma,
21743 + unsigned size, gfp_t flags);
21744 + void (*release) (struct cfiobject * cfi);
21745 + int (*ctrl_write_complete) (struct cfiobject * cfi,
21746 + struct dwc_otg_pcd * pcd);
21747 + void (*build_descriptors) (struct cfiobject * cfi,
21748 + struct dwc_otg_pcd * pcd,
21749 + struct dwc_otg_pcd_ep * ep,
21750 + dwc_otg_pcd_request_t * req);
21751 +} cfi_ops_t;
21752 +
21753 +struct cfiobject {
21754 + cfi_ops_t ops;
21755 + struct dwc_otg_pcd *pcd;
21756 + struct usb_gadget *gadget;
21757 +
21758 + /* Buffers used to send/receive CFI-related request data */
21759 + cfi_dma_buff_t buf_in;
21760 + cfi_dma_buff_t buf_out;
21761 +
21762 + /* CFI specific Control request wrapper */
21763 + struct cfi_usb_ctrlrequest ctrl_req;
21764 +
21765 + /* The list of active EP's in the PCD of type cfi_ep_t */
21766 + dwc_list_link_t active_eps;
21767 +
21768 + /* This flag shall control the propagation of a specific request
21769 + * to the gadget's processing routines.
21770 + * 0 - no gadget handling
21771 + * 1 - the gadget needs to know about this request (w/o completing a status
21772 + * phase - just return a 0 to the _setup callback)
21773 + */
21774 + uint8_t need_gadget_att;
21775 +
21776 + /* Flag indicating whether the status IN phase needs to be
21777 + * completed by the PCD
21778 + */
21779 + uint8_t need_status_in_complete;
21780 +};
21781 +typedef struct cfiobject cfiobject_t;
21782 +
21783 +#define DUMP_MSG
21784 +
21785 +#if defined(DUMP_MSG)
21786 +static inline void dump_msg(const u8 * buf, unsigned int length)
21787 +{
21788 + unsigned int start, num, i;
21789 + char line[52], *p;
21790 +
21791 + if (length >= 512)
21792 + return;
21793 +
21794 + start = 0;
21795 + while (length > 0) {
21796 + num = min(length, 16u);
21797 + p = line;
21798 + for (i = 0; i < num; ++i) {
21799 + if (i == 8)
21800 + *p++ = ' ';
21801 + DWC_SPRINTF(p, " %02x", buf[i]);
21802 + p += 3;
21803 + }
21804 + *p = 0;
21805 + DWC_DEBUG("%6x: %s\n", start, line);
21806 + buf += num;
21807 + start += num;
21808 + length -= num;
21809 + }
21810 +}
21811 +#else
21812 +static inline void dump_msg(const u8 * buf, unsigned int length)
21813 +{
21814 +}
21815 +#endif
21816 +
21817 +/**
21818 + * This function returns a pointer to the cfi_ep_t object whose endpoint address matches addr.
21819 + */
21820 +static inline struct cfi_ep *get_cfi_ep_by_addr(struct cfiobject *cfi,
21821 + uint8_t addr)
21822 +{
21823 + struct cfi_ep *pcfiep;
21824 + dwc_list_link_t *tmp;
21825 +
21826 + DWC_LIST_FOREACH(tmp, &cfi->active_eps) {
21827 + pcfiep = DWC_LIST_ENTRY(tmp, struct cfi_ep, lh);
21828 +
21829 + if (pcfiep->ep->desc->bEndpointAddress == addr) {
21830 + return pcfiep;
21831 + }
21832 + }
21833 +
21834 + return NULL;
21835 +}
21836 +
21837 +/**
21838 + * This function returns a pointer to the cfi_ep_t object that matches
21839 + * the dwc_otg_pcd_ep object.
21840 + */
21841 +static inline struct cfi_ep *get_cfi_ep_by_pcd_ep(struct cfiobject *cfi,
21842 + struct dwc_otg_pcd_ep *ep)
21843 +{
21844 + struct cfi_ep *pcfiep = NULL;
21845 + dwc_list_link_t *tmp;
21846 +
21847 + DWC_LIST_FOREACH(tmp, &cfi->active_eps) {
21848 + pcfiep = DWC_LIST_ENTRY(tmp, struct cfi_ep, lh);
21849 + if (pcfiep->ep == ep) {
21850 + return pcfiep;
21851 + }
21852 + }
21853 + return NULL;
21854 +}
21855 +
21856 +int cfi_setup(struct dwc_otg_pcd *pcd, struct cfi_usb_ctrlrequest *ctrl);
21857 +
21858 +#endif /* (__DWC_OTG_CFI_H__) */
21859 --- /dev/null
21860 +++ b/drivers/usb/host/dwc_otg/dwc_otg_cil.c
21861 @@ -0,0 +1,7146 @@
21862 +/* ==========================================================================
21863 + * $File: //dwh/usb_iip/dev/software/otg/linux/drivers/dwc_otg_cil.c $
21864 + * $Revision: #191 $
21865 + * $Date: 2012/08/10 $
21866 + * $Change: 2047372 $
21867 + *
21868 + * Synopsys HS OTG Linux Software Driver and documentation (hereinafter,
21869 + * "Software") is an Unsupported proprietary work of Synopsys, Inc. unless
21870 + * otherwise expressly agreed to in writing between Synopsys and you.
21871 + *
21872 + * The Software IS NOT an item of Licensed Software or Licensed Product under
21873 + * any End User Software License Agreement or Agreement for Licensed Product
21874 + * with Synopsys or any supplement thereto. You are permitted to use and
21875 + * redistribute this Software in source and binary forms, with or without
21876 + * modification, provided that redistributions of source code must retain this
21877 + * notice. You may not view, use, disclose, copy or distribute this file or
21878 + * any information contained herein except pursuant to this license grant from
21879 + * Synopsys. If you do not agree with this notice, including the disclaimer
21880 + * below, then you are not authorized to use the Software.
21881 + *
21882 + * THIS SOFTWARE IS BEING DISTRIBUTED BY SYNOPSYS SOLELY ON AN "AS IS" BASIS
21883 + * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
21884 + * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
21885 + * ARE HEREBY DISCLAIMED. IN NO EVENT SHALL SYNOPSYS BE LIABLE FOR ANY DIRECT,
21886 + * INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
21887 + * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
21888 + * SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
21889 + * CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
21890 + * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
21891 + * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH
21892 + * DAMAGE.
21893 + * ========================================================================== */
21894 +
21895 +/** @file
21896 + *
21897 + * The Core Interface Layer provides basic services for accessing and
21898 + * managing the DWC_otg hardware. These services are used by both the
21899 + * Host Controller Driver and the Peripheral Controller Driver.
21900 + *
21901 + * The CIL manages the memory map for the core so that the HCD and PCD
21902 + * don't have to do this separately. It also handles basic tasks like
21903 + * reading/writing the registers and data FIFOs in the controller.
21904 + * Some of the data access functions provide encapsulation of several
21905 + * operations required to perform a task, such as writing multiple
21906 + * registers to start a transfer. Finally, the CIL performs basic
21907 + * services that are not specific to either the host or device modes
21908 + * of operation. These services include management of the OTG Host
21909 + * Negotiation Protocol (HNP) and Session Request Protocol (SRP). A
21910 + * Diagnostic API is also provided to allow testing of the controller
21911 + * hardware.
21912 + *
21913 + * The Core Interface Layer has the following requirements:
21914 + * - Provides basic controller operations.
21915 + * - Minimal use of OS services.
21916 + * - The OS services used will be abstracted by using inline functions
21917 + * or macros.
21918 + *
21919 + */
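As a hedged orientation sketch only, the functions defined in this file imply the following bring-up order for a caller (error handling omitted; the actual sequence is driven by the bus glue elsewhere in this patch):

	core_if = dwc_otg_cil_init(reg_base);       /* map the register layout */
	dwc_otg_core_init(core_if);                 /* program the core */
	dwc_otg_enable_global_interrupts(core_if);  /* unmask GAHBCFG.GlblIntrMsk */
	/* ... normal HCD/PCD operation ... */
	dwc_otg_cil_remove(core_if);                /* teardown on unload */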
21920 +
21921 +#include "dwc_os.h"
21922 +#include "dwc_otg_regs.h"
21923 +#include "dwc_otg_cil.h"
21924 +
21925 +extern bool cil_force_host;
21926 +
21927 +static int dwc_otg_setup_params(dwc_otg_core_if_t * core_if);
21928 +
21929 +/**
21930 + * This function is called to initialize the DWC_otg CSR data
21931 + * structures. The register addresses in the device and host
21932 + * structures are initialized from the base address supplied by the
21933 + * caller. The calling function must make the OS calls to get the
21934 + * base address of the DWC_otg controller registers. The core_params
21935 + * argument holds the parameters that specify how the core should be
21936 + * configured.
21937 + *
21938 + * @param reg_base_addr Base address of DWC_otg core registers
21939 + *
21940 + */
21941 +dwc_otg_core_if_t *dwc_otg_cil_init(const uint32_t * reg_base_addr)
21942 +{
21943 + dwc_otg_core_if_t *core_if = 0;
21944 + dwc_otg_dev_if_t *dev_if = 0;
21945 + dwc_otg_host_if_t *host_if = 0;
21946 + uint8_t *reg_base = (uint8_t *) reg_base_addr;
21947 + int i = 0;
21948 +
21949 + DWC_DEBUGPL(DBG_CILV, "%s(%p)\n", __func__, reg_base_addr);
21950 +
21951 + core_if = DWC_ALLOC(sizeof(dwc_otg_core_if_t));
21952 +
21953 + if (core_if == NULL) {
21954 + DWC_DEBUGPL(DBG_CIL,
21955 + "Allocation of dwc_otg_core_if_t failed\n");
21956 + return 0;
21957 + }
21958 + core_if->core_global_regs = (dwc_otg_core_global_regs_t *) reg_base;
21959 +
21960 + /*
21961 + * Allocate the Device Mode structures.
21962 + */
21963 + dev_if = DWC_ALLOC(sizeof(dwc_otg_dev_if_t));
21964 +
21965 + if (dev_if == NULL) {
21966 + DWC_DEBUGPL(DBG_CIL, "Allocation of dwc_otg_dev_if_t failed\n");
21967 + DWC_FREE(core_if);
21968 + return 0;
21969 + }
21970 +
21971 + dev_if->dev_global_regs =
21972 + (dwc_otg_device_global_regs_t *) (reg_base +
21973 + DWC_DEV_GLOBAL_REG_OFFSET);
21974 +
21975 + for (i = 0; i < MAX_EPS_CHANNELS; i++) {
21976 + dev_if->in_ep_regs[i] = (dwc_otg_dev_in_ep_regs_t *)
21977 + (reg_base + DWC_DEV_IN_EP_REG_OFFSET +
21978 + (i * DWC_EP_REG_OFFSET));
21979 +
21980 + dev_if->out_ep_regs[i] = (dwc_otg_dev_out_ep_regs_t *)
21981 + (reg_base + DWC_DEV_OUT_EP_REG_OFFSET +
21982 + (i * DWC_EP_REG_OFFSET));
21983 + DWC_DEBUGPL(DBG_CILV, "in_ep_regs[%d]->diepctl=%p\n",
21984 + i, &dev_if->in_ep_regs[i]->diepctl);
21985 + DWC_DEBUGPL(DBG_CILV, "out_ep_regs[%d]->doepctl=%p\n",
21986 + i, &dev_if->out_ep_regs[i]->doepctl);
21987 + }
21988 +
21989 + dev_if->speed = 0; // unknown
21990 +
21991 + core_if->dev_if = dev_if;
21992 +
21993 + /*
21994 + * Allocate the Host Mode structures.
21995 + */
21996 + host_if = DWC_ALLOC(sizeof(dwc_otg_host_if_t));
21997 +
21998 + if (host_if == NULL) {
21999 + DWC_DEBUGPL(DBG_CIL,
22000 + "Allocation of dwc_otg_host_if_t failed\n");
22001 + DWC_FREE(dev_if);
22002 + DWC_FREE(core_if);
22003 + return 0;
22004 + }
22005 +
22006 + host_if->host_global_regs = (dwc_otg_host_global_regs_t *)
22007 + (reg_base + DWC_OTG_HOST_GLOBAL_REG_OFFSET);
22008 +
22009 + host_if->hprt0 =
22010 + (uint32_t *) (reg_base + DWC_OTG_HOST_PORT_REGS_OFFSET);
22011 +
22012 + for (i = 0; i < MAX_EPS_CHANNELS; i++) {
22013 + host_if->hc_regs[i] = (dwc_otg_hc_regs_t *)
22014 + (reg_base + DWC_OTG_HOST_CHAN_REGS_OFFSET +
22015 + (i * DWC_OTG_CHAN_REGS_OFFSET));
22016 + DWC_DEBUGPL(DBG_CILV, "hc_reg[%d]->hcchar=%p\n",
22017 + i, &host_if->hc_regs[i]->hcchar);
22018 + }
22019 +
22020 + host_if->num_host_channels = MAX_EPS_CHANNELS;
22021 + core_if->host_if = host_if;
22022 +
22023 + for (i = 0; i < MAX_EPS_CHANNELS; i++) {
22024 + core_if->data_fifo[i] =
22025 + (uint32_t *) (reg_base + DWC_OTG_DATA_FIFO_OFFSET +
22026 + (i * DWC_OTG_DATA_FIFO_SIZE));
22027 + DWC_DEBUGPL(DBG_CILV, "data_fifo[%d]=0x%08lx\n",
22028 + i, (unsigned long)core_if->data_fifo[i]);
22029 + }
22030 +
22031 + core_if->pcgcctl = (uint32_t *) (reg_base + DWC_OTG_PCGCCTL_OFFSET);
22032 +
22033 +	/* Initialize lx_state to the L3 disconnected state */
22034 + core_if->lx_state = DWC_OTG_L3;
22035 + /*
22036 + * Store the contents of the hardware configuration registers here for
22037 + * easy access later.
22038 + */
22039 + core_if->hwcfg1.d32 =
22040 + DWC_READ_REG32(&core_if->core_global_regs->ghwcfg1);
22041 + core_if->hwcfg2.d32 =
22042 + DWC_READ_REG32(&core_if->core_global_regs->ghwcfg2);
22043 + core_if->hwcfg3.d32 =
22044 + DWC_READ_REG32(&core_if->core_global_regs->ghwcfg3);
22045 + core_if->hwcfg4.d32 =
22046 + DWC_READ_REG32(&core_if->core_global_regs->ghwcfg4);
22047 +
22048 + /* Force host mode to get HPTXFSIZ exact power on value */
22049 + {
22050 + gusbcfg_data_t gusbcfg = {.d32 = 0 };
22051 + gusbcfg.d32 = DWC_READ_REG32(&core_if->core_global_regs->gusbcfg);
22052 + gusbcfg.b.force_host_mode = 1;
22053 + DWC_WRITE_REG32(&core_if->core_global_regs->gusbcfg, gusbcfg.d32);
22054 + dwc_mdelay(100);
22055 + core_if->hptxfsiz.d32 =
22056 + DWC_READ_REG32(&core_if->core_global_regs->hptxfsiz);
22057 + gusbcfg.d32 = DWC_READ_REG32(&core_if->core_global_regs->gusbcfg);
22058 + if (cil_force_host)
22059 + gusbcfg.b.force_host_mode = 1;
22060 + else
22061 + gusbcfg.b.force_host_mode = 0;
22062 + DWC_WRITE_REG32(&core_if->core_global_regs->gusbcfg, gusbcfg.d32);
22063 + dwc_mdelay(100);
22064 + }
22065 +
22066 + DWC_DEBUGPL(DBG_CILV, "hwcfg1=%08x\n", core_if->hwcfg1.d32);
22067 + DWC_DEBUGPL(DBG_CILV, "hwcfg2=%08x\n", core_if->hwcfg2.d32);
22068 + DWC_DEBUGPL(DBG_CILV, "hwcfg3=%08x\n", core_if->hwcfg3.d32);
22069 + DWC_DEBUGPL(DBG_CILV, "hwcfg4=%08x\n", core_if->hwcfg4.d32);
22070 +
22071 + core_if->hcfg.d32 =
22072 + DWC_READ_REG32(&core_if->host_if->host_global_regs->hcfg);
22073 + core_if->dcfg.d32 =
22074 + DWC_READ_REG32(&core_if->dev_if->dev_global_regs->dcfg);
22075 +
22076 + DWC_DEBUGPL(DBG_CILV, "hcfg=%08x\n", core_if->hcfg.d32);
22077 + DWC_DEBUGPL(DBG_CILV, "dcfg=%08x\n", core_if->dcfg.d32);
22078 +
22079 + DWC_DEBUGPL(DBG_CILV, "op_mode=%0x\n", core_if->hwcfg2.b.op_mode);
22080 + DWC_DEBUGPL(DBG_CILV, "arch=%0x\n", core_if->hwcfg2.b.architecture);
22081 + DWC_DEBUGPL(DBG_CILV, "num_dev_ep=%d\n", core_if->hwcfg2.b.num_dev_ep);
22082 + DWC_DEBUGPL(DBG_CILV, "num_host_chan=%d\n",
22083 + core_if->hwcfg2.b.num_host_chan);
22084 + DWC_DEBUGPL(DBG_CILV, "nonperio_tx_q_depth=0x%0x\n",
22085 + core_if->hwcfg2.b.nonperio_tx_q_depth);
22086 + DWC_DEBUGPL(DBG_CILV, "host_perio_tx_q_depth=0x%0x\n",
22087 + core_if->hwcfg2.b.host_perio_tx_q_depth);
22088 + DWC_DEBUGPL(DBG_CILV, "dev_token_q_depth=0x%0x\n",
22089 + core_if->hwcfg2.b.dev_token_q_depth);
22090 +
22091 + DWC_DEBUGPL(DBG_CILV, "Total FIFO SZ=%d\n",
22092 + core_if->hwcfg3.b.dfifo_depth);
22093 + DWC_DEBUGPL(DBG_CILV, "xfer_size_cntr_width=%0x\n",
22094 + core_if->hwcfg3.b.xfer_size_cntr_width);
22095 +
22096 + /*
22097 +	 * Set the SRP success bit for FS-I2C
22098 + */
22099 + core_if->srp_success = 0;
22100 + core_if->srp_timer_started = 0;
22101 +
22102 + /*
22103 + * Create new workqueue and init works
22104 + */
22105 + core_if->wq_otg = DWC_WORKQ_ALLOC("dwc_otg");
22106 + if (core_if->wq_otg == 0) {
22107 + DWC_WARN("DWC_WORKQ_ALLOC failed\n");
22108 + DWC_FREE(host_if);
22109 + DWC_FREE(dev_if);
22110 + DWC_FREE(core_if);
22111 + return 0;
22112 + }
22113 +
22114 + core_if->snpsid = DWC_READ_REG32(&core_if->core_global_regs->gsnpsid);
22115 +
22116 + DWC_PRINTF("Core Release: %x.%x%x%x\n",
22117 + (core_if->snpsid >> 12 & 0xF),
22118 + (core_if->snpsid >> 8 & 0xF),
22119 + (core_if->snpsid >> 4 & 0xF), (core_if->snpsid & 0xF));
22120 +
22121 + core_if->wkp_timer = DWC_TIMER_ALLOC("Wake Up Timer",
22122 + w_wakeup_detected, core_if);
22123 + if (core_if->wkp_timer == 0) {
22124 + DWC_WARN("DWC_TIMER_ALLOC failed\n");
22125 + DWC_FREE(host_if);
22126 + DWC_FREE(dev_if);
22127 + DWC_WORKQ_FREE(core_if->wq_otg);
22128 + DWC_FREE(core_if);
22129 + return 0;
22130 + }
22131 +
22132 + if (dwc_otg_setup_params(core_if)) {
22133 + DWC_WARN("Error while setting core params\n");
22134 + }
22135 +
22136 + core_if->hibernation_suspend = 0;
22137 +
22138 + /** ADP initialization */
22139 + dwc_otg_adp_init(core_if);
22140 +
22141 + return core_if;
22142 +}
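dwc_otg_cil_init() expects the caller to have already mapped the controller registers through the usual OS facilities. A minimal, hypothetical caller sketch (example_cil_bringup and res are illustrative names, not driver API, and assume <linux/io.h>):

static dwc_otg_core_if_t *example_cil_bringup(struct resource *res)
{
	void __iomem *base = ioremap(res->start, resource_size(res));

	if (!base)
		return NULL;

	/* dwc_otg_cil_init() lays out the register map and allocates the
	 * core/dev/host interface structures; programming the core for
	 * operation is left to dwc_otg_core_init(). */
	return dwc_otg_cil_init((const uint32_t *)base);
}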
22143 +
22144 +/**
22145 + * This function frees the structures allocated by dwc_otg_cil_init().
22146 + *
22147 + * @param core_if The core interface pointer returned from
22148 + * dwc_otg_cil_init().
22149 + *
22150 + */
22151 +void dwc_otg_cil_remove(dwc_otg_core_if_t * core_if)
22152 +{
22153 + dctl_data_t dctl = {.d32 = 0 };
22154 + DWC_DEBUGPL(DBG_CILV, "%s(%p)\n", __func__, core_if);
22155 +
22156 + /* Disable all interrupts */
22157 + DWC_MODIFY_REG32(&core_if->core_global_regs->gahbcfg, 1, 0);
22158 + DWC_WRITE_REG32(&core_if->core_global_regs->gintmsk, 0);
22159 +
22160 + dctl.b.sftdiscon = 1;
22161 + if (core_if->snpsid >= OTG_CORE_REV_3_00a) {
22162 + DWC_MODIFY_REG32(&core_if->dev_if->dev_global_regs->dctl, 0,
22163 + dctl.d32);
22164 + }
22165 +
22166 + if (core_if->wq_otg) {
22167 + DWC_WORKQ_WAIT_WORK_DONE(core_if->wq_otg, 500);
22168 + DWC_WORKQ_FREE(core_if->wq_otg);
22169 + }
22170 + if (core_if->dev_if) {
22171 + DWC_FREE(core_if->dev_if);
22172 + }
22173 + if (core_if->host_if) {
22174 + DWC_FREE(core_if->host_if);
22175 + }
22176 +
22177 + /** Remove ADP Stuff */
22178 + dwc_otg_adp_remove(core_if);
22179 + if (core_if->core_params) {
22180 + DWC_FREE(core_if->core_params);
22181 + }
22182 + if (core_if->wkp_timer) {
22183 + DWC_TIMER_FREE(core_if->wkp_timer);
22184 + }
22185 + if (core_if->srp_timer) {
22186 + DWC_TIMER_FREE(core_if->srp_timer);
22187 + }
22188 + DWC_FREE(core_if);
22189 +}
22190 +
22191 +/**
22192 + * This function enables the controller's Global Interrupt in the AHB Config
22193 + * register.
22194 + *
22195 + * @param core_if Programming view of DWC_otg controller.
22196 + */
22197 +void dwc_otg_enable_global_interrupts(dwc_otg_core_if_t * core_if)
22198 +{
22199 + gahbcfg_data_t ahbcfg = {.d32 = 0 };
22200 + ahbcfg.b.glblintrmsk = 1; /* Enable interrupts */
22201 + DWC_MODIFY_REG32(&core_if->core_global_regs->gahbcfg, 0, ahbcfg.d32);
22202 +}
22203 +
22204 +/**
22205 + * This function disables the controller's Global Interrupt in the AHB Config
22206 + * register.
22207 + *
22208 + * @param core_if Programming view of DWC_otg controller.
22209 + */
22210 +void dwc_otg_disable_global_interrupts(dwc_otg_core_if_t * core_if)
22211 +{
22212 + gahbcfg_data_t ahbcfg = {.d32 = 0 };
22213 + ahbcfg.b.glblintrmsk = 1; /* Disable interrupts */
22214 + DWC_MODIFY_REG32(&core_if->core_global_regs->gahbcfg, ahbcfg.d32, 0);
22215 +}
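A brief, purely illustrative usage sketch of this pair: reconfiguration of the core is typically bracketed by the global interrupt gate so no core interrupt is delivered mid-update (example_reprogram_core is a hypothetical name):

static void example_reprogram_core(dwc_otg_core_if_t *core_if)
{
	/* Keep the core from raising interrupts while registers are rewritten. */
	dwc_otg_disable_global_interrupts(core_if);

	/* ... rewrite GINTMSK, FIFO sizing, etc. ... */

	dwc_otg_enable_global_interrupts(core_if);
}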
22216 +
22217 +/**
22218 + * This function initializes the common interrupts, used in both
22219 + * device and host modes.
22220 + *
22221 + * @param core_if Programming view of the DWC_otg controller
22222 + *
22223 + */
22224 +static void dwc_otg_enable_common_interrupts(dwc_otg_core_if_t * core_if)
22225 +{
22226 + dwc_otg_core_global_regs_t *global_regs = core_if->core_global_regs;
22227 + gintmsk_data_t intr_mask = {.d32 = 0 };
22228 +
22229 + /* Clear any pending OTG Interrupts */
22230 + DWC_WRITE_REG32(&global_regs->gotgint, 0xFFFFFFFF);
22231 +
22232 + /* Clear any pending interrupts */
22233 + DWC_WRITE_REG32(&global_regs->gintsts, 0xFFFFFFFF);
22234 +
22235 + /*
22236 + * Enable the interrupts in the GINTMSK.
22237 + */
22238 + intr_mask.b.modemismatch = 1;
22239 + intr_mask.b.otgintr = 1;
22240 +
22241 + if (!core_if->dma_enable) {
22242 + intr_mask.b.rxstsqlvl = 1;
22243 + }
22244 +
22245 + intr_mask.b.conidstschng = 1;
22246 + intr_mask.b.wkupintr = 1;
22247 + intr_mask.b.disconnect = 0;
22248 + intr_mask.b.usbsuspend = 1;
22249 + intr_mask.b.sessreqintr = 1;
22250 +#ifdef CONFIG_USB_DWC_OTG_LPM
22251 + if (core_if->core_params->lpm_enable) {
22252 + intr_mask.b.lpmtranrcvd = 1;
22253 + }
22254 +#endif
22255 + DWC_WRITE_REG32(&global_regs->gintmsk, intr_mask.d32);
22256 +}
22257 +
22258 +/*
22259 + * The restore operation is modified to support Synopsys Emulated Powerdown and
22260 + * Hibernation. This function is for exiting from Device mode hibernation by
22261 + * Host Initiated Resume/Reset and Device Initiated Remote-Wakeup.
22262 + * @param core_if Programming view of DWC_otg controller.
22263 + * @param rem_wakeup - indicates whether resume is initiated by Device or Host.
22264 + * @param reset - indicates whether resume is initiated by Reset.
22265 + */
22266 +int dwc_otg_device_hibernation_restore(dwc_otg_core_if_t * core_if,
22267 + int rem_wakeup, int reset)
22268 +{
22269 + gpwrdn_data_t gpwrdn = {.d32 = 0 };
22270 + pcgcctl_data_t pcgcctl = {.d32 = 0 };
22271 + dctl_data_t dctl = {.d32 = 0 };
22272 +
22273 + int timeout = 2000;
22274 +
22275 + if (!core_if->hibernation_suspend) {
22276 + DWC_PRINTF("Already exited from Hibernation\n");
22277 + return 1;
22278 + }
22279 +
22280 + DWC_DEBUGPL(DBG_PCD, "%s called\n", __FUNCTION__);
22281 + /* Switch-on voltage to the core */
22282 + gpwrdn.b.pwrdnswtch = 1;
22283 + DWC_MODIFY_REG32(&core_if->core_global_regs->gpwrdn, gpwrdn.d32, 0);
22284 + dwc_udelay(10);
22285 +
22286 + /* Reset core */
22287 + gpwrdn.d32 = 0;
22288 + gpwrdn.b.pwrdnrstn = 1;
22289 + DWC_MODIFY_REG32(&core_if->core_global_regs->gpwrdn, gpwrdn.d32, 0);
22290 + dwc_udelay(10);
22291 +
22292 + /* Assert Restore signal */
22293 + gpwrdn.d32 = 0;
22294 + gpwrdn.b.restore = 1;
22295 + DWC_MODIFY_REG32(&core_if->core_global_regs->gpwrdn, 0, gpwrdn.d32);
22296 + dwc_udelay(10);
22297 +
22298 + /* Disable power clamps */
22299 + gpwrdn.d32 = 0;
22300 + gpwrdn.b.pwrdnclmp = 1;
22301 + DWC_MODIFY_REG32(&core_if->core_global_regs->gpwrdn, gpwrdn.d32, 0);
22302 +
22303 + if (rem_wakeup) {
22304 + dwc_udelay(70);
22305 + }
22306 +
22307 + /* Deassert Reset core */
22308 + gpwrdn.d32 = 0;
22309 + gpwrdn.b.pwrdnrstn = 1;
22310 + DWC_MODIFY_REG32(&core_if->core_global_regs->gpwrdn, 0, gpwrdn.d32);
22311 + dwc_udelay(10);
22312 +
22313 + /* Disable PMU interrupt */
22314 + gpwrdn.d32 = 0;
22315 + gpwrdn.b.pmuintsel = 1;
22316 + DWC_MODIFY_REG32(&core_if->core_global_regs->gpwrdn, gpwrdn.d32, 0);
22317 +
22318 + /* Mask interrupts from gpwrdn */
22319 + gpwrdn.d32 = 0;
22320 + gpwrdn.b.connect_det_msk = 1;
22321 + gpwrdn.b.srp_det_msk = 1;
22322 + gpwrdn.b.disconn_det_msk = 1;
22323 + gpwrdn.b.rst_det_msk = 1;
22324 + gpwrdn.b.lnstchng_msk = 1;
22325 + DWC_MODIFY_REG32(&core_if->core_global_regs->gpwrdn, gpwrdn.d32, 0);
22326 +
22327 + /* Indicates that we are going out from hibernation */
22328 + core_if->hibernation_suspend = 0;
22329 +
22330 + /*
22331 + * Set Restore Essential Regs bit in PCGCCTL register, restore_mode = 1
22332 + * indicates restore from remote_wakeup
22333 + */
22334 + restore_essential_regs(core_if, rem_wakeup, 0);
22335 +
22336 + /*
22337 +	 * Wait a little to see the new value of hibernation_suspend in case the
22338 +	 * Restore Done interrupt was received before polling
22339 + */
22340 + dwc_udelay(10);
22341 +
22342 + if (core_if->hibernation_suspend == 0) {
22343 + /*
22344 +		 * Wait for the Restore Done interrupt. The interrupt is polled
22345 +		 * here to avoid any possible race conditions
22346 + */
22347 + do {
22348 + gintsts_data_t gintsts;
22349 + gintsts.d32 =
22350 + DWC_READ_REG32(&core_if->core_global_regs->gintsts);
22351 + if (gintsts.b.restoredone) {
22352 + gintsts.d32 = 0;
22353 + gintsts.b.restoredone = 1;
22354 + DWC_WRITE_REG32(&core_if->core_global_regs->
22355 + gintsts, gintsts.d32);
22356 + DWC_PRINTF("Restore Done Interrupt seen\n");
22357 + break;
22358 + }
22359 + dwc_udelay(10);
22360 + } while (--timeout);
22361 + if (!timeout) {
22362 + DWC_PRINTF("Restore Done interrupt wasn't generated here\n");
22363 + }
22364 + }
22365 +	/* Clear all pending interrupts */
22366 + DWC_WRITE_REG32(&core_if->core_global_regs->gintsts, 0xFFFFFFFF);
22367 +
22368 + /* De-assert Restore */
22369 + gpwrdn.d32 = 0;
22370 + gpwrdn.b.restore = 1;
22371 + DWC_MODIFY_REG32(&core_if->core_global_regs->gpwrdn, gpwrdn.d32, 0);
22372 + dwc_udelay(10);
22373 +
22374 + if (!rem_wakeup) {
22375 + pcgcctl.d32 = 0;
22376 + pcgcctl.b.rstpdwnmodule = 1;
22377 + DWC_MODIFY_REG32(core_if->pcgcctl, pcgcctl.d32, 0);
22378 + }
22379 +
22380 + /* Restore GUSBCFG and DCFG */
22381 + DWC_WRITE_REG32(&core_if->core_global_regs->gusbcfg,
22382 + core_if->gr_backup->gusbcfg_local);
22383 + DWC_WRITE_REG32(&core_if->dev_if->dev_global_regs->dcfg,
22384 + core_if->dr_backup->dcfg);
22385 +
22386 + /* De-assert Wakeup Logic */
22387 + gpwrdn.d32 = 0;
22388 + gpwrdn.b.pmuactv = 1;
22389 + DWC_MODIFY_REG32(&core_if->core_global_regs->gpwrdn, gpwrdn.d32, 0);
22390 + dwc_udelay(10);
22391 +
22392 + if (!rem_wakeup) {
22393 + /* Set Device programming done bit */
22394 + dctl.b.pwronprgdone = 1;
22395 + DWC_MODIFY_REG32(&core_if->dev_if->dev_global_regs->dctl, 0, dctl.d32);
22396 + } else {
22397 + /* Start Remote Wakeup Signaling */
22398 + dctl.d32 = core_if->dr_backup->dctl;
22399 + dctl.b.rmtwkupsig = 1;
22400 + DWC_WRITE_REG32(&core_if->dev_if->dev_global_regs->dctl, dctl.d32);
22401 + }
22402 +
22403 + dwc_mdelay(2);
22404 +	/* Clear all pending interrupts */
22405 + DWC_WRITE_REG32(&core_if->core_global_regs->gintsts, 0xFFFFFFFF);
22406 +
22407 + /* Restore global registers */
22408 + dwc_otg_restore_global_regs(core_if);
22409 + /* Restore device global registers */
22410 + dwc_otg_restore_dev_regs(core_if, rem_wakeup);
22411 +
22412 + if (rem_wakeup) {
22413 + dwc_mdelay(7);
22414 + dctl.d32 = 0;
22415 + dctl.b.rmtwkupsig = 1;
22416 + DWC_MODIFY_REG32(&core_if->dev_if->dev_global_regs->dctl, dctl.d32, 0);
22417 + }
22418 +
22419 + core_if->hibernation_suspend = 0;
22420 + /* The core will be in ON STATE */
22421 + core_if->lx_state = DWC_OTG_L0;
22422 + DWC_PRINTF("Hibernation recovery completes here\n");
22423 +
22424 + return 1;
22425 +}
22426 +
22427 +/*
22428 + * The restore operation is modified to support Synopsys Emulated Powerdown and
22429 + * Hibernation. This function is for exiting from Host mode hibernation by
22430 + * Host Initiated Resume/Reset and Device Initiated Remote-Wakeup.
22431 + * @param core_if Programming view of DWC_otg controller.
22432 + * @param rem_wakeup - indicates whether resume is initiated by Device or Host.
22433 + * @param reset - indicates whether resume is initiated by Reset.
22434 + */
22435 +int dwc_otg_host_hibernation_restore(dwc_otg_core_if_t * core_if,
22436 + int rem_wakeup, int reset)
22437 +{
22438 + gpwrdn_data_t gpwrdn = {.d32 = 0 };
22439 + hprt0_data_t hprt0 = {.d32 = 0 };
22440 +
22441 + int timeout = 2000;
22442 +
22443 + DWC_DEBUGPL(DBG_HCD, "%s called\n", __FUNCTION__);
22444 + /* Switch-on voltage to the core */
22445 + gpwrdn.b.pwrdnswtch = 1;
22446 + DWC_MODIFY_REG32(&core_if->core_global_regs->gpwrdn, gpwrdn.d32, 0);
22447 + dwc_udelay(10);
22448 +
22449 + /* Reset core */
22450 + gpwrdn.d32 = 0;
22451 + gpwrdn.b.pwrdnrstn = 1;
22452 + DWC_MODIFY_REG32(&core_if->core_global_regs->gpwrdn, gpwrdn.d32, 0);
22453 + dwc_udelay(10);
22454 +
22455 + /* Assert Restore signal */
22456 + gpwrdn.d32 = 0;
22457 + gpwrdn.b.restore = 1;
22458 + DWC_MODIFY_REG32(&core_if->core_global_regs->gpwrdn, 0, gpwrdn.d32);
22459 + dwc_udelay(10);
22460 +
22461 + /* Disable power clamps */
22462 + gpwrdn.d32 = 0;
22463 + gpwrdn.b.pwrdnclmp = 1;
22464 + DWC_MODIFY_REG32(&core_if->core_global_regs->gpwrdn, gpwrdn.d32, 0);
22465 +
22466 + if (!rem_wakeup) {
22467 + dwc_udelay(50);
22468 + }
22469 +
22470 + /* Deassert Reset core */
22471 + gpwrdn.d32 = 0;
22472 + gpwrdn.b.pwrdnrstn = 1;
22473 + DWC_MODIFY_REG32(&core_if->core_global_regs->gpwrdn, 0, gpwrdn.d32);
22474 + dwc_udelay(10);
22475 +
22476 + /* Disable PMU interrupt */
22477 + gpwrdn.d32 = 0;
22478 + gpwrdn.b.pmuintsel = 1;
22479 + DWC_MODIFY_REG32(&core_if->core_global_regs->gpwrdn, gpwrdn.d32, 0);
22480 +
22481 + gpwrdn.d32 = 0;
22482 + gpwrdn.b.connect_det_msk = 1;
22483 + gpwrdn.b.srp_det_msk = 1;
22484 + gpwrdn.b.disconn_det_msk = 1;
22485 + gpwrdn.b.rst_det_msk = 1;
22486 + gpwrdn.b.lnstchng_msk = 1;
22487 + DWC_MODIFY_REG32(&core_if->core_global_regs->gpwrdn, gpwrdn.d32, 0);
22488 +
22489 + /* Indicates that we are going out from hibernation */
22490 + core_if->hibernation_suspend = 0;
22491 +
22492 + /* Set Restore Essential Regs bit in PCGCCTL register */
22493 + restore_essential_regs(core_if, rem_wakeup, 1);
22494 +
22495 +	/* Wait a little to see the new value of hibernation_suspend in case the
22496 +	 * Restore Done interrupt was received before polling */
22497 + dwc_udelay(10);
22498 +
22499 + if (core_if->hibernation_suspend == 0) {
22500 +		/* Wait for the Restore Done interrupt. The interrupt is polled
22501 +		 * here to avoid any possible race conditions
22502 + */
22503 + do {
22504 + gintsts_data_t gintsts;
22505 + gintsts.d32 = DWC_READ_REG32(&core_if->core_global_regs->gintsts);
22506 + if (gintsts.b.restoredone) {
22507 + gintsts.d32 = 0;
22508 + gintsts.b.restoredone = 1;
22509 + DWC_WRITE_REG32(&core_if->core_global_regs->gintsts, gintsts.d32);
22510 + DWC_DEBUGPL(DBG_HCD,"Restore Done Interrupt seen\n");
22511 + break;
22512 + }
22513 + dwc_udelay(10);
22514 + } while (--timeout);
22515 + if (!timeout) {
22516 + DWC_WARN("Restore Done interrupt wasn't generated\n");
22517 + }
22518 + }
22519 +
22520 + /* Set the flag's value to 0 again after receiving restore done interrupt */
22521 + core_if->hibernation_suspend = 0;
22522 +
22523 +	/* This step is not described in the functional spec, but without this
22524 +	 * delay mode mismatch interrupts occur because just after restore the
22525 +	 * core is in Device mode (gintsts.curmode == 0) */
22526 + dwc_mdelay(100);
22527 +
22528 + /* Clear all pending interrupts */
22529 + DWC_WRITE_REG32(&core_if->core_global_regs->gintsts, 0xFFFFFFFF);
22530 +
22531 + /* De-assert Restore */
22532 + gpwrdn.d32 = 0;
22533 + gpwrdn.b.restore = 1;
22534 + DWC_MODIFY_REG32(&core_if->core_global_regs->gpwrdn, gpwrdn.d32, 0);
22535 + dwc_udelay(10);
22536 +
22537 + /* Restore GUSBCFG and HCFG */
22538 + DWC_WRITE_REG32(&core_if->core_global_regs->gusbcfg,
22539 + core_if->gr_backup->gusbcfg_local);
22540 + DWC_WRITE_REG32(&core_if->host_if->host_global_regs->hcfg,
22541 + core_if->hr_backup->hcfg_local);
22542 +
22543 + /* De-assert Wakeup Logic */
22544 + gpwrdn.d32 = 0;
22545 + gpwrdn.b.pmuactv = 1;
22546 + DWC_MODIFY_REG32(&core_if->core_global_regs->gpwrdn, gpwrdn.d32, 0);
22547 + dwc_udelay(10);
22548 +
22549 + /* Start the Resume operation by programming HPRT0 */
22550 + hprt0.d32 = core_if->hr_backup->hprt0_local;
22551 + hprt0.b.prtpwr = 1;
22552 + hprt0.b.prtena = 0;
22553 + hprt0.b.prtsusp = 0;
22554 + DWC_WRITE_REG32(core_if->host_if->hprt0, hprt0.d32);
22555 +
22556 + DWC_PRINTF("Resume Starts Now\n");
22557 + if (!reset) { // Indicates it is Resume Operation
22558 + hprt0.d32 = core_if->hr_backup->hprt0_local;
22559 + hprt0.b.prtres = 1;
22560 + hprt0.b.prtpwr = 1;
22561 + hprt0.b.prtena = 0;
22562 + hprt0.b.prtsusp = 0;
22563 + DWC_WRITE_REG32(core_if->host_if->hprt0, hprt0.d32);
22564 +
22565 + if (!rem_wakeup)
22566 + hprt0.b.prtres = 0;
22567 + /* Wait for Resume time and then program HPRT again */
22568 + dwc_mdelay(100);
22569 + DWC_WRITE_REG32(core_if->host_if->hprt0, hprt0.d32);
22570 +
22571 + } else { // Indicates it is Reset Operation
22572 + hprt0.d32 = core_if->hr_backup->hprt0_local;
22573 + hprt0.b.prtrst = 1;
22574 + hprt0.b.prtpwr = 1;
22575 + hprt0.b.prtena = 0;
22576 + hprt0.b.prtsusp = 0;
22577 + DWC_WRITE_REG32(core_if->host_if->hprt0, hprt0.d32);
22578 + /* Wait for Reset time and then program HPRT again */
22579 + dwc_mdelay(60);
22580 + hprt0.b.prtrst = 0;
22581 + DWC_WRITE_REG32(core_if->host_if->hprt0, hprt0.d32);
22582 + }
22583 + /* Clear all interrupt status */
22584 + hprt0.d32 = dwc_otg_read_hprt0(core_if);
22585 + hprt0.b.prtconndet = 1;
22586 + hprt0.b.prtenchng = 1;
22587 + DWC_WRITE_REG32(core_if->host_if->hprt0, hprt0.d32);
22588 +
22589 +	/* Clear all pending interrupts */
22590 + DWC_WRITE_REG32(&core_if->core_global_regs->gintsts, 0xFFFFFFFF);
22591 +
22592 + /* Restore global registers */
22593 + dwc_otg_restore_global_regs(core_if);
22594 + /* Restore host global registers */
22595 + dwc_otg_restore_host_regs(core_if, reset);
22596 +
22597 + /* The core will be in ON STATE */
22598 + core_if->lx_state = DWC_OTG_L0;
22599 + DWC_PRINTF("Hibernation recovery is complete here\n");
22600 + return 0;
22601 +}
22602 +
22603 +/** Saves some register values into system memory. */
22604 +int dwc_otg_save_global_regs(dwc_otg_core_if_t * core_if)
22605 +{
22606 + struct dwc_otg_global_regs_backup *gr;
22607 + int i;
22608 +
22609 + gr = core_if->gr_backup;
22610 + if (!gr) {
22611 + gr = DWC_ALLOC(sizeof(*gr));
22612 + if (!gr) {
22613 + return -DWC_E_NO_MEMORY;
22614 + }
22615 + core_if->gr_backup = gr;
22616 + }
22617 +
22618 + gr->gotgctl_local = DWC_READ_REG32(&core_if->core_global_regs->gotgctl);
22619 + gr->gintmsk_local = DWC_READ_REG32(&core_if->core_global_regs->gintmsk);
22620 + gr->gahbcfg_local = DWC_READ_REG32(&core_if->core_global_regs->gahbcfg);
22621 + gr->gusbcfg_local = DWC_READ_REG32(&core_if->core_global_regs->gusbcfg);
22622 + gr->grxfsiz_local = DWC_READ_REG32(&core_if->core_global_regs->grxfsiz);
22623 + gr->gnptxfsiz_local = DWC_READ_REG32(&core_if->core_global_regs->gnptxfsiz);
22624 + gr->hptxfsiz_local = DWC_READ_REG32(&core_if->core_global_regs->hptxfsiz);
22625 +#ifdef CONFIG_USB_DWC_OTG_LPM
22626 + gr->glpmcfg_local = DWC_READ_REG32(&core_if->core_global_regs->glpmcfg);
22627 +#endif
22628 + gr->gi2cctl_local = DWC_READ_REG32(&core_if->core_global_regs->gi2cctl);
22629 + gr->pcgcctl_local = DWC_READ_REG32(core_if->pcgcctl);
22630 + gr->gdfifocfg_local =
22631 + DWC_READ_REG32(&core_if->core_global_regs->gdfifocfg);
22632 + for (i = 0; i < MAX_EPS_CHANNELS; i++) {
22633 + gr->dtxfsiz_local[i] =
22634 + DWC_READ_REG32(&(core_if->core_global_regs->dtxfsiz[i]));
22635 + }
22636 +
22637 + DWC_DEBUGPL(DBG_ANY, "===========Backing Global registers==========\n");
22638 + DWC_DEBUGPL(DBG_ANY, "Backed up gotgctl = %08x\n", gr->gotgctl_local);
22639 + DWC_DEBUGPL(DBG_ANY, "Backed up gintmsk = %08x\n", gr->gintmsk_local);
22640 + DWC_DEBUGPL(DBG_ANY, "Backed up gahbcfg = %08x\n", gr->gahbcfg_local);
22641 + DWC_DEBUGPL(DBG_ANY, "Backed up gusbcfg = %08x\n", gr->gusbcfg_local);
22642 + DWC_DEBUGPL(DBG_ANY, "Backed up grxfsiz = %08x\n", gr->grxfsiz_local);
22643 + DWC_DEBUGPL(DBG_ANY, "Backed up gnptxfsiz = %08x\n",
22644 + gr->gnptxfsiz_local);
22645 + DWC_DEBUGPL(DBG_ANY, "Backed up hptxfsiz = %08x\n",
22646 + gr->hptxfsiz_local);
22647 +#ifdef CONFIG_USB_DWC_OTG_LPM
22648 + DWC_DEBUGPL(DBG_ANY, "Backed up glpmcfg = %08x\n", gr->glpmcfg_local);
22649 +#endif
22650 + DWC_DEBUGPL(DBG_ANY, "Backed up gi2cctl = %08x\n", gr->gi2cctl_local);
22651 + DWC_DEBUGPL(DBG_ANY, "Backed up pcgcctl = %08x\n", gr->pcgcctl_local);
22652 + DWC_DEBUGPL(DBG_ANY,"Backed up gdfifocfg = %08x\n",gr->gdfifocfg_local);
22653 +
22654 + return 0;
22655 +}
22656 +
22657 +/** Saves GINTMSK register before setting the msk bits. */
22658 +int dwc_otg_save_gintmsk_reg(dwc_otg_core_if_t * core_if)
22659 +{
22660 + struct dwc_otg_global_regs_backup *gr;
22661 +
22662 + gr = core_if->gr_backup;
22663 + if (!gr) {
22664 + gr = DWC_ALLOC(sizeof(*gr));
22665 + if (!gr) {
22666 + return -DWC_E_NO_MEMORY;
22667 + }
22668 + core_if->gr_backup = gr;
22669 + }
22670 +
22671 + gr->gintmsk_local = DWC_READ_REG32(&core_if->core_global_regs->gintmsk);
22672 +
22673 + DWC_DEBUGPL(DBG_ANY,"=============Backing GINTMSK registers============\n");
22674 + DWC_DEBUGPL(DBG_ANY, "Backed up gintmsk = %08x\n", gr->gintmsk_local);
22675 +
22676 + return 0;
22677 +}
22678 +
22679 +int dwc_otg_save_dev_regs(dwc_otg_core_if_t * core_if)
22680 +{
22681 + struct dwc_otg_dev_regs_backup *dr;
22682 + int i;
22683 +
22684 + dr = core_if->dr_backup;
22685 + if (!dr) {
22686 + dr = DWC_ALLOC(sizeof(*dr));
22687 + if (!dr) {
22688 + return -DWC_E_NO_MEMORY;
22689 + }
22690 + core_if->dr_backup = dr;
22691 + }
22692 +
22693 + dr->dcfg = DWC_READ_REG32(&core_if->dev_if->dev_global_regs->dcfg);
22694 + dr->dctl = DWC_READ_REG32(&core_if->dev_if->dev_global_regs->dctl);
22695 + dr->daintmsk =
22696 + DWC_READ_REG32(&core_if->dev_if->dev_global_regs->daintmsk);
22697 + dr->diepmsk =
22698 + DWC_READ_REG32(&core_if->dev_if->dev_global_regs->diepmsk);
22699 + dr->doepmsk =
22700 + DWC_READ_REG32(&core_if->dev_if->dev_global_regs->doepmsk);
22701 +
22702 + for (i = 0; i < core_if->dev_if->num_in_eps; ++i) {
22703 + dr->diepctl[i] =
22704 + DWC_READ_REG32(&core_if->dev_if->in_ep_regs[i]->diepctl);
22705 + dr->dieptsiz[i] =
22706 + DWC_READ_REG32(&core_if->dev_if->in_ep_regs[i]->dieptsiz);
22707 + dr->diepdma[i] =
22708 + DWC_READ_REG32(&core_if->dev_if->in_ep_regs[i]->diepdma);
22709 + }
22710 +
22711 + DWC_DEBUGPL(DBG_ANY,
22712 + "=============Backing Host registers==============\n");
22713 + DWC_DEBUGPL(DBG_ANY, "Backed up dcfg = %08x\n", dr->dcfg);
22714 + DWC_DEBUGPL(DBG_ANY, "Backed up dctl = %08x\n", dr->dctl);
22715 + DWC_DEBUGPL(DBG_ANY, "Backed up daintmsk = %08x\n",
22716 + dr->daintmsk);
22717 + DWC_DEBUGPL(DBG_ANY, "Backed up diepmsk = %08x\n", dr->diepmsk);
22718 + DWC_DEBUGPL(DBG_ANY, "Backed up doepmsk = %08x\n", dr->doepmsk);
22719 + for (i = 0; i < core_if->dev_if->num_in_eps; ++i) {
22720 + DWC_DEBUGPL(DBG_ANY, "Backed up diepctl[%d] = %08x\n", i,
22721 + dr->diepctl[i]);
22722 + DWC_DEBUGPL(DBG_ANY, "Backed up dieptsiz[%d] = %08x\n",
22723 + i, dr->dieptsiz[i]);
22724 + DWC_DEBUGPL(DBG_ANY, "Backed up diepdma[%d] = %08x\n", i,
22725 + dr->diepdma[i]);
22726 + }
22727 +
22728 + return 0;
22729 +}
22730 +
22731 +int dwc_otg_save_host_regs(dwc_otg_core_if_t * core_if)
22732 +{
22733 + struct dwc_otg_host_regs_backup *hr;
22734 + int i;
22735 +
22736 + hr = core_if->hr_backup;
22737 + if (!hr) {
22738 + hr = DWC_ALLOC(sizeof(*hr));
22739 + if (!hr) {
22740 + return -DWC_E_NO_MEMORY;
22741 + }
22742 + core_if->hr_backup = hr;
22743 + }
22744 +
22745 + hr->hcfg_local =
22746 + DWC_READ_REG32(&core_if->host_if->host_global_regs->hcfg);
22747 + hr->haintmsk_local =
22748 + DWC_READ_REG32(&core_if->host_if->host_global_regs->haintmsk);
22749 + for (i = 0; i < dwc_otg_get_param_host_channels(core_if); ++i) {
22750 + hr->hcintmsk_local[i] =
22751 + DWC_READ_REG32(&core_if->host_if->hc_regs[i]->hcintmsk);
22752 + }
22753 + hr->hprt0_local = DWC_READ_REG32(core_if->host_if->hprt0);
22754 + hr->hfir_local =
22755 + DWC_READ_REG32(&core_if->host_if->host_global_regs->hfir);
22756 +
22757 + DWC_DEBUGPL(DBG_ANY,
22758 + "=============Backing Host registers===============\n");
22759 + DWC_DEBUGPL(DBG_ANY, "Backed up hcfg = %08x\n",
22760 + hr->hcfg_local);
22761 + DWC_DEBUGPL(DBG_ANY, "Backed up haintmsk = %08x\n", hr->haintmsk_local);
22762 + for (i = 0; i < dwc_otg_get_param_host_channels(core_if); ++i) {
22763 + DWC_DEBUGPL(DBG_ANY, "Backed up hcintmsk[%02d]=%08x\n", i,
22764 + hr->hcintmsk_local[i]);
22765 + }
22766 + DWC_DEBUGPL(DBG_ANY, "Backed up hprt0 = %08x\n",
22767 + hr->hprt0_local);
22768 + DWC_DEBUGPL(DBG_ANY, "Backed up hfir = %08x\n",
22769 + hr->hfir_local);
22770 +
22771 + return 0;
22772 +}
22773 +
22774 +int dwc_otg_restore_global_regs(dwc_otg_core_if_t *core_if)
22775 +{
22776 + struct dwc_otg_global_regs_backup *gr;
22777 + int i;
22778 +
22779 + gr = core_if->gr_backup;
22780 + if (!gr) {
22781 + return -DWC_E_INVALID;
22782 + }
22783 +
22784 + DWC_WRITE_REG32(&core_if->core_global_regs->gotgctl, gr->gotgctl_local);
22785 + DWC_WRITE_REG32(&core_if->core_global_regs->gintmsk, gr->gintmsk_local);
22786 + DWC_WRITE_REG32(&core_if->core_global_regs->gusbcfg, gr->gusbcfg_local);
22787 + DWC_WRITE_REG32(&core_if->core_global_regs->gahbcfg, gr->gahbcfg_local);
22788 + DWC_WRITE_REG32(&core_if->core_global_regs->grxfsiz, gr->grxfsiz_local);
22789 + DWC_WRITE_REG32(&core_if->core_global_regs->gnptxfsiz,
22790 + gr->gnptxfsiz_local);
22791 + DWC_WRITE_REG32(&core_if->core_global_regs->hptxfsiz,
22792 + gr->hptxfsiz_local);
22793 + DWC_WRITE_REG32(&core_if->core_global_regs->gdfifocfg,
22794 + gr->gdfifocfg_local);
22795 + for (i = 0; i < MAX_EPS_CHANNELS; i++) {
22796 + DWC_WRITE_REG32(&core_if->core_global_regs->dtxfsiz[i],
22797 + gr->dtxfsiz_local[i]);
22798 + }
22799 +
22800 + DWC_WRITE_REG32(&core_if->core_global_regs->gintsts, 0xFFFFFFFF);
22801 + DWC_WRITE_REG32(core_if->host_if->hprt0, 0x0000100A);
22802 + DWC_WRITE_REG32(&core_if->core_global_regs->gahbcfg,
22803 + (gr->gahbcfg_local));
22804 + return 0;
22805 +}
22806 +
22807 +int dwc_otg_restore_dev_regs(dwc_otg_core_if_t * core_if, int rem_wakeup)
22808 +{
22809 + struct dwc_otg_dev_regs_backup *dr;
22810 + int i;
22811 +
22812 + dr = core_if->dr_backup;
22813 +
22814 + if (!dr) {
22815 + return -DWC_E_INVALID;
22816 + }
22817 +
22818 + if (!rem_wakeup) {
22819 + DWC_WRITE_REG32(&core_if->dev_if->dev_global_regs->dctl,
22820 + dr->dctl);
22821 + }
22822 +
22823 + DWC_WRITE_REG32(&core_if->dev_if->dev_global_regs->daintmsk, dr->daintmsk);
22824 + DWC_WRITE_REG32(&core_if->dev_if->dev_global_regs->diepmsk, dr->diepmsk);
22825 + DWC_WRITE_REG32(&core_if->dev_if->dev_global_regs->doepmsk, dr->doepmsk);
22826 +
22827 + for (i = 0; i < core_if->dev_if->num_in_eps; ++i) {
22828 + DWC_WRITE_REG32(&core_if->dev_if->in_ep_regs[i]->dieptsiz, dr->dieptsiz[i]);
22829 + DWC_WRITE_REG32(&core_if->dev_if->in_ep_regs[i]->diepdma, dr->diepdma[i]);
22830 + DWC_WRITE_REG32(&core_if->dev_if->in_ep_regs[i]->diepctl, dr->diepctl[i]);
22831 + }
22832 +
22833 + return 0;
22834 +}
22835 +
22836 +int dwc_otg_restore_host_regs(dwc_otg_core_if_t * core_if, int reset)
22837 +{
22838 + struct dwc_otg_host_regs_backup *hr;
22839 + int i;
22840 + hr = core_if->hr_backup;
22841 +
22842 + if (!hr) {
22843 + return -DWC_E_INVALID;
22844 + }
22845 +
22846 + DWC_WRITE_REG32(&core_if->host_if->host_global_regs->hcfg, hr->hcfg_local);
22847 + //if (!reset)
22848 + //{
22849 + // DWC_WRITE_REG32(&core_if->host_if->host_global_regs->hfir, hr->hfir_local);
22850 + //}
22851 +
22852 + DWC_WRITE_REG32(&core_if->host_if->host_global_regs->haintmsk,
22853 + hr->haintmsk_local);
22854 + for (i = 0; i < dwc_otg_get_param_host_channels(core_if); ++i) {
22855 + DWC_WRITE_REG32(&core_if->host_if->hc_regs[i]->hcintmsk,
22856 + hr->hcintmsk_local[i]);
22857 + }
22858 +
22859 + return 0;
22860 +}
22861 +
22862 +int restore_lpm_i2c_regs(dwc_otg_core_if_t * core_if)
22863 +{
22864 + struct dwc_otg_global_regs_backup *gr;
22865 +
22866 + gr = core_if->gr_backup;
22867 +
22868 + /* Restore values for LPM and I2C */
22869 +#ifdef CONFIG_USB_DWC_OTG_LPM
22870 + DWC_WRITE_REG32(&core_if->core_global_regs->glpmcfg, gr->glpmcfg_local);
22871 +#endif
22872 + DWC_WRITE_REG32(&core_if->core_global_regs->gi2cctl, gr->gi2cctl_local);
22873 +
22874 + return 0;
22875 +}
22876 +
22877 +int restore_essential_regs(dwc_otg_core_if_t * core_if, int rmode, int is_host)
22878 +{
22879 + struct dwc_otg_global_regs_backup *gr;
22880 + pcgcctl_data_t pcgcctl = {.d32 = 0 };
22881 + gahbcfg_data_t gahbcfg = {.d32 = 0 };
22882 + gusbcfg_data_t gusbcfg = {.d32 = 0 };
22883 + gintmsk_data_t gintmsk = {.d32 = 0 };
22884 +
22885 + /* Restore LPM and I2C registers */
22886 + restore_lpm_i2c_regs(core_if);
22887 +
22888 + /* Set PCGCCTL to 0 */
22889 + DWC_WRITE_REG32(core_if->pcgcctl, 0x00000000);
22890 +
22891 + gr = core_if->gr_backup;
22892 + /* Load restore values for [31:14] bits */
22893 + DWC_WRITE_REG32(core_if->pcgcctl,
22894 + ((gr->pcgcctl_local & 0xffffc000) | 0x00020000));
22895 +
22896 +	/* Unmask the global interrupt in GAHBCFG and restore it */
22897 + gahbcfg.d32 = gr->gahbcfg_local;
22898 + gahbcfg.b.glblintrmsk = 1;
22899 + DWC_WRITE_REG32(&core_if->core_global_regs->gahbcfg, gahbcfg.d32);
22900 +
22901 +	/* Clear all pending interrupts */
22902 + DWC_WRITE_REG32(&core_if->core_global_regs->gintsts, 0xFFFFFFFF);
22903 +
22904 + /* Unmask restore done interrupt */
22905 + gintmsk.b.restoredone = 1;
22906 + DWC_WRITE_REG32(&core_if->core_global_regs->gintmsk, gintmsk.d32);
22907 +
22908 + /* Restore GUSBCFG and HCFG/DCFG */
22909 + gusbcfg.d32 = core_if->gr_backup->gusbcfg_local;
22910 + DWC_WRITE_REG32(&core_if->core_global_regs->gusbcfg, gusbcfg.d32);
22911 +
22912 + if (is_host) {
22913 + hcfg_data_t hcfg = {.d32 = 0 };
22914 + hcfg.d32 = core_if->hr_backup->hcfg_local;
22915 + DWC_WRITE_REG32(&core_if->host_if->host_global_regs->hcfg,
22916 + hcfg.d32);
22917 +
22918 + /* Load restore values for [31:14] bits */
22919 + pcgcctl.d32 = gr->pcgcctl_local & 0xffffc000;
22920 + pcgcctl.d32 = gr->pcgcctl_local | 0x00020000;
22921 +
22922 + if (rmode)
22923 + pcgcctl.b.restoremode = 1;
22924 + DWC_WRITE_REG32(core_if->pcgcctl, pcgcctl.d32);
22925 + dwc_udelay(10);
22926 +
22927 + /* Load restore values for [31:14] bits and set EssRegRestored bit */
22928 + pcgcctl.d32 = gr->pcgcctl_local | 0xffffc000;
22929 + pcgcctl.d32 = gr->pcgcctl_local & 0xffffc000;
22930 + pcgcctl.b.ess_reg_restored = 1;
22931 + if (rmode)
22932 + pcgcctl.b.restoremode = 1;
22933 + DWC_WRITE_REG32(core_if->pcgcctl, pcgcctl.d32);
22934 + } else {
22935 + dcfg_data_t dcfg = {.d32 = 0 };
22936 + dcfg.d32 = core_if->dr_backup->dcfg;
22937 + DWC_WRITE_REG32(&core_if->dev_if->dev_global_regs->dcfg, dcfg.d32);
22938 +
22939 + /* Load restore values for [31:14] bits */
22940 + pcgcctl.d32 = gr->pcgcctl_local & 0xffffc000;
22941 + pcgcctl.d32 = gr->pcgcctl_local | 0x00020000;
22942 + if (!rmode) {
22943 + pcgcctl.d32 |= 0x208;
22944 + }
22945 + DWC_WRITE_REG32(core_if->pcgcctl, pcgcctl.d32);
22946 + dwc_udelay(10);
22947 +
22948 + /* Load restore values for [31:14] bits */
22949 + pcgcctl.d32 = gr->pcgcctl_local & 0xffffc000;
22950 + pcgcctl.d32 = gr->pcgcctl_local | 0x00020000;
22951 + pcgcctl.b.ess_reg_restored = 1;
22952 + if (!rmode)
22953 + pcgcctl.d32 |= 0x208;
22954 + DWC_WRITE_REG32(core_if->pcgcctl, pcgcctl.d32);
22955 + }
22956 +
22957 + return 0;
22958 +}
22959 +
22960 +/**
22961 + * Initializes the FSLSPClkSel field of the HCFG register depending on the PHY
22962 + * type.
22963 + */
22964 +static void init_fslspclksel(dwc_otg_core_if_t * core_if)
22965 +{
22966 + uint32_t val;
22967 + hcfg_data_t hcfg;
22968 +
22969 + if (((core_if->hwcfg2.b.hs_phy_type == 2) &&
22970 + (core_if->hwcfg2.b.fs_phy_type == 1) &&
22971 + (core_if->core_params->ulpi_fs_ls)) ||
22972 + (core_if->core_params->phy_type == DWC_PHY_TYPE_PARAM_FS)) {
22973 + /* Full speed PHY */
22974 + val = DWC_HCFG_48_MHZ;
22975 + } else {
22976 + /* High speed PHY running at full speed or high speed */
22977 + val = DWC_HCFG_30_60_MHZ;
22978 + }
22979 +
22980 + DWC_DEBUGPL(DBG_CIL, "Initializing HCFG.FSLSPClkSel to 0x%1x\n", val);
22981 + hcfg.d32 = DWC_READ_REG32(&core_if->host_if->host_global_regs->hcfg);
22982 + hcfg.b.fslspclksel = val;
22983 + DWC_WRITE_REG32(&core_if->host_if->host_global_regs->hcfg, hcfg.d32);
22984 +}
22985 +
22986 +/**
22987 + * Initializes the DevSpd field of the DCFG register depending on the PHY type
22988 + * and the enumeration speed of the device.
22989 + */
22990 +static void init_devspd(dwc_otg_core_if_t * core_if)
22991 +{
22992 + uint32_t val;
22993 + dcfg_data_t dcfg;
22994 +
22995 + if (((core_if->hwcfg2.b.hs_phy_type == 2) &&
22996 + (core_if->hwcfg2.b.fs_phy_type == 1) &&
22997 + (core_if->core_params->ulpi_fs_ls)) ||
22998 + (core_if->core_params->phy_type == DWC_PHY_TYPE_PARAM_FS)) {
22999 + /* Full speed PHY */
23000 + val = 0x3;
23001 + } else if (core_if->core_params->speed == DWC_SPEED_PARAM_FULL) {
23002 + /* High speed PHY running at full speed */
23003 + val = 0x1;
23004 + } else {
23005 + /* High speed PHY running at high speed */
23006 + val = 0x0;
23007 + }
23008 +
23009 + DWC_DEBUGPL(DBG_CIL, "Initializing DCFG.DevSpd to 0x%1x\n", val);
23010 +
23011 + dcfg.d32 = DWC_READ_REG32(&core_if->dev_if->dev_global_regs->dcfg);
23012 + dcfg.b.devspd = val;
23013 + DWC_WRITE_REG32(&core_if->dev_if->dev_global_regs->dcfg, dcfg.d32);
23014 +}
23015 +
23016 +/**
23017 + * This function calculates the number of IN EPs
23018 + * using the GHWCFG1 and GHWCFG2 register values
23019 + *
23020 + * @param core_if Programming view of the DWC_otg controller
23021 + */
23022 +static uint32_t calc_num_in_eps(dwc_otg_core_if_t * core_if)
23023 +{
23024 + uint32_t num_in_eps = 0;
23025 + uint32_t num_eps = core_if->hwcfg2.b.num_dev_ep;
23026 + uint32_t hwcfg1 = core_if->hwcfg1.d32 >> 3;
23027 + uint32_t num_tx_fifos = core_if->hwcfg4.b.num_in_eps;
23028 + int i;
23029 +
23030 + for (i = 0; i < num_eps; ++i) {
23031 + if (!(hwcfg1 & 0x1))
23032 + num_in_eps++;
23033 +
23034 + hwcfg1 >>= 2;
23035 + }
23036 +
23037 + if (core_if->hwcfg4.b.ded_fifo_en) {
23038 + num_in_eps =
23039 + (num_in_eps > num_tx_fifos) ? num_tx_fifos : num_in_eps;
23040 + }
23041 +
23042 + return num_in_eps;
23043 +}
23044 +
23045 +/**
23046 + * This function calculates the number of OUT EPs
23047 + * using the GHWCFG1 and GHWCFG2 register values
23048 + *
23049 + * @param core_if Programming view of the DWC_otg controller
23050 + */
23051 +static uint32_t calc_num_out_eps(dwc_otg_core_if_t * core_if)
23052 +{
23053 + uint32_t num_out_eps = 0;
23054 + uint32_t num_eps = core_if->hwcfg2.b.num_dev_ep;
23055 + uint32_t hwcfg1 = core_if->hwcfg1.d32 >> 2;
23056 + int i;
23057 +
23058 + for (i = 0; i < num_eps; ++i) {
23059 + if (!(hwcfg1 & 0x1))
23060 + num_out_eps++;
23061 +
23062 + hwcfg1 >>= 2;
23063 + }
23064 + return num_out_eps;
23065 +}
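To make the two loops above concrete, a worked example with a hypothetical GHWCFG1 value of 0x00000088 and hwcfg2.num_dev_ep = 2: the IN loop starts from 0x88 >> 3 = 0x11 and tests bit 0 of each successive 2-bit field (1, then 0), giving num_in_eps = 1 before the dedicated-FIFO clamp; the OUT loop starts from 0x88 >> 2 = 0x22 and sees 0, then 0, giving num_out_eps = 2.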
23066 +
23067 +/**
23068 + * This function initializes the DWC_otg controller registers and
23069 + * prepares the core for device mode or host mode operation.
23070 + *
23071 + * @param core_if Programming view of the DWC_otg controller
23072 + *
23073 + */
23074 +void dwc_otg_core_init(dwc_otg_core_if_t * core_if)
23075 +{
23076 + int i = 0;
23077 + dwc_otg_core_global_regs_t *global_regs = core_if->core_global_regs;
23078 + dwc_otg_dev_if_t *dev_if = core_if->dev_if;
23079 + gahbcfg_data_t ahbcfg = {.d32 = 0 };
23080 + gusbcfg_data_t usbcfg = {.d32 = 0 };
23081 + gi2cctl_data_t i2cctl = {.d32 = 0 };
23082 +
23083 + DWC_DEBUGPL(DBG_CILV, "dwc_otg_core_init(%p) regs at %p\n",
23084 + core_if, global_regs);
23085 +
23086 + /* Common Initialization */
23087 + usbcfg.d32 = DWC_READ_REG32(&global_regs->gusbcfg);
23088 +
23089 + /* Program the ULPI External VBUS bit if needed */
23090 + usbcfg.b.ulpi_ext_vbus_drv =
23091 + (core_if->core_params->phy_ulpi_ext_vbus ==
23092 + DWC_PHY_ULPI_EXTERNAL_VBUS) ? 1 : 0;
23093 +
23094 + /* Set external TS Dline pulsing */
23095 + usbcfg.b.term_sel_dl_pulse =
23096 + (core_if->core_params->ts_dline == 1) ? 1 : 0;
23097 + DWC_WRITE_REG32(&global_regs->gusbcfg, usbcfg.d32);
23098 +
23099 + /* Reset the Controller */
23100 + dwc_otg_core_reset(core_if);
23101 +
23102 + core_if->adp_enable = core_if->core_params->adp_supp_enable;
23103 + core_if->power_down = core_if->core_params->power_down;
23104 + core_if->otg_sts = 0;
23105 +
23106 + /* Initialize parameters from Hardware configuration registers. */
23107 + dev_if->num_in_eps = calc_num_in_eps(core_if);
23108 + dev_if->num_out_eps = calc_num_out_eps(core_if);
23109 +
23110 + DWC_DEBUGPL(DBG_CIL, "num_dev_perio_in_ep=%d\n",
23111 + core_if->hwcfg4.b.num_dev_perio_in_ep);
23112 +
23113 + for (i = 0; i < core_if->hwcfg4.b.num_dev_perio_in_ep; i++) {
23114 + dev_if->perio_tx_fifo_size[i] =
23115 + DWC_READ_REG32(&global_regs->dtxfsiz[i]) >> 16;
23116 + DWC_DEBUGPL(DBG_CIL, "Periodic Tx FIFO SZ #%d=0x%0x\n",
23117 + i, dev_if->perio_tx_fifo_size[i]);
23118 + }
23119 +
23120 + for (i = 0; i < core_if->hwcfg4.b.num_in_eps; i++) {
23121 + dev_if->tx_fifo_size[i] =
23122 + DWC_READ_REG32(&global_regs->dtxfsiz[i]) >> 16;
23123 + DWC_DEBUGPL(DBG_CIL, "Tx FIFO SZ #%d=0x%0x\n",
23124 + i, dev_if->tx_fifo_size[i]);
23125 + }
23126 +
23127 + core_if->total_fifo_size = core_if->hwcfg3.b.dfifo_depth;
23128 + core_if->rx_fifo_size = DWC_READ_REG32(&global_regs->grxfsiz);
23129 + core_if->nperio_tx_fifo_size =
23130 + DWC_READ_REG32(&global_regs->gnptxfsiz) >> 16;
23131 +
23132 + DWC_DEBUGPL(DBG_CIL, "Total FIFO SZ=%d\n", core_if->total_fifo_size);
23133 + DWC_DEBUGPL(DBG_CIL, "Rx FIFO SZ=%d\n", core_if->rx_fifo_size);
23134 + DWC_DEBUGPL(DBG_CIL, "NP Tx FIFO SZ=%d\n",
23135 + core_if->nperio_tx_fifo_size);
23136 +
23137 + /* This programming sequence needs to happen in FS mode before any other
23138 + * programming occurs */
23139 + if ((core_if->core_params->speed == DWC_SPEED_PARAM_FULL) &&
23140 + (core_if->core_params->phy_type == DWC_PHY_TYPE_PARAM_FS)) {
23141 + /* If FS mode with FS PHY */
23142 +
23143 + /* core_init() is now called on every switch so only call the
23144 + * following for the first time through. */
23145 + if (!core_if->phy_init_done) {
23146 + core_if->phy_init_done = 1;
23147 + DWC_DEBUGPL(DBG_CIL, "FS_PHY detected\n");
23148 + usbcfg.d32 = DWC_READ_REG32(&global_regs->gusbcfg);
23149 + usbcfg.b.physel = 1;
23150 + DWC_WRITE_REG32(&global_regs->gusbcfg, usbcfg.d32);
23151 +
23152 + /* Reset after a PHY select */
23153 + dwc_otg_core_reset(core_if);
23154 + }
23155 +
23156 + /* Program DCFG.DevSpd or HCFG.FSLSPclkSel to 48Mhz in FS. Also
23157 + * do this on HNP Dev/Host mode switches (done in dev_init and
23158 + * host_init). */
23159 + if (dwc_otg_is_host_mode(core_if)) {
23160 + init_fslspclksel(core_if);
23161 + } else {
23162 + init_devspd(core_if);
23163 + }
23164 +
23165 + if (core_if->core_params->i2c_enable) {
23166 + DWC_DEBUGPL(DBG_CIL, "FS_PHY Enabling I2c\n");
23167 + /* Program GUSBCFG.OtgUtmifsSel to I2C */
23168 + usbcfg.d32 = DWC_READ_REG32(&global_regs->gusbcfg);
23169 + usbcfg.b.otgutmifssel = 1;
23170 + DWC_WRITE_REG32(&global_regs->gusbcfg, usbcfg.d32);
23171 +
23172 + /* Program GI2CCTL.I2CEn */
23173 + i2cctl.d32 = DWC_READ_REG32(&global_regs->gi2cctl);
23174 + i2cctl.b.i2cdevaddr = 1;
23175 + i2cctl.b.i2cen = 0;
23176 + DWC_WRITE_REG32(&global_regs->gi2cctl, i2cctl.d32);
23177 + i2cctl.b.i2cen = 1;
23178 + DWC_WRITE_REG32(&global_regs->gi2cctl, i2cctl.d32);
23179 + }
23180 +
23181 + } /* endif speed == DWC_SPEED_PARAM_FULL */
23182 + else {
23183 + /* High speed PHY. */
23184 + if (!core_if->phy_init_done) {
23185 + core_if->phy_init_done = 1;
23186 + /* HS PHY parameters. These parameters are preserved
23187 + * during soft reset so only program the first time. Do
23188 + * a soft reset immediately after setting phyif. */
23189 +
23190 + if (core_if->core_params->phy_type == 2) {
23191 + /* ULPI interface */
23192 + usbcfg.b.ulpi_utmi_sel = 1;
23193 + usbcfg.b.phyif = 0;
23194 + usbcfg.b.ddrsel =
23195 + core_if->core_params->phy_ulpi_ddr;
23196 + } else if (core_if->core_params->phy_type == 1) {
23197 + /* UTMI+ interface */
23198 + usbcfg.b.ulpi_utmi_sel = 0;
23199 + if (core_if->core_params->phy_utmi_width == 16) {
23200 + usbcfg.b.phyif = 1;
23201 +
23202 + } else {
23203 + usbcfg.b.phyif = 0;
23204 + }
23205 + } else {
23206 + DWC_ERROR("FS PHY TYPE\n");
23207 + }
23208 + DWC_WRITE_REG32(&global_regs->gusbcfg, usbcfg.d32);
23209 + /* Reset after setting the PHY parameters */
23210 + dwc_otg_core_reset(core_if);
23211 + }
23212 + }
23213 +
23214 + if ((core_if->hwcfg2.b.hs_phy_type == 2) &&
23215 + (core_if->hwcfg2.b.fs_phy_type == 1) &&
23216 + (core_if->core_params->ulpi_fs_ls)) {
23217 + DWC_DEBUGPL(DBG_CIL, "Setting ULPI FSLS\n");
23218 + usbcfg.d32 = DWC_READ_REG32(&global_regs->gusbcfg);
23219 + usbcfg.b.ulpi_fsls = 1;
23220 + usbcfg.b.ulpi_clk_sus_m = 1;
23221 + DWC_WRITE_REG32(&global_regs->gusbcfg, usbcfg.d32);
23222 + } else {
23223 + usbcfg.d32 = DWC_READ_REG32(&global_regs->gusbcfg);
23224 + usbcfg.b.ulpi_fsls = 0;
23225 + usbcfg.b.ulpi_clk_sus_m = 0;
23226 + DWC_WRITE_REG32(&global_regs->gusbcfg, usbcfg.d32);
23227 + }
23228 +
23229 + /* Program the GAHBCFG Register. */
23230 + switch (core_if->hwcfg2.b.architecture) {
23231 +
23232 + case DWC_SLAVE_ONLY_ARCH:
23233 + DWC_DEBUGPL(DBG_CIL, "Slave Only Mode\n");
23234 + ahbcfg.b.nptxfemplvl_txfemplvl =
23235 + DWC_GAHBCFG_TXFEMPTYLVL_HALFEMPTY;
23236 + ahbcfg.b.ptxfemplvl = DWC_GAHBCFG_TXFEMPTYLVL_HALFEMPTY;
23237 + core_if->dma_enable = 0;
23238 + core_if->dma_desc_enable = 0;
23239 + break;
23240 +
23241 + case DWC_EXT_DMA_ARCH:
23242 + DWC_DEBUGPL(DBG_CIL, "External DMA Mode\n");
23243 + {
23244 + uint8_t brst_sz = core_if->core_params->dma_burst_size;
23245 + ahbcfg.b.hburstlen = 0;
23246 + while (brst_sz > 1) {
23247 + ahbcfg.b.hburstlen++;
23248 + brst_sz >>= 1;
23249 + }
23250 + }
23251 + core_if->dma_enable = (core_if->core_params->dma_enable != 0);
23252 + core_if->dma_desc_enable =
23253 + (core_if->core_params->dma_desc_enable != 0);
23254 + break;
23255 +
23256 + case DWC_INT_DMA_ARCH:
23257 + DWC_DEBUGPL(DBG_CIL, "Internal DMA Mode\n");
23258 + /* Old value was DWC_GAHBCFG_INT_DMA_BURST_INCR - done for
23259 + Host mode ISOC in issue fix - vahrama */
23260 + /* Broadcom had altered to (1<<3)|(0<<0) - WRESP=1, max 4 beats */
23261 + ahbcfg.b.hburstlen = (1<<3)|(0<<0);//DWC_GAHBCFG_INT_DMA_BURST_INCR4;
23262 + core_if->dma_enable = (core_if->core_params->dma_enable != 0);
23263 + core_if->dma_desc_enable =
23264 + (core_if->core_params->dma_desc_enable != 0);
23265 + break;
23266 +
23267 + }
23268 + if (core_if->dma_enable) {
23269 + if (core_if->dma_desc_enable) {
23270 + DWC_PRINTF("Using Descriptor DMA mode\n");
23271 + } else {
23272 + DWC_PRINTF("Using Buffer DMA mode\n");
23273 +
23274 + }
23275 + } else {
23276 + DWC_PRINTF("Using Slave mode\n");
23277 + core_if->dma_desc_enable = 0;
23278 + }
23279 +
23280 + if (core_if->core_params->ahb_single) {
23281 + ahbcfg.b.ahbsingle = 1;
23282 + }
23283 +
23284 + ahbcfg.b.dmaenable = core_if->dma_enable;
23285 + DWC_WRITE_REG32(&global_regs->gahbcfg, ahbcfg.d32);
23286 +
23287 + core_if->en_multiple_tx_fifo = core_if->hwcfg4.b.ded_fifo_en;
23288 +
23289 + core_if->pti_enh_enable = core_if->core_params->pti_enable != 0;
23290 + core_if->multiproc_int_enable = core_if->core_params->mpi_enable;
23291 + DWC_PRINTF("Periodic Transfer Interrupt Enhancement - %s\n",
23292 + ((core_if->pti_enh_enable) ? "enabled" : "disabled"));
23293 + DWC_PRINTF("Multiprocessor Interrupt Enhancement - %s\n",
23294 + ((core_if->multiproc_int_enable) ? "enabled" : "disabled"));
23295 +
23296 + /*
23297 + * Program the GUSBCFG register.
23298 + */
23299 + usbcfg.d32 = DWC_READ_REG32(&global_regs->gusbcfg);
23300 +
23301 + switch (core_if->hwcfg2.b.op_mode) {
23302 + case DWC_MODE_HNP_SRP_CAPABLE:
23303 + usbcfg.b.hnpcap = (core_if->core_params->otg_cap ==
23304 + DWC_OTG_CAP_PARAM_HNP_SRP_CAPABLE);
23305 + usbcfg.b.srpcap = (core_if->core_params->otg_cap !=
23306 + DWC_OTG_CAP_PARAM_NO_HNP_SRP_CAPABLE);
23307 + break;
23308 +
23309 + case DWC_MODE_SRP_ONLY_CAPABLE:
23310 + usbcfg.b.hnpcap = 0;
23311 + usbcfg.b.srpcap = (core_if->core_params->otg_cap !=
23312 + DWC_OTG_CAP_PARAM_NO_HNP_SRP_CAPABLE);
23313 + break;
23314 +
23315 + case DWC_MODE_NO_HNP_SRP_CAPABLE:
23316 + usbcfg.b.hnpcap = 0;
23317 + usbcfg.b.srpcap = 0;
23318 + break;
23319 +
23320 + case DWC_MODE_SRP_CAPABLE_DEVICE:
23321 + usbcfg.b.hnpcap = 0;
23322 + usbcfg.b.srpcap = (core_if->core_params->otg_cap !=
23323 + DWC_OTG_CAP_PARAM_NO_HNP_SRP_CAPABLE);
23324 + break;
23325 +
23326 + case DWC_MODE_NO_SRP_CAPABLE_DEVICE:
23327 + usbcfg.b.hnpcap = 0;
23328 + usbcfg.b.srpcap = 0;
23329 + break;
23330 +
23331 + case DWC_MODE_SRP_CAPABLE_HOST:
23332 + usbcfg.b.hnpcap = 0;
23333 + usbcfg.b.srpcap = (core_if->core_params->otg_cap !=
23334 + DWC_OTG_CAP_PARAM_NO_HNP_SRP_CAPABLE);
23335 + break;
23336 +
23337 + case DWC_MODE_NO_SRP_CAPABLE_HOST:
23338 + usbcfg.b.hnpcap = 0;
23339 + usbcfg.b.srpcap = 0;
23340 + break;
23341 + }
23342 +
23343 + DWC_WRITE_REG32(&global_regs->gusbcfg, usbcfg.d32);
23344 +
23345 +#ifdef CONFIG_USB_DWC_OTG_LPM
23346 + if (core_if->core_params->lpm_enable) {
23347 + glpmcfg_data_t lpmcfg = {.d32 = 0 };
23348 +
23349 + /* To enable LPM support set lpm_cap_en bit */
23350 + lpmcfg.b.lpm_cap_en = 1;
23351 +
23352 + /* Make AppL1Res ACK */
23353 + lpmcfg.b.appl_resp = 1;
23354 +
23355 + /* Retry 3 times */
23356 + lpmcfg.b.retry_count = 3;
23357 +
23358 + DWC_MODIFY_REG32(&core_if->core_global_regs->glpmcfg,
23359 + 0, lpmcfg.d32);
23360 +
23361 + }
23362 +#endif
23363 + if (core_if->core_params->ic_usb_cap) {
23364 + gusbcfg_data_t gusbcfg = {.d32 = 0 };
23365 + gusbcfg.b.ic_usb_cap = 1;
23366 + DWC_MODIFY_REG32(&core_if->core_global_regs->gusbcfg,
23367 + 0, gusbcfg.d32);
23368 + }
23369 + {
23370 + gotgctl_data_t gotgctl = {.d32 = 0 };
23371 + gotgctl.b.otgver = core_if->core_params->otg_ver;
23372 + DWC_MODIFY_REG32(&core_if->core_global_regs->gotgctl, 0,
23373 + gotgctl.d32);
23374 + /* Set OTG version supported */
23375 + core_if->otg_ver = core_if->core_params->otg_ver;
23376 + DWC_PRINTF("OTG VER PARAM: %d, OTG VER FLAG: %d\n",
23377 + core_if->core_params->otg_ver, core_if->otg_ver);
23378 + }
23379 +
23380 +
23381 + /* Enable common interrupts */
23382 + dwc_otg_enable_common_interrupts(core_if);
23383 +
23384 +	/* Do device or host initialization based on mode during PCD
23385 +	 * and HCD initialization */
23386 + if (dwc_otg_is_host_mode(core_if)) {
23387 + DWC_DEBUGPL(DBG_ANY, "Host Mode\n");
23388 + core_if->op_state = A_HOST;
23389 + } else {
23390 + DWC_DEBUGPL(DBG_ANY, "Device Mode\n");
23391 + core_if->op_state = B_PERIPHERAL;
23392 +#ifdef DWC_DEVICE_ONLY
23393 + dwc_otg_core_dev_init(core_if);
23394 +#endif
23395 + }
23396 +}
23397 +
23398 +/**
23399 + * This function enables the Device mode interrupts.
23400 + *
23401 + * @param core_if Programming view of DWC_otg controller
23402 + */
23403 +void dwc_otg_enable_device_interrupts(dwc_otg_core_if_t * core_if)
23404 +{
23405 + gintmsk_data_t intr_mask = {.d32 = 0 };
23406 + dwc_otg_core_global_regs_t *global_regs = core_if->core_global_regs;
23407 +
23408 + DWC_DEBUGPL(DBG_CIL, "%s()\n", __func__);
23409 +
23410 + /* Disable all interrupts. */
23411 + DWC_WRITE_REG32(&global_regs->gintmsk, 0);
23412 +
23413 + /* Clear any pending interrupts */
23414 + DWC_WRITE_REG32(&global_regs->gintsts, 0xFFFFFFFF);
23415 +
23416 + /* Enable the common interrupts */
23417 + dwc_otg_enable_common_interrupts(core_if);
23418 +
23419 + /* Enable interrupts */
23420 + intr_mask.b.usbreset = 1;
23421 + intr_mask.b.enumdone = 1;
23422 + /* Disable Disconnect interrupt in Device mode */
23423 + intr_mask.b.disconnect = 0;
23424 +
23425 + if (!core_if->multiproc_int_enable) {
23426 + intr_mask.b.inepintr = 1;
23427 + intr_mask.b.outepintr = 1;
23428 + }
23429 +
23430 + intr_mask.b.erlysuspend = 1;
23431 +
23432 + if (core_if->en_multiple_tx_fifo == 0) {
23433 + intr_mask.b.epmismatch = 1;
23434 + }
23435 +
23436 + //intr_mask.b.incomplisoout = 1;
23437 + intr_mask.b.incomplisoin = 1;
23438 +
23439 +/* Enable the ignore frame number for ISOC xfers - MAS */
23440 +/* Disable to support high bandwidth ISOC transfers - manukz */
23441 +#if 0
23442 +#ifdef DWC_UTE_PER_IO
23443 + if (core_if->dma_enable) {
23444 + if (core_if->dma_desc_enable) {
23445 + dctl_data_t dctl1 = {.d32 = 0 };
23446 + dctl1.b.ifrmnum = 1;
23447 + DWC_MODIFY_REG32(&core_if->dev_if->dev_global_regs->
23448 + dctl, 0, dctl1.d32);
23449 + DWC_DEBUG("----Enabled Ignore frame number (0x%08x)",
23450 + DWC_READ_REG32(&core_if->dev_if->
23451 + dev_global_regs->dctl));
23452 + }
23453 + }
23454 +#endif
23455 +#endif
23456 +#ifdef DWC_EN_ISOC
23457 + if (core_if->dma_enable) {
23458 + if (core_if->dma_desc_enable == 0) {
23459 + if (core_if->pti_enh_enable) {
23460 + dctl_data_t dctl = {.d32 = 0 };
23461 + dctl.b.ifrmnum = 1;
23462 + DWC_MODIFY_REG32(&core_if->
23463 + dev_if->dev_global_regs->dctl,
23464 + 0, dctl.d32);
23465 + } else {
23466 + intr_mask.b.incomplisoin = 1;
23467 + intr_mask.b.incomplisoout = 1;
23468 + }
23469 + }
23470 + } else {
23471 + intr_mask.b.incomplisoin = 1;
23472 + intr_mask.b.incomplisoout = 1;
23473 + }
23474 +#endif /* DWC_EN_ISOC */
23475 +
23476 + /** @todo NGS: Should this be a module parameter? */
23477 +#ifdef USE_PERIODIC_EP
23478 + intr_mask.b.isooutdrop = 1;
23479 + intr_mask.b.eopframe = 1;
23480 + intr_mask.b.incomplisoin = 1;
23481 + intr_mask.b.incomplisoout = 1;
23482 +#endif
23483 +
23484 + DWC_MODIFY_REG32(&global_regs->gintmsk, intr_mask.d32, intr_mask.d32);
23485 +
23486 + DWC_DEBUGPL(DBG_CIL, "%s() gintmsk=%0x\n", __func__,
23487 + DWC_READ_REG32(&global_regs->gintmsk));
23488 +}
23489 +
23490 +/**
23491 + * This function initializes the DWC_otg controller registers for
23492 + * device mode.
23493 + *
23494 + * @param core_if Programming view of DWC_otg controller
23495 + *
23496 + */
23497 +void dwc_otg_core_dev_init(dwc_otg_core_if_t * core_if)
23498 +{
23499 + int i;
23500 + dwc_otg_core_global_regs_t *global_regs = core_if->core_global_regs;
23501 + dwc_otg_dev_if_t *dev_if = core_if->dev_if;
23502 + dwc_otg_core_params_t *params = core_if->core_params;
23503 + dcfg_data_t dcfg = {.d32 = 0 };
23504 + depctl_data_t diepctl = {.d32 = 0 };
23505 + grstctl_t resetctl = {.d32 = 0 };
23506 + uint32_t rx_fifo_size;
23507 + fifosize_data_t nptxfifosize;
23508 + fifosize_data_t txfifosize;
23509 + dthrctl_data_t dthrctl;
23510 + fifosize_data_t ptxfifosize;
23511 + uint16_t rxfsiz, nptxfsiz;
23512 + gdfifocfg_data_t gdfifocfg = {.d32 = 0 };
23513 + hwcfg3_data_t hwcfg3 = {.d32 = 0 };
23514 +
23515 + /* Restart the Phy Clock */
23516 + DWC_WRITE_REG32(core_if->pcgcctl, 0);
23517 +
23518 + /* Device configuration register */
23519 + init_devspd(core_if);
23520 + dcfg.d32 = DWC_READ_REG32(&dev_if->dev_global_regs->dcfg);
23521 + dcfg.b.descdma = (core_if->dma_desc_enable) ? 1 : 0;
23522 + dcfg.b.perfrint = DWC_DCFG_FRAME_INTERVAL_80;
23523 + /* Enable Device OUT NAK in case of DDMA mode*/
23524 + if (core_if->core_params->dev_out_nak) {
23525 + dcfg.b.endevoutnak = 1;
23526 + }
23527 +
23528 + if (core_if->core_params->cont_on_bna) {
23529 + dctl_data_t dctl = {.d32 = 0 };
23530 + dctl.b.encontonbna = 1;
23531 + DWC_MODIFY_REG32(&dev_if->dev_global_regs->dctl, 0, dctl.d32);
23532 + }
23533 +
23534 +
23535 + DWC_WRITE_REG32(&dev_if->dev_global_regs->dcfg, dcfg.d32);
23536 +
23537 + /* Configure data FIFO sizes */
23538 + if (core_if->hwcfg2.b.dynamic_fifo && params->enable_dynamic_fifo) {
23539 + DWC_DEBUGPL(DBG_CIL, "Total FIFO Size=%d\n",
23540 + core_if->total_fifo_size);
23541 + DWC_DEBUGPL(DBG_CIL, "Rx FIFO Size=%d\n",
23542 + params->dev_rx_fifo_size);
23543 + DWC_DEBUGPL(DBG_CIL, "NP Tx FIFO Size=%d\n",
23544 + params->dev_nperio_tx_fifo_size);
23545 +
23546 + /* Rx FIFO */
23547 + DWC_DEBUGPL(DBG_CIL, "initial grxfsiz=%08x\n",
23548 + DWC_READ_REG32(&global_regs->grxfsiz));
23549 +
23550 +#ifdef DWC_UTE_CFI
23551 + core_if->pwron_rxfsiz = DWC_READ_REG32(&global_regs->grxfsiz);
23552 + core_if->init_rxfsiz = params->dev_rx_fifo_size;
23553 +#endif
23554 + rx_fifo_size = params->dev_rx_fifo_size;
23555 + DWC_WRITE_REG32(&global_regs->grxfsiz, rx_fifo_size);
23556 +
23557 + DWC_DEBUGPL(DBG_CIL, "new grxfsiz=%08x\n",
23558 + DWC_READ_REG32(&global_regs->grxfsiz));
23559 +
23560 + /** Set Periodic Tx FIFO Mask all bits 0 */
23561 + core_if->p_tx_msk = 0;
23562 +
23563 + /** Set Tx FIFO Mask all bits 0 */
23564 + core_if->tx_msk = 0;
23565 +
23566 + if (core_if->en_multiple_tx_fifo == 0) {
23567 + /* Non-periodic Tx FIFO */
23568 + DWC_DEBUGPL(DBG_CIL, "initial gnptxfsiz=%08x\n",
23569 + DWC_READ_REG32(&global_regs->gnptxfsiz));
23570 +
23571 + nptxfifosize.b.depth = params->dev_nperio_tx_fifo_size;
23572 + nptxfifosize.b.startaddr = params->dev_rx_fifo_size;
23573 +
23574 + DWC_WRITE_REG32(&global_regs->gnptxfsiz,
23575 + nptxfifosize.d32);
23576 +
23577 + DWC_DEBUGPL(DBG_CIL, "new gnptxfsiz=%08x\n",
23578 + DWC_READ_REG32(&global_regs->gnptxfsiz));
23579 +
23580 + /**@todo NGS: Fix Periodic FIFO Sizing! */
23581 + /*
23582 +	 * Periodic Tx FIFOs. These FIFOs are numbered from 1 to 15.
23583 + * Indexes of the FIFO size module parameters in the
23584 + * dev_perio_tx_fifo_size array and the FIFO size registers in
23585 + * the dptxfsiz array run from 0 to 14.
23586 + */
23587 + /** @todo Finish debug of this */
23588 + ptxfifosize.b.startaddr =
23589 + nptxfifosize.b.startaddr + nptxfifosize.b.depth;
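+			/* FIFO RAM is carved up contiguously: the Rx FIFO occupies
+			 * the start, the non-periodic Tx FIFO begins at
+			 * dev_rx_fifo_size, and each periodic Tx FIFO is stacked
+			 * directly after the previous one, so startaddr simply
+			 * accumulates the depths programmed in the loop below. */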
23590 + for (i = 0; i < core_if->hwcfg4.b.num_dev_perio_in_ep; i++) {
23591 + ptxfifosize.b.depth =
23592 + params->dev_perio_tx_fifo_size[i];
23593 + DWC_DEBUGPL(DBG_CIL,
23594 + "initial dtxfsiz[%d]=%08x\n", i,
23595 + DWC_READ_REG32(&global_regs->dtxfsiz
23596 + [i]));
23597 + DWC_WRITE_REG32(&global_regs->dtxfsiz[i],
23598 + ptxfifosize.d32);
23599 + DWC_DEBUGPL(DBG_CIL, "new dtxfsiz[%d]=%08x\n",
23600 + i,
23601 + DWC_READ_REG32(&global_regs->dtxfsiz
23602 + [i]));
23603 + ptxfifosize.b.startaddr += ptxfifosize.b.depth;
23604 + }
23605 + } else {
23606 + /*
23607 +	 * Tx FIFOs. These FIFOs are numbered from 1 to 15.
23608 + * Indexes of the FIFO size module parameters in the
23609 + * dev_tx_fifo_size array and the FIFO size registers in
23610 + * the dtxfsiz array run from 0 to 14.
23611 + */
23612 +
23613 + /* Non-periodic Tx FIFO */
23614 + DWC_DEBUGPL(DBG_CIL, "initial gnptxfsiz=%08x\n",
23615 + DWC_READ_REG32(&global_regs->gnptxfsiz));
23616 +
23617 +#ifdef DWC_UTE_CFI
23618 + core_if->pwron_gnptxfsiz =
23619 + (DWC_READ_REG32(&global_regs->gnptxfsiz) >> 16);
23620 + core_if->init_gnptxfsiz =
23621 + params->dev_nperio_tx_fifo_size;
23622 +#endif
23623 + nptxfifosize.b.depth = params->dev_nperio_tx_fifo_size;
23624 + nptxfifosize.b.startaddr = params->dev_rx_fifo_size;
23625 +
23626 + DWC_WRITE_REG32(&global_regs->gnptxfsiz,
23627 + nptxfifosize.d32);
23628 +
23629 + DWC_DEBUGPL(DBG_CIL, "new gnptxfsiz=%08x\n",
23630 + DWC_READ_REG32(&global_regs->gnptxfsiz));
23631 +
23632 + txfifosize.b.startaddr =
23633 + nptxfifosize.b.startaddr + nptxfifosize.b.depth;
23634 +
23635 + for (i = 0; i < core_if->hwcfg4.b.num_in_eps; i++) {
23636 +
23637 + txfifosize.b.depth =
23638 + params->dev_tx_fifo_size[i];
23639 +
23640 + DWC_DEBUGPL(DBG_CIL,
23641 + "initial dtxfsiz[%d]=%08x\n",
23642 + i,
23643 + DWC_READ_REG32(&global_regs->dtxfsiz
23644 + [i]));
23645 +
23646 +#ifdef DWC_UTE_CFI
23647 + core_if->pwron_txfsiz[i] =
23648 + (DWC_READ_REG32
23649 + (&global_regs->dtxfsiz[i]) >> 16);
23650 + core_if->init_txfsiz[i] =
23651 + params->dev_tx_fifo_size[i];
23652 +#endif
23653 + DWC_WRITE_REG32(&global_regs->dtxfsiz[i],
23654 + txfifosize.d32);
23655 +
23656 + DWC_DEBUGPL(DBG_CIL,
23657 + "new dtxfsiz[%d]=%08x\n",
23658 + i,
23659 + DWC_READ_REG32(&global_regs->dtxfsiz
23660 + [i]));
23661 +
23662 + txfifosize.b.startaddr += txfifosize.b.depth;
23663 + }
23664 + if (core_if->snpsid <= OTG_CORE_REV_2_94a) {
23665 + /* Calculating DFIFOCFG for Device mode to include RxFIFO and NPTXFIFO */
23666 + gdfifocfg.d32 = DWC_READ_REG32(&global_regs->gdfifocfg);
23667 + hwcfg3.d32 = DWC_READ_REG32(&global_regs->ghwcfg3);
23668 + gdfifocfg.b.gdfifocfg = (DWC_READ_REG32(&global_regs->ghwcfg3) >> 16);
23669 + DWC_WRITE_REG32(&global_regs->gdfifocfg, gdfifocfg.d32);
23670 + rxfsiz = (DWC_READ_REG32(&global_regs->grxfsiz) & 0x0000ffff);
23671 + nptxfsiz = (DWC_READ_REG32(&global_regs->gnptxfsiz) >> 16);
23672 + gdfifocfg.b.epinfobase = rxfsiz + nptxfsiz;
23673 + DWC_WRITE_REG32(&global_regs->gdfifocfg, gdfifocfg.d32);
23674 + }
23675 + }
23676 +
23677 + /* Flush the FIFOs */
23678 + dwc_otg_flush_tx_fifo(core_if, 0x10); /* all Tx FIFOs */
23679 + dwc_otg_flush_rx_fifo(core_if);
23680 +
23681 + /* Flush the Learning Queue. */
23682 + resetctl.b.intknqflsh = 1;
23683 + DWC_WRITE_REG32(&core_if->core_global_regs->grstctl, resetctl.d32);
23684 +
23685 + if (!core_if->core_params->en_multiple_tx_fifo && core_if->dma_enable) {
23686 + core_if->start_predict = 0;
23687 + for (i = 0; i<= core_if->dev_if->num_in_eps; ++i) {
23688 + core_if->nextep_seq[i] = 0xff; // 0xff - EP not active
23689 + }
23690 + core_if->nextep_seq[0] = 0;
23691 + core_if->first_in_nextep_seq = 0;
23692 + diepctl.d32 = DWC_READ_REG32(&dev_if->in_ep_regs[0]->diepctl);
23693 + diepctl.b.nextep = 0;
23694 + DWC_WRITE_REG32(&dev_if->in_ep_regs[0]->diepctl, diepctl.d32);
23695 +
23696 + /* Update IN Endpoint Mismatch Count by active IN NP EP count + 1 */
23697 + dcfg.d32 = DWC_READ_REG32(&dev_if->dev_global_regs->dcfg);
23698 + dcfg.b.epmscnt = 2;
23699 + DWC_WRITE_REG32(&dev_if->dev_global_regs->dcfg, dcfg.d32);
23700 +
23701 + DWC_DEBUGPL(DBG_CILV,"%s first_in_nextep_seq= %2d; nextep_seq[]:\n",
23702 + __func__, core_if->first_in_nextep_seq);
23703 + for (i=0; i <= core_if->dev_if->num_in_eps; i++) {
23704 + DWC_DEBUGPL(DBG_CILV, "%2d ", core_if->nextep_seq[i]);
23705 + }
23706 + DWC_DEBUGPL(DBG_CILV,"\n");
23707 + }
23708 +
23709 + /* Clear all pending Device Interrupts */
23710 +	/** @todo - does the condition need to be checked,
23711 +	 *  or should all pending interrupts be cleared in any case?
23712 +	 */
23713 + if (core_if->multiproc_int_enable) {
23714 + for (i = 0; i < core_if->dev_if->num_in_eps; ++i) {
23715 + DWC_WRITE_REG32(&dev_if->
23716 + dev_global_regs->diepeachintmsk[i], 0);
23717 + }
23718 + }
23719 +
23720 + for (i = 0; i < core_if->dev_if->num_out_eps; ++i) {
23721 + DWC_WRITE_REG32(&dev_if->
23722 + dev_global_regs->doepeachintmsk[i], 0);
23723 + }
23724 +
23725 + DWC_WRITE_REG32(&dev_if->dev_global_regs->deachint, 0xFFFFFFFF);
23726 + DWC_WRITE_REG32(&dev_if->dev_global_regs->deachintmsk, 0);
23727 + } else {
23728 + DWC_WRITE_REG32(&dev_if->dev_global_regs->diepmsk, 0);
23729 + DWC_WRITE_REG32(&dev_if->dev_global_regs->doepmsk, 0);
23730 + DWC_WRITE_REG32(&dev_if->dev_global_regs->daint, 0xFFFFFFFF);
23731 + DWC_WRITE_REG32(&dev_if->dev_global_regs->daintmsk, 0);
23732 + }
23733 +
23734 + for (i = 0; i <= dev_if->num_in_eps; i++) {
23735 + depctl_data_t depctl;
23736 + depctl.d32 = DWC_READ_REG32(&dev_if->in_ep_regs[i]->diepctl);
23737 + if (depctl.b.epena) {
23738 + depctl.d32 = 0;
23739 + depctl.b.epdis = 1;
23740 + depctl.b.snak = 1;
23741 + } else {
23742 + depctl.d32 = 0;
23743 + }
23744 +
23745 + DWC_WRITE_REG32(&dev_if->in_ep_regs[i]->diepctl, depctl.d32);
23746 +
23747 + DWC_WRITE_REG32(&dev_if->in_ep_regs[i]->dieptsiz, 0);
23748 + DWC_WRITE_REG32(&dev_if->in_ep_regs[i]->diepdma, 0);
23749 + DWC_WRITE_REG32(&dev_if->in_ep_regs[i]->diepint, 0xFF);
23750 + }
23751 +
23752 + for (i = 0; i <= dev_if->num_out_eps; i++) {
23753 + depctl_data_t depctl;
23754 + depctl.d32 = DWC_READ_REG32(&dev_if->out_ep_regs[i]->doepctl);
23755 + if (depctl.b.epena) {
23756 + dctl_data_t dctl = {.d32 = 0 };
23757 + gintmsk_data_t gintsts = {.d32 = 0 };
23758 + doepint_data_t doepint = {.d32 = 0 };
23759 + dctl.b.sgoutnak = 1;
23760 + DWC_MODIFY_REG32(&core_if->dev_if->dev_global_regs->dctl, 0, dctl.d32);
23761 + do {
23762 + dwc_udelay(10);
23763 + gintsts.d32 = DWC_READ_REG32(&core_if->core_global_regs->gintsts);
23764 + } while (!gintsts.b.goutnakeff);
23765 + gintsts.d32 = 0;
23766 + gintsts.b.goutnakeff = 1;
23767 + DWC_WRITE_REG32(&core_if->core_global_regs->gintsts, gintsts.d32);
23768 +
23769 + depctl.d32 = 0;
23770 + depctl.b.epdis = 1;
23771 + depctl.b.snak = 1;
23772 + DWC_WRITE_REG32(&core_if->dev_if->out_ep_regs[i]->doepctl, depctl.d32);
23773 + do {
23774 + dwc_udelay(10);
23775 + doepint.d32 = DWC_READ_REG32(&core_if->dev_if->
23776 + out_ep_regs[i]->doepint);
23777 + } while (!doepint.b.epdisabled);
23778 +
23779 + doepint.b.epdisabled = 1;
23780 + DWC_WRITE_REG32(&core_if->dev_if->out_ep_regs[i]->doepint, doepint.d32);
23781 +
23782 + dctl.d32 = 0;
23783 + dctl.b.cgoutnak = 1;
23784 + DWC_MODIFY_REG32(&core_if->dev_if->dev_global_regs->dctl, 0, dctl.d32);
23785 + } else {
23786 + depctl.d32 = 0;
23787 + }
23788 +
23789 + DWC_WRITE_REG32(&dev_if->out_ep_regs[i]->doepctl, depctl.d32);
23790 +
23791 + DWC_WRITE_REG32(&dev_if->out_ep_regs[i]->doeptsiz, 0);
23792 + DWC_WRITE_REG32(&dev_if->out_ep_regs[i]->doepdma, 0);
23793 + DWC_WRITE_REG32(&dev_if->out_ep_regs[i]->doepint, 0xFF);
23794 + }
23795 +
23796 + if (core_if->en_multiple_tx_fifo && core_if->dma_enable) {
23797 + dev_if->non_iso_tx_thr_en = params->thr_ctl & 0x1;
23798 + dev_if->iso_tx_thr_en = (params->thr_ctl >> 1) & 0x1;
23799 + dev_if->rx_thr_en = (params->thr_ctl >> 2) & 0x1;
23800 +
23801 + dev_if->rx_thr_length = params->rx_thr_length;
23802 + dev_if->tx_thr_length = params->tx_thr_length;
23803 +
23804 + dev_if->setup_desc_index = 0;
23805 +
23806 + dthrctl.d32 = 0;
23807 + dthrctl.b.non_iso_thr_en = dev_if->non_iso_tx_thr_en;
23808 + dthrctl.b.iso_thr_en = dev_if->iso_tx_thr_en;
23809 + dthrctl.b.tx_thr_len = dev_if->tx_thr_length;
23810 + dthrctl.b.rx_thr_en = dev_if->rx_thr_en;
23811 + dthrctl.b.rx_thr_len = dev_if->rx_thr_length;
23812 + dthrctl.b.ahb_thr_ratio = params->ahb_thr_ratio;
23813 +
23814 + DWC_WRITE_REG32(&dev_if->dev_global_regs->dtknqr3_dthrctl,
23815 + dthrctl.d32);
23816 +
23817 + DWC_DEBUGPL(DBG_CIL,
23818 + "Non ISO Tx Thr - %d\nISO Tx Thr - %d\nRx Thr - %d\nTx Thr Len - %d\nRx Thr Len - %d\n",
23819 + dthrctl.b.non_iso_thr_en, dthrctl.b.iso_thr_en,
23820 + dthrctl.b.rx_thr_en, dthrctl.b.tx_thr_len,
23821 + dthrctl.b.rx_thr_len);
23822 +
23823 + }
23824 +
23825 + dwc_otg_enable_device_interrupts(core_if);
23826 +
23827 + {
23828 + diepmsk_data_t msk = {.d32 = 0 };
23829 + msk.b.txfifoundrn = 1;
23830 + if (core_if->multiproc_int_enable) {
23831 + DWC_MODIFY_REG32(&dev_if->dev_global_regs->
23832 + diepeachintmsk[0], msk.d32, msk.d32);
23833 + } else {
23834 + DWC_MODIFY_REG32(&dev_if->dev_global_regs->diepmsk,
23835 + msk.d32, msk.d32);
23836 + }
23837 + }
23838 +
23839 + if (core_if->multiproc_int_enable) {
23840 + /* Set NAK on Babble */
23841 + dctl_data_t dctl = {.d32 = 0 };
23842 + dctl.b.nakonbble = 1;
23843 + DWC_MODIFY_REG32(&dev_if->dev_global_regs->dctl, 0, dctl.d32);
23844 + }
23845 +
23846 + if (core_if->snpsid >= OTG_CORE_REV_2_94a) {
23847 + dctl_data_t dctl = {.d32 = 0 };
23848 + dctl.d32 = DWC_READ_REG32(&dev_if->dev_global_regs->dctl);
23849 + dctl.b.sftdiscon = 0;
23850 + DWC_WRITE_REG32(&dev_if->dev_global_regs->dctl, dctl.d32);
23851 + }
23852 +}
23853 +
23854 +/**
23855 + * This function enables the Host mode interrupts.
23856 + *
23857 + * @param core_if Programming view of DWC_otg controller
23858 + */
23859 +void dwc_otg_enable_host_interrupts(dwc_otg_core_if_t * core_if)
23860 +{
23861 + dwc_otg_core_global_regs_t *global_regs = core_if->core_global_regs;
23862 + gintmsk_data_t intr_mask = {.d32 = 0 };
23863 +
23864 + DWC_DEBUGPL(DBG_CIL, "%s(%p)\n", __func__, core_if);
23865 +
23866 + /* Disable all interrupts. */
23867 + DWC_WRITE_REG32(&global_regs->gintmsk, 0);
23868 +
23869 + /* Clear any pending interrupts. */
23870 + DWC_WRITE_REG32(&global_regs->gintsts, 0xFFFFFFFF);
23871 +
23872 + /* Enable the common interrupts */
23873 + dwc_otg_enable_common_interrupts(core_if);
23874 +
23875 + /*
23876 + * Enable host mode interrupts without disturbing common
23877 + * interrupts.
23878 + */
23879 +
23880 + intr_mask.b.disconnect = 1;
23881 + intr_mask.b.portintr = 1;
23882 + intr_mask.b.hcintr = 1;
23883 +
23884 + DWC_MODIFY_REG32(&global_regs->gintmsk, intr_mask.d32, intr_mask.d32);
23885 +}
23886 +
23887 +/**
23888 + * This function disables the Host Mode interrupts.
23889 + *
23890 + * @param core_if Programming view of DWC_otg controller
23891 + */
23892 +void dwc_otg_disable_host_interrupts(dwc_otg_core_if_t * core_if)
23893 +{
23894 + dwc_otg_core_global_regs_t *global_regs = core_if->core_global_regs;
23895 + gintmsk_data_t intr_mask = {.d32 = 0 };
23896 +
23897 + DWC_DEBUGPL(DBG_CILV, "%s()\n", __func__);
23898 +
23899 + /*
23900 + * Disable host mode interrupts without disturbing common
23901 + * interrupts.
23902 + */
23903 + intr_mask.b.sofintr = 1;
23904 + intr_mask.b.portintr = 1;
23905 + intr_mask.b.hcintr = 1;
23906 + intr_mask.b.ptxfempty = 1;
23907 + intr_mask.b.nptxfempty = 1;
23908 +
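+	/* DWC_MODIFY_REG32(reg, clear_mask, set_mask) performs a read-modify-
+	 * write; passing the mask as clear_mask with a zero set_mask clears
+	 * only the host-mode bits selected above and leaves the common
+	 * interrupt mask untouched (the enable path passes the same mask in
+	 * both positions to set the bits). */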
23909 + DWC_MODIFY_REG32(&global_regs->gintmsk, intr_mask.d32, 0);
23910 +}
23911 +
23912 +/**
23913 + * This function initializes the DWC_otg controller registers for
23914 + * host mode.
23915 + *
23916 + * This function flushes the Tx and Rx FIFOs and it flushes any entries in the
23917 + * request queues. Host channels are reset to ensure that they are ready for
23918 + * performing transfers.
23919 + *
23920 + * @param core_if Programming view of DWC_otg controller
23921 + *
23922 + */
23923 +void dwc_otg_core_host_init(dwc_otg_core_if_t * core_if)
23924 +{
23925 + dwc_otg_core_global_regs_t *global_regs = core_if->core_global_regs;
23926 + dwc_otg_host_if_t *host_if = core_if->host_if;
23927 + dwc_otg_core_params_t *params = core_if->core_params;
23928 + hprt0_data_t hprt0 = {.d32 = 0 };
23929 + fifosize_data_t nptxfifosize;
23930 + fifosize_data_t ptxfifosize;
23931 + uint16_t rxfsiz, nptxfsiz, hptxfsiz;
23932 + gdfifocfg_data_t gdfifocfg = {.d32 = 0 };
23933 + int i;
23934 + hcchar_data_t hcchar;
23935 + hcfg_data_t hcfg;
23936 + hfir_data_t hfir;
23937 + dwc_otg_hc_regs_t *hc_regs;
23938 + int num_channels;
23939 + gotgctl_data_t gotgctl = {.d32 = 0 };
23940 +
23941 + DWC_DEBUGPL(DBG_CILV, "%s(%p)\n", __func__, core_if);
23942 +
23943 + /* Restart the Phy Clock */
23944 + DWC_WRITE_REG32(core_if->pcgcctl, 0);
23945 +
23946 + /* Initialize Host Configuration Register */
23947 + init_fslspclksel(core_if);
23948 + if (core_if->core_params->speed == DWC_SPEED_PARAM_FULL) {
23949 + hcfg.d32 = DWC_READ_REG32(&host_if->host_global_regs->hcfg);
23950 + hcfg.b.fslssupp = 1;
23951 + DWC_WRITE_REG32(&host_if->host_global_regs->hcfg, hcfg.d32);
23952 +
23953 + }
23954 +
23955 + /* This bit allows dynamic reloading of the HFIR register
23956 + * during runtime. This bit needs to be programmed during
23957 + * initial configuration and its value must not be changed
23958 + * during runtime.*/
23959 + if (core_if->core_params->reload_ctl == 1) {
23960 + hfir.d32 = DWC_READ_REG32(&host_if->host_global_regs->hfir);
23961 + hfir.b.hfirrldctrl = 1;
23962 + DWC_WRITE_REG32(&host_if->host_global_regs->hfir, hfir.d32);
23963 + }
23964 +
23965 + if (core_if->core_params->dma_desc_enable) {
23966 + uint8_t op_mode = core_if->hwcfg2.b.op_mode;
23967 + if (!
23968 + (core_if->hwcfg4.b.desc_dma
23969 + && (core_if->snpsid >= OTG_CORE_REV_2_90a)
23970 + && ((op_mode == DWC_HWCFG2_OP_MODE_HNP_SRP_CAPABLE_OTG)
23971 + || (op_mode == DWC_HWCFG2_OP_MODE_SRP_ONLY_CAPABLE_OTG)
23972 + || (op_mode ==
23973 + DWC_HWCFG2_OP_MODE_NO_HNP_SRP_CAPABLE_OTG)
23974 + || (op_mode == DWC_HWCFG2_OP_MODE_SRP_CAPABLE_HOST)
23975 + || (op_mode ==
23976 + DWC_HWCFG2_OP_MODE_NO_SRP_CAPABLE_HOST)))) {
23977 +
23978 + DWC_ERROR("Host can't operate in Descriptor DMA mode.\n"
23979 + "Either core version is below 2.90a or "
23980 + "GHWCFG2, GHWCFG4 registers' values do not allow Descriptor DMA in host mode.\n"
23981 + "To run the driver in Buffer DMA host mode set dma_desc_enable "
23982 + "module parameter to 0.\n");
23983 + return;
23984 + }
23985 + hcfg.d32 = DWC_READ_REG32(&host_if->host_global_regs->hcfg);
23986 + hcfg.b.descdma = 1;
23987 + DWC_WRITE_REG32(&host_if->host_global_regs->hcfg, hcfg.d32);
23988 + }
23989 +
23990 + /* Configure data FIFO sizes */
23991 + if (core_if->hwcfg2.b.dynamic_fifo && params->enable_dynamic_fifo) {
23992 + DWC_DEBUGPL(DBG_CIL, "Total FIFO Size=%d\n",
23993 + core_if->total_fifo_size);
23994 + DWC_DEBUGPL(DBG_CIL, "Rx FIFO Size=%d\n",
23995 + params->host_rx_fifo_size);
23996 + DWC_DEBUGPL(DBG_CIL, "NP Tx FIFO Size=%d\n",
23997 + params->host_nperio_tx_fifo_size);
23998 + DWC_DEBUGPL(DBG_CIL, "P Tx FIFO Size=%d\n",
23999 + params->host_perio_tx_fifo_size);
24000 +
24001 + /* Rx FIFO */
24002 + DWC_DEBUGPL(DBG_CIL, "initial grxfsiz=%08x\n",
24003 + DWC_READ_REG32(&global_regs->grxfsiz));
24004 + DWC_WRITE_REG32(&global_regs->grxfsiz,
24005 + params->host_rx_fifo_size);
24006 + DWC_DEBUGPL(DBG_CIL, "new grxfsiz=%08x\n",
24007 + DWC_READ_REG32(&global_regs->grxfsiz));
24008 +
24009 + /* Non-periodic Tx FIFO */
24010 + DWC_DEBUGPL(DBG_CIL, "initial gnptxfsiz=%08x\n",
24011 + DWC_READ_REG32(&global_regs->gnptxfsiz));
24012 + nptxfifosize.b.depth = params->host_nperio_tx_fifo_size;
24013 + nptxfifosize.b.startaddr = params->host_rx_fifo_size;
24014 + DWC_WRITE_REG32(&global_regs->gnptxfsiz, nptxfifosize.d32);
24015 + DWC_DEBUGPL(DBG_CIL, "new gnptxfsiz=%08x\n",
24016 + DWC_READ_REG32(&global_regs->gnptxfsiz));
24017 +
24018 + /* Periodic Tx FIFO */
24019 + DWC_DEBUGPL(DBG_CIL, "initial hptxfsiz=%08x\n",
24020 + DWC_READ_REG32(&global_regs->hptxfsiz));
24021 + ptxfifosize.b.depth = params->host_perio_tx_fifo_size;
24022 + ptxfifosize.b.startaddr =
24023 + nptxfifosize.b.startaddr + nptxfifosize.b.depth;
24024 + DWC_WRITE_REG32(&global_regs->hptxfsiz, ptxfifosize.d32);
24025 + DWC_DEBUGPL(DBG_CIL, "new hptxfsiz=%08x\n",
24026 + DWC_READ_REG32(&global_regs->hptxfsiz));
24027 +
24028 + if (core_if->en_multiple_tx_fifo
24029 + && core_if->snpsid <= OTG_CORE_REV_2_94a) {
24030 + /* Global DFIFOCFG calculation for Host mode - include RxFIFO, NPTXFIFO and HPTXFIFO */
24031 + gdfifocfg.d32 = DWC_READ_REG32(&global_regs->gdfifocfg);
24032 + rxfsiz = (DWC_READ_REG32(&global_regs->grxfsiz) & 0x0000ffff);
24033 + nptxfsiz = (DWC_READ_REG32(&global_regs->gnptxfsiz) >> 16);
24034 + hptxfsiz = (DWC_READ_REG32(&global_regs->hptxfsiz) >> 16);
24035 + gdfifocfg.b.epinfobase = rxfsiz + nptxfsiz + hptxfsiz;
24036 + DWC_WRITE_REG32(&global_regs->gdfifocfg, gdfifocfg.d32);
24037 + }
24038 + }
24039 +
24040 + /* TODO - check this */
24041 + /* Clear Host Set HNP Enable in the OTG Control Register */
24042 + gotgctl.b.hstsethnpen = 1;
24043 + DWC_MODIFY_REG32(&global_regs->gotgctl, gotgctl.d32, 0);
24044 + /* Make sure the FIFOs are flushed. */
24045 + dwc_otg_flush_tx_fifo(core_if, 0x10 /* all TX FIFOs */ );
24046 + dwc_otg_flush_rx_fifo(core_if);
24047 +
24048 + /* Clear Host Set HNP Enable in the OTG Control Register */
24049 + gotgctl.b.hstsethnpen = 1;
24050 + DWC_MODIFY_REG32(&global_regs->gotgctl, gotgctl.d32, 0);
24051 +
24052 + if (!core_if->core_params->dma_desc_enable) {
24053 + /* Flush out any leftover queued requests. */
24054 + num_channels = core_if->core_params->host_channels;
24055 +
24056 + for (i = 0; i < num_channels; i++) {
24057 + hc_regs = core_if->host_if->hc_regs[i];
24058 + hcchar.d32 = DWC_READ_REG32(&hc_regs->hcchar);
24059 + hcchar.b.chen = 0;
24060 + hcchar.b.chdis = 1;
24061 + hcchar.b.epdir = 0;
24062 + DWC_WRITE_REG32(&hc_regs->hcchar, hcchar.d32);
24063 + }
24064 +
24065 + /* Halt all channels to put them into a known state. */
24066 + for (i = 0; i < num_channels; i++) {
24067 + int count = 0;
24068 + hc_regs = core_if->host_if->hc_regs[i];
24069 + hcchar.d32 = DWC_READ_REG32(&hc_regs->hcchar);
24070 + hcchar.b.chen = 1;
24071 + hcchar.b.chdis = 1;
24072 + hcchar.b.epdir = 0;
24073 + DWC_WRITE_REG32(&hc_regs->hcchar, hcchar.d32);
24074 + DWC_DEBUGPL(DBG_HCDV, "%s: Halt channel %d regs %p\n", __func__, i, hc_regs);
24075 + do {
24076 + hcchar.d32 = DWC_READ_REG32(&hc_regs->hcchar);
24077 + if (++count > 1000) {
24078 + DWC_ERROR
24079 + ("%s: Unable to clear halt on channel %d (timeout HCCHAR 0x%X @%p)\n",
24080 + __func__, i, hcchar.d32, &hc_regs->hcchar);
24081 + break;
24082 + }
24083 + dwc_udelay(1);
24084 + } while (hcchar.b.chen);
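+			/* The loop above polls HCCHAR until the channel enable bit
+			 * clears, giving up after roughly 1 ms (1000 iterations of
+			 * a 1 us delay) and logging an error. */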
24085 + }
24086 + }
24087 +
24088 + /* Turn on the vbus power. */
24089 + DWC_PRINTF("Init: Port Power? op_state=%d\n", core_if->op_state);
24090 + if (core_if->op_state == A_HOST) {
24091 + hprt0.d32 = dwc_otg_read_hprt0(core_if);
24092 + DWC_PRINTF("Init: Power Port (%d)\n", hprt0.b.prtpwr);
24093 + if (hprt0.b.prtpwr == 0) {
24094 + hprt0.b.prtpwr = 1;
24095 + DWC_WRITE_REG32(host_if->hprt0, hprt0.d32);
24096 + }
24097 + }
24098 +
24099 + dwc_otg_enable_host_interrupts(core_if);
24100 +}
24101 +
24102 +/**
24103 + * Prepares a host channel for transferring packets to/from a specific
24104 + * endpoint. The HCCHARn register is set up with the characteristics specified
24105 + * in _hc. Host channel interrupts that may need to be serviced while this
24106 + * transfer is in progress are enabled.
24107 + *
24108 + * @param core_if Programming view of DWC_otg controller
24109 + * @param hc Information needed to initialize the host channel
24110 + */
24111 +void dwc_otg_hc_init(dwc_otg_core_if_t * core_if, dwc_hc_t * hc)
24112 +{
24113 + hcintmsk_data_t hc_intr_mask;
24114 + hcchar_data_t hcchar;
24115 + hcsplt_data_t hcsplt;
24116 +
24117 + uint8_t hc_num = hc->hc_num;
24118 + dwc_otg_host_if_t *host_if = core_if->host_if;
24119 + dwc_otg_hc_regs_t *hc_regs = host_if->hc_regs[hc_num];
24120 +
24121 + /* Clear old interrupt conditions for this host channel. */
24122 + hc_intr_mask.d32 = 0xFFFFFFFF;
24123 + hc_intr_mask.b.reserved14_31 = 0;
24124 + DWC_WRITE_REG32(&hc_regs->hcint, hc_intr_mask.d32);
24125 +
24126 + /* Enable channel interrupts required for this transfer. */
24127 + hc_intr_mask.d32 = 0;
24128 + hc_intr_mask.b.chhltd = 1;
24129 + if (core_if->dma_enable) {
24130 + /* For Descriptor DMA mode core halts the channel on AHB error. Interrupt is not required */
24131 + if (!core_if->dma_desc_enable)
24132 + hc_intr_mask.b.ahberr = 1;
24133 + else {
24134 + if (hc->ep_type == DWC_OTG_EP_TYPE_ISOC)
24135 + hc_intr_mask.b.xfercompl = 1;
24136 + }
24137 +
24138 + if (hc->error_state && !hc->do_split &&
24139 + hc->ep_type != DWC_OTG_EP_TYPE_ISOC) {
24140 + hc_intr_mask.b.ack = 1;
24141 + if (hc->ep_is_in) {
24142 + hc_intr_mask.b.datatglerr = 1;
24143 + if (hc->ep_type != DWC_OTG_EP_TYPE_INTR) {
24144 + hc_intr_mask.b.nak = 1;
24145 + }
24146 + }
24147 + }
24148 + } else {
24149 + switch (hc->ep_type) {
24150 + case DWC_OTG_EP_TYPE_CONTROL:
24151 + case DWC_OTG_EP_TYPE_BULK:
24152 + hc_intr_mask.b.xfercompl = 1;
24153 + hc_intr_mask.b.stall = 1;
24154 + hc_intr_mask.b.xacterr = 1;
24155 + hc_intr_mask.b.datatglerr = 1;
24156 + if (hc->ep_is_in) {
24157 + hc_intr_mask.b.bblerr = 1;
24158 + } else {
24159 + hc_intr_mask.b.nak = 1;
24160 + hc_intr_mask.b.nyet = 1;
24161 + if (hc->do_ping) {
24162 + hc_intr_mask.b.ack = 1;
24163 + }
24164 + }
24165 +
24166 + if (hc->do_split) {
24167 + hc_intr_mask.b.nak = 1;
24168 + if (hc->complete_split) {
24169 + hc_intr_mask.b.nyet = 1;
24170 + } else {
24171 + hc_intr_mask.b.ack = 1;
24172 + }
24173 + }
24174 +
24175 + if (hc->error_state) {
24176 + hc_intr_mask.b.ack = 1;
24177 + }
24178 + break;
24179 + case DWC_OTG_EP_TYPE_INTR:
24180 + hc_intr_mask.b.xfercompl = 1;
24181 + hc_intr_mask.b.nak = 1;
24182 + hc_intr_mask.b.stall = 1;
24183 + hc_intr_mask.b.xacterr = 1;
24184 + hc_intr_mask.b.datatglerr = 1;
24185 + hc_intr_mask.b.frmovrun = 1;
24186 +
24187 + if (hc->ep_is_in) {
24188 + hc_intr_mask.b.bblerr = 1;
24189 + }
24190 + if (hc->error_state) {
24191 + hc_intr_mask.b.ack = 1;
24192 + }
24193 + if (hc->do_split) {
24194 + if (hc->complete_split) {
24195 + hc_intr_mask.b.nyet = 1;
24196 + } else {
24197 + hc_intr_mask.b.ack = 1;
24198 + }
24199 + }
24200 + break;
24201 + case DWC_OTG_EP_TYPE_ISOC:
24202 + hc_intr_mask.b.xfercompl = 1;
24203 + hc_intr_mask.b.frmovrun = 1;
24204 + hc_intr_mask.b.ack = 1;
24205 +
24206 + if (hc->ep_is_in) {
24207 + hc_intr_mask.b.xacterr = 1;
24208 + hc_intr_mask.b.bblerr = 1;
24209 + }
24210 + break;
24211 + }
24212 + }
24213 + DWC_WRITE_REG32(&hc_regs->hcintmsk, hc_intr_mask.d32);
24214 +
24215 + /*
24216 + * Program the HCCHARn register with the endpoint characteristics for
24217 + * the current transfer.
24218 + */
24219 + hcchar.d32 = 0;
24220 + hcchar.b.devaddr = hc->dev_addr;
24221 + hcchar.b.epnum = hc->ep_num;
24222 + hcchar.b.epdir = hc->ep_is_in;
24223 + hcchar.b.lspddev = (hc->speed == DWC_OTG_EP_SPEED_LOW);
24224 + hcchar.b.eptype = hc->ep_type;
24225 + hcchar.b.mps = hc->max_packet;
24226 +
24227 + DWC_WRITE_REG32(&host_if->hc_regs[hc_num]->hcchar, hcchar.d32);
24228 +
24229 + DWC_DEBUGPL(DBG_HCDV, "%s: Channel %d, Dev Addr %d, EP #%d\n",
24230 + __func__, hc->hc_num, hcchar.b.devaddr, hcchar.b.epnum);
24231 + DWC_DEBUGPL(DBG_HCDV, " Is In %d, Is Low Speed %d, EP Type %d, "
24232 + "Max Pkt %d, Multi Cnt %d\n",
24233 + hcchar.b.epdir, hcchar.b.lspddev, hcchar.b.eptype,
24234 + hcchar.b.mps, hcchar.b.multicnt);
24235 +
24236 + /*
24237 + * Program the HCSPLIT register for SPLITs
24238 + */
24239 + hcsplt.d32 = 0;
24240 + if (hc->do_split) {
24241 + DWC_DEBUGPL(DBG_HCDV, "Programming HC %d with split --> %s\n",
24242 + hc->hc_num,
24243 + hc->complete_split ? "CSPLIT" : "SSPLIT");
24244 + hcsplt.b.compsplt = hc->complete_split;
24245 + hcsplt.b.xactpos = hc->xact_pos;
24246 + hcsplt.b.hubaddr = hc->hub_addr;
24247 + hcsplt.b.prtaddr = hc->port_addr;
24248 + DWC_DEBUGPL(DBG_HCDV, "\t comp split %d\n", hc->complete_split);
24249 + DWC_DEBUGPL(DBG_HCDV, "\t xact pos %d\n", hc->xact_pos);
24250 + DWC_DEBUGPL(DBG_HCDV, "\t hub addr %d\n", hc->hub_addr);
24251 + DWC_DEBUGPL(DBG_HCDV, "\t port addr %d\n", hc->port_addr);
24252 + DWC_DEBUGPL(DBG_HCDV, "\t is_in %d\n", hc->ep_is_in);
24253 + DWC_DEBUGPL(DBG_HCDV, "\t Max Pkt: %d\n", hcchar.b.mps);
24254 + DWC_DEBUGPL(DBG_HCDV, "\t xferlen: %d\n", hc->xfer_len);
24255 + }
24256 + DWC_WRITE_REG32(&host_if->hc_regs[hc_num]->hcsplt, hcsplt.d32);
24257 +
24258 +}
24259 +
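+/*
+ * Typical host-channel usage, as an illustrative sketch only (the real call
+ * sites live in the HCD code, not in this file): a channel is prepared with
+ * dwc_otg_hc_init(), the transfer is started with dwc_otg_hc_start_transfer()
+ * (or dwc_otg_hc_start_transfer_ddma() in Descriptor DMA mode),
+ * dwc_otg_hc_halt() aborts it when required, and dwc_otg_hc_cleanup()
+ * releases the channel after the Channel Halted interrupt has been handled:
+ *
+ *	dwc_otg_hc_init(core_if, hc);
+ *	dwc_otg_hc_start_transfer(core_if, hc);
+ *	...
+ *	dwc_otg_hc_halt(core_if, hc, DWC_OTG_HC_XFER_URB_DEQUEUE);
+ *	dwc_otg_hc_cleanup(core_if, hc);
+ */
+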
24260 +/**
24261 + * Attempts to halt a host channel. This function should only be called in
24262 + * Slave mode or to abort a transfer in either Slave mode or DMA mode. Under
24263 + * normal circumstances in DMA mode, the controller halts the channel when the
24264 + * transfer is complete or a condition occurs that requires application
24265 + * intervention.
24266 + *
24267 + * In slave mode, checks for a free request queue entry, then sets the Channel
24268 + * Enable and Channel Disable bits of the Host Channel Characteristics
24269 + * register of the specified channel to initiate the halt. If there is no free
24270 + * request queue entry, sets only the Channel Disable bit of the HCCHARn
24271 + * register to flush requests for this channel. In the latter case, sets a
24272 + * flag to indicate that the host channel needs to be halted when a request
24273 + * queue slot is open.
24274 + *
24275 + * In DMA mode, always sets the Channel Enable and Channel Disable bits of the
24276 + * HCCHARn register. The controller ensures there is space in the request
24277 + * queue before submitting the halt request.
24278 + *
24279 + * Some time may elapse before the core flushes any posted requests for this
24280 + * host channel and halts. The Channel Halted interrupt handler completes the
24281 + * deactivation of the host channel.
24282 + *
24283 + * @param core_if Controller register interface.
24284 + * @param hc Host channel to halt.
24285 + * @param halt_status Reason for halting the channel.
24286 + */
24287 +void dwc_otg_hc_halt(dwc_otg_core_if_t * core_if,
24288 + dwc_hc_t * hc, dwc_otg_halt_status_e halt_status)
24289 +{
24290 + gnptxsts_data_t nptxsts;
24291 + hptxsts_data_t hptxsts;
24292 + hcchar_data_t hcchar;
24293 + dwc_otg_hc_regs_t *hc_regs;
24294 + dwc_otg_core_global_regs_t *global_regs;
24295 + dwc_otg_host_global_regs_t *host_global_regs;
24296 +
24297 + hc_regs = core_if->host_if->hc_regs[hc->hc_num];
24298 + global_regs = core_if->core_global_regs;
24299 + host_global_regs = core_if->host_if->host_global_regs;
24300 +
24301 + DWC_ASSERT(!(halt_status == DWC_OTG_HC_XFER_NO_HALT_STATUS),
24302 + "halt_status = %d\n", halt_status);
24303 +
24304 + if (halt_status == DWC_OTG_HC_XFER_URB_DEQUEUE ||
24305 + halt_status == DWC_OTG_HC_XFER_AHB_ERR) {
24306 + /*
24307 + * Disable all channel interrupts except Ch Halted. The QTD
24308 + * and QH state associated with this transfer has been cleared
24309 + * (in the case of URB_DEQUEUE), so the channel needs to be
24310 + * shut down carefully to prevent crashes.
24311 + */
24312 + hcintmsk_data_t hcintmsk;
24313 + hcintmsk.d32 = 0;
24314 + hcintmsk.b.chhltd = 1;
24315 + DWC_WRITE_REG32(&hc_regs->hcintmsk, hcintmsk.d32);
24316 +
24317 + /*
24318 + * Make sure no other interrupts besides halt are currently
24319 + * pending. Handling another interrupt could cause a crash due
24320 + * to the QTD and QH state.
24321 + */
24322 + DWC_WRITE_REG32(&hc_regs->hcint, ~hcintmsk.d32);
24323 +
24324 + /*
24325 + * Make sure the halt status is set to URB_DEQUEUE or AHB_ERR
24326 + * even if the channel was already halted for some other
24327 + * reason.
24328 + */
24329 + hc->halt_status = halt_status;
24330 +
24331 + hcchar.d32 = DWC_READ_REG32(&hc_regs->hcchar);
24332 + if (hcchar.b.chen == 0) {
24333 + /*
24334 + * The channel is either already halted or it hasn't
24335 + * started yet. In DMA mode, the transfer may halt if
24336 + * it finishes normally or a condition occurs that
24337 + * requires driver intervention. Don't want to halt
24338 + * the channel again. In either Slave or DMA mode,
24339 + * it's possible that the transfer has been assigned
24340 + * to a channel, but not started yet when an URB is
24341 + * dequeued. Don't want to halt a channel that hasn't
24342 + * started yet.
24343 + */
24344 + return;
24345 + }
24346 + }
24347 + if (hc->halt_pending) {
24348 + /*
24349 + * A halt has already been issued for this channel. This might
24350 + * happen when a transfer is aborted by a higher level in
24351 + * the stack.
24352 + */
24353 +#ifdef DEBUG
24354 + DWC_PRINTF
24355 + ("*** %s: Channel %d, _hc->halt_pending already set ***\n",
24356 + __func__, hc->hc_num);
24357 +
24358 +#endif
24359 + return;
24360 + }
24361 +
24362 + hcchar.d32 = DWC_READ_REG32(&hc_regs->hcchar);
24363 +
24364 + /* No need to set the bit in DDMA for disabling the channel */
24365 +	//TODO check this everywhere the channel is disabled
24366 + if (!core_if->core_params->dma_desc_enable)
24367 + hcchar.b.chen = 1;
24368 + hcchar.b.chdis = 1;
24369 +
24370 + if (!core_if->dma_enable) {
24371 + /* Check for space in the request queue to issue the halt. */
24372 + if (hc->ep_type == DWC_OTG_EP_TYPE_CONTROL ||
24373 + hc->ep_type == DWC_OTG_EP_TYPE_BULK) {
24374 + nptxsts.d32 = DWC_READ_REG32(&global_regs->gnptxsts);
24375 + if (nptxsts.b.nptxqspcavail == 0) {
24376 + hcchar.b.chen = 0;
24377 + }
24378 + } else {
24379 + hptxsts.d32 =
24380 + DWC_READ_REG32(&host_global_regs->hptxsts);
24381 + if ((hptxsts.b.ptxqspcavail == 0)
24382 + || (core_if->queuing_high_bandwidth)) {
24383 + hcchar.b.chen = 0;
24384 + }
24385 + }
24386 + }
24387 + DWC_WRITE_REG32(&hc_regs->hcchar, hcchar.d32);
24388 +
24389 + hc->halt_status = halt_status;
24390 +
24391 + if (hcchar.b.chen) {
24392 + hc->halt_pending = 1;
24393 + hc->halt_on_queue = 0;
24394 + } else {
24395 + hc->halt_on_queue = 1;
24396 + }
24397 +
24398 + DWC_DEBUGPL(DBG_HCDV, "%s: Channel %d\n", __func__, hc->hc_num);
24399 + DWC_DEBUGPL(DBG_HCDV, " hcchar: 0x%08x\n", hcchar.d32);
24400 + DWC_DEBUGPL(DBG_HCDV, " halt_pending: %d\n", hc->halt_pending);
24401 + DWC_DEBUGPL(DBG_HCDV, " halt_on_queue: %d\n", hc->halt_on_queue);
24402 + DWC_DEBUGPL(DBG_HCDV, " halt_status: %d\n", hc->halt_status);
24403 +
24404 + return;
24405 +}
24406 +
24407 +/**
24408 + * Clears the transfer state for a host channel. This function is normally
24409 + * called after a transfer is done and the host channel is being released.
24410 + *
24411 + * @param core_if Programming view of DWC_otg controller.
24412 + * @param hc Identifies the host channel to clean up.
24413 + */
24414 +void dwc_otg_hc_cleanup(dwc_otg_core_if_t * core_if, dwc_hc_t * hc)
24415 +{
24416 + dwc_otg_hc_regs_t *hc_regs;
24417 +
24418 + hc->xfer_started = 0;
24419 +
24420 + /*
24421 + * Clear channel interrupt enables and any unhandled channel interrupt
24422 + * conditions.
24423 + */
24424 + hc_regs = core_if->host_if->hc_regs[hc->hc_num];
24425 + DWC_WRITE_REG32(&hc_regs->hcintmsk, 0);
24426 + DWC_WRITE_REG32(&hc_regs->hcint, 0xFFFFFFFF);
24427 +#ifdef DEBUG
24428 + DWC_TIMER_CANCEL(core_if->hc_xfer_timer[hc->hc_num]);
24429 +#endif
24430 +}
24431 +
24432 +/**
24433 + * Sets the channel property that indicates in which frame a periodic transfer
24434 + * should occur. This is always set to the _next_ frame. This function has no
24435 + * effect on non-periodic transfers.
24436 + *
24437 + * @param core_if Programming view of DWC_otg controller.
24438 + * @param hc Identifies the host channel to set up and its properties.
24439 + * @param hcchar Current value of the HCCHAR register for the specified host
24440 + * channel.
24441 + */
24442 +static inline void hc_set_even_odd_frame(dwc_otg_core_if_t * core_if,
24443 + dwc_hc_t * hc, hcchar_data_t * hcchar)
24444 +{
24445 + if (hc->ep_type == DWC_OTG_EP_TYPE_INTR ||
24446 + hc->ep_type == DWC_OTG_EP_TYPE_ISOC) {
24447 + hfnum_data_t hfnum;
24448 + hfnum.d32 =
24449 + DWC_READ_REG32(&core_if->host_if->host_global_regs->hfnum);
24450 +
24451 + /* 1 if _next_ frame is odd, 0 if it's even */
24452 + hcchar->b.oddfrm = (hfnum.b.frnum & 0x1) ? 0 : 1;
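+		/* Example: if HFNUM reads 0x122 (even), the next frame is odd and
+		 * oddfrm is set; if it reads 0x123 (odd), the next frame is even
+		 * and oddfrm is cleared. */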
24453 +#ifdef DEBUG
24454 + if (hc->ep_type == DWC_OTG_EP_TYPE_INTR && hc->do_split
24455 + && !hc->complete_split) {
24456 + switch (hfnum.b.frnum & 0x7) {
24457 + case 7:
24458 + core_if->hfnum_7_samples++;
24459 + core_if->hfnum_7_frrem_accum += hfnum.b.frrem;
24460 + break;
24461 + case 0:
24462 + core_if->hfnum_0_samples++;
24463 + core_if->hfnum_0_frrem_accum += hfnum.b.frrem;
24464 + break;
24465 + default:
24466 + core_if->hfnum_other_samples++;
24467 + core_if->hfnum_other_frrem_accum +=
24468 + hfnum.b.frrem;
24469 + break;
24470 + }
24471 + }
24472 +#endif
24473 + }
24474 +}
24475 +
24476 +#ifdef DEBUG
24477 +void hc_xfer_timeout(void *ptr)
24478 +{
24479 + hc_xfer_info_t *xfer_info = NULL;
24480 + int hc_num = 0;
24481 +
24482 + if (ptr)
24483 + xfer_info = (hc_xfer_info_t *) ptr;
24484 +
24485 +	if (!xfer_info || !xfer_info->hc) {
24486 +		DWC_ERROR("xfer_info->hc = %p\n", xfer_info ? xfer_info->hc : NULL);
24487 + return;
24488 + }
24489 +
24490 + hc_num = xfer_info->hc->hc_num;
24491 + DWC_WARN("%s: timeout on channel %d\n", __func__, hc_num);
24492 + DWC_WARN(" start_hcchar_val 0x%08x\n",
24493 + xfer_info->core_if->start_hcchar_val[hc_num]);
24494 +}
24495 +#endif
24496 +
24497 +void ep_xfer_timeout(void *ptr)
24498 +{
24499 + ep_xfer_info_t *xfer_info = NULL;
24500 + int ep_num = 0;
24501 + dctl_data_t dctl = {.d32 = 0 };
24502 + gintsts_data_t gintsts = {.d32 = 0 };
24503 + gintmsk_data_t gintmsk = {.d32 = 0 };
24504 +
24505 + if (ptr)
24506 + xfer_info = (ep_xfer_info_t *) ptr;
24507 +
24508 +	if (!xfer_info || !xfer_info->ep) {
24509 +		DWC_ERROR("xfer_info->ep = %p\n", xfer_info ? xfer_info->ep : NULL);
24510 + return;
24511 + }
24512 +
24513 + ep_num = xfer_info->ep->num;
24514 +	DWC_WARN("%s: timeout on endpoint %d\n", __func__, ep_num);
24515 +	/* Set the state to 2 as the transfer timed out */
24516 + xfer_info->state = 2;
24517 +
24518 + dctl.d32 =
24519 + DWC_READ_REG32(&xfer_info->core_if->dev_if->dev_global_regs->dctl);
24520 + gintsts.d32 =
24521 + DWC_READ_REG32(&xfer_info->core_if->core_global_regs->gintsts);
24522 + gintmsk.d32 =
24523 + DWC_READ_REG32(&xfer_info->core_if->core_global_regs->gintmsk);
24524 +
24525 + if (!gintmsk.b.goutnakeff) {
24526 + /* Unmask it */
24527 + gintmsk.b.goutnakeff = 1;
24528 + DWC_WRITE_REG32(&xfer_info->core_if->core_global_regs->gintmsk,
24529 + gintmsk.d32);
24530 +
24531 + }
24532 +
24533 + if (!gintsts.b.goutnakeff) {
24534 + dctl.b.sgoutnak = 1;
24535 + }
24536 + DWC_WRITE_REG32(&xfer_info->core_if->dev_if->dev_global_regs->dctl,
24537 + dctl.d32);
24538 +
24539 +}
24540 +
24541 +void set_pid_isoc(dwc_hc_t * hc)
24542 +{
24543 + /* Set up the initial PID for the transfer. */
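+	/* Summary of the selection below: for high-speed high-bandwidth ISOC
+	 * endpoints the starting PID depends on the number of packets per
+	 * (micro)frame (multi_count) - IN transfers start with DATA0/DATA1/
+	 * DATA2 for 1/2/3 packets, OUT transfers start with MDATA when more
+	 * than one packet is sent; everything else starts with DATA0. */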
24544 + if (hc->speed == DWC_OTG_EP_SPEED_HIGH) {
24545 + if (hc->ep_is_in) {
24546 + if (hc->multi_count == 1) {
24547 + hc->data_pid_start = DWC_OTG_HC_PID_DATA0;
24548 + } else if (hc->multi_count == 2) {
24549 + hc->data_pid_start = DWC_OTG_HC_PID_DATA1;
24550 + } else {
24551 + hc->data_pid_start = DWC_OTG_HC_PID_DATA2;
24552 + }
24553 + } else {
24554 + if (hc->multi_count == 1) {
24555 + hc->data_pid_start = DWC_OTG_HC_PID_DATA0;
24556 + } else {
24557 + hc->data_pid_start = DWC_OTG_HC_PID_MDATA;
24558 + }
24559 + }
24560 + } else {
24561 + hc->data_pid_start = DWC_OTG_HC_PID_DATA0;
24562 + }
24563 +}
24564 +
24565 +/**
24566 + * This function does the setup for a data transfer for a host channel and
24567 + * starts the transfer. May be called in either Slave mode or DMA mode. In
24568 + * Slave mode, the caller must ensure that there is sufficient space in the
24569 + * request queue and Tx Data FIFO.
24570 + *
24571 + * For an OUT transfer in Slave mode, it loads a data packet into the
24572 + * appropriate FIFO. If necessary, additional data packets will be loaded in
24573 + * the Host ISR.
24574 + *
24575 + * For an IN transfer in Slave mode, a data packet is requested. The data
24576 + * packets are unloaded from the Rx FIFO in the Host ISR. If necessary,
24577 + * additional data packets are requested in the Host ISR.
24578 + *
24579 + * For a PING transfer in Slave mode, the Do Ping bit is set in the HCTSIZ
24580 + * register along with a packet count of 1 and the channel is enabled. This
24581 + * causes a single PING transaction to occur. Other fields in HCTSIZ are
24582 + * simply set to 0 since no data transfer occurs in this case.
24583 + *
24584 + * For a PING transfer in DMA mode, the HCTSIZ register is initialized with
24585 + * all the information required to perform the subsequent data transfer. In
24586 + * addition, the Do Ping bit is set in the HCTSIZ register. In this case, the
24587 + * controller performs the entire PING protocol, then starts the data
24588 + * transfer.
24589 + *
24590 + * @param core_if Programming view of DWC_otg controller.
24591 + * @param hc Information needed to initialize the host channel. The xfer_len
24592 + * value may be reduced to accommodate the max widths of the XferSize and
24593 + * PktCnt fields in the HCTSIZn register. The multi_count value may be changed
24594 + * to reflect the final xfer_len value.
24595 + */
24596 +void dwc_otg_hc_start_transfer(dwc_otg_core_if_t * core_if, dwc_hc_t * hc)
24597 +{
24598 + hcchar_data_t hcchar;
24599 + hctsiz_data_t hctsiz;
24600 + uint16_t num_packets;
24601 + uint32_t max_hc_xfer_size = core_if->core_params->max_transfer_size;
24602 + uint16_t max_hc_pkt_count = core_if->core_params->max_packet_count;
24603 + dwc_otg_hc_regs_t *hc_regs = core_if->host_if->hc_regs[hc->hc_num];
24604 +
24605 + hctsiz.d32 = 0;
24606 +
24607 + if (hc->do_ping) {
24608 + if (!core_if->dma_enable) {
24609 + dwc_otg_hc_do_ping(core_if, hc);
24610 + hc->xfer_started = 1;
24611 + return;
24612 + } else {
24613 + hctsiz.b.dopng = 1;
24614 + }
24615 + }
24616 +
24617 + if (hc->do_split) {
24618 + num_packets = 1;
24619 +
24620 + if (hc->complete_split && !hc->ep_is_in) {
24621 + /* For CSPLIT OUT Transfer, set the size to 0 so the
24622 + * core doesn't expect any data written to the FIFO */
24623 + hc->xfer_len = 0;
24624 + } else if (hc->ep_is_in || (hc->xfer_len > hc->max_packet)) {
24625 + hc->xfer_len = hc->max_packet;
24626 + } else if (!hc->ep_is_in && (hc->xfer_len > 188)) {
24627 + hc->xfer_len = 188;
24628 + }
24629 +
24630 + hctsiz.b.xfersize = hc->xfer_len;
24631 + } else {
24632 + /*
24633 + * Ensure that the transfer length and packet count will fit
24634 + * in the widths allocated for them in the HCTSIZn register.
24635 + */
24636 + if (hc->ep_type == DWC_OTG_EP_TYPE_INTR ||
24637 + hc->ep_type == DWC_OTG_EP_TYPE_ISOC) {
24638 + /*
24639 + * Make sure the transfer size is no larger than one
24640 + * (micro)frame's worth of data. (A check was done
24641 + * when the periodic transfer was accepted to ensure
24642 + * that a (micro)frame's worth of data can be
24643 + * programmed into a channel.)
24644 + */
24645 + uint32_t max_periodic_len =
24646 + hc->multi_count * hc->max_packet;
24647 + if (hc->xfer_len > max_periodic_len) {
24648 + hc->xfer_len = max_periodic_len;
24649 + } else {
24650 + }
24651 + } else if (hc->xfer_len > max_hc_xfer_size) {
24652 + /* Make sure that xfer_len is a multiple of max packet size. */
24653 + hc->xfer_len = max_hc_xfer_size - hc->max_packet + 1;
24654 + }
24655 +
24656 + if (hc->xfer_len > 0) {
24657 + num_packets =
24658 + (hc->xfer_len + hc->max_packet -
24659 + 1) / hc->max_packet;
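+			/* Ceiling division: e.g. xfer_len = 1000 with a 64-byte
+			 * max packet gives (1000 + 63) / 64 = 16 packets. */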
24660 + if (num_packets > max_hc_pkt_count) {
24661 + num_packets = max_hc_pkt_count;
24662 + hc->xfer_len = num_packets * hc->max_packet;
24663 + }
24664 + } else {
24665 + /* Need 1 packet for transfer length of 0. */
24666 + num_packets = 1;
24667 + }
24668 +
24669 + if (hc->ep_is_in) {
24670 + /* Always program an integral # of max packets for IN transfers. */
24671 + hc->xfer_len = num_packets * hc->max_packet;
24672 + }
24673 +
24674 + if (hc->ep_type == DWC_OTG_EP_TYPE_INTR ||
24675 + hc->ep_type == DWC_OTG_EP_TYPE_ISOC) {
24676 + /*
24677 + * Make sure that the multi_count field matches the
24678 + * actual transfer length.
24679 + */
24680 + hc->multi_count = num_packets;
24681 + }
24682 +
24683 + if (hc->ep_type == DWC_OTG_EP_TYPE_ISOC)
24684 + set_pid_isoc(hc);
24685 +
24686 + hctsiz.b.xfersize = hc->xfer_len;
24687 + }
24688 +
24689 + hc->start_pkt_count = num_packets;
24690 + hctsiz.b.pktcnt = num_packets;
24691 + hctsiz.b.pid = hc->data_pid_start;
24692 + DWC_WRITE_REG32(&hc_regs->hctsiz, hctsiz.d32);
24693 +
24694 + DWC_DEBUGPL(DBG_HCDV, "%s: Channel %d\n", __func__, hc->hc_num);
24695 + DWC_DEBUGPL(DBG_HCDV, " Xfer Size: %d\n", hctsiz.b.xfersize);
24696 + DWC_DEBUGPL(DBG_HCDV, " Num Pkts: %d\n", hctsiz.b.pktcnt);
24697 + DWC_DEBUGPL(DBG_HCDV, " Start PID: %d\n", hctsiz.b.pid);
24698 +
24699 + if (core_if->dma_enable) {
24700 + dwc_dma_t dma_addr;
24701 + if (hc->align_buff) {
24702 + dma_addr = hc->align_buff;
24703 + } else {
24704 + dma_addr = ((unsigned long)hc->xfer_buff & 0xffffffff);
24705 + }
24706 + DWC_WRITE_REG32(&hc_regs->hcdma, dma_addr);
24707 + }
24708 +
24709 + /* Start the split */
24710 + if (hc->do_split) {
24711 + hcsplt_data_t hcsplt;
24712 + hcsplt.d32 = DWC_READ_REG32(&hc_regs->hcsplt);
24713 + hcsplt.b.spltena = 1;
24714 + DWC_WRITE_REG32(&hc_regs->hcsplt, hcsplt.d32);
24715 + }
24716 +
24717 + hcchar.d32 = DWC_READ_REG32(&hc_regs->hcchar);
24718 + hcchar.b.multicnt = hc->multi_count;
24719 + hc_set_even_odd_frame(core_if, hc, &hcchar);
24720 +#ifdef DEBUG
24721 + core_if->start_hcchar_val[hc->hc_num] = hcchar.d32;
24722 + if (hcchar.b.chdis) {
24723 + DWC_WARN("%s: chdis set, channel %d, hcchar 0x%08x\n",
24724 + __func__, hc->hc_num, hcchar.d32);
24725 + }
24726 +#endif
24727 +
24728 + /* Set host channel enable after all other setup is complete. */
24729 + hcchar.b.chen = 1;
24730 + hcchar.b.chdis = 0;
24731 + DWC_WRITE_REG32(&hc_regs->hcchar, hcchar.d32);
24732 +
24733 + hc->xfer_started = 1;
24734 + hc->requests++;
24735 +
24736 + if (!core_if->dma_enable && !hc->ep_is_in && hc->xfer_len > 0) {
24737 + /* Load OUT packet into the appropriate Tx FIFO. */
24738 + dwc_otg_hc_write_packet(core_if, hc);
24739 + }
24740 +#ifdef DEBUG
24741 + if (hc->ep_type != DWC_OTG_EP_TYPE_INTR) {
24742 + DWC_DEBUGPL(DBG_HCDV, "transfer %d from core_if %p\n",
24743 + hc->hc_num, core_if);//GRAYG
24744 + core_if->hc_xfer_info[hc->hc_num].core_if = core_if;
24745 + core_if->hc_xfer_info[hc->hc_num].hc = hc;
24746 +
24747 + /* Start a timer for this transfer. */
24748 + DWC_TIMER_SCHEDULE(core_if->hc_xfer_timer[hc->hc_num], 10000);
24749 + }
24750 +#endif
24751 +}
24752 +
24753 +/**
24754 + * This function does the setup for a data transfer for a host channel
24755 + * and starts the transfer in Descriptor DMA mode.
24756 + *
24757 + * Initializes HCTSIZ register. For a PING transfer the Do Ping bit is set.
24758 + * Sets PID and NTD values. For periodic transfers
24759 + * initializes SCHED_INFO field with micro-frame bitmap.
24760 + *
24761 + * Initializes HCDMA register with descriptor list address and CTD value
24762 + * then starts the transfer via enabling the channel.
24763 + *
24764 + * @param core_if Programming view of DWC_otg controller.
24765 + * @param hc Information needed to initialize the host channel.
24766 + */
24767 +void dwc_otg_hc_start_transfer_ddma(dwc_otg_core_if_t * core_if, dwc_hc_t * hc)
24768 +{
24769 + dwc_otg_hc_regs_t *hc_regs = core_if->host_if->hc_regs[hc->hc_num];
24770 + hcchar_data_t hcchar;
24771 + hctsiz_data_t hctsiz;
24772 + hcdma_data_t hcdma;
24773 +
24774 + hctsiz.d32 = 0;
24775 +
24776 + if (hc->do_ping)
24777 + hctsiz.b_ddma.dopng = 1;
24778 +
24779 + if (hc->ep_type == DWC_OTG_EP_TYPE_ISOC)
24780 + set_pid_isoc(hc);
24781 +
24782 + /* Packet Count and Xfer Size are not used in Descriptor DMA mode */
24783 + hctsiz.b_ddma.pid = hc->data_pid_start;
24784 + hctsiz.b_ddma.ntd = hc->ntd - 1; /* 0 - 1 descriptor, 1 - 2 descriptors, etc. */
24785 + hctsiz.b_ddma.schinfo = hc->schinfo; /* Non-zero only for high-speed interrupt endpoints */
24786 +
24787 + DWC_DEBUGPL(DBG_HCDV, "%s: Channel %d\n", __func__, hc->hc_num);
24788 + DWC_DEBUGPL(DBG_HCDV, " Start PID: %d\n", hctsiz.b.pid);
24789 + DWC_DEBUGPL(DBG_HCDV, " NTD: %d\n", hctsiz.b_ddma.ntd);
24790 +
24791 + DWC_WRITE_REG32(&hc_regs->hctsiz, hctsiz.d32);
24792 +
24793 + hcdma.d32 = 0;
24794 + hcdma.b.dma_addr = ((uint32_t) hc->desc_list_addr) >> 11;
24795 +
24796 + /* Always start from first descriptor. */
24797 + hcdma.b.ctd = 0;
24798 + DWC_WRITE_REG32(&hc_regs->hcdma, hcdma.d32);
24799 +
24800 + hcchar.d32 = DWC_READ_REG32(&hc_regs->hcchar);
24801 + hcchar.b.multicnt = hc->multi_count;
24802 +
24803 +#ifdef DEBUG
24804 + core_if->start_hcchar_val[hc->hc_num] = hcchar.d32;
24805 + if (hcchar.b.chdis) {
24806 + DWC_WARN("%s: chdis set, channel %d, hcchar 0x%08x\n",
24807 + __func__, hc->hc_num, hcchar.d32);
24808 + }
24809 +#endif
24810 +
24811 + /* Set host channel enable after all other setup is complete. */
24812 + hcchar.b.chen = 1;
24813 + hcchar.b.chdis = 0;
24814 +
24815 + DWC_WRITE_REG32(&hc_regs->hcchar, hcchar.d32);
24816 +
24817 + hc->xfer_started = 1;
24818 + hc->requests++;
24819 +
24820 +#ifdef DEBUG
24821 + if ((hc->ep_type != DWC_OTG_EP_TYPE_INTR)
24822 + && (hc->ep_type != DWC_OTG_EP_TYPE_ISOC)) {
24823 + DWC_DEBUGPL(DBG_HCDV, "DMA transfer %d from core_if %p\n",
24824 + hc->hc_num, core_if);//GRAYG
24825 + core_if->hc_xfer_info[hc->hc_num].core_if = core_if;
24826 + core_if->hc_xfer_info[hc->hc_num].hc = hc;
24827 + /* Start a timer for this transfer. */
24828 + DWC_TIMER_SCHEDULE(core_if->hc_xfer_timer[hc->hc_num], 10000);
24829 + }
24830 +#endif
24831 +
24832 +}
24833 +
24834 +/**
24835 + * This function continues a data transfer that was started by previous call
24836 + * to <code>dwc_otg_hc_start_transfer</code>. The caller must ensure there is
24837 + * sufficient space in the request queue and Tx Data FIFO. This function
24838 + * should only be called in Slave mode. In DMA mode, the controller acts
24839 + * autonomously to complete transfers programmed to a host channel.
24840 + *
24841 + * For an OUT transfer, a new data packet is loaded into the appropriate FIFO
24842 + * if there is any data remaining to be queued. For an IN transfer, another
24843 + * data packet is always requested. For the SETUP phase of a control transfer,
24844 + * this function does nothing.
24845 + *
24846 + * @return 1 if a new request is queued, 0 if no more requests are required
24847 + * for this transfer.
24848 + */
24849 +int dwc_otg_hc_continue_transfer(dwc_otg_core_if_t * core_if, dwc_hc_t * hc)
24850 +{
24851 + DWC_DEBUGPL(DBG_HCDV, "%s: Channel %d\n", __func__, hc->hc_num);
24852 +
24853 + if (hc->do_split) {
24854 + /* SPLITs always queue just once per channel */
24855 + return 0;
24856 + } else if (hc->data_pid_start == DWC_OTG_HC_PID_SETUP) {
24857 + /* SETUPs are queued only once since they can't be NAKed. */
24858 + return 0;
24859 + } else if (hc->ep_is_in) {
24860 + /*
24861 + * Always queue another request for other IN transfers. If
24862 + * back-to-back INs are issued and NAKs are received for both,
24863 + * the driver may still be processing the first NAK when the
24864 + * second NAK is received. When the interrupt handler clears
24865 + * the NAK interrupt for the first NAK, the second NAK will
24866 + * not be seen. So we can't depend on the NAK interrupt
24867 + * handler to requeue a NAKed request. Instead, IN requests
24868 + * are issued each time this function is called. When the
24869 + * transfer completes, the extra requests for the channel will
24870 + * be flushed.
24871 + */
24872 + hcchar_data_t hcchar;
24873 + dwc_otg_hc_regs_t *hc_regs =
24874 + core_if->host_if->hc_regs[hc->hc_num];
24875 +
24876 + hcchar.d32 = DWC_READ_REG32(&hc_regs->hcchar);
24877 + hc_set_even_odd_frame(core_if, hc, &hcchar);
24878 + hcchar.b.chen = 1;
24879 + hcchar.b.chdis = 0;
24880 + DWC_DEBUGPL(DBG_HCDV, " IN xfer: hcchar = 0x%08x\n",
24881 + hcchar.d32);
24882 + DWC_WRITE_REG32(&hc_regs->hcchar, hcchar.d32);
24883 + hc->requests++;
24884 + return 1;
24885 + } else {
24886 + /* OUT transfers. */
24887 + if (hc->xfer_count < hc->xfer_len) {
24888 + if (hc->ep_type == DWC_OTG_EP_TYPE_INTR ||
24889 + hc->ep_type == DWC_OTG_EP_TYPE_ISOC) {
24890 + hcchar_data_t hcchar;
24891 + dwc_otg_hc_regs_t *hc_regs;
24892 + hc_regs = core_if->host_if->hc_regs[hc->hc_num];
24893 + hcchar.d32 = DWC_READ_REG32(&hc_regs->hcchar);
24894 + hc_set_even_odd_frame(core_if, hc, &hcchar);
24895 + }
24896 +
24897 + /* Load OUT packet into the appropriate Tx FIFO. */
24898 + dwc_otg_hc_write_packet(core_if, hc);
24899 + hc->requests++;
24900 + return 1;
24901 + } else {
24902 + return 0;
24903 + }
24904 + }
24905 +}
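/*
 * Illustrative (hypothetical) Slave-mode caller loop, not code from this
 * driver: the HCD keeps calling dwc_otg_hc_continue_transfer() while the
 * request queue and Tx FIFO have room, and stops once it returns 0 to
 * indicate the channel needs no further requests for this transfer.
 *
 *	while (queue_and_fifo_space_available(...) &&
 *	       dwc_otg_hc_continue_transfer(core_if, hc))
 *		;
 */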
24906 +
24907 +/**
24908 + * Starts a PING transfer. This function should only be called in Slave mode.
24909 + * The Do Ping bit is set in the HCTSIZ register, then the channel is enabled.
24910 + */
24911 +void dwc_otg_hc_do_ping(dwc_otg_core_if_t * core_if, dwc_hc_t * hc)
24912 +{
24913 + hcchar_data_t hcchar;
24914 + hctsiz_data_t hctsiz;
24915 + dwc_otg_hc_regs_t *hc_regs = core_if->host_if->hc_regs[hc->hc_num];
24916 +
24917 + DWC_DEBUGPL(DBG_HCDV, "%s: Channel %d\n", __func__, hc->hc_num);
24918 +
24919 + hctsiz.d32 = 0;
24920 + hctsiz.b.dopng = 1;
24921 + hctsiz.b.pktcnt = 1;
24922 + DWC_WRITE_REG32(&hc_regs->hctsiz, hctsiz.d32);
24923 +
24924 + hcchar.d32 = DWC_READ_REG32(&hc_regs->hcchar);
24925 + hcchar.b.chen = 1;
24926 + hcchar.b.chdis = 0;
24927 + DWC_WRITE_REG32(&hc_regs->hcchar, hcchar.d32);
24928 +}
24929 +
24930 +/*
24931 + * This function writes a packet into the Tx FIFO associated with the Host
24932 + * Channel. For a channel associated with a non-periodic EP, the non-periodic
24933 + * Tx FIFO is written. For a channel associated with a periodic EP, the
24934 + * periodic Tx FIFO is written. This function should only be called in Slave
24935 + * mode.
24936 + *
24937 + * Upon return, the xfer_buff and xfer_count fields in hc are incremented by
24938 + * the number of bytes written to the Tx FIFO.
24939 + */
24940 +void dwc_otg_hc_write_packet(dwc_otg_core_if_t * core_if, dwc_hc_t * hc)
24941 +{
24942 + uint32_t i;
24943 + uint32_t remaining_count;
24944 + uint32_t byte_count;
24945 + uint32_t dword_count;
24946 +
24947 + uint32_t *data_buff = (uint32_t *) (hc->xfer_buff);
24948 + uint32_t *data_fifo = core_if->data_fifo[hc->hc_num];
24949 +
24950 + remaining_count = hc->xfer_len - hc->xfer_count;
24951 + if (remaining_count > hc->max_packet) {
24952 + byte_count = hc->max_packet;
24953 + } else {
24954 + byte_count = remaining_count;
24955 + }
24956 +
24957 + dword_count = (byte_count + 3) / 4;
24958 +
24959 + if ((((unsigned long)data_buff) & 0x3) == 0) {
24960 + /* xfer_buff is DWORD aligned. */
24961 + for (i = 0; i < dword_count; i++, data_buff++) {
24962 + DWC_WRITE_REG32(data_fifo, *data_buff);
24963 + }
24964 + } else {
24965 + /* xfer_buff is not DWORD aligned. */
24966 + for (i = 0; i < dword_count; i++, data_buff++) {
24967 + uint32_t data;
24968 + data =
24969 + (data_buff[0] | data_buff[1] << 8 | data_buff[2] <<
24970 + 16 | data_buff[3] << 24);
24971 + DWC_WRITE_REG32(data_fifo, data);
24972 + }
24973 + }
24974 +
24975 + hc->xfer_count += byte_count;
24976 + hc->xfer_buff += byte_count;
24977 +}
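/*
 * Worked example for the rounding above (illustrative values): a short
 * packet of byte_count = 13 gives dword_count = (13 + 3) / 4 = 4, so four
 * 32-bit words (16 bytes) are pushed to the FIFO.  The extra bytes are read
 * from the caller's buffer, which is therefore expected to be padded to a
 * DWORD multiple, similar to the padding convention described for
 * dwc_otg_ep_write_packet() later in this file.
 */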
24978 +
24979 +/**
24980 + * Gets the current USB frame number. This is the frame number from the last
24981 + * SOF packet.
24982 + */
24983 +uint32_t dwc_otg_get_frame_number(dwc_otg_core_if_t * core_if)
24984 +{
24985 + dsts_data_t dsts;
24986 + dsts.d32 = DWC_READ_REG32(&core_if->dev_if->dev_global_regs->dsts);
24987 +
24988 + /* read current frame/microframe number from DSTS register */
24989 + return dsts.b.soffn;
24990 +}
24991 +
24992 +/**
24993 + * Calculates the frame interval value for the HFIR register according to the
24994 + * PHY type and speed. The application can modify the HFIR register value only after
24995 + * the Port Enable bit of the Host Port Control and Status register
24996 + * (HPRT.PrtEnaPort) has been set.
24997 +*/
24998 +
24999 +uint32_t calc_frame_interval(dwc_otg_core_if_t * core_if)
25000 +{
25001 + gusbcfg_data_t usbcfg;
25002 + hwcfg2_data_t hwcfg2;
25003 + hprt0_data_t hprt0;
25004 + int clock = 60; // default value
25005 + usbcfg.d32 = DWC_READ_REG32(&core_if->core_global_regs->gusbcfg);
25006 + hwcfg2.d32 = DWC_READ_REG32(&core_if->core_global_regs->ghwcfg2);
25007 + hprt0.d32 = DWC_READ_REG32(core_if->host_if->hprt0);
25008 + if (!usbcfg.b.physel && usbcfg.b.ulpi_utmi_sel && !usbcfg.b.phyif)
25009 + clock = 60;
25010 + if (usbcfg.b.physel && hwcfg2.b.fs_phy_type == 3)
25011 + clock = 48;
25012 + if (!usbcfg.b.phylpwrclksel && !usbcfg.b.physel &&
25013 + !usbcfg.b.ulpi_utmi_sel && usbcfg.b.phyif)
25014 + clock = 30;
25015 + if (!usbcfg.b.phylpwrclksel && !usbcfg.b.physel &&
25016 + !usbcfg.b.ulpi_utmi_sel && !usbcfg.b.phyif)
25017 + clock = 60;
25018 + if (usbcfg.b.phylpwrclksel && !usbcfg.b.physel &&
25019 + !usbcfg.b.ulpi_utmi_sel && usbcfg.b.phyif)
25020 + clock = 48;
25021 + if (usbcfg.b.physel && !usbcfg.b.phyif && hwcfg2.b.fs_phy_type == 2)
25022 + clock = 48;
25023 + if (usbcfg.b.physel && hwcfg2.b.fs_phy_type == 1)
25024 + clock = 48;
25025 + if (hprt0.b.prtspd == 0)
25026 + /* High speed case */
25027 + return 125 * clock - 1;
25028 + else
25029 + /* FS/LS case */
25030 + return 1000 * clock - 1;
25031 +}
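A minimal, self-contained sketch of the arithmetic above (illustrative values,
not part of the driver): the PHY clock in MHz is multiplied by the frame
length, 125 us per high-speed (micro)frame or 1 ms per FS/LS frame, minus one
clock tick.

#include <stdio.h>

/* Mirror of the HFIR interval math above (clock in MHz, result in PHY clocks). */
static unsigned int hfir_interval(unsigned int clock_mhz, int high_speed)
{
	return high_speed ? 125 * clock_mhz - 1 : 1000 * clock_mhz - 1;
}

int main(void)
{
	/* 60 MHz PHY clock: 7499 ticks per 125 us microframe at high speed */
	printf("HS, 60 MHz: %u\n", hfir_interval(60, 1));
	/* 48 MHz FS PHY clock: 47999 ticks per 1 ms frame */
	printf("FS, 48 MHz: %u\n", hfir_interval(48, 0));
	return 0;
}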
25032 +
25033 +/**
25034 + * This function reads a setup packet from the Rx FIFO into the destination
25035 + * buffer. This function is called from the Rx Status Queue Level (RxStsQLvl)
25036 + * Interrupt routine when a SETUP packet has been received in Slave mode.
25037 + *
25038 + * @param core_if Programming view of DWC_otg controller.
25039 + * @param dest Destination buffer for packet data.
25040 + */
25041 +void dwc_otg_read_setup_packet(dwc_otg_core_if_t * core_if, uint32_t * dest)
25042 +{
25043 + device_grxsts_data_t status;
25044 +	/* Get the 8 bytes of setup transaction data */
25045 +
25046 + /* Pop 2 DWORDS off the receive data FIFO into memory */
25047 + dest[0] = DWC_READ_REG32(core_if->data_fifo[0]);
25048 + dest[1] = DWC_READ_REG32(core_if->data_fifo[0]);
25049 + if (core_if->snpsid >= OTG_CORE_REV_3_00a) {
25050 + status.d32 =
25051 + DWC_READ_REG32(&core_if->core_global_regs->grxstsp);
25052 + DWC_DEBUGPL(DBG_ANY,
25053 + "EP:%d BCnt:%d " "pktsts:%x Frame:%d(0x%0x)\n",
25054 + status.b.epnum, status.b.bcnt, status.b.pktsts,
25055 + status.b.fn, status.b.fn);
25056 + }
25057 +}
25058 +
25059 +/**
25060 + * This function enables EP0 OUT to receive SETUP packets and configures EP0
25061 + * IN for transmitting packets. It is normally called when the
25062 + * "Enumeration Done" interrupt occurs.
25063 + *
25064 + * @param core_if Programming view of DWC_otg controller.
25065 + * @param ep The EP0 data.
25066 + */
25067 +void dwc_otg_ep0_activate(dwc_otg_core_if_t * core_if, dwc_ep_t * ep)
25068 +{
25069 + dwc_otg_dev_if_t *dev_if = core_if->dev_if;
25070 + dsts_data_t dsts;
25071 + depctl_data_t diepctl;
25072 + depctl_data_t doepctl;
25073 + dctl_data_t dctl = {.d32 = 0 };
25074 +
25075 + ep->stp_rollover = 0;
25076 + /* Read the Device Status and Endpoint 0 Control registers */
25077 + dsts.d32 = DWC_READ_REG32(&dev_if->dev_global_regs->dsts);
25078 + diepctl.d32 = DWC_READ_REG32(&dev_if->in_ep_regs[0]->diepctl);
25079 + doepctl.d32 = DWC_READ_REG32(&dev_if->out_ep_regs[0]->doepctl);
25080 +
25081 + /* Set the MPS of the IN EP based on the enumeration speed */
25082 + switch (dsts.b.enumspd) {
25083 + case DWC_DSTS_ENUMSPD_HS_PHY_30MHZ_OR_60MHZ:
25084 + case DWC_DSTS_ENUMSPD_FS_PHY_30MHZ_OR_60MHZ:
25085 + case DWC_DSTS_ENUMSPD_FS_PHY_48MHZ:
25086 + diepctl.b.mps = DWC_DEP0CTL_MPS_64;
25087 + break;
25088 + case DWC_DSTS_ENUMSPD_LS_PHY_6MHZ:
25089 + diepctl.b.mps = DWC_DEP0CTL_MPS_8;
25090 + break;
25091 + }
25092 +
25093 + DWC_WRITE_REG32(&dev_if->in_ep_regs[0]->diepctl, diepctl.d32);
25094 +
25095 + /* Enable OUT EP for receive */
25096 + if (core_if->snpsid <= OTG_CORE_REV_2_94a) {
25097 + doepctl.b.epena = 1;
25098 + DWC_WRITE_REG32(&dev_if->out_ep_regs[0]->doepctl, doepctl.d32);
25099 + }
25100 +#ifdef VERBOSE
25101 + DWC_DEBUGPL(DBG_PCDV, "doepctl0=%0x\n",
25102 + DWC_READ_REG32(&dev_if->out_ep_regs[0]->doepctl));
25103 + DWC_DEBUGPL(DBG_PCDV, "diepctl0=%0x\n",
25104 + DWC_READ_REG32(&dev_if->in_ep_regs[0]->diepctl));
25105 +#endif
25106 + dctl.b.cgnpinnak = 1;
25107 +
25108 + DWC_MODIFY_REG32(&dev_if->dev_global_regs->dctl, dctl.d32, dctl.d32);
25109 + DWC_DEBUGPL(DBG_PCDV, "dctl=%0x\n",
25110 + DWC_READ_REG32(&dev_if->dev_global_regs->dctl));
25111 +
25112 +}
25113 +
25114 +/**
25115 + * This function activates an EP. The Device EP control register for
25116 + * the EP is configured as defined in the ep structure. Note: This
25117 + * function is not used for EP0.
25118 + *
25119 + * @param core_if Programming view of DWC_otg controller.
25120 + * @param ep The EP to activate.
25121 + */
25122 +void dwc_otg_ep_activate(dwc_otg_core_if_t * core_if, dwc_ep_t * ep)
25123 +{
25124 + dwc_otg_dev_if_t *dev_if = core_if->dev_if;
25125 + depctl_data_t depctl;
25126 + volatile uint32_t *addr;
25127 + daint_data_t daintmsk = {.d32 = 0 };
25128 + dcfg_data_t dcfg;
25129 + uint8_t i;
25130 +
25131 + DWC_DEBUGPL(DBG_PCDV, "%s() EP%d-%s\n", __func__, ep->num,
25132 + (ep->is_in ? "IN" : "OUT"));
25133 +
25134 +#ifdef DWC_UTE_PER_IO
25135 + ep->xiso_frame_num = 0xFFFFFFFF;
25136 + ep->xiso_active_xfers = 0;
25137 + ep->xiso_queued_xfers = 0;
25138 +#endif
25139 + /* Read DEPCTLn register */
25140 + if (ep->is_in == 1) {
25141 + addr = &dev_if->in_ep_regs[ep->num]->diepctl;
25142 + daintmsk.ep.in = 1 << ep->num;
25143 + } else {
25144 + addr = &dev_if->out_ep_regs[ep->num]->doepctl;
25145 + daintmsk.ep.out = 1 << ep->num;
25146 + }
25147 +
25148 +	/* If the EP is already active, don't change the EP Control
25149 +	 * register. */
25150 + depctl.d32 = DWC_READ_REG32(addr);
25151 + if (!depctl.b.usbactep) {
25152 + depctl.b.mps = ep->maxpacket;
25153 + depctl.b.eptype = ep->type;
25154 + depctl.b.txfnum = ep->tx_fifo_num;
25155 +
25156 + if (ep->type == DWC_OTG_EP_TYPE_ISOC) {
25157 + depctl.b.setd0pid = 1; // ???
25158 + } else {
25159 + depctl.b.setd0pid = 1;
25160 + }
25161 + depctl.b.usbactep = 1;
25162 +
25163 + /* Update nextep_seq array and EPMSCNT in DCFG*/
25164 + if (!(depctl.b.eptype & 1) && (ep->is_in == 1)) { // NP IN EP
25165 + for (i = 0; i <= core_if->dev_if->num_in_eps; i++) {
25166 + if (core_if->nextep_seq[i] == core_if->first_in_nextep_seq)
25167 + break;
25168 + }
25169 + core_if->nextep_seq[i] = ep->num;
25170 + core_if->nextep_seq[ep->num] = core_if->first_in_nextep_seq;
25171 + depctl.b.nextep = core_if->nextep_seq[ep->num];
25172 + dcfg.d32 = DWC_READ_REG32(&dev_if->dev_global_regs->dcfg);
25173 + dcfg.b.epmscnt++;
25174 + DWC_WRITE_REG32(&dev_if->dev_global_regs->dcfg, dcfg.d32);
25175 +
25176 + DWC_DEBUGPL(DBG_PCDV,
25177 + "%s first_in_nextep_seq= %2d; nextep_seq[]:\n",
25178 + __func__, core_if->first_in_nextep_seq);
25179 + for (i=0; i <= core_if->dev_if->num_in_eps; i++) {
25180 + DWC_DEBUGPL(DBG_PCDV, "%2d\n",
25181 + core_if->nextep_seq[i]);
25182 + }
25183 +
25184 + }
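		/*
		 * Worked example of the update above (illustrative numbers):
		 * with first_in_nextep_seq = 0 and only EP0 in the ring
		 * (nextep_seq[0] = 0), activating non-periodic IN EP1 finds
		 * i = 0, then sets nextep_seq[0] = 1 and nextep_seq[1] = 0,
		 * giving the ring 0 -> 1 -> 0.  Activating EP3 afterwards
		 * finds i = 1 and yields 0 -> 1 -> 3 -> 0, so each new EP is
		 * linked in just before first_in_nextep_seq.
		 */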
25185 +
25186 +
25187 + DWC_WRITE_REG32(addr, depctl.d32);
25188 + DWC_DEBUGPL(DBG_PCDV, "DEPCTL=%08x\n", DWC_READ_REG32(addr));
25189 + }
25190 +
25191 + /* Enable the Interrupt for this EP */
25192 + if (core_if->multiproc_int_enable) {
25193 + if (ep->is_in == 1) {
25194 + diepmsk_data_t diepmsk = {.d32 = 0 };
25195 + diepmsk.b.xfercompl = 1;
25196 + diepmsk.b.timeout = 1;
25197 + diepmsk.b.epdisabled = 1;
25198 + diepmsk.b.ahberr = 1;
25199 + diepmsk.b.intknepmis = 1;
25200 + if (!core_if->en_multiple_tx_fifo && core_if->dma_enable)
25201 + diepmsk.b.intknepmis = 0;
25202 + diepmsk.b.txfifoundrn = 1; //?????
25203 + if (ep->type == DWC_OTG_EP_TYPE_ISOC) {
25204 + diepmsk.b.nak = 1;
25205 + }
25206 +
25207 +
25208 +
25209 +/*
25210 + if (core_if->dma_desc_enable) {
25211 + diepmsk.b.bna = 1;
25212 + }
25213 +*/
25214 +/*
25215 + if (core_if->dma_enable) {
25216 + doepmsk.b.nak = 1;
25217 + }
25218 +*/
25219 + DWC_WRITE_REG32(&dev_if->dev_global_regs->
25220 + diepeachintmsk[ep->num], diepmsk.d32);
25221 +
25222 + } else {
25223 + doepmsk_data_t doepmsk = {.d32 = 0 };
25224 + doepmsk.b.xfercompl = 1;
25225 + doepmsk.b.ahberr = 1;
25226 + doepmsk.b.epdisabled = 1;
25227 + if (ep->type == DWC_OTG_EP_TYPE_ISOC)
25228 + doepmsk.b.outtknepdis = 1;
25229 +
25230 +/*
25231 +
25232 + if (core_if->dma_desc_enable) {
25233 + doepmsk.b.bna = 1;
25234 + }
25235 +*/
25236 +/*
25237 + doepmsk.b.babble = 1;
25238 + doepmsk.b.nyet = 1;
25239 + doepmsk.b.nak = 1;
25240 +*/
25241 + DWC_WRITE_REG32(&dev_if->dev_global_regs->
25242 + doepeachintmsk[ep->num], doepmsk.d32);
25243 + }
25244 + DWC_MODIFY_REG32(&dev_if->dev_global_regs->deachintmsk,
25245 + 0, daintmsk.d32);
25246 + } else {
25247 + if (ep->type == DWC_OTG_EP_TYPE_ISOC) {
25248 + if (ep->is_in) {
25249 + diepmsk_data_t diepmsk = {.d32 = 0 };
25250 + diepmsk.b.nak = 1;
25251 + DWC_MODIFY_REG32(&dev_if->dev_global_regs->diepmsk, 0, diepmsk.d32);
25252 + } else {
25253 + doepmsk_data_t doepmsk = {.d32 = 0 };
25254 + doepmsk.b.outtknepdis = 1;
25255 + DWC_MODIFY_REG32(&dev_if->dev_global_regs->doepmsk, 0, doepmsk.d32);
25256 + }
25257 + }
25258 + DWC_MODIFY_REG32(&dev_if->dev_global_regs->daintmsk,
25259 + 0, daintmsk.d32);
25260 + }
25261 +
25262 + DWC_DEBUGPL(DBG_PCDV, "DAINTMSK=%0x\n",
25263 + DWC_READ_REG32(&dev_if->dev_global_regs->daintmsk));
25264 +
25265 + ep->stall_clear_flag = 0;
25266 +
25267 + return;
25268 +}
25269 +
25270 +/**
25271 + * This function deactivates an EP. This is done by clearing the USB Active
25272 + * EP bit in the Device EP control register. Note: This function is not used
25273 + * for EP0. EP0 cannot be deactivated.
25274 + *
25275 + * @param core_if Programming view of DWC_otg controller.
25276 + * @param ep The EP to deactivate.
25277 + */
25278 +void dwc_otg_ep_deactivate(dwc_otg_core_if_t * core_if, dwc_ep_t * ep)
25279 +{
25280 + depctl_data_t depctl = {.d32 = 0 };
25281 + volatile uint32_t *addr;
25282 + daint_data_t daintmsk = {.d32 = 0 };
25283 + dcfg_data_t dcfg;
25284 + uint8_t i = 0;
25285 +
25286 +#ifdef DWC_UTE_PER_IO
25287 + ep->xiso_frame_num = 0xFFFFFFFF;
25288 + ep->xiso_active_xfers = 0;
25289 + ep->xiso_queued_xfers = 0;
25290 +#endif
25291 +
25292 + /* Read DEPCTLn register */
25293 + if (ep->is_in == 1) {
25294 + addr = &core_if->dev_if->in_ep_regs[ep->num]->diepctl;
25295 + daintmsk.ep.in = 1 << ep->num;
25296 + } else {
25297 + addr = &core_if->dev_if->out_ep_regs[ep->num]->doepctl;
25298 + daintmsk.ep.out = 1 << ep->num;
25299 + }
25300 +
25301 + depctl.d32 = DWC_READ_REG32(addr);
25302 +
25303 + depctl.b.usbactep = 0;
25304 +
25305 + /* Update nextep_seq array and EPMSCNT in DCFG*/
25306 + if (!(depctl.b.eptype & 1) && ep->is_in == 1) { // NP EP IN
25307 + for (i = 0; i <= core_if->dev_if->num_in_eps; i++) {
25308 + if (core_if->nextep_seq[i] == ep->num)
25309 + break;
25310 + }
25311 + core_if->nextep_seq[i] = core_if->nextep_seq[ep->num];
25312 + if (core_if->first_in_nextep_seq == ep->num)
25313 + core_if->first_in_nextep_seq = i;
25314 + core_if->nextep_seq[ep->num] = 0xff;
25315 + depctl.b.nextep = 0;
25316 + dcfg.d32 =
25317 + DWC_READ_REG32(&core_if->dev_if->dev_global_regs->dcfg);
25318 + dcfg.b.epmscnt--;
25319 + DWC_WRITE_REG32(&core_if->dev_if->dev_global_regs->dcfg,
25320 + dcfg.d32);
25321 +
25322 + DWC_DEBUGPL(DBG_PCDV,
25323 + "%s first_in_nextep_seq= %2d; nextep_seq[]:\n",
25324 + __func__, core_if->first_in_nextep_seq);
25325 + for (i=0; i <= core_if->dev_if->num_in_eps; i++) {
25326 + DWC_DEBUGPL(DBG_PCDV, "%2d\n", core_if->nextep_seq[i]);
25327 + }
25328 + }
25329 +
25330 + if (ep->is_in == 1)
25331 + depctl.b.txfnum = 0;
25332 +
25333 + if (core_if->dma_desc_enable)
25334 + depctl.b.epdis = 1;
25335 +
25336 + DWC_WRITE_REG32(addr, depctl.d32);
25337 + depctl.d32 = DWC_READ_REG32(addr);
25338 + if (core_if->dma_enable && ep->type == DWC_OTG_EP_TYPE_ISOC
25339 + && depctl.b.epena) {
25340 + depctl_data_t depctl = {.d32 = 0};
25341 + if (ep->is_in) {
25342 + diepint_data_t diepint = {.d32 = 0};
25343 +
25344 + depctl.b.snak = 1;
25345 + DWC_WRITE_REG32(&core_if->dev_if->in_ep_regs[ep->num]->
25346 + diepctl, depctl.d32);
25347 + do {
25348 + dwc_udelay(10);
25349 + diepint.d32 =
25350 + DWC_READ_REG32(&core_if->
25351 + dev_if->in_ep_regs[ep->num]->
25352 + diepint);
25353 + } while (!diepint.b.inepnakeff);
25354 + diepint.b.inepnakeff = 1;
25355 + DWC_WRITE_REG32(&core_if->dev_if->in_ep_regs[ep->num]->
25356 + diepint, diepint.d32);
25357 + depctl.d32 = 0;
25358 + depctl.b.epdis = 1;
25359 + DWC_WRITE_REG32(&core_if->dev_if->in_ep_regs[ep->num]->
25360 + diepctl, depctl.d32);
25361 + do {
25362 + dwc_udelay(10);
25363 + diepint.d32 =
25364 + DWC_READ_REG32(&core_if->
25365 + dev_if->in_ep_regs[ep->num]->
25366 + diepint);
25367 + } while (!diepint.b.epdisabled);
25368 + diepint.b.epdisabled = 1;
25369 + DWC_WRITE_REG32(&core_if->dev_if->in_ep_regs[ep->num]->
25370 + diepint, diepint.d32);
25371 + } else {
25372 + dctl_data_t dctl = {.d32 = 0};
25373 + gintmsk_data_t gintsts = {.d32 = 0};
25374 + doepint_data_t doepint = {.d32 = 0};
25375 + dctl.b.sgoutnak = 1;
25376 + DWC_MODIFY_REG32(&core_if->dev_if->dev_global_regs->
25377 + dctl, 0, dctl.d32);
25378 + do {
25379 + dwc_udelay(10);
25380 + gintsts.d32 = DWC_READ_REG32(&core_if->core_global_regs->gintsts);
25381 + } while (!gintsts.b.goutnakeff);
25382 + gintsts.d32 = 0;
25383 + gintsts.b.goutnakeff = 1;
25384 + DWC_WRITE_REG32(&core_if->core_global_regs->gintsts, gintsts.d32);
25385 +
25386 + depctl.d32 = 0;
25387 + depctl.b.epdis = 1;
25388 + depctl.b.snak = 1;
25389 + DWC_WRITE_REG32(&core_if->dev_if->out_ep_regs[ep->num]->doepctl, depctl.d32);
25390 + do
25391 + {
25392 + dwc_udelay(10);
25393 + doepint.d32 = DWC_READ_REG32(&core_if->dev_if->
25394 + out_ep_regs[ep->num]->doepint);
25395 + } while (!doepint.b.epdisabled);
25396 +
25397 + doepint.b.epdisabled = 1;
25398 + DWC_WRITE_REG32(&core_if->dev_if->out_ep_regs[ep->num]->doepint, doepint.d32);
25399 +
25400 + dctl.d32 = 0;
25401 + dctl.b.cgoutnak = 1;
25402 + DWC_MODIFY_REG32(&core_if->dev_if->dev_global_regs->dctl, 0, dctl.d32);
25403 + }
25404 + }
25405 +
25406 + /* Disable the Interrupt for this EP */
25407 + if (core_if->multiproc_int_enable) {
25408 + DWC_MODIFY_REG32(&core_if->dev_if->dev_global_regs->deachintmsk,
25409 + daintmsk.d32, 0);
25410 +
25411 + if (ep->is_in == 1) {
25412 + DWC_WRITE_REG32(&core_if->dev_if->dev_global_regs->
25413 + diepeachintmsk[ep->num], 0);
25414 + } else {
25415 + DWC_WRITE_REG32(&core_if->dev_if->dev_global_regs->
25416 + doepeachintmsk[ep->num], 0);
25417 + }
25418 + } else {
25419 + DWC_MODIFY_REG32(&core_if->dev_if->dev_global_regs->daintmsk,
25420 + daintmsk.d32, 0);
25421 + }
25422 +
25423 +}
25424 +
25425 +/**
25426 + * This function initializes the DMA descriptor chain.
25427 + *
25428 + * @param core_if Programming view of DWC_otg controller.
25429 + * @param ep The EP to start the transfer on.
25430 + */
25431 +static void init_dma_desc_chain(dwc_otg_core_if_t * core_if, dwc_ep_t * ep)
25432 +{
25433 + dwc_otg_dev_dma_desc_t *dma_desc;
25434 + uint32_t offset;
25435 + uint32_t xfer_est;
25436 + int i;
25437 + unsigned maxxfer_local, total_len;
25438 +
25439 + if (!ep->is_in && ep->type == DWC_OTG_EP_TYPE_INTR &&
25440 + (ep->maxpacket%4)) {
25441 + maxxfer_local = ep->maxpacket;
25442 + total_len = ep->xfer_len;
25443 + } else {
25444 + maxxfer_local = ep->maxxfer;
25445 + total_len = ep->total_len;
25446 + }
25447 +
25448 + ep->desc_cnt = (total_len / maxxfer_local) +
25449 + ((total_len % maxxfer_local) ? 1 : 0);
25450 +
25451 + if (!ep->desc_cnt)
25452 + ep->desc_cnt = 1;
25453 +
25454 + if (ep->desc_cnt > MAX_DMA_DESC_CNT)
25455 + ep->desc_cnt = MAX_DMA_DESC_CNT;
25456 +
25457 + dma_desc = ep->desc_addr;
25458 + if (maxxfer_local == ep->maxpacket) {
25459 + if ((total_len % maxxfer_local) &&
25460 + (total_len/maxxfer_local < MAX_DMA_DESC_CNT)) {
25461 + xfer_est = (ep->desc_cnt - 1) * maxxfer_local +
25462 + (total_len % maxxfer_local);
25463 + } else
25464 + xfer_est = ep->desc_cnt * maxxfer_local;
25465 + } else
25466 + xfer_est = total_len;
25467 + offset = 0;
25468 + for (i = 0; i < ep->desc_cnt; ++i) {
25469 + /** DMA Descriptor Setup */
25470 + if (xfer_est > maxxfer_local) {
25471 + dma_desc->status.b.bs = BS_HOST_BUSY;
25472 + dma_desc->status.b.l = 0;
25473 + dma_desc->status.b.ioc = 0;
25474 + dma_desc->status.b.sp = 0;
25475 + dma_desc->status.b.bytes = maxxfer_local;
25476 + dma_desc->buf = ep->dma_addr + offset;
25477 + dma_desc->status.b.sts = 0;
25478 + dma_desc->status.b.bs = BS_HOST_READY;
25479 +
25480 + xfer_est -= maxxfer_local;
25481 + offset += maxxfer_local;
25482 + } else {
25483 + dma_desc->status.b.bs = BS_HOST_BUSY;
25484 + dma_desc->status.b.l = 1;
25485 + dma_desc->status.b.ioc = 1;
25486 + if (ep->is_in) {
25487 + dma_desc->status.b.sp =
25488 + (xfer_est %
25489 + ep->maxpacket) ? 1 : ((ep->
25490 + sent_zlp) ? 1 : 0);
25491 + dma_desc->status.b.bytes = xfer_est;
25492 + } else {
25493 + if (maxxfer_local == ep->maxpacket)
25494 + dma_desc->status.b.bytes = xfer_est;
25495 + else
25496 + dma_desc->status.b.bytes =
25497 + xfer_est + ((4 - (xfer_est & 0x3)) & 0x3);
25498 + }
25499 +
25500 + dma_desc->buf = ep->dma_addr + offset;
25501 + dma_desc->status.b.sts = 0;
25502 + dma_desc->status.b.bs = BS_HOST_READY;
25503 + }
25504 + dma_desc++;
25505 + }
25506 +}
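A rough, self-contained sketch of how the chain above is sized, assuming the
common case where maxxfer_local is ep->maxxfer (so xfer_est equals the total
length). The values and the 64-descriptor limit are illustrative stand-ins,
the latter for whatever MAX_DMA_DESC_CNT is defined as.

#include <stdio.h>

#define EXAMPLE_MAX_DESC 64	/* illustrative stand-in for MAX_DMA_DESC_CNT */

int main(void)
{
	unsigned int total_len = 10000;	/* illustrative transfer length */
	unsigned int maxxfer = 4096;	/* illustrative per-descriptor limit */
	unsigned int desc_cnt, i, left;

	desc_cnt = total_len / maxxfer + ((total_len % maxxfer) ? 1 : 0);
	if (!desc_cnt)
		desc_cnt = 1;
	if (desc_cnt > EXAMPLE_MAX_DESC)
		desc_cnt = EXAMPLE_MAX_DESC;

	/* Every descriptor carries maxxfer bytes except the last, which gets
	 * the remainder and would have its L (last) and IOC bits set. */
	for (i = 0, left = total_len; i < desc_cnt; i++) {
		unsigned int bytes = (left > maxxfer) ? maxxfer : left;
		printf("desc %u: %u bytes%s\n", i, bytes,
		       (left <= maxxfer) ? " (L, IOC)" : "");
		left -= bytes;
	}
	return 0;	/* prints 4096, 4096, 1808 for these values */
}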
25507 +/**
25508 + * This function is called to write ISOC data into the appropriate dedicated
25509 + * periodic FIFO.
25510 + */
25511 +static int32_t write_isoc_tx_fifo(dwc_otg_core_if_t * core_if, dwc_ep_t * dwc_ep)
25512 +{
25513 + dwc_otg_dev_if_t *dev_if = core_if->dev_if;
25514 + dwc_otg_dev_in_ep_regs_t *ep_regs;
25515 + dtxfsts_data_t txstatus = {.d32 = 0 };
25516 + uint32_t len = 0;
25517 + int epnum = dwc_ep->num;
25518 + int dwords;
25519 +
25520 + DWC_DEBUGPL(DBG_PCD, "Dedicated TxFifo Empty: %d \n", epnum);
25521 +
25522 + ep_regs = core_if->dev_if->in_ep_regs[epnum];
25523 +
25524 + len = dwc_ep->xfer_len - dwc_ep->xfer_count;
25525 +
25526 + if (len > dwc_ep->maxpacket) {
25527 + len = dwc_ep->maxpacket;
25528 + }
25529 +
25530 + dwords = (len + 3) / 4;
25531 +
25532 + /* While there is space in the queue and space in the FIFO and
25533 +	 * more data to transfer, write packets to the Tx FIFO */
25534 + txstatus.d32 = DWC_READ_REG32(&dev_if->in_ep_regs[epnum]->dtxfsts);
25535 + DWC_DEBUGPL(DBG_PCDV, "b4 dtxfsts[%d]=0x%08x\n", epnum, txstatus.d32);
25536 +
25537 + while (txstatus.b.txfspcavail > dwords &&
25538 + dwc_ep->xfer_count < dwc_ep->xfer_len && dwc_ep->xfer_len != 0) {
25539 + /* Write the FIFO */
25540 + dwc_otg_ep_write_packet(core_if, dwc_ep, 0);
25541 +
25542 + len = dwc_ep->xfer_len - dwc_ep->xfer_count;
25543 + if (len > dwc_ep->maxpacket) {
25544 + len = dwc_ep->maxpacket;
25545 + }
25546 +
25547 + dwords = (len + 3) / 4;
25548 + txstatus.d32 =
25549 + DWC_READ_REG32(&dev_if->in_ep_regs[epnum]->dtxfsts);
25550 + DWC_DEBUGPL(DBG_PCDV, "dtxfsts[%d]=0x%08x\n", epnum,
25551 + txstatus.d32);
25552 + }
25553 +
25554 + DWC_DEBUGPL(DBG_PCDV, "b4 dtxfsts[%d]=0x%08x\n", epnum,
25555 + DWC_READ_REG32(&dev_if->in_ep_regs[epnum]->dtxfsts));
25556 +
25557 + return 1;
25558 +}
25559 +/**
25560 + * This function does the setup for a data transfer for an EP and
25561 + * starts the transfer. For an IN transfer, the packets will be
25562 + * loaded into the appropriate Tx FIFO in the ISR. For OUT transfers,
25563 + * the packets are unloaded from the Rx FIFO in the ISR.
25564 + *
25565 + * @param core_if Programming view of DWC_otg controller.
25566 + * @param ep The EP to start the transfer on.
25567 + */
25568 +
25569 +void dwc_otg_ep_start_transfer(dwc_otg_core_if_t * core_if, dwc_ep_t * ep)
25570 +{
25571 + depctl_data_t depctl;
25572 + deptsiz_data_t deptsiz;
25573 + gintmsk_data_t intr_mask = {.d32 = 0 };
25574 +
25575 + DWC_DEBUGPL((DBG_PCDV | DBG_CILV), "%s()\n", __func__);
25576 + DWC_DEBUGPL(DBG_PCD, "ep%d-%s xfer_len=%d xfer_cnt=%d "
25577 + "xfer_buff=%p start_xfer_buff=%p, total_len = %d\n",
25578 + ep->num, (ep->is_in ? "IN" : "OUT"), ep->xfer_len,
25579 + ep->xfer_count, ep->xfer_buff, ep->start_xfer_buff,
25580 + ep->total_len);
25581 + /* IN endpoint */
25582 + if (ep->is_in == 1) {
25583 + dwc_otg_dev_in_ep_regs_t *in_regs =
25584 + core_if->dev_if->in_ep_regs[ep->num];
25585 +
25586 + gnptxsts_data_t gtxstatus;
25587 +
25588 + gtxstatus.d32 =
25589 + DWC_READ_REG32(&core_if->core_global_regs->gnptxsts);
25590 +
25591 + if (core_if->en_multiple_tx_fifo == 0
25592 + && gtxstatus.b.nptxqspcavail == 0 && !core_if->dma_enable) {
25593 +#ifdef DEBUG
25594 + DWC_PRINTF("TX Queue Full (0x%0x)\n", gtxstatus.d32);
25595 +#endif
25596 + return;
25597 + }
25598 +
25599 + depctl.d32 = DWC_READ_REG32(&(in_regs->diepctl));
25600 + deptsiz.d32 = DWC_READ_REG32(&(in_regs->dieptsiz));
25601 +
25602 + if (ep->maxpacket > ep->maxxfer / MAX_PKT_CNT)
25603 + ep->xfer_len += (ep->maxxfer < (ep->total_len - ep->xfer_len)) ?
25604 + ep->maxxfer : (ep->total_len - ep->xfer_len);
25605 + else
25606 + ep->xfer_len += (MAX_PKT_CNT * ep->maxpacket < (ep->total_len - ep->xfer_len)) ?
25607 + MAX_PKT_CNT * ep->maxpacket : (ep->total_len - ep->xfer_len);
25608 +
25609 +
25610 + /* Zero Length Packet? */
25611 + if ((ep->xfer_len - ep->xfer_count) == 0) {
25612 + deptsiz.b.xfersize = 0;
25613 + deptsiz.b.pktcnt = 1;
25614 + } else {
25615 + /* Program the transfer size and packet count
25616 + * as follows: xfersize = N * maxpacket +
25617 + * short_packet pktcnt = N + (short_packet
25618 + * exist ? 1 : 0)
25619 + */
25620 + deptsiz.b.xfersize = ep->xfer_len - ep->xfer_count;
25621 + deptsiz.b.pktcnt =
25622 + (ep->xfer_len - ep->xfer_count - 1 +
25623 + ep->maxpacket) / ep->maxpacket;
25624 + if (deptsiz.b.pktcnt > MAX_PKT_CNT) {
25625 + deptsiz.b.pktcnt = MAX_PKT_CNT;
25626 + deptsiz.b.xfersize = deptsiz.b.pktcnt * ep->maxpacket;
25627 + }
25628 + if (ep->type == DWC_OTG_EP_TYPE_ISOC)
25629 + deptsiz.b.mc = deptsiz.b.pktcnt;
25630 + }
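		/*
		 * Worked example (illustrative values): with xfer_len -
		 * xfer_count = 1000 and maxpacket = 64, pktcnt =
		 * (1000 - 1 + 64) / 64 = 16 and xfersize = 1000, i.e. fifteen
		 * full 64-byte packets plus one 40-byte short packet.  A
		 * zero-length transfer instead programs xfersize = 0 with
		 * pktcnt = 1 so the core still sends a single empty packet.
		 */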
25631 +
25632 + /* Write the DMA register */
25633 + if (core_if->dma_enable) {
25634 + if (core_if->dma_desc_enable == 0) {
25635 + if (ep->type != DWC_OTG_EP_TYPE_ISOC)
25636 + deptsiz.b.mc = 1;
25637 + DWC_WRITE_REG32(&in_regs->dieptsiz,
25638 + deptsiz.d32);
25639 + DWC_WRITE_REG32(&(in_regs->diepdma),
25640 + (uint32_t) ep->dma_addr);
25641 + } else {
25642 +#ifdef DWC_UTE_CFI
25643 + /* The descriptor chain should be already initialized by now */
25644 + if (ep->buff_mode != BM_STANDARD) {
25645 + DWC_WRITE_REG32(&in_regs->diepdma,
25646 + ep->descs_dma_addr);
25647 + } else {
25648 +#endif
25649 + init_dma_desc_chain(core_if, ep);
25650 + /** DIEPDMAn Register write */
25651 + DWC_WRITE_REG32(&in_regs->diepdma,
25652 + ep->dma_desc_addr);
25653 +#ifdef DWC_UTE_CFI
25654 + }
25655 +#endif
25656 + }
25657 + } else {
25658 + DWC_WRITE_REG32(&in_regs->dieptsiz, deptsiz.d32);
25659 + if (ep->type != DWC_OTG_EP_TYPE_ISOC) {
25660 + /**
25661 + * Enable the Non-Periodic Tx FIFO empty interrupt,
25662 +				 * or the Tx FIFO empty interrupt in dedicated Tx FIFO mode;
25663 + * the data will be written into the fifo by the ISR.
25664 + */
25665 + if (core_if->en_multiple_tx_fifo == 0) {
25666 + intr_mask.b.nptxfempty = 1;
25667 + DWC_MODIFY_REG32
25668 + (&core_if->core_global_regs->gintmsk,
25669 + intr_mask.d32, intr_mask.d32);
25670 + } else {
25671 + /* Enable the Tx FIFO Empty Interrupt for this EP */
25672 + if (ep->xfer_len > 0) {
25673 + uint32_t fifoemptymsk = 0;
25674 + fifoemptymsk = 1 << ep->num;
25675 + DWC_MODIFY_REG32
25676 + (&core_if->dev_if->dev_global_regs->dtknqr4_fifoemptymsk,
25677 + 0, fifoemptymsk);
25678 +
25679 + }
25680 + }
25681 + } else {
25682 + write_isoc_tx_fifo(core_if, ep);
25683 + }
25684 + }
25685 + if (!core_if->core_params->en_multiple_tx_fifo && core_if->dma_enable)
25686 + depctl.b.nextep = core_if->nextep_seq[ep->num];
25687 +
25688 + if (ep->type == DWC_OTG_EP_TYPE_ISOC) {
25689 + dsts_data_t dsts = {.d32 = 0};
25690 + if (ep->bInterval == 1) {
25691 + dsts.d32 =
25692 + DWC_READ_REG32(&core_if->dev_if->
25693 + dev_global_regs->dsts);
25694 + ep->frame_num = dsts.b.soffn + ep->bInterval;
25695 + if (ep->frame_num > 0x3FFF) {
25696 + ep->frm_overrun = 1;
25697 + ep->frame_num &= 0x3FFF;
25698 + } else
25699 + ep->frm_overrun = 0;
25700 + if (ep->frame_num & 0x1) {
25701 + depctl.b.setd1pid = 1;
25702 + } else {
25703 + depctl.b.setd0pid = 1;
25704 + }
25705 + }
25706 + }
25707 + /* EP enable, IN data in FIFO */
25708 + depctl.b.cnak = 1;
25709 + depctl.b.epena = 1;
25710 + DWC_WRITE_REG32(&in_regs->diepctl, depctl.d32);
25711 +
25712 + } else {
25713 + /* OUT endpoint */
25714 + dwc_otg_dev_out_ep_regs_t *out_regs =
25715 + core_if->dev_if->out_ep_regs[ep->num];
25716 +
25717 + depctl.d32 = DWC_READ_REG32(&(out_regs->doepctl));
25718 + deptsiz.d32 = DWC_READ_REG32(&(out_regs->doeptsiz));
25719 +
25720 + if (!core_if->dma_desc_enable) {
25721 + if (ep->maxpacket > ep->maxxfer / MAX_PKT_CNT)
25722 + ep->xfer_len += (ep->maxxfer < (ep->total_len - ep->xfer_len)) ?
25723 + ep->maxxfer : (ep->total_len - ep->xfer_len);
25724 + else
25725 + ep->xfer_len += (MAX_PKT_CNT * ep->maxpacket < (ep->total_len
25726 + - ep->xfer_len)) ? MAX_PKT_CNT * ep->maxpacket : (ep->total_len - ep->xfer_len);
25727 + }
25728 +
25729 + /* Program the transfer size and packet count as follows:
25730 + *
25731 + * pktcnt = N
25732 + * xfersize = N * maxpacket
25733 + */
25734 + if ((ep->xfer_len - ep->xfer_count) == 0) {
25735 + /* Zero Length Packet */
25736 + deptsiz.b.xfersize = ep->maxpacket;
25737 + deptsiz.b.pktcnt = 1;
25738 + } else {
25739 + deptsiz.b.pktcnt =
25740 + (ep->xfer_len - ep->xfer_count +
25741 + (ep->maxpacket - 1)) / ep->maxpacket;
25742 + if (deptsiz.b.pktcnt > MAX_PKT_CNT) {
25743 + deptsiz.b.pktcnt = MAX_PKT_CNT;
25744 + }
25745 + if (!core_if->dma_desc_enable) {
25746 + ep->xfer_len =
25747 + deptsiz.b.pktcnt * ep->maxpacket + ep->xfer_count;
25748 + }
25749 + deptsiz.b.xfersize = ep->xfer_len - ep->xfer_count;
25750 + }
25751 +
25752 + DWC_DEBUGPL(DBG_PCDV, "ep%d xfersize=%d pktcnt=%d\n",
25753 + ep->num, deptsiz.b.xfersize, deptsiz.b.pktcnt);
25754 +
25755 + if (core_if->dma_enable) {
25756 + if (!core_if->dma_desc_enable) {
25757 + DWC_WRITE_REG32(&out_regs->doeptsiz,
25758 + deptsiz.d32);
25759 +
25760 + DWC_WRITE_REG32(&(out_regs->doepdma),
25761 + (uint32_t) ep->dma_addr);
25762 + } else {
25763 +#ifdef DWC_UTE_CFI
25764 + /* The descriptor chain should be already initialized by now */
25765 + if (ep->buff_mode != BM_STANDARD) {
25766 + DWC_WRITE_REG32(&out_regs->doepdma,
25767 + ep->descs_dma_addr);
25768 + } else {
25769 +#endif
25770 + /** This is used for interrupt out transfers*/
25771 + if (!ep->xfer_len)
25772 + ep->xfer_len = ep->total_len;
25773 + init_dma_desc_chain(core_if, ep);
25774 +
25775 + if (core_if->core_params->dev_out_nak) {
25776 + if (ep->type == DWC_OTG_EP_TYPE_BULK) {
25777 + deptsiz.b.pktcnt = (ep->total_len +
25778 + (ep->maxpacket - 1)) / ep->maxpacket;
25779 + deptsiz.b.xfersize = ep->total_len;
25780 + /* Remember initial value of doeptsiz */
25781 + core_if->start_doeptsiz_val[ep->num] = deptsiz.d32;
25782 + DWC_WRITE_REG32(&out_regs->doeptsiz,
25783 + deptsiz.d32);
25784 + }
25785 + }
25786 + /** DOEPDMAn Register write */
25787 + DWC_WRITE_REG32(&out_regs->doepdma,
25788 + ep->dma_desc_addr);
25789 +#ifdef DWC_UTE_CFI
25790 + }
25791 +#endif
25792 + }
25793 + } else {
25794 + DWC_WRITE_REG32(&out_regs->doeptsiz, deptsiz.d32);
25795 + }
25796 +
25797 + if (ep->type == DWC_OTG_EP_TYPE_ISOC) {
25798 + dsts_data_t dsts = {.d32 = 0};
25799 + if (ep->bInterval == 1) {
25800 + dsts.d32 =
25801 + DWC_READ_REG32(&core_if->dev_if->
25802 + dev_global_regs->dsts);
25803 + ep->frame_num = dsts.b.soffn + ep->bInterval;
25804 + if (ep->frame_num > 0x3FFF) {
25805 + ep->frm_overrun = 1;
25806 + ep->frame_num &= 0x3FFF;
25807 + } else
25808 + ep->frm_overrun = 0;
25809 +
25810 + if (ep->frame_num & 0x1) {
25811 + depctl.b.setd1pid = 1;
25812 + } else {
25813 + depctl.b.setd0pid = 1;
25814 + }
25815 + }
25816 + }
25817 +
25818 + /* EP enable */
25819 + depctl.b.cnak = 1;
25820 + depctl.b.epena = 1;
25821 +
25822 + DWC_WRITE_REG32(&out_regs->doepctl, depctl.d32);
25823 +
25824 + DWC_DEBUGPL(DBG_PCD, "DOEPCTL=%08x DOEPTSIZ=%08x\n",
25825 + DWC_READ_REG32(&out_regs->doepctl),
25826 + DWC_READ_REG32(&out_regs->doeptsiz));
25827 + DWC_DEBUGPL(DBG_PCD, "DAINTMSK=%08x GINTMSK=%08x\n",
25828 + DWC_READ_REG32(&core_if->dev_if->dev_global_regs->
25829 + daintmsk),
25830 + DWC_READ_REG32(&core_if->core_global_regs->
25831 + gintmsk));
25832 +
25833 +		/* The timer is scheduled only for OUT bulk transfers, as part of the
25834 +		 * "Device DDMA OUT NAK Enhancement" feature, to inform the user about
25835 +		 * the received data payload in case of a timeout
25836 + */
25837 + if (core_if->core_params->dev_out_nak) {
25838 + if (ep->type == DWC_OTG_EP_TYPE_BULK) {
25839 + core_if->ep_xfer_info[ep->num].core_if = core_if;
25840 + core_if->ep_xfer_info[ep->num].ep = ep;
25841 + core_if->ep_xfer_info[ep->num].state = 1;
25842 +
25843 + /* Start a timer for this transfer. */
25844 + DWC_TIMER_SCHEDULE(core_if->ep_xfer_timer[ep->num], 10000);
25845 + }
25846 + }
25847 + }
25848 +}
25849 +
25850 +/**
25851 + * This function sets up a zero-length transfer in Buffer DMA and
25852 + * Slave modes for USB requests that have the zero flag set.
25853 + *
25854 + * @param core_if Programming view of DWC_otg controller.
25855 + * @param ep The EP to start the transfer on.
25856 + *
25857 + */
25858 +void dwc_otg_ep_start_zl_transfer(dwc_otg_core_if_t * core_if, dwc_ep_t * ep)
25859 +{
25860 +
25861 + depctl_data_t depctl;
25862 + deptsiz_data_t deptsiz;
25863 + gintmsk_data_t intr_mask = {.d32 = 0 };
25864 +
25865 + DWC_DEBUGPL((DBG_PCDV | DBG_CILV), "%s()\n", __func__);
25866 + DWC_PRINTF("zero length transfer is called\n");
25867 +
25868 + /* IN endpoint */
25869 + if (ep->is_in == 1) {
25870 + dwc_otg_dev_in_ep_regs_t *in_regs =
25871 + core_if->dev_if->in_ep_regs[ep->num];
25872 +
25873 + depctl.d32 = DWC_READ_REG32(&(in_regs->diepctl));
25874 + deptsiz.d32 = DWC_READ_REG32(&(in_regs->dieptsiz));
25875 +
25876 + deptsiz.b.xfersize = 0;
25877 + deptsiz.b.pktcnt = 1;
25878 +
25879 + /* Write the DMA register */
25880 + if (core_if->dma_enable) {
25881 + if (core_if->dma_desc_enable == 0) {
25882 + deptsiz.b.mc = 1;
25883 + DWC_WRITE_REG32(&in_regs->dieptsiz,
25884 + deptsiz.d32);
25885 + DWC_WRITE_REG32(&(in_regs->diepdma),
25886 + (uint32_t) ep->dma_addr);
25887 + }
25888 + } else {
25889 + DWC_WRITE_REG32(&in_regs->dieptsiz, deptsiz.d32);
25890 + /**
25891 + * Enable the Non-Periodic Tx FIFO empty interrupt,
25892 +			 * or the Tx FIFO empty interrupt in dedicated Tx FIFO mode;
25893 + * the data will be written into the fifo by the ISR.
25894 + */
25895 + if (core_if->en_multiple_tx_fifo == 0) {
25896 + intr_mask.b.nptxfempty = 1;
25897 + DWC_MODIFY_REG32(&core_if->
25898 + core_global_regs->gintmsk,
25899 + intr_mask.d32, intr_mask.d32);
25900 + } else {
25901 + /* Enable the Tx FIFO Empty Interrupt for this EP */
25902 + if (ep->xfer_len > 0) {
25903 + uint32_t fifoemptymsk = 0;
25904 + fifoemptymsk = 1 << ep->num;
25905 + DWC_MODIFY_REG32(&core_if->
25906 + dev_if->dev_global_regs->dtknqr4_fifoemptymsk,
25907 + 0, fifoemptymsk);
25908 + }
25909 + }
25910 + }
25911 +
25912 + if (!core_if->core_params->en_multiple_tx_fifo && core_if->dma_enable)
25913 + depctl.b.nextep = core_if->nextep_seq[ep->num];
25914 + /* EP enable, IN data in FIFO */
25915 + depctl.b.cnak = 1;
25916 + depctl.b.epena = 1;
25917 + DWC_WRITE_REG32(&in_regs->diepctl, depctl.d32);
25918 +
25919 + } else {
25920 + /* OUT endpoint */
25921 + dwc_otg_dev_out_ep_regs_t *out_regs =
25922 + core_if->dev_if->out_ep_regs[ep->num];
25923 +
25924 + depctl.d32 = DWC_READ_REG32(&(out_regs->doepctl));
25925 + deptsiz.d32 = DWC_READ_REG32(&(out_regs->doeptsiz));
25926 +
25927 + /* Zero Length Packet */
25928 + deptsiz.b.xfersize = ep->maxpacket;
25929 + deptsiz.b.pktcnt = 1;
25930 +
25931 + if (core_if->dma_enable) {
25932 + if (!core_if->dma_desc_enable) {
25933 + DWC_WRITE_REG32(&out_regs->doeptsiz,
25934 + deptsiz.d32);
25935 +
25936 + DWC_WRITE_REG32(&(out_regs->doepdma),
25937 + (uint32_t) ep->dma_addr);
25938 + }
25939 + } else {
25940 + DWC_WRITE_REG32(&out_regs->doeptsiz, deptsiz.d32);
25941 + }
25942 +
25943 + /* EP enable */
25944 + depctl.b.cnak = 1;
25945 + depctl.b.epena = 1;
25946 +
25947 + DWC_WRITE_REG32(&out_regs->doepctl, depctl.d32);
25948 +
25949 + }
25950 +}
25951 +
25952 +/**
25953 + * This function does the setup for a data transfer for EP0 and starts
25954 + * the transfer. For an IN transfer, the packets will be loaded into
25955 + * the appropriate Tx FIFO in the ISR. For OUT transfers, the packets are
25956 + * unloaded from the Rx FIFO in the ISR.
25957 + *
25958 + * @param core_if Programming view of DWC_otg controller.
25959 + * @param ep The EP0 data.
25960 + */
25961 +void dwc_otg_ep0_start_transfer(dwc_otg_core_if_t * core_if, dwc_ep_t * ep)
25962 +{
25963 + depctl_data_t depctl;
25964 + deptsiz0_data_t deptsiz;
25965 + gintmsk_data_t intr_mask = {.d32 = 0 };
25966 + dwc_otg_dev_dma_desc_t *dma_desc;
25967 +
25968 + DWC_DEBUGPL(DBG_PCD, "ep%d-%s xfer_len=%d xfer_cnt=%d "
25969 + "xfer_buff=%p start_xfer_buff=%p \n",
25970 + ep->num, (ep->is_in ? "IN" : "OUT"), ep->xfer_len,
25971 + ep->xfer_count, ep->xfer_buff, ep->start_xfer_buff);
25972 +
25973 + ep->total_len = ep->xfer_len;
25974 +
25975 + /* IN endpoint */
25976 + if (ep->is_in == 1) {
25977 + dwc_otg_dev_in_ep_regs_t *in_regs =
25978 + core_if->dev_if->in_ep_regs[0];
25979 +
25980 + gnptxsts_data_t gtxstatus;
25981 +
25982 + if (core_if->snpsid >= OTG_CORE_REV_3_00a) {
25983 + depctl.d32 = DWC_READ_REG32(&in_regs->diepctl);
25984 + if (depctl.b.epena)
25985 + return;
25986 + }
25987 +
25988 + gtxstatus.d32 =
25989 + DWC_READ_REG32(&core_if->core_global_regs->gnptxsts);
25990 +
25991 +		/* With dedicated FIFOs, flush the Tx FIFO each time before enabling the EP */
25992 + if (core_if->en_multiple_tx_fifo && core_if->snpsid >= OTG_CORE_REV_3_00a)
25993 + dwc_otg_flush_tx_fifo(core_if, ep->tx_fifo_num);
25994 +
25995 + if (core_if->en_multiple_tx_fifo == 0
25996 + && gtxstatus.b.nptxqspcavail == 0
25997 + && !core_if->dma_enable) {
25998 +#ifdef DEBUG
25999 + deptsiz.d32 = DWC_READ_REG32(&in_regs->dieptsiz);
26000 + DWC_DEBUGPL(DBG_PCD, "DIEPCTL0=%0x\n",
26001 + DWC_READ_REG32(&in_regs->diepctl));
26002 + DWC_DEBUGPL(DBG_PCD, "DIEPTSIZ0=%0x (sz=%d, pcnt=%d)\n",
26003 + deptsiz.d32,
26004 + deptsiz.b.xfersize, deptsiz.b.pktcnt);
26005 + DWC_PRINTF("TX Queue or FIFO Full (0x%0x)\n",
26006 + gtxstatus.d32);
26007 +#endif
26008 + return;
26009 + }
26010 +
26011 + depctl.d32 = DWC_READ_REG32(&in_regs->diepctl);
26012 + deptsiz.d32 = DWC_READ_REG32(&in_regs->dieptsiz);
26013 +
26014 + /* Zero Length Packet? */
26015 + if (ep->xfer_len == 0) {
26016 + deptsiz.b.xfersize = 0;
26017 + deptsiz.b.pktcnt = 1;
26018 + } else {
26019 + /* Program the transfer size and packet count
26020 + * as follows: xfersize = N * maxpacket +
26021 + * short_packet pktcnt = N + (short_packet
26022 + * exist ? 1 : 0)
26023 + */
26024 + if (ep->xfer_len > ep->maxpacket) {
26025 + ep->xfer_len = ep->maxpacket;
26026 + deptsiz.b.xfersize = ep->maxpacket;
26027 + } else {
26028 + deptsiz.b.xfersize = ep->xfer_len;
26029 + }
26030 + deptsiz.b.pktcnt = 1;
26031 +
26032 + }
26033 + DWC_DEBUGPL(DBG_PCDV,
26034 + "IN len=%d xfersize=%d pktcnt=%d [%08x]\n",
26035 + ep->xfer_len, deptsiz.b.xfersize, deptsiz.b.pktcnt,
26036 + deptsiz.d32);
26037 +
26038 + /* Write the DMA register */
26039 + if (core_if->dma_enable) {
26040 + if (core_if->dma_desc_enable == 0) {
26041 + DWC_WRITE_REG32(&in_regs->dieptsiz,
26042 + deptsiz.d32);
26043 +
26044 + DWC_WRITE_REG32(&(in_regs->diepdma),
26045 + (uint32_t) ep->dma_addr);
26046 + } else {
26047 + dma_desc = core_if->dev_if->in_desc_addr;
26048 +
26049 + /** DMA Descriptor Setup */
26050 + dma_desc->status.b.bs = BS_HOST_BUSY;
26051 + dma_desc->status.b.l = 1;
26052 + dma_desc->status.b.ioc = 1;
26053 + dma_desc->status.b.sp =
26054 + (ep->xfer_len == ep->maxpacket) ? 0 : 1;
26055 + dma_desc->status.b.bytes = ep->xfer_len;
26056 + dma_desc->buf = ep->dma_addr;
26057 + dma_desc->status.b.sts = 0;
26058 + dma_desc->status.b.bs = BS_HOST_READY;
26059 +
26060 + /** DIEPDMA0 Register write */
26061 + DWC_WRITE_REG32(&in_regs->diepdma,
26062 + core_if->
26063 + dev_if->dma_in_desc_addr);
26064 + }
26065 + } else {
26066 + DWC_WRITE_REG32(&in_regs->dieptsiz, deptsiz.d32);
26067 + }
26068 +
26069 + if (!core_if->core_params->en_multiple_tx_fifo && core_if->dma_enable)
26070 + depctl.b.nextep = core_if->nextep_seq[ep->num];
26071 + /* EP enable, IN data in FIFO */
26072 + depctl.b.cnak = 1;
26073 + depctl.b.epena = 1;
26074 + DWC_WRITE_REG32(&in_regs->diepctl, depctl.d32);
26075 +
26076 + /**
26077 + * Enable the Non-Periodic Tx FIFO empty interrupt, the
26078 + * data will be written into the fifo by the ISR.
26079 + */
26080 + if (!core_if->dma_enable) {
26081 + if (core_if->en_multiple_tx_fifo == 0) {
26082 + intr_mask.b.nptxfempty = 1;
26083 + DWC_MODIFY_REG32(&core_if->
26084 + core_global_regs->gintmsk,
26085 + intr_mask.d32, intr_mask.d32);
26086 + } else {
26087 + /* Enable the Tx FIFO Empty Interrupt for this EP */
26088 + if (ep->xfer_len > 0) {
26089 + uint32_t fifoemptymsk = 0;
26090 + fifoemptymsk |= 1 << ep->num;
26091 + DWC_MODIFY_REG32(&core_if->
26092 + dev_if->dev_global_regs->dtknqr4_fifoemptymsk,
26093 + 0, fifoemptymsk);
26094 + }
26095 + }
26096 + }
26097 + } else {
26098 + /* OUT endpoint */
26099 + dwc_otg_dev_out_ep_regs_t *out_regs =
26100 + core_if->dev_if->out_ep_regs[0];
26101 +
26102 + depctl.d32 = DWC_READ_REG32(&out_regs->doepctl);
26103 + deptsiz.d32 = DWC_READ_REG32(&out_regs->doeptsiz);
26104 +
26105 + /* Program the transfer size and packet count as follows:
26106 + * xfersize = N * (maxpacket + 4 - (maxpacket % 4))
26107 + * pktcnt = N */
26108 + /* Zero Length Packet */
26109 + deptsiz.b.xfersize = ep->maxpacket;
26110 + deptsiz.b.pktcnt = 1;
26111 + if (core_if->snpsid >= OTG_CORE_REV_3_00a)
26112 + deptsiz.b.supcnt = 3;
26113 +
26114 + DWC_DEBUGPL(DBG_PCDV, "len=%d xfersize=%d pktcnt=%d\n",
26115 + ep->xfer_len, deptsiz.b.xfersize, deptsiz.b.pktcnt);
26116 +
26117 + if (core_if->dma_enable) {
26118 + if (!core_if->dma_desc_enable) {
26119 + DWC_WRITE_REG32(&out_regs->doeptsiz,
26120 + deptsiz.d32);
26121 +
26122 + DWC_WRITE_REG32(&(out_regs->doepdma),
26123 + (uint32_t) ep->dma_addr);
26124 + } else {
26125 + dma_desc = core_if->dev_if->out_desc_addr;
26126 +
26127 + /** DMA Descriptor Setup */
26128 + dma_desc->status.b.bs = BS_HOST_BUSY;
26129 + if (core_if->snpsid >= OTG_CORE_REV_3_00a) {
26130 + dma_desc->status.b.mtrf = 0;
26131 + dma_desc->status.b.sr = 0;
26132 + }
26133 + dma_desc->status.b.l = 1;
26134 + dma_desc->status.b.ioc = 1;
26135 + dma_desc->status.b.bytes = ep->maxpacket;
26136 + dma_desc->buf = ep->dma_addr;
26137 + dma_desc->status.b.sts = 0;
26138 + dma_desc->status.b.bs = BS_HOST_READY;
26139 +
26140 + /** DOEPDMA0 Register write */
26141 + DWC_WRITE_REG32(&out_regs->doepdma,
26142 + core_if->dev_if->
26143 + dma_out_desc_addr);
26144 + }
26145 + } else {
26146 + DWC_WRITE_REG32(&out_regs->doeptsiz, deptsiz.d32);
26147 + }
26148 +
26149 + /* EP enable */
26150 + depctl.b.cnak = 1;
26151 + depctl.b.epena = 1;
26152 + DWC_WRITE_REG32(&(out_regs->doepctl), depctl.d32);
26153 + }
26154 +}
26155 +
26156 +/**
26157 + * This function continues control IN transfers started by
26158 + * dwc_otg_ep0_start_transfer, when the transfer does not fit in a
26159 + * single packet. NOTE: The DIEPCTL0/DOEPCTL0 registers only have one
26160 + * bit for the packet count.
26161 + *
26162 + * @param core_if Programming view of DWC_otg controller.
26163 + * @param ep The EP0 data.
26164 + */
26165 +void dwc_otg_ep0_continue_transfer(dwc_otg_core_if_t * core_if, dwc_ep_t * ep)
26166 +{
26167 + depctl_data_t depctl;
26168 + deptsiz0_data_t deptsiz;
26169 + gintmsk_data_t intr_mask = {.d32 = 0 };
26170 + dwc_otg_dev_dma_desc_t *dma_desc;
26171 +
26172 + if (ep->is_in == 1) {
26173 + dwc_otg_dev_in_ep_regs_t *in_regs =
26174 + core_if->dev_if->in_ep_regs[0];
26175 + gnptxsts_data_t tx_status = {.d32 = 0 };
26176 +
26177 + tx_status.d32 =
26178 + DWC_READ_REG32(&core_if->core_global_regs->gnptxsts);
26179 +		/** @todo Should there be a check for room in the Tx
26180 +		 * Status Queue? If not, remove the code above this comment. */
26181 +
26182 + depctl.d32 = DWC_READ_REG32(&in_regs->diepctl);
26183 + deptsiz.d32 = DWC_READ_REG32(&in_regs->dieptsiz);
26184 +
26185 + /* Program the transfer size and packet count
26186 + * as follows: xfersize = N * maxpacket +
26187 + * short_packet pktcnt = N + (short_packet
26188 + * exist ? 1 : 0)
26189 + */
26190 +
26191 + if (core_if->dma_desc_enable == 0) {
26192 + deptsiz.b.xfersize =
26193 + (ep->total_len - ep->xfer_count) >
26194 + ep->maxpacket ? ep->maxpacket : (ep->total_len -
26195 + ep->xfer_count);
26196 + deptsiz.b.pktcnt = 1;
26197 + if (core_if->dma_enable == 0) {
26198 + ep->xfer_len += deptsiz.b.xfersize;
26199 + } else {
26200 + ep->xfer_len = deptsiz.b.xfersize;
26201 + }
26202 + DWC_WRITE_REG32(&in_regs->dieptsiz, deptsiz.d32);
26203 + } else {
26204 + ep->xfer_len =
26205 + (ep->total_len - ep->xfer_count) >
26206 + ep->maxpacket ? ep->maxpacket : (ep->total_len -
26207 + ep->xfer_count);
26208 +
26209 + dma_desc = core_if->dev_if->in_desc_addr;
26210 +
26211 + /** DMA Descriptor Setup */
26212 + dma_desc->status.b.bs = BS_HOST_BUSY;
26213 + dma_desc->status.b.l = 1;
26214 + dma_desc->status.b.ioc = 1;
26215 + dma_desc->status.b.sp =
26216 + (ep->xfer_len == ep->maxpacket) ? 0 : 1;
26217 + dma_desc->status.b.bytes = ep->xfer_len;
26218 + dma_desc->buf = ep->dma_addr;
26219 + dma_desc->status.b.sts = 0;
26220 + dma_desc->status.b.bs = BS_HOST_READY;
26221 +
26222 + /** DIEPDMA0 Register write */
26223 + DWC_WRITE_REG32(&in_regs->diepdma,
26224 + core_if->dev_if->dma_in_desc_addr);
26225 + }
26226 +
26227 + DWC_DEBUGPL(DBG_PCDV,
26228 + "IN len=%d xfersize=%d pktcnt=%d [%08x]\n",
26229 + ep->xfer_len, deptsiz.b.xfersize, deptsiz.b.pktcnt,
26230 + deptsiz.d32);
26231 +
26232 + /* Write the DMA register */
26233 + if (core_if->hwcfg2.b.architecture == DWC_INT_DMA_ARCH) {
26234 + if (core_if->dma_desc_enable == 0)
26235 + DWC_WRITE_REG32(&(in_regs->diepdma),
26236 + (uint32_t) ep->dma_addr);
26237 + }
26238 + if (!core_if->core_params->en_multiple_tx_fifo && core_if->dma_enable)
26239 + depctl.b.nextep = core_if->nextep_seq[ep->num];
26240 + /* EP enable, IN data in FIFO */
26241 + depctl.b.cnak = 1;
26242 + depctl.b.epena = 1;
26243 + DWC_WRITE_REG32(&in_regs->diepctl, depctl.d32);
26244 +
26245 + /**
26246 + * Enable the Non-Periodic Tx FIFO empty interrupt, the
26247 + * data will be written into the fifo by the ISR.
26248 + */
26249 + if (!core_if->dma_enable) {
26250 + if (core_if->en_multiple_tx_fifo == 0) {
26251 + /* First clear it from GINTSTS */
26252 + intr_mask.b.nptxfempty = 1;
26253 + DWC_MODIFY_REG32(&core_if->
26254 + core_global_regs->gintmsk,
26255 + intr_mask.d32, intr_mask.d32);
26256 +
26257 + } else {
26258 + /* Enable the Tx FIFO Empty Interrupt for this EP */
26259 + if (ep->xfer_len > 0) {
26260 + uint32_t fifoemptymsk = 0;
26261 + fifoemptymsk |= 1 << ep->num;
26262 + DWC_MODIFY_REG32(&core_if->
26263 + dev_if->dev_global_regs->dtknqr4_fifoemptymsk,
26264 + 0, fifoemptymsk);
26265 + }
26266 + }
26267 + }
26268 + } else {
26269 + dwc_otg_dev_out_ep_regs_t *out_regs =
26270 + core_if->dev_if->out_ep_regs[0];
26271 +
26272 + depctl.d32 = DWC_READ_REG32(&out_regs->doepctl);
26273 + deptsiz.d32 = DWC_READ_REG32(&out_regs->doeptsiz);
26274 +
26275 + /* Program the transfer size and packet count
26276 + * as follows: xfersize = N * maxpacket +
26277 + * short_packet pktcnt = N + (short_packet
26278 + * exist ? 1 : 0)
26279 + */
26280 + deptsiz.b.xfersize = ep->maxpacket;
26281 + deptsiz.b.pktcnt = 1;
26282 +
26283 + if (core_if->dma_desc_enable == 0) {
26284 + DWC_WRITE_REG32(&out_regs->doeptsiz, deptsiz.d32);
26285 + } else {
26286 + dma_desc = core_if->dev_if->out_desc_addr;
26287 +
26288 + /** DMA Descriptor Setup */
26289 + dma_desc->status.b.bs = BS_HOST_BUSY;
26290 + dma_desc->status.b.l = 1;
26291 + dma_desc->status.b.ioc = 1;
26292 + dma_desc->status.b.bytes = ep->maxpacket;
26293 + dma_desc->buf = ep->dma_addr;
26294 + dma_desc->status.b.sts = 0;
26295 + dma_desc->status.b.bs = BS_HOST_READY;
26296 +
26297 + /** DOEPDMA0 Register write */
26298 + DWC_WRITE_REG32(&out_regs->doepdma,
26299 + core_if->dev_if->dma_out_desc_addr);
26300 + }
26301 +
26302 + DWC_DEBUGPL(DBG_PCDV,
26303 + "IN len=%d xfersize=%d pktcnt=%d [%08x]\n",
26304 + ep->xfer_len, deptsiz.b.xfersize, deptsiz.b.pktcnt,
26305 + deptsiz.d32);
26306 +
26307 + /* Write the DMA register */
26308 + if (core_if->hwcfg2.b.architecture == DWC_INT_DMA_ARCH) {
26309 + if (core_if->dma_desc_enable == 0)
26310 + DWC_WRITE_REG32(&(out_regs->doepdma),
26311 + (uint32_t) ep->dma_addr);
26312 +
26313 + }
26314 +
26315 + /* EP enable, IN data in FIFO */
26316 + depctl.b.cnak = 1;
26317 + depctl.b.epena = 1;
26318 + DWC_WRITE_REG32(&out_regs->doepctl, depctl.d32);
26319 +
26320 + }
26321 +}
26322 +
26323 +#ifdef DEBUG
26324 +void dump_msg(const u8 * buf, unsigned int length)
26325 +{
26326 + unsigned int start, num, i;
26327 + char line[52], *p;
26328 +
26329 + if (length >= 512)
26330 + return;
26331 + start = 0;
26332 + while (length > 0) {
26333 + num = length < 16u ? length : 16u;
26334 + p = line;
26335 + for (i = 0; i < num; ++i) {
26336 + if (i == 8)
26337 + *p++ = ' ';
26338 + DWC_SPRINTF(p, " %02x", buf[i]);
26339 + p += 3;
26340 + }
26341 + *p = 0;
26342 + DWC_PRINTF("%6x: %s\n", start, line);
26343 + buf += num;
26344 + start += num;
26345 + length -= num;
26346 + }
26347 +}
26348 +#else
26349 +static inline void dump_msg(const u8 * buf, unsigned int length)
26350 +{
26351 +}
26352 +#endif
26353 +
26354 +/**
26355 + * This function writes a packet into the Tx FIFO associated with the
26356 + * EP. For non-periodic EPs the non-periodic Tx FIFO is written. For
26357 + * periodic EPs the periodic Tx FIFO associated with the EP is written
26358 + * with all packets for the next micro-frame.
26359 + *
26360 + * @param core_if Programming view of DWC_otg controller.
26361 + * @param ep The EP to write packet for.
26362 + * @param dma Indicates if DMA is being used.
26363 + */
26364 +void dwc_otg_ep_write_packet(dwc_otg_core_if_t * core_if, dwc_ep_t * ep,
26365 + int dma)
26366 +{
26367 + /**
26368 + * The buffer is padded to DWORD on a per packet basis in
26369 + * slave/dma mode if the MPS is not DWORD aligned. The last
26370 + * packet, if short, is also padded to a multiple of DWORD.
26371 + *
26372 + * ep->xfer_buff always starts DWORD aligned in memory and is a
26373 + * multiple of DWORD in length
26374 + *
26375 + * ep->xfer_len can be any number of bytes
26376 + *
26377 + * ep->xfer_count is a multiple of ep->maxpacket until the last
26378 + * packet
26379 + *
26380 + * FIFO access is DWORD */
26381 +
26382 + uint32_t i;
26383 + uint32_t byte_count;
26384 + uint32_t dword_count;
26385 + uint32_t *fifo;
26386 + uint32_t *data_buff = (uint32_t *) ep->xfer_buff;
26387 +
26388 + DWC_DEBUGPL((DBG_PCDV | DBG_CILV), "%s(%p,%p)\n", __func__, core_if,
26389 + ep);
26390 + if (ep->xfer_count >= ep->xfer_len) {
26391 + DWC_WARN("%s() No data for EP%d!!!\n", __func__, ep->num);
26392 + return;
26393 + }
26394 +
26395 + /* Find the byte length of the packet either short packet or MPS */
26396 + if ((ep->xfer_len - ep->xfer_count) < ep->maxpacket) {
26397 + byte_count = ep->xfer_len - ep->xfer_count;
26398 + } else {
26399 + byte_count = ep->maxpacket;
26400 + }
26401 +
26402 +	/* Find the DWORD length, padded by extra bytes as necessary if MPS
26403 + * is not a multiple of DWORD */
26404 + dword_count = (byte_count + 3) / 4;
26405 +
26406 +#ifdef VERBOSE
26407 + dump_msg(ep->xfer_buff, byte_count);
26408 +#endif
26409 +
26410 + /**@todo NGS Where are the Periodic Tx FIFO addresses
26411 +	 * initialized? What should this be? */
26412 +
26413 + fifo = core_if->data_fifo[ep->num];
26414 +
26415 + DWC_DEBUGPL((DBG_PCDV | DBG_CILV), "fifo=%p buff=%p *p=%08x bc=%d\n",
26416 + fifo, data_buff, *data_buff, byte_count);
26417 +
26418 + if (!dma) {
26419 + for (i = 0; i < dword_count; i++, data_buff++) {
26420 + DWC_WRITE_REG32(fifo, *data_buff);
26421 + }
26422 + }
26423 +
26424 + ep->xfer_count += byte_count;
26425 + ep->xfer_buff += byte_count;
26426 + ep->dma_addr += byte_count;
26427 +}
26428 +
26429 +/**
26430 + * Set the EP STALL.
26431 + *
26432 + * @param core_if Programming view of DWC_otg controller.
26433 + * @param ep The EP to set the stall on.
26434 + */
26435 +void dwc_otg_ep_set_stall(dwc_otg_core_if_t * core_if, dwc_ep_t * ep)
26436 +{
26437 + depctl_data_t depctl;
26438 + volatile uint32_t *depctl_addr;
26439 +
26440 + DWC_DEBUGPL(DBG_PCD, "%s ep%d-%s\n", __func__, ep->num,
26441 + (ep->is_in ? "IN" : "OUT"));
26442 +
26443 + if (ep->is_in == 1) {
26444 + depctl_addr = &(core_if->dev_if->in_ep_regs[ep->num]->diepctl);
26445 + depctl.d32 = DWC_READ_REG32(depctl_addr);
26446 +
26447 + /* set the disable and stall bits */
26448 + if (depctl.b.epena) {
26449 + depctl.b.epdis = 1;
26450 + }
26451 + depctl.b.stall = 1;
26452 + DWC_WRITE_REG32(depctl_addr, depctl.d32);
26453 + } else {
26454 + depctl_addr = &(core_if->dev_if->out_ep_regs[ep->num]->doepctl);
26455 + depctl.d32 = DWC_READ_REG32(depctl_addr);
26456 +
26457 + /* set the stall bit */
26458 + depctl.b.stall = 1;
26459 + DWC_WRITE_REG32(depctl_addr, depctl.d32);
26460 + }
26461 +
26462 + DWC_DEBUGPL(DBG_PCD, "DEPCTL=%0x\n", DWC_READ_REG32(depctl_addr));
26463 +
26464 + return;
26465 +}
26466 +
26467 +/**
26468 + * Clear the EP STALL.
26469 + *
26470 + * @param core_if Programming view of DWC_otg controller.
26471 + * @param ep The EP to clear stall from.
26472 + */
26473 +void dwc_otg_ep_clear_stall(dwc_otg_core_if_t * core_if, dwc_ep_t * ep)
26474 +{
26475 + depctl_data_t depctl;
26476 + volatile uint32_t *depctl_addr;
26477 +
26478 + DWC_DEBUGPL(DBG_PCD, "%s ep%d-%s\n", __func__, ep->num,
26479 + (ep->is_in ? "IN" : "OUT"));
26480 +
26481 + if (ep->is_in == 1) {
26482 + depctl_addr = &(core_if->dev_if->in_ep_regs[ep->num]->diepctl);
26483 + } else {
26484 + depctl_addr = &(core_if->dev_if->out_ep_regs[ep->num]->doepctl);
26485 + }
26486 +
26487 + depctl.d32 = DWC_READ_REG32(depctl_addr);
26488 +
26489 + /* clear the stall bits */
26490 + depctl.b.stall = 0;
26491 +
26492 + /*
26493 + * USB Spec 9.4.5: For endpoints using data toggle, regardless
26494 + * of whether an endpoint has the Halt feature set, a
26495 + * ClearFeature(ENDPOINT_HALT) request always results in the
26496 + * data toggle being reinitialized to DATA0.
26497 + */
26498 + if (ep->type == DWC_OTG_EP_TYPE_INTR ||
26499 + ep->type == DWC_OTG_EP_TYPE_BULK) {
26500 + depctl.b.setd0pid = 1; /* DATA0 */
26501 + }
26502 +
26503 + DWC_WRITE_REG32(depctl_addr, depctl.d32);
26504 + DWC_DEBUGPL(DBG_PCD, "DEPCTL=%0x\n", DWC_READ_REG32(depctl_addr));
26505 + return;
26506 +}
26507 +
26508 +/**
26509 + * This function reads a packet from the Rx FIFO into the destination
26510 + * buffer. To read SETUP data use dwc_otg_read_setup_packet.
26511 + *
26512 + * @param core_if Programming view of DWC_otg controller.
26513 + * @param dest Destination buffer for the packet.
26514 + * @param bytes Number of bytes to copy to the destination.
26515 + */
26516 +void dwc_otg_read_packet(dwc_otg_core_if_t * core_if,
26517 + uint8_t * dest, uint16_t bytes)
26518 +{
26519 + int i;
26520 + int word_count = (bytes + 3) / 4;
26521 +
26522 + volatile uint32_t *fifo = core_if->data_fifo[0];
26523 + uint32_t *data_buff = (uint32_t *) dest;
26524 +
26525 + /**
26526 + * @todo Account for the case where _dest is not dword aligned. This
26527 + * requires reading data from the FIFO into a uint32_t temp buffer,
26528 + * then moving it into the data buffer.
26529 + */
26530 +
26531 + DWC_DEBUGPL((DBG_PCDV | DBG_CILV), "%s(%p,%p,%d)\n", __func__,
26532 + core_if, dest, bytes);
26533 +
26534 + for (i = 0; i < word_count; i++, data_buff++) {
26535 + *data_buff = DWC_READ_REG32(fifo);
26536 + }
26537 +
26538 + return;
26539 +}
26540 +
26541 +/**
26542 + * This function reads the device registers and prints them
26543 + *
26544 + * @param core_if Programming view of DWC_otg controller.
26545 + */
26546 +void dwc_otg_dump_dev_registers(dwc_otg_core_if_t * core_if)
26547 +{
26548 + int i;
26549 + volatile uint32_t *addr;
26550 +
26551 + DWC_PRINTF("Device Global Registers\n");
26552 + addr = &core_if->dev_if->dev_global_regs->dcfg;
26553 + DWC_PRINTF("DCFG @0x%08lX : 0x%08X\n",
26554 + (unsigned long)addr, DWC_READ_REG32(addr));
26555 + addr = &core_if->dev_if->dev_global_regs->dctl;
26556 + DWC_PRINTF("DCTL @0x%08lX : 0x%08X\n",
26557 + (unsigned long)addr, DWC_READ_REG32(addr));
26558 + addr = &core_if->dev_if->dev_global_regs->dsts;
26559 + DWC_PRINTF("DSTS @0x%08lX : 0x%08X\n",
26560 + (unsigned long)addr, DWC_READ_REG32(addr));
26561 + addr = &core_if->dev_if->dev_global_regs->diepmsk;
26562 + DWC_PRINTF("DIEPMSK @0x%08lX : 0x%08X\n", (unsigned long)addr,
26563 + DWC_READ_REG32(addr));
26564 + addr = &core_if->dev_if->dev_global_regs->doepmsk;
26565 + DWC_PRINTF("DOEPMSK @0x%08lX : 0x%08X\n", (unsigned long)addr,
26566 + DWC_READ_REG32(addr));
26567 + addr = &core_if->dev_if->dev_global_regs->daint;
26568 + DWC_PRINTF("DAINT @0x%08lX : 0x%08X\n", (unsigned long)addr,
26569 + DWC_READ_REG32(addr));
26570 + addr = &core_if->dev_if->dev_global_regs->daintmsk;
26571 + DWC_PRINTF("DAINTMSK @0x%08lX : 0x%08X\n", (unsigned long)addr,
26572 + DWC_READ_REG32(addr));
26573 + addr = &core_if->dev_if->dev_global_regs->dtknqr1;
26574 + DWC_PRINTF("DTKNQR1 @0x%08lX : 0x%08X\n", (unsigned long)addr,
26575 + DWC_READ_REG32(addr));
26576 + if (core_if->hwcfg2.b.dev_token_q_depth > 6) {
26577 + addr = &core_if->dev_if->dev_global_regs->dtknqr2;
26578 + DWC_PRINTF("DTKNQR2 @0x%08lX : 0x%08X\n",
26579 + (unsigned long)addr, DWC_READ_REG32(addr));
26580 + }
26581 +
26582 + addr = &core_if->dev_if->dev_global_regs->dvbusdis;
26583 +	DWC_PRINTF("DVBUSDIS @0x%08lX : 0x%08X\n", (unsigned long)addr,
26584 + DWC_READ_REG32(addr));
26585 +
26586 + addr = &core_if->dev_if->dev_global_regs->dvbuspulse;
26587 + DWC_PRINTF("DVBUSPULSE @0x%08lX : 0x%08X\n",
26588 + (unsigned long)addr, DWC_READ_REG32(addr));
26589 +
26590 + addr = &core_if->dev_if->dev_global_regs->dtknqr3_dthrctl;
26591 + DWC_PRINTF("DTKNQR3_DTHRCTL @0x%08lX : 0x%08X\n",
26592 + (unsigned long)addr, DWC_READ_REG32(addr));
26593 +
26594 + if (core_if->hwcfg2.b.dev_token_q_depth > 22) {
26595 + addr = &core_if->dev_if->dev_global_regs->dtknqr4_fifoemptymsk;
26596 + DWC_PRINTF("DTKNQR4 @0x%08lX : 0x%08X\n",
26597 + (unsigned long)addr, DWC_READ_REG32(addr));
26598 + }
26599 +
26600 + addr = &core_if->dev_if->dev_global_regs->dtknqr4_fifoemptymsk;
26601 + DWC_PRINTF("FIFOEMPMSK @0x%08lX : 0x%08X\n", (unsigned long)addr,
26602 + DWC_READ_REG32(addr));
26603 +
26604 + if (core_if->hwcfg2.b.multi_proc_int) {
26605 +
26606 + addr = &core_if->dev_if->dev_global_regs->deachint;
26607 + DWC_PRINTF("DEACHINT @0x%08lX : 0x%08X\n",
26608 + (unsigned long)addr, DWC_READ_REG32(addr));
26609 + addr = &core_if->dev_if->dev_global_regs->deachintmsk;
26610 + DWC_PRINTF("DEACHINTMSK @0x%08lX : 0x%08X\n",
26611 + (unsigned long)addr, DWC_READ_REG32(addr));
26612 +
26613 + for (i = 0; i <= core_if->dev_if->num_in_eps; i++) {
26614 + addr =
26615 + &core_if->dev_if->
26616 + dev_global_regs->diepeachintmsk[i];
26617 + DWC_PRINTF("DIEPEACHINTMSK[%d] @0x%08lX : 0x%08X\n",
26618 + i, (unsigned long)addr,
26619 + DWC_READ_REG32(addr));
26620 + }
26621 +
26622 + for (i = 0; i <= core_if->dev_if->num_out_eps; i++) {
26623 + addr =
26624 + &core_if->dev_if->
26625 + dev_global_regs->doepeachintmsk[i];
26626 + DWC_PRINTF("DOEPEACHINTMSK[%d] @0x%08lX : 0x%08X\n",
26627 + i, (unsigned long)addr,
26628 + DWC_READ_REG32(addr));
26629 + }
26630 + }
26631 +
26632 + for (i = 0; i <= core_if->dev_if->num_in_eps; i++) {
26633 + DWC_PRINTF("Device IN EP %d Registers\n", i);
26634 + addr = &core_if->dev_if->in_ep_regs[i]->diepctl;
26635 + DWC_PRINTF("DIEPCTL @0x%08lX : 0x%08X\n",
26636 + (unsigned long)addr, DWC_READ_REG32(addr));
26637 + addr = &core_if->dev_if->in_ep_regs[i]->diepint;
26638 + DWC_PRINTF("DIEPINT @0x%08lX : 0x%08X\n",
26639 + (unsigned long)addr, DWC_READ_REG32(addr));
26640 + addr = &core_if->dev_if->in_ep_regs[i]->dieptsiz;
26641 +		DWC_PRINTF("DIEPTSIZ @0x%08lX : 0x%08X\n",
26642 + (unsigned long)addr, DWC_READ_REG32(addr));
26643 + addr = &core_if->dev_if->in_ep_regs[i]->diepdma;
26644 + DWC_PRINTF("DIEPDMA @0x%08lX : 0x%08X\n",
26645 + (unsigned long)addr, DWC_READ_REG32(addr));
26646 + addr = &core_if->dev_if->in_ep_regs[i]->dtxfsts;
26647 + DWC_PRINTF("DTXFSTS @0x%08lX : 0x%08X\n",
26648 + (unsigned long)addr, DWC_READ_REG32(addr));
26649 + addr = &core_if->dev_if->in_ep_regs[i]->diepdmab;
26650 + DWC_PRINTF("DIEPDMAB @0x%08lX : 0x%08X\n",
26651 + (unsigned long)addr, 0 /*DWC_READ_REG32(addr) */ );
26652 + }
26653 +
26654 + for (i = 0; i <= core_if->dev_if->num_out_eps; i++) {
26655 + DWC_PRINTF("Device OUT EP %d Registers\n", i);
26656 + addr = &core_if->dev_if->out_ep_regs[i]->doepctl;
26657 + DWC_PRINTF("DOEPCTL @0x%08lX : 0x%08X\n",
26658 + (unsigned long)addr, DWC_READ_REG32(addr));
26659 + addr = &core_if->dev_if->out_ep_regs[i]->doepint;
26660 + DWC_PRINTF("DOEPINT @0x%08lX : 0x%08X\n",
26661 + (unsigned long)addr, DWC_READ_REG32(addr));
26662 + addr = &core_if->dev_if->out_ep_regs[i]->doeptsiz;
26663 +		DWC_PRINTF("DOEPTSIZ @0x%08lX : 0x%08X\n",
26664 + (unsigned long)addr, DWC_READ_REG32(addr));
26665 + addr = &core_if->dev_if->out_ep_regs[i]->doepdma;
26666 + DWC_PRINTF("DOEPDMA @0x%08lX : 0x%08X\n",
26667 + (unsigned long)addr, DWC_READ_REG32(addr));
26668 + if (core_if->dma_enable) { /* Don't access this register in SLAVE mode */
26669 + addr = &core_if->dev_if->out_ep_regs[i]->doepdmab;
26670 + DWC_PRINTF("DOEPDMAB @0x%08lX : 0x%08X\n",
26671 + (unsigned long)addr, DWC_READ_REG32(addr));
26672 + }
26673 +
26674 + }
26675 +}
26676 +
26677 +/**
26678 + * This function reads the SPRAM and prints its contents
26679 + *
26680 + * @param core_if Programming view of DWC_otg controller.
26681 + */
26682 +void dwc_otg_dump_spram(dwc_otg_core_if_t * core_if)
26683 +{
26684 + volatile uint8_t *addr, *start_addr, *end_addr;
26685 +
26686 + DWC_PRINTF("SPRAM Data:\n");
26687 + start_addr = (void *)core_if->core_global_regs;
26688 + DWC_PRINTF("Base Address: 0x%8lX\n", (unsigned long)start_addr);
26689 + start_addr += 0x00028000;
26690 + end_addr = (void *)core_if->core_global_regs;
26691 + end_addr += 0x000280e0;
26692 +
26693 + for (addr = start_addr; addr < end_addr; addr += 16) {
26694 + DWC_PRINTF
26695 + ("0x%8lX:\t%2X %2X %2X %2X %2X %2X %2X %2X %2X %2X %2X %2X %2X %2X %2X %2X\n",
26696 + (unsigned long)addr, addr[0], addr[1], addr[2], addr[3],
26697 + addr[4], addr[5], addr[6], addr[7], addr[8], addr[9],
26698 + addr[10], addr[11], addr[12], addr[13], addr[14], addr[15]
26699 + );
26700 + }
26701 +
26702 + return;
26703 +}
26704 +
26705 +/**
26706 + * This function reads the host registers and prints them
26707 + *
26708 + * @param core_if Programming view of DWC_otg controller.
26709 + */
26710 +void dwc_otg_dump_host_registers(dwc_otg_core_if_t * core_if)
26711 +{
26712 + int i;
26713 + volatile uint32_t *addr;
26714 +
26715 + DWC_PRINTF("Host Global Registers\n");
26716 + addr = &core_if->host_if->host_global_regs->hcfg;
26717 + DWC_PRINTF("HCFG @0x%08lX : 0x%08X\n",
26718 + (unsigned long)addr, DWC_READ_REG32(addr));
26719 + addr = &core_if->host_if->host_global_regs->hfir;
26720 + DWC_PRINTF("HFIR @0x%08lX : 0x%08X\n",
26721 + (unsigned long)addr, DWC_READ_REG32(addr));
26722 + addr = &core_if->host_if->host_global_regs->hfnum;
26723 + DWC_PRINTF("HFNUM @0x%08lX : 0x%08X\n", (unsigned long)addr,
26724 + DWC_READ_REG32(addr));
26725 + addr = &core_if->host_if->host_global_regs->hptxsts;
26726 + DWC_PRINTF("HPTXSTS @0x%08lX : 0x%08X\n", (unsigned long)addr,
26727 + DWC_READ_REG32(addr));
26728 + addr = &core_if->host_if->host_global_regs->haint;
26729 + DWC_PRINTF("HAINT @0x%08lX : 0x%08X\n", (unsigned long)addr,
26730 + DWC_READ_REG32(addr));
26731 + addr = &core_if->host_if->host_global_regs->haintmsk;
26732 + DWC_PRINTF("HAINTMSK @0x%08lX : 0x%08X\n", (unsigned long)addr,
26733 + DWC_READ_REG32(addr));
26734 + if (core_if->dma_desc_enable) {
26735 + addr = &core_if->host_if->host_global_regs->hflbaddr;
26736 + DWC_PRINTF("HFLBADDR @0x%08lX : 0x%08X\n",
26737 + (unsigned long)addr, DWC_READ_REG32(addr));
26738 + }
26739 +
26740 + addr = core_if->host_if->hprt0;
26741 + DWC_PRINTF("HPRT0 @0x%08lX : 0x%08X\n", (unsigned long)addr,
26742 + DWC_READ_REG32(addr));
26743 +
26744 + for (i = 0; i < core_if->core_params->host_channels; i++) {
26745 + DWC_PRINTF("Host Channel %d Specific Registers\n", i);
26746 + addr = &core_if->host_if->hc_regs[i]->hcchar;
26747 + DWC_PRINTF("HCCHAR @0x%08lX : 0x%08X\n",
26748 + (unsigned long)addr, DWC_READ_REG32(addr));
26749 + addr = &core_if->host_if->hc_regs[i]->hcsplt;
26750 + DWC_PRINTF("HCSPLT @0x%08lX : 0x%08X\n",
26751 + (unsigned long)addr, DWC_READ_REG32(addr));
26752 + addr = &core_if->host_if->hc_regs[i]->hcint;
26753 + DWC_PRINTF("HCINT @0x%08lX : 0x%08X\n",
26754 + (unsigned long)addr, DWC_READ_REG32(addr));
26755 + addr = &core_if->host_if->hc_regs[i]->hcintmsk;
26756 + DWC_PRINTF("HCINTMSK @0x%08lX : 0x%08X\n",
26757 + (unsigned long)addr, DWC_READ_REG32(addr));
26758 + addr = &core_if->host_if->hc_regs[i]->hctsiz;
26759 + DWC_PRINTF("HCTSIZ @0x%08lX : 0x%08X\n",
26760 + (unsigned long)addr, DWC_READ_REG32(addr));
26761 + addr = &core_if->host_if->hc_regs[i]->hcdma;
26762 + DWC_PRINTF("HCDMA @0x%08lX : 0x%08X\n",
26763 + (unsigned long)addr, DWC_READ_REG32(addr));
26764 + if (core_if->dma_desc_enable) {
26765 + addr = &core_if->host_if->hc_regs[i]->hcdmab;
26766 + DWC_PRINTF("HCDMAB @0x%08lX : 0x%08X\n",
26767 + (unsigned long)addr, DWC_READ_REG32(addr));
26768 + }
26769 +
26770 + }
26771 + return;
26772 +}
26773 +
26774 +/**
26775 + * This function reads the core global registers and prints them
26776 + *
26777 + * @param core_if Programming view of DWC_otg controller.
26778 + */
26779 +void dwc_otg_dump_global_registers(dwc_otg_core_if_t * core_if)
26780 +{
26781 + int i, ep_num;
26782 + volatile uint32_t *addr;
26783 + char *txfsiz;
26784 +
26785 + DWC_PRINTF("Core Global Registers\n");
26786 + addr = &core_if->core_global_regs->gotgctl;
26787 + DWC_PRINTF("GOTGCTL @0x%08lX : 0x%08X\n", (unsigned long)addr,
26788 + DWC_READ_REG32(addr));
26789 + addr = &core_if->core_global_regs->gotgint;
26790 + DWC_PRINTF("GOTGINT @0x%08lX : 0x%08X\n", (unsigned long)addr,
26791 + DWC_READ_REG32(addr));
26792 + addr = &core_if->core_global_regs->gahbcfg;
26793 + DWC_PRINTF("GAHBCFG @0x%08lX : 0x%08X\n", (unsigned long)addr,
26794 + DWC_READ_REG32(addr));
26795 + addr = &core_if->core_global_regs->gusbcfg;
26796 + DWC_PRINTF("GUSBCFG @0x%08lX : 0x%08X\n", (unsigned long)addr,
26797 + DWC_READ_REG32(addr));
26798 + addr = &core_if->core_global_regs->grstctl;
26799 + DWC_PRINTF("GRSTCTL @0x%08lX : 0x%08X\n", (unsigned long)addr,
26800 + DWC_READ_REG32(addr));
26801 + addr = &core_if->core_global_regs->gintsts;
26802 + DWC_PRINTF("GINTSTS @0x%08lX : 0x%08X\n", (unsigned long)addr,
26803 + DWC_READ_REG32(addr));
26804 + addr = &core_if->core_global_regs->gintmsk;
26805 + DWC_PRINTF("GINTMSK @0x%08lX : 0x%08X\n", (unsigned long)addr,
26806 + DWC_READ_REG32(addr));
26807 + addr = &core_if->core_global_regs->grxstsr;
26808 + DWC_PRINTF("GRXSTSR @0x%08lX : 0x%08X\n", (unsigned long)addr,
26809 + DWC_READ_REG32(addr));
26810 + addr = &core_if->core_global_regs->grxfsiz;
26811 + DWC_PRINTF("GRXFSIZ @0x%08lX : 0x%08X\n", (unsigned long)addr,
26812 + DWC_READ_REG32(addr));
26813 + addr = &core_if->core_global_regs->gnptxfsiz;
26814 + DWC_PRINTF("GNPTXFSIZ @0x%08lX : 0x%08X\n", (unsigned long)addr,
26815 + DWC_READ_REG32(addr));
26816 + addr = &core_if->core_global_regs->gnptxsts;
26817 + DWC_PRINTF("GNPTXSTS @0x%08lX : 0x%08X\n", (unsigned long)addr,
26818 + DWC_READ_REG32(addr));
26819 + addr = &core_if->core_global_regs->gi2cctl;
26820 + DWC_PRINTF("GI2CCTL @0x%08lX : 0x%08X\n", (unsigned long)addr,
26821 + DWC_READ_REG32(addr));
26822 + addr = &core_if->core_global_regs->gpvndctl;
26823 + DWC_PRINTF("GPVNDCTL @0x%08lX : 0x%08X\n", (unsigned long)addr,
26824 + DWC_READ_REG32(addr));
26825 + addr = &core_if->core_global_regs->ggpio;
26826 + DWC_PRINTF("GGPIO @0x%08lX : 0x%08X\n", (unsigned long)addr,
26827 + DWC_READ_REG32(addr));
26828 + addr = &core_if->core_global_regs->guid;
26829 + DWC_PRINTF("GUID @0x%08lX : 0x%08X\n",
26830 + (unsigned long)addr, DWC_READ_REG32(addr));
26831 + addr = &core_if->core_global_regs->gsnpsid;
26832 + DWC_PRINTF("GSNPSID @0x%08lX : 0x%08X\n", (unsigned long)addr,
26833 + DWC_READ_REG32(addr));
26834 + addr = &core_if->core_global_regs->ghwcfg1;
26835 + DWC_PRINTF("GHWCFG1 @0x%08lX : 0x%08X\n", (unsigned long)addr,
26836 + DWC_READ_REG32(addr));
26837 + addr = &core_if->core_global_regs->ghwcfg2;
26838 + DWC_PRINTF("GHWCFG2 @0x%08lX : 0x%08X\n", (unsigned long)addr,
26839 + DWC_READ_REG32(addr));
26840 + addr = &core_if->core_global_regs->ghwcfg3;
26841 + DWC_PRINTF("GHWCFG3 @0x%08lX : 0x%08X\n", (unsigned long)addr,
26842 + DWC_READ_REG32(addr));
26843 + addr = &core_if->core_global_regs->ghwcfg4;
26844 + DWC_PRINTF("GHWCFG4 @0x%08lX : 0x%08X\n", (unsigned long)addr,
26845 + DWC_READ_REG32(addr));
26846 + addr = &core_if->core_global_regs->glpmcfg;
26847 + DWC_PRINTF("GLPMCFG @0x%08lX : 0x%08X\n", (unsigned long)addr,
26848 + DWC_READ_REG32(addr));
26849 + addr = &core_if->core_global_regs->gpwrdn;
26850 + DWC_PRINTF("GPWRDN @0x%08lX : 0x%08X\n", (unsigned long)addr,
26851 + DWC_READ_REG32(addr));
26852 + addr = &core_if->core_global_regs->gdfifocfg;
26853 + DWC_PRINTF("GDFIFOCFG @0x%08lX : 0x%08X\n", (unsigned long)addr,
26854 + DWC_READ_REG32(addr));
26855 + addr = &core_if->core_global_regs->adpctl;
26856 + DWC_PRINTF("ADPCTL @0x%08lX : 0x%08X\n", (unsigned long)addr,
26857 + dwc_otg_adp_read_reg(core_if));
26858 + addr = &core_if->core_global_regs->hptxfsiz;
26859 + DWC_PRINTF("HPTXFSIZ @0x%08lX : 0x%08X\n", (unsigned long)addr,
26860 + DWC_READ_REG32(addr));
26861 +
26862 + if (core_if->en_multiple_tx_fifo == 0) {
26863 + ep_num = core_if->hwcfg4.b.num_dev_perio_in_ep;
26864 + txfsiz = "DPTXFSIZ";
26865 + } else {
26866 + ep_num = core_if->hwcfg4.b.num_in_eps;
26867 + txfsiz = "DIENPTXF";
26868 + }
26869 + for (i = 0; i < ep_num; i++) {
26870 + addr = &core_if->core_global_regs->dtxfsiz[i];
26871 + DWC_PRINTF("%s[%d] @0x%08lX : 0x%08X\n", txfsiz, i + 1,
26872 + (unsigned long)addr, DWC_READ_REG32(addr));
26873 + }
26874 + addr = core_if->pcgcctl;
26875 + DWC_PRINTF("PCGCCTL @0x%08lX : 0x%08X\n", (unsigned long)addr,
26876 + DWC_READ_REG32(addr));
26877 +}
26878 +
26879 +/**
26880 + * Flush a Tx FIFO.
26881 + *
26882 + * @param core_if Programming view of DWC_otg controller.
26883 + * @param num Tx FIFO to flush.
26884 + */
26885 +void dwc_otg_flush_tx_fifo(dwc_otg_core_if_t * core_if, const int num)
26886 +{
26887 + dwc_otg_core_global_regs_t *global_regs = core_if->core_global_regs;
26888 + volatile grstctl_t greset = {.d32 = 0 };
26889 + int count = 0;
26890 +
26891 + DWC_DEBUGPL((DBG_CIL | DBG_PCDV), "Flush Tx FIFO %d\n", num);
26892 +
26893 + greset.b.txfflsh = 1;
26894 + greset.b.txfnum = num;
26895 + DWC_WRITE_REG32(&global_regs->grstctl, greset.d32);
26896 +
26897 + do {
26898 + greset.d32 = DWC_READ_REG32(&global_regs->grstctl);
26899 + if (++count > 10000) {
26900 + DWC_WARN("%s() HANG! GRSTCTL=%0x GNPTXSTS=0x%08x\n",
26901 + __func__, greset.d32,
26902 + DWC_READ_REG32(&global_regs->gnptxsts));
26903 + break;
26904 + }
26905 + dwc_udelay(1);
26906 + } while (greset.b.txfflsh == 1);
26907 +
26908 + /* Wait for 3 PHY Clocks */
26909 + dwc_udelay(1);
26910 +}
26911 +
26912 +/**
26913 + * Flush Rx FIFO.
26914 + *
26915 + * @param core_if Programming view of DWC_otg controller.
26916 + */
26917 +void dwc_otg_flush_rx_fifo(dwc_otg_core_if_t * core_if)
26918 +{
26919 + dwc_otg_core_global_regs_t *global_regs = core_if->core_global_regs;
26920 + volatile grstctl_t greset = {.d32 = 0 };
26921 + int count = 0;
26922 +
26923 + DWC_DEBUGPL((DBG_CIL | DBG_PCDV), "%s\n", __func__);
26924 + /*
26925 + *
26926 + */
26927 + greset.b.rxfflsh = 1;
26928 + DWC_WRITE_REG32(&global_regs->grstctl, greset.d32);
26929 +
26930 + do {
26931 + greset.d32 = DWC_READ_REG32(&global_regs->grstctl);
26932 + if (++count > 10000) {
26933 + DWC_WARN("%s() HANG! GRSTCTL=%0x\n", __func__,
26934 + greset.d32);
26935 + break;
26936 + }
26937 + dwc_udelay(1);
26938 + } while (greset.b.rxfflsh == 1);
26939 +
26940 + /* Wait for 3 PHY Clocks */
26941 + dwc_udelay(1);
26942 +}
26943 +
26944 +/**
26945 + * Do a soft reset of the core.  Be careful with this because it
26946 + * resets all the internal state machines of the core.
26947 + */
26948 +void dwc_otg_core_reset(dwc_otg_core_if_t * core_if)
26949 +{
26950 + dwc_otg_core_global_regs_t *global_regs = core_if->core_global_regs;
26951 + volatile grstctl_t greset = {.d32 = 0 };
26952 + int count = 0;
26953 +
26954 + DWC_DEBUGPL(DBG_CILV, "%s\n", __func__);
26955 + /* Wait for AHB master IDLE state. */
26956 + do {
26957 + dwc_udelay(10);
26958 + greset.d32 = DWC_READ_REG32(&global_regs->grstctl);
26959 + if (++count > 100000) {
26960 + DWC_WARN("%s() HANG! AHB Idle GRSTCTL=%0x\n", __func__,
26961 + greset.d32);
26962 + return;
26963 + }
26964 + }
26965 + while (greset.b.ahbidle == 0);
26966 +
26967 + /* Core Soft Reset */
26968 + count = 0;
26969 + greset.b.csftrst = 1;
26970 + DWC_WRITE_REG32(&global_regs->grstctl, greset.d32);
26971 + do {
26972 + greset.d32 = DWC_READ_REG32(&global_regs->grstctl);
26973 + if (++count > 10000) {
26974 + DWC_WARN("%s() HANG! Soft Reset GRSTCTL=%0x\n",
26975 + __func__, greset.d32);
26976 + break;
26977 + }
26978 + dwc_udelay(1);
26979 + }
26980 + while (greset.b.csftrst == 1);
26981 +
26982 +	/* Wait for at least 3 PHY clocks; 100 ms ensures the soft reset has fully completed */
26983 + dwc_mdelay(100);
26984 +}
26985 +
26986 +uint8_t dwc_otg_is_device_mode(dwc_otg_core_if_t * _core_if)
26987 +{
26988 + return (dwc_otg_mode(_core_if) != DWC_HOST_MODE);
26989 +}
26990 +
26991 +uint8_t dwc_otg_is_host_mode(dwc_otg_core_if_t * _core_if)
26992 +{
26993 + return (dwc_otg_mode(_core_if) == DWC_HOST_MODE);
26994 +}
26995 +
26996 +/**
26997 + * Register HCD callbacks. The callbacks are used to start and stop
26998 + * the HCD for interrupt processing.
26999 + *
27000 + * @param core_if Programming view of DWC_otg controller.
27001 + * @param cb the HCD callback structure.
27002 + * @param p pointer to be passed to callback function (usb_hcd*).
27003 + */
27004 +void dwc_otg_cil_register_hcd_callbacks(dwc_otg_core_if_t * core_if,
27005 + dwc_otg_cil_callbacks_t * cb, void *p)
27006 +{
27007 + core_if->hcd_cb = cb;
27008 + cb->p = p;
27009 +}
27010 +
27011 +/**
27012 + * Register PCD callbacks. The callbacks are used to start and stop
27013 + * the PCD for interrupt processing.
27014 + *
27015 + * @param core_if Programming view of DWC_otg controller.
27016 + * @param cb the PCD callback structure.
27017 + * @param p pointer to be passed to callback function (pcd*).
27018 + */
27019 +void dwc_otg_cil_register_pcd_callbacks(dwc_otg_core_if_t * core_if,
27020 + dwc_otg_cil_callbacks_t * cb, void *p)
27021 +{
27022 + core_if->pcd_cb = cb;
27023 + cb->p = p;
27024 +}
27025 +
27026 +#ifdef DWC_EN_ISOC
27027 +
27028 +/**
27029 + * This function writes isochronous data for one (micro)frame into the Tx FIFO
27030 + *
27031 + * @param core_if Programming view of DWC_otg controller.
27032 + * @param ep The EP to start the transfer on.
27033 + *
27034 + */
27035 +void write_isoc_frame_data(dwc_otg_core_if_t * core_if, dwc_ep_t * ep)
27036 +{
27037 + dwc_otg_dev_in_ep_regs_t *ep_regs;
27038 + dtxfsts_data_t txstatus = {.d32 = 0 };
27039 + uint32_t len = 0;
27040 + uint32_t dwords;
27041 +
27042 + ep->xfer_len = ep->data_per_frame;
27043 + ep->xfer_count = 0;
27044 +
27045 + ep_regs = core_if->dev_if->in_ep_regs[ep->num];
27046 +
27047 + len = ep->xfer_len - ep->xfer_count;
27048 +
27049 + if (len > ep->maxpacket) {
27050 + len = ep->maxpacket;
27051 + }
27052 +
27053 + dwords = (len + 3) / 4;
27054 +
27055 + /* While there is space in the queue and space in the FIFO and
27056 +	 * more data to transfer, write packets to the Tx FIFO */
27057 + txstatus.d32 =
27058 + DWC_READ_REG32(&core_if->dev_if->in_ep_regs[ep->num]->dtxfsts);
27059 + DWC_DEBUGPL(DBG_PCDV, "b4 dtxfsts[%d]=0x%08x\n", ep->num, txstatus.d32);
27060 +
27061 + while (txstatus.b.txfspcavail > dwords &&
27062 + ep->xfer_count < ep->xfer_len && ep->xfer_len != 0) {
27063 + /* Write the FIFO */
27064 + dwc_otg_ep_write_packet(core_if, ep, 0);
27065 +
27066 + len = ep->xfer_len - ep->xfer_count;
27067 + if (len > ep->maxpacket) {
27068 + len = ep->maxpacket;
27069 + }
27070 +
27071 + dwords = (len + 3) / 4;
27072 + txstatus.d32 =
27073 + DWC_READ_REG32(&core_if->dev_if->in_ep_regs[ep->num]->
27074 + dtxfsts);
27075 + DWC_DEBUGPL(DBG_PCDV, "dtxfsts[%d]=0x%08x\n", ep->num,
27076 + txstatus.d32);
27077 + }
27078 +}
27079 +
27080 +/**
27081 + * This function starts an isochronous transfer for one (micro)frame
27082 + *
27083 + * @param core_if Programming view of DWC_otg controller.
27084 + * @param ep The EP to start the transfer on.
27085 + *
27086 + */
27087 +void dwc_otg_iso_ep_start_frm_transfer(dwc_otg_core_if_t * core_if,
27088 + dwc_ep_t * ep)
27089 +{
27090 + deptsiz_data_t deptsiz = {.d32 = 0 };
27091 + depctl_data_t depctl = {.d32 = 0 };
27092 + dsts_data_t dsts = {.d32 = 0 };
27093 + volatile uint32_t *addr;
27094 +
27095 + if (ep->is_in) {
27096 + addr = &core_if->dev_if->in_ep_regs[ep->num]->diepctl;
27097 + } else {
27098 + addr = &core_if->dev_if->out_ep_regs[ep->num]->doepctl;
27099 + }
27100 +
27101 + ep->xfer_len = ep->data_per_frame;
27102 + ep->xfer_count = 0;
27103 + ep->xfer_buff = ep->cur_pkt_addr;
27104 + ep->dma_addr = ep->cur_pkt_dma_addr;
27105 +
27106 + if (ep->is_in) {
27107 + /* Program the transfer size and packet count
27108 +		/* Program the transfer size and packet count
27109 +		 * as follows:
27110 +		 *   xfersize = N * maxpacket + short_packet
27111 +		 *   pktcnt = N + (short_packet exists ? 1 : 0)
27112 + deptsiz.b.xfersize = ep->xfer_len;
27113 + deptsiz.b.pktcnt =
27114 + (ep->xfer_len - 1 + ep->maxpacket) / ep->maxpacket;
27115 + deptsiz.b.mc = deptsiz.b.pktcnt;
27116 + DWC_WRITE_REG32(&core_if->dev_if->in_ep_regs[ep->num]->dieptsiz,
27117 + deptsiz.d32);
27118 +
27119 + /* Write the DMA register */
27120 + if (core_if->dma_enable) {
27121 + DWC_WRITE_REG32(&
27122 + (core_if->dev_if->in_ep_regs[ep->num]->
27123 + diepdma), (uint32_t) ep->dma_addr);
27124 + }
27125 + } else {
27126 + deptsiz.b.pktcnt =
27127 + (ep->xfer_len + (ep->maxpacket - 1)) / ep->maxpacket;
27128 + deptsiz.b.xfersize = deptsiz.b.pktcnt * ep->maxpacket;
27129 +
27130 + DWC_WRITE_REG32(&core_if->dev_if->
27131 + out_ep_regs[ep->num]->doeptsiz, deptsiz.d32);
27132 +
27133 + if (core_if->dma_enable) {
27134 + DWC_WRITE_REG32(&
27135 + (core_if->dev_if->
27136 + out_ep_regs[ep->num]->doepdma),
27137 + (uint32_t) ep->dma_addr);
27138 + }
27139 + }
27140 +
27141 + /** Enable endpoint, clear nak */
27142 +
27143 + depctl.d32 = 0;
27144 + if (ep->bInterval == 1) {
27145 + dsts.d32 =
27146 + DWC_READ_REG32(&core_if->dev_if->dev_global_regs->dsts);
27147 + ep->next_frame = dsts.b.soffn + ep->bInterval;
27148 +
27149 + if (ep->next_frame & 0x1) {
27150 + depctl.b.setd1pid = 1;
27151 + } else {
27152 + depctl.b.setd0pid = 1;
27153 + }
27154 + } else {
27155 + ep->next_frame += ep->bInterval;
27156 +
27157 + if (ep->next_frame & 0x1) {
27158 + depctl.b.setd1pid = 1;
27159 + } else {
27160 + depctl.b.setd0pid = 1;
27161 + }
27162 + }
27163 + depctl.b.epena = 1;
27164 + depctl.b.cnak = 1;
27165 +
27166 + DWC_MODIFY_REG32(addr, 0, depctl.d32);
27167 + depctl.d32 = DWC_READ_REG32(addr);
27168 +
27169 + if (ep->is_in && core_if->dma_enable == 0) {
27170 + write_isoc_frame_data(core_if, ep);
27171 + }
27172 +
27173 +}
27174 +#endif /* DWC_EN_ISOC */
27175 +
27176 +static void dwc_otg_set_uninitialized(int32_t * p, int size)
27177 +{
27178 + int i;
27179 + for (i = 0; i < size; i++) {
27180 + p[i] = -1;
27181 + }
27182 +}
27183 +
27184 +static int dwc_otg_param_initialized(int32_t val)
27185 +{
27186 + return val != -1;
27187 +}
27188 +
27189 +static int dwc_otg_setup_params(dwc_otg_core_if_t * core_if)
27190 +{
27191 + int i;
27192 + core_if->core_params = DWC_ALLOC(sizeof(*core_if->core_params));
27193 + if (!core_if->core_params) {
27194 + return -DWC_E_NO_MEMORY;
27195 + }
27196 + dwc_otg_set_uninitialized((int32_t *) core_if->core_params,
27197 + sizeof(*core_if->core_params) /
27198 + sizeof(int32_t));
27199 + DWC_PRINTF("Setting default values for core params\n");
27200 + dwc_otg_set_param_otg_cap(core_if, dwc_param_otg_cap_default);
27201 + dwc_otg_set_param_dma_enable(core_if, dwc_param_dma_enable_default);
27202 + dwc_otg_set_param_dma_desc_enable(core_if,
27203 + dwc_param_dma_desc_enable_default);
27204 + dwc_otg_set_param_opt(core_if, dwc_param_opt_default);
27205 + dwc_otg_set_param_dma_burst_size(core_if,
27206 + dwc_param_dma_burst_size_default);
27207 + dwc_otg_set_param_host_support_fs_ls_low_power(core_if,
27208 + dwc_param_host_support_fs_ls_low_power_default);
27209 + dwc_otg_set_param_enable_dynamic_fifo(core_if,
27210 + dwc_param_enable_dynamic_fifo_default);
27211 + dwc_otg_set_param_data_fifo_size(core_if,
27212 + dwc_param_data_fifo_size_default);
27213 + dwc_otg_set_param_dev_rx_fifo_size(core_if,
27214 + dwc_param_dev_rx_fifo_size_default);
27215 + dwc_otg_set_param_dev_nperio_tx_fifo_size(core_if,
27216 + dwc_param_dev_nperio_tx_fifo_size_default);
27217 + dwc_otg_set_param_host_rx_fifo_size(core_if,
27218 + dwc_param_host_rx_fifo_size_default);
27219 + dwc_otg_set_param_host_nperio_tx_fifo_size(core_if,
27220 + dwc_param_host_nperio_tx_fifo_size_default);
27221 + dwc_otg_set_param_host_perio_tx_fifo_size(core_if,
27222 + dwc_param_host_perio_tx_fifo_size_default);
27223 + dwc_otg_set_param_max_transfer_size(core_if,
27224 + dwc_param_max_transfer_size_default);
27225 + dwc_otg_set_param_max_packet_count(core_if,
27226 + dwc_param_max_packet_count_default);
27227 + dwc_otg_set_param_host_channels(core_if,
27228 + dwc_param_host_channels_default);
27229 + dwc_otg_set_param_dev_endpoints(core_if,
27230 + dwc_param_dev_endpoints_default);
27231 + dwc_otg_set_param_phy_type(core_if, dwc_param_phy_type_default);
27232 + dwc_otg_set_param_speed(core_if, dwc_param_speed_default);
27233 + dwc_otg_set_param_host_ls_low_power_phy_clk(core_if,
27234 + dwc_param_host_ls_low_power_phy_clk_default);
27235 + dwc_otg_set_param_phy_ulpi_ddr(core_if, dwc_param_phy_ulpi_ddr_default);
27236 + dwc_otg_set_param_phy_ulpi_ext_vbus(core_if,
27237 + dwc_param_phy_ulpi_ext_vbus_default);
27238 + dwc_otg_set_param_phy_utmi_width(core_if,
27239 + dwc_param_phy_utmi_width_default);
27240 + dwc_otg_set_param_ts_dline(core_if, dwc_param_ts_dline_default);
27241 + dwc_otg_set_param_i2c_enable(core_if, dwc_param_i2c_enable_default);
27242 + dwc_otg_set_param_ulpi_fs_ls(core_if, dwc_param_ulpi_fs_ls_default);
27243 + dwc_otg_set_param_en_multiple_tx_fifo(core_if,
27244 + dwc_param_en_multiple_tx_fifo_default);
27245 + for (i = 0; i < 15; i++) {
27246 + dwc_otg_set_param_dev_perio_tx_fifo_size(core_if,
27247 + dwc_param_dev_perio_tx_fifo_size_default,
27248 + i);
27249 + }
27250 +
27251 + for (i = 0; i < 15; i++) {
27252 + dwc_otg_set_param_dev_tx_fifo_size(core_if,
27253 + dwc_param_dev_tx_fifo_size_default,
27254 + i);
27255 + }
27256 + dwc_otg_set_param_thr_ctl(core_if, dwc_param_thr_ctl_default);
27257 + dwc_otg_set_param_mpi_enable(core_if, dwc_param_mpi_enable_default);
27258 + dwc_otg_set_param_pti_enable(core_if, dwc_param_pti_enable_default);
27259 + dwc_otg_set_param_lpm_enable(core_if, dwc_param_lpm_enable_default);
27260 + dwc_otg_set_param_ic_usb_cap(core_if, dwc_param_ic_usb_cap_default);
27261 + dwc_otg_set_param_tx_thr_length(core_if,
27262 + dwc_param_tx_thr_length_default);
27263 + dwc_otg_set_param_rx_thr_length(core_if,
27264 + dwc_param_rx_thr_length_default);
27265 + dwc_otg_set_param_ahb_thr_ratio(core_if,
27266 + dwc_param_ahb_thr_ratio_default);
27267 + dwc_otg_set_param_power_down(core_if, dwc_param_power_down_default);
27268 + dwc_otg_set_param_reload_ctl(core_if, dwc_param_reload_ctl_default);
27269 + dwc_otg_set_param_dev_out_nak(core_if, dwc_param_dev_out_nak_default);
27270 + dwc_otg_set_param_cont_on_bna(core_if, dwc_param_cont_on_bna_default);
27271 + dwc_otg_set_param_ahb_single(core_if, dwc_param_ahb_single_default);
27272 + dwc_otg_set_param_otg_ver(core_if, dwc_param_otg_ver_default);
27273 + dwc_otg_set_param_adp_enable(core_if, dwc_param_adp_enable_default);
27274 + DWC_PRINTF("Finished setting default values for core params\n");
27275 +
27276 + return 0;
27277 +}
27278 +
27279 +uint8_t dwc_otg_is_dma_enable(dwc_otg_core_if_t * core_if)
27280 +{
27281 + return core_if->dma_enable;
27282 +}
27283 +
27284 +/* Checks if the parameter is outside of its valid range of values */
27285 +#define DWC_OTG_PARAM_TEST(_param_, _low_, _high_) \
27286 + (((_param_) < (_low_)) || \
27287 + ((_param_) > (_high_)))
27288 +
27289 +/* Parameter access functions */
27290 +int dwc_otg_set_param_otg_cap(dwc_otg_core_if_t * core_if, int32_t val)
27291 +{
27292 + int valid;
27293 + int retval = 0;
27294 + if (DWC_OTG_PARAM_TEST(val, 0, 2)) {
27295 + DWC_WARN("Wrong value for otg_cap parameter\n");
27296 + DWC_WARN("otg_cap parameter must be 0,1 or 2\n");
27297 + retval = -DWC_E_INVALID;
27298 + goto out;
27299 + }
27300 +
27301 + valid = 1;
27302 + switch (val) {
27303 + case DWC_OTG_CAP_PARAM_HNP_SRP_CAPABLE:
27304 + if (core_if->hwcfg2.b.op_mode !=
27305 + DWC_HWCFG2_OP_MODE_HNP_SRP_CAPABLE_OTG)
27306 + valid = 0;
27307 + break;
27308 + case DWC_OTG_CAP_PARAM_SRP_ONLY_CAPABLE:
27309 + if ((core_if->hwcfg2.b.op_mode !=
27310 + DWC_HWCFG2_OP_MODE_HNP_SRP_CAPABLE_OTG)
27311 + && (core_if->hwcfg2.b.op_mode !=
27312 + DWC_HWCFG2_OP_MODE_SRP_ONLY_CAPABLE_OTG)
27313 + && (core_if->hwcfg2.b.op_mode !=
27314 + DWC_HWCFG2_OP_MODE_SRP_CAPABLE_DEVICE)
27315 + && (core_if->hwcfg2.b.op_mode !=
27316 + DWC_HWCFG2_OP_MODE_SRP_CAPABLE_HOST)) {
27317 + valid = 0;
27318 + }
27319 + break;
27320 + case DWC_OTG_CAP_PARAM_NO_HNP_SRP_CAPABLE:
27321 + /* always valid */
27322 + break;
27323 + }
27324 + if (!valid) {
27325 + if (dwc_otg_param_initialized(core_if->core_params->otg_cap)) {
27326 + DWC_ERROR
27327 +			    ("%d invalid for otg_cap parameter. Check HW configuration.\n",
27328 + val);
27329 + }
27330 + val =
27331 + (((core_if->hwcfg2.b.op_mode ==
27332 + DWC_HWCFG2_OP_MODE_HNP_SRP_CAPABLE_OTG)
27333 + || (core_if->hwcfg2.b.op_mode ==
27334 + DWC_HWCFG2_OP_MODE_SRP_ONLY_CAPABLE_OTG)
27335 + || (core_if->hwcfg2.b.op_mode ==
27336 + DWC_HWCFG2_OP_MODE_SRP_CAPABLE_DEVICE)
27337 + || (core_if->hwcfg2.b.op_mode ==
27338 + DWC_HWCFG2_OP_MODE_SRP_CAPABLE_HOST)) ?
27339 + DWC_OTG_CAP_PARAM_SRP_ONLY_CAPABLE :
27340 + DWC_OTG_CAP_PARAM_NO_HNP_SRP_CAPABLE);
27341 + retval = -DWC_E_INVALID;
27342 + }
27343 +
27344 + core_if->core_params->otg_cap = val;
27345 +out:
27346 + return retval;
27347 +}
27348 +
27349 +int32_t dwc_otg_get_param_otg_cap(dwc_otg_core_if_t * core_if)
27350 +{
27351 + return core_if->core_params->otg_cap;
27352 +}
27353 +
27354 +int dwc_otg_set_param_opt(dwc_otg_core_if_t * core_if, int32_t val)
27355 +{
27356 + if (DWC_OTG_PARAM_TEST(val, 0, 1)) {
27357 + DWC_WARN("Wrong value for opt parameter\n");
27358 + return -DWC_E_INVALID;
27359 + }
27360 + core_if->core_params->opt = val;
27361 + return 0;
27362 +}
27363 +
27364 +int32_t dwc_otg_get_param_opt(dwc_otg_core_if_t * core_if)
27365 +{
27366 + return core_if->core_params->opt;
27367 +}
27368 +
27369 +int dwc_otg_set_param_dma_enable(dwc_otg_core_if_t * core_if, int32_t val)
27370 +{
27371 + int retval = 0;
27372 + if (DWC_OTG_PARAM_TEST(val, 0, 1)) {
27373 +		DWC_WARN("Wrong value for dma_enable\n");
27374 + return -DWC_E_INVALID;
27375 + }
27376 +
27377 + if ((val == 1) && (core_if->hwcfg2.b.architecture == 0)) {
27378 + if (dwc_otg_param_initialized(core_if->core_params->dma_enable)) {
27379 + DWC_ERROR
27380 +			    ("%d invalid for dma_enable parameter. Check HW configuration.\n",
27381 + val);
27382 + }
27383 + val = 0;
27384 + retval = -DWC_E_INVALID;
27385 + }
27386 +
27387 + core_if->core_params->dma_enable = val;
27388 + if (val == 0) {
27389 + dwc_otg_set_param_dma_desc_enable(core_if, 0);
27390 + }
27391 + return retval;
27392 +}
27393 +
27394 +int32_t dwc_otg_get_param_dma_enable(dwc_otg_core_if_t * core_if)
27395 +{
27396 + return core_if->core_params->dma_enable;
27397 +}
27398 +
27399 +int dwc_otg_set_param_dma_desc_enable(dwc_otg_core_if_t * core_if, int32_t val)
27400 +{
27401 + int retval = 0;
27402 + if (DWC_OTG_PARAM_TEST(val, 0, 1)) {
27403 +		DWC_WARN("Wrong value for dma_desc_enable\n");
27404 + DWC_WARN("dma_desc_enable must be 0 or 1\n");
27405 + return -DWC_E_INVALID;
27406 + }
27407 +
27408 + if ((val == 1)
27409 + && ((dwc_otg_get_param_dma_enable(core_if) == 0)
27410 + || (core_if->hwcfg4.b.desc_dma == 0))) {
27411 + if (dwc_otg_param_initialized
27412 + (core_if->core_params->dma_desc_enable)) {
27413 + DWC_ERROR
27414 +			    ("%d invalid for dma_desc_enable parameter. Check HW configuration.\n",
27415 + val);
27416 + }
27417 + val = 0;
27418 + retval = -DWC_E_INVALID;
27419 + }
27420 + core_if->core_params->dma_desc_enable = val;
27421 + return retval;
27422 +}
27423 +
27424 +int32_t dwc_otg_get_param_dma_desc_enable(dwc_otg_core_if_t * core_if)
27425 +{
27426 + return core_if->core_params->dma_desc_enable;
27427 +}
27428 +
27429 +int dwc_otg_set_param_host_support_fs_ls_low_power(dwc_otg_core_if_t * core_if,
27430 + int32_t val)
27431 +{
27432 + if (DWC_OTG_PARAM_TEST(val, 0, 1)) {
27433 +		DWC_WARN("Wrong value for host_support_fs_ls_low_power\n");
27434 +		DWC_WARN("host_support_fs_ls_low_power must be 0 or 1\n");
27435 + return -DWC_E_INVALID;
27436 + }
27437 + core_if->core_params->host_support_fs_ls_low_power = val;
27438 + return 0;
27439 +}
27440 +
27441 +int32_t dwc_otg_get_param_host_support_fs_ls_low_power(dwc_otg_core_if_t *
27442 + core_if)
27443 +{
27444 + return core_if->core_params->host_support_fs_ls_low_power;
27445 +}
27446 +
27447 +int dwc_otg_set_param_enable_dynamic_fifo(dwc_otg_core_if_t * core_if,
27448 + int32_t val)
27449 +{
27450 + int retval = 0;
27451 + if (DWC_OTG_PARAM_TEST(val, 0, 1)) {
27452 + DWC_WARN("Wrong value for enable_dynamic_fifo\n");
27453 + DWC_WARN("enable_dynamic_fifo must be 0 or 1\n");
27454 + return -DWC_E_INVALID;
27455 + }
27456 +
27457 + if ((val == 1) && (core_if->hwcfg2.b.dynamic_fifo == 0)) {
27458 + if (dwc_otg_param_initialized
27459 + (core_if->core_params->enable_dynamic_fifo)) {
27460 + DWC_ERROR
27461 +			    ("%d invalid for enable_dynamic_fifo parameter. Check HW configuration.\n",
27462 + val);
27463 + }
27464 + val = 0;
27465 + retval = -DWC_E_INVALID;
27466 + }
27467 + core_if->core_params->enable_dynamic_fifo = val;
27468 + return retval;
27469 +}
27470 +
27471 +int32_t dwc_otg_get_param_enable_dynamic_fifo(dwc_otg_core_if_t * core_if)
27472 +{
27473 + return core_if->core_params->enable_dynamic_fifo;
27474 +}
27475 +
27476 +int dwc_otg_set_param_data_fifo_size(dwc_otg_core_if_t * core_if, int32_t val)
27477 +{
27478 + int retval = 0;
27479 + if (DWC_OTG_PARAM_TEST(val, 32, 32768)) {
27480 + DWC_WARN("Wrong value for data_fifo_size\n");
27481 + DWC_WARN("data_fifo_size must be 32-32768\n");
27482 + return -DWC_E_INVALID;
27483 + }
27484 +
27485 + if (val > core_if->hwcfg3.b.dfifo_depth) {
27486 + if (dwc_otg_param_initialized
27487 + (core_if->core_params->data_fifo_size)) {
27488 + DWC_ERROR
27489 + ("%d invalid for data_fifo_size parameter. Check HW configuration.\n",
27490 + val);
27491 + }
27492 + val = core_if->hwcfg3.b.dfifo_depth;
27493 + retval = -DWC_E_INVALID;
27494 + }
27495 +
27496 + core_if->core_params->data_fifo_size = val;
27497 + return retval;
27498 +}
27499 +
27500 +int32_t dwc_otg_get_param_data_fifo_size(dwc_otg_core_if_t * core_if)
27501 +{
27502 + return core_if->core_params->data_fifo_size;
27503 +}
27504 +
27505 +int dwc_otg_set_param_dev_rx_fifo_size(dwc_otg_core_if_t * core_if, int32_t val)
27506 +{
27507 + int retval = 0;
27508 + if (DWC_OTG_PARAM_TEST(val, 16, 32768)) {
27509 + DWC_WARN("Wrong value for dev_rx_fifo_size\n");
27510 + DWC_WARN("dev_rx_fifo_size must be 16-32768\n");
27511 + return -DWC_E_INVALID;
27512 + }
27513 +
27514 + if (val > DWC_READ_REG32(&core_if->core_global_regs->grxfsiz)) {
27515 + if (dwc_otg_param_initialized(core_if->core_params->dev_rx_fifo_size)) {
27516 + DWC_WARN("%d invalid for dev_rx_fifo_size parameter\n", val);
27517 + }
27518 + val = DWC_READ_REG32(&core_if->core_global_regs->grxfsiz);
27519 + retval = -DWC_E_INVALID;
27520 + }
27521 +
27522 + core_if->core_params->dev_rx_fifo_size = val;
27523 + return retval;
27524 +}
27525 +
27526 +int32_t dwc_otg_get_param_dev_rx_fifo_size(dwc_otg_core_if_t * core_if)
27527 +{
27528 + return core_if->core_params->dev_rx_fifo_size;
27529 +}
27530 +
27531 +int dwc_otg_set_param_dev_nperio_tx_fifo_size(dwc_otg_core_if_t * core_if,
27532 + int32_t val)
27533 +{
27534 + int retval = 0;
27535 +
27536 + if (DWC_OTG_PARAM_TEST(val, 16, 32768)) {
27537 +		DWC_WARN("Wrong value for dev_nperio_tx_fifo_size\n");
27538 +		DWC_WARN("dev_nperio_tx_fifo_size must be 16-32768\n");
27539 + return -DWC_E_INVALID;
27540 + }
27541 +
27542 + if (val > (DWC_READ_REG32(&core_if->core_global_regs->gnptxfsiz) >> 16)) {
27543 + if (dwc_otg_param_initialized
27544 + (core_if->core_params->dev_nperio_tx_fifo_size)) {
27545 + DWC_ERROR
27546 + ("%d invalid for dev_nperio_tx_fifo_size. Check HW configuration.\n",
27547 + val);
27548 + }
27549 + val =
27550 + (DWC_READ_REG32(&core_if->core_global_regs->gnptxfsiz) >>
27551 + 16);
27552 + retval = -DWC_E_INVALID;
27553 + }
27554 +
27555 + core_if->core_params->dev_nperio_tx_fifo_size = val;
27556 + return retval;
27557 +}
27558 +
27559 +int32_t dwc_otg_get_param_dev_nperio_tx_fifo_size(dwc_otg_core_if_t * core_if)
27560 +{
27561 + return core_if->core_params->dev_nperio_tx_fifo_size;
27562 +}
27563 +
27564 +int dwc_otg_set_param_host_rx_fifo_size(dwc_otg_core_if_t * core_if,
27565 + int32_t val)
27566 +{
27567 + int retval = 0;
27568 +
27569 + if (DWC_OTG_PARAM_TEST(val, 16, 32768)) {
27570 + DWC_WARN("Wrong value for host_rx_fifo_size\n");
27571 + DWC_WARN("host_rx_fifo_size must be 16-32768\n");
27572 + return -DWC_E_INVALID;
27573 + }
27574 +
27575 + if (val > DWC_READ_REG32(&core_if->core_global_regs->grxfsiz)) {
27576 + if (dwc_otg_param_initialized
27577 + (core_if->core_params->host_rx_fifo_size)) {
27578 + DWC_ERROR
27579 + ("%d invalid for host_rx_fifo_size. Check HW configuration.\n",
27580 + val);
27581 + }
27582 + val = DWC_READ_REG32(&core_if->core_global_regs->grxfsiz);
27583 + retval = -DWC_E_INVALID;
27584 + }
27585 +
27586 + core_if->core_params->host_rx_fifo_size = val;
27587 + return retval;
27588 +
27589 +}
27590 +
27591 +int32_t dwc_otg_get_param_host_rx_fifo_size(dwc_otg_core_if_t * core_if)
27592 +{
27593 + return core_if->core_params->host_rx_fifo_size;
27594 +}
27595 +
27596 +int dwc_otg_set_param_host_nperio_tx_fifo_size(dwc_otg_core_if_t * core_if,
27597 + int32_t val)
27598 +{
27599 + int retval = 0;
27600 +
27601 + if (DWC_OTG_PARAM_TEST(val, 16, 32768)) {
27602 + DWC_WARN("Wrong value for host_nperio_tx_fifo_size\n");
27603 + DWC_WARN("host_nperio_tx_fifo_size must be 16-32768\n");
27604 + return -DWC_E_INVALID;
27605 + }
27606 +
27607 + if (val > (DWC_READ_REG32(&core_if->core_global_regs->gnptxfsiz) >> 16)) {
27608 + if (dwc_otg_param_initialized
27609 + (core_if->core_params->host_nperio_tx_fifo_size)) {
27610 + DWC_ERROR
27611 + ("%d invalid for host_nperio_tx_fifo_size. Check HW configuration.\n",
27612 + val);
27613 + }
27614 + val =
27615 + (DWC_READ_REG32(&core_if->core_global_regs->gnptxfsiz) >>
27616 + 16);
27617 + retval = -DWC_E_INVALID;
27618 + }
27619 +
27620 + core_if->core_params->host_nperio_tx_fifo_size = val;
27621 + return retval;
27622 +}
27623 +
27624 +int32_t dwc_otg_get_param_host_nperio_tx_fifo_size(dwc_otg_core_if_t * core_if)
27625 +{
27626 + return core_if->core_params->host_nperio_tx_fifo_size;
27627 +}
27628 +
27629 +int dwc_otg_set_param_host_perio_tx_fifo_size(dwc_otg_core_if_t * core_if,
27630 + int32_t val)
27631 +{
27632 + int retval = 0;
27633 + if (DWC_OTG_PARAM_TEST(val, 16, 32768)) {
27634 + DWC_WARN("Wrong value for host_perio_tx_fifo_size\n");
27635 + DWC_WARN("host_perio_tx_fifo_size must be 16-32768\n");
27636 + return -DWC_E_INVALID;
27637 + }
27638 +
27639 + if (val > ((core_if->hptxfsiz.d32) >> 16)) {
27640 + if (dwc_otg_param_initialized
27641 + (core_if->core_params->host_perio_tx_fifo_size)) {
27642 + DWC_ERROR
27643 + ("%d invalid for host_perio_tx_fifo_size. Check HW configuration.\n",
27644 + val);
27645 + }
27646 + val = (core_if->hptxfsiz.d32) >> 16;
27647 + retval = -DWC_E_INVALID;
27648 + }
27649 +
27650 + core_if->core_params->host_perio_tx_fifo_size = val;
27651 + return retval;
27652 +}
27653 +
27654 +int32_t dwc_otg_get_param_host_perio_tx_fifo_size(dwc_otg_core_if_t * core_if)
27655 +{
27656 + return core_if->core_params->host_perio_tx_fifo_size;
27657 +}
27658 +
27659 +int dwc_otg_set_param_max_transfer_size(dwc_otg_core_if_t * core_if,
27660 + int32_t val)
27661 +{
27662 + int retval = 0;
27663 +
27664 + if (DWC_OTG_PARAM_TEST(val, 2047, 524288)) {
27665 + DWC_WARN("Wrong value for max_transfer_size\n");
27666 + DWC_WARN("max_transfer_size must be 2047-524288\n");
27667 + return -DWC_E_INVALID;
27668 + }
27669 +
27670 + if (val >= (1 << (core_if->hwcfg3.b.xfer_size_cntr_width + 11))) {
27671 + if (dwc_otg_param_initialized
27672 + (core_if->core_params->max_transfer_size)) {
27673 + DWC_ERROR
27674 + ("%d invalid for max_transfer_size. Check HW configuration.\n",
27675 + val);
27676 + }
27677 + val =
27678 + ((1 << (core_if->hwcfg3.b.packet_size_cntr_width + 11)) -
27679 + 1);
27680 + retval = -DWC_E_INVALID;
27681 + }
27682 +
27683 + core_if->core_params->max_transfer_size = val;
27684 + return retval;
27685 +}
27686 +
27687 +int32_t dwc_otg_get_param_max_transfer_size(dwc_otg_core_if_t * core_if)
27688 +{
27689 + return core_if->core_params->max_transfer_size;
27690 +}
27691 +
27692 +int dwc_otg_set_param_max_packet_count(dwc_otg_core_if_t * core_if, int32_t val)
27693 +{
27694 + int retval = 0;
27695 +
27696 + if (DWC_OTG_PARAM_TEST(val, 15, 511)) {
27697 + DWC_WARN("Wrong value for max_packet_count\n");
27698 + DWC_WARN("max_packet_count must be 15-511\n");
27699 + return -DWC_E_INVALID;
27700 + }
27701 +
27702 + if (val > (1 << (core_if->hwcfg3.b.packet_size_cntr_width + 4))) {
27703 + if (dwc_otg_param_initialized
27704 + (core_if->core_params->max_packet_count)) {
27705 + DWC_ERROR
27706 + ("%d invalid for max_packet_count. Check HW configuration.\n",
27707 + val);
27708 + }
27709 + val =
27710 + ((1 << (core_if->hwcfg3.b.packet_size_cntr_width + 4)) - 1);
27711 + retval = -DWC_E_INVALID;
27712 + }
27713 +
27714 + core_if->core_params->max_packet_count = val;
27715 + return retval;
27716 +}
27717 +
27718 +int32_t dwc_otg_get_param_max_packet_count(dwc_otg_core_if_t * core_if)
27719 +{
27720 + return core_if->core_params->max_packet_count;
27721 +}
27722 +
27723 +int dwc_otg_set_param_host_channels(dwc_otg_core_if_t * core_if, int32_t val)
27724 +{
27725 + int retval = 0;
27726 +
27727 + if (DWC_OTG_PARAM_TEST(val, 1, 16)) {
27728 + DWC_WARN("Wrong value for host_channels\n");
27729 + DWC_WARN("host_channels must be 1-16\n");
27730 + return -DWC_E_INVALID;
27731 + }
27732 +
27733 + if (val > (core_if->hwcfg2.b.num_host_chan + 1)) {
27734 + if (dwc_otg_param_initialized
27735 + (core_if->core_params->host_channels)) {
27736 + DWC_ERROR
27737 + ("%d invalid for host_channels. Check HW configurations.\n",
27738 + val);
27739 + }
27740 + val = (core_if->hwcfg2.b.num_host_chan + 1);
27741 + retval = -DWC_E_INVALID;
27742 + }
27743 +
27744 + core_if->core_params->host_channels = val;
27745 + return retval;
27746 +}
27747 +
27748 +int32_t dwc_otg_get_param_host_channels(dwc_otg_core_if_t * core_if)
27749 +{
27750 + return core_if->core_params->host_channels;
27751 +}
27752 +
27753 +int dwc_otg_set_param_dev_endpoints(dwc_otg_core_if_t * core_if, int32_t val)
27754 +{
27755 + int retval = 0;
27756 +
27757 + if (DWC_OTG_PARAM_TEST(val, 1, 15)) {
27758 + DWC_WARN("Wrong value for dev_endpoints\n");
27759 + DWC_WARN("dev_endpoints must be 1-15\n");
27760 + return -DWC_E_INVALID;
27761 + }
27762 +
27763 + if (val > (core_if->hwcfg2.b.num_dev_ep)) {
27764 + if (dwc_otg_param_initialized
27765 + (core_if->core_params->dev_endpoints)) {
27766 + DWC_ERROR
27767 + ("%d invalid for dev_endpoints. Check HW configurations.\n",
27768 + val);
27769 + }
27770 + val = core_if->hwcfg2.b.num_dev_ep;
27771 + retval = -DWC_E_INVALID;
27772 + }
27773 +
27774 + core_if->core_params->dev_endpoints = val;
27775 + return retval;
27776 +}
27777 +
27778 +int32_t dwc_otg_get_param_dev_endpoints(dwc_otg_core_if_t * core_if)
27779 +{
27780 + return core_if->core_params->dev_endpoints;
27781 +}
27782 +
27783 +int dwc_otg_set_param_phy_type(dwc_otg_core_if_t * core_if, int32_t val)
27784 +{
27785 + int retval = 0;
27786 + int valid = 0;
27787 +
27788 + if (DWC_OTG_PARAM_TEST(val, 0, 2)) {
27789 + DWC_WARN("Wrong value for phy_type\n");
27790 + DWC_WARN("phy_type must be 0,1 or 2\n");
27791 + return -DWC_E_INVALID;
27792 + }
27793 +#ifndef NO_FS_PHY_HW_CHECKS
27794 + if ((val == DWC_PHY_TYPE_PARAM_UTMI) &&
27795 + ((core_if->hwcfg2.b.hs_phy_type == 1) ||
27796 + (core_if->hwcfg2.b.hs_phy_type == 3))) {
27797 + valid = 1;
27798 + } else if ((val == DWC_PHY_TYPE_PARAM_ULPI) &&
27799 + ((core_if->hwcfg2.b.hs_phy_type == 2) ||
27800 + (core_if->hwcfg2.b.hs_phy_type == 3))) {
27801 + valid = 1;
27802 + } else if ((val == DWC_PHY_TYPE_PARAM_FS) &&
27803 + (core_if->hwcfg2.b.fs_phy_type == 1)) {
27804 + valid = 1;
27805 + }
27806 + if (!valid) {
27807 + if (dwc_otg_param_initialized(core_if->core_params->phy_type)) {
27808 + DWC_ERROR
27809 + ("%d invalid for phy_type. Check HW configurations.\n",
27810 + val);
27811 + }
27812 + if (core_if->hwcfg2.b.hs_phy_type) {
27813 + if ((core_if->hwcfg2.b.hs_phy_type == 3) ||
27814 + (core_if->hwcfg2.b.hs_phy_type == 1)) {
27815 + val = DWC_PHY_TYPE_PARAM_UTMI;
27816 + } else {
27817 + val = DWC_PHY_TYPE_PARAM_ULPI;
27818 + }
27819 + }
27820 + retval = -DWC_E_INVALID;
27821 + }
27822 +#endif
27823 + core_if->core_params->phy_type = val;
27824 + return retval;
27825 +}
27826 +
27827 +int32_t dwc_otg_get_param_phy_type(dwc_otg_core_if_t * core_if)
27828 +{
27829 + return core_if->core_params->phy_type;
27830 +}
27831 +
27832 +int dwc_otg_set_param_speed(dwc_otg_core_if_t * core_if, int32_t val)
27833 +{
27834 + int retval = 0;
27835 + if (DWC_OTG_PARAM_TEST(val, 0, 1)) {
27836 + DWC_WARN("Wrong value for speed parameter\n");
27837 +		DWC_WARN("speed parameter must be 0 or 1\n");
27838 + return -DWC_E_INVALID;
27839 + }
27840 + if ((val == 0)
27841 + && dwc_otg_get_param_phy_type(core_if) == DWC_PHY_TYPE_PARAM_FS) {
27842 + if (dwc_otg_param_initialized(core_if->core_params->speed)) {
27843 + DWC_ERROR
27844 +			    ("%d invalid for speed parameter. Check HW configuration.\n",
27845 + val);
27846 + }
27847 + val =
27848 + (dwc_otg_get_param_phy_type(core_if) ==
27849 + DWC_PHY_TYPE_PARAM_FS ? 1 : 0);
27850 + retval = -DWC_E_INVALID;
27851 + }
27852 + core_if->core_params->speed = val;
27853 + return retval;
27854 +}
27855 +
27856 +int32_t dwc_otg_get_param_speed(dwc_otg_core_if_t * core_if)
27857 +{
27858 + return core_if->core_params->speed;
27859 +}
27860 +
27861 +int dwc_otg_set_param_host_ls_low_power_phy_clk(dwc_otg_core_if_t * core_if,
27862 + int32_t val)
27863 +{
27864 + int retval = 0;
27865 +
27866 + if (DWC_OTG_PARAM_TEST(val, 0, 1)) {
27867 + DWC_WARN
27868 + ("Wrong value for host_ls_low_power_phy_clk parameter\n");
27869 + DWC_WARN("host_ls_low_power_phy_clk must be 0 or 1\n");
27870 + return -DWC_E_INVALID;
27871 + }
27872 +
27873 + if ((val == DWC_HOST_LS_LOW_POWER_PHY_CLK_PARAM_48MHZ)
27874 + && (dwc_otg_get_param_phy_type(core_if) == DWC_PHY_TYPE_PARAM_FS)) {
27875 + if (dwc_otg_param_initialized
27876 + (core_if->core_params->host_ls_low_power_phy_clk)) {
27877 + DWC_ERROR
27878 + ("%d invalid for host_ls_low_power_phy_clk. Check HW configuration.\n",
27879 + val);
27880 + }
27881 + val =
27882 + (dwc_otg_get_param_phy_type(core_if) ==
27883 + DWC_PHY_TYPE_PARAM_FS) ?
27884 + DWC_HOST_LS_LOW_POWER_PHY_CLK_PARAM_6MHZ :
27885 + DWC_HOST_LS_LOW_POWER_PHY_CLK_PARAM_48MHZ;
27886 + retval = -DWC_E_INVALID;
27887 + }
27888 +
27889 + core_if->core_params->host_ls_low_power_phy_clk = val;
27890 + return retval;
27891 +}
27892 +
27893 +int32_t dwc_otg_get_param_host_ls_low_power_phy_clk(dwc_otg_core_if_t * core_if)
27894 +{
27895 + return core_if->core_params->host_ls_low_power_phy_clk;
27896 +}
27897 +
27898 +int dwc_otg_set_param_phy_ulpi_ddr(dwc_otg_core_if_t * core_if, int32_t val)
27899 +{
27900 + if (DWC_OTG_PARAM_TEST(val, 0, 1)) {
27901 + DWC_WARN("Wrong value for phy_ulpi_ddr\n");
27902 +		DWC_WARN("phy_ulpi_ddr must be 0 or 1\n");
27903 + return -DWC_E_INVALID;
27904 + }
27905 +
27906 + core_if->core_params->phy_ulpi_ddr = val;
27907 + return 0;
27908 +}
27909 +
27910 +int32_t dwc_otg_get_param_phy_ulpi_ddr(dwc_otg_core_if_t * core_if)
27911 +{
27912 + return core_if->core_params->phy_ulpi_ddr;
27913 +}
27914 +
27915 +int dwc_otg_set_param_phy_ulpi_ext_vbus(dwc_otg_core_if_t * core_if,
27916 + int32_t val)
27917 +{
27918 + if (DWC_OTG_PARAM_TEST(val, 0, 1)) {
27919 +		DWC_WARN("Wrong value for phy_ulpi_ext_vbus\n");
27920 + DWC_WARN("phy_ulpi_ext_vbus must be 0 or 1\n");
27921 + return -DWC_E_INVALID;
27922 + }
27923 +
27924 + core_if->core_params->phy_ulpi_ext_vbus = val;
27925 + return 0;
27926 +}
27927 +
27928 +int32_t dwc_otg_get_param_phy_ulpi_ext_vbus(dwc_otg_core_if_t * core_if)
27929 +{
27930 + return core_if->core_params->phy_ulpi_ext_vbus;
27931 +}
27932 +
27933 +int dwc_otg_set_param_phy_utmi_width(dwc_otg_core_if_t * core_if, int32_t val)
27934 +{
27935 + if (DWC_OTG_PARAM_TEST(val, 8, 8) && DWC_OTG_PARAM_TEST(val, 16, 16)) {
27936 +		DWC_WARN("Wrong value for phy_utmi_width\n");
27937 + DWC_WARN("phy_utmi_width must be 8 or 16\n");
27938 + return -DWC_E_INVALID;
27939 + }
27940 +
27941 + core_if->core_params->phy_utmi_width = val;
27942 + return 0;
27943 +}
27944 +
27945 +int32_t dwc_otg_get_param_phy_utmi_width(dwc_otg_core_if_t * core_if)
27946 +{
27947 + return core_if->core_params->phy_utmi_width;
27948 +}
27949 +
27950 +int dwc_otg_set_param_ulpi_fs_ls(dwc_otg_core_if_t * core_if, int32_t val)
27951 +{
27952 + if (DWC_OTG_PARAM_TEST(val, 0, 1)) {
27953 +		DWC_WARN("Wrong value for ulpi_fs_ls\n");
27954 + DWC_WARN("ulpi_fs_ls must be 0 or 1\n");
27955 + return -DWC_E_INVALID;
27956 + }
27957 +
27958 + core_if->core_params->ulpi_fs_ls = val;
27959 + return 0;
27960 +}
27961 +
27962 +int32_t dwc_otg_get_param_ulpi_fs_ls(dwc_otg_core_if_t * core_if)
27963 +{
27964 + return core_if->core_params->ulpi_fs_ls;
27965 +}
27966 +
27967 +int dwc_otg_set_param_ts_dline(dwc_otg_core_if_t * core_if, int32_t val)
27968 +{
27969 + if (DWC_OTG_PARAM_TEST(val, 0, 1)) {
27970 +		DWC_WARN("Wrong value for ts_dline\n");
27971 + DWC_WARN("ts_dline must be 0 or 1\n");
27972 + return -DWC_E_INVALID;
27973 + }
27974 +
27975 + core_if->core_params->ts_dline = val;
27976 + return 0;
27977 +}
27978 +
27979 +int32_t dwc_otg_get_param_ts_dline(dwc_otg_core_if_t * core_if)
27980 +{
27981 + return core_if->core_params->ts_dline;
27982 +}
27983 +
27984 +int dwc_otg_set_param_i2c_enable(dwc_otg_core_if_t * core_if, int32_t val)
27985 +{
27986 + int retval = 0;
27987 + if (DWC_OTG_PARAM_TEST(val, 0, 1)) {
27988 +		DWC_WARN("Wrong value for i2c_enable\n");
27989 + DWC_WARN("i2c_enable must be 0 or 1\n");
27990 + return -DWC_E_INVALID;
27991 + }
27992 +#ifndef NO_FS_PHY_HW_CHECK
27993 + if (val == 1 && core_if->hwcfg3.b.i2c == 0) {
27994 + if (dwc_otg_param_initialized(core_if->core_params->i2c_enable)) {
27995 + DWC_ERROR
27996 + ("%d invalid for i2c_enable. Check HW configuration.\n",
27997 + val);
27998 + }
27999 + val = 0;
28000 + retval = -DWC_E_INVALID;
28001 + }
28002 +#endif
28003 +
28004 + core_if->core_params->i2c_enable = val;
28005 + return retval;
28006 +}
28007 +
28008 +int32_t dwc_otg_get_param_i2c_enable(dwc_otg_core_if_t * core_if)
28009 +{
28010 + return core_if->core_params->i2c_enable;
28011 +}
28012 +
28013 +int dwc_otg_set_param_dev_perio_tx_fifo_size(dwc_otg_core_if_t * core_if,
28014 + int32_t val, int fifo_num)
28015 +{
28016 + int retval = 0;
28017 +
28018 + if (DWC_OTG_PARAM_TEST(val, 4, 768)) {
28019 + DWC_WARN("Wrong value for dev_perio_tx_fifo_size\n");
28020 + DWC_WARN("dev_perio_tx_fifo_size must be 4-768\n");
28021 + return -DWC_E_INVALID;
28022 + }
28023 +
28024 + if (val >
28025 + (DWC_READ_REG32(&core_if->core_global_regs->dtxfsiz[fifo_num]))) {
28026 + if (dwc_otg_param_initialized
28027 + (core_if->core_params->dev_perio_tx_fifo_size[fifo_num])) {
28028 + DWC_ERROR
28029 +			    ("`%d' invalid for parameter `dev_perio_tx_fifo_size_%d'. Check HW configuration.\n",
28030 + val, fifo_num);
28031 + }
28032 + val = (DWC_READ_REG32(&core_if->core_global_regs->dtxfsiz[fifo_num]));
28033 + retval = -DWC_E_INVALID;
28034 + }
28035 +
28036 + core_if->core_params->dev_perio_tx_fifo_size[fifo_num] = val;
28037 + return retval;
28038 +}
28039 +
28040 +int32_t dwc_otg_get_param_dev_perio_tx_fifo_size(dwc_otg_core_if_t * core_if,
28041 + int fifo_num)
28042 +{
28043 + return core_if->core_params->dev_perio_tx_fifo_size[fifo_num];
28044 +}
28045 +
28046 +int dwc_otg_set_param_en_multiple_tx_fifo(dwc_otg_core_if_t * core_if,
28047 + int32_t val)
28048 +{
28049 + int retval = 0;
28050 + if (DWC_OTG_PARAM_TEST(val, 0, 1)) {
28051 +		DWC_WARN("Wrong value for en_multiple_tx_fifo\n");
28052 + DWC_WARN("en_multiple_tx_fifo must be 0 or 1\n");
28053 + return -DWC_E_INVALID;
28054 + }
28055 +
28056 + if (val == 1 && core_if->hwcfg4.b.ded_fifo_en == 0) {
28057 + if (dwc_otg_param_initialized
28058 + (core_if->core_params->en_multiple_tx_fifo)) {
28059 + DWC_ERROR
28060 + ("%d invalid for parameter en_multiple_tx_fifo. Check HW configuration.\n",
28061 + val);
28062 + }
28063 + val = 0;
28064 + retval = -DWC_E_INVALID;
28065 + }
28066 +
28067 + core_if->core_params->en_multiple_tx_fifo = val;
28068 + return retval;
28069 +}
28070 +
28071 +int32_t dwc_otg_get_param_en_multiple_tx_fifo(dwc_otg_core_if_t * core_if)
28072 +{
28073 + return core_if->core_params->en_multiple_tx_fifo;
28074 +}
28075 +
28076 +int dwc_otg_set_param_dev_tx_fifo_size(dwc_otg_core_if_t * core_if, int32_t val,
28077 + int fifo_num)
28078 +{
28079 + int retval = 0;
28080 +
28081 + if (DWC_OTG_PARAM_TEST(val, 4, 768)) {
28082 + DWC_WARN("Wrong value for dev_tx_fifo_size\n");
28083 + DWC_WARN("dev_tx_fifo_size must be 4-768\n");
28084 + return -DWC_E_INVALID;
28085 + }
28086 +
28087 + if (val >
28088 + (DWC_READ_REG32(&core_if->core_global_regs->dtxfsiz[fifo_num]))) {
28089 + if (dwc_otg_param_initialized
28090 + (core_if->core_params->dev_tx_fifo_size[fifo_num])) {
28091 + DWC_ERROR
28092 + ("`%d' invalid for parameter `dev_tx_fifo_size_%d'. Check HW configuration.\n",
28093 + val, fifo_num);
28094 + }
28095 + val = (DWC_READ_REG32(&core_if->core_global_regs->dtxfsiz[fifo_num]));
28096 + retval = -DWC_E_INVALID;
28097 + }
28098 +
28099 + core_if->core_params->dev_tx_fifo_size[fifo_num] = val;
28100 + return retval;
28101 +}
28102 +
28103 +int32_t dwc_otg_get_param_dev_tx_fifo_size(dwc_otg_core_if_t * core_if,
28104 + int fifo_num)
28105 +{
28106 + return core_if->core_params->dev_tx_fifo_size[fifo_num];
28107 +}
28108 +
28109 +int dwc_otg_set_param_thr_ctl(dwc_otg_core_if_t * core_if, int32_t val)
28110 +{
28111 + int retval = 0;
28112 +
28113 + if (DWC_OTG_PARAM_TEST(val, 0, 7)) {
28114 + DWC_WARN("Wrong value for thr_ctl\n");
28115 + DWC_WARN("thr_ctl must be 0-7\n");
28116 + return -DWC_E_INVALID;
28117 + }
28118 +
28119 + if ((val != 0) &&
28120 + (!dwc_otg_get_param_dma_enable(core_if) ||
28121 + !core_if->hwcfg4.b.ded_fifo_en)) {
28122 + if (dwc_otg_param_initialized(core_if->core_params->thr_ctl)) {
28123 + DWC_ERROR
28124 + ("%d invalid for parameter thr_ctl. Check HW configuration.\n",
28125 + val);
28126 + }
28127 + val = 0;
28128 + retval = -DWC_E_INVALID;
28129 + }
28130 +
28131 + core_if->core_params->thr_ctl = val;
28132 + return retval;
28133 +}
28134 +
28135 +int32_t dwc_otg_get_param_thr_ctl(dwc_otg_core_if_t * core_if)
28136 +{
28137 + return core_if->core_params->thr_ctl;
28138 +}
28139 +
28140 +int dwc_otg_set_param_lpm_enable(dwc_otg_core_if_t * core_if, int32_t val)
28141 +{
28142 + int retval = 0;
28143 +
28144 + if (DWC_OTG_PARAM_TEST(val, 0, 1)) {
28145 + DWC_WARN("Wrong value for lpm_enable\n");
28146 + DWC_WARN("lpm_enable must be 0 or 1\n");
28147 + return -DWC_E_INVALID;
28148 + }
28149 +
28150 + if (val && !core_if->hwcfg3.b.otg_lpm_en) {
28151 + if (dwc_otg_param_initialized(core_if->core_params->lpm_enable)) {
28152 + DWC_ERROR
28153 + ("%d invalid for parameter lpm_enable. Check HW configuration.\n",
28154 + val);
28155 + }
28156 + val = 0;
28157 + retval = -DWC_E_INVALID;
28158 + }
28159 +
28160 + core_if->core_params->lpm_enable = val;
28161 + return retval;
28162 +}
28163 +
28164 +int32_t dwc_otg_get_param_lpm_enable(dwc_otg_core_if_t * core_if)
28165 +{
28166 + return core_if->core_params->lpm_enable;
28167 +}
28168 +
28169 +int dwc_otg_set_param_tx_thr_length(dwc_otg_core_if_t * core_if, int32_t val)
28170 +{
28171 + if (DWC_OTG_PARAM_TEST(val, 8, 128)) {
28172 + 		DWC_WARN("Wrong value for tx_thr_length\n");
28173 + DWC_WARN("tx_thr_length must be 8 - 128\n");
28174 + return -DWC_E_INVALID;
28175 + }
28176 +
28177 + core_if->core_params->tx_thr_length = val;
28178 + return 0;
28179 +}
28180 +
28181 +int32_t dwc_otg_get_param_tx_thr_length(dwc_otg_core_if_t * core_if)
28182 +{
28183 + return core_if->core_params->tx_thr_length;
28184 +}
28185 +
28186 +int dwc_otg_set_param_rx_thr_length(dwc_otg_core_if_t * core_if, int32_t val)
28187 +{
28188 + if (DWC_OTG_PARAM_TEST(val, 8, 128)) {
28189 + 		DWC_WARN("Wrong value for rx_thr_length\n");
28190 + DWC_WARN("rx_thr_length must be 8 - 128\n");
28191 + return -DWC_E_INVALID;
28192 + }
28193 +
28194 + core_if->core_params->rx_thr_length = val;
28195 + return 0;
28196 +}
28197 +
28198 +int32_t dwc_otg_get_param_rx_thr_length(dwc_otg_core_if_t * core_if)
28199 +{
28200 + return core_if->core_params->rx_thr_length;
28201 +}
28202 +
28203 +int dwc_otg_set_param_dma_burst_size(dwc_otg_core_if_t * core_if, int32_t val)
28204 +{
28205 + if (DWC_OTG_PARAM_TEST(val, 1, 1) &&
28206 + DWC_OTG_PARAM_TEST(val, 4, 4) &&
28207 + DWC_OTG_PARAM_TEST(val, 8, 8) &&
28208 + DWC_OTG_PARAM_TEST(val, 16, 16) &&
28209 + DWC_OTG_PARAM_TEST(val, 32, 32) &&
28210 + DWC_OTG_PARAM_TEST(val, 64, 64) &&
28211 + DWC_OTG_PARAM_TEST(val, 128, 128) &&
28212 + DWC_OTG_PARAM_TEST(val, 256, 256)) {
28213 + DWC_WARN("`%d' invalid for parameter `dma_burst_size'\n", val);
28214 + return -DWC_E_INVALID;
28215 + }
28216 + core_if->core_params->dma_burst_size = val;
28217 + return 0;
28218 +}
28219 +
28220 +int32_t dwc_otg_get_param_dma_burst_size(dwc_otg_core_if_t * core_if)
28221 +{
28222 + return core_if->core_params->dma_burst_size;
28223 +}
28224 +
28225 +int dwc_otg_set_param_pti_enable(dwc_otg_core_if_t * core_if, int32_t val)
28226 +{
28227 + int retval = 0;
28228 + if (DWC_OTG_PARAM_TEST(val, 0, 1)) {
28229 + DWC_WARN("`%d' invalid for parameter `pti_enable'\n", val);
28230 + return -DWC_E_INVALID;
28231 + }
28232 + if (val && (core_if->snpsid < OTG_CORE_REV_2_72a)) {
28233 + if (dwc_otg_param_initialized(core_if->core_params->pti_enable)) {
28234 + DWC_ERROR
28235 + ("%d invalid for parameter pti_enable. Check HW configuration.\n",
28236 + val);
28237 + }
28238 + retval = -DWC_E_INVALID;
28239 + val = 0;
28240 + }
28241 + core_if->core_params->pti_enable = val;
28242 + return retval;
28243 +}
28244 +
28245 +int32_t dwc_otg_get_param_pti_enable(dwc_otg_core_if_t * core_if)
28246 +{
28247 + return core_if->core_params->pti_enable;
28248 +}
28249 +
28250 +int dwc_otg_set_param_mpi_enable(dwc_otg_core_if_t * core_if, int32_t val)
28251 +{
28252 + int retval = 0;
28253 + if (DWC_OTG_PARAM_TEST(val, 0, 1)) {
28254 + DWC_WARN("`%d' invalid for parameter `mpi_enable'\n", val);
28255 + return -DWC_E_INVALID;
28256 + }
28257 + if (val && (core_if->hwcfg2.b.multi_proc_int == 0)) {
28258 + if (dwc_otg_param_initialized(core_if->core_params->mpi_enable)) {
28259 + DWC_ERROR
28260 + ("%d invalid for parameter mpi_enable. Check HW configuration.\n",
28261 + val);
28262 + }
28263 + retval = -DWC_E_INVALID;
28264 + val = 0;
28265 + }
28266 + core_if->core_params->mpi_enable = val;
28267 + return retval;
28268 +}
28269 +
28270 +int32_t dwc_otg_get_param_mpi_enable(dwc_otg_core_if_t * core_if)
28271 +{
28272 + return core_if->core_params->mpi_enable;
28273 +}
28274 +
28275 +int dwc_otg_set_param_adp_enable(dwc_otg_core_if_t * core_if, int32_t val)
28276 +{
28277 + int retval = 0;
28278 + if (DWC_OTG_PARAM_TEST(val, 0, 1)) {
28279 + DWC_WARN("`%d' invalid for parameter `adp_enable'\n", val);
28280 + return -DWC_E_INVALID;
28281 + }
28282 + if (val && (core_if->hwcfg3.b.adp_supp == 0)) {
28283 + if (dwc_otg_param_initialized
28284 + (core_if->core_params->adp_supp_enable)) {
28285 + DWC_ERROR
28286 + ("%d invalid for parameter adp_enable. Check HW configuration.\n",
28287 + val);
28288 + }
28289 + retval = -DWC_E_INVALID;
28290 + val = 0;
28291 + }
28292 + core_if->core_params->adp_supp_enable = val;
28293 + 	/* Set OTG version 2.0 in case of enabling ADP */
28294 + if (val)
28295 + dwc_otg_set_param_otg_ver(core_if, 1);
28296 +
28297 + return retval;
28298 +}
28299 +
28300 +int32_t dwc_otg_get_param_adp_enable(dwc_otg_core_if_t * core_if)
28301 +{
28302 + return core_if->core_params->adp_supp_enable;
28303 +}
28304 +
28305 +int dwc_otg_set_param_ic_usb_cap(dwc_otg_core_if_t * core_if, int32_t val)
28306 +{
28307 + int retval = 0;
28308 + if (DWC_OTG_PARAM_TEST(val, 0, 1)) {
28309 + DWC_WARN("`%d' invalid for parameter `ic_usb_cap'\n", val);
28310 + DWC_WARN("ic_usb_cap must be 0 or 1\n");
28311 + return -DWC_E_INVALID;
28312 + }
28313 +
28314 + if (val && (core_if->hwcfg2.b.otg_enable_ic_usb == 0)) {
28315 + if (dwc_otg_param_initialized(core_if->core_params->ic_usb_cap)) {
28316 + DWC_ERROR
28317 + ("%d invalid for parameter ic_usb_cap. Check HW configuration.\n",
28318 + val);
28319 + }
28320 + retval = -DWC_E_INVALID;
28321 + val = 0;
28322 + }
28323 + core_if->core_params->ic_usb_cap = val;
28324 + return retval;
28325 +}
28326 +
28327 +int32_t dwc_otg_get_param_ic_usb_cap(dwc_otg_core_if_t * core_if)
28328 +{
28329 + return core_if->core_params->ic_usb_cap;
28330 +}
28331 +
28332 +int dwc_otg_set_param_ahb_thr_ratio(dwc_otg_core_if_t * core_if, int32_t val)
28333 +{
28334 + int retval = 0;
28335 + int valid = 1;
28336 +
28337 + if (DWC_OTG_PARAM_TEST(val, 0, 3)) {
28338 + DWC_WARN("`%d' invalid for parameter `ahb_thr_ratio'\n", val);
28339 + DWC_WARN("ahb_thr_ratio must be 0 - 3\n");
28340 + return -DWC_E_INVALID;
28341 + }
28342 +
28343 + if (val
28344 + && (core_if->snpsid < OTG_CORE_REV_2_81a
28345 + || !dwc_otg_get_param_thr_ctl(core_if))) {
28346 + valid = 0;
28347 + } else if (val
28348 + && ((dwc_otg_get_param_tx_thr_length(core_if) / (1 << val)) <
28349 + 4)) {
28350 + valid = 0;
28351 + }
28352 + if (valid == 0) {
28353 + if (dwc_otg_param_initialized
28354 + (core_if->core_params->ahb_thr_ratio)) {
28355 + DWC_ERROR
28356 + ("%d invalid for parameter ahb_thr_ratio. Check HW configuration.\n",
28357 + val);
28358 + }
28359 + retval = -DWC_E_INVALID;
28360 + val = 0;
28361 + }
28362 +
28363 + core_if->core_params->ahb_thr_ratio = val;
28364 + return retval;
28365 +}
28366 +
28367 +int32_t dwc_otg_get_param_ahb_thr_ratio(dwc_otg_core_if_t * core_if)
28368 +{
28369 + return core_if->core_params->ahb_thr_ratio;
28370 +}
28371 +
28372 +int dwc_otg_set_param_power_down(dwc_otg_core_if_t * core_if, int32_t val)
28373 +{
28374 + int retval = 0;
28375 + int valid = 1;
28376 + hwcfg4_data_t hwcfg4 = {.d32 = 0 };
28377 + hwcfg4.d32 = DWC_READ_REG32(&core_if->core_global_regs->ghwcfg4);
28378 +
28379 + if (DWC_OTG_PARAM_TEST(val, 0, 3)) {
28380 + DWC_WARN("`%d' invalid for parameter `power_down'\n", val);
28381 + DWC_WARN("power_down must be 0 - 2\n");
28382 + return -DWC_E_INVALID;
28383 + }
28384 +
28385 + if ((val == 2) && (core_if->snpsid < OTG_CORE_REV_2_91a)) {
28386 + valid = 0;
28387 + }
28388 + if ((val == 3)
28389 + && ((core_if->snpsid < OTG_CORE_REV_3_00a)
28390 + || (hwcfg4.b.xhiber == 0))) {
28391 + valid = 0;
28392 + }
28393 + if (valid == 0) {
28394 + if (dwc_otg_param_initialized(core_if->core_params->power_down)) {
28395 + DWC_ERROR
28396 + ("%d invalid for parameter power_down. Check HW configuration.\n",
28397 + val);
28398 + }
28399 + retval = -DWC_E_INVALID;
28400 + val = 0;
28401 + }
28402 + core_if->core_params->power_down = val;
28403 + return retval;
28404 +}
28405 +
28406 +int32_t dwc_otg_get_param_power_down(dwc_otg_core_if_t * core_if)
28407 +{
28408 + return core_if->core_params->power_down;
28409 +}
28410 +
28411 +int dwc_otg_set_param_reload_ctl(dwc_otg_core_if_t * core_if, int32_t val)
28412 +{
28413 + int retval = 0;
28414 + int valid = 1;
28415 +
28416 + if (DWC_OTG_PARAM_TEST(val, 0, 1)) {
28417 + DWC_WARN("`%d' invalid for parameter `reload_ctl'\n", val);
28418 + DWC_WARN("reload_ctl must be 0 or 1\n");
28419 + return -DWC_E_INVALID;
28420 + }
28421 +
28422 + if ((val == 1) && (core_if->snpsid < OTG_CORE_REV_2_92a)) {
28423 + valid = 0;
28424 + }
28425 + if (valid == 0) {
28426 + if (dwc_otg_param_initialized(core_if->core_params->reload_ctl)) {
28427 + DWC_ERROR("%d invalid for parameter reload_ctl."
28428 + 				  " Check HW configuration.\n", val);
28429 + }
28430 + retval = -DWC_E_INVALID;
28431 + val = 0;
28432 + }
28433 + core_if->core_params->reload_ctl = val;
28434 + return retval;
28435 +}
28436 +
28437 +int32_t dwc_otg_get_param_reload_ctl(dwc_otg_core_if_t * core_if)
28438 +{
28439 + return core_if->core_params->reload_ctl;
28440 +}
28441 +
28442 +int dwc_otg_set_param_dev_out_nak(dwc_otg_core_if_t * core_if, int32_t val)
28443 +{
28444 + int retval = 0;
28445 + int valid = 1;
28446 +
28447 + if (DWC_OTG_PARAM_TEST(val, 0, 1)) {
28448 + DWC_WARN("`%d' invalid for parameter `dev_out_nak'\n", val);
28449 + DWC_WARN("dev_out_nak must be 0 or 1\n");
28450 + return -DWC_E_INVALID;
28451 + }
28452 +
28453 + if ((val == 1) && ((core_if->snpsid < OTG_CORE_REV_2_93a) ||
28454 + !(core_if->core_params->dma_desc_enable))) {
28455 + valid = 0;
28456 + }
28457 + if (valid == 0) {
28458 + if (dwc_otg_param_initialized(core_if->core_params->dev_out_nak)) {
28459 + DWC_ERROR("%d invalid for parameter dev_out_nak."
28460 + 				  " Check HW configuration.\n", val);
28461 + }
28462 + retval = -DWC_E_INVALID;
28463 + val = 0;
28464 + }
28465 + core_if->core_params->dev_out_nak = val;
28466 + return retval;
28467 +}
28468 +
28469 +int32_t dwc_otg_get_param_dev_out_nak(dwc_otg_core_if_t * core_if)
28470 +{
28471 + return core_if->core_params->dev_out_nak;
28472 +}
28473 +
28474 +int dwc_otg_set_param_cont_on_bna(dwc_otg_core_if_t * core_if, int32_t val)
28475 +{
28476 + int retval = 0;
28477 + int valid = 1;
28478 +
28479 + if (DWC_OTG_PARAM_TEST(val, 0, 1)) {
28480 + DWC_WARN("`%d' invalid for parameter `cont_on_bna'\n", val);
28481 + DWC_WARN("cont_on_bna must be 0 or 1\n");
28482 + return -DWC_E_INVALID;
28483 + }
28484 +
28485 + if ((val == 1) && ((core_if->snpsid < OTG_CORE_REV_2_94a) ||
28486 + !(core_if->core_params->dma_desc_enable))) {
28487 + valid = 0;
28488 + }
28489 + if (valid == 0) {
28490 + if (dwc_otg_param_initialized(core_if->core_params->cont_on_bna)) {
28491 + DWC_ERROR("%d invalid for parameter cont_on_bna."
28492 + 				  " Check HW configuration.\n", val);
28493 + }
28494 + retval = -DWC_E_INVALID;
28495 + val = 0;
28496 + }
28497 + core_if->core_params->cont_on_bna = val;
28498 + return retval;
28499 +}
28500 +
28501 +int32_t dwc_otg_get_param_cont_on_bna(dwc_otg_core_if_t * core_if)
28502 +{
28503 + return core_if->core_params->cont_on_bna;
28504 +}
28505 +
28506 +int dwc_otg_set_param_ahb_single(dwc_otg_core_if_t * core_if, int32_t val)
28507 +{
28508 + int retval = 0;
28509 + int valid = 1;
28510 +
28511 + if (DWC_OTG_PARAM_TEST(val, 0, 1)) {
28512 + DWC_WARN("`%d' invalid for parameter `ahb_single'\n", val);
28513 + DWC_WARN("ahb_single must be 0 or 1\n");
28514 + return -DWC_E_INVALID;
28515 + }
28516 +
28517 + if ((val == 1) && (core_if->snpsid < OTG_CORE_REV_2_94a)) {
28518 + valid = 0;
28519 + }
28520 + if (valid == 0) {
28521 + if (dwc_otg_param_initialized(core_if->core_params->ahb_single)) {
28522 + DWC_ERROR("%d invalid for parameter ahb_single."
28523 + "Check HW configuration.\n", val);
28524 + 				  " Check HW configuration.\n", val);
28525 + retval = -DWC_E_INVALID;
28526 + val = 0;
28527 + }
28528 + core_if->core_params->ahb_single = val;
28529 + return retval;
28530 +}
28531 +
28532 +int32_t dwc_otg_get_param_ahb_single(dwc_otg_core_if_t * core_if)
28533 +{
28534 + return core_if->core_params->ahb_single;
28535 +}
28536 +
28537 +int dwc_otg_set_param_otg_ver(dwc_otg_core_if_t * core_if, int32_t val)
28538 +{
28539 + int retval = 0;
28540 +
28541 + if (DWC_OTG_PARAM_TEST(val, 0, 1)) {
28542 + DWC_WARN("`%d' invalid for parameter `otg_ver'\n", val);
28543 + DWC_WARN
28544 + ("otg_ver must be 0(for OTG 1.3 support) or 1(for OTG 2.0 support)\n");
28545 + return -DWC_E_INVALID;
28546 + }
28547 +
28548 + core_if->core_params->otg_ver = val;
28549 + return retval;
28550 +}
28551 +
28552 +int32_t dwc_otg_get_param_otg_ver(dwc_otg_core_if_t * core_if)
28553 +{
28554 + return core_if->core_params->otg_ver;
28555 +}
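+/*
+ * All of the per-parameter setters above share the same shape: range-check
+ * the requested value, compare it against what the synthesized hardware
+ * (hwcfg1..4 / SNPSID) actually supports, fall back to a safe value and
+ * return -DWC_E_INVALID on a mismatch, and finally store the (possibly
+ * adjusted) value in core_params.  As a purely illustrative sketch (the
+ * variable requested_lpm below is hypothetical and not part of this file),
+ * a caller would typically use them as:
+ *
+ *	if (dwc_otg_set_param_lpm_enable(core_if, requested_lpm) < 0)
+ *		DWC_WARN("lpm_enable fell back to %d\n",
+ *			 dwc_otg_get_param_lpm_enable(core_if));
+ */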
28556 +
28557 +uint32_t dwc_otg_get_hnpstatus(dwc_otg_core_if_t * core_if)
28558 +{
28559 + gotgctl_data_t otgctl;
28560 + otgctl.d32 = DWC_READ_REG32(&core_if->core_global_regs->gotgctl);
28561 + return otgctl.b.hstnegscs;
28562 +}
28563 +
28564 +uint32_t dwc_otg_get_srpstatus(dwc_otg_core_if_t * core_if)
28565 +{
28566 + gotgctl_data_t otgctl;
28567 + otgctl.d32 = DWC_READ_REG32(&core_if->core_global_regs->gotgctl);
28568 + return otgctl.b.sesreqscs;
28569 +}
28570 +
28571 +void dwc_otg_set_hnpreq(dwc_otg_core_if_t * core_if, uint32_t val)
28572 +{
28573 + if(core_if->otg_ver == 0) {
28574 + gotgctl_data_t otgctl;
28575 + otgctl.d32 = DWC_READ_REG32(&core_if->core_global_regs->gotgctl);
28576 + otgctl.b.hnpreq = val;
28577 + DWC_WRITE_REG32(&core_if->core_global_regs->gotgctl, otgctl.d32);
28578 + } else {
28579 + core_if->otg_sts = val;
28580 + }
28581 +}
28582 +
28583 +uint32_t dwc_otg_get_gsnpsid(dwc_otg_core_if_t * core_if)
28584 +{
28585 + return core_if->snpsid;
28586 +}
28587 +
28588 +uint32_t dwc_otg_get_mode(dwc_otg_core_if_t * core_if)
28589 +{
28590 + gintsts_data_t gintsts;
28591 + gintsts.d32 = DWC_READ_REG32(&core_if->core_global_regs->gintsts);
28592 + return gintsts.b.curmode;
28593 +}
28594 +
28595 +uint32_t dwc_otg_get_hnpcapable(dwc_otg_core_if_t * core_if)
28596 +{
28597 + gusbcfg_data_t usbcfg;
28598 + usbcfg.d32 = DWC_READ_REG32(&core_if->core_global_regs->gusbcfg);
28599 + return usbcfg.b.hnpcap;
28600 +}
28601 +
28602 +void dwc_otg_set_hnpcapable(dwc_otg_core_if_t * core_if, uint32_t val)
28603 +{
28604 + gusbcfg_data_t usbcfg;
28605 + usbcfg.d32 = DWC_READ_REG32(&core_if->core_global_regs->gusbcfg);
28606 + usbcfg.b.hnpcap = val;
28607 + DWC_WRITE_REG32(&core_if->core_global_regs->gusbcfg, usbcfg.d32);
28608 +}
28609 +
28610 +uint32_t dwc_otg_get_srpcapable(dwc_otg_core_if_t * core_if)
28611 +{
28612 + gusbcfg_data_t usbcfg;
28613 + usbcfg.d32 = DWC_READ_REG32(&core_if->core_global_regs->gusbcfg);
28614 + return usbcfg.b.srpcap;
28615 +}
28616 +
28617 +void dwc_otg_set_srpcapable(dwc_otg_core_if_t * core_if, uint32_t val)
28618 +{
28619 + gusbcfg_data_t usbcfg;
28620 + usbcfg.d32 = DWC_READ_REG32(&core_if->core_global_regs->gusbcfg);
28621 + usbcfg.b.srpcap = val;
28622 + DWC_WRITE_REG32(&core_if->core_global_regs->gusbcfg, usbcfg.d32);
28623 +}
28624 +
28625 +uint32_t dwc_otg_get_devspeed(dwc_otg_core_if_t * core_if)
28626 +{
28627 + dcfg_data_t dcfg;
28628 + /* originally: dcfg.d32 = DWC_READ_REG32(&core_if->dev_if->dev_global_regs->dcfg); */
28629 +
28630 + dcfg.d32 = -1; //GRAYG
28631 + DWC_DEBUGPL(DBG_CILV, "%s - core_if(%p)\n", __func__, core_if);
28632 + if (NULL == core_if)
28633 + DWC_ERROR("reg request with NULL core_if\n");
28634 + DWC_DEBUGPL(DBG_CILV, "%s - core_if(%p)->dev_if(%p)\n", __func__,
28635 + core_if, core_if->dev_if);
28636 + if (NULL == core_if->dev_if)
28637 + DWC_ERROR("reg request with NULL dev_if\n");
28638 + DWC_DEBUGPL(DBG_CILV, "%s - core_if(%p)->dev_if(%p)->"
28639 + "dev_global_regs(%p)\n", __func__,
28640 + core_if, core_if->dev_if,
28641 + core_if->dev_if->dev_global_regs);
28642 + if (NULL == core_if->dev_if->dev_global_regs)
28643 + DWC_ERROR("reg request with NULL dev_global_regs\n");
28644 + else {
28645 + DWC_DEBUGPL(DBG_CILV, "%s - &core_if(%p)->dev_if(%p)->"
28646 + "dev_global_regs(%p)->dcfg = %p\n", __func__,
28647 + core_if, core_if->dev_if,
28648 + core_if->dev_if->dev_global_regs,
28649 + &core_if->dev_if->dev_global_regs->dcfg);
28650 + dcfg.d32 = DWC_READ_REG32(&core_if->dev_if->dev_global_regs->dcfg);
28651 + }
28652 + return dcfg.b.devspd;
28653 +}
28654 +
28655 +void dwc_otg_set_devspeed(dwc_otg_core_if_t * core_if, uint32_t val)
28656 +{
28657 + dcfg_data_t dcfg;
28658 + dcfg.d32 = DWC_READ_REG32(&core_if->dev_if->dev_global_regs->dcfg);
28659 + dcfg.b.devspd = val;
28660 + DWC_WRITE_REG32(&core_if->dev_if->dev_global_regs->dcfg, dcfg.d32);
28661 +}
28662 +
28663 +uint32_t dwc_otg_get_busconnected(dwc_otg_core_if_t * core_if)
28664 +{
28665 + hprt0_data_t hprt0;
28666 + hprt0.d32 = DWC_READ_REG32(core_if->host_if->hprt0);
28667 + return hprt0.b.prtconnsts;
28668 +}
28669 +
28670 +uint32_t dwc_otg_get_enumspeed(dwc_otg_core_if_t * core_if)
28671 +{
28672 + dsts_data_t dsts;
28673 + dsts.d32 = DWC_READ_REG32(&core_if->dev_if->dev_global_regs->dsts);
28674 + return dsts.b.enumspd;
28675 +}
28676 +
28677 +uint32_t dwc_otg_get_prtpower(dwc_otg_core_if_t * core_if)
28678 +{
28679 + hprt0_data_t hprt0;
28680 + hprt0.d32 = DWC_READ_REG32(core_if->host_if->hprt0);
28681 + return hprt0.b.prtpwr;
28682 +
28683 +}
28684 +
28685 +uint32_t dwc_otg_get_core_state(dwc_otg_core_if_t * core_if)
28686 +{
28687 + return core_if->hibernation_suspend;
28688 +}
28689 +
28690 +void dwc_otg_set_prtpower(dwc_otg_core_if_t * core_if, uint32_t val)
28691 +{
28692 + hprt0_data_t hprt0;
28693 + hprt0.d32 = dwc_otg_read_hprt0(core_if);
28694 + hprt0.b.prtpwr = val;
28695 + DWC_WRITE_REG32(core_if->host_if->hprt0, hprt0.d32);
28696 +}
28697 +
28698 +uint32_t dwc_otg_get_prtsuspend(dwc_otg_core_if_t * core_if)
28699 +{
28700 + hprt0_data_t hprt0;
28701 + hprt0.d32 = DWC_READ_REG32(core_if->host_if->hprt0);
28702 + return hprt0.b.prtsusp;
28703 +
28704 +}
28705 +
28706 +void dwc_otg_set_prtsuspend(dwc_otg_core_if_t * core_if, uint32_t val)
28707 +{
28708 + hprt0_data_t hprt0;
28709 + hprt0.d32 = dwc_otg_read_hprt0(core_if);
28710 + hprt0.b.prtsusp = val;
28711 + DWC_WRITE_REG32(core_if->host_if->hprt0, hprt0.d32);
28712 +}
28713 +
28714 +uint32_t dwc_otg_get_fr_interval(dwc_otg_core_if_t * core_if)
28715 +{
28716 + hfir_data_t hfir;
28717 + hfir.d32 = DWC_READ_REG32(&core_if->host_if->host_global_regs->hfir);
28718 + return hfir.b.frint;
28719 +
28720 +}
28721 +
28722 +void dwc_otg_set_fr_interval(dwc_otg_core_if_t * core_if, uint32_t val)
28723 +{
28724 + hfir_data_t hfir;
28725 + uint32_t fram_int;
28726 + fram_int = calc_frame_interval(core_if);
28727 + hfir.d32 = DWC_READ_REG32(&core_if->host_if->host_global_regs->hfir);
28728 + if (!core_if->core_params->reload_ctl) {
28729 + 		DWC_WARN("\nCannot reload HFIR register. HFIR.HFIRRldCtrl bit is"
28730 + 			" not set to 1.\nShould load driver with reload_ctl=1"
28731 + " module parameter\n");
28732 + return;
28733 + }
28734 + switch (fram_int) {
28735 + case 3750:
28736 + if ((val < 3350) || (val > 4150)) {
28737 + DWC_WARN("HFIR interval for HS core and 30 MHz"
28738 + 				" clock freq should be from 3350 to 4150\n");
28739 + return;
28740 + }
28741 + break;
28742 + case 30000:
28743 + if ((val < 26820) || (val > 33180)) {
28744 + DWC_WARN("HFIR interval for FS/LS core and 30 MHz"
28745 + 				" clock freq should be from 26820 to 33180\n");
28746 + return;
28747 + }
28748 + break;
28749 + case 6000:
28750 + if ((val < 5360) || (val > 6640)) {
28751 + DWC_WARN("HFIR interval for HS core and 48 MHz"
28752 + 				" clock freq should be from 5360 to 6640\n");
28753 + return;
28754 + }
28755 + break;
28756 + case 48000:
28757 + if ((val < 42912) || (val > 53088)) {
28758 + DWC_WARN("HFIR interval for FS/LS core and 48 MHz"
28759 + 				" clock freq should be from 42912 to 53088\n");
28760 + return;
28761 + }
28762 + break;
28763 + case 7500:
28764 + if ((val < 6700) || (val > 8300)) {
28765 + DWC_WARN("HFIR interval for HS core and 60 MHz"
28766 + 				" clock freq should be from 6700 to 8300\n");
28767 + return;
28768 + }
28769 + break;
28770 + case 60000:
28771 + if ((val < 53640) || (val > 65536)) {
28772 + DWC_WARN("HFIR interval for FS/LS core and 60 MHz"
28773 + 				" clock freq should be from 53640 to 65536\n");
28774 + return;
28775 + }
28776 + break;
28777 + default:
28778 + DWC_WARN("Unknown frame interval\n");
28779 + return;
28780 + break;
28781 +
28782 + }
28783 + hfir.b.frint = val;
28784 + DWC_WRITE_REG32(&core_if->host_if->host_global_regs->hfir, hfir.d32);
28785 +}
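+/*
+ * The nominal intervals matched above are simply the PHY clock rate
+ * multiplied by the (micro)frame duration, since HFIR.FrInt is expressed in
+ * PHY clock cycles: for example, a HS core on a 30 MHz PHY clock expects
+ * 30,000,000 * 125 us = 3750 cycles per microframe, while a FS/LS core on
+ * the same clock expects 30,000,000 * 1 ms = 30000 cycles per frame; the
+ * accepted windows are roughly +/-10% around those nominal values.
+ */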
28786 +
28787 +uint32_t dwc_otg_get_mode_ch_tim(dwc_otg_core_if_t * core_if)
28788 +{
28789 + hcfg_data_t hcfg;
28790 + hcfg.d32 = DWC_READ_REG32(&core_if->host_if->host_global_regs->hcfg);
28791 + return hcfg.b.modechtimen;
28792 +
28793 +}
28794 +
28795 +void dwc_otg_set_mode_ch_tim(dwc_otg_core_if_t * core_if, uint32_t val)
28796 +{
28797 + hcfg_data_t hcfg;
28798 + hcfg.d32 = DWC_READ_REG32(&core_if->host_if->host_global_regs->hcfg);
28799 + hcfg.b.modechtimen = val;
28800 + DWC_WRITE_REG32(&core_if->host_if->host_global_regs->hcfg, hcfg.d32);
28801 +}
28802 +
28803 +void dwc_otg_set_prtresume(dwc_otg_core_if_t * core_if, uint32_t val)
28804 +{
28805 + hprt0_data_t hprt0;
28806 + hprt0.d32 = dwc_otg_read_hprt0(core_if);
28807 + hprt0.b.prtres = val;
28808 + DWC_WRITE_REG32(core_if->host_if->hprt0, hprt0.d32);
28809 +}
28810 +
28811 +uint32_t dwc_otg_get_remotewakesig(dwc_otg_core_if_t * core_if)
28812 +{
28813 + dctl_data_t dctl;
28814 + dctl.d32 = DWC_READ_REG32(&core_if->dev_if->dev_global_regs->dctl);
28815 + return dctl.b.rmtwkupsig;
28816 +}
28817 +
28818 +uint32_t dwc_otg_get_lpm_portsleepstatus(dwc_otg_core_if_t * core_if)
28819 +{
28820 + glpmcfg_data_t lpmcfg;
28821 + lpmcfg.d32 = DWC_READ_REG32(&core_if->core_global_regs->glpmcfg);
28822 +
28823 + DWC_ASSERT(!
28824 + ((core_if->lx_state == DWC_OTG_L1) ^ lpmcfg.b.prt_sleep_sts),
28825 + 		   "lx_state = %d, lpmcfg.prt_sleep_sts = %d\n",
28826 + core_if->lx_state, lpmcfg.b.prt_sleep_sts);
28827 +
28828 + return lpmcfg.b.prt_sleep_sts;
28829 +}
28830 +
28831 +uint32_t dwc_otg_get_lpm_remotewakeenabled(dwc_otg_core_if_t * core_if)
28832 +{
28833 + glpmcfg_data_t lpmcfg;
28834 + lpmcfg.d32 = DWC_READ_REG32(&core_if->core_global_regs->glpmcfg);
28835 + return lpmcfg.b.rem_wkup_en;
28836 +}
28837 +
28838 +uint32_t dwc_otg_get_lpmresponse(dwc_otg_core_if_t * core_if)
28839 +{
28840 + glpmcfg_data_t lpmcfg;
28841 + lpmcfg.d32 = DWC_READ_REG32(&core_if->core_global_regs->glpmcfg);
28842 + return lpmcfg.b.appl_resp;
28843 +}
28844 +
28845 +void dwc_otg_set_lpmresponse(dwc_otg_core_if_t * core_if, uint32_t val)
28846 +{
28847 + glpmcfg_data_t lpmcfg;
28848 + lpmcfg.d32 = DWC_READ_REG32(&core_if->core_global_regs->glpmcfg);
28849 + lpmcfg.b.appl_resp = val;
28850 + DWC_WRITE_REG32(&core_if->core_global_regs->glpmcfg, lpmcfg.d32);
28851 +}
28852 +
28853 +uint32_t dwc_otg_get_hsic_connect(dwc_otg_core_if_t * core_if)
28854 +{
28855 + glpmcfg_data_t lpmcfg;
28856 + lpmcfg.d32 = DWC_READ_REG32(&core_if->core_global_regs->glpmcfg);
28857 + return lpmcfg.b.hsic_connect;
28858 +}
28859 +
28860 +void dwc_otg_set_hsic_connect(dwc_otg_core_if_t * core_if, uint32_t val)
28861 +{
28862 + glpmcfg_data_t lpmcfg;
28863 + lpmcfg.d32 = DWC_READ_REG32(&core_if->core_global_regs->glpmcfg);
28864 + lpmcfg.b.hsic_connect = val;
28865 + DWC_WRITE_REG32(&core_if->core_global_regs->glpmcfg, lpmcfg.d32);
28866 +}
28867 +
28868 +uint32_t dwc_otg_get_inv_sel_hsic(dwc_otg_core_if_t * core_if)
28869 +{
28870 + glpmcfg_data_t lpmcfg;
28871 + lpmcfg.d32 = DWC_READ_REG32(&core_if->core_global_regs->glpmcfg);
28872 + return lpmcfg.b.inv_sel_hsic;
28873 +
28874 +}
28875 +
28876 +void dwc_otg_set_inv_sel_hsic(dwc_otg_core_if_t * core_if, uint32_t val)
28877 +{
28878 + glpmcfg_data_t lpmcfg;
28879 + lpmcfg.d32 = DWC_READ_REG32(&core_if->core_global_regs->glpmcfg);
28880 + lpmcfg.b.inv_sel_hsic = val;
28881 + DWC_WRITE_REG32(&core_if->core_global_regs->glpmcfg, lpmcfg.d32);
28882 +}
28883 +
28884 +uint32_t dwc_otg_get_gotgctl(dwc_otg_core_if_t * core_if)
28885 +{
28886 + return DWC_READ_REG32(&core_if->core_global_regs->gotgctl);
28887 +}
28888 +
28889 +void dwc_otg_set_gotgctl(dwc_otg_core_if_t * core_if, uint32_t val)
28890 +{
28891 + DWC_WRITE_REG32(&core_if->core_global_regs->gotgctl, val);
28892 +}
28893 +
28894 +uint32_t dwc_otg_get_gusbcfg(dwc_otg_core_if_t * core_if)
28895 +{
28896 + return DWC_READ_REG32(&core_if->core_global_regs->gusbcfg);
28897 +}
28898 +
28899 +void dwc_otg_set_gusbcfg(dwc_otg_core_if_t * core_if, uint32_t val)
28900 +{
28901 + DWC_WRITE_REG32(&core_if->core_global_regs->gusbcfg, val);
28902 +}
28903 +
28904 +uint32_t dwc_otg_get_grxfsiz(dwc_otg_core_if_t * core_if)
28905 +{
28906 + return DWC_READ_REG32(&core_if->core_global_regs->grxfsiz);
28907 +}
28908 +
28909 +void dwc_otg_set_grxfsiz(dwc_otg_core_if_t * core_if, uint32_t val)
28910 +{
28911 + DWC_WRITE_REG32(&core_if->core_global_regs->grxfsiz, val);
28912 +}
28913 +
28914 +uint32_t dwc_otg_get_gnptxfsiz(dwc_otg_core_if_t * core_if)
28915 +{
28916 + return DWC_READ_REG32(&core_if->core_global_regs->gnptxfsiz);
28917 +}
28918 +
28919 +void dwc_otg_set_gnptxfsiz(dwc_otg_core_if_t * core_if, uint32_t val)
28920 +{
28921 + DWC_WRITE_REG32(&core_if->core_global_regs->gnptxfsiz, val);
28922 +}
28923 +
28924 +uint32_t dwc_otg_get_gpvndctl(dwc_otg_core_if_t * core_if)
28925 +{
28926 + return DWC_READ_REG32(&core_if->core_global_regs->gpvndctl);
28927 +}
28928 +
28929 +void dwc_otg_set_gpvndctl(dwc_otg_core_if_t * core_if, uint32_t val)
28930 +{
28931 + DWC_WRITE_REG32(&core_if->core_global_regs->gpvndctl, val);
28932 +}
28933 +
28934 +uint32_t dwc_otg_get_ggpio(dwc_otg_core_if_t * core_if)
28935 +{
28936 + return DWC_READ_REG32(&core_if->core_global_regs->ggpio);
28937 +}
28938 +
28939 +void dwc_otg_set_ggpio(dwc_otg_core_if_t * core_if, uint32_t val)
28940 +{
28941 + DWC_WRITE_REG32(&core_if->core_global_regs->ggpio, val);
28942 +}
28943 +
28944 +uint32_t dwc_otg_get_hprt0(dwc_otg_core_if_t * core_if)
28945 +{
28946 + return DWC_READ_REG32(core_if->host_if->hprt0);
28947 +
28948 +}
28949 +
28950 +void dwc_otg_set_hprt0(dwc_otg_core_if_t * core_if, uint32_t val)
28951 +{
28952 + DWC_WRITE_REG32(core_if->host_if->hprt0, val);
28953 +}
28954 +
28955 +uint32_t dwc_otg_get_guid(dwc_otg_core_if_t * core_if)
28956 +{
28957 + return DWC_READ_REG32(&core_if->core_global_regs->guid);
28958 +}
28959 +
28960 +void dwc_otg_set_guid(dwc_otg_core_if_t * core_if, uint32_t val)
28961 +{
28962 + DWC_WRITE_REG32(&core_if->core_global_regs->guid, val);
28963 +}
28964 +
28965 +uint32_t dwc_otg_get_hptxfsiz(dwc_otg_core_if_t * core_if)
28966 +{
28967 + return DWC_READ_REG32(&core_if->core_global_regs->hptxfsiz);
28968 +}
28969 +
28970 +uint16_t dwc_otg_get_otg_version(dwc_otg_core_if_t * core_if)
28971 +{
28972 + return ((core_if->otg_ver == 1) ? (uint16_t)0x0200 : (uint16_t)0x0103);
28973 +}
28974 +
28975 +/**
28976 + * Start the SRP timer to detect when the SRP does not complete within
28977 + * 6 seconds.
28978 + *
28979 + * @param core_if the pointer to core_if structure.
28980 + */
28981 +void dwc_otg_pcd_start_srp_timer(dwc_otg_core_if_t * core_if)
28982 +{
28983 + core_if->srp_timer_started = 1;
28984 + DWC_TIMER_SCHEDULE(core_if->srp_timer, 6000 /* 6 secs */ );
28985 +}
28986 +
28987 +void dwc_otg_initiate_srp(dwc_otg_core_if_t * core_if)
28988 +{
28989 + uint32_t *addr = (uint32_t *) & (core_if->core_global_regs->gotgctl);
28990 + gotgctl_data_t mem;
28991 + gotgctl_data_t val;
28992 +
28993 + val.d32 = DWC_READ_REG32(addr);
28994 + if (val.b.sesreq) {
28995 + DWC_ERROR("Session Request Already active!\n");
28996 + return;
28997 + }
28998 +
28999 + 	DWC_INFO("Session Request Initiated\n");	//NOTICE
29000 + mem.d32 = DWC_READ_REG32(addr);
29001 + mem.b.sesreq = 1;
29002 + DWC_WRITE_REG32(addr, mem.d32);
29003 +
29004 + /* Start the SRP timer */
29005 + dwc_otg_pcd_start_srp_timer(core_if);
29006 + return;
29007 +}
29008 --- /dev/null
29009 +++ b/drivers/usb/host/dwc_otg/dwc_otg_cil.h
29010 @@ -0,0 +1,1464 @@
29011 +/* ==========================================================================
29012 + * $File: //dwh/usb_iip/dev/software/otg/linux/drivers/dwc_otg_cil.h $
29013 + * $Revision: #123 $
29014 + * $Date: 2012/08/10 $
29015 + * $Change: 2047372 $
29016 + *
29017 + * Synopsys HS OTG Linux Software Driver and documentation (hereinafter,
29018 + * "Software") is an Unsupported proprietary work of Synopsys, Inc. unless
29019 + * otherwise expressly agreed to in writing between Synopsys and you.
29020 + *
29021 + * The Software IS NOT an item of Licensed Software or Licensed Product under
29022 + * any End User Software License Agreement or Agreement for Licensed Product
29023 + * with Synopsys or any supplement thereto. You are permitted to use and
29024 + * redistribute this Software in source and binary forms, with or without
29025 + * modification, provided that redistributions of source code must retain this
29026 + * notice. You may not view, use, disclose, copy or distribute this file or
29027 + * any information contained herein except pursuant to this license grant from
29028 + * Synopsys. If you do not agree with this notice, including the disclaimer
29029 + * below, then you are not authorized to use the Software.
29030 + *
29031 + * THIS SOFTWARE IS BEING DISTRIBUTED BY SYNOPSYS SOLELY ON AN "AS IS" BASIS
29032 + * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
29033 + * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
29034 + * ARE HEREBY DISCLAIMED. IN NO EVENT SHALL SYNOPSYS BE LIABLE FOR ANY DIRECT,
29035 + * INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
29036 + * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
29037 + * SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
29038 + * CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
29039 + * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
29040 + * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH
29041 + * DAMAGE.
29042 + * ========================================================================== */
29043 +
29044 +#if !defined(__DWC_CIL_H__)
29045 +#define __DWC_CIL_H__
29046 +
29047 +#include "dwc_list.h"
29048 +#include "dwc_otg_dbg.h"
29049 +#include "dwc_otg_regs.h"
29050 +
29051 +#include "dwc_otg_core_if.h"
29052 +#include "dwc_otg_adp.h"
29053 +
29054 +/**
29055 + * @file
29056 + * This file contains the interface to the Core Interface Layer.
29057 + */
29058 +
29059 +#ifdef DWC_UTE_CFI
29060 +
29061 +#define MAX_DMA_DESCS_PER_EP 256
29062 +
29063 +/**
29064 + * Enumeration for the data buffer mode
29065 + */
29066 +typedef enum _data_buffer_mode {
29067 + BM_STANDARD = 0, /* data buffer is in normal mode */
29068 + BM_SG = 1, /* data buffer uses the scatter/gather mode */
29069 + BM_CONCAT = 2, /* data buffer uses the concatenation mode */
29070 + BM_CIRCULAR = 3, /* data buffer uses the circular DMA mode */
29071 + BM_ALIGN = 4 /* data buffer is in buffer alignment mode */
29072 +} data_buffer_mode_e;
29073 +#endif //DWC_UTE_CFI
29074 +
29075 +/** Macros defined for DWC OTG HW Release version */
29076 +
29077 +#define OTG_CORE_REV_2_60a 0x4F54260A
29078 +#define OTG_CORE_REV_2_71a 0x4F54271A
29079 +#define OTG_CORE_REV_2_72a 0x4F54272A
29080 +#define OTG_CORE_REV_2_80a 0x4F54280A
29081 +#define OTG_CORE_REV_2_81a 0x4F54281A
29082 +#define OTG_CORE_REV_2_90a 0x4F54290A
29083 +#define OTG_CORE_REV_2_91a 0x4F54291A
29084 +#define OTG_CORE_REV_2_92a 0x4F54292A
29085 +#define OTG_CORE_REV_2_93a 0x4F54293A
29086 +#define OTG_CORE_REV_2_94a 0x4F54294A
29087 +#define OTG_CORE_REV_3_00a 0x4F54300A
29088 +
29089 +/**
29090 + * Information for each ISOC packet.
29091 + */
29092 +typedef struct iso_pkt_info {
29093 + uint32_t offset;
29094 + uint32_t length;
29095 + int32_t status;
29096 +} iso_pkt_info_t;
29097 +
29098 +/**
29099 + * The <code>dwc_ep</code> structure represents the state of a single
29100 + * endpoint when acting in device mode. It contains the data items
29101 + * needed for an endpoint to be activated and transfer packets.
29102 + */
29103 +typedef struct dwc_ep {
29104 + /** EP number used for register address lookup */
29105 + uint8_t num;
29106 + /** EP direction 0 = OUT */
29107 + unsigned is_in:1;
29108 + /** EP active. */
29109 + unsigned active:1;
29110 +
29111 + /**
29112 + 	 * Periodic Tx FIFO # for IN EPs. For INTR EPs set to 0 to use the non-periodic
29113 + 	 * Tx FIFO. If dedicated Tx FIFOs are enabled, this is the Tx FIFO # for IN EPs. */
29114 + unsigned tx_fifo_num:4;
29115 + /** EP type: 0 - Control, 1 - ISOC, 2 - BULK, 3 - INTR */
29116 + unsigned type:2;
29117 +#define DWC_OTG_EP_TYPE_CONTROL 0
29118 +#define DWC_OTG_EP_TYPE_ISOC 1
29119 +#define DWC_OTG_EP_TYPE_BULK 2
29120 +#define DWC_OTG_EP_TYPE_INTR 3
29121 +
29122 + /** DATA start PID for INTR and BULK EP */
29123 + unsigned data_pid_start:1;
29124 + /** Frame (even/odd) for ISOC EP */
29125 + unsigned even_odd_frame:1;
29126 + /** Max Packet bytes */
29127 + unsigned maxpacket:11;
29128 +
29129 + /** Max Transfer size */
29130 + uint32_t maxxfer;
29131 +
29132 + /** @name Transfer state */
29133 + /** @{ */
29134 +
29135 + /**
29136 + * Pointer to the beginning of the transfer buffer -- do not modify
29137 + * during transfer.
29138 + */
29139 +
29140 + dwc_dma_t dma_addr;
29141 +
29142 + dwc_dma_t dma_desc_addr;
29143 + dwc_otg_dev_dma_desc_t *desc_addr;
29144 +
29145 + uint8_t *start_xfer_buff;
29146 + /** pointer to the transfer buffer */
29147 + uint8_t *xfer_buff;
29148 + /** Number of bytes to transfer */
29149 + unsigned xfer_len:19;
29150 + /** Number of bytes transferred. */
29151 + unsigned xfer_count:19;
29152 + /** Sent ZLP */
29153 + unsigned sent_zlp:1;
29154 + /** Total len for control transfer */
29155 + unsigned total_len:19;
29156 +
29157 + /** stall clear flag */
29158 + unsigned stall_clear_flag:1;
29159 +
29160 + /** SETUP pkt cnt rollover flag for EP0 out*/
29161 + unsigned stp_rollover;
29162 +
29163 +#ifdef DWC_UTE_CFI
29164 + /* The buffer mode */
29165 + data_buffer_mode_e buff_mode;
29166 +
29167 + /* The chain of DMA descriptors.
29168 + * MAX_DMA_DESCS_PER_EP will be allocated for each active EP.
29169 + */
29170 + dwc_otg_dma_desc_t *descs;
29171 +
29172 + /* The DMA address of the descriptors chain start */
29173 + dma_addr_t descs_dma_addr;
29174 + /** This variable stores the length of the last enqueued request */
29175 + uint32_t cfi_req_len;
29176 +#endif //DWC_UTE_CFI
29177 +
29178 +/** Max DMA Descriptor count for any EP */
29179 +#define MAX_DMA_DESC_CNT 256
29180 + /** Allocated DMA Desc count */
29181 + uint32_t desc_cnt;
29182 +
29183 + /** bInterval */
29184 + uint32_t bInterval;
29185 + /** Next frame num to setup next ISOC transfer */
29186 + uint32_t frame_num;
29187 + /** Indicates SOF number overrun in DSTS */
29188 + uint8_t frm_overrun;
29189 +
29190 +#ifdef DWC_UTE_PER_IO
29191 + 	/** Next frame number for which a DMA descriptor will be set up */
29192 + uint32_t xiso_frame_num;
29193 + /** bInterval */
29194 + uint32_t xiso_bInterval;
29195 + /** Count of currently active transfers - shall be either 0 or 1 */
29196 + int xiso_active_xfers;
29197 + int xiso_queued_xfers;
29198 +#endif
29199 +#ifdef DWC_EN_ISOC
29200 + /**
29201 + * Variables specific for ISOC EPs
29202 + *
29203 + */
29204 + /** DMA addresses of ISOC buffers */
29205 + dwc_dma_t dma_addr0;
29206 + dwc_dma_t dma_addr1;
29207 +
29208 + dwc_dma_t iso_dma_desc_addr;
29209 + dwc_otg_dev_dma_desc_t *iso_desc_addr;
29210 +
29211 + /** pointer to the transfer buffers */
29212 + uint8_t *xfer_buff0;
29213 + uint8_t *xfer_buff1;
29214 +
29215 + 	/** Number of the ISOC buffer currently being processed */
29216 + uint32_t proc_buf_num;
29217 + /** Interval of ISOC Buffer processing */
29218 + uint32_t buf_proc_intrvl;
29219 + /** Data size for regular frame */
29220 + uint32_t data_per_frame;
29221 +
29222 + /* todo - pattern data support is to be implemented in the future */
29223 + /** Data size for pattern frame */
29224 + uint32_t data_pattern_frame;
29225 + /** Frame number of pattern data */
29226 + uint32_t sync_frame;
29227 +
29228 + /** bInterval */
29229 + uint32_t bInterval;
29230 + /** ISO Packet number per frame */
29231 + uint32_t pkt_per_frm;
29232 + 	/** Next frame number for which a DMA descriptor will be set up */
29233 + uint32_t next_frame;
29234 + /** Number of packets per buffer processing */
29235 + uint32_t pkt_cnt;
29236 + /** Info for all isoc packets */
29237 + iso_pkt_info_t *pkt_info;
29238 + /** current pkt number */
29239 + uint32_t cur_pkt;
29240 + 	/** current packet buffer address */
29241 + uint8_t *cur_pkt_addr;
29242 + 	/** current packet DMA address */
29243 + uint32_t cur_pkt_dma_addr;
29244 +#endif /* DWC_EN_ISOC */
29245 +
29246 +/** @} */
29247 +} dwc_ep_t;
29248 +
29249 +/*
29250 + * Reasons for halting a host channel.
29251 + */
29252 +typedef enum dwc_otg_halt_status {
29253 + DWC_OTG_HC_XFER_NO_HALT_STATUS,
29254 + DWC_OTG_HC_XFER_COMPLETE,
29255 + DWC_OTG_HC_XFER_URB_COMPLETE,
29256 + DWC_OTG_HC_XFER_ACK,
29257 + DWC_OTG_HC_XFER_NAK,
29258 + DWC_OTG_HC_XFER_NYET,
29259 + DWC_OTG_HC_XFER_STALL,
29260 + DWC_OTG_HC_XFER_XACT_ERR,
29261 + DWC_OTG_HC_XFER_FRAME_OVERRUN,
29262 + DWC_OTG_HC_XFER_BABBLE_ERR,
29263 + DWC_OTG_HC_XFER_DATA_TOGGLE_ERR,
29264 + DWC_OTG_HC_XFER_AHB_ERR,
29265 + DWC_OTG_HC_XFER_PERIODIC_INCOMPLETE,
29266 + DWC_OTG_HC_XFER_URB_DEQUEUE
29267 +} dwc_otg_halt_status_e;
29268 +
29269 +/**
29270 + * Host channel descriptor. This structure represents the state of a single
29271 + * host channel when acting in host mode. It contains the data items needed to
29272 + * transfer packets to an endpoint via a host channel.
29273 + */
29274 +typedef struct dwc_hc {
29275 + /** Host channel number used for register address lookup */
29276 + uint8_t hc_num;
29277 +
29278 + /** Device to access */
29279 + unsigned dev_addr:7;
29280 +
29281 + /** EP to access */
29282 + unsigned ep_num:4;
29283 +
29284 + /** EP direction. 0: OUT, 1: IN */
29285 + unsigned ep_is_in:1;
29286 +
29287 + /**
29288 + * EP speed.
29289 + * One of the following values:
29290 + * - DWC_OTG_EP_SPEED_LOW
29291 + * - DWC_OTG_EP_SPEED_FULL
29292 + * - DWC_OTG_EP_SPEED_HIGH
29293 + */
29294 + unsigned speed:2;
29295 +#define DWC_OTG_EP_SPEED_LOW 0
29296 +#define DWC_OTG_EP_SPEED_FULL 1
29297 +#define DWC_OTG_EP_SPEED_HIGH 2
29298 +
29299 + /**
29300 + * Endpoint type.
29301 + * One of the following values:
29302 + * - DWC_OTG_EP_TYPE_CONTROL: 0
29303 + * - DWC_OTG_EP_TYPE_ISOC: 1
29304 + * - DWC_OTG_EP_TYPE_BULK: 2
29305 + * - DWC_OTG_EP_TYPE_INTR: 3
29306 + */
29307 + unsigned ep_type:2;
29308 +
29309 + /** Max packet size in bytes */
29310 + unsigned max_packet:11;
29311 +
29312 + /**
29313 + * PID for initial transaction.
29314 + * 0: DATA0,<br>
29315 + * 1: DATA2,<br>
29316 + * 2: DATA1,<br>
29317 + * 3: MDATA (non-Control EP),
29318 + * SETUP (Control EP)
29319 + */
29320 + unsigned data_pid_start:2;
29321 +#define DWC_OTG_HC_PID_DATA0 0
29322 +#define DWC_OTG_HC_PID_DATA2 1
29323 +#define DWC_OTG_HC_PID_DATA1 2
29324 +#define DWC_OTG_HC_PID_MDATA 3
29325 +#define DWC_OTG_HC_PID_SETUP 3
29326 +
29327 + /** Number of periodic transactions per (micro)frame */
29328 + unsigned multi_count:2;
29329 +
29330 + /** @name Transfer State */
29331 + /** @{ */
29332 +
29333 + /** Pointer to the current transfer buffer position. */
29334 + uint8_t *xfer_buff;
29335 + /**
29336 + * In Buffer DMA mode this buffer will be used
29337 + * if xfer_buff is not DWORD aligned.
29338 + */
29339 + dwc_dma_t align_buff;
29340 + /** Total number of bytes to transfer. */
29341 + uint32_t xfer_len;
29342 + /** Number of bytes transferred so far. */
29343 + uint32_t xfer_count;
29344 + /** Packet count at start of transfer.*/
29345 + uint16_t start_pkt_count;
29346 +
29347 + /**
29348 + * Flag to indicate whether the transfer has been started. Set to 1 if
29349 + * it has been started, 0 otherwise.
29350 + */
29351 + uint8_t xfer_started;
29352 +
29353 + /**
29354 + * Set to 1 to indicate that a PING request should be issued on this
29355 + * channel. If 0, process normally.
29356 + */
29357 + uint8_t do_ping;
29358 +
29359 + /**
29360 + * Set to 1 to indicate that the error count for this transaction is
29361 + * non-zero. Set to 0 if the error count is 0.
29362 + */
29363 + uint8_t error_state;
29364 +
29365 + /**
29366 + * Set to 1 to indicate that this channel should be halted the next
29367 + * time a request is queued for the channel. This is necessary in
29368 + * slave mode if no request queue space is available when an attempt
29369 + * is made to halt the channel.
29370 + */
29371 + uint8_t halt_on_queue;
29372 +
29373 + /**
29374 + * Set to 1 if the host channel has been halted, but the core is not
29375 + * finished flushing queued requests. Otherwise 0.
29376 + */
29377 + uint8_t halt_pending;
29378 +
29379 + /**
29380 + * Reason for halting the host channel.
29381 + */
29382 + dwc_otg_halt_status_e halt_status;
29383 +
29384 + /*
29385 + * Split settings for the host channel
29386 + */
29387 + uint8_t do_split; /**< Enable split for the channel */
29388 + uint8_t complete_split; /**< Enable complete split */
29389 + uint8_t hub_addr; /**< Address of high speed hub */
29390 +
29391 + uint8_t port_addr; /**< Port of the low/full speed device */
29392 + /** Split transaction position
29393 + * One of the following values:
29394 + * - DWC_HCSPLIT_XACTPOS_MID
29395 + * - DWC_HCSPLIT_XACTPOS_BEGIN
29396 + * - DWC_HCSPLIT_XACTPOS_END
29397 + * - DWC_HCSPLIT_XACTPOS_ALL */
29398 + uint8_t xact_pos;
29399 +
29400 + /** Set when the host channel does a short read. */
29401 + uint8_t short_read;
29402 +
29403 + /**
29404 + * Number of requests issued for this channel since it was assigned to
29405 + * the current transfer (not counting PINGs).
29406 + */
29407 + uint8_t requests;
29408 +
29409 + /**
29410 + * Queue Head for the transfer being processed by this channel.
29411 + */
29412 + struct dwc_otg_qh *qh;
29413 +
29414 + /** @} */
29415 +
29416 + /** Entry in list of host channels. */
29417 + DWC_CIRCLEQ_ENTRY(dwc_hc) hc_list_entry;
29418 +
29419 + /** @name Descriptor DMA support */
29420 + /** @{ */
29421 +
29422 + /** Number of Transfer Descriptors */
29423 + uint16_t ntd;
29424 +
29425 + /** Descriptor List DMA address */
29426 + dwc_dma_t desc_list_addr;
29427 +
29428 + /** Scheduling micro-frame bitmap. */
29429 + uint8_t schinfo;
29430 +
29431 + /** @} */
29432 +} dwc_hc_t;
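+/*
+ * Illustrative only: for the SETUP stage of a control transfer to a low or
+ * full speed device behind a high speed hub (so split transactions are
+ * required), the HCD would populate a channel roughly as follows, where hc,
+ * hub_address and hub_port are hypothetical names:
+ *
+ *	hc->ep_type = DWC_OTG_EP_TYPE_CONTROL;
+ *	hc->data_pid_start = DWC_OTG_HC_PID_SETUP;
+ *	hc->do_split = 1;
+ *	hc->complete_split = 0;
+ *	hc->xact_pos = DWC_HCSPLIT_XACTPOS_ALL;
+ *	hc->hub_addr = hub_address;
+ *	hc->port_addr = hub_port;
+ */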
29433 +
29434 +/**
29435 + * The following parameters may be specified when starting the module. These
29436 + * parameters define how the DWC_otg controller should be configured.
29437 + */
29438 +typedef struct dwc_otg_core_params {
29439 + int32_t opt;
29440 +
29441 + /**
29442 + * Specifies the OTG capabilities. The driver will automatically
29443 + * detect the value for this parameter if none is specified.
29444 + * 0 - HNP and SRP capable (default)
29445 + * 1 - SRP Only capable
29446 + * 2 - No HNP/SRP capable
29447 + */
29448 + int32_t otg_cap;
29449 +
29450 + /**
29451 + * Specifies whether to use slave or DMA mode for accessing the data
29452 + * FIFOs. The driver will automatically detect the value for this
29453 + * parameter if none is specified.
29454 + * 0 - Slave
29455 + * 1 - DMA (default, if available)
29456 + */
29457 + int32_t dma_enable;
29458 +
29459 + /**
29460 + * When DMA mode is enabled specifies whether to use address DMA or DMA
29461 + * Descriptor mode for accessing the data FIFOs in device mode. The driver
29462 + * will automatically detect the value for this if none is specified.
29463 + * 0 - address DMA
29464 + 	 * 1 - DMA Descriptor (default, if available)
29465 + */
29466 + int32_t dma_desc_enable;
29467 + /** The DMA Burst size (applicable only for External DMA
29468 + * Mode). 1, 4, 8 16, 32, 64, 128, 256 (default 32)
29469 + 	 *  Mode). 1, 4, 8, 16, 32, 64, 128, 256 (default 32)
29470 + int32_t dma_burst_size; /* Translate this to GAHBCFG values */
29471 +
29472 + /**
29473 + * Specifies the maximum speed of operation in host and device mode.
29474 + * The actual speed depends on the speed of the attached device and
29475 + 	 * the value of phy_type.
29476 + 	 *
29477 + * 0 - High Speed (default)
29478 + * 1 - Full Speed
29479 + */
29480 + int32_t speed;
29481 + /** Specifies whether low power mode is supported when attached
29482 + * to a Full Speed or Low Speed device in host mode.
29483 + * 0 - Don't support low power mode (default)
29484 + * 1 - Support low power mode
29485 + */
29486 + int32_t host_support_fs_ls_low_power;
29487 +
29488 + /** Specifies the PHY clock rate in low power mode when connected to a
29489 + * Low Speed device in host mode. This parameter is applicable only if
29490 + * HOST_SUPPORT_FS_LS_LOW_POWER is enabled. If PHY_TYPE is set to FS
29491 + * then defaults to 6 MHZ otherwise 48 MHZ.
29492 + 	 * then this defaults to 6 MHz, otherwise 48 MHz.
29493 + * 0 - 48 MHz
29494 + * 1 - 6 MHz
29495 + */
29496 + int32_t host_ls_low_power_phy_clk;
29497 +
29498 + /**
29499 + * 0 - Use cC FIFO size parameters
29500 + * 1 - Allow dynamic FIFO sizing (default)
29501 + */
29502 + int32_t enable_dynamic_fifo;
29503 +
29504 + /** Total number of 4-byte words in the data FIFO memory. This
29505 + * memory includes the Rx FIFO, non-periodic Tx FIFO, and periodic
29506 + * Tx FIFOs.
29507 + * 32 to 32768 (default 8192)
29508 + * Note: The total FIFO memory depth in the FPGA configuration is 8192.
29509 + */
29510 + int32_t data_fifo_size;
29511 +
29512 + /** Number of 4-byte words in the Rx FIFO in device mode when dynamic
29513 + * FIFO sizing is enabled.
29514 + * 16 to 32768 (default 1064)
29515 + */
29516 + int32_t dev_rx_fifo_size;
29517 +
29518 + /** Number of 4-byte words in the non-periodic Tx FIFO in device mode
29519 + * when dynamic FIFO sizing is enabled.
29520 + * 16 to 32768 (default 1024)
29521 + */
29522 + int32_t dev_nperio_tx_fifo_size;
29523 +
29524 + /** Number of 4-byte words in each of the periodic Tx FIFOs in device
29525 + * mode when dynamic FIFO sizing is enabled.
29526 + * 4 to 768 (default 256)
29527 + */
29528 + uint32_t dev_perio_tx_fifo_size[MAX_PERIO_FIFOS];
29529 +
29530 + /** Number of 4-byte words in the Rx FIFO in host mode when dynamic
29531 + * FIFO sizing is enabled.
29532 + * 16 to 32768 (default 1024)
29533 + */
29534 + int32_t host_rx_fifo_size;
29535 +
29536 + /** Number of 4-byte words in the non-periodic Tx FIFO in host mode
29537 + * when Dynamic FIFO sizing is enabled in the core.
29538 + * 16 to 32768 (default 1024)
29539 + */
29540 + int32_t host_nperio_tx_fifo_size;
29541 +
29542 + /** Number of 4-byte words in the host periodic Tx FIFO when dynamic
29543 + * FIFO sizing is enabled.
29544 + * 16 to 32768 (default 1024)
29545 + */
29546 + int32_t host_perio_tx_fifo_size;
29547 +
29548 + /** The maximum transfer size supported in bytes.
29549 + * 2047 to 65,535 (default 65,535)
29550 + */
29551 + int32_t max_transfer_size;
29552 +
29553 + /** The maximum number of packets in a transfer.
29554 + * 15 to 511 (default 511)
29555 + */
29556 + int32_t max_packet_count;
29557 +
29558 + /** The number of host channel registers to use.
29559 + * 1 to 16 (default 12)
29560 + * Note: The FPGA configuration supports a maximum of 12 host channels.
29561 + */
29562 + int32_t host_channels;
29563 +
29564 + /** The number of endpoints in addition to EP0 available for device
29565 + * mode operations.
29566 + * 1 to 15 (default 6 IN and OUT)
29567 + * Note: The FPGA configuration supports a maximum of 6 IN and OUT
29568 + * endpoints in addition to EP0.
29569 + */
29570 + int32_t dev_endpoints;
29571 +
29572 + /**
29573 + * Specifies the type of PHY interface to use. By default, the driver
29574 + * will automatically detect the phy_type.
29575 + *
29576 + * 0 - Full Speed PHY
29577 + * 1 - UTMI+ (default)
29578 + * 2 - ULPI
29579 + */
29580 + int32_t phy_type;
29581 +
29582 + /**
29583 + * Specifies the UTMI+ Data Width. This parameter is
29584 + * applicable for a PHY_TYPE of UTMI+ or ULPI. (For a ULPI
29585 + * PHY_TYPE, this parameter indicates the data width between
29586 + * the MAC and the ULPI Wrapper.) Also, this parameter is
29587 + * applicable only if the OTG_HSPHY_WIDTH cC parameter was set
29588 + * to "8 and 16 bits", meaning that the core has been
29589 + * configured to work at either data path width.
29590 + *
29591 + * 8 or 16 bits (default 16)
29592 + */
29593 + int32_t phy_utmi_width;
29594 +
29595 + /**
29596 + * Specifies whether the ULPI operates at double or single
29597 + * data rate. This parameter is only applicable if PHY_TYPE is
29598 + * ULPI.
29599 + *
29600 + * 0 - single data rate ULPI interface with 8 bit wide data
29601 + * bus (default)
29602 + * 1 - double data rate ULPI interface with 4 bit wide data
29603 + * bus
29604 + */
29605 + int32_t phy_ulpi_ddr;
29606 +
29607 + /**
29608 + * Specifies whether to use the internal or external supply to
29609 + * drive the vbus with a ULPI phy.
29610 + */
29611 + int32_t phy_ulpi_ext_vbus;
29612 +
29613 + /**
29614 + * Specifies whether to use the I2Cinterface for full speed PHY. This
29615 + 	 * Specifies whether to use the I2C interface for a full speed PHY. This
29616 + * 0 - No (default)
29617 + * 1 - Yes
29618 + */
29619 + int32_t i2c_enable;
29620 +
29621 + int32_t ulpi_fs_ls;
29622 +
29623 + int32_t ts_dline;
29624 +
29625 + /**
29626 + * Specifies whether dedicated transmit FIFOs are
29627 + * enabled for non periodic IN endpoints in device mode
29628 + * 0 - No
29629 + 	 * enabled for non-periodic IN endpoints in device mode.
29630 + */
29631 + int32_t en_multiple_tx_fifo;
29632 +
29633 + /** Number of 4-byte words in each of the Tx FIFOs in device
29634 + * mode when dynamic FIFO sizing is enabled.
29635 + * 4 to 768 (default 256)
29636 + */
29637 + uint32_t dev_tx_fifo_size[MAX_TX_FIFOS];
29638 +
29639 + /** Thresholding enable flag-
29640 + * bit 0 - enable non-ISO Tx thresholding
29641 + * bit 1 - enable ISO Tx thresholding
29642 + * bit 2 - enable Rx thresholding
29643 + */
29644 + uint32_t thr_ctl;
29645 +
29646 + /** Thresholding length for Tx
29647 + * FIFOs in 32 bit DWORDs
29648 + */
29649 + uint32_t tx_thr_length;
29650 +
29651 + /** Thresholding length for Rx
29652 + * FIFOs in 32 bit DWORDs
29653 + */
29654 + uint32_t rx_thr_length;
29655 +
29656 + /**
29657 + * Specifies whether LPM (Link Power Management) support is enabled
29658 + */
29659 + int32_t lpm_enable;
29660 +
29661 + /** Per Transfer Interrupt
29662 + * mode enable flag
29663 + * 1 - Enabled
29664 + * 0 - Disabled
29665 + */
29666 + int32_t pti_enable;
29667 +
29668 + /** Multi Processor Interrupt
29669 + * mode enable flag
29670 + * 1 - Enabled
29671 + * 0 - Disabled
29672 + */
29673 + int32_t mpi_enable;
29674 +
29675 + /** IS_USB Capability
29676 + * 1 - Enabled
29677 + * 0 - Disabled
29678 + */
29679 + int32_t ic_usb_cap;
29680 +
29681 + /** AHB Threshold Ratio
29682 + * 2'b00 AHB Threshold = MAC Threshold
29683 + * 2'b01 AHB Threshold = 1/2 MAC Threshold
29684 + * 2'b10 AHB Threshold = 1/4 MAC Threshold
29685 + * 2'b11 AHB Threshold = 1/8 MAC Threshold
29686 + */
29687 + int32_t ahb_thr_ratio;
29688 +
29689 + /** ADP Support
29690 + * 1 - Enabled
29691 + * 0 - Disabled
29692 + */
29693 + int32_t adp_supp_enable;
29694 +
29695 + /** HFIR Reload Control
29696 + * 0 - The HFIR cannot be reloaded dynamically.
29697 + * 1 - Allow dynamic reloading of the HFIR register during runtime.
29698 + */
29699 + int32_t reload_ctl;
29700 +
29701 + /** DCFG: Enable device Out NAK
29702 + * 0 - The core does not set NAK after Bulk Out transfer complete.
29703 + * 1 - The core sets NAK after Bulk OUT transfer complete.
29704 + */
29705 + int32_t dev_out_nak;
29706 +
29707 + /** DCFG: Enable Continue on BNA
29708 + * After receiving BNA interrupt the core disables the endpoint,when the
29709 + 	 * After receiving a BNA interrupt the core disables the endpoint. When the
29710 + * 0 - from the DOEPDMA descriptor
29711 + * 1 - from the descriptor which received the BNA.
29712 + */
29713 + int32_t cont_on_bna;
29714 +
29715 + /** GAHBCFG: AHB Single Support
29716 + * This bit when programmed supports SINGLE transfers for remainder
29717 + * data in a transfer for DMA mode of operation.
29718 + * 0 - in this case the remainder data will be sent using INCR burst size.
29719 + * 1 - in this case the remainder data will be sent using SINGLE burst size.
29720 + */
29721 + int32_t ahb_single;
29722 +
29723 + /** Core Power down mode
29724 + * 0 - No Power Down is enabled
29725 + * 1 - Reserved
29726 + * 2 - Complete Power Down (Hibernation)
29727 + */
29728 + int32_t power_down;
29729 +
29730 + /** OTG revision supported
29731 + * 0 - OTG 1.3 revision
29732 + * 1 - OTG 2.0 revision
29733 + */
29734 + int32_t otg_ver;
29735 +
29736 +} dwc_otg_core_params_t;
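+/*
+ * For orientation, a minimal sketch of what a defaults table for this
+ * structure could look like, using only values that the field comments above
+ * call out as defaults (the identifier dwc_otg_default_params is illustrative
+ * and does not exist in this file):
+ *
+ *	static const dwc_otg_core_params_t dwc_otg_default_params = {
+ *		.otg_cap = 0,
+ *		.dma_enable = 1,
+ *		.dma_burst_size = 32,
+ *		.speed = 0,
+ *		.enable_dynamic_fifo = 1,
+ *		.max_transfer_size = 65535,
+ *		.max_packet_count = 511,
+ *		.host_channels = 12,
+ *		.dev_endpoints = 6,
+ *		.phy_type = 1,
+ *		.phy_utmi_width = 16,
+ *	};
+ */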
29737 +
29738 +#ifdef DEBUG
29739 +struct dwc_otg_core_if;
29740 +typedef struct hc_xfer_info {
29741 + struct dwc_otg_core_if *core_if;
29742 + dwc_hc_t *hc;
29743 +} hc_xfer_info_t;
29744 +#endif
29745 +
29746 +typedef struct ep_xfer_info {
29747 + struct dwc_otg_core_if *core_if;
29748 + dwc_ep_t *ep;
29749 + uint8_t state;
29750 +} ep_xfer_info_t;
29751 +/*
29752 + * Device States
29753 + */
29754 +typedef enum dwc_otg_lx_state {
29755 + /** On state */
29756 + DWC_OTG_L0,
29757 + /** LPM sleep state*/
29758 + DWC_OTG_L1,
29759 + /** USB suspend state*/
29760 + DWC_OTG_L2,
29761 + /** Off state*/
29762 + DWC_OTG_L3
29763 +} dwc_otg_lx_state_e;
29764 +
29765 +struct dwc_otg_global_regs_backup {
29766 + uint32_t gotgctl_local;
29767 + uint32_t gintmsk_local;
29768 + uint32_t gahbcfg_local;
29769 + uint32_t gusbcfg_local;
29770 + uint32_t grxfsiz_local;
29771 + uint32_t gnptxfsiz_local;
29772 +#ifdef CONFIG_USB_DWC_OTG_LPM
29773 + uint32_t glpmcfg_local;
29774 +#endif
29775 + uint32_t gi2cctl_local;
29776 + uint32_t hptxfsiz_local;
29777 + uint32_t pcgcctl_local;
29778 + uint32_t gdfifocfg_local;
29779 + uint32_t dtxfsiz_local[MAX_EPS_CHANNELS];
29780 + uint32_t gpwrdn_local;
29781 + uint32_t xhib_pcgcctl;
29782 + uint32_t xhib_gpwrdn;
29783 +};
29784 +
29785 +struct dwc_otg_host_regs_backup {
29786 + uint32_t hcfg_local;
29787 + uint32_t haintmsk_local;
29788 + uint32_t hcintmsk_local[MAX_EPS_CHANNELS];
29789 + uint32_t hprt0_local;
29790 + uint32_t hfir_local;
29791 +};
29792 +
29793 +struct dwc_otg_dev_regs_backup {
29794 + uint32_t dcfg;
29795 + uint32_t dctl;
29796 + uint32_t daintmsk;
29797 + uint32_t diepmsk;
29798 + uint32_t doepmsk;
29799 + uint32_t diepctl[MAX_EPS_CHANNELS];
29800 + uint32_t dieptsiz[MAX_EPS_CHANNELS];
29801 + uint32_t diepdma[MAX_EPS_CHANNELS];
29802 +};
29803 +/**
29804 + * The <code>dwc_otg_core_if</code> structure contains information needed to manage
29805 + * the DWC_otg controller acting in either host or device mode. It
29806 + * represents the programming view of the controller as a whole.
29807 + */
29808 +struct dwc_otg_core_if {
29809 + /** Parameters that define how the core should be configured.*/
29810 + dwc_otg_core_params_t *core_params;
29811 +
29812 + /** Core Global registers starting at offset 000h. */
29813 + dwc_otg_core_global_regs_t *core_global_regs;
29814 +
29815 + /** Device-specific information */
29816 + dwc_otg_dev_if_t *dev_if;
29817 + /** Host-specific information */
29818 + dwc_otg_host_if_t *host_if;
29819 +
29820 + /** Value from SNPSID register */
29821 + uint32_t snpsid;
29822 +
29823 + /*
29824 + * Set to 1 if the core PHY interface bits in USBCFG have been
29825 + * initialized.
29826 + */
29827 + uint8_t phy_init_done;
29828 +
29829 + /*
29830 + * SRP Success flag, set by srp success interrupt in FS I2C mode
29831 + */
29832 + uint8_t srp_success;
29833 + uint8_t srp_timer_started;
29834 + /** Timer for SRP. If it expires before SRP is successful
29835 + * clear the SRP. */
29836 + dwc_timer_t *srp_timer;
29837 +
29838 +#ifdef DWC_DEV_SRPCAP
29839 + /* This timer is needed to power on the hibernated host core if SRP is not
29840 + 	 * initiated on a connected SRP-capable device within a limited period of time
29841 + */
29842 + uint8_t pwron_timer_started;
29843 + dwc_timer_t *pwron_timer;
29844 +#endif
29845 + /* Common configuration information */
29846 + /** Power and Clock Gating Control Register */
29847 + volatile uint32_t *pcgcctl;
29848 +#define DWC_OTG_PCGCCTL_OFFSET 0xE00
29849 +
29850 + /** Push/pop addresses for endpoints or host channels.*/
29851 + uint32_t *data_fifo[MAX_EPS_CHANNELS];
29852 +#define DWC_OTG_DATA_FIFO_OFFSET 0x1000
29853 +#define DWC_OTG_DATA_FIFO_SIZE 0x1000
29854 +
29855 + /** Total RAM for FIFOs (Bytes) */
29856 + uint16_t total_fifo_size;
29857 + /** Size of Rx FIFO (Bytes) */
29858 + uint16_t rx_fifo_size;
29859 + /** Size of Non-periodic Tx FIFO (Bytes) */
29860 + uint16_t nperio_tx_fifo_size;
29861 +
29862 + /** 1 if DMA is enabled, 0 otherwise. */
29863 + uint8_t dma_enable;
29864 +
29865 + /** 1 if DMA descriptor is enabled, 0 otherwise. */
29866 + uint8_t dma_desc_enable;
29867 +
29868 + /** 1 if PTI Enhancement mode is enabled, 0 otherwise. */
29869 + uint8_t pti_enh_enable;
29870 +
29871 + /** 1 if MPI Enhancement mode is enabled, 0 otherwise. */
29872 + uint8_t multiproc_int_enable;
29873 +
29874 + /** 1 if dedicated Tx FIFOs are enabled, 0 otherwise. */
29875 + uint8_t en_multiple_tx_fifo;
29876 +
 29877 + /** Set to 1 if multiple packets of a high-bandwidth transfer are in
 29878 + * the process of being queued */
29879 + uint8_t queuing_high_bandwidth;
29880 +
29881 + /** Hardware Configuration -- stored here for convenience.*/
29882 + hwcfg1_data_t hwcfg1;
29883 + hwcfg2_data_t hwcfg2;
29884 + hwcfg3_data_t hwcfg3;
29885 + hwcfg4_data_t hwcfg4;
29886 + fifosize_data_t hptxfsiz;
29887 +
29888 + /** Host and Device Configuration -- stored here for convenience.*/
29889 + hcfg_data_t hcfg;
29890 + dcfg_data_t dcfg;
29891 +
 29892 + /** The operational state; during transitions
 29893 + * (a_host=>a_peripheral and b_device=>b_host) this may not
 29894 + * match the core, but it allows the software to determine
 29895 + * transitions.
29896 + */
29897 + uint8_t op_state;
29898 +
29899 + /**
29900 + * Set to 1 if the HCD needs to be restarted on a session request
29901 + * interrupt. This is required if no connector ID status change has
29902 + * occurred since the HCD was last disconnected.
29903 + */
29904 + uint8_t restart_hcd_on_session_req;
29905 +
29906 + /** HCD callbacks */
29907 + /** A-Device is a_host */
29908 +#define A_HOST (1)
29909 + /** A-Device is a_suspend */
29910 +#define A_SUSPEND (2)
 29911 + /** A-Device is a_peripheral */
29912 +#define A_PERIPHERAL (3)
29913 + /** B-Device is operating as a Peripheral. */
29914 +#define B_PERIPHERAL (4)
29915 + /** B-Device is operating as a Host. */
29916 +#define B_HOST (5)
29917 +
29918 + /** HCD callbacks */
29919 + struct dwc_otg_cil_callbacks *hcd_cb;
29920 + /** PCD callbacks */
29921 + struct dwc_otg_cil_callbacks *pcd_cb;
29922 +
29923 + /** Device mode Periodic Tx FIFO Mask */
29924 + uint32_t p_tx_msk;
 29925 + /** Device mode Tx FIFO Mask */
29926 + uint32_t tx_msk;
29927 +
29928 + /** Workqueue object used for handling several interrupts */
29929 + dwc_workq_t *wq_otg;
29930 +
29931 + /** Timer object used for handling "Wakeup Detected" Interrupt */
29932 + dwc_timer_t *wkp_timer;
 29933 + /** These arrays are used for debug purposes for the DEV OUT NAK enhancement */
29934 + uint32_t start_doeptsiz_val[MAX_EPS_CHANNELS];
29935 + ep_xfer_info_t ep_xfer_info[MAX_EPS_CHANNELS];
29936 + dwc_timer_t *ep_xfer_timer[MAX_EPS_CHANNELS];
29937 +#ifdef DEBUG
29938 + uint32_t start_hcchar_val[MAX_EPS_CHANNELS];
29939 +
29940 + hc_xfer_info_t hc_xfer_info[MAX_EPS_CHANNELS];
29941 + dwc_timer_t *hc_xfer_timer[MAX_EPS_CHANNELS];
29942 +
29943 + uint32_t hfnum_7_samples;
29944 + uint64_t hfnum_7_frrem_accum;
29945 + uint32_t hfnum_0_samples;
29946 + uint64_t hfnum_0_frrem_accum;
29947 + uint32_t hfnum_other_samples;
29948 + uint64_t hfnum_other_frrem_accum;
29949 +#endif
29950 +
29951 +#ifdef DWC_UTE_CFI
29952 + uint16_t pwron_rxfsiz;
29953 + uint16_t pwron_gnptxfsiz;
29954 + uint16_t pwron_txfsiz[15];
29955 +
29956 + uint16_t init_rxfsiz;
29957 + uint16_t init_gnptxfsiz;
29958 + uint16_t init_txfsiz[15];
29959 +#endif
29960 +
29961 + /** Lx state of device */
29962 + dwc_otg_lx_state_e lx_state;
29963 +
29964 + /** Saved Core Global registers */
29965 + struct dwc_otg_global_regs_backup *gr_backup;
29966 + /** Saved Host registers */
29967 + struct dwc_otg_host_regs_backup *hr_backup;
29968 + /** Saved Device registers */
29969 + struct dwc_otg_dev_regs_backup *dr_backup;
29970 +
29971 + /** Power Down Enable */
29972 + uint32_t power_down;
29973 +
29974 + /** ADP support Enable */
29975 + uint32_t adp_enable;
29976 +
29977 + /** ADP structure object */
29978 + dwc_otg_adp_t adp;
29979 +
29980 + /** hibernation/suspend flag */
29981 + int hibernation_suspend;
29982 +
29983 + /** Device mode extended hibernation flag */
29984 + int xhib;
29985 +
29986 + /** OTG revision supported */
29987 + uint32_t otg_ver;
29988 +
29989 + /** OTG status flag used for HNP polling */
29990 + uint8_t otg_sts;
29991 +
29992 + /** Pointer to either hcd->lock or pcd->lock */
29993 + dwc_spinlock_t *lock;
29994 +
29995 + /** Start predict NextEP based on Learning Queue if equal 1,
29996 + * also used as counter of disabled NP IN EP's */
29997 + uint8_t start_predict;
29998 +
29999 + /** NextEp sequence, including EP0: nextep_seq[] = EP if non-periodic and
30000 + * active, 0xff otherwise */
30001 + uint8_t nextep_seq[MAX_EPS_CHANNELS];
30002 +
 30003 + /** Index of the first EP in the nextep_seq array which should be re-enabled **/
30004 + uint8_t first_in_nextep_seq;
30005 +
 30006 + /** Frame number when entering the ISR - needed for ISOCs **/
30007 + uint32_t frame_num;
30008 +
30009 +};
30010 +
30011 +#ifdef DEBUG
30012 +/*
 30013 + * This function is called when a transfer times out.
30014 + */
30015 +extern void hc_xfer_timeout(void *ptr);
30016 +#endif
30017 +
30018 +/*
 30019 + * This function is called when a transfer times out on an endpoint.
30020 + */
30021 +extern void ep_xfer_timeout(void *ptr);
30022 +
30023 +/*
 30024 + * The following are work functions used while
 30025 + * handling certain interrupts
30026 + */
30027 +extern void w_conn_id_status_change(void *p);
30028 +
30029 +extern void w_wakeup_detected(void *p);
30030 +
30031 +/** Saves global register values into system memory. */
30032 +extern int dwc_otg_save_global_regs(dwc_otg_core_if_t * core_if);
30033 +/** Saves device register values into system memory. */
30034 +extern int dwc_otg_save_dev_regs(dwc_otg_core_if_t * core_if);
30035 +/** Saves host register values into system memory. */
30036 +extern int dwc_otg_save_host_regs(dwc_otg_core_if_t * core_if);
30037 +/** Restore global register values. */
30038 +extern int dwc_otg_restore_global_regs(dwc_otg_core_if_t * core_if);
30039 +/** Restore host register values. */
30040 +extern int dwc_otg_restore_host_regs(dwc_otg_core_if_t * core_if, int reset);
30041 +/** Restore device register values. */
30042 +extern int dwc_otg_restore_dev_regs(dwc_otg_core_if_t * core_if,
30043 + int rem_wakeup);
30044 +extern int restore_lpm_i2c_regs(dwc_otg_core_if_t * core_if);
30045 +extern int restore_essential_regs(dwc_otg_core_if_t * core_if, int rmode,
30046 + int is_host);
30047 +
30048 +extern int dwc_otg_host_hibernation_restore(dwc_otg_core_if_t * core_if,
30049 + int restore_mode, int reset);
30050 +extern int dwc_otg_device_hibernation_restore(dwc_otg_core_if_t * core_if,
30051 + int rem_wakeup, int reset);
30052 +
30053 +/*
30054 + * The following functions support initialization of the CIL driver component
30055 + * and the DWC_otg controller.
30056 + */
30057 +extern void dwc_otg_core_host_init(dwc_otg_core_if_t * _core_if);
30058 +extern void dwc_otg_core_dev_init(dwc_otg_core_if_t * _core_if);
30059 +
30060 +/** @name Device CIL Functions
30061 + * The following functions support managing the DWC_otg controller in device
30062 + * mode.
30063 + */
30064 +/**@{*/
30065 +extern void dwc_otg_wakeup(dwc_otg_core_if_t * _core_if);
30066 +extern void dwc_otg_read_setup_packet(dwc_otg_core_if_t * _core_if,
30067 + uint32_t * _dest);
30068 +extern uint32_t dwc_otg_get_frame_number(dwc_otg_core_if_t * _core_if);
30069 +extern void dwc_otg_ep0_activate(dwc_otg_core_if_t * _core_if, dwc_ep_t * _ep);
30070 +extern void dwc_otg_ep_activate(dwc_otg_core_if_t * _core_if, dwc_ep_t * _ep);
30071 +extern void dwc_otg_ep_deactivate(dwc_otg_core_if_t * _core_if, dwc_ep_t * _ep);
30072 +extern void dwc_otg_ep_start_transfer(dwc_otg_core_if_t * _core_if,
30073 + dwc_ep_t * _ep);
30074 +extern void dwc_otg_ep_start_zl_transfer(dwc_otg_core_if_t * _core_if,
30075 + dwc_ep_t * _ep);
30076 +extern void dwc_otg_ep0_start_transfer(dwc_otg_core_if_t * _core_if,
30077 + dwc_ep_t * _ep);
30078 +extern void dwc_otg_ep0_continue_transfer(dwc_otg_core_if_t * _core_if,
30079 + dwc_ep_t * _ep);
30080 +extern void dwc_otg_ep_write_packet(dwc_otg_core_if_t * _core_if,
30081 + dwc_ep_t * _ep, int _dma);
30082 +extern void dwc_otg_ep_set_stall(dwc_otg_core_if_t * _core_if, dwc_ep_t * _ep);
30083 +extern void dwc_otg_ep_clear_stall(dwc_otg_core_if_t * _core_if,
30084 + dwc_ep_t * _ep);
30085 +extern void dwc_otg_enable_device_interrupts(dwc_otg_core_if_t * _core_if);
30086 +
30087 +#ifdef DWC_EN_ISOC
30088 +extern void dwc_otg_iso_ep_start_frm_transfer(dwc_otg_core_if_t * core_if,
30089 + dwc_ep_t * ep);
30090 +extern void dwc_otg_iso_ep_start_buf_transfer(dwc_otg_core_if_t * core_if,
30091 + dwc_ep_t * ep);
30092 +#endif /* DWC_EN_ISOC */
30093 +/**@}*/
30094 +
30095 +/** @name Host CIL Functions
30096 + * The following functions support managing the DWC_otg controller in host
30097 + * mode.
30098 + */
30099 +/**@{*/
30100 +extern void dwc_otg_hc_init(dwc_otg_core_if_t * _core_if, dwc_hc_t * _hc);
30101 +extern void dwc_otg_hc_halt(dwc_otg_core_if_t * _core_if,
30102 + dwc_hc_t * _hc, dwc_otg_halt_status_e _halt_status);
30103 +extern void dwc_otg_hc_cleanup(dwc_otg_core_if_t * _core_if, dwc_hc_t * _hc);
30104 +extern void dwc_otg_hc_start_transfer(dwc_otg_core_if_t * _core_if,
30105 + dwc_hc_t * _hc);
30106 +extern int dwc_otg_hc_continue_transfer(dwc_otg_core_if_t * _core_if,
30107 + dwc_hc_t * _hc);
30108 +extern void dwc_otg_hc_do_ping(dwc_otg_core_if_t * _core_if, dwc_hc_t * _hc);
30109 +extern void dwc_otg_hc_write_packet(dwc_otg_core_if_t * _core_if,
30110 + dwc_hc_t * _hc);
30111 +extern void dwc_otg_enable_host_interrupts(dwc_otg_core_if_t * _core_if);
30112 +extern void dwc_otg_disable_host_interrupts(dwc_otg_core_if_t * _core_if);
30113 +
30114 +extern void dwc_otg_hc_start_transfer_ddma(dwc_otg_core_if_t * core_if,
30115 + dwc_hc_t * hc);
30116 +
30117 +extern uint32_t calc_frame_interval(dwc_otg_core_if_t * core_if);
30118 +
30119 +/* Macro used to clear one channel interrupt */
30120 +#define clear_hc_int(_hc_regs_, _intr_) \
30121 +do { \
30122 + hcint_data_t hcint_clear = {.d32 = 0}; \
30123 + hcint_clear.b._intr_ = 1; \
30124 + DWC_WRITE_REG32(&(_hc_regs_)->hcint, hcint_clear.d32); \
30125 +} while (0)
30126 +
30127 +/*
30128 + * Macro used to disable one channel interrupt. Channel interrupts are
30129 + * disabled when the channel is halted or released by the interrupt handler.
30130 + * There is no need to handle further interrupts of that type until the
30131 + * channel is re-assigned. In fact, subsequent handling may cause crashes
30132 + * because the channel structures are cleaned up when the channel is released.
30133 + */
30134 +#define disable_hc_int(_hc_regs_, _intr_) \
30135 +do { \
30136 + hcintmsk_data_t hcintmsk = {.d32 = 0}; \
30137 + hcintmsk.b._intr_ = 1; \
30138 + DWC_MODIFY_REG32(&(_hc_regs_)->hcintmsk, hcintmsk.d32, 0); \
30139 +} while (0)
30140 +
30141 +/**
 30142 + * This function reads HPRT0 in preparation for modifying it. It keeps the
 30143 + * write-clear (WC) bits at 0 so that, if they read as 1, they are not
 30144 + * cleared when the value is written back.
30145 + */
30146 +static inline uint32_t dwc_otg_read_hprt0(dwc_otg_core_if_t * _core_if)
30147 +{
30148 + hprt0_data_t hprt0;
30149 + hprt0.d32 = DWC_READ_REG32(_core_if->host_if->hprt0);
30150 + hprt0.b.prtena = 0;
30151 + hprt0.b.prtconndet = 0;
30152 + hprt0.b.prtenchng = 0;
30153 + hprt0.b.prtovrcurrchng = 0;
30154 + return hprt0.d32;
30155 +}
30156 +
30157 +/**@}*/
30158 +
30159 +/** @name Common CIL Functions
30160 + * The following functions support managing the DWC_otg controller in either
30161 + * device or host mode.
30162 + */
30163 +/**@{*/
30164 +
30165 +extern void dwc_otg_read_packet(dwc_otg_core_if_t * core_if,
30166 + uint8_t * dest, uint16_t bytes);
30167 +
30168 +extern void dwc_otg_flush_tx_fifo(dwc_otg_core_if_t * _core_if, const int _num);
30169 +extern void dwc_otg_flush_rx_fifo(dwc_otg_core_if_t * _core_if);
30170 +extern void dwc_otg_core_reset(dwc_otg_core_if_t * _core_if);
30171 +
30172 +/**
30173 + * This function returns the Core Interrupt register.
30174 + */
30175 +static inline uint32_t dwc_otg_read_core_intr(dwc_otg_core_if_t * core_if)
30176 +{
30177 + return (DWC_READ_REG32(&core_if->core_global_regs->gintsts) &
30178 + DWC_READ_REG32(&core_if->core_global_regs->gintmsk));
30179 +}
30180 +
30181 +/**
30182 + * This function returns the OTG Interrupt register.
30183 + */
30184 +static inline uint32_t dwc_otg_read_otg_intr(dwc_otg_core_if_t * core_if)
30185 +{
30186 + return (DWC_READ_REG32(&core_if->core_global_regs->gotgint));
30187 +}
30188 +
30189 +/**
30190 + * This function reads the Device All Endpoints Interrupt register and
30191 + * returns the IN endpoint interrupt bits.
30192 + */
30193 +static inline uint32_t dwc_otg_read_dev_all_in_ep_intr(dwc_otg_core_if_t *
30194 + core_if)
30195 +{
30196 +
30197 + uint32_t v;
30198 +
30199 + if (core_if->multiproc_int_enable) {
30200 + v = DWC_READ_REG32(&core_if->dev_if->
30201 + dev_global_regs->deachint) &
30202 + DWC_READ_REG32(&core_if->
30203 + dev_if->dev_global_regs->deachintmsk);
30204 + } else {
30205 + v = DWC_READ_REG32(&core_if->dev_if->dev_global_regs->daint) &
30206 + DWC_READ_REG32(&core_if->dev_if->dev_global_regs->daintmsk);
30207 + }
30208 + return (v & 0xffff);
30209 +}
30210 +
30211 +/**
30212 + * This function reads the Device All Endpoints Interrupt register and
30213 + * returns the OUT endpoint interrupt bits.
30214 + */
30215 +static inline uint32_t dwc_otg_read_dev_all_out_ep_intr(dwc_otg_core_if_t *
30216 + core_if)
30217 +{
30218 + uint32_t v;
30219 +
30220 + if (core_if->multiproc_int_enable) {
30221 + v = DWC_READ_REG32(&core_if->dev_if->
30222 + dev_global_regs->deachint) &
30223 + DWC_READ_REG32(&core_if->
30224 + dev_if->dev_global_regs->deachintmsk);
30225 + } else {
30226 + v = DWC_READ_REG32(&core_if->dev_if->dev_global_regs->daint) &
30227 + DWC_READ_REG32(&core_if->dev_if->dev_global_regs->daintmsk);
30228 + }
30229 +
30230 + return ((v & 0xffff0000) >> 16);
30231 +}
30232 +
30233 +/**
30234 + * This function returns the Device IN EP Interrupt register
30235 + */
30236 +static inline uint32_t dwc_otg_read_dev_in_ep_intr(dwc_otg_core_if_t * core_if,
30237 + dwc_ep_t * ep)
30238 +{
30239 + dwc_otg_dev_if_t *dev_if = core_if->dev_if;
30240 + uint32_t v, msk, emp;
30241 +
30242 + if (core_if->multiproc_int_enable) {
30243 + msk =
30244 + DWC_READ_REG32(&dev_if->
30245 + dev_global_regs->diepeachintmsk[ep->num]);
30246 + emp =
30247 + DWC_READ_REG32(&dev_if->
30248 + dev_global_regs->dtknqr4_fifoemptymsk);
30249 + msk |= ((emp >> ep->num) & 0x1) << 7;
30250 + v = DWC_READ_REG32(&dev_if->in_ep_regs[ep->num]->diepint) & msk;
30251 + } else {
30252 + msk = DWC_READ_REG32(&dev_if->dev_global_regs->diepmsk);
30253 + emp =
30254 + DWC_READ_REG32(&dev_if->
30255 + dev_global_regs->dtknqr4_fifoemptymsk);
30256 + msk |= ((emp >> ep->num) & 0x1) << 7;
30257 + v = DWC_READ_REG32(&dev_if->in_ep_regs[ep->num]->diepint) & msk;
30258 + }
30259 +
30260 + return v;
30261 +}
30262 +
30263 +/**
30264 + * This function returns the Device OUT EP Interrupt register
30265 + */
30266 +static inline uint32_t dwc_otg_read_dev_out_ep_intr(dwc_otg_core_if_t *
30267 + _core_if, dwc_ep_t * _ep)
30268 +{
30269 + dwc_otg_dev_if_t *dev_if = _core_if->dev_if;
30270 + uint32_t v;
30271 + doepmsk_data_t msk = {.d32 = 0 };
30272 +
30273 + if (_core_if->multiproc_int_enable) {
30274 + msk.d32 =
30275 + DWC_READ_REG32(&dev_if->
30276 + dev_global_regs->doepeachintmsk[_ep->num]);
30277 + if (_core_if->pti_enh_enable) {
30278 + msk.b.pktdrpsts = 1;
30279 + }
30280 + v = DWC_READ_REG32(&dev_if->
30281 + out_ep_regs[_ep->num]->doepint) & msk.d32;
30282 + } else {
30283 + msk.d32 = DWC_READ_REG32(&dev_if->dev_global_regs->doepmsk);
30284 + if (_core_if->pti_enh_enable) {
30285 + msk.b.pktdrpsts = 1;
30286 + }
30287 + v = DWC_READ_REG32(&dev_if->
30288 + out_ep_regs[_ep->num]->doepint) & msk.d32;
30289 + }
30290 + return v;
30291 +}
30292 +
30293 +/**
30294 + * This function returns the Host All Channel Interrupt register
30295 + */
30296 +static inline uint32_t dwc_otg_read_host_all_channels_intr(dwc_otg_core_if_t *
30297 + _core_if)
30298 +{
30299 + return (DWC_READ_REG32(&_core_if->host_if->host_global_regs->haint));
30300 +}
30301 +
30302 +static inline uint32_t dwc_otg_read_host_channel_intr(dwc_otg_core_if_t *
30303 + _core_if, dwc_hc_t * _hc)
30304 +{
30305 + return (DWC_READ_REG32
30306 + (&_core_if->host_if->hc_regs[_hc->hc_num]->hcint));
30307 +}
30308 +
30309 +/**
 30310 + * This function returns the current mode of operation, host or device.
30311 + *
30312 + * @return 0 - Device Mode, 1 - Host Mode
30313 + */
30314 +static inline uint32_t dwc_otg_mode(dwc_otg_core_if_t * _core_if)
30315 +{
30316 + return (DWC_READ_REG32(&_core_if->core_global_regs->gintsts) & 0x1);
30317 +}
30318 +
30319 +/**@}*/
30320 +
30321 +/**
30322 + * DWC_otg CIL callback structure. This structure allows the HCD and
30323 + * PCD to register functions used for starting and stopping the PCD
 30324 + * and HCD for a role change on a DRD.
30325 + */
30326 +typedef struct dwc_otg_cil_callbacks {
30327 + /** Start function for role change */
30328 + int (*start) (void *_p);
30329 + /** Stop Function for role change */
30330 + int (*stop) (void *_p);
30331 + /** Disconnect Function for role change */
30332 + int (*disconnect) (void *_p);
30333 + /** Resume/Remote wakeup Function */
30334 + int (*resume_wakeup) (void *_p);
30335 + /** Suspend function */
30336 + int (*suspend) (void *_p);
30337 + /** Session Start (SRP) */
30338 + int (*session_start) (void *_p);
30339 +#ifdef CONFIG_USB_DWC_OTG_LPM
 30340 + /** Sleep (switch to L1 state) */
30341 + int (*sleep) (void *_p);
30342 +#endif
30343 + /** Pointer passed to start() and stop() */
30344 + void *p;
30345 +} dwc_otg_cil_callbacks_t;
30346 +
30347 +extern void dwc_otg_cil_register_pcd_callbacks(dwc_otg_core_if_t * _core_if,
30348 + dwc_otg_cil_callbacks_t * _cb,
30349 + void *_p);
30350 +extern void dwc_otg_cil_register_hcd_callbacks(dwc_otg_core_if_t * _core_if,
30351 + dwc_otg_cil_callbacks_t * _cb,
30352 + void *_p);
30353 +
30354 +void dwc_otg_initiate_srp(dwc_otg_core_if_t * core_if);
30355 +
30356 +//////////////////////////////////////////////////////////////////////
30357 +/** Start the HCD. Helper function for using the HCD callbacks.
30358 + *
30359 + * @param core_if Programming view of DWC_otg controller.
30360 + */
30361 +static inline void cil_hcd_start(dwc_otg_core_if_t * core_if)
30362 +{
30363 + if (core_if->hcd_cb && core_if->hcd_cb->start) {
30364 + core_if->hcd_cb->start(core_if->hcd_cb->p);
30365 + }
30366 +}
30367 +
30368 +/** Stop the HCD. Helper function for using the HCD callbacks.
30369 + *
30370 + * @param core_if Programming view of DWC_otg controller.
30371 + */
30372 +static inline void cil_hcd_stop(dwc_otg_core_if_t * core_if)
30373 +{
30374 + if (core_if->hcd_cb && core_if->hcd_cb->stop) {
30375 + core_if->hcd_cb->stop(core_if->hcd_cb->p);
30376 + }
30377 +}
30378 +
30379 +/** Disconnect the HCD. Helper function for using the HCD callbacks.
30380 + *
30381 + * @param core_if Programming view of DWC_otg controller.
30382 + */
30383 +static inline void cil_hcd_disconnect(dwc_otg_core_if_t * core_if)
30384 +{
30385 + if (core_if->hcd_cb && core_if->hcd_cb->disconnect) {
30386 + core_if->hcd_cb->disconnect(core_if->hcd_cb->p);
30387 + }
30388 +}
30389 +
 30390 +/** Inform the HCD that a New Session has begun. Helper function for
30391 + * using the HCD callbacks.
30392 + *
30393 + * @param core_if Programming view of DWC_otg controller.
30394 + */
30395 +static inline void cil_hcd_session_start(dwc_otg_core_if_t * core_if)
30396 +{
30397 + if (core_if->hcd_cb && core_if->hcd_cb->session_start) {
30398 + core_if->hcd_cb->session_start(core_if->hcd_cb->p);
30399 + }
30400 +}
30401 +
30402 +#ifdef CONFIG_USB_DWC_OTG_LPM
30403 +/**
30404 + * Inform the HCD about LPM sleep.
30405 + * Helper function for using the HCD callbacks.
30406 + *
30407 + * @param core_if Programming view of DWC_otg controller.
30408 + */
30409 +static inline void cil_hcd_sleep(dwc_otg_core_if_t * core_if)
30410 +{
30411 + if (core_if->hcd_cb && core_if->hcd_cb->sleep) {
30412 + core_if->hcd_cb->sleep(core_if->hcd_cb->p);
30413 + }
30414 +}
30415 +#endif
30416 +
30417 +/** Resume the HCD. Helper function for using the HCD callbacks.
30418 + *
30419 + * @param core_if Programming view of DWC_otg controller.
30420 + */
30421 +static inline void cil_hcd_resume(dwc_otg_core_if_t * core_if)
30422 +{
30423 + if (core_if->hcd_cb && core_if->hcd_cb->resume_wakeup) {
30424 + core_if->hcd_cb->resume_wakeup(core_if->hcd_cb->p);
30425 + }
30426 +}
30427 +
30428 +/** Start the PCD. Helper function for using the PCD callbacks.
30429 + *
30430 + * @param core_if Programming view of DWC_otg controller.
30431 + */
30432 +static inline void cil_pcd_start(dwc_otg_core_if_t * core_if)
30433 +{
30434 + if (core_if->pcd_cb && core_if->pcd_cb->start) {
30435 + core_if->pcd_cb->start(core_if->pcd_cb->p);
30436 + }
30437 +}
30438 +
30439 +/** Stop the PCD. Helper function for using the PCD callbacks.
30440 + *
30441 + * @param core_if Programming view of DWC_otg controller.
30442 + */
30443 +static inline void cil_pcd_stop(dwc_otg_core_if_t * core_if)
30444 +{
30445 + if (core_if->pcd_cb && core_if->pcd_cb->stop) {
30446 + core_if->pcd_cb->stop(core_if->pcd_cb->p);
30447 + }
30448 +}
30449 +
30450 +/** Suspend the PCD. Helper function for using the PCD callbacks.
30451 + *
30452 + * @param core_if Programming view of DWC_otg controller.
30453 + */
30454 +static inline void cil_pcd_suspend(dwc_otg_core_if_t * core_if)
30455 +{
30456 + if (core_if->pcd_cb && core_if->pcd_cb->suspend) {
30457 + core_if->pcd_cb->suspend(core_if->pcd_cb->p);
30458 + }
30459 +}
30460 +
30461 +/** Resume the PCD. Helper function for using the PCD callbacks.
30462 + *
30463 + * @param core_if Programming view of DWC_otg controller.
30464 + */
30465 +static inline void cil_pcd_resume(dwc_otg_core_if_t * core_if)
30466 +{
30467 + if (core_if->pcd_cb && core_if->pcd_cb->resume_wakeup) {
30468 + core_if->pcd_cb->resume_wakeup(core_if->pcd_cb->p);
30469 + }
30470 +}
30471 +
30472 +//////////////////////////////////////////////////////////////////////
30473 +
30474 +#endif
30475 --- /dev/null
30476 +++ b/drivers/usb/host/dwc_otg/dwc_otg_cil_intr.c
30477 @@ -0,0 +1,1601 @@
30478 +/* ==========================================================================
30479 + * $File: //dwh/usb_iip/dev/software/otg/linux/drivers/dwc_otg_cil_intr.c $
30480 + * $Revision: #32 $
30481 + * $Date: 2012/08/10 $
30482 + * $Change: 2047372 $
30483 + *
30484 + * Synopsys HS OTG Linux Software Driver and documentation (hereinafter,
30485 + * "Software") is an Unsupported proprietary work of Synopsys, Inc. unless
30486 + * otherwise expressly agreed to in writing between Synopsys and you.
30487 + *
30488 + * The Software IS NOT an item of Licensed Software or Licensed Product under
30489 + * any End User Software License Agreement or Agreement for Licensed Product
30490 + * with Synopsys or any supplement thereto. You are permitted to use and
30491 + * redistribute this Software in source and binary forms, with or without
30492 + * modification, provided that redistributions of source code must retain this
30493 + * notice. You may not view, use, disclose, copy or distribute this file or
30494 + * any information contained herein except pursuant to this license grant from
30495 + * Synopsys. If you do not agree with this notice, including the disclaimer
30496 + * below, then you are not authorized to use the Software.
30497 + *
30498 + * THIS SOFTWARE IS BEING DISTRIBUTED BY SYNOPSYS SOLELY ON AN "AS IS" BASIS
30499 + * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
30500 + * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
30501 + * ARE HEREBY DISCLAIMED. IN NO EVENT SHALL SYNOPSYS BE LIABLE FOR ANY DIRECT,
30502 + * INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
30503 + * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
30504 + * SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
30505 + * CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
30506 + * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
30507 + * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH
30508 + * DAMAGE.
30509 + * ========================================================================== */
30510 +
30511 +/** @file
30512 + *
30513 + * The Core Interface Layer provides basic services for accessing and
30514 + * managing the DWC_otg hardware. These services are used by both the
30515 + * Host Controller Driver and the Peripheral Controller Driver.
30516 + *
30517 + * This file contains the Common Interrupt handlers.
30518 + */
30519 +#include "dwc_os.h"
30520 +#include "dwc_otg_regs.h"
30521 +#include "dwc_otg_cil.h"
30522 +#include "dwc_otg_driver.h"
30523 +#include "dwc_otg_pcd.h"
30524 +#include "dwc_otg_hcd.h"
30525 +
30526 +#ifdef DEBUG
30527 +inline const char *op_state_str(dwc_otg_core_if_t * core_if)
30528 +{
30529 + return (core_if->op_state == A_HOST ? "a_host" :
30530 + (core_if->op_state == A_SUSPEND ? "a_suspend" :
30531 + (core_if->op_state == A_PERIPHERAL ? "a_peripheral" :
30532 + (core_if->op_state == B_PERIPHERAL ? "b_peripheral" :
30533 + (core_if->op_state == B_HOST ? "b_host" : "unknown")))));
30534 +}
30535 +#endif
30536 +
 30537 +/** This function logs a warning for the Mode Mismatch Interrupt and clears it
30538 + *
30539 + * @param core_if Programming view of DWC_otg controller.
30540 + */
30541 +int32_t dwc_otg_handle_mode_mismatch_intr(dwc_otg_core_if_t * core_if)
30542 +{
30543 + gintsts_data_t gintsts;
30544 + DWC_WARN("Mode Mismatch Interrupt: currently in %s mode\n",
30545 + dwc_otg_mode(core_if) ? "Host" : "Device");
30546 +
30547 + /* Clear interrupt */
30548 + gintsts.d32 = 0;
30549 + gintsts.b.modemismatch = 1;
30550 + DWC_WRITE_REG32(&core_if->core_global_regs->gintsts, gintsts.d32);
30551 + return 1;
30552 +}
30553 +
30554 +/**
30555 + * This function handles the OTG Interrupts. It reads the OTG
30556 + * Interrupt Register (GOTGINT) to determine what interrupt has
30557 + * occurred.
30558 + *
30559 + * @param core_if Programming view of DWC_otg controller.
30560 + */
30561 +int32_t dwc_otg_handle_otg_intr(dwc_otg_core_if_t * core_if)
30562 +{
30563 + dwc_otg_core_global_regs_t *global_regs = core_if->core_global_regs;
30564 + gotgint_data_t gotgint;
30565 + gotgctl_data_t gotgctl;
30566 + gintmsk_data_t gintmsk;
30567 + gpwrdn_data_t gpwrdn;
30568 +
30569 + gotgint.d32 = DWC_READ_REG32(&global_regs->gotgint);
30570 + gotgctl.d32 = DWC_READ_REG32(&global_regs->gotgctl);
30571 + DWC_DEBUGPL(DBG_CIL, "++OTG Interrupt gotgint=%0x [%s]\n", gotgint.d32,
30572 + op_state_str(core_if));
30573 +
30574 + if (gotgint.b.sesenddet) {
30575 + DWC_DEBUGPL(DBG_ANY, " ++OTG Interrupt: "
30576 + "Session End Detected++ (%s)\n",
30577 + op_state_str(core_if));
30578 + gotgctl.d32 = DWC_READ_REG32(&global_regs->gotgctl);
30579 +
30580 + if (core_if->op_state == B_HOST) {
30581 + cil_pcd_start(core_if);
30582 + core_if->op_state = B_PERIPHERAL;
30583 + } else {
 30584 + /* If not B_HOST and Device HNP is still set, HNP
 30585 + * did not succeed! */
30586 + if (gotgctl.b.devhnpen) {
30587 + DWC_DEBUGPL(DBG_ANY, "Session End Detected\n");
30588 + __DWC_ERROR("Device Not Connected/Responding!\n");
30589 + }
30590 +
30591 + /* If Session End Detected the B-Cable has
30592 + * been disconnected. */
30593 + /* Reset PCD and Gadget driver to a
30594 + * clean state. */
30595 + core_if->lx_state = DWC_OTG_L0;
30596 + DWC_SPINUNLOCK(core_if->lock);
30597 + cil_pcd_stop(core_if);
30598 + DWC_SPINLOCK(core_if->lock);
30599 +
30600 + if (core_if->adp_enable) {
30601 + if (core_if->power_down == 2) {
30602 + gpwrdn.d32 = 0;
30603 + gpwrdn.b.pwrdnswtch = 1;
30604 + DWC_MODIFY_REG32(&core_if->
30605 + core_global_regs->
30606 + gpwrdn, gpwrdn.d32, 0);
30607 + }
30608 +
30609 + gpwrdn.d32 = 0;
30610 + gpwrdn.b.pmuintsel = 1;
30611 + gpwrdn.b.pmuactv = 1;
30612 + DWC_MODIFY_REG32(&core_if->core_global_regs->
30613 + gpwrdn, 0, gpwrdn.d32);
30614 +
30615 + dwc_otg_adp_sense_start(core_if);
30616 + }
30617 + }
30618 +
30619 + gotgctl.d32 = 0;
30620 + gotgctl.b.devhnpen = 1;
30621 + DWC_MODIFY_REG32(&global_regs->gotgctl, gotgctl.d32, 0);
30622 + }
30623 + if (gotgint.b.sesreqsucstschng) {
30624 + DWC_DEBUGPL(DBG_ANY, " ++OTG Interrupt: "
30625 + "Session Reqeust Success Status Change++\n");
30626 + gotgctl.d32 = DWC_READ_REG32(&global_regs->gotgctl);
30627 + if (gotgctl.b.sesreqscs) {
30628 +
30629 + if ((core_if->core_params->phy_type ==
30630 + DWC_PHY_TYPE_PARAM_FS) && (core_if->core_params->i2c_enable)) {
30631 + core_if->srp_success = 1;
30632 + } else {
30633 + DWC_SPINUNLOCK(core_if->lock);
30634 + cil_pcd_resume(core_if);
30635 + DWC_SPINLOCK(core_if->lock);
30636 + /* Clear Session Request */
30637 + gotgctl.d32 = 0;
30638 + gotgctl.b.sesreq = 1;
30639 + DWC_MODIFY_REG32(&global_regs->gotgctl,
30640 + gotgctl.d32, 0);
30641 + }
30642 + }
30643 + }
30644 + if (gotgint.b.hstnegsucstschng) {
30645 + /* Print statements during the HNP interrupt handling
30646 + * can cause it to fail.*/
30647 + gotgctl.d32 = DWC_READ_REG32(&global_regs->gotgctl);
 30648 + /* Workaround for 3.00a - HW is not setting cur_mode;
 30649 + * even this sometimes does not help */
30650 + if (core_if->snpsid >= OTG_CORE_REV_3_00a)
30651 + dwc_udelay(100);
30652 + if (gotgctl.b.hstnegscs) {
30653 + if (dwc_otg_is_host_mode(core_if)) {
30654 + core_if->op_state = B_HOST;
30655 + /*
30656 + * Need to disable SOF interrupt immediately.
30657 + * When switching from device to host, the PCD
30658 + * interrupt handler won't handle the
30659 + * interrupt if host mode is already set. The
30660 + * HCD interrupt handler won't get called if
30661 + * the HCD state is HALT. This means that the
30662 + * interrupt does not get handled and Linux
30663 + * complains loudly.
30664 + */
30665 + gintmsk.d32 = 0;
30666 + gintmsk.b.sofintr = 1;
30667 + DWC_MODIFY_REG32(&global_regs->gintmsk,
30668 + gintmsk.d32, 0);
30669 + /* Call callback function with spin lock released */
30670 + DWC_SPINUNLOCK(core_if->lock);
30671 + cil_pcd_stop(core_if);
30672 + /*
30673 + * Initialize the Core for Host mode.
30674 + */
30675 + cil_hcd_start(core_if);
30676 + DWC_SPINLOCK(core_if->lock);
30677 + core_if->op_state = B_HOST;
30678 + }
30679 + } else {
30680 + gotgctl.d32 = 0;
30681 + gotgctl.b.hnpreq = 1;
30682 + gotgctl.b.devhnpen = 1;
30683 + DWC_MODIFY_REG32(&global_regs->gotgctl, gotgctl.d32, 0);
30684 + DWC_DEBUGPL(DBG_ANY, "HNP Failed\n");
30685 + __DWC_ERROR("Device Not Connected/Responding\n");
30686 + }
30687 + }
30688 + if (gotgint.b.hstnegdet) {
30689 + /* The disconnect interrupt is set at the same time as
30690 + * Host Negotiation Detected. During the mode
30691 + * switch all interrupts are cleared so the disconnect
30692 + * interrupt handler will not get executed.
30693 + */
30694 + DWC_DEBUGPL(DBG_ANY, " ++OTG Interrupt: "
30695 + "Host Negotiation Detected++ (%s)\n",
30696 + (dwc_otg_is_host_mode(core_if) ? "Host" :
30697 + "Device"));
30698 + if (dwc_otg_is_device_mode(core_if)) {
30699 + DWC_DEBUGPL(DBG_ANY, "a_suspend->a_peripheral (%d)\n",
30700 + core_if->op_state);
30701 + DWC_SPINUNLOCK(core_if->lock);
30702 + cil_hcd_disconnect(core_if);
30703 + cil_pcd_start(core_if);
30704 + DWC_SPINLOCK(core_if->lock);
30705 + core_if->op_state = A_PERIPHERAL;
30706 + } else {
30707 + /*
30708 + * Need to disable SOF interrupt immediately. When
30709 + * switching from device to host, the PCD interrupt
30710 + * handler won't handle the interrupt if host mode is
30711 + * already set. The HCD interrupt handler won't get
30712 + * called if the HCD state is HALT. This means that
30713 + * the interrupt does not get handled and Linux
30714 + * complains loudly.
30715 + */
30716 + gintmsk.d32 = 0;
30717 + gintmsk.b.sofintr = 1;
30718 + DWC_MODIFY_REG32(&global_regs->gintmsk, gintmsk.d32, 0);
30719 + DWC_SPINUNLOCK(core_if->lock);
30720 + cil_pcd_stop(core_if);
30721 + cil_hcd_start(core_if);
30722 + DWC_SPINLOCK(core_if->lock);
30723 + core_if->op_state = A_HOST;
30724 + }
30725 + }
30726 + if (gotgint.b.adevtoutchng) {
30727 + DWC_DEBUGPL(DBG_ANY, " ++OTG Interrupt: "
30728 + "A-Device Timeout Change++\n");
30729 + }
30730 + if (gotgint.b.debdone) {
30731 + DWC_DEBUGPL(DBG_ANY, " ++OTG Interrupt: " "Debounce Done++\n");
30732 + }
30733 +
30734 + /* Clear GOTGINT */
30735 + DWC_WRITE_REG32(&core_if->core_global_regs->gotgint, gotgint.d32);
30736 +
30737 + return 1;
30738 +}
30739 +
30740 +void w_conn_id_status_change(void *p)
30741 +{
30742 + dwc_otg_core_if_t *core_if = p;
30743 + uint32_t count = 0;
30744 + gotgctl_data_t gotgctl = {.d32 = 0 };
30745 +
30746 + gotgctl.d32 = DWC_READ_REG32(&core_if->core_global_regs->gotgctl);
30747 + DWC_DEBUGPL(DBG_CIL, "gotgctl=%0x\n", gotgctl.d32);
30748 + DWC_DEBUGPL(DBG_CIL, "gotgctl.b.conidsts=%d\n", gotgctl.b.conidsts);
30749 +
30750 + /* B-Device connector (Device Mode) */
30751 + if (gotgctl.b.conidsts) {
30752 + /* Wait for switch to device mode. */
30753 + while (!dwc_otg_is_device_mode(core_if)) {
30754 + DWC_PRINTF("Waiting for Peripheral Mode, Mode=%s\n",
30755 + (dwc_otg_is_host_mode(core_if) ? "Host" :
30756 + "Peripheral"));
30757 + dwc_mdelay(100);
30758 + if (++count > 10000)
30759 + break;
30760 + }
30761 + DWC_ASSERT(++count < 10000,
30762 + "Connection id status change timed out");
30763 + core_if->op_state = B_PERIPHERAL;
30764 + dwc_otg_core_init(core_if);
30765 + dwc_otg_enable_global_interrupts(core_if);
30766 + cil_pcd_start(core_if);
30767 + } else {
30768 + /* A-Device connector (Host Mode) */
30769 + while (!dwc_otg_is_host_mode(core_if)) {
30770 + DWC_PRINTF("Waiting for Host Mode, Mode=%s\n",
30771 + (dwc_otg_is_host_mode(core_if) ? "Host" :
30772 + "Peripheral"));
30773 + dwc_mdelay(100);
30774 + if (++count > 10000)
30775 + break;
30776 + }
30777 + DWC_ASSERT(++count < 10000,
30778 + "Connection id status change timed out");
30779 + core_if->op_state = A_HOST;
30780 + /*
30781 + * Initialize the Core for Host mode.
30782 + */
30783 + dwc_otg_core_init(core_if);
30784 + dwc_otg_enable_global_interrupts(core_if);
30785 + cil_hcd_start(core_if);
30786 + }
30787 +}
30788 +
30789 +/**
30790 + * This function handles the Connector ID Status Change Interrupt. It
 30791 + * reads the OTG Control Register (GOTGCTL) to determine whether this
 30792 + * is a Device to Host Mode transition or a Host Mode to Device
 30793 + * transition.
30794 + *
30795 + * This only occurs when the cable is connected/removed from the PHY
30796 + * connector.
30797 + *
30798 + * @param core_if Programming view of DWC_otg controller.
30799 + */
30800 +int32_t dwc_otg_handle_conn_id_status_change_intr(dwc_otg_core_if_t * core_if)
30801 +{
30802 +
30803 + /*
30804 + * Need to disable SOF interrupt immediately. If switching from device
30805 + * to host, the PCD interrupt handler won't handle the interrupt if
30806 + * host mode is already set. The HCD interrupt handler won't get
30807 + * called if the HCD state is HALT. This means that the interrupt does
30808 + * not get handled and Linux complains loudly.
30809 + */
30810 + gintmsk_data_t gintmsk = {.d32 = 0 };
30811 + gintsts_data_t gintsts = {.d32 = 0 };
30812 +
30813 + gintmsk.b.sofintr = 1;
30814 + DWC_MODIFY_REG32(&core_if->core_global_regs->gintmsk, gintmsk.d32, 0);
30815 +
30816 + DWC_DEBUGPL(DBG_CIL,
30817 + " ++Connector ID Status Change Interrupt++ (%s)\n",
30818 + (dwc_otg_is_host_mode(core_if) ? "Host" : "Device"));
30819 +
30820 + DWC_SPINUNLOCK(core_if->lock);
30821 +
30822 + /*
 30823 + * Need to schedule work, as there may be DELAY function calls.
 30824 + * Release the lock before scheduling the workq, as the spinlock is held during scheduling.
30825 + */
30826 +
30827 + DWC_WORKQ_SCHEDULE(core_if->wq_otg, w_conn_id_status_change,
30828 + core_if, "connection id status change");
30829 + DWC_SPINLOCK(core_if->lock);
30830 +
30831 + /* Set flag and clear interrupt */
30832 + gintsts.b.conidstschng = 1;
30833 + DWC_WRITE_REG32(&core_if->core_global_regs->gintsts, gintsts.d32);
30834 +
30835 + return 1;
30836 +}
30837 +
30838 +/**
30839 + * This interrupt indicates that a device is initiating the Session
30840 + * Request Protocol to request the host to turn on bus power so a new
30841 + * session can begin. The handler responds by turning on bus power. If
30842 + * the DWC_otg controller is in low power mode, the handler brings the
30843 + * controller out of low power mode before turning on bus power.
30844 + *
30845 + * @param core_if Programming view of DWC_otg controller.
30846 + */
30847 +int32_t dwc_otg_handle_session_req_intr(dwc_otg_core_if_t * core_if)
30848 +{
30849 + gintsts_data_t gintsts;
30850 +
30851 +#ifndef DWC_HOST_ONLY
30852 + DWC_DEBUGPL(DBG_ANY, "++Session Request Interrupt++\n");
30853 +
30854 + if (dwc_otg_is_device_mode(core_if)) {
30855 + DWC_PRINTF("SRP: Device mode\n");
30856 + } else {
30857 + hprt0_data_t hprt0;
30858 + DWC_PRINTF("SRP: Host mode\n");
30859 +
30860 + /* Turn on the port power bit. */
30861 + hprt0.d32 = dwc_otg_read_hprt0(core_if);
30862 + hprt0.b.prtpwr = 1;
30863 + DWC_WRITE_REG32(core_if->host_if->hprt0, hprt0.d32);
30864 +
 30865 + /* Start the Connection timer, so a message can be displayed
30866 + * if connect does not occur within 10 seconds. */
30867 + cil_hcd_session_start(core_if);
30868 + }
30869 +#endif
30870 +
30871 + /* Clear interrupt */
30872 + gintsts.d32 = 0;
30873 + gintsts.b.sessreqintr = 1;
30874 + DWC_WRITE_REG32(&core_if->core_global_regs->gintsts, gintsts.d32);
30875 +
30876 + return 1;
30877 +}
30878 +
30879 +void w_wakeup_detected(void *p)
30880 +{
30881 + dwc_otg_core_if_t *core_if = (dwc_otg_core_if_t *) p;
30882 + /*
30883 + * Clear the Resume after 70ms. (Need 20 ms minimum. Use 70 ms
30884 + * so that OPT tests pass with all PHYs).
30885 + */
30886 + hprt0_data_t hprt0 = {.d32 = 0 };
30887 +#if 0
30888 + pcgcctl_data_t pcgcctl = {.d32 = 0 };
30889 + /* Restart the Phy Clock */
30890 + pcgcctl.b.stoppclk = 1;
30891 + DWC_MODIFY_REG32(core_if->pcgcctl, pcgcctl.d32, 0);
30892 + dwc_udelay(10);
30893 +#endif //0
30894 + hprt0.d32 = dwc_otg_read_hprt0(core_if);
30895 + DWC_DEBUGPL(DBG_ANY, "Resume: HPRT0=%0x\n", hprt0.d32);
30896 +// dwc_mdelay(70);
30897 + hprt0.b.prtres = 0; /* Resume */
30898 + DWC_WRITE_REG32(core_if->host_if->hprt0, hprt0.d32);
30899 + DWC_DEBUGPL(DBG_ANY, "Clear Resume: HPRT0=%0x\n",
30900 + DWC_READ_REG32(core_if->host_if->hprt0));
30901 +
30902 + cil_hcd_resume(core_if);
30903 +
30904 + /** Change to L0 state*/
30905 + core_if->lx_state = DWC_OTG_L0;
30906 +}
30907 +
30908 +/**
30909 + * This interrupt indicates that the DWC_otg controller has detected a
30910 + * resume or remote wakeup sequence. If the DWC_otg controller is in
30911 + * low power mode, the handler must brings the controller out of low
30912 + * power mode. The controller automatically begins resume
30913 + * signaling. The handler schedules a time to stop resume signaling.
30914 + */
30915 +int32_t dwc_otg_handle_wakeup_detected_intr(dwc_otg_core_if_t * core_if)
30916 +{
30917 + gintsts_data_t gintsts;
30918 +
30919 + DWC_DEBUGPL(DBG_ANY,
30920 + "++Resume and Remote Wakeup Detected Interrupt++\n");
30921 +
30922 + DWC_PRINTF("%s lxstate = %d\n", __func__, core_if->lx_state);
30923 +
30924 + if (dwc_otg_is_device_mode(core_if)) {
30925 + dctl_data_t dctl = {.d32 = 0 };
30926 + DWC_DEBUGPL(DBG_PCD, "DSTS=0x%0x\n",
30927 + DWC_READ_REG32(&core_if->dev_if->dev_global_regs->
30928 + dsts));
30929 + if (core_if->lx_state == DWC_OTG_L2) {
30930 +#ifdef PARTIAL_POWER_DOWN
30931 + if (core_if->hwcfg4.b.power_optimiz) {
30932 + pcgcctl_data_t power = {.d32 = 0 };
30933 +
30934 + power.d32 = DWC_READ_REG32(core_if->pcgcctl);
30935 + DWC_DEBUGPL(DBG_CIL, "PCGCCTL=%0x\n",
30936 + power.d32);
30937 +
30938 + power.b.stoppclk = 0;
30939 + DWC_WRITE_REG32(core_if->pcgcctl, power.d32);
30940 +
30941 + power.b.pwrclmp = 0;
30942 + DWC_WRITE_REG32(core_if->pcgcctl, power.d32);
30943 +
30944 + power.b.rstpdwnmodule = 0;
30945 + DWC_WRITE_REG32(core_if->pcgcctl, power.d32);
30946 + }
30947 +#endif
30948 + /* Clear the Remote Wakeup Signaling */
30949 + dctl.b.rmtwkupsig = 1;
30950 + DWC_MODIFY_REG32(&core_if->dev_if->dev_global_regs->
30951 + dctl, dctl.d32, 0);
30952 +
30953 + DWC_SPINUNLOCK(core_if->lock);
30954 + if (core_if->pcd_cb && core_if->pcd_cb->resume_wakeup) {
30955 + core_if->pcd_cb->resume_wakeup(core_if->pcd_cb->p);
30956 + }
30957 + DWC_SPINLOCK(core_if->lock);
30958 + } else {
30959 + glpmcfg_data_t lpmcfg;
30960 + lpmcfg.d32 =
30961 + DWC_READ_REG32(&core_if->core_global_regs->glpmcfg);
30962 + lpmcfg.b.hird_thres &= (~(1 << 4));
30963 + DWC_WRITE_REG32(&core_if->core_global_regs->glpmcfg,
30964 + lpmcfg.d32);
30965 + }
30966 + /** Change to L0 state*/
30967 + core_if->lx_state = DWC_OTG_L0;
30968 + } else {
30969 + if (core_if->lx_state != DWC_OTG_L1) {
30970 + pcgcctl_data_t pcgcctl = {.d32 = 0 };
30971 +
30972 + /* Restart the Phy Clock */
30973 + pcgcctl.b.stoppclk = 1;
30974 + DWC_MODIFY_REG32(core_if->pcgcctl, pcgcctl.d32, 0);
30975 + DWC_TIMER_SCHEDULE(core_if->wkp_timer, 71);
30976 + } else {
30977 + /** Change to L0 state*/
30978 + core_if->lx_state = DWC_OTG_L0;
30979 + }
30980 + }
30981 +
30982 + /* Clear interrupt */
30983 + gintsts.d32 = 0;
30984 + gintsts.b.wkupintr = 1;
30985 + DWC_WRITE_REG32(&core_if->core_global_regs->gintsts, gintsts.d32);
30986 +
30987 + return 1;
30988 +}
30989 +
30990 +/**
30991 + * This interrupt indicates that the Wakeup Logic has detected a
30992 + * Device disconnect.
30993 + */
30994 +static int32_t dwc_otg_handle_pwrdn_disconnect_intr(dwc_otg_core_if_t *core_if)
30995 +{
30996 + gpwrdn_data_t gpwrdn = { .d32 = 0 };
30997 + gpwrdn_data_t gpwrdn_temp = { .d32 = 0 };
30998 + gpwrdn_temp.d32 = DWC_READ_REG32(&core_if->core_global_regs->gpwrdn);
30999 +
31000 + DWC_PRINTF("%s called\n", __FUNCTION__);
31001 +
31002 + if (!core_if->hibernation_suspend) {
31003 + DWC_PRINTF("Already exited from Hibernation\n");
31004 + return 1;
31005 + }
31006 +
31007 + /* Switch on the voltage to the core */
31008 + gpwrdn.b.pwrdnswtch = 1;
31009 + DWC_MODIFY_REG32(&core_if->core_global_regs->gpwrdn, gpwrdn.d32, 0);
31010 + dwc_udelay(10);
31011 +
31012 + /* Reset the core */
31013 + gpwrdn.d32 = 0;
31014 + gpwrdn.b.pwrdnrstn = 1;
31015 + DWC_MODIFY_REG32(&core_if->core_global_regs->gpwrdn, gpwrdn.d32, 0);
31016 + dwc_udelay(10);
31017 +
31018 + /* Disable power clamps*/
31019 + gpwrdn.d32 = 0;
31020 + gpwrdn.b.pwrdnclmp = 1;
31021 + DWC_MODIFY_REG32(&core_if->core_global_regs->gpwrdn, gpwrdn.d32, 0);
31022 +
 31023 + /* Remove the reset signal from the core */
31024 + gpwrdn.d32 = 0;
31025 + gpwrdn.b.pwrdnrstn = 1;
31026 + DWC_MODIFY_REG32(&core_if->core_global_regs->gpwrdn, 0, gpwrdn.d32);
31027 + dwc_udelay(10);
31028 +
31029 + /* Disable PMU interrupt */
31030 + gpwrdn.d32 = 0;
31031 + gpwrdn.b.pmuintsel = 1;
31032 + DWC_MODIFY_REG32(&core_if->core_global_regs->gpwrdn, gpwrdn.d32, 0);
31033 +
31034 + core_if->hibernation_suspend = 0;
31035 +
31036 + /* Disable PMU */
31037 + gpwrdn.d32 = 0;
31038 + gpwrdn.b.pmuactv = 1;
31039 + DWC_MODIFY_REG32(&core_if->core_global_regs->gpwrdn, gpwrdn.d32, 0);
31040 + dwc_udelay(10);
31041 +
31042 + if (gpwrdn_temp.b.idsts) {
31043 + core_if->op_state = B_PERIPHERAL;
31044 + dwc_otg_core_init(core_if);
31045 + dwc_otg_enable_global_interrupts(core_if);
31046 + cil_pcd_start(core_if);
31047 + } else {
31048 + core_if->op_state = A_HOST;
31049 + dwc_otg_core_init(core_if);
31050 + dwc_otg_enable_global_interrupts(core_if);
31051 + cil_hcd_start(core_if);
31052 + }
31053 +
31054 + return 1;
31055 +}
31056 +
31057 +/**
31058 + * This interrupt indicates that the Wakeup Logic has detected a
31059 + * remote wakeup sequence.
31060 + */
31061 +static int32_t dwc_otg_handle_pwrdn_wakeup_detected_intr(dwc_otg_core_if_t * core_if)
31062 +{
31063 + gpwrdn_data_t gpwrdn = {.d32 = 0 };
31064 + DWC_DEBUGPL(DBG_ANY,
31065 + "++Powerdown Remote Wakeup Detected Interrupt++\n");
31066 +
31067 + if (!core_if->hibernation_suspend) {
31068 + DWC_PRINTF("Already exited from Hibernation\n");
31069 + return 1;
31070 + }
31071 +
31072 + gpwrdn.d32 = DWC_READ_REG32(&core_if->core_global_regs->gpwrdn);
31073 + if (gpwrdn.b.idsts) { // Device Mode
31074 + if ((core_if->power_down == 2)
31075 + && (core_if->hibernation_suspend == 1)) {
31076 + dwc_otg_device_hibernation_restore(core_if, 0, 0);
31077 + }
31078 + } else {
31079 + if ((core_if->power_down == 2)
31080 + && (core_if->hibernation_suspend == 1)) {
31081 + dwc_otg_host_hibernation_restore(core_if, 1, 0);
31082 + }
31083 + }
31084 + return 1;
31085 +}
31086 +
31087 +static int32_t dwc_otg_handle_pwrdn_idsts_change(dwc_otg_device_t *otg_dev)
31088 +{
31089 + gpwrdn_data_t gpwrdn = {.d32 = 0 };
31090 + gpwrdn_data_t gpwrdn_temp = {.d32 = 0 };
31091 + dwc_otg_core_if_t *core_if = otg_dev->core_if;
31092 +
31093 + DWC_DEBUGPL(DBG_ANY, "%s called\n", __FUNCTION__);
31094 + gpwrdn_temp.d32 = DWC_READ_REG32(&core_if->core_global_regs->gpwrdn);
31095 + if (core_if->power_down == 2) {
31096 + if (!core_if->hibernation_suspend) {
31097 + DWC_PRINTF("Already exited from Hibernation\n");
31098 + return 1;
31099 + }
31100 + DWC_DEBUGPL(DBG_ANY, "Exit from hibernation on ID sts change\n");
31101 + /* Switch on the voltage to the core */
31102 + gpwrdn.b.pwrdnswtch = 1;
31103 + DWC_MODIFY_REG32(&core_if->core_global_regs->gpwrdn, gpwrdn.d32, 0);
31104 + dwc_udelay(10);
31105 +
31106 + /* Reset the core */
31107 + gpwrdn.d32 = 0;
31108 + gpwrdn.b.pwrdnrstn = 1;
31109 + DWC_MODIFY_REG32(&core_if->core_global_regs->gpwrdn, gpwrdn.d32, 0);
31110 + dwc_udelay(10);
31111 +
31112 + /* Disable power clamps */
31113 + gpwrdn.d32 = 0;
31114 + gpwrdn.b.pwrdnclmp = 1;
31115 + DWC_MODIFY_REG32(&core_if->core_global_regs->gpwrdn, gpwrdn.d32, 0);
31116 +
 31117 + /* Remove the reset signal from the core */
31118 + gpwrdn.d32 = 0;
31119 + gpwrdn.b.pwrdnrstn = 1;
31120 + DWC_MODIFY_REG32(&core_if->core_global_regs->gpwrdn, 0, gpwrdn.d32);
31121 + dwc_udelay(10);
31122 +
31123 + /* Disable PMU interrupt */
31124 + gpwrdn.d32 = 0;
31125 + gpwrdn.b.pmuintsel = 1;
31126 + DWC_MODIFY_REG32(&core_if->core_global_regs->gpwrdn, gpwrdn.d32, 0);
31127 +
31128 + /*Indicates that we are exiting from hibernation */
31129 + core_if->hibernation_suspend = 0;
31130 +
31131 + /* Disable PMU */
31132 + gpwrdn.d32 = 0;
31133 + gpwrdn.b.pmuactv = 1;
31134 + DWC_MODIFY_REG32(&core_if->core_global_regs->gpwrdn, gpwrdn.d32, 0);
31135 + dwc_udelay(10);
31136 +
31137 + gpwrdn.d32 = core_if->gr_backup->gpwrdn_local;
31138 + if (gpwrdn.b.dis_vbus == 1) {
31139 + gpwrdn.d32 = 0;
31140 + gpwrdn.b.dis_vbus = 1;
31141 + DWC_MODIFY_REG32(&core_if->core_global_regs->gpwrdn, gpwrdn.d32, 0);
31142 + }
31143 +
31144 + if (gpwrdn_temp.b.idsts) {
31145 + core_if->op_state = B_PERIPHERAL;
31146 + dwc_otg_core_init(core_if);
31147 + dwc_otg_enable_global_interrupts(core_if);
31148 + cil_pcd_start(core_if);
31149 + } else {
31150 + core_if->op_state = A_HOST;
31151 + dwc_otg_core_init(core_if);
31152 + dwc_otg_enable_global_interrupts(core_if);
31153 + cil_hcd_start(core_if);
31154 + }
31155 + }
31156 +
31157 + if (core_if->adp_enable) {
31158 + uint8_t is_host = 0;
31159 + DWC_SPINUNLOCK(core_if->lock);
 31160 + /* Change the core_if's lock to the hcd/pcd lock depending on mode? */
31161 +#ifndef DWC_HOST_ONLY
31162 + if (gpwrdn_temp.b.idsts)
31163 + core_if->lock = otg_dev->pcd->lock;
31164 +#endif
31165 +#ifndef DWC_DEVICE_ONLY
31166 + if (!gpwrdn_temp.b.idsts) {
31167 + core_if->lock = otg_dev->hcd->lock;
31168 + is_host = 1;
31169 + }
31170 +#endif
31171 + DWC_PRINTF("RESTART ADP\n");
31172 + if (core_if->adp.probe_enabled)
31173 + dwc_otg_adp_probe_stop(core_if);
31174 + if (core_if->adp.sense_enabled)
31175 + dwc_otg_adp_sense_stop(core_if);
31176 + if (core_if->adp.sense_timer_started)
31177 + DWC_TIMER_CANCEL(core_if->adp.sense_timer);
31178 + if (core_if->adp.vbuson_timer_started)
31179 + DWC_TIMER_CANCEL(core_if->adp.vbuson_timer);
31180 + core_if->adp.probe_timer_values[0] = -1;
31181 + core_if->adp.probe_timer_values[1] = -1;
31182 + core_if->adp.sense_timer_started = 0;
31183 + core_if->adp.vbuson_timer_started = 0;
31184 + core_if->adp.probe_counter = 0;
31185 + core_if->adp.gpwrdn = 0;
31186 +
31187 + /* Disable PMU and restart ADP */
31188 + gpwrdn_temp.d32 = 0;
31189 + gpwrdn_temp.b.pmuactv = 1;
31190 + gpwrdn_temp.b.pmuintsel = 1;
31191 + DWC_MODIFY_REG32(&core_if->core_global_regs->gpwrdn, gpwrdn.d32, 0);
31192 + DWC_PRINTF("Check point 1\n");
31193 + dwc_mdelay(110);
31194 + dwc_otg_adp_start(core_if, is_host);
31195 + DWC_SPINLOCK(core_if->lock);
31196 + }
31197 +
31198 +
31199 + return 1;
31200 +}
31201 +
31202 +static int32_t dwc_otg_handle_pwrdn_session_change(dwc_otg_core_if_t * core_if)
31203 +{
31204 + gpwrdn_data_t gpwrdn = {.d32 = 0 };
31205 + int32_t otg_cap_param = core_if->core_params->otg_cap;
31206 + DWC_DEBUGPL(DBG_ANY, "%s called\n", __FUNCTION__);
31207 +
31208 + gpwrdn.d32 = DWC_READ_REG32(&core_if->core_global_regs->gpwrdn);
31209 + if (core_if->power_down == 2) {
31210 + if (!core_if->hibernation_suspend) {
31211 + DWC_PRINTF("Already exited from Hibernation\n");
31212 + return 1;
31213 + }
31214 +
 31215 + if ((otg_cap_param != DWC_OTG_CAP_PARAM_HNP_SRP_CAPABLE &&
31216 + otg_cap_param != DWC_OTG_CAP_PARAM_SRP_ONLY_CAPABLE) &&
31217 + gpwrdn.b.bsessvld == 0) {
31218 + /* Save gpwrdn register for further usage if stschng interrupt */
31219 + core_if->gr_backup->gpwrdn_local =
31220 + DWC_READ_REG32(&core_if->core_global_regs->gpwrdn);
31221 + /*Exit from ISR and wait for stschng interrupt with bsessvld = 1 */
31222 + return 1;
31223 + }
31224 +
31225 + /* Switch on the voltage to the core */
31226 + gpwrdn.d32 = 0;
31227 + gpwrdn.b.pwrdnswtch = 1;
31228 + DWC_MODIFY_REG32(&core_if->core_global_regs->gpwrdn, gpwrdn.d32, 0);
31229 + dwc_udelay(10);
31230 +
31231 + /* Reset the core */
31232 + gpwrdn.d32 = 0;
31233 + gpwrdn.b.pwrdnrstn = 1;
31234 + DWC_MODIFY_REG32(&core_if->core_global_regs->gpwrdn, gpwrdn.d32, 0);
31235 + dwc_udelay(10);
31236 +
31237 + /* Disable power clamps */
31238 + gpwrdn.d32 = 0;
31239 + gpwrdn.b.pwrdnclmp = 1;
31240 + DWC_MODIFY_REG32(&core_if->core_global_regs->gpwrdn, gpwrdn.d32, 0);
31241 +
 31242 + /* Remove the reset signal from the core */
31243 + gpwrdn.d32 = 0;
31244 + gpwrdn.b.pwrdnrstn = 1;
31245 + DWC_MODIFY_REG32(&core_if->core_global_regs->gpwrdn, 0, gpwrdn.d32);
31246 + dwc_udelay(10);
31247 +
31248 + /* Disable PMU interrupt */
31249 + gpwrdn.d32 = 0;
31250 + gpwrdn.b.pmuintsel = 1;
31251 + DWC_MODIFY_REG32(&core_if->core_global_regs->gpwrdn, gpwrdn.d32, 0);
31252 + dwc_udelay(10);
31253 +
31254 + /*Indicates that we are exiting from hibernation */
31255 + core_if->hibernation_suspend = 0;
31256 +
31257 + /* Disable PMU */
31258 + gpwrdn.d32 = 0;
31259 + gpwrdn.b.pmuactv = 1;
31260 + DWC_MODIFY_REG32(&core_if->core_global_regs->gpwrdn, gpwrdn.d32, 0);
31261 + dwc_udelay(10);
31262 +
31263 + core_if->op_state = B_PERIPHERAL;
31264 + dwc_otg_core_init(core_if);
31265 + dwc_otg_enable_global_interrupts(core_if);
31266 + cil_pcd_start(core_if);
31267 +
31268 + if (otg_cap_param == DWC_OTG_CAP_PARAM_HNP_SRP_CAPABLE ||
31269 + otg_cap_param == DWC_OTG_CAP_PARAM_SRP_ONLY_CAPABLE) {
31270 + /*
31271 + * Initiate SRP after initial ADP probe.
31272 + */
31273 + dwc_otg_initiate_srp(core_if);
31274 + }
31275 + }
31276 +
31277 + return 1;
31278 +}
31279 +/**
31280 + * This interrupt indicates that the Wakeup Logic has detected a
31281 + * status change either on IDDIG or BSessVld.
31282 + */
31283 +static uint32_t dwc_otg_handle_pwrdn_stschng_intr(dwc_otg_device_t *otg_dev)
31284 +{
 31285 + int retval = 0; /* Default when neither IDDIG nor BSessVld changed */
31286 + gpwrdn_data_t gpwrdn = {.d32 = 0 };
31287 + gpwrdn_data_t gpwrdn_temp = {.d32 = 0 };
31288 + dwc_otg_core_if_t *core_if = otg_dev->core_if;
31289 +
31290 + DWC_PRINTF("%s called\n", __FUNCTION__);
31291 +
31292 + if (core_if->power_down == 2) {
31293 + if (core_if->hibernation_suspend <= 0) {
31294 + DWC_PRINTF("Already exited from Hibernation\n");
31295 + return 1;
31296 + } else
31297 + gpwrdn_temp.d32 = core_if->gr_backup->gpwrdn_local;
31298 +
31299 + } else {
31300 + gpwrdn_temp.d32 = core_if->adp.gpwrdn;
31301 + }
31302 +
31303 + gpwrdn.d32 = DWC_READ_REG32(&core_if->core_global_regs->gpwrdn);
31304 +
31305 + if (gpwrdn.b.idsts ^ gpwrdn_temp.b.idsts) {
31306 + retval = dwc_otg_handle_pwrdn_idsts_change(otg_dev);
31307 + } else if (gpwrdn.b.bsessvld ^ gpwrdn_temp.b.bsessvld) {
31308 + retval = dwc_otg_handle_pwrdn_session_change(core_if);
31309 + }
31310 +
31311 + return retval;
31312 +}
31313 +
31314 +/**
31315 + * This interrupt indicates that the Wakeup Logic has detected a
31316 + * SRP.
31317 + */
31318 +static int32_t dwc_otg_handle_pwrdn_srp_intr(dwc_otg_core_if_t * core_if)
31319 +{
31320 + gpwrdn_data_t gpwrdn = {.d32 = 0 };
31321 +
31322 + DWC_PRINTF("%s called\n", __FUNCTION__);
31323 +
31324 + if (!core_if->hibernation_suspend) {
31325 + DWC_PRINTF("Already exited from Hibernation\n");
31326 + return 1;
31327 + }
31328 +#ifdef DWC_DEV_SRPCAP
31329 + if (core_if->pwron_timer_started) {
31330 + core_if->pwron_timer_started = 0;
31331 + DWC_TIMER_CANCEL(core_if->pwron_timer);
31332 + }
31333 +#endif
31334 +
31335 + /* Switch on the voltage to the core */
31336 + gpwrdn.b.pwrdnswtch = 1;
31337 + DWC_MODIFY_REG32(&core_if->core_global_regs->gpwrdn, gpwrdn.d32, 0);
31338 + dwc_udelay(10);
31339 +
31340 + /* Reset the core */
31341 + gpwrdn.d32 = 0;
31342 + gpwrdn.b.pwrdnrstn = 1;
31343 + DWC_MODIFY_REG32(&core_if->core_global_regs->gpwrdn, gpwrdn.d32, 0);
31344 + dwc_udelay(10);
31345 +
31346 + /* Disable power clamps */
31347 + gpwrdn.d32 = 0;
31348 + gpwrdn.b.pwrdnclmp = 1;
31349 + DWC_MODIFY_REG32(&core_if->core_global_regs->gpwrdn, gpwrdn.d32, 0);
31350 +
 31351 + /* Remove the reset signal from the core */
31352 + gpwrdn.d32 = 0;
31353 + gpwrdn.b.pwrdnrstn = 1;
31354 + DWC_MODIFY_REG32(&core_if->core_global_regs->gpwrdn, 0, gpwrdn.d32);
31355 + dwc_udelay(10);
31356 +
31357 + /* Disable PMU interrupt */
31358 + gpwrdn.d32 = 0;
31359 + gpwrdn.b.pmuintsel = 1;
31360 + DWC_MODIFY_REG32(&core_if->core_global_regs->gpwrdn, gpwrdn.d32, 0);
31361 +
31362 + /* Indicates that we are exiting from hibernation */
31363 + core_if->hibernation_suspend = 0;
31364 +
31365 + /* Disable PMU */
31366 + gpwrdn.d32 = 0;
31367 + gpwrdn.b.pmuactv = 1;
31368 + DWC_MODIFY_REG32(&core_if->core_global_regs->gpwrdn, gpwrdn.d32, 0);
31369 + dwc_udelay(10);
31370 +
 31371 + /* Program Disable VBUS to 0 */
31372 + gpwrdn.d32 = 0;
31373 + gpwrdn.b.dis_vbus = 1;
31374 + DWC_MODIFY_REG32(&core_if->core_global_regs->gpwrdn, gpwrdn.d32, 0);
31375 +
31376 + /*Initialize the core as Host */
31377 + core_if->op_state = A_HOST;
31378 + dwc_otg_core_init(core_if);
31379 + dwc_otg_enable_global_interrupts(core_if);
31380 + cil_hcd_start(core_if);
31381 +
31382 + return 1;
31383 +}
31384 +
 31385 +/** This interrupt indicates that the restore command after Hibernation
31386 + * was completed by the core. */
31387 +int32_t dwc_otg_handle_restore_done_intr(dwc_otg_core_if_t * core_if)
31388 +{
31389 + pcgcctl_data_t pcgcctl;
31390 + DWC_DEBUGPL(DBG_ANY, "++Restore Done Interrupt++\n");
31391 +
31392 + //TODO De-assert restore signal. 8.a
31393 + pcgcctl.d32 = DWC_READ_REG32(core_if->pcgcctl);
31394 + if (pcgcctl.b.restoremode == 1) {
31395 + gintmsk_data_t gintmsk = {.d32 = 0 };
31396 + /*
31397 + * If restore mode is Remote Wakeup,
31398 + * unmask Remote Wakeup interrupt.
31399 + */
31400 + gintmsk.b.wkupintr = 1;
31401 + DWC_MODIFY_REG32(&core_if->core_global_regs->gintmsk,
31402 + 0, gintmsk.d32);
31403 + }
31404 +
31405 + return 1;
31406 +}
31407 +
31408 +/**
31409 + * This interrupt indicates that a device has been disconnected from
31410 + * the root port.
31411 + */
31412 +int32_t dwc_otg_handle_disconnect_intr(dwc_otg_core_if_t * core_if)
31413 +{
31414 + gintsts_data_t gintsts;
31415 +
31416 + DWC_DEBUGPL(DBG_ANY, "++Disconnect Detected Interrupt++ (%s) %s\n",
31417 + (dwc_otg_is_host_mode(core_if) ? "Host" : "Device"),
31418 + op_state_str(core_if));
31419 +
31420 +/** @todo Consolidate this if statement. */
31421 +#ifndef DWC_HOST_ONLY
31422 + if (core_if->op_state == B_HOST) {
31423 + /* If in device mode, disconnect and stop the HCD, then
31424 + * start the PCD. */
31425 + DWC_SPINUNLOCK(core_if->lock);
31426 + cil_hcd_disconnect(core_if);
31427 + cil_pcd_start(core_if);
31428 + DWC_SPINLOCK(core_if->lock);
31429 + core_if->op_state = B_PERIPHERAL;
31430 + } else if (dwc_otg_is_device_mode(core_if)) {
31431 + gotgctl_data_t gotgctl = {.d32 = 0 };
31432 + gotgctl.d32 =
31433 + DWC_READ_REG32(&core_if->core_global_regs->gotgctl);
31434 + if (gotgctl.b.hstsethnpen == 1) {
31435 + /* Do nothing, if HNP in process the OTG
31436 + * interrupt "Host Negotiation Detected"
31437 + * interrupt will do the mode switch.
31438 + */
31439 + } else if (gotgctl.b.devhnpen == 0) {
31440 + /* If in device mode, disconnect and stop the HCD, then
31441 + * start the PCD. */
31442 + DWC_SPINUNLOCK(core_if->lock);
31443 + cil_hcd_disconnect(core_if);
31444 + cil_pcd_start(core_if);
31445 + DWC_SPINLOCK(core_if->lock);
31446 + core_if->op_state = B_PERIPHERAL;
31447 + } else {
31448 + DWC_DEBUGPL(DBG_ANY, "!a_peripheral && !devhnpen\n");
31449 + }
31450 + } else {
31451 + if (core_if->op_state == A_HOST) {
31452 + /* A-Cable still connected but device disconnected. */
31453 + DWC_SPINUNLOCK(core_if->lock);
31454 + cil_hcd_disconnect(core_if);
31455 + DWC_SPINLOCK(core_if->lock);
31456 + if (core_if->adp_enable) {
31457 + gpwrdn_data_t gpwrdn = { .d32 = 0 };
31458 + cil_hcd_stop(core_if);
31459 + /* Enable Power Down Logic */
31460 + gpwrdn.b.pmuintsel = 1;
31461 + gpwrdn.b.pmuactv = 1;
31462 + DWC_MODIFY_REG32(&core_if->core_global_regs->
31463 + gpwrdn, 0, gpwrdn.d32);
31464 + dwc_otg_adp_probe_start(core_if);
31465 +
31466 + /* Power off the core */
31467 + if (core_if->power_down == 2) {
31468 + gpwrdn.d32 = 0;
31469 + gpwrdn.b.pwrdnswtch = 1;
31470 + DWC_MODIFY_REG32
31471 + (&core_if->core_global_regs->gpwrdn,
31472 + gpwrdn.d32, 0);
31473 + }
31474 + }
31475 + }
31476 + }
31477 +#endif
31478 + /* Change to L3(OFF) state */
31479 + core_if->lx_state = DWC_OTG_L3;
31480 +
31481 + gintsts.d32 = 0;
31482 + gintsts.b.disconnect = 1;
31483 + DWC_WRITE_REG32(&core_if->core_global_regs->gintsts, gintsts.d32);
31484 + return 1;
31485 +}
31486 +
31487 +/**
31488 + * This interrupt indicates that SUSPEND state has been detected on
31489 + * the USB.
31490 + *
31491 + * For HNP the USB Suspend interrupt signals the change from
31492 + * "a_peripheral" to "a_host".
31493 + *
31494 + * When power management is enabled the core will be put in low power
31495 + * mode.
31496 + */
31497 +int32_t dwc_otg_handle_usb_suspend_intr(dwc_otg_core_if_t * core_if)
31498 +{
31499 + dsts_data_t dsts;
31500 + gintsts_data_t gintsts;
31501 + dcfg_data_t dcfg;
31502 +
31503 + DWC_DEBUGPL(DBG_ANY, "USB SUSPEND\n");
31504 +
31505 + if (dwc_otg_is_device_mode(core_if)) {
31506 + /* Check the Device status register to determine if the Suspend
31507 + * state is active. */
31508 + dsts.d32 =
31509 + DWC_READ_REG32(&core_if->dev_if->dev_global_regs->dsts);
31510 + DWC_DEBUGPL(DBG_PCD, "DSTS=0x%0x\n", dsts.d32);
31511 + DWC_DEBUGPL(DBG_PCD, "DSTS.Suspend Status=%d "
31512 + "HWCFG4.power Optimize=%d\n",
31513 + dsts.b.suspsts, core_if->hwcfg4.b.power_optimiz);
31514 +
31515 +#ifdef PARTIAL_POWER_DOWN
31516 +/** @todo Add a module parameter for power management. */
31517 +
31518 + if (dsts.b.suspsts && core_if->hwcfg4.b.power_optimiz) {
31519 + pcgcctl_data_t power = {.d32 = 0 };
31520 + DWC_DEBUGPL(DBG_CIL, "suspend\n");
31521 +
31522 + power.b.pwrclmp = 1;
31523 + DWC_WRITE_REG32(core_if->pcgcctl, power.d32);
31524 +
31525 + power.b.rstpdwnmodule = 1;
31526 + DWC_MODIFY_REG32(core_if->pcgcctl, 0, power.d32);
31527 +
31528 + power.b.stoppclk = 1;
31529 + DWC_MODIFY_REG32(core_if->pcgcctl, 0, power.d32);
31530 +
31531 + } else {
31532 + DWC_DEBUGPL(DBG_ANY, "disconnect?\n");
31533 + }
31534 +#endif
31535 + /* PCD callback for suspend. Release the lock inside the callback function */
31536 + cil_pcd_suspend(core_if);
31537 + if (core_if->power_down == 2)
31538 + {
31539 + dcfg.d32 = DWC_READ_REG32(&core_if->dev_if->dev_global_regs->dcfg);
31540 + DWC_DEBUGPL(DBG_ANY,"lx_state = %08x\n",core_if->lx_state);
31541 + DWC_DEBUGPL(DBG_ANY," device address = %08d\n",dcfg.b.devaddr);
31542 +
31543 + if (core_if->lx_state != DWC_OTG_L3 && dcfg.b.devaddr) {
31544 + pcgcctl_data_t pcgcctl = {.d32 = 0 };
31545 + gpwrdn_data_t gpwrdn = {.d32 = 0 };
31546 + gusbcfg_data_t gusbcfg = {.d32 = 0 };
31547 +
31548 + /* Change to L2(suspend) state */
31549 + core_if->lx_state = DWC_OTG_L2;
31550 +
31551 + /* Clear interrupt in gintsts */
31552 + gintsts.d32 = 0;
31553 + gintsts.b.usbsuspend = 1;
31554 + DWC_WRITE_REG32(&core_if->core_global_regs->
31555 + gintsts, gintsts.d32);
31556 + DWC_PRINTF("Start of hibernation completed\n");
31557 + dwc_otg_save_global_regs(core_if);
31558 + dwc_otg_save_dev_regs(core_if);
31559 +
31560 + gusbcfg.d32 =
31561 + DWC_READ_REG32(&core_if->core_global_regs->
31562 + gusbcfg);
31563 + if (gusbcfg.b.ulpi_utmi_sel == 1) {
31564 + /* ULPI interface */
31565 + /* Suspend the Phy Clock */
31566 + pcgcctl.d32 = 0;
31567 + pcgcctl.b.stoppclk = 1;
31568 + DWC_MODIFY_REG32(core_if->pcgcctl, 0,
31569 + pcgcctl.d32);
31570 + dwc_udelay(10);
31571 + gpwrdn.b.pmuactv = 1;
31572 + DWC_MODIFY_REG32(&core_if->
31573 + core_global_regs->
31574 + gpwrdn, 0, gpwrdn.d32);
31575 + } else {
31576 + /* UTMI+ Interface */
31577 + gpwrdn.b.pmuactv = 1;
31578 + DWC_MODIFY_REG32(&core_if->
31579 + core_global_regs->
31580 + gpwrdn, 0, gpwrdn.d32);
31581 + dwc_udelay(10);
31582 + pcgcctl.b.stoppclk = 1;
31583 + DWC_MODIFY_REG32(core_if->pcgcctl, 0,
31584 + pcgcctl.d32);
31585 + dwc_udelay(10);
31586 + }
31587 +
31588 + /* Set flag to indicate that we are in hibernation */
31589 + core_if->hibernation_suspend = 1;
31590 + /* Enable interrupts from wake up logic */
31591 + gpwrdn.d32 = 0;
31592 + gpwrdn.b.pmuintsel = 1;
31593 + DWC_MODIFY_REG32(&core_if->core_global_regs->
31594 + gpwrdn, 0, gpwrdn.d32);
31595 + dwc_udelay(10);
31596 +
31597 + /* Unmask device mode interrupts in GPWRDN */
31598 + gpwrdn.d32 = 0;
31599 + gpwrdn.b.rst_det_msk = 1;
31600 + gpwrdn.b.lnstchng_msk = 1;
31601 + gpwrdn.b.sts_chngint_msk = 1;
31602 + DWC_MODIFY_REG32(&core_if->core_global_regs->
31603 + gpwrdn, 0, gpwrdn.d32);
31604 + dwc_udelay(10);
31605 +
31606 + /* Enable Power Down Clamp */
31607 + gpwrdn.d32 = 0;
31608 + gpwrdn.b.pwrdnclmp = 1;
31609 + DWC_MODIFY_REG32(&core_if->core_global_regs->
31610 + gpwrdn, 0, gpwrdn.d32);
31611 + dwc_udelay(10);
31612 +
31613 + /* Switch off VDD */
31614 + gpwrdn.d32 = 0;
31615 + gpwrdn.b.pwrdnswtch = 1;
31616 + DWC_MODIFY_REG32(&core_if->core_global_regs->
31617 + gpwrdn, 0, gpwrdn.d32);
31618 +
31619 + /* Save the gpwrdn register for later use by the stschng interrupt handler */
31620 + core_if->gr_backup->gpwrdn_local =
31621 + DWC_READ_REG32(&core_if->core_global_regs->gpwrdn);
31622 + DWC_PRINTF("Hibernation completed\n");
31623 +
31624 + return 1;
31625 + }
31626 + } else if (core_if->power_down == 3) {
31627 + pcgcctl_data_t pcgcctl = {.d32 = 0 };
31628 + dcfg.d32 = DWC_READ_REG32(&core_if->dev_if->dev_global_regs->dcfg);
31629 + DWC_DEBUGPL(DBG_ANY, "lx_state = %08x\n",core_if->lx_state);
31630 + DWC_DEBUGPL(DBG_ANY, " device address = %08d\n",dcfg.b.devaddr);
31631 +
31632 + if (core_if->lx_state != DWC_OTG_L3 && dcfg.b.devaddr) {
31633 + DWC_DEBUGPL(DBG_ANY, "Start entering to extended hibernation\n");
31634 + core_if->xhib = 1;
31635 +
31636 + /* Clear interrupt in gintsts */
31637 + gintsts.d32 = 0;
31638 + gintsts.b.usbsuspend = 1;
31639 + DWC_WRITE_REG32(&core_if->core_global_regs->
31640 + gintsts, gintsts.d32);
31641 +
31642 + dwc_otg_save_global_regs(core_if);
31643 + dwc_otg_save_dev_regs(core_if);
31644 +
31645 + /* Wait for 10 PHY clocks */
31646 + dwc_udelay(10);
31647 +
31648 + /* Program GPIO register while entering xHib */
31649 + DWC_WRITE_REG32(&core_if->core_global_regs->ggpio, 0x1);
31650 +
31651 + pcgcctl.b.enbl_extnd_hiber = 1;
31652 + DWC_MODIFY_REG32(core_if->pcgcctl, 0, pcgcctl.d32);
31653 + DWC_MODIFY_REG32(core_if->pcgcctl, 0, pcgcctl.d32);
31654 +
31655 + pcgcctl.d32 = 0;
31656 + pcgcctl.b.extnd_hiber_pwrclmp = 1;
31657 + DWC_MODIFY_REG32(core_if->pcgcctl, 0, pcgcctl.d32);
31658 +
31659 + pcgcctl.d32 = 0;
31660 + pcgcctl.b.extnd_hiber_switch = 1;
31661 + core_if->gr_backup->xhib_gpwrdn = DWC_READ_REG32(&core_if->core_global_regs->gpwrdn);
31662 + core_if->gr_backup->xhib_pcgcctl = DWC_READ_REG32(core_if->pcgcctl) | pcgcctl.d32;
31663 + DWC_MODIFY_REG32(core_if->pcgcctl, 0, pcgcctl.d32);
31664 +
31665 + DWC_DEBUGPL(DBG_ANY, "Finished entering to extended hibernation\n");
31666 +
31667 + return 1;
31668 + }
31669 + }
31670 + } else {
31671 + if (core_if->op_state == A_PERIPHERAL) {
31672 + DWC_DEBUGPL(DBG_ANY, "a_peripheral->a_host\n");
31673 + /* Clear the a_peripheral flag, back to a_host. */
31674 + DWC_SPINUNLOCK(core_if->lock);
31675 + cil_pcd_stop(core_if);
31676 + cil_hcd_start(core_if);
31677 + DWC_SPINLOCK(core_if->lock);
31678 + core_if->op_state = A_HOST;
31679 + }
31680 + }
31681 +
31682 + /* Change to L2(suspend) state */
31683 + core_if->lx_state = DWC_OTG_L2;
31684 +
31685 + /* Clear interrupt */
31686 + gintsts.d32 = 0;
31687 + gintsts.b.usbsuspend = 1;
31688 + DWC_WRITE_REG32(&core_if->core_global_regs->gintsts, gintsts.d32);
31689 +
31690 + return 1;
31691 +}
31692 +
31693 +static int32_t dwc_otg_handle_xhib_exit_intr(dwc_otg_core_if_t * core_if)
31694 +{
31695 + gpwrdn_data_t gpwrdn = {.d32 = 0 };
31696 + pcgcctl_data_t pcgcctl = {.d32 = 0 };
31697 + gahbcfg_data_t gahbcfg = {.d32 = 0 };
31698 +
31699 + dwc_udelay(10);
31700 +
31701 + /* Program GPIO register while exiting from xHib */
31702 + DWC_WRITE_REG32(&core_if->core_global_regs->ggpio, 0x0);
31703 +
31704 + pcgcctl.d32 = core_if->gr_backup->xhib_pcgcctl;
31705 + pcgcctl.b.extnd_hiber_pwrclmp = 0;
31706 + DWC_WRITE_REG32(core_if->pcgcctl, pcgcctl.d32);
31707 + dwc_udelay(10);
31708 +
31709 + gpwrdn.d32 = core_if->gr_backup->xhib_gpwrdn;
31710 + gpwrdn.b.restore = 1;
31711 + DWC_WRITE_REG32(&core_if->core_global_regs->gpwrdn, gpwrdn.d32);
31712 + dwc_udelay(10);
31713 +
31714 + restore_lpm_i2c_regs(core_if);
31715 +
31716 + pcgcctl.d32 = core_if->gr_backup->pcgcctl_local & (0x3FFFF << 14);
31717 + pcgcctl.b.max_xcvrselect = 1;
31718 + pcgcctl.b.ess_reg_restored = 0;
31719 + pcgcctl.b.extnd_hiber_switch = 0;
31720 + pcgcctl.b.extnd_hiber_pwrclmp = 0;
31721 + pcgcctl.b.enbl_extnd_hiber = 1;
31722 + DWC_WRITE_REG32(core_if->pcgcctl, pcgcctl.d32);
31723 +
31724 + gahbcfg.d32 = core_if->gr_backup->gahbcfg_local;
31725 + gahbcfg.b.glblintrmsk = 1;
31726 + DWC_WRITE_REG32(&core_if->core_global_regs->gahbcfg, gahbcfg.d32);
31727 +
31728 + DWC_WRITE_REG32(&core_if->core_global_regs->gintsts, 0xFFFFFFFF);
31729 + DWC_WRITE_REG32(&core_if->core_global_regs->gintmsk, 0x1 << 16);
31730 +
31731 + DWC_WRITE_REG32(&core_if->core_global_regs->gusbcfg,
31732 + core_if->gr_backup->gusbcfg_local);
31733 + DWC_WRITE_REG32(&core_if->dev_if->dev_global_regs->dcfg,
31734 + core_if->dr_backup->dcfg);
31735 +
31736 + pcgcctl.d32 = 0;
31737 + pcgcctl.d32 = core_if->gr_backup->pcgcctl_local & (0x3FFFF << 14);
31738 + pcgcctl.b.max_xcvrselect = 1;
31739 + pcgcctl.d32 |= 0x608;
31740 + DWC_WRITE_REG32(core_if->pcgcctl, pcgcctl.d32);
31741 + dwc_udelay(10);
31742 +
31743 + pcgcctl.d32 = 0;
31744 + pcgcctl.d32 = core_if->gr_backup->pcgcctl_local & (0x3FFFF << 14);
31745 + pcgcctl.b.max_xcvrselect = 1;
31746 + pcgcctl.b.ess_reg_restored = 1;
31747 + pcgcctl.b.enbl_extnd_hiber = 1;
31748 + pcgcctl.b.rstpdwnmodule = 1;
31749 + pcgcctl.b.restoremode = 1;
31750 + DWC_WRITE_REG32(core_if->pcgcctl, pcgcctl.d32);
31751 +
31752 + DWC_DEBUGPL(DBG_ANY, "%s called\n", __FUNCTION__);
31753 +
31754 + return 1;
31755 +}
31756 +
31757 +#ifdef CONFIG_USB_DWC_OTG_LPM
31758 +/**
31759 + * This function handles the LPM transaction received interrupt.
31760 + */
31761 +static int32_t dwc_otg_handle_lpm_intr(dwc_otg_core_if_t * core_if)
31762 +{
31763 + glpmcfg_data_t lpmcfg;
31764 + gintsts_data_t gintsts;
31765 +
31766 + if (!core_if->core_params->lpm_enable) {
31767 + DWC_PRINTF("Unexpected LPM interrupt\n");
31768 + }
31769 +
31770 + lpmcfg.d32 = DWC_READ_REG32(&core_if->core_global_regs->glpmcfg);
31771 + DWC_PRINTF("LPM config register = 0x%08x\n", lpmcfg.d32);
31772 +
31773 + if (dwc_otg_is_host_mode(core_if)) {
31774 + cil_hcd_sleep(core_if);
31775 + } else {
31776 + lpmcfg.b.hird_thres |= (1 << 4);
31777 + DWC_WRITE_REG32(&core_if->core_global_regs->glpmcfg,
31778 + lpmcfg.d32);
31779 + }
31780 +
31781 + /* Examine prt_sleep_sts after TL1TokenRetry period max (10 us) */
31782 + dwc_udelay(10);
31783 + lpmcfg.d32 = DWC_READ_REG32(&core_if->core_global_regs->glpmcfg);
31784 + if (lpmcfg.b.prt_sleep_sts) {
31785 + /* Save the current state */
31786 + core_if->lx_state = DWC_OTG_L1;
31787 + }
31788 +
31789 + /* Clear interrupt */
31790 + gintsts.d32 = 0;
31791 + gintsts.b.lpmtranrcvd = 1;
31792 + DWC_WRITE_REG32(&core_if->core_global_regs->gintsts, gintsts.d32);
31793 + return 1;
31794 +}
31795 +#endif /* CONFIG_USB_DWC_OTG_LPM */
31796 +
31797 +/**
31798 + * This function returns the Core Interrupt register.
31799 + */
31800 +static inline uint32_t dwc_otg_read_common_intr(dwc_otg_core_if_t * core_if, gintmsk_data_t *reenable_gintmsk, dwc_otg_hcd_t *hcd)
31801 +{
31802 + gahbcfg_data_t gahbcfg = {.d32 = 0 };
31803 + gintsts_data_t gintsts;
31804 + gintmsk_data_t gintmsk;
31805 + gintmsk_data_t gintmsk_common = {.d32 = 0 };
31806 + gintmsk_common.b.wkupintr = 1;
31807 + gintmsk_common.b.sessreqintr = 1;
31808 + gintmsk_common.b.conidstschng = 1;
31809 + gintmsk_common.b.otgintr = 1;
31810 + gintmsk_common.b.modemismatch = 1;
31811 + gintmsk_common.b.disconnect = 1;
31812 + gintmsk_common.b.usbsuspend = 1;
31813 +#ifdef CONFIG_USB_DWC_OTG_LPM
31814 + gintmsk_common.b.lpmtranrcvd = 1;
31815 +#endif
31816 + gintmsk_common.b.restoredone = 1;
31817 + if(dwc_otg_is_device_mode(core_if))
31818 + {
31819 + /** @todo: The port interrupt occurs while in device
31820 + * mode. Added code to CIL to clear the interrupt for now!
31821 + */
31822 + gintmsk_common.b.portintr = 1;
31823 + }
31824 + if(fiq_enable) {
31825 + local_fiq_disable();
31826 + fiq_fsm_spin_lock(&hcd->fiq_state->lock);
31827 + gintsts.d32 = DWC_READ_REG32(&core_if->core_global_regs->gintsts);
31828 + gintmsk.d32 = DWC_READ_REG32(&core_if->core_global_regs->gintmsk);
31829 + /* Pull in the interrupts that the FIQ has masked */
31830 + gintmsk.d32 |= ~(hcd->fiq_state->gintmsk_saved.d32);
31831 + gintmsk.d32 |= gintmsk_common.d32;
31832 + /* for the caller to re-enable - have to read it here in case the FIQ triggers again */
31833 + reenable_gintmsk->d32 = gintmsk.d32;
31834 + fiq_fsm_spin_unlock(&hcd->fiq_state->lock);
31835 + local_fiq_enable();
31836 + } else {
31837 + gintsts.d32 = DWC_READ_REG32(&core_if->core_global_regs->gintsts);
31838 + gintmsk.d32 = DWC_READ_REG32(&core_if->core_global_regs->gintmsk);
31839 + }
31840 +
31841 + gahbcfg.d32 = DWC_READ_REG32(&core_if->core_global_regs->gahbcfg);
31842 +
31843 +#ifdef DEBUG
31844 + /* if any common interrupts set */
31845 + if (gintsts.d32 & gintmsk_common.d32) {
31846 + DWC_DEBUGPL(DBG_ANY, "common_intr: gintsts=%08x gintmsk=%08x\n",
31847 + gintsts.d32, gintmsk.d32);
31848 + }
31849 +#endif
31850 + if (!fiq_enable){
31851 + if (gahbcfg.b.glblintrmsk)
31852 + return ((gintsts.d32 & gintmsk.d32) & gintmsk_common.d32);
31853 + else
31854 + return 0;
31855 + } else {
31856 + /* Our IRQ kicker is no longer the USB hardware, it's the MPHI interface.
31857 + * Can't trust the global interrupt mask bit in this case.
31858 + */
31859 + return ((gintsts.d32 & gintmsk.d32) & gintmsk_common.d32);
31860 + }
31861 +
31862 +}
31863 +
31864 +/* Macro for clearing interrupt bits in the GPWRDN register */
31865 +#define CLEAR_GPWRDN_INTR(__core_if,__intr) \
31866 +do { \
31867 + gpwrdn_data_t gpwrdn = {.d32=0}; \
31868 + gpwrdn.b.__intr = 1; \
31869 + DWC_MODIFY_REG32(&__core_if->core_global_regs->gpwrdn, \
31870 + 0, gpwrdn.d32); \
31871 +} while (0)
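For illustration, a single use of the macro above, e.g. CLEAR_GPWRDN_INTR(core_if, srp_det), expands to roughly the following statement:

do {
	gpwrdn_data_t gpwrdn = {.d32 = 0};
	/* Select the srp_det bit and set it in GPWRDN via read-modify-write;
	 * this is how the handler below acknowledges the interrupt source. */
	gpwrdn.b.srp_det = 1;
	DWC_MODIFY_REG32(&core_if->core_global_regs->gpwrdn, 0, gpwrdn.d32);
} while (0);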
31872 +
31873 +/**
31874 + * Common interrupt handler.
31875 + *
31876 + * The common interrupts are those that occur in both Host and Device mode.
31877 + * This handler handles the following interrupts:
31878 + * - Mode Mismatch Interrupt
31879 + * - Disconnect Interrupt
31880 + * - OTG Interrupt
31881 + * - Connector ID Status Change Interrupt
31882 + * - Session Request Interrupt.
31883 + * - Resume / Remote Wakeup Detected Interrupt.
31884 + * - LPM Transaction Received Interrupt
31885 + * - ADP Transaction Received Interrupt
31886 + *
31887 + */
31888 +int32_t dwc_otg_handle_common_intr(void *dev)
31889 +{
31890 + int retval = 0;
31891 + gintsts_data_t gintsts;
31892 + gintmsk_data_t gintmsk_reenable = { .d32 = 0 };
31893 + gpwrdn_data_t gpwrdn = {.d32 = 0 };
31894 + dwc_otg_device_t *otg_dev = dev;
31895 + dwc_otg_core_if_t *core_if = otg_dev->core_if;
31896 + gpwrdn.d32 = DWC_READ_REG32(&core_if->core_global_regs->gpwrdn);
31897 + if (dwc_otg_is_device_mode(core_if))
31898 + core_if->frame_num = dwc_otg_get_frame_number(core_if);
31899 +
31900 + if (core_if->lock)
31901 + DWC_SPINLOCK(core_if->lock);
31902 +
31903 + if (core_if->power_down == 3 && core_if->xhib == 1) {
31904 + DWC_DEBUGPL(DBG_ANY, "Exiting from xHIB state\n");
31905 + retval |= dwc_otg_handle_xhib_exit_intr(core_if);
31906 + core_if->xhib = 2;
31907 + if (core_if->lock)
31908 + DWC_SPINUNLOCK(core_if->lock);
31909 +
31910 + return retval;
31911 + }
31912 +
31913 + if (core_if->hibernation_suspend <= 0) {
31914 + /* read_common will have to poke the FIQ's saved mask. We must then clear this mask at the end
31915 + * of this handler - god only knows why it's done like this
31916 + */
31917 + gintsts.d32 = dwc_otg_read_common_intr(core_if, &gintmsk_reenable, otg_dev->hcd);
31918 +
31919 + if (gintsts.b.modemismatch) {
31920 + retval |= dwc_otg_handle_mode_mismatch_intr(core_if);
31921 + }
31922 + if (gintsts.b.otgintr) {
31923 + retval |= dwc_otg_handle_otg_intr(core_if);
31924 + }
31925 + if (gintsts.b.conidstschng) {
31926 + retval |=
31927 + dwc_otg_handle_conn_id_status_change_intr(core_if);
31928 + }
31929 + if (gintsts.b.disconnect) {
31930 + retval |= dwc_otg_handle_disconnect_intr(core_if);
31931 + }
31932 + if (gintsts.b.sessreqintr) {
31933 + retval |= dwc_otg_handle_session_req_intr(core_if);
31934 + }
31935 + if (gintsts.b.wkupintr) {
31936 + retval |= dwc_otg_handle_wakeup_detected_intr(core_if);
31937 + }
31938 + if (gintsts.b.usbsuspend) {
31939 + retval |= dwc_otg_handle_usb_suspend_intr(core_if);
31940 + }
31941 +#ifdef CONFIG_USB_DWC_OTG_LPM
31942 + if (gintsts.b.lpmtranrcvd) {
31943 + retval |= dwc_otg_handle_lpm_intr(core_if);
31944 + }
31945 +#endif
31946 + if (gintsts.b.restoredone) {
31947 + gintsts.d32 = 0;
31948 + if (core_if->power_down == 2)
31949 + core_if->hibernation_suspend = -1;
31950 + else if (core_if->power_down == 3 && core_if->xhib == 2) {
31951 + gpwrdn_data_t gpwrdn = {.d32 = 0 };
31952 + pcgcctl_data_t pcgcctl = {.d32 = 0 };
31953 + dctl_data_t dctl = {.d32 = 0 };
31954 +
31955 + DWC_WRITE_REG32(&core_if->core_global_regs->
31956 + gintsts, 0xFFFFFFFF);
31957 +
31958 + DWC_DEBUGPL(DBG_ANY,
31959 + "RESTORE DONE generated\n");
31960 +
31961 + gpwrdn.b.restore = 1;
31962 + DWC_MODIFY_REG32(&core_if->core_global_regs->gpwrdn, gpwrdn.d32, 0);
31963 + dwc_udelay(10);
31964 +
31965 + pcgcctl.b.rstpdwnmodule = 1;
31966 + DWC_MODIFY_REG32(core_if->pcgcctl, pcgcctl.d32, 0);
31967 +
31968 + DWC_WRITE_REG32(&core_if->core_global_regs->gusbcfg, core_if->gr_backup->gusbcfg_local);
31969 + DWC_WRITE_REG32(&core_if->dev_if->dev_global_regs->dcfg, core_if->dr_backup->dcfg);
31970 + DWC_WRITE_REG32(&core_if->dev_if->dev_global_regs->dctl, core_if->dr_backup->dctl);
31971 + dwc_udelay(50);
31972 +
31973 + dctl.b.pwronprgdone = 1;
31974 + DWC_MODIFY_REG32(&core_if->dev_if->dev_global_regs->dctl, 0, dctl.d32);
31975 + dwc_udelay(10);
31976 +
31977 + dwc_otg_restore_global_regs(core_if);
31978 + dwc_otg_restore_dev_regs(core_if, 0);
31979 +
31980 + dctl.d32 = 0;
31981 + dctl.b.pwronprgdone = 1;
31982 + DWC_MODIFY_REG32(&core_if->dev_if->dev_global_regs->dctl, dctl.d32, 0);
31983 + dwc_udelay(10);
31984 +
31985 + pcgcctl.d32 = 0;
31986 + pcgcctl.b.enbl_extnd_hiber = 1;
31987 + DWC_MODIFY_REG32(core_if->pcgcctl, pcgcctl.d32, 0);
31988 +
31989 + /* The core will be in ON STATE */
31990 + core_if->lx_state = DWC_OTG_L0;
31991 + core_if->xhib = 0;
31992 +
31993 + DWC_SPINUNLOCK(core_if->lock);
31994 + if (core_if->pcd_cb && core_if->pcd_cb->resume_wakeup) {
31995 + core_if->pcd_cb->resume_wakeup(core_if->pcd_cb->p);
31996 + }
31997 + DWC_SPINLOCK(core_if->lock);
31998 +
31999 + }
32000 +
32001 + gintsts.b.restoredone = 1;
32002 + DWC_WRITE_REG32(&core_if->core_global_regs->gintsts,gintsts.d32);
32003 + DWC_PRINTF(" --Restore done interrupt received-- \n");
32004 + retval |= 1;
32005 + }
32006 + if (gintsts.b.portintr && dwc_otg_is_device_mode(core_if)) {
32007 + /* The port interrupt occurs while in device mode with HPRT0
32008 + * Port Enable/Disable.
32009 + */
32010 + gintsts.d32 = 0;
32011 + gintsts.b.portintr = 1;
32012 + DWC_WRITE_REG32(&core_if->core_global_regs->gintsts,gintsts.d32);
32013 + retval |= 1;
32014 + gintmsk_reenable.b.portintr = 1;
32015 +
32016 + }
32017 + /* Did we actually handle anything? If so, unmask the interrupt */
32018 +// fiq_print(FIQDBG_INT, otg_dev->hcd->fiq_state, "CILOUT %1d", retval);
32019 +// fiq_print(FIQDBG_INT, otg_dev->hcd->fiq_state, "%08x", gintsts.d32);
32020 +// fiq_print(FIQDBG_INT, otg_dev->hcd->fiq_state, "%08x", gintmsk_reenable.d32);
32021 + if (retval && fiq_enable) {
32022 + DWC_WRITE_REG32(&core_if->core_global_regs->gintmsk, gintmsk_reenable.d32);
32023 + }
32024 +
32025 + } else {
32026 + DWC_DEBUGPL(DBG_ANY, "gpwrdn=%08x\n", gpwrdn.d32);
32027 +
32028 + if (gpwrdn.b.disconn_det && gpwrdn.b.disconn_det_msk) {
32029 + CLEAR_GPWRDN_INTR(core_if, disconn_det);
32030 + if (gpwrdn.b.linestate == 0) {
32031 + dwc_otg_handle_pwrdn_disconnect_intr(core_if);
32032 + } else {
32033 + DWC_PRINTF("Disconnect detected while linestate is not 0\n");
32034 + }
32035 +
32036 + retval |= 1;
32037 + }
32038 + if (gpwrdn.b.lnstschng && gpwrdn.b.lnstchng_msk) {
32039 + CLEAR_GPWRDN_INTR(core_if, lnstschng);
32040 + /* remote wakeup from hibernation */
32041 + if (gpwrdn.b.linestate == 2 || gpwrdn.b.linestate == 1) {
32042 + dwc_otg_handle_pwrdn_wakeup_detected_intr(core_if);
32043 + } else {
32044 + DWC_PRINTF("gpwrdn.linestate = %d\n", gpwrdn.b.linestate);
32045 + }
32046 + retval |= 1;
32047 + }
32048 + if (gpwrdn.b.rst_det && gpwrdn.b.rst_det_msk) {
32049 + CLEAR_GPWRDN_INTR(core_if, rst_det);
32050 + if (gpwrdn.b.linestate == 0) {
32051 + DWC_PRINTF("Reset detected\n");
32052 + retval |= dwc_otg_device_hibernation_restore(core_if, 0, 1);
32053 + }
32054 + }
32055 + if (gpwrdn.b.srp_det && gpwrdn.b.srp_det_msk) {
32056 + CLEAR_GPWRDN_INTR(core_if, srp_det);
32057 + dwc_otg_handle_pwrdn_srp_intr(core_if);
32058 + retval |= 1;
32059 + }
32060 + }
32061 + /* Handle ADP interrupt here */
32062 + if (gpwrdn.b.adp_int) {
32063 + DWC_PRINTF("ADP interrupt\n");
32064 + CLEAR_GPWRDN_INTR(core_if, adp_int);
32065 + dwc_otg_adp_handle_intr(core_if);
32066 + retval |= 1;
32067 + }
32068 + if (gpwrdn.b.sts_chngint && gpwrdn.b.sts_chngint_msk) {
32069 + DWC_PRINTF("STS CHNG interrupt asserted\n");
32070 + CLEAR_GPWRDN_INTR(core_if, sts_chngint);
32071 + dwc_otg_handle_pwrdn_stschng_intr(otg_dev);
32072 +
32073 + retval |= 1;
32074 + }
32075 + if (core_if->lock)
32076 + DWC_SPINUNLOCK(core_if->lock);
32077 + return retval;
32078 +}
32079 --- /dev/null
32080 +++ b/drivers/usb/host/dwc_otg/dwc_otg_core_if.h
32081 @@ -0,0 +1,705 @@
32082 +/* ==========================================================================
32083 + * $File: //dwh/usb_iip/dev/software/otg/linux/drivers/dwc_otg_core_if.h $
32084 + * $Revision: #13 $
32085 + * $Date: 2012/08/10 $
32086 + * $Change: 2047372 $
32087 + *
32088 + * Synopsys HS OTG Linux Software Driver and documentation (hereinafter,
32089 + * "Software") is an Unsupported proprietary work of Synopsys, Inc. unless
32090 + * otherwise expressly agreed to in writing between Synopsys and you.
32091 + *
32092 + * The Software IS NOT an item of Licensed Software or Licensed Product under
32093 + * any End User Software License Agreement or Agreement for Licensed Product
32094 + * with Synopsys or any supplement thereto. You are permitted to use and
32095 + * redistribute this Software in source and binary forms, with or without
32096 + * modification, provided that redistributions of source code must retain this
32097 + * notice. You may not view, use, disclose, copy or distribute this file or
32098 + * any information contained herein except pursuant to this license grant from
32099 + * Synopsys. If you do not agree with this notice, including the disclaimer
32100 + * below, then you are not authorized to use the Software.
32101 + *
32102 + * THIS SOFTWARE IS BEING DISTRIBUTED BY SYNOPSYS SOLELY ON AN "AS IS" BASIS
32103 + * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
32104 + * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
32105 + * ARE HEREBY DISCLAIMED. IN NO EVENT SHALL SYNOPSYS BE LIABLE FOR ANY DIRECT,
32106 + * INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
32107 + * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
32108 + * SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
32109 + * CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
32110 + * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
32111 + * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH
32112 + * DAMAGE.
32113 + * ========================================================================== */
32114 +#if !defined(__DWC_CORE_IF_H__)
32115 +#define __DWC_CORE_IF_H__
32116 +
32117 +#include "dwc_os.h"
32118 +
32119 +/** @file
32120 + * This file defines DWC_OTG Core API
32121 + */
32122 +
32123 +struct dwc_otg_core_if;
32124 +typedef struct dwc_otg_core_if dwc_otg_core_if_t;
32125 +
32126 +/** Maximum number of Periodic FIFOs */
32127 +#define MAX_PERIO_FIFOS 15
32128 +/** Maximum number of Tx FIFOs */
32129 +#define MAX_TX_FIFOS 15
32130 +
32131 +/** Maximum number of Endpoints/HostChannels */
32132 +#define MAX_EPS_CHANNELS 16
32133 +
32134 +extern dwc_otg_core_if_t *dwc_otg_cil_init(const uint32_t * _reg_base_addr);
32135 +extern void dwc_otg_core_init(dwc_otg_core_if_t * _core_if);
32136 +extern void dwc_otg_cil_remove(dwc_otg_core_if_t * _core_if);
32137 +
32138 +extern void dwc_otg_enable_global_interrupts(dwc_otg_core_if_t * _core_if);
32139 +extern void dwc_otg_disable_global_interrupts(dwc_otg_core_if_t * _core_if);
32140 +
32141 +extern uint8_t dwc_otg_is_device_mode(dwc_otg_core_if_t * _core_if);
32142 +extern uint8_t dwc_otg_is_host_mode(dwc_otg_core_if_t * _core_if);
32143 +
32144 +extern uint8_t dwc_otg_is_dma_enable(dwc_otg_core_if_t * core_if);
32145 +
32146 +/** This function should be called on every hardware interrupt. */
32147 +extern int32_t dwc_otg_handle_common_intr(void *otg_dev);
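A minimal sketch of an IRQ wrapper that satisfies the note above; the wrapper name is hypothetical, and it assumes the pointer registered with request_irq() is the dwc_otg_device_t:

#include <linux/interrupt.h>

static irqreturn_t example_common_irq(int irq, void *dev)
{
	/* dev is assumed to be the dwc_otg_device_t handed to request_irq(). */
	int32_t handled = dwc_otg_handle_common_intr(dev);

	return IRQ_RETVAL(handled);
}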
32148 +
32149 +/** @name OTG Core Parameters */
32150 +/** @{ */
32151 +
32152 +/**
32153 + * Specifies the OTG capabilities. The driver will automatically
32154 + * detect the value for this parameter if none is specified.
32155 + * 0 - HNP and SRP capable (default)
32156 + * 1 - SRP Only capable
32157 + * 2 - No HNP/SRP capable
32158 + */
32159 +extern int dwc_otg_set_param_otg_cap(dwc_otg_core_if_t * core_if, int32_t val);
32160 +extern int32_t dwc_otg_get_param_otg_cap(dwc_otg_core_if_t * core_if);
32161 +#define DWC_OTG_CAP_PARAM_HNP_SRP_CAPABLE 0
32162 +#define DWC_OTG_CAP_PARAM_SRP_ONLY_CAPABLE 1
32163 +#define DWC_OTG_CAP_PARAM_NO_HNP_SRP_CAPABLE 2
32164 +#define dwc_param_otg_cap_default DWC_OTG_CAP_PARAM_HNP_SRP_CAPABLE
32165 +
32166 +extern int dwc_otg_set_param_opt(dwc_otg_core_if_t * core_if, int32_t val);
32167 +extern int32_t dwc_otg_get_param_opt(dwc_otg_core_if_t * core_if);
32168 +#define dwc_param_opt_default 1
32169 +
32170 +/**
32171 + * Specifies whether to use slave or DMA mode for accessing the data
32172 + * FIFOs. The driver will automatically detect the value for this
32173 + * parameter if none is specified.
32174 + * 0 - Slave
32175 + * 1 - DMA (default, if available)
32176 + */
32177 +extern int dwc_otg_set_param_dma_enable(dwc_otg_core_if_t * core_if,
32178 + int32_t val);
32179 +extern int32_t dwc_otg_get_param_dma_enable(dwc_otg_core_if_t * core_if);
32180 +#define dwc_param_dma_enable_default 1
32181 +
32182 +/**
32183 + * When DMA mode is enabled specifies whether to use
32184 + * address DMA or DMA Descriptor mode for accessing the data
32185 + * FIFOs in device mode. The driver will automatically detect
32186 + * the value for this parameter if none is specified.
32187 + * 0 - address DMA
32188 + * 1 - DMA Descriptor(default, if available)
32189 + */
32190 +extern int dwc_otg_set_param_dma_desc_enable(dwc_otg_core_if_t * core_if,
32191 + int32_t val);
32192 +extern int32_t dwc_otg_get_param_dma_desc_enable(dwc_otg_core_if_t * core_if);
32193 +//#define dwc_param_dma_desc_enable_default 1
32194 +#define dwc_param_dma_desc_enable_default 0 // Broadcom BCM2708
32195 +
32196 +/** The DMA Burst size (applicable only for External DMA
32197 + * Mode). 1, 4, 8, 16, 32, 64, 128, 256 (default 32)
32198 + */
32199 +extern int dwc_otg_set_param_dma_burst_size(dwc_otg_core_if_t * core_if,
32200 + int32_t val);
32201 +extern int32_t dwc_otg_get_param_dma_burst_size(dwc_otg_core_if_t * core_if);
32202 +#define dwc_param_dma_burst_size_default 32
32203 +
32204 +/**
32205 + * Specifies the maximum speed of operation in host and device mode.
32206 + * The actual speed of operation depends on the speed of the
32207 + * attached device and on the value of the phy_type parameter
32208 + * documented below.
32209 + * 0 - High Speed (default)
32210 + * 1 - Full Speed
32211 + */
32212 +extern int dwc_otg_set_param_speed(dwc_otg_core_if_t * core_if, int32_t val);
32213 +extern int32_t dwc_otg_get_param_speed(dwc_otg_core_if_t * core_if);
32214 +#define dwc_param_speed_default 0
32215 +#define DWC_SPEED_PARAM_HIGH 0
32216 +#define DWC_SPEED_PARAM_FULL 1
32217 +
32218 +/** Specifies whether low power mode is supported when attached
32219 + * to a Full Speed or Low Speed device in host mode.
32220 + * 0 - Don't support low power mode (default)
32221 + * 1 - Support low power mode
32222 + */
32223 +extern int dwc_otg_set_param_host_support_fs_ls_low_power(dwc_otg_core_if_t *
32224 + core_if, int32_t val);
32225 +extern int32_t dwc_otg_get_param_host_support_fs_ls_low_power(dwc_otg_core_if_t
32226 + * core_if);
32227 +#define dwc_param_host_support_fs_ls_low_power_default 0
32228 +
32229 +/** Specifies the PHY clock rate in low power mode when connected to a
32230 + * Low Speed device in host mode. This parameter is applicable only if
32231 + * HOST_SUPPORT_FS_LS_LOW_POWER is enabled. If PHY_TYPE is set to FS,
32232 + * this defaults to 6 MHz, otherwise to 48 MHz.
32233 + *
32234 + * 0 - 48 MHz
32235 + * 1 - 6 MHz
32236 + */
32237 +extern int dwc_otg_set_param_host_ls_low_power_phy_clk(dwc_otg_core_if_t *
32238 + core_if, int32_t val);
32239 +extern int32_t dwc_otg_get_param_host_ls_low_power_phy_clk(dwc_otg_core_if_t *
32240 + core_if);
32241 +#define dwc_param_host_ls_low_power_phy_clk_default 0
32242 +#define DWC_HOST_LS_LOW_POWER_PHY_CLK_PARAM_48MHZ 0
32243 +#define DWC_HOST_LS_LOW_POWER_PHY_CLK_PARAM_6MHZ 1
32244 +
32245 +/**
32246 + * 0 - Use cC FIFO size parameters
32247 + * 1 - Allow dynamic FIFO sizing (default)
32248 + */
32249 +extern int dwc_otg_set_param_enable_dynamic_fifo(dwc_otg_core_if_t * core_if,
32250 + int32_t val);
32251 +extern int32_t dwc_otg_get_param_enable_dynamic_fifo(dwc_otg_core_if_t *
32252 + core_if);
32253 +#define dwc_param_enable_dynamic_fifo_default 1
32254 +
32255 +/** Total number of 4-byte words in the data FIFO memory. This
32256 + * memory includes the Rx FIFO, non-periodic Tx FIFO, and periodic
32257 + * Tx FIFOs.
32258 + * 32 to 32768 (default 8192)
32259 + * Note: The total FIFO memory depth in the FPGA configuration is 8192.
32260 + */
32261 +extern int dwc_otg_set_param_data_fifo_size(dwc_otg_core_if_t * core_if,
32262 + int32_t val);
32263 +extern int32_t dwc_otg_get_param_data_fifo_size(dwc_otg_core_if_t * core_if);
32264 +//#define dwc_param_data_fifo_size_default 8192
32265 +#define dwc_param_data_fifo_size_default 0xFF0 // Broadcom BCM2708
32266 +
32267 +/** Number of 4-byte words in the Rx FIFO in device mode when dynamic
32268 + * FIFO sizing is enabled.
32269 + * 16 to 32768 (default 1064)
32270 + */
32271 +extern int dwc_otg_set_param_dev_rx_fifo_size(dwc_otg_core_if_t * core_if,
32272 + int32_t val);
32273 +extern int32_t dwc_otg_get_param_dev_rx_fifo_size(dwc_otg_core_if_t * core_if);
32274 +#define dwc_param_dev_rx_fifo_size_default 1064
32275 +
32276 +/** Number of 4-byte words in the non-periodic Tx FIFO in device mode
32277 + * when dynamic FIFO sizing is enabled.
32278 + * 16 to 32768 (default 1024)
32279 + */
32280 +extern int dwc_otg_set_param_dev_nperio_tx_fifo_size(dwc_otg_core_if_t *
32281 + core_if, int32_t val);
32282 +extern int32_t dwc_otg_get_param_dev_nperio_tx_fifo_size(dwc_otg_core_if_t *
32283 + core_if);
32284 +#define dwc_param_dev_nperio_tx_fifo_size_default 1024
32285 +
32286 +/** Number of 4-byte words in each of the periodic Tx FIFOs in device
32287 + * mode when dynamic FIFO sizing is enabled.
32288 + * 4 to 768 (default 256)
32289 + */
32290 +extern int dwc_otg_set_param_dev_perio_tx_fifo_size(dwc_otg_core_if_t * core_if,
32291 + int32_t val, int fifo_num);
32292 +extern int32_t dwc_otg_get_param_dev_perio_tx_fifo_size(dwc_otg_core_if_t *
32293 + core_if, int fifo_num);
32294 +#define dwc_param_dev_perio_tx_fifo_size_default 256
32295 +
32296 +/** Number of 4-byte words in the Rx FIFO in host mode when dynamic
32297 + * FIFO sizing is enabled.
32298 + * 16 to 32768 (default 1024)
32299 + */
32300 +extern int dwc_otg_set_param_host_rx_fifo_size(dwc_otg_core_if_t * core_if,
32301 + int32_t val);
32302 +extern int32_t dwc_otg_get_param_host_rx_fifo_size(dwc_otg_core_if_t * core_if);
32303 +//#define dwc_param_host_rx_fifo_size_default 1024
32304 +#define dwc_param_host_rx_fifo_size_default 774 // Broadcom BCM2708
32305 +
32306 +/** Number of 4-byte words in the non-periodic Tx FIFO in host mode
32307 + * when Dynamic FIFO sizing is enabled in the core.
32308 + * 16 to 32768 (default 1024)
32309 + */
32310 +extern int dwc_otg_set_param_host_nperio_tx_fifo_size(dwc_otg_core_if_t *
32311 + core_if, int32_t val);
32312 +extern int32_t dwc_otg_get_param_host_nperio_tx_fifo_size(dwc_otg_core_if_t *
32313 + core_if);
32314 +//#define dwc_param_host_nperio_tx_fifo_size_default 1024
32315 +#define dwc_param_host_nperio_tx_fifo_size_default 0x100 // Broadcom BCM2708
32316 +
32317 +/** Number of 4-byte words in the host periodic Tx FIFO when dynamic
32318 + * FIFO sizing is enabled.
32319 + * 16 to 32768 (default 1024)
32320 + */
32321 +extern int dwc_otg_set_param_host_perio_tx_fifo_size(dwc_otg_core_if_t *
32322 + core_if, int32_t val);
32323 +extern int32_t dwc_otg_get_param_host_perio_tx_fifo_size(dwc_otg_core_if_t *
32324 + core_if);
32325 +//#define dwc_param_host_perio_tx_fifo_size_default 1024
32326 +#define dwc_param_host_perio_tx_fifo_size_default 0x200 // Broadcom BCM2708
32327 +
32328 +/** The maximum transfer size supported in bytes.
32329 + * 2047 to 65,535 (default 65,535)
32330 + */
32331 +extern int dwc_otg_set_param_max_transfer_size(dwc_otg_core_if_t * core_if,
32332 + int32_t val);
32333 +extern int32_t dwc_otg_get_param_max_transfer_size(dwc_otg_core_if_t * core_if);
32334 +#define dwc_param_max_transfer_size_default 65535
32335 +
32336 +/** The maximum number of packets in a transfer.
32337 + * 15 to 511 (default 511)
32338 + */
32339 +extern int dwc_otg_set_param_max_packet_count(dwc_otg_core_if_t * core_if,
32340 + int32_t val);
32341 +extern int32_t dwc_otg_get_param_max_packet_count(dwc_otg_core_if_t * core_if);
32342 +#define dwc_param_max_packet_count_default 511
32343 +
32344 +/** The number of host channel registers to use.
32345 + * 1 to 16 (default 12)
32346 + * Note: The FPGA configuration supports a maximum of 12 host channels.
32347 + */
32348 +extern int dwc_otg_set_param_host_channels(dwc_otg_core_if_t * core_if,
32349 + int32_t val);
32350 +extern int32_t dwc_otg_get_param_host_channels(dwc_otg_core_if_t * core_if);
32351 +//#define dwc_param_host_channels_default 12
32352 +#define dwc_param_host_channels_default 8 // Broadcom BCM2708
32353 +
32354 +/** The number of endpoints in addition to EP0 available for device
32355 + * mode operations.
32356 + * 1 to 15 (default 6 IN and OUT)
32357 + * Note: The FPGA configuration supports a maximum of 6 IN and OUT
32358 + * endpoints in addition to EP0.
32359 + */
32360 +extern int dwc_otg_set_param_dev_endpoints(dwc_otg_core_if_t * core_if,
32361 + int32_t val);
32362 +extern int32_t dwc_otg_get_param_dev_endpoints(dwc_otg_core_if_t * core_if);
32363 +#define dwc_param_dev_endpoints_default 6
32364 +
32365 +/**
32366 + * Specifies the type of PHY interface to use. By default, the driver
32367 + * will automatically detect the phy_type.
32368 + *
32369 + * 0 - Full Speed PHY
32370 + * 1 - UTMI+ (default)
32371 + * 2 - ULPI
32372 + */
32373 +extern int dwc_otg_set_param_phy_type(dwc_otg_core_if_t * core_if, int32_t val);
32374 +extern int32_t dwc_otg_get_param_phy_type(dwc_otg_core_if_t * core_if);
32375 +#define DWC_PHY_TYPE_PARAM_FS 0
32376 +#define DWC_PHY_TYPE_PARAM_UTMI 1
32377 +#define DWC_PHY_TYPE_PARAM_ULPI 2
32378 +#define dwc_param_phy_type_default DWC_PHY_TYPE_PARAM_UTMI
32379 +
32380 +/**
32381 + * Specifies the UTMI+ Data Width. This parameter is
32382 + * applicable for a PHY_TYPE of UTMI+ or ULPI. (For a ULPI
32383 + * PHY_TYPE, this parameter indicates the data width between
32384 + * the MAC and the ULPI Wrapper.) Also, this parameter is
32385 + * applicable only if the OTG_HSPHY_WIDTH cC parameter was set
32386 + * to "8 and 16 bits", meaning that the core has been
32387 + * configured to work at either data path width.
32388 + *
32389 + * 8 or 16 bits (default 16)
32390 + */
32391 +extern int dwc_otg_set_param_phy_utmi_width(dwc_otg_core_if_t * core_if,
32392 + int32_t val);
32393 +extern int32_t dwc_otg_get_param_phy_utmi_width(dwc_otg_core_if_t * core_if);
32394 +//#define dwc_param_phy_utmi_width_default 16
32395 +#define dwc_param_phy_utmi_width_default 8 // Broadcom BCM2708
32396 +
32397 +/**
32398 + * Specifies whether the ULPI operates at double or single
32399 + * data rate. This parameter is only applicable if PHY_TYPE is
32400 + * ULPI.
32401 + *
32402 + * 0 - single data rate ULPI interface with 8 bit wide data
32403 + * bus (default)
32404 + * 1 - double data rate ULPI interface with 4 bit wide data
32405 + * bus
32406 + */
32407 +extern int dwc_otg_set_param_phy_ulpi_ddr(dwc_otg_core_if_t * core_if,
32408 + int32_t val);
32409 +extern int32_t dwc_otg_get_param_phy_ulpi_ddr(dwc_otg_core_if_t * core_if);
32410 +#define dwc_param_phy_ulpi_ddr_default 0
32411 +
32412 +/**
32413 + * Specifies whether to use the internal or external supply to
32414 + * drive the vbus with a ULPI phy.
32415 + */
32416 +extern int dwc_otg_set_param_phy_ulpi_ext_vbus(dwc_otg_core_if_t * core_if,
32417 + int32_t val);
32418 +extern int32_t dwc_otg_get_param_phy_ulpi_ext_vbus(dwc_otg_core_if_t * core_if);
32419 +#define DWC_PHY_ULPI_INTERNAL_VBUS 0
32420 +#define DWC_PHY_ULPI_EXTERNAL_VBUS 1
32421 +#define dwc_param_phy_ulpi_ext_vbus_default DWC_PHY_ULPI_INTERNAL_VBUS
32422 +
32423 +/**
32424 + * Specifies whether to use the I2C interface for the full speed PHY. This
32425 + * parameter is only applicable if PHY_TYPE is FS.
32426 + * 0 - No (default)
32427 + * 1 - Yes
32428 + */
32429 +extern int dwc_otg_set_param_i2c_enable(dwc_otg_core_if_t * core_if,
32430 + int32_t val);
32431 +extern int32_t dwc_otg_get_param_i2c_enable(dwc_otg_core_if_t * core_if);
32432 +#define dwc_param_i2c_enable_default 0
32433 +
32434 +extern int dwc_otg_set_param_ulpi_fs_ls(dwc_otg_core_if_t * core_if,
32435 + int32_t val);
32436 +extern int32_t dwc_otg_get_param_ulpi_fs_ls(dwc_otg_core_if_t * core_if);
32437 +#define dwc_param_ulpi_fs_ls_default 0
32438 +
32439 +extern int dwc_otg_set_param_ts_dline(dwc_otg_core_if_t * core_if, int32_t val);
32440 +extern int32_t dwc_otg_get_param_ts_dline(dwc_otg_core_if_t * core_if);
32441 +#define dwc_param_ts_dline_default 0
32442 +
32443 +/**
32444 + * Specifies whether dedicated transmit FIFOs are
32445 + * enabled for non-periodic IN endpoints in device mode.
32446 + * 0 - No
32447 + * 1 - Yes
32448 + */
32449 +extern int dwc_otg_set_param_en_multiple_tx_fifo(dwc_otg_core_if_t * core_if,
32450 + int32_t val);
32451 +extern int32_t dwc_otg_get_param_en_multiple_tx_fifo(dwc_otg_core_if_t *
32452 + core_if);
32453 +#define dwc_param_en_multiple_tx_fifo_default 1
32454 +
32455 +/** Number of 4-byte words in each of the Tx FIFOs in device
32456 + * mode when dynamic FIFO sizing is enabled.
32457 + * 4 to 768 (default 256)
32458 + */
32459 +extern int dwc_otg_set_param_dev_tx_fifo_size(dwc_otg_core_if_t * core_if,
32460 + int fifo_num, int32_t val);
32461 +extern int32_t dwc_otg_get_param_dev_tx_fifo_size(dwc_otg_core_if_t * core_if,
32462 + int fifo_num);
32463 +#define dwc_param_dev_tx_fifo_size_default 768
32464 +
32465 +/** Thresholding enable flag-
32466 + * bit 0 - enable non-ISO Tx thresholding
32467 + * bit 1 - enable ISO Tx thresholding
32468 + * bit 2 - enable Rx thresholding
32469 + */
32470 +extern int dwc_otg_set_param_thr_ctl(dwc_otg_core_if_t * core_if, int32_t val);
32471 +extern int32_t dwc_otg_get_thr_ctl(dwc_otg_core_if_t * core_if, int fifo_num);
32472 +#define dwc_param_thr_ctl_default 0
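For example, a minimal helper (illustrative only, name hypothetical) that enables non-ISO Tx and Rx thresholding while leaving ISO Tx thresholding off would set bits 0 and 2:

static inline void example_enable_thresholding(dwc_otg_core_if_t *core_if)
{
	/* bit 0 = non-ISO Tx, bit 1 = ISO Tx, bit 2 = Rx (see the list above) */
	dwc_otg_set_param_thr_ctl(core_if, (1 << 0) | (1 << 2)); /* value 0x5 */
}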
32473 +
32474 +/** Thresholding length for Tx
32475 + * FIFOs in 32 bit DWORDs
32476 + */
32477 +extern int dwc_otg_set_param_tx_thr_length(dwc_otg_core_if_t * core_if,
32478 + int32_t val);
32479 +extern int32_t dwc_otg_get_tx_thr_length(dwc_otg_core_if_t * core_if);
32480 +#define dwc_param_tx_thr_length_default 64
32481 +
32482 +/** Thresholding length for Rx
32483 + * FIFOs in 32 bit DWORDs
32484 + */
32485 +extern int dwc_otg_set_param_rx_thr_length(dwc_otg_core_if_t * core_if,
32486 + int32_t val);
32487 +extern int32_t dwc_otg_get_rx_thr_length(dwc_otg_core_if_t * core_if);
32488 +#define dwc_param_rx_thr_length_default 64
32489 +
32490 +/**
32491 + * Specifies whether LPM (Link Power Management) support is enabled
32492 + */
32493 +extern int dwc_otg_set_param_lpm_enable(dwc_otg_core_if_t * core_if,
32494 + int32_t val);
32495 +extern int32_t dwc_otg_get_param_lpm_enable(dwc_otg_core_if_t * core_if);
32496 +#define dwc_param_lpm_enable_default 1
32497 +
32498 +/**
32499 + * Specifies whether PTI enhancement is enabled
32500 + */
32501 +extern int dwc_otg_set_param_pti_enable(dwc_otg_core_if_t * core_if,
32502 + int32_t val);
32503 +extern int32_t dwc_otg_get_param_pti_enable(dwc_otg_core_if_t * core_if);
32504 +#define dwc_param_pti_enable_default 0
32505 +
32506 +/**
32507 + * Specifies whether MPI enhancement is enabled
32508 + */
32509 +extern int dwc_otg_set_param_mpi_enable(dwc_otg_core_if_t * core_if,
32510 + int32_t val);
32511 +extern int32_t dwc_otg_get_param_mpi_enable(dwc_otg_core_if_t * core_if);
32512 +#define dwc_param_mpi_enable_default 0
32513 +
32514 +/**
32515 + * Specifies whether ADP capability is enabled
32516 + */
32517 +extern int dwc_otg_set_param_adp_enable(dwc_otg_core_if_t * core_if,
32518 + int32_t val);
32519 +extern int32_t dwc_otg_get_param_adp_enable(dwc_otg_core_if_t * core_if);
32520 +#define dwc_param_adp_enable_default 0
32521 +
32522 +/**
32523 + * Specifies whether IC_USB capability is enabled
32524 + */
32525 +
32526 +extern int dwc_otg_set_param_ic_usb_cap(dwc_otg_core_if_t * core_if,
32527 + int32_t val);
32528 +extern int32_t dwc_otg_get_param_ic_usb_cap(dwc_otg_core_if_t * core_if);
32529 +#define dwc_param_ic_usb_cap_default 0
32530 +
32531 +extern int dwc_otg_set_param_ahb_thr_ratio(dwc_otg_core_if_t * core_if,
32532 + int32_t val);
32533 +extern int32_t dwc_otg_get_param_ahb_thr_ratio(dwc_otg_core_if_t * core_if);
32534 +#define dwc_param_ahb_thr_ratio_default 0
32535 +
32536 +extern int dwc_otg_set_param_power_down(dwc_otg_core_if_t * core_if,
32537 + int32_t val);
32538 +extern int32_t dwc_otg_get_param_power_down(dwc_otg_core_if_t * core_if);
32539 +#define dwc_param_power_down_default 0
32540 +
32541 +extern int dwc_otg_set_param_reload_ctl(dwc_otg_core_if_t * core_if,
32542 + int32_t val);
32543 +extern int32_t dwc_otg_get_param_reload_ctl(dwc_otg_core_if_t * core_if);
32544 +#define dwc_param_reload_ctl_default 0
32545 +
32546 +extern int dwc_otg_set_param_dev_out_nak(dwc_otg_core_if_t * core_if,
32547 + int32_t val);
32548 +extern int32_t dwc_otg_get_param_dev_out_nak(dwc_otg_core_if_t * core_if);
32549 +#define dwc_param_dev_out_nak_default 0
32550 +
32551 +extern int dwc_otg_set_param_cont_on_bna(dwc_otg_core_if_t * core_if,
32552 + int32_t val);
32553 +extern int32_t dwc_otg_get_param_cont_on_bna(dwc_otg_core_if_t * core_if);
32554 +#define dwc_param_cont_on_bna_default 0
32555 +
32556 +extern int dwc_otg_set_param_ahb_single(dwc_otg_core_if_t * core_if,
32557 + int32_t val);
32558 +extern int32_t dwc_otg_get_param_ahb_single(dwc_otg_core_if_t * core_if);
32559 +#define dwc_param_ahb_single_default 0
32560 +
32561 +extern int dwc_otg_set_param_otg_ver(dwc_otg_core_if_t * core_if, int32_t val);
32562 +extern int32_t dwc_otg_get_param_otg_ver(dwc_otg_core_if_t * core_if);
32563 +#define dwc_param_otg_ver_default 0
32564 +
32565 +/** @} */
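A minimal sketch, assuming a hypothetical platform init path, of how the setters above could be combined with the CIL entry points declared earlier in this header; the function name and register base are placeholders:

static int example_apply_bcm2708_params(const uint32_t *reg_base)
{
	dwc_otg_core_if_t *core_if = dwc_otg_cil_init(reg_base);

	if (!core_if)
		return -1;

	/* Apply the Broadcom defaults defined above before initialising the core. */
	dwc_otg_set_param_dma_enable(core_if, dwc_param_dma_enable_default);
	dwc_otg_set_param_host_rx_fifo_size(core_if,
					    dwc_param_host_rx_fifo_size_default);
	dwc_otg_set_param_host_channels(core_if,
					dwc_param_host_channels_default);

	dwc_otg_core_init(core_if);
	dwc_otg_enable_global_interrupts(core_if);
	return 0;
}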
32566 +
32567 +/** @name Access to registers and bit-fields */
32568 +
32569 +/**
32570 + * Dump core registers and SPRAM
32571 + */
32572 +extern void dwc_otg_dump_dev_registers(dwc_otg_core_if_t * _core_if);
32573 +extern void dwc_otg_dump_spram(dwc_otg_core_if_t * _core_if);
32574 +extern void dwc_otg_dump_host_registers(dwc_otg_core_if_t * _core_if);
32575 +extern void dwc_otg_dump_global_registers(dwc_otg_core_if_t * _core_if);
32576 +
32577 +/**
32578 + * Get host negotiation status.
32579 + */
32580 +extern uint32_t dwc_otg_get_hnpstatus(dwc_otg_core_if_t * core_if);
32581 +
32582 +/**
32583 + * Get srp status
32584 + */
32585 +extern uint32_t dwc_otg_get_srpstatus(dwc_otg_core_if_t * core_if);
32586 +
32587 +/**
32588 + * Set hnpreq bit in the GOTGCTL register.
32589 + */
32590 +extern void dwc_otg_set_hnpreq(dwc_otg_core_if_t * core_if, uint32_t val);
32591 +
32592 +/**
32593 + * Get Content of SNPSID register.
32594 + */
32595 +extern uint32_t dwc_otg_get_gsnpsid(dwc_otg_core_if_t * core_if);
32596 +
32597 +/**
32598 + * Get current mode.
32599 + * Returns 0 if in device mode, and 1 if in host mode.
32600 + */
32601 +extern uint32_t dwc_otg_get_mode(dwc_otg_core_if_t * core_if);
32602 +
32603 +/**
32604 + * Get value of hnpcapable field in the GUSBCFG register
32605 + */
32606 +extern uint32_t dwc_otg_get_hnpcapable(dwc_otg_core_if_t * core_if);
32607 +/**
32608 + * Set value of hnpcapable field in the GUSBCFG register
32609 + */
32610 +extern void dwc_otg_set_hnpcapable(dwc_otg_core_if_t * core_if, uint32_t val);
32611 +
32612 +/**
32613 + * Get value of srpcapable field in the GUSBCFG register
32614 + */
32615 +extern uint32_t dwc_otg_get_srpcapable(dwc_otg_core_if_t * core_if);
32616 +/**
32617 + * Set value of srpcapable field in the GUSBCFG register
32618 + */
32619 +extern void dwc_otg_set_srpcapable(dwc_otg_core_if_t * core_if, uint32_t val);
32620 +
32621 +/**
32622 + * Get value of devspeed field in the DCFG register
32623 + */
32624 +extern uint32_t dwc_otg_get_devspeed(dwc_otg_core_if_t * core_if);
32625 +/**
32626 + * Set value of devspeed field in the DCFG register
32627 + */
32628 +extern void dwc_otg_set_devspeed(dwc_otg_core_if_t * core_if, uint32_t val);
32629 +
32630 +/**
32631 + * Get the value of busconnected field from the HPRT0 register
32632 + */
32633 +extern uint32_t dwc_otg_get_busconnected(dwc_otg_core_if_t * core_if);
32634 +
32635 +/**
32636 + * Gets the device enumeration Speed.
32637 + */
32638 +extern uint32_t dwc_otg_get_enumspeed(dwc_otg_core_if_t * core_if);
32639 +
32640 +/**
32641 + * Get value of prtpwr field from the HPRT0 register
32642 + */
32643 +extern uint32_t dwc_otg_get_prtpower(dwc_otg_core_if_t * core_if);
32644 +
32645 +/**
32646 + * Get value of flag indicating core state - hibernated or not
32647 + */
32648 +extern uint32_t dwc_otg_get_core_state(dwc_otg_core_if_t * core_if);
32649 +
32650 +/**
32651 + * Set value of prtpwr field from the HPRT0 register
32652 + */
32653 +extern void dwc_otg_set_prtpower(dwc_otg_core_if_t * core_if, uint32_t val);
32654 +
32655 +/**
32656 + * Get value of prtsusp field from the HPRT0 register
32657 + */
32658 +extern uint32_t dwc_otg_get_prtsuspend(dwc_otg_core_if_t * core_if);
32659 +/**
32660 + * Set value of prtsusp field from the HPRT0 register
32661 + */
32662 +extern void dwc_otg_set_prtsuspend(dwc_otg_core_if_t * core_if, uint32_t val);
32663 +
32664 +/**
32665 + * Get value of ModeChTimEn field from the HCFG register
32666 + */
32667 +extern uint32_t dwc_otg_get_mode_ch_tim(dwc_otg_core_if_t * core_if);
32668 +/**
32669 + * Set value of ModeChTimEn field from the HCFG register
32670 + */
32671 +extern void dwc_otg_set_mode_ch_tim(dwc_otg_core_if_t * core_if, uint32_t val);
32672 +
32673 +/**
32674 + * Get value of Frame Interval field from the HFIR register
32675 + */
32676 +extern uint32_t dwc_otg_get_fr_interval(dwc_otg_core_if_t * core_if);
32677 +/**
32678 + * Set value of Frame Interval field from the HFIR register
32679 + */
32680 +extern void dwc_otg_set_fr_interval(dwc_otg_core_if_t * core_if, uint32_t val);
32681 +
32682 +/**
32683 + * Set value of prtres field from the HPRT0 register
32684 + * FIXME Remove?
32685 + */
32686 +extern void dwc_otg_set_prtresume(dwc_otg_core_if_t * core_if, uint32_t val);
32687 +
32688 +/**
32689 + * Get value of rmtwkupsig bit in DCTL register
32690 + */
32691 +extern uint32_t dwc_otg_get_remotewakesig(dwc_otg_core_if_t * core_if);
32692 +
32693 +/**
32694 + * Get value of prt_sleep_sts field from the GLPMCFG register
32695 + */
32696 +extern uint32_t dwc_otg_get_lpm_portsleepstatus(dwc_otg_core_if_t * core_if);
32697 +
32698 +/**
32699 + * Get value of rem_wkup_en field from the GLPMCFG register
32700 + */
32701 +extern uint32_t dwc_otg_get_lpm_remotewakeenabled(dwc_otg_core_if_t * core_if);
32702 +
32703 +/**
32704 + * Get value of appl_resp field from the GLPMCFG register
32705 + */
32706 +extern uint32_t dwc_otg_get_lpmresponse(dwc_otg_core_if_t * core_if);
32707 +/**
32708 + * Set value of appl_resp field from the GLPMCFG register
32709 + */
32710 +extern void dwc_otg_set_lpmresponse(dwc_otg_core_if_t * core_if, uint32_t val);
32711 +
32712 +/**
32713 + * Get value of hsic_connect field from the GLPMCFG register
32714 + */
32715 +extern uint32_t dwc_otg_get_hsic_connect(dwc_otg_core_if_t * core_if);
32716 +/**
32717 + * Set value of hsic_connect field from the GLPMCFG register
32718 + */
32719 +extern void dwc_otg_set_hsic_connect(dwc_otg_core_if_t * core_if, uint32_t val);
32720 +
32721 +/**
32722 + * Get value of inv_sel_hsic field from the GLPMCFG register.
32723 + */
32724 +extern uint32_t dwc_otg_get_inv_sel_hsic(dwc_otg_core_if_t * core_if);
32725 +/**
32726 + * Set value of inv_sel_hsic field from the GLPMCFG register.
32727 + */
32728 +extern void dwc_otg_set_inv_sel_hsic(dwc_otg_core_if_t * core_if, uint32_t val);
32729 +
32730 +/*
32731 + * Some functions for accessing registers
32732 + */
32733 +
32734 +/**
32735 + * GOTGCTL register
32736 + */
32737 +extern uint32_t dwc_otg_get_gotgctl(dwc_otg_core_if_t * core_if);
32738 +extern void dwc_otg_set_gotgctl(dwc_otg_core_if_t * core_if, uint32_t val);
32739 +
32740 +/**
32741 + * GUSBCFG register
32742 + */
32743 +extern uint32_t dwc_otg_get_gusbcfg(dwc_otg_core_if_t * core_if);
32744 +extern void dwc_otg_set_gusbcfg(dwc_otg_core_if_t * core_if, uint32_t val);
32745 +
32746 +/**
32747 + * GRXFSIZ register
32748 + */
32749 +extern uint32_t dwc_otg_get_grxfsiz(dwc_otg_core_if_t * core_if);
32750 +extern void dwc_otg_set_grxfsiz(dwc_otg_core_if_t * core_if, uint32_t val);
32751 +
32752 +/**
32753 + * GNPTXFSIZ register
32754 + */
32755 +extern uint32_t dwc_otg_get_gnptxfsiz(dwc_otg_core_if_t * core_if);
32756 +extern void dwc_otg_set_gnptxfsiz(dwc_otg_core_if_t * core_if, uint32_t val);
32757 +
32758 +extern uint32_t dwc_otg_get_gpvndctl(dwc_otg_core_if_t * core_if);
32759 +extern void dwc_otg_set_gpvndctl(dwc_otg_core_if_t * core_if, uint32_t val);
32760 +
32761 +/**
32762 + * GGPIO register
32763 + */
32764 +extern uint32_t dwc_otg_get_ggpio(dwc_otg_core_if_t * core_if);
32765 +extern void dwc_otg_set_ggpio(dwc_otg_core_if_t * core_if, uint32_t val);
32766 +
32767 +/**
32768 + * GUID register
32769 + */
32770 +extern uint32_t dwc_otg_get_guid(dwc_otg_core_if_t * core_if);
32771 +extern void dwc_otg_set_guid(dwc_otg_core_if_t * core_if, uint32_t val);
32772 +
32773 +/**
32774 + * HPRT0 register
32775 + */
32776 +extern uint32_t dwc_otg_get_hprt0(dwc_otg_core_if_t * core_if);
32777 +extern void dwc_otg_set_hprt0(dwc_otg_core_if_t * core_if, uint32_t val);
32778 +
32779 +/**
32780 + * GHPTXFSIZE
32781 + */
32782 +extern uint32_t dwc_otg_get_hptxfsiz(dwc_otg_core_if_t * core_if);
32783 +
32784 +/** @} */
32785 +
32786 +#endif /* __DWC_CORE_IF_H__ */
32787 --- /dev/null
32788 +++ b/drivers/usb/host/dwc_otg/dwc_otg_dbg.h
32789 @@ -0,0 +1,117 @@
32790 +/* ==========================================================================
32791 + *
32792 + * Synopsys HS OTG Linux Software Driver and documentation (hereinafter,
32793 + * "Software") is an Unsupported proprietary work of Synopsys, Inc. unless
32794 + * otherwise expressly agreed to in writing between Synopsys and you.
32795 + *
32796 + * The Software IS NOT an item of Licensed Software or Licensed Product under
32797 + * any End User Software License Agreement or Agreement for Licensed Product
32798 + * with Synopsys or any supplement thereto. You are permitted to use and
32799 + * redistribute this Software in source and binary forms, with or without
32800 + * modification, provided that redistributions of source code must retain this
32801 + * notice. You may not view, use, disclose, copy or distribute this file or
32802 + * any information contained herein except pursuant to this license grant from
32803 + * Synopsys. If you do not agree with this notice, including the disclaimer
32804 + * below, then you are not authorized to use the Software.
32805 + *
32806 + * THIS SOFTWARE IS BEING DISTRIBUTED BY SYNOPSYS SOLELY ON AN "AS IS" BASIS
32807 + * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
32808 + * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
32809 + * ARE HEREBY DISCLAIMED. IN NO EVENT SHALL SYNOPSYS BE LIABLE FOR ANY DIRECT,
32810 + * INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
32811 + * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
32812 + * SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
32813 + * CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
32814 + * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
32815 + * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH
32816 + * DAMAGE.
32817 + * ========================================================================== */
32818 +
32819 +#ifndef __DWC_OTG_DBG_H__
32820 +#define __DWC_OTG_DBG_H__
32821 +
32822 +/** @file
32823 + * This file defines debug levels.
32824 + * Debugging support vanishes in non-debug builds.
32825 + */
32826 +
32827 +/**
32828 + * The Debug Level bit-mask variable.
32829 + */
32830 +extern uint32_t g_dbg_lvl;
32831 +/**
32832 + * Set the Debug Level variable.
32833 + */
32834 +static inline uint32_t SET_DEBUG_LEVEL(const uint32_t new)
32835 +{
32836 + uint32_t old = g_dbg_lvl;
32837 + g_dbg_lvl = new;
32838 + return old;
32839 +}
32840 +
32841 +#define DBG_USER (0x1)
32842 +/** When debug level has the DBG_CIL bit set, display CIL Debug messages. */
32843 +#define DBG_CIL (0x2)
32844 +/** When debug level has the DBG_CILV bit set, display CIL Verbose debug
32845 + * messages */
32846 +#define DBG_CILV (0x20)
32847 +/** When debug level has the DBG_PCD bit set, display PCD (Device) debug
32848 + * messages */
32849 +#define DBG_PCD (0x4)
32850 +/** When debug level has the DBG_PCDV set, display PCD (Device) Verbose debug
32851 + * messages */
32852 +#define DBG_PCDV (0x40)
32853 +/** When debug level has the DBG_HCD bit set, display Host debug messages */
32854 +#define DBG_HCD (0x8)
32855 +/** When debug level has the DBG_HCDV bit set, display Verbose Host debug
32856 + * messages */
32857 +#define DBG_HCDV (0x80)
32858 +/** When debug level has the DBG_HCD_URB bit set, display enqueued URBs in host
32859 + * mode. */
32860 +#define DBG_HCD_URB (0x800)
32861 +/** When debug level has the DBG_HCDI bit set, display host interrupt
32862 + * messages. */
32863 +#define DBG_HCDI (0x1000)
32864 +
32865 +/** When debug level has any bit set, display debug messages */
32866 +#define DBG_ANY (0xFF)
32867 +
32868 +/** All debug messages off */
32869 +#define DBG_OFF 0
32870 +
32871 +/** Prefix string for DWC_DEBUG print macros. */
32872 +#define USB_DWC "DWC_otg: "
32873 +
32874 +/**
32875 + * Print a debug message when the Global debug level variable contains
32876 + * the bit defined in <code>lvl</code>.
32877 + *
32878 + * @param[in] lvl - Debug level, use one of the DBG_ constants above.
32879 + * @param[in] x - like printf
32880 + *
32881 + * Example:<p>
32882 + * <code>
32883 + * DWC_DEBUGPL( DBG_ANY, "%s(%p)\n", __func__, _reg_base_addr);
32884 + * </code>
32885 + * <br>
32886 + * results in:<br>
32887 + * <code>
32888 + * DWC_otg: dwc_otg_cil_init(ca867000)
32889 + * </code>
32890 + */
32891 +#ifdef DEBUG
32892 +
32893 +# define DWC_DEBUGPL(lvl, x...) do{ if ((lvl)&g_dbg_lvl)__DWC_DEBUG(USB_DWC x ); }while(0)
32894 +# define DWC_DEBUGP(x...) DWC_DEBUGPL(DBG_ANY, x )
32895 +
32896 +# define CHK_DEBUG_LEVEL(level) ((level) & g_dbg_lvl)
32897 +
32898 +#else
32899 +
32900 +# define DWC_DEBUGPL(lvl, x...) do{}while(0)
32901 +# define DWC_DEBUGP(x...)
32902 +
32903 +# define CHK_DEBUG_LEVEL(level) (0)
32904 +
32905 +#endif /*DEBUG*/
32906 +#endif
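
A minimal usage sketch (illustrative only, not part of the patch): with DEBUG defined, driver code includes this header and drives the mask through the helpers above. The sketch assumes the driver's usual build environment, i.e. that dwc_os.h (included first by the driver sources) supplies __DWC_DEBUG and the fixed-width integer types; in a non-DEBUG build every DWC_DEBUGPL call compiles away.

    #include "dwc_otg_dbg.h"

    static void trace_example(void *reg_base)
    {
    	/* Enable host-mode plus verbose host-mode messages, remember the old mask. */
    	uint32_t old_lvl = SET_DEBUG_LEVEL(DBG_HCD | DBG_HCDV);

    	DWC_DEBUGPL(DBG_HCD, "%s(%p)\n", __func__, reg_base);

    	if (CHK_DEBUG_LEVEL(DBG_HCDV))
    		DWC_DEBUGP("verbose host tracing is active\n");

    	SET_DEBUG_LEVEL(old_lvl);	/* restore the previous mask */
    }
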
32907 --- /dev/null
32908 +++ b/drivers/usb/host/dwc_otg/dwc_otg_driver.c
32909 @@ -0,0 +1,1772 @@
32910 +/* ==========================================================================
32911 + * $File: //dwh/usb_iip/dev/software/otg/linux/drivers/dwc_otg_driver.c $
32912 + * $Revision: #92 $
32913 + * $Date: 2012/08/10 $
32914 + * $Change: 2047372 $
32915 + *
32916 + * Synopsys HS OTG Linux Software Driver and documentation (hereinafter,
32917 + * "Software") is an Unsupported proprietary work of Synopsys, Inc. unless
32918 + * otherwise expressly agreed to in writing between Synopsys and you.
32919 + *
32920 + * The Software IS NOT an item of Licensed Software or Licensed Product under
32921 + * any End User Software License Agreement or Agreement for Licensed Product
32922 + * with Synopsys or any supplement thereto. You are permitted to use and
32923 + * redistribute this Software in source and binary forms, with or without
32924 + * modification, provided that redistributions of source code must retain this
32925 + * notice. You may not view, use, disclose, copy or distribute this file or
32926 + * any information contained herein except pursuant to this license grant from
32927 + * Synopsys. If you do not agree with this notice, including the disclaimer
32928 + * below, then you are not authorized to use the Software.
32929 + *
32930 + * THIS SOFTWARE IS BEING DISTRIBUTED BY SYNOPSYS SOLELY ON AN "AS IS" BASIS
32931 + * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
32932 + * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
32933 + * ARE HEREBY DISCLAIMED. IN NO EVENT SHALL SYNOPSYS BE LIABLE FOR ANY DIRECT,
32934 + * INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
32935 + * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
32936 + * SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
32937 + * CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
32938 + * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
32939 + * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH
32940 + * DAMAGE.
32941 + * ========================================================================== */
32942 +
32943 +/** @file
32944 + * The dwc_otg_driver module provides the initialization and cleanup entry
32945 + * points for the DWC_otg driver. This module will be dynamically installed
32946 + * after Linux is booted using the insmod command. When the module is
32947 + * installed, the dwc_otg_driver_init function is called. When the module is
32948 + * removed (using rmmod), the dwc_otg_driver_cleanup function is called.
32949 + *
32950 + * This module also defines a data structure for the dwc_otg_driver, which is
32951 + * used in conjunction with the standard ARM lm_device structure. These
32952 + * structures allow the OTG driver to comply with the standard Linux driver
32953 + * model in which devices and drivers are registered with a bus driver. This
32954 + * has the benefit that Linux can expose attributes of the driver and device
32955 + * in its special sysfs file system. Users can then read or write files in
32956 + * this file system to perform diagnostics on the driver components or the
32957 + * device.
32958 + */
32959 +
32960 +#include "dwc_otg_os_dep.h"
32961 +#include "dwc_os.h"
32962 +#include "dwc_otg_dbg.h"
32963 +#include "dwc_otg_driver.h"
32964 +#include "dwc_otg_attr.h"
32965 +#include "dwc_otg_core_if.h"
32966 +#include "dwc_otg_pcd_if.h"
32967 +#include "dwc_otg_hcd_if.h"
32968 +#include "dwc_otg_fiq_fsm.h"
32969 +
32970 +#define DWC_DRIVER_VERSION "3.00a 10-AUG-2012"
32971 +#define DWC_DRIVER_DESC "HS OTG USB Controller driver"
32972 +
32973 +bool microframe_schedule=true;
32974 +
32975 +static const char dwc_driver_name[] = "dwc_otg";
32976 +
32977 +
32978 +extern int pcd_init(
32979 +#ifdef LM_INTERFACE
32980 + struct lm_device *_dev
32981 +#elif defined(PCI_INTERFACE)
32982 + struct pci_dev *_dev
32983 +#elif defined(PLATFORM_INTERFACE)
32984 + struct platform_device *dev
32985 +#endif
32986 + );
32987 +extern int hcd_init(
32988 +#ifdef LM_INTERFACE
32989 + struct lm_device *_dev
32990 +#elif defined(PCI_INTERFACE)
32991 + struct pci_dev *_dev
32992 +#elif defined(PLATFORM_INTERFACE)
32993 + struct platform_device *dev
32994 +#endif
32995 + );
32996 +
32997 +extern int pcd_remove(
32998 +#ifdef LM_INTERFACE
32999 + struct lm_device *_dev
33000 +#elif defined(PCI_INTERFACE)
33001 + struct pci_dev *_dev
33002 +#elif defined(PLATFORM_INTERFACE)
33003 + struct platform_device *_dev
33004 +#endif
33005 + );
33006 +
33007 +extern void hcd_remove(
33008 +#ifdef LM_INTERFACE
33009 + struct lm_device *_dev
33010 +#elif defined(PCI_INTERFACE)
33011 + struct pci_dev *_dev
33012 +#elif defined(PLATFORM_INTERFACE)
33013 + struct platform_device *_dev
33014 +#endif
33015 + );
33016 +
33017 +extern void dwc_otg_adp_start(dwc_otg_core_if_t * core_if, uint8_t is_host);
33018 +
33019 +/*-------------------------------------------------------------------------*/
33020 +/* Encapsulate the module parameter settings */
33021 +
33022 +struct dwc_otg_driver_module_params {
33023 + int32_t opt;
33024 + int32_t otg_cap;
33025 + int32_t dma_enable;
33026 + int32_t dma_desc_enable;
33027 + int32_t dma_burst_size;
33028 + int32_t speed;
33029 + int32_t host_support_fs_ls_low_power;
33030 + int32_t host_ls_low_power_phy_clk;
33031 + int32_t enable_dynamic_fifo;
33032 + int32_t data_fifo_size;
33033 + int32_t dev_rx_fifo_size;
33034 + int32_t dev_nperio_tx_fifo_size;
33035 + uint32_t dev_perio_tx_fifo_size[MAX_PERIO_FIFOS];
33036 + int32_t host_rx_fifo_size;
33037 + int32_t host_nperio_tx_fifo_size;
33038 + int32_t host_perio_tx_fifo_size;
33039 + int32_t max_transfer_size;
33040 + int32_t max_packet_count;
33041 + int32_t host_channels;
33042 + int32_t dev_endpoints;
33043 + int32_t phy_type;
33044 + int32_t phy_utmi_width;
33045 + int32_t phy_ulpi_ddr;
33046 + int32_t phy_ulpi_ext_vbus;
33047 + int32_t i2c_enable;
33048 + int32_t ulpi_fs_ls;
33049 + int32_t ts_dline;
33050 + int32_t en_multiple_tx_fifo;
33051 + uint32_t dev_tx_fifo_size[MAX_TX_FIFOS];
33052 + uint32_t thr_ctl;
33053 + uint32_t tx_thr_length;
33054 + uint32_t rx_thr_length;
33055 + int32_t pti_enable;
33056 + int32_t mpi_enable;
33057 + int32_t lpm_enable;
33058 + int32_t ic_usb_cap;
33059 + int32_t ahb_thr_ratio;
33060 + int32_t power_down;
33061 + int32_t reload_ctl;
33062 + int32_t dev_out_nak;
33063 + int32_t cont_on_bna;
33064 + int32_t ahb_single;
33065 + int32_t otg_ver;
33066 + int32_t adp_enable;
33067 +};
33068 +
33069 +static struct dwc_otg_driver_module_params dwc_otg_module_params = {
33070 + .opt = -1,
33071 + .otg_cap = -1,
33072 + .dma_enable = -1,
33073 + .dma_desc_enable = -1,
33074 + .dma_burst_size = -1,
33075 + .speed = -1,
33076 + .host_support_fs_ls_low_power = -1,
33077 + .host_ls_low_power_phy_clk = -1,
33078 + .enable_dynamic_fifo = -1,
33079 + .data_fifo_size = -1,
33080 + .dev_rx_fifo_size = -1,
33081 + .dev_nperio_tx_fifo_size = -1,
33082 + .dev_perio_tx_fifo_size = {
33083 + /* dev_perio_tx_fifo_size_1 */
33084 + -1,
33085 + -1,
33086 + -1,
33087 + -1,
33088 + -1,
33089 + -1,
33090 + -1,
33091 + -1,
33092 + -1,
33093 + -1,
33094 + -1,
33095 + -1,
33096 + -1,
33097 + -1,
33098 + -1
33099 + /* 15 */
33100 + },
33101 + .host_rx_fifo_size = -1,
33102 + .host_nperio_tx_fifo_size = -1,
33103 + .host_perio_tx_fifo_size = -1,
33104 + .max_transfer_size = -1,
33105 + .max_packet_count = -1,
33106 + .host_channels = -1,
33107 + .dev_endpoints = -1,
33108 + .phy_type = -1,
33109 + .phy_utmi_width = -1,
33110 + .phy_ulpi_ddr = -1,
33111 + .phy_ulpi_ext_vbus = -1,
33112 + .i2c_enable = -1,
33113 + .ulpi_fs_ls = -1,
33114 + .ts_dline = -1,
33115 + .en_multiple_tx_fifo = -1,
33116 + .dev_tx_fifo_size = {
33117 + /* dev_tx_fifo_size */
33118 + -1,
33119 + -1,
33120 + -1,
33121 + -1,
33122 + -1,
33123 + -1,
33124 + -1,
33125 + -1,
33126 + -1,
33127 + -1,
33128 + -1,
33129 + -1,
33130 + -1,
33131 + -1,
33132 + -1
33133 + /* 15 */
33134 + },
33135 + .thr_ctl = -1,
33136 + .tx_thr_length = -1,
33137 + .rx_thr_length = -1,
33138 + .pti_enable = -1,
33139 + .mpi_enable = -1,
33140 + .lpm_enable = 0,
33141 + .ic_usb_cap = -1,
33142 + .ahb_thr_ratio = -1,
33143 + .power_down = -1,
33144 + .reload_ctl = -1,
33145 + .dev_out_nak = -1,
33146 + .cont_on_bna = -1,
33147 + .ahb_single = -1,
33148 + .otg_ver = -1,
33149 + .adp_enable = -1,
33150 +};
33151 +
33152 +//Global variable to switch the fiq fix on or off
33153 +bool fiq_enable = 1;
33154 +// Global variable to enable the split transaction fix
33155 +bool fiq_fsm_enable = true;
33156 +//Bulk split-transaction NAK holdoff in microframes
33157 +uint16_t nak_holdoff = 8;
33158 +
33159 +//Force host mode during CIL re-init
33160 +bool cil_force_host = true;
33161 +
33162 +unsigned short fiq_fsm_mask = 0x0F;
33163 +
33164 +unsigned short int_ep_interval_min = 0;
33165 +/**
33166 + * This function shows the Driver Version.
33167 + */
33168 +static ssize_t version_show(struct device_driver *dev, char *buf)
33169 +{
33170 + return snprintf(buf, sizeof(DWC_DRIVER_VERSION) + 2, "%s\n",
33171 + DWC_DRIVER_VERSION);
33172 +}
33173 +
33174 +static DRIVER_ATTR_RO(version);
33175 +
33176 +/**
33177 + * Global Debug Level Mask.
33178 + */
33179 +uint32_t g_dbg_lvl = 0; /* OFF */
33180 +
33181 +/**
33182 + * This function shows the driver Debug Level.
33183 + */
33184 +static ssize_t debuglevel_show(struct device_driver *drv, char *buf)
33185 +{
33186 + return sprintf(buf, "0x%0x\n", g_dbg_lvl);
33187 +}
33188 +
33189 +/**
33190 + * This function stores the driver Debug Level.
33191 + */
33192 +static ssize_t debuglevel_store(struct device_driver *drv, const char *buf,
33193 + size_t count)
33194 +{
33195 + g_dbg_lvl = simple_strtoul(buf, NULL, 16);
33196 + return count;
33197 +}
33198 +
33199 +static DRIVER_ATTR_RW(debuglevel);
33200 +
33201 +/**
33202 + * This function is called during module initialization
33203 + * to pass module parameters to the DWC_OTG CORE.
33204 + */
33205 +static int set_parameters(dwc_otg_core_if_t * core_if)
33206 +{
33207 + int retval = 0;
33208 + int i;
33209 +
33210 + if (dwc_otg_module_params.otg_cap != -1) {
33211 + retval +=
33212 + dwc_otg_set_param_otg_cap(core_if,
33213 + dwc_otg_module_params.otg_cap);
33214 + }
33215 + if (dwc_otg_module_params.dma_enable != -1) {
33216 + retval +=
33217 + dwc_otg_set_param_dma_enable(core_if,
33218 + dwc_otg_module_params.
33219 + dma_enable);
33220 + }
33221 + if (dwc_otg_module_params.dma_desc_enable != -1) {
33222 + retval +=
33223 + dwc_otg_set_param_dma_desc_enable(core_if,
33224 + dwc_otg_module_params.
33225 + dma_desc_enable);
33226 + }
33227 + if (dwc_otg_module_params.opt != -1) {
33228 + retval +=
33229 + dwc_otg_set_param_opt(core_if, dwc_otg_module_params.opt);
33230 + }
33231 + if (dwc_otg_module_params.dma_burst_size != -1) {
33232 + retval +=
33233 + dwc_otg_set_param_dma_burst_size(core_if,
33234 + dwc_otg_module_params.
33235 + dma_burst_size);
33236 + }
33237 + if (dwc_otg_module_params.host_support_fs_ls_low_power != -1) {
33238 + retval +=
33239 + dwc_otg_set_param_host_support_fs_ls_low_power(core_if,
33240 + dwc_otg_module_params.
33241 + host_support_fs_ls_low_power);
33242 + }
33243 + if (dwc_otg_module_params.enable_dynamic_fifo != -1) {
33244 + retval +=
33245 + dwc_otg_set_param_enable_dynamic_fifo(core_if,
33246 + dwc_otg_module_params.
33247 + enable_dynamic_fifo);
33248 + }
33249 + if (dwc_otg_module_params.data_fifo_size != -1) {
33250 + retval +=
33251 + dwc_otg_set_param_data_fifo_size(core_if,
33252 + dwc_otg_module_params.
33253 + data_fifo_size);
33254 + }
33255 + if (dwc_otg_module_params.dev_rx_fifo_size != -1) {
33256 + retval +=
33257 + dwc_otg_set_param_dev_rx_fifo_size(core_if,
33258 + dwc_otg_module_params.
33259 + dev_rx_fifo_size);
33260 + }
33261 + if (dwc_otg_module_params.dev_nperio_tx_fifo_size != -1) {
33262 + retval +=
33263 + dwc_otg_set_param_dev_nperio_tx_fifo_size(core_if,
33264 + dwc_otg_module_params.
33265 + dev_nperio_tx_fifo_size);
33266 + }
33267 + if (dwc_otg_module_params.host_rx_fifo_size != -1) {
33268 + retval +=
33269 + dwc_otg_set_param_host_rx_fifo_size(core_if,
33270 + dwc_otg_module_params.host_rx_fifo_size);
33271 + }
33272 + if (dwc_otg_module_params.host_nperio_tx_fifo_size != -1) {
33273 + retval +=
33274 + dwc_otg_set_param_host_nperio_tx_fifo_size(core_if,
33275 + dwc_otg_module_params.
33276 + host_nperio_tx_fifo_size);
33277 + }
33278 + if (dwc_otg_module_params.host_perio_tx_fifo_size != -1) {
33279 + retval +=
33280 + dwc_otg_set_param_host_perio_tx_fifo_size(core_if,
33281 + dwc_otg_module_params.
33282 + host_perio_tx_fifo_size);
33283 + }
33284 + if (dwc_otg_module_params.max_transfer_size != -1) {
33285 + retval +=
33286 + dwc_otg_set_param_max_transfer_size(core_if,
33287 + dwc_otg_module_params.
33288 + max_transfer_size);
33289 + }
33290 + if (dwc_otg_module_params.max_packet_count != -1) {
33291 + retval +=
33292 + dwc_otg_set_param_max_packet_count(core_if,
33293 + dwc_otg_module_params.
33294 + max_packet_count);
33295 + }
33296 + if (dwc_otg_module_params.host_channels != -1) {
33297 + retval +=
33298 + dwc_otg_set_param_host_channels(core_if,
33299 + dwc_otg_module_params.
33300 + host_channels);
33301 + }
33302 + if (dwc_otg_module_params.dev_endpoints != -1) {
33303 + retval +=
33304 + dwc_otg_set_param_dev_endpoints(core_if,
33305 + dwc_otg_module_params.
33306 + dev_endpoints);
33307 + }
33308 + if (dwc_otg_module_params.phy_type != -1) {
33309 + retval +=
33310 + dwc_otg_set_param_phy_type(core_if,
33311 + dwc_otg_module_params.phy_type);
33312 + }
33313 + if (dwc_otg_module_params.speed != -1) {
33314 + retval +=
33315 + dwc_otg_set_param_speed(core_if,
33316 + dwc_otg_module_params.speed);
33317 + }
33318 + if (dwc_otg_module_params.host_ls_low_power_phy_clk != -1) {
33319 + retval +=
33320 + dwc_otg_set_param_host_ls_low_power_phy_clk(core_if,
33321 + dwc_otg_module_params.
33322 + host_ls_low_power_phy_clk);
33323 + }
33324 + if (dwc_otg_module_params.phy_ulpi_ddr != -1) {
33325 + retval +=
33326 + dwc_otg_set_param_phy_ulpi_ddr(core_if,
33327 + dwc_otg_module_params.
33328 + phy_ulpi_ddr);
33329 + }
33330 + if (dwc_otg_module_params.phy_ulpi_ext_vbus != -1) {
33331 + retval +=
33332 + dwc_otg_set_param_phy_ulpi_ext_vbus(core_if,
33333 + dwc_otg_module_params.
33334 + phy_ulpi_ext_vbus);
33335 + }
33336 + if (dwc_otg_module_params.phy_utmi_width != -1) {
33337 + retval +=
33338 + dwc_otg_set_param_phy_utmi_width(core_if,
33339 + dwc_otg_module_params.
33340 + phy_utmi_width);
33341 + }
33342 + if (dwc_otg_module_params.ulpi_fs_ls != -1) {
33343 + retval +=
33344 + dwc_otg_set_param_ulpi_fs_ls(core_if,
33345 + dwc_otg_module_params.ulpi_fs_ls);
33346 + }
33347 + if (dwc_otg_module_params.ts_dline != -1) {
33348 + retval +=
33349 + dwc_otg_set_param_ts_dline(core_if,
33350 + dwc_otg_module_params.ts_dline);
33351 + }
33352 + if (dwc_otg_module_params.i2c_enable != -1) {
33353 + retval +=
33354 + dwc_otg_set_param_i2c_enable(core_if,
33355 + dwc_otg_module_params.
33356 + i2c_enable);
33357 + }
33358 + if (dwc_otg_module_params.en_multiple_tx_fifo != -1) {
33359 + retval +=
33360 + dwc_otg_set_param_en_multiple_tx_fifo(core_if,
33361 + dwc_otg_module_params.
33362 + en_multiple_tx_fifo);
33363 + }
33364 + for (i = 0; i < 15; i++) {
33365 + if (dwc_otg_module_params.dev_perio_tx_fifo_size[i] != -1) {
33366 + retval +=
33367 + dwc_otg_set_param_dev_perio_tx_fifo_size(core_if,
33368 + dwc_otg_module_params.
33369 + dev_perio_tx_fifo_size
33370 + [i], i);
33371 + }
33372 + }
33373 +
33374 + for (i = 0; i < 15; i++) {
33375 + if (dwc_otg_module_params.dev_tx_fifo_size[i] != -1) {
33376 + retval += dwc_otg_set_param_dev_tx_fifo_size(core_if,
33377 + dwc_otg_module_params.
33378 + dev_tx_fifo_size
33379 + [i], i);
33380 + }
33381 + }
33382 + if (dwc_otg_module_params.thr_ctl != -1) {
33383 + retval +=
33384 + dwc_otg_set_param_thr_ctl(core_if,
33385 + dwc_otg_module_params.thr_ctl);
33386 + }
33387 + if (dwc_otg_module_params.mpi_enable != -1) {
33388 + retval +=
33389 + dwc_otg_set_param_mpi_enable(core_if,
33390 + dwc_otg_module_params.
33391 + mpi_enable);
33392 + }
33393 + if (dwc_otg_module_params.pti_enable != -1) {
33394 + retval +=
33395 + dwc_otg_set_param_pti_enable(core_if,
33396 + dwc_otg_module_params.
33397 + pti_enable);
33398 + }
33399 + if (dwc_otg_module_params.lpm_enable != -1) {
33400 + retval +=
33401 + dwc_otg_set_param_lpm_enable(core_if,
33402 + dwc_otg_module_params.
33403 + lpm_enable);
33404 + }
33405 + if (dwc_otg_module_params.ic_usb_cap != -1) {
33406 + retval +=
33407 + dwc_otg_set_param_ic_usb_cap(core_if,
33408 + dwc_otg_module_params.
33409 + ic_usb_cap);
33410 + }
33411 + if (dwc_otg_module_params.tx_thr_length != -1) {
33412 + retval +=
33413 + dwc_otg_set_param_tx_thr_length(core_if,
33414 + dwc_otg_module_params.tx_thr_length);
33415 + }
33416 + if (dwc_otg_module_params.rx_thr_length != -1) {
33417 + retval +=
33418 + dwc_otg_set_param_rx_thr_length(core_if,
33419 + dwc_otg_module_params.
33420 + rx_thr_length);
33421 + }
33422 + if (dwc_otg_module_params.ahb_thr_ratio != -1) {
33423 + retval +=
33424 + dwc_otg_set_param_ahb_thr_ratio(core_if,
33425 + dwc_otg_module_params.ahb_thr_ratio);
33426 + }
33427 + if (dwc_otg_module_params.power_down != -1) {
33428 + retval +=
33429 + dwc_otg_set_param_power_down(core_if,
33430 + dwc_otg_module_params.power_down);
33431 + }
33432 + if (dwc_otg_module_params.reload_ctl != -1) {
33433 + retval +=
33434 + dwc_otg_set_param_reload_ctl(core_if,
33435 + dwc_otg_module_params.reload_ctl);
33436 + }
33437 +
33438 + if (dwc_otg_module_params.dev_out_nak != -1) {
33439 + retval +=
33440 + dwc_otg_set_param_dev_out_nak(core_if,
33441 + dwc_otg_module_params.dev_out_nak);
33442 + }
33443 +
33444 + if (dwc_otg_module_params.cont_on_bna != -1) {
33445 + retval +=
33446 + dwc_otg_set_param_cont_on_bna(core_if,
33447 + dwc_otg_module_params.cont_on_bna);
33448 + }
33449 +
33450 + if (dwc_otg_module_params.ahb_single != -1) {
33451 + retval +=
33452 + dwc_otg_set_param_ahb_single(core_if,
33453 + dwc_otg_module_params.ahb_single);
33454 + }
33455 +
33456 + if (dwc_otg_module_params.otg_ver != -1) {
33457 + retval +=
33458 + dwc_otg_set_param_otg_ver(core_if,
33459 + dwc_otg_module_params.otg_ver);
33460 + }
33461 + if (dwc_otg_module_params.adp_enable != -1) {
33462 + retval +=
33463 + dwc_otg_set_param_adp_enable(core_if,
33464 + dwc_otg_module_params.
33465 + adp_enable);
33466 + }
33467 + return retval;
33468 +}
33469 +
33470 +/**
33471 + * This function is the top level interrupt handler for the Common
33472 + * (Device and host modes) interrupts.
33473 + */
33474 +static irqreturn_t dwc_otg_common_irq(int irq, void *dev)
33475 +{
33476 + int32_t retval = IRQ_NONE;
33477 +
33478 + retval = dwc_otg_handle_common_intr(dev);
33479 + if (retval != 0) {
33480 + S3C2410X_CLEAR_EINTPEND();
33481 + }
33482 + return IRQ_RETVAL(retval);
33483 +}
33484 +
33485 +/**
33486 + * This function is called when an lm_device is unregistered with the
33487 + * dwc_otg_driver. This happens, for example, when the rmmod command is
33488 + * executed. The device may or may not be electrically present. If it is
33489 + * present, the driver stops device processing. Any resources used on behalf
33490 + * of this device are freed.
33491 + *
33492 + * @param _dev
33493 + */
33494 +#ifdef LM_INTERFACE
33495 +#define REM_RETVAL(n)
33496 +static void dwc_otg_driver_remove( struct lm_device *_dev )
33497 +{ dwc_otg_device_t *otg_dev = lm_get_drvdata(_dev);
33498 +#elif defined(PCI_INTERFACE)
33499 +#define REM_RETVAL(n)
33500 +static void dwc_otg_driver_remove( struct pci_dev *_dev )
33501 +{ dwc_otg_device_t *otg_dev = pci_get_drvdata(_dev);
33502 +#elif defined(PLATFORM_INTERFACE)
33503 +#define REM_RETVAL(n) n
33504 +static int dwc_otg_driver_remove( struct platform_device *_dev )
33505 +{ dwc_otg_device_t *otg_dev = platform_get_drvdata(_dev);
33506 +#endif
33507 +
33508 + DWC_DEBUGPL(DBG_ANY, "%s(%p) otg_dev %p\n", __func__, _dev, otg_dev);
33509 +
33510 + if (!otg_dev) {
33511 + /* Memory allocation for the dwc_otg_device failed. */
33512 + DWC_DEBUGPL(DBG_ANY, "%s: otg_dev NULL!\n", __func__);
33513 + return REM_RETVAL(-ENOMEM);
33514 + }
33515 +#ifndef DWC_DEVICE_ONLY
33516 + if (otg_dev->hcd) {
33517 + hcd_remove(_dev);
33518 + } else {
33519 + DWC_DEBUGPL(DBG_ANY, "%s: otg_dev->hcd NULL!\n", __func__);
33520 + return REM_RETVAL(-EINVAL);
33521 + }
33522 +#endif
33523 +
33524 +#ifndef DWC_HOST_ONLY
33525 + if (otg_dev->pcd) {
33526 + pcd_remove(_dev);
33527 + } else {
33528 + DWC_DEBUGPL(DBG_ANY, "%s: otg_dev->pcd NULL!\n", __func__);
33529 + return REM_RETVAL(-EINVAL);
33530 + }
33531 +#endif
33532 + /*
33533 + * Free the IRQ
33534 + */
33535 + if (otg_dev->common_irq_installed) {
33536 + free_irq(otg_dev->os_dep.irq_num, otg_dev);
33537 + } else {
33538 + DWC_DEBUGPL(DBG_ANY, "%s: There is no installed irq!\n", __func__);
33539 + return REM_RETVAL(-ENXIO);
33540 + }
33541 +
33542 + if (otg_dev->core_if) {
33543 + dwc_otg_cil_remove(otg_dev->core_if);
33544 + } else {
33545 + DWC_DEBUGPL(DBG_ANY, "%s: otg_dev->core_if NULL!\n", __func__);
33546 + return REM_RETVAL(-ENXIO);
33547 + }
33548 +
33549 + /*
33550 + * Remove the device attributes
33551 + */
33552 + dwc_otg_attr_remove(_dev);
33553 +
33554 + /*
33555 + * Return the memory.
33556 + */
33557 + if (otg_dev->os_dep.base) {
33558 + iounmap(otg_dev->os_dep.base);
33559 + }
33560 + DWC_FREE(otg_dev);
33561 +
33562 + /*
33563 + * Clear the drvdata pointer.
33564 + */
33565 +#ifdef LM_INTERFACE
33566 + lm_set_drvdata(_dev, 0);
33567 +#elif defined(PCI_INTERFACE)
33568 + release_mem_region(otg_dev->os_dep.rsrc_start,
33569 + otg_dev->os_dep.rsrc_len);
33570 + pci_set_drvdata(_dev, 0);
33571 +#elif defined(PLATFORM_INTERFACE)
33572 + platform_set_drvdata(_dev, 0);
33573 +#endif
33574 + return REM_RETVAL(0);
33575 +}
33576 +
33577 +/**
33578 + * This function is called when an lm_device is bound to a
33579 + * dwc_otg_driver. It creates the driver components required to
33580 + * control the device (CIL, HCD, and PCD) and it initializes the
33581 + * device. The driver components are stored in a dwc_otg_device
33582 + * structure. A reference to the dwc_otg_device is saved in the
33583 + * lm_device. This allows the driver to access the dwc_otg_device
33584 + * structure on subsequent calls to driver methods for this device.
33585 + *
33586 + * @param _dev Bus device
33587 + */
33588 +static int dwc_otg_driver_probe(
33589 +#ifdef LM_INTERFACE
33590 + struct lm_device *_dev
33591 +#elif defined(PCI_INTERFACE)
33592 + struct pci_dev *_dev,
33593 + const struct pci_device_id *id
33594 +#elif defined(PLATFORM_INTERFACE)
33595 + struct platform_device *_dev
33596 +#endif
33597 + )
33598 +{
33599 + int retval = 0;
33600 + dwc_otg_device_t *dwc_otg_device;
33601 + int devirq;
33602 +
33603 + dev_dbg(&_dev->dev, "dwc_otg_driver_probe(%p)\n", _dev);
33604 +#ifdef LM_INTERFACE
33605 + dev_dbg(&_dev->dev, "start=0x%08x\n", (unsigned)_dev->resource.start);
33606 +#elif defined(PCI_INTERFACE)
33607 + if (!id) {
33608 + DWC_ERROR("Invalid pci_device_id %p", id);
33609 + return -EINVAL;
33610 + }
33611 +
33612 + if (!_dev || (pci_enable_device(_dev) < 0)) {
33613 + DWC_ERROR("Invalid pci_device %p", _dev);
33614 + return -ENODEV;
33615 + }
33616 + dev_dbg(&_dev->dev, "start=0x%08x\n", (unsigned)pci_resource_start(_dev,0));
33617 + /* other stuff needed as well? */
33618 +
33619 +#elif defined(PLATFORM_INTERFACE)
33620 + dev_dbg(&_dev->dev, "start=0x%08x (len 0x%x)\n",
33621 + (unsigned)_dev->resource->start,
33622 + (unsigned)(_dev->resource->end - _dev->resource->start));
33623 +#endif
33624 +
33625 + dwc_otg_device = DWC_ALLOC(sizeof(dwc_otg_device_t));
33626 +
33627 + if (!dwc_otg_device) {
33628 + dev_err(&_dev->dev, "kmalloc of dwc_otg_device failed\n");
33629 + return -ENOMEM;
33630 + }
33631 +
33632 + memset(dwc_otg_device, 0, sizeof(*dwc_otg_device));
33633 + dwc_otg_device->os_dep.reg_offset = 0xFFFFFFFF;
33634 + dwc_otg_device->os_dep.platformdev = _dev;
33635 +
33636 + /*
33637 + * Map the DWC_otg Core memory into virtual address space.
33638 + */
33639 +#ifdef LM_INTERFACE
33640 + dwc_otg_device->os_dep.base = ioremap(_dev->resource.start, SZ_256K);
33641 +
33642 + if (!dwc_otg_device->os_dep.base) {
33643 + dev_err(&_dev->dev, "ioremap() failed\n");
33644 + DWC_FREE(dwc_otg_device);
33645 + return -ENOMEM;
33646 + }
33647 + dev_dbg(&_dev->dev, "base=0x%08x\n",
33648 + (unsigned)dwc_otg_device->os_dep.base);
33649 +#elif defined(PCI_INTERFACE)
33650 + _dev->current_state = PCI_D0;
33651 + _dev->dev.power.power_state = PMSG_ON;
33652 +
33653 + if (!_dev->irq) {
33654 + DWC_ERROR("Found HC with no IRQ. Check BIOS/PCI %s setup!",
33655 + pci_name(_dev));
33656 + iounmap(dwc_otg_device->os_dep.base);
33657 + DWC_FREE(dwc_otg_device);
33658 + return -ENODEV;
33659 + }
33660 +
33661 + dwc_otg_device->os_dep.rsrc_start = pci_resource_start(_dev, 0);
33662 + dwc_otg_device->os_dep.rsrc_len = pci_resource_len(_dev, 0);
33663 + DWC_DEBUGPL(DBG_ANY, "PCI resource: start=%08x, len=%08x\n",
33664 + (unsigned)dwc_otg_device->os_dep.rsrc_start,
33665 + (unsigned)dwc_otg_device->os_dep.rsrc_len);
33666 + if (!request_mem_region
33667 + (dwc_otg_device->os_dep.rsrc_start, dwc_otg_device->os_dep.rsrc_len,
33668 + "dwc_otg")) {
33669 + dev_dbg(&_dev->dev, "error requesting memory\n");
33670 + iounmap(dwc_otg_device->os_dep.base);
33671 + DWC_FREE(dwc_otg_device);
33672 + return -EFAULT;
33673 + }
33674 +
33675 + dwc_otg_device->os_dep.base =
33676 + ioremap_nocache(dwc_otg_device->os_dep.rsrc_start,
33677 + dwc_otg_device->os_dep.rsrc_len);
33678 + if (dwc_otg_device->os_dep.base == NULL) {
33679 + dev_dbg(&_dev->dev, "error mapping memory\n");
33680 + release_mem_region(dwc_otg_device->os_dep.rsrc_start,
33681 + dwc_otg_device->os_dep.rsrc_len);
33682 + iounmap(dwc_otg_device->os_dep.base);
33683 + DWC_FREE(dwc_otg_device);
33684 + return -EFAULT;
33685 + }
33686 + dev_dbg(&_dev->dev, "base=0x%p (before adjust) \n",
33687 + dwc_otg_device->os_dep.base);
33688 + dwc_otg_device->os_dep.base = (char *)dwc_otg_device->os_dep.base;
33689 + dev_dbg(&_dev->dev, "base=0x%p (after adjust) \n",
33690 + dwc_otg_device->os_dep.base);
33691 + dev_dbg(&_dev->dev, "%s: mapped PA 0x%x to VA 0x%p\n", __func__,
33692 + (unsigned)dwc_otg_device->os_dep.rsrc_start,
33693 + dwc_otg_device->os_dep.base);
33694 +
33695 + pci_set_master(_dev);
33696 + pci_set_drvdata(_dev, dwc_otg_device);
33697 +#elif defined(PLATFORM_INTERFACE)
33698 + DWC_DEBUGPL(DBG_ANY,"Platform resource: start=%08x, len=%08x\n",
33699 + _dev->resource->start,
33700 + _dev->resource->end - _dev->resource->start + 1);
33701 +#if 1
33702 + if (!request_mem_region(_dev->resource[0].start,
33703 + _dev->resource[0].end - _dev->resource[0].start + 1,
33704 + "dwc_otg")) {
33705 + dev_dbg(&_dev->dev, "error reserving mapped memory\n");
33706 + retval = -EFAULT;
33707 + goto fail;
33708 + }
33709 +
33710 + dwc_otg_device->os_dep.base = ioremap_nocache(_dev->resource[0].start,
33711 + _dev->resource[0].end -
33712 + _dev->resource[0].start+1);
33713 + if (fiq_enable)
33714 + {
33715 + if (!request_mem_region(_dev->resource[1].start,
33716 + _dev->resource[1].end - _dev->resource[1].start + 1,
33717 + "dwc_otg")) {
33718 + dev_dbg(&_dev->dev, "error reserving mapped memory\n");
33719 + retval = -EFAULT;
33720 + goto fail;
33721 + }
33722 +
33723 + dwc_otg_device->os_dep.mphi_base = ioremap_nocache(_dev->resource[1].start,
33724 + _dev->resource[1].end -
33725 + _dev->resource[1].start + 1);
33726 + dwc_otg_device->os_dep.use_swirq = (_dev->resource[1].end - _dev->resource[1].start) == 0x200;
33727 + }
33728 +
33729 +#else
33730 + {
33731 + struct map_desc desc = {
33732 + .virtual = IO_ADDRESS((unsigned)_dev->resource->start),
33733 + .pfn = __phys_to_pfn((unsigned)_dev->resource->start),
33734 + .length = SZ_128K,
33735 + .type = MT_DEVICE
33736 + };
33737 + iotable_init(&desc, 1);
33738 + dwc_otg_device->os_dep.base = (void *)desc.virtual;
33739 + }
33740 +#endif
33741 + if (!dwc_otg_device->os_dep.base) {
33742 + dev_err(&_dev->dev, "ioremap() failed\n");
33743 + retval = -ENOMEM;
33744 + goto fail;
33745 + }
33746 +#endif
33747 +
33748 + /*
33749 + * Initialize driver data to point to the global DWC_otg
33750 + * Device structure.
33751 + */
33752 +#ifdef LM_INTERFACE
33753 + lm_set_drvdata(_dev, dwc_otg_device);
33754 +#elif defined(PLATFORM_INTERFACE)
33755 + platform_set_drvdata(_dev, dwc_otg_device);
33756 +#endif
33757 + dev_dbg(&_dev->dev, "dwc_otg_device=0x%p\n", dwc_otg_device);
33758 +
33759 + dwc_otg_device->core_if = dwc_otg_cil_init(dwc_otg_device->os_dep.base);
33760 + DWC_DEBUGPL(DBG_HCDV, "probe of device %p given core_if %p\n",
33761 + dwc_otg_device, dwc_otg_device->core_if);//GRAYG
33762 +
33763 + if (!dwc_otg_device->core_if) {
33764 + dev_err(&_dev->dev, "CIL initialization failed!\n");
33765 + retval = -ENOMEM;
33766 + goto fail;
33767 + }
33768 +
33769 + dev_dbg(&_dev->dev, "Calling get_gsnpsid\n");
33770 + /*
33771 + * Attempt to ensure this device is really a DWC_otg Controller.
33772 + * Read and verify the SNPSID register contents. The value should be
33773 + * 0x4F542XXX or 0x4F543XXX, which corresponds to either "OT2" or "OT3",
33774 + * as in "OTG version 2.XX" or "OTG version 3.XX".
33775 + */
33776 +
33777 + if (((dwc_otg_get_gsnpsid(dwc_otg_device->core_if) & 0xFFFFF000) != 0x4F542000) &&
33778 + ((dwc_otg_get_gsnpsid(dwc_otg_device->core_if) & 0xFFFFF000) != 0x4F543000)) {
33779 + dev_err(&_dev->dev, "Bad value for SNPSID: 0x%08x\n",
33780 + dwc_otg_get_gsnpsid(dwc_otg_device->core_if));
33781 + retval = -EINVAL;
33782 + goto fail;
33783 + }
33784 +
33785 + /*
33786 + * Validate parameter values.
33787 + */
33788 + dev_dbg(&_dev->dev, "Calling set_parameters\n");
33789 + if (set_parameters(dwc_otg_device->core_if)) {
33790 + retval = -EINVAL;
33791 + goto fail;
33792 + }
33793 +
33794 + /*
33795 + * Create Device Attributes in sysfs
33796 + */
33797 + dev_dbg(&_dev->dev, "Calling attr_create\n");
33798 + dwc_otg_attr_create(_dev);
33799 +
33800 + /*
33801 + * Disable the global interrupt until all the interrupt
33802 + * handlers are installed.
33803 + */
33804 + dev_dbg(&_dev->dev, "Calling disable_global_interrupts\n");
33805 + dwc_otg_disable_global_interrupts(dwc_otg_device->core_if);
33806 +
33807 + /*
33808 + * Install the interrupt handler for the common interrupts before
33809 + * enabling common interrupts in core_init below.
33810 + */
33811 +
33812 +#if defined(PLATFORM_INTERFACE)
33813 + devirq = platform_get_irq_byname(_dev, fiq_enable ? "soft" : "usb");
33814 + if (devirq < 0)
33815 + devirq = platform_get_irq(_dev, fiq_enable ? 0 : 1);
33816 +#else
33817 + devirq = _dev->irq;
33818 +#endif
33819 + DWC_DEBUGPL(DBG_CIL, "registering (common) handler for irq%d\n",
33820 + devirq);
33821 + dev_dbg(&_dev->dev, "Calling request_irq(%d)\n", devirq);
33822 + retval = request_irq(devirq, dwc_otg_common_irq,
33823 + IRQF_SHARED,
33824 + "dwc_otg", dwc_otg_device);
33825 + if (retval) {
33826 + DWC_ERROR("request of irq%d failed\n", devirq);
33827 + retval = -EBUSY;
33828 + goto fail;
33829 + } else {
33830 + dwc_otg_device->common_irq_installed = 1;
33831 + }
33832 + dwc_otg_device->os_dep.irq_num = devirq;
33833 + dwc_otg_device->os_dep.fiq_num = -EINVAL;
33834 + if (fiq_enable) {
33835 + int devfiq = platform_get_irq_byname(_dev, "usb");
33836 + if (devfiq < 0)
33837 + devfiq = platform_get_irq(_dev, 1);
33838 + dwc_otg_device->os_dep.fiq_num = devfiq;
33839 + }
33840 +
33841 +#ifndef IRQF_TRIGGER_LOW
33842 +#if defined(LM_INTERFACE) || defined(PLATFORM_INTERFACE)
33843 + dev_dbg(&_dev->dev, "Calling set_irq_type\n");
33844 + set_irq_type(devirq,
33845 +#if (LINUX_VERSION_CODE < KERNEL_VERSION(2,6,30))
33846 + IRQT_LOW
33847 +#else
33848 + IRQ_TYPE_LEVEL_LOW
33849 +#endif
33850 + );
33851 +#endif
33852 +#endif /*IRQF_TRIGGER_LOW*/
33853 +
33854 + /*
33855 + * Initialize the DWC_otg core.
33856 + */
33857 + dev_dbg(&_dev->dev, "Calling dwc_otg_core_init\n");
33858 + dwc_otg_core_init(dwc_otg_device->core_if);
33859 +
33860 +#ifndef DWC_HOST_ONLY
33861 + /*
33862 + * Initialize the PCD
33863 + */
33864 + dev_dbg(&_dev->dev, "Calling pcd_init\n");
33865 + retval = pcd_init(_dev);
33866 + if (retval != 0) {
33867 + DWC_ERROR("pcd_init failed\n");
33868 + dwc_otg_device->pcd = NULL;
33869 + goto fail;
33870 + }
33871 +#endif
33872 +#ifndef DWC_DEVICE_ONLY
33873 + /*
33874 + * Initialize the HCD
33875 + */
33876 + dev_dbg(&_dev->dev, "Calling hcd_init\n");
33877 + retval = hcd_init(_dev);
33878 + if (retval != 0) {
33879 + DWC_ERROR("hcd_init failed\n");
33880 + dwc_otg_device->hcd = NULL;
33881 + goto fail;
33882 + }
33883 +#endif
33884 + /* Recover from drvdata having been overwritten by hcd_init() */
33885 +#ifdef LM_INTERFACE
33886 + lm_set_drvdata(_dev, dwc_otg_device);
33887 +#elif defined(PLATFORM_INTERFACE)
33888 + platform_set_drvdata(_dev, dwc_otg_device);
33889 +#elif defined(PCI_INTERFACE)
33890 + pci_set_drvdata(_dev, dwc_otg_device);
33891 + dwc_otg_device->os_dep.pcidev = _dev;
33892 +#endif
33893 +
33894 + /*
33895 + * Enable the global interrupt after all the interrupt
33896 + * handlers are installed if there is no ADP support; otherwise,
33897 + * perform the initial actions required for the internal ADP logic.
33898 + */
33899 + if (!dwc_otg_get_param_adp_enable(dwc_otg_device->core_if)) {
33900 + dev_dbg(&_dev->dev, "Calling enable_global_interrupts\n");
33901 + dwc_otg_enable_global_interrupts(dwc_otg_device->core_if);
33902 + dev_dbg(&_dev->dev, "Done\n");
33903 + } else
33904 + dwc_otg_adp_start(dwc_otg_device->core_if,
33905 + dwc_otg_is_host_mode(dwc_otg_device->core_if));
33906 +
33907 + return 0;
33908 +
33909 +fail:
33910 + dwc_otg_driver_remove(_dev);
33911 + return retval;
33912 +}
33913 +
33914 +/**
33915 + * This structure defines the methods to be called by a bus driver
33916 + * during the lifecycle of a device on that bus. Both drivers and
33917 + * devices are registered with a bus driver. The bus driver matches
33918 + * devices to drivers based on information in the device and driver
33919 + * structures.
33920 + *
33921 + * The probe function is called when the bus driver matches a device
33922 + * to this driver. The remove function is called when a device is
33923 + * unregistered with the bus driver.
33924 + */
33925 +#ifdef LM_INTERFACE
33926 +static struct lm_driver dwc_otg_driver = {
33927 + .drv = {.name = (char *)dwc_driver_name,},
33928 + .probe = dwc_otg_driver_probe,
33929 + .remove = dwc_otg_driver_remove,
33930 + // 'suspend' and 'resume' absent
33931 +};
33932 +#elif defined(PCI_INTERFACE)
33933 +static const struct pci_device_id pci_ids[] = { {
33934 + PCI_DEVICE(0x16c3, 0xabcd),
33935 + .driver_data =
33936 + (unsigned long)0xdeadbeef,
33937 + }, { /* end: all zeroes */ }
33938 +};
33939 +
33940 +MODULE_DEVICE_TABLE(pci, pci_ids);
33941 +
33942 +/* pci driver glue; this is a "new style" PCI driver module */
33943 +static struct pci_driver dwc_otg_driver = {
33944 + .name = "dwc_otg",
33945 + .id_table = pci_ids,
33946 +
33947 + .probe = dwc_otg_driver_probe,
33948 + .remove = dwc_otg_driver_remove,
33949 +
33950 + .driver = {
33951 + .name = (char *)dwc_driver_name,
33952 + },
33953 +};
33954 +#elif defined(PLATFORM_INTERFACE)
33955 +static struct platform_device_id platform_ids[] = {
33956 + {
33957 + .name = "bcm2708_usb",
33958 + .driver_data = (kernel_ulong_t) 0xdeadbeef,
33959 + },
33960 + { /* end: all zeroes */ }
33961 +};
33962 +MODULE_DEVICE_TABLE(platform, platform_ids);
33963 +
33964 +static const struct of_device_id dwc_otg_of_match_table[] = {
33965 + { .compatible = "brcm,bcm2708-usb", },
33966 + {},
33967 +};
33968 +MODULE_DEVICE_TABLE(of, dwc_otg_of_match_table);
33969 +
33970 +static struct platform_driver dwc_otg_driver = {
33971 + .driver = {
33972 + .name = (char *)dwc_driver_name,
33973 + .of_match_table = dwc_otg_of_match_table,
33974 + },
33975 + .id_table = platform_ids,
33976 +
33977 + .probe = dwc_otg_driver_probe,
33978 + .remove = dwc_otg_driver_remove,
33979 + // no 'shutdown', 'suspend', 'resume', 'suspend_late' or 'resume_early'
33980 +};
33981 +#endif
33982 +
33983 +/**
33984 + * This function is called when the dwc_otg_driver is installed with the
33985 + * insmod command. It registers the dwc_otg_driver structure with the
33986 + * appropriate bus driver. This will cause the dwc_otg_driver_probe function
33987 + * to be called. In addition, the bus driver will automatically expose
33988 + * attributes defined for the device and driver in the special sysfs file
33989 + * system.
33990 + *
33991 + * @return
33992 + */
33993 +static int __init dwc_otg_driver_init(void)
33994 +{
33995 + int retval = 0;
33996 + int error;
33997 + struct device_driver *drv;
33998 +
33999 + if(fiq_fsm_enable && !fiq_enable) {
34000 + printk(KERN_WARNING "dwc_otg: fiq_fsm_enable was set without fiq_enable! Correcting.\n");
34001 + fiq_enable = 1;
34002 + }
34003 +
34004 + printk(KERN_INFO "%s: version %s (%s bus)\n", dwc_driver_name,
34005 + DWC_DRIVER_VERSION,
34006 +#ifdef LM_INTERFACE
34007 + "logicmodule");
34008 + retval = lm_driver_register(&dwc_otg_driver);
34009 + drv = &dwc_otg_driver.drv;
34010 +#elif defined(PCI_INTERFACE)
34011 + "pci");
34012 + retval = pci_register_driver(&dwc_otg_driver);
34013 + drv = &dwc_otg_driver.driver;
34014 +#elif defined(PLATFORM_INTERFACE)
34015 + "platform");
34016 + retval = platform_driver_register(&dwc_otg_driver);
34017 + drv = &dwc_otg_driver.driver;
34018 +#endif
34019 + if (retval < 0) {
34020 + printk(KERN_ERR "%s retval=%d\n", __func__, retval);
34021 + return retval;
34022 + }
34023 + printk(KERN_DEBUG "dwc_otg: FIQ %s\n", fiq_enable ? "enabled":"disabled");
34024 + printk(KERN_DEBUG "dwc_otg: NAK holdoff %s\n", nak_holdoff ? "enabled":"disabled");
34025 + printk(KERN_DEBUG "dwc_otg: FIQ split-transaction FSM %s\n", fiq_fsm_enable ? "enabled":"disabled");
34026 +
34027 + error = driver_create_file(drv, &driver_attr_version);
34028 +#ifdef DEBUG
34029 + error = driver_create_file(drv, &driver_attr_debuglevel);
34030 +#endif
34031 + return retval;
34032 +}
34033 +
34034 +module_init(dwc_otg_driver_init);
34035 +
34036 +/**
34037 + * This function is called when the driver is removed from the kernel
34038 + * with the rmmod command. The driver unregisters itself with its bus
34039 + * driver.
34040 + *
34041 + */
34042 +static void __exit dwc_otg_driver_cleanup(void)
34043 +{
34044 + printk(KERN_DEBUG "dwc_otg_driver_cleanup()\n");
34045 +
34046 +#ifdef LM_INTERFACE
34047 + driver_remove_file(&dwc_otg_driver.drv, &driver_attr_debuglevel);
34048 + driver_remove_file(&dwc_otg_driver.drv, &driver_attr_version);
34049 + lm_driver_unregister(&dwc_otg_driver);
34050 +#elif defined(PCI_INTERFACE)
34051 + driver_remove_file(&dwc_otg_driver.driver, &driver_attr_debuglevel);
34052 + driver_remove_file(&dwc_otg_driver.driver, &driver_attr_version);
34053 + pci_unregister_driver(&dwc_otg_driver);
34054 +#elif defined(PLATFORM_INTERFACE)
34055 + driver_remove_file(&dwc_otg_driver.driver, &driver_attr_debuglevel);
34056 + driver_remove_file(&dwc_otg_driver.driver, &driver_attr_version);
34057 + platform_driver_unregister(&dwc_otg_driver);
34058 +#endif
34059 +
34060 + printk(KERN_INFO "%s module removed\n", dwc_driver_name);
34061 +}
34062 +
34063 +module_exit(dwc_otg_driver_cleanup);
34064 +
34065 +MODULE_DESCRIPTION(DWC_DRIVER_DESC);
34066 +MODULE_AUTHOR("Synopsys Inc.");
34067 +MODULE_LICENSE("GPL");
34068 +
34069 +module_param_named(otg_cap, dwc_otg_module_params.otg_cap, int, 0444);
34070 +MODULE_PARM_DESC(otg_cap, "OTG Capabilities 0=HNP&SRP 1=SRP Only 2=None");
34071 +module_param_named(opt, dwc_otg_module_params.opt, int, 0444);
34072 +MODULE_PARM_DESC(opt, "OPT Mode");
34073 +module_param_named(dma_enable, dwc_otg_module_params.dma_enable, int, 0444);
34074 +MODULE_PARM_DESC(dma_enable, "DMA Mode 0=Slave 1=DMA enabled");
34075 +
34076 +module_param_named(dma_desc_enable, dwc_otg_module_params.dma_desc_enable, int,
34077 + 0444);
34078 +MODULE_PARM_DESC(dma_desc_enable,
34079 + "DMA Desc Mode 0=Address DMA 1=DMA Descriptor enabled");
34080 +
34081 +module_param_named(dma_burst_size, dwc_otg_module_params.dma_burst_size, int,
34082 + 0444);
34083 +MODULE_PARM_DESC(dma_burst_size,
34084 + "DMA Burst Size 1, 4, 8, 16, 32, 64, 128, 256");
34085 +module_param_named(speed, dwc_otg_module_params.speed, int, 0444);
34086 +MODULE_PARM_DESC(speed, "Speed 0=High Speed 1=Full Speed");
34087 +module_param_named(host_support_fs_ls_low_power,
34088 + dwc_otg_module_params.host_support_fs_ls_low_power, int,
34089 + 0444);
34090 +MODULE_PARM_DESC(host_support_fs_ls_low_power,
34091 + "Support Low Power w/FS or LS 0=Support 1=Don't Support");
34092 +module_param_named(host_ls_low_power_phy_clk,
34093 + dwc_otg_module_params.host_ls_low_power_phy_clk, int, 0444);
34094 +MODULE_PARM_DESC(host_ls_low_power_phy_clk,
34095 + "Low Speed Low Power Clock 0=48Mhz 1=6Mhz");
34096 +module_param_named(enable_dynamic_fifo,
34097 + dwc_otg_module_params.enable_dynamic_fifo, int, 0444);
34098 +MODULE_PARM_DESC(enable_dynamic_fifo, "0=cC Setting 1=Allow Dynamic Sizing");
34099 +module_param_named(data_fifo_size, dwc_otg_module_params.data_fifo_size, int,
34100 + 0444);
34101 +MODULE_PARM_DESC(data_fifo_size,
34102 + "Total number of words in the data FIFO memory 32-32768");
34103 +module_param_named(dev_rx_fifo_size, dwc_otg_module_params.dev_rx_fifo_size,
34104 + int, 0444);
34105 +MODULE_PARM_DESC(dev_rx_fifo_size, "Number of words in the Rx FIFO 16-32768");
34106 +module_param_named(dev_nperio_tx_fifo_size,
34107 + dwc_otg_module_params.dev_nperio_tx_fifo_size, int, 0444);
34108 +MODULE_PARM_DESC(dev_nperio_tx_fifo_size,
34109 + "Number of words in the non-periodic Tx FIFO 16-32768");
34110 +module_param_named(dev_perio_tx_fifo_size_1,
34111 + dwc_otg_module_params.dev_perio_tx_fifo_size[0], int, 0444);
34112 +MODULE_PARM_DESC(dev_perio_tx_fifo_size_1,
34113 + "Number of words in the periodic Tx FIFO 4-768");
34114 +module_param_named(dev_perio_tx_fifo_size_2,
34115 + dwc_otg_module_params.dev_perio_tx_fifo_size[1], int, 0444);
34116 +MODULE_PARM_DESC(dev_perio_tx_fifo_size_2,
34117 + "Number of words in the periodic Tx FIFO 4-768");
34118 +module_param_named(dev_perio_tx_fifo_size_3,
34119 + dwc_otg_module_params.dev_perio_tx_fifo_size[2], int, 0444);
34120 +MODULE_PARM_DESC(dev_perio_tx_fifo_size_3,
34121 + "Number of words in the periodic Tx FIFO 4-768");
34122 +module_param_named(dev_perio_tx_fifo_size_4,
34123 + dwc_otg_module_params.dev_perio_tx_fifo_size[3], int, 0444);
34124 +MODULE_PARM_DESC(dev_perio_tx_fifo_size_4,
34125 + "Number of words in the periodic Tx FIFO 4-768");
34126 +module_param_named(dev_perio_tx_fifo_size_5,
34127 + dwc_otg_module_params.dev_perio_tx_fifo_size[4], int, 0444);
34128 +MODULE_PARM_DESC(dev_perio_tx_fifo_size_5,
34129 + "Number of words in the periodic Tx FIFO 4-768");
34130 +module_param_named(dev_perio_tx_fifo_size_6,
34131 + dwc_otg_module_params.dev_perio_tx_fifo_size[5], int, 0444);
34132 +MODULE_PARM_DESC(dev_perio_tx_fifo_size_6,
34133 + "Number of words in the periodic Tx FIFO 4-768");
34134 +module_param_named(dev_perio_tx_fifo_size_7,
34135 + dwc_otg_module_params.dev_perio_tx_fifo_size[6], int, 0444);
34136 +MODULE_PARM_DESC(dev_perio_tx_fifo_size_7,
34137 + "Number of words in the periodic Tx FIFO 4-768");
34138 +module_param_named(dev_perio_tx_fifo_size_8,
34139 + dwc_otg_module_params.dev_perio_tx_fifo_size[7], int, 0444);
34140 +MODULE_PARM_DESC(dev_perio_tx_fifo_size_8,
34141 + "Number of words in the periodic Tx FIFO 4-768");
34142 +module_param_named(dev_perio_tx_fifo_size_9,
34143 + dwc_otg_module_params.dev_perio_tx_fifo_size[8], int, 0444);
34144 +MODULE_PARM_DESC(dev_perio_tx_fifo_size_9,
34145 + "Number of words in the periodic Tx FIFO 4-768");
34146 +module_param_named(dev_perio_tx_fifo_size_10,
34147 + dwc_otg_module_params.dev_perio_tx_fifo_size[9], int, 0444);
34148 +MODULE_PARM_DESC(dev_perio_tx_fifo_size_10,
34149 + "Number of words in the periodic Tx FIFO 4-768");
34150 +module_param_named(dev_perio_tx_fifo_size_11,
34151 + dwc_otg_module_params.dev_perio_tx_fifo_size[10], int, 0444);
34152 +MODULE_PARM_DESC(dev_perio_tx_fifo_size_11,
34153 + "Number of words in the periodic Tx FIFO 4-768");
34154 +module_param_named(dev_perio_tx_fifo_size_12,
34155 + dwc_otg_module_params.dev_perio_tx_fifo_size[11], int, 0444);
34156 +MODULE_PARM_DESC(dev_perio_tx_fifo_size_12,
34157 + "Number of words in the periodic Tx FIFO 4-768");
34158 +module_param_named(dev_perio_tx_fifo_size_13,
34159 + dwc_otg_module_params.dev_perio_tx_fifo_size[12], int, 0444);
34160 +MODULE_PARM_DESC(dev_perio_tx_fifo_size_13,
34161 + "Number of words in the periodic Tx FIFO 4-768");
34162 +module_param_named(dev_perio_tx_fifo_size_14,
34163 + dwc_otg_module_params.dev_perio_tx_fifo_size[13], int, 0444);
34164 +MODULE_PARM_DESC(dev_perio_tx_fifo_size_14,
34165 + "Number of words in the periodic Tx FIFO 4-768");
34166 +module_param_named(dev_perio_tx_fifo_size_15,
34167 + dwc_otg_module_params.dev_perio_tx_fifo_size[14], int, 0444);
34168 +MODULE_PARM_DESC(dev_perio_tx_fifo_size_15,
34169 + "Number of words in the periodic Tx FIFO 4-768");
34170 +module_param_named(host_rx_fifo_size, dwc_otg_module_params.host_rx_fifo_size,
34171 + int, 0444);
34172 +MODULE_PARM_DESC(host_rx_fifo_size, "Number of words in the Rx FIFO 16-32768");
34173 +module_param_named(host_nperio_tx_fifo_size,
34174 + dwc_otg_module_params.host_nperio_tx_fifo_size, int, 0444);
34175 +MODULE_PARM_DESC(host_nperio_tx_fifo_size,
34176 + "Number of words in the non-periodic Tx FIFO 16-32768");
34177 +module_param_named(host_perio_tx_fifo_size,
34178 + dwc_otg_module_params.host_perio_tx_fifo_size, int, 0444);
34179 +MODULE_PARM_DESC(host_perio_tx_fifo_size,
34180 + "Number of words in the host periodic Tx FIFO 16-32768");
34181 +module_param_named(max_transfer_size, dwc_otg_module_params.max_transfer_size,
34182 + int, 0444);
34183 +/** @todo Set the max to 512K, modify checks */
34184 +MODULE_PARM_DESC(max_transfer_size,
34185 + "The maximum transfer size supported in bytes 2047-65535");
34186 +module_param_named(max_packet_count, dwc_otg_module_params.max_packet_count,
34187 + int, 0444);
34188 +MODULE_PARM_DESC(max_packet_count,
34189 + "The maximum number of packets in a transfer 15-511");
34190 +module_param_named(host_channels, dwc_otg_module_params.host_channels, int,
34191 + 0444);
34192 +MODULE_PARM_DESC(host_channels,
34193 + "The number of host channel registers to use 1-16");
34194 +module_param_named(dev_endpoints, dwc_otg_module_params.dev_endpoints, int,
34195 + 0444);
34196 +MODULE_PARM_DESC(dev_endpoints,
34197 + "The number of endpoints in addition to EP0 available for device mode 1-15");
34198 +module_param_named(phy_type, dwc_otg_module_params.phy_type, int, 0444);
34199 +MODULE_PARM_DESC(phy_type, "0=Reserved 1=UTMI+ 2=ULPI");
34200 +module_param_named(phy_utmi_width, dwc_otg_module_params.phy_utmi_width, int,
34201 + 0444);
34202 +MODULE_PARM_DESC(phy_utmi_width, "Specifies the UTMI+ Data Width 8 or 16 bits");
34203 +module_param_named(phy_ulpi_ddr, dwc_otg_module_params.phy_ulpi_ddr, int, 0444);
34204 +MODULE_PARM_DESC(phy_ulpi_ddr,
34205 + "ULPI at double or single data rate 0=Single 1=Double");
34206 +module_param_named(phy_ulpi_ext_vbus, dwc_otg_module_params.phy_ulpi_ext_vbus,
34207 + int, 0444);
34208 +MODULE_PARM_DESC(phy_ulpi_ext_vbus,
34209 + "ULPI PHY using internal or external vbus 0=Internal");
34210 +module_param_named(i2c_enable, dwc_otg_module_params.i2c_enable, int, 0444);
34211 +MODULE_PARM_DESC(i2c_enable, "FS PHY Interface");
34212 +module_param_named(ulpi_fs_ls, dwc_otg_module_params.ulpi_fs_ls, int, 0444);
34213 +MODULE_PARM_DESC(ulpi_fs_ls, "ULPI PHY FS/LS mode only");
34214 +module_param_named(ts_dline, dwc_otg_module_params.ts_dline, int, 0444);
34215 +MODULE_PARM_DESC(ts_dline, "Term select Dline pulsing for all PHYs");
34216 +module_param_named(debug, g_dbg_lvl, int, 0444);
34217 +MODULE_PARM_DESC(debug, "");
34218 +
34219 +module_param_named(en_multiple_tx_fifo,
34220 + dwc_otg_module_params.en_multiple_tx_fifo, int, 0444);
34221 +MODULE_PARM_DESC(en_multiple_tx_fifo,
34222 + "Dedicated Non Periodic Tx FIFOs 0=disabled 1=enabled");
34223 +module_param_named(dev_tx_fifo_size_1,
34224 + dwc_otg_module_params.dev_tx_fifo_size[0], int, 0444);
34225 +MODULE_PARM_DESC(dev_tx_fifo_size_1, "Number of words in the Tx FIFO 4-768");
34226 +module_param_named(dev_tx_fifo_size_2,
34227 + dwc_otg_module_params.dev_tx_fifo_size[1], int, 0444);
34228 +MODULE_PARM_DESC(dev_tx_fifo_size_2, "Number of words in the Tx FIFO 4-768");
34229 +module_param_named(dev_tx_fifo_size_3,
34230 + dwc_otg_module_params.dev_tx_fifo_size[2], int, 0444);
34231 +MODULE_PARM_DESC(dev_tx_fifo_size_3, "Number of words in the Tx FIFO 4-768");
34232 +module_param_named(dev_tx_fifo_size_4,
34233 + dwc_otg_module_params.dev_tx_fifo_size[3], int, 0444);
34234 +MODULE_PARM_DESC(dev_tx_fifo_size_4, "Number of words in the Tx FIFO 4-768");
34235 +module_param_named(dev_tx_fifo_size_5,
34236 + dwc_otg_module_params.dev_tx_fifo_size[4], int, 0444);
34237 +MODULE_PARM_DESC(dev_tx_fifo_size_5, "Number of words in the Tx FIFO 4-768");
34238 +module_param_named(dev_tx_fifo_size_6,
34239 + dwc_otg_module_params.dev_tx_fifo_size[5], int, 0444);
34240 +MODULE_PARM_DESC(dev_tx_fifo_size_6, "Number of words in the Tx FIFO 4-768");
34241 +module_param_named(dev_tx_fifo_size_7,
34242 + dwc_otg_module_params.dev_tx_fifo_size[6], int, 0444);
34243 +MODULE_PARM_DESC(dev_tx_fifo_size_7, "Number of words in the Tx FIFO 4-768");
34244 +module_param_named(dev_tx_fifo_size_8,
34245 + dwc_otg_module_params.dev_tx_fifo_size[7], int, 0444);
34246 +MODULE_PARM_DESC(dev_tx_fifo_size_8, "Number of words in the Tx FIFO 4-768");
34247 +module_param_named(dev_tx_fifo_size_9,
34248 + dwc_otg_module_params.dev_tx_fifo_size[8], int, 0444);
34249 +MODULE_PARM_DESC(dev_tx_fifo_size_9, "Number of words in the Tx FIFO 4-768");
34250 +module_param_named(dev_tx_fifo_size_10,
34251 + dwc_otg_module_params.dev_tx_fifo_size[9], int, 0444);
34252 +MODULE_PARM_DESC(dev_tx_fifo_size_10, "Number of words in the Tx FIFO 4-768");
34253 +module_param_named(dev_tx_fifo_size_11,
34254 + dwc_otg_module_params.dev_tx_fifo_size[10], int, 0444);
34255 +MODULE_PARM_DESC(dev_tx_fifo_size_11, "Number of words in the Tx FIFO 4-768");
34256 +module_param_named(dev_tx_fifo_size_12,
34257 + dwc_otg_module_params.dev_tx_fifo_size[11], int, 0444);
34258 +MODULE_PARM_DESC(dev_tx_fifo_size_12, "Number of words in the Tx FIFO 4-768");
34259 +module_param_named(dev_tx_fifo_size_13,
34260 + dwc_otg_module_params.dev_tx_fifo_size[12], int, 0444);
34261 +MODULE_PARM_DESC(dev_tx_fifo_size_13, "Number of words in the Tx FIFO 4-768");
34262 +module_param_named(dev_tx_fifo_size_14,
34263 + dwc_otg_module_params.dev_tx_fifo_size[13], int, 0444);
34264 +MODULE_PARM_DESC(dev_tx_fifo_size_14, "Number of words in the Tx FIFO 4-768");
34265 +module_param_named(dev_tx_fifo_size_15,
34266 + dwc_otg_module_params.dev_tx_fifo_size[14], int, 0444);
34267 +MODULE_PARM_DESC(dev_tx_fifo_size_15, "Number of words in the Tx FIFO 4-768");
34268 +
34269 +module_param_named(thr_ctl, dwc_otg_module_params.thr_ctl, int, 0444);
34270 +MODULE_PARM_DESC(thr_ctl,
34271 + "Thresholding enable flag bit 0 - non ISO Tx thr., 1 - ISO Tx thr., 2 - Rx thr.- bit 0=disabled 1=enabled");
34272 +module_param_named(tx_thr_length, dwc_otg_module_params.tx_thr_length, int,
34273 + 0444);
34274 +MODULE_PARM_DESC(tx_thr_length, "Tx Threshold length in 32 bit DWORDs");
34275 +module_param_named(rx_thr_length, dwc_otg_module_params.rx_thr_length, int,
34276 + 0444);
34277 +MODULE_PARM_DESC(rx_thr_length, "Rx Threshold length in 32 bit DWORDs");
34278 +
34279 +module_param_named(pti_enable, dwc_otg_module_params.pti_enable, int, 0444);
34280 +module_param_named(mpi_enable, dwc_otg_module_params.mpi_enable, int, 0444);
34281 +module_param_named(lpm_enable, dwc_otg_module_params.lpm_enable, int, 0444);
34282 +MODULE_PARM_DESC(lpm_enable, "LPM Enable 0=LPM Disabled 1=LPM Enabled");
34283 +module_param_named(ic_usb_cap, dwc_otg_module_params.ic_usb_cap, int, 0444);
34284 +MODULE_PARM_DESC(ic_usb_cap,
34285 + "IC_USB Capability 0=IC_USB Disabled 1=IC_USB Enabled");
34286 +module_param_named(ahb_thr_ratio, dwc_otg_module_params.ahb_thr_ratio, int,
34287 + 0444);
34288 +MODULE_PARM_DESC(ahb_thr_ratio, "AHB Threshold Ratio");
34289 +module_param_named(power_down, dwc_otg_module_params.power_down, int, 0444);
34290 +MODULE_PARM_DESC(power_down, "Power Down Mode");
34291 +module_param_named(reload_ctl, dwc_otg_module_params.reload_ctl, int, 0444);
34292 +MODULE_PARM_DESC(reload_ctl, "HFIR Reload Control");
34293 +module_param_named(dev_out_nak, dwc_otg_module_params.dev_out_nak, int, 0444);
34294 +MODULE_PARM_DESC(dev_out_nak, "Enable Device OUT NAK");
34295 +module_param_named(cont_on_bna, dwc_otg_module_params.cont_on_bna, int, 0444);
34296 +MODULE_PARM_DESC(cont_on_bna, "Enable Continue on BNA");
34297 +module_param_named(ahb_single, dwc_otg_module_params.ahb_single, int, 0444);
34298 +MODULE_PARM_DESC(ahb_single, "Enable AHB Single Support");
34299 +module_param_named(adp_enable, dwc_otg_module_params.adp_enable, int, 0444);
34300 +MODULE_PARM_DESC(adp_enable, "ADP Enable 0=ADP Disabled 1=ADP Enabled");
34301 +module_param_named(otg_ver, dwc_otg_module_params.otg_ver, int, 0444);
34302 +MODULE_PARM_DESC(otg_ver, "OTG revision supported 0=OTG 1.3 1=OTG 2.0");
34303 +module_param(microframe_schedule, bool, 0444);
34304 +MODULE_PARM_DESC(microframe_schedule, "Enable the microframe scheduler");
34305 +
34306 +module_param(fiq_enable, bool, 0444);
34307 +MODULE_PARM_DESC(fiq_enable, "Enable the FIQ");
34308 +module_param(nak_holdoff, ushort, 0644);
34309 +MODULE_PARM_DESC(nak_holdoff, "Throttle duration for bulk split-transaction endpoints on a NAK. Default 8");
34310 +module_param(fiq_fsm_enable, bool, 0444);
34311 +MODULE_PARM_DESC(fiq_fsm_enable, "Enable the FIQ to perform split transactions as defined by fiq_fsm_mask");
34312 +module_param(fiq_fsm_mask, ushort, 0444);
34313 +MODULE_PARM_DESC(fiq_fsm_mask, "Bitmask of transactions to perform in the FIQ.\n"
34314 + "Bit 0 : Non-periodic split transactions\n"
34315 + "Bit 1 : Periodic split transactions\n"
34316 + "Bit 2 : High-speed multi-transfer isochronous\n"
34317 + "All other bits should be set 0.");
34318 +module_param(int_ep_interval_min, ushort, 0644);
34319 +MODULE_PARM_DESC(int_ep_interval_min, "Clamp high-speed Interrupt endpoints to a minimum polling interval.\n"
34320 + "0..1 = Use endpoint default\n"
34321 + "2..n = Minimum interval n microframes. Use powers of 2.\n");
34322 +
34323 +module_param(cil_force_host, bool, 0644);
34324 +MODULE_PARM_DESC(cil_force_host, "On a connector-ID status change, "
34325 + "force Host Mode regardless of OTG state.");
34326 +
34327 +/** @page "Module Parameters"
34328 + *
34329 + * The following parameters may be specified when starting the module.
34330 + * These parameters define how the DWC_otg controller should be
34331 + * configured. Parameter values are passed to the CIL initialization
34332 + * function dwc_otg_cil_init.
34333 + *
34334 + * Example: <code>modprobe dwc_otg speed=1 otg_cap=1</code>
34335 + *
34336 +
34337 + <table>
34338 + <tr><td>Parameter Name</td><td>Meaning</td></tr>
34339 +
34340 + <tr>
34341 + <td>otg_cap</td>
34342 + <td>Specifies the OTG capabilities. The driver will automatically detect the
34343 + value for this parameter if none is specified.
34344 + - 0: HNP and SRP capable (default, if available)
34345 + - 1: SRP Only capable
34346 + - 2: No HNP/SRP capable
34347 + </td></tr>
34348 +
34349 + <tr>
34350 + <td>dma_enable</td>
34351 + <td>Specifies whether to use slave or DMA mode for accessing the data FIFOs.
34352 + The driver will automatically detect the value for this parameter if none is
34353 + specified.
34354 + - 0: Slave
34355 + - 1: DMA (default, if available)
34356 + </td></tr>
34357 +
34358 + <tr>
34359 + <td>dma_burst_size</td>
34360 + <td>The DMA Burst size (applicable only for External DMA Mode).
34361 + - Values: 1, 4, 8 16, 32, 64, 128, 256 (default 32)
34362 + </td></tr>
34363 +
34364 + <tr>
34365 + <td>speed</td>
34366 + <td>Specifies the maximum speed of operation in host and device mode. The
34367 + actual speed depends on the speed of the attached device and the value of
34368 + phy_type.
34369 + - 0: High Speed (default)
34370 + - 1: Full Speed
34371 + </td></tr>
34372 +
34373 + <tr>
34374 + <td>host_support_fs_ls_low_power</td>
34375 + <td>Specifies whether low power mode is supported when attached to a Full
34376 + Speed or Low Speed device in host mode.
34377 + - 0: Don't support low power mode (default)
34378 + - 1: Support low power mode
34379 + </td></tr>
34380 +
34381 + <tr>
34382 + <td>host_ls_low_power_phy_clk</td>
34383 + <td>Specifies the PHY clock rate in low power mode when connected to a Low
34384 + Speed device in host mode. This parameter is applicable only if
34385 + HOST_SUPPORT_FS_LS_LOW_POWER is enabled.
34386 + - 0: 48 MHz (default)
34387 + - 1: 6 MHz
34388 + </td></tr>
34389 +
34390 + <tr>
34391 + <td>enable_dynamic_fifo</td>
34392 + <td> Specifies whether FIFOs may be resized by the driver software.
34393 + - 0: Use coreConsultant (cC) FIFO size parameters
34394 + - 1: Allow dynamic FIFO sizing (default)
34395 + </td></tr>
34396 +
34397 + <tr>
34398 + <td>data_fifo_size</td>
34399 + <td>Total number of 4-byte words in the data FIFO memory. This memory
34400 + includes the Rx FIFO, non-periodic Tx FIFO, and periodic Tx FIFOs.
34401 + - Values: 32 to 32768 (default 8192)
34402 +
34403 + Note: The total FIFO memory depth in the FPGA configuration is 8192.
34404 + </td></tr>
34405 +
34406 + <tr>
34407 + <td>dev_rx_fifo_size</td>
34408 + <td>Number of 4-byte words in the Rx FIFO in device mode when dynamic
34409 + FIFO sizing is enabled.
34410 + - Values: 16 to 32768 (default 1064)
34411 + </td></tr>
34412 +
34413 + <tr>
34414 + <td>dev_nperio_tx_fifo_size</td>
34415 + <td>Number of 4-byte words in the non-periodic Tx FIFO in device mode when
34416 + dynamic FIFO sizing is enabled.
34417 + - Values: 16 to 32768 (default 1024)
34418 + </td></tr>
34419 +
34420 + <tr>
34421 + <td>dev_perio_tx_fifo_size_n (n = 1 to 15)</td>
34422 + <td>Number of 4-byte words in each of the periodic Tx FIFOs in device mode
34423 + when dynamic FIFO sizing is enabled.
34424 + - Values: 4 to 768 (default 256)
34425 + </td></tr>
34426 +
34427 + <tr>
34428 + <td>host_rx_fifo_size</td>
34429 + <td>Number of 4-byte words in the Rx FIFO in host mode when dynamic FIFO
34430 + sizing is enabled.
34431 + - Values: 16 to 32768 (default 1024)
34432 + </td></tr>
34433 +
34434 + <tr>
34435 + <td>host_nperio_tx_fifo_size</td>
34436 + <td>Number of 4-byte words in the non-periodic Tx FIFO in host mode when
34437 + dynamic FIFO sizing is enabled in the core.
34438 + - Values: 16 to 32768 (default 1024)
34439 + </td></tr>
34440 +
34441 + <tr>
34442 + <td>host_perio_tx_fifo_size</td>
34443 + <td>Number of 4-byte words in the host periodic Tx FIFO when dynamic FIFO
34444 + sizing is enabled.
34445 + - Values: 16 to 32768 (default 1024)
34446 + </td></tr>
34447 +
34448 + <tr>
34449 + <td>max_transfer_size</td>
34450 + <td>The maximum transfer size supported in bytes.
34451 + - Values: 2047 to 65,535 (default 65,535)
34452 + </td></tr>
34453 +
34454 + <tr>
34455 + <td>max_packet_count</td>
34456 + <td>The maximum number of packets in a transfer.
34457 + - Values: 15 to 511 (default 511)
34458 + </td></tr>
34459 +
34460 + <tr>
34461 + <td>host_channels</td>
34462 + <td>The number of host channel registers to use.
34463 + - Values: 1 to 16 (default 12)
34464 +
34465 + Note: The FPGA configuration supports a maximum of 12 host channels.
34466 + </td></tr>
34467 +
34468 + <tr>
34469 + <td>dev_endpoints</td>
34470 + <td>The number of endpoints in addition to EP0 available for device mode
34471 + operations.
34472 + - Values: 1 to 15 (default 6 IN and OUT)
34473 +
34474 + Note: The FPGA configuration supports a maximum of 6 IN and OUT endpoints in
34475 + addition to EP0.
34476 + </td></tr>
34477 +
34478 + <tr>
34479 + <td>phy_type</td>
34480 + <td>Specifies the type of PHY interface to use. By default, the driver will
34481 + automatically detect the phy_type.
34482 + - 0: Full Speed
34483 + - 1: UTMI+ (default, if available)
34484 + - 2: ULPI
34485 + </td></tr>
34486 +
34487 + <tr>
34488 + <td>phy_utmi_width</td>
34489 + <td>Specifies the UTMI+ Data Width. This parameter is applicable for a
34490 + phy_type of UTMI+. Also, this parameter is applicable only if the
34491 + OTG_HSPHY_WIDTH cC parameter was set to "8 and 16 bits", meaning that the
34492 + core has been configured to work at either data path width.
34493 + - Values: 8 or 16 bits (default 16)
34494 + </td></tr>
34495 +
34496 + <tr>
34497 + <td>phy_ulpi_ddr</td>
34498 + <td>Specifies whether the ULPI operates at double or single data rate. This
34499 + parameter is only applicable if phy_type is ULPI.
34500 + - 0: single data rate ULPI interface with 8 bit wide data bus (default)
34501 + - 1: double data rate ULPI interface with 4 bit wide data bus
34502 + </td></tr>
34503 +
34504 + <tr>
34505 + <td>i2c_enable</td>
34506 + <td>Specifies whether to use the I2C interface for full speed PHY. This
34507 + parameter is only applicable if PHY_TYPE is FS.
34508 + - 0: Disabled (default)
34509 + - 1: Enabled
34510 + </td></tr>
34511 +
34512 + <tr>
34513 + <td>ulpi_fs_ls</td>
34514 + <td>Specifies whether to use ULPI FS/LS mode only.
34515 + - 0: Disabled (default)
34516 + - 1: Enabled
34517 + </td></tr>
34518 +
34519 + <tr>
34520 + <td>ts_dline</td>
34521 + <td>Specifies whether term select D-Line pulsing for all PHYs is enabled.
34522 + - 0: Disabled (default)
34523 + - 1: Enabled
34524 + </td></tr>
34525 +
34526 + <tr>
34527 + <td>en_multiple_tx_fifo</td>
34528 + <td>Specifies whether dedicated Tx FIFOs are enabled for non-periodic IN EPs.
34529 + The driver will automatically detect the value for this parameter if none is
34530 + specified.
34531 + - 0: Disabled
34532 + - 1: Enabled (default, if available)
34533 + </td></tr>
34534 +
34535 + <tr>
34536 + <td>dev_tx_fifo_size_n (n = 1 to 15)</td>
34537 + <td>Number of 4-byte words in each of the Tx FIFOs in device mode
34538 + when dynamic FIFO sizing is enabled.
34539 + - Values: 4 to 768 (default 256)
34540 + </td></tr>
34541 +
34542 + <tr>
34543 + <td>tx_thr_length</td>
34544 + <td>Transmit Threshold length in 32 bit double words
34545 + - Values: 8 to 128 (default 64)
34546 + </td></tr>
34547 +
34548 + <tr>
34549 + <td>rx_thr_length</td>
34550 + <td>Receive Threshold length in 32 bit double words
34551 + - Values: 8 to 128 (default 64)
34552 + </td></tr>
34553 +
34554 +<tr>
34555 + <td>thr_ctl</td>
34556 + <td>Specifies whether to enable Thresholding for Device mode. Bits 0, 1, 2 of
34557 + this parameter specify whether thresholding is enabled for non-Iso Tx, Iso Tx and
34558 + Rx transfers respectively.
34559 + The driver will automatically detect the value for this parameter if none is
34560 + specified.
34561 + - Values: 0 to 7 (default 0)
34562 + Bit values indicate:
34563 + - 0: Thresholding disabled
34564 + - 1: Thresholding enabled
34565 + </td></tr>
34566 +
34567 +<tr>
34568 + <td>dma_desc_enable</td>
34569 + <td>Specifies whether to enable Descriptor DMA mode.
34570 + The driver will automatically detect the value for this parameter if none is
34571 + specified.
34572 + - 0: Descriptor DMA disabled
34573 + - 1: Descriptor DMA (default, if available)
34574 + </td></tr>
34575 +
34576 +<tr>
34577 + <td>mpi_enable</td>
34578 + <td>Specifies whether to enable MPI enhancement mode.
34579 + The driver will automatically detect the value for this parameter if none is
34580 + specified.
34581 + - 0: MPI disabled (default)
34582 + - 1: MPI enable
34583 + </td></tr>
34584 +
34585 +<tr>
34586 + <td>pti_enable</td>
34587 + <td>Specifies whether to enable PTI enhancement support.
34588 + The driver will automatically detect the value for this parameter if none is
34589 + specified.
34590 + - 0: PTI disabled (default)
34591 + - 1: PTI enable
34592 + </td></tr>
34593 +
34594 +<tr>
34595 + <td>lpm_enable</td>
34596 + <td>Specifies whether to enable LPM support.
34597 + The driver will automatically detect the value for this parameter if none is
34598 + specified.
34599 + - 0: LPM disabled
34600 + - 1: LPM enable (default, if available)
34601 + </td></tr>
34602 +
34603 +<tr>
34604 + <td>ic_usb_cap</td>
34605 + <td>Specifies whether to enable IC_USB capability.
34606 + The driver will automatically detect the value for this parameter if none is
34607 + specified.
34608 + - 0: IC_USB disabled (default, if available)
34609 + - 1: IC_USB enable
34610 + </td></tr>
34611 +
34612 +<tr>
34613 + <td>ahb_thr_ratio</td>
34614 + <td>Specifies AHB Threshold ratio.
34615 + - Values: 0 to 3 (default 0)
34616 + </td></tr>
34617 +
34618 +<tr>
34619 + <td>power_down</td>
34620 + <td>Specifies Power Down(Hibernation) Mode.
34621 + The driver will automatically detect the value for this parameter if none is
34622 + specified.
34623 + - 0: Power Down disabled (default)
34624 + - 2: Power Down enabled
34625 + </td></tr>
34626 +
34627 + <tr>
34628 + <td>reload_ctl</td>
34629 + <td>Specifies whether dynamic reloading of the HFIR register is allowed during
34630 + run time. The driver will automatically detect the value for this parameter if
34631 + none is specified. In case the HFIR value is reloaded when HFIR.RldCtrl == 1'b0
34632 + the core might misbehave.
34633 + - 0: Reload Control disabled (default)
34634 + - 1: Reload Control enabled
34635 + </td></tr>
34636 +
34637 + <tr>
34638 + <td>dev_out_nak</td>
34639 + <td>Specifies whether the Device OUT NAK enhancement is enabled or not.
34640 + The driver will automatically detect the value for this parameter if
34641 + none is specified. This parameter is valid only when OTG_EN_DESC_DMA == 1b1.
34642 + - 0: The core does not set NAK after Bulk OUT transfer complete (default)
34643 + - 1: The core sets NAK after Bulk OUT transfer complete
34644 + </td></tr>
34645 +
34646 + <tr>
34647 + <td>cont_on_bna</td>
34648 + <td>Specifies whether Continue on BNA is enabled or not.
34649 + After receiving a BNA interrupt the core disables the endpoint; when the
34650 + endpoint is re-enabled by the application:
34651 + - 0: Core starts processing from the DOEPDMA descriptor (default)
34652 + - 1: Core starts processing from the descriptor which received the BNA.
34653 + This parameter is valid only when OTG_EN_DESC_DMA == 1b1.
34654 + </td></tr>
34655 +
34656 + <tr>
34657 + <td>ahb_single</td>
34658 + <td>When programmed, this bit enables SINGLE transfers for the remainder data
34659 + of a transfer in DMA mode of operation.
34660 + - 0: The remainder data will be sent using INCR burst size (default)
34661 + - 1: The remainder data will be sent using SINGLE burst size.
34662 + </td></tr>
34663 +
34664 +<tr>
34665 + <td>adp_enable</td>
34666 + <td>Specifies whether ADP feature is enabled.
34667 + The driver will automatically detect the value for this parameter if none is
34668 + specified.
34669 + - 0: ADP feature disabled (default)
34670 + - 1: ADP feature enabled
34671 + </td></tr>
34672 +
34673 + <tr>
34674 + <td>otg_ver</td>
34675 + <td>Specifies whether the core operates as a USB OTG Revision 2.0 or a Revision 1.3
34676 + USB OTG device.
34677 + - 0: OTG 2.0 support disabled (default)
34678 + - 1: OTG 2.0 support enabled
34679 + </td></tr>
34680 +
34681 +*/
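As a concrete illustration of the parameters tabulated above, the FIQ-related options declared earlier in this file can be combined on a single modprobe invocation. The values below simply restate the documented defaults and bit definitions and are illustrative rather than a recommended configuration:

	modprobe dwc_otg fiq_enable=1 fiq_fsm_enable=1 fiq_fsm_mask=0x7 nak_holdoff=8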
34682 --- /dev/null
34683 +++ b/drivers/usb/host/dwc_otg/dwc_otg_driver.h
34684 @@ -0,0 +1,86 @@
34685 +/* ==========================================================================
34686 + * $File: //dwh/usb_iip/dev/software/otg/linux/drivers/dwc_otg_driver.h $
34687 + * $Revision: #19 $
34688 + * $Date: 2010/11/15 $
34689 + * $Change: 1627671 $
34690 + *
34691 + * Synopsys HS OTG Linux Software Driver and documentation (hereinafter,
34692 + * "Software") is an Unsupported proprietary work of Synopsys, Inc. unless
34693 + * otherwise expressly agreed to in writing between Synopsys and you.
34694 + *
34695 + * The Software IS NOT an item of Licensed Software or Licensed Product under
34696 + * any End User Software License Agreement or Agreement for Licensed Product
34697 + * with Synopsys or any supplement thereto. You are permitted to use and
34698 + * redistribute this Software in source and binary forms, with or without
34699 + * modification, provided that redistributions of source code must retain this
34700 + * notice. You may not view, use, disclose, copy or distribute this file or
34701 + * any information contained herein except pursuant to this license grant from
34702 + * Synopsys. If you do not agree with this notice, including the disclaimer
34703 + * below, then you are not authorized to use the Software.
34704 + *
34705 + * THIS SOFTWARE IS BEING DISTRIBUTED BY SYNOPSYS SOLELY ON AN "AS IS" BASIS
34706 + * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
34707 + * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
34708 + * ARE HEREBY DISCLAIMED. IN NO EVENT SHALL SYNOPSYS BE LIABLE FOR ANY DIRECT,
34709 + * INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
34710 + * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
34711 + * SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
34712 + * CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
34713 + * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
34714 + * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH
34715 + * DAMAGE.
34716 + * ========================================================================== */
34717 +
34718 +#ifndef __DWC_OTG_DRIVER_H__
34719 +#define __DWC_OTG_DRIVER_H__
34720 +
34721 +/** @file
34722 + * This file contains the interface to the Linux driver.
34723 + */
34724 +#include "dwc_otg_os_dep.h"
34725 +#include "dwc_otg_core_if.h"
34726 +
34727 +/* Type declarations */
34728 +struct dwc_otg_pcd;
34729 +struct dwc_otg_hcd;
34730 +
34731 +/**
34732 + * This structure is a wrapper that encapsulates the driver components used to
34733 + * manage a single DWC_otg controller.
34734 + */
34735 +typedef struct dwc_otg_device {
34736 + /** Structure containing OS-dependent stuff. KEEP THIS STRUCT AT THE
34737 + * VERY BEGINNING OF THE DEVICE STRUCT. OSes such as FreeBSD and NetBSD
34738 + * require this. */
34739 + struct os_dependent os_dep;
34740 +
34741 + /** Pointer to the core interface structure. */
34742 + dwc_otg_core_if_t *core_if;
34743 +
34744 + /** Pointer to the PCD structure. */
34745 + struct dwc_otg_pcd *pcd;
34746 +
34747 + /** Pointer to the HCD structure. */
34748 + struct dwc_otg_hcd *hcd;
34749 +
34750 + /** Flag to indicate whether the common IRQ handler is installed. */
34751 + uint8_t common_irq_installed;
34752 +
34753 +} dwc_otg_device_t;
34754 +
34755 +/* We must clear the S3C24XX_EINTPEND external interrupt register
34756 + * because, owing to timing latencies and the level-triggered IRQ type,
34757 + * an IRQ from the H/W core can be raised again in the kernel interrupt
34758 + * path before the OTG handlers have cleared all IRQ sources in the
34759 + * core registers.
34760 + */
34761 +#ifdef CONFIG_MACH_IPMATE
34762 +#define S3C2410X_CLEAR_EINTPEND() \
34763 +do { \
34764 + __raw_writel(1UL << 11,S3C24XX_EINTPEND); \
34765 +} while (0)
34766 +#else
34767 +#define S3C2410X_CLEAR_EINTPEND() do { } while (0)
34768 +#endif
34769 +
34770 +#endif
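The requirement noted above, that os_dep remain the very first member of dwc_otg_device_t, implies that a pointer to the wrapper and a pointer to its OS-dependent part refer to the same address. A minimal sketch of that implied invariant follows; the helper name os_dep_to_dev is illustrative only and not part of the driver:

	/* Illustrative only: recover the wrapper from its OS-dependent part.
	 * Valid only while os_dep stays the first member of dwc_otg_device_t. */
	static inline dwc_otg_device_t *os_dep_to_dev(struct os_dependent *osdep)
	{
		return (dwc_otg_device_t *)osdep;
	}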
34771 --- /dev/null
34772 +++ b/drivers/usb/host/dwc_otg/dwc_otg_fiq_fsm.c
34773 @@ -0,0 +1,1425 @@
34774 +/*
34775 + * dwc_otg_fiq_fsm.c - The finite state machine FIQ
34776 + *
34777 + * Copyright (c) 2013 Raspberry Pi Foundation
34778 + *
34779 + * Author: Jonathan Bell <jonathan@raspberrypi.org>
34780 + * All rights reserved.
34781 + *
34782 + * Redistribution and use in source and binary forms, with or without
34783 + * modification, are permitted provided that the following conditions are met:
34784 + * * Redistributions of source code must retain the above copyright
34785 + * notice, this list of conditions and the following disclaimer.
34786 + * * Redistributions in binary form must reproduce the above copyright
34787 + * notice, this list of conditions and the following disclaimer in the
34788 + * documentation and/or other materials provided with the distribution.
34789 + * * Neither the name of Raspberry Pi nor the
34790 + * names of its contributors may be used to endorse or promote products
34791 + * derived from this software without specific prior written permission.
34792 + *
34793 + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
34794 + * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
34795 + * WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
34796 + * DISCLAIMED. IN NO EVENT SHALL <COPYRIGHT HOLDER> BE LIABLE FOR ANY
34797 + * DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
34798 + * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
34799 + * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
34800 + * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
34801 + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
34802 + * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
34803 + *
34804 + * This FIQ implements functionality that performs split transactions on
34805 + * the dwc_otg hardware without any outside intervention. A split transaction
34806 + * is "queued" by nominating a specific host channel to perform the entirety
34807 + * of a split transaction. This FIQ will then perform the microframe-precise
34808 + * scheduling required in each phase of the transaction until completion.
34809 + *
34810 + * The FIQ functionality is glued into the Synopsys driver via the entry point
34811 + * in the FSM enqueue function, and at the exit point in handling a HC interrupt
34812 + * for a FSM-enabled channel.
34813 + *
34814 + * NB: Large parts of this implementation have architecture-specific code.
34815 + * For porting this functionality to other ARM machines, the minimum is required:
34816 + * - An interrupt controller allowing the top-level dwc USB interrupt to be routed
34817 + * to the FIQ
34818 + * - A method of forcing a software generated interrupt from FIQ mode that then
34819 + * triggers an IRQ entry (with the dwc USB handler called by this IRQ number)
34820 + * - Guaranteed interrupt routing such that both the FIQ and SGI occur on the same
34821 + * processor core - there is no locking between the FIQ and IRQ (aside from
34822 + * local_fiq_disable)
34823 + *
34824 + */
34825 +
34826 +#include "dwc_otg_fiq_fsm.h"
34827 +
34828 +
34829 +char buffer[1000*16];
34830 +int wptr;
34831 +void notrace _fiq_print(enum fiq_debug_level dbg_lvl, volatile struct fiq_state *state, char *fmt, ...)
34832 +{
34833 + enum fiq_debug_level dbg_lvl_req = FIQDBG_ERR;
34834 + va_list args;
34835 + char text[17];
34836 + hfnum_data_t hfnum = { .d32 = FIQ_READ(state->dwc_regs_base + 0x408) };
34837 +
34838 + if((dbg_lvl & dbg_lvl_req) || dbg_lvl == FIQDBG_ERR)
34839 + {
34840 + snprintf(text, 9, " %4d:%1u ", hfnum.b.frnum/8, hfnum.b.frnum & 7);
34841 + va_start(args, fmt);
34842 + vsnprintf(text+8, 9, fmt, args);
34843 + va_end(args);
34844 +
34845 + memcpy(buffer + wptr, text, 16);
34846 + wptr = (wptr + 16) % sizeof(buffer);
34847 + }
34848 +}
34849 +
34850 +
34851 +#ifdef CONFIG_ARM64
34852 +
34853 +inline void fiq_fsm_spin_lock(fiq_lock_t *lock)
34854 +{
34855 + spin_lock((spinlock_t *)lock);
34856 +}
34857 +
34858 +inline void fiq_fsm_spin_unlock(fiq_lock_t *lock)
34859 +{
34860 + spin_unlock((spinlock_t *)lock);
34861 +}
34862 +
34863 +#else
34864 +
34865 +/**
34866 + * fiq_fsm_spin_lock() - ARMv6+ bare bones spinlock
34867 + * Must be called with local interrupts and FIQ disabled.
34868 + */
34869 +#if defined(CONFIG_ARCH_BCM2835) && defined(CONFIG_SMP)
34870 +inline void fiq_fsm_spin_lock(fiq_lock_t *lock)
34871 +{
34872 + unsigned long tmp;
34873 + uint32_t newval;
34874 + fiq_lock_t lockval;
34875 + /* Nested locking, yay. If we are on the same CPU as the fiq, then the disable
34876 + * will be sufficient. If we are on a different CPU, then the lock protects us. */
34877 + prefetchw(&lock->slock);
34878 + asm volatile (
34879 + "1: ldrex %0, [%3]\n"
34880 + " add %1, %0, %4\n"
34881 + " strex %2, %1, [%3]\n"
34882 + " teq %2, #0\n"
34883 + " bne 1b"
34884 + : "=&r" (lockval), "=&r" (newval), "=&r" (tmp)
34885 + : "r" (&lock->slock), "I" (1 << 16)
34886 + : "cc");
34887 +
34888 + while (lockval.tickets.next != lockval.tickets.owner) {
34889 + wfe();
34890 + lockval.tickets.owner = READ_ONCE(lock->tickets.owner);
34891 + }
34892 + smp_mb();
34893 +}
34894 +#else
34895 +inline void fiq_fsm_spin_lock(fiq_lock_t *lock) { }
34896 +#endif
34897 +
34898 +/**
34899 + * fiq_fsm_spin_unlock() - ARMv6+ bare bones spinunlock
34900 + */
34901 +#if defined(CONFIG_ARCH_BCM2835) && defined(CONFIG_SMP)
34902 +inline void fiq_fsm_spin_unlock(fiq_lock_t *lock)
34903 +{
34904 + smp_mb();
34905 + lock->tickets.owner++;
34906 + dsb_sev();
34907 +}
34908 +#else
34909 +inline void fiq_fsm_spin_unlock(fiq_lock_t *lock) { }
34910 +#endif
34911 +
34912 +#endif
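For reference, a sketch of how IRQ-side code is expected to pair these primitives, given the requirement above that they be called with local interrupts and the FIQ disabled. This is illustrative only: the field name state->lock is an assumption standing in for wherever the fiq_lock_t actually lives in the driver's shared state.

	/* Sketch only: IRQ-side critical section against the FIQ handler.
	 * 'state->lock' is an assumed field name for a fiq_lock_t. */
	unsigned long flags;
	local_irq_save(flags);
	local_fiq_disable();
	fiq_fsm_spin_lock(&state->lock);
	/* ... access data shared with the FIQ ... */
	fiq_fsm_spin_unlock(&state->lock);
	local_fiq_enable();
	local_irq_restore(flags);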
34913 +
34914 +/**
34915 + * fiq_fsm_restart_channel() - Poke channel enable bit for a split transaction
34916 + * @channel: channel to re-enable
34917 + */
34918 +static void fiq_fsm_restart_channel(struct fiq_state *st, int n, int force)
34919 +{
34920 + hcchar_data_t hcchar = { .d32 = FIQ_READ(st->dwc_regs_base + HC_START + (HC_OFFSET * n) + HCCHAR) };
34921 +
34922 + hcchar.b.chen = 0;
34923 + if (st->channel[n].hcchar_copy.b.eptype & 0x1) {
34924 + hfnum_data_t hfnum = { .d32 = FIQ_READ(st->dwc_regs_base + HFNUM) };
34925 + /* Hardware bug workaround: update the ssplit index */
34926 + if (st->channel[n].hcsplt_copy.b.spltena)
34927 + st->channel[n].expected_uframe = (hfnum.b.frnum + 1) & 0x3FFF;
34928 +
34929 + hcchar.b.oddfrm = (hfnum.b.frnum & 0x1) ? 0 : 1;
34930 + }
34931 +
34932 + FIQ_WRITE(st->dwc_regs_base + HC_START + (HC_OFFSET * n) + HCCHAR, hcchar.d32);
34933 + hcchar.d32 = FIQ_READ(st->dwc_regs_base + HC_START + (HC_OFFSET * n) + HCCHAR);
34934 + hcchar.b.chen = 1;
34935 +
34936 + FIQ_WRITE(st->dwc_regs_base + HC_START + (HC_OFFSET * n) + HCCHAR, hcchar.d32);
34937 + fiq_print(FIQDBG_INT, st, "HCGO %01d %01d", n, force);
34938 +}
34939 +
34940 +/**
34941 + * fiq_fsm_setup_csplit() - Prepare a host channel for a CSplit transaction stage
34942 + * @st: Pointer to the channel's state
34943 + * @n : channel number
34944 + *
34945 + * Change host channel registers to perform a complete-split transaction. Being mindful of the
34946 + * endpoint direction, set control regs up correctly.
34947 + */
34948 +static void notrace fiq_fsm_setup_csplit(struct fiq_state *st, int n)
34949 +{
34950 + hcsplt_data_t hcsplt = { .d32 = FIQ_READ(st->dwc_regs_base + HC_START + (HC_OFFSET * n) + HCSPLT) };
34951 + hctsiz_data_t hctsiz = { .d32 = FIQ_READ(st->dwc_regs_base + HC_START + (HC_OFFSET * n) + HCTSIZ) };
34952 +
34953 + hcsplt.b.compsplt = 1;
34954 + if (st->channel[n].hcchar_copy.b.epdir == 1) {
34955 + // If IN, the CSPLIT result contains the data or a hub handshake. hctsiz = maxpacket.
34956 + hctsiz.b.xfersize = st->channel[n].hctsiz_copy.b.xfersize;
34957 + } else {
34958 + // If OUT, the CSPLIT result contains handshake only.
34959 + hctsiz.b.xfersize = 0;
34960 + }
34961 + FIQ_WRITE(st->dwc_regs_base + HC_START + (HC_OFFSET * n) + HCSPLT, hcsplt.d32);
34962 + FIQ_WRITE(st->dwc_regs_base + HC_START + (HC_OFFSET * n) + HCTSIZ, hctsiz.d32);
34963 + mb();
34964 +}
34965 +
34966 +/**
34967 + * fiq_fsm_restart_np_pending() - Restart a single non-periodic contended transfer
34968 + * @st: Pointer to the channel's state
34969 + * @num_channels: Total number of host channels
34970 + * @orig_channel: Channel index of completed transfer
34971 + *
34972 + * In the case where an IN and OUT transfer are simultaneously scheduled to the
34973 + * same device/EP, inadequate hub implementations will misbehave. Once the first
34974 + * transfer is complete, a pending non-periodic split can then be issued.
34975 + */
34976 +static void notrace fiq_fsm_restart_np_pending(struct fiq_state *st, int num_channels, int orig_channel)
34977 +{
34978 + int i;
34979 + int dev_addr = st->channel[orig_channel].hcchar_copy.b.devaddr;
34980 + int ep_num = st->channel[orig_channel].hcchar_copy.b.epnum;
34981 + for (i = 0; i < num_channels; i++) {
34982 + if (st->channel[i].fsm == FIQ_NP_SSPLIT_PENDING &&
34983 + st->channel[i].hcchar_copy.b.devaddr == dev_addr &&
34984 + st->channel[i].hcchar_copy.b.epnum == ep_num) {
34985 + st->channel[i].fsm = FIQ_NP_SSPLIT_STARTED;
34986 + fiq_fsm_restart_channel(st, i, 0);
34987 + break;
34988 + }
34989 + }
34990 +}
34991 +
34992 +static inline int notrace fiq_get_xfer_len(struct fiq_state *st, int n)
34993 +{
34994 + /* The xfersize register is a bit wonky. For IN transfers, it decrements by the packet size. */
34995 + hctsiz_data_t hctsiz = { .d32 = FIQ_READ(st->dwc_regs_base + HC_START + (HC_OFFSET * n) + HCTSIZ) };
34996 +
34997 + if (st->channel[n].hcchar_copy.b.epdir == 0) {
34998 + return st->channel[n].hctsiz_copy.b.xfersize;
34999 + } else {
35000 + return st->channel[n].hctsiz_copy.b.xfersize - hctsiz.b.xfersize;
35001 + }
35002 +
35003 +}
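A worked example of the asymmetry handled above: if an IN channel was programmed with hctsiz.xfersize = 192 and the register reads back 64 once the channel halts, fiq_get_xfer_len() reports 192 - 64 = 128 bytes actually received; for an OUT channel it simply returns the 192 bytes that were originally queued.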
35004 +
35005 +
35006 +/**
35007 + * fiq_increment_dma_buf() - update DMA address for bounce buffers after a CSPLIT
35008 + *
35009 + * Of use only for IN periodic transfers.
35010 + */
35011 +static int notrace fiq_increment_dma_buf(struct fiq_state *st, int num_channels, int n)
35012 +{
35013 + hcdma_data_t hcdma;
35014 + int i = st->channel[n].dma_info.index;
35015 + int len;
35016 + struct fiq_dma_blob *blob = (struct fiq_dma_blob *) st->dma_base;
35017 +
35018 + len = fiq_get_xfer_len(st, n);
35019 + fiq_print(FIQDBG_INT, st, "LEN: %03d", len);
35020 + st->channel[n].dma_info.slot_len[i] = len;
35021 + i++;
35022 + if (i > 6)
35023 + BUG();
35024 +
35025 + hcdma.d32 = (dma_addr_t) &blob->channel[n].index[i].buf[0];
35026 + FIQ_WRITE(st->dwc_regs_base + HC_START + (HC_OFFSET * n) + HC_DMA, hcdma.d32);
35027 + st->channel[n].dma_info.index = i;
35028 + return 0;
35029 +}
35030 +
35031 +/**
35032 + * fiq_reload_hctsiz() - for IN transactions, reset HCTSIZ
35033 + */
35034 +static void notrace fiq_fsm_reload_hctsiz(struct fiq_state *st, int n)
35035 +{
35036 + hctsiz_data_t hctsiz = { .d32 = FIQ_READ(st->dwc_regs_base + HC_START + (HC_OFFSET * n) + HCTSIZ) };
35037 + hctsiz.b.xfersize = st->channel[n].hctsiz_copy.b.xfersize;
35038 + hctsiz.b.pktcnt = 1;
35039 + FIQ_WRITE(st->dwc_regs_base + HC_START + (HC_OFFSET * n) + HCTSIZ, hctsiz.d32);
35040 +}
35041 +
35042 +/**
35043 + * fiq_fsm_reload_hcdma() - for OUT transactions, rewind DMA pointer
35044 + */
35045 +static void notrace fiq_fsm_reload_hcdma(struct fiq_state *st, int n)
35046 +{
35047 + hcdma_data_t hcdma = st->channel[n].hcdma_copy;
35048 + FIQ_WRITE(st->dwc_regs_base + HC_START + (HC_OFFSET * n) + HC_DMA, hcdma.d32);
35049 +}
35050 +
35051 +/**
35052 + * fiq_iso_out_advance() - update DMA address and split position bits
35053 + * for isochronous OUT transactions.
35054 + *
35055 + * Returns 1 if this is the last packet queued, 0 otherwise. Split-ALL and
35056 + * Split-BEGIN states are not handled - this is done when the transaction was queued.
35057 + *
35058 + * This function must only be called from the FIQ_ISO_OUT_ACTIVE state.
35059 + */
35060 +static int notrace fiq_iso_out_advance(struct fiq_state *st, int num_channels, int n)
35061 +{
35062 + hcsplt_data_t hcsplt;
35063 + hctsiz_data_t hctsiz;
35064 + hcdma_data_t hcdma;
35065 + struct fiq_dma_blob *blob = (struct fiq_dma_blob *) st->dma_base;
35066 + int last = 0;
35067 + int i = st->channel[n].dma_info.index;
35068 +
35069 + fiq_print(FIQDBG_INT, st, "ADV %01d %01d ", n, i);
35070 + i++;
35071 + if (i == 4)
35072 + last = 1;
35073 + if (st->channel[n].dma_info.slot_len[i+1] == 255)
35074 + last = 1;
35075 +
35076 + /* New DMA address - address of bounce buffer referred to in index */
35077 + hcdma.d32 = (dma_addr_t) blob->channel[n].index[i].buf;
35078 + //hcdma.d32 = FIQ_READ(st->dwc_regs_base + HC_START + (HC_OFFSET * n) + HC_DMA);
35079 + //hcdma.d32 += st->channel[n].dma_info.slot_len[i];
35080 + fiq_print(FIQDBG_INT, st, "LAST: %01d ", last);
35081 + fiq_print(FIQDBG_INT, st, "LEN: %03d", st->channel[n].dma_info.slot_len[i]);
35082 + hcsplt.d32 = FIQ_READ(st->dwc_regs_base + HC_START + (HC_OFFSET * n) + HCSPLT);
35083 + hctsiz.d32 = FIQ_READ(st->dwc_regs_base + HC_START + (HC_OFFSET * n) + HCTSIZ);
35084 + hcsplt.b.xactpos = (last) ? ISOC_XACTPOS_END : ISOC_XACTPOS_MID;
35085 + /* Set up new packet length */
35086 + hctsiz.b.pktcnt = 1;
35087 + hctsiz.b.xfersize = st->channel[n].dma_info.slot_len[i];
35088 + fiq_print(FIQDBG_INT, st, "%08x", hctsiz.d32);
35089 +
35090 + st->channel[n].dma_info.index++;
35091 + FIQ_WRITE(st->dwc_regs_base + HC_START + (HC_OFFSET * n) + HCSPLT, hcsplt.d32);
35092 + FIQ_WRITE(st->dwc_regs_base + HC_START + (HC_OFFSET * n) + HCTSIZ, hctsiz.d32);
35093 + FIQ_WRITE(st->dwc_regs_base + HC_START + (HC_OFFSET * n) + HC_DMA, hcdma.d32);
35094 + return last;
35095 +}
35096 +
35097 +/**
35098 + * fiq_fsm_tt_next_isoc() - queue next pending isochronous out start-split on a TT
35099 + *
35100 + * Despite the limitations of the DWC core, we can force a microframe pipeline of
35101 + * isochronous OUT start-split transactions while waiting for a corresponding other-type
35102 + * of endpoint to finish its CSPLITs. TTs have big periodic buffers, so it
35103 + * is very unlikely that filling the start-split FIFO will cause data loss.
35104 + * This allows much better interleaving of transactions in an order-independent way:
35105 + * there is no requirement to prioritise isochronous transfers; a state-space search just has
35106 + * to be performed on each periodic start-split complete interrupt.
35107 + */
35108 +static int notrace fiq_fsm_tt_next_isoc(struct fiq_state *st, int num_channels, int n)
35109 +{
35110 + int hub_addr = st->channel[n].hub_addr;
35111 + int port_addr = st->channel[n].port_addr;
35112 + int i, poked = 0;
35113 + for (i = 0; i < num_channels; i++) {
35114 + if (i == n || st->channel[i].fsm == FIQ_PASSTHROUGH)
35115 + continue;
35116 + if (st->channel[i].hub_addr == hub_addr &&
35117 + st->channel[i].port_addr == port_addr) {
35118 + switch (st->channel[i].fsm) {
35119 + case FIQ_PER_ISO_OUT_PENDING:
35120 + if (st->channel[i].nrpackets == 1) {
35121 + st->channel[i].fsm = FIQ_PER_ISO_OUT_LAST;
35122 + } else {
35123 + st->channel[i].fsm = FIQ_PER_ISO_OUT_ACTIVE;
35124 + }
35125 + fiq_fsm_restart_channel(st, i, 0);
35126 + poked = 1;
35127 + break;
35128 +
35129 + default:
35130 + break;
35131 + }
35132 + }
35133 + if (poked)
35134 + break;
35135 + }
35136 + return poked;
35137 +}
35138 +
35139 +/**
35140 + * fiq_fsm_tt_in_use() - search for host channels using this TT
35141 + * @n: Channel to use as reference
35142 + *
35143 + */
35144 +int notrace noinline fiq_fsm_tt_in_use(struct fiq_state *st, int num_channels, int n)
35145 +{
35146 + int hub_addr = st->channel[n].hub_addr;
35147 + int port_addr = st->channel[n].port_addr;
35148 + int i, in_use = 0;
35149 + for (i = 0; i < num_channels; i++) {
35150 + if (i == n || st->channel[i].fsm == FIQ_PASSTHROUGH)
35151 + continue;
35152 + switch (st->channel[i].fsm) {
35153 + /* TT is reserved for channels that are in the middle of a periodic
35154 + * split transaction.
35155 + */
35156 + case FIQ_PER_SSPLIT_STARTED:
35157 + case FIQ_PER_CSPLIT_WAIT:
35158 + case FIQ_PER_CSPLIT_NYET1:
35159 + //case FIQ_PER_CSPLIT_POLL:
35160 + case FIQ_PER_ISO_OUT_ACTIVE:
35161 + case FIQ_PER_ISO_OUT_LAST:
35162 + if (st->channel[i].hub_addr == hub_addr &&
35163 + st->channel[i].port_addr == port_addr) {
35164 + in_use = 1;
35165 + }
35166 + break;
35167 + default:
35168 + break;
35169 + }
35170 + if (in_use)
35171 + break;
35172 + }
35173 + return in_use;
35174 +}
35175 +
35176 +/**
35177 + * fiq_fsm_more_csplits() - determine whether additional CSPLITs need
35178 + * to be issued for this IN transaction.
35179 + *
35180 + * We cannot tell the inbound PID of a data packet due to hardware limitations.
35181 + * We need to make an educated guess as to whether we need to queue another CSPLIT
35182 + * or not. A no-brainer is when we have received enough data to fill the endpoint
35183 + * size, but for endpoints that give variable-length data then we have to resort
35184 + * to heuristics.
35185 + *
35186 + * We also return whether this is the last CSPLIT to be queued, again based on
35187 + * heuristics. This is to allow a 1-uframe overlap of periodic split transactions.
35188 + * Note: requires at least 1 CSPLIT to have been performed prior to being called.
35189 + */
35190 +
35191 +/*
35192 + * We need some way of guaranteeing if a returned periodic packet of size X
35193 + * has a DATA0 PID.
35194 + * The heuristic value of 144 bytes assumes that the received data has maximal
35195 + * bit-stuffing and the clock frequency of the transmitting device is at the lowest
35196 + * permissible limit. If the transfer length results in a final packet size
35197 + * 144 < p <= 188, then an erroneous CSPLIT will be issued.
35198 + * Also used to ensure that an endpoint will nominally only return a single
35199 + * complete-split worth of data.
35200 + */
35201 +#define DATA0_PID_HEURISTIC 144
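For context on the two figures above: 188 bytes is roughly the most a full-speed bus can carry in one microframe, since 12 Mbit/s x 125 us = 1500 bits, or about 187.5 bytes; 144 bytes is the conservative lower bound once worst-case bit-stuffing and the slowest permissible device clock are assumed, as the comment describes.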
35202 +
35203 +static int notrace noinline fiq_fsm_more_csplits(struct fiq_state *state, int n, int *probably_last)
35204 +{
35205 +
35206 + int i;
35207 + int total_len = 0;
35208 + int more_needed = 1;
35209 + struct fiq_channel_state *st = &state->channel[n];
35210 +
35211 + for (i = 0; i < st->dma_info.index; i++) {
35212 + total_len += st->dma_info.slot_len[i];
35213 + }
35214 +
35215 + *probably_last = 0;
35216 +
35217 + if (st->hcchar_copy.b.eptype == 0x3) {
35218 + /*
35219 + * An interrupt endpoint will take max 2 CSPLITs. if we are receiving data
35220 + * then this is definitely the last CSPLIT.
35221 + */
35222 + *probably_last = 1;
35223 + } else {
35224 + /* Isoc IN. This is a bit risky if we are the first transaction:
35225 + * we may have been held off slightly. */
35226 + if (i > 1 && st->dma_info.slot_len[st->dma_info.index-1] <= DATA0_PID_HEURISTIC) {
35227 + more_needed = 0;
35228 + }
35229 + /* If in the next uframe we will receive enough data to fill the endpoint,
35230 + * then only issue 1 more csplit.
35231 + */
35232 + if (st->hctsiz_copy.b.xfersize - total_len <= DATA0_PID_HEURISTIC)
35233 + *probably_last = 1;
35234 + }
35235 +
35236 + if (total_len >= st->hctsiz_copy.b.xfersize ||
35237 + i == 6 || total_len == 0)
35238 + /* Note: due to bit stuffing it is possible to have > 6 CSPLITs for
35239 + * a single endpoint. Accepting more would completely break our scheduling mechanism though
35240 + * - in these extreme cases we will pass through a truncated packet.
35241 + */
35242 + more_needed = 0;
35243 +
35244 + return more_needed;
35245 +}
35246 +
35247 +/**
35248 + * fiq_fsm_too_late() - Test transaction for lateness
35249 + *
35250 + * If a SSPLIT for a large IN transaction is issued too late in a frame,
35251 + * the hub will disable the port to the device and respond with ERR handshakes.
35252 + * The hub status endpoint will not reflect this change.
35253 + * Returns 1 if we will issue a SSPLIT that will result in a device babble.
35254 + */
35255 +int notrace fiq_fsm_too_late(struct fiq_state *st, int n)
35256 +{
35257 + int uframe;
35258 + hfnum_data_t hfnum = { .d32 = FIQ_READ(st->dwc_regs_base + HFNUM) };
35259 + uframe = hfnum.b.frnum & 0x7;
35260 + if ((uframe < 6) && (st->channel[n].nrpackets + 1 + uframe > 7)) {
35261 + return 1;
35262 + } else {
35263 + return 0;
35264 + }
35265 +}
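A worked example of the check above: a split with nrpackets = 4 issued at uframe 4 gives 4 + 1 + 4 = 9 > 7, so it is reported as too late; the same split at uframe 2 gives 2 + 1 + 4 = 7, which is allowed. At uframe 6 or 7 the first condition fails and the transaction is never flagged here.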
35266 +
35267 +
35268 +/**
35269 + * fiq_fsm_start_next_periodic() - A half-arsed attempt at a microframe pipeline
35270 + *
35271 + * Search pending transactions in the start-split pending state and queue them.
35272 + * Don't queue packets in uframe .5 (comes out in .6) (USB2.0 11.18.4).
35273 + * Note: we specifically don't do isochronous OUT transactions first because better
35274 + * use of the TT's start-split fifo can be achieved by pipelining an IN before an OUT.
35275 + */
35276 +static void notrace noinline fiq_fsm_start_next_periodic(struct fiq_state *st, int num_channels)
35277 +{
35278 + int n;
35279 + hfnum_data_t hfnum = { .d32 = FIQ_READ(st->dwc_regs_base + HFNUM) };
35280 + if ((hfnum.b.frnum & 0x7) == 5)
35281 + return;
35282 + for (n = 0; n < num_channels; n++) {
35283 + if (st->channel[n].fsm == FIQ_PER_SSPLIT_QUEUED) {
35284 + /* Check to see if any other transactions are using this TT */
35285 + if(!fiq_fsm_tt_in_use(st, num_channels, n)) {
35286 + if (!fiq_fsm_too_late(st, n)) {
35287 + st->channel[n].fsm = FIQ_PER_SSPLIT_STARTED;
35288 + fiq_print(FIQDBG_INT, st, "NEXTPER ");
35289 + fiq_fsm_restart_channel(st, n, 0);
35290 + } else {
35291 + st->channel[n].fsm = FIQ_PER_SPLIT_TIMEOUT;
35292 + }
35293 + break;
35294 + }
35295 + }
35296 + }
35297 + for (n = 0; n < num_channels; n++) {
35298 + if (st->channel[n].fsm == FIQ_PER_ISO_OUT_PENDING) {
35299 + if (!fiq_fsm_tt_in_use(st, num_channels, n)) {
35300 + fiq_print(FIQDBG_INT, st, "NEXTISO ");
35301 + if (st->channel[n].nrpackets == 1)
35302 + st->channel[n].fsm = FIQ_PER_ISO_OUT_LAST;
35303 + else
35304 + st->channel[n].fsm = FIQ_PER_ISO_OUT_ACTIVE;
35305 + fiq_fsm_restart_channel(st, n, 0);
35306 + break;
35307 + }
35308 + }
35309 + }
35310 +}
35311 +
35312 +/**
35313 + * fiq_fsm_update_hs_isoc() - update isochronous frame and transfer data
35314 + * @state: Pointer to fiq_state
35315 + * @n: Channel transaction is active on
35316 + * @hcint: Copy of host channel interrupt register
35317 + *
35318 + * Returns 0 if there are no more transactions for this HC to do, 1
35319 + * otherwise.
35320 + */
35321 +static int notrace noinline fiq_fsm_update_hs_isoc(struct fiq_state *state, int n, hcint_data_t hcint)
35322 +{
35323 + struct fiq_channel_state *st = &state->channel[n];
35324 + int xfer_len = 0, nrpackets = 0;
35325 + hcdma_data_t hcdma;
35326 + fiq_print(FIQDBG_INT, state, "HSISO %02d", n);
35327 +
35328 + xfer_len = fiq_get_xfer_len(state, n);
35329 + st->hs_isoc_info.iso_desc[st->hs_isoc_info.index].actual_length = xfer_len;
35330 +
35331 + st->hs_isoc_info.iso_desc[st->hs_isoc_info.index].status = hcint.d32;
35332 +
35333 + st->hs_isoc_info.index++;
35334 + if (st->hs_isoc_info.index == st->hs_isoc_info.nrframes) {
35335 + return 0;
35336 + }
35337 +
35338 + /* grab the next DMA address offset from the array */
35339 + hcdma.d32 = st->hcdma_copy.d32 + st->hs_isoc_info.iso_desc[st->hs_isoc_info.index].offset;
35340 + FIQ_WRITE(state->dwc_regs_base + HC_START + (HC_OFFSET * n) + HC_DMA, hcdma.d32);
35341 +
35342 + /* We need to set multi_count. This is a bit tricky - has to be set per-transaction as
35343 + * the core needs to be told to send the correct number. Caution: for IN transfers,
35344 + * this is always set to the maximum size of the endpoint. */
35345 + xfer_len = st->hs_isoc_info.iso_desc[st->hs_isoc_info.index].length;
35346 + /* Integer divide in a FIQ: fun. FIXME: make this not suck */
35347 + nrpackets = (xfer_len + st->hcchar_copy.b.mps - 1) / st->hcchar_copy.b.mps;
35348 + if (nrpackets == 0)
35349 + nrpackets = 1;
35350 + st->hcchar_copy.b.multicnt = nrpackets;
35351 + st->hctsiz_copy.b.pktcnt = nrpackets;
35352 +
35353 + /* Initial PID also needs to be set */
35354 + if (st->hcchar_copy.b.epdir == 0) {
35355 + st->hctsiz_copy.b.xfersize = xfer_len;
35356 + switch (st->hcchar_copy.b.multicnt) {
35357 + case 1:
35358 + st->hctsiz_copy.b.pid = DWC_PID_DATA0;
35359 + break;
35360 + case 2:
35361 + case 3:
35362 + st->hctsiz_copy.b.pid = DWC_PID_MDATA;
35363 + break;
35364 + }
35365 +
35366 + } else {
35367 + st->hctsiz_copy.b.xfersize = nrpackets * st->hcchar_copy.b.mps;
35368 + switch (st->hcchar_copy.b.multicnt) {
35369 + case 1:
35370 + st->hctsiz_copy.b.pid = DWC_PID_DATA0;
35371 + break;
35372 + case 2:
35373 + st->hctsiz_copy.b.pid = DWC_PID_DATA1;
35374 + break;
35375 + case 3:
35376 + st->hctsiz_copy.b.pid = DWC_PID_DATA2;
35377 + break;
35378 + }
35379 + }
35380 + FIQ_WRITE(state->dwc_regs_base + HC_START + (HC_OFFSET * n) + HCTSIZ, st->hctsiz_copy.d32);
35381 + FIQ_WRITE(state->dwc_regs_base + HC_START + (HC_OFFSET * n) + HCCHAR, st->hcchar_copy.d32);
35382 + /* Channel is enabled on hcint handler exit */
35383 + fiq_print(FIQDBG_INT, state, "HSISOOUT");
35384 + return 1;
35385 +}
35386 +
35387 +
35388 +/**
35389 + * fiq_fsm_do_sof() - FSM start-of-frame interrupt handler
35390 + * @state: Pointer to the state struct passed from banked FIQ mode registers.
35391 + * @num_channels: set according to the DWC hardware configuration
35392 + *
35393 + * The SOF handler in FSM mode has two functions
35394 + * 1. Hold off SOF from causing schedule advancement in IRQ context if there's
35395 + * nothing to do
35396 + * 2. Advance certain FSM states that require either a microframe delay, or a microframe
35397 + * of holdoff.
35398 + *
35399 + * The second part is architecture-specific to mach-bcm2835 -
35400 + * a sane interrupt controller would have a mask register for ARM interrupt sources
35401 + * to be promoted to the nFIQ line, but it doesn't. Instead a single interrupt
35402 + * number (USB) can be enabled. This means that certain parts of the USB specification
35403 + * that require "wait a little while, then issue another packet" cannot be fulfilled with
35404 + * the timing granularity required to achieve optimal throughput. The workaround is to use
35405 + * the SOF "timer" (125uS) to perform this task.
35406 + */
35407 +static int notrace noinline fiq_fsm_do_sof(struct fiq_state *state, int num_channels)
35408 +{
35409 + hfnum_data_t hfnum = { .d32 = FIQ_READ(state->dwc_regs_base + HFNUM) };
35410 + int n;
35411 + int kick_irq = 0;
35412 +
35413 + if ((hfnum.b.frnum & 0x7) == 1) {
35414 + /* We cannot issue csplits for transactions in the last frame past (n+1).1
35415 + * Check to see if there are any transactions that are stale.
35416 + * Boot them out.
35417 + */
35418 + for (n = 0; n < num_channels; n++) {
35419 + switch (state->channel[n].fsm) {
35420 + case FIQ_PER_CSPLIT_WAIT:
35421 + case FIQ_PER_CSPLIT_NYET1:
35422 + case FIQ_PER_CSPLIT_POLL:
35423 + case FIQ_PER_CSPLIT_LAST:
35424 + /* Check if we are no longer in the same full-speed frame. */
35425 + if (((state->channel[n].expected_uframe & 0x3FFF) & ~0x7) <
35426 + (hfnum.b.frnum & ~0x7))
35427 + state->channel[n].fsm = FIQ_PER_SPLIT_TIMEOUT;
35428 + break;
35429 + default:
35430 + break;
35431 + }
35432 + }
35433 + }
35434 +
35435 + for (n = 0; n < num_channels; n++) {
35436 + switch (state->channel[n].fsm) {
35437 +
35438 + case FIQ_NP_SSPLIT_RETRY:
35439 + case FIQ_NP_IN_CSPLIT_RETRY:
35440 + case FIQ_NP_OUT_CSPLIT_RETRY:
35441 + fiq_fsm_restart_channel(state, n, 0);
35442 + break;
35443 +
35444 + case FIQ_HS_ISOC_SLEEPING:
35445 + /* Is it time to wake this channel yet? */
35446 + if (--state->channel[n].uframe_sleeps == 0) {
35447 + state->channel[n].fsm = FIQ_HS_ISOC_TURBO;
35448 + fiq_fsm_restart_channel(state, n, 0);
35449 + }
35450 + break;
35451 +
35452 + case FIQ_PER_SSPLIT_QUEUED:
35453 + if ((hfnum.b.frnum & 0x7) == 5)
35454 + break;
35455 + if(!fiq_fsm_tt_in_use(state, num_channels, n)) {
35456 + if (!fiq_fsm_too_late(state, n)) {
35457 + fiq_print(FIQDBG_INT, state, "SOF GO %01d", n);
35458 + fiq_fsm_restart_channel(state, n, 0);
35459 + state->channel[n].fsm = FIQ_PER_SSPLIT_STARTED;
35460 + } else {
35461 + /* Transaction cannot be started without risking a device babble error */
35462 + state->channel[n].fsm = FIQ_PER_SPLIT_TIMEOUT;
35463 + state->haintmsk_saved.b2.chint &= ~(1 << n);
35464 + FIQ_WRITE(state->dwc_regs_base + HC_START + (HC_OFFSET * n) + HCINTMSK, 0);
35465 + kick_irq |= 1;
35466 + }
35467 + }
35468 + break;
35469 +
35470 + case FIQ_PER_ISO_OUT_PENDING:
35471 + /* Ordinarily, this should be poked after the SSPLIT
35472 + * complete interrupt for a competing transfer on the same
35473 + * TT. Doesn't happen for aborted transactions though.
35474 + */
35475 + if ((hfnum.b.frnum & 0x7) >= 5)
35476 + break;
35477 + if (!fiq_fsm_tt_in_use(state, num_channels, n)) {
35478 + /* Hardware bug. SOF can sometimes occur after the channel halt interrupt
35479 + * that caused this.
35480 + */
35481 + fiq_fsm_restart_channel(state, n, 0);
35482 + fiq_print(FIQDBG_INT, state, "SOF ISOC");
35483 + if (state->channel[n].nrpackets == 1) {
35484 + state->channel[n].fsm = FIQ_PER_ISO_OUT_LAST;
35485 + } else {
35486 + state->channel[n].fsm = FIQ_PER_ISO_OUT_ACTIVE;
35487 + }
35488 + }
35489 + break;
35490 +
35491 + case FIQ_PER_CSPLIT_WAIT:
35492 + /* we are guaranteed to be in this state if and only if the SSPLIT interrupt
35493 + * occurred when the bus transaction occurred. The SOF interrupt reversal bug
35494 + * will utterly bugger this up though.
35495 + */
35496 + if (hfnum.b.frnum != state->channel[n].expected_uframe) {
35497 + fiq_print(FIQDBG_INT, state, "SOFCS %d ", n);
35498 + state->channel[n].fsm = FIQ_PER_CSPLIT_POLL;
35499 + fiq_fsm_restart_channel(state, n, 0);
35500 + fiq_fsm_start_next_periodic(state, num_channels);
35501 +
35502 + }
35503 + break;
35504 +
35505 + case FIQ_PER_SPLIT_TIMEOUT:
35506 + case FIQ_DEQUEUE_ISSUED:
35507 + /* Ugly: we have to force a HCD interrupt.
35508 + * Poke the mask for the channel in question.
35509 + * We will take a fake SOF because of this, but
35510 + * that's OK.
35511 + */
35512 + state->haintmsk_saved.b2.chint &= ~(1 << n);
35513 + FIQ_WRITE(state->dwc_regs_base + HC_START + (HC_OFFSET * n) + HCINTMSK, 0);
35514 + kick_irq |= 1;
35515 + break;
35516 +
35517 + default:
35518 + break;
35519 + }
35520 + }
35521 +
35522 + if (state->kick_np_queues ||
35523 + dwc_frame_num_le(state->next_sched_frame, hfnum.b.frnum))
35524 + kick_irq |= 1;
35525 +
35526 + return !kick_irq;
35527 +}
35528 +
35529 +
35530 +/**
35531 + * fiq_fsm_do_hcintr() - FSM host channel interrupt handler
35532 + * @state: Pointer to the FIQ state struct
35533 + * @num_channels: Number of channels as per hardware config
35534 + * @n: channel for which HAINT(i) was raised
35535 + *
35536 + * An important property is that only the CHHLT interrupt is unmasked. Unfortunately, AHBerr is as well.
35537 + */
35538 +static int notrace noinline fiq_fsm_do_hcintr(struct fiq_state *state, int num_channels, int n)
35539 +{
35540 + hcint_data_t hcint;
35541 + hcintmsk_data_t hcintmsk;
35542 + hcint_data_t hcint_probe;
35543 + hcchar_data_t hcchar;
35544 + int handled = 0;
35545 + int restart = 0;
35546 + int last_csplit = 0;
35547 + int start_next_periodic = 0;
35548 + struct fiq_channel_state *st = &state->channel[n];
35549 + hfnum_data_t hfnum;
35550 +
35551 + hcint.d32 = FIQ_READ(state->dwc_regs_base + HC_START + (HC_OFFSET * n) + HCINT);
35552 + hcintmsk.d32 = FIQ_READ(state->dwc_regs_base + HC_START + (HC_OFFSET * n) + HCINTMSK);
35553 + hcint_probe.d32 = hcint.d32 & hcintmsk.d32;
35554 +
35555 + if (st->fsm != FIQ_PASSTHROUGH) {
35556 + fiq_print(FIQDBG_INT, state, "HC%01d ST%02d", n, st->fsm);
35557 + fiq_print(FIQDBG_INT, state, "%08x", hcint.d32);
35558 + }
35559 +
35560 + switch (st->fsm) {
35561 +
35562 + case FIQ_PASSTHROUGH:
35563 + case FIQ_DEQUEUE_ISSUED:
35564 + /* doesn't belong to us, kick it upstairs */
35565 + break;
35566 +
35567 + case FIQ_PASSTHROUGH_ERRORSTATE:
35568 + /* We are here to emulate the error recovery mechanism of the dwc HCD.
35569 + * Several interrupts are unmasked if a previous transaction failed - it's
35570 + * death for the FIQ to attempt to handle them as the channel isn't halted.
35571 + * Emulate what the HCD does in this situation: mask and continue.
35572 + * The FSM has no other state setup so this has to be handled out-of-band.
35573 + */
35574 + fiq_print(FIQDBG_ERR, state, "ERRST %02d", n);
35575 + if (hcint_probe.b.nak || hcint_probe.b.ack || hcint_probe.b.datatglerr) {
35576 + fiq_print(FIQDBG_ERR, state, "RESET %02d", n);
35577 + /* In some random cases we can get a NAK interrupt coincident with a Xacterr
35578 + * interrupt, after the device has disappeared.
35579 + */
35580 + if (!hcint.b.xacterr)
35581 + st->nr_errors = 0;
35582 + hcintmsk.b.nak = 0;
35583 + hcintmsk.b.ack = 0;
35584 + hcintmsk.b.datatglerr = 0;
35585 + FIQ_WRITE(state->dwc_regs_base + HC_START + (HC_OFFSET * n) + HCINTMSK, hcintmsk.d32);
35586 + return 1;
35587 + }
35588 + if (hcint_probe.b.chhltd) {
35589 + fiq_print(FIQDBG_ERR, state, "CHHLT %02d", n);
35590 + fiq_print(FIQDBG_ERR, state, "%08x", hcint.d32);
35591 + return 0;
35592 + }
35593 + break;
35594 +
35595 + /* Non-periodic state groups */
35596 + case FIQ_NP_SSPLIT_STARTED:
35597 + case FIQ_NP_SSPLIT_RETRY:
35598 + /* Got a HCINT for a NP SSPLIT. Expected ACK / NAK / fail */
35599 + if (hcint.b.ack) {
35600 + /* SSPLIT complete. For OUT, the data has been sent. For IN, the LS transaction
35601 + * will start shortly. SOF needs to kick the transaction to prevent a NYET flood.
35602 + */
35603 + if(st->hcchar_copy.b.epdir == 1)
35604 + st->fsm = FIQ_NP_IN_CSPLIT_RETRY;
35605 + else
35606 + st->fsm = FIQ_NP_OUT_CSPLIT_RETRY;
35607 + st->nr_errors = 0;
35608 + handled = 1;
35609 + fiq_fsm_setup_csplit(state, n);
35610 + } else if (hcint.b.nak) {
35611 + // No buffer space in TT. Retry on a uframe boundary.
35612 + fiq_fsm_reload_hcdma(state, n);
35613 + st->fsm = FIQ_NP_SSPLIT_RETRY;
35614 + handled = 1;
35615 + } else if (hcint.b.xacterr) {
35616 + // The only other one we care about is xacterr. This implies HS bus error - retry.
35617 + st->nr_errors++;
35618 + if(st->hcchar_copy.b.epdir == 0)
35619 + fiq_fsm_reload_hcdma(state, n);
35620 + st->fsm = FIQ_NP_SSPLIT_RETRY;
35621 + if (st->nr_errors >= 3) {
35622 + st->fsm = FIQ_NP_SPLIT_HS_ABORTED;
35623 + } else {
35624 + handled = 1;
35625 + restart = 1;
35626 + }
35627 + } else {
35628 + st->fsm = FIQ_NP_SPLIT_LS_ABORTED;
35629 + handled = 0;
35630 + restart = 0;
35631 + }
35632 + break;
35633 +
35634 + case FIQ_NP_IN_CSPLIT_RETRY:
35635 + /* Received a CSPLIT done interrupt.
35636 + * Expected Data/NAK/STALL/NYET for IN.
35637 + */
35638 + if (hcint.b.xfercomp) {
35639 + /* For IN, data is present. */
35640 + st->fsm = FIQ_NP_SPLIT_DONE;
35641 + } else if (hcint.b.nak) {
35642 + /* no endpoint data. Punt it upstairs */
35643 + st->fsm = FIQ_NP_SPLIT_DONE;
35644 + } else if (hcint.b.nyet) {
35645 + /* CSPLIT NYET - retry on a uframe boundary. */
35646 + handled = 1;
35647 + st->nr_errors = 0;
35648 + } else if (hcint.b.datatglerr) {
35649 + /* data toggle errors do not set the xfercomp bit. */
35650 + st->fsm = FIQ_NP_SPLIT_LS_ABORTED;
35651 + } else if (hcint.b.xacterr) {
35652 + /* HS error. Retry immediate */
35653 + st->fsm = FIQ_NP_IN_CSPLIT_RETRY;
35654 + st->nr_errors++;
35655 + if (st->nr_errors >= 3) {
35656 + st->fsm = FIQ_NP_SPLIT_HS_ABORTED;
35657 + } else {
35658 + handled = 1;
35659 + restart = 1;
35660 + }
35661 + } else if (hcint.b.stall || hcint.b.bblerr) {
35662 + /* A STALL implies either a LS bus error or a genuine STALL. */
35663 + st->fsm = FIQ_NP_SPLIT_LS_ABORTED;
35664 + } else {
35665 + /* Hardware bug. It's possible in some cases to
35666 + * get a channel halt with nothing else set when
35667 + * the response was a NYET. Treat as local 3-strikes retry.
35668 + */
35669 + hcint_data_t hcint_test = hcint;
35670 + hcint_test.b.chhltd = 0;
35671 + if (!hcint_test.d32) {
35672 + st->nr_errors++;
35673 + if (st->nr_errors >= 3) {
35674 + st->fsm = FIQ_NP_SPLIT_HS_ABORTED;
35675 + } else {
35676 + handled = 1;
35677 + }
35678 + } else {
35679 + /* Bail out if something unexpected happened */
35680 + st->fsm = FIQ_NP_SPLIT_HS_ABORTED;
35681 + }
35682 + }
35683 + if (st->fsm != FIQ_NP_IN_CSPLIT_RETRY) {
35684 + fiq_fsm_restart_np_pending(state, num_channels, n);
35685 + }
35686 + break;
35687 +
35688 + case FIQ_NP_OUT_CSPLIT_RETRY:
35689 + /* Received a CSPLIT done interrupt.
35690 + * Expected ACK/NAK/STALL/NYET/XFERCOMP for OUT.*/
35691 + if (hcint.b.xfercomp) {
35692 + st->fsm = FIQ_NP_SPLIT_DONE;
35693 + } else if (hcint.b.nak) {
35694 + // The HCD will implement the holdoff on frame boundaries.
35695 + st->fsm = FIQ_NP_SPLIT_DONE;
35696 + } else if (hcint.b.nyet) {
35697 + // Hub still processing.
35698 + st->fsm = FIQ_NP_OUT_CSPLIT_RETRY;
35699 + handled = 1;
35700 + st->nr_errors = 0;
35701 + //restart = 1;
35702 + } else if (hcint.b.xacterr) {
35703 + /* HS error. retry immediate */
35704 + st->fsm = FIQ_NP_OUT_CSPLIT_RETRY;
35705 + st->nr_errors++;
35706 + if (st->nr_errors >= 3) {
35707 + st->fsm = FIQ_NP_SPLIT_HS_ABORTED;
35708 + } else {
35709 + handled = 1;
35710 + restart = 1;
35711 + }
35712 + } else if (hcint.b.stall) {
35713 + /* LS bus error or genuine stall */
35714 + st->fsm = FIQ_NP_SPLIT_LS_ABORTED;
35715 + } else {
35716 + /*
35717 + * Hardware bug. It's possible in some cases to get a
35718 + * channel halt with nothing else set when the response was a NYET.
35719 + * Treat as local 3-strikes retry.
35720 + */
35721 + hcint_data_t hcint_test = hcint;
35722 + hcint_test.b.chhltd = 0;
35723 + if (!hcint_test.d32) {
35724 + st->nr_errors++;
35725 + if (st->nr_errors >= 3) {
35726 + st->fsm = FIQ_NP_SPLIT_HS_ABORTED;
35727 + } else {
35728 + handled = 1;
35729 + }
35730 + } else {
35731 + // Something unexpected happened. AHBerror or babble perhaps. Let the IRQ deal with it.
35732 + st->fsm = FIQ_NP_SPLIT_HS_ABORTED;
35733 + }
35734 + }
35735 + if (st->fsm != FIQ_NP_OUT_CSPLIT_RETRY) {
35736 + fiq_fsm_restart_np_pending(state, num_channels, n);
35737 + }
35738 + break;
35739 +
35740 + /* Periodic split states (except isoc out) */
35741 + case FIQ_PER_SSPLIT_STARTED:
35742 + /* Expect an ACK or failure for SSPLIT */
35743 + if (hcint.b.ack) {
35744 + /*
35745 + * SSPLIT transfer complete interrupt - the generation of this interrupt is fraught with bugs.
35746 + * For a packet queued in microframe n-3 to appear in n-2, if the channel is enabled near the EOF1
35747 + * point for microframe n-3, the packet will not appear on the bus until microframe n.
35748 + * Additionally, the generation of the actual interrupt is dodgy. For a packet appearing on the bus
35749 + * in microframe n, sometimes the interrupt is generated immediately. Sometimes, it appears in n+1
35750 + * coincident with SOF for n+1.
35751 + * SOF is also buggy. It can sometimes be raised AFTER the first bus transaction has taken place.
35752 + * These appear to be caused by timing/clock crossing bugs within the core itself.
35753 + * State machine workaround.
35754 + */
35755 + hfnum.d32 = FIQ_READ(state->dwc_regs_base + HFNUM);
35756 + hcchar.d32 = FIQ_READ(state->dwc_regs_base + HC_START + (HC_OFFSET * n) + HCCHAR);
35757 + fiq_fsm_setup_csplit(state, n);
35758 + /* Poke the oddfrm bit. If we are equivalent, we received the interrupt at the correct
35759 + * time. If not, then we're in the next SOF.
35760 + */
35761 + if ((hfnum.b.frnum & 0x1) == hcchar.b.oddfrm) {
35762 + fiq_print(FIQDBG_INT, state, "CSWAIT %01d", n);
35763 + st->expected_uframe = hfnum.b.frnum;
35764 + st->fsm = FIQ_PER_CSPLIT_WAIT;
35765 + } else {
35766 + fiq_print(FIQDBG_INT, state, "CSPOL %01d", n);
35767 + /* For isochronous IN endpoints,
35768 + * we need to hold off if we are expecting a lot of data */
35769 + if (st->hcchar_copy.b.mps < DATA0_PID_HEURISTIC) {
35770 + start_next_periodic = 1;
35771 + }
35772 +				/* Danger, Will Robinson: we are in a broken state. If our first interrupt after
35773 + * this is a NYET, it will be delayed by 1 uframe and result in an unrecoverable
35774 + * lag. Unmask the NYET interrupt.
35775 + */
35776 + st->expected_uframe = (hfnum.b.frnum + 1) & 0x3FFF;
35777 + st->fsm = FIQ_PER_CSPLIT_BROKEN_NYET1;
35778 + restart = 1;
35779 + }
35780 + handled = 1;
35781 + } else if (hcint.b.xacterr) {
35782 + /* 3-strikes retry is enabled, we have hit our max nr_errors */
35783 + st->fsm = FIQ_PER_SPLIT_HS_ABORTED;
35784 + start_next_periodic = 1;
35785 + } else {
35786 + st->fsm = FIQ_PER_SPLIT_HS_ABORTED;
35787 + start_next_periodic = 1;
35788 + }
35789 + /* We can now queue the next isochronous OUT transaction, if one is pending. */
35790 + if(fiq_fsm_tt_next_isoc(state, num_channels, n)) {
35791 + fiq_print(FIQDBG_INT, state, "NEXTISO ");
35792 + }
35793 + break;
35794 +
35795 + case FIQ_PER_CSPLIT_NYET1:
35796 + /* First CSPLIT attempt was a NYET. If we get a subsequent NYET,
35797 + * we are too late and the TT has dropped its CSPLIT fifo.
35798 + */
35799 + hfnum.d32 = FIQ_READ(state->dwc_regs_base + HFNUM);
35800 + hcchar.d32 = FIQ_READ(state->dwc_regs_base + HC_START + (HC_OFFSET * n) + HCCHAR);
35801 + start_next_periodic = 1;
35802 + if (hcint.b.nak) {
35803 + st->fsm = FIQ_PER_SPLIT_DONE;
35804 + } else if (hcint.b.xfercomp) {
35805 + fiq_increment_dma_buf(state, num_channels, n);
35806 + st->fsm = FIQ_PER_CSPLIT_POLL;
35807 + st->nr_errors = 0;
35808 + if (fiq_fsm_more_csplits(state, n, &last_csplit)) {
35809 + handled = 1;
35810 + restart = 1;
35811 + if (!last_csplit)
35812 + start_next_periodic = 0;
35813 + } else {
35814 + st->fsm = FIQ_PER_SPLIT_DONE;
35815 + }
35816 + } else if (hcint.b.nyet) {
35817 + /* Doh. Data lost. */
35818 + st->fsm = FIQ_PER_SPLIT_NYET_ABORTED;
35819 + } else if (hcint.b.xacterr || hcint.b.stall || hcint.b.bblerr) {
35820 + st->fsm = FIQ_PER_SPLIT_LS_ABORTED;
35821 + } else {
35822 + st->fsm = FIQ_PER_SPLIT_HS_ABORTED;
35823 + }
35824 + break;
35825 +
35826 + case FIQ_PER_CSPLIT_BROKEN_NYET1:
35827 + /*
35828 + * we got here because our host channel is in the delayed-interrupt
35829 + * state and we cannot take a NYET interrupt any later than when it
35830 + * occurred. Disable then re-enable the channel if this happens to force
35831 + * CSPLITs to occur at the right time.
35832 + */
35833 + hfnum.d32 = FIQ_READ(state->dwc_regs_base + HFNUM);
35834 + hcchar.d32 = FIQ_READ(state->dwc_regs_base + HC_START + (HC_OFFSET * n) + HCCHAR);
35835 + fiq_print(FIQDBG_INT, state, "BROK: %01d ", n);
35836 + if (hcint.b.nak) {
35837 + st->fsm = FIQ_PER_SPLIT_DONE;
35838 + start_next_periodic = 1;
35839 + } else if (hcint.b.xfercomp) {
35840 + fiq_increment_dma_buf(state, num_channels, n);
35841 + if (fiq_fsm_more_csplits(state, n, &last_csplit)) {
35842 + st->fsm = FIQ_PER_CSPLIT_POLL;
35843 + handled = 1;
35844 + restart = 1;
35845 + start_next_periodic = 1;
35846 + /* Reload HCTSIZ for the next transfer */
35847 + fiq_fsm_reload_hctsiz(state, n);
35848 + if (!last_csplit)
35849 + start_next_periodic = 0;
35850 + } else {
35851 + st->fsm = FIQ_PER_SPLIT_DONE;
35852 + }
35853 + } else if (hcint.b.nyet) {
35854 + st->fsm = FIQ_PER_SPLIT_NYET_ABORTED;
35855 + start_next_periodic = 1;
35856 + } else if (hcint.b.xacterr || hcint.b.stall || hcint.b.bblerr) {
35857 +				/* Local 3-strikes retry is handled by the core. This is an ERR response. */
35858 + st->fsm = FIQ_PER_SPLIT_LS_ABORTED;
35859 + } else {
35860 + st->fsm = FIQ_PER_SPLIT_HS_ABORTED;
35861 + }
35862 + break;
35863 +
35864 + case FIQ_PER_CSPLIT_POLL:
35865 + hfnum.d32 = FIQ_READ(state->dwc_regs_base + HFNUM);
35866 + hcchar.d32 = FIQ_READ(state->dwc_regs_base + HC_START + (HC_OFFSET * n) + HCCHAR);
35867 + start_next_periodic = 1;
35868 + if (hcint.b.nak) {
35869 + st->fsm = FIQ_PER_SPLIT_DONE;
35870 + } else if (hcint.b.xfercomp) {
35871 + fiq_increment_dma_buf(state, num_channels, n);
35872 + if (fiq_fsm_more_csplits(state, n, &last_csplit)) {
35873 + handled = 1;
35874 + restart = 1;
35875 + /* Reload HCTSIZ for the next transfer */
35876 + fiq_fsm_reload_hctsiz(state, n);
35877 + if (!last_csplit)
35878 + start_next_periodic = 0;
35879 + } else {
35880 + st->fsm = FIQ_PER_SPLIT_DONE;
35881 + }
35882 + } else if (hcint.b.nyet) {
35883 + /* Are we a NYET after the first data packet? */
35884 + if (st->nrpackets == 0) {
35885 + st->fsm = FIQ_PER_CSPLIT_NYET1;
35886 + handled = 1;
35887 + restart = 1;
35888 + } else {
35889 + /* We got a NYET when polling CSPLITs. Can happen
35890 + * if our heuristic fails, or if someone disables us
35891 + * for any significant length of time.
35892 + */
35893 + if (st->nr_errors >= 3) {
35894 + st->fsm = FIQ_PER_SPLIT_NYET_ABORTED;
35895 + } else {
35896 + st->fsm = FIQ_PER_SPLIT_DONE;
35897 + }
35898 + }
35899 + } else if (hcint.b.xacterr || hcint.b.stall || hcint.b.bblerr) {
35900 +				/* For xacterr, local 3-strikes retry is handled by the core. This is an ERR response. */
35901 + st->fsm = FIQ_PER_SPLIT_LS_ABORTED;
35902 + } else {
35903 + st->fsm = FIQ_PER_SPLIT_HS_ABORTED;
35904 + }
35905 + break;
35906 +
35907 + case FIQ_HS_ISOC_TURBO:
35908 + if (fiq_fsm_update_hs_isoc(state, n, hcint)) {
35909 + /* more transactions to come */
35910 + handled = 1;
35911 + fiq_print(FIQDBG_INT, state, "HSISO M ");
35912 + /* For strided transfers, put ourselves to sleep */
35913 + if (st->hs_isoc_info.stride > 1) {
35914 + st->uframe_sleeps = st->hs_isoc_info.stride - 1;
35915 + st->fsm = FIQ_HS_ISOC_SLEEPING;
35916 + } else {
35917 + restart = 1;
35918 + }
35919 + } else {
35920 + st->fsm = FIQ_HS_ISOC_DONE;
35921 + fiq_print(FIQDBG_INT, state, "HSISO F ");
35922 + }
35923 + break;
35924 +
35925 + case FIQ_HS_ISOC_ABORTED:
35926 +		/* This state is entered when the driver rewrites the FSM state mid-transaction,
35927 +		 * which allows the dequeue mechanism to work more effectively.
35928 + */
35929 + break;
35930 +
35931 + case FIQ_PER_ISO_OUT_ACTIVE:
35932 + if (hcint.b.ack) {
35933 + if(fiq_iso_out_advance(state, num_channels, n)) {
35934 + /* last OUT transfer */
35935 + st->fsm = FIQ_PER_ISO_OUT_LAST;
35936 + /*
35937 + * Assuming the periodic FIFO in the dwc core
35938 + * actually does its job properly, we can queue
35939 + * the next ssplit now and in theory, the wire
35940 + * transactions will be in-order.
35941 + */
35942 + // No it doesn't. It appears to process requests in host channel order.
35943 + //start_next_periodic = 1;
35944 + }
35945 + handled = 1;
35946 + restart = 1;
35947 + } else {
35948 + /*
35949 + * Isochronous transactions carry on regardless. Log the error
35950 + * and continue.
35951 + */
35952 + //explode += 1;
35953 + st->nr_errors++;
35954 + if(fiq_iso_out_advance(state, num_channels, n)) {
35955 + st->fsm = FIQ_PER_ISO_OUT_LAST;
35956 + //start_next_periodic = 1;
35957 + }
35958 + handled = 1;
35959 + restart = 1;
35960 + }
35961 + break;
35962 +
35963 + case FIQ_PER_ISO_OUT_LAST:
35964 + if (hcint.b.ack) {
35965 + /* All done here */
35966 + st->fsm = FIQ_PER_ISO_OUT_DONE;
35967 + } else {
35968 + st->fsm = FIQ_PER_ISO_OUT_DONE;
35969 + st->nr_errors++;
35970 + }
35971 + start_next_periodic = 1;
35972 + break;
35973 +
35974 + case FIQ_PER_SPLIT_TIMEOUT:
35975 + /* SOF kicked us because we overran. */
35976 + start_next_periodic = 1;
35977 + break;
35978 +
35979 + default:
35980 + break;
35981 + }
35982 +
35983 + if (handled) {
35984 + FIQ_WRITE(state->dwc_regs_base + HC_START + (HC_OFFSET * n) + HCINT, hcint.d32);
35985 + } else {
35986 + /* Copy the regs into the state so the IRQ knows what to do */
35987 + st->hcint_copy.d32 = hcint.d32;
35988 + }
35989 +
35990 + if (restart) {
35991 + /* Restart always implies handled. */
35992 + if (restart == 2) {
35993 + /* For complete-split INs, the show must go on.
35994 + * Force a channel restart */
35995 + fiq_fsm_restart_channel(state, n, 1);
35996 + } else {
35997 + fiq_fsm_restart_channel(state, n, 0);
35998 + }
35999 + }
36000 + if (start_next_periodic) {
36001 + fiq_fsm_start_next_periodic(state, num_channels);
36002 + }
36003 + if (st->fsm != FIQ_PASSTHROUGH)
36004 + fiq_print(FIQDBG_INT, state, "FSMOUT%02d", st->fsm);
36005 +
36006 + return handled;
36007 +}
36008 +
36009 +
36010 +/**
36011 + * dwc_otg_fiq_fsm() - Flying State Machine (monster) FIQ
36012 + * @state: pointer to state struct passed from the banked FIQ mode registers.
36013 + * @num_channels: set according to the DWC hardware configuration
36015 + *
36016 + * The FSM FIQ performs the low-level tasks that normally would be performed by the microcode
36017 + * inside an EHCI or similar host controller regarding split transactions. The DWC core
36018 + * interrupts each and every time a split transaction packet is received or sent successfully.
36019 + * This results either in an interrupt storm when everything is working "properly", or,
36020 + * when interrupt latency is high, in time-sensitive periodic split transactions being
36021 + * broken. Pushing the low-level, but relatively easy, state machine work into the FIQ
36022 + * solves these problems.
36023 + *
36024 + * Return: void
36025 + */
36026 +void notrace dwc_otg_fiq_fsm(struct fiq_state *state, int num_channels)
36027 +{
36028 + gintsts_data_t gintsts, gintsts_handled;
36029 + gintmsk_data_t gintmsk;
36030 + //hfnum_data_t hfnum;
36031 + haint_data_t haint, haint_handled;
36032 + haintmsk_data_t haintmsk;
36033 + int kick_irq = 0;
36034 +
36035 + gintsts_handled.d32 = 0;
36036 + haint_handled.d32 = 0;
36037 +
36038 + fiq_fsm_spin_lock(&state->lock);
36039 + gintsts.d32 = FIQ_READ(state->dwc_regs_base + GINTSTS);
36040 + gintmsk.d32 = FIQ_READ(state->dwc_regs_base + GINTMSK);
36041 + gintsts.d32 &= gintmsk.d32;
36042 +
36043 + if (gintsts.b.sofintr) {
36044 +		/* For FSM mode, SOF is required to keep the state machine advancing through
36045 +		 * certain stages of the periodic pipeline. Masking this interrupt in that
36046 +		 * case would be fatal.
36047 + */
36048 +
36049 + if (!fiq_fsm_do_sof(state, num_channels)) {
36050 + /* Kick IRQ once. Queue advancement means that all pending transactions
36051 + * will get serviced when the IRQ finally executes.
36052 + */
36053 + if (state->gintmsk_saved.b.sofintr == 1)
36054 + kick_irq |= 1;
36055 + state->gintmsk_saved.b.sofintr = 0;
36056 + }
36057 + gintsts_handled.b.sofintr = 1;
36058 + }
36059 +
36060 + if (gintsts.b.hcintr) {
36061 + int i;
36062 + haint.d32 = FIQ_READ(state->dwc_regs_base + HAINT);
36063 + haintmsk.d32 = FIQ_READ(state->dwc_regs_base + HAINTMSK);
36064 + haint.d32 &= haintmsk.d32;
36065 + haint_handled.d32 = 0;
36066 + for (i=0; i<num_channels; i++) {
36067 + if (haint.b2.chint & (1 << i)) {
36068 + if(!fiq_fsm_do_hcintr(state, num_channels, i)) {
36069 +					/* HCINT was not handled in the FIQ.
36070 +					 * HAINT is level-sensitive, leading to a level-sensitive gintsts.b.hcintr bit.
36071 +					 * Mask HAINT(i) but keep the top-level hcintr unmasked.
36072 + */
36073 + state->haintmsk_saved.b2.chint &= ~(1 << i);
36074 + } else {
36075 + /* do_hcintr cleaned up after itself, but clear haint */
36076 + haint_handled.b2.chint |= (1 << i);
36077 + }
36078 + }
36079 + }
36080 +
36081 + if (haint_handled.b2.chint) {
36082 + FIQ_WRITE(state->dwc_regs_base + HAINT, haint_handled.d32);
36083 + }
36084 +
36085 + if (haintmsk.d32 != (haintmsk.d32 & state->haintmsk_saved.d32)) {
36086 + /*
36087 + * This is necessary to avoid multiple retriggers of the MPHI in the case
36088 + * where interrupts are held off and HCINTs start to pile up.
36089 + * Only wake up the IRQ if a new interrupt came in, was not handled and was
36090 + * masked.
36091 + */
36092 + haintmsk.d32 &= state->haintmsk_saved.d32;
36093 + FIQ_WRITE(state->dwc_regs_base + HAINTMSK, haintmsk.d32);
36094 + kick_irq |= 1;
36095 + }
36096 + /* Top-Level interrupt - always handled because it's level-sensitive */
36097 + gintsts_handled.b.hcintr = 1;
36098 + }
36099 +
36100 +
36101 + /* Clear the bits in the saved register that were not handled but were triggered. */
36102 + state->gintmsk_saved.d32 &= ~(gintsts.d32 & ~gintsts_handled.d32);
36103 +
36104 + /* FIQ didn't handle something - mask has changed - write new mask */
36105 + if (gintmsk.d32 != (gintmsk.d32 & state->gintmsk_saved.d32)) {
36106 + gintmsk.d32 &= state->gintmsk_saved.d32;
36107 + gintmsk.b.sofintr = 1;
36108 + FIQ_WRITE(state->dwc_regs_base + GINTMSK, gintmsk.d32);
36109 +// fiq_print(FIQDBG_INT, state, "KICKGINT");
36110 +// fiq_print(FIQDBG_INT, state, "%08x", gintmsk.d32);
36111 +// fiq_print(FIQDBG_INT, state, "%08x", state->gintmsk_saved.d32);
36112 + kick_irq |= 1;
36113 + }
36114 +
36115 + if (gintsts_handled.d32) {
36116 + /* Only applies to edge-sensitive bits in GINTSTS */
36117 + FIQ_WRITE(state->dwc_regs_base + GINTSTS, gintsts_handled.d32);
36118 + }
36119 +
36120 + /* We got an interrupt, didn't handle it. */
36121 + if (kick_irq) {
36122 + state->mphi_int_count++;
36123 + if (state->mphi_regs.swirq_set) {
36124 + FIQ_WRITE(state->mphi_regs.swirq_set, 1);
36125 + } else {
36126 + FIQ_WRITE(state->mphi_regs.outdda, state->dummy_send_dma);
36127 + FIQ_WRITE(state->mphi_regs.outddb, (1<<29));
36128 + }
36129 +
36130 + }
36131 + state->fiq_done++;
36132 + mb();
36133 + fiq_fsm_spin_unlock(&state->lock);
36134 +}
36135 +
36136 +
36137 +/**
36138 + * dwc_otg_fiq_nop() - FIQ "lite"
36139 + * @state: pointer to state struct passed from the banked FIQ mode registers.
36140 + *
36141 + * The "nop" handler does not intervene on any interrupts other than SOF.
36142 + * It is limited in scope to deciding at each SOF if the IRQ SOF handler (which deals
36143 + * with non-periodic/periodic queues) needs to be kicked.
36144 + *
36145 + * This is done to hold off the SOF interrupt, which occurs at a rate of 8000 per second.
36146 + *
36147 + * Return: void
36148 + */
36149 +void notrace dwc_otg_fiq_nop(struct fiq_state *state)
36150 +{
36151 + gintsts_data_t gintsts, gintsts_handled;
36152 + gintmsk_data_t gintmsk;
36153 + hfnum_data_t hfnum;
36154 +
36155 + fiq_fsm_spin_lock(&state->lock);
36156 + hfnum.d32 = FIQ_READ(state->dwc_regs_base + HFNUM);
36157 + gintsts.d32 = FIQ_READ(state->dwc_regs_base + GINTSTS);
36158 + gintmsk.d32 = FIQ_READ(state->dwc_regs_base + GINTMSK);
36159 + gintsts.d32 &= gintmsk.d32;
36160 + gintsts_handled.d32 = 0;
36161 +
36162 + if (gintsts.b.sofintr) {
36163 + if (!state->kick_np_queues &&
36164 + dwc_frame_num_gt(state->next_sched_frame, hfnum.b.frnum)) {
36165 + /* SOF handled, no work to do, just ACK interrupt */
36166 + gintsts_handled.b.sofintr = 1;
36167 + } else {
36168 + /* Kick IRQ */
36169 + state->gintmsk_saved.b.sofintr = 0;
36170 + }
36171 + }
36172 +
36173 + /* Reset handled interrupts */
36174 + if(gintsts_handled.d32) {
36175 + FIQ_WRITE(state->dwc_regs_base + GINTSTS, gintsts_handled.d32);
36176 + }
36177 +
36178 + /* Clear the bits in the saved register that were not handled but were triggered. */
36179 + state->gintmsk_saved.d32 &= ~(gintsts.d32 & ~gintsts_handled.d32);
36180 +
36181 + /* We got an interrupt, didn't handle it and want to mask it */
36182 + if (~(state->gintmsk_saved.d32)) {
36183 + state->mphi_int_count++;
36184 + gintmsk.d32 &= state->gintmsk_saved.d32;
36185 + FIQ_WRITE(state->dwc_regs_base + GINTMSK, gintmsk.d32);
36186 + if (state->mphi_regs.swirq_set) {
36187 + FIQ_WRITE(state->mphi_regs.swirq_set, 1);
36188 + } else {
36189 + /* Force a clear before another dummy send */
36190 + FIQ_WRITE(state->mphi_regs.intstat, (1<<29));
36191 + FIQ_WRITE(state->mphi_regs.outdda, state->dummy_send_dma);
36192 + FIQ_WRITE(state->mphi_regs.outddb, (1<<29));
36193 + }
36194 + }
36195 + state->fiq_done++;
36196 + mb();
36197 + fiq_fsm_spin_unlock(&state->lock);
36198 +}
36199 --- /dev/null
36200 +++ b/drivers/usb/host/dwc_otg/dwc_otg_fiq_fsm.h
36201 @@ -0,0 +1,399 @@
36202 +/*
36203 + * dwc_otg_fiq_fsm.h - Finite state machine FIQ header definitions
36204 + *
36205 + * Copyright (c) 2013 Raspberry Pi Foundation
36206 + *
36207 + * Author: Jonathan Bell <jonathan@raspberrypi.org>
36208 + * All rights reserved.
36209 + *
36210 + * Redistribution and use in source and binary forms, with or without
36211 + * modification, are permitted provided that the following conditions are met:
36212 + * * Redistributions of source code must retain the above copyright
36213 + * notice, this list of conditions and the following disclaimer.
36214 + * * Redistributions in binary form must reproduce the above copyright
36215 + * notice, this list of conditions and the following disclaimer in the
36216 + * documentation and/or other materials provided with the distribution.
36217 + * * Neither the name of Raspberry Pi nor the
36218 + * names of its contributors may be used to endorse or promote products
36219 + * derived from this software without specific prior written permission.
36220 + *
36221 + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
36222 + * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
36223 + * WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
36224 + * DISCLAIMED. IN NO EVENT SHALL <COPYRIGHT HOLDER> BE LIABLE FOR ANY
36225 + * DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
36226 + * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
36227 + * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
36228 + * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
36229 + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
36230 + * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
36231 + *
36232 + * This FIQ implements functionality that performs split transactions on
36233 + * the dwc_otg hardware without any outside intervention. A split transaction
36234 + * is "queued" by nominating a specific host channel to perform the entirety
36235 + * of a split transaction. This FIQ will then perform the microframe-precise
36236 + * scheduling required in each phase of the transaction until completion.
36237 + *
36238 + * The FIQ functionality has been surgically implanted into the Synopsys
36239 + * vendor-provided driver.
36240 + *
36241 + */
36242 +
36243 +#ifndef DWC_OTG_FIQ_FSM_H_
36244 +#define DWC_OTG_FIQ_FSM_H_
36245 +
36246 +#include "dwc_otg_regs.h"
36247 +#include "dwc_otg_cil.h"
36248 +#include "dwc_otg_hcd.h"
36249 +#include <linux/kernel.h>
36250 +#include <linux/irqflags.h>
36251 +#include <linux/string.h>
36252 +#include <asm/barrier.h>
36253 +
36254 +#if 0
36255 +#define FLAME_ON(x) \
36256 +do { \
36257 + int gpioreg; \
36258 + \
36259 + gpioreg = readl(__io_address(0x20200000+0x8)); \
36260 + gpioreg &= ~(7 << (x-20)*3); \
36261 + gpioreg |= 0x1 << (x-20)*3; \
36262 + writel(gpioreg, __io_address(0x20200000+0x8)); \
36263 + \
36264 + writel(1<<x, __io_address(0x20200000+(0x1C))); \
36265 +} while (0)
36266 +
36267 +#define FLAME_OFF(x) \
36268 +do { \
36269 + writel(1<<x, __io_address(0x20200000+(0x28))); \
36270 +} while (0)
36271 +#else
36272 +#define FLAME_ON(x) do { } while (0)
36273 +#define FLAME_OFF(X) do { } while (0)
36274 +#endif
36275 +
36276 +/* This is a quick-and-dirty arch-specific register read/write. We know that
36277 + * writes to a peripheral on BCM2835 always arrive in order, and that reads and
36278 + * writes are executed in order, so the need for memory barriers is obviated as
36279 + * long as we are only talking to USB.
36280 + */
36281 +#define FIQ_WRITE(_addr_,_data_) (*(volatile unsigned int *) (_addr_) = (_data_))
36282 +#define FIQ_READ(_addr_) (*(volatile unsigned int *) (_addr_))
36283 +
36284 +/* FIQ-ified register definitions. Offsets are from dwc_regs_base. */
36285 +#define GINTSTS 0x014
36286 +#define GINTMSK 0x018
36287 +/* Debug register. Poll the top of the received packets FIFO. */
36288 +#define GRXSTSR 0x01C
36289 +#define HFNUM 0x408
36290 +#define HAINT 0x414
36291 +#define HAINTMSK 0x418
36292 +#define HPRT0 0x440
36293 +
36294 +/* HC_regs start from an offset of 0x500 */
36295 +#define HC_START 0x500
36296 +#define HC_OFFSET 0x020
36297 +
36298 +#define HC_DMA 0x14
36299 +
36300 +#define HCCHAR 0x00
36301 +#define HCSPLT 0x04
36302 +#define HCINT 0x08
36303 +#define HCINTMSK 0x0C
36304 +#define HCTSIZ 0x10
36305 +
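+/* Illustrative only: the FSM code above computes per-channel register addresses
+ * inline as (HC_START + (HC_OFFSET * n) + reg). A helper such as the hypothetical
+ * HC_REG() below is one way to express that addressing scheme; the driver itself
+ * does not define it.
+ *
+ *	#define HC_REG(n, reg)	(HC_START + (HC_OFFSET * (n)) + (reg))
+ *
+ *	e.g. FIQ_READ(state->dwc_regs_base + HC_REG(n, HCCHAR))
+ */
+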
36306 +#define ISOC_XACTPOS_ALL 0b11
36307 +#define ISOC_XACTPOS_BEGIN 0b10
36308 +#define ISOC_XACTPOS_MID 0b00
36309 +#define ISOC_XACTPOS_END 0b01
36310 +
36311 +#define DWC_PID_DATA2 0b01
36312 +#define DWC_PID_MDATA 0b11
36313 +#define DWC_PID_DATA1 0b10
36314 +#define DWC_PID_DATA0 0b00
36315 +
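+/* Register pointers for the MPHI peripheral. In this driver it is used purely
+ * as a doorbell: the FIQ writes to swirq_set (or, on cores without it, issues a
+ * dummy outdda/outddb transfer) to latch an interrupt so that the ordinary IRQ
+ * handler runs. See the kick_irq paths in dwc_otg_fiq_fsm() and dwc_otg_fiq_nop().
+ */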
36316 +typedef struct {
36317 + volatile void* base;
36318 + volatile void* ctrl;
36319 + volatile void* outdda;
36320 + volatile void* outddb;
36321 + volatile void* intstat;
36322 + volatile void* swirq_set;
36323 + volatile void* swirq_clr;
36324 +} mphi_regs_t;
36325 +
36326 +enum fiq_debug_level {
36327 + FIQDBG_SCHED = (1 << 0),
36328 + FIQDBG_INT = (1 << 1),
36329 + FIQDBG_ERR = (1 << 2),
36330 + FIQDBG_PORTHUB = (1 << 3),
36331 +};
36332 +
36333 +#ifdef CONFIG_ARM64
36334 +
36335 +typedef spinlock_t fiq_lock_t;
36336 +
36337 +#else
36338 +
36339 +typedef struct {
36340 + union {
36341 + uint32_t slock;
36342 + struct _tickets {
36343 + uint16_t owner;
36344 + uint16_t next;
36345 + } tickets;
36346 + };
36347 +} fiq_lock_t;
36348 +
36349 +#endif
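+
+/* A minimal sketch (not the driver's implementation, which lives in assembly
+ * elsewhere) of how the ticket fields above are intended to behave: the acquirer
+ * takes the current "next" value as its ticket, increments "next", then spins
+ * until "owner" reaches its ticket; unlock increments "owner".
+ *
+ *	static inline void fiq_ticket_lock_sketch(fiq_lock_t *lock)
+ *	{
+ *		uint16_t ticket = __sync_fetch_and_add(&lock->tickets.next, 1);
+ *		while (READ_ONCE(lock->tickets.owner) != ticket)
+ *			cpu_relax();
+ *	}
+ */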
36350 +
36351 +struct fiq_state;
36352 +
36353 +extern void _fiq_print (enum fiq_debug_level dbg_lvl, volatile struct fiq_state *state, char *fmt, ...);
36354 +#if 0
36355 +#define fiq_print _fiq_print
36356 +#else
36357 +#define fiq_print(x, y, ...)
36358 +#endif
36359 +
36360 +extern bool fiq_enable, fiq_fsm_enable;
36361 +extern ushort nak_holdoff;
36362 +
36363 +/**
36364 + * enum fiq_fsm_state - The FIQ FSM states.
36365 + *
36366 + * This is the "core" of the FIQ FSM. Broadly, the FSM states follow the
36367 + * USB2.0 specification for host responses to various transaction states.
36368 + * There are modifications to this host state machine because of a variety of
36369 + * quirks and limitations in the dwc_otg hardware.
36370 + *
36371 + * The fsm state is also used to communicate back to the driver on completion of
36372 + * a split transaction. The end states are used in conjunction with the interrupts
36373 + * raised by the final transaction.
36374 + */
36375 +enum fiq_fsm_state {
36376 + /* FIQ isn't enabled for this host channel */
36377 + FIQ_PASSTHROUGH = 0,
36378 + /* For the first interrupt received for this channel,
36379 + * the FIQ has to ack any interrupts indicating success. */
36380 + FIQ_PASSTHROUGH_ERRORSTATE = 31,
36381 + /* Nonperiodic state groups */
36382 + FIQ_NP_SSPLIT_STARTED = 1,
36383 + FIQ_NP_SSPLIT_RETRY = 2,
36384 + /* TT contention - working around hub bugs */
36385 + FIQ_NP_SSPLIT_PENDING = 33,
36386 + FIQ_NP_OUT_CSPLIT_RETRY = 3,
36387 + FIQ_NP_IN_CSPLIT_RETRY = 4,
36388 + FIQ_NP_SPLIT_DONE = 5,
36389 + FIQ_NP_SPLIT_LS_ABORTED = 6,
36390 + /* This differentiates a HS transaction error from a LS one
36391 + * (handling the hub state is different) */
36392 + FIQ_NP_SPLIT_HS_ABORTED = 7,
36393 +
36394 + /* Periodic state groups */
36395 + /* Periodic transactions are either started directly by the IRQ handler
36396 + * or deferred if the TT is already in use.
36397 + */
36398 + FIQ_PER_SSPLIT_QUEUED = 8,
36399 + FIQ_PER_SSPLIT_STARTED = 9,
36400 + FIQ_PER_SSPLIT_LAST = 10,
36401 +
36402 +
36403 + FIQ_PER_ISO_OUT_PENDING = 11,
36404 + FIQ_PER_ISO_OUT_ACTIVE = 12,
36405 + FIQ_PER_ISO_OUT_LAST = 13,
36406 + FIQ_PER_ISO_OUT_DONE = 27,
36407 +
36408 + FIQ_PER_CSPLIT_WAIT = 14,
36409 + FIQ_PER_CSPLIT_NYET1 = 15,
36410 + FIQ_PER_CSPLIT_BROKEN_NYET1 = 28,
36411 + FIQ_PER_CSPLIT_NYET_FAFF = 29,
36412 + /* For multiple CSPLITs (large isoc IN, or delayed interrupt) */
36413 + FIQ_PER_CSPLIT_POLL = 16,
36414 + /* The last CSPLIT for a transaction has been issued, differentiates
36415 + * for the state machine to queue the next packet.
36416 + */
36417 + FIQ_PER_CSPLIT_LAST = 17,
36418 +
36419 + FIQ_PER_SPLIT_DONE = 18,
36420 + FIQ_PER_SPLIT_LS_ABORTED = 19,
36421 + FIQ_PER_SPLIT_HS_ABORTED = 20,
36422 + FIQ_PER_SPLIT_NYET_ABORTED = 21,
36423 + /* Frame rollover has occurred without the transaction finishing. */
36424 + FIQ_PER_SPLIT_TIMEOUT = 22,
36425 +
36426 + /* FIQ-accelerated HS Isochronous state groups */
36427 + FIQ_HS_ISOC_TURBO = 23,
36428 + /* For interval > 1, SOF wakes up the isochronous FSM */
36429 + FIQ_HS_ISOC_SLEEPING = 24,
36430 + FIQ_HS_ISOC_DONE = 25,
36431 + FIQ_HS_ISOC_ABORTED = 26,
36432 + FIQ_DEQUEUE_ISSUED = 30,
36433 + FIQ_TEST = 32,
36434 +};
36435 +
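+/* Dedicated stack for the FIQ handler. The magic1/magic2 words bracket the
+ * stack area and are presumably intended as overflow canaries checked by the
+ * driver (an assumption based on the guard layout; the check is not shown in
+ * this file).
+ */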
36436 +struct fiq_stack {
36437 + int magic1;
36438 + uint8_t stack[2048];
36439 + int magic2;
36440 +};
36441 +
36442 +
36443 +/**
36444 + * struct fiq_dma_info - DMA bounce buffer utilisation information (per-channel)
36445 + * @index: Number of slots reported used for IN transactions / number of slots
36446 + * transmitted for an OUT transaction
36447 + * @slot_len[6]: Number of actual transfer bytes in each slot (255 if unused)
36448 + *
36449 + * Split transaction transfers can have variable length depending on other bus
36450 + * traffic. The OTG core DMA engine requires 4-byte-aligned addresses, so
36451 + * each transaction needs a guaranteed aligned address. A maximum of 6 split transfers
36452 + * can happen per frame.
36453 + */
36454 +struct fiq_dma_info {
36455 + u8 index;
36456 + u8 slot_len[6];
36457 +};
36458 +
36459 +struct __attribute__((packed)) fiq_split_dma_slot {
36460 + u8 buf[188];
36461 +};
36462 +
36463 +struct fiq_dma_channel {
36464 + struct __attribute__((packed)) fiq_split_dma_slot index[6];
36465 +};
36466 +
36467 +struct fiq_dma_blob {
36468 + struct __attribute__((packed)) fiq_dma_channel channel[0];
36469 +};
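+
+/* Layout note (illustrative, not part of the driver): each host channel owns six
+ * 188-byte bounce-buffer slots, so the physical address of slot "slot" for
+ * channel "chan" within the blob at dma_base can be computed as below. The
+ * helper name is hypothetical.
+ *
+ *	static inline dma_addr_t fiq_slot_dma_addr_sketch(dma_addr_t dma_base,
+ *							  int chan, int slot)
+ *	{
+ *		return dma_base + chan * sizeof(struct fiq_dma_channel)
+ *				+ slot * sizeof(struct fiq_split_dma_slot);
+ *	}
+ */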
36470 +
36471 +/**
36472 + * struct fiq_hs_isoc_info - USB2.0 isochronous data
36473 + * @iso_desc: Pointer to the array of OTG URB iso_frame_descs.
36474 + * @nrframes: Total length of iso_frame_desc array
36475 + * @index: Current index (FIQ-maintained)
36476 + * @stride: Interval in uframes between HS isoc transactions
36477 + */
36478 +struct fiq_hs_isoc_info {
36479 + struct dwc_otg_hcd_iso_packet_desc *iso_desc;
36480 + unsigned int nrframes;
36481 + unsigned int index;
36482 + unsigned int stride;
36483 +};
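+
+/* Example: for a high-speed isochronous endpoint with bInterval = 3 (a transfer
+ * every 4 microframes) the driver would set stride = 4; after each transaction
+ * the FSM sleeps for stride - 1 = 3 microframes in FIQ_HS_ISOC_SLEEPING before
+ * issuing the next one (see FIQ_HS_ISOC_TURBO above). The bInterval mapping here
+ * is the usual USB 2.0 2^(bInterval-1) rule, stated as an illustration.
+ */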
36484 +
36485 +/**
36486 + * struct fiq_channel_state - FIQ state machine storage
36487 + * @fsm: Current state of the channel as understood by the FIQ
36488 + * @nr_errors: Number of transaction errors on this split-transaction
36489 + * @hub_addr: SSPLIT/CSPLIT destination hub
36490 + * @port_addr: SSPLIT/CSPLIT destination port - always 1 if single TT hub
36491 + * @nrpackets: For isoc OUT, the number of split-OUT packets to transmit. For
36492 + * split-IN, number of CSPLIT data packets that were received.
36493 + * @hcchar_copy:
36494 + * @hcsplt_copy:
36495 + * @hcintmsk_copy:
36496 + * @hctsiz_copy: Copies of the host channel registers.
36497 + * For use as scratch, or for returning state.
36498 + *
36499 + * The fiq_channel_state is state storage between interrupts for a host channel. The
36500 + * FSM state is stored here. Members of this structure must only be set up by the
36501 + * driver prior to enabling the FIQ for this host channel, and not touched until the FIQ
36502 + * has updated the state to either a COMPLETE state group or ABORT state group.
36503 + */
36504 +
36505 +struct fiq_channel_state {
36506 + enum fiq_fsm_state fsm;
36507 + unsigned int nr_errors;
36508 + unsigned int hub_addr;
36509 + unsigned int port_addr;
36510 + /* Hardware bug workaround: sometimes channel halt interrupts are
36511 + * delayed until the next SOF. Keep track of when we expected to get interrupted. */
36512 + unsigned int expected_uframe;
36513 + /* number of uframes remaining (for interval > 1 HS isoc transfers) before next transfer */
36514 + unsigned int uframe_sleeps;
36515 + /* in/out for communicating number of dma buffers used, or number of ISOC to do */
36516 + unsigned int nrpackets;
36517 + struct fiq_dma_info dma_info;
36518 + struct fiq_hs_isoc_info hs_isoc_info;
36519 + /* Copies of HC registers - in/out communication from/to IRQ handler
36520 +	 * and for ease of channel setup. A bit of munging is performed - for
36521 + * example the hctsiz.b.maxp is _always_ the max packet size of the endpoint.
36522 + */
36523 + hcchar_data_t hcchar_copy;
36524 + hcsplt_data_t hcsplt_copy;
36525 + hcint_data_t hcint_copy;
36526 + hcintmsk_data_t hcintmsk_copy;
36527 + hctsiz_data_t hctsiz_copy;
36528 + hcdma_data_t hcdma_copy;
36529 +};
36530 +
36531 +/**
36532 + * struct fiq_state - top-level FIQ state machine storage
36533 + * @mphi_regs: virtual address of the MPHI peripheral register file
36534 + * @dwc_regs_base: virtual address of the base of the DWC core register file
36535 + * @dma_base: physical address for the base of the DMA bounce buffers
36536 + * @dummy_send: Scratch area for sending a fake message to the MPHI peripheral
36537 + * @gintmsk_saved: Top-level mask of interrupts that the FIQ has not handled.
36538 + * Used for determining which interrupts fired to set off the IRQ handler.
36539 + * @haintmsk_saved: Mask of interrupts from host channels that the FIQ did not handle internally.
36540 + * @kick_np_queues: Flag set when the IRQ handler should be woken to service the non-periodic queues
36542 + * @next_sched_frame: For periodic transactions handled by the driver's SOF-driven queuing mechanism,
36543 + * this is the next frame on which a SOF interrupt is required. Used to hold off
36544 + * passing SOF through to the driver until necessary.
36545 + * @channel[n]: Per-channel FIQ state. Allocated during init depending on the number of host
36546 + * channels configured into the core logic.
36547 + *
36548 + * This is passed as the first argument to the dwc_otg_fiq_fsm top-level FIQ handler from the asm stub.
36549 + * It contains top-level state information.
36550 + */
36551 +struct fiq_state {
36552 + fiq_lock_t lock;
36553 + mphi_regs_t mphi_regs;
36554 + void *dwc_regs_base;
36555 + dma_addr_t dma_base;
36556 + struct fiq_dma_blob *fiq_dmab;
36557 + void *dummy_send;
36558 + dma_addr_t dummy_send_dma;
36559 + gintmsk_data_t gintmsk_saved;
36560 + haintmsk_data_t haintmsk_saved;
36561 + int mphi_int_count;
36562 + unsigned int fiq_done;
36563 + unsigned int kick_np_queues;
36564 + unsigned int next_sched_frame;
36565 +#ifdef FIQ_DEBUG
36566 + char * buffer;
36567 + unsigned int bufsiz;
36568 +#endif
36569 + struct fiq_channel_state channel[0];
36570 +};
36571 +
36572 +#ifdef CONFIG_ARM64
36573 +
36574 +#ifdef local_fiq_enable
36575 +#undef local_fiq_enable
36576 +#endif
36577 +
36578 +#ifdef local_fiq_disable
36579 +#undef local_fiq_disable
36580 +#endif
36581 +
36582 +extern void local_fiq_enable(void);
36583 +
36584 +extern void local_fiq_disable(void);
36585 +
36586 +#endif
36587 +
36588 +extern void fiq_fsm_spin_lock(fiq_lock_t *lock);
36589 +
36590 +extern void fiq_fsm_spin_unlock(fiq_lock_t *lock);
36591 +
36592 +extern int fiq_fsm_too_late(struct fiq_state *st, int n);
36593 +
36594 +extern int fiq_fsm_tt_in_use(struct fiq_state *st, int num_channels, int n);
36595 +
36596 +extern void dwc_otg_fiq_fsm(struct fiq_state *state, int num_channels);
36597 +
36598 +extern void dwc_otg_fiq_nop(struct fiq_state *state);
36599 +
36600 +#endif /* DWC_OTG_FIQ_FSM_H_ */
36601 --- /dev/null
36602 +++ b/drivers/usb/host/dwc_otg/dwc_otg_fiq_stub.S
36603 @@ -0,0 +1,80 @@
36604 +/*
36605 + * dwc_otg_fiq_fsm.S - assembly stub for the FSM FIQ
36606 + *
36607 + * Copyright (c) 2013 Raspberry Pi Foundation
36608 + *
36609 + * Author: Jonathan Bell <jonathan@raspberrypi.org>
36610 + * All rights reserved.
36611 + *
36612 + * Redistribution and use in source and binary forms, with or without
36613 + * modification, are permitted provided that the following conditions are met:
36614 + * * Redistributions of source code must retain the above copyright
36615 + * notice, this list of conditions and the following disclaimer.
36616 + * * Redistributions in binary form must reproduce the above copyright
36617 + * notice, this list of conditions and the following disclaimer in the
36618 + * documentation and/or other materials provided with the distribution.
36619 + * * Neither the name of Raspberry Pi nor the
36620 + * names of its contributors may be used to endorse or promote products
36621 + * derived from this software without specific prior written permission.
36622 + *
36623 + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
36624 + * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
36625 + * WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
36626 + * DISCLAIMED. IN NO EVENT SHALL <COPYRIGHT HOLDER> BE LIABLE FOR ANY
36627 + * DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
36628 + * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
36629 + * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
36630 + * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
36631 + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
36632 + * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
36633 + */
36634 +
36635 +
36636 +#include <asm/assembler.h>
36637 +#include <linux/linkage.h>
36638 +
36639 +
36640 +.text
36641 +
36642 +.global _dwc_otg_fiq_stub_end;
36643 +
36644 +/**
36645 + * _dwc_otg_fiq_stub() - entry copied to the FIQ vector page to allow
36646 + * a C-style function call with arguments from the FIQ banked registers.
36647 + * r0 = &hcd->fiq_state
36648 + * r1 = &hcd->num_channels
36649 + * r2 = &hcd->dma_buffers
36650 + * Tramples: r0, r1, r2, r4, fp, ip
36651 + */
36652 +
36653 +ENTRY(_dwc_otg_fiq_stub)
36654 + /* Stash unbanked regs - SP will have been set up for us */
36655 + mov ip, sp;
36656 + stmdb sp!, {r0-r12, lr};
36657 +#ifdef FIQ_DEBUG
36658 + // Cycle profiling - read cycle counter at start
36659 + mrc p15, 0, r5, c15, c12, 1;
36660 +#endif
36661 + /* r11 = fp, don't trample it */
36662 + mov r4, fp;
36663 + /* set EABI frame size */
36664 + sub fp, ip, #512;
36665 +
36666 + /* for fiq NOP mode - just need state */
36667 + mov r0, r8;
36668 + /* r9 = num_channels */
36669 + mov r1, r9;
36670 + /* r10 = struct *dma_bufs */
36671 +// mov r2, r10;
36672 +
36673 + /* r4 = &fiq_c_function */
36674 + blx r4;
36675 +#ifdef FIQ_DEBUG
36676 + mrc p15, 0, r4, c15, c12, 1;
36677 + subs r5, r5, r4;
36678 + // r5 is now the cycle count time for executing the FIQ. Store it somewhere?
36679 +#endif
36680 + ldmia sp!, {r0-r12, lr};
36681 + subs pc, lr, #4;
36682 +_dwc_otg_fiq_stub_end:
36683 +END(_dwc_otg_fiq_stub)
36684 --- /dev/null
36685 +++ b/drivers/usb/host/dwc_otg/dwc_otg_hcd.c
36686 @@ -0,0 +1,4327 @@
36687 +
36688 +/* ==========================================================================
36689 + * $File: //dwh/usb_iip/dev/software/otg/linux/drivers/dwc_otg_hcd.c $
36690 + * $Revision: #104 $
36691 + * $Date: 2011/10/24 $
36692 + * $Change: 1871159 $
36693 + *
36694 + * Synopsys HS OTG Linux Software Driver and documentation (hereinafter,
36695 + * "Software") is an Unsupported proprietary work of Synopsys, Inc. unless
36696 + * otherwise expressly agreed to in writing between Synopsys and you.
36697 + *
36698 + * The Software IS NOT an item of Licensed Software or Licensed Product under
36699 + * any End User Software License Agreement or Agreement for Licensed Product
36700 + * with Synopsys or any supplement thereto. You are permitted to use and
36701 + * redistribute this Software in source and binary forms, with or without
36702 + * modification, provided that redistributions of source code must retain this
36703 + * notice. You may not view, use, disclose, copy or distribute this file or
36704 + * any information contained herein except pursuant to this license grant from
36705 + * Synopsys. If you do not agree with this notice, including the disclaimer
36706 + * below, then you are not authorized to use the Software.
36707 + *
36708 + * THIS SOFTWARE IS BEING DISTRIBUTED BY SYNOPSYS SOLELY ON AN "AS IS" BASIS
36709 + * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
36710 + * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
36711 + * ARE HEREBY DISCLAIMED. IN NO EVENT SHALL SYNOPSYS BE LIABLE FOR ANY DIRECT,
36712 + * INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
36713 + * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
36714 + * SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
36715 + * CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
36716 + * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
36717 + * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH
36718 + * DAMAGE.
36719 + * ========================================================================== */
36720 +#ifndef DWC_DEVICE_ONLY
36721 +
36722 +/** @file
36723 + * This file implements HCD Core. All code in this file is portable and doesn't
36724 + * use any OS specific functions.
36725 + * Interface provided by HCD Core is defined in <code><hcd_if.h></code>
36726 + * header file.
36727 + */
36728 +
36729 +#include <linux/usb.h>
36730 +#include <linux/usb/hcd.h>
36731 +
36732 +#include "dwc_otg_hcd.h"
36733 +#include "dwc_otg_regs.h"
36734 +#include "dwc_otg_fiq_fsm.h"
36735 +
36736 +extern bool microframe_schedule;
36737 +extern uint16_t fiq_fsm_mask, nak_holdoff;
36738 +
36739 +//#define DEBUG_HOST_CHANNELS
36740 +#ifdef DEBUG_HOST_CHANNELS
36741 +static int last_sel_trans_num_per_scheduled = 0;
36742 +static int last_sel_trans_num_nonper_scheduled = 0;
36743 +static int last_sel_trans_num_avail_hc_at_start = 0;
36744 +static int last_sel_trans_num_avail_hc_at_end = 0;
36745 +#endif /* DEBUG_HOST_CHANNELS */
36746 +
36747 +
36748 +dwc_otg_hcd_t *dwc_otg_hcd_alloc_hcd(void)
36749 +{
36750 + return DWC_ALLOC(sizeof(dwc_otg_hcd_t));
36751 +}
36752 +
36753 +/**
36754 + * Connection timeout function. An OTG host is required to display a
36755 + * message if the device does not connect within 10 seconds.
36756 + */
36757 +void dwc_otg_hcd_connect_timeout(void *ptr)
36758 +{
36759 + DWC_DEBUGPL(DBG_HCDV, "%s(%p)\n", __func__, ptr);
36760 + DWC_PRINTF("Connect Timeout\n");
36761 + __DWC_ERROR("Device Not Connected/Responding\n");
36762 +}
36763 +
36764 +#if defined(DEBUG)
36765 +static void dump_channel_info(dwc_otg_hcd_t * hcd, dwc_otg_qh_t * qh)
36766 +{
36767 + if (qh->channel != NULL) {
36768 + dwc_hc_t *hc = qh->channel;
36769 + dwc_list_link_t *item;
36770 + dwc_otg_qh_t *qh_item;
36771 + int num_channels = hcd->core_if->core_params->host_channels;
36772 + int i;
36773 +
36774 + dwc_otg_hc_regs_t *hc_regs;
36775 + hcchar_data_t hcchar;
36776 + hcsplt_data_t hcsplt;
36777 + hctsiz_data_t hctsiz;
36778 + uint32_t hcdma;
36779 +
36780 + hc_regs = hcd->core_if->host_if->hc_regs[hc->hc_num];
36781 + hcchar.d32 = DWC_READ_REG32(&hc_regs->hcchar);
36782 + hcsplt.d32 = DWC_READ_REG32(&hc_regs->hcsplt);
36783 + hctsiz.d32 = DWC_READ_REG32(&hc_regs->hctsiz);
36784 + hcdma = DWC_READ_REG32(&hc_regs->hcdma);
36785 +
36786 + DWC_PRINTF(" Assigned to channel %p:\n", hc);
36787 + DWC_PRINTF(" hcchar 0x%08x, hcsplt 0x%08x\n", hcchar.d32,
36788 + hcsplt.d32);
36789 + DWC_PRINTF(" hctsiz 0x%08x, hcdma 0x%08x\n", hctsiz.d32,
36790 + hcdma);
36791 + DWC_PRINTF(" dev_addr: %d, ep_num: %d, ep_is_in: %d\n",
36792 + hc->dev_addr, hc->ep_num, hc->ep_is_in);
36793 + DWC_PRINTF(" ep_type: %d\n", hc->ep_type);
36794 + DWC_PRINTF(" max_packet: %d\n", hc->max_packet);
36795 + DWC_PRINTF(" data_pid_start: %d\n", hc->data_pid_start);
36796 + DWC_PRINTF(" xfer_started: %d\n", hc->xfer_started);
36797 + DWC_PRINTF(" halt_status: %d\n", hc->halt_status);
36798 + DWC_PRINTF(" xfer_buff: %p\n", hc->xfer_buff);
36799 + DWC_PRINTF(" xfer_len: %d\n", hc->xfer_len);
36800 + DWC_PRINTF(" qh: %p\n", hc->qh);
36801 + DWC_PRINTF(" NP inactive sched:\n");
36802 + DWC_LIST_FOREACH(item, &hcd->non_periodic_sched_inactive) {
36803 + qh_item =
36804 + DWC_LIST_ENTRY(item, dwc_otg_qh_t, qh_list_entry);
36805 + DWC_PRINTF(" %p\n", qh_item);
36806 + }
36807 + DWC_PRINTF(" NP active sched:\n");
36808 + DWC_LIST_FOREACH(item, &hcd->non_periodic_sched_active) {
36809 + qh_item =
36810 + DWC_LIST_ENTRY(item, dwc_otg_qh_t, qh_list_entry);
36811 + DWC_PRINTF(" %p\n", qh_item);
36812 + }
36813 + DWC_PRINTF(" Channels: \n");
36814 + for (i = 0; i < num_channels; i++) {
36815 + dwc_hc_t *hc = hcd->hc_ptr_array[i];
36816 + DWC_PRINTF(" %2d: %p\n", i, hc);
36817 + }
36818 + }
36819 +}
36820 +#else
36821 +#define dump_channel_info(hcd, qh)
36822 +#endif /* DEBUG */
36823 +
36824 +/**
36825 + * Work queue function for starting the HCD when A-Cable is connected.
36826 + * The hcd_start() must be called in a process context.
36827 + */
36828 +static void hcd_start_func(void *_vp)
36829 +{
36830 + dwc_otg_hcd_t *hcd = (dwc_otg_hcd_t *) _vp;
36831 +
36832 + DWC_DEBUGPL(DBG_HCDV, "%s() %p\n", __func__, hcd);
36833 + if (hcd) {
36834 + hcd->fops->start(hcd);
36835 + }
36836 +}
36837 +
36838 +static void del_xfer_timers(dwc_otg_hcd_t * hcd)
36839 +{
36840 +#ifdef DEBUG
36841 + int i;
36842 + int num_channels = hcd->core_if->core_params->host_channels;
36843 + for (i = 0; i < num_channels; i++) {
36844 + DWC_TIMER_CANCEL(hcd->core_if->hc_xfer_timer[i]);
36845 + }
36846 +#endif
36847 +}
36848 +
36849 +static void del_timers(dwc_otg_hcd_t * hcd)
36850 +{
36851 + del_xfer_timers(hcd);
36852 + DWC_TIMER_CANCEL(hcd->conn_timer);
36853 +}
36854 +
36855 +/**
36856 + * Processes all the URBs in a single list of QHs. Completes them with
36857 + * -ESHUTDOWN and frees the QTD.
36858 + */
36859 +static void kill_urbs_in_qh_list(dwc_otg_hcd_t * hcd, dwc_list_link_t * qh_list)
36860 +{
36861 + dwc_list_link_t *qh_item, *qh_tmp;
36862 + dwc_otg_qh_t *qh;
36863 + dwc_otg_qtd_t *qtd, *qtd_tmp;
36864 +
36865 + DWC_LIST_FOREACH_SAFE(qh_item, qh_tmp, qh_list) {
36866 + qh = DWC_LIST_ENTRY(qh_item, dwc_otg_qh_t, qh_list_entry);
36867 + DWC_CIRCLEQ_FOREACH_SAFE(qtd, qtd_tmp,
36868 + &qh->qtd_list, qtd_list_entry) {
36869 + qtd = DWC_CIRCLEQ_FIRST(&qh->qtd_list);
36870 + if (qtd->urb != NULL) {
36871 + hcd->fops->complete(hcd, qtd->urb->priv,
36872 + qtd->urb, -DWC_E_SHUTDOWN);
36873 + dwc_otg_hcd_qtd_remove_and_free(hcd, qtd, qh);
36874 + }
36875 +
36876 + }
36877 + if(qh->channel) {
36878 + int n = qh->channel->hc_num;
36879 + /* Using hcchar.chen == 1 is not a reliable test.
36880 + * It is possible that the channel has already halted
36881 + * but not yet been through the IRQ handler.
36882 + */
36883 + if (fiq_fsm_enable && (hcd->fiq_state->channel[qh->channel->hc_num].fsm != FIQ_PASSTHROUGH)) {
36884 + qh->channel->halt_status = DWC_OTG_HC_XFER_URB_DEQUEUE;
36885 + qh->channel->halt_pending = 1;
36886 + if (hcd->fiq_state->channel[n].fsm == FIQ_HS_ISOC_TURBO ||
36887 + hcd->fiq_state->channel[n].fsm == FIQ_HS_ISOC_SLEEPING)
36888 + hcd->fiq_state->channel[n].fsm = FIQ_HS_ISOC_ABORTED;
36889 + } else {
36890 + dwc_otg_hc_halt(hcd->core_if, qh->channel,
36891 + DWC_OTG_HC_XFER_URB_DEQUEUE);
36892 + }
36893 + qh->channel = NULL;
36894 + }
36895 + dwc_otg_hcd_qh_remove(hcd, qh);
36896 + }
36897 +}
36898 +
36899 +/**
36900 + * Responds with an error status of ESHUTDOWN to all URBs in the non-periodic
36901 + * and periodic schedules. The QTD associated with each URB is removed from
36902 + * the schedule and freed. This function may be called when a disconnect is
36903 + * detected or when the HCD is being stopped.
36904 + */
36905 +static void kill_all_urbs(dwc_otg_hcd_t * hcd)
36906 +{
36907 + kill_urbs_in_qh_list(hcd, &hcd->non_periodic_sched_inactive);
36908 + kill_urbs_in_qh_list(hcd, &hcd->non_periodic_sched_active);
36909 + kill_urbs_in_qh_list(hcd, &hcd->periodic_sched_inactive);
36910 + kill_urbs_in_qh_list(hcd, &hcd->periodic_sched_ready);
36911 + kill_urbs_in_qh_list(hcd, &hcd->periodic_sched_assigned);
36912 + kill_urbs_in_qh_list(hcd, &hcd->periodic_sched_queued);
36913 +}
36914 +
36915 +/**
36916 + * Start the connection timer. An OTG host is required to display a
36917 + * message if the device does not connect within 10 seconds. The
36918 + * timer is deleted if a port connect interrupt occurs before the
36919 + * timer expires.
36920 + */
36921 +static void dwc_otg_hcd_start_connect_timer(dwc_otg_hcd_t * hcd)
36922 +{
36923 + DWC_TIMER_SCHEDULE(hcd->conn_timer, 10000 /* 10 secs */ );
36924 +}
36925 +
36926 +/**
36927 + * HCD Callback function for session start of the HCD.
36928 + *
36929 + * @param p void pointer to the <code>struct usb_hcd</code>
36930 + */
36931 +static int32_t dwc_otg_hcd_session_start_cb(void *p)
36932 +{
36933 + dwc_otg_hcd_t *dwc_otg_hcd;
36934 + DWC_DEBUGPL(DBG_HCDV, "%s(%p)\n", __func__, p);
36935 + dwc_otg_hcd = p;
36936 + dwc_otg_hcd_start_connect_timer(dwc_otg_hcd);
36937 + return 1;
36938 +}
36939 +
36940 +/**
36941 + * HCD Callback function for starting the HCD when A-Cable is
36942 + * connected.
36943 + *
36944 + * @param p void pointer to the <code>struct usb_hcd</code>
36945 + */
36946 +static int32_t dwc_otg_hcd_start_cb(void *p)
36947 +{
36948 + dwc_otg_hcd_t *dwc_otg_hcd = p;
36949 + dwc_otg_core_if_t *core_if;
36950 + hprt0_data_t hprt0;
36951 +
36952 + core_if = dwc_otg_hcd->core_if;
36953 +
36954 + if (core_if->op_state == B_HOST) {
36955 + /*
36956 + * Reset the port. During a HNP mode switch the reset
36957 + * needs to occur within 1ms and have a duration of at
36958 + * least 50ms.
36959 + */
36960 + hprt0.d32 = dwc_otg_read_hprt0(core_if);
36961 + hprt0.b.prtrst = 1;
36962 + DWC_WRITE_REG32(core_if->host_if->hprt0, hprt0.d32);
36963 + }
36964 + DWC_WORKQ_SCHEDULE_DELAYED(core_if->wq_otg,
36965 + hcd_start_func, dwc_otg_hcd, 50,
36966 + "start hcd");
36967 +
36968 + return 1;
36969 +}
36970 +
36971 +/**
36972 + * HCD Callback function for disconnect of the HCD.
36973 + *
36974 + * @param p void pointer to the <code>struct usb_hcd</code>
36975 + */
36976 +static int32_t dwc_otg_hcd_disconnect_cb(void *p)
36977 +{
36978 + gintsts_data_t intr;
36979 + dwc_otg_hcd_t *dwc_otg_hcd = p;
36980 +
36981 + DWC_SPINLOCK(dwc_otg_hcd->lock);
36982 + /*
36983 + * Set status flags for the hub driver.
36984 + */
36985 + dwc_otg_hcd->flags.b.port_connect_status_change = 1;
36986 + dwc_otg_hcd->flags.b.port_connect_status = 0;
36987 + if(fiq_enable) {
36988 + local_fiq_disable();
36989 + fiq_fsm_spin_lock(&dwc_otg_hcd->fiq_state->lock);
36990 + }
36991 + /*
36992 + * Shutdown any transfers in process by clearing the Tx FIFO Empty
36993 + * interrupt mask and status bits and disabling subsequent host
36994 + * channel interrupts.
36995 + */
36996 + intr.d32 = 0;
36997 + intr.b.nptxfempty = 1;
36998 + intr.b.ptxfempty = 1;
36999 + intr.b.hcintr = 1;
37000 + DWC_MODIFY_REG32(&dwc_otg_hcd->core_if->core_global_regs->gintmsk,
37001 + intr.d32, 0);
37002 + DWC_MODIFY_REG32(&dwc_otg_hcd->core_if->core_global_regs->gintsts,
37003 + intr.d32, 0);
37004 +
37005 + del_timers(dwc_otg_hcd);
37006 +
37007 + /*
37008 + * Turn off the vbus power only if the core has transitioned to device
37009 + * mode. If still in host mode, need to keep power on to detect a
37010 + * reconnection.
37011 + */
37012 + if (dwc_otg_is_device_mode(dwc_otg_hcd->core_if)) {
37013 + if (dwc_otg_hcd->core_if->op_state != A_SUSPEND) {
37014 + hprt0_data_t hprt0 = {.d32 = 0 };
37015 + DWC_PRINTF("Disconnect: PortPower off\n");
37016 + hprt0.b.prtpwr = 0;
37017 + DWC_WRITE_REG32(dwc_otg_hcd->core_if->host_if->hprt0,
37018 + hprt0.d32);
37019 + }
37020 +
37021 + dwc_otg_disable_host_interrupts(dwc_otg_hcd->core_if);
37022 + }
37023 +
37024 + /* Respond with an error status to all URBs in the schedule. */
37025 + kill_all_urbs(dwc_otg_hcd);
37026 +
37027 + if (dwc_otg_is_host_mode(dwc_otg_hcd->core_if)) {
37028 + /* Clean up any host channels that were in use. */
37029 + int num_channels;
37030 + int i;
37031 + dwc_hc_t *channel;
37032 + dwc_otg_hc_regs_t *hc_regs;
37033 + hcchar_data_t hcchar;
37034 +
37035 + num_channels = dwc_otg_hcd->core_if->core_params->host_channels;
37036 +
37037 + if (!dwc_otg_hcd->core_if->dma_enable) {
37038 + /* Flush out any channel requests in slave mode. */
37039 + for (i = 0; i < num_channels; i++) {
37040 + channel = dwc_otg_hcd->hc_ptr_array[i];
37041 + if (DWC_CIRCLEQ_EMPTY_ENTRY
37042 + (channel, hc_list_entry)) {
37043 + hc_regs =
37044 + dwc_otg_hcd->core_if->
37045 + host_if->hc_regs[i];
37046 + hcchar.d32 =
37047 + DWC_READ_REG32(&hc_regs->hcchar);
37048 + if (hcchar.b.chen) {
37049 + hcchar.b.chen = 0;
37050 + hcchar.b.chdis = 1;
37051 + hcchar.b.epdir = 0;
37052 + DWC_WRITE_REG32
37053 + (&hc_regs->hcchar,
37054 + hcchar.d32);
37055 + }
37056 + }
37057 + }
37058 + }
37059 +
37060 + if(fiq_fsm_enable) {
37061 + for(i=0; i < 128; i++) {
37062 + dwc_otg_hcd->hub_port[i] = 0;
37063 + }
37064 + }
37065 + }
37066 +
37067 + if(fiq_enable) {
37068 + fiq_fsm_spin_unlock(&dwc_otg_hcd->fiq_state->lock);
37069 + local_fiq_enable();
37070 + }
37071 +
37072 + if (dwc_otg_hcd->fops->disconnect) {
37073 + dwc_otg_hcd->fops->disconnect(dwc_otg_hcd);
37074 + }
37075 +
37076 + DWC_SPINUNLOCK(dwc_otg_hcd->lock);
37077 + return 1;
37078 +}
37079 +
37080 +/**
37081 + * HCD Callback function for stopping the HCD.
37082 + *
37083 + * @param p void pointer to the <code>struct usb_hcd</code>
37084 + */
37085 +static int32_t dwc_otg_hcd_stop_cb(void *p)
37086 +{
37087 + dwc_otg_hcd_t *dwc_otg_hcd = p;
37088 +
37089 + DWC_DEBUGPL(DBG_HCDV, "%s(%p)\n", __func__, p);
37090 + dwc_otg_hcd_stop(dwc_otg_hcd);
37091 + return 1;
37092 +}
37093 +
37094 +#ifdef CONFIG_USB_DWC_OTG_LPM
37095 +/**
37096 + * HCD Callback function for sleep of HCD.
37097 + *
37098 + * @param p void pointer to the <code>struct usb_hcd</code>
37099 + */
37100 +static int dwc_otg_hcd_sleep_cb(void *p)
37101 +{
37102 + dwc_otg_hcd_t *hcd = p;
37103 +
37104 + dwc_otg_hcd_free_hc_from_lpm(hcd);
37105 +
37106 + return 0;
37107 +}
37108 +#endif
37109 +
37110 +
37111 +/**
37112 + * HCD Callback function for Remote Wakeup.
37113 + *
37114 + * @param p void pointer to the <code>struct usb_hcd</code>
37115 + */
37116 +static int dwc_otg_hcd_rem_wakeup_cb(void *p)
37117 +{
37118 + dwc_otg_hcd_t *hcd = p;
37119 +
37120 + if (hcd->core_if->lx_state == DWC_OTG_L2) {
37121 + hcd->flags.b.port_suspend_change = 1;
37122 + }
37123 +#ifdef CONFIG_USB_DWC_OTG_LPM
37124 + else {
37125 + hcd->flags.b.port_l1_change = 1;
37126 + }
37127 +#endif
37128 + return 0;
37129 +}
37130 +
37131 +/**
37132 + * Halts the DWC_otg host mode operations in a clean manner. USB transfers are
37133 + * stopped.
37134 + */
37135 +void dwc_otg_hcd_stop(dwc_otg_hcd_t * hcd)
37136 +{
37137 + hprt0_data_t hprt0 = {.d32 = 0 };
37138 +
37139 + DWC_DEBUGPL(DBG_HCD, "DWC OTG HCD STOP\n");
37140 +
37141 + /*
37142 + * The root hub should be disconnected before this function is called.
37143 + * The disconnect will clear the QTD lists (via ..._hcd_urb_dequeue)
37144 + * and the QH lists (via ..._hcd_endpoint_disable).
37145 + */
37146 +
37147 + /* Turn off all host-specific interrupts. */
37148 + dwc_otg_disable_host_interrupts(hcd->core_if);
37149 +
37150 + /* Turn off the vbus power */
37151 + DWC_PRINTF("PortPower off\n");
37152 + hprt0.b.prtpwr = 0;
37153 + DWC_WRITE_REG32(hcd->core_if->host_if->hprt0, hprt0.d32);
37154 + dwc_mdelay(1);
37155 +}
37156 +
37157 +int dwc_otg_hcd_urb_enqueue(dwc_otg_hcd_t * hcd,
37158 + dwc_otg_hcd_urb_t * dwc_otg_urb, void **ep_handle,
37159 + int atomic_alloc)
37160 +{
37161 + int retval = 0;
37162 + uint8_t needs_scheduling = 0;
37163 + dwc_otg_transaction_type_e tr_type;
37164 + dwc_otg_qtd_t *qtd;
37165 + gintmsk_data_t intr_mask = {.d32 = 0 };
37166 + hprt0_data_t hprt0 = { .d32 = 0 };
37167 +
37168 +#ifdef DEBUG /* integrity checks (Broadcom) */
37169 + if (NULL == hcd->core_if) {
37170 + DWC_ERROR("**** DWC OTG HCD URB Enqueue - HCD has NULL core_if\n");
37171 + /* No longer connected. */
37172 + return -DWC_E_INVALID;
37173 + }
37174 +#endif
37175 + if (!hcd->flags.b.port_connect_status) {
37176 + /* No longer connected. */
37177 + DWC_ERROR("Not connected\n");
37178 + return -DWC_E_NO_DEVICE;
37179 + }
37180 +
37181 + /* Some core configurations cannot support LS traffic on a FS root port */
37182 + if ((hcd->fops->speed(hcd, dwc_otg_urb->priv) == USB_SPEED_LOW) &&
37183 + (hcd->core_if->hwcfg2.b.fs_phy_type == 1) &&
37184 + (hcd->core_if->hwcfg2.b.hs_phy_type == 1)) {
37185 + hprt0.d32 = DWC_READ_REG32(hcd->core_if->host_if->hprt0);
37186 + if (hprt0.b.prtspd == DWC_HPRT0_PRTSPD_FULL_SPEED) {
37187 + return -DWC_E_NO_DEVICE;
37188 + }
37189 + }
37190 +
37191 + qtd = dwc_otg_hcd_qtd_create(dwc_otg_urb, atomic_alloc);
37192 + if (qtd == NULL) {
37193 + DWC_ERROR("DWC OTG HCD URB Enqueue failed creating QTD\n");
37194 + return -DWC_E_NO_MEMORY;
37195 + }
37196 +#ifdef DEBUG /* integrity checks (Broadcom) */
37197 + if (qtd->urb == NULL) {
37198 + DWC_ERROR("**** DWC OTG HCD URB Enqueue created QTD with no URBs\n");
37199 + return -DWC_E_NO_MEMORY;
37200 + }
37201 + if (qtd->urb->priv == NULL) {
37202 + DWC_ERROR("**** DWC OTG HCD URB Enqueue created QTD URB with no URB handle\n");
37203 + return -DWC_E_NO_MEMORY;
37204 + }
37205 +#endif
37206 + intr_mask.d32 = DWC_READ_REG32(&hcd->core_if->core_global_regs->gintmsk);
37207 + if(!intr_mask.b.sofintr || fiq_enable) needs_scheduling = 1;
37208 + if((((dwc_otg_qh_t *)ep_handle)->ep_type == UE_BULK) && !(qtd->urb->flags & URB_GIVEBACK_ASAP))
37209 + /* Do not schedule SG transactions until qtd has URB_GIVEBACK_ASAP set */
37210 + needs_scheduling = 0;
37211 +
37212 + retval = dwc_otg_hcd_qtd_add(qtd, hcd, (dwc_otg_qh_t **) ep_handle, atomic_alloc);
37213 + // creates a new queue in ep_handle if it doesn't exist already
37214 + if (retval < 0) {
37215 + DWC_ERROR("DWC OTG HCD URB Enqueue failed adding QTD. "
37216 + "Error status %d\n", retval);
37217 + dwc_otg_hcd_qtd_free(qtd);
37218 + return retval;
37219 + }
37220 +
37221 + if(needs_scheduling) {
37222 + tr_type = dwc_otg_hcd_select_transactions(hcd);
37223 + if (tr_type != DWC_OTG_TRANSACTION_NONE) {
37224 + dwc_otg_hcd_queue_transactions(hcd, tr_type);
37225 + }
37226 + }
37227 + return retval;
37228 +}
37229 +
37230 +int dwc_otg_hcd_urb_dequeue(dwc_otg_hcd_t * hcd,
37231 + dwc_otg_hcd_urb_t * dwc_otg_urb)
37232 +{
37233 + dwc_otg_qh_t *qh;
37234 + dwc_otg_qtd_t *urb_qtd;
37235 + BUG_ON(!hcd);
37236 + BUG_ON(!dwc_otg_urb);
37237 +
37238 +#ifdef DEBUG /* integrity checks (Broadcom) */
37239 +
37240 + if (hcd == NULL) {
37241 + DWC_ERROR("**** DWC OTG HCD URB Dequeue has NULL HCD\n");
37242 + return -DWC_E_INVALID;
37243 + }
37244 + if (dwc_otg_urb == NULL) {
37245 + DWC_ERROR("**** DWC OTG HCD URB Dequeue has NULL URB\n");
37246 + return -DWC_E_INVALID;
37247 + }
37248 + if (dwc_otg_urb->qtd == NULL) {
37249 + DWC_ERROR("**** DWC OTG HCD URB Dequeue with NULL QTD\n");
37250 + return -DWC_E_INVALID;
37251 + }
37252 + urb_qtd = dwc_otg_urb->qtd;
37253 + BUG_ON(!urb_qtd);
37254 + if (urb_qtd->qh == NULL) {
37255 + DWC_ERROR("**** DWC OTG HCD URB Dequeue with QTD with NULL Q handler\n");
37256 + return -DWC_E_INVALID;
37257 + }
37258 +#else
37259 + urb_qtd = dwc_otg_urb->qtd;
37260 + BUG_ON(!urb_qtd);
37261 +#endif
37262 + qh = urb_qtd->qh;
37263 + BUG_ON(!qh);
37264 + if (CHK_DEBUG_LEVEL(DBG_HCDV | DBG_HCD_URB)) {
37265 + if (urb_qtd->in_process) {
37266 + dump_channel_info(hcd, qh);
37267 + }
37268 + }
37269 +#ifdef DEBUG /* integrity checks (Broadcom) */
37270 + if (hcd->core_if == NULL) {
37271 + DWC_ERROR("**** DWC OTG HCD URB Dequeue HCD has NULL core_if\n");
37272 + return -DWC_E_INVALID;
37273 + }
37274 +#endif
37275 + if (urb_qtd->in_process && qh->channel) {
37276 + /* The QTD is in process (it has been assigned to a channel). */
37277 + if (hcd->flags.b.port_connect_status) {
37278 + int n = qh->channel->hc_num;
37279 + /*
37280 + * If still connected (i.e. in host mode), halt the
37281 + * channel so it can be used for other transfers. If
37282 + * no longer connected, the host registers can't be
37283 + * written to halt the channel since the core is in
37284 + * device mode.
37285 + */
37286 + /* In FIQ FSM mode, we need to shut down carefully.
37287 + * The FIQ may attempt to restart a disabled channel */
37288 + if (fiq_fsm_enable && (hcd->fiq_state->channel[n].fsm != FIQ_PASSTHROUGH)) {
37289 + local_fiq_disable();
37290 + fiq_fsm_spin_lock(&hcd->fiq_state->lock);
37291 + qh->channel->halt_status = DWC_OTG_HC_XFER_URB_DEQUEUE;
37292 + qh->channel->halt_pending = 1;
37293 + if (hcd->fiq_state->channel[n].fsm == FIQ_HS_ISOC_TURBO ||
37294 + hcd->fiq_state->channel[n].fsm == FIQ_HS_ISOC_SLEEPING)
37295 + hcd->fiq_state->channel[n].fsm = FIQ_HS_ISOC_ABORTED;
37296 + fiq_fsm_spin_unlock(&hcd->fiq_state->lock);
37297 + local_fiq_enable();
37298 + } else {
37299 + dwc_otg_hc_halt(hcd->core_if, qh->channel,
37300 + DWC_OTG_HC_XFER_URB_DEQUEUE);
37301 + }
37302 + }
37303 + }
37304 +
37305 + /*
37306 + * Free the QTD and clean up the associated QH. Leave the QH in the
37307 + * schedule if it has any remaining QTDs.
37308 + */
37309 +
37310 + DWC_DEBUGPL(DBG_HCD, "DWC OTG HCD URB Dequeue - "
37311 + "delete %sQueue handler\n",
37312 + hcd->core_if->dma_desc_enable?"DMA ":"");
37313 + if (!hcd->core_if->dma_desc_enable) {
37314 + uint8_t b = urb_qtd->in_process;
37315 + if (nak_holdoff && qh->do_split && dwc_qh_is_non_per(qh))
37316 + qh->nak_frame = 0xFFFF;
37317 + dwc_otg_hcd_qtd_remove_and_free(hcd, urb_qtd, qh);
37318 + if (b) {
37319 + dwc_otg_hcd_qh_deactivate(hcd, qh, 0);
37320 + qh->channel = NULL;
37321 + } else if (DWC_CIRCLEQ_EMPTY(&qh->qtd_list)) {
37322 + dwc_otg_hcd_qh_remove(hcd, qh);
37323 + }
37324 + } else {
37325 + dwc_otg_hcd_qtd_remove_and_free(hcd, urb_qtd, qh);
37326 + }
37327 + return 0;
37328 +}
37329 +
37330 +int dwc_otg_hcd_endpoint_disable(dwc_otg_hcd_t * hcd, void *ep_handle,
37331 + int retry)
37332 +{
37333 + dwc_otg_qh_t *qh = (dwc_otg_qh_t *) ep_handle;
37334 + int retval = 0;
37335 + dwc_irqflags_t flags;
37336 +
37337 + if (retry < 0) {
37338 + retval = -DWC_E_INVALID;
37339 + goto done;
37340 + }
37341 +
37342 + if (!qh) {
37343 + retval = -DWC_E_INVALID;
37344 + goto done;
37345 + }
37346 +
37347 + DWC_SPINLOCK_IRQSAVE(hcd->lock, &flags);
37348 +
37349 + while (!DWC_CIRCLEQ_EMPTY(&qh->qtd_list) && retry) {
37350 + DWC_SPINUNLOCK_IRQRESTORE(hcd->lock, flags);
37351 + retry--;
37352 + dwc_msleep(5);
37353 + DWC_SPINLOCK_IRQSAVE(hcd->lock, &flags);
37354 + }
37355 +
37356 + dwc_otg_hcd_qh_remove(hcd, qh);
37357 +
37358 + DWC_SPINUNLOCK_IRQRESTORE(hcd->lock, flags);
37359 + /*
37360 + * Split dwc_otg_hcd_qh_remove_and_free() into qh_remove
37361 + * and qh_free to prevent stack dump on DWC_DMA_FREE() with
37362 + * irq_disabled (spinlock_irqsave) in dwc_otg_hcd_desc_list_free()
37363 + * and dwc_otg_hcd_frame_list_alloc().
37364 + */
37365 + dwc_otg_hcd_qh_free(hcd, qh);
37366 +
37367 +done:
37368 + return retval;
37369 +}
37370 +
37371 +#if LINUX_VERSION_CODE >= KERNEL_VERSION(2,6,30)
37372 +int dwc_otg_hcd_endpoint_reset(dwc_otg_hcd_t * hcd, void *ep_handle)
37373 +{
37374 + int retval = 0;
37375 + dwc_otg_qh_t *qh = (dwc_otg_qh_t *) ep_handle;
37376 + if (!qh)
37377 + return -DWC_E_INVALID;
37378 +
37379 + qh->data_toggle = DWC_OTG_HC_PID_DATA0;
37380 + return retval;
37381 +}
37382 +#endif
37383 +
37384 +/**
37385 + * HCD Callback structure for handling mode switching.
37386 + */
37387 +static dwc_otg_cil_callbacks_t hcd_cil_callbacks = {
37388 + .start = dwc_otg_hcd_start_cb,
37389 + .stop = dwc_otg_hcd_stop_cb,
37390 + .disconnect = dwc_otg_hcd_disconnect_cb,
37391 + .session_start = dwc_otg_hcd_session_start_cb,
37392 + .resume_wakeup = dwc_otg_hcd_rem_wakeup_cb,
37393 +#ifdef CONFIG_USB_DWC_OTG_LPM
37394 + .sleep = dwc_otg_hcd_sleep_cb,
37395 +#endif
37396 + .p = 0,
37397 +};
37398 +
37399 +/**
37400 + * Reset tasklet function
37401 + */
37402 +static void reset_tasklet_func(void *data)
37403 +{
37404 + dwc_otg_hcd_t *dwc_otg_hcd = (dwc_otg_hcd_t *) data;
37405 + dwc_otg_core_if_t *core_if = dwc_otg_hcd->core_if;
37406 + hprt0_data_t hprt0;
37407 +
37408 + DWC_DEBUGPL(DBG_HCDV, "USB RESET tasklet called\n");
37409 +
37410 + hprt0.d32 = dwc_otg_read_hprt0(core_if);
37411 + hprt0.b.prtrst = 1;
37412 + DWC_WRITE_REG32(core_if->host_if->hprt0, hprt0.d32);
37413 + dwc_mdelay(60);
37414 +
37415 + hprt0.b.prtrst = 0;
37416 + DWC_WRITE_REG32(core_if->host_if->hprt0, hprt0.d32);
37417 + dwc_otg_hcd->flags.b.port_reset_change = 1;
37418 +}
37419 +
37420 +static void completion_tasklet_func(void *ptr)
37421 +{
37422 + dwc_otg_hcd_t *hcd = (dwc_otg_hcd_t *) ptr;
37423 + struct urb *urb;
37424 + urb_tq_entry_t *item;
37425 + dwc_irqflags_t flags;
37426 +
37427 + /* This could just be spin_lock_irq */
37428 + DWC_SPINLOCK_IRQSAVE(hcd->lock, &flags);
37429 + while (!DWC_TAILQ_EMPTY(&hcd->completed_urb_list)) {
37430 + item = DWC_TAILQ_FIRST(&hcd->completed_urb_list);
37431 + urb = item->urb;
37432 + DWC_TAILQ_REMOVE(&hcd->completed_urb_list, item,
37433 + urb_tq_entries);
37434 + DWC_SPINUNLOCK_IRQRESTORE(hcd->lock, flags);
37435 + DWC_FREE(item);
37436 +
37437 + usb_hcd_giveback_urb(hcd->priv, urb, urb->status);
37438 +
37439 +
37440 + DWC_SPINLOCK_IRQSAVE(hcd->lock, &flags);
37441 + }
37442 + DWC_SPINUNLOCK_IRQRESTORE(hcd->lock, flags);
37443 + return;
37444 +}
37445 +
37446 +static void qh_list_free(dwc_otg_hcd_t * hcd, dwc_list_link_t * qh_list)
37447 +{
37448 + dwc_list_link_t *item;
37449 + dwc_otg_qh_t *qh;
37450 + dwc_irqflags_t flags;
37451 +
37452 + if (!qh_list->next) {
37453 + /* The list hasn't been initialized yet. */
37454 + return;
37455 + }
37456 + /*
37457 +	 * Hold the spinlock here. It is not needed if the function below
37458 +	 * is being called from the ISR.
37459 + */
37460 + DWC_SPINLOCK_IRQSAVE(hcd->lock, &flags);
37461 + /* Ensure there are no QTDs or URBs left. */
37462 + kill_urbs_in_qh_list(hcd, qh_list);
37463 + DWC_SPINUNLOCK_IRQRESTORE(hcd->lock, flags);
37464 +
37465 + DWC_LIST_FOREACH(item, qh_list) {
37466 + qh = DWC_LIST_ENTRY(item, dwc_otg_qh_t, qh_list_entry);
37467 + dwc_otg_hcd_qh_remove_and_free(hcd, qh);
37468 + }
37469 +}
37470 +
37471 +/**
37472 + * Exit from hibernation by powering up the host if the host did not detect
37473 + * SRP from a connected SRP-capable device within the SRP time.
37474 + */
37475 +void dwc_otg_hcd_power_up(void *ptr)
37476 +{
37477 + gpwrdn_data_t gpwrdn = {.d32 = 0 };
37478 + dwc_otg_core_if_t *core_if = (dwc_otg_core_if_t *) ptr;
37479 +
37480 + DWC_PRINTF("%s called\n", __FUNCTION__);
37481 +
37482 + if (!core_if->hibernation_suspend) {
37483 + DWC_PRINTF("Already exited from Hibernation\n");
37484 + return;
37485 + }
37486 +
37487 + /* Switch on the voltage to the core */
37488 + gpwrdn.b.pwrdnswtch = 1;
37489 + DWC_MODIFY_REG32(&core_if->core_global_regs->gpwrdn, gpwrdn.d32, 0);
37490 + dwc_udelay(10);
37491 +
37492 + /* Reset the core */
37493 + gpwrdn.d32 = 0;
37494 + gpwrdn.b.pwrdnrstn = 1;
37495 + DWC_MODIFY_REG32(&core_if->core_global_regs->gpwrdn, gpwrdn.d32, 0);
37496 + dwc_udelay(10);
37497 +
37498 + /* Disable power clamps */
37499 + gpwrdn.d32 = 0;
37500 + gpwrdn.b.pwrdnclmp = 1;
37501 + DWC_MODIFY_REG32(&core_if->core_global_regs->gpwrdn, gpwrdn.d32, 0);
37502 +
37503 +	/* Remove the core reset signal */
37504 + gpwrdn.d32 = 0;
37505 + gpwrdn.b.pwrdnrstn = 1;
37506 + DWC_MODIFY_REG32(&core_if->core_global_regs->gpwrdn, 0, gpwrdn.d32);
37507 + dwc_udelay(10);
37508 +
37509 + /* Disable PMU interrupt */
37510 + gpwrdn.d32 = 0;
37511 + gpwrdn.b.pmuintsel = 1;
37512 + DWC_MODIFY_REG32(&core_if->core_global_regs->gpwrdn, gpwrdn.d32, 0);
37513 +
37514 + core_if->hibernation_suspend = 0;
37515 +
37516 + /* Disable PMU */
37517 + gpwrdn.d32 = 0;
37518 + gpwrdn.b.pmuactv = 1;
37519 + DWC_MODIFY_REG32(&core_if->core_global_regs->gpwrdn, gpwrdn.d32, 0);
37520 + dwc_udelay(10);
37521 +
37522 + /* Enable VBUS */
37523 + gpwrdn.d32 = 0;
37524 + gpwrdn.b.dis_vbus = 1;
37525 + DWC_MODIFY_REG32(&core_if->core_global_regs->gpwrdn, gpwrdn.d32, 0);
37526 +
37527 + core_if->op_state = A_HOST;
37528 + dwc_otg_core_init(core_if);
37529 + dwc_otg_enable_global_interrupts(core_if);
37530 + cil_hcd_start(core_if);
37531 +}
37532 +
37533 +void dwc_otg_cleanup_fiq_channel(dwc_otg_hcd_t *hcd, uint32_t num)
37534 +{
37535 + struct fiq_channel_state *st = &hcd->fiq_state->channel[num];
37536 + struct fiq_dma_blob *blob = hcd->fiq_dmab;
37537 + int i;
37538 +
37539 + st->fsm = FIQ_PASSTHROUGH;
37540 + st->hcchar_copy.d32 = 0;
37541 + st->hcsplt_copy.d32 = 0;
37542 + st->hcint_copy.d32 = 0;
37543 + st->hcintmsk_copy.d32 = 0;
37544 + st->hctsiz_copy.d32 = 0;
37545 + st->hcdma_copy.d32 = 0;
37546 + st->nr_errors = 0;
37547 + st->hub_addr = 0;
37548 + st->port_addr = 0;
37549 + st->expected_uframe = 0;
37550 + st->nrpackets = 0;
37551 + st->dma_info.index = 0;
37552 + for (i = 0; i < 6; i++)
37553 + st->dma_info.slot_len[i] = 255;
37554 + st->hs_isoc_info.index = 0;
37555 + st->hs_isoc_info.iso_desc = NULL;
37556 + st->hs_isoc_info.nrframes = 0;
37557 +
37558 + DWC_MEMSET(&blob->channel[num].index[0], 0x6b, 1128);
37559 +}
37560 +
37561 +/**
37562 + * Frees secondary storage associated with the dwc_otg_hcd structure contained
37563 + * in the struct usb_hcd field.
37564 + */
37565 +static void dwc_otg_hcd_free(dwc_otg_hcd_t * dwc_otg_hcd)
37566 +{
37567 + struct device *dev = dwc_otg_hcd_to_dev(dwc_otg_hcd);
37568 + int i;
37569 +
37570 + DWC_DEBUGPL(DBG_HCD, "DWC OTG HCD FREE\n");
37571 +
37572 + del_timers(dwc_otg_hcd);
37573 +
37574 + /* Free memory for QH/QTD lists */
37575 + qh_list_free(dwc_otg_hcd, &dwc_otg_hcd->non_periodic_sched_inactive);
37576 + qh_list_free(dwc_otg_hcd, &dwc_otg_hcd->non_periodic_sched_active);
37577 + qh_list_free(dwc_otg_hcd, &dwc_otg_hcd->periodic_sched_inactive);
37578 + qh_list_free(dwc_otg_hcd, &dwc_otg_hcd->periodic_sched_ready);
37579 + qh_list_free(dwc_otg_hcd, &dwc_otg_hcd->periodic_sched_assigned);
37580 + qh_list_free(dwc_otg_hcd, &dwc_otg_hcd->periodic_sched_queued);
37581 +
37582 + /* Free memory for the host channels. */
37583 + for (i = 0; i < MAX_EPS_CHANNELS; i++) {
37584 + dwc_hc_t *hc = dwc_otg_hcd->hc_ptr_array[i];
37585 +
37586 +#ifdef DEBUG
37587 + if (dwc_otg_hcd->core_if->hc_xfer_timer[i]) {
37588 + DWC_TIMER_FREE(dwc_otg_hcd->core_if->hc_xfer_timer[i]);
37589 + }
37590 +#endif
37591 + if (hc != NULL) {
37592 + DWC_DEBUGPL(DBG_HCDV, "HCD Free channel #%i, hc=%p\n",
37593 + i, hc);
37594 + DWC_FREE(hc);
37595 + }
37596 + }
37597 +
37598 + if (dwc_otg_hcd->core_if->dma_enable) {
37599 + if (dwc_otg_hcd->status_buf_dma) {
37600 + DWC_DMA_FREE(dev, DWC_OTG_HCD_STATUS_BUF_SIZE,
37601 + dwc_otg_hcd->status_buf,
37602 + dwc_otg_hcd->status_buf_dma);
37603 + }
37604 + } else if (dwc_otg_hcd->status_buf != NULL) {
37605 + DWC_FREE(dwc_otg_hcd->status_buf);
37606 + }
37607 + DWC_SPINLOCK_FREE(dwc_otg_hcd->lock);
37608 + /* Set core_if's lock pointer to NULL */
37609 + dwc_otg_hcd->core_if->lock = NULL;
37610 +
37611 + DWC_TIMER_FREE(dwc_otg_hcd->conn_timer);
37612 + DWC_TASK_FREE(dwc_otg_hcd->reset_tasklet);
37613 + DWC_TASK_FREE(dwc_otg_hcd->completion_tasklet);
37614 + DWC_DMA_FREE(dev, 16, dwc_otg_hcd->fiq_state->dummy_send,
37615 + dwc_otg_hcd->fiq_state->dummy_send_dma);
37616 + DWC_FREE(dwc_otg_hcd->fiq_state);
37617 +
37618 +#ifdef DWC_DEV_SRPCAP
37619 + if (dwc_otg_hcd->core_if->power_down == 2 &&
37620 + dwc_otg_hcd->core_if->pwron_timer) {
37621 + DWC_TIMER_FREE(dwc_otg_hcd->core_if->pwron_timer);
37622 + }
37623 +#endif
37624 + DWC_FREE(dwc_otg_hcd);
37625 +}
37626 +
37627 +int dwc_otg_hcd_init(dwc_otg_hcd_t * hcd, dwc_otg_core_if_t * core_if)
37628 +{
37629 + struct device *dev = dwc_otg_hcd_to_dev(hcd);
37630 + int retval = 0;
37631 + int num_channels;
37632 + int i;
37633 + dwc_hc_t *channel;
37634 +
37635 +#if (defined(DWC_LINUX) && defined(CONFIG_DEBUG_SPINLOCK))
37636 + DWC_SPINLOCK_ALLOC_LINUX_DEBUG(hcd->lock);
37637 +#else
37638 + hcd->lock = DWC_SPINLOCK_ALLOC();
37639 +#endif
37640 + DWC_DEBUGPL(DBG_HCDV, "init of HCD %p given core_if %p\n",
37641 + hcd, core_if);
37642 + if (!hcd->lock) {
37643 + DWC_ERROR("Could not allocate lock for pcd");
37644 + DWC_FREE(hcd);
37645 + retval = -DWC_E_NO_MEMORY;
37646 + goto out;
37647 + }
37648 + hcd->core_if = core_if;
37649 +
37650 + /* Register the HCD CIL Callbacks */
37651 + dwc_otg_cil_register_hcd_callbacks(hcd->core_if,
37652 + &hcd_cil_callbacks, hcd);
37653 +
37654 + /* Initialize the non-periodic schedule. */
37655 + DWC_LIST_INIT(&hcd->non_periodic_sched_inactive);
37656 + DWC_LIST_INIT(&hcd->non_periodic_sched_active);
37657 +
37658 + /* Initialize the periodic schedule. */
37659 + DWC_LIST_INIT(&hcd->periodic_sched_inactive);
37660 + DWC_LIST_INIT(&hcd->periodic_sched_ready);
37661 + DWC_LIST_INIT(&hcd->periodic_sched_assigned);
37662 + DWC_LIST_INIT(&hcd->periodic_sched_queued);
37663 + DWC_TAILQ_INIT(&hcd->completed_urb_list);
37664 + /*
37665 + * Create a host channel descriptor for each host channel implemented
37666 + * in the controller. Initialize the channel descriptor array.
37667 + */
37668 + DWC_CIRCLEQ_INIT(&hcd->free_hc_list);
37669 + num_channels = hcd->core_if->core_params->host_channels;
37670 + DWC_MEMSET(hcd->hc_ptr_array, 0, sizeof(hcd->hc_ptr_array));
37671 + for (i = 0; i < num_channels; i++) {
37672 + channel = DWC_ALLOC(sizeof(dwc_hc_t));
37673 + if (channel == NULL) {
37674 + retval = -DWC_E_NO_MEMORY;
37675 + DWC_ERROR("%s: host channel allocation failed\n",
37676 + __func__);
37677 + dwc_otg_hcd_free(hcd);
37678 + goto out;
37679 + }
37680 + channel->hc_num = i;
37681 + hcd->hc_ptr_array[i] = channel;
37682 +#ifdef DEBUG
37683 + hcd->core_if->hc_xfer_timer[i] =
37684 + DWC_TIMER_ALLOC("hc timer", hc_xfer_timeout,
37685 + &hcd->core_if->hc_xfer_info[i]);
37686 +#endif
37687 + DWC_DEBUGPL(DBG_HCDV, "HCD Added channel #%d, hc=%p\n", i,
37688 + channel);
37689 + }
37690 +
37691 + if (fiq_enable) {
37692 + hcd->fiq_state = DWC_ALLOC(sizeof(struct fiq_state) + (sizeof(struct fiq_channel_state) * num_channels));
37693 + if (!hcd->fiq_state) {
37694 + retval = -DWC_E_NO_MEMORY;
37695 + DWC_ERROR("%s: cannot allocate fiq_state structure\n", __func__);
37696 + dwc_otg_hcd_free(hcd);
37697 + goto out;
37698 + }
37699 + DWC_MEMSET(hcd->fiq_state, 0, (sizeof(struct fiq_state) + (sizeof(struct fiq_channel_state) * num_channels)));
37700 +
37701 +#ifdef CONFIG_ARM64
37702 + spin_lock_init(&hcd->fiq_state->lock);
37703 +#endif
37704 +
37705 + for (i = 0; i < num_channels; i++) {
37706 + hcd->fiq_state->channel[i].fsm = FIQ_PASSTHROUGH;
37707 + }
37708 + hcd->fiq_state->dummy_send = DWC_DMA_ALLOC_ATOMIC(dev, 16,
37709 + &hcd->fiq_state->dummy_send_dma);
37710 +
37711 + hcd->fiq_stack = DWC_ALLOC(sizeof(struct fiq_stack));
37712 + if (!hcd->fiq_stack) {
37713 + retval = -DWC_E_NO_MEMORY;
37714 + DWC_ERROR("%s: cannot allocate fiq_stack structure\n", __func__);
37715 + dwc_otg_hcd_free(hcd);
37716 + goto out;
37717 + }
37718 + hcd->fiq_stack->magic1 = 0xDEADBEEF;
37719 + hcd->fiq_stack->magic2 = 0xD00DFEED;
37720 + hcd->fiq_state->gintmsk_saved.d32 = ~0;
37721 + hcd->fiq_state->haintmsk_saved.b2.chint = ~0;
37722 +
37723 + /* This bit is terrible and uses no API, but necessary. The FIQ has no concept of DMA pools
37724 + * (and if it did, would be a lot slower). This allocates a chunk of memory (~9kiB for 8 host channels)
37725 + * for use as transaction bounce buffers in a 2-D array. Our access into this chunk is done by some
37726 + * moderately readable array casts.
37727 + */
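+	/* Rough sizing, assuming 6 slots of 188 bytes per channel (the layout
+	 * implied by the 1128-byte per-channel and 9024-byte total memsets used
+	 * in this file):
+	 *   per channel:     6 * 188  = 1128 bytes
+	 *   8 host channels: 8 * 1128 = 9024 bytes (~9 kiB)
+	 */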
37728 + hcd->fiq_dmab = DWC_DMA_ALLOC(dev, (sizeof(struct fiq_dma_channel) * num_channels), &hcd->fiq_state->dma_base);
37729 + DWC_WARN("FIQ DMA bounce buffers: virt = %px dma = %pad len=%zu",
37730 + hcd->fiq_dmab, &hcd->fiq_state->dma_base,
37731 + sizeof(struct fiq_dma_channel) * num_channels);
37732 +
37733 + DWC_MEMSET(hcd->fiq_dmab, 0x6b, 9024);
37734 +
37735 + /* pointer for debug in fiq_print */
37736 + hcd->fiq_state->fiq_dmab = hcd->fiq_dmab;
37737 + if (fiq_fsm_enable) {
37738 + int i;
37739 + for (i=0; i < hcd->core_if->core_params->host_channels; i++) {
37740 + dwc_otg_cleanup_fiq_channel(hcd, i);
37741 + }
37742 + DWC_PRINTF("FIQ FSM acceleration enabled for :\n%s%s%s%s",
37743 + (fiq_fsm_mask & 0x1) ? "Non-periodic Split Transactions\n" : "",
37744 + (fiq_fsm_mask & 0x2) ? "Periodic Split Transactions\n" : "",
37745 + (fiq_fsm_mask & 0x4) ? "High-Speed Isochronous Endpoints\n" : "",
37746 + (fiq_fsm_mask & 0x8) ? "Interrupt/Control Split Transaction hack enabled\n" : "");
37747 + }
37748 + }
37749 +
37750 + /* Initialize the Connection timeout timer. */
37751 + hcd->conn_timer = DWC_TIMER_ALLOC("Connection timer",
37752 + dwc_otg_hcd_connect_timeout, 0);
37753 +
37754 + printk(KERN_DEBUG "dwc_otg: Microframe scheduler %s\n", microframe_schedule ? "enabled":"disabled");
37755 + if (microframe_schedule)
37756 + init_hcd_usecs(hcd);
37757 +
37758 + /* Initialize reset tasklet. */
37759 + hcd->reset_tasklet = DWC_TASK_ALLOC("reset_tasklet", reset_tasklet_func, hcd);
37760 +
37761 + hcd->completion_tasklet = DWC_TASK_ALLOC("completion_tasklet",
37762 + completion_tasklet_func, hcd);
37763 +#ifdef DWC_DEV_SRPCAP
37764 + if (hcd->core_if->power_down == 2) {
37765 +		/* Initialize the power-on timer for host power-up in case of hibernation */
37766 + hcd->core_if->pwron_timer = DWC_TIMER_ALLOC("PWRON TIMER",
37767 + dwc_otg_hcd_power_up, core_if);
37768 + }
37769 +#endif
37770 +
37771 + /*
37772 + * Allocate space for storing data on status transactions. Normally no
37773 + * data is sent, but this space acts as a bit bucket. This must be
37774 + * done after usb_add_hcd since that function allocates the DMA buffer
37775 + * pool.
37776 + */
37777 + if (hcd->core_if->dma_enable) {
37778 + hcd->status_buf =
37779 + DWC_DMA_ALLOC(dev, DWC_OTG_HCD_STATUS_BUF_SIZE,
37780 + &hcd->status_buf_dma);
37781 + } else {
37782 + hcd->status_buf = DWC_ALLOC(DWC_OTG_HCD_STATUS_BUF_SIZE);
37783 + }
37784 + if (!hcd->status_buf) {
37785 + retval = -DWC_E_NO_MEMORY;
37786 + DWC_ERROR("%s: status_buf allocation failed\n", __func__);
37787 + dwc_otg_hcd_free(hcd);
37788 + goto out;
37789 + }
37790 +
37791 + hcd->otg_port = 1;
37792 + hcd->frame_list = NULL;
37793 + hcd->frame_list_dma = 0;
37794 + hcd->periodic_qh_count = 0;
37795 +
37796 + DWC_MEMSET(hcd->hub_port, 0, sizeof(hcd->hub_port));
37797 +#ifdef FIQ_DEBUG
37798 + DWC_MEMSET(hcd->hub_port_alloc, -1, sizeof(hcd->hub_port_alloc));
37799 +#endif
37800 +
37801 +out:
37802 + return retval;
37803 +}
37804 +
37805 +void dwc_otg_hcd_remove(dwc_otg_hcd_t * hcd)
37806 +{
37807 + /* Turn off all host-specific interrupts. */
37808 + dwc_otg_disable_host_interrupts(hcd->core_if);
37809 +
37810 + dwc_otg_hcd_free(hcd);
37811 +}
37812 +
37813 +/**
37814 + * Initializes dynamic portions of the DWC_otg HCD state.
37815 + */
37816 +static void dwc_otg_hcd_reinit(dwc_otg_hcd_t * hcd)
37817 +{
37818 + int num_channels;
37819 + int i;
37820 + dwc_hc_t *channel;
37821 + dwc_hc_t *channel_tmp;
37822 +
37823 + hcd->flags.d32 = 0;
37824 +
37825 + hcd->non_periodic_qh_ptr = &hcd->non_periodic_sched_active;
37826 + if (!microframe_schedule) {
37827 + hcd->non_periodic_channels = 0;
37828 + hcd->periodic_channels = 0;
37829 + } else {
37830 + hcd->available_host_channels = hcd->core_if->core_params->host_channels;
37831 + }
37832 + /*
37833 + * Put all channels in the free channel list and clean up channel
37834 + * states.
37835 + */
37836 + DWC_CIRCLEQ_FOREACH_SAFE(channel, channel_tmp,
37837 + &hcd->free_hc_list, hc_list_entry) {
37838 + DWC_CIRCLEQ_REMOVE(&hcd->free_hc_list, channel, hc_list_entry);
37839 + }
37840 +
37841 + num_channels = hcd->core_if->core_params->host_channels;
37842 + for (i = 0; i < num_channels; i++) {
37843 + channel = hcd->hc_ptr_array[i];
37844 + DWC_CIRCLEQ_INSERT_TAIL(&hcd->free_hc_list, channel,
37845 + hc_list_entry);
37846 + dwc_otg_hc_cleanup(hcd->core_if, channel);
37847 + }
37848 +
37849 + /* Initialize the DWC core for host mode operation. */
37850 + dwc_otg_core_host_init(hcd->core_if);
37851 +
37852 + /* Set core_if's lock pointer to the hcd->lock */
37853 + hcd->core_if->lock = hcd->lock;
37854 +}
37855 +
37856 +/**
37857 + * Assigns transactions from a QTD to a free host channel and initializes the
37858 + * host channel to perform the transactions. The host channel is removed from
37859 + * the free list.
37860 + *
37861 + * @param hcd The HCD state structure.
37862 + * @param qh Transactions from the first QTD for this QH are selected and
37863 + * assigned to a free host channel.
37864 + */
37865 +static void assign_and_init_hc(dwc_otg_hcd_t * hcd, dwc_otg_qh_t * qh)
37866 +{
37867 + dwc_hc_t *hc;
37868 + dwc_otg_qtd_t *qtd;
37869 + dwc_otg_hcd_urb_t *urb;
37870 + void* ptr = NULL;
37871 + uint16_t wLength;
37872 + uint32_t intr_enable;
37873 + unsigned long flags;
37874 + gintmsk_data_t gintmsk = { .d32 = 0, };
37875 + struct device *dev = dwc_otg_hcd_to_dev(hcd);
37876 +
37877 + qtd = DWC_CIRCLEQ_FIRST(&qh->qtd_list);
37878 +
37879 + urb = qtd->urb;
37880 +
37881 + DWC_DEBUGPL(DBG_HCDV, "%s(%p,%p) - urb %x, actual_length %d\n", __func__, hcd, qh, (unsigned int)urb, urb->actual_length);
37882 +
37883 + if (((urb->actual_length < 0) || (urb->actual_length > urb->length)) && !dwc_otg_hcd_is_pipe_in(&urb->pipe_info))
37884 + urb->actual_length = urb->length;
37885 +
37886 +
37887 + hc = DWC_CIRCLEQ_FIRST(&hcd->free_hc_list);
37888 +
37889 + /* Remove the host channel from the free list. */
37890 + DWC_CIRCLEQ_REMOVE_INIT(&hcd->free_hc_list, hc, hc_list_entry);
37891 +
37892 + qh->channel = hc;
37893 +
37894 + qtd->in_process = 1;
37895 +
37896 + /*
37897 + * Use usb_pipedevice to determine device address. This address is
37898 + * 0 before the SET_ADDRESS command and the correct address afterward.
37899 + */
37900 + hc->dev_addr = dwc_otg_hcd_get_dev_addr(&urb->pipe_info);
37901 + hc->ep_num = dwc_otg_hcd_get_ep_num(&urb->pipe_info);
37902 + hc->speed = qh->dev_speed;
37903 + hc->max_packet = dwc_max_packet(qh->maxp);
37904 +
37905 + hc->xfer_started = 0;
37906 + hc->halt_status = DWC_OTG_HC_XFER_NO_HALT_STATUS;
37907 + hc->error_state = (qtd->error_count > 0);
37908 + hc->halt_on_queue = 0;
37909 + hc->halt_pending = 0;
37910 + hc->requests = 0;
37911 +
37912 + /*
37913 + * The following values may be modified in the transfer type section
37914 + * below. The xfer_len value may be reduced when the transfer is
37915 + * started to accommodate the max widths of the XferSize and PktCnt
37916 + * fields in the HCTSIZn register.
37917 + */
37918 +
37919 + hc->ep_is_in = (dwc_otg_hcd_is_pipe_in(&urb->pipe_info) != 0);
37920 + if (hc->ep_is_in) {
37921 + hc->do_ping = 0;
37922 + } else {
37923 + hc->do_ping = qh->ping_state;
37924 + }
37925 +
37926 + hc->data_pid_start = qh->data_toggle;
37927 + hc->multi_count = 1;
37928 +
37929 + if (hcd->core_if->dma_enable) {
37930 + hc->xfer_buff = (uint8_t *) urb->dma + urb->actual_length;
37931 +
37932 + /* For non-dword aligned case */
37933 + if (((unsigned long)hc->xfer_buff & 0x3)
37934 + && !hcd->core_if->dma_desc_enable) {
37935 + ptr = (uint8_t *) urb->buf + urb->actual_length;
37936 + }
37937 + } else {
37938 + hc->xfer_buff = (uint8_t *) urb->buf + urb->actual_length;
37939 + }
37940 + hc->xfer_len = urb->length - urb->actual_length;
37941 + hc->xfer_count = 0;
37942 +
37943 + /*
37944 + * Set the split attributes
37945 + */
37946 + hc->do_split = 0;
37947 + if (qh->do_split) {
37948 + uint32_t hub_addr, port_addr;
37949 + hc->do_split = 1;
37950 + hc->start_pkt_count = 1;
37951 + hc->xact_pos = qtd->isoc_split_pos;
37952 + /* We don't need to do complete splits anymore */
37953 +// if(fiq_fsm_enable)
37954 + if (0)
37955 + hc->complete_split = qtd->complete_split = 0;
37956 + else
37957 + hc->complete_split = qtd->complete_split;
37958 +
37959 + hcd->fops->hub_info(hcd, urb->priv, &hub_addr, &port_addr);
37960 + hc->hub_addr = (uint8_t) hub_addr;
37961 + hc->port_addr = (uint8_t) port_addr;
37962 + }
37963 +
37964 + switch (dwc_otg_hcd_get_pipe_type(&urb->pipe_info)) {
37965 + case UE_CONTROL:
37966 + hc->ep_type = DWC_OTG_EP_TYPE_CONTROL;
37967 + switch (qtd->control_phase) {
37968 + case DWC_OTG_CONTROL_SETUP:
37969 + DWC_DEBUGPL(DBG_HCDV, " Control setup transaction\n");
37970 + hc->do_ping = 0;
37971 + hc->ep_is_in = 0;
37972 + hc->data_pid_start = DWC_OTG_HC_PID_SETUP;
37973 + if (hcd->core_if->dma_enable) {
37974 + hc->xfer_buff = (uint8_t *) urb->setup_dma;
37975 + } else {
37976 + hc->xfer_buff = (uint8_t *) urb->setup_packet;
37977 + }
37978 + hc->xfer_len = 8;
37979 + ptr = NULL;
37980 + break;
37981 + case DWC_OTG_CONTROL_DATA:
37982 + DWC_DEBUGPL(DBG_HCDV, " Control data transaction\n");
37983 + /*
37984 + * Hardware bug: small IN packets with length < 4
37985 + * cause a 4-byte write to memory. We can only catch
37986 + * the case where we know a short packet is going to be
37987 + * returned in a control transfer, as the length is
37988 + * specified in the setup packet. This is only an issue
37989 + * for drivers that insist on packing a device's various
37990 + * properties into a struct and querying them one at a
37991 + * time (uvcvideo).
37992 + * Force the use of align_buf so that the subsequent
37993 + * memcpy puts the right number of bytes in the URB's
37994 + * buffer.
37995 + */
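+		/* wLength occupies bytes 6-7 of the 8-byte SETUP packet, i.e.
+		 * index 3 when the packet is viewed as an array of uint16_t. */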
37996 + wLength = ((uint16_t *)urb->setup_packet)[3];
37997 + if (hc->ep_is_in && wLength < 4)
37998 + ptr = hc->xfer_buff;
37999 +
38000 + hc->data_pid_start = qtd->data_toggle;
38001 + break;
38002 + case DWC_OTG_CONTROL_STATUS:
38003 + /*
38004 + * Direction is opposite of data direction or IN if no
38005 + * data.
38006 + */
38007 + DWC_DEBUGPL(DBG_HCDV, " Control status transaction\n");
38008 + if (urb->length == 0) {
38009 + hc->ep_is_in = 1;
38010 + } else {
38011 + hc->ep_is_in =
38012 + dwc_otg_hcd_is_pipe_out(&urb->pipe_info);
38013 + }
38014 + if (hc->ep_is_in) {
38015 + hc->do_ping = 0;
38016 + }
38017 +
38018 + hc->data_pid_start = DWC_OTG_HC_PID_DATA1;
38019 +
38020 + hc->xfer_len = 0;
38021 + if (hcd->core_if->dma_enable) {
38022 + hc->xfer_buff = (uint8_t *) hcd->status_buf_dma;
38023 + } else {
38024 + hc->xfer_buff = (uint8_t *) hcd->status_buf;
38025 + }
38026 + ptr = NULL;
38027 + break;
38028 + }
38029 + break;
38030 + case UE_BULK:
38031 + hc->ep_type = DWC_OTG_EP_TYPE_BULK;
38032 + break;
38033 + case UE_INTERRUPT:
38034 + hc->ep_type = DWC_OTG_EP_TYPE_INTR;
38035 + break;
38036 + case UE_ISOCHRONOUS:
38037 + {
38038 + struct dwc_otg_hcd_iso_packet_desc *frame_desc;
38039 +
38040 + hc->ep_type = DWC_OTG_EP_TYPE_ISOC;
38041 +
38042 + if (hcd->core_if->dma_desc_enable)
38043 + break;
38044 +
38045 + frame_desc = &urb->iso_descs[qtd->isoc_frame_index];
38046 +
38047 + frame_desc->status = 0;
38048 +
38049 + if (hcd->core_if->dma_enable) {
38050 + hc->xfer_buff = (uint8_t *) urb->dma;
38051 + } else {
38052 + hc->xfer_buff = (uint8_t *) urb->buf;
38053 + }
38054 + hc->xfer_buff +=
38055 + frame_desc->offset + qtd->isoc_split_offset;
38056 + hc->xfer_len =
38057 + frame_desc->length - qtd->isoc_split_offset;
38058 +
38059 + /* For non-dword aligned buffers */
38060 + if (((unsigned long)hc->xfer_buff & 0x3)
38061 + && hcd->core_if->dma_enable) {
38062 + ptr =
38063 + (uint8_t *) urb->buf + frame_desc->offset +
38064 + qtd->isoc_split_offset;
38065 + } else
38066 + ptr = NULL;
38067 +
38068 + if (hc->xact_pos == DWC_HCSPLIT_XACTPOS_ALL) {
38069 + if (hc->xfer_len <= 188) {
38070 + hc->xact_pos = DWC_HCSPLIT_XACTPOS_ALL;
38071 + } else {
38072 + hc->xact_pos =
38073 + DWC_HCSPLIT_XACTPOS_BEGIN;
38074 + }
38075 + }
38076 + }
38077 + break;
38078 + }
38079 + /* non DWORD-aligned buffer case */
38080 + if (ptr) {
38081 + uint32_t buf_size;
38082 + if (hc->ep_type != DWC_OTG_EP_TYPE_ISOC) {
38083 + buf_size = hcd->core_if->core_params->max_transfer_size;
38084 + } else {
38085 + buf_size = 4096;
38086 + }
38087 + if (!qh->dw_align_buf) {
38088 + qh->dw_align_buf = DWC_DMA_ALLOC_ATOMIC(dev, buf_size,
38089 + &qh->dw_align_buf_dma);
38090 + if (!qh->dw_align_buf) {
38091 + DWC_ERROR
38092 + ("%s: Failed to allocate memory to handle "
38093 + "non-dword aligned buffer case\n",
38094 + __func__);
38095 + return;
38096 + }
38097 + }
38098 + if (!hc->ep_is_in) {
38099 + dwc_memcpy(qh->dw_align_buf, ptr, hc->xfer_len);
38100 + }
38101 + hc->align_buff = qh->dw_align_buf_dma;
38102 + } else {
38103 + hc->align_buff = 0;
38104 + }
38105 +
38106 + if (hc->ep_type == DWC_OTG_EP_TYPE_INTR ||
38107 + hc->ep_type == DWC_OTG_EP_TYPE_ISOC) {
38108 + /*
38109 + * This value may be modified when the transfer is started to
38110 + * reflect the actual transfer length.
38111 + */
38112 + hc->multi_count = dwc_hb_mult(qh->maxp);
38113 + }
38114 +
38115 + if (hcd->core_if->dma_desc_enable)
38116 + hc->desc_list_addr = qh->desc_list_dma;
38117 +
38118 + dwc_otg_hc_init(hcd->core_if, hc);
38119 +
38120 + local_irq_save(flags);
38121 +
38122 + if (fiq_enable) {
38123 + local_fiq_disable();
38124 + fiq_fsm_spin_lock(&hcd->fiq_state->lock);
38125 + }
38126 +
38127 + /* Enable the top level host channel interrupt. */
38128 + intr_enable = (1 << hc->hc_num);
38129 + DWC_MODIFY_REG32(&hcd->core_if->host_if->host_global_regs->haintmsk, 0, intr_enable);
38130 +
38131 + /* Make sure host channel interrupts are enabled. */
38132 + gintmsk.b.hcintr = 1;
38133 + DWC_MODIFY_REG32(&hcd->core_if->core_global_regs->gintmsk, 0, gintmsk.d32);
38134 +
38135 + if (fiq_enable) {
38136 + fiq_fsm_spin_unlock(&hcd->fiq_state->lock);
38137 + local_fiq_enable();
38138 + }
38139 +
38140 + local_irq_restore(flags);
38141 + hc->qh = qh;
38142 +}
38143 +
38144 +
38145 +/**
38146 + * fiq_fsm_transaction_suitable() - Test a QH for compatibility with the FIQ
38147 + * @hcd: Pointer to the dwc_otg_hcd struct
38148 + * @qh: pointer to the endpoint's queue head
38149 + *
38150 + * Transaction start/end control flow is grafted onto the existing dwc_otg
38151 + * mechanisms, to avoid spaghettifying the functions more than they already are.
38152 + * This function's eligibility check is altered by the fiq_fsm_mask debug parameter.
38153 + *
38154 + * Returns: 0 for unsuitable, 1 implies the FIQ can be enabled for this transaction.
38155 + */
38156 +
38157 +int fiq_fsm_transaction_suitable(dwc_otg_hcd_t *hcd, dwc_otg_qh_t *qh)
38158 +{
38159 + if (qh->do_split) {
38160 + switch (qh->ep_type) {
38161 + case UE_CONTROL:
38162 + case UE_BULK:
38163 + if (fiq_fsm_mask & (1 << 0))
38164 + return 1;
38165 + break;
38166 + case UE_INTERRUPT:
38167 + case UE_ISOCHRONOUS:
38168 + if (fiq_fsm_mask & (1 << 1))
38169 + return 1;
38170 + break;
38171 + default:
38172 + break;
38173 + }
38174 + } else if (qh->ep_type == UE_ISOCHRONOUS) {
38175 + if (fiq_fsm_mask & (1 << 2)) {
38176 + /* ISOCH support. We test for compatibility:
38177 + * - DWORD aligned buffers
38178 + * - Must be at least 2 transfers (otherwise pointless to use the FIQ)
38179 + * If yes, then the fsm enqueue function will handle the state machine setup.
38180 + */
38181 + dwc_otg_qtd_t *qtd = DWC_CIRCLEQ_FIRST(&qh->qtd_list);
38182 + dwc_otg_hcd_urb_t *urb = qtd->urb;
38183 + dwc_dma_t ptr;
38184 + int i;
38185 +
38186 + if (urb->packet_count < 2)
38187 + return 0;
38188 + for (i = 0; i < urb->packet_count; i++) {
38189 + ptr = urb->dma + urb->iso_descs[i].offset;
38190 + if (ptr & 0x3)
38191 + return 0;
38192 + }
38193 + return 1;
38194 + }
38195 + }
38196 + return 0;
38197 +}
38198 +
38199 +/**
38200 + * fiq_fsm_setup_periodic_dma() - Set up DMA bounce buffers
38201 + * @hcd: Pointer to the dwc_otg_hcd struct
38202 + * @qh: Pointer to the endpoint's queue head
38203 + *
38204 + * Periodic split transactions are transmitted in chunks of at most 188 bytes.
38205 + * This necessitates slicing data up into buckets for isochronous out
38206 + * and fixing up the DMA address for all IN transfers.
38207 + *
38208 + * Returns 1 if the DMA bounce buffers have been used, 0 if the default
38209 + * HC buffer has been used.
38210 + */
38211 +int fiq_fsm_setup_periodic_dma(dwc_otg_hcd_t *hcd, struct fiq_channel_state *st, dwc_otg_qh_t *qh)
38212 + {
38213 + int frame_length, i = 0;
38214 + uint8_t *ptr = NULL;
38215 + dwc_hc_t *hc = qh->channel;
38216 + struct fiq_dma_blob *blob;
38217 + struct dwc_otg_hcd_iso_packet_desc *frame_desc;
38218 +
38219 + for (i = 0; i < 6; i++) {
38220 + st->dma_info.slot_len[i] = 255;
38221 + }
38222 + st->dma_info.index = 0;
38223 + i = 0;
38224 + if (hc->ep_is_in) {
38225 + /*
38226 + * Set dma_regs to bounce buffer. FIQ will update the
38227 + * state depending on transaction progress.
38228 + * Pointer arithmetic on hcd->fiq_state->dma_base (a dma_addr_t)
38229 + * to point it to the correct offset in the allocated buffers.
38230 + */
38231 + blob = (struct fiq_dma_blob *) hcd->fiq_state->dma_base;
38232 + st->hcdma_copy.d32 = (dma_addr_t) blob->channel[hc->hc_num].index[0].buf;
38233 +
38234 + /* Calculate the max number of CSPLITS such that the FIQ can time out
38235 + * a transaction if it fails.
38236 + */
38237 + frame_length = st->hcchar_copy.b.mps;
38238 + do {
38239 + i++;
38240 + frame_length -= 188;
38241 + } while (frame_length >= 0);
38242 + st->nrpackets = i;
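+		/* Worked example of the loop above: nrpackets = floor(mps / 188) + 1,
+		 * so an mps of 64 budgets 1 CSPLIT and a full-speed isoc mps of 1023
+		 * budgets 6, matching the six 188-byte slots reserved per channel. */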
38243 + return 1;
38244 + } else {
38245 + if (qh->ep_type == UE_ISOCHRONOUS) {
38246 +
38247 + dwc_otg_qtd_t *qtd = DWC_CIRCLEQ_FIRST(&qh->qtd_list);
38248 +
38249 + frame_desc = &qtd->urb->iso_descs[qtd->isoc_frame_index];
38250 + frame_length = frame_desc->length;
38251 +
38252 + /* Virtual address for bounce buffers */
38253 + blob = hcd->fiq_dmab;
38254 +
38255 + ptr = qtd->urb->buf + frame_desc->offset;
38256 + if (frame_length == 0) {
38257 + /*
38258 + * for isochronous transactions, we must still transmit a packet
38259 + * even if the length is zero.
38260 + */
38261 + st->dma_info.slot_len[0] = 0;
38262 + st->nrpackets = 1;
38263 + } else {
38264 + do {
38265 + if (frame_length <= 188) {
38266 + dwc_memcpy(&blob->channel[hc->hc_num].index[i].buf[0], ptr, frame_length);
38267 + st->dma_info.slot_len[i] = frame_length;
38268 + ptr += frame_length;
38269 + } else {
38270 + dwc_memcpy(&blob->channel[hc->hc_num].index[i].buf[0], ptr, 188);
38271 + st->dma_info.slot_len[i] = 188;
38272 + ptr += 188;
38273 + }
38274 + i++;
38275 + frame_length -= 188;
38276 + } while (frame_length > 0);
38277 + st->nrpackets = i;
38278 + }
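+			/* Worked example of the slicing above: a 412-byte isoc OUT
+			 * frame is copied into slots of 188, 188 and 36 bytes, giving
+			 * st->nrpackets = 3 slots for the FIQ to send. */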
38279 + ptr = qtd->urb->buf + frame_desc->offset;
38280 + /*
38281 + * Point the HC at the DMA address of the bounce buffers
38282 + *
38283 + * Pointer arithmetic on hcd->fiq_state->dma_base (a
38284 + * dma_addr_t) to point it to the correct offset in the
38285 + * allocated buffers.
38286 + */
38287 + blob = (struct fiq_dma_blob *) hcd->fiq_state->dma_base;
38288 + st->hcdma_copy.d32 = (dma_addr_t) blob->channel[hc->hc_num].index[0].buf;
38289 +
38290 + /* fixup xfersize to the actual packet size */
38291 + st->hctsiz_copy.b.pid = 0;
38292 + st->hctsiz_copy.b.xfersize = st->dma_info.slot_len[0];
38293 + return 1;
38294 + } else {
38295 + /* For interrupt, single OUT packet required, goes in the SSPLIT from hc_buff. */
38296 + return 0;
38297 + }
38298 + }
38299 +}
38300 +
38301 +/**
38302 + * fiq_fsm_np_tt_contended() - Avoid performing contended non-periodic transfers
38303 + * @hcd: Pointer to the dwc_otg_hcd struct
38304 + * @qh: Pointer to the endpoint's queue head
38305 + *
38306 + * Certain hub chips don't differentiate between IN and OUT non-periodic pipes
38307 + * with the same endpoint number. If transfers get completed out of order
38308 + * (disregarding the direction token) then the hub can lock up
38309 + * or return erroneous responses.
38310 + *
38311 + * Returns 1 if initiating the transfer would cause contention, 0 otherwise.
38312 + */
38313 +int fiq_fsm_np_tt_contended(dwc_otg_hcd_t *hcd, dwc_otg_qh_t *qh)
38314 +{
38315 + int i;
38316 + struct fiq_channel_state *st;
38317 + int dev_addr = qh->channel->dev_addr;
38318 + int ep_num = qh->channel->ep_num;
38319 + for (i = 0; i < hcd->core_if->core_params->host_channels; i++) {
38320 + if (i == qh->channel->hc_num)
38321 + continue;
38322 + st = &hcd->fiq_state->channel[i];
38323 + switch (st->fsm) {
38324 + case FIQ_NP_SSPLIT_STARTED:
38325 + case FIQ_NP_SSPLIT_RETRY:
38326 + case FIQ_NP_SSPLIT_PENDING:
38327 + case FIQ_NP_OUT_CSPLIT_RETRY:
38328 + case FIQ_NP_IN_CSPLIT_RETRY:
38329 + if (st->hcchar_copy.b.devaddr == dev_addr &&
38330 + st->hcchar_copy.b.epnum == ep_num)
38331 + return 1;
38332 + break;
38333 + default:
38334 + break;
38335 + }
38336 + }
38337 + return 0;
38338 +}
38339 +
38340 +/*
38341 + * Pushing a periodic request into the queue near the EOF1 point
38342 + * in a microframe causes erroneous behaviour: a frame overrun (frmovrun)
38343 + * interrupt. Usually the request goes out on the bus, causing a transfer,
38344 + * but the core does not transfer the data to memory.
38345 + * A guard interval (in 60 MHz clocks) is therefore required, which must
38346 + * also cover the CPU latency between reading the frame-remaining value
38347 + * and enabling the channel.
38348 + */
38349 +#define PERIODIC_FRREM_BACKOFF 1000
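+/* 1000 clocks at 60 MHz is roughly 16.7 us, i.e. about an eighth of a 125 us
+ * microframe; anything closer to the end of the (micro)frame than this is
+ * treated as too late to queue safely. */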
38350 +
38351 +int fiq_fsm_queue_isoc_transaction(dwc_otg_hcd_t *hcd, dwc_otg_qh_t *qh)
38352 +{
38353 + dwc_hc_t *hc = qh->channel;
38354 + dwc_otg_hc_regs_t *hc_regs = hcd->core_if->host_if->hc_regs[hc->hc_num];
38355 + dwc_otg_qtd_t *qtd = DWC_CIRCLEQ_FIRST(&qh->qtd_list);
38356 + int frame;
38357 + struct fiq_channel_state *st = &hcd->fiq_state->channel[hc->hc_num];
38358 + int xfer_len, nrpackets;
38359 + hcdma_data_t hcdma;
38360 + hfnum_data_t hfnum;
38361 +
38362 + if (st->fsm != FIQ_PASSTHROUGH)
38363 + return 0;
38364 +
38365 + st->nr_errors = 0;
38366 +
38367 + st->hcchar_copy.d32 = 0;
38368 + st->hcchar_copy.b.mps = hc->max_packet;
38369 + st->hcchar_copy.b.epdir = hc->ep_is_in;
38370 + st->hcchar_copy.b.devaddr = hc->dev_addr;
38371 + st->hcchar_copy.b.epnum = hc->ep_num;
38372 + st->hcchar_copy.b.eptype = hc->ep_type;
38373 +
38374 + st->hcintmsk_copy.b.chhltd = 1;
38375 +
38376 + frame = dwc_otg_hcd_get_frame_number(hcd);
38377 + st->hcchar_copy.b.oddfrm = (frame & 0x1) ? 0 : 1;
38378 +
38379 + st->hcchar_copy.b.lspddev = 0;
38380 + /* Enable the channel later as a final register write. */
38381 +
38382 + st->hcsplt_copy.d32 = 0;
38383 +
38384 + st->hs_isoc_info.iso_desc = (struct dwc_otg_hcd_iso_packet_desc *) &qtd->urb->iso_descs;
38385 + st->hs_isoc_info.nrframes = qtd->urb->packet_count;
38386 + /* grab the next DMA address offset from the array */
38387 + st->hcdma_copy.d32 = qtd->urb->dma;
38388 + hcdma.d32 = st->hcdma_copy.d32 + st->hs_isoc_info.iso_desc[0].offset;
38389 +
38390 +	/* We need to set multi_count. This is a bit tricky - it has to be set per-transaction,
38391 +	 * as the core needs to be told to send the correct number of packets. Caution: for IN
38392 +	 * transfers, this is always set to the maximum packet size of the endpoint. */
38393 + xfer_len = st->hs_isoc_info.iso_desc[0].length;
38394 + nrpackets = (xfer_len + st->hcchar_copy.b.mps - 1) / st->hcchar_copy.b.mps;
38395 + if (nrpackets == 0)
38396 + nrpackets = 1;
38397 + st->hcchar_copy.b.multicnt = nrpackets;
38398 + st->hctsiz_copy.b.pktcnt = nrpackets;
38399 +
38400 + /* Initial PID also needs to be set */
38401 + if (st->hcchar_copy.b.epdir == 0) {
38402 + st->hctsiz_copy.b.xfersize = xfer_len;
38403 + switch (st->hcchar_copy.b.multicnt) {
38404 + case 1:
38405 + st->hctsiz_copy.b.pid = DWC_PID_DATA0;
38406 + break;
38407 + case 2:
38408 + case 3:
38409 + st->hctsiz_copy.b.pid = DWC_PID_MDATA;
38410 + break;
38411 + }
38412 +
38413 + } else {
38414 + st->hctsiz_copy.b.xfersize = nrpackets * st->hcchar_copy.b.mps;
38415 + switch (st->hcchar_copy.b.multicnt) {
38416 + case 1:
38417 + st->hctsiz_copy.b.pid = DWC_PID_DATA0;
38418 + break;
38419 + case 2:
38420 + st->hctsiz_copy.b.pid = DWC_PID_DATA1;
38421 + break;
38422 + case 3:
38423 + st->hctsiz_copy.b.pid = DWC_PID_DATA2;
38424 + break;
38425 + }
38426 + }
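+	/* Worked example of the above: xfer_len = 2500 with mps = 1024 gives
+	 * nrpackets = 3, so an OUT transfer starts with PID MDATA while an IN
+	 * transfer programs xfersize = 3 * 1024 = 3072 and PID DATA2. */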
38427 +
38428 + st->hs_isoc_info.stride = qh->interval;
38429 + st->uframe_sleeps = 0;
38430 +
38431 + fiq_print(FIQDBG_INT, hcd->fiq_state, "FSMQ %01d ", hc->hc_num);
38432 + fiq_print(FIQDBG_INT, hcd->fiq_state, "%08x", st->hcchar_copy.d32);
38433 + fiq_print(FIQDBG_INT, hcd->fiq_state, "%08x", st->hctsiz_copy.d32);
38434 + fiq_print(FIQDBG_INT, hcd->fiq_state, "%08x", st->hcdma_copy.d32);
38435 + hfnum.d32 = DWC_READ_REG32(&hcd->core_if->host_if->host_global_regs->hfnum);
38436 + local_fiq_disable();
38437 + fiq_fsm_spin_lock(&hcd->fiq_state->lock);
38438 + DWC_WRITE_REG32(&hc_regs->hctsiz, st->hctsiz_copy.d32);
38439 + DWC_WRITE_REG32(&hc_regs->hcsplt, st->hcsplt_copy.d32);
38440 + DWC_WRITE_REG32(&hc_regs->hcdma, st->hcdma_copy.d32);
38441 + DWC_WRITE_REG32(&hc_regs->hcchar, st->hcchar_copy.d32);
38442 + DWC_WRITE_REG32(&hc_regs->hcintmsk, st->hcintmsk_copy.d32);
38443 + if (hfnum.b.frrem < PERIODIC_FRREM_BACKOFF) {
38444 + /* Prevent queueing near EOF1. Bad things happen if a periodic
38445 + * split transaction is queued very close to EOF. SOF interrupt handler
38446 + * will wake this channel at the next interrupt.
38447 + */
38448 + st->fsm = FIQ_HS_ISOC_SLEEPING;
38449 + st->uframe_sleeps = 1;
38450 + } else {
38451 + st->fsm = FIQ_HS_ISOC_TURBO;
38452 + st->hcchar_copy.b.chen = 1;
38453 + DWC_WRITE_REG32(&hc_regs->hcchar, st->hcchar_copy.d32);
38454 + }
38455 + mb();
38456 + st->hcchar_copy.b.chen = 0;
38457 + fiq_fsm_spin_unlock(&hcd->fiq_state->lock);
38458 + local_fiq_enable();
38459 + return 0;
38460 +}
38461 +
38462 +
38463 +/**
38464 + * fiq_fsm_queue_split_transaction() - Set up a host channel and FIQ state
38465 + * @hcd: Pointer to the dwc_otg_hcd struct
38466 + * @qh: Pointer to the endpoint's queue head
38467 + *
38468 + * This overrides the dwc_otg driver's normal method of queueing a transaction.
38469 + * Called from dwc_otg_hcd_queue_transactions(), this performs specific setup
38470 + * for the nominated host channel.
38471 + *
38472 + * For periodic transfers, it also peeks at the FIQ state to see if an immediate
38473 + * start is possible. If not, then the FIQ is left to start the transfer.
38474 + */
38475 +int fiq_fsm_queue_split_transaction(dwc_otg_hcd_t *hcd, dwc_otg_qh_t *qh)
38476 +{
38477 + int start_immediate = 1, i;
38478 + hfnum_data_t hfnum;
38479 + dwc_hc_t *hc = qh->channel;
38480 + dwc_otg_hc_regs_t *hc_regs = hcd->core_if->host_if->hc_regs[hc->hc_num];
38481 + /* Program HC registers, setup FIQ_state, examine FIQ if periodic, start transfer (not if uframe 5) */
38482 + int hub_addr, port_addr, frame, uframe;
38483 + struct fiq_channel_state *st = &hcd->fiq_state->channel[hc->hc_num];
38484 +
38485 + /*
38486 + * Non-periodic channel assignments stay in the non_periodic_active queue.
38487 + * Therefore we get repeatedly called until the FIQ's done processing this channel.
38488 + */
38489 + if (qh->channel->xfer_started == 1)
38490 + return 0;
38491 +
38492 + if (st->fsm != FIQ_PASSTHROUGH) {
38493 + pr_warn_ratelimited("%s:%d: Queue called for an active channel\n", __func__, __LINE__);
38494 + return 0;
38495 + }
38496 +
38497 + qh->channel->xfer_started = 1;
38498 +
38499 + st->nr_errors = 0;
38500 +
38501 + st->hcchar_copy.d32 = 0;
38502 + st->hcchar_copy.b.mps = hc->max_packet;
38503 + st->hcchar_copy.b.epdir = hc->ep_is_in;
38504 + st->hcchar_copy.b.devaddr = hc->dev_addr;
38505 + st->hcchar_copy.b.epnum = hc->ep_num;
38506 + st->hcchar_copy.b.eptype = hc->ep_type;
38507 + if (hc->ep_type & 0x1) {
38508 + if (hc->ep_is_in)
38509 + st->hcchar_copy.b.multicnt = 3;
38510 + else
38511 + /* Docs say set this to 1, but driver sets to 0! */
38512 + st->hcchar_copy.b.multicnt = 0;
38513 + } else {
38514 + st->hcchar_copy.b.multicnt = 1;
38515 + st->hcchar_copy.b.oddfrm = 0;
38516 + }
38517 + st->hcchar_copy.b.lspddev = (hc->speed == DWC_OTG_EP_SPEED_LOW) ? 1 : 0;
38518 + /* Enable the channel later as a final register write. */
38519 +
38520 + st->hcsplt_copy.d32 = 0;
38521 + if(qh->do_split) {
38522 + hcd->fops->hub_info(hcd, DWC_CIRCLEQ_FIRST(&qh->qtd_list)->urb->priv, &hub_addr, &port_addr);
38523 + st->hcsplt_copy.b.compsplt = 0;
38524 + st->hcsplt_copy.b.spltena = 1;
38525 + // XACTPOS is for isoc-out only but needs initialising anyway.
38526 + st->hcsplt_copy.b.xactpos = ISOC_XACTPOS_ALL;
38527 + if((qh->ep_type == DWC_OTG_EP_TYPE_ISOC) && (!qh->ep_is_in)) {
38528 +			/* For packet sizes 0 < L <= 188, use ISOC_XACTPOS_ALL.
38529 +			 * For anything longer, start with ISOC_XACTPOS_BEGIN and the FIQ
38530 +			 * will update the transaction position as necessary.
38531 + */
38532 + if (hc->xfer_len > 188) {
38533 + st->hcsplt_copy.b.xactpos = ISOC_XACTPOS_BEGIN;
38534 + }
38535 + }
38536 + st->hcsplt_copy.b.hubaddr = (uint8_t) hub_addr;
38537 + st->hcsplt_copy.b.prtaddr = (uint8_t) port_addr;
38538 + st->hub_addr = hub_addr;
38539 + st->port_addr = port_addr;
38540 + }
38541 +
38542 + st->hctsiz_copy.d32 = 0;
38543 + st->hctsiz_copy.b.dopng = 0;
38544 + st->hctsiz_copy.b.pid = hc->data_pid_start;
38545 +
38546 + if (hc->ep_is_in || (hc->xfer_len > hc->max_packet)) {
38547 + hc->xfer_len = hc->max_packet;
38548 + } else if (!hc->ep_is_in && (hc->xfer_len > 188)) {
38549 + hc->xfer_len = 188;
38550 + }
38551 + st->hctsiz_copy.b.xfersize = hc->xfer_len;
38552 +
38553 + st->hctsiz_copy.b.pktcnt = 1;
38554 +
38555 + if (hc->ep_type & 0x1) {
38556 + /*
38557 + * For potentially multi-packet transfers, must use the DMA bounce buffers. For IN transfers,
38558 + * the DMA address is the address of the first 188byte slot buffer in the bounce buffer array.
38559 + * For multi-packet OUT transfers, we need to copy the data into the bounce buffer array so the FIQ can punt
38560 + * the right address out as necessary. hc->xfer_buff and hc->xfer_len have already been set
38561 + * in assign_and_init_hc(), but this is for the eventual transaction completion only. The FIQ
38562 + * must not touch internal driver state.
38563 + */
38564 + if(!fiq_fsm_setup_periodic_dma(hcd, st, qh)) {
38565 + if (hc->align_buff) {
38566 + st->hcdma_copy.d32 = hc->align_buff;
38567 + } else {
38568 + st->hcdma_copy.d32 = ((unsigned long) hc->xfer_buff & 0xFFFFFFFF);
38569 + }
38570 + }
38571 + } else {
38572 + if (hc->align_buff) {
38573 + st->hcdma_copy.d32 = hc->align_buff;
38574 + } else {
38575 + st->hcdma_copy.d32 = ((unsigned long) hc->xfer_buff & 0xFFFFFFFF);
38576 + }
38577 + }
38578 + /* The FIQ depends upon no other interrupts being enabled except channel halt.
38579 + * Fixup channel interrupt mask. */
38580 + st->hcintmsk_copy.d32 = 0;
38581 + st->hcintmsk_copy.b.chhltd = 1;
38582 + st->hcintmsk_copy.b.ahberr = 1;
38583 +
38584 +	/* Hack courtesy of FreeBSD: apparently forcing Interrupt Split transactions
38585 +	 * to be issued as Control puts the transfer into the non-periodic request
38586 +	 * queue and the hub's non-periodic handler. Makes things a lot easier.
38587 + */
38588 + if ((fiq_fsm_mask & 0x8) && hc->ep_type == UE_INTERRUPT) {
38589 + st->hcchar_copy.b.multicnt = 0;
38590 + st->hcchar_copy.b.oddfrm = 0;
38591 + st->hcchar_copy.b.eptype = UE_CONTROL;
38592 + if (hc->align_buff) {
38593 + st->hcdma_copy.d32 = hc->align_buff;
38594 + } else {
38595 + st->hcdma_copy.d32 = ((unsigned long) hc->xfer_buff & 0xFFFFFFFF);
38596 + }
38597 + }
38598 + DWC_WRITE_REG32(&hc_regs->hcdma, st->hcdma_copy.d32);
38599 + DWC_WRITE_REG32(&hc_regs->hctsiz, st->hctsiz_copy.d32);
38600 + DWC_WRITE_REG32(&hc_regs->hcsplt, st->hcsplt_copy.d32);
38601 + DWC_WRITE_REG32(&hc_regs->hcchar, st->hcchar_copy.d32);
38602 + DWC_WRITE_REG32(&hc_regs->hcintmsk, st->hcintmsk_copy.d32);
38603 +
38604 + local_fiq_disable();
38605 + fiq_fsm_spin_lock(&hcd->fiq_state->lock);
38606 +
38607 + if (hc->ep_type & 0x1) {
38608 + hfnum.d32 = DWC_READ_REG32(&hcd->core_if->host_if->host_global_regs->hfnum);
38609 + frame = (hfnum.b.frnum & ~0x7) >> 3;
38610 + uframe = hfnum.b.frnum & 0x7;
38611 + if (hfnum.b.frrem < PERIODIC_FRREM_BACKOFF) {
38612 + /* Prevent queueing near EOF1. Bad things happen if a periodic
38613 + * split transaction is queued very close to EOF.
38614 + */
38615 + start_immediate = 0;
38616 + } else if (uframe == 5) {
38617 + start_immediate = 0;
38618 + } else if (hc->ep_type == UE_ISOCHRONOUS && !hc->ep_is_in) {
38619 + start_immediate = 0;
38620 + } else if (hc->ep_is_in && fiq_fsm_too_late(hcd->fiq_state, hc->hc_num)) {
38621 + start_immediate = 0;
38622 + } else {
38623 + /* Search through all host channels to determine if a transaction
38624 + * is currently in progress */
38625 + for (i = 0; i < hcd->core_if->core_params->host_channels; i++) {
38626 + if (i == hc->hc_num || hcd->fiq_state->channel[i].fsm == FIQ_PASSTHROUGH)
38627 + continue;
38628 + switch (hcd->fiq_state->channel[i].fsm) {
38629 + /* TT is reserved for channels that are in the middle of a periodic
38630 + * split transaction.
38631 + */
38632 + case FIQ_PER_SSPLIT_STARTED:
38633 + case FIQ_PER_CSPLIT_WAIT:
38634 + case FIQ_PER_CSPLIT_NYET1:
38635 + case FIQ_PER_CSPLIT_POLL:
38636 + case FIQ_PER_ISO_OUT_ACTIVE:
38637 + case FIQ_PER_ISO_OUT_LAST:
38638 + if (hcd->fiq_state->channel[i].hub_addr == hub_addr &&
38639 + hcd->fiq_state->channel[i].port_addr == port_addr) {
38640 + start_immediate = 0;
38641 + }
38642 + break;
38643 + default:
38644 + break;
38645 + }
38646 + if (!start_immediate)
38647 + break;
38648 + }
38649 + }
38650 + }
38651 + if ((fiq_fsm_mask & 0x8) && hc->ep_type == UE_INTERRUPT)
38652 + start_immediate = 1;
38653 +
38654 + fiq_print(FIQDBG_INT, hcd->fiq_state, "FSMQ %01d %01d", hc->hc_num, start_immediate);
38655 + fiq_print(FIQDBG_INT, hcd->fiq_state, "%08d", hfnum.b.frrem);
38656 + //fiq_print(FIQDBG_INT, hcd->fiq_state, "H:%02dP:%02d", hub_addr, port_addr);
38657 + //fiq_print(FIQDBG_INT, hcd->fiq_state, "%08x", st->hctsiz_copy.d32);
38658 + //fiq_print(FIQDBG_INT, hcd->fiq_state, "%08x", st->hcdma_copy.d32);
38659 + switch (hc->ep_type) {
38660 + case UE_CONTROL:
38661 + case UE_BULK:
38662 + if (fiq_fsm_np_tt_contended(hcd, qh)) {
38663 + st->fsm = FIQ_NP_SSPLIT_PENDING;
38664 + start_immediate = 0;
38665 + } else {
38666 + st->fsm = FIQ_NP_SSPLIT_STARTED;
38667 + }
38668 + break;
38669 + case UE_ISOCHRONOUS:
38670 + if (hc->ep_is_in) {
38671 + if (start_immediate) {
38672 + st->fsm = FIQ_PER_SSPLIT_STARTED;
38673 + } else {
38674 + st->fsm = FIQ_PER_SSPLIT_QUEUED;
38675 + }
38676 + } else {
38677 + if (start_immediate) {
38678 + /* Single-isoc OUT packets don't require FIQ involvement */
38679 + if (st->nrpackets == 1) {
38680 + st->fsm = FIQ_PER_ISO_OUT_LAST;
38681 + } else {
38682 + st->fsm = FIQ_PER_ISO_OUT_ACTIVE;
38683 + }
38684 + } else {
38685 + st->fsm = FIQ_PER_ISO_OUT_PENDING;
38686 + }
38687 + }
38688 + break;
38689 + case UE_INTERRUPT:
38690 + if (fiq_fsm_mask & 0x8) {
38691 + if (fiq_fsm_np_tt_contended(hcd, qh)) {
38692 + st->fsm = FIQ_NP_SSPLIT_PENDING;
38693 + start_immediate = 0;
38694 + } else {
38695 + st->fsm = FIQ_NP_SSPLIT_STARTED;
38696 + }
38697 + } else if (start_immediate) {
38698 + st->fsm = FIQ_PER_SSPLIT_STARTED;
38699 + } else {
38700 + st->fsm = FIQ_PER_SSPLIT_QUEUED;
38701 + }
38702 + default:
38703 + break;
38704 + }
38705 + if (start_immediate) {
38706 + /* Set the oddfrm bit as close as possible to actual queueing */
38707 + frame = dwc_otg_hcd_get_frame_number(hcd);
38708 + st->expected_uframe = (frame + 1) & 0x3FFF;
38709 + st->hcchar_copy.b.oddfrm = (frame & 0x1) ? 0 : 1;
38710 + st->hcchar_copy.b.chen = 1;
38711 + DWC_WRITE_REG32(&hc_regs->hcchar, st->hcchar_copy.d32);
38712 + }
38713 + mb();
38714 + fiq_fsm_spin_unlock(&hcd->fiq_state->lock);
38715 + local_fiq_enable();
38716 + return 0;
38717 +}
38718 +
38719 +
38720 +/**
38721 + * This function selects transactions from the HCD transfer schedule and
38722 + * assigns them to available host channels. It is called from HCD interrupt
38723 + * handler functions.
38724 + *
38725 + * @param hcd The HCD state structure.
38726 + *
38727 + * @return The types of new transactions that were assigned to host channels.
38728 + */
38729 +dwc_otg_transaction_type_e dwc_otg_hcd_select_transactions(dwc_otg_hcd_t * hcd)
38730 +{
38731 + dwc_list_link_t *qh_ptr;
38732 + dwc_otg_qh_t *qh;
38733 + int num_channels;
38734 + dwc_otg_transaction_type_e ret_val = DWC_OTG_TRANSACTION_NONE;
38735 +
38736 +#ifdef DEBUG_HOST_CHANNELS
38737 + last_sel_trans_num_per_scheduled = 0;
38738 + last_sel_trans_num_nonper_scheduled = 0;
38739 + last_sel_trans_num_avail_hc_at_start = hcd->available_host_channels;
38740 +#endif /* DEBUG_HOST_CHANNELS */
38741 +
38742 + /* Process entries in the periodic ready list. */
38743 + qh_ptr = DWC_LIST_FIRST(&hcd->periodic_sched_ready);
38744 +
38745 + while (qh_ptr != &hcd->periodic_sched_ready &&
38746 + !DWC_CIRCLEQ_EMPTY(&hcd->free_hc_list)) {
38747 +
38748 + qh = DWC_LIST_ENTRY(qh_ptr, dwc_otg_qh_t, qh_list_entry);
38749 +
38750 + if (microframe_schedule) {
38751 +			// Make sure we leave one channel for non-periodic transactions.
38752 + if (hcd->available_host_channels <= 1) {
38753 + break;
38754 + }
38755 + hcd->available_host_channels--;
38756 +#ifdef DEBUG_HOST_CHANNELS
38757 + last_sel_trans_num_per_scheduled++;
38758 +#endif /* DEBUG_HOST_CHANNELS */
38759 + }
38760 + qh = DWC_LIST_ENTRY(qh_ptr, dwc_otg_qh_t, qh_list_entry);
38761 + assign_and_init_hc(hcd, qh);
38762 +
38763 + /*
38764 + * Move the QH from the periodic ready schedule to the
38765 + * periodic assigned schedule.
38766 + */
38767 + qh_ptr = DWC_LIST_NEXT(qh_ptr);
38768 + DWC_LIST_MOVE_HEAD(&hcd->periodic_sched_assigned,
38769 + &qh->qh_list_entry);
38770 + }
38771 +
38772 + /*
38773 + * Process entries in the inactive portion of the non-periodic
38774 + * schedule. Some free host channels may not be used if they are
38775 + * reserved for periodic transfers.
38776 + */
38777 + qh_ptr = hcd->non_periodic_sched_inactive.next;
38778 + num_channels = hcd->core_if->core_params->host_channels;
38779 + while (qh_ptr != &hcd->non_periodic_sched_inactive &&
38780 + (microframe_schedule || hcd->non_periodic_channels <
38781 + num_channels - hcd->periodic_channels) &&
38782 + !DWC_CIRCLEQ_EMPTY(&hcd->free_hc_list)) {
38783 +
38784 + qh = DWC_LIST_ENTRY(qh_ptr, dwc_otg_qh_t, qh_list_entry);
38785 + /*
38786 +		 * Check whether this is a NAK'd retransmit; if so, skip it for now.
38787 +		 * We hold off on bulk retransmissions to reduce NAK interrupt overhead
38788 +		 * for full-speed cheeky devices that just hold off using NAKs.
38789 + */
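+		/* For example: a bulk split endpoint that NAKed in frame N is not
+		 * retried until frame N + nak_holdoff, while interrupt/control splits
+		 * use a fixed holdoff of 8 frames (see dwc_frame_num_inc() below). */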
38790 + if (fiq_enable && nak_holdoff && qh->do_split) {
38791 + if (qh->nak_frame != 0xffff) {
38792 + uint16_t next_frame = dwc_frame_num_inc(qh->nak_frame, (qh->ep_type == UE_BULK) ? nak_holdoff : 8);
38793 + uint16_t frame = dwc_otg_hcd_get_frame_number(hcd);
38794 + if (dwc_frame_num_le(frame, next_frame)) {
38795 + if(dwc_frame_num_le(next_frame, hcd->fiq_state->next_sched_frame)) {
38796 + hcd->fiq_state->next_sched_frame = next_frame;
38797 + }
38798 + qh_ptr = DWC_LIST_NEXT(qh_ptr);
38799 + continue;
38800 + } else {
38801 + qh->nak_frame = 0xFFFF;
38802 + }
38803 + }
38804 + }
38805 +
38806 + if (microframe_schedule) {
38807 + if (hcd->available_host_channels < 1) {
38808 + break;
38809 + }
38810 + hcd->available_host_channels--;
38811 +#ifdef DEBUG_HOST_CHANNELS
38812 + last_sel_trans_num_nonper_scheduled++;
38813 +#endif /* DEBUG_HOST_CHANNELS */
38814 + }
38815 +
38816 + assign_and_init_hc(hcd, qh);
38817 +
38818 + /*
38819 + * Move the QH from the non-periodic inactive schedule to the
38820 + * non-periodic active schedule.
38821 + */
38822 + qh_ptr = DWC_LIST_NEXT(qh_ptr);
38823 + DWC_LIST_MOVE_HEAD(&hcd->non_periodic_sched_active,
38824 + &qh->qh_list_entry);
38825 +
38826 + if (!microframe_schedule)
38827 + hcd->non_periodic_channels++;
38828 + }
38829 + /* we moved a non-periodic QH to the active schedule. If the inactive queue is empty,
38830 + * stop the FIQ from kicking us. We could potentially still have elements here if we
38831 + * ran out of host channels.
38832 + */
38833 + if (fiq_enable) {
38834 + if (DWC_LIST_EMPTY(&hcd->non_periodic_sched_inactive)) {
38835 + hcd->fiq_state->kick_np_queues = 0;
38836 + } else {
38837 + /* For each entry remaining in the NP inactive queue,
38838 + * if this a NAK'd retransmit then don't set the kick flag.
38839 + */
38840 + if(nak_holdoff) {
38841 + DWC_LIST_FOREACH(qh_ptr, &hcd->non_periodic_sched_inactive) {
38842 + qh = DWC_LIST_ENTRY(qh_ptr, dwc_otg_qh_t, qh_list_entry);
38843 + if (qh->nak_frame == 0xFFFF) {
38844 + hcd->fiq_state->kick_np_queues = 1;
38845 + }
38846 + }
38847 + }
38848 + }
38849 + }
38850 + if(!DWC_LIST_EMPTY(&hcd->periodic_sched_assigned))
38851 + ret_val |= DWC_OTG_TRANSACTION_PERIODIC;
38852 +
38853 + if(!DWC_LIST_EMPTY(&hcd->non_periodic_sched_active))
38854 + ret_val |= DWC_OTG_TRANSACTION_NON_PERIODIC;
38855 +
38856 +
38857 +#ifdef DEBUG_HOST_CHANNELS
38858 + last_sel_trans_num_avail_hc_at_end = hcd->available_host_channels;
38859 +#endif /* DEBUG_HOST_CHANNELS */
38860 + return ret_val;
38861 +}
38862 +
38863 +/**
38864 + * Attempts to queue a single transaction request for a host channel
38865 + * associated with either a periodic or non-periodic transfer. This function
38866 + * assumes that there is space available in the appropriate request queue. For
38867 + * an OUT transfer or SETUP transaction in Slave mode, it checks whether space
38868 + * is available in the appropriate Tx FIFO.
38869 + *
38870 + * @param hcd The HCD state structure.
38871 + * @param hc Host channel descriptor associated with either a periodic or
38872 + * non-periodic transfer.
38873 + * @param fifo_dwords_avail Number of DWORDs available in the periodic Tx
38874 + * FIFO for periodic transfers or the non-periodic Tx FIFO for non-periodic
38875 + * transfers.
38876 + *
38877 + * @return 1 if a request is queued and more requests may be needed to
38878 + * complete the transfer, 0 if no more requests are required for this
38879 + * transfer, -1 if there is insufficient space in the Tx FIFO.
38880 + */
38881 +static int queue_transaction(dwc_otg_hcd_t * hcd,
38882 + dwc_hc_t * hc, uint16_t fifo_dwords_avail)
38883 +{
38884 + int retval;
38885 +
38886 + if (hcd->core_if->dma_enable) {
38887 + if (hcd->core_if->dma_desc_enable) {
38888 + if (!hc->xfer_started
38889 + || (hc->ep_type == DWC_OTG_EP_TYPE_ISOC)) {
38890 + dwc_otg_hcd_start_xfer_ddma(hcd, hc->qh);
38891 + hc->qh->ping_state = 0;
38892 + }
38893 + } else if (!hc->xfer_started) {
38894 + if (fiq_fsm_enable && hc->error_state) {
38895 + hcd->fiq_state->channel[hc->hc_num].nr_errors =
38896 + DWC_CIRCLEQ_FIRST(&hc->qh->qtd_list)->error_count;
38897 + hcd->fiq_state->channel[hc->hc_num].fsm =
38898 + FIQ_PASSTHROUGH_ERRORSTATE;
38899 + }
38900 + dwc_otg_hc_start_transfer(hcd->core_if, hc);
38901 + hc->qh->ping_state = 0;
38902 + }
38903 + retval = 0;
38904 + } else if (hc->halt_pending) {
38905 + /* Don't queue a request if the channel has been halted. */
38906 + retval = 0;
38907 + } else if (hc->halt_on_queue) {
38908 + dwc_otg_hc_halt(hcd->core_if, hc, hc->halt_status);
38909 + retval = 0;
38910 + } else if (hc->do_ping) {
38911 + if (!hc->xfer_started) {
38912 + dwc_otg_hc_start_transfer(hcd->core_if, hc);
38913 + }
38914 + retval = 0;
38915 + } else if (!hc->ep_is_in || hc->data_pid_start == DWC_OTG_HC_PID_SETUP) {
38916 + if ((fifo_dwords_avail * 4) >= hc->max_packet) {
38917 + if (!hc->xfer_started) {
38918 + dwc_otg_hc_start_transfer(hcd->core_if, hc);
38919 + retval = 1;
38920 + } else {
38921 + retval =
38922 + dwc_otg_hc_continue_transfer(hcd->core_if,
38923 + hc);
38924 + }
38925 + } else {
38926 + retval = -1;
38927 + }
38928 + } else {
38929 + if (!hc->xfer_started) {
38930 + dwc_otg_hc_start_transfer(hcd->core_if, hc);
38931 + retval = 1;
38932 + } else {
38933 + retval = dwc_otg_hc_continue_transfer(hcd->core_if, hc);
38934 + }
38935 + }
38936 +
38937 + return retval;
38938 +}
38939 +
38940 +/**
38941 + * Processes periodic channels for the next frame and queues transactions for
38942 + * these channels to the DWC_otg controller. After queueing transactions, the
38943 + * Periodic Tx FIFO Empty interrupt is enabled if there are more transactions
38944 + * to queue as Periodic Tx FIFO or request queue space becomes available.
38945 + * Otherwise, the Periodic Tx FIFO Empty interrupt is disabled.
38946 + */
38947 +static void process_periodic_channels(dwc_otg_hcd_t * hcd)
38948 +{
38949 + hptxsts_data_t tx_status;
38950 + dwc_list_link_t *qh_ptr;
38951 + dwc_otg_qh_t *qh;
38952 + int status = 0;
38953 + int no_queue_space = 0;
38954 + int no_fifo_space = 0;
38955 +
38956 + dwc_otg_host_global_regs_t *host_regs;
38957 + host_regs = hcd->core_if->host_if->host_global_regs;
38958 +
38959 + DWC_DEBUGPL(DBG_HCDV, "Queue periodic transactions\n");
38960 +#ifdef DEBUG
38961 + tx_status.d32 = DWC_READ_REG32(&host_regs->hptxsts);
38962 + DWC_DEBUGPL(DBG_HCDV,
38963 + " P Tx Req Queue Space Avail (before queue): %d\n",
38964 + tx_status.b.ptxqspcavail);
38965 + DWC_DEBUGPL(DBG_HCDV, " P Tx FIFO Space Avail (before queue): %d\n",
38966 + tx_status.b.ptxfspcavail);
38967 +#endif
38968 +
38969 + qh_ptr = hcd->periodic_sched_assigned.next;
38970 + while (qh_ptr != &hcd->periodic_sched_assigned) {
38971 + tx_status.d32 = DWC_READ_REG32(&host_regs->hptxsts);
38972 + if (tx_status.b.ptxqspcavail == 0) {
38973 + no_queue_space = 1;
38974 + break;
38975 + }
38976 +
38977 + qh = DWC_LIST_ENTRY(qh_ptr, dwc_otg_qh_t, qh_list_entry);
38978 +
38979 +		// Do not send a split start transaction any later than microframe .6
38980 +		// Note: we have to schedule a periodic in .5 to make it go out in .6
38981 + if(fiq_fsm_enable && qh->do_split && ((dwc_otg_hcd_get_frame_number(hcd) + 1) & 7) > 6)
38982 + {
38983 + qh_ptr = qh_ptr->next;
38984 + hcd->fiq_state->next_sched_frame = dwc_otg_hcd_get_frame_number(hcd) | 7;
38985 + continue;
38986 + }
38987 +
38988 + if (fiq_fsm_enable && fiq_fsm_transaction_suitable(hcd, qh)) {
38989 + if (qh->do_split)
38990 + fiq_fsm_queue_split_transaction(hcd, qh);
38991 + else
38992 + fiq_fsm_queue_isoc_transaction(hcd, qh);
38993 + } else {
38994 +
38995 + /*
38996 + * Set a flag if we're queueing high-bandwidth in slave mode.
38997 +			 * The flag prevents any halts from getting into the request queue in
38998 +			 * the middle of multiple high-bandwidth packets being queued.
38999 + */
39000 + if (!hcd->core_if->dma_enable && qh->channel->multi_count > 1) {
39001 + hcd->core_if->queuing_high_bandwidth = 1;
39002 + }
39003 + status = queue_transaction(hcd, qh->channel,
39004 + tx_status.b.ptxfspcavail);
39005 + if (status < 0) {
39006 + no_fifo_space = 1;
39007 + break;
39008 + }
39009 + }
39010 +
39011 + /*
39012 + * In Slave mode, stay on the current transfer until there is
39013 + * nothing more to do or the high-bandwidth request count is
39014 + * reached. In DMA mode, only need to queue one request. The
39015 + * controller automatically handles multiple packets for
39016 + * high-bandwidth transfers.
39017 + */
39018 + if (hcd->core_if->dma_enable || status == 0 ||
39019 + qh->channel->requests == qh->channel->multi_count) {
39020 + qh_ptr = qh_ptr->next;
39021 + /*
39022 + * Move the QH from the periodic assigned schedule to
39023 + * the periodic queued schedule.
39024 + */
39025 + DWC_LIST_MOVE_HEAD(&hcd->periodic_sched_queued,
39026 + &qh->qh_list_entry);
39027 +
39028 + /* done queuing high bandwidth */
39029 + hcd->core_if->queuing_high_bandwidth = 0;
39030 + }
39031 + }
39032 +
39033 + if (!hcd->core_if->dma_enable) {
39034 + dwc_otg_core_global_regs_t *global_regs;
39035 + gintmsk_data_t intr_mask = {.d32 = 0 };
39036 +
39037 + global_regs = hcd->core_if->core_global_regs;
39038 + intr_mask.b.ptxfempty = 1;
39039 +#ifdef DEBUG
39040 + tx_status.d32 = DWC_READ_REG32(&host_regs->hptxsts);
39041 + DWC_DEBUGPL(DBG_HCDV,
39042 + " P Tx Req Queue Space Avail (after queue): %d\n",
39043 + tx_status.b.ptxqspcavail);
39044 + DWC_DEBUGPL(DBG_HCDV,
39045 + " P Tx FIFO Space Avail (after queue): %d\n",
39046 + tx_status.b.ptxfspcavail);
39047 +#endif
39048 + if (!DWC_LIST_EMPTY(&hcd->periodic_sched_assigned) ||
39049 + no_queue_space || no_fifo_space) {
39050 + /*
39051 + * May need to queue more transactions as the request
39052 + * queue or Tx FIFO empties. Enable the periodic Tx
39053 + * FIFO empty interrupt. (Always use the half-empty
39054 + * level to ensure that new requests are loaded as
39055 + * soon as possible.)
39056 + */
39057 + DWC_MODIFY_REG32(&global_regs->gintmsk, 0,
39058 + intr_mask.d32);
39059 + } else {
39060 + /*
39061 + * Disable the Tx FIFO empty interrupt since there are
39062 + * no more transactions that need to be queued right
39063 + * now. This function is called from interrupt
39064 + * handlers to queue more transactions as transfer
39065 + * states change.
39066 + */
39067 + DWC_MODIFY_REG32(&global_regs->gintmsk, intr_mask.d32,
39068 + 0);
39069 + }
39070 + }
39071 +}
39072 +
39073 +/**
39074 + * Processes active non-periodic channels and queues transactions for these
39075 + * channels to the DWC_otg controller. After queueing transactions, the NP Tx
39076 + * FIFO Empty interrupt is enabled if there are more transactions to queue as
39077 + * NP Tx FIFO or request queue space becomes available. Otherwise, the NP Tx
39078 + * FIFO Empty interrupt is disabled.
39079 + */
39080 +static void process_non_periodic_channels(dwc_otg_hcd_t * hcd)
39081 +{
39082 + gnptxsts_data_t tx_status;
39083 + dwc_list_link_t *orig_qh_ptr;
39084 + dwc_otg_qh_t *qh;
39085 + int status;
39086 + int no_queue_space = 0;
39087 + int no_fifo_space = 0;
39088 + int more_to_do = 0;
39089 +
39090 + dwc_otg_core_global_regs_t *global_regs =
39091 + hcd->core_if->core_global_regs;
39092 +
39093 + DWC_DEBUGPL(DBG_HCDV, "Queue non-periodic transactions\n");
39094 +#ifdef DEBUG
39095 + tx_status.d32 = DWC_READ_REG32(&global_regs->gnptxsts);
39096 + DWC_DEBUGPL(DBG_HCDV,
39097 + " NP Tx Req Queue Space Avail (before queue): %d\n",
39098 + tx_status.b.nptxqspcavail);
39099 + DWC_DEBUGPL(DBG_HCDV, " NP Tx FIFO Space Avail (before queue): %d\n",
39100 + tx_status.b.nptxfspcavail);
39101 +#endif
39102 + /*
39103 + * Keep track of the starting point. Skip over the start-of-list
39104 + * entry.
39105 + */
39106 + if (hcd->non_periodic_qh_ptr == &hcd->non_periodic_sched_active) {
39107 + hcd->non_periodic_qh_ptr = hcd->non_periodic_qh_ptr->next;
39108 + }
39109 + orig_qh_ptr = hcd->non_periodic_qh_ptr;
39110 +
39111 + /*
39112 + * Process once through the active list or until no more space is
39113 + * available in the request queue or the Tx FIFO.
39114 + */
39115 + do {
39116 + tx_status.d32 = DWC_READ_REG32(&global_regs->gnptxsts);
39117 + if (!hcd->core_if->dma_enable && tx_status.b.nptxqspcavail == 0) {
39118 + no_queue_space = 1;
39119 + break;
39120 + }
39121 +
39122 + qh = DWC_LIST_ENTRY(hcd->non_periodic_qh_ptr, dwc_otg_qh_t,
39123 + qh_list_entry);
39124 +
39125 + if(fiq_fsm_enable && fiq_fsm_transaction_suitable(hcd, qh)) {
39126 + fiq_fsm_queue_split_transaction(hcd, qh);
39127 + } else {
39128 + status = queue_transaction(hcd, qh->channel,
39129 + tx_status.b.nptxfspcavail);
39130 +
39131 + if (status > 0) {
39132 + more_to_do = 1;
39133 + } else if (status < 0) {
39134 + no_fifo_space = 1;
39135 + break;
39136 + }
39137 + }
39138 + /* Advance to next QH, skipping start-of-list entry. */
39139 + hcd->non_periodic_qh_ptr = hcd->non_periodic_qh_ptr->next;
39140 + if (hcd->non_periodic_qh_ptr == &hcd->non_periodic_sched_active) {
39141 + hcd->non_periodic_qh_ptr =
39142 + hcd->non_periodic_qh_ptr->next;
39143 + }
39144 +
39145 + } while (hcd->non_periodic_qh_ptr != orig_qh_ptr);
39146 +
39147 + if (!hcd->core_if->dma_enable) {
39148 + gintmsk_data_t intr_mask = {.d32 = 0 };
39149 + intr_mask.b.nptxfempty = 1;
39150 +
39151 +#ifdef DEBUG
39152 + tx_status.d32 = DWC_READ_REG32(&global_regs->gnptxsts);
39153 + DWC_DEBUGPL(DBG_HCDV,
39154 + " NP Tx Req Queue Space Avail (after queue): %d\n",
39155 + tx_status.b.nptxqspcavail);
39156 + DWC_DEBUGPL(DBG_HCDV,
39157 + " NP Tx FIFO Space Avail (after queue): %d\n",
39158 + tx_status.b.nptxfspcavail);
39159 +#endif
39160 + if (more_to_do || no_queue_space || no_fifo_space) {
39161 + /*
39162 + * May need to queue more transactions as the request
39163 + * queue or Tx FIFO empties. Enable the non-periodic
39164 + * Tx FIFO empty interrupt. (Always use the half-empty
39165 + * level to ensure that new requests are loaded as
39166 + * soon as possible.)
39167 + */
39168 + DWC_MODIFY_REG32(&global_regs->gintmsk, 0,
39169 + intr_mask.d32);
39170 + } else {
39171 + /*
39172 + * Disable the Tx FIFO empty interrupt since there are
39173 + * no more transactions that need to be queued right
39174 + * now. This function is called from interrupt
39175 + * handlers to queue more transactions as transfer
39176 + * states change.
39177 + */
39178 + DWC_MODIFY_REG32(&global_regs->gintmsk, intr_mask.d32,
39179 + 0);
39180 + }
39181 + }
39182 +}
39183 +
39184 +/**
39185 + * This function processes the currently active host channels and queues
39186 + * transactions for these channels to the DWC_otg controller. It is called
39187 + * from HCD interrupt handler functions.
39188 + *
39189 + * @param hcd The HCD state structure.
39190 + * @param tr_type The type(s) of transactions to queue (non-periodic,
39191 + * periodic, or both).
39192 + */
39193 +void dwc_otg_hcd_queue_transactions(dwc_otg_hcd_t * hcd,
39194 + dwc_otg_transaction_type_e tr_type)
39195 +{
39196 +#ifdef DEBUG_SOF
39197 + DWC_DEBUGPL(DBG_HCD, "Queue Transactions\n");
39198 +#endif
39199 + /* Process host channels associated with periodic transfers. */
39200 + if ((tr_type == DWC_OTG_TRANSACTION_PERIODIC ||
39201 + tr_type == DWC_OTG_TRANSACTION_ALL) &&
39202 + !DWC_LIST_EMPTY(&hcd->periodic_sched_assigned)) {
39203 +
39204 + process_periodic_channels(hcd);
39205 + }
39206 +
39207 + /* Process host channels associated with non-periodic transfers. */
39208 + if (tr_type == DWC_OTG_TRANSACTION_NON_PERIODIC ||
39209 + tr_type == DWC_OTG_TRANSACTION_ALL) {
39210 + if (!DWC_LIST_EMPTY(&hcd->non_periodic_sched_active)) {
39211 + process_non_periodic_channels(hcd);
39212 + } else {
39213 + /*
39214 + * Ensure NP Tx FIFO empty interrupt is disabled when
39215 + * there are no non-periodic transfers to process.
39216 + */
39217 + gintmsk_data_t gintmsk = {.d32 = 0 };
39218 + gintmsk.b.nptxfempty = 1;
39219 +
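      +			/* Take the FIQ state lock with the FIQ disabled so this
      +			 * read-modify-write of GINTMSK cannot race with the FIQ
      +			 * handler. */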
39220 + if (fiq_enable) {
39221 + local_fiq_disable();
39222 + fiq_fsm_spin_lock(&hcd->fiq_state->lock);
39223 + DWC_MODIFY_REG32(&hcd->core_if->core_global_regs->gintmsk, gintmsk.d32, 0);
39224 + fiq_fsm_spin_unlock(&hcd->fiq_state->lock);
39225 + local_fiq_enable();
39226 + } else {
39227 + DWC_MODIFY_REG32(&hcd->core_if->core_global_regs->gintmsk, gintmsk.d32, 0);
39228 + }
39229 + }
39230 + }
39231 +}
39232 +
39233 +#ifdef DWC_HS_ELECT_TST
39234 +/*
39235 + * Quick and dirty hack to implement the HS Electrical Test
39236 + * SINGLE_STEP_GET_DEVICE_DESCRIPTOR feature.
39237 + *
39238 + * This code was copied from our userspace app "hset". It sends a
39239 + * Get Device Descriptor control sequence in two parts, first the
39240 + * Setup packet by itself, followed some time later by the In and
39241 + * Ack packets. Rather than trying to figure out how to add this
39242 + * functionality to the normal driver code, we just hijack the
39243 + * hardware, using these two function to drive the hardware
39244 + * hardware, using these two functions to drive the hardware
39245 + */
39246 +
39247 +static dwc_otg_core_global_regs_t *global_regs;
39248 +static dwc_otg_host_global_regs_t *hc_global_regs;
39249 +static dwc_otg_hc_regs_t *hc_regs;
39250 +static uint32_t *data_fifo;
39251 +
39252 +static void do_setup(void)
39253 +{
39254 + gintsts_data_t gintsts;
39255 + hctsiz_data_t hctsiz;
39256 + hcchar_data_t hcchar;
39257 + haint_data_t haint;
39258 + hcint_data_t hcint;
39259 +
39260 + /* Enable HAINTs */
39261 + DWC_WRITE_REG32(&hc_global_regs->haintmsk, 0x0001);
39262 +
39263 + /* Enable HCINTs */
39264 + DWC_WRITE_REG32(&hc_regs->hcintmsk, 0x04a3);
39265 +
39266 + /* Read GINTSTS */
39267 + gintsts.d32 = DWC_READ_REG32(&global_regs->gintsts);
39268 +
39269 + /* Read HAINT */
39270 + haint.d32 = DWC_READ_REG32(&hc_global_regs->haint);
39271 +
39272 + /* Read HCINT */
39273 + hcint.d32 = DWC_READ_REG32(&hc_regs->hcint);
39274 +
39275 + /* Read HCCHAR */
39276 + hcchar.d32 = DWC_READ_REG32(&hc_regs->hcchar);
39277 +
39278 + /* Clear HCINT */
39279 + DWC_WRITE_REG32(&hc_regs->hcint, hcint.d32);
39280 +
39281 + /* Clear HAINT */
39282 + DWC_WRITE_REG32(&hc_global_regs->haint, haint.d32);
39283 +
39284 + /* Clear GINTSTS */
39285 + DWC_WRITE_REG32(&global_regs->gintsts, gintsts.d32);
39286 +
39287 + /* Read GINTSTS */
39288 + gintsts.d32 = DWC_READ_REG32(&global_regs->gintsts);
39289 +
39290 + /*
39291 + * Send Setup packet (Get Device Descriptor)
39292 + */
39293 +
39294 + /* Make sure channel is disabled */
39295 + hcchar.d32 = DWC_READ_REG32(&hc_regs->hcchar);
39296 + if (hcchar.b.chen) {
39297 + hcchar.b.chdis = 1;
39298 +// hcchar.b.chen = 1;
39299 + DWC_WRITE_REG32(&hc_regs->hcchar, hcchar.d32);
39300 + //sleep(1);
39301 + dwc_mdelay(1000);
39302 +
39303 + /* Read GINTSTS */
39304 + gintsts.d32 = DWC_READ_REG32(&global_regs->gintsts);
39305 +
39306 + /* Read HAINT */
39307 + haint.d32 = DWC_READ_REG32(&hc_global_regs->haint);
39308 +
39309 + /* Read HCINT */
39310 + hcint.d32 = DWC_READ_REG32(&hc_regs->hcint);
39311 +
39312 + /* Read HCCHAR */
39313 + hcchar.d32 = DWC_READ_REG32(&hc_regs->hcchar);
39314 +
39315 + /* Clear HCINT */
39316 + DWC_WRITE_REG32(&hc_regs->hcint, hcint.d32);
39317 +
39318 + /* Clear HAINT */
39319 + DWC_WRITE_REG32(&hc_global_regs->haint, haint.d32);
39320 +
39321 + /* Clear GINTSTS */
39322 + DWC_WRITE_REG32(&global_regs->gintsts, gintsts.d32);
39323 +
39324 + hcchar.d32 = DWC_READ_REG32(&hc_regs->hcchar);
39325 + }
39326 +
39327 + /* Set HCTSIZ */
39328 + hctsiz.d32 = 0;
39329 + hctsiz.b.xfersize = 8;
39330 + hctsiz.b.pktcnt = 1;
39331 + hctsiz.b.pid = DWC_OTG_HC_PID_SETUP;
39332 + DWC_WRITE_REG32(&hc_regs->hctsiz, hctsiz.d32);
39333 +
39334 + /* Set HCCHAR */
39335 + hcchar.d32 = DWC_READ_REG32(&hc_regs->hcchar);
39336 + hcchar.b.eptype = DWC_OTG_EP_TYPE_CONTROL;
39337 + hcchar.b.epdir = 0;
39338 + hcchar.b.epnum = 0;
39339 + hcchar.b.mps = 8;
39340 + hcchar.b.chen = 1;
39341 + DWC_WRITE_REG32(&hc_regs->hcchar, hcchar.d32);
39342 +
39343 + /* Fill FIFO with Setup data for Get Device Descriptor */
39344 + data_fifo = (uint32_t *) ((char *)global_regs + 0x1000);
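      +	/* The two words below form the 8-byte SETUP packet, little-endian:
      +	 * bmRequestType=0x80, bRequest=0x06 (GET_DESCRIPTOR), wValue=0x0100
      +	 * (device descriptor), wIndex=0x0000, wLength=0x0008. */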
39345 + DWC_WRITE_REG32(data_fifo++, 0x01000680);
39346 + DWC_WRITE_REG32(data_fifo++, 0x00080000);
39347 +
39348 + gintsts.d32 = DWC_READ_REG32(&global_regs->gintsts);
39349 +
39350 + /* Wait for host channel interrupt */
39351 + do {
39352 + gintsts.d32 = DWC_READ_REG32(&global_regs->gintsts);
39353 + } while (gintsts.b.hcintr == 0);
39354 +
39355 + /* Disable HCINTs */
39356 + DWC_WRITE_REG32(&hc_regs->hcintmsk, 0x0000);
39357 +
39358 + /* Disable HAINTs */
39359 + DWC_WRITE_REG32(&hc_global_regs->haintmsk, 0x0000);
39360 +
39361 + /* Read HAINT */
39362 + haint.d32 = DWC_READ_REG32(&hc_global_regs->haint);
39363 +
39364 + /* Read HCINT */
39365 + hcint.d32 = DWC_READ_REG32(&hc_regs->hcint);
39366 +
39367 + /* Read HCCHAR */
39368 + hcchar.d32 = DWC_READ_REG32(&hc_regs->hcchar);
39369 +
39370 + /* Clear HCINT */
39371 + DWC_WRITE_REG32(&hc_regs->hcint, hcint.d32);
39372 +
39373 + /* Clear HAINT */
39374 + DWC_WRITE_REG32(&hc_global_regs->haint, haint.d32);
39375 +
39376 + /* Clear GINTSTS */
39377 + DWC_WRITE_REG32(&global_regs->gintsts, gintsts.d32);
39378 +
39379 + /* Read GINTSTS */
39380 + gintsts.d32 = DWC_READ_REG32(&global_regs->gintsts);
39381 +}
39382 +
39383 +static void do_in_ack(void)
39384 +{
39385 + gintsts_data_t gintsts;
39386 + hctsiz_data_t hctsiz;
39387 + hcchar_data_t hcchar;
39388 + haint_data_t haint;
39389 + hcint_data_t hcint;
39390 + host_grxsts_data_t grxsts;
39391 +
39392 + /* Enable HAINTs */
39393 + DWC_WRITE_REG32(&hc_global_regs->haintmsk, 0x0001);
39394 +
39395 + /* Enable HCINTs */
39396 + DWC_WRITE_REG32(&hc_regs->hcintmsk, 0x04a3);
39397 +
39398 + /* Read GINTSTS */
39399 + gintsts.d32 = DWC_READ_REG32(&global_regs->gintsts);
39400 +
39401 + /* Read HAINT */
39402 + haint.d32 = DWC_READ_REG32(&hc_global_regs->haint);
39403 +
39404 + /* Read HCINT */
39405 + hcint.d32 = DWC_READ_REG32(&hc_regs->hcint);
39406 +
39407 + /* Read HCCHAR */
39408 + hcchar.d32 = DWC_READ_REG32(&hc_regs->hcchar);
39409 +
39410 + /* Clear HCINT */
39411 + DWC_WRITE_REG32(&hc_regs->hcint, hcint.d32);
39412 +
39413 + /* Clear HAINT */
39414 + DWC_WRITE_REG32(&hc_global_regs->haint, haint.d32);
39415 +
39416 + /* Clear GINTSTS */
39417 + DWC_WRITE_REG32(&global_regs->gintsts, gintsts.d32);
39418 +
39419 + /* Read GINTSTS */
39420 + gintsts.d32 = DWC_READ_REG32(&global_regs->gintsts);
39421 +
39422 + /*
39423 + * Receive Control In packet
39424 + */
39425 +
39426 + /* Make sure channel is disabled */
39427 + hcchar.d32 = DWC_READ_REG32(&hc_regs->hcchar);
39428 + if (hcchar.b.chen) {
39429 + hcchar.b.chdis = 1;
39430 + hcchar.b.chen = 1;
39431 + DWC_WRITE_REG32(&hc_regs->hcchar, hcchar.d32);
39432 + //sleep(1);
39433 + dwc_mdelay(1000);
39434 +
39435 + /* Read GINTSTS */
39436 + gintsts.d32 = DWC_READ_REG32(&global_regs->gintsts);
39437 +
39438 + /* Read HAINT */
39439 + haint.d32 = DWC_READ_REG32(&hc_global_regs->haint);
39440 +
39441 + /* Read HCINT */
39442 + hcint.d32 = DWC_READ_REG32(&hc_regs->hcint);
39443 +
39444 + /* Read HCCHAR */
39445 + hcchar.d32 = DWC_READ_REG32(&hc_regs->hcchar);
39446 +
39447 + /* Clear HCINT */
39448 + DWC_WRITE_REG32(&hc_regs->hcint, hcint.d32);
39449 +
39450 + /* Clear HAINT */
39451 + DWC_WRITE_REG32(&hc_global_regs->haint, haint.d32);
39452 +
39453 + /* Clear GINTSTS */
39454 + DWC_WRITE_REG32(&global_regs->gintsts, gintsts.d32);
39455 +
39456 + hcchar.d32 = DWC_READ_REG32(&hc_regs->hcchar);
39457 + }
39458 +
39459 + /* Set HCTSIZ */
39460 + hctsiz.d32 = 0;
39461 + hctsiz.b.xfersize = 8;
39462 + hctsiz.b.pktcnt = 1;
39463 + hctsiz.b.pid = DWC_OTG_HC_PID_DATA1;
39464 + DWC_WRITE_REG32(&hc_regs->hctsiz, hctsiz.d32);
39465 +
39466 + /* Set HCCHAR */
39467 + hcchar.d32 = DWC_READ_REG32(&hc_regs->hcchar);
39468 + hcchar.b.eptype = DWC_OTG_EP_TYPE_CONTROL;
39469 + hcchar.b.epdir = 1;
39470 + hcchar.b.epnum = 0;
39471 + hcchar.b.mps = 8;
39472 + hcchar.b.chen = 1;
39473 + DWC_WRITE_REG32(&hc_regs->hcchar, hcchar.d32);
39474 +
39475 + gintsts.d32 = DWC_READ_REG32(&global_regs->gintsts);
39476 +
39477 + /* Wait for receive status queue interrupt */
39478 + do {
39479 + gintsts.d32 = DWC_READ_REG32(&global_regs->gintsts);
39480 + } while (gintsts.b.rxstsqlvl == 0);
39481 +
39482 + /* Read RXSTS */
39483 + grxsts.d32 = DWC_READ_REG32(&global_regs->grxstsp);
39484 +
39485 + /* Clear RXSTSQLVL in GINTSTS */
39486 + gintsts.d32 = 0;
39487 + gintsts.b.rxstsqlvl = 1;
39488 + DWC_WRITE_REG32(&global_regs->gintsts, gintsts.d32);
39489 +
39490 + switch (grxsts.b.pktsts) {
39491 + case DWC_GRXSTS_PKTSTS_IN:
39492 + /* Read the data into the host buffer */
39493 + if (grxsts.b.bcnt > 0) {
39494 + int i;
39495 + int word_count = (grxsts.b.bcnt + 3) / 4;
39496 +
39497 + data_fifo = (uint32_t *) ((char *)global_regs + 0x1000);
39498 +
39499 + for (i = 0; i < word_count; i++) {
39500 + (void)DWC_READ_REG32(data_fifo++);
39501 + }
39502 + }
39503 + break;
39504 +
39505 + default:
39506 + break;
39507 + }
39508 +
39509 + gintsts.d32 = DWC_READ_REG32(&global_regs->gintsts);
39510 +
39511 + /* Wait for receive status queue interrupt */
39512 + do {
39513 + gintsts.d32 = DWC_READ_REG32(&global_regs->gintsts);
39514 + } while (gintsts.b.rxstsqlvl == 0);
39515 +
39516 + /* Read RXSTS */
39517 + grxsts.d32 = DWC_READ_REG32(&global_regs->grxstsp);
39518 +
39519 + /* Clear RXSTSQLVL in GINTSTS */
39520 + gintsts.d32 = 0;
39521 + gintsts.b.rxstsqlvl = 1;
39522 + DWC_WRITE_REG32(&global_regs->gintsts, gintsts.d32);
39523 +
39524 + switch (grxsts.b.pktsts) {
39525 + case DWC_GRXSTS_PKTSTS_IN_XFER_COMP:
39526 + break;
39527 +
39528 + default:
39529 + break;
39530 + }
39531 +
39532 + gintsts.d32 = DWC_READ_REG32(&global_regs->gintsts);
39533 +
39534 + /* Wait for host channel interrupt */
39535 + do {
39536 + gintsts.d32 = DWC_READ_REG32(&global_regs->gintsts);
39537 + } while (gintsts.b.hcintr == 0);
39538 +
39539 + /* Read HAINT */
39540 + haint.d32 = DWC_READ_REG32(&hc_global_regs->haint);
39541 +
39542 + /* Read HCINT */
39543 + hcint.d32 = DWC_READ_REG32(&hc_regs->hcint);
39544 +
39545 + /* Read HCCHAR */
39546 + hcchar.d32 = DWC_READ_REG32(&hc_regs->hcchar);
39547 +
39548 + /* Clear HCINT */
39549 + DWC_WRITE_REG32(&hc_regs->hcint, hcint.d32);
39550 +
39551 + /* Clear HAINT */
39552 + DWC_WRITE_REG32(&hc_global_regs->haint, haint.d32);
39553 +
39554 + /* Clear GINTSTS */
39555 + DWC_WRITE_REG32(&global_regs->gintsts, gintsts.d32);
39556 +
39557 + /* Read GINTSTS */
39558 + gintsts.d32 = DWC_READ_REG32(&global_regs->gintsts);
39559 +
39560 +// usleep(100000);
39561 +// mdelay(100);
39562 + dwc_mdelay(1);
39563 +
39564 + /*
39565 + * Send handshake packet
39566 + */
39567 +
39568 + /* Read HAINT */
39569 + haint.d32 = DWC_READ_REG32(&hc_global_regs->haint);
39570 +
39571 + /* Read HCINT */
39572 + hcint.d32 = DWC_READ_REG32(&hc_regs->hcint);
39573 +
39574 + /* Read HCCHAR */
39575 + hcchar.d32 = DWC_READ_REG32(&hc_regs->hcchar);
39576 +
39577 + /* Clear HCINT */
39578 + DWC_WRITE_REG32(&hc_regs->hcint, hcint.d32);
39579 +
39580 + /* Clear HAINT */
39581 + DWC_WRITE_REG32(&hc_global_regs->haint, haint.d32);
39582 +
39583 + /* Clear GINTSTS */
39584 + DWC_WRITE_REG32(&global_regs->gintsts, gintsts.d32);
39585 +
39586 + /* Read GINTSTS */
39587 + gintsts.d32 = DWC_READ_REG32(&global_regs->gintsts);
39588 +
39589 + /* Make sure channel is disabled */
39590 + hcchar.d32 = DWC_READ_REG32(&hc_regs->hcchar);
39591 + if (hcchar.b.chen) {
39592 + hcchar.b.chdis = 1;
39593 + hcchar.b.chen = 1;
39594 + DWC_WRITE_REG32(&hc_regs->hcchar, hcchar.d32);
39595 + //sleep(1);
39596 + dwc_mdelay(1000);
39597 +
39598 + /* Read GINTSTS */
39599 + gintsts.d32 = DWC_READ_REG32(&global_regs->gintsts);
39600 +
39601 + /* Read HAINT */
39602 + haint.d32 = DWC_READ_REG32(&hc_global_regs->haint);
39603 +
39604 + /* Read HCINT */
39605 + hcint.d32 = DWC_READ_REG32(&hc_regs->hcint);
39606 +
39607 + /* Read HCCHAR */
39608 + hcchar.d32 = DWC_READ_REG32(&hc_regs->hcchar);
39609 +
39610 + /* Clear HCINT */
39611 + DWC_WRITE_REG32(&hc_regs->hcint, hcint.d32);
39612 +
39613 + /* Clear HAINT */
39614 + DWC_WRITE_REG32(&hc_global_regs->haint, haint.d32);
39615 +
39616 + /* Clear GINTSTS */
39617 + DWC_WRITE_REG32(&global_regs->gintsts, gintsts.d32);
39618 +
39619 + hcchar.d32 = DWC_READ_REG32(&hc_regs->hcchar);
39620 + }
39621 +
39622 + /* Set HCTSIZ */
39623 + hctsiz.d32 = 0;
39624 + hctsiz.b.xfersize = 0;
39625 + hctsiz.b.pktcnt = 1;
39626 + hctsiz.b.pid = DWC_OTG_HC_PID_DATA1;
39627 + DWC_WRITE_REG32(&hc_regs->hctsiz, hctsiz.d32);
39628 +
39629 + /* Set HCCHAR */
39630 + hcchar.d32 = DWC_READ_REG32(&hc_regs->hcchar);
39631 + hcchar.b.eptype = DWC_OTG_EP_TYPE_CONTROL;
39632 + hcchar.b.epdir = 0;
39633 + hcchar.b.epnum = 0;
39634 + hcchar.b.mps = 8;
39635 + hcchar.b.chen = 1;
39636 + DWC_WRITE_REG32(&hc_regs->hcchar, hcchar.d32);
39637 +
39638 + gintsts.d32 = DWC_READ_REG32(&global_regs->gintsts);
39639 +
39640 + /* Wait for host channel interrupt */
39641 + do {
39642 + gintsts.d32 = DWC_READ_REG32(&global_regs->gintsts);
39643 + } while (gintsts.b.hcintr == 0);
39644 +
39645 + /* Disable HCINTs */
39646 + DWC_WRITE_REG32(&hc_regs->hcintmsk, 0x0000);
39647 +
39648 + /* Disable HAINTs */
39649 + DWC_WRITE_REG32(&hc_global_regs->haintmsk, 0x0000);
39650 +
39651 + /* Read HAINT */
39652 + haint.d32 = DWC_READ_REG32(&hc_global_regs->haint);
39653 +
39654 + /* Read HCINT */
39655 + hcint.d32 = DWC_READ_REG32(&hc_regs->hcint);
39656 +
39657 + /* Read HCCHAR */
39658 + hcchar.d32 = DWC_READ_REG32(&hc_regs->hcchar);
39659 +
39660 + /* Clear HCINT */
39661 + DWC_WRITE_REG32(&hc_regs->hcint, hcint.d32);
39662 +
39663 + /* Clear HAINT */
39664 + DWC_WRITE_REG32(&hc_global_regs->haint, haint.d32);
39665 +
39666 + /* Clear GINTSTS */
39667 + DWC_WRITE_REG32(&global_regs->gintsts, gintsts.d32);
39668 +
39669 + /* Read GINTSTS */
39670 + gintsts.d32 = DWC_READ_REG32(&global_regs->gintsts);
39671 +}
39672 +#endif
39673 +
39674 +/** Handles hub class-specific requests. */
39675 +int dwc_otg_hcd_hub_control(dwc_otg_hcd_t * dwc_otg_hcd,
39676 + uint16_t typeReq,
39677 + uint16_t wValue,
39678 + uint16_t wIndex, uint8_t * buf, uint16_t wLength)
39679 +{
39680 + int retval = 0;
39681 +
39682 + dwc_otg_core_if_t *core_if = dwc_otg_hcd->core_if;
39683 + usb_hub_descriptor_t *hub_desc;
39684 + hprt0_data_t hprt0 = {.d32 = 0 };
39685 +
39686 + uint32_t port_status;
39687 +
39688 + switch (typeReq) {
39689 + case UCR_CLEAR_HUB_FEATURE:
39690 + DWC_DEBUGPL(DBG_HCD, "DWC OTG HCD HUB CONTROL - "
39691 + "ClearHubFeature 0x%x\n", wValue);
39692 + switch (wValue) {
39693 + case UHF_C_HUB_LOCAL_POWER:
39694 + case UHF_C_HUB_OVER_CURRENT:
39695 + /* Nothing required here */
39696 + break;
39697 + default:
39698 + retval = -DWC_E_INVALID;
39699 + DWC_ERROR("DWC OTG HCD - "
39700 + "ClearHubFeature request %xh unknown\n",
39701 + wValue);
39702 + }
39703 + break;
39704 + case UCR_CLEAR_PORT_FEATURE:
39705 +#ifdef CONFIG_USB_DWC_OTG_LPM
39706 + if (wValue != UHF_PORT_L1)
39707 +#endif
39708 + if (!wIndex || wIndex > 1)
39709 + goto error;
39710 +
39711 + switch (wValue) {
39712 + case UHF_PORT_ENABLE:
39713 + DWC_DEBUGPL(DBG_ANY, "DWC OTG HCD HUB CONTROL - "
39714 + "ClearPortFeature USB_PORT_FEAT_ENABLE\n");
39715 + hprt0.d32 = dwc_otg_read_hprt0(core_if);
39716 + hprt0.b.prtena = 1;
39717 + DWC_WRITE_REG32(core_if->host_if->hprt0, hprt0.d32);
39718 + break;
39719 + case UHF_PORT_SUSPEND:
39720 + DWC_DEBUGPL(DBG_HCD, "DWC OTG HCD HUB CONTROL - "
39721 + "ClearPortFeature USB_PORT_FEAT_SUSPEND\n");
39722 +
39723 + if (core_if->power_down == 2) {
39724 + dwc_otg_host_hibernation_restore(core_if, 0, 0);
39725 + } else {
39726 + DWC_WRITE_REG32(core_if->pcgcctl, 0);
39727 + dwc_mdelay(5);
39728 +
39729 + hprt0.d32 = dwc_otg_read_hprt0(core_if);
39730 + hprt0.b.prtres = 1;
39731 + DWC_WRITE_REG32(core_if->host_if->hprt0, hprt0.d32);
39732 + hprt0.b.prtsusp = 0;
39733 + /* Clear Resume bit */
39734 + dwc_mdelay(100);
39735 + hprt0.b.prtres = 0;
39736 + DWC_WRITE_REG32(core_if->host_if->hprt0, hprt0.d32);
39737 + }
39738 + break;
39739 +#ifdef CONFIG_USB_DWC_OTG_LPM
39740 + case UHF_PORT_L1:
39741 + {
39742 + pcgcctl_data_t pcgcctl = {.d32 = 0 };
39743 + glpmcfg_data_t lpmcfg = {.d32 = 0 };
39744 +
39745 + lpmcfg.d32 =
39746 + DWC_READ_REG32(&core_if->
39747 + core_global_regs->glpmcfg);
39748 + lpmcfg.b.en_utmi_sleep = 0;
39749 + lpmcfg.b.hird_thres &= (~(1 << 4));
39750 + lpmcfg.b.prt_sleep_sts = 1;
39751 + DWC_WRITE_REG32(&core_if->
39752 + core_global_regs->glpmcfg,
39753 + lpmcfg.d32);
39754 +
39755 + /* Clear Enbl_L1Gating bit. */
39756 + pcgcctl.b.enbl_sleep_gating = 1;
39757 + DWC_MODIFY_REG32(core_if->pcgcctl, pcgcctl.d32,
39758 + 0);
39759 +
39760 + dwc_mdelay(5);
39761 +
39762 + hprt0.d32 = dwc_otg_read_hprt0(core_if);
39763 + hprt0.b.prtres = 1;
39764 + DWC_WRITE_REG32(core_if->host_if->hprt0,
39765 + hprt0.d32);
39766 +				/* This bit will be cleared in the wakeup interrupt handler */
39767 + break;
39768 + }
39769 +#endif
39770 + case UHF_PORT_POWER:
39771 + DWC_DEBUGPL(DBG_HCD, "DWC OTG HCD HUB CONTROL - "
39772 + "ClearPortFeature USB_PORT_FEAT_POWER\n");
39773 + hprt0.d32 = dwc_otg_read_hprt0(core_if);
39774 + hprt0.b.prtpwr = 0;
39775 + DWC_WRITE_REG32(core_if->host_if->hprt0, hprt0.d32);
39776 + break;
39777 + case UHF_PORT_INDICATOR:
39778 + DWC_DEBUGPL(DBG_HCD, "DWC OTG HCD HUB CONTROL - "
39779 + "ClearPortFeature USB_PORT_FEAT_INDICATOR\n");
39780 +			/* Port indicator not supported */
39781 + break;
39782 + case UHF_C_PORT_CONNECTION:
39783 +			/* Clears the driver's internal connect status change
39784 + * flag */
39785 + DWC_DEBUGPL(DBG_HCD, "DWC OTG HCD HUB CONTROL - "
39786 + "ClearPortFeature USB_PORT_FEAT_C_CONNECTION\n");
39787 + dwc_otg_hcd->flags.b.port_connect_status_change = 0;
39788 + break;
39789 + case UHF_C_PORT_RESET:
39790 + /* Clears the driver's internal Port Reset Change
39791 + * flag */
39792 + DWC_DEBUGPL(DBG_HCD, "DWC OTG HCD HUB CONTROL - "
39793 + "ClearPortFeature USB_PORT_FEAT_C_RESET\n");
39794 + dwc_otg_hcd->flags.b.port_reset_change = 0;
39795 + break;
39796 + case UHF_C_PORT_ENABLE:
39797 + /* Clears the driver's internal Port
39798 + * Enable/Disable Change flag */
39799 + DWC_DEBUGPL(DBG_HCD, "DWC OTG HCD HUB CONTROL - "
39800 + "ClearPortFeature USB_PORT_FEAT_C_ENABLE\n");
39801 + dwc_otg_hcd->flags.b.port_enable_change = 0;
39802 + break;
39803 + case UHF_C_PORT_SUSPEND:
39804 + /* Clears the driver's internal Port Suspend
39805 + * Change flag, which is set when resume signaling on
39806 + * the host port is complete */
39807 + DWC_DEBUGPL(DBG_HCD, "DWC OTG HCD HUB CONTROL - "
39808 + "ClearPortFeature USB_PORT_FEAT_C_SUSPEND\n");
39809 + dwc_otg_hcd->flags.b.port_suspend_change = 0;
39810 + break;
39811 +#ifdef CONFIG_USB_DWC_OTG_LPM
39812 + case UHF_C_PORT_L1:
39813 + dwc_otg_hcd->flags.b.port_l1_change = 0;
39814 + break;
39815 +#endif
39816 + case UHF_C_PORT_OVER_CURRENT:
39817 + DWC_DEBUGPL(DBG_HCD, "DWC OTG HCD HUB CONTROL - "
39818 + "ClearPortFeature USB_PORT_FEAT_C_OVER_CURRENT\n");
39819 + dwc_otg_hcd->flags.b.port_over_current_change = 0;
39820 + break;
39821 + default:
39822 + retval = -DWC_E_INVALID;
39823 + DWC_ERROR("DWC OTG HCD - "
39824 + "ClearPortFeature request %xh "
39825 + "unknown or unsupported\n", wValue);
39826 + }
39827 + break;
39828 + case UCR_GET_HUB_DESCRIPTOR:
39829 + DWC_DEBUGPL(DBG_HCD, "DWC OTG HCD HUB CONTROL - "
39830 + "GetHubDescriptor\n");
39831 + hub_desc = (usb_hub_descriptor_t *) buf;
39832 + hub_desc->bDescLength = 9;
39833 + hub_desc->bDescriptorType = 0x29;
39834 + hub_desc->bNbrPorts = 1;
39835 + USETW(hub_desc->wHubCharacteristics, 0x08);
39836 + hub_desc->bPwrOn2PwrGood = 1;
39837 + hub_desc->bHubContrCurrent = 0;
39838 + hub_desc->DeviceRemovable[0] = 0;
39839 + hub_desc->DeviceRemovable[1] = 0xff;
39840 + break;
39841 + case UCR_GET_HUB_STATUS:
39842 + DWC_DEBUGPL(DBG_HCD, "DWC OTG HCD HUB CONTROL - "
39843 + "GetHubStatus\n");
39844 + DWC_MEMSET(buf, 0, 4);
39845 + break;
39846 + case UCR_GET_PORT_STATUS:
39847 + DWC_DEBUGPL(DBG_HCD, "DWC OTG HCD HUB CONTROL - "
39848 + "GetPortStatus wIndex = 0x%04x FLAGS=0x%08x\n",
39849 + wIndex, dwc_otg_hcd->flags.d32);
39850 + if (!wIndex || wIndex > 1)
39851 + goto error;
39852 +
39853 + port_status = 0;
39854 +
39855 + if (dwc_otg_hcd->flags.b.port_connect_status_change)
39856 + port_status |= (1 << UHF_C_PORT_CONNECTION);
39857 +
39858 + if (dwc_otg_hcd->flags.b.port_enable_change)
39859 + port_status |= (1 << UHF_C_PORT_ENABLE);
39860 +
39861 + if (dwc_otg_hcd->flags.b.port_suspend_change)
39862 + port_status |= (1 << UHF_C_PORT_SUSPEND);
39863 +
39864 + if (dwc_otg_hcd->flags.b.port_l1_change)
39865 + port_status |= (1 << UHF_C_PORT_L1);
39866 +
39867 + if (dwc_otg_hcd->flags.b.port_reset_change) {
39868 + port_status |= (1 << UHF_C_PORT_RESET);
39869 + }
39870 +
39871 + if (dwc_otg_hcd->flags.b.port_over_current_change) {
39872 + DWC_WARN("Overcurrent change detected\n");
39873 + port_status |= (1 << UHF_C_PORT_OVER_CURRENT);
39874 + }
39875 +
39876 + if (!dwc_otg_hcd->flags.b.port_connect_status) {
39877 + /*
39878 + * The port is disconnected, which means the core is
39879 + * either in device mode or it soon will be. Just
39880 + * return 0's for the remainder of the port status
39881 + * since the port register can't be read if the core
39882 + * is in device mode.
39883 + */
39884 + *((__le32 *) buf) = dwc_cpu_to_le32(&port_status);
39885 + break;
39886 + }
39887 +
39888 + hprt0.d32 = DWC_READ_REG32(core_if->host_if->hprt0);
39889 + DWC_DEBUGPL(DBG_HCDV, " HPRT0: 0x%08x\n", hprt0.d32);
39890 +
39891 + if (hprt0.b.prtconnsts)
39892 + port_status |= (1 << UHF_PORT_CONNECTION);
39893 +
39894 + if (hprt0.b.prtena)
39895 + port_status |= (1 << UHF_PORT_ENABLE);
39896 +
39897 + if (hprt0.b.prtsusp)
39898 + port_status |= (1 << UHF_PORT_SUSPEND);
39899 +
39900 + if (hprt0.b.prtovrcurract)
39901 + port_status |= (1 << UHF_PORT_OVER_CURRENT);
39902 +
39903 + if (hprt0.b.prtrst)
39904 + port_status |= (1 << UHF_PORT_RESET);
39905 +
39906 + if (hprt0.b.prtpwr)
39907 + port_status |= (1 << UHF_PORT_POWER);
39908 +
39909 + if (hprt0.b.prtspd == DWC_HPRT0_PRTSPD_HIGH_SPEED)
39910 + port_status |= (1 << UHF_PORT_HIGH_SPEED);
39911 + else if (hprt0.b.prtspd == DWC_HPRT0_PRTSPD_LOW_SPEED)
39912 + port_status |= (1 << UHF_PORT_LOW_SPEED);
39913 +
39914 + if (hprt0.b.prttstctl)
39915 + port_status |= (1 << UHF_PORT_TEST);
39916 + if (dwc_otg_get_lpm_portsleepstatus(dwc_otg_hcd->core_if)) {
39917 + port_status |= (1 << UHF_PORT_L1);
39918 + }
39919 + /*
39920 + For Synopsys HW emulation of Power down wkup_control asserts the
39921 +		hreset_n and prst_n on suspend. This causes HPRT0 to read as zero.
39922 +		We intentionally tell the software that the port is in the L2 Suspend state.
39923 + Only for STE.
39924 + */
39925 + if ((core_if->power_down == 2)
39926 + && (core_if->hibernation_suspend == 1)) {
39927 + port_status |= (1 << UHF_PORT_SUSPEND);
39928 + }
39929 +		/* USB_PORT_FEAT_INDICATOR unsupported, always 0 */
39930 +
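      +		/* The low 16 bits are wPortStatus and the high 16 bits are
      +		 * wPortChange (the UHF_C_PORT_* selectors are 16 and above). */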
39931 + *((__le32 *) buf) = dwc_cpu_to_le32(&port_status);
39932 +
39933 + break;
39934 + case UCR_SET_HUB_FEATURE:
39935 + DWC_DEBUGPL(DBG_HCD, "DWC OTG HCD HUB CONTROL - "
39936 + "SetHubFeature\n");
39937 + /* No HUB features supported */
39938 + break;
39939 + case UCR_SET_PORT_FEATURE:
39940 + if (wValue != UHF_PORT_TEST && (!wIndex || wIndex > 1))
39941 + goto error;
39942 +
39943 + if (!dwc_otg_hcd->flags.b.port_connect_status) {
39944 + /*
39945 + * The port is disconnected, which means the core is
39946 + * either in device mode or it soon will be. Just
39947 + * return without doing anything since the port
39948 + * register can't be written if the core is in device
39949 + * mode.
39950 + */
39951 + break;
39952 + }
39953 +
39954 + switch (wValue) {
39955 + case UHF_PORT_SUSPEND:
39956 + DWC_DEBUGPL(DBG_HCD, "DWC OTG HCD HUB CONTROL - "
39957 + "SetPortFeature - USB_PORT_FEAT_SUSPEND\n");
39958 + if (dwc_otg_hcd_otg_port(dwc_otg_hcd) != wIndex) {
39959 + goto error;
39960 + }
39961 + if (core_if->power_down == 2) {
39962 + int timeout = 300;
39963 + dwc_irqflags_t flags;
39964 + pcgcctl_data_t pcgcctl = {.d32 = 0 };
39965 + gpwrdn_data_t gpwrdn = {.d32 = 0 };
39966 + gusbcfg_data_t gusbcfg = {.d32 = 0 };
39967 +#ifdef DWC_DEV_SRPCAP
39968 + int32_t otg_cap_param = core_if->core_params->otg_cap;
39969 +#endif
39970 + DWC_PRINTF("Preparing for complete power-off\n");
39971 +
39972 + /* Save registers before hibernation */
39973 + dwc_otg_save_global_regs(core_if);
39974 + dwc_otg_save_host_regs(core_if);
39975 +
39976 + hprt0.d32 = dwc_otg_read_hprt0(core_if);
39977 + hprt0.b.prtsusp = 1;
39978 + hprt0.b.prtena = 0;
39979 + DWC_WRITE_REG32(core_if->host_if->hprt0, hprt0.d32);
39980 +				/* Spin until hprt0.b.prtsusp becomes 1 */
39981 + do {
39982 + hprt0.d32 = dwc_otg_read_hprt0(core_if);
39983 + if (hprt0.b.prtsusp) {
39984 + break;
39985 + }
39986 + dwc_mdelay(1);
39987 + } while (--timeout);
39988 + if (!timeout) {
39989 +					DWC_WARN("Suspend wasn't generated\n");
39990 + }
39991 + dwc_udelay(10);
39992 +
39993 + /*
39994 + * We need to disable interrupts to prevent servicing of any IRQ
39995 +				 * while entering hibernation
39996 + */
39997 + DWC_SPINLOCK_IRQSAVE(dwc_otg_hcd->lock, &flags);
39998 + core_if->lx_state = DWC_OTG_L2;
39999 +#ifdef DWC_DEV_SRPCAP
40000 + hprt0.d32 = dwc_otg_read_hprt0(core_if);
40001 + hprt0.b.prtpwr = 0;
40002 + hprt0.b.prtena = 0;
40003 + DWC_WRITE_REG32(core_if->host_if->hprt0,
40004 + hprt0.d32);
40005 +#endif
40006 + gusbcfg.d32 =
40007 + DWC_READ_REG32(&core_if->core_global_regs->
40008 + gusbcfg);
40009 + if (gusbcfg.b.ulpi_utmi_sel == 1) {
40010 + /* ULPI interface */
40011 + /* Suspend the Phy Clock */
40012 + pcgcctl.d32 = 0;
40013 + pcgcctl.b.stoppclk = 1;
40014 + DWC_MODIFY_REG32(core_if->pcgcctl, 0,
40015 + pcgcctl.d32);
40016 + dwc_udelay(10);
40017 + gpwrdn.b.pmuactv = 1;
40018 + DWC_MODIFY_REG32(&core_if->
40019 + core_global_regs->
40020 + gpwrdn, 0, gpwrdn.d32);
40021 + } else {
40022 + /* UTMI+ Interface */
40023 + gpwrdn.b.pmuactv = 1;
40024 + DWC_MODIFY_REG32(&core_if->
40025 + core_global_regs->
40026 + gpwrdn, 0, gpwrdn.d32);
40027 + dwc_udelay(10);
40028 + pcgcctl.b.stoppclk = 1;
40029 + DWC_MODIFY_REG32(core_if->pcgcctl, 0, pcgcctl.d32);
40030 + dwc_udelay(10);
40031 + }
40032 +#ifdef DWC_DEV_SRPCAP
40033 + gpwrdn.d32 = 0;
40034 + gpwrdn.b.dis_vbus = 1;
40035 + DWC_MODIFY_REG32(&core_if->core_global_regs->
40036 + gpwrdn, 0, gpwrdn.d32);
40037 +#endif
40038 + gpwrdn.d32 = 0;
40039 + gpwrdn.b.pmuintsel = 1;
40040 + DWC_MODIFY_REG32(&core_if->core_global_regs->
40041 + gpwrdn, 0, gpwrdn.d32);
40042 + dwc_udelay(10);
40043 +
40044 + gpwrdn.d32 = 0;
40045 +#ifdef DWC_DEV_SRPCAP
40046 + gpwrdn.b.srp_det_msk = 1;
40047 +#endif
40048 + gpwrdn.b.disconn_det_msk = 1;
40049 + gpwrdn.b.lnstchng_msk = 1;
40050 + gpwrdn.b.sts_chngint_msk = 1;
40051 + DWC_MODIFY_REG32(&core_if->core_global_regs->
40052 + gpwrdn, 0, gpwrdn.d32);
40053 + dwc_udelay(10);
40054 +
40055 + /* Enable Power Down Clamp and all interrupts in GPWRDN */
40056 + gpwrdn.d32 = 0;
40057 + gpwrdn.b.pwrdnclmp = 1;
40058 + DWC_MODIFY_REG32(&core_if->core_global_regs->
40059 + gpwrdn, 0, gpwrdn.d32);
40060 + dwc_udelay(10);
40061 +
40062 + /* Switch off VDD */
40063 + gpwrdn.d32 = 0;
40064 + gpwrdn.b.pwrdnswtch = 1;
40065 + DWC_MODIFY_REG32(&core_if->core_global_regs->
40066 + gpwrdn, 0, gpwrdn.d32);
40067 +
40068 +#ifdef DWC_DEV_SRPCAP
40069 + if (otg_cap_param == DWC_OTG_CAP_PARAM_HNP_SRP_CAPABLE)
40070 + {
40071 + core_if->pwron_timer_started = 1;
40072 + DWC_TIMER_SCHEDULE(core_if->pwron_timer, 6000 /* 6 secs */ );
40073 + }
40074 +#endif
40075 +				/* Save gpwrdn register for later use by the stschng interrupt handler */
40076 + core_if->gr_backup->gpwrdn_local =
40077 + DWC_READ_REG32(&core_if->core_global_regs->gpwrdn);
40078 +
40079 + /* Set flag to indicate that we are in hibernation */
40080 + core_if->hibernation_suspend = 1;
40081 + DWC_SPINUNLOCK_IRQRESTORE(dwc_otg_hcd->lock,flags);
40082 +
40083 + DWC_PRINTF("Host hibernation completed\n");
40084 + // Exit from case statement
40085 + break;
40086 +
40087 + }
40088 + if (dwc_otg_hcd_otg_port(dwc_otg_hcd) == wIndex &&
40089 + dwc_otg_hcd->fops->get_b_hnp_enable(dwc_otg_hcd)) {
40090 + gotgctl_data_t gotgctl = {.d32 = 0 };
40091 + gotgctl.b.hstsethnpen = 1;
40092 + DWC_MODIFY_REG32(&core_if->core_global_regs->
40093 + gotgctl, 0, gotgctl.d32);
40094 + core_if->op_state = A_SUSPEND;
40095 + }
40096 + hprt0.d32 = dwc_otg_read_hprt0(core_if);
40097 + hprt0.b.prtsusp = 1;
40098 + DWC_WRITE_REG32(core_if->host_if->hprt0, hprt0.d32);
40099 + {
40100 + dwc_irqflags_t flags;
40101 + /* Update lx_state */
40102 + DWC_SPINLOCK_IRQSAVE(dwc_otg_hcd->lock, &flags);
40103 + core_if->lx_state = DWC_OTG_L2;
40104 + DWC_SPINUNLOCK_IRQRESTORE(dwc_otg_hcd->lock, flags);
40105 + }
40106 + /* Suspend the Phy Clock */
40107 + {
40108 + pcgcctl_data_t pcgcctl = {.d32 = 0 };
40109 + pcgcctl.b.stoppclk = 1;
40110 + DWC_MODIFY_REG32(core_if->pcgcctl, 0,
40111 + pcgcctl.d32);
40112 + dwc_udelay(10);
40113 + }
40114 +
40115 + /* For HNP the bus must be suspended for at least 200ms. */
40116 + if (dwc_otg_hcd->fops->get_b_hnp_enable(dwc_otg_hcd)) {
40117 + pcgcctl_data_t pcgcctl = {.d32 = 0 };
40118 + pcgcctl.b.stoppclk = 1;
40119 + DWC_MODIFY_REG32(core_if->pcgcctl, pcgcctl.d32, 0);
40120 + dwc_mdelay(200);
40121 + }
40122 +
40123 + /** @todo - check how sw can wait for 1 sec to check asesvld??? */
40124 +#if 0 //vahrama !!!!!!!!!!!!!!!!!!
40125 + if (core_if->adp_enable) {
40126 + gotgctl_data_t gotgctl = {.d32 = 0 };
40127 + gpwrdn_data_t gpwrdn;
40128 +
40129 + while (gotgctl.b.asesvld == 1) {
40130 + gotgctl.d32 =
40131 + DWC_READ_REG32(&core_if->
40132 + core_global_regs->
40133 + gotgctl);
40134 + dwc_mdelay(100);
40135 + }
40136 +
40137 + /* Enable Power Down Logic */
40138 + gpwrdn.d32 = 0;
40139 + gpwrdn.b.pmuactv = 1;
40140 + DWC_MODIFY_REG32(&core_if->core_global_regs->
40141 + gpwrdn, 0, gpwrdn.d32);
40142 +
40143 + /* Unmask SRP detected interrupt from Power Down Logic */
40144 + gpwrdn.d32 = 0;
40145 + gpwrdn.b.srp_det_msk = 1;
40146 + DWC_MODIFY_REG32(&core_if->core_global_regs->
40147 + gpwrdn, 0, gpwrdn.d32);
40148 +
40149 + dwc_otg_adp_probe_start(core_if);
40150 + }
40151 +#endif
40152 + break;
40153 + case UHF_PORT_POWER:
40154 + DWC_DEBUGPL(DBG_HCD, "DWC OTG HCD HUB CONTROL - "
40155 + "SetPortFeature - USB_PORT_FEAT_POWER\n");
40156 + hprt0.d32 = dwc_otg_read_hprt0(core_if);
40157 + hprt0.b.prtpwr = 1;
40158 + DWC_WRITE_REG32(core_if->host_if->hprt0, hprt0.d32);
40159 + break;
40160 + case UHF_PORT_RESET:
40161 + if ((core_if->power_down == 2)
40162 + && (core_if->hibernation_suspend == 1)) {
40163 +				/* Exit from the hibernated
40164 +				 * state via USB RESET.
40165 + */
40166 + dwc_otg_host_hibernation_restore(core_if, 0, 1);
40167 + } else {
40168 + hprt0.d32 = dwc_otg_read_hprt0(core_if);
40169 +
40170 + DWC_DEBUGPL(DBG_HCD,
40171 + "DWC OTG HCD HUB CONTROL - "
40172 + "SetPortFeature - USB_PORT_FEAT_RESET\n");
40173 + {
40174 + pcgcctl_data_t pcgcctl = {.d32 = 0 };
40175 + pcgcctl.b.enbl_sleep_gating = 1;
40176 + pcgcctl.b.stoppclk = 1;
40177 + DWC_MODIFY_REG32(core_if->pcgcctl, pcgcctl.d32, 0);
40178 + DWC_WRITE_REG32(core_if->pcgcctl, 0);
40179 + }
40180 +#ifdef CONFIG_USB_DWC_OTG_LPM
40181 + {
40182 + glpmcfg_data_t lpmcfg;
40183 + lpmcfg.d32 =
40184 + DWC_READ_REG32(&core_if->core_global_regs->glpmcfg);
40185 + if (lpmcfg.b.prt_sleep_sts) {
40186 + lpmcfg.b.en_utmi_sleep = 0;
40187 + lpmcfg.b.hird_thres &= (~(1 << 4));
40188 + DWC_WRITE_REG32
40189 + (&core_if->core_global_regs->glpmcfg,
40190 + lpmcfg.d32);
40191 + dwc_mdelay(1);
40192 + }
40193 + }
40194 +#endif
40195 + hprt0.d32 = dwc_otg_read_hprt0(core_if);
40196 + /* Clear suspend bit if resetting from suspended state. */
40197 + hprt0.b.prtsusp = 0;
40198 +				/* When acting as B-Host, the port reset bit is set in
40199 +				 * the start HCD callback function, so that
40200 +				 * the reset is started within 1ms of the HNP
40201 +				 * success interrupt. */
40202 + if (!dwc_otg_hcd_is_b_host(dwc_otg_hcd)) {
40203 + hprt0.b.prtpwr = 1;
40204 + hprt0.b.prtrst = 1;
40205 + DWC_PRINTF("Indeed it is in host mode hprt0 = %08x\n",hprt0.d32);
40206 + DWC_WRITE_REG32(core_if->host_if->hprt0,
40207 + hprt0.d32);
40208 + }
40209 + /* Clear reset bit in 10ms (FS/LS) or 50ms (HS) */
40210 + dwc_mdelay(60);
40211 + hprt0.b.prtrst = 0;
40212 + DWC_WRITE_REG32(core_if->host_if->hprt0, hprt0.d32);
40213 + core_if->lx_state = DWC_OTG_L0; /* Now back to the on state */
40214 + }
40215 + break;
40216 +#ifdef DWC_HS_ELECT_TST
40217 + case UHF_PORT_TEST:
40218 + {
40219 + uint32_t t;
40220 + gintmsk_data_t gintmsk;
40221 +
40222 + t = (wIndex >> 8); /* MSB wIndex USB */
40223 + DWC_DEBUGPL(DBG_HCD,
40224 + "DWC OTG HCD HUB CONTROL - "
40225 + "SetPortFeature - USB_PORT_FEAT_TEST %d\n",
40226 + t);
40227 + DWC_WARN("USB_PORT_FEAT_TEST %d\n", t);
40228 + if (t < 6) {
40229 + hprt0.d32 = dwc_otg_read_hprt0(core_if);
40230 + hprt0.b.prttstctl = t;
40231 + DWC_WRITE_REG32(core_if->host_if->hprt0,
40232 + hprt0.d32);
40233 + } else {
40234 + /* Setup global vars with reg addresses (quick and
40235 + * dirty hack, should be cleaned up)
40236 + */
40237 + global_regs = core_if->core_global_regs;
40238 + hc_global_regs =
40239 + core_if->host_if->host_global_regs;
40240 + hc_regs =
40241 + (dwc_otg_hc_regs_t *) ((char *)
40242 + global_regs +
40243 + 0x500);
40244 + data_fifo =
40245 + (uint32_t *) ((char *)global_regs +
40246 + 0x1000);
40247 +
40248 + if (t == 6) { /* HS_HOST_PORT_SUSPEND_RESUME */
40249 + /* Save current interrupt mask */
40250 + gintmsk.d32 =
40251 + DWC_READ_REG32
40252 + (&global_regs->gintmsk);
40253 +
40254 + /* Disable all interrupts while we muck with
40255 + * the hardware directly
40256 + */
40257 + DWC_WRITE_REG32(&global_regs->gintmsk, 0);
40258 +
40259 + /* 15 second delay per the test spec */
40260 + dwc_mdelay(15000);
40261 +
40262 + /* Drive suspend on the root port */
40263 + hprt0.d32 =
40264 + dwc_otg_read_hprt0(core_if);
40265 + hprt0.b.prtsusp = 1;
40266 + hprt0.b.prtres = 0;
40267 + DWC_WRITE_REG32(core_if->host_if->hprt0, hprt0.d32);
40268 +
40269 + /* 15 second delay per the test spec */
40270 + dwc_mdelay(15000);
40271 +
40272 + /* Drive resume on the root port */
40273 + hprt0.d32 =
40274 + dwc_otg_read_hprt0(core_if);
40275 + hprt0.b.prtsusp = 0;
40276 + hprt0.b.prtres = 1;
40277 + DWC_WRITE_REG32(core_if->host_if->hprt0, hprt0.d32);
40278 + dwc_mdelay(100);
40279 +
40280 + /* Clear the resume bit */
40281 + hprt0.b.prtres = 0;
40282 + DWC_WRITE_REG32(core_if->host_if->hprt0, hprt0.d32);
40283 +
40284 + /* Restore interrupts */
40285 + DWC_WRITE_REG32(&global_regs->gintmsk, gintmsk.d32);
40286 + } else if (t == 7) { /* SINGLE_STEP_GET_DEVICE_DESCRIPTOR setup */
40287 + /* Save current interrupt mask */
40288 + gintmsk.d32 =
40289 + DWC_READ_REG32
40290 + (&global_regs->gintmsk);
40291 +
40292 + /* Disable all interrupts while we muck with
40293 + * the hardware directly
40294 + */
40295 + DWC_WRITE_REG32(&global_regs->gintmsk, 0);
40296 +
40297 + /* 15 second delay per the test spec */
40298 + dwc_mdelay(15000);
40299 +
40300 + /* Send the Setup packet */
40301 + do_setup();
40302 +
40303 +					/* 15 second delay so nothing else happens for a while */
40304 + dwc_mdelay(15000);
40305 +
40306 + /* Restore interrupts */
40307 + DWC_WRITE_REG32(&global_regs->gintmsk, gintmsk.d32);
40308 + } else if (t == 8) { /* SINGLE_STEP_GET_DEVICE_DESCRIPTOR execute */
40309 + /* Save current interrupt mask */
40310 + gintmsk.d32 =
40311 + DWC_READ_REG32
40312 + (&global_regs->gintmsk);
40313 +
40314 + /* Disable all interrupts while we muck with
40315 + * the hardware directly
40316 + */
40317 + DWC_WRITE_REG32(&global_regs->gintmsk, 0);
40318 +
40319 + /* Send the Setup packet */
40320 + do_setup();
40321 +
40322 +					/* 15 second delay so nothing else happens for a while */
40323 + dwc_mdelay(15000);
40324 +
40325 + /* Send the In and Ack packets */
40326 + do_in_ack();
40327 +
40328 +					/* 15 second delay so nothing else happens for a while */
40329 + dwc_mdelay(15000);
40330 +
40331 + /* Restore interrupts */
40332 + DWC_WRITE_REG32(&global_regs->gintmsk, gintmsk.d32);
40333 + }
40334 + }
40335 + break;
40336 + }
40337 +#endif /* DWC_HS_ELECT_TST */
40338 +
40339 + case UHF_PORT_INDICATOR:
40340 + DWC_DEBUGPL(DBG_HCD, "DWC OTG HCD HUB CONTROL - "
40341 + "SetPortFeature - USB_PORT_FEAT_INDICATOR\n");
40342 + /* Not supported */
40343 + break;
40344 + default:
40345 + retval = -DWC_E_INVALID;
40346 + DWC_ERROR("DWC OTG HCD - "
40347 + "SetPortFeature request %xh "
40348 + "unknown or unsupported\n", wValue);
40349 + break;
40350 + }
40351 + break;
40352 +#ifdef CONFIG_USB_DWC_OTG_LPM
40353 + case UCR_SET_AND_TEST_PORT_FEATURE:
40354 + if (wValue != UHF_PORT_L1) {
40355 + goto error;
40356 + }
40357 + {
40358 + int portnum, hird, devaddr, remwake;
40359 + glpmcfg_data_t lpmcfg;
40360 + uint32_t time_usecs;
40361 + gintsts_data_t gintsts;
40362 + gintmsk_data_t gintmsk;
40363 +
40364 + if (!dwc_otg_get_param_lpm_enable(core_if)) {
40365 + goto error;
40366 + }
40367 + if (wValue != UHF_PORT_L1 || wLength != 1) {
40368 + goto error;
40369 + }
40370 + /* Check if the port currently is in SLEEP state */
40371 + lpmcfg.d32 =
40372 + DWC_READ_REG32(&core_if->core_global_regs->glpmcfg);
40373 + if (lpmcfg.b.prt_sleep_sts) {
40374 + DWC_INFO("Port is already in sleep mode\n");
40375 + buf[0] = 0; /* Return success */
40376 + break;
40377 + }
40378 +
40379 + portnum = wIndex & 0xf;
40380 + hird = (wIndex >> 4) & 0xf;
40381 + devaddr = (wIndex >> 8) & 0x7f;
40382 + remwake = (wIndex >> 15);
40383 +
40384 + if (portnum != 1) {
40385 + retval = -DWC_E_INVALID;
40386 + DWC_WARN
40387 + ("Wrong port number(%d) in SetandTestPortFeature request\n",
40388 + portnum);
40389 + break;
40390 + }
40391 +
40392 + DWC_PRINTF
40393 +			    ("SetandTestPortFeature request: portnum = %d, hird = %d, devaddr = %d, remwake = %d\n",
40394 + portnum, hird, devaddr, remwake);
40395 + /* Disable LPM interrupt */
40396 + gintmsk.d32 = 0;
40397 + gintmsk.b.lpmtranrcvd = 1;
40398 + DWC_MODIFY_REG32(&core_if->core_global_regs->gintmsk,
40399 + gintmsk.d32, 0);
40400 +
40401 + if (dwc_otg_hcd_send_lpm
40402 + (dwc_otg_hcd, devaddr, hird, remwake)) {
40403 + retval = -DWC_E_INVALID;
40404 + break;
40405 + }
40406 +
40407 + time_usecs = 10 * (lpmcfg.b.retry_count + 1);
40408 + /* We will consider timeout if time_usecs microseconds pass,
40409 +			/* We will consider it a timeout if time_usecs microseconds pass
40410 +			 * without receiving the LPM transaction status.
40411 +			 * After receiving a non-error response (ACK/NYET/STALL) from the
40412 +			 * device, the core will set the lpmtranrcvd bit.
40413 + do {
40414 + gintsts.d32 =
40415 + DWC_READ_REG32(&core_if->core_global_regs->gintsts);
40416 + if (gintsts.b.lpmtranrcvd) {
40417 + break;
40418 + }
40419 + dwc_udelay(1);
40420 + } while (--time_usecs);
40421 + /* lpm_int bit will be cleared in LPM interrupt handler */
40422 +
40423 + /* Now fill status
40424 + * 0x00 - Success
40425 +			 * 0x02 - NYET
40426 +			 * 0x03 - Timeout
40427 + */
40428 + if (!gintsts.b.lpmtranrcvd) {
40429 + buf[0] = 0x3; /* Completion code is Timeout */
40430 + dwc_otg_hcd_free_hc_from_lpm(dwc_otg_hcd);
40431 + } else {
40432 + lpmcfg.d32 =
40433 + DWC_READ_REG32(&core_if->core_global_regs->glpmcfg);
40434 + if (lpmcfg.b.lpm_resp == 0x3) {
40435 +					/* ACK response from the device */
40436 + buf[0] = 0x00; /* Success */
40437 + } else if (lpmcfg.b.lpm_resp == 0x2) {
40438 +					/* NYET response from the device */
40439 + buf[0] = 0x2;
40440 + } else {
40441 +					/* Otherwise report a Timeout */
40442 + buf[0] = 0x3;
40443 + }
40444 + }
40445 +			DWC_PRINTF("Device response to LPM trans is %x\n",
40446 + lpmcfg.b.lpm_resp);
40447 + DWC_MODIFY_REG32(&core_if->core_global_regs->gintmsk, 0,
40448 + gintmsk.d32);
40449 +
40450 + break;
40451 + }
40452 +#endif /* CONFIG_USB_DWC_OTG_LPM */
40453 + default:
40454 +error:
40455 + retval = -DWC_E_INVALID;
40456 + DWC_WARN("DWC OTG HCD - "
40457 + "Unknown hub control request type or invalid typeReq: %xh wIndex: %xh wValue: %xh\n",
40458 + typeReq, wIndex, wValue);
40459 + break;
40460 + }
40461 +
40462 + return retval;
40463 +}
40464 +
40465 +#ifdef CONFIG_USB_DWC_OTG_LPM
40466 +/** Returns index of host channel to perform LPM transaction. */
40467 +int dwc_otg_hcd_get_hc_for_lpm_tran(dwc_otg_hcd_t * hcd, uint8_t devaddr)
40468 +{
40469 + dwc_otg_core_if_t *core_if = hcd->core_if;
40470 + dwc_hc_t *hc;
40471 + hcchar_data_t hcchar;
40472 + gintmsk_data_t gintmsk = {.d32 = 0 };
40473 +
40474 + if (DWC_CIRCLEQ_EMPTY(&hcd->free_hc_list)) {
40475 + DWC_PRINTF("No free channel to select for LPM transaction\n");
40476 + return -1;
40477 + }
40478 +
40479 + hc = DWC_CIRCLEQ_FIRST(&hcd->free_hc_list);
40480 +
40481 + /* Mask host channel interrupts. */
40482 + gintmsk.b.hcintr = 1;
40483 + DWC_MODIFY_REG32(&core_if->core_global_regs->gintmsk, gintmsk.d32, 0);
40484 +
40485 + /* Fill fields that core needs for LPM transaction */
40486 + hcchar.b.devaddr = devaddr;
40487 + hcchar.b.epnum = 0;
40488 + hcchar.b.eptype = DWC_OTG_EP_TYPE_CONTROL;
40489 + hcchar.b.mps = 64;
40490 + hcchar.b.lspddev = (hc->speed == DWC_OTG_EP_SPEED_LOW);
40491 + hcchar.b.epdir = 0; /* OUT */
40492 + DWC_WRITE_REG32(&core_if->host_if->hc_regs[hc->hc_num]->hcchar,
40493 + hcchar.d32);
40494 +
40495 + /* Remove the host channel from the free list. */
40496 + DWC_CIRCLEQ_REMOVE_INIT(&hcd->free_hc_list, hc, hc_list_entry);
40497 +
40498 + DWC_PRINTF("hcnum = %d devaddr = %d\n", hc->hc_num, devaddr);
40499 +
40500 + return hc->hc_num;
40501 +}
40502 +
40503 +/** Release hc after performing LPM transaction */
40504 +void dwc_otg_hcd_free_hc_from_lpm(dwc_otg_hcd_t * hcd)
40505 +{
40506 + dwc_hc_t *hc;
40507 + glpmcfg_data_t lpmcfg;
40508 + uint8_t hc_num;
40509 +
40510 + lpmcfg.d32 = DWC_READ_REG32(&hcd->core_if->core_global_regs->glpmcfg);
40511 + hc_num = lpmcfg.b.lpm_chan_index;
40512 +
40513 + hc = hcd->hc_ptr_array[hc_num];
40514 +
40515 + DWC_PRINTF("Freeing channel %d after LPM\n", hc_num);
40516 + /* Return host channel to free list */
40517 + DWC_CIRCLEQ_INSERT_TAIL(&hcd->free_hc_list, hc, hc_list_entry);
40518 +}
40519 +
40520 +int dwc_otg_hcd_send_lpm(dwc_otg_hcd_t * hcd, uint8_t devaddr, uint8_t hird,
40521 + uint8_t bRemoteWake)
40522 +{
40523 + glpmcfg_data_t lpmcfg;
40524 + pcgcctl_data_t pcgcctl = {.d32 = 0 };
40525 + int channel;
40526 +
40527 + channel = dwc_otg_hcd_get_hc_for_lpm_tran(hcd, devaddr);
40528 + if (channel < 0) {
40529 + return channel;
40530 + }
40531 +
40532 + pcgcctl.b.enbl_sleep_gating = 1;
40533 + DWC_MODIFY_REG32(hcd->core_if->pcgcctl, 0, pcgcctl.d32);
40534 +
40535 + /* Read LPM config register */
40536 + lpmcfg.d32 = DWC_READ_REG32(&hcd->core_if->core_global_regs->glpmcfg);
40537 +
40538 + /* Program LPM transaction fields */
40539 + lpmcfg.b.rem_wkup_en = bRemoteWake;
40540 + lpmcfg.b.hird = hird;
40541 + lpmcfg.b.hird_thres = 0x1c;
40542 + lpmcfg.b.lpm_chan_index = channel;
40543 + lpmcfg.b.en_utmi_sleep = 1;
40544 + /* Program LPM config register */
40545 + DWC_WRITE_REG32(&hcd->core_if->core_global_regs->glpmcfg, lpmcfg.d32);
40546 +
40547 + /* Send LPM transaction */
40548 + lpmcfg.b.send_lpm = 1;
40549 + DWC_WRITE_REG32(&hcd->core_if->core_global_regs->glpmcfg, lpmcfg.d32);
40550 +
40551 + return 0;
40552 +}
40553 +
40554 +#endif /* CONFIG_USB_DWC_OTG_LPM */
40555 +
40556 +int dwc_otg_hcd_is_status_changed(dwc_otg_hcd_t * hcd, int port)
40557 +{
40558 + int retval;
40559 +
40560 + if (port != 1) {
40561 + return -DWC_E_INVALID;
40562 + }
40563 +
40564 + retval = (hcd->flags.b.port_connect_status_change ||
40565 + hcd->flags.b.port_reset_change ||
40566 + hcd->flags.b.port_enable_change ||
40567 + hcd->flags.b.port_suspend_change ||
40568 + hcd->flags.b.port_over_current_change);
40569 +#ifdef DEBUG
40570 + if (retval) {
40571 + DWC_DEBUGPL(DBG_HCD, "DWC OTG HCD HUB STATUS DATA:"
40572 + " Root port status changed\n");
40573 + DWC_DEBUGPL(DBG_HCDV, " port_connect_status_change: %d\n",
40574 + hcd->flags.b.port_connect_status_change);
40575 + DWC_DEBUGPL(DBG_HCDV, " port_reset_change: %d\n",
40576 + hcd->flags.b.port_reset_change);
40577 + DWC_DEBUGPL(DBG_HCDV, " port_enable_change: %d\n",
40578 + hcd->flags.b.port_enable_change);
40579 + DWC_DEBUGPL(DBG_HCDV, " port_suspend_change: %d\n",
40580 + hcd->flags.b.port_suspend_change);
40581 + DWC_DEBUGPL(DBG_HCDV, " port_over_current_change: %d\n",
40582 + hcd->flags.b.port_over_current_change);
40583 + }
40584 +#endif
40585 + return retval;
40586 +}
40587 +
40588 +int dwc_otg_hcd_get_frame_number(dwc_otg_hcd_t * dwc_otg_hcd)
40589 +{
40590 + hfnum_data_t hfnum;
40591 + hfnum.d32 =
40592 + DWC_READ_REG32(&dwc_otg_hcd->core_if->host_if->host_global_regs->
40593 + hfnum);
40594 +
40595 +#ifdef DEBUG_SOF
40596 + DWC_DEBUGPL(DBG_HCDV, "DWC OTG HCD GET FRAME NUMBER %d\n",
40597 + hfnum.b.frnum);
40598 +#endif
40599 + return hfnum.b.frnum;
40600 +}
40601 +
40602 +int dwc_otg_hcd_start(dwc_otg_hcd_t * hcd,
40603 + struct dwc_otg_hcd_function_ops *fops)
40604 +{
40605 + int retval = 0;
40606 +
40607 + hcd->fops = fops;
40608 + if (!dwc_otg_is_device_mode(hcd->core_if) &&
40609 + (!hcd->core_if->adp_enable || hcd->core_if->adp.adp_started)) {
40610 + dwc_otg_hcd_reinit(hcd);
40611 + } else {
40612 + retval = -DWC_E_NO_DEVICE;
40613 + }
40614 +
40615 + return retval;
40616 +}
40617 +
40618 +void *dwc_otg_hcd_get_priv_data(dwc_otg_hcd_t * hcd)
40619 +{
40620 + return hcd->priv;
40621 +}
40622 +
40623 +void dwc_otg_hcd_set_priv_data(dwc_otg_hcd_t * hcd, void *priv_data)
40624 +{
40625 + hcd->priv = priv_data;
40626 +}
40627 +
40628 +uint32_t dwc_otg_hcd_otg_port(dwc_otg_hcd_t * hcd)
40629 +{
40630 + return hcd->otg_port;
40631 +}
40632 +
40633 +uint32_t dwc_otg_hcd_is_b_host(dwc_otg_hcd_t * hcd)
40634 +{
40635 + uint32_t is_b_host;
40636 + if (hcd->core_if->op_state == B_HOST) {
40637 + is_b_host = 1;
40638 + } else {
40639 + is_b_host = 0;
40640 + }
40641 +
40642 + return is_b_host;
40643 +}
40644 +
40645 +dwc_otg_hcd_urb_t *dwc_otg_hcd_urb_alloc(dwc_otg_hcd_t * hcd,
40646 + int iso_desc_count, int atomic_alloc)
40647 +{
40648 + dwc_otg_hcd_urb_t *dwc_otg_urb;
40649 + uint32_t size;
40650 +
40651 + size =
40652 + sizeof(*dwc_otg_urb) +
40653 + iso_desc_count * sizeof(struct dwc_otg_hcd_iso_packet_desc);
40654 + if (atomic_alloc)
40655 + dwc_otg_urb = DWC_ALLOC_ATOMIC(size);
40656 + else
40657 + dwc_otg_urb = DWC_ALLOC(size);
40658 +
40659 + if (dwc_otg_urb)
40660 + dwc_otg_urb->packet_count = iso_desc_count;
40661 + else {
40662 + DWC_ERROR("**** DWC OTG HCD URB alloc - "
40663 + "%salloc of %db failed\n",
40664 + atomic_alloc?"atomic ":"", size);
40665 + }
40666 + return dwc_otg_urb;
40667 +}
40668 +
40669 +void dwc_otg_hcd_urb_set_pipeinfo(dwc_otg_hcd_urb_t * dwc_otg_urb,
40670 + uint8_t dev_addr, uint8_t ep_num,
40671 + uint8_t ep_type, uint8_t ep_dir, uint16_t mps)
40672 +{
40673 + dwc_otg_hcd_fill_pipe(&dwc_otg_urb->pipe_info, dev_addr, ep_num,
40674 + ep_type, ep_dir, mps);
40675 +#if 0
40676 + DWC_PRINTF
40677 + ("addr = %d, ep_num = %d, ep_dir = 0x%x, ep_type = 0x%x, mps = %d\n",
40678 + dev_addr, ep_num, ep_dir, ep_type, mps);
40679 +#endif
40680 +}
40681 +
40682 +void dwc_otg_hcd_urb_set_params(dwc_otg_hcd_urb_t * dwc_otg_urb,
40683 + void *urb_handle, void *buf, dwc_dma_t dma,
40684 + uint32_t buflen, void *setup_packet,
40685 + dwc_dma_t setup_dma, uint32_t flags,
40686 + uint16_t interval)
40687 +{
40688 + dwc_otg_urb->priv = urb_handle;
40689 + dwc_otg_urb->buf = buf;
40690 + dwc_otg_urb->dma = dma;
40691 + dwc_otg_urb->length = buflen;
40692 + dwc_otg_urb->setup_packet = setup_packet;
40693 + dwc_otg_urb->setup_dma = setup_dma;
40694 + dwc_otg_urb->flags = flags;
40695 + dwc_otg_urb->interval = interval;
40696 + dwc_otg_urb->status = -DWC_E_IN_PROGRESS;
40697 +}
40698 +
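+/*
+ * Example: a minimal sketch of how an OS wrapper could build a
+ * dwc_otg_hcd_urb_t with the helpers above before enqueueing it. The device
+ * address, endpoint number, packet size and the zero flags/interval values
+ * are placeholders only, and the function name is illustrative.
+ */
+static inline dwc_otg_hcd_urb_t *dwc_otg_hcd_urb_build_example(dwc_otg_hcd_t * hcd,
+ void *handle, void *buf, dwc_dma_t dma, uint32_t len)
+{
+ /* No isochronous descriptors, atomic allocation (usable from IRQ context) */
+ dwc_otg_hcd_urb_t *urb = dwc_otg_hcd_urb_alloc(hcd, 0, 1);
+
+ if (!urb)
+ return NULL;
+
+ /* Bulk IN endpoint 1 of device 2, 512-byte max packet size */
+ dwc_otg_hcd_urb_set_pipeinfo(urb, 2, 1, UE_BULK, UE_DIR_IN, 512);
+ dwc_otg_hcd_urb_set_params(urb, handle, buf, dma, len,
+ NULL /* setup_packet */, 0 /* setup_dma */,
+ 0 /* flags */, 0 /* interval */);
+ return urb;
+}
+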
40699 +uint32_t dwc_otg_hcd_urb_get_status(dwc_otg_hcd_urb_t * dwc_otg_urb)
40700 +{
40701 + return dwc_otg_urb->status;
40702 +}
40703 +
40704 +uint32_t dwc_otg_hcd_urb_get_actual_length(dwc_otg_hcd_urb_t * dwc_otg_urb)
40705 +{
40706 + return dwc_otg_urb->actual_length;
40707 +}
40708 +
40709 +uint32_t dwc_otg_hcd_urb_get_error_count(dwc_otg_hcd_urb_t * dwc_otg_urb)
40710 +{
40711 + return dwc_otg_urb->error_count;
40712 +}
40713 +
40714 +void dwc_otg_hcd_urb_set_iso_desc_params(dwc_otg_hcd_urb_t * dwc_otg_urb,
40715 + int desc_num, uint32_t offset,
40716 + uint32_t length)
40717 +{
40718 + dwc_otg_urb->iso_descs[desc_num].offset = offset;
40719 + dwc_otg_urb->iso_descs[desc_num].length = length;
40720 +}
40721 +
40722 +uint32_t dwc_otg_hcd_urb_get_iso_desc_status(dwc_otg_hcd_urb_t * dwc_otg_urb,
40723 + int desc_num)
40724 +{
40725 + return dwc_otg_urb->iso_descs[desc_num].status;
40726 +}
40727 +
40728 +uint32_t dwc_otg_hcd_urb_get_iso_desc_actual_length(dwc_otg_hcd_urb_t *
40729 + dwc_otg_urb, int desc_num)
40730 +{
40731 + return dwc_otg_urb->iso_descs[desc_num].actual_length;
40732 +}
40733 +
40734 +int dwc_otg_hcd_is_bandwidth_allocated(dwc_otg_hcd_t * hcd, void *ep_handle)
40735 +{
40736 + int allocated = 0;
40737 + dwc_otg_qh_t *qh = (dwc_otg_qh_t *) ep_handle;
40738 +
40739 + if (qh) {
40740 + if (!DWC_LIST_EMPTY(&qh->qh_list_entry)) {
40741 + allocated = 1;
40742 + }
40743 + }
40744 + return allocated;
40745 +}
40746 +
40747 +int dwc_otg_hcd_is_bandwidth_freed(dwc_otg_hcd_t * hcd, void *ep_handle)
40748 +{
40749 + dwc_otg_qh_t *qh = (dwc_otg_qh_t *) ep_handle;
40750 + int freed = 0;
40751 + DWC_ASSERT(qh, "qh is not allocated\n");
40752 +
40753 + if (DWC_LIST_EMPTY(&qh->qh_list_entry)) {
40754 + freed = 1;
40755 + }
40756 +
40757 + return freed;
40758 +}
40759 +
40760 +uint8_t dwc_otg_hcd_get_ep_bandwidth(dwc_otg_hcd_t * hcd, void *ep_handle)
40761 +{
40762 + dwc_otg_qh_t *qh = (dwc_otg_qh_t *) ep_handle;
40763 + DWC_ASSERT(qh, "qh is not allocated\n");
40764 + return qh->usecs;
40765 +}
40766 +
40767 +void dwc_otg_hcd_dump_state(dwc_otg_hcd_t * hcd)
40768 +{
40769 +#ifdef DEBUG
40770 + int num_channels;
40771 + int i;
40772 + gnptxsts_data_t np_tx_status;
40773 + hptxsts_data_t p_tx_status;
40774 +
40775 + num_channels = hcd->core_if->core_params->host_channels;
40776 + DWC_PRINTF("\n");
40777 + DWC_PRINTF
40778 + ("************************************************************\n");
40779 + DWC_PRINTF("HCD State:\n");
40780 + DWC_PRINTF(" Num channels: %d\n", num_channels);
40781 + for (i = 0; i < num_channels; i++) {
40782 + dwc_hc_t *hc = hcd->hc_ptr_array[i];
40783 + DWC_PRINTF(" Channel %d:\n", i);
40784 + DWC_PRINTF(" dev_addr: %d, ep_num: %d, ep_is_in: %d\n",
40785 + hc->dev_addr, hc->ep_num, hc->ep_is_in);
40786 + DWC_PRINTF(" speed: %d\n", hc->speed);
40787 + DWC_PRINTF(" ep_type: %d\n", hc->ep_type);
40788 + DWC_PRINTF(" max_packet: %d\n", hc->max_packet);
40789 + DWC_PRINTF(" data_pid_start: %d\n", hc->data_pid_start);
40790 + DWC_PRINTF(" multi_count: %d\n", hc->multi_count);
40791 + DWC_PRINTF(" xfer_started: %d\n", hc->xfer_started);
40792 + DWC_PRINTF(" xfer_buff: %p\n", hc->xfer_buff);
40793 + DWC_PRINTF(" xfer_len: %d\n", hc->xfer_len);
40794 + DWC_PRINTF(" xfer_count: %d\n", hc->xfer_count);
40795 + DWC_PRINTF(" halt_on_queue: %d\n", hc->halt_on_queue);
40796 + DWC_PRINTF(" halt_pending: %d\n", hc->halt_pending);
40797 + DWC_PRINTF(" halt_status: %d\n", hc->halt_status);
40798 + DWC_PRINTF(" do_split: %d\n", hc->do_split);
40799 + DWC_PRINTF(" complete_split: %d\n", hc->complete_split);
40800 + DWC_PRINTF(" hub_addr: %d\n", hc->hub_addr);
40801 + DWC_PRINTF(" port_addr: %d\n", hc->port_addr);
40802 + DWC_PRINTF(" xact_pos: %d\n", hc->xact_pos);
40803 + DWC_PRINTF(" requests: %d\n", hc->requests);
40804 + DWC_PRINTF(" qh: %p\n", hc->qh);
40805 + if (hc->xfer_started) {
40806 + hfnum_data_t hfnum;
40807 + hcchar_data_t hcchar;
40808 + hctsiz_data_t hctsiz;
40809 + hcint_data_t hcint;
40810 + hcintmsk_data_t hcintmsk;
40811 + hfnum.d32 =
40812 + DWC_READ_REG32(&hcd->core_if->
40813 + host_if->host_global_regs->hfnum);
40814 + hcchar.d32 =
40815 + DWC_READ_REG32(&hcd->core_if->host_if->
40816 + hc_regs[i]->hcchar);
40817 + hctsiz.d32 =
40818 + DWC_READ_REG32(&hcd->core_if->host_if->
40819 + hc_regs[i]->hctsiz);
40820 + hcint.d32 =
40821 + DWC_READ_REG32(&hcd->core_if->host_if->
40822 + hc_regs[i]->hcint);
40823 + hcintmsk.d32 =
40824 + DWC_READ_REG32(&hcd->core_if->host_if->
40825 + hc_regs[i]->hcintmsk);
40826 + DWC_PRINTF(" hfnum: 0x%08x\n", hfnum.d32);
40827 + DWC_PRINTF(" hcchar: 0x%08x\n", hcchar.d32);
40828 + DWC_PRINTF(" hctsiz: 0x%08x\n", hctsiz.d32);
40829 + DWC_PRINTF(" hcint: 0x%08x\n", hcint.d32);
40830 + DWC_PRINTF(" hcintmsk: 0x%08x\n", hcintmsk.d32);
40831 + }
40832 + if (hc->xfer_started && hc->qh) {
40833 + dwc_otg_qtd_t *qtd;
40834 + dwc_otg_hcd_urb_t *urb;
40835 +
40836 + DWC_CIRCLEQ_FOREACH(qtd, &hc->qh->qtd_list, qtd_list_entry) {
40837 + if (!qtd->in_process)
40838 + break;
40839 +
40840 + urb = qtd->urb;
40841 + DWC_PRINTF(" URB Info:\n");
40842 + DWC_PRINTF(" qtd: %p, urb: %p\n", qtd, urb);
40843 + if (urb) {
40844 + DWC_PRINTF(" Dev: %d, EP: %d %s\n",
40845 + dwc_otg_hcd_get_dev_addr(&urb->
40846 + pipe_info),
40847 + dwc_otg_hcd_get_ep_num(&urb->
40848 + pipe_info),
40849 + dwc_otg_hcd_is_pipe_in(&urb->
40850 + pipe_info) ?
40851 + "IN" : "OUT");
40852 + DWC_PRINTF(" Max packet size: %d\n",
40853 + dwc_otg_hcd_get_mps(&urb->
40854 + pipe_info));
40855 + DWC_PRINTF(" transfer_buffer: %p\n",
40856 + urb->buf);
40857 + DWC_PRINTF(" transfer_dma: %p\n",
40858 + (void *)urb->dma);
40859 + DWC_PRINTF(" transfer_buffer_length: %d\n",
40860 + urb->length);
40861 + DWC_PRINTF(" actual_length: %d\n",
40862 + urb->actual_length);
40863 + }
40864 + }
40865 + }
40866 + }
40867 + DWC_PRINTF(" non_periodic_channels: %d\n", hcd->non_periodic_channels);
40868 + DWC_PRINTF(" periodic_channels: %d\n", hcd->periodic_channels);
40869 + DWC_PRINTF(" periodic_usecs: %d\n", hcd->periodic_usecs);
40870 + np_tx_status.d32 =
40871 + DWC_READ_REG32(&hcd->core_if->core_global_regs->gnptxsts);
40872 + DWC_PRINTF(" NP Tx Req Queue Space Avail: %d\n",
40873 + np_tx_status.b.nptxqspcavail);
40874 + DWC_PRINTF(" NP Tx FIFO Space Avail: %d\n",
40875 + np_tx_status.b.nptxfspcavail);
40876 + p_tx_status.d32 =
40877 + DWC_READ_REG32(&hcd->core_if->host_if->host_global_regs->hptxsts);
40878 + DWC_PRINTF(" P Tx Req Queue Space Avail: %d\n",
40879 + p_tx_status.b.ptxqspcavail);
40880 + DWC_PRINTF(" P Tx FIFO Space Avail: %d\n", p_tx_status.b.ptxfspcavail);
40881 + dwc_otg_hcd_dump_frrem(hcd);
40882 + dwc_otg_dump_global_registers(hcd->core_if);
40883 + dwc_otg_dump_host_registers(hcd->core_if);
40884 + DWC_PRINTF
40885 + ("************************************************************\n");
40886 + DWC_PRINTF("\n");
40887 +#endif
40888 +}
40889 +
40890 +#ifdef DEBUG
40891 +void dwc_print_setup_data(uint8_t * setup)
40892 +{
40893 + int i;
40894 + if (CHK_DEBUG_LEVEL(DBG_HCD)) {
40895 + DWC_PRINTF("Setup Data = MSB ");
40896 + for (i = 7; i >= 0; i--)
40897 + DWC_PRINTF("%02x ", setup[i]);
40898 + DWC_PRINTF("\n");
40899 + DWC_PRINTF(" bmRequestType Tranfer = %s\n",
40900 + (setup[0] & 0x80) ? "Device-to-Host" :
40901 + "Host-to-Device");
40902 + DWC_PRINTF(" bmRequestType Type = ");
40903 + switch ((setup[0] & 0x60) >> 5) {
40904 + case 0:
40905 + DWC_PRINTF("Standard\n");
40906 + break;
40907 + case 1:
40908 + DWC_PRINTF("Class\n");
40909 + break;
40910 + case 2:
40911 + DWC_PRINTF("Vendor\n");
40912 + break;
40913 + case 3:
40914 + DWC_PRINTF("Reserved\n");
40915 + break;
40916 + }
40917 + DWC_PRINTF(" bmRequestType Recipient = ");
40918 + switch (setup[0] & 0x1f) {
40919 + case 0:
40920 + DWC_PRINTF("Device\n");
40921 + break;
40922 + case 1:
40923 + DWC_PRINTF("Interface\n");
40924 + break;
40925 + case 2:
40926 + DWC_PRINTF("Endpoint\n");
40927 + break;
40928 + case 3:
40929 + DWC_PRINTF("Other\n");
40930 + break;
40931 + default:
40932 + DWC_PRINTF("Reserved\n");
40933 + break;
40934 + }
40935 + DWC_PRINTF(" bRequest = 0x%0x\n", setup[1]);
40936 + DWC_PRINTF(" wValue = 0x%0x\n", *((uint16_t *) & setup[2]));
40937 + DWC_PRINTF(" wIndex = 0x%0x\n", *((uint16_t *) & setup[4]));
40938 + DWC_PRINTF(" wLength = 0x%0x\n\n", *((uint16_t *) & setup[6]));
40939 + }
40940 +}
40941 +#endif
40942 +
40943 +void dwc_otg_hcd_dump_frrem(dwc_otg_hcd_t * hcd)
40944 +{
40945 +#if 0
40946 + DWC_PRINTF("Frame remaining at SOF:\n");
40947 + DWC_PRINTF(" samples %u, accum %llu, avg %llu\n",
40948 + hcd->frrem_samples, hcd->frrem_accum,
40949 + (hcd->frrem_samples > 0) ?
40950 + hcd->frrem_accum / hcd->frrem_samples : 0);
40951 +
40952 + DWC_PRINTF("\n");
40953 + DWC_PRINTF("Frame remaining at start_transfer (uframe 7):\n");
40954 + DWC_PRINTF(" samples %u, accum %llu, avg %llu\n",
40955 + hcd->core_if->hfnum_7_samples,
40956 + hcd->core_if->hfnum_7_frrem_accum,
40957 + (hcd->core_if->hfnum_7_samples >
40958 + 0) ? hcd->core_if->hfnum_7_frrem_accum /
40959 + hcd->core_if->hfnum_7_samples : 0);
40960 + DWC_PRINTF("Frame remaining at start_transfer (uframe 0):\n");
40961 + DWC_PRINTF(" samples %u, accum %llu, avg %llu\n",
40962 + hcd->core_if->hfnum_0_samples,
40963 + hcd->core_if->hfnum_0_frrem_accum,
40964 + (hcd->core_if->hfnum_0_samples >
40965 + 0) ? hcd->core_if->hfnum_0_frrem_accum /
40966 + hcd->core_if->hfnum_0_samples : 0);
40967 + DWC_PRINTF("Frame remaining at start_transfer (uframe 1-6):\n");
40968 + DWC_PRINTF(" samples %u, accum %llu, avg %llu\n",
40969 + hcd->core_if->hfnum_other_samples,
40970 + hcd->core_if->hfnum_other_frrem_accum,
40971 + (hcd->core_if->hfnum_other_samples >
40972 + 0) ? hcd->core_if->hfnum_other_frrem_accum /
40973 + hcd->core_if->hfnum_other_samples : 0);
40974 +
40975 + DWC_PRINTF("\n");
40976 + DWC_PRINTF("Frame remaining at sample point A (uframe 7):\n");
40977 + DWC_PRINTF(" samples %u, accum %llu, avg %llu\n",
40978 + hcd->hfnum_7_samples_a, hcd->hfnum_7_frrem_accum_a,
40979 + (hcd->hfnum_7_samples_a > 0) ?
40980 + hcd->hfnum_7_frrem_accum_a / hcd->hfnum_7_samples_a : 0);
40981 + DWC_PRINTF("Frame remaining at sample point A (uframe 0):\n");
40982 + DWC_PRINTF(" samples %u, accum %llu, avg %llu\n",
40983 + hcd->hfnum_0_samples_a, hcd->hfnum_0_frrem_accum_a,
40984 + (hcd->hfnum_0_samples_a > 0) ?
40985 + hcd->hfnum_0_frrem_accum_a / hcd->hfnum_0_samples_a : 0);
40986 + DWC_PRINTF("Frame remaining at sample point A (uframe 1-6):\n");
40987 + DWC_PRINTF(" samples %u, accum %llu, avg %llu\n",
40988 + hcd->hfnum_other_samples_a, hcd->hfnum_other_frrem_accum_a,
40989 + (hcd->hfnum_other_samples_a > 0) ?
40990 + hcd->hfnum_other_frrem_accum_a /
40991 + hcd->hfnum_other_samples_a : 0);
40992 +
40993 + DWC_PRINTF("\n");
40994 + DWC_PRINTF("Frame remaining at sample point B (uframe 7):\n");
40995 + DWC_PRINTF(" samples %u, accum %llu, avg %llu\n",
40996 + hcd->hfnum_7_samples_b, hcd->hfnum_7_frrem_accum_b,
40997 + (hcd->hfnum_7_samples_b > 0) ?
40998 + hcd->hfnum_7_frrem_accum_b / hcd->hfnum_7_samples_b : 0);
40999 + DWC_PRINTF("Frame remaining at sample point B (uframe 0):\n");
41000 + DWC_PRINTF(" samples %u, accum %llu, avg %llu\n",
41001 + hcd->hfnum_0_samples_b, hcd->hfnum_0_frrem_accum_b,
41002 + (hcd->hfnum_0_samples_b > 0) ?
41003 + hcd->hfnum_0_frrem_accum_b / hcd->hfnum_0_samples_b : 0);
41004 + DWC_PRINTF("Frame remaining at sample point B (uframe 1-6):\n");
41005 + DWC_PRINTF(" samples %u, accum %llu, avg %llu\n",
41006 + hcd->hfnum_other_samples_b, hcd->hfnum_other_frrem_accum_b,
41007 + (hcd->hfnum_other_samples_b > 0) ?
41008 + hcd->hfnum_other_frrem_accum_b /
41009 + hcd->hfnum_other_samples_b : 0);
41010 +#endif
41011 +}
41012 +
41013 +#endif /* DWC_DEVICE_ONLY */
41014 --- /dev/null
41015 +++ b/drivers/usb/host/dwc_otg/dwc_otg_hcd.h
41016 @@ -0,0 +1,870 @@
41017 +/* ==========================================================================
41018 + * $File: //dwh/usb_iip/dev/software/otg/linux/drivers/dwc_otg_hcd.h $
41019 + * $Revision: #58 $
41020 + * $Date: 2011/09/15 $
41021 + * $Change: 1846647 $
41022 + *
41023 + * Synopsys HS OTG Linux Software Driver and documentation (hereinafter,
41024 + * "Software") is an Unsupported proprietary work of Synopsys, Inc. unless
41025 + * otherwise expressly agreed to in writing between Synopsys and you.
41026 + *
41027 + * The Software IS NOT an item of Licensed Software or Licensed Product under
41028 + * any End User Software License Agreement or Agreement for Licensed Product
41029 + * with Synopsys or any supplement thereto. You are permitted to use and
41030 + * redistribute this Software in source and binary forms, with or without
41031 + * modification, provided that redistributions of source code must retain this
41032 + * notice. You may not view, use, disclose, copy or distribute this file or
41033 + * any information contained herein except pursuant to this license grant from
41034 + * Synopsys. If you do not agree with this notice, including the disclaimer
41035 + * below, then you are not authorized to use the Software.
41036 + *
41037 + * THIS SOFTWARE IS BEING DISTRIBUTED BY SYNOPSYS SOLELY ON AN "AS IS" BASIS
41038 + * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
41039 + * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
41040 + * ARE HEREBY DISCLAIMED. IN NO EVENT SHALL SYNOPSYS BE LIABLE FOR ANY DIRECT,
41041 + * INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
41042 + * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
41043 + * SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
41044 + * CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
41045 + * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
41046 + * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH
41047 + * DAMAGE.
41048 + * ========================================================================== */
41049 +#ifndef DWC_DEVICE_ONLY
41050 +#ifndef __DWC_HCD_H__
41051 +#define __DWC_HCD_H__
41052 +
41053 +#include "dwc_otg_os_dep.h"
41054 +#include "usb.h"
41055 +#include "dwc_otg_hcd_if.h"
41056 +#include "dwc_otg_core_if.h"
41057 +#include "dwc_list.h"
41058 +#include "dwc_otg_cil.h"
41059 +#include "dwc_otg_fiq_fsm.h"
41060 +#include "dwc_otg_driver.h"
41061 +
41062 +
41063 +/**
41064 + * @file
41065 + *
41066 + * This file contains the structures, constants, and interfaces for
41067 + * the Host Controller Driver (HCD).
41068 + *
41069 + * The Host Controller Driver (HCD) is responsible for translating requests
41070 + * from the USB Driver into the appropriate actions on the DWC_otg controller.
41071 + * It isolates the USBD from the specifics of the controller by providing an
41072 + * API to the USBD.
41073 + */
41074 +
41075 +struct dwc_otg_hcd_pipe_info {
41076 + uint8_t dev_addr;
41077 + uint8_t ep_num;
41078 + uint8_t pipe_type;
41079 + uint8_t pipe_dir;
41080 + uint16_t mps;
41081 +};
41082 +
41083 +struct dwc_otg_hcd_iso_packet_desc {
41084 + uint32_t offset;
41085 + uint32_t length;
41086 + uint32_t actual_length;
41087 + uint32_t status;
41088 +};
41089 +
41090 +struct dwc_otg_qtd;
41091 +
41092 +struct dwc_otg_hcd_urb {
41093 + void *priv;
41094 + struct dwc_otg_qtd *qtd;
41095 + void *buf;
41096 + dwc_dma_t dma;
41097 + void *setup_packet;
41098 + dwc_dma_t setup_dma;
41099 + uint32_t length;
41100 + uint32_t actual_length;
41101 + uint32_t status;
41102 + uint32_t error_count;
41103 + uint32_t packet_count;
41104 + uint32_t flags;
41105 + uint16_t interval;
41106 + struct dwc_otg_hcd_pipe_info pipe_info;
41107 + struct dwc_otg_hcd_iso_packet_desc iso_descs[0];
41108 +};
41109 +
41110 +static inline uint8_t dwc_otg_hcd_get_ep_num(struct dwc_otg_hcd_pipe_info *pipe)
41111 +{
41112 + return pipe->ep_num;
41113 +}
41114 +
41115 +static inline uint8_t dwc_otg_hcd_get_pipe_type(struct dwc_otg_hcd_pipe_info
41116 + *pipe)
41117 +{
41118 + return pipe->pipe_type;
41119 +}
41120 +
41121 +static inline uint16_t dwc_otg_hcd_get_mps(struct dwc_otg_hcd_pipe_info *pipe)
41122 +{
41123 + return pipe->mps;
41124 +}
41125 +
41126 +static inline uint8_t dwc_otg_hcd_get_dev_addr(struct dwc_otg_hcd_pipe_info
41127 + *pipe)
41128 +{
41129 + return pipe->dev_addr;
41130 +}
41131 +
41132 +static inline uint8_t dwc_otg_hcd_is_pipe_isoc(struct dwc_otg_hcd_pipe_info
41133 + *pipe)
41134 +{
41135 + return (pipe->pipe_type == UE_ISOCHRONOUS);
41136 +}
41137 +
41138 +static inline uint8_t dwc_otg_hcd_is_pipe_int(struct dwc_otg_hcd_pipe_info
41139 + *pipe)
41140 +{
41141 + return (pipe->pipe_type == UE_INTERRUPT);
41142 +}
41143 +
41144 +static inline uint8_t dwc_otg_hcd_is_pipe_bulk(struct dwc_otg_hcd_pipe_info
41145 + *pipe)
41146 +{
41147 + return (pipe->pipe_type == UE_BULK);
41148 +}
41149 +
41150 +static inline uint8_t dwc_otg_hcd_is_pipe_control(struct dwc_otg_hcd_pipe_info
41151 + *pipe)
41152 +{
41153 + return (pipe->pipe_type == UE_CONTROL);
41154 +}
41155 +
41156 +static inline uint8_t dwc_otg_hcd_is_pipe_in(struct dwc_otg_hcd_pipe_info *pipe)
41157 +{
41158 + return (pipe->pipe_dir == UE_DIR_IN);
41159 +}
41160 +
41161 +static inline uint8_t dwc_otg_hcd_is_pipe_out(struct dwc_otg_hcd_pipe_info
41162 + *pipe)
41163 +{
41164 + return (!dwc_otg_hcd_is_pipe_in(pipe));
41165 +}
41166 +
41167 +static inline void dwc_otg_hcd_fill_pipe(struct dwc_otg_hcd_pipe_info *pipe,
41168 + uint8_t devaddr, uint8_t ep_num,
41169 + uint8_t pipe_type, uint8_t pipe_dir,
41170 + uint16_t mps)
41171 +{
41172 + pipe->dev_addr = devaddr;
41173 + pipe->ep_num = ep_num;
41174 + pipe->pipe_type = pipe_type;
41175 + pipe->pipe_dir = pipe_dir;
41176 + pipe->mps = mps;
41177 +}
41178 +
41179 +/**
41180 + * Phases for control transfers.
41181 + */
41182 +typedef enum dwc_otg_control_phase {
41183 + DWC_OTG_CONTROL_SETUP,
41184 + DWC_OTG_CONTROL_DATA,
41185 + DWC_OTG_CONTROL_STATUS
41186 +} dwc_otg_control_phase_e;
41187 +
41188 +/** Transaction types. */
41189 +typedef enum dwc_otg_transaction_type {
41190 + DWC_OTG_TRANSACTION_NONE = 0,
41191 + DWC_OTG_TRANSACTION_PERIODIC = 1,
41192 + DWC_OTG_TRANSACTION_NON_PERIODIC = 2,
41193 + DWC_OTG_TRANSACTION_ALL = DWC_OTG_TRANSACTION_PERIODIC + DWC_OTG_TRANSACTION_NON_PERIODIC
41194 +} dwc_otg_transaction_type_e;
41195 +
41196 +struct dwc_otg_qh;
41197 +
41198 +/**
41199 + * A Queue Transfer Descriptor (QTD) holds the state of a bulk, control,
41200 + * interrupt, or isochronous transfer. A single QTD is created for each URB
41201 + * (of one of these types) submitted to the HCD. The transfer associated with
41202 + * a QTD may require one or multiple transactions.
41203 + *
41204 + * A QTD is linked to a Queue Head, which is entered in either the
41205 + * non-periodic or periodic schedule for execution. When a QTD is chosen for
41206 + * execution, some or all of its transactions may be executed. After
41207 + * execution, the state of the QTD is updated. The QTD may be retired if all
41208 + * its transactions are complete or if an error occurred. Otherwise, it
41209 + * remains in the schedule so more transactions can be executed later.
41210 + */
41211 +typedef struct dwc_otg_qtd {
41212 + /**
41213 + * Determines the PID of the next data packet for the data phase of
41214 + * control transfers. Ignored for other transfer types.<br>
41215 + * One of the following values:
41216 + * - DWC_OTG_HC_PID_DATA0
41217 + * - DWC_OTG_HC_PID_DATA1
41218 + */
41219 + uint8_t data_toggle;
41220 +
41221 + /** Current phase for control transfers (Setup, Data, or Status). */
41222 + dwc_otg_control_phase_e control_phase;
41223 +
41224 + /** Keep track of the current split type
41225 + * for FS/LS endpoints on a HS Hub */
41226 + uint8_t complete_split;
41227 +
41228 + /** How many bytes transferred during SSPLIT OUT */
41229 + uint32_t ssplit_out_xfer_count;
41230 +
41231 + /**
41232 + * Holds the number of bus errors that have occurred for a transaction
41233 + * within this transfer.
41234 + */
41235 + uint8_t error_count;
41236 +
41237 + /**
41238 + * Index of the next frame descriptor for an isochronous transfer. A
41239 + * frame descriptor describes the buffer position and length of the
41240 + * data to be transferred in the next scheduled (micro)frame of an
41241 + * isochronous transfer. It also holds status for that transaction.
41242 + * The frame index starts at 0.
41243 + */
41244 + uint16_t isoc_frame_index;
41245 +
41246 + /** Position of the ISOC split on full/low speed */
41247 + uint8_t isoc_split_pos;
41248 +
41249 + /** Position of the ISOC split in the buffer for the current frame */
41250 + uint16_t isoc_split_offset;
41251 +
41252 + /** URB for this transfer */
41253 + struct dwc_otg_hcd_urb *urb;
41254 +
41255 + struct dwc_otg_qh *qh;
41256 +
41257 + /** Entry linking this QTD into its QH's list of QTDs */
41258 + DWC_CIRCLEQ_ENTRY(dwc_otg_qtd) qtd_list_entry;
41259 +
41260 + /** Indicates if this QTD is currently being processed by the HW. */
41261 + uint8_t in_process;
41262 +
41263 + /** Number of DMA descriptors for this QTD */
41264 + uint8_t n_desc;
41265 +
41266 + /**
41267 + * Last activated frame(packet) index.
41268 + * Used in Descriptor DMA mode only.
41269 + */
41270 + uint16_t isoc_frame_index_last;
41271 +
41272 +} dwc_otg_qtd_t;
41273 +
41274 +DWC_CIRCLEQ_HEAD(dwc_otg_qtd_list, dwc_otg_qtd);
41275 +
41276 +/**
41277 + * A Queue Head (QH) holds the static characteristics of an endpoint and
41278 + * maintains a list of transfers (QTDs) for that endpoint. A QH structure may
41279 + * be entered in either the non-periodic or periodic schedule.
41280 + */
41281 +typedef struct dwc_otg_qh {
41282 + /**
41283 + * Endpoint type.
41284 + * One of the following values:
41285 + * - UE_CONTROL
41286 + * - UE_BULK
41287 + * - UE_INTERRUPT
41288 + * - UE_ISOCHRONOUS
41289 + */
41290 + uint8_t ep_type;
41291 + uint8_t ep_is_in;
41292 +
41293 + /** wMaxPacketSize Field of Endpoint Descriptor. */
41294 + uint16_t maxp;
41295 +
41296 + /**
41297 + * Device speed.
41298 + * One of the following values:
41299 + * - DWC_OTG_EP_SPEED_LOW
41300 + * - DWC_OTG_EP_SPEED_FULL
41301 + * - DWC_OTG_EP_SPEED_HIGH
41302 + */
41303 + uint8_t dev_speed;
41304 +
41305 + /**
41306 + * Determines the PID of the next data packet for non-control
41307 + * transfers. Ignored for control transfers.<br>
41308 + * One of the following values:
41309 + * - DWC_OTG_HC_PID_DATA0
41310 + * - DWC_OTG_HC_PID_DATA1
41311 + */
41312 + uint8_t data_toggle;
41313 +
41314 + /** Ping state if 1. */
41315 + uint8_t ping_state;
41316 +
41317 + /**
41318 + * List of QTDs for this QH.
41319 + */
41320 + struct dwc_otg_qtd_list qtd_list;
41321 +
41322 + /** Host channel currently processing transfers for this QH. */
41323 + struct dwc_hc *channel;
41324 +
41325 + /** Full/low speed endpoint on high-speed hub requires split. */
41326 + uint8_t do_split;
41327 +
41328 + /** @name Periodic schedule information */
41329 + /** @{ */
41330 +
41331 + /** Bandwidth in microseconds per (micro)frame. */
41332 + uint16_t usecs;
41333 +
41334 + /** Interval between transfers in (micro)frames. */
41335 + uint16_t interval;
41336 +
41337 + /**
41338 + * (micro)frame to initialize a periodic transfer. The transfer
41339 + * executes in the following (micro)frame.
41340 + */
41341 + uint16_t sched_frame;
41342 +
41343 + /**
41344 + * Frame in which a NAK was received on this queue head, used to minimise NAK retransmission.
41345 + */
41346 + uint16_t nak_frame;
41347 +
41348 + /** (micro)frame at which last start split was initialized. */
41349 + uint16_t start_split_frame;
41350 +
41351 + /** @} */
41352 +
41353 + /**
41354 + * Used instead of the original buffer if
41355 + * its physical address is not dword-aligned.
41356 + */
41357 + uint8_t *dw_align_buf;
41358 + dwc_dma_t dw_align_buf_dma;
41359 +
41360 + /** Entry for QH in either the periodic or non-periodic schedule. */
41361 + dwc_list_link_t qh_list_entry;
41362 +
41363 + /** @name Descriptor DMA support */
41364 + /** @{ */
41365 +
41366 + /** Descriptor List. */
41367 + dwc_otg_host_dma_desc_t *desc_list;
41368 +
41369 + /** Descriptor List physical address. */
41370 + dwc_dma_t desc_list_dma;
41371 +
41372 + /**
41373 + * Xfer Bytes array.
41374 + * Each element corresponds to a descriptor and indicates
41375 + * the original XferSize value for the descriptor.
41376 + */
41377 + uint32_t *n_bytes;
41378 +
41379 + /** Actual number of transfer descriptors in a list. */
41380 + uint16_t ntd;
41381 +
41382 + /** First activated isochronous transfer descriptor index. */
41383 + uint8_t td_first;
41384 + /** Last activated isochronous transfer descriptor index. */
41385 + uint8_t td_last;
41386 +
41387 + /** @} */
41388 +
41389 +
41390 + uint16_t speed;
41391 + uint16_t frame_usecs[8];
41392 +
41393 + uint32_t skip_count;
41394 +} dwc_otg_qh_t;
41395 +
41396 +DWC_CIRCLEQ_HEAD(hc_list, dwc_hc);
41397 +
41398 +typedef struct urb_tq_entry {
41399 + struct urb *urb;
41400 + DWC_TAILQ_ENTRY(urb_tq_entry) urb_tq_entries;
41401 +} urb_tq_entry_t;
41402 +
41403 +DWC_TAILQ_HEAD(urb_list, urb_tq_entry);
41404 +
41405 +/**
41406 + * This structure holds the state of the HCD, including the non-periodic and
41407 + * periodic schedules.
41408 + */
41409 +struct dwc_otg_hcd {
41410 + /** The DWC otg device pointer */
41411 + struct dwc_otg_device *otg_dev;
41412 + /** DWC OTG Core Interface Layer */
41413 + dwc_otg_core_if_t *core_if;
41414 +
41415 + /** Function HCD driver callbacks */
41416 + struct dwc_otg_hcd_function_ops *fops;
41417 +
41418 + /** Internal DWC HCD Flags */
41419 + volatile union dwc_otg_hcd_internal_flags {
41420 + uint32_t d32;
41421 + struct {
41422 + unsigned port_connect_status_change:1;
41423 + unsigned port_connect_status:1;
41424 + unsigned port_reset_change:1;
41425 + unsigned port_enable_change:1;
41426 + unsigned port_suspend_change:1;
41427 + unsigned port_over_current_change:1;
41428 + unsigned port_l1_change:1;
41429 + unsigned port_speed:2;
41430 + unsigned reserved:24;
41431 + } b;
41432 + } flags;
41433 +
41434 + /**
41435 + * Inactive items in the non-periodic schedule. This is a list of
41436 + * Queue Heads. Transfers associated with these Queue Heads are not
41437 + * currently assigned to a host channel.
41438 + */
41439 + dwc_list_link_t non_periodic_sched_inactive;
41440 +
41441 + /**
41442 + * Active items in the non-periodic schedule. This is a list of
41443 + * Queue Heads. Transfers associated with these Queue Heads are
41444 + * currently assigned to a host channel.
41445 + */
41446 + dwc_list_link_t non_periodic_sched_active;
41447 +
41448 + /**
41449 + * Pointer to the next Queue Head to process in the active
41450 + * non-periodic schedule.
41451 + */
41452 + dwc_list_link_t *non_periodic_qh_ptr;
41453 +
41454 + /**
41455 + * Inactive items in the periodic schedule. This is a list of QHs for
41456 + * periodic transfers that are _not_ scheduled for the next frame.
41457 + * Each QH in the list has an interval counter that determines when it
41458 + * needs to be scheduled for execution. This scheduling mechanism
41459 + * allows only a simple calculation for periodic bandwidth used (i.e.
41460 + * must assume that all periodic transfers may need to execute in the
41461 + * same frame). However, it greatly simplifies scheduling and should
41462 + * be sufficient for the vast majority of OTG hosts, which need to
41463 + * connect to a small number of peripherals at one time.
41464 + *
41465 + * Items move from this list to periodic_sched_ready when the QH
41466 + * interval counter is 0 at SOF.
41467 + */
41468 + dwc_list_link_t periodic_sched_inactive;
41469 +
41470 + /**
41471 + * List of periodic QHs that are ready for execution in the next
41472 + * frame, but have not yet been assigned to host channels.
41473 + *
41474 + * Items move from this list to periodic_sched_assigned as host
41475 + * channels become available during the current frame.
41476 + */
41477 + dwc_list_link_t periodic_sched_ready;
41478 +
41479 + /**
41480 + * List of periodic QHs to be executed in the next frame that are
41481 + * assigned to host channels.
41482 + *
41483 + * Items move from this list to periodic_sched_queued as the
41484 + * transactions for the QH are queued to the DWC_otg controller.
41485 + */
41486 + dwc_list_link_t periodic_sched_assigned;
41487 +
41488 + /**
41489 + * List of periodic QHs that have been queued for execution.
41490 + *
41491 + * Items move from this list to either periodic_sched_inactive or
41492 + * periodic_sched_ready when the channel associated with the transfer
41493 + * is released. If the interval for the QH is 1, the item moves to
41494 + * periodic_sched_ready because it must be rescheduled for the next
41495 + * frame. Otherwise, the item moves to periodic_sched_inactive.
41496 + */
41497 + dwc_list_link_t periodic_sched_queued;
41498 +
41499 + /**
41500 + * Total bandwidth claimed so far for periodic transfers. This value
41501 + * is in microseconds per (micro)frame. The assumption is that all
41502 + * periodic transfers may occur in the same (micro)frame.
41503 + */
41504 + uint16_t periodic_usecs;
41505 +
41506 + /**
41507 + * Total bandwidth claimed so far for all periodic transfers
41508 + * in a frame.
41509 + * This will include a mixture of HS and FS transfers.
41510 + * Units are microseconds per (micro)frame.
41511 + * We have a budget per frame and have to schedule
41512 + * transactions accordingly.
41513 + * Watch out for the fact that things are actually scheduled for the
41514 + * "next frame".
41515 + */
41516 + uint16_t frame_usecs[8];
41517 +
41518 +
41519 + /**
41520 + * Frame number read from the core at SOF. The value ranges from 0 to
41521 + * DWC_HFNUM_MAX_FRNUM.
41522 + */
41523 + uint16_t frame_number;
41524 +
41525 + /**
41526 + * Count of periodic QHs across all endpoints; used for SOF interrupt enable/disable.
41527 + */
41528 + uint16_t periodic_qh_count;
41529 +
41530 + /**
41531 + * Free host channels in the controller. This is a list of
41532 + * dwc_hc_t items.
41533 + */
41534 + struct hc_list free_hc_list;
41535 + /**
41536 + * Number of host channels assigned to periodic transfers. Currently
41537 + * assuming that there is a dedicated host channel for each periodic
41538 + * transaction and at least one host channel available for
41539 + * non-periodic transactions.
41540 + */
41541 + int periodic_channels; /* microframe_schedule==0 */
41542 +
41543 + /**
41544 + * Number of host channels assigned to non-periodic transfers.
41545 + */
41546 + int non_periodic_channels; /* microframe_schedule==0 */
41547 +
41548 + /**
41549 + * Number of host channels currently available, used when microframe scheduling is enabled.
41550 + */
41551 + int available_host_channels;
41552 +
41553 + /**
41554 + * Array of pointers to the host channel descriptors. Allows accessing
41555 + * a host channel descriptor given the host channel number. This is
41556 + * useful in interrupt handlers.
41557 + */
41558 + struct dwc_hc *hc_ptr_array[MAX_EPS_CHANNELS];
41559 +
41560 + /**
41561 + * Buffer to use for any data received during the status phase of a
41562 + * control transfer. Normally no data is transferred during the status
41563 + * phase. This buffer is used as a bit bucket.
41564 + */
41565 + uint8_t *status_buf;
41566 +
41567 + /**
41568 + * DMA address for status_buf.
41569 + */
41570 + dma_addr_t status_buf_dma;
41571 +#define DWC_OTG_HCD_STATUS_BUF_SIZE 64
41572 +
41573 + /**
41574 + * Connection timer. An OTG host must display a message if the device
41575 + * does not connect. Started when the VBus power is turned on via
41576 + * sysfs attribute "buspower".
41577 + */
41578 + dwc_timer_t *conn_timer;
41579 +
41580 + /* Tasklet to do a reset */
41581 + dwc_tasklet_t *reset_tasklet;
41582 +
41583 + dwc_tasklet_t *completion_tasklet;
41584 + struct urb_list completed_urb_list;
41585 +
41586 + /* Lock protecting the HCD state */
41587 + dwc_spinlock_t *lock;
41588 + /**
41589 + * Private data that could be used by OS wrapper.
41590 + */
41591 + void *priv;
41592 +
41593 + uint8_t otg_port;
41594 +
41595 + /** Frame List */
41596 + uint32_t *frame_list;
41597 +
41598 + /** Hub - Port assignment */
41599 + int hub_port[128];
41600 +#ifdef FIQ_DEBUG
41601 + int hub_port_alloc[2048];
41602 +#endif
41603 +
41604 + /** Frame List DMA address */
41605 + dma_addr_t frame_list_dma;
41606 +
41607 + struct fiq_stack *fiq_stack;
41608 + struct fiq_state *fiq_state;
41609 +
41610 + /** Virtual address for split transaction DMA bounce buffers */
41611 + struct fiq_dma_blob *fiq_dmab;
41612 +
41613 +#ifdef DEBUG
41614 + uint32_t frrem_samples;
41615 + uint64_t frrem_accum;
41616 +
41617 + uint32_t hfnum_7_samples_a;
41618 + uint64_t hfnum_7_frrem_accum_a;
41619 + uint32_t hfnum_0_samples_a;
41620 + uint64_t hfnum_0_frrem_accum_a;
41621 + uint32_t hfnum_other_samples_a;
41622 + uint64_t hfnum_other_frrem_accum_a;
41623 +
41624 + uint32_t hfnum_7_samples_b;
41625 + uint64_t hfnum_7_frrem_accum_b;
41626 + uint32_t hfnum_0_samples_b;
41627 + uint64_t hfnum_0_frrem_accum_b;
41628 + uint32_t hfnum_other_samples_b;
41629 + uint64_t hfnum_other_frrem_accum_b;
41630 +#endif
41631 +};
41632 +
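+/*
+ * Summary of the periodic QH list transitions documented on the fields above
+ * (a condensed restatement of those comments, not additional behaviour):
+ *
+ * periodic_sched_inactive --(interval counter reaches 0 at SOF)--> ready
+ * periodic_sched_ready --(host channel assigned)--> assigned
+ * periodic_sched_assigned --(transactions queued to the core)--> queued
+ * periodic_sched_queued --(channel released, interval == 1)--> ready
+ * periodic_sched_queued --(channel released, interval > 1)--> inactive
+ */
+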
41633 +static inline struct device *dwc_otg_hcd_to_dev(struct dwc_otg_hcd *hcd)
41634 +{
41635 + return &hcd->otg_dev->os_dep.platformdev->dev;
41636 +}
41637 +
41638 +/** @name Transaction Execution Functions */
41639 +/** @{ */
41640 +extern dwc_otg_transaction_type_e dwc_otg_hcd_select_transactions(dwc_otg_hcd_t
41641 + * hcd);
41642 +extern void dwc_otg_hcd_queue_transactions(dwc_otg_hcd_t * hcd,
41643 + dwc_otg_transaction_type_e tr_type);
41644 +
41645 +int dwc_otg_hcd_allocate_port(dwc_otg_hcd_t * hcd, dwc_otg_qh_t *qh);
41646 +void dwc_otg_hcd_release_port(dwc_otg_hcd_t * dwc_otg_hcd, dwc_otg_qh_t *qh);
41647 +
41648 +extern int fiq_fsm_queue_transaction(dwc_otg_hcd_t *hcd, dwc_otg_qh_t *qh);
41649 +extern int fiq_fsm_transaction_suitable(dwc_otg_hcd_t *hcd, dwc_otg_qh_t *qh);
41650 +extern void dwc_otg_cleanup_fiq_channel(dwc_otg_hcd_t *hcd, uint32_t num);
41651 +
41652 +/** @} */
41653 +
41654 +/** @name Interrupt Handler Functions */
41655 +/** @{ */
41656 +extern int32_t dwc_otg_hcd_handle_intr(dwc_otg_hcd_t * dwc_otg_hcd);
41657 +extern int32_t dwc_otg_hcd_handle_sof_intr(dwc_otg_hcd_t * dwc_otg_hcd);
41658 +extern int32_t dwc_otg_hcd_handle_rx_status_q_level_intr(dwc_otg_hcd_t *
41659 + dwc_otg_hcd);
41660 +extern int32_t dwc_otg_hcd_handle_np_tx_fifo_empty_intr(dwc_otg_hcd_t *
41661 + dwc_otg_hcd);
41662 +extern int32_t dwc_otg_hcd_handle_perio_tx_fifo_empty_intr(dwc_otg_hcd_t *
41663 + dwc_otg_hcd);
41664 +extern int32_t dwc_otg_hcd_handle_incomplete_periodic_intr(dwc_otg_hcd_t *
41665 + dwc_otg_hcd);
41666 +extern int32_t dwc_otg_hcd_handle_port_intr(dwc_otg_hcd_t * dwc_otg_hcd);
41667 +extern int32_t dwc_otg_hcd_handle_conn_id_status_change_intr(dwc_otg_hcd_t *
41668 + dwc_otg_hcd);
41669 +extern int32_t dwc_otg_hcd_handle_disconnect_intr(dwc_otg_hcd_t * dwc_otg_hcd);
41670 +extern int32_t dwc_otg_hcd_handle_hc_intr(dwc_otg_hcd_t * dwc_otg_hcd);
41671 +extern int32_t dwc_otg_hcd_handle_hc_n_intr(dwc_otg_hcd_t * dwc_otg_hcd,
41672 + uint32_t num);
41673 +extern int32_t dwc_otg_hcd_handle_session_req_intr(dwc_otg_hcd_t * dwc_otg_hcd);
41674 +extern int32_t dwc_otg_hcd_handle_wakeup_detected_intr(dwc_otg_hcd_t *
41675 + dwc_otg_hcd);
41676 +/** @} */
41677 +
41678 +/** @name Schedule Queue Functions */
41679 +/** @{ */
41680 +
41681 +/* Implemented in dwc_otg_hcd_queue.c */
41682 +extern dwc_otg_qh_t *dwc_otg_hcd_qh_create(dwc_otg_hcd_t * hcd,
41683 + dwc_otg_hcd_urb_t * urb, int atomic_alloc);
41684 +extern void dwc_otg_hcd_qh_free(dwc_otg_hcd_t * hcd, dwc_otg_qh_t * qh);
41685 +extern int dwc_otg_hcd_qh_add(dwc_otg_hcd_t * hcd, dwc_otg_qh_t * qh);
41686 +extern void dwc_otg_hcd_qh_remove(dwc_otg_hcd_t * hcd, dwc_otg_qh_t * qh);
41687 +extern void dwc_otg_hcd_qh_deactivate(dwc_otg_hcd_t * hcd, dwc_otg_qh_t * qh,
41688 + int sched_csplit);
41689 +
41690 +/** Remove and free a QH */
41691 +static inline void dwc_otg_hcd_qh_remove_and_free(dwc_otg_hcd_t * hcd,
41692 + dwc_otg_qh_t * qh)
41693 +{
41694 + dwc_irqflags_t flags;
41695 + DWC_SPINLOCK_IRQSAVE(hcd->lock, &flags);
41696 + dwc_otg_hcd_qh_remove(hcd, qh);
41697 + DWC_SPINUNLOCK_IRQRESTORE(hcd->lock, flags);
41698 + dwc_otg_hcd_qh_free(hcd, qh);
41699 +}
41700 +
41701 +/** Allocates memory for a QH structure.
41702 + * @return Returns the allocated memory or NULL on error. */
41703 +static inline dwc_otg_qh_t *dwc_otg_hcd_qh_alloc(int atomic_alloc)
41704 +{
41705 + if (atomic_alloc)
41706 + return (dwc_otg_qh_t *) DWC_ALLOC_ATOMIC(sizeof(dwc_otg_qh_t));
41707 + else
41708 + return (dwc_otg_qh_t *) DWC_ALLOC(sizeof(dwc_otg_qh_t));
41709 +}
41710 +
41711 +extern dwc_otg_qtd_t *dwc_otg_hcd_qtd_create(dwc_otg_hcd_urb_t * urb,
41712 + int atomic_alloc);
41713 +extern void dwc_otg_hcd_qtd_init(dwc_otg_qtd_t * qtd, dwc_otg_hcd_urb_t * urb);
41714 +extern int dwc_otg_hcd_qtd_add(dwc_otg_qtd_t * qtd, dwc_otg_hcd_t * dwc_otg_hcd,
41715 + dwc_otg_qh_t ** qh, int atomic_alloc);
41716 +
41717 +/** Allocates memory for a QTD structure.
41718 + * @return Returns the allocated memory or NULL on error. */
41719 +static inline dwc_otg_qtd_t *dwc_otg_hcd_qtd_alloc(int atomic_alloc)
41720 +{
41721 + if (atomic_alloc)
41722 + return (dwc_otg_qtd_t *) DWC_ALLOC_ATOMIC(sizeof(dwc_otg_qtd_t));
41723 + else
41724 + return (dwc_otg_qtd_t *) DWC_ALLOC(sizeof(dwc_otg_qtd_t));
41725 +}
41726 +
41727 +/** Frees the memory for a QTD structure. QTD should already be removed from
41728 + * list.
41729 + * @param qtd QTD to free.*/
41730 +static inline void dwc_otg_hcd_qtd_free(dwc_otg_qtd_t * qtd)
41731 +{
41732 + DWC_FREE(qtd);
41733 +}
41734 +
41735 +/** Removes a QTD from list.
41736 + * @param hcd HCD instance.
41737 + * @param qtd QTD to remove from list.
41738 + * @param qh QH to which the QTD belongs.
41739 + */
41740 +static inline void dwc_otg_hcd_qtd_remove(dwc_otg_hcd_t * hcd,
41741 + dwc_otg_qtd_t * qtd,
41742 + dwc_otg_qh_t * qh)
41743 +{
41744 + DWC_CIRCLEQ_REMOVE(&qh->qtd_list, qtd, qtd_list_entry);
41745 +}
41746 +
41747 +/** Remove and free a QTD.
41748 + * IRQs must be disabled and the HCD lock held when calling this function
41749 + * outside of the interrupt servicing path. */
41750 +static inline void dwc_otg_hcd_qtd_remove_and_free(dwc_otg_hcd_t * hcd,
41751 + dwc_otg_qtd_t * qtd,
41752 + dwc_otg_qh_t * qh)
41753 +{
41754 + dwc_otg_hcd_qtd_remove(hcd, qtd, qh);
41755 + dwc_otg_hcd_qtd_free(qtd);
41756 +}
41757 +
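+/*
+ * Example: a minimal sketch of the QTD enqueue path built from the helpers
+ * above. A QTD is created for an already filled-in URB and added to the QH
+ * for its endpoint (dwc_otg_hcd_qtd_add creates the QH on first use). Error
+ * handling is trimmed to the essentials and the function name is
+ * illustrative.
+ */
+static inline int dwc_otg_hcd_qtd_enqueue_example(dwc_otg_hcd_t * hcd,
+ dwc_otg_hcd_urb_t * urb,
+ dwc_otg_qh_t ** qh,
+ int atomic_alloc)
+{
+ dwc_otg_qtd_t *qtd = dwc_otg_hcd_qtd_create(urb, atomic_alloc);
+ int retval;
+
+ if (!qtd)
+ return -DWC_E_NO_MEMORY;
+
+ retval = dwc_otg_hcd_qtd_add(qtd, hcd, qh, atomic_alloc);
+ if (retval)
+ dwc_otg_hcd_qtd_free(qtd);
+ return retval;
+}
+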
41758 +/** @} */
41759 +
41760 +/** @name Descriptor DMA Supporting Functions */
41761 +/** @{ */
41762 +
41763 +extern void dwc_otg_hcd_start_xfer_ddma(dwc_otg_hcd_t * hcd, dwc_otg_qh_t * qh);
41764 +extern void dwc_otg_hcd_complete_xfer_ddma(dwc_otg_hcd_t * hcd,
41765 + dwc_hc_t * hc,
41766 + dwc_otg_hc_regs_t * hc_regs,
41767 + dwc_otg_halt_status_e halt_status);
41768 +
41769 +extern int dwc_otg_hcd_qh_init_ddma(dwc_otg_hcd_t * hcd, dwc_otg_qh_t * qh);
41770 +extern void dwc_otg_hcd_qh_free_ddma(dwc_otg_hcd_t * hcd, dwc_otg_qh_t * qh);
41771 +
41772 +/** @} */
41773 +
41774 +/** @name Internal Functions */
41775 +/** @{ */
41776 +dwc_otg_qh_t *dwc_urb_to_qh(dwc_otg_hcd_urb_t * urb);
41777 +/** @} */
41778 +
41779 +#ifdef CONFIG_USB_DWC_OTG_LPM
41780 +extern int dwc_otg_hcd_get_hc_for_lpm_tran(dwc_otg_hcd_t * hcd,
41781 + uint8_t devaddr);
41782 +extern void dwc_otg_hcd_free_hc_from_lpm(dwc_otg_hcd_t * hcd);
41783 +#endif
41784 +
41785 +/** Gets the QH that contains the list_head */
41786 +#define dwc_list_to_qh(_list_head_ptr_) container_of(_list_head_ptr_, dwc_otg_qh_t, qh_list_entry)
41787 +
41788 +/** Gets the QTD that contains the list_head */
41789 +#define dwc_list_to_qtd(_list_head_ptr_) container_of(_list_head_ptr_, dwc_otg_qtd_t, qtd_list_entry)
41790 +
41791 +/** Check if QH is non-periodic */
41792 +#define dwc_qh_is_non_per(_qh_ptr_) ((_qh_ptr_->ep_type == UE_BULK) || \
41793 + (_qh_ptr_->ep_type == UE_CONTROL))
41794 +
41795 +/** High bandwidth multiplier as encoded in highspeed endpoint descriptors */
41796 +#define dwc_hb_mult(wMaxPacketSize) (1 + (((wMaxPacketSize) >> 11) & 0x03))
41797 +
41798 +/** Packet size for any kind of endpoint descriptor */
41799 +#define dwc_max_packet(wMaxPacketSize) ((wMaxPacketSize) & 0x07ff)
41800 +
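+/*
+ * Worked example for the two macros above: a high-bandwidth isochronous
+ * endpoint reporting wMaxPacketSize = 0x1400 encodes "2 additional
+ * transactions per microframe" in bits 12:11 and a 1024-byte payload in
+ * bits 10:0, so dwc_hb_mult(0x1400) == 3 and dwc_max_packet(0x1400) == 1024.
+ */
+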
41801 +/**
41802 + * Returns true if _frame1 is less than or equal to _frame2. The comparison is
41803 + * done modulo DWC_HFNUM_MAX_FRNUM. This accounts for the rollover of the
41804 + * frame number when the max frame number is reached.
41805 + */
41806 +static inline int dwc_frame_num_le(uint16_t frame1, uint16_t frame2)
41807 +{
41808 + return ((frame2 - frame1) & DWC_HFNUM_MAX_FRNUM) <=
41809 + (DWC_HFNUM_MAX_FRNUM >> 1);
41810 +}
41811 +
41812 +/**
41813 + * Returns true if _frame1 is greater than _frame2. The comparison is done
41814 + * modulo DWC_HFNUM_MAX_FRNUM. This accounts for the rollover of the frame
41815 + * number when the max frame number is reached.
41816 + */
41817 +static inline int dwc_frame_num_gt(uint16_t frame1, uint16_t frame2)
41818 +{
41819 + return (frame1 != frame2) &&
41820 + (((frame1 - frame2) & DWC_HFNUM_MAX_FRNUM) <
41821 + (DWC_HFNUM_MAX_FRNUM >> 1));
41822 +}
41823 +
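+/*
+ * Worked rollover example for the two comparisons above, assuming
+ * DWC_HFNUM_MAX_FRNUM is 0x3FFF: with frame1 = 0x3FFE and frame2 = 0x0001,
+ * (frame2 - frame1) & 0x3FFF == 3, so dwc_frame_num_le(0x3FFE, 0x0001) and
+ * dwc_frame_num_gt(0x0001, 0x3FFE) are both true; 0x0001 is treated as
+ * three frames after 0x3FFE across the wrap.
+ */
+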
41824 +/**
41825 + * Increments _frame by the amount specified by _inc. The addition is done
41826 + * modulo DWC_HFNUM_MAX_FRNUM. Returns the incremented value.
41827 + */
41828 +static inline uint16_t dwc_frame_num_inc(uint16_t frame, uint16_t inc)
41829 +{
41830 + return (frame + inc) & DWC_HFNUM_MAX_FRNUM;
41831 +}
41832 +
41833 +static inline uint16_t dwc_full_frame_num(uint16_t frame)
41834 +{
41835 + return (frame & DWC_HFNUM_MAX_FRNUM) >> 3;
41836 +}
41837 +
41838 +static inline uint16_t dwc_micro_frame_num(uint16_t frame)
41839 +{
41840 + return frame & 0x7;
41841 +}
41842 +
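+/*
+ * Worked example for the two helpers above: in high-speed mode the frame
+ * number advances once per microframe, so frame 0x1235 corresponds to full
+ * (1 ms) frame dwc_full_frame_num(0x1235) == 0x246 with microframe
+ * dwc_micro_frame_num(0x1235) == 5.
+ */
+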
41843 +extern void init_hcd_usecs(dwc_otg_hcd_t *_hcd);
41844 +
41845 +void dwc_otg_hcd_save_data_toggle(dwc_hc_t * hc,
41846 + dwc_otg_hc_regs_t * hc_regs,
41847 + dwc_otg_qtd_t * qtd);
41848 +
41849 +#ifdef DEBUG
41850 +/**
41851 + * Macro to sample the remaining PHY clocks left in the current frame. This
41852 + * may be used during debugging to determine the average time it takes to
41853 + * execute sections of code. There are two possible sample points, "a" and
41854 + * "b", so the _letter argument must be one of these values.
41855 + *
41856 + * To dump the average sample times, read the "hcd_frrem" sysfs attribute. For
41857 + * example, "cat /sys/devices/lm0/hcd_frrem".
41858 + */
41859 +#define dwc_sample_frrem(_hcd, _qh, _letter) \
41860 +{ \
41861 + hfnum_data_t hfnum; \
41862 + dwc_otg_qtd_t *qtd; \
41863 + qtd = list_entry(_qh->qtd_list.next, dwc_otg_qtd_t, qtd_list_entry); \
41864 + if (usb_pipeint(qtd->urb->pipe) && _qh->start_split_frame != 0 && !qtd->complete_split) { \
41865 + hfnum.d32 = DWC_READ_REG32(&_hcd->core_if->host_if->host_global_regs->hfnum); \
41866 + switch (hfnum.b.frnum & 0x7) { \
41867 + case 7: \
41868 + _hcd->hfnum_7_samples_##_letter++; \
41869 + _hcd->hfnum_7_frrem_accum_##_letter += hfnum.b.frrem; \
41870 + break; \
41871 + case 0: \
41872 + _hcd->hfnum_0_samples_##_letter++; \
41873 + _hcd->hfnum_0_frrem_accum_##_letter += hfnum.b.frrem; \
41874 + break; \
41875 + default: \
41876 + _hcd->hfnum_other_samples_##_letter++; \
41877 + _hcd->hfnum_other_frrem_accum_##_letter += hfnum.b.frrem; \
41878 + break; \
41879 + } \
41880 + } \
41881 +}
41882 +#else
41883 +#define dwc_sample_frrem(_hcd, _qh, _letter)
41884 +#endif
41885 +#endif
41886 +#endif /* DWC_DEVICE_ONLY */
41887 --- /dev/null
41888 +++ b/drivers/usb/host/dwc_otg/dwc_otg_hcd_ddma.c
41889 @@ -0,0 +1,1134 @@
41890 +/*==========================================================================
41891 + * $File: //dwh/usb_iip/dev/software/otg/linux/drivers/dwc_otg_hcd_ddma.c $
41892 + * $Revision: #10 $
41893 + * $Date: 2011/10/20 $
41894 + * $Change: 1869464 $
41895 + *
41896 + * Synopsys HS OTG Linux Software Driver and documentation (hereinafter,
41897 + * "Software") is an Unsupported proprietary work of Synopsys, Inc. unless
41898 + * otherwise expressly agreed to in writing between Synopsys and you.
41899 + *
41900 + * The Software IS NOT an item of Licensed Software or Licensed Product under
41901 + * any End User Software License Agreement or Agreement for Licensed Product
41902 + * with Synopsys or any supplement thereto. You are permitted to use and
41903 + * redistribute this Software in source and binary forms, with or without
41904 + * modification, provided that redistributions of source code must retain this
41905 + * notice. You may not view, use, disclose, copy or distribute this file or
41906 + * any information contained herein except pursuant to this license grant from
41907 + * Synopsys. If you do not agree with this notice, including the disclaimer
41908 + * below, then you are not authorized to use the Software.
41909 + *
41910 + * THIS SOFTWARE IS BEING DISTRIBUTED BY SYNOPSYS SOLELY ON AN "AS IS" BASIS
41911 + * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
41912 + * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
41913 + * ARE HEREBY DISCLAIMED. IN NO EVENT SHALL SYNOPSYS BE LIABLE FOR ANY DIRECT,
41914 + * INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
41915 + * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
41916 + * SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
41917 + * CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
41918 + * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
41919 + * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH
41920 + * DAMAGE.
41921 + * ========================================================================== */
41922 +#ifndef DWC_DEVICE_ONLY
41923 +
41924 +/** @file
41925 + * This file contains Descriptor DMA support implementation for host mode.
41926 + */
41927 +
41928 +#include "dwc_otg_hcd.h"
41929 +#include "dwc_otg_regs.h"
41930 +
41931 +extern bool microframe_schedule;
41932 +
41933 +static inline uint8_t frame_list_idx(uint16_t frame)
41934 +{
41935 + return (frame & (MAX_FRLIST_EN_NUM - 1));
41936 +}
41937 +
41938 +static inline uint16_t desclist_idx_inc(uint16_t idx, uint16_t inc, uint8_t speed)
41939 +{
41940 + return (idx + inc) &
41941 + (((speed ==
41942 + DWC_OTG_EP_SPEED_HIGH) ? MAX_DMA_DESC_NUM_HS_ISOC :
41943 + MAX_DMA_DESC_NUM_GENERIC) - 1);
41944 +}
41945 +
41946 +static inline uint16_t desclist_idx_dec(uint16_t idx, uint16_t inc, uint8_t speed)
41947 +{
41948 + return (idx - inc) &
41949 + (((speed ==
41950 + DWC_OTG_EP_SPEED_HIGH) ? MAX_DMA_DESC_NUM_HS_ISOC :
41951 + MAX_DMA_DESC_NUM_GENERIC) - 1);
41952 +}
41953 +
41954 +static inline uint16_t max_desc_num(dwc_otg_qh_t * qh)
41955 +{
41956 + return (((qh->ep_type == UE_ISOCHRONOUS)
41957 + && (qh->dev_speed == DWC_OTG_EP_SPEED_HIGH))
41958 + ? MAX_DMA_DESC_NUM_HS_ISOC : MAX_DMA_DESC_NUM_GENERIC);
41959 +}
41960 +static inline uint16_t frame_incr_val(dwc_otg_qh_t * qh)
41961 +{
41962 + return ((qh->dev_speed == DWC_OTG_EP_SPEED_HIGH)
41963 + ? ((qh->interval + 8 - 1) / 8)
41964 + : qh->interval);
41965 +}
41966 +
41967 +static int desc_list_alloc(struct device *dev, dwc_otg_qh_t * qh)
41968 +{
41969 + int retval = 0;
41970 +
41971 + qh->desc_list = (dwc_otg_host_dma_desc_t *)
41972 + DWC_DMA_ALLOC(dev, sizeof(dwc_otg_host_dma_desc_t) * max_desc_num(qh),
41973 + &qh->desc_list_dma);
41974 +
41975 + if (!qh->desc_list) {
41976 + retval = -DWC_E_NO_MEMORY;
41977 + DWC_ERROR("%s: DMA descriptor list allocation failed\n", __func__);
41978 + return retval;
41979 + }
41980 +
41981 + dwc_memset(qh->desc_list, 0x00,
41982 + sizeof(dwc_otg_host_dma_desc_t) * max_desc_num(qh));
41983 +
41984 + qh->n_bytes =
41985 + (uint32_t *) DWC_ALLOC(sizeof(uint32_t) * max_desc_num(qh));
41986 +
41987 + if (!qh->n_bytes) {
41988 + retval = -DWC_E_NO_MEMORY;
41989 + DWC_ERROR
41990 + ("%s: Failed to allocate array for descriptors' size actual values\n",
41991 + __func__);
41992 +
41993 + }
41994 + return retval;
41995 +
41996 +}
41997 +
41998 +static void desc_list_free(struct device *dev, dwc_otg_qh_t * qh)
41999 +{
42000 + if (qh->desc_list) {
42001 + DWC_DMA_FREE(dev, max_desc_num(qh), qh->desc_list,
42002 + qh->desc_list_dma);
42003 + qh->desc_list = NULL;
42004 + }
42005 +
42006 + if (qh->n_bytes) {
42007 + DWC_FREE(qh->n_bytes);
42008 + qh->n_bytes = NULL;
42009 + }
42010 +}
42011 +
42012 +static int frame_list_alloc(dwc_otg_hcd_t * hcd)
42013 +{
42014 + struct device *dev = dwc_otg_hcd_to_dev(hcd);
42015 + int retval = 0;
42016 +
42017 + if (hcd->frame_list)
42018 + return 0;
42019 +
42020 + hcd->frame_list = DWC_DMA_ALLOC(dev, 4 * MAX_FRLIST_EN_NUM,
42021 + &hcd->frame_list_dma);
42022 + if (!hcd->frame_list) {
42023 + retval = -DWC_E_NO_MEMORY;
42024 + DWC_ERROR("%s: Frame List allocation failed\n", __func__);
42025 + }
42026 + if (hcd->frame_list)
42027 + dwc_memset(hcd->frame_list, 0x00, 4 * MAX_FRLIST_EN_NUM);
42028 +
42029 + return retval;
42030 +}
42031 +
42032 +static void frame_list_free(dwc_otg_hcd_t * hcd)
42033 +{
42034 + struct device *dev = dwc_otg_hcd_to_dev(hcd);
42035 +
42036 + if (!hcd->frame_list)
42037 + return;
42038 +
42039 + DWC_DMA_FREE(dev, 4 * MAX_FRLIST_EN_NUM, hcd->frame_list, hcd->frame_list_dma);
42040 + hcd->frame_list = NULL;
42041 +}
42042 +
42043 +static void per_sched_enable(dwc_otg_hcd_t * hcd, uint16_t fr_list_en)
42044 +{
42045 +
42046 + hcfg_data_t hcfg;
42047 +
42048 + hcfg.d32 = DWC_READ_REG32(&hcd->core_if->host_if->host_global_regs->hcfg);
42049 +
42050 + if (hcfg.b.perschedena) {
42051 + /* already enabled */
42052 + return;
42053 + }
42054 +
42055 + DWC_WRITE_REG32(&hcd->core_if->host_if->host_global_regs->hflbaddr,
42056 + hcd->frame_list_dma);
42057 +
42058 + switch (fr_list_en) {
42059 + case 64:
42060 + hcfg.b.frlisten = 3;
42061 + break;
42062 + case 32:
42063 + hcfg.b.frlisten = 2;
42064 + break;
42065 + case 16:
42066 + hcfg.b.frlisten = 1;
42067 + break;
42068 + case 8:
42069 + hcfg.b.frlisten = 0;
42070 + break;
42071 + default:
42072 + break;
42073 + }
42074 +
42075 + hcfg.b.perschedena = 1;
42076 +
42077 + DWC_DEBUGPL(DBG_HCD, "Enabling Periodic schedule\n");
42078 + DWC_WRITE_REG32(&hcd->core_if->host_if->host_global_regs->hcfg, hcfg.d32);
42079 +
42080 +}
42081 +
42082 +static void per_sched_disable(dwc_otg_hcd_t * hcd)
42083 +{
42084 + hcfg_data_t hcfg;
42085 +
42086 + hcfg.d32 = DWC_READ_REG32(&hcd->core_if->host_if->host_global_regs->hcfg);
42087 +
42088 + if (!hcfg.b.perschedena) {
42089 + /* already disabled */
42090 + return;
42091 + }
42092 + hcfg.b.perschedena = 0;
42093 +
42094 + DWC_DEBUGPL(DBG_HCD, "Disabling Periodic schedule\n");
42095 + DWC_WRITE_REG32(&hcd->core_if->host_if->host_global_regs->hcfg, hcfg.d32);
42096 +}
42097 +
42098 +/*
42099 + * Activates/Deactivates FrameList entries for the channel
42100 + * based on endpoint servicing period.
42101 + */
42102 +void update_frame_list(dwc_otg_hcd_t * hcd, dwc_otg_qh_t * qh, uint8_t enable)
42103 +{
42104 + uint16_t i, j, inc;
42105 + dwc_hc_t *hc = NULL;
42106 +
42107 + if (!qh->channel) {
42108 + DWC_ERROR("qh->channel = %p", qh->channel);
42109 + return;
42110 + }
42111 +
42112 + if (!hcd) {
42113 + DWC_ERROR("------hcd = %p", hcd);
42114 + return;
42115 + }
42116 +
42117 + if (!hcd->frame_list) {
42118 + DWC_ERROR("-------hcd->frame_list = %p", hcd->frame_list);
42119 + return;
42120 + }
42121 +
42122 + hc = qh->channel;
42123 + inc = frame_incr_val(qh);
42124 + if (qh->ep_type == UE_ISOCHRONOUS)
42125 + i = frame_list_idx(qh->sched_frame);
42126 + else
42127 + i = 0;
42128 +
42129 + j = i;
42130 + do {
42131 + if (enable)
42132 + hcd->frame_list[j] |= (1 << hc->hc_num);
42133 + else
42134 + hcd->frame_list[j] &= ~(1 << hc->hc_num);
42135 + j = (j + inc) & (MAX_FRLIST_EN_NUM - 1);
42136 + }
42137 + while (j != i);
42138 + if (!enable)
42139 + return;
42140 + hc->schinfo = 0;
42141 + if (qh->channel->speed == DWC_OTG_EP_SPEED_HIGH) {
42142 + j = 1;
42143 + /* TODO - check this */
42144 + inc = (8 + qh->interval - 1) / qh->interval;
42145 + for (i = 0; i < inc; i++) {
42146 + hc->schinfo |= j;
42147 + j = j << qh->interval;
42148 + }
42149 + } else {
42150 + hc->schinfo = 0xff;
42151 + }
42152 +}
42153 +
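+/*
+ * Worked example for update_frame_list() above, assuming MAX_FRLIST_EN_NUM
+ * is 64: a high-speed interrupt endpoint with qh->interval = 16 microframes
+ * gives frame_incr_val() == (16 + 8 - 1) / 8 == 2 frames, so the channel's
+ * bit is set in every second entry of the frame list, starting from entry 0
+ * for non-isochronous endpoints.
+ */
+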
42154 +#if 1
42155 +void dump_frame_list(dwc_otg_hcd_t * hcd)
42156 +{
42157 + int i = 0;
42158 + DWC_PRINTF("--FRAME LIST (hex) --\n");
42159 + for (i = 0; i < MAX_FRLIST_EN_NUM; i++) {
42160 + DWC_PRINTF("%x\t", hcd->frame_list[i]);
42161 + if (!(i % 8) && i)
42162 + DWC_PRINTF("\n");
42163 + }
42164 + DWC_PRINTF("\n----\n");
42165 +
42166 +}
42167 +#endif
42168 +
42169 +static void release_channel_ddma(dwc_otg_hcd_t * hcd, dwc_otg_qh_t * qh)
42170 +{
42171 + dwc_hc_t *hc = qh->channel;
42172 + if (dwc_qh_is_non_per(qh)) {
42173 + if (!microframe_schedule)
42174 + hcd->non_periodic_channels--;
42175 + else
42176 + hcd->available_host_channels++;
42177 + } else
42178 + update_frame_list(hcd, qh, 0);
42179 +
42180 + /*
42181 +	 * The condition is added to prevent a double cleanup attempt in case of device
42182 + * disconnect. See channel cleanup in dwc_otg_hcd_disconnect_cb().
42183 + */
42184 + if (hc->qh) {
42185 + dwc_otg_hc_cleanup(hcd->core_if, hc);
42186 + DWC_CIRCLEQ_INSERT_TAIL(&hcd->free_hc_list, hc, hc_list_entry);
42187 + hc->qh = NULL;
42188 + }
42189 +
42190 + qh->channel = NULL;
42191 + qh->ntd = 0;
42192 +
42193 + if (qh->desc_list) {
42194 + dwc_memset(qh->desc_list, 0x00,
42195 + sizeof(dwc_otg_host_dma_desc_t) * max_desc_num(qh));
42196 + }
42197 +}
42198 +
42199 +/**
42200 + * Initializes a QH structure's Descriptor DMA related members.
42201 + * Allocates memory for descriptor list.
42202 + * On first periodic QH, allocates memory for FrameList
42203 + * and enables periodic scheduling.
42204 + *
42205 + * @param hcd The HCD state structure for the DWC OTG controller.
42206 + * @param qh The QH to init.
42207 + *
42208 + * @return 0 if successful, negative error code otherwise.
42209 + */
42210 +int dwc_otg_hcd_qh_init_ddma(dwc_otg_hcd_t * hcd, dwc_otg_qh_t * qh)
42211 +{
42212 + struct device *dev = dwc_otg_hcd_to_dev(hcd);
42213 + int retval = 0;
42214 +
42215 + if (qh->do_split) {
42216 + DWC_ERROR("SPLIT Transfers are not supported in Descriptor DMA.\n");
42217 + return -1;
42218 + }
42219 +
42220 + retval = desc_list_alloc(dev, qh);
42221 +
42222 + if ((retval == 0)
42223 + && (qh->ep_type == UE_ISOCHRONOUS || qh->ep_type == UE_INTERRUPT)) {
42224 + if (!hcd->frame_list) {
42225 + retval = frame_list_alloc(hcd);
42226 + /* Enable periodic schedule on first periodic QH */
42227 + if (retval == 0)
42228 + per_sched_enable(hcd, MAX_FRLIST_EN_NUM);
42229 + }
42230 + }
42231 +
42232 + qh->ntd = 0;
42233 +
42234 + return retval;
42235 +}
42236 +
42237 +/**
42238 + * Frees descriptor list memory associated with the QH.
42239 + * If the QH is periodic and the last one, frees the FrameList memory
42240 + * and disables periodic scheduling.
42241 + *
42242 + * @param hcd The HCD state structure for the DWC OTG controller.
42243 + * @param qh The QH to free.
42244 + */
42245 +void dwc_otg_hcd_qh_free_ddma(dwc_otg_hcd_t * hcd, dwc_otg_qh_t * qh)
42246 +{
42247 + struct device *dev = dwc_otg_hcd_to_dev(hcd);
42248 +
42249 + desc_list_free(dev, qh);
42250 +
42251 + /*
42252 +	 * The channel may still be assigned in some cases, e.g. on an Isoc
42253 +	 * URB dequeue: the channel was halted but no subsequent ChHalted
42254 +	 * interrupt arrived to release it, so when this function is later
42255 +	 * reached from the endpoint disable routine the channel is still
42256 +	 * assigned.
42257 + */
42258 + if (qh->channel)
42259 + release_channel_ddma(hcd, qh);
42260 +
42261 + if ((qh->ep_type == UE_ISOCHRONOUS || qh->ep_type == UE_INTERRUPT)
42262 + && (microframe_schedule || !hcd->periodic_channels) && hcd->frame_list) {
42263 +
42264 + per_sched_disable(hcd);
42265 + frame_list_free(hcd);
42266 + }
42267 +}
42268 +
42269 +static uint8_t frame_to_desc_idx(dwc_otg_qh_t * qh, uint16_t frame_idx)
42270 +{
42271 + if (qh->dev_speed == DWC_OTG_EP_SPEED_HIGH) {
42272 + /*
42273 + * Descriptor set(8 descriptors) index
42274 + * which is 8-aligned.
42275 + */
42276 + return (frame_idx & ((MAX_DMA_DESC_NUM_HS_ISOC / 8) - 1)) * 8;
42277 + } else {
42278 + return (frame_idx & (MAX_DMA_DESC_NUM_GENERIC - 1));
42279 + }
42280 +}
42281 +
42282 +/*
42283 + * Determine the starting frame for an Isochronous transfer.
42284 + * A few frames are skipped to prevent a race condition with the HC.
42285 + */
42286 +static uint8_t calc_starting_frame(dwc_otg_hcd_t * hcd, dwc_otg_qh_t * qh,
42287 + uint8_t * skip_frames)
42288 +{
42289 + uint16_t frame = 0;
42290 + hcd->frame_number = dwc_otg_hcd_get_frame_number(hcd);
42291 +
42292 + /* sched_frame is always frame number(not uFrame) both in FS and HS !! */
42293 +
42294 + /*
42295 +	 * skip_frames limits the number of activated descriptors, to avoid
42296 +	 * the situation where the HC services the last activated
42297 +	 * descriptor first.
42298 +	 * Example for FS:
42299 +	 * The current frame is 1 and the scheduled frame is 3. Since the HC always fetches
42300 +	 * the descriptor corresponding to curr_frame+1, the descriptor for frame 2
42301 +	 * will be fetched. If the number of descriptors is 64 or greater, the
42302 +	 * list will be fully programmed with Active descriptors and there is a
42303 +	 * (rare) chance that the latest descriptor (considering rollback)
42304 +	 * corresponding to frame 2 is serviced first. The HS case is more likely because
42305 +	 * up to 11 uframes (16 in the code) may be skipped.
42306 + */
42307 + if (qh->dev_speed == DWC_OTG_EP_SPEED_HIGH) {
42308 + /*
42309 +		 * Also consider the uframe counter, so the transfer starts as soon as possible.
42310 +		 * If more than half of the frame has elapsed, skip 2 frames,
42311 +		 * otherwise just 1 frame.
42312 +		 * The starting descriptor index must be 8-aligned, so
42313 +		 * if the current frame is nearly complete the next one
42314 +		 * is skipped as well.
42315 + */
42316 +
42317 + if (dwc_micro_frame_num(hcd->frame_number) >= 5) {
42318 + *skip_frames = 2 * 8;
42319 + frame = dwc_frame_num_inc(hcd->frame_number, *skip_frames);
42320 + } else {
42321 + *skip_frames = 1 * 8;
42322 + frame = dwc_frame_num_inc(hcd->frame_number, *skip_frames);
42323 + }
42324 +
42325 + frame = dwc_full_frame_num(frame);
42326 + } else {
42327 + /*
42328 + * Two frames are skipped for FS - the current and the next.
42329 + * But for descriptor programming, 1 frame(descriptor) is enough,
42330 + * see example above.
42331 + */
42332 + *skip_frames = 1;
42333 + frame = dwc_frame_num_inc(hcd->frame_number, 2);
42334 + }
42335 +
42336 + return frame;
42337 +}
42338 +
42339 +/*
42340 + * Calculate initial descriptor index for isochronous transfer
42341 + * based on scheduled frame.
42342 + */
42343 +static uint8_t recalc_initial_desc_idx(dwc_otg_hcd_t * hcd, dwc_otg_qh_t * qh)
42344 +{
42345 + uint16_t frame = 0, fr_idx, fr_idx_tmp;
42346 + uint8_t skip_frames = 0;
42347 + /*
42348 +	 * With the current ISOC processing algorithm the channel is
42349 +	 * released when there are no more QTDs in the list (qh->ntd == 0).
42350 +	 * Thus this function is called only when qh->ntd == 0 and qh->channel == 0.
42351 +	 *
42352 +	 * The qh->channel != NULL branch is therefore unused, but is kept in the
42353 +	 * source file. It would be required for another possible approach:
42354 +	 * do not disable and release the channel when the ISOC session completes,
42355 +	 * just move the QH to the inactive schedule until a new QTD arrives.
42356 +	 * On a new QTD, the QH is moved back to the 'ready' schedule and the
42357 +	 * starting frame and therefore the starting desc_index are recalculated.
42358 +	 * In this case the channel is released only on ep_disable.
42359 + */
42360 +
42361 + /* Calculate starting descriptor index. For INTERRUPT endpoint it is always 0. */
42362 + if (qh->channel) {
42363 + frame = calc_starting_frame(hcd, qh, &skip_frames);
42364 + /*
42365 + * Calculate initial descriptor index based on FrameList current bitmap
42366 + * and servicing period.
42367 + */
42368 + fr_idx_tmp = frame_list_idx(frame);
42369 + fr_idx =
42370 + (MAX_FRLIST_EN_NUM + frame_list_idx(qh->sched_frame) -
42371 + fr_idx_tmp)
42372 + % frame_incr_val(qh);
42373 + fr_idx = (fr_idx + fr_idx_tmp) % MAX_FRLIST_EN_NUM;
42374 + } else {
42375 + qh->sched_frame = calc_starting_frame(hcd, qh, &skip_frames);
42376 + fr_idx = frame_list_idx(qh->sched_frame);
42377 + }
42378 +
42379 + qh->td_first = qh->td_last = frame_to_desc_idx(qh, fr_idx);
42380 +
42381 + return skip_frames;
42382 +}
42383 +
42384 +#define ISOC_URB_GIVEBACK_ASAP
42385 +
42386 +#define MAX_ISOC_XFER_SIZE_FS 1023
42387 +#define MAX_ISOC_XFER_SIZE_HS 3072
42388 +#define DESCNUM_THRESHOLD 4
42389 +
42390 +static void init_isoc_dma_desc(dwc_otg_hcd_t * hcd, dwc_otg_qh_t * qh,
42391 + uint8_t skip_frames)
42392 +{
42393 + struct dwc_otg_hcd_iso_packet_desc *frame_desc;
42394 + dwc_otg_qtd_t *qtd;
42395 + dwc_otg_host_dma_desc_t *dma_desc;
42396 + uint16_t idx, inc, n_desc, ntd_max, max_xfer_size;
42397 +
42398 + idx = qh->td_last;
42399 + inc = qh->interval;
42400 + n_desc = 0;
42401 +
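+	/* Maximum number of descriptors that may be active for this endpoint's servicing interval (rounded up). */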
42402 + ntd_max = (max_desc_num(qh) + qh->interval - 1) / qh->interval;
42403 + if (skip_frames && !qh->channel)
42404 + ntd_max = ntd_max - skip_frames / qh->interval;
42405 +
42406 + max_xfer_size =
42407 + (qh->dev_speed ==
42408 + DWC_OTG_EP_SPEED_HIGH) ? MAX_ISOC_XFER_SIZE_HS :
42409 + MAX_ISOC_XFER_SIZE_FS;
42410 +
42411 + DWC_CIRCLEQ_FOREACH(qtd, &qh->qtd_list, qtd_list_entry) {
42412 + while ((qh->ntd < ntd_max)
42413 + && (qtd->isoc_frame_index_last <
42414 + qtd->urb->packet_count)) {
42415 +
42416 + dma_desc = &qh->desc_list[idx];
42417 + dwc_memset(dma_desc, 0x00, sizeof(dwc_otg_host_dma_desc_t));
42418 +
42419 + frame_desc = &qtd->urb->iso_descs[qtd->isoc_frame_index_last];
42420 +
42421 + if (frame_desc->length > max_xfer_size)
42422 + qh->n_bytes[idx] = max_xfer_size;
42423 + else
42424 + qh->n_bytes[idx] = frame_desc->length;
42425 + dma_desc->status.b_isoc.n_bytes = qh->n_bytes[idx];
42426 + dma_desc->status.b_isoc.a = 1;
42427 + dma_desc->status.b_isoc.sts = 0;
42428 +
42429 + dma_desc->buf = qtd->urb->dma + frame_desc->offset;
42430 +
42431 + qh->ntd++;
42432 +
42433 + qtd->isoc_frame_index_last++;
42434 +
42435 +#ifdef ISOC_URB_GIVEBACK_ASAP
42436 + /*
42437 + * Set IOC for each descriptor corresponding to the
42438 + * last frame of the URB.
42439 + */
42440 + if (qtd->isoc_frame_index_last ==
42441 + qtd->urb->packet_count)
42442 + dma_desc->status.b_isoc.ioc = 1;
42443 +
42444 +#endif
42445 + idx = desclist_idx_inc(idx, inc, qh->dev_speed);
42446 + n_desc++;
42447 +
42448 + }
42449 + qtd->in_process = 1;
42450 + }
42451 +
42452 + qh->td_last = idx;
42453 +
42454 +#ifdef ISOC_URB_GIVEBACK_ASAP
42455 + /* Set IOC for the last descriptor if descriptor list is full */
42456 + if (qh->ntd == ntd_max) {
42457 + idx = desclist_idx_dec(qh->td_last, inc, qh->dev_speed);
42458 + qh->desc_list[idx].status.b_isoc.ioc = 1;
42459 + }
42460 +#else
42461 + /*
42462 + * Set IOC bit only for one descriptor.
42463 + * Always try to be ahead of HW processing,
42464 +	 * i.e. on IOC generation the driver activates the next descriptors while
42465 +	 * the core continues to process the descriptors following the one with IOC set.
42466 + */
42467 +
42468 + if (n_desc > DESCNUM_THRESHOLD) {
42469 + /*
42470 + * Move IOC "up". Required even if there is only one QTD
42471 +		 * in the list, because QTDs might continue to be queued,
42472 +		 * even though only one was queued at activation time.
42473 +		 * More than one QTD might actually be in the list if this function is called
42474 +		 * from XferCompletion - QTDs were queued during HW processing of the previous
42475 + * descriptor chunk.
42476 + */
42477 + idx = dwc_desclist_idx_dec(idx, inc * ((qh->ntd + 1) / 2), qh->dev_speed);
42478 + } else {
42479 + /*
42480 + * Set the IOC for the latest descriptor
42481 +		 * if either the number of descriptors is not greater than the threshold
42482 +		 * or no more new descriptors were activated.
42483 + */
42484 + idx = dwc_desclist_idx_dec(qh->td_last, inc, qh->dev_speed);
42485 + }
42486 +
42487 + qh->desc_list[idx].status.b_isoc.ioc = 1;
42488 +#endif
42489 +}
42490 +
42491 +static void init_non_isoc_dma_desc(dwc_otg_hcd_t * hcd, dwc_otg_qh_t * qh)
42492 +{
42493 +
42494 + dwc_hc_t *hc;
42495 + dwc_otg_host_dma_desc_t *dma_desc;
42496 + dwc_otg_qtd_t *qtd;
42497 + int num_packets, len, n_desc = 0;
42498 +
42499 + hc = qh->channel;
42500 +
42501 + /*
42502 + * Start with hc->xfer_buff initialized in
42503 +	 * assign_and_init_hc(); if an SG transfer consists of multiple URBs,
42504 +	 * this pointer is re-assigned to the buffer of the currently processed QTD.
42505 +	 * For a non-SG request there is always one QTD active.
42506 + */
42507 +
42508 + DWC_CIRCLEQ_FOREACH(qtd, &qh->qtd_list, qtd_list_entry) {
42509 +
42510 + if (n_desc) {
42511 + /* SG request - more than 1 QTDs */
42512 + hc->xfer_buff = (uint8_t *)qtd->urb->dma + qtd->urb->actual_length;
42513 + hc->xfer_len = qtd->urb->length - qtd->urb->actual_length;
42514 + }
42515 +
42516 + qtd->n_desc = 0;
42517 +
42518 + do {
42519 + dma_desc = &qh->desc_list[n_desc];
42520 + len = hc->xfer_len;
42521 +
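+			/*
+			 * Clamp oversized requests so that, after an IN transfer is rounded
+			 * up to whole max-packet-size packets below, the length still fits
+			 * in a single descriptor.
+			 */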
42522 + if (len > MAX_DMA_DESC_SIZE)
42523 + len = MAX_DMA_DESC_SIZE - hc->max_packet + 1;
42524 +
42525 + if (hc->ep_is_in) {
42526 + if (len > 0) {
42527 + num_packets = (len + hc->max_packet - 1) / hc->max_packet;
42528 + } else {
42529 + /* Need 1 packet for transfer length of 0. */
42530 + num_packets = 1;
42531 + }
42532 + /* Always program an integral # of max packets for IN transfers. */
42533 + len = num_packets * hc->max_packet;
42534 + }
42535 +
42536 + dma_desc->status.b.n_bytes = len;
42537 +
42538 + qh->n_bytes[n_desc] = len;
42539 +
42540 + if ((qh->ep_type == UE_CONTROL)
42541 + && (qtd->control_phase == DWC_OTG_CONTROL_SETUP))
42542 + dma_desc->status.b.sup = 1; /* Setup Packet */
42543 +
42544 + dma_desc->status.b.a = 1; /* Active descriptor */
42545 + dma_desc->status.b.sts = 0;
42546 +
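+			/* xfer_buff holds the DMA (bus) address here; keep the low 32 bits for the descriptor. */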
42547 + dma_desc->buf =
42548 + ((unsigned long)hc->xfer_buff & 0xffffffff);
42549 +
42550 + /*
42551 +			 * Last (or only) descriptor of an IN transfer
42552 +			 * with an actual size less than MaxPacket.
42553 + */
42554 + if (len > hc->xfer_len) {
42555 + hc->xfer_len = 0;
42556 + } else {
42557 + hc->xfer_buff += len;
42558 + hc->xfer_len -= len;
42559 + }
42560 +
42561 + qtd->n_desc++;
42562 + n_desc++;
42563 + }
42564 + while ((hc->xfer_len > 0) && (n_desc != MAX_DMA_DESC_NUM_GENERIC));
42565 +
42566 +
42567 + qtd->in_process = 1;
42568 +
42569 + if (qh->ep_type == UE_CONTROL)
42570 + break;
42571 +
42572 + if (n_desc == MAX_DMA_DESC_NUM_GENERIC)
42573 + break;
42574 + }
42575 +
42576 + if (n_desc) {
42577 + /* Request Transfer Complete interrupt for the last descriptor */
42578 + qh->desc_list[n_desc - 1].status.b.ioc = 1;
42579 + /* End of List indicator */
42580 + qh->desc_list[n_desc - 1].status.b.eol = 1;
42581 +
42582 + hc->ntd = n_desc;
42583 + }
42584 +}
42585 +
42586 +/**
42587 + * For Control and Bulk endpoints initializes descriptor list
42588 + * and starts the transfer.
42589 + *
42590 + * For Interrupt and Isochronous endpoints initializes descriptor list
42591 + * then updates FrameList, marking appropriate entries as active.
42592 + * In case of Isochronous, the starting descriptor index is calculated based
42593 + * on the scheduled frame, but only on the first transfer descriptor within a session.
42594 + * Then starts the transfer via enabling the channel.
42595 + * For Isochronous endpoint the channel is not halted on XferComplete
42596 + * interrupt, so it remains assigned to the endpoint (QH) until the session is done.
42597 + *
42598 + * @param hcd The HCD state structure for the DWC OTG controller.
42599 + * @param qh The QH to start the transfer on.
42602 + */
42603 +void dwc_otg_hcd_start_xfer_ddma(dwc_otg_hcd_t * hcd, dwc_otg_qh_t * qh)
42604 +{
42605 + /* Channel is already assigned */
42606 + dwc_hc_t *hc = qh->channel;
42607 + uint8_t skip_frames = 0;
42608 +
42609 + switch (hc->ep_type) {
42610 + case DWC_OTG_EP_TYPE_CONTROL:
42611 + case DWC_OTG_EP_TYPE_BULK:
42612 + init_non_isoc_dma_desc(hcd, qh);
42613 +
42614 + dwc_otg_hc_start_transfer_ddma(hcd->core_if, hc);
42615 + break;
42616 + case DWC_OTG_EP_TYPE_INTR:
42617 + init_non_isoc_dma_desc(hcd, qh);
42618 +
42619 + update_frame_list(hcd, qh, 1);
42620 +
42621 + dwc_otg_hc_start_transfer_ddma(hcd->core_if, hc);
42622 + break;
42623 + case DWC_OTG_EP_TYPE_ISOC:
42624 +
42625 + if (!qh->ntd)
42626 + skip_frames = recalc_initial_desc_idx(hcd, qh);
42627 +
42628 + init_isoc_dma_desc(hcd, qh, skip_frames);
42629 +
42630 + if (!hc->xfer_started) {
42631 +
42632 + update_frame_list(hcd, qh, 1);
42633 +
42634 + /*
42635 +			 * Always set ntd to the maximum, instead of the actual size,
42636 +			 * otherwise ntd would have to be changed while the
42637 +			 * channel is enabled, which is not recommended.
42638 +			 *
42639 + */
42640 + hc->ntd = max_desc_num(qh);
42641 + /* Enable channel only once for ISOC */
42642 + dwc_otg_hc_start_transfer_ddma(hcd->core_if, hc);
42643 + }
42644 +
42645 + break;
42646 + default:
42647 +
42648 + break;
42649 + }
42650 +}
42651 +
42652 +static void complete_isoc_xfer_ddma(dwc_otg_hcd_t * hcd,
42653 + dwc_hc_t * hc,
42654 + dwc_otg_hc_regs_t * hc_regs,
42655 + dwc_otg_halt_status_e halt_status)
42656 +{
42657 + struct dwc_otg_hcd_iso_packet_desc *frame_desc;
42658 + dwc_otg_qtd_t *qtd, *qtd_tmp;
42659 + dwc_otg_qh_t *qh;
42660 + dwc_otg_host_dma_desc_t *dma_desc;
42661 + uint16_t idx, remain;
42662 + uint8_t urb_compl;
42663 +
42664 + qh = hc->qh;
42665 + idx = qh->td_first;
42666 +
42667 + if (hc->halt_status == DWC_OTG_HC_XFER_URB_DEQUEUE) {
42668 + DWC_CIRCLEQ_FOREACH_SAFE(qtd, qtd_tmp, &hc->qh->qtd_list, qtd_list_entry)
42669 + qtd->in_process = 0;
42670 + return;
42671 + } else if ((halt_status == DWC_OTG_HC_XFER_AHB_ERR) ||
42672 + (halt_status == DWC_OTG_HC_XFER_BABBLE_ERR)) {
42673 + /*
42674 +		 * The channel is halted in these error cases,
42675 +		 * which are considered serious issues.
42676 +		 * Complete all URBs marking all frames as failed,
42677 +		 * irrespective of whether some of the descriptors (frames) succeeded or not.
42678 +		 * Pass the error code to the completion routine as well, to
42679 +		 * update urb->status; some class drivers might use it to stop
42680 +		 * queuing transfer requests.
42681 + */
42682 + int err = (halt_status == DWC_OTG_HC_XFER_AHB_ERR)
42683 + ? (-DWC_E_IO)
42684 + : (-DWC_E_OVERFLOW);
42685 +
42686 + DWC_CIRCLEQ_FOREACH_SAFE(qtd, qtd_tmp, &hc->qh->qtd_list, qtd_list_entry) {
42687 + for (idx = 0; idx < qtd->urb->packet_count; idx++) {
42688 + frame_desc = &qtd->urb->iso_descs[idx];
42689 + frame_desc->status = err;
42690 + }
42691 + hcd->fops->complete(hcd, qtd->urb->priv, qtd->urb, err);
42692 + dwc_otg_hcd_qtd_remove_and_free(hcd, qtd, qh);
42693 + }
42694 + return;
42695 + }
42696 +
42697 + DWC_CIRCLEQ_FOREACH_SAFE(qtd, qtd_tmp, &hc->qh->qtd_list, qtd_list_entry) {
42698 +
42699 + if (!qtd->in_process)
42700 + break;
42701 +
42702 + urb_compl = 0;
42703 +
42704 + do {
42705 +
42706 + dma_desc = &qh->desc_list[idx];
42707 +
42708 + frame_desc = &qtd->urb->iso_descs[qtd->isoc_frame_index];
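+			/* For IN transfers the core writes the untransferred byte count back into n_bytes; OUT leaves no remainder. */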
42709 + remain = hc->ep_is_in ? dma_desc->status.b_isoc.n_bytes : 0;
42710 +
42711 + if (dma_desc->status.b_isoc.sts == DMA_DESC_STS_PKTERR) {
42712 + /*
42713 + * XactError or, unable to complete all the transactions
42714 + * in the scheduled micro-frame/frame,
42715 + * both indicated by DMA_DESC_STS_PKTERR.
42716 + */
42717 + qtd->urb->error_count++;
42718 + frame_desc->actual_length = qh->n_bytes[idx] - remain;
42719 + frame_desc->status = -DWC_E_PROTOCOL;
42720 + } else {
42721 + /* Success */
42722 +
42723 + frame_desc->actual_length = qh->n_bytes[idx] - remain;
42724 + frame_desc->status = 0;
42725 + }
42726 +
42727 + if (++qtd->isoc_frame_index == qtd->urb->packet_count) {
42728 + /*
42729 + * urb->status is not used for isoc transfers here.
42730 +				 * The individual frame_desc statuses are used instead.
42731 + */
42732 +
42733 + hcd->fops->complete(hcd, qtd->urb->priv, qtd->urb, 0);
42734 + dwc_otg_hcd_qtd_remove_and_free(hcd, qtd, qh);
42735 +
42736 + /*
42737 + * This check is necessary because urb_dequeue can be called
42738 +				 * from the urb complete callback (e.g. by a sound driver).
42739 + * All pending URBs are dequeued there, so no need for
42740 + * further processing.
42741 + */
42742 + if (hc->halt_status == DWC_OTG_HC_XFER_URB_DEQUEUE) {
42743 + return;
42744 + }
42745 +
42746 + urb_compl = 1;
42747 +
42748 + }
42749 +
42750 + qh->ntd--;
42751 +
42752 + /* Stop if IOC requested descriptor reached */
42753 + if (dma_desc->status.b_isoc.ioc) {
42754 + idx = desclist_idx_inc(idx, qh->interval, hc->speed);
42755 + goto stop_scan;
42756 + }
42757 +
42758 + idx = desclist_idx_inc(idx, qh->interval, hc->speed);
42759 +
42760 + if (urb_compl)
42761 + break;
42762 + }
42763 + while (idx != qh->td_first);
42764 + }
42765 +stop_scan:
42766 + qh->td_first = idx;
42767 +}
42768 +
42769 +uint8_t update_non_isoc_urb_state_ddma(dwc_otg_hcd_t * hcd,
42770 + dwc_hc_t * hc,
42771 + dwc_otg_qtd_t * qtd,
42772 + dwc_otg_host_dma_desc_t * dma_desc,
42773 + dwc_otg_halt_status_e halt_status,
42774 + uint32_t n_bytes, uint8_t * xfer_done)
42775 +{
42776 +
42777 + uint16_t remain = hc->ep_is_in ? dma_desc->status.b.n_bytes : 0;
42778 + dwc_otg_hcd_urb_t *urb = qtd->urb;
42779 +
42780 + if (halt_status == DWC_OTG_HC_XFER_AHB_ERR) {
42781 + urb->status = -DWC_E_IO;
42782 + return 1;
42783 + }
42784 + if (dma_desc->status.b.sts == DMA_DESC_STS_PKTERR) {
42785 + switch (halt_status) {
42786 + case DWC_OTG_HC_XFER_STALL:
42787 + urb->status = -DWC_E_PIPE;
42788 + break;
42789 + case DWC_OTG_HC_XFER_BABBLE_ERR:
42790 + urb->status = -DWC_E_OVERFLOW;
42791 + break;
42792 + case DWC_OTG_HC_XFER_XACT_ERR:
42793 + urb->status = -DWC_E_PROTOCOL;
42794 + break;
42795 + default:
42796 + DWC_ERROR("%s: Unhandled descriptor error status (%d)\n", __func__,
42797 + halt_status);
42798 + break;
42799 + }
42800 + return 1;
42801 + }
42802 +
42803 + if (dma_desc->status.b.a == 1) {
42804 + DWC_DEBUGPL(DBG_HCDV,
42805 + "Active descriptor encountered on channel %d\n",
42806 + hc->hc_num);
42807 + return 0;
42808 + }
42809 +
42810 + if (hc->ep_type == DWC_OTG_EP_TYPE_CONTROL) {
42811 + if (qtd->control_phase == DWC_OTG_CONTROL_DATA) {
42812 + urb->actual_length += n_bytes - remain;
42813 + if (remain || urb->actual_length == urb->length) {
42814 + /*
42815 +				 * For the Control Data stage do not set urb->status = 0, to prevent
42816 +				 * the URB callback. Set it when the Status phase is done. See below.
42817 + */
42818 + *xfer_done = 1;
42819 + }
42820 +
42821 + } else if (qtd->control_phase == DWC_OTG_CONTROL_STATUS) {
42822 + urb->status = 0;
42823 + *xfer_done = 1;
42824 + }
42825 + /* No handling for SETUP stage */
42826 + } else {
42827 + /* BULK and INTR */
42828 + urb->actual_length += n_bytes - remain;
42829 + if (remain || urb->actual_length == urb->length) {
42830 + urb->status = 0;
42831 + *xfer_done = 1;
42832 + }
42833 + }
42834 +
42835 + return 0;
42836 +}
42837 +
42838 +static void complete_non_isoc_xfer_ddma(dwc_otg_hcd_t * hcd,
42839 + dwc_hc_t * hc,
42840 + dwc_otg_hc_regs_t * hc_regs,
42841 + dwc_otg_halt_status_e halt_status)
42842 +{
42843 + dwc_otg_hcd_urb_t *urb = NULL;
42844 + dwc_otg_qtd_t *qtd, *qtd_tmp;
42845 + dwc_otg_qh_t *qh;
42846 + dwc_otg_host_dma_desc_t *dma_desc;
42847 + uint32_t n_bytes, n_desc, i;
42848 + uint8_t failed = 0, xfer_done;
42849 +
42850 + n_desc = 0;
42851 +
42852 + qh = hc->qh;
42853 +
42854 + if (hc->halt_status == DWC_OTG_HC_XFER_URB_DEQUEUE) {
42855 + DWC_CIRCLEQ_FOREACH_SAFE(qtd, qtd_tmp, &hc->qh->qtd_list, qtd_list_entry) {
42856 + qtd->in_process = 0;
42857 + }
42858 + return;
42859 + }
42860 +
42861 + DWC_CIRCLEQ_FOREACH_SAFE(qtd, qtd_tmp, &qh->qtd_list, qtd_list_entry) {
42862 +
42863 + urb = qtd->urb;
42864 +
42865 + n_bytes = 0;
42866 + xfer_done = 0;
42867 +
42868 + for (i = 0; i < qtd->n_desc; i++) {
42869 + dma_desc = &qh->desc_list[n_desc];
42870 +
42871 + n_bytes = qh->n_bytes[n_desc];
42872 +
42873 + failed =
42874 + update_non_isoc_urb_state_ddma(hcd, hc, qtd,
42875 + dma_desc,
42876 + halt_status, n_bytes,
42877 + &xfer_done);
42878 +
42879 + if (failed
42880 + || (xfer_done
42881 + && (urb->status != -DWC_E_IN_PROGRESS))) {
42882 +
42883 + hcd->fops->complete(hcd, urb->priv, urb,
42884 + urb->status);
42885 + dwc_otg_hcd_qtd_remove_and_free(hcd, qtd, qh);
42886 +
42887 + if (failed)
42888 + goto stop_scan;
42889 + } else if (qh->ep_type == UE_CONTROL) {
42890 + if (qtd->control_phase == DWC_OTG_CONTROL_SETUP) {
42891 + if (urb->length > 0) {
42892 + qtd->control_phase = DWC_OTG_CONTROL_DATA;
42893 + } else {
42894 + qtd->control_phase = DWC_OTG_CONTROL_STATUS;
42895 + }
42896 + DWC_DEBUGPL(DBG_HCDV, " Control setup transaction done\n");
42897 + } else if (qtd->control_phase == DWC_OTG_CONTROL_DATA) {
42898 + if (xfer_done) {
42899 + qtd->control_phase = DWC_OTG_CONTROL_STATUS;
42900 + DWC_DEBUGPL(DBG_HCDV, " Control data transfer done\n");
42901 + } else if (i + 1 == qtd->n_desc) {
42902 + /*
42903 + * Last descriptor for Control data stage which is
42904 + * not completed yet.
42905 + */
42906 + dwc_otg_hcd_save_data_toggle(hc, hc_regs, qtd);
42907 + }
42908 + }
42909 + }
42910 +
42911 + n_desc++;
42912 + }
42913 +
42914 + }
42915 +
42916 +stop_scan:
42917 +
42918 + if (qh->ep_type != UE_CONTROL) {
42919 + /*
42920 + * Resetting the data toggle for bulk
42921 + * and interrupt endpoints in case of stall. See handle_hc_stall_intr()
42922 + */
42923 + if (halt_status == DWC_OTG_HC_XFER_STALL)
42924 + qh->data_toggle = DWC_OTG_HC_PID_DATA0;
42925 + else
42926 + dwc_otg_hcd_save_data_toggle(hc, hc_regs, qtd);
42927 + }
42928 +
42929 + if (halt_status == DWC_OTG_HC_XFER_COMPLETE) {
42930 + hcint_data_t hcint;
42931 + hcint.d32 = DWC_READ_REG32(&hc_regs->hcint);
42932 + if (hcint.b.nyet) {
42933 + /*
42934 + * Got a NYET on the last transaction of the transfer. It
42935 + * means that the endpoint should be in the PING state at the
42936 + * beginning of the next transfer.
42937 + */
42938 + qh->ping_state = 1;
42939 + clear_hc_int(hc_regs, nyet);
42940 + }
42941 +
42942 + }
42943 +
42944 +}
42945 +
42946 +/**
42947 + * This function is called from interrupt handlers.
42948 + * Scans the descriptor list, updates the URB's status and
42949 + * calls the completion routine for the URB if it is done.
42950 + * Releases the channel to be used by other transfers.
42951 + * In case of an Isochronous endpoint the channel is not halted until
42952 + * the end of the session, i.e. until the QTD list is empty.
42953 + * If a periodic channel is released, the FrameList is updated accordingly.
42954 + *
42955 + * Calls transaction selection routines to activate pending transfers.
42956 + *
42957 + * @param hcd The HCD state structure for the DWC OTG controller.
42958 + * @param hc Host channel, the transfer is completed on.
42959 + * @param hc_regs Host channel registers.
42960 + * @param halt_status Reason the channel is being halted,
42961 + * or just XferComplete for isochronous transfer
42962 + */
42963 +void dwc_otg_hcd_complete_xfer_ddma(dwc_otg_hcd_t * hcd,
42964 + dwc_hc_t * hc,
42965 + dwc_otg_hc_regs_t * hc_regs,
42966 + dwc_otg_halt_status_e halt_status)
42967 +{
42968 + uint8_t continue_isoc_xfer = 0;
42969 + dwc_otg_transaction_type_e tr_type;
42970 + dwc_otg_qh_t *qh = hc->qh;
42971 +
42972 + if (hc->ep_type == DWC_OTG_EP_TYPE_ISOC) {
42973 +
42974 + complete_isoc_xfer_ddma(hcd, hc, hc_regs, halt_status);
42975 +
42976 + /* Release the channel if halted or session completed */
42977 + if (halt_status != DWC_OTG_HC_XFER_COMPLETE ||
42978 + DWC_CIRCLEQ_EMPTY(&qh->qtd_list)) {
42979 +
42980 + /* Halt the channel if session completed */
42981 + if (halt_status == DWC_OTG_HC_XFER_COMPLETE) {
42982 + dwc_otg_hc_halt(hcd->core_if, hc, halt_status);
42983 + }
42984 +
42985 + release_channel_ddma(hcd, qh);
42986 + dwc_otg_hcd_qh_remove(hcd, qh);
42987 + } else {
42988 + /* Keep in assigned schedule to continue transfer */
42989 + DWC_LIST_MOVE_HEAD(&hcd->periodic_sched_assigned,
42990 + &qh->qh_list_entry);
42991 + continue_isoc_xfer = 1;
42992 +
42993 + }
42994 + /** @todo Consider the case when period exceeds FrameList size.
42995 + * Frame Rollover interrupt should be used.
42996 + */
42997 + } else {
42998 + /* Scan descriptor list to complete the URB(s), then release the channel */
42999 + complete_non_isoc_xfer_ddma(hcd, hc, hc_regs, halt_status);
43000 +
43001 + release_channel_ddma(hcd, qh);
43002 + dwc_otg_hcd_qh_remove(hcd, qh);
43003 +
43004 + if (!DWC_CIRCLEQ_EMPTY(&qh->qtd_list)) {
43005 + /* Add back to inactive non-periodic schedule on normal completion */
43006 + dwc_otg_hcd_qh_add(hcd, qh);
43007 + }
43008 +
43009 + }
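+	/*
+	 * Kick the scheduler. If the ISOC session continues, make sure periodic
+	 * transactions are queued even when the selector only found non-periodic work.
+	 */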
43010 + tr_type = dwc_otg_hcd_select_transactions(hcd);
43011 + if (tr_type != DWC_OTG_TRANSACTION_NONE || continue_isoc_xfer) {
43012 + if (continue_isoc_xfer) {
43013 + if (tr_type == DWC_OTG_TRANSACTION_NONE) {
43014 + tr_type = DWC_OTG_TRANSACTION_PERIODIC;
43015 + } else if (tr_type == DWC_OTG_TRANSACTION_NON_PERIODIC) {
43016 + tr_type = DWC_OTG_TRANSACTION_ALL;
43017 + }
43018 + }
43019 + dwc_otg_hcd_queue_transactions(hcd, tr_type);
43020 + }
43021 +}
43022 +
43023 +#endif /* DWC_DEVICE_ONLY */
43024 --- /dev/null
43025 +++ b/drivers/usb/host/dwc_otg/dwc_otg_hcd_if.h
43026 @@ -0,0 +1,421 @@
43027 +/* ==========================================================================
43028 + * $File: //dwh/usb_iip/dev/software/otg/linux/drivers/dwc_otg_hcd_if.h $
43029 + * $Revision: #12 $
43030 + * $Date: 2011/10/26 $
43031 + * $Change: 1873028 $
43032 + *
43033 + * Synopsys HS OTG Linux Software Driver and documentation (hereinafter,
43034 + * "Software") is an Unsupported proprietary work of Synopsys, Inc. unless
43035 + * otherwise expressly agreed to in writing between Synopsys and you.
43036 + *
43037 + * The Software IS NOT an item of Licensed Software or Licensed Product under
43038 + * any End User Software License Agreement or Agreement for Licensed Product
43039 + * with Synopsys or any supplement thereto. You are permitted to use and
43040 + * redistribute this Software in source and binary forms, with or without
43041 + * modification, provided that redistributions of source code must retain this
43042 + * notice. You may not view, use, disclose, copy or distribute this file or
43043 + * any information contained herein except pursuant to this license grant from
43044 + * Synopsys. If you do not agree with this notice, including the disclaimer
43045 + * below, then you are not authorized to use the Software.
43046 + *
43047 + * THIS SOFTWARE IS BEING DISTRIBUTED BY SYNOPSYS SOLELY ON AN "AS IS" BASIS
43048 + * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
43049 + * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
43050 + * ARE HEREBY DISCLAIMED. IN NO EVENT SHALL SYNOPSYS BE LIABLE FOR ANY DIRECT,
43051 + * INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
43052 + * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
43053 + * SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
43054 + * CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
43055 + * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
43056 + * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH
43057 + * DAMAGE.
43058 + * ========================================================================== */
43059 +#ifndef DWC_DEVICE_ONLY
43060 +#ifndef __DWC_HCD_IF_H__
43061 +#define __DWC_HCD_IF_H__
43062 +
43063 +#include "dwc_otg_core_if.h"
43064 +
43065 +/** @file
43066 + * This file defines DWC_OTG HCD Core API.
43067 + */
43068 +
43069 +struct dwc_otg_hcd;
43070 +typedef struct dwc_otg_hcd dwc_otg_hcd_t;
43071 +
43072 +struct dwc_otg_hcd_urb;
43073 +typedef struct dwc_otg_hcd_urb dwc_otg_hcd_urb_t;
43074 +
43075 +/** @name HCD Function Driver Callbacks */
43076 +/** @{ */
43077 +
43078 +/** This function is called whenever core switches to host mode. */
43079 +typedef int (*dwc_otg_hcd_start_cb_t) (dwc_otg_hcd_t * hcd);
43080 +
43081 +/** This function is called when device has been disconnected */
43082 +typedef int (*dwc_otg_hcd_disconnect_cb_t) (dwc_otg_hcd_t * hcd);
43083 +
43084 +/** The wrapper provides this function to the HCD core so it can get information about the hub to which the device is connected */
43085 +typedef int (*dwc_otg_hcd_hub_info_from_urb_cb_t) (dwc_otg_hcd_t * hcd,
43086 + void *urb_handle,
43087 + uint32_t * hub_addr,
43088 + uint32_t * port_addr);
43089 +/** Via this function the HCD core gets the device speed */
43090 +typedef int (*dwc_otg_hcd_speed_from_urb_cb_t) (dwc_otg_hcd_t * hcd,
43091 + void *urb_handle);
43092 +
43093 +/** This function is called when urb is completed */
43094 +typedef int (*dwc_otg_hcd_complete_urb_cb_t) (dwc_otg_hcd_t * hcd,
43095 + void *urb_handle,
43096 + dwc_otg_hcd_urb_t * dwc_otg_urb,
43097 + int32_t status);
43098 +
43099 +/** Via this function HCD core gets b_hnp_enable parameter */
43100 +typedef int (*dwc_otg_hcd_get_b_hnp_enable) (dwc_otg_hcd_t * hcd);
43101 +
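+/*
+ * The Linux wrapper fills in this callback table and passes it to
+ * dwc_otg_hcd_start(); the HCD core calls back through it for start/disconnect
+ * notifications, hub and speed queries, and URB completion.
+ */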
43102 +struct dwc_otg_hcd_function_ops {
43103 + dwc_otg_hcd_start_cb_t start;
43104 + dwc_otg_hcd_disconnect_cb_t disconnect;
43105 + dwc_otg_hcd_hub_info_from_urb_cb_t hub_info;
43106 + dwc_otg_hcd_speed_from_urb_cb_t speed;
43107 + dwc_otg_hcd_complete_urb_cb_t complete;
43108 + dwc_otg_hcd_get_b_hnp_enable get_b_hnp_enable;
43109 +};
43110 +/** @} */
43111 +
43112 +/** @name HCD Core API */
43113 +/** @{ */
43114 +/** This function allocates a dwc_otg_hcd structure and returns a pointer to it. */
43115 +extern dwc_otg_hcd_t *dwc_otg_hcd_alloc_hcd(void);
43116 +
43117 +/** This function should be called to initialize the HCD Core.
43118 + *
43119 + * @param hcd The HCD
43120 + * @param core_if The DWC_OTG Core
43121 + *
43122 + * Returns -DWC_E_NO_MEMORY if there is not enough memory.
43123 + * Returns 0 on success
43124 + */
43125 +extern int dwc_otg_hcd_init(dwc_otg_hcd_t * hcd, dwc_otg_core_if_t * core_if);
43126 +
43127 +/** Frees HCD
43128 + *
43129 + * @param hcd The HCD
43130 + */
43131 +extern void dwc_otg_hcd_remove(dwc_otg_hcd_t * hcd);
43132 +
43133 +/** This function should be called on every hardware interrupt.
43134 + *
43135 + * @param dwc_otg_hcd The HCD
43136 + *
43137 + * Returns non-zero if the interrupt is handled.
43138 + * Returns 0 if the interrupt is not handled.
43139 + */
43140 +extern int32_t dwc_otg_hcd_handle_intr(dwc_otg_hcd_t * dwc_otg_hcd);
43141 +
43142 +/** This function is used to handle the fast interrupt
43143 + *
43144 + */
43145 +#ifdef CONFIG_ARM64
43146 +extern void dwc_otg_hcd_handle_fiq(void);
43147 +#else
43148 +extern void __attribute__ ((naked)) dwc_otg_hcd_handle_fiq(void);
43149 +#endif
43150 +
43151 +/**
43152 + * Returns private data set by
43153 + * dwc_otg_hcd_set_priv_data function.
43154 + *
43155 + * @param hcd The HCD
43156 + */
43157 +extern void *dwc_otg_hcd_get_priv_data(dwc_otg_hcd_t * hcd);
43158 +
43159 +/**
43160 + * Set private data.
43161 + *
43162 + * @param hcd The HCD
43163 + * @param priv_data pointer to be stored in private data
43164 + */
43165 +extern void dwc_otg_hcd_set_priv_data(dwc_otg_hcd_t * hcd, void *priv_data);
43166 +
43167 +/**
43168 + * This function initializes the HCD Core.
43169 + *
43170 + * @param hcd The HCD
43171 + * @param fops The Function Driver Operations data structure containing pointers to all callbacks.
43172 + *
43173 + * Returns -DWC_E_NO_DEVICE if the Core is currently in device mode.
43174 + * Returns 0 on success
43175 + */
43176 +extern int dwc_otg_hcd_start(dwc_otg_hcd_t * hcd,
43177 + struct dwc_otg_hcd_function_ops *fops);
43178 +
43179 +/**
43180 + * Halts the DWC_otg host mode operations in a clean manner. USB transfers are
43181 + * stopped.
43182 + *
43183 + * @param hcd The HCD
43184 + */
43185 +extern void dwc_otg_hcd_stop(dwc_otg_hcd_t * hcd);
43186 +
43187 +/**
43188 + * Handles hub class-specific requests.
43189 + *
43190 + * @param dwc_otg_hcd The HCD
43191 + * @param typeReq Request Type
43192 + * @param wValue wValue from control request
43193 + * @param wIndex wIndex from control request
43194 + * @param buf data buffer
43195 + * @param wLength data buffer length
43196 + *
43197 + * Returns -DWC_E_INVALID if invalid argument is passed
43198 + * Returns 0 on success
43199 + */
43200 +extern int dwc_otg_hcd_hub_control(dwc_otg_hcd_t * dwc_otg_hcd,
43201 + uint16_t typeReq, uint16_t wValue,
43202 + uint16_t wIndex, uint8_t * buf,
43203 + uint16_t wLength);
43204 +
43205 +/**
43206 + * Returns otg port number.
43207 + *
43208 + * @param hcd The HCD
43209 + */
43210 +extern uint32_t dwc_otg_hcd_otg_port(dwc_otg_hcd_t * hcd);
43211 +
43212 +/**
43213 + * Returns OTG version - either 1.3 or 2.0.
43214 + *
43215 + * @param core_if The core_if structure pointer
43216 + */
43217 +extern uint16_t dwc_otg_get_otg_version(dwc_otg_core_if_t * core_if);
43218 +
43219 +/**
43220 + * Returns 1 if currently core is acting as B host, and 0 otherwise.
43221 + *
43222 + * @param hcd The HCD
43223 + */
43224 +extern uint32_t dwc_otg_hcd_is_b_host(dwc_otg_hcd_t * hcd);
43225 +
43226 +/**
43227 + * Returns current frame number.
43228 + *
43229 + * @param hcd The HCD
43230 + */
43231 +extern int dwc_otg_hcd_get_frame_number(dwc_otg_hcd_t * hcd);
43232 +
43233 +/**
43234 + * Dumps hcd state.
43235 + *
43236 + * @param hcd The HCD
43237 + */
43238 +extern void dwc_otg_hcd_dump_state(dwc_otg_hcd_t * hcd);
43239 +
43240 +/**
43241 + * Dump the average frame remaining at SOF. This can be used to
43242 + * determine average interrupt latency. Frame remaining is also shown for
43243 + * start transfer and two additional sample points.
43244 + * Currently this function is not implemented.
43245 + *
43246 + * @param hcd The HCD
43247 + */
43248 +extern void dwc_otg_hcd_dump_frrem(dwc_otg_hcd_t * hcd);
43249 +
43250 +/**
43251 + * Sends LPM transaction to the local device.
43252 + *
43253 + * @param hcd The HCD
43254 + * @param devaddr Device Address
43255 + * @param hird Host initiated resume duration
43256 + * @param bRemoteWake Value of bRemoteWake field in LPM transaction
43257 + *
43258 + * Returns a negative value if sending the LPM transaction did not succeed.
43259 + * Returns 0 on success.
43260 + */
43261 +extern int dwc_otg_hcd_send_lpm(dwc_otg_hcd_t * hcd, uint8_t devaddr,
43262 + uint8_t hird, uint8_t bRemoteWake);
43263 +
43264 +/* URB interface */
43265 +
43266 +/**
43267 + * Allocates memory for dwc_otg_hcd_urb structure.
43268 + * Allocated memory should be freed by call of DWC_FREE.
43269 + *
43270 + * @param hcd The HCD
43271 + * @param iso_desc_count Count of ISOC descriptors
43272 + * @param atomic_alloc Specifies whether to perform atomic allocation.
43273 + */
43274 +extern dwc_otg_hcd_urb_t *dwc_otg_hcd_urb_alloc(dwc_otg_hcd_t * hcd,
43275 + int iso_desc_count,
43276 + int atomic_alloc);
43277 +
43278 +/**
43279 + * Set pipe information in URB.
43280 + *
43281 + * @param hcd_urb DWC_OTG URB
43282 + * @param devaddr Device Address
43283 + * @param ep_num Endpoint Number
43284 + * @param ep_type Endpoint Type
43285 + * @param ep_dir Endpoint Direction
43286 + * @param mps Max Packet Size
43287 + */
43288 +extern void dwc_otg_hcd_urb_set_pipeinfo(dwc_otg_hcd_urb_t * hcd_urb,
43289 + uint8_t devaddr, uint8_t ep_num,
43290 + uint8_t ep_type, uint8_t ep_dir,
43291 + uint16_t mps);
43292 +
43293 +/* Transfer flags */
43294 +#define URB_GIVEBACK_ASAP 0x1
43295 +#define URB_SEND_ZERO_PACKET 0x2
43296 +
43297 +/**
43298 + * Sets dwc_otg_hcd_urb parameters.
43299 + *
43300 + * @param urb DWC_OTG URB allocated by dwc_otg_hcd_urb_alloc function.
43301 + * @param urb_handle Unique handle for request, this will be passed back
43302 + * to function driver in completion callback.
43303 + * @param buf The buffer for the data
43304 + * @param dma The DMA buffer for the data
43305 + * @param buflen Transfer length
43306 + * @param sp Buffer for setup data
43307 + * @param sp_dma DMA address of setup data buffer
43308 + * @param flags Transfer flags
43309 + * @param interval Polling interval for interrupt or isochronous transfers.
43310 + */
43311 +extern void dwc_otg_hcd_urb_set_params(dwc_otg_hcd_urb_t * urb,
43312 + void *urb_handle, void *buf,
43313 + dwc_dma_t dma, uint32_t buflen, void *sp,
43314 + dwc_dma_t sp_dma, uint32_t flags,
43315 + uint16_t interval);
43316 +
43317 +/** Gets status from dwc_otg_hcd_urb
43318 + *
43319 + * @param dwc_otg_urb DWC_OTG URB
43320 + */
43321 +extern uint32_t dwc_otg_hcd_urb_get_status(dwc_otg_hcd_urb_t * dwc_otg_urb);
43322 +
43323 +/** Gets actual length from dwc_otg_hcd_urb
43324 + *
43325 + * @param dwc_otg_urb DWC_OTG URB
43326 + */
43327 +extern uint32_t dwc_otg_hcd_urb_get_actual_length(dwc_otg_hcd_urb_t *
43328 + dwc_otg_urb);
43329 +
43330 +/** Gets error count from dwc_otg_hcd_urb. Only for ISOC URBs
43331 + *
43332 + * @param dwc_otg_urb DWC_OTG URB
43333 + */
43334 +extern uint32_t dwc_otg_hcd_urb_get_error_count(dwc_otg_hcd_urb_t *
43335 + dwc_otg_urb);
43336 +
43337 +/** Set ISOC descriptor offset and length
43338 + *
43339 + * @param dwc_otg_urb DWC_OTG URB
43340 + * @param desc_num ISOC descriptor number
43341 + * @param offset Offset from the beginning of the buffer.
43342 + * @param length Transaction length
43343 + */
43344 +extern void dwc_otg_hcd_urb_set_iso_desc_params(dwc_otg_hcd_urb_t * dwc_otg_urb,
43345 + int desc_num, uint32_t offset,
43346 + uint32_t length);
43347 +
43348 +/** Get status of ISOC descriptor, specified by desc_num
43349 + *
43350 + * @param dwc_otg_urb DWC_OTG URB
43351 + * @param desc_num ISOC descriptor number
43352 + */
43353 +extern uint32_t dwc_otg_hcd_urb_get_iso_desc_status(dwc_otg_hcd_urb_t *
43354 + dwc_otg_urb, int desc_num);
43355 +
43356 +/** Get actual length of ISOC descriptor, specified by desc_num
43357 + *
43358 + * @param dwc_otg_urb DWC_OTG URB
43359 + * @param desc_num ISOC descriptor number
43360 + */
43361 +extern uint32_t dwc_otg_hcd_urb_get_iso_desc_actual_length(dwc_otg_hcd_urb_t *
43362 + dwc_otg_urb,
43363 + int desc_num);
43364 +
43365 +/** Queue a URB. After the transfer completes, the complete callback will be called with the URB status
43366 + *
43367 + * @param dwc_otg_hcd The HCD
43368 + * @param dwc_otg_urb DWC_OTG URB
43369 + * @param ep_handle Out parameter for returning endpoint handle
43370 + * @param atomic_alloc Flag to do atomic allocation if needed
43371 + *
43372 + * Returns -DWC_E_NO_DEVICE if no device is connected.
43373 + * Returns -DWC_E_NO_MEMORY if there is not enough memory.
43374 + * Returns 0 on success.
43375 + */
43376 +extern int dwc_otg_hcd_urb_enqueue(dwc_otg_hcd_t * dwc_otg_hcd,
43377 + dwc_otg_hcd_urb_t * dwc_otg_urb,
43378 + void **ep_handle, int atomic_alloc);
43379 +
43380 +/** De-queue the specified URB
43381 + *
43382 + * @param dwc_otg_hcd The HCD
43383 + * @param dwc_otg_urb DWC_OTG URB
43384 + */
43385 +extern int dwc_otg_hcd_urb_dequeue(dwc_otg_hcd_t * dwc_otg_hcd,
43386 + dwc_otg_hcd_urb_t * dwc_otg_urb);
43387 +
43388 +/** Frees resources in the DWC_otg controller related to a given endpoint.
43389 + * Any URBs for the endpoint must already be dequeued.
43390 + *
43391 + * @param hcd The HCD
43392 + * @param ep_handle Endpoint handle, returned by dwc_otg_hcd_urb_enqueue function
43393 + * @param retry Number of retries if there are queued transfers.
43394 + *
43395 + * Returns -DWC_E_INVALID if invalid arguments are passed.
43396 + * Returns 0 on success
43397 + */
43398 +extern int dwc_otg_hcd_endpoint_disable(dwc_otg_hcd_t * hcd, void *ep_handle,
43399 + int retry);
43400 +
43401 +/* Resets the data toggle in qh structure. This function can be called from
43402 + * usb_clear_halt routine.
43403 + *
43404 + * @param hcd The HCD
43405 + * @param ep_handle Endpoint handle, returned by dwc_otg_hcd_urb_enqueue function
43406 + *
43407 + * Returns -DWC_E_INVALID if invalid arguments are passed.
43408 + * Returns 0 on success
43409 + */
43410 +extern int dwc_otg_hcd_endpoint_reset(dwc_otg_hcd_t * hcd, void *ep_handle);
43411 +
43412 +/** Returns 1 if status of specified port is changed and 0 otherwise.
43413 + *
43414 + * @param hcd The HCD
43415 + * @param port Port number
43416 + */
43417 +extern int dwc_otg_hcd_is_status_changed(dwc_otg_hcd_t * hcd, int port);
43418 +
43419 +/** Call this function to check if bandwidth was allocated for specified endpoint.
43420 + * Only for ISOC and INTERRUPT endpoints.
43421 + *
43422 + * @param hcd The HCD
43423 + * @param ep_handle Endpoint handle
43424 + */
43425 +extern int dwc_otg_hcd_is_bandwidth_allocated(dwc_otg_hcd_t * hcd,
43426 + void *ep_handle);
43427 +
43428 +/** Call this function to check if bandwidth was freed for specified endpoint.
43429 + *
43430 + * @param hcd The HCD
43431 + * @param ep_handle Endpoint handle
43432 + */
43433 +extern int dwc_otg_hcd_is_bandwidth_freed(dwc_otg_hcd_t * hcd, void *ep_handle);
43434 +
43435 +/** Returns bandwidth allocated for specified endpoint in microseconds.
43436 + * Only for ISOC and INTERRUPT endpoints.
43437 + *
43438 + * @param hcd The HCD
43439 + * @param ep_handle Endpoint handle
43440 + */
43441 +extern uint8_t dwc_otg_hcd_get_ep_bandwidth(dwc_otg_hcd_t * hcd,
43442 + void *ep_handle);
43443 +
43444 +/** @} */
43445 +
43446 +#endif /* __DWC_HCD_IF_H__ */
43447 +#endif /* DWC_DEVICE_ONLY */
43448 --- /dev/null
43449 +++ b/drivers/usb/host/dwc_otg/dwc_otg_hcd_intr.c
43450 @@ -0,0 +1,2757 @@
43451 +/* ==========================================================================
43452 + * $File: //dwh/usb_iip/dev/software/otg/linux/drivers/dwc_otg_hcd_intr.c $
43453 + * $Revision: #89 $
43454 + * $Date: 2011/10/20 $
43455 + * $Change: 1869487 $
43456 + *
43457 + * Synopsys HS OTG Linux Software Driver and documentation (hereinafter,
43458 + * "Software") is an Unsupported proprietary work of Synopsys, Inc. unless
43459 + * otherwise expressly agreed to in writing between Synopsys and you.
43460 + *
43461 + * The Software IS NOT an item of Licensed Software or Licensed Product under
43462 + * any End User Software License Agreement or Agreement for Licensed Product
43463 + * with Synopsys or any supplement thereto. You are permitted to use and
43464 + * redistribute this Software in source and binary forms, with or without
43465 + * modification, provided that redistributions of source code must retain this
43466 + * notice. You may not view, use, disclose, copy or distribute this file or
43467 + * any information contained herein except pursuant to this license grant from
43468 + * Synopsys. If you do not agree with this notice, including the disclaimer
43469 + * below, then you are not authorized to use the Software.
43470 + *
43471 + * THIS SOFTWARE IS BEING DISTRIBUTED BY SYNOPSYS SOLELY ON AN "AS IS" BASIS
43472 + * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
43473 + * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
43474 + * ARE HEREBY DISCLAIMED. IN NO EVENT SHALL SYNOPSYS BE LIABLE FOR ANY DIRECT,
43475 + * INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
43476 + * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
43477 + * SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
43478 + * CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
43479 + * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
43480 + * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH
43481 + * DAMAGE.
43482 + * ========================================================================== */
43483 +#ifndef DWC_DEVICE_ONLY
43484 +
43485 +#include "dwc_otg_hcd.h"
43486 +#include "dwc_otg_regs.h"
43487 +
43488 +#include <linux/jiffies.h>
43489 +#ifdef CONFIG_ARM
43490 +#include <asm/fiq.h>
43491 +#endif
43492 +
43493 +extern bool microframe_schedule;
43494 +
43495 +/** @file
43496 + * This file contains the implementation of the HCD Interrupt handlers.
43497 + */
43498 +
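+/* Debug counters used to compare FIQ-path and IRQ-path interrupt handling. */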
43499 +int fiq_done, int_done;
43500 +
43501 +#ifdef FIQ_DEBUG
43502 +char buffer[1000*16];
43503 +int wptr;
43504 +void notrace _fiq_print(FIQDBG_T dbg_lvl, char *fmt, ...)
43505 +{
43506 + FIQDBG_T dbg_lvl_req = FIQDBG_PORTHUB;
43507 + va_list args;
43508 + char text[17];
43509 + hfnum_data_t hfnum = { .d32 = FIQ_READ(dwc_regs_base + 0x408) };
43510 +
43511 + if(dbg_lvl & dbg_lvl_req || dbg_lvl == FIQDBG_ERR)
43512 + {
43513 + local_fiq_disable();
43514 + snprintf(text, 9, "%4d%d:%d ", hfnum.b.frnum/8, hfnum.b.frnum%8, 8 - hfnum.b.frrem/937);
43515 + va_start(args, fmt);
43516 + vsnprintf(text+8, 9, fmt, args);
43517 + va_end(args);
43518 +
43519 + memcpy(buffer + wptr, text, 16);
43520 + wptr = (wptr + 16) % sizeof(buffer);
43521 + local_fiq_enable();
43522 + }
43523 +}
43524 +#endif
43525 +
43526 +/** This function handles interrupts for the HCD. */
43527 +int32_t dwc_otg_hcd_handle_intr(dwc_otg_hcd_t * dwc_otg_hcd)
43528 +{
43529 + int retval = 0;
43530 + static int last_time;
43531 + dwc_otg_core_if_t *core_if = dwc_otg_hcd->core_if;
43532 + gintsts_data_t gintsts;
43533 + gintmsk_data_t gintmsk;
43534 + hfnum_data_t hfnum;
43535 + haintmsk_data_t haintmsk;
43536 +
43537 +#ifdef DEBUG
43538 + dwc_otg_core_global_regs_t *global_regs = core_if->core_global_regs;
43539 +
43540 +#endif
43541 +
43542 + gintsts.d32 = DWC_READ_REG32(&core_if->core_global_regs->gintsts);
43543 + gintmsk.d32 = DWC_READ_REG32(&core_if->core_global_regs->gintmsk);
43544 +
43545 + /* Exit from ISR if core is hibernated */
43546 + if (core_if->hibernation_suspend == 1) {
43547 + goto exit_handler_routine;
43548 + }
43549 + DWC_SPINLOCK(dwc_otg_hcd->lock);
43550 + /* Check if HOST Mode */
43551 + if (dwc_otg_is_host_mode(core_if)) {
43552 + if (fiq_enable) {
43553 + local_fiq_disable();
43554 + fiq_fsm_spin_lock(&dwc_otg_hcd->fiq_state->lock);
43555 + /* Pull in from the FIQ's disabled mask */
43556 + gintmsk.d32 = gintmsk.d32 | ~(dwc_otg_hcd->fiq_state->gintmsk_saved.d32);
43557 + dwc_otg_hcd->fiq_state->gintmsk_saved.d32 = ~0;
43558 + }
43559 +
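+		/* If the FIQ FSM has held off any host channel interrupts, report a channel interrupt so they get serviced here. */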
43560 + if (fiq_fsm_enable && ( 0x0000FFFF & ~(dwc_otg_hcd->fiq_state->haintmsk_saved.b2.chint))) {
43561 + gintsts.b.hcintr = 1;
43562 + }
43563 +
43564 +		/* Danger, Will Robinson: fake a SOF if necessary */
43565 + if (fiq_fsm_enable && (dwc_otg_hcd->fiq_state->gintmsk_saved.b.sofintr == 1)) {
43566 + gintsts.b.sofintr = 1;
43567 + }
43568 + gintsts.d32 &= gintmsk.d32;
43569 +
43570 + if (fiq_enable) {
43571 + fiq_fsm_spin_unlock(&dwc_otg_hcd->fiq_state->lock);
43572 + local_fiq_enable();
43573 + }
43574 +
43575 + if (!gintsts.d32) {
43576 + goto exit_handler_routine;
43577 + }
43578 +
43579 +#ifdef DEBUG
43580 + // We should be OK doing this because the common interrupts should already have been serviced
43581 + /* Don't print debug message in the interrupt handler on SOF */
43582 +#ifndef DEBUG_SOF
43583 + if (gintsts.d32 != DWC_SOF_INTR_MASK)
43584 +#endif
43585 + DWC_DEBUGPL(DBG_HCDI, "\n");
43586 +#endif
43587 +
43588 +#ifdef DEBUG
43589 +#ifndef DEBUG_SOF
43590 + if (gintsts.d32 != DWC_SOF_INTR_MASK)
43591 +#endif
43592 + DWC_DEBUGPL(DBG_HCDI,
43593 + "DWC OTG HCD Interrupt Detected gintsts&gintmsk=0x%08x core_if=%p\n",
43594 + gintsts.d32, core_if);
43595 +#endif
43596 + hfnum.d32 = DWC_READ_REG32(&dwc_otg_hcd->core_if->host_if->host_global_regs->hfnum);
43597 + if (gintsts.b.sofintr) {
43598 + retval |= dwc_otg_hcd_handle_sof_intr(dwc_otg_hcd);
43599 + }
43600 +
43601 + if (gintsts.b.rxstsqlvl) {
43602 + retval |=
43603 + dwc_otg_hcd_handle_rx_status_q_level_intr
43604 + (dwc_otg_hcd);
43605 + }
43606 + if (gintsts.b.nptxfempty) {
43607 + retval |=
43608 + dwc_otg_hcd_handle_np_tx_fifo_empty_intr
43609 + (dwc_otg_hcd);
43610 + }
43611 + if (gintsts.b.i2cintr) {
43612 + /** @todo Implement i2cintr handler. */
43613 + }
43614 + if (gintsts.b.portintr) {
43615 +
43616 + gintmsk_data_t gintmsk = { .b.portintr = 1};
43617 + retval |= dwc_otg_hcd_handle_port_intr(dwc_otg_hcd);
43618 + if (fiq_enable) {
43619 + local_fiq_disable();
43620 + fiq_fsm_spin_lock(&dwc_otg_hcd->fiq_state->lock);
43621 + DWC_MODIFY_REG32(&dwc_otg_hcd->core_if->core_global_regs->gintmsk, 0, gintmsk.d32);
43622 + fiq_fsm_spin_unlock(&dwc_otg_hcd->fiq_state->lock);
43623 + local_fiq_enable();
43624 + } else {
43625 + DWC_MODIFY_REG32(&dwc_otg_hcd->core_if->core_global_regs->gintmsk, 0, gintmsk.d32);
43626 + }
43627 + }
43628 + if (gintsts.b.hcintr) {
43629 + retval |= dwc_otg_hcd_handle_hc_intr(dwc_otg_hcd);
43630 + }
43631 + if (gintsts.b.ptxfempty) {
43632 + retval |=
43633 + dwc_otg_hcd_handle_perio_tx_fifo_empty_intr
43634 + (dwc_otg_hcd);
43635 + }
43636 +#ifdef DEBUG
43637 +#ifndef DEBUG_SOF
43638 + if (gintsts.d32 != DWC_SOF_INTR_MASK)
43639 +#endif
43640 + {
43641 + DWC_DEBUGPL(DBG_HCDI,
43642 + "DWC OTG HCD Finished Servicing Interrupts\n");
43643 + DWC_DEBUGPL(DBG_HCDV, "DWC OTG HCD gintsts=0x%08x\n",
43644 + DWC_READ_REG32(&global_regs->gintsts));
43645 + DWC_DEBUGPL(DBG_HCDV, "DWC OTG HCD gintmsk=0x%08x\n",
43646 + DWC_READ_REG32(&global_regs->gintmsk));
43647 + }
43648 +#endif
43649 +
43650 +#ifdef DEBUG
43651 +#ifndef DEBUG_SOF
43652 + if (gintsts.d32 != DWC_SOF_INTR_MASK)
43653 +#endif
43654 + DWC_DEBUGPL(DBG_HCDI, "\n");
43655 +#endif
43656 +
43657 + }
43658 +
43659 +exit_handler_routine:
43660 + if (fiq_enable) {
43661 + gintmsk_data_t gintmsk_new;
43662 + haintmsk_data_t haintmsk_new;
43663 + local_fiq_disable();
43664 + fiq_fsm_spin_lock(&dwc_otg_hcd->fiq_state->lock);
43665 + gintmsk_new.d32 = *(volatile uint32_t *)&dwc_otg_hcd->fiq_state->gintmsk_saved.d32;
43666 + if(fiq_fsm_enable)
43667 + haintmsk_new.d32 = *(volatile uint32_t *)&dwc_otg_hcd->fiq_state->haintmsk_saved.d32;
43668 + else
43669 + haintmsk_new.d32 = 0x0000FFFF;
43670 +
43671 + /* The FIQ could have sneaked another interrupt in. If so, don't clear MPHI */
43672 + if ((gintmsk_new.d32 == ~0) && (haintmsk_new.d32 == 0x0000FFFF)) {
43673 + if (dwc_otg_hcd->fiq_state->mphi_regs.swirq_clr) {
43674 + DWC_WRITE_REG32(dwc_otg_hcd->fiq_state->mphi_regs.swirq_clr, 1);
43675 + } else {
43676 + DWC_WRITE_REG32(dwc_otg_hcd->fiq_state->mphi_regs.intstat, (1<<16));
43677 + }
43678 + if (dwc_otg_hcd->fiq_state->mphi_int_count >= 50) {
43679 + fiq_print(FIQDBG_INT, dwc_otg_hcd->fiq_state, "MPHI CLR");
43680 + DWC_WRITE_REG32(dwc_otg_hcd->fiq_state->mphi_regs.ctrl, ((1<<31) + (1<<16)));
43681 + while (!(DWC_READ_REG32(dwc_otg_hcd->fiq_state->mphi_regs.ctrl) & (1 << 17)))
43682 + ;
43683 + DWC_WRITE_REG32(dwc_otg_hcd->fiq_state->mphi_regs.ctrl, (1<<31));
43684 + dwc_otg_hcd->fiq_state->mphi_int_count = 0;
43685 + }
43686 + int_done++;
43687 + }
43688 + haintmsk.d32 = DWC_READ_REG32(&core_if->host_if->host_global_regs->haintmsk);
43689 + /* Re-enable interrupts that the FIQ masked (first time round) */
43690 + FIQ_WRITE(dwc_otg_hcd->fiq_state->dwc_regs_base + GINTMSK, gintmsk.d32);
43691 + fiq_fsm_spin_unlock(&dwc_otg_hcd->fiq_state->lock);
43692 + local_fiq_enable();
43693 +
43694 + if ((jiffies / HZ) > last_time) {
43695 + //dwc_otg_qh_t *qh;
43696 + //dwc_list_link_t *cur;
43697 + /* Once a second output the fiq and irq numbers, useful for debug */
43698 + last_time = jiffies / HZ;
43699 + // DWC_WARN("np_kick=%d AHC=%d sched_frame=%d cur_frame=%d int_done=%d fiq_done=%d",
43700 + // dwc_otg_hcd->fiq_state->kick_np_queues, dwc_otg_hcd->available_host_channels,
43701 + // dwc_otg_hcd->fiq_state->next_sched_frame, hfnum.b.frnum, int_done, dwc_otg_hcd->fiq_state->fiq_done);
43702 + //printk(KERN_WARNING "Periodic queues:\n");
43703 + }
43704 + }
43705 +
43706 + DWC_SPINUNLOCK(dwc_otg_hcd->lock);
43707 + return retval;
43708 +}
43709 +
43710 +#ifdef DWC_TRACK_MISSED_SOFS
43711 +
43712 +#warning Compiling code to track missed SOFs
43713 +#define FRAME_NUM_ARRAY_SIZE 1000
43714 +/**
43715 + * This function is for debug only.
43716 + */
43717 +static inline void track_missed_sofs(uint16_t curr_frame_number)
43718 +{
43719 + static uint16_t frame_num_array[FRAME_NUM_ARRAY_SIZE];
43720 + static uint16_t last_frame_num_array[FRAME_NUM_ARRAY_SIZE];
43721 + static int frame_num_idx = 0;
43722 + static uint16_t last_frame_num = DWC_HFNUM_MAX_FRNUM;
43723 + static int dumped_frame_num_array = 0;
43724 +
43725 + if (frame_num_idx < FRAME_NUM_ARRAY_SIZE) {
43726 + if (((last_frame_num + 1) & DWC_HFNUM_MAX_FRNUM) !=
43727 + curr_frame_number) {
43728 + frame_num_array[frame_num_idx] = curr_frame_number;
43729 + last_frame_num_array[frame_num_idx++] = last_frame_num;
43730 + }
43731 + } else if (!dumped_frame_num_array) {
43732 + int i;
43733 + DWC_PRINTF("Frame Last Frame\n");
43734 + DWC_PRINTF("----- ----------\n");
43735 + for (i = 0; i < FRAME_NUM_ARRAY_SIZE; i++) {
43736 + DWC_PRINTF("0x%04x 0x%04x\n",
43737 + frame_num_array[i], last_frame_num_array[i]);
43738 + }
43739 + dumped_frame_num_array = 1;
43740 + }
43741 + last_frame_num = curr_frame_number;
43742 +}
43743 +#endif
43744 +
43745 +/**
43746 + * Handles the start-of-frame interrupt in host mode. Non-periodic
43747 + * transactions may be queued to the DWC_otg controller for the current
43748 + * (micro)frame. Periodic transactions may be queued to the controller for the
43749 + * next (micro)frame.
43750 + */
43751 +int32_t dwc_otg_hcd_handle_sof_intr(dwc_otg_hcd_t * hcd)
43752 +{
43753 + hfnum_data_t hfnum;
43754 + gintsts_data_t gintsts = { .d32 = 0 };
43755 + dwc_list_link_t *qh_entry;
43756 + dwc_otg_qh_t *qh;
43757 + dwc_otg_transaction_type_e tr_type;
43758 + int did_something = 0;
43759 + int32_t next_sched_frame = -1;
43760 +
43761 + hfnum.d32 =
43762 + DWC_READ_REG32(&hcd->core_if->host_if->host_global_regs->hfnum);
43763 +
43764 +#ifdef DEBUG_SOF
43765 + DWC_DEBUGPL(DBG_HCD, "--Start of Frame Interrupt--\n");
43766 +#endif
43767 + hcd->frame_number = hfnum.b.frnum;
43768 +
43769 +#ifdef DEBUG
43770 + hcd->frrem_accum += hfnum.b.frrem;
43771 + hcd->frrem_samples++;
43772 +#endif
43773 +
43774 +#ifdef DWC_TRACK_MISSED_SOFS
43775 + track_missed_sofs(hcd->frame_number);
43776 +#endif
43777 + /* Determine whether any periodic QHs should be executed. */
43778 + qh_entry = DWC_LIST_FIRST(&hcd->periodic_sched_inactive);
43779 + while (qh_entry != &hcd->periodic_sched_inactive) {
43780 + qh = DWC_LIST_ENTRY(qh_entry, dwc_otg_qh_t, qh_list_entry);
43781 + qh_entry = qh_entry->next;
43782 + if (dwc_frame_num_le(qh->sched_frame, hcd->frame_number)) {
43783 +
43784 + /*
43785 + * Move QH to the ready list to be executed next
43786 + * (micro)frame.
43787 + */
43788 + DWC_LIST_MOVE_HEAD(&hcd->periodic_sched_ready,
43789 + &qh->qh_list_entry);
43790 +
43791 + did_something = 1;
43792 + }
43793 + else
43794 + {
43795 + if(next_sched_frame < 0 || dwc_frame_num_le(qh->sched_frame, next_sched_frame))
43796 + {
43797 + next_sched_frame = qh->sched_frame;
43798 + }
43799 + }
43800 + }
43801 + if (fiq_enable)
43802 + hcd->fiq_state->next_sched_frame = next_sched_frame;
43803 +
43804 + tr_type = dwc_otg_hcd_select_transactions(hcd);
43805 + if (tr_type != DWC_OTG_TRANSACTION_NONE) {
43806 + dwc_otg_hcd_queue_transactions(hcd, tr_type);
43807 + did_something = 1;
43808 + }
43809 +
43810 + /* Clear interrupt - but do not trample on the FIQ sof */
43811 + if (!fiq_fsm_enable) {
43812 + gintsts.b.sofintr = 1;
43813 + DWC_WRITE_REG32(&hcd->core_if->core_global_regs->gintsts, gintsts.d32);
43814 + }
43815 + return 1;
43816 +}
43817 +
43818 +/** Handles the Rx Status Queue Level Interrupt, which indicates that there is at
43819 + * least one packet in the Rx FIFO. The packets are moved from the FIFO to
43820 + * memory if the DWC_otg controller is operating in Slave mode. */
43821 +int32_t dwc_otg_hcd_handle_rx_status_q_level_intr(dwc_otg_hcd_t * dwc_otg_hcd)
43822 +{
43823 + host_grxsts_data_t grxsts;
43824 + dwc_hc_t *hc = NULL;
43825 +
43826 + DWC_DEBUGPL(DBG_HCD, "--RxStsQ Level Interrupt--\n");
43827 +
43828 + grxsts.d32 =
43829 + DWC_READ_REG32(&dwc_otg_hcd->core_if->core_global_regs->grxstsp);
43830 +
43831 + hc = dwc_otg_hcd->hc_ptr_array[grxsts.b.chnum];
43832 + if (!hc) {
43833 + DWC_ERROR("Unable to get corresponding channel\n");
43834 + return 0;
43835 + }
43836 +
43837 + /* Packet Status */
43838 + DWC_DEBUGPL(DBG_HCDV, " Ch num = %d\n", grxsts.b.chnum);
43839 + DWC_DEBUGPL(DBG_HCDV, " Count = %d\n", grxsts.b.bcnt);
43840 + DWC_DEBUGPL(DBG_HCDV, " DPID = %d, hc.dpid = %d\n", grxsts.b.dpid,
43841 + hc->data_pid_start);
43842 + DWC_DEBUGPL(DBG_HCDV, " PStatus = %d\n", grxsts.b.pktsts);
43843 +
43844 + switch (grxsts.b.pktsts) {
43845 + case DWC_GRXSTS_PKTSTS_IN:
43846 + /* Read the data into the host buffer. */
43847 + if (grxsts.b.bcnt > 0) {
43848 + dwc_otg_read_packet(dwc_otg_hcd->core_if,
43849 + hc->xfer_buff, grxsts.b.bcnt);
43850 +
43851 + /* Update the HC fields for the next packet received. */
43852 + hc->xfer_count += grxsts.b.bcnt;
43853 + hc->xfer_buff += grxsts.b.bcnt;
43854 + }
43855 +
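+		/* Fall through: the remaining status values carry no data to read here. */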
43856 + case DWC_GRXSTS_PKTSTS_IN_XFER_COMP:
43857 + case DWC_GRXSTS_PKTSTS_DATA_TOGGLE_ERR:
43858 + case DWC_GRXSTS_PKTSTS_CH_HALTED:
43859 + /* Handled in interrupt, just ignore data */
43860 + break;
43861 + default:
43862 + DWC_ERROR("RX_STS_Q Interrupt: Unknown status %d\n",
43863 + grxsts.b.pktsts);
43864 + break;
43865 + }
43866 +
43867 + return 1;
43868 +}
43869 +
43870 +/** This interrupt occurs when the non-periodic Tx FIFO is half-empty. More
43871 + * data packets may be written to the FIFO for OUT transfers. More requests
43872 + * may be written to the non-periodic request queue for IN transfers. This
43873 + * interrupt is enabled only in Slave mode. */
43874 +int32_t dwc_otg_hcd_handle_np_tx_fifo_empty_intr(dwc_otg_hcd_t * dwc_otg_hcd)
43875 +{
43876 + DWC_DEBUGPL(DBG_HCD, "--Non-Periodic TxFIFO Empty Interrupt--\n");
43877 + dwc_otg_hcd_queue_transactions(dwc_otg_hcd,
43878 + DWC_OTG_TRANSACTION_NON_PERIODIC);
43879 + return 1;
43880 +}
43881 +
43882 +/** This interrupt occurs when the periodic Tx FIFO is half-empty. More data
43883 + * packets may be written to the FIFO for OUT transfers. More requests may be
43884 + * written to the periodic request queue for IN transfers. This interrupt is
43885 + * enabled only in Slave mode. */
43886 +int32_t dwc_otg_hcd_handle_perio_tx_fifo_empty_intr(dwc_otg_hcd_t * dwc_otg_hcd)
43887 +{
43888 + DWC_DEBUGPL(DBG_HCD, "--Periodic TxFIFO Empty Interrupt--\n");
43889 + dwc_otg_hcd_queue_transactions(dwc_otg_hcd,
43890 + DWC_OTG_TRANSACTION_PERIODIC);
43891 + return 1;
43892 +}
43893 +
43894 +/** There are multiple conditions that can cause a port interrupt. This function
43895 + * determines which interrupt conditions have occurred and handles them
43896 + * appropriately. */
43897 +int32_t dwc_otg_hcd_handle_port_intr(dwc_otg_hcd_t * dwc_otg_hcd)
43898 +{
43899 + int retval = 0;
43900 + hprt0_data_t hprt0;
43901 + hprt0_data_t hprt0_modify;
43902 +
43903 + hprt0.d32 = DWC_READ_REG32(dwc_otg_hcd->core_if->host_if->hprt0);
43904 + hprt0_modify.d32 = DWC_READ_REG32(dwc_otg_hcd->core_if->host_if->hprt0);
43905 +
43906 + /* Clear appropriate bits in HPRT0 to clear the interrupt bit in
43907 + * GINTSTS */
43908 +
43909 + hprt0_modify.b.prtena = 0;
43910 + hprt0_modify.b.prtconndet = 0;
43911 + hprt0_modify.b.prtenchng = 0;
43912 + hprt0_modify.b.prtovrcurrchng = 0;
43913 +
43914 + /* Port Connect Detected
43915 + * Set flag and clear if detected */
43916 + if (dwc_otg_hcd->core_if->hibernation_suspend == 1) {
43917 +		// Don't modify the port status while we are in hibernation state
43918 + hprt0_modify.b.prtconndet = 1;
43919 + hprt0_modify.b.prtenchng = 1;
43920 + DWC_WRITE_REG32(dwc_otg_hcd->core_if->host_if->hprt0, hprt0_modify.d32);
43921 + hprt0.d32 = DWC_READ_REG32(dwc_otg_hcd->core_if->host_if->hprt0);
43922 + return retval;
43923 + }
43924 +
43925 + if (hprt0.b.prtconndet) {
43926 +		/** @todo - check if the steps performed in the 'else' block should be performed regardless of adp */
43927 + if (dwc_otg_hcd->core_if->adp_enable &&
43928 + dwc_otg_hcd->core_if->adp.vbuson_timer_started == 1) {
43929 + DWC_PRINTF("PORT CONNECT DETECTED ----------------\n");
43930 + DWC_TIMER_CANCEL(dwc_otg_hcd->core_if->adp.vbuson_timer);
43931 + dwc_otg_hcd->core_if->adp.vbuson_timer_started = 0;
43932 + /* TODO - check if this is required, as
43933 + * host initialization was already performed
43934 + * after initial ADP probing
43935 + */
43936 + /*dwc_otg_hcd->core_if->adp.vbuson_timer_started = 0;
43937 + dwc_otg_core_init(dwc_otg_hcd->core_if);
43938 + dwc_otg_enable_global_interrupts(dwc_otg_hcd->core_if);
43939 + cil_hcd_start(dwc_otg_hcd->core_if);*/
43940 + } else {
43941 +
43942 + DWC_DEBUGPL(DBG_HCD, "--Port Interrupt HPRT0=0x%08x "
43943 + "Port Connect Detected--\n", hprt0.d32);
43944 + dwc_otg_hcd->flags.b.port_connect_status_change = 1;
43945 + dwc_otg_hcd->flags.b.port_connect_status = 1;
43946 + hprt0_modify.b.prtconndet = 1;
43947 +
43948 + /* B-Device has connected, Delete the connection timer. */
43949 + DWC_TIMER_CANCEL(dwc_otg_hcd->conn_timer);
43950 + }
43951 + /* The Hub driver asserts a reset when it sees port connect
43952 + * status change flag */
43953 + retval |= 1;
43954 + }
43955 +
43956 + /* Port Enable Changed
43957 + * Clear if detected - Set internal flag if disabled */
43958 + if (hprt0.b.prtenchng) {
43959 + DWC_DEBUGPL(DBG_HCD, " --Port Interrupt HPRT0=0x%08x "
43960 + "Port Enable Changed--\n", hprt0.d32);
43961 + hprt0_modify.b.prtenchng = 1;
43962 + if (hprt0.b.prtena == 1) {
43963 + hfir_data_t hfir;
43964 + int do_reset = 0;
43965 + dwc_otg_core_params_t *params =
43966 + dwc_otg_hcd->core_if->core_params;
43967 + dwc_otg_core_global_regs_t *global_regs =
43968 + dwc_otg_hcd->core_if->core_global_regs;
43969 + dwc_otg_host_if_t *host_if =
43970 + dwc_otg_hcd->core_if->host_if;
43971 +
43972 + dwc_otg_hcd->flags.b.port_speed = hprt0.b.prtspd;
43973 + if (microframe_schedule)
43974 + init_hcd_usecs(dwc_otg_hcd);
43975 +
43976 +			/* Recalculate HFIR.FrInterval every time
43977 +			 * the port is enabled
43978 +			 */
43979 + hfir.d32 = DWC_READ_REG32(&host_if->host_global_regs->hfir);
43980 + hfir.b.frint = calc_frame_interval(dwc_otg_hcd->core_if);
43981 + DWC_WRITE_REG32(&host_if->host_global_regs->hfir, hfir.d32);
43982 +
43983 +			/* Check whether the PHY clock speed needs to be
43984 +			 * adjusted for low power operation, and adjust it if so */
43985 + if (params->host_support_fs_ls_low_power) {
43986 + gusbcfg_data_t usbcfg;
43987 +
43988 + usbcfg.d32 =
43989 + DWC_READ_REG32(&global_regs->gusbcfg);
43990 +
43991 + if (hprt0.b.prtspd == DWC_HPRT0_PRTSPD_LOW_SPEED
43992 + || hprt0.b.prtspd ==
43993 + DWC_HPRT0_PRTSPD_FULL_SPEED) {
43994 + /*
43995 + * Low power
43996 + */
43997 + hcfg_data_t hcfg;
43998 + if (usbcfg.b.phylpwrclksel == 0) {
43999 + /* Set PHY low power clock select for FS/LS devices */
44000 + usbcfg.b.phylpwrclksel = 1;
44001 + DWC_WRITE_REG32
44002 + (&global_regs->gusbcfg,
44003 + usbcfg.d32);
44004 + do_reset = 1;
44005 + }
44006 +
44007 + hcfg.d32 =
44008 + DWC_READ_REG32
44009 + (&host_if->host_global_regs->hcfg);
44010 +
44011 + if (hprt0.b.prtspd ==
44012 + DWC_HPRT0_PRTSPD_LOW_SPEED
44013 + && params->host_ls_low_power_phy_clk
44014 + ==
44015 + DWC_HOST_LS_LOW_POWER_PHY_CLK_PARAM_6MHZ)
44016 + {
44017 + /* 6 MHZ */
44018 + DWC_DEBUGPL(DBG_CIL,
44019 + "FS_PHY programming HCFG to 6 MHz (Low Power)\n");
44020 + if (hcfg.b.fslspclksel !=
44021 + DWC_HCFG_6_MHZ) {
44022 + hcfg.b.fslspclksel =
44023 + DWC_HCFG_6_MHZ;
44024 + DWC_WRITE_REG32
44025 + (&host_if->host_global_regs->hcfg,
44026 + hcfg.d32);
44027 + do_reset = 1;
44028 + }
44029 + } else {
44030 + /* 48 MHZ */
44031 + DWC_DEBUGPL(DBG_CIL,
44032 +							    "FS_PHY programming HCFG to 48 MHz\n");
44033 + if (hcfg.b.fslspclksel !=
44034 + DWC_HCFG_48_MHZ) {
44035 + hcfg.b.fslspclksel =
44036 + DWC_HCFG_48_MHZ;
44037 + DWC_WRITE_REG32
44038 + (&host_if->host_global_regs->hcfg,
44039 + hcfg.d32);
44040 + do_reset = 1;
44041 + }
44042 + }
44043 + } else {
44044 + /*
44045 + * Not low power
44046 + */
44047 + if (usbcfg.b.phylpwrclksel == 1) {
44048 + usbcfg.b.phylpwrclksel = 0;
44049 + DWC_WRITE_REG32
44050 + (&global_regs->gusbcfg,
44051 + usbcfg.d32);
44052 + do_reset = 1;
44053 + }
44054 + }
44055 +
44056 + if (do_reset) {
44057 + DWC_TASK_SCHEDULE(dwc_otg_hcd->reset_tasklet);
44058 + }
44059 + }
44060 +
44061 + if (!do_reset) {
44062 +				/* Port has been enabled; set the reset change flag */
44063 + dwc_otg_hcd->flags.b.port_reset_change = 1;
44064 + }
44065 + } else {
44066 + dwc_otg_hcd->flags.b.port_enable_change = 1;
44067 + }
44068 + retval |= 1;
44069 + }
44070 +
44071 + /** Overcurrent Change Interrupt */
44072 + if (hprt0.b.prtovrcurrchng) {
44073 + DWC_DEBUGPL(DBG_HCD, " --Port Interrupt HPRT0=0x%08x "
44074 + "Port Overcurrent Changed--\n", hprt0.d32);
44075 + dwc_otg_hcd->flags.b.port_over_current_change = 1;
44076 + hprt0_modify.b.prtovrcurrchng = 1;
44077 + retval |= 1;
44078 + }
44079 +
44080 + /* Clear Port Interrupts */
44081 + DWC_WRITE_REG32(dwc_otg_hcd->core_if->host_if->hprt0, hprt0_modify.d32);
44082 +
44083 + return retval;
44084 +}
44085 +
44086 +/** This interrupt indicates that one or more host channels has a pending
44087 + * interrupt. There are multiple conditions that can cause each host channel
44088 + * interrupt. This function determines which conditions have occurred for each
44089 + * host channel interrupt and handles them appropriately. */
44090 +int32_t dwc_otg_hcd_handle_hc_intr(dwc_otg_hcd_t * dwc_otg_hcd)
44091 +{
44092 + int i;
44093 + int retval = 0;
44094 + haint_data_t haint = { .d32 = 0 } ;
44095 +
44096 + /* Clear appropriate bits in HCINTn to clear the interrupt bit in
44097 + * GINTSTS */
44098 +
44099 + if (!fiq_fsm_enable)
44100 + haint.d32 = dwc_otg_read_host_all_channels_intr(dwc_otg_hcd->core_if);
44101 +
44102 +	// Merge in the channel interrupts saved by the FIQ handler
44103 + if(fiq_fsm_enable)
44104 + {
44105 + /* check the mask? */
44106 + local_fiq_disable();
44107 + fiq_fsm_spin_lock(&dwc_otg_hcd->fiq_state->lock);
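+		/*
+		 * Bits cleared in haintmsk_saved mark channels for which the FIQ
+		 * has deferred handling to this IRQ handler, so OR their
+		 * complement into haint and reset the saved mask to all-ones.
+		 */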
44108 + haint.b2.chint |= ~(dwc_otg_hcd->fiq_state->haintmsk_saved.b2.chint);
44109 + dwc_otg_hcd->fiq_state->haintmsk_saved.b2.chint = ~0;
44110 + fiq_fsm_spin_unlock(&dwc_otg_hcd->fiq_state->lock);
44111 + local_fiq_enable();
44112 + }
44113 +
44114 + for (i = 0; i < dwc_otg_hcd->core_if->core_params->host_channels; i++) {
44115 + if (haint.b2.chint & (1 << i)) {
44116 + retval |= dwc_otg_hcd_handle_hc_n_intr(dwc_otg_hcd, i);
44117 + }
44118 + }
44119 +
44120 + return retval;
44121 +}
44122 +
44123 +/**
44124 + * Gets the actual length of a transfer after the transfer halts. _halt_status
44125 + * holds the reason for the halt.
44126 + *
44127 + * For IN transfers where halt_status is DWC_OTG_HC_XFER_COMPLETE,
44128 + * *short_read is set to 1 upon return if less than the requested
44129 + * number of bytes were transferred. Otherwise, *short_read is set to 0 upon
44130 + * return. short_read may also be NULL on entry, in which case it remains
44131 + * unchanged.
44132 + */
44133 +static uint32_t get_actual_xfer_length(dwc_hc_t * hc,
44134 + dwc_otg_hc_regs_t * hc_regs,
44135 + dwc_otg_qtd_t * qtd,
44136 + dwc_otg_halt_status_e halt_status,
44137 + int *short_read)
44138 +{
44139 + hctsiz_data_t hctsiz;
44140 + uint32_t length;
44141 +
44142 + if (short_read != NULL) {
44143 + *short_read = 0;
44144 + }
44145 + hctsiz.d32 = DWC_READ_REG32(&hc_regs->hctsiz);
44146 +
44147 + if (halt_status == DWC_OTG_HC_XFER_COMPLETE) {
44148 + if (hc->ep_is_in) {
44149 + length = hc->xfer_len - hctsiz.b.xfersize;
44150 + if (short_read != NULL) {
44151 + *short_read = (hctsiz.b.xfersize != 0);
44152 + }
44153 + } else if (hc->qh->do_split) {
44154 + //length = split_out_xfersize[hc->hc_num];
44155 + length = qtd->ssplit_out_xfer_count;
44156 + } else {
44157 + length = hc->xfer_len;
44158 + }
44159 + } else {
44160 + /*
44161 + * Must use the hctsiz.pktcnt field to determine how much data
44162 + * has been transferred. This field reflects the number of
44163 + * packets that have been transferred via the USB. This is
44164 + * always an integral number of packets if the transfer was
44165 + * halted before its normal completion. (Can't use the
44166 + * hctsiz.xfersize field because that reflects the number of
44167 + * bytes transferred via the AHB, not the USB).
44168 + */
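+		/*
+		 * For example (illustrative values only): if start_pkt_count was
+		 * 10, hctsiz.pktcnt reads back as 3 and max_packet is 512, then
+		 * 7 packets reached the bus and the actual length is
+		 * 7 * 512 = 3584 bytes.
+		 */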
44169 + length =
44170 + (hc->start_pkt_count - hctsiz.b.pktcnt) * hc->max_packet;
44171 + }
44172 +
44173 + return length;
44174 +}
44175 +
44176 +/**
44177 + * Updates the state of the URB after a Transfer Complete interrupt on the
44178 + * host channel. Updates the actual_length field of the URB based on the
44179 + * number of bytes transferred via the host channel. Sets the URB status
44180 + * if the data transfer is finished.
44181 + *
44182 + * @return 1 if the data transfer specified by the URB is completely finished,
44183 + * 0 otherwise.
44184 + */
44185 +static int update_urb_state_xfer_comp(dwc_hc_t * hc,
44186 + dwc_otg_hc_regs_t * hc_regs,
44187 + dwc_otg_hcd_urb_t * urb,
44188 + dwc_otg_qtd_t * qtd)
44189 +{
44190 + int xfer_done = 0;
44191 + int short_read = 0;
44192 +
44193 + int xfer_length;
44194 +
44195 + xfer_length = get_actual_xfer_length(hc, hc_regs, qtd,
44196 + DWC_OTG_HC_XFER_COMPLETE,
44197 + &short_read);
44198 +
44199 + if (urb->actual_length + xfer_length > urb->length) {
44200 + printk_once(KERN_DEBUG "dwc_otg: DEVICE:%03d : %s:%d:trimming xfer length\n",
44201 + hc->dev_addr, __func__, __LINE__);
44202 + xfer_length = urb->length - urb->actual_length;
44203 + }
44204 +
44205 + /* non DWORD-aligned buffer case handling. */
44206 + if (hc->align_buff && xfer_length && hc->ep_is_in) {
44207 + dwc_memcpy(urb->buf + urb->actual_length, hc->qh->dw_align_buf,
44208 + xfer_length);
44209 + }
44210 +
44211 + urb->actual_length += xfer_length;
44212 +
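+	/*
+	 * A bulk OUT URB flagged with URB_SEND_ZERO_PACKET that ends exactly
+	 * on a max_packet boundary (e.g. a 1024-byte URB with a 512-byte max
+	 * packet size) is not finished yet: a terminating zero-length packet
+	 * must still be sent, so leave xfer_done clear in that case.
+	 */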
44213 + if (xfer_length && (hc->ep_type == DWC_OTG_EP_TYPE_BULK) &&
44214 + (urb->flags & URB_SEND_ZERO_PACKET)
44215 + && (urb->actual_length == urb->length)
44216 + && !(urb->length % hc->max_packet)) {
44217 + xfer_done = 0;
44218 + } else if (short_read || urb->actual_length >= urb->length) {
44219 + xfer_done = 1;
44220 + urb->status = 0;
44221 + }
44222 +
44223 +#ifdef DEBUG
44224 + {
44225 + hctsiz_data_t hctsiz;
44226 + hctsiz.d32 = DWC_READ_REG32(&hc_regs->hctsiz);
44227 + DWC_DEBUGPL(DBG_HCDV, "DWC_otg: %s: %s, channel %d\n",
44228 + __func__, (hc->ep_is_in ? "IN" : "OUT"),
44229 + hc->hc_num);
44230 + DWC_DEBUGPL(DBG_HCDV, " hc->xfer_len %d\n", hc->xfer_len);
44231 + DWC_DEBUGPL(DBG_HCDV, " hctsiz.xfersize %d\n",
44232 + hctsiz.b.xfersize);
44233 + DWC_DEBUGPL(DBG_HCDV, " urb->transfer_buffer_length %d\n",
44234 + urb->length);
44235 + DWC_DEBUGPL(DBG_HCDV, " urb->actual_length %d\n",
44236 + urb->actual_length);
44237 + DWC_DEBUGPL(DBG_HCDV, " short_read %d, xfer_done %d\n",
44238 + short_read, xfer_done);
44239 + }
44240 +#endif
44241 +
44242 + return xfer_done;
44243 +}
44244 +
44245 +/*
44246 + * Save the starting data toggle for the next transfer. The data toggle is
44247 + * saved in the QH for non-control transfers and it's saved in the QTD for
44248 + * control transfers.
44249 + */
44250 +void dwc_otg_hcd_save_data_toggle(dwc_hc_t * hc,
44251 + dwc_otg_hc_regs_t * hc_regs, dwc_otg_qtd_t * qtd)
44252 +{
44253 + hctsiz_data_t hctsiz;
44254 + hctsiz.d32 = DWC_READ_REG32(&hc_regs->hctsiz);
44255 +
44256 + if (hc->ep_type != DWC_OTG_EP_TYPE_CONTROL) {
44257 + dwc_otg_qh_t *qh = hc->qh;
44258 + if (hctsiz.b.pid == DWC_HCTSIZ_DATA0) {
44259 + qh->data_toggle = DWC_OTG_HC_PID_DATA0;
44260 + } else {
44261 + qh->data_toggle = DWC_OTG_HC_PID_DATA1;
44262 + }
44263 + } else {
44264 + if (hctsiz.b.pid == DWC_HCTSIZ_DATA0) {
44265 + qtd->data_toggle = DWC_OTG_HC_PID_DATA0;
44266 + } else {
44267 + qtd->data_toggle = DWC_OTG_HC_PID_DATA1;
44268 + }
44269 + }
44270 +}
44271 +
44272 +/**
44273 + * Updates the state of an Isochronous URB when the transfer is stopped for
44274 + * any reason. The fields of the current entry in the frame descriptor array
44275 + * are set based on the transfer state and the input _halt_status. Completes
44276 + * the Isochronous URB if all the URB frames have been completed.
44277 + *
44278 + * @return DWC_OTG_HC_XFER_COMPLETE if there are more frames remaining to be
44279 + * transferred in the URB. Otherwise return DWC_OTG_HC_XFER_URB_COMPLETE.
44280 + */
44281 +static dwc_otg_halt_status_e
44282 +update_isoc_urb_state(dwc_otg_hcd_t * hcd,
44283 + dwc_hc_t * hc,
44284 + dwc_otg_hc_regs_t * hc_regs,
44285 + dwc_otg_qtd_t * qtd, dwc_otg_halt_status_e halt_status)
44286 +{
44287 + dwc_otg_hcd_urb_t *urb = qtd->urb;
44288 + dwc_otg_halt_status_e ret_val = halt_status;
44289 + struct dwc_otg_hcd_iso_packet_desc *frame_desc;
44290 +
44291 + frame_desc = &urb->iso_descs[qtd->isoc_frame_index];
44292 + switch (halt_status) {
44293 + case DWC_OTG_HC_XFER_COMPLETE:
44294 + frame_desc->status = 0;
44295 + frame_desc->actual_length =
44296 + get_actual_xfer_length(hc, hc_regs, qtd, halt_status, NULL);
44297 +
44298 + /* non DWORD-aligned buffer case handling. */
44299 + if (hc->align_buff && frame_desc->actual_length && hc->ep_is_in) {
44300 + dwc_memcpy(urb->buf + frame_desc->offset + qtd->isoc_split_offset,
44301 + hc->qh->dw_align_buf, frame_desc->actual_length);
44302 + }
44303 +
44304 + break;
44305 + case DWC_OTG_HC_XFER_FRAME_OVERRUN:
44306 + urb->error_count++;
44307 + if (hc->ep_is_in) {
44308 + frame_desc->status = -DWC_E_NO_STREAM_RES;
44309 + } else {
44310 + frame_desc->status = -DWC_E_COMMUNICATION;
44311 + }
44312 + frame_desc->actual_length = 0;
44313 + break;
44314 + case DWC_OTG_HC_XFER_BABBLE_ERR:
44315 + urb->error_count++;
44316 + frame_desc->status = -DWC_E_OVERFLOW;
44317 + /* Don't need to update actual_length in this case. */
44318 + break;
44319 + case DWC_OTG_HC_XFER_XACT_ERR:
44320 + urb->error_count++;
44321 + frame_desc->status = -DWC_E_PROTOCOL;
44322 + frame_desc->actual_length =
44323 + get_actual_xfer_length(hc, hc_regs, qtd, halt_status, NULL);
44324 +
44325 + /* non DWORD-aligned buffer case handling. */
44326 + if (hc->align_buff && frame_desc->actual_length && hc->ep_is_in) {
44327 + dwc_memcpy(urb->buf + frame_desc->offset + qtd->isoc_split_offset,
44328 + hc->qh->dw_align_buf, frame_desc->actual_length);
44329 + }
44330 + /* Skip whole frame */
44331 + if (hc->qh->do_split && (hc->ep_type == DWC_OTG_EP_TYPE_ISOC) &&
44332 + hc->ep_is_in && hcd->core_if->dma_enable) {
44333 + qtd->complete_split = 0;
44334 + qtd->isoc_split_offset = 0;
44335 + }
44336 +
44337 + break;
44338 + default:
44339 + DWC_ASSERT(1, "Unhandled _halt_status (%d)\n", halt_status);
44340 + break;
44341 + }
44342 + if (++qtd->isoc_frame_index == urb->packet_count) {
44343 + /*
44344 + * urb->status is not used for isoc transfers.
44345 + * The individual frame_desc statuses are used instead.
44346 + */
44347 + hcd->fops->complete(hcd, urb->priv, urb, 0);
44348 + ret_val = DWC_OTG_HC_XFER_URB_COMPLETE;
44349 + } else {
44350 + ret_val = DWC_OTG_HC_XFER_COMPLETE;
44351 + }
44352 + return ret_val;
44353 +}
44354 +
44355 +/**
44356 + * Frees the first QTD in the QH's list if free_qtd is 1. For non-periodic
44357 + * QHs, removes the QH from the active non-periodic schedule. If any QTDs are
44358 + * still linked to the QH, the QH is added to the end of the inactive
44359 + * non-periodic schedule. For periodic QHs, removes the QH from the periodic
44360 + * schedule if no more QTDs are linked to the QH.
44361 + */
44362 +static void deactivate_qh(dwc_otg_hcd_t * hcd, dwc_otg_qh_t * qh, int free_qtd)
44363 +{
44364 + int continue_split = 0;
44365 + dwc_otg_qtd_t *qtd;
44366 +
44367 + DWC_DEBUGPL(DBG_HCDV, " %s(%p,%p,%d)\n", __func__, hcd, qh, free_qtd);
44368 +
44369 + qtd = DWC_CIRCLEQ_FIRST(&qh->qtd_list);
44370 +
44371 + if (qtd->complete_split) {
44372 + continue_split = 1;
44373 + } else if (qtd->isoc_split_pos == DWC_HCSPLIT_XACTPOS_MID ||
44374 + qtd->isoc_split_pos == DWC_HCSPLIT_XACTPOS_END) {
44375 + continue_split = 1;
44376 + }
44377 +
44378 + if (free_qtd) {
44379 + dwc_otg_hcd_qtd_remove_and_free(hcd, qtd, qh);
44380 + continue_split = 0;
44381 + }
44382 +
44383 + qh->channel = NULL;
44384 + dwc_otg_hcd_qh_deactivate(hcd, qh, continue_split);
44385 +}
44386 +
44387 +/**
44388 + * Releases a host channel for use by other transfers. Attempts to select and
44389 + * queue more transactions since at least one host channel is available.
44390 + *
44391 + * @param hcd The HCD state structure.
44392 + * @param hc The host channel to release.
44393 + * @param qtd The QTD associated with the host channel. This QTD may be freed
44394 + * if the transfer is complete or an error has occurred.
44395 + * @param halt_status Reason the channel is being released. This status
44396 + * determines the actions taken by this function.
44397 + */
44398 +static void release_channel(dwc_otg_hcd_t * hcd,
44399 + dwc_hc_t * hc,
44400 + dwc_otg_qtd_t * qtd,
44401 + dwc_otg_halt_status_e halt_status)
44402 +{
44403 + dwc_otg_transaction_type_e tr_type;
44404 + int free_qtd;
44405 +
44406 + int hog_port = 0;
44407 +
44408 + DWC_DEBUGPL(DBG_HCDV, " %s: channel %d, halt_status %d, xfer_len %d\n",
44409 + __func__, hc->hc_num, halt_status, hc->xfer_len);
44410 +
44411 + if(fiq_fsm_enable && hc->do_split) {
44412 + if(!hc->ep_is_in && hc->ep_type == UE_ISOCHRONOUS) {
44413 + if(hc->xact_pos == DWC_HCSPLIT_XACTPOS_MID ||
44414 + hc->xact_pos == DWC_HCSPLIT_XACTPOS_BEGIN) {
44415 + hog_port = 0;
44416 + }
44417 + }
44418 + }
44419 +
44420 + switch (halt_status) {
44421 + case DWC_OTG_HC_XFER_URB_COMPLETE:
44422 + free_qtd = 1;
44423 + break;
44424 + case DWC_OTG_HC_XFER_AHB_ERR:
44425 + case DWC_OTG_HC_XFER_STALL:
44426 + case DWC_OTG_HC_XFER_BABBLE_ERR:
44427 + free_qtd = 1;
44428 + break;
44429 + case DWC_OTG_HC_XFER_XACT_ERR:
44430 + if (qtd->error_count >= 3) {
44431 + DWC_DEBUGPL(DBG_HCDV,
44432 + " Complete URB with transaction error\n");
44433 + free_qtd = 1;
44434 + qtd->urb->status = -DWC_E_PROTOCOL;
44435 + hcd->fops->complete(hcd, qtd->urb->priv,
44436 + qtd->urb, -DWC_E_PROTOCOL);
44437 + } else {
44438 + free_qtd = 0;
44439 + }
44440 + break;
44441 + case DWC_OTG_HC_XFER_URB_DEQUEUE:
44442 + /*
44443 + * The QTD has already been removed and the QH has been
44444 + * deactivated. Don't want to do anything except release the
44445 + * host channel and try to queue more transfers.
44446 + */
44447 + goto cleanup;
44448 + case DWC_OTG_HC_XFER_NO_HALT_STATUS:
44449 + free_qtd = 0;
44450 + break;
44451 + case DWC_OTG_HC_XFER_PERIODIC_INCOMPLETE:
44452 + DWC_DEBUGPL(DBG_HCDV,
44453 + " Complete URB with I/O error\n");
44454 + free_qtd = 1;
44455 + qtd->urb->status = -DWC_E_IO;
44456 + hcd->fops->complete(hcd, qtd->urb->priv,
44457 + qtd->urb, -DWC_E_IO);
44458 + break;
44459 + default:
44460 + free_qtd = 0;
44461 + break;
44462 + }
44463 +
44464 + deactivate_qh(hcd, hc->qh, free_qtd);
44465 +
44466 +cleanup:
44467 + /*
44468 + * Release the host channel for use by other transfers. The cleanup
44469 + * function clears the channel interrupt enables and conditions, so
44470 + * there's no need to clear the Channel Halted interrupt separately.
44471 + */
44472 + if (fiq_fsm_enable && hcd->fiq_state->channel[hc->hc_num].fsm != FIQ_PASSTHROUGH)
44473 + dwc_otg_cleanup_fiq_channel(hcd, hc->hc_num);
44474 + dwc_otg_hc_cleanup(hcd->core_if, hc);
44475 + DWC_CIRCLEQ_INSERT_TAIL(&hcd->free_hc_list, hc, hc_list_entry);
44476 +
44477 + if (!microframe_schedule) {
44478 + switch (hc->ep_type) {
44479 + case DWC_OTG_EP_TYPE_CONTROL:
44480 + case DWC_OTG_EP_TYPE_BULK:
44481 + hcd->non_periodic_channels--;
44482 + break;
44483 +
44484 + default:
44485 + /*
44486 + * Don't release reservations for periodic channels here.
44487 + * That's done when a periodic transfer is descheduled (i.e.
44488 + * when the QH is removed from the periodic schedule).
44489 + */
44490 + break;
44491 + }
44492 + } else {
44493 + hcd->available_host_channels++;
44494 + fiq_print(FIQDBG_INT, hcd->fiq_state, "AHC = %d ", hcd->available_host_channels);
44495 + }
44496 +
44497 + /* Try to queue more transfers now that there's a free channel. */
44498 + tr_type = dwc_otg_hcd_select_transactions(hcd);
44499 + if (tr_type != DWC_OTG_TRANSACTION_NONE) {
44500 + dwc_otg_hcd_queue_transactions(hcd, tr_type);
44501 + }
44502 +}
44503 +
44504 +/**
44505 + * Halts a host channel. If the channel cannot be halted immediately because
44506 + * the request queue is full, this function ensures that the FIFO empty
44507 + * interrupt for the appropriate queue is enabled so that the halt request can
44508 + * be queued when there is space in the request queue.
44509 + *
44510 + * This function may also be called in DMA mode. In that case, the channel is
44511 + * simply released since the core always halts the channel automatically in
44512 + * DMA mode.
44513 + */
44514 +static void halt_channel(dwc_otg_hcd_t * hcd,
44515 + dwc_hc_t * hc,
44516 + dwc_otg_qtd_t * qtd, dwc_otg_halt_status_e halt_status)
44517 +{
44518 + if (hcd->core_if->dma_enable) {
44519 + release_channel(hcd, hc, qtd, halt_status);
44520 + return;
44521 + }
44522 +
44523 + /* Slave mode processing... */
44524 + dwc_otg_hc_halt(hcd->core_if, hc, halt_status);
44525 +
44526 + if (hc->halt_on_queue) {
44527 + gintmsk_data_t gintmsk = {.d32 = 0 };
44528 + dwc_otg_core_global_regs_t *global_regs;
44529 + global_regs = hcd->core_if->core_global_regs;
44530 +
44531 + if (hc->ep_type == DWC_OTG_EP_TYPE_CONTROL ||
44532 + hc->ep_type == DWC_OTG_EP_TYPE_BULK) {
44533 + /*
44534 + * Make sure the Non-periodic Tx FIFO empty interrupt
44535 + * is enabled so that the non-periodic schedule will
44536 + * be processed.
44537 + */
44538 + gintmsk.b.nptxfempty = 1;
44539 + if (fiq_enable) {
44540 + local_fiq_disable();
44541 + fiq_fsm_spin_lock(&hcd->fiq_state->lock);
44542 + DWC_MODIFY_REG32(&global_regs->gintmsk, 0, gintmsk.d32);
44543 + fiq_fsm_spin_unlock(&hcd->fiq_state->lock);
44544 + local_fiq_enable();
44545 + } else {
44546 + DWC_MODIFY_REG32(&global_regs->gintmsk, 0, gintmsk.d32);
44547 + }
44548 + } else {
44549 + /*
44550 + * Move the QH from the periodic queued schedule to
44551 + * the periodic assigned schedule. This allows the
44552 + * halt to be queued when the periodic schedule is
44553 + * processed.
44554 + */
44555 + DWC_LIST_MOVE_HEAD(&hcd->periodic_sched_assigned,
44556 + &hc->qh->qh_list_entry);
44557 +
44558 + /*
44559 + * Make sure the Periodic Tx FIFO Empty interrupt is
44560 + * enabled so that the periodic schedule will be
44561 + * processed.
44562 + */
44563 + gintmsk.b.ptxfempty = 1;
44564 + if (fiq_enable) {
44565 + local_fiq_disable();
44566 + fiq_fsm_spin_lock(&hcd->fiq_state->lock);
44567 + DWC_MODIFY_REG32(&global_regs->gintmsk, 0, gintmsk.d32);
44568 + fiq_fsm_spin_unlock(&hcd->fiq_state->lock);
44569 + local_fiq_enable();
44570 + } else {
44571 + DWC_MODIFY_REG32(&global_regs->gintmsk, 0, gintmsk.d32);
44572 + }
44573 + }
44574 + }
44575 +}
44576 +
44577 +/**
44578 + * Performs common cleanup for non-periodic transfers after a Transfer
44579 + * Complete interrupt. This function should be called after any endpoint type
44580 + * specific handling is finished to release the host channel.
44581 + */
44582 +static void complete_non_periodic_xfer(dwc_otg_hcd_t * hcd,
44583 + dwc_hc_t * hc,
44584 + dwc_otg_hc_regs_t * hc_regs,
44585 + dwc_otg_qtd_t * qtd,
44586 + dwc_otg_halt_status_e halt_status)
44587 +{
44588 + hcint_data_t hcint;
44589 +
44590 + qtd->error_count = 0;
44591 +
44592 + hcint.d32 = DWC_READ_REG32(&hc_regs->hcint);
44593 + if (hcint.b.nyet) {
44594 + /*
44595 + * Got a NYET on the last transaction of the transfer. This
44596 + * means that the endpoint should be in the PING state at the
44597 + * beginning of the next transfer.
44598 + */
44599 + hc->qh->ping_state = 1;
44600 + clear_hc_int(hc_regs, nyet);
44601 + }
44602 +
44603 + /*
44604 + * Always halt and release the host channel to make it available for
44605 + * more transfers. There may still be more phases for a control
44606 + * transfer or more data packets for a bulk transfer at this point,
44607 + * but the host channel is still halted. A channel will be reassigned
44608 + * to the transfer when the non-periodic schedule is processed after
44609 + * the channel is released. This allows transactions to be queued
44610 + * properly via dwc_otg_hcd_queue_transactions, which also enables the
44611 + * Tx FIFO Empty interrupt if necessary.
44612 + */
44613 + if (hc->ep_is_in) {
44614 + /*
44615 + * IN transfers in Slave mode require an explicit disable to
44616 + * halt the channel. (In DMA mode, this call simply releases
44617 + * the channel.)
44618 + */
44619 + halt_channel(hcd, hc, qtd, halt_status);
44620 + } else {
44621 + /*
44622 + * The channel is automatically disabled by the core for OUT
44623 + * transfers in Slave mode.
44624 + */
44625 + release_channel(hcd, hc, qtd, halt_status);
44626 + }
44627 +}
44628 +
44629 +/**
44630 + * Performs common cleanup for periodic transfers after a Transfer Complete
44631 + * interrupt. This function should be called after any endpoint type specific
44632 + * handling is finished to release the host channel.
44633 + */
44634 +static void complete_periodic_xfer(dwc_otg_hcd_t * hcd,
44635 + dwc_hc_t * hc,
44636 + dwc_otg_hc_regs_t * hc_regs,
44637 + dwc_otg_qtd_t * qtd,
44638 + dwc_otg_halt_status_e halt_status)
44639 +{
44640 + hctsiz_data_t hctsiz;
44641 + qtd->error_count = 0;
44642 +
44643 + hctsiz.d32 = DWC_READ_REG32(&hc_regs->hctsiz);
44644 + if (!hc->ep_is_in || hctsiz.b.pktcnt == 0) {
44645 + /* Core halts channel in these cases. */
44646 + release_channel(hcd, hc, qtd, halt_status);
44647 + } else {
44648 + /* Flush any outstanding requests from the Tx queue. */
44649 + halt_channel(hcd, hc, qtd, halt_status);
44650 + }
44651 +}
44652 +
44653 +static int32_t handle_xfercomp_isoc_split_in(dwc_otg_hcd_t * hcd,
44654 + dwc_hc_t * hc,
44655 + dwc_otg_hc_regs_t * hc_regs,
44656 + dwc_otg_qtd_t * qtd)
44657 +{
44658 + uint32_t len;
44659 + struct dwc_otg_hcd_iso_packet_desc *frame_desc;
44660 + frame_desc = &qtd->urb->iso_descs[qtd->isoc_frame_index];
44661 +
44662 + len = get_actual_xfer_length(hc, hc_regs, qtd,
44663 + DWC_OTG_HC_XFER_COMPLETE, NULL);
44664 +
44665 + if (!len) {
44666 + qtd->complete_split = 0;
44667 + qtd->isoc_split_offset = 0;
44668 + return 0;
44669 + }
44670 + frame_desc->actual_length += len;
44671 +
44672 + if (hc->align_buff && len)
44673 + dwc_memcpy(qtd->urb->buf + frame_desc->offset +
44674 + qtd->isoc_split_offset, hc->qh->dw_align_buf, len);
44675 + qtd->isoc_split_offset += len;
44676 +
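+	/*
+	 * Once the accumulated data matches the requested frame length, this
+	 * isochronous frame is complete: advance to the next frame descriptor
+	 * and reset the split-tracking state.
+	 */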
44677 + if (frame_desc->length == frame_desc->actual_length) {
44678 + frame_desc->status = 0;
44679 + qtd->isoc_frame_index++;
44680 + qtd->complete_split = 0;
44681 + qtd->isoc_split_offset = 0;
44682 + }
44683 +
44684 + if (qtd->isoc_frame_index == qtd->urb->packet_count) {
44685 + hcd->fops->complete(hcd, qtd->urb->priv, qtd->urb, 0);
44686 + release_channel(hcd, hc, qtd, DWC_OTG_HC_XFER_URB_COMPLETE);
44687 + } else {
44688 + release_channel(hcd, hc, qtd, DWC_OTG_HC_XFER_NO_HALT_STATUS);
44689 + }
44690 +
44691 + return 1; /* Indicates that channel released */
44692 +}
44693 +
44694 +/**
44695 + * Handles a host channel Transfer Complete interrupt. This handler may be
44696 + * called in either DMA mode or Slave mode.
44697 + */
44698 +static int32_t handle_hc_xfercomp_intr(dwc_otg_hcd_t * hcd,
44699 + dwc_hc_t * hc,
44700 + dwc_otg_hc_regs_t * hc_regs,
44701 + dwc_otg_qtd_t * qtd)
44702 +{
44703 + int urb_xfer_done;
44704 + dwc_otg_halt_status_e halt_status = DWC_OTG_HC_XFER_COMPLETE;
44705 + dwc_otg_hcd_urb_t *urb = qtd->urb;
44706 + int pipe_type = dwc_otg_hcd_get_pipe_type(&urb->pipe_info);
44707 +
44708 + DWC_DEBUGPL(DBG_HCDI, "--Host Channel %d Interrupt: "
44709 + "Transfer Complete--\n", hc->hc_num);
44710 +
44711 + if (hcd->core_if->dma_desc_enable) {
44712 + dwc_otg_hcd_complete_xfer_ddma(hcd, hc, hc_regs, halt_status);
44713 + if (pipe_type == UE_ISOCHRONOUS) {
44714 + /* Do not disable the interrupt, just clear it */
44715 + clear_hc_int(hc_regs, xfercomp);
44716 + return 1;
44717 + }
44718 + goto handle_xfercomp_done;
44719 + }
44720 +
44721 + /*
44722 + * Handle xfer complete on CSPLIT.
44723 + */
44724 +
44725 + if (hc->qh->do_split) {
44726 + if ((hc->ep_type == DWC_OTG_EP_TYPE_ISOC) && hc->ep_is_in
44727 + && hcd->core_if->dma_enable) {
44728 + if (qtd->complete_split
44729 + && handle_xfercomp_isoc_split_in(hcd, hc, hc_regs,
44730 + qtd))
44731 + goto handle_xfercomp_done;
44732 + } else {
44733 + qtd->complete_split = 0;
44734 + }
44735 + }
44736 +
44737 + /* Update the QTD and URB states. */
44738 + switch (pipe_type) {
44739 + case UE_CONTROL:
44740 + switch (qtd->control_phase) {
44741 + case DWC_OTG_CONTROL_SETUP:
44742 + if (urb->length > 0) {
44743 + qtd->control_phase = DWC_OTG_CONTROL_DATA;
44744 + } else {
44745 + qtd->control_phase = DWC_OTG_CONTROL_STATUS;
44746 + }
44747 + DWC_DEBUGPL(DBG_HCDV,
44748 + " Control setup transaction done\n");
44749 + halt_status = DWC_OTG_HC_XFER_COMPLETE;
44750 + break;
44751 + case DWC_OTG_CONTROL_DATA:{
44752 + urb_xfer_done =
44753 + update_urb_state_xfer_comp(hc, hc_regs, urb,
44754 + qtd);
44755 + if (urb_xfer_done) {
44756 + qtd->control_phase =
44757 + DWC_OTG_CONTROL_STATUS;
44758 + DWC_DEBUGPL(DBG_HCDV,
44759 + " Control data transfer done\n");
44760 + } else {
44761 + dwc_otg_hcd_save_data_toggle(hc, hc_regs, qtd);
44762 + }
44763 + halt_status = DWC_OTG_HC_XFER_COMPLETE;
44764 + break;
44765 + }
44766 + case DWC_OTG_CONTROL_STATUS:
44767 + DWC_DEBUGPL(DBG_HCDV, " Control transfer complete\n");
44768 + if (urb->status == -DWC_E_IN_PROGRESS) {
44769 + urb->status = 0;
44770 + }
44771 + hcd->fops->complete(hcd, urb->priv, urb, urb->status);
44772 + halt_status = DWC_OTG_HC_XFER_URB_COMPLETE;
44773 + break;
44774 + }
44775 +
44776 + complete_non_periodic_xfer(hcd, hc, hc_regs, qtd, halt_status);
44777 + break;
44778 + case UE_BULK:
44779 + DWC_DEBUGPL(DBG_HCDV, " Bulk transfer complete\n");
44780 + urb_xfer_done =
44781 + update_urb_state_xfer_comp(hc, hc_regs, urb, qtd);
44782 + if (urb_xfer_done) {
44783 + hcd->fops->complete(hcd, urb->priv, urb, urb->status);
44784 + halt_status = DWC_OTG_HC_XFER_URB_COMPLETE;
44785 + } else {
44786 + halt_status = DWC_OTG_HC_XFER_COMPLETE;
44787 + }
44788 +
44789 + dwc_otg_hcd_save_data_toggle(hc, hc_regs, qtd);
44790 + complete_non_periodic_xfer(hcd, hc, hc_regs, qtd, halt_status);
44791 + break;
44792 + case UE_INTERRUPT:
44793 + DWC_DEBUGPL(DBG_HCDV, " Interrupt transfer complete\n");
44794 + urb_xfer_done =
44795 + update_urb_state_xfer_comp(hc, hc_regs, urb, qtd);
44796 +
44797 + /*
44798 + * Interrupt URB is done on the first transfer complete
44799 + * interrupt.
44800 + */
44801 + if (urb_xfer_done) {
44802 + hcd->fops->complete(hcd, urb->priv, urb, urb->status);
44803 + halt_status = DWC_OTG_HC_XFER_URB_COMPLETE;
44804 + } else {
44805 + halt_status = DWC_OTG_HC_XFER_COMPLETE;
44806 + }
44807 +
44808 + dwc_otg_hcd_save_data_toggle(hc, hc_regs, qtd);
44809 + complete_periodic_xfer(hcd, hc, hc_regs, qtd, halt_status);
44810 + break;
44811 + case UE_ISOCHRONOUS:
44812 + DWC_DEBUGPL(DBG_HCDV, " Isochronous transfer complete\n");
44813 + if (qtd->isoc_split_pos == DWC_HCSPLIT_XACTPOS_ALL) {
44814 + halt_status =
44815 + update_isoc_urb_state(hcd, hc, hc_regs, qtd,
44816 + DWC_OTG_HC_XFER_COMPLETE);
44817 + }
44818 + complete_periodic_xfer(hcd, hc, hc_regs, qtd, halt_status);
44819 + break;
44820 + }
44821 +
44822 +handle_xfercomp_done:
44823 + disable_hc_int(hc_regs, xfercompl);
44824 +
44825 + return 1;
44826 +}
44827 +
44828 +/**
44829 + * Handles a host channel STALL interrupt. This handler may be called in
44830 + * either DMA mode or Slave mode.
44831 + */
44832 +static int32_t handle_hc_stall_intr(dwc_otg_hcd_t * hcd,
44833 + dwc_hc_t * hc,
44834 + dwc_otg_hc_regs_t * hc_regs,
44835 + dwc_otg_qtd_t * qtd)
44836 +{
44837 + dwc_otg_hcd_urb_t *urb = qtd->urb;
44838 + int pipe_type = dwc_otg_hcd_get_pipe_type(&urb->pipe_info);
44839 +
44840 + DWC_DEBUGPL(DBG_HCD, "--Host Channel %d Interrupt: "
44841 + "STALL Received--\n", hc->hc_num);
44842 +
44843 + if (hcd->core_if->dma_desc_enable) {
44844 + dwc_otg_hcd_complete_xfer_ddma(hcd, hc, hc_regs, DWC_OTG_HC_XFER_STALL);
44845 + goto handle_stall_done;
44846 + }
44847 +
44848 + if (pipe_type == UE_CONTROL) {
44849 + hcd->fops->complete(hcd, urb->priv, urb, -DWC_E_PIPE);
44850 + }
44851 +
44852 + if (pipe_type == UE_BULK || pipe_type == UE_INTERRUPT) {
44853 + hcd->fops->complete(hcd, urb->priv, urb, -DWC_E_PIPE);
44854 + /*
44855 + * USB protocol requires resetting the data toggle for bulk
44856 + * and interrupt endpoints when a CLEAR_FEATURE(ENDPOINT_HALT)
44857 + * setup command is issued to the endpoint. Anticipate the
44858 + * CLEAR_FEATURE command since a STALL has occurred and reset
44859 + * the data toggle now.
44860 + */
44861 + hc->qh->data_toggle = 0;
44862 + }
44863 +
44864 + halt_channel(hcd, hc, qtd, DWC_OTG_HC_XFER_STALL);
44865 +
44866 +handle_stall_done:
44867 + disable_hc_int(hc_regs, stall);
44868 +
44869 + return 1;
44870 +}
44871 +
44872 +/*
44873 + * Updates the state of the URB when a transfer has been stopped due to an
44874 + * abnormal condition before the transfer completes. Modifies the
44875 + * actual_length field of the URB to reflect the number of bytes that have
44876 + * actually been transferred via the host channel.
44877 + */
44878 +static void update_urb_state_xfer_intr(dwc_hc_t * hc,
44879 + dwc_otg_hc_regs_t * hc_regs,
44880 + dwc_otg_hcd_urb_t * urb,
44881 + dwc_otg_qtd_t * qtd,
44882 + dwc_otg_halt_status_e halt_status)
44883 +{
44884 + uint32_t bytes_transferred = get_actual_xfer_length(hc, hc_regs, qtd,
44885 + halt_status, NULL);
44886 +
44887 + if (urb->actual_length + bytes_transferred > urb->length) {
44888 + printk_once(KERN_DEBUG "dwc_otg: DEVICE:%03d : %s:%d:trimming xfer length\n",
44889 + hc->dev_addr, __func__, __LINE__);
44890 + bytes_transferred = urb->length - urb->actual_length;
44891 + }
44892 +
44893 + /* non DWORD-aligned buffer case handling. */
44894 + if (hc->align_buff && bytes_transferred && hc->ep_is_in) {
44895 + dwc_memcpy(urb->buf + urb->actual_length, hc->qh->dw_align_buf,
44896 + bytes_transferred);
44897 + }
44898 +
44899 + urb->actual_length += bytes_transferred;
44900 +
44901 +#ifdef DEBUG
44902 + {
44903 + hctsiz_data_t hctsiz;
44904 + hctsiz.d32 = DWC_READ_REG32(&hc_regs->hctsiz);
44905 + DWC_DEBUGPL(DBG_HCDV, "DWC_otg: %s: %s, channel %d\n",
44906 + __func__, (hc->ep_is_in ? "IN" : "OUT"),
44907 + hc->hc_num);
44908 + DWC_DEBUGPL(DBG_HCDV, " hc->start_pkt_count %d\n",
44909 + hc->start_pkt_count);
44910 + DWC_DEBUGPL(DBG_HCDV, " hctsiz.pktcnt %d\n", hctsiz.b.pktcnt);
44911 + DWC_DEBUGPL(DBG_HCDV, " hc->max_packet %d\n", hc->max_packet);
44912 + DWC_DEBUGPL(DBG_HCDV, " bytes_transferred %d\n",
44913 + bytes_transferred);
44914 + DWC_DEBUGPL(DBG_HCDV, " urb->actual_length %d\n",
44915 + urb->actual_length);
44916 + DWC_DEBUGPL(DBG_HCDV, " urb->transfer_buffer_length %d\n",
44917 + urb->length);
44918 + }
44919 +#endif
44920 +}
44921 +
44922 +/**
44923 + * Handles a host channel NAK interrupt. This handler may be called in either
44924 + * DMA mode or Slave mode.
44925 + */
44926 +static int32_t handle_hc_nak_intr(dwc_otg_hcd_t * hcd,
44927 + dwc_hc_t * hc,
44928 + dwc_otg_hc_regs_t * hc_regs,
44929 + dwc_otg_qtd_t * qtd)
44930 +{
44931 + DWC_DEBUGPL(DBG_HCDI, "--Host Channel %d Interrupt: "
44932 + "NAK Received--\n", hc->hc_num);
44933 +
44934 + /*
44935 +	 * When we get bulk NAKs, remember this so that we hold off on this QH until
44936 +	 * the beginning of the next frame
44937 + */
44938 + switch(dwc_otg_hcd_get_pipe_type(&qtd->urb->pipe_info)) {
44939 + case UE_BULK:
44940 + case UE_CONTROL:
44941 + if (nak_holdoff && qtd->qh->do_split)
44942 + hc->qh->nak_frame = dwc_otg_hcd_get_frame_number(hcd);
44943 + }
44944 +
44945 + /*
44946 + * Handle NAK for IN/OUT SSPLIT/CSPLIT transfers, bulk, control, and
44947 + * interrupt. Re-start the SSPLIT transfer.
44948 + */
44949 + if (hc->do_split) {
44950 + if (hc->complete_split) {
44951 + qtd->error_count = 0;
44952 + }
44953 + qtd->complete_split = 0;
44954 + halt_channel(hcd, hc, qtd, DWC_OTG_HC_XFER_NAK);
44955 + goto handle_nak_done;
44956 + }
44957 +
44958 + switch (dwc_otg_hcd_get_pipe_type(&qtd->urb->pipe_info)) {
44959 + case UE_CONTROL:
44960 + case UE_BULK:
44961 + if (hcd->core_if->dma_enable && hc->ep_is_in) {
44962 + /*
44963 + * NAK interrupts are enabled on bulk/control IN
44964 + * transfers in DMA mode for the sole purpose of
44965 + * resetting the error count after a transaction error
44966 + * occurs. The core will continue transferring data.
44967 + * Disable other interrupts unmasked for the same
44968 + * reason.
44969 + */
44970 + disable_hc_int(hc_regs, datatglerr);
44971 + disable_hc_int(hc_regs, ack);
44972 + qtd->error_count = 0;
44973 + goto handle_nak_done;
44974 + }
44975 +
44976 + /*
44977 + * NAK interrupts normally occur during OUT transfers in DMA
44978 + * or Slave mode. For IN transfers, more requests will be
44979 + * queued as request queue space is available.
44980 + */
44981 + qtd->error_count = 0;
44982 +
44983 + if (!hc->qh->ping_state) {
44984 + update_urb_state_xfer_intr(hc, hc_regs,
44985 + qtd->urb, qtd,
44986 + DWC_OTG_HC_XFER_NAK);
44987 + dwc_otg_hcd_save_data_toggle(hc, hc_regs, qtd);
44988 +
44989 + if (hc->speed == DWC_OTG_EP_SPEED_HIGH)
44990 + hc->qh->ping_state = 1;
44991 + }
44992 +
44993 + /*
44994 + * Halt the channel so the transfer can be re-started from
44995 + * the appropriate point or the PING protocol will
44996 + * start/continue.
44997 + */
44998 + halt_channel(hcd, hc, qtd, DWC_OTG_HC_XFER_NAK);
44999 + break;
45000 + case UE_INTERRUPT:
45001 + qtd->error_count = 0;
45002 + halt_channel(hcd, hc, qtd, DWC_OTG_HC_XFER_NAK);
45003 + break;
45004 + case UE_ISOCHRONOUS:
45005 + /* Should never get called for isochronous transfers. */
45006 +		DWC_ASSERT(1, "NAK interrupt for ISOC transfer\n");
45007 + break;
45008 + }
45009 +
45010 +handle_nak_done:
45011 + disable_hc_int(hc_regs, nak);
45012 +
45013 + return 1;
45014 +}
45015 +
45016 +/**
45017 + * Handles a host channel ACK interrupt. This interrupt is enabled when
45018 + * performing the PING protocol in Slave mode, when errors occur during
45019 + * either Slave mode or DMA mode, and during Start Split transactions.
45020 + */
45021 +static int32_t handle_hc_ack_intr(dwc_otg_hcd_t * hcd,
45022 + dwc_hc_t * hc,
45023 + dwc_otg_hc_regs_t * hc_regs,
45024 + dwc_otg_qtd_t * qtd)
45025 +{
45026 + DWC_DEBUGPL(DBG_HCDI, "--Host Channel %d Interrupt: "
45027 + "ACK Received--\n", hc->hc_num);
45028 +
45029 + if (hc->do_split) {
45030 + /*
45031 + * Handle ACK on SSPLIT.
45032 + * ACK should not occur in CSPLIT.
45033 + */
45034 + if (!hc->ep_is_in && hc->data_pid_start != DWC_OTG_HC_PID_SETUP) {
45035 + qtd->ssplit_out_xfer_count = hc->xfer_len;
45036 + }
45037 + if (!(hc->ep_type == DWC_OTG_EP_TYPE_ISOC && !hc->ep_is_in)) {
45038 +			/* Don't need a complete split for isochronous OUT transfers. */
45039 + qtd->complete_split = 1;
45040 + }
45041 +
45042 + /* ISOC OUT */
45043 + if (hc->ep_type == DWC_OTG_EP_TYPE_ISOC && !hc->ep_is_in) {
45044 + switch (hc->xact_pos) {
45045 + case DWC_HCSPLIT_XACTPOS_ALL:
45046 + break;
45047 + case DWC_HCSPLIT_XACTPOS_END:
45048 + qtd->isoc_split_pos = DWC_HCSPLIT_XACTPOS_ALL;
45049 + qtd->isoc_split_offset = 0;
45050 + break;
45051 + case DWC_HCSPLIT_XACTPOS_BEGIN:
45052 + case DWC_HCSPLIT_XACTPOS_MID:
45053 + /*
45054 + * For BEGIN or MID, calculate the length for
45055 + * the next microframe to determine the correct
45056 + * SSPLIT token, either MID or END.
45057 + */
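+				/*
+				 * Each isochronous OUT start-split carries at
+				 * most 188 bytes per microframe. For example,
+				 * a 500-byte frame (illustrative value) goes
+				 * out as BEGIN (188), MID (188) and END (124).
+				 */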
45058 + {
45059 + struct dwc_otg_hcd_iso_packet_desc
45060 + *frame_desc;
45061 +
45062 + frame_desc =
45063 + &qtd->urb->
45064 + iso_descs[qtd->isoc_frame_index];
45065 + qtd->isoc_split_offset += 188;
45066 +
45067 + if ((frame_desc->length -
45068 + qtd->isoc_split_offset) <= 188) {
45069 + qtd->isoc_split_pos =
45070 + DWC_HCSPLIT_XACTPOS_END;
45071 + } else {
45072 + qtd->isoc_split_pos =
45073 + DWC_HCSPLIT_XACTPOS_MID;
45074 + }
45075 +
45076 + }
45077 + break;
45078 + }
45079 + } else {
45080 + halt_channel(hcd, hc, qtd, DWC_OTG_HC_XFER_ACK);
45081 + }
45082 + } else {
45083 + /*
45084 +		 * An unmasked ACK on a non-split DMA transaction occurs
45085 +		 * for the sole purpose of resetting the error count. Disable the other
45086 +		 * interrupts that were unmasked for the same reason.
45087 + */
45088 + if(hcd->core_if->dma_enable) {
45089 + disable_hc_int(hc_regs, datatglerr);
45090 + disable_hc_int(hc_regs, nak);
45091 + }
45092 + qtd->error_count = 0;
45093 +
45094 + if (hc->qh->ping_state) {
45095 + hc->qh->ping_state = 0;
45096 + /*
45097 + * Halt the channel so the transfer can be re-started
45098 + * from the appropriate point. This only happens in
45099 + * Slave mode. In DMA mode, the ping_state is cleared
45100 + * when the transfer is started because the core
45101 + * automatically executes the PING, then the transfer.
45102 + */
45103 + halt_channel(hcd, hc, qtd, DWC_OTG_HC_XFER_ACK);
45104 + }
45105 + }
45106 +
45107 + /*
45108 + * If the ACK occurred when _not_ in the PING state, let the channel
45109 + * continue transferring data after clearing the error count.
45110 + */
45111 +
45112 + disable_hc_int(hc_regs, ack);
45113 +
45114 + return 1;
45115 +}
45116 +
45117 +/**
45118 + * Handles a host channel NYET interrupt. This interrupt should only occur on
45119 + * Bulk and Control OUT endpoints and for complete split transactions. If a
45120 + * NYET occurs at the same time as a Transfer Complete interrupt, it is
45121 + * handled in the xfercomp interrupt handler, not here. This handler may be
45122 + * called in either DMA mode or Slave mode.
45123 + */
45124 +static int32_t handle_hc_nyet_intr(dwc_otg_hcd_t * hcd,
45125 + dwc_hc_t * hc,
45126 + dwc_otg_hc_regs_t * hc_regs,
45127 + dwc_otg_qtd_t * qtd)
45128 +{
45129 + DWC_DEBUGPL(DBG_HCDI, "--Host Channel %d Interrupt: "
45130 + "NYET Received--\n", hc->hc_num);
45131 +
45132 + /*
45133 +	 * NYET on CSPLIT:
45134 +	 * re-do the CSPLIT immediately for non-periodic transfers
45135 + */
45136 + if (hc->do_split && hc->complete_split) {
45137 + if (hc->ep_is_in && (hc->ep_type == DWC_OTG_EP_TYPE_ISOC)
45138 + && hcd->core_if->dma_enable) {
45139 + qtd->complete_split = 0;
45140 + qtd->isoc_split_offset = 0;
45141 + if (++qtd->isoc_frame_index == qtd->urb->packet_count) {
45142 + hcd->fops->complete(hcd, qtd->urb->priv, qtd->urb, 0);
45143 + release_channel(hcd, hc, qtd, DWC_OTG_HC_XFER_URB_COMPLETE);
45144 + }
45145 + else
45146 + release_channel(hcd, hc, qtd, DWC_OTG_HC_XFER_NO_HALT_STATUS);
45147 + goto handle_nyet_done;
45148 + }
45149 +
45150 + if (hc->ep_type == DWC_OTG_EP_TYPE_INTR ||
45151 + hc->ep_type == DWC_OTG_EP_TYPE_ISOC) {
45152 + int frnum = dwc_otg_hcd_get_frame_number(hcd);
45153 +
45154 + // With the FIQ running we only ever see the failed NYET
45155 + if (dwc_full_frame_num(frnum) !=
45156 + dwc_full_frame_num(hc->qh->sched_frame) ||
45157 + fiq_fsm_enable) {
45158 + /*
45159 + * No longer in the same full speed frame.
45160 + * Treat this as a transaction error.
45161 + */
45162 +#if 0
45163 + /** @todo Fix system performance so this can
45164 + * be treated as an error. Right now complete
45165 + * splits cannot be scheduled precisely enough
45166 + * due to other system activity, so this error
45167 + * occurs regularly in Slave mode.
45168 + */
45169 + qtd->error_count++;
45170 +#endif
45171 + qtd->complete_split = 0;
45172 + halt_channel(hcd, hc, qtd,
45173 + DWC_OTG_HC_XFER_XACT_ERR);
45174 + /** @todo add support for isoc release */
45175 + goto handle_nyet_done;
45176 + }
45177 + }
45178 +
45179 + halt_channel(hcd, hc, qtd, DWC_OTG_HC_XFER_NYET);
45180 + goto handle_nyet_done;
45181 + }
45182 +
45183 + hc->qh->ping_state = 1;
45184 + qtd->error_count = 0;
45185 +
45186 + update_urb_state_xfer_intr(hc, hc_regs, qtd->urb, qtd,
45187 + DWC_OTG_HC_XFER_NYET);
45188 + dwc_otg_hcd_save_data_toggle(hc, hc_regs, qtd);
45189 +
45190 + /*
45191 + * Halt the channel and re-start the transfer so the PING
45192 + * protocol will start.
45193 + */
45194 + halt_channel(hcd, hc, qtd, DWC_OTG_HC_XFER_NYET);
45195 +
45196 +handle_nyet_done:
45197 + disable_hc_int(hc_regs, nyet);
45198 + return 1;
45199 +}
45200 +
45201 +/**
45202 + * Handles a host channel babble interrupt. This handler may be called in
45203 + * either DMA mode or Slave mode.
45204 + */
45205 +static int32_t handle_hc_babble_intr(dwc_otg_hcd_t * hcd,
45206 + dwc_hc_t * hc,
45207 + dwc_otg_hc_regs_t * hc_regs,
45208 + dwc_otg_qtd_t * qtd)
45209 +{
45210 + DWC_DEBUGPL(DBG_HCDI, "--Host Channel %d Interrupt: "
45211 + "Babble Error--\n", hc->hc_num);
45212 +
45213 + if (hcd->core_if->dma_desc_enable) {
45214 + dwc_otg_hcd_complete_xfer_ddma(hcd, hc, hc_regs,
45215 + DWC_OTG_HC_XFER_BABBLE_ERR);
45216 + goto handle_babble_done;
45217 + }
45218 +
45219 + if (hc->ep_type != DWC_OTG_EP_TYPE_ISOC) {
45220 + hcd->fops->complete(hcd, qtd->urb->priv,
45221 + qtd->urb, -DWC_E_OVERFLOW);
45222 + halt_channel(hcd, hc, qtd, DWC_OTG_HC_XFER_BABBLE_ERR);
45223 + } else {
45224 + dwc_otg_halt_status_e halt_status;
45225 + halt_status = update_isoc_urb_state(hcd, hc, hc_regs, qtd,
45226 + DWC_OTG_HC_XFER_BABBLE_ERR);
45227 + halt_channel(hcd, hc, qtd, halt_status);
45228 + }
45229 +
45230 +handle_babble_done:
45231 + disable_hc_int(hc_regs, bblerr);
45232 + return 1;
45233 +}
45234 +
45235 +/**
45236 + * Handles a host channel AHB error interrupt. This handler is only called in
45237 + * DMA mode.
45238 + */
45239 +static int32_t handle_hc_ahberr_intr(dwc_otg_hcd_t * hcd,
45240 + dwc_hc_t * hc,
45241 + dwc_otg_hc_regs_t * hc_regs,
45242 + dwc_otg_qtd_t * qtd)
45243 +{
45244 + hcchar_data_t hcchar;
45245 + hcsplt_data_t hcsplt;
45246 + hctsiz_data_t hctsiz;
45247 + uint32_t hcdma;
45248 + char *pipetype, *speed;
45249 +
45250 + dwc_otg_hcd_urb_t *urb = qtd->urb;
45251 +
45252 + DWC_DEBUGPL(DBG_HCDI, "--Host Channel %d Interrupt: "
45253 + "AHB Error--\n", hc->hc_num);
45254 +
45255 + hcchar.d32 = DWC_READ_REG32(&hc_regs->hcchar);
45256 + hcsplt.d32 = DWC_READ_REG32(&hc_regs->hcsplt);
45257 + hctsiz.d32 = DWC_READ_REG32(&hc_regs->hctsiz);
45258 + hcdma = DWC_READ_REG32(&hc_regs->hcdma);
45259 +
45260 + DWC_ERROR("AHB ERROR, Channel %d\n", hc->hc_num);
45261 + DWC_ERROR(" hcchar 0x%08x, hcsplt 0x%08x\n", hcchar.d32, hcsplt.d32);
45262 + DWC_ERROR(" hctsiz 0x%08x, hcdma 0x%08x\n", hctsiz.d32, hcdma);
45263 + DWC_DEBUGPL(DBG_HCD, "DWC OTG HCD URB Enqueue\n");
45264 + DWC_ERROR(" Device address: %d\n",
45265 + dwc_otg_hcd_get_dev_addr(&urb->pipe_info));
45266 + DWC_ERROR(" Endpoint: %d, %s\n",
45267 + dwc_otg_hcd_get_ep_num(&urb->pipe_info),
45268 + (dwc_otg_hcd_is_pipe_in(&urb->pipe_info) ? "IN" : "OUT"));
45269 +
45270 + switch (dwc_otg_hcd_get_pipe_type(&urb->pipe_info)) {
45271 + case UE_CONTROL:
45272 + pipetype = "CONTROL";
45273 + break;
45274 + case UE_BULK:
45275 + pipetype = "BULK";
45276 + break;
45277 + case UE_INTERRUPT:
45278 + pipetype = "INTERRUPT";
45279 + break;
45280 + case UE_ISOCHRONOUS:
45281 + pipetype = "ISOCHRONOUS";
45282 + break;
45283 + default:
45284 + pipetype = "UNKNOWN";
45285 + break;
45286 + }
45287 +
45288 + DWC_ERROR(" Endpoint type: %s\n", pipetype);
45289 +
45290 + switch (hc->speed) {
45291 + case DWC_OTG_EP_SPEED_HIGH:
45292 + speed = "HIGH";
45293 + break;
45294 + case DWC_OTG_EP_SPEED_FULL:
45295 + speed = "FULL";
45296 + break;
45297 + case DWC_OTG_EP_SPEED_LOW:
45298 + speed = "LOW";
45299 + break;
45300 + default:
45301 + speed = "UNKNOWN";
45302 + break;
45303 + };
45304 +
45305 + DWC_ERROR(" Speed: %s\n", speed);
45306 +
45307 + DWC_ERROR(" Max packet size: %d\n",
45308 + dwc_otg_hcd_get_mps(&urb->pipe_info));
45309 + DWC_ERROR(" Data buffer length: %d\n", urb->length);
45310 + DWC_ERROR(" Transfer buffer: %p, Transfer DMA: %p\n",
45311 + urb->buf, (void *)urb->dma);
45312 + DWC_ERROR(" Setup buffer: %p, Setup DMA: %p\n",
45313 + urb->setup_packet, (void *)urb->setup_dma);
45314 + DWC_ERROR(" Interval: %d\n", urb->interval);
45315 +
45316 +	/* Core halts the channel in Descriptor DMA mode */
45317 + if (hcd->core_if->dma_desc_enable) {
45318 + dwc_otg_hcd_complete_xfer_ddma(hcd, hc, hc_regs,
45319 + DWC_OTG_HC_XFER_AHB_ERR);
45320 + goto handle_ahberr_done;
45321 + }
45322 +
45323 + hcd->fops->complete(hcd, urb->priv, urb, -DWC_E_IO);
45324 +
45325 + /*
45326 + * Force a channel halt. Don't call halt_channel because that won't
45327 + * write to the HCCHARn register in DMA mode to force the halt.
45328 + */
45329 + dwc_otg_hc_halt(hcd->core_if, hc, DWC_OTG_HC_XFER_AHB_ERR);
45330 +handle_ahberr_done:
45331 + disable_hc_int(hc_regs, ahberr);
45332 + return 1;
45333 +}
45334 +
45335 +/**
45336 + * Handles a host channel transaction error interrupt. This handler may be
45337 + * called in either DMA mode or Slave mode.
45338 + */
45339 +static int32_t handle_hc_xacterr_intr(dwc_otg_hcd_t * hcd,
45340 + dwc_hc_t * hc,
45341 + dwc_otg_hc_regs_t * hc_regs,
45342 + dwc_otg_qtd_t * qtd)
45343 +{
45344 + DWC_DEBUGPL(DBG_HCDI, "--Host Channel %d Interrupt: "
45345 + "Transaction Error--\n", hc->hc_num);
45346 +
45347 + if (hcd->core_if->dma_desc_enable) {
45348 + dwc_otg_hcd_complete_xfer_ddma(hcd, hc, hc_regs,
45349 + DWC_OTG_HC_XFER_XACT_ERR);
45350 + goto handle_xacterr_done;
45351 + }
45352 +
45353 + switch (dwc_otg_hcd_get_pipe_type(&qtd->urb->pipe_info)) {
45354 + case UE_CONTROL:
45355 + case UE_BULK:
45356 + qtd->error_count++;
45357 + if (!hc->qh->ping_state) {
45358 +
45359 + update_urb_state_xfer_intr(hc, hc_regs,
45360 + qtd->urb, qtd,
45361 + DWC_OTG_HC_XFER_XACT_ERR);
45362 + dwc_otg_hcd_save_data_toggle(hc, hc_regs, qtd);
45363 + if (!hc->ep_is_in && hc->speed == DWC_OTG_EP_SPEED_HIGH) {
45364 + hc->qh->ping_state = 1;
45365 + }
45366 + }
45367 +
45368 + /*
45369 + * Halt the channel so the transfer can be re-started from
45370 + * the appropriate point or the PING protocol will start.
45371 + */
45372 + halt_channel(hcd, hc, qtd, DWC_OTG_HC_XFER_XACT_ERR);
45373 + break;
45374 + case UE_INTERRUPT:
45375 + qtd->error_count++;
45376 + if (hc->do_split && hc->complete_split) {
45377 + qtd->complete_split = 0;
45378 + }
45379 + halt_channel(hcd, hc, qtd, DWC_OTG_HC_XFER_XACT_ERR);
45380 + break;
45381 + case UE_ISOCHRONOUS:
45382 + {
45383 + dwc_otg_halt_status_e halt_status;
45384 + halt_status =
45385 + update_isoc_urb_state(hcd, hc, hc_regs, qtd,
45386 + DWC_OTG_HC_XFER_XACT_ERR);
45387 +
45388 + halt_channel(hcd, hc, qtd, halt_status);
45389 + }
45390 + break;
45391 + }
45392 +handle_xacterr_done:
45393 + disable_hc_int(hc_regs, xacterr);
45394 +
45395 + return 1;
45396 +}
45397 +
45398 +/**
45399 + * Handles a host channel frame overrun interrupt. This handler may be called
45400 + * in either DMA mode or Slave mode.
45401 + */
45402 +static int32_t handle_hc_frmovrun_intr(dwc_otg_hcd_t * hcd,
45403 + dwc_hc_t * hc,
45404 + dwc_otg_hc_regs_t * hc_regs,
45405 + dwc_otg_qtd_t * qtd)
45406 +{
45407 + DWC_DEBUGPL(DBG_HCDI, "--Host Channel %d Interrupt: "
45408 + "Frame Overrun--\n", hc->hc_num);
45409 +
45410 + switch (dwc_otg_hcd_get_pipe_type(&qtd->urb->pipe_info)) {
45411 + case UE_CONTROL:
45412 + case UE_BULK:
45413 + break;
45414 + case UE_INTERRUPT:
45415 + halt_channel(hcd, hc, qtd, DWC_OTG_HC_XFER_FRAME_OVERRUN);
45416 + break;
45417 + case UE_ISOCHRONOUS:
45418 + {
45419 + dwc_otg_halt_status_e halt_status;
45420 + halt_status =
45421 + update_isoc_urb_state(hcd, hc, hc_regs, qtd,
45422 + DWC_OTG_HC_XFER_FRAME_OVERRUN);
45423 +
45424 + halt_channel(hcd, hc, qtd, halt_status);
45425 + }
45426 + break;
45427 + }
45428 +
45429 + disable_hc_int(hc_regs, frmovrun);
45430 +
45431 + return 1;
45432 +}
45433 +
45434 +/**
45435 + * Handles a host channel data toggle error interrupt. This handler may be
45436 + * called in either DMA mode or Slave mode.
45437 + */
45438 +static int32_t handle_hc_datatglerr_intr(dwc_otg_hcd_t * hcd,
45439 + dwc_hc_t * hc,
45440 + dwc_otg_hc_regs_t * hc_regs,
45441 + dwc_otg_qtd_t * qtd)
45442 +{
45443 + DWC_DEBUGPL(DBG_HCDI, "--Host Channel %d Interrupt: "
45444 + "Data Toggle Error on %s transfer--\n",
45445 + hc->hc_num, (hc->ep_is_in ? "IN" : "OUT"));
45446 +
45447 + /* Data toggle errors on split transactions cause the HC to halt;
45448 + * restart the transfer. */
45449 + if(hc->qh->do_split)
45450 + {
45451 + qtd->error_count++;
45452 + dwc_otg_hcd_save_data_toggle(hc, hc_regs, qtd);
45453 + update_urb_state_xfer_intr(hc, hc_regs,
45454 + qtd->urb, qtd, DWC_OTG_HC_XFER_XACT_ERR);
45455 + halt_channel(hcd, hc, qtd, DWC_OTG_HC_XFER_XACT_ERR);
45456 + } else if (hc->ep_is_in) {
45457 + /* The data toggle error interrupt on a non-split DMA transaction was
45458 + * unmasked solely so the error count can be reset. Disable the other
45459 + * interrupts that were unmasked for the same reason.
45460 + */
45461 + if(hcd->core_if->dma_enable) {
45462 + disable_hc_int(hc_regs, ack);
45463 + disable_hc_int(hc_regs, nak);
45464 + }
45465 + qtd->error_count = 0;
45466 + }
45467 +
45468 + disable_hc_int(hc_regs, datatglerr);
45469 +
45470 + return 1;
45471 +}
45472 +
45473 +#ifdef DEBUG
45474 +/**
45475 + * This function is for debug only. It checks that a valid halt status is set
45476 + * and that HCCHARn.chdis is clear. If there's a problem, corrective action is
45477 + * taken and a warning is issued.
45478 + * @return 1 if halt status is ok, 0 otherwise.
45479 + */
45480 +static inline int halt_status_ok(dwc_otg_hcd_t * hcd,
45481 + dwc_hc_t * hc,
45482 + dwc_otg_hc_regs_t * hc_regs,
45483 + dwc_otg_qtd_t * qtd)
45484 +{
45485 + hcchar_data_t hcchar;
45486 + hctsiz_data_t hctsiz;
45487 + hcint_data_t hcint;
45488 + hcintmsk_data_t hcintmsk;
45489 + hcsplt_data_t hcsplt;
45490 +
45491 + if (hc->halt_status == DWC_OTG_HC_XFER_NO_HALT_STATUS) {
45492 + /*
45493 + * This code is here only as a check. This condition should
45494 + * never happen. Ignore the halt if it does occur.
45495 + */
45496 + hcchar.d32 = DWC_READ_REG32(&hc_regs->hcchar);
45497 + hctsiz.d32 = DWC_READ_REG32(&hc_regs->hctsiz);
45498 + hcint.d32 = DWC_READ_REG32(&hc_regs->hcint);
45499 + hcintmsk.d32 = DWC_READ_REG32(&hc_regs->hcintmsk);
45500 + hcsplt.d32 = DWC_READ_REG32(&hc_regs->hcsplt);
45501 + DWC_WARN
45502 + ("%s: hc->halt_status == DWC_OTG_HC_XFER_NO_HALT_STATUS, "
45503 + "channel %d, hcchar 0x%08x, hctsiz 0x%08x, "
45504 + "hcint 0x%08x, hcintmsk 0x%08x, "
45505 + "hcsplt 0x%08x, qtd->complete_split %d\n", __func__,
45506 + hc->hc_num, hcchar.d32, hctsiz.d32, hcint.d32,
45507 + hcintmsk.d32, hcsplt.d32, qtd->complete_split);
45508 +
45509 + DWC_WARN("%s: no halt status, channel %d, ignoring interrupt\n",
45510 + __func__, hc->hc_num);
45511 + DWC_WARN("\n");
45512 + clear_hc_int(hc_regs, chhltd);
45513 + return 0;
45514 + }
45515 +
45516 + /*
45517 + * This code is here only as a check. hcchar.chdis should
45518 + * never be set when the halt interrupt occurs. Halt the
45519 + * channel again if it does occur.
45520 + */
45521 + hcchar.d32 = DWC_READ_REG32(&hc_regs->hcchar);
45522 + if (hcchar.b.chdis) {
45523 + DWC_WARN("%s: hcchar.chdis set unexpectedly, "
45524 + "hcchar 0x%08x, trying to halt again\n",
45525 + __func__, hcchar.d32);
45526 + clear_hc_int(hc_regs, chhltd);
45527 + hc->halt_pending = 0;
45528 + halt_channel(hcd, hc, qtd, hc->halt_status);
45529 + return 0;
45530 + }
45531 +
45532 + return 1;
45533 +}
45534 +#endif
45535 +
45536 +/**
45537 + * Handles a host Channel Halted interrupt in DMA mode. This handler
45538 + * determines the reason the channel halted and proceeds accordingly.
45539 + */
45540 +static void handle_hc_chhltd_intr_dma(dwc_otg_hcd_t * hcd,
45541 + dwc_hc_t * hc,
45542 + dwc_otg_hc_regs_t * hc_regs,
45543 + dwc_otg_qtd_t * qtd)
45544 +{
45545 + int out_nak_enh = 0;
45546 + hcint_data_t hcint;
45547 + hcintmsk_data_t hcintmsk;
45548 + /* For cores with the OUT NAK enhancement, the flow for high-
45549 + * speed CONTROL/BULK OUT is handled a little differently.
45550 + */
45551 + if (hcd->core_if->snpsid >= OTG_CORE_REV_2_71a) {
45552 + if (hc->speed == DWC_OTG_EP_SPEED_HIGH && !hc->ep_is_in &&
45553 + (hc->ep_type == DWC_OTG_EP_TYPE_CONTROL ||
45554 + hc->ep_type == DWC_OTG_EP_TYPE_BULK)) {
45555 + out_nak_enh = 1;
45556 + }
45557 + }
45558 +
45559 + if (hc->halt_status == DWC_OTG_HC_XFER_URB_DEQUEUE ||
45560 + (hc->halt_status == DWC_OTG_HC_XFER_AHB_ERR
45561 + && !hcd->core_if->dma_desc_enable)) {
45562 + /*
45563 + * Just release the channel. A dequeue can happen on a
45564 + * transfer timeout. In the case of an AHB Error, the channel
45565 + * was forced to halt because there's no way to gracefully
45566 + * recover.
45567 + */
45568 + if (hcd->core_if->dma_desc_enable)
45569 + dwc_otg_hcd_complete_xfer_ddma(hcd, hc, hc_regs,
45570 + hc->halt_status);
45571 + else
45572 + release_channel(hcd, hc, qtd, hc->halt_status);
45573 + return;
45574 + }
45575 +
45576 + /* Read the HCINTn register to determine the cause for the halt. */
45577 +
45578 + hcint.d32 = DWC_READ_REG32(&hc_regs->hcint);
45579 + hcintmsk.d32 = DWC_READ_REG32(&hc_regs->hcintmsk);
45580 +
45581 + if (hcint.b.xfercomp) {
45582 + /** @todo This is here because of a possible hardware bug. Spec
45583 + * says that on SPLIT-ISOC OUT transfers in DMA mode a HALT
45584 + * interrupt w/ACK bit set should occur, but I only see the
45585 + * XFERCOMP bit, even with it masked out. This is a workaround
45586 + * for that behavior. Should fix this when hardware is fixed.
45587 + */
45588 + if (hc->ep_type == DWC_OTG_EP_TYPE_ISOC && !hc->ep_is_in) {
45589 + handle_hc_ack_intr(hcd, hc, hc_regs, qtd);
45590 + }
45591 + handle_hc_xfercomp_intr(hcd, hc, hc_regs, qtd);
45592 + } else if (hcint.b.stall) {
45593 + handle_hc_stall_intr(hcd, hc, hc_regs, qtd);
45594 + } else if (hcint.b.xacterr && !hcd->core_if->dma_desc_enable) {
45595 + if (out_nak_enh) {
45596 + if (hcint.b.nyet || hcint.b.nak || hcint.b.ack) {
45597 + DWC_DEBUGPL(DBG_HCD, "XactErr with NYET/NAK/ACK\n");
45598 + qtd->error_count = 0;
45599 + } else {
45600 + DWC_DEBUGPL(DBG_HCD, "XactErr without NYET/NAK/ACK\n");
45601 + }
45602 + }
45603 +
45604 + /*
45605 + * Must handle xacterr before nak or ack. Could get a xacterr
45606 + * at the same time as either of these on a BULK/CONTROL OUT
45607 + * that started with a PING. The xacterr takes precedence.
45608 + */
45609 + handle_hc_xacterr_intr(hcd, hc, hc_regs, qtd);
45610 + } else if (hcint.b.xcs_xact && hcd->core_if->dma_desc_enable) {
45611 + handle_hc_xacterr_intr(hcd, hc, hc_regs, qtd);
45612 + } else if (hcint.b.ahberr && hcd->core_if->dma_desc_enable) {
45613 + handle_hc_ahberr_intr(hcd, hc, hc_regs, qtd);
45614 + } else if (hcint.b.bblerr) {
45615 + handle_hc_babble_intr(hcd, hc, hc_regs, qtd);
45616 + } else if (hcint.b.frmovrun) {
45617 + handle_hc_frmovrun_intr(hcd, hc, hc_regs, qtd);
45618 + } else if (hcint.b.datatglerr) {
45619 + handle_hc_datatglerr_intr(hcd, hc, hc_regs, qtd);
45620 + } else if (!out_nak_enh) {
45621 + if (hcint.b.nyet) {
45622 + /*
45623 + * Must handle nyet before nak or ack. Could get a nyet at the
45624 + * same time as either of those on a BULK/CONTROL OUT that
45625 + * started with a PING. The nyet takes precedence.
45626 + */
45627 + handle_hc_nyet_intr(hcd, hc, hc_regs, qtd);
45628 + } else if (hcint.b.nak && !hcintmsk.b.nak) {
45629 + /*
45630 + * If nak is not masked, it's because a non-split IN transfer
45631 + * is in an error state. In that case, the nak is handled by
45632 + * the nak interrupt handler, not here. Handle nak here for
45633 + * BULK/CONTROL OUT transfers, which halt on a NAK to allow
45634 + * rewinding the buffer pointer.
45635 + */
45636 + handle_hc_nak_intr(hcd, hc, hc_regs, qtd);
45637 + } else if (hcint.b.ack && !hcintmsk.b.ack) {
45638 + /*
45639 + * If ack is not masked, it's because a non-split IN transfer
45640 + * is in an error state. In that case, the ack is handled by
45641 + * the ack interrupt handler, not here. Handle ack here for
45642 + * split transfers. Start splits halt on ACK.
45643 + */
45644 + handle_hc_ack_intr(hcd, hc, hc_regs, qtd);
45645 + } else {
45646 + if (hc->ep_type == DWC_OTG_EP_TYPE_INTR ||
45647 + hc->ep_type == DWC_OTG_EP_TYPE_ISOC) {
45648 + /*
45649 + * A periodic transfer halted with no other channel
45650 + * interrupts set. Assume it was halted by the core
45651 + * because it could not be completed in its scheduled
45652 + * (micro)frame.
45653 + */
45654 +#ifdef DEBUG
45655 + DWC_PRINTF
45656 + ("%s: Halt channel %d (assume incomplete periodic transfer)\n",
45657 + __func__, hc->hc_num);
45658 +#endif
45659 + halt_channel(hcd, hc, qtd,
45660 + DWC_OTG_HC_XFER_PERIODIC_INCOMPLETE);
45661 + } else {
45662 + DWC_ERROR
45663 + ("%s: Channel %d, DMA Mode -- ChHltd set, but reason "
45664 + "for halting is unknown, hcint 0x%08x, intsts 0x%08x\n",
45665 + __func__, hc->hc_num, hcint.d32,
45666 + DWC_READ_REG32(&hcd->
45667 + core_if->core_global_regs->
45668 + gintsts));
45669 + /* Fall through: use 3-strikes rule */
45670 + qtd->error_count++;
45671 + dwc_otg_hcd_save_data_toggle(hc, hc_regs, qtd);
45672 + update_urb_state_xfer_intr(hc, hc_regs,
45673 + qtd->urb, qtd, DWC_OTG_HC_XFER_XACT_ERR);
45674 + halt_channel(hcd, hc, qtd, DWC_OTG_HC_XFER_XACT_ERR);
45675 + }
45676 +
45677 + }
45678 + } else {
45679 + DWC_PRINTF("NYET/NAK/ACK/other in non-error case, 0x%08x\n",
45680 + hcint.d32);
45681 + /* Fall through: use 3-strikes rule */
45682 + qtd->error_count++;
45683 + dwc_otg_hcd_save_data_toggle(hc, hc_regs, qtd);
45684 + update_urb_state_xfer_intr(hc, hc_regs,
45685 + qtd->urb, qtd, DWC_OTG_HC_XFER_XACT_ERR);
45686 + halt_channel(hcd, hc, qtd, DWC_OTG_HC_XFER_XACT_ERR);
45687 + }
45688 +}
45689 +
45690 +/**
45691 + * Handles a host channel Channel Halted interrupt.
45692 + *
45693 + * In slave mode, this handler is called only when the driver specifically
45694 + * requests a halt. This occurs during handling other host channel interrupts
45695 + * (e.g. nak, xacterr, stall, nyet, etc.).
45696 + *
45697 + * In DMA mode, this is the interrupt that occurs when the core has finished
45698 + * processing a transfer on a channel. Other host channel interrupts (except
45699 + * ahberr) are disabled in DMA mode.
45700 + */
45701 +static int32_t handle_hc_chhltd_intr(dwc_otg_hcd_t * hcd,
45702 + dwc_hc_t * hc,
45703 + dwc_otg_hc_regs_t * hc_regs,
45704 + dwc_otg_qtd_t * qtd)
45705 +{
45706 + DWC_DEBUGPL(DBG_HCDI, "--Host Channel %d Interrupt: "
45707 + "Channel Halted--\n", hc->hc_num);
45708 +
45709 + if (hcd->core_if->dma_enable) {
45710 + handle_hc_chhltd_intr_dma(hcd, hc, hc_regs, qtd);
45711 + } else {
45712 +#ifdef DEBUG
45713 + if (!halt_status_ok(hcd, hc, hc_regs, qtd)) {
45714 + return 1;
45715 + }
45716 +#endif
45717 + release_channel(hcd, hc, qtd, hc->halt_status);
45718 + }
45719 +
45720 + return 1;
45721 +}
45722 +
45723 +
45724 +/**
45725 + * dwc_otg_fiq_unmangle_isoc() - Update the iso_frame_desc structure on
45726 + * FIQ transfer completion
45727 + * @hcd: Pointer to dwc_otg_hcd struct
45728 + * @num: Host channel number
45729 + *
45730 + * 1. Un-mangle the status as recorded in each iso_frame_desc status
45731 + * 2. Copy it from the dwc_otg_urb into the real URB
45732 + */
45733 +void dwc_otg_fiq_unmangle_isoc(dwc_otg_hcd_t *hcd, dwc_otg_qh_t *qh, dwc_otg_qtd_t *qtd, uint32_t num)
45734 +{
45735 + struct dwc_otg_hcd_urb *dwc_urb = qtd->urb;
45736 + int nr_frames = dwc_urb->packet_count;
45737 + int i;
45738 + hcint_data_t frame_hcint;
45739 +
45740 + for (i = 0; i < nr_frames; i++) {
45741 + frame_hcint.d32 = dwc_urb->iso_descs[i].status;
45742 + if (frame_hcint.b.xfercomp) {
45743 + dwc_urb->iso_descs[i].status = 0;
45744 + dwc_urb->actual_length += dwc_urb->iso_descs[i].actual_length;
45745 + } else if (frame_hcint.b.frmovrun) {
45746 + if (qh->ep_is_in)
45747 + dwc_urb->iso_descs[i].status = -DWC_E_NO_STREAM_RES;
45748 + else
45749 + dwc_urb->iso_descs[i].status = -DWC_E_COMMUNICATION;
45750 + dwc_urb->error_count++;
45751 + dwc_urb->iso_descs[i].actual_length = 0;
45752 + } else if (frame_hcint.b.xacterr) {
45753 + dwc_urb->iso_descs[i].status = -DWC_E_PROTOCOL;
45754 + dwc_urb->error_count++;
45755 + dwc_urb->iso_descs[i].actual_length = 0;
45756 + } else if (frame_hcint.b.bblerr) {
45757 + dwc_urb->iso_descs[i].status = -DWC_E_OVERFLOW;
45758 + dwc_urb->error_count++;
45759 + dwc_urb->iso_descs[i].actual_length = 0;
45760 + } else {
45761 + /* Something went wrong */
45762 + dwc_urb->iso_descs[i].status = -1;
45763 + dwc_urb->iso_descs[i].actual_length = 0;
45764 + dwc_urb->error_count++;
45765 + }
45766 + }
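+ /* Advance the QH's next scheduled (micro)frame to account for the iso descriptors just transacted. */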
45767 + qh->sched_frame = dwc_frame_num_inc(qh->sched_frame, qh->interval * (nr_frames - 1));
45768 +
45769 + //printk_ratelimited(KERN_INFO "%s: HS isochronous of %d/%d frames with %d errors complete\n",
45770 + // __FUNCTION__, i, dwc_urb->packet_count, dwc_urb->error_count);
45771 +}
45772 +
45773 +/**
45774 + * dwc_otg_fiq_unsetup_per_dma() - Remove data from bounce buffers for split transactions
45775 + * @hcd: Pointer to dwc_otg_hcd struct
45776 + * @num: Host channel number
45777 + *
45778 + * Copies data from the FIQ bounce buffers into the URB's transfer buffer. Does not modify URB state.
45779 + * Returns total length of data or -1 if the buffers were not used.
45780 + *
45781 + */
45782 +int dwc_otg_fiq_unsetup_per_dma(dwc_otg_hcd_t *hcd, dwc_otg_qh_t *qh, dwc_otg_qtd_t *qtd, uint32_t num)
45783 +{
45784 + dwc_hc_t *hc = qh->channel;
45785 + struct fiq_dma_blob *blob = hcd->fiq_dmab;
45786 + struct fiq_channel_state *st = &hcd->fiq_state->channel[num];
45787 + uint8_t *ptr = NULL;
45788 + int index = 0, len = 0;
45789 + int i = 0;
45790 + if (hc->ep_is_in) {
45791 + /* Copy data out of the DMA bounce buffers to the URB's buffer.
45792 + * The align_buf is ignored here because it is not used for FSM enqueues. */
45793 + ptr = qtd->urb->buf;
45794 + if (qh->ep_type == UE_ISOCHRONOUS) {
45795 + /* Isoc IN transactions - grab the offset of the iso_frame_desc into the URB transfer buffer */
45796 + index = qtd->isoc_frame_index;
45797 + ptr += qtd->urb->iso_descs[index].offset;
45798 + } else {
45799 + /* Need to increment by actual_length for interrupt IN */
45800 + ptr += qtd->urb->actual_length;
45801 + }
45802 +
45803 + for (i = 0; i < st->dma_info.index; i++) {
45804 + len += st->dma_info.slot_len[i];
45805 + dwc_memcpy(ptr, &blob->channel[num].index[i].buf[0], st->dma_info.slot_len[i]);
45806 + ptr += st->dma_info.slot_len[i];
45807 + }
45808 + return len;
45809 + } else {
45810 + /* OUT endpoints - nothing to do. */
45811 + return -1;
45812 + }
45813 +
45814 +}
45815 +/**
45816 + * dwc_otg_hcd_handle_hc_fsm() - handle an unmasked channel interrupt
45817 + * from a channel handled in the FIQ
45818 + * @hcd: Pointer to dwc_otg_hcd struct
45819 + * @num: Host channel number
45820 + *
45821 + * If a host channel interrupt was received by the IRQ and this was a channel
45822 + * used by the FIQ, the execution flow for transfer completion is substantially
45823 + * different from the normal (messy) path. This function and its friends handles
45824 + * channel cleanup and transaction completion from a FIQ transaction.
45825 + */
45826 +void dwc_otg_hcd_handle_hc_fsm(dwc_otg_hcd_t *hcd, uint32_t num)
45827 +{
45828 + struct fiq_channel_state *st = &hcd->fiq_state->channel[num];
45829 + dwc_hc_t *hc = hcd->hc_ptr_array[num];
45830 + dwc_otg_qtd_t *qtd;
45831 + dwc_otg_hc_regs_t *hc_regs = hcd->core_if->host_if->hc_regs[num];
45832 + hcint_data_t hcint = hcd->fiq_state->channel[num].hcint_copy;
45833 + hctsiz_data_t hctsiz = hcd->fiq_state->channel[num].hctsiz_copy;
45834 + int hostchannels = 0;
45835 + fiq_print(FIQDBG_INT, hcd->fiq_state, "OUT %01d %01d ", num , st->fsm);
45836 +
45837 + hostchannels = hcd->available_host_channels;
45838 + if (hc->halt_pending) {
45839 + /* Dequeue: The FIQ was allowed to complete the transfer but state has been cleared. */
45840 + if (hc->qh && st->fsm == FIQ_NP_SPLIT_DONE &&
45841 + hcint.b.xfercomp && hc->qh->ep_type == UE_BULK) {
45842 + if (hctsiz.b.pid == DWC_HCTSIZ_DATA0) {
45843 + hc->qh->data_toggle = DWC_OTG_HC_PID_DATA1;
45844 + } else {
45845 + hc->qh->data_toggle = DWC_OTG_HC_PID_DATA0;
45846 + }
45847 + }
45848 + release_channel(hcd, hc, NULL, hc->halt_status);
45849 + return;
45850 + }
45851 +
45852 + qtd = DWC_CIRCLEQ_FIRST(&hc->qh->qtd_list);
45853 + switch (st->fsm) {
45854 + case FIQ_TEST:
45855 + break;
45856 +
45857 + case FIQ_DEQUEUE_ISSUED:
45858 + /* Handled above, but keep for posterity */
45859 + release_channel(hcd, hc, NULL, hc->halt_status);
45860 + break;
45861 +
45862 + case FIQ_NP_SPLIT_DONE:
45863 + /* Nonperiodic transaction complete. */
45864 + if (!hc->ep_is_in) {
45865 + qtd->ssplit_out_xfer_count = hc->xfer_len;
45866 + }
45867 + if (hcint.b.xfercomp) {
45868 + handle_hc_xfercomp_intr(hcd, hc, hc_regs, qtd);
45869 + } else if (hcint.b.nak) {
45870 + handle_hc_nak_intr(hcd, hc, hc_regs, qtd);
45871 + } else {
45872 + DWC_WARN("Unexpected IRQ state on FSM transaction: "
45873 + "dev_addr=%d ep=%d fsm=%d, hcint=0x%08x\n",
45874 + hc->dev_addr, hc->ep_num, st->fsm, hcint.d32);
45875 + release_channel(hcd, hc, qtd, DWC_OTG_HC_XFER_NO_HALT_STATUS);
45876 + }
45877 + break;
45878 +
45879 + case FIQ_NP_SPLIT_HS_ABORTED:
45880 + /* A HS abort is three consecutive errors (3-strikes) on the HS bus at any point in the transaction.
45881 + * Normally a CLEAR_TT_BUFFER hub command would be required: we can't do that
45882 + * because there's no guarantee which order a non-periodic split happened in.
45883 + * We could end up clearing a perfectly good transaction out of the buffer.
45884 + */
45885 + if (hcint.b.xacterr) {
45886 + qtd->error_count += st->nr_errors;
45887 + handle_hc_xacterr_intr(hcd, hc, hc_regs, qtd);
45888 + } else if (hcint.b.ahberr) {
45889 + handle_hc_ahberr_intr(hcd, hc, hc_regs, qtd);
45890 + } else {
45891 + DWC_WARN("Unexpected IRQ state on FSM transaction: "
45892 + "dev_addr=%d ep=%d fsm=%d, hcint=0x%08x\n",
45893 + hc->dev_addr, hc->ep_num, st->fsm, hcint.d32);
45894 + release_channel(hcd, hc, qtd, DWC_OTG_HC_XFER_NO_HALT_STATUS);
45895 + }
45896 + break;
45897 +
45898 + case FIQ_NP_SPLIT_LS_ABORTED:
45899 + /* A few cases can cause this - either an unknown state on a SSPLIT or
45900 + * STALL/data toggle error response on a CSPLIT */
45901 + if (hcint.b.stall) {
45902 + handle_hc_stall_intr(hcd, hc, hc_regs, qtd);
45903 + } else if (hcint.b.datatglerr) {
45904 + handle_hc_datatglerr_intr(hcd, hc, hc_regs, qtd);
45905 + } else if (hcint.b.bblerr) {
45906 + handle_hc_babble_intr(hcd, hc, hc_regs, qtd);
45907 + } else if (hcint.b.ahberr) {
45908 + handle_hc_ahberr_intr(hcd, hc, hc_regs, qtd);
45909 + } else {
45910 + DWC_WARN("Unexpected IRQ state on FSM transaction: "
45911 + "dev_addr=%d ep=%d fsm=%d, hcint=0x%08x\n",
45912 + hc->dev_addr, hc->ep_num, st->fsm, hcint.d32);
45913 + release_channel(hcd, hc, qtd, DWC_OTG_HC_XFER_NO_HALT_STATUS);
45914 + }
45915 + break;
45916 +
45917 + case FIQ_PER_SPLIT_DONE:
45918 + /* Isoc IN or Interrupt IN/OUT */
45919 +
45920 + /* Flow control here is different from the normal execution by the driver.
45921 + * We need to completely ignore most of the driver's method of handling
45922 + * split transactions and do it ourselves.
45923 + */
45924 + if (hc->ep_type == UE_INTERRUPT) {
45925 + if (hcint.b.nak) {
45926 + handle_hc_nak_intr(hcd, hc, hc_regs, qtd);
45927 + } else if (hc->ep_is_in) {
45928 + int len;
45929 + len = dwc_otg_fiq_unsetup_per_dma(hcd, hc->qh, qtd, num);
45930 + //printk(KERN_NOTICE "FIQ Transaction: hc=%d len=%d urb_len = %d\n", num, len, qtd->urb->length);
45931 + qtd->urb->actual_length += len;
45932 + if (qtd->urb->actual_length >= qtd->urb->length) {
45933 + qtd->urb->status = 0;
45934 + hcd->fops->complete(hcd, qtd->urb->priv, qtd->urb, qtd->urb->status);
45935 + release_channel(hcd, hc, qtd, DWC_OTG_HC_XFER_URB_COMPLETE);
45936 + } else {
45937 + /* Interrupt transfer not complete yet - is it a short read? */
45938 + if (len < hc->max_packet) {
45939 + /* Interrupt transaction complete */
45940 + qtd->urb->status = 0;
45941 + hcd->fops->complete(hcd, qtd->urb->priv, qtd->urb, qtd->urb->status);
45942 + release_channel(hcd, hc, qtd, DWC_OTG_HC_XFER_URB_COMPLETE);
45943 + } else {
45944 + /* Further transactions required */
45945 + release_channel(hcd, hc, qtd, DWC_OTG_HC_XFER_COMPLETE);
45946 + }
45947 + }
45948 + } else {
45949 + /* Interrupt OUT complete. */
45950 + dwc_otg_hcd_save_data_toggle(hc, hc_regs, qtd);
45951 + qtd->urb->actual_length += hc->xfer_len;
45952 + if (qtd->urb->actual_length >= qtd->urb->length) {
45953 + qtd->urb->status = 0;
45954 + hcd->fops->complete(hcd, qtd->urb->priv, qtd->urb, qtd->urb->status);
45955 + release_channel(hcd, hc, qtd, DWC_OTG_HC_XFER_URB_COMPLETE);
45956 + } else {
45957 + release_channel(hcd, hc, qtd, DWC_OTG_HC_XFER_COMPLETE);
45958 + }
45959 + }
45960 + } else {
45961 + /* ISOC IN complete. */
45962 + struct dwc_otg_hcd_iso_packet_desc *frame_desc = &qtd->urb->iso_descs[qtd->isoc_frame_index];
45963 + int len = 0;
45964 + /* Record errors, update qtd. */
45965 + if (st->nr_errors) {
45966 + frame_desc->actual_length = 0;
45967 + frame_desc->status = -DWC_E_PROTOCOL;
45968 + } else {
45969 + frame_desc->status = 0;
45970 + /* Unswizzle dma */
45971 + len = dwc_otg_fiq_unsetup_per_dma(hcd, hc->qh, qtd, num);
45972 + frame_desc->actual_length = len;
45973 + }
45974 + qtd->isoc_frame_index++;
45975 + if (qtd->isoc_frame_index == qtd->urb->packet_count) {
45976 + hcd->fops->complete(hcd, qtd->urb->priv, qtd->urb, 0);
45977 + release_channel(hcd, hc, qtd, DWC_OTG_HC_XFER_URB_COMPLETE);
45978 + } else {
45979 + release_channel(hcd, hc, qtd, DWC_OTG_HC_XFER_COMPLETE);
45980 + }
45981 + }
45982 + break;
45983 +
45984 + case FIQ_PER_ISO_OUT_DONE: {
45985 + struct dwc_otg_hcd_iso_packet_desc *frame_desc = &qtd->urb->iso_descs[qtd->isoc_frame_index];
45986 + /* Record errors, update qtd. */
45987 + if (st->nr_errors) {
45988 + frame_desc->actual_length = 0;
45989 + frame_desc->status = -DWC_E_PROTOCOL;
45990 + } else {
45991 + frame_desc->status = 0;
45992 + frame_desc->actual_length = frame_desc->length;
45993 + }
45994 + qtd->isoc_frame_index++;
45995 + qtd->isoc_split_offset = 0;
45996 + if (qtd->isoc_frame_index == qtd->urb->packet_count) {
45997 + hcd->fops->complete(hcd, qtd->urb->priv, qtd->urb, 0);
45998 + release_channel(hcd, hc, qtd, DWC_OTG_HC_XFER_URB_COMPLETE);
45999 + } else {
46000 + release_channel(hcd, hc, qtd, DWC_OTG_HC_XFER_COMPLETE);
46001 + }
46002 + }
46003 + break;
46004 +
46005 + case FIQ_PER_SPLIT_NYET_ABORTED:
46006 + /* Doh. lost the data. */
46007 + printk_ratelimited(KERN_INFO "Transfer to device %d endpoint 0x%x frame %d failed "
46008 + "- FIQ reported NYET. Data may have been lost.\n",
46009 + hc->dev_addr, hc->ep_num, dwc_otg_hcd_get_frame_number(hcd) >> 3);
46010 + if (hc->ep_type == UE_ISOCHRONOUS) {
46011 + struct dwc_otg_hcd_iso_packet_desc *frame_desc = &qtd->urb->iso_descs[qtd->isoc_frame_index];
46012 + /* Record errors, update qtd. */
46013 + frame_desc->actual_length = 0;
46014 + frame_desc->status = -DWC_E_PROTOCOL;
46015 + qtd->isoc_frame_index++;
46016 + qtd->isoc_split_offset = 0;
46017 + if (qtd->isoc_frame_index == qtd->urb->packet_count) {
46018 + hcd->fops->complete(hcd, qtd->urb->priv, qtd->urb, 0);
46019 + release_channel(hcd, hc, qtd, DWC_OTG_HC_XFER_URB_COMPLETE);
46020 + } else {
46021 + release_channel(hcd, hc, qtd, DWC_OTG_HC_XFER_COMPLETE);
46022 + }
46023 + } else {
46024 + release_channel(hcd, hc, qtd, DWC_OTG_HC_XFER_NO_HALT_STATUS);
46025 + }
46026 + break;
46027 +
46028 + case FIQ_HS_ISOC_DONE:
46029 + /* The FIQ has performed a batch of isochronous transactions. The
46030 + * interrupt state of each transaction was recorded as its status so
46031 + * failed transactions can be reported.
46032 + */
46033 + dwc_otg_fiq_unmangle_isoc(hcd, hc->qh, qtd, num);
46034 + hcd->fops->complete(hcd, qtd->urb->priv, qtd->urb, 0);
46035 + release_channel(hcd, hc, qtd, DWC_OTG_HC_XFER_URB_COMPLETE);
46036 + break;
46037 +
46038 + case FIQ_PER_SPLIT_LS_ABORTED:
46039 + if (hcint.b.xacterr) {
46040 + /* Hub has responded with an ERR packet. Device
46041 + * has been unplugged or the port has been disabled.
46042 + * TODO: need to issue a reset to the hub port. */
46043 + qtd->error_count += 3;
46044 + handle_hc_xacterr_intr(hcd, hc, hc_regs, qtd);
46045 + } else if (hcint.b.stall) {
46046 + handle_hc_stall_intr(hcd, hc, hc_regs, qtd);
46047 + } else if (hcint.b.bblerr) {
46048 + handle_hc_babble_intr(hcd, hc, hc_regs, qtd);
46049 + } else {
46050 + printk_ratelimited(KERN_INFO "Transfer to device %d endpoint 0x%x failed "
46051 + "- FIQ reported FSM=%d. Data may have been lost.\n",
46052 + hc->dev_addr, hc->ep_num, st->fsm);
46053 + release_channel(hcd, hc, qtd, DWC_OTG_HC_XFER_NO_HALT_STATUS);
46054 + }
46055 + break;
46056 +
46057 + case FIQ_PER_SPLIT_HS_ABORTED:
46058 + /* Either the SSPLIT phase suffered transaction errors or something
46059 + * unexpected happened.
46060 + */
46061 + qtd->error_count += 3;
46062 + handle_hc_xacterr_intr(hcd, hc, hc_regs, qtd);
46063 + release_channel(hcd, hc, qtd, DWC_OTG_HC_XFER_NO_HALT_STATUS);
46064 + break;
46065 +
46066 + case FIQ_PER_SPLIT_TIMEOUT:
46067 + /* Couldn't complete in the nominated frame */
46068 + printk(KERN_INFO "Transfer to device %d endpoint 0x%x frame %d failed "
46069 + "- FIQ timed out. Data may have been lost.\n",
46070 + hc->dev_addr, hc->ep_num, dwc_otg_hcd_get_frame_number(hcd) >> 3);
46071 + if (hc->ep_type == UE_ISOCHRONOUS) {
46072 + struct dwc_otg_hcd_iso_packet_desc *frame_desc = &qtd->urb->iso_descs[qtd->isoc_frame_index];
46073 + /* Record errors, update qtd. */
46074 + frame_desc->actual_length = 0;
46075 + if (hc->ep_is_in) {
46076 + frame_desc->status = -DWC_E_NO_STREAM_RES;
46077 + } else {
46078 + frame_desc->status = -DWC_E_COMMUNICATION;
46079 + }
46080 + qtd->isoc_frame_index++;
46081 + if (qtd->isoc_frame_index == qtd->urb->packet_count) {
46082 + hcd->fops->complete(hcd, qtd->urb->priv, qtd->urb, 0);
46083 + release_channel(hcd, hc, qtd, DWC_OTG_HC_XFER_URB_COMPLETE);
46084 + } else {
46085 + release_channel(hcd, hc, qtd, DWC_OTG_HC_XFER_COMPLETE);
46086 + }
46087 + } else {
46088 + release_channel(hcd, hc, qtd, DWC_OTG_HC_XFER_NO_HALT_STATUS);
46089 + }
46090 + break;
46091 +
46092 + default:
46093 + DWC_WARN("Unexpected state received on hc=%d fsm=%d on transfer to device %d ep 0x%x\n",
46094 + hc->hc_num, st->fsm, hc->dev_addr, hc->ep_num);
46095 + qtd->error_count++;
46096 + release_channel(hcd, hc, qtd, DWC_OTG_HC_XFER_NO_HALT_STATUS);
46097 + }
46098 + return;
46099 +}
46100 +
46101 +/** Handles interrupt for a specific Host Channel */
46102 +int32_t dwc_otg_hcd_handle_hc_n_intr(dwc_otg_hcd_t * dwc_otg_hcd, uint32_t num)
46103 +{
46104 + int retval = 0;
46105 + hcint_data_t hcint;
46106 + hcintmsk_data_t hcintmsk;
46107 + dwc_hc_t *hc;
46108 + dwc_otg_hc_regs_t *hc_regs;
46109 + dwc_otg_qtd_t *qtd;
46110 +
46111 + DWC_DEBUGPL(DBG_HCDV, "--Host Channel Interrupt--, Channel %d\n", num);
46112 +
46113 + hc = dwc_otg_hcd->hc_ptr_array[num];
46114 + hc_regs = dwc_otg_hcd->core_if->host_if->hc_regs[num];
46115 + if(hc->halt_status == DWC_OTG_HC_XFER_URB_DEQUEUE) {
46116 + /* A dequeue was issued for this transfer. Our QTD has gone away
46117 + * but in the case of a FIQ transfer, the transfer would have run
46118 + * to completion.
46119 + */
46120 + if (fiq_fsm_enable && dwc_otg_hcd->fiq_state->channel[num].fsm != FIQ_PASSTHROUGH) {
46121 + dwc_otg_hcd_handle_hc_fsm(dwc_otg_hcd, num);
46122 + } else {
46123 + release_channel(dwc_otg_hcd, hc, NULL, hc->halt_status);
46124 + }
46125 + return 1;
46126 + }
46127 + qtd = DWC_CIRCLEQ_FIRST(&hc->qh->qtd_list);
46128 +
46129 + /*
46130 + * FSM mode: Check to see if this is a HC interrupt from a channel handled by the FIQ.
46131 + * Execution path is fundamentally different for the channels after a FIQ has completed
46132 + * a split transaction.
46133 + */
46134 + if (fiq_fsm_enable) {
46135 + switch (dwc_otg_hcd->fiq_state->channel[num].fsm) {
46136 + case FIQ_PASSTHROUGH:
46137 + break;
46138 + case FIQ_PASSTHROUGH_ERRORSTATE:
46139 + /* Hook into the error count */
46140 + fiq_print(FIQDBG_ERR, dwc_otg_hcd->fiq_state, "HCDERR%02d", num);
46141 + if (!dwc_otg_hcd->fiq_state->channel[num].nr_errors) {
46142 + qtd->error_count = 0;
46143 + fiq_print(FIQDBG_ERR, dwc_otg_hcd->fiq_state, "RESET ");
46144 + }
46145 + break;
46146 + default:
46147 + dwc_otg_hcd_handle_hc_fsm(dwc_otg_hcd, num);
46148 + return 1;
46149 + }
46150 + }
46151 +
46152 + hcint.d32 = DWC_READ_REG32(&hc_regs->hcint);
46153 + hcintmsk.d32 = DWC_READ_REG32(&hc_regs->hcintmsk);
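+ /* Only service interrupt causes that are currently unmasked. In Slave mode,
+  * the Channel Halted bit is processed below only when it is the sole pending
+  * cause (hcint == 0x2). */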
46154 + hcint.d32 = hcint.d32 & hcintmsk.d32;
46155 + if (!dwc_otg_hcd->core_if->dma_enable) {
46156 + if (hcint.b.chhltd && hcint.d32 != 0x2) {
46157 + hcint.b.chhltd = 0;
46158 + }
46159 + }
46160 +
46161 + if (hcint.b.xfercomp) {
46162 + retval |=
46163 + handle_hc_xfercomp_intr(dwc_otg_hcd, hc, hc_regs, qtd);
46164 + /*
46165 + * If NYET occurred at same time as Xfer Complete, the NYET is
46166 + * handled by the Xfer Complete interrupt handler. Don't want
46167 + * to call the NYET interrupt handler in this case.
46168 + */
46169 + hcint.b.nyet = 0;
46170 + }
46171 + if (hcint.b.chhltd) {
46172 + retval |= handle_hc_chhltd_intr(dwc_otg_hcd, hc, hc_regs, qtd);
46173 + }
46174 + if (hcint.b.ahberr) {
46175 + retval |= handle_hc_ahberr_intr(dwc_otg_hcd, hc, hc_regs, qtd);
46176 + }
46177 + if (hcint.b.stall) {
46178 + retval |= handle_hc_stall_intr(dwc_otg_hcd, hc, hc_regs, qtd);
46179 + }
46180 + if (hcint.b.nak) {
46181 + retval |= handle_hc_nak_intr(dwc_otg_hcd, hc, hc_regs, qtd);
46182 + }
46183 + if (hcint.b.ack) {
46184 + if(!hcint.b.chhltd)
46185 + retval |= handle_hc_ack_intr(dwc_otg_hcd, hc, hc_regs, qtd);
46186 + }
46187 + if (hcint.b.nyet) {
46188 + retval |= handle_hc_nyet_intr(dwc_otg_hcd, hc, hc_regs, qtd);
46189 + }
46190 + if (hcint.b.xacterr) {
46191 + retval |= handle_hc_xacterr_intr(dwc_otg_hcd, hc, hc_regs, qtd);
46192 + }
46193 + if (hcint.b.bblerr) {
46194 + retval |= handle_hc_babble_intr(dwc_otg_hcd, hc, hc_regs, qtd);
46195 + }
46196 + if (hcint.b.frmovrun) {
46197 + retval |=
46198 + handle_hc_frmovrun_intr(dwc_otg_hcd, hc, hc_regs, qtd);
46199 + }
46200 + if (hcint.b.datatglerr) {
46201 + retval |=
46202 + handle_hc_datatglerr_intr(dwc_otg_hcd, hc, hc_regs, qtd);
46203 + }
46204 +
46205 + return retval;
46206 +}
46207 +#endif /* DWC_DEVICE_ONLY */
46208 --- /dev/null
46209 +++ b/drivers/usb/host/dwc_otg/dwc_otg_hcd_linux.c
46210 @@ -0,0 +1,1083 @@
46211 +
46212 +/* ==========================================================================
46213 + * $File: //dwh/usb_iip/dev/software/otg/linux/drivers/dwc_otg_hcd_linux.c $
46214 + * $Revision: #20 $
46215 + * $Date: 2011/10/26 $
46216 + * $Change: 1872981 $
46217 + *
46218 + * Synopsys HS OTG Linux Software Driver and documentation (hereinafter,
46219 + * "Software") is an Unsupported proprietary work of Synopsys, Inc. unless
46220 + * otherwise expressly agreed to in writing between Synopsys and you.
46221 + *
46222 + * The Software IS NOT an item of Licensed Software or Licensed Product under
46223 + * any End User Software License Agreement or Agreement for Licensed Product
46224 + * with Synopsys or any supplement thereto. You are permitted to use and
46225 + * redistribute this Software in source and binary forms, with or without
46226 + * modification, provided that redistributions of source code must retain this
46227 + * notice. You may not view, use, disclose, copy or distribute this file or
46228 + * any information contained herein except pursuant to this license grant from
46229 + * Synopsys. If you do not agree with this notice, including the disclaimer
46230 + * below, then you are not authorized to use the Software.
46231 + *
46232 + * THIS SOFTWARE IS BEING DISTRIBUTED BY SYNOPSYS SOLELY ON AN "AS IS" BASIS
46233 + * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
46234 + * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
46235 + * ARE HEREBY DISCLAIMED. IN NO EVENT SHALL SYNOPSYS BE LIABLE FOR ANY DIRECT,
46236 + * INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
46237 + * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
46238 + * SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
46239 + * CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
46240 + * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
46241 + * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH
46242 + * DAMAGE.
46243 + * ========================================================================== */
46244 +#ifndef DWC_DEVICE_ONLY
46245 +
46246 +/**
46247 + * @file
46248 + *
46249 + * This file contains the implementation of the HCD. In Linux, the HCD
46250 + * implements the hc_driver API.
46251 + */
46252 +#include <linux/kernel.h>
46253 +#include <linux/module.h>
46254 +#include <linux/moduleparam.h>
46255 +#include <linux/init.h>
46256 +#include <linux/device.h>
46257 +#include <linux/errno.h>
46258 +#include <linux/list.h>
46259 +#include <linux/interrupt.h>
46260 +#include <linux/string.h>
46261 +#include <linux/dma-mapping.h>
46262 +#include <linux/version.h>
46263 +#include <asm/io.h>
46264 +#ifdef CONFIG_ARM
46265 +#include <asm/fiq.h>
46266 +#endif
46267 +#include <linux/usb.h>
46268 +#if LINUX_VERSION_CODE < KERNEL_VERSION(2,6,35)
46269 +#include <../drivers/usb/core/hcd.h>
46270 +#else
46271 +#include <linux/usb/hcd.h>
46272 +#endif
46273 +#include <asm/bug.h>
46274 +
46275 +#if (LINUX_VERSION_CODE >= KERNEL_VERSION(2,6,30))
46276 +#define USB_URB_EP_LINKING 1
46277 +#else
46278 +#define USB_URB_EP_LINKING 0
46279 +#endif
46280 +
46281 +#include "dwc_otg_hcd_if.h"
46282 +#include "dwc_otg_dbg.h"
46283 +#include "dwc_otg_driver.h"
46284 +#include "dwc_otg_hcd.h"
46285 +
46286 +#ifndef __virt_to_bus
46287 +#define __virt_to_bus __virt_to_phys
46288 +#define __bus_to_virt __phys_to_virt
46289 +#define __pfn_to_bus(x) __pfn_to_phys(x)
46290 +#define __bus_to_pfn(x) __phys_to_pfn(x)
46291 +#endif
46292 +
46293 +extern unsigned char _dwc_otg_fiq_stub, _dwc_otg_fiq_stub_end;
46294 +
46295 +/**
46296 + * Gets the endpoint number from a _bEndpointAddress argument. The endpoint is
46297 + * qualified with its direction (allowing 32 possible endpoints per device).
46298 + */
46299 +#define dwc_ep_addr_to_endpoint(_bEndpointAddress_) ((_bEndpointAddress_ & USB_ENDPOINT_NUMBER_MASK) | \
46300 + ((_bEndpointAddress_ & USB_DIR_IN) != 0) << 4)
46301 +
46302 +static const char dwc_otg_hcd_name[] = "dwc_otg_hcd";
46303 +
46304 +extern bool fiq_enable;
46305 +
46306 +/** @name Linux HC Driver API Functions */
46307 +/** @{ */
46308 +/* manage i/o requests, device state */
46309 +static int dwc_otg_urb_enqueue(struct usb_hcd *hcd,
46310 +#if LINUX_VERSION_CODE < KERNEL_VERSION(2,6,28)
46311 + struct usb_host_endpoint *ep,
46312 +#endif
46313 + struct urb *urb, gfp_t mem_flags);
46314 +
46315 +#if LINUX_VERSION_CODE < KERNEL_VERSION(2,6,30)
46316 +#if LINUX_VERSION_CODE < KERNEL_VERSION(2,6,28)
46317 +static int dwc_otg_urb_dequeue(struct usb_hcd *hcd, struct urb *urb);
46318 +#endif
46319 +#else /* kernels at or post 2.6.30 */
46320 +static int dwc_otg_urb_dequeue(struct usb_hcd *hcd,
46321 + struct urb *urb, int status);
46322 +#endif /* LINUX_VERSION_CODE < KERNEL_VERSION(2,6,30) */
46323 +
46324 +static void endpoint_disable(struct usb_hcd *hcd, struct usb_host_endpoint *ep);
46325 +#if LINUX_VERSION_CODE >= KERNEL_VERSION(2,6,30)
46326 +static void endpoint_reset(struct usb_hcd *hcd, struct usb_host_endpoint *ep);
46327 +#endif
46328 +static irqreturn_t dwc_otg_hcd_irq(struct usb_hcd *hcd);
46329 +extern int hcd_start(struct usb_hcd *hcd);
46330 +extern void hcd_stop(struct usb_hcd *hcd);
46331 +static int get_frame_number(struct usb_hcd *hcd);
46332 +extern int hub_status_data(struct usb_hcd *hcd, char *buf);
46333 +extern int hub_control(struct usb_hcd *hcd,
46334 + u16 typeReq,
46335 + u16 wValue, u16 wIndex, char *buf, u16 wLength);
46336 +
46337 +struct wrapper_priv_data {
46338 + dwc_otg_hcd_t *dwc_otg_hcd;
46339 +};
46340 +
46341 +/** @} */
46342 +
46343 +static struct hc_driver dwc_otg_hc_driver = {
46344 +
46345 + .description = dwc_otg_hcd_name,
46346 + .product_desc = "DWC OTG Controller",
46347 + .hcd_priv_size = sizeof(struct wrapper_priv_data),
46348 +
46349 + .irq = dwc_otg_hcd_irq,
46350 +
46351 + .flags = HCD_MEMORY | HCD_USB2,
46352 +
46353 + //.reset =
46354 + .start = hcd_start,
46355 + //.suspend =
46356 + //.resume =
46357 + .stop = hcd_stop,
46358 +
46359 + .urb_enqueue = dwc_otg_urb_enqueue,
46360 + .urb_dequeue = dwc_otg_urb_dequeue,
46361 + .endpoint_disable = endpoint_disable,
46362 +#if LINUX_VERSION_CODE >= KERNEL_VERSION(2,6,30)
46363 + .endpoint_reset = endpoint_reset,
46364 +#endif
46365 + .get_frame_number = get_frame_number,
46366 +
46367 + .hub_status_data = hub_status_data,
46368 + .hub_control = hub_control,
46369 + //.bus_suspend =
46370 + //.bus_resume =
46371 +};
46372 +
46373 +/** Gets the dwc_otg_hcd from a struct usb_hcd */
46374 +static inline dwc_otg_hcd_t *hcd_to_dwc_otg_hcd(struct usb_hcd *hcd)
46375 +{
46376 + struct wrapper_priv_data *p;
46377 + p = (struct wrapper_priv_data *)(hcd->hcd_priv);
46378 + return p->dwc_otg_hcd;
46379 +}
46380 +
46381 +/** Gets the struct usb_hcd that contains a dwc_otg_hcd_t. */
46382 +static inline struct usb_hcd *dwc_otg_hcd_to_hcd(dwc_otg_hcd_t * dwc_otg_hcd)
46383 +{
46384 + return dwc_otg_hcd_get_priv_data(dwc_otg_hcd);
46385 +}
46386 +
46387 +/** Gets the usb_host_endpoint associated with an URB. */
46388 +inline struct usb_host_endpoint *dwc_urb_to_endpoint(struct urb *urb)
46389 +{
46390 + struct usb_device *dev = urb->dev;
46391 + int ep_num = usb_pipeendpoint(urb->pipe);
46392 +
46393 + if (usb_pipein(urb->pipe))
46394 + return dev->ep_in[ep_num];
46395 + else
46396 + return dev->ep_out[ep_num];
46397 +}
46398 +
46399 +static int _disconnect(dwc_otg_hcd_t * hcd)
46400 +{
46401 + struct usb_hcd *usb_hcd = dwc_otg_hcd_to_hcd(hcd);
46402 +
46403 + usb_hcd->self.is_b_host = 0;
46404 + return 0;
46405 +}
46406 +
46407 +static int _start(dwc_otg_hcd_t * hcd)
46408 +{
46409 + struct usb_hcd *usb_hcd = dwc_otg_hcd_to_hcd(hcd);
46410 +
46411 + usb_hcd->self.is_b_host = dwc_otg_hcd_is_b_host(hcd);
46412 + hcd_start(usb_hcd);
46413 +
46414 + return 0;
46415 +}
46416 +
46417 +static int _hub_info(dwc_otg_hcd_t * hcd, void *urb_handle, uint32_t * hub_addr,
46418 + uint32_t * port_addr)
46419 +{
46420 + struct urb *urb = (struct urb *)urb_handle;
46421 + struct usb_bus *bus;
46422 +#if 1 //GRAYG - temporary
46423 + if (NULL == urb_handle)
46424 + DWC_ERROR("**** %s - NULL URB handle\n", __func__);//GRAYG
46425 + if (NULL == urb->dev)
46426 + DWC_ERROR("**** %s - URB has no device\n", __func__);//GRAYG
46427 + if (NULL == port_addr)
46428 + DWC_ERROR("**** %s - NULL port_address\n", __func__);//GRAYG
46429 +#endif
46430 + if (urb->dev->tt) {
46431 + if (NULL == urb->dev->tt->hub) {
46432 + DWC_ERROR("**** %s - (URB's transaction translator has no hub - reporting hub address 0)\n",
46433 + __func__); //GRAYG
46434 + //*hub_addr = (u8)usb_pipedevice(urb->pipe); //GRAYG
46435 + *hub_addr = 0; //GRAYG
46436 + // we probably shouldn't have a transaction translator if
46437 + // there's no associated hub?
46438 + } else {
46439 + bus = hcd_to_bus(dwc_otg_hcd_to_hcd(hcd));
46440 + if (urb->dev->tt->hub == bus->root_hub)
46441 + *hub_addr = 0;
46442 + else
46443 + *hub_addr = urb->dev->tt->hub->devnum;
46444 + }
46445 + *port_addr = urb->dev->ttport;
46446 + } else {
46447 + *hub_addr = 0;
46448 + *port_addr = urb->dev->ttport;
46449 + }
46450 + return 0;
46451 +}
46452 +
46453 +static int _speed(dwc_otg_hcd_t * hcd, void *urb_handle)
46454 +{
46455 + struct urb *urb = (struct urb *)urb_handle;
46456 + return urb->dev->speed;
46457 +}
46458 +
46459 +static int _get_b_hnp_enable(dwc_otg_hcd_t * hcd)
46460 +{
46461 + struct usb_hcd *usb_hcd = dwc_otg_hcd_to_hcd(hcd);
46462 + return usb_hcd->self.b_hnp_enable;
46463 +}
46464 +
46465 +static void allocate_bus_bandwidth(struct usb_hcd *hcd, uint32_t bw,
46466 + struct urb *urb)
46467 +{
46468 + hcd_to_bus(hcd)->bandwidth_allocated += bw / urb->interval;
46469 + if (usb_pipetype(urb->pipe) == PIPE_ISOCHRONOUS) {
46470 + hcd_to_bus(hcd)->bandwidth_isoc_reqs++;
46471 + } else {
46472 + hcd_to_bus(hcd)->bandwidth_int_reqs++;
46473 + }
46474 +}
46475 +
46476 +static void free_bus_bandwidth(struct usb_hcd *hcd, uint32_t bw,
46477 + struct urb *urb)
46478 +{
46479 + hcd_to_bus(hcd)->bandwidth_allocated -= bw / urb->interval;
46480 + if (usb_pipetype(urb->pipe) == PIPE_ISOCHRONOUS) {
46481 + hcd_to_bus(hcd)->bandwidth_isoc_reqs--;
46482 + } else {
46483 + hcd_to_bus(hcd)->bandwidth_int_reqs--;
46484 + }
46485 +}
46486 +
46487 +/**
46488 + * Sets the final status of an URB and returns it to the device driver. Any
46489 + * required cleanup of the URB is performed. The HCD lock should be held on
46490 + * entry.
46491 + */
46492 +static int _complete(dwc_otg_hcd_t * hcd, void *urb_handle,
46493 + dwc_otg_hcd_urb_t * dwc_otg_urb, int32_t status)
46494 +{
46495 + struct urb *urb = (struct urb *)urb_handle;
46496 + urb_tq_entry_t *new_entry;
46497 + int rc = 0;
46498 + if (CHK_DEBUG_LEVEL(DBG_HCDV | DBG_HCD_URB)) {
46499 + DWC_PRINTF("%s: urb %p, device %d, ep %d %s, status=%d\n",
46500 + __func__, urb, usb_pipedevice(urb->pipe),
46501 + usb_pipeendpoint(urb->pipe),
46502 + usb_pipein(urb->pipe) ? "IN" : "OUT", status);
46503 + if (usb_pipetype(urb->pipe) == PIPE_ISOCHRONOUS) {
46504 + int i;
46505 + for (i = 0; i < urb->number_of_packets; i++) {
46506 + DWC_PRINTF(" ISO Desc %d status: %d\n",
46507 + i, urb->iso_frame_desc[i].status);
46508 + }
46509 + }
46510 + }
46511 + new_entry = DWC_ALLOC_ATOMIC(sizeof(urb_tq_entry_t));
46512 + urb->actual_length = dwc_otg_hcd_urb_get_actual_length(dwc_otg_urb);
46513 + /* Convert status value. */
46514 + switch (status) {
46515 + case -DWC_E_PROTOCOL:
46516 + status = -EPROTO;
46517 + break;
46518 + case -DWC_E_IN_PROGRESS:
46519 + status = -EINPROGRESS;
46520 + break;
46521 + case -DWC_E_PIPE:
46522 + status = -EPIPE;
46523 + break;
46524 + case -DWC_E_IO:
46525 + status = -EIO;
46526 + break;
46527 + case -DWC_E_TIMEOUT:
46528 + status = -ETIMEDOUT;
46529 + break;
46530 + case -DWC_E_OVERFLOW:
46531 + status = -EOVERFLOW;
46532 + break;
46533 + case -DWC_E_SHUTDOWN:
46534 + status = -ESHUTDOWN;
46535 + break;
46536 + default:
46537 + if (status) {
46538 + DWC_PRINTF("Unknown URB status %d\n", status);
46539 +
46540 + }
46541 + }
46542 +
46543 + if (usb_pipetype(urb->pipe) == PIPE_ISOCHRONOUS) {
46544 + int i;
46545 +
46546 + urb->error_count = dwc_otg_hcd_urb_get_error_count(dwc_otg_urb);
46547 + urb->actual_length = 0;
46548 + for (i = 0; i < urb->number_of_packets; ++i) {
46549 + urb->iso_frame_desc[i].actual_length =
46550 + dwc_otg_hcd_urb_get_iso_desc_actual_length
46551 + (dwc_otg_urb, i);
46552 + urb->actual_length += urb->iso_frame_desc[i].actual_length;
46553 + urb->iso_frame_desc[i].status =
46554 + dwc_otg_hcd_urb_get_iso_desc_status(dwc_otg_urb, i);
46555 + }
46556 + }
46557 +
46558 + urb->status = status;
46559 + urb->hcpriv = NULL;
46560 + if (!status) {
46561 + if ((urb->transfer_flags & URB_SHORT_NOT_OK) &&
46562 + (urb->actual_length < urb->transfer_buffer_length)) {
46563 + urb->status = -EREMOTEIO;
46564 + }
46565 + }
46566 +
46567 + if ((usb_pipetype(urb->pipe) == PIPE_ISOCHRONOUS) ||
46568 + (usb_pipetype(urb->pipe) == PIPE_INTERRUPT)) {
46569 + struct usb_host_endpoint *ep = dwc_urb_to_endpoint(urb);
46570 + if (ep) {
46571 + free_bus_bandwidth(dwc_otg_hcd_to_hcd(hcd),
46572 + dwc_otg_hcd_get_ep_bandwidth(hcd,
46573 + ep->hcpriv),
46574 + urb);
46575 + }
46576 + }
46577 + DWC_FREE(dwc_otg_urb);
46578 + if (!new_entry) {
46579 + DWC_ERROR("dwc_otg_hcd: complete: cannot allocate URB TQ entry\n");
46580 + urb->status = -EPROTO;
46581 + /* don't schedule the tasklet -
46582 + * give the URB back directly here with the error. */
46583 +#if USB_URB_EP_LINKING
46584 + usb_hcd_unlink_urb_from_ep(dwc_otg_hcd_to_hcd(hcd), urb);
46585 +#endif
46586 +#if LINUX_VERSION_CODE < KERNEL_VERSION(2,6,28)
46587 + usb_hcd_giveback_urb(dwc_otg_hcd_to_hcd(hcd), urb);
46588 +#else
46589 + usb_hcd_giveback_urb(dwc_otg_hcd_to_hcd(hcd), urb, urb->status);
46590 +#endif
46591 + } else {
46592 + new_entry->urb = urb;
46593 +#if USB_URB_EP_LINKING
46594 + rc = usb_hcd_check_unlink_urb(dwc_otg_hcd_to_hcd(hcd), urb, urb->status);
46595 + if(0 == rc) {
46596 + usb_hcd_unlink_urb_from_ep(dwc_otg_hcd_to_hcd(hcd), urb);
46597 + }
46598 +#endif
46599 + if(0 == rc) {
46600 + DWC_TAILQ_INSERT_TAIL(&hcd->completed_urb_list, new_entry,
46601 + urb_tq_entries);
46602 + DWC_TASK_HI_SCHEDULE(hcd->completion_tasklet);
46603 + }
46604 + }
46605 + return 0;
46606 +}
46607 +
46608 +static struct dwc_otg_hcd_function_ops hcd_fops = {
46609 + .start = _start,
46610 + .disconnect = _disconnect,
46611 + .hub_info = _hub_info,
46612 + .speed = _speed,
46613 + .complete = _complete,
46614 + .get_b_hnp_enable = _get_b_hnp_enable,
46615 +};
46616 +
46617 +#ifdef CONFIG_ARM64
46618 +
46619 +static int simfiq_irq = -1;
46620 +
46621 +void local_fiq_enable(void)
46622 +{
46623 + if (simfiq_irq >= 0)
46624 + enable_irq(simfiq_irq);
46625 +}
46626 +
46627 +void local_fiq_disable(void)
46628 +{
46629 + if (simfiq_irq >= 0)
46630 + disable_irq(simfiq_irq);
46631 +}
46632 +
46633 +irqreturn_t fiq_irq_handler(int irq, void *dev_id)
46634 +{
46635 + dwc_otg_hcd_t *dwc_otg_hcd = (dwc_otg_hcd_t *)dev_id;
46636 +
46637 + if (fiq_fsm_enable)
46638 + dwc_otg_fiq_fsm(dwc_otg_hcd->fiq_state, dwc_otg_hcd->core_if->core_params->host_channels);
46639 + else
46640 + dwc_otg_fiq_nop(dwc_otg_hcd->fiq_state);
46641 +
46642 + return IRQ_HANDLED;
46643 +}
46644 +
46645 +#else
46646 +static struct fiq_handler fh = {
46647 + .name = "usb_fiq",
46648 +};
46649 +
46650 +#endif
46651 +
46652 +static void hcd_init_fiq(void *cookie)
46653 +{
46654 + dwc_otg_device_t *otg_dev = cookie;
46655 + dwc_otg_hcd_t *dwc_otg_hcd = otg_dev->hcd;
46656 +#ifdef CONFIG_ARM64
46657 + int retval = 0;
46658 + int irq;
46659 +#else
46660 + struct pt_regs regs;
46661 + int irq;
46662 +
46663 + if (claim_fiq(&fh)) {
46664 + DWC_ERROR("Can't claim FIQ");
46665 + BUG();
46666 + }
46667 + DWC_WARN("FIQ on core %d", smp_processor_id());
46668 + DWC_WARN("FIQ ASM at %px length %d", &_dwc_otg_fiq_stub, (int)(&_dwc_otg_fiq_stub_end - &_dwc_otg_fiq_stub));
46669 + set_fiq_handler((void *) &_dwc_otg_fiq_stub, &_dwc_otg_fiq_stub_end - &_dwc_otg_fiq_stub);
46670 + memset(&regs,0,sizeof(regs));
46671 +
46672 + regs.ARM_r8 = (long) dwc_otg_hcd->fiq_state;
46673 + if (fiq_fsm_enable) {
46674 + regs.ARM_r9 = dwc_otg_hcd->core_if->core_params->host_channels;
46675 + //regs.ARM_r10 = dwc_otg_hcd->dma;
46676 + regs.ARM_fp = (long) dwc_otg_fiq_fsm;
46677 + } else {
46678 + regs.ARM_fp = (long) dwc_otg_fiq_nop;
46679 + }
46680 +
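+ /* Point the banked FIQ-mode stack pointer at the top word of the dedicated FIQ stack. */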
46681 + regs.ARM_sp = (long) dwc_otg_hcd->fiq_stack + (sizeof(struct fiq_stack) - 4);
46682 +
46683 +// __show_regs(&regs);
46684 + set_fiq_regs(&regs);
46685 +#endif
46686 +
46687 + dwc_otg_hcd->fiq_state->dwc_regs_base = otg_dev->os_dep.base;
46688 + // Point the FIQ state at the required MPHI peripheral registers
46689 + dwc_otg_hcd->fiq_state->mphi_regs.base = otg_dev->os_dep.mphi_base;
46690 + if (otg_dev->os_dep.use_swirq) {
46691 + dwc_otg_hcd->fiq_state->mphi_regs.swirq_set =
46692 + otg_dev->os_dep.mphi_base + 0x1f0;
46693 + dwc_otg_hcd->fiq_state->mphi_regs.swirq_clr =
46694 + otg_dev->os_dep.mphi_base + 0x1f4;
46695 + DWC_WARN("Fake MPHI regs_base at 0x%08x",
46696 + (int)dwc_otg_hcd->fiq_state->mphi_regs.base);
46697 + } else {
46698 + dwc_otg_hcd->fiq_state->mphi_regs.ctrl =
46699 + otg_dev->os_dep.mphi_base + 0x4c;
46700 + dwc_otg_hcd->fiq_state->mphi_regs.outdda
46701 + = otg_dev->os_dep.mphi_base + 0x28;
46702 + dwc_otg_hcd->fiq_state->mphi_regs.outddb
46703 + = otg_dev->os_dep.mphi_base + 0x2c;
46704 + dwc_otg_hcd->fiq_state->mphi_regs.intstat
46705 + = otg_dev->os_dep.mphi_base + 0x50;
46706 + DWC_WARN("MPHI regs_base at %px",
46707 + dwc_otg_hcd->fiq_state->mphi_regs.base);
46708 +
46709 + //Enable mphi peripheral
46710 + writel((1<<31),dwc_otg_hcd->fiq_state->mphi_regs.ctrl);
46711 +#ifdef DEBUG
46712 + if (readl(dwc_otg_hcd->fiq_state->mphi_regs.ctrl) & 0x80000000)
46713 + DWC_WARN("MPHI periph has been enabled");
46714 + else
46715 + DWC_WARN("MPHI periph has NOT been enabled");
46716 +#endif
46717 + }
46718 + // Enable FIQ interrupt from USB peripheral
46719 +#ifdef CONFIG_ARM64
46720 + irq = otg_dev->os_dep.fiq_num;
46721 +
46722 + if (irq < 0) {
46723 + DWC_ERROR("Can't get SIM-FIQ irq");
46724 + return;
46725 + }
46726 +
46727 + retval = request_irq(irq, fiq_irq_handler, 0, "dwc_otg_sim-fiq", dwc_otg_hcd);
46728 +
46729 + if (retval < 0) {
46730 + DWC_ERROR("Unable to request SIM-FIQ irq\n");
46731 + return;
46732 + }
46733 +
46734 + simfiq_irq = irq;
46735 +#else
46736 +#ifdef CONFIG_GENERIC_IRQ_MULTI_HANDLER
46737 + irq = otg_dev->os_dep.fiq_num;
46738 +#else
46739 + irq = INTERRUPT_VC_USB;
46740 +#endif
46741 + if (irq < 0) {
46742 + DWC_ERROR("Can't get FIQ irq");
46743 + return;
46744 + }
46745 + /*
46746 + * We could take an interrupt immediately after enabling the FIQ.
46747 + * Ensure coherency of hcd->fiq_state.
46748 + */
46749 + smp_mb();
46750 + enable_fiq(irq);
46751 + local_fiq_enable();
46752 +#endif
46753 +
46754 +}
46755 +
46756 +/**
46757 + * Initializes the HCD. This function allocates memory for and initializes the
46758 + * static parts of the usb_hcd and dwc_otg_hcd structures. It also registers the
46759 + * USB bus with the core and calls the hc_driver->start() function. It returns
46760 + * a negative error on failure.
46761 + */
46762 +int hcd_init(dwc_bus_dev_t *_dev)
46763 +{
46764 + struct usb_hcd *hcd = NULL;
46765 + dwc_otg_hcd_t *dwc_otg_hcd = NULL;
46766 + dwc_otg_device_t *otg_dev = DWC_OTG_BUSDRVDATA(_dev);
46767 + int retval = 0;
46768 + u64 dmamask;
46769 +
46770 + DWC_DEBUGPL(DBG_HCD, "DWC OTG HCD INIT otg_dev=%p\n", otg_dev);
46771 +
46772 + /* Set device flags indicating whether the HCD supports DMA. */
46773 + if (dwc_otg_is_dma_enable(otg_dev->core_if))
46774 + dmamask = DMA_BIT_MASK(32);
46775 + else
46776 + dmamask = 0;
46777 +
46778 +#if defined(LM_INTERFACE) || defined(PLATFORM_INTERFACE)
46779 + dma_set_mask(&_dev->dev, dmamask);
46780 + dma_set_coherent_mask(&_dev->dev, dmamask);
46781 +#elif defined(PCI_INTERFACE)
46782 + pci_set_dma_mask(_dev, dmamask);
46783 + pci_set_consistent_dma_mask(_dev, dmamask);
46784 +#endif
46785 +
46786 + /*
46787 + * Allocate memory for the base HCD plus the DWC OTG HCD.
46788 + * Initialize the base HCD.
46789 + */
46790 +#if LINUX_VERSION_CODE < KERNEL_VERSION(2,6,30)
46791 + hcd = usb_create_hcd(&dwc_otg_hc_driver, &_dev->dev, _dev->dev.bus_id);
46792 +#else
46793 + hcd = usb_create_hcd(&dwc_otg_hc_driver, &_dev->dev, dev_name(&_dev->dev));
46794 + hcd->has_tt = 1;
46795 +// hcd->uses_new_polling = 1;
46796 +// hcd->poll_rh = 0;
46797 +#endif
46798 + if (!hcd) {
46799 + retval = -ENOMEM;
46800 + goto error1;
46801 + }
46802 +
46803 + hcd->regs = otg_dev->os_dep.base;
46804 +
46805 +
46806 + /* Initialize the DWC OTG HCD. */
46807 + dwc_otg_hcd = dwc_otg_hcd_alloc_hcd();
46808 + if (!dwc_otg_hcd) {
46809 + goto error2;
46810 + }
46811 + ((struct wrapper_priv_data *)(hcd->hcd_priv))->dwc_otg_hcd =
46812 + dwc_otg_hcd;
46813 + otg_dev->hcd = dwc_otg_hcd;
46814 + otg_dev->hcd->otg_dev = otg_dev;
46815 +
46816 +#ifdef CONFIG_ARM64
46817 + if (dwc_otg_hcd_init(dwc_otg_hcd, otg_dev->core_if))
46818 + goto error2;
46819 +
46820 + if (fiq_enable)
46821 + hcd_init_fiq(otg_dev);
46822 +#else
46823 + if (dwc_otg_hcd_init(dwc_otg_hcd, otg_dev->core_if)) {
46824 + goto error2;
46825 + }
46826 +
46827 + if (fiq_enable) {
46828 + if (num_online_cpus() > 1) {
46829 + /*
46830 + * bcm2709: the FIQ can run on a different core from the IRQs.
46831 + * Ensure driver state is visible to other cores before setting up the FIQ.
46832 + */
46833 + smp_mb();
46834 + smp_call_function_single(1, hcd_init_fiq, otg_dev, 1);
46835 + } else {
46836 + smp_call_function_single(0, hcd_init_fiq, otg_dev, 1);
46837 + }
46838 + }
46839 +#endif
46840 +
46841 + hcd->self.otg_port = dwc_otg_hcd_otg_port(dwc_otg_hcd);
46842 +#if LINUX_VERSION_CODE >= KERNEL_VERSION(2,6,33) //don't support for LM(with 2.6.20.1 kernel)
46843 +#if LINUX_VERSION_CODE < KERNEL_VERSION(2,6,35) //version field absent later
46844 + hcd->self.otg_version = dwc_otg_get_otg_version(otg_dev->core_if);
46845 +#endif
46846 + /* Don't support SG list at this point */
46847 + hcd->self.sg_tablesize = 0;
46848 +#endif
46849 + /*
46850 + * Finish generic HCD initialization and start the HCD. This function
46851 + * allocates the DMA buffer pool, registers the USB bus, requests the
46852 + * IRQ line, and calls hcd_start method.
46853 + */
46854 + retval = usb_add_hcd(hcd, otg_dev->os_dep.irq_num, IRQF_SHARED);
46855 + if (retval < 0) {
46856 + goto error2;
46857 + }
46858 +
46859 + dwc_otg_hcd_set_priv_data(dwc_otg_hcd, hcd);
46860 + return 0;
46861 +
46862 +error2:
46863 + usb_put_hcd(hcd);
46864 +error1:
46865 + return retval;
46866 +}
46867 +
46868 +/**
46869 + * Removes the HCD.
46870 + * Frees memory and resources associated with the HCD and deregisters the bus.
46871 + */
46872 +void hcd_remove(dwc_bus_dev_t *_dev)
46873 +{
46874 + dwc_otg_device_t *otg_dev = DWC_OTG_BUSDRVDATA(_dev);
46875 + dwc_otg_hcd_t *dwc_otg_hcd;
46876 + struct usb_hcd *hcd;
46877 +
46878 + DWC_DEBUGPL(DBG_HCD, "DWC OTG HCD REMOVE otg_dev=%p\n", otg_dev);
46879 +
46880 + if (!otg_dev) {
46881 + DWC_DEBUGPL(DBG_ANY, "%s: otg_dev NULL!\n", __func__);
46882 + return;
46883 + }
46884 +
46885 + dwc_otg_hcd = otg_dev->hcd;
46886 +
46887 + if (!dwc_otg_hcd) {
46888 + DWC_DEBUGPL(DBG_ANY, "%s: otg_dev->hcd NULL!\n", __func__);
46889 + return;
46890 + }
46891 +
46892 + hcd = dwc_otg_hcd_to_hcd(dwc_otg_hcd);
46893 +
46894 + if (!hcd) {
46895 + DWC_DEBUGPL(DBG_ANY,
46896 + "%s: dwc_otg_hcd_to_hcd(dwc_otg_hcd) NULL!\n",
46897 + __func__);
46898 + return;
46899 + }
46900 + usb_remove_hcd(hcd);
46901 + dwc_otg_hcd_set_priv_data(dwc_otg_hcd, NULL);
46902 + dwc_otg_hcd_remove(dwc_otg_hcd);
46903 + usb_put_hcd(hcd);
46904 +}
46905 +
46906 +/* =========================================================================
46907 + * Linux HC Driver Functions
46908 + * ========================================================================= */
46909 +
46910 +/** Initializes the DWC_otg controller and its root hub and prepares it for host
46911 + * mode operation. Activates the root port. Returns 0 on success and a negative
46912 + * error code on failure. */
46913 +int hcd_start(struct usb_hcd *hcd)
46914 +{
46915 + dwc_otg_hcd_t *dwc_otg_hcd = hcd_to_dwc_otg_hcd(hcd);
46916 + struct usb_bus *bus;
46917 +
46918 + DWC_DEBUGPL(DBG_HCD, "DWC OTG HCD START\n");
46919 + bus = hcd_to_bus(hcd);
46920 +
46921 + hcd->state = HC_STATE_RUNNING;
46922 + if (dwc_otg_hcd_start(dwc_otg_hcd, &hcd_fops)) {
46923 + return 0;
46924 + }
46925 +
46926 + /* Initialize and connect root hub if one is not already attached */
46927 + if (bus->root_hub) {
46928 + DWC_DEBUGPL(DBG_HCD, "DWC OTG HCD Has Root Hub\n");
46929 + /* Inform the HUB driver to resume. */
46930 + usb_hcd_resume_root_hub(hcd);
46931 + }
46932 +
46933 + return 0;
46934 +}
46935 +
46936 +/**
46937 + * Halts the DWC_otg host mode operations in a clean manner. USB transfers are
46938 + * stopped.
46939 + */
46940 +void hcd_stop(struct usb_hcd *hcd)
46941 +{
46942 + dwc_otg_hcd_t *dwc_otg_hcd = hcd_to_dwc_otg_hcd(hcd);
46943 +
46944 + dwc_otg_hcd_stop(dwc_otg_hcd);
46945 +}
46946 +
46947 +/** Returns the current frame number. */
46948 +static int get_frame_number(struct usb_hcd *hcd)
46949 +{
46950 + hprt0_data_t hprt0;
46951 + dwc_otg_hcd_t *dwc_otg_hcd = hcd_to_dwc_otg_hcd(hcd);
46952 + hprt0.d32 = DWC_READ_REG32(dwc_otg_hcd->core_if->host_if->hprt0);
46953 + if (hprt0.b.prtspd == DWC_HPRT0_PRTSPD_HIGH_SPEED)
46954 + return dwc_otg_hcd_get_frame_number(dwc_otg_hcd) >> 3;
46955 + else
46956 + return dwc_otg_hcd_get_frame_number(dwc_otg_hcd);
46957 +}
46958 +
46959 +#ifdef DEBUG
46960 +static void dump_urb_info(struct urb *urb, char *fn_name)
46961 +{
46962 + DWC_PRINTF("%s, urb %p\n", fn_name, urb);
46963 + DWC_PRINTF(" Device address: %d\n", usb_pipedevice(urb->pipe));
46964 + DWC_PRINTF(" Endpoint: %d, %s\n", usb_pipeendpoint(urb->pipe),
46965 + (usb_pipein(urb->pipe) ? "IN" : "OUT"));
46966 + DWC_PRINTF(" Endpoint type: %s\n", ( {
46967 + char *pipetype;
46968 + switch (usb_pipetype(urb->pipe)) {
46969 +case PIPE_CONTROL:
46970 +pipetype = "CONTROL"; break; case PIPE_BULK:
46971 +pipetype = "BULK"; break; case PIPE_INTERRUPT:
46972 +pipetype = "INTERRUPT"; break; case PIPE_ISOCHRONOUS:
46973 +pipetype = "ISOCHRONOUS"; break; default:
46974 + pipetype = "UNKNOWN"; break;};
46975 + pipetype;}
46976 + )) ;
46977 + DWC_PRINTF(" Speed: %s\n", ( {
46978 + char *speed; switch (urb->dev->speed) {
46979 +case USB_SPEED_HIGH:
46980 +speed = "HIGH"; break; case USB_SPEED_FULL:
46981 +speed = "FULL"; break; case USB_SPEED_LOW:
46982 +speed = "LOW"; break; default:
46983 + speed = "UNKNOWN"; break;};
46984 + speed;}
46985 + )) ;
46986 + DWC_PRINTF(" Max packet size: %d\n",
46987 + usb_maxpacket(urb->dev, urb->pipe, usb_pipeout(urb->pipe)));
46988 + DWC_PRINTF(" Data buffer length: %d\n", urb->transfer_buffer_length);
46989 + DWC_PRINTF(" Transfer buffer: %p, Transfer DMA: %p\n",
46990 + urb->transfer_buffer, (void *)urb->transfer_dma);
46991 + DWC_PRINTF(" Setup buffer: %p, Setup DMA: %p\n",
46992 + urb->setup_packet, (void *)urb->setup_dma);
46993 + DWC_PRINTF(" Interval: %d\n", urb->interval);
46994 + if (usb_pipetype(urb->pipe) == PIPE_ISOCHRONOUS) {
46995 + int i;
46996 + for (i = 0; i < urb->number_of_packets; i++) {
46997 + DWC_PRINTF(" ISO Desc %d:\n", i);
46998 + DWC_PRINTF(" offset: %d, length %d\n",
46999 + urb->iso_frame_desc[i].offset,
47000 + urb->iso_frame_desc[i].length);
47001 + }
47002 + }
47003 +}
47004 +#endif
47005 +
47006 +/** Starts processing a USB transfer request specified by a USB Request Block
47007 + * (URB). mem_flags indicates the type of memory allocation to use while
47008 + * processing this URB. */
47009 +static int dwc_otg_urb_enqueue(struct usb_hcd *hcd,
47010 +#if LINUX_VERSION_CODE < KERNEL_VERSION(2,6,28)
47011 + struct usb_host_endpoint *ep,
47012 +#endif
47013 + struct urb *urb, gfp_t mem_flags)
47014 +{
47015 + int retval = 0;
47016 +#if LINUX_VERSION_CODE >= KERNEL_VERSION(2,6,28)
47017 + struct usb_host_endpoint *ep = urb->ep;
47018 +#endif
47019 + dwc_irqflags_t irqflags;
47020 + void **ref_ep_hcpriv = &ep->hcpriv;
47021 + dwc_otg_hcd_t *dwc_otg_hcd = hcd_to_dwc_otg_hcd(hcd);
47022 + dwc_otg_hcd_urb_t *dwc_otg_urb;
47023 + int i;
47024 + int alloc_bandwidth = 0;
47025 + uint8_t ep_type = 0;
47026 + uint32_t flags = 0;
47027 + void *buf;
47028 +
47029 +#ifdef DEBUG
47030 + if (CHK_DEBUG_LEVEL(DBG_HCDV | DBG_HCD_URB)) {
47031 + dump_urb_info(urb, "dwc_otg_urb_enqueue");
47032 + }
47033 +#endif
47034 +
47035 + if (!urb->transfer_buffer && urb->transfer_buffer_length)
47036 + return -EINVAL;
47037 +
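+	/*
+	 * Periodic transfers (isochronous and interrupt) have bus bandwidth
+	 * accounted against the root hub. dwc_otg_hcd_is_bandwidth_allocated()
+	 * appears to report whether this endpoint (via the QH stored in
+	 * ep->hcpriv) has already claimed its share, so allocate_bus_bandwidth()
+	 * is only called once per endpoint, after the URB is queued below.
+	 */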
47038 + if ((usb_pipetype(urb->pipe) == PIPE_ISOCHRONOUS)
47039 + || (usb_pipetype(urb->pipe) == PIPE_INTERRUPT)) {
47040 + if (!dwc_otg_hcd_is_bandwidth_allocated
47041 + (dwc_otg_hcd, ref_ep_hcpriv)) {
47042 + alloc_bandwidth = 1;
47043 + }
47044 + }
47045 +
47046 + switch (usb_pipetype(urb->pipe)) {
47047 + case PIPE_CONTROL:
47048 + ep_type = USB_ENDPOINT_XFER_CONTROL;
47049 + break;
47050 + case PIPE_ISOCHRONOUS:
47051 + ep_type = USB_ENDPOINT_XFER_ISOC;
47052 + break;
47053 + case PIPE_BULK:
47054 + ep_type = USB_ENDPOINT_XFER_BULK;
47055 + break;
47056 + case PIPE_INTERRUPT:
47057 + ep_type = USB_ENDPOINT_XFER_INT;
47058 + break;
47059 + default:
47060 + DWC_WARN("Wrong EP type - %d\n", usb_pipetype(urb->pipe));
47061 + }
47062 +
47063 + /* # of packets is often 0 - do we really need to call this then? */
47064 + dwc_otg_urb = dwc_otg_hcd_urb_alloc(dwc_otg_hcd,
47065 + urb->number_of_packets,
47066 + mem_flags == GFP_ATOMIC ? 1 : 0);
47067 +
47068 +	if (dwc_otg_urb == NULL)
47069 +		return -ENOMEM;
47073 +
47074 + dwc_otg_hcd_urb_set_pipeinfo(dwc_otg_urb, usb_pipedevice(urb->pipe),
47075 + usb_pipeendpoint(urb->pipe), ep_type,
47076 + usb_pipein(urb->pipe),
47077 + usb_maxpacket(urb->dev, urb->pipe,
47078 + !(usb_pipein(urb->pipe))));
47079 +
47080 + buf = urb->transfer_buffer;
47081 + if (hcd_uses_dma(hcd) && !buf && urb->transfer_buffer_length) {
47082 + /*
47083 +	 * Calculate the virtual address from the physical address, because
47084 +	 * some class drivers may not fill in transfer_buffer. In Buffer DMA
47085 +	 * mode the virtual address is used when handling non-DWORD-aligned
47086 +	 * buffers.
47087 + */
47088 + buf = (void *)__bus_to_virt((unsigned long)urb->transfer_dma);
47089 + dev_warn_once(&urb->dev->dev,
47090 + "USB transfer_buffer was NULL, will use __bus_to_virt(%pad)=%p\n",
47091 + &urb->transfer_dma, buf);
47092 + }
47093 +
47094 + if (!(urb->transfer_flags & URB_NO_INTERRUPT))
47095 + flags |= URB_GIVEBACK_ASAP;
47096 + if (urb->transfer_flags & URB_ZERO_PACKET)
47097 + flags |= URB_SEND_ZERO_PACKET;
47098 +
47099 + dwc_otg_hcd_urb_set_params(dwc_otg_urb, urb, buf,
47100 + urb->transfer_dma,
47101 + urb->transfer_buffer_length,
47102 + urb->setup_packet,
47103 + urb->setup_dma, flags, urb->interval);
47104 +
47105 + for (i = 0; i < urb->number_of_packets; ++i) {
47106 + dwc_otg_hcd_urb_set_iso_desc_params(dwc_otg_urb, i,
47107 +						    urb->iso_frame_desc[i].offset,
47108 +						    urb->iso_frame_desc[i].length);
47111 + }
47112 +
47113 + DWC_SPINLOCK_IRQSAVE(dwc_otg_hcd->lock, &irqflags);
47114 + urb->hcpriv = dwc_otg_urb;
47115 +#if USB_URB_EP_LINKING
47116 + retval = usb_hcd_link_urb_to_ep(hcd, urb);
47117 + if (0 == retval)
47118 +#endif
47119 + {
47120 + retval = dwc_otg_hcd_urb_enqueue(dwc_otg_hcd, dwc_otg_urb,
47121 + /*(dwc_otg_qh_t **)*/
47122 + ref_ep_hcpriv, 1);
47123 + if (0 == retval) {
47124 + if (alloc_bandwidth) {
47125 + allocate_bus_bandwidth(hcd,
47126 + dwc_otg_hcd_get_ep_bandwidth(
47127 + dwc_otg_hcd, *ref_ep_hcpriv),
47128 + urb);
47129 + }
47130 + } else {
47131 + DWC_DEBUGPL(DBG_HCD, "DWC OTG dwc_otg_hcd_urb_enqueue failed rc %d\n", retval);
47132 +#if USB_URB_EP_LINKING
47133 + usb_hcd_unlink_urb_from_ep(hcd, urb);
47134 +#endif
47135 + DWC_FREE(dwc_otg_urb);
47136 + urb->hcpriv = NULL;
47137 + if (retval == -DWC_E_NO_DEVICE)
47138 + retval = -ENODEV;
47139 + }
47140 + }
47141 +#if USB_URB_EP_LINKING
47142 + else
47143 + {
47144 + DWC_FREE(dwc_otg_urb);
47145 + urb->hcpriv = NULL;
47146 + }
47147 +#endif
47148 + DWC_SPINUNLOCK_IRQRESTORE(dwc_otg_hcd->lock, irqflags);
47149 + return retval;
47150 +}
47151 +
47152 +/** Aborts/cancels a USB transfer request. Always returns 0 to indicate
47153 + * success. */
47154 +#if LINUX_VERSION_CODE < KERNEL_VERSION(2,6,28)
47155 +static int dwc_otg_urb_dequeue(struct usb_hcd *hcd, struct urb *urb)
47156 +#else
47157 +static int dwc_otg_urb_dequeue(struct usb_hcd *hcd, struct urb *urb, int status)
47158 +#endif
47159 +{
47160 + dwc_irqflags_t flags;
47161 + dwc_otg_hcd_t *dwc_otg_hcd;
47162 + int rc;
47163 +
47164 + DWC_DEBUGPL(DBG_HCD, "DWC OTG HCD URB Dequeue\n");
47165 +
47166 + dwc_otg_hcd = hcd_to_dwc_otg_hcd(hcd);
47167 +
47168 +#ifdef DEBUG
47169 + if (CHK_DEBUG_LEVEL(DBG_HCDV | DBG_HCD_URB)) {
47170 + dump_urb_info(urb, "dwc_otg_urb_dequeue");
47171 + }
47172 +#endif
47173 +
47174 + DWC_SPINLOCK_IRQSAVE(dwc_otg_hcd->lock, &flags);
47175 + rc = usb_hcd_check_unlink_urb(hcd, urb, status);
47176 + if (0 == rc) {
47177 + if(urb->hcpriv != NULL) {
47178 + dwc_otg_hcd_urb_dequeue(dwc_otg_hcd,
47179 + (dwc_otg_hcd_urb_t *)urb->hcpriv);
47180 +
47181 + DWC_FREE(urb->hcpriv);
47182 + urb->hcpriv = NULL;
47183 + }
47184 + }
47185 +
47186 + if (0 == rc) {
47187 + /* Higher layer software sets URB status. */
47188 +#if USB_URB_EP_LINKING
47189 + usb_hcd_unlink_urb_from_ep(hcd, urb);
47190 +#endif
47191 + DWC_SPINUNLOCK_IRQRESTORE(dwc_otg_hcd->lock, flags);
47192 +
47193 +
47194 +#if LINUX_VERSION_CODE < KERNEL_VERSION(2,6,28)
47195 + usb_hcd_giveback_urb(hcd, urb);
47196 +#else
47197 + usb_hcd_giveback_urb(hcd, urb, status);
47198 +#endif
47199 + if (CHK_DEBUG_LEVEL(DBG_HCDV | DBG_HCD_URB)) {
47200 + DWC_PRINTF("Called usb_hcd_giveback_urb() \n");
47201 + DWC_PRINTF(" 1urb->status = %d\n", urb->status);
47202 + }
47203 + DWC_DEBUGPL(DBG_HCD, "DWC OTG HCD URB Dequeue OK\n");
47204 + } else {
47205 + DWC_SPINUNLOCK_IRQRESTORE(dwc_otg_hcd->lock, flags);
47206 + DWC_DEBUGPL(DBG_HCD, "DWC OTG HCD URB Dequeue failed - rc %d\n",
47207 + rc);
47208 + }
47209 +
47210 + return rc;
47211 +}
47212 +
47213 +/* Frees resources in the DWC_otg controller related to a given endpoint. Also
47214 + * clears state in the HCD related to the endpoint. Any URBs for the endpoint
47215 + * must already be dequeued. */
47216 +static void endpoint_disable(struct usb_hcd *hcd, struct usb_host_endpoint *ep)
47217 +{
47218 + dwc_otg_hcd_t *dwc_otg_hcd = hcd_to_dwc_otg_hcd(hcd);
47219 +
47220 + DWC_DEBUGPL(DBG_HCD,
47221 + "DWC OTG HCD EP DISABLE: _bEndpointAddress=0x%02x, "
47222 + "endpoint=%d\n", ep->desc.bEndpointAddress,
47223 + dwc_ep_addr_to_endpoint(ep->desc.bEndpointAddress));
47224 + dwc_otg_hcd_endpoint_disable(dwc_otg_hcd, ep->hcpriv, 250);
47225 + ep->hcpriv = NULL;
47226 +}
47227 +
47228 +#if LINUX_VERSION_CODE >= KERNEL_VERSION(2,6,30)
47229 +/* Resets endpoint-specific parameter values; currently used to reset the data
47230 + * toggle (as a workaround). May be called from the usb_clear_halt() routine. */
47231 +static void endpoint_reset(struct usb_hcd *hcd, struct usb_host_endpoint *ep)
47232 +{
47233 + dwc_irqflags_t flags;
47234 + dwc_otg_hcd_t *dwc_otg_hcd = hcd_to_dwc_otg_hcd(hcd);
47235 +
47236 + DWC_DEBUGPL(DBG_HCD, "DWC OTG HCD EP RESET: Endpoint Num=0x%02d\n", epnum);
47237 +
47238 + DWC_SPINLOCK_IRQSAVE(dwc_otg_hcd->lock, &flags);
47239 + if (ep->hcpriv) {
47240 + dwc_otg_hcd_endpoint_reset(dwc_otg_hcd, ep->hcpriv);
47241 + }
47242 + DWC_SPINUNLOCK_IRQRESTORE(dwc_otg_hcd->lock, flags);
47243 +}
47244 +#endif
47245 +
47246 +/** Handles host mode interrupts for the DWC_otg controller. Returns IRQ_NONE if
47247 + * there was no interrupt to handle. Returns IRQ_HANDLED if there was a valid
47248 + * interrupt.
47249 + *
47250 + * This function is called by the USB core when an interrupt occurs */
47251 +static irqreturn_t dwc_otg_hcd_irq(struct usb_hcd *hcd)
47252 +{
47253 + dwc_otg_hcd_t *dwc_otg_hcd = hcd_to_dwc_otg_hcd(hcd);
47254 + int32_t retval = dwc_otg_hcd_handle_intr(dwc_otg_hcd);
47255 + if (retval != 0) {
47256 + S3C2410X_CLEAR_EINTPEND();
47257 + }
47258 + return IRQ_RETVAL(retval);
47259 +}
47260 +
47261 +/** Creates Status Change bitmap for the root hub and root port. The bitmap is
47262 + * returned in buf. Bit 0 is the status change indicator for the root hub. Bit 1
47263 + * is the status change indicator for the single root port. Returns 1 if either
47264 + * change indicator is 1, otherwise returns 0. */
47265 +int hub_status_data(struct usb_hcd *hcd, char *buf)
47266 +{
47267 + dwc_otg_hcd_t *dwc_otg_hcd = hcd_to_dwc_otg_hcd(hcd);
47268 +
47269 + buf[0] = 0;
47270 + buf[0] |= (dwc_otg_hcd_is_status_changed(dwc_otg_hcd, 1)) << 1;
47271 +
47272 + return (buf[0] != 0);
47273 +}
47274 +
47275 +/** Handles hub class-specific requests. */
47276 +int hub_control(struct usb_hcd *hcd,
47277 + u16 typeReq, u16 wValue, u16 wIndex, char *buf, u16 wLength)
47278 +{
47279 + int retval;
47280 +
47281 + retval = dwc_otg_hcd_hub_control(hcd_to_dwc_otg_hcd(hcd),
47282 + typeReq, wValue, wIndex, buf, wLength);
47283 +
47284 + switch (retval) {
47285 + case -DWC_E_INVALID:
47286 + retval = -EINVAL;
47287 + break;
47288 + }
47289 +
47290 + return retval;
47291 +}
47292 +
47293 +#endif /* DWC_DEVICE_ONLY */
47294 --- /dev/null
47295 +++ b/drivers/usb/host/dwc_otg/dwc_otg_hcd_queue.c
47296 @@ -0,0 +1,970 @@
47297 +/* ==========================================================================
47298 + * $File: //dwh/usb_iip/dev/software/otg/linux/drivers/dwc_otg_hcd_queue.c $
47299 + * $Revision: #44 $
47300 + * $Date: 2011/10/26 $
47301 + * $Change: 1873028 $
47302 + *
47303 + * Synopsys HS OTG Linux Software Driver and documentation (hereinafter,
47304 + * "Software") is an Unsupported proprietary work of Synopsys, Inc. unless
47305 + * otherwise expressly agreed to in writing between Synopsys and you.
47306 + *
47307 + * The Software IS NOT an item of Licensed Software or Licensed Product under
47308 + * any End User Software License Agreement or Agreement for Licensed Product
47309 + * with Synopsys or any supplement thereto. You are permitted to use and
47310 + * redistribute this Software in source and binary forms, with or without
47311 + * modification, provided that redistributions of source code must retain this
47312 + * notice. You may not view, use, disclose, copy or distribute this file or
47313 + * any information contained herein except pursuant to this license grant from
47314 + * Synopsys. If you do not agree with this notice, including the disclaimer
47315 + * below, then you are not authorized to use the Software.
47316 + *
47317 + * THIS SOFTWARE IS BEING DISTRIBUTED BY SYNOPSYS SOLELY ON AN "AS IS" BASIS
47318 + * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
47319 + * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
47320 + * ARE HEREBY DISCLAIMED. IN NO EVENT SHALL SYNOPSYS BE LIABLE FOR ANY DIRECT,
47321 + * INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
47322 + * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
47323 + * SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
47324 + * CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
47325 + * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
47326 + * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH
47327 + * DAMAGE.
47328 + * ========================================================================== */
47329 +#ifndef DWC_DEVICE_ONLY
47330 +
47331 +/**
47332 + * @file
47333 + *
47334 + * This file contains the functions to manage Queue Heads and Queue
47335 + * Transfer Descriptors.
47336 + */
47337 +
47338 +#include "dwc_otg_hcd.h"
47339 +#include "dwc_otg_regs.h"
47340 +
47341 +extern bool microframe_schedule;
47342 +extern unsigned short int_ep_interval_min;
47343 +
47344 +/**
47345 + * Free each QTD in the QH's QTD-list then free the QH. QH should already be
47346 + * removed from a list. QTD list should already be empty if called from URB
47347 + * Dequeue.
47348 + *
47349 + * @param hcd HCD instance.
47350 + * @param qh The QH to free.
47351 + */
47352 +void dwc_otg_hcd_qh_free(dwc_otg_hcd_t * hcd, dwc_otg_qh_t * qh)
47353 +{
47354 + dwc_otg_qtd_t *qtd, *qtd_tmp;
47355 + dwc_irqflags_t flags;
47356 + uint32_t buf_size = 0;
47357 + uint8_t *align_buf_virt = NULL;
47358 + dwc_dma_t align_buf_dma;
47359 + struct device *dev = dwc_otg_hcd_to_dev(hcd);
47360 +
47361 + /* Free each QTD in the QTD list */
47362 + DWC_SPINLOCK_IRQSAVE(hcd->lock, &flags);
47363 + DWC_CIRCLEQ_FOREACH_SAFE(qtd, qtd_tmp, &qh->qtd_list, qtd_list_entry) {
47364 + DWC_CIRCLEQ_REMOVE(&qh->qtd_list, qtd, qtd_list_entry);
47365 + dwc_otg_hcd_qtd_free(qtd);
47366 + }
47367 +
47368 + if (hcd->core_if->dma_desc_enable) {
47369 + dwc_otg_hcd_qh_free_ddma(hcd, qh);
47370 + } else if (qh->dw_align_buf) {
47371 + if (qh->ep_type == UE_ISOCHRONOUS) {
47372 + buf_size = 4096;
47373 + } else {
47374 + buf_size = hcd->core_if->core_params->max_transfer_size;
47375 + }
47376 + align_buf_virt = qh->dw_align_buf;
47377 + align_buf_dma = qh->dw_align_buf_dma;
47378 + }
47379 +
47380 + DWC_FREE(qh);
47381 + DWC_SPINUNLOCK_IRQRESTORE(hcd->lock, flags);
47382 + if (align_buf_virt)
47383 + DWC_DMA_FREE(dev, buf_size, align_buf_virt, align_buf_dma);
47384 + return;
47385 +}
47386 +
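+/*
+ * The constants below appear to follow the worst-case transaction time
+ * formulas from the USB 2.0 specification: BitStuffTime() models the
+ * worst-case 7/6 bit-stuffing expansion of 8 * bytecount bits, the host
+ * delays are in nanoseconds, and calc_bus_time() returns the result
+ * rounded to microseconds via NS_TO_US().
+ */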
47387 +#define BitStuffTime(bytecount) ((8 * 7 * bytecount) / 6)
47388 +#define HS_HOST_DELAY 5 /* nanoseconds */
47389 +#define FS_LS_HOST_DELAY 1000 /* nanoseconds */
47390 +#define HUB_LS_SETUP 333 /* nanoseconds */
47391 +#define NS_TO_US(ns) ((ns + 500) / 1000)
47392 + /* convert & round nanoseconds to microseconds */
47393 +
47394 +static uint32_t calc_bus_time(int speed, int is_in, int is_isoc, int bytecount)
47395 +{
47396 + unsigned long retval;
47397 +
47398 + switch (speed) {
47399 + case USB_SPEED_HIGH:
47400 + if (is_isoc) {
47401 + retval =
47402 + ((38 * 8 * 2083) +
47403 + (2083 * (3 + BitStuffTime(bytecount)))) / 1000 +
47404 + HS_HOST_DELAY;
47405 + } else {
47406 + retval =
47407 + ((55 * 8 * 2083) +
47408 + (2083 * (3 + BitStuffTime(bytecount)))) / 1000 +
47409 + HS_HOST_DELAY;
47410 + }
47411 + break;
47412 + case USB_SPEED_FULL:
47413 + if (is_isoc) {
47414 + retval =
47415 + (8354 * (31 + 10 * BitStuffTime(bytecount))) / 1000;
47416 + if (is_in) {
47417 + retval = 7268 + FS_LS_HOST_DELAY + retval;
47418 + } else {
47419 + retval = 6265 + FS_LS_HOST_DELAY + retval;
47420 + }
47421 + } else {
47422 + retval =
47423 + (8354 * (31 + 10 * BitStuffTime(bytecount))) / 1000;
47424 + retval = 9107 + FS_LS_HOST_DELAY + retval;
47425 + }
47426 + break;
47427 + case USB_SPEED_LOW:
47428 + if (is_in) {
47429 + retval =
47430 + (67667 * (31 + 10 * BitStuffTime(bytecount))) /
47431 + 1000;
47432 + retval =
47433 + 64060 + (2 * HUB_LS_SETUP) + FS_LS_HOST_DELAY +
47434 + retval;
47435 + } else {
47436 + retval =
47437 + (66700 * (31 + 10 * BitStuffTime(bytecount))) /
47438 + 1000;
47439 + retval =
47440 + 64107 + (2 * HUB_LS_SETUP) + FS_LS_HOST_DELAY +
47441 + retval;
47442 + }
47443 + break;
47444 + default:
47445 + DWC_WARN("Unknown device speed\n");
47446 + retval = -1;
47447 + }
47448 +
47449 + return NS_TO_US(retval);
47450 +}
47451 +
47452 +/**
47453 + * Initializes a QH structure.
47454 + *
47455 + * @param hcd The HCD state structure for the DWC OTG controller.
47456 + * @param qh The QH to init.
47457 + * @param urb Holds the information about the device/endpoint that we need
47458 + * to initialize the QH.
47459 + */
47460 +#define SCHEDULE_SLOP 10
47461 +void qh_init(dwc_otg_hcd_t * hcd, dwc_otg_qh_t * qh, dwc_otg_hcd_urb_t * urb)
47462 +{
47463 + char *speed, *type;
47464 + int dev_speed;
47465 + uint32_t hub_addr, hub_port;
47466 + hprt0_data_t hprt;
47467 +
47468 + dwc_memset(qh, 0, sizeof(dwc_otg_qh_t));
47469 + hprt.d32 = DWC_READ_REG32(hcd->core_if->host_if->hprt0);
47470 +
47471 + /* Initialize QH */
47472 + qh->ep_type = dwc_otg_hcd_get_pipe_type(&urb->pipe_info);
47473 + qh->ep_is_in = dwc_otg_hcd_is_pipe_in(&urb->pipe_info) ? 1 : 0;
47474 +
47475 + qh->data_toggle = DWC_OTG_HC_PID_DATA0;
47476 + qh->maxp = dwc_otg_hcd_get_mps(&urb->pipe_info);
47477 + DWC_CIRCLEQ_INIT(&qh->qtd_list);
47478 + DWC_LIST_INIT(&qh->qh_list_entry);
47479 + qh->channel = NULL;
47480 +
47481 +	/* FS/LS Endpoint on HS Hub
47482 + * NOT virtual root hub */
47483 + dev_speed = hcd->fops->speed(hcd, urb->priv);
47484 +
47485 + hcd->fops->hub_info(hcd, urb->priv, &hub_addr, &hub_port);
47486 + qh->do_split = 0;
47487 + if (microframe_schedule)
47488 + qh->speed = dev_speed;
47489 +
47490 + qh->nak_frame = 0xffff;
47491 +
47492 + if (hprt.b.prtspd == DWC_HPRT0_PRTSPD_HIGH_SPEED &&
47493 + dev_speed != USB_SPEED_HIGH) {
47494 + DWC_DEBUGPL(DBG_HCD,
47495 + "QH init: EP %d: TT found at hub addr %d, for port %d\n",
47496 + dwc_otg_hcd_get_ep_num(&urb->pipe_info), hub_addr,
47497 + hub_port);
47498 + qh->do_split = 1;
47499 + qh->skip_count = 0;
47500 + }
47501 +
47502 + if (qh->ep_type == UE_INTERRUPT || qh->ep_type == UE_ISOCHRONOUS) {
47503 + /* Compute scheduling parameters once and save them. */
47504 +
47505 + /** @todo Account for split transfers in the bus time. */
47506 + int bytecount =
47507 + dwc_hb_mult(qh->maxp) * dwc_max_packet(qh->maxp);
47508 +
47509 + qh->usecs =
47510 + calc_bus_time((qh->do_split ? USB_SPEED_HIGH : dev_speed),
47511 + qh->ep_is_in, (qh->ep_type == UE_ISOCHRONOUS),
47512 + bytecount);
47513 + /* Start in a slightly future (micro)frame. */
47514 + qh->sched_frame = dwc_frame_num_inc(hcd->frame_number,
47515 + SCHEDULE_SLOP);
47516 + qh->interval = urb->interval;
47517 +
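+		/*
+		 * For full/low-speed periodic endpoints behind a high-speed hub
+		 * (split transactions) the interval is expressed in 1 ms frames,
+		 * so it is converted to microframes below (x8); sched_frame is
+		 * aligned to the last microframe of a frame (|= 0x7), presumably
+		 * to place the start split consistently within the frame.
+		 */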
47518 + if (hprt.b.prtspd == DWC_HPRT0_PRTSPD_HIGH_SPEED) {
47519 + if (dev_speed == USB_SPEED_LOW ||
47520 + dev_speed == USB_SPEED_FULL) {
47521 + qh->interval *= 8;
47522 + qh->sched_frame |= 0x7;
47523 + qh->start_split_frame = qh->sched_frame;
47524 + } else if (int_ep_interval_min >= 2 &&
47525 + qh->interval < int_ep_interval_min &&
47526 + qh->ep_type == UE_INTERRUPT) {
47527 + qh->interval = int_ep_interval_min;
47528 + }
47529 + }
47530 + }
47531 +
47532 + DWC_DEBUGPL(DBG_HCD, "DWC OTG HCD QH Initialized\n");
47533 + DWC_DEBUGPL(DBG_HCDV, "DWC OTG HCD QH - qh = %p\n", qh);
47534 + DWC_DEBUGPL(DBG_HCDV, "DWC OTG HCD QH - Device Address = %d\n",
47535 + dwc_otg_hcd_get_dev_addr(&urb->pipe_info));
47536 + DWC_DEBUGPL(DBG_HCDV, "DWC OTG HCD QH - Endpoint %d, %s\n",
47537 + dwc_otg_hcd_get_ep_num(&urb->pipe_info),
47538 + dwc_otg_hcd_is_pipe_in(&urb->pipe_info) ? "IN" : "OUT");
47539 + switch (dev_speed) {
47540 + case USB_SPEED_LOW:
47541 + qh->dev_speed = DWC_OTG_EP_SPEED_LOW;
47542 + speed = "low";
47543 + break;
47544 + case USB_SPEED_FULL:
47545 + qh->dev_speed = DWC_OTG_EP_SPEED_FULL;
47546 + speed = "full";
47547 + break;
47548 + case USB_SPEED_HIGH:
47549 + qh->dev_speed = DWC_OTG_EP_SPEED_HIGH;
47550 + speed = "high";
47551 + break;
47552 + default:
47553 + speed = "?";
47554 + break;
47555 + }
47556 + DWC_DEBUGPL(DBG_HCDV, "DWC OTG HCD QH - Speed = %s\n", speed);
47557 +
47558 + switch (qh->ep_type) {
47559 + case UE_ISOCHRONOUS:
47560 + type = "isochronous";
47561 + break;
47562 + case UE_INTERRUPT:
47563 + type = "interrupt";
47564 + break;
47565 + case UE_CONTROL:
47566 + type = "control";
47567 + break;
47568 + case UE_BULK:
47569 + type = "bulk";
47570 + break;
47571 + default:
47572 + type = "?";
47573 + break;
47574 + }
47575 +
47576 + DWC_DEBUGPL(DBG_HCDV, "DWC OTG HCD QH - Type = %s\n", type);
47577 +
47578 +#ifdef DEBUG
47579 + if (qh->ep_type == UE_INTERRUPT) {
47580 + DWC_DEBUGPL(DBG_HCDV, "DWC OTG HCD QH - usecs = %d\n",
47581 + qh->usecs);
47582 + DWC_DEBUGPL(DBG_HCDV, "DWC OTG HCD QH - interval = %d\n",
47583 + qh->interval);
47584 + }
47585 +#endif
47586 +
47587 +}
47588 +
47589 +/**
47590 + * This function allocates and initializes a QH.
47591 + *
47592 + * @param hcd The HCD state structure for the DWC OTG controller.
47593 + * @param urb Holds the information about the device/endpoint that we need
47594 + * to initialize the QH.
47595 + * @param atomic_alloc Flag to do atomic allocation if needed
47596 + *
47597 + * @return Returns pointer to the newly allocated QH, or NULL on error. */
47598 +dwc_otg_qh_t *dwc_otg_hcd_qh_create(dwc_otg_hcd_t * hcd,
47599 + dwc_otg_hcd_urb_t * urb, int atomic_alloc)
47600 +{
47601 + dwc_otg_qh_t *qh;
47602 +
47603 + /* Allocate memory */
47604 + /** @todo add memflags argument */
47605 + qh = dwc_otg_hcd_qh_alloc(atomic_alloc);
47606 + if (qh == NULL) {
47607 + DWC_ERROR("qh allocation failed");
47608 + return NULL;
47609 + }
47610 +
47611 + qh_init(hcd, qh, urb);
47612 +
47613 + if (hcd->core_if->dma_desc_enable
47614 + && (dwc_otg_hcd_qh_init_ddma(hcd, qh) < 0)) {
47615 + dwc_otg_hcd_qh_free(hcd, qh);
47616 + return NULL;
47617 + }
47618 +
47619 + return qh;
47620 +}
47621 +
47622 +/* microframe_schedule=0 start */
47623 +
47624 +/**
47625 + * Checks that a channel is available for a periodic transfer.
47626 + *
47627 + * @return 0 if successful, negative error code otherwise.
47628 + */
47629 +static int periodic_channel_available(dwc_otg_hcd_t * hcd)
47630 +{
47631 + /*
47632 +	 * Currently assuming that there is a dedicated host channel for each
47633 + * periodic transaction plus at least one host channel for
47634 + * non-periodic transactions.
47635 + */
47636 + int status;
47637 + int num_channels;
47638 +
47639 + num_channels = hcd->core_if->core_params->host_channels;
47640 + if ((hcd->periodic_channels + hcd->non_periodic_channels < num_channels)
47641 + && (hcd->periodic_channels < num_channels - 1)) {
47642 + status = 0;
47643 + } else {
47644 + DWC_INFO("%s: Total channels: %d, Periodic: %d, Non-periodic: %d\n",
47645 + __func__, num_channels, hcd->periodic_channels, hcd->non_periodic_channels); //NOTICE
47646 + status = -DWC_E_NO_SPACE;
47647 + }
47648 +
47649 + return status;
47650 +}
47651 +
47652 +/**
47653 + * Checks that there is sufficient bandwidth for the specified QH in the
47654 + * periodic schedule. For simplicity, this calculation assumes that all the
47655 + * transfers in the periodic schedule may occur in the same (micro)frame.
47656 + *
47657 + * @param hcd The HCD state structure for the DWC OTG controller.
47658 + * @param qh QH containing periodic bandwidth required.
47659 + *
47660 + * @return 0 if successful, negative error code otherwise.
47661 + */
47662 +static int check_periodic_bandwidth(dwc_otg_hcd_t * hcd, dwc_otg_qh_t * qh)
47663 +{
47664 + int status;
47665 + int16_t max_claimed_usecs;
47666 +
47667 + status = 0;
47668 +
47669 + if ((qh->dev_speed == DWC_OTG_EP_SPEED_HIGH) || qh->do_split) {
47670 + /*
47671 + * High speed mode.
47672 + * Max periodic usecs is 80% x 125 usec = 100 usec.
47673 + */
47674 +
47675 + max_claimed_usecs = 100 - qh->usecs;
47676 + } else {
47677 + /*
47678 + * Full speed mode.
47679 + * Max periodic usecs is 90% x 1000 usec = 900 usec.
47680 + */
47681 + max_claimed_usecs = 900 - qh->usecs;
47682 + }
47683 +
47684 + if (hcd->periodic_usecs > max_claimed_usecs) {
47685 + DWC_INFO("%s: already claimed usecs %d, required usecs %d\n", __func__, hcd->periodic_usecs, qh->usecs); //NOTICE
47686 + status = -DWC_E_NO_SPACE;
47687 + }
47688 +
47689 + return status;
47690 +}
47691 +
47692 +/* microframe_schedule=0 end */
47693 +
47694 +/**
47695 + * Microframe scheduler
47696 + * tracks the total use in hcd->frame_usecs
47697 + * keeps each qh's use in qh->frame_usecs
47698 + * when a qh surrenders its slot, the time is donated back
47699 + */
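+/*
+ * Per-microframe budgets in microseconds: high-speed periodic transfers may
+ * claim at most 80% of each 125 us microframe (100 us). The reduced budgets
+ * for microframes 6 and 7 appear intended to leave headroom at the end of
+ * the frame (e.g. for complete-splits and non-periodic traffic).
+ */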
47700 +const unsigned short max_uframe_usecs[]={ 100, 100, 100, 100, 100, 100, 30, 0 };
47701 +
47702 +/*
47703 + * called from dwc_otg_hcd.c:dwc_otg_hcd_init
47704 + */
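+/*
+ * A full-speed-only root port has no microframes, so the whole 1 ms frame
+ * budget (90% of 1000 us = 900 us for periodic transfers) is tracked in
+ * slot 0; in high-speed operation each of the eight 125 us microframes gets
+ * its own budget from max_uframe_usecs[].
+ */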
47705 +void init_hcd_usecs(dwc_otg_hcd_t *_hcd)
47706 +{
47707 + int i;
47708 + if (_hcd->flags.b.port_speed == DWC_HPRT0_PRTSPD_FULL_SPEED) {
47709 + _hcd->frame_usecs[0] = 900;
47710 + for (i = 1; i < 8; i++)
47711 + _hcd->frame_usecs[i] = 0;
47712 + } else {
47713 + for (i = 0; i < 8; i++)
47714 + _hcd->frame_usecs[i] = max_uframe_usecs[i];
47715 + }
47716 +}
47717 +
47718 +static int find_single_uframe(dwc_otg_hcd_t * _hcd, dwc_otg_qh_t * _qh)
47719 +{
47720 + int i;
47721 + unsigned short utime;
47722 + int t_left;
47723 + int ret;
47724 + int done;
47725 +
47726 + ret = -1;
47727 + utime = _qh->usecs;
47728 + t_left = utime;
47729 + i = 0;
47730 + done = 0;
47731 + while (done == 0) {
47732 + /* At the start _hcd->frame_usecs[i] = max_uframe_usecs[i]; */
47733 + if (utime <= _hcd->frame_usecs[i]) {
47734 + _hcd->frame_usecs[i] -= utime;
47735 + _qh->frame_usecs[i] += utime;
47736 + t_left -= utime;
47737 + ret = i;
47738 + done = 1;
47739 + return ret;
47740 + } else {
47741 + i++;
47742 + if (i == 8) {
47743 + done = 1;
47744 + ret = -1;
47745 + }
47746 + }
47747 + }
47748 + return ret;
47749 +}
47750 +
47751 +/*
47752 + * use this for FS apps that can span multiple uframes
47753 + */
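+/*
+ * Roughly: starting at microframe i, accumulate the time left in consecutive
+ * microframes i, i+1, ... until the QH's usecs fit. If an intermediate
+ * microframe is already partially used and the gathered time is still not
+ * enough, the attempt is abandoned and the search restarts at the next i.
+ * On success the claimed time is moved from hcd->frame_usecs[] into
+ * qh->frame_usecs[].
+ */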
47754 +static int find_multi_uframe(dwc_otg_hcd_t * _hcd, dwc_otg_qh_t * _qh)
47755 +{
47756 + int i;
47757 + int j;
47758 + unsigned short utime;
47759 + int t_left;
47760 + int ret;
47761 + int done;
47762 + unsigned short xtime;
47763 +
47764 + ret = -1;
47765 + utime = _qh->usecs;
47766 + t_left = utime;
47767 + i = 0;
47768 + done = 0;
47769 +loop:
47770 + while (done == 0) {
47771 + if(_hcd->frame_usecs[i] <= 0) {
47772 + i++;
47773 + if (i == 8) {
47774 + done = 1;
47775 + ret = -1;
47776 + }
47777 + goto loop;
47778 + }
47779 +
47780 + /*
47781 +		 * we may need several consecutive slots: starting at slot i, keep
47782 +		 * adding the time left in the following slots until there is enough
47783 + */
47784 + xtime= _hcd->frame_usecs[i];
47785 + for (j = i+1 ; j < 8 ; j++ ) {
47786 + /*
47787 +			 * if adding this frame's remaining time to xtime is still not
47788 +			 * enough, then frame j must be completely free or we give up
47789 + */
47790 + if ((xtime+_hcd->frame_usecs[j]) < utime) {
47791 + if (_hcd->frame_usecs[j] < max_uframe_usecs[j]) {
47792 + j = 8;
47793 + ret = -1;
47794 + continue;
47795 + }
47796 + }
47797 + if (xtime >= utime) {
47798 + ret = i;
47799 + j = 8; /* stop loop with a good value ret */
47800 + continue;
47801 + }
47802 +			/* add this frame's remaining time to xtime */
47803 + xtime += _hcd->frame_usecs[j];
47804 + /* we must have a fully available next frame or break */
47805 + if ((xtime < utime)
47806 + && (_hcd->frame_usecs[j] == max_uframe_usecs[j])) {
47807 + ret = -1;
47808 + j = 8; /* stop loop with a bad value ret */
47809 + continue;
47810 + }
47811 + }
47812 + if (ret >= 0) {
47813 + t_left = utime;
47814 + for (j = i; (t_left>0) && (j < 8); j++ ) {
47815 + t_left -= _hcd->frame_usecs[j];
47816 + if ( t_left <= 0 ) {
47817 + _qh->frame_usecs[j] += _hcd->frame_usecs[j] + t_left;
47818 + _hcd->frame_usecs[j]= -t_left;
47819 + ret = i;
47820 + done = 1;
47821 + } else {
47822 + _qh->frame_usecs[j] += _hcd->frame_usecs[j];
47823 + _hcd->frame_usecs[j] = 0;
47824 + }
47825 + }
47826 + } else {
47827 + i++;
47828 + if (i == 8) {
47829 + done = 1;
47830 + ret = -1;
47831 + }
47832 + }
47833 + }
47834 + return ret;
47835 +}
47836 +
47837 +static int find_uframe(dwc_otg_hcd_t * _hcd, dwc_otg_qh_t * _qh)
47838 +{
47839 + int ret;
47840 + ret = -1;
47841 +
47842 + if (_qh->speed == USB_SPEED_HIGH ||
47843 + _hcd->flags.b.port_speed == DWC_HPRT0_PRTSPD_FULL_SPEED) {
47844 + /* if this is a hs transaction we need a full frame - or account for FS usecs */
47845 + ret = find_single_uframe(_hcd, _qh);
47846 + } else {
47847 + /* if this is a fs transaction we may need a sequence of frames */
47848 + ret = find_multi_uframe(_hcd, _qh);
47849 + }
47850 + return ret;
47851 +}
47852 +
47853 +/**
47854 + * Checks that the max transfer size allowed in a host channel is large enough
47855 + * to handle the maximum data transfer in a single (micro)frame for a periodic
47856 + * transfer.
47857 + *
47858 + * @param hcd The HCD state structure for the DWC OTG controller.
47859 + * @param qh QH for a periodic endpoint.
47860 + *
47861 + * @return 0 if successful, negative error code otherwise.
47862 + */
47863 +static int check_max_xfer_size(dwc_otg_hcd_t * hcd, dwc_otg_qh_t * qh)
47864 +{
47865 + int status;
47866 + uint32_t max_xfer_size;
47867 + uint32_t max_channel_xfer_size;
47868 +
47869 + status = 0;
47870 +
47871 + max_xfer_size = dwc_max_packet(qh->maxp) * dwc_hb_mult(qh->maxp);
47872 + max_channel_xfer_size = hcd->core_if->core_params->max_transfer_size;
47873 +
47874 + if (max_xfer_size > max_channel_xfer_size) {
47875 + DWC_INFO("%s: Periodic xfer length %d > " "max xfer length for channel %d\n",
47876 + __func__, max_xfer_size, max_channel_xfer_size); //NOTICE
47877 + status = -DWC_E_NO_SPACE;
47878 + }
47879 +
47880 + return status;
47881 +}
47882 +
47883 +
47884 +
47885 +/**
47886 + * Schedules an interrupt or isochronous transfer in the periodic schedule.
47887 + *
47888 + * @param hcd The HCD state structure for the DWC OTG controller.
47889 + * @param qh QH for the periodic transfer. The QH should already contain the
47890 + * scheduling information.
47891 + *
47892 + * @return 0 if successful, negative error code otherwise.
47893 + */
47894 +static int schedule_periodic(dwc_otg_hcd_t * hcd, dwc_otg_qh_t * qh)
47895 +{
47896 + int status = 0;
47897 +
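+	/*
+	 * With the microframe scheduler, find_uframe() returns the first
+	 * microframe with enough spare time (negative on failure). The low
+	 * three bits of sched_frame are then set from that result; note that
+	 * the mapping below places a result of 0 in microframe 7 and a result
+	 * of n > 0 in microframe n - 1, i.e. one microframe earlier, wrapping.
+	 */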
47898 + if (microframe_schedule) {
47899 + int frame;
47900 + status = find_uframe(hcd, qh);
47901 + frame = -1;
47902 + if (status == 0) {
47903 + frame = 7;
47904 + } else {
47905 + if (status > 0 )
47906 + frame = status-1;
47907 + }
47908 +
47909 + /* Set the new frame up */
47910 + if (frame > -1) {
47911 + qh->sched_frame &= ~0x7;
47912 + qh->sched_frame |= (frame & 7);
47913 + }
47914 +
47915 + if (status != -1)
47916 + status = 0;
47917 + } else {
47918 + status = periodic_channel_available(hcd);
47919 + if (status) {
47920 + DWC_INFO("%s: No host channel available for periodic " "transfer.\n", __func__); //NOTICE
47921 + return status;
47922 + }
47923 +
47924 + status = check_periodic_bandwidth(hcd, qh);
47925 + }
47926 + if (status) {
47927 + DWC_INFO("%s: Insufficient periodic bandwidth for "
47928 + "periodic transfer.\n", __func__);
47929 + return -DWC_E_NO_SPACE;
47930 + }
47931 + status = check_max_xfer_size(hcd, qh);
47932 + if (status) {
47933 + DWC_INFO("%s: Channel max transfer size too small "
47934 + "for periodic transfer.\n", __func__);
47935 + return status;
47936 + }
47937 +
47938 + if (hcd->core_if->dma_desc_enable) {
47939 + /* Don't rely on SOF and start in ready schedule */
47940 + DWC_LIST_INSERT_TAIL(&hcd->periodic_sched_ready, &qh->qh_list_entry);
47941 + }
47942 + else {
47943 + if(fiq_enable && (DWC_LIST_EMPTY(&hcd->periodic_sched_inactive) || dwc_frame_num_le(qh->sched_frame, hcd->fiq_state->next_sched_frame)))
47944 + {
47945 + hcd->fiq_state->next_sched_frame = qh->sched_frame;
47946 +
47947 + }
47948 + /* Always start in the inactive schedule. */
47949 + DWC_LIST_INSERT_TAIL(&hcd->periodic_sched_inactive, &qh->qh_list_entry);
47950 + }
47951 +
47952 + if (!microframe_schedule) {
47953 + /* Reserve the periodic channel. */
47954 + hcd->periodic_channels++;
47955 + }
47956 +
47957 + /* Update claimed usecs per (micro)frame. */
47958 + hcd->periodic_usecs += qh->usecs;
47959 +
47960 + return status;
47961 +}
47962 +
47963 +
47964 +/**
47965 + * This function adds a QH to either the non periodic or periodic schedule if
47966 + * it is not already in the schedule. If the QH is already in the schedule, no
47967 + * action is taken.
47968 + *
47969 + * @return 0 if successful, negative error code otherwise.
47970 + */
47971 +int dwc_otg_hcd_qh_add(dwc_otg_hcd_t * hcd, dwc_otg_qh_t * qh)
47972 +{
47973 + int status = 0;
47974 + gintmsk_data_t intr_mask = {.d32 = 0 };
47975 +
47976 + if (!DWC_LIST_EMPTY(&qh->qh_list_entry)) {
47977 + /* QH already in a schedule. */
47978 + return status;
47979 + }
47980 +
47981 + /* Add the new QH to the appropriate schedule */
47982 + if (dwc_qh_is_non_per(qh)) {
47983 + /* Always start in the inactive schedule. */
47984 + DWC_LIST_INSERT_TAIL(&hcd->non_periodic_sched_inactive,
47985 + &qh->qh_list_entry);
47986 + //hcd->fiq_state->kick_np_queues = 1;
47987 + } else {
47988 + status = schedule_periodic(hcd, qh);
47989 + if ( !hcd->periodic_qh_count ) {
47990 + intr_mask.b.sofintr = 1;
47991 + if (fiq_enable) {
47992 + local_fiq_disable();
47993 + fiq_fsm_spin_lock(&hcd->fiq_state->lock);
47994 + DWC_MODIFY_REG32(&hcd->core_if->core_global_regs->gintmsk, intr_mask.d32, intr_mask.d32);
47995 + fiq_fsm_spin_unlock(&hcd->fiq_state->lock);
47996 + local_fiq_enable();
47997 + } else {
47998 + DWC_MODIFY_REG32(&hcd->core_if->core_global_regs->gintmsk, intr_mask.d32, intr_mask.d32);
47999 + }
48000 + }
48001 + hcd->periodic_qh_count++;
48002 + }
48003 +
48004 + return status;
48005 +}
48006 +
48007 +/**
48008 + * Removes an interrupt or isochronous transfer from the periodic schedule.
48009 + *
48010 + * @param hcd The HCD state structure for the DWC OTG controller.
48011 + * @param qh QH for the periodic transfer.
48012 + */
48013 +static void deschedule_periodic(dwc_otg_hcd_t * hcd, dwc_otg_qh_t * qh)
48014 +{
48015 + int i;
48016 + DWC_LIST_REMOVE_INIT(&qh->qh_list_entry);
48017 +
48018 + /* Update claimed usecs per (micro)frame. */
48019 + hcd->periodic_usecs -= qh->usecs;
48020 +
48021 + if (!microframe_schedule) {
48022 + /* Release the periodic channel reservation. */
48023 + hcd->periodic_channels--;
48024 + } else {
48025 + for (i = 0; i < 8; i++) {
48026 + hcd->frame_usecs[i] += qh->frame_usecs[i];
48027 + qh->frame_usecs[i] = 0;
48028 + }
48029 + }
48030 +}
48031 +
48032 +/**
48033 + * Removes a QH from either the non-periodic or periodic schedule. Memory is
48034 + * not freed.
48035 + *
48036 + * @param hcd The HCD state structure.
48037 + * @param qh QH to remove from schedule. */
48038 +void dwc_otg_hcd_qh_remove(dwc_otg_hcd_t * hcd, dwc_otg_qh_t * qh)
48039 +{
48040 + gintmsk_data_t intr_mask = {.d32 = 0 };
48041 +
48042 + if (DWC_LIST_EMPTY(&qh->qh_list_entry)) {
48043 + /* QH is not in a schedule. */
48044 + return;
48045 + }
48046 +
48047 + if (dwc_qh_is_non_per(qh)) {
48048 + if (hcd->non_periodic_qh_ptr == &qh->qh_list_entry) {
48049 + hcd->non_periodic_qh_ptr =
48050 + hcd->non_periodic_qh_ptr->next;
48051 + }
48052 + DWC_LIST_REMOVE_INIT(&qh->qh_list_entry);
48053 + //if (!DWC_LIST_EMPTY(&hcd->non_periodic_sched_inactive))
48054 + // hcd->fiq_state->kick_np_queues = 1;
48055 + } else {
48056 + deschedule_periodic(hcd, qh);
48057 + hcd->periodic_qh_count--;
48058 + if( !hcd->periodic_qh_count && !fiq_fsm_enable ) {
48059 + intr_mask.b.sofintr = 1;
48060 + if (fiq_enable) {
48061 + local_fiq_disable();
48062 + fiq_fsm_spin_lock(&hcd->fiq_state->lock);
48063 + DWC_MODIFY_REG32(&hcd->core_if->core_global_regs->gintmsk, intr_mask.d32, 0);
48064 + fiq_fsm_spin_unlock(&hcd->fiq_state->lock);
48065 + local_fiq_enable();
48066 + } else {
48067 + DWC_MODIFY_REG32(&hcd->core_if->core_global_regs->gintmsk, intr_mask.d32, 0);
48068 + }
48069 + }
48070 + }
48071 +}
48072 +
48073 +/**
48074 + * Deactivates a QH. For non-periodic QHs, removes the QH from the active
48075 + * non-periodic schedule. The QH is added to the inactive non-periodic
48076 + * schedule if any QTDs are still attached to the QH.
48077 + *
48078 + * For periodic QHs, the QH is removed from the periodic queued schedule. If
48079 + * there are any QTDs still attached to the QH, the QH is added to either the
48080 + * periodic inactive schedule or the periodic ready schedule and its next
48081 + * scheduled frame is calculated. The QH is placed in the ready schedule if
48082 + * the scheduled frame has been reached already. Otherwise it's placed in the
48083 + * inactive schedule. If there are no QTDs attached to the QH, the QH is
48084 + * completely removed from the periodic schedule.
48085 + */
48086 +void dwc_otg_hcd_qh_deactivate(dwc_otg_hcd_t * hcd, dwc_otg_qh_t * qh,
48087 + int sched_next_periodic_split)
48088 +{
48089 + if (dwc_qh_is_non_per(qh)) {
48090 + dwc_otg_hcd_qh_remove(hcd, qh);
48091 + if (!DWC_CIRCLEQ_EMPTY(&qh->qtd_list)) {
48092 + /* Add back to inactive non-periodic schedule. */
48093 + dwc_otg_hcd_qh_add(hcd, qh);
48094 + //hcd->fiq_state->kick_np_queues = 1;
48095 + } else {
48096 + if(nak_holdoff && qh->do_split) {
48097 + qh->nak_frame = 0xFFFF;
48098 + }
48099 + }
48100 + } else {
48101 + uint16_t frame_number = dwc_otg_hcd_get_frame_number(hcd);
48102 +
48103 + if (qh->do_split) {
48104 + /* Schedule the next continuing periodic split transfer */
48105 + if (sched_next_periodic_split) {
48106 +
48107 + qh->sched_frame = frame_number;
48108 +
48109 + if (dwc_frame_num_le(frame_number,
48110 + dwc_frame_num_inc
48111 + (qh->start_split_frame,
48112 + 1))) {
48113 + /*
48114 + * Allow one frame to elapse after start
48115 + * split microframe before scheduling
48116 + * complete split, but DONT if we are
48117 + * doing the next start split in the
48118 + * same frame for an ISOC out.
48119 + */
48120 + if ((qh->ep_type != UE_ISOCHRONOUS) ||
48121 + (qh->ep_is_in != 0)) {
48122 + qh->sched_frame =
48123 + dwc_frame_num_inc(qh->sched_frame, 1);
48124 + }
48125 + }
48126 + } else {
48127 + qh->sched_frame =
48128 + dwc_frame_num_inc(qh->start_split_frame,
48129 + qh->interval);
48130 + if (dwc_frame_num_le
48131 + (qh->sched_frame, frame_number)) {
48132 + qh->sched_frame = frame_number;
48133 + }
48134 + qh->sched_frame |= 0x7;
48135 + qh->start_split_frame = qh->sched_frame;
48136 + }
48137 + } else {
48138 + qh->sched_frame =
48139 + dwc_frame_num_inc(qh->sched_frame, qh->interval);
48140 + if (dwc_frame_num_le(qh->sched_frame, frame_number)) {
48141 + qh->sched_frame = frame_number;
48142 + }
48143 + }
48144 +
48145 + if (DWC_CIRCLEQ_EMPTY(&qh->qtd_list)) {
48146 + dwc_otg_hcd_qh_remove(hcd, qh);
48147 + } else {
48148 + /*
48149 + * Remove from periodic_sched_queued and move to
48150 + * appropriate queue.
48151 + */
48152 + if ((microframe_schedule && dwc_frame_num_le(qh->sched_frame, frame_number)) ||
48153 + (!microframe_schedule && qh->sched_frame == frame_number)) {
48154 + DWC_LIST_MOVE_HEAD(&hcd->periodic_sched_ready,
48155 + &qh->qh_list_entry);
48156 + } else {
48157 + if(fiq_enable && !dwc_frame_num_le(hcd->fiq_state->next_sched_frame, qh->sched_frame))
48158 + {
48159 + hcd->fiq_state->next_sched_frame = qh->sched_frame;
48160 + }
48161 +
48162 + DWC_LIST_MOVE_HEAD
48163 + (&hcd->periodic_sched_inactive,
48164 + &qh->qh_list_entry);
48165 + }
48166 + }
48167 + }
48168 +}
48169 +
48170 +/**
48171 + * This function allocates and initializes a QTD.
48172 + *
48173 + * @param urb The URB to create a QTD from. Each URB and its QTD point to
48174 + * each other, so the pairing is one-to-one.
48175 + * @param atomic_alloc Flag to do atomic alloc if needed
48176 + *
48177 + * @return Returns pointer to the newly allocated QTD, or NULL on error. */
48178 +dwc_otg_qtd_t *dwc_otg_hcd_qtd_create(dwc_otg_hcd_urb_t * urb, int atomic_alloc)
48179 +{
48180 + dwc_otg_qtd_t *qtd;
48181 +
48182 + qtd = dwc_otg_hcd_qtd_alloc(atomic_alloc);
48183 + if (qtd == NULL) {
48184 + return NULL;
48185 + }
48186 +
48187 + dwc_otg_hcd_qtd_init(qtd, urb);
48188 + return qtd;
48189 +}
48190 +
48191 +/**
48192 + * Initializes a QTD structure.
48193 + *
48194 + * @param qtd The QTD to initialize.
48195 + * @param urb The URB to use for initialization. */
48196 +void dwc_otg_hcd_qtd_init(dwc_otg_qtd_t * qtd, dwc_otg_hcd_urb_t * urb)
48197 +{
48198 + dwc_memset(qtd, 0, sizeof(dwc_otg_qtd_t));
48199 + qtd->urb = urb;
48200 + if (dwc_otg_hcd_get_pipe_type(&urb->pipe_info) == UE_CONTROL) {
48201 + /*
48202 + * The only time the QTD data toggle is used is on the data
48203 + * phase of control transfers. This phase always starts with
48204 + * DATA1.
48205 + */
48206 + qtd->data_toggle = DWC_OTG_HC_PID_DATA1;
48207 + qtd->control_phase = DWC_OTG_CONTROL_SETUP;
48208 + }
48209 +
48210 + /* start split */
48211 + qtd->complete_split = 0;
48212 + qtd->isoc_split_pos = DWC_HCSPLIT_XACTPOS_ALL;
48213 + qtd->isoc_split_offset = 0;
48214 + qtd->in_process = 0;
48215 +
48216 +	/* Store the qtd pointer in the urb so the URB can reference its QTD. */
48217 + urb->qtd = qtd;
48218 + return;
48219 +}
48220 +
48221 +/**
48222 + * This function adds a QTD to the QTD-list of a QH. It will find the correct
48223 + * QH to place the QTD into. If it does not find a QH, then it will create a
48224 + * new QH. If the QH to which the QTD is added is not currently scheduled, it
48225 + * is placed into the proper schedule based on its EP type.
48226 + * HCD lock must be held and interrupts must be disabled on entry
48227 + *
48228 + * @param[in] qtd The QTD to add
48229 + * @param[in] hcd The DWC HCD structure
48230 + * @param[out] qh out parameter to return queue head
48231 + * @param atomic_alloc Flag to do atomic alloc if needed
48232 + *
48233 + * @return 0 if successful, negative error code otherwise.
48234 + */
48235 +int dwc_otg_hcd_qtd_add(dwc_otg_qtd_t * qtd,
48236 + dwc_otg_hcd_t * hcd, dwc_otg_qh_t ** qh, int atomic_alloc)
48237 +{
48238 + int retval = 0;
48239 + dwc_otg_hcd_urb_t *urb = qtd->urb;
48240 +
48241 + /*
48242 + * Get the QH which holds the QTD-list to insert to. Create QH if it
48243 + * doesn't exist.
48244 + */
48245 + if (*qh == NULL) {
48246 + *qh = dwc_otg_hcd_qh_create(hcd, urb, atomic_alloc);
48247 + if (*qh == NULL) {
48248 + retval = -DWC_E_NO_MEMORY;
48249 + goto done;
48250 + } else {
48251 + if (fiq_enable)
48252 + hcd->fiq_state->kick_np_queues = 1;
48253 + }
48254 + }
48255 + retval = dwc_otg_hcd_qh_add(hcd, *qh);
48256 + if (retval == 0) {
48257 + DWC_CIRCLEQ_INSERT_TAIL(&((*qh)->qtd_list), qtd,
48258 + qtd_list_entry);
48259 + qtd->qh = *qh;
48260 + }
48261 +done:
48262 +
48263 + return retval;
48264 +}
48265 +
48266 +#endif /* DWC_DEVICE_ONLY */
48267 --- /dev/null
48268 +++ b/drivers/usb/host/dwc_otg/dwc_otg_os_dep.h
48269 @@ -0,0 +1,199 @@
48270 +#ifndef _DWC_OS_DEP_H_
48271 +#define _DWC_OS_DEP_H_
48272 +
48273 +/**
48274 + * @file
48275 + *
48276 + * This file contains OS dependent structures.
48277 + *
48278 + */
48279 +
48280 +#include <linux/kernel.h>
48281 +#include <linux/module.h>
48282 +#include <linux/moduleparam.h>
48283 +#include <linux/init.h>
48284 +#include <linux/device.h>
48285 +#include <linux/errno.h>
48286 +#include <linux/types.h>
48287 +#include <linux/slab.h>
48288 +#include <linux/list.h>
48289 +#include <linux/interrupt.h>
48290 +#include <linux/ctype.h>
48291 +#include <linux/string.h>
48292 +#include <linux/dma-mapping.h>
48293 +#include <linux/jiffies.h>
48294 +#include <linux/delay.h>
48295 +#include <linux/timer.h>
48296 +#include <linux/workqueue.h>
48297 +#include <linux/stat.h>
48298 +#include <linux/pci.h>
48299 +
48300 +#include <linux/version.h>
48301 +
48302 +#if LINUX_VERSION_CODE >= KERNEL_VERSION(2,6,20)
48303 +# include <linux/irq.h>
48304 +#endif
48305 +
48306 +#if LINUX_VERSION_CODE >= KERNEL_VERSION(2,6,21)
48307 +# include <linux/usb/ch9.h>
48308 +#else
48309 +# include <linux/usb_ch9.h>
48310 +#endif
48311 +
48312 +#if LINUX_VERSION_CODE >= KERNEL_VERSION(2,6,24)
48313 +# include <linux/usb/gadget.h>
48314 +#else
48315 +# include <linux/usb_gadget.h>
48316 +#endif
48317 +
48318 +#if LINUX_VERSION_CODE < KERNEL_VERSION(2,6,20)
48319 +# include <asm/irq.h>
48320 +#endif
48321 +
48322 +#ifdef PCI_INTERFACE
48323 +# include <asm/io.h>
48324 +#endif
48325 +
48326 +#ifdef LM_INTERFACE
48327 +# include <asm/unaligned.h>
48328 +# include <asm/sizes.h>
48329 +# include <asm/param.h>
48330 +# include <asm/io.h>
48331 +# if (LINUX_VERSION_CODE < KERNEL_VERSION(2,6,30))
48332 +# include <asm/arch/hardware.h>
48333 +# include <asm/arch/lm.h>
48334 +# include <asm/arch/irqs.h>
48335 +# include <asm/arch/regs-irq.h>
48336 +# else
48337 +/* in 2.6.31, at least, we seem to have lost the generic LM infrastructure -
48338 + here we assume that the machine architecture provides definitions
48339 + in its own header
48340 +*/
48341 +# include <mach/lm.h>
48342 +# include <mach/hardware.h>
48343 +# endif
48344 +#endif
48345 +
48346 +#ifdef PLATFORM_INTERFACE
48347 +#include <linux/platform_device.h>
48348 +#ifdef CONFIG_ARM
48349 +#include <asm/mach/map.h>
48350 +#endif
48351 +#endif
48352 +
48353 +/** The OS page size */
48354 +#define DWC_OS_PAGE_SIZE PAGE_SIZE
48355 +
48356 +#if LINUX_VERSION_CODE < KERNEL_VERSION(2,6,14)
48357 +typedef int gfp_t;
48358 +#endif
48359 +
48360 +#if LINUX_VERSION_CODE < KERNEL_VERSION(2,6,18)
48361 +# define IRQF_SHARED SA_SHIRQ
48362 +#endif
48363 +
48364 +typedef struct os_dependent {
48365 + /** Base address returned from ioremap() */
48366 + void *base;
48367 +
48368 + /** Register offset for Diagnostic API */
48369 + uint32_t reg_offset;
48370 +
48371 + /** Base address for MPHI peripheral */
48372 + void *mphi_base;
48373 +
48374 + /** mphi_base actually points to the SWIRQ block */
48375 + bool use_swirq;
48376 +
48377 + /** IRQ number (<0 if not valid) */
48378 + int irq_num;
48379 +
48380 + /** FIQ number (<0 if not valid) */
48381 + int fiq_num;
48382 +
48383 +#ifdef LM_INTERFACE
48384 + struct lm_device *lmdev;
48385 +#elif defined(PCI_INTERFACE)
48386 + struct pci_dev *pcidev;
48387 +
48388 + /** Start address of a PCI region */
48389 + resource_size_t rsrc_start;
48390 +
48391 + /** Length address of a PCI region */
48392 + resource_size_t rsrc_len;
48393 +#elif defined(PLATFORM_INTERFACE)
48394 + struct platform_device *platformdev;
48395 +#endif
48396 +
48397 +} os_dependent_t;
48398 +
48402 +
48403 +
48404 +
48405 +/* Type for our device on the chosen bus */
48406 +#if defined(LM_INTERFACE)
48407 +typedef struct lm_device dwc_bus_dev_t;
48408 +#elif defined(PCI_INTERFACE)
48409 +typedef struct pci_dev dwc_bus_dev_t;
48410 +#elif defined(PLATFORM_INTERFACE)
48411 +typedef struct platform_device dwc_bus_dev_t;
48412 +#endif
48413 +
48414 +/* Helper macro to retrieve drvdata from the device on the chosen bus */
48415 +#if defined(LM_INTERFACE)
48416 +#define DWC_OTG_BUSDRVDATA(_dev) lm_get_drvdata(_dev)
48417 +#elif defined(PCI_INTERFACE)
48418 +#define DWC_OTG_BUSDRVDATA(_dev) pci_get_drvdata(_dev)
48419 +#elif defined(PLATFORM_INTERFACE)
48420 +#define DWC_OTG_BUSDRVDATA(_dev) platform_get_drvdata(_dev)
48421 +#endif
48422 +
48423 +/**
48424 + * Helper macro returning the otg_device structure of a given struct device
48425 + *
48426 + * c.f. static dwc_otg_device_t *dwc_otg_drvdev(struct device *_dev)
48427 + */
48428 +#ifdef LM_INTERFACE
48429 +#define DWC_OTG_GETDRVDEV(_var, _dev) do { \
48430 + struct lm_device *lm_dev = \
48431 + container_of(_dev, struct lm_device, dev); \
48432 + _var = lm_get_drvdata(lm_dev); \
48433 + } while (0)
48434 +
48435 +#elif defined(PCI_INTERFACE)
48436 +#define DWC_OTG_GETDRVDEV(_var, _dev) do { \
48437 + _var = dev_get_drvdata(_dev); \
48438 + } while (0)
48439 +
48440 +#elif defined(PLATFORM_INTERFACE)
48441 +#define DWC_OTG_GETDRVDEV(_var, _dev) do { \
48442 + struct platform_device *platform_dev = \
48443 + container_of(_dev, struct platform_device, dev); \
48444 + _var = platform_get_drvdata(platform_dev); \
48445 + } while (0)
48446 +#endif
48447 +
48448 +
48449 +/**
48450 + * Helper macro returning the struct dev of the given struct os_dependent
48451 + *
48452 + * c.f. static struct device *dwc_otg_getdev(struct os_dependent *osdep)
48453 + */
48454 +#ifdef LM_INTERFACE
48455 +#define DWC_OTG_OS_GETDEV(_osdep) \
48456 + ((_osdep).lmdev == NULL? NULL: &(_osdep).lmdev->dev)
48457 +#elif defined(PCI_INTERFACE)
48458 +#define DWC_OTG_OS_GETDEV(_osdep) \
48459 +        ((_osdep).pcidev == NULL? NULL: &(_osdep).pcidev->dev)
48460 +#elif defined(PLATFORM_INTERFACE)
48461 +#define DWC_OTG_OS_GETDEV(_osdep) \
48462 + ((_osdep).platformdev == NULL? NULL: &(_osdep).platformdev->dev)
48463 +#endif
48464 +
48465 +
48466 +
48467 +
48468 +#endif /* _DWC_OS_DEP_H_ */
48469 --- /dev/null
48470 +++ b/drivers/usb/host/dwc_otg/dwc_otg_pcd.c
48471 @@ -0,0 +1,2725 @@
48472 +/* ==========================================================================
48473 + * $File: //dwh/usb_iip/dev/software/otg/linux/drivers/dwc_otg_pcd.c $
48474 + * $Revision: #101 $
48475 + * $Date: 2012/08/10 $
48476 + * $Change: 2047372 $
48477 + *
48478 + * Synopsys HS OTG Linux Software Driver and documentation (hereinafter,
48479 + * "Software") is an Unsupported proprietary work of Synopsys, Inc. unless
48480 + * otherwise expressly agreed to in writing between Synopsys and you.
48481 + *
48482 + * The Software IS NOT an item of Licensed Software or Licensed Product under
48483 + * any End User Software License Agreement or Agreement for Licensed Product
48484 + * with Synopsys or any supplement thereto. You are permitted to use and
48485 + * redistribute this Software in source and binary forms, with or without
48486 + * modification, provided that redistributions of source code must retain this
48487 + * notice. You may not view, use, disclose, copy or distribute this file or
48488 + * any information contained herein except pursuant to this license grant from
48489 + * Synopsys. If you do not agree with this notice, including the disclaimer
48490 + * below, then you are not authorized to use the Software.
48491 + *
48492 + * THIS SOFTWARE IS BEING DISTRIBUTED BY SYNOPSYS SOLELY ON AN "AS IS" BASIS
48493 + * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
48494 + * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
48495 + * ARE HEREBY DISCLAIMED. IN NO EVENT SHALL SYNOPSYS BE LIABLE FOR ANY DIRECT,
48496 + * INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
48497 + * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
48498 + * SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
48499 + * CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
48500 + * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
48501 + * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH
48502 + * DAMAGE.
48503 + * ========================================================================== */
48504 +#ifndef DWC_HOST_ONLY
48505 +
48506 +/** @file
48507 + * This file implements PCD Core. All code in this file is portable and doesn't
48508 + * use any OS specific functions.
48509 + * PCD Core provides Interface, defined in <code><dwc_otg_pcd_if.h></code>
48510 + * header file, which can be used to implement OS specific PCD interface.
48511 + *
48512 + * An important function of the PCD is managing interrupts generated
48513 + * by the DWC_otg controller. The implementation of the DWC_otg device
48514 + * mode interrupt service routines is in dwc_otg_pcd_intr.c.
48515 + *
48516 + * @todo Add Device Mode test modes (Test J mode, Test K mode, etc).
48517 + * @todo Does it work when the request size is greater than DEPTSIZ
48518 + * transfer size
48519 + *
48520 + */
48521 +
48522 +#include "dwc_otg_pcd.h"
48523 +
48524 +#ifdef DWC_UTE_CFI
48525 +#include "dwc_otg_cfi.h"
48526 +
48527 +extern int init_cfi(cfiobject_t * cfiobj);
48528 +#endif
48529 +
48530 +/**
48531 + * Choose endpoint from ep arrays using usb_ep structure.
48532 + */
48533 +static dwc_otg_pcd_ep_t *get_ep_from_handle(dwc_otg_pcd_t * pcd, void *handle)
48534 +{
48535 + int i;
48536 + if (pcd->ep0.priv == handle) {
48537 + return &pcd->ep0;
48538 + }
48539 + for (i = 0; i < MAX_EPS_CHANNELS - 1; i++) {
48540 + if (pcd->in_ep[i].priv == handle)
48541 + return &pcd->in_ep[i];
48542 + if (pcd->out_ep[i].priv == handle)
48543 + return &pcd->out_ep[i];
48544 + }
48545 +
48546 + return NULL;
48547 +}
48548 +
48549 +/**
48550 + * This function completes a request. It calls the request's completion callback.
48551 + */
48552 +void dwc_otg_request_done(dwc_otg_pcd_ep_t * ep, dwc_otg_pcd_request_t * req,
48553 + int32_t status)
48554 +{
48555 + unsigned stopped = ep->stopped;
48556 +
48557 + DWC_DEBUGPL(DBG_PCDV, "%s(ep %p req %p)\n", __func__, ep, req);
48558 + DWC_CIRCLEQ_REMOVE_INIT(&ep->queue, req, queue_entry);
48559 +
48560 + /* don't modify queue heads during completion callback */
48561 + ep->stopped = 1;
48562 + /* spin_unlock/spin_lock now done in fops->complete() */
48563 + ep->pcd->fops->complete(ep->pcd, ep->priv, req->priv, status,
48564 + req->actual);
48565 +
48566 + if (ep->pcd->request_pending > 0) {
48567 + --ep->pcd->request_pending;
48568 + }
48569 +
48570 + ep->stopped = stopped;
48571 + DWC_FREE(req);
48572 +}
48573 +
48574 +/**
48575 + * This function terminates all the requests in the EP request queue.
48576 + */
48577 +void dwc_otg_request_nuke(dwc_otg_pcd_ep_t * ep)
48578 +{
48579 + dwc_otg_pcd_request_t *req;
48580 +
48581 + ep->stopped = 1;
48582 +
48583 + /* called with irqs blocked?? */
48584 + while (!DWC_CIRCLEQ_EMPTY(&ep->queue)) {
48585 + req = DWC_CIRCLEQ_FIRST(&ep->queue);
48586 + dwc_otg_request_done(ep, req, -DWC_E_SHUTDOWN);
48587 + }
48588 +}
48589 +
48590 +void dwc_otg_pcd_start(dwc_otg_pcd_t * pcd,
48591 + const struct dwc_otg_pcd_function_ops *fops)
48592 +{
48593 + pcd->fops = fops;
48594 +}
48595 +
48596 +/**
48597 + * PCD Callback function for initializing the PCD when switching to
48598 + * device mode.
48599 + *
48600 + * @param p void pointer to the <code>dwc_otg_pcd_t</code>
48601 + */
48602 +static int32_t dwc_otg_pcd_start_cb(void *p)
48603 +{
48604 + dwc_otg_pcd_t *pcd = (dwc_otg_pcd_t *) p;
48605 + dwc_otg_core_if_t *core_if = GET_CORE_IF(pcd);
48606 +
48607 + /*
48608 +	 * Initialize the core for Device mode.
48609 + */
48610 + if (dwc_otg_is_device_mode(core_if)) {
48611 + dwc_otg_core_dev_init(core_if);
48612 + /* Set core_if's lock pointer to the pcd->lock */
48613 + core_if->lock = pcd->lock;
48614 + }
48615 + return 1;
48616 +}
48617 +
48618 +/** CFI-specific buffer allocation function for EP */
48619 +#ifdef DWC_UTE_CFI
48620 +uint8_t *cfiw_ep_alloc_buffer(dwc_otg_pcd_t * pcd, void *pep, dwc_dma_t * addr,
48621 + size_t buflen, int flags)
48622 +{
48623 + dwc_otg_pcd_ep_t *ep;
48624 + ep = get_ep_from_handle(pcd, pep);
48625 + if (!ep) {
48626 + DWC_WARN("bad ep\n");
48627 + return -DWC_E_INVALID;
48628 + }
48629 +
48630 + return pcd->cfi->ops.ep_alloc_buf(pcd->cfi, pcd, ep, addr, buflen,
48631 + flags);
48632 +}
48633 +#else
48634 +uint8_t *cfiw_ep_alloc_buffer(dwc_otg_pcd_t * pcd, void *pep, dwc_dma_t * addr,
48635 + size_t buflen, int flags);
48636 +#endif
48637 +
48638 +/**
48639 + * PCD Callback function for notifying the PCD when resuming from
48640 + * suspend.
48641 + *
48642 + * @param p void pointer to the <code>dwc_otg_pcd_t</code>
48643 + */
48644 +static int32_t dwc_otg_pcd_resume_cb(void *p)
48645 +{
48646 + dwc_otg_pcd_t *pcd = (dwc_otg_pcd_t *) p;
48647 +
48648 + if (pcd->fops->resume) {
48649 + pcd->fops->resume(pcd);
48650 + }
48651 +
48652 + /* Stop the SRP timeout timer. */
48653 + if ((GET_CORE_IF(pcd)->core_params->phy_type != DWC_PHY_TYPE_PARAM_FS)
48654 + || (!GET_CORE_IF(pcd)->core_params->i2c_enable)) {
48655 + if (GET_CORE_IF(pcd)->srp_timer_started) {
48656 + GET_CORE_IF(pcd)->srp_timer_started = 0;
48657 + DWC_TIMER_CANCEL(GET_CORE_IF(pcd)->srp_timer);
48658 + }
48659 + }
48660 + return 1;
48661 +}
48662 +
48663 +/**
48664 + * PCD Callback function for notifying the PCD that the device is suspended.
48665 + *
48666 + * @param p void pointer to the <code>dwc_otg_pcd_t</code>
48667 + */
48668 +static int32_t dwc_otg_pcd_suspend_cb(void *p)
48669 +{
48670 + dwc_otg_pcd_t *pcd = (dwc_otg_pcd_t *) p;
48671 +
48672 + if (pcd->fops->suspend) {
48673 + DWC_SPINUNLOCK(pcd->lock);
48674 + pcd->fops->suspend(pcd);
48675 + DWC_SPINLOCK(pcd->lock);
48676 + }
48677 +
48678 + return 1;
48679 +}
48680 +
48681 +/**
48682 + * PCD Callback function for stopping the PCD when switching to Host
48683 + * mode.
48684 + *
48685 + * @param p void pointer to the <code>dwc_otg_pcd_t</code>
48686 + */
48687 +static int32_t dwc_otg_pcd_stop_cb(void *p)
48688 +{
48689 + dwc_otg_pcd_t *pcd = (dwc_otg_pcd_t *) p;
48690 + extern void dwc_otg_pcd_stop(dwc_otg_pcd_t * _pcd);
48691 +
48692 + dwc_otg_pcd_stop(pcd);
48693 + return 1;
48694 +}
48695 +
48696 +/**
48697 + * PCD Callback structure for handling mode switching.
48698 + */
48699 +static dwc_otg_cil_callbacks_t pcd_callbacks = {
48700 + .start = dwc_otg_pcd_start_cb,
48701 + .stop = dwc_otg_pcd_stop_cb,
48702 + .suspend = dwc_otg_pcd_suspend_cb,
48703 + .resume_wakeup = dwc_otg_pcd_resume_cb,
48704 + .p = 0, /* Set at registration */
48705 +};
48706 +
48707 +/**
48708 + * This function allocates a DMA Descriptor chain for the Endpoint
48709 + * buffer to be used for a transfer to/from the specified endpoint.
48710 + */
48711 +dwc_otg_dev_dma_desc_t *dwc_otg_ep_alloc_desc_chain(struct device *dev,
48712 + dwc_dma_t * dma_desc_addr,
48713 + uint32_t count)
48714 +{
48715 + return DWC_DMA_ALLOC_ATOMIC(dev, count * sizeof(dwc_otg_dev_dma_desc_t),
48716 + dma_desc_addr);
48717 +}
48718 +
48719 +/**
48720 + * This function frees a DMA descriptor chain that was allocated by dwc_otg_ep_alloc_desc_chain().
48721 + */
48722 +void dwc_otg_ep_free_desc_chain(struct device *dev,
48723 + dwc_otg_dev_dma_desc_t * desc_addr,
48724 + uint32_t dma_desc_addr, uint32_t count)
48725 +{
48726 + DWC_DMA_FREE(dev, count * sizeof(dwc_otg_dev_dma_desc_t), desc_addr,
48727 + dma_desc_addr);
48728 +}
48729 +
48730 +#ifdef DWC_EN_ISOC
48731 +
48732 +/**
48733 + * This function initializes a descriptor chain for Isochronous transfer
48734 + *
48735 + * @param core_if Programming view of DWC_otg controller.
48736 + * @param dwc_ep The EP to start the transfer on.
48737 + *
48738 + */
48739 +void dwc_otg_iso_ep_start_ddma_transfer(dwc_otg_core_if_t * core_if,
48740 + dwc_ep_t * dwc_ep)
48741 +{
48742 +
48743 + dsts_data_t dsts = {.d32 = 0 };
48744 + depctl_data_t depctl = {.d32 = 0 };
48745 + volatile uint32_t *addr;
48746 + int i, j;
48747 + uint32_t len;
48748 +
48749 + if (dwc_ep->is_in)
48750 + dwc_ep->desc_cnt = dwc_ep->buf_proc_intrvl / dwc_ep->bInterval;
48751 + else
48752 + dwc_ep->desc_cnt =
48753 + dwc_ep->buf_proc_intrvl * dwc_ep->pkt_per_frm /
48754 + dwc_ep->bInterval;
48755 +
48756 + /** Allocate descriptors for double buffering */
48757 + dwc_ep->iso_desc_addr =
48758 + dwc_otg_ep_alloc_desc_chain(&dwc_ep->iso_dma_desc_addr,
48759 + dwc_ep->desc_cnt * 2);
48760 + if (!dwc_ep->iso_desc_addr) {
48761 + DWC_WARN("%s, can't allocate DMA descriptor chain\n", __func__);
48762 + return;
48763 + }
48764 +
48765 + dsts.d32 = DWC_READ_REG32(&core_if->dev_if->dev_global_regs->dsts);
48766 +
48767 + /** ISO OUT EP */
48768 + if (dwc_ep->is_in == 0) {
48769 + dev_dma_desc_sts_t sts = {.d32 = 0 };
48770 + dwc_otg_dev_dma_desc_t *dma_desc = dwc_ep->iso_desc_addr;
48771 + dma_addr_t dma_ad;
48772 + uint32_t data_per_desc;
48773 + dwc_otg_dev_out_ep_regs_t *out_regs =
48774 + core_if->dev_if->out_ep_regs[dwc_ep->num];
48775 + int offset;
48776 +
48777 + addr = &core_if->dev_if->out_ep_regs[dwc_ep->num]->doepctl;
48778 + dma_ad = (dma_addr_t) DWC_READ_REG32(&(out_regs->doepdma));
48779 +
48780 + /** Buffer 0 descriptors setup */
48781 + dma_ad = dwc_ep->dma_addr0;
48782 +
48783 + sts.b_iso_out.bs = BS_HOST_READY;
48784 + sts.b_iso_out.rxsts = 0;
48785 + sts.b_iso_out.l = 0;
48786 + sts.b_iso_out.sp = 0;
48787 + sts.b_iso_out.ioc = 0;
48788 + sts.b_iso_out.pid = 0;
48789 + sts.b_iso_out.framenum = 0;
48790 +
48791 + offset = 0;
48792 + for (i = 0; i < dwc_ep->desc_cnt - dwc_ep->pkt_per_frm;
48793 + i += dwc_ep->pkt_per_frm) {
48794 +
48795 + for (j = 0; j < dwc_ep->pkt_per_frm; ++j) {
48796 + uint32_t len = (j + 1) * dwc_ep->maxpacket;
48797 + if (len > dwc_ep->data_per_frame)
48798 + data_per_desc =
48799 + dwc_ep->data_per_frame -
48800 + j * dwc_ep->maxpacket;
48801 + else
48802 + data_per_desc = dwc_ep->maxpacket;
48803 + len = data_per_desc % 4;
48804 + if (len)
48805 + data_per_desc += 4 - len;
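+ /* The preceding two lines round data_per_desc up to a 4-byte (word)
+ * boundary; e.g. an illustrative 190-byte payload becomes 192. */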
48806 +
48807 + sts.b_iso_out.rxbytes = data_per_desc;
48808 + dma_desc->buf = dma_ad;
48809 + dma_desc->status.d32 = sts.d32;
48810 +
48811 + offset += data_per_desc;
48812 + dma_desc++;
48813 + dma_ad += data_per_desc;
48814 + }
48815 + }
48816 +
48817 + for (j = 0; j < dwc_ep->pkt_per_frm - 1; ++j) {
48818 + uint32_t len = (j + 1) * dwc_ep->maxpacket;
48819 + if (len > dwc_ep->data_per_frame)
48820 + data_per_desc =
48821 + dwc_ep->data_per_frame -
48822 + j * dwc_ep->maxpacket;
48823 + else
48824 + data_per_desc = dwc_ep->maxpacket;
48825 + len = data_per_desc % 4;
48826 + if (len)
48827 + data_per_desc += 4 - len;
48828 + sts.b_iso_out.rxbytes = data_per_desc;
48829 + dma_desc->buf = dma_ad;
48830 + dma_desc->status.d32 = sts.d32;
48831 +
48832 + offset += data_per_desc;
48833 + dma_desc++;
48834 + dma_ad += data_per_desc;
48835 + }
48836 +
48837 + sts.b_iso_out.ioc = 1;
48838 + len = (j + 1) * dwc_ep->maxpacket;
48839 + if (len > dwc_ep->data_per_frame)
48840 + data_per_desc =
48841 + dwc_ep->data_per_frame - j * dwc_ep->maxpacket;
48842 + else
48843 + data_per_desc = dwc_ep->maxpacket;
48844 + len = data_per_desc % 4;
48845 + if (len)
48846 + data_per_desc += 4 - len;
48847 + sts.b_iso_out.rxbytes = data_per_desc;
48848 +
48849 + dma_desc->buf = dma_ad;
48850 + dma_desc->status.d32 = sts.d32;
48851 + dma_desc++;
48852 +
48853 + /** Buffer 1 descriptors setup */
48854 + sts.b_iso_out.ioc = 0;
48855 + dma_ad = dwc_ep->dma_addr1;
48856 +
48857 + offset = 0;
48858 + for (i = 0; i < dwc_ep->desc_cnt - dwc_ep->pkt_per_frm;
48859 + i += dwc_ep->pkt_per_frm) {
48860 + for (j = 0; j < dwc_ep->pkt_per_frm; ++j) {
48861 + uint32_t len = (j + 1) * dwc_ep->maxpacket;
48862 + if (len > dwc_ep->data_per_frame)
48863 + data_per_desc =
48864 + dwc_ep->data_per_frame -
48865 + j * dwc_ep->maxpacket;
48866 + else
48867 + data_per_desc = dwc_ep->maxpacket;
48868 + len = data_per_desc % 4;
48869 + if (len)
48870 + data_per_desc += 4 - len;
48871 +
48872 + sts.b_iso_out.rxbytes = data_per_desc;
48874 + dma_desc->buf = dma_ad;
48875 + dma_desc->status.d32 = sts.d32;
48876 +
48877 + offset += data_per_desc;
48878 + dma_desc++;
48879 + dma_ad += data_per_desc;
48880 + }
48881 + }
48882 + for (j = 0; j < dwc_ep->pkt_per_frm - 1; ++j) {
48883 + data_per_desc =
48884 + ((j + 1) * dwc_ep->maxpacket >
48885 + dwc_ep->data_per_frame) ? dwc_ep->data_per_frame -
48886 + j * dwc_ep->maxpacket : dwc_ep->maxpacket;
48887 + data_per_desc +=
48888 + (data_per_desc % 4) ? (4 - data_per_desc % 4) : 0;
48889 + sts.b_iso_out.rxbytes = data_per_desc;
48890 + dma_desc->buf = dma_ad;
48891 + dma_desc->status.d32 = sts.d32;
48892 +
48893 + offset += data_per_desc;
48894 + dma_desc++;
48895 + dma_ad += data_per_desc;
48896 + }
48897 +
48898 + sts.b_iso_out.ioc = 1;
48899 + sts.b_iso_out.l = 1;
48900 + data_per_desc =
48901 + ((j + 1) * dwc_ep->maxpacket >
48902 + dwc_ep->data_per_frame) ? dwc_ep->data_per_frame -
48903 + j * dwc_ep->maxpacket : dwc_ep->maxpacket;
48904 + data_per_desc +=
48905 + (data_per_desc % 4) ? (4 - data_per_desc % 4) : 0;
48906 + sts.b_iso_out.rxbytes = data_per_desc;
48907 +
48908 + dma_desc->buf = dma_ad;
48909 + dma_desc->status.d32 = sts.d32;
48910 +
48911 + dwc_ep->next_frame = 0;
48912 +
48913 + /** Write dma_ad into DOEPDMA register */
48914 + DWC_WRITE_REG32(&(out_regs->doepdma),
48915 + (uint32_t) dwc_ep->iso_dma_desc_addr);
48916 +
48917 + }
48918 + /** ISO IN EP */
48919 + else {
48920 + dev_dma_desc_sts_t sts = {.d32 = 0 };
48921 + dwc_otg_dev_dma_desc_t *dma_desc = dwc_ep->iso_desc_addr;
48922 + dma_addr_t dma_ad;
48923 + dwc_otg_dev_in_ep_regs_t *in_regs =
48924 + core_if->dev_if->in_ep_regs[dwc_ep->num];
48925 + unsigned int frmnumber;
48926 + fifosize_data_t txfifosize, rxfifosize;
48927 +
48928 + txfifosize.d32 =
48929 + DWC_READ_REG32(&core_if->dev_if->in_ep_regs[dwc_ep->num]->
48930 + dtxfsts);
48931 + rxfifosize.d32 =
48932 + DWC_READ_REG32(&core_if->core_global_regs->grxfsiz);
48933 +
48934 + addr = &core_if->dev_if->in_ep_regs[dwc_ep->num]->diepctl;
48935 +
48936 + dma_ad = dwc_ep->dma_addr0;
48937 +
48938 + dsts.d32 =
48939 + DWC_READ_REG32(&core_if->dev_if->dev_global_regs->dsts);
48940 +
48941 + sts.b_iso_in.bs = BS_HOST_READY;
48942 + sts.b_iso_in.txsts = 0;
48943 + sts.b_iso_in.sp =
48944 + (dwc_ep->data_per_frame % dwc_ep->maxpacket) ? 1 : 0;
48945 + sts.b_iso_in.ioc = 0;
48946 + sts.b_iso_in.pid = dwc_ep->pkt_per_frm;
48947 +
48948 + frmnumber = dwc_ep->next_frame;
48949 +
48950 + sts.b_iso_in.framenum = frmnumber;
48951 + sts.b_iso_in.txbytes = dwc_ep->data_per_frame;
48952 + sts.b_iso_in.l = 0;
48953 +
48954 + /** Buffer 0 descriptors setup */
48955 + for (i = 0; i < dwc_ep->desc_cnt - 1; i++) {
48956 + dma_desc->buf = dma_ad;
48957 + dma_desc->status.d32 = sts.d32;
48958 + dma_desc++;
48959 +
48960 + dma_ad += dwc_ep->data_per_frame;
48961 + sts.b_iso_in.framenum += dwc_ep->bInterval;
48962 + }
48963 +
48964 + sts.b_iso_in.ioc = 1;
48965 + dma_desc->buf = dma_ad;
48966 + dma_desc->status.d32 = sts.d32;
48967 + ++dma_desc;
48968 +
48969 + /** Buffer 1 descriptors setup */
48970 + sts.b_iso_in.ioc = 0;
48971 + dma_ad = dwc_ep->dma_addr1;
48972 +
48973 + for (i = 0; i < dwc_ep->desc_cnt - dwc_ep->pkt_per_frm;
48974 + i += dwc_ep->pkt_per_frm) {
48975 + dma_desc->buf = dma_ad;
48976 + dma_desc->status.d32 = sts.d32;
48977 + dma_desc++;
48978 +
48979 + dma_ad += dwc_ep->data_per_frame;
48980 + sts.b_iso_in.framenum += dwc_ep->bInterval;
48981 +
48982 + sts.b_iso_in.ioc = 0;
48983 + }
48984 + sts.b_iso_in.ioc = 1;
48985 + sts.b_iso_in.l = 1;
48986 +
48987 + dma_desc->buf = dma_ad;
48988 + dma_desc->status.d32 = sts.d32;
48989 +
48990 + dwc_ep->next_frame = sts.b_iso_in.framenum + dwc_ep->bInterval;
48991 +
48992 + /** Write dma_ad into diepdma register */
48993 + DWC_WRITE_REG32(&(in_regs->diepdma),
48994 + (uint32_t) dwc_ep->iso_dma_desc_addr);
48995 + }
48996 + /** Enable endpoint, clear nak */
48997 + depctl.d32 = 0;
48998 + depctl.b.epena = 1;
48999 + depctl.b.usbactep = 1;
49000 + depctl.b.cnak = 1;
49001 +
49002 + DWC_MODIFY_REG32(addr, depctl.d32, depctl.d32);
49003 + depctl.d32 = DWC_READ_REG32(addr);
49004 +}
49005 +
49006 +/**
49007 + * This function sets up and starts a buffer DMA transfer for an isochronous EP
49008 + *
49009 + * @param core_if Programming view of DWC_otg controller.
49010 + * @param ep The EP to start the transfer on.
49011 + *
49012 + */
49013 +void dwc_otg_iso_ep_start_buf_transfer(dwc_otg_core_if_t * core_if,
49014 + dwc_ep_t * ep)
49015 +{
49016 + depctl_data_t depctl = {.d32 = 0 };
49017 + volatile uint32_t *addr;
49018 +
49019 + if (ep->is_in) {
49020 + addr = &core_if->dev_if->in_ep_regs[ep->num]->diepctl;
49021 + } else {
49022 + addr = &core_if->dev_if->out_ep_regs[ep->num]->doepctl;
49023 + }
49024 +
49025 + if (core_if->dma_enable == 0 || core_if->dma_desc_enable != 0) {
49026 + return;
49027 + } else {
49028 + deptsiz_data_t deptsiz = {.d32 = 0 };
49029 +
49030 + ep->xfer_len =
49031 + ep->data_per_frame * ep->buf_proc_intrvl / ep->bInterval;
49032 + ep->pkt_cnt =
49033 + (ep->xfer_len - 1 + ep->maxpacket) / ep->maxpacket;
49034 + ep->xfer_count = 0;
49035 + ep->xfer_buff =
49036 + (ep->proc_buf_num) ? ep->xfer_buff1 : ep->xfer_buff0;
49037 + ep->dma_addr =
49038 + (ep->proc_buf_num) ? ep->dma_addr1 : ep->dma_addr0;
49039 +
49040 + if (ep->is_in) {
49041 + /* Program the transfer size and packet count
49042 + * as follows:
49043 + *   xfersize = N * maxpacket + short_packet
49044 + *   pktcnt   = N + (short_packet exist ? 1 : 0)
49045 + */
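+ /* Illustrative example: xfer_len = 700 with maxpacket = 512 gives
+ * pktcnt = (700 - 1 + 512) / 512 = 2 and xfersize = 700, i.e. one
+ * full 512-byte packet followed by a short 188-byte packet. */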
49046 + deptsiz.b.mc = ep->pkt_per_frm;
49047 + deptsiz.b.xfersize = ep->xfer_len;
49048 + deptsiz.b.pktcnt =
49049 + (ep->xfer_len - 1 + ep->maxpacket) / ep->maxpacket;
49050 + DWC_WRITE_REG32(&core_if->dev_if->in_ep_regs[ep->num]->
49051 + dieptsiz, deptsiz.d32);
49052 +
49053 + /* Write the DMA register */
49054 + DWC_WRITE_REG32(&
49055 + (core_if->dev_if->in_ep_regs[ep->num]->
49056 + diepdma), (uint32_t) ep->dma_addr);
49057 +
49058 + } else {
49059 + deptsiz.b.pktcnt =
49060 + (ep->xfer_len + (ep->maxpacket - 1)) /
49061 + ep->maxpacket;
49062 + deptsiz.b.xfersize = deptsiz.b.pktcnt * ep->maxpacket;
49063 +
49064 + DWC_WRITE_REG32(&core_if->dev_if->out_ep_regs[ep->num]->
49065 + doeptsiz, deptsiz.d32);
49066 +
49067 + /* Write the DMA register */
49068 + DWC_WRITE_REG32(&
49069 + (core_if->dev_if->out_ep_regs[ep->num]->
49070 + doepdma), (uint32_t) ep->dma_addr);
49071 +
49072 + }
49073 + /** Enable endpoint, clear nak */
49074 + depctl.d32 = 0;
49075 + depctl.b.epena = 1;
49076 + depctl.b.cnak = 1;
49077 +
49078 + DWC_MODIFY_REG32(addr, depctl.d32, depctl.d32);
49079 + }
49080 +}
49081 +
49082 +/**
49083 + * This function does the setup for a data transfer for an EP and
49084 + * starts the transfer. For an IN transfer, the packets will be
49085 + * loaded into the appropriate Tx FIFO in the ISR. For OUT transfers,
49086 + * the packets are unloaded from the Rx FIFO in the ISR.
49087 + *
49088 + * @param core_if Programming view of DWC_otg controller.
49089 + * @param ep The EP to start the transfer on.
49090 + */
49091 +
49092 +static void dwc_otg_iso_ep_start_transfer(dwc_otg_core_if_t * core_if,
49093 + dwc_ep_t * ep)
49094 +{
49095 + if (core_if->dma_enable) {
49096 + if (core_if->dma_desc_enable) {
49097 + if (ep->is_in) {
49098 + ep->desc_cnt = ep->pkt_cnt / ep->pkt_per_frm;
49099 + } else {
49100 + ep->desc_cnt = ep->pkt_cnt;
49101 + }
49102 + dwc_otg_iso_ep_start_ddma_transfer(core_if, ep);
49103 + } else {
49104 + if (core_if->pti_enh_enable) {
49105 + dwc_otg_iso_ep_start_buf_transfer(core_if, ep);
49106 + } else {
49107 + ep->cur_pkt_addr =
49108 + (ep->proc_buf_num) ? ep->xfer_buff1 : ep->
49109 + xfer_buff0;
49110 + ep->cur_pkt_dma_addr =
49111 + (ep->proc_buf_num) ? ep->dma_addr1 : ep->
49112 + dma_addr0;
49113 + dwc_otg_iso_ep_start_frm_transfer(core_if, ep);
49114 + }
49115 + }
49116 + } else {
49117 + ep->cur_pkt_addr =
49118 + (ep->proc_buf_num) ? ep->xfer_buff1 : ep->xfer_buff0;
49119 + ep->cur_pkt_dma_addr =
49120 + (ep->proc_buf_num) ? ep->dma_addr1 : ep->dma_addr0;
49121 + dwc_otg_iso_ep_start_frm_transfer(core_if, ep);
49122 + }
49123 +}
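+/*
+ * Dispatch summary for the function above: descriptor DMA builds a DMA
+ * descriptor chain (ddma), buffer DMA with the PTI enhancement programs
+ * one buffer per processing interval, and all remaining cases fall back
+ * to per-frame transfers via dwc_otg_iso_ep_start_frm_transfer().
+ */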
49124 +
49125 +/**
49126 + * This function stops the transfer on an EP and
49127 + * resets the EP's variables.
49128 + *
49129 + * @param core_if Programming view of DWC_otg controller.
49130 + * @param ep The EP to stop the transfer on.
49131 + */
49132 +
49133 +void dwc_otg_iso_ep_stop_transfer(dwc_otg_core_if_t * core_if, dwc_ep_t * ep)
49134 +{
49135 + depctl_data_t depctl = {.d32 = 0 };
49136 + volatile uint32_t *addr;
49137 +
49138 + if (ep->is_in == 1) {
49139 + addr = &core_if->dev_if->in_ep_regs[ep->num]->diepctl;
49140 + } else {
49141 + addr = &core_if->dev_if->out_ep_regs[ep->num]->doepctl;
49142 + }
49143 +
49144 + /* disable the ep */
49145 + depctl.d32 = DWC_READ_REG32(addr);
49146 +
49147 + depctl.b.epdis = 1;
49148 + depctl.b.snak = 1;
49149 +
49150 + DWC_WRITE_REG32(addr, depctl.d32);
49151 +
49152 + if (core_if->dma_desc_enable &&
49153 + ep->iso_desc_addr && ep->iso_dma_desc_addr) {
49154 + dwc_otg_ep_free_desc_chain(ep->iso_desc_addr,
49155 + ep->iso_dma_desc_addr,
49156 + ep->desc_cnt * 2);
49157 + }
49158 +
49159 + /* reset variables */
49160 + ep->dma_addr0 = 0;
49161 + ep->dma_addr1 = 0;
49162 + ep->xfer_buff0 = 0;
49163 + ep->xfer_buff1 = 0;
49164 + ep->data_per_frame = 0;
49165 + ep->data_pattern_frame = 0;
49166 + ep->sync_frame = 0;
49167 + ep->buf_proc_intrvl = 0;
49168 + ep->bInterval = 0;
49169 + ep->proc_buf_num = 0;
49170 + ep->pkt_per_frm = 0;
49172 + ep->desc_cnt = 0;
49173 + ep->iso_desc_addr = 0;
49174 + ep->iso_dma_desc_addr = 0;
49175 +}
49176 +
49177 +int dwc_otg_pcd_iso_ep_start(dwc_otg_pcd_t * pcd, void *ep_handle,
49178 + uint8_t * buf0, uint8_t * buf1, dwc_dma_t dma0,
49179 + dwc_dma_t dma1, int sync_frame, int dp_frame,
49180 + int data_per_frame, int start_frame,
49181 + int buf_proc_intrvl, void *req_handle,
49182 + int atomic_alloc)
49183 +{
49184 + dwc_otg_pcd_ep_t *ep;
49185 + dwc_irqflags_t flags = 0;
49186 + dwc_ep_t *dwc_ep;
49187 + int32_t frm_data;
49188 + dsts_data_t dsts;
49189 + dwc_otg_core_if_t *core_if;
49190 +
49191 + ep = get_ep_from_handle(pcd, ep_handle);
49192 +
49193 + if (!ep || !ep->desc || ep->dwc_ep.num == 0) {
49194 + DWC_WARN("bad ep\n");
49195 + return -DWC_E_INVALID;
49196 + }
49197 +
49198 + DWC_SPINLOCK_IRQSAVE(pcd->lock, &flags);
49199 + core_if = GET_CORE_IF(pcd);
49200 + dwc_ep = &ep->dwc_ep;
49201 +
49202 + if (ep->iso_req_handle) {
49203 + DWC_WARN("ISO request in progress\n");
49204 + }
49205 +
49206 + dwc_ep->dma_addr0 = dma0;
49207 + dwc_ep->dma_addr1 = dma1;
49208 +
49209 + dwc_ep->xfer_buff0 = buf0;
49210 + dwc_ep->xfer_buff1 = buf1;
49211 +
49212 + dwc_ep->data_per_frame = data_per_frame;
49213 +
49214 + /** @todo - pattern data support is to be implemented in the future */
49215 + dwc_ep->data_pattern_frame = dp_frame;
49216 + dwc_ep->sync_frame = sync_frame;
49217 +
49218 + dwc_ep->buf_proc_intrvl = buf_proc_intrvl;
49219 +
49220 + dwc_ep->bInterval = 1 << (ep->desc->bInterval - 1);
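+ /* Illustrative example: a descriptor bInterval of 4 gives
+ * dwc_ep->bInterval = 1 << 3 = 8 (micro)frames between services. */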
49221 +
49222 + dwc_ep->proc_buf_num = 0;
49223 +
49224 + dwc_ep->pkt_per_frm = 0;
49225 + frm_data = ep->dwc_ep.data_per_frame;
49226 + while (frm_data > 0) {
49227 + dwc_ep->pkt_per_frm++;
49228 + frm_data -= ep->dwc_ep.maxpacket;
49229 + }
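+ /* The loop above computes pkt_per_frm as the ceiling of
+ * data_per_frame / maxpacket; e.g. 3000 bytes per frame with a
+ * 1024-byte maxpacket gives 3 packets per frame (illustrative). */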
49230 +
49231 + dsts.d32 = DWC_READ_REG32(&core_if->dev_if->dev_global_regs->dsts);
49232 +
49233 + if (start_frame == -1) {
49234 + dwc_ep->next_frame = dsts.b.soffn + 1;
49235 + if (dwc_ep->bInterval != 1) {
49236 + dwc_ep->next_frame =
49237 + dwc_ep->next_frame + (dwc_ep->bInterval - 1 -
49238 + dwc_ep->next_frame %
49239 + dwc_ep->bInterval);
49240 + }
49241 + } else {
49242 + dwc_ep->next_frame = start_frame;
49243 + }
49244 +
49245 + if (!core_if->pti_enh_enable) {
49246 + dwc_ep->pkt_cnt =
49247 + dwc_ep->buf_proc_intrvl * dwc_ep->pkt_per_frm /
49248 + dwc_ep->bInterval;
49249 + } else {
49250 + dwc_ep->pkt_cnt =
49251 + (dwc_ep->data_per_frame *
49252 + (dwc_ep->buf_proc_intrvl / dwc_ep->bInterval)
49253 + - 1 + dwc_ep->maxpacket) / dwc_ep->maxpacket;
49254 + }
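+ /* Illustrative example: buf_proc_intrvl = 8, bInterval = 1 and
+ * pkt_per_frm = 3 give pkt_cnt = 24 in the non-PTI branch; the PTI
+ * branch instead divides the total bytes per processing buffer by
+ * maxpacket, rounded up. */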
49255 +
49256 + if (core_if->dma_desc_enable) {
49257 + dwc_ep->desc_cnt =
49258 + dwc_ep->buf_proc_intrvl * dwc_ep->pkt_per_frm /
49259 + dwc_ep->bInterval;
49260 + }
49261 +
49262 + if (atomic_alloc) {
49263 + dwc_ep->pkt_info =
49264 + DWC_ALLOC_ATOMIC(sizeof(iso_pkt_info_t) * dwc_ep->pkt_cnt);
49265 + } else {
49266 + dwc_ep->pkt_info =
49267 + DWC_ALLOC(sizeof(iso_pkt_info_t) * dwc_ep->pkt_cnt);
49268 + }
49269 + if (!dwc_ep->pkt_info) {
49270 + DWC_SPINUNLOCK_IRQRESTORE(pcd->lock, flags);
49271 + return -DWC_E_NO_MEMORY;
49272 + }
49273 + if (core_if->pti_enh_enable) {
49274 + dwc_memset(dwc_ep->pkt_info, 0,
49275 + sizeof(iso_pkt_info_t) * dwc_ep->pkt_cnt);
49276 + }
49277 +
49278 + dwc_ep->cur_pkt = 0;
49279 + ep->iso_req_handle = req_handle;
49280 +
49281 + DWC_SPINUNLOCK_IRQRESTORE(pcd->lock, flags);
49282 + dwc_otg_iso_ep_start_transfer(core_if, dwc_ep);
49283 + return 0;
49284 +}
49285 +
49286 +int dwc_otg_pcd_iso_ep_stop(dwc_otg_pcd_t * pcd, void *ep_handle,
49287 + void *req_handle)
49288 +{
49289 + dwc_irqflags_t flags = 0;
49290 + dwc_otg_pcd_ep_t *ep;
49291 + dwc_ep_t *dwc_ep;
49292 +
49293 + ep = get_ep_from_handle(pcd, ep_handle);
49294 + if (!ep || !ep->desc || ep->dwc_ep.num == 0) {
49295 + DWC_WARN("bad ep\n");
49296 + return -DWC_E_INVALID;
49297 + }
49298 + dwc_ep = &ep->dwc_ep;
49299 +
49300 + dwc_otg_iso_ep_stop_transfer(GET_CORE_IF(pcd), dwc_ep);
49301 +
49302 + DWC_FREE(dwc_ep->pkt_info);
49303 + DWC_SPINLOCK_IRQSAVE(pcd->lock, &flags);
49304 + if (ep->iso_req_handle != req_handle) {
49305 + DWC_SPINUNLOCK_IRQRESTORE(pcd->lock, flags);
49306 + return -DWC_E_INVALID;
49307 + }
49308 +
49309 + DWC_SPINUNLOCK_IRQRESTORE(pcd->lock, flags);
49310 +
49311 + ep->iso_req_handle = 0;
49312 + return 0;
49313 +}
49314 +
49315 +/**
49316 + * This function is used for periodic data exchange between the PCD and
49317 + * gadget drivers for isochronous EPs.
49318 + *
49319 + * - Every time a sync period completes, this function is called to
49320 + * perform the data exchange between the PCD and the gadget.
49321 + */
49322 +void dwc_otg_iso_buffer_done(dwc_otg_pcd_t * pcd, dwc_otg_pcd_ep_t * ep,
49323 + void *req_handle)
49324 +{
49325 + int i;
49326 + dwc_ep_t *dwc_ep;
49327 +
49328 + dwc_ep = &ep->dwc_ep;
49329 +
49330 + DWC_SPINUNLOCK(ep->pcd->lock);
49331 + pcd->fops->isoc_complete(pcd, ep->priv, ep->iso_req_handle,
49332 + dwc_ep->proc_buf_num ^ 0x1);
49333 + DWC_SPINLOCK(ep->pcd->lock);
49334 +
49335 + for (i = 0; i < dwc_ep->pkt_cnt; ++i) {
49336 + dwc_ep->pkt_info[i].status = 0;
49337 + dwc_ep->pkt_info[i].offset = 0;
49338 + dwc_ep->pkt_info[i].length = 0;
49339 + }
49340 +}
49341 +
49342 +int dwc_otg_pcd_get_iso_packet_count(dwc_otg_pcd_t * pcd, void *ep_handle,
49343 + void *iso_req_handle)
49344 +{
49345 + dwc_otg_pcd_ep_t *ep;
49346 + dwc_ep_t *dwc_ep;
49347 +
49348 + ep = get_ep_from_handle(pcd, ep_handle);
49349 + if (!ep || !ep->desc || ep->dwc_ep.num == 0) {
49350 + DWC_WARN("bad ep\n");
49351 + return -DWC_E_INVALID;
49352 + }
49353 + dwc_ep = &ep->dwc_ep;
49354 +
49355 + return dwc_ep->pkt_cnt;
49356 +}
49357 +
49358 +void dwc_otg_pcd_get_iso_packet_params(dwc_otg_pcd_t * pcd, void *ep_handle,
49359 + void *iso_req_handle, int packet,
49360 + int *status, int *actual, int *offset)
49361 +{
49362 + dwc_otg_pcd_ep_t *ep;
49363 + dwc_ep_t *dwc_ep;
49364 +
49365 + ep = get_ep_from_handle(pcd, ep_handle);
49366 + if (!ep) {
49367 + DWC_WARN("bad ep\n");
+ return;
+ }
49368 +
49369 + dwc_ep = &ep->dwc_ep;
49370 +
49371 + *status = dwc_ep->pkt_info[packet].status;
49372 + *actual = dwc_ep->pkt_info[packet].length;
49373 + *offset = dwc_ep->pkt_info[packet].offset;
49374 +}
49375 +
49376 +#endif /* DWC_EN_ISOC */
49377 +
49378 +static void dwc_otg_pcd_init_ep(dwc_otg_pcd_t * pcd, dwc_otg_pcd_ep_t * pcd_ep,
49379 + uint32_t is_in, uint32_t ep_num)
49380 +{
49381 + /* Init EP structure */
49382 + pcd_ep->desc = 0;
49383 + pcd_ep->pcd = pcd;
49384 + pcd_ep->stopped = 1;
49385 + pcd_ep->queue_sof = 0;
49386 +
49387 + /* Init DWC ep structure */
49388 + pcd_ep->dwc_ep.is_in = is_in;
49389 + pcd_ep->dwc_ep.num = ep_num;
49390 + pcd_ep->dwc_ep.active = 0;
49391 + pcd_ep->dwc_ep.tx_fifo_num = 0;
49392 + /* Control until the EP is activated */
49393 + pcd_ep->dwc_ep.type = DWC_OTG_EP_TYPE_CONTROL;
49394 + pcd_ep->dwc_ep.maxpacket = MAX_PACKET_SIZE;
49395 + pcd_ep->dwc_ep.dma_addr = 0;
49396 + pcd_ep->dwc_ep.start_xfer_buff = 0;
49397 + pcd_ep->dwc_ep.xfer_buff = 0;
49398 + pcd_ep->dwc_ep.xfer_len = 0;
49399 + pcd_ep->dwc_ep.xfer_count = 0;
49400 + pcd_ep->dwc_ep.sent_zlp = 0;
49401 + pcd_ep->dwc_ep.total_len = 0;
49402 + pcd_ep->dwc_ep.desc_addr = 0;
49403 + pcd_ep->dwc_ep.dma_desc_addr = 0;
49404 + DWC_CIRCLEQ_INIT(&pcd_ep->queue);
49405 +}
49406 +
49407 +/**
49408 + * Initialize the EPs
49409 + */
49410 +static void dwc_otg_pcd_reinit(dwc_otg_pcd_t * pcd)
49411 +{
49412 + int i;
49413 + uint32_t hwcfg1;
49414 + dwc_otg_pcd_ep_t *ep;
49415 + int in_ep_cntr, out_ep_cntr;
49416 + uint32_t num_in_eps = (GET_CORE_IF(pcd))->dev_if->num_in_eps;
49417 + uint32_t num_out_eps = (GET_CORE_IF(pcd))->dev_if->num_out_eps;
49418 +
49419 + /**
49420 + * Initialize the EP0 structure.
49421 + */
49422 + ep = &pcd->ep0;
49423 + dwc_otg_pcd_init_ep(pcd, ep, 0, 0);
49424 +
49425 + in_ep_cntr = 0;
49426 + hwcfg1 = (GET_CORE_IF(pcd))->hwcfg1.d32 >> 3;
49427 + for (i = 1; in_ep_cntr < num_in_eps; i++) {
49428 + if ((hwcfg1 & 0x1) == 0) {
49429 + dwc_otg_pcd_ep_t *ep = &pcd->in_ep[in_ep_cntr];
49430 + in_ep_cntr++;
49431 + /**
49432 + * @todo NGS: Add direction to EP, based on contents
49433 + * of HWCFG1. Need a copy of HWCFG1 in pcd structure?
49435 + */
49436 + dwc_otg_pcd_init_ep(pcd, ep, 1 /* IN */ , i);
49437 +
49438 + DWC_CIRCLEQ_INIT(&ep->queue);
49439 + }
49440 + hwcfg1 >>= 2;
49441 + }
49442 +
49443 + out_ep_cntr = 0;
49444 + hwcfg1 = (GET_CORE_IF(pcd))->hwcfg1.d32 >> 2;
49445 + for (i = 1; out_ep_cntr < num_out_eps; i++) {
49446 + if ((hwcfg1 & 0x1) == 0) {
49447 + dwc_otg_pcd_ep_t *ep = &pcd->out_ep[out_ep_cntr];
49448 + out_ep_cntr++;
49449 + /**
49450 + * @todo NGS: Add direction to EP, based on contents
49451 + * of HWCFG1. Need a copy of HWCFG1 in pcd structure?
49453 + */
49454 + dwc_otg_pcd_init_ep(pcd, ep, 0 /* OUT */ , i);
49455 + DWC_CIRCLEQ_INIT(&ep->queue);
49456 + }
49457 + hwcfg1 >>= 2;
49458 + }
49459 +
49460 + pcd->ep0state = EP0_DISCONNECT;
49461 + pcd->ep0.dwc_ep.maxpacket = MAX_EP0_SIZE;
49462 + pcd->ep0.dwc_ep.type = DWC_OTG_EP_TYPE_CONTROL;
49463 +}
49464 +
49465 +/**
49466 + * This function is called when the SRP timer expires. The SRP should
49467 + * complete within 6 seconds.
49468 + */
49469 +static void srp_timeout(void *ptr)
49470 +{
49471 + gotgctl_data_t gotgctl;
49472 + dwc_otg_core_if_t *core_if = (dwc_otg_core_if_t *) ptr;
49473 + volatile uint32_t *addr = &core_if->core_global_regs->gotgctl;
49474 +
49475 + gotgctl.d32 = DWC_READ_REG32(addr);
49476 +
49477 + core_if->srp_timer_started = 0;
49478 +
49479 + if (core_if->adp_enable) {
49480 + if (gotgctl.b.bsesvld == 0) {
49481 + gpwrdn_data_t gpwrdn = {.d32 = 0 };
49482 + DWC_PRINTF("SRP Timeout BSESSVLD = 0\n");
49483 + /* Power off the core */
49484 + if (core_if->power_down == 2) {
49485 + gpwrdn.b.pwrdnswtch = 1;
49486 + DWC_MODIFY_REG32(&core_if->
49487 + core_global_regs->gpwrdn,
49488 + gpwrdn.d32, 0);
49489 + }
49490 +
49491 + gpwrdn.d32 = 0;
49492 + gpwrdn.b.pmuintsel = 1;
49493 + gpwrdn.b.pmuactv = 1;
49494 + DWC_MODIFY_REG32(&core_if->core_global_regs->gpwrdn, 0,
49495 + gpwrdn.d32);
49496 + dwc_otg_adp_probe_start(core_if);
49497 + } else {
49498 + DWC_PRINTF("SRP Timeout BSESSVLD = 1\n");
49499 + core_if->op_state = B_PERIPHERAL;
49500 + dwc_otg_core_init(core_if);
49501 + dwc_otg_enable_global_interrupts(core_if);
49502 + cil_pcd_start(core_if);
49503 + }
49504 + }
49505 +
49506 + if ((core_if->core_params->phy_type == DWC_PHY_TYPE_PARAM_FS) &&
49507 + (core_if->core_params->i2c_enable)) {
49508 + DWC_PRINTF("SRP Timeout\n");
49509 +
49510 + if ((core_if->srp_success) && (gotgctl.b.bsesvld)) {
49511 + if (core_if->pcd_cb && core_if->pcd_cb->resume_wakeup) {
49512 + core_if->pcd_cb->resume_wakeup(core_if->pcd_cb->p);
49513 + }
49514 +
49515 + /* Clear Session Request */
49516 + gotgctl.d32 = 0;
49517 + gotgctl.b.sesreq = 1;
49518 + DWC_MODIFY_REG32(&core_if->core_global_regs->gotgctl,
49519 + gotgctl.d32, 0);
49520 +
49521 + core_if->srp_success = 0;
49522 + } else {
49523 + __DWC_ERROR("Device not connected/responding\n");
49524 + gotgctl.b.sesreq = 0;
49525 + DWC_WRITE_REG32(addr, gotgctl.d32);
49526 + }
49527 + } else if (gotgctl.b.sesreq) {
49528 + DWC_PRINTF("SRP Timeout\n");
49529 +
49530 + __DWC_ERROR("Device not connected/responding\n");
49531 + gotgctl.b.sesreq = 0;
49532 + DWC_WRITE_REG32(addr, gotgctl.d32);
49533 + } else {
49534 + DWC_PRINTF(" SRP GOTGCTL=%0x\n", gotgctl.d32);
49535 + }
49536 +}
49537 +
49538 +/**
49539 + * Tasklet
49540 + *
49541 + */
49542 +extern void start_next_request(dwc_otg_pcd_ep_t * ep);
49543 +
49544 +static void start_xfer_tasklet_func(void *data)
49545 +{
49546 + dwc_otg_pcd_t *pcd = (dwc_otg_pcd_t *) data;
49547 + dwc_otg_core_if_t *core_if = GET_CORE_IF(pcd);
49548 +
49549 + int i;
49550 + depctl_data_t diepctl;
49551 +
49552 + DWC_DEBUGPL(DBG_PCDV, "Start xfer tasklet\n");
49553 +
49554 + diepctl.d32 = DWC_READ_REG32(&core_if->dev_if->in_ep_regs[0]->diepctl);
49555 +
49556 + if (pcd->ep0.queue_sof) {
49557 + pcd->ep0.queue_sof = 0;
49558 + start_next_request(&pcd->ep0);
49559 + // break;
49560 + }
49561 +
49562 + for (i = 0; i < core_if->dev_if->num_in_eps; i++) {
49563 + depctl_data_t diepctl;
49564 + diepctl.d32 =
49565 + DWC_READ_REG32(&core_if->dev_if->in_ep_regs[i]->diepctl);
49566 +
49567 + if (pcd->in_ep[i].queue_sof) {
49568 + pcd->in_ep[i].queue_sof = 0;
49569 + start_next_request(&pcd->in_ep[i]);
49570 + // break;
49571 + }
49572 + }
49573 +
49574 + return;
49575 +}
49576 +
49577 +/**
49578 + * This function initializes the PCD portion of the driver.
49579 + *
49580 + */
49581 +dwc_otg_pcd_t *dwc_otg_pcd_init(dwc_otg_device_t *otg_dev)
49582 +{
49583 + struct device *dev = &otg_dev->os_dep.platformdev->dev;
49584 + dwc_otg_core_if_t *core_if = otg_dev->core_if;
49585 + dwc_otg_pcd_t *pcd = NULL;
49586 + dwc_otg_dev_if_t *dev_if;
49587 + int i;
49588 +
49589 + /*
49590 + * Allocate PCD structure
49591 + */
49592 + pcd = DWC_ALLOC(sizeof(dwc_otg_pcd_t));
49593 +
49594 + if (pcd == NULL) {
49595 + return NULL;
49596 + }
49597 +
49598 +#if (defined(DWC_LINUX) && defined(CONFIG_DEBUG_SPINLOCK))
49599 + DWC_SPINLOCK_ALLOC_LINUX_DEBUG(pcd->lock);
49600 +#else
49601 + pcd->lock = DWC_SPINLOCK_ALLOC();
49602 +#endif
49603 + DWC_DEBUGPL(DBG_HCDV, "Init of PCD %p given core_if %p\n",
49604 + pcd, core_if);//GRAYG
49605 + if (!pcd->lock) {
49606 + DWC_ERROR("Could not allocate lock for pcd");
49607 + DWC_FREE(pcd);
49608 + return NULL;
49609 + }
49610 + /* Set core_if's lock pointer to pcd->lock */
49611 + core_if->lock = pcd->lock;
49612 + pcd->core_if = core_if;
49613 +
49614 + dev_if = core_if->dev_if;
49615 + dev_if->isoc_ep = NULL;
49616 +
49617 + if (core_if->hwcfg4.b.ded_fifo_en) {
49618 + DWC_PRINTF("Dedicated Tx FIFOs mode\n");
49619 + } else {
49620 + DWC_PRINTF("Shared Tx FIFO mode\n");
49621 + }
49622 +
49623 + /*
49624 + * Initialize the core for device mode here if there is no ADP support.
49625 + * Otherwise it will be done later in the dwc_otg_adp_start routine.
49626 + */
49627 + if (dwc_otg_is_device_mode(core_if) /*&& !core_if->adp_enable*/) {
49628 + dwc_otg_core_dev_init(core_if);
49629 + }
49630 +
49631 + /*
49632 + * Register the PCD Callbacks.
49633 + */
49634 + dwc_otg_cil_register_pcd_callbacks(core_if, &pcd_callbacks, pcd);
49635 +
49636 + /*
49637 + * Initialize the DMA buffer for SETUP packets
49638 + */
49639 + if (GET_CORE_IF(pcd)->dma_enable) {
49640 + pcd->setup_pkt =
49641 + DWC_DMA_ALLOC(dev, sizeof(*pcd->setup_pkt) * 5,
49642 + &pcd->setup_pkt_dma_handle);
49643 + if (pcd->setup_pkt == NULL) {
49644 + DWC_FREE(pcd);
49645 + return NULL;
49646 + }
49647 +
49648 + pcd->status_buf =
49649 + DWC_DMA_ALLOC(dev, sizeof(uint16_t),
49650 + &pcd->status_buf_dma_handle);
49651 + if (pcd->status_buf == NULL) {
49652 + DWC_DMA_FREE(dev, sizeof(*pcd->setup_pkt) * 5,
49653 + pcd->setup_pkt, pcd->setup_pkt_dma_handle);
49654 + DWC_FREE(pcd);
49655 + return NULL;
49656 + }
49657 +
49658 + if (GET_CORE_IF(pcd)->dma_desc_enable) {
49659 + dev_if->setup_desc_addr[0] =
49660 + dwc_otg_ep_alloc_desc_chain(dev,
49661 + &dev_if->dma_setup_desc_addr[0], 1);
49662 + dev_if->setup_desc_addr[1] =
49663 + dwc_otg_ep_alloc_desc_chain(dev,
49664 + &dev_if->dma_setup_desc_addr[1], 1);
49665 + dev_if->in_desc_addr =
49666 + dwc_otg_ep_alloc_desc_chain(dev,
49667 + &dev_if->dma_in_desc_addr, 1);
49668 + dev_if->out_desc_addr =
49669 + dwc_otg_ep_alloc_desc_chain(dev,
49670 + &dev_if->dma_out_desc_addr, 1);
49671 + pcd->data_terminated = 0;
49672 +
49673 + if (dev_if->setup_desc_addr[0] == 0
49674 + || dev_if->setup_desc_addr[1] == 0
49675 + || dev_if->in_desc_addr == 0
49676 + || dev_if->out_desc_addr == 0) {
49677 +
49678 + if (dev_if->out_desc_addr)
49679 + dwc_otg_ep_free_desc_chain(dev,
49680 + dev_if->out_desc_addr,
49681 + dev_if->dma_out_desc_addr, 1);
49682 + if (dev_if->in_desc_addr)
49683 + dwc_otg_ep_free_desc_chain(dev,
49684 + dev_if->in_desc_addr,
49685 + dev_if->dma_in_desc_addr, 1);
49686 + if (dev_if->setup_desc_addr[1])
49687 + dwc_otg_ep_free_desc_chain(dev,
49688 + dev_if->setup_desc_addr[1],
49689 + dev_if->dma_setup_desc_addr[1], 1);
49690 + if (dev_if->setup_desc_addr[0])
49691 + dwc_otg_ep_free_desc_chain(dev,
49692 + dev_if->setup_desc_addr[0],
49693 + dev_if->dma_setup_desc_addr[0], 1);
49694 +
49695 + DWC_DMA_FREE(dev, sizeof(*pcd->setup_pkt) * 5,
49696 + pcd->setup_pkt,
49697 + pcd->setup_pkt_dma_handle);
49698 + DWC_DMA_FREE(dev, sizeof(*pcd->status_buf),
49699 + pcd->status_buf,
49700 + pcd->status_buf_dma_handle);
49701 +
49702 + DWC_FREE(pcd);
49703 +
49704 + return NULL;
49705 + }
49706 + }
49707 + } else {
49708 + pcd->setup_pkt = DWC_ALLOC(sizeof(*pcd->setup_pkt) * 5);
49709 + if (pcd->setup_pkt == NULL) {
49710 + DWC_FREE(pcd);
49711 + return NULL;
49712 + }
49713 +
49714 + pcd->status_buf = DWC_ALLOC(sizeof(uint16_t));
49715 + if (pcd->status_buf == NULL) {
49716 + DWC_FREE(pcd->setup_pkt);
49717 + DWC_FREE(pcd);
49718 + return NULL;
49719 + }
49720 + }
49721 +
49722 + dwc_otg_pcd_reinit(pcd);
49723 +
49724 + /* Allocate the cfi object for the PCD */
49725 +#ifdef DWC_UTE_CFI
49726 + pcd->cfi = DWC_ALLOC(sizeof(cfiobject_t));
49727 + if (NULL == pcd->cfi)
49728 + goto fail;
49729 + if (init_cfi(pcd->cfi)) {
49730 + CFI_INFO("%s: Failed to init the CFI object\n", __func__);
49731 + goto fail;
49732 + }
49733 +#endif
49734 +
49735 + /* Initialize tasklets */
49736 + pcd->start_xfer_tasklet = DWC_TASK_ALLOC("xfer_tasklet",
49737 + start_xfer_tasklet_func, pcd);
49738 + pcd->test_mode_tasklet = DWC_TASK_ALLOC("test_mode_tasklet",
49739 + do_test_mode, pcd);
49740 +
49741 + /* Initialize SRP timer */
49742 + core_if->srp_timer = DWC_TIMER_ALLOC("SRP TIMER", srp_timeout, core_if);
49743 +
49744 + if (core_if->core_params->dev_out_nak) {
49745 + /**
49746 + * Initialize xfer timeout timer. Implemented for
49747 + * 2.93a feature "Device DDMA OUT NAK Enhancement"
49748 + */
49749 + for(i = 0; i < MAX_EPS_CHANNELS; i++) {
49750 + pcd->core_if->ep_xfer_timer[i] =
49751 + DWC_TIMER_ALLOC("ep timer", ep_xfer_timeout,
49752 + &pcd->core_if->ep_xfer_info[i]);
49753 + }
49754 + }
49755 +
49756 + return pcd;
49757 +#ifdef DWC_UTE_CFI
49758 +fail:
49759 +#endif
49760 + if (pcd->setup_pkt)
49761 + DWC_FREE(pcd->setup_pkt);
49762 + if (pcd->status_buf)
49763 + DWC_FREE(pcd->status_buf);
49764 +#ifdef DWC_UTE_CFI
49765 + if (pcd->cfi)
49766 + DWC_FREE(pcd->cfi);
49767 +#endif
49768 + if (pcd)
49769 + DWC_FREE(pcd);
49770 + return NULL;
49771 +
49772 +}
49773 +
49774 +/**
49775 + * Remove PCD specific data
49776 + */
49777 +void dwc_otg_pcd_remove(dwc_otg_pcd_t * pcd)
49778 +{
49779 + dwc_otg_dev_if_t *dev_if = GET_CORE_IF(pcd)->dev_if;
49780 + struct device *dev = dwc_otg_pcd_to_dev(pcd);
49781 + int i;
49782 +
49783 + if (pcd->core_if->core_params->dev_out_nak) {
49784 + for (i = 0; i < MAX_EPS_CHANNELS; i++) {
49785 + DWC_TIMER_CANCEL(pcd->core_if->ep_xfer_timer[i]);
49786 + pcd->core_if->ep_xfer_info[i].state = 0;
49787 + }
49788 + }
49789 +
49790 + if (GET_CORE_IF(pcd)->dma_enable) {
49791 + DWC_DMA_FREE(dev, sizeof(*pcd->setup_pkt) * 5, pcd->setup_pkt,
49792 + pcd->setup_pkt_dma_handle);
49793 + DWC_DMA_FREE(dev, sizeof(uint16_t), pcd->status_buf,
49794 + pcd->status_buf_dma_handle);
49795 + if (GET_CORE_IF(pcd)->dma_desc_enable) {
49796 + dwc_otg_ep_free_desc_chain(dev,
49797 + dev_if->setup_desc_addr[0],
49798 + dev_if->dma_setup_desc_addr
49799 + [0], 1);
49800 + dwc_otg_ep_free_desc_chain(dev,
49801 + dev_if->setup_desc_addr[1],
49802 + dev_if->dma_setup_desc_addr
49803 + [1], 1);
49804 + dwc_otg_ep_free_desc_chain(dev,
49805 + dev_if->in_desc_addr,
49806 + dev_if->dma_in_desc_addr, 1);
49807 + dwc_otg_ep_free_desc_chain(dev,
49808 + dev_if->out_desc_addr,
49809 + dev_if->dma_out_desc_addr,
49810 + 1);
49811 + }
49812 + } else {
49813 + DWC_FREE(pcd->setup_pkt);
49814 + DWC_FREE(pcd->status_buf);
49815 + }
49816 + DWC_SPINLOCK_FREE(pcd->lock);
49817 + /* Set core_if's lock pointer to NULL */
49818 + pcd->core_if->lock = NULL;
49819 +
49820 + DWC_TASK_FREE(pcd->start_xfer_tasklet);
49821 + DWC_TASK_FREE(pcd->test_mode_tasklet);
49822 + if (pcd->core_if->core_params->dev_out_nak) {
49823 + for (i = 0; i < MAX_EPS_CHANNELS; i++) {
49824 + if (pcd->core_if->ep_xfer_timer[i]) {
49825 + DWC_TIMER_FREE(pcd->core_if->ep_xfer_timer[i]);
49826 + }
49827 + }
49828 + }
49829 +
49830 +/* Release the CFI object's dynamic memory */
49831 +#ifdef DWC_UTE_CFI
49832 + if (pcd->cfi->ops.release) {
49833 + pcd->cfi->ops.release(pcd->cfi);
49834 + }
49835 +#endif
49836 +
49837 + DWC_FREE(pcd);
49838 +}
49839 +
49840 +/**
49841 + * Returns whether the registered PCD is dual-speed or not
49842 + */
49843 +uint32_t dwc_otg_pcd_is_dualspeed(dwc_otg_pcd_t * pcd)
49844 +{
49845 + dwc_otg_core_if_t *core_if = GET_CORE_IF(pcd);
49846 +
49847 + if ((core_if->core_params->speed == DWC_SPEED_PARAM_FULL) ||
49848 + ((core_if->hwcfg2.b.hs_phy_type == 2) &&
49849 + (core_if->hwcfg2.b.fs_phy_type == 1) &&
49850 + (core_if->core_params->ulpi_fs_ls))) {
49851 + return 0;
49852 + }
49853 +
49854 + return 1;
49855 +}
49856 +
49857 +/**
49858 + * Returns whether the registered PCD is OTG-capable or not
49859 + */
49860 +uint32_t dwc_otg_pcd_is_otg(dwc_otg_pcd_t * pcd)
49861 +{
49862 + dwc_otg_core_if_t *core_if = GET_CORE_IF(pcd);
49863 + gusbcfg_data_t usbcfg = {.d32 = 0 };
49864 +
49865 + usbcfg.d32 = DWC_READ_REG32(&core_if->core_global_regs->gusbcfg);
49866 + if (!usbcfg.b.srpcap || !usbcfg.b.hnpcap) {
49867 + return 0;
49868 + }
49869 +
49870 + return 1;
49871 +}
49872 +
49873 +/**
49874 + * This function assigns a Tx FIFO to an EP
49875 + * in dedicated (multiple) Tx FIFO mode
49876 + */
49877 +static uint32_t assign_tx_fifo(dwc_otg_core_if_t * core_if)
49878 +{
49879 + uint32_t TxMsk = 1;
49880 + int i;
49881 +
49882 + for (i = 0; i < core_if->hwcfg4.b.num_in_eps; ++i) {
49883 + if ((TxMsk & core_if->tx_msk) == 0) {
49884 + core_if->tx_msk |= TxMsk;
49885 + return i + 1;
49886 + }
49887 + TxMsk <<= 1;
49888 + }
49889 + return 0;
49890 +}
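+/*
+ * Illustrative example of the bitmask bookkeeping above: with
+ * tx_msk = 0b0101 (FIFOs 1 and 3 in use), the first clear bit is
+ * bit 1, so tx_msk becomes 0b0111 and FIFO number 2 is returned.
+ * A return value of 0 means no free Tx FIFO was found.
+ */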
49891 +
49892 +/**
49893 + * This function assigns a periodic Tx FIFO to a periodic EP
49894 + * in shared Tx FIFO mode
49895 + */
49896 +static uint32_t assign_perio_tx_fifo(dwc_otg_core_if_t * core_if)
49897 +{
49898 + uint32_t PerTxMsk = 1;
49899 + int i;
49900 + for (i = 0; i < core_if->hwcfg4.b.num_dev_perio_in_ep; ++i) {
49901 + if ((PerTxMsk & core_if->p_tx_msk) == 0) {
49902 + core_if->p_tx_msk |= PerTxMsk;
49903 + return i + 1;
49904 + }
49905 + PerTxMsk <<= 1;
49906 + }
49907 + return 0;
49908 +}
49909 +
49910 +/**
49911 + * This function releases a periodic Tx FIFO
49912 + * in shared Tx FIFO mode
49913 + */
49914 +static void release_perio_tx_fifo(dwc_otg_core_if_t * core_if,
49915 + uint32_t fifo_num)
49916 +{
49917 + core_if->p_tx_msk =
49918 + (core_if->p_tx_msk & (1 << (fifo_num - 1))) ^ core_if->p_tx_msk;
49919 +}
49920 +
49921 +/**
49922 + * This function releases a Tx FIFO
49923 + * in dedicated (multiple) Tx FIFO mode
49924 + */
49925 +static void release_tx_fifo(dwc_otg_core_if_t * core_if, uint32_t fifo_num)
49926 +{
49927 + core_if->tx_msk =
49928 + (core_if->tx_msk & (1 << (fifo_num - 1))) ^ core_if->tx_msk;
49929 +}
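+/*
+ * The (msk & bit) ^ msk expression above clears the FIFO's bit if it is
+ * set and leaves the mask unchanged otherwise; e.g. releasing FIFO 2
+ * with tx_msk = 0b0111 yields 0b0010 ^ 0b0111 = 0b0101 (illustrative).
+ */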
49930 +
49931 +/**
49932 + * This function is called from the gadget driver
49933 + * to enable a PCD endpoint.
49934 + */
49935 +int dwc_otg_pcd_ep_enable(dwc_otg_pcd_t * pcd,
49936 + const uint8_t * ep_desc, void *usb_ep)
49937 +{
49938 + int num, dir;
49939 + dwc_otg_pcd_ep_t *ep = NULL;
49940 + const usb_endpoint_descriptor_t *desc;
49941 + dwc_irqflags_t flags;
49942 + fifosize_data_t dptxfsiz = {.d32 = 0 };
49943 + gdfifocfg_data_t gdfifocfg = {.d32 = 0 };
49944 + gdfifocfg_data_t gdfifocfgbase = {.d32 = 0 };
49945 + int retval = 0;
49946 + int i, epcount;
49947 + struct device *dev = dwc_otg_pcd_to_dev(pcd);
49948 +
49949 + desc = (const usb_endpoint_descriptor_t *)ep_desc;
49950 +
49951 + if (!desc) {
49952 + pcd->ep0.priv = usb_ep;
49953 + ep = &pcd->ep0;
49954 + retval = -DWC_E_INVALID;
49955 + goto out;
49956 + }
49957 +
49958 + num = UE_GET_ADDR(desc->bEndpointAddress);
49959 + dir = UE_GET_DIR(desc->bEndpointAddress);
49960 +
49961 + if (!desc->wMaxPacketSize) {
49962 + DWC_WARN("bad maxpacketsize\n");
49963 + retval = -DWC_E_INVALID;
49964 + goto out;
49965 + }
49966 +
49967 + if (dir == UE_DIR_IN) {
49968 + epcount = pcd->core_if->dev_if->num_in_eps;
49969 + for (i = 0; i < epcount; i++) {
49970 + if (num == pcd->in_ep[i].dwc_ep.num) {
49971 + ep = &pcd->in_ep[i];
49972 + break;
49973 + }
49974 + }
49975 + } else {
49976 + epcount = pcd->core_if->dev_if->num_out_eps;
49977 + for (i = 0; i < epcount; i++) {
49978 + if (num == pcd->out_ep[i].dwc_ep.num) {
49979 + ep = &pcd->out_ep[i];
49980 + break;
49981 + }
49982 + }
49983 + }
49984 +
49985 + if (!ep) {
49986 + DWC_WARN("bad address\n");
49987 + retval = -DWC_E_INVALID;
49988 + goto out;
49989 + }
49990 +
49991 + DWC_SPINLOCK_IRQSAVE(pcd->lock, &flags);
49992 +
49993 + ep->desc = desc;
49994 + ep->priv = usb_ep;
49995 +
49996 + /*
49997 + * Activate the EP
49998 + */
49999 + ep->stopped = 0;
50000 +
50001 + ep->dwc_ep.is_in = (dir == UE_DIR_IN);
50002 + ep->dwc_ep.maxpacket = UGETW(desc->wMaxPacketSize);
50003 +
50004 + ep->dwc_ep.type = desc->bmAttributes & UE_XFERTYPE;
50005 +
50006 + if (ep->dwc_ep.is_in) {
50007 + if (!GET_CORE_IF(pcd)->en_multiple_tx_fifo) {
50008 + ep->dwc_ep.tx_fifo_num = 0;
50009 +
50010 + if (ep->dwc_ep.type == UE_ISOCHRONOUS) {
50011 + /*
50012 + * if ISOC EP then assign a Periodic Tx FIFO.
50013 + */
50014 + ep->dwc_ep.tx_fifo_num =
50015 + assign_perio_tx_fifo(GET_CORE_IF(pcd));
50016 + }
50017 + } else {
50018 + /*
50019 + * if Dedicated FIFOs mode is on then assign a Tx FIFO.
50020 + */
50021 + ep->dwc_ep.tx_fifo_num =
50022 + assign_tx_fifo(GET_CORE_IF(pcd));
50023 + }
50024 +
50025 + /* Calculating EP info controller base address */
50026 + if (ep->dwc_ep.tx_fifo_num
50027 + && GET_CORE_IF(pcd)->en_multiple_tx_fifo) {
50028 + gdfifocfg.d32 =
50029 + DWC_READ_REG32(&GET_CORE_IF(pcd)->
50030 + core_global_regs->gdfifocfg);
50031 + gdfifocfgbase.d32 = gdfifocfg.d32 >> 16;
50032 + dptxfsiz.d32 =
50033 + (DWC_READ_REG32
50034 + (&GET_CORE_IF(pcd)->core_global_regs->
50035 + dtxfsiz[ep->dwc_ep.tx_fifo_num - 1]) >> 16);
50036 + gdfifocfg.b.epinfobase =
50037 + gdfifocfgbase.d32 + dptxfsiz.d32;
50038 + if (GET_CORE_IF(pcd)->snpsid <= OTG_CORE_REV_2_94a) {
50039 + DWC_WRITE_REG32(&GET_CORE_IF(pcd)->
50040 + core_global_regs->gdfifocfg,
50041 + gdfifocfg.d32);
50042 + }
50043 + }
50044 + }
50045 + /* Set initial data PID. */
50046 + if (ep->dwc_ep.type == UE_BULK) {
50047 + ep->dwc_ep.data_pid_start = 0;
50048 + }
50049 +
50050 + /* Alloc DMA Descriptors */
50051 + if (GET_CORE_IF(pcd)->dma_desc_enable) {
50052 +#ifndef DWC_UTE_PER_IO
50053 + if (ep->dwc_ep.type != UE_ISOCHRONOUS) {
50054 +#endif
50055 + ep->dwc_ep.desc_addr =
50056 + dwc_otg_ep_alloc_desc_chain(dev,
50057 + &ep->dwc_ep.dma_desc_addr,
50058 + MAX_DMA_DESC_CNT);
50059 + if (!ep->dwc_ep.desc_addr) {
50060 + DWC_WARN("%s, can't allocate DMA descriptor\n",
50061 + __func__);
50062 + retval = -DWC_E_SHUTDOWN;
50063 + DWC_SPINUNLOCK_IRQRESTORE(pcd->lock, flags);
50064 + goto out;
50065 + }
50066 +#ifndef DWC_UTE_PER_IO
50067 + }
50068 +#endif
50069 + }
50070 +
50071 + DWC_DEBUGPL(DBG_PCD, "Activate %s: type=%d, mps=%d desc=%p\n",
50072 + (ep->dwc_ep.is_in ? "IN" : "OUT"),
50073 + ep->dwc_ep.type, ep->dwc_ep.maxpacket, ep->desc);
50074 +#ifdef DWC_UTE_PER_IO
50075 + ep->dwc_ep.xiso_bInterval = 1 << (ep->desc->bInterval - 1);
50076 +#endif
50077 + if (ep->dwc_ep.type == DWC_OTG_EP_TYPE_ISOC) {
50078 + ep->dwc_ep.bInterval = 1 << (ep->desc->bInterval - 1);
50079 + ep->dwc_ep.frame_num = 0xFFFFFFFF;
50080 + }
50081 +
50082 + dwc_otg_ep_activate(GET_CORE_IF(pcd), &ep->dwc_ep);
50083 +
50084 +#ifdef DWC_UTE_CFI
50085 + if (pcd->cfi->ops.ep_enable) {
50086 + pcd->cfi->ops.ep_enable(pcd->cfi, pcd, ep);
50087 + }
50088 +#endif
50089 +
50090 + DWC_SPINUNLOCK_IRQRESTORE(pcd->lock, flags);
50091 +
50092 +out:
50093 + return retval;
50094 +}
50095 +
50096 +/**
50097 + * This function is called from the gadget driver
50098 + * to disable a PCD endpoint.
50099 + */
50100 +int dwc_otg_pcd_ep_disable(dwc_otg_pcd_t * pcd, void *ep_handle)
50101 +{
50102 + dwc_otg_pcd_ep_t *ep;
50103 + dwc_irqflags_t flags;
50104 + dwc_otg_dev_dma_desc_t *desc_addr;
50105 + dwc_dma_t dma_desc_addr;
50106 + gdfifocfg_data_t gdfifocfgbase = {.d32 = 0 };
50107 + gdfifocfg_data_t gdfifocfg = {.d32 = 0 };
50108 + fifosize_data_t dptxfsiz = {.d32 = 0 };
50109 + struct device *dev = dwc_otg_pcd_to_dev(pcd);
50110 +
50111 + ep = get_ep_from_handle(pcd, ep_handle);
50112 +
50113 + if (!ep || !ep->desc) {
50114 + DWC_DEBUGPL(DBG_PCD, "bad ep address\n");
50115 + return -DWC_E_INVALID;
50116 + }
50117 +
50118 + DWC_SPINLOCK_IRQSAVE(pcd->lock, &flags);
50119 +
50120 + dwc_otg_request_nuke(ep);
50121 +
50122 + dwc_otg_ep_deactivate(GET_CORE_IF(pcd), &ep->dwc_ep);
50123 + if (pcd->core_if->core_params->dev_out_nak) {
50124 + DWC_TIMER_CANCEL(pcd->core_if->ep_xfer_timer[ep->dwc_ep.num]);
50125 + pcd->core_if->ep_xfer_info[ep->dwc_ep.num].state = 0;
50126 + }
50127 + ep->desc = NULL;
50128 + ep->stopped = 1;
50129 +
50130 + gdfifocfg.d32 =
50131 + DWC_READ_REG32(&GET_CORE_IF(pcd)->core_global_regs->gdfifocfg);
50132 + gdfifocfgbase.d32 = gdfifocfg.d32 >> 16;
50133 +
50134 + if (ep->dwc_ep.is_in) {
50135 + if (GET_CORE_IF(pcd)->en_multiple_tx_fifo) {
50136 + /* Flush the Tx FIFO */
50137 + dwc_otg_flush_tx_fifo(GET_CORE_IF(pcd),
50138 + ep->dwc_ep.tx_fifo_num);
50139 + }
50140 + release_perio_tx_fifo(GET_CORE_IF(pcd), ep->dwc_ep.tx_fifo_num);
50141 + release_tx_fifo(GET_CORE_IF(pcd), ep->dwc_ep.tx_fifo_num);
50142 + if (GET_CORE_IF(pcd)->en_multiple_tx_fifo) {
50143 + /* Decreasing EPinfo Base Addr */
50144 + dptxfsiz.d32 =
50145 + (DWC_READ_REG32
50146 + (&GET_CORE_IF(pcd)->
50147 + core_global_regs->dtxfsiz[ep->dwc_ep.tx_fifo_num-1]) >> 16);
50148 + gdfifocfg.b.epinfobase = gdfifocfgbase.d32 - dptxfsiz.d32;
50149 + if (GET_CORE_IF(pcd)->snpsid <= OTG_CORE_REV_2_94a) {
50150 + DWC_WRITE_REG32(&GET_CORE_IF(pcd)->core_global_regs->gdfifocfg,
50151 + gdfifocfg.d32);
50152 + }
50153 + }
50154 + }
50155 +
50156 + /* Free DMA Descriptors */
50157 + if (GET_CORE_IF(pcd)->dma_desc_enable) {
50158 + if (ep->dwc_ep.type != UE_ISOCHRONOUS) {
50159 + desc_addr = ep->dwc_ep.desc_addr;
50160 + dma_desc_addr = ep->dwc_ep.dma_desc_addr;
50161 +
50162 + /* Cannot call dma_free_coherent() with IRQs disabled */
50163 + DWC_SPINUNLOCK_IRQRESTORE(pcd->lock, flags);
50164 + dwc_otg_ep_free_desc_chain(dev, desc_addr, dma_desc_addr,
50165 + MAX_DMA_DESC_CNT);
50166 +
50167 + goto out_unlocked;
50168 + }
50169 + }
50170 + DWC_SPINUNLOCK_IRQRESTORE(pcd->lock, flags);
50171 +
50172 +out_unlocked:
50173 + DWC_DEBUGPL(DBG_PCD, "%d %s disabled\n", ep->dwc_ep.num,
50174 + ep->dwc_ep.is_in ? "IN" : "OUT");
50175 + return 0;
50176 +
50177 +}
50178 +
50179 +/******************************************************************************/
50180 +#ifdef DWC_UTE_PER_IO
50181 +
50182 +/**
50183 + * Free the request and its extended parts
50184 + *
50185 + */
50186 +void dwc_pcd_xiso_ereq_free(dwc_otg_pcd_ep_t * ep, dwc_otg_pcd_request_t * req)
50187 +{
50188 + DWC_FREE(req->ext_req.per_io_frame_descs);
50189 + DWC_FREE(req);
50190 +}
50191 +
50192 +/**
50193 + * Start the next request in the endpoint's queue.
50194 + *
50195 + */
50196 +int dwc_otg_pcd_xiso_start_next_request(dwc_otg_pcd_t * pcd,
50197 + dwc_otg_pcd_ep_t * ep)
50198 +{
50199 + int i;
50200 + dwc_otg_pcd_request_t *req = NULL;
50201 + dwc_ep_t *dwcep = NULL;
50202 + struct dwc_iso_xreq_port *ereq = NULL;
50203 + struct dwc_iso_pkt_desc_port *ddesc_iso;
50204 + uint16_t nat;
50205 + depctl_data_t diepctl;
50206 +
50207 + dwcep = &ep->dwc_ep;
50208 +
50209 + if (dwcep->xiso_active_xfers > 0) {
50210 +#if 0 //Disable this to decrease s/w overhead that is crucial for Isoc transfers
50211 + DWC_WARN("There are currently active transfers for EP%d \
50212 + (active=%d; queued=%d)", dwcep->num, dwcep->xiso_active_xfers,
50213 + dwcep->xiso_queued_xfers);
50214 +#endif
50215 + return 0;
50216 + }
50217 +
50218 + nat = UGETW(ep->desc->wMaxPacketSize);
50219 + nat = (nat >> 11) & 0x03;
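+ /* Bits 12:11 of wMaxPacketSize encode the number of additional
+ * transactions per microframe for high-bandwidth endpoints, so nat is
+ * 0..2 and (nat + 1) is the value programmed into the descriptor PID
+ * field below. */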
50220 +
50221 + if (!DWC_CIRCLEQ_EMPTY(&ep->queue)) {
50222 + req = DWC_CIRCLEQ_FIRST(&ep->queue);
50223 + ereq = &req->ext_req;
50224 + ep->stopped = 0;
50225 +
50226 + /* Get the frame number */
50227 + dwcep->xiso_frame_num =
50228 + dwc_otg_get_frame_number(GET_CORE_IF(pcd));
50229 + DWC_DEBUG("FRM_NUM=%d", dwcep->xiso_frame_num);
50230 +
50231 + ddesc_iso = ereq->per_io_frame_descs;
50232 +
50233 + if (dwcep->is_in) {
50234 + /* Setup DMA Descriptor chain for IN Isoc request */
50235 + for (i = 0; i < ereq->pio_pkt_count; i++) {
50236 + //if ((i % (nat + 1)) == 0)
50237 + if ( i > 0 )
50238 + dwcep->xiso_frame_num =
50239 + (dwcep->xiso_bInterval +
50240 + dwcep->xiso_frame_num) & 0x3FFF;
50241 + dwcep->desc_addr[i].buf =
50242 + req->dma + ddesc_iso[i].offset;
50243 + dwcep->desc_addr[i].status.b_iso_in.txbytes =
50244 + ddesc_iso[i].length;
50245 + dwcep->desc_addr[i].status.b_iso_in.framenum =
50246 + dwcep->xiso_frame_num;
50247 + dwcep->desc_addr[i].status.b_iso_in.bs =
50248 + BS_HOST_READY;
50249 + dwcep->desc_addr[i].status.b_iso_in.txsts = 0;
50250 + dwcep->desc_addr[i].status.b_iso_in.sp =
50251 + (ddesc_iso[i].length %
50252 + dwcep->maxpacket) ? 1 : 0;
50253 + dwcep->desc_addr[i].status.b_iso_in.ioc = 0;
50254 + dwcep->desc_addr[i].status.b_iso_in.pid = nat + 1;
50255 + dwcep->desc_addr[i].status.b_iso_in.l = 0;
50256 +
50257 + /* Process the last descriptor */
50258 + if (i == ereq->pio_pkt_count - 1) {
50259 + dwcep->desc_addr[i].status.b_iso_in.ioc = 1;
50260 + dwcep->desc_addr[i].status.b_iso_in.l = 1;
50261 + }
50262 + }
50263 +
50264 + /* Setup and start the transfer for this endpoint */
50265 + dwcep->xiso_active_xfers++;
50266 + DWC_WRITE_REG32(&GET_CORE_IF(pcd)->dev_if->
50267 + in_ep_regs[dwcep->num]->diepdma,
50268 + dwcep->dma_desc_addr);
50269 + diepctl.d32 = 0;
50270 + diepctl.b.epena = 1;
50271 + diepctl.b.cnak = 1;
50272 + DWC_MODIFY_REG32(&GET_CORE_IF(pcd)->dev_if->
50273 + in_ep_regs[dwcep->num]->diepctl, 0,
50274 + diepctl.d32);
50275 + } else {
50276 + /* Setup DMA Descriptor chain for OUT Isoc request */
50277 + for (i = 0; i < ereq->pio_pkt_count; i++) {
50278 + //if ((i % (nat + 1)) == 0)
50279 + dwcep->xiso_frame_num = (dwcep->xiso_bInterval +
50280 + dwcep->xiso_frame_num) & 0x3FFF;
50281 + dwcep->desc_addr[i].buf =
50282 + req->dma + ddesc_iso[i].offset;
50283 + dwcep->desc_addr[i].status.b_iso_out.rxbytes =
50284 + ddesc_iso[i].length;
50285 + dwcep->desc_addr[i].status.b_iso_out.framenum =
50286 + dwcep->xiso_frame_num;
50287 + dwcep->desc_addr[i].status.b_iso_out.bs =
50288 + BS_HOST_READY;
50289 + dwcep->desc_addr[i].status.b_iso_out.rxsts = 0;
50290 + dwcep->desc_addr[i].status.b_iso_out.sp =
50291 + (ddesc_iso[i].length %
50292 + dwcep->maxpacket) ? 1 : 0;
50293 + dwcep->desc_addr[i].status.b_iso_out.ioc = 0;
50294 + dwcep->desc_addr[i].status.b_iso_out.pid = nat + 1;
50295 + dwcep->desc_addr[i].status.b_iso_out.l = 0;
50296 +
50297 + /* Process the last descriptor */
50298 + if (i == ereq->pio_pkt_count - 1) {
50299 + dwcep->desc_addr[i].status.b_iso_out.ioc = 1;
50300 + dwcep->desc_addr[i].status.b_iso_out.l = 1;
50301 + }
50302 + }
50303 +
50304 + /* Setup and start the transfer for this endpoint */
50305 + dwcep->xiso_active_xfers++;
50306 + DWC_WRITE_REG32(&GET_CORE_IF(pcd)->
50307 + dev_if->out_ep_regs[dwcep->num]->
50308 + doepdma, dwcep->dma_desc_addr);
50309 + diepctl.d32 = 0;
50310 + diepctl.b.epena = 1;
50311 + diepctl.b.cnak = 1;
50312 + DWC_MODIFY_REG32(&GET_CORE_IF(pcd)->
50313 + dev_if->out_ep_regs[dwcep->num]->
50314 + doepctl, 0, diepctl.d32);
50315 + }
50316 +
50317 + } else {
50318 + ep->stopped = 1;
50319 + }
50320 +
50321 + return 0;
50322 +}
50323 +
50324 +/**
50325 + * Complete an extended isochronous request and remove it from the queue.
50326 + */
50327 +void complete_xiso_ep(dwc_otg_pcd_ep_t * ep)
50328 +{
50329 + dwc_otg_pcd_request_t *req = NULL;
50330 + struct dwc_iso_xreq_port *ereq = NULL;
50331 + struct dwc_iso_pkt_desc_port *ddesc_iso = NULL;
50332 + dwc_ep_t *dwcep = NULL;
50333 + int i;
50334 +
50335 + //DWC_DEBUG();
50336 + dwcep = &ep->dwc_ep;
50337 +
50338 + /* Get the first pending request from the queue */
50339 + if (!DWC_CIRCLEQ_EMPTY(&ep->queue)) {
50340 + req = DWC_CIRCLEQ_FIRST(&ep->queue);
50341 + if (!req) {
50342 + DWC_PRINTF("complete_ep 0x%p, req = NULL!\n", ep);
50343 + return;
50344 + }
50345 + dwcep->xiso_active_xfers--;
50346 + dwcep->xiso_queued_xfers--;
50347 + /* Remove this request from the queue */
50348 + DWC_CIRCLEQ_REMOVE_INIT(&ep->queue, req, queue_entry);
50349 + } else {
50350 + DWC_PRINTF("complete_ep 0x%p, ep->queue empty!\n", ep);
50351 + return;
50352 + }
50353 +
50354 + ep->stopped = 1;
50355 + ereq = &req->ext_req;
50356 + ddesc_iso = ereq->per_io_frame_descs;
50357 +
50358 + if (dwcep->xiso_active_xfers < 0) {
50359 + DWC_WARN("EP#%d (xiso_active_xfers=%d)", dwcep->num,
50360 + dwcep->xiso_active_xfers);
50361 + }
50362 +
50363 + /* Fill the Isoc descs of portable extended req from dma descriptors */
50364 + for (i = 0; i < ereq->pio_pkt_count; i++) {
50365 + if (dwcep->is_in) { /* IN endpoints */
50366 + ddesc_iso[i].actual_length = ddesc_iso[i].length -
50367 + dwcep->desc_addr[i].status.b_iso_in.txbytes;
50368 + ddesc_iso[i].status =
50369 + dwcep->desc_addr[i].status.b_iso_in.txsts;
50370 + } else { /* OUT endpoints */
50371 + ddesc_iso[i].actual_length = ddesc_iso[i].length -
50372 + dwcep->desc_addr[i].status.b_iso_out.rxbytes;
50373 + ddesc_iso[i].status =
50374 + dwcep->desc_addr[i].status.b_iso_out.rxsts;
50375 + }
50376 + }
50377 +
50378 + DWC_SPINUNLOCK(ep->pcd->lock);
50379 +
50380 + /* Call the completion function in the non-portable logic */
50381 + ep->pcd->fops->xisoc_complete(ep->pcd, ep->priv, req->priv, 0,
50382 + &req->ext_req);
50383 +
50384 + DWC_SPINLOCK(ep->pcd->lock);
50385 +
50386 + /* Free the request - specific freeing needed for extended request object */
50387 + dwc_pcd_xiso_ereq_free(ep, req);
50388 +
50389 + /* Start the next request */
50390 + dwc_otg_pcd_xiso_start_next_request(ep->pcd, ep);
50391 +
50392 + return;
50393 +}
50394 +
50395 +/**
50396 + * Create and initialize the Isoc pkt descriptors of the extended request.
50397 + *
50398 + */
50399 +static int dwc_otg_pcd_xiso_create_pkt_descs(dwc_otg_pcd_request_t * req,
50400 + void *ereq_nonport,
50401 + int atomic_alloc)
50402 +{
50403 + struct dwc_iso_xreq_port *ereq = NULL;
50404 + struct dwc_iso_xreq_port *req_mapped = NULL;
50405 + struct dwc_iso_pkt_desc_port *ipds = NULL; /* To be created in this function */
50406 + uint32_t pkt_count;
50407 + int i;
50408 +
50409 + ereq = &req->ext_req;
50410 + req_mapped = (struct dwc_iso_xreq_port *)ereq_nonport;
50411 + pkt_count = req_mapped->pio_pkt_count;
50412 +
50413 + /* Create the isoc descs */
50414 + if (atomic_alloc) {
50415 + ipds = DWC_ALLOC_ATOMIC(sizeof(*ipds) * pkt_count);
50416 + } else {
50417 + ipds = DWC_ALLOC(sizeof(*ipds) * pkt_count);
50418 + }
50419 +
50420 + if (!ipds) {
50421 + DWC_ERROR("Failed to allocate isoc descriptors");
50422 + return -DWC_E_NO_MEMORY;
50423 + }
50424 +
50425 + /* Initialize the extended request fields */
50426 + ereq->per_io_frame_descs = ipds;
50427 + ereq->error_count = 0;
50428 + ereq->pio_alloc_pkt_count = pkt_count;
50429 + ereq->pio_pkt_count = pkt_count;
50430 + ereq->tr_sub_flags = req_mapped->tr_sub_flags;
50431 +
50432 + /* Init the Isoc descriptors */
50433 + for (i = 0; i < pkt_count; i++) {
50434 + ipds[i].length = req_mapped->per_io_frame_descs[i].length;
50435 + ipds[i].offset = req_mapped->per_io_frame_descs[i].offset;
50436 + ipds[i].status = req_mapped->per_io_frame_descs[i].status; /* 0 */
50437 + ipds[i].actual_length =
50438 + req_mapped->per_io_frame_descs[i].actual_length;
50439 + }
50440 +
50441 + return 0;
50442 +}
50443 +
50444 +static void prn_ext_request(struct dwc_iso_xreq_port *ereq)
50445 +{
50446 + struct dwc_iso_pkt_desc_port *xfd = NULL;
50447 + int i;
50448 +
50449 + DWC_DEBUG("per_io_frame_descs=%p", ereq->per_io_frame_descs);
50450 + DWC_DEBUG("tr_sub_flags=%d", ereq->tr_sub_flags);
50451 + DWC_DEBUG("error_count=%d", ereq->error_count);
50452 + DWC_DEBUG("pio_alloc_pkt_count=%d", ereq->pio_alloc_pkt_count);
50453 + DWC_DEBUG("pio_pkt_count=%d", ereq->pio_pkt_count);
50454 + DWC_DEBUG("res=%d", ereq->res);
50455 +
50456 + for (i = 0; i < ereq->pio_pkt_count; i++) {
50457 + xfd = &ereq->per_io_frame_descs[0];
50458 + DWC_DEBUG("FD #%d", i);
50459 +
50460 + DWC_DEBUG("xfd->actual_length=%d", xfd->actual_length);
50461 + DWC_DEBUG("xfd->length=%d", xfd->length);
50462 + DWC_DEBUG("xfd->offset=%d", xfd->offset);
50463 + DWC_DEBUG("xfd->status=%d", xfd->status);
50464 + }
50465 +}
50466 +
50467 +/**
50468 + * Queue an extended (per-IO) isochronous request on the endpoint referenced by ep_handle.
50469 + */
50470 +int dwc_otg_pcd_xiso_ep_queue(dwc_otg_pcd_t * pcd, void *ep_handle,
50471 + uint8_t * buf, dwc_dma_t dma_buf, uint32_t buflen,
50472 + int zero, void *req_handle, int atomic_alloc,
50473 + void *ereq_nonport)
50474 +{
50475 + dwc_otg_pcd_request_t *req = NULL;
50476 + dwc_otg_pcd_ep_t *ep;
50477 + dwc_irqflags_t flags;
50478 + int res;
50479 +
50480 + ep = get_ep_from_handle(pcd, ep_handle);
50481 + if (!ep) {
50482 + DWC_WARN("bad ep\n");
50483 + return -DWC_E_INVALID;
50484 + }
50485 +
50486 + /* We support this extension only for DDMA mode */
50487 + if (ep->dwc_ep.type == DWC_OTG_EP_TYPE_ISOC)
50488 + if (!GET_CORE_IF(pcd)->dma_desc_enable)
50489 + return -DWC_E_INVALID;
50490 +
50491 + /* Create a dwc_otg_pcd_request_t object */
50492 + if (atomic_alloc) {
50493 + req = DWC_ALLOC_ATOMIC(sizeof(*req));
50494 + } else {
50495 + req = DWC_ALLOC(sizeof(*req));
50496 + }
50497 +
50498 + if (!req) {
50499 + return -DWC_E_NO_MEMORY;
50500 + }
50501 +
50502 + /* Create the Isoc descs for this request which shall be the exact match
50503 + * of the structure sent to us from the non-portable logic */
50504 + res =
50505 + dwc_otg_pcd_xiso_create_pkt_descs(req, ereq_nonport, atomic_alloc);
50506 + if (res) {
50507 + DWC_WARN("Failed to init the Isoc descriptors");
50508 + DWC_FREE(req);
50509 + return res;
50510 + }
50511 +
50512 + DWC_SPINLOCK_IRQSAVE(pcd->lock, &flags);
50513 +
50514 + DWC_CIRCLEQ_INIT_ENTRY(req, queue_entry);
50515 + req->buf = buf;
50516 + req->dma = dma_buf;
50517 + req->length = buflen;
50518 + req->sent_zlp = zero;
50519 + req->priv = req_handle;
50520 +
50521 + //DWC_SPINLOCK_IRQSAVE(pcd->lock, &flags);
50522 + ep->dwc_ep.dma_addr = dma_buf;
50523 + ep->dwc_ep.start_xfer_buff = buf;
50524 + ep->dwc_ep.xfer_buff = buf;
50525 + ep->dwc_ep.xfer_len = 0;
50526 + ep->dwc_ep.xfer_count = 0;
50527 + ep->dwc_ep.sent_zlp = 0;
50528 + ep->dwc_ep.total_len = buflen;
50529 +
50530 + /* Add this request to the tail */
50531 + DWC_CIRCLEQ_INSERT_TAIL(&ep->queue, req, queue_entry);
50532 + ep->dwc_ep.xiso_queued_xfers++;
50533 +
50534 +//DWC_DEBUG("CP_0");
50535 +//DWC_DEBUG("req->ext_req.tr_sub_flags=%d", req->ext_req.tr_sub_flags);
50536 +//prn_ext_request((struct dwc_iso_xreq_port *) ereq_nonport);
50537 +//prn_ext_request(&req->ext_req);
50538 +
50539 + //DWC_SPINUNLOCK_IRQRESTORE(pcd->lock, flags);
50540 +
50541 +	/* If the request was queued with tr_sub_flags == DWC_EREQ_TF_ASAP,
50542 +	 * check whether there is any active transfer for this endpoint. If
50543 +	 * there are none, take the first entry from the queue and start it.
50544 +	 */
50545 + if (req->ext_req.tr_sub_flags == DWC_EREQ_TF_ASAP) {
50546 + res = dwc_otg_pcd_xiso_start_next_request(pcd, ep);
50547 + if (res) {
50548 + DWC_WARN("Failed to start the next Isoc transfer");
50549 + DWC_SPINUNLOCK_IRQRESTORE(pcd->lock, flags);
50550 + DWC_FREE(req);
50551 + return res;
50552 + }
50553 + }
50554 +
50555 + DWC_SPINUNLOCK_IRQRESTORE(pcd->lock, flags);
50556 + return 0;
50557 +}
50558 +
50559 +#endif
50560 +/* END ifdef DWC_UTE_PER_IO ***************************************************/
50561 +int dwc_otg_pcd_ep_queue(dwc_otg_pcd_t * pcd, void *ep_handle,
50562 + uint8_t * buf, dwc_dma_t dma_buf, uint32_t buflen,
50563 + int zero, void *req_handle, int atomic_alloc)
50564 +{
50565 + struct device *dev = dwc_otg_pcd_to_dev(pcd);
50566 + dwc_irqflags_t flags;
50567 + dwc_otg_pcd_request_t *req;
50568 + dwc_otg_pcd_ep_t *ep;
50569 + uint32_t max_transfer;
50570 +
50571 + ep = get_ep_from_handle(pcd, ep_handle);
50572 + if (!ep || (!ep->desc && ep->dwc_ep.num != 0)) {
50573 + DWC_WARN("bad ep\n");
50574 + return -DWC_E_INVALID;
50575 + }
50576 +
50577 + if (atomic_alloc) {
50578 + req = DWC_ALLOC_ATOMIC(sizeof(*req));
50579 + } else {
50580 + req = DWC_ALLOC(sizeof(*req));
50581 + }
50582 +
50583 + if (!req) {
50584 + return -DWC_E_NO_MEMORY;
50585 + }
50586 + DWC_CIRCLEQ_INIT_ENTRY(req, queue_entry);
50587 + if (!GET_CORE_IF(pcd)->core_params->opt) {
50588 + if (ep->dwc_ep.num != 0) {
50589 + DWC_ERROR("queue req %p, len %d buf %p\n",
50590 + req_handle, buflen, buf);
50591 + }
50592 + }
50593 +
50594 + req->buf = buf;
50595 + req->dma = dma_buf;
50596 + req->length = buflen;
50597 + req->sent_zlp = zero;
50598 + req->priv = req_handle;
50599 + req->dw_align_buf = NULL;
50600 + if ((dma_buf & 0x3) && GET_CORE_IF(pcd)->dma_enable
50601 + && !GET_CORE_IF(pcd)->dma_desc_enable)
50602 + req->dw_align_buf = DWC_DMA_ALLOC(dev, buflen,
50603 + &req->dw_align_buf_dma);
50604 + DWC_SPINLOCK_IRQSAVE(pcd->lock, &flags);
50605 +
50606 +	/*
50607 +	 * After adding the request to the queue: for IN ISOC, wait for the
50608 +	 * "IN Token Received when TX FIFO is empty" interrupt; for OUT ISOC,
50609 +	 * wait for the "OUT Token Received when EP is disabled" interrupt to
50610 +	 * obtain the starting (odd/even) microframe, then start the transfer.
50611 +	 */
50612 + if (ep->dwc_ep.type == DWC_OTG_EP_TYPE_ISOC) {
50613 + if (req != 0) {
50614 + depctl_data_t depctl = {.d32 =
50615 + DWC_READ_REG32(&pcd->core_if->dev_if->
50616 + in_ep_regs[ep->dwc_ep.num]->
50617 + diepctl) };
50618 + ++pcd->request_pending;
50619 +
50620 + DWC_CIRCLEQ_INSERT_TAIL(&ep->queue, req, queue_entry);
50621 + if (ep->dwc_ep.is_in) {
50622 + depctl.b.cnak = 1;
50623 + DWC_WRITE_REG32(&pcd->core_if->dev_if->
50624 + in_ep_regs[ep->dwc_ep.num]->
50625 + diepctl, depctl.d32);
50626 + }
50627 +
50628 + DWC_SPINUNLOCK_IRQRESTORE(pcd->lock, flags);
50629 + }
50630 + return 0;
50631 + }
50632 +
50633 + /*
50634 + * For EP0 IN without premature status, zlp is required?
50635 + */
50636 + if (ep->dwc_ep.num == 0 && ep->dwc_ep.is_in) {
50637 + DWC_DEBUGPL(DBG_PCDV, "%d-OUT ZLP\n", ep->dwc_ep.num);
50638 + //_req->zero = 1;
50639 + }
50640 +
50641 + /* Start the transfer */
50642 + if (DWC_CIRCLEQ_EMPTY(&ep->queue) && !ep->stopped) {
50643 + /* EP0 Transfer? */
50644 + if (ep->dwc_ep.num == 0) {
50645 + switch (pcd->ep0state) {
50646 + case EP0_IN_DATA_PHASE:
50647 + DWC_DEBUGPL(DBG_PCD,
50648 + "%s ep0: EP0_IN_DATA_PHASE\n",
50649 + __func__);
50650 + break;
50651 +
50652 + case EP0_OUT_DATA_PHASE:
50653 + DWC_DEBUGPL(DBG_PCD,
50654 + "%s ep0: EP0_OUT_DATA_PHASE\n",
50655 + __func__);
50656 + if (pcd->request_config) {
50657 + /* Complete STATUS PHASE */
50658 + ep->dwc_ep.is_in = 1;
50659 + pcd->ep0state = EP0_IN_STATUS_PHASE;
50660 + }
50661 + break;
50662 +
50663 + case EP0_IN_STATUS_PHASE:
50664 + DWC_DEBUGPL(DBG_PCD,
50665 + "%s ep0: EP0_IN_STATUS_PHASE\n",
50666 + __func__);
50667 + break;
50668 +
50669 + default:
50670 + DWC_DEBUGPL(DBG_ANY, "ep0: odd state %d\n",
50671 + pcd->ep0state);
50672 + DWC_SPINUNLOCK_IRQRESTORE(pcd->lock, flags);
50673 + return -DWC_E_SHUTDOWN;
50674 + }
50675 +
50676 + ep->dwc_ep.dma_addr = dma_buf;
50677 + ep->dwc_ep.start_xfer_buff = buf;
50678 + ep->dwc_ep.xfer_buff = buf;
50679 + ep->dwc_ep.xfer_len = buflen;
50680 + ep->dwc_ep.xfer_count = 0;
50681 + ep->dwc_ep.sent_zlp = 0;
50682 + ep->dwc_ep.total_len = ep->dwc_ep.xfer_len;
50683 +
50684 + if (zero) {
50685 + if ((ep->dwc_ep.xfer_len %
50686 + ep->dwc_ep.maxpacket == 0)
50687 + && (ep->dwc_ep.xfer_len != 0)) {
50688 + ep->dwc_ep.sent_zlp = 1;
50689 + }
50690 +
50691 + }
50692 +
50693 + dwc_otg_ep0_start_transfer(GET_CORE_IF(pcd),
50694 + &ep->dwc_ep);
50695 + } // non-ep0 endpoints
50696 + else {
50697 +#ifdef DWC_UTE_CFI
50698 + if (ep->dwc_ep.buff_mode != BM_STANDARD) {
50699 + /* store the request length */
50700 + ep->dwc_ep.cfi_req_len = buflen;
50701 + pcd->cfi->ops.build_descriptors(pcd->cfi, pcd,
50702 + ep, req);
50703 + } else {
50704 +#endif
50705 + max_transfer =
50706 + GET_CORE_IF(ep->pcd)->core_params->
50707 + max_transfer_size;
50708 +
50709 + /* Setup and start the Transfer */
50710 + if (req->dw_align_buf){
50711 + if (ep->dwc_ep.is_in)
50712 + dwc_memcpy(req->dw_align_buf,
50713 + buf, buflen);
50714 + ep->dwc_ep.dma_addr =
50715 + req->dw_align_buf_dma;
50716 + ep->dwc_ep.start_xfer_buff =
50717 + req->dw_align_buf;
50718 + ep->dwc_ep.xfer_buff =
50719 + req->dw_align_buf;
50720 + } else {
50721 + ep->dwc_ep.dma_addr = dma_buf;
50722 + ep->dwc_ep.start_xfer_buff = buf;
50723 + ep->dwc_ep.xfer_buff = buf;
50724 + }
50725 + ep->dwc_ep.xfer_len = 0;
50726 + ep->dwc_ep.xfer_count = 0;
50727 + ep->dwc_ep.sent_zlp = 0;
50728 + ep->dwc_ep.total_len = buflen;
50729 +
50730 + ep->dwc_ep.maxxfer = max_transfer;
50731 + if (GET_CORE_IF(pcd)->dma_desc_enable) {
50732 + uint32_t out_max_xfer =
50733 + DDMA_MAX_TRANSFER_SIZE -
50734 + (DDMA_MAX_TRANSFER_SIZE % 4);
50735 + if (ep->dwc_ep.is_in) {
50736 + if (ep->dwc_ep.maxxfer >
50737 + DDMA_MAX_TRANSFER_SIZE) {
50738 + ep->dwc_ep.maxxfer =
50739 + DDMA_MAX_TRANSFER_SIZE;
50740 + }
50741 + } else {
50742 + if (ep->dwc_ep.maxxfer >
50743 + out_max_xfer) {
50744 + ep->dwc_ep.maxxfer =
50745 + out_max_xfer;
50746 + }
50747 + }
50748 + }
50749 + if (ep->dwc_ep.maxxfer < ep->dwc_ep.total_len) {
50750 + ep->dwc_ep.maxxfer -=
50751 + (ep->dwc_ep.maxxfer %
50752 + ep->dwc_ep.maxpacket);
50753 + }
50754 +
50755 + if (zero) {
50756 + if ((ep->dwc_ep.total_len %
50757 + ep->dwc_ep.maxpacket == 0)
50758 + && (ep->dwc_ep.total_len != 0)) {
50759 + ep->dwc_ep.sent_zlp = 1;
50760 + }
50761 + }
50762 +#ifdef DWC_UTE_CFI
50763 + }
50764 +#endif
50765 + dwc_otg_ep_start_transfer(GET_CORE_IF(pcd),
50766 + &ep->dwc_ep);
50767 + }
50768 + }
50769 +
50770 + if (req != 0) {
50771 + ++pcd->request_pending;
50772 + DWC_CIRCLEQ_INSERT_TAIL(&ep->queue, req, queue_entry);
50773 + if (ep->dwc_ep.is_in && ep->stopped
50774 + && !(GET_CORE_IF(pcd)->dma_enable)) {
50775 + /** @todo NGS Create a function for this. */
50776 + diepmsk_data_t diepmsk = {.d32 = 0 };
50777 + diepmsk.b.intktxfemp = 1;
50778 + if (GET_CORE_IF(pcd)->multiproc_int_enable) {
50779 + DWC_MODIFY_REG32(&GET_CORE_IF(pcd)->
50780 + dev_if->dev_global_regs->diepeachintmsk
50781 + [ep->dwc_ep.num], 0,
50782 + diepmsk.d32);
50783 + } else {
50784 + DWC_MODIFY_REG32(&GET_CORE_IF(pcd)->
50785 + dev_if->dev_global_regs->
50786 + diepmsk, 0, diepmsk.d32);
50787 + }
50788 +
50789 + }
50790 + }
50791 + DWC_SPINUNLOCK_IRQRESTORE(pcd->lock, flags);
50792 +
50793 + return 0;
50794 +}
50795 +
50796 +int dwc_otg_pcd_ep_dequeue(dwc_otg_pcd_t * pcd, void *ep_handle,
50797 + void *req_handle)
50798 +{
50799 + dwc_irqflags_t flags;
50800 + dwc_otg_pcd_request_t *req;
50801 + dwc_otg_pcd_ep_t *ep;
50802 +
50803 + ep = get_ep_from_handle(pcd, ep_handle);
50804 + if (!ep || (!ep->desc && ep->dwc_ep.num != 0)) {
50805 + DWC_WARN("bad argument\n");
50806 + return -DWC_E_INVALID;
50807 + }
50808 +
50809 + DWC_SPINLOCK_IRQSAVE(pcd->lock, &flags);
50810 +
50811 + /* make sure it's actually queued on this endpoint */
50812 + DWC_CIRCLEQ_FOREACH(req, &ep->queue, queue_entry) {
50813 + if (req->priv == (void *)req_handle) {
50814 + break;
50815 + }
50816 + }
50817 +
50818 + if (req->priv != (void *)req_handle) {
50819 + DWC_SPINUNLOCK_IRQRESTORE(pcd->lock, flags);
50820 + return -DWC_E_INVALID;
50821 + }
50822 +
50823 + if (!DWC_CIRCLEQ_EMPTY_ENTRY(req, queue_entry)) {
50824 + dwc_otg_request_done(ep, req, -DWC_E_RESTART);
50825 + } else {
50826 + req = NULL;
50827 + }
50828 +
50829 + DWC_SPINUNLOCK_IRQRESTORE(pcd->lock, flags);
50830 +
50831 + return req ? 0 : -DWC_E_SHUTDOWN;
50832 +
50833 +}
50834 +
50835 +/**
50836 + * dwc_otg_pcd_ep_wedge - sets the halt feature and ignores clear requests
50837 + *
50838 + * Use this to stall an endpoint and ignore CLEAR_FEATURE(HALT_ENDPOINT)
50839 + * requests. If the gadget driver clears the halt status, it will
50840 + * automatically unwedge the endpoint.
50841 + *
50842 + * Returns zero on success, else negative DWC error code.
50843 + */
50844 +int dwc_otg_pcd_ep_wedge(dwc_otg_pcd_t * pcd, void *ep_handle)
50845 +{
50846 + dwc_otg_pcd_ep_t *ep;
50847 + dwc_irqflags_t flags;
50848 + int retval = 0;
50849 +
50850 + ep = get_ep_from_handle(pcd, ep_handle);
50851 +
50852 + if ((!ep->desc && ep != &pcd->ep0) ||
50853 + (ep->desc && (ep->desc->bmAttributes == UE_ISOCHRONOUS))) {
50854 + DWC_WARN("%s, bad ep\n", __func__);
50855 + return -DWC_E_INVALID;
50856 + }
50857 +
50858 + DWC_SPINLOCK_IRQSAVE(pcd->lock, &flags);
50859 + if (!DWC_CIRCLEQ_EMPTY(&ep->queue)) {
50860 + DWC_WARN("%d %s XFer In process\n", ep->dwc_ep.num,
50861 + ep->dwc_ep.is_in ? "IN" : "OUT");
50862 + retval = -DWC_E_AGAIN;
50863 + } else {
50864 + /* This code needs to be reviewed */
50865 + if (ep->dwc_ep.is_in == 1 && GET_CORE_IF(pcd)->dma_desc_enable) {
50866 + dtxfsts_data_t txstatus;
50867 + fifosize_data_t txfifosize;
50868 +
50869 + txfifosize.d32 =
50870 + DWC_READ_REG32(&GET_CORE_IF(pcd)->
50871 + core_global_regs->dtxfsiz[ep->dwc_ep.
50872 + tx_fifo_num]);
50873 + txstatus.d32 =
50874 + DWC_READ_REG32(&GET_CORE_IF(pcd)->
50875 + dev_if->in_ep_regs[ep->dwc_ep.num]->
50876 + dtxfsts);
50877 +
50878 + if (txstatus.b.txfspcavail < txfifosize.b.depth) {
50879 + DWC_WARN("%s() Data In Tx Fifo\n", __func__);
50880 + retval = -DWC_E_AGAIN;
50881 + } else {
50882 + if (ep->dwc_ep.num == 0) {
50883 + pcd->ep0state = EP0_STALL;
50884 + }
50885 +
50886 + ep->stopped = 1;
50887 + dwc_otg_ep_set_stall(GET_CORE_IF(pcd),
50888 + &ep->dwc_ep);
50889 + }
50890 + } else {
50891 + if (ep->dwc_ep.num == 0) {
50892 + pcd->ep0state = EP0_STALL;
50893 + }
50894 +
50895 + ep->stopped = 1;
50896 + dwc_otg_ep_set_stall(GET_CORE_IF(pcd), &ep->dwc_ep);
50897 + }
50898 + }
50899 +
50900 + DWC_SPINUNLOCK_IRQRESTORE(pcd->lock, flags);
50901 +
50902 + return retval;
50903 +}
50904 +
50905 +int dwc_otg_pcd_ep_halt(dwc_otg_pcd_t * pcd, void *ep_handle, int value)
50906 +{
50907 + dwc_otg_pcd_ep_t *ep;
50908 + dwc_irqflags_t flags;
50909 + int retval = 0;
50910 +
50911 + ep = get_ep_from_handle(pcd, ep_handle);
50912 +
50913 + if (!ep || (!ep->desc && ep != &pcd->ep0) ||
50914 + (ep->desc && (ep->desc->bmAttributes == UE_ISOCHRONOUS))) {
50915 + DWC_WARN("%s, bad ep\n", __func__);
50916 + return -DWC_E_INVALID;
50917 + }
50918 +
50919 + DWC_SPINLOCK_IRQSAVE(pcd->lock, &flags);
50920 + if (!DWC_CIRCLEQ_EMPTY(&ep->queue)) {
50921 + DWC_WARN("%d %s XFer In process\n", ep->dwc_ep.num,
50922 + ep->dwc_ep.is_in ? "IN" : "OUT");
50923 + retval = -DWC_E_AGAIN;
50924 + } else if (value == 0) {
50925 + dwc_otg_ep_clear_stall(GET_CORE_IF(pcd), &ep->dwc_ep);
50926 + } else if (value == 1) {
50927 + if (ep->dwc_ep.is_in == 1 && GET_CORE_IF(pcd)->dma_desc_enable) {
50928 + dtxfsts_data_t txstatus;
50929 + fifosize_data_t txfifosize;
50930 +
50931 + txfifosize.d32 =
50932 + DWC_READ_REG32(&GET_CORE_IF(pcd)->core_global_regs->
50933 + dtxfsiz[ep->dwc_ep.tx_fifo_num]);
50934 + txstatus.d32 =
50935 + DWC_READ_REG32(&GET_CORE_IF(pcd)->dev_if->
50936 + in_ep_regs[ep->dwc_ep.num]->dtxfsts);
50937 +
50938 + if (txstatus.b.txfspcavail < txfifosize.b.depth) {
50939 + DWC_WARN("%s() Data In Tx Fifo\n", __func__);
50940 + retval = -DWC_E_AGAIN;
50941 + } else {
50942 + if (ep->dwc_ep.num == 0) {
50943 + pcd->ep0state = EP0_STALL;
50944 + }
50945 +
50946 + ep->stopped = 1;
50947 + dwc_otg_ep_set_stall(GET_CORE_IF(pcd),
50948 + &ep->dwc_ep);
50949 + }
50950 + } else {
50951 + if (ep->dwc_ep.num == 0) {
50952 + pcd->ep0state = EP0_STALL;
50953 + }
50954 +
50955 + ep->stopped = 1;
50956 + dwc_otg_ep_set_stall(GET_CORE_IF(pcd), &ep->dwc_ep);
50957 + }
50958 + } else if (value == 2) {
50959 + ep->dwc_ep.stall_clear_flag = 0;
50960 + } else if (value == 3) {
50961 + ep->dwc_ep.stall_clear_flag = 1;
50962 + }
50963 +
50964 + DWC_SPINUNLOCK_IRQRESTORE(pcd->lock, flags);
50965 +
50966 + return retval;
50967 +}
50968 +
50969 +/**
50970 + * This function initiates remote wakeup of the host from suspend state.
50971 + */
50972 +void dwc_otg_pcd_rem_wkup_from_suspend(dwc_otg_pcd_t * pcd, int set)
50973 +{
50974 + dctl_data_t dctl = { 0 };
50975 + dwc_otg_core_if_t *core_if = GET_CORE_IF(pcd);
50976 + dsts_data_t dsts;
50977 +
50978 + dsts.d32 = DWC_READ_REG32(&core_if->dev_if->dev_global_regs->dsts);
50979 + if (!dsts.b.suspsts) {
50980 +		DWC_WARN("Remote wakeup while not in suspend state\n");
50981 + }
50982 + /* Check if DEVICE_REMOTE_WAKEUP feature enabled */
50983 + if (pcd->remote_wakeup_enable) {
50984 + if (set) {
50985 +
50986 + if (core_if->adp_enable) {
50987 + gpwrdn_data_t gpwrdn;
50988 +
50989 + dwc_otg_adp_probe_stop(core_if);
50990 +
50991 + /* Mask SRP detected interrupt from Power Down Logic */
50992 + gpwrdn.d32 = 0;
50993 + gpwrdn.b.srp_det_msk = 1;
50994 + DWC_MODIFY_REG32(&core_if->
50995 + core_global_regs->gpwrdn,
50996 + gpwrdn.d32, 0);
50997 +
50998 + /* Disable Power Down Logic */
50999 + gpwrdn.d32 = 0;
51000 + gpwrdn.b.pmuactv = 1;
51001 + DWC_MODIFY_REG32(&core_if->
51002 + core_global_regs->gpwrdn,
51003 + gpwrdn.d32, 0);
51004 +
51005 + /*
51006 + * Initialize the Core for Device mode.
51007 + */
51008 + core_if->op_state = B_PERIPHERAL;
51009 + dwc_otg_core_init(core_if);
51010 + dwc_otg_enable_global_interrupts(core_if);
51011 + cil_pcd_start(core_if);
51012 +
51013 + dwc_otg_initiate_srp(core_if);
51014 + }
51015 +
51016 + dctl.b.rmtwkupsig = 1;
51017 + DWC_MODIFY_REG32(&core_if->dev_if->dev_global_regs->
51018 + dctl, 0, dctl.d32);
51019 + DWC_DEBUGPL(DBG_PCD, "Set Remote Wakeup\n");
51020 +
51021 + dwc_mdelay(2);
51022 + DWC_MODIFY_REG32(&core_if->dev_if->dev_global_regs->
51023 + dctl, dctl.d32, 0);
51024 + DWC_DEBUGPL(DBG_PCD, "Clear Remote Wakeup\n");
51025 + }
51026 + } else {
51027 + DWC_DEBUGPL(DBG_PCD, "Remote Wakeup is disabled\n");
51028 + }
51029 +}
51030 +
51031 +#ifdef CONFIG_USB_DWC_OTG_LPM
51032 +/**
51033 + * This function initiates remote wakeup of the host from L1 sleep state.
51034 + */
51035 +void dwc_otg_pcd_rem_wkup_from_sleep(dwc_otg_pcd_t * pcd, int set)
51036 +{
51037 + glpmcfg_data_t lpmcfg;
51038 + dwc_otg_core_if_t *core_if = GET_CORE_IF(pcd);
51039 +
51040 + lpmcfg.d32 = DWC_READ_REG32(&core_if->core_global_regs->glpmcfg);
51041 +
51042 + /* Check if we are in L1 state */
51043 + if (!lpmcfg.b.prt_sleep_sts) {
51044 + DWC_DEBUGPL(DBG_PCD, "Device is not in sleep state\n");
51045 + return;
51046 + }
51047 +
51048 + /* Check if host allows remote wakeup */
51049 + if (!lpmcfg.b.rem_wkup_en) {
51050 + DWC_DEBUGPL(DBG_PCD, "Host does not allow remote wakeup\n");
51051 + return;
51052 + }
51053 +
51054 + /* Check if Resume OK */
51055 + if (!lpmcfg.b.sleep_state_resumeok) {
51056 + DWC_DEBUGPL(DBG_PCD, "Sleep state resume is not OK\n");
51057 + return;
51058 + }
51059 +
51060 + lpmcfg.d32 = DWC_READ_REG32(&core_if->core_global_regs->glpmcfg);
51061 + lpmcfg.b.en_utmi_sleep = 0;
51062 + lpmcfg.b.hird_thres &= (~(1 << 4));
51063 + DWC_WRITE_REG32(&core_if->core_global_regs->glpmcfg, lpmcfg.d32);
51064 +
51065 + if (set) {
51066 + dctl_data_t dctl = {.d32 = 0 };
51067 + dctl.b.rmtwkupsig = 1;
51068 +		/* Set RmtWkUpSig bit to start remote wakeup signaling.
51069 + * Hardware will automatically clear this bit.
51070 + */
51071 + DWC_MODIFY_REG32(&core_if->dev_if->dev_global_regs->dctl,
51072 + 0, dctl.d32);
51073 + DWC_DEBUGPL(DBG_PCD, "Set Remote Wakeup\n");
51074 + }
51075 +
51076 +}
51077 +#endif
51078 +
51079 +/**
51080 + * Performs remote wakeup.
51081 + */
51082 +void dwc_otg_pcd_remote_wakeup(dwc_otg_pcd_t * pcd, int set)
51083 +{
51084 + dwc_otg_core_if_t *core_if = GET_CORE_IF(pcd);
51085 + dwc_irqflags_t flags;
51086 + if (dwc_otg_is_device_mode(core_if)) {
51087 + DWC_SPINLOCK_IRQSAVE(pcd->lock, &flags);
51088 +#ifdef CONFIG_USB_DWC_OTG_LPM
51089 + if (core_if->lx_state == DWC_OTG_L1) {
51090 + dwc_otg_pcd_rem_wkup_from_sleep(pcd, set);
51091 + } else {
51092 +#endif
51093 + dwc_otg_pcd_rem_wkup_from_suspend(pcd, set);
51094 +#ifdef CONFIG_USB_DWC_OTG_LPM
51095 + }
51096 +#endif
51097 + DWC_SPINUNLOCK_IRQRESTORE(pcd->lock, flags);
51098 + }
51099 + return;
51100 +}
51101 +
51102 +void dwc_otg_pcd_disconnect_us(dwc_otg_pcd_t * pcd, int no_of_usecs)
51103 +{
51104 + dwc_otg_core_if_t *core_if = GET_CORE_IF(pcd);
51105 + dctl_data_t dctl = { 0 };
51106 +
51107 + if (dwc_otg_is_device_mode(core_if)) {
51108 + dctl.b.sftdiscon = 1;
51109 + DWC_PRINTF("Soft disconnect for %d useconds\n",no_of_usecs);
51110 + DWC_MODIFY_REG32(&core_if->dev_if->dev_global_regs->dctl, 0, dctl.d32);
51111 + dwc_udelay(no_of_usecs);
51112 + DWC_MODIFY_REG32(&core_if->dev_if->dev_global_regs->dctl, dctl.d32,0);
51113 +
51114 + } else{
51115 + DWC_PRINTF("NOT SUPPORTED IN HOST MODE\n");
51116 + }
51117 + return;
51118 +
51119 +}
51120 +
51121 +int dwc_otg_pcd_wakeup(dwc_otg_pcd_t * pcd)
51122 +{
51123 + dsts_data_t dsts;
51124 + gotgctl_data_t gotgctl;
51125 +
51126 + /*
51127 + * This function starts the Protocol if no session is in progress. If
51128 +	 * This function starts the SRP protocol if no session is in progress. If
51129 + * remote wakeup signaling is started.
51130 + */
51131 +
51132 + /* Check if valid session */
51133 + gotgctl.d32 =
51134 + DWC_READ_REG32(&(GET_CORE_IF(pcd)->core_global_regs->gotgctl));
51135 + if (gotgctl.b.bsesvld) {
51136 + /* Check if suspend state */
51137 + dsts.d32 =
51138 + DWC_READ_REG32(&
51139 + (GET_CORE_IF(pcd)->dev_if->
51140 + dev_global_regs->dsts));
51141 + if (dsts.b.suspsts) {
51142 + dwc_otg_pcd_remote_wakeup(pcd, 1);
51143 + }
51144 + } else {
51145 + dwc_otg_pcd_initiate_srp(pcd);
51146 + }
51147 +
51148 + return 0;
51149 +
51150 +}
51151 +
51152 +/**
51153 + * Start the SRP timer to detect when the SRP does not complete within
51154 + * 6 seconds.
51155 + *
51156 + * @param pcd the pcd structure.
51157 + */
51158 +void dwc_otg_pcd_initiate_srp(dwc_otg_pcd_t * pcd)
51159 +{
51160 + dwc_irqflags_t flags;
51161 + DWC_SPINLOCK_IRQSAVE(pcd->lock, &flags);
51162 + dwc_otg_initiate_srp(GET_CORE_IF(pcd));
51163 + DWC_SPINUNLOCK_IRQRESTORE(pcd->lock, flags);
51164 +}
51165 +
51166 +int dwc_otg_pcd_get_frame_number(dwc_otg_pcd_t * pcd)
51167 +{
51168 + return dwc_otg_get_frame_number(GET_CORE_IF(pcd));
51169 +}
51170 +
51171 +int dwc_otg_pcd_is_lpm_enabled(dwc_otg_pcd_t * pcd)
51172 +{
51173 + return GET_CORE_IF(pcd)->core_params->lpm_enable;
51174 +}
51175 +
51176 +uint32_t get_b_hnp_enable(dwc_otg_pcd_t * pcd)
51177 +{
51178 + return pcd->b_hnp_enable;
51179 +}
51180 +
51181 +uint32_t get_a_hnp_support(dwc_otg_pcd_t * pcd)
51182 +{
51183 + return pcd->a_hnp_support;
51184 +}
51185 +
51186 +uint32_t get_a_alt_hnp_support(dwc_otg_pcd_t * pcd)
51187 +{
51188 + return pcd->a_alt_hnp_support;
51189 +}
51190 +
51191 +int dwc_otg_pcd_get_rmwkup_enable(dwc_otg_pcd_t * pcd)
51192 +{
51193 + return pcd->remote_wakeup_enable;
51194 +}
51195 +
51196 +#endif /* DWC_HOST_ONLY */
51197 --- /dev/null
51198 +++ b/drivers/usb/host/dwc_otg/dwc_otg_pcd.h
51199 @@ -0,0 +1,273 @@
51200 +/* ==========================================================================
51201 + * $File: //dwh/usb_iip/dev/software/otg/linux/drivers/dwc_otg_pcd.h $
51202 + * $Revision: #48 $
51203 + * $Date: 2012/08/10 $
51204 + * $Change: 2047372 $
51205 + *
51206 + * Synopsys HS OTG Linux Software Driver and documentation (hereinafter,
51207 + * "Software") is an Unsupported proprietary work of Synopsys, Inc. unless
51208 + * otherwise expressly agreed to in writing between Synopsys and you.
51209 + *
51210 + * The Software IS NOT an item of Licensed Software or Licensed Product under
51211 + * any End User Software License Agreement or Agreement for Licensed Product
51212 + * with Synopsys or any supplement thereto. You are permitted to use and
51213 + * redistribute this Software in source and binary forms, with or without
51214 + * modification, provided that redistributions of source code must retain this
51215 + * notice. You may not view, use, disclose, copy or distribute this file or
51216 + * any information contained herein except pursuant to this license grant from
51217 + * Synopsys. If you do not agree with this notice, including the disclaimer
51218 + * below, then you are not authorized to use the Software.
51219 + *
51220 + * THIS SOFTWARE IS BEING DISTRIBUTED BY SYNOPSYS SOLELY ON AN "AS IS" BASIS
51221 + * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
51222 + * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
51223 + * ARE HEREBY DISCLAIMED. IN NO EVENT SHALL SYNOPSYS BE LIABLE FOR ANY DIRECT,
51224 + * INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
51225 + * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
51226 + * SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
51227 + * CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
51228 + * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
51229 + * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH
51230 + * DAMAGE.
51231 + * ========================================================================== */
51232 +#ifndef DWC_HOST_ONLY
51233 +#if !defined(__DWC_PCD_H__)
51234 +#define __DWC_PCD_H__
51235 +
51236 +#include "dwc_otg_os_dep.h"
51237 +#include "usb.h"
51238 +#include "dwc_otg_cil.h"
51239 +#include "dwc_otg_pcd_if.h"
51240 +#include "dwc_otg_driver.h"
51241 +
51242 +struct cfiobject;
51243 +
51244 +/**
51245 + * @file
51246 + *
51247 + * This file contains the structures, constants, and interfaces for
51248 + * the Peripheral Controller Driver (PCD).
51249 + *
51250 + * The Peripheral Controller Driver (PCD) for Linux will implement the
51251 + * Gadget API, so that the existing Gadget drivers can be used. For
51252 + * the Mass Storage Function driver the File-backed USB Storage Gadget
51253 + * (FBS) driver will be used. The FBS driver supports the
51254 + * Control-Bulk (CB), Control-Bulk-Interrupt (CBI), and Bulk-Only
51255 + * transports.
51256 + *
51257 + */
51258 +
51259 +/** Invalid DMA Address */
51260 +#define DWC_DMA_ADDR_INVALID (~(dwc_dma_t)0)
51261 +
51262 +/** Max Transfer size for any EP */
51263 +#define DDMA_MAX_TRANSFER_SIZE 65535
51264 +
51265 +/**
51266 + * Get the pointer to the core_if from the pcd pointer.
51267 + */
51268 +#define GET_CORE_IF( _pcd ) (_pcd->core_if)
51269 +
51270 +/**
51271 + * States of EP0.
51272 + */
51273 +typedef enum ep0_state {
51274 + EP0_DISCONNECT, /* no host */
51275 + EP0_IDLE,
51276 + EP0_IN_DATA_PHASE,
51277 + EP0_OUT_DATA_PHASE,
51278 + EP0_IN_STATUS_PHASE,
51279 + EP0_OUT_STATUS_PHASE,
51280 + EP0_STALL,
51281 +} ep0state_e;
51282 +
51283 +/** Forward declaration. */
51284 +struct dwc_otg_pcd;
51285 +
51286 +/** DWC_otg iso request structure.
51287 + *
51288 + */
51289 +typedef struct usb_iso_request dwc_otg_pcd_iso_request_t;
51290 +
51291 +#ifdef DWC_UTE_PER_IO
51292 +
51293 +/**
51294 + * This shall be the exact analogue of the structure of the same type defined
51295 + * in usb_gadget.h. Each descriptor describes one ISO packet: offset, expected length, actual length and status.
51296 + */
51297 +struct dwc_iso_pkt_desc_port {
51298 + uint32_t offset;
51299 + uint32_t length; /* expected length */
51300 + uint32_t actual_length;
51301 + uint32_t status;
51302 +};
51303 +
51304 +struct dwc_iso_xreq_port {
51305 + /** transfer/submission flag */
51306 + uint32_t tr_sub_flags;
51307 + /** Start the request ASAP */
51308 +#define DWC_EREQ_TF_ASAP 0x00000002
51309 + /** Just enqueue the request w/o initiating a transfer */
51310 +#define DWC_EREQ_TF_ENQUEUE 0x00000004
51311 +
51312 + /**
51313 + * count of ISO packets attached to this request - shall
51314 + * not exceed the pio_alloc_pkt_count
51315 + */
51316 + uint32_t pio_pkt_count;
51317 + /** count of ISO packets allocated for this request */
51318 + uint32_t pio_alloc_pkt_count;
51319 + /** number of ISO packet errors */
51320 + uint32_t error_count;
51321 + /** reserved for future extension */
51322 + uint32_t res;
51323 + /** Will be allocated and freed in the UTE gadget and based on the CFC value */
51324 + struct dwc_iso_pkt_desc_port *per_io_frame_descs;
51325 +};
51326 +#endif
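/*
 * Editor's illustrative sketch (not part of the patch): one plausible way a
 * gadget-side caller might fill in a dwc_iso_xreq_port before handing it to
 * dwc_otg_pcd_xiso_ep_queue(). The my_fill_ereq name and the packet geometry
 * are hypothetical; only the field names and DWC_EREQ_TF_ASAP come from the
 * structure declared above.
 */
static void my_fill_ereq(struct dwc_iso_xreq_port *ereq,
			 struct dwc_iso_pkt_desc_port *descs, int npkts)
{
	int i;

	for (i = 0; i < npkts; i++) {
		descs[i].offset = i * 1024;	/* packet position in the buffer */
		descs[i].length = 1024;		/* expected length of this packet */
		descs[i].actual_length = 0;	/* filled in on completion */
		descs[i].status = 0;		/* filled in on completion */
	}

	ereq->per_io_frame_descs = descs;
	ereq->pio_pkt_count = npkts;
	ereq->pio_alloc_pkt_count = npkts;
	ereq->error_count = 0;
	ereq->tr_sub_flags = DWC_EREQ_TF_ASAP;	/* start as soon as possible */
}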
51327 +/** DWC_otg request structure.
51328 + * This structure is a list of requests.
51329 + */
51330 +typedef struct dwc_otg_pcd_request {
51331 + void *priv;
51332 + void *buf;
51333 + dwc_dma_t dma;
51334 + uint32_t length;
51335 + uint32_t actual;
51336 + unsigned sent_zlp:1;
51337 + /**
51338 + * Used instead of original buffer if
51339 +	 * its physical address is not dword-aligned.
51340 + **/
51341 + uint8_t *dw_align_buf;
51342 + dwc_dma_t dw_align_buf_dma;
51343 +
51344 + DWC_CIRCLEQ_ENTRY(dwc_otg_pcd_request) queue_entry;
51345 +#ifdef DWC_UTE_PER_IO
51346 + struct dwc_iso_xreq_port ext_req;
51347 + //void *priv_ereq_nport; /* */
51348 +#endif
51349 +} dwc_otg_pcd_request_t;
51350 +
51351 +DWC_CIRCLEQ_HEAD(req_list, dwc_otg_pcd_request);
51352 +
51353 +/** PCD EP structure.
51354 + * This structure describes an EP, there is an array of EPs in the PCD
51355 + * structure.
51356 + */
51357 +typedef struct dwc_otg_pcd_ep {
51358 + /** USB EP Descriptor */
51359 + const usb_endpoint_descriptor_t *desc;
51360 +
51361 + /** queue of dwc_otg_pcd_requests. */
51362 + struct req_list queue;
51363 + unsigned stopped:1;
51364 + unsigned disabling:1;
51365 + unsigned dma:1;
51366 + unsigned queue_sof:1;
51367 +
51368 +#ifdef DWC_EN_ISOC
51369 + /** ISOC req handle passed */
51370 + void *iso_req_handle;
51371 +#endif //_EN_ISOC_
51372 +
51373 + /** DWC_otg ep data. */
51374 + dwc_ep_t dwc_ep;
51375 +
51376 + /** Pointer to PCD */
51377 + struct dwc_otg_pcd *pcd;
51378 +
51379 + void *priv;
51380 +} dwc_otg_pcd_ep_t;
51381 +
51382 +/** DWC_otg PCD Structure.
51383 + * This structure encapsulates the data for the dwc_otg PCD.
51384 + */
51385 +struct dwc_otg_pcd {
51386 + const struct dwc_otg_pcd_function_ops *fops;
51387 + /** The DWC otg device pointer */
51388 + struct dwc_otg_device *otg_dev;
51389 + /** Core Interface */
51390 + dwc_otg_core_if_t *core_if;
51391 + /** State of EP0 */
51392 + ep0state_e ep0state;
51393 + /** EP0 Request is pending */
51394 + unsigned ep0_pending:1;
51395 + /** Indicates when SET CONFIGURATION Request is in process */
51396 + unsigned request_config:1;
51397 + /** The state of the Remote Wakeup Enable. */
51398 + unsigned remote_wakeup_enable:1;
51399 + /** The state of the B-Device HNP Enable. */
51400 + unsigned b_hnp_enable:1;
51401 + /** The state of A-Device HNP Support. */
51402 + unsigned a_hnp_support:1;
51403 + /** The state of the A-Device Alt HNP support. */
51404 + unsigned a_alt_hnp_support:1;
51405 + /** Count of pending Requests */
51406 + unsigned request_pending;
51407 +
51408 + /** SETUP packet for EP0
51409 + * This structure is allocated as a DMA buffer on PCD initialization
51410 + * with enough space for up to 3 setup packets.
51411 + */
51412 + union {
51413 + usb_device_request_t req;
51414 + uint32_t d32[2];
51415 + } *setup_pkt;
51416 +
51417 + dwc_dma_t setup_pkt_dma_handle;
51418 +
51419 + /* Additional buffer and flag for CTRL_WR premature case */
51420 + uint8_t *backup_buf;
51421 + unsigned data_terminated;
51422 +
51423 + /** 2-byte dma buffer used to return status from GET_STATUS */
51424 + uint16_t *status_buf;
51425 + dwc_dma_t status_buf_dma_handle;
51426 +
51427 + /** EP0 */
51428 + dwc_otg_pcd_ep_t ep0;
51429 +
51430 + /** Array of IN EPs. */
51431 + dwc_otg_pcd_ep_t in_ep[MAX_EPS_CHANNELS - 1];
51432 + /** Array of OUT EPs. */
51433 + dwc_otg_pcd_ep_t out_ep[MAX_EPS_CHANNELS - 1];
51434 + /** number of valid EPs in the above array. */
51435 +// unsigned num_eps : 4;
51436 + dwc_spinlock_t *lock;
51437 +
51438 + /** Tasklet to defer starting of TEST mode transmissions until
51439 + * Status Phase has been completed.
51440 + */
51441 + dwc_tasklet_t *test_mode_tasklet;
51442 +
51443 + /** Tasklet to delay starting of xfer in DMA mode */
51444 + dwc_tasklet_t *start_xfer_tasklet;
51445 +
51446 + /** The test mode to enter when the tasklet is executed. */
51447 + unsigned test_mode;
51448 + /** The cfi_api structure that implements most of the CFI API
51449 + * and OTG specific core configuration functionality
51450 + */
51451 +#ifdef DWC_UTE_CFI
51452 + struct cfiobject *cfi;
51453 +#endif
51454 +
51455 +};
51456 +
51457 +static inline struct device *dwc_otg_pcd_to_dev(struct dwc_otg_pcd *pcd)
51458 +{
51459 + return &pcd->otg_dev->os_dep.platformdev->dev;
51460 +}
51461 +
51462 +//FIXME these functions should be static, and these prototypes should be removed
51463 +extern void dwc_otg_request_nuke(dwc_otg_pcd_ep_t * ep);
51464 +extern void dwc_otg_request_done(dwc_otg_pcd_ep_t * ep,
51465 + dwc_otg_pcd_request_t * req, int32_t status);
51466 +
51467 +void dwc_otg_iso_buffer_done(dwc_otg_pcd_t * pcd, dwc_otg_pcd_ep_t * ep,
51468 + void *req_handle);
51469 +
51470 +extern void do_test_mode(void *data);
51471 +#endif
51472 +#endif /* DWC_HOST_ONLY */
51473 --- /dev/null
51474 +++ b/drivers/usb/host/dwc_otg/dwc_otg_pcd_if.h
51475 @@ -0,0 +1,361 @@
51476 +/* ==========================================================================
51477 + * $File: //dwh/usb_iip/dev/software/otg/linux/drivers/dwc_otg_pcd_if.h $
51478 + * $Revision: #11 $
51479 + * $Date: 2011/10/26 $
51480 + * $Change: 1873028 $
51481 + *
51482 + * Synopsys HS OTG Linux Software Driver and documentation (hereinafter,
51483 + * "Software") is an Unsupported proprietary work of Synopsys, Inc. unless
51484 + * otherwise expressly agreed to in writing between Synopsys and you.
51485 + *
51486 + * The Software IS NOT an item of Licensed Software or Licensed Product under
51487 + * any End User Software License Agreement or Agreement for Licensed Product
51488 + * with Synopsys or any supplement thereto. You are permitted to use and
51489 + * redistribute this Software in source and binary forms, with or without
51490 + * modification, provided that redistributions of source code must retain this
51491 + * notice. You may not view, use, disclose, copy or distribute this file or
51492 + * any information contained herein except pursuant to this license grant from
51493 + * Synopsys. If you do not agree with this notice, including the disclaimer
51494 + * below, then you are not authorized to use the Software.
51495 + *
51496 + * THIS SOFTWARE IS BEING DISTRIBUTED BY SYNOPSYS SOLELY ON AN "AS IS" BASIS
51497 + * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
51498 + * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
51499 + * ARE HEREBY DISCLAIMED. IN NO EVENT SHALL SYNOPSYS BE LIABLE FOR ANY DIRECT,
51500 + * INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
51501 + * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
51502 + * SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
51503 + * CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
51504 + * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
51505 + * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH
51506 + * DAMAGE.
51507 + * ========================================================================== */
51508 +#ifndef DWC_HOST_ONLY
51509 +
51510 +#if !defined(__DWC_PCD_IF_H__)
51511 +#define __DWC_PCD_IF_H__
51512 +
51513 +//#include "dwc_os.h"
51514 +#include "dwc_otg_core_if.h"
51515 +#include "dwc_otg_driver.h"
51516 +
51517 +/** @file
51518 + * This file defines DWC_OTG PCD Core API.
51519 + */
51520 +
51521 +struct dwc_otg_pcd;
51522 +typedef struct dwc_otg_pcd dwc_otg_pcd_t;
51523 +
51524 +/** Maxpacket size for EP0 */
51525 +#define MAX_EP0_SIZE 64
51526 +/** Maxpacket size for any EP */
51527 +#define MAX_PACKET_SIZE 1024
51528 +
51529 +/** @name Function Driver Callbacks */
51530 +/** @{ */
51531 +
51532 +/** This function will be called whenever a previously queued request has
51533 + * completed. The status value will be set to -DWC_E_SHUTDOWN to indicated a
51534 + * failed or aborted transfer, or -DWC_E_RESTART to indicate the device was reset,
51535 + * or -DWC_E_TIMEOUT to indicate it timed out, or -DWC_E_INVALID to indicate invalid
51536 + * parameters. */
51537 +typedef int (*dwc_completion_cb_t) (dwc_otg_pcd_t * pcd, void *ep_handle,
51538 + void *req_handle, int32_t status,
51539 + uint32_t actual);
51540 +/**
51541 + * This function will be called whenever a previousle queued ISOC request has
51542 + * This function will be called whenever a previously queued ISOC request has
51543 + * function.
51544 + * The status of each ISOC packet could be read using dwc_otg_pcd_get_iso_packet_*
51545 + * functions.
51546 + */
51547 +typedef int (*dwc_isoc_completion_cb_t) (dwc_otg_pcd_t * pcd, void *ep_handle,
51548 + void *req_handle, int proc_buf_num);
51549 +/** This function should handle any SETUP request that cannot be handled by the
51550 + * PCD Core. This includes most GET_DESCRIPTORs, SET_CONFIGs, any
51551 + * class-specific requests, etc. The function must be non-blocking.
51552 + *
51553 + * Returns 0 on success.
51554 + * Returns -DWC_E_NOT_SUPPORTED if the request is not supported.
51555 + * Returns -DWC_E_INVALID if the setup request had invalid parameters or bytes.
51556 + * Returns -DWC_E_SHUTDOWN on any other error. */
51557 +typedef int (*dwc_setup_cb_t) (dwc_otg_pcd_t * pcd, uint8_t * bytes);
51558 +/** This is called whenever the device has been disconnected. The function
51559 + * driver should take appropriate action to clean up all pending requests in the
51560 + * PCD Core, remove all endpoints (except ep0), and initialize back to reset
51561 + * state. */
51562 +typedef int (*dwc_disconnect_cb_t) (dwc_otg_pcd_t * pcd);
51563 +/** This function is called when device has been connected. */
51564 +typedef int (*dwc_connect_cb_t) (dwc_otg_pcd_t * pcd, int speed);
51565 +/** This function is called when device has been suspended */
51566 +typedef int (*dwc_suspend_cb_t) (dwc_otg_pcd_t * pcd);
51567 +/** This function is called when device has received LPM tokens, i.e.
51568 + * device has been sent to sleep state. */
51569 +typedef int (*dwc_sleep_cb_t) (dwc_otg_pcd_t * pcd);
51570 +/** This function is called when device has been resumed
51571 + * from suspend(L2) or L1 sleep state. */
51572 +typedef int (*dwc_resume_cb_t) (dwc_otg_pcd_t * pcd);
51573 +/** This function is called whenever the HNP parameters have been changed.
51574 + * User can call get_b_hnp_enable, get_a_hnp_support, get_a_alt_hnp_support functions
51575 + * to get hnp parameters. */
51576 +typedef int (*dwc_hnp_params_changed_cb_t) (dwc_otg_pcd_t * pcd);
51577 +/** This function is called whenever USB RESET is detected. */
51578 +typedef int (*dwc_reset_cb_t) (dwc_otg_pcd_t * pcd);
51579 +
51580 +typedef int (*cfi_setup_cb_t) (dwc_otg_pcd_t * pcd, void *ctrl_req_bytes);
51581 +
51582 +/**
51583 + *
51584 + * @param ep_handle Void pointer to the usb_ep structure
51585 + * @param ereq_port Pointer to the extended request structure created in the
51586 + * portable part.
51587 + */
51588 +typedef int (*xiso_completion_cb_t) (dwc_otg_pcd_t * pcd, void *ep_handle,
51589 + void *req_handle, int32_t status,
51590 + void *ereq_port);
51591 +/** Function Driver Ops Data Structure */
51592 +struct dwc_otg_pcd_function_ops {
51593 + dwc_connect_cb_t connect;
51594 + dwc_disconnect_cb_t disconnect;
51595 + dwc_setup_cb_t setup;
51596 + dwc_completion_cb_t complete;
51597 + dwc_isoc_completion_cb_t isoc_complete;
51598 + dwc_suspend_cb_t suspend;
51599 + dwc_sleep_cb_t sleep;
51600 + dwc_resume_cb_t resume;
51601 + dwc_reset_cb_t reset;
51602 + dwc_hnp_params_changed_cb_t hnp_changed;
51603 + cfi_setup_cb_t cfi_setup;
51604 +#ifdef DWC_UTE_PER_IO
51605 + xiso_completion_cb_t xisoc_complete;
51606 +#endif
51607 +};
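/*
 * Editor's illustrative sketch (not part of the patch): a minimal function
 * driver binding to the PCD core through the ops table above. The my_*
 * names and the callback bodies are hypothetical; dwc_otg_pcd_init(),
 * dwc_otg_pcd_start() and the DWC error codes are the ones documented in
 * this header.
 */
static int my_setup(dwc_otg_pcd_t *pcd, uint8_t *bytes)
{
	/* Decode the 8-byte SETUP packet; queue an EP0 response if handled. */
	return -DWC_E_NOT_SUPPORTED;	/* unhandled requests get STALLed */
}

static int my_complete(dwc_otg_pcd_t *pcd, void *ep_handle,
		       void *req_handle, int32_t status, uint32_t actual)
{
	/* Hand the finished request (req_handle) back to the gadget layer. */
	return 0;
}

static const struct dwc_otg_pcd_function_ops my_fops = {
	.setup = my_setup,
	.complete = my_complete,
};

/* After dwc_otg_pcd_init(otg_dev) has returned a pcd pointer:
 *	dwc_otg_pcd_start(pcd, &my_fops);
 */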
51608 +/** @} */
51609 +
51610 +/** @name Function Driver Functions */
51611 +/** @{ */
51612 +
51613 +/** Call this function to get a pointer to dwc_otg_pcd_t;
51614 + * this pointer will be used for all PCD API functions.
51615 + *
51616 + * @param otg_dev The DWC_OTG device
51617 + */
51618 +extern dwc_otg_pcd_t *dwc_otg_pcd_init(dwc_otg_device_t *otg_dev);
51619 +
51620 +/** Frees PCD allocated by dwc_otg_pcd_init
51621 + *
51622 + * @param pcd The PCD
51623 + */
51624 +extern void dwc_otg_pcd_remove(dwc_otg_pcd_t * pcd);
51625 +
51626 +/** Call this to bind the function driver to the PCD Core.
51627 + *
51628 + * @param pcd Pointer to the dwc_otg_pcd_t returned by the dwc_otg_pcd_init function.
51629 + * @param fops The Function Driver Ops data structure containing pointers to all callbacks.
51630 + */
51631 +extern void dwc_otg_pcd_start(dwc_otg_pcd_t * pcd,
51632 + const struct dwc_otg_pcd_function_ops *fops);
51633 +
51634 +/** Enables an endpoint for use. This function enables an endpoint in
51635 + * the PCD. The endpoint is described by the ep_desc which has the
51636 + * same format as a USB ep descriptor. The ep_handle parameter is used to refer
51637 + * to the endpoint from other API functions and in callbacks. Normally this
51638 + * should be called after a SET_CONFIGURATION/SET_INTERFACE to configure the
51639 + * core for that interface.
51640 + *
51641 + * Returns -DWC_E_INVALID if invalid parameters were passed.
51642 + * Returns -DWC_E_SHUTDOWN if any other error occurred.
51643 + * Returns 0 on success.
51644 + *
51645 + * @param pcd The PCD
51646 + * @param ep_desc Endpoint descriptor
51647 + * @param usb_ep Handle on endpoint, that will be used to identify endpoint.
51648 + */
51649 +extern int dwc_otg_pcd_ep_enable(dwc_otg_pcd_t * pcd,
51650 + const uint8_t * ep_desc, void *usb_ep);
51651 +
51652 +/** Disable the endpoint referenced by ep_handle.
51653 + *
51654 + * Returns -DWC_E_INVALID if invalid parameters were passed.
51655 + * Returns -DWC_E_SHUTDOWN if any other error occurred.
51656 + * Returns 0 on success. */
51657 +extern int dwc_otg_pcd_ep_disable(dwc_otg_pcd_t * pcd, void *ep_handle);
51658 +
51659 +/** Queue a data transfer request on the endpoint referenced by ep_handle.
51660 + * After the transfer completes, the complete callback will be called with
51661 + * the request status.
51662 + *
51663 + * @param pcd The PCD
51664 + * @param ep_handle The handle of the endpoint
51665 + * @param buf The buffer for the data
51666 + * @param dma_buf The DMA buffer for the data
51667 + * @param buflen The length of the data transfer
51668 + * @param zero Specifies whether to send zero length last packet.
51669 + * @param req_handle Set this handle to any value to use to reference this
51670 + * request in the ep_dequeue function or from the complete callback
51671 + * @param atomic_alloc Whether the driver needs to perform atomic allocations
51672 + * for internal data structures.
51673 + *
51674 + * Returns -DWC_E_INVALID if invalid parameters were passed.
51675 + * Returns -DWC_E_SHUTDOWN if any other error occurred.
51676 + * Returns 0 on success. */
51677 +extern int dwc_otg_pcd_ep_queue(dwc_otg_pcd_t * pcd, void *ep_handle,
51678 + uint8_t * buf, dwc_dma_t dma_buf,
51679 + uint32_t buflen, int zero, void *req_handle,
51680 + int atomic_alloc);
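/*
 * Editor's illustrative sketch (not part of the patch): enabling an endpoint
 * and queueing one transfer with the two calls declared above. The
 * my_queue_one name and its caller-owned arguments (ep_desc, usb_ep, buf,
 * dma_buf, buflen, usb_req) are hypothetical; the return codes follow the
 * comments above.
 */
static int my_queue_one(dwc_otg_pcd_t *pcd, const uint8_t *ep_desc, void *usb_ep,
			uint8_t *buf, dwc_dma_t dma_buf, uint32_t buflen,
			void *usb_req)
{
	int rc;

	rc = dwc_otg_pcd_ep_enable(pcd, ep_desc, usb_ep);
	if (rc)
		return rc;		/* -DWC_E_INVALID or -DWC_E_SHUTDOWN */

	/* zero = 0: no trailing zero-length packet;
	 * atomic_alloc = 1: internal allocations must not sleep. */
	rc = dwc_otg_pcd_ep_queue(pcd, usb_ep, buf, dma_buf, buflen, 0, usb_req, 1);

	/* On completion, fops->complete runs with usb_req as req_handle. */
	return rc;
}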
51681 +#ifdef DWC_UTE_PER_IO
51682 +/**
51683 + *
51684 + * @param ereq_nonport Pointer to the extended request part of the
51685 + * usb_request structure defined in usb_gadget.h file.
51686 + */
51687 +extern int dwc_otg_pcd_xiso_ep_queue(dwc_otg_pcd_t * pcd, void *ep_handle,
51688 + uint8_t * buf, dwc_dma_t dma_buf,
51689 + uint32_t buflen, int zero,
51690 + void *req_handle, int atomic_alloc,
51691 + void *ereq_nonport);
51692 +
51693 +#endif
51694 +
51695 +/** De-queue the specified data transfer that has not yet completed.
51696 + *
51697 + * Returns -DWC_E_INVALID if invalid parameters were passed.
51698 + * Returns -DWC_E_SHUTDOWN if any other error occurred.
51699 + * Returns 0 on success. */
51700 +extern int dwc_otg_pcd_ep_dequeue(dwc_otg_pcd_t * pcd, void *ep_handle,
51701 + void *req_handle);
51702 +
51703 +/** Halt (STALL) an endpoint or clear it.
51704 + *
51705 + * Returns -DWC_E_INVALID if invalid parameters were passed.
51706 + * Returns -DWC_E_SHUTDOWN if any other error occurred.
51707 + * Returns -DWC_E_AGAIN if the STALL cannot be sent and must be tried again later.
51708 + * Returns 0 on success. */
51709 +extern int dwc_otg_pcd_ep_halt(dwc_otg_pcd_t * pcd, void *ep_handle, int value);
51710 +
51711 +/** Halts (STALLs) an endpoint and ignores CLEAR_FEATURE(HALT_ENDPOINT) requests (wedge). */
51712 +extern int dwc_otg_pcd_ep_wedge(dwc_otg_pcd_t * pcd, void *ep_handle);
51713 +
51714 +/** This function should be called on every hardware interrupt */
51715 +extern int32_t dwc_otg_pcd_handle_intr(dwc_otg_pcd_t * pcd);
51716 +
51717 +/** This function returns current frame number */
51718 +extern int dwc_otg_pcd_get_frame_number(dwc_otg_pcd_t * pcd);
51719 +
51720 +/**
51721 + * Start isochronous transfers on the endpoint referenced by ep_handle.
51722 + * For isochronous transfers double buffering is used.
51723 + * After each buffer is processed, the complete callback will be called with
51724 + * the status for each transaction.
51725 + *
51726 + * @param pcd The PCD
51727 + * @param ep_handle The handle of the endpoint
51728 + * @param buf0 The virtual address of first data buffer
51729 + * @param buf1 The virtual address of second data buffer
51730 + * @param dma0 The DMA address of first data buffer
51731 + * @param dma1 The DMA address of second data buffer
51732 + * @param sync_frame Data pattern frame number
51733 + * @param dp_frame Data size for pattern frame
51734 + * @param data_per_frame Data size for regular frame
51735 + * @param start_frame Frame number to start transfers, if -1 then start transfers ASAP.
51736 + * @param buf_proc_intrvl Interval of ISOC Buffer processing
51737 + * @param req_handle Handle of ISOC request
51738 + * @param atomic_alloc Specifies whether to perform atomic allocation for
51739 + * internal data structures.
51740 + *
51741 + * Returns -DWC_E_NO_MEMORY if there is not enough memory.
51742 + * Returns -DWC_E_INVALID if incorrect arguments are passed to the function.
51743 + * Returns -DWC_E_SHUTDOWN for any other error.
51744 + * Returns 0 on success
51745 + */
51746 +extern int dwc_otg_pcd_iso_ep_start(dwc_otg_pcd_t * pcd, void *ep_handle,
51747 + uint8_t * buf0, uint8_t * buf1,
51748 + dwc_dma_t dma0, dwc_dma_t dma1,
51749 + int sync_frame, int dp_frame,
51750 + int data_per_frame, int start_frame,
51751 + int buf_proc_intrvl, void *req_handle,
51752 + int atomic_alloc);
51753 +
51754 +/** Stop ISOC transfers on endpoint referenced by ep_handle.
51755 + *
51756 + * @param pcd The PCD
51757 + * @param ep_handle The handle of the endpoint
51758 + * @param req_handle Handle of ISOC request
51759 + *
51760 + * Returns -DWC_E_INVALID if incorrect arguments are passed to the function
51761 + * Returns 0 on success
51762 + */
51763 +int dwc_otg_pcd_iso_ep_stop(dwc_otg_pcd_t * pcd, void *ep_handle,
51764 + void *req_handle);
51765 +
51766 +/** Get ISOC packet status.
51767 + *
51768 + * @param pcd The PCD
51769 + * @param ep_handle The handle of the endpoint
51770 + * @param iso_req_handle Isochronous request handle
51771 + * @param packet Number of packet
51772 + * @param status Out parameter for returning status
51773 + * @param actual Out parameter for returning actual length
51774 + * @param offset Out parameter for returning offset
51775 + *
51776 + */
51777 +extern void dwc_otg_pcd_get_iso_packet_params(dwc_otg_pcd_t * pcd,
51778 + void *ep_handle,
51779 + void *iso_req_handle, int packet,
51780 + int *status, int *actual,
51781 + int *offset);
51782 +
51783 +/** Get ISOC packet count.
51784 + *
51785 + * @param pcd The PCD
51786 + * @param ep_handle The handle of the endpoint
51787 + * @param iso_req_handle Isochronous request handle
51788 + */
51789 +extern int dwc_otg_pcd_get_iso_packet_count(dwc_otg_pcd_t * pcd,
51790 + void *ep_handle,
51791 + void *iso_req_handle);
51792 +
51793 +/** This function starts the SRP Protocol if no session is in progress. If
51794 + * a session is already in progress, but the device is suspended,
51795 + * remote wakeup signaling is started.
51796 + */
51797 +extern int dwc_otg_pcd_wakeup(dwc_otg_pcd_t * pcd);
51798 +
51799 +/** This function returns 1 if LPM support is enabled, and 0 otherwise. */
51800 +extern int dwc_otg_pcd_is_lpm_enabled(dwc_otg_pcd_t * pcd);
51801 +
51802 +/** This function returns 1 if remote wakeup is allowed and 0, otherwise. */
51803 +extern int dwc_otg_pcd_get_rmwkup_enable(dwc_otg_pcd_t * pcd);
51804 +
51805 +/** Initiate SRP */
51806 +extern void dwc_otg_pcd_initiate_srp(dwc_otg_pcd_t * pcd);
51807 +
51808 +/** Starts remote wakeup signaling. */
51809 +extern void dwc_otg_pcd_remote_wakeup(dwc_otg_pcd_t * pcd, int set);
51810 +
51811 +/** Starts microsecond soft disconnect. */
51812 +extern void dwc_otg_pcd_disconnect_us(dwc_otg_pcd_t * pcd, int no_of_usecs);
51813 +/** This function returns whether device is dualspeed.*/
51814 +extern uint32_t dwc_otg_pcd_is_dualspeed(dwc_otg_pcd_t * pcd);
51815 +
51816 +/** This function returns whether device is otg. */
51817 +extern uint32_t dwc_otg_pcd_is_otg(dwc_otg_pcd_t * pcd);
51818 +
51819 +/** These functions allow the HNP parameters to be read. */
51820 +extern uint32_t get_b_hnp_enable(dwc_otg_pcd_t * pcd);
51821 +extern uint32_t get_a_hnp_support(dwc_otg_pcd_t * pcd);
51822 +extern uint32_t get_a_alt_hnp_support(dwc_otg_pcd_t * pcd);
51823 +
51824 +/** CFI specific Interface functions */
51825 +/** Allocate a cfi buffer */
51826 +extern uint8_t *cfiw_ep_alloc_buffer(dwc_otg_pcd_t * pcd, void *pep,
51827 + dwc_dma_t * addr, size_t buflen,
51828 + int flags);
51829 +
51830 +/******************************************************************************/
51831 +
51832 +/** @} */
51833 +
51834 +#endif /* __DWC_PCD_IF_H__ */
51835 +
51836 +#endif /* DWC_HOST_ONLY */
51837 --- /dev/null
51838 +++ b/drivers/usb/host/dwc_otg/dwc_otg_pcd_intr.c
51839 @@ -0,0 +1,5148 @@
51840 +/* ==========================================================================
51841 + * $File: //dwh/usb_iip/dev/software/otg/linux/drivers/dwc_otg_pcd_intr.c $
51842 + * $Revision: #116 $
51843 + * $Date: 2012/08/10 $
51844 + * $Change: 2047372 $
51845 + *
51846 + * Synopsys HS OTG Linux Software Driver and documentation (hereinafter,
51847 + * "Software") is an Unsupported proprietary work of Synopsys, Inc. unless
51848 + * otherwise expressly agreed to in writing between Synopsys and you.
51849 + *
51850 + * The Software IS NOT an item of Licensed Software or Licensed Product under
51851 + * any End User Software License Agreement or Agreement for Licensed Product
51852 + * with Synopsys or any supplement thereto. You are permitted to use and
51853 + * redistribute this Software in source and binary forms, with or without
51854 + * modification, provided that redistributions of source code must retain this
51855 + * notice. You may not view, use, disclose, copy or distribute this file or
51856 + * any information contained herein except pursuant to this license grant from
51857 + * Synopsys. If you do not agree with this notice, including the disclaimer
51858 + * below, then you are not authorized to use the Software.
51859 + *
51860 + * THIS SOFTWARE IS BEING DISTRIBUTED BY SYNOPSYS SOLELY ON AN "AS IS" BASIS
51861 + * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
51862 + * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
51863 + * ARE HEREBY DISCLAIMED. IN NO EVENT SHALL SYNOPSYS BE LIABLE FOR ANY DIRECT,
51864 + * INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
51865 + * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
51866 + * SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
51867 + * CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
51868 + * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
51869 + * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH
51870 + * DAMAGE.
51871 + * ========================================================================== */
51872 +#ifndef DWC_HOST_ONLY
51873 +
51874 +#include "dwc_otg_pcd.h"
51875 +
51876 +#ifdef DWC_UTE_CFI
51877 +#include "dwc_otg_cfi.h"
51878 +#endif
51879 +
51880 +#ifdef DWC_UTE_PER_IO
51881 +extern void complete_xiso_ep(dwc_otg_pcd_ep_t * ep);
51882 +#endif
51883 +//#define PRINT_CFI_DMA_DESCS
51884 +
51885 +#define DEBUG_EP0
51886 +
51887 +/**
51888 + * This function updates OTG.
51889 + */
51890 +static void dwc_otg_pcd_update_otg(dwc_otg_pcd_t * pcd, const unsigned reset)
51891 +{
51892 +
51893 + if (reset) {
51894 + pcd->b_hnp_enable = 0;
51895 + pcd->a_hnp_support = 0;
51896 + pcd->a_alt_hnp_support = 0;
51897 + }
51898 +
51899 + if (pcd->fops->hnp_changed) {
51900 + pcd->fops->hnp_changed(pcd);
51901 + }
51902 +}
51903 +
51904 +/** @file
51905 + * This file contains the implementation of the PCD Interrupt handlers.
51906 + *
51907 + * The PCD handles the device interrupts. Many conditions can cause a
51908 + * device interrupt. When an interrupt occurs, the device interrupt
51909 + * service routine determines the cause of the interrupt and
51910 + * dispatches handling to the appropriate function. These interrupt
51911 + * handling functions are described below.
51912 + * All interrupt registers are processed from LSB to MSB.
51913 + */
51914 +
51915 +/**
51916 + * This function prints the ep0 state for debug purposes.
51917 + */
51918 +static inline void print_ep0_state(dwc_otg_pcd_t * pcd)
51919 +{
51920 +#ifdef DEBUG
51921 + char str[40];
51922 +
51923 + switch (pcd->ep0state) {
51924 + case EP0_DISCONNECT:
51925 + dwc_strcpy(str, "EP0_DISCONNECT");
51926 + break;
51927 + case EP0_IDLE:
51928 + dwc_strcpy(str, "EP0_IDLE");
51929 + break;
51930 + case EP0_IN_DATA_PHASE:
51931 + dwc_strcpy(str, "EP0_IN_DATA_PHASE");
51932 + break;
51933 + case EP0_OUT_DATA_PHASE:
51934 + dwc_strcpy(str, "EP0_OUT_DATA_PHASE");
51935 + break;
51936 + case EP0_IN_STATUS_PHASE:
51937 + dwc_strcpy(str, "EP0_IN_STATUS_PHASE");
51938 + break;
51939 + case EP0_OUT_STATUS_PHASE:
51940 + dwc_strcpy(str, "EP0_OUT_STATUS_PHASE");
51941 + break;
51942 + case EP0_STALL:
51943 + dwc_strcpy(str, "EP0_STALL");
51944 + break;
51945 + default:
51946 + dwc_strcpy(str, "EP0_INVALID");
51947 + }
51948 +
51949 + DWC_DEBUGPL(DBG_ANY, "%s(%d)\n", str, pcd->ep0state);
51950 +#endif
51951 +}
51952 +
51953 +/**
51954 + * This function calculate the size of the payload in the memory
51955 + * This function calculates the size of the payload in the memory
51956 + * for out endpoints and prints the size for debug purposes (used in
51957 + */
51958 +static inline void print_memory_payload(dwc_otg_pcd_t * pcd, dwc_ep_t * ep)
51959 +{
51960 +#ifdef DEBUG
51961 + deptsiz_data_t deptsiz_init = {.d32 = 0 };
51962 + deptsiz_data_t deptsiz_updt = {.d32 = 0 };
51963 + int pack_num;
51964 + unsigned payload;
51965 +
51966 + deptsiz_init.d32 = pcd->core_if->start_doeptsiz_val[ep->num];
51967 + deptsiz_updt.d32 =
51968 + DWC_READ_REG32(&pcd->core_if->dev_if->
51969 + out_ep_regs[ep->num]->doeptsiz);
51970 + /* Payload will be */
51971 + payload = deptsiz_init.b.xfersize - deptsiz_updt.b.xfersize;
51972 + /* Packet count is decremented every time a packet
51973 +	 * is written to the RxFIFO, not to the external memory.
51974 +	 * So, if payload == 0, it means no packet was sent to external memory. */
51975 + pack_num = (!payload) ? 0 : (deptsiz_init.b.pktcnt - deptsiz_updt.b.pktcnt);
51976 + DWC_DEBUGPL(DBG_PCDV,
51977 + "Payload for EP%d-%s\n",
51978 + ep->num, (ep->is_in ? "IN" : "OUT"));
51979 + DWC_DEBUGPL(DBG_PCDV,
51980 + "Number of transfered bytes = 0x%08x\n", payload);
51981 + DWC_DEBUGPL(DBG_PCDV,
51982 + "Number of transfered packets = %d\n", pack_num);
51983 +#endif
51984 +}
51985 +
51986 +
51987 +#ifdef DWC_UTE_CFI
51988 +static inline void print_desc(struct dwc_otg_dma_desc *ddesc,
51989 + const uint8_t * epname, int descnum)
51990 +{
51991 + CFI_INFO
51992 + ("%s DMA_DESC(%d) buf=0x%08x bytes=0x%04x; sp=0x%x; l=0x%x; sts=0x%02x; bs=0x%02x\n",
51993 + epname, descnum, ddesc->buf, ddesc->status.b.bytes,
51994 + ddesc->status.b.sp, ddesc->status.b.l, ddesc->status.b.sts,
51995 + ddesc->status.b.bs);
51996 +}
51997 +#endif
51998 +
51999 +/**
52000 + * This function returns a pointer to the IN EP struct with number ep_num
52001 + */
52002 +static inline dwc_otg_pcd_ep_t *get_in_ep(dwc_otg_pcd_t * pcd, uint32_t ep_num)
52003 +{
52004 + int i;
52005 + int num_in_eps = GET_CORE_IF(pcd)->dev_if->num_in_eps;
52006 + if (ep_num == 0) {
52007 + return &pcd->ep0;
52008 + } else {
52009 + for (i = 0; i < num_in_eps; ++i) {
52010 + if (pcd->in_ep[i].dwc_ep.num == ep_num)
52011 + return &pcd->in_ep[i];
52012 + }
52013 + return 0;
52014 + }
52015 +}
52016 +
52017 +/**
52018 + * This function returns a pointer to the OUT EP struct with number ep_num
52019 + */
52020 +static inline dwc_otg_pcd_ep_t *get_out_ep(dwc_otg_pcd_t * pcd, uint32_t ep_num)
52021 +{
52022 + int i;
52023 + int num_out_eps = GET_CORE_IF(pcd)->dev_if->num_out_eps;
52024 + if (ep_num == 0) {
52025 + return &pcd->ep0;
52026 + } else {
52027 + for (i = 0; i < num_out_eps; ++i) {
52028 + if (pcd->out_ep[i].dwc_ep.num == ep_num)
52029 + return &pcd->out_ep[i];
52030 + }
52031 + return 0;
52032 + }
52033 +}
52034 +
52035 +/**
52036 + * This function gets a pointer to an EP from the wIndex address
52037 + * value of the control request.
52038 + */
52039 +dwc_otg_pcd_ep_t *get_ep_by_addr(dwc_otg_pcd_t * pcd, u16 wIndex)
52040 +{
52041 + dwc_otg_pcd_ep_t *ep;
52042 + uint32_t ep_num = UE_GET_ADDR(wIndex);
52043 +
52044 + if (ep_num == 0) {
52045 + ep = &pcd->ep0;
52046 + } else if (UE_GET_DIR(wIndex) == UE_DIR_IN) { /* in ep */
52047 + ep = &pcd->in_ep[ep_num - 1];
52048 + } else {
52049 + ep = &pcd->out_ep[ep_num - 1];
52050 + }
52051 +
52052 + return ep;
52053 +}
52054 +
52055 +/**
52056 + * This function checks the EP request queue; if the queue is not
52057 + * empty, the next request is started.
52058 + */
52059 +void start_next_request(dwc_otg_pcd_ep_t * ep)
52060 +{
52061 + dwc_otg_pcd_request_t *req = 0;
52062 + uint32_t max_transfer =
52063 + GET_CORE_IF(ep->pcd)->core_params->max_transfer_size;
52064 +
52065 +#ifdef DWC_UTE_CFI
52066 + struct dwc_otg_pcd *pcd;
52067 + pcd = ep->pcd;
52068 +#endif
52069 +
52070 + if (!DWC_CIRCLEQ_EMPTY(&ep->queue)) {
52071 + req = DWC_CIRCLEQ_FIRST(&ep->queue);
52072 +
52073 +#ifdef DWC_UTE_CFI
52074 + if (ep->dwc_ep.buff_mode != BM_STANDARD) {
52075 + ep->dwc_ep.cfi_req_len = req->length;
52076 + pcd->cfi->ops.build_descriptors(pcd->cfi, pcd, ep, req);
52077 + } else {
52078 +#endif
52079 + /* Setup and start the Transfer */
52080 + if (req->dw_align_buf) {
52081 + ep->dwc_ep.dma_addr = req->dw_align_buf_dma;
52082 + ep->dwc_ep.start_xfer_buff = req->dw_align_buf;
52083 + ep->dwc_ep.xfer_buff = req->dw_align_buf;
52084 + } else {
52085 + ep->dwc_ep.dma_addr = req->dma;
52086 + ep->dwc_ep.start_xfer_buff = req->buf;
52087 + ep->dwc_ep.xfer_buff = req->buf;
52088 + }
52089 + ep->dwc_ep.sent_zlp = 0;
52090 + ep->dwc_ep.total_len = req->length;
52091 + ep->dwc_ep.xfer_len = 0;
52092 + ep->dwc_ep.xfer_count = 0;
52093 +
52094 + ep->dwc_ep.maxxfer = max_transfer;
52095 + if (GET_CORE_IF(ep->pcd)->dma_desc_enable) {
52096 + uint32_t out_max_xfer = DDMA_MAX_TRANSFER_SIZE
52097 + - (DDMA_MAX_TRANSFER_SIZE % 4);
52098 + if (ep->dwc_ep.is_in) {
52099 + if (ep->dwc_ep.maxxfer >
52100 + DDMA_MAX_TRANSFER_SIZE) {
52101 + ep->dwc_ep.maxxfer =
52102 + DDMA_MAX_TRANSFER_SIZE;
52103 + }
52104 + } else {
52105 + if (ep->dwc_ep.maxxfer > out_max_xfer) {
52106 + ep->dwc_ep.maxxfer =
52107 + out_max_xfer;
52108 + }
52109 + }
52110 + }
52111 + if (ep->dwc_ep.maxxfer < ep->dwc_ep.total_len) {
52112 + ep->dwc_ep.maxxfer -=
52113 + (ep->dwc_ep.maxxfer % ep->dwc_ep.maxpacket);
52114 + }
52115 + if (req->sent_zlp) {
52116 + if ((ep->dwc_ep.total_len %
52117 + ep->dwc_ep.maxpacket == 0)
52118 + && (ep->dwc_ep.total_len != 0)) {
52119 + ep->dwc_ep.sent_zlp = 1;
52120 + }
52121 +
52122 + }
52123 +#ifdef DWC_UTE_CFI
52124 + }
52125 +#endif
52126 + dwc_otg_ep_start_transfer(GET_CORE_IF(ep->pcd), &ep->dwc_ep);
52127 + } else if (ep->dwc_ep.type == DWC_OTG_EP_TYPE_ISOC) {
52128 + DWC_PRINTF("There are no more ISOC requests \n");
52129 + ep->dwc_ep.frame_num = 0xFFFFFFFF;
52130 + }
52131 +}
52132 +
52133 +/**
52134 + * This function handles the SOF Interrupts. At this time the SOF
52135 + * Interrupt is disabled.
52136 + */
52137 +int32_t dwc_otg_pcd_handle_sof_intr(dwc_otg_pcd_t * pcd)
52138 +{
52139 + dwc_otg_core_if_t *core_if = GET_CORE_IF(pcd);
52140 +
52141 + gintsts_data_t gintsts;
52142 +
52143 + DWC_DEBUGPL(DBG_PCD, "SOF\n");
52144 +
52145 + /* Clear interrupt */
52146 + gintsts.d32 = 0;
52147 + gintsts.b.sofintr = 1;
52148 + DWC_WRITE_REG32(&core_if->core_global_regs->gintsts, gintsts.d32);
52149 +
52150 + return 1;
52151 +}
52152 +
52153 +/**
52154 + * This function handles the Rx Status Queue Level Interrupt, which
52155 + * indicates that there is at least one packet in the Rx FIFO. The
52156 + * packets are moved from the FIFO to memory, where they will be
52157 + * processed when the Endpoint Interrupt Register indicates Transfer
52158 + * Complete or SETUP Phase Done.
52159 + *
52160 + * Repeat the following until the Rx Status Queue is empty:
52161 + * -# Read the Receive Status Pop Register (GRXSTSP) to get Packet
52162 + * info
52163 + * -# If the Receive FIFO is empty then skip to the "Clear the interrupt
52164 + * and exit" step
52165 + * -# If SETUP Packet call dwc_otg_read_setup_packet to copy the
52166 + * SETUP data to the buffer
52167 + * -# If OUT Data Packet call dwc_otg_read_packet to copy the data
52168 + * to the destination buffer
52169 + */
52170 +int32_t dwc_otg_pcd_handle_rx_status_q_level_intr(dwc_otg_pcd_t * pcd)
52171 +{
52172 + dwc_otg_core_if_t *core_if = GET_CORE_IF(pcd);
52173 + dwc_otg_core_global_regs_t *global_regs = core_if->core_global_regs;
52174 + gintmsk_data_t gintmask = {.d32 = 0 };
52175 + device_grxsts_data_t status;
52176 + dwc_otg_pcd_ep_t *ep;
52177 + gintsts_data_t gintsts;
52178 +#ifdef DEBUG
52179 + static char *dpid_str[] = { "D0", "D2", "D1", "MDATA" };
52180 +#endif
52181 +
52182 + //DWC_DEBUGPL(DBG_PCDV, "%s(%p)\n", __func__, _pcd);
52183 + /* Disable the Rx Status Queue Level interrupt */
52184 + gintmask.b.rxstsqlvl = 1;
52185 + DWC_MODIFY_REG32(&global_regs->gintmsk, gintmask.d32, 0);
52186 +
52187 + /* Get the Status from the top of the FIFO */
52188 + status.d32 = DWC_READ_REG32(&global_regs->grxstsp);
52189 +
52190 + DWC_DEBUGPL(DBG_PCD, "EP:%d BCnt:%d DPID:%s "
52191 + "pktsts:%x Frame:%d(0x%0x)\n",
52192 + status.b.epnum, status.b.bcnt,
52193 + dpid_str[status.b.dpid],
52194 + status.b.pktsts, status.b.fn, status.b.fn);
52195 + /* Get pointer to EP structure */
52196 + ep = get_out_ep(pcd, status.b.epnum);
52197 +
52198 + switch (status.b.pktsts) {
52199 + case DWC_DSTS_GOUT_NAK:
52200 + DWC_DEBUGPL(DBG_PCDV, "Global OUT NAK\n");
52201 + break;
52202 + case DWC_STS_DATA_UPDT:
52203 + DWC_DEBUGPL(DBG_PCDV, "OUT Data Packet\n");
52204 + if (status.b.bcnt && ep->dwc_ep.xfer_buff) {
52205 + /** @todo NGS Check for buffer overflow? */
52206 + dwc_otg_read_packet(core_if,
52207 + ep->dwc_ep.xfer_buff,
52208 + status.b.bcnt);
52209 + ep->dwc_ep.xfer_count += status.b.bcnt;
52210 + ep->dwc_ep.xfer_buff += status.b.bcnt;
52211 + }
52212 + break;
52213 + case DWC_STS_XFER_COMP:
52214 + DWC_DEBUGPL(DBG_PCDV, "OUT Complete\n");
52215 + break;
52216 + case DWC_DSTS_SETUP_COMP:
52217 +#ifdef DEBUG_EP0
52218 + DWC_DEBUGPL(DBG_PCDV, "Setup Complete\n");
52219 +#endif
52220 + break;
52221 + case DWC_DSTS_SETUP_UPDT:
52222 + dwc_otg_read_setup_packet(core_if, pcd->setup_pkt->d32);
52223 +#ifdef DEBUG_EP0
52224 + DWC_DEBUGPL(DBG_PCD,
52225 + "SETUP PKT: %02x.%02x v%04x i%04x l%04x\n",
52226 + pcd->setup_pkt->req.bmRequestType,
52227 + pcd->setup_pkt->req.bRequest,
52228 + UGETW(pcd->setup_pkt->req.wValue),
52229 + UGETW(pcd->setup_pkt->req.wIndex),
52230 + UGETW(pcd->setup_pkt->req.wLength));
52231 +#endif
52232 + ep->dwc_ep.xfer_count += status.b.bcnt;
52233 + break;
52234 + default:
52235 + DWC_DEBUGPL(DBG_PCDV, "Invalid Packet Status (0x%0x)\n",
52236 + status.b.pktsts);
52237 + break;
52238 + }
52239 +
52240 + /* Enable the Rx Status Queue Level interrupt */
52241 + DWC_MODIFY_REG32(&global_regs->gintmsk, 0, gintmask.d32);
52242 + /* Clear interrupt */
52243 + gintsts.d32 = 0;
52244 + gintsts.b.rxstsqlvl = 1;
52245 + DWC_WRITE_REG32(&global_regs->gintsts, gintsts.d32);
52246 +
52247 + //DWC_DEBUGPL(DBG_PCDV, "EXIT: %s\n", __func__);
52248 + return 1;
52249 +}
52250 +
52251 +/**
52252 + * This function examines the Device IN Token Learning Queue to
52253 + * determine the EP number of the last IN token received. This
52254 + * implementation is for the Mass Storage device where there are only
52255 + * 2 IN EPs (Control-IN and BULK-IN).
52256 + *
52257 + * The EP numbers for the first six IN Tokens are in DTKNQR1 and there
52258 + * are 8 EP Numbers in each of the other possible DTKNQ Registers.
52259 + *
52260 + * @param core_if Programming view of DWC_otg controller.
52261 + *
52262 + */
52263 +static inline int get_ep_of_last_in_token(dwc_otg_core_if_t * core_if)
52264 +{
52265 + dwc_otg_device_global_regs_t *dev_global_regs =
52266 + core_if->dev_if->dev_global_regs;
52267 + const uint32_t TOKEN_Q_DEPTH = core_if->hwcfg2.b.dev_token_q_depth;
52268 + /* Number of Token Queue Registers */
52269 + const int DTKNQ_REG_CNT = (TOKEN_Q_DEPTH + 7) / 8;
52270 + dtknq1_data_t dtknqr1;
52271 + uint32_t in_tkn_epnums[4];
52272 + int ndx = 0;
52273 + int i = 0;
52274 + volatile uint32_t *addr = &dev_global_regs->dtknqr1;
52275 + int epnum = 0;
52276 +
52277 + //DWC_DEBUGPL(DBG_PCD,"dev_token_q_depth=%d\n",TOKEN_Q_DEPTH);
52278 +
52279 + /* Read the DTKNQ Registers */
52280 + for (i = 0; i < DTKNQ_REG_CNT; i++) {
52281 + in_tkn_epnums[i] = DWC_READ_REG32(addr);
52282 + DWC_DEBUGPL(DBG_PCDV, "DTKNQR%d=0x%08x\n", i + 1,
52283 + in_tkn_epnums[i]);
52284 + if (addr == &dev_global_regs->dvbusdis) {
52285 + addr = &dev_global_regs->dtknqr3_dthrctl;
52286 + } else {
52287 + ++addr;
52288 + }
52289 +
52290 + }
52291 +
52292 + /* Copy the DTKNQR1 data to the bit field. */
52293 + dtknqr1.d32 = in_tkn_epnums[0];
52294 + /* Get the EP numbers */
52295 + in_tkn_epnums[0] = dtknqr1.b.epnums0_5;
52296 + ndx = dtknqr1.b.intknwptr - 1;
52297 +
52298 + //DWC_DEBUGPL(DBG_PCDV,"ndx=%d\n",ndx);
52299 + if (ndx == -1) {
52300 + /** @todo Find a simpler way to calculate the max
52301 + * queue position.*/
52302 + int cnt = TOKEN_Q_DEPTH;
52303 + if (TOKEN_Q_DEPTH <= 6) {
52304 + cnt = TOKEN_Q_DEPTH - 1;
52305 + } else if (TOKEN_Q_DEPTH <= 14) {
52306 + cnt = TOKEN_Q_DEPTH - 7;
52307 + } else if (TOKEN_Q_DEPTH <= 22) {
52308 + cnt = TOKEN_Q_DEPTH - 15;
52309 + } else {
52310 + cnt = TOKEN_Q_DEPTH - 23;
52311 + }
52312 + epnum = (in_tkn_epnums[DTKNQ_REG_CNT - 1] >> (cnt * 4)) & 0xF;
52313 + } else {
52314 + if (ndx <= 5) {
52315 + epnum = (in_tkn_epnums[0] >> (ndx * 4)) & 0xF;
52316 + } else if (ndx <= 13) {
52317 + ndx -= 6;
52318 + epnum = (in_tkn_epnums[1] >> (ndx * 4)) & 0xF;
52319 + } else if (ndx <= 21) {
52320 + ndx -= 14;
52321 + epnum = (in_tkn_epnums[2] >> (ndx * 4)) & 0xF;
52322 + } else if (ndx <= 29) {
52323 + ndx -= 22;
52324 + epnum = (in_tkn_epnums[3] >> (ndx * 4)) & 0xF;
52325 + }
52326 + }
52327 + //DWC_DEBUGPL(DBG_PCD,"epnum=%d\n",epnum);
52328 + return epnum;
52329 +}
52330 +
52331 +/**
52332 + * This interrupt occurs when the non-periodic Tx FIFO is half-empty.
52333 + * The active request is checked for the next packet to be loaded into
52334 + * the non-periodic Tx FIFO.
52335 + */
52336 +int32_t dwc_otg_pcd_handle_np_tx_fifo_empty_intr(dwc_otg_pcd_t * pcd)
52337 +{
52338 + dwc_otg_core_if_t *core_if = GET_CORE_IF(pcd);
52339 + dwc_otg_core_global_regs_t *global_regs = core_if->core_global_regs;
52340 + dwc_otg_dev_in_ep_regs_t *ep_regs;
52341 + gnptxsts_data_t txstatus = {.d32 = 0 };
52342 + gintsts_data_t gintsts;
52343 +
52344 + int epnum = 0;
52345 + dwc_otg_pcd_ep_t *ep = 0;
52346 + uint32_t len = 0;
52347 + int dwords;
52348 +
52349 + /* Get the epnum from the IN Token Learning Queue. */
52350 + epnum = get_ep_of_last_in_token(core_if);
52351 + ep = get_in_ep(pcd, epnum);
52352 +
52353 + DWC_DEBUGPL(DBG_PCD, "NP TxFifo Empty: %d \n", epnum);
52354 +
52355 + ep_regs = core_if->dev_if->in_ep_regs[epnum];
52356 +
52357 + len = ep->dwc_ep.xfer_len - ep->dwc_ep.xfer_count;
52358 + if (len > ep->dwc_ep.maxpacket) {
52359 + len = ep->dwc_ep.maxpacket;
52360 + }
52361 + dwords = (len + 3) / 4;
52362 +
52363 + /* While there is space in the queue and space in the FIFO and
52364 + * more data to transfer, write packets to the Tx FIFO */
52365 + txstatus.d32 = DWC_READ_REG32(&global_regs->gnptxsts);
52366 + DWC_DEBUGPL(DBG_PCDV, "b4 GNPTXSTS=0x%08x\n", txstatus.d32);
52367 +
52368 + while (txstatus.b.nptxqspcavail > 0 &&
52369 + txstatus.b.nptxfspcavail > dwords &&
52370 + ep->dwc_ep.xfer_count < ep->dwc_ep.xfer_len) {
52371 + /* Write the FIFO */
52372 + dwc_otg_ep_write_packet(core_if, &ep->dwc_ep, 0);
52373 + len = ep->dwc_ep.xfer_len - ep->dwc_ep.xfer_count;
52374 +
52375 + if (len > ep->dwc_ep.maxpacket) {
52376 + len = ep->dwc_ep.maxpacket;
52377 + }
52378 +
52379 + dwords = (len + 3) / 4;
52380 + txstatus.d32 = DWC_READ_REG32(&global_regs->gnptxsts);
52381 + DWC_DEBUGPL(DBG_PCDV, "GNPTXSTS=0x%08x\n", txstatus.d32);
52382 + }
52383 +
52384 + DWC_DEBUGPL(DBG_PCDV, "GNPTXSTS=0x%08x\n",
52385 + DWC_READ_REG32(&global_regs->gnptxsts));
52386 +
52387 + /* Clear interrupt */
52388 + gintsts.d32 = 0;
52389 + gintsts.b.nptxfempty = 1;
52390 + DWC_WRITE_REG32(&global_regs->gintsts, gintsts.d32);
52391 +
52392 + return 1;
52393 +}
52394 +
52395 +/**
52396 + * This function is called when dedicated Tx FIFO Empty interrupt occurs.
52397 + * The active request is checked for the next packet to be loaded into
52398 + * the appropriate Tx FIFO.
52399 + */
52400 +static int32_t write_empty_tx_fifo(dwc_otg_pcd_t * pcd, uint32_t epnum)
52401 +{
52402 + dwc_otg_core_if_t *core_if = GET_CORE_IF(pcd);
52403 + dwc_otg_dev_if_t *dev_if = core_if->dev_if;
52404 + dwc_otg_dev_in_ep_regs_t *ep_regs;
52405 + dtxfsts_data_t txstatus = {.d32 = 0 };
52406 + dwc_otg_pcd_ep_t *ep = 0;
52407 + uint32_t len = 0;
52408 + int dwords;
52409 +
52410 + ep = get_in_ep(pcd, epnum);
52411 +
52412 + DWC_DEBUGPL(DBG_PCD, "Dedicated TxFifo Empty: %d \n", epnum);
52413 +
52414 + ep_regs = core_if->dev_if->in_ep_regs[epnum];
52415 +
52416 + len = ep->dwc_ep.xfer_len - ep->dwc_ep.xfer_count;
52417 +
52418 + if (len > ep->dwc_ep.maxpacket) {
52419 + len = ep->dwc_ep.maxpacket;
52420 + }
52421 +
52422 + dwords = (len + 3) / 4;
52423 +
52424 + /* While there is space in the queue and space in the FIFO and
52425 + * more data to transfer, write packets to the Tx FIFO */
52426 + txstatus.d32 = DWC_READ_REG32(&dev_if->in_ep_regs[epnum]->dtxfsts);
52427 + DWC_DEBUGPL(DBG_PCDV, "b4 dtxfsts[%d]=0x%08x\n", epnum, txstatus.d32);
52428 +
52429 + while (txstatus.b.txfspcavail > dwords &&
52430 + ep->dwc_ep.xfer_count < ep->dwc_ep.xfer_len &&
52431 + ep->dwc_ep.xfer_len != 0) {
52432 + /* Write the FIFO */
52433 + dwc_otg_ep_write_packet(core_if, &ep->dwc_ep, 0);
52434 +
52435 + len = ep->dwc_ep.xfer_len - ep->dwc_ep.xfer_count;
52436 + if (len > ep->dwc_ep.maxpacket) {
52437 + len = ep->dwc_ep.maxpacket;
52438 + }
52439 +
52440 + dwords = (len + 3) / 4;
52441 + txstatus.d32 =
52442 + DWC_READ_REG32(&dev_if->in_ep_regs[epnum]->dtxfsts);
52443 + DWC_DEBUGPL(DBG_PCDV, "dtxfsts[%d]=0x%08x\n", epnum,
52444 + txstatus.d32);
52445 + }
52446 +
52447 + DWC_DEBUGPL(DBG_PCDV, "b4 dtxfsts[%d]=0x%08x\n", epnum,
52448 + DWC_READ_REG32(&dev_if->in_ep_regs[epnum]->dtxfsts));
52449 +
52450 + return 1;
52451 +}
52452 +
52453 +/**
52454 + * This function is called when the Device is disconnected. It stops
52455 + * any active requests and informs the Gadget driver of the
52456 + * disconnect.
52457 + */
52458 +void dwc_otg_pcd_stop(dwc_otg_pcd_t * pcd)
52459 +{
52460 + int i, num_in_eps, num_out_eps;
52461 + dwc_otg_pcd_ep_t *ep;
52462 +
52463 + gintmsk_data_t intr_mask = {.d32 = 0 };
52464 +
52465 + DWC_SPINLOCK(pcd->lock);
52466 +
52467 + num_in_eps = GET_CORE_IF(pcd)->dev_if->num_in_eps;
52468 + num_out_eps = GET_CORE_IF(pcd)->dev_if->num_out_eps;
52469 +
52470 + DWC_DEBUGPL(DBG_PCDV, "%s() \n", __func__);
52471 + /* don't disconnect drivers more than once */
52472 + if (pcd->ep0state == EP0_DISCONNECT) {
52473 + DWC_DEBUGPL(DBG_ANY, "%s() Already Disconnected\n", __func__);
52474 + DWC_SPINUNLOCK(pcd->lock);
52475 + return;
52476 + }
52477 + pcd->ep0state = EP0_DISCONNECT;
52478 +
52479 + /* Reset the OTG state. */
52480 + dwc_otg_pcd_update_otg(pcd, 1);
52481 +
52482 + /* Disable the NP Tx Fifo Empty Interrupt. */
52483 + intr_mask.b.nptxfempty = 1;
52484 + DWC_MODIFY_REG32(&GET_CORE_IF(pcd)->core_global_regs->gintmsk,
52485 + intr_mask.d32, 0);
52486 +
52487 + /* Flush the FIFOs */
52488 + /**@todo NGS Flush Periodic FIFOs */
52489 + dwc_otg_flush_tx_fifo(GET_CORE_IF(pcd), 0x10);
52490 + dwc_otg_flush_rx_fifo(GET_CORE_IF(pcd));
52491 +
52492 + /* prevent new request submissions, kill any outstanding requests */
52493 + ep = &pcd->ep0;
52494 + dwc_otg_request_nuke(ep);
52495 + /* prevent new request submissions, kill any outstanding requests */
52496 + for (i = 0; i < num_in_eps; i++) {
52497 + dwc_otg_pcd_ep_t *ep = &pcd->in_ep[i];
52498 + dwc_otg_request_nuke(ep);
52499 + }
52500 + /* prevent new request submissions, kill any outstanding requests */
52501 + for (i = 0; i < num_out_eps; i++) {
52502 + dwc_otg_pcd_ep_t *ep = &pcd->out_ep[i];
52503 + dwc_otg_request_nuke(ep);
52504 + }
52505 +
52506 + /* report disconnect; the driver is already quiesced */
52507 + if (pcd->fops->disconnect) {
52508 + DWC_SPINUNLOCK(pcd->lock);
52509 + pcd->fops->disconnect(pcd);
52510 + DWC_SPINLOCK(pcd->lock);
52511 + }
52512 + DWC_SPINUNLOCK(pcd->lock);
52513 +}
52514 +
52515 +/**
52516 + * This interrupt indicates that ...
52517 + */
52518 +int32_t dwc_otg_pcd_handle_i2c_intr(dwc_otg_pcd_t * pcd)
52519 +{
52520 + gintmsk_data_t intr_mask = {.d32 = 0 };
52521 + gintsts_data_t gintsts;
52522 +
52523 + DWC_PRINTF("INTERRUPT Handler not implemented for %s\n", "i2cintr");
52524 + intr_mask.b.i2cintr = 1;
52525 + DWC_MODIFY_REG32(&GET_CORE_IF(pcd)->core_global_regs->gintmsk,
52526 + intr_mask.d32, 0);
52527 +
52528 + /* Clear interrupt */
52529 + gintsts.d32 = 0;
52530 + gintsts.b.i2cintr = 1;
52531 + DWC_WRITE_REG32(&GET_CORE_IF(pcd)->core_global_regs->gintsts,
52532 + gintsts.d32);
52533 + return 1;
52534 +}
52535 +
52536 +/**
52537 + * This interrupt indicates that ...
52538 + */
52539 +int32_t dwc_otg_pcd_handle_early_suspend_intr(dwc_otg_pcd_t * pcd)
52540 +{
52541 + gintsts_data_t gintsts;
52542 +#if defined(VERBOSE)
52543 + DWC_PRINTF("Early Suspend Detected\n");
52544 +#endif
52545 +
52546 + /* Clear interrupt */
52547 + gintsts.d32 = 0;
52548 + gintsts.b.erlysuspend = 1;
52549 + DWC_WRITE_REG32(&GET_CORE_IF(pcd)->core_global_regs->gintsts,
52550 + gintsts.d32);
52551 + return 1;
52552 +}
52553 +
52554 +/**
52555 + * This function configures EP0 to receive SETUP packets.
52556 + *
52557 + * @todo NGS: Update the comments from the HW FS.
52558 + *
52559 + * -# Program the following fields in the endpoint specific registers
52560 + * for Control OUT EP 0, in order to receive a setup packet
52561 + * - DOEPTSIZ0.Packet Count = 3 (To receive up to 3 back to back
52562 + * setup packets)
52563 + * - DOEPTSIZ0.Transfer Size = 24 Bytes (To receive up to 3 back
52564 + * to back setup packets)
52565 + * - In DMA mode, DOEPDMA0 Register with a memory address to
52566 + * store any setup packets received
52567 + *
52568 + * @param core_if Programming view of DWC_otg controller.
52569 + * @param pcd Programming view of the PCD.
52570 + */
52571 +static inline void ep0_out_start(dwc_otg_core_if_t * core_if,
52572 + dwc_otg_pcd_t * pcd)
52573 +{
52574 + dwc_otg_dev_if_t *dev_if = core_if->dev_if;
52575 + deptsiz0_data_t doeptsize0 = {.d32 = 0 };
52576 + dwc_otg_dev_dma_desc_t *dma_desc;
52577 + depctl_data_t doepctl = {.d32 = 0 };
52578 +
52579 +#ifdef VERBOSE
52580 + DWC_DEBUGPL(DBG_PCDV, "%s() doepctl0=%0x\n", __func__,
52581 + DWC_READ_REG32(&dev_if->out_ep_regs[0]->doepctl));
52582 +#endif
52583 + if (core_if->snpsid >= OTG_CORE_REV_3_00a) {
52584 + doepctl.d32 = DWC_READ_REG32(&dev_if->out_ep_regs[0]->doepctl);
52585 + if (doepctl.b.epena) {
52586 + return;
52587 + }
52588 + }
52589 +
52590 + doeptsize0.b.supcnt = 3;
52591 + doeptsize0.b.pktcnt = 1;
52592 + doeptsize0.b.xfersize = 8 * 3;
52593 +
52594 + if (core_if->dma_enable) {
52595 + if (!core_if->dma_desc_enable) {
52596 + /** placed here because in Hermes mode the doeptsiz register should not be written */
52597 + DWC_WRITE_REG32(&dev_if->out_ep_regs[0]->doeptsiz,
52598 + doeptsize0.d32);
52599 +
52600 + /** @todo dma needs to handle multiple setup packets (up to 3) */
52601 + DWC_WRITE_REG32(&dev_if->out_ep_regs[0]->doepdma,
52602 + pcd->setup_pkt_dma_handle);
52603 + } else {
52604 + dev_if->setup_desc_index =
52605 + (dev_if->setup_desc_index + 1) & 1;
52606 + dma_desc =
52607 + dev_if->setup_desc_addr[dev_if->setup_desc_index];
52608 +
52609 + /** DMA Descriptor Setup */
52610 + dma_desc->status.b.bs = BS_HOST_BUSY;
52611 + if (core_if->snpsid >= OTG_CORE_REV_3_00a) {
52612 + dma_desc->status.b.sr = 0;
52613 + dma_desc->status.b.mtrf = 0;
52614 + }
52615 + dma_desc->status.b.l = 1;
52616 + dma_desc->status.b.ioc = 1;
52617 + dma_desc->status.b.bytes = pcd->ep0.dwc_ep.maxpacket;
52618 + dma_desc->buf = pcd->setup_pkt_dma_handle;
52619 + dma_desc->status.b.sts = 0;
52620 + dma_desc->status.b.bs = BS_HOST_READY;
52621 +
52622 + /** DOEPDMA0 Register write */
52623 + DWC_WRITE_REG32(&dev_if->out_ep_regs[0]->doepdma,
52624 + dev_if->dma_setup_desc_addr
52625 + [dev_if->setup_desc_index]);
52626 + }
52627 +
52628 + } else {
52629 + /** placed here because in Hermes mode the doeptsiz register should not be written */
52630 + DWC_WRITE_REG32(&dev_if->out_ep_regs[0]->doeptsiz,
52631 + doeptsize0.d32);
52632 + }
52633 +
52634 + /** DOEPCTL0 Register write; cnak will be set after the setup interrupt */
52635 + doepctl.d32 = 0;
52636 + doepctl.b.epena = 1;
52637 + if (core_if->snpsid <= OTG_CORE_REV_2_94a) {
52638 + doepctl.b.cnak = 1;
52639 + DWC_WRITE_REG32(&dev_if->out_ep_regs[0]->doepctl, doepctl.d32);
52640 + } else {
52641 + DWC_MODIFY_REG32(&dev_if->out_ep_regs[0]->doepctl, 0, doepctl.d32);
52642 + }
52643 +
52644 +#ifdef VERBOSE
52645 + DWC_DEBUGPL(DBG_PCDV, "doepctl0=%0x\n",
52646 + DWC_READ_REG32(&dev_if->out_ep_regs[0]->doepctl));
52647 + DWC_DEBUGPL(DBG_PCDV, "diepctl0=%0x\n",
52648 + DWC_READ_REG32(&dev_if->in_ep_regs[0]->diepctl));
52649 +#endif
52650 +}
52651 +
52652 +/**
52653 + * This interrupt occurs when a USB Reset is detected. When the USB
52654 + * Reset Interrupt occurs the device state is set to DEFAULT and the
52655 + * EP0 state is set to IDLE.
52656 + * -# Set the NAK bit for all OUT endpoints (DOEPCTLn.SNAK = 1)
52657 + * -# Unmask the following interrupt bits
52658 + * - DAINTMSK.INEP0 = 1 (Control 0 IN endpoint)
52659 + * - DAINTMSK.OUTEP0 = 1 (Control 0 OUT endpoint)
52660 + * - DOEPMSK.SETUP = 1
52661 + * - DOEPMSK.XferCompl = 1
52662 + * - DIEPMSK.XferCompl = 1
52663 + * - DIEPMSK.TimeOut = 1
52664 + * -# Program the following fields in the endpoint specific registers
52665 + * for Control OUT EP 0, in order to receive a setup packet
52666 + * - DOEPTSIZ0.Packet Count = 3 (To receive up to 3 back to back
52667 + * setup packets)
52668 + * - DOEPTSIZ0.Transfer Size = 24 Bytes (To receive up to 3 back
52669 + * to back setup packets)
52670 + * - In DMA mode, DOEPDMA0 Register with a memory address to
52671 + * store any setup packets received
52672 + * At this point, all the required initialization for receiving SETUP
52673 + * packets is done, except for enabling the control 0 OUT endpoint.
52674 + */
52675 +int32_t dwc_otg_pcd_handle_usb_reset_intr(dwc_otg_pcd_t * pcd)
52676 +{
52677 + dwc_otg_core_if_t *core_if = GET_CORE_IF(pcd);
52678 + dwc_otg_dev_if_t *dev_if = core_if->dev_if;
52679 + depctl_data_t doepctl = {.d32 = 0 };
52680 + depctl_data_t diepctl = {.d32 = 0 };
52681 + daint_data_t daintmsk = {.d32 = 0 };
52682 + doepmsk_data_t doepmsk = {.d32 = 0 };
52683 + diepmsk_data_t diepmsk = {.d32 = 0 };
52684 + dcfg_data_t dcfg = {.d32 = 0 };
52685 + grstctl_t resetctl = {.d32 = 0 };
52686 + dctl_data_t dctl = {.d32 = 0 };
52687 + int i = 0;
52688 + gintsts_data_t gintsts;
52689 + pcgcctl_data_t power = {.d32 = 0 };
52690 +
52691 + power.d32 = DWC_READ_REG32(core_if->pcgcctl);
52692 + if (power.b.stoppclk) {
52693 + power.d32 = 0;
52694 + power.b.stoppclk = 1;
52695 + DWC_MODIFY_REG32(core_if->pcgcctl, power.d32, 0);
52696 +
52697 + power.b.pwrclmp = 1;
52698 + DWC_MODIFY_REG32(core_if->pcgcctl, power.d32, 0);
52699 +
52700 + power.b.rstpdwnmodule = 1;
52701 + DWC_MODIFY_REG32(core_if->pcgcctl, power.d32, 0);
52702 + }
52703 +
52704 + core_if->lx_state = DWC_OTG_L0;
52705 +
52706 + DWC_PRINTF("USB RESET\n");
52707 +#ifdef DWC_EN_ISOC
52708 + for (i = 1; i < 16; ++i) {
52709 + dwc_otg_pcd_ep_t *ep;
52710 + dwc_ep_t *dwc_ep;
52711 + ep = get_in_ep(pcd, i);
52712 + if (ep != 0) {
52713 + dwc_ep = &ep->dwc_ep;
52714 + dwc_ep->next_frame = 0xffffffff;
52715 + }
52716 + }
52717 +#endif /* DWC_EN_ISOC */
52718 +
52719 + /* reset the HNP settings */
52720 + dwc_otg_pcd_update_otg(pcd, 1);
52721 +
52722 + /* Clear the Remote Wakeup Signalling */
52723 + dctl.b.rmtwkupsig = 1;
52724 + DWC_MODIFY_REG32(&core_if->dev_if->dev_global_regs->dctl, dctl.d32, 0);
52725 +
52726 + /* Set NAK for all OUT EPs */
52727 + doepctl.b.snak = 1;
52728 + for (i = 0; i <= dev_if->num_out_eps; i++) {
52729 + DWC_WRITE_REG32(&dev_if->out_ep_regs[i]->doepctl, doepctl.d32);
52730 + }
52731 +
52732 + /* Flush the NP Tx FIFO */
52733 + dwc_otg_flush_tx_fifo(core_if, 0x10);
52734 + /* Flush the Learning Queue */
52735 + resetctl.b.intknqflsh = 1;
52736 + DWC_WRITE_REG32(&core_if->core_global_regs->grstctl, resetctl.d32);
52737 +
52738 + if (!core_if->core_params->en_multiple_tx_fifo && core_if->dma_enable) {
52739 + core_if->start_predict = 0;
52740 + for (i = 0; i<= core_if->dev_if->num_in_eps; ++i) {
52741 + core_if->nextep_seq[i] = 0xff; // 0xff - EP not active
52742 + }
52743 + core_if->nextep_seq[0] = 0;
52744 + core_if->first_in_nextep_seq = 0;
52745 + diepctl.d32 = DWC_READ_REG32(&dev_if->in_ep_regs[0]->diepctl);
52746 + diepctl.b.nextep = 0;
52747 + DWC_WRITE_REG32(&dev_if->in_ep_regs[0]->diepctl, diepctl.d32);
52748 +
52749 + /* Update IN Endpoint Mismatch Count by active IN NP EP count + 1 */
52750 + dcfg.d32 = DWC_READ_REG32(&dev_if->dev_global_regs->dcfg);
52751 + dcfg.b.epmscnt = 2;
52752 + DWC_WRITE_REG32(&dev_if->dev_global_regs->dcfg, dcfg.d32);
52753 +
52754 + DWC_DEBUGPL(DBG_PCDV,
52755 + "%s first_in_nextep_seq= %2d; nextep_seq[]:\n",
52756 + __func__, core_if->first_in_nextep_seq);
52757 + for (i=0; i <= core_if->dev_if->num_in_eps; i++) {
52758 + DWC_DEBUGPL(DBG_PCDV, "%2d\n", core_if->nextep_seq[i]);
52759 + }
52760 + }
52761 +
52762 + if (core_if->multiproc_int_enable) {
52763 + daintmsk.b.inep0 = 1;
52764 + daintmsk.b.outep0 = 1;
52765 + DWC_WRITE_REG32(&dev_if->dev_global_regs->deachintmsk,
52766 + daintmsk.d32);
52767 +
52768 + doepmsk.b.setup = 1;
52769 + doepmsk.b.xfercompl = 1;
52770 + doepmsk.b.ahberr = 1;
52771 + doepmsk.b.epdisabled = 1;
52772 +
52773 + if ((core_if->dma_desc_enable) ||
52774 + (core_if->dma_enable
52775 + && core_if->snpsid >= OTG_CORE_REV_3_00a)) {
52776 + doepmsk.b.stsphsercvd = 1;
52777 + }
52778 + if (core_if->dma_desc_enable)
52779 + doepmsk.b.bna = 1;
52780 +/*
52781 + doepmsk.b.babble = 1;
52782 + doepmsk.b.nyet = 1;
52783 +
52784 + if (core_if->dma_enable) {
52785 + doepmsk.b.nak = 1;
52786 + }
52787 +*/
52788 + DWC_WRITE_REG32(&dev_if->dev_global_regs->doepeachintmsk[0],
52789 + doepmsk.d32);
52790 +
52791 + diepmsk.b.xfercompl = 1;
52792 + diepmsk.b.timeout = 1;
52793 + diepmsk.b.epdisabled = 1;
52794 + diepmsk.b.ahberr = 1;
52795 + diepmsk.b.intknepmis = 1;
52796 + if (!core_if->en_multiple_tx_fifo && core_if->dma_enable)
52797 + diepmsk.b.intknepmis = 0;
52798 +
52799 +/* if (core_if->dma_desc_enable) {
52800 + diepmsk.b.bna = 1;
52801 + }
52802 +*/
52803 +/*
52804 + if (core_if->dma_enable) {
52805 + diepmsk.b.nak = 1;
52806 + }
52807 +*/
52808 + DWC_WRITE_REG32(&dev_if->dev_global_regs->diepeachintmsk[0],
52809 + diepmsk.d32);
52810 + } else {
52811 + daintmsk.b.inep0 = 1;
52812 + daintmsk.b.outep0 = 1;
52813 + DWC_WRITE_REG32(&dev_if->dev_global_regs->daintmsk,
52814 + daintmsk.d32);
52815 +
52816 + doepmsk.b.setup = 1;
52817 + doepmsk.b.xfercompl = 1;
52818 + doepmsk.b.ahberr = 1;
52819 + doepmsk.b.epdisabled = 1;
52820 +
52821 + if ((core_if->dma_desc_enable) ||
52822 + (core_if->dma_enable
52823 + && core_if->snpsid >= OTG_CORE_REV_3_00a)) {
52824 + doepmsk.b.stsphsercvd = 1;
52825 + }
52826 + if (core_if->dma_desc_enable)
52827 + doepmsk.b.bna = 1;
52828 + DWC_WRITE_REG32(&dev_if->dev_global_regs->doepmsk, doepmsk.d32);
52829 +
52830 + diepmsk.b.xfercompl = 1;
52831 + diepmsk.b.timeout = 1;
52832 + diepmsk.b.epdisabled = 1;
52833 + diepmsk.b.ahberr = 1;
52834 + if (!core_if->en_multiple_tx_fifo && core_if->dma_enable)
52835 + diepmsk.b.intknepmis = 0;
52836 +/*
52837 + if (core_if->dma_desc_enable) {
52838 + diepmsk.b.bna = 1;
52839 + }
52840 +*/
52841 +
52842 + DWC_WRITE_REG32(&dev_if->dev_global_regs->diepmsk, diepmsk.d32);
52843 + }
52844 +
52845 + /* Reset Device Address */
52846 + dcfg.d32 = DWC_READ_REG32(&dev_if->dev_global_regs->dcfg);
52847 + dcfg.b.devaddr = 0;
52848 + DWC_WRITE_REG32(&dev_if->dev_global_regs->dcfg, dcfg.d32);
52849 +
52850 + /* setup EP0 to receive SETUP packets */
52851 + if (core_if->snpsid <= OTG_CORE_REV_2_94a)
52852 + ep0_out_start(core_if, pcd);
52853 +
52854 + /* Clear interrupt */
52855 + gintsts.d32 = 0;
52856 + gintsts.b.usbreset = 1;
52857 + DWC_WRITE_REG32(&core_if->core_global_regs->gintsts, gintsts.d32);
52858 +
52859 + return 1;
52860 +}
52861 +
52862 +/**
52863 + * Get the device speed from the device status register and convert it
52864 + * to USB speed constant.
52865 + *
52866 + * @param core_if Programming view of DWC_otg controller.
52867 + */
52868 +static int get_device_speed(dwc_otg_core_if_t * core_if)
52869 +{
52870 + dsts_data_t dsts;
52871 + int speed = 0;
52872 + dsts.d32 = DWC_READ_REG32(&core_if->dev_if->dev_global_regs->dsts);
52873 +
52874 + switch (dsts.b.enumspd) {
52875 + case DWC_DSTS_ENUMSPD_HS_PHY_30MHZ_OR_60MHZ:
52876 + speed = USB_SPEED_HIGH;
52877 + break;
52878 + case DWC_DSTS_ENUMSPD_FS_PHY_30MHZ_OR_60MHZ:
52879 + case DWC_DSTS_ENUMSPD_FS_PHY_48MHZ:
52880 + speed = USB_SPEED_FULL;
52881 + break;
52882 +
52883 + case DWC_DSTS_ENUMSPD_LS_PHY_6MHZ:
52884 + speed = USB_SPEED_LOW;
52885 + break;
52886 + }
52887 +
52888 + return speed;
52889 +}
52890 +
52891 +/**
52892 + * Read the device status register and set the device speed in the
52893 + * data structure.
52894 + * Set up EP0 to receive SETUP packets by calling dwc_ep0_activate.
52895 + */
52896 +int32_t dwc_otg_pcd_handle_enum_done_intr(dwc_otg_pcd_t * pcd)
52897 +{
52898 + dwc_otg_pcd_ep_t *ep0 = &pcd->ep0;
52899 + gintsts_data_t gintsts;
52900 + gusbcfg_data_t gusbcfg;
52901 + dwc_otg_core_global_regs_t *global_regs =
52902 + GET_CORE_IF(pcd)->core_global_regs;
52903 + uint8_t utmi16b, utmi8b;
52904 + int speed;
52905 + DWC_DEBUGPL(DBG_PCD, "SPEED ENUM\n");
52906 +
52907 + if (GET_CORE_IF(pcd)->snpsid >= OTG_CORE_REV_2_60a) {
52908 + utmi16b = 6; //vahrama old value was 6;
52909 + utmi8b = 9;
52910 + } else {
52911 + utmi16b = 4;
52912 + utmi8b = 8;
52913 + }
52914 + dwc_otg_ep0_activate(GET_CORE_IF(pcd), &ep0->dwc_ep);
52915 + if (GET_CORE_IF(pcd)->snpsid >= OTG_CORE_REV_3_00a) {
52916 + ep0_out_start(GET_CORE_IF(pcd), pcd);
52917 + }
52918 +
52919 +#ifdef DEBUG_EP0
52920 + print_ep0_state(pcd);
52921 +#endif
52922 +
52923 + if (pcd->ep0state == EP0_DISCONNECT) {
52924 + pcd->ep0state = EP0_IDLE;
52925 + } else if (pcd->ep0state == EP0_STALL) {
52926 + pcd->ep0state = EP0_IDLE;
52927 + }
52928 +
52929 + pcd->ep0state = EP0_IDLE;
52930 +
52931 + ep0->stopped = 0;
52932 +
52933 + speed = get_device_speed(GET_CORE_IF(pcd));
52934 + pcd->fops->connect(pcd, speed);
52935 +
52936 + /* Set USB turnaround time based on device speed and PHY interface. */
52937 + gusbcfg.d32 = DWC_READ_REG32(&global_regs->gusbcfg);
52938 + if (speed == USB_SPEED_HIGH) {
52939 + if (GET_CORE_IF(pcd)->hwcfg2.b.hs_phy_type ==
52940 + DWC_HWCFG2_HS_PHY_TYPE_ULPI) {
52941 + /* ULPI interface */
52942 + gusbcfg.b.usbtrdtim = 9;
52943 + }
52944 + if (GET_CORE_IF(pcd)->hwcfg2.b.hs_phy_type ==
52945 + DWC_HWCFG2_HS_PHY_TYPE_UTMI) {
52946 + /* UTMI+ interface */
52947 + if (GET_CORE_IF(pcd)->hwcfg4.b.utmi_phy_data_width == 0) {
52948 + gusbcfg.b.usbtrdtim = utmi8b;
52949 + } else if (GET_CORE_IF(pcd)->hwcfg4.
52950 + b.utmi_phy_data_width == 1) {
52951 + gusbcfg.b.usbtrdtim = utmi16b;
52952 + } else if (GET_CORE_IF(pcd)->
52953 + core_params->phy_utmi_width == 8) {
52954 + gusbcfg.b.usbtrdtim = utmi8b;
52955 + } else {
52956 + gusbcfg.b.usbtrdtim = utmi16b;
52957 + }
52958 + }
52959 + if (GET_CORE_IF(pcd)->hwcfg2.b.hs_phy_type ==
52960 + DWC_HWCFG2_HS_PHY_TYPE_UTMI_ULPI) {
52961 + /* UTMI+ OR ULPI interface */
52962 + if (gusbcfg.b.ulpi_utmi_sel == 1) {
52963 + /* ULPI interface */
52964 + gusbcfg.b.usbtrdtim = 9;
52965 + } else {
52966 + /* UTMI+ interface */
52967 + if (GET_CORE_IF(pcd)->
52968 + core_params->phy_utmi_width == 16) {
52969 + gusbcfg.b.usbtrdtim = utmi16b;
52970 + } else {
52971 + gusbcfg.b.usbtrdtim = utmi8b;
52972 + }
52973 + }
52974 + }
52975 + } else {
52976 + /* Full or low speed */
52977 + gusbcfg.b.usbtrdtim = 9;
52978 + }
52979 + DWC_WRITE_REG32(&global_regs->gusbcfg, gusbcfg.d32);
52980 +
52981 + /* Clear interrupt */
52982 + gintsts.d32 = 0;
52983 + gintsts.b.enumdone = 1;
52984 + DWC_WRITE_REG32(&GET_CORE_IF(pcd)->core_global_regs->gintsts,
52985 + gintsts.d32);
52986 + return 1;
52987 +}
52988 +
52989 +/**
52990 + * This interrupt indicates that the ISO OUT Packet was dropped due to
52991 + * Rx FIFO full or Rx Status Queue Full. If this interrupt occurs
52992 + * read all the data from the Rx FIFO.
52993 + */
52994 +int32_t dwc_otg_pcd_handle_isoc_out_packet_dropped_intr(dwc_otg_pcd_t * pcd)
52995 +{
52996 + gintmsk_data_t intr_mask = {.d32 = 0 };
52997 + gintsts_data_t gintsts;
52998 +
52999 + DWC_WARN("INTERRUPT Handler not implemented for %s\n",
53000 + "ISOC Out Dropped");
53001 +
53002 + intr_mask.b.isooutdrop = 1;
53003 + DWC_MODIFY_REG32(&GET_CORE_IF(pcd)->core_global_regs->gintmsk,
53004 + intr_mask.d32, 0);
53005 +
53006 + /* Clear interrupt */
53007 + gintsts.d32 = 0;
53008 + gintsts.b.isooutdrop = 1;
53009 + DWC_WRITE_REG32(&GET_CORE_IF(pcd)->core_global_regs->gintsts,
53010 + gintsts.d32);
53011 +
53012 + return 1;
53013 +}
53014 +
53015 +/**
53016 + * This interrupt indicates the end of the portion of the micro-frame
53017 + * for periodic transactions. If there is a periodic transaction for
53018 + * the next frame, load the packets into the EP periodic Tx FIFO.
53019 + */
53020 +int32_t dwc_otg_pcd_handle_end_periodic_frame_intr(dwc_otg_pcd_t * pcd)
53021 +{
53022 + gintmsk_data_t intr_mask = {.d32 = 0 };
53023 + gintsts_data_t gintsts;
53024 + DWC_PRINTF("INTERRUPT Handler not implemented for %s\n", "EOP");
53025 +
53026 + intr_mask.b.eopframe = 1;
53027 + DWC_MODIFY_REG32(&GET_CORE_IF(pcd)->core_global_regs->gintmsk,
53028 + intr_mask.d32, 0);
53029 +
53030 + /* Clear interrupt */
53031 + gintsts.d32 = 0;
53032 + gintsts.b.eopframe = 1;
53033 + DWC_WRITE_REG32(&GET_CORE_IF(pcd)->core_global_regs->gintsts,
53034 + gintsts.d32);
53035 +
53036 + return 1;
53037 +}
53038 +
53039 +/**
53040 + * This interrupt indicates that the EP of the packet on the top of the
53041 + * non-periodic Tx FIFO does not match the EP of the IN Token received.
53042 + *
53043 + * The "Device IN Token Queue" Registers are read to determine the
53044 + * order the IN Tokens have been received. The non-periodic Tx FIFO
53045 + * is flushed, so it can be reloaded in the order seen in the IN Token
53046 + * Queue.
53047 + */
53048 +int32_t dwc_otg_pcd_handle_ep_mismatch_intr(dwc_otg_pcd_t * pcd)
53049 +{
53050 + gintsts_data_t gintsts;
53051 + dwc_otg_core_if_t *core_if = GET_CORE_IF(pcd);
53052 + dctl_data_t dctl;
53053 + gintmsk_data_t intr_mask = {.d32 = 0 };
53054 +
53055 + if (!core_if->en_multiple_tx_fifo && core_if->dma_enable) {
53056 + core_if->start_predict = 1;
53057 +
53058 + DWC_DEBUGPL(DBG_PCDV, "%s(%p)\n", __func__, core_if);
53059 +
53060 + gintsts.d32 = DWC_READ_REG32(&core_if->core_global_regs->gintsts);
53061 + if (!gintsts.b.ginnakeff) {
53062 + /* Disable EP Mismatch interrupt */
53063 + intr_mask.d32 = 0;
53064 + intr_mask.b.epmismatch = 1;
53065 + DWC_MODIFY_REG32(&core_if->core_global_regs->gintmsk, intr_mask.d32, 0);
53066 + /* Enable the Global IN NAK Effective Interrupt */
53067 + intr_mask.d32 = 0;
53068 + intr_mask.b.ginnakeff = 1;
53069 + DWC_MODIFY_REG32(&core_if->core_global_regs->gintmsk, 0, intr_mask.d32);
53070 + /* Set the global non-periodic IN NAK handshake */
53071 + dctl.d32 = DWC_READ_REG32(&core_if->dev_if->dev_global_regs->dctl);
53072 + dctl.b.sgnpinnak = 1;
53073 + DWC_WRITE_REG32(&core_if->dev_if->dev_global_regs->dctl, dctl.d32);
53074 + } else {
53075 + DWC_PRINTF("gintsts.b.ginnakeff = 1! dctl.b.sgnpinnak not set\n");
53076 + }
53077 + /* Disabling of all EPs will be done in the dwc_otg_pcd_handle_in_nak_effective()
53078 + * handler after the Global IN NAK Effective interrupt is asserted */
53079 + }
53080 + /* Clear interrupt */
53081 + gintsts.d32 = 0;
53082 + gintsts.b.epmismatch = 1;
53083 + DWC_WRITE_REG32(&core_if->core_global_regs->gintsts, gintsts.d32);
53084 +
53085 + return 1;
53086 +}
53087 +
53088 +/**
53089 + * This interrupt is valid only in DMA mode. This interrupt indicates that the
53090 + * core has stopped fetching data for IN endpoints due to the unavailability of
53091 + * TxFIFO space or Request Queue space. This interrupt is used by the
53092 + * application for an endpoint mismatch algorithm.
53093 + *
53094 + * @param pcd The PCD
53095 + */
53096 +int32_t dwc_otg_pcd_handle_ep_fetsusp_intr(dwc_otg_pcd_t * pcd)
53097 +{
53098 + gintsts_data_t gintsts;
53099 + gintmsk_data_t gintmsk_data;
53100 + dctl_data_t dctl;
53101 + dwc_otg_core_if_t *core_if = GET_CORE_IF(pcd);
53102 + DWC_DEBUGPL(DBG_PCDV, "%s(%p)\n", __func__, core_if);
53103 +
53104 + /* Clear the global non-periodic IN NAK handshake */
53105 + dctl.d32 = 0;
53106 + dctl.b.cgnpinnak = 1;
53107 + DWC_MODIFY_REG32(&core_if->dev_if->dev_global_regs->dctl, dctl.d32, dctl.d32);
53108 +
53109 + /* Mask GINTSTS.FETSUSP interrupt */
53110 + gintmsk_data.d32 = DWC_READ_REG32(&core_if->core_global_regs->gintmsk);
53111 + gintmsk_data.b.fetsusp = 0;
53112 + DWC_WRITE_REG32(&core_if->core_global_regs->gintmsk, gintmsk_data.d32);
53113 +
53114 + /* Clear interrupt */
53115 + gintsts.d32 = 0;
53116 + gintsts.b.fetsusp = 1;
53117 + DWC_WRITE_REG32(&core_if->core_global_regs->gintsts, gintsts.d32);
53118 +
53119 + return 1;
53120 +}
53121 +/**
53122 + * This function stalls EP0.
53123 + */
53124 +static inline void ep0_do_stall(dwc_otg_pcd_t * pcd, const int err_val)
53125 +{
53126 + dwc_otg_pcd_ep_t *ep0 = &pcd->ep0;
53127 + usb_device_request_t *ctrl = &pcd->setup_pkt->req;
53128 + DWC_WARN("req %02x.%02x protocol STALL; err %d\n",
53129 + ctrl->bmRequestType, ctrl->bRequest, err_val);
53130 +
53131 + ep0->dwc_ep.is_in = 1;
53132 + dwc_otg_ep_set_stall(GET_CORE_IF(pcd), &ep0->dwc_ep);
53133 + pcd->ep0.stopped = 1;
53134 + pcd->ep0state = EP0_IDLE;
53135 + ep0_out_start(GET_CORE_IF(pcd), pcd);
53136 +}
53137 +
53138 +/**
53139 + * This function delegates the setup command to the gadget driver.
53140 + */
53141 +static inline void do_gadget_setup(dwc_otg_pcd_t * pcd,
53142 + usb_device_request_t * ctrl)
53143 +{
53144 + int ret = 0;
53145 + DWC_SPINUNLOCK(pcd->lock);
53146 + ret = pcd->fops->setup(pcd, (uint8_t *) ctrl);
53147 + DWC_SPINLOCK(pcd->lock);
53148 + if (ret < 0) {
53149 + ep0_do_stall(pcd, ret);
53150 + }
53151 +
53152 + /** @todo This is a g_file_storage gadget driver specific
53153 + * workaround: a DELAYED_STATUS result from the fsg_setup
53154 + * routine will result in the gadget queueing an EP0 IN status
53155 + * phase for a two-stage control transfer. Exactly the same as
53156 + * a SET_CONFIGURATION/SET_INTERFACE except that this is a class
53157 + * specific request. Need a generic way to know when the gadget
53158 + * driver will queue the status phase. Can we assume when we
53159 + * call the gadget driver setup() function that it will always
53160 + * queue and require the following flag? Need to look into
53161 + * this.
53162 + */
53163 +
53164 + if (ret == 256 + 999) {
53165 + pcd->request_config = 1;
53166 + }
53167 +}
53168 +
53169 +#ifdef DWC_UTE_CFI
53170 +/**
53171 + * This function delegates the CFI setup commands to the gadget driver.
53172 + * This function will return a negative value to indicate a failure.
53173 + */
53174 +static inline int cfi_gadget_setup(dwc_otg_pcd_t * pcd,
53175 + struct cfi_usb_ctrlrequest *ctrl_req)
53176 +{
53177 + int ret = 0;
53178 +
53179 + if (pcd->fops && pcd->fops->cfi_setup) {
53180 + DWC_SPINUNLOCK(pcd->lock);
53181 + ret = pcd->fops->cfi_setup(pcd, ctrl_req);
53182 + DWC_SPINLOCK(pcd->lock);
53183 + if (ret < 0) {
53184 + ep0_do_stall(pcd, ret);
53185 + return ret;
53186 + }
53187 + }
53188 +
53189 + return ret;
53190 +}
53191 +#endif
53192 +
53193 +/**
53194 + * This function starts the Zero-Length Packet for the IN status phase
53195 + * of a 2 stage control transfer.
53196 + */
53197 +static inline void do_setup_in_status_phase(dwc_otg_pcd_t * pcd)
53198 +{
53199 + dwc_otg_pcd_ep_t *ep0 = &pcd->ep0;
53200 + if (pcd->ep0state == EP0_STALL) {
53201 + return;
53202 + }
53203 +
53204 + pcd->ep0state = EP0_IN_STATUS_PHASE;
53205 +
53206 + /* Prepare for more SETUP Packets */
53207 + DWC_DEBUGPL(DBG_PCD, "EP0 IN ZLP\n");
53208 + if ((GET_CORE_IF(pcd)->snpsid >= OTG_CORE_REV_3_00a)
53209 + && (pcd->core_if->dma_desc_enable)
53210 + && (ep0->dwc_ep.xfer_count < ep0->dwc_ep.total_len)) {
53211 + DWC_DEBUGPL(DBG_PCDV,
53212 + "Data terminated wait next packet in out_desc_addr\n");
53213 + pcd->backup_buf = phys_to_virt(ep0->dwc_ep.dma_addr);
53214 + pcd->data_terminated = 1;
53215 + }
53216 + ep0->dwc_ep.xfer_len = 0;
53217 + ep0->dwc_ep.xfer_count = 0;
53218 + ep0->dwc_ep.is_in = 1;
53219 + ep0->dwc_ep.dma_addr = pcd->setup_pkt_dma_handle;
53220 + dwc_otg_ep0_start_transfer(GET_CORE_IF(pcd), &ep0->dwc_ep);
53221 +
53222 + /* Prepare for more SETUP Packets */
53223 + //ep0_out_start(GET_CORE_IF(pcd), pcd);
53224 +}
53225 +
53226 +/**
53227 + * This function starts the Zero-Length Packet for the OUT status phase
53228 + * of a 2 stage control transfer.
53229 + */
53230 +static inline void do_setup_out_status_phase(dwc_otg_pcd_t * pcd)
53231 +{
53232 + dwc_otg_pcd_ep_t *ep0 = &pcd->ep0;
53233 + if (pcd->ep0state == EP0_STALL) {
53234 + DWC_DEBUGPL(DBG_PCD, "EP0 STALLED\n");
53235 + return;
53236 + }
53237 + pcd->ep0state = EP0_OUT_STATUS_PHASE;
53238 +
53239 + DWC_DEBUGPL(DBG_PCD, "EP0 OUT ZLP\n");
53240 + ep0->dwc_ep.xfer_len = 0;
53241 + ep0->dwc_ep.xfer_count = 0;
53242 + ep0->dwc_ep.is_in = 0;
53243 + ep0->dwc_ep.dma_addr = pcd->setup_pkt_dma_handle;
53244 + dwc_otg_ep0_start_transfer(GET_CORE_IF(pcd), &ep0->dwc_ep);
53245 +
53246 + /* Prepare for more SETUP Packets */
53247 + if (GET_CORE_IF(pcd)->dma_enable == 0) {
53248 + ep0_out_start(GET_CORE_IF(pcd), pcd);
53249 + }
53250 +}
53251 +
53252 +/**
53253 + * Clear the EP halt (STALL) and if pending requests start the
53254 + * transfer.
53255 + */
53256 +static inline void pcd_clear_halt(dwc_otg_pcd_t * pcd, dwc_otg_pcd_ep_t * ep)
53257 +{
53258 + if (ep->dwc_ep.stall_clear_flag == 0)
53259 + dwc_otg_ep_clear_stall(GET_CORE_IF(pcd), &ep->dwc_ep);
53260 +
53261 + /* Reactivate the EP */
53262 + dwc_otg_ep_activate(GET_CORE_IF(pcd), &ep->dwc_ep);
53263 + if (ep->stopped) {
53264 + ep->stopped = 0;
53265 + /* If there is a request in the EP queue start it */
53266 +
53267 + /** @todo FIXME: this causes an EP mismatch in DMA mode.
53268 + * epmismatch not yet implemented. */
53269 +
53270 + /*
53271 + * The above fixme is solved by implementing a tasklet to call
53272 + * start_next_request(), outside of interrupt context at some
53273 + * time after the current time, after a clear-halt setup packet.
53274 + * Still need to implement ep mismatch in the future if a gadget
53275 + * ever uses more than one endpoint at once
53276 + */
53277 + ep->queue_sof = 1;
53278 + DWC_TASK_SCHEDULE(pcd->start_xfer_tasklet);
53279 + }
53280 + /* Start Control Status Phase */
53281 + do_setup_in_status_phase(pcd);
53282 +}
53283 +
53284 +/**
53285 + * This function is called when the SET_FEATURE TEST_MODE Setup packet
53286 + * is sent from the host. The Device Control register is written with
53287 + * the Test Mode bits set to the specified Test Mode. This is done as
53288 + * a tasklet so that the "Status" phase of the control transfer
53289 + * completes before transmitting the TEST packets.
53290 + *
53291 + * @todo This has not been tested since the tasklet struct was put
53292 + * into the PCD struct!
53293 + *
53294 + */
53295 +void do_test_mode(void *data)
53296 +{
53297 + dctl_data_t dctl;
53298 + dwc_otg_pcd_t *pcd = (dwc_otg_pcd_t *) data;
53299 + dwc_otg_core_if_t *core_if = GET_CORE_IF(pcd);
53300 + int test_mode = pcd->test_mode;
53301 +
53302 +// DWC_WARN("%s() has not been tested since being rewritten!\n", __func__);
53303 +
53304 + dctl.d32 = DWC_READ_REG32(&core_if->dev_if->dev_global_regs->dctl);
53305 + switch (test_mode) {
53306 + case 1: // TEST_J
53307 + dctl.b.tstctl = 1;
53308 + break;
53309 +
53310 + case 2: // TEST_K
53311 + dctl.b.tstctl = 2;
53312 + break;
53313 +
53314 + case 3: // TEST_SE0_NAK
53315 + dctl.b.tstctl = 3;
53316 + break;
53317 +
53318 + case 4: // TEST_PACKET
53319 + dctl.b.tstctl = 4;
53320 + break;
53321 +
53322 + case 5: // TEST_FORCE_ENABLE
53323 + dctl.b.tstctl = 5;
53324 + break;
53325 + }
53326 + DWC_WRITE_REG32(&core_if->dev_if->dev_global_regs->dctl, dctl.d32);
53327 +}
53328 +
53329 +/**
53330 + * This function processes the GET_STATUS Setup Commands.
53331 + */
53332 +static inline void do_get_status(dwc_otg_pcd_t * pcd)
53333 +{
53334 + usb_device_request_t ctrl = pcd->setup_pkt->req;
53335 + dwc_otg_pcd_ep_t *ep;
53336 + dwc_otg_pcd_ep_t *ep0 = &pcd->ep0;
53337 + uint16_t *status = pcd->status_buf;
53338 + dwc_otg_core_if_t *core_if = GET_CORE_IF(pcd);
53339 +
53340 +#ifdef DEBUG_EP0
53341 + DWC_DEBUGPL(DBG_PCD,
53342 + "GET_STATUS %02x.%02x v%04x i%04x l%04x\n",
53343 + ctrl.bmRequestType, ctrl.bRequest,
53344 + UGETW(ctrl.wValue), UGETW(ctrl.wIndex),
53345 + UGETW(ctrl.wLength));
53346 +#endif
53347 +
53348 + switch (UT_GET_RECIPIENT(ctrl.bmRequestType)) {
53349 + case UT_DEVICE:
53350 + if(UGETW(ctrl.wIndex) == 0xF000) { /* OTG Status selector */
53351 + DWC_PRINTF("wIndex - %d\n", UGETW(ctrl.wIndex));
53352 + DWC_PRINTF("OTG VERSION - %d\n", core_if->otg_ver);
53353 + DWC_PRINTF("OTG CAP - %d, %d\n",
53354 + core_if->core_params->otg_cap,
53355 + DWC_OTG_CAP_PARAM_HNP_SRP_CAPABLE);
53356 + if (core_if->otg_ver == 1
53357 + && core_if->core_params->otg_cap ==
53358 + DWC_OTG_CAP_PARAM_HNP_SRP_CAPABLE) {
53359 + uint8_t *otgsts = (uint8_t*)pcd->status_buf;
53360 + *otgsts = (core_if->otg_sts & 0x1);
53361 + pcd->ep0_pending = 1;
53362 + ep0->dwc_ep.start_xfer_buff =
53363 + (uint8_t *) otgsts;
53364 + ep0->dwc_ep.xfer_buff = (uint8_t *) otgsts;
53365 + ep0->dwc_ep.dma_addr =
53366 + pcd->status_buf_dma_handle;
53367 + ep0->dwc_ep.xfer_len = 1;
53368 + ep0->dwc_ep.xfer_count = 0;
53369 + ep0->dwc_ep.total_len = ep0->dwc_ep.xfer_len;
53370 + dwc_otg_ep0_start_transfer(GET_CORE_IF(pcd),
53371 + &ep0->dwc_ep);
53372 + return;
53373 + } else {
53374 + ep0_do_stall(pcd, -DWC_E_NOT_SUPPORTED);
53375 + return;
53376 + }
53377 + break;
53378 + } else {
53379 + *status = 0x1; /* Self powered */
53380 + *status |= pcd->remote_wakeup_enable << 1;
53381 + break;
53382 + }
53383 + case UT_INTERFACE:
53384 + *status = 0;
53385 + break;
53386 +
53387 + case UT_ENDPOINT:
53388 + ep = get_ep_by_addr(pcd, UGETW(ctrl.wIndex));
53389 + if (ep == 0 || UGETW(ctrl.wLength) > 2) {
53390 + ep0_do_stall(pcd, -DWC_E_NOT_SUPPORTED);
53391 + return;
53392 + }
53393 + /** @todo check for EP stall */
53394 + *status = ep->stopped;
53395 + break;
53396 + }
53397 + pcd->ep0_pending = 1;
53398 + ep0->dwc_ep.start_xfer_buff = (uint8_t *) status;
53399 + ep0->dwc_ep.xfer_buff = (uint8_t *) status;
53400 + ep0->dwc_ep.dma_addr = pcd->status_buf_dma_handle;
53401 + ep0->dwc_ep.xfer_len = 2;
53402 + ep0->dwc_ep.xfer_count = 0;
53403 + ep0->dwc_ep.total_len = ep0->dwc_ep.xfer_len;
53404 + dwc_otg_ep0_start_transfer(GET_CORE_IF(pcd), &ep0->dwc_ep);
53405 +}
53406 +
53407 +/**
53408 + * This function processes the SET_FEATURE Setup Commands.
53409 + */
53410 +static inline void do_set_feature(dwc_otg_pcd_t * pcd)
53411 +{
53412 + dwc_otg_core_if_t *core_if = GET_CORE_IF(pcd);
53413 + dwc_otg_core_global_regs_t *global_regs = core_if->core_global_regs;
53414 + usb_device_request_t ctrl = pcd->setup_pkt->req;
53415 + dwc_otg_pcd_ep_t *ep = 0;
53416 + int32_t otg_cap_param = core_if->core_params->otg_cap;
53417 + gotgctl_data_t gotgctl = {.d32 = 0 };
53418 +
53419 + DWC_DEBUGPL(DBG_PCD, "SET_FEATURE:%02x.%02x v%04x i%04x l%04x\n",
53420 + ctrl.bmRequestType, ctrl.bRequest,
53421 + UGETW(ctrl.wValue), UGETW(ctrl.wIndex),
53422 + UGETW(ctrl.wLength));
53423 + DWC_DEBUGPL(DBG_PCD, "otg_cap=%d\n", otg_cap_param);
53424 +
53425 + switch (UT_GET_RECIPIENT(ctrl.bmRequestType)) {
53426 + case UT_DEVICE:
53427 + switch (UGETW(ctrl.wValue)) {
53428 + case UF_DEVICE_REMOTE_WAKEUP:
53429 + pcd->remote_wakeup_enable = 1;
53430 + break;
53431 +
53432 + case UF_TEST_MODE:
53433 + /* Setup the Test Mode tasklet to do the Test
53434 + * Packet generation after the SETUP Status
53435 + * phase has completed. */
53436 +
53437 + /** @todo This has not been tested since the
53438 + * tasklet struct was put into the PCD
53439 + * struct! */
53440 + pcd->test_mode = UGETW(ctrl.wIndex) >> 8;
53441 + DWC_TASK_SCHEDULE(pcd->test_mode_tasklet);
53442 + break;
53443 +
53444 + case UF_DEVICE_B_HNP_ENABLE:
53445 + DWC_DEBUGPL(DBG_PCDV,
53446 + "SET_FEATURE: USB_DEVICE_B_HNP_ENABLE\n");
53447 +
53448 + /* dev may initiate HNP */
53449 + if (otg_cap_param == DWC_OTG_CAP_PARAM_HNP_SRP_CAPABLE) {
53450 + pcd->b_hnp_enable = 1;
53451 + dwc_otg_pcd_update_otg(pcd, 0);
53452 + DWC_DEBUGPL(DBG_PCD, "Request B HNP\n");
53453 + /**@todo Is the gotgctl.devhnpen cleared
53454 + * by a USB Reset? */
53455 + gotgctl.b.devhnpen = 1;
53456 + gotgctl.b.hnpreq = 1;
53457 + DWC_WRITE_REG32(&global_regs->gotgctl,
53458 + gotgctl.d32);
53459 + } else {
53460 + ep0_do_stall(pcd, -DWC_E_NOT_SUPPORTED);
53461 + return;
53462 + }
53463 + break;
53464 +
53465 + case UF_DEVICE_A_HNP_SUPPORT:
53466 + /* RH port supports HNP */
53467 + DWC_DEBUGPL(DBG_PCDV,
53468 + "SET_FEATURE: USB_DEVICE_A_HNP_SUPPORT\n");
53469 + if (otg_cap_param == DWC_OTG_CAP_PARAM_HNP_SRP_CAPABLE) {
53470 + pcd->a_hnp_support = 1;
53471 + dwc_otg_pcd_update_otg(pcd, 0);
53472 + } else {
53473 + ep0_do_stall(pcd, -DWC_E_NOT_SUPPORTED);
53474 + return;
53475 + }
53476 + break;
53477 +
53478 + case UF_DEVICE_A_ALT_HNP_SUPPORT:
53479 + /* other RH port does */
53480 + DWC_DEBUGPL(DBG_PCDV,
53481 + "SET_FEATURE: USB_DEVICE_A_ALT_HNP_SUPPORT\n");
53482 + if (otg_cap_param == DWC_OTG_CAP_PARAM_HNP_SRP_CAPABLE) {
53483 + pcd->a_alt_hnp_support = 1;
53484 + dwc_otg_pcd_update_otg(pcd, 0);
53485 + } else {
53486 + ep0_do_stall(pcd, -DWC_E_NOT_SUPPORTED);
53487 + return;
53488 + }
53489 + break;
53490 +
53491 + default:
53492 + ep0_do_stall(pcd, -DWC_E_NOT_SUPPORTED);
53493 + return;
53494 +
53495 + }
53496 + do_setup_in_status_phase(pcd);
53497 + break;
53498 +
53499 + case UT_INTERFACE:
53500 + do_gadget_setup(pcd, &ctrl);
53501 + break;
53502 +
53503 + case UT_ENDPOINT:
53504 + if (UGETW(ctrl.wValue) == UF_ENDPOINT_HALT) {
53505 + ep = get_ep_by_addr(pcd, UGETW(ctrl.wIndex));
53506 + if (ep == 0) {
53507 + ep0_do_stall(pcd, -DWC_E_NOT_SUPPORTED);
53508 + return;
53509 + }
53510 + ep->stopped = 1;
53511 + dwc_otg_ep_set_stall(core_if, &ep->dwc_ep);
53512 + }
53513 + do_setup_in_status_phase(pcd);
53514 + break;
53515 + }
53516 +}
53517 +
53518 +/**
53519 + * This function processes the CLEAR_FEATURE Setup Commands.
53520 + */
53521 +static inline void do_clear_feature(dwc_otg_pcd_t * pcd)
53522 +{
53523 + usb_device_request_t ctrl = pcd->setup_pkt->req;
53524 + dwc_otg_pcd_ep_t *ep = 0;
53525 +
53526 + DWC_DEBUGPL(DBG_PCD,
53527 + "CLEAR_FEATURE:%02x.%02x v%04x i%04x l%04x\n",
53528 + ctrl.bmRequestType, ctrl.bRequest,
53529 + UGETW(ctrl.wValue), UGETW(ctrl.wIndex),
53530 + UGETW(ctrl.wLength));
53531 +
53532 + switch (UT_GET_RECIPIENT(ctrl.bmRequestType)) {
53533 + case UT_DEVICE:
53534 + switch (UGETW(ctrl.wValue)) {
53535 + case UF_DEVICE_REMOTE_WAKEUP:
53536 + pcd->remote_wakeup_enable = 0;
53537 + break;
53538 +
53539 + case UF_TEST_MODE:
53540 + /** @todo Add CLEAR_FEATURE for TEST modes. */
53541 + break;
53542 +
53543 + default:
53544 + ep0_do_stall(pcd, -DWC_E_NOT_SUPPORTED);
53545 + return;
53546 + }
53547 + do_setup_in_status_phase(pcd);
53548 + break;
53549 +
53550 + case UT_ENDPOINT:
53551 + ep = get_ep_by_addr(pcd, UGETW(ctrl.wIndex));
53552 + if (ep == 0) {
53553 + ep0_do_stall(pcd, -DWC_E_NOT_SUPPORTED);
53554 + return;
53555 + }
53556 +
53557 + pcd_clear_halt(pcd, ep);
53558 +
53559 + break;
53560 + }
53561 +}
53562 +
53563 +/**
53564 + * This function processes the SET_ADDRESS Setup Commands.
53565 + */
53566 +static inline void do_set_address(dwc_otg_pcd_t * pcd)
53567 +{
53568 + dwc_otg_dev_if_t *dev_if = GET_CORE_IF(pcd)->dev_if;
53569 + usb_device_request_t ctrl = pcd->setup_pkt->req;
53570 +
53571 + if (ctrl.bmRequestType == UT_DEVICE) {
53572 + dcfg_data_t dcfg = {.d32 = 0 };
53573 +
53574 +#ifdef DEBUG_EP0
53575 +// DWC_DEBUGPL(DBG_PCDV, "SET_ADDRESS:%d\n", ctrl.wValue);
53576 +#endif
53577 + dcfg.b.devaddr = UGETW(ctrl.wValue);
53578 + DWC_MODIFY_REG32(&dev_if->dev_global_regs->dcfg, 0, dcfg.d32);
53579 + do_setup_in_status_phase(pcd);
53580 + }
53581 +}
53582 +
53583 +/**
53584 + * This function processes SETUP commands. In Linux, the USB Command
53585 + * processing is done in two places - the first being the PCD and the
53586 + * second in the Gadget Driver (for example, the File-Backed Storage
53587 + * Gadget Driver).
53588 + *
53589 + * <table>
53590 + * <tr><td>Command </td><td>Driver </td><td>Description</td></tr>
53591 + *
53592 + * <tr><td>GET_STATUS </td><td>PCD </td><td>Command is processed as
53593 + * defined in chapter 9 of the USB 2.0 Specification
53594 + * </td></tr>
53595 + *
53596 + * <tr><td>CLEAR_FEATURE </td><td>PCD </td><td>The Device and Endpoint
53597 + * requests are the ENDPOINT_HALT feature is procesed, all others the
53598 + * interface requests are ignored.</td></tr>
53599 + *
53600 + * <tr><td>SET_FEATURE </td><td>PCD </td><td>The Device and Endpoint
53601 + * requests are processed by the PCD. Interface requests are passed
53602 + * to the Gadget Driver.</td></tr>
53603 + *
53604 + * <tr><td>SET_ADDRESS </td><td>PCD </td><td>Program the DCFG reg,
53605 + * with device address received </td></tr>
53606 + *
53607 + * <tr><td>GET_DESCRIPTOR </td><td>Gadget Driver </td><td>Return the
53608 + * requested descriptor</td></tr>
53609 + *
53610 + * <tr><td>SET_DESCRIPTOR </td><td>Gadget Driver </td><td>Optional -
53611 + * not implemented by any of the existing Gadget Drivers.</td></tr>
53612 + *
53613 + * <tr><td>SET_CONFIGURATION </td><td>Gadget Driver </td><td>Disable
53614 + * all EPs and enable EPs for new configuration.</td></tr>
53615 + *
53616 + * <tr><td>GET_CONFIGURATION </td><td>Gadget Driver </td><td>Return
53617 + * the current configuration</td></tr>
53618 + *
53619 + * <tr><td>SET_INTERFACE </td><td>Gadget Driver </td><td>Disable all
53620 + * EPs and enable EPs for new configuration.</td></tr>
53621 + *
53622 + * <tr><td>GET_INTERFACE </td><td>Gadget Driver </td><td>Return the
53623 + * current interface.</td></tr>
53624 + *
53625 + * <tr><td>SYNC_FRAME </td><td>PCD </td><td>Display debug
53626 + * message.</td></tr>
53627 + * </table>
53628 + *
53629 + * When the SETUP Phase Done interrupt occurs, the PCD SETUP commands are
53630 + * processed by pcd_setup. Calling the Function Driver's setup function from
53631 + * pcd_setup processes the gadget SETUP commands.
53632 + */
53633 +static inline void pcd_setup(dwc_otg_pcd_t * pcd)
53634 +{
53635 + dwc_otg_core_if_t *core_if = GET_CORE_IF(pcd);
53636 + dwc_otg_dev_if_t *dev_if = core_if->dev_if;
53637 + usb_device_request_t ctrl = pcd->setup_pkt->req;
53638 + dwc_otg_pcd_ep_t *ep0 = &pcd->ep0;
53639 +
53640 + deptsiz0_data_t doeptsize0 = {.d32 = 0 };
53641 +
53642 +#ifdef DWC_UTE_CFI
53643 + int retval = 0;
53644 + struct cfi_usb_ctrlrequest cfi_req;
53645 +#endif
53646 +
53647 + doeptsize0.d32 = DWC_READ_REG32(&dev_if->out_ep_regs[0]->doeptsiz);
53648 +
53649 +	/** In BDMA mode more than 1 setup packet is not supported until 3.00a */
53650 + if (core_if->dma_enable && core_if->dma_desc_enable == 0
53651 + && (doeptsize0.b.supcnt < 2)
53652 + && (core_if->snpsid < OTG_CORE_REV_2_94a)) {
53653 + DWC_ERROR
53654 + ("\n\n----------- CANNOT handle > 1 setup packet in DMA mode\n\n");
53655 + }
53656 + if ((core_if->snpsid >= OTG_CORE_REV_3_00a)
53657 + && (core_if->dma_enable == 1) && (core_if->dma_desc_enable == 0)) {
53658 + ctrl =
53659 + (pcd->setup_pkt +
53660 + (3 - doeptsize0.b.supcnt - 1 +
53661 + ep0->dwc_ep.stp_rollover))->req;
53662 + }
53663 +#ifdef DEBUG_EP0
53664 + DWC_DEBUGPL(DBG_PCD, "SETUP %02x.%02x v%04x i%04x l%04x\n",
53665 + ctrl.bmRequestType, ctrl.bRequest,
53666 + UGETW(ctrl.wValue), UGETW(ctrl.wIndex),
53667 + UGETW(ctrl.wLength));
53668 +#endif
53669 +
53670 + /* Clean up the request queue */
53671 + dwc_otg_request_nuke(ep0);
53672 + ep0->stopped = 0;
53673 +
53674 + if (ctrl.bmRequestType & UE_DIR_IN) {
53675 + ep0->dwc_ep.is_in = 1;
53676 + pcd->ep0state = EP0_IN_DATA_PHASE;
53677 + } else {
53678 + ep0->dwc_ep.is_in = 0;
53679 + pcd->ep0state = EP0_OUT_DATA_PHASE;
53680 + }
53681 +
53682 + if (UGETW(ctrl.wLength) == 0) {
53683 + ep0->dwc_ep.is_in = 1;
53684 + pcd->ep0state = EP0_IN_STATUS_PHASE;
53685 + }
53686 +
53687 + if (UT_GET_TYPE(ctrl.bmRequestType) != UT_STANDARD) {
53688 +
53689 +#ifdef DWC_UTE_CFI
53690 + DWC_MEMCPY(&cfi_req, &ctrl, sizeof(usb_device_request_t));
53691 +
53692 + //printk(KERN_ALERT "CFI: req_type=0x%02x; req=0x%02x\n",
53693 +		//	 ctrl.bRequestType, ctrl.bRequest);
53694 + if (UT_GET_TYPE(cfi_req.bRequestType) == UT_VENDOR) {
53695 + if (cfi_req.bRequest > 0xB0 && cfi_req.bRequest < 0xBF) {
53696 + retval = cfi_setup(pcd, &cfi_req);
53697 + if (retval < 0) {
53698 + ep0_do_stall(pcd, retval);
53699 + pcd->ep0_pending = 0;
53700 + return;
53701 + }
53702 +
53703 + /* if need gadget setup then call it and check the retval */
53704 + if (pcd->cfi->need_gadget_att) {
53705 + retval =
53706 + cfi_gadget_setup(pcd,
53707 + &pcd->
53708 + cfi->ctrl_req);
53709 + if (retval < 0) {
53710 + pcd->ep0_pending = 0;
53711 + return;
53712 + }
53713 + }
53714 +
53715 + if (pcd->cfi->need_status_in_complete) {
53716 + do_setup_in_status_phase(pcd);
53717 + }
53718 + return;
53719 + }
53720 + }
53721 +#endif
53722 +
53723 + /* handle non-standard (class/vendor) requests in the gadget driver */
53724 + do_gadget_setup(pcd, &ctrl);
53725 + return;
53726 + }
53727 +
53728 + /** @todo NGS: Handle bad setup packet? */
53729 +
53730 +///////////////////////////////////////////
53731 +//// --- Standard Request handling --- ////
53732 +
53733 + switch (ctrl.bRequest) {
53734 + case UR_GET_STATUS:
53735 + do_get_status(pcd);
53736 + break;
53737 +
53738 + case UR_CLEAR_FEATURE:
53739 + do_clear_feature(pcd);
53740 + break;
53741 +
53742 + case UR_SET_FEATURE:
53743 + do_set_feature(pcd);
53744 + break;
53745 +
53746 + case UR_SET_ADDRESS:
53747 + do_set_address(pcd);
53748 + break;
53749 +
53750 + case UR_SET_INTERFACE:
53751 + case UR_SET_CONFIG:
53752 +// _pcd->request_config = 1; /* Configuration changed */
53753 + do_gadget_setup(pcd, &ctrl);
53754 + break;
53755 +
53756 + case UR_SYNCH_FRAME:
53757 + do_gadget_setup(pcd, &ctrl);
53758 + break;
53759 +
53760 + default:
53761 + /* Call the Gadget Driver's setup functions */
53762 + do_gadget_setup(pcd, &ctrl);
53763 + break;
53764 + }
53765 +}
53766 +
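
For cores at or above 3.00a the code above picks the most recent SETUP packet out of a small buffer using the expression (3 - supcnt - 1 + stp_rollover). Below is a minimal sketch of that arithmetic, assuming the core is armed for 3 back-to-back SETUP packets and decrements supcnt for each one received; last_setup_index is a made-up helper name, not a function from this driver.

    #include <stdio.h>

    /* The OUT EP0 size register is armed with supcnt = 3; the core decrements it
     * for every SETUP packet it receives, so the newest packet lives at index
     * (3 - supcnt - 1), shifted by one more when the rollover flag is set. */
    static unsigned last_setup_index(unsigned supcnt, unsigned stp_rollover)
    {
        return 3u - supcnt - 1u + stp_rollover;
    }

    int main(void)
    {
        /* One SETUP received: supcnt went 3 -> 2, the packet sits at index 0. */
        printf("index = %u\n", last_setup_index(2, 0));
        /* Two back-to-back SETUPs: supcnt is 1, the newest one is at index 1. */
        printf("index = %u\n", last_setup_index(1, 0));
        return 0;
    }
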
53767 +/**
53768 + * This function completes the ep0 control transfer.
53769 + */
53770 +static int32_t ep0_complete_request(dwc_otg_pcd_ep_t * ep)
53771 +{
53772 + dwc_otg_core_if_t *core_if = GET_CORE_IF(ep->pcd);
53773 + dwc_otg_dev_if_t *dev_if = core_if->dev_if;
53774 + dwc_otg_dev_in_ep_regs_t *in_ep_regs =
53775 + dev_if->in_ep_regs[ep->dwc_ep.num];
53776 +#ifdef DEBUG_EP0
53777 + dwc_otg_dev_out_ep_regs_t *out_ep_regs =
53778 + dev_if->out_ep_regs[ep->dwc_ep.num];
53779 +#endif
53780 + deptsiz0_data_t deptsiz;
53781 + dev_dma_desc_sts_t desc_sts;
53782 + dwc_otg_pcd_request_t *req;
53783 + int is_last = 0;
53784 + dwc_otg_pcd_t *pcd = ep->pcd;
53785 +
53786 +#ifdef DWC_UTE_CFI
53787 + struct cfi_usb_ctrlrequest *ctrlreq;
53788 + int retval = -DWC_E_NOT_SUPPORTED;
53789 +#endif
53790 +
53791 + desc_sts.b.bytes = 0;
53792 +
53793 + if (pcd->ep0_pending && DWC_CIRCLEQ_EMPTY(&ep->queue)) {
53794 + if (ep->dwc_ep.is_in) {
53795 +#ifdef DEBUG_EP0
53796 + DWC_DEBUGPL(DBG_PCDV, "Do setup OUT status phase\n");
53797 +#endif
53798 + do_setup_out_status_phase(pcd);
53799 + } else {
53800 +#ifdef DEBUG_EP0
53801 + DWC_DEBUGPL(DBG_PCDV, "Do setup IN status phase\n");
53802 +#endif
53803 +
53804 +#ifdef DWC_UTE_CFI
53805 + ctrlreq = &pcd->cfi->ctrl_req;
53806 +
53807 + if (UT_GET_TYPE(ctrlreq->bRequestType) == UT_VENDOR) {
53808 + if (ctrlreq->bRequest > 0xB0
53809 + && ctrlreq->bRequest < 0xBF) {
53810 +
53811 + /* Return if the PCD failed to handle the request */
53812 + if ((retval =
53813 + pcd->cfi->ops.
53814 + ctrl_write_complete(pcd->cfi,
53815 + pcd)) < 0) {
53816 + CFI_INFO
53817 + ("ERROR setting a new value in the PCD(%d)\n",
53818 + retval);
53819 + ep0_do_stall(pcd, retval);
53820 + pcd->ep0_pending = 0;
53821 + return 0;
53822 + }
53823 +
53824 + /* If the gadget needs to be notified on the request */
53825 + if (pcd->cfi->need_gadget_att == 1) {
53826 + //retval = do_gadget_setup(pcd, &pcd->cfi->ctrl_req);
53827 + retval =
53828 + cfi_gadget_setup(pcd,
53829 + &pcd->cfi->
53830 + ctrl_req);
53831 +
53832 + /* Return from the function if the gadget failed to process
53833 + * the request properly - this should never happen !!!
53834 + */
53835 + if (retval < 0) {
53836 + CFI_INFO
53837 + ("ERROR setting a new value in the gadget(%d)\n",
53838 + retval);
53839 + pcd->ep0_pending = 0;
53840 + return 0;
53841 + }
53842 + }
53843 +
53844 + CFI_INFO("%s: RETVAL=%d\n", __func__,
53845 + retval);
53846 + /* If we hit here then the PCD and the gadget has properly
53847 + * handled the request - so send the ZLP IN to the host.
53848 + */
53849 + /* @todo: MAS - decide whether we need to start the setup
53850 + * stage based on the need_setup value of the cfi object
53851 + */
53852 + do_setup_in_status_phase(pcd);
53853 + pcd->ep0_pending = 0;
53854 + return 1;
53855 + }
53856 + }
53857 +#endif
53858 +
53859 + do_setup_in_status_phase(pcd);
53860 + }
53861 + pcd->ep0_pending = 0;
53862 + return 1;
53863 + }
53864 +
53865 + if (DWC_CIRCLEQ_EMPTY(&ep->queue)) {
53866 + return 0;
53867 + }
53868 + req = DWC_CIRCLEQ_FIRST(&ep->queue);
53869 +
53870 + if (pcd->ep0state == EP0_OUT_STATUS_PHASE
53871 + || pcd->ep0state == EP0_IN_STATUS_PHASE) {
53872 + is_last = 1;
53873 + } else if (ep->dwc_ep.is_in) {
53874 + deptsiz.d32 = DWC_READ_REG32(&in_ep_regs->dieptsiz);
53875 + if (core_if->dma_desc_enable != 0)
53876 + desc_sts = dev_if->in_desc_addr->status;
53877 +#ifdef DEBUG_EP0
53878 + DWC_DEBUGPL(DBG_PCDV, "%d len=%d xfersize=%d pktcnt=%d\n",
53879 + ep->dwc_ep.num, ep->dwc_ep.xfer_len,
53880 + deptsiz.b.xfersize, deptsiz.b.pktcnt);
53881 +#endif
53882 +
53883 + if (((core_if->dma_desc_enable == 0)
53884 + && (deptsiz.b.xfersize == 0))
53885 + || ((core_if->dma_desc_enable != 0)
53886 + && (desc_sts.b.bytes == 0))) {
53887 + req->actual = ep->dwc_ep.xfer_count;
53888 + /* Is a Zero Len Packet needed? */
53889 + if (req->sent_zlp) {
53890 +#ifdef DEBUG_EP0
53891 + DWC_DEBUGPL(DBG_PCD, "Setup Rx ZLP\n");
53892 +#endif
53893 + req->sent_zlp = 0;
53894 + }
53895 + do_setup_out_status_phase(pcd);
53896 + }
53897 + } else {
53898 + /* ep0-OUT */
53899 +#ifdef DEBUG_EP0
53900 + deptsiz.d32 = DWC_READ_REG32(&out_ep_regs->doeptsiz);
53901 + DWC_DEBUGPL(DBG_PCDV, "%d len=%d xsize=%d pktcnt=%d\n",
53902 + ep->dwc_ep.num, ep->dwc_ep.xfer_len,
53903 + deptsiz.b.xfersize, deptsiz.b.pktcnt);
53904 +#endif
53905 + req->actual = ep->dwc_ep.xfer_count;
53906 +
53907 + /* Is a Zero Len Packet needed? */
53908 + if (req->sent_zlp) {
53909 +#ifdef DEBUG_EP0
53910 + DWC_DEBUGPL(DBG_PCDV, "Setup Tx ZLP\n");
53911 +#endif
53912 + req->sent_zlp = 0;
53913 + }
53914 + /* For older cores do setup in status phase in Slave/BDMA modes,
53915 + * starting from 3.00 do that only in slave, and for DMA modes
53916 + * just re-enable ep 0 OUT here*/
53917 + if (core_if->dma_enable == 0
53918 + || (core_if->dma_desc_enable == 0
53919 + && core_if->snpsid <= OTG_CORE_REV_2_94a)) {
53920 + do_setup_in_status_phase(pcd);
53921 + } else if (core_if->snpsid >= OTG_CORE_REV_3_00a) {
53922 + DWC_DEBUGPL(DBG_PCDV,
53923 + "Enable out ep before in status phase\n");
53924 + ep0_out_start(core_if, pcd);
53925 + }
53926 + }
53927 +
53928 + /* Complete the request */
53929 + if (is_last) {
53930 + dwc_otg_request_done(ep, req, 0);
53931 + ep->dwc_ep.start_xfer_buff = 0;
53932 + ep->dwc_ep.xfer_buff = 0;
53933 + ep->dwc_ep.xfer_len = 0;
53934 + return 1;
53935 + }
53936 + return 0;
53937 +}
53938 +
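
The data-phase branches above both hinge on whether a trailing zero-length packet was requested. A small self-contained sketch of that decision follows, under the assumption that the rule is "non-empty transfer, length an exact multiple of the max packet size, and the request's zero flag set"; needs_zlp is an illustrative name, not a function from this driver.

    #include <stdint.h>
    #include <stdio.h>

    /* A transfer needs a trailing ZLP when it is non-empty, ends exactly on a
     * max-packet boundary and the usb_request asked for one.  In Slave and
     * Buffer DMA modes that ZLP is queued as a second, zero-size transfer. */
    static int needs_zlp(uint32_t len, uint32_t maxpacket, int zero_flag)
    {
        return zero_flag && len != 0 && (len % maxpacket) == 0;
    }

    int main(void)
    {
        printf("%d\n", needs_zlp(512, 64, 1));  /* 1: exact multiple, ZLP wanted   */
        printf("%d\n", needs_zlp(500, 64, 1));  /* 0: short packet already ends it */
        printf("%d\n", needs_zlp(512, 64, 0));  /* 0: request did not ask for one  */
        return 0;
    }
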
53939 +#ifdef DWC_UTE_CFI
53940 +/**
53941 + * This function traverses all the CFI DMA descriptors and
53942 + * accumulates the bytes that are left to be transferred.
53943 + *
53944 + * @return The total bytes left to be transferred, or a negative value on failure
53945 + */
53946 +static inline int cfi_calc_desc_residue(dwc_otg_pcd_ep_t * ep)
53947 +{
53948 + int32_t ret = 0;
53949 + int i;
53950 + struct dwc_otg_dma_desc *ddesc = NULL;
53951 + struct cfi_ep *cfiep;
53952 +
53953 + /* See if the pcd_ep has its respective cfi_ep mapped */
53954 + cfiep = get_cfi_ep_by_pcd_ep(ep->pcd->cfi, ep);
53955 + if (!cfiep) {
53956 + CFI_INFO("%s: Failed to find ep\n", __func__);
53957 + return -1;
53958 + }
53959 +
53960 + ddesc = ep->dwc_ep.descs;
53961 +
53962 + for (i = 0; (i < cfiep->desc_count) && (i < MAX_DMA_DESCS_PER_EP); i++) {
53963 +
53964 +#if defined(PRINT_CFI_DMA_DESCS)
53965 + print_desc(ddesc, ep->ep.name, i);
53966 +#endif
53967 + ret += ddesc->status.b.bytes;
53968 + ddesc++;
53969 + }
53970 +
53971 + if (ret)
53972 + CFI_INFO("!!!!!!!!!! WARNING (%s) - residue=%d\n", __func__,
53973 + ret);
53974 +
53975 + return ret;
53976 +}
53977 +#endif
53978 +
53979 +/**
53980 + * This function completes the request for the EP. If there are
53981 + * additional requests for the EP in the queue they will be started.
53982 + */
53983 +static void complete_ep(dwc_otg_pcd_ep_t * ep)
53984 +{
53985 + dwc_otg_core_if_t *core_if = GET_CORE_IF(ep->pcd);
53986 + struct device *dev = dwc_otg_pcd_to_dev(ep->pcd);
53987 + dwc_otg_dev_if_t *dev_if = core_if->dev_if;
53988 + dwc_otg_dev_in_ep_regs_t *in_ep_regs =
53989 + dev_if->in_ep_regs[ep->dwc_ep.num];
53990 + deptsiz_data_t deptsiz;
53991 + dev_dma_desc_sts_t desc_sts;
53992 + dwc_otg_pcd_request_t *req = 0;
53993 + dwc_otg_dev_dma_desc_t *dma_desc;
53994 + uint32_t byte_count = 0;
53995 + int is_last = 0;
53996 + int i;
53997 +
53998 + DWC_DEBUGPL(DBG_PCDV, "%s() %d-%s\n", __func__, ep->dwc_ep.num,
53999 + (ep->dwc_ep.is_in ? "IN" : "OUT"));
54000 +
54001 + /* Get any pending requests */
54002 + if (!DWC_CIRCLEQ_EMPTY(&ep->queue)) {
54003 + req = DWC_CIRCLEQ_FIRST(&ep->queue);
54004 + if (!req) {
54005 + DWC_PRINTF("complete_ep 0x%p, req = NULL!\n", ep);
54006 + return;
54007 + }
54008 + } else {
54009 + DWC_PRINTF("complete_ep 0x%p, ep->queue empty!\n", ep);
54010 + return;
54011 + }
54012 +
54013 + DWC_DEBUGPL(DBG_PCD, "Requests %d\n", ep->pcd->request_pending);
54014 +
54015 + if (ep->dwc_ep.is_in) {
54016 + deptsiz.d32 = DWC_READ_REG32(&in_ep_regs->dieptsiz);
54017 +
54018 + if (core_if->dma_enable) {
54019 + if (core_if->dma_desc_enable == 0) {
54020 + if (deptsiz.b.xfersize == 0
54021 + && deptsiz.b.pktcnt == 0) {
54022 + byte_count =
54023 + ep->dwc_ep.xfer_len -
54024 + ep->dwc_ep.xfer_count;
54025 +
54026 + ep->dwc_ep.xfer_buff += byte_count;
54027 + ep->dwc_ep.dma_addr += byte_count;
54028 + ep->dwc_ep.xfer_count += byte_count;
54029 +
54030 + DWC_DEBUGPL(DBG_PCDV,
54031 + "%d-%s len=%d xfersize=%d pktcnt=%d\n",
54032 + ep->dwc_ep.num,
54033 + (ep->dwc_ep.
54034 + is_in ? "IN" : "OUT"),
54035 + ep->dwc_ep.xfer_len,
54036 + deptsiz.b.xfersize,
54037 + deptsiz.b.pktcnt);
54038 +
54039 + if (ep->dwc_ep.xfer_len <
54040 + ep->dwc_ep.total_len) {
54041 + dwc_otg_ep_start_transfer
54042 + (core_if, &ep->dwc_ep);
54043 + } else if (ep->dwc_ep.sent_zlp) {
54044 + /*
54045 + * This fragment initiates a 0-length
54046 + * transfer when the queued transfer's
54047 + * size is a multiple of the EP's max
54048 + * packet size and the usb_request zero
54049 + * field is set, i.e. a 0-length packet
54050 + * must also be sent after the data. For
54051 + * Slave and Buffer DMA modes SW has to
54052 + * initiate 2 transfers in this case, one
54053 + * with the transfer size and a second
54054 + * with 0 size. For Descriptor DMA mode
54055 + * SW can initiate a single transfer
54056 + * that handles all packets, including
54057 + * the last 0-length one.
54058 + */
54059 + ep->dwc_ep.sent_zlp = 0;
54060 + dwc_otg_ep_start_zl_transfer
54061 + (core_if, &ep->dwc_ep);
54062 + } else {
54063 + is_last = 1;
54064 + }
54065 + } else {
54066 + if (ep->dwc_ep.type ==
54067 + DWC_OTG_EP_TYPE_ISOC) {
54068 + req->actual = 0;
54069 + dwc_otg_request_done(ep, req, 0);
54070 +
54071 + ep->dwc_ep.start_xfer_buff = 0;
54072 + ep->dwc_ep.xfer_buff = 0;
54073 + ep->dwc_ep.xfer_len = 0;
54074 +
54075 + /* If there is a request in the queue start it. */
54076 + start_next_request(ep);
54077 + } else
54078 + DWC_WARN
54079 + ("Incomplete transfer (%d - %s [siz=%d pkt=%d])\n",
54080 + ep->dwc_ep.num,
54081 + (ep->dwc_ep.is_in ? "IN" : "OUT"),
54082 + deptsiz.b.xfersize,
54083 + deptsiz.b.pktcnt);
54084 + }
54085 + } else {
54086 + dma_desc = ep->dwc_ep.desc_addr;
54087 + byte_count = 0;
54088 + ep->dwc_ep.sent_zlp = 0;
54089 +
54090 +#ifdef DWC_UTE_CFI
54091 + CFI_INFO("%s: BUFFER_MODE=%d\n", __func__,
54092 + ep->dwc_ep.buff_mode);
54093 + if (ep->dwc_ep.buff_mode != BM_STANDARD) {
54094 + int residue;
54095 +
54096 + residue = cfi_calc_desc_residue(ep);
54097 + if (residue < 0)
54098 + return;
54099 +
54100 + byte_count = residue;
54101 + } else {
54102 +#endif
54103 + for (i = 0; i < ep->dwc_ep.desc_cnt;
54104 + ++i) {
54105 + desc_sts = dma_desc->status;
54106 + byte_count += desc_sts.b.bytes;
54107 + dma_desc++;
54108 + }
54109 +#ifdef DWC_UTE_CFI
54110 + }
54111 +#endif
54112 + if (byte_count == 0) {
54113 + ep->dwc_ep.xfer_count =
54114 + ep->dwc_ep.total_len;
54115 + is_last = 1;
54116 + } else {
54117 + DWC_WARN("Incomplete transfer\n");
54118 + }
54119 + }
54120 + } else {
54121 + if (deptsiz.b.xfersize == 0 && deptsiz.b.pktcnt == 0) {
54122 + DWC_DEBUGPL(DBG_PCDV,
54123 + "%d-%s len=%d xfersize=%d pktcnt=%d\n",
54124 + ep->dwc_ep.num,
54125 + ep->dwc_ep.is_in ? "IN" : "OUT",
54126 + ep->dwc_ep.xfer_len,
54127 + deptsiz.b.xfersize,
54128 + deptsiz.b.pktcnt);
54129 +
54130 + /* Check if the whole transfer was completed,
54131 + * if no, setup transfer for next portion of data
54132 + */
54133 + if (ep->dwc_ep.xfer_len < ep->dwc_ep.total_len) {
54134 + dwc_otg_ep_start_transfer(core_if,
54135 + &ep->dwc_ep);
54136 + } else if (ep->dwc_ep.sent_zlp) {
54137 + /*
54138 + * This fragment initiates a 0-length
54139 + * transfer when the queued transfer's
54140 + * size is a multiple of the EP's max
54141 + * packet size and the usb_request zero
54142 + * field is set, i.e. a 0-length packet
54143 + * must also be sent after the data. For
54144 + * Slave and Buffer DMA modes SW has to
54145 + * initiate 2 transfers in this case, one
54146 + * with the transfer size and a second
54147 + * with 0 size. For Descriptor DMA mode
54148 + * SW can initiate a single transfer
54149 + * that handles all packets, including
54150 + * the last 0-length one.
54151 + */
54152 + ep->dwc_ep.sent_zlp = 0;
54153 + dwc_otg_ep_start_zl_transfer(core_if,
54154 + &ep->dwc_ep);
54155 + } else {
54156 + is_last = 1;
54157 + }
54158 + } else {
54159 + DWC_WARN
54160 + ("Incomplete transfer (%d-%s [siz=%d pkt=%d])\n",
54161 + ep->dwc_ep.num,
54162 + (ep->dwc_ep.is_in ? "IN" : "OUT"),
54163 + deptsiz.b.xfersize, deptsiz.b.pktcnt);
54164 + }
54165 + }
54166 + } else {
54167 + dwc_otg_dev_out_ep_regs_t *out_ep_regs =
54168 + dev_if->out_ep_regs[ep->dwc_ep.num];
54169 + desc_sts.d32 = 0;
54170 + if (core_if->dma_enable) {
54171 + if (core_if->dma_desc_enable) {
54172 + dma_desc = ep->dwc_ep.desc_addr;
54173 + byte_count = 0;
54174 + ep->dwc_ep.sent_zlp = 0;
54175 +
54176 +#ifdef DWC_UTE_CFI
54177 + CFI_INFO("%s: BUFFER_MODE=%d\n", __func__,
54178 + ep->dwc_ep.buff_mode);
54179 + if (ep->dwc_ep.buff_mode != BM_STANDARD) {
54180 + int residue;
54181 + residue = cfi_calc_desc_residue(ep);
54182 + if (residue < 0)
54183 + return;
54184 + byte_count = residue;
54185 + } else {
54186 +#endif
54187 +
54188 + for (i = 0; i < ep->dwc_ep.desc_cnt;
54189 + ++i) {
54190 + desc_sts = dma_desc->status;
54191 + byte_count += desc_sts.b.bytes;
54192 + dma_desc++;
54193 + }
54194 +
54195 +#ifdef DWC_UTE_CFI
54196 + }
54197 +#endif
54198 +				/* Check for Interrupt OUT transfers with
54199 +				 * non-dword-aligned mps sizes
54200 + */
54201 + if (ep->dwc_ep.type == DWC_OTG_EP_TYPE_INTR &&
54202 + (ep->dwc_ep.maxpacket%4)) {
54203 + ep->dwc_ep.xfer_count =
54204 + ep->dwc_ep.total_len - byte_count;
54205 + if ((ep->dwc_ep.xfer_len %
54206 + ep->dwc_ep.maxpacket)
54207 + && (ep->dwc_ep.xfer_len /
54208 + ep->dwc_ep.maxpacket <
54209 + MAX_DMA_DESC_CNT))
54210 + ep->dwc_ep.xfer_len -=
54211 + (ep->dwc_ep.desc_cnt -
54212 + 1) * ep->dwc_ep.maxpacket +
54213 + ep->dwc_ep.xfer_len %
54214 + ep->dwc_ep.maxpacket;
54215 + else
54216 + ep->dwc_ep.xfer_len -=
54217 + ep->dwc_ep.desc_cnt *
54218 + ep->dwc_ep.maxpacket;
54219 + if (ep->dwc_ep.xfer_len > 0) {
54220 + dwc_otg_ep_start_transfer
54221 + (core_if, &ep->dwc_ep);
54222 + } else {
54223 + is_last = 1;
54224 + }
54225 + } else {
54226 + ep->dwc_ep.xfer_count =
54227 + ep->dwc_ep.total_len - byte_count +
54228 + ((4 -
54229 + (ep->dwc_ep.
54230 + total_len & 0x3)) & 0x3);
54231 + is_last = 1;
54232 + }
54233 + } else {
54234 + deptsiz.d32 = 0;
54235 + deptsiz.d32 =
54236 + DWC_READ_REG32(&out_ep_regs->doeptsiz);
54237 +
54238 + byte_count = (ep->dwc_ep.xfer_len -
54239 + ep->dwc_ep.xfer_count -
54240 + deptsiz.b.xfersize);
54241 + ep->dwc_ep.xfer_buff += byte_count;
54242 + ep->dwc_ep.dma_addr += byte_count;
54243 + ep->dwc_ep.xfer_count += byte_count;
54244 +
54245 + /* Check if the whole transfer was completed,
54246 + * if no, setup transfer for next portion of data
54247 + */
54248 + if (ep->dwc_ep.xfer_len < ep->dwc_ep.total_len) {
54249 + dwc_otg_ep_start_transfer(core_if,
54250 + &ep->dwc_ep);
54251 + } else if (ep->dwc_ep.sent_zlp) {
54252 + /*
54253 + * This fragment initiates a 0-length
54254 + * transfer when the queued transfer's
54255 + * size is a multiple of the EP's max
54256 + * packet size and the usb_request zero
54257 + * field is set, i.e. a 0-length packet
54258 + * must also be sent after the data. For
54259 + * Slave and Buffer DMA modes SW has to
54260 + * initiate 2 transfers in this case, one
54261 + * with the transfer size and a second
54262 + * with 0 size. For Descriptor DMA mode
54263 + * SW can initiate a single transfer
54264 + * that handles all packets, including
54265 + * the last 0-length one.
54266 + */
54267 + ep->dwc_ep.sent_zlp = 0;
54268 + dwc_otg_ep_start_zl_transfer(core_if,
54269 + &ep->dwc_ep);
54270 + } else {
54271 + is_last = 1;
54272 + }
54273 + }
54274 + } else {
54275 + /* Check if the whole transfer was completed,
54276 + * if no, setup transfer for next portion of data
54277 + */
54278 + if (ep->dwc_ep.xfer_len < ep->dwc_ep.total_len) {
54279 + dwc_otg_ep_start_transfer(core_if, &ep->dwc_ep);
54280 + } else if (ep->dwc_ep.sent_zlp) {
54281 + /*
54282 + * This fragment initiates a 0-length
54283 + * transfer when the queued transfer's
54284 + * size is a multiple of the EP's max
54285 + * packet size and the usb_request zero
54286 + * field is set, i.e. a 0-length packet
54287 + * must also be sent after the data. For
54288 + * Slave and Buffer DMA modes SW has to
54289 + * initiate 2 transfers in this case, one
54290 + * with the transfer size and a second
54291 + * with 0 size. For Descriptor DMA mode
54292 + * SW can initiate a single transfer
54293 + * that handles all packets, including
54294 + * the last 0-length one.
54295 + */
54296 + ep->dwc_ep.sent_zlp = 0;
54297 + dwc_otg_ep_start_zl_transfer(core_if,
54298 + &ep->dwc_ep);
54299 + } else {
54300 + is_last = 1;
54301 + }
54302 + }
54303 +
54304 + DWC_DEBUGPL(DBG_PCDV,
54305 + "addr %p, %d-%s len=%d cnt=%d xsize=%d pktcnt=%d\n",
54306 + &out_ep_regs->doeptsiz, ep->dwc_ep.num,
54307 + ep->dwc_ep.is_in ? "IN" : "OUT",
54308 + ep->dwc_ep.xfer_len, ep->dwc_ep.xfer_count,
54309 + deptsiz.b.xfersize, deptsiz.b.pktcnt);
54310 + }
54311 +
54312 + /* Complete the request */
54313 + if (is_last) {
54314 +#ifdef DWC_UTE_CFI
54315 + if (ep->dwc_ep.buff_mode != BM_STANDARD) {
54316 + req->actual = ep->dwc_ep.cfi_req_len - byte_count;
54317 + } else {
54318 +#endif
54319 + req->actual = ep->dwc_ep.xfer_count;
54320 +#ifdef DWC_UTE_CFI
54321 + }
54322 +#endif
54323 + if (req->dw_align_buf) {
54324 + if (!ep->dwc_ep.is_in) {
54325 + dwc_memcpy(req->buf, req->dw_align_buf, req->length);
54326 + }
54327 + DWC_DMA_FREE(dev, req->length, req->dw_align_buf,
54328 + req->dw_align_buf_dma);
54329 + }
54330 +
54331 + dwc_otg_request_done(ep, req, 0);
54332 +
54333 + ep->dwc_ep.start_xfer_buff = 0;
54334 + ep->dwc_ep.xfer_buff = 0;
54335 + ep->dwc_ep.xfer_len = 0;
54336 +
54337 + /* If there is a request in the queue start it. */
54338 + start_next_request(ep);
54339 + }
54340 +}
54341 +
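
One detail in the OUT descriptor-DMA path above is the ((4 - (total_len & 0x3)) & 0x3) term: descriptor byte counts are dword based, so a transfer whose length is not a multiple of 4 has its received count corrected by the pad up to the next dword boundary. Here is a standalone sketch of just that arithmetic; the helper names are illustrative only.

    #include <stdint.h>
    #include <stdio.h>

    /* Bytes of padding needed to reach the next 4-byte boundary (0..3). */
    static uint32_t pad_to_dword(uint32_t len)
    {
        return (4u - (len & 0x3u)) & 0x3u;
    }

    /* Received byte count as derived above: total length minus the residue the
     * descriptors still report, plus the dword padding that residue includes. */
    static uint32_t rx_count(uint32_t total_len, uint32_t residue)
    {
        return total_len - residue + pad_to_dword(total_len);
    }

    int main(void)
    {
        /* A 10-byte transfer that completed fully: the descriptors still show
         * the 2 padding bytes as "left", and the correction restores 10. */
        printf("%u\n", rx_count(10, 2));
        return 0;
    }
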
54342 +#ifdef DWC_EN_ISOC
54343 +
54344 +/**
54345 + * This function handles the BNA interrupt for Isochronous EPs
54346 + *
54347 + */
54348 +static void dwc_otg_pcd_handle_iso_bna(dwc_otg_pcd_ep_t * ep)
54349 +{
54350 + dwc_ep_t *dwc_ep = &ep->dwc_ep;
54351 + volatile uint32_t *addr;
54352 + depctl_data_t depctl = {.d32 = 0 };
54353 + dwc_otg_pcd_t *pcd = ep->pcd;
54354 + dwc_otg_dev_dma_desc_t *dma_desc;
54355 + int i;
54356 +
54357 + dma_desc =
54358 + dwc_ep->iso_desc_addr + dwc_ep->desc_cnt * (dwc_ep->proc_buf_num);
54359 +
54360 + if (dwc_ep->is_in) {
54361 + dev_dma_desc_sts_t sts = {.d32 = 0 };
54362 + for (i = 0; i < dwc_ep->desc_cnt; ++i, ++dma_desc) {
54363 + sts.d32 = dma_desc->status.d32;
54364 + sts.b_iso_in.bs = BS_HOST_READY;
54365 + dma_desc->status.d32 = sts.d32;
54366 + }
54367 + } else {
54368 + dev_dma_desc_sts_t sts = {.d32 = 0 };
54369 + for (i = 0; i < dwc_ep->desc_cnt; ++i, ++dma_desc) {
54370 + sts.d32 = dma_desc->status.d32;
54371 + sts.b_iso_out.bs = BS_HOST_READY;
54372 + dma_desc->status.d32 = sts.d32;
54373 + }
54374 + }
54375 +
54376 + if (dwc_ep->is_in == 0) {
54377 + addr =
54378 + &GET_CORE_IF(pcd)->dev_if->out_ep_regs[dwc_ep->
54379 + num]->doepctl;
54380 + } else {
54381 + addr =
54382 + &GET_CORE_IF(pcd)->dev_if->in_ep_regs[dwc_ep->num]->diepctl;
54383 + }
54384 + depctl.b.epena = 1;
54385 + DWC_MODIFY_REG32(addr, depctl.d32, depctl.d32);
54386 +}
54387 +
54388 +/**
54389 + * This function sets the latest iso packet information (non-PTI mode)
54390 + *
54391 + * @param core_if Programming view of DWC_otg controller.
54392 + * @param ep The EP to start the transfer on.
54393 + *
54394 + */
54395 +void set_current_pkt_info(dwc_otg_core_if_t * core_if, dwc_ep_t * ep)
54396 +{
54397 + deptsiz_data_t deptsiz = {.d32 = 0 };
54398 + dma_addr_t dma_addr;
54399 + uint32_t offset;
54400 +
54401 + if (ep->proc_buf_num)
54402 + dma_addr = ep->dma_addr1;
54403 + else
54404 + dma_addr = ep->dma_addr0;
54405 +
54406 + if (ep->is_in) {
54407 + deptsiz.d32 =
54408 + DWC_READ_REG32(&core_if->dev_if->
54409 + in_ep_regs[ep->num]->dieptsiz);
54410 + offset = ep->data_per_frame;
54411 + } else {
54412 + deptsiz.d32 =
54413 + DWC_READ_REG32(&core_if->dev_if->
54414 + out_ep_regs[ep->num]->doeptsiz);
54415 + offset =
54416 + ep->data_per_frame +
54417 + (0x4 & (0x4 - (ep->data_per_frame & 0x3)));
54418 + }
54419 +
54420 + if (!deptsiz.b.xfersize) {
54421 + ep->pkt_info[ep->cur_pkt].length = ep->data_per_frame;
54422 + ep->pkt_info[ep->cur_pkt].offset =
54423 + ep->cur_pkt_dma_addr - dma_addr;
54424 + ep->pkt_info[ep->cur_pkt].status = 0;
54425 + } else {
54426 + ep->pkt_info[ep->cur_pkt].length = ep->data_per_frame;
54427 + ep->pkt_info[ep->cur_pkt].offset =
54428 + ep->cur_pkt_dma_addr - dma_addr;
54429 + ep->pkt_info[ep->cur_pkt].status = -DWC_E_NO_DATA;
54430 + }
54431 + ep->cur_pkt_addr += offset;
54432 + ep->cur_pkt_dma_addr += offset;
54433 + ep->cur_pkt++;
54434 +}
54435 +
54436 +/**
54437 + * This function sets the latest iso packet information (DDMA mode)
54438 + *
54439 + * @param core_if Programming view of DWC_otg controller.
54440 + * @param dwc_ep The EP to start the transfer on.
54441 + *
54442 + */
54443 +static void set_ddma_iso_pkts_info(dwc_otg_core_if_t * core_if,
54444 + dwc_ep_t * dwc_ep)
54445 +{
54446 + dwc_otg_dev_dma_desc_t *dma_desc;
54447 + dev_dma_desc_sts_t sts = {.d32 = 0 };
54448 + iso_pkt_info_t *iso_packet;
54449 + uint32_t data_per_desc;
54450 + uint32_t offset;
54451 + int i, j;
54452 +
54453 + iso_packet = dwc_ep->pkt_info;
54454 +
54455 + /** Reinit closed DMA Descriptors*/
54456 + /** ISO OUT EP */
54457 + if (dwc_ep->is_in == 0) {
54458 + dma_desc =
54459 + dwc_ep->iso_desc_addr +
54460 + dwc_ep->desc_cnt * dwc_ep->proc_buf_num;
54461 + offset = 0;
54462 +
54463 + for (i = 0; i < dwc_ep->desc_cnt - dwc_ep->pkt_per_frm;
54464 + i += dwc_ep->pkt_per_frm) {
54465 + for (j = 0; j < dwc_ep->pkt_per_frm; ++j) {
54466 + data_per_desc =
54467 + ((j + 1) * dwc_ep->maxpacket >
54468 + dwc_ep->
54469 + data_per_frame) ? dwc_ep->data_per_frame -
54470 + j * dwc_ep->maxpacket : dwc_ep->maxpacket;
54471 + data_per_desc +=
54472 + (data_per_desc % 4) ? (4 -
54473 + data_per_desc %
54474 + 4) : 0;
54475 +
54476 + sts.d32 = dma_desc->status.d32;
54477 +
54478 +				/* Write status in iso packet descriptor */
54479 + iso_packet->status =
54480 + sts.b_iso_out.rxsts +
54481 + (sts.b_iso_out.bs ^ BS_DMA_DONE);
54482 + if (iso_packet->status) {
54483 + iso_packet->status = -DWC_E_NO_DATA;
54484 + }
54485 +
54486 + /* Received data length */
54487 + if (!sts.b_iso_out.rxbytes) {
54488 + iso_packet->length =
54489 + data_per_desc -
54490 + sts.b_iso_out.rxbytes;
54491 + } else {
54492 + iso_packet->length =
54493 + data_per_desc -
54494 + sts.b_iso_out.rxbytes + (4 -
54495 + dwc_ep->data_per_frame
54496 + % 4);
54497 + }
54498 +
54499 + iso_packet->offset = offset;
54500 +
54501 + offset += data_per_desc;
54502 + dma_desc++;
54503 + iso_packet++;
54504 + }
54505 + }
54506 +
54507 + for (j = 0; j < dwc_ep->pkt_per_frm - 1; ++j) {
54508 + data_per_desc =
54509 + ((j + 1) * dwc_ep->maxpacket >
54510 + dwc_ep->data_per_frame) ? dwc_ep->data_per_frame -
54511 + j * dwc_ep->maxpacket : dwc_ep->maxpacket;
54512 + data_per_desc +=
54513 + (data_per_desc % 4) ? (4 - data_per_desc % 4) : 0;
54514 +
54515 + sts.d32 = dma_desc->status.d32;
54516 +
54517 +			/* Write status in iso packet descriptor */
54518 + iso_packet->status =
54519 + sts.b_iso_out.rxsts +
54520 + (sts.b_iso_out.bs ^ BS_DMA_DONE);
54521 + if (iso_packet->status) {
54522 + iso_packet->status = -DWC_E_NO_DATA;
54523 + }
54524 +
54525 + /* Received data length */
54526 + iso_packet->length =
54527 + dwc_ep->data_per_frame - sts.b_iso_out.rxbytes;
54528 +
54529 + iso_packet->offset = offset;
54530 +
54531 + offset += data_per_desc;
54532 + iso_packet++;
54533 + dma_desc++;
54534 + }
54535 +
54536 + sts.d32 = dma_desc->status.d32;
54537 +
54538 +		/* Write status in iso packet descriptor */
54539 + iso_packet->status =
54540 + sts.b_iso_out.rxsts + (sts.b_iso_out.bs ^ BS_DMA_DONE);
54541 + if (iso_packet->status) {
54542 + iso_packet->status = -DWC_E_NO_DATA;
54543 + }
54544 + /* Received data length */
54545 + if (!sts.b_iso_out.rxbytes) {
54546 + iso_packet->length =
54547 + dwc_ep->data_per_frame - sts.b_iso_out.rxbytes;
54548 + } else {
54549 + iso_packet->length =
54550 + dwc_ep->data_per_frame - sts.b_iso_out.rxbytes +
54551 + (4 - dwc_ep->data_per_frame % 4);
54552 + }
54553 +
54554 + iso_packet->offset = offset;
54555 + } else {
54556 +/** ISO IN EP */
54557 +
54558 + dma_desc =
54559 + dwc_ep->iso_desc_addr +
54560 + dwc_ep->desc_cnt * dwc_ep->proc_buf_num;
54561 +
54562 + for (i = 0; i < dwc_ep->desc_cnt - 1; i++) {
54563 + sts.d32 = dma_desc->status.d32;
54564 +
54565 + /* Write status in iso packet descriptor */
54566 + iso_packet->status =
54567 + sts.b_iso_in.txsts +
54568 + (sts.b_iso_in.bs ^ BS_DMA_DONE);
54569 + if (iso_packet->status != 0) {
54570 + iso_packet->status = -DWC_E_NO_DATA;
54571 +
54572 + }
54573 +			/* Bytes have been transferred */
54574 + iso_packet->length =
54575 + dwc_ep->data_per_frame - sts.b_iso_in.txbytes;
54576 +
54577 + dma_desc++;
54578 + iso_packet++;
54579 + }
54580 +
54581 + sts.d32 = dma_desc->status.d32;
54582 + while (sts.b_iso_in.bs == BS_DMA_BUSY) {
54583 + sts.d32 = dma_desc->status.d32;
54584 + }
54585 +
54586 +		/* Write status in iso packet descriptor ??? to be done with ERROR codes */
54587 + iso_packet->status =
54588 + sts.b_iso_in.txsts + (sts.b_iso_in.bs ^ BS_DMA_DONE);
54589 + if (iso_packet->status != 0) {
54590 + iso_packet->status = -DWC_E_NO_DATA;
54591 + }
54592 +
54593 +		/* Bytes have been transferred */
54594 + iso_packet->length =
54595 + dwc_ep->data_per_frame - sts.b_iso_in.txbytes;
54596 + }
54597 +}
54598 +
54599 +/**
54600 + * This function reinitializes the DMA Descriptors for Isochronous transfer
54601 + *
54602 + * @param core_if Programming view of DWC_otg controller.
54603 + * @param dwc_ep The EP to start the transfer on.
54604 + *
54605 + */
54606 +static void reinit_ddma_iso_xfer(dwc_otg_core_if_t * core_if, dwc_ep_t * dwc_ep)
54607 +{
54608 + int i, j;
54609 + dwc_otg_dev_dma_desc_t *dma_desc;
54610 + dma_addr_t dma_ad;
54611 + volatile uint32_t *addr;
54612 + dev_dma_desc_sts_t sts = {.d32 = 0 };
54613 + uint32_t data_per_desc;
54614 +
54615 + if (dwc_ep->is_in == 0) {
54616 + addr = &core_if->dev_if->out_ep_regs[dwc_ep->num]->doepctl;
54617 + } else {
54618 + addr = &core_if->dev_if->in_ep_regs[dwc_ep->num]->diepctl;
54619 + }
54620 +
54621 + if (dwc_ep->proc_buf_num == 0) {
54622 + /** Buffer 0 descriptors setup */
54623 + dma_ad = dwc_ep->dma_addr0;
54624 + } else {
54625 + /** Buffer 1 descriptors setup */
54626 + dma_ad = dwc_ep->dma_addr1;
54627 + }
54628 +
54629 + /** Reinit closed DMA Descriptors*/
54630 + /** ISO OUT EP */
54631 + if (dwc_ep->is_in == 0) {
54632 + dma_desc =
54633 + dwc_ep->iso_desc_addr +
54634 + dwc_ep->desc_cnt * dwc_ep->proc_buf_num;
54635 +
54636 + sts.b_iso_out.bs = BS_HOST_READY;
54637 + sts.b_iso_out.rxsts = 0;
54638 + sts.b_iso_out.l = 0;
54639 + sts.b_iso_out.sp = 0;
54640 + sts.b_iso_out.ioc = 0;
54641 + sts.b_iso_out.pid = 0;
54642 + sts.b_iso_out.framenum = 0;
54643 +
54644 + for (i = 0; i < dwc_ep->desc_cnt - dwc_ep->pkt_per_frm;
54645 + i += dwc_ep->pkt_per_frm) {
54646 + for (j = 0; j < dwc_ep->pkt_per_frm; ++j) {
54647 + data_per_desc =
54648 + ((j + 1) * dwc_ep->maxpacket >
54649 + dwc_ep->
54650 + data_per_frame) ? dwc_ep->data_per_frame -
54651 + j * dwc_ep->maxpacket : dwc_ep->maxpacket;
54652 + data_per_desc +=
54653 + (data_per_desc % 4) ? (4 -
54654 + data_per_desc %
54655 + 4) : 0;
54656 + sts.b_iso_out.rxbytes = data_per_desc;
54657 + dma_desc->buf = dma_ad;
54658 + dma_desc->status.d32 = sts.d32;
54659 +
54660 + dma_ad += data_per_desc;
54661 + dma_desc++;
54662 + }
54663 + }
54664 +
54665 + for (j = 0; j < dwc_ep->pkt_per_frm - 1; ++j) {
54666 +
54667 + data_per_desc =
54668 + ((j + 1) * dwc_ep->maxpacket >
54669 + dwc_ep->data_per_frame) ? dwc_ep->data_per_frame -
54670 + j * dwc_ep->maxpacket : dwc_ep->maxpacket;
54671 + data_per_desc +=
54672 + (data_per_desc % 4) ? (4 - data_per_desc % 4) : 0;
54673 + sts.b_iso_out.rxbytes = data_per_desc;
54674 +
54675 + dma_desc->buf = dma_ad;
54676 + dma_desc->status.d32 = sts.d32;
54677 +
54678 + dma_desc++;
54679 + dma_ad += data_per_desc;
54680 + }
54681 +
54682 + sts.b_iso_out.ioc = 1;
54683 + sts.b_iso_out.l = dwc_ep->proc_buf_num;
54684 +
54685 + data_per_desc =
54686 + ((j + 1) * dwc_ep->maxpacket >
54687 + dwc_ep->data_per_frame) ? dwc_ep->data_per_frame -
54688 + j * dwc_ep->maxpacket : dwc_ep->maxpacket;
54689 + data_per_desc +=
54690 + (data_per_desc % 4) ? (4 - data_per_desc % 4) : 0;
54691 + sts.b_iso_out.rxbytes = data_per_desc;
54692 +
54693 + dma_desc->buf = dma_ad;
54694 + dma_desc->status.d32 = sts.d32;
54695 + } else {
54696 +/** ISO IN EP */
54697 +
54698 + dma_desc =
54699 + dwc_ep->iso_desc_addr +
54700 + dwc_ep->desc_cnt * dwc_ep->proc_buf_num;
54701 +
54702 + sts.b_iso_in.bs = BS_HOST_READY;
54703 + sts.b_iso_in.txsts = 0;
54704 + sts.b_iso_in.sp = 0;
54705 + sts.b_iso_in.ioc = 0;
54706 + sts.b_iso_in.pid = dwc_ep->pkt_per_frm;
54707 + sts.b_iso_in.framenum = dwc_ep->next_frame;
54708 + sts.b_iso_in.txbytes = dwc_ep->data_per_frame;
54709 + sts.b_iso_in.l = 0;
54710 +
54711 + for (i = 0; i < dwc_ep->desc_cnt - 1; i++) {
54712 + dma_desc->buf = dma_ad;
54713 + dma_desc->status.d32 = sts.d32;
54714 +
54715 + sts.b_iso_in.framenum += dwc_ep->bInterval;
54716 + dma_ad += dwc_ep->data_per_frame;
54717 + dma_desc++;
54718 + }
54719 +
54720 + sts.b_iso_in.ioc = 1;
54721 + sts.b_iso_in.l = dwc_ep->proc_buf_num;
54722 +
54723 + dma_desc->buf = dma_ad;
54724 + dma_desc->status.d32 = sts.d32;
54725 +
54726 + dwc_ep->next_frame =
54727 + sts.b_iso_in.framenum + dwc_ep->bInterval * 1;
54728 + }
54729 + dwc_ep->proc_buf_num = (dwc_ep->proc_buf_num ^ 1) & 0x1;
54730 +}
54731 +
54732 +/**
54733 + * This function handles the Iso EP transfer complete interrupt
54734 + * in case an Iso OUT packet was dropped
54735 + *
54736 + * @param core_if Programming view of DWC_otg controller.
54737 + * @param dwc_ep The EP for which transfer complete was asserted
54738 + *
54739 + */
54740 +static uint32_t handle_iso_out_pkt_dropped(dwc_otg_core_if_t * core_if,
54741 + dwc_ep_t * dwc_ep)
54742 +{
54743 + uint32_t dma_addr;
54744 + uint32_t drp_pkt;
54745 + uint32_t drp_pkt_cnt;
54746 + deptsiz_data_t deptsiz = {.d32 = 0 };
54747 + depctl_data_t depctl = {.d32 = 0 };
54748 + int i;
54749 +
54750 + deptsiz.d32 =
54751 + DWC_READ_REG32(&core_if->dev_if->
54752 + out_ep_regs[dwc_ep->num]->doeptsiz);
54753 +
54754 + drp_pkt = dwc_ep->pkt_cnt - deptsiz.b.pktcnt;
54755 + drp_pkt_cnt = dwc_ep->pkt_per_frm - (drp_pkt % dwc_ep->pkt_per_frm);
54756 +
54757 + /* Setting dropped packets status */
54758 + for (i = 0; i < drp_pkt_cnt; ++i) {
54759 + dwc_ep->pkt_info[drp_pkt].status = -DWC_E_NO_DATA;
54760 + drp_pkt++;
54761 + deptsiz.b.pktcnt--;
54762 + }
54763 +
54764 + if (deptsiz.b.pktcnt > 0) {
54765 + deptsiz.b.xfersize =
54766 + dwc_ep->xfer_len - (dwc_ep->pkt_cnt -
54767 + deptsiz.b.pktcnt) * dwc_ep->maxpacket;
54768 + } else {
54769 + deptsiz.b.xfersize = 0;
54770 + deptsiz.b.pktcnt = 0;
54771 + }
54772 +
54773 + DWC_WRITE_REG32(&core_if->dev_if->out_ep_regs[dwc_ep->num]->doeptsiz,
54774 + deptsiz.d32);
54775 +
54776 + if (deptsiz.b.pktcnt > 0) {
54777 + if (dwc_ep->proc_buf_num) {
54778 + dma_addr =
54779 + dwc_ep->dma_addr1 + dwc_ep->xfer_len -
54780 + deptsiz.b.xfersize;
54781 + } else {
54782 + dma_addr =
54783 + dwc_ep->dma_addr0 + dwc_ep->xfer_len -
54784 +			    deptsiz.b.xfersize;
54785 + }
54786 +
54787 + DWC_WRITE_REG32(&core_if->dev_if->
54788 + out_ep_regs[dwc_ep->num]->doepdma, dma_addr);
54789 +
54790 + /** Re-enable endpoint, clear nak */
54791 + depctl.d32 = 0;
54792 + depctl.b.epena = 1;
54793 + depctl.b.cnak = 1;
54794 +
54795 + DWC_MODIFY_REG32(&core_if->dev_if->
54796 + out_ep_regs[dwc_ep->num]->doepctl, depctl.d32,
54797 + depctl.d32);
54798 + return 0;
54799 + } else {
54800 + return 1;
54801 + }
54802 +}
54803 +
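
The handler above reconstructs how many packets of the current frame to write off once a drop is detected, then reprograms DOEPTSIZ for what is still outstanding. Below is a compact sketch of that bookkeeping with plain integers; the function and parameter names are invented for the example, and the explicit guard against under-run is an assumption rather than a copy of the driver's control flow.

    #include <stdint.h>
    #include <stdio.h>

    /* pkt_cnt: packets originally programmed; pktcnt_left: packets the core still
     * reports pending; pkt_per_frm: packets per frame.  Returns the transfer size
     * still worth programming after writing off the dropped frame remainder. */
    static uint32_t after_drop_xfersize(uint32_t pkt_cnt, uint32_t pktcnt_left,
                                        uint32_t pkt_per_frm, uint32_t xfer_len,
                                        uint32_t maxpacket)
    {
        uint32_t first_dropped = pkt_cnt - pktcnt_left;                 /* already consumed   */
        uint32_t dropped = pkt_per_frm - (first_dropped % pkt_per_frm); /* rest of this frame */
        uint32_t still_pending = pktcnt_left > dropped ? pktcnt_left - dropped : 0;

        if (still_pending == 0)
            return 0;
        return xfer_len - (pkt_cnt - still_pending) * maxpacket;
    }

    int main(void)
    {
        /* 8 packets programmed, 5 still pending, 2 packets per frame, 64-byte mps. */
        printf("%u\n", after_drop_xfersize(8, 5, 2, 8 * 64, 64));
        return 0;
    }
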
54804 +/**
54805 + * This function sets iso packet information (PTI mode)
54806 + *
54807 + * @param core_if Programming view of DWC_otg controller.
54808 + * @param ep The EP to start the transfer on.
54809 + *
54810 + */
54811 +static uint32_t set_iso_pkts_info(dwc_otg_core_if_t * core_if, dwc_ep_t * ep)
54812 +{
54813 + int i, j;
54814 + dma_addr_t dma_ad;
54815 + iso_pkt_info_t *packet_info = ep->pkt_info;
54816 + uint32_t offset;
54817 + uint32_t frame_data;
54818 + deptsiz_data_t deptsiz;
54819 +
54820 + if (ep->proc_buf_num == 0) {
54821 + /** Buffer 0 descriptors setup */
54822 + dma_ad = ep->dma_addr0;
54823 + } else {
54824 + /** Buffer 1 descriptors setup */
54825 + dma_ad = ep->dma_addr1;
54826 + }
54827 +
54828 + if (ep->is_in) {
54829 + deptsiz.d32 =
54830 + DWC_READ_REG32(&core_if->dev_if->in_ep_regs[ep->num]->
54831 + dieptsiz);
54832 + } else {
54833 + deptsiz.d32 =
54834 + DWC_READ_REG32(&core_if->dev_if->out_ep_regs[ep->num]->
54835 + doeptsiz);
54836 + }
54837 +
54838 + if (!deptsiz.b.xfersize) {
54839 + offset = 0;
54840 + for (i = 0; i < ep->pkt_cnt; i += ep->pkt_per_frm) {
54841 + frame_data = ep->data_per_frame;
54842 + for (j = 0; j < ep->pkt_per_frm; ++j) {
54843 +
54844 +				/* Packet status is not set here: it is
54845 +				 * initialized to 0 and, if the packet was sent
54846 +				 * successfully, the status field remains 0 */
54847 +
54848 +				/* Bytes have been transferred */
54849 + packet_info->length =
54850 + (ep->maxpacket <
54851 + frame_data) ? ep->maxpacket : frame_data;
54852 +
54853 + /* Received packet offset */
54854 + packet_info->offset = offset;
54855 + offset += packet_info->length;
54856 + frame_data -= packet_info->length;
54857 +
54858 + packet_info++;
54859 + }
54860 + }
54861 + return 1;
54862 + } else {
54863 +		/* This is a workaround for the case of Transfer Complete with
54864 + * PktDrpSts interrupts merging - in this case Transfer complete
54865 + * interrupt for Isoc Out Endpoint is asserted without PktDrpSts
54866 + * set and with DOEPTSIZ register non zero. Investigations showed,
54867 + * that this happens when Out packet is dropped, but because of
54868 + * interrupts merging during first interrupt handling PktDrpSts
54869 + * bit is cleared and for next merged interrupts it is not reset.
54870 +		 * In this case SW handles the interrupt as if PktDrpSts bit is set.
54871 + */
54872 + if (ep->is_in) {
54873 + return 1;
54874 + } else {
54875 + return handle_iso_out_pkt_dropped(core_if, ep);
54876 + }
54877 + }
54878 +}
54879 +
54880 +/**
54881 + * This function is to handle Iso EP transfer complete interrupt
54882 + *
54883 + * @param pcd The PCD
54884 + * @param ep The EP for which transfer complete was asserted
54885 + *
54886 + */
54887 +static void complete_iso_ep(dwc_otg_pcd_t * pcd, dwc_otg_pcd_ep_t * ep)
54888 +{
54889 + dwc_otg_core_if_t *core_if = GET_CORE_IF(ep->pcd);
54890 + dwc_ep_t *dwc_ep = &ep->dwc_ep;
54891 + uint8_t is_last = 0;
54892 +
54893 + if (ep->dwc_ep.next_frame == 0xffffffff) {
54894 + DWC_WARN("Next frame is not set!\n");
54895 + return;
54896 + }
54897 +
54898 + if (core_if->dma_enable) {
54899 + if (core_if->dma_desc_enable) {
54900 + set_ddma_iso_pkts_info(core_if, dwc_ep);
54901 + reinit_ddma_iso_xfer(core_if, dwc_ep);
54902 + is_last = 1;
54903 + } else {
54904 + if (core_if->pti_enh_enable) {
54905 + if (set_iso_pkts_info(core_if, dwc_ep)) {
54906 + dwc_ep->proc_buf_num =
54907 + (dwc_ep->proc_buf_num ^ 1) & 0x1;
54908 + dwc_otg_iso_ep_start_buf_transfer
54909 + (core_if, dwc_ep);
54910 + is_last = 1;
54911 + }
54912 + } else {
54913 + set_current_pkt_info(core_if, dwc_ep);
54914 + if (dwc_ep->cur_pkt >= dwc_ep->pkt_cnt) {
54915 + is_last = 1;
54916 + dwc_ep->cur_pkt = 0;
54917 + dwc_ep->proc_buf_num =
54918 + (dwc_ep->proc_buf_num ^ 1) & 0x1;
54919 + if (dwc_ep->proc_buf_num) {
54920 + dwc_ep->cur_pkt_addr =
54921 + dwc_ep->xfer_buff1;
54922 + dwc_ep->cur_pkt_dma_addr =
54923 + dwc_ep->dma_addr1;
54924 + } else {
54925 + dwc_ep->cur_pkt_addr =
54926 + dwc_ep->xfer_buff0;
54927 + dwc_ep->cur_pkt_dma_addr =
54928 + dwc_ep->dma_addr0;
54929 + }
54930 +
54931 + }
54932 + dwc_otg_iso_ep_start_frm_transfer(core_if,
54933 + dwc_ep);
54934 + }
54935 + }
54936 + } else {
54937 + set_current_pkt_info(core_if, dwc_ep);
54938 + if (dwc_ep->cur_pkt >= dwc_ep->pkt_cnt) {
54939 + is_last = 1;
54940 + dwc_ep->cur_pkt = 0;
54941 + dwc_ep->proc_buf_num = (dwc_ep->proc_buf_num ^ 1) & 0x1;
54942 + if (dwc_ep->proc_buf_num) {
54943 + dwc_ep->cur_pkt_addr = dwc_ep->xfer_buff1;
54944 + dwc_ep->cur_pkt_dma_addr = dwc_ep->dma_addr1;
54945 + } else {
54946 + dwc_ep->cur_pkt_addr = dwc_ep->xfer_buff0;
54947 + dwc_ep->cur_pkt_dma_addr = dwc_ep->dma_addr0;
54948 + }
54949 +
54950 + }
54951 + dwc_otg_iso_ep_start_frm_transfer(core_if, dwc_ep);
54952 + }
54953 + if (is_last)
54954 + dwc_otg_iso_buffer_done(pcd, ep, ep->iso_req_handle);
54955 +}
54956 +#endif /* DWC_EN_ISOC */
54957 +
54958 +/**
54959 + * This function handles the BNA interrupt for non-Isochronous EPs
54960 + *
54961 + */
54962 +static void dwc_otg_pcd_handle_noniso_bna(dwc_otg_pcd_ep_t * ep)
54963 +{
54964 + dwc_ep_t *dwc_ep = &ep->dwc_ep;
54965 + volatile uint32_t *addr;
54966 + depctl_data_t depctl = {.d32 = 0 };
54967 + dwc_otg_pcd_t *pcd = ep->pcd;
54968 + dwc_otg_dev_dma_desc_t *dma_desc;
54969 + dev_dma_desc_sts_t sts = {.d32 = 0 };
54970 + dwc_otg_core_if_t *core_if = ep->pcd->core_if;
54971 + int i, start;
54972 +
54973 + if (!dwc_ep->desc_cnt)
54974 + DWC_WARN("Ep%d %s Descriptor count = %d \n", dwc_ep->num,
54975 + (dwc_ep->is_in ? "IN" : "OUT"), dwc_ep->desc_cnt);
54976 +
54977 + if (core_if->core_params->cont_on_bna && !dwc_ep->is_in
54978 + && dwc_ep->type != DWC_OTG_EP_TYPE_CONTROL) {
54979 + uint32_t doepdma;
54980 + dwc_otg_dev_out_ep_regs_t *out_regs =
54981 + core_if->dev_if->out_ep_regs[dwc_ep->num];
54982 + doepdma = DWC_READ_REG32(&(out_regs->doepdma));
54983 + start = (doepdma - dwc_ep->dma_desc_addr)/sizeof(dwc_otg_dev_dma_desc_t);
54984 + dma_desc = &(dwc_ep->desc_addr[start]);
54985 + } else {
54986 + start = 0;
54987 + dma_desc = dwc_ep->desc_addr;
54988 + }
54989 +
54990 +
54991 + for (i = start; i < dwc_ep->desc_cnt; ++i, ++dma_desc) {
54992 + sts.d32 = dma_desc->status.d32;
54993 + sts.b.bs = BS_HOST_READY;
54994 + dma_desc->status.d32 = sts.d32;
54995 + }
54996 +
54997 + if (dwc_ep->is_in == 0) {
54998 + addr =
54999 + &GET_CORE_IF(pcd)->dev_if->out_ep_regs[dwc_ep->num]->
55000 + doepctl;
55001 + } else {
55002 + addr =
55003 + &GET_CORE_IF(pcd)->dev_if->in_ep_regs[dwc_ep->num]->diepctl;
55004 + }
55005 + depctl.b.epena = 1;
55006 + depctl.b.cnak = 1;
55007 + DWC_MODIFY_REG32(addr, 0, depctl.d32);
55008 +}
55009 +
55010 +/**
55011 + * This function handles EP0 Control transfers.
55012 + *
55013 + * The state of the control transfers is tracked in
55014 + * <code>ep0state</code>.
55015 + */
55016 +static void handle_ep0(dwc_otg_pcd_t * pcd)
55017 +{
55018 + dwc_otg_core_if_t *core_if = GET_CORE_IF(pcd);
55019 + dwc_otg_pcd_ep_t *ep0 = &pcd->ep0;
55020 + dev_dma_desc_sts_t desc_sts;
55021 + deptsiz0_data_t deptsiz;
55022 + uint32_t byte_count;
55023 +
55024 +#ifdef DEBUG_EP0
55025 + DWC_DEBUGPL(DBG_PCDV, "%s()\n", __func__);
55026 + print_ep0_state(pcd);
55027 +#endif
55028 +
55029 +// DWC_PRINTF("HANDLE EP0\n");
55030 +
55031 + switch (pcd->ep0state) {
55032 + case EP0_DISCONNECT:
55033 + break;
55034 +
55035 + case EP0_IDLE:
55036 + pcd->request_config = 0;
55037 +
55038 + pcd_setup(pcd);
55039 + break;
55040 +
55041 + case EP0_IN_DATA_PHASE:
55042 +#ifdef DEBUG_EP0
55043 + DWC_DEBUGPL(DBG_PCD, "DATA_IN EP%d-%s: type=%d, mps=%d\n",
55044 + ep0->dwc_ep.num, (ep0->dwc_ep.is_in ? "IN" : "OUT"),
55045 + ep0->dwc_ep.type, ep0->dwc_ep.maxpacket);
55046 +#endif
55047 +
55048 + if (core_if->dma_enable != 0) {
55049 + /*
55050 + * For EP0 we can only program 1 packet at a time so we
55051 +		 * need to do the calculations after each complete.
55052 + * Call write_packet to make the calculations, as in
55053 + * slave mode, and use those values to determine if we
55054 + * can complete.
55055 + */
55056 + if (core_if->dma_desc_enable == 0) {
55057 + deptsiz.d32 =
55058 + DWC_READ_REG32(&core_if->
55059 + dev_if->in_ep_regs[0]->
55060 + dieptsiz);
55061 + byte_count =
55062 + ep0->dwc_ep.xfer_len - deptsiz.b.xfersize;
55063 + } else {
55064 + desc_sts =
55065 + core_if->dev_if->in_desc_addr->status;
55066 + byte_count =
55067 + ep0->dwc_ep.xfer_len - desc_sts.b.bytes;
55068 + }
55069 + ep0->dwc_ep.xfer_count += byte_count;
55070 + ep0->dwc_ep.xfer_buff += byte_count;
55071 + ep0->dwc_ep.dma_addr += byte_count;
55072 + }
55073 + if (ep0->dwc_ep.xfer_count < ep0->dwc_ep.total_len) {
55074 + dwc_otg_ep0_continue_transfer(GET_CORE_IF(pcd),
55075 + &ep0->dwc_ep);
55076 + DWC_DEBUGPL(DBG_PCD, "CONTINUE TRANSFER\n");
55077 + } else if (ep0->dwc_ep.sent_zlp) {
55078 + dwc_otg_ep0_continue_transfer(GET_CORE_IF(pcd),
55079 + &ep0->dwc_ep);
55080 + ep0->dwc_ep.sent_zlp = 0;
55081 + DWC_DEBUGPL(DBG_PCD, "CONTINUE TRANSFER sent zlp\n");
55082 + } else {
55083 + ep0_complete_request(ep0);
55084 + DWC_DEBUGPL(DBG_PCD, "COMPLETE TRANSFER\n");
55085 + }
55086 + break;
55087 + case EP0_OUT_DATA_PHASE:
55088 +#ifdef DEBUG_EP0
55089 + DWC_DEBUGPL(DBG_PCD, "DATA_OUT EP%d-%s: type=%d, mps=%d\n",
55090 + ep0->dwc_ep.num, (ep0->dwc_ep.is_in ? "IN" : "OUT"),
55091 + ep0->dwc_ep.type, ep0->dwc_ep.maxpacket);
55092 +#endif
55093 + if (core_if->dma_enable != 0) {
55094 + if (core_if->dma_desc_enable == 0) {
55095 + deptsiz.d32 =
55096 + DWC_READ_REG32(&core_if->
55097 + dev_if->out_ep_regs[0]->
55098 + doeptsiz);
55099 + byte_count =
55100 + ep0->dwc_ep.maxpacket - deptsiz.b.xfersize;
55101 + } else {
55102 + desc_sts =
55103 + core_if->dev_if->out_desc_addr->status;
55104 + byte_count =
55105 + ep0->dwc_ep.maxpacket - desc_sts.b.bytes;
55106 + }
55107 + ep0->dwc_ep.xfer_count += byte_count;
55108 + ep0->dwc_ep.xfer_buff += byte_count;
55109 + ep0->dwc_ep.dma_addr += byte_count;
55110 + }
55111 + if (ep0->dwc_ep.xfer_count < ep0->dwc_ep.total_len) {
55112 + dwc_otg_ep0_continue_transfer(GET_CORE_IF(pcd),
55113 + &ep0->dwc_ep);
55114 + DWC_DEBUGPL(DBG_PCD, "CONTINUE TRANSFER\n");
55115 + } else if (ep0->dwc_ep.sent_zlp) {
55116 + dwc_otg_ep0_continue_transfer(GET_CORE_IF(pcd),
55117 + &ep0->dwc_ep);
55118 + ep0->dwc_ep.sent_zlp = 0;
55119 + DWC_DEBUGPL(DBG_PCD, "CONTINUE TRANSFER sent zlp\n");
55120 + } else {
55121 + ep0_complete_request(ep0);
55122 + DWC_DEBUGPL(DBG_PCD, "COMPLETE TRANSFER\n");
55123 + }
55124 + break;
55125 +
55126 + case EP0_IN_STATUS_PHASE:
55127 + case EP0_OUT_STATUS_PHASE:
55128 + DWC_DEBUGPL(DBG_PCD, "CASE: EP0_STATUS\n");
55129 + ep0_complete_request(ep0);
55130 + pcd->ep0state = EP0_IDLE;
55131 + ep0->stopped = 1;
55132 + ep0->dwc_ep.is_in = 0; /* OUT for next SETUP */
55133 +
55134 + /* Prepare for more SETUP Packets */
55135 + if (core_if->dma_enable) {
55136 + ep0_out_start(core_if, pcd);
55137 + }
55138 + break;
55139 +
55140 + case EP0_STALL:
55141 + DWC_ERROR("EP0 STALLed, should not get here pcd_setup()\n");
55142 + break;
55143 + }
55144 +#ifdef DEBUG_EP0
55145 + print_ep0_state(pcd);
55146 +#endif
55147 +}
55148 +
55149 +/**
55150 + * Restart transfer
55151 + */
55152 +static void restart_transfer(dwc_otg_pcd_t * pcd, const uint32_t epnum)
55153 +{
55154 + dwc_otg_core_if_t *core_if;
55155 + dwc_otg_dev_if_t *dev_if;
55156 + deptsiz_data_t dieptsiz = {.d32 = 0 };
55157 + dwc_otg_pcd_ep_t *ep;
55158 +
55159 + ep = get_in_ep(pcd, epnum);
55160 +
55161 +#ifdef DWC_EN_ISOC
55162 + if (ep->dwc_ep.type == DWC_OTG_EP_TYPE_ISOC) {
55163 + return;
55164 + }
55165 +#endif /* DWC_EN_ISOC */
55166 +
55167 + core_if = GET_CORE_IF(pcd);
55168 + dev_if = core_if->dev_if;
55169 +
55170 + dieptsiz.d32 = DWC_READ_REG32(&dev_if->in_ep_regs[epnum]->dieptsiz);
55171 +
55172 + DWC_DEBUGPL(DBG_PCD, "xfer_buff=%p xfer_count=%0x xfer_len=%0x"
55173 + " stopped=%d\n", ep->dwc_ep.xfer_buff,
55174 + ep->dwc_ep.xfer_count, ep->dwc_ep.xfer_len, ep->stopped);
55175 + /*
55176 +	 * If xfersize is 0 and pktcnt is not 0, resend the last packet.
55177 + */
55178 + if (dieptsiz.b.pktcnt && dieptsiz.b.xfersize == 0 &&
55179 + ep->dwc_ep.start_xfer_buff != 0) {
55180 + if (ep->dwc_ep.total_len <= ep->dwc_ep.maxpacket) {
55181 + ep->dwc_ep.xfer_count = 0;
55182 + ep->dwc_ep.xfer_buff = ep->dwc_ep.start_xfer_buff;
55183 + ep->dwc_ep.xfer_len = ep->dwc_ep.xfer_count;
55184 + } else {
55185 + ep->dwc_ep.xfer_count -= ep->dwc_ep.maxpacket;
55186 +			/* Rewind the buffer pointer by one max packet. */
55187 + ep->dwc_ep.xfer_buff -= ep->dwc_ep.maxpacket;
55188 + ep->dwc_ep.xfer_len = ep->dwc_ep.xfer_count;
55189 + }
55190 + ep->stopped = 0;
55191 + DWC_DEBUGPL(DBG_PCD, "xfer_buff=%p xfer_count=%0x "
55192 + "xfer_len=%0x stopped=%d\n",
55193 + ep->dwc_ep.xfer_buff,
55194 + ep->dwc_ep.xfer_count, ep->dwc_ep.xfer_len,
55195 + ep->stopped);
55196 + if (epnum == 0) {
55197 + dwc_otg_ep0_start_transfer(core_if, &ep->dwc_ep);
55198 + } else {
55199 + dwc_otg_ep_start_transfer(core_if, &ep->dwc_ep);
55200 + }
55201 + }
55202 +}
55203 +
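
restart_transfer above resubmits the last packet by stepping the completed-byte count and the buffer pointer back by one max packet when the core still owes packets but xfersize already reached zero. A toy sketch of that rewind on an ordinary buffer follows; the names and the printf harness are illustrative, not driver code.

    #include <stdint.h>
    #include <stdio.h>

    /* Step the transfer state back by one max packet so the same data can be
     * handed to the core again. */
    static void rewind_last_packet(uint32_t *xfer_count, const uint8_t **xfer_buff,
                                   uint32_t maxpacket)
    {
        *xfer_count -= maxpacket;
        *xfer_buff  -= maxpacket;
    }

    int main(void)
    {
        static const uint8_t buf[256];
        const uint8_t *p = buf + 128;   /* 128 bytes already pushed towards the FIFO */
        uint32_t count = 128;

        rewind_last_packet(&count, &p, 64);
        printf("resend from offset %ld, count %u\n", (long)(p - buf), count);
        return 0;
    }
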
55204 +/*
55205 + * This function creates a new nextep sequence based on the Learning Queue.
55206 + *
55207 + * @param core_if Programming view of DWC_otg controller
55208 + */
55209 +void predict_nextep_seq( dwc_otg_core_if_t * core_if)
55210 +{
55211 + dwc_otg_device_global_regs_t *dev_global_regs =
55212 + core_if->dev_if->dev_global_regs;
55213 + const uint32_t TOKEN_Q_DEPTH = core_if->hwcfg2.b.dev_token_q_depth;
55214 + /* Number of Token Queue Registers */
55215 + const int DTKNQ_REG_CNT = (TOKEN_Q_DEPTH + 7) / 8;
55216 + dtknq1_data_t dtknqr1;
55217 + uint32_t in_tkn_epnums[4];
55218 + uint8_t seqnum[MAX_EPS_CHANNELS];
55219 + uint8_t intkn_seq[TOKEN_Q_DEPTH];
55220 + grstctl_t resetctl = {.d32 = 0 };
55221 + uint8_t temp;
55222 + int ndx = 0;
55223 + int start = 0;
55224 + int end = 0;
55225 + int sort_done = 0;
55226 + int i = 0;
55227 + volatile uint32_t *addr = &dev_global_regs->dtknqr1;
55228 +
55229 +
55230 + DWC_DEBUGPL(DBG_PCD,"dev_token_q_depth=%d\n",TOKEN_Q_DEPTH);
55231 +
55232 + /* Read the DTKNQ Registers */
55233 + for (i = 0; i < DTKNQ_REG_CNT; i++) {
55234 + in_tkn_epnums[i] = DWC_READ_REG32(addr);
55235 + DWC_DEBUGPL(DBG_PCDV, "DTKNQR%d=0x%08x\n", i + 1,
55236 + in_tkn_epnums[i]);
55237 + if (addr == &dev_global_regs->dvbusdis) {
55238 + addr = &dev_global_regs->dtknqr3_dthrctl;
55239 + } else {
55240 + ++addr;
55241 + }
55242 +
55243 + }
55244 +
55245 + /* Copy the DTKNQR1 data to the bit field. */
55246 + dtknqr1.d32 = in_tkn_epnums[0];
55247 + if (dtknqr1.b.wrap_bit) {
55248 + ndx = dtknqr1.b.intknwptr;
55249 + end = ndx -1;
55250 + if (end < 0)
55251 + end = TOKEN_Q_DEPTH -1;
55252 + } else {
55253 + ndx = 0;
55254 + end = dtknqr1.b.intknwptr -1;
55255 + if (end < 0)
55256 + end = 0;
55257 + }
55258 + start = ndx;
55259 +
55260 + /* Fill seqnum[] by initial values: EP number + 31 */
55261 + for (i=0; i <= core_if->dev_if->num_in_eps; i++) {
55262 + seqnum[i] = i +31;
55263 + }
55264 +
55265 + /* Fill intkn_seq[] from in_tkn_epnums[0] */
55266 + for (i=0; i < 6; i++)
55267 + intkn_seq[i] = (in_tkn_epnums[0] >> ((7-i) * 4)) & 0xf;
55268 +
55269 + if (TOKEN_Q_DEPTH > 6) {
55270 + /* Fill intkn_seq[] from in_tkn_epnums[1] */
55271 + for (i=6; i < 14; i++)
55272 + intkn_seq[i] =
55273 + (in_tkn_epnums[1] >> ((7 - (i - 6)) * 4)) & 0xf;
55274 + }
55275 +
55276 + if (TOKEN_Q_DEPTH > 14) {
55277 +		/* Fill intkn_seq[] from in_tkn_epnums[2] */
55278 + for (i=14; i < 22; i++)
55279 + intkn_seq[i] =
55280 + (in_tkn_epnums[2] >> ((7 - (i - 14)) * 4)) & 0xf;
55281 + }
55282 +
55283 + if (TOKEN_Q_DEPTH > 22) {
55284 +		/* Fill intkn_seq[] from in_tkn_epnums[3] */
55285 + for (i=22; i < 30; i++)
55286 + intkn_seq[i] =
55287 + (in_tkn_epnums[3] >> ((7 - (i - 22)) * 4)) & 0xf;
55288 + }
55289 +
55290 + DWC_DEBUGPL(DBG_PCDV, "%s start=%d end=%d intkn_seq[]:\n", __func__,
55291 + start, end);
55292 + for (i=0; i<TOKEN_Q_DEPTH; i++)
55293 + DWC_DEBUGPL(DBG_PCDV,"%d\n", intkn_seq[i]);
55294 +
55295 + /* Update seqnum based on intkn_seq[] */
55296 + i = 0;
55297 + do {
55298 + seqnum[intkn_seq[ndx]] = i;
55299 + ndx++;
55300 + i++;
55301 + if (ndx == TOKEN_Q_DEPTH)
55302 + ndx = 0;
55303 + } while ( i < TOKEN_Q_DEPTH );
55304 +
55305 + /* Mark non active EP's in seqnum[] by 0xff */
55306 + for (i=0; i<=core_if->dev_if->num_in_eps; i++) {
55307 + if (core_if->nextep_seq[i] == 0xff )
55308 + seqnum[i] = 0xff;
55309 + }
55310 +
55311 + /* Sort seqnum[] */
55312 + sort_done = 0;
55313 + while (!sort_done) {
55314 + sort_done = 1;
55315 + for (i=0; i<core_if->dev_if->num_in_eps; i++) {
55316 + if (seqnum[i] > seqnum[i+1]) {
55317 + temp = seqnum[i];
55318 + seqnum[i] = seqnum[i+1];
55319 + seqnum[i+1] = temp;
55320 + sort_done = 0;
55321 + }
55322 + }
55323 + }
55324 +
55325 + ndx = start + seqnum[0];
55326 + if (ndx >= TOKEN_Q_DEPTH)
55327 + ndx = ndx % TOKEN_Q_DEPTH;
55328 + core_if->first_in_nextep_seq = intkn_seq[ndx];
55329 +
55330 + /* Update seqnum[] by EP numbers */
55331 + for (i=0; i<=core_if->dev_if->num_in_eps; i++) {
55332 + ndx = start + i;
55333 + if (seqnum[i] < 31) {
55334 + ndx = start + seqnum[i];
55335 + if (ndx >= TOKEN_Q_DEPTH)
55336 + ndx = ndx % TOKEN_Q_DEPTH;
55337 + seqnum[i] = intkn_seq[ndx];
55338 + } else {
55339 + if (seqnum[i] < 0xff) {
55340 + seqnum[i] = seqnum[i] - 31;
55341 + } else {
55342 + break;
55343 + }
55344 + }
55345 + }
55346 +
55347 + /* Update nextep_seq[] based on seqnum[] */
55348 + for (i=0; i<core_if->dev_if->num_in_eps; i++) {
55349 + if (seqnum[i] != 0xff) {
55350 + if (seqnum[i+1] != 0xff) {
55351 + core_if->nextep_seq[seqnum[i]] = seqnum[i+1];
55352 + } else {
55353 + core_if->nextep_seq[seqnum[i]] = core_if->first_in_nextep_seq;
55354 + break;
55355 + }
55356 + } else {
55357 + break;
55358 + }
55359 + }
55360 +
55361 + DWC_DEBUGPL(DBG_PCDV, "%s first_in_nextep_seq= %2d; nextep_seq[]:\n",
55362 + __func__, core_if->first_in_nextep_seq);
55363 + for (i=0; i <= core_if->dev_if->num_in_eps; i++) {
55364 + DWC_DEBUGPL(DBG_PCDV,"%2d\n", core_if->nextep_seq[i]);
55365 + }
55366 +
55367 + /* Flush the Learning Queue */
55368 + resetctl.d32 = DWC_READ_REG32(&core_if->core_global_regs->grstctl);
55369 + resetctl.b.intknqflsh = 1;
55370 + DWC_WRITE_REG32(&core_if->core_global_regs->grstctl, resetctl.d32);
55371 +
55372 +
55373 +}
55374 +
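
predict_nextep_seq above rebuilds the IN endpoint chaining from a snapshot of the core's token Learning Queue. The self-contained sketch below captures only the core idea: walk the circular queue from its oldest entry and record, for each endpoint, which endpoint the host polled next. The real routine additionally sorts by first appearance, skips inactive endpoints and flushes the queue; Q_DEPTH, N_EPS and predict_chain are invented for this example.

    #include <stdint.h>
    #include <stdio.h>

    #define Q_DEPTH 8   /* token queue depth, assumed for the example    */
    #define N_EPS   4   /* IN endpoints considered, assumed for example  */

    /* Walk the circular token queue starting at its oldest entry and remember,
     * for every endpoint seen, the endpoint that followed it. */
    static void predict_chain(const uint8_t q[Q_DEPTH], int start, uint8_t next[N_EPS])
    {
        int i;

        for (i = 0; i < N_EPS; i++)
            next[i] = 0xff;             /* 0xff = endpoint not in the chain */

        for (i = 0; i < Q_DEPTH; i++) {
            uint8_t cur = q[(start + i) % Q_DEPTH];
            uint8_t nxt = q[(start + i + 1) % Q_DEPTH];

            if (cur < N_EPS && nxt < N_EPS && next[cur] == 0xff)
                next[cur] = nxt;        /* keep the first observed successor */
        }
    }

    int main(void)
    {
        const uint8_t q[Q_DEPTH] = { 1, 2, 3, 1, 2, 3, 1, 2 };
        uint8_t next[N_EPS];
        int i;

        predict_chain(q, 0, next);
        for (i = 0; i < N_EPS; i++)
            if (next[i] != 0xff)
                printf("after EP%d the host polls EP%u\n", i, next[i]);
        return 0;
    }
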
55375 +/**
55376 + * handle the IN EP disable interrupt.
55377 + */
55378 +static inline void handle_in_ep_disable_intr(dwc_otg_pcd_t * pcd,
55379 + const uint32_t epnum)
55380 +{
55381 + dwc_otg_core_if_t *core_if = GET_CORE_IF(pcd);
55382 + dwc_otg_dev_if_t *dev_if = core_if->dev_if;
55383 + deptsiz_data_t dieptsiz = {.d32 = 0 };
55384 + dctl_data_t dctl = {.d32 = 0 };
55385 + dwc_otg_pcd_ep_t *ep;
55386 + dwc_ep_t *dwc_ep;
55387 + gintmsk_data_t gintmsk_data;
55388 + depctl_data_t depctl;
55389 + uint32_t diepdma;
55390 + uint32_t remain_to_transfer = 0;
55391 + uint8_t i;
55392 + uint32_t xfer_size;
55393 +
55394 + ep = get_in_ep(pcd, epnum);
55395 + dwc_ep = &ep->dwc_ep;
55396 +
55397 + if (dwc_ep->type == DWC_OTG_EP_TYPE_ISOC) {
55398 + dwc_otg_flush_tx_fifo(core_if, dwc_ep->tx_fifo_num);
55399 + complete_ep(ep);
55400 + return;
55401 + }
55402 +
55403 + DWC_DEBUGPL(DBG_PCD, "diepctl%d=%0x\n", epnum,
55404 + DWC_READ_REG32(&dev_if->in_ep_regs[epnum]->diepctl));
55405 + dieptsiz.d32 = DWC_READ_REG32(&dev_if->in_ep_regs[epnum]->dieptsiz);
55406 + depctl.d32 = DWC_READ_REG32(&dev_if->in_ep_regs[epnum]->diepctl);
55407 +
55408 + DWC_DEBUGPL(DBG_ANY, "pktcnt=%d size=%d\n",
55409 + dieptsiz.b.pktcnt, dieptsiz.b.xfersize);
55410 +
55411 + if ((core_if->start_predict == 0) || (depctl.b.eptype & 1)) {
55412 + if (ep->stopped) {
55413 + if (core_if->en_multiple_tx_fifo)
55414 + /* Flush the Tx FIFO */
55415 + dwc_otg_flush_tx_fifo(core_if, dwc_ep->tx_fifo_num);
55416 + /* Clear the Global IN NP NAK */
55417 + dctl.d32 = 0;
55418 + dctl.b.cgnpinnak = 1;
55419 + DWC_MODIFY_REG32(&dev_if->dev_global_regs->dctl, dctl.d32, dctl.d32);
55420 + /* Restart the transaction */
55421 + if (dieptsiz.b.pktcnt != 0 || dieptsiz.b.xfersize != 0) {
55422 + restart_transfer(pcd, epnum);
55423 + }
55424 + } else {
55425 + /* Restart the transaction */
55426 + if (dieptsiz.b.pktcnt != 0 || dieptsiz.b.xfersize != 0) {
55427 + restart_transfer(pcd, epnum);
55428 + }
55429 + DWC_DEBUGPL(DBG_ANY, "STOPPED!!!\n");
55430 + }
55431 + return;
55432 + }
55433 +
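+	/*
+	 * Note: start_predict appears to count the non-periodic IN EPs that
+	 * still have to report "endpoint disabled" after a global IN NAK (it
+	 * is incremented per EP in dwc_otg_pcd_handle_in_nak_effective()
+	 * below); once only this EP remains, the learning-queue based
+	 * re-prediction below can run.
+	 */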
55434 + if (core_if->start_predict > 2) { // NP IN EP
55435 + core_if->start_predict--;
55436 + return;
55437 + }
55438 +
55439 + core_if->start_predict--;
55440 +
55441 + if (core_if->start_predict == 1) { // All NP IN Ep's disabled now
55442 +
55443 + predict_nextep_seq(core_if);
55444 +
55445 + /* Update all active IN EPs' NextEP fields based on nextep_seq[] */
55446 + for ( i = 0; i <= core_if->dev_if->num_in_eps; i++) {
55447 + depctl.d32 =
55448 + DWC_READ_REG32(&dev_if->in_ep_regs[i]->diepctl);
55449 + if (core_if->nextep_seq[i] != 0xff) { // Active NP IN EP
55450 + depctl.b.nextep = core_if->nextep_seq[i];
55451 + DWC_WRITE_REG32(&dev_if->in_ep_regs[i]->diepctl, depctl.d32);
55452 + }
55453 + }
55454 + /* Flush Shared NP TxFIFO */
55455 + dwc_otg_flush_tx_fifo(core_if, 0);
55456 + /* Rewind buffers */
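+ /*
+  * For each EP in the predicted sequence, the loop below works out how
+  * much of the programmed transfer is still outstanding (from the
+  * remaining pktcnt and the EP's maxpacket), writes that back as the new
+  * xfersize and points DIEPDMA at the first unsent byte, so the transfer
+  * can be restarted from where it left off.
+  */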
55457 + if (!core_if->dma_desc_enable) {
55458 + i = core_if->first_in_nextep_seq;
55459 + do {
55460 + ep = get_in_ep(pcd, i);
55461 + dieptsiz.d32 = DWC_READ_REG32(&dev_if->in_ep_regs[i]->dieptsiz);
55462 + xfer_size = ep->dwc_ep.total_len - ep->dwc_ep.xfer_count;
55463 + if (xfer_size > ep->dwc_ep.maxxfer)
55464 + xfer_size = ep->dwc_ep.maxxfer;
55465 + depctl.d32 = DWC_READ_REG32(&dev_if->in_ep_regs[i]->diepctl);
55466 + if (dieptsiz.b.pktcnt != 0) {
55467 + if (xfer_size == 0) {
55468 + remain_to_transfer = 0;
55469 + } else {
55470 + if ((xfer_size % ep->dwc_ep.maxpacket) == 0) {
55471 + remain_to_transfer =
55472 + dieptsiz.b.pktcnt * ep->dwc_ep.maxpacket;
55473 + } else {
55474 + remain_to_transfer = ((dieptsiz.b.pktcnt -1) * ep->dwc_ep.maxpacket)
55475 + + (xfer_size % ep->dwc_ep.maxpacket);
55476 + }
55477 + }
55478 + diepdma = DWC_READ_REG32(&dev_if->in_ep_regs[i]->diepdma);
55479 + dieptsiz.b.xfersize = remain_to_transfer;
55480 + DWC_WRITE_REG32(&dev_if->in_ep_regs[i]->dieptsiz, dieptsiz.d32);
55481 + diepdma = ep->dwc_ep.dma_addr + (xfer_size - remain_to_transfer);
55482 + DWC_WRITE_REG32(&dev_if->in_ep_regs[i]->diepdma, diepdma);
55483 + }
55484 + i = core_if->nextep_seq[i];
55485 + } while (i != core_if->first_in_nextep_seq);
55486 + } else { // dma_desc_enable
55487 + DWC_PRINTF("%s Learning Queue not supported in DDMA\n", __func__);
55488 + }
55489 +
55490 + /* Restart transfers in the predicted sequence */
55491 + i = core_if->first_in_nextep_seq;
55492 + do {
55493 + dieptsiz.d32 = DWC_READ_REG32(&dev_if->in_ep_regs[i]->dieptsiz);
55494 + depctl.d32 = DWC_READ_REG32(&dev_if->in_ep_regs[i]->diepctl);
55495 + if (dieptsiz.b.pktcnt != 0) {
55496 + depctl.d32 = DWC_READ_REG32(&dev_if->in_ep_regs[i]->diepctl);
55497 + depctl.b.epena = 1;
55498 + depctl.b.cnak = 1;
55499 + DWC_WRITE_REG32(&dev_if->in_ep_regs[i]->diepctl, depctl.d32);
55500 + }
55501 + i = core_if->nextep_seq[i];
55502 + } while (i != core_if->first_in_nextep_seq);
55503 +
55504 + /* Clear the global non-periodic IN NAK handshake */
55505 + dctl.d32 = 0;
55506 + dctl.b.cgnpinnak = 1;
55507 + DWC_MODIFY_REG32(&dev_if->dev_global_regs->dctl, dctl.d32, dctl.d32);
55508 +
55509 + /* Unmask EP Mismatch interrupt */
55510 + gintmsk_data.d32 = 0;
55511 + gintmsk_data.b.epmismatch = 1;
55512 + DWC_MODIFY_REG32(&core_if->core_global_regs->gintmsk, 0, gintmsk_data.d32);
55513 +
55514 + core_if->start_predict = 0;
55515 +
55516 + }
55517 +}
55518 +
55519 +/**
55520 + * Handler for the IN EP timeout handshake interrupt.
55521 + */
55522 +static inline void handle_in_ep_timeout_intr(dwc_otg_pcd_t * pcd,
55523 + const uint32_t epnum)
55524 +{
55525 + dwc_otg_core_if_t *core_if = GET_CORE_IF(pcd);
55526 + dwc_otg_dev_if_t *dev_if = core_if->dev_if;
55527 +
55528 +#ifdef DEBUG
55529 + deptsiz_data_t dieptsiz = {.d32 = 0 };
55530 + uint32_t num = 0;
55531 +#endif
55532 + dctl_data_t dctl = {.d32 = 0 };
55533 + dwc_otg_pcd_ep_t *ep;
55534 +
55535 + gintmsk_data_t intr_mask = {.d32 = 0 };
55536 +
55537 + ep = get_in_ep(pcd, epnum);
55538 +
55539 + /* Disable the NP Tx FIFO Empty Interrupt */
55540 + if (!core_if->dma_enable) {
55541 + intr_mask.b.nptxfempty = 1;
55542 + DWC_MODIFY_REG32(&core_if->core_global_regs->gintmsk,
55543 + intr_mask.d32, 0);
55544 + }
55545 + /** @todo NGS Check EP type.
55546 + * Implement for Periodic EPs */
55547 + /*
55548 + * Non-periodic EP
55549 + */
55550 + /* Enable the Global IN NAK Effective Interrupt */
55551 + intr_mask.b.ginnakeff = 1;
55552 + DWC_MODIFY_REG32(&core_if->core_global_regs->gintmsk, 0, intr_mask.d32);
55553 +
55554 + /* Set Global IN NAK */
55555 + dctl.b.sgnpinnak = 1;
55556 + DWC_MODIFY_REG32(&dev_if->dev_global_regs->dctl, dctl.d32, dctl.d32);
55557 +
55558 + ep->stopped = 1;
55559 +
55560 +#ifdef DEBUG
55561 + dieptsiz.d32 = DWC_READ_REG32(&dev_if->in_ep_regs[num]->dieptsiz);
55562 + DWC_DEBUGPL(DBG_ANY, "pktcnt=%d size=%d\n",
55563 + dieptsiz.b.pktcnt, dieptsiz.b.xfersize);
55564 +#endif
55565 +
55566 +#ifdef DISABLE_PERIODIC_EP
55567 + /*
55568 + * Set the NAK bit for this EP to
55569 + * start the disable process.
55570 + */
55571 + diepctl.d32 = 0;
55572 + diepctl.b.snak = 1;
55573 + DWC_MODIFY_REG32(&dev_if->in_ep_regs[num]->diepctl, diepctl.d32,
55574 + diepctl.d32);
55575 + ep->disabling = 1;
55576 + ep->stopped = 1;
55577 +#endif
55578 +}
55579 +
55580 +/**
55581 + * Handler for the IN EP NAK interrupt.
55582 + */
55583 +static inline int32_t handle_in_ep_nak_intr(dwc_otg_pcd_t * pcd,
55584 + const uint32_t epnum)
55585 +{
55586 + /** @todo implement ISR */
55587 + dwc_otg_core_if_t *core_if;
55588 + diepmsk_data_t intr_mask = {.d32 = 0 };
55589 +
55590 + DWC_PRINTF("INTERRUPT Handler not implemented for %s\n", "IN EP NAK");
55591 + core_if = GET_CORE_IF(pcd);
55592 + intr_mask.b.nak = 1;
55593 +
55594 + if (core_if->multiproc_int_enable) {
55595 + DWC_MODIFY_REG32(&core_if->dev_if->dev_global_regs->
55596 + diepeachintmsk[epnum], intr_mask.d32, 0);
55597 + } else {
55598 + DWC_MODIFY_REG32(&core_if->dev_if->dev_global_regs->diepmsk,
55599 + intr_mask.d32, 0);
55600 + }
55601 +
55602 + return 1;
55603 +}
55604 +
55605 +/**
55606 + * Handler for the OUT EP Babble interrupt.
55607 + */
55608 +static inline int32_t handle_out_ep_babble_intr(dwc_otg_pcd_t * pcd,
55609 + const uint32_t epnum)
55610 +{
55611 + /** @todo implement ISR */
55612 + dwc_otg_core_if_t *core_if;
55613 + doepmsk_data_t intr_mask = {.d32 = 0 };
55614 +
55615 + DWC_PRINTF("INTERRUPT Handler not implemented for %s\n",
55616 + "OUT EP Babble");
55617 + core_if = GET_CORE_IF(pcd);
55618 + intr_mask.b.babble = 1;
55619 +
55620 + if (core_if->multiproc_int_enable) {
55621 + DWC_MODIFY_REG32(&core_if->dev_if->dev_global_regs->
55622 + doepeachintmsk[epnum], intr_mask.d32, 0);
55623 + } else {
55624 + DWC_MODIFY_REG32(&core_if->dev_if->dev_global_regs->doepmsk,
55625 + intr_mask.d32, 0);
55626 + }
55627 +
55628 + return 1;
55629 +}
55630 +
55631 +/**
55632 + * Handler for the OUT EP NAK interrupt.
55633 + */
55634 +static inline int32_t handle_out_ep_nak_intr(dwc_otg_pcd_t * pcd,
55635 + const uint32_t epnum)
55636 +{
55637 + /** @todo implement ISR */
55638 + dwc_otg_core_if_t *core_if;
55639 + doepmsk_data_t intr_mask = {.d32 = 0 };
55640 +
55641 + DWC_DEBUGPL(DBG_ANY, "INTERRUPT Handler not implemented for %s\n", "OUT EP NAK");
55642 + core_if = GET_CORE_IF(pcd);
55643 + intr_mask.b.nak = 1;
55644 +
55645 + if (core_if->multiproc_int_enable) {
55646 + DWC_MODIFY_REG32(&core_if->dev_if->dev_global_regs->
55647 + doepeachintmsk[epnum], intr_mask.d32, 0);
55648 + } else {
55649 + DWC_MODIFY_REG32(&core_if->dev_if->dev_global_regs->doepmsk,
55650 + intr_mask.d32, 0);
55651 + }
55652 +
55653 + return 1;
55654 +}
55655 +
55656 +/**
55657 + * Handler for the OUT EP NYET interrupt.
55658 + */
55659 +static inline int32_t handle_out_ep_nyet_intr(dwc_otg_pcd_t * pcd,
55660 + const uint32_t epnum)
55661 +{
55662 + /** @todo implement ISR */
55663 + dwc_otg_core_if_t *core_if;
55664 + doepmsk_data_t intr_mask = {.d32 = 0 };
55665 +
55666 + DWC_PRINTF("INTERRUPT Handler not implemented for %s\n", "OUT EP NYET");
55667 + core_if = GET_CORE_IF(pcd);
55668 + intr_mask.b.nyet = 1;
55669 +
55670 + if (core_if->multiproc_int_enable) {
55671 + DWC_MODIFY_REG32(&core_if->dev_if->dev_global_regs->
55672 + doepeachintmsk[epnum], intr_mask.d32, 0);
55673 + } else {
55674 + DWC_MODIFY_REG32(&core_if->dev_if->dev_global_regs->doepmsk,
55675 + intr_mask.d32, 0);
55676 + }
55677 +
55678 + return 1;
55679 +}
55680 +
55681 +/**
55682 + * This interrupt indicates that an IN EP has a pending Interrupt.
55683 + * The sequence for handling the IN EP interrupt is shown below:
55684 + * -# Read the Device All Endpoint Interrupt register
55685 + * -# Repeat the following for each IN EP interrupt bit set (from
55686 + * LSB to MSB).
55687 + * -# Read the Device Endpoint Interrupt (DIEPINTn) register
55688 + * -# If "Transfer Complete" call the request complete function
55689 + * -# If "Endpoint Disabled" complete the EP disable procedure.
55690 + * -# If "AHB Error Interrupt" log error
55691 + * -# If "Time-out Handshake" log error
55692 + * -# If "IN Token Received when TxFIFO Empty" write packet to Tx
55693 + * FIFO.
55694 + * -# If "IN Token EP Mismatch" (disable, this is handled by EP
55695 + * Mismatch Interrupt)
55696 + */
55697 +static int32_t dwc_otg_pcd_handle_in_ep_intr(dwc_otg_pcd_t * pcd)
55698 +{
55699 +#define CLEAR_IN_EP_INTR(__core_if,__epnum,__intr) \
55700 +do { \
55701 + diepint_data_t diepint = {.d32=0}; \
55702 + diepint.b.__intr = 1; \
55703 + DWC_WRITE_REG32(&__core_if->dev_if->in_ep_regs[__epnum]->diepint, \
55704 + diepint.d32); \
55705 +} while (0)
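+/*
+ * CLEAR_IN_EP_INTR() above is a write-one-to-clear helper for DIEPINTn.
+ * As a sketch, CLEAR_IN_EP_INTR(core_if, 1, xfercompl) expands to:
+ *
+ *	diepint_data_t diepint = {.d32 = 0};
+ *	diepint.b.xfercompl = 1;
+ *	DWC_WRITE_REG32(&core_if->dev_if->in_ep_regs[1]->diepint, diepint.d32);
+ */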
55706 +
55707 + dwc_otg_core_if_t *core_if = GET_CORE_IF(pcd);
55708 + dwc_otg_dev_if_t *dev_if = core_if->dev_if;
55709 + diepint_data_t diepint = {.d32 = 0 };
55710 + depctl_data_t depctl = {.d32 = 0 };
55711 + uint32_t ep_intr;
55712 + uint32_t epnum = 0;
55713 + dwc_otg_pcd_ep_t *ep;
55714 + dwc_ep_t *dwc_ep;
55715 + gintmsk_data_t intr_mask = {.d32 = 0 };
55716 +
55717 + DWC_DEBUGPL(DBG_PCDV, "%s(%p)\n", __func__, pcd);
55718 +
55719 + /* Read in the device interrupt bits */
55720 + ep_intr = dwc_otg_read_dev_all_in_ep_intr(core_if);
55721 +
55722 + /* Service the Device IN interrupts for each endpoint */
55723 + while (ep_intr) {
55724 + if (ep_intr & 0x1) {
55725 + uint32_t empty_msk;
55726 + /* Get EP pointer */
55727 + ep = get_in_ep(pcd, epnum);
55728 + dwc_ep = &ep->dwc_ep;
55729 +
55730 + depctl.d32 =
55731 + DWC_READ_REG32(&dev_if->in_ep_regs[epnum]->diepctl);
55732 + empty_msk =
55733 + DWC_READ_REG32(&dev_if->
55734 + dev_global_regs->dtknqr4_fifoemptymsk);
55735 +
55736 + DWC_DEBUGPL(DBG_PCDV,
55737 + "IN EP INTERRUPT - %d\nempty_msk - %8x diepctl - %8x\n",
55738 + epnum, empty_msk, depctl.d32);
55739 +
55740 + DWC_DEBUGPL(DBG_PCD,
55741 + "EP%d-%s: type=%d, mps=%d\n",
55742 + dwc_ep->num, (dwc_ep->is_in ? "IN" : "OUT"),
55743 + dwc_ep->type, dwc_ep->maxpacket);
55744 +
55745 + diepint.d32 =
55746 + dwc_otg_read_dev_in_ep_intr(core_if, dwc_ep);
55747 +
55748 + DWC_DEBUGPL(DBG_PCDV,
55749 + "EP %d Interrupt Register - 0x%x\n", epnum,
55750 + diepint.d32);
55751 + /* Transfer complete */
55752 + if (diepint.b.xfercompl) {
55753 + /* Disable the NP Tx FIFO Empty
55754 + * Interrupt */
55755 + if (core_if->en_multiple_tx_fifo == 0) {
55756 + intr_mask.b.nptxfempty = 1;
55757 + DWC_MODIFY_REG32
55758 + (&core_if->core_global_regs->gintmsk,
55759 + intr_mask.d32, 0);
55760 + } else {
55761 + /* Disable the Tx FIFO Empty Interrupt for this EP */
55762 + uint32_t fifoemptymsk =
55763 + 0x1 << dwc_ep->num;
55764 + DWC_MODIFY_REG32(&core_if->
55765 + dev_if->dev_global_regs->dtknqr4_fifoemptymsk,
55766 + fifoemptymsk, 0);
55767 + }
55768 + /* Clear the bit in DIEPINTn for this interrupt */
55769 + CLEAR_IN_EP_INTR(core_if, epnum, xfercompl);
55770 +
55771 + /* Complete the transfer */
55772 + if (epnum == 0) {
55773 + handle_ep0(pcd);
55774 + }
55775 +#ifdef DWC_EN_ISOC
55776 + else if (dwc_ep->type == DWC_OTG_EP_TYPE_ISOC) {
55777 + if (!ep->stopped)
55778 + complete_iso_ep(pcd, ep);
55779 + }
55780 +#endif /* DWC_EN_ISOC */
55781 +#ifdef DWC_UTE_PER_IO
55782 + else if (dwc_ep->type == DWC_OTG_EP_TYPE_ISOC) {
55783 + if (!ep->stopped)
55784 + complete_xiso_ep(ep);
55785 + }
55786 +#endif /* DWC_UTE_PER_IO */
55787 + else {
55788 + if (dwc_ep->type == DWC_OTG_EP_TYPE_ISOC &&
55789 + dwc_ep->bInterval > 1) {
55790 + dwc_ep->frame_num += dwc_ep->bInterval;
55791 + if (dwc_ep->frame_num > 0x3FFF)
55792 + {
55793 + dwc_ep->frm_overrun = 1;
55794 + dwc_ep->frame_num &= 0x3FFF;
55795 + } else
55796 + dwc_ep->frm_overrun = 0;
55797 + }
55798 + complete_ep(ep);
55799 + if(diepint.b.nak)
55800 + CLEAR_IN_EP_INTR(core_if, epnum, nak);
55801 + }
55802 + }
55803 + /* Endpoint disable */
55804 + if (diepint.b.epdisabled) {
55805 + DWC_DEBUGPL(DBG_ANY, "EP%d IN disabled\n",
55806 + epnum);
55807 + handle_in_ep_disable_intr(pcd, epnum);
55808 +
55809 + /* Clear the bit in DIEPINTn for this interrupt */
55810 + CLEAR_IN_EP_INTR(core_if, epnum, epdisabled);
55811 + }
55812 + /* AHB Error */
55813 + if (diepint.b.ahberr) {
55814 + DWC_ERROR("EP%d IN AHB Error\n", epnum);
55815 + /* Clear the bit in DIEPINTn for this interrupt */
55816 + CLEAR_IN_EP_INTR(core_if, epnum, ahberr);
55817 + }
55818 + /* Time-out Handshake (non-ISOC IN EPs) */
55819 + if (diepint.b.timeout) {
55820 + DWC_ERROR("EP%d IN Time-out\n", epnum);
55821 + handle_in_ep_timeout_intr(pcd, epnum);
55822 +
55823 + CLEAR_IN_EP_INTR(core_if, epnum, timeout);
55824 + }
55825 + /** IN Token received with TxF Empty */
55826 + if (diepint.b.intktxfemp) {
55827 + DWC_DEBUGPL(DBG_ANY,
55828 + "EP%d IN TKN TxFifo Empty\n",
55829 + epnum);
55830 + if (!ep->stopped && epnum != 0) {
55831 +
55832 + diepmsk_data_t diepmsk = {.d32 = 0 };
55833 + diepmsk.b.intktxfemp = 1;
55834 +
55835 + if (core_if->multiproc_int_enable) {
55836 + DWC_MODIFY_REG32
55837 + (&dev_if->dev_global_regs->diepeachintmsk
55838 + [epnum], diepmsk.d32, 0);
55839 + } else {
55840 + DWC_MODIFY_REG32
55841 + (&dev_if->dev_global_regs->diepmsk,
55842 + diepmsk.d32, 0);
55843 + }
55844 + } else if (core_if->dma_desc_enable
55845 + && epnum == 0
55846 + && pcd->ep0state ==
55847 + EP0_OUT_STATUS_PHASE) {
55848 + // EP0 IN set STALL
55849 + depctl.d32 =
55850 + DWC_READ_REG32(&dev_if->in_ep_regs
55851 + [epnum]->diepctl);
55852 +
55853 + /* set the disable and stall bits */
55854 + if (depctl.b.epena) {
55855 + depctl.b.epdis = 1;
55856 + }
55857 + depctl.b.stall = 1;
55858 + DWC_WRITE_REG32(&dev_if->in_ep_regs
55859 + [epnum]->diepctl,
55860 + depctl.d32);
55861 + }
55862 + CLEAR_IN_EP_INTR(core_if, epnum, intktxfemp);
55863 + }
55864 + /** IN Token Received with EP mismatch */
55865 + if (diepint.b.intknepmis) {
55866 + DWC_DEBUGPL(DBG_ANY,
55867 + "EP%d IN TKN EP Mismatch\n", epnum);
55868 + CLEAR_IN_EP_INTR(core_if, epnum, intknepmis);
55869 + }
55870 + /** IN Endpoint NAK Effective */
55871 + if (diepint.b.inepnakeff) {
55872 + DWC_DEBUGPL(DBG_ANY,
55873 + "EP%d IN EP NAK Effective\n",
55874 + epnum);
55875 + /* Periodic EP */
55876 + if (ep->disabling) {
55877 + depctl.d32 = 0;
55878 + depctl.b.snak = 1;
55879 + depctl.b.epdis = 1;
55880 + DWC_MODIFY_REG32(&dev_if->in_ep_regs
55881 + [epnum]->diepctl,
55882 + depctl.d32,
55883 + depctl.d32);
55884 + }
55885 + CLEAR_IN_EP_INTR(core_if, epnum, inepnakeff);
55886 +
55887 + }
55888 +
55889 + /** IN EP Tx FIFO Empty Intr */
55890 + if (diepint.b.emptyintr) {
55891 + DWC_DEBUGPL(DBG_ANY,
55892 + "EP%d Tx FIFO Empty Intr \n",
55893 + epnum);
55894 + write_empty_tx_fifo(pcd, epnum);
55895 +
55896 + CLEAR_IN_EP_INTR(core_if, epnum, emptyintr);
55897 +
55898 + }
55899 +
55900 + /** IN EP BNA Intr */
55901 + if (diepint.b.bna) {
55902 + CLEAR_IN_EP_INTR(core_if, epnum, bna);
55903 + if (core_if->dma_desc_enable) {
55904 +#ifdef DWC_EN_ISOC
55905 + if (dwc_ep->type ==
55906 + DWC_OTG_EP_TYPE_ISOC) {
55907 + /*
55908 + * This check prevents handling a first "false" BNA
55909 + * that occurs right after reconnect
55910 + */
55911 + if (dwc_ep->next_frame !=
55912 + 0xffffffff)
55913 + dwc_otg_pcd_handle_iso_bna(ep);
55914 + } else
55915 +#endif /* DWC_EN_ISOC */
55916 + {
55917 + dwc_otg_pcd_handle_noniso_bna(ep);
55918 + }
55919 + }
55920 + }
55921 + /* NAK Interrupt */
55922 + if (diepint.b.nak) {
55923 + DWC_DEBUGPL(DBG_ANY, "EP%d IN NAK Interrupt\n",
55924 + epnum);
55925 + if (ep->dwc_ep.type == DWC_OTG_EP_TYPE_ISOC) {
55926 + depctl_data_t depctl;
55927 + if (ep->dwc_ep.frame_num == 0xFFFFFFFF) {
55928 + ep->dwc_ep.frame_num = core_if->frame_num;
55929 + if (ep->dwc_ep.bInterval > 1) {
55930 + depctl.d32 = 0;
55931 + depctl.d32 = DWC_READ_REG32(&dev_if->in_ep_regs[epnum]->diepctl);
55932 + if (ep->dwc_ep.frame_num & 0x1) {
55933 + depctl.b.setd1pid = 1;
55934 + depctl.b.setd0pid = 0;
55935 + } else {
55936 + depctl.b.setd0pid = 1;
55937 + depctl.b.setd1pid = 0;
55938 + }
55939 + DWC_WRITE_REG32(&dev_if->in_ep_regs[epnum]->diepctl, depctl.d32);
55940 + }
55941 + start_next_request(ep);
55942 + }
55943 + ep->dwc_ep.frame_num += ep->dwc_ep.bInterval;
55944 + if (dwc_ep->frame_num > 0x3FFF) {
55945 + dwc_ep->frm_overrun = 1;
55946 + dwc_ep->frame_num &= 0x3FFF;
55947 + } else
55948 + dwc_ep->frm_overrun = 0;
55949 + }
55950 +
55951 + CLEAR_IN_EP_INTR(core_if, epnum, nak);
55952 + }
55953 + }
55954 + epnum++;
55955 + ep_intr >>= 1;
55956 + }
55957 +
55958 + return 1;
55959 +#undef CLEAR_IN_EP_INTR
55960 +}
55961 +
55962 +/**
55963 + * This interrupt indicates that an OUT EP has a pending Interrupt.
55964 + * The sequence for handling the OUT EP interrupt is shown below:
55965 + * -# Read the Device All Endpoint Interrupt register
55966 + * -# Repeat the following for each OUT EP interrupt bit set (from
55967 + * LSB to MSB).
55968 + * -# Read the Device Endpoint Interrupt (DOEPINTn) register
55969 + * -# If "Transfer Complete" call the request complete function
55970 + * -# If "Endpoint Disabled" complete the EP disable procedure.
55971 + * -# If "AHB Error Interrupt" log error
55972 + * -# If "Setup Phase Done" process Setup Packet (See Standard USB
55973 + * Command Processing)
55974 + */
55975 +static int32_t dwc_otg_pcd_handle_out_ep_intr(dwc_otg_pcd_t * pcd)
55976 +{
55977 +#define CLEAR_OUT_EP_INTR(__core_if,__epnum,__intr) \
55978 +do { \
55979 + doepint_data_t doepint = {.d32=0}; \
55980 + doepint.b.__intr = 1; \
55981 + DWC_WRITE_REG32(&__core_if->dev_if->out_ep_regs[__epnum]->doepint, \
55982 + doepint.d32); \
55983 +} while (0)
55984 +
55985 + dwc_otg_core_if_t *core_if = GET_CORE_IF(pcd);
55986 + uint32_t ep_intr;
55987 + doepint_data_t doepint = {.d32 = 0 };
55988 + uint32_t epnum = 0;
55989 + dwc_otg_pcd_ep_t *ep;
55990 + dwc_ep_t *dwc_ep;
55991 + dctl_data_t dctl = {.d32 = 0 };
55992 + gintmsk_data_t gintmsk = {.d32 = 0 };
55993 +
55994 +
55995 + DWC_DEBUGPL(DBG_PCDV, "%s()\n", __func__);
55996 +
55997 + /* Read in the device interrupt bits */
55998 + ep_intr = dwc_otg_read_dev_all_out_ep_intr(core_if);
55999 +
56000 + while (ep_intr) {
56001 + if (ep_intr & 0x1) {
56002 + /* Get EP pointer */
56003 + ep = get_out_ep(pcd, epnum);
56004 + dwc_ep = &ep->dwc_ep;
56005 +
56006 +#ifdef VERBOSE
56007 + DWC_DEBUGPL(DBG_PCDV,
56008 + "EP%d-%s: type=%d, mps=%d\n",
56009 + dwc_ep->num, (dwc_ep->is_in ? "IN" : "OUT"),
56010 + dwc_ep->type, dwc_ep->maxpacket);
56011 +#endif
56012 + doepint.d32 =
56013 + dwc_otg_read_dev_out_ep_intr(core_if, dwc_ep);
56014 + /* This interrupt is handled first because of a core defect that asserts
56015 + * OUT EP 0 xfercompl along with stsphsercvd in BDMA mode */
56016 + if (doepint.b.stsphsercvd) {
56017 + deptsiz0_data_t deptsiz;
56018 + CLEAR_OUT_EP_INTR(core_if, epnum, stsphsercvd);
56019 + deptsiz.d32 =
56020 + DWC_READ_REG32(&core_if->dev_if->
56021 + out_ep_regs[0]->doeptsiz);
56022 + if (core_if->snpsid >= OTG_CORE_REV_3_00a
56023 + && core_if->dma_enable
56024 + && core_if->dma_desc_enable == 0
56025 + && doepint.b.xfercompl
56026 + && deptsiz.b.xfersize == 24) {
56027 + CLEAR_OUT_EP_INTR(core_if, epnum,
56028 + xfercompl);
56029 + doepint.b.xfercompl = 0;
56030 + ep0_out_start(core_if, pcd);
56031 + }
56032 + if ((core_if->dma_desc_enable) ||
56033 + (core_if->dma_enable
56034 + && core_if->snpsid >=
56035 + OTG_CORE_REV_3_00a)) {
56036 + do_setup_in_status_phase(pcd);
56037 + }
56038 + }
56039 + /* Transfer complete */
56040 + if (doepint.b.xfercompl) {
56041 +
56042 + if (epnum == 0) {
56043 + /* Clear the bit in DOEPINTn for this interrupt */
56044 + CLEAR_OUT_EP_INTR(core_if, epnum, xfercompl);
56045 + if (core_if->snpsid >= OTG_CORE_REV_3_00a) {
56046 + DWC_DEBUGPL(DBG_PCDV, "DOEPINT=%x doepint=%x\n",
56047 + DWC_READ_REG32(&core_if->dev_if->out_ep_regs[0]->doepint),
56048 + doepint.d32);
56049 + DWC_DEBUGPL(DBG_PCDV, "DOEPCTL=%x \n",
56050 + DWC_READ_REG32(&core_if->dev_if->out_ep_regs[0]->doepctl));
56051 +
56052 + if (core_if->snpsid >= OTG_CORE_REV_3_00a
56053 + && core_if->dma_enable == 0) {
56054 + doepint_data_t doepint;
56055 + doepint.d32 = DWC_READ_REG32(&core_if->dev_if->
56056 + out_ep_regs[0]->doepint);
56057 + if (pcd->ep0state == EP0_IDLE && doepint.b.sr) {
56058 + CLEAR_OUT_EP_INTR(core_if, epnum, sr);
56059 + goto exit_xfercompl;
56060 + }
56061 + }
56062 + /* In case of DDMA look at SR bit to go to the Data Stage */
56063 + if (core_if->dma_desc_enable) {
56064 + dev_dma_desc_sts_t status = {.d32 = 0};
56065 + if (pcd->ep0state == EP0_IDLE) {
56066 + status.d32 = core_if->dev_if->setup_desc_addr[core_if->
56067 + dev_if->setup_desc_index]->status.d32;
56068 + if(pcd->data_terminated) {
56069 + pcd->data_terminated = 0;
56070 + status.d32 = core_if->dev_if->out_desc_addr->status.d32;
56071 + dwc_memcpy(&pcd->setup_pkt->req, pcd->backup_buf, 8);
56072 + }
56073 + if (status.b.sr) {
56074 + if (doepint.b.setup) {
56075 + DWC_DEBUGPL(DBG_PCDV, "DMA DESC EP0_IDLE SR=1 setup=1\n");
56076 + /* Already started data stage, clear setup */
56077 + CLEAR_OUT_EP_INTR(core_if, epnum, setup);
56078 + doepint.b.setup = 0;
56079 + handle_ep0(pcd);
56080 + /* Prepare for more setup packets */
56081 + if (pcd->ep0state == EP0_IN_STATUS_PHASE ||
56082 + pcd->ep0state == EP0_IN_DATA_PHASE) {
56083 + ep0_out_start(core_if, pcd);
56084 + }
56085 +
56086 + goto exit_xfercompl;
56087 + } else {
56088 + /* Prepare for more setup packets */
56089 + DWC_DEBUGPL(DBG_PCDV,
56090 + "EP0_IDLE SR=1 setup=0 new setup comes\n");
56091 + ep0_out_start(core_if, pcd);
56092 + }
56093 + }
56094 + } else {
56095 + dwc_otg_pcd_request_t *req;
56096 + dev_dma_desc_sts_t status = {.d32 = 0};
56097 + diepint_data_t diepint0;
56098 + diepint0.d32 = DWC_READ_REG32(&core_if->dev_if->
56099 + in_ep_regs[0]->diepint);
56100 +
56101 + if (pcd->ep0state == EP0_STALL || pcd->ep0state == EP0_DISCONNECT) {
56102 + DWC_ERROR("EP0 is stalled/disconnected\n");
56103 + }
56104 +
56105 + /* Clear IN xfercompl if set */
56106 + if (diepint0.b.xfercompl && (pcd->ep0state == EP0_IN_STATUS_PHASE
56107 + || pcd->ep0state == EP0_IN_DATA_PHASE)) {
56108 + DWC_WRITE_REG32(&core_if->dev_if->
56109 + in_ep_regs[0]->diepint, diepint0.d32);
56110 + }
56111 +
56112 + status.d32 = core_if->dev_if->setup_desc_addr[core_if->
56113 + dev_if->setup_desc_index]->status.d32;
56114 +
56115 + if (ep->dwc_ep.xfer_count != ep->dwc_ep.total_len
56116 + && (pcd->ep0state == EP0_OUT_DATA_PHASE))
56117 + status.d32 = core_if->dev_if->out_desc_addr->status.d32;
56118 + if (pcd->ep0state == EP0_OUT_STATUS_PHASE)
56119 + status.d32 = core_if->dev_if->
56120 + out_desc_addr->status.d32;
56121 +
56122 + if (status.b.sr) {
56123 + if (DWC_CIRCLEQ_EMPTY(&ep->queue)) {
56124 + DWC_DEBUGPL(DBG_PCDV, "Request queue empty!!\n");
56125 + } else {
56126 + DWC_DEBUGPL(DBG_PCDV, "complete req!!\n");
56127 + req = DWC_CIRCLEQ_FIRST(&ep->queue);
56128 + if (ep->dwc_ep.xfer_count != ep->dwc_ep.total_len &&
56129 + pcd->ep0state == EP0_OUT_DATA_PHASE) {
56130 + /* Read arrived setup packet from req->buf */
56131 + dwc_memcpy(&pcd->setup_pkt->req,
56132 + req->buf + ep->dwc_ep.xfer_count, 8);
56133 + }
56134 + req->actual = ep->dwc_ep.xfer_count;
56135 + dwc_otg_request_done(ep, req, -ECONNRESET);
56136 + ep->dwc_ep.start_xfer_buff = 0;
56137 + ep->dwc_ep.xfer_buff = 0;
56138 + ep->dwc_ep.xfer_len = 0;
56139 + }
56140 + pcd->ep0state = EP0_IDLE;
56141 + if (doepint.b.setup) {
56142 + DWC_DEBUGPL(DBG_PCDV, "EP0_IDLE SR=1 setup=1\n");
56143 + /* Data stage started, clear setup */
56144 + CLEAR_OUT_EP_INTR(core_if, epnum, setup);
56145 + doepint.b.setup = 0;
56146 + handle_ep0(pcd);
56147 + /* Prepare for setup packets if ep0in was enabled*/
56148 + if (pcd->ep0state == EP0_IN_STATUS_PHASE) {
56149 + ep0_out_start(core_if, pcd);
56150 + }
56151 +
56152 + goto exit_xfercompl;
56153 + } else {
56154 + /* Prepare for more setup packets */
56155 + DWC_DEBUGPL(DBG_PCDV,
56156 + "EP0_IDLE SR=1 setup=0 new setup comes 2\n");
56157 + ep0_out_start(core_if, pcd);
56158 + }
56159 + }
56160 + }
56161 + }
56162 + if (core_if->snpsid >= OTG_CORE_REV_2_94a && core_if->dma_enable
56163 + && core_if->dma_desc_enable == 0) {
56164 + doepint_data_t doepint_temp = {.d32 = 0};
56165 + deptsiz0_data_t doeptsize0 = {.d32 = 0 };
56166 + doepint_temp.d32 = DWC_READ_REG32(&core_if->dev_if->
56167 + out_ep_regs[ep->dwc_ep.num]->doepint);
56168 + doeptsize0.d32 = DWC_READ_REG32(&core_if->dev_if->
56169 + out_ep_regs[ep->dwc_ep.num]->doeptsiz);
56170 + if (pcd->ep0state == EP0_IDLE) {
56171 + if (doepint_temp.b.sr) {
56172 + CLEAR_OUT_EP_INTR(core_if, epnum, sr);
56173 + }
56174 + doepint.d32 = DWC_READ_REG32(&core_if->dev_if->
56175 + out_ep_regs[0]->doepint);
56176 + if (doeptsize0.b.supcnt == 3) {
56177 + DWC_DEBUGPL(DBG_ANY, "Rolling over!!!!!!!\n");
56178 + ep->dwc_ep.stp_rollover = 1;
56179 + }
56180 + if (doepint.b.setup) {
56181 +retry:
56182 + /* Already started data stage, clear setup */
56183 + CLEAR_OUT_EP_INTR(core_if, epnum, setup);
56184 + doepint.b.setup = 0;
56185 + handle_ep0(pcd);
56186 + ep->dwc_ep.stp_rollover = 0;
56187 + /* Prepare for more setup packets */
56188 + if (pcd->ep0state == EP0_IN_STATUS_PHASE ||
56189 + pcd->ep0state == EP0_IN_DATA_PHASE) {
56190 + ep0_out_start(core_if, pcd);
56191 + }
56192 + goto exit_xfercompl;
56193 + } else {
56194 + /* Prepare for more setup packets */
56195 + DWC_DEBUGPL(DBG_ANY,
56196 + "EP0_IDLE SR=1 setup=0 new setup comes\n");
56197 + doepint.d32 = DWC_READ_REG32(&core_if->dev_if->
56198 + out_ep_regs[0]->doepint);
56199 + if(doepint.b.setup)
56200 + goto retry;
56201 + ep0_out_start(core_if, pcd);
56202 + }
56203 + } else {
56204 + dwc_otg_pcd_request_t *req;
56205 + diepint_data_t diepint0 = {.d32 = 0};
56206 + doepint_data_t doepint_temp = {.d32 = 0};
56207 + depctl_data_t diepctl0;
56208 + diepint0.d32 = DWC_READ_REG32(&core_if->dev_if->
56209 + in_ep_regs[0]->diepint);
56210 + diepctl0.d32 = DWC_READ_REG32(&core_if->dev_if->
56211 + in_ep_regs[0]->diepctl);
56212 +
56213 + if (pcd->ep0state == EP0_IN_DATA_PHASE
56214 + || pcd->ep0state == EP0_IN_STATUS_PHASE) {
56215 + if (diepint0.b.xfercompl) {
56216 + DWC_WRITE_REG32(&core_if->dev_if->
56217 + in_ep_regs[0]->diepint, diepint0.d32);
56218 + }
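+ /*
+  * If EP0 IN is still enabled here, abort it by hand: set SNAK and poll
+  * until "IN EP NAK effective" is reported, then set EPDIS and poll until
+  * "endpoint disabled" is reported, write-clearing each bit as it appears.
+  */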
56219 + if (diepctl0.b.epena) {
56220 + diepint_data_t diepint = {.d32 = 0};
56221 + diepctl0.b.snak = 1;
56222 + DWC_WRITE_REG32(&core_if->dev_if->
56223 + in_ep_regs[0]->diepctl, diepctl0.d32);
56224 + do {
56225 + dwc_udelay(10);
56226 + diepint.d32 = DWC_READ_REG32(&core_if->dev_if->
56227 + in_ep_regs[0]->diepint);
56228 + } while (!diepint.b.inepnakeff);
56229 + diepint.b.inepnakeff = 1;
56230 + DWC_WRITE_REG32(&core_if->dev_if->
56231 + in_ep_regs[0]->diepint, diepint.d32);
56232 + diepctl0.d32 = 0;
56233 + diepctl0.b.epdis = 1;
56234 + DWC_WRITE_REG32(&core_if->dev_if->in_ep_regs[0]->diepctl,
56235 + diepctl0.d32);
56236 + do {
56237 + dwc_udelay(10);
56238 + diepint.d32 = DWC_READ_REG32(&core_if->dev_if->
56239 + in_ep_regs[0]->diepint);
56240 + } while (!diepint.b.epdisabled);
56241 + diepint.b.epdisabled = 1;
56242 + DWC_WRITE_REG32(&core_if->dev_if->in_ep_regs[0]->diepint,
56243 + diepint.d32);
56244 + }
56245 + }
56246 + doepint_temp.d32 = DWC_READ_REG32(&core_if->dev_if->
56247 + out_ep_regs[ep->dwc_ep.num]->doepint);
56248 + if (doepint_temp.b.sr) {
56249 + CLEAR_OUT_EP_INTR(core_if, epnum, sr);
56250 + if (DWC_CIRCLEQ_EMPTY(&ep->queue)) {
56251 + DWC_DEBUGPL(DBG_PCDV, "Request queue empty!!\n");
56252 + } else {
56253 + DWC_DEBUGPL(DBG_PCDV, "complete req!!\n");
56254 + req = DWC_CIRCLEQ_FIRST(&ep->queue);
56255 + if (ep->dwc_ep.xfer_count != ep->dwc_ep.total_len &&
56256 + pcd->ep0state == EP0_OUT_DATA_PHASE) {
56257 + /* Read arrived setup packet from req->buf */
56258 + dwc_memcpy(&pcd->setup_pkt->req,
56259 + req->buf + ep->dwc_ep.xfer_count, 8);
56260 + }
56261 + req->actual = ep->dwc_ep.xfer_count;
56262 + dwc_otg_request_done(ep, req, -ECONNRESET);
56263 + ep->dwc_ep.start_xfer_buff = 0;
56264 + ep->dwc_ep.xfer_buff = 0;
56265 + ep->dwc_ep.xfer_len = 0;
56266 + }
56267 + pcd->ep0state = EP0_IDLE;
56268 + if (doepint.b.setup) {
56269 + DWC_DEBUGPL(DBG_PCDV, "EP0_IDLE SR=1 setup=1\n");
56270 + /* Data stage started, clear setup */
56271 + CLEAR_OUT_EP_INTR(core_if, epnum, setup);
56272 + doepint.b.setup = 0;
56273 + handle_ep0(pcd);
56274 + /* Prepare for setup packets if ep0in was enabled*/
56275 + if (pcd->ep0state == EP0_IN_STATUS_PHASE) {
56276 + ep0_out_start(core_if, pcd);
56277 + }
56278 + goto exit_xfercompl;
56279 + } else {
56280 + /* Prepare for more setup packets */
56281 + DWC_DEBUGPL(DBG_PCDV,
56282 + "EP0_IDLE SR=1 setup=0 new setup comes 2\n");
56283 + ep0_out_start(core_if, pcd);
56284 + }
56285 + }
56286 + }
56287 + }
56288 + if (core_if->dma_enable == 0 || pcd->ep0state != EP0_IDLE)
56289 + handle_ep0(pcd);
56290 +exit_xfercompl:
56291 + DWC_DEBUGPL(DBG_PCDV, "DOEPINT=%x doepint=%x\n",
56292 + dwc_otg_read_dev_out_ep_intr(core_if, dwc_ep), doepint.d32);
56293 + } else {
56294 + if (core_if->dma_desc_enable == 0
56295 + || pcd->ep0state != EP0_IDLE)
56296 + handle_ep0(pcd);
56297 + }
56298 +#ifdef DWC_EN_ISOC
56299 + } else if (dwc_ep->type == DWC_OTG_EP_TYPE_ISOC) {
56300 + if (doepint.b.pktdrpsts == 0) {
56301 + /* Clear the bit in DOEPINTn for this interrupt */
56302 + CLEAR_OUT_EP_INTR(core_if,
56303 + epnum,
56304 + xfercompl);
56305 + complete_iso_ep(pcd, ep);
56306 + } else {
56307 +
56308 + doepint_data_t doepint = {.d32 = 0 };
56309 + doepint.b.xfercompl = 1;
56310 + doepint.b.pktdrpsts = 1;
56311 + DWC_WRITE_REG32
56312 + (&core_if->dev_if->out_ep_regs
56313 + [epnum]->doepint,
56314 + doepint.d32);
56315 + if (handle_iso_out_pkt_dropped
56316 + (core_if, dwc_ep)) {
56317 + complete_iso_ep(pcd,
56318 + ep);
56319 + }
56320 + }
56321 +#endif /* DWC_EN_ISOC */
56322 +#ifdef DWC_UTE_PER_IO
56323 + } else if (dwc_ep->type == DWC_OTG_EP_TYPE_ISOC) {
56324 + CLEAR_OUT_EP_INTR(core_if, epnum, xfercompl);
56325 + if (!ep->stopped)
56326 + complete_xiso_ep(ep);
56327 +#endif /* DWC_UTE_PER_IO */
56328 + } else {
56329 + /* Clear the bit in DOEPINTn for this interrupt */
56330 + CLEAR_OUT_EP_INTR(core_if, epnum,
56331 + xfercompl);
56332 +
56333 + if (core_if->core_params->dev_out_nak) {
56334 + DWC_TIMER_CANCEL(pcd->core_if->ep_xfer_timer[epnum]);
56335 + pcd->core_if->ep_xfer_info[epnum].state = 0;
56336 +#ifdef DEBUG
56337 + print_memory_payload(pcd, dwc_ep);
56338 +#endif
56339 + }
56340 + complete_ep(ep);
56341 + }
56342 +
56343 + }
56344 +
56345 + /* Endpoint disable */
56346 + if (doepint.b.epdisabled) {
56347 +
56348 + /* Clear the bit in DOEPINTn for this interrupt */
56349 + CLEAR_OUT_EP_INTR(core_if, epnum, epdisabled);
56350 + if (core_if->core_params->dev_out_nak) {
56351 +#ifdef DEBUG
56352 + print_memory_payload(pcd, dwc_ep);
56353 +#endif
56354 + /* In case of timeout condition */
56355 + if (core_if->ep_xfer_info[epnum].state == 2) {
56356 + dctl.d32 = DWC_READ_REG32(&core_if->dev_if->
56357 + dev_global_regs->dctl);
56358 + dctl.b.cgoutnak = 1;
56359 + DWC_WRITE_REG32(&core_if->dev_if->dev_global_regs->dctl,
56360 + dctl.d32);
56361 + /* Unmask goutnakeff interrupt which was masked
56362 + * during handle nak out interrupt */
56363 + gintmsk.b.goutnakeff = 1;
56364 + DWC_MODIFY_REG32(&core_if->core_global_regs->gintmsk,
56365 + 0, gintmsk.d32);
56366 +
56367 + complete_ep(ep);
56368 + }
56369 + }
56370 + if (ep->dwc_ep.type == DWC_OTG_EP_TYPE_ISOC)
56371 + {
56372 + dctl_data_t dctl;
56373 + gintmsk_data_t intr_mask = {.d32 = 0};
56374 + dwc_otg_pcd_request_t *req = 0;
56375 +
56376 + dctl.d32 = DWC_READ_REG32(&core_if->dev_if->
56377 + dev_global_regs->dctl);
56378 + dctl.b.cgoutnak = 1;
56379 + DWC_WRITE_REG32(&core_if->dev_if->dev_global_regs->dctl,
56380 + dctl.d32);
56381 +
56382 + intr_mask.d32 = 0;
56383 + intr_mask.b.incomplisoout = 1;
56384 +
56385 + /* Get any pending requests */
56386 + if (!DWC_CIRCLEQ_EMPTY(&ep->queue)) {
56387 + req = DWC_CIRCLEQ_FIRST(&ep->queue);
56388 + if (!req) {
56389 + DWC_PRINTF("complete_ep 0x%p, req = NULL!\n", ep);
56390 + } else {
56391 + dwc_otg_request_done(ep, req, 0);
56392 + start_next_request(ep);
56393 + }
56394 + } else {
56395 + DWC_PRINTF("complete_ep 0x%p, ep->queue empty!\n", ep);
56396 + }
56397 + }
56398 + }
56399 + /* AHB Error */
56400 + if (doepint.b.ahberr) {
56401 + DWC_ERROR("EP%d OUT AHB Error\n", epnum);
56402 + DWC_ERROR("EP%d DEPDMA=0x%08x \n",
56403 + epnum, core_if->dev_if->out_ep_regs[epnum]->doepdma);
56404 + CLEAR_OUT_EP_INTR(core_if, epnum, ahberr);
56405 + }
56406 + /* Setup Phase Done (control EPs) */
56407 + if (doepint.b.setup) {
56408 +#ifdef DEBUG_EP0
56409 + DWC_DEBUGPL(DBG_PCD, "EP%d SETUP Done\n", epnum);
56410 +#endif
56411 + CLEAR_OUT_EP_INTR(core_if, epnum, setup);
56412 +
56413 + handle_ep0(pcd);
56414 + }
56415 +
56416 + /** OUT EP BNA Intr */
56417 + if (doepint.b.bna) {
56418 + CLEAR_OUT_EP_INTR(core_if, epnum, bna);
56419 + if (core_if->dma_desc_enable) {
56420 +#ifdef DWC_EN_ISOC
56421 + if (dwc_ep->type ==
56422 + DWC_OTG_EP_TYPE_ISOC) {
56423 + /*
56424 + * This check prevents handling a first "false" BNA
56425 + * that occurs right after reconnect
56426 + */
56427 + if (dwc_ep->next_frame !=
56428 + 0xffffffff)
56429 + dwc_otg_pcd_handle_iso_bna(ep);
56430 + } else
56431 +#endif /* DWC_EN_ISOC */
56432 + {
56433 + dwc_otg_pcd_handle_noniso_bna(ep);
56434 + }
56435 + }
56436 + }
56437 + /* Babble Interrupt */
56438 + if (doepint.b.babble) {
56439 + DWC_DEBUGPL(DBG_ANY, "EP%d OUT Babble\n",
56440 + epnum);
56441 + handle_out_ep_babble_intr(pcd, epnum);
56442 +
56443 + CLEAR_OUT_EP_INTR(core_if, epnum, babble);
56444 + }
56445 + if (doepint.b.outtknepdis) {
56446 + DWC_DEBUGPL(DBG_ANY, "EP%d OUT Token received when EP is "
56447 + "disabled\n", epnum);
56448 + if (ep->dwc_ep.type == DWC_OTG_EP_TYPE_ISOC) {
56449 + doepmsk_data_t doepmsk = {.d32 = 0};
56450 + ep->dwc_ep.frame_num = core_if->frame_num;
56451 + if (ep->dwc_ep.bInterval > 1) {
56452 + depctl_data_t depctl;
56453 + depctl.d32 = DWC_READ_REG32(&core_if->dev_if->
56454 + out_ep_regs[epnum]->doepctl);
56455 + if (ep->dwc_ep.frame_num & 0x1) {
56456 + depctl.b.setd1pid = 1;
56457 + depctl.b.setd0pid = 0;
56458 + } else {
56459 + depctl.b.setd0pid = 1;
56460 + depctl.b.setd1pid = 0;
56461 + }
56462 + DWC_WRITE_REG32(&core_if->dev_if->
56463 + out_ep_regs[epnum]->doepctl, depctl.d32);
56464 + }
56465 + start_next_request(ep);
56466 + doepmsk.b.outtknepdis = 1;
56467 + DWC_MODIFY_REG32(&core_if->dev_if->dev_global_regs->doepmsk,
56468 + doepmsk.d32, 0);
56469 + }
56470 + CLEAR_OUT_EP_INTR(core_if, epnum, outtknepdis);
56471 + }
56472 +
56473 + /* NAK Interrupt */
56474 + if (doepint.b.nak) {
56475 + DWC_DEBUGPL(DBG_ANY, "EP%d OUT NAK\n", epnum);
56476 + handle_out_ep_nak_intr(pcd, epnum);
56477 +
56478 + CLEAR_OUT_EP_INTR(core_if, epnum, nak);
56479 + }
56480 + /* NYET Interrupt */
56481 + if (doepint.b.nyet) {
56482 + DWC_DEBUGPL(DBG_ANY, "EP%d OUT NYET\n", epnum);
56483 + handle_out_ep_nyet_intr(pcd, epnum);
56484 +
56485 + CLEAR_OUT_EP_INTR(core_if, epnum, nyet);
56486 + }
56487 + }
56488 +
56489 + epnum++;
56490 + ep_intr >>= 1;
56491 + }
56492 +
56493 + return 1;
56494 +
56495 +#undef CLEAR_OUT_EP_INTR
56496 +}
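+/*
+ * Helper for the incomplete ISO IN handler below: decides whether the
+ * scheduled target (micro)frame has already passed, so the pending transfer
+ * should be dropped. frm_overrun flags that the 14-bit target frame number
+ * has wrapped around.
+ */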
56497 +static int drop_transfer(uint32_t trgt_fr, uint32_t curr_fr, uint8_t frm_overrun)
56498 +{
56499 + int retval = 0;
56500 + if(!frm_overrun && curr_fr >= trgt_fr)
56501 + retval = 1;
56502 + else if (frm_overrun
56503 + && (curr_fr >= trgt_fr && ((curr_fr - trgt_fr) < 0x3FFF / 2)))
56504 + retval = 1;
56505 + return retval;
56506 +}
56507 +/**
56508 + * Incomplete ISO IN Transfer Interrupt.
56509 + * This interrupt indicates one of the following conditions occurred
56510 + * while transmitting an ISOC transaction.
56511 + * - Corrupted IN Token for ISOC EP.
56512 + * - Packet not complete in FIFO.
56513 + * The following actions will be taken:
56514 + * -# Determine the EP
56515 + * -# Set incomplete flag in dwc_ep structure
56516 + * -# Disable EP; when the "Endpoint Disabled" interrupt is received,
56517 + * flush the FIFO
56518 + */
56519 +int32_t dwc_otg_pcd_handle_incomplete_isoc_in_intr(dwc_otg_pcd_t * pcd)
56520 +{
56521 + gintsts_data_t gintsts;
56522 +
56523 +#ifdef DWC_EN_ISOC
56524 + dwc_otg_dev_if_t *dev_if;
56525 + deptsiz_data_t deptsiz = {.d32 = 0 };
56526 + depctl_data_t depctl = {.d32 = 0 };
56527 + dsts_data_t dsts = {.d32 = 0 };
56528 + dwc_ep_t *dwc_ep;
56529 + int i;
56530 +
56531 + dev_if = GET_CORE_IF(pcd)->dev_if;
56532 +
56533 + for (i = 1; i <= dev_if->num_in_eps; ++i) {
56534 + dwc_ep = &pcd->in_ep[i].dwc_ep;
56535 + if (dwc_ep->active && dwc_ep->type == DWC_OTG_EP_TYPE_ISOC) {
56536 + deptsiz.d32 =
56537 + DWC_READ_REG32(&dev_if->in_ep_regs[i]->dieptsiz);
56538 + depctl.d32 =
56539 + DWC_READ_REG32(&dev_if->in_ep_regs[i]->diepctl);
56540 +
56541 + if (depctl.b.epdis && deptsiz.d32) {
56542 + set_current_pkt_info(GET_CORE_IF(pcd), dwc_ep);
56543 + if (dwc_ep->cur_pkt >= dwc_ep->pkt_cnt) {
56544 + dwc_ep->cur_pkt = 0;
56545 + dwc_ep->proc_buf_num =
56546 + (dwc_ep->proc_buf_num ^ 1) & 0x1;
56547 +
56548 + if (dwc_ep->proc_buf_num) {
56549 + dwc_ep->cur_pkt_addr =
56550 + dwc_ep->xfer_buff1;
56551 + dwc_ep->cur_pkt_dma_addr =
56552 + dwc_ep->dma_addr1;
56553 + } else {
56554 + dwc_ep->cur_pkt_addr =
56555 + dwc_ep->xfer_buff0;
56556 + dwc_ep->cur_pkt_dma_addr =
56557 + dwc_ep->dma_addr0;
56558 + }
56559 +
56560 + }
56561 +
56562 + dsts.d32 =
56563 + DWC_READ_REG32(&GET_CORE_IF(pcd)->dev_if->
56564 + dev_global_regs->dsts);
56565 + dwc_ep->next_frame = dsts.b.soffn;
56566 +
56567 + dwc_otg_iso_ep_start_frm_transfer(GET_CORE_IF
56568 + (pcd),
56569 + dwc_ep);
56570 + }
56571 + }
56572 + }
56573 +
56574 +#else
56575 + depctl_data_t depctl = {.d32 = 0 };
56576 + dwc_ep_t *dwc_ep;
56577 + dwc_otg_dev_if_t *dev_if;
56578 + int i;
56579 + dev_if = GET_CORE_IF(pcd)->dev_if;
56580 +
56581 + DWC_DEBUGPL(DBG_PCD,"Incomplete ISO IN \n");
56582 +
56583 + for (i = 1; i <= dev_if->num_in_eps; ++i) {
56584 + dwc_ep = &pcd->in_ep[i-1].dwc_ep;
56585 + depctl.d32 =
56586 + DWC_READ_REG32(&dev_if->in_ep_regs[i]->diepctl);
56587 + if (depctl.b.epena && dwc_ep->type == DWC_OTG_EP_TYPE_ISOC) {
56588 + if (drop_transfer(dwc_ep->frame_num, GET_CORE_IF(pcd)->frame_num,
56589 + dwc_ep->frm_overrun))
56590 + {
56591 + depctl.d32 =
56592 + DWC_READ_REG32(&dev_if->in_ep_regs[i]->diepctl);
56593 + depctl.b.snak = 1;
56594 + depctl.b.epdis = 1;
56595 + DWC_MODIFY_REG32(&dev_if->in_ep_regs[i]->diepctl, depctl.d32, depctl.d32);
56596 + }
56597 + }
56598 + }
56599 +
56600 + /*intr_mask.b.incomplisoin = 1;
56601 + DWC_MODIFY_REG32(&GET_CORE_IF(pcd)->core_global_regs->gintmsk,
56602 + intr_mask.d32, 0); */
56603 +#endif //DWC_EN_ISOC
56604 +
56605 + /* Clear interrupt */
56606 + gintsts.d32 = 0;
56607 + gintsts.b.incomplisoin = 1;
56608 + DWC_WRITE_REG32(&GET_CORE_IF(pcd)->core_global_regs->gintsts,
56609 + gintsts.d32);
56610 +
56611 + return 1;
56612 +}
56613 +
56614 +/**
56615 + * Incomplete ISO OUT Transfer Interrupt.
56616 + *
56617 + * This interrupt indicates that the core has dropped an ISO OUT
56618 + * packet. The following conditions can be the cause:
56619 + * - FIFO Full, the entire packet would not fit in the FIFO.
56620 + * - CRC Error
56621 + * - Corrupted Token
56622 + * The following actions will be taken:
56623 + * -# Determine the EP
56624 + * -# Set incomplete flag in dwc_ep structure
56625 + * -# Read any data from the FIFO
56626 + * -# Disable EP. When "Endpoint Disabled" interrupt is received
56627 + * re-enable EP.
56628 + */
56629 +int32_t dwc_otg_pcd_handle_incomplete_isoc_out_intr(dwc_otg_pcd_t * pcd)
56630 +{
56631 +
56632 + gintsts_data_t gintsts;
56633 +
56634 +#ifdef DWC_EN_ISOC
56635 + dwc_otg_dev_if_t *dev_if;
56636 + deptsiz_data_t deptsiz = {.d32 = 0 };
56637 + depctl_data_t depctl = {.d32 = 0 };
56638 + dsts_data_t dsts = {.d32 = 0 };
56639 + dwc_ep_t *dwc_ep;
56640 + int i;
56641 +
56642 + dev_if = GET_CORE_IF(pcd)->dev_if;
56643 +
56644 + for (i = 1; i <= dev_if->num_out_eps; ++i) {
56645 + dwc_ep = &pcd->out_ep[i].dwc_ep;
56646 + if (pcd->out_ep[i].dwc_ep.active &&
56647 + pcd->out_ep[i].dwc_ep.type == DWC_OTG_EP_TYPE_ISOC) {
56648 + deptsiz.d32 =
56649 + DWC_READ_REG32(&dev_if->out_ep_regs[i]->doeptsiz);
56650 + depctl.d32 =
56651 + DWC_READ_REG32(&dev_if->out_ep_regs[i]->doepctl);
56652 +
56653 + if (depctl.b.epdis && deptsiz.d32) {
56654 + set_current_pkt_info(GET_CORE_IF(pcd),
56655 + &pcd->out_ep[i].dwc_ep);
56656 + if (dwc_ep->cur_pkt >= dwc_ep->pkt_cnt) {
56657 + dwc_ep->cur_pkt = 0;
56658 + dwc_ep->proc_buf_num =
56659 + (dwc_ep->proc_buf_num ^ 1) & 0x1;
56660 +
56661 + if (dwc_ep->proc_buf_num) {
56662 + dwc_ep->cur_pkt_addr =
56663 + dwc_ep->xfer_buff1;
56664 + dwc_ep->cur_pkt_dma_addr =
56665 + dwc_ep->dma_addr1;
56666 + } else {
56667 + dwc_ep->cur_pkt_addr =
56668 + dwc_ep->xfer_buff0;
56669 + dwc_ep->cur_pkt_dma_addr =
56670 + dwc_ep->dma_addr0;
56671 + }
56672 +
56673 + }
56674 +
56675 + dsts.d32 =
56676 + DWC_READ_REG32(&GET_CORE_IF(pcd)->dev_if->
56677 + dev_global_regs->dsts);
56678 + dwc_ep->next_frame = dsts.b.soffn;
56679 +
56680 + dwc_otg_iso_ep_start_frm_transfer(GET_CORE_IF
56681 + (pcd),
56682 + dwc_ep);
56683 + }
56684 + }
56685 + }
56686 +#else
56687 + /** @todo implement ISR */
56688 + gintmsk_data_t intr_mask = {.d32 = 0 };
56689 + dwc_otg_core_if_t *core_if;
56690 + deptsiz_data_t deptsiz = {.d32 = 0 };
56691 + depctl_data_t depctl = {.d32 = 0 };
56692 + dctl_data_t dctl = {.d32 = 0 };
56693 + dwc_ep_t *dwc_ep = NULL;
56694 + int i;
56695 + core_if = GET_CORE_IF(pcd);
56696 +
56697 + for (i = 0; i < core_if->dev_if->num_out_eps; ++i) {
56698 + dwc_ep = &pcd->out_ep[i].dwc_ep;
56699 + depctl.d32 =
56700 + DWC_READ_REG32(&core_if->dev_if->out_ep_regs[dwc_ep->num]->doepctl);
56701 + if (depctl.b.epena && depctl.b.dpid == (core_if->frame_num & 0x1)) {
56702 + core_if->dev_if->isoc_ep = dwc_ep;
56703 + deptsiz.d32 =
56704 + DWC_READ_REG32(&core_if->dev_if->out_ep_regs[dwc_ep->num]->doeptsiz);
56705 + break;
56706 + }
56707 + }
56708 + dctl.d32 = DWC_READ_REG32(&core_if->dev_if->dev_global_regs->dctl);
56709 + gintsts.d32 = DWC_READ_REG32(&core_if->core_global_regs->gintsts);
56710 + intr_mask.d32 = DWC_READ_REG32(&core_if->core_global_regs->gintmsk);
56711 +
56712 + if (!intr_mask.b.goutnakeff) {
56713 + /* Unmask it */
56714 + intr_mask.b.goutnakeff = 1;
56715 + DWC_WRITE_REG32(&core_if->core_global_regs->gintmsk, intr_mask.d32);
56716 + }
56717 + if (!gintsts.b.goutnakeff) {
56718 + dctl.b.sgoutnak = 1;
56719 + }
56720 + DWC_WRITE_REG32(&core_if->dev_if->dev_global_regs->dctl, dctl.d32);
56721 +
56722 + depctl.d32 = DWC_READ_REG32(&core_if->dev_if->out_ep_regs[dwc_ep->num]->doepctl);
56723 + if (depctl.b.epena) {
56724 + depctl.b.epdis = 1;
56725 + depctl.b.snak = 1;
56726 + }
56727 + DWC_WRITE_REG32(&core_if->dev_if->out_ep_regs[dwc_ep->num]->doepctl, depctl.d32);
56728 +
56729 + intr_mask.d32 = 0;
56730 + intr_mask.b.incomplisoout = 1;
56731 +
56732 +#endif /* DWC_EN_ISOC */
56733 +
56734 + /* Clear interrupt */
56735 + gintsts.d32 = 0;
56736 + gintsts.b.incomplisoout = 1;
56737 + DWC_WRITE_REG32(&GET_CORE_IF(pcd)->core_global_regs->gintsts,
56738 + gintsts.d32);
56739 +
56740 + return 1;
56741 +}
56742 +
56743 +/**
56744 + * This function handles the Global IN NAK Effective interrupt.
56745 + *
56746 + */
56747 +int32_t dwc_otg_pcd_handle_in_nak_effective(dwc_otg_pcd_t * pcd)
56748 +{
56749 + dwc_otg_dev_if_t *dev_if = GET_CORE_IF(pcd)->dev_if;
56750 + depctl_data_t diepctl = {.d32 = 0 };
56751 + gintmsk_data_t intr_mask = {.d32 = 0 };
56752 + gintsts_data_t gintsts;
56753 + dwc_otg_core_if_t *core_if = GET_CORE_IF(pcd);
56754 + int i;
56755 +
56756 + DWC_DEBUGPL(DBG_PCD, "Global IN NAK Effective\n");
56757 +
56758 + /* Disable all active IN EPs */
56759 + for (i = 0; i <= dev_if->num_in_eps; i++) {
56760 + diepctl.d32 = DWC_READ_REG32(&dev_if->in_ep_regs[i]->diepctl);
56761 + if (!(diepctl.b.eptype & 1) && diepctl.b.epena) {
56762 + if (core_if->start_predict > 0)
56763 + core_if->start_predict++;
56764 + diepctl.b.epdis = 1;
56765 + diepctl.b.snak = 1;
56766 + DWC_WRITE_REG32(&dev_if->in_ep_regs[i]->diepctl, diepctl.d32);
56767 + }
56768 + }
56769 +
56770 +
56771 + /* Disable the Global IN NAK Effective Interrupt */
56772 + intr_mask.b.ginnakeff = 1;
56773 + DWC_MODIFY_REG32(&GET_CORE_IF(pcd)->core_global_regs->gintmsk,
56774 + intr_mask.d32, 0);
56775 +
56776 + /* Clear interrupt */
56777 + gintsts.d32 = 0;
56778 + gintsts.b.ginnakeff = 1;
56779 + DWC_WRITE_REG32(&GET_CORE_IF(pcd)->core_global_regs->gintsts,
56780 + gintsts.d32);
56781 +
56782 + return 1;
56783 +}
56784 +
56785 +/**
56786 + * OUT NAK Effective.
56787 + *
56788 + */
56789 +int32_t dwc_otg_pcd_handle_out_nak_effective(dwc_otg_pcd_t * pcd)
56790 +{
56791 + dwc_otg_dev_if_t *dev_if = GET_CORE_IF(pcd)->dev_if;
56792 + gintmsk_data_t intr_mask = {.d32 = 0 };
56793 + gintsts_data_t gintsts;
56794 + depctl_data_t doepctl;
56795 + int i;
56796 +
56797 + /* Disable the Global OUT NAK Effective Interrupt */
56798 + intr_mask.b.goutnakeff = 1;
56799 + DWC_MODIFY_REG32(&GET_CORE_IF(pcd)->core_global_regs->gintmsk,
56800 + intr_mask.d32, 0);
56801 +
56802 + /* If DEV OUT NAK is enabled */
56803 + if (pcd->core_if->core_params->dev_out_nak) {
56804 + /* Run over all out endpoints to determine the ep number on
56805 + * which the timeout has happened
56806 + */
56807 + for (i = 0; i <= dev_if->num_out_eps; i++) {
56808 + if ( pcd->core_if->ep_xfer_info[i].state == 2 )
56809 + break;
56810 + }
56811 + if (i > dev_if->num_out_eps) {
56812 + dctl_data_t dctl;
56813 + dctl.d32 =
56814 + DWC_READ_REG32(&dev_if->dev_global_regs->dctl);
56815 + dctl.b.cgoutnak = 1;
56816 + DWC_WRITE_REG32(&dev_if->dev_global_regs->dctl,
56817 + dctl.d32);
56818 + goto out;
56819 + }
56820 +
56821 + /* Disable the endpoint */
56822 + doepctl.d32 = DWC_READ_REG32(&dev_if->out_ep_regs[i]->doepctl);
56823 + if (doepctl.b.epena) {
56824 + doepctl.b.epdis = 1;
56825 + doepctl.b.snak = 1;
56826 + }
56827 + DWC_WRITE_REG32(&dev_if->out_ep_regs[i]->doepctl, doepctl.d32);
56828 + return 1;
56829 + }
56830 + /* We come here from Incomplete ISO OUT handler */
56831 + if (dev_if->isoc_ep) {
56832 + dwc_ep_t *dwc_ep = (dwc_ep_t *)dev_if->isoc_ep;
56833 + uint32_t epnum = dwc_ep->num;
56834 + doepint_data_t doepint;
56835 + doepint.d32 =
56836 + DWC_READ_REG32(&dev_if->out_ep_regs[dwc_ep->num]->doepint);
56837 + dev_if->isoc_ep = NULL;
56838 + doepctl.d32 =
56839 + DWC_READ_REG32(&dev_if->out_ep_regs[epnum]->doepctl);
56840 + DWC_PRINTF("Before disable DOEPCTL = %08x\n", doepctl.d32);
56841 + if (doepctl.b.epena) {
56842 + doepctl.b.epdis = 1;
56843 + doepctl.b.snak = 1;
56844 + }
56845 + DWC_WRITE_REG32(&dev_if->out_ep_regs[epnum]->doepctl,
56846 + doepctl.d32);
56847 + return 1;
56848 + } else
56849 + DWC_PRINTF("INTERRUPT Handler not implemented for %s\n",
56850 + "Global OUT NAK Effective");
56851 +
56852 +out:
56853 + /* Clear interrupt */
56854 + gintsts.d32 = 0;
56855 + gintsts.b.goutnakeff = 1;
56856 + DWC_WRITE_REG32(&GET_CORE_IF(pcd)->core_global_regs->gintsts,
56857 + gintsts.d32);
56858 +
56859 + return 1;
56860 +}
56861 +
56862 +/**
56863 + * PCD interrupt handler.
56864 + *
56865 + * The PCD handles the device interrupts. Many conditions can cause a
56866 + * device interrupt. When an interrupt occurs, the device interrupt
56867 + * service routine determines the cause of the interrupt and
56868 + * dispatches handling to the appropriate function. These interrupt
56869 + * handling functions are described below.
56870 + *
56871 + * All interrupt registers are processed from LSB to MSB.
56872 + *
56873 + */
56874 +int32_t dwc_otg_pcd_handle_intr(dwc_otg_pcd_t * pcd)
56875 +{
56876 + dwc_otg_core_if_t *core_if = GET_CORE_IF(pcd);
56877 +#ifdef VERBOSE
56878 + dwc_otg_core_global_regs_t *global_regs = core_if->core_global_regs;
56879 +#endif
56880 + gintsts_data_t gintr_status;
56881 + int32_t retval = 0;
56882 +
56883 + /* Exit from ISR if core is hibernated */
56884 + if (core_if->hibernation_suspend == 1) {
56885 + return retval;
56886 + }
56887 +#ifdef VERBOSE
56888 + DWC_DEBUGPL(DBG_ANY, "%s() gintsts=%08x gintmsk=%08x\n",
56889 + __func__,
56890 + DWC_READ_REG32(&global_regs->gintsts),
56891 + DWC_READ_REG32(&global_regs->gintmsk));
56892 +#endif
56893 +
56894 + if (dwc_otg_is_device_mode(core_if)) {
56895 + DWC_SPINLOCK(pcd->lock);
56896 +#ifdef VERBOSE
56897 + DWC_DEBUGPL(DBG_PCDV, "%s() gintsts=%08x gintmsk=%08x\n",
56898 + __func__,
56899 + DWC_READ_REG32(&global_regs->gintsts),
56900 + DWC_READ_REG32(&global_regs->gintmsk));
56901 +#endif
56902 +
56903 + gintr_status.d32 = dwc_otg_read_core_intr(core_if);
56904 +
56905 + DWC_DEBUGPL(DBG_PCDV, "%s: gintsts&gintmsk=%08x\n",
56906 + __func__, gintr_status.d32);
56907 +
56908 + if (gintr_status.b.sofintr) {
56909 + retval |= dwc_otg_pcd_handle_sof_intr(pcd);
56910 + }
56911 + if (gintr_status.b.rxstsqlvl) {
56912 + retval |=
56913 + dwc_otg_pcd_handle_rx_status_q_level_intr(pcd);
56914 + }
56915 + if (gintr_status.b.nptxfempty) {
56916 + retval |= dwc_otg_pcd_handle_np_tx_fifo_empty_intr(pcd);
56917 + }
56918 + if (gintr_status.b.goutnakeff) {
56919 + retval |= dwc_otg_pcd_handle_out_nak_effective(pcd);
56920 + }
56921 + if (gintr_status.b.i2cintr) {
56922 + retval |= dwc_otg_pcd_handle_i2c_intr(pcd);
56923 + }
56924 + if (gintr_status.b.erlysuspend) {
56925 + retval |= dwc_otg_pcd_handle_early_suspend_intr(pcd);
56926 + }
56927 + if (gintr_status.b.usbreset) {
56928 + retval |= dwc_otg_pcd_handle_usb_reset_intr(pcd);
56929 + }
56930 + if (gintr_status.b.enumdone) {
56931 + retval |= dwc_otg_pcd_handle_enum_done_intr(pcd);
56932 + }
56933 + if (gintr_status.b.isooutdrop) {
56934 + retval |=
56935 + dwc_otg_pcd_handle_isoc_out_packet_dropped_intr
56936 + (pcd);
56937 + }
56938 + if (gintr_status.b.eopframe) {
56939 + retval |=
56940 + dwc_otg_pcd_handle_end_periodic_frame_intr(pcd);
56941 + }
56942 + if (gintr_status.b.inepint) {
56943 + if (!core_if->multiproc_int_enable) {
56944 + retval |= dwc_otg_pcd_handle_in_ep_intr(pcd);
56945 + }
56946 + }
56947 + if (gintr_status.b.outepintr) {
56948 + if (!core_if->multiproc_int_enable) {
56949 + retval |= dwc_otg_pcd_handle_out_ep_intr(pcd);
56950 + }
56951 + }
56952 + if (gintr_status.b.epmismatch) {
56953 + retval |= dwc_otg_pcd_handle_ep_mismatch_intr(pcd);
56954 + }
56955 + if (gintr_status.b.fetsusp) {
56956 + retval |= dwc_otg_pcd_handle_ep_fetsusp_intr(pcd);
56957 + }
56958 + if (gintr_status.b.ginnakeff) {
56959 + retval |= dwc_otg_pcd_handle_in_nak_effective(pcd);
56960 + }
56961 + if (gintr_status.b.incomplisoin) {
56962 + retval |=
56963 + dwc_otg_pcd_handle_incomplete_isoc_in_intr(pcd);
56964 + }
56965 + if (gintr_status.b.incomplisoout) {
56966 + retval |=
56967 + dwc_otg_pcd_handle_incomplete_isoc_out_intr(pcd);
56968 + }
56969 +
56970 + /* In MPI mode, Device Endpoint interrupts are asserted
56971 + * without the outepintr and inepint bits being set, so these
56972 + * interrupt handlers are called without checking those bit-fields
56973 + */
56974 + if (core_if->multiproc_int_enable) {
56975 + retval |= dwc_otg_pcd_handle_in_ep_intr(pcd);
56976 + retval |= dwc_otg_pcd_handle_out_ep_intr(pcd);
56977 + }
56978 +#ifdef VERBOSE
56979 + DWC_DEBUGPL(DBG_PCDV, "%s() gintsts=%0x\n", __func__,
56980 + DWC_READ_REG32(&global_regs->gintsts));
56981 +#endif
56982 + DWC_SPINUNLOCK(pcd->lock);
56983 + }
56984 + return retval;
56985 +}
56986 +
56987 +#endif /* DWC_HOST_ONLY */
56988 --- /dev/null
56989 +++ b/drivers/usb/host/dwc_otg/dwc_otg_pcd_linux.c
56990 @@ -0,0 +1,1262 @@
56991 + /* ==========================================================================
56992 + * $File: //dwh/usb_iip/dev/software/otg/linux/drivers/dwc_otg_pcd_linux.c $
56993 + * $Revision: #21 $
56994 + * $Date: 2012/08/10 $
56995 + * $Change: 2047372 $
56996 + *
56997 + * Synopsys HS OTG Linux Software Driver and documentation (hereinafter,
56998 + * "Software") is an Unsupported proprietary work of Synopsys, Inc. unless
56999 + * otherwise expressly agreed to in writing between Synopsys and you.
57000 + *
57001 + * The Software IS NOT an item of Licensed Software or Licensed Product under
57002 + * any End User Software License Agreement or Agreement for Licensed Product
57003 + * with Synopsys or any supplement thereto. You are permitted to use and
57004 + * redistribute this Software in source and binary forms, with or without
57005 + * modification, provided that redistributions of source code must retain this
57006 + * notice. You may not view, use, disclose, copy or distribute this file or
57007 + * any information contained herein except pursuant to this license grant from
57008 + * Synopsys. If you do not agree with this notice, including the disclaimer
57009 + * below, then you are not authorized to use the Software.
57010 + *
57011 + * THIS SOFTWARE IS BEING DISTRIBUTED BY SYNOPSYS SOLELY ON AN "AS IS" BASIS
57012 + * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
57013 + * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
57014 + * ARE HEREBY DISCLAIMED. IN NO EVENT SHALL SYNOPSYS BE LIABLE FOR ANY DIRECT,
57015 + * INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
57016 + * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
57017 + * SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
57018 + * CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
57019 + * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
57020 + * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH
57021 + * DAMAGE.
57022 + * ========================================================================== */
57023 +#ifndef DWC_HOST_ONLY
57024 +
57025 +/** @file
57026 + * This file implements the Peripheral Controller Driver.
57027 + *
57028 + * The Peripheral Controller Driver (PCD) is responsible for
57029 + * translating requests from the Function Driver into the appropriate
57030 + * actions on the DWC_otg controller. It isolates the Function Driver
57031 + * from the specifics of the controller by providing an API to the
57032 + * Function Driver.
57033 + *
57034 + * The Peripheral Controller Driver for Linux will implement the
57035 + * Gadget API, so that the existing Gadget drivers can be used.
57036 + * (Gadget Driver is the Linux terminology for a Function Driver.)
57037 + *
57038 + * The Linux Gadget API is defined in the header file
57039 + * <code><linux/usb_gadget.h></code>. The USB EP operations API is
57040 + * defined in the structure <code>usb_ep_ops</code> and the USB
57041 + * Controller API is defined in the structure
57042 + * <code>usb_gadget_ops</code>.
57043 + *
57044 + */
57045 +
57046 +#include "dwc_otg_os_dep.h"
57047 +#include "dwc_otg_pcd_if.h"
57048 +#include "dwc_otg_pcd.h"
57049 +#include "dwc_otg_driver.h"
57050 +#include "dwc_otg_dbg.h"
57051 +
57052 +extern bool fiq_enable;
57053 +
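+/*
+ * Driver-global glue between the DWC_otg PCD and the Linux gadget framework:
+ * it pairs the PCD instance with the struct usb_gadget handed to gadget
+ * drivers and with the usb_ep objects for EP0 and the IN/OUT endpoints.
+ */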
57054 +static struct gadget_wrapper {
57055 + dwc_otg_pcd_t *pcd;
57056 +
57057 + struct usb_gadget gadget;
57058 + struct usb_gadget_driver *driver;
57059 +
57060 + struct usb_ep ep0;
57061 + struct usb_ep in_ep[16];
57062 + struct usb_ep out_ep[16];
57063 +
57064 +} *gadget_wrapper;
57065 +
57066 +/* Display the contents of the buffer */
57067 +extern void dump_msg(const u8 * buf, unsigned int length);
57068 +/**
57069 + * Get the dwc_otg_pcd_ep_t* from a usb_ep* pointer - returns NULL
57070 + * if the endpoint is not found
57071 + */
57072 +static struct dwc_otg_pcd_ep *ep_from_handle(dwc_otg_pcd_t * pcd, void *handle)
57073 +{
57074 + int i;
57075 + if (pcd->ep0.priv == handle) {
57076 + return &pcd->ep0;
57077 + }
57078 +
57079 + for (i = 0; i < MAX_EPS_CHANNELS - 1; i++) {
57080 + if (pcd->in_ep[i].priv == handle)
57081 + return &pcd->in_ep[i];
57082 + if (pcd->out_ep[i].priv == handle)
57083 + return &pcd->out_ep[i];
57084 + }
57085 +
57086 + return NULL;
57087 +}
57088 +
57089 +/* USB Endpoint Operations */
57090 +/*
57091 + * The following sections briefly describe the behavior of the Gadget
57092 + * API endpoint operations implemented in the DWC_otg driver
57093 + * software. Detailed descriptions of the generic behavior of each of
57094 + * these functions can be found in the Linux header file
57095 + * include/linux/usb_gadget.h.
57096 + *
57097 + * The Gadget API provides wrapper functions for each of the function
57098 + * pointers defined in usb_ep_ops. The Gadget Driver calls the wrapper
57099 + * function, which then calls the underlying PCD function. The
57100 + * following sections are named according to the wrapper
57101 + * functions. Within each section, the corresponding DWC_otg PCD
57102 + * function name is specified.
57103 + *
57104 + */
57105 +
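+/*
+ * Illustrative sketch (not part of the driver): roughly how a gadget driver
+ * reaches the wrappers below.  "my_desc" and "ep" stand for a hypothetical
+ * endpoint descriptor and an endpoint claimed from gadget->ep_list; the exact
+ * gadget-core entry points depend on the kernel version.
+ */
+#if 0
+	ep->desc = my_desc;
+	usb_ep_enable(ep);	/* gadget core -> ep_enable(ep, my_desc) -> dwc_otg_pcd_ep_enable() */
+	...
+	usb_ep_disable(ep);	/* gadget core -> ep_disable(ep) -> dwc_otg_pcd_ep_disable() */
+#endif
+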
57106 +/**
57107 + * This function is called by the Gadget Driver for each EP to be
57108 + * configured for the current configuration (SET_CONFIGURATION).
57109 + *
57110 + * This function initializes the dwc_otg_ep_t data structure, and then
57111 + * calls dwc_otg_ep_activate.
57112 + */
57113 +static int ep_enable(struct usb_ep *usb_ep,
57114 + const struct usb_endpoint_descriptor *ep_desc)
57115 +{
57116 + int retval;
57117 +
57118 + DWC_DEBUGPL(DBG_PCDV, "%s(%p,%p)\n", __func__, usb_ep, ep_desc);
57119 +
57120 + if (!usb_ep || !ep_desc || ep_desc->bDescriptorType != USB_DT_ENDPOINT) {
57121 + DWC_WARN("%s, bad ep or descriptor\n", __func__);
57122 + return -EINVAL;
57123 + }
57124 + if (usb_ep == &gadget_wrapper->ep0) {
57125 + DWC_WARN("%s, bad ep(0)\n", __func__);
57126 + return -EINVAL;
57127 + }
57128 +
57129 + /* Check FIFO size? */
57130 + if (!ep_desc->wMaxPacketSize) {
57131 + DWC_WARN("%s, bad %s maxpacket\n", __func__, usb_ep->name);
57132 + return -ERANGE;
57133 + }
57134 +
57135 + if (!gadget_wrapper->driver ||
57136 + gadget_wrapper->gadget.speed == USB_SPEED_UNKNOWN) {
57137 + DWC_WARN("%s, bogus device state\n", __func__);
57138 + return -ESHUTDOWN;
57139 + }
57140 +
57141 + /* Delete after check - MAS */
57142 +#if 0
57143 + nat = (uint32_t) ep_desc->wMaxPacketSize;
57144 + printk(KERN_ALERT "%s: nat (before) =%d\n", __func__, nat);
57145 + nat = (nat >> 11) & 0x03;
57146 + printk(KERN_ALERT "%s: nat (after) =%d\n", __func__, nat);
57147 +#endif
57148 + retval = dwc_otg_pcd_ep_enable(gadget_wrapper->pcd,
57149 + (const uint8_t *)ep_desc,
57150 + (void *)usb_ep);
57151 + if (retval) {
57152 + DWC_WARN("dwc_otg_pcd_ep_enable failed\n");
57153 + return -EINVAL;
57154 + }
57155 +
57156 + usb_ep->maxpacket = le16_to_cpu(ep_desc->wMaxPacketSize);
57157 +
57158 + return 0;
57159 +}
57160 +
57161 +/**
57162 + * This function is called when an EP is disabled due to disconnect or
57163 + * change in configuration. Any pending requests will terminate with a
57164 + * status of -ESHUTDOWN.
57165 + *
57166 + * This function modifies the dwc_otg_ep_t data structure for this EP,
57167 + * and then calls dwc_otg_ep_deactivate.
57168 + */
57169 +static int ep_disable(struct usb_ep *usb_ep)
57170 +{
57171 + int retval;
57172 +
57173 + DWC_DEBUGPL(DBG_PCDV, "%s(%p)\n", __func__, usb_ep);
57174 + if (!usb_ep) {
57175 + DWC_DEBUGPL(DBG_PCD, "%s, %s not enabled\n", __func__,
57176 + usb_ep ? usb_ep->name : NULL);
57177 + return -EINVAL;
57178 + }
57179 +
57180 + retval = dwc_otg_pcd_ep_disable(gadget_wrapper->pcd, usb_ep);
57181 + if (retval) {
57182 + retval = -EINVAL;
57183 + }
57184 +
57185 + return retval;
57186 +}
57187 +
57188 +/**
57189 + * This function allocates a request object to use with the specified
57190 + * endpoint.
57191 + *
57192 + * @param ep The endpoint to be used with the request
57193 + * @param gfp_flags the GFP_* flags to use.
57194 + */
57195 +static struct usb_request *dwc_otg_pcd_alloc_request(struct usb_ep *ep,
57196 + gfp_t gfp_flags)
57197 +{
57198 + struct usb_request *usb_req;
57199 +
57200 + DWC_DEBUGPL(DBG_PCDV, "%s(%p,%d)\n", __func__, ep, gfp_flags);
57201 + if (0 == ep) {
57202 + DWC_WARN("%s() %s\n", __func__, "Invalid EP!\n");
57203 + return 0;
57204 + }
57205 + usb_req = kzalloc(sizeof(*usb_req), gfp_flags);
57206 + if (0 == usb_req) {
57207 + DWC_WARN("%s() %s\n", __func__, "request allocation failed!\n");
57208 + return 0;
57209 + }
57210 + usb_req->dma = DWC_DMA_ADDR_INVALID;
57211 +
57212 + return usb_req;
57213 +}
57214 +
57215 +/**
57216 + * This function frees a request object.
57217 + *
57218 + * @param ep The endpoint associated with the request
57219 + * @param req The request being freed
57220 + */
57221 +static void dwc_otg_pcd_free_request(struct usb_ep *ep, struct usb_request *req)
57222 +{
57223 + DWC_DEBUGPL(DBG_PCDV, "%s(%p,%p)\n", __func__, ep, req);
57224 +
57225 + if (0 == ep || 0 == req) {
57226 + DWC_WARN("%s() %s\n", __func__,
57227 + "Invalid ep or req argument!\n");
57228 + return;
57229 + }
57230 +
57231 + kfree(req);
57232 +}
57233 +
57234 +#if LINUX_VERSION_CODE < KERNEL_VERSION(2,6,28)
57235 +/**
57236 + * This function allocates an I/O buffer to be used for a transfer
57237 + * to/from the specified endpoint.
57238 + *
57239 + * @param usb_ep The endpoint to be used with the request
57240 + * @param bytes The desired number of bytes for the buffer
57241 + * @param dma Pointer to the buffer's DMA address; must be valid
57242 + * @param gfp_flags the GFP_* flags to use.
57243 + * @return address of a new buffer, or NULL if the buffer could not be allocated.
57244 + */
57245 +static void *dwc_otg_pcd_alloc_buffer(struct usb_ep *usb_ep, unsigned bytes,
57246 + dma_addr_t * dma, gfp_t gfp_flags)
57247 +{
57248 + void *buf;
57249 + dwc_otg_pcd_t *pcd = 0;
57250 +
57251 + pcd = gadget_wrapper->pcd;
57252 +
57253 + DWC_DEBUGPL(DBG_PCDV, "%s(%p,%d,%p,%0x)\n", __func__, usb_ep, bytes,
57254 + dma, gfp_flags);
57255 +
57256 + /* Check dword alignment */
57257 + if ((bytes & 0x3UL) != 0) {
57258 +		DWC_WARN("%s() Buffer size is not a multiple of "
57259 + "DWORD size (%d)", __func__, bytes);
57260 + }
57261 +
57262 + buf = dma_alloc_coherent(NULL, bytes, dma, gfp_flags);
57263 + WARN_ON(!buf);
57264 +
57265 + /* Check dword alignment */
57266 + if (((int)buf & 0x3UL) != 0) {
57267 + DWC_WARN("%s() Buffer is not DWORD aligned (%p)",
57268 + __func__, buf);
57269 + }
57270 +
57271 + return buf;
57272 +}
57273 +
57274 +/**
57275 + * This function frees an I/O buffer that was allocated by alloc_buffer.
57276 + *
57277 + * @param usb_ep the endpoint associated with the buffer
57278 + * @param buf address of the buffer
57279 + * @param dma The buffer's DMA address
57280 + * @param bytes The number of bytes of the buffer
57281 + */
57282 +static void dwc_otg_pcd_free_buffer(struct usb_ep *usb_ep, void *buf,
57283 + dma_addr_t dma, unsigned bytes)
57284 +{
57285 + dwc_otg_pcd_t *pcd = 0;
57286 +
57287 + pcd = gadget_wrapper->pcd;
57288 +
57289 + DWC_DEBUGPL(DBG_PCDV, "%s(%p,%0x,%d)\n", __func__, buf, dma, bytes);
57290 +
57291 + dma_free_coherent(NULL, bytes, buf, dma);
57292 +}
57293 +#endif
57294 +
57295 +/**
57296 + * This function is used to submit an I/O Request to an EP.
57297 + *
57298 + * - When the request completes the request's completion callback
57299 + * is called to return the request to the driver.
57300 + * - Any EP, except control EPs, may have multiple requests
57301 + * pending.
57302 + * - Once submitted the request cannot be examined or modified.
57303 + * - Each request is turned into one or more packets.
57304 + * - A BULK EP can queue any amount of data; the transfer is
57305 + * packetized.
57306 + * - Zero length Packets are specified with the request 'zero'
57307 + * flag.
57308 + */
57309 +static int ep_queue(struct usb_ep *usb_ep, struct usb_request *usb_req,
57310 + gfp_t gfp_flags)
57311 +{
57312 + dwc_otg_pcd_t *pcd;
57313 + struct dwc_otg_pcd_ep *ep = NULL;
57314 + int retval = 0, is_isoc_ep = 0;
57315 + dma_addr_t dma_addr = DWC_DMA_ADDR_INVALID;
57316 +
57317 + DWC_DEBUGPL(DBG_PCDV, "%s(%p,%p,%d)\n",
57318 + __func__, usb_ep, usb_req, gfp_flags);
57319 +
57320 + if (!usb_req || !usb_req->complete || !usb_req->buf) {
57321 + DWC_WARN("bad params\n");
57322 + return -EINVAL;
57323 + }
57324 +
57325 + if (!usb_ep) {
57326 + DWC_WARN("bad ep\n");
57327 + return -EINVAL;
57328 + }
57329 +
57330 + pcd = gadget_wrapper->pcd;
57331 + if (!gadget_wrapper->driver ||
57332 + gadget_wrapper->gadget.speed == USB_SPEED_UNKNOWN) {
57333 + DWC_DEBUGPL(DBG_PCDV, "gadget.speed=%d\n",
57334 + gadget_wrapper->gadget.speed);
57335 + DWC_WARN("bogus device state\n");
57336 + return -ESHUTDOWN;
57337 + }
57338 +
57339 + DWC_DEBUGPL(DBG_PCD, "%s queue req %p, len %d buf %p\n",
57340 + usb_ep->name, usb_req, usb_req->length, usb_req->buf);
57341 +
57342 + usb_req->status = -EINPROGRESS;
57343 + usb_req->actual = 0;
57344 +
57345 + ep = ep_from_handle(pcd, usb_ep);
57346 + if (ep == NULL)
57347 + is_isoc_ep = 0;
57348 + else
57349 + is_isoc_ep = (ep->dwc_ep.type == DWC_OTG_EP_TYPE_ISOC) ? 1 : 0;
57350 +#if LINUX_VERSION_CODE < KERNEL_VERSION(2,6,28)
57351 + dma_addr = usb_req->dma;
57352 +#else
57353 + if (GET_CORE_IF(pcd)->dma_enable) {
57354 + dwc_otg_device_t *otg_dev = gadget_wrapper->pcd->otg_dev;
57355 + struct device *dev = NULL;
57356 +
57357 + if (otg_dev != NULL)
57358 + dev = DWC_OTG_OS_GETDEV(otg_dev->os_dep);
57359 +
57360 + if (usb_req->length != 0 &&
57361 + usb_req->dma == DWC_DMA_ADDR_INVALID) {
57362 + dma_addr = dma_map_single(dev, usb_req->buf,
57363 + usb_req->length,
57364 + ep->dwc_ep.is_in ?
57365 + DMA_TO_DEVICE:
57366 + DMA_FROM_DEVICE);
57367 + }
57368 + }
57369 +#endif
57370 +
57371 +#ifdef DWC_UTE_PER_IO
57372 + if (is_isoc_ep == 1) {
57373 + retval = dwc_otg_pcd_xiso_ep_queue(pcd, usb_ep, usb_req->buf, dma_addr,
57374 + usb_req->length, usb_req->zero, usb_req,
57375 + gfp_flags == GFP_ATOMIC ? 1 : 0, &usb_req->ext_req);
57376 + if (retval)
57377 + return -EINVAL;
57378 +
57379 + return 0;
57380 + }
57381 +#endif
57382 + retval = dwc_otg_pcd_ep_queue(pcd, usb_ep, usb_req->buf, dma_addr,
57383 + usb_req->length, usb_req->zero, usb_req,
57384 + gfp_flags == GFP_ATOMIC ? 1 : 0);
57385 + if (retval) {
57386 + return -EINVAL;
57387 + }
57388 +
57389 + return 0;
57390 +}
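+
+/*
+ * Illustrative sketch (not part of the driver): a gadget driver allocating and
+ * queuing a request that ends up in ep_queue() above.  The "my_*" names are
+ * hypothetical.
+ */
+#if 0
+	struct usb_request *req = usb_ep_alloc_request(ep, GFP_KERNEL);	/* -> dwc_otg_pcd_alloc_request() */
+
+	req->buf = my_buf;
+	req->length = my_len;
+	req->complete = my_complete_cb;	/* invoked from _complete() below when the transfer finishes */
+	req->zero = 0;
+	if (usb_ep_queue(ep, req, GFP_ATOMIC))	/* -> ep_queue() -> dwc_otg_pcd_ep_queue() */
+		usb_ep_free_request(ep, req);	/* -> dwc_otg_pcd_free_request() */
+#endif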
57391 +
57392 +/**
57393 + * This function cancels an I/O request from an EP.
57394 + */
57395 +static int ep_dequeue(struct usb_ep *usb_ep, struct usb_request *usb_req)
57396 +{
57397 + DWC_DEBUGPL(DBG_PCDV, "%s(%p,%p)\n", __func__, usb_ep, usb_req);
57398 +
57399 + if (!usb_ep || !usb_req) {
57400 + DWC_WARN("bad argument\n");
57401 + return -EINVAL;
57402 + }
57403 + if (!gadget_wrapper->driver ||
57404 + gadget_wrapper->gadget.speed == USB_SPEED_UNKNOWN) {
57405 + DWC_WARN("bogus device state\n");
57406 + return -ESHUTDOWN;
57407 + }
57408 + if (dwc_otg_pcd_ep_dequeue(gadget_wrapper->pcd, usb_ep, usb_req)) {
57409 + return -EINVAL;
57410 + }
57411 +
57412 + return 0;
57413 +}
57414 +
57415 +/**
57416 + * usb_ep_set_halt stalls an endpoint.
57417 + *
57418 + * usb_ep_clear_halt clears an endpoint halt and resets its data
57419 + * toggle.
57420 + *
57421 + * Both of these functions are implemented with the same underlying
57422 + * function. The behavior depends on the value argument.
57423 + *
57424 + * @param[in] usb_ep the Endpoint to halt or clear halt.
57425 + * @param[in] value
57426 + * - 0 means clear_halt.
57427 + * - 1 means set_halt,
57428 + * - 2 means clear stall lock flag.
57429 + * - 3 means set stall lock flag.
57430 + */
57431 +static int ep_halt(struct usb_ep *usb_ep, int value)
57432 +{
57433 + int retval = 0;
57434 +
57435 +	if (!usb_ep) {
57436 +		DWC_WARN("bad ep\n");
57437 +		return -EINVAL;
57438 +	}
57439 +
57440 +	DWC_DEBUGPL(DBG_PCD, "HALT %s %d\n", usb_ep->name, value);
57441 +
57442 + retval = dwc_otg_pcd_ep_halt(gadget_wrapper->pcd, usb_ep, value);
57443 + if (retval == -DWC_E_AGAIN) {
57444 + return -EAGAIN;
57445 + } else if (retval) {
57446 + retval = -EINVAL;
57447 + }
57448 +
57449 + return retval;
57450 +}
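+
+/*
+ * Illustrative note (not part of the driver): the gadget core normally only
+ * passes 0 or 1 here - usb_ep_set_halt(ep) ends up as ep_halt(ep, 1) and
+ * usb_ep_clear_halt(ep) as ep_halt(ep, 0); the stall lock values 2 and 3 are
+ * expected to be used only by callers inside this driver.
+ */
+#if 0
+	usb_ep_set_halt(ep);	/* -> ep_halt(ep, 1) -> dwc_otg_pcd_ep_halt(pcd, ep, 1) */
+	usb_ep_clear_halt(ep);	/* -> ep_halt(ep, 0), also resets the data toggle */
+#endif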
57451 +
57452 +//#if (LINUX_VERSION_CODE >= KERNEL_VERSION(2,6,30))
57453 +#if 0
57454 +/**
57455 + * ep_wedge: sets the halt feature and ignores clear requests
57456 + *
57457 + * @usb_ep: the endpoint being wedged
57458 + *
57459 + * Use this to stall an endpoint and ignore CLEAR_FEATURE(HALT_ENDPOINT)
57460 + * requests. If the gadget driver clears the halt status, it will
57461 + * automatically unwedge the endpoint.
57462 + *
57463 + * Returns zero on success, else negative errno.
57464 + * Check usb_ep_set_wedge() at "usb_gadget.h" for details
57465 + */
57466 +static int ep_wedge(struct usb_ep *usb_ep)
57467 +{
57468 + int retval = 0;
57469 +
57470 + DWC_DEBUGPL(DBG_PCD, "WEDGE %s\n", usb_ep->name);
57471 +
57472 + if (!usb_ep) {
57473 + DWC_WARN("bad ep\n");
57474 + return -EINVAL;
57475 + }
57476 +
57477 + retval = dwc_otg_pcd_ep_wedge(gadget_wrapper->pcd, usb_ep);
57478 + if (retval == -DWC_E_AGAIN) {
57479 + retval = -EAGAIN;
57480 + } else if (retval) {
57481 + retval = -EINVAL;
57482 + }
57483 +
57484 + return retval;
57485 +}
57486 +#endif
57487 +
57488 +#ifdef DWC_EN_ISOC
57489 +/**
57490 + * This function is used to submit an ISOC Transfer Request to an EP.
57491 + *
57492 + * - Every time a sync period completes the request's completion callback
57493 + * is called to provide data to the gadget driver.
57494 + * - Once submitted the request cannot be modified.
57495 + * - Each request is turned into periodic data packets until the ISO
57496 + * transfer is stopped.
57497 + */
57498 +static int iso_ep_start(struct usb_ep *usb_ep, struct usb_iso_request *req,
57499 + gfp_t gfp_flags)
57500 +{
57501 + int retval = 0;
57502 +
57503 + if (!req || !req->process_buffer || !req->buf0 || !req->buf1) {
57504 + DWC_WARN("bad params\n");
57505 + return -EINVAL;
57506 + }
57507 +
57508 + if (!usb_ep) {
57509 + DWC_PRINTF("bad params\n");
57510 + return -EINVAL;
57511 + }
57512 +
57513 + req->status = -EINPROGRESS;
57514 +
57515 + retval =
57516 + dwc_otg_pcd_iso_ep_start(gadget_wrapper->pcd, usb_ep, req->buf0,
57517 + req->buf1, req->dma0, req->dma1,
57518 + req->sync_frame, req->data_pattern_frame,
57519 + req->data_per_frame,
57520 + req->
57521 + flags & USB_REQ_ISO_ASAP ? -1 :
57522 + req->start_frame, req->buf_proc_intrvl,
57523 + req, gfp_flags == GFP_ATOMIC ? 1 : 0);
57524 +
57525 + if (retval) {
57526 + return -EINVAL;
57527 + }
57528 +
57529 + return retval;
57530 +}
57531 +
57532 +/**
57533 + * This function stops ISO EP Periodic Data Transfer.
57534 + */
57535 +static int iso_ep_stop(struct usb_ep *usb_ep, struct usb_iso_request *req)
57536 +{
57537 + int retval = 0;
57538 + if (!usb_ep) {
57539 + DWC_WARN("bad ep\n");
57540 + }
57541 +
57542 + if (!gadget_wrapper->driver ||
57543 + gadget_wrapper->gadget.speed == USB_SPEED_UNKNOWN) {
57544 + DWC_DEBUGPL(DBG_PCDV, "gadget.speed=%d\n",
57545 + gadget_wrapper->gadget.speed);
57546 + DWC_WARN("bogus device state\n");
57547 + }
57548 +
57549 +	retval = dwc_otg_pcd_iso_ep_stop(gadget_wrapper->pcd, usb_ep, req);
57550 + if (retval) {
57551 + retval = -EINVAL;
57552 + }
57553 +
57554 + return retval;
57555 +}
57556 +
57557 +static struct usb_iso_request *alloc_iso_request(struct usb_ep *ep,
57558 + int packets, gfp_t gfp_flags)
57559 +{
57560 + struct usb_iso_request *pReq = NULL;
57561 + uint32_t req_size;
57562 +
57563 + req_size = sizeof(struct usb_iso_request);
57564 + req_size +=
57565 + (2 * packets * (sizeof(struct usb_gadget_iso_packet_descriptor)));
57566 +
57567 + pReq = kmalloc(req_size, gfp_flags);
57568 + if (!pReq) {
57569 + DWC_WARN("Can't allocate Iso Request\n");
57570 + return 0;
57571 + }
57572 + pReq->iso_packet_desc0 = (void *)(pReq + 1);
57573 +
57574 + pReq->iso_packet_desc1 = pReq->iso_packet_desc0 + packets;
57575 +
57576 + return pReq;
57577 +}
57578 +
57579 +static void free_iso_request(struct usb_ep *ep, struct usb_iso_request *req)
57580 +{
57581 + kfree(req);
57582 +}
57583 +
57584 +static struct usb_isoc_ep_ops dwc_otg_pcd_ep_ops = {
57585 + .ep_ops = {
57586 + .enable = ep_enable,
57587 + .disable = ep_disable,
57588 +
57589 + .alloc_request = dwc_otg_pcd_alloc_request,
57590 + .free_request = dwc_otg_pcd_free_request,
57591 +
57592 +#if LINUX_VERSION_CODE < KERNEL_VERSION(2,6,28)
57593 + .alloc_buffer = dwc_otg_pcd_alloc_buffer,
57594 + .free_buffer = dwc_otg_pcd_free_buffer,
57595 +#endif
57596 +
57597 + .queue = ep_queue,
57598 + .dequeue = ep_dequeue,
57599 +
57600 + .set_halt = ep_halt,
57601 + .fifo_status = 0,
57602 + .fifo_flush = 0,
57603 + },
57604 + .iso_ep_start = iso_ep_start,
57605 + .iso_ep_stop = iso_ep_stop,
57606 + .alloc_iso_request = alloc_iso_request,
57607 + .free_iso_request = free_iso_request,
57608 +};
57609 +
57610 +#else
57611 +
57612 + int (*enable) (struct usb_ep *ep,
57613 + const struct usb_endpoint_descriptor *desc);
57614 + int (*disable) (struct usb_ep *ep);
57615 +
57616 + struct usb_request *(*alloc_request) (struct usb_ep *ep,
57617 + gfp_t gfp_flags);
57618 + void (*free_request) (struct usb_ep *ep, struct usb_request *req);
57619 +
57620 + int (*queue) (struct usb_ep *ep, struct usb_request *req,
57621 + gfp_t gfp_flags);
57622 + int (*dequeue) (struct usb_ep *ep, struct usb_request *req);
57623 +
57624 + int (*set_halt) (struct usb_ep *ep, int value);
57625 + int (*set_wedge) (struct usb_ep *ep);
57626 +
57627 + int (*fifo_status) (struct usb_ep *ep);
57628 + void (*fifo_flush) (struct usb_ep *ep);
57629 +static struct usb_ep_ops dwc_otg_pcd_ep_ops = {
57630 + .enable = ep_enable,
57631 + .disable = ep_disable,
57632 +
57633 + .alloc_request = dwc_otg_pcd_alloc_request,
57634 + .free_request = dwc_otg_pcd_free_request,
57635 +
57636 +#if LINUX_VERSION_CODE < KERNEL_VERSION(2,6,28)
57637 + .alloc_buffer = dwc_otg_pcd_alloc_buffer,
57638 + .free_buffer = dwc_otg_pcd_free_buffer,
57639 +#else
57640 + /* .set_wedge = ep_wedge, */
57641 + .set_wedge = NULL, /* uses set_halt instead */
57642 +#endif
57643 +
57644 + .queue = ep_queue,
57645 + .dequeue = ep_dequeue,
57646 +
57647 + .set_halt = ep_halt,
57648 + .fifo_status = 0,
57649 + .fifo_flush = 0,
57650 +
57651 +};
57652 +
57653 +#endif /* _EN_ISOC_ */
57654 +/* Gadget Operations */
57655 +/**
57656 + * The following gadget operations will be implemented in the DWC_otg
57657 + * PCD. Functions in the API that are not described below are not
57658 + * implemented.
57659 + *
57660 + * The Gadget API provides wrapper functions for each of the function
57661 + * pointers defined in usb_gadget_ops. The Gadget Driver calls the
57662 + * wrapper function, which then calls the underlying PCD function. The
57663 + * following sections are named according to the wrapper functions
57664 + * (except for ioctl, which doesn't have a wrapper function). Within
57665 + * each section, the corresponding DWC_otg PCD function name is
57666 + * specified.
57667 + *
57668 + */
57669 +
57670 +/**
57671 + * Gets the USB Frame number of the last SOF.
57672 + */
57673 +static int get_frame_number(struct usb_gadget *gadget)
57674 +{
57675 + struct gadget_wrapper *d;
57676 +
57677 + DWC_DEBUGPL(DBG_PCDV, "%s(%p)\n", __func__, gadget);
57678 +
57679 + if (gadget == 0) {
57680 + return -ENODEV;
57681 + }
57682 +
57683 + d = container_of(gadget, struct gadget_wrapper, gadget);
57684 + return dwc_otg_pcd_get_frame_number(d->pcd);
57685 +}
57686 +
57687 +#ifdef CONFIG_USB_DWC_OTG_LPM
57688 +static int test_lpm_enabled(struct usb_gadget *gadget)
57689 +{
57690 + struct gadget_wrapper *d;
57691 +
57692 + d = container_of(gadget, struct gadget_wrapper, gadget);
57693 +
57694 + return dwc_otg_pcd_is_lpm_enabled(d->pcd);
57695 +}
57696 +#endif
57697 +
57698 +/**
57699 + * Initiates Session Request Protocol (SRP) to wake up the host if no
57700 + * session is in progress. If a session is already in progress, but
57701 + * the device is suspended, remote wakeup signaling is started.
57702 + *
57703 + */
57704 +static int wakeup(struct usb_gadget *gadget)
57705 +{
57706 + struct gadget_wrapper *d;
57707 +
57708 + DWC_DEBUGPL(DBG_PCDV, "%s(%p)\n", __func__, gadget);
57709 +
57710 + if (gadget == 0) {
57711 + return -ENODEV;
57712 + } else {
57713 + d = container_of(gadget, struct gadget_wrapper, gadget);
57714 + }
57715 + dwc_otg_pcd_wakeup(d->pcd);
57716 + return 0;
57717 +}
57718 +
57719 +static const struct usb_gadget_ops dwc_otg_pcd_ops = {
57720 + .get_frame = get_frame_number,
57721 + .wakeup = wakeup,
57722 +#ifdef CONFIG_USB_DWC_OTG_LPM
57723 + .lpm_support = test_lpm_enabled,
57724 +#endif
57725 + // current versions must always be self-powered
57726 +};
57727 +
57728 +static int _setup(dwc_otg_pcd_t * pcd, uint8_t * bytes)
57729 +{
57730 + int retval = -DWC_E_NOT_SUPPORTED;
57731 + if (gadget_wrapper->driver && gadget_wrapper->driver->setup) {
57732 + retval = gadget_wrapper->driver->setup(&gadget_wrapper->gadget,
57733 + (struct usb_ctrlrequest
57734 + *)bytes);
57735 + }
57736 +
57737 + if (retval == -ENOTSUPP) {
57738 + retval = -DWC_E_NOT_SUPPORTED;
57739 + } else if (retval < 0) {
57740 + retval = -DWC_E_INVALID;
57741 + }
57742 +
57743 + return retval;
57744 +}
57745 +
57746 +#ifdef DWC_EN_ISOC
57747 +static int _isoc_complete(dwc_otg_pcd_t * pcd, void *ep_handle,
57748 + void *req_handle, int proc_buf_num)
57749 +{
57750 + int i, packet_count;
57751 + struct usb_gadget_iso_packet_descriptor *iso_packet = 0;
57752 + struct usb_iso_request *iso_req = req_handle;
57753 +
57754 + if (proc_buf_num) {
57755 + iso_packet = iso_req->iso_packet_desc1;
57756 + } else {
57757 + iso_packet = iso_req->iso_packet_desc0;
57758 + }
57759 + packet_count =
57760 + dwc_otg_pcd_get_iso_packet_count(pcd, ep_handle, req_handle);
57761 + for (i = 0; i < packet_count; ++i) {
57762 + int status;
57763 + int actual;
57764 + int offset;
57765 + dwc_otg_pcd_get_iso_packet_params(pcd, ep_handle, req_handle,
57766 + i, &status, &actual, &offset);
57767 + switch (status) {
57768 + case -DWC_E_NO_DATA:
57769 + status = -ENODATA;
57770 + break;
57771 + default:
57772 + if (status) {
57773 + DWC_PRINTF("unknown status in isoc packet\n");
57774 + }
57775 +
57776 + }
57777 + iso_packet[i].status = status;
57778 + iso_packet[i].offset = offset;
57779 + iso_packet[i].actual_length = actual;
57780 + }
57781 +
57782 + iso_req->status = 0;
57783 + iso_req->process_buffer(ep_handle, iso_req);
57784 +
57785 + return 0;
57786 +}
57787 +#endif /* DWC_EN_ISOC */
57788 +
57789 +#ifdef DWC_UTE_PER_IO
57790 +/**
57791 + * Copy the contents of the extended request to the Linux usb_request's
57792 + * extended part and call the gadget's completion.
57793 + *
57794 + * @param pcd Pointer to the pcd structure
57795 + * @param ep_handle Void pointer to the usb_ep structure
57796 + * @param req_handle Void pointer to the usb_request structure
57797 + * @param status Request status returned from the portable logic
57798 + * @param ereq_port Void pointer to the extended request structure
57799 + * created in the the portable part that contains the
57800 + * results of the processed iso packets.
57801 + */
57802 +static int _xisoc_complete(dwc_otg_pcd_t * pcd, void *ep_handle,
57803 + void *req_handle, int32_t status, void *ereq_port)
57804 +{
57805 + struct dwc_ute_iso_req_ext *ereqorg = NULL;
57806 + struct dwc_iso_xreq_port *ereqport = NULL;
57807 + struct dwc_ute_iso_packet_descriptor *desc_org = NULL;
57808 + int i;
57809 + struct usb_request *req;
57810 + //struct dwc_ute_iso_packet_descriptor *
57811 + //int status = 0;
57812 +
57813 + req = (struct usb_request *)req_handle;
57814 + ereqorg = &req->ext_req;
57815 + ereqport = (struct dwc_iso_xreq_port *)ereq_port;
57816 + desc_org = ereqorg->per_io_frame_descs;
57817 +
57818 + if (req && req->complete) {
57819 + /* Copy the request data from the portable logic to our request */
57820 + for (i = 0; i < ereqport->pio_pkt_count; i++) {
57821 + desc_org[i].actual_length =
57822 + ereqport->per_io_frame_descs[i].actual_length;
57823 + desc_org[i].status =
57824 + ereqport->per_io_frame_descs[i].status;
57825 + }
57826 +
57827 + switch (status) {
57828 + case -DWC_E_SHUTDOWN:
57829 + req->status = -ESHUTDOWN;
57830 + break;
57831 + case -DWC_E_RESTART:
57832 + req->status = -ECONNRESET;
57833 + break;
57834 + case -DWC_E_INVALID:
57835 + req->status = -EINVAL;
57836 + break;
57837 + case -DWC_E_TIMEOUT:
57838 + req->status = -ETIMEDOUT;
57839 + break;
57840 + default:
57841 + req->status = status;
57842 + }
57843 +
57844 + /* And call the gadget's completion */
57845 + req->complete(ep_handle, req);
57846 + }
57847 +
57848 + return 0;
57849 +}
57850 +#endif /* DWC_UTE_PER_IO */
57851 +
57852 +static int _complete(dwc_otg_pcd_t * pcd, void *ep_handle,
57853 + void *req_handle, int32_t status, uint32_t actual)
57854 +{
57855 + struct usb_request *req = (struct usb_request *)req_handle;
57856 +#if LINUX_VERSION_CODE > KERNEL_VERSION(2,6,27)
57857 + struct dwc_otg_pcd_ep *ep = NULL;
57858 +#endif
57859 +
57860 + if (req && req->complete) {
57861 + switch (status) {
57862 + case -DWC_E_SHUTDOWN:
57863 + req->status = -ESHUTDOWN;
57864 + break;
57865 + case -DWC_E_RESTART:
57866 + req->status = -ECONNRESET;
57867 + break;
57868 + case -DWC_E_INVALID:
57869 + req->status = -EINVAL;
57870 + break;
57871 + case -DWC_E_TIMEOUT:
57872 + req->status = -ETIMEDOUT;
57873 + break;
57874 + default:
57875 + req->status = status;
57876 +
57877 + }
57878 +
57879 + req->actual = actual;
57880 + DWC_SPINUNLOCK(pcd->lock);
57881 + req->complete(ep_handle, req);
57882 + DWC_SPINLOCK(pcd->lock);
57883 + }
57884 +#if LINUX_VERSION_CODE > KERNEL_VERSION(2,6,27)
57885 + ep = ep_from_handle(pcd, ep_handle);
57886 + if (GET_CORE_IF(pcd)->dma_enable) {
57887 + if (req->length != 0) {
57888 + dwc_otg_device_t *otg_dev = gadget_wrapper->pcd->otg_dev;
57889 + struct device *dev = NULL;
57890 +
57891 + if (otg_dev != NULL)
57892 + dev = DWC_OTG_OS_GETDEV(otg_dev->os_dep);
57893 +
57894 + dma_unmap_single(dev, req->dma, req->length,
57895 + ep->dwc_ep.is_in ?
57896 + DMA_TO_DEVICE: DMA_FROM_DEVICE);
57897 + }
57898 + }
57899 +#endif
57900 +
57901 + return 0;
57902 +}
57903 +
57904 +static int _connect(dwc_otg_pcd_t * pcd, int speed)
57905 +{
57906 + gadget_wrapper->gadget.speed = speed;
57907 + return 0;
57908 +}
57909 +
57910 +static int _disconnect(dwc_otg_pcd_t * pcd)
57911 +{
57912 + if (gadget_wrapper->driver && gadget_wrapper->driver->disconnect) {
57913 + gadget_wrapper->driver->disconnect(&gadget_wrapper->gadget);
57914 + }
57915 + return 0;
57916 +}
57917 +
57918 +static int _resume(dwc_otg_pcd_t * pcd)
57919 +{
57920 + if (gadget_wrapper->driver && gadget_wrapper->driver->resume) {
57921 + gadget_wrapper->driver->resume(&gadget_wrapper->gadget);
57922 + }
57923 +
57924 + return 0;
57925 +}
57926 +
57927 +static int _suspend(dwc_otg_pcd_t * pcd)
57928 +{
57929 + if (gadget_wrapper->driver && gadget_wrapper->driver->suspend) {
57930 + gadget_wrapper->driver->suspend(&gadget_wrapper->gadget);
57931 + }
57932 + return 0;
57933 +}
57934 +
57935 +/**
57936 + * This function updates the otg values in the gadget structure.
57937 + */
57938 +static int _hnp_changed(dwc_otg_pcd_t * pcd)
57939 +{
57940 +
57941 + if (!gadget_wrapper->gadget.is_otg)
57942 + return 0;
57943 +
57944 + gadget_wrapper->gadget.b_hnp_enable = get_b_hnp_enable(pcd);
57945 + gadget_wrapper->gadget.a_hnp_support = get_a_hnp_support(pcd);
57946 + gadget_wrapper->gadget.a_alt_hnp_support = get_a_alt_hnp_support(pcd);
57947 + return 0;
57948 +}
57949 +
57950 +static int _reset(dwc_otg_pcd_t * pcd)
57951 +{
57952 + return 0;
57953 +}
57954 +
57955 +#ifdef DWC_UTE_CFI
57956 +static int _cfi_setup(dwc_otg_pcd_t * pcd, void *cfi_req)
57957 +{
57958 + int retval = -DWC_E_INVALID;
57959 + if (gadget_wrapper->driver->cfi_feature_setup) {
57960 + retval =
57961 + gadget_wrapper->driver->
57962 + cfi_feature_setup(&gadget_wrapper->gadget,
57963 + (struct cfi_usb_ctrlrequest *)cfi_req);
57964 + }
57965 +
57966 + return retval;
57967 +}
57968 +#endif
57969 +
57970 +static const struct dwc_otg_pcd_function_ops fops = {
57971 + .complete = _complete,
57972 +#ifdef DWC_EN_ISOC
57973 + .isoc_complete = _isoc_complete,
57974 +#endif
57975 + .setup = _setup,
57976 + .disconnect = _disconnect,
57977 + .connect = _connect,
57978 + .resume = _resume,
57979 + .suspend = _suspend,
57980 + .hnp_changed = _hnp_changed,
57981 + .reset = _reset,
57982 +#ifdef DWC_UTE_CFI
57983 + .cfi_setup = _cfi_setup,
57984 +#endif
57985 +#ifdef DWC_UTE_PER_IO
57986 + .xisoc_complete = _xisoc_complete,
57987 +#endif
57988 +};
57989 +
57990 +/**
57991 + * This function is the top level PCD interrupt handler.
57992 + */
57993 +static irqreturn_t dwc_otg_pcd_irq(int irq, void *dev)
57994 +{
57995 + dwc_otg_pcd_t *pcd = dev;
57996 + int32_t retval = IRQ_NONE;
57997 +
57998 + retval = dwc_otg_pcd_handle_intr(pcd);
57999 + if (retval != 0) {
58000 + S3C2410X_CLEAR_EINTPEND();
58001 + }
58002 + return IRQ_RETVAL(retval);
58003 +}
58004 +
58005 +/**
58006 + * This function initializes the usb_ep structures to their default
58007 + * state.
58008 + *
58009 + * @param d Pointer to the gadget_wrapper.
58010 + */
58011 +void gadget_add_eps(struct gadget_wrapper *d)
58012 +{
58013 + static const char *names[] = {
58014 +
58015 + "ep0",
58016 + "ep1in",
58017 + "ep2in",
58018 + "ep3in",
58019 + "ep4in",
58020 + "ep5in",
58021 + "ep6in",
58022 + "ep7in",
58023 + "ep8in",
58024 + "ep9in",
58025 + "ep10in",
58026 + "ep11in",
58027 + "ep12in",
58028 + "ep13in",
58029 + "ep14in",
58030 + "ep15in",
58031 + "ep1out",
58032 + "ep2out",
58033 + "ep3out",
58034 + "ep4out",
58035 + "ep5out",
58036 + "ep6out",
58037 + "ep7out",
58038 + "ep8out",
58039 + "ep9out",
58040 + "ep10out",
58041 + "ep11out",
58042 + "ep12out",
58043 + "ep13out",
58044 + "ep14out",
58045 + "ep15out"
58046 + };
58047 +
58048 + int i;
58049 + struct usb_ep *ep;
58050 + int8_t dev_endpoints;
58051 +
58052 + DWC_DEBUGPL(DBG_PCDV, "%s\n", __func__);
58053 +
58054 + INIT_LIST_HEAD(&d->gadget.ep_list);
58055 + d->gadget.ep0 = &d->ep0;
58056 + d->gadget.speed = USB_SPEED_UNKNOWN;
58057 +
58058 + INIT_LIST_HEAD(&d->gadget.ep0->ep_list);
58059 +
58060 + /**
58061 + * Initialize the EP0 structure.
58062 + */
58063 + ep = &d->ep0;
58064 +
58065 + /* Init the usb_ep structure. */
58066 + ep->name = names[0];
58067 + ep->ops = (struct usb_ep_ops *)&dwc_otg_pcd_ep_ops;
58068 +
58069 + /**
58070 + * @todo NGS: What should the max packet size be set to
58071 + * here? Before EP type is set?
58072 + */
58073 + ep->maxpacket = MAX_PACKET_SIZE;
58074 + dwc_otg_pcd_ep_enable(d->pcd, NULL, ep);
58075 +
58076 + list_add_tail(&ep->ep_list, &d->gadget.ep_list);
58077 +
58078 + /**
58079 + * Initialize the EP structures.
58080 + */
58081 + dev_endpoints = d->pcd->core_if->dev_if->num_in_eps;
58082 +
58083 + for (i = 0; i < dev_endpoints; i++) {
58084 + ep = &d->in_ep[i];
58085 +
58086 + /* Init the usb_ep structure. */
58087 + ep->name = names[d->pcd->in_ep[i].dwc_ep.num];
58088 + ep->ops = (struct usb_ep_ops *)&dwc_otg_pcd_ep_ops;
58089 +
58090 + /**
58091 + * @todo NGS: What should the max packet size be set to
58092 + * here? Before EP type is set?
58093 + */
58094 + ep->maxpacket = MAX_PACKET_SIZE;
58095 + list_add_tail(&ep->ep_list, &d->gadget.ep_list);
58096 + }
58097 +
58098 + dev_endpoints = d->pcd->core_if->dev_if->num_out_eps;
58099 +
58100 + for (i = 0; i < dev_endpoints; i++) {
58101 + ep = &d->out_ep[i];
58102 +
58103 + /* Init the usb_ep structure. */
58104 + ep->name = names[15 + d->pcd->out_ep[i].dwc_ep.num];
58105 + ep->ops = (struct usb_ep_ops *)&dwc_otg_pcd_ep_ops;
58106 +
58107 + /**
58108 + * @todo NGS: What should the max packet size be set to
58109 + * here? Before EP type is set?
58110 + */
58111 + ep->maxpacket = MAX_PACKET_SIZE;
58112 +
58113 + list_add_tail(&ep->ep_list, &d->gadget.ep_list);
58114 + }
58115 +
58116 +	/* Remove ep0 from the list; it is referenced separately via the ep0 pointer. */
58117 + list_del_init(&d->ep0.ep_list);
58118 +
58119 + d->ep0.maxpacket = MAX_EP0_SIZE;
58120 +}
58121 +
58122 +/**
58123 + * This function releases the Gadget device; it is
58124 + * required by device_unregister().
58125 + *
58126 + * @todo Should this do something? Should it free the PCD?
58127 + */
58128 +static void dwc_otg_pcd_gadget_release(struct device *dev)
58129 +{
58130 + DWC_DEBUGPL(DBG_PCDV, "%s(%p)\n", __func__, dev);
58131 +}
58132 +
58133 +static struct gadget_wrapper *alloc_wrapper(dwc_bus_dev_t *_dev)
58134 +{
58135 + static char pcd_name[] = "dwc_otg_pcd";
58136 + dwc_otg_device_t *otg_dev = DWC_OTG_BUSDRVDATA(_dev);
58137 + struct gadget_wrapper *d;
58138 + int retval;
58139 +
58140 + d = DWC_ALLOC(sizeof(*d));
58141 + if (d == NULL) {
58142 + return NULL;
58143 + }
58144 +
58145 + memset(d, 0, sizeof(*d));
58146 +
58147 + d->gadget.name = pcd_name;
58148 + d->pcd = otg_dev->pcd;
58149 +
58150 +#if LINUX_VERSION_CODE < KERNEL_VERSION(2,6,30)
58151 + strcpy(d->gadget.dev.bus_id, "gadget");
58152 +#else
58153 + dev_set_name(&d->gadget.dev, "%s", "gadget");
58154 +#endif
58155 +
58156 + d->gadget.dev.parent = &_dev->dev;
58157 + d->gadget.dev.release = dwc_otg_pcd_gadget_release;
58158 + d->gadget.ops = &dwc_otg_pcd_ops;
58159 + d->gadget.max_speed = dwc_otg_pcd_is_dualspeed(otg_dev->pcd) ? USB_SPEED_HIGH:USB_SPEED_FULL;
58160 + d->gadget.is_otg = dwc_otg_pcd_is_otg(otg_dev->pcd);
58161 +
58162 + d->driver = 0;
58163 + /* Register the gadget device */
58164 + retval = device_register(&d->gadget.dev);
58165 + if (retval != 0) {
58166 + DWC_ERROR("device_register failed\n");
58167 + DWC_FREE(d);
58168 + return NULL;
58169 + }
58170 +
58171 + return d;
58172 +}
58173 +
58174 +static void free_wrapper(struct gadget_wrapper *d)
58175 +{
58176 + if (d->driver) {
58177 + /* should have been done already by driver model core */
58178 + DWC_WARN("driver '%s' is still registered\n",
58179 + d->driver->driver.name);
58180 +#ifdef CONFIG_USB_GADGET
58181 + usb_gadget_unregister_driver(d->driver);
58182 +#endif
58183 + }
58184 +
58185 + device_unregister(&d->gadget.dev);
58186 + DWC_FREE(d);
58187 +}
58188 +
58189 +/**
58190 + * This function initializes the PCD portion of the driver.
58191 + *
58192 + */
58193 +int pcd_init(dwc_bus_dev_t *_dev)
58194 +{
58195 + dwc_otg_device_t *otg_dev = DWC_OTG_BUSDRVDATA(_dev);
58196 + int retval = 0;
58197 +
58198 + DWC_DEBUGPL(DBG_PCDV, "%s(%p) otg_dev=%p\n", __func__, _dev, otg_dev);
58199 +
58200 + otg_dev->pcd = dwc_otg_pcd_init(otg_dev);
58201 +
58202 + if (!otg_dev->pcd) {
58203 + DWC_ERROR("dwc_otg_pcd_init failed\n");
58204 + return -ENOMEM;
58205 + }
58206 +
58207 + otg_dev->pcd->otg_dev = otg_dev;
58208 + gadget_wrapper = alloc_wrapper(_dev);
58209 +
58210 + /*
58211 + * Initialize EP structures
58212 + */
58213 + gadget_add_eps(gadget_wrapper);
58214 + /*
58215 +	 * Set up the interrupt handler
58216 + */
58217 + DWC_DEBUGPL(DBG_ANY, "registering handler for irq%d\n",
58218 + otg_dev->os_dep.irq_num);
58219 + retval = request_irq(otg_dev->os_dep.irq_num, dwc_otg_pcd_irq,
58220 + IRQF_SHARED, gadget_wrapper->gadget.name,
58221 + otg_dev->pcd);
58222 + if (retval != 0) {
58223 + DWC_ERROR("request of irq%d failed\n", otg_dev->os_dep.irq_num);
58224 + free_wrapper(gadget_wrapper);
58225 + return -EBUSY;
58226 + }
58227 +
58228 + dwc_otg_pcd_start(gadget_wrapper->pcd, &fops);
58229 +
58230 + return retval;
58231 +}
58232 +
58233 +/**
58234 + * Cleanup the PCD.
58235 + */
58236 +void pcd_remove(dwc_bus_dev_t *_dev)
58237 +{
58238 + dwc_otg_device_t *otg_dev = DWC_OTG_BUSDRVDATA(_dev);
58239 + dwc_otg_pcd_t *pcd = otg_dev->pcd;
58240 +
58241 + DWC_DEBUGPL(DBG_PCDV, "%s(%p) otg_dev %p\n", __func__, _dev, otg_dev);
58242 +
58243 + /*
58244 + * Free the IRQ
58245 + */
58246 + free_irq(otg_dev->os_dep.irq_num, pcd);
58247 + dwc_otg_pcd_remove(otg_dev->pcd);
58248 + free_wrapper(gadget_wrapper);
58249 + otg_dev->pcd = 0;
58250 +}
58251 +
58252 +#endif /* DWC_HOST_ONLY */
58253 --- /dev/null
58254 +++ b/drivers/usb/host/dwc_otg/dwc_otg_regs.h
58255 @@ -0,0 +1,2550 @@
58256 +/* ==========================================================================
58257 + * $File: //dwh/usb_iip/dev/software/otg/linux/drivers/dwc_otg_regs.h $
58258 + * $Revision: #98 $
58259 + * $Date: 2012/08/10 $
58260 + * $Change: 2047372 $
58261 + *
58262 + * Synopsys HS OTG Linux Software Driver and documentation (hereinafter,
58263 + * "Software") is an Unsupported proprietary work of Synopsys, Inc. unless
58264 + * otherwise expressly agreed to in writing between Synopsys and you.
58265 + *
58266 + * The Software IS NOT an item of Licensed Software or Licensed Product under
58267 + * any End User Software License Agreement or Agreement for Licensed Product
58268 + * with Synopsys or any supplement thereto. You are permitted to use and
58269 + * redistribute this Software in source and binary forms, with or without
58270 + * modification, provided that redistributions of source code must retain this
58271 + * notice. You may not view, use, disclose, copy or distribute this file or
58272 + * any information contained herein except pursuant to this license grant from
58273 + * Synopsys. If you do not agree with this notice, including the disclaimer
58274 + * below, then you are not authorized to use the Software.
58275 + *
58276 + * THIS SOFTWARE IS BEING DISTRIBUTED BY SYNOPSYS SOLELY ON AN "AS IS" BASIS
58277 + * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
58278 + * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
58279 + * ARE HEREBY DISCLAIMED. IN NO EVENT SHALL SYNOPSYS BE LIABLE FOR ANY DIRECT,
58280 + * INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
58281 + * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
58282 + * SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
58283 + * CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
58284 + * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
58285 + * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH
58286 + * DAMAGE.
58287 + * ========================================================================== */
58288 +
58289 +#ifndef __DWC_OTG_REGS_H__
58290 +#define __DWC_OTG_REGS_H__
58291 +
58292 +#include "dwc_otg_core_if.h"
58293 +
58294 +/**
58295 + * @file
58296 + *
58297 + * This file contains the data structures for accessing the DWC_otg core registers.
58298 + *
58299 + * The application interfaces with the HS OTG core by reading from and
58300 + * writing to the Control and Status Register (CSR) space through the
58301 + * AHB Slave interface. These registers are 32 bits wide, and the
58302 + * addresses are 32-bit-block aligned.
58303 + * CSRs are classified as follows:
58304 + * - Core Global Registers
58305 + * - Device Mode Registers
58306 + * - Device Global Registers
58307 + * - Device Endpoint Specific Registers
58308 + * - Host Mode Registers
58309 + * - Host Global Registers
58310 + * - Host Port CSRs
58311 + * - Host Channel Specific Registers
58312 + *
58313 + * Only the Core Global registers can be accessed in both Device and
58314 + * Host modes. When the HS OTG core is operating in one mode, either
58315 + * Device or Host, the application must not access registers from the
58316 + * other mode. When the core switches from one mode to another, the
58317 + * registers in the new mode of operation must be reprogrammed as they
58318 + * would be after a power-on reset.
58319 + */
58320 +
58321 +/****************************************************************************/
58322 +/** DWC_otg Core registers.
58323 + * The dwc_otg_core_global_regs structure defines the size
58324 + * and relative field offsets for the Core Global registers.
58325 + */
58326 +typedef struct dwc_otg_core_global_regs {
58327 + /** OTG Control and Status Register. <i>Offset: 000h</i> */
58328 + volatile uint32_t gotgctl;
58329 + /** OTG Interrupt Register. <i>Offset: 004h</i> */
58330 + volatile uint32_t gotgint;
58331 + /**Core AHB Configuration Register. <i>Offset: 008h</i> */
58332 + volatile uint32_t gahbcfg;
58333 +
58334 +#define DWC_GLBINTRMASK 0x0001
58335 +#define DWC_DMAENABLE 0x0020
58336 +#define DWC_NPTXEMPTYLVL_EMPTY 0x0080
58337 +#define DWC_NPTXEMPTYLVL_HALFEMPTY 0x0000
58338 +#define DWC_PTXEMPTYLVL_EMPTY 0x0100
58339 +#define DWC_PTXEMPTYLVL_HALFEMPTY 0x0000
58340 +
58341 + /**Core USB Configuration Register. <i>Offset: 00Ch</i> */
58342 + volatile uint32_t gusbcfg;
58343 + /**Core Reset Register. <i>Offset: 010h</i> */
58344 + volatile uint32_t grstctl;
58345 + /**Core Interrupt Register. <i>Offset: 014h</i> */
58346 + volatile uint32_t gintsts;
58347 + /**Core Interrupt Mask Register. <i>Offset: 018h</i> */
58348 + volatile uint32_t gintmsk;
58349 + /**Receive Status Queue Read Register (Read Only). <i>Offset: 01Ch</i> */
58350 + volatile uint32_t grxstsr;
58351 + /**Receive Status Queue Read & POP Register (Read Only). <i>Offset: 020h</i>*/
58352 + volatile uint32_t grxstsp;
58353 + /**Receive FIFO Size Register. <i>Offset: 024h</i> */
58354 + volatile uint32_t grxfsiz;
58355 + /**Non Periodic Transmit FIFO Size Register. <i>Offset: 028h</i> */
58356 + volatile uint32_t gnptxfsiz;
58357 + /**Non Periodic Transmit FIFO/Queue Status Register (Read
58358 + * Only). <i>Offset: 02Ch</i> */
58359 + volatile uint32_t gnptxsts;
58360 + /**I2C Access Register. <i>Offset: 030h</i> */
58361 + volatile uint32_t gi2cctl;
58362 + /**PHY Vendor Control Register. <i>Offset: 034h</i> */
58363 + volatile uint32_t gpvndctl;
58364 + /**General Purpose Input/Output Register. <i>Offset: 038h</i> */
58365 + volatile uint32_t ggpio;
58366 + /**User ID Register. <i>Offset: 03Ch</i> */
58367 + volatile uint32_t guid;
58368 + /**Synopsys ID Register (Read Only). <i>Offset: 040h</i> */
58369 + volatile uint32_t gsnpsid;
58370 + /**User HW Config1 Register (Read Only). <i>Offset: 044h</i> */
58371 + volatile uint32_t ghwcfg1;
58372 + /**User HW Config2 Register (Read Only). <i>Offset: 048h</i> */
58373 + volatile uint32_t ghwcfg2;
58374 +#define DWC_SLAVE_ONLY_ARCH 0
58375 +#define DWC_EXT_DMA_ARCH 1
58376 +#define DWC_INT_DMA_ARCH 2
58377 +
58378 +#define DWC_MODE_HNP_SRP_CAPABLE 0
58379 +#define DWC_MODE_SRP_ONLY_CAPABLE 1
58380 +#define DWC_MODE_NO_HNP_SRP_CAPABLE 2
58381 +#define DWC_MODE_SRP_CAPABLE_DEVICE 3
58382 +#define DWC_MODE_NO_SRP_CAPABLE_DEVICE 4
58383 +#define DWC_MODE_SRP_CAPABLE_HOST 5
58384 +#define DWC_MODE_NO_SRP_CAPABLE_HOST 6
58385 +
58386 + /**User HW Config3 Register (Read Only). <i>Offset: 04Ch</i> */
58387 + volatile uint32_t ghwcfg3;
58388 + /**User HW Config4 Register (Read Only). <i>Offset: 050h</i>*/
58389 + volatile uint32_t ghwcfg4;
58390 + /** Core LPM Configuration register <i>Offset: 054h</i>*/
58391 + volatile uint32_t glpmcfg;
58392 + /** Global PowerDn Register <i>Offset: 058h</i> */
58393 + volatile uint32_t gpwrdn;
58394 + /** Global DFIFO SW Config Register <i>Offset: 05Ch</i> */
58395 + volatile uint32_t gdfifocfg;
58396 + /** ADP Control Register <i>Offset: 060h</i> */
58397 + volatile uint32_t adpctl;
58398 + /** Reserved <i>Offset: 064h-0FFh</i> */
58399 + volatile uint32_t reserved39[39];
58400 + /** Host Periodic Transmit FIFO Size Register. <i>Offset: 100h</i> */
58401 + volatile uint32_t hptxfsiz;
58402 + /** Device Periodic Transmit FIFO#n Register if dedicated fifos are disabled,
58403 + otherwise Device Transmit FIFO#n Register.
58404 + * <i>Offset: 104h + (FIFO_Number-1)*04h, 1 <= FIFO Number <= 15 (1<=n<=15).</i> */
58405 + volatile uint32_t dtxfsiz[15];
58406 +} dwc_otg_core_global_regs_t;
58407 +
58408 +/**
58409 + * This union represents the bit fields of the Core OTG Control
58410 + * and Status Register (GOTGCTL). Set the bits using the bit
58411 + * fields then write the <i>d32</i> value to the register.
58412 + */
58413 +typedef union gotgctl_data {
58414 + /** raw register data */
58415 + uint32_t d32;
58416 + /** register bits */
58417 + struct {
58418 + unsigned sesreqscs:1;
58419 + unsigned sesreq:1;
58420 + unsigned vbvalidoven:1;
58421 + unsigned vbvalidovval:1;
58422 + unsigned avalidoven:1;
58423 + unsigned avalidovval:1;
58424 + unsigned bvalidoven:1;
58425 + unsigned bvalidovval:1;
58426 + unsigned hstnegscs:1;
58427 + unsigned hnpreq:1;
58428 + unsigned hstsethnpen:1;
58429 + unsigned devhnpen:1;
58430 + unsigned reserved12_15:4;
58431 + unsigned conidsts:1;
58432 + unsigned dbnctime:1;
58433 + unsigned asesvld:1;
58434 + unsigned bsesvld:1;
58435 + unsigned otgver:1;
58436 + unsigned reserved1:1;
58437 + unsigned multvalidbc:5;
58438 + unsigned chirpen:1;
58439 + unsigned reserved28_31:4;
58440 + } b;
58441 +} gotgctl_data_t;
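+
+/*
+ * Illustrative sketch (not part of the driver): the read-modify-write pattern
+ * described above, using this union together with the register map.  The
+ * pointer "global_regs" is a hypothetical pointer to the mapped Core Global
+ * register block.
+ */
+#if 0
+	dwc_otg_core_global_regs_t *global_regs = my_mapped_csr_base;
+	gotgctl_data_t gotgctl;
+
+	gotgctl.d32 = global_regs->gotgctl;	/* read the raw 32-bit value */
+	gotgctl.b.sesreq = 1;			/* modify via the bit fields */
+	global_regs->gotgctl = gotgctl.d32;	/* write the value back */
+#endif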
58442 +
58443 +/**
58444 + * This union represents the bit fields of the Core OTG Interrupt Register
58445 + * (GOTGINT). Set/clear the bits using the bit fields then write the <i>d32</i>
58446 + * value to the register.
58447 + */
58448 +typedef union gotgint_data {
58449 + /** raw register data */
58450 + uint32_t d32;
58451 + /** register bits */
58452 + struct {
58453 + /** Current Mode */
58454 + unsigned reserved0_1:2;
58455 +
58456 + /** Session End Detected */
58457 + unsigned sesenddet:1;
58458 +
58459 + unsigned reserved3_7:5;
58460 +
58461 + /** Session Request Success Status Change */
58462 + unsigned sesreqsucstschng:1;
58463 + /** Host Negotiation Success Status Change */
58464 + unsigned hstnegsucstschng:1;
58465 +
58466 + unsigned reserved10_16:7;
58467 +
58468 + /** Host Negotiation Detected */
58469 + unsigned hstnegdet:1;
58470 + /** A-Device Timeout Change */
58471 + unsigned adevtoutchng:1;
58472 + /** Debounce Done */
58473 + unsigned debdone:1;
58474 + /** Multi-Valued input changed */
58475 + unsigned mvic:1;
58476 +
58477 + unsigned reserved31_21:11;
58478 +
58479 + } b;
58480 +} gotgint_data_t;
58481 +
58482 +/**
58483 + * This union represents the bit fields of the Core AHB Configuration
58484 + * Register (GAHBCFG). Set/clear the bits using the bit fields then
58485 + * write the <i>d32</i> value to the register.
58486 + */
58487 +typedef union gahbcfg_data {
58488 + /** raw register data */
58489 + uint32_t d32;
58490 + /** register bits */
58491 + struct {
58492 + unsigned glblintrmsk:1;
58493 +#define DWC_GAHBCFG_GLBINT_ENABLE 1
58494 +
58495 + unsigned hburstlen:4;
58496 +#define DWC_GAHBCFG_INT_DMA_BURST_SINGLE 0
58497 +#define DWC_GAHBCFG_INT_DMA_BURST_INCR 1
58498 +#define DWC_GAHBCFG_INT_DMA_BURST_INCR4 3
58499 +#define DWC_GAHBCFG_INT_DMA_BURST_INCR8 5
58500 +#define DWC_GAHBCFG_INT_DMA_BURST_INCR16 7
58501 +
58502 + unsigned dmaenable:1;
58503 +#define DWC_GAHBCFG_DMAENABLE 1
58504 + unsigned reserved:1;
58505 + unsigned nptxfemplvl_txfemplvl:1;
58506 + unsigned ptxfemplvl:1;
58507 +#define DWC_GAHBCFG_TXFEMPTYLVL_EMPTY 1
58508 +#define DWC_GAHBCFG_TXFEMPTYLVL_HALFEMPTY 0
58509 + unsigned reserved9_20:12;
58510 + unsigned remmemsupp:1;
58511 + unsigned notialldmawrit:1;
58512 + unsigned ahbsingle:1;
58513 + unsigned reserved24_31:8;
58514 + } b;
58515 +} gahbcfg_data_t;
58516 +
58517 +/**
58518 + * This union represents the bit fields of the Core USB Configuration
58519 + * Register (GUSBCFG). Set the bits using the bit fields then write
58520 + * the <i>d32</i> value to the register.
58521 + */
58522 +typedef union gusbcfg_data {
58523 + /** raw register data */
58524 + uint32_t d32;
58525 + /** register bits */
58526 + struct {
58527 + unsigned toutcal:3;
58528 + unsigned phyif:1;
58529 + unsigned ulpi_utmi_sel:1;
58530 + unsigned fsintf:1;
58531 + unsigned physel:1;
58532 + unsigned ddrsel:1;
58533 + unsigned srpcap:1;
58534 + unsigned hnpcap:1;
58535 + unsigned usbtrdtim:4;
58536 + unsigned reserved1:1;
58537 + unsigned phylpwrclksel:1;
58538 + unsigned otgutmifssel:1;
58539 + unsigned ulpi_fsls:1;
58540 + unsigned ulpi_auto_res:1;
58541 + unsigned ulpi_clk_sus_m:1;
58542 + unsigned ulpi_ext_vbus_drv:1;
58543 + unsigned ulpi_int_vbus_indicator:1;
58544 + unsigned term_sel_dl_pulse:1;
58545 + unsigned indicator_complement:1;
58546 + unsigned indicator_pass_through:1;
58547 + unsigned ulpi_int_prot_dis:1;
58548 + unsigned ic_usb_cap:1;
58549 + unsigned ic_traffic_pull_remove:1;
58550 + unsigned tx_end_delay:1;
58551 + unsigned force_host_mode:1;
58552 + unsigned force_dev_mode:1;
58553 + unsigned reserved31:1;
58554 + } b;
58555 +} gusbcfg_data_t;
58556 +
58557 +/**
58558 + * This union represents the bit fields of the Core Reset Register
58559 + * (GRSTCTL). Set/clear the bits using the bit fields then write the
58560 + * <i>d32</i> value to the register.
58561 + */
58562 +typedef union grstctl_data {
58563 + /** raw register data */
58564 + uint32_t d32;
58565 + /** register bits */
58566 + struct {
58567 + /** Core Soft Reset (CSftRst) (Device and Host)
58568 + *
58569 + * The application can flush the control logic in the
58570 + * entire core using this bit. This bit resets the
58571 + * pipelines in the AHB Clock domain as well as the
58572 + * PHY Clock domain.
58573 + *
58574 + * The state machines are reset to an IDLE state, the
58575 + * control bits in the CSRs are cleared, all the
58576 + * transmit FIFOs and the receive FIFO are flushed.
58577 + *
58578 + * The status mask bits that control the generation of
58579 + * the interrupt, are cleared, to clear the
58580 + * interrupt. The interrupt status bits are not
58581 + * cleared, so the application can get the status of
58582 + * any events that occurred in the core after it has
58583 + * set this bit.
58584 + *
58585 + * Any transactions on the AHB are terminated as soon
58586 + * as possible following the protocol. Any
58587 + * transactions on the USB are terminated immediately.
58588 + *
58589 + * The configuration settings in the CSRs are
58590 + * unchanged, so the software doesn't have to
58591 + * reprogram these registers (Device
58592 + * Configuration/Host Configuration/Core System
58593 + * Configuration/Core PHY Configuration).
58594 + *
58595 + * The application can write to this bit, any time it
58596 + * wants to reset the core. This is a self clearing
58597 + * bit and the core clears this bit after all the
58598 + * necessary logic is reset in the core, which may
58599 + * take several clocks, depending on the current state
58600 + * of the core.
58601 + */
58602 + unsigned csftrst:1;
58603 + /** Hclk Soft Reset
58604 + *
58605 + * The application uses this bit to reset the control logic in
58606 + * the AHB clock domain. Only AHB clock domain pipelines are
58607 + * reset.
58608 + */
58609 + unsigned hsftrst:1;
58610 + /** Host Frame Counter Reset (Host Only)<br>
58611 + *
58612 + * The application can reset the (micro)frame number
58613 + * counter inside the core, using this bit. When the
58614 + * (micro)frame counter is reset, the subsequent SOF
58615 + * sent out by the core, will have a (micro)frame
58616 + * number of 0.
58617 + */
58618 + unsigned hstfrm:1;
58619 + /** In Token Sequence Learning Queue Flush
58620 + * (INTknQFlsh) (Device Only)
58621 + */
58622 + unsigned intknqflsh:1;
58623 + /** RxFIFO Flush (RxFFlsh) (Device and Host)
58624 + *
58625 + * The application can flush the entire Receive FIFO
58626 + * using this bit. The application must first
58627 + * ensure that the core is not in the middle of a
58628 + * transaction. The application should write into
58629 + * this bit, only after making sure that neither the
58630 + * DMA engine is reading from the RxFIFO nor the MAC
58631 + * is writing the data in to the FIFO. The
58632 + * application should wait until the bit is cleared
58633 + * before performing any other operations. This bit
58634 +	 * will take 8 clocks (slowest of PHY or AHB clock)
58635 + * to clear.
58636 + */
58637 + unsigned rxfflsh:1;
58638 + /** TxFIFO Flush (TxFFlsh) (Device and Host).
58639 + *
58640 + * This bit is used to selectively flush a single or
58641 + * all transmit FIFOs. The application must first
58642 + * ensure that the core is not in the middle of a
58643 + * transaction. The application should write into
58644 + * this bit, only after making sure that neither the
58645 + * DMA engine is writing into the TxFIFO nor the MAC
58646 + * is reading the data out of the FIFO. The
58647 + * application should wait until the core clears this
58648 + * bit, before performing any operations. This bit
58649 +	 * will take 8 clocks (slowest of PHY or AHB clock)
58650 + * to clear.
58651 + */
58652 + unsigned txfflsh:1;
58653 +
58654 + /** TxFIFO Number (TxFNum) (Device and Host).
58655 + *
58656 + * This is the FIFO number which needs to be flushed,
58657 + * using the TxFIFO Flush bit. This field should not
58658 + * be changed until the TxFIFO Flush bit is cleared by
58659 + * the core.
58660 + * - 0x0 : Non Periodic TxFIFO Flush
58661 + * - 0x1 : Periodic TxFIFO #1 Flush in device mode
58662 + * or Periodic TxFIFO in host mode
58663 + * - 0x2 : Periodic TxFIFO #2 Flush in device mode.
58664 + * - ...
58665 + * - 0xF : Periodic TxFIFO #15 Flush in device mode
58666 + * - 0x10: Flush all the Transmit NonPeriodic and
58667 + * Transmit Periodic FIFOs in the core
58668 + */
58669 + unsigned txfnum:5;
58670 + /** Reserved */
58671 + unsigned reserved11_29:19;
58672 +	/** DMA Request Signal.  Indicates a DMA request is in
58673 +	 * progress. Used for debug purposes. */
58674 + unsigned dmareq:1;
58675 + /** AHB Master Idle. Indicates the AHB Master State
58676 + * Machine is in IDLE condition. */
58677 + unsigned ahbidle:1;
58678 + } b;
58679 +} grstctl_t;
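+
+/*
+ * Illustrative sketch (not part of the driver): a core soft reset along the
+ * lines described above - wait for the AHB master to go idle, set CSftRst and
+ * poll until the core clears it again.  "global_regs" is a hypothetical
+ * pointer to the mapped Core Global register block; a real implementation
+ * would also bound both loops with a timeout.
+ */
+#if 0
+	grstctl_t greset = {.d32 = 0 };
+
+	do {					/* wait for the AHB master idle state */
+		greset.d32 = global_regs->grstctl;
+	} while (greset.b.ahbidle == 0);
+
+	greset.b.csftrst = 1;			/* request the core soft reset */
+	global_regs->grstctl = greset.d32;
+	do {					/* self-clearing: poll until the reset completes */
+		greset.d32 = global_regs->grstctl;
+	} while (greset.b.csftrst == 1);
+#endif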
58680 +
58681 +/**
58682 + * This union represents the bit fields of the Core Interrupt Mask
58683 + * Register (GINTMSK). Set/clear the bits using the bit fields then
58684 + * write the <i>d32</i> value to the register.
58685 + */
58686 +typedef union gintmsk_data {
58687 + /** raw register data */
58688 + uint32_t d32;
58689 + /** register bits */
58690 + struct {
58691 + unsigned reserved0:1;
58692 + unsigned modemismatch:1;
58693 + unsigned otgintr:1;
58694 + unsigned sofintr:1;
58695 + unsigned rxstsqlvl:1;
58696 + unsigned nptxfempty:1;
58697 + unsigned ginnakeff:1;
58698 + unsigned goutnakeff:1;
58699 + unsigned ulpickint:1;
58700 + unsigned i2cintr:1;
58701 + unsigned erlysuspend:1;
58702 + unsigned usbsuspend:1;
58703 + unsigned usbreset:1;
58704 + unsigned enumdone:1;
58705 + unsigned isooutdrop:1;
58706 + unsigned eopframe:1;
58707 + unsigned restoredone:1;
58708 + unsigned epmismatch:1;
58709 + unsigned inepintr:1;
58710 + unsigned outepintr:1;
58711 + unsigned incomplisoin:1;
58712 + unsigned incomplisoout:1;
58713 + unsigned fetsusp:1;
58714 + unsigned resetdet:1;
58715 + unsigned portintr:1;
58716 + unsigned hcintr:1;
58717 + unsigned ptxfempty:1;
58718 + unsigned lpmtranrcvd:1;
58719 + unsigned conidstschng:1;
58720 + unsigned disconnect:1;
58721 + unsigned sessreqintr:1;
58722 + unsigned wkupintr:1;
58723 + } b;
58724 +} gintmsk_data_t;
58725 +/**
58726 + * This union represents the bit fields of the Core Interrupt Register
58727 + * (GINTSTS). Set/clear the bits using the bit fields then write the
58728 + * <i>d32</i> value to the register.
58729 + */
58730 +typedef union gintsts_data {
58731 + /** raw register data */
58732 + uint32_t d32;
58733 +#define DWC_SOF_INTR_MASK 0x0008
58734 + /** register bits */
58735 + struct {
58736 +#define DWC_HOST_MODE 1
58737 + unsigned curmode:1;
58738 + unsigned modemismatch:1;
58739 + unsigned otgintr:1;
58740 + unsigned sofintr:1;
58741 + unsigned rxstsqlvl:1;
58742 + unsigned nptxfempty:1;
58743 + unsigned ginnakeff:1;
58744 + unsigned goutnakeff:1;
58745 + unsigned ulpickint:1;
58746 + unsigned i2cintr:1;
58747 + unsigned erlysuspend:1;
58748 + unsigned usbsuspend:1;
58749 + unsigned usbreset:1;
58750 + unsigned enumdone:1;
58751 + unsigned isooutdrop:1;
58752 + unsigned eopframe:1;
58753 + unsigned restoredone:1;
58754 + unsigned epmismatch:1;
58755 + unsigned inepint:1;
58756 + unsigned outepintr:1;
58757 + unsigned incomplisoin:1;
58758 + unsigned incomplisoout:1;
58759 + unsigned fetsusp:1;
58760 + unsigned resetdet:1;
58761 + unsigned portintr:1;
58762 + unsigned hcintr:1;
58763 + unsigned ptxfempty:1;
58764 + unsigned lpmtranrcvd:1;
58765 + unsigned conidstschng:1;
58766 + unsigned disconnect:1;
58767 + unsigned sessreqintr:1;
58768 + unsigned wkupintr:1;
58769 + } b;
58770 +} gintsts_data_t;
58771 +
58772 +/**
58773 + * This union represents the bit fields in the Device Receive Status Read and
58774 + * Pop Registers (GRXSTSR, GRXSTSP) Read the register into the <i>d32</i>
58775 + * element then read out the bits using the <i>b</i>it elements.
58776 + */
58777 +typedef union device_grxsts_data {
58778 + /** raw register data */
58779 + uint32_t d32;
58780 + /** register bits */
58781 + struct {
58782 + unsigned epnum:4;
58783 + unsigned bcnt:11;
58784 + unsigned dpid:2;
58785 +
58786 +#define DWC_STS_DATA_UPDT 0x2 // OUT Data Packet
58787 +#define DWC_STS_XFER_COMP 0x3 // OUT Data Transfer Complete
58788 +
58789 +#define DWC_DSTS_GOUT_NAK 0x1 // Global OUT NAK
58790 +#define DWC_DSTS_SETUP_COMP 0x4 // Setup Phase Complete
58791 +#define DWC_DSTS_SETUP_UPDT 0x6 // SETUP Packet
58792 + unsigned pktsts:4;
58793 + unsigned fn:4;
58794 + unsigned reserved25_31:7;
58795 + } b;
58796 +} device_grxsts_data_t;
58797 +
58798 +/**
58799 + * This union represents the bit fields in the Host Receive Status Read and
58800 + * Pop Registers (GRXSTSR, GRXSTSP) Read the register into the <i>d32</i>
58801 + * element then read out the bits using the <i>b</i>it elements.
58802 + */
58803 +typedef union host_grxsts_data {
58804 + /** raw register data */
58805 + uint32_t d32;
58806 + /** register bits */
58807 + struct {
58808 + unsigned chnum:4;
58809 + unsigned bcnt:11;
58810 + unsigned dpid:2;
58811 +
58812 + unsigned pktsts:4;
58813 +#define DWC_GRXSTS_PKTSTS_IN 0x2
58814 +#define DWC_GRXSTS_PKTSTS_IN_XFER_COMP 0x3
58815 +#define DWC_GRXSTS_PKTSTS_DATA_TOGGLE_ERR 0x5
58816 +#define DWC_GRXSTS_PKTSTS_CH_HALTED 0x7
58817 +
58818 + unsigned reserved21_31:11;
58819 + } b;
58820 +} host_grxsts_data_t;
58821 +
58822 +/**
58823 + * This union represents the bit fields in the FIFO Size Registers (HPTXFSIZ,
58824 + * GNPTXFSIZ, DPTXFSIZn, DIEPTXFn). Read the register into the <i>d32</i> element
58825 + * then read out the bits using the <i>b</i>it elements.
58826 + */
58827 +typedef union fifosize_data {
58828 + /** raw register data */
58829 + uint32_t d32;
58830 + /** register bits */
58831 + struct {
58832 + unsigned startaddr:16;
58833 + unsigned depth:16;
58834 + } b;
58835 +} fifosize_data_t;
58836 +
58837 +/**
58838 + * This union represents the bit fields in the Non-Periodic Transmit
58839 + * FIFO/Queue Status Register (GNPTXSTS). Read the register into the
58840 + * <i>d32</i> element then read out the bits using the <i>b</i>it
58841 + * elements.
58842 + */
58843 +typedef union gnptxsts_data {
58844 + /** raw register data */
58845 + uint32_t d32;
58846 + /** register bits */
58847 + struct {
58848 + unsigned nptxfspcavail:16;
58849 + unsigned nptxqspcavail:8;
58850 + /** Top of the Non-Periodic Transmit Request Queue
58851 + * - bit 24 - Terminate (Last entry for the selected
58852 + * channel/EP)
58853 + * - bits 26:25 - Token Type
58854 + * - 2'b00 - IN/OUT
58855 + * - 2'b01 - Zero Length OUT
58856 + * - 2'b10 - PING/Complete Split
58857 + * - 2'b11 - Channel Halt
58858 + * - bits 30:27 - Channel/EP Number
58859 + */
58860 + unsigned nptxqtop_terminate:1;
58861 + unsigned nptxqtop_token:2;
58862 + unsigned nptxqtop_chnep:4;
58863 + unsigned reserved:1;
58864 + } b;
58865 +} gnptxsts_data_t;
58866 +
58867 +/**
58868 + * This union represents the bit fields in the Transmit
58869 + * FIFO Status Register (DTXFSTS). Read the register into the
58870 + * <i>d32</i> element then read out the bits using the <i>b</i>it
58871 + * elements.
58872 + */
58873 +typedef union dtxfsts_data {
58874 + /** raw register data */
58875 + uint32_t d32;
58876 + /** register bits */
58877 + struct {
58878 + unsigned txfspcavail:16;
58879 + unsigned reserved:16;
58880 + } b;
58881 +} dtxfsts_data_t;
58882 +
58883 +/**
58884 + * This union represents the bit fields in the I2C Control Register
58885 + * (I2CCTL). Read the register into the <i>d32</i> element then read out the
58886 + * bits using the <i>b</i>it elements.
58887 + */
58888 +typedef union gi2cctl_data {
58889 + /** raw register data */
58890 + uint32_t d32;
58891 + /** register bits */
58892 + struct {
58893 + unsigned rwdata:8;
58894 + unsigned regaddr:8;
58895 + unsigned addr:7;
58896 + unsigned i2cen:1;
58897 + unsigned ack:1;
58898 + unsigned i2csuspctl:1;
58899 + unsigned i2cdevaddr:2;
58900 + unsigned i2cdatse0:1;
58901 + unsigned reserved:1;
58902 + unsigned rw:1;
58903 + unsigned bsydne:1;
58904 + } b;
58905 +} gi2cctl_data_t;
58906 +
58907 +/**
58908 + * This union represents the bit fields in the PHY Vendor Control Register
58909 + * (GPVNDCTL). Read the register into the <i>d32</i> element then read out the
58910 + * bits using the <i>b</i>it elements.
58911 + */
58912 +typedef union gpvndctl_data {
58913 + /** raw register data */
58914 + uint32_t d32;
58915 + /** register bits */
58916 + struct {
58917 + unsigned regdata:8;
58918 + unsigned vctrl:8;
58919 + unsigned regaddr16_21:6;
58920 + unsigned regwr:1;
58921 + unsigned reserved23_24:2;
58922 + unsigned newregreq:1;
58923 + unsigned vstsbsy:1;
58924 + unsigned vstsdone:1;
58925 + unsigned reserved28_30:3;
58926 + unsigned disulpidrvr:1;
58927 + } b;
58928 +} gpvndctl_data_t;
58929 +
58930 +/**
58931 + * This union represents the bit fields in the General Purpose
58932 + * Input/Output Register (GGPIO).
58933 + * Read the register into the <i>d32</i> element then read out the
58934 + * bits using the <i>b</i>it elements.
58935 + */
58936 +typedef union ggpio_data {
58937 + /** raw register data */
58938 + uint32_t d32;
58939 + /** register bits */
58940 + struct {
58941 + unsigned gpi:16;
58942 + unsigned gpo:16;
58943 + } b;
58944 +} ggpio_data_t;
58945 +
58946 +/**
58947 + * This union represents the bit fields in the User ID Register
58948 + * (GUID). Read the register into the <i>d32</i> element then read out the
58949 + * bits using the <i>b</i>it elements.
58950 + */
58951 +typedef union guid_data {
58952 + /** raw register data */
58953 + uint32_t d32;
58954 + /** register bits */
58955 + struct {
58956 + unsigned rwdata:32;
58957 + } b;
58958 +} guid_data_t;
58959 +
58960 +/**
58961 + * This union represents the bit fields in the Synopsys ID Register
58962 + * (GSNPSID). Read the register into the <i>d32</i> element then read out the
58963 + * bits using the <i>b</i>it elements.
58964 + */
58965 +typedef union gsnpsid_data {
58966 + /** raw register data */
58967 + uint32_t d32;
58968 + /** register bits */
58969 + struct {
58970 + unsigned rwdata:32;
58971 + } b;
58972 +} gsnpsid_data_t;
58973 +
58974 +/**
58975 + * This union represents the bit fields in the User HW Config1
58976 + * Register. Read the register into the <i>d32</i> element then read
58977 + * out the bits using the <i>b</i>it elements.
58978 + */
58979 +typedef union hwcfg1_data {
58980 + /** raw register data */
58981 + uint32_t d32;
58982 + /** register bits */
58983 + struct {
58984 + unsigned ep_dir0:2;
58985 + unsigned ep_dir1:2;
58986 + unsigned ep_dir2:2;
58987 + unsigned ep_dir3:2;
58988 + unsigned ep_dir4:2;
58989 + unsigned ep_dir5:2;
58990 + unsigned ep_dir6:2;
58991 + unsigned ep_dir7:2;
58992 + unsigned ep_dir8:2;
58993 + unsigned ep_dir9:2;
58994 + unsigned ep_dir10:2;
58995 + unsigned ep_dir11:2;
58996 + unsigned ep_dir12:2;
58997 + unsigned ep_dir13:2;
58998 + unsigned ep_dir14:2;
58999 + unsigned ep_dir15:2;
59000 + } b;
59001 +} hwcfg1_data_t;
59002 +
59003 +/**
59004 + * This union represents the bit fields in the User HW Config2
59005 + * Register. Read the register into the <i>d32</i> element then read
59006 + * out the bits using the <i>b</i>it elements.
59007 + */
59008 +typedef union hwcfg2_data {
59009 + /** raw register data */
59010 + uint32_t d32;
59011 + /** register bits */
59012 + struct {
59013 + /* GHWCFG2 */
59014 + unsigned op_mode:3;
59015 +#define DWC_HWCFG2_OP_MODE_HNP_SRP_CAPABLE_OTG 0
59016 +#define DWC_HWCFG2_OP_MODE_SRP_ONLY_CAPABLE_OTG 1
59017 +#define DWC_HWCFG2_OP_MODE_NO_HNP_SRP_CAPABLE_OTG 2
59018 +#define DWC_HWCFG2_OP_MODE_SRP_CAPABLE_DEVICE 3
59019 +#define DWC_HWCFG2_OP_MODE_NO_SRP_CAPABLE_DEVICE 4
59020 +#define DWC_HWCFG2_OP_MODE_SRP_CAPABLE_HOST 5
59021 +#define DWC_HWCFG2_OP_MODE_NO_SRP_CAPABLE_HOST 6
59022 +
59023 + unsigned architecture:2;
59024 + unsigned point2point:1;
59025 + unsigned hs_phy_type:2;
59026 +#define DWC_HWCFG2_HS_PHY_TYPE_NOT_SUPPORTED 0
59027 +#define DWC_HWCFG2_HS_PHY_TYPE_UTMI 1
59028 +#define DWC_HWCFG2_HS_PHY_TYPE_ULPI 2
59029 +#define DWC_HWCFG2_HS_PHY_TYPE_UTMI_ULPI 3
59030 +
59031 + unsigned fs_phy_type:2;
59032 + unsigned num_dev_ep:4;
59033 + unsigned num_host_chan:4;
59034 + unsigned perio_ep_supported:1;
59035 + unsigned dynamic_fifo:1;
59036 + unsigned multi_proc_int:1;
59037 + unsigned reserved21:1;
59038 + unsigned nonperio_tx_q_depth:2;
59039 + unsigned host_perio_tx_q_depth:2;
59040 + unsigned dev_token_q_depth:5;
59041 + unsigned otg_enable_ic_usb:1;
59042 + } b;
59043 +} hwcfg2_data_t;
59044 +
59045 +/**
59046 + * This union represents the bit fields in the User HW Config3
59047 + * Register. Read the register into the <i>d32</i> element then read
59048 + * out the bits using the <i>b</i>it elements.
59049 + */
59050 +typedef union hwcfg3_data {
59051 + /** raw register data */
59052 + uint32_t d32;
59053 + /** register bits */
59054 + struct {
59055 + /* GHWCFG3 */
59056 + unsigned xfer_size_cntr_width:4;
59057 + unsigned packet_size_cntr_width:3;
59058 + unsigned otg_func:1;
59059 + unsigned i2c:1;
59060 + unsigned vendor_ctrl_if:1;
59061 + unsigned optional_features:1;
59062 + unsigned synch_reset_type:1;
59063 + unsigned adp_supp:1;
59064 + unsigned otg_enable_hsic:1;
59065 + unsigned bc_support:1;
59066 + unsigned otg_lpm_en:1;
59067 + unsigned dfifo_depth:16;
59068 + } b;
59069 +} hwcfg3_data_t;
59070 +
59071 +/**
59072 + * This union represents the bit fields in the User HW Config4
59073 + * Register. Read the register into the <i>d32</i> element then read
59074 + * out the bits using the <i>b</i>it elements.
59075 + */
59076 +typedef union hwcfg4_data {
59077 + /** raw register data */
59078 + uint32_t d32;
59079 + /** register bits */
59080 + struct {
59081 + unsigned num_dev_perio_in_ep:4;
59082 + unsigned power_optimiz:1;
59083 + unsigned min_ahb_freq:1;
59084 + unsigned hiber:1;
59085 + unsigned xhiber:1;
59086 + unsigned reserved:6;
59087 + unsigned utmi_phy_data_width:2;
59088 + unsigned num_dev_mode_ctrl_ep:4;
59089 + unsigned iddig_filt_en:1;
59090 + unsigned vbus_valid_filt_en:1;
59091 + unsigned a_valid_filt_en:1;
59092 + unsigned b_valid_filt_en:1;
59093 + unsigned session_end_filt_en:1;
59094 + unsigned ded_fifo_en:1;
59095 + unsigned num_in_eps:4;
59096 + unsigned desc_dma:1;
59097 + unsigned desc_dma_dyn:1;
59098 + } b;
59099 +} hwcfg4_data_t;
59100 +
59101 +/**
59102 + * This union represents the bit fields of the Core LPM Configuration
59103 + * Register (GLPMCFG). Set the bits using bit fields then write
59104 + * the <i>d32</i> value to the register.
59105 + */
59106 +typedef union glpmctl_data {
59107 + /** raw register data */
59108 + uint32_t d32;
59109 + /** register bits */
59110 + struct {
59111 + /** LPM-Capable (LPMCap) (Device and Host)
59112 + * The application uses this bit to control
59113 + * the DWC_otg core LPM capabilities.
59114 + */
59115 + unsigned lpm_cap_en:1;
59116 + /** LPM response programmed by application (AppL1Res) (Device)
59117 + * Handshake response to LPM token pre-programmed
59118 + * by device application software.
59119 + */
59120 + unsigned appl_resp:1;
59121 + /** Host Initiated Resume Duration (HIRD) (Device and Host)
59122 + * In Host mode this field indicates the value of HIRD
59123 + * to be sent in an LPM transaction.
59124 + * In Device mode this field is updated with the
59125 + * Received LPM Token HIRD bmAttribute
59126 + * when an ACK/NYET/STALL response is sent
59127 + * to an LPM transaction.
59128 + */
59129 + unsigned hird:4;
59130 + /** RemoteWakeEnable (bRemoteWake) (Device and Host)
59131 + * In Host mode this bit indicates the value of remote
59132 + * wake up to be sent in wIndex field of LPM transaction.
59133 + * In Device mode this field is updated with the
59134 + * Received LPM Token bRemoteWake bmAttribute
59135 + * when an ACK/NYET/STALL response is sent
59136 + * to an LPM transaction.
59137 + */
59138 + unsigned rem_wkup_en:1;
59139 + /** Enable utmi_sleep_n (EnblSlpM) (Device and Host)
59140 + * The application uses this bit to control
59141 + * the utmi_sleep_n assertion to the PHY when in L1 state.
59142 + */
59143 + unsigned en_utmi_sleep:1;
59144 + /** HIRD Threshold (HIRD_Thres) (Device and Host)
59145 + */
59146 + unsigned hird_thres:5;
59147 + /** LPM Response (CoreL1Res) (Device and Host)
59148 +	 * In Host mode this field contains the handshake response to
59149 +	 * the LPM transaction.
59150 +	 * In Device mode the core's response to the received LPM
59151 +	 * transaction is reflected in these two bits.
59152 + - 0x0 : ERROR (No handshake response)
59153 + - 0x1 : STALL
59154 + - 0x2 : NYET
59155 + - 0x3 : ACK
59156 + */
59157 + unsigned lpm_resp:2;
59158 + /** Port Sleep Status (SlpSts) (Device and Host)
59159 + * This bit is set as long as a Sleep condition
59160 + * is present on the USB bus.
59161 + */
59162 + unsigned prt_sleep_sts:1;
59163 + /** Sleep State Resume OK (L1ResumeOK) (Device and Host)
59164 + * Indicates that the application or host
59165 + * can start resume from Sleep state.
59166 + */
59167 + unsigned sleep_state_resumeok:1;
59168 + /** LPM channel Index (LPM_Chnl_Indx) (Host)
59169 + * The channel number on which the LPM transaction
59170 + * has to be applied while sending
59171 + * an LPM transaction to the local device.
59172 + */
59173 + unsigned lpm_chan_index:4;
59174 + /** LPM Retry Count (LPM_Retry_Cnt) (Host)
59175 +	 * Number of host retries to be performed
59176 +	 * if the device response is not a valid response.
59177 + */
59178 + unsigned retry_count:3;
59179 + /** Send LPM Transaction (SndLPM) (Host)
59180 + * When set by application software,
59181 + * an LPM transaction containing two tokens
59182 + * is sent.
59183 + */
59184 + unsigned send_lpm:1;
59185 + /** LPM Retry status (LPM_RetryCnt_Sts) (Host)
59186 + * Number of LPM Host Retries still remaining
59187 + * to be transmitted for the current LPM sequence
59188 + */
59189 + unsigned retry_count_sts:3;
59190 + unsigned reserved28_29:2;
59191 +	/** In host mode, once this bit is set the host starts
59192 +	 * driving the HSIC Idle state on the bus.
59193 +	 * It then waits for the device to initiate the Connect sequence.
59194 +	 * In device mode, once this bit is set the device waits for
59195 +	 * the HSIC Idle line state on the bus. Upon receiving the Idle
19196 +	 * line state, it initiates the HSIC Connect sequence.
59197 + */
59198 + unsigned hsic_connect:1;
59199 + /** This bit overrides and functionally inverts
59200 + * the if_select_hsic input port signal.
59201 + */
59202 + unsigned inv_sel_hsic:1;
59203 + } b;
59204 +} glpmcfg_data_t;
59205 +
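+/*
+ * Host-mode usage sketch for GLPMCFG, assuming a pointer to the register:
+ * program HIRD and the channel index, then set SndLPM to start an LPM
+ * transaction. Helper name and arguments are illustrative only.
+ */
+static inline void dwc_otg_send_lpm_sketch(volatile uint32_t *glpmcfg_reg,
+					   uint8_t hird, uint8_t chan)
+{
+	glpmcfg_data_t lpmcfg;
+
+	lpmcfg.d32 = *glpmcfg_reg;
+	lpmcfg.b.hird = hird;		/* HIRD value to send */
+	lpmcfg.b.lpm_chan_index = chan;	/* channel carrying the LPM tokens */
+	lpmcfg.b.send_lpm = 1;		/* kick off the LPM transaction */
+	*glpmcfg_reg = lpmcfg.d32;
+}
+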
59206 +/**
59207 + * This union represents the bit fields of the Core ADP Timer, Control and
59208 + * Status Register (ADPTIMCTLSTS). Set the bits using bit fields then write
59209 + * the <i>d32</i> value to the register.
59210 + */
59211 +typedef union adpctl_data {
59212 + /** raw register data */
59213 + uint32_t d32;
59214 + /** register bits */
59215 + struct {
59216 + /** Probe Discharge (PRB_DSCHG)
59217 + * These bits set the times for TADP_DSCHG.
59218 + * These bits are defined as follows:
59219 + * 2'b00 - 4 msec
59220 + * 2'b01 - 8 msec
59221 + * 2'b10 - 16 msec
59222 + * 2'b11 - 32 msec
59223 + */
59224 + unsigned prb_dschg:2;
59225 + /** Probe Delta (PRB_DELTA)
59226 + * These bits set the resolution for RTIM value.
59227 + * The bits are defined in units of 32 kHz clock cycles as follows:
59228 +	 * 2'b00 - 1 cycle
59229 +	 * 2'b01 - 2 cycles
59230 +	 * 2'b10 - 3 cycles
59231 +	 * 2'b11 - 4 cycles
59232 +	 * For example, if this value is set to 2'b01, RTIM increments
59233 +	 * once every 3 (three) 32 kHz clock cycles.
59234 + */
59235 + unsigned prb_delta:2;
59236 + /** Probe Period (PRB_PER)
59237 +	 * These bits set the TADP_PRD as follows:
59238 + * 2'b00 - 0.625 to 0.925 sec (typical 0.775 sec)
59239 + * 2'b01 - 1.25 to 1.85 sec (typical 1.55 sec)
59240 + * 2'b10 - 1.9 to 2.6 sec (typical 2.275 sec)
59241 + * 2'b11 - Reserved
59242 + */
59243 + unsigned prb_per:2;
59244 + /** These bits capture the latest time it took for VBUS to ramp from
59245 + * VADP_SINK to VADP_PRB.
59246 +	 * 0x000 - 1 cycle
59247 + * 0x001 - 2 cycles
59248 + * 0x002 - 3 cycles
59249 + * etc
59250 + * 0x7FF - 2048 cycles
59251 + * A time of 1024 cycles at 32 kHz corresponds to a time of 32 msec.
59252 + */
59253 + unsigned rtim:11;
59254 + /** Enable Probe (EnaPrb)
59255 + * When programmed to 1'b1, the core performs a probe operation.
59256 + * This bit is valid only if OTG_Ver = 1'b1.
59257 + */
59258 + unsigned enaprb:1;
59259 + /** Enable Sense (EnaSns)
59260 + * When programmed to 1'b1, the core performs a Sense operation.
59261 + * This bit is valid only if OTG_Ver = 1'b1.
59262 + */
59263 + unsigned enasns:1;
59264 + /** ADP Reset (ADPRes)
59265 + * When set, ADP controller is reset.
59266 + * This bit is valid only if OTG_Ver = 1'b1.
59267 + */
59268 + unsigned adpres:1;
59269 + /** ADP Enable (ADPEn)
59270 + * When set, the core performs either ADP probing or sensing
59271 + * based on EnaPrb or EnaSns.
59272 + * This bit is valid only if OTG_Ver = 1'b1.
59273 + */
59274 + unsigned adpen:1;
59275 + /** ADP Probe Interrupt (ADP_PRB_INT)
59276 + * When this bit is set, it means that the VBUS
59277 + * voltage is greater than VADP_PRB or VADP_PRB is reached.
59278 + * This bit is valid only if OTG_Ver = 1'b1.
59279 + */
59280 + unsigned adp_prb_int:1;
59281 + /**
59282 + * ADP Sense Interrupt (ADP_SNS_INT)
59283 + * When this bit is set, it means that the VBUS voltage is greater than
59284 + * VADP_SNS value or VADP_SNS is reached.
59285 + * This bit is valid only if OTG_Ver = 1'b1.
59286 + */
59287 + unsigned adp_sns_int:1;
59288 +	/** ADP Timeout Interrupt (ADP_TMOUT_INT)
59289 +	 * This bit is relevant only for an ADP probe.
59290 +	 * When this bit is set, it means that the ramp time has
59291 +	 * completed, i.e. ADPCTL.RTIM has reached its terminal value
59292 + * of 0x7FF. This is a debug feature that allows software
59293 + * to read the ramp time after each cycle.
59294 + * This bit is valid only if OTG_Ver = 1'b1.
59295 + */
59296 + unsigned adp_tmout_int:1;
59297 + /** ADP Probe Interrupt Mask (ADP_PRB_INT_MSK)
59298 + * When this bit is set, it unmasks the interrupt due to ADP_PRB_INT.
59299 + * This bit is valid only if OTG_Ver = 1'b1.
59300 + */
59301 + unsigned adp_prb_int_msk:1;
59302 + /** ADP Sense Interrupt Mask (ADP_SNS_INT_MSK)
59303 + * When this bit is set, it unmasks the interrupt due to ADP_SNS_INT.
59304 + * This bit is valid only if OTG_Ver = 1'b1.
59305 + */
59306 + unsigned adp_sns_int_msk:1;
59307 +	/** ADP Timeout Interrupt Mask (ADP_TMOUT_MSK)
59308 + * When this bit is set, it unmasks the interrupt due to ADP_TMOUT_INT.
59309 + * This bit is valid only if OTG_Ver = 1'b1.
59310 + */
59311 + unsigned adp_tmout_int_msk:1;
59312 + /** Access Request
59313 + * 2'b00 - Read/Write Valid (updated by the core)
59314 + * 2'b01 - Read
59315 +	 * 2'b10 - Write
59316 +	 * 2'b11 - Reserved
59317 + */
59318 + unsigned ar:2;
59319 + /** Reserved */
59320 + unsigned reserved29_31:3;
59321 + } b;
59322 +} adpctl_data_t;
59323 +
59324 +////////////////////////////////////////////
59325 +// Device Registers
59326 +/**
59327 + * Device Global Registers. <i>Offsets 800h-BFFh</i>
59328 + *
59329 + * The following structures define the size and relative field offsets
59330 + * for the Device Mode Registers.
59331 + *
59332 + * <i>These registers are visible only in Device mode and must not be
59333 + * accessed in Host mode, as the results are unknown.</i>
59334 + */
59335 +typedef struct dwc_otg_dev_global_regs {
59336 + /** Device Configuration Register. <i>Offset 800h</i> */
59337 + volatile uint32_t dcfg;
59338 + /** Device Control Register. <i>Offset: 804h</i> */
59339 + volatile uint32_t dctl;
59340 + /** Device Status Register (Read Only). <i>Offset: 808h</i> */
59341 + volatile uint32_t dsts;
59342 + /** Reserved. <i>Offset: 80Ch</i> */
59343 + uint32_t unused;
59344 + /** Device IN Endpoint Common Interrupt Mask
59345 + * Register. <i>Offset: 810h</i> */
59346 + volatile uint32_t diepmsk;
59347 + /** Device OUT Endpoint Common Interrupt Mask
59348 + * Register. <i>Offset: 814h</i> */
59349 + volatile uint32_t doepmsk;
59350 + /** Device All Endpoints Interrupt Register. <i>Offset: 818h</i> */
59351 + volatile uint32_t daint;
59352 + /** Device All Endpoints Interrupt Mask Register. <i>Offset:
59353 + * 81Ch</i> */
59354 + volatile uint32_t daintmsk;
59355 + /** Device IN Token Queue Read Register-1 (Read Only).
59356 + * <i>Offset: 820h</i> */
59357 + volatile uint32_t dtknqr1;
59358 + /** Device IN Token Queue Read Register-2 (Read Only).
59359 + * <i>Offset: 824h</i> */
59360 + volatile uint32_t dtknqr2;
59361 + /** Device VBUS discharge Register. <i>Offset: 828h</i> */
59362 + volatile uint32_t dvbusdis;
59363 + /** Device VBUS Pulse Register. <i>Offset: 82Ch</i> */
59364 + volatile uint32_t dvbuspulse;
59365 + /** Device IN Token Queue Read Register-3 (Read Only). /
59366 + * Device Thresholding control register (Read/Write)
59367 + * <i>Offset: 830h</i> */
59368 + volatile uint32_t dtknqr3_dthrctl;
59369 + /** Device IN Token Queue Read Register-4 (Read Only). /
59370 +	 * Device IN EPs empty Intr. Mask Register (Read/Write)
59371 + * <i>Offset: 834h</i> */
59372 + volatile uint32_t dtknqr4_fifoemptymsk;
59373 + /** Device Each Endpoint Interrupt Register (Read Only). /
59374 + * <i>Offset: 838h</i> */
59375 + volatile uint32_t deachint;
59376 + /** Device Each Endpoint Interrupt mask Register (Read/Write). /
59377 + * <i>Offset: 83Ch</i> */
59378 + volatile uint32_t deachintmsk;
59379 + /** Device Each In Endpoint Interrupt mask Register (Read/Write). /
59380 + * <i>Offset: 840h</i> */
59381 + volatile uint32_t diepeachintmsk[MAX_EPS_CHANNELS];
59382 + /** Device Each Out Endpoint Interrupt mask Register (Read/Write). /
59383 + * <i>Offset: 880h</i> */
59384 + volatile uint32_t doepeachintmsk[MAX_EPS_CHANNELS];
59385 +} dwc_otg_device_global_regs_t;
59386 +
59387 +/**
59388 + * This union represents the bit fields in the Device Configuration
59389 + * Register. Read the register into the <i>d32</i> member then
59390 + * set/clear the bits using the <i>b</i>it elements. Write the
59391 + * <i>d32</i> member to the dcfg register.
59392 + */
59393 +typedef union dcfg_data {
59394 + /** raw register data */
59395 + uint32_t d32;
59396 + /** register bits */
59397 + struct {
59398 + /** Device Speed */
59399 + unsigned devspd:2;
59400 + /** Non Zero Length Status OUT Handshake */
59401 + unsigned nzstsouthshk:1;
59402 +#define DWC_DCFG_SEND_STALL 1
59403 +
59404 + unsigned ena32khzs:1;
59405 +		/** Device Address */
59406 + unsigned devaddr:7;
59407 + /** Periodic Frame Interval */
59408 + unsigned perfrint:2;
59409 +#define DWC_DCFG_FRAME_INTERVAL_80 0
59410 +#define DWC_DCFG_FRAME_INTERVAL_85 1
59411 +#define DWC_DCFG_FRAME_INTERVAL_90 2
59412 +#define DWC_DCFG_FRAME_INTERVAL_95 3
59413 +
59414 + /** Enable Device OUT NAK for bulk in DDMA mode */
59415 + unsigned endevoutnak:1;
59416 +
59417 + unsigned reserved14_17:4;
59418 + /** In Endpoint Mis-match count */
59419 + unsigned epmscnt:5;
59420 + /** Enable Descriptor DMA in Device mode */
59421 + unsigned descdma:1;
59422 + unsigned perschintvl:2;
59423 + unsigned resvalid:6;
59424 + } b;
59425 +} dcfg_data_t;
59426 +
59427 +/**
59428 + * This union represents the bit fields in the Device Control
59429 + * Register. Read the register into the <i>d32</i> member then
59430 + * set/clear the bits using the <i>b</i>it elements.
59431 + */
59432 +typedef union dctl_data {
59433 + /** raw register data */
59434 + uint32_t d32;
59435 + /** register bits */
59436 + struct {
59437 + /** Remote Wakeup */
59438 + unsigned rmtwkupsig:1;
59439 + /** Soft Disconnect */
59440 + unsigned sftdiscon:1;
59441 + /** Global Non-Periodic IN NAK Status */
59442 + unsigned gnpinnaksts:1;
59443 + /** Global OUT NAK Status */
59444 + unsigned goutnaksts:1;
59445 + /** Test Control */
59446 + unsigned tstctl:3;
59447 + /** Set Global Non-Periodic IN NAK */
59448 + unsigned sgnpinnak:1;
59449 + /** Clear Global Non-Periodic IN NAK */
59450 + unsigned cgnpinnak:1;
59451 + /** Set Global OUT NAK */
59452 + unsigned sgoutnak:1;
59453 + /** Clear Global OUT NAK */
59454 + unsigned cgoutnak:1;
59455 + /** Power-On Programming Done */
59456 + unsigned pwronprgdone:1;
59457 + /** Reserved */
59458 + unsigned reserved:1;
59459 + /** Global Multi Count */
59460 + unsigned gmc:2;
59461 + /** Ignore Frame Number for ISOC EPs */
59462 + unsigned ifrmnum:1;
59463 + /** NAK on Babble */
59464 + unsigned nakonbble:1;
59465 + /** Enable Continue on BNA */
59466 + unsigned encontonbna:1;
59467 +
59468 + unsigned reserved18_31:14;
59469 + } b;
59470 +} dctl_data_t;
59471 +
59472 +/**
59473 + * This union represents the bit fields in the Device Status
59474 + * Register. Read the register into the <i>d32</i> member then
59475 + * set/clear the bits using the <i>b</i>it elements.
59476 + */
59477 +typedef union dsts_data {
59478 + /** raw register data */
59479 + uint32_t d32;
59480 + /** register bits */
59481 + struct {
59482 + /** Suspend Status */
59483 + unsigned suspsts:1;
59484 + /** Enumerated Speed */
59485 + unsigned enumspd:2;
59486 +#define DWC_DSTS_ENUMSPD_HS_PHY_30MHZ_OR_60MHZ 0
59487 +#define DWC_DSTS_ENUMSPD_FS_PHY_30MHZ_OR_60MHZ 1
59488 +#define DWC_DSTS_ENUMSPD_LS_PHY_6MHZ 2
59489 +#define DWC_DSTS_ENUMSPD_FS_PHY_48MHZ 3
59490 + /** Erratic Error */
59491 + unsigned errticerr:1;
59492 + unsigned reserved4_7:4;
59493 + /** Frame or Microframe Number of the received SOF */
59494 + unsigned soffn:14;
59495 + unsigned reserved22_31:10;
59496 + } b;
59497 +} dsts_data_t;
59498 +
59499 +/**
59500 + * This union represents the bit fields in the Device IN EP Interrupt
59501 + * Register and the Device IN EP Common Mask Register.
59502 + *
59503 + * - Read the register into the <i>d32</i> member then set/clear the
59504 + * bits using the <i>b</i>it elements.
59505 + */
59506 +typedef union diepint_data {
59507 + /** raw register data */
59508 + uint32_t d32;
59509 + /** register bits */
59510 + struct {
59511 + /** Transfer complete mask */
59512 + unsigned xfercompl:1;
59513 + /** Endpoint disable mask */
59514 + unsigned epdisabled:1;
59515 + /** AHB Error mask */
59516 + unsigned ahberr:1;
59517 + /** TimeOUT Handshake mask (non-ISOC EPs) */
59518 + unsigned timeout:1;
59519 + /** IN Token received with TxF Empty mask */
59520 + unsigned intktxfemp:1;
59521 + /** IN Token Received with EP mismatch mask */
59522 + unsigned intknepmis:1;
59523 + /** IN Endpoint NAK Effective mask */
59524 + unsigned inepnakeff:1;
59525 +		/** IN Endpoint Tx FIFO Empty */
59526 + unsigned emptyintr:1;
59527 +
59528 + unsigned txfifoundrn:1;
59529 +
59530 + /** BNA Interrupt mask */
59531 + unsigned bna:1;
59532 +
59533 + unsigned reserved10_12:3;
59534 +		/** NAK Interrupt mask */
59535 + unsigned nak:1;
59536 +
59537 + unsigned reserved14_31:18;
59538 + } b;
59539 +} diepint_data_t;
59540 +
59541 +/**
59542 + * This union represents the bit fields in the Device IN EP
59543 + * Common/Dedicated Interrupt Mask Register.
59544 + */
59545 +typedef union diepint_data diepmsk_data_t;
59546 +
59547 +/**
59548 + * This union represents the bit fields in the Device OUT EP Interrupt
59549 + * Register and the Device OUT EP Common Interrupt Mask Register.
59550 + *
59551 + * - Read the register into the <i>d32</i> member then set/clear the
59552 + * bits using the <i>b</i>it elements.
59553 + */
59554 +typedef union doepint_data {
59555 + /** raw register data */
59556 + uint32_t d32;
59557 + /** register bits */
59558 + struct {
59559 + /** Transfer complete */
59560 + unsigned xfercompl:1;
59561 + /** Endpoint disable */
59562 + unsigned epdisabled:1;
59563 + /** AHB Error */
59564 + unsigned ahberr:1;
59565 +		/** Setup Phase Done (control EPs) */
59566 + unsigned setup:1;
59567 + /** OUT Token Received when Endpoint Disabled */
59568 + unsigned outtknepdis:1;
59569 +
59570 + unsigned stsphsercvd:1;
59571 + /** Back-to-Back SETUP Packets Received */
59572 + unsigned back2backsetup:1;
59573 +
59574 + unsigned reserved7:1;
59575 + /** OUT packet Error */
59576 + unsigned outpkterr:1;
59577 + /** BNA Interrupt */
59578 + unsigned bna:1;
59579 +
59580 + unsigned reserved10:1;
59581 + /** Packet Drop Status */
59582 + unsigned pktdrpsts:1;
59583 + /** Babble Interrupt */
59584 + unsigned babble:1;
59585 + /** NAK Interrupt */
59586 + unsigned nak:1;
59587 + /** NYET Interrupt */
59588 + unsigned nyet:1;
59589 + /** Bit indicating setup packet received */
59590 + unsigned sr:1;
59591 +
59592 + unsigned reserved16_31:16;
59593 + } b;
59594 +} doepint_data_t;
59595 +
59596 +/**
59597 + * This union represents the bit fields in the Device OUT EP
59598 + * Common/Dedicated Interrupt Mask Register.
59599 + */
59600 +typedef union doepint_data doepmsk_data_t;
59601 +
59602 +/**
59603 + * This union represents the bit fields in the Device All EP Interrupt
59604 + * and Mask Registers.
59605 + * - Read the register into the <i>d32</i> member then set/clear the
59606 + * bits using the <i>b</i>it elements.
59607 + */
59608 +typedef union daint_data {
59609 + /** raw register data */
59610 + uint32_t d32;
59611 + /** register bits */
59612 + struct {
59613 + /** IN Endpoint bits */
59614 + unsigned in:16;
59615 + /** OUT Endpoint bits */
59616 + unsigned out:16;
59617 + } ep;
59618 + struct {
59619 + /** IN Endpoint bits */
59620 + unsigned inep0:1;
59621 + unsigned inep1:1;
59622 + unsigned inep2:1;
59623 + unsigned inep3:1;
59624 + unsigned inep4:1;
59625 + unsigned inep5:1;
59626 + unsigned inep6:1;
59627 + unsigned inep7:1;
59628 + unsigned inep8:1;
59629 + unsigned inep9:1;
59630 + unsigned inep10:1;
59631 + unsigned inep11:1;
59632 + unsigned inep12:1;
59633 + unsigned inep13:1;
59634 + unsigned inep14:1;
59635 + unsigned inep15:1;
59636 + /** OUT Endpoint bits */
59637 + unsigned outep0:1;
59638 + unsigned outep1:1;
59639 + unsigned outep2:1;
59640 + unsigned outep3:1;
59641 + unsigned outep4:1;
59642 + unsigned outep5:1;
59643 + unsigned outep6:1;
59644 + unsigned outep7:1;
59645 + unsigned outep8:1;
59646 + unsigned outep9:1;
59647 + unsigned outep10:1;
59648 + unsigned outep11:1;
59649 + unsigned outep12:1;
59650 + unsigned outep13:1;
59651 + unsigned outep14:1;
59652 + unsigned outep15:1;
59653 + } b;
59654 +} daint_data_t;
59655 +
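+/*
+ * Usage sketch: walk the per-endpoint bits of DAINT through the packed
+ * ep.in view of the union above and call a handler for each pending IN
+ * endpoint. The callback and helper name are illustrative only.
+ */
+static inline void dwc_otg_for_each_in_ep_intr(uint32_t daint_val,
+						void (*handle)(int epnum))
+{
+	daint_data_t daint;
+	int epnum;
+
+	daint.d32 = daint_val;
+	for (epnum = 0; epnum < 16; epnum++) {
+		if (daint.ep.in & (1 << epnum))
+			handle(epnum);
+	}
+}
+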
59656 +/**
59657 + * This union represents the bit fields in the Device IN Token Queue
59658 + * Read Registers.
59659 + * - Read the register into the <i>d32</i> member.
59660 + * - READ-ONLY Register
59661 + */
59662 +typedef union dtknq1_data {
59663 + /** raw register data */
59664 + uint32_t d32;
59665 + /** register bits */
59666 + struct {
59667 + /** In Token Queue Write Pointer */
59668 + unsigned intknwptr:5;
59669 + /** Reserved */
59670 + unsigned reserved05_06:2;
59671 + /** write pointer has wrapped. */
59672 + unsigned wrap_bit:1;
59673 +	/** EP Numbers of IN Tokens 0 ... 5 */
59674 + unsigned epnums0_5:24;
59675 + } b;
59676 +} dtknq1_data_t;
59677 +
59678 +/**
59679 + * This union represents Threshold control Register
59680 + * - Read and write the register into the <i>d32</i> member.
59681 + * - READ-WRITABLE Register
59682 + */
59683 +typedef union dthrctl_data {
59684 + /** raw register data */
59685 + uint32_t d32;
59686 + /** register bits */
59687 + struct {
59688 + /** non ISO Tx Thr. Enable */
59689 + unsigned non_iso_thr_en:1;
59690 + /** ISO Tx Thr. Enable */
59691 + unsigned iso_thr_en:1;
59692 + /** Tx Thr. Length */
59693 + unsigned tx_thr_len:9;
59694 + /** AHB Threshold ratio */
59695 + unsigned ahb_thr_ratio:2;
59696 + /** Reserved */
59697 + unsigned reserved13_15:3;
59698 + /** Rx Thr. Enable */
59699 + unsigned rx_thr_en:1;
59700 + /** Rx Thr. Length */
59701 + unsigned rx_thr_len:9;
59702 + unsigned reserved26:1;
59703 + /** Arbiter Parking Enable*/
59704 + unsigned arbprken:1;
59705 + /** Reserved */
59706 + unsigned reserved28_31:4;
59707 + } b;
59708 +} dthrctl_data_t;
59709 +
59710 +/**
59711 + * Device Logical IN Endpoint-Specific Registers. <i>Offsets
59712 + * 900h-AFCh</i>
59713 + *
59714 + * There will be one set of endpoint registers per logical endpoint
59715 + * implemented.
59716 + *
59717 + * <i>These registers are visible only in Device mode and must not be
59718 + * accessed in Host mode, as the results are unknown.</i>
59719 + */
59720 +typedef struct dwc_otg_dev_in_ep_regs {
59721 + /** Device IN Endpoint Control Register. <i>Offset:900h +
59722 + * (ep_num * 20h) + 00h</i> */
59723 + volatile uint32_t diepctl;
59724 + /** Reserved. <i>Offset:900h + (ep_num * 20h) + 04h</i> */
59725 + uint32_t reserved04;
59726 + /** Device IN Endpoint Interrupt Register. <i>Offset:900h +
59727 + * (ep_num * 20h) + 08h</i> */
59728 + volatile uint32_t diepint;
59729 + /** Reserved. <i>Offset:900h + (ep_num * 20h) + 0Ch</i> */
59730 + uint32_t reserved0C;
59731 + /** Device IN Endpoint Transfer Size
59732 + * Register. <i>Offset:900h + (ep_num * 20h) + 10h</i> */
59733 + volatile uint32_t dieptsiz;
59734 + /** Device IN Endpoint DMA Address Register. <i>Offset:900h +
59735 + * (ep_num * 20h) + 14h</i> */
59736 + volatile uint32_t diepdma;
59737 + /** Device IN Endpoint Transmit FIFO Status Register. <i>Offset:900h +
59738 + * (ep_num * 20h) + 18h</i> */
59739 + volatile uint32_t dtxfsts;
59740 + /** Device IN Endpoint DMA Buffer Register. <i>Offset:900h +
59741 + * (ep_num * 20h) + 1Ch</i> */
59742 + volatile uint32_t diepdmab;
59743 +} dwc_otg_dev_in_ep_regs_t;
59744 +
59745 +/**
59746 + * Device Logical OUT Endpoint-Specific Registers. <i>Offsets:
59747 + * B00h-CFCh</i>
59748 + *
59749 + * There will be one set of endpoint registers per logical endpoint
59750 + * implemented.
59751 + *
59752 + * <i>These registers are visible only in Device mode and must not be
59753 + * accessed in Host mode, as the results are unknown.</i>
59754 + */
59755 +typedef struct dwc_otg_dev_out_ep_regs {
59756 + /** Device OUT Endpoint Control Register. <i>Offset:B00h +
59757 + * (ep_num * 20h) + 00h</i> */
59758 + volatile uint32_t doepctl;
59759 + /** Reserved. <i>Offset:B00h + (ep_num * 20h) + 04h</i> */
59760 + uint32_t reserved04;
59761 + /** Device OUT Endpoint Interrupt Register. <i>Offset:B00h +
59762 + * (ep_num * 20h) + 08h</i> */
59763 + volatile uint32_t doepint;
59764 + /** Reserved. <i>Offset:B00h + (ep_num * 20h) + 0Ch</i> */
59765 + uint32_t reserved0C;
59766 + /** Device OUT Endpoint Transfer Size Register. <i>Offset:
59767 + * B00h + (ep_num * 20h) + 10h</i> */
59768 + volatile uint32_t doeptsiz;
59769 + /** Device OUT Endpoint DMA Address Register. <i>Offset:B00h
59770 + * + (ep_num * 20h) + 14h</i> */
59771 + volatile uint32_t doepdma;
59772 +	/** Reserved. <i>Offset:B00h + (ep_num * 20h) + 18h</i> */
59773 + uint32_t unused;
59774 + /** Device OUT Endpoint DMA Buffer Register. <i>Offset:B00h
59775 + * + (ep_num * 20h) + 1Ch</i> */
59776 +	volatile uint32_t doepdmab;
59777 +} dwc_otg_dev_out_ep_regs_t;
59778 +
59779 +/**
59780 + * This union represents the bit fields in the Device EP Control
59781 + * Register. Read the register into the <i>d32</i> member then
59782 + * set/clear the bits using the <i>b</i>it elements.
59783 + */
59784 +typedef union depctl_data {
59785 + /** raw register data */
59786 + uint32_t d32;
59787 + /** register bits */
59788 + struct {
59789 + /** Maximum Packet Size
59790 + * IN/OUT EPn
59791 + * IN/OUT EP0 - 2 bits
59792 + * 2'b00: 64 Bytes
59793 + * 2'b01: 32
59794 + * 2'b10: 16
59795 + * 2'b11: 8 */
59796 + unsigned mps:11;
59797 +#define DWC_DEP0CTL_MPS_64 0
59798 +#define DWC_DEP0CTL_MPS_32 1
59799 +#define DWC_DEP0CTL_MPS_16 2
59800 +#define DWC_DEP0CTL_MPS_8 3
59801 +
59802 + /** Next Endpoint
59803 + * IN EPn/IN EP0
59804 + * OUT EPn/OUT EP0 - reserved */
59805 + unsigned nextep:4;
59806 +
59807 + /** USB Active Endpoint */
59808 + unsigned usbactep:1;
59809 +
59810 + /** Endpoint DPID (INTR/Bulk IN and OUT endpoints)
59811 + * This field contains the PID of the packet going to
59812 + * be received or transmitted on this endpoint. The
59813 + * application should program the PID of the first
59814 + * packet going to be received or transmitted on this
59815 +		 * endpoint, after the endpoint is
59816 +		 * activated. The application uses the SetD1PID and
59817 + * SetD0PID fields of this register to program either
59818 + * D0 or D1 PID.
59819 + *
59820 + * The encoding for this field is
59821 + * - 0: D0
59822 + * - 1: D1
59823 + */
59824 + unsigned dpid:1;
59825 +
59826 + /** NAK Status */
59827 + unsigned naksts:1;
59828 +
59829 + /** Endpoint Type
59830 + * 2'b00: Control
59831 + * 2'b01: Isochronous
59832 + * 2'b10: Bulk
59833 + * 2'b11: Interrupt */
59834 + unsigned eptype:2;
59835 +
59836 + /** Snoop Mode
59837 + * OUT EPn/OUT EP0
59838 + * IN EPn/IN EP0 - reserved */
59839 + unsigned snp:1;
59840 +
59841 + /** Stall Handshake */
59842 + unsigned stall:1;
59843 +
59844 + /** Tx Fifo Number
59845 + * IN EPn/IN EP0
59846 + * OUT EPn/OUT EP0 - reserved */
59847 + unsigned txfnum:4;
59848 +
59849 + /** Clear NAK */
59850 + unsigned cnak:1;
59851 + /** Set NAK */
59852 + unsigned snak:1;
59853 + /** Set DATA0 PID (INTR/Bulk IN and OUT endpoints)
59854 + * Writing to this field sets the Endpoint DPID (DPID)
59855 + * field in this register to DATA0. Set Even
59856 + * (micro)frame (SetEvenFr) (ISO IN and OUT Endpoints)
59857 + * Writing to this field sets the Even/Odd
59858 + * (micro)frame (EO_FrNum) field to even (micro)
59859 + * frame.
59860 + */
59861 + unsigned setd0pid:1;
59862 + /** Set DATA1 PID (INTR/Bulk IN and OUT endpoints)
59863 + * Writing to this field sets the Endpoint DPID (DPID)
59864 + * field in this register to DATA1 Set Odd
59865 + * (micro)frame (SetOddFr) (ISO IN and OUT Endpoints)
59866 + * Writing to this field sets the Even/Odd
59867 + * (micro)frame (EO_FrNum) field to odd (micro) frame.
59868 + */
59869 + unsigned setd1pid:1;
59870 +
59871 + /** Endpoint Disable */
59872 + unsigned epdis:1;
59873 + /** Endpoint Enable */
59874 + unsigned epena:1;
59875 + } b;
59876 +} depctl_data_t;
59877 +
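+/*
+ * Usage sketch for DIEPCTL, following the SetD0PID description above:
+ * force DATA0 as the next PID, clear NAK and re-enable the endpoint.
+ * The register-pointer argument and helper name are illustrative only.
+ */
+static inline void dwc_otg_ep_set_data0_sketch(volatile uint32_t *diepctl_reg)
+{
+	depctl_data_t depctl;
+
+	depctl.d32 = *diepctl_reg;
+	depctl.b.setd0pid = 1;	/* next packet uses DATA0 */
+	depctl.b.cnak = 1;	/* clear NAK so the core answers tokens */
+	depctl.b.epena = 1;	/* enable the endpoint */
+	*diepctl_reg = depctl.d32;
+}
+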
59878 +/**
59879 + * This union represents the bit fields in the Device EP Transfer
59880 + * Size Register. Read the register into the <i>d32</i> member then
59881 + * set/clear the bits using the <i>b</i>it elements.
59882 + */
59883 +typedef union deptsiz_data {
59884 + /** raw register data */
59885 + uint32_t d32;
59886 + /** register bits */
59887 + struct {
59888 + /** Transfer size */
59889 + unsigned xfersize:19;
59890 +/** Max packet count for EP (pow(2,10)-1) */
59891 +#define MAX_PKT_CNT 1023
59892 + /** Packet Count */
59893 + unsigned pktcnt:10;
59894 + /** Multi Count - Periodic IN endpoints */
59895 + unsigned mc:2;
59896 + unsigned reserved:1;
59897 + } b;
59898 +} deptsiz_data_t;
59899 +
59900 +/**
59901 + * This union represents the bit fields in the Device EP 0 Transfer
59902 + * Size Register. Read the register into the <i>d32</i> member then
59903 + * set/clear the bits using the <i>b</i>it elements.
59904 + */
59905 +typedef union deptsiz0_data {
59906 + /** raw register data */
59907 + uint32_t d32;
59908 + /** register bits */
59909 + struct {
59910 + /** Transfer size */
59911 + unsigned xfersize:7;
59912 + /** Reserved */
59913 + unsigned reserved7_18:12;
59914 + /** Packet Count */
59915 + unsigned pktcnt:2;
59916 + /** Reserved */
59917 + unsigned reserved21_28:8;
59918 + /**Setup Packet Count (DOEPTSIZ0 Only) */
59919 + unsigned supcnt:2;
59920 +		unsigned reserved31:1;
59921 + } b;
59922 +} deptsiz0_data_t;
59923 +
59924 +/////////////////////////////////////////////////
59925 +// DMA Descriptor Specific Structures
59926 +//
59927 +
59928 +/** Buffer status definitions */
59929 +
59930 +#define BS_HOST_READY 0x0
59931 +#define BS_DMA_BUSY 0x1
59932 +#define BS_DMA_DONE 0x2
59933 +#define BS_HOST_BUSY 0x3
59934 +
59935 +/** Receive/Transmit status definitions */
59936 +
59937 +#define RTS_SUCCESS 0x0
59938 +#define RTS_BUFFLUSH 0x1
59939 +#define RTS_RESERVED 0x2
59940 +#define RTS_BUFERR 0x3
59941 +
59942 +/**
59943 + * This union represents the bit fields in the DMA Descriptor
59944 + * status quadlet. Read the quadlet into the <i>d32</i> member then
59945 + * set/clear the bits using the <i>b</i>it, <i>b_iso_out</i> and
59946 + * <i>b_iso_in</i> elements.
59947 + */
59948 +typedef union dev_dma_desc_sts {
59949 + /** raw register data */
59950 + uint32_t d32;
59951 + /** quadlet bits */
59952 + struct {
59953 + /** Received number of bytes */
59954 + unsigned bytes:16;
59955 + /** NAK bit - only for OUT EPs */
59956 + unsigned nak:1;
59957 + unsigned reserved17_22:6;
59958 + /** Multiple Transfer - only for OUT EPs */
59959 + unsigned mtrf:1;
59960 + /** Setup Packet received - only for OUT EPs */
59961 + unsigned sr:1;
59962 + /** Interrupt On Complete */
59963 + unsigned ioc:1;
59964 + /** Short Packet */
59965 + unsigned sp:1;
59966 + /** Last */
59967 + unsigned l:1;
59968 + /** Receive Status */
59969 + unsigned sts:2;
59970 + /** Buffer Status */
59971 + unsigned bs:2;
59972 + } b;
59973 +
59974 +//#ifdef DWC_EN_ISOC
59975 + /** iso out quadlet bits */
59976 + struct {
59977 + /** Received number of bytes */
59978 + unsigned rxbytes:11;
59979 +
59980 + unsigned reserved11:1;
59981 + /** Frame Number */
59982 + unsigned framenum:11;
59983 + /** Received ISO Data PID */
59984 + unsigned pid:2;
59985 + /** Interrupt On Complete */
59986 + unsigned ioc:1;
59987 + /** Short Packet */
59988 + unsigned sp:1;
59989 + /** Last */
59990 + unsigned l:1;
59991 + /** Receive Status */
59992 + unsigned rxsts:2;
59993 + /** Buffer Status */
59994 + unsigned bs:2;
59995 + } b_iso_out;
59996 +
59997 + /** iso in quadlet bits */
59998 + struct {
59999 +		/** Transmitted number of bytes */
60000 + unsigned txbytes:12;
60001 + /** Frame Number */
60002 + unsigned framenum:11;
60003 +		/** Transmitted ISO Data PID */
60004 + unsigned pid:2;
60005 + /** Interrupt On Complete */
60006 + unsigned ioc:1;
60007 + /** Short Packet */
60008 + unsigned sp:1;
60009 + /** Last */
60010 + unsigned l:1;
60011 + /** Transmit Status */
60012 + unsigned txsts:2;
60013 + /** Buffer Status */
60014 + unsigned bs:2;
60015 + } b_iso_in;
60016 +//#endif /* DWC_EN_ISOC */
60017 +} dev_dma_desc_sts_t;
60018 +
60019 +/**
60020 + * DMA Descriptor structure
60021 + *
60022 + * DMA Descriptor structure contains two quadlets:
60023 + * Status quadlet and Data buffer pointer.
60024 + */
60025 +typedef struct dwc_otg_dev_dma_desc {
60026 + /** DMA Descriptor status quadlet */
60027 + dev_dma_desc_sts_t status;
60028 + /** DMA Descriptor data buffer pointer */
60029 + uint32_t buf;
60030 +} dwc_otg_dev_dma_desc_t;
60031 +
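+/*
+ * Usage sketch: prepare one OUT descriptor for the DDMA engine using the
+ * structures above. The buffer is assumed to be a DMA (bus) address already
+ * mapped by the caller; helper name is illustrative only.
+ */
+static inline void dwc_otg_fill_out_desc_sketch(dwc_otg_dev_dma_desc_t *desc,
+						 uint32_t dma_buf, uint16_t len)
+{
+	desc->buf = dma_buf;
+	desc->status.d32 = 0;
+	desc->status.b.bytes = len;		/* expected receive length */
+	desc->status.b.ioc = 1;			/* interrupt on completion */
+	desc->status.b.l = 1;			/* last descriptor in chain */
+	desc->status.b.bs = BS_HOST_READY;	/* hand ownership to the core */
+}
+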
60032 +/**
60033 + * The dwc_otg_dev_if structure contains information needed to manage
60034 + * the DWC_otg controller acting in device mode. It represents the
60035 + * programming view of the device-specific aspects of the controller.
60036 + */
60037 +typedef struct dwc_otg_dev_if {
60038 + /** Pointer to device Global registers.
60039 + * Device Global Registers starting at offset 800h
60040 + */
60041 + dwc_otg_device_global_regs_t *dev_global_regs;
60042 +#define DWC_DEV_GLOBAL_REG_OFFSET 0x800
60043 +
60044 + /**
60045 + * Device Logical IN Endpoint-Specific Registers 900h-AFCh
60046 + */
60047 + dwc_otg_dev_in_ep_regs_t *in_ep_regs[MAX_EPS_CHANNELS];
60048 +#define DWC_DEV_IN_EP_REG_OFFSET 0x900
60049 +#define DWC_EP_REG_OFFSET 0x20
60050 +
60051 + /** Device Logical OUT Endpoint-Specific Registers B00h-CFCh */
60052 + dwc_otg_dev_out_ep_regs_t *out_ep_regs[MAX_EPS_CHANNELS];
60053 +#define DWC_DEV_OUT_EP_REG_OFFSET 0xB00
60054 +
60055 + /* Device configuration information */
60056 + uint8_t speed; /**< Device Speed 0: Unknown, 1: LS, 2:FS, 3: HS */
60057 +	uint8_t num_in_eps;	/**< Number of Tx EPs, range: 0-15 except EP0 */
60058 +	uint8_t num_out_eps;	/**< Number of Rx EPs, range: 0-15 except EP0 */
60059 +
60060 + /** Size of periodic FIFOs (Bytes) */
60061 + uint16_t perio_tx_fifo_size[MAX_PERIO_FIFOS];
60062 +
60063 + /** Size of Tx FIFOs (Bytes) */
60064 + uint16_t tx_fifo_size[MAX_TX_FIFOS];
60065 +
60066 +	/** Thresholding enable flags and length variables */
60067 + uint16_t rx_thr_en;
60068 + uint16_t iso_tx_thr_en;
60069 + uint16_t non_iso_tx_thr_en;
60070 +
60071 + uint16_t rx_thr_length;
60072 + uint16_t tx_thr_length;
60073 +
60074 + /**
60075 + * Pointers to the DMA Descriptors for EP0 Control
60076 + * transfers (virtual and physical)
60077 + */
60078 +
60079 + /** 2 descriptors for SETUP packets */
60080 + dwc_dma_t dma_setup_desc_addr[2];
60081 + dwc_otg_dev_dma_desc_t *setup_desc_addr[2];
60082 +
60083 + /** Pointer to Descriptor with latest SETUP packet */
60084 + dwc_otg_dev_dma_desc_t *psetup;
60085 +
60086 + /** Index of current SETUP handler descriptor */
60087 + uint32_t setup_desc_index;
60088 +
60089 + /** Descriptor for Data In or Status In phases */
60090 + dwc_dma_t dma_in_desc_addr;
60091 + dwc_otg_dev_dma_desc_t *in_desc_addr;
60092 +
60093 + /** Descriptor for Data Out or Status Out phases */
60094 + dwc_dma_t dma_out_desc_addr;
60095 + dwc_otg_dev_dma_desc_t *out_desc_addr;
60096 +
60097 + /** Setup Packet Detected - if set clear NAK when queueing */
60098 + uint32_t spd;
60099 + /** Isoc ep pointer on which incomplete happens */
60100 + void *isoc_ep;
60101 +
60102 +} dwc_otg_dev_if_t;
60103 +
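+/*
+ * Sketch of how the offsets above map onto the core's register space.
+ * base is assumed to be the ioremapped start of the controller registers;
+ * the driver's initialization code performs equivalent pointer arithmetic.
+ */
+static inline void dwc_otg_dev_if_map_sketch(dwc_otg_dev_if_t *dev_if,
+					     uint8_t *base)
+{
+	int i;
+
+	dev_if->dev_global_regs = (dwc_otg_device_global_regs_t *)
+	    (base + DWC_DEV_GLOBAL_REG_OFFSET);
+	for (i = 0; i < MAX_EPS_CHANNELS; i++) {
+		dev_if->in_ep_regs[i] = (dwc_otg_dev_in_ep_regs_t *)
+		    (base + DWC_DEV_IN_EP_REG_OFFSET + i * DWC_EP_REG_OFFSET);
+		dev_if->out_ep_regs[i] = (dwc_otg_dev_out_ep_regs_t *)
+		    (base + DWC_DEV_OUT_EP_REG_OFFSET + i * DWC_EP_REG_OFFSET);
+	}
+}
+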
60104 +/////////////////////////////////////////////////
60105 +// Host Mode Register Structures
60106 +//
60107 +/**
60108 + * The Host Global Registers structure defines the size and relative
60109 + * field offsets for the Host Mode Global Registers. Host Global
60110 + * Registers offsets 400h-7FFh.
60111 +*/
60112 +typedef struct dwc_otg_host_global_regs {
60113 + /** Host Configuration Register. <i>Offset: 400h</i> */
60114 + volatile uint32_t hcfg;
60115 + /** Host Frame Interval Register. <i>Offset: 404h</i> */
60116 + volatile uint32_t hfir;
60117 + /** Host Frame Number / Frame Remaining Register. <i>Offset: 408h</i> */
60118 + volatile uint32_t hfnum;
60119 + /** Reserved. <i>Offset: 40Ch</i> */
60120 + uint32_t reserved40C;
60121 + /** Host Periodic Transmit FIFO/ Queue Status Register. <i>Offset: 410h</i> */
60122 + volatile uint32_t hptxsts;
60123 + /** Host All Channels Interrupt Register. <i>Offset: 414h</i> */
60124 + volatile uint32_t haint;
60125 + /** Host All Channels Interrupt Mask Register. <i>Offset: 418h</i> */
60126 + volatile uint32_t haintmsk;
60127 + /** Host Frame List Base Address Register . <i>Offset: 41Ch</i> */
60128 + volatile uint32_t hflbaddr;
60129 +} dwc_otg_host_global_regs_t;
60130 +
60131 +/**
60132 + * This union represents the bit fields in the Host Configuration Register.
60133 + * Read the register into the <i>d32</i> member then set/clear the bits using
60134 + * the <i>b</i>it elements. Write the <i>d32</i> member to the hcfg register.
60135 + */
60136 +typedef union hcfg_data {
60137 + /** raw register data */
60138 + uint32_t d32;
60139 +
60140 + /** register bits */
60141 + struct {
60142 + /** FS/LS Phy Clock Select */
60143 + unsigned fslspclksel:2;
60144 +#define DWC_HCFG_30_60_MHZ 0
60145 +#define DWC_HCFG_48_MHZ 1
60146 +#define DWC_HCFG_6_MHZ 2
60147 +
60148 + /** FS/LS Only Support */
60149 + unsigned fslssupp:1;
60150 + unsigned reserved3_6:4;
60151 + /** Enable 32-KHz Suspend Mode */
60152 + unsigned ena32khzs:1;
60153 +		/** Resume Validation Period */
60154 + unsigned resvalid:8;
60155 + unsigned reserved16_22:7;
60156 + /** Enable Scatter/gather DMA in Host mode */
60157 + unsigned descdma:1;
60158 + /** Frame List Entries */
60159 + unsigned frlisten:2;
60160 + /** Enable Periodic Scheduling */
60161 + unsigned perschedena:1;
60162 + unsigned reserved27_30:4;
60163 + unsigned modechtimen:1;
60164 + } b;
60165 +} hcfg_data_t;
60166 +
60167 +/**
60168 + * This union represents the bit fields in the Host Frame Interval
60169 + * Register (HFIR).
60170 + */
60171 +typedef union hfir_data {
60172 + /** raw register data */
60173 + uint32_t d32;
60174 +
60175 + /** register bits */
60176 + struct {
60177 + unsigned frint:16;
60178 + unsigned hfirrldctrl:1;
60179 + unsigned reserved:15;
60180 + } b;
60181 +} hfir_data_t;
60182 +
60183 +/**
60184 + * This union represents the bit fields in the Host Frame Number /
60185 + * Frame Remaining Register (HFNUM).
60186 + */
60187 +typedef union hfnum_data {
60188 + /** raw register data */
60189 + uint32_t d32;
60190 +
60191 + /** register bits */
60192 + struct {
60193 + unsigned frnum:16;
60194 +#define DWC_HFNUM_MAX_FRNUM 0x3FFF
60195 + unsigned frrem:16;
60196 + } b;
60197 +} hfnum_data_t;
60198 +
60199 +typedef union hptxsts_data {
60200 + /** raw register data */
60201 + uint32_t d32;
60202 +
60203 + /** register bits */
60204 + struct {
60205 + unsigned ptxfspcavail:16;
60206 + unsigned ptxqspcavail:8;
60207 + /** Top of the Periodic Transmit Request Queue
60208 + * - bit 24 - Terminate (last entry for the selected channel)
60209 + * - bits 26:25 - Token Type
60210 + * - 2'b00 - Zero length
60211 + * - 2'b01 - Ping
60212 + * - 2'b10 - Disable
60213 + * - bits 30:27 - Channel Number
60214 + * - bit 31 - Odd/even microframe
60215 + */
60216 + unsigned ptxqtop_terminate:1;
60217 + unsigned ptxqtop_token:2;
60218 + unsigned ptxqtop_chnum:4;
60219 + unsigned ptxqtop_odd:1;
60220 + } b;
60221 +} hptxsts_data_t;
60222 +
60223 +/**
60224 + * This union represents the bit fields in the Host Port Control and Status
60225 + * Register. Read the register into the <i>d32</i> member then set/clear the
60226 + * bits using the <i>b</i>it elements. Write the <i>d32</i> member to the
60227 + * hprt0 register.
60228 + */
60229 +typedef union hprt0_data {
60230 + /** raw register data */
60231 + uint32_t d32;
60232 + /** register bits */
60233 + struct {
60234 + unsigned prtconnsts:1;
60235 + unsigned prtconndet:1;
60236 + unsigned prtena:1;
60237 + unsigned prtenchng:1;
60238 + unsigned prtovrcurract:1;
60239 + unsigned prtovrcurrchng:1;
60240 + unsigned prtres:1;
60241 + unsigned prtsusp:1;
60242 + unsigned prtrst:1;
60243 + unsigned reserved9:1;
60244 + unsigned prtlnsts:2;
60245 + unsigned prtpwr:1;
60246 + unsigned prttstctl:4;
60247 + unsigned prtspd:2;
60248 +#define DWC_HPRT0_PRTSPD_HIGH_SPEED 0
60249 +#define DWC_HPRT0_PRTSPD_FULL_SPEED 1
60250 +#define DWC_HPRT0_PRTSPD_LOW_SPEED 2
60251 + unsigned reserved19_31:13;
60252 + } b;
60253 +} hprt0_data_t;
60254 +
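+/*
+ * Usage sketch for HPRT0: prtena is cleared by writing 1, and prtconndet,
+ * prtenchng and prtovrcurrchng are write-1-to-clear change bits, so a
+ * read-modify-write must zero them before other fields are written back.
+ * Helper name and register-pointer argument are illustrative only.
+ */
+static inline uint32_t dwc_otg_read_hprt0_sketch(volatile uint32_t *hprt0_reg)
+{
+	hprt0_data_t hprt0;
+
+	hprt0.d32 = *hprt0_reg;
+	hprt0.b.prtena = 0;
+	hprt0.b.prtconndet = 0;
+	hprt0.b.prtenchng = 0;
+	hprt0.b.prtovrcurrchng = 0;
+	return hprt0.d32;	/* safe base value for a later write */
+}
+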
60255 +/**
60256 + * This union represents the bit fields in the Host All Channels Interrupt
60257 + * Register (HAINT).
60258 + */
60259 +typedef union haint_data {
60260 + /** raw register data */
60261 + uint32_t d32;
60262 + /** register bits */
60263 + struct {
60264 + unsigned ch0:1;
60265 + unsigned ch1:1;
60266 + unsigned ch2:1;
60267 + unsigned ch3:1;
60268 + unsigned ch4:1;
60269 + unsigned ch5:1;
60270 + unsigned ch6:1;
60271 + unsigned ch7:1;
60272 + unsigned ch8:1;
60273 + unsigned ch9:1;
60274 + unsigned ch10:1;
60275 + unsigned ch11:1;
60276 + unsigned ch12:1;
60277 + unsigned ch13:1;
60278 + unsigned ch14:1;
60279 + unsigned ch15:1;
60280 + unsigned reserved:16;
60281 + } b;
60282 +
60283 + struct {
60284 + unsigned chint:16;
60285 + unsigned reserved:16;
60286 + } b2;
60287 +} haint_data_t;
60288 +
60289 +/**
60290 + * This union represents the bit fields in the Host All Channels Interrupt
60291 + * Mask Register (HAINTMSK).
60292 + */
60293 +typedef union haintmsk_data {
60294 + /** raw register data */
60295 + uint32_t d32;
60296 + /** register bits */
60297 + struct {
60298 + unsigned ch0:1;
60299 + unsigned ch1:1;
60300 + unsigned ch2:1;
60301 + unsigned ch3:1;
60302 + unsigned ch4:1;
60303 + unsigned ch5:1;
60304 + unsigned ch6:1;
60305 + unsigned ch7:1;
60306 + unsigned ch8:1;
60307 + unsigned ch9:1;
60308 + unsigned ch10:1;
60309 + unsigned ch11:1;
60310 + unsigned ch12:1;
60311 + unsigned ch13:1;
60312 + unsigned ch14:1;
60313 + unsigned ch15:1;
60314 + unsigned reserved:16;
60315 + } b;
60316 +
60317 + struct {
60318 + unsigned chint:16;
60319 + unsigned reserved:16;
60320 + } b2;
60321 +} haintmsk_data_t;
60322 +
60323 +/**
60324 + * Host Channel Specific Registers. <i>500h-5FCh</i>
60325 + */
60326 +typedef struct dwc_otg_hc_regs {
60327 + /** Host Channel 0 Characteristic Register. <i>Offset: 500h + (chan_num * 20h) + 00h</i> */
60328 + volatile uint32_t hcchar;
60329 + /** Host Channel 0 Split Control Register. <i>Offset: 500h + (chan_num * 20h) + 04h</i> */
60330 + volatile uint32_t hcsplt;
60331 + /** Host Channel 0 Interrupt Register. <i>Offset: 500h + (chan_num * 20h) + 08h</i> */
60332 + volatile uint32_t hcint;
60333 + /** Host Channel 0 Interrupt Mask Register. <i>Offset: 500h + (chan_num * 20h) + 0Ch</i> */
60334 + volatile uint32_t hcintmsk;
60335 + /** Host Channel 0 Transfer Size Register. <i>Offset: 500h + (chan_num * 20h) + 10h</i> */
60336 + volatile uint32_t hctsiz;
60337 + /** Host Channel 0 DMA Address Register. <i>Offset: 500h + (chan_num * 20h) + 14h</i> */
60338 + volatile uint32_t hcdma;
60339 + volatile uint32_t reserved;
60340 + /** Host Channel 0 DMA Buffer Address Register. <i>Offset: 500h + (chan_num * 20h) + 1Ch</i> */
60341 + volatile uint32_t hcdmab;
60342 +} dwc_otg_hc_regs_t;
60343 +
60344 +/**
60345 + * This union represents the bit fields in the Host Channel Characteristics
60346 + * Register. Read the register into the <i>d32</i> member then set/clear the
60347 + * bits using the <i>b</i>it elements. Write the <i>d32</i> member to the
60348 + * hcchar register.
60349 + */
60350 +typedef union hcchar_data {
60351 + /** raw register data */
60352 + uint32_t d32;
60353 +
60354 + /** register bits */
60355 + struct {
60356 + /** Maximum packet size in bytes */
60357 + unsigned mps:11;
60358 +
60359 + /** Endpoint number */
60360 + unsigned epnum:4;
60361 +
60362 + /** 0: OUT, 1: IN */
60363 + unsigned epdir:1;
60364 +
60365 + unsigned reserved:1;
60366 +
60367 + /** 0: Full/high speed device, 1: Low speed device */
60368 + unsigned lspddev:1;
60369 +
60370 + /** 0: Control, 1: Isoc, 2: Bulk, 3: Intr */
60371 + unsigned eptype:2;
60372 +
60373 + /** Packets per frame for periodic transfers. 0 is reserved. */
60374 + unsigned multicnt:2;
60375 +
60376 + /** Device address */
60377 + unsigned devaddr:7;
60378 +
60379 + /**
60380 + * Frame to transmit periodic transaction.
60381 + * 0: even, 1: odd
60382 + */
60383 + unsigned oddfrm:1;
60384 +
60385 + /** Channel disable */
60386 + unsigned chdis:1;
60387 +
60388 + /** Channel enable */
60389 + unsigned chen:1;
60390 + } b;
60391 +} hcchar_data_t;
60392 +
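+/*
+ * Usage sketch: build an HCCHAR value for a bulk IN transfer on a
+ * full/high-speed device, using the field encodings documented above.
+ * Helper name and arguments are illustrative only.
+ */
+static inline uint32_t dwc_otg_build_hcchar_sketch(uint8_t devaddr,
+						   uint8_t epnum, uint16_t mps)
+{
+	hcchar_data_t hcchar;
+
+	hcchar.d32 = 0;
+	hcchar.b.devaddr = devaddr;
+	hcchar.b.epnum = epnum;
+	hcchar.b.epdir = 1;	/* 1: IN */
+	hcchar.b.eptype = 2;	/* 2: Bulk */
+	hcchar.b.mps = mps;
+	hcchar.b.chen = 1;	/* enable the channel */
+	return hcchar.d32;
+}
+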
60393 +typedef union hcsplt_data {
60394 + /** raw register data */
60395 + uint32_t d32;
60396 +
60397 + /** register bits */
60398 + struct {
60399 + /** Port Address */
60400 + unsigned prtaddr:7;
60401 +
60402 + /** Hub Address */
60403 + unsigned hubaddr:7;
60404 +
60405 + /** Transaction Position */
60406 + unsigned xactpos:2;
60407 +#define DWC_HCSPLIT_XACTPOS_MID 0
60408 +#define DWC_HCSPLIT_XACTPOS_END 1
60409 +#define DWC_HCSPLIT_XACTPOS_BEGIN 2
60410 +#define DWC_HCSPLIT_XACTPOS_ALL 3
60411 +
60412 + /** Do Complete Split */
60413 + unsigned compsplt:1;
60414 +
60415 + /** Reserved */
60416 + unsigned reserved:14;
60417 +
60418 +		/** Split Enable */
60419 + unsigned spltena:1;
60420 + } b;
60421 +} hcsplt_data_t;
60422 +
60423 +/**
60424 + * This union represents the bit fields in the Host Channel Interrupt
60425 + * Register (HCINT).
60426 + */
60427 +typedef union hcint_data {
60428 + /** raw register data */
60429 + uint32_t d32;
60430 + /** register bits */
60431 + struct {
60432 + /** Transfer Complete */
60433 + unsigned xfercomp:1;
60434 + /** Channel Halted */
60435 + unsigned chhltd:1;
60436 + /** AHB Error */
60437 + unsigned ahberr:1;
60438 + /** STALL Response Received */
60439 + unsigned stall:1;
60440 + /** NAK Response Received */
60441 + unsigned nak:1;
60442 + /** ACK Response Received */
60443 + unsigned ack:1;
60444 + /** NYET Response Received */
60445 + unsigned nyet:1;
60446 + /** Transaction Err */
60447 + unsigned xacterr:1;
60448 + /** Babble Error */
60449 + unsigned bblerr:1;
60450 + /** Frame Overrun */
60451 + unsigned frmovrun:1;
60452 + /** Data Toggle Error */
60453 + unsigned datatglerr:1;
60454 + /** Buffer Not Available (only for DDMA mode) */
60455 + unsigned bna:1;
60456 +		/** Excessive transaction error (only for DDMA mode) */
60457 + unsigned xcs_xact:1;
60458 + /** Frame List Rollover interrupt */
60459 + unsigned frm_list_roll:1;
60460 + /** Reserved */
60461 + unsigned reserved14_31:18;
60462 + } b;
60463 +} hcint_data_t;
60464 +
60465 +/**
60466 + * This union represents the bit fields in the Host Channel Interrupt Mask
60467 + * Register. Read the register into the <i>d32</i> member then set/clear the
60468 + * bits using the <i>b</i>it elements. Write the <i>d32</i> member to the
60469 + * hcintmsk register.
60470 + */
60471 +typedef union hcintmsk_data {
60472 + /** raw register data */
60473 + uint32_t d32;
60474 +
60475 + /** register bits */
60476 + struct {
60477 + unsigned xfercompl:1;
60478 + unsigned chhltd:1;
60479 + unsigned ahberr:1;
60480 + unsigned stall:1;
60481 + unsigned nak:1;
60482 + unsigned ack:1;
60483 + unsigned nyet:1;
60484 + unsigned xacterr:1;
60485 + unsigned bblerr:1;
60486 + unsigned frmovrun:1;
60487 + unsigned datatglerr:1;
60488 + unsigned bna:1;
60489 + unsigned xcs_xact:1;
60490 + unsigned frm_list_roll:1;
60491 + unsigned reserved14_31:18;
60492 + } b;
60493 +} hcintmsk_data_t;
60494 +
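For illustration only, a minimal sketch of the read-modify-write pattern the comment above describes; the helper name is hypothetical and it is assumed to be handed a pointer to one channel's HCINTMSK register:

static void enable_chhltd_ahberr(volatile uint32_t *hcintmsk_reg)
{
	hcintmsk_data_t intmsk;

	intmsk.d32 = *hcintmsk_reg;	/* read the register into d32 */
	intmsk.b.chhltd = 1;		/* set/clear bits via the b elements */
	intmsk.b.ahberr = 1;
	*hcintmsk_reg = intmsk.d32;	/* write d32 back to HCINTMSK */
}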
60495 +/**
60496 + * This union represents the bit fields in the Host Channel Transfer Size
60497 + * Register. Read the register into the <i>d32</i> member then set/clear the
60498 + * bits using the <i>b</i>it elements. Write the <i>d32</i> member to the
60499 + * hctsiz register.
60500 + */
60501 +
60502 +typedef union hctsiz_data {
60503 + /** raw register data */
60504 + uint32_t d32;
60505 +
60506 + /** register bits */
60507 + struct {
60508 + /** Total transfer size in bytes */
60509 + unsigned xfersize:19;
60510 +
60511 + /** Data packets to transfer */
60512 + unsigned pktcnt:10;
60513 +
60514 + /**
60515 + * Packet ID for next data packet
60516 + * 0: DATA0
60517 + * 1: DATA2
60518 + * 2: DATA1
60519 + * 3: MDATA (non-Control), SETUP (Control)
60520 + */
60521 + unsigned pid:2;
60522 +#define DWC_HCTSIZ_DATA0 0
60523 +#define DWC_HCTSIZ_DATA1 2
60524 +#define DWC_HCTSIZ_DATA2 1
60525 +#define DWC_HCTSIZ_MDATA 3
60526 +#define DWC_HCTSIZ_SETUP 3
60527 +
60528 + /** Do PING protocol when 1 */
60529 + unsigned dopng:1;
60530 + } b;
60531 +
60532 + /** register bits */
60533 + struct {
60534 + /** Scheduling information */
60535 + unsigned schinfo:8;
60536 +
60537 + /** Number of transfer descriptors.
60538 + * Max value:
60539 + * 64 in general,
60540 + * 256 only for HS isochronous endpoint.
60541 + */
60542 + unsigned ntd:8;
60543 +
60544 +		/** Reserved */
60545 + unsigned reserved16_28:13;
60546 +
60547 + /**
60548 + * Packet ID for next data packet
60549 + * 0: DATA0
60550 + * 1: DATA2
60551 + * 2: DATA1
60552 + * 3: MDATA (non-Control)
60553 + */
60554 + unsigned pid:2;
60555 +
60556 + /** Do PING protocol when 1 */
60557 + unsigned dopng:1;
60558 + } b_ddma;
60559 +} hctsiz_data_t;
60560 +
60561 +/**
60562 + * This union represents the bit fields in the Host DMA Address
60563 + * Register used in Descriptor DMA mode.
60564 + */
60565 +typedef union hcdma_data {
60566 + /** raw register data */
60567 + uint32_t d32;
60568 + /** register bits */
60569 + struct {
60570 + unsigned reserved0_2:3;
60571 + /** Current Transfer Descriptor. Not used for ISOC */
60572 + unsigned ctd:8;
60573 + /** Start Address of Descriptor List */
60574 + unsigned dma_addr:21;
60575 + } b;
60576 +} hcdma_data_t;
60577 +
60578 +/**
60579 + * This union represents the bit fields in the DMA Descriptor
60580 + * status quadlet for host mode. Read the quadlet into the <i>d32</i> member then
60581 + * set/clear the bits using the <i>b</i>it elements.
60582 + */
60583 +typedef union host_dma_desc_sts {
60584 + /** raw register data */
60585 + uint32_t d32;
60586 + /** quadlet bits */
60587 +
60588 + /* for non-isochronous */
60589 + struct {
60590 + /** Number of bytes */
60591 + unsigned n_bytes:17;
60592 + /** QTD offset to jump when Short Packet received - only for IN EPs */
60593 + unsigned qtd_offset:6;
60594 + /**
60595 + * Set to request the core to jump to alternate QTD if
60596 + * Short Packet received - only for IN EPs
60597 + */
60598 + unsigned a_qtd:1;
60599 + /**
60600 + * Setup Packet bit. When set indicates that buffer contains
60601 + * setup packet.
60602 + */
60603 + unsigned sup:1;
60604 + /** Interrupt On Complete */
60605 + unsigned ioc:1;
60606 + /** End of List */
60607 + unsigned eol:1;
60608 + unsigned reserved27:1;
60609 + /** Rx/Tx Status */
60610 + unsigned sts:2;
60611 +#define DMA_DESC_STS_PKTERR 1
60612 + unsigned reserved30:1;
60613 + /** Active Bit */
60614 + unsigned a:1;
60615 + } b;
60616 + /* for isochronous */
60617 + struct {
60618 + /** Number of bytes */
60619 + unsigned n_bytes:12;
60620 + unsigned reserved12_24:13;
60621 + /** Interrupt On Complete */
60622 + unsigned ioc:1;
60623 + unsigned reserved26_27:2;
60624 + /** Rx/Tx Status */
60625 + unsigned sts:2;
60626 + unsigned reserved30:1;
60627 + /** Active Bit */
60628 + unsigned a:1;
60629 + } b_isoc;
60630 +} host_dma_desc_sts_t;
60631 +
60632 +#define MAX_DMA_DESC_SIZE 131071
60633 +#define MAX_DMA_DESC_NUM_GENERIC 64
60634 +#define MAX_DMA_DESC_NUM_HS_ISOC 256
60635 +#define MAX_FRLIST_EN_NUM 64
60636 +/**
60637 + * Host-mode DMA Descriptor structure
60638 + *
60639 + * DMA Descriptor structure contains two quadlets:
60640 + * Status quadlet and Data buffer pointer.
60641 + */
60642 +typedef struct dwc_otg_host_dma_desc {
60643 + /** DMA Descriptor status quadlet */
60644 + host_dma_desc_sts_t status;
60645 + /** DMA Descriptor data buffer pointer */
60646 + uint32_t buf;
60647 +} dwc_otg_host_dma_desc_t;
60648 +
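For illustration only, a minimal sketch of filling one host-mode descriptor from the two quadlets defined above; the helper name, buffer address and length are hypothetical, and the caller is assumed to keep len within MAX_DMA_DESC_SIZE:

static void fill_host_desc(dwc_otg_host_dma_desc_t *desc,
			   uint32_t buf_dma_addr, uint32_t len, int last)
{
	desc->status.d32 = 0;
	desc->status.b.n_bytes = len;		/* bytes to transfer */
	desc->status.b.ioc = 1;			/* interrupt on complete */
	desc->status.b.eol = last ? 1 : 0;	/* end of descriptor list */
	desc->status.b.a = 1;			/* mark descriptor active */
	desc->buf = buf_dma_addr;		/* DMA address of data buffer */
}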
60649 +/** OTG Host Interface Structure.
60650 + *
60651 + * The OTG Host Interface structure contains information
60652 + * needed to manage the DWC_otg controller acting in host mode. It
60653 + * represents the programming view of the host-specific aspects of the
60654 + * controller.
60655 + */
60656 +typedef struct dwc_otg_host_if {
60657 + /** Host Global Registers starting at offset 400h.*/
60658 + dwc_otg_host_global_regs_t *host_global_regs;
60659 +#define DWC_OTG_HOST_GLOBAL_REG_OFFSET 0x400
60660 +
60661 + /** Host Port 0 Control and Status Register */
60662 + volatile uint32_t *hprt0;
60663 +#define DWC_OTG_HOST_PORT_REGS_OFFSET 0x440
60664 +
60665 + /** Host Channel Specific Registers at offsets 500h-5FCh. */
60666 + dwc_otg_hc_regs_t *hc_regs[MAX_EPS_CHANNELS];
60667 +#define DWC_OTG_HOST_CHAN_REGS_OFFSET 0x500
60668 +#define DWC_OTG_CHAN_REGS_OFFSET 0x20
60669 +
60670 + /* Host configuration information */
60671 + /** Number of Host Channels (range: 1-16) */
60672 + uint8_t num_host_channels;
60673 + /** Periodic EPs supported (0: no, 1: yes) */
60674 + uint8_t perio_eps_supported;
60675 + /** Periodic Tx FIFO Size (Only 1 host periodic Tx FIFO) */
60676 + uint16_t perio_tx_fifo_size;
60677 +
60678 +} dwc_otg_host_if_t;
60679 +
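For illustration only, a minimal sketch showing how a channel's register block address follows from the offsets defined above; the helper name is hypothetical, base is assumed to be the controller's mapped register base, and dwc_otg_hc_regs_t is declared earlier in this header:

/* Channel n's register block sits at base + 0x500 + n * 0x20. */
static dwc_otg_hc_regs_t *hc_regs_for_channel(uint8_t *base, int n)
{
	return (dwc_otg_hc_regs_t *)(base +
			DWC_OTG_HOST_CHAN_REGS_OFFSET +
			n * DWC_OTG_CHAN_REGS_OFFSET);
}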
60680 +/**
60681 + * This union represents the bit fields in the Power and Clock Gating Control
60682 + * Register. Read the register into the <i>d32</i> member then set/clear the
60683 + * bits using the <i>b</i>it elements.
60684 + */
60685 +typedef union pcgcctl_data {
60686 + /** raw register data */
60687 + uint32_t d32;
60688 +
60689 + /** register bits */
60690 + struct {
60691 + /** Stop Pclk */
60692 + unsigned stoppclk:1;
60693 + /** Gate Hclk */
60694 + unsigned gatehclk:1;
60695 + /** Power Clamp */
60696 + unsigned pwrclmp:1;
60697 + /** Reset Power Down Modules */
60698 + unsigned rstpdwnmodule:1;
60699 + /** Reserved */
60700 + unsigned reserved:1;
60701 + /** Enable Sleep Clock Gating (Enbl_L1Gating) */
60702 + unsigned enbl_sleep_gating:1;
60703 + /** PHY In Sleep (PhySleep) */
60704 + unsigned phy_in_sleep:1;
60705 + /** Deep Sleep*/
60706 + unsigned deep_sleep:1;
60707 + unsigned resetaftsusp:1;
60708 + unsigned restoremode:1;
60709 + unsigned enbl_extnd_hiber:1;
60710 + unsigned extnd_hiber_pwrclmp:1;
60711 + unsigned extnd_hiber_switch:1;
60712 + unsigned ess_reg_restored:1;
60713 + unsigned prt_clk_sel:2;
60714 + unsigned port_power:1;
60715 + unsigned max_xcvrselect:2;
60716 + unsigned max_termsel:1;
60717 + unsigned mac_dev_addr:7;
60718 + unsigned p2hd_dev_enum_spd:2;
60719 + unsigned p2hd_prt_spd:2;
60720 + unsigned if_dev_mode:1;
60721 + } b;
60722 +} pcgcctl_data_t;
60723 +
60724 +/**
60725 + * This union represents the bit fields in the Global Data FIFO Software
60726 + * Configuration Register. Read the register into the <i>d32</i> member then
60727 + * set/clear the bits using the <i>b</i>it elements.
60728 + */
60729 +typedef union gdfifocfg_data {
60730 + /* raw register data */
60731 + uint32_t d32;
60732 + /** register bits */
60733 + struct {
60734 + /** OTG Data FIFO depth */
60735 + unsigned gdfifocfg:16;
60736 + /** Start address of EP info controller */
60737 + unsigned epinfobase:16;
60738 + } b;
60739 +} gdfifocfg_data_t;
60740 +
60741 +/**
60742 + * This union represents the bit fields in the Global Power Down Register
60743 + * Register. Read the register into the <i>d32</i> member then set/clear the
60744 + * bits using the <i>b</i>it elements.
60745 + */
60746 +typedef union gpwrdn_data {
60747 + /* raw register data */
60748 + uint32_t d32;
60749 +
60750 + /** register bits */
60751 + struct {
60752 + /** PMU Interrupt Select */
60753 + unsigned pmuintsel:1;
60754 + /** PMU Active */
60755 + unsigned pmuactv:1;
60756 + /** Restore */
60757 + unsigned restore:1;
60758 + /** Power Down Clamp */
60759 + unsigned pwrdnclmp:1;
60760 + /** Power Down Reset */
60761 + unsigned pwrdnrstn:1;
60762 + /** Power Down Switch */
60763 + unsigned pwrdnswtch:1;
60764 + /** Disable VBUS */
60765 + unsigned dis_vbus:1;
60766 + /** Line State Change */
60767 + unsigned lnstschng:1;
60768 + /** Line state change mask */
60769 + unsigned lnstchng_msk:1;
60770 + /** Reset Detected */
60771 + unsigned rst_det:1;
60772 + /** Reset Detect mask */
60773 + unsigned rst_det_msk:1;
60774 + /** Disconnect Detected */
60775 + unsigned disconn_det:1;
60776 + /** Disconnect Detect mask */
60777 + unsigned disconn_det_msk:1;
60778 + /** Connect Detected*/
60779 + unsigned connect_det:1;
60780 + /** Connect Detected Mask*/
60781 + unsigned connect_det_msk:1;
60782 + /** SRP Detected */
60783 + unsigned srp_det:1;
60784 + /** SRP Detect mask */
60785 + unsigned srp_det_msk:1;
60786 + /** Status Change Interrupt */
60787 + unsigned sts_chngint:1;
60788 + /** Status Change Interrupt Mask */
60789 + unsigned sts_chngint_msk:1;
60790 + /** Line State */
60791 + unsigned linestate:2;
60792 + /** Indicates current mode(status of IDDIG signal) */
60793 + unsigned idsts:1;
60794 + /** B Session Valid signal status*/
60795 + unsigned bsessvld:1;
60796 + /** ADP Event Detected */
60797 + unsigned adp_int:1;
60798 + /** Multi Valued ID pin */
60799 + unsigned mult_val_id_bc:5;
60800 +		/** Reserved 29_31 */
60801 + unsigned reserved29_31:3;
60802 + } b;
60803 +} gpwrdn_data_t;
60804 +
60805 +#endif
60806 --- /dev/null
60807 +++ b/drivers/usb/host/dwc_otg/test/Makefile
60808 @@ -0,0 +1,16 @@
60809 +
60810 +PERL=/usr/bin/perl
60811 +PL_TESTS=test_sysfs.pl test_mod_param.pl
60812 +
60813 +.PHONY : test
60814 +test : perl_tests
60815 +
60816 +perl_tests :
60817 + @echo
60818 + @echo Running perl tests
60819 + @for test in $(PL_TESTS); do \
60820 + if $(PERL) ./$$test ; then \
60821 + echo "=======> $$test, PASSED" ; \
60822 + else echo "=======> $$test, FAILED" ; \
60823 + fi \
60824 + done
60825 --- /dev/null
60826 +++ b/drivers/usb/host/dwc_otg/test/dwc_otg_test.pm
60827 @@ -0,0 +1,337 @@
60828 +package dwc_otg_test;
60829 +
60830 +use strict;
60831 +use Exporter ();
60832 +
60833 +use vars qw(@ISA @EXPORT
60834 +$sysfsdir $paramdir $errors $params
60835 +);
60836 +
60837 +@ISA = qw(Exporter);
60838 +
60839 +#
60840 +# Globals
60841 +#
60842 +$sysfsdir = "/sys/devices/lm0";
60843 +$paramdir = "/sys/module/dwc_otg";
60844 +$errors = 0;
60845 +
60846 +$params = [
60847 + {
60848 + NAME => "otg_cap",
60849 + DEFAULT => 0,
60850 + ENUM => [],
60851 + LOW => 0,
60852 + HIGH => 2
60853 + },
60854 + {
60855 + NAME => "dma_enable",
60856 + DEFAULT => 0,
60857 + ENUM => [],
60858 + LOW => 0,
60859 + HIGH => 1
60860 + },
60861 + {
60862 + NAME => "dma_burst_size",
60863 + DEFAULT => 32,
60864 + ENUM => [1, 4, 8, 16, 32, 64, 128, 256],
60865 + LOW => 1,
60866 + HIGH => 256
60867 + },
60868 + {
60869 + NAME => "host_speed",
60870 + DEFAULT => 0,
60871 + ENUM => [],
60872 + LOW => 0,
60873 + HIGH => 1
60874 + },
60875 + {
60876 + NAME => "host_support_fs_ls_low_power",
60877 + DEFAULT => 0,
60878 + ENUM => [],
60879 + LOW => 0,
60880 + HIGH => 1
60881 + },
60882 + {
60883 + NAME => "host_ls_low_power_phy_clk",
60884 + DEFAULT => 0,
60885 + ENUM => [],
60886 + LOW => 0,
60887 + HIGH => 1
60888 + },
60889 + {
60890 + NAME => "dev_speed",
60891 + DEFAULT => 0,
60892 + ENUM => [],
60893 + LOW => 0,
60894 + HIGH => 1
60895 + },
60896 + {
60897 + NAME => "enable_dynamic_fifo",
60898 + DEFAULT => 1,
60899 + ENUM => [],
60900 + LOW => 0,
60901 + HIGH => 1
60902 + },
60903 + {
60904 + NAME => "data_fifo_size",
60905 + DEFAULT => 8192,
60906 + ENUM => [],
60907 + LOW => 32,
60908 + HIGH => 32768
60909 + },
60910 + {
60911 + NAME => "dev_rx_fifo_size",
60912 + DEFAULT => 1064,
60913 + ENUM => [],
60914 + LOW => 16,
60915 + HIGH => 32768
60916 + },
60917 + {
60918 + NAME => "dev_nperio_tx_fifo_size",
60919 + DEFAULT => 1024,
60920 + ENUM => [],
60921 + LOW => 16,
60922 + HIGH => 32768
60923 + },
60924 + {
60925 + NAME => "dev_perio_tx_fifo_size_1",
60926 + DEFAULT => 256,
60927 + ENUM => [],
60928 + LOW => 4,
60929 + HIGH => 768
60930 + },
60931 + {
60932 + NAME => "dev_perio_tx_fifo_size_2",
60933 + DEFAULT => 256,
60934 + ENUM => [],
60935 + LOW => 4,
60936 + HIGH => 768
60937 + },
60938 + {
60939 + NAME => "dev_perio_tx_fifo_size_3",
60940 + DEFAULT => 256,
60941 + ENUM => [],
60942 + LOW => 4,
60943 + HIGH => 768
60944 + },
60945 + {
60946 + NAME => "dev_perio_tx_fifo_size_4",
60947 + DEFAULT => 256,
60948 + ENUM => [],
60949 + LOW => 4,
60950 + HIGH => 768
60951 + },
60952 + {
60953 + NAME => "dev_perio_tx_fifo_size_5",
60954 + DEFAULT => 256,
60955 + ENUM => [],
60956 + LOW => 4,
60957 + HIGH => 768
60958 + },
60959 + {
60960 + NAME => "dev_perio_tx_fifo_size_6",
60961 + DEFAULT => 256,
60962 + ENUM => [],
60963 + LOW => 4,
60964 + HIGH => 768
60965 + },
60966 + {
60967 + NAME => "dev_perio_tx_fifo_size_7",
60968 + DEFAULT => 256,
60969 + ENUM => [],
60970 + LOW => 4,
60971 + HIGH => 768
60972 + },
60973 + {
60974 + NAME => "dev_perio_tx_fifo_size_8",
60975 + DEFAULT => 256,
60976 + ENUM => [],
60977 + LOW => 4,
60978 + HIGH => 768
60979 + },
60980 + {
60981 + NAME => "dev_perio_tx_fifo_size_9",
60982 + DEFAULT => 256,
60983 + ENUM => [],
60984 + LOW => 4,
60985 + HIGH => 768
60986 + },
60987 + {
60988 + NAME => "dev_perio_tx_fifo_size_10",
60989 + DEFAULT => 256,
60990 + ENUM => [],
60991 + LOW => 4,
60992 + HIGH => 768
60993 + },
60994 + {
60995 + NAME => "dev_perio_tx_fifo_size_11",
60996 + DEFAULT => 256,
60997 + ENUM => [],
60998 + LOW => 4,
60999 + HIGH => 768
61000 + },
61001 + {
61002 + NAME => "dev_perio_tx_fifo_size_12",
61003 + DEFAULT => 256,
61004 + ENUM => [],
61005 + LOW => 4,
61006 + HIGH => 768
61007 + },
61008 + {
61009 + NAME => "dev_perio_tx_fifo_size_13",
61010 + DEFAULT => 256,
61011 + ENUM => [],
61012 + LOW => 4,
61013 + HIGH => 768
61014 + },
61015 + {
61016 + NAME => "dev_perio_tx_fifo_size_14",
61017 + DEFAULT => 256,
61018 + ENUM => [],
61019 + LOW => 4,
61020 + HIGH => 768
61021 + },
61022 + {
61023 + NAME => "dev_perio_tx_fifo_size_15",
61024 + DEFAULT => 256,
61025 + ENUM => [],
61026 + LOW => 4,
61027 + HIGH => 768
61028 + },
61029 + {
61030 + NAME => "host_rx_fifo_size",
61031 + DEFAULT => 1024,
61032 + ENUM => [],
61033 + LOW => 16,
61034 + HIGH => 32768
61035 + },
61036 + {
61037 + NAME => "host_nperio_tx_fifo_size",
61038 + DEFAULT => 1024,
61039 + ENUM => [],
61040 + LOW => 16,
61041 + HIGH => 32768
61042 + },
61043 + {
61044 + NAME => "host_perio_tx_fifo_size",
61045 + DEFAULT => 1024,
61046 + ENUM => [],
61047 + LOW => 16,
61048 + HIGH => 32768
61049 + },
61050 + {
61051 + NAME => "max_transfer_size",
61052 + DEFAULT => 65535,
61053 + ENUM => [],
61054 + LOW => 2047,
61055 + HIGH => 65535
61056 + },
61057 + {
61058 + NAME => "max_packet_count",
61059 + DEFAULT => 511,
61060 + ENUM => [],
61061 + LOW => 15,
61062 + HIGH => 511
61063 + },
61064 + {
61065 + NAME => "host_channels",
61066 + DEFAULT => 12,
61067 + ENUM => [],
61068 + LOW => 1,
61069 + HIGH => 16
61070 + },
61071 + {
61072 + NAME => "dev_endpoints",
61073 + DEFAULT => 6,
61074 + ENUM => [],
61075 + LOW => 1,
61076 + HIGH => 15
61077 + },
61078 + {
61079 + NAME => "phy_type",
61080 + DEFAULT => 1,
61081 + ENUM => [],
61082 + LOW => 0,
61083 + HIGH => 2
61084 + },
61085 + {
61086 + NAME => "phy_utmi_width",
61087 + DEFAULT => 16,
61088 + ENUM => [8, 16],
61089 + LOW => 8,
61090 + HIGH => 16
61091 + },
61092 + {
61093 + NAME => "phy_ulpi_ddr",
61094 + DEFAULT => 0,
61095 + ENUM => [],
61096 + LOW => 0,
61097 + HIGH => 1
61098 + },
61099 + ];
61100 +
61101 +
61102 +#
61103 +#
61104 +sub check_arch {
61105 + $_ = `uname -m`;
61106 + chomp;
61107 + unless (m/armv4tl/) {
61108 + warn "# \n# Can't execute on $_. Run on integrator platform.\n# \n";
61109 + return 0;
61110 + }
61111 + return 1;
61112 +}
61113 +
61114 +#
61115 +#
61116 +sub load_module {
61117 + my $params = shift;
61118 + print "\nRemoving Module\n";
61119 + system "rmmod dwc_otg";
61120 + print "Loading Module\n";
61121 + if ($params ne "") {
61122 + print "Module Parameters: $params\n";
61123 + }
61124 + if (system("modprobe dwc_otg $params")) {
61125 + warn "Unable to load module\n";
61126 + return 0;
61127 + }
61128 + return 1;
61129 +}
61130 +
61131 +#
61132 +#
61133 +sub test_status {
61134 + my $arg = shift;
61135 +
61136 + print "\n";
61137 +
61138 + if (defined $arg) {
61139 + warn "WARNING: $arg\n";
61140 + }
61141 +
61142 + if ($errors > 0) {
61143 + warn "TEST FAILED with $errors errors\n";
61144 + return 0;
61145 + } else {
61146 + print "TEST PASSED\n";
61147 + return 0 if (defined $arg);
61148 + }
61149 + return 1;
61150 +}
61151 +
61152 +#
61153 +#
61154 +@EXPORT = qw(
61155 +$sysfsdir
61156 +$paramdir
61157 +$params
61158 +$errors
61159 +check_arch
61160 +load_module
61161 +test_status
61162 +);
61163 +
61164 +1;
61165 --- /dev/null
61166 +++ b/drivers/usb/host/dwc_otg/test/test_mod_param.pl
61167 @@ -0,0 +1,133 @@
61168 +#!/usr/bin/perl -w
61169 +#
61170 +# Run this program on the integrator.
61171 +#
61172 +# - Tests module parameter default values.
61173 +# - Tests setting of valid module parameter values via modprobe.
61174 +# - Tests invalid module parameter values.
61175 +# -----------------------------------------------------------------------------
61176 +use strict;
61177 +use dwc_otg_test;
61178 +
61179 +check_arch() or die;
61180 +
61181 +#
61182 +#
61183 +sub test {
61184 + my ($param,$expected) = @_;
61185 + my $value = get($param);
61186 +
61187 + if ($value == $expected) {
61188 + print "$param = $value, okay\n";
61189 + }
61190 +
61191 + else {
61192 + warn "ERROR: value of $param != $expected, $value\n";
61193 + $errors ++;
61194 + }
61195 +}
61196 +
61197 +#
61198 +#
61199 +sub get {
61200 + my $param = shift;
61201 + my $tmp = `cat $paramdir/$param`;
61202 + chomp $tmp;
61203 + return $tmp;
61204 +}
61205 +
61206 +#
61207 +#
61208 +sub test_main {
61209 +
61210 + print "\nTesting Module Parameters\n";
61211 +
61212 + load_module("") or die;
61213 +
61214 + # Test initial values
61215 + print "\nTesting Default Values\n";
61216 + foreach (@{$params}) {
61217 + test ($_->{NAME}, $_->{DEFAULT});
61218 + }
61219 +
61220 + # Test low value
61221 + print "\nTesting Low Value\n";
61222 + my $cmd_params = "";
61223 + foreach (@{$params}) {
61224 + $cmd_params = $cmd_params . "$_->{NAME}=$_->{LOW} ";
61225 + }
61226 + load_module($cmd_params) or die;
61227 +
61228 + foreach (@{$params}) {
61229 + test ($_->{NAME}, $_->{LOW});
61230 + }
61231 +
61232 + # Test high value
61233 + print "\nTesting High Value\n";
61234 + $cmd_params = "";
61235 + foreach (@{$params}) {
61236 + $cmd_params = $cmd_params . "$_->{NAME}=$_->{HIGH} ";
61237 + }
61238 + load_module($cmd_params) or die;
61239 +
61240 + foreach (@{$params}) {
61241 + test ($_->{NAME}, $_->{HIGH});
61242 + }
61243 +
61244 + # Test Enum
61245 + print "\nTesting Enumerated\n";
61246 + foreach (@{$params}) {
61247 + if (defined $_->{ENUM}) {
61248 + my $value;
61249 + foreach $value (@{$_->{ENUM}}) {
61250 + $cmd_params = "$_->{NAME}=$value";
61251 + load_module($cmd_params) or die;
61252 + test ($_->{NAME}, $value);
61253 + }
61254 + }
61255 + }
61256 +
61257 + # Test Invalid Values
61258 + print "\nTesting Invalid Values\n";
61259 + $cmd_params = "";
61260 + foreach (@{$params}) {
61261 + $cmd_params = $cmd_params . sprintf "$_->{NAME}=%d ", $_->{LOW}-1;
61262 + }
61263 + load_module($cmd_params) or die;
61264 +
61265 + foreach (@{$params}) {
61266 + test ($_->{NAME}, $_->{DEFAULT});
61267 + }
61268 +
61269 + $cmd_params = "";
61270 + foreach (@{$params}) {
61271 + $cmd_params = $cmd_params . sprintf "$_->{NAME}=%d ", $_->{HIGH}+1;
61272 + }
61273 + load_module($cmd_params) or die;
61274 +
61275 + foreach (@{$params}) {
61276 + test ($_->{NAME}, $_->{DEFAULT});
61277 + }
61278 +
61279 + print "\nTesting Enumerated\n";
61280 + foreach (@{$params}) {
61281 + if (defined $_->{ENUM}) {
61282 + my $value;
61283 + foreach $value (@{$_->{ENUM}}) {
61284 + $value = $value + 1;
61285 + $cmd_params = "$_->{NAME}=$value";
61286 + load_module($cmd_params) or die;
61287 + test ($_->{NAME}, $_->{DEFAULT});
61288 + $value = $value - 2;
61289 + $cmd_params = "$_->{NAME}=$value";
61290 + load_module($cmd_params) or die;
61291 + test ($_->{NAME}, $_->{DEFAULT});
61292 + }
61293 + }
61294 + }
61295 +
61296 + test_status() or die;
61297 +}
61298 +
61299 +test_main();
61300 +0;
61301 --- /dev/null
61302 +++ b/drivers/usb/host/dwc_otg/test/test_sysfs.pl
61303 @@ -0,0 +1,193 @@
61304 +#!/usr/bin/perl -w
61305 +#
61306 +# Run this program on the integrator
61307 +# - Tests select sysfs attributes.
61308 +# - Todo ... test more attributes, hnp/srp, buspower/bussuspend, etc.
61309 +# -----------------------------------------------------------------------------
61310 +use strict;
61311 +use dwc_otg_test;
61312 +
61313 +check_arch() or die;
61314 +
61315 +#
61316 +#
61317 +sub test {
61318 + my ($attr,$expected) = @_;
61319 + my $string = get($attr);
61320 +
61321 + if ($string eq $expected) {
61322 + printf("$attr = $string, okay\n");
61323 + }
61324 + else {
61325 + warn "ERROR: value of $attr != $expected, $string\n";
61326 + $errors ++;
61327 + }
61328 +}
61329 +
61330 +#
61331 +#
61332 +sub set {
61333 + my ($reg, $value) = @_;
61334 + system "echo $value > $sysfsdir/$reg";
61335 +}
61336 +
61337 +#
61338 +#
61339 +sub get {
61340 + my $attr = shift;
61341 + my $string = `cat $sysfsdir/$attr`;
61342 + chomp $string;
61343 + if ($string =~ m/\s\=\s/) {
61344 + my $tmp;
61345 + ($tmp, $string) = split /\s=\s/, $string;
61346 + }
61347 + return $string;
61348 +}
61349 +
61350 +#
61351 +#
61352 +sub test_main {
61353 + print("\nTesting Sysfs Attributes\n");
61354 +
61355 + load_module("") or die;
61356 +
61357 + # Test initial values of regoffset/regvalue/guid/gsnpsid
61358 + print("\nTesting Default Values\n");
61359 +
61360 + test("regoffset", "0xffffffff");
61361 + test("regvalue", "invalid offset");
61362 + test("guid", "0x12345678"); # this will fail if it has been changed
61363 + test("gsnpsid", "0x4f54200a");
61364 +
61365 + # Test operation of regoffset/regvalue
61366 + print("\nTesting regoffset\n");
61367 + set('regoffset', '5a5a5a5a');
61368 + test("regoffset", "0xffffffff");
61369 +
61370 + set('regoffset', '0');
61371 + test("regoffset", "0x00000000");
61372 +
61373 + set('regoffset', '40000');
61374 + test("regoffset", "0x00000000");
61375 +
61376 + set('regoffset', '3ffff');
61377 + test("regoffset", "0x0003ffff");
61378 +
61379 + set('regoffset', '1');
61380 + test("regoffset", "0x00000001");
61381 +
61382 + print("\nTesting regvalue\n");
61383 + set('regoffset', '3c');
61384 + test("regvalue", "0x12345678");
61385 + set('regvalue', '5a5a5a5a');
61386 + test("regvalue", "0x5a5a5a5a");
61387 + set('regvalue','a5a5a5a5');
61388 + test("regvalue", "0xa5a5a5a5");
61389 + set('guid','12345678');
61390 +
61391 + # Test HNP Capable
61392 + print("\nTesting HNP Capable bit\n");
61393 + set('hnpcapable', '1');
61394 + test("hnpcapable", "0x1");
61395 + set('hnpcapable','0');
61396 + test("hnpcapable", "0x0");
61397 +
61398 + set('regoffset','0c');
61399 +
61400 + my $old = get('gusbcfg');
61401 + print("setting hnpcapable\n");
61402 + set('hnpcapable', '1');
61403 + test("hnpcapable", "0x1");
61404 + test('gusbcfg', sprintf "0x%08x", (oct ($old) | (1<<9)));
61405 + test('regvalue', sprintf "0x%08x", (oct ($old) | (1<<9)));
61406 +
61407 + $old = get('gusbcfg');
61408 + print("clearing hnpcapable\n");
61409 + set('hnpcapable', '0');
61410 + test("hnpcapable", "0x0");
61411 + test ('gusbcfg', sprintf "0x%08x", oct ($old) & (~(1<<9)));
61412 + test ('regvalue', sprintf "0x%08x", oct ($old) & (~(1<<9)));
61413 +
61414 + # Test SRP Capable
61415 + print("\nTesting SRP Capable bit\n");
61416 + set('srpcapable', '1');
61417 + test("srpcapable", "0x1");
61418 + set('srpcapable','0');
61419 + test("srpcapable", "0x0");
61420 +
61421 + set('regoffset','0c');
61422 +
61423 + $old = get('gusbcfg');
61424 + print("setting srpcapable\n");
61425 + set('srpcapable', '1');
61426 + test("srpcapable", "0x1");
61427 + test('gusbcfg', sprintf "0x%08x", (oct ($old) | (1<<8)));
61428 + test('regvalue', sprintf "0x%08x", (oct ($old) | (1<<8)));
61429 +
61430 + $old = get('gusbcfg');
61431 + print("clearing srpcapable\n");
61432 + set('srpcapable', '0');
61433 + test("srpcapable", "0x0");
61434 + test('gusbcfg', sprintf "0x%08x", oct ($old) & (~(1<<8)));
61435 + test('regvalue', sprintf "0x%08x", oct ($old) & (~(1<<8)));
61436 +
61437 + # Test GGPIO
61438 + print("\nTesting GGPIO\n");
61439 + set('ggpio','5a5a5a5a');
61440 + test('ggpio','0x5a5a0000');
61441 + set('ggpio','a5a5a5a5');
61442 + test('ggpio','0xa5a50000');
61443 + set('ggpio','11110000');
61444 + test('ggpio','0x11110000');
61445 + set('ggpio','00001111');
61446 + test('ggpio','0x00000000');
61447 +
61448 + # Test DEVSPEED
61449 + print("\nTesting DEVSPEED\n");
61450 + set('regoffset','800');
61451 + $old = get('regvalue');
61452 + set('devspeed','0');
61453 + test('devspeed','0x0');
61454 + test('regvalue',sprintf("0x%08x", oct($old) & ~(0x3)));
61455 + set('devspeed','1');
61456 + test('devspeed','0x1');
61457 + test('regvalue',sprintf("0x%08x", oct($old) & ~(0x3) | 1));
61458 + set('devspeed','2');
61459 + test('devspeed','0x2');
61460 + test('regvalue',sprintf("0x%08x", oct($old) & ~(0x3) | 2));
61461 + set('devspeed','3');
61462 + test('devspeed','0x3');
61463 + test('regvalue',sprintf("0x%08x", oct($old) & ~(0x3) | 3));
61464 + set('devspeed','4');
61465 + test('devspeed','0x0');
61466 + test('regvalue',sprintf("0x%08x", oct($old) & ~(0x3)));
61467 + set('devspeed','5');
61468 + test('devspeed','0x1');
61469 + test('regvalue',sprintf("0x%08x", oct($old) & ~(0x3) | 1));
61470 +
61471 +
61472 +	# mode	Returns the current mode: 0 for device mode, 1 for host mode	Read
61473 + # hnp Initiate the Host Negotiation Protocol. Read returns the status. Read/Write
61474 + # srp Initiate the Session Request Protocol. Read returns the status. Read/Write
61475 + # buspower Get or Set the Power State of the bus (0 - Off or 1 - On) Read/Write
61476 + # bussuspend Suspend the USB bus. Read/Write
61477 + # busconnected Get the connection status of the bus Read
61478 +
61479 + # gotgctl Get or set the Core Control Status Register. Read/Write
61480 + ## gusbcfg Get or set the Core USB Configuration Register Read/Write
61481 + # grxfsiz Get or set the Receive FIFO Size Register Read/Write
61482 + # gnptxfsiz Get or set the non-periodic Transmit Size Register Read/Write
61483 + # gpvndctl Get or set the PHY Vendor Control Register Read/Write
61484 + ## ggpio Get the value in the lower 16-bits of the General Purpose IO Register or Set the upper 16 bits. Read/Write
61485 + ## guid Get or set the value of the User ID Register Read/Write
61486 +	## gsnpsid	Get the value of the Synopsys ID Register	Read
61487 + ## devspeed Get or set the device speed setting in the DCFG register Read/Write
61488 + # enumspeed Gets the device enumeration Speed. Read
61489 + # hptxfsiz Get the value of the Host Periodic Transmit FIFO Read
61490 + # hprt0 Get or Set the value in the Host Port Control and Status Register Read/Write
61491 +
61492 + test_status("TEST NYI") or die;
61493 +}
61494 +
61495 +test_main();
61496 +0;