target/linux/brcm2708/patches-3.18/0025-Add-FIQ-patch-to-dwc_otg-driver.-Enable-with-dwc_otg.patch
1 From a29a51d9320d44124fe13457c45663d3051a9452 Mon Sep 17 00:00:00 2001
2 From: popcornmix <popcornmix@gmail.com>
3 Date: Wed, 3 Jul 2013 00:46:42 +0100
4 Subject: [PATCH 025/114] Add FIQ patch to dwc_otg driver. Enable with
5 dwc_otg.fiq_fix_enable=1. Should give about 10% more ARM performance. Thanks
6 to Gordon and Costas
7
8 Avoid dynamic memory allocation for channel lock in USB driver. Thanks ddv2005.
9
10 Add NAK holdoff scheme. Enabled by default, disable with dwc_otg.nak_holdoff_enable=0. Thanks gsh
11
12 Make sure we wait for the reset to finish
13
14 dwc_otg: fix bug in dwc_otg_hcd.c resulting in silent kernel
15 memory corruption, escalating to OOPS under high USB load.
16
17 dwc_otg: Fix unsafe access of QTD during URB enqueue
18
19 In dwc_otg_hcd_urb_enqueue during qtd creation, it was possible that the
20 transaction could complete almost immediately after the qtd was assigned
21 to a host channel during URB enqueue, which meant the qtd pointer was no
22 longer valid, having been completed and removed. Usually, this resulted in
23 an OOPS during URB submission. By predetermining whether transactions
24 need to be queued or not, this unsafe pointer access is avoided.
25
26 This bug was only evident on the Pi model A where a device was attached
27 that had no periodic endpoints (e.g. USB pendrive or some wlan devices).
28
29 dwc_otg: Fix incorrect URB allocation error handling
30
31 If the memory allocation for a dwc_otg_urb failed, the kernel would OOPS
32 because for some reason a member of the *unallocated* struct was set to
33 zero. Error handling changed to fail correctly.
34
35 dwc_otg: fix potential use-after-free case in interrupt handler
36
37 If a transaction had previously aborted, certain interrupts are
38 enabled to track error counts and reset where necessary. On IN
39 endpoints the host generates an ACK interrupt near-simultaneously
40 with completion of transfer. In the case where this transfer had
41 previously had an error, this results in a use-after-free on
42 the QTD memory space with a 1-byte length being overwritten to
43 0x00.
44
45 dwc_otg: add handling of SPLIT transaction data toggle errors
46
47 Previously a data toggle error on packets from a USB1.1 device behind
48 a TT would result in the Pi locking up as the driver never handled
49 the associated interrupt. The patch adds a basic retry mechanism and
50 interrupt acknowledgement to cater for either a chance toggle error or
51 for devices that have a broken initial toggle state (FT8U232/FT232BM).
52
53 dwc_otg: implement tasklet for returning URBs to usbcore hcd layer
54
55 The dwc_otg driver interrupt handler for transfer completion will spend
56 a very long time with interrupts disabled when a URB is completed -
57 this is because usb_hcd_giveback_urb is called from within the handler
58 which for a USB device driver with complicated processing (e.g. webcam)
59 will take an exorbitant amount of time to complete. This results in
60 missed completion interrupts for other USB packets which lead to them
61 being dropped due to microframe overruns.
62
63 This patch splits returning the URB to the usb hcd layer into a
64 high-priority tasklet. This will have most benefit for isochronous IN
65 transfers but will also have incidental benefit where multiple periodic
66 devices are active at once.
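
As a rough illustration only - not the driver's actual code, which uses its own
DWC_TAILQ/DWC_TASK wrappers in the hunks below - the deferred-giveback pattern
looks roughly like this; all names here (deferred_urb, defer_giveback,
completion_tasklet_setup) are invented for the sketch:

#include <linux/interrupt.h>
#include <linux/list.h>
#include <linux/slab.h>
#include <linux/spinlock.h>
#include <linux/usb.h>
#include <linux/usb/hcd.h>

struct deferred_urb {
	struct list_head node;
	struct urb *urb;
	int status;
};

static LIST_HEAD(deferred_urbs);
static DEFINE_SPINLOCK(deferred_lock);
static struct tasklet_struct completion_tasklet;

/* Runs in softirq context, so slow URB ->complete() callbacks (webcams,
 * usb-serial, ...) no longer execute inside the hard IRQ handler. */
static void completion_tasklet_fn(unsigned long data)
{
	struct usb_hcd *hcd = (struct usb_hcd *)data;
	struct deferred_urb *d;
	unsigned long flags;

	spin_lock_irqsave(&deferred_lock, flags);
	while (!list_empty(&deferred_urbs)) {
		d = list_first_entry(&deferred_urbs, struct deferred_urb, node);
		list_del(&d->node);
		/* Drop the lock while calling back into usbcore. */
		spin_unlock_irqrestore(&deferred_lock, flags);

		usb_hcd_giveback_urb(hcd, d->urb, d->status);
		kfree(d);

		spin_lock_irqsave(&deferred_lock, flags);
	}
	spin_unlock_irqrestore(&deferred_lock, flags);
}

/* Call once while setting up the HCD. */
static void completion_tasklet_setup(struct usb_hcd *hcd)
{
	tasklet_init(&completion_tasklet, completion_tasklet_fn,
		     (unsigned long)hcd);
}

/* Called from the transfer-complete interrupt path (IRQs disabled):
 * queue the URB and defer the actual giveback to the tasklet. */
static void defer_giveback(struct urb *urb, int status)
{
	struct deferred_urb *d = kmalloc(sizeof(*d), GFP_ATOMIC);

	if (!d)
		return;
	d->urb = urb;
	d->status = status;
	spin_lock(&deferred_lock);
	list_add_tail(&d->node, &deferred_urbs);
	spin_unlock(&deferred_lock);
	tasklet_hi_schedule(&completion_tasklet);
}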
67
68 dwc_otg: fix NAK holdoff and allow on split transactions only
69
70 This corrects a bug where if a single active non-periodic endpoint
71 had at least one transaction in its qh, on frnum == MAX_FRNUM the qh
72 would get skipped and never get queued again. This would result in
73 a silent device until error detection (automatic or otherwise) would
74 either reset the device or flush and requeue the URBs.
75
76 Additionally the NAK holdoff was enabled for all transactions - this
77 would potentially stall a HS endpoint for 1ms if a previous error state
78 enabled this interrupt and the next response was a NAK. Fix so that
79 only split transactions get held off.
80
81 dwc_otg: Call usb_hcd_unlink_urb_from_ep with lock held in completion handler
82
83 usb_hcd_unlink_urb_from_ep must be called with the HCD lock held. Calling it
84 asynchronously in the tasklet was not safe (regression in
85 c4564d4a1a0a9b10d4419e48239f5d99e88d2667).
86
87 This change unlinks it from the endpoint prior to queueing it for handling in
88 the tasklet, and also adds a check to ensure the urb is OK to be unlinked
89 before doing so.
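
One plausible shape of that check-then-unlink step, run with the HCD lock held
in the completion path, is sketched below; complete_urb_locked() and
defer_giveback() are invented names (the latter from the earlier sketch), not
the driver's exact code:

/* Caller holds the HCD's private lock. */
static void complete_urb_locked(struct usb_hcd *hcd, struct urb *urb, int status)
{
	/* Only unlink if usbcore still considers the URB linked to its endpoint. */
	if (usb_hcd_check_unlink_urb(hcd, urb, status) == 0) {
		usb_hcd_unlink_urb_from_ep(hcd, urb);
		defer_giveback(urb, status);	/* giveback happens in the tasklet */
	}
}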
90
91 NULL pointer dereference kernel oopses had been observed in usb_hcd_giveback_urb
92 when a USB device was unplugged/replugged during data transfer. This effect
93 was reproduced using automated USB port power control, hundreds of replug
94 events were performed during active transfers to confirm that the problem was
95 eliminated.
96
97 USB fix using a FIQ to implement split transactions
98
99 This commit adds a FIQ implementation that schedules
100 the split transactions using a FIQ so we don't get
101 held off by the interrupt latency of Linux.
102
103 dwc_otg: fix device attributes and avoid kernel warnings on boot
104
105 dwc_otg: avoid logging function that can cause panics
106
107 See: https://github.com/raspberrypi/firmware/issues/21
108 Thanks to cleverca22 for fix
109
110 dwc_otg: mask correct interrupts after transaction error recovery
111
112 The dwc_otg driver will unmask certain interrupts on a transaction
113 that previously halted in the error state in order to reset the
114 QTD error count. The various fine-grained interrupt handlers do not
115 consider that other interrupts besides themselves were unmasked.
116
117 By disabling the two other interrupts only ever enabled in DMA mode
118 for this purpose, we can avoid unnecessary function calls in the
119 IRQ handler. This will also prevent an unnecessary FIQ interrupt
120 from being generated if the FIQ is enabled.
121
122 dwc_otg: fiq: prevent FIQ thrash and incorrect state passing to IRQ
123
124 In the case of a transaction to a device that had previously aborted
125 due to an error, several interrupts are enabled to reset the error
126 count when a device responds. This has the side-effect of making the
127 FIQ thrash because the hardware will generate multiple instances of
128 a NAK on an IN bulk/interrupt endpoint and multiple instances of ACK
129 on an OUT bulk/interrupt endpoint. Make the FIQ mask and clear the
130 associated interrupts.
131
132 Additionally, on non-split transactions make sure that only unmasked
133 interrupts are cleared. This caused a hard-to-trigger but serious
134 race condition when you had the combination of an endpoint awaiting
135 error recovery and a transaction completed on an endpoint - due to
136 the sequencing and timing of interrupts generated by the dwc_otg core,
137 it was possible to confuse the IRQ handler.
138
139 Fix function tracing
140
141 dwc_otg: whitespace cleanup in dwc_otg_urb_enqueue
142
143 dwc_otg: prevent OOPSes during device disconnects
144
145 The dwc_otg_urb_enqueue function is thread-unsafe. In particular the
146 access of urb->hcpriv, usb_hcd_link_urb_to_ep, dwc_otg_urb->qtd and
147 friends does not occur within a critical section and so if a device
148 was unplugged during activity there was a high chance that the
149 usbcore hub_thread would try to disable the endpoint with partially-
150 formed entries in the URB queue. This would result in BUG() or null
151 pointer dereferences.
152
153 Fix so that access of urb->hcpriv, enqueuing to the hardware and
154 adding to usbcore endpoint URB lists is contained within a single
155 critical section.
156
157 dwc_otg: prevent BUG() in TT allocation if hub address is > 16
158
159 A fixed-size array is used to track TT allocation. This was
160 previously set to 16 which caused a crash because
161 dwc_otg_hcd_allocate_port would read past the end of the array.
162
163 This was hit if a hub was plugged in which enumerated as addr > 16,
164 due to previous device resets or unplugs.
165
166 Also add #ifdef FIQ_DEBUG around hcd->hub_port_alloc[], which grows
167 to a large size if 128 hub addresses are supported. This field is
168 for debug use only, to track in which frame an allocation happened.
169
170 dwc_otg: make channel halts with unknown state less damaging
171
172 If the IRQ handler received a channel halt interrupt through the FIQ
173 with no other bits set, it would not release the host
174 channel and the URB would never complete.
175
176 Add catchall handling to treat as a transaction error and retry.
177
178 dwc_otg: fiq_split: use TTs with more granularity
179
180 This fixes certain issues with split transaction scheduling.
181
182 - Isochronous multi-packet OUT transactions now hog the TT until
183 they are completed - this prevents hubs aborting transactions
184 if they get a periodic start-split out-of-order
185 - Don't perform TT allocation on non-periodic endpoints - this
186 allows simultaneous use of the TT's bulk/control and periodic
187 transaction buffers
188
189 This commit will mainly affect USB audio playback.
190
191 dwc_otg: fix potential sleep while atomic during urb enqueue
192
193 Fixes a regression introduced with eb1b482a. kmalloc() calls from
194 dwc_otg_hcd_qtd_add / dwc_otg_hcd_qtd_create did not always have
195 the GFP_ATOMIC flag set. Force this flag when inside the larger
196 critical section.
197
198 dwc_otg: make fiq_split_enable imply fiq_fix_enable
199
200 Failing to set up the FIQ correctly would result in
201 "IRQ 32: nobody cared" errors in dmesg.
202
203 dwc_otg: prevent crashes on host port disconnects
204
205 Fix several issues resulting in crashes or inconsistent state
206 if a Model A root port was disconnected.
207
208 - Clean up queue heads properly in kill_urbs_in_qh_list by
209 removing the empty QHs from the schedule lists
210 - Set the halt status properly to prevent IRQ handlers from
211 using freed memory
212 - Add fiq_split related cleanup for saved registers
213 - Make microframe scheduling reclaim host channels if
214 active during a disconnect
215 - Abort URBs with -ESHUTDOWN status response, informing
216 device drivers so they respond in a more correct fashion
217 and don't try to resubmit URBs
218 - Prevent IRQ handlers from attempting to handle channel
219 interrupts if the associated URB was dequeued (and the
220 driver state was cleared)
221
222 dwc_otg: prevent leaking URBs during enqueue
223
224 A dwc_otg_urb would get leaked if the HCD enqueue function
225 failed for any reason. Free the URB at the appropriate points.
226
227 dwc_otg: Enable NAK holdoff for control split transactions
228
229 Certain low-speed devices take a very long time to complete a
230 data or status stage of a control transaction, producing NAK
231 responses until they complete internal processing - the USB 2.0
232 spec limit is up to 500 ms. This causes the same type of interrupt
233 storm as seen with USB-serial dongles prior to c8edb238.
234
235 In certain circumstances, usually while booting, this interrupt
236 storm could cause SD card timeouts.
237
238 dwc_otg: Fix for occasional lockup on boot when doing a USB reset
239
240 dwc_otg: Don't issue traffic to LS devices in FS mode
241
242 Issuing low-speed packets when the root port is in full-speed mode
243 causes the root port to stop responding. Explicitly fail when
244 enqueuing URBs to a LS endpoint on a FS bus.
245
246 Fix ARM architecture issue with local_irq_restore()
247
248 If local_fiq_enable() is called before a local_irq_restore(flags) where
249 the flags variable has the F bit set, the FIQ will be erroneously disabled.
250
251 Fixup arch_local_irq_restore to avoid trampling the F bit in CPSR.
252
253 Also fix some of the hacks implemented for previous dwc_otg
254 incarnations.
255 ---
256 arch/arm/Kconfig | 1 +
257 arch/arm/include/asm/irqflags.h | 16 +-
258 arch/arm/kernel/fiqasm.S | 4 +
259 arch/arm/mach-bcm2708/armctrl.c | 19 +-
260 arch/arm/mach-bcm2708/bcm2708.c | 29 +-
261 arch/arm/mach-bcm2708/include/mach/irqs.h | 153 ++---
262 .../usb/host/dwc_common_port/dwc_common_linux.c | 11 +
263 drivers/usb/host/dwc_common_port/dwc_list.h | 14 +-
264 drivers/usb/host/dwc_common_port/dwc_os.h | 2 +
265 drivers/usb/host/dwc_otg/Makefile | 1 +
266 drivers/usb/host/dwc_otg/dwc_otg_attr.c | 14 +-
267 drivers/usb/host/dwc_otg/dwc_otg_cil_intr.c | 47 +-
268 drivers/usb/host/dwc_otg/dwc_otg_dbg.h | 1 +
269 drivers/usb/host/dwc_otg/dwc_otg_driver.c | 52 +-
270 drivers/usb/host/dwc_otg/dwc_otg_hcd.c | 303 +++++++--
271 drivers/usb/host/dwc_otg/dwc_otg_hcd.h | 37 +-
272 drivers/usb/host/dwc_otg/dwc_otg_hcd_ddma.c | 3 +-
273 drivers/usb/host/dwc_otg/dwc_otg_hcd_if.h | 5 +
274 drivers/usb/host/dwc_otg/dwc_otg_hcd_intr.c | 705 ++++++++++++++++++++-
275 drivers/usb/host/dwc_otg/dwc_otg_hcd_linux.c | 159 +++--
276 drivers/usb/host/dwc_otg/dwc_otg_hcd_queue.c | 53 +-
277 drivers/usb/host/dwc_otg/dwc_otg_mphi_fix.c | 113 ++++
278 drivers/usb/host/dwc_otg/dwc_otg_mphi_fix.h | 48 ++
279 drivers/usb/host/dwc_otg/dwc_otg_os_dep.h | 3 +
280 drivers/usb/host/dwc_otg/dwc_otg_pcd_intr.c | 2 +-
281 25 files changed, 1544 insertions(+), 251 deletions(-)
282 create mode 100755 drivers/usb/host/dwc_otg/dwc_otg_mphi_fix.c
283 create mode 100755 drivers/usb/host/dwc_otg/dwc_otg_mphi_fix.h
284
285 --- a/arch/arm/Kconfig
286 +++ b/arch/arm/Kconfig
287 @@ -395,6 +395,7 @@ config ARCH_BCM2708
288 select ARM_ERRATA_411920
289 select MACH_BCM2708
290 select VC4
291 + select FIQ
292 help
293 This enables support for Broadcom BCM2708 boards.
294
295 --- a/arch/arm/include/asm/irqflags.h
296 +++ b/arch/arm/include/asm/irqflags.h
297 @@ -145,12 +145,22 @@ static inline unsigned long arch_local_s
298 }
299
300 /*
301 - * restore saved IRQ & FIQ state
302 + * restore saved IRQ state
303 */
304 static inline void arch_local_irq_restore(unsigned long flags)
305 {
306 - asm volatile(
307 - " msr " IRQMASK_REG_NAME_W ", %0 @ local_irq_restore"
308 + unsigned long temp = 0;
309 + flags &= ~(1 << 6);
310 + asm volatile (
311 + " mrs %0, cpsr"
312 + : "=r" (temp)
313 + :
314 + : "memory", "cc");
315 + /* Preserve FIQ bit */
316 + temp &= (1 << 6);
317 + flags = flags | temp;
318 + asm volatile (
319 + " msr cpsr_c, %0 @ local_irq_restore"
320 :
321 : "r" (flags)
322 : "memory", "cc");
323 --- a/arch/arm/kernel/fiqasm.S
324 +++ b/arch/arm/kernel/fiqasm.S
325 @@ -47,3 +47,7 @@ ENTRY(__get_fiq_regs)
326 mov r0, r0 @ avoid hazard prior to ARMv4
327 ret lr
328 ENDPROC(__get_fiq_regs)
329 +
330 +ENTRY(__FIQ_Branch)
331 + mov pc, r8
332 +ENDPROC(__FIQ_Branch)
333 --- a/arch/arm/mach-bcm2708/armctrl.c
334 +++ b/arch/arm/mach-bcm2708/armctrl.c
335 @@ -52,8 +52,12 @@ static void armctrl_mask_irq(struct irq_
336 0
337 };
338
339 - unsigned int data = (unsigned int)irq_get_chip_data(d->irq);
340 - writel(1 << (data & 0x1f), __io_address(disables[(data >> 5) & 0x3]));
341 + if (d->irq >= FIQ_START) {
342 + writel(0, __io_address(ARM_IRQ_FAST));
343 + } else {
344 + unsigned int data = (unsigned int)irq_get_chip_data(d->irq);
345 + writel(1 << (data & 0x1f), __io_address(disables[(data >> 5) & 0x3]));
346 + }
347 }
348
349 static void armctrl_unmask_irq(struct irq_data *d)
350 @@ -65,8 +69,14 @@ static void armctrl_unmask_irq(struct ir
351 0
352 };
353
354 - unsigned int data = (unsigned int)irq_get_chip_data(d->irq);
355 - writel(1 << (data & 0x1f), __io_address(enables[(data >> 5) & 0x3]));
356 + if (d->irq >= FIQ_START) {
357 + unsigned int data =
358 + (unsigned int)irq_get_chip_data(d->irq) - FIQ_START;
359 + writel(0x80 | data, __io_address(ARM_IRQ_FAST));
360 + } else {
361 + unsigned int data = (unsigned int)irq_get_chip_data(d->irq);
362 + writel(1 << (data & 0x1f), __io_address(enables[(data >> 5) & 0x3]));
363 + }
364 }
365
366 #if defined(CONFIG_PM)
367 @@ -204,5 +214,6 @@ int __init armctrl_init(void __iomem * b
368 }
369
370 armctrl_pm_register(base, irq_start, resume_sources);
371 + init_FIQ(FIQ_START);
372 return 0;
373 }
374 --- a/arch/arm/mach-bcm2708/bcm2708.c
375 +++ b/arch/arm/mach-bcm2708/bcm2708.c
376 @@ -321,12 +321,32 @@ static struct resource bcm2708_usb_resou
377 .flags = IORESOURCE_MEM,
378 },
379 [1] = {
380 - .start = IRQ_USB,
381 - .end = IRQ_USB,
382 + .start = MPHI_BASE,
383 + .end = MPHI_BASE + SZ_4K - 1,
384 + .flags = IORESOURCE_MEM,
385 + },
386 + [2] = {
387 + .start = IRQ_HOSTPORT,
388 + .end = IRQ_HOSTPORT,
389 .flags = IORESOURCE_IRQ,
390 },
391 };
392
393 +bool fiq_fix_enable = true;
394 +
395 +static struct resource bcm2708_usb_resources_no_fiq_fix[] = {
396 + [0] = {
397 + .start = USB_BASE,
398 + .end = USB_BASE + SZ_128K - 1,
399 + .flags = IORESOURCE_MEM,
400 + },
401 + [1] = {
402 + .start = IRQ_USB,
403 + .end = IRQ_USB,
404 + .flags = IORESOURCE_IRQ,
405 + },
406 +};
407 +
408 static u64 usb_dmamask = DMA_BIT_MASK(DMA_MASK_BITS_COMMON);
409
410 static struct platform_device bcm2708_usb_device = {
411 @@ -681,6 +701,11 @@ void __init bcm2708_init(void)
412 #endif
413 bcm_register_device(&bcm2708_systemtimer_device);
414 bcm_register_device(&bcm2708_fb_device);
415 + if (!fiq_fix_enable)
416 + {
417 + bcm2708_usb_device.resource = bcm2708_usb_resources_no_fiq_fix;
418 + bcm2708_usb_device.num_resources = ARRAY_SIZE(bcm2708_usb_resources_no_fiq_fix);
419 + }
420 bcm_register_device(&bcm2708_usb_device);
421 bcm_register_device(&bcm2708_uart1_device);
422 bcm_register_device(&bcm2708_powerman_device);
423 --- a/arch/arm/mach-bcm2708/include/mach/irqs.h
424 +++ b/arch/arm/mach-bcm2708/include/mach/irqs.h
425 @@ -106,87 +106,90 @@
426 #define IRQ_PENDING1 (IRQ_ARMCTRL_START + INTERRUPT_PENDING1)
427 #define IRQ_PENDING2 (IRQ_ARMCTRL_START + INTERRUPT_PENDING2)
428
429 +#define FIQ_START HARD_IRQS
430 +
431 /*
432 * FIQ interrupts definitions are the same as the INT definitions.
433 */
434 -#define FIQ_TIMER0 INT_TIMER0
435 -#define FIQ_TIMER1 INT_TIMER1
436 -#define FIQ_TIMER2 INT_TIMER2
437 -#define FIQ_TIMER3 INT_TIMER3
438 -#define FIQ_CODEC0 INT_CODEC0
439 -#define FIQ_CODEC1 INT_CODEC1
440 -#define FIQ_CODEC2 INT_CODEC2
441 -#define FIQ_JPEG INT_JPEG
442 -#define FIQ_ISP INT_ISP
443 -#define FIQ_USB INT_USB
444 -#define FIQ_3D INT_3D
445 -#define FIQ_TRANSPOSER INT_TRANSPOSER
446 -#define FIQ_MULTICORESYNC0 INT_MULTICORESYNC0
447 -#define FIQ_MULTICORESYNC1 INT_MULTICORESYNC1
448 -#define FIQ_MULTICORESYNC2 INT_MULTICORESYNC2
449 -#define FIQ_MULTICORESYNC3 INT_MULTICORESYNC3
450 -#define FIQ_DMA0 INT_DMA0
451 -#define FIQ_DMA1 INT_DMA1
452 -#define FIQ_DMA2 INT_DMA2
453 -#define FIQ_DMA3 INT_DMA3
454 -#define FIQ_DMA4 INT_DMA4
455 -#define FIQ_DMA5 INT_DMA5
456 -#define FIQ_DMA6 INT_DMA6
457 -#define FIQ_DMA7 INT_DMA7
458 -#define FIQ_DMA8 INT_DMA8
459 -#define FIQ_DMA9 INT_DMA9
460 -#define FIQ_DMA10 INT_DMA10
461 -#define FIQ_DMA11 INT_DMA11
462 -#define FIQ_DMA12 INT_DMA12
463 -#define FIQ_AUX INT_AUX
464 -#define FIQ_ARM INT_ARM
465 -#define FIQ_VPUDMA INT_VPUDMA
466 -#define FIQ_HOSTPORT INT_HOSTPORT
467 -#define FIQ_VIDEOSCALER INT_VIDEOSCALER
468 -#define FIQ_CCP2TX INT_CCP2TX
469 -#define FIQ_SDC INT_SDC
470 -#define FIQ_DSI0 INT_DSI0
471 -#define FIQ_AVE INT_AVE
472 -#define FIQ_CAM0 INT_CAM0
473 -#define FIQ_CAM1 INT_CAM1
474 -#define FIQ_HDMI0 INT_HDMI0
475 -#define FIQ_HDMI1 INT_HDMI1
476 -#define FIQ_PIXELVALVE1 INT_PIXELVALVE1
477 -#define FIQ_I2CSPISLV INT_I2CSPISLV
478 -#define FIQ_DSI1 INT_DSI1
479 -#define FIQ_PWA0 INT_PWA0
480 -#define FIQ_PWA1 INT_PWA1
481 -#define FIQ_CPR INT_CPR
482 -#define FIQ_SMI INT_SMI
483 -#define FIQ_GPIO0 INT_GPIO0
484 -#define FIQ_GPIO1 INT_GPIO1
485 -#define FIQ_GPIO2 INT_GPIO2
486 -#define FIQ_GPIO3 INT_GPIO3
487 -#define FIQ_I2C INT_I2C
488 -#define FIQ_SPI INT_SPI
489 -#define FIQ_I2SPCM INT_I2SPCM
490 -#define FIQ_SDIO INT_SDIO
491 -#define FIQ_UART INT_UART
492 -#define FIQ_SLIMBUS INT_SLIMBUS
493 -#define FIQ_VEC INT_VEC
494 -#define FIQ_CPG INT_CPG
495 -#define FIQ_RNG INT_RNG
496 -#define FIQ_ARASANSDIO INT_ARASANSDIO
497 -#define FIQ_AVSPMON INT_AVSPMON
498 +#define FIQ_TIMER0 (FIQ_START+INTERRUPT_TIMER0)
499 +#define FIQ_TIMER1 (FIQ_START+INTERRUPT_TIMER1)
500 +#define FIQ_TIMER2 (FIQ_START+INTERRUPT_TIMER2)
501 +#define FIQ_TIMER3 (FIQ_START+INTERRUPT_TIMER3)
502 +#define FIQ_CODEC0 (FIQ_START+INTERRUPT_CODEC0)
503 +#define FIQ_CODEC1 (FIQ_START+INTERRUPT_CODEC1)
504 +#define FIQ_CODEC2 (FIQ_START+INTERRUPT_CODEC2)
505 +#define FIQ_JPEG (FIQ_START+INTERRUPT_JPEG)
506 +#define FIQ_ISP (FIQ_START+INTERRUPT_ISP)
507 +#define FIQ_USB (FIQ_START+INTERRUPT_USB)
508 +#define FIQ_3D (FIQ_START+INTERRUPT_3D)
509 +#define FIQ_TRANSPOSER (FIQ_START+INTERRUPT_TRANSPOSER)
510 +#define FIQ_MULTICORESYNC0 (FIQ_START+INTERRUPT_MULTICORESYNC0)
511 +#define FIQ_MULTICORESYNC1 (FIQ_START+INTERRUPT_MULTICORESYNC1)
512 +#define FIQ_MULTICORESYNC2 (FIQ_START+INTERRUPT_MULTICORESYNC2)
513 +#define FIQ_MULTICORESYNC3 (FIQ_START+INTERRUPT_MULTICORESYNC3)
514 +#define FIQ_DMA0 (FIQ_START+INTERRUPT_DMA0)
515 +#define FIQ_DMA1 (FIQ_START+INTERRUPT_DMA1)
516 +#define FIQ_DMA2 (FIQ_START+INTERRUPT_DMA2)
517 +#define FIQ_DMA3 (FIQ_START+INTERRUPT_DMA3)
518 +#define FIQ_DMA4 (FIQ_START+INTERRUPT_DMA4)
519 +#define FIQ_DMA5 (FIQ_START+INTERRUPT_DMA5)
520 +#define FIQ_DMA6 (FIQ_START+INTERRUPT_DMA6)
521 +#define FIQ_DMA7 (FIQ_START+INTERRUPT_DMA7)
522 +#define FIQ_DMA8 (FIQ_START+INTERRUPT_DMA8)
523 +#define FIQ_DMA9 (FIQ_START+INTERRUPT_DMA9)
524 +#define FIQ_DMA10 (FIQ_START+INTERRUPT_DMA10)
525 +#define FIQ_DMA11 (FIQ_START+INTERRUPT_DMA11)
526 +#define FIQ_DMA12 (FIQ_START+INTERRUPT_DMA12)
527 +#define FIQ_AUX (FIQ_START+INTERRUPT_AUX)
528 +#define FIQ_ARM (FIQ_START+INTERRUPT_ARM)
529 +#define FIQ_VPUDMA (FIQ_START+INTERRUPT_VPUDMA)
530 +#define FIQ_HOSTPORT (FIQ_START+INTERRUPT_HOSTPORT)
531 +#define FIQ_VIDEOSCALER (FIQ_START+INTERRUPT_VIDEOSCALER)
532 +#define FIQ_CCP2TX (FIQ_START+INTERRUPT_CCP2TX)
533 +#define FIQ_SDC (FIQ_START+INTERRUPT_SDC)
534 +#define FIQ_DSI0 (FIQ_START+INTERRUPT_DSI0)
535 +#define FIQ_AVE (FIQ_START+INTERRUPT_AVE)
536 +#define FIQ_CAM0 (FIQ_START+INTERRUPT_CAM0)
537 +#define FIQ_CAM1 (FIQ_START+INTERRUPT_CAM1)
538 +#define FIQ_HDMI0 (FIQ_START+INTERRUPT_HDMI0)
539 +#define FIQ_HDMI1 (FIQ_START+INTERRUPT_HDMI1)
540 +#define FIQ_PIXELVALVE1 (FIQ_START+INTERRUPT_PIXELVALVE1)
541 +#define FIQ_I2CSPISLV (FIQ_START+INTERRUPT_I2CSPISLV)
542 +#define FIQ_DSI1 (FIQ_START+INTERRUPT_DSI1)
543 +#define FIQ_PWA0 (FIQ_START+INTERRUPT_PWA0)
544 +#define FIQ_PWA1 (FIQ_START+INTERRUPT_PWA1)
545 +#define FIQ_CPR (FIQ_START+INTERRUPT_CPR)
546 +#define FIQ_SMI (FIQ_START+INTERRUPT_SMI)
547 +#define FIQ_GPIO0 (FIQ_START+INTERRUPT_GPIO0)
548 +#define FIQ_GPIO1 (FIQ_START+INTERRUPT_GPIO1)
549 +#define FIQ_GPIO2 (FIQ_START+INTERRUPT_GPIO2)
550 +#define FIQ_GPIO3 (FIQ_START+INTERRUPT_GPIO3)
551 +#define FIQ_I2C (FIQ_START+INTERRUPT_I2C)
552 +#define FIQ_SPI (FIQ_START+INTERRUPT_SPI)
553 +#define FIQ_I2SPCM (FIQ_START+INTERRUPT_I2SPCM)
554 +#define FIQ_SDIO (FIQ_START+INTERRUPT_SDIO)
555 +#define FIQ_UART (FIQ_START+INTERRUPT_UART)
556 +#define FIQ_SLIMBUS (FIQ_START+INTERRUPT_SLIMBUS)
557 +#define FIQ_VEC (FIQ_START+INTERRUPT_VEC)
558 +#define FIQ_CPG (FIQ_START+INTERRUPT_CPG)
559 +#define FIQ_RNG (FIQ_START+INTERRUPT_RNG)
560 +#define FIQ_ARASANSDIO (FIQ_START+INTERRUPT_ARASANSDIO)
561 +#define FIQ_AVSPMON (FIQ_START+INTERRUPT_AVSPMON)
562
563 -#define FIQ_ARM_TIMER INT_ARM_TIMER
564 -#define FIQ_ARM_MAILBOX INT_ARM_MAILBOX
565 -#define FIQ_ARM_DOORBELL_0 INT_ARM_DOORBELL_0
566 -#define FIQ_ARM_DOORBELL_1 INT_ARM_DOORBELL_1
567 -#define FIQ_VPU0_HALTED INT_VPU0_HALTED
568 -#define FIQ_VPU1_HALTED INT_VPU1_HALTED
569 -#define FIQ_ILLEGAL_TYPE0 INT_ILLEGAL_TYPE0
570 -#define FIQ_ILLEGAL_TYPE1 INT_ILLEGAL_TYPE1
571 -#define FIQ_PENDING1 INT_PENDING1
572 -#define FIQ_PENDING2 INT_PENDING2
573 +#define FIQ_ARM_TIMER (FIQ_START+INTERRUPT_ARM_TIMER)
574 +#define FIQ_ARM_MAILBOX (FIQ_START+INTERRUPT_ARM_MAILBOX)
575 +#define FIQ_ARM_DOORBELL_0 (FIQ_START+INTERRUPT_ARM_DOORBELL_0)
576 +#define FIQ_ARM_DOORBELL_1 (FIQ_START+INTERRUPT_ARM_DOORBELL_1)
577 +#define FIQ_VPU0_HALTED (FIQ_START+INTERRUPT_VPU0_HALTED)
578 +#define FIQ_VPU1_HALTED (FIQ_START+INTERRUPT_VPU1_HALTED)
579 +#define FIQ_ILLEGAL_TYPE0 (FIQ_START+INTERRUPT_ILLEGAL_TYPE0)
580 +#define FIQ_ILLEGAL_TYPE1 (FIQ_START+INTERRUPT_ILLEGAL_TYPE1)
581 +#define FIQ_PENDING1 (FIQ_START+INTERRUPT_PENDING1)
582 +#define FIQ_PENDING2 (FIQ_START+INTERRUPT_PENDING2)
583
584 #define HARD_IRQS (64 + 21)
585 -#define GPIO_IRQ_START (HARD_IRQS)
586 +#define FIQ_IRQS (64 + 21)
587 +#define GPIO_IRQ_START (HARD_IRQS + FIQ_IRQS)
588 #define GPIO_IRQS (32*5)
589 #define SPARE_ALLOC_IRQS 64
590 #define BCM2708_ALLOC_IRQS (HARD_IRQS+FIQ_IRQS+GPIO_IRQS+SPARE_ALLOC_IRQS)
591 --- a/drivers/usb/host/dwc_common_port/dwc_common_linux.c
592 +++ b/drivers/usb/host/dwc_common_port/dwc_common_linux.c
593 @@ -580,7 +580,13 @@ void DWC_WRITE_REG64(uint64_t volatile *
594
595 void DWC_MODIFY_REG32(uint32_t volatile *reg, uint32_t clear_mask, uint32_t set_mask)
596 {
597 + unsigned long flags;
598 +
599 + local_irq_save(flags);
600 + local_fiq_disable();
601 writel((readl(reg) & ~clear_mask) | set_mask, reg);
602 + local_fiq_enable();
603 + local_irq_restore(flags);
604 }
605
606 #if 0
607 @@ -995,6 +1001,11 @@ void DWC_TASK_SCHEDULE(dwc_tasklet_t *ta
608 tasklet_schedule(&task->t);
609 }
610
611 +void DWC_TASK_HI_SCHEDULE(dwc_tasklet_t *task)
612 +{
613 + tasklet_hi_schedule(&task->t);
614 +}
615 +
616
617 /* workqueues
618 - run in process context (can sleep)
619 --- a/drivers/usb/host/dwc_common_port/dwc_list.h
620 +++ b/drivers/usb/host/dwc_common_port/dwc_list.h
621 @@ -384,17 +384,17 @@ struct { \
622 #define DWC_TAILQ_PREV(elm, headname, field) \
623 (*(((struct headname *)((elm)->field.tqe_prev))->tqh_last))
624 #define DWC_TAILQ_EMPTY(head) \
625 - (TAILQ_FIRST(head) == TAILQ_END(head))
626 + (DWC_TAILQ_FIRST(head) == DWC_TAILQ_END(head))
627
628 #define DWC_TAILQ_FOREACH(var, head, field) \
629 - for((var) = TAILQ_FIRST(head); \
630 - (var) != TAILQ_END(head); \
631 - (var) = TAILQ_NEXT(var, field))
632 + for ((var) = DWC_TAILQ_FIRST(head); \
633 + (var) != DWC_TAILQ_END(head); \
634 + (var) = DWC_TAILQ_NEXT(var, field))
635
636 #define DWC_TAILQ_FOREACH_REVERSE(var, head, headname, field) \
637 - for((var) = TAILQ_LAST(head, headname); \
638 - (var) != TAILQ_END(head); \
639 - (var) = TAILQ_PREV(var, headname, field))
640 + for ((var) = DWC_TAILQ_LAST(head, headname); \
641 + (var) != DWC_TAILQ_END(head); \
642 + (var) = DWC_TAILQ_PREV(var, headname, field))
643
644 /*
645 * Tail queue functions.
646 --- a/drivers/usb/host/dwc_common_port/dwc_os.h
647 +++ b/drivers/usb/host/dwc_common_port/dwc_os.h
648 @@ -982,6 +982,8 @@ extern void DWC_TASK_FREE(dwc_tasklet_t
649 extern void DWC_TASK_SCHEDULE(dwc_tasklet_t *task);
650 #define dwc_task_schedule DWC_TASK_SCHEDULE
651
652 +extern void DWC_TASK_HI_SCHEDULE(dwc_tasklet_t *task);
653 +#define dwc_task_hi_schedule DWC_TASK_HI_SCHEDULE
654
655 /** @name Timer
656 *
657 --- a/drivers/usb/host/dwc_otg/Makefile
658 +++ b/drivers/usb/host/dwc_otg/Makefile
659 @@ -36,6 +36,7 @@ dwc_otg-objs += dwc_otg_cil.o dwc_otg_ci
660 dwc_otg-objs += dwc_otg_pcd_linux.o dwc_otg_pcd.o dwc_otg_pcd_intr.o
661 dwc_otg-objs += dwc_otg_hcd.o dwc_otg_hcd_linux.o dwc_otg_hcd_intr.o dwc_otg_hcd_queue.o dwc_otg_hcd_ddma.o
662 dwc_otg-objs += dwc_otg_adp.o
663 +dwc_otg-objs += dwc_otg_mphi_fix.o
664 ifneq ($(CFI),)
665 dwc_otg-objs += dwc_otg_cfi.o
666 endif
667 --- a/drivers/usb/host/dwc_otg/dwc_otg_attr.c
668 +++ b/drivers/usb/host/dwc_otg/dwc_otg_attr.c
669 @@ -909,7 +909,7 @@ static ssize_t regdump_show(struct devic
670 return sprintf(buf, "Register Dump\n");
671 }
672
673 -DEVICE_ATTR(regdump, S_IRUGO | S_IWUSR, regdump_show, 0);
674 +DEVICE_ATTR(regdump, S_IRUGO, regdump_show, 0);
675
676 /**
677 * Dump global registers and either host or device registers (depending on the
678 @@ -920,12 +920,12 @@ static ssize_t spramdump_show(struct dev
679 {
680 dwc_otg_device_t *otg_dev = dwc_otg_drvdev(_dev);
681
682 - dwc_otg_dump_spram(otg_dev->core_if);
683 + //dwc_otg_dump_spram(otg_dev->core_if);
684
685 return sprintf(buf, "SPRAM Dump\n");
686 }
687
688 -DEVICE_ATTR(spramdump, S_IRUGO | S_IWUSR, spramdump_show, 0);
689 +DEVICE_ATTR(spramdump, S_IRUGO, spramdump_show, 0);
690
691 /**
692 * Dump the current hcd state.
693 @@ -940,7 +940,7 @@ static ssize_t hcddump_show(struct devic
694 return sprintf(buf, "HCD Dump\n");
695 }
696
697 -DEVICE_ATTR(hcddump, S_IRUGO | S_IWUSR, hcddump_show, 0);
698 +DEVICE_ATTR(hcddump, S_IRUGO, hcddump_show, 0);
699
700 /**
701 * Dump the average frame remaining at SOF. This can be used to
702 @@ -958,7 +958,7 @@ static ssize_t hcd_frrem_show(struct dev
703 return sprintf(buf, "HCD Dump Frame Remaining\n");
704 }
705
706 -DEVICE_ATTR(hcd_frrem, S_IRUGO | S_IWUSR, hcd_frrem_show, 0);
707 +DEVICE_ATTR(hcd_frrem, S_IRUGO, hcd_frrem_show, 0);
708
709 /**
710 * Displays the time required to read the GNPTXFSIZ register many times (the
711 @@ -986,7 +986,7 @@ static ssize_t rd_reg_test_show(struct d
712 RW_REG_COUNT, time * MSEC_PER_JIFFIE, time);
713 }
714
715 -DEVICE_ATTR(rd_reg_test, S_IRUGO | S_IWUSR, rd_reg_test_show, 0);
716 +DEVICE_ATTR(rd_reg_test, S_IRUGO, rd_reg_test_show, 0);
717
718 /**
719 * Displays the time required to write the GNPTXFSIZ register many times (the
720 @@ -1014,7 +1014,7 @@ static ssize_t wr_reg_test_show(struct d
721 RW_REG_COUNT, time * MSEC_PER_JIFFIE, time);
722 }
723
724 -DEVICE_ATTR(wr_reg_test, S_IRUGO | S_IWUSR, wr_reg_test_show, 0);
725 +DEVICE_ATTR(wr_reg_test, S_IRUGO, wr_reg_test_show, 0);
726
727 #ifdef CONFIG_USB_DWC_OTG_LPM
728
729 --- a/drivers/usb/host/dwc_otg/dwc_otg_cil_intr.c
730 +++ b/drivers/usb/host/dwc_otg/dwc_otg_cil_intr.c
731 @@ -45,6 +45,7 @@
732 #include "dwc_otg_driver.h"
733 #include "dwc_otg_pcd.h"
734 #include "dwc_otg_hcd.h"
735 +#include "dwc_otg_mphi_fix.h"
736
737 #ifdef DEBUG
738 inline const char *op_state_str(dwc_otg_core_if_t * core_if)
739 @@ -1318,7 +1319,7 @@ static int32_t dwc_otg_handle_lpm_intr(d
740 /**
741 * This function returns the Core Interrupt register.
742 */
743 -static inline uint32_t dwc_otg_read_common_intr(dwc_otg_core_if_t * core_if)
744 +static inline uint32_t dwc_otg_read_common_intr(dwc_otg_core_if_t * core_if, gintmsk_data_t *reenable_gintmsk)
745 {
746 gahbcfg_data_t gahbcfg = {.d32 = 0 };
747 gintsts_data_t gintsts;
748 @@ -1335,26 +1336,45 @@ static inline uint32_t dwc_otg_read_comm
749 gintmsk_common.b.lpmtranrcvd = 1;
750 #endif
751 gintmsk_common.b.restoredone = 1;
752 - /** @todo: The port interrupt occurs while in device
753 - * mode. Added code to CIL to clear the interrupt for now!
754 - */
755 - gintmsk_common.b.portintr = 1;
756 -
757 + if(dwc_otg_is_device_mode(core_if))
758 + {
759 + /** @todo: The port interrupt occurs while in device
760 + * mode. Added code to CIL to clear the interrupt for now!
761 + */
762 + gintmsk_common.b.portintr = 1;
763 + }
764 gintsts.d32 = DWC_READ_REG32(&core_if->core_global_regs->gintsts);
765 gintmsk.d32 = DWC_READ_REG32(&core_if->core_global_regs->gintmsk);
766 + {
767 + unsigned long flags;
768 +
769 + // Re-enable the saved interrupts
770 + local_irq_save(flags);
771 + local_fiq_disable();
772 + gintmsk.d32 |= gintmsk_common.d32;
773 + gintsts_saved.d32 &= ~gintmsk_common.d32;
774 + reenable_gintmsk->d32 = gintmsk.d32;
775 + local_irq_restore(flags);
776 + }
777 +
778 gahbcfg.d32 = DWC_READ_REG32(&core_if->core_global_regs->gahbcfg);
779
780 #ifdef DEBUG
781 /* if any common interrupts set */
782 if (gintsts.d32 & gintmsk_common.d32) {
783 - DWC_DEBUGPL(DBG_ANY, "gintsts=%08x gintmsk=%08x\n",
784 + DWC_DEBUGPL(DBG_ANY, "common_intr: gintsts=%08x gintmsk=%08x\n",
785 gintsts.d32, gintmsk.d32);
786 }
787 #endif
788 - if (gahbcfg.b.glblintrmsk)
789 + if (!fiq_fix_enable){
790 + if (gahbcfg.b.glblintrmsk)
791 + return ((gintsts.d32 & gintmsk.d32) & gintmsk_common.d32);
792 + else
793 + return 0;
794 + }
795 + else {
796 return ((gintsts.d32 & gintmsk.d32) & gintmsk_common.d32);
797 - else
798 - return 0;
799 + }
800
801 }
802
803 @@ -1386,6 +1406,7 @@ int32_t dwc_otg_handle_common_intr(void
804 {
805 int retval = 0;
806 gintsts_data_t gintsts;
807 + gintmsk_data_t reenable_gintmsk;
808 gpwrdn_data_t gpwrdn = {.d32 = 0 };
809 dwc_otg_device_t *otg_dev = dev;
810 dwc_otg_core_if_t *core_if = otg_dev->core_if;
811 @@ -1407,7 +1428,7 @@ int32_t dwc_otg_handle_common_intr(void
812 }
813
814 if (core_if->hibernation_suspend <= 0) {
815 - gintsts.d32 = dwc_otg_read_common_intr(core_if);
816 + gintsts.d32 = dwc_otg_read_common_intr(core_if, &reenable_gintmsk);
817
818 if (gintsts.b.modemismatch) {
819 retval |= dwc_otg_handle_mode_mismatch_intr(core_if);
820 @@ -1504,8 +1525,12 @@ int32_t dwc_otg_handle_common_intr(void
821 gintsts.b.portintr = 1;
822 DWC_WRITE_REG32(&core_if->core_global_regs->gintsts,gintsts.d32);
823 retval |= 1;
824 + reenable_gintmsk.b.portintr = 1;
825
826 }
827 +
828 + DWC_WRITE_REG32(&core_if->core_global_regs->gintmsk, reenable_gintmsk.d32);
829 +
830 } else {
831 DWC_DEBUGPL(DBG_ANY, "gpwrdn=%08x\n", gpwrdn.d32);
832
833 --- a/drivers/usb/host/dwc_otg/dwc_otg_dbg.h
834 +++ b/drivers/usb/host/dwc_otg/dwc_otg_dbg.h
835 @@ -49,6 +49,7 @@ static inline uint32_t SET_DEBUG_LEVEL(c
836 return old;
837 }
838
839 +#define DBG_USER (0x1)
840 /** When debug level has the DBG_CIL bit set, display CIL Debug messages. */
841 #define DBG_CIL (0x2)
842 /** When debug level has the DBG_CILV bit set, display CIL Verbose debug
843 --- a/drivers/usb/host/dwc_otg/dwc_otg_driver.c
844 +++ b/drivers/usb/host/dwc_otg/dwc_otg_driver.c
845 @@ -64,6 +64,8 @@ bool microframe_schedule=true;
846
847 static const char dwc_driver_name[] = "dwc_otg";
848
849 +extern void* dummy_send;
850 +
851 extern int pcd_init(
852 #ifdef LM_INTERFACE
853 struct lm_device *_dev
854 @@ -238,6 +240,14 @@ static struct dwc_otg_driver_module_para
855 .adp_enable = -1,
856 };
857
858 +//Global variable to switch the fiq fix on or off (declared in bcm2708.c)
859 +extern bool fiq_fix_enable;
860 +// Global variable to enable the split transaction fix
861 +bool fiq_split_enable = true;
862 +//Global variable to switch the nak holdoff on or off
863 +bool nak_holdoff_enable = true;
864 +
865 +
866 /**
867 * This function shows the Driver Version.
868 */
869 @@ -779,17 +789,33 @@ static int dwc_otg_driver_probe(
870 _dev->resource->start,
871 _dev->resource->end - _dev->resource->start + 1);
872 #if 1
873 - if (!request_mem_region(_dev->resource->start,
874 - _dev->resource->end - _dev->resource->start + 1,
875 + if (!request_mem_region(_dev->resource[0].start,
876 + _dev->resource[0].end - _dev->resource[0].start + 1,
877 "dwc_otg")) {
878 dev_dbg(&_dev->dev, "error reserving mapped memory\n");
879 retval = -EFAULT;
880 goto fail;
881 }
882
883 - dwc_otg_device->os_dep.base = ioremap_nocache(_dev->resource->start,
884 - _dev->resource->end -
885 - _dev->resource->start+1);
886 + dwc_otg_device->os_dep.base = ioremap_nocache(_dev->resource[0].start,
887 + _dev->resource[0].end -
888 + _dev->resource[0].start+1);
889 + if (fiq_fix_enable)
890 + {
891 + if (!request_mem_region(_dev->resource[1].start,
892 + _dev->resource[1].end - _dev->resource[1].start + 1,
893 + "dwc_otg")) {
894 + dev_dbg(&_dev->dev, "error reserving mapped memory\n");
895 + retval = -EFAULT;
896 + goto fail;
897 + }
898 +
899 + dwc_otg_device->os_dep.mphi_base = ioremap_nocache(_dev->resource[1].start,
900 + _dev->resource[1].end -
901 + _dev->resource[1].start + 1);
902 + dummy_send = (void *) kmalloc(16, GFP_ATOMIC);
903 + }
904 +
905 #else
906 {
907 struct map_desc desc = {
908 @@ -1044,6 +1070,12 @@ static int __init dwc_otg_driver_init(vo
909 int retval = 0;
910 int error;
911 struct device_driver *drv;
912 +
913 + if(fiq_split_enable && !fiq_fix_enable) {
914 + printk(KERN_WARNING "dwc_otg: fiq_split_enable was set without fiq_fix_enable! Correcting.\n");
915 + fiq_fix_enable = 1;
916 + }
917 +
918 printk(KERN_INFO "%s: version %s (%s bus)\n", dwc_driver_name,
919 DWC_DRIVER_VERSION,
920 #ifdef LM_INTERFACE
921 @@ -1063,6 +1095,9 @@ static int __init dwc_otg_driver_init(vo
922 printk(KERN_ERR "%s retval=%d\n", __func__, retval);
923 return retval;
924 }
925 + printk(KERN_DEBUG "dwc_otg: FIQ %s\n", fiq_fix_enable ? "enabled":"disabled");
926 + printk(KERN_DEBUG "dwc_otg: NAK holdoff %s\n", nak_holdoff_enable ? "enabled":"disabled");
927 + printk(KERN_DEBUG "dwc_otg: FIQ split fix %s\n", fiq_split_enable ? "enabled":"disabled");
928
929 error = driver_create_file(drv, &driver_attr_version);
930 #ifdef DEBUG
931 @@ -1343,6 +1378,13 @@ MODULE_PARM_DESC(otg_ver, "OTG revision
932 module_param(microframe_schedule, bool, 0444);
933 MODULE_PARM_DESC(microframe_schedule, "Enable the microframe scheduler");
934
935 +module_param(fiq_fix_enable, bool, 0444);
936 +MODULE_PARM_DESC(fiq_fix_enable, "Enable the fiq fix");
937 +module_param(nak_holdoff_enable, bool, 0444);
938 +MODULE_PARM_DESC(nak_holdoff_enable, "Enable the NAK holdoff");
939 +module_param(fiq_split_enable, bool, 0444);
940 +MODULE_PARM_DESC(fiq_split_enable, "Enable the FIQ fix on split transactions");
941 +
942 /** @page "Module Parameters"
943 *
944 * The following parameters may be specified when starting the module.
945 --- a/drivers/usb/host/dwc_otg/dwc_otg_hcd.c
946 +++ b/drivers/usb/host/dwc_otg/dwc_otg_hcd.c
947 @@ -40,10 +40,14 @@
948 * header file.
949 */
950
951 +#include <linux/usb.h>
952 +#include <linux/usb/hcd.h>
953 +
954 #include "dwc_otg_hcd.h"
955 #include "dwc_otg_regs.h"
956 +#include "dwc_otg_mphi_fix.h"
957
958 -extern bool microframe_schedule;
959 +extern bool microframe_schedule, nak_holdoff_enable;
960
961 //#define DEBUG_HOST_CHANNELS
962 #ifdef DEBUG_HOST_CHANNELS
963 @@ -53,6 +57,13 @@ static int last_sel_trans_num_avail_hc_a
964 static int last_sel_trans_num_avail_hc_at_end = 0;
965 #endif /* DEBUG_HOST_CHANNELS */
966
967 +extern int g_next_sched_frame, g_np_count, g_np_sent;
968 +
969 +extern haint_data_t haint_saved;
970 +extern hcintmsk_data_t hcintmsk_saved[MAX_EPS_CHANNELS];
971 +extern hcint_data_t hcint_saved[MAX_EPS_CHANNELS];
972 +extern gintsts_data_t ginsts_saved;
973 +
974 dwc_otg_hcd_t *dwc_otg_hcd_alloc_hcd(void)
975 {
976 return DWC_ALLOC(sizeof(dwc_otg_hcd_t));
977 @@ -162,31 +173,43 @@ static void del_timers(dwc_otg_hcd_t * h
978
979 /**
980 * Processes all the URBs in a single list of QHs. Completes them with
981 - * -ETIMEDOUT and frees the QTD.
982 + * -ESHUTDOWN and frees the QTD.
983 */
984 static void kill_urbs_in_qh_list(dwc_otg_hcd_t * hcd, dwc_list_link_t * qh_list)
985 {
986 - dwc_list_link_t *qh_item;
987 + dwc_list_link_t *qh_item, *qh_tmp;
988 dwc_otg_qh_t *qh;
989 dwc_otg_qtd_t *qtd, *qtd_tmp;
990
991 - DWC_LIST_FOREACH(qh_item, qh_list) {
992 + DWC_LIST_FOREACH_SAFE(qh_item, qh_tmp, qh_list) {
993 qh = DWC_LIST_ENTRY(qh_item, dwc_otg_qh_t, qh_list_entry);
994 DWC_CIRCLEQ_FOREACH_SAFE(qtd, qtd_tmp,
995 &qh->qtd_list, qtd_list_entry) {
996 qtd = DWC_CIRCLEQ_FIRST(&qh->qtd_list);
997 if (qtd->urb != NULL) {
998 hcd->fops->complete(hcd, qtd->urb->priv,
999 - qtd->urb, -DWC_E_TIMEOUT);
1000 + qtd->urb, -DWC_E_SHUTDOWN);
1001 dwc_otg_hcd_qtd_remove_and_free(hcd, qtd, qh);
1002 }
1003
1004 }
1005 + if(qh->channel) {
1006 + /* Using hcchar.chen == 1 is not a reliable test.
1007 + * It is possible that the channel has already halted
1008 + * but not yet been through the IRQ handler.
1009 + */
1010 + dwc_otg_hc_halt(hcd->core_if, qh->channel,
1011 + DWC_OTG_HC_XFER_URB_DEQUEUE);
1012 + if(microframe_schedule)
1013 + hcd->available_host_channels++;
1014 + qh->channel = NULL;
1015 + }
1016 + dwc_otg_hcd_qh_remove(hcd, qh);
1017 }
1018 }
1019
1020 /**
1021 - * Responds with an error status of ETIMEDOUT to all URBs in the non-periodic
1022 + * Responds with an error status of ESHUTDOWN to all URBs in the non-periodic
1023 * and periodic schedules. The QTD associated with each URB is removed from
1024 * the schedule and freed. This function may be called when a disconnect is
1025 * detected or when the HCD is being stopped.
1026 @@ -272,7 +295,8 @@ static int32_t dwc_otg_hcd_disconnect_cb
1027 */
1028 dwc_otg_hcd->flags.b.port_connect_status_change = 1;
1029 dwc_otg_hcd->flags.b.port_connect_status = 0;
1030 -
1031 + if(fiq_fix_enable)
1032 + local_fiq_disable();
1033 /*
1034 * Shutdown any transfers in process by clearing the Tx FIFO Empty
1035 * interrupt mask and status bits and disabling subsequent host
1036 @@ -368,8 +392,22 @@ static int32_t dwc_otg_hcd_disconnect_cb
1037 channel->qh = NULL;
1038 }
1039 }
1040 + if(fiq_split_enable) {
1041 + for(i=0; i < 128; i++) {
1042 + dwc_otg_hcd->hub_port[i] = 0;
1043 + }
1044 + haint_saved.d32 = 0;
1045 + for(i=0; i < MAX_EPS_CHANNELS; i++) {
1046 + hcint_saved[i].d32 = 0;
1047 + hcintmsk_saved[i].d32 = 0;
1048 + }
1049 + }
1050 +
1051 }
1052
1053 + if(fiq_fix_enable)
1054 + local_fiq_enable();
1055 +
1056 if (dwc_otg_hcd->fops->disconnect) {
1057 dwc_otg_hcd->fops->disconnect(dwc_otg_hcd);
1058 }
1059 @@ -407,6 +445,7 @@ static int dwc_otg_hcd_sleep_cb(void *p)
1060 }
1061 #endif
1062
1063 +
1064 /**
1065 * HCD Callback function for Remote Wakeup.
1066 *
1067 @@ -457,10 +496,12 @@ int dwc_otg_hcd_urb_enqueue(dwc_otg_hcd_
1068 dwc_otg_hcd_urb_t * dwc_otg_urb, void **ep_handle,
1069 int atomic_alloc)
1070 {
1071 - dwc_irqflags_t flags;
1072 int retval = 0;
1073 + uint8_t needs_scheduling = 0;
1074 + dwc_otg_transaction_type_e tr_type;
1075 dwc_otg_qtd_t *qtd;
1076 gintmsk_data_t intr_mask = {.d32 = 0 };
1077 + hprt0_data_t hprt0 = { .d32 = 0 };
1078
1079 #ifdef DEBUG /* integrity checks (Broadcom) */
1080 if (NULL == hcd->core_if) {
1081 @@ -475,6 +516,16 @@ int dwc_otg_hcd_urb_enqueue(dwc_otg_hcd_
1082 return -DWC_E_NO_DEVICE;
1083 }
1084
1085 + /* Some core configurations cannot support LS traffic on a FS root port */
1086 + if ((hcd->fops->speed(hcd, dwc_otg_urb->priv) == USB_SPEED_LOW) &&
1087 + (hcd->core_if->hwcfg2.b.fs_phy_type == 1) &&
1088 + (hcd->core_if->hwcfg2.b.hs_phy_type == 1)) {
1089 + hprt0.d32 = DWC_READ_REG32(hcd->core_if->host_if->hprt0);
1090 + if (hprt0.b.prtspd == DWC_HPRT0_PRTSPD_FULL_SPEED) {
1091 + return -DWC_E_NO_DEVICE;
1092 + }
1093 + }
1094 +
1095 qtd = dwc_otg_hcd_qtd_create(dwc_otg_urb, atomic_alloc);
1096 if (qtd == NULL) {
1097 DWC_ERROR("DWC OTG HCD URB Enqueue failed creating QTD\n");
1098 @@ -490,32 +541,27 @@ int dwc_otg_hcd_urb_enqueue(dwc_otg_hcd_
1099 return -DWC_E_NO_MEMORY;
1100 }
1101 #endif
1102 - retval =
1103 - dwc_otg_hcd_qtd_add(qtd, hcd, (dwc_otg_qh_t **) ep_handle, atomic_alloc);
1104 + intr_mask.d32 = DWC_READ_REG32(&hcd->core_if->core_global_regs->gintmsk);
1105 + if(!intr_mask.b.sofintr) needs_scheduling = 1;
1106 + if((((dwc_otg_qh_t *)ep_handle)->ep_type == UE_BULK) && !(qtd->urb->flags & URB_GIVEBACK_ASAP))
1107 + /* Do not schedule SG transactions until qtd has URB_GIVEBACK_ASAP set */
1108 + needs_scheduling = 0;
1109 +
1110 + retval = dwc_otg_hcd_qtd_add(qtd, hcd, (dwc_otg_qh_t **) ep_handle, atomic_alloc);
1111 // creates a new queue in ep_handle if it doesn't exist already
1112 if (retval < 0) {
1113 DWC_ERROR("DWC OTG HCD URB Enqueue failed adding QTD. "
1114 "Error status %d\n", retval);
1115 dwc_otg_hcd_qtd_free(qtd);
1116 - } else {
1117 - qtd->qh = *ep_handle;
1118 + return retval;
1119 }
1120 - intr_mask.d32 = DWC_READ_REG32(&hcd->core_if->core_global_regs->gintmsk);
1121 - if (!intr_mask.b.sofintr && retval == 0) {
1122 - dwc_otg_transaction_type_e tr_type;
1123 - if ((qtd->qh->ep_type == UE_BULK)
1124 - && !(qtd->urb->flags & URB_GIVEBACK_ASAP)) {
1125 - /* Do not schedule SG transactions until qtd has URB_GIVEBACK_ASAP set */
1126 - return 0;
1127 - }
1128 - DWC_SPINLOCK_IRQSAVE(hcd->lock, &flags);
1129 +
1130 + if(needs_scheduling) {
1131 tr_type = dwc_otg_hcd_select_transactions(hcd);
1132 if (tr_type != DWC_OTG_TRANSACTION_NONE) {
1133 dwc_otg_hcd_queue_transactions(hcd, tr_type);
1134 }
1135 - DWC_SPINUNLOCK_IRQRESTORE(hcd->lock, flags);
1136 }
1137 -
1138 return retval;
1139 }
1140
1141 @@ -524,6 +570,8 @@ int dwc_otg_hcd_urb_dequeue(dwc_otg_hcd_
1142 {
1143 dwc_otg_qh_t *qh;
1144 dwc_otg_qtd_t *urb_qtd;
1145 + BUG_ON(!hcd);
1146 + BUG_ON(!dwc_otg_urb);
1147
1148 #ifdef DEBUG /* integrity checks (Broadcom) */
1149
1150 @@ -540,14 +588,17 @@ int dwc_otg_hcd_urb_dequeue(dwc_otg_hcd_
1151 return -DWC_E_INVALID;
1152 }
1153 urb_qtd = dwc_otg_urb->qtd;
1154 + BUG_ON(!urb_qtd);
1155 if (urb_qtd->qh == NULL) {
1156 DWC_ERROR("**** DWC OTG HCD URB Dequeue with QTD with NULL Q handler\n");
1157 return -DWC_E_INVALID;
1158 }
1159 #else
1160 urb_qtd = dwc_otg_urb->qtd;
1161 + BUG_ON(!urb_qtd);
1162 #endif
1163 qh = urb_qtd->qh;
1164 + BUG_ON(!qh);
1165 if (CHK_DEBUG_LEVEL(DBG_HCDV | DBG_HCD_URB)) {
1166 if (urb_qtd->in_process) {
1167 dump_channel_info(hcd, qh);
1168 @@ -571,6 +622,8 @@ int dwc_otg_hcd_urb_dequeue(dwc_otg_hcd_
1169 */
1170 dwc_otg_hc_halt(hcd->core_if, qh->channel,
1171 DWC_OTG_HC_XFER_URB_DEQUEUE);
1172 +
1173 + dwc_otg_hcd_release_port(hcd, qh);
1174 }
1175 }
1176
1177 @@ -687,6 +740,33 @@ static void reset_tasklet_func(void *dat
1178 dwc_otg_hcd->flags.b.port_reset_change = 1;
1179 }
1180
1181 +static void completion_tasklet_func(void *ptr)
1182 +{
1183 + dwc_otg_hcd_t *hcd = (dwc_otg_hcd_t *) ptr;
1184 + struct urb *urb;
1185 + urb_tq_entry_t *item;
1186 + dwc_irqflags_t flags;
1187 +
1188 + /* This could just be spin_lock_irq */
1189 + DWC_SPINLOCK_IRQSAVE(hcd->lock, &flags);
1190 + while (!DWC_TAILQ_EMPTY(&hcd->completed_urb_list)) {
1191 + item = DWC_TAILQ_FIRST(&hcd->completed_urb_list);
1192 + urb = item->urb;
1193 + DWC_TAILQ_REMOVE(&hcd->completed_urb_list, item,
1194 + urb_tq_entries);
1195 + DWC_SPINUNLOCK_IRQRESTORE(hcd->lock, flags);
1196 + DWC_FREE(item);
1197 +
1198 + usb_hcd_giveback_urb(hcd->priv, urb, urb->status);
1199 +
1200 + fiq_print(FIQDBG_PORTHUB, "COMPLETE");
1201 +
1202 + DWC_SPINLOCK_IRQSAVE(hcd->lock, &flags);
1203 + }
1204 + DWC_SPINUNLOCK_IRQRESTORE(hcd->lock, flags);
1205 + return;
1206 +}
1207 +
1208 static void qh_list_free(dwc_otg_hcd_t * hcd, dwc_list_link_t * qh_list)
1209 {
1210 dwc_list_link_t *item;
1211 @@ -819,12 +899,14 @@ static void dwc_otg_hcd_free(dwc_otg_hcd
1212 } else if (dwc_otg_hcd->status_buf != NULL) {
1213 DWC_FREE(dwc_otg_hcd->status_buf);
1214 }
1215 + DWC_SPINLOCK_FREE(dwc_otg_hcd->channel_lock);
1216 DWC_SPINLOCK_FREE(dwc_otg_hcd->lock);
1217 /* Set core_if's lock pointer to NULL */
1218 dwc_otg_hcd->core_if->lock = NULL;
1219
1220 DWC_TIMER_FREE(dwc_otg_hcd->conn_timer);
1221 DWC_TASK_FREE(dwc_otg_hcd->reset_tasklet);
1222 + DWC_TASK_FREE(dwc_otg_hcd->completion_tasklet);
1223
1224 #ifdef DWC_DEV_SRPCAP
1225 if (dwc_otg_hcd->core_if->power_down == 2 &&
1226 @@ -874,7 +956,7 @@ int dwc_otg_hcd_init(dwc_otg_hcd_t * hcd
1227 DWC_LIST_INIT(&hcd->periodic_sched_ready);
1228 DWC_LIST_INIT(&hcd->periodic_sched_assigned);
1229 DWC_LIST_INIT(&hcd->periodic_sched_queued);
1230 -
1231 + DWC_TAILQ_INIT(&hcd->completed_urb_list);
1232 /*
1233 * Create a host channel descriptor for each host channel implemented
1234 * in the controller. Initialize the channel descriptor array.
1235 @@ -912,6 +994,9 @@ int dwc_otg_hcd_init(dwc_otg_hcd_t * hcd
1236
1237 /* Initialize reset tasklet. */
1238 hcd->reset_tasklet = DWC_TASK_ALLOC("reset_tasklet", reset_tasklet_func, hcd);
1239 +
1240 + hcd->completion_tasklet = DWC_TASK_ALLOC("completion_tasklet",
1241 + completion_tasklet_func, hcd);
1242 #ifdef DWC_DEV_SRPCAP
1243 if (hcd->core_if->power_down == 2) {
1244 /* Initialize Power on timer for Host power up in case hibernation */
1245 @@ -944,6 +1029,12 @@ int dwc_otg_hcd_init(dwc_otg_hcd_t * hcd
1246 hcd->frame_list = NULL;
1247 hcd->frame_list_dma = 0;
1248 hcd->periodic_qh_count = 0;
1249 +
1250 + DWC_MEMSET(hcd->hub_port, 0, sizeof(hcd->hub_port));
1251 +#ifdef FIQ_DEBUG
1252 + DWC_MEMSET(hcd->hub_port_alloc, -1, sizeof(hcd->hub_port_alloc));
1253 +#endif
1254 +
1255 out:
1256 return retval;
1257 }
1258 @@ -1089,7 +1180,12 @@ static void assign_and_init_hc(dwc_otg_h
1259 uint32_t hub_addr, port_addr;
1260 hc->do_split = 1;
1261 hc->xact_pos = qtd->isoc_split_pos;
1262 - hc->complete_split = qtd->complete_split;
1263 + /* We don't need to do complete splits anymore */
1264 + if(fiq_split_enable)
1265 + hc->complete_split = qtd->complete_split = 0;
1266 + else
1267 + hc->complete_split = qtd->complete_split;
1268 +
1269 hcd->fops->hub_info(hcd, urb->priv, &hub_addr, &port_addr);
1270 hc->hub_addr = (uint8_t) hub_addr;
1271 hc->port_addr = (uint8_t) port_addr;
1272 @@ -1236,6 +1332,65 @@ static void assign_and_init_hc(dwc_otg_h
1273 hc->qh = qh;
1274 }
1275
1276 +/*
1277 +** Check the transaction to see if the port / hub has already been assigned for
1278 +** a split transaction
1279 +**
1280 +** Return 0 - Port is already in use
1281 +*/
1282 +int dwc_otg_hcd_allocate_port(dwc_otg_hcd_t * hcd, dwc_otg_qh_t *qh)
1283 +{
1284 + uint32_t hub_addr, port_addr;
1285 +
1286 + if(!fiq_split_enable)
1287 + return 0;
1288 +
1289 + hcd->fops->hub_info(hcd, DWC_CIRCLEQ_FIRST(&qh->qtd_list)->urb->priv, &hub_addr, &port_addr);
1290 +
1291 + if(hcd->hub_port[hub_addr] & (1 << port_addr))
1292 + {
1293 + fiq_print(FIQDBG_PORTHUB, "H%dP%d:S%02d", hub_addr, port_addr, qh->skip_count);
1294 +
1295 + qh->skip_count++;
1296 +
1297 + if(qh->skip_count > 40000)
1298 + {
1299 + printk_once(KERN_ERR "Error: Having to skip port allocation");
1300 + local_fiq_disable();
1301 + BUG();
1302 + return 0;
1303 + }
1304 + return 1;
1305 + }
1306 + else
1307 + {
1308 + qh->skip_count = 0;
1309 + hcd->hub_port[hub_addr] |= 1 << port_addr;
1310 + fiq_print(FIQDBG_PORTHUB, "H%dP%d:A %d", hub_addr, port_addr, DWC_CIRCLEQ_FIRST(&qh->qtd_list)->urb->pipe_info.ep_num);
1311 +#ifdef FIQ_DEBUG
1312 + hcd->hub_port_alloc[hub_addr * 16 + port_addr] = dwc_otg_hcd_get_frame_number(hcd);
1313 +#endif
1314 + return 0;
1315 + }
1316 +}
1317 +void dwc_otg_hcd_release_port(dwc_otg_hcd_t * hcd, dwc_otg_qh_t *qh)
1318 +{
1319 + uint32_t hub_addr, port_addr;
1320 +
1321 + if(!fiq_split_enable)
1322 + return;
1323 +
1324 + hcd->fops->hub_info(hcd, DWC_CIRCLEQ_FIRST(&qh->qtd_list)->urb->priv, &hub_addr, &port_addr);
1325 +
1326 + hcd->hub_port[hub_addr] &= ~(1 << port_addr);
1327 +#ifdef FIQ_DEBUG
1328 + hcd->hub_port_alloc[hub_addr * 16 + port_addr] = -1;
1329 +#endif
1330 + fiq_print(FIQDBG_PORTHUB, "H%dP%d:RO%d", hub_addr, port_addr, DWC_CIRCLEQ_FIRST(&qh->qtd_list)->urb->pipe_info.ep_num);
1331 +
1332 +}
1333 +
1334 +
1335 /**
1336 * This function selects transactions from the HCD transfer schedule and
1337 * assigns them to available host channels. It is called from HCD interrupt
1338 @@ -1249,9 +1404,10 @@ dwc_otg_transaction_type_e dwc_otg_hcd_s
1339 {
1340 dwc_list_link_t *qh_ptr;
1341 dwc_otg_qh_t *qh;
1342 + dwc_otg_qtd_t *qtd;
1343 int num_channels;
1344 dwc_irqflags_t flags;
1345 - dwc_spinlock_t *channel_lock = DWC_SPINLOCK_ALLOC();
1346 + dwc_spinlock_t *channel_lock = hcd->channel_lock;
1347 dwc_otg_transaction_type_e ret_val = DWC_OTG_TRANSACTION_NONE;
1348
1349 #ifdef DEBUG_SOF
1350 @@ -1269,11 +1425,29 @@ dwc_otg_transaction_type_e dwc_otg_hcd_s
1351
1352 while (qh_ptr != &hcd->periodic_sched_ready &&
1353 !DWC_CIRCLEQ_EMPTY(&hcd->free_hc_list)) {
1354 +
1355 + qh = DWC_LIST_ENTRY(qh_ptr, dwc_otg_qh_t, qh_list_entry);
1356 +
1357 + if(qh->do_split) {
1358 + qtd = DWC_CIRCLEQ_FIRST(&qh->qtd_list);
1359 + if(!(qh->ep_type == UE_ISOCHRONOUS &&
1360 + (qtd->isoc_split_pos == DWC_HCSPLIT_XACTPOS_MID ||
1361 + qtd->isoc_split_pos == DWC_HCSPLIT_XACTPOS_END))) {
1362 + if(dwc_otg_hcd_allocate_port(hcd, qh))
1363 + {
1364 + qh_ptr = DWC_LIST_NEXT(qh_ptr);
1365 + g_next_sched_frame = dwc_frame_num_inc(dwc_otg_hcd_get_frame_number(hcd), 1);
1366 + continue;
1367 + }
1368 + }
1369 + }
1370 +
1371 if (microframe_schedule) {
1372 // Make sure we leave one channel for non periodic transactions.
1373 DWC_SPINLOCK_IRQSAVE(channel_lock, &flags);
1374 if (hcd->available_host_channels <= 1) {
1375 DWC_SPINUNLOCK_IRQRESTORE(channel_lock, flags);
1376 + if(qh->do_split) dwc_otg_hcd_release_port(hcd, qh);
1377 break;
1378 }
1379 hcd->available_host_channels--;
1380 @@ -1294,8 +1468,6 @@ dwc_otg_transaction_type_e dwc_otg_hcd_s
1381 DWC_LIST_MOVE_HEAD(&hcd->periodic_sched_assigned,
1382 &qh->qh_list_entry);
1383 DWC_SPINUNLOCK_IRQRESTORE(channel_lock, flags);
1384 -
1385 - ret_val = DWC_OTG_TRANSACTION_PERIODIC;
1386 }
1387
1388 /*
1389 @@ -1310,6 +1482,31 @@ dwc_otg_transaction_type_e dwc_otg_hcd_s
1390 num_channels - hcd->periodic_channels) &&
1391 !DWC_CIRCLEQ_EMPTY(&hcd->free_hc_list)) {
1392
1393 + qh = DWC_LIST_ENTRY(qh_ptr, dwc_otg_qh_t, qh_list_entry);
1394 +
1395 + /*
1396 + * Check to see if this is a NAK'd retransmit, in which case ignore for retransmission
1397 + * we hold off on bulk retransmissions to reduce NAK interrupt overhead for full-speed
1398 + * cheeky devices that just hold off using NAKs
1399 + */
1400 + if (nak_holdoff_enable && qh->do_split) {
1401 + if (qh->nak_frame != 0xffff &&
1402 + dwc_full_frame_num(qh->nak_frame) ==
1403 + dwc_full_frame_num(dwc_otg_hcd_get_frame_number(hcd))) {
1404 + /*
1405 + * Revisit: Need to avoid trampling on periodic scheduling.
1406 + * Currently we are safe because g_np_count != g_np_sent whenever we hit this,
1407 + * but if this behaviour is changed then periodic endpoints will get a slower
1408 + * polling rate.
1409 + */
1410 + g_next_sched_frame = ((qh->nak_frame + 8) & ~7) & DWC_HFNUM_MAX_FRNUM;
1411 + qh_ptr = DWC_LIST_NEXT(qh_ptr);
1412 + continue;
1413 + } else {
1414 + qh->nak_frame = 0xffff;
1415 + }
1416 + }
1417 +
1418 if (microframe_schedule) {
1419 DWC_SPINLOCK_IRQSAVE(channel_lock, &flags);
1420 if (hcd->available_host_channels < 1) {
1421 @@ -1322,7 +1519,6 @@ dwc_otg_transaction_type_e dwc_otg_hcd_s
1422 last_sel_trans_num_nonper_scheduled++;
1423 #endif /* DEBUG_HOST_CHANNELS */
1424 }
1425 - qh = DWC_LIST_ENTRY(qh_ptr, dwc_otg_qh_t, qh_list_entry);
1426
1427 assign_and_init_hc(hcd, qh);
1428
1429 @@ -1336,21 +1532,22 @@ dwc_otg_transaction_type_e dwc_otg_hcd_s
1430 &qh->qh_list_entry);
1431 DWC_SPINUNLOCK_IRQRESTORE(channel_lock, flags);
1432
1433 - if (ret_val == DWC_OTG_TRANSACTION_NONE) {
1434 - ret_val = DWC_OTG_TRANSACTION_NON_PERIODIC;
1435 - } else {
1436 - ret_val = DWC_OTG_TRANSACTION_ALL;
1437 - }
1438 + g_np_sent++;
1439
1440 if (!microframe_schedule)
1441 hcd->non_periodic_channels++;
1442 }
1443
1444 + if(!DWC_LIST_EMPTY(&hcd->periodic_sched_assigned))
1445 + ret_val |= DWC_OTG_TRANSACTION_PERIODIC;
1446 +
1447 + if(!DWC_LIST_EMPTY(&hcd->non_periodic_sched_active))
1448 + ret_val |= DWC_OTG_TRANSACTION_NON_PERIODIC;
1449 +
1450 +
1451 #ifdef DEBUG_HOST_CHANNELS
1452 last_sel_trans_num_avail_hc_at_end = hcd->available_host_channels;
1453 #endif /* DEBUG_HOST_CHANNELS */
1454 -
1455 - DWC_SPINLOCK_FREE(channel_lock);
1456 return ret_val;
1457 }
1458
1459 @@ -1464,6 +1661,15 @@ static void process_periodic_channels(dw
1460
1461 qh = DWC_LIST_ENTRY(qh_ptr, dwc_otg_qh_t, qh_list_entry);
1462
1463 + // Do not send a split start transaction any later than frame .6
1464 + // Note, we have to schedule a periodic in .5 to make it go in .6
1465 + if(fiq_split_enable && qh->do_split && ((dwc_otg_hcd_get_frame_number(hcd) + 1) & 7) > 6)
1466 + {
1467 + qh_ptr = qh_ptr->next;
1468 + g_next_sched_frame = dwc_otg_hcd_get_frame_number(hcd) | 7;
1469 + continue;
1470 + }
1471 +
1472 /*
1473 * Set a flag if we're queuing high-bandwidth in slave mode.
1474 * The flag prevents any halts to get into the request queue in
1475 @@ -1593,6 +1799,15 @@ static void process_non_periodic_channel
1476
1477 qh = DWC_LIST_ENTRY(hcd->non_periodic_qh_ptr, dwc_otg_qh_t,
1478 qh_list_entry);
1479 +
1480 + // Do not send a split start transaction any later than frame .5
1481 + // non periodic transactions will start immediately in this uframe
1482 + if(fiq_split_enable && qh->do_split && ((dwc_otg_hcd_get_frame_number(hcd) + 1) & 7) > 6)
1483 + {
1484 + g_next_sched_frame = dwc_otg_hcd_get_frame_number(hcd) | 7;
1485 + break;
1486 + }
1487 +
1488 status =
1489 queue_transaction(hcd, qh->channel,
1490 tx_status.b.nptxfspcavail);
1491 @@ -3118,17 +3333,13 @@ dwc_otg_hcd_urb_t *dwc_otg_hcd_urb_alloc
1492 else
1493 dwc_otg_urb = DWC_ALLOC(size);
1494
1495 - if (NULL != dwc_otg_urb)
1496 - dwc_otg_urb->packet_count = iso_desc_count;
1497 + if (dwc_otg_urb)
1498 + dwc_otg_urb->packet_count = iso_desc_count;
1499 else {
1500 - dwc_otg_urb->packet_count = 0;
1501 - if (size != 0) {
1502 - DWC_ERROR("**** DWC OTG HCD URB alloc - "
1503 - "%salloc of %db failed\n",
1504 - atomic_alloc?"atomic ":"", size);
1505 - }
1506 - }
1507 -
1508 + DWC_ERROR("**** DWC OTG HCD URB alloc - "
1509 + "%salloc of %db failed\n",
1510 + atomic_alloc?"atomic ":"", size);
1511 + }
1512 return dwc_otg_urb;
1513 }
1514
1515 --- a/drivers/usb/host/dwc_otg/dwc_otg_hcd.h
1516 +++ b/drivers/usb/host/dwc_otg/dwc_otg_hcd.h
1517 @@ -168,10 +168,10 @@ typedef enum dwc_otg_control_phase {
1518
1519 /** Transaction types. */
1520 typedef enum dwc_otg_transaction_type {
1521 - DWC_OTG_TRANSACTION_NONE,
1522 - DWC_OTG_TRANSACTION_PERIODIC,
1523 - DWC_OTG_TRANSACTION_NON_PERIODIC,
1524 - DWC_OTG_TRANSACTION_ALL
1525 + DWC_OTG_TRANSACTION_NONE = 0,
1526 + DWC_OTG_TRANSACTION_PERIODIC = 1,
1527 + DWC_OTG_TRANSACTION_NON_PERIODIC = 2,
1528 + DWC_OTG_TRANSACTION_ALL = DWC_OTG_TRANSACTION_PERIODIC + DWC_OTG_TRANSACTION_NON_PERIODIC
1529 } dwc_otg_transaction_type_e;
1530
1531 struct dwc_otg_qh;
1532 @@ -321,6 +321,11 @@ typedef struct dwc_otg_qh {
1533 */
1534 uint16_t sched_frame;
1535
1536 + /*
1537 +	** Frame on which a NAK was received for this queue head; used to minimise NAK retransmission
1538 + */
1539 + uint16_t nak_frame;
1540 +
1541 /** (micro)frame at which last start split was initialized. */
1542 uint16_t start_split_frame;
1543
1544 @@ -365,10 +370,19 @@ typedef struct dwc_otg_qh {
1545
1546 uint16_t speed;
1547 uint16_t frame_usecs[8];
1548 +
1549 + uint32_t skip_count;
1550 } dwc_otg_qh_t;
1551
1552 DWC_CIRCLEQ_HEAD(hc_list, dwc_hc);
1553
1554 +typedef struct urb_tq_entry {
1555 + struct urb *urb;
1556 + DWC_TAILQ_ENTRY(urb_tq_entry) urb_tq_entries;
1557 +} urb_tq_entry_t;
1558 +
1559 +DWC_TAILQ_HEAD(urb_list, urb_tq_entry);
1560 +
1561 /**
1562 * This structure holds the state of the HCD, including the non-periodic and
1563 * periodic schedules.
1564 @@ -546,9 +560,12 @@ struct dwc_otg_hcd {
1565 /* Tasket to do a reset */
1566 dwc_tasklet_t *reset_tasklet;
1567
1568 + dwc_tasklet_t *completion_tasklet;
1569 + struct urb_list completed_urb_list;
1570 +
1571 /* */
1572 dwc_spinlock_t *lock;
1573 -
1574 + dwc_spinlock_t *channel_lock;
1575 /**
1576 * Private data that could be used by OS wrapper.
1577 */
1578 @@ -559,6 +576,12 @@ struct dwc_otg_hcd {
1579 /** Frame List */
1580 uint32_t *frame_list;
1581
1582 + /** Hub - Port assignment */
1583 + int hub_port[128];
1584 +#ifdef FIQ_DEBUG
1585 + int hub_port_alloc[2048];
1586 +#endif
1587 +
1588 /** Frame List DMA address */
1589 dma_addr_t frame_list_dma;
1590
1591 @@ -589,6 +612,10 @@ extern dwc_otg_transaction_type_e dwc_ot
1592 extern void dwc_otg_hcd_queue_transactions(dwc_otg_hcd_t * hcd,
1593 dwc_otg_transaction_type_e tr_type);
1594
1595 +int dwc_otg_hcd_allocate_port(dwc_otg_hcd_t * hcd, dwc_otg_qh_t *qh);
1596 +void dwc_otg_hcd_release_port(dwc_otg_hcd_t * dwc_otg_hcd, dwc_otg_qh_t *qh);
1597 +
1598 +
1599 /** @} */
1600
1601 /** @name Interrupt Handler Functions */
1602 --- a/drivers/usb/host/dwc_otg/dwc_otg_hcd_ddma.c
1603 +++ b/drivers/usb/host/dwc_otg/dwc_otg_hcd_ddma.c
1604 @@ -276,7 +276,7 @@ void dump_frame_list(dwc_otg_hcd_t * hcd
1605 static void release_channel_ddma(dwc_otg_hcd_t * hcd, dwc_otg_qh_t * qh)
1606 {
1607 dwc_irqflags_t flags;
1608 - dwc_spinlock_t *channel_lock = DWC_SPINLOCK_ALLOC();
1609 + dwc_spinlock_t *channel_lock = hcd->channel_lock;
1610
1611 dwc_hc_t *hc = qh->channel;
1612 if (dwc_qh_is_non_per(qh)) {
1613 @@ -306,7 +306,6 @@ static void release_channel_ddma(dwc_otg
1614 dwc_memset(qh->desc_list, 0x00,
1615 sizeof(dwc_otg_host_dma_desc_t) * max_desc_num(qh));
1616 }
1617 - DWC_SPINLOCK_FREE(channel_lock);
1618 }
1619
1620 /**
1621 --- a/drivers/usb/host/dwc_otg/dwc_otg_hcd_if.h
1622 +++ b/drivers/usb/host/dwc_otg/dwc_otg_hcd_if.h
1623 @@ -113,6 +113,11 @@ extern void dwc_otg_hcd_remove(dwc_otg_h
1624 */
1625 extern int32_t dwc_otg_hcd_handle_intr(dwc_otg_hcd_t * dwc_otg_hcd);
1626
1627 +/** This function is used to handle the fast interrupt
1628 + *
1629 + */
1630 +extern void __attribute__ ((naked)) dwc_otg_hcd_handle_fiq(void);
1631 +
1632 /**
1633 * Returns private data set by
1634 * dwc_otg_hcd_set_priv_data function.
1635 --- a/drivers/usb/host/dwc_otg/dwc_otg_hcd_intr.c
1636 +++ b/drivers/usb/host/dwc_otg/dwc_otg_hcd_intr.c
1637 @@ -34,6 +34,12 @@
1638
1639 #include "dwc_otg_hcd.h"
1640 #include "dwc_otg_regs.h"
1641 +#include "dwc_otg_mphi_fix.h"
1642 +
1643 +#include <linux/jiffies.h>
1644 +#include <mach/hardware.h>
1645 +#include <asm/fiq.h>
1646 +
1647
1648 extern bool microframe_schedule;
1649
1650 @@ -41,38 +47,487 @@ extern bool microframe_schedule;
1651 * This file contains the implementation of the HCD Interrupt handlers.
1652 */
1653
1654 +/*
1655 + * Some globals used to communicate between the FIQ and the IRQ handler
1656 + */
1657 +
1658 +void * dummy_send;
1659 +mphi_regs_t c_mphi_regs;
1660 +volatile void *dwc_regs_base;
1661 +int fiq_done, int_done;
1662 +
1663 +gintsts_data_t gintsts_saved = {.d32 = 0};
1664 +hcint_data_t hcint_saved[MAX_EPS_CHANNELS];
1665 +hcintmsk_data_t hcintmsk_saved[MAX_EPS_CHANNELS];
1666 +int split_out_xfersize[MAX_EPS_CHANNELS];
1667 +haint_data_t haint_saved;
1668 +
1669 +int g_next_sched_frame, g_np_count, g_np_sent;
1670 +static int mphi_int_count = 0 ;
1671 +
1672 +hcchar_data_t nak_hcchar;
1673 +hctsiz_data_t nak_hctsiz;
1674 +hcsplt_data_t nak_hcsplt;
1675 +int nak_count;
1676 +
1677 +int complete_sched[MAX_EPS_CHANNELS] = { -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1};
1678 +int split_start_frame[MAX_EPS_CHANNELS];
1679 +int queued_port[MAX_EPS_CHANNELS];
1680 +
1681 +#ifdef FIQ_DEBUG
1682 +char buffer[1000*16];
1683 +int wptr;
1684 +void notrace _fiq_print(FIQDBG_T dbg_lvl, char *fmt, ...)
1685 +{
1686 + FIQDBG_T dbg_lvl_req = FIQDBG_PORTHUB;
1687 + va_list args;
1688 + char text[17];
1689 + hfnum_data_t hfnum = { .d32 = FIQ_READ(dwc_regs_base + 0x408) };
1690 + unsigned long flags;
1691 +
1692 + local_irq_save(flags);
1693 + local_fiq_disable();
1694 + if(dbg_lvl & dbg_lvl_req || dbg_lvl == FIQDBG_ERR)
1695 + {
1696 + snprintf(text, 9, "%4d%d:%d ", hfnum.b.frnum/8, hfnum.b.frnum%8, 8 - hfnum.b.frrem/937);
1697 + va_start(args, fmt);
1698 + vsnprintf(text+8, 9, fmt, args);
1699 + va_end(args);
1700 +
1701 + memcpy(buffer + wptr, text, 16);
1702 + wptr = (wptr + 16) % sizeof(buffer);
1703 + }
1704 + local_irq_restore(flags);
1705 +}
1706 +#endif
1707 +
1708 +void notrace fiq_queue_request(int channel, int odd_frame)
1709 +{
1710 + hcchar_data_t hcchar = { .d32 = FIQ_READ(dwc_regs_base + 0x500 + (channel * 0x20) + 0x0) };
1711 + hcsplt_data_t hcsplt = { .d32 = FIQ_READ(dwc_regs_base + 0x500 + (channel * 0x20) + 0x4) };
1712 + hctsiz_data_t hctsiz = { .d32 = FIQ_READ(dwc_regs_base + 0x500 + (channel * 0x20) + 0x10) };
1713 +
1714 + if(hcsplt.b.spltena == 0)
1715 + {
1716 + fiq_print(FIQDBG_ERR, "SPLTENA ");
1717 + BUG();
1718 + }
1719 +
1720 + if(hcchar.b.epdir == 1)
1721 + {
1722 + fiq_print(FIQDBG_SCHED, "IN Ch %d", channel);
1723 + }
1724 + else
1725 + {
1726 + hctsiz.b.xfersize = 0;
1727 + fiq_print(FIQDBG_SCHED, "OUT Ch %d", channel);
1728 + }
1729 + FIQ_WRITE((dwc_regs_base + 0x500 + (channel * 0x20) + 0x10), hctsiz.d32);
1730 +
1731 + hcsplt.b.compsplt = 1;
1732 + FIQ_WRITE((dwc_regs_base + 0x500 + (channel * 0x20) + 0x4), hcsplt.d32);
1733 +
1734 + // Send the Split complete
1735 + hcchar.b.chen = 1;
1736 + hcchar.b.oddfrm = odd_frame ? 1 : 0;
1737 +
1738 + // Post this for transmit on the next frame for periodic or this frame for non-periodic
1739 + fiq_print(FIQDBG_SCHED, "SND_%s", odd_frame ? "ODD " : "EVEN");
1740 +
1741 + FIQ_WRITE((dwc_regs_base + 0x500 + (channel * 0x20) + 0x0), hcchar.d32);
1742 +}
1743 +
1744 +static int last_sof = -1;
1745 +
1746 +/*
1747 +** Function to handle the start-of-frame interrupt: decides whether we need to do anything and
1748 +** therefore whether to trigger the main interrupt
1749 +**
1750 +** returns int != 0 - interrupt has been handled
1751 +*/
1752 +int diff;
1753 +
1754 +int notrace fiq_sof_handle(hfnum_data_t hfnum)
1755 +{
1756 + int handled = 0;
1757 + int i;
1758 +
1759 + // Just check that once we're running we don't miss a SOF
1760 + /*if(last_sof != -1 && (hfnum.b.frnum != ((last_sof + 1) & 0x3fff)))
1761 + {
1762 + fiq_print(FIQDBG_ERR, "LASTSOF ");
1763 + fiq_print(FIQDBG_ERR, "%4d%d ", last_sof / 8, last_sof & 7);
1764 + fiq_print(FIQDBG_ERR, "%4d%d ", hfnum.b.frnum / 8, hfnum.b.frnum & 7);
1765 + BUG();
1766 + }*/
1767 +
1768 + // Only start remembering the last sof when the interrupt has been
1769 + // enabled (we don't check the mask to come in here...)
1770 + if(last_sof != -1 || FIQ_READ(dwc_regs_base + 0x18) & (1<<3))
1771 + last_sof = hfnum.b.frnum;
1772 +
1773 + for(i = 0; i < MAX_EPS_CHANNELS; i++)
1774 + {
1775 + if(complete_sched[i] != -1)
1776 + {
1777 + if(complete_sched[i] <= hfnum.b.frnum || (complete_sched[i] > 0x3f00 && hfnum.b.frnum < 0xf0))
1778 + {
1779 + fiq_queue_request(i, hfnum.b.frnum & 1);
1780 + complete_sched[i] = -1;
1781 + }
1782 + }
1783 +
1784 + if(complete_sched[i] != -1)
1785 + {
1786 + // This is because we've seen a split complete occur with no start...
1787 +			// most likely because we missed the complete 0x3fff frames ago!
1788 +
1789 + diff = (hfnum.b.frnum + 0x3fff - complete_sched[i]) & 0x3fff ;
1790 + if(diff > 32 && diff < 0x3f00)
1791 + {
1792 + fiq_print(FIQDBG_ERR, "SPLTMISS");
1793 + BUG();
1794 + }
1795 + }
1796 + }
1797 +
1798 + if(g_np_count == g_np_sent && dwc_frame_num_gt(g_next_sched_frame, hfnum.b.frnum))
1799 + {
1800 + /*
1801 +		 * If np_count != np_sent, it means we need to queue non-periodic (bulk) packets this frame
1802 +		 * g_next_sched_frame is the next frame we have periodic packets for
1803 +		 *
1804 +		 * if neither of these is required for this frame then just clear the interrupt
1805 + */
1806 + handled = 1;
1807 +
1808 + }
1809 +
1810 + return handled;
1811 +}
1812 +
1813 +int notrace port_id(hcsplt_data_t hcsplt)
1814 +{
1815 + return hcsplt.b.prtaddr + (hcsplt.b.hubaddr << 8);
1816 +}
1817 +
1818 +int notrace fiq_hcintr_handle(int channel, hfnum_data_t hfnum)
1819 +{
1820 + hcchar_data_t hcchar = { .d32 = FIQ_READ(dwc_regs_base + 0x500 + (channel * 0x20) + 0x0) };
1821 + hcsplt_data_t hcsplt = { .d32 = FIQ_READ(dwc_regs_base + 0x500 + (channel * 0x20) + 0x4) };
1822 + hcint_data_t hcint = { .d32 = FIQ_READ(dwc_regs_base + 0x500 + (channel * 0x20) + 0x8) };
1823 + hcintmsk_data_t hcintmsk = { .d32 = FIQ_READ(dwc_regs_base + 0x500 + (channel * 0x20) + 0xc) };
1824 + hctsiz_data_t hctsiz = { .d32 = FIQ_READ(dwc_regs_base + 0x500 + (channel * 0x20) + 0x10)};
1825 +
1826 + hcint_saved[channel].d32 |= hcint.d32;
1827 + hcintmsk_saved[channel].d32 = hcintmsk.d32;
1828 +
1829 + if(hcsplt.b.spltena)
1830 + {
1831 + fiq_print(FIQDBG_PORTHUB, "ph: %4x", port_id(hcsplt));
1832 + if(hcint.b.chhltd)
1833 + {
1834 + fiq_print(FIQDBG_SCHED, "CH HLT %d", channel);
1835 + fiq_print(FIQDBG_SCHED, "%08x", hcint_saved[channel]);
1836 + }
1837 + if(hcint.b.stall || hcint.b.xacterr || hcint.b.bblerr || hcint.b.frmovrun || hcint.b.datatglerr)
1838 + {
1839 + queued_port[channel] = 0;
1840 + fiq_print(FIQDBG_ERR, "CHAN ERR");
1841 + }
1842 + if(hcint.b.xfercomp)
1843 + {
1844 + // Clear the port allocation and transmit anything also on this port
1845 + queued_port[channel] = 0;
1846 + fiq_print(FIQDBG_SCHED, "XFERCOMP");
1847 + }
1848 + if(hcint.b.nak)
1849 + {
1850 + queued_port[channel] = 0;
1851 + fiq_print(FIQDBG_SCHED, "NAK");
1852 + }
1853 + if(hcint.b.ack && !hcsplt.b.compsplt)
1854 + {
1855 + int i;
1856 +
1857 + // Do not complete isochronous out transactions
1858 + if(hcchar.b.eptype == 1 && hcchar.b.epdir == 0)
1859 + {
1860 + queued_port[channel] = 0;
1861 + fiq_print(FIQDBG_SCHED, "ISOC_OUT");
1862 + }
1863 + else
1864 + {
1865 + // Make sure we check the port / hub combination that we sent this split on.
1866 + // Do not queue a second request to the same port
1867 + for(i = 0; i < MAX_EPS_CHANNELS; i++)
1868 + {
1869 + if(port_id(hcsplt) == queued_port[i])
1870 + {
1871 + fiq_print(FIQDBG_ERR, "PORTERR ");
1872 + //BUG();
1873 + }
1874 + }
1875 +
1876 + split_start_frame[channel] = (hfnum.b.frnum + 1) & ~7;
1877 +
1878 + // Note, the size of an OUT is in the start split phase, not
1879 + // the complete split
1880 + split_out_xfersize[channel] = hctsiz.b.xfersize;
1881 +
1882 + hcint_saved[channel].b.chhltd = 0;
1883 + hcint_saved[channel].b.ack = 0;
1884 +
1885 + queued_port[channel] = port_id(hcsplt);
1886 +
1887 + if(hcchar.b.eptype & 1)
1888 + {
1889 + // Send the periodic complete in the same oddness frame as the ACK went...
1890 + fiq_queue_request(channel, !(hfnum.b.frnum & 1));
1891 + // complete_sched[channel] = dwc_frame_num_inc(hfnum.b.frnum, 1);
1892 + }
1893 + else
1894 + {
1895 + // Schedule the split complete to occur later
1896 + complete_sched[channel] = dwc_frame_num_inc(hfnum.b.frnum, 2);
1897 + fiq_print(FIQDBG_SCHED, "ACK%04d%d", complete_sched[channel]/8, complete_sched[channel]%8);
1898 + }
1899 + }
1900 + }
1901 + if(hcint.b.nyet)
1902 + {
1903 + fiq_print(FIQDBG_ERR, "NYETERR1");
1904 + //BUG();
1905 + // Can transmit a split complete up to uframe .0 of the next frame
1906 + if(hfnum.b.frnum <= dwc_frame_num_inc(split_start_frame[channel], 8))
1907 + {
1908 + // Send it next frame
1909 + if(hcchar.b.eptype & 1) // type 1 & 3 are interrupt & isoc
1910 + {
1911 + fiq_print(FIQDBG_SCHED, "NYT:SEND");
1912 + fiq_queue_request(channel, !(hfnum.b.frnum & 1));
1913 + }
1914 + else
1915 + {
1916 +				// Schedule non-periodic access for next frame (the odd-even bit doesn't affect NP)
1917 + complete_sched[channel] = dwc_frame_num_inc(hfnum.b.frnum, 1);
1918 + fiq_print(FIQDBG_SCHED, "NYT%04d%d", complete_sched[channel]/8, complete_sched[channel]%8);
1919 + }
1920 + hcint_saved[channel].b.chhltd = 0;
1921 + hcint_saved[channel].b.nyet = 0;
1922 + }
1923 + else
1924 + {
1925 + queued_port[channel] = 0;
1926 + fiq_print(FIQDBG_ERR, "NYETERR2");
1927 + //BUG();
1928 + }
1929 + }
1930 + }
1931 + else
1932 + {
1933 + /*
1934 +		 * If we have any of NAK, ACK, Datatglerr active on a
1935 + * non-split channel, the sole reason is to reset error
1936 + * counts for a previously broken transaction. The FIQ
1937 + * will thrash on NAK IN and ACK OUT in particular so
1938 + * handle it "once" and allow the IRQ to do the rest.
1939 + */
1940 + hcint.d32 &= hcintmsk.d32;
1941 + if(hcint.b.nak)
1942 + {
1943 + hcintmsk.b.nak = 0;
1944 + FIQ_WRITE((dwc_regs_base + 0x500 + (channel * 0x20) + 0xc), hcintmsk.d32);
1945 + }
1946 + if (hcint.b.ack)
1947 + {
1948 + hcintmsk.b.ack = 0;
1949 + FIQ_WRITE((dwc_regs_base + 0x500 + (channel * 0x20) + 0xc), hcintmsk.d32);
1950 + }
1951 + }
1952 +
1953 + // Clear the interrupt, this will also clear the HAINT bit
1954 + FIQ_WRITE((dwc_regs_base + 0x500 + (channel * 0x20) + 0x8), hcint.d32);
1955 + return hcint_saved[channel].d32 == 0;
1956 +}
1957 +
1958 +gintsts_data_t gintsts;
1959 +gintmsk_data_t gintmsk;
1960 +// triggered: The set of interrupts that were triggered
1961 +// handled: The set of interrupts that have been handled (no IRQ is
1962 +// required)
1963 +// keep: The set of interrupts we want to keep unmasked even though we
1964 +// want to trigger an IRQ to handle it (SOF and HCINTR)
1965 +gintsts_data_t triggered, handled, keep;
1966 +hfnum_data_t hfnum;
1967 +
1968 +void __attribute__ ((naked)) notrace dwc_otg_hcd_handle_fiq(void)
1969 +{
1970 +
1971 + /* entry takes care to store registers we will be treading on here */
1972 + asm __volatile__ (
1973 + "mov ip, sp ;"
1974 + /* stash FIQ and normal regs */
1975 + "stmdb sp!, {r0-r12, lr};"
1976 + /* !! THIS SETS THE FRAME, adjust to > sizeof locals */
1977 + "sub fp, ip, #512 ;"
1978 + );
1979 +
1980 + // Cannot put local variables at the beginning of the function
1981 +	// because otherwise 'C' will play with the stack pointer. Any locals
1982 + // need to be inside the following block
1983 + do
1984 + {
1985 + fiq_done++;
1986 + gintsts.d32 = FIQ_READ(dwc_regs_base + 0x14);
1987 + gintmsk.d32 = FIQ_READ(dwc_regs_base + 0x18);
1988 + hfnum.d32 = FIQ_READ(dwc_regs_base + 0x408);
1989 + triggered.d32 = gintsts.d32 & gintmsk.d32;
1990 + handled.d32 = 0;
1991 + keep.d32 = 0;
1992 + fiq_print(FIQDBG_INT, "FIQ ");
1993 + fiq_print(FIQDBG_INT, "%08x", gintsts.d32);
1994 + fiq_print(FIQDBG_INT, "%08x", gintmsk.d32);
1995 + if(gintsts.d32)
1996 + {
1997 + // If port enabled
1998 + if((FIQ_READ(dwc_regs_base + 0x440) & 0xf) == 0x5)
1999 + {
2000 + if(gintsts.b.sofintr)
2001 + {
2002 + if(fiq_sof_handle(hfnum))
2003 + {
2004 + handled.b.sofintr = 1; /* Handled in FIQ */
2005 + }
2006 + else
2007 + {
2008 +					/* Keep interrupt unmasked */
2009 + keep.b.sofintr = 1;
2010 + }
2011 + {
2012 +					// Need to make sure the read and clearing of the SOF interrupt are as close together as possible to avoid the possibility of missing
2013 + // a start of frame interrupt
2014 + gintsts_data_t gintsts = { .b.sofintr = 1 };
2015 + FIQ_WRITE((dwc_regs_base + 0x14), gintsts.d32);
2016 + }
2017 + }
2018 +
2019 + if(fiq_split_enable && gintsts.b.hcintr)
2020 + {
2021 + int i;
2022 + haint_data_t haint;
2023 + haintmsk_data_t haintmsk;
2024 +
2025 + haint.d32 = FIQ_READ(dwc_regs_base + 0x414);
2026 + haintmsk.d32 = FIQ_READ(dwc_regs_base + 0x418);
2027 + haint.d32 &= haintmsk.d32;
2028 + haint_saved.d32 |= haint.d32;
2029 +
2030 + fiq_print(FIQDBG_INT, "hcintr");
2031 + fiq_print(FIQDBG_INT, "%08x", FIQ_READ(dwc_regs_base + 0x414));
2032 +
2033 + // Go through each channel that has an enabled interrupt
2034 + for(i = 0; i < 16; i++)
2035 + if((haint.d32 >> i) & 1)
2036 + if(fiq_hcintr_handle(i, hfnum))
2037 + haint_saved.d32 &= ~(1 << i); /* this was handled */
2038 +
2039 + /* If we've handled all host channel interrupts then don't trigger the interrupt */
2040 + if(haint_saved.d32 == 0)
2041 + {
2042 + handled.b.hcintr = 1;
2043 + }
2044 + else
2045 + {
2046 + /* Make sure we keep the channel interrupt unmasked when triggering the IRQ */
2047 + keep.b.hcintr = 1;
2048 + }
2049 +
2050 + {
2051 + gintsts_data_t gintsts = { .b.hcintr = 1 };
2052 +
2053 + // Always clear the channel interrupt
2054 + FIQ_WRITE((dwc_regs_base + 0x14), gintsts.d32);
2055 + }
2056 + }
2057 + }
2058 + else
2059 + {
2060 + last_sof = -1;
2061 + }
2062 + }
2063 +
2064 +		// Mask out the interrupts that were triggered but not handled - don't mask out the ones we want to keep
2065 + gintmsk.d32 = keep.d32 | (gintmsk.d32 & ~(triggered.d32 & ~handled.d32));
2066 + // Save those that were triggered but not handled
2067 + gintsts_saved.d32 |= triggered.d32 & ~handled.d32;
2068 + FIQ_WRITE(dwc_regs_base + 0x18, gintmsk.d32);
2069 +
2070 + // Clear and save any unhandled interrupts and trigger the interrupt
2071 + if(gintsts_saved.d32)
2072 + {
2073 + /* To enable the MPHI interrupt (INT 32)
2074 + */
2075 + FIQ_WRITE( c_mphi_regs.outdda, (int) dummy_send);
2076 + FIQ_WRITE( c_mphi_regs.outddb, (1 << 29));
2077 +
2078 + mphi_int_count++;
2079 + }
2080 + }
2081 + while(0);
2082 +
2083 + mb();
2084 +
2085 + /* exit back to normal mode restoring everything */
2086 + asm __volatile__ (
2087 + /* return FIQ regs back to pristine state
2088 + * and get normal regs back
2089 + */
2090 + "ldmia sp!, {r0-r12, lr};"
2091 +
2092 + /* return */
2093 + "subs pc, lr, #4;"
2094 + );
2095 +}
2096 +
2097 /** This function handles interrupts for the HCD. */
2098 int32_t dwc_otg_hcd_handle_intr(dwc_otg_hcd_t * dwc_otg_hcd)
2099 {
2100 int retval = 0;
2101 + static int last_time;
2102
2103 dwc_otg_core_if_t *core_if = dwc_otg_hcd->core_if;
2104 gintsts_data_t gintsts;
2105 + gintmsk_data_t gintmsk;
2106 + hfnum_data_t hfnum;
2107 +
2108 #ifdef DEBUG
2109 dwc_otg_core_global_regs_t *global_regs = core_if->core_global_regs;
2110
2111 - //GRAYG: debugging
2112 - if (NULL == global_regs) {
2113 - DWC_DEBUGPL(DBG_HCD, "**** NULL regs: dwc_otg_hcd=%p "
2114 - "core_if=%p\n",
2115 - dwc_otg_hcd, global_regs);
2116 - return retval;
2117 - }
2118 #endif
2119
2120 + gintsts.d32 = DWC_READ_REG32(&core_if->core_global_regs->gintsts);
2121 + gintmsk.d32 = DWC_READ_REG32(&core_if->core_global_regs->gintmsk);
2122 +
2123 /* Exit from ISR if core is hibernated */
2124 if (core_if->hibernation_suspend == 1) {
2125 - return retval;
2126 + goto exit_handler_routine;
2127 }
2128 DWC_SPINLOCK(dwc_otg_hcd->lock);
2129 /* Check if HOST Mode */
2130 if (dwc_otg_is_host_mode(core_if)) {
2131 - gintsts.d32 = dwc_otg_read_core_intr(core_if);
2132 + local_fiq_disable();
2133 + gintmsk.d32 |= gintsts_saved.d32;
2134 + gintsts.d32 |= gintsts_saved.d32;
2135 + gintsts_saved.d32 = 0;
2136 + local_fiq_enable();
2137 if (!gintsts.d32) {
2138 - DWC_SPINUNLOCK(dwc_otg_hcd->lock);
2139 - return 0;
2140 + goto exit_handler_routine;
2141 }
2142 + gintsts.d32 &= gintmsk.d32;
2143 +
2144 #ifdef DEBUG
2145 + // We should be OK doing this because the common interrupts should already have been serviced
2146 /* Don't print debug message in the interrupt handler on SOF */
2147 #ifndef DEBUG_SOF
2148 if (gintsts.d32 != DWC_SOF_INTR_MASK)
2149 @@ -88,10 +543,16 @@ int32_t dwc_otg_hcd_handle_intr(dwc_otg_
2150 "DWC OTG HCD Interrupt Detected gintsts&gintmsk=0x%08x core_if=%p\n",
2151 gintsts.d32, core_if);
2152 #endif
2153 -
2154 - if (gintsts.b.sofintr) {
2155 + hfnum.d32 = DWC_READ_REG32(&dwc_otg_hcd->core_if->host_if->host_global_regs->hfnum);
2156 + if (gintsts.b.sofintr && g_np_count == g_np_sent && dwc_frame_num_gt(g_next_sched_frame, hfnum.b.frnum))
2157 + {
2158 +			/* Note, we should never get here if the FIQ is doing its job properly */
2159 retval |= dwc_otg_hcd_handle_sof_intr(dwc_otg_hcd);
2160 }
2161 + else if (gintsts.b.sofintr) {
2162 + retval |= dwc_otg_hcd_handle_sof_intr(dwc_otg_hcd);
2163 + }
2164 +
2165 if (gintsts.b.rxstsqlvl) {
2166 retval |=
2167 dwc_otg_hcd_handle_rx_status_q_level_intr
2168 @@ -106,7 +567,10 @@ int32_t dwc_otg_hcd_handle_intr(dwc_otg_
2169 /** @todo Implement i2cintr handler. */
2170 }
2171 if (gintsts.b.portintr) {
2172 +
2173 + gintmsk_data_t gintmsk = { .b.portintr = 1};
2174 retval |= dwc_otg_hcd_handle_port_intr(dwc_otg_hcd);
2175 + DWC_MODIFY_REG32(&core_if->core_global_regs->gintmsk, 0, gintmsk.d32);
2176 }
2177 if (gintsts.b.hcintr) {
2178 retval |= dwc_otg_hcd_handle_hc_intr(dwc_otg_hcd);
2179 @@ -138,11 +602,48 @@ int32_t dwc_otg_hcd_handle_intr(dwc_otg_
2180 #endif
2181
2182 }
2183 +
2184 +exit_handler_routine:
2185 +
2186 + if (fiq_fix_enable)
2187 + {
2188 + local_fiq_disable();
2189 + // Make sure that we don't clear the interrupt if we've still got pending work to do
2190 + if(gintsts_saved.d32 == 0)
2191 + {
2192 + /* Clear the MPHI interrupt */
2193 + DWC_WRITE_REG32(c_mphi_regs.intstat, (1<<16));
2194 + if (mphi_int_count >= 60)
2195 + {
2196 + DWC_WRITE_REG32(c_mphi_regs.ctrl, ((1<<31) + (1<<16)));
2197 + while(!(DWC_READ_REG32(c_mphi_regs.ctrl) & (1 << 17)))
2198 + ;
2199 + DWC_WRITE_REG32(c_mphi_regs.ctrl, (1<<31));
2200 + mphi_int_count = 0;
2201 + }
2202 + int_done++;
2203 + }
2204 +
2205 + // Unmask handled interrupts
2206 + FIQ_WRITE(dwc_regs_base + 0x18, gintmsk.d32);
2207 + //DWC_MODIFY_REG32((uint32_t *)IO_ADDRESS(USB_BASE + 0x8), 0 , 1);
2208 +
2209 + local_fiq_enable();
2210 +
2211 + if((jiffies / HZ) > last_time)
2212 + {
2213 + /* Once a second output the fiq and irq numbers, useful for debug */
2214 + last_time = jiffies / HZ;
2215 + DWC_DEBUGPL(DBG_USER, "int_done = %d fiq_done = %d\n", int_done, fiq_done);
2216 + }
2217 + }
2218 +
2219 DWC_SPINUNLOCK(dwc_otg_hcd->lock);
2220 return retval;
2221 }
2222
2223 #ifdef DWC_TRACK_MISSED_SOFS
2224 +
2225 #warning Compiling code to track missed SOFs
2226 #define FRAME_NUM_ARRAY_SIZE 1000
2227 /**
2228 @@ -188,7 +689,8 @@ int32_t dwc_otg_hcd_handle_sof_intr(dwc_
2229 dwc_list_link_t *qh_entry;
2230 dwc_otg_qh_t *qh;
2231 dwc_otg_transaction_type_e tr_type;
2232 - gintsts_data_t gintsts = {.d32 = 0 };
2233 + int did_something = 0;
2234 + int32_t next_sched_frame = -1;
2235
2236 hfnum.d32 =
2237 DWC_READ_REG32(&hcd->core_if->host_if->host_global_regs->hfnum);
2238 @@ -212,17 +714,31 @@ int32_t dwc_otg_hcd_handle_sof_intr(dwc_
2239 qh = DWC_LIST_ENTRY(qh_entry, dwc_otg_qh_t, qh_list_entry);
2240 qh_entry = qh_entry->next;
2241 if (dwc_frame_num_le(qh->sched_frame, hcd->frame_number)) {
2242 +
2243 /*
2244 * Move QH to the ready list to be executed next
2245 * (micro)frame.
2246 */
2247 DWC_LIST_MOVE_HEAD(&hcd->periodic_sched_ready,
2248 &qh->qh_list_entry);
2249 +
2250 + did_something = 1;
2251 + }
2252 + else
2253 + {
2254 + if(next_sched_frame < 0 || dwc_frame_num_le(qh->sched_frame, next_sched_frame))
2255 + {
2256 + next_sched_frame = qh->sched_frame;
2257 + }
2258 }
2259 }
2260 +
2261 + g_next_sched_frame = next_sched_frame;
2262 +
2263 tr_type = dwc_otg_hcd_select_transactions(hcd);
2264 if (tr_type != DWC_OTG_TRANSACTION_NONE) {
2265 dwc_otg_hcd_queue_transactions(hcd, tr_type);
2266 + did_something = 1;
2267 }
2268
2269 /* Clear interrupt */
2270 @@ -511,6 +1027,15 @@ int32_t dwc_otg_hcd_handle_hc_intr(dwc_o
2271
2272 haint.d32 = dwc_otg_read_host_all_channels_intr(dwc_otg_hcd->core_if);
2273
2274 + // Overwrite with saved interrupts from fiq handler
2275 + if(fiq_split_enable)
2276 + {
2277 + local_fiq_disable();
2278 + haint.d32 = haint_saved.d32;
2279 + haint_saved.d32 = 0;
2280 + local_fiq_enable();
2281 + }
2282 +
2283 for (i = 0; i < dwc_otg_hcd->core_if->core_params->host_channels; i++) {
2284 if (haint.b2.chint & (1 << i)) {
2285 retval |= dwc_otg_hcd_handle_hc_n_intr(dwc_otg_hcd, i);
2286 @@ -551,7 +1076,10 @@ static uint32_t get_actual_xfer_length(d
2287 *short_read = (hctsiz.b.xfersize != 0);
2288 }
2289 } else if (hc->qh->do_split) {
2290 - length = qtd->ssplit_out_xfer_count;
2291 + if(fiq_split_enable)
2292 + length = split_out_xfersize[hc->hc_num];
2293 + else
2294 + length = qtd->ssplit_out_xfer_count;
2295 } else {
2296 length = hc->xfer_len;
2297 }
2298 @@ -595,7 +1123,6 @@ static int update_urb_state_xfer_comp(dw
2299 DWC_OTG_HC_XFER_COMPLETE,
2300 &short_read);
2301
2302 -
2303 /* non DWORD-aligned buffer case handling. */
2304 if (hc->align_buff && xfer_length && hc->ep_is_in) {
2305 dwc_memcpy(urb->buf + urb->actual_length, hc->qh->dw_align_buf,
2306 @@ -797,11 +1324,24 @@ static void release_channel(dwc_otg_hcd_
2307 dwc_otg_transaction_type_e tr_type;
2308 int free_qtd;
2309 dwc_irqflags_t flags;
2310 - dwc_spinlock_t *channel_lock = DWC_SPINLOCK_ALLOC();
2311 + dwc_spinlock_t *channel_lock = hcd->channel_lock;
2312 +#ifdef FIQ_DEBUG
2313 + int endp = qtd->urb ? qtd->urb->pipe_info.ep_num : 0;
2314 +#endif
2315 + int hog_port = 0;
2316
2317 DWC_DEBUGPL(DBG_HCDV, " %s: channel %d, halt_status %d, xfer_len %d\n",
2318 __func__, hc->hc_num, halt_status, hc->xfer_len);
2319
2320 + if(fiq_split_enable && hc->do_split) {
2321 + if(!hc->ep_is_in && hc->ep_type == UE_ISOCHRONOUS) {
2322 + if(hc->xact_pos == DWC_HCSPLIT_XACTPOS_MID ||
2323 + hc->xact_pos == DWC_HCSPLIT_XACTPOS_BEGIN) {
2324 + hog_port = 1;
2325 + }
2326 + }
2327 + }
2328 +
2329 switch (halt_status) {
2330 case DWC_OTG_HC_XFER_URB_COMPLETE:
2331 free_qtd = 1;
2332 @@ -876,15 +1416,32 @@ cleanup:
2333
2334 DWC_SPINLOCK_IRQSAVE(channel_lock, &flags);
2335 hcd->available_host_channels++;
2336 + fiq_print(FIQDBG_PORTHUB, "AHC = %d ", hcd->available_host_channels);
2337 DWC_SPINUNLOCK_IRQRESTORE(channel_lock, flags);
2338 }
2339
2340 + if(fiq_split_enable && hc->do_split)
2341 + {
2342 + if(!(hcd->hub_port[hc->hub_addr] & (1 << hc->port_addr)))
2343 + {
2344 + fiq_print(FIQDBG_ERR, "PRTNOTAL");
2345 + //BUG();
2346 + }
2347 + if(!hog_port && (hc->ep_type == DWC_OTG_EP_TYPE_ISOC ||
2348 + hc->ep_type == DWC_OTG_EP_TYPE_INTR)) {
2349 + hcd->hub_port[hc->hub_addr] &= ~(1 << hc->port_addr);
2350 +#ifdef FIQ_DEBUG
2351 + hcd->hub_port_alloc[hc->hub_addr * 16 + hc->port_addr] = -1;
2352 +#endif
2353 + fiq_print(FIQDBG_PORTHUB, "H%dP%d:RR%d", hc->hub_addr, hc->port_addr, endp);
2354 + }
2355 + }
2356 +
2357 /* Try to queue more transfers now that there's a free channel. */
2358 tr_type = dwc_otg_hcd_select_transactions(hcd);
2359 if (tr_type != DWC_OTG_TRANSACTION_NONE) {
2360 dwc_otg_hcd_queue_transactions(hcd, tr_type);
2361 }
2362 - DWC_SPINLOCK_FREE(channel_lock);
2363 }
2364
2365 /**
2366 @@ -1295,6 +1852,17 @@ static int32_t handle_hc_nak_intr(dwc_ot
2367 "NAK Received--\n", hc->hc_num);
2368
2369 /*
2370 +	 * When we get bulk NAKs, remember this so we hold off on this qh until
2371 + * the beginning of the next frame
2372 + */
2373 + switch(dwc_otg_hcd_get_pipe_type(&qtd->urb->pipe_info)) {
2374 + case UE_BULK:
2375 + case UE_CONTROL:
2376 + if (nak_holdoff_enable)
2377 + hc->qh->nak_frame = dwc_otg_hcd_get_frame_number(hcd);
2378 + }
2379 +
2380 + /*
2381 * Handle NAK for IN/OUT SSPLIT/CSPLIT transfers, bulk, control, and
2382 * interrupt. Re-start the SSPLIT transfer.
2383 */
2384 @@ -1316,7 +1884,11 @@ static int32_t handle_hc_nak_intr(dwc_ot
2385 * transfers in DMA mode for the sole purpose of
2386 * resetting the error count after a transaction error
2387 * occurs. The core will continue transferring data.
2388 + * Disable other interrupts unmasked for the same
2389 + * reason.
2390 */
2391 + disable_hc_int(hc_regs, datatglerr);
2392 + disable_hc_int(hc_regs, ack);
2393 qtd->error_count = 0;
2394 goto handle_nak_done;
2395 }
2396 @@ -1428,6 +2000,15 @@ static int32_t handle_hc_ack_intr(dwc_ot
2397 halt_channel(hcd, hc, qtd, DWC_OTG_HC_XFER_ACK);
2398 }
2399 } else {
2400 + /*
2401 + * An unmasked ACK on a non-split DMA transaction is
2402 + * for the sole purpose of resetting error counts. Disable other
2403 + * interrupts unmasked for the same reason.
2404 + */
2405 + if(hcd->core_if->dma_enable) {
2406 + disable_hc_int(hc_regs, datatglerr);
2407 + disable_hc_int(hc_regs, nak);
2408 + }
2409 qtd->error_count = 0;
2410
2411 if (hc->qh->ping_state) {
2412 @@ -1490,8 +2071,10 @@ static int32_t handle_hc_nyet_intr(dwc_o
2413 hc->ep_type == DWC_OTG_EP_TYPE_ISOC) {
2414 int frnum = dwc_otg_hcd_get_frame_number(hcd);
2415
2416 + // With the FIQ running we only ever see the failed NYET
2417 if (dwc_full_frame_num(frnum) !=
2418 - dwc_full_frame_num(hc->qh->sched_frame)) {
2419 + dwc_full_frame_num(hc->qh->sched_frame) ||
2420 + fiq_split_enable) {
2421 /*
2422 * No longer in the same full speed frame.
2423 * Treat this as a transaction error.
2424 @@ -1778,13 +2361,28 @@ static int32_t handle_hc_datatglerr_intr
2425 dwc_otg_qtd_t * qtd)
2426 {
2427 DWC_DEBUGPL(DBG_HCDI, "--Host Channel %d Interrupt: "
2428 - "Data Toggle Error--\n", hc->hc_num);
2429 + "Data Toggle Error on %s transfer--\n",
2430 + hc->hc_num, (hc->ep_is_in ? "IN" : "OUT"));
2431
2432 - if (hc->ep_is_in) {
2433 + /* Data toggles on split transactions cause the hc to halt.
2434 +	 * Restart the transfer. */
2435 + if(hc->qh->do_split)
2436 + {
2437 + qtd->error_count++;
2438 + dwc_otg_hcd_save_data_toggle(hc, hc_regs, qtd);
2439 + update_urb_state_xfer_intr(hc, hc_regs,
2440 + qtd->urb, qtd, DWC_OTG_HC_XFER_XACT_ERR);
2441 + halt_channel(hcd, hc, qtd, DWC_OTG_HC_XFER_XACT_ERR);
2442 + } else if (hc->ep_is_in) {
2443 + /* An unmasked data toggle error on a non-split DMA transaction is
2444 + * for the sole purpose of resetting error counts. Disable other
2445 + * interrupts unmasked for the same reason.
2446 + */
2447 + if(hcd->core_if->dma_enable) {
2448 + disable_hc_int(hc_regs, ack);
2449 + disable_hc_int(hc_regs, nak);
2450 + }
2451 qtd->error_count = 0;
2452 - } else {
2453 - DWC_ERROR("Data Toggle Error on OUT transfer,"
2454 - "channel %d\n", hc->hc_num);
2455 }
2456
2457 disable_hc_int(hc_regs, datatglerr);
2458 @@ -1862,10 +2460,10 @@ static inline int halt_status_ok(dwc_otg
2459 static void handle_hc_chhltd_intr_dma(dwc_otg_hcd_t * hcd,
2460 dwc_hc_t * hc,
2461 dwc_otg_hc_regs_t * hc_regs,
2462 - dwc_otg_qtd_t * qtd)
2463 + dwc_otg_qtd_t * qtd,
2464 + hcint_data_t hcint,
2465 + hcintmsk_data_t hcintmsk)
2466 {
2467 - hcint_data_t hcint;
2468 - hcintmsk_data_t hcintmsk;
2469 int out_nak_enh = 0;
2470
2471 /* For core with OUT NAK enhancement, the flow for high-
2472 @@ -1897,8 +2495,11 @@ static void handle_hc_chhltd_intr_dma(dw
2473 }
2474
2475 /* Read the HCINTn register to determine the cause for the halt. */
2476 - hcint.d32 = DWC_READ_REG32(&hc_regs->hcint);
2477 - hcintmsk.d32 = DWC_READ_REG32(&hc_regs->hcintmsk);
2478 + if(!fiq_split_enable)
2479 + {
2480 + hcint.d32 = DWC_READ_REG32(&hc_regs->hcint);
2481 + hcintmsk.d32 = DWC_READ_REG32(&hc_regs->hcintmsk);
2482 + }
2483
2484 if (hcint.b.xfercomp) {
2485 /** @todo This is here because of a possible hardware bug. Spec
2486 @@ -1937,6 +2538,8 @@ static void handle_hc_chhltd_intr_dma(dw
2487 handle_hc_babble_intr(hcd, hc, hc_regs, qtd);
2488 } else if (hcint.b.frmovrun) {
2489 handle_hc_frmovrun_intr(hcd, hc, hc_regs, qtd);
2490 + } else if (hcint.b.datatglerr) {
2491 + handle_hc_datatglerr_intr(hcd, hc, hc_regs, qtd);
2492 } else if (!out_nak_enh) {
2493 if (hcint.b.nyet) {
2494 /*
2495 @@ -1986,12 +2589,24 @@ static void handle_hc_chhltd_intr_dma(dw
2496 DWC_READ_REG32(&hcd->
2497 core_if->core_global_regs->
2498 gintsts));
2499 +			/* Fall through: use 3-strikes rule */
2500 + qtd->error_count++;
2501 + dwc_otg_hcd_save_data_toggle(hc, hc_regs, qtd);
2502 + update_urb_state_xfer_intr(hc, hc_regs,
2503 + qtd->urb, qtd, DWC_OTG_HC_XFER_XACT_ERR);
2504 + halt_channel(hcd, hc, qtd, DWC_OTG_HC_XFER_XACT_ERR);
2505 }
2506
2507 }
2508 } else {
2509 DWC_PRINTF("NYET/NAK/ACK/other in non-error case, 0x%08x\n",
2510 hcint.d32);
2511 +		/* Fall through: use 3-strikes rule */
2512 + qtd->error_count++;
2513 + dwc_otg_hcd_save_data_toggle(hc, hc_regs, qtd);
2514 + update_urb_state_xfer_intr(hc, hc_regs,
2515 + qtd->urb, qtd, DWC_OTG_HC_XFER_XACT_ERR);
2516 + halt_channel(hcd, hc, qtd, DWC_OTG_HC_XFER_XACT_ERR);
2517 }
2518 }
2519
2520 @@ -2009,13 +2624,15 @@ static void handle_hc_chhltd_intr_dma(dw
2521 static int32_t handle_hc_chhltd_intr(dwc_otg_hcd_t * hcd,
2522 dwc_hc_t * hc,
2523 dwc_otg_hc_regs_t * hc_regs,
2524 - dwc_otg_qtd_t * qtd)
2525 + dwc_otg_qtd_t * qtd,
2526 + hcint_data_t hcint,
2527 + hcintmsk_data_t hcintmsk)
2528 {
2529 DWC_DEBUGPL(DBG_HCDI, "--Host Channel %d Interrupt: "
2530 "Channel Halted--\n", hc->hc_num);
2531
2532 if (hcd->core_if->dma_enable) {
2533 - handle_hc_chhltd_intr_dma(hcd, hc, hc_regs, qtd);
2534 + handle_hc_chhltd_intr_dma(hcd, hc, hc_regs, qtd, hcint, hcintmsk);
2535 } else {
2536 #ifdef DEBUG
2537 if (!halt_status_ok(hcd, hc, hc_regs, qtd)) {
2538 @@ -2032,7 +2649,7 @@ static int32_t handle_hc_chhltd_intr(dwc
2539 int32_t dwc_otg_hcd_handle_hc_n_intr(dwc_otg_hcd_t * dwc_otg_hcd, uint32_t num)
2540 {
2541 int retval = 0;
2542 - hcint_data_t hcint;
2543 + hcint_data_t hcint, hcint_orig;
2544 hcintmsk_data_t hcintmsk;
2545 dwc_hc_t *hc;
2546 dwc_otg_hc_regs_t *hc_regs;
2547 @@ -2042,15 +2659,33 @@ int32_t dwc_otg_hcd_handle_hc_n_intr(dwc
2548
2549 hc = dwc_otg_hcd->hc_ptr_array[num];
2550 hc_regs = dwc_otg_hcd->core_if->host_if->hc_regs[num];
2551 + if(hc->halt_status == DWC_OTG_HC_XFER_URB_DEQUEUE) {
2552 + /* We are responding to a channel disable. Driver
2553 + * state is cleared - our qtd has gone away.
2554 + */
2555 + release_channel(dwc_otg_hcd, hc, NULL, hc->halt_status);
2556 + return 1;
2557 + }
2558 qtd = DWC_CIRCLEQ_FIRST(&hc->qh->qtd_list);
2559
2560 hcint.d32 = DWC_READ_REG32(&hc_regs->hcint);
2561 + hcint_orig = hcint;
2562 hcintmsk.d32 = DWC_READ_REG32(&hc_regs->hcintmsk);
2563 DWC_DEBUGPL(DBG_HCDV,
2564 " hcint 0x%08x, hcintmsk 0x%08x, hcint&hcintmsk 0x%08x\n",
2565 hcint.d32, hcintmsk.d32, (hcint.d32 & hcintmsk.d32));
2566 hcint.d32 = hcint.d32 & hcintmsk.d32;
2567
2568 + if(fiq_split_enable)
2569 + {
2570 + // replace with the saved interrupts from the fiq handler
2571 + local_fiq_disable();
2572 + hcint_orig.d32 = hcint_saved[num].d32;
2573 + hcint.d32 = hcint_orig.d32 & hcintmsk_saved[num].d32;
2574 + hcint_saved[num].d32 = 0;
2575 + local_fiq_enable();
2576 + }
2577 +
2578 if (!dwc_otg_hcd->core_if->dma_enable) {
2579 if (hcint.b.chhltd && hcint.d32 != 0x2) {
2580 hcint.b.chhltd = 0;
2581 @@ -2068,7 +2703,7 @@ int32_t dwc_otg_hcd_handle_hc_n_intr(dwc
2582 hcint.b.nyet = 0;
2583 }
2584 if (hcint.b.chhltd) {
2585 - retval |= handle_hc_chhltd_intr(dwc_otg_hcd, hc, hc_regs, qtd);
2586 + retval |= handle_hc_chhltd_intr(dwc_otg_hcd, hc, hc_regs, qtd, hcint_orig, hcintmsk_saved[num]);
2587 }
2588 if (hcint.b.ahberr) {
2589 retval |= handle_hc_ahberr_intr(dwc_otg_hcd, hc, hc_regs, qtd);
2590 @@ -2080,7 +2715,8 @@ int32_t dwc_otg_hcd_handle_hc_n_intr(dwc
2591 retval |= handle_hc_nak_intr(dwc_otg_hcd, hc, hc_regs, qtd);
2592 }
2593 if (hcint.b.ack) {
2594 - retval |= handle_hc_ack_intr(dwc_otg_hcd, hc, hc_regs, qtd);
2595 + if(!hcint.b.chhltd)
2596 + retval |= handle_hc_ack_intr(dwc_otg_hcd, hc, hc_regs, qtd);
2597 }
2598 if (hcint.b.nyet) {
2599 retval |= handle_hc_nyet_intr(dwc_otg_hcd, hc, hc_regs, qtd);
2600 @@ -2102,5 +2738,4 @@ int32_t dwc_otg_hcd_handle_hc_n_intr(dwc
2601
2602 return retval;
2603 }
2604 -
2605 #endif /* DWC_DEVICE_ONLY */
2606 --- a/drivers/usb/host/dwc_otg/dwc_otg_hcd_linux.c
2607 +++ b/drivers/usb/host/dwc_otg/dwc_otg_hcd_linux.c
2608 @@ -1,3 +1,4 @@
2609 +
2610 /* ==========================================================================
2611 * $File: //dwh/usb_iip/dev/software/otg/linux/drivers/dwc_otg_hcd_linux.c $
2612 * $Revision: #20 $
2613 @@ -50,6 +51,7 @@
2614 #include <linux/dma-mapping.h>
2615 #include <linux/version.h>
2616 #include <asm/io.h>
2617 +#include <asm/fiq.h>
2618 #include <linux/usb.h>
2619 #if LINUX_VERSION_CODE < KERNEL_VERSION(2,6,35)
2620 #include <../drivers/usb/core/hcd.h>
2621 @@ -67,6 +69,8 @@
2622 #include "dwc_otg_dbg.h"
2623 #include "dwc_otg_driver.h"
2624 #include "dwc_otg_hcd.h"
2625 +#include "dwc_otg_mphi_fix.h"
2626 +
2627 /**
2628 * Gets the endpoint number from a _bEndpointAddress argument. The endpoint is
2629 * qualified with its direction (possible 32 endpoints per device).
2630 @@ -76,6 +80,8 @@
2631
2632 static const char dwc_otg_hcd_name[] = "dwc_otg_hcd";
2633
2634 +extern bool fiq_fix_enable;
2635 +
2636 /** @name Linux HC Driver API Functions */
2637 /** @{ */
2638 /* manage i/o requests, device state */
2639 @@ -259,13 +265,15 @@ static void free_bus_bandwidth(struct us
2640
2641 /**
2642 * Sets the final status of an URB and returns it to the device driver. Any
2643 - * required cleanup of the URB is performed.
2644 + * required cleanup of the URB is performed. The HCD lock should be held on
2645 + * entry.
2646 */
2647 static int _complete(dwc_otg_hcd_t * hcd, void *urb_handle,
2648 dwc_otg_hcd_urb_t * dwc_otg_urb, int32_t status)
2649 {
2650 struct urb *urb = (struct urb *)urb_handle;
2651 -
2652 + urb_tq_entry_t *new_entry;
2653 + int rc = 0;
2654 if (CHK_DEBUG_LEVEL(DBG_HCDV | DBG_HCD_URB)) {
2655 DWC_PRINTF("%s: urb %p, device %d, ep %d %s, status=%d\n",
2656 __func__, urb, usb_pipedevice(urb->pipe),
2657 @@ -279,7 +287,7 @@ static int _complete(dwc_otg_hcd_t * hcd
2658 }
2659 }
2660 }
2661 -
2662 + new_entry = DWC_ALLOC_ATOMIC(sizeof(urb_tq_entry_t));
2663 urb->actual_length = dwc_otg_hcd_urb_get_actual_length(dwc_otg_urb);
2664 /* Convert status value. */
2665 switch (status) {
2666 @@ -301,6 +309,9 @@ static int _complete(dwc_otg_hcd_t * hcd
2667 case -DWC_E_OVERFLOW:
2668 status = -EOVERFLOW;
2669 break;
2670 + case -DWC_E_SHUTDOWN:
2671 + status = -ESHUTDOWN;
2672 + break;
2673 default:
2674 if (status) {
2675 DWC_PRINTF("Uknown urb status %d\n", status);
2676 @@ -342,18 +353,33 @@ static int _complete(dwc_otg_hcd_t * hcd
2677 }
2678
2679 DWC_FREE(dwc_otg_urb);
2680 -
2681 + if (!new_entry) {
2682 + DWC_ERROR("dwc_otg_hcd: complete: cannot allocate URB TQ entry\n");
2683 + urb->status = -EPROTO;
2684 + /* don't schedule the tasklet -
2685 + * directly return the packet here with error. */
2686 #if USB_URB_EP_LINKING
2687 - usb_hcd_unlink_urb_from_ep(dwc_otg_hcd_to_hcd(hcd), urb);
2688 + usb_hcd_unlink_urb_from_ep(dwc_otg_hcd_to_hcd(hcd), urb);
2689 #endif
2690 - DWC_SPINUNLOCK(hcd->lock);
2691 #if LINUX_VERSION_CODE < KERNEL_VERSION(2,6,28)
2692 - usb_hcd_giveback_urb(dwc_otg_hcd_to_hcd(hcd), urb);
2693 + usb_hcd_giveback_urb(dwc_otg_hcd_to_hcd(hcd), urb);
2694 #else
2695 - usb_hcd_giveback_urb(dwc_otg_hcd_to_hcd(hcd), urb, status);
2696 + usb_hcd_giveback_urb(dwc_otg_hcd_to_hcd(hcd), urb, urb->status);
2697 #endif
2698 - DWC_SPINLOCK(hcd->lock);
2699 -
2700 + } else {
2701 + new_entry->urb = urb;
2702 +#if USB_URB_EP_LINKING
2703 + rc = usb_hcd_check_unlink_urb(dwc_otg_hcd_to_hcd(hcd), urb, urb->status);
2704 + if(0 == rc) {
2705 + usb_hcd_unlink_urb_from_ep(dwc_otg_hcd_to_hcd(hcd), urb);
2706 + }
2707 +#endif
2708 + if(0 == rc) {
2709 + DWC_TAILQ_INSERT_TAIL(&hcd->completed_urb_list, new_entry,
2710 + urb_tq_entries);
2711 + DWC_TASK_HI_SCHEDULE(hcd->completion_tasklet);
2712 + }
2713 + }
2714 return 0;
2715 }
2716
2717 @@ -366,6 +392,16 @@ static struct dwc_otg_hcd_function_ops h
2718 .get_b_hnp_enable = _get_b_hnp_enable,
2719 };
2720
2721 +static struct fiq_handler fh = {
2722 + .name = "usb_fiq",
2723 +};
2724 +struct fiq_stack_s {
2725 + int magic1;
2726 + uint8_t stack[2048];
2727 + int magic2;
2728 +} fiq_stack;
2729 +
2730 +extern mphi_regs_t c_mphi_regs;
2731 /**
2732 * Initializes the HCD. This function allocates memory for and initializes the
2733 * static parts of the usb_hcd and dwc_otg_hcd structures. It also registers the
2734 @@ -379,6 +415,7 @@ int hcd_init(dwc_bus_dev_t *_dev)
2735 dwc_otg_device_t *otg_dev = DWC_OTG_BUSDRVDATA(_dev);
2736 int retval = 0;
2737 u64 dmamask;
2738 + struct pt_regs regs;
2739
2740 DWC_DEBUGPL(DBG_HCD, "DWC OTG HCD INIT otg_dev=%p\n", otg_dev);
2741
2742 @@ -396,6 +433,20 @@ int hcd_init(dwc_bus_dev_t *_dev)
2743 pci_set_consistent_dma_mask(_dev, dmamask);
2744 #endif
2745
2746 + if (fiq_fix_enable)
2747 + {
2748 + // Set up fiq
2749 + claim_fiq(&fh);
2750 + set_fiq_handler(__FIQ_Branch, 4);
2751 + memset(&regs,0,sizeof(regs));
2752 + regs.ARM_r8 = (long)dwc_otg_hcd_handle_fiq;
2753 + regs.ARM_r9 = (long)0;
2754 + regs.ARM_sp = (long)fiq_stack.stack + sizeof(fiq_stack.stack) - 4;
2755 + set_fiq_regs(&regs);
2756 + fiq_stack.magic1 = 0xdeadbeef;
2757 + fiq_stack.magic2 = 0xaa995566;
2758 + }
2759 +
2760 /*
2761 * Allocate memory for the base HCD plus the DWC OTG HCD.
2762 * Initialize the base HCD.
2763 @@ -415,6 +466,30 @@ int hcd_init(dwc_bus_dev_t *_dev)
2764
2765 hcd->regs = otg_dev->os_dep.base;
2766
2767 + if (fiq_fix_enable)
2768 + {
2769 + volatile extern void *dwc_regs_base;
2770 +
2771 + //Set the mphi periph to the required registers
2772 + c_mphi_regs.base = otg_dev->os_dep.mphi_base;
2773 + c_mphi_regs.ctrl = otg_dev->os_dep.mphi_base + 0x4c;
2774 + c_mphi_regs.outdda = otg_dev->os_dep.mphi_base + 0x28;
2775 + c_mphi_regs.outddb = otg_dev->os_dep.mphi_base + 0x2c;
2776 + c_mphi_regs.intstat = otg_dev->os_dep.mphi_base + 0x50;
2777 +
2778 + dwc_regs_base = otg_dev->os_dep.base;
2779 +
2780 + //Enable mphi peripheral
2781 + writel((1<<31),c_mphi_regs.ctrl);
2782 +#ifdef DEBUG
2783 + if (readl(c_mphi_regs.ctrl) & 0x80000000)
2784 + DWC_DEBUGPL(DBG_USER, "MPHI periph has been enabled\n");
2785 + else
2786 + DWC_DEBUGPL(DBG_USER, "MPHI periph has NOT been enabled\n");
2787 +#endif
2788 + // Enable FIQ interrupt from USB peripheral
2789 + enable_fiq(INTERRUPT_VC_USB);
2790 + }
2791 /* Initialize the DWC OTG HCD. */
2792 dwc_otg_hcd = dwc_otg_hcd_alloc_hcd();
2793 if (!dwc_otg_hcd) {
2794 @@ -607,9 +682,7 @@ static int dwc_otg_urb_enqueue(struct us
2795 #if LINUX_VERSION_CODE >= KERNEL_VERSION(2,6,28)
2796 struct usb_host_endpoint *ep = urb->ep;
2797 #endif
2798 -#if USB_URB_EP_LINKING
2799 dwc_irqflags_t irqflags;
2800 -#endif
2801 void **ref_ep_hcpriv = &ep->hcpriv;
2802 dwc_otg_hcd_t *dwc_otg_hcd = hcd_to_dwc_otg_hcd(hcd);
2803 dwc_otg_hcd_urb_t *dwc_otg_urb;
2804 @@ -661,9 +734,8 @@ static int dwc_otg_urb_enqueue(struct us
2805 if(dwc_otg_urb == NULL)
2806 return -ENOMEM;
2807
2808 - urb->hcpriv = dwc_otg_urb;
2809 - if (!dwc_otg_urb && urb->number_of_packets)
2810 - return -ENOMEM;
2811 + if (!dwc_otg_urb && urb->number_of_packets)
2812 + return -ENOMEM;
2813
2814 dwc_otg_hcd_urb_set_pipeinfo(dwc_otg_urb, usb_pipedevice(urb->pipe),
2815 usb_pipeendpoint(urb->pipe), ep_type,
2816 @@ -703,37 +775,42 @@ static int dwc_otg_urb_enqueue(struct us
2817 iso_frame_desc[i].length);
2818 }
2819
2820 + DWC_SPINLOCK_IRQSAVE(dwc_otg_hcd->lock, &irqflags);
2821 + urb->hcpriv = dwc_otg_urb;
2822 #if USB_URB_EP_LINKING
2823 - DWC_SPINLOCK_IRQSAVE(dwc_otg_hcd->lock, &irqflags);
2824 retval = usb_hcd_link_urb_to_ep(hcd, urb);
2825 - DWC_SPINUNLOCK_IRQRESTORE(dwc_otg_hcd->lock, irqflags);
2826 if (0 == retval)
2827 #endif
2828 - {
2829 - retval = dwc_otg_hcd_urb_enqueue(dwc_otg_hcd, dwc_otg_urb,
2830 - /*(dwc_otg_qh_t **)*/
2831 - ref_ep_hcpriv,
2832 - mem_flags == GFP_ATOMIC ? 1 : 0);
2833 - if (0 == retval) {
2834 - if (alloc_bandwidth) {
2835 - allocate_bus_bandwidth(hcd,
2836 - dwc_otg_hcd_get_ep_bandwidth(
2837 - dwc_otg_hcd, *ref_ep_hcpriv),
2838 - urb);
2839 - }
2840 - } else {
2841 + {
2842 + retval = dwc_otg_hcd_urb_enqueue(dwc_otg_hcd, dwc_otg_urb,
2843 + /*(dwc_otg_qh_t **)*/
2844 + ref_ep_hcpriv, 1);
2845 + if (0 == retval) {
2846 + if (alloc_bandwidth) {
2847 + allocate_bus_bandwidth(hcd,
2848 + dwc_otg_hcd_get_ep_bandwidth(
2849 + dwc_otg_hcd, *ref_ep_hcpriv),
2850 + urb);
2851 + }
2852 + } else {
2853 + DWC_DEBUGPL(DBG_HCD, "DWC OTG dwc_otg_hcd_urb_enqueue failed rc %d\n", retval);
2854 #if USB_URB_EP_LINKING
2855 - dwc_irqflags_t irqflags;
2856 - DWC_DEBUGPL(DBG_HCD, "DWC OTG dwc_otg_hcd_urb_enqueue failed rc %d\n", retval);
2857 - DWC_SPINLOCK_IRQSAVE(dwc_otg_hcd->lock, &irqflags);
2858 - usb_hcd_unlink_urb_from_ep(hcd, urb);
2859 - DWC_SPINUNLOCK_IRQRESTORE(dwc_otg_hcd->lock, irqflags);
2860 -#endif
2861 - if (retval == -DWC_E_NO_DEVICE) {
2862 - retval = -ENODEV;
2863 - }
2864 - }
2865 - }
2866 + usb_hcd_unlink_urb_from_ep(hcd, urb);
2867 +#endif
2868 + DWC_FREE(dwc_otg_urb);
2869 + urb->hcpriv = NULL;
2870 + if (retval == -DWC_E_NO_DEVICE)
2871 + retval = -ENODEV;
2872 + }
2873 + }
2874 +#if USB_URB_EP_LINKING
2875 + else
2876 + {
2877 + DWC_FREE(dwc_otg_urb);
2878 + urb->hcpriv = NULL;
2879 + }
2880 +#endif
2881 + DWC_SPINUNLOCK_IRQRESTORE(dwc_otg_hcd->lock, irqflags);
2882 return retval;
2883 }
2884
2885 @@ -777,6 +854,8 @@ static int dwc_otg_urb_dequeue(struct us
2886 usb_hcd_unlink_urb_from_ep(hcd, urb);
2887 #endif
2888 DWC_SPINUNLOCK_IRQRESTORE(dwc_otg_hcd->lock, flags);
2889 +
2890 +
2891 #if LINUX_VERSION_CODE < KERNEL_VERSION(2,6,28)
2892 usb_hcd_giveback_urb(hcd, urb);
2893 #else
2894 --- a/drivers/usb/host/dwc_otg/dwc_otg_hcd_queue.c
2895 +++ b/drivers/usb/host/dwc_otg/dwc_otg_hcd_queue.c
2896 @@ -41,6 +41,7 @@
2897
2898 #include "dwc_otg_hcd.h"
2899 #include "dwc_otg_regs.h"
2900 +#include "dwc_otg_mphi_fix.h"
2901
2902 extern bool microframe_schedule;
2903
2904 @@ -182,6 +183,7 @@ void qh_init(dwc_otg_hcd_t * hcd, dwc_ot
2905 if (microframe_schedule)
2906 qh->speed = dev_speed;
2907
2908 + qh->nak_frame = 0xffff;
2909
2910 if (((dev_speed == USB_SPEED_LOW) ||
2911 (dev_speed == USB_SPEED_FULL)) &&
2912 @@ -191,6 +193,7 @@ void qh_init(dwc_otg_hcd_t * hcd, dwc_ot
2913 dwc_otg_hcd_get_ep_num(&urb->pipe_info), hub_addr,
2914 hub_port);
2915 qh->do_split = 1;
2916 + qh->skip_count = 0;
2917 }
2918
2919 if (qh->ep_type == UE_INTERRUPT || qh->ep_type == UE_ISOCHRONOUS) {
2920 @@ -573,6 +576,9 @@ static int check_max_xfer_size(dwc_otg_h
2921 return status;
2922 }
2923
2924 +
2925 +extern int g_next_sched_frame, g_np_count, g_np_sent;
2926 +
2927 /**
2928 * Schedules an interrupt or isochronous transfer in the periodic schedule.
2929 *
2930 @@ -631,8 +637,13 @@ static int schedule_periodic(dwc_otg_hcd
2931 DWC_LIST_INSERT_TAIL(&hcd->periodic_sched_ready, &qh->qh_list_entry);
2932 }
2933 else {
2934 - /* Always start in the inactive schedule. */
2935 - DWC_LIST_INSERT_TAIL(&hcd->periodic_sched_inactive, &qh->qh_list_entry);
2936 + if(DWC_LIST_EMPTY(&hcd->periodic_sched_inactive) || dwc_frame_num_le(qh->sched_frame, g_next_sched_frame))
2937 + {
2938 + g_next_sched_frame = qh->sched_frame;
2939 +
2940 + }
2941 + /* Always start in the inactive schedule. */
2942 + DWC_LIST_INSERT_TAIL(&hcd->periodic_sched_inactive, &qh->qh_list_entry);
2943 }
2944
2945 if (!microframe_schedule) {
2946 @@ -646,6 +657,7 @@ static int schedule_periodic(dwc_otg_hcd
2947 return status;
2948 }
2949
2950 +
2951 /**
2952 * This function adds a QH to either the non periodic or periodic schedule if
2953 * it is not already in the schedule. If the QH is already in the schedule, no
2954 @@ -668,6 +680,7 @@ int dwc_otg_hcd_qh_add(dwc_otg_hcd_t * h
2955 /* Always start in the inactive schedule. */
2956 DWC_LIST_INSERT_TAIL(&hcd->non_periodic_sched_inactive,
2957 &qh->qh_list_entry);
2958 + g_np_count++;
2959 } else {
2960 status = schedule_periodic(hcd, qh);
2961 if ( !hcd->periodic_qh_count ) {
2962 @@ -727,6 +740,9 @@ void dwc_otg_hcd_qh_remove(dwc_otg_hcd_t
2963 hcd->non_periodic_qh_ptr->next;
2964 }
2965 DWC_LIST_REMOVE_INIT(&qh->qh_list_entry);
2966 +
2967 + // If we've removed the last non-periodic entry then there are none left!
2968 + g_np_count = g_np_sent;
2969 } else {
2970 deschedule_periodic(hcd, qh);
2971 hcd->periodic_qh_count--;
2972 @@ -755,6 +771,24 @@ void dwc_otg_hcd_qh_deactivate(dwc_otg_h
2973 int sched_next_periodic_split)
2974 {
2975 if (dwc_qh_is_non_per(qh)) {
2976 +
2977 + dwc_otg_qh_t *qh_tmp;
2978 + dwc_list_link_t *qh_list;
2979 + DWC_LIST_FOREACH(qh_list, &hcd->non_periodic_sched_inactive)
2980 + {
2981 + qh_tmp = DWC_LIST_ENTRY(qh_list, struct dwc_otg_qh, qh_list_entry);
2982 + if(qh_tmp == qh)
2983 + {
2984 + /*
2985 +			 * FIQ is being disabled because this one never gets an np_count increment
2986 + * This is still not absolutely correct, but it should fix itself with
2987 + * just an unnecessary extra interrupt
2988 + */
2989 + g_np_sent = g_np_count;
2990 + }
2991 + }
2992 +
2993 +
2994 dwc_otg_hcd_qh_remove(hcd, qh);
2995 if (!DWC_CIRCLEQ_EMPTY(&qh->qtd_list)) {
2996 /* Add back to inactive non-periodic schedule. */
2997 @@ -768,6 +802,7 @@ void dwc_otg_hcd_qh_deactivate(dwc_otg_h
2998 if (sched_next_periodic_split) {
2999
3000 qh->sched_frame = frame_number;
3001 +
3002 if (dwc_frame_num_le(frame_number,
3003 dwc_frame_num_inc
3004 (qh->start_split_frame,
3005 @@ -816,6 +851,11 @@ void dwc_otg_hcd_qh_deactivate(dwc_otg_h
3006 DWC_LIST_MOVE_HEAD(&hcd->periodic_sched_ready,
3007 &qh->qh_list_entry);
3008 } else {
3009 + if(!dwc_frame_num_le(g_next_sched_frame, qh->sched_frame))
3010 + {
3011 + g_next_sched_frame = qh->sched_frame;
3012 + }
3013 +
3014 DWC_LIST_MOVE_HEAD
3015 (&hcd->periodic_sched_inactive,
3016 &qh->qh_list_entry);
3017 @@ -880,6 +920,7 @@ void dwc_otg_hcd_qtd_init(dwc_otg_qtd_t
3018 * QH to place the QTD into. If it does not find a QH, then it will create a
3019 * new QH. If the QH to which the QTD is added is not currently scheduled, it
3020 * is placed into the proper schedule based on its EP type.
3021 + * HCD lock must be held and interrupts must be disabled on entry
3022 *
3023 * @param[in] qtd The QTD to add
3024 * @param[in] hcd The DWC HCD structure
3025 @@ -892,8 +933,6 @@ int dwc_otg_hcd_qtd_add(dwc_otg_qtd_t *
3026 dwc_otg_hcd_t * hcd, dwc_otg_qh_t ** qh, int atomic_alloc)
3027 {
3028 int retval = 0;
3029 - dwc_irqflags_t flags;
3030 -
3031 dwc_otg_hcd_urb_t *urb = qtd->urb;
3032
3033 /*
3034 @@ -903,18 +942,16 @@ int dwc_otg_hcd_qtd_add(dwc_otg_qtd_t *
3035 if (*qh == NULL) {
3036 *qh = dwc_otg_hcd_qh_create(hcd, urb, atomic_alloc);
3037 if (*qh == NULL) {
3038 - retval = -1;
3039 + retval = -DWC_E_NO_MEMORY;
3040 goto done;
3041 }
3042 }
3043 - DWC_SPINLOCK_IRQSAVE(hcd->lock, &flags);
3044 retval = dwc_otg_hcd_qh_add(hcd, *qh);
3045 if (retval == 0) {
3046 DWC_CIRCLEQ_INSERT_TAIL(&((*qh)->qtd_list), qtd,
3047 qtd_list_entry);
3048 + qtd->qh = *qh;
3049 }
3050 - DWC_SPINUNLOCK_IRQRESTORE(hcd->lock, flags);
3051 -
3052 done:
3053
3054 return retval;
3055 --- /dev/null
3056 +++ b/drivers/usb/host/dwc_otg/dwc_otg_mphi_fix.c
3057 @@ -0,0 +1,113 @@
3058 +#include "dwc_otg_regs.h"
3059 +#include "dwc_otg_dbg.h"
3060 +
3061 +void dwc_debug_print_core_int_reg(gintsts_data_t gintsts, const char* function_name)
3062 +{
3063 + DWC_DEBUGPL(DBG_USER, "*** Debugging from within the %s function: ***\n"
3064 + "curmode: %1i Modemismatch: %1i otgintr: %1i sofintr: %1i\n"
3065 + "rxstsqlvl: %1i nptxfempty : %1i ginnakeff: %1i goutnakeff: %1i\n"
3066 + "ulpickint: %1i i2cintr: %1i erlysuspend:%1i usbsuspend: %1i\n"
3067 + "usbreset: %1i enumdone: %1i isooutdrop: %1i eopframe: %1i\n"
3068 + "restoredone: %1i epmismatch: %1i inepint: %1i outepintr: %1i\n"
3069 + "incomplisoin:%1i incomplisoout:%1i fetsusp: %1i resetdet: %1i\n"
3070 + "portintr: %1i hcintr: %1i ptxfempty: %1i lpmtranrcvd:%1i\n"
3071 + "conidstschng:%1i disconnect: %1i sessreqintr:%1i wkupintr: %1i\n",
3072 + function_name,
3073 + gintsts.b.curmode,
3074 + gintsts.b.modemismatch,
3075 + gintsts.b.otgintr,
3076 + gintsts.b.sofintr,
3077 + gintsts.b.rxstsqlvl,
3078 + gintsts.b.nptxfempty,
3079 + gintsts.b.ginnakeff,
3080 + gintsts.b.goutnakeff,
3081 + gintsts.b.ulpickint,
3082 + gintsts.b.i2cintr,
3083 + gintsts.b.erlysuspend,
3084 + gintsts.b.usbsuspend,
3085 + gintsts.b.usbreset,
3086 + gintsts.b.enumdone,
3087 + gintsts.b.isooutdrop,
3088 + gintsts.b.eopframe,
3089 + gintsts.b.restoredone,
3090 + gintsts.b.epmismatch,
3091 + gintsts.b.inepint,
3092 + gintsts.b.outepintr,
3093 + gintsts.b.incomplisoin,
3094 + gintsts.b.incomplisoout,
3095 + gintsts.b.fetsusp,
3096 + gintsts.b.resetdet,
3097 + gintsts.b.portintr,
3098 + gintsts.b.hcintr,
3099 + gintsts.b.ptxfempty,
3100 + gintsts.b.lpmtranrcvd,
3101 + gintsts.b.conidstschng,
3102 + gintsts.b.disconnect,
3103 + gintsts.b.sessreqintr,
3104 + gintsts.b.wkupintr);
3105 + return;
3106 +}
3107 +
3108 +void dwc_debug_core_int_mask(gintmsk_data_t gintmsk, const char* function_name)
3109 +{
3110 + DWC_DEBUGPL(DBG_USER, "Interrupt Mask status (called from %s) :\n"
3111 + "modemismatch: %1i otgintr: %1i sofintr: %1i rxstsqlvl: %1i\n"
3112 + "nptxfempty: %1i ginnakeff: %1i goutnakeff: %1i ulpickint: %1i\n"
3113 + "i2cintr: %1i erlysuspend:%1i usbsuspend: %1i usbreset: %1i\n"
3114 + "enumdone: %1i isooutdrop: %1i eopframe: %1i restoredone: %1i\n"
3115 + "epmismatch: %1i inepintr: %1i outepintr: %1i incomplisoin:%1i\n"
3116 + "incomplisoout:%1i fetsusp: %1i resetdet: %1i portintr: %1i\n"
3117 + "hcintr: %1i ptxfempty: %1i lpmtranrcvd:%1i conidstschng:%1i\n"
3118 + "disconnect: %1i sessreqintr:%1i wkupintr: %1i\n",
3119 + function_name,
3120 + gintmsk.b.modemismatch,
3121 + gintmsk.b.otgintr,
3122 + gintmsk.b.sofintr,
3123 + gintmsk.b.rxstsqlvl,
3124 + gintmsk.b.nptxfempty,
3125 + gintmsk.b.ginnakeff,
3126 + gintmsk.b.goutnakeff,
3127 + gintmsk.b.ulpickint,
3128 + gintmsk.b.i2cintr,
3129 + gintmsk.b.erlysuspend,
3130 + gintmsk.b.usbsuspend,
3131 + gintmsk.b.usbreset,
3132 + gintmsk.b.enumdone,
3133 + gintmsk.b.isooutdrop,
3134 + gintmsk.b.eopframe,
3135 + gintmsk.b.restoredone,
3136 + gintmsk.b.epmismatch,
3137 + gintmsk.b.inepintr,
3138 + gintmsk.b.outepintr,
3139 + gintmsk.b.incomplisoin,
3140 + gintmsk.b.incomplisoout,
3141 + gintmsk.b.fetsusp,
3142 + gintmsk.b.resetdet,
3143 + gintmsk.b.portintr,
3144 + gintmsk.b.hcintr,
3145 + gintmsk.b.ptxfempty,
3146 + gintmsk.b.lpmtranrcvd,
3147 + gintmsk.b.conidstschng,
3148 + gintmsk.b.disconnect,
3149 + gintmsk.b.sessreqintr,
3150 + gintmsk.b.wkupintr);
3151 + return;
3152 +}
3153 +
3154 +void dwc_debug_otg_int(gotgint_data_t gotgint, const char* function_name)
3155 +{
3156 + DWC_DEBUGPL(DBG_USER, "otg int register (from %s function):\n"
3157 + "sesenddet:%1i sesreqsucstschung:%2i hstnegsucstschng:%1i\n"
3158 + "hstnegdet:%1i adevtoutchng: %2i debdone: %1i\n"
3159 + "mvic: %1i\n",
3160 + function_name,
3161 + gotgint.b.sesenddet,
3162 + gotgint.b.sesreqsucstschng,
3163 + gotgint.b.hstnegsucstschng,
3164 + gotgint.b.hstnegdet,
3165 + gotgint.b.adevtoutchng,
3166 + gotgint.b.debdone,
3167 + gotgint.b.mvic);
3168 +
3169 + return;
3170 +}
3171 --- /dev/null
3172 +++ b/drivers/usb/host/dwc_otg/dwc_otg_mphi_fix.h
3173 @@ -0,0 +1,48 @@
3174 +#ifndef __DWC_OTG_MPHI_FIX_H__
3175 +#define __DWC_OTG_MPHI_FIX_H__
3176 +#define FIQ_WRITE(_addr_,_data_) (*(volatile uint32_t *) (_addr_) = (_data_))
3177 +#define FIQ_READ(_addr_) (*(volatile uint32_t *) (_addr_))
3178 +
3179 +typedef struct {
3180 + volatile void* base;
3181 + volatile void* ctrl;
3182 + volatile void* outdda;
3183 + volatile void* outddb;
3184 + volatile void* intstat;
3185 +} mphi_regs_t;
3186 +
3187 +void dwc_debug_print_core_int_reg(gintsts_data_t gintsts, const char* function_name);
3188 +void dwc_debug_core_int_mask(gintsts_data_t gintmsk, const char* function_name);
3189 +void dwc_debug_otg_int(gotgint_data_t gotgint, const char* function_name);
3190 +
3191 +extern gintsts_data_t gintsts_saved;
3192 +
3193 +#ifdef DEBUG
3194 +#define DWC_DBG_PRINT_CORE_INT(_arg_) dwc_debug_print_core_int_reg(_arg_,__func__)
3195 +#define DWC_DBG_PRINT_CORE_INT_MASK(_arg_) dwc_debug_core_int_mask(_arg_,__func__)
3196 +#define DWC_DBG_PRINT_OTG_INT(_arg_) dwc_debug_otg_int(_arg_,__func__)
3197 +
3198 +#else
3199 +#define DWC_DBG_PRINT_CORE_INT(_arg_)
3200 +#define DWC_DBG_PRINT_CORE_INT_MASK(_arg_)
3201 +#define DWC_DBG_PRINT_OTG_INT(_arg_)
3202 +
3203 +#endif
3204 +
3205 +typedef enum {
3206 + FIQDBG_SCHED = (1 << 0),
3207 + FIQDBG_INT = (1 << 1),
3208 + FIQDBG_ERR = (1 << 2),
3209 + FIQDBG_PORTHUB = (1 << 3),
3210 +} FIQDBG_T;
3211 +
3212 +void _fiq_print(FIQDBG_T dbg_lvl, char *fmt, ...);
3213 +#ifdef FIQ_DEBUG
3214 +#define fiq_print _fiq_print
3215 +#else
3216 +#define fiq_print(x, y, ...)
3217 +#endif
3218 +
3219 +extern bool fiq_fix_enable, nak_holdoff_enable, fiq_split_enable;
3220 +
3221 +#endif
3222 --- a/drivers/usb/host/dwc_otg/dwc_otg_os_dep.h
3223 +++ b/drivers/usb/host/dwc_otg/dwc_otg_os_dep.h
3224 @@ -97,6 +97,9 @@ typedef struct os_dependent {
3225 /** Register offset for Diagnostic API */
3226 uint32_t reg_offset;
3227
3228 + /** Base address for MPHI peripheral */
3229 + void *mphi_base;
3230 +
3231 #ifdef LM_INTERFACE
3232 struct lm_device *lmdev;
3233 #elif defined(PCI_INTERFACE)
3234 --- a/drivers/usb/host/dwc_otg/dwc_otg_pcd_intr.c
3235 +++ b/drivers/usb/host/dwc_otg/dwc_otg_pcd_intr.c
3236 @@ -4276,7 +4276,7 @@ do { \
3237 && (pcd->ep0state == EP0_OUT_DATA_PHASE))
3238 status.d32 = core_if->dev_if->out_desc_addr->status.d32;
3239 if (pcd->ep0state == EP0_OUT_STATUS_PHASE)
3240 - status.d32 = status.d32 = core_if->dev_if->
3241 + status.d32 = core_if->dev_if->
3242 out_desc_addr->status.d32;
3243
3244 if (status.b.sr) {