1 From 24d3397930cc8faafd020bae31a2c9f1e4682f02 Mon Sep 17 00:00:00 2001
2 From: popcornmix <popcornmix@gmail.com>
3 Date: Tue, 2 Jul 2013 23:42:01 +0100
4 Subject: [PATCH] bcm2708 vchiq driver
6 Content-Type: text/plain; charset=UTF-8
7 Content-Transfer-Encoding: 8bit
9 Signed-off-by: popcornmix <popcornmix@gmail.com>
11 vchiq: create_pagelist copes with vmalloc memory
13 Signed-off-by: Daniel Stone <daniels@collabora.com>
15 vchiq: fix the shim message release
17 Signed-off-by: Daniel Stone <daniels@collabora.com>
19 vchiq: export additional symbols
21 Signed-off-by: Daniel Stone <daniels@collabora.com>
23 VCHIQ: Make service closure fully synchronous (drv)
25 This is one half of a two-part patch, the other half of which is to
26 the vchiq_lib user library. With these patches, calls to
27 vchiq_close_service and vchiq_remove_service won't return until any
28 associated callbacks have been delivered to the callback thread.
30 VCHIQ: Add per-service tracing
32 The new service option VCHIQ_SERVICE_OPTION_TRACE is a boolean that
33 toggles tracing for the specified service.
35 This commit also introduces vchi_service_set_option and the associated
36 option VCHI_SERVICE_OPTION_TRACE.
38 vchiq: Make the synchronous-CLOSE logic more tolerant
40 vchiq: Move logging control into debugfs
42 vchiq: Take care of a corner case tickled by VCSM
44 Closing a connection that isn't fully open requires care, since one
45 side does not know the other side's port number. Code was present to
46 handle the case where a CLOSE is sent immediately after an OPEN, i.e.
47 before the OPENACK has been received, but this was incorrectly being
48 used when an OPEN from a client using port 0 was rejected.
50 (In the observed failure, the host was attempting to use the VCSM
51 service, which isn't present in the 'cutdown' firmware. The failure
52 was intermittent because sometimes the keepalive service would
53 connect first, claiming port 0.)
55 This case can be distinguished because the client's remoteport will
56 still be VCHIQ_PORT_FREE, and the srvstate will be OPENING. Either
57 condition is sufficient to differentiate it from the special case
58 described above.
60 vchiq: Avoid high load when blocked and unkillable
62 vchiq: Include SIGSTOP and SIGCONT in the list of signals not masked by vchiq, to allow gdb to work
64 vchiq_arm: Complete support for SYNCHRONOUS mode
66 vchiq: Remove inline from suspend/resume
68 vchiq: Allocation does not need to be atomic
70 vchiq: Fix wrong condition check
72 The log level is already checked inside the log call, so remove the redundant check at the call site.
74 Signed-off-by: Pranith Kumar <bobby.prani@gmail.com>
76 BCM270x: Add vchiq device to platform file and Device Tree
78 Prepare to turn the vchiq module into a driver.
80 Signed-off-by: Noralf Trønnes <noralf@tronnes.org>
82 bcm2708: vchiq: Add Device Tree support
84 Turn vchiq into a driver and stop hardcoding resources.
85 Use devm_* functions in probe path to simplify cleanup.
86 A global variable is used to hold the register address. This is done
87 to keep this patch as small as possible.
88 Also make available on ARCH_BCM2835.
89 Based on work by Lubomir Rintel.
91 Signed-off-by: Noralf Trønnes <noralf@tronnes.org>
93 vchiq: Change logging level for inbound data
95 vchiq_arm: Two caching fixes
97 1) Make fragment size vary with cache line size
98 Without this patch, non-cache-line-aligned transfers may corrupt
99 (or be corrupted by) adjacent data structures.
101 Both ARM and VC need to be updated to enable this feature. This is
102 ensured by having the loader apply a new DT parameter -
103 cache-line-size. The existence of this parameter guarantees that the
104 kernel is capable, and the parameter will only be modified from the
105 safe default if the loader is capable.
107 2) Flush/invalidate vmalloc'd memory, and invalidate after reads
109 vchiq: fix NULL pointer dereference when closing driver
111 The following code run as root will cause a null pointer dereference oops:
113 int fd = open("/dev/vc-cma", O_RDONLY);
114 if (fd < 0)
115 	err(1, "open failed");
117 close(fd);
118 [ 1704.877721] Unable to handle kernel NULL pointer dereference at virtual address 00000000
119 [ 1704.877725] pgd = b899c000
120 [ 1704.877736] [00000000] *pgd=37fab831, *pte=00000000, *ppte=00000000
121 [ 1704.877748] Internal error: Oops: 817 [#1] PREEMPT SMP ARM
122 [ 1704.877765] Modules linked in: evdev i2c_bcm2708 uio_pdrv_genirq uio
123 [ 1704.877774] CPU: 2 PID: 3656 Comm: stress-ng-fstat Not tainted 3.19.1-12-generic-bcm2709 #12-Ubuntu
124 [ 1704.877777] Hardware name: BCM2709
125 [ 1704.877783] task: b8ab9b00 ti: b7e68000 task.ti: b7e68000
126 [ 1704.877798] PC is at __down_interruptible+0x50/0xec
127 [ 1704.877806] LR is at down_interruptible+0x5c/0x68
128 [ 1704.877813] pc : [<80630ee8>] lr : [<800704b0>] psr: 60080093
129 sp : b7e69e50 ip : b7e69e88 fp : b7e69e84
130 [ 1704.877817] r10: b88123c8 r9 : 00000010 r8 : 00000001
131 [ 1704.877822] r7 : b8ab9b00 r6 : 7fffffff r5 : 80a1cc34 r4 : 80a1cc34
132 [ 1704.877826] r3 : b7e69e50 r2 : 00000000 r1 : 00000000 r0 : 80a1cc34
133 [ 1704.877833] Flags: nZCv IRQs off FIQs on Mode SVC_32 ISA ARM Segment user
134 [ 1704.877838] Control: 10c5387d Table: 3899c06a DAC: 00000015
135 [ 1704.877843] Process do-oops (pid: 3656, stack limit = 0xb7e68238)
136 [ 1704.877848] Stack: (0xb7e69e50 to 0xb7e6a000)
137 [ 1704.877856] 9e40: 80a1cc3c 00000000 00000010 b88123c8
138 [ 1704.877865] 9e60: b7e69e84 80a1cc34 fff9fee9 ffffffff b7e68000 00000009 b7e69ea4 b7e69e88
139 [ 1704.877874] 9e80: 800704b0 80630ea4 fff9fee9 60080013 80a1cc28 fff9fee9 b7e69edc b7e69ea8
140 [ 1704.877884] 9ea0: 8040f558 80070460 fff9fee9 ffffffff 00000000 00000000 00000009 80a1cb7c
141 [ 1704.877893] 9ec0: 00000000 80a1cb7c 00000000 00000010 b7e69ef4 b7e69ee0 803e1ba4 8040f514
142 [ 1704.877902] 9ee0: 00000e48 80a1cb7c b7e69f14 b7e69ef8 803e1c9c 803e1b74 b88123c0 b92acb18
143 [ 1704.877911] 9f00: b8812790 b8d815d8 b7e69f24 b7e69f18 803e2250 803e1bc8 b7e69f5c b7e69f28
144 [ 1704.877921] 9f20: 80167bac 803e222c 00000000 00000000 b7e69f54 b8ab9ffc 00000000 8098c794
145 [ 1704.877930] 9f40: b8ab9b00 8000efc4 b7e68000 00000000 b7e69f6c b7e69f60 80167d6c 80167b28
146 [ 1704.877939] 9f60: b7e69f8c b7e69f70 80047d38 80167d60 b7e68000 b7e68010 8000efc4 b7e69fb0
147 [ 1704.877949] 9f80: b7e69fac b7e69f90 80012820 80047c84 01155490 011549a8 00000001 00000006
148 [ 1704.877957] 9fa0: 00000000 b7e69fb0 8000ee5c 80012790 00000000 353d8c0f 7efc4308 00000000
149 [ 1704.877966] 9fc0: 01155490 011549a8 00000001 00000006 00000000 00000000 76cf3ba0 00000003
150 [ 1704.877975] 9fe0: 00000000 7efc42e4 0002272f 76e2ed66 60080030 00000003 00000000 00000000
151 [ 1704.877998] [<80630ee8>] (__down_interruptible) from [<800704b0>] (down_interruptible+0x5c/0x68)
152 [ 1704.878015] [<800704b0>] (down_interruptible) from [<8040f558>] (vchiu_queue_push+0x50/0xd8)
153 [ 1704.878032] [<8040f558>] (vchiu_queue_push) from [<803e1ba4>] (send_worker_msg+0x3c/0x54)
154 [ 1704.878045] [<803e1ba4>] (send_worker_msg) from [<803e1c9c>] (vc_cma_set_reserve+0xe0/0x1c4)
155 [ 1704.878057] [<803e1c9c>] (vc_cma_set_reserve) from [<803e2250>] (vc_cma_release+0x30/0x38)
156 [ 1704.878069] [<803e2250>] (vc_cma_release) from [<80167bac>] (__fput+0x90/0x1e0)
157 [ 1704.878082] [<80167bac>] (__fput) from [<80167d6c>] (____fput+0x18/0x1c)
158 [ 1704.878094] [<80167d6c>] (____fput) from [<80047d38>] (task_work_run+0xc0/0xf8)
159 [ 1704.878109] [<80047d38>] (task_work_run) from [<80012820>] (do_work_pending+0x9c/0xc4)
160 [ 1704.878123] [<80012820>] (do_work_pending) from [<8000ee5c>] (work_pending+0xc/0x20)
161 [ 1704.878133] Code: e50b1034 e3a01000 e50b2030 e580300c (e5823000)
163 ...the fix is to ensure that we have actually initialized the queue before we attempt
164 to push any items onto it. This occurs if we do an open() followed by a close() without
165 any activity in between.
167 Signed-off-by: Colin Ian King <colin.king@canonical.com>
169 vchiq_arm: Sort out the vmalloc case
171 See: https://github.com/raspberrypi/linux/issues/1055
173 vchiq: hack: Include deprecated dma include file
175 arch/arm/mach-bcm2708/include/mach/platform.h | 2 +
176 arch/arm/mach-bcm2709/include/mach/platform.h | 2 +
177 drivers/misc/Kconfig | 1 +
178 drivers/misc/Makefile | 1 +
179 drivers/misc/vc04_services/Kconfig | 9 +
180 drivers/misc/vc04_services/Makefile | 14 +
181 .../interface/vchi/connections/connection.h | 328 ++
182 .../interface/vchi/message_drivers/message.h | 204 +
183 drivers/misc/vc04_services/interface/vchi/vchi.h | 378 ++
184 .../misc/vc04_services/interface/vchi/vchi_cfg.h | 224 ++
185 .../interface/vchi/vchi_cfg_internal.h | 71 +
186 .../vc04_services/interface/vchi/vchi_common.h | 175 +
187 .../misc/vc04_services/interface/vchi/vchi_mh.h | 42 +
188 .../misc/vc04_services/interface/vchiq_arm/vchiq.h | 40 +
189 .../vc04_services/interface/vchiq_arm/vchiq_2835.h | 42 +
190 .../interface/vchiq_arm/vchiq_2835_arm.c | 586 +++
191 .../vc04_services/interface/vchiq_arm/vchiq_arm.c | 2903 +++++++++++++++
192 .../vc04_services/interface/vchiq_arm/vchiq_arm.h | 220 ++
193 .../interface/vchiq_arm/vchiq_build_info.h | 37 +
194 .../vc04_services/interface/vchiq_arm/vchiq_cfg.h | 69 +
195 .../interface/vchiq_arm/vchiq_connected.c | 120 +
196 .../interface/vchiq_arm/vchiq_connected.h | 50 +
197 .../vc04_services/interface/vchiq_arm/vchiq_core.c | 3934 ++++++++++++++++++++
198 .../vc04_services/interface/vchiq_arm/vchiq_core.h | 712 ++++
199 .../interface/vchiq_arm/vchiq_debugfs.c | 383 ++
200 .../interface/vchiq_arm/vchiq_debugfs.h | 52 +
201 .../interface/vchiq_arm/vchiq_genversion | 87 +
202 .../vc04_services/interface/vchiq_arm/vchiq_if.h | 189 +
203 .../interface/vchiq_arm/vchiq_ioctl.h | 131 +
204 .../interface/vchiq_arm/vchiq_kern_lib.c | 458 +++
205 .../interface/vchiq_arm/vchiq_killable.h | 69 +
206 .../interface/vchiq_arm/vchiq_memdrv.h | 71 +
207 .../interface/vchiq_arm/vchiq_pagelist.h | 58 +
208 .../vc04_services/interface/vchiq_arm/vchiq_shim.c | 860 +++++
209 .../vc04_services/interface/vchiq_arm/vchiq_util.c | 156 +
210 .../vc04_services/interface/vchiq_arm/vchiq_util.h | 82 +
211 .../interface/vchiq_arm/vchiq_version.c | 59 +
212 37 files changed, 12819 insertions(+)
213 create mode 100644 drivers/misc/vc04_services/Kconfig
214 create mode 100644 drivers/misc/vc04_services/Makefile
215 create mode 100644 drivers/misc/vc04_services/interface/vchi/connections/connection.h
216 create mode 100644 drivers/misc/vc04_services/interface/vchi/message_drivers/message.h
217 create mode 100644 drivers/misc/vc04_services/interface/vchi/vchi.h
218 create mode 100644 drivers/misc/vc04_services/interface/vchi/vchi_cfg.h
219 create mode 100644 drivers/misc/vc04_services/interface/vchi/vchi_cfg_internal.h
220 create mode 100644 drivers/misc/vc04_services/interface/vchi/vchi_common.h
221 create mode 100644 drivers/misc/vc04_services/interface/vchi/vchi_mh.h
222 create mode 100644 drivers/misc/vc04_services/interface/vchiq_arm/vchiq.h
223 create mode 100644 drivers/misc/vc04_services/interface/vchiq_arm/vchiq_2835.h
224 create mode 100644 drivers/misc/vc04_services/interface/vchiq_arm/vchiq_2835_arm.c
225 create mode 100644 drivers/misc/vc04_services/interface/vchiq_arm/vchiq_arm.c
226 create mode 100644 drivers/misc/vc04_services/interface/vchiq_arm/vchiq_arm.h
227 create mode 100644 drivers/misc/vc04_services/interface/vchiq_arm/vchiq_build_info.h
228 create mode 100644 drivers/misc/vc04_services/interface/vchiq_arm/vchiq_cfg.h
229 create mode 100644 drivers/misc/vc04_services/interface/vchiq_arm/vchiq_connected.c
230 create mode 100644 drivers/misc/vc04_services/interface/vchiq_arm/vchiq_connected.h
231 create mode 100644 drivers/misc/vc04_services/interface/vchiq_arm/vchiq_core.c
232 create mode 100644 drivers/misc/vc04_services/interface/vchiq_arm/vchiq_core.h
233 create mode 100644 drivers/misc/vc04_services/interface/vchiq_arm/vchiq_debugfs.c
234 create mode 100644 drivers/misc/vc04_services/interface/vchiq_arm/vchiq_debugfs.h
235 create mode 100644 drivers/misc/vc04_services/interface/vchiq_arm/vchiq_genversion
236 create mode 100644 drivers/misc/vc04_services/interface/vchiq_arm/vchiq_if.h
237 create mode 100644 drivers/misc/vc04_services/interface/vchiq_arm/vchiq_ioctl.h
238 create mode 100644 drivers/misc/vc04_services/interface/vchiq_arm/vchiq_kern_lib.c
239 create mode 100644 drivers/misc/vc04_services/interface/vchiq_arm/vchiq_killable.h
240 create mode 100644 drivers/misc/vc04_services/interface/vchiq_arm/vchiq_memdrv.h
241 create mode 100644 drivers/misc/vc04_services/interface/vchiq_arm/vchiq_pagelist.h
242 create mode 100644 drivers/misc/vc04_services/interface/vchiq_arm/vchiq_shim.c
243 create mode 100644 drivers/misc/vc04_services/interface/vchiq_arm/vchiq_util.c
244 create mode 100644 drivers/misc/vc04_services/interface/vchiq_arm/vchiq_util.h
245 create mode 100644 drivers/misc/vc04_services/interface/vchiq_arm/vchiq_version.c
247 --- a/arch/arm/mach-bcm2708/include/mach/platform.h
248 +++ b/arch/arm/mach-bcm2708/include/mach/platform.h
250 #define ARMCTRL_IC_BASE (ARM_BASE + 0x200) /* ARM interrupt controller */
251 #define ARMCTRL_TIMER0_1_BASE (ARM_BASE + 0x400) /* Timer 0 and 1 */
252 #define ARMCTRL_0_SBM_BASE (ARM_BASE + 0x800) /* User 0 (ARM)'s Semaphores Doorbells and Mailboxes */
253 +#define ARMCTRL_0_BELL_BASE (ARMCTRL_0_SBM_BASE + 0x40) /* User 0 (ARM)'s Doorbell */
254 +#define ARMCTRL_0_MAIL0_BASE (ARMCTRL_0_SBM_BASE + 0x80) /* User 0 (ARM)'s Mailbox 0 */
258 --- a/arch/arm/mach-bcm2709/include/mach/platform.h
259 +++ b/arch/arm/mach-bcm2709/include/mach/platform.h
261 #define ARMCTRL_IC_BASE (ARM_BASE + 0x200) /* ARM interrupt controller */
262 #define ARMCTRL_TIMER0_1_BASE (ARM_BASE + 0x400) /* Timer 0 and 1 */
263 #define ARMCTRL_0_SBM_BASE (ARM_BASE + 0x800) /* User 0 (ARM)'s Semaphores Doorbells and Mailboxes */
264 +#define ARMCTRL_0_BELL_BASE (ARMCTRL_0_SBM_BASE + 0x40) /* User 0 (ARM)'s Doorbell */
265 +#define ARMCTRL_0_MAIL0_BASE (ARMCTRL_0_SBM_BASE + 0x80) /* User 0 (ARM)'s Mailbox 0 */
269 --- a/drivers/misc/Kconfig
270 +++ b/drivers/misc/Kconfig
271 @@ -545,6 +545,7 @@ source "drivers/misc/lis3lv02d/Kconfig"
272 source "drivers/misc/altera-stapl/Kconfig"
273 source "drivers/misc/mei/Kconfig"
274 source "drivers/misc/vmw_vmci/Kconfig"
275 +source "drivers/misc/vc04_services/Kconfig"
276 source "drivers/misc/mic/Kconfig"
277 source "drivers/misc/genwqe/Kconfig"
278 source "drivers/misc/echo/Kconfig"
279 --- a/drivers/misc/Makefile
280 +++ b/drivers/misc/Makefile
281 @@ -52,6 +52,7 @@ obj-$(CONFIG_INTEL_MEI) += mei/
282 obj-$(CONFIG_VMWARE_VMCI) += vmw_vmci/
283 obj-$(CONFIG_LATTICE_ECP3_CONFIG) += lattice-ecp3-config.o
284 obj-$(CONFIG_SRAM) += sram.o
285 +obj-$(CONFIG_BCM2708_VCHIQ) += vc04_services/
287 obj-$(CONFIG_GENWQE) += genwqe/
288 obj-$(CONFIG_ECHO) += echo/
290 +++ b/drivers/misc/vc04_services/Kconfig
292 +config BCM2708_VCHIQ
293 + tristate "Videocore VCHIQ"
294 +	depends on RASPBERRYPI_FIRMWARE
295 +	default y
296 +	help
297 + Kernel to VideoCore communication interface for the
298 + BCM2708 family of products.
299 + Defaults to Y when the Broadcom Videocore services
300 + are included in the build, N otherwise.
302 +++ b/drivers/misc/vc04_services/Makefile
304 +obj-$(CONFIG_BCM2708_VCHIQ) += vchiq.o
306 +vchiq-objs := \
307 + interface/vchiq_arm/vchiq_core.o \
308 + interface/vchiq_arm/vchiq_arm.o \
309 + interface/vchiq_arm/vchiq_kern_lib.o \
310 + interface/vchiq_arm/vchiq_2835_arm.o \
311 + interface/vchiq_arm/vchiq_debugfs.o \
312 + interface/vchiq_arm/vchiq_shim.o \
313 + interface/vchiq_arm/vchiq_util.o \
314 + interface/vchiq_arm/vchiq_connected.o \
316 +ccflags-y += -DVCOS_VERIFY_BKPTS=1 -Idrivers/misc/vc04_services -DUSE_VCHIQ_ARM -D__VCCOREVER__=0x04000000
319 +++ b/drivers/misc/vc04_services/interface/vchi/connections/connection.h
322 + * Copyright (c) 2010-2012 Broadcom. All rights reserved.
324 + * Redistribution and use in source and binary forms, with or without
325 + * modification, are permitted provided that the following conditions
327 + * 1. Redistributions of source code must retain the above copyright
328 + * notice, this list of conditions, and the following disclaimer,
329 + * without modification.
330 + * 2. Redistributions in binary form must reproduce the above copyright
331 + * notice, this list of conditions and the following disclaimer in the
332 + * documentation and/or other materials provided with the distribution.
333 + * 3. The names of the above-listed copyright holders may not be used
334 + * to endorse or promote products derived from this software without
335 + * specific prior written permission.
337 + * ALTERNATIVELY, this software may be distributed under the terms of the
338 + * GNU General Public License ("GPL") version 2, as published by the Free
339 + * Software Foundation.
341 + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS
342 + * IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
343 + * THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
344 + * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR
345 + * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
346 + * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
347 + * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
348 + * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
349 + * LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
350 + * NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
351 + * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
354 +#ifndef CONNECTION_H_
355 +#define CONNECTION_H_
357 +#include <linux/kernel.h>
358 +#include <linux/types.h>
359 +#include <linux/semaphore.h>
361 +#include "interface/vchi/vchi_cfg_internal.h"
362 +#include "interface/vchi/vchi_common.h"
363 +#include "interface/vchi/message_drivers/message.h"
365 +/******************************************************************************
367 + *****************************************************************************/
369 +// Opaque handle for a connection / service pair
370 +typedef struct opaque_vchi_connection_connected_service_handle_t *VCHI_CONNECTION_SERVICE_HANDLE_T;
372 +// opaque handle to the connection state information
373 +typedef struct opaque_vchi_connection_info_t VCHI_CONNECTION_STATE_T;
375 +typedef struct vchi_connection_t VCHI_CONNECTION_T;
378 +/******************************************************************************
380 + *****************************************************************************/
382 +// Routine to init a connection with a particular low level driver
383 +typedef VCHI_CONNECTION_STATE_T * (*VCHI_CONNECTION_INIT_T)( struct vchi_connection_t * connection,
384 + const VCHI_MESSAGE_DRIVER_T * driver );
386 +// Routine to control CRC enabling at a connection level
387 +typedef int32_t (*VCHI_CONNECTION_CRC_CONTROL_T)( VCHI_CONNECTION_STATE_T *state_handle,
388 + VCHI_CRC_CONTROL_T control );
390 +// Routine to create a service
391 +typedef int32_t (*VCHI_CONNECTION_SERVICE_CONNECT_T)( VCHI_CONNECTION_STATE_T *state_handle,
392 + int32_t service_id,
393 + uint32_t rx_fifo_size,
394 + uint32_t tx_fifo_size,
396 + VCHI_CALLBACK_T callback,
397 + void *callback_param,
399 + int32_t want_unaligned_bulk_rx,
400 + int32_t want_unaligned_bulk_tx,
401 + VCHI_CONNECTION_SERVICE_HANDLE_T *service_handle );
403 +// Routine to close a service
404 +typedef int32_t (*VCHI_CONNECTION_SERVICE_DISCONNECT_T)( VCHI_CONNECTION_SERVICE_HANDLE_T service_handle );
406 +// Routine to queue a message
407 +typedef int32_t (*VCHI_CONNECTION_SERVICE_QUEUE_MESSAGE_T)( VCHI_CONNECTION_SERVICE_HANDLE_T service_handle,
409 + uint32_t data_size,
410 + VCHI_FLAGS_T flags,
411 + void *msg_handle );
413 +// scatter-gather (vector) message queueing
414 +typedef int32_t (*VCHI_CONNECTION_SERVICE_QUEUE_MESSAGEV_T)( VCHI_CONNECTION_SERVICE_HANDLE_T service_handle,
415 + VCHI_MSG_VECTOR_T *vector,
417 + VCHI_FLAGS_T flags,
418 + void *msg_handle );
420 +// Routine to dequeue a message
421 +typedef int32_t (*VCHI_CONNECTION_SERVICE_DEQUEUE_MESSAGE_T)( VCHI_CONNECTION_SERVICE_HANDLE_T service_handle,
423 + uint32_t max_data_size_to_read,
424 + uint32_t *actual_msg_size,
425 + VCHI_FLAGS_T flags );
427 +// Routine to peek at a message
428 +typedef int32_t (*VCHI_CONNECTION_SERVICE_PEEK_MESSAGE_T)( VCHI_CONNECTION_SERVICE_HANDLE_T service_handle,
430 + uint32_t *msg_size,
431 + VCHI_FLAGS_T flags );
433 +// Routine to hold a message
434 +typedef int32_t (*VCHI_CONNECTION_SERVICE_HOLD_MESSAGE_T)( VCHI_CONNECTION_SERVICE_HANDLE_T service_handle,
436 + uint32_t *msg_size,
437 + VCHI_FLAGS_T flags,
438 + void **message_handle );
440 +// Routine to initialise a received message iterator
441 +typedef int32_t (*VCHI_CONNECTION_SERVICE_LOOKAHEAD_MESSAGE_T)( VCHI_CONNECTION_SERVICE_HANDLE_T service_handle,
442 + VCHI_MSG_ITER_T *iter,
443 + VCHI_FLAGS_T flags );
445 +// Routine to release a held message
446 +typedef int32_t (*VCHI_CONNECTION_HELD_MSG_RELEASE_T)( VCHI_CONNECTION_SERVICE_HANDLE_T service_handle,
447 + void *message_handle );
449 +// Routine to get info on a held message
450 +typedef int32_t (*VCHI_CONNECTION_HELD_MSG_INFO_T)( VCHI_CONNECTION_SERVICE_HANDLE_T service_handle,
451 + void *message_handle,
454 + uint32_t *tx_timestamp,
455 + uint32_t *rx_timestamp );
457 +// Routine to check whether the iterator has a next message
458 +typedef int32_t (*VCHI_CONNECTION_MSG_ITER_HAS_NEXT_T)( VCHI_CONNECTION_SERVICE_HANDLE_T service,
459 + const VCHI_MSG_ITER_T *iter );
461 +// Routine to advance the iterator
462 +typedef int32_t (*VCHI_CONNECTION_MSG_ITER_NEXT_T)( VCHI_CONNECTION_SERVICE_HANDLE_T service,
463 + VCHI_MSG_ITER_T *iter,
465 + uint32_t *msg_size );
467 +// Routine to remove the last message returned by the iterator
468 +typedef int32_t (*VCHI_CONNECTION_MSG_ITER_REMOVE_T)( VCHI_CONNECTION_SERVICE_HANDLE_T service,
469 + VCHI_MSG_ITER_T *iter );
471 +// Routine to hold the last message returned by the iterator
472 +typedef int32_t (*VCHI_CONNECTION_MSG_ITER_HOLD_T)( VCHI_CONNECTION_SERVICE_HANDLE_T service,
473 + VCHI_MSG_ITER_T *iter,
474 + void **msg_handle );
476 +// Routine to transmit bulk data
477 +typedef int32_t (*VCHI_CONNECTION_BULK_QUEUE_TRANSMIT_T)( VCHI_CONNECTION_SERVICE_HANDLE_T service_handle,
478 + const void *data_src,
479 + uint32_t data_size,
480 + VCHI_FLAGS_T flags,
481 + void *bulk_handle );
483 +// Routine to receive data
484 +typedef int32_t (*VCHI_CONNECTION_BULK_QUEUE_RECEIVE_T)( VCHI_CONNECTION_SERVICE_HANDLE_T service_handle,
486 + uint32_t data_size,
487 + VCHI_FLAGS_T flags,
488 + void *bulk_handle );
490 +// Routine to report if a server is available
491 +typedef int32_t (*VCHI_CONNECTION_SERVER_PRESENT)( VCHI_CONNECTION_STATE_T *state, int32_t service_id, int32_t peer_flags );
493 +// Routine to report the number of RX slots available
494 +typedef int (*VCHI_CONNECTION_RX_SLOTS_AVAILABLE)( const VCHI_CONNECTION_STATE_T *state );
496 +// Routine to report the RX slot size
497 +typedef uint32_t (*VCHI_CONNECTION_RX_SLOT_SIZE)( const VCHI_CONNECTION_STATE_T *state );
499 +// Callback to indicate that the other side has added a buffer to the rx bulk DMA FIFO
500 +typedef void (*VCHI_CONNECTION_RX_BULK_BUFFER_ADDED)(VCHI_CONNECTION_STATE_T *state,
503 + MESSAGE_TX_CHANNEL_T channel,
504 + uint32_t channel_params,
505 + uint32_t data_length,
506 + uint32_t data_offset);
508 +// Callback to inform a service that a Xon or Xoff message has been received
509 +typedef void (*VCHI_CONNECTION_FLOW_CONTROL)(VCHI_CONNECTION_STATE_T *state, int32_t service_id, int32_t xoff);
511 +// Callback to inform a service that a server available reply message has been received
512 +typedef void (*VCHI_CONNECTION_SERVER_AVAILABLE_REPLY)(VCHI_CONNECTION_STATE_T *state, int32_t service_id, uint32_t flags);
514 +// Callback to indicate that bulk auxiliary messages have arrived
515 +typedef void (*VCHI_CONNECTION_BULK_AUX_RECEIVED)(VCHI_CONNECTION_STATE_T *state);
517 +// Callback to indicate that bulk auxiliary messages have arrived
518 +typedef void (*VCHI_CONNECTION_BULK_AUX_TRANSMITTED)(VCHI_CONNECTION_STATE_T *state, void *handle);
520 +// Callback with all the connection info you require
521 +typedef void (*VCHI_CONNECTION_INFO)(VCHI_CONNECTION_STATE_T *state, uint32_t protocol_version, uint32_t slot_size, uint32_t num_slots, uint32_t min_bulk_size);
523 +// Callback to inform of a disconnect
524 +typedef void (*VCHI_CONNECTION_DISCONNECT)(VCHI_CONNECTION_STATE_T *state, uint32_t flags);
526 +// Callback to inform of a power control request
527 +typedef void (*VCHI_CONNECTION_POWER_CONTROL)(VCHI_CONNECTION_STATE_T *state, MESSAGE_TX_CHANNEL_T channel, int32_t enable);
529 +// allocate memory suitably aligned for this connection
530 +typedef void * (*VCHI_BUFFER_ALLOCATE)(VCHI_CONNECTION_SERVICE_HANDLE_T service_handle, uint32_t * length);
532 +// free memory allocated by buffer_allocate
533 +typedef void (*VCHI_BUFFER_FREE)(VCHI_CONNECTION_SERVICE_HANDLE_T service_handle, void * address);
536 +/******************************************************************************
537 + System driver struct
538 + *****************************************************************************/
540 +struct opaque_vchi_connection_api_t
542 + // Routine to init the connection
543 + VCHI_CONNECTION_INIT_T init;
545 + // Connection-level CRC control
546 + VCHI_CONNECTION_CRC_CONTROL_T crc_control;
548 + // Routine to connect to or create service
549 + VCHI_CONNECTION_SERVICE_CONNECT_T service_connect;
551 + // Routine to disconnect from a service
552 + VCHI_CONNECTION_SERVICE_DISCONNECT_T service_disconnect;
554 + // Routine to queue a message
555 + VCHI_CONNECTION_SERVICE_QUEUE_MESSAGE_T service_queue_msg;
557 + // scatter-gather (vector) message queue
558 + VCHI_CONNECTION_SERVICE_QUEUE_MESSAGEV_T service_queue_msgv;
560 + // Routine to dequeue a message
561 + VCHI_CONNECTION_SERVICE_DEQUEUE_MESSAGE_T service_dequeue_msg;
563 + // Routine to peek at a message
564 + VCHI_CONNECTION_SERVICE_PEEK_MESSAGE_T service_peek_msg;
566 + // Routine to hold a message
567 + VCHI_CONNECTION_SERVICE_HOLD_MESSAGE_T service_hold_msg;
569 + // Routine to initialise a received message iterator
570 + VCHI_CONNECTION_SERVICE_LOOKAHEAD_MESSAGE_T service_look_ahead_msg;
572 + // Routine to release a message
573 + VCHI_CONNECTION_HELD_MSG_RELEASE_T held_msg_release;
575 + // Routine to get information on a held message
576 + VCHI_CONNECTION_HELD_MSG_INFO_T held_msg_info;
578 + // Routine to check for next message on iterator
579 + VCHI_CONNECTION_MSG_ITER_HAS_NEXT_T msg_iter_has_next;
581 + // Routine to get next message on iterator
582 + VCHI_CONNECTION_MSG_ITER_NEXT_T msg_iter_next;
584 + // Routine to remove the last message returned by iterator
585 + VCHI_CONNECTION_MSG_ITER_REMOVE_T msg_iter_remove;
587 + // Routine to hold the last message returned by iterator
588 + VCHI_CONNECTION_MSG_ITER_HOLD_T msg_iter_hold;
590 + // Routine to transmit bulk data
591 + VCHI_CONNECTION_BULK_QUEUE_TRANSMIT_T bulk_queue_transmit;
593 + // Routine to receive data
594 + VCHI_CONNECTION_BULK_QUEUE_RECEIVE_T bulk_queue_receive;
596 + // Routine to report the available servers
597 + VCHI_CONNECTION_SERVER_PRESENT server_present;
599 + // Routine to report the number of RX slots available
600 + VCHI_CONNECTION_RX_SLOTS_AVAILABLE connection_rx_slots_available;
602 + // Routine to report the RX slot size
603 + VCHI_CONNECTION_RX_SLOT_SIZE connection_rx_slot_size;
605 + // Callback to indicate that the other side has added a buffer to the rx bulk DMA FIFO
606 + VCHI_CONNECTION_RX_BULK_BUFFER_ADDED rx_bulk_buffer_added;
608 + // Callback to inform a service that a Xon or Xoff message has been received
609 + VCHI_CONNECTION_FLOW_CONTROL flow_control;
611 + // Callback to inform a service that a server available reply message has been received
612 + VCHI_CONNECTION_SERVER_AVAILABLE_REPLY server_available_reply;
614 + // Callback to indicate that bulk auxiliary messages have arrived
615 + VCHI_CONNECTION_BULK_AUX_RECEIVED bulk_aux_received;
617 + // Callback to indicate that a bulk auxiliary message has been transmitted
618 + VCHI_CONNECTION_BULK_AUX_TRANSMITTED bulk_aux_transmitted;
620 + // Callback to provide information about the connection
621 + VCHI_CONNECTION_INFO connection_info;
623 + // Callback to notify that peer has requested disconnect
624 + VCHI_CONNECTION_DISCONNECT disconnect;
626 + // Callback to notify that peer has requested power change
627 + VCHI_CONNECTION_POWER_CONTROL power_control;
629 + // allocate memory suitably aligned for this connection
630 + VCHI_BUFFER_ALLOCATE buffer_allocate;
632 + // free memory allocated by buffer_allocate
633 + VCHI_BUFFER_FREE buffer_free;
637 +struct vchi_connection_t {
638 + const VCHI_CONNECTION_API_T *api;
639 + VCHI_CONNECTION_STATE_T *state;
640 +#ifdef VCHI_COARSE_LOCKING
641 + struct semaphore sem;
646 +#endif /* CONNECTION_H_ */
648 +/****************************** End of file **********************************/
650 +++ b/drivers/misc/vc04_services/interface/vchi/message_drivers/message.h
653 + * Copyright (c) 2010-2012 Broadcom. All rights reserved.
655 + * Redistribution and use in source and binary forms, with or without
656 + * modification, are permitted provided that the following conditions
658 + * 1. Redistributions of source code must retain the above copyright
659 + * notice, this list of conditions, and the following disclaimer,
660 + * without modification.
661 + * 2. Redistributions in binary form must reproduce the above copyright
662 + * notice, this list of conditions and the following disclaimer in the
663 + * documentation and/or other materials provided with the distribution.
664 + * 3. The names of the above-listed copyright holders may not be used
665 + * to endorse or promote products derived from this software without
666 + * specific prior written permission.
668 + * ALTERNATIVELY, this software may be distributed under the terms of the
669 + * GNU General Public License ("GPL") version 2, as published by the Free
670 + * Software Foundation.
672 + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS
673 + * IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
674 + * THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
675 + * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR
676 + * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
677 + * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
678 + * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
679 + * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
680 + * LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
681 + * NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
682 + * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
685 +#ifndef _VCHI_MESSAGE_H_
686 +#define _VCHI_MESSAGE_H_
688 +#include <linux/kernel.h>
689 +#include <linux/types.h>
690 +#include <linux/semaphore.h>
692 +#include "interface/vchi/vchi_cfg_internal.h"
693 +#include "interface/vchi/vchi_common.h"
696 +typedef enum message_event_type {
697 + MESSAGE_EVENT_NONE,
699 + MESSAGE_EVENT_MESSAGE,
700 + MESSAGE_EVENT_SLOT_COMPLETE,
701 + MESSAGE_EVENT_RX_BULK_PAUSED,
702 + MESSAGE_EVENT_RX_BULK_COMPLETE,
703 + MESSAGE_EVENT_TX_COMPLETE,
704 + MESSAGE_EVENT_MSG_DISCARDED
705 +} MESSAGE_EVENT_TYPE_T;
707 +typedef enum vchi_msg_flags
709 + VCHI_MSG_FLAGS_NONE = 0x0,
710 + VCHI_MSG_FLAGS_TERMINATE_DMA = 0x1
713 +typedef enum message_tx_channel
715 + MESSAGE_TX_CHANNEL_MESSAGE = 0,
716 + MESSAGE_TX_CHANNEL_BULK = 1 // drivers may provide multiple bulk channels, from 1 upwards
717 +} MESSAGE_TX_CHANNEL_T;
719 +// Macros used for cycling through bulk channels
720 +#define MESSAGE_TX_CHANNEL_BULK_PREV(c) (MESSAGE_TX_CHANNEL_BULK+((c)-MESSAGE_TX_CHANNEL_BULK+VCHI_MAX_BULK_TX_CHANNELS_PER_CONNECTION-1)%VCHI_MAX_BULK_TX_CHANNELS_PER_CONNECTION)
721 +#define MESSAGE_TX_CHANNEL_BULK_NEXT(c) (MESSAGE_TX_CHANNEL_BULK+((c)-MESSAGE_TX_CHANNEL_BULK+1)%VCHI_MAX_BULK_TX_CHANNELS_PER_CONNECTION)
723 +typedef enum message_rx_channel
725 + MESSAGE_RX_CHANNEL_MESSAGE = 0,
726 + MESSAGE_RX_CHANNEL_BULK = 1 // drivers may provide multiple bulk channels, from 1 upwards
727 +} MESSAGE_RX_CHANNEL_T;
729 +// Message receive slot information
730 +typedef struct rx_msg_slot_info {
732 + struct rx_msg_slot_info *next;
733 + //struct slot_info *prev;
734 +#if !defined VCHI_COARSE_LOCKING
735 + struct semaphore sem;
738 + uint8_t *addr; // base address of slot
739 + uint32_t len; // length of slot in bytes
741 + uint32_t write_ptr; // hardware causes this to advance
742 + uint32_t read_ptr; // this module does the reading
743 + int active; // is this slot in the hardware dma fifo?
744 + uint32_t msgs_parsed; // count how many messages are in this slot
745 + uint32_t msgs_released; // how many messages have been released
746 + void *state; // connection state information
747 + uint8_t ref_count[VCHI_MAX_SERVICES_PER_CONNECTION]; // reference count for slots held by services
748 +} RX_MSG_SLOTINFO_T;
750 +// The message driver no longer needs to know about the fields of RX_BULK_SLOTINFO_T - sort this out.
751 +// In particular, it mustn't use addr and len - they're the client buffer, but the message
752 +// driver will be tasked with sending the aligned core section.
753 +typedef struct rx_bulk_slotinfo_t {
754 + struct rx_bulk_slotinfo_t *next;
756 + struct semaphore *blocking;
762 + // needed for the callback
765 + VCHI_FLAGS_T flags;
766 +} RX_BULK_SLOTINFO_T;
769 +/* ----------------------------------------------------------------------
770 + * each connection driver will have a pool of the following struct.
772 + * the pool will be managed by vchi_qman_*
773 + * this means there will be multiple queues (single linked lists)
774 + * a given struct message_info will be on exactly one of these queues
776 + * -------------------------------------------------------------------- */
777 +typedef struct rx_message_info {
779 + struct message_info *next;
780 + //struct message_info *prev;
784 + RX_MSG_SLOTINFO_T *slot; // points to whichever slot contains this message
785 + uint32_t tx_timestamp;
786 + uint32_t rx_timestamp;
788 +} RX_MESSAGE_INFO_T;
791 + MESSAGE_EVENT_TYPE_T type;
795 + void *addr; // address of message
796 + uint16_t slot_delta; // whether this message indicated slot delta
797 + uint32_t len; // length of message
798 + RX_MSG_SLOTINFO_T *slot; // slot this message is in
799 + int32_t service; // service id this message is destined for
800 + uint32_t tx_timestamp; // timestamp from the header
801 + uint32_t rx_timestamp; // timestamp when we parsed it
804 + // FIXME: cleanup slot reporting...
805 + RX_MSG_SLOTINFO_T *rx_msg;
806 + RX_BULK_SLOTINFO_T *rx_bulk;
808 + MESSAGE_TX_CHANNEL_T tx_channel;
814 +typedef void VCHI_MESSAGE_DRIVER_EVENT_CALLBACK_T( void *state );
817 + VCHI_MESSAGE_DRIVER_EVENT_CALLBACK_T *event_callback;
818 +} VCHI_MESSAGE_DRIVER_OPEN_T;
821 +// handle to this instance of message driver (as returned by ->open)
822 +typedef struct opaque_mhandle_t *VCHI_MDRIVER_HANDLE_T;
824 +struct opaque_vchi_message_driver_t {
825 + VCHI_MDRIVER_HANDLE_T *(*open)( VCHI_MESSAGE_DRIVER_OPEN_T *params, void *state );
826 + int32_t (*suspending)( VCHI_MDRIVER_HANDLE_T *handle );
827 + int32_t (*resumed)( VCHI_MDRIVER_HANDLE_T *handle );
828 + int32_t (*power_control)( VCHI_MDRIVER_HANDLE_T *handle, MESSAGE_TX_CHANNEL_T, int32_t enable );
829 + int32_t (*add_msg_rx_slot)( VCHI_MDRIVER_HANDLE_T *handle, RX_MSG_SLOTINFO_T *slot ); // rx message
830 + int32_t (*add_bulk_rx)( VCHI_MDRIVER_HANDLE_T *handle, void *data, uint32_t len, RX_BULK_SLOTINFO_T *slot ); // rx data (bulk)
831 + int32_t (*send)( VCHI_MDRIVER_HANDLE_T *handle, MESSAGE_TX_CHANNEL_T channel, const void *data, uint32_t len, VCHI_MSG_FLAGS_T flags, void *send_handle ); // tx (message & bulk)
832 + void (*next_event)( VCHI_MDRIVER_HANDLE_T *handle, MESSAGE_EVENT_T *event ); // get the next event from message_driver
833 + int32_t (*enable)( VCHI_MDRIVER_HANDLE_T *handle );
834 + int32_t (*form_message)( VCHI_MDRIVER_HANDLE_T *handle, int32_t service_id, VCHI_MSG_VECTOR_T *vector, uint32_t count,
835 + void *address, uint32_t length_avail, uint32_t max_total_length, int32_t pad_to_fill, int32_t allow_partial );
837 + int32_t (*update_message)( VCHI_MDRIVER_HANDLE_T *handle, void *dest, int16_t *slot_count );
838 + int32_t (*buffer_aligned)( VCHI_MDRIVER_HANDLE_T *handle, int tx, int uncached, const void *address, const uint32_t length );
839 + void * (*allocate_buffer)( VCHI_MDRIVER_HANDLE_T *handle, uint32_t *length );
840 + void (*free_buffer)( VCHI_MDRIVER_HANDLE_T *handle, void *address );
841 + int (*rx_slot_size)( VCHI_MDRIVER_HANDLE_T *handle, int msg_size );
842 + int (*tx_slot_size)( VCHI_MDRIVER_HANDLE_T *handle, int msg_size );
844 + int32_t (*tx_supports_terminate)( const VCHI_MDRIVER_HANDLE_T *handle, MESSAGE_TX_CHANNEL_T channel );
845 + uint32_t (*tx_bulk_chunk_size)( const VCHI_MDRIVER_HANDLE_T *handle, MESSAGE_TX_CHANNEL_T channel );
846 + int (*tx_alignment)( const VCHI_MDRIVER_HANDLE_T *handle, MESSAGE_TX_CHANNEL_T channel );
847 + int (*rx_alignment)( const VCHI_MDRIVER_HANDLE_T *handle, MESSAGE_RX_CHANNEL_T channel );
848 + void (*form_bulk_aux)( VCHI_MDRIVER_HANDLE_T *handle, MESSAGE_TX_CHANNEL_T channel, const void *data, uint32_t len, uint32_t chunk_size, const void **aux_data, int32_t *aux_len );
849 + void (*debug)( VCHI_MDRIVER_HANDLE_T *handle );
853 +#endif // _VCHI_MESSAGE_H_
855 +/****************************** End of file ***********************************/
857 +++ b/drivers/misc/vc04_services/interface/vchi/vchi.h
860 + * Copyright (c) 2010-2012 Broadcom. All rights reserved.
862 + * Redistribution and use in source and binary forms, with or without
863 + * modification, are permitted provided that the following conditions
865 + * 1. Redistributions of source code must retain the above copyright
866 + * notice, this list of conditions, and the following disclaimer,
867 + * without modification.
868 + * 2. Redistributions in binary form must reproduce the above copyright
869 + * notice, this list of conditions and the following disclaimer in the
870 + * documentation and/or other materials provided with the distribution.
871 + * 3. The names of the above-listed copyright holders may not be used
872 + * to endorse or promote products derived from this software without
873 + * specific prior written permission.
875 + * ALTERNATIVELY, this software may be distributed under the terms of the
876 + * GNU General Public License ("GPL") version 2, as published by the Free
877 + * Software Foundation.
879 + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS
880 + * IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
881 + * THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
882 + * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR
883 + * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
884 + * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
885 + * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
886 + * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
887 + * LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
888 + * NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
889 + * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
895 +#include "interface/vchi/vchi_cfg.h"
896 +#include "interface/vchi/vchi_common.h"
897 +#include "interface/vchi/connections/connection.h"
898 +#include "vchi_mh.h"
901 +/******************************************************************************
903 + *****************************************************************************/
905 +#define VCHI_BULK_ROUND_UP(x) ((((unsigned long)(x))+VCHI_BULK_ALIGN-1) & ~(VCHI_BULK_ALIGN-1))
906 +#define VCHI_BULK_ROUND_DOWN(x) (((unsigned long)(x)) & ~(VCHI_BULK_ALIGN-1))
907 +#define VCHI_BULK_ALIGN_NBYTES(x) (VCHI_BULK_ALIGNED(x) ? 0 : (VCHI_BULK_ALIGN - ((unsigned long)(x) & (VCHI_BULK_ALIGN-1))))
909 +#ifdef USE_VCHIQ_ARM
910 +#define VCHI_BULK_ALIGNED(x) 1
912 +#define VCHI_BULK_ALIGNED(x) (((unsigned long)(x) & (VCHI_BULK_ALIGN-1)) == 0)
915 +struct vchi_version {
917 + uint32_t version_min;
919 +#define VCHI_VERSION(v_) { v_, v_ }
920 +#define VCHI_VERSION_EX(v_, m_) { v_, m_ }
927 +} VCHI_MSG_VECTOR_TYPE_T;
929 +typedef struct vchi_msg_vector_ex {
931 + VCHI_MSG_VECTOR_TYPE_T type;
937 + VCHI_MEM_HANDLE_T handle;
942 + // an ordinary data pointer
945 + const void *vec_base;
949 + // a nested vector list
952 + struct vchi_msg_vector_ex *vec;
956 +} VCHI_MSG_VECTOR_EX_T;
959 +// Construct an entry in a msg vector for a pointer (p) of length (l)
960 +#define VCHI_VEC_POINTER(p,l) VCHI_VEC_POINTER, { { (VCHI_MEM_HANDLE_T)(p), (l) } }
962 +// Construct an entry in a msg vector for a message handle (h), starting at offset (o) of length (l)
963 +#define VCHI_VEC_HANDLE(h,o,l) VCHI_VEC_HANDLE, { { (h), (o), (l) } }
965 +// Macros to manipulate 'FOURCC' values
966 +#define MAKE_FOURCC(x) ((int32_t)( (x[0] << 24) | (x[1] << 16) | (x[2] << 8) | x[3] ))
967 +#define FOURCC_TO_CHAR(x) (x >> 24) & 0xFF,(x >> 16) & 0xFF,(x >> 8) & 0xFF, x & 0xFF
970 +// Opaque service information
971 +struct opaque_vchi_service_t;
973 +// Descriptor for a held message. Allocated by client, initialised by vchi_msg_hold,
974 +// vchi_msg_iter_hold or vchi_msg_iter_hold_next. Fields are for internal VCHI use only.
977 + struct opaque_vchi_service_t *service;
983 +// structure used to provide the information needed to open a server or a client
985 + struct vchi_version version;
986 + int32_t service_id;
987 + VCHI_CONNECTION_T *connection;
988 + uint32_t rx_fifo_size;
989 + uint32_t tx_fifo_size;
990 + VCHI_CALLBACK_T callback;
991 + void *callback_param;
992 + /* client intends to receive bulk transfers of
993 + odd lengths or into unaligned buffers */
994 + int32_t want_unaligned_bulk_rx;
995 + /* client intends to transmit bulk transfers of
996 + odd lengths or out of unaligned buffers */
997 + int32_t want_unaligned_bulk_tx;
998 + /* client wants to check CRCs on (bulk) xfers.
999 + Only needs to be set at 1 end - will do both directions. */
1001 +} SERVICE_CREATION_T;
1003 +// Opaque handle for a VCHI instance
1004 +typedef struct opaque_vchi_instance_handle_t *VCHI_INSTANCE_T;
1006 +// Opaque handle for a server or client
1007 +typedef struct opaque_vchi_service_handle_t *VCHI_SERVICE_HANDLE_T;
1009 +// Service registration & startup
1010 +typedef void (*VCHI_SERVICE_INIT)(VCHI_INSTANCE_T initialise_instance, VCHI_CONNECTION_T **connections, uint32_t num_connections);
1012 +typedef struct service_info_tag {
1013 + const char * const vll_filename; /* VLL to load to start this service. This is an empty string if VLL is "static" */
1014 + VCHI_SERVICE_INIT init; /* Service initialisation function */
1015 + void *vll_handle; /* VLL handle; NULL when unloaded or a "static VLL" in build */
1018 +/******************************************************************************
1019 + Global funcs - implementation is specific to which side you are on (local / remote)
1020 + *****************************************************************************/
1026 +extern /*@observer@*/ VCHI_CONNECTION_T * vchi_create_connection( const VCHI_CONNECTION_API_T * function_table,
1027 + const VCHI_MESSAGE_DRIVER_T * low_level);
1030 +// Routine used to initialise the vchi on both local + remote connections
1031 +extern int32_t vchi_initialise( VCHI_INSTANCE_T *instance_handle );
1033 +extern int32_t vchi_exit( void );
1035 +extern int32_t vchi_connect( VCHI_CONNECTION_T **connections,
1036 + const uint32_t num_connections,
1037 + VCHI_INSTANCE_T instance_handle );
1039 +// When this is called, ensure that all services have no data pending.
1040 +// Bulk transfers can remain 'queued'.
1041 +extern int32_t vchi_disconnect( VCHI_INSTANCE_T instance_handle );
1043 +// Global control over bulk CRC checking
1044 +extern int32_t vchi_crc_control( VCHI_CONNECTION_T *connection,
1045 + VCHI_CRC_CONTROL_T control );
1047 +// helper functions
1048 +extern void * vchi_allocate_buffer(VCHI_SERVICE_HANDLE_T handle, uint32_t *length);
1049 +extern void vchi_free_buffer(VCHI_SERVICE_HANDLE_T handle, void *address);
1050 +extern uint32_t vchi_current_time(VCHI_INSTANCE_T instance_handle);
1053 +/******************************************************************************
1054 + Global service API
1055 + *****************************************************************************/
1056 +// Routine to create a named service
1057 +extern int32_t vchi_service_create( VCHI_INSTANCE_T instance_handle,
1058 + SERVICE_CREATION_T *setup,
1059 + VCHI_SERVICE_HANDLE_T *handle );
1061 +// Routine to destroy a service
1062 +extern int32_t vchi_service_destroy( const VCHI_SERVICE_HANDLE_T handle );
1064 +// Routine to open a named service
1065 +extern int32_t vchi_service_open( VCHI_INSTANCE_T instance_handle,
1066 + SERVICE_CREATION_T *setup,
1067 + VCHI_SERVICE_HANDLE_T *handle);
1069 +extern int32_t vchi_get_peer_version( const VCHI_SERVICE_HANDLE_T handle,
1070 + short *peer_version );
1072 +// Routine to close a named service
1073 +extern int32_t vchi_service_close( const VCHI_SERVICE_HANDLE_T handle );
1075 +// Routine to increment ref count on a named service
1076 +extern int32_t vchi_service_use( const VCHI_SERVICE_HANDLE_T handle );
1078 +// Routine to decrement ref count on a named service
1079 +extern int32_t vchi_service_release( const VCHI_SERVICE_HANDLE_T handle );
1081 +// Routine to set a control option for a named service
1082 +extern int32_t vchi_service_set_option( const VCHI_SERVICE_HANDLE_T handle,
1083 + VCHI_SERVICE_OPTION_T option,
1086 +// Routine to send a message across a service
1087 +extern int32_t vchi_msg_queue( VCHI_SERVICE_HANDLE_T handle,
1089 + uint32_t data_size,
1090 + VCHI_FLAGS_T flags,
1091 + void *msg_handle );
1093 +// scatter-gather (vector) and send message
1094 +int32_t vchi_msg_queuev_ex( VCHI_SERVICE_HANDLE_T handle,
1095 + VCHI_MSG_VECTOR_EX_T *vector,
1097 + VCHI_FLAGS_T flags,
1098 + void *msg_handle );
1100 +// legacy scatter-gather (vector) and send message, only handles pointers
1101 +int32_t vchi_msg_queuev( VCHI_SERVICE_HANDLE_T handle,
1102 + VCHI_MSG_VECTOR_T *vector,
1104 + VCHI_FLAGS_T flags,
1105 + void *msg_handle );
1107 +// Routine to receive a msg from a service
1108 +// Dequeue is equivalent to hold, copy into client buffer, release
1109 +extern int32_t vchi_msg_dequeue( VCHI_SERVICE_HANDLE_T handle,
1111 + uint32_t max_data_size_to_read,
1112 + uint32_t *actual_msg_size,
1113 + VCHI_FLAGS_T flags );
1115 +// Routine to look at a message in place.
1116 +// The message is not dequeued, so a subsequent call to peek or dequeue
1117 +// will return the same message.
1118 +extern int32_t vchi_msg_peek( VCHI_SERVICE_HANDLE_T handle,
1120 + uint32_t *msg_size,
1121 + VCHI_FLAGS_T flags );
1123 +// Routine to remove a message after it has been read in place with peek
1124 +// The first message on the queue is dequeued.
1125 +extern int32_t vchi_msg_remove( VCHI_SERVICE_HANDLE_T handle );
1127 +// Routine to look at a message in place.
1128 +// The message is dequeued, so the caller is left holding it; the descriptor is
1129 +// filled in and must be released when the user has finished with the message.
1130 +extern int32_t vchi_msg_hold( VCHI_SERVICE_HANDLE_T handle,
1131 + void **data, // } may be NULL, as info can be
1132 + uint32_t *msg_size, // } obtained from HELD_MSG_T
1133 + VCHI_FLAGS_T flags,
1134 + VCHI_HELD_MSG_T *message_descriptor );
1136 +// Initialise an iterator to look through messages in place
1137 +extern int32_t vchi_msg_look_ahead( VCHI_SERVICE_HANDLE_T handle,
1138 + VCHI_MSG_ITER_T *iter,
1139 + VCHI_FLAGS_T flags );
1141 +/******************************************************************************
1142 + Global service support API - operations on held messages and message iterators
1143 + *****************************************************************************/
1145 +// Routine to get the address of a held message
1146 +extern void *vchi_held_msg_ptr( const VCHI_HELD_MSG_T *message );
1148 +// Routine to get the size of a held message
1149 +extern int32_t vchi_held_msg_size( const VCHI_HELD_MSG_T *message );
1151 +// Routine to get the transmit timestamp as written into the header by the peer
1152 +extern uint32_t vchi_held_msg_tx_timestamp( const VCHI_HELD_MSG_T *message );
1154 +// Routine to get the reception timestamp, written as we parsed the header
1155 +extern uint32_t vchi_held_msg_rx_timestamp( const VCHI_HELD_MSG_T *message );
1157 +// Routine to release a held message after it has been processed
1158 +extern int32_t vchi_held_msg_release( VCHI_HELD_MSG_T *message );
1160 +// Indicates whether the iterator has a next message.
1161 +extern int32_t vchi_msg_iter_has_next( const VCHI_MSG_ITER_T *iter );
1163 +// Return the pointer and length for the next message and advance the iterator.
1164 +extern int32_t vchi_msg_iter_next( VCHI_MSG_ITER_T *iter,
1166 + uint32_t *msg_size );
1168 +// Remove the last message returned by vchi_msg_iter_next.
1169 +// Can only be called once after each call to vchi_msg_iter_next.
1170 +extern int32_t vchi_msg_iter_remove( VCHI_MSG_ITER_T *iter );
1172 +// Hold the last message returned by vchi_msg_iter_next.
1173 +// Can only be called once after each call to vchi_msg_iter_next.
1174 +extern int32_t vchi_msg_iter_hold( VCHI_MSG_ITER_T *iter,
1175 + VCHI_HELD_MSG_T *message );
1177 +// Return information for the next message, and hold it, advancing the iterator.
1178 +extern int32_t vchi_msg_iter_hold_next( VCHI_MSG_ITER_T *iter,
1179 + void **data, // } may be NULL
1180 + uint32_t *msg_size, // }
1181 + VCHI_HELD_MSG_T *message );
1184 +/******************************************************************************
1186 + *****************************************************************************/
1188 +// Routine to prepare interface for a transfer from the other side
1189 +extern int32_t vchi_bulk_queue_receive( VCHI_SERVICE_HANDLE_T handle,
1191 + uint32_t data_size,
1192 + VCHI_FLAGS_T flags,
1193 + void *transfer_handle );
1196 +// Prepare interface for a transfer from the other side into relocatable memory.
1197 +int32_t vchi_bulk_queue_receive_reloc( const VCHI_SERVICE_HANDLE_T handle,
1198 + VCHI_MEM_HANDLE_T h_dst,
1200 + uint32_t data_size,
1201 + const VCHI_FLAGS_T flags,
1202 + void * const bulk_handle );
1204 +// Routine to queue up data ready for transfer to the other side (once it has signalled it is ready)
1205 +extern int32_t vchi_bulk_queue_transmit( VCHI_SERVICE_HANDLE_T handle,
1206 + const void *data_src,
1207 + uint32_t data_size,
1208 + VCHI_FLAGS_T flags,
1209 + void *transfer_handle );
1212 +/******************************************************************************
1213 + Configuration plumbing
1214 + *****************************************************************************/
1216 +// function prototypes for the different mid layers (the state info gives the different physical connections)
1217 +extern const VCHI_CONNECTION_API_T *single_get_func_table( void );
1218 +//extern const VCHI_CONNECTION_API_T *local_server_get_func_table( void );
1219 +//extern const VCHI_CONNECTION_API_T *local_client_get_func_table( void );
1221 +// declare all message drivers here
1222 +const VCHI_MESSAGE_DRIVER_T *vchi_mphi_message_driver_func_table( void );
1228 +extern int32_t vchi_bulk_queue_transmit_reloc( VCHI_SERVICE_HANDLE_T handle,
1229 + VCHI_MEM_HANDLE_T h_src,
1231 + uint32_t data_size,
1232 + VCHI_FLAGS_T flags,
1233 + void *transfer_handle );
1234 +#endif /* VCHI_H_ */
1236 +/****************************** End of file **********************************/
1238 +++ b/drivers/misc/vc04_services/interface/vchi/vchi_cfg.h
1241 + * Copyright (c) 2010-2012 Broadcom. All rights reserved.
1243 + * Redistribution and use in source and binary forms, with or without
1244 + * modification, are permitted provided that the following conditions
1246 + * 1. Redistributions of source code must retain the above copyright
1247 + * notice, this list of conditions, and the following disclaimer,
1248 + * without modification.
1249 + * 2. Redistributions in binary form must reproduce the above copyright
1250 + * notice, this list of conditions and the following disclaimer in the
1251 + * documentation and/or other materials provided with the distribution.
1252 + * 3. The names of the above-listed copyright holders may not be used
1253 + * to endorse or promote products derived from this software without
1254 + * specific prior written permission.
1256 + * ALTERNATIVELY, this software may be distributed under the terms of the
1257 + * GNU General Public License ("GPL") version 2, as published by the Free
1258 + * Software Foundation.
1260 + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS
1261 + * IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
1262 + * THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
1263 + * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR
1264 + * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
1265 + * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
1266 + * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
1267 + * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
1268 + * LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
1269 + * NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
1270 + * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
1273 +#ifndef VCHI_CFG_H_
1274 +#define VCHI_CFG_H_
1276 +/****************************************************************************************
1277 + * Defines in this first section are part of the VCHI API and may be examined by VCHI
1279 + ***************************************************************************************/
1281 +/* Required alignment of base addresses for bulk transfer, if unaligned transfers are not enabled */
1282 +/* Really determined by the message driver, and should be available from a run-time call. */
1283 +#ifndef VCHI_BULK_ALIGN
1284 +# if __VCCOREVER__ >= 0x04000000
1285 +# define VCHI_BULK_ALIGN 32 // Allows for the need to do cache cleans
1287 +# define VCHI_BULK_ALIGN 16
1291 +/* Required length multiple for bulk transfers, if unaligned transfers are not enabled */
1292 +/* May be less than or greater than VCHI_BULK_ALIGN */
1293 +/* Really determined by the message driver, and should be available from a run-time call. */
1294 +#ifndef VCHI_BULK_GRANULARITY
1295 +# if __VCCOREVER__ >= 0x04000000
1296 +# define VCHI_BULK_GRANULARITY 32 // Allows for the need to do cache cleans
1298 +# define VCHI_BULK_GRANULARITY 16
1302 +/* The largest possible message to be queued with vchi_msg_queue. */
1303 +#ifndef VCHI_MAX_MSG_SIZE
1304 +# if defined VCHI_LOCAL_HOST_PORT
1305 +# define VCHI_MAX_MSG_SIZE 16384 // makes file transfers fast, but should they be using bulk?
1307 +# define VCHI_MAX_MSG_SIZE 4096 // NOTE: THIS MUST BE LARGER THAN OR EQUAL TO THE SIZE OF THE KHRONOS MERGE BUFFER!!
1311 +/******************************************************************************************
1312 + * Defines below are system configuration options, and should not be used by VCHI services.
1313 + *****************************************************************************************/
1315 +/* How many connections can we support? A localhost implementation uses 2 connections,
1316 + * 1 for host-app, 1 for VMCS, and these are hooked together by a loopback MPHI VCFW
1318 +#ifndef VCHI_MAX_NUM_CONNECTIONS
1319 +# define VCHI_MAX_NUM_CONNECTIONS 3
1322 +/* How many services can we open per connection? Extending this doesn't cost processing time, just a small
1323 + * amount of static memory. */
1324 +#ifndef VCHI_MAX_SERVICES_PER_CONNECTION
1325 +# define VCHI_MAX_SERVICES_PER_CONNECTION 36
1328 +/* Adjust if using a message driver that supports more logical TX channels */
1329 +#ifndef VCHI_MAX_BULK_TX_CHANNELS_PER_CONNECTION
1330 +# define VCHI_MAX_BULK_TX_CHANNELS_PER_CONNECTION 9 // 1 MPHI + 8 CCP2 logical channels
1333 +/* Adjust if using a message driver that supports more logical RX channels */
1334 +#ifndef VCHI_MAX_BULK_RX_CHANNELS_PER_CONNECTION
1335 +# define VCHI_MAX_BULK_RX_CHANNELS_PER_CONNECTION 1 // 1 MPHI
1338 +/* How many receive slots do we use? Multiplied by VCHI_MAX_MSG_SIZE, this gives the effective
1339 + * receive queue space, less message headers. */
1340 +#ifndef VCHI_NUM_READ_SLOTS
1341 +# if defined(VCHI_LOCAL_HOST_PORT)
1342 +# define VCHI_NUM_READ_SLOTS 4
1344 +# define VCHI_NUM_READ_SLOTS 48
1348 +/* Do we utilise the overrun facility for receive message slots? It can aid peer transmit
1349 + * performance. Only define on the VideoCore end, talking to the host.
1351 +//#define VCHI_MSG_RX_OVERRUN
1353 +/* How many transmit slots do we use? We generally don't need many, as the hardware driver
1354 + * underneath VCHI will usually have its own buffering. */
1355 +#ifndef VCHI_NUM_WRITE_SLOTS
1356 +# define VCHI_NUM_WRITE_SLOTS 4
1359 +/* If a service has held or queued received messages in VCHI_XOFF_THRESHOLD or more slots,
1360 + * then it's taking up too much buffer space, and the peer service will be told to stop
1361 + * transmitting with an XOFF message. For this to be effective, the VCHI_NUM_READ_SLOTS
1362 + * needs to be considerably bigger than VCHI_NUM_WRITE_SLOTS, or the transmit latency
1364 +#ifndef VCHI_XOFF_THRESHOLD
1365 +# define VCHI_XOFF_THRESHOLD (VCHI_NUM_READ_SLOTS / 2)
1368 +/* After we've sent an XOFF, the peer will be told to resume transmission once the local
1369 + * service has dequeued/released enough messages that it's now occupying
1370 + * VCHI_XON_THRESHOLD slots or fewer. */
1371 +#ifndef VCHI_XON_THRESHOLD
1372 +# define VCHI_XON_THRESHOLD (VCHI_NUM_READ_SLOTS / 4)
1375 +/* A size below which a bulk transfer omits the handshake completely and always goes
1376 + * via the message channel, if bulk auxiliary is being sent on that service. (The user
1377 + * can guarantee this by enabling unaligned transmits).
1379 +#ifndef VCHI_MIN_BULK_SIZE
1380 +# define VCHI_MIN_BULK_SIZE ( VCHI_MAX_MSG_SIZE / 2 < 4096 ? VCHI_MAX_MSG_SIZE / 2 : 4096 )
1383 +/* Maximum size of bulk transmission chunks, for each interface type. A trade-off between
1384 + * speed and latency; the smaller the chunk size, the better the chance of messages and other
1385 + * bulk transmissions getting in when big bulk transfers are happening. Set to 0 to not
1386 + * break transmissions into chunks.
1388 +#ifndef VCHI_MAX_BULK_CHUNK_SIZE_MPHI
1389 +# define VCHI_MAX_BULK_CHUNK_SIZE_MPHI (16 * 1024)
1392 +/* NB Chunked CCP2 transmissions violate the letter of the CCP2 spec by using "JPEG8" mode
1393 + * with multiple-line frames. Only use if the receiver can cope. */
1394 +#ifndef VCHI_MAX_BULK_CHUNK_SIZE_CCP2
1395 +# define VCHI_MAX_BULK_CHUNK_SIZE_CCP2 0
1398 +/* How many TX messages can we have pending in our transmit slots. Once exhausted,
1399 + * vchi_msg_queue will be blocked. */
1400 +#ifndef VCHI_TX_MSG_QUEUE_SIZE
1401 +# define VCHI_TX_MSG_QUEUE_SIZE 256
1404 +/* How many RX messages can we have parsed in the receive slots. Once exhausted, parsing
1405 + * will be suspended until older messages are dequeued/released. */
1406 +#ifndef VCHI_RX_MSG_QUEUE_SIZE
1407 +# define VCHI_RX_MSG_QUEUE_SIZE 256
1410 +/* Really should be able to cope if we run out of received message descriptors, by
1411 + * suspending parsing as the comment above says, but we don't. This sweeps the issue
1412 + * under the carpet. */
1413 +#if VCHI_RX_MSG_QUEUE_SIZE < (VCHI_MAX_MSG_SIZE/16 + 1) * VCHI_NUM_READ_SLOTS
1414 +# undef VCHI_RX_MSG_QUEUE_SIZE
1415 +# define VCHI_RX_MSG_QUEUE_SIZE (VCHI_MAX_MSG_SIZE/16 + 1) * VCHI_NUM_READ_SLOTS
1418 +/* How many bulk transmits can we have pending. Once exhausted, vchi_bulk_queue_transmit
1419 + * will be blocked. */
1420 +#ifndef VCHI_TX_BULK_QUEUE_SIZE
1421 +# define VCHI_TX_BULK_QUEUE_SIZE 64
1424 +/* How many bulk receives can we have pending. Once exhausted, vchi_bulk_queue_receive
1425 + * will be blocked. */
1426 +#ifndef VCHI_RX_BULK_QUEUE_SIZE
1427 +# define VCHI_RX_BULK_QUEUE_SIZE 64
1430 +/* A limit on how many outstanding bulk requests we expect the peer to give us. If
1431 + * the peer asks for more than this, VCHI will fail and assert. The number is determined
1432 + * by the peer's hardware - it's the number of outstanding requests that can be queued
1433 + * on all bulk channels. VC3's MPHI peripheral allows 16. */
1434 +#ifndef VCHI_MAX_PEER_BULK_REQUESTS
1435 +# define VCHI_MAX_PEER_BULK_REQUESTS 32
1438 +/* Define VCHI_CCP2TX_MANUAL_POWER if the host tells us when to turn the CCP2
1439 + * transmitter on and off.
1441 +/*#define VCHI_CCP2TX_MANUAL_POWER*/
1443 +#ifndef VCHI_CCP2TX_MANUAL_POWER
1445 +/* Timeout (in milliseconds) for putting the CCP2TX interface into IDLE state. Set
1446 + * negative for no IDLE.
1448 +# ifndef VCHI_CCP2TX_IDLE_TIMEOUT
1449 +# define VCHI_CCP2TX_IDLE_TIMEOUT 5
1452 +/* Timeout (in milliseconds) for putting the CCP2TX interface into OFF state. Set
1453 + * negative for no OFF.
1455 +# ifndef VCHI_CCP2TX_OFF_TIMEOUT
1456 +# define VCHI_CCP2TX_OFF_TIMEOUT 1000
1459 +#endif /* VCHI_CCP2TX_MANUAL_POWER */
1461 +#endif /* VCHI_CFG_H_ */
1463 +/****************************** End of file **********************************/
1465 +++ b/drivers/misc/vc04_services/interface/vchi/vchi_cfg_internal.h
1468 + * Copyright (c) 2010-2012 Broadcom. All rights reserved.
1470 + * Redistribution and use in source and binary forms, with or without
1471 + * modification, are permitted provided that the following conditions
1473 + * 1. Redistributions of source code must retain the above copyright
1474 + * notice, this list of conditions, and the following disclaimer,
1475 + * without modification.
1476 + * 2. Redistributions in binary form must reproduce the above copyright
1477 + * notice, this list of conditions and the following disclaimer in the
1478 + * documentation and/or other materials provided with the distribution.
1479 + * 3. The names of the above-listed copyright holders may not be used
1480 + * to endorse or promote products derived from this software without
1481 + * specific prior written permission.
1483 + * ALTERNATIVELY, this software may be distributed under the terms of the
1484 + * GNU General Public License ("GPL") version 2, as published by the Free
1485 + * Software Foundation.
1487 + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS
1488 + * IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
1489 + * THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
1490 + * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR
1491 + * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
1492 + * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
1493 + * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
1494 + * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
1495 + * LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
1496 + * NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
1497 + * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
1500 +#ifndef VCHI_CFG_INTERNAL_H_
1501 +#define VCHI_CFG_INTERNAL_H_
1503 +/****************************************************************************************
1504 + * Control optimisation attempts.
1505 + ***************************************************************************************/
1507 +// Don't use lots of short-term locks - use great long ones, reducing the overall locks-per-second
1508 +#define VCHI_COARSE_LOCKING
1510 +// Avoid lock then unlock on exit from blocking queue operations (msg tx, bulk rx/tx)
1511 +// (only relevant if VCHI_COARSE_LOCKING)
1512 +#define VCHI_ELIDE_BLOCK_EXIT_LOCK
1514 +// Avoid lock on non-blocking peek
1515 +// (only relevant if VCHI_COARSE_LOCKING)
1516 +#define VCHI_AVOID_PEEK_LOCK
1518 +// Use one slot-handler thread per connection, rather than 1 thread dealing with all connections in rotation.
1519 +#define VCHI_MULTIPLE_HANDLER_THREADS
1521 +// Put free descriptors onto the head of the free queue, rather than the tail, so that we don't thrash
1522 +// our way through the pool of descriptors.
1523 +#define VCHI_PUSH_FREE_DESCRIPTORS_ONTO_HEAD
1525 +// Don't issue a MSG_AVAILABLE callback for every single message. Possibly only safe if VCHI_COARSE_LOCKING.
1526 +#define VCHI_FEWER_MSG_AVAILABLE_CALLBACKS
1528 +// Don't use message descriptors for TX messages that don't need them
1529 +#define VCHI_MINIMISE_TX_MSG_DESCRIPTORS
1531 +// Nano-locks for multiqueue
1532 +//#define VCHI_MQUEUE_NANOLOCKS
1534 +// Lock-free(er) dequeuing
1535 +//#define VCHI_RX_NANOLOCKS
1537 +#endif /*VCHI_CFG_INTERNAL_H_*/
1539 +++ b/drivers/misc/vc04_services/interface/vchi/vchi_common.h
1542 + * Copyright (c) 2010-2012 Broadcom. All rights reserved.
1544 + * Redistribution and use in source and binary forms, with or without
1545 + * modification, are permitted provided that the following conditions
1547 + * 1. Redistributions of source code must retain the above copyright
1548 + * notice, this list of conditions, and the following disclaimer,
1549 + * without modification.
1550 + * 2. Redistributions in binary form must reproduce the above copyright
1551 + * notice, this list of conditions and the following disclaimer in the
1552 + * documentation and/or other materials provided with the distribution.
1553 + * 3. The names of the above-listed copyright holders may not be used
1554 + * to endorse or promote products derived from this software without
1555 + * specific prior written permission.
1557 + * ALTERNATIVELY, this software may be distributed under the terms of the
1558 + * GNU General Public License ("GPL") version 2, as published by the Free
1559 + * Software Foundation.
1561 + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS
1562 + * IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
1563 + * THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
1564 + * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR
1565 + * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
1566 + * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
1567 + * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
1568 + * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
1569 + * LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
1570 + * NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
1571 + * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
1574 +#ifndef VCHI_COMMON_H_
1575 +#define VCHI_COMMON_H_
1578 +//flags used when sending messages (must be bitmapped)
1581 + VCHI_FLAGS_NONE = 0x0,
1582 + VCHI_FLAGS_BLOCK_UNTIL_OP_COMPLETE = 0x1, // waits for message to be received, or sent (NB. not the same as being seen on other side)
1583 + VCHI_FLAGS_CALLBACK_WHEN_OP_COMPLETE = 0x2, // run a callback when message sent
1584 + VCHI_FLAGS_BLOCK_UNTIL_QUEUED = 0x4, // return once the transfer is in a queue ready to go
1585 + VCHI_FLAGS_ALLOW_PARTIAL = 0x8,
1586 + VCHI_FLAGS_BLOCK_UNTIL_DATA_READ = 0x10,
1587 + VCHI_FLAGS_CALLBACK_WHEN_DATA_READ = 0x20,
1589 + VCHI_FLAGS_ALIGN_SLOT = 0x000080, // internal use only
1590 + VCHI_FLAGS_BULK_AUX_QUEUED = 0x010000, // internal use only
1591 + VCHI_FLAGS_BULK_AUX_COMPLETE = 0x020000, // internal use only
1592 + VCHI_FLAGS_BULK_DATA_QUEUED = 0x040000, // internal use only
1593 + VCHI_FLAGS_BULK_DATA_COMPLETE = 0x080000, // internal use only
1594 + VCHI_FLAGS_INTERNAL = 0xFF0000
1597 +// constants for vchi_crc_control()
1599 + VCHI_CRC_NOTHING = -1,
1600 + VCHI_CRC_PER_SERVICE = 0,
1601 + VCHI_CRC_EVERYTHING = 1,
1602 +} VCHI_CRC_CONTROL_T;
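The VCHI_FLAGS values above are bit flags, so callers OR them together and test them with masks. A minimal user-space sketch, with the flag values copied from the enum and a hypothetical helper name of my own:

```c
#include <assert.h>

/* Local copies of a few flag values from the VCHI_FLAGS enum above. */
enum {
	SK_FLAGS_NONE                      = 0x0,
	SK_FLAGS_BLOCK_UNTIL_OP_COMPLETE   = 0x1,
	SK_FLAGS_CALLBACK_WHEN_OP_COMPLETE = 0x2,
	SK_FLAGS_BLOCK_UNTIL_QUEUED        = 0x4,
	SK_FLAGS_INTERNAL                  = 0xFF0000, /* internal-use mask */
};

/* Reject any flag combination that sets internal-use bits. */
static int flags_are_public(unsigned int flags)
{
	return (flags & SK_FLAGS_INTERNAL) == 0;
}
```

The internal-use flags occupy a disjoint bit range (0xFF0000), so a single mask test separates caller-visible flags from internal ones.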
1604 +//callback reasons when an event occurs on a service
1607 + VCHI_CALLBACK_REASON_MIN,
1609 + //This indicates that there is data available
1610 + //handle is the msg id that was transmitted with the data
1611 + // When a message is received and there was no FULL message available previously, send callback
1612 + // Tasks get kicked by the callback, reset their event and try and read from the fifo until it fails
1613 + VCHI_CALLBACK_MSG_AVAILABLE,
1614 + VCHI_CALLBACK_MSG_SENT,
1615 + VCHI_CALLBACK_MSG_SPACE_AVAILABLE, // XXX not yet implemented
1617 + // This indicates that a transfer from the other side has completed
1618 + VCHI_CALLBACK_BULK_RECEIVED,
1619 + //This indicates that data queued up to be sent has now gone
1620 + //handle is the msg id that was used when sending the data
1621 + VCHI_CALLBACK_BULK_SENT,
1622 + VCHI_CALLBACK_BULK_RX_SPACE_AVAILABLE, // XXX not yet implemented
1623 + VCHI_CALLBACK_BULK_TX_SPACE_AVAILABLE, // XXX not yet implemented
1625 + VCHI_CALLBACK_SERVICE_CLOSED,
1627 + // this side has sent XOFF to peer due to lack of data consumption by service
1628 + // (suggests the service may need to take some recovery action if it has
1629 + // been deliberately holding off consuming data)
1630 + VCHI_CALLBACK_SENT_XOFF,
1631 + VCHI_CALLBACK_SENT_XON,
1633 + // indicates that a bulk transfer has finished reading the source buffer
1634 + VCHI_CALLBACK_BULK_DATA_READ,
1636 + // power notification events (currently host side only)
1637 + VCHI_CALLBACK_PEER_OFF,
1638 + VCHI_CALLBACK_PEER_SUSPENDED,
1639 + VCHI_CALLBACK_PEER_ON,
1640 + VCHI_CALLBACK_PEER_RESUMED,
1641 + VCHI_CALLBACK_FORCED_POWER_OFF,
1643 +#ifdef USE_VCHIQ_ARM
1644 + // some extra notifications provided by vchiq_arm
1645 + VCHI_CALLBACK_SERVICE_OPENED,
1646 + VCHI_CALLBACK_BULK_RECEIVE_ABORTED,
1647 + VCHI_CALLBACK_BULK_TRANSMIT_ABORTED,
1650 + VCHI_CALLBACK_REASON_MAX
1651 +} VCHI_CALLBACK_REASON_T;
1653 +// service control options
1656 + VCHI_SERVICE_OPTION_MIN,
1658 + VCHI_SERVICE_OPTION_TRACE,
1659 + VCHI_SERVICE_OPTION_SYNCHRONOUS,
1661 + VCHI_SERVICE_OPTION_MAX
1662 +} VCHI_SERVICE_OPTION_T;
1665 +//Callback used by all services / bulk transfers
1666 +typedef void (*VCHI_CALLBACK_T)( void *callback_param, //my service local param
1667 + VCHI_CALLBACK_REASON_T reason,
1668 +                              void *handle ); //for transmitting msgs only
1673 + * Define vector struct for scatter-gather (vector) operations
1674 + * Vectors can be nested - if a vector element has negative length, then
1675 + * the data pointer is treated as pointing to another vector array, with
1676 + * '-vec_len' elements. Thus to append a header onto an existing vector,
1677 + * you can do this:
1679 + * void foo(const VCHI_MSG_VECTOR_T *v, int n)
1681 + * VCHI_MSG_VECTOR_T nv[2];
1682 + * nv[0].vec_base = my_header;
1683 + * nv[0].vec_len = sizeof my_header;
1684 + * nv[1].vec_base = v;
1685 + * nv[1].vec_len = -n;
1689 +typedef struct vchi_msg_vector {
1690 + const void *vec_base;
1692 +} VCHI_MSG_VECTOR_T;
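The nesting convention described in the comment above (a negative `vec_len` means `vec_base` points at another vector array with `-vec_len` elements) can be exercised with a small recursive walker. This is a sketch with a locally mirrored struct; the `vec_len` field type is assumed to be `int32_t`, and `vector_total_len` is my own illustrative helper, not part of the VCHI API:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Local mirror of the header's vector struct (vec_len type assumed). */
typedef struct sk_msg_vector {
	const void *vec_base;
	int32_t vec_len;
} SK_MSG_VECTOR_T;

/* Sum the payload bytes in a (possibly nested) vector array.
 * A negative vec_len means vec_base points at another vector array
 * with -vec_len elements, per the header comment. */
static size_t vector_total_len(const SK_MSG_VECTOR_T *vec, int n)
{
	size_t total = 0;
	int i;

	for (i = 0; i < n; i++) {
		if (vec[i].vec_len < 0)
			total += vector_total_len(
				(const SK_MSG_VECTOR_T *)vec[i].vec_base,
				-vec[i].vec_len);
		else
			total += (size_t)vec[i].vec_len;
	}
	return total;
}
```

This mirrors the header's `foo()` example: a two-element outer vector whose second entry wraps the original array lets a header be prepended without copying the payload descriptors.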
1694 +// Opaque type for a connection API
1695 +typedef struct opaque_vchi_connection_api_t VCHI_CONNECTION_API_T;
1697 +// Opaque type for a message driver
1698 +typedef struct opaque_vchi_message_driver_t VCHI_MESSAGE_DRIVER_T;
1701 +// Iterator structure for reading ahead through received message queue. Allocated by client,
1702 +// initialised by vchi_msg_look_ahead. Fields are for internal VCHI use only.
1703 +// Iterates over messages in queue at the instant of the call to vchi_msg_look_ahead -
1704 +// will not proceed to messages received since. Behaviour is undefined if an iterator
1705 +// is used again after messages for that service are removed/dequeued by any
1706 +// means other than vchi_msg_iter_... calls on the iterator itself.
1708 + struct opaque_vchi_service_t *service;
1715 +#endif // VCHI_COMMON_H_
1717 +++ b/drivers/misc/vc04_services/interface/vchi/vchi_mh.h
1720 + * Copyright (c) 2010-2012 Broadcom. All rights reserved.
1722 + * Redistribution and use in source and binary forms, with or without
1723 + * modification, are permitted provided that the following conditions
1725 + * 1. Redistributions of source code must retain the above copyright
1726 + * notice, this list of conditions, and the following disclaimer,
1727 + * without modification.
1728 + * 2. Redistributions in binary form must reproduce the above copyright
1729 + * notice, this list of conditions and the following disclaimer in the
1730 + * documentation and/or other materials provided with the distribution.
1731 + * 3. The names of the above-listed copyright holders may not be used
1732 + * to endorse or promote products derived from this software without
1733 + * specific prior written permission.
1735 + * ALTERNATIVELY, this software may be distributed under the terms of the
1736 + * GNU General Public License ("GPL") version 2, as published by the Free
1737 + * Software Foundation.
1739 + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS
1740 + * IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
1741 + * THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
1742 + * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR
1743 + * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
1744 + * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
1745 + * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
1746 + * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
1747 + * LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
1748 + * NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
1749 + * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
1755 +#include <linux/types.h>
1757 +typedef int32_t VCHI_MEM_HANDLE_T;
1758 +#define VCHI_MEM_HANDLE_INVALID 0
1762 +++ b/drivers/misc/vc04_services/interface/vchiq_arm/vchiq.h
1765 + * Copyright (c) 2010-2012 Broadcom. All rights reserved.
1767 + * Redistribution and use in source and binary forms, with or without
1768 + * modification, are permitted provided that the following conditions
1770 + * 1. Redistributions of source code must retain the above copyright
1771 + * notice, this list of conditions, and the following disclaimer,
1772 + * without modification.
1773 + * 2. Redistributions in binary form must reproduce the above copyright
1774 + * notice, this list of conditions and the following disclaimer in the
1775 + * documentation and/or other materials provided with the distribution.
1776 + * 3. The names of the above-listed copyright holders may not be used
1777 + * to endorse or promote products derived from this software without
1778 + * specific prior written permission.
1780 + * ALTERNATIVELY, this software may be distributed under the terms of the
1781 + * GNU General Public License ("GPL") version 2, as published by the Free
1782 + * Software Foundation.
1784 + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS
1785 + * IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
1786 + * THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
1787 + * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR
1788 + * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
1789 + * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
1790 + * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
1791 + * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
1792 + * LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
1793 + * NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
1794 + * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
1797 +#ifndef VCHIQ_VCHIQ_H
1798 +#define VCHIQ_VCHIQ_H
1800 +#include "vchiq_if.h"
1801 +#include "vchiq_util.h"
1805 +++ b/drivers/misc/vc04_services/interface/vchiq_arm/vchiq_2835.h
1808 + * Copyright (c) 2010-2012 Broadcom. All rights reserved.
1810 + * Redistribution and use in source and binary forms, with or without
1811 + * modification, are permitted provided that the following conditions
1813 + * 1. Redistributions of source code must retain the above copyright
1814 + * notice, this list of conditions, and the following disclaimer,
1815 + * without modification.
1816 + * 2. Redistributions in binary form must reproduce the above copyright
1817 + * notice, this list of conditions and the following disclaimer in the
1818 + * documentation and/or other materials provided with the distribution.
1819 + * 3. The names of the above-listed copyright holders may not be used
1820 + * to endorse or promote products derived from this software without
1821 + * specific prior written permission.
1823 + * ALTERNATIVELY, this software may be distributed under the terms of the
1824 + * GNU General Public License ("GPL") version 2, as published by the Free
1825 + * Software Foundation.
1827 + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS
1828 + * IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
1829 + * THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
1830 + * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR
1831 + * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
1832 + * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
1833 + * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
1834 + * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
1835 + * LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
1836 + * NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
1837 + * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
1840 +#ifndef VCHIQ_2835_H
1841 +#define VCHIQ_2835_H
1843 +#include "vchiq_pagelist.h"
1845 +#define VCHIQ_PLATFORM_FRAGMENTS_OFFSET_IDX 0
1846 +#define VCHIQ_PLATFORM_FRAGMENTS_COUNT_IDX 1
1848 +#endif /* VCHIQ_2835_H */
1850 +++ b/drivers/misc/vc04_services/interface/vchiq_arm/vchiq_2835_arm.c
1853 + * Copyright (c) 2010-2012 Broadcom. All rights reserved.
1855 + * Redistribution and use in source and binary forms, with or without
1856 + * modification, are permitted provided that the following conditions
1858 + * 1. Redistributions of source code must retain the above copyright
1859 + * notice, this list of conditions, and the following disclaimer,
1860 + * without modification.
1861 + * 2. Redistributions in binary form must reproduce the above copyright
1862 + * notice, this list of conditions and the following disclaimer in the
1863 + * documentation and/or other materials provided with the distribution.
1864 + * 3. The names of the above-listed copyright holders may not be used
1865 + * to endorse or promote products derived from this software without
1866 + * specific prior written permission.
1868 + * ALTERNATIVELY, this software may be distributed under the terms of the
1869 + * GNU General Public License ("GPL") version 2, as published by the Free
1870 + * Software Foundation.
1872 + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS
1873 + * IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
1874 + * THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
1875 + * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR
1876 + * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
1877 + * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
1878 + * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
1879 + * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
1880 + * LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
1881 + * NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
1882 + * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
1885 +#include <linux/kernel.h>
1886 +#include <linux/types.h>
1887 +#include <linux/errno.h>
1888 +#include <linux/interrupt.h>
1889 +#include <linux/pagemap.h>
1890 +#include <linux/dma-mapping.h>
1891 +#include <linux/version.h>
1892 +#include <linux/io.h>
1893 +#include <linux/platform_device.h>
1894 +#include <linux/uaccess.h>
1895 +#include <linux/of.h>
1896 +#include <asm/pgtable.h>
1897 +#include <soc/bcm2835/raspberrypi-firmware.h>
1899 +#define dmac_map_area __glue(_CACHE,_dma_map_area)
1900 +#define dmac_unmap_area __glue(_CACHE,_dma_unmap_area)
1902 +extern void dmac_map_area(const void *, size_t, int);
1903 +extern void dmac_unmap_area(const void *, size_t, int);
1905 +#define TOTAL_SLOTS (VCHIQ_SLOT_ZERO_SLOTS + 2 * 32)
1907 +#define VCHIQ_ARM_ADDRESS(x) ((void *)((char *)(x) + g_virt_to_bus_offset))
1909 +#include "vchiq_arm.h"
1910 +#include "vchiq_2835.h"
1911 +#include "vchiq_connected.h"
1912 +#include "vchiq_killable.h"
1914 +#define MAX_FRAGMENTS (VCHIQ_NUM_CURRENT_BULKS * 2)
1919 +typedef struct vchiq_2835_state_struct {
1921 + VCHIQ_ARM_STATE_T arm_state;
1922 +} VCHIQ_2835_ARM_STATE_T;
1924 +static void __iomem *g_regs;
1925 +static unsigned int g_cache_line_size = CACHE_LINE_SIZE;
1926 +static unsigned int g_fragments_size;
1927 +static char *g_fragments_base;
1928 +static char *g_free_fragments;
1929 +static struct semaphore g_free_fragments_sema;
1930 +static unsigned long g_virt_to_bus_offset;
1932 +extern int vchiq_arm_log_level;
1934 +static DEFINE_SEMAPHORE(g_free_fragments_mutex);
1937 +vchiq_doorbell_irq(int irq, void *dev_id);
1940 +create_pagelist(char __user *buf, size_t count, unsigned short type,
1941 + struct task_struct *task, PAGELIST_T ** ppagelist);
1944 +free_pagelist(PAGELIST_T *pagelist, int actual);
1946 +int vchiq_platform_init(struct platform_device *pdev, VCHIQ_STATE_T *state)
1948 + struct device *dev = &pdev->dev;
1949 + struct rpi_firmware *fw = platform_get_drvdata(pdev);
1950 + VCHIQ_SLOT_ZERO_T *vchiq_slot_zero;
1951 + struct resource *res;
1953 + dma_addr_t slot_phys;
1955 + int slot_mem_size, frag_mem_size;
1958 + g_virt_to_bus_offset = virt_to_dma(dev, (void *)0);
1960 + (void)of_property_read_u32(dev->of_node, "cache-line-size",
1961 + &g_cache_line_size);
1962 + g_fragments_size = 2 * g_cache_line_size;
1964 + /* Allocate space for the channels in coherent memory */
1965 + slot_mem_size = PAGE_ALIGN(TOTAL_SLOTS * VCHIQ_SLOT_SIZE);
1966 + frag_mem_size = PAGE_ALIGN(g_fragments_size * MAX_FRAGMENTS);
1968 + slot_mem = dmam_alloc_coherent(dev, slot_mem_size + frag_mem_size,
1969 + &slot_phys, GFP_KERNEL);
1971 + dev_err(dev, "could not allocate DMA memory\n");
1975 + WARN_ON(((int)slot_mem & (PAGE_SIZE - 1)) != 0);
1977 + vchiq_slot_zero = vchiq_init_slots(slot_mem, slot_mem_size);
1978 + if (!vchiq_slot_zero)
1981 + vchiq_slot_zero->platform_data[VCHIQ_PLATFORM_FRAGMENTS_OFFSET_IDX] =
1982 + (int)slot_phys + slot_mem_size;
1983 + vchiq_slot_zero->platform_data[VCHIQ_PLATFORM_FRAGMENTS_COUNT_IDX] =
1986 + g_fragments_base = (char *)slot_mem + slot_mem_size;
1987 + slot_mem_size += frag_mem_size;
1989 + g_free_fragments = g_fragments_base;
1990 + for (i = 0; i < (MAX_FRAGMENTS - 1); i++) {
1991 + *(char **)&g_fragments_base[i*g_fragments_size] =
1992 + &g_fragments_base[(i + 1)*g_fragments_size];
1994 + *(char **)&g_fragments_base[i * g_fragments_size] = NULL;
1995 + sema_init(&g_free_fragments_sema, MAX_FRAGMENTS);
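The loop above threads a singly linked free list through the fragment block itself: each free fragment's first bytes hold a pointer to the next free fragment, so no separate bookkeeping array is needed. A user-space sketch of the same intrusive free-list technique (pool and fragment sizes are illustrative, function names are mine):

```c
#include <assert.h>
#include <stddef.h>

#define FRAG_SIZE 32
#define NFRAGS    4

static char pool[NFRAGS * FRAG_SIZE];
static char *free_head;

/* Thread the 'next' pointers through the fragments themselves,
 * the same trick vchiq_platform_init uses for g_free_fragments. */
static void pool_init(void)
{
	int i;

	for (i = 0; i < NFRAGS - 1; i++)
		*(char **)&pool[i * FRAG_SIZE] = &pool[(i + 1) * FRAG_SIZE];
	*(char **)&pool[(NFRAGS - 1) * FRAG_SIZE] = NULL;
	free_head = pool;
}

/* Pop the head of the free list, or NULL if the pool is exhausted. */
static char *pool_alloc(void)
{
	char *f = free_head;

	if (f)
		free_head = *(char **)f;
	return f;
}

/* Push a fragment back onto the head of the free list. */
static void pool_free(char *f)
{
	*(char **)f = free_head;
	free_head = f;
}
```

In the driver the pop is additionally guarded by `g_free_fragments_mutex`, and `g_free_fragments_sema` counts free fragments so allocators can block until one is returned.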
1997 + if (vchiq_init_state(state, vchiq_slot_zero, 0) != VCHIQ_SUCCESS)
2000 + res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
2001 + g_regs = devm_ioremap_resource(&pdev->dev, res);
2002 + if (IS_ERR(g_regs))
2003 + return PTR_ERR(g_regs);
2005 + irq = platform_get_irq(pdev, 0);
2007 + dev_err(dev, "failed to get IRQ\n");
2011 + err = devm_request_irq(dev, irq, vchiq_doorbell_irq, IRQF_IRQPOLL,
2012 + "VCHIQ doorbell", state);
2014 + dev_err(dev, "failed to register irq=%d\n", irq);
2018 + /* Send the base address of the slots to VideoCore */
2019 + channelbase = slot_phys;
2020 + err = rpi_firmware_property(fw, RPI_FIRMWARE_VCHIQ_INIT,
2021 + &channelbase, sizeof(channelbase));
2022 + if (err || channelbase) {
2023 + dev_err(dev, "failed to set channelbase\n");
2024 + return err ? : -ENXIO;
2027 + vchiq_log_info(vchiq_arm_log_level,
2028 + "vchiq_init - done (slots %x, phys %pad)",
2029 + (unsigned int)vchiq_slot_zero, &slot_phys);
2031 + vchiq_call_connected_callbacks();
2037 +vchiq_platform_init_state(VCHIQ_STATE_T *state)
2039 + VCHIQ_STATUS_T status = VCHIQ_SUCCESS;
2040 + state->platform_state = kzalloc(sizeof(VCHIQ_2835_ARM_STATE_T), GFP_KERNEL);
2041 + ((VCHIQ_2835_ARM_STATE_T*)state->platform_state)->inited = 1;
2042 + status = vchiq_arm_init_state(state, &((VCHIQ_2835_ARM_STATE_T*)state->platform_state)->arm_state);
2043 + if(status != VCHIQ_SUCCESS)
2045 + ((VCHIQ_2835_ARM_STATE_T*)state->platform_state)->inited = 0;
2051 +vchiq_platform_get_arm_state(VCHIQ_STATE_T *state)
2053 + if(!((VCHIQ_2835_ARM_STATE_T*)state->platform_state)->inited)
2057 + return &((VCHIQ_2835_ARM_STATE_T*)state->platform_state)->arm_state;
2061 +remote_event_signal(REMOTE_EVENT_T *event)
2067 + dsb(); /* data barrier operation */
2070 + writel(0, g_regs + BELL2); /* trigger vc interrupt */
2074 +vchiq_copy_from_user(void *dst, const void *src, int size)
2076 + if ((uint32_t)src < TASK_SIZE) {
2077 + return copy_from_user(dst, src, size);
2079 + memcpy(dst, src, size);
2085 +vchiq_prepare_bulk_data(VCHIQ_BULK_T *bulk, VCHI_MEM_HANDLE_T memhandle,
2086 + void *offset, int size, int dir)
2088 + PAGELIST_T *pagelist;
2091 + WARN_ON(memhandle != VCHI_MEM_HANDLE_INVALID);
2093 + ret = create_pagelist((char __user *)offset, size,
2094 + (dir == VCHIQ_BULK_RECEIVE)
2100 + return VCHIQ_ERROR;
2102 + bulk->handle = memhandle;
2103 + bulk->data = VCHIQ_ARM_ADDRESS(pagelist);
2105 + /* Store the pagelist address in remote_data, which isn't used by the
2107 + bulk->remote_data = pagelist;
2109 + return VCHIQ_SUCCESS;
2113 +vchiq_complete_bulk(VCHIQ_BULK_T *bulk)
2115 + if (bulk && bulk->remote_data && bulk->actual)
2116 + free_pagelist((PAGELIST_T *)bulk->remote_data, bulk->actual);
2120 +vchiq_transfer_bulk(VCHIQ_BULK_T *bulk)
2123 + * This should only be called on the master (VideoCore) side, but
2124 + * provide an implementation to avoid the need for ifdefery.
2130 +vchiq_dump_platform_state(void *dump_context)
2134 + len = snprintf(buf, sizeof(buf),
2135 + " Platform: 2835 (VC master)");
2136 + vchiq_dump(dump_context, buf, len + 1);
2140 +vchiq_platform_suspend(VCHIQ_STATE_T *state)
2142 + return VCHIQ_ERROR;
2146 +vchiq_platform_resume(VCHIQ_STATE_T *state)
2148 + return VCHIQ_SUCCESS;
2152 +vchiq_platform_paused(VCHIQ_STATE_T *state)
2157 +vchiq_platform_resumed(VCHIQ_STATE_T *state)
2162 +vchiq_platform_videocore_wanted(VCHIQ_STATE_T* state)
2164 + return 1; // autosuspend not supported - videocore always wanted
2168 +vchiq_platform_use_suspend_timer(void)
2173 +vchiq_dump_platform_use_state(VCHIQ_STATE_T *state)
2175 + vchiq_log_info(vchiq_arm_log_level, "Suspend timer not in use");
2178 +vchiq_platform_handle_timeout(VCHIQ_STATE_T *state)
2187 +vchiq_doorbell_irq(int irq, void *dev_id)
2189 + VCHIQ_STATE_T *state = dev_id;
2190 + irqreturn_t ret = IRQ_NONE;
2191 + unsigned int status;
2193 + /* Read (and clear) the doorbell */
2194 + status = readl(g_regs + BELL0);
2196 + if (status & 0x4) { /* Was the doorbell rung? */
2197 + remote_event_pollall(state);
2198 + ret = IRQ_HANDLED;
2204 +/* There is a potential problem with partial cache lines (pages?)
2205 +** at the ends of the block when reading. If the CPU accessed anything in
2206 +** the same line (page?) then it may have pulled old data into the cache,
2207 +** obscuring the new data underneath. We can solve this by transferring the
2208 +** partial cache lines separately, and allowing the ARM to copy into the
2211 +** N.B. This implementation plays slightly fast and loose with the Linux
2212 +** driver programming rules, e.g. its use of dmac_map_area instead of
2213 +** dma_map_single, but it isn't a multi-platform driver and it benefits
2214 +** from increased speed as a result.
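The partial cache lines described above are found with simple mask arithmetic: the head fragment is however many bytes remain before the first cache-line boundary, and the tail fragment is however many bytes spill past the last one. A sketch of the arithmetic used later in free_pagelist() (a 64-byte line size is assumed for illustration; the helper name is mine):

```c
#include <assert.h>

/* Bytes of a transfer that fall in a partial cache line at the start
 * (head) and end (tail). 'line' must be a power of two. */
static void partial_lines(unsigned int offset, unsigned int actual,
			  unsigned int line, unsigned int *head,
			  unsigned int *tail)
{
	*head = (line - offset) & (line - 1);
	*tail = (offset + actual) & (line - 1);
}
```

When both results are zero the transfer is fully line-aligned and no fragment buffer is needed; otherwise the head/tail bytes are received into a separate fragment and copied back by the ARM.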
2218 +create_pagelist(char __user *buf, size_t count, unsigned short type,
2219 + struct task_struct *task, PAGELIST_T ** ppagelist)
2221 + PAGELIST_T *pagelist;
2222 + struct page **pages;
2223 + unsigned long *addrs;
2224 + unsigned int num_pages, offset, i;
2225 + char *addr, *base_addr, *next_addr;
2226 + int run, addridx, actual_pages;
2227 + unsigned long *need_release;
2229 + offset = (unsigned int)buf & (PAGE_SIZE - 1);
2230 + num_pages = (count + offset + PAGE_SIZE - 1) / PAGE_SIZE;
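The two lines above compute the sub-page offset of the user buffer and the number of pages the transfer touches, rounding up at both ends. The same arithmetic as a standalone sketch (4 KiB pages assumed, helper name mine):

```c
#include <assert.h>

#define PG 4096UL

/* Number of pages covering 'count' bytes starting at address 'buf',
 * mirroring the offset/num_pages arithmetic in create_pagelist(). */
static unsigned long pages_needed(unsigned long buf, unsigned long count)
{
	unsigned long offset = buf & (PG - 1);

	return (count + offset + PG - 1) / PG;
}
```

Note that a page-sized transfer starting mid-page straddles two pages, which is why the offset must be added before rounding up.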
2232 + *ppagelist = NULL;
2234 + /* Allocate enough storage to hold the page pointers and the page
2237 + pagelist = kmalloc(sizeof(PAGELIST_T) +
2238 + (num_pages * sizeof(unsigned long)) +
2239 + sizeof(unsigned long) +
2240 + (num_pages * sizeof(pages[0])),
2243 + vchiq_log_trace(vchiq_arm_log_level,
2244 + "create_pagelist - %x", (unsigned int)pagelist);
2248 + addrs = pagelist->addrs;
2249 + need_release = (unsigned long *)(addrs + num_pages);
2250 + pages = (struct page **)(addrs + num_pages + 1);
2252 + if (is_vmalloc_addr(buf)) {
2253 + int dir = (type == PAGELIST_WRITE) ?
2254 + DMA_TO_DEVICE : DMA_FROM_DEVICE;
2255 + unsigned long length = count;
2256 + unsigned int off = offset;
2258 + for (actual_pages = 0; actual_pages < num_pages;
2260 + struct page *pg = vmalloc_to_page(buf + (actual_pages *
2262 + size_t bytes = PAGE_SIZE - off;
2264 + if (bytes > length)
2266 + pages[actual_pages] = pg;
2267 + dmac_map_area(page_address(pg) + off, bytes, dir);
2271 + *need_release = 0; /* do not try and release vmalloc pages */
2273 + down_read(&task->mm->mmap_sem);
2274 + actual_pages = get_user_pages(task, task->mm,
2275 + (unsigned long)buf & ~(PAGE_SIZE - 1),
2277 + (type == PAGELIST_READ) /*Write */ ,
2281 + up_read(&task->mm->mmap_sem);
2283 + if (actual_pages != num_pages) {
2284 + vchiq_log_info(vchiq_arm_log_level,
2285 + "create_pagelist - only %d/%d pages locked",
2289 + /* This is probably due to the process being killed */
2290 + while (actual_pages > 0)
2293 + page_cache_release(pages[actual_pages]);
2296 + if (actual_pages == 0)
2297 + actual_pages = -ENOMEM;
2298 + return actual_pages;
2300 + *need_release = 1; /* release user pages */
2303 + pagelist->length = count;
2304 + pagelist->type = type;
2305 + pagelist->offset = offset;
2307 + /* Group the pages into runs of contiguous pages */
2309 + base_addr = VCHIQ_ARM_ADDRESS(page_address(pages[0]));
2310 + next_addr = base_addr + PAGE_SIZE;
2314 + for (i = 1; i < num_pages; i++) {
2315 + addr = VCHIQ_ARM_ADDRESS(page_address(pages[i]));
2316 + if ((addr == next_addr) && (run < (PAGE_SIZE - 1))) {
2317 + next_addr += PAGE_SIZE;
2320 + addrs[addridx] = (unsigned long)base_addr + run;
2323 + next_addr = addr + PAGE_SIZE;
2328 + addrs[addridx] = (unsigned long)base_addr + run;
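The loop above compacts the page list into runs: each output entry packs a run's base address plus a count of additional contiguous pages into a single word. A user-space sketch of the same compaction (4 KiB pages assumed; the run-length cap applied by the driver is omitted, and the function name is mine):

```c
#include <assert.h>
#include <stddef.h>

#define PG 4096UL

/* Compact an ordered list of page base addresses into (base + run)
 * entries, where 'run' counts extra contiguous pages after the base,
 * as create_pagelist() does when filling pagelist->addrs. */
static size_t group_runs(const unsigned long *pages, size_t n,
			 unsigned long *out)
{
	size_t idx = 0;
	unsigned long base = pages[0];
	unsigned long next = base + PG;
	unsigned long run = 0;
	size_t i;

	for (i = 1; i < n; i++) {
		if (pages[i] == next) {
			next += PG;
			run++;
		} else {
			out[idx++] = base + run;
			base = pages[i];
			next = base + PG;
			run = 0;
		}
	}
	out[idx++] = base + run;
	return idx;
}
```

Because page bases are page-aligned, the low bits of each entry are free to hold the run count, which is how the driver fits both values into one `unsigned long`.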
2331 + /* Partial cache lines (fragments) require special measures */
2332 + if ((type == PAGELIST_READ) &&
2333 + ((pagelist->offset & (g_cache_line_size - 1)) ||
2334 + ((pagelist->offset + pagelist->length) &
2335 + (g_cache_line_size - 1)))) {
2338 + if (down_interruptible(&g_free_fragments_sema) != 0) {
2343 + WARN_ON(g_free_fragments == NULL);
2345 + down(&g_free_fragments_mutex);
2346 + fragments = g_free_fragments;
2347 + WARN_ON(fragments == NULL);
2348 + g_free_fragments = *(char **) g_free_fragments;
2349 + up(&g_free_fragments_mutex);
2350 + pagelist->type = PAGELIST_READ_WITH_FRAGMENTS +
2351 + (fragments - g_fragments_base) / g_fragments_size;
2354 + dmac_flush_range(pagelist, addrs + num_pages);
2356 + *ppagelist = pagelist;
2362 +free_pagelist(PAGELIST_T *pagelist, int actual)
2364 + unsigned long *need_release;
2365 + struct page **pages;
2366 + unsigned int num_pages, i;
2368 + vchiq_log_trace(vchiq_arm_log_level,
2369 + "free_pagelist - %x, %d", (unsigned int)pagelist, actual);
2372 + (pagelist->length + pagelist->offset + PAGE_SIZE - 1) /
2375 + need_release = (unsigned long *)(pagelist->addrs + num_pages);
2376 + pages = (struct page **)(pagelist->addrs + num_pages + 1);
2378 + /* Deal with any partial cache lines (fragments) */
2379 + if (pagelist->type >= PAGELIST_READ_WITH_FRAGMENTS) {
2380 + char *fragments = g_fragments_base +
2381 + (pagelist->type - PAGELIST_READ_WITH_FRAGMENTS) *
2383 + int head_bytes, tail_bytes;
2384 + head_bytes = (g_cache_line_size - pagelist->offset) &
2385 + (g_cache_line_size - 1);
2386 + tail_bytes = (pagelist->offset + actual) &
2387 + (g_cache_line_size - 1);
2389 + if ((actual >= 0) && (head_bytes != 0)) {
2390 + if (head_bytes > actual)
2391 + head_bytes = actual;
2393 + memcpy((char *)page_address(pages[0]) +
2398 + if ((actual >= 0) && (head_bytes < actual) &&
2399 + (tail_bytes != 0)) {
2400 + memcpy((char *)page_address(pages[num_pages - 1]) +
2401 + ((pagelist->offset + actual) &
2402 + (PAGE_SIZE - 1) & ~(g_cache_line_size - 1)),
2403 + fragments + g_cache_line_size,
2407 + down(&g_free_fragments_mutex);
2408 + *(char **)fragments = g_free_fragments;
2409 + g_free_fragments = fragments;
2410 + up(&g_free_fragments_mutex);
2411 + up(&g_free_fragments_sema);
2414 + if (*need_release) {
2415 + unsigned int length = pagelist->length;
2416 + unsigned int offset = pagelist->offset;
2418 + for (i = 0; i < num_pages; i++) {
2419 + struct page *pg = pages[i];
2421 + if (pagelist->type != PAGELIST_WRITE) {
2422 + unsigned int bytes = PAGE_SIZE - offset;
2424 + if (bytes > length)
2426 + dmac_unmap_area(page_address(pg) + offset,
2427 + bytes, DMA_FROM_DEVICE);
2430 + set_page_dirty(pg);
2432 + page_cache_release(pg);
2439 +++ b/drivers/misc/vc04_services/interface/vchiq_arm/vchiq_arm.c
2442 + * Copyright (c) 2014 Raspberry Pi (Trading) Ltd. All rights reserved.
2443 + * Copyright (c) 2010-2012 Broadcom. All rights reserved.
2445 + * Redistribution and use in source and binary forms, with or without
2446 + * modification, are permitted provided that the following conditions
2448 + * 1. Redistributions of source code must retain the above copyright
2449 + * notice, this list of conditions, and the following disclaimer,
2450 + * without modification.
2451 + * 2. Redistributions in binary form must reproduce the above copyright
2452 + * notice, this list of conditions and the following disclaimer in the
2453 + * documentation and/or other materials provided with the distribution.
2454 + * 3. The names of the above-listed copyright holders may not be used
2455 + * to endorse or promote products derived from this software without
2456 + * specific prior written permission.
2458 + * ALTERNATIVELY, this software may be distributed under the terms of the
2459 + * GNU General Public License ("GPL") version 2, as published by the Free
2460 + * Software Foundation.
2462 + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS
2463 + * IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
2464 + * THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
2465 + * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR
2466 + * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
2467 + * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
2468 + * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
2469 + * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
2470 + * LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
2471 + * NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
2472 + * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
2475 +#include <linux/kernel.h>
2476 +#include <linux/module.h>
2477 +#include <linux/types.h>
2478 +#include <linux/errno.h>
2479 +#include <linux/cdev.h>
2480 +#include <linux/fs.h>
2481 +#include <linux/device.h>
2482 +#include <linux/mm.h>
2483 +#include <linux/highmem.h>
2484 +#include <linux/pagemap.h>
2485 +#include <linux/bug.h>
2486 +#include <linux/semaphore.h>
2487 +#include <linux/list.h>
2488 +#include <linux/of.h>
2489 +#include <linux/platform_device.h>
2490 +#include <soc/bcm2835/raspberrypi-firmware.h>
2492 +#include "vchiq_core.h"
2493 +#include "vchiq_ioctl.h"
2494 +#include "vchiq_arm.h"
2495 +#include "vchiq_debugfs.h"
2496 +#include "vchiq_killable.h"
2498 +#define DEVICE_NAME "vchiq"
2500 +/* Override the default prefix, which would be vchiq_arm (from the filename) */
2501 +#undef MODULE_PARAM_PREFIX
2502 +#define MODULE_PARAM_PREFIX DEVICE_NAME "."
2504 +#define VCHIQ_MINOR 0
2506 +/* Some per-instance constants */
2507 +#define MAX_COMPLETIONS 16
2508 +#define MAX_SERVICES 64
2509 +#define MAX_ELEMENTS 8
2510 +#define MSG_QUEUE_SIZE 64
2512 +#define KEEPALIVE_VER 1
2513 +#define KEEPALIVE_VER_MIN KEEPALIVE_VER
2515 +/* Run time control of log level, based on KERN_XXX level. */
2516 +int vchiq_arm_log_level = VCHIQ_LOG_DEFAULT;
2517 +int vchiq_susp_log_level = VCHIQ_LOG_ERROR;
2519 +#define SUSPEND_TIMER_TIMEOUT_MS 100
2520 +#define SUSPEND_RETRY_TIMER_TIMEOUT_MS 1000
2522 +#define VC_SUSPEND_NUM_OFFSET 3 /* number of values before idle which are -ve */
2523 +static const char *const suspend_state_names[] = {
2524 + "VC_SUSPEND_FORCE_CANCELED",
2525 + "VC_SUSPEND_REJECTED",
2526 + "VC_SUSPEND_FAILED",
2527 + "VC_SUSPEND_IDLE",
2528 + "VC_SUSPEND_REQUESTED",
2529 + "VC_SUSPEND_IN_PROGRESS",
2530 + "VC_SUSPEND_SUSPENDED"
2532 +#define VC_RESUME_NUM_OFFSET 1 /* number of values before idle which are -ve */
2533 +static const char *const resume_state_names[] = {
2534 + "VC_RESUME_FAILED",
2536 + "VC_RESUME_REQUESTED",
2537 + "VC_RESUME_IN_PROGRESS",
2538 + "VC_RESUME_RESUMED"
2540 +/* The number of times we allow force suspend to time out before actually
2541 +** _forcing_ suspend. This is to cater for SW which fails to release vchiq
2542 +** correctly - we don't want to prevent ARM suspend indefinitely in this case.
2544 +#define FORCE_SUSPEND_FAIL_MAX 8
2546 +/* The time in ms allowed for videocore to go idle when force suspend has been
2548 +#define FORCE_SUSPEND_TIMEOUT_MS 200
2551 +static void suspend_timer_callback(unsigned long context);
2554 +typedef struct user_service_struct {
2555 + VCHIQ_SERVICE_T *service;
2557 + VCHIQ_INSTANCE_T instance;
2559 + char dequeue_pending;
2560 + char close_pending;
2561 + int message_available_pos;
2564 + struct semaphore insert_event;
2565 + struct semaphore remove_event;
2566 + struct semaphore close_event;
2567 + VCHIQ_HEADER_T * msg_queue[MSG_QUEUE_SIZE];
2570 +struct bulk_waiter_node {
2571 + struct bulk_waiter bulk_waiter;
2573 + struct list_head list;
2576 +struct vchiq_instance_struct {
2577 + VCHIQ_STATE_T *state;
2578 + VCHIQ_COMPLETION_DATA_T completions[MAX_COMPLETIONS];
2579 + int completion_insert;
2580 + int completion_remove;
2581 + struct semaphore insert_event;
2582 + struct semaphore remove_event;
2583 + struct mutex completion_mutex;
2589 + int use_close_delivered;
2592 + struct list_head bulk_waiter_list;
2593 + struct mutex bulk_waiter_list_mutex;
2595 + VCHIQ_DEBUGFS_NODE_T debugfs_node;
2598 +typedef struct dump_context_struct {
2605 +static struct cdev vchiq_cdev;
2606 +static dev_t vchiq_devid;
2607 +static VCHIQ_STATE_T g_state;
2608 +static struct class *vchiq_class;
2609 +static struct device *vchiq_dev;
2610 +static DEFINE_SPINLOCK(msg_queue_spinlock);
2612 +static const char *const ioctl_names[] = {
2618 + "QUEUE_BULK_TRANSMIT",
2619 + "QUEUE_BULK_RECEIVE",
2620 + "AWAIT_COMPLETION",
2621 + "DEQUEUE_MESSAGE",
2626 + "RELEASE_SERVICE",
2627 + "SET_SERVICE_OPTION",
2633 +vchiq_static_assert((sizeof(ioctl_names)/sizeof(ioctl_names[0])) ==
2634 + (VCHIQ_IOC_MAX + 1));
2637 +dump_phys_mem(void *virt_addr, uint32_t num_bytes);
2639 +/****************************************************************************
2643 +***************************************************************************/
2645 +static VCHIQ_STATUS_T
2646 +add_completion(VCHIQ_INSTANCE_T instance, VCHIQ_REASON_T reason,
2647 + VCHIQ_HEADER_T *header, USER_SERVICE_T *user_service,
2648 + void *bulk_userdata)
2650 + VCHIQ_COMPLETION_DATA_T *completion;
2651 + DEBUG_INITIALISE(g_state.local)
2653 + while (instance->completion_insert ==
2654 + (instance->completion_remove + MAX_COMPLETIONS)) {
2655 + /* Out of space - wait for the client */
2656 + DEBUG_TRACE(SERVICE_CALLBACK_LINE);
2657 + vchiq_log_trace(vchiq_arm_log_level,
2658 + "add_completion - completion queue full");
2659 + DEBUG_COUNT(COMPLETION_QUEUE_FULL_COUNT);
2660 + if (down_interruptible(&instance->remove_event) != 0) {
2661 + vchiq_log_info(vchiq_arm_log_level,
2662 + "service_callback interrupted");
2663 + return VCHIQ_RETRY;
2664 + } else if (instance->closing) {
2665 + vchiq_log_info(vchiq_arm_log_level,
2666 + "service_callback closing");
2667 + return VCHIQ_ERROR;
2669 + DEBUG_TRACE(SERVICE_CALLBACK_LINE);
2673 + &instance->completions[instance->completion_insert &
2674 + (MAX_COMPLETIONS - 1)];
2676 + completion->header = header;
2677 + completion->reason = reason;
2678 + /* N.B. service_userdata is updated while processing AWAIT_COMPLETION */
2679 + completion->service_userdata = user_service->service;
2680 + completion->bulk_userdata = bulk_userdata;
2682 + if (reason == VCHIQ_SERVICE_CLOSED) {
2683 + /* Take an extra reference, to be held until
2684 + this CLOSED notification is delivered. */
2685 + lock_service(user_service->service);
2686 + if (instance->use_close_delivered)
2687 + user_service->close_pending = 1;
2690 + /* A write barrier is needed here to ensure that the entire completion
2691 + record is written out before the insert point. */
2694 + if (reason == VCHIQ_MESSAGE_AVAILABLE)
2695 + user_service->message_available_pos =
2696 + instance->completion_insert;
2697 + instance->completion_insert++;
2699 + up(&instance->insert_event);
2701 + return VCHIQ_SUCCESS;
2704 +/****************************************************************************
2708 +***************************************************************************/
2710 +static VCHIQ_STATUS_T
2711 +service_callback(VCHIQ_REASON_T reason, VCHIQ_HEADER_T *header,
2712 + VCHIQ_SERVICE_HANDLE_T handle, void *bulk_userdata)
2714 + /* How do we ensure the callback goes to the right client?
2715 + ** The service_user data points to a USER_SERVICE_T record containing
2716 + ** the original callback and the user state structure, which contains a
2717 + ** circular buffer for completion records.
2719 + USER_SERVICE_T *user_service;
2720 + VCHIQ_SERVICE_T *service;
2721 + VCHIQ_INSTANCE_T instance;
2722 + DEBUG_INITIALISE(g_state.local)
2724 + DEBUG_TRACE(SERVICE_CALLBACK_LINE);
2726 + service = handle_to_service(handle);
2728 + user_service = (USER_SERVICE_T *)service->base.userdata;
2729 + instance = user_service->instance;
2731 + if (!instance || instance->closing)
2732 + return VCHIQ_SUCCESS;
2734 + vchiq_log_trace(vchiq_arm_log_level,
2735 + "service_callback - service %lx(%d,%p), reason %d, header %lx, "
2736 + "instance %lx, bulk_userdata %lx",
2737 + (unsigned long)user_service,
2738 + service->localport, user_service->userdata,
2739 + reason, (unsigned long)header,
2740 + (unsigned long)instance, (unsigned long)bulk_userdata);
2742 + if (header && user_service->is_vchi) {
2743 + spin_lock(&msg_queue_spinlock);
2744 + while (user_service->msg_insert ==
2745 + (user_service->msg_remove + MSG_QUEUE_SIZE)) {
2746 + spin_unlock(&msg_queue_spinlock);
2747 + DEBUG_TRACE(SERVICE_CALLBACK_LINE);
2748 + DEBUG_COUNT(MSG_QUEUE_FULL_COUNT);
2749 + vchiq_log_trace(vchiq_arm_log_level,
2750 + "service_callback - msg queue full");
2751 + /* If there is no MESSAGE_AVAILABLE in the completion
2754 + if ((user_service->message_available_pos -
2755 + instance->completion_remove) < 0) {
2756 + VCHIQ_STATUS_T status;
2757 + vchiq_log_info(vchiq_arm_log_level,
2758 + "Inserting extra MESSAGE_AVAILABLE");
2759 + DEBUG_TRACE(SERVICE_CALLBACK_LINE);
2760 + status = add_completion(instance, reason,
2761 + NULL, user_service, bulk_userdata);
2762 + if (status != VCHIQ_SUCCESS) {
2763 + DEBUG_TRACE(SERVICE_CALLBACK_LINE);
2768 + DEBUG_TRACE(SERVICE_CALLBACK_LINE);
2769 + if (down_interruptible(&user_service->remove_event)
2771 + vchiq_log_info(vchiq_arm_log_level,
2772 + "service_callback interrupted");
2773 + DEBUG_TRACE(SERVICE_CALLBACK_LINE);
2774 + return VCHIQ_RETRY;
2775 + } else if (instance->closing) {
2776 + vchiq_log_info(vchiq_arm_log_level,
2777 + "service_callback closing");
2778 + DEBUG_TRACE(SERVICE_CALLBACK_LINE);
2779 + return VCHIQ_ERROR;
2781 + DEBUG_TRACE(SERVICE_CALLBACK_LINE);
2782 + spin_lock(&msg_queue_spinlock);
2785 + user_service->msg_queue[user_service->msg_insert &
2786 + (MSG_QUEUE_SIZE - 1)] = header;
2787 + user_service->msg_insert++;
2788 + spin_unlock(&msg_queue_spinlock);
2790 + up(&user_service->insert_event);
2792 + /* If there is a thread waiting in DEQUEUE_MESSAGE, or if
2793 + ** there is a MESSAGE_AVAILABLE in the completion queue then
2794 + ** bypass the completion queue.
2796 + if (((user_service->message_available_pos -
2797 + instance->completion_remove) >= 0) ||
2798 + user_service->dequeue_pending) {
2799 + DEBUG_TRACE(SERVICE_CALLBACK_LINE);
2800 + user_service->dequeue_pending = 0;
2801 + return VCHIQ_SUCCESS;
2806 + DEBUG_TRACE(SERVICE_CALLBACK_LINE);
2808 + return add_completion(instance, reason, header, user_service,
2812 +/****************************************************************************
2814 +* user_service_free
2816 +***************************************************************************/
2818 +user_service_free(void *userdata)
2823 +/****************************************************************************
2827 +***************************************************************************/
2828 +static void close_delivered(USER_SERVICE_T *user_service)
2830 + vchiq_log_info(vchiq_arm_log_level,
2831 + "close_delivered(handle=%x)",
2832 + user_service->service->handle);
2834 + if (user_service->close_pending) {
2835 + /* Allow the underlying service to be culled */
2836 + unlock_service(user_service->service);
2838 + /* Wake the user-thread blocked in close_ or remove_service */
2839 + up(&user_service->close_event);
2841 + user_service->close_pending = 0;
2845 +/****************************************************************************
2849 +***************************************************************************/
2851 +vchiq_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
2853 + VCHIQ_INSTANCE_T instance = file->private_data;
2854 + VCHIQ_STATUS_T status = VCHIQ_SUCCESS;
2855 + VCHIQ_SERVICE_T *service = NULL;
2858 + DEBUG_INITIALISE(g_state.local)
2860 + vchiq_log_trace(vchiq_arm_log_level,
2861 + "vchiq_ioctl - instance %x, cmd %s, arg %lx",
2862 + (unsigned int)instance,
2863 + ((_IOC_TYPE(cmd) == VCHIQ_IOC_MAGIC) &&
2864 + (_IOC_NR(cmd) <= VCHIQ_IOC_MAX)) ?
2865 + ioctl_names[_IOC_NR(cmd)] : "<invalid>", arg);
2868 + case VCHIQ_IOC_SHUTDOWN:
2869 + if (!instance->connected)
2872 + /* Remove all services */
2874 + while ((service = next_service_by_instance(instance->state,
2875 + instance, &i)) != NULL) {
2876 + status = vchiq_remove_service(service->handle);
2877 + unlock_service(service);
2878 + if (status != VCHIQ_SUCCESS)
2883 + if (status == VCHIQ_SUCCESS) {
2884 + /* Wake the completion thread and ask it to exit */
2885 + instance->closing = 1;
2886 + up(&instance->insert_event);
2891 + case VCHIQ_IOC_CONNECT:
2892 + if (instance->connected) {
2896 + rc = mutex_lock_interruptible(&instance->state->mutex);
2898 + vchiq_log_error(vchiq_arm_log_level,
2899 + "vchiq: connect: could not lock mutex for "
2901 + instance->state->id, rc);
2905 + status = vchiq_connect_internal(instance->state, instance);
2906 + mutex_unlock(&instance->state->mutex);
2908 + if (status == VCHIQ_SUCCESS)
2909 + instance->connected = 1;
2911 + vchiq_log_error(vchiq_arm_log_level,
2912 + "vchiq: could not connect: %d", status);
2915 + case VCHIQ_IOC_CREATE_SERVICE: {
2916 + VCHIQ_CREATE_SERVICE_T args;
2917 + USER_SERVICE_T *user_service = NULL;
2921 + if (copy_from_user
2922 + (&args, (const void __user *)arg,
2923 + sizeof(args)) != 0) {
2928 + user_service = kmalloc(sizeof(USER_SERVICE_T), GFP_KERNEL);
2929 + if (!user_service) {
2934 + if (args.is_open) {
2935 + if (!instance->connected) {
2937 + kfree(user_service);
2940 + srvstate = VCHIQ_SRVSTATE_OPENING;
2943 + instance->connected ?
2944 + VCHIQ_SRVSTATE_LISTENING :
2945 + VCHIQ_SRVSTATE_HIDDEN;
2948 + userdata = args.params.userdata;
2949 + args.params.callback = service_callback;
2950 + args.params.userdata = user_service;
2951 + service = vchiq_add_service_internal(
2953 + &args.params, srvstate,
2954 + instance, user_service_free);
2956 + if (service != NULL) {
2957 + user_service->service = service;
2958 + user_service->userdata = userdata;
2959 + user_service->instance = instance;
2960 + user_service->is_vchi = (args.is_vchi != 0);
2961 + user_service->dequeue_pending = 0;
2962 + user_service->close_pending = 0;
2963 + user_service->message_available_pos =
2964 + instance->completion_remove - 1;
2965 + user_service->msg_insert = 0;
2966 + user_service->msg_remove = 0;
2967 + sema_init(&user_service->insert_event, 0);
2968 + sema_init(&user_service->remove_event, 0);
2969 + sema_init(&user_service->close_event, 0);
2971 + if (args.is_open) {
2972 + status = vchiq_open_service_internal
2973 + (service, instance->pid);
2974 + if (status != VCHIQ_SUCCESS) {
2975 + vchiq_remove_service(service->handle);
2977 + ret = (status == VCHIQ_RETRY) ?
2983 + if (copy_to_user((void __user *)
2984 + &(((VCHIQ_CREATE_SERVICE_T __user *)
2986 + (const void *)&service->handle,
2987 + sizeof(service->handle)) != 0) {
2989 + vchiq_remove_service(service->handle);
2995 + kfree(user_service);
2999 + case VCHIQ_IOC_CLOSE_SERVICE: {
3000 + VCHIQ_SERVICE_HANDLE_T handle = (VCHIQ_SERVICE_HANDLE_T)arg;
3002 + service = find_service_for_instance(instance, handle);
3003 + if (service != NULL) {
3004 + USER_SERVICE_T *user_service =
3005 + (USER_SERVICE_T *)service->base.userdata;
3006 + /* close_pending is false on first entry, and when the
3007 + wait in vchiq_close_service has been interrupted. */
3008 + if (!user_service->close_pending) {
3009 + status = vchiq_close_service(service->handle);
3010 + if (status != VCHIQ_SUCCESS)
3014 + /* close_pending is true once the underlying service
3015 + has been closed until the client library calls the
3016 + CLOSE_DELIVERED ioctl, signalling close_event. */
3017 + if (user_service->close_pending &&
3018 + down_interruptible(&user_service->close_event))
3019 + status = VCHIQ_RETRY;
3025 + case VCHIQ_IOC_REMOVE_SERVICE: {
3026 + VCHIQ_SERVICE_HANDLE_T handle = (VCHIQ_SERVICE_HANDLE_T)arg;
3028 + service = find_service_for_instance(instance, handle);
3029 + if (service != NULL) {
3030 + USER_SERVICE_T *user_service =
3031 + (USER_SERVICE_T *)service->base.userdata;
3032 + /* close_pending is false on first entry, and when the
3033 + wait in vchiq_close_service has been interrupted. */
3034 + if (!user_service->close_pending) {
3035 + status = vchiq_remove_service(service->handle);
3036 + if (status != VCHIQ_SUCCESS)
3040 + /* close_pending is true once the underlying service
3041 + has been closed until the client library calls the
3042 + CLOSE_DELIVERED ioctl, signalling close_event. */
3043 + if (user_service->close_pending &&
3044 + down_interruptible(&user_service->close_event))
3045 + status = VCHIQ_RETRY;
3051 + case VCHIQ_IOC_USE_SERVICE:
3052 + case VCHIQ_IOC_RELEASE_SERVICE: {
3053 + VCHIQ_SERVICE_HANDLE_T handle = (VCHIQ_SERVICE_HANDLE_T)arg;
3055 + service = find_service_for_instance(instance, handle);
3056 + if (service != NULL) {
3057 + status = (cmd == VCHIQ_IOC_USE_SERVICE) ?
3058 + vchiq_use_service_internal(service) :
3059 + vchiq_release_service_internal(service);
3060 + if (status != VCHIQ_SUCCESS) {
3061 + vchiq_log_error(vchiq_susp_log_level,
3062 + "%s: cmd %s returned error %d for "
3063 + "service %c%c%c%c:%03d",
3065 + (cmd == VCHIQ_IOC_USE_SERVICE) ?
3066 + "VCHIQ_IOC_USE_SERVICE" :
3067 + "VCHIQ_IOC_RELEASE_SERVICE",
3069 + VCHIQ_FOURCC_AS_4CHARS(
3070 + service->base.fourcc),
3071 + service->client_id);
3078 + case VCHIQ_IOC_QUEUE_MESSAGE: {
3079 + VCHIQ_QUEUE_MESSAGE_T args;
3080 + if (copy_from_user
3081 + (&args, (const void __user *)arg,
3082 + sizeof(args)) != 0) {
3087 + service = find_service_for_instance(instance, args.handle);
3089 + if ((service != NULL) && (args.count <= MAX_ELEMENTS)) {
3090 + /* Copy elements into kernel space */
3091 + VCHIQ_ELEMENT_T elements[MAX_ELEMENTS];
3092 + if (copy_from_user(elements, args.elements,
3093 + args.count * sizeof(VCHIQ_ELEMENT_T)) == 0)
3094 + status = vchiq_queue_message
3096 + elements, args.count);
3104 + case VCHIQ_IOC_QUEUE_BULK_TRANSMIT:
3105 + case VCHIQ_IOC_QUEUE_BULK_RECEIVE: {
3106 + VCHIQ_QUEUE_BULK_TRANSFER_T args;
3107 + struct bulk_waiter_node *waiter = NULL;
3108 + VCHIQ_BULK_DIR_T dir =
3109 + (cmd == VCHIQ_IOC_QUEUE_BULK_TRANSMIT) ?
3110 + VCHIQ_BULK_TRANSMIT : VCHIQ_BULK_RECEIVE;
3112 + if (copy_from_user
3113 + (&args, (const void __user *)arg,
3114 + sizeof(args)) != 0) {
3119 + service = find_service_for_instance(instance, args.handle);
3125 + if (args.mode == VCHIQ_BULK_MODE_BLOCKING) {
3126 + waiter = kzalloc(sizeof(struct bulk_waiter_node),
3132 + args.userdata = &waiter->bulk_waiter;
3133 + } else if (args.mode == VCHIQ_BULK_MODE_WAITING) {
3134 + struct list_head *pos;
3135 + mutex_lock(&instance->bulk_waiter_list_mutex);
3136 + list_for_each(pos, &instance->bulk_waiter_list) {
3137 + if (list_entry(pos, struct bulk_waiter_node,
3138 + list)->pid == current->pid) {
3139 + waiter = list_entry(pos,
3140 + struct bulk_waiter_node,
3147 + mutex_unlock(&instance->bulk_waiter_list_mutex);
3149 + vchiq_log_error(vchiq_arm_log_level,
3150 + "no bulk_waiter found for pid %d",
3155 + vchiq_log_info(vchiq_arm_log_level,
3156 + "found bulk_waiter %x for pid %d",
3157 + (unsigned int)waiter, current->pid);
3158 + args.userdata = &waiter->bulk_waiter;
3160 + status = vchiq_bulk_transfer
3162 + VCHI_MEM_HANDLE_INVALID,
3163 + args.data, args.size,
3164 + args.userdata, args.mode,
3168 + if ((status != VCHIQ_RETRY) || fatal_signal_pending(current) ||
3169 + !waiter->bulk_waiter.bulk) {
3170 + if (waiter->bulk_waiter.bulk) {
3171 + /* Cancel the signal when the transfer
3173 + spin_lock(&bulk_waiter_spinlock);
3174 + waiter->bulk_waiter.bulk->userdata = NULL;
3175 + spin_unlock(&bulk_waiter_spinlock);
3179 + const VCHIQ_BULK_MODE_T mode_waiting =
3180 + VCHIQ_BULK_MODE_WAITING;
3181 + waiter->pid = current->pid;
3182 + mutex_lock(&instance->bulk_waiter_list_mutex);
3183 + list_add(&waiter->list, &instance->bulk_waiter_list);
3184 + mutex_unlock(&instance->bulk_waiter_list_mutex);
3185 + vchiq_log_info(vchiq_arm_log_level,
3186 + "saved bulk_waiter %x for pid %d",
3187 + (unsigned int)waiter, current->pid);
3189 + if (copy_to_user((void __user *)
3190 + &(((VCHIQ_QUEUE_BULK_TRANSFER_T __user *)
3192 + (const void *)&mode_waiting,
3193 + sizeof(mode_waiting)) != 0)
3198 + case VCHIQ_IOC_AWAIT_COMPLETION: {
3199 + VCHIQ_AWAIT_COMPLETION_T args;
3201 + DEBUG_TRACE(AWAIT_COMPLETION_LINE);
3202 + if (!instance->connected) {
3207 + if (copy_from_user(&args, (const void __user *)arg,
3208 + sizeof(args)) != 0) {
3213 + mutex_lock(&instance->completion_mutex);
3215 + DEBUG_TRACE(AWAIT_COMPLETION_LINE);
3216 + while ((instance->completion_remove ==
3217 + instance->completion_insert)
3218 + && !instance->closing) {
3220 + DEBUG_TRACE(AWAIT_COMPLETION_LINE);
3221 + mutex_unlock(&instance->completion_mutex);
3222 + rc = down_interruptible(&instance->insert_event);
3223 + mutex_lock(&instance->completion_mutex);
3225 + DEBUG_TRACE(AWAIT_COMPLETION_LINE);
3226 + vchiq_log_info(vchiq_arm_log_level,
3227 + "AWAIT_COMPLETION interrupted");
3232 + DEBUG_TRACE(AWAIT_COMPLETION_LINE);
3234 + /* A read memory barrier is needed to stop prefetch of a stale
3235 + ** completion record
3240 + int msgbufcount = args.msgbufcount;
3241 + for (ret = 0; ret < args.count; ret++) {
3242 + VCHIQ_COMPLETION_DATA_T *completion;
3243 + VCHIQ_SERVICE_T *service;
3244 + USER_SERVICE_T *user_service;
3245 + VCHIQ_HEADER_T *header;
3246 + if (instance->completion_remove ==
3247 + instance->completion_insert)
3249 + completion = &instance->completions[
3250 + instance->completion_remove &
3251 + (MAX_COMPLETIONS - 1)];
3253 + service = completion->service_userdata;
3254 + user_service = service->base.userdata;
3255 + completion->service_userdata =
3256 + user_service->userdata;
3258 + header = completion->header;
3260 + void __user *msgbuf;
3263 + msglen = header->size +
3264 + sizeof(VCHIQ_HEADER_T);
3265 + /* This must be a VCHIQ-style service */
3266 + if (args.msgbufsize < msglen) {
3268 + vchiq_arm_log_level,
3269 + "header %x: msgbufsize"
3270 + " %x < msglen %x",
3271 + (unsigned int)header,
3274 + WARN(1, "invalid message "
3280 + if (msgbufcount <= 0)
3281 + /* Stall here for lack of a
3282 + ** buffer for the message. */
3284 + /* Get the pointer from user space */
3286 + if (copy_from_user(&msgbuf,
3287 + (const void __user *)
3288 + &args.msgbufs[msgbufcount],
3289 + sizeof(msgbuf)) != 0) {
3295 + /* Copy the message to user space */
3296 + if (copy_to_user(msgbuf, header,
3303 + /* Now it has been copied, the message
3304 + ** can be released. */
3305 + vchiq_release_message(service->handle,
3308 + /* The completion must point to the
3310 + completion->header = msgbuf;
3313 + if ((completion->reason ==
3314 + VCHIQ_SERVICE_CLOSED) &&
3315 + !instance->use_close_delivered)
3316 + unlock_service(service);
3318 + if (copy_to_user((void __user *)(
3319 + (size_t)args.buf +
3320 + ret * sizeof(VCHIQ_COMPLETION_DATA_T)),
3322 + sizeof(VCHIQ_COMPLETION_DATA_T)) != 0) {
3328 + instance->completion_remove++;
3331 + if (msgbufcount != args.msgbufcount) {
3332 + if (copy_to_user((void __user *)
3333 + &((VCHIQ_AWAIT_COMPLETION_T *)arg)->
3336 + sizeof(msgbufcount)) != 0) {
3343 + up(&instance->remove_event);
3344 + mutex_unlock(&instance->completion_mutex);
3345 + DEBUG_TRACE(AWAIT_COMPLETION_LINE);
3348 + case VCHIQ_IOC_DEQUEUE_MESSAGE: {
3349 + VCHIQ_DEQUEUE_MESSAGE_T args;
3350 + USER_SERVICE_T *user_service;
3351 + VCHIQ_HEADER_T *header;
3353 + DEBUG_TRACE(DEQUEUE_MESSAGE_LINE);
3354 + if (copy_from_user
3355 + (&args, (const void __user *)arg,
3356 + sizeof(args)) != 0) {
3360 + service = find_service_for_instance(instance, args.handle);
3365 + user_service = (USER_SERVICE_T *)service->base.userdata;
3366 + if (user_service->is_vchi == 0) {
3371 + spin_lock(&msg_queue_spinlock);
3372 + if (user_service->msg_remove == user_service->msg_insert) {
3373 + if (!args.blocking) {
3374 + spin_unlock(&msg_queue_spinlock);
3375 + DEBUG_TRACE(DEQUEUE_MESSAGE_LINE);
3376 + ret = -EWOULDBLOCK;
3379 + user_service->dequeue_pending = 1;
3381 + spin_unlock(&msg_queue_spinlock);
3382 + DEBUG_TRACE(DEQUEUE_MESSAGE_LINE);
3383 + if (down_interruptible(
3384 + &user_service->insert_event) != 0) {
3385 + vchiq_log_info(vchiq_arm_log_level,
3386 + "DEQUEUE_MESSAGE interrupted");
3390 + spin_lock(&msg_queue_spinlock);
3391 + } while (user_service->msg_remove ==
3392 + user_service->msg_insert);
3398 + BUG_ON((int)(user_service->msg_insert -
3399 + user_service->msg_remove) < 0);
3401 + header = user_service->msg_queue[user_service->msg_remove &
3402 + (MSG_QUEUE_SIZE - 1)];
3403 + user_service->msg_remove++;
3404 + spin_unlock(&msg_queue_spinlock);
3406 + up(&user_service->remove_event);
3407 + if (header == NULL)
3409 + else if (header->size <= args.bufsize) {
3410 + /* Copy to user space if msgbuf is not NULL */
3411 + if ((args.buf == NULL) ||
3412 + (copy_to_user((void __user *)args.buf,
3414 + header->size) == 0)) {
3415 + ret = header->size;
3416 + vchiq_release_message(
3422 + vchiq_log_error(vchiq_arm_log_level,
3423 + "header %x: bufsize %x < size %x",
3424 + (unsigned int)header, args.bufsize,
3426 + WARN(1, "invalid size\n");
3429 + DEBUG_TRACE(DEQUEUE_MESSAGE_LINE);
3432 + case VCHIQ_IOC_GET_CLIENT_ID: {
3433 + VCHIQ_SERVICE_HANDLE_T handle = (VCHIQ_SERVICE_HANDLE_T)arg;
3435 + ret = vchiq_get_client_id(handle);
3438 + case VCHIQ_IOC_GET_CONFIG: {
3439 + VCHIQ_GET_CONFIG_T args;
3440 + VCHIQ_CONFIG_T config;
3442 + if (copy_from_user(&args, (const void __user *)arg,
3443 + sizeof(args)) != 0) {
3447 + if (args.config_size > sizeof(config)) {
3451 + status = vchiq_get_config(instance, args.config_size, &config);
3452 + if (status == VCHIQ_SUCCESS) {
3453 + if (copy_to_user((void __user *)args.pconfig,
3454 + &config, args.config_size) != 0) {
3461 + case VCHIQ_IOC_SET_SERVICE_OPTION: {
3462 + VCHIQ_SET_SERVICE_OPTION_T args;
3464 + if (copy_from_user(
3465 + &args, (const void __user *)arg,
3466 + sizeof(args)) != 0) {
3471 + service = find_service_for_instance(instance, args.handle);
3477 + status = vchiq_set_service_option(
3478 + args.handle, args.option, args.value);
3481 + case VCHIQ_IOC_DUMP_PHYS_MEM: {
3482 + VCHIQ_DUMP_MEM_T args;
3484 + if (copy_from_user
3485 + (&args, (const void __user *)arg,
3486 + sizeof(args)) != 0) {
3490 + dump_phys_mem(args.virt_addr, args.num_bytes);
3493 + case VCHIQ_IOC_LIB_VERSION: {
3494 + unsigned int lib_version = (unsigned int)arg;
3496 + if (lib_version < VCHIQ_VERSION_MIN)
3498 + else if (lib_version >= VCHIQ_VERSION_CLOSE_DELIVERED)
3499 + instance->use_close_delivered = 1;
3502 + case VCHIQ_IOC_CLOSE_DELIVERED: {
3503 + VCHIQ_SERVICE_HANDLE_T handle = (VCHIQ_SERVICE_HANDLE_T)arg;
3505 + service = find_closed_service_for_instance(instance, handle);
3506 + if (service != NULL) {
3507 + USER_SERVICE_T *user_service =
3508 + (USER_SERVICE_T *)service->base.userdata;
3509 + close_delivered(user_service);
3521 + unlock_service(service);
3524 + if (status == VCHIQ_ERROR)
3526 + else if (status == VCHIQ_RETRY)
3530 + if ((status == VCHIQ_SUCCESS) && (ret < 0) && (ret != -EINTR) &&
3531 + (ret != -EWOULDBLOCK))
3532 + vchiq_log_info(vchiq_arm_log_level,
3533 + " ioctl instance %lx, cmd %s -> status %d, %ld",
3534 + (unsigned long)instance,
3535 + (_IOC_NR(cmd) <= VCHIQ_IOC_MAX) ?
3536 + ioctl_names[_IOC_NR(cmd)] :
3540 + vchiq_log_trace(vchiq_arm_log_level,
3541 + " ioctl instance %lx, cmd %s -> status %d, %ld",
3542 + (unsigned long)instance,
3543 + (_IOC_NR(cmd) <= VCHIQ_IOC_MAX) ?
3544 + ioctl_names[_IOC_NR(cmd)] :
3551 +/****************************************************************************
3555 +***************************************************************************/
3558 +vchiq_open(struct inode *inode, struct file *file)
3560 + int dev = iminor(inode) & 0x0f;
3561 + vchiq_log_info(vchiq_arm_log_level, "vchiq_open");
3563 + case VCHIQ_MINOR: {
3565 + VCHIQ_STATE_T *state = vchiq_get_state();
3566 + VCHIQ_INSTANCE_T instance;
3569 + vchiq_log_error(vchiq_arm_log_level,
3570 + "vchiq has no connection to VideoCore");
3574 + instance = kzalloc(sizeof(*instance), GFP_KERNEL);
3578 + instance->state = state;
3579 + instance->pid = current->tgid;
3581 + ret = vchiq_debugfs_add_instance(instance);
3587 + sema_init(&instance->insert_event, 0);
3588 + sema_init(&instance->remove_event, 0);
3589 + mutex_init(&instance->completion_mutex);
3590 + mutex_init(&instance->bulk_waiter_list_mutex);
3591 + INIT_LIST_HEAD(&instance->bulk_waiter_list);
3593 + file->private_data = instance;
3597 + vchiq_log_error(vchiq_arm_log_level,
3598 + "Unknown minor device: %d", dev);
3605 +/****************************************************************************
3609 +***************************************************************************/
3612 +vchiq_release(struct inode *inode, struct file *file)
3614 + int dev = iminor(inode) & 0x0f;
3617 + case VCHIQ_MINOR: {
3618 + VCHIQ_INSTANCE_T instance = file->private_data;
3619 + VCHIQ_STATE_T *state = vchiq_get_state();
3620 + VCHIQ_SERVICE_T *service;
3623 + vchiq_log_info(vchiq_arm_log_level,
3624 + "vchiq_release: instance=%lx",
3625 + (unsigned long)instance);
3632 + /* Ensure videocore is awake to allow termination. */
3633 + vchiq_use_internal(instance->state, NULL,
3636 + mutex_lock(&instance->completion_mutex);
3638 + /* Wake the completion thread and ask it to exit */
3639 + instance->closing = 1;
3640 + up(&instance->insert_event);
3642 + mutex_unlock(&instance->completion_mutex);
3644 + /* Wake the slot handler if the completion queue is full. */
3645 + up(&instance->remove_event);
3647 + /* Mark all services for termination... */
3649 + while ((service = next_service_by_instance(state, instance,
3651 + USER_SERVICE_T *user_service = service->base.userdata;
3653 + /* Wake the slot handler if the msg queue is full. */
3654 + up(&user_service->remove_event);
3656 + vchiq_terminate_service_internal(service);
3657 + unlock_service(service);
3660 + /* ...and wait for them to die */
3662 + while ((service = next_service_by_instance(state, instance, &i))
3664 + USER_SERVICE_T *user_service = service->base.userdata;
3666 + down(&service->remove_event);
3668 + BUG_ON(service->srvstate != VCHIQ_SRVSTATE_FREE);
3670 + spin_lock(&msg_queue_spinlock);
3672 + while (user_service->msg_remove !=
3673 + user_service->msg_insert) {
3674 + VCHIQ_HEADER_T *header = user_service->
3675 + msg_queue[user_service->msg_remove &
3676 + (MSG_QUEUE_SIZE - 1)];
3677 + user_service->msg_remove++;
3678 + spin_unlock(&msg_queue_spinlock);
3681 + vchiq_release_message(
3684 + spin_lock(&msg_queue_spinlock);
3687 + spin_unlock(&msg_queue_spinlock);
3689 + unlock_service(service);
3692 + /* Release any closed services */
3693 + while (instance->completion_remove !=
3694 + instance->completion_insert) {
3695 + VCHIQ_COMPLETION_DATA_T *completion;
3696 + VCHIQ_SERVICE_T *service;
3697 + completion = &instance->completions[
3698 + instance->completion_remove &
3699 + (MAX_COMPLETIONS - 1)];
3700 + service = completion->service_userdata;
3701 + if (completion->reason == VCHIQ_SERVICE_CLOSED)
3703 + USER_SERVICE_T *user_service =
3704 + service->base.userdata;
3706 + /* Wake any blocked user-thread */
3707 + if (instance->use_close_delivered)
3708 + up(&user_service->close_event);
3709 + unlock_service(service);
3711 + instance->completion_remove++;
3714 + /* Release the PEER service count. */
3715 + vchiq_release_internal(instance->state, NULL);
3718 + struct list_head *pos, *next;
3719 + list_for_each_safe(pos, next,
3720 + &instance->bulk_waiter_list) {
3721 + struct bulk_waiter_node *waiter;
3722 + waiter = list_entry(pos,
3723 + struct bulk_waiter_node,
3726 + vchiq_log_info(vchiq_arm_log_level,
3727 + "bulk_waiter - cleaned up %x "
3729 + (unsigned int)waiter, waiter->pid);
3734 + vchiq_debugfs_remove_instance(instance);
3737 + file->private_data = NULL;
3741 + vchiq_log_error(vchiq_arm_log_level,
3742 + "Unknown minor device: %d", dev);
3750 +/****************************************************************************
3754 +***************************************************************************/
3757 +vchiq_dump(void *dump_context, const char *str, int len)
3759 + DUMP_CONTEXT_T *context = (DUMP_CONTEXT_T *)dump_context;
3761 + if (context->actual < context->space) {
3763 + if (context->offset > 0) {
3764 + int skip_bytes = min(len, (int)context->offset);
3765 + str += skip_bytes;
3766 + len -= skip_bytes;
3767 + context->offset -= skip_bytes;
3768 + if (context->offset > 0)
3771 + copy_bytes = min(len, (int)(context->space - context->actual));
3772 + if (copy_bytes == 0)
3774 + if (copy_to_user(context->buf + context->actual, str,
3776 + context->actual = -EFAULT;
3777 + context->actual += copy_bytes;
3778 + len -= copy_bytes;
3780 + /* If the terminating NUL is included in the length, then it
3781 + ** marks the end of a line and should be replaced with a
3782 + ** carriage return. */
3783 + if ((len == 0) && (str[copy_bytes - 1] == '\0')) {
3785 + if (copy_to_user(context->buf + context->actual - 1,
3787 + context->actual = -EFAULT;
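The DUMP_CONTEXT_T cursor logic in vchiq_dump() above can be sketched in userspace: `offset` skips bytes already consumed by a previous read, `space` bounds the output, and `actual` tracks bytes written. Names mirror the kernel struct for illustration; `memcpy` stands in for `copy_to_user`:

```c
#include <string.h>

struct dump_ctx {
	char *buf;
	size_t actual;  /* bytes emitted so far */
	size_t space;   /* caller-supplied buffer size */
	size_t offset;  /* bytes to skip (resume position from *ppos) */
};

static void dump_str(struct dump_ctx *ctx, const char *str, size_t len)
{
	if (ctx->actual >= ctx->space)
		return;
	if (ctx->offset > 0) {
		size_t skip = len < ctx->offset ? len : ctx->offset;
		str += skip;
		len -= skip;
		ctx->offset -= skip;
		if (ctx->offset > 0)
			return;  /* still inside the skipped region */
	}
	size_t avail = ctx->space - ctx->actual;
	size_t copy = len < avail ? len : avail;
	memcpy(ctx->buf + ctx->actual, str, copy);
	ctx->actual += copy;
}
```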
3792 +/****************************************************************************
3794 +* vchiq_dump_platform_instance_state
3796 +***************************************************************************/
3799 +vchiq_dump_platform_instances(void *dump_context)
3801 + VCHIQ_STATE_T *state = vchiq_get_state();
3806 + /* There is no list of instances, so instead scan all services,
3807 + marking those that have been dumped. */
3809 + for (i = 0; i < state->unused_service; i++) {
3810 + VCHIQ_SERVICE_T *service = state->services[i];
3811 + VCHIQ_INSTANCE_T instance;
3813 + if (service && (service->base.callback == service_callback)) {
3814 + instance = service->instance;
3816 + instance->mark = 0;
3820 + for (i = 0; i < state->unused_service; i++) {
3821 + VCHIQ_SERVICE_T *service = state->services[i];
3822 + VCHIQ_INSTANCE_T instance;
3824 + if (service && (service->base.callback == service_callback)) {
3825 + instance = service->instance;
3826 + if (instance && !instance->mark) {
3827 + len = snprintf(buf, sizeof(buf),
3828 + "Instance %x: pid %d,%s completions "
3830 + (unsigned int)instance, instance->pid,
3831 + instance->connected ? " connected, " :
3833 + instance->completion_insert -
3834 + instance->completion_remove,
3837 + vchiq_dump(dump_context, buf, len + 1);
3839 + instance->mark = 1;
3845 +/****************************************************************************
3847 +* vchiq_dump_platform_service_state
3849 +***************************************************************************/
3852 +vchiq_dump_platform_service_state(void *dump_context, VCHIQ_SERVICE_T *service)
3854 + USER_SERVICE_T *user_service = (USER_SERVICE_T *)service->base.userdata;
3858 + len = snprintf(buf, sizeof(buf), " instance %x",
3859 + (unsigned int)service->instance);
3861 + if ((service->base.callback == service_callback) &&
3862 + user_service->is_vchi) {
3863 + len += snprintf(buf + len, sizeof(buf) - len,
3864 + ", %d/%d messages",
3865 + user_service->msg_insert - user_service->msg_remove,
3868 + if (user_service->dequeue_pending)
3869 + len += snprintf(buf + len, sizeof(buf) - len,
3870 + " (dequeue pending)");
3873 + vchiq_dump(dump_context, buf, len + 1);
3876 +/****************************************************************************
3880 +***************************************************************************/
3883 +dump_phys_mem(void *virt_addr, uint32_t num_bytes)
3886 + uint8_t *end_virt_addr = virt_addr + num_bytes;
3892 + struct page *page;
3893 + struct page **pages;
3894 + uint8_t *kmapped_virt_ptr;
3896 + /* Align virt_addr and end_virt_addr to 16-byte boundaries. */
3898 + virt_addr = (void *)((unsigned long)virt_addr & ~0x0fuL);
3899 + end_virt_addr = (void *)(((unsigned long)end_virt_addr + 15uL) &
3902 + offset = (int)(long)virt_addr & (PAGE_SIZE - 1);
3903 + end_offset = (int)(long)end_virt_addr & (PAGE_SIZE - 1);
3905 + num_pages = (offset + num_bytes + PAGE_SIZE - 1) / PAGE_SIZE;
3907 + pages = kmalloc(sizeof(struct page *) * num_pages, GFP_KERNEL);
3908 + if (pages == NULL) {
3909 + vchiq_log_error(vchiq_arm_log_level,
3910 + "Unable to allocate memory for %d pages\n",
3915 + down_read(&current->mm->mmap_sem);
3916 + rc = get_user_pages(current, /* task */
3917 + current->mm, /* mm */
3918 + (unsigned long)virt_addr, /* start */
3919 + num_pages, /* len */
3922 + pages, /* pages (array of page pointers) */
3924 + up_read(&current->mm->mmap_sem);
3929 + while (offset < end_offset) {
3931 + int page_offset = offset % PAGE_SIZE;
3932 + page_idx = offset / PAGE_SIZE;
3934 + if (page_idx != prev_idx) {
3938 + page = pages[page_idx];
3939 + kmapped_virt_ptr = kmap(page);
3941 + prev_idx = page_idx;
3944 + if (vchiq_arm_log_level >= VCHIQ_LOG_TRACE)
3945 + vchiq_log_dump_mem("ph",
3946 + (uint32_t)(unsigned long)&kmapped_virt_ptr[
3948 + &kmapped_virt_ptr[page_offset], 16);
3955 + for (page_idx = 0; page_idx < num_pages; page_idx++)
3956 + page_cache_release(pages[page_idx]);
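The num_pages calculation in dump_phys_mem above is the standard round-up: a buffer that starts `offset` bytes into its first page and runs for `num_bytes` touches this many pages. A standalone check (PAGE_SIZE assumed to be 4096 here):

```c
#include <assert.h>

#define PAGE_SIZE 4096u  /* assumed; the actual value is per-arch */

/* Pages spanned by a buffer starting `offset` bytes into a page. */
static unsigned pages_spanned(unsigned offset, unsigned num_bytes)
{
	return (offset + num_bytes + PAGE_SIZE - 1) / PAGE_SIZE;
}
```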
3961 +/****************************************************************************
3965 +***************************************************************************/
3968 +vchiq_read(struct file *file, char __user *buf,
3969 + size_t count, loff_t *ppos)
3971 + DUMP_CONTEXT_T context;
3972 + context.buf = buf;
3973 + context.actual = 0;
3974 + context.space = count;
3975 + context.offset = *ppos;
3977 + vchiq_dump_state(&context, &g_state);
3979 + *ppos += context.actual;
3981 + return context.actual;
3985 +vchiq_get_state(void)
3988 + if (g_state.remote == NULL)
3989 + printk(KERN_ERR "%s: g_state.remote == NULL\n", __func__);
3990 + else if (g_state.remote->initialised != 1)
3991 + printk(KERN_NOTICE "%s: g_state.remote->initialised != 1 (%d)\n",
3992 + __func__, g_state.remote->initialised);
3994 + return ((g_state.remote != NULL) &&
3995 + (g_state.remote->initialised == 1)) ? &g_state : NULL;
3998 +static const struct file_operations
4000 + .owner = THIS_MODULE,
4001 + .unlocked_ioctl = vchiq_ioctl,
4002 + .open = vchiq_open,
4003 + .release = vchiq_release,
4004 + .read = vchiq_read
4008 + * Autosuspend related functionality
4012 +vchiq_videocore_wanted(VCHIQ_STATE_T *state)
4014 + VCHIQ_ARM_STATE_T *arm_state = vchiq_platform_get_arm_state(state);
4016 + /* autosuspend not supported - always return wanted */
4018 + else if (arm_state->blocked_count)
4020 + else if (!arm_state->videocore_use_count)
4021 + /* usage count zero - check for override unless we're forcing */
4022 + if (arm_state->resume_blocked)
4025 + return vchiq_platform_videocore_wanted(state);
4027 + /* non-zero usage count - videocore still required */
4031 +static VCHIQ_STATUS_T
4032 +vchiq_keepalive_vchiq_callback(VCHIQ_REASON_T reason,
4033 + VCHIQ_HEADER_T *header,
4034 + VCHIQ_SERVICE_HANDLE_T service_user,
4037 + vchiq_log_error(vchiq_susp_log_level,
4038 + "%s callback reason %d", __func__, reason);
4043 +vchiq_keepalive_thread_func(void *v)
4045 + VCHIQ_STATE_T *state = (VCHIQ_STATE_T *) v;
4046 + VCHIQ_ARM_STATE_T *arm_state = vchiq_platform_get_arm_state(state);
4048 + VCHIQ_STATUS_T status;
4049 + VCHIQ_INSTANCE_T instance;
4050 + VCHIQ_SERVICE_HANDLE_T ka_handle;
4052 + VCHIQ_SERVICE_PARAMS_T params = {
4053 + .fourcc = VCHIQ_MAKE_FOURCC('K', 'E', 'E', 'P'),
4054 + .callback = vchiq_keepalive_vchiq_callback,
4055 + .version = KEEPALIVE_VER,
4056 + .version_min = KEEPALIVE_VER_MIN
4059 + status = vchiq_initialise(&instance);
4060 + if (status != VCHIQ_SUCCESS) {
4061 + vchiq_log_error(vchiq_susp_log_level,
4062 + "%s vchiq_initialise failed %d", __func__, status);
4066 + status = vchiq_connect(instance);
4067 + if (status != VCHIQ_SUCCESS) {
4068 + vchiq_log_error(vchiq_susp_log_level,
4069 + "%s vchiq_connect failed %d", __func__, status);
4073 + status = vchiq_add_service(instance, &params, &ka_handle);
4074 + if (status != VCHIQ_SUCCESS) {
4075 + vchiq_log_error(vchiq_susp_log_level,
4076 + "%s vchiq_add_service failed %d", __func__, status);
4081 + long rc = 0, uc = 0;
4082 + if (wait_for_completion_interruptible(&arm_state->ka_evt)
4084 + vchiq_log_error(vchiq_susp_log_level,
4085 + "%s interrupted", __func__);
4086 + flush_signals(current);
4090 + /* read and clear counters. Do release_count then use_count to
4091 + * prevent getting more releases than uses */
4092 + rc = atomic_xchg(&arm_state->ka_release_count, 0);
4093 + uc = atomic_xchg(&arm_state->ka_use_count, 0);
4095 + /* Call use/release service the requisite number of times.
4096 + * Process use before release so use counts don't go negative */
4098 + atomic_inc(&arm_state->ka_use_ack_count);
4099 + status = vchiq_use_service(ka_handle);
4100 + if (status != VCHIQ_SUCCESS) {
4101 + vchiq_log_error(vchiq_susp_log_level,
4102 + "%s vchiq_use_service error %d",
4103 + __func__, status);
4107 + status = vchiq_release_service(ka_handle);
4108 + if (status != VCHIQ_SUCCESS) {
4109 + vchiq_log_error(vchiq_susp_log_level,
4110 + "%s vchiq_release_service error %d",
4111 + __func__, status);
4117 + vchiq_shutdown(instance);
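The keepalive thread's read-and-clear of the use/release counters can be sketched with C11 atomics: `atomic_exchange` plays the role of the kernel's `atomic_xchg`, and draining the release count before the use count preserves the "never more releases than uses" ordering noted in the comment above. Names are illustrative:

```c
#include <stdatomic.h>

static atomic_long ka_use_count;
static atomic_long ka_release_count;

/* Producer side (cf. vchiq_on_remote_use / vchiq_on_remote_release). */
static void remote_use(void)     { atomic_fetch_add(&ka_use_count, 1); }
static void remote_release(void) { atomic_fetch_add(&ka_release_count, 1); }

/* Consumer side: read and clear each counter in a single atomic step,
 * releases first, so a release arriving between the two exchanges is
 * held over to the next iteration instead of outnumbering the uses. */
static void drain(long *uses, long *releases)
{
	*releases = atomic_exchange(&ka_release_count, 0);
	*uses     = atomic_exchange(&ka_use_count, 0);
}
```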
4125 +vchiq_arm_init_state(VCHIQ_STATE_T *state, VCHIQ_ARM_STATE_T *arm_state)
4127 + VCHIQ_STATUS_T status = VCHIQ_SUCCESS;
4130 + rwlock_init(&arm_state->susp_res_lock);
4132 + init_completion(&arm_state->ka_evt);
4133 + atomic_set(&arm_state->ka_use_count, 0);
4134 + atomic_set(&arm_state->ka_use_ack_count, 0);
4135 + atomic_set(&arm_state->ka_release_count, 0);
4137 + init_completion(&arm_state->vc_suspend_complete);
4139 + init_completion(&arm_state->vc_resume_complete);
4140 + /* Initialise to 'done' state. We only want to block on resume
4141 + * completion while videocore is suspended. */
4142 + set_resume_state(arm_state, VC_RESUME_RESUMED);
4144 + init_completion(&arm_state->resume_blocker);
4145 + /* Initialise to 'done' state. We only want to block on this
4146 + * completion while resume is blocked */
4147 + complete_all(&arm_state->resume_blocker);
4149 + init_completion(&arm_state->blocked_blocker);
4150 + /* Initialise to 'done' state. We only want to block on this
4151 + * completion while things are waiting on the resume blocker */
4152 + complete_all(&arm_state->blocked_blocker);
4154 + arm_state->suspend_timer_timeout = SUSPEND_TIMER_TIMEOUT_MS;
4155 + arm_state->suspend_timer_running = 0;
4156 + init_timer(&arm_state->suspend_timer);
4157 + arm_state->suspend_timer.data = (unsigned long)(state);
4158 + arm_state->suspend_timer.function = suspend_timer_callback;
4160 + arm_state->first_connect = 0;
4167 +** Functions to modify the state variables;
4168 +** set_suspend_state
4169 +** set_resume_state
4171 +** There are more state variables than we might like, so ensure they remain in
4172 +** step. Suspend and resume state are maintained separately, since most of
4173 +** these state machines can operate independently. However, there are a few
4174 +** states where state transitions in one state machine cause a reset to the
4175 +** other state machine. In addition, there are some completion events which
4176 +** need to occur on state machine reset and end-state(s), so these are also
4177 +** dealt with in these functions.
4179 +** In all states we set the state variable according to the input, but in some
4180 +** cases we perform additional steps outlined below;
4182 +** VC_SUSPEND_IDLE - Initialise the suspend completion at the same time.
4183 +** The suspend completion is completed after any suspend
4184 +** attempt. When we reset the state machine we also reset
4185 +** the completion. This reset occurs when videocore is
4186 +** resumed, and also if we initiate suspend after a suspend
4189 +** VC_SUSPEND_IN_PROGRESS - This state is considered the point of no return for
4190 +** suspend - ie from this point on we must try to suspend
4191 +** before resuming can occur. We therefore also reset the
4192 +** resume state machine to VC_RESUME_IDLE in this state.
4194 +** VC_SUSPEND_SUSPENDED - Suspend has completed successfully. Also call
4195 +** complete_all on the suspend completion to notify
4196 +** anything waiting for suspend to happen.
4198 +** VC_SUSPEND_REJECTED - Videocore rejected suspend. Videocore will also
4199 +** initiate resume, so no need to alter resume state.
4200 +** We call complete_all on the suspend completion to notify
4201 +** of suspend rejection.
4203 +** VC_SUSPEND_FAILED - We failed to initiate videocore suspend. We notify the
4204 +** suspend completion and reset the resume state machine.
4206 +** VC_RESUME_IDLE - Initialise the resume completion at the same time. The
4207 +** resume completion is in its 'done' state whenever
4208 +** videocore is running. Therefore, the VC_RESUME_IDLE state
4209 +** implies that videocore is suspended.
4210 +** Hence, any thread which needs to wait until videocore is
4211 +** running can wait on this completion - it will only block
4212 +** if videocore is suspended.
4214 +** VC_RESUME_RESUMED - Resume has completed successfully. Videocore is running.
4215 +** Call complete_all on the resume completion to unblock
4216 +** any threads waiting for resume. Also reset the suspend
4217 +** state machine to its idle state.
4219 +** VC_RESUME_FAILED - Currently unused - no mechanism to fail resume exists.
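The two cross-machine resets described above can be condensed into a small sketch. This is illustrative only: the enum names are shortened, and the completions are replaced by booleans, but it mirrors the two coupling points, i.e. entering suspend-in-progress resets resume to idle, and reaching resumed resets suspend to idle while completing the resume event:

```c
#include <stdbool.h>

enum vc_suspend { SUSPEND_IDLE, SUSPEND_REQUESTED, SUSPEND_IN_PROGRESS,
		  SUSPEND_SUSPENDED };
enum vc_resume  { RESUME_IDLE, RESUME_REQUESTED, RESUME_IN_PROGRESS,
		  RESUME_RESUMED };

struct vc_state {
	enum vc_suspend suspend;
	enum vc_resume  resume;
	bool suspend_done;   /* stands in for vc_suspend_complete */
	bool resume_done;    /* stands in for vc_resume_complete */
};

static void set_resume(struct vc_state *s, enum vc_resume ns);

static void set_suspend(struct vc_state *s, enum vc_suspend ns)
{
	s->suspend = ns;
	switch (ns) {
	case SUSPEND_IDLE:        s->suspend_done = false; break; /* re-arm */
	case SUSPEND_IN_PROGRESS: set_resume(s, RESUME_IDLE);  break;
	case SUSPEND_SUSPENDED:   s->suspend_done = true;      break;
	default: break;
	}
}

static void set_resume(struct vc_state *s, enum vc_resume ns)
{
	s->resume = ns;
	switch (ns) {
	case RESUME_IDLE:    s->resume_done = false; break; /* re-arm */
	case RESUME_RESUMED: s->resume_done = true;
			     set_suspend(s, SUSPEND_IDLE); break;
	default: break;
	}
}
```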
4223 +set_suspend_state(VCHIQ_ARM_STATE_T *arm_state,
4224 + enum vc_suspend_status new_state)
4226 + /* set the state in all cases */
4227 + arm_state->vc_suspend_state = new_state;
4229 + /* state specific additional actions */
4230 + switch (new_state) {
4231 + case VC_SUSPEND_FORCE_CANCELED:
4232 + complete_all(&arm_state->vc_suspend_complete);
4234 + case VC_SUSPEND_REJECTED:
4235 + complete_all(&arm_state->vc_suspend_complete);
4237 + case VC_SUSPEND_FAILED:
4238 + complete_all(&arm_state->vc_suspend_complete);
4239 + arm_state->vc_resume_state = VC_RESUME_RESUMED;
4240 + complete_all(&arm_state->vc_resume_complete);
4242 + case VC_SUSPEND_IDLE:
4243 + reinit_completion(&arm_state->vc_suspend_complete);
4245 + case VC_SUSPEND_REQUESTED:
4247 + case VC_SUSPEND_IN_PROGRESS:
4248 + set_resume_state(arm_state, VC_RESUME_IDLE);
4250 + case VC_SUSPEND_SUSPENDED:
4251 + complete_all(&arm_state->vc_suspend_complete);
4260 +set_resume_state(VCHIQ_ARM_STATE_T *arm_state,
4261 + enum vc_resume_status new_state)
4263 + /* set the state in all cases */
4264 + arm_state->vc_resume_state = new_state;
4266 + /* state specific additional actions */
4267 + switch (new_state) {
4268 + case VC_RESUME_FAILED:
4270 + case VC_RESUME_IDLE:
4271 + reinit_completion(&arm_state->vc_resume_complete);
4273 + case VC_RESUME_REQUESTED:
4275 + case VC_RESUME_IN_PROGRESS:
4277 + case VC_RESUME_RESUMED:
4278 + complete_all(&arm_state->vc_resume_complete);
4279 + set_suspend_state(arm_state, VC_SUSPEND_IDLE);
4288 +/* should be called with the write lock held */
4290 +start_suspend_timer(VCHIQ_ARM_STATE_T *arm_state)
4292 + del_timer(&arm_state->suspend_timer);
4293 + arm_state->suspend_timer.expires = jiffies +
4294 + msecs_to_jiffies(arm_state->
4295 + suspend_timer_timeout);
4296 + add_timer(&arm_state->suspend_timer);
4297 + arm_state->suspend_timer_running = 1;
4300 +/* should be called with the write lock held */
4302 +stop_suspend_timer(VCHIQ_ARM_STATE_T *arm_state)
4304 + if (arm_state->suspend_timer_running) {
4305 + del_timer(&arm_state->suspend_timer);
4306 + arm_state->suspend_timer_running = 0;
4311 +need_resume(VCHIQ_STATE_T *state)
4313 + VCHIQ_ARM_STATE_T *arm_state = vchiq_platform_get_arm_state(state);
4314 + return (arm_state->vc_suspend_state > VC_SUSPEND_IDLE) &&
4315 + (arm_state->vc_resume_state < VC_RESUME_REQUESTED) &&
4316 + vchiq_videocore_wanted(state);
4320 +block_resume(VCHIQ_ARM_STATE_T *arm_state)
4322 + int status = VCHIQ_SUCCESS;
4323 + const unsigned long timeout_val =
4324 + msecs_to_jiffies(FORCE_SUSPEND_TIMEOUT_MS);
4325 + int resume_count = 0;
4327 + /* Allow any threads which were blocked by the last force suspend to
4328 + * complete if they haven't already. Only give this one shot; if
4329 + * blocked_count is incremented after blocked_blocker is completed
4330 + * (which only happens when blocked_count hits 0) then those threads
4331 + * will have to wait until next time around */
4332 + if (arm_state->blocked_count) {
4333 + reinit_completion(&arm_state->blocked_blocker);
4334 + write_unlock_bh(&arm_state->susp_res_lock);
4335 + vchiq_log_info(vchiq_susp_log_level, "%s wait for previously "
4336 + "blocked clients", __func__);
4337 + if (wait_for_completion_interruptible_timeout(
4338 + &arm_state->blocked_blocker, timeout_val)
4340 + vchiq_log_error(vchiq_susp_log_level, "%s wait for "
4341 + "previously blocked clients failed", __func__);
4342 + status = VCHIQ_ERROR;
4343 + write_lock_bh(&arm_state->susp_res_lock);
4346 + vchiq_log_info(vchiq_susp_log_level, "%s previously blocked "
4347 + "clients resumed", __func__);
4348 + write_lock_bh(&arm_state->susp_res_lock);
4351 + /* We need to wait for resume to complete if it's in process */
4352 + while (arm_state->vc_resume_state != VC_RESUME_RESUMED &&
4353 + arm_state->vc_resume_state > VC_RESUME_IDLE) {
4354 + if (resume_count > 1) {
4355 + status = VCHIQ_ERROR;
4356 + vchiq_log_error(vchiq_susp_log_level, "%s waited too "
4357 + "many times for resume", __func__);
4360 + write_unlock_bh(&arm_state->susp_res_lock);
4361 + vchiq_log_info(vchiq_susp_log_level, "%s wait for resume",
4363 + if (wait_for_completion_interruptible_timeout(
4364 + &arm_state->vc_resume_complete, timeout_val)
4366 + vchiq_log_error(vchiq_susp_log_level, "%s wait for "
4367 + "resume failed (%s)", __func__,
4368 + resume_state_names[arm_state->vc_resume_state +
4369 + VC_RESUME_NUM_OFFSET]);
4370 + status = VCHIQ_ERROR;
4371 + write_lock_bh(&arm_state->susp_res_lock);
4374 + vchiq_log_info(vchiq_susp_log_level, "%s resumed", __func__);
4375 + write_lock_bh(&arm_state->susp_res_lock);
4378 + reinit_completion(&arm_state->resume_blocker);
4379 + arm_state->resume_blocked = 1;
4386 +unblock_resume(VCHIQ_ARM_STATE_T *arm_state)
4388 + complete_all(&arm_state->resume_blocker);
4389 + arm_state->resume_blocked = 0;
4392 +/* Initiate suspend via slot handler. Should be called with the write lock
4395 +vchiq_arm_vcsuspend(VCHIQ_STATE_T *state)
4397 + VCHIQ_STATUS_T status = VCHIQ_ERROR;
4398 + VCHIQ_ARM_STATE_T *arm_state = vchiq_platform_get_arm_state(state);
4403 + vchiq_log_trace(vchiq_susp_log_level, "%s", __func__);
4404 + status = VCHIQ_SUCCESS;
4407 + switch (arm_state->vc_suspend_state) {
4408 + case VC_SUSPEND_REQUESTED:
4409 + vchiq_log_info(vchiq_susp_log_level, "%s: suspend already "
4410 + "requested", __func__);
4412 + case VC_SUSPEND_IN_PROGRESS:
4413 + vchiq_log_info(vchiq_susp_log_level, "%s: suspend already in "
4414 + "progress", __func__);
4418 + /* We don't expect to be in other states, so log but continue
4420 + vchiq_log_error(vchiq_susp_log_level,
4421 + "%s unexpected suspend state %s", __func__,
4422 + suspend_state_names[arm_state->vc_suspend_state +
4423 + VC_SUSPEND_NUM_OFFSET]);
4424 + /* fall through */
4425 + case VC_SUSPEND_REJECTED:
4426 + case VC_SUSPEND_FAILED:
4427 + /* Ensure any idle state actions have been run */
4428 + set_suspend_state(arm_state, VC_SUSPEND_IDLE);
4429 + /* fall through */
4430 + case VC_SUSPEND_IDLE:
4431 + vchiq_log_info(vchiq_susp_log_level,
4432 + "%s: suspending", __func__);
4433 + set_suspend_state(arm_state, VC_SUSPEND_REQUESTED);
4434 + /* kick the slot handler thread to initiate suspend */
4435 + request_poll(state, NULL, 0);
4440 + vchiq_log_trace(vchiq_susp_log_level, "%s exit %d", __func__, status);
4445 +vchiq_platform_check_suspend(VCHIQ_STATE_T *state)
4447 + VCHIQ_ARM_STATE_T *arm_state = vchiq_platform_get_arm_state(state);
4453 + vchiq_log_trace(vchiq_susp_log_level, "%s", __func__);
4455 + write_lock_bh(&arm_state->susp_res_lock);
4456 + if (arm_state->vc_suspend_state == VC_SUSPEND_REQUESTED &&
4457 + arm_state->vc_resume_state == VC_RESUME_RESUMED) {
4458 + set_suspend_state(arm_state, VC_SUSPEND_IN_PROGRESS);
4461 + write_unlock_bh(&arm_state->susp_res_lock);
4464 + vchiq_platform_suspend(state);
4467 + vchiq_log_trace(vchiq_susp_log_level, "%s exit", __func__);
4473 +output_timeout_error(VCHIQ_STATE_T *state)
4475 + VCHIQ_ARM_STATE_T *arm_state = vchiq_platform_get_arm_state(state);
4476 + char service_err[50] = "";
4477 + int vc_use_count = arm_state->videocore_use_count;
4478 + int active_services = state->unused_service;
4481 + if (!arm_state->videocore_use_count) {
4482 + snprintf(service_err, 50, " Videocore usecount is 0");
4485 + for (i = 0; i < active_services; i++) {
4486 + VCHIQ_SERVICE_T *service_ptr = state->services[i];
4487 + if (service_ptr && service_ptr->service_use_count &&
4488 + (service_ptr->srvstate != VCHIQ_SRVSTATE_FREE)) {
4489 + snprintf(service_err, 50, " %c%c%c%c(%d) service has "
4490 + "use count %d%s", VCHIQ_FOURCC_AS_4CHARS(
4491 + service_ptr->base.fourcc),
4492 + service_ptr->client_id,
4493 + service_ptr->service_use_count,
4494 + service_ptr->service_use_count ==
4495 + vc_use_count ? "" : " (+ more)");
4501 + vchiq_log_error(vchiq_susp_log_level,
4502 + "timed out waiting for vc suspend (%d).%s",
4503 + arm_state->autosuspend_override, service_err);
4507 +/* Try to get videocore into suspended state, regardless of autosuspend state.
4508 +** We don't actually force suspend, since videocore may get into a bad state
4509 +** if we force suspend at a bad time. Instead, we wait for autosuspend to
4510 +** determine a good point to suspend. If this doesn't happen within 100ms we
4513 +** Returns VCHIQ_SUCCESS if videocore suspended successfully, VCHIQ_RETRY if
4514 +** videocore failed to suspend in time or VCHIQ_ERROR if interrupted.
4517 +vchiq_arm_force_suspend(VCHIQ_STATE_T *state)
4519 + VCHIQ_ARM_STATE_T *arm_state = vchiq_platform_get_arm_state(state);
4520 + VCHIQ_STATUS_T status = VCHIQ_ERROR;
4527 + vchiq_log_trace(vchiq_susp_log_level, "%s", __func__);
4529 + write_lock_bh(&arm_state->susp_res_lock);
4531 + status = block_resume(arm_state);
4532 + if (status != VCHIQ_SUCCESS)
4534 + if (arm_state->vc_suspend_state == VC_SUSPEND_SUSPENDED) {
4535 + /* Already suspended - just block resume and exit */
4536 + vchiq_log_info(vchiq_susp_log_level, "%s already suspended",
4538 + status = VCHIQ_SUCCESS;
4540 + } else if (arm_state->vc_suspend_state <= VC_SUSPEND_IDLE) {
4541 + /* initiate suspend immediately in the case that we're waiting
4542 + * for the timeout */
4543 + stop_suspend_timer(arm_state);
4544 + if (!vchiq_videocore_wanted(state)) {
4545 + vchiq_log_info(vchiq_susp_log_level, "%s videocore "
4546 + "idle, initiating suspend", __func__);
4547 + status = vchiq_arm_vcsuspend(state);
4548 + } else if (arm_state->autosuspend_override <
4549 + FORCE_SUSPEND_FAIL_MAX) {
4550 + vchiq_log_info(vchiq_susp_log_level, "%s letting "
4551 + "videocore go idle", __func__);
4552 + status = VCHIQ_SUCCESS;
4554 + vchiq_log_warning(vchiq_susp_log_level, "%s failed too "
4555 + "many times - attempting suspend", __func__);
4556 + status = vchiq_arm_vcsuspend(state);
4559 + vchiq_log_info(vchiq_susp_log_level, "%s videocore suspend "
4560 + "in progress - wait for completion", __func__);
4561 + status = VCHIQ_SUCCESS;
4564 + /* Wait for suspend to happen due to system idle (not forced..) */
4565 + if (status != VCHIQ_SUCCESS)
4566 + goto unblock_resume;
4569 + write_unlock_bh(&arm_state->susp_res_lock);
4571 + rc = wait_for_completion_interruptible_timeout(
4572 + &arm_state->vc_suspend_complete,
4573 + msecs_to_jiffies(FORCE_SUSPEND_TIMEOUT_MS));
4575 + write_lock_bh(&arm_state->susp_res_lock);
4577 + vchiq_log_warning(vchiq_susp_log_level, "%s "
4578 + "interrupted waiting for suspend", __func__);
4579 + status = VCHIQ_ERROR;
4580 + goto unblock_resume;
4581 + } else if (rc == 0) {
4582 + if (arm_state->vc_suspend_state > VC_SUSPEND_IDLE) {
4583 + /* Repeat timeout once if in progress */
4589 + arm_state->autosuspend_override++;
4590 + output_timeout_error(state);
4592 + status = VCHIQ_RETRY;
4593 + goto unblock_resume;
4595 + } while (0 < (repeat--));
4597 + /* Check and report state in case we need to abort ARM suspend */
4598 + if (arm_state->vc_suspend_state != VC_SUSPEND_SUSPENDED) {
4599 + status = VCHIQ_RETRY;
4600 + vchiq_log_error(vchiq_susp_log_level,
4601 + "%s videocore suspend failed (state %s)", __func__,
4602 + suspend_state_names[arm_state->vc_suspend_state +
4603 + VC_SUSPEND_NUM_OFFSET]);
4604 + /* Reset the state only if it's still in an error state.
4605 + * Something could have already initiated another suspend. */
4606 + if (arm_state->vc_suspend_state < VC_SUSPEND_IDLE)
4607 + set_suspend_state(arm_state, VC_SUSPEND_IDLE);
4609 + goto unblock_resume;
4612 + /* successfully suspended - unlock and exit */
4616 + /* all error states need to unblock resume before exit */
4617 + unblock_resume(arm_state);
4620 + write_unlock_bh(&arm_state->susp_res_lock);
4623 + vchiq_log_trace(vchiq_susp_log_level, "%s exit %d", __func__, status);
4628 +vchiq_check_suspend(VCHIQ_STATE_T *state)
4630 + VCHIQ_ARM_STATE_T *arm_state = vchiq_platform_get_arm_state(state);
4635 + vchiq_log_trace(vchiq_susp_log_level, "%s", __func__);
4637 + write_lock_bh(&arm_state->susp_res_lock);
4638 + if (arm_state->vc_suspend_state != VC_SUSPEND_SUSPENDED &&
4639 + arm_state->first_connect &&
4640 + !vchiq_videocore_wanted(state)) {
4641 + vchiq_arm_vcsuspend(state);
4643 + write_unlock_bh(&arm_state->susp_res_lock);
4646 + vchiq_log_trace(vchiq_susp_log_level, "%s exit", __func__);
4652 +vchiq_arm_allow_resume(VCHIQ_STATE_T *state)
4654 + VCHIQ_ARM_STATE_T *arm_state = vchiq_platform_get_arm_state(state);
4661 + vchiq_log_trace(vchiq_susp_log_level, "%s", __func__);
4663 + write_lock_bh(&arm_state->susp_res_lock);
4664 + unblock_resume(arm_state);
4665 + resume = vchiq_check_resume(state);
4666 + write_unlock_bh(&arm_state->susp_res_lock);
4669 + if (wait_for_completion_interruptible(
4670 + &arm_state->vc_resume_complete) < 0) {
4671 + vchiq_log_error(vchiq_susp_log_level,
4672 + "%s interrupted", __func__);
4673 + /* failed, cannot accurately derive suspend
4674 + * state, so exit early. */
4679 + read_lock_bh(&arm_state->susp_res_lock);
4680 + if (arm_state->vc_suspend_state == VC_SUSPEND_SUSPENDED) {
4681 + vchiq_log_info(vchiq_susp_log_level,
4682 + "%s: Videocore remains suspended", __func__);
4684 + vchiq_log_info(vchiq_susp_log_level,
4685 + "%s: Videocore resumed", __func__);
4688 + read_unlock_bh(&arm_state->susp_res_lock);
4690 + vchiq_log_trace(vchiq_susp_log_level, "%s exit %d", __func__, ret);
4694 +/* This function should be called with the write lock held */
4696 +vchiq_check_resume(VCHIQ_STATE_T *state)
4698 + VCHIQ_ARM_STATE_T *arm_state = vchiq_platform_get_arm_state(state);
4704 + vchiq_log_trace(vchiq_susp_log_level, "%s", __func__);
4706 + if (need_resume(state)) {
4707 + set_resume_state(arm_state, VC_RESUME_REQUESTED);
4708 + request_poll(state, NULL, 0);
4713 + vchiq_log_trace(vchiq_susp_log_level, "%s exit", __func__);
4718 +vchiq_platform_check_resume(VCHIQ_STATE_T *state)
4720 + VCHIQ_ARM_STATE_T *arm_state = vchiq_platform_get_arm_state(state);
4726 + vchiq_log_trace(vchiq_susp_log_level, "%s", __func__);
4728 + write_lock_bh(&arm_state->susp_res_lock);
4729 + if (arm_state->wake_address == 0) {
4730 + vchiq_log_info(vchiq_susp_log_level,
4731 + "%s: already awake", __func__);
4734 + if (arm_state->vc_resume_state == VC_RESUME_IN_PROGRESS) {
4735 + vchiq_log_info(vchiq_susp_log_level,
4736 + "%s: already resuming", __func__);
4740 + if (arm_state->vc_resume_state == VC_RESUME_REQUESTED) {
4741 + set_resume_state(arm_state, VC_RESUME_IN_PROGRESS);
4744 + vchiq_log_trace(vchiq_susp_log_level,
4745 + "%s: not resuming (resume state %s)", __func__,
4746 + resume_state_names[arm_state->vc_resume_state +
4747 + VC_RESUME_NUM_OFFSET]);
4750 + write_unlock_bh(&arm_state->susp_res_lock);
4753 + vchiq_platform_resume(state);
4756 + vchiq_log_trace(vchiq_susp_log_level, "%s exit", __func__);
4764 +vchiq_use_internal(VCHIQ_STATE_T *state, VCHIQ_SERVICE_T *service,
4765 + enum USE_TYPE_E use_type)
4767 + VCHIQ_ARM_STATE_T *arm_state = vchiq_platform_get_arm_state(state);
4768 + VCHIQ_STATUS_T ret = VCHIQ_SUCCESS;
4771 + int local_uc, local_entity_uc;
4776 + vchiq_log_trace(vchiq_susp_log_level, "%s", __func__);
4778 + if (use_type == USE_TYPE_VCHIQ) {
4779 + sprintf(entity, "VCHIQ: ");
4780 + entity_uc = &arm_state->peer_use_count;
4781 + } else if (service) {
4782 + sprintf(entity, "%c%c%c%c:%03d",
4783 + VCHIQ_FOURCC_AS_4CHARS(service->base.fourcc),
4784 + service->client_id);
4785 + entity_uc = &service->service_use_count;
4787 + vchiq_log_error(vchiq_susp_log_level, "%s null service "
4789 + ret = VCHIQ_ERROR;
4793 + write_lock_bh(&arm_state->susp_res_lock);
4794 + while (arm_state->resume_blocked) {
4795 + /* If we call 'use' while force suspend is waiting for suspend,
4796 + * then we're about to block the thread which the force is
4797 + * waiting to complete, so we're bound to just time out. In this
4798 + * case, set the suspend state such that the wait will be
4799 + * canceled, so we can complete as quickly as possible. */
4800 + if (arm_state->resume_blocked && arm_state->vc_suspend_state ==
4801 + VC_SUSPEND_IDLE) {
4802 + set_suspend_state(arm_state, VC_SUSPEND_FORCE_CANCELED);
4805 + /* If suspend is already in progress then we need to block */
4806 + if (!try_wait_for_completion(&arm_state->resume_blocker)) {
4807 + /* Indicate that there are threads waiting on the resume
4808 + * blocker. These need to be allowed to complete before
4809 + * a _second_ call to force suspend can complete,
4810 + * otherwise low priority threads might never actually
4812 + arm_state->blocked_count++;
4813 + write_unlock_bh(&arm_state->susp_res_lock);
4814 + vchiq_log_info(vchiq_susp_log_level, "%s %s resume "
4815 + "blocked - waiting...", __func__, entity);
4816 + if (wait_for_completion_killable(
4817 + &arm_state->resume_blocker) != 0) {
4818 + vchiq_log_error(vchiq_susp_log_level, "%s %s "
4819 + "wait for resume blocker interrupted",
4820 + __func__, entity);
4821 + ret = VCHIQ_ERROR;
4822 + write_lock_bh(&arm_state->susp_res_lock);
4823 + arm_state->blocked_count--;
4824 + write_unlock_bh(&arm_state->susp_res_lock);
4827 + vchiq_log_info(vchiq_susp_log_level, "%s %s resume "
4828 + "unblocked", __func__, entity);
4829 + write_lock_bh(&arm_state->susp_res_lock);
4830 + if (--arm_state->blocked_count == 0)
4831 + complete_all(&arm_state->blocked_blocker);
4835 + stop_suspend_timer(arm_state);
4837 + local_uc = ++arm_state->videocore_use_count;
4838 + local_entity_uc = ++(*entity_uc);
4840 + /* If there's a pending request which hasn't yet been serviced then
4841 + * just clear it. If we're past VC_SUSPEND_REQUESTED state then
4842 + * vc_resume_complete will block until we either resume or fail to
4844 + if (arm_state->vc_suspend_state <= VC_SUSPEND_REQUESTED)
4845 + set_suspend_state(arm_state, VC_SUSPEND_IDLE);
4847 + if ((use_type != USE_TYPE_SERVICE_NO_RESUME) && need_resume(state)) {
4848 + set_resume_state(arm_state, VC_RESUME_REQUESTED);
4849 + vchiq_log_info(vchiq_susp_log_level,
4850 + "%s %s count %d, state count %d",
4851 + __func__, entity, local_entity_uc, local_uc);
4852 + request_poll(state, NULL, 0);
4854 + vchiq_log_trace(vchiq_susp_log_level,
4855 + "%s %s count %d, state count %d",
4856 + __func__, entity, *entity_uc, local_uc);
4859 + write_unlock_bh(&arm_state->susp_res_lock);
4861 + /* Completion is in a done state when we're not suspended, so this won't
4862 + * block for the non-suspended case. */
4863 + if (!try_wait_for_completion(&arm_state->vc_resume_complete)) {
4864 + vchiq_log_info(vchiq_susp_log_level, "%s %s wait for resume",
4865 + __func__, entity);
4866 + if (wait_for_completion_killable(
4867 + &arm_state->vc_resume_complete) != 0) {
4868 + vchiq_log_error(vchiq_susp_log_level, "%s %s wait for "
4869 + "resume interrupted", __func__, entity);
4870 + ret = VCHIQ_ERROR;
4873 + vchiq_log_info(vchiq_susp_log_level, "%s %s resumed", __func__,
4877 + if (ret == VCHIQ_SUCCESS) {
4878 + VCHIQ_STATUS_T status = VCHIQ_SUCCESS;
4879 + long ack_cnt = atomic_xchg(&arm_state->ka_use_ack_count, 0);
4880 + while (ack_cnt && (status == VCHIQ_SUCCESS)) {
4881 + /* Send the use notify to videocore */
4882 + status = vchiq_send_remote_use_active(state);
4883 + if (status == VCHIQ_SUCCESS)
4886 + atomic_add(ack_cnt,
4887 + &arm_state->ka_use_ack_count);
4892 + vchiq_log_trace(vchiq_susp_log_level, "%s exit %d", __func__, ret);
4897 +vchiq_release_internal(VCHIQ_STATE_T *state, VCHIQ_SERVICE_T *service)
4899 + VCHIQ_ARM_STATE_T *arm_state = vchiq_platform_get_arm_state(state);
4900 + VCHIQ_STATUS_T ret = VCHIQ_SUCCESS;
4903 + int local_uc, local_entity_uc;
4908 + vchiq_log_trace(vchiq_susp_log_level, "%s", __func__);
4911 + sprintf(entity, "%c%c%c%c:%03d",
4912 + VCHIQ_FOURCC_AS_4CHARS(service->base.fourcc),
4913 + service->client_id);
4914 + entity_uc = &service->service_use_count;
4916 + sprintf(entity, "PEER: ");
4917 + entity_uc = &arm_state->peer_use_count;
4920 + write_lock_bh(&arm_state->susp_res_lock);
4921 + if (!arm_state->videocore_use_count || !(*entity_uc)) {
4922 + /* Don't use BUG_ON - don't allow user thread to crash kernel */
4923 + WARN_ON(!arm_state->videocore_use_count);
4924 + WARN_ON(!(*entity_uc));
4925 + ret = VCHIQ_ERROR;
4928 + local_uc = --arm_state->videocore_use_count;
4929 + local_entity_uc = --(*entity_uc);
4931 + if (!vchiq_videocore_wanted(state)) {
4932 + if (vchiq_platform_use_suspend_timer() &&
4933 + !arm_state->resume_blocked) {
4934 + /* Only use the timer if we're not trying to force
4935 + * suspend (=> resume_blocked) */
4936 + start_suspend_timer(arm_state);
4938 + vchiq_log_info(vchiq_susp_log_level,
4939 + "%s %s count %d, state count %d - suspending",
4940 + __func__, entity, *entity_uc,
4941 + arm_state->videocore_use_count);
4942 + vchiq_arm_vcsuspend(state);
4945 + vchiq_log_trace(vchiq_susp_log_level,
4946 + "%s %s count %d, state count %d",
4947 + __func__, entity, *entity_uc,
4948 + arm_state->videocore_use_count);
4951 + write_unlock_bh(&arm_state->susp_res_lock);
4954 + vchiq_log_trace(vchiq_susp_log_level, "%s exit %d", __func__, ret);
4959 +vchiq_on_remote_use(VCHIQ_STATE_T *state)
4961 + VCHIQ_ARM_STATE_T *arm_state = vchiq_platform_get_arm_state(state);
4962 + vchiq_log_trace(vchiq_susp_log_level, "%s", __func__);
4963 + atomic_inc(&arm_state->ka_use_count);
4964 + complete(&arm_state->ka_evt);
4968 +vchiq_on_remote_release(VCHIQ_STATE_T *state)
4970 + VCHIQ_ARM_STATE_T *arm_state = vchiq_platform_get_arm_state(state);
4971 + vchiq_log_trace(vchiq_susp_log_level, "%s", __func__);
4972 + atomic_inc(&arm_state->ka_release_count);
4973 + complete(&arm_state->ka_evt);
4977 +vchiq_use_service_internal(VCHIQ_SERVICE_T *service)
4979 + return vchiq_use_internal(service->state, service, USE_TYPE_SERVICE);
4983 +vchiq_release_service_internal(VCHIQ_SERVICE_T *service)
4985 + return vchiq_release_internal(service->state, service);
4988 +VCHIQ_DEBUGFS_NODE_T *
4989 +vchiq_instance_get_debugfs_node(VCHIQ_INSTANCE_T instance)
4991 + return &instance->debugfs_node;
4995 +vchiq_instance_get_use_count(VCHIQ_INSTANCE_T instance)
4997 + VCHIQ_SERVICE_T *service;
4998 + int use_count = 0, i;
5000 + while ((service = next_service_by_instance(instance->state,
5001 + instance, &i)) != NULL) {
5002 + use_count += service->service_use_count;
5003 + unlock_service(service);
5009 +vchiq_instance_get_pid(VCHIQ_INSTANCE_T instance)
5011 + return instance->pid;
5015 +vchiq_instance_get_trace(VCHIQ_INSTANCE_T instance)
5017 + return instance->trace;
5021 +vchiq_instance_set_trace(VCHIQ_INSTANCE_T instance, int trace)
5023 + VCHIQ_SERVICE_T *service;
5026 + while ((service = next_service_by_instance(instance->state,
5027 + instance, &i)) != NULL) {
5028 + service->trace = trace;
5029 + unlock_service(service);
5031 + instance->trace = (trace != 0);
5034 +static void suspend_timer_callback(unsigned long context)
5036 + VCHIQ_STATE_T *state = (VCHIQ_STATE_T *)context;
5037 + VCHIQ_ARM_STATE_T *arm_state = vchiq_platform_get_arm_state(state);
5040 + vchiq_log_info(vchiq_susp_log_level,
5041 + "%s - suspend timer expired - check suspend", __func__);
5042 + vchiq_check_suspend(state);
5048 +vchiq_use_service_no_resume(VCHIQ_SERVICE_HANDLE_T handle)
5050 + VCHIQ_STATUS_T ret = VCHIQ_ERROR;
5051 + VCHIQ_SERVICE_T *service = find_service_by_handle(handle);
5053 + ret = vchiq_use_internal(service->state, service,
5054 + USE_TYPE_SERVICE_NO_RESUME);
5055 + unlock_service(service);
5061 +vchiq_use_service(VCHIQ_SERVICE_HANDLE_T handle)
5063 + VCHIQ_STATUS_T ret = VCHIQ_ERROR;
5064 + VCHIQ_SERVICE_T *service = find_service_by_handle(handle);
5066 + ret = vchiq_use_internal(service->state, service,
5067 + USE_TYPE_SERVICE);
5068 + unlock_service(service);
5074 +vchiq_release_service(VCHIQ_SERVICE_HANDLE_T handle)
5076 + VCHIQ_STATUS_T ret = VCHIQ_ERROR;
5077 + VCHIQ_SERVICE_T *service = find_service_by_handle(handle);
5079 + ret = vchiq_release_internal(service->state, service);
5080 + unlock_service(service);
5086 +vchiq_dump_service_use_state(VCHIQ_STATE_T *state)
5088 + VCHIQ_ARM_STATE_T *arm_state = vchiq_platform_get_arm_state(state);
5090 + /* Only dump 64 services */
5091 + static const int local_max_services = 64;
5092 + /* If there are more than 64 services, only dump those with
5093 + * non-zero use counts */
5094 + int only_nonzero = 0;
5095 + static const char *nz = "<-- preventing suspend";
5097 + enum vc_suspend_status vc_suspend_state;
5098 + enum vc_resume_status vc_resume_state;
5101 + int active_services;
5102 + struct service_data_struct {
5106 + } service_data[local_max_services];
5111 + read_lock_bh(&arm_state->susp_res_lock);
5112 + vc_suspend_state = arm_state->vc_suspend_state;
5113 + vc_resume_state = arm_state->vc_resume_state;
5114 + peer_count = arm_state->peer_use_count;
5115 + vc_use_count = arm_state->videocore_use_count;
5116 + active_services = state->unused_service;
5117 + if (active_services > local_max_services)
5120 + for (i = 0; (i < active_services) && (j < local_max_services); i++) {
5121 + VCHIQ_SERVICE_T *service_ptr = state->services[i];
5125 + if (only_nonzero && !service_ptr->service_use_count)
5128 + if (service_ptr->srvstate != VCHIQ_SRVSTATE_FREE) {
5129 + service_data[j].fourcc = service_ptr->base.fourcc;
5130 + service_data[j].clientid = service_ptr->client_id;
5131 + service_data[j++].use_count = service_ptr->
5132 + service_use_count;
5136 + read_unlock_bh(&arm_state->susp_res_lock);
5138 + vchiq_log_warning(vchiq_susp_log_level,
5139 + "-- Videocore suspend state: %s --",
5140 + suspend_state_names[vc_suspend_state + VC_SUSPEND_NUM_OFFSET]);
5141 + vchiq_log_warning(vchiq_susp_log_level,
5142 + "-- Videocore resume state: %s --",
5143 + resume_state_names[vc_resume_state + VC_RESUME_NUM_OFFSET]);
5146 + vchiq_log_warning(vchiq_susp_log_level, "Too many active "
5147 + "services (%d). Only dumping up to first %d services "
5148 + "with non-zero use-count", active_services,
5149 + local_max_services);
5151 + for (i = 0; i < j; i++) {
5152 + vchiq_log_warning(vchiq_susp_log_level,
5153 + "----- %c%c%c%c:%d service count %d %s",
5154 + VCHIQ_FOURCC_AS_4CHARS(service_data[i].fourcc),
5155 + service_data[i].clientid,
5156 + service_data[i].use_count,
5157 + service_data[i].use_count ? nz : "");
5159 + vchiq_log_warning(vchiq_susp_log_level,
5160 + "----- VCHIQ use count %d", peer_count);
5161 + vchiq_log_warning(vchiq_susp_log_level,
5162 + "--- Overall vchiq instance use count %d", vc_use_count);
5164 + vchiq_dump_platform_use_state(state);
5168 +vchiq_check_service(VCHIQ_SERVICE_T *service)
5170 + VCHIQ_ARM_STATE_T *arm_state;
5171 + VCHIQ_STATUS_T ret = VCHIQ_ERROR;
5173 + if (!service || !service->state)
5176 + vchiq_log_trace(vchiq_susp_log_level, "%s", __func__);
5178 + arm_state = vchiq_platform_get_arm_state(service->state);
5180 + read_lock_bh(&arm_state->susp_res_lock);
5181 + if (service->service_use_count)
5182 + ret = VCHIQ_SUCCESS;
5183 + read_unlock_bh(&arm_state->susp_res_lock);
5185 + if (ret == VCHIQ_ERROR) {
5186 + vchiq_log_error(vchiq_susp_log_level,
5187 + "%s ERROR - %c%c%c%c:%d service count %d, "
5188 + "state count %d, videocore suspend state %s", __func__,
5189 + VCHIQ_FOURCC_AS_4CHARS(service->base.fourcc),
5190 + service->client_id, service->service_use_count,
5191 + arm_state->videocore_use_count,
5192 + suspend_state_names[arm_state->vc_suspend_state +
5193 + VC_SUSPEND_NUM_OFFSET]);
5194 + vchiq_dump_service_use_state(service->state);
5200 +/* stub functions */
5201 +void vchiq_on_remote_use_active(VCHIQ_STATE_T *state)
5206 +void vchiq_platform_conn_state_changed(VCHIQ_STATE_T *state,
5207 + VCHIQ_CONNSTATE_T oldstate, VCHIQ_CONNSTATE_T newstate)
5209 + VCHIQ_ARM_STATE_T *arm_state = vchiq_platform_get_arm_state(state);
5210 + vchiq_log_info(vchiq_susp_log_level, "%d: %s->%s", state->id,
5211 + get_conn_state_name(oldstate), get_conn_state_name(newstate));
5212 + if (state->conn_state == VCHIQ_CONNSTATE_CONNECTED) {
5213 + write_lock_bh(&arm_state->susp_res_lock);
5214 + if (!arm_state->first_connect) {
5215 + char threadname[10];
5216 + arm_state->first_connect = 1;
5217 + write_unlock_bh(&arm_state->susp_res_lock);
5218 + snprintf(threadname, sizeof(threadname), "VCHIQka-%d",
5220 + arm_state->ka_thread = kthread_create(
5221 + &vchiq_keepalive_thread_func,
5224 + if (arm_state->ka_thread == NULL) {
5225 + vchiq_log_error(vchiq_susp_log_level,
5226 + "vchiq: FATAL: couldn't create thread %s",
5229 + wake_up_process(arm_state->ka_thread);
5232 + write_unlock_bh(&arm_state->susp_res_lock);
5236 +static int vchiq_probe(struct platform_device *pdev)
5238 + struct device_node *fw_node;
5239 + struct rpi_firmware *fw;
5243 + fw_node = of_parse_phandle(pdev->dev.of_node, "firmware", 0);
5244 +/* Remove comment when booting without Device Tree is no longer supported
5246 + dev_err(&pdev->dev, "Missing firmware node\n");
5250 + fw = rpi_firmware_get(fw_node);
5252 + return -EPROBE_DEFER;
5254 + platform_set_drvdata(pdev, fw);
5256 + /* create debugfs entries */
5257 + err = vchiq_debugfs_init();
5259 + goto failed_debugfs_init;
5261 + err = alloc_chrdev_region(&vchiq_devid, VCHIQ_MINOR, 1, DEVICE_NAME);
5263 + vchiq_log_error(vchiq_arm_log_level,
5264 + "Unable to allocate device number");
5265 + goto failed_alloc_chrdev;
5267 + cdev_init(&vchiq_cdev, &vchiq_fops);
5268 + vchiq_cdev.owner = THIS_MODULE;
5269 + err = cdev_add(&vchiq_cdev, vchiq_devid, 1);
5271 + vchiq_log_error(vchiq_arm_log_level,
5272 + "Unable to register device");
5273 + goto failed_cdev_add;
5276 + /* create sysfs entries */
5277 + vchiq_class = class_create(THIS_MODULE, DEVICE_NAME);
5278 + ptr_err = vchiq_class;
5279 + if (IS_ERR(ptr_err))
5280 + goto failed_class_create;
5282 + vchiq_dev = device_create(vchiq_class, NULL,
5283 + vchiq_devid, NULL, "vchiq");
5284 + ptr_err = vchiq_dev;
5285 + if (IS_ERR(ptr_err))
5286 + goto failed_device_create;
5288 + err = vchiq_platform_init(pdev, &g_state);
5290 + goto failed_platform_init;
5292 + vchiq_log_info(vchiq_arm_log_level,
5293 + "vchiq: initialised - version %d (min %d), device %d.%d",
5294 + VCHIQ_VERSION, VCHIQ_VERSION_MIN,
5295 + MAJOR(vchiq_devid), MINOR(vchiq_devid));
5299 +failed_platform_init:
5300 + device_destroy(vchiq_class, vchiq_devid);
5301 +failed_device_create:
5302 + class_destroy(vchiq_class);
5303 +failed_class_create:
5304 + cdev_del(&vchiq_cdev);
5305 + err = PTR_ERR(ptr_err);
5307 + unregister_chrdev_region(vchiq_devid, 1);
5308 +failed_alloc_chrdev:
5309 + vchiq_debugfs_deinit();
5310 +failed_debugfs_init:
5311 + vchiq_log_warning(vchiq_arm_log_level, "could not load vchiq");
5315 +static int vchiq_remove(struct platform_device *pdev)
5317 + device_destroy(vchiq_class, vchiq_devid);
5318 + class_destroy(vchiq_class);
5319 + cdev_del(&vchiq_cdev);
5320 + unregister_chrdev_region(vchiq_devid, 1);
5325 +static const struct of_device_id vchiq_of_match[] = {
5326 + { .compatible = "brcm,bcm2835-vchiq", },
5329 +MODULE_DEVICE_TABLE(of, vchiq_of_match);
5331 +static struct platform_driver vchiq_driver = {
5333 + .name = "bcm2835_vchiq",
5334 + .owner = THIS_MODULE,
5335 + .of_match_table = vchiq_of_match,
5337 + .probe = vchiq_probe,
5338 + .remove = vchiq_remove,
5340 +module_platform_driver(vchiq_driver);
5342 +MODULE_LICENSE("GPL");
5343 +MODULE_AUTHOR("Broadcom Corporation");
5345 +++ b/drivers/misc/vc04_services/interface/vchiq_arm/vchiq_arm.h
5348 + * Copyright (c) 2014 Raspberry Pi (Trading) Ltd. All rights reserved.
5349 + * Copyright (c) 2010-2012 Broadcom. All rights reserved.
5351 + * Redistribution and use in source and binary forms, with or without
5352 + * modification, are permitted provided that the following conditions
5354 + * 1. Redistributions of source code must retain the above copyright
5355 + * notice, this list of conditions, and the following disclaimer,
5356 + * without modification.
5357 + * 2. Redistributions in binary form must reproduce the above copyright
5358 + * notice, this list of conditions and the following disclaimer in the
5359 + * documentation and/or other materials provided with the distribution.
5360 + * 3. The names of the above-listed copyright holders may not be used
5361 + * to endorse or promote products derived from this software without
5362 + * specific prior written permission.
5364 + * ALTERNATIVELY, this software may be distributed under the terms of the
5365 + * GNU General Public License ("GPL") version 2, as published by the Free
5366 + * Software Foundation.
5368 + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS
5369 + * IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
5370 + * THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
5371 + * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR
5372 + * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
5373 + * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
5374 + * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
5375 + * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
5376 + * LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
5377 + * NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
5378 + * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
5381 +#ifndef VCHIQ_ARM_H
5382 +#define VCHIQ_ARM_H
5384 +#include <linux/mutex.h>
5385 +#include <linux/platform_device.h>
5386 +#include <linux/semaphore.h>
5387 +#include <linux/atomic.h>
5388 +#include "vchiq_core.h"
5389 +#include "vchiq_debugfs.h"
5392 +enum vc_suspend_status {
5393 + VC_SUSPEND_FORCE_CANCELED = -3, /* Force suspend canceled, too busy */
5394 + VC_SUSPEND_REJECTED = -2, /* Videocore rejected suspend request */
5395 + VC_SUSPEND_FAILED = -1, /* Videocore suspend failed */
5396 + VC_SUSPEND_IDLE = 0, /* VC active, no suspend actions */
5397 + VC_SUSPEND_REQUESTED, /* User has requested suspend */
5398 + VC_SUSPEND_IN_PROGRESS, /* Slot handler has recvd suspend request */
5399 + VC_SUSPEND_SUSPENDED /* Videocore suspend succeeded */
5402 +enum vc_resume_status {
5403 + VC_RESUME_FAILED = -1, /* Videocore resume failed */
5404 + VC_RESUME_IDLE = 0, /* VC suspended, no resume actions */
5405 + VC_RESUME_REQUESTED, /* User has requested resume */
5406 + VC_RESUME_IN_PROGRESS, /* Slot handler has received resume request */
5407 + VC_RESUME_RESUMED /* Videocore resumed successfully (active) */
5413 + USE_TYPE_SERVICE_NO_RESUME,
5419 +typedef struct vchiq_arm_state_struct {
5420 + /* Keepalive-related data */
5421 + struct task_struct *ka_thread;
5422 + struct completion ka_evt;
5423 + atomic_t ka_use_count;
5424 + atomic_t ka_use_ack_count;
5425 + atomic_t ka_release_count;
5427 + struct completion vc_suspend_complete;
5428 + struct completion vc_resume_complete;
5430 + rwlock_t susp_res_lock;
5431 + enum vc_suspend_status vc_suspend_state;
5432 + enum vc_resume_status vc_resume_state;
5434 + unsigned int wake_address;
5436 + struct timer_list suspend_timer;
5437 + int suspend_timer_timeout;
5438 + int suspend_timer_running;
5440 + /* Global use count for videocore.
5441 + ** This is equal to the sum of the use counts for all services. When
5442 + ** this hits zero the videocore suspend procedure will be initiated.
5444 + int videocore_use_count;
5446 + /* Use count to track requests from videocore peer.
5447 + ** This use count is not associated with a service, so needs to be
5448 + ** tracked separately with the state.
5450 + int peer_use_count;
5452 + /* Flag to indicate whether resume is blocked. This happens when the
5453 + ** ARM is suspending.
5455 + struct completion resume_blocker;
5456 + int resume_blocked;
5457 + struct completion blocked_blocker;
5458 + int blocked_count;
5460 + int autosuspend_override;
5462 + /* Flag to indicate that the first vchiq connect has made it through.
5463 + ** This means that both sides should be fully ready, and we should
5464 + ** be able to suspend after this point.
5466 + int first_connect;
5468 + unsigned long long suspend_start_time;
5469 + unsigned long long sleep_start_time;
5470 + unsigned long long resume_start_time;
5471 + unsigned long long last_wake_time;
5473 +} VCHIQ_ARM_STATE_T;
5475 +extern int vchiq_arm_log_level;
5476 +extern int vchiq_susp_log_level;
5478 +int vchiq_platform_init(struct platform_device *pdev, VCHIQ_STATE_T *state);
5480 +extern VCHIQ_STATE_T *
5481 +vchiq_get_state(void);
5483 +extern VCHIQ_STATUS_T
5484 +vchiq_arm_vcsuspend(VCHIQ_STATE_T *state);
5486 +extern VCHIQ_STATUS_T
5487 +vchiq_arm_force_suspend(VCHIQ_STATE_T *state);
5490 +vchiq_arm_allow_resume(VCHIQ_STATE_T *state);
5492 +extern VCHIQ_STATUS_T
5493 +vchiq_arm_vcresume(VCHIQ_STATE_T *state);
5495 +extern VCHIQ_STATUS_T
5496 +vchiq_arm_init_state(VCHIQ_STATE_T *state, VCHIQ_ARM_STATE_T *arm_state);
5499 +vchiq_check_resume(VCHIQ_STATE_T *state);
5502 +vchiq_check_suspend(VCHIQ_STATE_T *state);
5504 +vchiq_use_service(VCHIQ_SERVICE_HANDLE_T handle);
5506 +extern VCHIQ_STATUS_T
5507 +vchiq_release_service(VCHIQ_SERVICE_HANDLE_T handle);
5509 +extern VCHIQ_STATUS_T
5510 +vchiq_check_service(VCHIQ_SERVICE_T *service);
5512 +extern VCHIQ_STATUS_T
5513 +vchiq_platform_suspend(VCHIQ_STATE_T *state);
5516 +vchiq_platform_videocore_wanted(VCHIQ_STATE_T *state);
5519 +vchiq_platform_use_suspend_timer(void);
5522 +vchiq_dump_platform_use_state(VCHIQ_STATE_T *state);
5525 +vchiq_dump_service_use_state(VCHIQ_STATE_T *state);
5527 +extern VCHIQ_ARM_STATE_T*
5528 +vchiq_platform_get_arm_state(VCHIQ_STATE_T *state);
5531 +vchiq_videocore_wanted(VCHIQ_STATE_T *state);
5533 +extern VCHIQ_STATUS_T
5534 +vchiq_use_internal(VCHIQ_STATE_T *state, VCHIQ_SERVICE_T *service,
5535 + enum USE_TYPE_E use_type);
5536 +extern VCHIQ_STATUS_T
5537 +vchiq_release_internal(VCHIQ_STATE_T *state, VCHIQ_SERVICE_T *service);
5539 +extern VCHIQ_DEBUGFS_NODE_T *
5540 +vchiq_instance_get_debugfs_node(VCHIQ_INSTANCE_T instance);
5543 +vchiq_instance_get_use_count(VCHIQ_INSTANCE_T instance);
5546 +vchiq_instance_get_pid(VCHIQ_INSTANCE_T instance);
5549 +vchiq_instance_get_trace(VCHIQ_INSTANCE_T instance);
5552 +vchiq_instance_set_trace(VCHIQ_INSTANCE_T instance, int trace);
5555 +set_suspend_state(VCHIQ_ARM_STATE_T *arm_state,
5556 + enum vc_suspend_status new_state);
5559 +set_resume_state(VCHIQ_ARM_STATE_T *arm_state,
5560 + enum vc_resume_status new_state);
5563 +start_suspend_timer(VCHIQ_ARM_STATE_T *arm_state);
5566 +#endif /* VCHIQ_ARM_H */
5568 +++ b/drivers/misc/vc04_services/interface/vchiq_arm/vchiq_build_info.h
5571 + * Copyright (c) 2010-2012 Broadcom. All rights reserved.
5573 + * Redistribution and use in source and binary forms, with or without
5574 + * modification, are permitted provided that the following conditions
5576 + * 1. Redistributions of source code must retain the above copyright
5577 + * notice, this list of conditions, and the following disclaimer,
5578 + * without modification.
5579 + * 2. Redistributions in binary form must reproduce the above copyright
5580 + * notice, this list of conditions and the following disclaimer in the
5581 + * documentation and/or other materials provided with the distribution.
5582 + * 3. The names of the above-listed copyright holders may not be used
5583 + * to endorse or promote products derived from this software without
5584 + * specific prior written permission.
5586 + * ALTERNATIVELY, this software may be distributed under the terms of the
5587 + * GNU General Public License ("GPL") version 2, as published by the Free
5588 + * Software Foundation.
5590 + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS
5591 + * IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
5592 + * THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
5593 + * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR
5594 + * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
5595 + * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
5596 + * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
5597 + * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
5598 + * LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
5599 + * NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
5600 + * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
5603 +const char *vchiq_get_build_hostname(void);
5604 +const char *vchiq_get_build_version(void);
5605 +const char *vchiq_get_build_time(void);
5606 +const char *vchiq_get_build_date(void);
5608 +++ b/drivers/misc/vc04_services/interface/vchiq_arm/vchiq_cfg.h
5611 + * Copyright (c) 2010-2014 Broadcom. All rights reserved.
5613 + * Redistribution and use in source and binary forms, with or without
5614 + * modification, are permitted provided that the following conditions
5616 + * 1. Redistributions of source code must retain the above copyright
5617 + * notice, this list of conditions, and the following disclaimer,
5618 + * without modification.
5619 + * 2. Redistributions in binary form must reproduce the above copyright
5620 + * notice, this list of conditions and the following disclaimer in the
5621 + * documentation and/or other materials provided with the distribution.
5622 + * 3. The names of the above-listed copyright holders may not be used
5623 + * to endorse or promote products derived from this software without
5624 + * specific prior written permission.
5626 + * ALTERNATIVELY, this software may be distributed under the terms of the
5627 + * GNU General Public License ("GPL") version 2, as published by the Free
5628 + * Software Foundation.
5630 + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS
5631 + * IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
5632 + * THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
5633 + * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR
5634 + * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
5635 + * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
5636 + * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
5637 + * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
5638 + * LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
5639 + * NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
5640 + * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
5643 +#ifndef VCHIQ_CFG_H
5644 +#define VCHIQ_CFG_H
5646 +#define VCHIQ_MAGIC VCHIQ_MAKE_FOURCC('V', 'C', 'H', 'I')
5647 +/* The version of VCHIQ - change with any non-trivial change */
5648 +#define VCHIQ_VERSION 8
5649 +/* The minimum compatible version - update to match VCHIQ_VERSION with any
5650 +** incompatible change */
5651 +#define VCHIQ_VERSION_MIN 3
5653 +/* The version that introduced the VCHIQ_IOC_LIB_VERSION ioctl */
5654 +#define VCHIQ_VERSION_LIB_VERSION 7
5656 +/* The version that introduced the VCHIQ_IOC_CLOSE_DELIVERED ioctl */
5657 +#define VCHIQ_VERSION_CLOSE_DELIVERED 7
5659 +/* The version that made it safe to use SYNCHRONOUS mode */
5660 +#define VCHIQ_VERSION_SYNCHRONOUS_MODE 8
5662 +#define VCHIQ_MAX_STATES 1
5663 +#define VCHIQ_MAX_SERVICES 4096
5664 +#define VCHIQ_MAX_SLOTS 128
5665 +#define VCHIQ_MAX_SLOTS_PER_SIDE 64
5667 +#define VCHIQ_NUM_CURRENT_BULKS 32
5668 +#define VCHIQ_NUM_SERVICE_BULKS 4
5670 +#ifndef VCHIQ_ENABLE_DEBUG
5671 +#define VCHIQ_ENABLE_DEBUG 1
5674 +#ifndef VCHIQ_ENABLE_STATS
5675 +#define VCHIQ_ENABLE_STATS 1
5678 +#endif /* VCHIQ_CFG_H */
5680 +++ b/drivers/misc/vc04_services/interface/vchiq_arm/vchiq_connected.c
5683 + * Copyright (c) 2010-2012 Broadcom. All rights reserved.
5685 + * Redistribution and use in source and binary forms, with or without
5686 + * modification, are permitted provided that the following conditions
5688 + * 1. Redistributions of source code must retain the above copyright
5689 + * notice, this list of conditions, and the following disclaimer,
5690 + * without modification.
5691 + * 2. Redistributions in binary form must reproduce the above copyright
5692 + * notice, this list of conditions and the following disclaimer in the
5693 + * documentation and/or other materials provided with the distribution.
5694 + * 3. The names of the above-listed copyright holders may not be used
5695 + * to endorse or promote products derived from this software without
5696 + * specific prior written permission.
5698 + * ALTERNATIVELY, this software may be distributed under the terms of the
5699 + * GNU General Public License ("GPL") version 2, as published by the Free
5700 + * Software Foundation.
5702 + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS
5703 + * IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
5704 + * THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
5705 + * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR
5706 + * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
5707 + * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
5708 + * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
5709 + * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
5710 + * LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
5711 + * NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
5712 + * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
5715 +#include "vchiq_connected.h"
5716 +#include "vchiq_core.h"
5717 +#include "vchiq_killable.h"
5718 +#include <linux/module.h>
5719 +#include <linux/mutex.h>
5721 +#define MAX_CALLBACKS 10
5723 +static int g_connected;
5724 +static int g_num_deferred_callbacks;
5725 +static VCHIQ_CONNECTED_CALLBACK_T g_deferred_callback[MAX_CALLBACKS];
5726 +static int g_once_init;
5727 +static struct mutex g_connected_mutex;
5729 +/****************************************************************************
5731 +* Function to initialize our lock.
5733 +***************************************************************************/
5735 +static void connected_init(void)
5737 + if (!g_once_init) {
5738 + mutex_init(&g_connected_mutex);
5743 +/****************************************************************************
5745 +* This function is used to defer initialization until the vchiq stack is
5746 +* initialized. If the stack is already initialized, then the callback will
5747 +* be made immediately, otherwise it will be deferred until
5748 +* vchiq_call_connected_callbacks is called.
5750 +***************************************************************************/
5752 +void vchiq_add_connected_callback(VCHIQ_CONNECTED_CALLBACK_T callback)
5756 + if (mutex_lock_interruptible(&g_connected_mutex) != 0)
5760 + /* We're already connected. Call the callback immediately. */
5764 + if (g_num_deferred_callbacks >= MAX_CALLBACKS)
5765 + vchiq_log_error(vchiq_core_log_level,
5766 + "There are already %d callbacks registered - "
5767 + "please increase MAX_CALLBACKS",
5768 + g_num_deferred_callbacks);
5770 + g_deferred_callback[g_num_deferred_callbacks] =
5772 + g_num_deferred_callbacks++;
5775 + mutex_unlock(&g_connected_mutex);
5778 +/****************************************************************************
5780 +* This function is called by the vchiq stack once it has been connected to
5781 +* the videocore and clients can start to use the stack.
5783 +***************************************************************************/
5785 +void vchiq_call_connected_callbacks(void)
5791 + if (mutex_lock_interruptible(&g_connected_mutex) != 0)
5794 + for (i = 0; i < g_num_deferred_callbacks; i++)
5795 + g_deferred_callback[i]();
5797 + g_num_deferred_callbacks = 0;
5799 + mutex_unlock(&g_connected_mutex);
5801 +EXPORT_SYMBOL(vchiq_add_connected_callback);
5803 +++ b/drivers/misc/vc04_services/interface/vchiq_arm/vchiq_connected.h
5806 + * Copyright (c) 2010-2012 Broadcom. All rights reserved.
5808 + * Redistribution and use in source and binary forms, with or without
5809 + * modification, are permitted provided that the following conditions
5811 + * 1. Redistributions of source code must retain the above copyright
5812 + * notice, this list of conditions, and the following disclaimer,
5813 + * without modification.
5814 + * 2. Redistributions in binary form must reproduce the above copyright
5815 + * notice, this list of conditions and the following disclaimer in the
5816 + * documentation and/or other materials provided with the distribution.
5817 + * 3. The names of the above-listed copyright holders may not be used
5818 + * to endorse or promote products derived from this software without
5819 + * specific prior written permission.
5821 + * ALTERNATIVELY, this software may be distributed under the terms of the
5822 + * GNU General Public License ("GPL") version 2, as published by the Free
5823 + * Software Foundation.
5825 + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS
5826 + * IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
5827 + * THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
5828 + * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR
5829 + * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
5830 + * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
5831 + * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
5832 + * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
5833 + * LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
5834 + * NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
5835 + * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
5838 +#ifndef VCHIQ_CONNECTED_H
5839 +#define VCHIQ_CONNECTED_H
5841 +/* ---- Include Files ----------------------------------------------------- */
5843 +/* ---- Constants and Types ---------------------------------------------- */
5845 +typedef void (*VCHIQ_CONNECTED_CALLBACK_T)(void);
5847 +/* ---- Variable Externs ------------------------------------------------- */
5849 +/* ---- Function Prototypes ---------------------------------------------- */
5851 +void vchiq_add_connected_callback(VCHIQ_CONNECTED_CALLBACK_T callback);
5852 +void vchiq_call_connected_callbacks(void);
5854 +#endif /* VCHIQ_CONNECTED_H */
5856 +++ b/drivers/misc/vc04_services/interface/vchiq_arm/vchiq_core.c
5859 + * Copyright (c) 2010-2012 Broadcom. All rights reserved.
5861 + * Redistribution and use in source and binary forms, with or without
5862 + * modification, are permitted provided that the following conditions
5864 + * 1. Redistributions of source code must retain the above copyright
5865 + * notice, this list of conditions, and the following disclaimer,
5866 + * without modification.
5867 + * 2. Redistributions in binary form must reproduce the above copyright
5868 + * notice, this list of conditions and the following disclaimer in the
5869 + * documentation and/or other materials provided with the distribution.
5870 + * 3. The names of the above-listed copyright holders may not be used
5871 + * to endorse or promote products derived from this software without
5872 + * specific prior written permission.
5874 + * ALTERNATIVELY, this software may be distributed under the terms of the
5875 + * GNU General Public License ("GPL") version 2, as published by the Free
5876 + * Software Foundation.
5878 + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS
5879 + * IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
5880 + * THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
5881 + * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR
5882 + * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
5883 + * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
5884 + * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
5885 + * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
5886 + * LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
5887 + * NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
5888 + * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
5891 +#include "vchiq_core.h"
5892 +#include "vchiq_killable.h"
5894 +#define VCHIQ_SLOT_HANDLER_STACK 8192
5896 +#define HANDLE_STATE_SHIFT 12
5898 +#define SLOT_INFO_FROM_INDEX(state, index) (state->slot_info + (index))
5899 +#define SLOT_DATA_FROM_INDEX(state, index) (state->slot_data + (index))
5900 +#define SLOT_INDEX_FROM_DATA(state, data) \
5901 + (((unsigned int)((char *)data - (char *)state->slot_data)) / \
5903 +#define SLOT_INDEX_FROM_INFO(state, info) \
5904 + ((unsigned int)(info - state->slot_info))
5905 +#define SLOT_QUEUE_INDEX_FROM_POS(pos) \
5906 + ((int)((unsigned int)(pos) / VCHIQ_SLOT_SIZE))
5908 +#define BULK_INDEX(x) (x & (VCHIQ_NUM_SERVICE_BULKS - 1))
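BULK_INDEX relies on VCHIQ_NUM_SERVICE_BULKS being a power of two (enforced by the static assertions further down), which makes the AND-mask equivalent to a modulo. A minimal standalone sketch, using an illustrative ring size of 4 rather than the driver's real constant:

```c
#include <assert.h>

/* Illustrative ring size; the driver statically asserts that the real
 * VCHIQ_NUM_SERVICE_BULKS is a power of two. */
#define NUM_SERVICE_BULKS 4u

/* With a power-of-two size, masking a free-running index is the same
 * as taking it modulo the ring size, so counters never need explicit
 * wrap-around handling. */
#define BULK_INDEX(x) ((x) & (NUM_SERVICE_BULKS - 1u))
```

Because indices are only ever masked on use, the producer and consumer counters can simply increment forever.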
5910 +#define SRVTRACE_LEVEL(srv) \
5911 + (((srv) && (srv)->trace) ? VCHIQ_LOG_TRACE : vchiq_core_msg_log_level)
5912 +#define SRVTRACE_ENABLED(srv, lev) \
5913 + (((srv) && (srv)->trace) || (vchiq_core_msg_log_level >= (lev)))
5915 +struct vchiq_open_payload {
5919 + short version_min;
5922 +struct vchiq_openack_payload {
5928 + QMFLAGS_IS_BLOCKING = (1 << 0),
5929 + QMFLAGS_NO_MUTEX_LOCK = (1 << 1),
5930 + QMFLAGS_NO_MUTEX_UNLOCK = (1 << 2)
5933 +/* we require this for consistency between endpoints */
5934 +vchiq_static_assert(sizeof(VCHIQ_HEADER_T) == 8);
5935 +vchiq_static_assert(IS_POW2(sizeof(VCHIQ_HEADER_T)));
5936 +vchiq_static_assert(IS_POW2(VCHIQ_NUM_CURRENT_BULKS));
5937 +vchiq_static_assert(IS_POW2(VCHIQ_NUM_SERVICE_BULKS));
5938 +vchiq_static_assert(IS_POW2(VCHIQ_MAX_SERVICES));
5939 +vchiq_static_assert(VCHIQ_VERSION >= VCHIQ_VERSION_MIN);
5941 +/* Run time control of log level, based on KERN_XXX level. */
5942 +int vchiq_core_log_level = VCHIQ_LOG_DEFAULT;
5943 +int vchiq_core_msg_log_level = VCHIQ_LOG_DEFAULT;
5944 +int vchiq_sync_log_level = VCHIQ_LOG_DEFAULT;
5946 +static atomic_t pause_bulks_count = ATOMIC_INIT(0);
5948 +static DEFINE_SPINLOCK(service_spinlock);
5949 +DEFINE_SPINLOCK(bulk_waiter_spinlock);
5950 +DEFINE_SPINLOCK(quota_spinlock);
5952 +VCHIQ_STATE_T *vchiq_states[VCHIQ_MAX_STATES];
5953 +static unsigned int handle_seq;
5955 +static const char *const srvstate_names[] = {
5968 +static const char *const reason_names[] = {
5971 + "MESSAGE_AVAILABLE",
5972 + "BULK_TRANSMIT_DONE",
5973 + "BULK_RECEIVE_DONE",
5974 + "BULK_TRANSMIT_ABORTED",
5975 + "BULK_RECEIVE_ABORTED"
5978 +static const char *const conn_state_names[] = {
5992 +release_message_sync(VCHIQ_STATE_T *state, VCHIQ_HEADER_T *header);
5994 +static const char *msg_type_str(unsigned int msg_type)
5996 + switch (msg_type) {
5997 + case VCHIQ_MSG_PADDING: return "PADDING";
5998 + case VCHIQ_MSG_CONNECT: return "CONNECT";
5999 + case VCHIQ_MSG_OPEN: return "OPEN";
6000 + case VCHIQ_MSG_OPENACK: return "OPENACK";
6001 + case VCHIQ_MSG_CLOSE: return "CLOSE";
6002 + case VCHIQ_MSG_DATA: return "DATA";
6003 + case VCHIQ_MSG_BULK_RX: return "BULK_RX";
6004 + case VCHIQ_MSG_BULK_TX: return "BULK_TX";
6005 + case VCHIQ_MSG_BULK_RX_DONE: return "BULK_RX_DONE";
6006 + case VCHIQ_MSG_BULK_TX_DONE: return "BULK_TX_DONE";
6007 + case VCHIQ_MSG_PAUSE: return "PAUSE";
6008 + case VCHIQ_MSG_RESUME: return "RESUME";
6009 + case VCHIQ_MSG_REMOTE_USE: return "REMOTE_USE";
6010 + case VCHIQ_MSG_REMOTE_RELEASE: return "REMOTE_RELEASE";
6011 + case VCHIQ_MSG_REMOTE_USE_ACTIVE: return "REMOTE_USE_ACTIVE";
6017 +vchiq_set_service_state(VCHIQ_SERVICE_T *service, int newstate)
6019 + vchiq_log_info(vchiq_core_log_level, "%d: srv:%d %s->%s",
6020 + service->state->id, service->localport,
6021 + srvstate_names[service->srvstate],
6022 + srvstate_names[newstate]);
6023 + service->srvstate = newstate;
6027 +find_service_by_handle(VCHIQ_SERVICE_HANDLE_T handle)
6029 + VCHIQ_SERVICE_T *service;
6031 + spin_lock(&service_spinlock);
6032 + service = handle_to_service(handle);
6033 + if (service && (service->srvstate != VCHIQ_SRVSTATE_FREE) &&
6034 + (service->handle == handle)) {
6035 + BUG_ON(service->ref_count == 0);
6036 + service->ref_count++;
6039 + spin_unlock(&service_spinlock);
6042 + vchiq_log_info(vchiq_core_log_level,
6043 + "Invalid service handle 0x%x", handle);
6049 +find_service_by_port(VCHIQ_STATE_T *state, int localport)
6051 + VCHIQ_SERVICE_T *service = NULL;
6052 + if ((unsigned int)localport <= VCHIQ_PORT_MAX) {
6053 + spin_lock(&service_spinlock);
6054 + service = state->services[localport];
6055 + if (service && (service->srvstate != VCHIQ_SRVSTATE_FREE)) {
6056 + BUG_ON(service->ref_count == 0);
6057 + service->ref_count++;
6060 + spin_unlock(&service_spinlock);
6064 + vchiq_log_info(vchiq_core_log_level,
6065 + "Invalid port %d", localport);
6071 +find_service_for_instance(VCHIQ_INSTANCE_T instance,
6072 + VCHIQ_SERVICE_HANDLE_T handle) {
6073 + VCHIQ_SERVICE_T *service;
6075 + spin_lock(&service_spinlock);
6076 + service = handle_to_service(handle);
6077 + if (service && (service->srvstate != VCHIQ_SRVSTATE_FREE) &&
6078 + (service->handle == handle) &&
6079 + (service->instance == instance)) {
6080 + BUG_ON(service->ref_count == 0);
6081 + service->ref_count++;
6084 + spin_unlock(&service_spinlock);
6087 + vchiq_log_info(vchiq_core_log_level,
6088 + "Invalid service handle 0x%x", handle);
6094 +find_closed_service_for_instance(VCHIQ_INSTANCE_T instance,
6095 + VCHIQ_SERVICE_HANDLE_T handle) {
6096 + VCHIQ_SERVICE_T *service;
6098 + spin_lock(&service_spinlock);
6099 + service = handle_to_service(handle);
6101 + ((service->srvstate == VCHIQ_SRVSTATE_FREE) ||
6102 + (service->srvstate == VCHIQ_SRVSTATE_CLOSED)) &&
6103 + (service->handle == handle) &&
6104 + (service->instance == instance)) {
6105 + BUG_ON(service->ref_count == 0);
6106 + service->ref_count++;
6109 + spin_unlock(&service_spinlock);
6112 + vchiq_log_info(vchiq_core_log_level,
6113 + "Invalid service handle 0x%x", handle);
6119 +next_service_by_instance(VCHIQ_STATE_T *state, VCHIQ_INSTANCE_T instance,
6122 + VCHIQ_SERVICE_T *service = NULL;
6125 + spin_lock(&service_spinlock);
6126 + while (idx < state->unused_service) {
6127 + VCHIQ_SERVICE_T *srv = state->services[idx++];
6128 + if (srv && (srv->srvstate != VCHIQ_SRVSTATE_FREE) &&
6129 + (srv->instance == instance)) {
6131 + BUG_ON(service->ref_count == 0);
6132 + service->ref_count++;
6136 + spin_unlock(&service_spinlock);
6144 +lock_service(VCHIQ_SERVICE_T *service)
6146 + spin_lock(&service_spinlock);
6147 + BUG_ON(!service || (service->ref_count == 0));
6149 + service->ref_count++;
6150 + spin_unlock(&service_spinlock);
6154 +unlock_service(VCHIQ_SERVICE_T *service)
6156 + VCHIQ_STATE_T *state = service->state;
6157 + spin_lock(&service_spinlock);
6158 + BUG_ON(!service || (service->ref_count == 0));
6159 + if (service && service->ref_count) {
6160 + service->ref_count--;
6161 + if (!service->ref_count) {
6162 + BUG_ON(service->srvstate != VCHIQ_SRVSTATE_FREE);
6163 + state->services[service->localport] = NULL;
6167 + spin_unlock(&service_spinlock);
6169 + if (service && service->userdata_term)
6170 + service->userdata_term(service->base.userdata);
6176 +vchiq_get_client_id(VCHIQ_SERVICE_HANDLE_T handle)
6178 + VCHIQ_SERVICE_T *service = find_service_by_handle(handle);
6181 + id = service ? service->client_id : 0;
6183 + unlock_service(service);
6189 +vchiq_get_service_userdata(VCHIQ_SERVICE_HANDLE_T handle)
6191 + VCHIQ_SERVICE_T *service = handle_to_service(handle);
6193 + return service ? service->base.userdata : NULL;
6197 +vchiq_get_service_fourcc(VCHIQ_SERVICE_HANDLE_T handle)
6199 + VCHIQ_SERVICE_T *service = handle_to_service(handle);
6201 + return service ? service->base.fourcc : 0;
6205 +mark_service_closing_internal(VCHIQ_SERVICE_T *service, int sh_thread)
6207 + VCHIQ_STATE_T *state = service->state;
6208 + VCHIQ_SERVICE_QUOTA_T *service_quota;
6210 + service->closing = 1;
6212 + /* Synchronise with other threads. */
6213 + mutex_lock(&state->recycle_mutex);
6214 + mutex_unlock(&state->recycle_mutex);
6215 + if (!sh_thread || (state->conn_state != VCHIQ_CONNSTATE_PAUSE_SENT)) {
6216 + /* If we're pausing then the slot_mutex is held until resume
6217 + * by the slot handler. Therefore don't try to acquire this
6218 + * mutex if we're the slot handler and in the pause sent state.
6219 + * We don't need to in this case anyway. */
6220 + mutex_lock(&state->slot_mutex);
6221 + mutex_unlock(&state->slot_mutex);
6224 + /* Unblock any sending thread. */
6225 + service_quota = &state->service_quotas[service->localport];
6226 + up(&service_quota->quota_event);
6230 +mark_service_closing(VCHIQ_SERVICE_T *service)
6232 + mark_service_closing_internal(service, 0);
6235 +static inline VCHIQ_STATUS_T
6236 +make_service_callback(VCHIQ_SERVICE_T *service, VCHIQ_REASON_T reason,
6237 + VCHIQ_HEADER_T *header, void *bulk_userdata)
6239 + VCHIQ_STATUS_T status;
6240 + vchiq_log_trace(vchiq_core_log_level, "%d: callback:%d (%s, %x, %x)",
6241 + service->state->id, service->localport, reason_names[reason],
6242 + (unsigned int)header, (unsigned int)bulk_userdata);
6243 + status = service->base.callback(reason, header, service->handle,
6245 + if (status == VCHIQ_ERROR) {
6246 + vchiq_log_warning(vchiq_core_log_level,
6247 + "%d: ignoring ERROR from callback to service %x",
6248 + service->state->id, service->handle);
6249 + status = VCHIQ_SUCCESS;
6255 +vchiq_set_conn_state(VCHIQ_STATE_T *state, VCHIQ_CONNSTATE_T newstate)
6257 + VCHIQ_CONNSTATE_T oldstate = state->conn_state;
6258 + vchiq_log_info(vchiq_core_log_level, "%d: %s->%s", state->id,
6259 + conn_state_names[oldstate],
6260 + conn_state_names[newstate]);
6261 + state->conn_state = newstate;
6262 + vchiq_platform_conn_state_changed(state, oldstate, newstate);
6266 +remote_event_create(REMOTE_EVENT_T *event)
6269 + /* Don't clear the 'fired' flag because it may already have been set
6270 + ** by the other side. */
6271 + sema_init(event->event, 0);
6275 +remote_event_destroy(REMOTE_EVENT_T *event)
6281 +remote_event_wait(REMOTE_EVENT_T *event)
6283 + if (!event->fired) {
6286 + if (!event->fired) {
6287 + if (down_interruptible(event->event) != 0) {
6301 +remote_event_signal_local(REMOTE_EVENT_T *event)
6308 +remote_event_poll(REMOTE_EVENT_T *event)
6310 + if (event->fired && event->armed)
6311 + remote_event_signal_local(event);
6315 +remote_event_pollall(VCHIQ_STATE_T *state)
6317 + remote_event_poll(&state->local->sync_trigger);
6318 + remote_event_poll(&state->local->sync_release);
6319 + remote_event_poll(&state->local->trigger);
6320 + remote_event_poll(&state->local->recycle);
6323 +/* Round up message sizes so that any space at the end of a slot is always big
6324 +** enough for a header. This relies on header size being a power of two, which
6325 +** has been verified earlier by a static assertion. */
6327 +static inline unsigned int
6328 +calc_stride(unsigned int size)
6330 + /* Allow room for the header */
6331 + size += sizeof(VCHIQ_HEADER_T);
6334 + return (size + sizeof(VCHIQ_HEADER_T) - 1) & ~(sizeof(VCHIQ_HEADER_T)
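The rounding can be checked in isolation. A minimal sketch of calc_stride(), with HEADER_SIZE standing in for sizeof(VCHIQ_HEADER_T), which the driver statically asserts to be 8 bytes and a power of two:

```c
#include <assert.h>

/* Stand-in for sizeof(VCHIQ_HEADER_T); asserted elsewhere to be a
 * power of two, which the mask arithmetic below depends on. */
#define HEADER_SIZE 8u

/* Reserve room for the header, then round up to the next multiple of
 * the header size, so the tail of any slot can always hold at least
 * a padding header. */
static unsigned int calc_stride(unsigned int size)
{
	size += HEADER_SIZE;
	return (size + HEADER_SIZE - 1u) & ~(HEADER_SIZE - 1u);
}
```

For example, a 5-byte payload needs 13 bytes with its header and is rounded up to a 16-byte stride.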
6338 +/* Called by the slot handler thread */
6339 +static VCHIQ_SERVICE_T *
6340 +get_listening_service(VCHIQ_STATE_T *state, int fourcc)
6344 + WARN_ON(fourcc == VCHIQ_FOURCC_INVALID);
6346 + for (i = 0; i < state->unused_service; i++) {
6347 + VCHIQ_SERVICE_T *service = state->services[i];
6349 + (service->public_fourcc == fourcc) &&
6350 + ((service->srvstate == VCHIQ_SRVSTATE_LISTENING) ||
6351 + ((service->srvstate == VCHIQ_SRVSTATE_OPEN) &&
6352 + (service->remoteport == VCHIQ_PORT_FREE)))) {
6353 + lock_service(service);
6361 +/* Called by the slot handler thread */
6362 +static VCHIQ_SERVICE_T *
6363 +get_connected_service(VCHIQ_STATE_T *state, unsigned int port)
6366 + for (i = 0; i < state->unused_service; i++) {
6367 + VCHIQ_SERVICE_T *service = state->services[i];
6368 + if (service && (service->srvstate == VCHIQ_SRVSTATE_OPEN)
6369 + && (service->remoteport == port)) {
6370 + lock_service(service);
6378 +request_poll(VCHIQ_STATE_T *state, VCHIQ_SERVICE_T *service, int poll_type)
6384 + value = atomic_read(&service->poll_flags);
6385 + } while (atomic_cmpxchg(&service->poll_flags, value,
6386 + value | (1 << poll_type)) != value);
6389 + value = atomic_read(&state->poll_services[
6390 + service->localport>>5]);
6391 + } while (atomic_cmpxchg(
6392 + &state->poll_services[service->localport>>5],
6393 + value, value | (1 << (service->localport & 0x1f)))
6397 + state->poll_needed = 1;
6400 + /* ... and ensure the slot handler runs. */
6401 + remote_event_signal_local(&state->local->trigger);
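The read-modify-CAS loop that request_poll() runs on poll_flags can be sketched with C11 atomics; the names below are stand-ins for illustration, not the driver's own:

```c
#include <assert.h>
#include <stdatomic.h>

/* Stand-in for the per-service poll_flags word. */
static atomic_uint poll_flags;

/* Set one poll bit without taking a lock: re-read the current value
 * and retry the compare-and-swap until no other thread has raced us. */
static void set_poll_bit(unsigned int poll_type)
{
	unsigned int value;
	do {
		value = atomic_load(&poll_flags);
	} while (!atomic_compare_exchange_weak(&poll_flags, &value,
					       value | (1u << poll_type)));
}
```

Losing a race simply costs one more loop iteration; no bit set by a concurrent caller is ever overwritten.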
6404 +/* Called from queue_message, by the slot handler and application threads,
6405 +** with slot_mutex held */
6406 +static VCHIQ_HEADER_T *
6407 +reserve_space(VCHIQ_STATE_T *state, int space, int is_blocking)
6409 + VCHIQ_SHARED_STATE_T *local = state->local;
6410 + int tx_pos = state->local_tx_pos;
6411 + int slot_space = VCHIQ_SLOT_SIZE - (tx_pos & VCHIQ_SLOT_MASK);
6413 + if (space > slot_space) {
6414 + VCHIQ_HEADER_T *header;
6415 + /* Fill the remaining space with padding */
6416 + WARN_ON(state->tx_data == NULL);
6417 + header = (VCHIQ_HEADER_T *)
6418 + (state->tx_data + (tx_pos & VCHIQ_SLOT_MASK));
6419 + header->msgid = VCHIQ_MSGID_PADDING;
6420 + header->size = slot_space - sizeof(VCHIQ_HEADER_T);
6422 + tx_pos += slot_space;
6425 + /* If necessary, get the next slot. */
6426 + if ((tx_pos & VCHIQ_SLOT_MASK) == 0) {
6429 + /* If there is no free slot... */
6431 + if (down_trylock(&state->slot_available_event) != 0) {
6432 + /* ...wait for one. */
6434 + VCHIQ_STATS_INC(state, slot_stalls);
6436 + /* But first, flush through the last slot. */
6437 + state->local_tx_pos = tx_pos;
6438 + local->tx_pos = tx_pos;
6439 + remote_event_signal(&state->remote->trigger);
6441 + if (!is_blocking ||
6442 + (down_interruptible(
6443 + &state->slot_available_event) != 0))
6444 + return NULL; /* No space available */
6448 + (state->slot_queue_available * VCHIQ_SLOT_SIZE));
6450 + slot_index = local->slot_queue[
6451 + SLOT_QUEUE_INDEX_FROM_POS(tx_pos) &
6452 + VCHIQ_SLOT_QUEUE_MASK];
6454 + (char *)SLOT_DATA_FROM_INDEX(state, slot_index);
6457 + state->local_tx_pos = tx_pos + space;
6459 + return (VCHIQ_HEADER_T *)(state->tx_data + (tx_pos & VCHIQ_SLOT_MASK));
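The slot-padding arithmetic in reserve_space() can be exercised on its own. In this sketch SLOT_SIZE and the helper name are stand-ins; it models only the position bookkeeping, not the padding header that the real code writes into the skipped bytes:

```c
#include <assert.h>

#define SLOT_SIZE 4096u
#define SLOT_MASK (SLOT_SIZE - 1u)

/* If a message of 'space' bytes does not fit in what remains of the
 * current slot, the remainder is consumed (by a padding message in
 * the real driver) and tx_pos advances to the next slot boundary.
 * Returns the position the message would be written at. */
static unsigned int advance_tx_pos(unsigned int tx_pos, unsigned int space)
{
	unsigned int slot_space = SLOT_SIZE - (tx_pos & SLOT_MASK);

	if (space > slot_space)
		tx_pos += slot_space;	/* skip over the padding */
	return tx_pos;
}
```

Combined with calc_stride()'s guarantee that every stride is a multiple of the header size, the leftover space is always large enough to hold the padding header.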
6462 +/* Called by the recycle thread. */
6464 +process_free_queue(VCHIQ_STATE_T *state)
6466 + VCHIQ_SHARED_STATE_T *local = state->local;
6467 + BITSET_T service_found[BITSET_SIZE(VCHIQ_MAX_SERVICES)];
6468 + int slot_queue_available;
6470 + /* Use a read memory barrier to ensure that any state that may have
6471 + ** been modified by another thread is not masked by stale prefetched
6475 + /* Find slots which have been freed by the other side, and return them
6476 + ** to the available queue. */
6477 + slot_queue_available = state->slot_queue_available;
6479 + while (slot_queue_available != local->slot_queue_recycle) {
6481 + int slot_index = local->slot_queue[slot_queue_available++ &
6482 + VCHIQ_SLOT_QUEUE_MASK];
6483 + char *data = (char *)SLOT_DATA_FROM_INDEX(state, slot_index);
6484 + int data_found = 0;
6486 + vchiq_log_trace(vchiq_core_log_level, "%d: pfq %d=%x %x %x",
6487 + state->id, slot_index, (unsigned int)data,
6488 + local->slot_queue_recycle, slot_queue_available);
6490 + /* Initialise the bitmask for services which have used this
6492 + BITSET_ZERO(service_found);
6496 + while (pos < VCHIQ_SLOT_SIZE) {
6497 + VCHIQ_HEADER_T *header =
6498 + (VCHIQ_HEADER_T *)(data + pos);
6499 + int msgid = header->msgid;
6500 + if (VCHIQ_MSG_TYPE(msgid) == VCHIQ_MSG_DATA) {
6501 + int port = VCHIQ_MSG_SRCPORT(msgid);
6502 + VCHIQ_SERVICE_QUOTA_T *service_quota =
6503 + &state->service_quotas[port];
6505 +				spin_lock(&quota_spinlock);
6506 + count = service_quota->message_use_count;
6508 + service_quota->message_use_count =
6510 +				spin_unlock(&quota_spinlock);
6512 + if (count == service_quota->message_quota)
6513 + /* Signal the service that it
6514 + ** has dropped below its quota
6516 + up(&service_quota->quota_event);
6517 + else if (count == 0) {
6518 + vchiq_log_error(vchiq_core_log_level,
6520 + "message_use_count=%d "
6521 + "(header %x, msgid %x, "
6522 + "header->msgid %x, "
6523 + "header->size %x)",
6526 + message_use_count,
6527 + (unsigned int)header, msgid,
6530 + WARN(1, "invalid message use count\n");
6532 + if (!BITSET_IS_SET(service_found, port)) {
6533 + /* Set the found bit for this service */
6534 + BITSET_SET(service_found, port);
6536 +					spin_lock(&quota_spinlock);
6537 + count = service_quota->slot_use_count;
6539 + service_quota->slot_use_count =
6541 +					spin_unlock(&quota_spinlock);
6544 + /* Signal the service in case
6545 + ** it has dropped below its
6547 + up(&service_quota->quota_event);
6549 + vchiq_core_log_level,
6550 + "%d: pfq:%d %x@%x - "
6554 + (unsigned int)header,
6558 + vchiq_core_log_level,
6567 + (unsigned int)header,
6571 + WARN(1, "bad slot use count\n");
6578 + pos += calc_stride(header->size);
6579 + if (pos > VCHIQ_SLOT_SIZE) {
6580 + vchiq_log_error(vchiq_core_log_level,
6581 + "pfq - pos %x: header %x, msgid %x, "
6582 + "header->msgid %x, header->size %x",
6583 + pos, (unsigned int)header, msgid,
6584 + header->msgid, header->size);
6585 + WARN(1, "invalid slot position\n");
6591 +		spin_lock(&quota_spinlock);
6592 + count = state->data_use_count;
6594 + state->data_use_count =
6596 +		spin_unlock(&quota_spinlock);
6597 + if (count == state->data_quota)
6598 + up(&state->data_quota_event);
6601 + state->slot_queue_available = slot_queue_available;
6602 + up(&state->slot_available_event);
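The quota bookkeeping above wakes a waiter only at the transition back below the quota, not on every release. A simplified single-threaded sketch, where the names are stand-ins and 'woken' models up(&quota_event); the real code does this under quota_spinlock:

```c
#include <assert.h>

static int use_count = 2;	/* current usage, at its quota */
static int quota = 2;		/* the service's quota */
static int woken;		/* models up(&quota_event) calls */

/* Release one unit of quota; signal only when the count was exactly
 * at the quota, i.e. the moment the service drops back below it. */
static void release_one(void)
{
	int count = use_count;	/* read under the quota spinlock */

	use_count = count - 1;
	if (count == quota)
		woken++;
}
```

This keeps a blocked sender from being woken repeatedly while the service is still well below its quota.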
6606 +/* Called by the slot handler and application threads */
6607 +static VCHIQ_STATUS_T
6608 +queue_message(VCHIQ_STATE_T *state, VCHIQ_SERVICE_T *service,
6609 + int msgid, const VCHIQ_ELEMENT_T *elements,
6610 + int count, int size, int flags)
6612 + VCHIQ_SHARED_STATE_T *local;
6613 + VCHIQ_SERVICE_QUOTA_T *service_quota = NULL;
6614 + VCHIQ_HEADER_T *header;
6615 + int type = VCHIQ_MSG_TYPE(msgid);
6617 + unsigned int stride;
6619 + local = state->local;
6621 + stride = calc_stride(size);
6623 + WARN_ON(!(stride <= VCHIQ_SLOT_SIZE));
6625 + if (!(flags & QMFLAGS_NO_MUTEX_LOCK) &&
6626 + (mutex_lock_interruptible(&state->slot_mutex) != 0))
6627 + return VCHIQ_RETRY;
6629 + if (type == VCHIQ_MSG_DATA) {
6633 + BUG_ON((flags & (QMFLAGS_NO_MUTEX_LOCK |
6634 + QMFLAGS_NO_MUTEX_UNLOCK)) != 0);
6636 + if (service->closing) {
6637 + /* The service has been closed */
6638 + mutex_unlock(&state->slot_mutex);
6639 + return VCHIQ_ERROR;
6642 + service_quota = &state->service_quotas[service->localport];
6644 +		spin_lock(&quota_spinlock);
6646 + /* Ensure this service doesn't use more than its quota of
6647 + ** messages or slots */
6648 + tx_end_index = SLOT_QUEUE_INDEX_FROM_POS(
6649 + state->local_tx_pos + stride - 1);
6651 + /* Ensure data messages don't use more than their quota of
6653 + while ((tx_end_index != state->previous_data_index) &&
6654 + (state->data_use_count == state->data_quota)) {
6655 + VCHIQ_STATS_INC(state, data_stalls);
6656 +			spin_unlock(&quota_spinlock);
6657 + mutex_unlock(&state->slot_mutex);
6659 + if (down_interruptible(&state->data_quota_event)
6661 + return VCHIQ_RETRY;
6663 + mutex_lock(&state->slot_mutex);
6664 +			spin_lock(&quota_spinlock);
6665 + tx_end_index = SLOT_QUEUE_INDEX_FROM_POS(
6666 + state->local_tx_pos + stride - 1);
6667 + if ((tx_end_index == state->previous_data_index) ||
6668 + (state->data_use_count < state->data_quota)) {
6669 + /* Pass the signal on to other waiters */
6670 + up(&state->data_quota_event);
6675 + while ((service_quota->message_use_count ==
6676 + service_quota->message_quota) ||
6677 + ((tx_end_index != service_quota->previous_tx_index) &&
6678 + (service_quota->slot_use_count ==
6679 + service_quota->slot_quota))) {
6680 +			spin_unlock(&quota_spinlock);
6681 + vchiq_log_trace(vchiq_core_log_level,
6682 + "%d: qm:%d %s,%x - quota stall "
6683 + "(msg %d, slot %d)",
6684 + state->id, service->localport,
6685 + msg_type_str(type), size,
6686 + service_quota->message_use_count,
6687 + service_quota->slot_use_count);
6688 + VCHIQ_SERVICE_STATS_INC(service, quota_stalls);
6689 + mutex_unlock(&state->slot_mutex);
6690 + if (down_interruptible(&service_quota->quota_event)
6692 + return VCHIQ_RETRY;
6693 + if (service->closing)
6694 + return VCHIQ_ERROR;
6695 + if (mutex_lock_interruptible(&state->slot_mutex) != 0)
6696 + return VCHIQ_RETRY;
6697 + if (service->srvstate != VCHIQ_SRVSTATE_OPEN) {
6698 + /* The service has been closed */
6699 + mutex_unlock(&state->slot_mutex);
6700 + return VCHIQ_ERROR;
6702 +			spin_lock(&quota_spinlock);
6703 + tx_end_index = SLOT_QUEUE_INDEX_FROM_POS(
6704 + state->local_tx_pos + stride - 1);
6707 +		spin_unlock(&quota_spinlock);
6710 + header = reserve_space(state, stride, flags & QMFLAGS_IS_BLOCKING);
6714 + VCHIQ_SERVICE_STATS_INC(service, slot_stalls);
6715 + /* In the event of a failure, return the mutex to the
6716 + state it was in */
6717 + if (!(flags & QMFLAGS_NO_MUTEX_LOCK))
6718 + mutex_unlock(&state->slot_mutex);
6719 + return VCHIQ_RETRY;
6722 + if (type == VCHIQ_MSG_DATA) {
6725 + int slot_use_count;
6727 + vchiq_log_info(vchiq_core_log_level,
6728 + "%d: qm %s@%x,%x (%d->%d)",
6730 + msg_type_str(VCHIQ_MSG_TYPE(msgid)),
6731 + (unsigned int)header, size,
6732 + VCHIQ_MSG_SRCPORT(msgid),
6733 + VCHIQ_MSG_DSTPORT(msgid));
6736 + BUG_ON((flags & (QMFLAGS_NO_MUTEX_LOCK |
6737 + QMFLAGS_NO_MUTEX_UNLOCK)) != 0);
6739 + for (i = 0, pos = 0; i < (unsigned int)count;
6740 + pos += elements[i++].size)
6741 + if (elements[i].size) {
6742 + if (vchiq_copy_from_user
6743 + (header->data + pos, elements[i].data,
6744 + (size_t) elements[i].size) !=
6746 + mutex_unlock(&state->slot_mutex);
6747 + VCHIQ_SERVICE_STATS_INC(service,
6749 + return VCHIQ_ERROR;
6752 + if (SRVTRACE_ENABLED(service,
6754 + vchiq_log_dump_mem("Sent", 0,
6755 + header->data + pos,
6757 + elements[0].size));
6761 +		spin_lock(&quota_spinlock);
6762 + service_quota->message_use_count++;
6765 + SLOT_QUEUE_INDEX_FROM_POS(state->local_tx_pos - 1);
6767 + /* If this transmission can't fit in the last slot used by any
6768 + ** service, the data_use_count must be increased. */
6769 + if (tx_end_index != state->previous_data_index) {
6770 + state->previous_data_index = tx_end_index;
6771 + state->data_use_count++;
6774 + /* If this isn't the same slot last used by this service,
6775 + ** the service's slot_use_count must be increased. */
6776 + if (tx_end_index != service_quota->previous_tx_index) {
6777 + service_quota->previous_tx_index = tx_end_index;
6778 + slot_use_count = ++service_quota->slot_use_count;
6780 + slot_use_count = 0;
6783 +		spin_unlock(&quota_spinlock);
6785 + if (slot_use_count)
6786 + vchiq_log_trace(vchiq_core_log_level,
6787 + "%d: qm:%d %s,%x - slot_use->%d (hdr %p)",
6788 + state->id, service->localport,
6789 + msg_type_str(VCHIQ_MSG_TYPE(msgid)), size,
6790 + slot_use_count, header);
6792 + VCHIQ_SERVICE_STATS_INC(service, ctrl_tx_count);
6793 + VCHIQ_SERVICE_STATS_ADD(service, ctrl_tx_bytes, size);
6795 + vchiq_log_info(vchiq_core_log_level,
6796 + "%d: qm %s@%x,%x (%d->%d)", state->id,
6797 + msg_type_str(VCHIQ_MSG_TYPE(msgid)),
6798 + (unsigned int)header, size,
6799 + VCHIQ_MSG_SRCPORT(msgid),
6800 + VCHIQ_MSG_DSTPORT(msgid));
6802 + WARN_ON(!((count == 1) && (size == elements[0].size)));
6803 + memcpy(header->data, elements[0].data,
6804 + elements[0].size);
6806 + VCHIQ_STATS_INC(state, ctrl_tx_count);
6809 + header->msgid = msgid;
6810 + header->size = size;
6815 + svc_fourcc = service
6816 + ? service->base.fourcc
6817 + : VCHIQ_MAKE_FOURCC('?', '?', '?', '?');
6819 + vchiq_log_info(SRVTRACE_LEVEL(service),
6820 + "Sent Msg %s(%u) to %c%c%c%c s:%u d:%d len:%d",
6821 + msg_type_str(VCHIQ_MSG_TYPE(msgid)),
6822 + VCHIQ_MSG_TYPE(msgid),
6823 + VCHIQ_FOURCC_AS_4CHARS(svc_fourcc),
6824 + VCHIQ_MSG_SRCPORT(msgid),
6825 + VCHIQ_MSG_DSTPORT(msgid),
6829 + /* Make sure the new header is visible to the peer. */
6832 + /* Make the new tx_pos visible to the peer. */
6833 + local->tx_pos = state->local_tx_pos;
6836 + if (service && (type == VCHIQ_MSG_CLOSE))
6837 + vchiq_set_service_state(service, VCHIQ_SRVSTATE_CLOSESENT);
6839 + if (!(flags & QMFLAGS_NO_MUTEX_UNLOCK))
6840 + mutex_unlock(&state->slot_mutex);
6842 + remote_event_signal(&state->remote->trigger);
6844 + return VCHIQ_SUCCESS;
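The four-character service codes in the trace output are packed one byte per character into a 32-bit identifier. A sketch of the packing assumed by VCHIQ_MAKE_FOURCC, with the first character in the most significant byte:

```c
#include <assert.h>

/* Pack four ASCII characters into one 32-bit fourcc, first character
 * in the most significant byte, matching how the log lines unpack it
 * back into %c%c%c%c. */
#define MAKE_FOURCC(a, b, c, d) \
	(((unsigned int)(a) << 24) | ((unsigned int)(b) << 16) | \
	 ((unsigned int)(c) << 8)  |  (unsigned int)(d))
```

The '????' fourcc seen when no service is associated with a message is built the same way from four literal question marks.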
6847 +/* Called by the slot handler and application threads */
6848 +static VCHIQ_STATUS_T
6849 +queue_message_sync(VCHIQ_STATE_T *state, VCHIQ_SERVICE_T *service,
6850 + int msgid, const VCHIQ_ELEMENT_T *elements,
6851 + int count, int size, int is_blocking)
6853 + VCHIQ_SHARED_STATE_T *local;
6854 + VCHIQ_HEADER_T *header;
6856 + local = state->local;
6858 + if ((VCHIQ_MSG_TYPE(msgid) != VCHIQ_MSG_RESUME) &&
6859 + (mutex_lock_interruptible(&state->sync_mutex) != 0))
6860 + return VCHIQ_RETRY;
6862 + remote_event_wait(&local->sync_release);
6866 + header = (VCHIQ_HEADER_T *)SLOT_DATA_FROM_INDEX(state,
6867 + local->slot_sync);
6870 + int oldmsgid = header->msgid;
6871 + if (oldmsgid != VCHIQ_MSGID_PADDING)
6872 + vchiq_log_error(vchiq_core_log_level,
6873 + "%d: qms - msgid %x, not PADDING",
6874 + state->id, oldmsgid);
6880 + vchiq_log_info(vchiq_sync_log_level,
6881 + "%d: qms %s@%x,%x (%d->%d)", state->id,
6882 + msg_type_str(VCHIQ_MSG_TYPE(msgid)),
6883 + (unsigned int)header, size,
6884 + VCHIQ_MSG_SRCPORT(msgid),
6885 + VCHIQ_MSG_DSTPORT(msgid));
6887 + for (i = 0, pos = 0; i < (unsigned int)count;
6888 + pos += elements[i++].size)
6889 + if (elements[i].size) {
6890 + if (vchiq_copy_from_user
6891 + (header->data + pos, elements[i].data,
6892 + (size_t) elements[i].size) !=
6894 + mutex_unlock(&state->sync_mutex);
6895 + VCHIQ_SERVICE_STATS_INC(service,
6897 + return VCHIQ_ERROR;
6900 + if (vchiq_sync_log_level >=
6902 + vchiq_log_dump_mem("Sent Sync",
6903 + 0, header->data + pos,
6905 + elements[0].size));
6909 + VCHIQ_SERVICE_STATS_INC(service, ctrl_tx_count);
6910 + VCHIQ_SERVICE_STATS_ADD(service, ctrl_tx_bytes, size);
6912 + vchiq_log_info(vchiq_sync_log_level,
6913 + "%d: qms %s@%x,%x (%d->%d)", state->id,
6914 + msg_type_str(VCHIQ_MSG_TYPE(msgid)),
6915 + (unsigned int)header, size,
6916 + VCHIQ_MSG_SRCPORT(msgid),
6917 + VCHIQ_MSG_DSTPORT(msgid));
6919 + WARN_ON(!((count == 1) && (size == elements[0].size)));
6920 + memcpy(header->data, elements[0].data,
6921 + elements[0].size);
6923 + VCHIQ_STATS_INC(state, ctrl_tx_count);
6926 + header->size = size;
6927 + header->msgid = msgid;
6929 + if (vchiq_sync_log_level >= VCHIQ_LOG_TRACE) {
6932 + svc_fourcc = service
6933 + ? service->base.fourcc
6934 + : VCHIQ_MAKE_FOURCC('?', '?', '?', '?');
6936 + vchiq_log_trace(vchiq_sync_log_level,
6937 + "Sent Sync Msg %s(%u) to %c%c%c%c s:%u d:%d len:%d",
6938 + msg_type_str(VCHIQ_MSG_TYPE(msgid)),
6939 + VCHIQ_MSG_TYPE(msgid),
6940 + VCHIQ_FOURCC_AS_4CHARS(svc_fourcc),
6941 + VCHIQ_MSG_SRCPORT(msgid),
6942 + VCHIQ_MSG_DSTPORT(msgid),
6946 + /* Make sure the new header is visible to the peer. */
6949 + remote_event_signal(&state->remote->sync_trigger);
6951 + if (VCHIQ_MSG_TYPE(msgid) != VCHIQ_MSG_PAUSE)
6952 + mutex_unlock(&state->sync_mutex);
6954 + return VCHIQ_SUCCESS;
6958 +claim_slot(VCHIQ_SLOT_INFO_T *slot)
6960 + slot->use_count++;
6964 +release_slot(VCHIQ_STATE_T *state, VCHIQ_SLOT_INFO_T *slot_info,
6965 + VCHIQ_HEADER_T *header, VCHIQ_SERVICE_T *service)
6967 + int release_count;
6969 + mutex_lock(&state->recycle_mutex);
6972 + int msgid = header->msgid;
6973 + if (((msgid & VCHIQ_MSGID_CLAIMED) == 0) ||
6974 + (service && service->closing)) {
6975 + mutex_unlock(&state->recycle_mutex);
6979 + /* Rewrite the message header to prevent a double
6981 + header->msgid = msgid & ~VCHIQ_MSGID_CLAIMED;
6984 + release_count = slot_info->release_count;
6985 + slot_info->release_count = ++release_count;
6987 + if (release_count == slot_info->use_count) {
6988 + int slot_queue_recycle;
6989 + /* Add to the freed queue */
6991 + /* A read barrier is necessary here to prevent speculative
6992 + ** fetches of remote->slot_queue_recycle from overtaking the
6996 + slot_queue_recycle = state->remote->slot_queue_recycle;
6997 + state->remote->slot_queue[slot_queue_recycle &
6998 + VCHIQ_SLOT_QUEUE_MASK] =
6999 + SLOT_INDEX_FROM_INFO(state, slot_info);
7000 + state->remote->slot_queue_recycle = slot_queue_recycle + 1;
7001 + vchiq_log_info(vchiq_core_log_level,
7002 + "%d: release_slot %d - recycle->%x",
7003 + state->id, SLOT_INDEX_FROM_INFO(state, slot_info),
7004 + state->remote->slot_queue_recycle);
7006 + /* A write barrier is necessary, but remote_event_signal
7007 + ** contains one. */
7008 + remote_event_signal(&state->remote->recycle);
7011 + mutex_unlock(&state->recycle_mutex);
7014 +/* Called by the slot handler - don't hold the bulk mutex */
7015 +static VCHIQ_STATUS_T
7016 +notify_bulks(VCHIQ_SERVICE_T *service, VCHIQ_BULK_QUEUE_T *queue,
7019 + VCHIQ_STATUS_T status = VCHIQ_SUCCESS;
7021 + vchiq_log_trace(vchiq_core_log_level,
7022 + "%d: nb:%d %cx - p=%x rn=%x r=%x",
7023 + service->state->id, service->localport,
7024 + (queue == &service->bulk_tx) ? 't' : 'r',
7025 + queue->process, queue->remote_notify, queue->remove);
7027 + if (service->state->is_master) {
7028 + while (queue->remote_notify != queue->process) {
7029 + VCHIQ_BULK_T *bulk =
7030 + &queue->bulks[BULK_INDEX(queue->remote_notify)];
7031 + int msgtype = (bulk->dir == VCHIQ_BULK_TRANSMIT) ?
7032 + VCHIQ_MSG_BULK_RX_DONE : VCHIQ_MSG_BULK_TX_DONE;
7033 + int msgid = VCHIQ_MAKE_MSG(msgtype, service->localport,
7034 + service->remoteport);
7035 + VCHIQ_ELEMENT_T element = { &bulk->actual, 4 };
7036 + /* Only reply to non-dummy bulk requests */
7037 + if (bulk->remote_data) {
7038 + status = queue_message(service->state, NULL,
7039 + msgid, &element, 1, 4, 0);
7040 + if (status != VCHIQ_SUCCESS)
7043 + queue->remote_notify++;
7046 + queue->remote_notify = queue->process;
7049 + if (status == VCHIQ_SUCCESS) {
7050 + while (queue->remove != queue->remote_notify) {
7051 + VCHIQ_BULK_T *bulk =
7052 + &queue->bulks[BULK_INDEX(queue->remove)];
7054 + /* Only generate callbacks for non-dummy bulk
7055 + ** requests, and non-terminated services */
7056 + if (bulk->data && service->instance) {
7057 + if (bulk->actual != VCHIQ_BULK_ACTUAL_ABORTED) {
7058 + if (bulk->dir == VCHIQ_BULK_TRANSMIT) {
7059 + VCHIQ_SERVICE_STATS_INC(service,
7061 + VCHIQ_SERVICE_STATS_ADD(service,
7065 + VCHIQ_SERVICE_STATS_INC(service,
7067 + VCHIQ_SERVICE_STATS_ADD(service,
7072 + VCHIQ_SERVICE_STATS_INC(service,
7073 + bulk_aborted_count);
7075 + if (bulk->mode == VCHIQ_BULK_MODE_BLOCKING) {
7076 + struct bulk_waiter *waiter;
7077 + spin_lock(&bulk_waiter_spinlock);
7078 + waiter = bulk->userdata;
7080 + waiter->actual = bulk->actual;
7081 + up(&waiter->event);
7083 + spin_unlock(&bulk_waiter_spinlock);
7084 + } else if (bulk->mode ==
7085 + VCHIQ_BULK_MODE_CALLBACK) {
7086 + VCHIQ_REASON_T reason = (bulk->dir ==
7087 + VCHIQ_BULK_TRANSMIT) ?
7089 + VCHIQ_BULK_ACTUAL_ABORTED) ?
7090 + VCHIQ_BULK_TRANSMIT_ABORTED :
7091 + VCHIQ_BULK_TRANSMIT_DONE) :
7093 + VCHIQ_BULK_ACTUAL_ABORTED) ?
7094 + VCHIQ_BULK_RECEIVE_ABORTED :
7095 + VCHIQ_BULK_RECEIVE_DONE);
7096 + status = make_service_callback(service,
7097 + reason, NULL, bulk->userdata);
7098 + if (status == VCHIQ_RETRY)
7104 + up(&service->bulk_remove_event);
7107 + status = VCHIQ_SUCCESS;
7110 + if (status == VCHIQ_RETRY)
7111 + request_poll(service->state, service,
7112 + (queue == &service->bulk_tx) ?
7113 + VCHIQ_POLL_TXNOTIFY : VCHIQ_POLL_RXNOTIFY);
7118 +/* Called by the slot handler thread */
7120 +poll_services(VCHIQ_STATE_T *state)
7124 + for (group = 0; group < BITSET_SIZE(state->unused_service); group++) {
7126 + flags = atomic_xchg(&state->poll_services[group], 0);
7127 + for (i = 0; flags; i++) {
7128 + if (flags & (1 << i)) {
7129 + VCHIQ_SERVICE_T *service =
7130 + find_service_by_port(state,
7132 + uint32_t service_flags;
7133 + flags &= ~(1 << i);
7137 + atomic_xchg(&service->poll_flags, 0);
7138 + if (service_flags &
7139 + (1 << VCHIQ_POLL_REMOVE)) {
7140 + vchiq_log_info(vchiq_core_log_level,
7141 + "%d: ps - remove %d<->%d",
7142 + state->id, service->localport,
7143 + service->remoteport);
7145 + /* Make it look like a client, because
7146 + it must be removed and not left in
7147 + the LISTENING state. */
7148 + service->public_fourcc =
7149 + VCHIQ_FOURCC_INVALID;
7151 + if (vchiq_close_service_internal(
7152 + service, 0/*!close_recvd*/) !=
7154 + request_poll(state, service,
7155 + VCHIQ_POLL_REMOVE);
7156 + } else if (service_flags &
7157 + (1 << VCHIQ_POLL_TERMINATE)) {
7158 + vchiq_log_info(vchiq_core_log_level,
7159 + "%d: ps - terminate %d<->%d",
7160 + state->id, service->localport,
7161 + service->remoteport);
7162 + if (vchiq_close_service_internal(
7163 + service, 0/*!close_recvd*/) !=
7165 + request_poll(state, service,
7166 + VCHIQ_POLL_TERMINATE);
7168 + if (service_flags & (1 << VCHIQ_POLL_TXNOTIFY))
7169 + notify_bulks(service,
7170 + &service->bulk_tx,
7172 + if (service_flags & (1 << VCHIQ_POLL_RXNOTIFY))
7173 + notify_bulks(service,
7174 + &service->bulk_rx,
7176 + unlock_service(service);
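poll_services() claims a whole 32-bit group of pending service bits in one atomic exchange, then peels bits off until none remain. Here is a minimal userspace sketch of that drain loop, assuming a plain variable in place of the kernel's `atomic_xchg()`; the `drain_flags` helper name is invented for illustration.

```c
#include <assert.h>

/* Claim and clear all pending bits, recording which positions were
 * set. The loop exits as soon as 'flags' is empty, so low-numbered
 * services in the group are handled first, as in poll_services(). */
static int drain_flags(unsigned *poll_flags, int handled[32])
{
    /* In the driver this is atomic_xchg(&state->poll_services[group], 0),
     * which claims the bits and zeroes the word in one step. */
    unsigned flags = *poll_flags;
    *poll_flags = 0;

    int count = 0, i;
    for (i = 0; flags; i++) {
        if (flags & (1u << i)) {
            flags &= ~(1u << i);
            handled[count++] = i;
        }
    }
    return count;
}
```

The same shape recurs per service: `atomic_xchg(&service->poll_flags, 0)` snapshots VCHIQ_POLL_REMOVE, VCHIQ_POLL_TERMINATE and the notify bits so a concurrent request_poll() cannot be lost, only re-queued.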
7182 +/* Called by the slot handler or application threads, holding the bulk mutex. */
7184 +resolve_bulks(VCHIQ_SERVICE_T *service, VCHIQ_BULK_QUEUE_T *queue)
7186 + VCHIQ_STATE_T *state = service->state;
7190 + while ((queue->process != queue->local_insert) &&
7191 + (queue->process != queue->remote_insert)) {
7192 + VCHIQ_BULK_T *bulk = &queue->bulks[BULK_INDEX(queue->process)];
7194 + vchiq_log_trace(vchiq_core_log_level,
7195 + "%d: rb:%d %cx - li=%x ri=%x p=%x",
7196 + state->id, service->localport,
7197 + (queue == &service->bulk_tx) ? 't' : 'r',
7198 + queue->local_insert, queue->remote_insert,
7201 + WARN_ON(!((int)(queue->local_insert - queue->process) > 0));
7202 + WARN_ON(!((int)(queue->remote_insert - queue->process) > 0));
7204 + rc = mutex_lock_interruptible(&state->bulk_transfer_mutex);
7208 + vchiq_transfer_bulk(bulk);
7209 + mutex_unlock(&state->bulk_transfer_mutex);
7211 + if (SRVTRACE_ENABLED(service, VCHIQ_LOG_INFO)) {
7212 + const char *header = (queue == &service->bulk_tx) ?
7213 + "Send Bulk to" : "Recv Bulk from";
7214 + if (bulk->actual != VCHIQ_BULK_ACTUAL_ABORTED)
7215 + vchiq_log_info(SRVTRACE_LEVEL(service),
7216 + "%s %c%c%c%c d:%d len:%d %x<->%x",
7218 + VCHIQ_FOURCC_AS_4CHARS(
7219 + service->base.fourcc),
7220 + service->remoteport,
7222 + (unsigned int)bulk->data,
7223 + (unsigned int)bulk->remote_data);
7225 + vchiq_log_info(SRVTRACE_LEVEL(service),
7226 + "%s %c%c%c%c d:%d ABORTED - tx len:%d,"
7227 + " rx len:%d %x<->%x",
7229 + VCHIQ_FOURCC_AS_4CHARS(
7230 + service->base.fourcc),
7231 + service->remoteport,
7233 + bulk->remote_size,
7234 + (unsigned int)bulk->data,
7235 + (unsigned int)bulk->remote_data);
7238 + vchiq_complete_bulk(bulk);
7245 +/* Called with the bulk_mutex held */
7247 +abort_outstanding_bulks(VCHIQ_SERVICE_T *service, VCHIQ_BULK_QUEUE_T *queue)
7249 + int is_tx = (queue == &service->bulk_tx);
7250 + vchiq_log_trace(vchiq_core_log_level,
7251 + "%d: aob:%d %cx - li=%x ri=%x p=%x",
7252 + service->state->id, service->localport, is_tx ? 't' : 'r',
7253 + queue->local_insert, queue->remote_insert, queue->process);
7255 + WARN_ON(!((int)(queue->local_insert - queue->process) >= 0));
7256 + WARN_ON(!((int)(queue->remote_insert - queue->process) >= 0));
7258 + while ((queue->process != queue->local_insert) ||
7259 + (queue->process != queue->remote_insert)) {
7260 + VCHIQ_BULK_T *bulk = &queue->bulks[BULK_INDEX(queue->process)];
7262 + if (queue->process == queue->remote_insert) {
7263 + /* fabricate a matching dummy bulk */
7264 + bulk->remote_data = NULL;
7265 + bulk->remote_size = 0;
7266 + queue->remote_insert++;
7269 + if (queue->process != queue->local_insert) {
7270 + vchiq_complete_bulk(bulk);
7272 + vchiq_log_info(SRVTRACE_LEVEL(service),
7273 + "%s %c%c%c%c d:%d ABORTED - tx len:%d, "
7275 + is_tx ? "Send Bulk to" : "Recv Bulk from",
7276 + VCHIQ_FOURCC_AS_4CHARS(service->base.fourcc),
7277 + service->remoteport,
7279 + bulk->remote_size);
7281 + /* fabricate a matching dummy bulk */
7282 + bulk->data = NULL;
7284 + bulk->actual = VCHIQ_BULK_ACTUAL_ABORTED;
7285 + bulk->dir = is_tx ? VCHIQ_BULK_TRANSMIT :
7286 + VCHIQ_BULK_RECEIVE;
7287 + queue->local_insert++;
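The WARN_ON checks above compare free-running unsigned cursors by casting their difference to a signed int, which stays correct across 32-bit wraparound. A tiny sketch of the idiom, with an illustrative helper name:

```c
#include <assert.h>

/* True when cursor 'a' is at or ahead of cursor 'b', even if the
 * unsigned counters have wrapped: the subtraction is well defined
 * modulo 2^32, and the signed cast recovers the (small) distance. */
static int cursor_ge(unsigned a, unsigned b)
{
    return (int)(a - b) >= 0;
}
```

A direct `a >= b` would give the wrong answer just after wraparound (e.g. `2 >= 0xFFFFFFFE` is false although cursor 2 is four steps ahead), which is why the driver never compares cursors directly.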
7294 +/* Called from the slot handler thread */
7296 +pause_bulks(VCHIQ_STATE_T *state)
7298 + if (unlikely(atomic_inc_return(&pause_bulks_count) != 1)) {
7300 + atomic_set(&pause_bulks_count, 1);
7304 + /* Block bulk transfers from all services */
7305 + mutex_lock(&state->bulk_transfer_mutex);
7308 +/* Called from the slot handler thread */
7310 +resume_bulks(VCHIQ_STATE_T *state)
7313 + if (unlikely(atomic_dec_return(&pause_bulks_count) != 0)) {
7315 + atomic_set(&pause_bulks_count, 0);
7319 + /* Allow bulk transfers from all services */
7320 + mutex_unlock(&state->bulk_transfer_mutex);
7322 + if (state->deferred_bulks == 0)
7325 +	/* Deal with any bulks which had to be deferred while in the
7326 +	 * paused state. Don't try to match the number of deferred bulks,
7327 +	 * in case a service has been closed in the interim - just
7328 +	 * process all bulk queues for all services */
7329 + vchiq_log_info(vchiq_core_log_level, "%s: processing %d deferred bulks",
7330 + __func__, state->deferred_bulks);
7332 + for (i = 0; i < state->unused_service; i++) {
7333 + VCHIQ_SERVICE_T *service = state->services[i];
7334 + int resolved_rx = 0;
7335 + int resolved_tx = 0;
7336 + if (!service || (service->srvstate != VCHIQ_SRVSTATE_OPEN))
7339 + mutex_lock(&service->bulk_mutex);
7340 + resolved_rx = resolve_bulks(service, &service->bulk_rx);
7341 + resolved_tx = resolve_bulks(service, &service->bulk_tx);
7342 + mutex_unlock(&service->bulk_mutex);
7344 + notify_bulks(service, &service->bulk_rx, 1);
7346 + notify_bulks(service, &service->bulk_tx, 1);
7348 + state->deferred_bulks = 0;
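pause_bulks()/resume_bulks() are expected to be called strictly in pairs; the `unlikely()` branches treat any other count as an imbalance and clamp it back to 1 or 0 rather than nesting. The userspace sketch below models that clamping plus the deferred-bulk flush, under stated assumptions: `struct pause_state` and its fields stand in for the driver's `pause_bulks_count` atomic, `bulk_transfer_mutex` and `state->deferred_bulks`.

```c
#include <assert.h>

struct pause_state {
    int pause_count;  /* models atomic_t pause_bulks_count */
    int mutex_held;   /* models bulk_transfer_mutex ownership */
    int deferred;     /* models state->deferred_bulks */
    int flushed;      /* bulks processed on resume (for the test) */
};

static void pause_bulks_sketch(struct pause_state *s)
{
    /* Imbalanced call: clamp, as the driver's unlikely() path does. */
    if (++s->pause_count != 1) {
        s->pause_count = 1;
        return;
    }
    s->mutex_held = 1;   /* block bulk transfers from all services */
}

static void resume_bulks_sketch(struct pause_state *s)
{
    if (--s->pause_count != 0) {
        s->pause_count = 0;
        return;
    }
    s->mutex_held = 0;   /* allow bulk transfers again */

    /* Flush everything that arrived while paused; the driver scans
     * every open service rather than counting down 'deferred'. */
    if (s->deferred) {
        s->flushed += s->deferred;
        s->deferred = 0;
    }
}
```

The flush deliberately ignores the exact deferred count for the reason given in the comment above: a service may have closed in the interim, so scanning all bulk queues is the only safe accounting.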
7352 +parse_open(VCHIQ_STATE_T *state, VCHIQ_HEADER_T *header)
7354 + VCHIQ_SERVICE_T *service = NULL;
7357 + unsigned int localport, remoteport;
7359 + msgid = header->msgid;
7360 + size = header->size;
7361 + type = VCHIQ_MSG_TYPE(msgid);
7362 + localport = VCHIQ_MSG_DSTPORT(msgid);
7363 + remoteport = VCHIQ_MSG_SRCPORT(msgid);
7364 + if (size >= sizeof(struct vchiq_open_payload)) {
7365 + const struct vchiq_open_payload *payload =
7366 + (struct vchiq_open_payload *)header->data;
7367 + unsigned int fourcc;
7369 + fourcc = payload->fourcc;
7370 + vchiq_log_info(vchiq_core_log_level,
7371 + "%d: prs OPEN@%x (%d->'%c%c%c%c')",
7372 + state->id, (unsigned int)header,
7374 + VCHIQ_FOURCC_AS_4CHARS(fourcc));
7376 + service = get_listening_service(state, fourcc);
7379 + /* A matching service exists */
7380 + short version = payload->version;
7381 + short version_min = payload->version_min;
7382 + if ((service->version < version_min) ||
7383 + (version < service->version_min)) {
7384 + /* Version mismatch */
7385 + vchiq_loud_error_header();
7386 + vchiq_loud_error("%d: service %d (%c%c%c%c) "
7387 + "version mismatch - local (%d, min %d)"
7388 + " vs. remote (%d, min %d)",
7389 + state->id, service->localport,
7390 + VCHIQ_FOURCC_AS_4CHARS(fourcc),
7391 + service->version, service->version_min,
7392 + version, version_min);
7393 + vchiq_loud_error_footer();
7394 + unlock_service(service);
7398 + service->peer_version = version;
7400 + if (service->srvstate == VCHIQ_SRVSTATE_LISTENING) {
7401 + struct vchiq_openack_payload ack_payload = {
7404 + VCHIQ_ELEMENT_T body = {
7406 + sizeof(ack_payload)
7409 + if (state->version_common <
7410 + VCHIQ_VERSION_SYNCHRONOUS_MODE)
7411 + service->sync = 0;
7413 + /* Acknowledge the OPEN */
7414 + if (service->sync &&
7415 + (state->version_common >=
7416 + VCHIQ_VERSION_SYNCHRONOUS_MODE)) {
7417 + if (queue_message_sync(state, NULL,
7419 + VCHIQ_MSG_OPENACK,
7420 + service->localport,
7422 + &body, 1, sizeof(ack_payload),
7423 + 0) == VCHIQ_RETRY)
7424 + goto bail_not_ready;
7426 + if (queue_message(state, NULL,
7428 + VCHIQ_MSG_OPENACK,
7429 + service->localport,
7431 + &body, 1, sizeof(ack_payload),
7432 + 0) == VCHIQ_RETRY)
7433 + goto bail_not_ready;
7436 + /* The service is now open */
7437 + vchiq_set_service_state(service,
7438 + service->sync ? VCHIQ_SRVSTATE_OPENSYNC
7439 + : VCHIQ_SRVSTATE_OPEN);
7442 + service->remoteport = remoteport;
7443 + service->client_id = ((int *)header->data)[1];
7444 + if (make_service_callback(service, VCHIQ_SERVICE_OPENED,
7445 + NULL, NULL) == VCHIQ_RETRY) {
7446 + /* Bail out if not ready */
7447 + service->remoteport = VCHIQ_PORT_FREE;
7448 + goto bail_not_ready;
7451 + /* Success - the message has been dealt with */
7452 + unlock_service(service);
7458 + /* No available service, or an invalid request - send a CLOSE */
7459 + if (queue_message(state, NULL,
7460 + VCHIQ_MAKE_MSG(VCHIQ_MSG_CLOSE, 0, VCHIQ_MSG_SRCPORT(msgid)),
7461 + NULL, 0, 0, 0) == VCHIQ_RETRY)
7462 + goto bail_not_ready;
7468 + unlock_service(service);
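The version handshake in parse_open() rejects an OPEN unless the two sides' `(version, version_min)` ranges overlap, and then proceeds at the lower of the two maxima (the peer's version, capped by the local one). A small sketch of that negotiation — the helper names are illustrative, not driver symbols:

```c
#include <assert.h>

/* The ranges [l_min, l_ver] and [r_min, r_ver] must intersect;
 * this mirrors the rejection test in parse_open():
 *   (service->version < version_min) || (version < service->version_min) */
static int versions_compatible(short l_ver, short l_min,
                               short r_ver, short r_min)
{
    return !(l_ver < r_min || r_ver < l_min);
}

/* When compatible, both sides can safely run at the smaller of the
 * two advertised maxima. */
static short agreed_version(short l_ver, short r_ver)
{
    return l_ver < r_ver ? l_ver : r_ver;
}
```

On mismatch the driver logs a loud error naming both ranges and sends a CLOSE, which is why the error text prints local and remote `(version, min)` pairs side by side.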
7473 +/* Called by the slot handler thread */
7475 +parse_rx_slots(VCHIQ_STATE_T *state)
7477 + VCHIQ_SHARED_STATE_T *remote = state->remote;
7478 + VCHIQ_SERVICE_T *service = NULL;
7480 + DEBUG_INITIALISE(state->local)
7482 + tx_pos = remote->tx_pos;
7484 + while (state->rx_pos != tx_pos) {
7485 + VCHIQ_HEADER_T *header;
7488 + unsigned int localport, remoteport;
7490 + DEBUG_TRACE(PARSE_LINE);
7491 + if (!state->rx_data) {
7493 + WARN_ON(!((state->rx_pos & VCHIQ_SLOT_MASK) == 0));
7494 + rx_index = remote->slot_queue[
7495 + SLOT_QUEUE_INDEX_FROM_POS(state->rx_pos) &
7496 + VCHIQ_SLOT_QUEUE_MASK];
7497 + state->rx_data = (char *)SLOT_DATA_FROM_INDEX(state,
7499 + state->rx_info = SLOT_INFO_FROM_INDEX(state, rx_index);
7501 + /* Initialise use_count to one, and increment
7502 + ** release_count at the end of the slot to avoid
7503 + ** releasing the slot prematurely. */
7504 + state->rx_info->use_count = 1;
7505 + state->rx_info->release_count = 0;
7508 + header = (VCHIQ_HEADER_T *)(state->rx_data +
7509 + (state->rx_pos & VCHIQ_SLOT_MASK));
7510 + DEBUG_VALUE(PARSE_HEADER, (int)header);
7511 + msgid = header->msgid;
7512 + DEBUG_VALUE(PARSE_MSGID, msgid);
7513 + size = header->size;
7514 + type = VCHIQ_MSG_TYPE(msgid);
7515 + localport = VCHIQ_MSG_DSTPORT(msgid);
7516 + remoteport = VCHIQ_MSG_SRCPORT(msgid);
7518 + if (type != VCHIQ_MSG_DATA)
7519 + VCHIQ_STATS_INC(state, ctrl_rx_count);
7522 + case VCHIQ_MSG_OPENACK:
7523 + case VCHIQ_MSG_CLOSE:
7524 + case VCHIQ_MSG_DATA:
7525 + case VCHIQ_MSG_BULK_RX:
7526 + case VCHIQ_MSG_BULK_TX:
7527 + case VCHIQ_MSG_BULK_RX_DONE:
7528 + case VCHIQ_MSG_BULK_TX_DONE:
7529 + service = find_service_by_port(state, localport);
7531 + ((service->remoteport != remoteport) &&
7532 + (service->remoteport != VCHIQ_PORT_FREE))) &&
7533 + (localport == 0) &&
7534 + (type == VCHIQ_MSG_CLOSE)) {
7535 + /* This could be a CLOSE from a client which
7536 + hadn't yet received the OPENACK - look for
7537 + the connected service */
7539 + unlock_service(service);
7540 + service = get_connected_service(state,
7543 + vchiq_log_warning(vchiq_core_log_level,
7544 + "%d: prs %s@%x (%d->%d) - "
7545 + "found connected service %d",
7546 + state->id, msg_type_str(type),
7547 + (unsigned int)header,
7548 + remoteport, localport,
7549 + service->localport);
7553 + vchiq_log_error(vchiq_core_log_level,
7554 + "%d: prs %s@%x (%d->%d) - "
7555 + "invalid/closed service %d",
7556 + state->id, msg_type_str(type),
7557 + (unsigned int)header,
7558 + remoteport, localport, localport);
7559 + goto skip_message;
7566 + if (SRVTRACE_ENABLED(service, VCHIQ_LOG_INFO)) {
7569 + svc_fourcc = service
7570 + ? service->base.fourcc
7571 + : VCHIQ_MAKE_FOURCC('?', '?', '?', '?');
7572 + vchiq_log_info(SRVTRACE_LEVEL(service),
7573 + "Rcvd Msg %s(%u) from %c%c%c%c s:%d d:%d "
7575 + msg_type_str(type), type,
7576 + VCHIQ_FOURCC_AS_4CHARS(svc_fourcc),
7577 + remoteport, localport, size);
7579 + vchiq_log_dump_mem("Rcvd", 0, header->data,
7583 + if (((unsigned int)header & VCHIQ_SLOT_MASK) + calc_stride(size)
7584 + > VCHIQ_SLOT_SIZE) {
7585 + vchiq_log_error(vchiq_core_log_level,
7586 + "header %x (msgid %x) - size %x too big for "
7588 + (unsigned int)header, (unsigned int)msgid,
7589 + (unsigned int)size);
7590 + WARN(1, "oversized for slot\n");
7594 + case VCHIQ_MSG_OPEN:
7595 + WARN_ON(!(VCHIQ_MSG_DSTPORT(msgid) == 0));
7596 + if (!parse_open(state, header))
7597 + goto bail_not_ready;
7599 + case VCHIQ_MSG_OPENACK:
7600 + if (size >= sizeof(struct vchiq_openack_payload)) {
7601 + const struct vchiq_openack_payload *payload =
7602 + (struct vchiq_openack_payload *)
7604 + service->peer_version = payload->version;
7606 + vchiq_log_info(vchiq_core_log_level,
7607 + "%d: prs OPENACK@%x,%x (%d->%d) v:%d",
7608 + state->id, (unsigned int)header, size,
7609 + remoteport, localport, service->peer_version);
7610 + if (service->srvstate ==
7611 + VCHIQ_SRVSTATE_OPENING) {
7612 + service->remoteport = remoteport;
7613 + vchiq_set_service_state(service,
7614 + VCHIQ_SRVSTATE_OPEN);
7615 + up(&service->remove_event);
7617 + vchiq_log_error(vchiq_core_log_level,
7618 + "OPENACK received in state %s",
7619 + srvstate_names[service->srvstate]);
7621 + case VCHIQ_MSG_CLOSE:
7622 + WARN_ON(size != 0); /* There should be no data */
7624 + vchiq_log_info(vchiq_core_log_level,
7625 + "%d: prs CLOSE@%x (%d->%d)",
7626 + state->id, (unsigned int)header,
7627 + remoteport, localport);
7629 + mark_service_closing_internal(service, 1);
7631 + if (vchiq_close_service_internal(service,
7632 + 1/*close_recvd*/) == VCHIQ_RETRY)
7633 + goto bail_not_ready;
7635 + vchiq_log_info(vchiq_core_log_level,
7636 + "Close Service %c%c%c%c s:%u d:%d",
7637 + VCHIQ_FOURCC_AS_4CHARS(service->base.fourcc),
7638 + service->localport,
7639 + service->remoteport);
7641 + case VCHIQ_MSG_DATA:
7642 + vchiq_log_info(vchiq_core_log_level,
7643 + "%d: prs DATA@%x,%x (%d->%d)",
7644 + state->id, (unsigned int)header, size,
7645 + remoteport, localport);
7647 + if ((service->remoteport == remoteport)
7648 + && (service->srvstate ==
7649 + VCHIQ_SRVSTATE_OPEN)) {
7650 + header->msgid = msgid | VCHIQ_MSGID_CLAIMED;
7651 + claim_slot(state->rx_info);
7652 + DEBUG_TRACE(PARSE_LINE);
7653 + if (make_service_callback(service,
7654 + VCHIQ_MESSAGE_AVAILABLE, header,
7655 + NULL) == VCHIQ_RETRY) {
7656 + DEBUG_TRACE(PARSE_LINE);
7657 + goto bail_not_ready;
7659 + VCHIQ_SERVICE_STATS_INC(service, ctrl_rx_count);
7660 + VCHIQ_SERVICE_STATS_ADD(service, ctrl_rx_bytes,
7663 + VCHIQ_STATS_INC(state, error_count);
7666 + case VCHIQ_MSG_CONNECT:
7667 + vchiq_log_info(vchiq_core_log_level,
7668 + "%d: prs CONNECT@%x",
7669 + state->id, (unsigned int)header);
7670 + state->version_common = ((VCHIQ_SLOT_ZERO_T *)
7671 + state->slot_data)->version;
7672 + up(&state->connect);
7674 + case VCHIQ_MSG_BULK_RX:
7675 + case VCHIQ_MSG_BULK_TX: {
7676 + VCHIQ_BULK_QUEUE_T *queue;
7677 + WARN_ON(!state->is_master);
7678 + queue = (type == VCHIQ_MSG_BULK_RX) ?
7679 + &service->bulk_tx : &service->bulk_rx;
7680 + if ((service->remoteport == remoteport)
7681 + && (service->srvstate ==
7682 + VCHIQ_SRVSTATE_OPEN)) {
7683 + VCHIQ_BULK_T *bulk;
7686 + DEBUG_TRACE(PARSE_LINE);
7687 + if (mutex_lock_interruptible(
7688 + &service->bulk_mutex) != 0) {
7689 + DEBUG_TRACE(PARSE_LINE);
7690 + goto bail_not_ready;
7693 + WARN_ON(!(queue->remote_insert < queue->remove +
7694 + VCHIQ_NUM_SERVICE_BULKS));
7695 + bulk = &queue->bulks[
7696 + BULK_INDEX(queue->remote_insert)];
7697 + bulk->remote_data =
7698 + (void *)((int *)header->data)[0];
7699 + bulk->remote_size = ((int *)header->data)[1];
7702 + vchiq_log_info(vchiq_core_log_level,
7703 + "%d: prs %s@%x (%d->%d) %x@%x",
7704 + state->id, msg_type_str(type),
7705 + (unsigned int)header,
7706 + remoteport, localport,
7707 + bulk->remote_size,
7708 + (unsigned int)bulk->remote_data);
7710 + queue->remote_insert++;
7712 + if (atomic_read(&pause_bulks_count)) {
7713 + state->deferred_bulks++;
7714 + vchiq_log_info(vchiq_core_log_level,
7715 + "%s: deferring bulk (%d)",
7717 + state->deferred_bulks);
7718 + if (state->conn_state !=
7719 + VCHIQ_CONNSTATE_PAUSE_SENT)
7721 + vchiq_core_log_level,
7722 + "%s: bulks paused in "
7723 + "unexpected state %s",
7726 + state->conn_state]);
7727 + } else if (state->conn_state ==
7728 + VCHIQ_CONNSTATE_CONNECTED) {
7729 + DEBUG_TRACE(PARSE_LINE);
7730 + resolved = resolve_bulks(service,
7734 + mutex_unlock(&service->bulk_mutex);
7736 + notify_bulks(service, queue,
7740 + case VCHIQ_MSG_BULK_RX_DONE:
7741 + case VCHIQ_MSG_BULK_TX_DONE:
7742 + WARN_ON(state->is_master);
7743 + if ((service->remoteport == remoteport)
7744 + && (service->srvstate !=
7745 + VCHIQ_SRVSTATE_FREE)) {
7746 + VCHIQ_BULK_QUEUE_T *queue;
7747 + VCHIQ_BULK_T *bulk;
7749 + queue = (type == VCHIQ_MSG_BULK_RX_DONE) ?
7750 + &service->bulk_rx : &service->bulk_tx;
7752 + DEBUG_TRACE(PARSE_LINE);
7753 + if (mutex_lock_interruptible(
7754 + &service->bulk_mutex) != 0) {
7755 + DEBUG_TRACE(PARSE_LINE);
7756 + goto bail_not_ready;
7758 + if ((int)(queue->remote_insert -
7759 + queue->local_insert) >= 0) {
7760 + vchiq_log_error(vchiq_core_log_level,
7761 + "%d: prs %s@%x (%d->%d) "
7762 + "unexpected (ri=%d,li=%d)",
7763 + state->id, msg_type_str(type),
7764 + (unsigned int)header,
7765 + remoteport, localport,
7766 + queue->remote_insert,
7767 + queue->local_insert);
7768 + mutex_unlock(&service->bulk_mutex);
7772 + BUG_ON(queue->process == queue->local_insert);
7773 + BUG_ON(queue->process != queue->remote_insert);
7775 + bulk = &queue->bulks[
7776 + BULK_INDEX(queue->remote_insert)];
7777 + bulk->actual = *(int *)header->data;
7778 + queue->remote_insert++;
7780 + vchiq_log_info(vchiq_core_log_level,
7781 + "%d: prs %s@%x (%d->%d) %x@%x",
7782 + state->id, msg_type_str(type),
7783 + (unsigned int)header,
7784 + remoteport, localport,
7785 + bulk->actual, (unsigned int)bulk->data);
7787 + vchiq_log_trace(vchiq_core_log_level,
7788 + "%d: prs:%d %cx li=%x ri=%x p=%x",
7789 + state->id, localport,
7790 + (type == VCHIQ_MSG_BULK_RX_DONE) ?
7792 + queue->local_insert,
7793 + queue->remote_insert, queue->process);
7795 + DEBUG_TRACE(PARSE_LINE);
7796 + WARN_ON(queue->process == queue->local_insert);
7797 + vchiq_complete_bulk(bulk);
7799 + mutex_unlock(&service->bulk_mutex);
7800 + DEBUG_TRACE(PARSE_LINE);
7801 + notify_bulks(service, queue, 1/*retry_poll*/);
7802 + DEBUG_TRACE(PARSE_LINE);
7805 + case VCHIQ_MSG_PADDING:
7806 + vchiq_log_trace(vchiq_core_log_level,
7807 + "%d: prs PADDING@%x,%x",
7808 + state->id, (unsigned int)header, size);
7810 + case VCHIQ_MSG_PAUSE:
7811 + /* If initiated, signal the application thread */
7812 + vchiq_log_trace(vchiq_core_log_level,
7813 + "%d: prs PAUSE@%x,%x",
7814 + state->id, (unsigned int)header, size);
7815 + if (state->conn_state == VCHIQ_CONNSTATE_PAUSED) {
7816 + vchiq_log_error(vchiq_core_log_level,
7817 + "%d: PAUSE received in state PAUSED",
7821 + if (state->conn_state != VCHIQ_CONNSTATE_PAUSE_SENT) {
7822 + /* Send a PAUSE in response */
7823 + if (queue_message(state, NULL,
7824 + VCHIQ_MAKE_MSG(VCHIQ_MSG_PAUSE, 0, 0),
7825 + NULL, 0, 0, QMFLAGS_NO_MUTEX_UNLOCK)
7827 + goto bail_not_ready;
7828 + if (state->is_master)
7829 + pause_bulks(state);
7831 + /* At this point slot_mutex is held */
7832 + vchiq_set_conn_state(state, VCHIQ_CONNSTATE_PAUSED);
7833 + vchiq_platform_paused(state);
7835 + case VCHIQ_MSG_RESUME:
7836 + vchiq_log_trace(vchiq_core_log_level,
7837 + "%d: prs RESUME@%x,%x",
7838 + state->id, (unsigned int)header, size);
7839 + /* Release the slot mutex */
7840 + mutex_unlock(&state->slot_mutex);
7841 + if (state->is_master)
7842 + resume_bulks(state);
7843 + vchiq_set_conn_state(state, VCHIQ_CONNSTATE_CONNECTED);
7844 + vchiq_platform_resumed(state);
7847 + case VCHIQ_MSG_REMOTE_USE:
7848 + vchiq_on_remote_use(state);
7850 + case VCHIQ_MSG_REMOTE_RELEASE:
7851 + vchiq_on_remote_release(state);
7853 + case VCHIQ_MSG_REMOTE_USE_ACTIVE:
7854 + vchiq_on_remote_use_active(state);
7858 + vchiq_log_error(vchiq_core_log_level,
7859 + "%d: prs invalid msgid %x@%x,%x",
7860 + state->id, msgid, (unsigned int)header, size);
7861 + WARN(1, "invalid message\n");
7867 + unlock_service(service);
7871 + state->rx_pos += calc_stride(size);
7873 + DEBUG_TRACE(PARSE_LINE);
7874 + /* Perform some housekeeping when the end of the slot is
7876 + if ((state->rx_pos & VCHIQ_SLOT_MASK) == 0) {
7877 + /* Remove the extra reference count. */
7878 + release_slot(state, state->rx_info, NULL, NULL);
7879 + state->rx_data = NULL;
7885 + unlock_service(service);
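Every message header parsed above carries a single `msgid` word packing the type and both port numbers; the accessor macros used throughout parse_rx_slots() follow the layout in vchiq_core.h (type in the top byte, 12-bit source and destination ports below it), reproduced here so the sketch is self-contained:

```c
#include <assert.h>

/* msgid layout: [31:24] type, [23:12] srcport, [11:0] dstport */
#define VCHIQ_MAKE_MSG(type, srcport, dstport) \
    (((type) << 24) | ((srcport) << 12) | ((dstport) << 0))
#define VCHIQ_MSG_TYPE(msgid)    ((unsigned int)(msgid) >> 24)
#define VCHIQ_MSG_SRCPORT(msgid) \
    ((unsigned short)(((unsigned int)(msgid) >> 12) & 0xfff))
#define VCHIQ_MSG_DSTPORT(msgid) \
    ((unsigned short)((msgid) & 0xfff))
```

The 12-bit port fields are what bound VCHIQ_MAX_SERVICES per side, and they explain the `(%d->%d)` remoteport/localport pairs in the log lines: both ends are recovered from the one header word.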
7888 +/* Called by the slot handler thread */
7890 +slot_handler_func(void *v)
7892 + VCHIQ_STATE_T *state = (VCHIQ_STATE_T *) v;
7893 + VCHIQ_SHARED_STATE_T *local = state->local;
7894 + DEBUG_INITIALISE(local)
7897 + DEBUG_COUNT(SLOT_HANDLER_COUNT);
7898 + DEBUG_TRACE(SLOT_HANDLER_LINE);
7899 + remote_event_wait(&local->trigger);
7903 + DEBUG_TRACE(SLOT_HANDLER_LINE);
7904 + if (state->poll_needed) {
7905 + /* Check if we need to suspend - may change our
7907 + vchiq_platform_check_suspend(state);
7909 + state->poll_needed = 0;
7911 + /* Handle service polling and other rare conditions here
7912 + ** out of the mainline code */
7913 + switch (state->conn_state) {
7914 + case VCHIQ_CONNSTATE_CONNECTED:
7915 + /* Poll the services as requested */
7916 + poll_services(state);
7919 + case VCHIQ_CONNSTATE_PAUSING:
7920 + if (state->is_master)
7921 + pause_bulks(state);
7922 + if (queue_message(state, NULL,
7923 + VCHIQ_MAKE_MSG(VCHIQ_MSG_PAUSE, 0, 0),
7925 + QMFLAGS_NO_MUTEX_UNLOCK)
7927 + vchiq_set_conn_state(state,
7928 + VCHIQ_CONNSTATE_PAUSE_SENT);
7930 + if (state->is_master)
7931 + resume_bulks(state);
7933 + state->poll_needed = 1;
7937 + case VCHIQ_CONNSTATE_PAUSED:
7938 + vchiq_platform_resume(state);
7941 + case VCHIQ_CONNSTATE_RESUMING:
7942 + if (queue_message(state, NULL,
7943 + VCHIQ_MAKE_MSG(VCHIQ_MSG_RESUME, 0, 0),
7944 + NULL, 0, 0, QMFLAGS_NO_MUTEX_LOCK)
7946 + if (state->is_master)
7947 + resume_bulks(state);
7948 + vchiq_set_conn_state(state,
7949 + VCHIQ_CONNSTATE_CONNECTED);
7950 + vchiq_platform_resumed(state);
7952 + /* This should really be impossible,
7953 + ** since the PAUSE should have flushed
7954 + ** through outstanding messages. */
7955 + vchiq_log_error(vchiq_core_log_level,
7956 + "Failed to send RESUME "
7962 + case VCHIQ_CONNSTATE_PAUSE_TIMEOUT:
7963 + case VCHIQ_CONNSTATE_RESUME_TIMEOUT:
7964 + vchiq_platform_handle_timeout(state);
7973 + DEBUG_TRACE(SLOT_HANDLER_LINE);
7974 + parse_rx_slots(state);
7980 +/* Called by the recycle thread */
7982 +recycle_func(void *v)
7984 + VCHIQ_STATE_T *state = (VCHIQ_STATE_T *) v;
7985 + VCHIQ_SHARED_STATE_T *local = state->local;
7988 + remote_event_wait(&local->recycle);
7990 + process_free_queue(state);
7996 +/* Called by the sync thread */
8000 + VCHIQ_STATE_T *state = (VCHIQ_STATE_T *) v;
8001 + VCHIQ_SHARED_STATE_T *local = state->local;
8002 + VCHIQ_HEADER_T *header = (VCHIQ_HEADER_T *)SLOT_DATA_FROM_INDEX(state,
8003 + state->remote->slot_sync);
8006 + VCHIQ_SERVICE_T *service;
8009 + unsigned int localport, remoteport;
8011 + remote_event_wait(&local->sync_trigger);
8015 + msgid = header->msgid;
8016 + size = header->size;
8017 + type = VCHIQ_MSG_TYPE(msgid);
8018 + localport = VCHIQ_MSG_DSTPORT(msgid);
8019 + remoteport = VCHIQ_MSG_SRCPORT(msgid);
8021 + service = find_service_by_port(state, localport);
8024 + vchiq_log_error(vchiq_sync_log_level,
8025 + "%d: sf %s@%x (%d->%d) - "
8026 + "invalid/closed service %d",
8027 + state->id, msg_type_str(type),
8028 + (unsigned int)header,
8029 + remoteport, localport, localport);
8030 + release_message_sync(state, header);
8034 + if (vchiq_sync_log_level >= VCHIQ_LOG_TRACE) {
8037 + svc_fourcc = service
8038 + ? service->base.fourcc
8039 + : VCHIQ_MAKE_FOURCC('?', '?', '?', '?');
8040 + vchiq_log_trace(vchiq_sync_log_level,
8041 + "Rcvd Msg %s from %c%c%c%c s:%d d:%d len:%d",
8042 + msg_type_str(type),
8043 + VCHIQ_FOURCC_AS_4CHARS(svc_fourcc),
8044 + remoteport, localport, size);
8046 + vchiq_log_dump_mem("Rcvd", 0, header->data,
8051 + case VCHIQ_MSG_OPENACK:
8052 + if (size >= sizeof(struct vchiq_openack_payload)) {
8053 + const struct vchiq_openack_payload *payload =
8054 + (struct vchiq_openack_payload *)
8056 + service->peer_version = payload->version;
8058 + vchiq_log_info(vchiq_sync_log_level,
8059 + "%d: sf OPENACK@%x,%x (%d->%d) v:%d",
8060 + state->id, (unsigned int)header, size,
8061 + remoteport, localport, service->peer_version);
8062 + if (service->srvstate == VCHIQ_SRVSTATE_OPENING) {
8063 + service->remoteport = remoteport;
8064 + vchiq_set_service_state(service,
8065 + VCHIQ_SRVSTATE_OPENSYNC);
8066 + service->sync = 1;
8067 + up(&service->remove_event);
8069 + release_message_sync(state, header);
8072 + case VCHIQ_MSG_DATA:
8073 + vchiq_log_trace(vchiq_sync_log_level,
8074 + "%d: sf DATA@%x,%x (%d->%d)",
8075 + state->id, (unsigned int)header, size,
8076 + remoteport, localport);
8078 + if ((service->remoteport == remoteport) &&
8079 + (service->srvstate ==
8080 + VCHIQ_SRVSTATE_OPENSYNC)) {
8081 + if (make_service_callback(service,
8082 + VCHIQ_MESSAGE_AVAILABLE, header,
8083 + NULL) == VCHIQ_RETRY)
8084 + vchiq_log_error(vchiq_sync_log_level,
8085 + "synchronous callback to "
8086 + "service %d returns "
8093 + vchiq_log_error(vchiq_sync_log_level,
8094 + "%d: sf unexpected msgid %x@%x,%x",
8095 + state->id, msgid, (unsigned int)header, size);
8096 + release_message_sync(state, header);
8100 + unlock_service(service);
8108 +init_bulk_queue(VCHIQ_BULK_QUEUE_T *queue)
8110 + queue->local_insert = 0;
8111 + queue->remote_insert = 0;
8112 + queue->process = 0;
8113 + queue->remote_notify = 0;
8114 + queue->remove = 0;
8118 +inline const char *
8119 +get_conn_state_name(VCHIQ_CONNSTATE_T conn_state)
8121 + return conn_state_names[conn_state];
8125 +VCHIQ_SLOT_ZERO_T *
8126 +vchiq_init_slots(void *mem_base, int mem_size)
8128 + int mem_align = (VCHIQ_SLOT_SIZE - (int)mem_base) & VCHIQ_SLOT_MASK;
8129 + VCHIQ_SLOT_ZERO_T *slot_zero =
8130 + (VCHIQ_SLOT_ZERO_T *)((char *)mem_base + mem_align);
8131 + int num_slots = (mem_size - mem_align)/VCHIQ_SLOT_SIZE;
8132 + int first_data_slot = VCHIQ_SLOT_ZERO_SLOTS;
8134 + /* Ensure there is enough memory to run an absolutely minimum system */
8135 + num_slots -= first_data_slot;
8137 + if (num_slots < 4) {
8138 + vchiq_log_error(vchiq_core_log_level,
8139 + "vchiq_init_slots - insufficient memory %x bytes",
8144 + memset(slot_zero, 0, sizeof(VCHIQ_SLOT_ZERO_T));
8146 + slot_zero->magic = VCHIQ_MAGIC;
8147 + slot_zero->version = VCHIQ_VERSION;
8148 + slot_zero->version_min = VCHIQ_VERSION_MIN;
8149 + slot_zero->slot_zero_size = sizeof(VCHIQ_SLOT_ZERO_T);
8150 + slot_zero->slot_size = VCHIQ_SLOT_SIZE;
8151 + slot_zero->max_slots = VCHIQ_MAX_SLOTS;
8152 + slot_zero->max_slots_per_side = VCHIQ_MAX_SLOTS_PER_SIDE;
8154 + slot_zero->master.slot_sync = first_data_slot;
8155 + slot_zero->master.slot_first = first_data_slot + 1;
8156 + slot_zero->master.slot_last = first_data_slot + (num_slots/2) - 1;
8157 + slot_zero->slave.slot_sync = first_data_slot + (num_slots/2);
8158 + slot_zero->slave.slot_first = first_data_slot + (num_slots/2) + 1;
8159 + slot_zero->slave.slot_last = first_data_slot + num_slots - 1;
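The alignment arithmetic at the top of vchiq_init_slots() rounds the shared-memory base up to the next slot boundary before carving out slot zero and splitting the data slots evenly between master and slave. A sketch of that alignment step, assuming the driver's 4096-byte slot size; `slot_align` is an illustrative helper, not a driver symbol:

```c
#include <assert.h>

#define VCHIQ_SLOT_SIZE 4096
#define VCHIQ_SLOT_MASK (VCHIQ_SLOT_SIZE - 1)

/* Bytes to skip so that (mem_base + result) lands on a slot
 * boundary - the same expression as in vchiq_init_slots():
 *   (VCHIQ_SLOT_SIZE - (int)mem_base) & VCHIQ_SLOT_MASK */
static int slot_align(unsigned long mem_base)
{
    return (int)((VCHIQ_SLOT_SIZE - mem_base) & VCHIQ_SLOT_MASK);
}
```

After alignment, `num_slots` loses `first_data_slot` slots to slot zero's bookkeeping, each side takes a sync slot, and the remainder is split `num_slots/2` apiece between `slot_zero->master` and `slot_zero->slave` — hence the minimum-of-4 check before the split.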
8165 +vchiq_init_state(VCHIQ_STATE_T *state, VCHIQ_SLOT_ZERO_T *slot_zero,
8168 + VCHIQ_SHARED_STATE_T *local;
8169 + VCHIQ_SHARED_STATE_T *remote;
8170 + VCHIQ_STATUS_T status;
8171 + char threadname[10];
8175 + vchiq_log_warning(vchiq_core_log_level,
8176 + "%s: slot_zero = 0x%08lx, is_master = %d",
8177 + __func__, (unsigned long)slot_zero, is_master);
8179 + /* Check the input configuration */
8181 + if (slot_zero->magic != VCHIQ_MAGIC) {
8182 + vchiq_loud_error_header();
8183 + vchiq_loud_error("Invalid VCHIQ magic value found.");
8184 + vchiq_loud_error("slot_zero=%x: magic=%x (expected %x)",
8185 + (unsigned int)slot_zero, slot_zero->magic, VCHIQ_MAGIC);
8186 + vchiq_loud_error_footer();
8187 + return VCHIQ_ERROR;
8190 + if (slot_zero->version < VCHIQ_VERSION_MIN) {
8191 + vchiq_loud_error_header();
8192 + vchiq_loud_error("Incompatible VCHIQ versions found.");
8193 + vchiq_loud_error("slot_zero=%x: VideoCore version=%d "
8195 + (unsigned int)slot_zero, slot_zero->version,
8196 + VCHIQ_VERSION_MIN);
8197 + vchiq_loud_error("Restart with a newer VideoCore image.");
8198 + vchiq_loud_error_footer();
8199 + return VCHIQ_ERROR;
8202 + if (VCHIQ_VERSION < slot_zero->version_min) {
8203 + vchiq_loud_error_header();
8204 + vchiq_loud_error("Incompatible VCHIQ versions found.");
8205 + vchiq_loud_error("slot_zero=%x: version=%d (VideoCore "
8207 + (unsigned int)slot_zero, VCHIQ_VERSION,
8208 + slot_zero->version_min);
8209 + vchiq_loud_error("Restart with a newer kernel.");
8210 + vchiq_loud_error_footer();
8211 + return VCHIQ_ERROR;
8214 + if ((slot_zero->slot_zero_size != sizeof(VCHIQ_SLOT_ZERO_T)) ||
8215 + (slot_zero->slot_size != VCHIQ_SLOT_SIZE) ||
8216 + (slot_zero->max_slots != VCHIQ_MAX_SLOTS) ||
8217 + (slot_zero->max_slots_per_side != VCHIQ_MAX_SLOTS_PER_SIDE)) {
8218 + vchiq_loud_error_header();
8219 + if (slot_zero->slot_zero_size != sizeof(VCHIQ_SLOT_ZERO_T))
8220 + vchiq_loud_error("slot_zero=%x: slot_zero_size=%x "
8222 + (unsigned int)slot_zero,
8223 + slot_zero->slot_zero_size,
8224 + sizeof(VCHIQ_SLOT_ZERO_T));
8225 + if (slot_zero->slot_size != VCHIQ_SLOT_SIZE)
8226 + vchiq_loud_error("slot_zero=%x: slot_size=%d "
8228 + (unsigned int)slot_zero, slot_zero->slot_size,
8230 + if (slot_zero->max_slots != VCHIQ_MAX_SLOTS)
8231 + vchiq_loud_error("slot_zero=%x: max_slots=%d "
8233 + (unsigned int)slot_zero, slot_zero->max_slots,
8235 + if (slot_zero->max_slots_per_side != VCHIQ_MAX_SLOTS_PER_SIDE)
8236 + vchiq_loud_error("slot_zero=%x: max_slots_per_side=%d "
8238 + (unsigned int)slot_zero,
8239 + slot_zero->max_slots_per_side,
8240 + VCHIQ_MAX_SLOTS_PER_SIDE);
8241 + vchiq_loud_error_footer();
8242 + return VCHIQ_ERROR;
8245 + if (VCHIQ_VERSION < slot_zero->version)
8246 + slot_zero->version = VCHIQ_VERSION;
8249 + local = &slot_zero->master;
8250 + remote = &slot_zero->slave;
8252 + local = &slot_zero->slave;
8253 + remote = &slot_zero->master;
8256 + if (local->initialised) {
8257 + vchiq_loud_error_header();
8258 + if (remote->initialised)
8259 + vchiq_loud_error("local state has already been "
8262 + vchiq_loud_error("master/slave mismatch - two %ss",
8263 + is_master ? "master" : "slave");
8264 + vchiq_loud_error_footer();
8265 + return VCHIQ_ERROR;
8268 + memset(state, 0, sizeof(VCHIQ_STATE_T));
8271 + state->is_master = is_master;
8274 + initialize shared state pointers
8277 + state->local = local;
8278 + state->remote = remote;
8279 + state->slot_data = (VCHIQ_SLOT_T *)slot_zero;
8282 + initialize events and mutexes
8285 + sema_init(&state->connect, 0);
8286 + mutex_init(&state->mutex);
8287 + sema_init(&state->trigger_event, 0);
8288 + sema_init(&state->recycle_event, 0);
8289 + sema_init(&state->sync_trigger_event, 0);
8290 + sema_init(&state->sync_release_event, 0);
8292 + mutex_init(&state->slot_mutex);
8293 + mutex_init(&state->recycle_mutex);
8294 + mutex_init(&state->sync_mutex);
8295 + mutex_init(&state->bulk_transfer_mutex);
8297 + sema_init(&state->slot_available_event, 0);
8298 + sema_init(&state->slot_remove_event, 0);
8299 + sema_init(&state->data_quota_event, 0);
8301 + state->slot_queue_available = 0;
8303 + for (i = 0; i < VCHIQ_MAX_SERVICES; i++) {
8304 + VCHIQ_SERVICE_QUOTA_T *service_quota =
8305 + &state->service_quotas[i];
8306 + sema_init(&service_quota->quota_event, 0);
8309 + for (i = local->slot_first; i <= local->slot_last; i++) {
8310 + local->slot_queue[state->slot_queue_available++] = i;
8311 + up(&state->slot_available_event);
8314 + state->default_slot_quota = state->slot_queue_available/2;
8315 + state->default_message_quota =
8316 + min((unsigned short)(state->default_slot_quota * 256),
8317 + (unsigned short)~0);
8319 + state->previous_data_index = -1;
8320 + state->data_use_count = 0;
8321 + state->data_quota = state->slot_queue_available - 1;
8323 + local->trigger.event = &state->trigger_event;
8324 + remote_event_create(&local->trigger);
8325 + local->tx_pos = 0;
8327 + local->recycle.event = &state->recycle_event;
8328 + remote_event_create(&local->recycle);
8329 + local->slot_queue_recycle = state->slot_queue_available;
8331 + local->sync_trigger.event = &state->sync_trigger_event;
8332 + remote_event_create(&local->sync_trigger);
8334 + local->sync_release.event = &state->sync_release_event;
8335 + remote_event_create(&local->sync_release);
8337 + /* At start-of-day, the slot is empty and available */
8338 + ((VCHIQ_HEADER_T *)SLOT_DATA_FROM_INDEX(state, local->slot_sync))->msgid
8339 + = VCHIQ_MSGID_PADDING;
8340 + remote_event_signal_local(&local->sync_release);
8342 + local->debug[DEBUG_ENTRIES] = DEBUG_MAX;
8344 + status = vchiq_platform_init_state(state);
8347 + bring up slot handler thread
8349 + snprintf(threadname, sizeof(threadname), "VCHIQ-%d", state->id);
8350 + state->slot_handler_thread = kthread_create(&slot_handler_func,
8354 + if (state->slot_handler_thread == NULL) {
8355 + vchiq_loud_error_header();
8356 + vchiq_loud_error("couldn't create thread %s", threadname);
8357 + vchiq_loud_error_footer();
8358 + return VCHIQ_ERROR;
8360 + set_user_nice(state->slot_handler_thread, -19);
8361 + wake_up_process(state->slot_handler_thread);
8363 + snprintf(threadname, sizeof(threadname), "VCHIQr-%d", state->id);
8364 + state->recycle_thread = kthread_create(&recycle_func,
8367 + if (state->recycle_thread == NULL) {
8368 + vchiq_loud_error_header();
8369 + vchiq_loud_error("couldn't create thread %s", threadname);
8370 + vchiq_loud_error_footer();
8371 + return VCHIQ_ERROR;
8373 + set_user_nice(state->recycle_thread, -19);
8374 + wake_up_process(state->recycle_thread);
8376 + snprintf(threadname, sizeof(threadname), "VCHIQs-%d", state->id);
8377 + state->sync_thread = kthread_create(&sync_func,
8380 + if (state->sync_thread == NULL) {
8381 + vchiq_loud_error_header();
8382 + vchiq_loud_error("couldn't create thread %s", threadname);
8383 + vchiq_loud_error_footer();
8384 + return VCHIQ_ERROR;
8386 + set_user_nice(state->sync_thread, -20);
8387 + wake_up_process(state->sync_thread);
8389 + BUG_ON(state->id >= VCHIQ_MAX_STATES);
8390 + vchiq_states[state->id] = state;
8392 + /* Indicate readiness to the other side */
8393 + local->initialised = 1;
8398 +/* Called from application thread when a client or server service is created. */
8400 +vchiq_add_service_internal(VCHIQ_STATE_T *state,
8401 + const VCHIQ_SERVICE_PARAMS_T *params, int srvstate,
8402 + VCHIQ_INSTANCE_T instance, VCHIQ_USERDATA_TERM_T userdata_term)
8404 + VCHIQ_SERVICE_T *service;
8406 + service = kmalloc(sizeof(VCHIQ_SERVICE_T), GFP_KERNEL);
8408 + service->base.fourcc = params->fourcc;
8409 + service->base.callback = params->callback;
8410 + service->base.userdata = params->userdata;
8411 + service->handle = VCHIQ_SERVICE_HANDLE_INVALID;
8412 + service->ref_count = 1;
8413 + service->srvstate = VCHIQ_SRVSTATE_FREE;
8414 + service->userdata_term = userdata_term;
8415 + service->localport = VCHIQ_PORT_FREE;
8416 + service->remoteport = VCHIQ_PORT_FREE;
8418 + service->public_fourcc = (srvstate == VCHIQ_SRVSTATE_OPENING) ?
8419 + VCHIQ_FOURCC_INVALID : params->fourcc;
8420 + service->client_id = 0;
8421 + service->auto_close = 1;
8422 + service->sync = 0;
8423 + service->closing = 0;
8424 + service->trace = 0;
8425 + atomic_set(&service->poll_flags, 0);
8426 + service->version = params->version;
8427 + service->version_min = params->version_min;
8428 + service->state = state;
8429 + service->instance = instance;
8430 + service->service_use_count = 0;
8431 + init_bulk_queue(&service->bulk_tx);
8432 + init_bulk_queue(&service->bulk_rx);
8433 + sema_init(&service->remove_event, 0);
8434 + sema_init(&service->bulk_remove_event, 0);
8435 + mutex_init(&service->bulk_mutex);
8436 + memset(&service->stats, 0, sizeof(service->stats));
8438 + vchiq_log_error(vchiq_core_log_level,
8443 + VCHIQ_SERVICE_T **pservice = NULL;
8446 + /* Although it is perfectly possible to use service_spinlock
8447 + ** to protect the creation of services, it is overkill as it
8448 + ** disables interrupts while the array is searched.
8449 + ** The only danger is of another thread trying to create a
8450 + ** service - service deletion is safe.
8451 + ** Therefore it is preferable to use state->mutex which,
8452 + ** although slower to claim, doesn't block interrupts while
8456 + mutex_lock(&state->mutex);
8458 + /* Prepare to use a previously unused service */
8459 + if (state->unused_service < VCHIQ_MAX_SERVICES)
8460 + pservice = &state->services[state->unused_service];
8462 + if (srvstate == VCHIQ_SRVSTATE_OPENING) {
8463 + for (i = 0; i < state->unused_service; i++) {
8464 + VCHIQ_SERVICE_T *srv = state->services[i];
8466 + pservice = &state->services[i];
8471 + for (i = (state->unused_service - 1); i >= 0; i--) {
8472 + VCHIQ_SERVICE_T *srv = state->services[i];
8474 + pservice = &state->services[i];
8475 + else if ((srv->public_fourcc == params->fourcc)
8476 + && ((srv->instance != instance) ||
8477 + (srv->base.callback !=
8478 + params->callback))) {
8479 + /* There is another server using this
8480 + ** fourcc which doesn't match. */
8488 + service->localport = (pservice - state->services);
8490 + handle_seq = VCHIQ_MAX_STATES *
8491 + VCHIQ_MAX_SERVICES;
8492 + service->handle = handle_seq |
8493 + (state->id * VCHIQ_MAX_SERVICES) |
8494 + service->localport;
8495 + handle_seq += VCHIQ_MAX_STATES * VCHIQ_MAX_SERVICES;
8496 + *pservice = service;
8497 + if (pservice == &state->services[state->unused_service])
8498 + state->unused_service++;
8501 + mutex_unlock(&state->mutex);
8510 + VCHIQ_SERVICE_QUOTA_T *service_quota =
8511 + &state->service_quotas[service->localport];
8512 + service_quota->slot_quota = state->default_slot_quota;
8513 + service_quota->message_quota = state->default_message_quota;
8514 + if (service_quota->slot_use_count == 0)
8515 + service_quota->previous_tx_index =
8516 + SLOT_QUEUE_INDEX_FROM_POS(state->local_tx_pos)
8519 + /* Bring this service online */
8520 + vchiq_set_service_state(service, srvstate);
8522 + vchiq_log_info(vchiq_core_msg_log_level,
8523 + "%s Service %c%c%c%c SrcPort:%d",
8524 + (srvstate == VCHIQ_SRVSTATE_OPENING)
8526 + VCHIQ_FOURCC_AS_4CHARS(params->fourcc),
8527 + service->localport);
8530 + /* Don't unlock the service - leave it with a ref_count of 1. */
8536 +vchiq_open_service_internal(VCHIQ_SERVICE_T *service, int client_id)
8538 + struct vchiq_open_payload payload = {
8539 + service->base.fourcc,
8542 + service->version_min
8544 + VCHIQ_ELEMENT_T body = { &payload, sizeof(payload) };
8545 + VCHIQ_STATUS_T status = VCHIQ_SUCCESS;
8547 + service->client_id = client_id;
8548 + vchiq_use_service_internal(service);
8549 + status = queue_message(service->state, NULL,
8550 + VCHIQ_MAKE_MSG(VCHIQ_MSG_OPEN, service->localport, 0),
8551 + &body, 1, sizeof(payload), QMFLAGS_IS_BLOCKING);
8552 + if (status == VCHIQ_SUCCESS) {
8553 + /* Wait for the ACK/NAK */
8554 + if (down_interruptible(&service->remove_event) != 0) {
8555 + status = VCHIQ_RETRY;
8556 + vchiq_release_service_internal(service);
8557 + } else if ((service->srvstate != VCHIQ_SRVSTATE_OPEN) &&
8558 + (service->srvstate != VCHIQ_SRVSTATE_OPENSYNC)) {
8559 + if (service->srvstate != VCHIQ_SRVSTATE_CLOSEWAIT)
8560 + vchiq_log_error(vchiq_core_log_level,
8561 + "%d: osi - srvstate = %s (ref %d)",
8562 + service->state->id,
8563 + srvstate_names[service->srvstate],
8564 + service->ref_count);
8565 + status = VCHIQ_ERROR;
8566 + VCHIQ_SERVICE_STATS_INC(service, error_count);
8567 + vchiq_release_service_internal(service);
8574 +release_service_messages(VCHIQ_SERVICE_T *service)
8576 + VCHIQ_STATE_T *state = service->state;
8577 + int slot_last = state->remote->slot_last;
8580 + /* Release any claimed messages aimed at this service */
8582 + if (service->sync) {
8583 + VCHIQ_HEADER_T *header =
8584 + (VCHIQ_HEADER_T *)SLOT_DATA_FROM_INDEX(state,
8585 + state->remote->slot_sync);
8586 + if (VCHIQ_MSG_DSTPORT(header->msgid) == service->localport)
8587 + release_message_sync(state, header);
8592 + for (i = state->remote->slot_first; i <= slot_last; i++) {
8593 + VCHIQ_SLOT_INFO_T *slot_info =
8594 + SLOT_INFO_FROM_INDEX(state, i);
8595 + if (slot_info->release_count != slot_info->use_count) {
8597 + (char *)SLOT_DATA_FROM_INDEX(state, i);
8598 + unsigned int pos, end;
8600 + end = VCHIQ_SLOT_SIZE;
8601 + if (data == state->rx_data)
8602 + /* This buffer is still being read from - stop
8603 + ** at the current read position */
8604 + end = state->rx_pos & VCHIQ_SLOT_MASK;
8608 + while (pos < end) {
8609 + VCHIQ_HEADER_T *header =
8610 + (VCHIQ_HEADER_T *)(data + pos);
8611 + int msgid = header->msgid;
8612 + int port = VCHIQ_MSG_DSTPORT(msgid);
8613 + if ((port == service->localport) &&
8614 + (msgid & VCHIQ_MSGID_CLAIMED)) {
8615 + vchiq_log_info(vchiq_core_log_level,
8617 + (unsigned int)header);
8618 + release_slot(state, slot_info, header,
8621 + pos += calc_stride(header->size);
8622 + if (pos > VCHIQ_SLOT_SIZE) {
8623 + vchiq_log_error(vchiq_core_log_level,
8624 + "fsi - pos %x: header %x, "
8625 + "msgid %x, header->msgid %x, "
8626 + "header->size %x",
8627 + pos, (unsigned int)header,
8628 + msgid, header->msgid,
8630 + WARN(1, "invalid slot position\n");
8638 +do_abort_bulks(VCHIQ_SERVICE_T *service)
8640 + VCHIQ_STATUS_T status;
8642 + /* Abort any outstanding bulk transfers */
8643 + if (mutex_lock_interruptible(&service->bulk_mutex) != 0)
8645 + abort_outstanding_bulks(service, &service->bulk_tx);
8646 + abort_outstanding_bulks(service, &service->bulk_rx);
8647 + mutex_unlock(&service->bulk_mutex);
8649 + status = notify_bulks(service, &service->bulk_tx, 0/*!retry_poll*/);
8650 + if (status == VCHIQ_SUCCESS)
8651 + status = notify_bulks(service, &service->bulk_rx,
8652 + 0/*!retry_poll*/);
8653 + return (status == VCHIQ_SUCCESS);
8656 +static VCHIQ_STATUS_T
8657 +close_service_complete(VCHIQ_SERVICE_T *service, int failstate)
8659 + VCHIQ_STATUS_T status;
8660 + int is_server = (service->public_fourcc != VCHIQ_FOURCC_INVALID);
8663 + switch (service->srvstate) {
8664 + case VCHIQ_SRVSTATE_OPEN:
8665 + case VCHIQ_SRVSTATE_CLOSESENT:
8666 + case VCHIQ_SRVSTATE_CLOSERECVD:
8668 + if (service->auto_close) {
8669 + service->client_id = 0;
8670 + service->remoteport = VCHIQ_PORT_FREE;
8671 + newstate = VCHIQ_SRVSTATE_LISTENING;
8673 + newstate = VCHIQ_SRVSTATE_CLOSEWAIT;
8675 + newstate = VCHIQ_SRVSTATE_CLOSED;
8676 + vchiq_set_service_state(service, newstate);
8678 + case VCHIQ_SRVSTATE_LISTENING:
8681 + vchiq_log_error(vchiq_core_log_level,
8682 + "close_service_complete(%x) called in state %s",
8683 + service->handle, srvstate_names[service->srvstate]);
8684 + WARN(1, "close_service_complete in unexpected state\n");
8685 + return VCHIQ_ERROR;
8688 + status = make_service_callback(service,
8689 + VCHIQ_SERVICE_CLOSED, NULL, NULL);
8691 + if (status != VCHIQ_RETRY) {
8692 + int uc = service->service_use_count;
8694 + /* Complete the close process */
8695 + for (i = 0; i < uc; i++)
8696 + /* cater for cases where close is forced and the
8697 + ** client may not close all its handles */
8697 + ** client may not close all its handles */
8698 + vchiq_release_service_internal(service);
8700 + service->client_id = 0;
8701 + service->remoteport = VCHIQ_PORT_FREE;
8703 + if (service->srvstate == VCHIQ_SRVSTATE_CLOSED)
8704 + vchiq_free_service_internal(service);
8705 + else if (service->srvstate != VCHIQ_SRVSTATE_CLOSEWAIT) {
8707 + service->closing = 0;
8709 + up(&service->remove_event);
8712 + vchiq_set_service_state(service, failstate);
8717 +/* Called by the slot handler */
8719 +vchiq_close_service_internal(VCHIQ_SERVICE_T *service, int close_recvd)
8721 + VCHIQ_STATE_T *state = service->state;
8722 + VCHIQ_STATUS_T status = VCHIQ_SUCCESS;
8723 + int is_server = (service->public_fourcc != VCHIQ_FOURCC_INVALID);
8725 + vchiq_log_info(vchiq_core_log_level, "%d: csi:%d,%d (%s)",
8726 + service->state->id, service->localport, close_recvd,
8727 + srvstate_names[service->srvstate]);
8729 + switch (service->srvstate) {
8730 + case VCHIQ_SRVSTATE_CLOSED:
8731 + case VCHIQ_SRVSTATE_HIDDEN:
8732 + case VCHIQ_SRVSTATE_LISTENING:
8733 + case VCHIQ_SRVSTATE_CLOSEWAIT:
8735 + vchiq_log_error(vchiq_core_log_level,
8736 + "vchiq_close_service_internal(1) called "
8738 + srvstate_names[service->srvstate]);
8739 + else if (is_server) {
8740 + if (service->srvstate == VCHIQ_SRVSTATE_LISTENING) {
8741 + status = VCHIQ_ERROR;
8743 + service->client_id = 0;
8744 + service->remoteport = VCHIQ_PORT_FREE;
8745 + if (service->srvstate ==
8746 + VCHIQ_SRVSTATE_CLOSEWAIT)
8747 + vchiq_set_service_state(service,
8748 + VCHIQ_SRVSTATE_LISTENING);
8750 + up(&service->remove_event);
8752 + vchiq_free_service_internal(service);
8754 + case VCHIQ_SRVSTATE_OPENING:
8755 + if (close_recvd) {
8756 + /* The open was rejected - tell the user */
8757 + vchiq_set_service_state(service,
8758 + VCHIQ_SRVSTATE_CLOSEWAIT);
8759 + up(&service->remove_event);
8761 + /* Shutdown mid-open - let the other side know */
8762 + status = queue_message(state, service,
8765 + service->localport,
8766 + VCHIQ_MSG_DSTPORT(service->remoteport)),
8771 + case VCHIQ_SRVSTATE_OPENSYNC:
8772 + mutex_lock(&state->sync_mutex);
8773 + /* Drop through */
8775 + case VCHIQ_SRVSTATE_OPEN:
8776 + if (state->is_master || close_recvd) {
8777 + if (!do_abort_bulks(service))
8778 + status = VCHIQ_RETRY;
8781 + release_service_messages(service);
8783 + if (status == VCHIQ_SUCCESS)
8784 + status = queue_message(state, service,
8787 + service->localport,
8788 + VCHIQ_MSG_DSTPORT(service->remoteport)),
8789 + NULL, 0, 0, QMFLAGS_NO_MUTEX_UNLOCK);
8791 + if (status == VCHIQ_SUCCESS) {
8792 + if (!close_recvd) {
8793 + /* Change the state while the mutex is
8795 + vchiq_set_service_state(service,
8796 + VCHIQ_SRVSTATE_CLOSESENT);
8797 + mutex_unlock(&state->slot_mutex);
8798 + if (service->sync)
8799 + mutex_unlock(&state->sync_mutex);
8802 + } else if (service->srvstate == VCHIQ_SRVSTATE_OPENSYNC) {
8803 + mutex_unlock(&state->sync_mutex);
8808 + /* Change the state while the mutex is still held */
8809 + vchiq_set_service_state(service, VCHIQ_SRVSTATE_CLOSERECVD);
8810 + mutex_unlock(&state->slot_mutex);
8811 + if (service->sync)
8812 + mutex_unlock(&state->sync_mutex);
8814 + status = close_service_complete(service,
8815 + VCHIQ_SRVSTATE_CLOSERECVD);
8818 + case VCHIQ_SRVSTATE_CLOSESENT:
8820 + /* This happens when a process is killed mid-close */
8823 + if (!state->is_master) {
8824 + if (!do_abort_bulks(service)) {
8825 + status = VCHIQ_RETRY;
8830 + if (status == VCHIQ_SUCCESS)
8831 + status = close_service_complete(service,
8832 + VCHIQ_SRVSTATE_CLOSERECVD);
8835 + case VCHIQ_SRVSTATE_CLOSERECVD:
8836 + if (!close_recvd && is_server)
8837 + /* Force into LISTENING mode */
8838 + vchiq_set_service_state(service,
8839 + VCHIQ_SRVSTATE_LISTENING);
8840 + status = close_service_complete(service,
8841 + VCHIQ_SRVSTATE_CLOSERECVD);
8845 + vchiq_log_error(vchiq_core_log_level,
8846 + "vchiq_close_service_internal(%d) called in state %s",
8847 + close_recvd, srvstate_names[service->srvstate]);
8854 +/* Called from the application process upon process death */
8856 +vchiq_terminate_service_internal(VCHIQ_SERVICE_T *service)
8858 + VCHIQ_STATE_T *state = service->state;
8860 + vchiq_log_info(vchiq_core_log_level, "%d: tsi - (%d<->%d)",
8861 + state->id, service->localport, service->remoteport);
8863 + mark_service_closing(service);
8865 + /* Mark the service for removal by the slot handler */
8866 + request_poll(state, service, VCHIQ_POLL_REMOVE);
8869 +/* Called from the slot handler */
8871 +vchiq_free_service_internal(VCHIQ_SERVICE_T *service)
8873 + VCHIQ_STATE_T *state = service->state;
8875 + vchiq_log_info(vchiq_core_log_level, "%d: fsi - (%d)",
8876 + state->id, service->localport);
8878 + switch (service->srvstate) {
8879 + case VCHIQ_SRVSTATE_OPENING:
8880 + case VCHIQ_SRVSTATE_CLOSED:
8881 + case VCHIQ_SRVSTATE_HIDDEN:
8882 + case VCHIQ_SRVSTATE_LISTENING:
8883 + case VCHIQ_SRVSTATE_CLOSEWAIT:
8886 + vchiq_log_error(vchiq_core_log_level,
8887 + "%d: fsi - (%d) in state %s",
8888 + state->id, service->localport,
8889 + srvstate_names[service->srvstate]);
8893 + vchiq_set_service_state(service, VCHIQ_SRVSTATE_FREE);
8895 + up(&service->remove_event);
8897 + /* Release the initial lock */
8898 + unlock_service(service);
8902 +vchiq_connect_internal(VCHIQ_STATE_T *state, VCHIQ_INSTANCE_T instance)
8904 + VCHIQ_SERVICE_T *service;
8907 + /* Find all services registered to this client and enable them. */
8909 + while ((service = next_service_by_instance(state, instance,
8911 + if (service->srvstate == VCHIQ_SRVSTATE_HIDDEN)
8912 + vchiq_set_service_state(service,
8913 + VCHIQ_SRVSTATE_LISTENING);
8914 + unlock_service(service);
8917 + if (state->conn_state == VCHIQ_CONNSTATE_DISCONNECTED) {
8918 + if (queue_message(state, NULL,
8919 + VCHIQ_MAKE_MSG(VCHIQ_MSG_CONNECT, 0, 0), NULL, 0,
8920 + 0, QMFLAGS_IS_BLOCKING) == VCHIQ_RETRY)
8921 + return VCHIQ_RETRY;
8923 + vchiq_set_conn_state(state, VCHIQ_CONNSTATE_CONNECTING);
8926 + if (state->conn_state == VCHIQ_CONNSTATE_CONNECTING) {
8927 + if (down_interruptible(&state->connect) != 0)
8928 + return VCHIQ_RETRY;
8930 + vchiq_set_conn_state(state, VCHIQ_CONNSTATE_CONNECTED);
8931 + up(&state->connect);
8934 + return VCHIQ_SUCCESS;
8938 +vchiq_shutdown_internal(VCHIQ_STATE_T *state, VCHIQ_INSTANCE_T instance)
8940 + VCHIQ_SERVICE_T *service;
8943 + /* Find all services registered to this client and remove them. */
8945 + while ((service = next_service_by_instance(state, instance,
8947 + (void)vchiq_remove_service(service->handle);
8948 + unlock_service(service);
8951 + return VCHIQ_SUCCESS;
8955 +vchiq_pause_internal(VCHIQ_STATE_T *state)
8957 + VCHIQ_STATUS_T status = VCHIQ_SUCCESS;
8959 + switch (state->conn_state) {
8960 + case VCHIQ_CONNSTATE_CONNECTED:
8961 + /* Request a pause */
8962 + vchiq_set_conn_state(state, VCHIQ_CONNSTATE_PAUSING);
8963 + request_poll(state, NULL, 0);
8966 + vchiq_log_error(vchiq_core_log_level,
8967 + "vchiq_pause_internal in state %s\n",
8968 + conn_state_names[state->conn_state]);
8969 + status = VCHIQ_ERROR;
8970 + VCHIQ_STATS_INC(state, error_count);
8978 +vchiq_resume_internal(VCHIQ_STATE_T *state)
8980 + VCHIQ_STATUS_T status = VCHIQ_SUCCESS;
8982 + if (state->conn_state == VCHIQ_CONNSTATE_PAUSED) {
8983 + vchiq_set_conn_state(state, VCHIQ_CONNSTATE_RESUMING);
8984 + request_poll(state, NULL, 0);
8986 + status = VCHIQ_ERROR;
8987 + VCHIQ_STATS_INC(state, error_count);
8994 +vchiq_close_service(VCHIQ_SERVICE_HANDLE_T handle)
8996 + /* Unregister the service */
8997 + VCHIQ_SERVICE_T *service = find_service_by_handle(handle);
8998 + VCHIQ_STATUS_T status = VCHIQ_SUCCESS;
9001 + return VCHIQ_ERROR;
9003 + vchiq_log_info(vchiq_core_log_level,
9004 + "%d: close_service:%d",
9005 + service->state->id, service->localport);
9007 + if ((service->srvstate == VCHIQ_SRVSTATE_FREE) ||
9008 + (service->srvstate == VCHIQ_SRVSTATE_LISTENING) ||
9009 + (service->srvstate == VCHIQ_SRVSTATE_HIDDEN)) {
9010 + unlock_service(service);
9011 + return VCHIQ_ERROR;
9014 + mark_service_closing(service);
9016 + if (current == service->state->slot_handler_thread) {
9017 + status = vchiq_close_service_internal(service,
9018 + 0/*!close_recvd*/);
9019 + BUG_ON(status == VCHIQ_RETRY);
9021 + /* Mark the service for termination by the slot handler */
9022 + request_poll(service->state, service, VCHIQ_POLL_TERMINATE);
9026 + if (down_interruptible(&service->remove_event) != 0) {
9027 + status = VCHIQ_RETRY;
9031 + if ((service->srvstate == VCHIQ_SRVSTATE_FREE) ||
9032 + (service->srvstate == VCHIQ_SRVSTATE_LISTENING) ||
9033 + (service->srvstate == VCHIQ_SRVSTATE_OPEN))
9036 + vchiq_log_warning(vchiq_core_log_level,
9037 + "%d: close_service:%d - waiting in state %s",
9038 + service->state->id, service->localport,
9039 + srvstate_names[service->srvstate]);
9042 + if ((status == VCHIQ_SUCCESS) &&
9043 + (service->srvstate != VCHIQ_SRVSTATE_FREE) &&
9044 + (service->srvstate != VCHIQ_SRVSTATE_LISTENING))
9045 + status = VCHIQ_ERROR;
9047 + unlock_service(service);
9053 +vchiq_remove_service(VCHIQ_SERVICE_HANDLE_T handle)
9055 + /* Unregister the service */
9056 + VCHIQ_SERVICE_T *service = find_service_by_handle(handle);
9057 + VCHIQ_STATUS_T status = VCHIQ_SUCCESS;
9060 + return VCHIQ_ERROR;
9062 + vchiq_log_info(vchiq_core_log_level,
9063 + "%d: remove_service:%d",
9064 + service->state->id, service->localport);
9066 + if (service->srvstate == VCHIQ_SRVSTATE_FREE) {
9067 + unlock_service(service);
9068 + return VCHIQ_ERROR;
9071 + mark_service_closing(service);
9073 + if ((service->srvstate == VCHIQ_SRVSTATE_HIDDEN) ||
9074 + (current == service->state->slot_handler_thread)) {
9075 + /* Make it look like a client, because it must be removed and
9076 + not left in the LISTENING state. */
9077 + service->public_fourcc = VCHIQ_FOURCC_INVALID;
9079 + status = vchiq_close_service_internal(service,
9080 + 0/*!close_recvd*/);
9081 + BUG_ON(status == VCHIQ_RETRY);
9083 + /* Mark the service for removal by the slot handler */
9084 + request_poll(service->state, service, VCHIQ_POLL_REMOVE);
9087 + if (down_interruptible(&service->remove_event) != 0) {
9088 + status = VCHIQ_RETRY;
9092 + if ((service->srvstate == VCHIQ_SRVSTATE_FREE) ||
9093 + (service->srvstate == VCHIQ_SRVSTATE_OPEN))
9096 + vchiq_log_warning(vchiq_core_log_level,
9097 + "%d: remove_service:%d - waiting in state %s",
9098 + service->state->id, service->localport,
9099 + srvstate_names[service->srvstate]);
9102 + if ((status == VCHIQ_SUCCESS) &&
9103 + (service->srvstate != VCHIQ_SRVSTATE_FREE))
9104 + status = VCHIQ_ERROR;
9106 + unlock_service(service);
9112 +/* This function may be called by kernel threads or user threads.
9113 + * User threads may receive VCHIQ_RETRY to indicate that a signal has been
9114 + * received and the call should be retried after being returned to user
9116 + * When called in blocking mode, the userdata field points to a bulk_waiter
9120 +vchiq_bulk_transfer(VCHIQ_SERVICE_HANDLE_T handle,
9121 + VCHI_MEM_HANDLE_T memhandle, void *offset, int size, void *userdata,
9122 + VCHIQ_BULK_MODE_T mode, VCHIQ_BULK_DIR_T dir)
9124 + VCHIQ_SERVICE_T *service = find_service_by_handle(handle);
9125 + VCHIQ_BULK_QUEUE_T *queue;
9126 + VCHIQ_BULK_T *bulk;
9127 + VCHIQ_STATE_T *state;
9128 + struct bulk_waiter *bulk_waiter = NULL;
9129 + const char dir_char = (dir == VCHIQ_BULK_TRANSMIT) ? 't' : 'r';
9130 + const int dir_msgtype = (dir == VCHIQ_BULK_TRANSMIT) ?
9131 + VCHIQ_MSG_BULK_TX : VCHIQ_MSG_BULK_RX;
9132 + VCHIQ_STATUS_T status = VCHIQ_ERROR;
9135 + (service->srvstate != VCHIQ_SRVSTATE_OPEN) ||
9136 + ((memhandle == VCHI_MEM_HANDLE_INVALID) && (offset == NULL)) ||
9137 + (vchiq_check_service(service) != VCHIQ_SUCCESS))
9141 + case VCHIQ_BULK_MODE_NOCALLBACK:
9142 + case VCHIQ_BULK_MODE_CALLBACK:
9144 + case VCHIQ_BULK_MODE_BLOCKING:
9145 + bulk_waiter = (struct bulk_waiter *)userdata;
9146 + sema_init(&bulk_waiter->event, 0);
9147 + bulk_waiter->actual = 0;
9148 + bulk_waiter->bulk = NULL;
9150 + case VCHIQ_BULK_MODE_WAITING:
9151 + bulk_waiter = (struct bulk_waiter *)userdata;
9152 + bulk = bulk_waiter->bulk;
9158 + state = service->state;
9160 + queue = (dir == VCHIQ_BULK_TRANSMIT) ?
9161 + &service->bulk_tx : &service->bulk_rx;
9163 + if (mutex_lock_interruptible(&service->bulk_mutex) != 0) {
9164 + status = VCHIQ_RETRY;
9168 + if (queue->local_insert == queue->remove + VCHIQ_NUM_SERVICE_BULKS) {
9169 + VCHIQ_SERVICE_STATS_INC(service, bulk_stalls);
9171 + mutex_unlock(&service->bulk_mutex);
9172 + if (down_interruptible(&service->bulk_remove_event)
9174 + status = VCHIQ_RETRY;
9177 + if (mutex_lock_interruptible(&service->bulk_mutex)
9179 + status = VCHIQ_RETRY;
9182 + } while (queue->local_insert == queue->remove +
9183 + VCHIQ_NUM_SERVICE_BULKS);
9186 + bulk = &queue->bulks[BULK_INDEX(queue->local_insert)];
9188 + bulk->mode = mode;
9190 + bulk->userdata = userdata;
9191 + bulk->size = size;
9192 + bulk->actual = VCHIQ_BULK_ACTUAL_ABORTED;
9194 + if (vchiq_prepare_bulk_data(bulk, memhandle, offset, size, dir) !=
9196 + goto unlock_error_exit;
9200 + vchiq_log_info(vchiq_core_log_level,
9201 + "%d: bt (%d->%d) %cx %x@%x %x",
9203 + service->localport, service->remoteport, dir_char,
9204 + size, (unsigned int)bulk->data, (unsigned int)userdata);
9206 + /* The slot mutex must be held while the service is being closed, so
9207 + claim it here to ensure that a close is not in progress */
9208 + if (mutex_lock_interruptible(&state->slot_mutex) != 0) {
9209 + status = VCHIQ_RETRY;
9210 + goto cancel_bulk_error_exit;
9213 + if (service->srvstate != VCHIQ_SRVSTATE_OPEN)
9214 + goto unlock_both_error_exit;
9216 + if (state->is_master) {
9217 + queue->local_insert++;
9218 + if (resolve_bulks(service, queue))
9219 + request_poll(state, service,
9220 + (dir == VCHIQ_BULK_TRANSMIT) ?
9221 + VCHIQ_POLL_TXNOTIFY : VCHIQ_POLL_RXNOTIFY);
9223 + int payload[2] = { (int)bulk->data, bulk->size };
9224 + VCHIQ_ELEMENT_T element = { payload, sizeof(payload) };
9226 + status = queue_message(state, NULL,
9227 + VCHIQ_MAKE_MSG(dir_msgtype,
9228 + service->localport, service->remoteport),
9229 + &element, 1, sizeof(payload),
9230 + QMFLAGS_IS_BLOCKING |
9231 + QMFLAGS_NO_MUTEX_LOCK |
9232 + QMFLAGS_NO_MUTEX_UNLOCK);
9233 + if (status != VCHIQ_SUCCESS) {
9234 + goto unlock_both_error_exit;
9236 + queue->local_insert++;
9239 + mutex_unlock(&state->slot_mutex);
9240 + mutex_unlock(&service->bulk_mutex);
9242 + vchiq_log_trace(vchiq_core_log_level,
9243 + "%d: bt:%d %cx li=%x ri=%x p=%x",
9245 + service->localport, dir_char,
9246 + queue->local_insert, queue->remote_insert, queue->process);
9249 + unlock_service(service);
9251 + status = VCHIQ_SUCCESS;
9253 + if (bulk_waiter) {
9254 + bulk_waiter->bulk = bulk;
9255 + if (down_interruptible(&bulk_waiter->event) != 0)
9256 + status = VCHIQ_RETRY;
9257 + else if (bulk_waiter->actual == VCHIQ_BULK_ACTUAL_ABORTED)
9258 + status = VCHIQ_ERROR;
9263 +unlock_both_error_exit:
9264 + mutex_unlock(&state->slot_mutex);
9265 +cancel_bulk_error_exit:
9266 + vchiq_complete_bulk(bulk);
9268 + mutex_unlock(&service->bulk_mutex);
9272 + unlock_service(service);
9277 +vchiq_queue_message(VCHIQ_SERVICE_HANDLE_T handle,
9278 + const VCHIQ_ELEMENT_T *elements, unsigned int count)
9280 + VCHIQ_SERVICE_T *service = find_service_by_handle(handle);
9281 + VCHIQ_STATUS_T status = VCHIQ_ERROR;
9283 + unsigned int size = 0;
9287 + (vchiq_check_service(service) != VCHIQ_SUCCESS))
9290 + for (i = 0; i < (unsigned int)count; i++) {
9291 + if (elements[i].size) {
9292 + if (elements[i].data == NULL) {
9293 + VCHIQ_SERVICE_STATS_INC(service, error_count);
9296 + size += elements[i].size;
9300 + if (size > VCHIQ_MAX_MSG_SIZE) {
9301 + VCHIQ_SERVICE_STATS_INC(service, error_count);
9305 + switch (service->srvstate) {
9306 + case VCHIQ_SRVSTATE_OPEN:
9307 + status = queue_message(service->state, service,
9308 + VCHIQ_MAKE_MSG(VCHIQ_MSG_DATA,
9309 + service->localport,
9310 + service->remoteport),
9311 + elements, count, size, 1);
9313 + case VCHIQ_SRVSTATE_OPENSYNC:
9314 + status = queue_message_sync(service->state, service,
9315 + VCHIQ_MAKE_MSG(VCHIQ_MSG_DATA,
9316 + service->localport,
9317 + service->remoteport),
9318 + elements, count, size, 1);
9321 + status = VCHIQ_ERROR;
9327 + unlock_service(service);
9333 +vchiq_release_message(VCHIQ_SERVICE_HANDLE_T handle, VCHIQ_HEADER_T *header)
9335 + VCHIQ_SERVICE_T *service = find_service_by_handle(handle);
9336 + VCHIQ_SHARED_STATE_T *remote;
9337 + VCHIQ_STATE_T *state;
9343 + state = service->state;
9344 + remote = state->remote;
9346 + slot_index = SLOT_INDEX_FROM_DATA(state, (void *)header);
9348 + if ((slot_index >= remote->slot_first) &&
9349 + (slot_index <= remote->slot_last)) {
9350 + int msgid = header->msgid;
9351 + if (msgid & VCHIQ_MSGID_CLAIMED) {
9352 + VCHIQ_SLOT_INFO_T *slot_info =
9353 + SLOT_INFO_FROM_INDEX(state, slot_index);
9355 + release_slot(state, slot_info, header, service);
9357 + } else if (slot_index == remote->slot_sync)
9358 + release_message_sync(state, header);
9360 + unlock_service(service);
9364 +release_message_sync(VCHIQ_STATE_T *state, VCHIQ_HEADER_T *header)
9366 + header->msgid = VCHIQ_MSGID_PADDING;
9368 + remote_event_signal(&state->remote->sync_release);
9372 +vchiq_get_peer_version(VCHIQ_SERVICE_HANDLE_T handle, short *peer_version)
9374 + VCHIQ_STATUS_T status = VCHIQ_ERROR;
9375 + VCHIQ_SERVICE_T *service = find_service_by_handle(handle);
9378 + (vchiq_check_service(service) != VCHIQ_SUCCESS) ||
9381 + *peer_version = service->peer_version;
9382 + status = VCHIQ_SUCCESS;
9386 + unlock_service(service);
9391 +vchiq_get_config(VCHIQ_INSTANCE_T instance,
9392 + int config_size, VCHIQ_CONFIG_T *pconfig)
9394 + VCHIQ_CONFIG_T config;
9398 + config.max_msg_size = VCHIQ_MAX_MSG_SIZE;
9399 + config.bulk_threshold = VCHIQ_MAX_MSG_SIZE;
9400 + config.max_outstanding_bulks = VCHIQ_NUM_SERVICE_BULKS;
9401 + config.max_services = VCHIQ_MAX_SERVICES;
9402 + config.version = VCHIQ_VERSION;
9403 + config.version_min = VCHIQ_VERSION_MIN;
9405 + if (config_size > sizeof(VCHIQ_CONFIG_T))
9406 + return VCHIQ_ERROR;
9408 + memcpy(pconfig, &config,
9409 + min(config_size, (int)(sizeof(VCHIQ_CONFIG_T))));
9411 + return VCHIQ_SUCCESS;
9415 +vchiq_set_service_option(VCHIQ_SERVICE_HANDLE_T handle,
9416 + VCHIQ_SERVICE_OPTION_T option, int value)
9418 + VCHIQ_SERVICE_T *service = find_service_by_handle(handle);
9419 + VCHIQ_STATUS_T status = VCHIQ_ERROR;
9423 + case VCHIQ_SERVICE_OPTION_AUTOCLOSE:
9424 + service->auto_close = value;
9425 + status = VCHIQ_SUCCESS;
9428 + case VCHIQ_SERVICE_OPTION_SLOT_QUOTA: {
9429 + VCHIQ_SERVICE_QUOTA_T *service_quota =
9430 + &service->state->service_quotas[
9431 + service->localport];
9433 + value = service->state->default_slot_quota;
9434 + if ((value >= service_quota->slot_use_count) &&
9435 + (value < (unsigned short)~0)) {
9436 + service_quota->slot_quota = value;
9437 + if ((value >= service_quota->slot_use_count) &&
9438 + (service_quota->message_quota >=
9439 + service_quota->message_use_count)) {
9440 + /* Signal the service that it may have
9441 + ** dropped below its quota */
9442 + up(&service_quota->quota_event);
9444 + status = VCHIQ_SUCCESS;
9448 + case VCHIQ_SERVICE_OPTION_MESSAGE_QUOTA: {
9449 + VCHIQ_SERVICE_QUOTA_T *service_quota =
9450 + &service->state->service_quotas[
9451 + service->localport];
9453 + value = service->state->default_message_quota;
9454 + if ((value >= service_quota->message_use_count) &&
9455 + (value < (unsigned short)~0)) {
9456 + service_quota->message_quota = value;
9458 + service_quota->message_use_count) &&
9459 + (service_quota->slot_quota >=
9460 + service_quota->slot_use_count))
9461 + /* Signal the service that it may have
9462 + ** dropped below its quota */
9463 + up(&service_quota->quota_event);
9464 + status = VCHIQ_SUCCESS;
9468 + case VCHIQ_SERVICE_OPTION_SYNCHRONOUS:
9469 + if ((service->srvstate == VCHIQ_SRVSTATE_HIDDEN) ||
9470 + (service->srvstate ==
9471 + VCHIQ_SRVSTATE_LISTENING)) {
9472 + service->sync = value;
9473 + status = VCHIQ_SUCCESS;
9477 + case VCHIQ_SERVICE_OPTION_TRACE:
9478 + service->trace = value;
9479 + status = VCHIQ_SUCCESS;
9485 + unlock_service(service);
9492 +vchiq_dump_shared_state(void *dump_context, VCHIQ_STATE_T *state,
9493 + VCHIQ_SHARED_STATE_T *shared, const char *label)
9495 + static const char *const debug_names[] = {
9497 + "SLOT_HANDLER_COUNT",
9498 + "SLOT_HANDLER_LINE",
9502 + "AWAIT_COMPLETION_LINE",
9503 + "DEQUEUE_MESSAGE_LINE",
9504 + "SERVICE_CALLBACK_LINE",
9505 + "MSG_QUEUE_FULL_COUNT",
9506 + "COMPLETION_QUEUE_FULL_COUNT"
9512 + len = snprintf(buf, sizeof(buf),
9513 + " %s: slots %d-%d tx_pos=%x recycle=%x",
9514 + label, shared->slot_first, shared->slot_last,
9515 + shared->tx_pos, shared->slot_queue_recycle);
9516 + vchiq_dump(dump_context, buf, len + 1);
9518 + len = snprintf(buf, sizeof(buf),
9519 + " Slots claimed:");
9520 + vchiq_dump(dump_context, buf, len + 1);
9522 + for (i = shared->slot_first; i <= shared->slot_last; i++) {
9523 + VCHIQ_SLOT_INFO_T slot_info = *SLOT_INFO_FROM_INDEX(state, i);
9524 + if (slot_info.use_count != slot_info.release_count) {
9525 + len = snprintf(buf, sizeof(buf),
9526 + " %d: %d/%d", i, slot_info.use_count,
9527 + slot_info.release_count);
9528 + vchiq_dump(dump_context, buf, len + 1);
9532 + for (i = 1; i < shared->debug[DEBUG_ENTRIES]; i++) {
9533 + len = snprintf(buf, sizeof(buf), " DEBUG: %s = %d(%x)",
9534 + debug_names[i], shared->debug[i], shared->debug[i]);
9535 + vchiq_dump(dump_context, buf, len + 1);
9540 +vchiq_dump_state(void *dump_context, VCHIQ_STATE_T *state)
9546 + len = snprintf(buf, sizeof(buf), "State %d: %s", state->id,
9547 + conn_state_names[state->conn_state]);
9548 + vchiq_dump(dump_context, buf, len + 1);
9550 + len = snprintf(buf, sizeof(buf),
9551 + " tx_pos=%x(@%x), rx_pos=%x(@%x)",
9552 + state->local->tx_pos,
9553 + (uint32_t)state->tx_data +
9554 + (state->local_tx_pos & VCHIQ_SLOT_MASK),
9556 + (uint32_t)state->rx_data +
9557 + (state->rx_pos & VCHIQ_SLOT_MASK));
9558 + vchiq_dump(dump_context, buf, len + 1);
9560 + len = snprintf(buf, sizeof(buf),
9561 + " Version: %d (min %d)",
9562 + VCHIQ_VERSION, VCHIQ_VERSION_MIN);
9563 + vchiq_dump(dump_context, buf, len + 1);
9565 + if (VCHIQ_ENABLE_STATS) {
9566 + len = snprintf(buf, sizeof(buf),
9567 + " Stats: ctrl_tx_count=%d, ctrl_rx_count=%d, "
9569 + state->stats.ctrl_tx_count, state->stats.ctrl_rx_count,
9570 + state->stats.error_count);
9571 + vchiq_dump(dump_context, buf, len + 1);
9574 + len = snprintf(buf, sizeof(buf),
9575 + " Slots: %d available (%d data), %d recyclable, %d stalls "
9577 + ((state->slot_queue_available * VCHIQ_SLOT_SIZE) -
9578 + state->local_tx_pos) / VCHIQ_SLOT_SIZE,
9579 + state->data_quota - state->data_use_count,
9580 + state->local->slot_queue_recycle - state->slot_queue_available,
9581 + state->stats.slot_stalls, state->stats.data_stalls);
9582 + vchiq_dump(dump_context, buf, len + 1);
9584 + vchiq_dump_platform_state(dump_context);
9586 + vchiq_dump_shared_state(dump_context, state, state->local, "Local");
9587 + vchiq_dump_shared_state(dump_context, state, state->remote, "Remote");
9589 + vchiq_dump_platform_instances(dump_context);
9591 + for (i = 0; i < state->unused_service; i++) {
9592 + VCHIQ_SERVICE_T *service = find_service_by_port(state, i);
9595 + vchiq_dump_service_state(dump_context, service);
9596 + unlock_service(service);
9602 +vchiq_dump_service_state(void *dump_context, VCHIQ_SERVICE_T *service)
9607 + len = snprintf(buf, sizeof(buf), "Service %d: %s (ref %u)",
9608 + service->localport, srvstate_names[service->srvstate],
9609 + service->ref_count - 1); /* Don't include the lock just taken */
9611 + if (service->srvstate != VCHIQ_SRVSTATE_FREE) {
9612 + char remoteport[30];
9613 + VCHIQ_SERVICE_QUOTA_T *service_quota =
9614 + &service->state->service_quotas[service->localport];
9615 + int fourcc = service->base.fourcc;
9616 + int tx_pending, rx_pending;
9617 + if (service->remoteport != VCHIQ_PORT_FREE) {
9618 + int len2 = snprintf(remoteport, sizeof(remoteport),
9619 + "%d", service->remoteport);
9620 + if (service->public_fourcc != VCHIQ_FOURCC_INVALID)
9621 + snprintf(remoteport + len2,
9622 + sizeof(remoteport) - len2,
9623 + " (client %x)", service->client_id);
9625 + strcpy(remoteport, "n/a");
9627 + len += snprintf(buf + len, sizeof(buf) - len,
9628 + " '%c%c%c%c' remote %s (msg use %d/%d, slot use %d/%d)",
9629 + VCHIQ_FOURCC_AS_4CHARS(fourcc),
9631 + service_quota->message_use_count,
9632 + service_quota->message_quota,
9633 + service_quota->slot_use_count,
9634 + service_quota->slot_quota);
9636 + vchiq_dump(dump_context, buf, len + 1);
9638 + tx_pending = service->bulk_tx.local_insert -
9639 + service->bulk_tx.remote_insert;
9641 + rx_pending = service->bulk_rx.local_insert -
9642 + service->bulk_rx.remote_insert;
9644 + len = snprintf(buf, sizeof(buf),
9645 + " Bulk: tx_pending=%d (size %d),"
9646 + " rx_pending=%d (size %d)",
9648 + tx_pending ? service->bulk_tx.bulks[
9649 + BULK_INDEX(service->bulk_tx.remove)].size : 0,
9651 + rx_pending ? service->bulk_rx.bulks[
9652 + BULK_INDEX(service->bulk_rx.remove)].size : 0);
9654 + if (VCHIQ_ENABLE_STATS) {
9655 + vchiq_dump(dump_context, buf, len + 1);
9657 + len = snprintf(buf, sizeof(buf),
9658 + " Ctrl: tx_count=%d, tx_bytes=%llu, "
9659 + "rx_count=%d, rx_bytes=%llu",
9660 + service->stats.ctrl_tx_count,
9661 + service->stats.ctrl_tx_bytes,
9662 + service->stats.ctrl_rx_count,
9663 + service->stats.ctrl_rx_bytes);
9664 + vchiq_dump(dump_context, buf, len + 1);
9666 + len = snprintf(buf, sizeof(buf),
9667 + " Bulk: tx_count=%d, tx_bytes=%llu, "
9668 + "rx_count=%d, rx_bytes=%llu",
9669 + service->stats.bulk_tx_count,
9670 + service->stats.bulk_tx_bytes,
9671 + service->stats.bulk_rx_count,
9672 + service->stats.bulk_rx_bytes);
9673 + vchiq_dump(dump_context, buf, len + 1);
9675 + len = snprintf(buf, sizeof(buf),
9676 + " %d quota stalls, %d slot stalls, "
9677 + "%d bulk stalls, %d aborted, %d errors",
9678 + service->stats.quota_stalls,
9679 + service->stats.slot_stalls,
9680 + service->stats.bulk_stalls,
9681 + service->stats.bulk_aborted_count,
9682 + service->stats.error_count);
9686 + vchiq_dump(dump_context, buf, len + 1);
9688 + if (service->srvstate != VCHIQ_SRVSTATE_FREE)
9689 + vchiq_dump_platform_service_state(dump_context, service);
9694 +vchiq_loud_error_header(void)
9696 + vchiq_log_error(vchiq_core_log_level,
9697 + "============================================================"
9698 + "================");
9699 + vchiq_log_error(vchiq_core_log_level,
9700 + "============================================================"
9701 + "================");
9702 + vchiq_log_error(vchiq_core_log_level, "=====");
9706 +vchiq_loud_error_footer(void)
9708 + vchiq_log_error(vchiq_core_log_level, "=====");
9709 + vchiq_log_error(vchiq_core_log_level,
9710 + "============================================================"
9711 + "================");
9712 + vchiq_log_error(vchiq_core_log_level,
9713 + "============================================================"
9714 + "================");
9718 +VCHIQ_STATUS_T vchiq_send_remote_use(VCHIQ_STATE_T *state)
9720 + VCHIQ_STATUS_T status = VCHIQ_RETRY;
9721 + if (state->conn_state != VCHIQ_CONNSTATE_DISCONNECTED)
9722 + status = queue_message(state, NULL,
9723 + VCHIQ_MAKE_MSG(VCHIQ_MSG_REMOTE_USE, 0, 0),
9728 +VCHIQ_STATUS_T vchiq_send_remote_release(VCHIQ_STATE_T *state)
9730 + VCHIQ_STATUS_T status = VCHIQ_RETRY;
9731 + if (state->conn_state != VCHIQ_CONNSTATE_DISCONNECTED)
9732 + status = queue_message(state, NULL,
9733 + VCHIQ_MAKE_MSG(VCHIQ_MSG_REMOTE_RELEASE, 0, 0),
9738 +VCHIQ_STATUS_T vchiq_send_remote_use_active(VCHIQ_STATE_T *state)
9740 + VCHIQ_STATUS_T status = VCHIQ_RETRY;
9741 + if (state->conn_state != VCHIQ_CONNSTATE_DISCONNECTED)
9742 + status = queue_message(state, NULL,
9743 + VCHIQ_MAKE_MSG(VCHIQ_MSG_REMOTE_USE_ACTIVE, 0, 0),
9748 +void vchiq_log_dump_mem(const char *label, uint32_t addr, const void *voidMem,
9751 + const uint8_t *mem = (const uint8_t *)voidMem;
9753 + char lineBuf[100];
9756 + while (numBytes > 0) {
9759 + for (offset = 0; offset < 16; offset++) {
9760 + if (offset < numBytes)
9761 + s += snprintf(s, 4, "%02x ", mem[offset]);
9763 + s += snprintf(s, 4, " ");
9766 + for (offset = 0; offset < 16; offset++) {
9767 + if (offset < numBytes) {
9768 + uint8_t ch = mem[offset];
9770 + if ((ch < ' ') || (ch > '~'))
9777 + if ((label != NULL) && (*label != '\0'))
9778 + vchiq_log_trace(VCHIQ_LOG_TRACE,
9779 + "%s: %08x: %s", label, addr, lineBuf);
9781 + vchiq_log_trace(VCHIQ_LOG_TRACE,
9782 + "%08x: %s", addr, lineBuf);
9786 + if (numBytes > 16)
9793 +++ b/drivers/misc/vc04_services/interface/vchiq_arm/vchiq_core.h
9796 + * Copyright (c) 2010-2012 Broadcom. All rights reserved.
9798 + * Redistribution and use in source and binary forms, with or without
9799 + * modification, are permitted provided that the following conditions
9801 + * 1. Redistributions of source code must retain the above copyright
9802 + * notice, this list of conditions, and the following disclaimer,
9803 + * without modification.
9804 + * 2. Redistributions in binary form must reproduce the above copyright
9805 + * notice, this list of conditions and the following disclaimer in the
9806 + * documentation and/or other materials provided with the distribution.
9807 + * 3. The names of the above-listed copyright holders may not be used
9808 + * to endorse or promote products derived from this software without
9809 + * specific prior written permission.
9811 + * ALTERNATIVELY, this software may be distributed under the terms of the
9812 + * GNU General Public License ("GPL") version 2, as published by the Free
9813 + * Software Foundation.
9815 + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS
9816 + * IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
9817 + * THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
9818 + * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR
9819 + * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
9820 + * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
9821 + * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
9822 + * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
9823 + * LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
9824 + * NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
9825 + * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
9828 +#ifndef VCHIQ_CORE_H
9829 +#define VCHIQ_CORE_H
9831 +#include <linux/mutex.h>
9832 +#include <linux/semaphore.h>
9833 +#include <linux/kthread.h>
9835 +#include "vchiq_cfg.h"
9839 +/* Run time control of log level, based on KERN_XXX level. */
9840 +#define VCHIQ_LOG_DEFAULT 4
9841 +#define VCHIQ_LOG_ERROR 3
9842 +#define VCHIQ_LOG_WARNING 4
9843 +#define VCHIQ_LOG_INFO 6
9844 +#define VCHIQ_LOG_TRACE 7
9846 +#define VCHIQ_LOG_PREFIX KERN_INFO "vchiq: "
9848 +#ifndef vchiq_log_error
9849 +#define vchiq_log_error(cat, fmt, ...) \
9850 + do { if (cat >= VCHIQ_LOG_ERROR) \
9851 + printk(VCHIQ_LOG_PREFIX fmt "\n", ##__VA_ARGS__); } while (0)
9853 +#ifndef vchiq_log_warning
9854 +#define vchiq_log_warning(cat, fmt, ...) \
9855 + do { if (cat >= VCHIQ_LOG_WARNING) \
9856 + printk(VCHIQ_LOG_PREFIX fmt "\n", ##__VA_ARGS__); } while (0)
9858 +#ifndef vchiq_log_info
9859 +#define vchiq_log_info(cat, fmt, ...) \
9860 + do { if (cat >= VCHIQ_LOG_INFO) \
9861 + printk(VCHIQ_LOG_PREFIX fmt "\n", ##__VA_ARGS__); } while (0)
9863 +#ifndef vchiq_log_trace
9864 +#define vchiq_log_trace(cat, fmt, ...) \
9865 + do { if (cat >= VCHIQ_LOG_TRACE) \
9866 + printk(VCHIQ_LOG_PREFIX fmt "\n", ##__VA_ARGS__); } while (0)
9869 +#define vchiq_loud_error(...) \
9870 + vchiq_log_error(vchiq_core_log_level, "===== " __VA_ARGS__)
9872 +#ifndef vchiq_static_assert
9873 +#define vchiq_static_assert(cond) __attribute__((unused)) \
9874 + extern int vchiq_static_assert[(cond) ? 1 : -1]
9877 +#define IS_POW2(x) (x && ((x & (x - 1)) == 0))
9879 +/* Ensure that the slot size and maximum number of slots are powers of 2 */
9880 +vchiq_static_assert(IS_POW2(VCHIQ_SLOT_SIZE));
9881 +vchiq_static_assert(IS_POW2(VCHIQ_MAX_SLOTS));
9882 +vchiq_static_assert(IS_POW2(VCHIQ_MAX_SLOTS_PER_SIDE));
9884 +#define VCHIQ_SLOT_MASK (VCHIQ_SLOT_SIZE - 1)
9885 +#define VCHIQ_SLOT_QUEUE_MASK (VCHIQ_MAX_SLOTS_PER_SIDE - 1)
9886 +#define VCHIQ_SLOT_ZERO_SLOTS ((sizeof(VCHIQ_SLOT_ZERO_T) + \
9887 + VCHIQ_SLOT_SIZE - 1) / VCHIQ_SLOT_SIZE)
9889 +#define VCHIQ_MSG_PADDING 0 /* - */
9890 +#define VCHIQ_MSG_CONNECT 1 /* - */
9891 +#define VCHIQ_MSG_OPEN 2 /* + (srcport, -), fourcc, client_id */
9892 +#define VCHIQ_MSG_OPENACK 3 /* + (srcport, dstport) */
9893 +#define VCHIQ_MSG_CLOSE 4 /* + (srcport, dstport) */
9894 +#define VCHIQ_MSG_DATA 5 /* + (srcport, dstport) */
9895 +#define VCHIQ_MSG_BULK_RX 6 /* + (srcport, dstport), data, size */
9896 +#define VCHIQ_MSG_BULK_TX 7 /* + (srcport, dstport), data, size */
9897 +#define VCHIQ_MSG_BULK_RX_DONE 8 /* + (srcport, dstport), actual */
9898 +#define VCHIQ_MSG_BULK_TX_DONE 9 /* + (srcport, dstport), actual */
9899 +#define VCHIQ_MSG_PAUSE 10 /* - */
9900 +#define VCHIQ_MSG_RESUME 11 /* - */
9901 +#define VCHIQ_MSG_REMOTE_USE 12 /* - */
9902 +#define VCHIQ_MSG_REMOTE_RELEASE 13 /* - */
9903 +#define VCHIQ_MSG_REMOTE_USE_ACTIVE 14 /* - */
9905 +#define VCHIQ_PORT_MAX (VCHIQ_MAX_SERVICES - 1)
9906 +#define VCHIQ_PORT_FREE 0x1000
9907 +#define VCHIQ_PORT_IS_VALID(port) (port < VCHIQ_PORT_FREE)
9908 +#define VCHIQ_MAKE_MSG(type, srcport, dstport) \
9909 + ((type<<24) | (srcport<<12) | (dstport<<0))
9910 +#define VCHIQ_MSG_TYPE(msgid) ((unsigned int)msgid >> 24)
9911 +#define VCHIQ_MSG_SRCPORT(msgid) \
9912 + (unsigned short)(((unsigned int)msgid >> 12) & 0xfff)
9913 +#define VCHIQ_MSG_DSTPORT(msgid) \
9914 + ((unsigned short)msgid & 0xfff)
9916 +#define VCHIQ_FOURCC_AS_4CHARS(fourcc) \
9917 + ((fourcc) >> 24) & 0xff, \
9918 + ((fourcc) >> 16) & 0xff, \
9919 + ((fourcc) >> 8) & 0xff, \
9922 +/* Ensure the fields are wide enough */
9923 +vchiq_static_assert(VCHIQ_MSG_SRCPORT(VCHIQ_MAKE_MSG(0, 0, VCHIQ_PORT_MAX))
9925 +vchiq_static_assert(VCHIQ_MSG_TYPE(VCHIQ_MAKE_MSG(0, VCHIQ_PORT_MAX, 0)) == 0);
9926 +vchiq_static_assert((unsigned int)VCHIQ_PORT_MAX <
9927 + (unsigned int)VCHIQ_PORT_FREE);
9929 +#define VCHIQ_MSGID_PADDING VCHIQ_MAKE_MSG(VCHIQ_MSG_PADDING, 0, 0)
9930 +#define VCHIQ_MSGID_CLAIMED 0x40000000
9932 +#define VCHIQ_FOURCC_INVALID 0x00000000
9933 +#define VCHIQ_FOURCC_IS_LEGAL(fourcc) (fourcc != VCHIQ_FOURCC_INVALID)
9935 +#define VCHIQ_BULK_ACTUAL_ABORTED -1
9937 +typedef uint32_t BITSET_T;
9939 +vchiq_static_assert((sizeof(BITSET_T) * 8) == 32);
9941 +#define BITSET_SIZE(b) ((b + 31) >> 5)
9942 +#define BITSET_WORD(b) (b >> 5)
9943 +#define BITSET_BIT(b) (1 << (b & 31))
9944 +#define BITSET_ZERO(bs) memset(bs, 0, sizeof(bs))
9945 +#define BITSET_IS_SET(bs, b) (bs[BITSET_WORD(b)] & BITSET_BIT(b))
9946 +#define BITSET_SET(bs, b) (bs[BITSET_WORD(b)] |= BITSET_BIT(b))
9947 +#define BITSET_CLR(bs, b) (bs[BITSET_WORD(b)] &= ~BITSET_BIT(b))
9949 +#if VCHIQ_ENABLE_STATS
9950 +#define VCHIQ_STATS_INC(state, stat) (state->stats. stat++)
9951 +#define VCHIQ_SERVICE_STATS_INC(service, stat) (service->stats. stat++)
9952 +#define VCHIQ_SERVICE_STATS_ADD(service, stat, addend) \
9953 + (service->stats. stat += addend)
9955 +#define VCHIQ_STATS_INC(state, stat) ((void)0)
9956 +#define VCHIQ_SERVICE_STATS_INC(service, stat) ((void)0)
9957 +#define VCHIQ_SERVICE_STATS_ADD(service, stat, addend) ((void)0)
9962 +#if VCHIQ_ENABLE_DEBUG
9963 + DEBUG_SLOT_HANDLER_COUNT,
9964 + DEBUG_SLOT_HANDLER_LINE,
9966 + DEBUG_PARSE_HEADER,
9967 + DEBUG_PARSE_MSGID,
9968 + DEBUG_AWAIT_COMPLETION_LINE,
9969 + DEBUG_DEQUEUE_MESSAGE_LINE,
9970 + DEBUG_SERVICE_CALLBACK_LINE,
9971 + DEBUG_MSG_QUEUE_FULL_COUNT,
9972 + DEBUG_COMPLETION_QUEUE_FULL_COUNT,
9977 +#if VCHIQ_ENABLE_DEBUG
9979 +#define DEBUG_INITIALISE(local) int *debug_ptr = (local)->debug;
9980 +#define DEBUG_TRACE(d) \
9981 + do { debug_ptr[DEBUG_ ## d] = __LINE__; dsb(); } while (0)
9982 +#define DEBUG_VALUE(d, v) \
9983 + do { debug_ptr[DEBUG_ ## d] = (v); dsb(); } while (0)
9984 +#define DEBUG_COUNT(d) \
9985 + do { debug_ptr[DEBUG_ ## d]++; dsb(); } while (0)
9987 +#else /* VCHIQ_ENABLE_DEBUG */
9989 +#define DEBUG_INITIALISE(local)
9990 +#define DEBUG_TRACE(d)
9991 +#define DEBUG_VALUE(d, v)
9992 +#define DEBUG_COUNT(d)
9994 +#endif /* VCHIQ_ENABLE_DEBUG */
9997 + VCHIQ_CONNSTATE_DISCONNECTED,
9998 + VCHIQ_CONNSTATE_CONNECTING,
9999 + VCHIQ_CONNSTATE_CONNECTED,
10000 + VCHIQ_CONNSTATE_PAUSING,
10001 + VCHIQ_CONNSTATE_PAUSE_SENT,
10002 + VCHIQ_CONNSTATE_PAUSED,
10003 + VCHIQ_CONNSTATE_RESUMING,
10004 + VCHIQ_CONNSTATE_PAUSE_TIMEOUT,
10005 + VCHIQ_CONNSTATE_RESUME_TIMEOUT
10006 +} VCHIQ_CONNSTATE_T;
10009 + VCHIQ_SRVSTATE_FREE,
10010 + VCHIQ_SRVSTATE_HIDDEN,
10011 + VCHIQ_SRVSTATE_LISTENING,
10012 + VCHIQ_SRVSTATE_OPENING,
10013 + VCHIQ_SRVSTATE_OPEN,
10014 + VCHIQ_SRVSTATE_OPENSYNC,
10015 + VCHIQ_SRVSTATE_CLOSESENT,
10016 + VCHIQ_SRVSTATE_CLOSERECVD,
10017 + VCHIQ_SRVSTATE_CLOSEWAIT,
10018 + VCHIQ_SRVSTATE_CLOSED
10022 + VCHIQ_POLL_TERMINATE,
10023 + VCHIQ_POLL_REMOVE,
10024 + VCHIQ_POLL_TXNOTIFY,
10025 + VCHIQ_POLL_RXNOTIFY,
10030 + VCHIQ_BULK_TRANSMIT,
10031 + VCHIQ_BULK_RECEIVE
10032 +} VCHIQ_BULK_DIR_T;
10034 +typedef void (*VCHIQ_USERDATA_TERM_T)(void *userdata);
10036 +typedef struct vchiq_bulk_struct {
10040 + VCHI_MEM_HANDLE_T handle;
10043 + void *remote_data;
10048 +typedef struct vchiq_bulk_queue_struct {
10049 + int local_insert; /* Where to insert the next local bulk */
10050 + int remote_insert; /* Where to insert the next remote bulk (master) */
10051 + int process; /* Bulk to transfer next */
10052 + int remote_notify; /* Bulk to notify the remote client of next (mstr) */
10053 + int remove; /* Bulk to notify the local client of, and remove,
10055 + VCHIQ_BULK_T bulks[VCHIQ_NUM_SERVICE_BULKS];
10056 +} VCHIQ_BULK_QUEUE_T;
10058 +typedef struct remote_event_struct {
10061 + struct semaphore *event;
10064 +typedef struct opaque_platform_state_t *VCHIQ_PLATFORM_STATE_T;
10066 +typedef struct vchiq_state_struct VCHIQ_STATE_T;
10068 +typedef struct vchiq_slot_struct {
10069 + char data[VCHIQ_SLOT_SIZE];
10072 +typedef struct vchiq_slot_info_struct {
10073 + /* Use two counters rather than one to avoid the need for a mutex. */
10075 + short release_count;
10076 +} VCHIQ_SLOT_INFO_T;
10078 +typedef struct vchiq_service_struct {
10079 + VCHIQ_SERVICE_BASE_T base;
10080 + VCHIQ_SERVICE_HANDLE_T handle;
10081 + unsigned int ref_count;
10083 + VCHIQ_USERDATA_TERM_T userdata_term;
10084 + unsigned int localport;
10085 + unsigned int remoteport;
10086 + int public_fourcc;
10092 + atomic_t poll_flags;
10094 + short version_min;
10095 + short peer_version;
10097 + VCHIQ_STATE_T *state;
10098 + VCHIQ_INSTANCE_T instance;
10100 + int service_use_count;
10102 + VCHIQ_BULK_QUEUE_T bulk_tx;
10103 + VCHIQ_BULK_QUEUE_T bulk_rx;
10105 + struct semaphore remove_event;
10106 + struct semaphore bulk_remove_event;
10107 + struct mutex bulk_mutex;
10109 + struct service_stats_struct {
10110 + int quota_stalls;
10114 + int ctrl_tx_count;
10115 + int ctrl_rx_count;
10116 + int bulk_tx_count;
10117 + int bulk_rx_count;
10118 + int bulk_aborted_count;
10119 + uint64_t ctrl_tx_bytes;
10120 + uint64_t ctrl_rx_bytes;
10121 + uint64_t bulk_tx_bytes;
10122 + uint64_t bulk_rx_bytes;
10124 +} VCHIQ_SERVICE_T;
10126 +/* The quota information is outside VCHIQ_SERVICE_T so that it can be
10127 + statically allocated, since for accounting reasons a service's slot
10128 + usage is carried over between users of the same port number.
10130 +typedef struct vchiq_service_quota_struct {
10131 + unsigned short slot_quota;
10132 + unsigned short slot_use_count;
10133 + unsigned short message_quota;
10134 + unsigned short message_use_count;
10135 + struct semaphore quota_event;
10136 + int previous_tx_index;
10137 +} VCHIQ_SERVICE_QUOTA_T;
10139 +typedef struct vchiq_shared_state_struct {
10141 + /* A non-zero value here indicates that the content is valid. */
10144 + /* The first and last (inclusive) slots allocated to the owner. */
10148 + /* The slot allocated to synchronous messages from the owner. */
10151 + /* Signalling this event indicates that owner's slot handler thread
10152 + ** should run. */
10153 + REMOTE_EVENT_T trigger;
10155 + /* Indicates the byte position within the stream where the next message
10156 + ** will be written. The least significant bits are an index into the
10157 + ** slot. The next bits are the index of the slot in slot_queue. */
10160 + /* This event should be signalled when a slot is recycled. */
10161 + REMOTE_EVENT_T recycle;
10163 + /* The slot_queue index where the next recycled slot will be written. */
10164 + int slot_queue_recycle;
10166 + /* This event should be signalled when a synchronous message is sent. */
10167 + REMOTE_EVENT_T sync_trigger;
10169 + /* This event should be signalled when a synchronous message has been
10171 + REMOTE_EVENT_T sync_release;
10173 + /* A circular buffer of slot indexes. */
10174 + int slot_queue[VCHIQ_MAX_SLOTS_PER_SIDE];
10176 + /* Debugging state */
10177 + int debug[DEBUG_MAX];
10178 +} VCHIQ_SHARED_STATE_T;
10180 +typedef struct vchiq_slot_zero_struct {
10183 + short version_min;
10184 + int slot_zero_size;
10187 + int max_slots_per_side;
10188 + int platform_data[2];
10189 + VCHIQ_SHARED_STATE_T master;
10190 + VCHIQ_SHARED_STATE_T slave;
10191 + VCHIQ_SLOT_INFO_T slots[VCHIQ_MAX_SLOTS];
10192 +} VCHIQ_SLOT_ZERO_T;
10194 +struct vchiq_state_struct {
10197 + VCHIQ_CONNSTATE_T conn_state;
10199 + short version_common;
10201 + VCHIQ_SHARED_STATE_T *local;
10202 + VCHIQ_SHARED_STATE_T *remote;
10203 + VCHIQ_SLOT_T *slot_data;
10205 + unsigned short default_slot_quota;
10206 + unsigned short default_message_quota;
10208 + /* Event indicating connect message received */
10209 + struct semaphore connect;
10211 + /* Mutex protecting services */
10212 + struct mutex mutex;
10213 + VCHIQ_INSTANCE_T *instance;
10215 + /* Processes incoming messages */
10216 + struct task_struct *slot_handler_thread;
10218 + /* Processes recycled slots */
10219 + struct task_struct *recycle_thread;
10221 + /* Processes synchronous messages */
10222 + struct task_struct *sync_thread;
10224 + /* Local implementation of the trigger remote event */
10225 + struct semaphore trigger_event;
10227 + /* Local implementation of the recycle remote event */
10228 + struct semaphore recycle_event;
10230 + /* Local implementation of the sync trigger remote event */
10231 + struct semaphore sync_trigger_event;
10233 + /* Local implementation of the sync release remote event */
10234 + struct semaphore sync_release_event;
10238 + VCHIQ_SLOT_INFO_T *rx_info;
10240 + struct mutex slot_mutex;
10242 + struct mutex recycle_mutex;
10244 + struct mutex sync_mutex;
10246 + struct mutex bulk_transfer_mutex;
10248 + /* Indicates the byte position within the stream from where the next
10249 + ** message will be read. The least significant bits are an index into
10250 + ** the slot. The next bits are the index of the slot in
10251 + ** remote->slot_queue. */
10254 + /* A cached copy of local->tx_pos. Only write to local->tx_pos, and read
10255 + from remote->tx_pos. */
10256 + int local_tx_pos;
10258 + /* The slot_queue index of the slot to become available next. */
10259 + int slot_queue_available;
10261 + /* A flag to indicate if any poll has been requested */
10264 + /* The index of the previous slot used for data messages. */
10265 + int previous_data_index;
10267 + /* The number of slots occupied by data messages. */
10268 + unsigned short data_use_count;
10270 + /* The maximum number of slots to be occupied by data messages. */
10271 + unsigned short data_quota;
10273 + /* An array of bit sets indicating which services must be polled. */
10274 + atomic_t poll_services[BITSET_SIZE(VCHIQ_MAX_SERVICES)];
10276 + /* The number of the first unused service */
10277 + int unused_service;
10279 + /* Signalled when a free slot becomes available. */
10280 + struct semaphore slot_available_event;
10282 + struct semaphore slot_remove_event;
10284 + /* Signalled when a free data slot becomes available. */
10285 + struct semaphore data_quota_event;
10287 + /* Incremented when there are bulk transfers which cannot be processed
10288 + * whilst paused and must be processed on resume */
10289 + int deferred_bulks;
10291 + struct state_stats_struct {
10294 + int ctrl_tx_count;
10295 + int ctrl_rx_count;
10299 + VCHIQ_SERVICE_T * services[VCHIQ_MAX_SERVICES];
10300 + VCHIQ_SERVICE_QUOTA_T service_quotas[VCHIQ_MAX_SERVICES];
10301 + VCHIQ_SLOT_INFO_T slot_info[VCHIQ_MAX_SLOTS];
10303 + VCHIQ_PLATFORM_STATE_T platform_state;
10306 +struct bulk_waiter {
10307 + VCHIQ_BULK_T *bulk;
10308 + struct semaphore event;
10312 +extern spinlock_t bulk_waiter_spinlock;
10314 +extern int vchiq_core_log_level;
10315 +extern int vchiq_core_msg_log_level;
10316 +extern int vchiq_sync_log_level;
10318 +extern VCHIQ_STATE_T *vchiq_states[VCHIQ_MAX_STATES];
10320 +extern const char *
10321 +get_conn_state_name(VCHIQ_CONNSTATE_T conn_state);
10323 +extern VCHIQ_SLOT_ZERO_T *
10324 +vchiq_init_slots(void *mem_base, int mem_size);
10326 +extern VCHIQ_STATUS_T
10327 +vchiq_init_state(VCHIQ_STATE_T *state, VCHIQ_SLOT_ZERO_T *slot_zero,
10330 +extern VCHIQ_STATUS_T
10331 +vchiq_connect_internal(VCHIQ_STATE_T *state, VCHIQ_INSTANCE_T instance);
10333 +extern VCHIQ_SERVICE_T *
10334 +vchiq_add_service_internal(VCHIQ_STATE_T *state,
10335 + const VCHIQ_SERVICE_PARAMS_T *params, int srvstate,
10336 + VCHIQ_INSTANCE_T instance, VCHIQ_USERDATA_TERM_T userdata_term);
10338 +extern VCHIQ_STATUS_T
10339 +vchiq_open_service_internal(VCHIQ_SERVICE_T *service, int client_id);
10341 +extern VCHIQ_STATUS_T
10342 +vchiq_close_service_internal(VCHIQ_SERVICE_T *service, int close_recvd);
10345 +vchiq_terminate_service_internal(VCHIQ_SERVICE_T *service);
10348 +vchiq_free_service_internal(VCHIQ_SERVICE_T *service);
10350 +extern VCHIQ_STATUS_T
10351 +vchiq_shutdown_internal(VCHIQ_STATE_T *state, VCHIQ_INSTANCE_T instance);
10353 +extern VCHIQ_STATUS_T
10354 +vchiq_pause_internal(VCHIQ_STATE_T *state);
10356 +extern VCHIQ_STATUS_T
10357 +vchiq_resume_internal(VCHIQ_STATE_T *state);
10360 +remote_event_pollall(VCHIQ_STATE_T *state);
10362 +extern VCHIQ_STATUS_T
10363 +vchiq_bulk_transfer(VCHIQ_SERVICE_HANDLE_T handle,
10364 + VCHI_MEM_HANDLE_T memhandle, void *offset, int size, void *userdata,
10365 + VCHIQ_BULK_MODE_T mode, VCHIQ_BULK_DIR_T dir);
10368 +vchiq_dump_state(void *dump_context, VCHIQ_STATE_T *state);
10371 +vchiq_dump_service_state(void *dump_context, VCHIQ_SERVICE_T *service);
10374 +vchiq_loud_error_header(void);
10377 +vchiq_loud_error_footer(void);
10380 +request_poll(VCHIQ_STATE_T *state, VCHIQ_SERVICE_T *service, int poll_type);
10382 +static inline VCHIQ_SERVICE_T *
10383 +handle_to_service(VCHIQ_SERVICE_HANDLE_T handle)
10385 + VCHIQ_STATE_T *state = vchiq_states[(handle / VCHIQ_MAX_SERVICES) &
10386 + (VCHIQ_MAX_STATES - 1)];
10390 + return state->services[handle & (VCHIQ_MAX_SERVICES - 1)];
10393 +extern VCHIQ_SERVICE_T *
10394 +find_service_by_handle(VCHIQ_SERVICE_HANDLE_T handle);
10396 +extern VCHIQ_SERVICE_T *
10397 +find_service_by_port(VCHIQ_STATE_T *state, int localport);
10399 +extern VCHIQ_SERVICE_T *
10400 +find_service_for_instance(VCHIQ_INSTANCE_T instance,
10401 + VCHIQ_SERVICE_HANDLE_T handle);
10403 +extern VCHIQ_SERVICE_T *
10404 +find_closed_service_for_instance(VCHIQ_INSTANCE_T instance,
10405 + VCHIQ_SERVICE_HANDLE_T handle);
10407 +extern VCHIQ_SERVICE_T *
10408 +next_service_by_instance(VCHIQ_STATE_T *state, VCHIQ_INSTANCE_T instance,
10412 +lock_service(VCHIQ_SERVICE_T *service);
10415 +unlock_service(VCHIQ_SERVICE_T *service);
10417 +/* The following functions are called from vchiq_core, and external
10418 +** implementations must be provided. */
10420 +extern VCHIQ_STATUS_T
10421 +vchiq_prepare_bulk_data(VCHIQ_BULK_T *bulk,
10422 + VCHI_MEM_HANDLE_T memhandle, void *offset, int size, int dir);
10425 +vchiq_transfer_bulk(VCHIQ_BULK_T *bulk);
10428 +vchiq_complete_bulk(VCHIQ_BULK_T *bulk);
10430 +extern VCHIQ_STATUS_T
10431 +vchiq_copy_from_user(void *dst, const void *src, int size);
10434 +remote_event_signal(REMOTE_EVENT_T *event);
10437 +vchiq_platform_check_suspend(VCHIQ_STATE_T *state);
10440 +vchiq_platform_paused(VCHIQ_STATE_T *state);
10442 +extern VCHIQ_STATUS_T
10443 +vchiq_platform_resume(VCHIQ_STATE_T *state);
10446 +vchiq_platform_resumed(VCHIQ_STATE_T *state);
10449 +vchiq_dump(void *dump_context, const char *str, int len);
10452 +vchiq_dump_platform_state(void *dump_context);
10455 +vchiq_dump_platform_instances(void *dump_context);
10458 +vchiq_dump_platform_service_state(void *dump_context,
10459 + VCHIQ_SERVICE_T *service);
10461 +extern VCHIQ_STATUS_T
10462 +vchiq_use_service_internal(VCHIQ_SERVICE_T *service);
10464 +extern VCHIQ_STATUS_T
10465 +vchiq_release_service_internal(VCHIQ_SERVICE_T *service);
10468 +vchiq_on_remote_use(VCHIQ_STATE_T *state);
10471 +vchiq_on_remote_release(VCHIQ_STATE_T *state);
10473 +extern VCHIQ_STATUS_T
10474 +vchiq_platform_init_state(VCHIQ_STATE_T *state);
10476 +extern VCHIQ_STATUS_T
10477 +vchiq_check_service(VCHIQ_SERVICE_T *service);
10480 +vchiq_on_remote_use_active(VCHIQ_STATE_T *state);
10482 +extern VCHIQ_STATUS_T
10483 +vchiq_send_remote_use(VCHIQ_STATE_T *state);
10485 +extern VCHIQ_STATUS_T
10486 +vchiq_send_remote_release(VCHIQ_STATE_T *state);
10488 +extern VCHIQ_STATUS_T
10489 +vchiq_send_remote_use_active(VCHIQ_STATE_T *state);
10492 +vchiq_platform_conn_state_changed(VCHIQ_STATE_T *state,
10493 + VCHIQ_CONNSTATE_T oldstate, VCHIQ_CONNSTATE_T newstate);
10496 +vchiq_platform_handle_timeout(VCHIQ_STATE_T *state);
10499 +vchiq_set_conn_state(VCHIQ_STATE_T *state, VCHIQ_CONNSTATE_T newstate);
10503 +vchiq_log_dump_mem(const char *label, uint32_t addr, const void *voidMem,
10504 + size_t numBytes);
10508 +++ b/drivers/misc/vc04_services/interface/vchiq_arm/vchiq_debugfs.c
10511 + * Copyright (c) 2014 Raspberry Pi (Trading) Ltd. All rights reserved.
10512 + * Copyright (c) 2010-2012 Broadcom. All rights reserved.
10514 + * Redistribution and use in source and binary forms, with or without
10515 + * modification, are permitted provided that the following conditions
10517 + * 1. Redistributions of source code must retain the above copyright
10518 + * notice, this list of conditions, and the following disclaimer,
10519 + * without modification.
10520 + * 2. Redistributions in binary form must reproduce the above copyright
10521 + * notice, this list of conditions and the following disclaimer in the
10522 + * documentation and/or other materials provided with the distribution.
10523 + * 3. The names of the above-listed copyright holders may not be used
10524 + * to endorse or promote products derived from this software without
10525 + * specific prior written permission.
10527 + * ALTERNATIVELY, this software may be distributed under the terms of the
10528 + * GNU General Public License ("GPL") version 2, as published by the Free
10529 + * Software Foundation.
10531 + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS
10532 + * IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
10533 + * THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
10534 + * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR
10535 + * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
10536 + * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
10537 + * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
10538 + * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
10539 + * LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
10540 + * NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
10541 + * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
10545 +#include <linux/debugfs.h>
10546 +#include "vchiq_core.h"
10547 +#include "vchiq_arm.h"
10548 +#include "vchiq_debugfs.h"
10550 +#ifdef CONFIG_DEBUG_FS
10552 +/****************************************************************************
10554 +* log category entries
10556 +***************************************************************************/
10557 +#define DEBUGFS_WRITE_BUF_SIZE 256
10559 +#define VCHIQ_LOG_ERROR_STR "error"
10560 +#define VCHIQ_LOG_WARNING_STR "warning"
10561 +#define VCHIQ_LOG_INFO_STR "info"
10562 +#define VCHIQ_LOG_TRACE_STR "trace"
10565 +/* Top-level debug info */
10566 +struct vchiq_debugfs_info {
10567 + /* Global 'vchiq' debugfs entry used by all instances */
10568 + struct dentry *vchiq_cfg_dir;
10570 + /* one entry per client process */
10571 + struct dentry *clients;
10573 + /* log categories */
10574 + struct dentry *log_categories;
10577 +static struct vchiq_debugfs_info debugfs_info;
10579 +/* Log category debugfs entries */
10580 +struct vchiq_debugfs_log_entry {
10581 + const char *name;
10583 + struct dentry *dir;
10586 +static struct vchiq_debugfs_log_entry vchiq_debugfs_log_entries[] = {
10587 + { "core", &vchiq_core_log_level },
10588 + { "msg", &vchiq_core_msg_log_level },
10589 + { "sync", &vchiq_sync_log_level },
10590 + { "susp", &vchiq_susp_log_level },
10591 + { "arm", &vchiq_arm_log_level },
10593 +static int n_log_entries =
10594 +	ARRAY_SIZE(vchiq_debugfs_log_entries);
10597 +static struct dentry *vchiq_clients_top(void);
10598 +static struct dentry *vchiq_debugfs_top(void);
10600 +static int debugfs_log_show(struct seq_file *f, void *offset)
10602 + int *levp = f->private;
10603 + char *log_value = NULL;
10606 + case VCHIQ_LOG_ERROR:
10607 + log_value = VCHIQ_LOG_ERROR_STR;
10609 + case VCHIQ_LOG_WARNING:
10610 + log_value = VCHIQ_LOG_WARNING_STR;
10612 + case VCHIQ_LOG_INFO:
10613 + log_value = VCHIQ_LOG_INFO_STR;
10615 + case VCHIQ_LOG_TRACE:
10616 + log_value = VCHIQ_LOG_TRACE_STR;
10622 + seq_printf(f, "%s\n", log_value ? log_value : "(null)");
10627 +static int debugfs_log_open(struct inode *inode, struct file *file)
10629 + return single_open(file, debugfs_log_show, inode->i_private);
10632 +static ssize_t debugfs_log_write(struct file *file,
10633 + const char __user *buffer,
10634 + size_t count, loff_t *ppos)
10636 + struct seq_file *f = (struct seq_file *)file->private_data;
10637 + int *levp = f->private;
10638 + char kbuf[DEBUGFS_WRITE_BUF_SIZE + 1];
10640 + memset(kbuf, 0, DEBUGFS_WRITE_BUF_SIZE + 1);
10641 + if (count >= DEBUGFS_WRITE_BUF_SIZE)
10642 + count = DEBUGFS_WRITE_BUF_SIZE;
10644 + if (copy_from_user(kbuf, buffer, count) != 0)
10646 + kbuf[count - 1] = 0;
10648 + if (strncmp("error", kbuf, strlen("error")) == 0)
10649 + *levp = VCHIQ_LOG_ERROR;
10650 + else if (strncmp("warning", kbuf, strlen("warning")) == 0)
10651 + *levp = VCHIQ_LOG_WARNING;
10652 + else if (strncmp("info", kbuf, strlen("info")) == 0)
10653 + *levp = VCHIQ_LOG_INFO;
10654 + else if (strncmp("trace", kbuf, strlen("trace")) == 0)
10655 + *levp = VCHIQ_LOG_TRACE;
10657 + *levp = VCHIQ_LOG_DEFAULT;
10664 +static const struct file_operations debugfs_log_fops = {
10665 + .owner = THIS_MODULE,
10666 + .open = debugfs_log_open,
10667 + .write = debugfs_log_write,
10668 + .read = seq_read,
10669 + .llseek = seq_lseek,
10670 + .release = single_release,
10673 +/* create an entry under <debugfs>/vchiq/log for each log category */
10674 +static int vchiq_debugfs_create_log_entries(struct dentry *top)
10676 + struct dentry *dir;
10679 + dir = debugfs_create_dir("log", vchiq_debugfs_top());
10682 + debugfs_info.log_categories = dir;
10684 + for (i = 0; i < n_log_entries; i++) {
10685 + void *levp = (void *)vchiq_debugfs_log_entries[i].plevel;
10686 + dir = debugfs_create_file(vchiq_debugfs_log_entries[i].name,
10688 + debugfs_info.log_categories,
10690 + &debugfs_log_fops);
10696 + vchiq_debugfs_log_entries[i].dir = dir;
10701 +static int debugfs_usecount_show(struct seq_file *f, void *offset)
10703 + VCHIQ_INSTANCE_T instance = f->private;
10706 + use_count = vchiq_instance_get_use_count(instance);
10707 + seq_printf(f, "%d\n", use_count);
10712 +static int debugfs_usecount_open(struct inode *inode, struct file *file)
10714 + return single_open(file, debugfs_usecount_show, inode->i_private);
10717 +static const struct file_operations debugfs_usecount_fops = {
10718 + .owner = THIS_MODULE,
10719 + .open = debugfs_usecount_open,
10720 + .read = seq_read,
10721 + .llseek = seq_lseek,
10722 + .release = single_release,
10725 +static int debugfs_trace_show(struct seq_file *f, void *offset)
10727 + VCHIQ_INSTANCE_T instance = f->private;
10730 + trace = vchiq_instance_get_trace(instance);
10731 + seq_printf(f, "%s\n", trace ? "Y" : "N");
10736 +static int debugfs_trace_open(struct inode *inode, struct file *file)
10738 + return single_open(file, debugfs_trace_show, inode->i_private);
10741 +static ssize_t debugfs_trace_write(struct file *file,
10742 + const char __user *buffer,
10743 + size_t count, loff_t *ppos)
10745 + struct seq_file *f = (struct seq_file *)file->private_data;
10746 + VCHIQ_INSTANCE_T instance = f->private;
10749 + if (copy_from_user(&firstchar, buffer, 1) != 0)
10752 + switch (firstchar) {
10756 + vchiq_instance_set_trace(instance, 1);
10761 + vchiq_instance_set_trace(instance, 0);
10772 +static const struct file_operations debugfs_trace_fops = {
10773 + .owner = THIS_MODULE,
10774 + .open = debugfs_trace_open,
10775 + .write = debugfs_trace_write,
10776 + .read = seq_read,
10777 + .llseek = seq_lseek,
10778 + .release = single_release,
10781 +/* add an instance (process) to the debugfs entries */
10782 +int vchiq_debugfs_add_instance(VCHIQ_INSTANCE_T instance)
10785 + struct dentry *top, *use_count, *trace;
10786 + struct dentry *clients = vchiq_clients_top();
10788 + snprintf(pidstr, sizeof(pidstr), "%d",
10789 + vchiq_instance_get_pid(instance));
10791 + top = debugfs_create_dir(pidstr, clients);
10795 + use_count = debugfs_create_file("use_count",
10798 + &debugfs_usecount_fops);
10800 + goto fail_use_count;
10802 + trace = debugfs_create_file("trace",
10805 + &debugfs_trace_fops);
10809 + vchiq_instance_get_debugfs_node(instance)->dentry = top;
10814 + debugfs_remove(use_count);
10816 + debugfs_remove(top);
10821 +void vchiq_debugfs_remove_instance(VCHIQ_INSTANCE_T instance)
10823 + VCHIQ_DEBUGFS_NODE_T *node = vchiq_instance_get_debugfs_node(instance);
10824 + debugfs_remove_recursive(node->dentry);
10828 +int vchiq_debugfs_init(void)
10830 + BUG_ON(debugfs_info.vchiq_cfg_dir != NULL);
10832 + debugfs_info.vchiq_cfg_dir = debugfs_create_dir("vchiq", NULL);
10833 + if (debugfs_info.vchiq_cfg_dir == NULL)
10836 + debugfs_info.clients = debugfs_create_dir("clients",
10837 + vchiq_debugfs_top());
10838 + if (!debugfs_info.clients)
10841 + if (vchiq_debugfs_create_log_entries(vchiq_debugfs_top()) != 0)
10847 + vchiq_debugfs_deinit();
10848 + vchiq_log_error(vchiq_arm_log_level,
10849 + "%s: failed to create debugfs directory",
10855 +/* remove all the debugfs entries */
10856 +void vchiq_debugfs_deinit(void)
10858 + debugfs_remove_recursive(vchiq_debugfs_top());
10861 +static struct dentry *vchiq_clients_top(void)
10863 + return debugfs_info.clients;
10866 +static struct dentry *vchiq_debugfs_top(void)
10868 + BUG_ON(debugfs_info.vchiq_cfg_dir == NULL);
10869 + return debugfs_info.vchiq_cfg_dir;
10872 +#else /* CONFIG_DEBUG_FS */
10874 +int vchiq_debugfs_init(void)
10879 +void vchiq_debugfs_deinit(void)
10883 +int vchiq_debugfs_add_instance(VCHIQ_INSTANCE_T instance)
10888 +void vchiq_debugfs_remove_instance(VCHIQ_INSTANCE_T instance)
10892 +#endif /* CONFIG_DEBUG_FS */
10894 +++ b/drivers/misc/vc04_services/interface/vchiq_arm/vchiq_debugfs.h
10897 + * Copyright (c) 2014 Raspberry Pi (Trading) Ltd. All rights reserved.
10899 + * Redistribution and use in source and binary forms, with or without
10900 + * modification, are permitted provided that the following conditions
10902 + * 1. Redistributions of source code must retain the above copyright
10903 + * notice, this list of conditions, and the following disclaimer,
10904 + * without modification.
10905 + * 2. Redistributions in binary form must reproduce the above copyright
10906 + * notice, this list of conditions and the following disclaimer in the
10907 + * documentation and/or other materials provided with the distribution.
10908 + * 3. The names of the above-listed copyright holders may not be used
10909 + * to endorse or promote products derived from this software without
10910 + * specific prior written permission.
10912 + * ALTERNATIVELY, this software may be distributed under the terms of the
10913 + * GNU General Public License ("GPL") version 2, as published by the Free
10914 + * Software Foundation.
10916 + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS
10917 + * IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
10918 + * THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
10919 + * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR
10920 + * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
10921 + * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
10922 + * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
10923 + * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
10924 + * LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
10925 + * NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
10926 + * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
10929 +#ifndef VCHIQ_DEBUGFS_H
10930 +#define VCHIQ_DEBUGFS_H
10932 +#include "vchiq_core.h"
10934 +typedef struct vchiq_debugfs_node_struct
10936 + struct dentry *dentry;
10937 +} VCHIQ_DEBUGFS_NODE_T;
10939 +int vchiq_debugfs_init(void);
10941 +void vchiq_debugfs_deinit(void);
10943 +int vchiq_debugfs_add_instance(VCHIQ_INSTANCE_T instance);
10945 +void vchiq_debugfs_remove_instance(VCHIQ_INSTANCE_T instance);
10947 +#endif /* VCHIQ_DEBUGFS_H */
10949 +++ b/drivers/misc/vc04_services/interface/vchiq_arm/vchiq_genversion
10951 +#!/usr/bin/perl -w
10956 +# Generate a version from available information
10959 +my $prefix = shift @ARGV;
10960 +my $root = shift @ARGV;
10963 +if ( not defined $root ) {
10964 + die "usage: $0 prefix root-dir\n";
10967 +if ( ! -d $root ) {
10968 + die "root directory $root not found\n";
10971 +my $version = "unknown";
10974 +if ( -d "$root/.git" ) {
10975 + # attempt to work out git version. only do so
10976 + # on a linux build host, as cygwin builds are
10977 + # already slow enough
10979 + if ( -f "/usr/bin/git" || -f "/usr/local/bin/git" ) {
10980 + if (not open(F, "git --git-dir $root/.git rev-parse --verify HEAD|")) {
10981 + $version = "no git version";
10985 + $version =~ s/[ \r\n]*$//; # chomp may not be enough (cygwin).
10986 + $version =~ s/^[ \r\n]*//; # chomp may not be enough (cygwin).
10989 + if (open(G, "git --git-dir $root/.git status --porcelain|")) {
10991 + $tainted =~ s/[ \r\n]*$//; # chomp may not be enough (cygwin).
10992 + $tainted =~ s/^[ \r\n]*//; # chomp may not be enough (cygwin).
10993 + if (length $tainted) {
10994 + $version = join ' ', $version, "(tainted)";
10997 + $version = join ' ', $version, "(clean)";
11003 +my $hostname = `hostname`;
11004 +$hostname =~ s/[ \r\n]*$//; # chomp may not be enough (cygwin).
11005 +$hostname =~ s/^[ \r\n]*//; # chomp may not be enough (cygwin).
11008 +print STDERR "Version $version\n";
11010 +#include "${prefix}_build_info.h"
11011 +#include <linux/broadcom/vc_debug_sym.h>
11013 +VC_DEBUG_DECLARE_STRING_VAR( ${prefix}_build_hostname, "$hostname" );
11014 +VC_DEBUG_DECLARE_STRING_VAR( ${prefix}_build_version, "$version" );
11015 +VC_DEBUG_DECLARE_STRING_VAR( ${prefix}_build_time, __TIME__ );
11016 +VC_DEBUG_DECLARE_STRING_VAR( ${prefix}_build_date, __DATE__ );
11018 +const char *vchiq_get_build_hostname( void )
11020 + return vchiq_build_hostname;
11023 +const char *vchiq_get_build_version( void )
11025 + return vchiq_build_version;
11028 +const char *vchiq_get_build_date( void )
11030 + return vchiq_build_date;
11033 +const char *vchiq_get_build_time( void )
11035 + return vchiq_build_time;
11039 +++ b/drivers/misc/vc04_services/interface/vchiq_arm/vchiq_if.h
11042 + * Copyright (c) 2010-2012 Broadcom. All rights reserved.
11044 + * Redistribution and use in source and binary forms, with or without
11045 + * modification, are permitted provided that the following conditions
11047 + * 1. Redistributions of source code must retain the above copyright
11048 + * notice, this list of conditions, and the following disclaimer,
11049 + * without modification.
11050 + * 2. Redistributions in binary form must reproduce the above copyright
11051 + * notice, this list of conditions and the following disclaimer in the
11052 + * documentation and/or other materials provided with the distribution.
11053 + * 3. The names of the above-listed copyright holders may not be used
11054 + * to endorse or promote products derived from this software without
11055 + * specific prior written permission.
11057 + * ALTERNATIVELY, this software may be distributed under the terms of the
11058 + * GNU General Public License ("GPL") version 2, as published by the Free
11059 + * Software Foundation.
11061 + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS
11062 + * IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
11063 + * THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
11064 + * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR
11065 + * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
11066 + * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
11067 + * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
11068 + * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
11069 + * LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
11070 + * NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
11071 + * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
11074 +#ifndef VCHIQ_IF_H
11075 +#define VCHIQ_IF_H
11077 +#include "interface/vchi/vchi_mh.h"
11079 +#define VCHIQ_SERVICE_HANDLE_INVALID 0
11081 +#define VCHIQ_SLOT_SIZE 4096
11082 +#define VCHIQ_MAX_MSG_SIZE (VCHIQ_SLOT_SIZE - sizeof(VCHIQ_HEADER_T))
11083 +#define VCHIQ_CHANNEL_SIZE VCHIQ_MAX_MSG_SIZE /* For backwards compatibility */
11085 +#define VCHIQ_MAKE_FOURCC(x0, x1, x2, x3) \
11086 + (((x0) << 24) | ((x1) << 16) | ((x2) << 8) | (x3))
11087 +#define VCHIQ_GET_SERVICE_USERDATA(service) vchiq_get_service_userdata(service)
11088 +#define VCHIQ_GET_SERVICE_FOURCC(service) vchiq_get_service_fourcc(service)
11091 + VCHIQ_SERVICE_OPENED, /* service, -, - */
11092 + VCHIQ_SERVICE_CLOSED, /* service, -, - */
11093 + VCHIQ_MESSAGE_AVAILABLE, /* service, header, - */
11094 + VCHIQ_BULK_TRANSMIT_DONE, /* service, -, bulk_userdata */
11095 + VCHIQ_BULK_RECEIVE_DONE, /* service, -, bulk_userdata */
11096 + VCHIQ_BULK_TRANSMIT_ABORTED, /* service, -, bulk_userdata */
11097 + VCHIQ_BULK_RECEIVE_ABORTED /* service, -, bulk_userdata */
11101 + VCHIQ_ERROR = -1,
11102 + VCHIQ_SUCCESS = 0,
11107 + VCHIQ_BULK_MODE_CALLBACK,
11108 + VCHIQ_BULK_MODE_BLOCKING,
11109 + VCHIQ_BULK_MODE_NOCALLBACK,
11110 + VCHIQ_BULK_MODE_WAITING /* Reserved for internal use */
11111 +} VCHIQ_BULK_MODE_T;
11114 + VCHIQ_SERVICE_OPTION_AUTOCLOSE,
11115 + VCHIQ_SERVICE_OPTION_SLOT_QUOTA,
11116 + VCHIQ_SERVICE_OPTION_MESSAGE_QUOTA,
11117 + VCHIQ_SERVICE_OPTION_SYNCHRONOUS,
11118 + VCHIQ_SERVICE_OPTION_TRACE
11119 +} VCHIQ_SERVICE_OPTION_T;
11121 +typedef struct vchiq_header_struct {
11122 + /* The message identifier - opaque to applications. */
11125 + /* Size of message data. */
11126 + unsigned int size;
11128 + char data[]; /* message */
11132 + const void *data;
11133 + unsigned int size;
11134 +} VCHIQ_ELEMENT_T;
11136 +typedef unsigned int VCHIQ_SERVICE_HANDLE_T;
11138 +typedef VCHIQ_STATUS_T (*VCHIQ_CALLBACK_T)(VCHIQ_REASON_T, VCHIQ_HEADER_T *,
11139 + VCHIQ_SERVICE_HANDLE_T, void *);
11141 +typedef struct vchiq_service_base_struct {
11143 + VCHIQ_CALLBACK_T callback;
11145 +} VCHIQ_SERVICE_BASE_T;
11147 +typedef struct vchiq_service_params_struct {
11149 + VCHIQ_CALLBACK_T callback;
11151 + short version; /* Increment for non-trivial changes */
11152 + short version_min; /* Update for incompatible changes */
11153 +} VCHIQ_SERVICE_PARAMS_T;
11155 +typedef struct vchiq_config_struct {
11156 + unsigned int max_msg_size;
11157 + unsigned int bulk_threshold; /* The message size above which it
11158 + is better to use a bulk transfer
11159 + (<= max_msg_size) */
11160 + unsigned int max_outstanding_bulks;
11161 + unsigned int max_services;
11162 + short version; /* The version of VCHIQ */
11163 + short version_min; /* The minimum compatible version of VCHIQ */
11166 +typedef struct vchiq_instance_struct *VCHIQ_INSTANCE_T;
11167 +typedef void (*VCHIQ_REMOTE_USE_CALLBACK_T)(void *cb_arg);
11169 +extern VCHIQ_STATUS_T vchiq_initialise(VCHIQ_INSTANCE_T *pinstance);
11170 +extern VCHIQ_STATUS_T vchiq_shutdown(VCHIQ_INSTANCE_T instance);
11171 +extern VCHIQ_STATUS_T vchiq_connect(VCHIQ_INSTANCE_T instance);
11172 +extern VCHIQ_STATUS_T vchiq_add_service(VCHIQ_INSTANCE_T instance,
11173 + const VCHIQ_SERVICE_PARAMS_T *params,
11174 + VCHIQ_SERVICE_HANDLE_T *pservice);
11175 +extern VCHIQ_STATUS_T vchiq_open_service(VCHIQ_INSTANCE_T instance,
11176 + const VCHIQ_SERVICE_PARAMS_T *params,
11177 + VCHIQ_SERVICE_HANDLE_T *pservice);
11178 +extern VCHIQ_STATUS_T vchiq_close_service(VCHIQ_SERVICE_HANDLE_T service);
11179 +extern VCHIQ_STATUS_T vchiq_remove_service(VCHIQ_SERVICE_HANDLE_T service);
11180 +extern VCHIQ_STATUS_T vchiq_use_service(VCHIQ_SERVICE_HANDLE_T service);
11181 +extern VCHIQ_STATUS_T vchiq_use_service_no_resume(
11182 + VCHIQ_SERVICE_HANDLE_T service);
11183 +extern VCHIQ_STATUS_T vchiq_release_service(VCHIQ_SERVICE_HANDLE_T service);
11185 +extern VCHIQ_STATUS_T vchiq_queue_message(VCHIQ_SERVICE_HANDLE_T service,
11186 + const VCHIQ_ELEMENT_T *elements, unsigned int count);
11187 +extern void vchiq_release_message(VCHIQ_SERVICE_HANDLE_T service,
11188 + VCHIQ_HEADER_T *header);
11189 +extern VCHIQ_STATUS_T vchiq_queue_bulk_transmit(VCHIQ_SERVICE_HANDLE_T service,
11190 + const void *data, unsigned int size, void *userdata);
11191 +extern VCHIQ_STATUS_T vchiq_queue_bulk_receive(VCHIQ_SERVICE_HANDLE_T service,
11192 + void *data, unsigned int size, void *userdata);
11193 +extern VCHIQ_STATUS_T vchiq_queue_bulk_transmit_handle(
11194 + VCHIQ_SERVICE_HANDLE_T service, VCHI_MEM_HANDLE_T handle,
11195 + const void *offset, unsigned int size, void *userdata);
11196 +extern VCHIQ_STATUS_T vchiq_queue_bulk_receive_handle(
11197 + VCHIQ_SERVICE_HANDLE_T service, VCHI_MEM_HANDLE_T handle,
11198 + void *offset, unsigned int size, void *userdata);
11199 +extern VCHIQ_STATUS_T vchiq_bulk_transmit(VCHIQ_SERVICE_HANDLE_T service,
11200 + const void *data, unsigned int size, void *userdata,
11201 + VCHIQ_BULK_MODE_T mode);
11202 +extern VCHIQ_STATUS_T vchiq_bulk_receive(VCHIQ_SERVICE_HANDLE_T service,
11203 + void *data, unsigned int size, void *userdata,
11204 + VCHIQ_BULK_MODE_T mode);
11205 +extern VCHIQ_STATUS_T vchiq_bulk_transmit_handle(VCHIQ_SERVICE_HANDLE_T service,
11206 + VCHI_MEM_HANDLE_T handle, const void *offset, unsigned int size,
11207 + void *userdata, VCHIQ_BULK_MODE_T mode);
11208 +extern VCHIQ_STATUS_T vchiq_bulk_receive_handle(VCHIQ_SERVICE_HANDLE_T service,
11209 + VCHI_MEM_HANDLE_T handle, void *offset, unsigned int size,
11210 + void *userdata, VCHIQ_BULK_MODE_T mode);
11211 +extern int vchiq_get_client_id(VCHIQ_SERVICE_HANDLE_T service);
11212 +extern void *vchiq_get_service_userdata(VCHIQ_SERVICE_HANDLE_T service);
11213 +extern int vchiq_get_service_fourcc(VCHIQ_SERVICE_HANDLE_T service);
11214 +extern VCHIQ_STATUS_T vchiq_get_config(VCHIQ_INSTANCE_T instance,
11215 + int config_size, VCHIQ_CONFIG_T *pconfig);
11216 +extern VCHIQ_STATUS_T vchiq_set_service_option(VCHIQ_SERVICE_HANDLE_T service,
11217 + VCHIQ_SERVICE_OPTION_T option, int value);
11219 +extern VCHIQ_STATUS_T vchiq_remote_use(VCHIQ_INSTANCE_T instance,
11220 + VCHIQ_REMOTE_USE_CALLBACK_T callback, void *cb_arg);
11221 +extern VCHIQ_STATUS_T vchiq_remote_release(VCHIQ_INSTANCE_T instance);
11223 +extern VCHIQ_STATUS_T vchiq_dump_phys_mem(VCHIQ_SERVICE_HANDLE_T service,
11224 + void *ptr, size_t num_bytes);
11226 +extern VCHIQ_STATUS_T vchiq_get_peer_version(VCHIQ_SERVICE_HANDLE_T handle,
11227 + short *peer_version);
11229 +#endif /* VCHIQ_IF_H */
11231 +++ b/drivers/misc/vc04_services/interface/vchiq_arm/vchiq_ioctl.h
11234 + * Copyright (c) 2010-2012 Broadcom. All rights reserved.
11236 + * Redistribution and use in source and binary forms, with or without
11237 + * modification, are permitted provided that the following conditions
11239 + * 1. Redistributions of source code must retain the above copyright
11240 + * notice, this list of conditions, and the following disclaimer,
11241 + * without modification.
11242 + * 2. Redistributions in binary form must reproduce the above copyright
11243 + * notice, this list of conditions and the following disclaimer in the
11244 + * documentation and/or other materials provided with the distribution.
11245 + * 3. The names of the above-listed copyright holders may not be used
11246 + * to endorse or promote products derived from this software without
11247 + * specific prior written permission.
11249 + * ALTERNATIVELY, this software may be distributed under the terms of the
11250 + * GNU General Public License ("GPL") version 2, as published by the Free
11251 + * Software Foundation.
11253 + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS
11254 + * IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
11255 + * THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
11256 + * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR
11257 + * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
11258 + * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
11259 + * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
11260 + * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
11261 + * LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
11262 + * NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
11263 + * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
11266 +#ifndef VCHIQ_IOCTLS_H
11267 +#define VCHIQ_IOCTLS_H
11269 +#include <linux/ioctl.h>
11270 +#include "vchiq_if.h"
11272 +#define VCHIQ_IOC_MAGIC 0xc4
11273 +#define VCHIQ_INVALID_HANDLE (~0)
11276 + VCHIQ_SERVICE_PARAMS_T params;
11279 + unsigned int handle; /* OUT */
11280 +} VCHIQ_CREATE_SERVICE_T;
11283 + unsigned int handle;
11284 + unsigned int count;
11285 + const VCHIQ_ELEMENT_T *elements;
11286 +} VCHIQ_QUEUE_MESSAGE_T;
11289 + unsigned int handle;
11291 + unsigned int size;
11293 + VCHIQ_BULK_MODE_T mode;
11294 +} VCHIQ_QUEUE_BULK_TRANSFER_T;
11297 + VCHIQ_REASON_T reason;
11298 + VCHIQ_HEADER_T *header;
11299 + void *service_userdata;
11300 + void *bulk_userdata;
11301 +} VCHIQ_COMPLETION_DATA_T;
11304 + unsigned int count;
11305 + VCHIQ_COMPLETION_DATA_T *buf;
11306 + unsigned int msgbufsize;
11307 + unsigned int msgbufcount; /* IN/OUT */
11309 +} VCHIQ_AWAIT_COMPLETION_T;
11312 + unsigned int handle;
11314 + unsigned int bufsize;
11316 +} VCHIQ_DEQUEUE_MESSAGE_T;
11319 + unsigned int config_size;
11320 + VCHIQ_CONFIG_T *pconfig;
11321 +} VCHIQ_GET_CONFIG_T;
11324 + unsigned int handle;
11325 + VCHIQ_SERVICE_OPTION_T option;
11327 +} VCHIQ_SET_SERVICE_OPTION_T;
11331 + size_t num_bytes;
11332 +} VCHIQ_DUMP_MEM_T;
11334 +#define VCHIQ_IOC_CONNECT _IO(VCHIQ_IOC_MAGIC, 0)
11335 +#define VCHIQ_IOC_SHUTDOWN _IO(VCHIQ_IOC_MAGIC, 1)
11336 +#define VCHIQ_IOC_CREATE_SERVICE \
11337 + _IOWR(VCHIQ_IOC_MAGIC, 2, VCHIQ_CREATE_SERVICE_T)
11338 +#define VCHIQ_IOC_REMOVE_SERVICE _IO(VCHIQ_IOC_MAGIC, 3)
11339 +#define VCHIQ_IOC_QUEUE_MESSAGE \
11340 + _IOW(VCHIQ_IOC_MAGIC, 4, VCHIQ_QUEUE_MESSAGE_T)
11341 +#define VCHIQ_IOC_QUEUE_BULK_TRANSMIT \
11342 + _IOWR(VCHIQ_IOC_MAGIC, 5, VCHIQ_QUEUE_BULK_TRANSFER_T)
11343 +#define VCHIQ_IOC_QUEUE_BULK_RECEIVE \
11344 + _IOWR(VCHIQ_IOC_MAGIC, 6, VCHIQ_QUEUE_BULK_TRANSFER_T)
11345 +#define VCHIQ_IOC_AWAIT_COMPLETION \
11346 + _IOWR(VCHIQ_IOC_MAGIC, 7, VCHIQ_AWAIT_COMPLETION_T)
11347 +#define VCHIQ_IOC_DEQUEUE_MESSAGE \
11348 + _IOWR(VCHIQ_IOC_MAGIC, 8, VCHIQ_DEQUEUE_MESSAGE_T)
11349 +#define VCHIQ_IOC_GET_CLIENT_ID _IO(VCHIQ_IOC_MAGIC, 9)
11350 +#define VCHIQ_IOC_GET_CONFIG \
11351 + _IOWR(VCHIQ_IOC_MAGIC, 10, VCHIQ_GET_CONFIG_T)
11352 +#define VCHIQ_IOC_CLOSE_SERVICE _IO(VCHIQ_IOC_MAGIC, 11)
11353 +#define VCHIQ_IOC_USE_SERVICE _IO(VCHIQ_IOC_MAGIC, 12)
11354 +#define VCHIQ_IOC_RELEASE_SERVICE _IO(VCHIQ_IOC_MAGIC, 13)
11355 +#define VCHIQ_IOC_SET_SERVICE_OPTION \
11356 + _IOW(VCHIQ_IOC_MAGIC, 14, VCHIQ_SET_SERVICE_OPTION_T)
11357 +#define VCHIQ_IOC_DUMP_PHYS_MEM \
11358 + _IOW(VCHIQ_IOC_MAGIC, 15, VCHIQ_DUMP_MEM_T)
11359 +#define VCHIQ_IOC_LIB_VERSION _IO(VCHIQ_IOC_MAGIC, 16)
11360 +#define VCHIQ_IOC_CLOSE_DELIVERED _IO(VCHIQ_IOC_MAGIC, 17)
11361 +#define VCHIQ_IOC_MAX 17
11365 +++ b/drivers/misc/vc04_services/interface/vchiq_arm/vchiq_kern_lib.c
11368 + * Copyright (c) 2010-2012 Broadcom. All rights reserved.
11370 + * Redistribution and use in source and binary forms, with or without
11371 + * modification, are permitted provided that the following conditions
11373 + * 1. Redistributions of source code must retain the above copyright
11374 + * notice, this list of conditions, and the following disclaimer,
11375 + * without modification.
11376 + * 2. Redistributions in binary form must reproduce the above copyright
11377 + * notice, this list of conditions and the following disclaimer in the
11378 + * documentation and/or other materials provided with the distribution.
11379 + * 3. The names of the above-listed copyright holders may not be used
11380 + * to endorse or promote products derived from this software without
11381 + * specific prior written permission.
11383 + * ALTERNATIVELY, this software may be distributed under the terms of the
11384 + * GNU General Public License ("GPL") version 2, as published by the Free
11385 + * Software Foundation.
11387 + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS
11388 + * IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
11389 + * THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
11390 + * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR
11391 + * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
11392 + * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
11393 + * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
11394 + * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
11395 + * LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
11396 + * NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
11397 + * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
11400 +/* ---- Include Files ---------------------------------------------------- */
11402 +#include <linux/kernel.h>
11403 +#include <linux/module.h>
11404 +#include <linux/mutex.h>
11406 +#include "vchiq_core.h"
11407 +#include "vchiq_arm.h"
11408 +#include "vchiq_killable.h"
11410 +/* ---- Public Variables ------------------------------------------------- */
11412 +/* ---- Private Constants and Types -------------------------------------- */
11414 +struct bulk_waiter_node {
11415 + struct bulk_waiter bulk_waiter;
11417 + struct list_head list;
11420 +struct vchiq_instance_struct {
11421 + VCHIQ_STATE_T *state;
11425 + struct list_head bulk_waiter_list;
11426 + struct mutex bulk_waiter_list_mutex;
11429 +static VCHIQ_STATUS_T
11430 +vchiq_blocking_bulk_transfer(VCHIQ_SERVICE_HANDLE_T handle, void *data,
11431 + unsigned int size, VCHIQ_BULK_DIR_T dir);
11433 +/****************************************************************************
11435 +* vchiq_initialise
11437 +***************************************************************************/
11438 +#define VCHIQ_INIT_RETRIES 10
11439 +VCHIQ_STATUS_T vchiq_initialise(VCHIQ_INSTANCE_T *instanceOut)
11441 + VCHIQ_STATUS_T status = VCHIQ_ERROR;
11442 + VCHIQ_STATE_T *state;
11443 + VCHIQ_INSTANCE_T instance = NULL;
11446 + vchiq_log_trace(vchiq_core_log_level, "%s called", __func__);
11448 + /* VideoCore may not be ready due to boot-up timing. It may never be
11449 + ready if the kernel and firmware are mismatched, so don't block forever. */
10450 + for (i = 0; i < VCHIQ_INIT_RETRIES; i++) {
11451 + state = vchiq_get_state();
10456 + if (i == VCHIQ_INIT_RETRIES) {
11457 + vchiq_log_error(vchiq_core_log_level,
11458 + "%s: videocore not initialized\n", __func__);
10460 + } else if (i > 0) {
11461 + vchiq_log_warning(vchiq_core_log_level,
11462 + "%s: videocore initialized after %d retries\n", __func__, i);
11465 + instance = kzalloc(sizeof(*instance), GFP_KERNEL);
11467 + vchiq_log_error(vchiq_core_log_level,
11468 + "%s: error allocating vchiq instance\n", __func__);
11472 + instance->connected = 0;
11473 + instance->state = state;
11474 + mutex_init(&instance->bulk_waiter_list_mutex);
11475 + INIT_LIST_HEAD(&instance->bulk_waiter_list);
11477 + *instanceOut = instance;
11479 + status = VCHIQ_SUCCESS;
11482 + vchiq_log_trace(vchiq_core_log_level,
11483 + "%s(%p): returning %d", __func__, instance, status);
11487 +EXPORT_SYMBOL(vchiq_initialise);
11489 +/****************************************************************************
11493 +***************************************************************************/
11495 +VCHIQ_STATUS_T vchiq_shutdown(VCHIQ_INSTANCE_T instance)
11497 + VCHIQ_STATUS_T status;
11498 + VCHIQ_STATE_T *state = instance->state;
11500 + vchiq_log_trace(vchiq_core_log_level,
11501 + "%s(%p) called", __func__, instance);
11503 + if (mutex_lock_interruptible(&state->mutex) != 0)
11504 + return VCHIQ_RETRY;
11506 + /* Remove all services */
11507 + status = vchiq_shutdown_internal(state, instance);
11509 + mutex_unlock(&state->mutex);
11511 + vchiq_log_trace(vchiq_core_log_level,
11512 + "%s(%p): returning %d", __func__, instance, status);
11514 + if (status == VCHIQ_SUCCESS) {
11515 + struct list_head *pos, *next;
11516 + list_for_each_safe(pos, next,
11517 + &instance->bulk_waiter_list) {
11518 + struct bulk_waiter_node *waiter;
11519 + waiter = list_entry(pos,
11520 + struct bulk_waiter_node,
11523 + vchiq_log_info(vchiq_arm_log_level,
11524 + "bulk_waiter - cleaned up %x "
11526 + (unsigned int)waiter, waiter->pid);
11534 +EXPORT_SYMBOL(vchiq_shutdown);
11536 +/****************************************************************************
11538 +* vchiq_is_connected
11540 +***************************************************************************/
11542 +int vchiq_is_connected(VCHIQ_INSTANCE_T instance)
11544 + return instance->connected;
11547 +/****************************************************************************
11551 +***************************************************************************/
11553 +VCHIQ_STATUS_T vchiq_connect(VCHIQ_INSTANCE_T instance)
11555 + VCHIQ_STATUS_T status;
11556 + VCHIQ_STATE_T *state = instance->state;
11558 + vchiq_log_trace(vchiq_core_log_level,
11559 + "%s(%p) called", __func__, instance);
11561 + if (mutex_lock_interruptible(&state->mutex) != 0) {
11562 + vchiq_log_trace(vchiq_core_log_level,
11563 + "%s: call to mutex_lock failed", __func__);
11564 + status = VCHIQ_RETRY;
11567 + status = vchiq_connect_internal(state, instance);
11569 + if (status == VCHIQ_SUCCESS)
11570 + instance->connected = 1;
11572 + mutex_unlock(&state->mutex);
11575 + vchiq_log_trace(vchiq_core_log_level,
11576 + "%s(%p): returning %d", __func__, instance, status);
11580 +EXPORT_SYMBOL(vchiq_connect);
11582 +/****************************************************************************
11584 +* vchiq_add_service
11586 +***************************************************************************/
11588 +VCHIQ_STATUS_T vchiq_add_service(
11589 + VCHIQ_INSTANCE_T instance,
11590 + const VCHIQ_SERVICE_PARAMS_T *params,
11591 + VCHIQ_SERVICE_HANDLE_T *phandle)
11593 + VCHIQ_STATUS_T status;
11594 + VCHIQ_STATE_T *state = instance->state;
11595 + VCHIQ_SERVICE_T *service = NULL;
11598 + vchiq_log_trace(vchiq_core_log_level,
11599 + "%s(%p) called", __func__, instance);
11601 + *phandle = VCHIQ_SERVICE_HANDLE_INVALID;
11603 + srvstate = vchiq_is_connected(instance)
11604 + ? VCHIQ_SRVSTATE_LISTENING
11605 + : VCHIQ_SRVSTATE_HIDDEN;
11607 + service = vchiq_add_service_internal(
11615 + *phandle = service->handle;
11616 + status = VCHIQ_SUCCESS;
11618 + status = VCHIQ_ERROR;
11620 + vchiq_log_trace(vchiq_core_log_level,
11621 + "%s(%p): returning %d", __func__, instance, status);
11625 +EXPORT_SYMBOL(vchiq_add_service);
11627 +/****************************************************************************
11629 +* vchiq_open_service
11631 +***************************************************************************/
11633 +VCHIQ_STATUS_T vchiq_open_service(
11634 + VCHIQ_INSTANCE_T instance,
11635 + const VCHIQ_SERVICE_PARAMS_T *params,
11636 + VCHIQ_SERVICE_HANDLE_T *phandle)
11638 + VCHIQ_STATUS_T status = VCHIQ_ERROR;
11639 + VCHIQ_STATE_T *state = instance->state;
11640 + VCHIQ_SERVICE_T *service = NULL;
11642 + vchiq_log_trace(vchiq_core_log_level,
11643 + "%s(%p) called", __func__, instance);
11645 + *phandle = VCHIQ_SERVICE_HANDLE_INVALID;
11647 + if (!vchiq_is_connected(instance))
11650 + service = vchiq_add_service_internal(state,
11652 + VCHIQ_SRVSTATE_OPENING,
11657 + *phandle = service->handle;
11658 + status = vchiq_open_service_internal(service, current->pid);
11659 + if (status != VCHIQ_SUCCESS) {
11660 + vchiq_remove_service(service->handle);
11661 + *phandle = VCHIQ_SERVICE_HANDLE_INVALID;
11666 + vchiq_log_trace(vchiq_core_log_level,
11667 + "%s(%p): returning %d", __func__, instance, status);
11671 +EXPORT_SYMBOL(vchiq_open_service);
11674 +vchiq_queue_bulk_transmit(VCHIQ_SERVICE_HANDLE_T handle,
11675 + const void *data, unsigned int size, void *userdata)
11677 + return vchiq_bulk_transfer(handle,
11678 + VCHI_MEM_HANDLE_INVALID, (void *)data, size, userdata,
11679 + VCHIQ_BULK_MODE_CALLBACK, VCHIQ_BULK_TRANSMIT);
11681 +EXPORT_SYMBOL(vchiq_queue_bulk_transmit);
11684 +vchiq_queue_bulk_receive(VCHIQ_SERVICE_HANDLE_T handle, void *data,
11685 + unsigned int size, void *userdata)
11687 + return vchiq_bulk_transfer(handle,
11688 + VCHI_MEM_HANDLE_INVALID, data, size, userdata,
11689 + VCHIQ_BULK_MODE_CALLBACK, VCHIQ_BULK_RECEIVE);
11691 +EXPORT_SYMBOL(vchiq_queue_bulk_receive);
11694 +vchiq_bulk_transmit(VCHIQ_SERVICE_HANDLE_T handle, const void *data,
11695 + unsigned int size, void *userdata, VCHIQ_BULK_MODE_T mode)
11697 + VCHIQ_STATUS_T status;
11700 + case VCHIQ_BULK_MODE_NOCALLBACK:
11701 + case VCHIQ_BULK_MODE_CALLBACK:
11702 + status = vchiq_bulk_transfer(handle,
11703 + VCHI_MEM_HANDLE_INVALID, (void *)data, size, userdata,
11704 + mode, VCHIQ_BULK_TRANSMIT);
11706 + case VCHIQ_BULK_MODE_BLOCKING:
11707 + status = vchiq_blocking_bulk_transfer(handle,
11708 + (void *)data, size, VCHIQ_BULK_TRANSMIT);
11711 + return VCHIQ_ERROR;
11716 +EXPORT_SYMBOL(vchiq_bulk_transmit);
11719 +vchiq_bulk_receive(VCHIQ_SERVICE_HANDLE_T handle, void *data,
11720 + unsigned int size, void *userdata, VCHIQ_BULK_MODE_T mode)
11722 + VCHIQ_STATUS_T status;
11725 + case VCHIQ_BULK_MODE_NOCALLBACK:
11726 + case VCHIQ_BULK_MODE_CALLBACK:
11727 + status = vchiq_bulk_transfer(handle,
11728 + VCHI_MEM_HANDLE_INVALID, data, size, userdata,
11729 + mode, VCHIQ_BULK_RECEIVE);
11731 + case VCHIQ_BULK_MODE_BLOCKING:
11732 + status = vchiq_blocking_bulk_transfer(handle,
11733 + (void *)data, size, VCHIQ_BULK_RECEIVE);
11736 + return VCHIQ_ERROR;
11741 +EXPORT_SYMBOL(vchiq_bulk_receive);
11743 +static VCHIQ_STATUS_T
11744 +vchiq_blocking_bulk_transfer(VCHIQ_SERVICE_HANDLE_T handle, void *data,
11745 + unsigned int size, VCHIQ_BULK_DIR_T dir)
11747 + VCHIQ_INSTANCE_T instance;
11748 + VCHIQ_SERVICE_T *service;
11749 + VCHIQ_STATUS_T status;
11750 + struct bulk_waiter_node *waiter = NULL;
11751 + struct list_head *pos;
11753 + service = find_service_by_handle(handle);
11755 + return VCHIQ_ERROR;
11757 + instance = service->instance;
11759 + unlock_service(service);
11761 + mutex_lock(&instance->bulk_waiter_list_mutex);
11762 + list_for_each(pos, &instance->bulk_waiter_list) {
11763 + if (list_entry(pos, struct bulk_waiter_node,
11764 + list)->pid == current->pid) {
11765 + waiter = list_entry(pos,
11766 + struct bulk_waiter_node,
11772 + mutex_unlock(&instance->bulk_waiter_list_mutex);
11775 + VCHIQ_BULK_T *bulk = waiter->bulk_waiter.bulk;
11777 + /* This thread has an outstanding bulk transfer. */
11778 + if ((bulk->data != data) ||
11779 + (bulk->size != size)) {
11780 + /* This is not a retry of the previous one.
11781 +			** Cancel the signal when the transfer completes. */
11783 + spin_lock(&bulk_waiter_spinlock);
11784 + bulk->userdata = NULL;
11785 + spin_unlock(&bulk_waiter_spinlock);
11791 + waiter = kzalloc(sizeof(struct bulk_waiter_node), GFP_KERNEL);
11793 + vchiq_log_error(vchiq_core_log_level,
11794 + "%s - out of memory", __func__);
11795 + return VCHIQ_ERROR;
11799 + status = vchiq_bulk_transfer(handle, VCHI_MEM_HANDLE_INVALID,
11800 + data, size, &waiter->bulk_waiter, VCHIQ_BULK_MODE_BLOCKING,
11802 + if ((status != VCHIQ_RETRY) || fatal_signal_pending(current) ||
11803 + !waiter->bulk_waiter.bulk) {
11804 + VCHIQ_BULK_T *bulk = waiter->bulk_waiter.bulk;
11806 +			/* Cancel the signal when the transfer completes. */
11808 + spin_lock(&bulk_waiter_spinlock);
11809 + bulk->userdata = NULL;
11810 + spin_unlock(&bulk_waiter_spinlock);
11814 + waiter->pid = current->pid;
11815 + mutex_lock(&instance->bulk_waiter_list_mutex);
11816 + list_add(&waiter->list, &instance->bulk_waiter_list);
11817 + mutex_unlock(&instance->bulk_waiter_list_mutex);
11818 + vchiq_log_info(vchiq_arm_log_level,
11819 + "saved bulk_waiter %x for pid %d",
11820 + (unsigned int)waiter, current->pid);
11826 +++ b/drivers/misc/vc04_services/interface/vchiq_arm/vchiq_killable.h
11829 + * Copyright (c) 2010-2012 Broadcom. All rights reserved.
11831 + * Redistribution and use in source and binary forms, with or without
11832 + * modification, are permitted provided that the following conditions
11834 + * 1. Redistributions of source code must retain the above copyright
11835 + * notice, this list of conditions, and the following disclaimer,
11836 + * without modification.
11837 + * 2. Redistributions in binary form must reproduce the above copyright
11838 + * notice, this list of conditions and the following disclaimer in the
11839 + * documentation and/or other materials provided with the distribution.
11840 + * 3. The names of the above-listed copyright holders may not be used
11841 + * to endorse or promote products derived from this software without
11842 + * specific prior written permission.
11844 + * ALTERNATIVELY, this software may be distributed under the terms of the
11845 + * GNU General Public License ("GPL") version 2, as published by the Free
11846 + * Software Foundation.
11848 + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS
11849 + * IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
11850 + * THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
11851 + * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR
11852 + * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
11853 + * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
11854 + * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
11855 + * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
11856 + * LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
11857 + * NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
11858 + * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
11861 +#ifndef VCHIQ_KILLABLE_H
11862 +#define VCHIQ_KILLABLE_H
11864 +#include <linux/mutex.h>
11865 +#include <linux/semaphore.h>
11867 +#define SHUTDOWN_SIGS (sigmask(SIGKILL) | sigmask(SIGINT) | sigmask(SIGQUIT) | sigmask(SIGTRAP) | sigmask(SIGSTOP) | sigmask(SIGCONT))
11869 +static inline int __must_check down_interruptible_killable(struct semaphore *sem)
11871 + /* Allow interception of killable signals only. We don't want to be interrupted by harmless signals like SIGALRM */
11873 + sigset_t blocked, oldset;
11874 + siginitsetinv(&blocked, SHUTDOWN_SIGS);
11875 + sigprocmask(SIG_SETMASK, &blocked, &oldset);
11876 + ret = down_interruptible(sem);
11877 + sigprocmask(SIG_SETMASK, &oldset, NULL);
11880 +#define down_interruptible down_interruptible_killable
11883 +static inline int __must_check mutex_lock_interruptible_killable(struct mutex *lock)
11885 + /* Allow interception of killable signals only. We don't want to be interrupted by harmless signals like SIGALRM */
11887 + sigset_t blocked, oldset;
11888 + siginitsetinv(&blocked, SHUTDOWN_SIGS);
11889 + sigprocmask(SIG_SETMASK, &blocked, &oldset);
11890 + ret = mutex_lock_interruptible(lock);
11891 + sigprocmask(SIG_SETMASK, &oldset, NULL);
11894 +#define mutex_lock_interruptible mutex_lock_interruptible_killable
11898 +++ b/drivers/misc/vc04_services/interface/vchiq_arm/vchiq_memdrv.h
11901 + * Copyright (c) 2010-2012 Broadcom. All rights reserved.
11933 +#ifndef VCHIQ_MEMDRV_H
11934 +#define VCHIQ_MEMDRV_H
11936 +/* ---- Include Files ----------------------------------------------------- */
11938 +#include <linux/kernel.h>
11939 +#include "vchiq_if.h"
11941 +/* ---- Constants and Types ---------------------------------------------- */
11944 + void *armSharedMemVirt;
11945 + dma_addr_t armSharedMemPhys;
11946 + size_t armSharedMemSize;
11948 + void *vcSharedMemVirt;
11949 + dma_addr_t vcSharedMemPhys;
11950 + size_t vcSharedMemSize;
11951 +} VCHIQ_SHARED_MEM_INFO_T;
11953 +/* ---- Variable Externs ------------------------------------------------- */
11955 +/* ---- Function Prototypes ---------------------------------------------- */
11957 +void vchiq_get_shared_mem_info(VCHIQ_SHARED_MEM_INFO_T *info);
11959 +VCHIQ_STATUS_T vchiq_memdrv_initialise(void);
11961 +VCHIQ_STATUS_T vchiq_userdrv_create_instance(
11962 + const VCHIQ_PLATFORM_DATA_T * platform_data);
11964 +VCHIQ_STATUS_T vchiq_userdrv_suspend(
11965 + const VCHIQ_PLATFORM_DATA_T * platform_data);
11967 +VCHIQ_STATUS_T vchiq_userdrv_resume(
11968 + const VCHIQ_PLATFORM_DATA_T * platform_data);
11972 +++ b/drivers/misc/vc04_services/interface/vchiq_arm/vchiq_pagelist.h
11975 + * Copyright (c) 2010-2012 Broadcom. All rights reserved.
12007 +#ifndef VCHIQ_PAGELIST_H
12008 +#define VCHIQ_PAGELIST_H
12011 +#define PAGE_SIZE 4096
12013 +#define CACHE_LINE_SIZE 32
12014 +#define PAGELIST_WRITE 0
12015 +#define PAGELIST_READ 1
12016 +#define PAGELIST_READ_WITH_FRAGMENTS 2
12018 +typedef struct pagelist_struct {
12019 + unsigned long length;
12020 + unsigned short type;
12021 + unsigned short offset;
12022 + unsigned long addrs[1]; /* N.B. 12 LSBs hold the number of following
12023 + pages at consecutive addresses. */
12026 +typedef struct fragments_struct {
12027 + char headbuf[CACHE_LINE_SIZE];
12028 + char tailbuf[CACHE_LINE_SIZE];
12031 +#endif /* VCHIQ_PAGELIST_H */
12033 +++ b/drivers/misc/vc04_services/interface/vchiq_arm/vchiq_shim.c
12036 + * Copyright (c) 2010-2012 Broadcom. All rights reserved.
12067 +#include <linux/module.h>
12068 +#include <linux/types.h>
12070 +#include "interface/vchi/vchi.h"
12071 +#include "vchiq.h"
12072 +#include "vchiq_core.h"
12074 +#include "vchiq_util.h"
12076 +#include <stddef.h>
12078 +#define vchiq_status_to_vchi(status) ((int32_t)status)
12081 + VCHIQ_SERVICE_HANDLE_T handle;
12083 + VCHIU_QUEUE_T queue;
12085 + VCHI_CALLBACK_T callback;
12086 + void *callback_param;
12089 +/* ----------------------------------------------------------------------
12090 + * return pointer to the mphi message driver function table
12091 + * -------------------------------------------------------------------- */
12092 +const VCHI_MESSAGE_DRIVER_T *
12093 +vchi_mphi_message_driver_func_table(void)
12098 +/* ----------------------------------------------------------------------
12099 + * return a pointer to the 'single' connection driver fops
12100 + * -------------------------------------------------------------------- */
12101 +const VCHI_CONNECTION_API_T *
12102 +single_get_func_table(void)
12107 +VCHI_CONNECTION_T *vchi_create_connection(
12108 + const VCHI_CONNECTION_API_T *function_table,
12109 + const VCHI_MESSAGE_DRIVER_T *low_level)
12111 + (void)function_table;
12116 +/***********************************************************
12117 + * Name: vchi_msg_peek
12119 + * Arguments: const VCHI_SERVICE_HANDLE_T handle,
12121 + * uint32_t *msg_size,
12124 + * VCHI_FLAGS_T flags
12126 + * Description: Routine to return a pointer to the current message (to allow in
12127 + * place processing). The message can be removed using
12128 + * vchi_msg_remove when you're finished
12130 + * Returns: int32_t - success == 0
12132 + ***********************************************************/
12133 +int32_t vchi_msg_peek(VCHI_SERVICE_HANDLE_T handle,
12135 + uint32_t *msg_size,
12136 + VCHI_FLAGS_T flags)
12138 + SHIM_SERVICE_T *service = (SHIM_SERVICE_T *)handle;
12139 + VCHIQ_HEADER_T *header;
12141 + WARN_ON((flags != VCHI_FLAGS_NONE) &&
12142 + (flags != VCHI_FLAGS_BLOCK_UNTIL_OP_COMPLETE));
12144 + if (flags == VCHI_FLAGS_NONE)
12145 + if (vchiu_queue_is_empty(&service->queue))
12148 + header = vchiu_queue_peek(&service->queue);
12150 + *data = header->data;
12151 + *msg_size = header->size;
12155 +EXPORT_SYMBOL(vchi_msg_peek);
12157 +/***********************************************************
12158 + * Name: vchi_msg_remove
12160 + * Arguments: const VCHI_SERVICE_HANDLE_T handle,
12162 + * Description: Routine to remove a message (after it has been read with
vchi_msg_peek)
12165 + * Returns: int32_t - success == 0
12167 + ***********************************************************/
12168 +int32_t vchi_msg_remove(VCHI_SERVICE_HANDLE_T handle)
12170 + SHIM_SERVICE_T *service = (SHIM_SERVICE_T *)handle;
12171 + VCHIQ_HEADER_T *header;
12173 + header = vchiu_queue_pop(&service->queue);
12175 + vchiq_release_message(service->handle, header);
12179 +EXPORT_SYMBOL(vchi_msg_remove);
12181 +/***********************************************************
12182 + * Name: vchi_msg_queue
12184 + * Arguments: VCHI_SERVICE_HANDLE_T handle,
12185 + * const void *data,
12186 + * uint32_t data_size,
12187 + * VCHI_FLAGS_T flags,
12188 + * void *msg_handle,
12190 + * Description: Thin wrapper to queue a message onto a connection
12192 + * Returns: int32_t - success == 0
12194 + ***********************************************************/
12195 +int32_t vchi_msg_queue(VCHI_SERVICE_HANDLE_T handle,
12196 + const void *data,
12197 + uint32_t data_size,
12198 + VCHI_FLAGS_T flags,
12199 + void *msg_handle)
12201 + SHIM_SERVICE_T *service = (SHIM_SERVICE_T *)handle;
12202 + VCHIQ_ELEMENT_T element = {data, data_size};
12203 + VCHIQ_STATUS_T status;
12205 + (void)msg_handle;
12207 + WARN_ON(flags != VCHI_FLAGS_BLOCK_UNTIL_QUEUED);
12209 + status = vchiq_queue_message(service->handle, &element, 1);
12211 + /* vchiq_queue_message() may return VCHIQ_RETRY, so we need to
12212 + ** implement a retry mechanism since this function is supposed
12213 + ** to block until queued
12215 + while (status == VCHIQ_RETRY) {
12217 + status = vchiq_queue_message(service->handle, &element, 1);
12220 + return vchiq_status_to_vchi(status);
12222 +EXPORT_SYMBOL(vchi_msg_queue);
12224 +/***********************************************************
12225 + * Name: vchi_bulk_queue_receive
12227 + * Arguments: VCHI_BULK_HANDLE_T handle,
12228 + * void *data_dst,
12229 + * const uint32_t data_size,
12230 + * VCHI_FLAGS_T flags
12231 + * void *bulk_handle
12233 + * Description: Routine to set up a receive buffer
12235 + * Returns: int32_t - success == 0
12237 + ***********************************************************/
12238 +int32_t vchi_bulk_queue_receive(VCHI_SERVICE_HANDLE_T handle,
12240 + uint32_t data_size,
12241 + VCHI_FLAGS_T flags,
12242 + void *bulk_handle)
12244 + SHIM_SERVICE_T *service = (SHIM_SERVICE_T *)handle;
12245 + VCHIQ_BULK_MODE_T mode;
12246 + VCHIQ_STATUS_T status;
12248 + switch ((int)flags) {
12249 + case VCHI_FLAGS_CALLBACK_WHEN_OP_COMPLETE
12250 + | VCHI_FLAGS_BLOCK_UNTIL_QUEUED:
12251 + WARN_ON(!service->callback);
12252 + mode = VCHIQ_BULK_MODE_CALLBACK;
12254 + case VCHI_FLAGS_BLOCK_UNTIL_OP_COMPLETE:
12255 + mode = VCHIQ_BULK_MODE_BLOCKING;
12257 + case VCHI_FLAGS_BLOCK_UNTIL_QUEUED:
12258 + case VCHI_FLAGS_NONE:
12259 + mode = VCHIQ_BULK_MODE_NOCALLBACK;
12262 + WARN(1, "unsupported message\n");
12263 + return vchiq_status_to_vchi(VCHIQ_ERROR);
12266 + status = vchiq_bulk_receive(service->handle, data_dst, data_size,
12267 + bulk_handle, mode);
12269 + /* vchiq_bulk_receive() may return VCHIQ_RETRY, so we need to
12270 + ** implement a retry mechanism since this function is supposed
12271 + ** to block until queued
12273 + while (status == VCHIQ_RETRY) {
12275 + status = vchiq_bulk_receive(service->handle, data_dst,
12276 + data_size, bulk_handle, mode);
12279 + return vchiq_status_to_vchi(status);
12281 +EXPORT_SYMBOL(vchi_bulk_queue_receive);
12283 +/***********************************************************
12284 + * Name: vchi_bulk_queue_transmit
12286 + * Arguments: VCHI_BULK_HANDLE_T handle,
12287 + * const void *data_src,
12288 + * uint32_t data_size,
12289 + * VCHI_FLAGS_T flags,
12290 + * void *bulk_handle
12292 + * Description: Routine to transmit some data
12294 + * Returns: int32_t - success == 0
12296 + ***********************************************************/
12297 +int32_t vchi_bulk_queue_transmit(VCHI_SERVICE_HANDLE_T handle,
12298 + const void *data_src,
12299 + uint32_t data_size,
12300 + VCHI_FLAGS_T flags,
12301 + void *bulk_handle)
12303 + SHIM_SERVICE_T *service = (SHIM_SERVICE_T *)handle;
12304 + VCHIQ_BULK_MODE_T mode;
12305 + VCHIQ_STATUS_T status;
12307 + switch ((int)flags) {
12308 + case VCHI_FLAGS_CALLBACK_WHEN_OP_COMPLETE
12309 + | VCHI_FLAGS_BLOCK_UNTIL_QUEUED:
12310 + WARN_ON(!service->callback);
12311 + mode = VCHIQ_BULK_MODE_CALLBACK;
12313 + case VCHI_FLAGS_BLOCK_UNTIL_DATA_READ:
12314 + case VCHI_FLAGS_BLOCK_UNTIL_OP_COMPLETE:
12315 + mode = VCHIQ_BULK_MODE_BLOCKING;
12317 + case VCHI_FLAGS_BLOCK_UNTIL_QUEUED:
12318 + case VCHI_FLAGS_NONE:
12319 + mode = VCHIQ_BULK_MODE_NOCALLBACK;
12322 + WARN(1, "unsupported message\n");
12323 + return vchiq_status_to_vchi(VCHIQ_ERROR);
12326 + status = vchiq_bulk_transmit(service->handle, data_src, data_size,
12327 + bulk_handle, mode);
12329 + /* vchiq_bulk_transmit() may return VCHIQ_RETRY, so we need to
12330 + ** implement a retry mechanism since this function is supposed
12331 + ** to block until queued
12333 + while (status == VCHIQ_RETRY) {
12335 + status = vchiq_bulk_transmit(service->handle, data_src,
12336 + data_size, bulk_handle, mode);
12339 + return vchiq_status_to_vchi(status);
12341 +EXPORT_SYMBOL(vchi_bulk_queue_transmit);
12343 +/***********************************************************
12344 + * Name: vchi_msg_dequeue
12346 + * Arguments: VCHI_SERVICE_HANDLE_T handle,
12348 + * uint32_t max_data_size_to_read,
12349 + * uint32_t *actual_msg_size
12350 + * VCHI_FLAGS_T flags
12352 + * Description: Routine to dequeue a message into the supplied buffer
12354 + * Returns: int32_t - success == 0
12356 + ***********************************************************/
12357 +int32_t vchi_msg_dequeue(VCHI_SERVICE_HANDLE_T handle,
12359 + uint32_t max_data_size_to_read,
12360 + uint32_t *actual_msg_size,
12361 + VCHI_FLAGS_T flags)
12363 + SHIM_SERVICE_T *service = (SHIM_SERVICE_T *)handle;
12364 + VCHIQ_HEADER_T *header;
12366 + WARN_ON((flags != VCHI_FLAGS_NONE) &&
12367 + (flags != VCHI_FLAGS_BLOCK_UNTIL_OP_COMPLETE));
12369 + if (flags == VCHI_FLAGS_NONE)
12370 + if (vchiu_queue_is_empty(&service->queue))
12373 + header = vchiu_queue_pop(&service->queue);
12375 + memcpy(data, header->data, header->size < max_data_size_to_read ?
12376 + header->size : max_data_size_to_read);
12378 + *actual_msg_size = header->size;
12380 + vchiq_release_message(service->handle, header);
12384 +EXPORT_SYMBOL(vchi_msg_dequeue);
12386 +/***********************************************************
12387 + * Name: vchi_msg_queuev
12389 + * Arguments: VCHI_SERVICE_HANDLE_T handle,
12390 + * VCHI_MSG_VECTOR_T *vector,
12391 + * uint32_t count,
12392 + * VCHI_FLAGS_T flags,
12393 + * void *msg_handle
12395 + * Description: Thin wrapper to queue a message onto a connection
12397 + * Returns: int32_t - success == 0
12399 + ***********************************************************/
12401 +vchiq_static_assert(sizeof(VCHI_MSG_VECTOR_T) == sizeof(VCHIQ_ELEMENT_T));
12402 +vchiq_static_assert(offsetof(VCHI_MSG_VECTOR_T, vec_base) ==
12403 + offsetof(VCHIQ_ELEMENT_T, data));
12404 +vchiq_static_assert(offsetof(VCHI_MSG_VECTOR_T, vec_len) ==
12405 + offsetof(VCHIQ_ELEMENT_T, size));
12407 +int32_t vchi_msg_queuev(VCHI_SERVICE_HANDLE_T handle,
12408 + VCHI_MSG_VECTOR_T *vector,
12410 + VCHI_FLAGS_T flags,
12411 + void *msg_handle)
12413 + SHIM_SERVICE_T *service = (SHIM_SERVICE_T *)handle;
12415 + (void)msg_handle;
12417 + WARN_ON(flags != VCHI_FLAGS_BLOCK_UNTIL_QUEUED);
12419 + return vchiq_status_to_vchi(vchiq_queue_message(service->handle,
12420 + (const VCHIQ_ELEMENT_T *)vector, count));
12422 +EXPORT_SYMBOL(vchi_msg_queuev);
12424 +/***********************************************************
12425 + * Name: vchi_held_msg_release
12427 + * Arguments: VCHI_HELD_MSG_T *message
12429 + * Description: Routine to release a held message (after it has been read with
12432 + * Returns: int32_t - success == 0
12434 + ***********************************************************/
12435 +int32_t vchi_held_msg_release(VCHI_HELD_MSG_T *message)
12437 + vchiq_release_message((VCHIQ_SERVICE_HANDLE_T)message->service,
12438 + (VCHIQ_HEADER_T *)message->message);
12442 +EXPORT_SYMBOL(vchi_held_msg_release);
12444 +/***********************************************************
12445 + * Name: vchi_msg_hold
12447 + * Arguments: VCHI_SERVICE_HANDLE_T handle,
12449 + * uint32_t *msg_size,
12450 + * VCHI_FLAGS_T flags,
12451 + * VCHI_HELD_MSG_T *message_handle
12453 + * Description: Routine to return a pointer to the current message (to allow
12454 + * in-place processing). The message is dequeued - don't forget
12455 + * to release the message using vchi_held_msg_release when you're finished.
12458 + * Returns: int32_t - success == 0
12460 + ***********************************************************/
12461 +int32_t vchi_msg_hold(VCHI_SERVICE_HANDLE_T handle,
12463 + uint32_t *msg_size,
12464 + VCHI_FLAGS_T flags,
12465 + VCHI_HELD_MSG_T *message_handle)
12467 + SHIM_SERVICE_T *service = (SHIM_SERVICE_T *)handle;
12468 + VCHIQ_HEADER_T *header;
12470 + WARN_ON((flags != VCHI_FLAGS_NONE) &&
12471 + (flags != VCHI_FLAGS_BLOCK_UNTIL_OP_COMPLETE));
12473 + if (flags == VCHI_FLAGS_NONE)
12474 + if (vchiu_queue_is_empty(&service->queue))
12477 + header = vchiu_queue_pop(&service->queue);
12479 + *data = header->data;
12480 + *msg_size = header->size;
12482 + message_handle->service =
12483 + (struct opaque_vchi_service_t *)service->handle;
12484 + message_handle->message = header;
12488 +EXPORT_SYMBOL(vchi_msg_hold);
12490 +/***********************************************************
12491 + * Name: vchi_initialise
12493 + * Arguments: VCHI_INSTANCE_T *instance_handle
12495 + * Description: Initialises the hardware but does not transmit anything.
12496 + * When run as a Host App this will be called twice, hence the need
12497 + * to malloc the state information.
12499 + * Returns: 0 if successful, failure otherwise
12501 + ***********************************************************/
12503 +int32_t vchi_initialise(VCHI_INSTANCE_T *instance_handle)
12505 + VCHIQ_INSTANCE_T instance;
12506 + VCHIQ_STATUS_T status;
12508 + status = vchiq_initialise(&instance);
12510 + *instance_handle = (VCHI_INSTANCE_T)instance;
12512 + return vchiq_status_to_vchi(status);
12514 +EXPORT_SYMBOL(vchi_initialise);
12516 +/***********************************************************
12517 + * Name: vchi_connect
12519 + * Arguments: VCHI_CONNECTION_T **connections
12520 + * const uint32_t num_connections
12521 + * VCHI_INSTANCE_T instance_handle)
12523 + * Description: Starts the command service on each connection,
12524 + * causing INIT messages to be pinged back and forth
12526 + * Returns: 0 if successful, failure otherwise
12528 + ***********************************************************/
12529 +int32_t vchi_connect(VCHI_CONNECTION_T **connections,
12530 + const uint32_t num_connections,
12531 + VCHI_INSTANCE_T instance_handle)
12533 + VCHIQ_INSTANCE_T instance = (VCHIQ_INSTANCE_T)instance_handle;
12535 + (void)connections;
12536 + (void)num_connections;
12538 + return vchiq_connect(instance);
12540 +EXPORT_SYMBOL(vchi_connect);
12543 +/***********************************************************
12544 + * Name: vchi_disconnect
12546 + * Arguments: VCHI_INSTANCE_T instance_handle
12548 + * Description: Stops the command service on each connection,
12549 + * causing DE-INIT messages to be pinged back and forth
12551 + * Returns: 0 if successful, failure otherwise
12553 + ***********************************************************/
12554 +int32_t vchi_disconnect(VCHI_INSTANCE_T instance_handle)
12556 + VCHIQ_INSTANCE_T instance = (VCHIQ_INSTANCE_T)instance_handle;
12557 + return vchiq_status_to_vchi(vchiq_shutdown(instance));
12559 +EXPORT_SYMBOL(vchi_disconnect);
12562 +/***********************************************************
12563 + * Name: vchi_service_open
12564 + * Name: vchi_service_create
12566 + * Arguments: VCHI_INSTANCE_T *instance_handle
12567 + * SERVICE_CREATION_T *setup,
12568 + * VCHI_SERVICE_HANDLE_T *handle
12570 + * Description: Routine to open a service
12572 + * Returns: int32_t - success == 0
12574 + ***********************************************************/
+static VCHIQ_STATUS_T shim_callback(VCHIQ_REASON_T reason,
+	VCHIQ_HEADER_T *header, VCHIQ_SERVICE_HANDLE_T handle, void *bulk_user)
+{
+	SHIM_SERVICE_T *service =
+		(SHIM_SERVICE_T *)VCHIQ_GET_SERVICE_USERDATA(handle);
+
+	if (!service->callback)
+		goto release;
+
+	switch (reason) {
+	case VCHIQ_MESSAGE_AVAILABLE:
+		vchiu_queue_push(&service->queue, header);
+
+		service->callback(service->callback_param,
+			VCHI_CALLBACK_MSG_AVAILABLE, NULL);
+
+		/* The message is now owned by the queue; don't release it */
+		goto done;
+
+	case VCHIQ_BULK_TRANSMIT_DONE:
+		service->callback(service->callback_param,
+			VCHI_CALLBACK_BULK_SENT, bulk_user);
+		break;
+
+	case VCHIQ_BULK_RECEIVE_DONE:
+		service->callback(service->callback_param,
+			VCHI_CALLBACK_BULK_RECEIVED, bulk_user);
+		break;
+
+	case VCHIQ_SERVICE_CLOSED:
+		service->callback(service->callback_param,
+			VCHI_CALLBACK_SERVICE_CLOSED, NULL);
+		break;
+
+	case VCHIQ_SERVICE_OPENED:
+		/* No equivalent VCHI reason */
+		break;
+
+	case VCHIQ_BULK_TRANSMIT_ABORTED:
+		service->callback(service->callback_param,
+			VCHI_CALLBACK_BULK_TRANSMIT_ABORTED,
+			bulk_user);
+		break;
+
+	case VCHIQ_BULK_RECEIVE_ABORTED:
+		service->callback(service->callback_param,
+			VCHI_CALLBACK_BULK_RECEIVE_ABORTED,
+			bulk_user);
+		break;
+
+	default:
+		WARN(1, "not supported\n");
+		break;
+	}
+
+release:
+	vchiq_release_message(service->handle, header);
+done:
+	return VCHIQ_SUCCESS;
+}
+
+static SHIM_SERVICE_T *service_alloc(VCHIQ_INSTANCE_T instance,
+	SERVICE_CREATION_T *setup)
+{
+	SHIM_SERVICE_T *service = kzalloc(sizeof(SHIM_SERVICE_T), GFP_KERNEL);
+
+	(void)instance;
+
+	if (service) {
+		if (vchiu_queue_init(&service->queue, 64)) {
+			service->callback = setup->callback;
+			service->callback_param = setup->callback_param;
+		} else {
+			kfree(service);
+			service = NULL;
+		}
+	}
+
+	return service;
+}
+
+static void service_free(SHIM_SERVICE_T *service)
+{
+	if (service) {
+		vchiu_queue_delete(&service->queue);
+		kfree(service);
+	}
+}
+
+int32_t vchi_service_open(VCHI_INSTANCE_T instance_handle,
+	SERVICE_CREATION_T *setup,
+	VCHI_SERVICE_HANDLE_T *handle)
+{
+	VCHIQ_INSTANCE_T instance = (VCHIQ_INSTANCE_T)instance_handle;
+	SHIM_SERVICE_T *service = service_alloc(instance, setup);
+
+	*handle = (VCHI_SERVICE_HANDLE_T)service;
+
+	if (service) {
+		VCHIQ_SERVICE_PARAMS_T params;
+		VCHIQ_STATUS_T status;
+
+		memset(&params, 0, sizeof(params));
+		params.fourcc = setup->service_id;
+		params.callback = shim_callback;
+		params.userdata = service;
+		params.version = setup->version.version;
+		params.version_min = setup->version.version_min;
+
+		status = vchiq_open_service(instance, &params,
+			&service->handle);
+		if (status != VCHIQ_SUCCESS) {
+			service_free(service);
+			service = NULL;
+			*handle = NULL;
+		}
+	}
+
+	return (service != NULL) ? 0 : -1;
+}
+EXPORT_SYMBOL(vchi_service_open);
+
+int32_t vchi_service_create(VCHI_INSTANCE_T instance_handle,
+	SERVICE_CREATION_T *setup,
+	VCHI_SERVICE_HANDLE_T *handle)
+{
+	VCHIQ_INSTANCE_T instance = (VCHIQ_INSTANCE_T)instance_handle;
+	SHIM_SERVICE_T *service = service_alloc(instance, setup);
+
+	*handle = (VCHI_SERVICE_HANDLE_T)service;
+
+	if (service) {
+		VCHIQ_SERVICE_PARAMS_T params;
+		VCHIQ_STATUS_T status;
+
+		memset(&params, 0, sizeof(params));
+		params.fourcc = setup->service_id;
+		params.callback = shim_callback;
+		params.userdata = service;
+		params.version = setup->version.version;
+		params.version_min = setup->version.version_min;
+		status = vchiq_add_service(instance, &params, &service->handle);
+
+		if (status != VCHIQ_SUCCESS) {
+			service_free(service);
+			service = NULL;
+			*handle = NULL;
+		}
+	}
+
+	return (service != NULL) ? 0 : -1;
+}
+EXPORT_SYMBOL(vchi_service_create);
+
+int32_t vchi_service_close(const VCHI_SERVICE_HANDLE_T handle)
+{
+	int32_t ret = -1;
+	SHIM_SERVICE_T *service = (SHIM_SERVICE_T *)handle;
+
+	if (service) {
+		VCHIQ_STATUS_T status = vchiq_close_service(service->handle);
+		if (status == VCHIQ_SUCCESS) {
+			service_free(service);
+			service = NULL;
+		}
+
+		ret = vchiq_status_to_vchi(status);
+	}
+	return ret;
+}
+EXPORT_SYMBOL(vchi_service_close);
+
+int32_t vchi_service_destroy(const VCHI_SERVICE_HANDLE_T handle)
+{
+	int32_t ret = -1;
+	SHIM_SERVICE_T *service = (SHIM_SERVICE_T *)handle;
+
+	if (service) {
+		VCHIQ_STATUS_T status = vchiq_remove_service(service->handle);
+		if (status == VCHIQ_SUCCESS) {
+			service_free(service);
+			service = NULL;
+		}
+
+		ret = vchiq_status_to_vchi(status);
+	}
+	return ret;
+}
+EXPORT_SYMBOL(vchi_service_destroy);
+
+int32_t vchi_service_set_option(const VCHI_SERVICE_HANDLE_T handle,
+	VCHI_SERVICE_OPTION_T option,
+	int value)
+{
+	int32_t ret = -1;
+	SHIM_SERVICE_T *service = (SHIM_SERVICE_T *)handle;
+	VCHIQ_SERVICE_OPTION_T vchiq_option;
+	switch (option) {
+	case VCHI_SERVICE_OPTION_TRACE:
+		vchiq_option = VCHIQ_SERVICE_OPTION_TRACE;
+		break;
+	case VCHI_SERVICE_OPTION_SYNCHRONOUS:
+		vchiq_option = VCHIQ_SERVICE_OPTION_SYNCHRONOUS;
+		break;
+	default:
+		service = NULL;
+		break;
+	}
+	if (service) {
+		VCHIQ_STATUS_T status =
+			vchiq_set_service_option(service->handle,
+				vchiq_option, value);
+
+		ret = vchiq_status_to_vchi(status);
+	}
+	return ret;
+}
+EXPORT_SYMBOL(vchi_service_set_option);
+
+int32_t vchi_get_peer_version( const VCHI_SERVICE_HANDLE_T handle, short *peer_version )
+{
+	int32_t ret = -1;
+	SHIM_SERVICE_T *service = (SHIM_SERVICE_T *)handle;
+
+	if (service) {
+		VCHIQ_STATUS_T status = vchiq_get_peer_version(service->handle, peer_version);
+		ret = vchiq_status_to_vchi( status );
+	}
+	return ret;
+}
+EXPORT_SYMBOL(vchi_get_peer_version);
+
+/* ----------------------------------------------------------------------
+ * read a uint32_t from buffer.
+ * network format is defined to be little endian
+ * -------------------------------------------------------------------- */
+uint32_t
+vchi_readbuf_uint32(const void *_ptr)
+{
+	const unsigned char *ptr = _ptr;
+	return ptr[0] | (ptr[1] << 8) | (ptr[2] << 16) | (ptr[3] << 24);
+}
+
+/* ----------------------------------------------------------------------
+ * write a uint32_t to buffer.
+ * network format is defined to be little endian
+ * -------------------------------------------------------------------- */
+void
+vchi_writebuf_uint32(void *_ptr, uint32_t value)
+{
+	unsigned char *ptr = _ptr;
+	ptr[0] = (unsigned char)((value >> 0) & 0xFF);
+	ptr[1] = (unsigned char)((value >> 8) & 0xFF);
+	ptr[2] = (unsigned char)((value >> 16) & 0xFF);
+	ptr[3] = (unsigned char)((value >> 24) & 0xFF);
+}
+
+/* ----------------------------------------------------------------------
+ * read a uint16_t from buffer.
+ * network format is defined to be little endian
+ * -------------------------------------------------------------------- */
+uint16_t
+vchi_readbuf_uint16(const void *_ptr)
+{
+	const unsigned char *ptr = _ptr;
+	return ptr[0] | (ptr[1] << 8);
+}
+
+/* ----------------------------------------------------------------------
+ * write a uint16_t into the buffer.
+ * network format is defined to be little endian
+ * -------------------------------------------------------------------- */
+void
+vchi_writebuf_uint16(void *_ptr, uint16_t value)
+{
+	unsigned char *ptr = _ptr;
+	ptr[0] = (value >> 0) & 0xFF;
+	ptr[1] = (value >> 8) & 0xFF;
+}
+
+/***********************************************************
+ * Name: vchi_service_use
+ *
+ * Arguments: const VCHI_SERVICE_HANDLE_T handle
+ *
+ * Description: Routine to increment refcount on a service
+ *
+ * Returns: 0 if successful, failure otherwise
+ *
+ ***********************************************************/
+int32_t vchi_service_use(const VCHI_SERVICE_HANDLE_T handle)
+{
+	int32_t ret = -1;
+	SHIM_SERVICE_T *service = (SHIM_SERVICE_T *)handle;
+	if (service)
+		ret = vchiq_status_to_vchi(vchiq_use_service(service->handle));
+	return ret;
+}
+EXPORT_SYMBOL(vchi_service_use);
+
+/***********************************************************
+ * Name: vchi_service_release
+ *
+ * Arguments: const VCHI_SERVICE_HANDLE_T handle
+ *
+ * Description: Routine to decrement refcount on a service
+ *
+ * Returns: 0 if successful, failure otherwise
+ *
+ ***********************************************************/
+int32_t vchi_service_release(const VCHI_SERVICE_HANDLE_T handle)
+{
+	int32_t ret = -1;
+	SHIM_SERVICE_T *service = (SHIM_SERVICE_T *)handle;
+	if (service)
+		ret = vchiq_status_to_vchi(
+			vchiq_release_service(service->handle));
+	return ret;
+}
+EXPORT_SYMBOL(vchi_service_release);
+++ b/drivers/misc/vc04_services/interface/vchiq_arm/vchiq_util.c
+/*
+ * Copyright (c) 2010-2012 Broadcom. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ *    notice, this list of conditions, and the following disclaimer,
+ *    without modification.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ *    notice, this list of conditions and the following disclaimer in the
+ *    documentation and/or other materials provided with the distribution.
+ * 3. The names of the above-listed copyright holders may not be used
+ *    to endorse or promote products derived from this software without
+ *    specific prior written permission.
+ *
+ * ALTERNATIVELY, this software may be distributed under the terms of the
+ * GNU General Public License ("GPL") version 2, as published by the Free
+ * Software Foundation.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS
+ * IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
+ * THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+ * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR
+ * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
+ * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
+ * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
+ * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
+ * LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
+ * NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
+ * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include "vchiq_util.h"
+#include "vchiq_killable.h"
+
+static inline int is_pow2(int i)
+{
+	return i && !(i & (i - 1));
+}
+
+int vchiu_queue_init(VCHIU_QUEUE_T *queue, int size)
+{
+	WARN_ON(!is_pow2(size));
+
+	queue->size = size;
+	queue->read = 0;
+	queue->write = 0;
+	queue->initialized = 1;
+
+	sema_init(&queue->pop, 0);
+	sema_init(&queue->push, 0);
+
+	queue->storage = kzalloc(size * sizeof(VCHIQ_HEADER_T *), GFP_KERNEL);
+	if (queue->storage == NULL) {
+		vchiu_queue_delete(queue);
+		return 0;
+	}
+	return 1;
+}
+
+void vchiu_queue_delete(VCHIU_QUEUE_T *queue)
+{
+	if (queue->storage != NULL)
+		kfree(queue->storage);
+}
+
+int vchiu_queue_is_empty(VCHIU_QUEUE_T *queue)
+{
+	return queue->read == queue->write;
+}
+
+int vchiu_queue_is_full(VCHIU_QUEUE_T *queue)
+{
+	return queue->write == queue->read + queue->size;
+}
+
+void vchiu_queue_push(VCHIU_QUEUE_T *queue, VCHIQ_HEADER_T *header)
+{
+	if (!queue->initialized)
+		return;
+
+	while (queue->write == queue->read + queue->size) {
+		if (down_interruptible(&queue->pop) != 0) {
+			flush_signals(current);
+		}
+	}
+
+	/*
+	 * Write to queue->storage must be visible after read from
+	 * queue->read
+	 */
+	smp_mb();
+
+	queue->storage[queue->write & (queue->size - 1)] = header;
+
+	/*
+	 * Write to queue->storage must be visible before write to
+	 * queue->write
+	 */
+	smp_wmb();
+
+	queue->write++;
+
+	up(&queue->push);
+}
+
+VCHIQ_HEADER_T *vchiu_queue_peek(VCHIU_QUEUE_T *queue)
+{
+	while (queue->write == queue->read) {
+		if (down_interruptible(&queue->push) != 0) {
+			flush_signals(current);
+		}
+	}
+
+	up(&queue->push); /* We haven't removed anything from the queue. */
+
+	/*
+	 * Read from queue->storage must be visible after read from
+	 * queue->write
+	 */
+	smp_rmb();
+
+	return queue->storage[queue->read & (queue->size - 1)];
+}
+
+VCHIQ_HEADER_T *vchiu_queue_pop(VCHIU_QUEUE_T *queue)
+{
+	VCHIQ_HEADER_T *header;
+
+	while (queue->write == queue->read) {
+		if (down_interruptible(&queue->push) != 0) {
+			flush_signals(current);
+		}
+	}
+
+	/*
+	 * Read from queue->storage must be visible after read from
+	 * queue->write
+	 */
+	smp_rmb();
+
+	header = queue->storage[queue->read & (queue->size - 1)];
+
+	/*
+	 * Read from queue->storage must be visible before write to
+	 * queue->read
+	 */
+	smp_mb();
+
+	queue->read++;
+
+	up(&queue->pop);
+
+	return header;
+}
+++ b/drivers/misc/vc04_services/interface/vchiq_arm/vchiq_util.h
+/*
+ * Copyright (c) 2010-2012 Broadcom. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ *    notice, this list of conditions, and the following disclaimer,
+ *    without modification.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ *    notice, this list of conditions and the following disclaimer in the
+ *    documentation and/or other materials provided with the distribution.
+ * 3. The names of the above-listed copyright holders may not be used
+ *    to endorse or promote products derived from this software without
+ *    specific prior written permission.
+ *
+ * ALTERNATIVELY, this software may be distributed under the terms of the
+ * GNU General Public License ("GPL") version 2, as published by the Free
+ * Software Foundation.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS
+ * IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
+ * THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+ * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR
+ * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
+ * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
+ * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
+ * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
+ * LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
+ * NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
+ * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef VCHIQ_UTIL_H
+#define VCHIQ_UTIL_H
+
+#include <linux/types.h>
+#include <linux/semaphore.h>
+#include <linux/mutex.h>
+#include <linux/bitops.h>
+#include <linux/kthread.h>
+#include <linux/wait.h>
+#include <linux/vmalloc.h>
+#include <linux/jiffies.h>
+#include <linux/delay.h>
+#include <linux/string.h>
+#include <linux/interrupt.h>
+#include <linux/random.h>
+#include <linux/sched.h>
+#include <linux/ctype.h>
+#include <linux/uaccess.h>
+#include <linux/time.h> /* for time_t */
+#include <linux/slab.h>
+
+#include "vchiq_if.h"
+
+typedef struct {
+	int size;
+	int read;
+	int write;
+	int initialized;
+
+	struct semaphore pop;
+	struct semaphore push;
+
+	VCHIQ_HEADER_T **storage;
+} VCHIU_QUEUE_T;
+
+extern int vchiu_queue_init(VCHIU_QUEUE_T *queue, int size);
+extern void vchiu_queue_delete(VCHIU_QUEUE_T *queue);
+
+extern int vchiu_queue_is_empty(VCHIU_QUEUE_T *queue);
+extern int vchiu_queue_is_full(VCHIU_QUEUE_T *queue);
+
+extern void vchiu_queue_push(VCHIU_QUEUE_T *queue, VCHIQ_HEADER_T *header);
+
+extern VCHIQ_HEADER_T *vchiu_queue_peek(VCHIU_QUEUE_T *queue);
+extern VCHIQ_HEADER_T *vchiu_queue_pop(VCHIU_QUEUE_T *queue);
+
+#endif
+++ b/drivers/misc/vc04_services/interface/vchiq_arm/vchiq_version.c
+/*
+ * Copyright (c) 2010-2012 Broadcom. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ *    notice, this list of conditions, and the following disclaimer,
+ *    without modification.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ *    notice, this list of conditions and the following disclaimer in the
+ *    documentation and/or other materials provided with the distribution.
+ * 3. The names of the above-listed copyright holders may not be used
+ *    to endorse or promote products derived from this software without
+ *    specific prior written permission.
+ *
+ * ALTERNATIVELY, this software may be distributed under the terms of the
+ * GNU General Public License ("GPL") version 2, as published by the Free
+ * Software Foundation.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS
+ * IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
+ * THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+ * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR
+ * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
+ * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
+ * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
+ * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
+ * LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
+ * NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
+ * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include "vchiq_build_info.h"
+#include <linux/broadcom/vc_debug_sym.h>
+
+VC_DEBUG_DECLARE_STRING_VAR( vchiq_build_hostname, "dc4-arm-01" );
+VC_DEBUG_DECLARE_STRING_VAR( vchiq_build_version, "9245b4c35b99b3870e1f7dc598c5692b3c66a6f0 (tainted)" );
+VC_DEBUG_DECLARE_STRING_VAR( vchiq_build_time, __TIME__ );
+VC_DEBUG_DECLARE_STRING_VAR( vchiq_build_date, __DATE__ );
+
+const char *vchiq_get_build_hostname( void )
+{
+	return vchiq_build_hostname;
+}
+
+const char *vchiq_get_build_version( void )
+{
+	return vchiq_build_version;
+}
+
+const char *vchiq_get_build_date( void )
+{
+	return vchiq_build_date;
+}
+
+const char *vchiq_get_build_time( void )
+{
+	return vchiq_build_time;
+}