target/linux/bcm27xx/patches-5.4/950-0174-staging-vc04_services-Add-new-vc-sm-cma-driver.patch
1 From 878c0bfd0c5f2dc0ef04874b1cba915cf208ca8f Mon Sep 17 00:00:00 2001
2 From: Dave Stevenson <dave.stevenson@raspberrypi.org>
3 Date: Tue, 25 Sep 2018 10:27:11 +0100
4 Subject: [PATCH] staging: vc04_services: Add new vc-sm-cma driver
5 MIME-Version: 1.0
6 Content-Type: text/plain; charset=UTF-8
7 Content-Transfer-Encoding: 8bit
8
9 This new driver allows contiguous memory blocks to be imported
10 into the VideoCore VPU memory map, and manages the lifetime of
11 those objects, only releasing the source dmabuf once the VPU has
12 confirmed it has finished with it.
13
14 Signed-off-by: Dave Stevenson <dave.stevenson@raspberrypi.org>
15
16 staging: vc-sm-cma: Correct DMA configuration.
17
18 Now that VCHIQ, as our parent device, sets up the DMA
19 configuration, don't try to configure it during probe.
20
21 Signed-off-by: Dave Stevenson <dave.stevenson@raspberrypi.org>
22
23 staging: vc-sm-cma: Use a void* pointer as the handle within the kernel
24
25 The driver was using an unsigned int as the handle to the outside
26 world, and doing a nasty cast to the struct dma_buf when handed
27 it back. This breaks badly with a 64-bit kernel, where the
28 pointer doesn't fit in an unsigned int.
29
30 Switch to using a void * within the kernel. In reality it is
31 a struct dma_buf *, but advertising it as such to other drivers
32 seems to encourage its use as one, and I'm not sure of the
33 implications of that.
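
As a sketch only (not part of the original commit text), a kernel
client is expected to treat the handle as opaque. The authoritative
prototypes live in vc_sm_knl.h below; the signatures used here are
assumptions:

    #include <linux/dma-buf.h>
    #include "vc_sm_knl.h"

    static int example_share_with_vpu(struct dma_buf *dmabuf)
    {
        void *handle;   /* opaque; in reality a struct dma_buf * */
        int ret;

        /* Assumed helper exported via vc_sm_knl.h */
        ret = vc_sm_cma_import_dmabuf(dmabuf, &handle);
        if (ret)
            return ret;

        /* ... share with the VPU, then drop our reference ... */
        vc_sm_cma_free(handle);
        return 0;
    }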
34
35 Signed-off-by: Dave Stevenson <dave.stevenson@raspberrypi.org>
36
37 staging: vc-sm-cma: Fix up for 64bit builds
38
39 There were a number of logging lines that were using
40 inappropriate format specifiers under 64-bit kernels.
41
42 The kernel_id field passed to/from the VPU was being
43 abused for storing a struct vc_sm_buffer *.
44 This breaks with 64-bit kernels, so change to using an IDR.
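
For reference, a minimal sketch of the IDR pattern used here: hand
the VPU a small integer id instead of a raw pointer, and map it back
on the reply path. This mirrors the get_kernel_id()/lookup_kernel_id()
helpers in the diff below, except that it uses idr_preload() to keep
the allocation out of the locked region:

    #include <linux/idr.h>
    #include <linux/spinlock.h>

    static DEFINE_IDR(kernelid_map);
    static DEFINE_SPINLOCK(kernelid_map_lock);

    static int buffer_to_kernel_id(struct vc_sm_buffer *buf)
    {
        int id;

        idr_preload(GFP_KERNEL);
        spin_lock(&kernelid_map_lock);
        id = idr_alloc(&kernelid_map, buf, 0, 0, GFP_NOWAIT);
        spin_unlock(&kernelid_map_lock);
        idr_preload_end();

        return id;  /* always fits the 32-bit kernel_id field */
    }

    /* On the VPU "released" reply, recover the buffer from the id. */
    static struct vc_sm_buffer *kernel_id_to_buffer(int id)
    {
        return idr_find(&kernelid_map, id);
    }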
45
46 Signed-off-by: Dave Stevenson <dave.stevenson@raspberrypi.org>
47
48 staging: vc_sm_cma: Remove erroneous misc_deregister
49
50 Code for the misc /dev node was still present in
51 bcm2835_vc_sm_cma_remove, which caused a NULL dereference.
52 Remove it.
53
54 See #2885.
55
56 Signed-off-by: Dave Stevenson <dave.stevenson@raspberrypi.org>
57
58 staging: vc-sm-cma: Remove the debugfs directory on remove
59
60 Without removing that, reloading the driver fails.
61
62 Signed-off-by: Dave Stevenson <dave.stevenson@raspberrypi.org>
63
64 staging: vc-sm-cma: Use devm_ allocs for sm_state.
65
66 Use managed allocations for sm_state, removing reliance on
67 manual management.
68
69 Signed-off-by: Dave Stevenson <dave.stevenson@raspberrypi.org>
70
71 staging: vc-sm-cma: Don't fail if debugfs calls fail.
72
73 Return codes from debugfs calls should never alter the
74 flow of the main code.
75
76 Signed-off-by: Dave Stevenson <dave.stevenson@raspberrypi.org>
77
78 staging: vc-sm-cma: Ensure mutex and idr are destroyed
79
80 map_lock and kernelid_map are created in probe, but were not
81 released on remove should the vcsm service not connect (e.g.
82 when running the cut-down firmware).
83
84 Signed-off-by: Dave Stevenson <dave.stevenson@raspberrypi.org>
85
86 staging: vc-sm-cma: Remove obsolete comment and make function static
87
88 Remove the obsolete comment about wanting to pass a function
89 pointer into mmal-vchiq, as we now do.
90 As the function is passed as a function pointer, the function
91 itself can be static.
92
93 Signed-off-by: Dave Stevenson <dave.stevenson@raspberrypi.org>
94
95 staging: vc-sm-cma: Add in allocation for VPU requests.
96
97 The module has to change from tristate to bool, as the CMA
98 allocation functions are only available to built-in code.
99
100 Signed-off-by: Dave Stevenson <dave.stevenson@raspberrypi.org>
101
102 staging: vc-sm-cma: Update TODO.
103
104 The driver is already a platform driver, so that can be
105 deleted from the TODO.
106 There are no known issues that need to be resolved.
107
108 Signed-off-by: Dave Stevenson <dave.stevenson@raspberrypi.org>
109
110 staging: vc-sm-cma: Add in userspace allocation API
111
112 Replacing the functionality of the older vc-sm driver, add
113 a userspace API that allows allocation of buffers
114 and importing of dma-bufs.
115 The driver hands out dma-buf fds, therefore much of the
116 old driver's handling around lifespan and odd mmaps
117 goes away.
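
For illustration only, a userspace allocation might look like the
following. VC_SM_CMA_IOCTL_MEM_ALLOC is an assumed name for the uapi
wrapper around VC_SM_CMA_CMD_ALLOC; the real macro and struct layout
come from the new vc_sm_cma_ioctl.h header:

    #include <fcntl.h>
    #include <string.h>
    #include <sys/ioctl.h>
    #include <sys/mman.h>
    #include <linux/broadcom/vc_sm_cma_ioctl.h>

    int example_alloc(void)
    {
        struct vc_sm_cma_ioctl_alloc alloc = { 0 };
        int fd = open("/dev/vcsm-cma", O_RDWR);
        void *p;

        if (fd < 0)
            return -1;

        alloc.size = 4096;
        strncpy(alloc.name, "example", sizeof(alloc.name) - 1);

        if (ioctl(fd, VC_SM_CMA_IOCTL_MEM_ALLOC, &alloc) < 0)
            return -1;

        /* alloc.handle is a dma-buf fd; mmap it directly. */
        p = mmap(NULL, alloc.size, PROT_READ | PROT_WRITE,
                 MAP_SHARED, alloc.handle, 0);
        return p == MAP_FAILED ? -1 : 0;
    }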
118
119 Signed-off-by: Dave Stevenson <dave.stevenson@raspberrypi.org>
120
121 staging: vcsm-cma: Add cache control ioctls
122
123 The old driver allowed direct cache manipulation, which was
124 used by various clients. Replicate that here.
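
The operation is 2D: each block entry describes block_count blocks
of block_size bytes, with consecutive blocks separated by
inter_block_stride bytes. The kernel-side walk (implemented in
clean_invalid_contig_2d() in the diff below) boils down to:

    for (i = 0; i < block_count; i++, addr += stride)
        op_fn(addr, addr + block_size);  /* dmac_{inv,clean,flush}_range */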
125
126 Signed-off-by: Dave Stevenson <dave.stevenson@raspberrypi.org>
127
128 staging: vcsm-cma: Alter dev node permissions to 0666
129
130 Until the udev rules are updated, open up access to this node by
131 default.
132
133 Signed-off-by: Dave Stevenson <dave.stevenson@raspberrypi.org>
134
135 staging: vcsm-cma: Drop logging level on messages in vc_sm_release_resource
136
137 They weren't errors but were logged as such.
138
139 Signed-off-by: Dave Stevenson <dave.stevenson@raspberrypi.org>
140
141 staging: vcsm-cma: Fixup the alloc code handling of kernel_id
142
143 The allocation code had been copied in from an old branch, prior
144 to the IDR being added for 64-bit support. It was therefore
145 pushing a pointer into the kernel_id field instead of an IDR
146 handle, so the lookup failed and we never released the buffer.
147
148 Signed-off-by: Dave Stevenson <dave.stevenson@raspberrypi.org>
149
150 staging: vcsm-cma: Remove cache manipulation ioctl from ARM64
151
152 The cache flushing ioctls are used by the Pi3 HEVC hw-assisted
153 decoder, as it needs finer-grained flushing control than dma_ops
154 allow.
155 These cache calls are not present for ARM64, so disable
156 them. We are not actively supporting 64-bit kernels at present,
157 and the use case of the HEVC decoder is fairly limited.
158
159 Signed-off-by: Dave Stevenson <dave.stevenson@raspberrypi.org>
160
161 staging: vcsm-cma: Rework to use dma APIs, not CMA
162
163 Due to a misunderstanding of the DMA mapping APIs, I made
164 the wrong decision on how to implement this.
165
166 Rework to use dma_alloc_coherent instead of the CMA
167 API. This also allows it to be built as a module easily.
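
A condensed sketch of the allocation pattern after this rework
(this is what vc_sm.c below does, with names shortened for
illustration):

    #include <linux/dma-mapping.h>

    static int sm_alloc_example(struct device *dev, size_t size,
                                void **cookie, dma_addr_t *dma_addr,
                                struct sg_table *sgt)
    {
        *cookie = dma_alloc_coherent(dev, size, dma_addr, GFP_KERNEL);
        if (!*cookie)
            return -ENOMEM;

        /* An sg_table is still needed for the dma-buf attach path. */
        if (dma_get_sgtable(dev, sgt, *cookie, *dma_addr, size)) {
            dma_free_coherent(dev, size, *cookie, *dma_addr);
            return -ENOMEM;
        }
        return 0;
    }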
168
169 Signed-off-by: Dave Stevenson <dave.stevenson@raspberrypi.org>
170
171 staging: vc-sm-cma: Fix the few remaining coding style issues
172
173 Fix a few minor checkpatch complaints to make the driver clean.
174
175 Signed-off-by: Dave Stevenson <dave.stevenson@raspberrypi.org>
176
177 staging: vc04_services: fix compiling in separate directory
178
179 The vc04_services Makefiles do not respect the O=path argument
180 correctly: include paths in CFLAGS are given relative to the
181 object path, not the source path. Compiling in a separate
182 directory yields #include errors.
183
184 Signed-off-by: Marek Behún <marek.behun@nic.cz>
185
186 vc-sm-cma: Fix compatibility ioctl
187
188 This code path hadn't been used previously.
189 Fixed up after testing with Kodi on a 32-bit userland and a 64-bit kernel.
190
191 Signed-off-by: popcornmix <popcornmix@gmail.com>
192 ---
193 drivers/staging/vc04_services/Kconfig | 1 +
194 drivers/staging/vc04_services/Makefile | 1 +
195 .../vc04_services/bcm2835-camera/Makefile | 4 +-
196 .../staging/vc04_services/vc-sm-cma/Kconfig | 10 +
197 .../staging/vc04_services/vc-sm-cma/Makefile | 8 +
198 drivers/staging/vc04_services/vc-sm-cma/TODO | 1 +
199 .../staging/vc04_services/vc-sm-cma/vc_sm.c | 1774 +++++++++++++++++
200 .../staging/vc04_services/vc-sm-cma/vc_sm.h | 84 +
201 .../vc04_services/vc-sm-cma/vc_sm_cma_vchi.c | 505 +++++
202 .../vc04_services/vc-sm-cma/vc_sm_cma_vchi.h | 63 +
203 .../vc04_services/vc-sm-cma/vc_sm_defs.h | 300 +++
204 .../vc04_services/vc-sm-cma/vc_sm_knl.h | 28 +
205 .../staging/vc04_services/vchiq-mmal/Makefile | 2 +-
206 include/linux/broadcom/vc_sm_cma_ioctl.h | 114 ++
207 14 files changed, 2892 insertions(+), 3 deletions(-)
208 create mode 100644 drivers/staging/vc04_services/vc-sm-cma/Kconfig
209 create mode 100644 drivers/staging/vc04_services/vc-sm-cma/Makefile
210 create mode 100644 drivers/staging/vc04_services/vc-sm-cma/TODO
211 create mode 100644 drivers/staging/vc04_services/vc-sm-cma/vc_sm.c
212 create mode 100644 drivers/staging/vc04_services/vc-sm-cma/vc_sm.h
213 create mode 100644 drivers/staging/vc04_services/vc-sm-cma/vc_sm_cma_vchi.c
214 create mode 100644 drivers/staging/vc04_services/vc-sm-cma/vc_sm_cma_vchi.h
215 create mode 100644 drivers/staging/vc04_services/vc-sm-cma/vc_sm_defs.h
216 create mode 100644 drivers/staging/vc04_services/vc-sm-cma/vc_sm_knl.h
217 create mode 100644 include/linux/broadcom/vc_sm_cma_ioctl.h
218
219 --- a/drivers/staging/vc04_services/Kconfig
220 +++ b/drivers/staging/vc04_services/Kconfig
221 @@ -23,6 +23,7 @@ source "drivers/staging/vc04_services/bc
222
223 source "drivers/staging/vc04_services/bcm2835-camera/Kconfig"
224 source "drivers/staging/vc04_services/vchiq-mmal/Kconfig"
225 +source "drivers/staging/vc04_services/vc-sm-cma/Kconfig"
226
227 endif
228
229 --- a/drivers/staging/vc04_services/Makefile
230 +++ b/drivers/staging/vc04_services/Makefile
231 @@ -13,6 +13,7 @@ vchiq-objs := \
232 obj-$(CONFIG_SND_BCM2835) += bcm2835-audio/
233 obj-$(CONFIG_VIDEO_BCM2835) += bcm2835-camera/
234 obj-$(CONFIG_BCM2835_VCHIQ_MMAL) += vchiq-mmal/
235 +obj-$(CONFIG_BCM_VC_SM_CMA) += vc-sm-cma/
236
237 ccflags-y += -Idrivers/staging/vc04_services -D__VCCOREVER__=0x04000000
238
239 --- a/drivers/staging/vc04_services/bcm2835-camera/Makefile
240 +++ b/drivers/staging/vc04_services/bcm2835-camera/Makefile
241 @@ -7,6 +7,6 @@ obj-$(CONFIG_VIDEO_BCM2835) += bcm2835-v
242
243 ccflags-y += \
244 -I $(srctree)/$(src)/.. \
245 - -Idrivers/staging/vc04_services \
246 - -Idrivers/staging/vc04_services/vchiq-mmal \
247 + -I$(srctree)/drivers/staging/vc04_services \
248 + -I$(srctree)/drivers/staging/vc04_services/vchiq-mmal \
249 -D__VCCOREVER__=0x04000000
250 --- /dev/null
251 +++ b/drivers/staging/vc04_services/vc-sm-cma/Kconfig
252 @@ -0,0 +1,10 @@
253 +config BCM_VC_SM_CMA
254 + tristate "VideoCore Shared Memory (CMA) driver"
255 + depends on BCM2835_VCHIQ
256 + select RBTREE
257 + select DMA_SHARED_BUFFER
258 + help
259 + Say Y here to enable the shared memory interface that
260 + supports sharing dmabufs with VideoCore.
261 + This operates over the VCHIQ interface to a service
262 + running on VideoCore.
263 --- /dev/null
264 +++ b/drivers/staging/vc04_services/vc-sm-cma/Makefile
265 @@ -0,0 +1,8 @@
266 +ccflags-y += -I$(srctree)/drivers/staging/vc04_services -I$(srctree)/drivers/staging/vc04_services/interface/vchi -I$(srctree)/drivers/staging/vc04_services/interface/vchiq_arm
267 +# -I"drivers/staging/android/ion/" -I"$(srctree)/fs/"
268 +ccflags-y += -D__VCCOREVER__=0
269 +
270 +vc-sm-cma-$(CONFIG_BCM_VC_SM_CMA) := \
271 + vc_sm.o vc_sm_cma_vchi.o
272 +
273 +obj-$(CONFIG_BCM_VC_SM_CMA) += vc-sm-cma.o
274 --- /dev/null
275 +++ b/drivers/staging/vc04_services/vc-sm-cma/TODO
276 @@ -0,0 +1 @@
277 +No currently outstanding tasks except some clean-up.
278 --- /dev/null
279 +++ b/drivers/staging/vc04_services/vc-sm-cma/vc_sm.c
280 @@ -0,0 +1,1774 @@
281 +// SPDX-License-Identifier: GPL-2.0
282 +/*
283 + * VideoCore Shared Memory driver using CMA.
284 + *
285 + * Copyright: 2018, Raspberry Pi (Trading) Ltd
286 + * Dave Stevenson <dave.stevenson@raspberrypi.org>
287 + *
288 + * Based on vmcs_sm driver from Broadcom Corporation for some API,
289 + * and taking some code for buffer allocation and dmabuf handling from
290 + * videobuf2.
291 + *
292 + *
293 + * This driver has 3 main uses:
294 + * 1) Allocating buffers for the kernel or userspace that can be shared with the
295 + * VPU.
296 + * 2) Importing dmabufs from elsewhere for sharing with the VPU.
297 + * 3) Allocating buffers for use by the VPU.
298 + *
299 + * In the first and second cases the native handle is a dmabuf. Releasing the
300 + * resource inherently comes from releasing the dmabuf, and this will trigger
301 + * unmapping on the VPU. The underlying allocation and our buffer structure are
302 + * retained until the VPU has confirmed that it has finished with it.
303 + *
304 + * For the VPU allocations the VPU is responsible for triggering the release,
305 + * and therefore the released message decrements the dma_buf refcount (with the
306 + * VPU mapping having already been marked as released).
307 + */
308 +
309 +/* ---- Include Files ----------------------------------------------------- */
310 +#include <linux/cdev.h>
311 +#include <linux/device.h>
312 +#include <linux/debugfs.h>
313 +#include <linux/dma-mapping.h>
314 +#include <linux/dma-buf.h>
315 +#include <linux/errno.h>
316 +#include <linux/fs.h>
317 +#include <linux/kernel.h>
318 +#include <linux/list.h>
319 +#include <linux/miscdevice.h>
320 +#include <linux/module.h>
321 +#include <linux/mm.h>
322 +#include <linux/of_device.h>
323 +#include <linux/platform_device.h>
324 +#include <linux/proc_fs.h>
325 +#include <linux/slab.h>
326 +#include <linux/seq_file.h>
327 +#include <linux/syscalls.h>
328 +#include <linux/types.h>
329 +#include <asm/cacheflush.h>
330 +
331 +#include "vchiq_connected.h"
332 +#include "vc_sm_cma_vchi.h"
333 +
334 +#include "vc_sm.h"
335 +#include "vc_sm_knl.h"
336 +#include <linux/broadcom/vc_sm_cma_ioctl.h>
337 +
338 +/* ---- Private Constants and Types --------------------------------------- */
339 +
340 +#define DEVICE_NAME "vcsm-cma"
341 +#define DEVICE_MINOR 0
342 +
343 +#define VC_SM_RESOURCE_NAME_DEFAULT "sm-host-resource"
344 +
345 +#define VC_SM_DIR_ROOT_NAME "vcsm-cma"
346 +#define VC_SM_STATE "state"
347 +
348 +/* Private file data associated with each opened device. */
349 +struct vc_sm_privdata_t {
350 + pid_t pid; /* PID of creator. */
351 +
352 + int restart_sys; /* Tracks restart on interrupt. */
353 + enum vc_sm_msg_type int_action; /* Interrupted action. */
354 + u32 int_trans_id; /* Interrupted transaction. */
355 +};
356 +
357 +typedef int (*VC_SM_SHOW) (struct seq_file *s, void *v);
358 +struct sm_pde_t {
359 + VC_SM_SHOW show; /* Debug fs function hookup. */
360 + struct dentry *dir_entry; /* Debug fs directory entry. */
361 + void *priv_data; /* Private data */
362 +};
363 +
364 +/* Global state information. */
365 +struct sm_state_t {
366 + struct platform_device *pdev;
367 +
368 + struct miscdevice misc_dev;
369 +
370 + struct sm_instance *sm_handle; /* Handle for videocore service. */
371 +
372 + spinlock_t kernelid_map_lock; /* Spinlock protecting kernelid_map */
373 + struct idr kernelid_map;
374 +
375 + struct mutex map_lock; /* Global map lock. */
376 +	struct list_head buffer_list;	/* List of buffers. */
377 +
378 + struct vc_sm_privdata_t *data_knl; /* Kernel internal data tracking. */
379 + struct vc_sm_privdata_t *vpu_allocs; /* All allocations from the VPU */
380 + struct dentry *dir_root; /* Debug fs entries root. */
381 + struct sm_pde_t dir_state; /* Debug fs entries state sub-tree. */
382 +
383 + bool require_released_callback; /* VPU will send a released msg when it
384 + * has finished with a resource.
385 + */
386 + u32 int_trans_id; /* Interrupted transaction. */
387 +};
388 +
389 +struct vc_sm_dma_buf_attachment {
390 + struct device *dev;
391 + struct sg_table sg_table;
392 + struct list_head list;
393 + enum dma_data_direction dma_dir;
394 +};
395 +
396 +/* ---- Private Variables ----------------------------------------------- */
397 +
398 +static struct sm_state_t *sm_state;
399 +static int sm_inited;
400 +
401 +/* ---- Private Function Prototypes -------------------------------------- */
402 +
403 +/* ---- Private Functions ------------------------------------------------ */
404 +
405 +static int get_kernel_id(struct vc_sm_buffer *buffer)
406 +{
407 + int handle;
408 +
409 + spin_lock(&sm_state->kernelid_map_lock);
410 + handle = idr_alloc(&sm_state->kernelid_map, buffer, 0, 0, GFP_KERNEL);
411 + spin_unlock(&sm_state->kernelid_map_lock);
412 +
413 + return handle;
414 +}
415 +
416 +static struct vc_sm_buffer *lookup_kernel_id(int handle)
417 +{
418 + return idr_find(&sm_state->kernelid_map, handle);
419 +}
420 +
421 +static void free_kernel_id(int handle)
422 +{
423 + spin_lock(&sm_state->kernelid_map_lock);
424 + idr_remove(&sm_state->kernelid_map, handle);
425 + spin_unlock(&sm_state->kernelid_map_lock);
426 +}
427 +
428 +static int vc_sm_cma_seq_file_show(struct seq_file *s, void *v)
429 +{
430 + struct sm_pde_t *sm_pde;
431 +
432 + sm_pde = (struct sm_pde_t *)(s->private);
433 +
434 + if (sm_pde && sm_pde->show)
435 + sm_pde->show(s, v);
436 +
437 + return 0;
438 +}
439 +
440 +static int vc_sm_cma_single_open(struct inode *inode, struct file *file)
441 +{
442 + return single_open(file, vc_sm_cma_seq_file_show, inode->i_private);
443 +}
444 +
445 +static const struct file_operations vc_sm_cma_debug_fs_fops = {
446 + .open = vc_sm_cma_single_open,
447 + .read = seq_read,
448 + .llseek = seq_lseek,
449 + .release = single_release,
450 +};
451 +
452 +static int vc_sm_cma_global_state_show(struct seq_file *s, void *v)
453 +{
454 + struct vc_sm_buffer *resource = NULL;
455 + int resource_count = 0;
456 +
457 + if (!sm_state)
458 + return 0;
459 +
460 + seq_printf(s, "\nVC-ServiceHandle %p\n", sm_state->sm_handle);
461 +
462 + /* Log all applicable mapping(s). */
463 +
464 + mutex_lock(&sm_state->map_lock);
465 + seq_puts(s, "\nResources\n");
466 + if (!list_empty(&sm_state->buffer_list)) {
467 + list_for_each_entry(resource, &sm_state->buffer_list,
468 + global_buffer_list) {
469 + resource_count++;
470 +
471 + seq_printf(s, "\nResource %p\n",
472 + resource);
473 + seq_printf(s, " NAME %s\n",
474 + resource->name);
475 + seq_printf(s, " SIZE %zu\n",
476 + resource->size);
477 + seq_printf(s, " DMABUF %p\n",
478 + resource->dma_buf);
479 + if (resource->imported) {
480 + seq_printf(s, " ATTACH %p\n",
481 + resource->import.attach);
482 + seq_printf(s, " SGT %p\n",
483 + resource->import.sgt);
484 + } else {
485 + seq_printf(s, " SGT %p\n",
486 + resource->alloc.sg_table);
487 + }
488 + seq_printf(s, " DMA_ADDR %pad\n",
489 + &resource->dma_addr);
490 + seq_printf(s, " VC_HANDLE %08x\n",
491 + resource->vc_handle);
492 + seq_printf(s, " VC_MAPPING %d\n",
493 + resource->vpu_state);
494 + }
495 + }
496 + seq_printf(s, "\n\nTotal resource count: %d\n\n", resource_count);
497 +
498 + mutex_unlock(&sm_state->map_lock);
499 +
500 + return 0;
501 +}
502 +
503 +/*
504 + * Adds a buffer to the private data list which tracks all the allocated
505 + * data.
506 + */
507 +static void vc_sm_add_resource(struct vc_sm_privdata_t *privdata,
508 + struct vc_sm_buffer *buffer)
509 +{
510 + mutex_lock(&sm_state->map_lock);
511 + list_add(&buffer->global_buffer_list, &sm_state->buffer_list);
512 + mutex_unlock(&sm_state->map_lock);
513 +
514 + pr_debug("[%s]: added buffer %p (name %s, size %zu)\n",
515 + __func__, buffer, buffer->name, buffer->size);
516 +}
517 +
518 +/*
519 + * Cleans up imported dmabuf.
520 + */
521 +static void vc_sm_clean_up_dmabuf(struct vc_sm_buffer *buffer)
522 +{
523 + if (!buffer->imported)
524 + return;
525 +
526 + /* Handle cleaning up imported dmabufs */
527 + mutex_lock(&buffer->lock);
528 + if (buffer->import.sgt) {
529 + dma_buf_unmap_attachment(buffer->import.attach,
530 + buffer->import.sgt,
531 + DMA_BIDIRECTIONAL);
532 + buffer->import.sgt = NULL;
533 + }
534 + if (buffer->import.attach) {
535 + dma_buf_detach(buffer->dma_buf, buffer->import.attach);
536 + buffer->import.attach = NULL;
537 + }
538 + mutex_unlock(&buffer->lock);
539 +}
540 +
541 +/*
542 + * Instructs VPU to decrement the refcount on a buffer.
543 + */
544 +static void vc_sm_vpu_free(struct vc_sm_buffer *buffer)
545 +{
546 + if (buffer->vc_handle && buffer->vpu_state == VPU_MAPPED) {
547 + struct vc_sm_free_t free = { buffer->vc_handle, 0 };
548 + int status = vc_sm_cma_vchi_free(sm_state->sm_handle, &free,
549 + &sm_state->int_trans_id);
550 + if (status != 0 && status != -EINTR) {
551 +			pr_err("[%s]: failed to free memory on videocore (status: %d, trans_id: %u)\n",
552 + __func__, status, sm_state->int_trans_id);
553 + }
554 +
555 + if (sm_state->require_released_callback) {
556 + /* Need to wait for the VPU to confirm the free. */
557 +
558 + /* Retain a reference on this until the VPU has
559 + * released it
560 + */
561 + buffer->vpu_state = VPU_UNMAPPING;
562 + } else {
563 + buffer->vpu_state = VPU_NOT_MAPPED;
564 + buffer->vc_handle = 0;
565 + }
566 + }
567 +}
568 +
569 +/*
570 + * Release an allocation.
571 + * All refcounting is done via the dma buf object.
572 + *
573 + * Must be called with the mutex held. The function will either release the
574 + * mutex (if deferring the release) or destroy it. The caller must therefore not
575 + * reuse the buffer on return.
576 + */
577 +static void vc_sm_release_resource(struct vc_sm_buffer *buffer)
578 +{
579 + pr_debug("[%s]: buffer %p (name %s, size %zu), imported %u\n",
580 + __func__, buffer, buffer->name, buffer->size,
581 + buffer->imported);
582 +
583 + if (buffer->vc_handle) {
584 + /* We've sent the unmap request but not had the response. */
585 + pr_debug("[%s]: Waiting for VPU unmap response on %p\n",
586 + __func__, buffer);
587 + goto defer;
588 + }
589 + if (buffer->in_use) {
590 + /* dmabuf still in use - we await the release */
591 + pr_debug("[%s]: buffer %p is still in use\n", __func__, buffer);
592 + goto defer;
593 + }
594 +
595 + /* Release the allocation (whether imported dmabuf or CMA allocation) */
596 + if (buffer->imported) {
597 + if (buffer->import.dma_buf)
598 + dma_buf_put(buffer->import.dma_buf);
599 + else
600 +			pr_err("%s: Imported dmabuf has already been put for buf %p\n",
601 + __func__, buffer);
602 + buffer->import.dma_buf = NULL;
603 + } else {
604 + dma_free_coherent(&sm_state->pdev->dev, buffer->size,
605 + buffer->cookie, buffer->dma_addr);
606 + }
607 +
608 + /* Free our buffer. Start by removing it from the list */
609 + mutex_lock(&sm_state->map_lock);
610 + list_del(&buffer->global_buffer_list);
611 + mutex_unlock(&sm_state->map_lock);
612 +
613 + pr_debug("%s: Release our allocation - done\n", __func__);
614 + mutex_unlock(&buffer->lock);
615 +
616 + mutex_destroy(&buffer->lock);
617 +
618 + kfree(buffer);
619 + return;
620 +
621 +defer:
622 + mutex_unlock(&buffer->lock);
623 +}
624 +
625 +/* Create support for private data tracking. */
626 +static struct vc_sm_privdata_t *vc_sm_cma_create_priv_data(pid_t id)
627 +{
629 + struct vc_sm_privdata_t *file_data = NULL;
630 +
631 + /* Allocate private structure. */
632 + file_data = kzalloc(sizeof(*file_data), GFP_KERNEL);
633 +
634 + if (!file_data)
635 + return NULL;
636 +
638 +
639 + file_data->pid = id;
640 +
641 + return file_data;
642 +}
643 +
644 +/* Dma buf operations for use with our own allocations */
645 +
646 +static int vc_sm_dma_buf_attach(struct dma_buf *dmabuf,
647 + struct dma_buf_attachment *attachment)
648 +
649 +{
650 + struct vc_sm_dma_buf_attachment *a;
651 + struct sg_table *sgt;
652 + struct vc_sm_buffer *buf = dmabuf->priv;
653 + struct scatterlist *rd, *wr;
654 + int ret, i;
655 +
656 + a = kzalloc(sizeof(*a), GFP_KERNEL);
657 + if (!a)
658 + return -ENOMEM;
659 +
660 + pr_debug("%s dmabuf %p attachment %p\n", __func__, dmabuf, attachment);
661 +
662 + mutex_lock(&buf->lock);
663 +
664 + INIT_LIST_HEAD(&a->list);
665 +
666 + sgt = &a->sg_table;
667 +
668 + /* Copy the buf->base_sgt scatter list to the attachment, as we can't
669 + * map the same scatter list to multiple attachments at the same time.
670 + */
671 + ret = sg_alloc_table(sgt, buf->alloc.sg_table->orig_nents, GFP_KERNEL);
672 +	if (ret) {
+		mutex_unlock(&buf->lock);
673 +		kfree(a);
674 +		return -ENOMEM;
675 +	}
676 +
677 + rd = buf->alloc.sg_table->sgl;
678 + wr = sgt->sgl;
679 + for (i = 0; i < sgt->orig_nents; ++i) {
680 + sg_set_page(wr, sg_page(rd), rd->length, rd->offset);
681 + rd = sg_next(rd);
682 + wr = sg_next(wr);
683 + }
684 +
685 + a->dma_dir = DMA_NONE;
686 + attachment->priv = a;
687 +
688 + list_add(&a->list, &buf->attachments);
689 + mutex_unlock(&buf->lock);
690 +
691 + return 0;
692 +}
693 +
694 +static void vc_sm_dma_buf_detach(struct dma_buf *dmabuf,
695 + struct dma_buf_attachment *attachment)
696 +{
697 + struct vc_sm_dma_buf_attachment *a = attachment->priv;
698 + struct vc_sm_buffer *buf = dmabuf->priv;
699 + struct sg_table *sgt;
700 +
701 + pr_debug("%s dmabuf %p attachment %p\n", __func__, dmabuf, attachment);
702 + if (!a)
703 + return;
704 +
705 + sgt = &a->sg_table;
706 +
707 + /* release the scatterlist cache */
708 + if (a->dma_dir != DMA_NONE)
709 + dma_unmap_sg(attachment->dev, sgt->sgl, sgt->orig_nents,
710 + a->dma_dir);
711 + sg_free_table(sgt);
712 +
713 + mutex_lock(&buf->lock);
714 + list_del(&a->list);
715 + mutex_unlock(&buf->lock);
716 +
717 + kfree(a);
718 +}
719 +
720 +static struct sg_table *vc_sm_map_dma_buf(struct dma_buf_attachment *attachment,
721 + enum dma_data_direction direction)
722 +{
723 + struct vc_sm_dma_buf_attachment *a = attachment->priv;
724 + /* stealing dmabuf mutex to serialize map/unmap operations */
725 + struct mutex *lock = &attachment->dmabuf->lock;
726 + struct sg_table *table;
727 +
728 + mutex_lock(lock);
729 + pr_debug("%s attachment %p\n", __func__, attachment);
730 + table = &a->sg_table;
731 +
732 + /* return previously mapped sg table */
733 + if (a->dma_dir == direction) {
734 + mutex_unlock(lock);
735 + return table;
736 + }
737 +
738 + /* release any previous cache */
739 + if (a->dma_dir != DMA_NONE) {
740 + dma_unmap_sg(attachment->dev, table->sgl, table->orig_nents,
741 + a->dma_dir);
742 + a->dma_dir = DMA_NONE;
743 + }
744 +
745 + /* mapping to the client with new direction */
746 + table->nents = dma_map_sg(attachment->dev, table->sgl,
747 + table->orig_nents, direction);
748 + if (!table->nents) {
749 + pr_err("failed to map scatterlist\n");
750 + mutex_unlock(lock);
751 + return ERR_PTR(-EIO);
752 + }
753 +
754 + a->dma_dir = direction;
755 + mutex_unlock(lock);
756 +
757 + pr_debug("%s attachment %p\n", __func__, attachment);
758 + return table;
759 +}
760 +
761 +static void vc_sm_unmap_dma_buf(struct dma_buf_attachment *attachment,
762 + struct sg_table *table,
763 + enum dma_data_direction direction)
764 +{
765 + pr_debug("%s attachment %p\n", __func__, attachment);
766 + dma_unmap_sg(attachment->dev, table->sgl, table->nents, direction);
767 +}
768 +
769 +static int vc_sm_dmabuf_mmap(struct dma_buf *dmabuf, struct vm_area_struct *vma)
770 +{
771 + struct vc_sm_buffer *buf = dmabuf->priv;
772 + int ret;
773 +
774 + pr_debug("%s dmabuf %p, buf %p, vm_start %08lX\n", __func__, dmabuf,
775 + buf, vma->vm_start);
776 +
777 + mutex_lock(&buf->lock);
778 +
779 + /* now map it to userspace */
780 + vma->vm_pgoff = 0;
781 +
782 + ret = dma_mmap_coherent(&sm_state->pdev->dev, vma, buf->cookie,
783 + buf->dma_addr, buf->size);
784 +
785 +	if (ret) {
786 +		pr_err("Remapping memory failed, error: %d\n", ret);
+		mutex_unlock(&buf->lock);
787 +		return ret;
788 +	}
789 +
790 + vma->vm_flags |= VM_DONTEXPAND | VM_DONTDUMP;
791 +
792 + mutex_unlock(&buf->lock);
793 +
798 + return ret;
799 +}
800 +
801 +static void vc_sm_dma_buf_release(struct dma_buf *dmabuf)
802 +{
803 + struct vc_sm_buffer *buffer;
804 +
805 + if (!dmabuf)
806 + return;
807 +
808 + buffer = (struct vc_sm_buffer *)dmabuf->priv;
809 +
810 + mutex_lock(&buffer->lock);
811 +
812 + pr_debug("%s dmabuf %p, buffer %p\n", __func__, dmabuf, buffer);
813 +
814 + buffer->in_use = 0;
815 +
816 + /* Unmap on the VPU */
817 + vc_sm_vpu_free(buffer);
818 + pr_debug("%s vpu_free done\n", __func__);
819 +
820 + /* Unmap our dma_buf object (the vc_sm_buffer remains until released
821 + * on the VPU).
822 + */
823 + vc_sm_clean_up_dmabuf(buffer);
824 + pr_debug("%s clean_up dmabuf done\n", __func__);
825 +
826 + /* buffer->lock will be destroyed by vc_sm_release_resource if finished
827 + * with, otherwise unlocked. Do NOT unlock here.
828 + */
829 + vc_sm_release_resource(buffer);
830 + pr_debug("%s done\n", __func__);
831 +}
832 +
833 +static int vc_sm_dma_buf_begin_cpu_access(struct dma_buf *dmabuf,
834 + enum dma_data_direction direction)
835 +{
836 + struct vc_sm_buffer *buf;
837 + struct vc_sm_dma_buf_attachment *a;
838 +
839 + if (!dmabuf)
840 + return -EFAULT;
841 +
842 + buf = dmabuf->priv;
843 + if (!buf)
844 + return -EFAULT;
845 +
846 + mutex_lock(&buf->lock);
847 +
848 + list_for_each_entry(a, &buf->attachments, list) {
849 + dma_sync_sg_for_cpu(a->dev, a->sg_table.sgl,
850 + a->sg_table.nents, direction);
851 + }
852 + mutex_unlock(&buf->lock);
853 +
854 + return 0;
855 +}
856 +
857 +static int vc_sm_dma_buf_end_cpu_access(struct dma_buf *dmabuf,
858 + enum dma_data_direction direction)
859 +{
860 + struct vc_sm_buffer *buf;
861 + struct vc_sm_dma_buf_attachment *a;
862 +
863 + if (!dmabuf)
864 + return -EFAULT;
865 + buf = dmabuf->priv;
866 + if (!buf)
867 + return -EFAULT;
868 +
869 + mutex_lock(&buf->lock);
870 +
871 + list_for_each_entry(a, &buf->attachments, list) {
872 + dma_sync_sg_for_device(a->dev, a->sg_table.sgl,
873 + a->sg_table.nents, direction);
874 + }
875 + mutex_unlock(&buf->lock);
876 +
877 + return 0;
878 +}
879 +
880 +static void *vc_sm_dma_buf_kmap(struct dma_buf *dmabuf, unsigned long offset)
881 +{
882 + /* FIXME */
883 + return NULL;
884 +}
885 +
886 +static void vc_sm_dma_buf_kunmap(struct dma_buf *dmabuf, unsigned long offset,
887 + void *ptr)
888 +{
889 + /* FIXME */
890 +}
891 +
892 +static const struct dma_buf_ops dma_buf_ops = {
893 + .map_dma_buf = vc_sm_map_dma_buf,
894 + .unmap_dma_buf = vc_sm_unmap_dma_buf,
895 + .mmap = vc_sm_dmabuf_mmap,
896 + .release = vc_sm_dma_buf_release,
897 + .attach = vc_sm_dma_buf_attach,
898 + .detach = vc_sm_dma_buf_detach,
899 + .begin_cpu_access = vc_sm_dma_buf_begin_cpu_access,
900 + .end_cpu_access = vc_sm_dma_buf_end_cpu_access,
901 + .map = vc_sm_dma_buf_kmap,
902 + .unmap = vc_sm_dma_buf_kunmap,
903 +};
904 +
905 +/* Dma_buf operations for chaining through to an imported dma_buf */
906 +
907 +static
908 +int vc_sm_import_dma_buf_attach(struct dma_buf *dmabuf,
909 + struct dma_buf_attachment *attachment)
910 +{
911 + struct vc_sm_buffer *buf = dmabuf->priv;
912 +
913 + if (!buf->imported)
914 + return -EINVAL;
915 + return buf->import.dma_buf->ops->attach(buf->import.dma_buf,
916 + attachment);
917 +}
918 +
919 +static
920 +void vc_sm_import_dma_buf_detach(struct dma_buf *dmabuf,
921 + struct dma_buf_attachment *attachment)
922 +{
923 + struct vc_sm_buffer *buf = dmabuf->priv;
924 +
925 + if (!buf->imported)
926 + return;
927 + buf->import.dma_buf->ops->detach(buf->import.dma_buf, attachment);
928 +}
929 +
930 +static
931 +struct sg_table *vc_sm_import_map_dma_buf(struct dma_buf_attachment *attachment,
932 + enum dma_data_direction direction)
933 +{
934 + struct vc_sm_buffer *buf = attachment->dmabuf->priv;
935 +
936 + if (!buf->imported)
937 + return NULL;
938 + return buf->import.dma_buf->ops->map_dma_buf(attachment,
939 + direction);
940 +}
941 +
942 +static
943 +void vc_sm_import_unmap_dma_buf(struct dma_buf_attachment *attachment,
944 + struct sg_table *table,
945 + enum dma_data_direction direction)
946 +{
947 + struct vc_sm_buffer *buf = attachment->dmabuf->priv;
948 +
949 + if (!buf->imported)
950 + return;
951 + buf->import.dma_buf->ops->unmap_dma_buf(attachment, table, direction);
952 +}
953 +
954 +static
955 +int vc_sm_import_dmabuf_mmap(struct dma_buf *dmabuf, struct vm_area_struct *vma)
956 +{
957 + struct vc_sm_buffer *buf = dmabuf->priv;
958 +
959 + pr_debug("%s: mmap dma_buf %p, buf %p, imported db %p\n", __func__,
960 + dmabuf, buf, buf->import.dma_buf);
961 + if (!buf->imported) {
962 + pr_err("%s: mmap dma_buf %p- not an imported buffer\n",
963 + __func__, dmabuf);
964 + return -EINVAL;
965 + }
966 + return buf->import.dma_buf->ops->mmap(buf->import.dma_buf, vma);
967 +}
968 +
969 +static
970 +void vc_sm_import_dma_buf_release(struct dma_buf *dmabuf)
971 +{
972 + struct vc_sm_buffer *buf = dmabuf->priv;
973 +
974 +	pr_debug("%s: Releasing dma_buf %p\n", __func__, dmabuf);
975 +	if (!buf->imported)
976 +		return;
977 +	mutex_lock(&buf->lock);
978 +
979 + buf->in_use = 0;
980 +
981 + vc_sm_vpu_free(buf);
982 +
983 + vc_sm_release_resource(buf);
984 +}
985 +
986 +static
987 +void *vc_sm_import_dma_buf_kmap(struct dma_buf *dmabuf,
988 + unsigned long offset)
989 +{
990 + struct vc_sm_buffer *buf = dmabuf->priv;
991 +
992 + if (!buf->imported)
993 + return NULL;
994 + return buf->import.dma_buf->ops->map(buf->import.dma_buf, offset);
995 +}
996 +
997 +static
998 +void vc_sm_import_dma_buf_kunmap(struct dma_buf *dmabuf,
999 + unsigned long offset, void *ptr)
1000 +{
1001 + struct vc_sm_buffer *buf = dmabuf->priv;
1002 +
1003 + if (!buf->imported)
1004 + return;
1005 + buf->import.dma_buf->ops->unmap(buf->import.dma_buf, offset, ptr);
1006 +}
1007 +
1008 +static
1009 +int vc_sm_import_dma_buf_begin_cpu_access(struct dma_buf *dmabuf,
1010 + enum dma_data_direction direction)
1011 +{
1012 + struct vc_sm_buffer *buf = dmabuf->priv;
1013 +
1014 + if (!buf->imported)
1015 + return -EINVAL;
1016 + return buf->import.dma_buf->ops->begin_cpu_access(buf->import.dma_buf,
1017 + direction);
1018 +}
1019 +
1020 +static
1021 +int vc_sm_import_dma_buf_end_cpu_access(struct dma_buf *dmabuf,
1022 + enum dma_data_direction direction)
1023 +{
1024 + struct vc_sm_buffer *buf = dmabuf->priv;
1025 +
1026 + if (!buf->imported)
1027 + return -EINVAL;
1028 + return buf->import.dma_buf->ops->end_cpu_access(buf->import.dma_buf,
1029 + direction);
1030 +}
1031 +
1032 +static const struct dma_buf_ops dma_buf_import_ops = {
1033 + .map_dma_buf = vc_sm_import_map_dma_buf,
1034 + .unmap_dma_buf = vc_sm_import_unmap_dma_buf,
1035 + .mmap = vc_sm_import_dmabuf_mmap,
1036 + .release = vc_sm_import_dma_buf_release,
1037 + .attach = vc_sm_import_dma_buf_attach,
1038 +	.detach = vc_sm_import_dma_buf_detach,
1039 + .begin_cpu_access = vc_sm_import_dma_buf_begin_cpu_access,
1040 + .end_cpu_access = vc_sm_import_dma_buf_end_cpu_access,
1041 + .map = vc_sm_import_dma_buf_kmap,
1042 + .unmap = vc_sm_import_dma_buf_kunmap,
1043 +};
1044 +
1045 +/* Import a dma_buf to be shared with VC. */
1046 +int
1047 +vc_sm_cma_import_dmabuf_internal(struct vc_sm_privdata_t *private,
1048 + struct dma_buf *dma_buf,
1049 + int fd,
1050 + struct dma_buf **imported_buf)
1051 +{
1052 + DEFINE_DMA_BUF_EXPORT_INFO(exp_info);
1053 + struct vc_sm_buffer *buffer = NULL;
1054 + struct vc_sm_import import = { };
1055 + struct vc_sm_import_result result = { };
1056 + struct dma_buf_attachment *attach = NULL;
1057 + struct sg_table *sgt = NULL;
1058 + dma_addr_t dma_addr;
1059 + int ret = 0;
1060 + int status;
1061 +
1062 + /* Setup our allocation parameters */
1063 + pr_debug("%s: importing dma_buf %p/fd %d\n", __func__, dma_buf, fd);
1064 +
1065 + if (fd < 0)
1066 + get_dma_buf(dma_buf);
1067 + else
1068 + dma_buf = dma_buf_get(fd);
1069 +
1070 + if (!dma_buf)
1071 + return -EINVAL;
1072 +
1073 + attach = dma_buf_attach(dma_buf, &sm_state->pdev->dev);
1074 + if (IS_ERR(attach)) {
1075 + ret = PTR_ERR(attach);
1076 + goto error;
1077 + }
1078 +
1079 + sgt = dma_buf_map_attachment(attach, DMA_BIDIRECTIONAL);
1080 + if (IS_ERR(sgt)) {
1081 + ret = PTR_ERR(sgt);
1082 + goto error;
1083 + }
1084 +
1085 + /* Verify that the address block is contiguous */
1086 + if (sgt->nents != 1) {
1087 + ret = -ENOMEM;
1088 + goto error;
1089 + }
1090 +
1091 + /* Allocate local buffer to track this allocation. */
1092 + buffer = kzalloc(sizeof(*buffer), GFP_KERNEL);
1093 + if (!buffer) {
1094 + ret = -ENOMEM;
1095 + goto error;
1096 + }
1097 +
1098 + import.type = VC_SM_ALLOC_NON_CACHED;
1099 + dma_addr = sg_dma_address(sgt->sgl);
1100 + import.addr = (u32)dma_addr;
1101 + if ((import.addr & 0xC0000000) != 0xC0000000) {
1102 + pr_err("%s: Expecting an uncached alias for dma_addr %pad\n",
1103 + __func__, &dma_addr);
1104 + import.addr |= 0xC0000000;
1105 + }
1106 + import.size = sg_dma_len(sgt->sgl);
1107 + import.allocator = current->tgid;
1108 + import.kernel_id = get_kernel_id(buffer);
1109 +
1110 + memcpy(import.name, VC_SM_RESOURCE_NAME_DEFAULT,
1111 + sizeof(VC_SM_RESOURCE_NAME_DEFAULT));
1112 +
1113 + pr_debug("[%s]: attempt to import \"%s\" data - type %u, addr %pad, size %u.\n",
1114 + __func__, import.name, import.type, &dma_addr, import.size);
1115 +
1116 + /* Allocate the videocore buffer. */
1117 + status = vc_sm_cma_vchi_import(sm_state->sm_handle, &import, &result,
1118 + &sm_state->int_trans_id);
1119 + if (status == -EINTR) {
1120 + pr_debug("[%s]: requesting import memory action restart (trans_id: %u)\n",
1121 + __func__, sm_state->int_trans_id);
1122 + ret = -ERESTARTSYS;
1123 + private->restart_sys = -EINTR;
1124 + private->int_action = VC_SM_MSG_TYPE_IMPORT;
1125 + goto error;
1126 + } else if (status || !result.res_handle) {
1127 +		pr_debug("[%s]: failed to import memory on videocore (status: %d, trans_id: %u)\n",
1128 + __func__, status, sm_state->int_trans_id);
1129 + ret = -ENOMEM;
1130 + goto error;
1131 + }
1132 +
1133 + mutex_init(&buffer->lock);
1134 + INIT_LIST_HEAD(&buffer->attachments);
1135 + memcpy(buffer->name, import.name,
1136 + min(sizeof(buffer->name), sizeof(import.name) - 1));
1137 +
1138 + /* Keep track of the buffer we created. */
1139 + buffer->private = private;
1140 + buffer->vc_handle = result.res_handle;
1141 + buffer->size = import.size;
1142 + buffer->vpu_state = VPU_MAPPED;
1143 +
1144 + buffer->imported = 1;
1145 + buffer->import.dma_buf = dma_buf;
1146 +
1147 + buffer->import.attach = attach;
1148 + buffer->import.sgt = sgt;
1149 + buffer->dma_addr = dma_addr;
1150 + buffer->in_use = 1;
1151 + buffer->kernel_id = import.kernel_id;
1152 +
1153 + /*
1154 + * We're done - we need to export a new dmabuf chaining through most
1155 + * functions, but enabling us to release our own internal references
1156 + * here.
1157 + */
1158 + exp_info.ops = &dma_buf_import_ops;
1159 + exp_info.size = import.size;
1160 + exp_info.flags = O_RDWR;
1161 + exp_info.priv = buffer;
1162 +
1163 + buffer->dma_buf = dma_buf_export(&exp_info);
1164 + if (IS_ERR(buffer->dma_buf)) {
1165 + ret = PTR_ERR(buffer->dma_buf);
1166 + goto error;
1167 + }
1168 +
1169 + vc_sm_add_resource(private, buffer);
1170 +
1171 + *imported_buf = buffer->dma_buf;
1172 +
1173 + return 0;
1174 +
1175 +error:
1176 + if (result.res_handle) {
1177 + struct vc_sm_free_t free = { result.res_handle, 0 };
1178 +
1179 + vc_sm_cma_vchi_free(sm_state->sm_handle, &free,
1180 + &sm_state->int_trans_id);
1181 + }
1182 + free_kernel_id(import.kernel_id);
1183 + kfree(buffer);
1184 + if (sgt)
1185 + dma_buf_unmap_attachment(attach, sgt, DMA_BIDIRECTIONAL);
1186 + if (attach)
1187 + dma_buf_detach(dma_buf, attach);
1188 + dma_buf_put(dma_buf);
1189 + return ret;
1190 +}
1191 +
1192 +static int vc_sm_cma_vpu_alloc(u32 size, u32 align, const char *name,
1193 + u32 mem_handle, struct vc_sm_buffer **ret_buffer)
1194 +{
1195 + DEFINE_DMA_BUF_EXPORT_INFO(exp_info);
1196 + struct vc_sm_buffer *buffer = NULL;
1197 + struct sg_table *sgt;
1198 + int aligned_size;
1199 + int ret = 0;
1200 +
1201 + /* Align to the user requested align */
1202 + aligned_size = ALIGN(size, align);
1203 + /* and then to a page boundary */
1204 + aligned_size = PAGE_ALIGN(aligned_size);
1205 +
1206 + if (!aligned_size)
1207 + return -EINVAL;
1208 +
1209 + /* Allocate local buffer to track this allocation. */
1210 + buffer = kzalloc(sizeof(*buffer), GFP_KERNEL);
1211 + if (!buffer)
1212 + return -ENOMEM;
1213 +
1214 + mutex_init(&buffer->lock);
1215 + /* Acquire the mutex as vc_sm_release_resource will release it in the
1216 + * error path.
1217 + */
1218 + mutex_lock(&buffer->lock);
1219 +
1220 + buffer->cookie = dma_alloc_coherent(&sm_state->pdev->dev,
1221 + aligned_size, &buffer->dma_addr,
1222 + GFP_KERNEL);
1223 + if (!buffer->cookie) {
1224 + pr_err("[%s]: dma_alloc_coherent alloc of %d bytes failed\n",
1225 + __func__, aligned_size);
1226 + ret = -ENOMEM;
1227 + goto error;
1228 + }
1229 +
1230 + pr_debug("[%s]: alloc of %d bytes success\n",
1231 + __func__, aligned_size);
1232 +
1233 + sgt = kmalloc(sizeof(*sgt), GFP_KERNEL);
1234 + if (!sgt) {
1235 + ret = -ENOMEM;
1236 + goto error;
1237 + }
1238 +
1239 + ret = dma_get_sgtable(&sm_state->pdev->dev, sgt, buffer->cookie,
1240 +			      buffer->dma_addr, aligned_size);
1241 + if (ret < 0) {
1242 + pr_err("failed to get scatterlist from DMA API\n");
1243 + kfree(sgt);
1244 + ret = -ENOMEM;
1245 + goto error;
1246 + }
1247 + buffer->alloc.sg_table = sgt;
1248 +
1249 + INIT_LIST_HEAD(&buffer->attachments);
1250 +
1251 + memcpy(buffer->name, name,
1252 + min(sizeof(buffer->name), strlen(name)));
1253 +
1254 + exp_info.ops = &dma_buf_ops;
1255 + exp_info.size = aligned_size;
1256 + exp_info.flags = O_RDWR;
1257 + exp_info.priv = buffer;
1258 +
1259 + buffer->dma_buf = dma_buf_export(&exp_info);
1260 + if (IS_ERR(buffer->dma_buf)) {
1261 + ret = PTR_ERR(buffer->dma_buf);
1262 + goto error;
1263 + }
1264 + buffer->dma_addr = (u32)sg_dma_address(buffer->alloc.sg_table->sgl);
1265 + if ((buffer->dma_addr & 0xC0000000) != 0xC0000000) {
1266 + pr_warn_once("%s: Expecting an uncached alias for dma_addr %pad\n",
1267 + __func__, &buffer->dma_addr);
1268 + buffer->dma_addr |= 0xC0000000;
1269 + }
1270 + buffer->private = sm_state->vpu_allocs;
1271 +
1272 + buffer->vc_handle = mem_handle;
1273 + buffer->vpu_state = VPU_MAPPED;
1274 + buffer->vpu_allocated = 1;
1275 +	buffer->size = aligned_size;
1276 + /*
1277 + * Create an ID that will be passed along with our message so
1278 + * that when we service the release reply, we can look up which
1279 + * resource is being released.
1280 + */
1281 + buffer->kernel_id = get_kernel_id(buffer);
1282 +
1283 + vc_sm_add_resource(sm_state->vpu_allocs, buffer);
1284 +
1285 + mutex_unlock(&buffer->lock);
1286 +
1287 + *ret_buffer = buffer;
1288 + return 0;
1289 +error:
1290 + if (buffer)
1291 + vc_sm_release_resource(buffer);
1292 + return ret;
1293 +}
1294 +
1295 +static void
1296 +vc_sm_vpu_event(struct sm_instance *instance, struct vc_sm_result_t *reply,
1297 + int reply_len)
1298 +{
1299 + switch (reply->trans_id & ~0x80000000) {
1300 + case VC_SM_MSG_TYPE_CLIENT_VERSION:
1301 + {
1302 + /* Acknowledge that the firmware supports the version command */
1303 + pr_debug("%s: firmware acked version msg. Require release cb\n",
1304 + __func__);
1305 + sm_state->require_released_callback = true;
1306 + }
1307 + break;
1308 + case VC_SM_MSG_TYPE_RELEASED:
1309 + {
1310 + struct vc_sm_released *release = (struct vc_sm_released *)reply;
1311 + struct vc_sm_buffer *buffer =
1312 + lookup_kernel_id(release->kernel_id);
1313 + if (!buffer) {
1314 + pr_err("%s: VC released a buffer that is already released, kernel_id %d\n",
1315 + __func__, release->kernel_id);
1316 + break;
1317 + }
1318 + mutex_lock(&buffer->lock);
1319 +
1320 + pr_debug("%s: Released addr %08x, size %u, id %08x, mem_handle %08x\n",
1321 + __func__, release->addr, release->size,
1322 + release->kernel_id, release->vc_handle);
1323 +
1324 + buffer->vc_handle = 0;
1325 + buffer->vpu_state = VPU_NOT_MAPPED;
1326 + free_kernel_id(release->kernel_id);
1327 +
1328 + if (buffer->vpu_allocated) {
1329 + /* VPU allocation, so release the dmabuf which will
1330 + * trigger the clean up.
1331 + */
1332 + mutex_unlock(&buffer->lock);
1333 + dma_buf_put(buffer->dma_buf);
1334 + } else {
1335 + vc_sm_release_resource(buffer);
1336 + }
1337 + }
1338 + break;
1339 + case VC_SM_MSG_TYPE_VC_MEM_REQUEST:
1340 + {
1341 + struct vc_sm_buffer *buffer = NULL;
1342 + struct vc_sm_vc_mem_request *req =
1343 + (struct vc_sm_vc_mem_request *)reply;
1344 + struct vc_sm_vc_mem_request_result reply;
1345 + int ret;
1346 +
1347 + pr_debug("%s: Request %u bytes of memory, align %d name %s, trans_id %08x\n",
1348 + __func__, req->size, req->align, req->name,
1349 + req->trans_id);
1350 + ret = vc_sm_cma_vpu_alloc(req->size, req->align, req->name,
1351 + req->vc_handle, &buffer);
1352 +
1353 + reply.trans_id = req->trans_id;
1354 + if (!ret) {
1355 + reply.addr = buffer->dma_addr;
1356 + reply.kernel_id = buffer->kernel_id;
1357 + pr_debug("%s: Allocated resource buffer %p, addr %pad\n",
1358 + __func__, buffer, &buffer->dma_addr);
1359 + } else {
1360 + pr_err("%s: Allocation failed size %u, name %s, vc_handle %u\n",
1361 + __func__, req->size, req->name, req->vc_handle);
1362 + reply.addr = 0;
1363 + reply.kernel_id = 0;
1364 + }
1365 + vc_sm_vchi_client_vc_mem_req_reply(sm_state->sm_handle, &reply,
1366 + &sm_state->int_trans_id);
1368 + }
1369 + break;
1370 + default:
1371 + pr_err("%s: Unknown vpu cmd %x\n", __func__, reply->trans_id);
1372 + break;
1373 + }
1374 +}
1375 +
1376 +/* Userspace handling */
1377 +/*
1378 + * Open the device. Creates a private state to help track all allocation
1379 + * associated with this device.
1380 + */
1381 +static int vc_sm_cma_open(struct inode *inode, struct file *file)
1382 +{
1383 + /* Make sure the device was started properly. */
1384 + if (!sm_state) {
1385 + pr_err("[%s]: invalid device\n", __func__);
1386 + return -EPERM;
1387 + }
1388 +
1389 + file->private_data = vc_sm_cma_create_priv_data(current->tgid);
1390 + if (!file->private_data) {
1391 + pr_err("[%s]: failed to create data tracker\n", __func__);
1392 +
1393 + return -ENOMEM;
1394 + }
1395 +
1396 + return 0;
1397 +}
1398 +
1399 +/*
1400 + * Close the vcsm-cma device.
1401 + * All allocations are file descriptors to the dmabuf objects, so we will get
1402 + * the clean up request on those as those are cleaned up.
1403 + */
1404 +static int vc_sm_cma_release(struct inode *inode, struct file *file)
1405 +{
1406 + struct vc_sm_privdata_t *file_data =
1407 + (struct vc_sm_privdata_t *)file->private_data;
1408 + int ret = 0;
1409 +
1410 + /* Make sure the device was started properly. */
1411 + if (!sm_state || !file_data) {
1412 + pr_err("[%s]: invalid device\n", __func__);
1413 + ret = -EPERM;
1414 + goto out;
1415 + }
1416 +
1417 + pr_debug("[%s]: using private data %p\n", __func__, file_data);
1418 +
1419 + /* Terminate the private data. */
1420 + kfree(file_data);
1421 +
1422 +out:
1423 + return ret;
1424 +}
1425 +
1426 +/*
1427 + * Allocate a shared memory handle and block.
1428 + * Allocation is from CMA, and then imported into the VPU mappings.
1429 + */
1430 +int vc_sm_cma_ioctl_alloc(struct vc_sm_privdata_t *private,
1431 + struct vc_sm_cma_ioctl_alloc *ioparam)
1432 +{
1433 + DEFINE_DMA_BUF_EXPORT_INFO(exp_info);
1434 + struct vc_sm_buffer *buffer = NULL;
1435 + struct vc_sm_import import = { 0 };
1436 + struct vc_sm_import_result result = { 0 };
1437 + struct dma_buf *dmabuf = NULL;
1438 + struct sg_table *sgt;
1439 + int aligned_size;
1440 + int ret = 0;
1441 + int status;
1442 + int fd = -1;
1443 +
1444 + aligned_size = PAGE_ALIGN(ioparam->size);
1445 +
1446 + if (!aligned_size)
1447 + return -EINVAL;
1448 +
1449 + /* Allocate local buffer to track this allocation. */
1450 + buffer = kzalloc(sizeof(*buffer), GFP_KERNEL);
1451 + if (!buffer) {
1452 + ret = -ENOMEM;
1453 + goto error;
1454 + }
1455 +
1456 + buffer->cookie = dma_alloc_coherent(&sm_state->pdev->dev,
1457 + aligned_size,
1458 + &buffer->dma_addr,
1459 + GFP_KERNEL);
1460 + if (!buffer->cookie) {
1461 + pr_err("[%s]: dma_alloc_coherent alloc of %d bytes failed\n",
1462 + __func__, aligned_size);
1463 + ret = -ENOMEM;
1464 + goto error;
1465 + }
1466 +
1467 + import.type = VC_SM_ALLOC_NON_CACHED;
1468 + import.allocator = current->tgid;
1469 +
1470 + if (*ioparam->name)
1471 + memcpy(import.name, ioparam->name, sizeof(import.name) - 1);
1472 + else
1473 + memcpy(import.name, VC_SM_RESOURCE_NAME_DEFAULT,
1474 + sizeof(VC_SM_RESOURCE_NAME_DEFAULT));
1475 +
1476 + mutex_init(&buffer->lock);
1477 + INIT_LIST_HEAD(&buffer->attachments);
1478 + memcpy(buffer->name, import.name,
1479 + min(sizeof(buffer->name), sizeof(import.name) - 1));
1480 +
1481 + exp_info.ops = &dma_buf_ops;
1482 + exp_info.size = aligned_size;
1483 + exp_info.flags = O_RDWR;
1484 + exp_info.priv = buffer;
1485 +
1486 + dmabuf = dma_buf_export(&exp_info);
1487 + if (IS_ERR(dmabuf)) {
1488 + ret = PTR_ERR(dmabuf);
1489 + goto error;
1490 + }
1491 + buffer->dma_buf = dmabuf;
1492 +
1493 + import.addr = buffer->dma_addr;
1494 + import.size = aligned_size;
1495 + import.kernel_id = get_kernel_id(buffer);
1496 +
1497 + /* Wrap it into a videocore buffer. */
1498 + status = vc_sm_cma_vchi_import(sm_state->sm_handle, &import, &result,
1499 + &sm_state->int_trans_id);
1500 + if (status == -EINTR) {
1501 + pr_debug("[%s]: requesting import memory action restart (trans_id: %u)\n",
1502 + __func__, sm_state->int_trans_id);
1503 + ret = -ERESTARTSYS;
1504 + private->restart_sys = -EINTR;
1505 + private->int_action = VC_SM_MSG_TYPE_IMPORT;
1506 + goto error;
1507 + } else if (status || !result.res_handle) {
1508 +		pr_err("[%s]: failed to import memory on videocore (status: %d, trans_id: %u)\n",
1509 + __func__, status, sm_state->int_trans_id);
1510 + ret = -ENOMEM;
1511 + goto error;
1512 + }
1513 +
1514 + /* Keep track of the buffer we created. */
1515 + buffer->private = private;
1516 + buffer->vc_handle = result.res_handle;
1517 + buffer->size = import.size;
1518 + buffer->vpu_state = VPU_MAPPED;
1519 + buffer->kernel_id = import.kernel_id;
1520 +
1521 + sgt = kmalloc(sizeof(*sgt), GFP_KERNEL);
1522 + if (!sgt) {
1523 + ret = -ENOMEM;
1524 + goto error;
1525 + }
1526 +
1527 + ret = dma_get_sgtable(&sm_state->pdev->dev, sgt, buffer->cookie,
1528 + buffer->dma_addr, buffer->size);
1529 + if (ret < 0) {
1530 + /* FIXME: error handling */
1531 + pr_err("failed to get scatterlist from DMA API\n");
1532 + kfree(sgt);
1533 + ret = -ENOMEM;
1534 + goto error;
1535 + }
1536 + buffer->alloc.sg_table = sgt;
1537 +
1538 + fd = dma_buf_fd(dmabuf, O_CLOEXEC);
1539 + if (fd < 0)
1540 + goto error;
1541 +
1542 + vc_sm_add_resource(private, buffer);
1543 +
1544 + pr_debug("[%s]: Added resource as fd %d, buffer %p, private %p, dma_addr %pad\n",
1545 + __func__, fd, buffer, private, &buffer->dma_addr);
1546 +
1547 + /* We're done */
1548 + ioparam->handle = fd;
1549 + ioparam->vc_handle = buffer->vc_handle;
1550 + ioparam->dma_addr = buffer->dma_addr;
1551 + return 0;
1552 +
1553 +error:
1554 + pr_err("[%s]: something failed - cleanup. ret %d\n", __func__, ret);
1555 +
1556 + if (dmabuf) {
1557 + /* dmabuf has been exported, therefore allow dmabuf cleanup to
1558 + * deal with this
1559 + */
1560 + dma_buf_put(dmabuf);
1561 + } else {
1562 + /* No dmabuf, therefore just free the buffer here */
1563 + if (buffer->cookie)
1564 + dma_free_coherent(&sm_state->pdev->dev, buffer->size,
1565 + buffer->cookie, buffer->dma_addr);
1566 + kfree(buffer);
1567 + }
1568 + return ret;
1569 +}
1570 +
1571 +#ifndef CONFIG_ARM64
1572 +/* Converts VC_SM_CACHE_OP_* to an operating function. */
1573 +static void (*cache_op_to_func(const unsigned int cache_op))
1574 +	(const void *, const void *)
1575 +{
1576 + switch (cache_op) {
1577 + case VC_SM_CACHE_OP_NOP:
1578 + return NULL;
1579 +
1580 + case VC_SM_CACHE_OP_INV:
1581 + return dmac_inv_range;
1582 +
1583 + case VC_SM_CACHE_OP_CLEAN:
1584 + return dmac_clean_range;
1585 +
1586 + case VC_SM_CACHE_OP_FLUSH:
1587 + return dmac_flush_range;
1588 +
1589 + default:
1590 + pr_err("[%s]: Invalid cache_op: 0x%08x\n", __func__, cache_op);
1591 + return NULL;
1592 + }
1593 +}
1594 +
1595 +/*
1596 + * Clean/invalid/flush cache of which buffer is already pinned (i.e. accessed).
1597 + * Clean/invalidate/flush the cache of a buffer that is already pinned (i.e. accessed).
1598 +static int clean_invalid_contig_2d(const void __user *addr,
1599 + const size_t block_count,
1600 + const size_t block_size,
1601 + const size_t stride,
1602 + const unsigned int cache_op)
1603 +{
1604 + size_t i;
1605 + void (*op_fn)(const void *start, const void *end);
1606 +
1607 + if (!block_size) {
1608 + pr_err("[%s]: size cannot be 0\n", __func__);
1609 + return -EINVAL;
1610 + }
1611 +
1612 + op_fn = cache_op_to_func(cache_op);
1613 + if (!op_fn)
1614 + return -EINVAL;
1615 +
1616 +	for (i = 0; i < block_count; i++, addr += stride)
1617 + op_fn(addr, addr + block_size);
1618 +
1619 + return 0;
1620 +}
1621 +
1622 +static int vc_sm_cma_clean_invalid2(unsigned int cmdnr, unsigned long arg)
1623 +{
1624 + struct vc_sm_cma_ioctl_clean_invalid2 ioparam;
1625 + struct vc_sm_cma_ioctl_clean_invalid_block *block = NULL;
1626 + int i, ret = 0;
1627 +
1628 + /* Get parameter data. */
1629 + if (copy_from_user(&ioparam, (void *)arg, sizeof(ioparam))) {
1630 + pr_err("[%s]: failed to copy-from-user header for cmd %x\n",
1631 + __func__, cmdnr);
1632 + return -EFAULT;
1633 + }
1634 +	block = kmalloc_array(ioparam.op_count, sizeof(*block), GFP_KERNEL);
1635 +	if (!block)
1636 +		return -ENOMEM;
1637 +
1638 + if (copy_from_user(block, (void *)(arg + sizeof(ioparam)),
1639 + ioparam.op_count * sizeof(*block)) != 0) {
1640 + pr_err("[%s]: failed to copy-from-user payload for cmd %x\n",
1641 + __func__, cmdnr);
1642 + ret = -EFAULT;
1643 + goto out;
1644 + }
1645 +
1646 + for (i = 0; i < ioparam.op_count; i++) {
1647 + const struct vc_sm_cma_ioctl_clean_invalid_block * const op =
1648 + block + i;
1649 +
1650 + if (op->invalidate_mode == VC_SM_CACHE_OP_NOP)
1651 + continue;
1652 +
1653 + ret = clean_invalid_contig_2d((void __user *)op->start_address,
1654 + op->block_count, op->block_size,
1655 + op->inter_block_stride,
1656 + op->invalidate_mode);
1657 + if (ret)
1658 + break;
1659 + }
1660 +out:
1661 + kfree(block);
1662 +
1663 + return ret;
1664 +}
1665 +#endif
1666 +
1667 +static long vc_sm_cma_ioctl(struct file *file, unsigned int cmd,
1668 + unsigned long arg)
1669 +{
1670 + int ret = 0;
1671 + unsigned int cmdnr = _IOC_NR(cmd);
1672 + struct vc_sm_privdata_t *file_data =
1673 + (struct vc_sm_privdata_t *)file->private_data;
1674 +
1675 + /* Validate we can work with this device. */
1676 + if (!sm_state || !file_data) {
1677 + pr_err("[%s]: invalid device\n", __func__);
1678 + return -EPERM;
1679 + }
1680 +
1681 + /* Action is a re-post of a previously interrupted action? */
1682 + if (file_data->restart_sys == -EINTR) {
1683 + struct vc_sm_action_clean_t action_clean;
1684 +
1685 + pr_debug("[%s]: clean up of action %u (trans_id: %u) following EINTR\n",
1686 + __func__, file_data->int_action,
1687 + file_data->int_trans_id);
1688 +
1689 + action_clean.res_action = file_data->int_action;
1690 + action_clean.action_trans_id = file_data->int_trans_id;
1691 +
1692 + file_data->restart_sys = 0;
1693 + }
1694 +
1695 + /* Now process the command. */
1696 + switch (cmdnr) {
1697 + /* New memory allocation.
1698 + */
1699 + case VC_SM_CMA_CMD_ALLOC:
1700 + {
1701 + struct vc_sm_cma_ioctl_alloc ioparam;
1702 +
1703 + /* Get the parameter data. */
1704 + if (copy_from_user
1705 + (&ioparam, (void *)arg, sizeof(ioparam)) != 0) {
1706 + pr_err("[%s]: failed to copy-from-user for cmd %x\n",
1707 + __func__, cmdnr);
1708 + ret = -EFAULT;
1709 + break;
1710 + }
1711 +
1712 + ret = vc_sm_cma_ioctl_alloc(file_data, &ioparam);
1713 + if (!ret &&
1714 + (copy_to_user((void *)arg, &ioparam,
1715 + sizeof(ioparam)) != 0)) {
1716 + /* FIXME: Release allocation */
1717 + pr_err("[%s]: failed to copy-to-user for cmd %x\n",
1718 + __func__, cmdnr);
1719 + ret = -EFAULT;
1720 + }
1721 + break;
1722 + }
1723 +
1724 + case VC_SM_CMA_CMD_IMPORT_DMABUF:
1725 + {
1726 + struct vc_sm_cma_ioctl_import_dmabuf ioparam;
1727 + struct dma_buf *new_dmabuf;
1728 +
1729 + /* Get the parameter data. */
1730 + if (copy_from_user
1731 + (&ioparam, (void *)arg, sizeof(ioparam)) != 0) {
1732 + pr_err("[%s]: failed to copy-from-user for cmd %x\n",
1733 + __func__, cmdnr);
1734 + ret = -EFAULT;
1735 + break;
1736 + }
1737 +
1738 + ret = vc_sm_cma_import_dmabuf_internal(file_data,
1739 + NULL,
1740 + ioparam.dmabuf_fd,
1741 + &new_dmabuf);
1742 +
1743 + if (!ret) {
1744 + struct vc_sm_buffer *buf = new_dmabuf->priv;
1745 +
1746 + ioparam.size = buf->size;
1747 + ioparam.handle = dma_buf_fd(new_dmabuf,
1748 + O_CLOEXEC);
1749 + ioparam.vc_handle = buf->vc_handle;
1750 + ioparam.dma_addr = buf->dma_addr;
1751 +
1752 + if (ioparam.handle < 0 ||
1753 + (copy_to_user((void *)arg, &ioparam,
1754 + sizeof(ioparam)) != 0)) {
1755 + dma_buf_put(new_dmabuf);
1756 + /* FIXME: Release allocation */
1757 + ret = -EFAULT;
1758 + }
1759 + }
1760 + break;
1761 + }
1762 +
1763 +#ifndef CONFIG_ARM64
1764 + /*
1765 + * Flush/Invalidate the cache for a given mapping.
1766 + * Blocks must be pinned (i.e. accessed) before this call.
1767 + */
1768 + case VC_SM_CMA_CMD_CLEAN_INVALID2:
1769 + ret = vc_sm_cma_clean_invalid2(cmdnr, arg);
1770 + break;
1771 +#endif
1772 +
1773 + default:
1774 + pr_debug("[%s]: cmd %x tgid %u, owner %u\n", __func__, cmdnr,
1775 + current->tgid, file_data->pid);
1776 +
1777 + ret = -EINVAL;
1778 + break;
1779 + }
1780 +
1781 + return ret;
1782 +}
1783 +
1784 +#ifdef CONFIG_COMPAT
1785 +struct vc_sm_cma_ioctl_clean_invalid2_32 {
1786 + u32 op_count;
1787 + struct vc_sm_cma_ioctl_clean_invalid_block_32 {
1788 + u16 invalidate_mode;
1789 + u16 block_count;
1790 + compat_uptr_t start_address;
1791 + u32 block_size;
1792 + u32 inter_block_stride;
1793 + } s[0];
1794 +};
1795 +
1796 +#define VC_SM_CMA_CMD_CLEAN_INVALID2_32\
1797 + _IOR(VC_SM_CMA_MAGIC_TYPE, VC_SM_CMA_CMD_CLEAN_INVALID2,\
1798 + struct vc_sm_cma_ioctl_clean_invalid2_32)
1799 +
1800 +static long vc_sm_cma_compat_ioctl(struct file *file, unsigned int cmd,
1801 + unsigned long arg)
1802 +{
1803 + switch (cmd) {
1804 + case VC_SM_CMA_CMD_CLEAN_INVALID2_32:
1805 + /* FIXME */
1806 + return -EINVAL;
1807 +
1808 + default:
1809 + return vc_sm_cma_ioctl(file, cmd, arg);
1810 + }
1811 +}
1812 +#endif
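
The compat handler above leaves the 32-bit CLEAN_INVALID2 path as a FIXME.
For reference only, a minimal conversion sketch (not part of this patch; it
assumes the clean_invalid_contig_2d() helper used by the native handler
earlier in this file, and compat_ptr() from <linux/compat.h>):

static long vc_sm_cma_clean_invalid2_32(unsigned long arg)
{
	struct vc_sm_cma_ioctl_clean_invalid2_32 header;
	struct vc_sm_cma_ioctl_clean_invalid_block_32 blk;
	void __user *user = (void __user *)arg;
	long ret = 0;
	u32 i;

	/* Read the fixed header, then each 32-bit block descriptor. */
	if (copy_from_user(&header, user, sizeof(header)))
		return -EFAULT;

	user += sizeof(header);
	for (i = 0; i < header.op_count; i++, user += sizeof(blk)) {
		if (copy_from_user(&blk, user, sizeof(blk)))
			return -EFAULT;

		/* compat_ptr() widens the 32-bit user pointer. */
		ret = clean_invalid_contig_2d(compat_ptr(blk.start_address),
					      blk.block_count, blk.block_size,
					      blk.inter_block_stride,
					      blk.invalidate_mode);
		if (ret)
			break;
	}
	return ret;
}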
1813 +
1814 +/* Device operations that we manage in this driver. */
1815 +static const struct file_operations vc_sm_ops = {
1816 + .owner = THIS_MODULE,
1817 + .unlocked_ioctl = vc_sm_cma_ioctl,
1818 +#ifdef CONFIG_COMPAT
1819 + .compat_ioctl = vc_sm_cma_compat_ioctl,
1820 +#endif
1821 + .open = vc_sm_cma_open,
1822 + .release = vc_sm_cma_release,
1823 +};
1824 +
1825 +/* Driver load/unload functions */
1826 +/* Videocore connected. */
1827 +static void vc_sm_connected_init(void)
1828 +{
1829 + int ret;
1830 + VCHI_INSTANCE_T vchi_instance;
1831 + struct vc_sm_version version;
1832 + struct vc_sm_result_t version_result;
1833 +
1834 + pr_info("[%s]: start\n", __func__);
1835 +
1836 + /*
1837 + * Initialize and create a VCHI connection for the shared memory service
1838 + * running on videocore.
1839 + */
1840 + ret = vchi_initialise(&vchi_instance);
1841 + if (ret) {
1842 + pr_err("[%s]: failed to initialise VCHI instance (ret=%d)\n",
1843 + __func__, ret);
1844 +
1845 + return;
1846 + }
1847 +
1848 + ret = vchi_connect(vchi_instance);
1849 + if (ret) {
1850 + pr_err("[%s]: failed to connect VCHI instance (ret=%d)\n",
1851 + __func__, ret);
1852 +
1853 + return;
1854 + }
1855 +
1856 + /* Initialize an instance of the shared memory service. */
1857 + sm_state->sm_handle = vc_sm_cma_vchi_init(vchi_instance, 1,
1858 + vc_sm_vpu_event);
1859 + if (!sm_state->sm_handle) {
1860 + pr_err("[%s]: failed to initialize shared memory service\n",
1861 + __func__);
1862 +
1863 + return;
1864 + }
1865 +
1866 + /* Create a debug fs directory entry (root). */
1867 + sm_state->dir_root = debugfs_create_dir(VC_SM_DIR_ROOT_NAME, NULL);
1868 +
1869 + sm_state->dir_state.show = &vc_sm_cma_global_state_show;
1870 + sm_state->dir_state.dir_entry =
1871 + debugfs_create_file(VC_SM_STATE, 0444, sm_state->dir_root,
1872 + &sm_state->dir_state,
1873 + &vc_sm_cma_debug_fs_fops);
1874 +
1875 + INIT_LIST_HEAD(&sm_state->buffer_list);
1876 +
1877 + /* Create a shared memory device. */
1878 + sm_state->misc_dev.minor = MISC_DYNAMIC_MINOR;
1879 + sm_state->misc_dev.name = DEVICE_NAME;
1880 + sm_state->misc_dev.fops = &vc_sm_ops;
1881 + sm_state->misc_dev.parent = NULL;
1882 + /* Temporarily set as 666 until udev rules have been sorted */
1883 + sm_state->misc_dev.mode = 0666;
1884 + ret = misc_register(&sm_state->misc_dev);
1885 + if (ret) {
1886 + pr_err("vcsm-cma: failed to register misc device.\n");
1887 + goto err_remove_debugfs;
1888 + }
1889 +
1890 + sm_state->data_knl = vc_sm_cma_create_priv_data(0);
1891 + if (!sm_state->data_knl) {
1892 + pr_err("[%s]: failed to create kernel private data tracker\n",
1893 + __func__);
1894 + goto err_remove_misc_dev;
1895 + }
1896 +
1897 + version.version = 2;
1898 + ret = vc_sm_cma_vchi_client_version(sm_state->sm_handle, &version,
1899 + &version_result,
1900 + &sm_state->int_trans_id);
1901 + if (ret) {
1902 + pr_err("[%s]: Failed to send version request %d\n", __func__,
1903 + ret);
1904 + }
1905 +
1906 + /* Done! */
1907 + sm_inited = 1;
1908 + pr_info("[%s]: installed successfully\n", __func__);
1909 + return;
1910 +
1911 +err_remove_misc_dev:
1912 + misc_deregister(&sm_state->misc_dev);
1913 +err_remove_debugfs:
1914 + debugfs_remove_recursive(sm_state->dir_root);
1915 + vc_sm_cma_vchi_stop(&sm_state->sm_handle);
1916 +}
1917 +
1918 +/* Driver loading. */
1919 +static int bcm2835_vc_sm_cma_probe(struct platform_device *pdev)
1920 +{
1921 + pr_info("%s: Videocore shared memory driver\n", __func__);
1922 +
1923 + sm_state = devm_kzalloc(&pdev->dev, sizeof(*sm_state), GFP_KERNEL);
1924 + if (!sm_state)
1925 + return -ENOMEM;
1926 + sm_state->pdev = pdev;
1927 + mutex_init(&sm_state->map_lock);
1928 +
1929 + spin_lock_init(&sm_state->kernelid_map_lock);
1930 + idr_init_base(&sm_state->kernelid_map, 1);
1931 +
1932 + pdev->dev.dma_parms = devm_kzalloc(&pdev->dev,
1933 + sizeof(*pdev->dev.dma_parms),
1934 + GFP_KERNEL);
1935 + /* dma_set_max_seg_size checks if dma_parms is NULL. */
1936 + dma_set_max_seg_size(&pdev->dev, 0x3FFFFFFF);
1937 +
1938 + vchiq_add_connected_callback(vc_sm_connected_init);
1939 + return 0;
1940 +}
1941 +
1942 +/* Driver unloading. */
1943 +static int bcm2835_vc_sm_cma_remove(struct platform_device *pdev)
1944 +{
1945 + pr_debug("[%s]: start\n", __func__);
1946 + if (sm_inited) {
1947 + misc_deregister(&sm_state->misc_dev);
1948 +
1949 + /* Remove all proc entries. */
1950 + debugfs_remove_recursive(sm_state->dir_root);
1951 +
1952 + /* Stop the videocore shared memory service. */
1953 + vc_sm_cma_vchi_stop(&sm_state->sm_handle);
1954 + }
1955 +
1956 + if (sm_state) {
1957 + idr_destroy(&sm_state->kernelid_map);
1958 +
1959 + /* Free the memory for the state structure. */
1960 + mutex_destroy(&sm_state->map_lock);
1961 + }
1962 +
1963 + pr_debug("[%s]: end\n", __func__);
1964 + return 0;
1965 +}
1966 +
1967 +/* Kernel API calls */
1968 +/* Get an internal resource handle mapped from the external one. */
1969 +int vc_sm_cma_int_handle(void *handle)
1970 +{
1971 + struct dma_buf *dma_buf = (struct dma_buf *)handle;
1972 + struct vc_sm_buffer *buf;
1973 +
1974 + /* Validate we can work with this device. */
1975 + if (!sm_state || !handle) {
1976 + pr_err("[%s]: invalid input\n", __func__);
1977 + return 0;
1978 + }
1979 +
1980 + buf = (struct vc_sm_buffer *)dma_buf->priv;
1981 + return buf->vc_handle;
1982 +}
1983 +EXPORT_SYMBOL_GPL(vc_sm_cma_int_handle);
1984 +
1985 +/* Free a previously allocated shared memory handle and block. */
1986 +int vc_sm_cma_free(void *handle)
1987 +{
1988 + struct dma_buf *dma_buf = (struct dma_buf *)handle;
1989 +
1990 + /* Validate we can work with this device. */
1991 + if (!sm_state || !handle) {
1992 + pr_err("[%s]: invalid input\n", __func__);
1993 + return -EPERM;
1994 + }
1995 +
1996 + pr_debug("%s: handle %p/dmabuf %p\n", __func__, handle, dma_buf);
1997 +
1998 + dma_buf_put(dma_buf);
1999 +
2000 + return 0;
2001 +}
2002 +EXPORT_SYMBOL_GPL(vc_sm_cma_free);
2003 +
2004 +/* Import a dmabuf to be shared with VC. */
2005 +int vc_sm_cma_import_dmabuf(struct dma_buf *src_dmabuf, void **handle)
2006 +{
2007 + struct dma_buf *new_dma_buf;
2008 + struct vc_sm_buffer *buf;
2009 + int ret;
2010 +
2011 + /* Validate we can work with this device. */
2012 + if (!sm_state || !src_dmabuf || !handle) {
2013 + pr_err("[%s]: invalid input\n", __func__);
2014 + return -EPERM;
2015 + }
2016 +
2017 + ret = vc_sm_cma_import_dmabuf_internal(sm_state->data_knl, src_dmabuf,
2018 + -1, &new_dma_buf);
2019 +
2020 + if (!ret) {
2021 + pr_debug("%s: imported to ptr %p\n", __func__, new_dma_buf);
2022 + buf = (struct vc_sm_buffer *)new_dma_buf->priv;
2023 +
2024 + /* Assign valid handle at this time.*/
2025 + *handle = new_dma_buf;
2026 + } else {
2027 + /*
2028 + * The import itself failed, so there is
2029 + * no new dma_buf to look up or release
2030 + * here.
2031 + */
2032 + pr_err("%s: vc_sm_cma_import_dmabuf_internal failed %d\n",
2033 + __func__, ret);
2034 + }
2035 +
2036 + return ret;
2037 +}
2038 +EXPORT_SYMBOL_GPL(vc_sm_cma_import_dmabuf);
2039 +
2040 +static struct platform_driver bcm2835_vcsm_cma_driver = {
2041 + .probe = bcm2835_vc_sm_cma_probe,
2042 + .remove = bcm2835_vc_sm_cma_remove,
2043 + .driver = {
2044 + .name = DEVICE_NAME,
2045 + .owner = THIS_MODULE,
2046 + },
2047 +};
2048 +
2049 +module_platform_driver(bcm2835_vcsm_cma_driver);
2050 +
2051 +MODULE_AUTHOR("Dave Stevenson");
2052 +MODULE_DESCRIPTION("VideoCore CMA Shared Memory Driver");
2053 +MODULE_LICENSE("GPL v2");
2054 +MODULE_ALIAS("platform:vcsm-cma");
2055 --- /dev/null
2056 +++ b/drivers/staging/vc04_services/vc-sm-cma/vc_sm.h
2057 @@ -0,0 +1,84 @@
2058 +/* SPDX-License-Identifier: GPL-2.0 */
2059 +
2060 +/*
2061 + * VideoCore Shared Memory driver using CMA.
2062 + *
2063 + * Copyright: 2018, Raspberry Pi (Trading) Ltd
2064 + *
2065 + */
2066 +
2067 +#ifndef VC_SM_H
2068 +#define VC_SM_H
2069 +
2070 +#include <linux/device.h>
2071 +#include <linux/dma-direction.h>
2072 +#include <linux/kref.h>
2073 +#include <linux/mm_types.h>
2074 +#include <linux/mutex.h>
2075 +#include <linux/rbtree.h>
2076 +#include <linux/sched.h>
2077 +#include <linux/shrinker.h>
2078 +#include <linux/types.h>
2079 +#include <linux/miscdevice.h>
2080 +
2081 +#define VC_SM_MAX_NAME_LEN 32
2082 +
2083 +enum vc_sm_vpu_mapping_state {
2084 + VPU_NOT_MAPPED,
2085 + VPU_MAPPED,
2086 + VPU_UNMAPPING
2087 +};
2088 +
2089 +struct vc_sm_alloc_data {
2090 + unsigned long num_pages;
2091 + void *priv_virt;
2092 + struct sg_table *sg_table;
2093 +};
2094 +
2095 +struct vc_sm_imported {
2096 + struct dma_buf *dma_buf;
2097 + struct dma_buf_attachment *attach;
2098 + struct sg_table *sgt;
2099 +};
2100 +
2101 +struct vc_sm_buffer {
2102 + struct list_head global_buffer_list; /* Global list of buffers. */
2103 +
2104 + /* Index in the kernel_id idr so that we can find this
2105 + * vc_sm_buffer again when servicing the VCHI reply.
2106 + */
2107 + int kernel_id;
2108 +
2109 + size_t size;
2110 +
2111 + /* Lock over all the following state for this buffer */
2112 + struct mutex lock;
2113 + struct list_head attachments;
2114 +
2115 + char name[VC_SM_MAX_NAME_LEN];
2116 +
2117 + int in_use:1; /* Kernel is still using this resource */
2118 + int imported:1; /* Imported dmabuf */
2119 +
2120 + enum vc_sm_vpu_mapping_state vpu_state;
2121 + u32 vc_handle; /* VideoCore handle for this buffer */
2122 + int vpu_allocated; /*
2123 + * The VPU made this allocation. Release the
2124 + * local dma_buf when the VPU releases the
2125 + * resource.
2126 + */
2127 +
2128 + /* DMABUF related fields */
2129 + struct dma_buf *dma_buf;
2130 + dma_addr_t dma_addr;
2131 + void *cookie;
2132 +
2133 + struct vc_sm_privdata_t *private;
2134 +
2135 + union {
2136 + struct vc_sm_alloc_data alloc;
2137 + struct vc_sm_imported import;
2138 + };
2139 +};
2140 +
2141 +#endif
2142 --- /dev/null
2143 +++ b/drivers/staging/vc04_services/vc-sm-cma/vc_sm_cma_vchi.c
2144 @@ -0,0 +1,505 @@
2145 +// SPDX-License-Identifier: GPL-2.0
2146 +/*
2147 + * VideoCore Shared Memory CMA allocator
2148 + *
2149 + * Copyright: 2018, Raspberry Pi (Trading) Ltd
2150 + * Copyright 2011-2012 Broadcom Corporation. All rights reserved.
2151 + *
2152 + * Based on vmcs_sm driver from Broadcom Corporation.
2153 + *
2154 + */
2155 +
2156 +/* ---- Include Files ----------------------------------------------------- */
2157 +#include <linux/completion.h>
2158 +#include <linux/kernel.h>
2159 +#include <linux/kthread.h>
2160 +#include <linux/list.h>
2161 +#include <linux/mutex.h>
2162 +#include <linux/semaphore.h>
2163 +#include <linux/slab.h>
2164 +#include <linux/types.h>
2165 +
2166 +#include "vc_sm_cma_vchi.h"
2167 +
2168 +#define VC_SM_VER 1
2169 +#define VC_SM_MIN_VER 0
2170 +
2171 +/* ---- Private Constants and Types -------------------------------------- */
2172 +
2173 +/* Command blocks come from a pool */
2174 +#define SM_MAX_NUM_CMD_RSP_BLKS 32
2175 +
2176 +struct sm_cmd_rsp_blk {
2177 + struct list_head head; /* To create lists */
2178 + /* To be signaled when the response is there */
2179 + struct completion cmplt;
2180 +
2181 + u16 id;
2182 + u16 length;
2183 +
2184 + u8 msg[VC_SM_MAX_MSG_LEN];
2185 +
2186 + uint32_t wait:1;
2187 + uint32_t sent:1;
2188 + uint32_t alloc:1;
2189 +
2190 +};
2191 +
2192 +struct sm_instance {
2193 + u32 num_connections;
2194 + VCHI_SERVICE_HANDLE_T vchi_handle[VCHI_MAX_NUM_CONNECTIONS];
2195 + struct task_struct *io_thread;
2196 + struct completion io_cmplt;
2197 +
2198 + vpu_event_cb vpu_event;
2199 +
2200 + /* Mutex over the following lists */
2201 + struct mutex lock;
2202 + u32 trans_id;
2203 + struct list_head cmd_list;
2204 + struct list_head rsp_list;
2205 + struct list_head dead_list;
2206 +
2207 + struct sm_cmd_rsp_blk free_blk[SM_MAX_NUM_CMD_RSP_BLKS];
2208 +
2209 + /* Mutex over the free_list */
2210 + struct mutex free_lock;
2211 + struct list_head free_list;
2212 +
2213 + struct semaphore free_sema;
2214 +
2215 +};
2216 +
2217 +/* ---- Private Variables ------------------------------------------------ */
2218 +
2219 +/* ---- Private Function Prototypes -------------------------------------- */
2220 +
2221 +/* ---- Private Functions ------------------------------------------------ */
2222 +static int
2223 +bcm2835_vchi_msg_queue(VCHI_SERVICE_HANDLE_T handle,
2224 + void *data,
2225 + unsigned int size)
2226 +{
2227 + return vchi_queue_kernel_message(handle,
2228 + data,
2229 + size);
2230 +}
2231 +
2232 +static struct
2233 +sm_cmd_rsp_blk *vc_vchi_cmd_create(struct sm_instance *instance,
2234 + enum vc_sm_msg_type id, void *msg,
2235 + u32 size, int wait)
2236 +{
2237 + struct sm_cmd_rsp_blk *blk;
2238 + struct vc_sm_msg_hdr_t *hdr;
2239 +
2240 + if (down_interruptible(&instance->free_sema)) {
2241 + blk = kmalloc(sizeof(*blk), GFP_KERNEL);
2242 + if (!blk)
2243 + return NULL;
2244 +
2245 + blk->alloc = 1;
2246 + init_completion(&blk->cmplt);
2247 + } else {
2248 + mutex_lock(&instance->free_lock);
2249 + blk =
2250 + list_first_entry(&instance->free_list,
2251 + struct sm_cmd_rsp_blk, head);
2252 + list_del(&blk->head);
2253 + mutex_unlock(&instance->free_lock);
2254 + }
2255 +
2256 + blk->sent = 0;
2257 + blk->wait = wait;
2258 + blk->length = sizeof(*hdr) + size;
2259 +
2260 + hdr = (struct vc_sm_msg_hdr_t *)blk->msg;
2261 + hdr->type = id;
2262 + mutex_lock(&instance->lock);
2263 + instance->trans_id++;
2264 + /*
2265 + * Never use the top bit: it identifies asynchronous events and VPU cmds.
2266 + */
2267 + instance->trans_id &= ~0x80000000;
2268 + hdr->trans_id = instance->trans_id;
2269 + blk->id = instance->trans_id;
2270 + mutex_unlock(&instance->lock);
2271 +
2272 + if (size)
2273 + memcpy(hdr->body, msg, size);
2274 +
2275 + return blk;
2276 +}
2277 +
2278 +static void
2279 +vc_vchi_cmd_delete(struct sm_instance *instance, struct sm_cmd_rsp_blk *blk)
2280 +{
2281 + if (blk->alloc) {
2282 + kfree(blk);
2283 + return;
2284 + }
2285 +
2286 + mutex_lock(&instance->free_lock);
2287 + list_add(&blk->head, &instance->free_list);
2288 + mutex_unlock(&instance->free_lock);
2289 + up(&instance->free_sema);
2290 +}
2291 +
2292 +static void vc_sm_cma_vchi_rx_ack(struct sm_instance *instance,
2293 + struct sm_cmd_rsp_blk *cmd,
2294 + struct vc_sm_result_t *reply,
2295 + u32 reply_len)
2296 +{
2297 + mutex_lock(&instance->lock);
2298 + list_for_each_entry(cmd,
2299 + &instance->rsp_list,
2300 + head) {
2301 + if (cmd->id == reply->trans_id)
2302 + break;
2303 + }
2304 + mutex_unlock(&instance->lock);
2305 +
2306 + if (&cmd->head == &instance->rsp_list) {
2308 + pr_err("%s: received response %u, throw away...",
2309 + __func__,
2310 + reply->trans_id);
2311 + } else if (reply_len > sizeof(cmd->msg)) {
2312 + pr_err("%s: reply too big (%u) %u, throw away...",
2313 + __func__, reply_len,
2314 + reply->trans_id);
2315 + } else {
2316 + memcpy(cmd->msg, reply,
2317 + reply_len);
2318 + complete(&cmd->cmplt);
2319 + }
2320 +}
2321 +
2322 +static int vc_sm_cma_vchi_videocore_io(void *arg)
2323 +{
2324 + struct sm_instance *instance = arg;
2325 + struct sm_cmd_rsp_blk *cmd = NULL, *cmd_tmp;
2326 + struct vc_sm_result_t *reply;
2327 + u32 reply_len;
2328 + s32 status;
2329 + int svc_use = 1;
2330 +
2331 + while (1) {
2332 + if (svc_use)
2333 + vchi_service_release(instance->vchi_handle[0]);
2334 + svc_use = 0;
2335 +
2336 + if (wait_for_completion_interruptible(&instance->io_cmplt))
2337 + continue;
2338 +
2339 + vchi_service_use(instance->vchi_handle[0]);
2340 + svc_use = 1;
2341 +
2342 + do {
2343 + /*
2344 + * Get new command and move it to response list
2345 + */
2346 + mutex_lock(&instance->lock);
2347 + if (list_empty(&instance->cmd_list)) {
2348 + /* no more commands to process */
2349 + mutex_unlock(&instance->lock);
2350 + break;
2351 + }
2352 + cmd = list_first_entry(&instance->cmd_list,
2353 + struct sm_cmd_rsp_blk, head);
2354 + list_move(&cmd->head, &instance->rsp_list);
2355 + cmd->sent = 1;
2356 + mutex_unlock(&instance->lock);
2357 +
2358 + /* Send the command */
2359 + status =
2360 + bcm2835_vchi_msg_queue(instance->vchi_handle[0],
2361 + cmd->msg, cmd->length);
2362 + if (status) {
2363 + pr_err("%s: failed to queue message (%d)",
2364 + __func__, status);
2365 + }
2366 +
2367 + /* If no reply is needed then we're done */
2368 + if (!cmd->wait) {
2369 + mutex_lock(&instance->lock);
2370 + list_del(&cmd->head);
2371 + mutex_unlock(&instance->lock);
2372 + vc_vchi_cmd_delete(instance, cmd);
2373 + continue;
2374 + }
2375 +
2376 + if (status) {
2377 + complete(&cmd->cmplt);
2378 + continue;
2379 + }
2380 +
2381 + } while (1);
2382 +
2383 + while (!vchi_msg_peek(instance->vchi_handle[0], (void **)&reply,
2384 + &reply_len, VCHI_FLAGS_NONE)) {
2385 + if (reply->trans_id & 0x80000000) {
2386 + /* Async event or cmd from the VPU */
2387 + if (instance->vpu_event)
2388 + instance->vpu_event(instance, reply,
2389 + reply_len);
2390 + } else {
2391 + vc_sm_cma_vchi_rx_ack(instance, cmd, reply,
2392 + reply_len);
2393 + }
2394 +
2395 + vchi_msg_remove(instance->vchi_handle[0]);
2396 + }
2397 +
2398 + /* Go through the dead list and free them */
2399 + mutex_lock(&instance->lock);
2400 + list_for_each_entry_safe(cmd, cmd_tmp, &instance->dead_list,
2401 + head) {
2402 + list_del(&cmd->head);
2403 + vc_vchi_cmd_delete(instance, cmd);
2404 + }
2405 + mutex_unlock(&instance->lock);
2406 + }
2407 +
2408 + return 0;
2409 +}
2410 +
2411 +static void vc_sm_cma_vchi_callback(void *param,
2412 + const VCHI_CALLBACK_REASON_T reason,
2413 + void *msg_handle)
2414 +{
2415 + struct sm_instance *instance = param;
2416 +
2417 + (void)msg_handle;
2418 +
2419 + switch (reason) {
2420 + case VCHI_CALLBACK_MSG_AVAILABLE:
2421 + complete(&instance->io_cmplt);
2422 + break;
2423 +
2424 + case VCHI_CALLBACK_SERVICE_CLOSED:
2425 + pr_info("%s: service CLOSED!!", __func__);
2426 + default:
2427 + break;
2428 + }
2429 +}
2430 +
2431 +struct sm_instance *vc_sm_cma_vchi_init(VCHI_INSTANCE_T vchi_instance,
2432 + unsigned int num_connections,
2433 + vpu_event_cb vpu_event)
2434 +{
2435 + u32 i;
2436 + struct sm_instance *instance;
2437 + int status;
2438 +
2439 + pr_debug("%s: start", __func__);
2440 +
2441 + if (num_connections > VCHI_MAX_NUM_CONNECTIONS) {
2442 + pr_err("%s: unsupported number of connections %u (max=%u)",
2443 + __func__, num_connections, VCHI_MAX_NUM_CONNECTIONS);
2444 +
2445 + goto err_null;
2446 + }
2447 + /* Allocate memory for this instance */
2448 + instance = kzalloc(sizeof(*instance), GFP_KERNEL);
2449 + if (!instance)
2450 + goto err_null;
2451 + mutex_init(&instance->lock);
2452 + init_completion(&instance->io_cmplt);
2453 + INIT_LIST_HEAD(&instance->cmd_list);
2454 + INIT_LIST_HEAD(&instance->rsp_list);
2455 + INIT_LIST_HEAD(&instance->dead_list);
2456 + INIT_LIST_HEAD(&instance->free_list);
2457 + sema_init(&instance->free_sema, SM_MAX_NUM_CMD_RSP_BLKS);
2458 + mutex_init(&instance->free_lock);
2459 + for (i = 0; i < SM_MAX_NUM_CMD_RSP_BLKS; i++) {
2460 + init_completion(&instance->free_blk[i].cmplt);
2461 + list_add(&instance->free_blk[i].head, &instance->free_list);
2462 + }
2463 +
2464 + /* Open the VCHI service connections */
2465 + instance->num_connections = num_connections;
2466 + for (i = 0; i < num_connections; i++) {
2467 + struct service_creation params = {
2468 + .version = VCHI_VERSION_EX(VC_SM_VER, VC_SM_MIN_VER),
2469 + .service_id = VC_SM_SERVER_NAME,
2470 + .callback = vc_sm_cma_vchi_callback,
2471 + .callback_param = instance,
2472 + };
2473 +
2474 + status = vchi_service_open(vchi_instance,
2475 + &params, &instance->vchi_handle[i]);
2476 + if (status) {
2477 + pr_err("%s: failed to open VCHI service (%d)",
2478 + __func__, status);
2479 +
2480 + goto err_close_services;
2481 + }
2482 + }
2483 +
2484 + /* Create the thread which handles all I/O to/from VideoCore. */
2485 + instance->io_thread = kthread_create(&vc_sm_cma_vchi_videocore_io,
2486 + (void *)instance, "SMIO");
2487 + if (IS_ERR(instance->io_thread)) {
2488 + pr_err("%s: failed to create SMIO thread", __func__);
2489 +
2490 + goto err_close_services;
2491 + }
2492 + instance->vpu_event = vpu_event;
2493 + set_user_nice(instance->io_thread, -10);
2494 + wake_up_process(instance->io_thread);
2495 +
2496 + pr_debug("%s: success - instance %p", __func__, instance);
2497 + return instance;
2498 +
2499 +err_close_services:
2500 + for (i = 0; i < instance->num_connections; i++) {
2501 + if (instance->vchi_handle[i])
2502 + vchi_service_close(instance->vchi_handle[i]);
2503 + }
2504 + kfree(instance);
2505 +err_null:
2506 + pr_debug("%s: FAILED", __func__);
2507 + return NULL;
2508 +}
2509 +
2510 +int vc_sm_cma_vchi_stop(struct sm_instance **handle)
2511 +{
2512 + struct sm_instance *instance;
2513 + u32 i;
2514 +
2515 + if (!handle) {
2516 + pr_err("%s: invalid pointer to handle %p", __func__, handle);
2517 + goto err_invalid;
2518 + }
2519 +
2520 + if (!*handle) {
2521 + pr_err("%s: invalid handle %p", __func__, *handle);
2522 + goto err_invalid;
2523 + }
2524 +
2525 + instance = *handle;
2526 +
2527 + /* Close all VCHI service connections */
2528 + for (i = 0; i < instance->num_connections; i++) {
2529 + s32 success;
2530 +
2531 + vchi_service_use(instance->vchi_handle[i]);
2532 +
2533 + success = vchi_service_close(instance->vchi_handle[i]);
2534 + }
2535 +
2536 + kfree(instance);
2537 +
2538 + *handle = NULL;
2539 + return 0;
2540 +
2541 +err_invalid:
2542 + return -EINVAL;
2543 +}
2544 +
2545 +static int vc_sm_cma_vchi_send_msg(struct sm_instance *handle,
2546 + enum vc_sm_msg_type msg_id, void *msg,
2547 + u32 msg_size, void *result, u32 result_size,
2548 + u32 *cur_trans_id, u8 wait_reply)
2549 +{
2550 + int status = 0;
2551 + struct sm_instance *instance = handle;
2552 + struct sm_cmd_rsp_blk *cmd_blk;
2553 +
2554 + if (!handle) {
2555 + pr_err("%s: invalid handle", __func__);
2556 + return -EINVAL;
2557 + }
2558 + if (!msg) {
2559 + pr_err("%s: invalid msg pointer", __func__);
2560 + return -EINVAL;
2561 + }
2562 +
2563 + cmd_blk =
2564 + vc_vchi_cmd_create(instance, msg_id, msg, msg_size, wait_reply);
2565 + if (!cmd_blk) {
2566 + pr_err("[%s]: failed to allocate global tracking resource",
2567 + __func__);
2568 + return -ENOMEM;
2569 + }
2570 +
2571 + if (cur_trans_id)
2572 + *cur_trans_id = cmd_blk->id;
2573 +
2574 + mutex_lock(&instance->lock);
2575 + list_add_tail(&cmd_blk->head, &instance->cmd_list);
2576 + mutex_unlock(&instance->lock);
2577 + complete(&instance->io_cmplt);
2578 +
2579 + if (!wait_reply)
2580 + /* We're done */
2581 + return 0;
2582 +
2583 + /* Wait for the response */
2584 + if (wait_for_completion_interruptible(&cmd_blk->cmplt)) {
2585 + mutex_lock(&instance->lock);
2586 + if (!cmd_blk->sent) {
2587 + list_del(&cmd_blk->head);
2588 + mutex_unlock(&instance->lock);
2589 + vc_vchi_cmd_delete(instance, cmd_blk);
2590 + return -ENXIO;
2591 + }
2592 +
2593 + list_move(&cmd_blk->head, &instance->dead_list);
2594 + mutex_unlock(&instance->lock);
2595 + complete(&instance->io_cmplt);
2596 + return -EINTR; /* We're done */
2597 + }
2598 +
2599 + if (result && result_size) {
2600 + memcpy(result, cmd_blk->msg, result_size);
2601 + } else {
2602 + struct vc_sm_result_t *res =
2603 + (struct vc_sm_result_t *)cmd_blk->msg;
2604 + status = (res->success == 0) ? 0 : -ENXIO;
2605 + }
2606 +
2607 + mutex_lock(&instance->lock);
2608 + list_del(&cmd_blk->head);
2609 + mutex_unlock(&instance->lock);
2610 + vc_vchi_cmd_delete(instance, cmd_blk);
2611 + return status;
2612 +}
2613 +
2614 +int vc_sm_cma_vchi_free(struct sm_instance *handle, struct vc_sm_free_t *msg,
2615 + u32 *cur_trans_id)
2616 +{
2617 + return vc_sm_cma_vchi_send_msg(handle, VC_SM_MSG_TYPE_FREE,
2618 + msg, sizeof(*msg), NULL, 0, cur_trans_id, 0);
2619 +}
2620 +
2621 +int vc_sm_cma_vchi_import(struct sm_instance *handle, struct vc_sm_import *msg,
2622 + struct vc_sm_import_result *result, u32 *cur_trans_id)
2623 +{
2624 + return vc_sm_cma_vchi_send_msg(handle, VC_SM_MSG_TYPE_IMPORT,
2625 + msg, sizeof(*msg), result, sizeof(*result),
2626 + cur_trans_id, 1);
2627 +}
2628 +
2629 +int vc_sm_cma_vchi_client_version(struct sm_instance *handle,
2630 + struct vc_sm_version *msg,
2631 + struct vc_sm_result_t *result,
2632 + u32 *cur_trans_id)
2633 +{
2634 + return vc_sm_cma_vchi_send_msg(handle, VC_SM_MSG_TYPE_CLIENT_VERSION,
2637 + msg, sizeof(*msg), NULL, 0,
2638 + cur_trans_id, 0);
2639 +}
2640 +
2641 +int vc_sm_vchi_client_vc_mem_req_reply(struct sm_instance *handle,
2642 + struct vc_sm_vc_mem_request_result *msg,
2643 + uint32_t *cur_trans_id)
2644 +{
2645 + return vc_sm_cma_vchi_send_msg(handle,
2646 + VC_SM_MSG_TYPE_VC_MEM_REQUEST_REPLY,
2647 + msg, sizeof(*msg), NULL, 0, cur_trans_id,
2648 + 0);
2649 +}
2650 --- /dev/null
2651 +++ b/drivers/staging/vc04_services/vc-sm-cma/vc_sm_cma_vchi.h
2652 @@ -0,0 +1,63 @@
2653 +/* SPDX-License-Identifier: GPL-2.0 */
2654 +
2655 +/*
2656 + * VideoCore Shared Memory CMA allocator
2657 + *
2658 + * Copyright: 2018, Raspberry Pi (Trading) Ltd
2659 + * Copyright 2011-2012 Broadcom Corporation. All rights reserved.
2660 + *
2661 + * Based on vmcs_sm driver from Broadcom Corporation.
2662 + *
2663 + */
2664 +
2665 +#ifndef __VC_SM_CMA_VCHI_H__INCLUDED__
2666 +#define __VC_SM_CMA_VCHI_H__INCLUDED__
2667 +
2668 +#include "interface/vchi/vchi.h"
2669 +
2670 +#include "vc_sm_defs.h"
2671 +
2672 +/*
2673 + * Forward declare.
2674 + */
2675 +struct sm_instance;
2676 +
2677 +typedef void (*vpu_event_cb)(struct sm_instance *instance,
2678 + struct vc_sm_result_t *reply, int reply_len);
2679 +
2680 +/*
2681 + * Initialize the shared memory service and open a VCHI connection to talk to it.
2682 + */
2683 +struct sm_instance *vc_sm_cma_vchi_init(VCHI_INSTANCE_T vchi_instance,
2684 + unsigned int num_connections,
2685 + vpu_event_cb vpu_event);
2686 +
2687 +/*
2688 + * Terminates the shared memory service.
2689 + */
2690 +int vc_sm_cma_vchi_stop(struct sm_instance **handle);
2691 +
2692 +/*
2693 + * Ask the shared memory service to free up some memory that was previously
2694 + * allocated by the vc_sm_cma_vchi_alloc function call.
2695 + */
2696 +int vc_sm_cma_vchi_free(struct sm_instance *handle, struct vc_sm_free_t *msg,
2697 + u32 *cur_trans_id);
2698 +
2699 +/*
2700 + * Import a contiguous block of memory and wrap it in a GPU MEM_HANDLE_T.
2701 + */
2702 +int vc_sm_cma_vchi_import(struct sm_instance *handle, struct vc_sm_import *msg,
2703 + struct vc_sm_import_result *result,
2704 + u32 *cur_trans_id);
2705 +
2706 +int vc_sm_cma_vchi_client_version(struct sm_instance *handle,
2707 + struct vc_sm_version *msg,
2708 + struct vc_sm_result_t *result,
2709 + u32 *cur_trans_id);
2710 +
2711 +int vc_sm_vchi_client_vc_mem_req_reply(struct sm_instance *handle,
2712 + struct vc_sm_vc_mem_request_result *msg,
2713 + uint32_t *cur_trans_id);
2714 +
2715 +#endif /* __VC_SM_CMA_VCHI_H__INCLUDED__ */
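
For illustration, this is how a client attaches to the service; in the driver
itself vc_sm_vpu_event plays this role. A sketch only — the decoding of
trans_id below is an assumption based on the top-bit convention used in
vc_sm_cma_vchi.c, not something this header defines:

#include "vc_sm_cma_vchi.h"
#include "vc_sm_defs.h"

/* Hypothetical VC->HOST event handler. */
static void example_vpu_event(struct sm_instance *instance,
			      struct vc_sm_result_t *reply, int reply_len)
{
	switch (reply->trans_id & ~0x80000000) {
	case VC_SM_MSG_TYPE_RELEASED:
		/* struct vc_sm_released: drop refs via its kernel_id. */
		break;
	case VC_SM_MSG_TYPE_VC_MEM_REQUEST:
		/* Allocate, then vc_sm_vchi_client_vc_mem_req_reply(). */
		break;
	default:
		break;
	}
}

/* Open a single connection to the SMEM service with the handler attached. */
static struct sm_instance *example_connect(VCHI_INSTANCE_T vchi)
{
	return vc_sm_cma_vchi_init(vchi, 1, example_vpu_event);
}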
2716 --- /dev/null
2717 +++ b/drivers/staging/vc04_services/vc-sm-cma/vc_sm_defs.h
2718 @@ -0,0 +1,300 @@
2719 +/* SPDX-License-Identifier: GPL-2.0 */
2720 +
2721 +/*
2722 + * VideoCore Shared Memory CMA allocator
2723 + *
2724 + * Copyright: 2018, Raspberry Pi (Trading) Ltd
2725 + *
2726 + * Based on vc_sm_defs.h from the vmcs_sm driver Copyright Broadcom Corporation.
2727 + * All IPC messages are copied across to this file, even if the vc-sm-cma
2728 + * driver is not currently using them.
2729 + *
2730 + ****************************************************************************
2731 + */
2732 +
2733 +#ifndef __VC_SM_DEFS_H__INCLUDED__
2734 +#define __VC_SM_DEFS_H__INCLUDED__
2735 +
2736 +/* FourCC code used for VCHI connection */
2737 +#define VC_SM_SERVER_NAME MAKE_FOURCC("SMEM")
2738 +
2739 +/* Maximum message length */
2740 +#define VC_SM_MAX_MSG_LEN (sizeof(union vc_sm_msg_union_t) + \
2741 + sizeof(struct vc_sm_msg_hdr_t))
2742 +#define VC_SM_MAX_RSP_LEN (sizeof(union vc_sm_msg_union_t))
2743 +
2744 +/* Resource name maximum size */
2745 +#define VC_SM_RESOURCE_NAME 32
2746 +
2747 +/*
2748 + * Version to be reported to the VPU
2749 + * VPU assumes 0 (aka 1) which does not require the released callback, nor
2750 + * expect the client to handle VC_MEM_REQUESTS.
2751 + * Version 2 requires the released callback, and must support VC_MEM_REQUESTS.
2752 + */
2753 +#define VC_SM_PROTOCOL_VERSION 2
2754 +
2755 +enum vc_sm_msg_type {
2756 + /* Message types supported for HOST->VC direction */
2757 +
2758 + /* Allocate shared memory block */
2759 + VC_SM_MSG_TYPE_ALLOC,
2760 + /* Lock allocated shared memory block */
2761 + VC_SM_MSG_TYPE_LOCK,
2762 + /* Unlock allocated shared memory block */
2763 + VC_SM_MSG_TYPE_UNLOCK,
2764 + /* Unlock allocated shared memory block, do not answer command */
2765 + VC_SM_MSG_TYPE_UNLOCK_NOANS,
2766 + /* Free shared memory block */
2767 + VC_SM_MSG_TYPE_FREE,
2768 + /* Resize a shared memory block */
2769 + VC_SM_MSG_TYPE_RESIZE,
2770 + /* Walk the allocated shared memory block(s) */
2771 + VC_SM_MSG_TYPE_WALK_ALLOC,
2772 +
2773 + /* A previously applied action will need to be reverted */
2774 + VC_SM_MSG_TYPE_ACTION_CLEAN,
2775 +
2776 + /*
2777 + * Import a physical address and wrap into a MEM_HANDLE_T.
2778 + * Release with VC_SM_MSG_TYPE_FREE.
2779 + */
2780 + VC_SM_MSG_TYPE_IMPORT,
2781 + /*
2782 + * Tells VC the protocol version supported by this client.
2783 + * 2 supports the async/cmd messages from the VPU for final release
2784 + * of memory, and for VC allocations.
2785 + */
2786 + VC_SM_MSG_TYPE_CLIENT_VERSION,
2787 + /* Response to VC request for memory */
2788 + VC_SM_MSG_TYPE_VC_MEM_REQUEST_REPLY,
2789 +
2790 + /*
2791 + * Asynchronous/cmd messages supported for VC->HOST direction.
2792 + * Signalled by setting the top bit in vc_sm_result_t trans_id.
2793 + */
2794 +
2795 + /*
2796 + * VC has finished with an imported memory allocation.
2797 + * Release any Linux reference counts on the underlying block.
2798 + */
2799 + VC_SM_MSG_TYPE_RELEASED,
2800 + /* VC request for memory */
2801 + VC_SM_MSG_TYPE_VC_MEM_REQUEST,
2802 +
2803 + VC_SM_MSG_TYPE_MAX
2804 +};
2805 +
2806 +/* Type of memory to be allocated */
2807 +enum vc_sm_alloc_type_t {
2808 + VC_SM_ALLOC_CACHED,
2809 + VC_SM_ALLOC_NON_CACHED,
2810 +};
2811 +
2812 +/* Message header for all messages in HOST->VC direction */
2813 +struct vc_sm_msg_hdr_t {
2814 + u32 type;
2815 + u32 trans_id;
2816 + u8 body[0];
2817 +
2818 +};
2819 +
2820 +/* Request to allocate memory (HOST->VC) */
2821 +struct vc_sm_alloc_t {
2822 + /* type of memory to allocate */
2823 + enum vc_sm_alloc_type_t type;
2824 + /* byte amount of data to allocate per unit */
2825 + u32 base_unit;
2826 + /* number of unit to allocate */
2827 + u32 num_unit;
2828 + /* alignment to be applied on allocation */
2829 + u32 alignment;
2830 + /* identity of who allocated this block */
2831 + u32 allocator;
2832 + /* resource name (for easier tracking on vc side) */
2833 + char name[VC_SM_RESOURCE_NAME];
2834 +
2835 +};
2836 +
2837 +/* Result of a requested memory allocation (VC->HOST) */
2838 +struct vc_sm_alloc_result_t {
2839 + /* Transaction identifier */
2840 + u32 trans_id;
2841 +
2842 + /* Resource handle */
2843 + u32 res_handle;
2844 + /* Pointer to resource buffer */
2845 + u32 res_mem;
2846 + /* Resource base size (bytes) */
2847 + u32 res_base_size;
2848 + /* Resource number */
2849 + u32 res_num;
2850 +
2851 +};
2852 +
2853 +/* Request to free a previously allocated memory (HOST->VC) */
2854 +struct vc_sm_free_t {
2855 + /* Resource handle (returned from alloc) */
2856 + u32 res_handle;
2857 + /* Resource buffer (returned from alloc) */
2858 + u32 res_mem;
2859 +
2860 +};
2861 +
2862 +/* Request to lock a previously allocated memory (HOST->VC) */
2863 +struct vc_sm_lock_unlock_t {
2864 + /* Resource handle (returned from alloc) */
2865 + u32 res_handle;
2866 + /* Resource buffer (returned from alloc) */
2867 + u32 res_mem;
2868 +
2869 +};
2870 +
2871 +/* Request to resize a previously allocated memory (HOST->VC) */
2872 +struct vc_sm_resize_t {
2873 + /* Resource handle (returned from alloc) */
2874 + u32 res_handle;
2875 + /* Resource buffer (returned from alloc) */
2876 + u32 res_mem;
2877 + /* Resource *new* size requested (bytes) */
2878 + u32 res_new_size;
2879 +
2880 +};
2881 +
2882 +/* Result of a requested memory lock (VC->HOST) */
2883 +struct vc_sm_lock_result_t {
2884 + /* Transaction identifier */
2885 + u32 trans_id;
2886 +
2887 + /* Resource handle */
2888 + u32 res_handle;
2889 + /* Pointer to resource buffer */
2890 + u32 res_mem;
2891 + /*
2892 + * Pointer to former resource buffer if the memory
2893 + * was reallocated
2894 + */
2895 + u32 res_old_mem;
2896 +
2897 +};
2898 +
2899 +/* Generic result for a request (VC->HOST) */
2900 +struct vc_sm_result_t {
2901 + /* Transaction identifier */
2902 + u32 trans_id;
2903 +
2904 + s32 success;
2905 +
2906 +};
2907 +
2908 +/* Request to revert a previously applied action (HOST->VC) */
2909 +struct vc_sm_action_clean_t {
2910 + /* Action of interest */
2911 + enum vc_sm_msg_type res_action;
2912 + /* Transaction identifier for the action of interest */
2913 + u32 action_trans_id;
2914 +
2915 +};
2916 +
2917 +/* Request to remove all data associated with a given allocator (HOST->VC) */
2918 +struct vc_sm_free_all_t {
2919 + /* Allocator identifier */
2920 + u32 allocator;
2921 +};
2922 +
2923 +/* Request to import memory (HOST->VC) */
2924 +struct vc_sm_import {
2925 + /* type of memory to allocate */
2926 + enum vc_sm_alloc_type_t type;
2927 + /* pointer to the VC (ie physical) address of the allocated memory */
2928 + u32 addr;
2929 + /* size of buffer */
2930 + u32 size;
2931 + /* opaque handle returned in RELEASED messages */
2932 + u32 kernel_id;
2933 + /* Allocator identifier */
2934 + u32 allocator;
2935 + /* resource name (for easier tracking on vc side) */
2936 + char name[VC_SM_RESOURCE_NAME];
2937 +};
2938 +
2939 +/* Result of a requested memory import (VC->HOST) */
2940 +struct vc_sm_import_result {
2941 + /* Transaction identifier */
2942 + u32 trans_id;
2943 +
2944 + /* Resource handle */
2945 + u32 res_handle;
2946 +};
2947 +
2948 +/* Notification that VC has finished with an allocation (VC->HOST) */
2949 +struct vc_sm_released {
2950 + /* cmd type / trans_id */
2951 + u32 cmd;
2952 +
2953 + /* pointer to the VC (ie physical) address of the allocated memory */
2954 + u32 addr;
2955 + /* size of buffer */
2956 + u32 size;
2957 + /* opaque handle returned in RELEASED messages */
2958 + u32 kernel_id;
2959 + u32 vc_handle;
2960 +};
2961 +
2962 +/*
2963 + * Client informing VC as to the protocol version it supports.
2964 + * >=2 requires the released callback, and supports VC asking for memory.
2965 + * Failure means that the firmware doesn't support this call, and therefore the
2966 + * client should either fail, or NOT rely on getting the released callback.
2967 + */
2968 +struct vc_sm_version {
2969 + u32 version;
2970 +};
2971 +
2972 +/* Request FROM VideoCore for some memory */
2973 +struct vc_sm_vc_mem_request {
2974 + /* cmd type */
2975 + u32 cmd;
2976 +
2977 + /* trans_id (from VPU) */
2978 + u32 trans_id;
2979 + /* size of buffer */
2980 + u32 size;
2981 + /* alignment of buffer */
2982 + u32 align;
2983 + /* resource name (for easier tracking) */
2984 + char name[VC_SM_RESOURCE_NAME];
2985 + /* VPU handle for the resource */
2986 + u32 vc_handle;
2987 +};
2988 +
2989 +/* Response from the kernel to provide the VPU with some memory */
2990 +struct vc_sm_vc_mem_request_result {
2991 + /* Transaction identifier for the VPU */
2992 + u32 trans_id;
2993 + /* pointer to the physical address of the allocated memory */
2994 + u32 addr;
2995 + /* opaque handle returned in RELEASED messages */
2996 + u32 kernel_id;
2997 +};
2998 +
2999 +/* Union of ALL messages */
3000 +union vc_sm_msg_union_t {
3001 + struct vc_sm_alloc_t alloc;
3002 + struct vc_sm_alloc_result_t alloc_result;
3003 + struct vc_sm_free_t free;
3004 + struct vc_sm_lock_unlock_t lock_unlock;
3005 + struct vc_sm_action_clean_t action_clean;
3006 + struct vc_sm_resize_t resize;
3007 + struct vc_sm_lock_result_t lock_result;
3008 + struct vc_sm_result_t result;
3009 + struct vc_sm_free_all_t free_all;
3010 + struct vc_sm_import import;
3011 + struct vc_sm_import_result import_result;
3012 + struct vc_sm_version version;
3013 + struct vc_sm_released released;
3014 + struct vc_sm_vc_mem_request vc_request;
3015 + struct vc_sm_vc_mem_request_result vc_request_result;
3016 +};
3017 +
3018 +#endif /* __VC_SM_DEFS_H__INCLUDED__ */
3019 --- /dev/null
3020 +++ b/drivers/staging/vc04_services/vc-sm-cma/vc_sm_knl.h
3021 @@ -0,0 +1,28 @@
3022 +/* SPDX-License-Identifier: GPL-2.0 */
3023 +
3024 +/*
3025 + * VideoCore Shared Memory CMA allocator
3026 + *
3027 + * Copyright: 2018, Raspberry Pi (Trading) Ltd
3028 + *
3029 + * Based on vc_sm_defs.h from the vmcs_sm driver Copyright Broadcom Corporation.
3030 + *
3031 + */
3032 +
3033 +#ifndef __VC_SM_KNL_H__INCLUDED__
3034 +#define __VC_SM_KNL_H__INCLUDED__
3035 +
3036 +#if !defined(__KERNEL__)
3037 +#error "This interface is for kernel use only..."
3038 +#endif
3039 +
3040 +/* Free a previously allocated or imported shared memory handle and block. */
3041 +int vc_sm_cma_free(void *handle);
3042 +
3043 +/* Get an internal resource handle mapped from the external one. */
3044 +int vc_sm_cma_int_handle(void *handle);
3045 +
3046 +/* Import a block of memory into the GPU space. */
3047 +int vc_sm_cma_import_dmabuf(struct dma_buf *dmabuf, void **handle);
3048 +
3049 +#endif /* __VC_SM_KNL_H__INCLUDED__ */
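
A usage sketch for this kernel-side interface (illustrative only: the
consumer function is hypothetical and error handling is trimmed):

#include <linux/dma-buf.h>
#include "vc_sm_knl.h"

/* Share an existing dma_buf with the VPU and fetch its VideoCore handle. */
static int example_share_with_vpu(struct dma_buf *dmabuf)
{
	void *vcsm_handle;
	int vc_handle;
	int ret;

	ret = vc_sm_cma_import_dmabuf(dmabuf, &vcsm_handle);
	if (ret)
		return ret;

	/* MEM_HANDLE_T to quote to the VPU in MMAL or other messages. */
	vc_handle = vc_sm_cma_int_handle(vcsm_handle);

	/* ... pass vc_handle to the firmware ... */

	/* Releases the reference taken by the import. */
	return vc_sm_cma_free(vcsm_handle);
}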
3050 --- a/drivers/staging/vc04_services/vchiq-mmal/Makefile
3051 +++ b/drivers/staging/vc04_services/vchiq-mmal/Makefile
3052 @@ -4,5 +4,5 @@ bcm2835-mmal-vchiq-objs := mmal-vchiq.o
3053 obj-$(CONFIG_BCM2835_VCHIQ_MMAL) += bcm2835-mmal-vchiq.o
3054
3055 ccflags-y += \
3056 - -Idrivers/staging/vc04_services \
3057 + -I$(srctree)/drivers/staging/vc04_services \
3058 -D__VCCOREVER__=0x04000000
3059 --- /dev/null
3060 +++ b/include/linux/broadcom/vc_sm_cma_ioctl.h
3061 @@ -0,0 +1,114 @@
3062 +/* SPDX-License-Identifier: GPL-2.0 */
3063 +
3064 +/*
3065 + * Copyright 2019 Raspberry Pi (Trading) Ltd. All rights reserved.
3066 + *
3067 + * Based on vmcs_sm_ioctl.h Copyright Broadcom Corporation.
3068 + */
3069 +
3070 +#ifndef __VC_SM_CMA_IOCTL_H
3071 +#define __VC_SM_CMA_IOCTL_H
3072 +
3073 +/* ---- Include Files ---------------------------------------------------- */
3074 +
3075 +#if defined(__KERNEL__)
3076 +#include <linux/types.h> /* Needed for standard types */
3077 +#else
3078 +#include <stdint.h>
3079 +#endif
3080 +
3081 +#include <linux/ioctl.h>
3082 +
3083 +/* ---- Constants and Types ---------------------------------------------- */
3084 +
3085 +#define VC_SM_CMA_RESOURCE_NAME 32
3086 +#define VC_SM_CMA_RESOURCE_NAME_DEFAULT "sm-host-resource"
3087 +
3088 +/* Type define used to create unique IOCTL number */
3089 +#define VC_SM_CMA_MAGIC_TYPE 'J'
3090 +
3091 +/* IOCTL commands on /dev/vc-sm-cma */
3092 +enum vc_sm_cma_cmd_e {
3093 + VC_SM_CMA_CMD_ALLOC = 0x5A, /* Start at 0x5A arbitrarily */
3094 +
3095 + VC_SM_CMA_CMD_IMPORT_DMABUF,
3096 +
3097 + VC_SM_CMA_CMD_CLEAN_INVALID2,
3098 +
3099 + VC_SM_CMA_CMD_LAST /* Do not delete */
3100 +};
3101 +
3102 +/* Cache type supported, conveniently matches the user space definition in
3103 + * user-vcsm.h.
3104 + */
3105 +enum vc_sm_cma_cache_e {
3106 + VC_SM_CMA_CACHE_NONE,
3107 + VC_SM_CMA_CACHE_HOST,
3108 + VC_SM_CMA_CACHE_VC,
3109 + VC_SM_CMA_CACHE_BOTH,
3110 +};
3111 +
3112 +/* IOCTL Data structures */
3113 +struct vc_sm_cma_ioctl_alloc {
3114 + /* user -> kernel */
3115 + __u32 size;
3116 + __u32 num;
3117 + __u32 cached; /* enum vc_sm_cma_cache_e */
3118 + __u32 pad;
3119 + __u8 name[VC_SM_CMA_RESOURCE_NAME];
3120 +
3121 + /* kernel -> user */
3122 + __s32 handle;
3123 + __u32 vc_handle;
3124 + __u64 dma_addr;
3125 +};
3126 +
3127 +struct vc_sm_cma_ioctl_import_dmabuf {
3128 + /* user -> kernel */
3129 + __s32 dmabuf_fd;
3130 + __u32 cached; /* enum vc_sm_cma_cache_e */
3131 + __u8 name[VC_SM_CMA_RESOURCE_NAME];
3132 +
3133 + /* kernel -> user */
3134 + __s32 handle;
3135 + __u32 vc_handle;
3136 + __u32 size;
3137 + __u32 pad;
3138 + __u64 dma_addr;
3139 +};
3140 +
3141 +/*
3142 + * Cache functions to be set to struct vc_sm_cma_ioctl_clean_invalid2
3143 + * invalidate_mode.
3144 + */
3145 +#define VC_SM_CACHE_OP_NOP 0x00
3146 +#define VC_SM_CACHE_OP_INV 0x01
3147 +#define VC_SM_CACHE_OP_CLEAN 0x02
3148 +#define VC_SM_CACHE_OP_FLUSH 0x03
3149 +
3150 +struct vc_sm_cma_ioctl_clean_invalid2 {
3151 + __u32 op_count;
3152 + __u32 pad;
3153 + struct vc_sm_cma_ioctl_clean_invalid_block {
3154 + __u32 invalidate_mode;
3155 + __u32 block_count;
3156 + void __user *start_address;
3157 + __u32 block_size;
3158 + __u32 inter_block_stride;
3159 + } s[0];
3160 +};
3161 +
3162 +/* IOCTL numbers */
3163 +#define VC_SM_CMA_IOCTL_MEM_ALLOC\
3164 + _IOR(VC_SM_CMA_MAGIC_TYPE, VC_SM_CMA_CMD_ALLOC,\
3165 + struct vc_sm_cma_ioctl_alloc)
3166 +
3167 +#define VC_SM_CMA_IOCTL_MEM_IMPORT_DMABUF\
3168 + _IOR(VC_SM_CMA_MAGIC_TYPE, VC_SM_CMA_CMD_IMPORT_DMABUF,\
3169 + struct vc_sm_cma_ioctl_import_dmabuf)
3170 +
3171 +#define VC_SM_CMA_IOCTL_MEM_CLEAN_INVALID2\
3172 + _IOR(VC_SM_CMA_MAGIC_TYPE, VC_SM_CMA_CMD_CLEAN_INVALID2,\
3173 + struct vc_sm_cma_ioctl_clean_invalid2)
3174 +
3175 +#endif /* __VC_SM_CMA_IOCTL_H */
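
A hypothetical userspace counterpart using this header (a sketch only: the
/dev/vcsm-cma node name and the include path are assumptions based on the
driver's misc-device registration, not guarantees of this patch):

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/broadcom/vc_sm_cma_ioctl.h>

/* Import an existing dmabuf fd into the VPU memory map. */
int example_import_to_vpu(int dmabuf_fd)
{
	struct vc_sm_cma_ioctl_import_dmabuf imp;
	int fd = open("/dev/vcsm-cma", O_RDWR);

	if (fd < 0)
		return -1;

	memset(&imp, 0, sizeof(imp));
	imp.dmabuf_fd = dmabuf_fd;
	imp.cached = VC_SM_CMA_CACHE_NONE;
	strncpy((char *)imp.name, "example", VC_SM_CMA_RESOURCE_NAME - 1);

	if (ioctl(fd, VC_SM_CMA_IOCTL_MEM_IMPORT_DMABUF, &imp) < 0) {
		close(fd);
		return -1;
	}

	printf("vc_handle 0x%x dma_addr 0x%llx size %u\n", imp.vc_handle,
	       (unsigned long long)imp.dma_addr, imp.size);

	close(imp.handle);	/* new dmabuf fd returned by the driver */
	close(fd);
	return 0;
}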