
imx233: make sure dma descriptors are cache friendly

Because DMA descriptors need to be committed to and discarded
from the cache, nasty side effects can occur on adjacent data
if they are not cache-line aligned and/or if their size is not
a multiple of the cache line size. The same applies to DMA
buffers, which are still potentially broken. Add a macro to
ensure that these constraints are not broken by mistake in the
future.

Change-Id: I1dd69a5a9c29796c156d953eaa57c0d281e79846
Amaury Pouly 2012-05-20 01:23:17 +02:00
parent 1adc474771
commit 1b6e8cba62
5 changed files with 34 additions and 7 deletions
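
For illustration only (not part of this commit; the struct name and
field layout are assumptions, not the actual imx233 definitions), a
descriptor can be forced onto its own cache line with an alignment
attribute: GCC rounds the struct size up to a multiple of the
requested alignment, so cleaning or invalidating the descriptor's
cache lines can never touch neighbouring data.

#include <stdint.h>

#define CACHEALIGN_SIZE 32 /* 32-byte cache lines, i.e. 1 << CACHEALIGN_BITS */

/* Hypothetical descriptor: 28 bytes of fields, padded by the aligned
 * attribute to 32 bytes so it occupies exactly one cache line. */
struct dma_descriptor
{
    uint32_t next;    /* physical address of the next descriptor */
    uint32_t cmd;     /* command word and flags */
    uint32_t buffer;  /* physical address of the data buffer */
    uint32_t pio[4];  /* PIO words written to the peripheral */
} __attribute__((aligned(CACHEALIGN_SIZE)));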


@@ -86,6 +86,12 @@
/* 32 bytes per cache line */
#define CACHEALIGN_BITS 5
#define ___ENSURE_ZERO(line, x) static uint8_t __ensure_zero_##line[-(x)] __attribute__((unused));
#define __ENSURE_ZERO(x) ___ENSURE_ZERO(__LINE__, x)
#define __ENSURE_MULTIPLE(x, y) __ENSURE_ZERO((x) % (y))
#define __ENSURE_CACHELINE_MULTIPLE(x) __ENSURE_MULTIPLE(x, 1 << CACHEALIGN_BITS)
#define __ENSURE_STRUCT_CACHE_FRIENDLY(name) __ENSURE_CACHELINE_MULTIPLE(sizeof(name))
#define __XTRACT(reg, field) ((reg & reg##__##field##_BM) >> reg##__##field##_BP)
#define __XTRACT_EX(val, field) (((val) & field##_BM) >> field##_BP)
#define __FIELD_SET(reg, field, val) reg = (reg & ~reg##__##field##_BM) | (val << reg##__##field##_BP)
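
Applied to a descriptor like the hypothetical one sketched above, the
new check is a single declaration; this is only a sketch of how the
macro is meant to be used, not a line from this commit.

#include <stdint.h>

/* same hypothetical descriptor: aligned(32) pads it to 32 bytes */
struct dma_descriptor
{
    uint32_t next, cmd, buffer, pio[4];
} __attribute__((aligned(32)));

/* expands to an unused zero-length array when sizeof() is a multiple
 * of the 32-byte cache line, and to a negative-sized array (a compile
 * error) as soon as it is not */
__ENSURE_STRUCT_CACHE_FRIENDLY(struct dma_descriptor)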