imx233: make sure dma descriptors are cache friendly

Because DMA descriptors need to be committed to and discarded from
the cache, nasty side effects can occur on adjacent data if the
descriptors are not cache-aligned and/or their size is not a
multiple of the cache line size. The same applies to DMA buffers,
which are still potentially broken. Add a macro to ensure that these
constraints cannot be broken by mistake in the future.

Change-Id: I1dd69a5a9c29796c156d953eaa57c0d281e79846
Amaury Pouly 2012-05-20 01:23:17 +02:00
parent 1adc474771
commit 1b6e8cba62
5 changed files with 34 additions and 7 deletions

@@ -43,7 +43,11 @@ struct ssp_dma_command_t
     uint32_t ctrl0;
     uint32_t cmd0;
     uint32_t cmd1;
-};
+    /* padded to next multiple of cache line size (32 bytes) */
+    uint32_t pad[2];
+} __attribute__((packed)) CACHEALIGN_ATTR;
+__ENSURE_STRUCT_CACHE_FRIENDLY(struct ssp_dma_command_t)
 static bool ssp_in_use[2];
 static int ssp_nr_in_use = 0;
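
The pad-then-check pattern in the hunk above can be sketched in isolation. This is a hedged illustration, not the Rockbox code: the struct fields, the `CACHELINE_SIZE` constant, and the `ENSURE_STRUCT_CACHE_FRIENDLY` macro below are stand-ins (the real macro is `__ENSURE_STRUCT_CACHE_FRIENDLY`, whose definition is not shown in this hunk), and C11 `_Static_assert` is used here for brevity; the original, predating C11, likely used a different compile-time trick.

```c
#include <stdint.h>

/* Stand-in for the imx233 cache line size (32 bytes, per the comment
 * in the hunk above). */
#define CACHELINE_SIZE 32

/* Hypothetical equivalent of __ENSURE_STRUCT_CACHE_FRIENDLY: refuse
 * to compile if the struct's size is not a multiple of the cache
 * line, so committing or discarding one descriptor's cache lines can
 * never clobber adjacent data. */
#define ENSURE_STRUCT_CACHE_FRIENDLY(type) \
    _Static_assert(sizeof(type) % CACHELINE_SIZE == 0, \
                   #type " size is not a multiple of the cache line")

/* Illustrative descriptor: 6 x 4 = 24 bytes of payload, padded up to
 * 32 bytes and aligned on a cache line boundary. */
struct example_dma_command_t
{
    uint32_t next;
    uint32_t cmd;
    uint32_t buffer;
    uint32_t ctrl0;
    uint32_t cmd0;
    uint32_t cmd1;
    /* pad to the next multiple of the cache line size */
    uint32_t pad[2];
} __attribute__((packed, aligned(CACHELINE_SIZE)));

ENSURE_STRUCT_CACHE_FRIENDLY(struct example_dma_command_t);
```

If a field were added without adjusting the padding, the `_Static_assert` would fail at compile time instead of producing subtle cache-coherency corruption at run time, which is exactly the safety net the commit message describes.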