# tegrakernel/kernel/nvidia/drivers/video/tegra/nvmap/Kconfig

menuconfig TEGRA_NVMAP
	bool "Tegra GPU memory management driver (nvmap)"
	select ARM_DMA_USE_IOMMU if IOMMU_API
	select DMA_SHARED_BUFFER
	select CRYPTO_LZO
	default y
	help
	  Say Y here to include the memory management driver for the Tegra
	  GPU, multimedia and display subsystems.

if TEGRA_NVMAP

config TEGRA_NVMAP_V2
	bool "Use NVMAP version 2"
	default n
	help
	  Say Y here to use nvmap version 2, which has been rewritten for
	  code clarity and safety.

config NVMAP_PAGE_POOLS
	bool "Use page pools to reduce allocation overhead"
	default y
	help
	  Say Y here to reduce allocation overhead, which is significant
	  for uncached, write-combined and inner-cacheable memory, since
	  every allocation involves changing the attributes of each page
	  and flushing the cache. Allocation time is reduced by allocating
	  pages ahead of time and keeping them aside. The reserved pages
	  are released when the system is low on memory and reacquired
	  when memory is freed.

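# The pool idea in miniature: pay the page-attribute change and cache flush
# once at fill time, so the allocation fast path is a cheap list operation.
# A minimal C sketch of the concept, assuming hypothetical pool_fill() and
# pool_alloc() helpers (not nvmap's actual interface):
#
#	#include <linux/list.h>
#	#include <linux/gfp.h>
#
#	static LIST_HEAD(nvmap_pool);	/* pages set aside at fill time */
#
#	static void pool_fill(int count)
#	{
#		while (count--) {
#			struct page *p = alloc_page(GFP_KERNEL);
#			if (!p)
#				break;
#			/* attribute change + cache flush happens here, once */
#			list_add(&p->lru, &nvmap_pool);
#		}
#	}
#
#	/* Fast path: just unlink an already-prepared page. */
#	static struct page *pool_alloc(void)
#	{
#		struct page *p = list_first_entry_or_null(&nvmap_pool,
#							   struct page, lru);
#		if (p)
#			list_del(&p->lru);
#		return p;
#	}
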
config NVMAP_PAGE_POOL_DEBUG
	bool "Debugging for page pools"
	depends on NVMAP_PAGE_POOLS
	help
	  Say Y here to include some debugging info in the page pools. This
	  adds a bit of unnecessary overhead, so only enable this if you
	  suspect there is an issue with the nvmap page pools.

config NVMAP_PAGE_POOL_SIZE
	depends on NVMAP_PAGE_POOLS
	hex "Page pool size in pages"
	default 0x0

config NVMAP_COLOR_PAGES
	bool "Color pages allocated"
	help
	  Say Y here to enable page coloring.
	  Page coloring rearranges the pages allocated based on the color
	  of each page. It can improve memory access performance.
	  When coloring is enabled, a portion of the requested allocation
	  size can optionally be overallocated to improve the probability
	  of better page coloring. If unsure, say Y.

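# What "color" means here: with a physically indexed cache, the cache set a
# page maps to is a function of its physical address, so pages can be grouped
# by color and handed out so that a buffer spreads evenly across the cache.
# A hedged sketch (num_colors and the computation are illustrative, not
# nvmap's actual scheme):
#
#	/* Which cache-set group ("color") a page falls into. */
#	static unsigned int page_color(struct page *p, unsigned int num_colors)
#	{
#		return (page_to_phys(p) >> PAGE_SHIFT) % num_colors;
#	}
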
config NVMAP_CACHE_MAINT_BY_SET_WAYS
	bool "Enable cache maintenance by set/ways"
	help
	  Say Y here to reduce the overhead of cache maintenance by MVA.
	  This helps on systems where the inner cache includes only L1.
	  On systems where the inner cache includes both L1 and L2, keep
	  this option disabled.

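# The underlying trade-off: maintenance by MVA walks every cache line of the
# buffer, so its cost grows with buffer size, while maintenance by set/way
# walks the whole cache once at a fixed cost. A hedged pseudocode sketch with
# hypothetical helpers and threshold:
#
#	if (size > cache_maint_threshold)
#		flush_cache_by_set_way();	 /* fixed cost: whole cache */
#	else
#		flush_range_by_mva(start, size); /* cost proportional to size */
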
config NVMAP_FD_START
	hex "FD number to start allocation from"
	default 0x400
	help
	  NvMap handles are represented as FDs in user processes.
	  To avoid Linux FD usage limitations, NvMap allocates FDs starting
	  from this number.

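# The same "allocate from a floor" behaviour is visible in userspace via
# fcntl(2): F_DUPFD returns the lowest free descriptor greater than or equal
# to its argument. nvmap does the equivalent in-kernel; this snippet is only
# an analogy:
#
#	#include <fcntl.h>
#
#	/* Duplicate fd onto the lowest free descriptor >= 0x400. */
#	int high_fd = fcntl(fd, F_DUPFD, 0x400);
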
config NVMAP_DEFER_FD_RECYCLE
	bool "Defer FD recycle"
	help
	  Say Y here to enable deferred FD recycling.
	  Releasing an nvmap handle frees both its memory and its FD. That
	  FD can be reused immediately for a subsequent nvmap allocation in
	  the same process. Buggy client code that keeps using the FD of a
	  released allocation would then silently operate on the new
	  allocation, which can lead to undesired consequences that are
	  hard to debug. Enabling this option defers recycling of an FD for
	  a longer time, so incorrect FD references by clients can be
	  debugged: accesses that occur after handle/FD release return
	  errors.

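# The hazard this option guards against, in miniature (nvmap_alloc() and
# use() are hypothetical; only the FD-reuse pattern matters):
#
#	int fd = nvmap_alloc(...);	/* say fd == 0x400 */
#	close(fd);			/* handle released, FD freed */
#	int fd2 = nvmap_alloc(...);	/* without deferral, fd2 may be 0x400 */
#	use(fd);			/* stale reference now hits fd2's buffer */
#
# With deferred recycling, 0x400 stays unused for longer, so the stale
# access returns an error instead of silently aliasing the new allocation.
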
config NVMAP_DEFER_FD_RECYCLE_MAX_FD
	hex "FD number to start free FD recycle"
	depends on NVMAP_DEFER_FD_RECYCLE
	default 0x8000
	help
	  Once the last allocated FD reaches this number, subsequent FDs
	  are allocated starting from NVMAP_FD_START again.

endif