<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN"> <html><head><meta http-equiv="Content-Type" content="text/html;charset=UTF-8"> <title>NVIDIA CUDA Library: cuCtxGetCacheConfig</title> <link href="customdoxygen.css" rel="stylesheet" type="text/css"> <link href="tabs.css" rel="stylesheet" type="text/css"> </head><body> <!-- Generated by Doxygen 1.5.8 --> <div class="navigation" id="top"> <div class="tabs"> <ul> <li><a href="index.html"><span>Main Page</span></a></li> <li><a href="modules.html"><span>Modules</span></a></li> <li><a href="annotated.html"><span>Data Structures</span></a></li> <li><a href="pages.html"><span>Related Pages</span></a></li> </ul> </div> </div> <div class="contents"> <div class="navpath"><a class="el" href="group__CUDA__CTX.html">Context Management</a> </div> <table cellspacing="0" cellpadding="0" border="0"> <tr> <td valign="top"> <div class="navtab"> <table> <tr><td class="navtab"><a class="qindex" href="group__CUDA__CTX_g65dc0012348bc84810e2103a40d8e2cf.html#g65dc0012348bc84810e2103a40d8e2cf">cuCtxCreate</a></td></tr> <tr><td class="navtab"><a class="qindex" href="group__CUDA__CTX_g27a365aebb0eb548166309f58a1e8b8e.html#g27a365aebb0eb548166309f58a1e8b8e">cuCtxDestroy</a></td></tr> <tr><td class="navtab"><a class="qindex" href="group__CUDA__CTX_g088a90490dafca5893ef6fbebc8de8fb.html#g088a90490dafca5893ef6fbebc8de8fb">cuCtxGetApiVersion</a></td></tr> <tr><td class="navtab"><a class="qindexHL" href="group__CUDA__CTX_g40b6b141698f76744dea6e39b9a25360.html#g40b6b141698f76744dea6e39b9a25360">cuCtxGetCacheConfig</a></td></tr> <tr><td class="navtab"><a class="qindex" href="group__CUDA__CTX_g8f13165846b73750693640fb3e8380d0.html#g8f13165846b73750693640fb3e8380d0">cuCtxGetCurrent</a></td></tr> <tr><td class="navtab"><a class="qindex" href="group__CUDA__CTX_g4e84b109eba36cdaaade167f34ae881e.html#g4e84b109eba36cdaaade167f34ae881e">cuCtxGetDevice</a></td></tr> <tr><td class="navtab"><a class="qindex" 
href="group__CUDA__CTX_g9f2d47d1745752aa16da7ed0d111b6a8.html#g9f2d47d1745752aa16da7ed0d111b6a8">cuCtxGetLimit</a></td></tr> <tr><td class="navtab"><a class="qindex" href="group__CUDA__CTX_g2fac188026a062d92e91a8687d0a7902.html#g2fac188026a062d92e91a8687d0a7902">cuCtxPopCurrent</a></td></tr> <tr><td class="navtab"><a class="qindex" href="group__CUDA__CTX_gb02d4c850eb16f861fe5a29682cc90ba.html#gb02d4c850eb16f861fe5a29682cc90ba">cuCtxPushCurrent</a></td></tr> <tr><td class="navtab"><a class="qindex" href="group__CUDA__CTX_g54699acf7e2ef27279d013ca2095f4a3.html#g54699acf7e2ef27279d013ca2095f4a3">cuCtxSetCacheConfig</a></td></tr> <tr><td class="navtab"><a class="qindex" href="group__CUDA__CTX_gbe562ee6258b4fcc272ca6478ca2a2f7.html#gbe562ee6258b4fcc272ca6478ca2a2f7">cuCtxSetCurrent</a></td></tr> <tr><td class="navtab"><a class="qindex" href="group__CUDA__CTX_g0651954dfb9788173e60a9af7201e65a.html#g0651954dfb9788173e60a9af7201e65a">cuCtxSetLimit</a></td></tr> <tr><td class="navtab"><a class="qindex" href="group__CUDA__CTX_g7a54725f28d34b8c6299f0c6ca579616.html#g7a54725f28d34b8c6299f0c6ca579616">cuCtxSynchronize</a></td></tr> </table> </div> </td> <td valign="top"> <a class="anchor" name="g40b6b141698f76744dea6e39b9a25360"></a><!-- doxytag: member="cuda.h::cuCtxGetCacheConfig" ref="g40b6b141698f76744dea6e39b9a25360" args="(CUfunc_cache *pconfig)" --> <div class="memitem"> <div class="memproto"> <table class="memname"> <tr> <td class="memname"><a class="el" href="group__CUDA__TYPES_g09da14df1a751dcbfeccb9cf0073d64c.html#g09da14df1a751dcbfeccb9cf0073d64c">CUresult</a> cuCtxGetCacheConfig </td> <td>(</td> <td class="paramtype"><a class="el" href="group__CUDA__TYPES_g80614321bca564dd0d73d93017a8cb69.html#g80614321bca564dd0d73d93017a8cb69">CUfunc_cache</a> * </td> <td class="paramname"> <em>pconfig</em> </td> <td> ) </td> <td></td> </tr> </table> </div> <div class="memdoc"> <p> On devices where the L1 cache and shared memory use the same hardware resources, this function 
returns through <code>pconfig</code> the preferred cache configuration for the current context. This is only a preference. The driver will use the requested configuration if possible, but it is free to choose a different configuration if required to execute functions.<p> This will return a <code>pconfig</code> of <a class="el" href="group__CUDA__TYPES_g5d731dfd360f2a68ae45a4df46089af4.html#gg5d731dfd360f2a68ae45a4df46089af447d2f367dc3965c27ff748688229dc22">CU_FUNC_CACHE_PREFER_NONE</a> on devices where the sizes of the L1 cache and shared memory are fixed.<p> The supported cache configurations are:<ul> <li><a class="el" href="group__CUDA__TYPES_g5d731dfd360f2a68ae45a4df46089af4.html#gg5d731dfd360f2a68ae45a4df46089af447d2f367dc3965c27ff748688229dc22">CU_FUNC_CACHE_PREFER_NONE</a>: no preference for shared memory or L1 (default)</li><li><a class="el" href="group__CUDA__TYPES_g5d731dfd360f2a68ae45a4df46089af4.html#gg5d731dfd360f2a68ae45a4df46089af4712f43defb051d7985317bce426cccc8">CU_FUNC_CACHE_PREFER_SHARED</a>: prefer larger shared memory and smaller L1 cache</li><li><a class="el" href="group__CUDA__TYPES_g5d731dfd360f2a68ae45a4df46089af4.html#gg5d731dfd360f2a68ae45a4df46089af4b1e6c4e889e1a70ed5283172be08f6a5">CU_FUNC_CACHE_PREFER_L1</a>: prefer larger L1 cache and smaller shared memory</li><li><a class="el" href="group__CUDA__TYPES_g5d731dfd360f2a68ae45a4df46089af4.html#gg5d731dfd360f2a68ae45a4df46089af44434321280821d844a15b02e4d6c80a9">CU_FUNC_CACHE_PREFER_EQUAL</a>: prefer equal-sized L1 cache and shared memory</li></ul> <p> <dl compact><dt><b>Parameters:</b></dt><dd> <table border="0" cellspacing="2" cellpadding="0"> <tr><td valign="top"></td><td valign="top"><em>pconfig</em> </td><td>- Returned cache configuration</td></tr> </table> </dl> <dl class="return" compact><dt><b>Returns:</b></dt><dd><a class="el" href="group__CUDA__TYPES_g0cdead942fd5028d157641eef6bdeeaa.html#gg0cdead942fd5028d157641eef6bdeeaaa0eed720f8a87cd1c5fd1c453bc7a03d">CUDA_SUCCESS</a>, <a 
class="el" href="group__CUDA__TYPES_g0cdead942fd5028d157641eef6bdeeaa.html#gg0cdead942fd5028d157641eef6bdeeaaacf52f132faf29b473cdda6061f0f44a">CUDA_ERROR_DEINITIALIZED</a>, <a class="el" href="group__CUDA__TYPES_g0cdead942fd5028d157641eef6bdeeaa.html#gg0cdead942fd5028d157641eef6bdeeaa8feb999f0af99b4a25ab26b3866f4df8">CUDA_ERROR_NOT_INITIALIZED</a>, <a class="el" href="group__CUDA__TYPES_g0cdead942fd5028d157641eef6bdeeaa.html#gg0cdead942fd5028d157641eef6bdeeaaa484e9af32c1e9893ff21f0e0191a12d">CUDA_ERROR_INVALID_CONTEXT</a>, <a class="el" href="group__CUDA__TYPES_g0cdead942fd5028d157641eef6bdeeaa.html#gg0cdead942fd5028d157641eef6bdeeaa90696c86fcee1f536a1ec7d25867feeb">CUDA_ERROR_INVALID_VALUE</a> </dd></dl> <dl class="note" compact><dt><b>Note:</b></dt><dd>This function may also return error codes from previous, asynchronous launches.</dd></dl> <dl class="see" compact><dt><b>See also:</b></dt><dd><a class="el" href="group__CUDA__CTX_g65dc0012348bc84810e2103a40d8e2cf.html#g65dc0012348bc84810e2103a40d8e2cf" title="Create a CUDA context.">cuCtxCreate</a>, <a class="el" href="group__CUDA__CTX_g27a365aebb0eb548166309f58a1e8b8e.html#g27a365aebb0eb548166309f58a1e8b8e" title="Destroy a CUDA context.">cuCtxDestroy</a>, <a class="el" href="group__CUDA__CTX_g088a90490dafca5893ef6fbebc8de8fb.html#g088a90490dafca5893ef6fbebc8de8fb" title="Gets the context's API version.">cuCtxGetApiVersion</a>, <a class="el" href="group__CUDA__CTX_g4e84b109eba36cdaaade167f34ae881e.html#g4e84b109eba36cdaaade167f34ae881e" title="Returns the device ID for the current context.">cuCtxGetDevice</a>, <a class="el" href="group__CUDA__CTX_g9f2d47d1745752aa16da7ed0d111b6a8.html#g9f2d47d1745752aa16da7ed0d111b6a8" title="Returns resource limits.">cuCtxGetLimit</a>, <a class="el" href="group__CUDA__CTX_g2fac188026a062d92e91a8687d0a7902.html#g2fac188026a062d92e91a8687d0a7902" title="Pops the current CUDA context from the current CPU thread.">cuCtxPopCurrent</a>, <a class="el" 
href="group__CUDA__CTX_gb02d4c850eb16f861fe5a29682cc90ba.html#gb02d4c850eb16f861fe5a29682cc90ba" title="Pushes a context on the current CPU thread.">cuCtxPushCurrent</a>, <a class="el" href="group__CUDA__CTX_g54699acf7e2ef27279d013ca2095f4a3.html#g54699acf7e2ef27279d013ca2095f4a3" title="Sets the preferred cache configuration for the current context.">cuCtxSetCacheConfig</a>, <a class="el" href="group__CUDA__CTX_g0651954dfb9788173e60a9af7201e65a.html#g0651954dfb9788173e60a9af7201e65a" title="Set resource limits.">cuCtxSetLimit</a>, <a class="el" href="group__CUDA__CTX_g7a54725f28d34b8c6299f0c6ca579616.html#g7a54725f28d34b8c6299f0c6ca579616" title="Block for a context's tasks to complete.">cuCtxSynchronize</a>, <a class="el" href="group__CUDA__EXEC_g40f8c11e81def95dc0072a375f965681.html#g40f8c11e81def95dc0072a375f965681" title="Sets the preferred cache configuration for a device function.">cuFuncSetCacheConfig</a> </dd></dl> </div> </div><p> </td> </tr> </table> </div> <hr size="1"><address style="text-align: right;"><small> Generated by Doxygen for NVIDIA CUDA Library <a href="http://www.nvidia.com/cuda"><img src="nvidia_logo.jpg" alt="NVIDIA" align="middle" border="0" height="80"></a></small></address> </body> </html>
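<div class="contents"> <p>As a usage sketch (hypothetical host code, not part of this reference; it assumes the driver API has already been initialized with <code>cuInit</code> and a context has been created and made current, and it abbreviates error handling):</p> <div class="fragment"><pre class="fragment">
#include &lt;cuda.h&gt;
#include &lt;stdio.h&gt;

/* Hypothetical helper: query and report the current context's
   preferred cache configuration. Requires a CUDA-capable device
   and an active driver context to actually run. */
static void reportCacheConfig(void)
{
    CUfunc_cache config;
    CUresult status = cuCtxGetCacheConfig(&amp;config);
    if (status != CUDA_SUCCESS) {
        fprintf(stderr, "cuCtxGetCacheConfig failed: %d\n", (int)status);
        return;
    }
    /* CU_FUNC_CACHE_PREFER_NONE is reported both when no preference has
       been set and on devices whose L1/shared memory sizes are fixed. */
    if (config == CU_FUNC_CACHE_PREFER_SHARED)
        printf("Context prefers larger shared memory\n");
    else if (config == CU_FUNC_CACHE_PREFER_L1)
        printf("Context prefers larger L1 cache\n");
    else if (config == CU_FUNC_CACHE_PREFER_EQUAL)
        printf("Context prefers equal-sized L1 and shared memory\n");
    else
        printf("No cache preference (or fixed-size L1/shared memory)\n");
}
</pre></div> <p>A preference set earlier with <code>cuCtxSetCacheConfig</code> would be reflected by this query; because it is only a preference, the value returned describes what the driver will try to use, not necessarily the configuration currently in effect for a given function.</p> </div>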