<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN">
<html><head><meta http-equiv="Content-Type" content="text/html;charset=UTF-8">
<title>NVIDIA CUDA Library: cudaThreadSetCacheConfig</title>
<link href="customdoxygen.css" rel="stylesheet" type="text/css">
<link href="tabs.css" rel="stylesheet" type="text/css">
</head><body>
<!-- Generated by Doxygen 1.5.8 -->
<div class="navigation" id="top">
  <div class="tabs">
    <ul>
      <li><a href="index.html"><span>Main&nbsp;Page</span></a></li>
      <li><a href="modules.html"><span>Modules</span></a></li>
      <li><a href="annotated.html"><span>Data&nbsp;Structures</span></a></li>
      <li><a href="pages.html"><span>Related&nbsp;Pages</span></a></li>
    </ul>
  </div>
</div>
<div class="contents">
  <div class="navpath"><a class="el" href="group__CUDART__THREAD__DEPRECATED.html">Thread Management [DEPRECATED]</a>
  </div>
<table cellspacing="0" cellpadding="0" border="0">
  <tr>
   <td valign="top">
      <div class="navtab">
        <table>
          <tr><td class="navtab"><a class="qindex" href="group__CUDART__THREAD__DEPRECATED_gf423ba04af587d42b52799455a7c094d.html#gf423ba04af587d42b52799455a7c094d">cudaThreadExit</a></td></tr>
          <tr><td class="navtab"><a class="qindex" href="group__CUDART__THREAD__DEPRECATED_g7a82ef85e4f7e0cff0a84b2f7f6bc63a.html#g7a82ef85e4f7e0cff0a84b2f7f6bc63a">cudaThreadGetCacheConfig</a></td></tr>
          <tr><td class="navtab"><a class="qindex" href="group__CUDART__THREAD__DEPRECATED_gfd87d16d2bbf4bc41a892f3f75bac5e0.html#gfd87d16d2bbf4bc41a892f3f75bac5e0">cudaThreadGetLimit</a></td></tr>
          <tr><td class="navtab"><a class="qindexHL" href="group__CUDART__THREAD__DEPRECATED_g27d0f538b3018142bf04deae7f02c49e.html#g27d0f538b3018142bf04deae7f02c49e">cudaThreadSetCacheConfig</a></td></tr>
          <tr><td class="navtab"><a class="qindex" href="group__CUDART__THREAD__DEPRECATED_gd636fe22576028cdf3d2c271b544b316.html#gd636fe22576028cdf3d2c271b544b316">cudaThreadSetLimit</a></td></tr>
          <tr><td class="navtab"><a class="qindex" href="group__CUDART__THREAD__DEPRECATED_g6e0c5163e6f959b56b6ae2eaa8483576.html#g6e0c5163e6f959b56b6ae2eaa8483576">cudaThreadSynchronize</a></td></tr>
        </table>
      </div>
   </td>
   <td valign="top">
<a class="anchor" name="g27d0f538b3018142bf04deae7f02c49e"></a><!-- doxytag: member="cuda_runtime_api.h::cudaThreadSetCacheConfig" ref="g27d0f538b3018142bf04deae7f02c49e" args="(enum cudaFuncCache cacheConfig)" -->
<div class="memitem">
<div class="memproto">
      <table class="memname">
        <tr>
          <td class="memname"><a class="el" href="group__CUDART__TYPES_gf599e5b8b829ce7db0f5216928f6ecb6.html#gf599e5b8b829ce7db0f5216928f6ecb6">cudaError_t</a> cudaThreadSetCacheConfig           </td>
          <td>(</td>
          <td class="paramtype">enum <a class="el" href="group__CUDART__TYPES_gb980f35ed69ee7991704de29a13de49b.html#gb980f35ed69ee7991704de29a13de49b">cudaFuncCache</a>&nbsp;</td>
          <td class="paramname"> <em>cacheConfig</em>          </td>
          <td>&nbsp;)&nbsp;</td>
          <td></td>
        </tr>
      </table>
</div>
<div class="memdoc">

<p>
<dl compact><dt><b><a class="el" href="deprecated.html#_deprecated000006">Deprecated:</a></b></dt><dd></dd></dl>
This function is deprecated because its name does not reflect its behavior. Its functionality is identical to the non-deprecated function <a class="el" href="group__CUDART__DEVICE_gac27b566beee1aa9175373bb9e29b8d1.html#gac27b566beee1aa9175373bb9e29b8d1" title="Sets the preferred cache configuration for the current device.">cudaDeviceSetCacheConfig()</a>, which should be used instead.<p>
On devices where the L1 cache and shared memory use the same hardware resources, this sets through <code>cacheConfig</code> the preferred cache configuration for the current device. This is only a preference. The runtime will use the requested configuration if possible, but it is free to choose a different configuration if required to execute the function. Any function preference set via <a class="el" href="group__CUDART__EXECUTION_g400ecf473d4fa8ac6738fbc753d5c288.html#g400ecf473d4fa8ac6738fbc753d5c288">cudaFuncSetCacheConfig (C API)</a> or <a class="el" href="group__CUDART__HIGHLEVEL_ge0969184de8a5c2d809aa8d7d2425484.html#ge0969184de8a5c2d809aa8d7d2425484">cudaFuncSetCacheConfig (C++ API)</a> will be preferred over this device-wide setting. Setting the device-wide cache configuration to <a class="el" href="group__CUDART__TYPES_gb980f35ed69ee7991704de29a13de49b.html#ggb980f35ed69ee7991704de29a13de49b3b4b8c65376ce1ca107be413e15981bc">cudaFuncCachePreferNone</a> will cause subsequent kernel launches to prefer to not change the cache configuration unless required to launch the kernel.<p>
This setting does nothing on devices where the size of the L1 cache and shared memory are fixed.<p>
Launching a kernel with a different preference than the most recent preference setting may insert a device-side synchronization point.<p>
The supported cache configurations are:<ul>
<li><a class="el" href="group__CUDART__TYPES_gb980f35ed69ee7991704de29a13de49b.html#ggb980f35ed69ee7991704de29a13de49b3b4b8c65376ce1ca107be413e15981bc">cudaFuncCachePreferNone</a>: no preference for shared memory or L1 (default)</li><li><a class="el" href="group__CUDART__TYPES_gb980f35ed69ee7991704de29a13de49b.html#ggb980f35ed69ee7991704de29a13de49b84725d25c531f9bafc61ae329afe5b2b">cudaFuncCachePreferShared</a>: prefer larger shared memory and smaller L1 cache</li><li><a class="el" href="group__CUDART__TYPES_gb980f35ed69ee7991704de29a13de49b.html#ggb980f35ed69ee7991704de29a13de49b8ecb48ccbc2230c81528a2c7c695100e">cudaFuncCachePreferL1</a>: prefer larger L1 cache and smaller shared memory</li></ul>
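<p>
As an illustrative sketch (not part of the original reference): selecting one of the configurations above is a single runtime call, made here through the non-deprecated <code>cudaDeviceSetCacheConfig()</code> that this page recommends. The kernel name <code>myKernel</code> is a hypothetical placeholder.</p>

```cuda
#include <cuda_runtime.h>
#include <stdio.h>

// Hypothetical kernel; stands in for any kernel that benefits
// from a larger shared-memory carve-out.
__global__ void myKernel(float *data) { }

int main(void)
{
    // Ask for larger shared memory and smaller L1 cache on the
    // current device. This is only a preference: the runtime may
    // choose a different configuration if required to launch a
    // kernel, and a per-function cudaFuncSetCacheConfig() setting
    // takes precedence over this device-wide one.
    cudaError_t err = cudaDeviceSetCacheConfig(cudaFuncCachePreferShared);
    if (err != cudaSuccess) {
        fprintf(stderr, "cudaDeviceSetCacheConfig: %s\n",
                cudaGetErrorString(err));
        return 1;
    }

    float *d_data;
    cudaMalloc(&d_data, 256 * sizeof(float));
    myKernel<<<1, 256>>>(d_data);   // launched under the new preference
    cudaDeviceSynchronize();
    cudaFree(d_data);
    return 0;
}
```

<p>
On devices with a fixed L1/shared-memory split the call succeeds but has no effect, as noted below.</p>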
<p>
<dl compact><dt><b>Parameters:</b></dt><dd>
  <table border="0" cellspacing="2" cellpadding="0">
    <tr><td valign="top"></td><td valign="top"><em>cacheConfig</em>&nbsp;</td><td>- Requested cache configuration</td></tr>
  </table>
</dl>
<dl class="return" compact><dt><b>Returns:</b></dt><dd><a class="el" href="group__CUDART__TYPES_g3f51e3575c2178246db0a94a430e0038.html#gg3f51e3575c2178246db0a94a430e0038e355f04607d824883b4a50662830d591">cudaSuccess</a>, <a class="el" href="group__CUDART__TYPES_g3f51e3575c2178246db0a94a430e0038.html#gg3f51e3575c2178246db0a94a430e0038ce7993a88ecf2c57b8102d55d997a18c">cudaErrorInitializationError</a> </dd></dl>
<dl class="note" compact><dt><b>Note:</b></dt><dd>This function may also return error codes from previous, asynchronous launches.</dd></dl>
<dl class="see" compact><dt><b>See also:</b></dt><dd><a class="el" href="group__CUDART__DEVICE_gac27b566beee1aa9175373bb9e29b8d1.html#gac27b566beee1aa9175373bb9e29b8d1" title="Sets the preferred cache configuration for the current device.">cudaDeviceSetCacheConfig</a> </dd></dl>

</div>
</div><p>
    </td>
  </tr>
</table>
</div>
<hr size="1"><address style="text-align: right;"><small>
Generated by Doxygen for NVIDIA CUDA Library &nbsp;<a
href="http://www.nvidia.com/cuda"><img src="nvidia_logo.jpg" alt="NVIDIA" align="middle" border="0" height="80"></a></small></address>
</body>
</html>