<div class="clearfix">
<div class="field-items">
<div class="field-item even"><span class="date-display-single" property="dc:date" datatype="xsd:dateTime" content="2012-10-31T00:00:00+01:00">31/10/2012</span></div>
</div>
</div>
<div class="clearfix">
<div class="field-items">
<div class="field-item even">
<p>Dear Users,</p>
<p>We would like to inform you that on November 6th at 11 am, Dr. Luca Franci, from the University of Parma, will give a seminar at CINECA titled:</p>
<p> </p>
<p>“Optimization and porting of a numerical code for simulations in General Relativistic Magnetohydrodynamics on CPU/GPU hybrid clusters using the new OpenACC standard”</p>
<p> </p>
<p>If you are interested, you are invited to attend.</p>
<p>Please find the abstract below.</p>
<p> </p>
<p>Best regards,</p>
<p> User Support @ CINECA</p>
<p> </p>
<p> </p>
<p>----------------------------------------------</p>
<p> </p>
<p>ABSTRACT</p>
<p>General Relativistic Magnetohydrodynamics (GRMHD) is the study of relativistic magnetized flows in very strong gravitational fields; it is therefore the appropriate framework for modeling compact objects such as black holes and neutron stars, which are believed to be responsible for many high-energy phenomena in astrophysics.</p>
<p>X-ECHO (Del Zanna et al. 2007, A&A, 473, 1; Bucciantini et al. 2011, A&A, 528, A101) is a numerical code aimed at performing GRMHD simulations, such as the 2D evolution of isolated magnetized neutron stars.</p>
<p>The GRMHD conservation laws are solved within a finite-difference discretization scheme, while a staggered constrained-transport method is used to preserve the divergence-free condition for the magnetic field. Since all the computational operations are local, the problem is naturally data-parallel via domain decomposition. X-ECHO was already MPI-parallelized along one direction of the full 3D domain, but this parallelization was not optimized, so the code could be run only on very few processors (up to 8).</p>
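<p>(As a rough illustration of this kind of one-dimensional domain decomposition, here is a minimal C/MPI sketch, not taken from X-ECHO itself: each rank owns a slab of cells along the decomposed direction and exchanges one ghost layer with its neighbours.)</p>
<pre><code>#include <mpi.h>

#define NX 512            /* global grid size along the decomposed direction */

int main(int argc, char **argv)
{
    int rank, nprocs;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    /* each rank owns NX/nprocs interior cells plus one ghost cell per side */
    int nloc = NX / nprocs;
    double u[nloc + 2];
    for (int i = 1; i <= nloc; i++)
        u[i] = rank * nloc + (i - 1);      /* dummy interior data */

    int left  = (rank > 0)          ? rank - 1 : MPI_PROC_NULL;
    int right = (rank < nprocs - 1) ? rank + 1 : MPI_PROC_NULL;

    /* exchange ghost layers with the neighbouring ranks */
    MPI_Sendrecv(&u[1],        1, MPI_DOUBLE, left,  0,
                 &u[nloc + 1], 1, MPI_DOUBLE, right, 0,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    MPI_Sendrecv(&u[nloc],     1, MPI_DOUBLE, right, 1,
                 &u[0],        1, MPI_DOUBLE, left,  1,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);

    /* ...local finite-difference update of u[1..nloc] would go here... */

    MPI_Finalize();
    return 0;
}
</code></pre>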
<p>GPUs offer a substantially higher peak performance than traditional CPUs and are better suited to the high-throughput, data-parallel problems typical of large-scale simulations: the number of processes in a typical MPI simulation is usually on the order of tens or hundreds, whereas GPUs require tens or hundreds of thousands of threads to be saturated.</p>
<p>OpenACC is a new open parallel programming standard designed to make it easy to take advantage of heterogeneous CPU/GPU computing systems. It allows programmers to add simple directives telling the compiler which regions of code to accelerate, without requiring modifications to the underlying code itself, and it provides portability across operating systems, host CPUs and accelerators.</p>
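<p>(As a simple illustration of this directive-based approach, here is a minimal C sketch, not code from X-ECHO: a compute-intensive loop is offloaded just by annotating it with a pragma, and a compiler without OpenACC support simply ignores the directive and produces an ordinary CPU executable.)</p>
<pre><code>#include <stdio.h>

#define N 1000000

int main(void)
{
    static float x[N], y[N];
    const float a = 2.0f;

    for (int i = 0; i < N; i++) {          /* initialisation on the host */
        x[i] = (float)i;
        y[i] = 1.0f;
    }

    /* ask the compiler to offload the loop to the accelerator;
       the loop body itself is left untouched */
    #pragma acc parallel loop copyin(x) copy(y)
    for (int i = 0; i < N; i++)
        y[i] = a * x[i] + y[i];

    printf("y[N-1] = %f\n", y[N - 1]);
    return 0;
}
</code></pre>
<p>(With an OpenACC-capable compiler, e.g. the PGI compilers, the same source is simply rebuilt with the -acc flag; no CUDA code has to be written by hand.)</p>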
<p> </p>
<p>The X-ECHO code has been optimized and its evolution routine, which is the main computational kernel, has been fully ported to GPUs: now only the reading of the initial model, the setup of the computational grid, the transfers of data between the host and the accelerator, and the I/O are handled by the CPU, while all the calculations are performed by the GPU.</p>
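<p>(A sketch of how the host-device transfers can be kept outside the main time loop; the routine and variable names below, such as evolve, are illustrative assumptions, not the actual X-ECHO interface. A structured data region keeps the arrays resident on the accelerator for the whole evolution and copies the result back only at the end.)</p>
<pre><code>/* illustrative fragment: u holds the variables on a grid of n cells,
   nsteps time steps are evolved; names are assumptions, not X-ECHO code */
void evolve(double *u, double *u_new, int n, int nsteps)
{
    /* copy u to the device once, allocate u_new there, copy u back
       at the end: no host-device traffic inside the time loop */
    #pragma acc data copy(u[0:n]) create(u_new[0:n])
    {
        for (int step = 0; step < nsteps; step++) {

            #pragma acc parallel loop
            for (int i = 1; i < n - 1; i++)     /* toy update stencil */
                u_new[i] = 0.5 * (u[i - 1] + u[i + 1]);

            #pragma acc parallel loop
            for (int i = 1; i < n - 1; i++)     /* prepare the next step */
                u[i] = u_new[i];
        }
    }
}
</code></pre>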
<p> </p>
<p>I am going to show how the code was optimized, the idea behind the new OpenACC standard together with its main requirements and features, some simple examples of its use, and how the code was fully ported to GPUs, presenting the resulting gain in performance.</p>
<p>Lastly, I will show how it would still be possible to improve performance by manually choosing the number of gangs, workers and vectors (corresponding roughly to CUDA thread blocks, warps and threads).</p>
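<p>(For reference, this mapping can be controlled explicitly through OpenACC clauses; the sketch below reuses the simple loop shown earlier, and the specific values are arbitrary examples that would have to be tuned for a given GPU.)</p>
<pre><code>/* num_gangs, num_workers and vector_length map roughly onto CUDA
   thread blocks, warps and threads; the values are arbitrary examples */
void saxpy_tuned(int n, float a, const float *x, float *y)
{
    #pragma acc parallel loop gang worker vector \
                num_gangs(256) num_workers(4) vector_length(128) \
                copyin(x[0:n]) copy(y[0:n])
    for (int i = 0; i < n; i++)
        y[i] = a * x[i] + y[i];
}
</code></pre>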
<div> </div>
</div>
</div>
</div>
<p>-- To unsubscribe from HPC-news send a mail to <a href="mailto:listserv@list.cineca.it?subject=any&body=unsubscribe%20hpc-news">listserv@list.cineca.it</a>.<br />
-- For more information see the documentation at <a href="http://www.hpc.cineca.it/content/stay-tuned">http://www.hpc.cineca.it/content/stay-tuned</a></p>