
OpenMP Reference Sheet for C/C++

Constructs

<parallelize a for loop by breaking apart iterations into chunks>
#pragma omp parallel for [shared(vars), private(vars), firstprivate(vars),
lastprivate(vars), default(shared|none), reduction(op:vars), copyin(vars), if(expr),
ordered, schedule(type[,chunkSize])]
<A,B,C such that total iterations known at start of loop>
for(A=C;A<B;A++) {
   <your code here>

   <force ordered execution of part of the code. A=C will be guaranteed to execute
   before A=C+1>
   #pragma omp ordered {
      <your code here>
   }
}
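
<example - a minimal compilable sketch of the parallel for construct above; the array, its
size, and the printf are illustrative, not part of the reference. Compile with OpenMP
enabled, e.g. gcc -fopenmp>

#include <stdio.h>

int main(void) {
    double data[1000];
    int i;
    /* iterations are split into chunks across the thread team;
       i is private automatically because it is the loop iterator */
    #pragma omp parallel for shared(data)
    for (i = 0; i < 1000; i++) {
        data[i] = i * 2.0;
    }
    printf("data[999] = %f\n", data[999]);
    return 0;
}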

<parallelized sections of code with each section operating in one thread>
#pragma omp parallel sections [shared(vars), private(vars), firstprivate(vars),
lastprivate(vars), default(shared|none), reduction(op:vars), copyin(vars), if(expr)] {
   #pragma omp section {
      <your code here>
   }
   #pragma omp section {
      <your code here>
   }
   ....
}
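
<example - a minimal sketch of parallel sections; the printf bodies are illustrative. Note
that in actual C the brace opens on the line after each pragma>

#include <omp.h>
#include <stdio.h>

int main(void) {
    /* the two sections run concurrently, one section per thread */
    #pragma omp parallel sections
    {
        #pragma omp section
        {
            printf("section A ran on thread %d\n", omp_get_thread_num());
        }
        #pragma omp section
        {
            printf("section B ran on thread %d\n", omp_get_thread_num());
        }
    }
    return 0;
}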

<grand parallelization region with optional work-sharing constructs defining more
specific splitting of work and variables amongst threads. You may use work-sharing
constructs without a grand parallelization region, but they will have no effect (sometimes
useful if you are making OpenMP'able functions but want to leave the creation of threads
to the user of those functions)>
#pragma omp parallel [shared(vars), private(vars), firstprivate(vars), lastprivate(vars),
default(private|shared|none), reduction(op:vars), copyin(vars), if(expr)] {
   <the work-sharing constructs below can appear in any order, are optional, and can
   be used multiple times. Note that no new threads will be created by the constructs.
   They reuse the ones created by the above parallel construct.>

   <your code here (will be executed by all threads)>

   <parallelize a for loop by breaking apart iterations into chunks>
   #pragma omp for [private(vars), firstprivate(vars), lastprivate(vars),
   reduction(op:vars), ordered, schedule(type[,chunkSize]), nowait]
   <A,B,C such that total iterations known at start of loop>
   for(A=C;A<B;A++) {
      <your code here>

      <force ordered execution of part of the code. A=C will be guaranteed to execute
      before A=C+1>
      #pragma omp ordered {
         <your code here>
      }
   }

   <parallelized sections of code with each section operating in one thread>
   #pragma omp sections [private(vars), firstprivate(vars), lastprivate(vars),
   reduction(op:vars), nowait] {
      #pragma omp section {
         <your code here>
      }
      #pragma omp section {
         <your code here>
      }
      ....
   }

   <only one thread will execute the following, NOT necessarily the master thread>
   #pragma omp single {
      <your code here (only executed once)>
   }
}
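
<example - a sketch of one parallel region reusing its thread team for several
work-sharing constructs; the array and printfs are illustrative>

#include <omp.h>
#include <stdio.h>

int main(void) {
    int i, totals[4] = {0, 0, 0, 0};
    #pragma omp parallel shared(totals)
    {
        /* executed by every thread in the team */
        printf("hello from thread %d\n", omp_get_thread_num());

        /* work-sharing: reuses the team created above, makes no new threads */
        #pragma omp for
        for (i = 0; i < 4; i++)
            totals[i] = i * i;

        /* exactly one thread (not necessarily the master) runs this */
        #pragma omp single
        printf("loop done, totals[3] = %d\n", totals[3]);
    }
    return 0;
}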

Directives

shared(vars) <share the same variables between all the threads>
private(vars) <each thread gets a private copy of variables. Note that other than the
   master thread, which uses the original, these variables are not initialized to
   anything.>
firstprivate(vars) <like private, but the variables do get copies of their master thread
   values>
lastprivate(vars) <copy back the last iteration (in a for loop) or last section (in a
   sections) variables to the master thread copy (so it will persist even after the
   parallelization ends)>
default(private|shared|none) <set the default behavior of variables in the parallelization
   construct. shared is the default setting, so only the private and none settings have
   an effect. none forces the user to specify the behavior of variables. Note that even
   with shared, the iterator variable in for loops is still private by necessity>
reduction(op:vars) <vars are treated as private and the specified operation (op, which
   can be +,*,-,&,|,^,&&,||) is performed using the private copies in each thread. The
   master thread copy (which will persist) is updated with the final value.>
copyin(vars) <used to perform the copying of threadprivate vars to the other threads.
   Similar to firstprivate for private vars.>
if(expr) <parallelization will only occur if expr evaluates to true>
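
<example - a sketch of the reduction directive; the loop bound and printf are illustrative>

#include <stdio.h>

int main(void) {
    int i;
    double sum = 0.0;
    /* each thread accumulates into a private copy of sum; the private
       copies are combined with + into the master copy at the end */
    #pragma omp parallel for reduction(+:sum)
    for (i = 1; i <= 100; i++)
        sum += i;
    printf("sum = %f\n", sum);   /* prints 5050.000000 */
    return 0;
}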

schedule(type [,chunkSize]) <thread scheduling model>

   type      chunkSize
   static    number of iterations per thread, pre-assigned at the beginning of the loop
             (typical default spreads the iterations evenly over the processors)
   dynamic   number of iterations handed to a thread whenever it becomes available
             (typical default is 1)
   guided    highly dependent on the specific implementation of OpenMP

nowait <remove the implicit barrier which forces all threads to finish before continuing
   past the construct>

Synchronization/Locking Constructs <may be used almost anywhere, but will
only have effects within parallelization constructs>

<only the master thread will execute the following. Sometimes useful for special handling
of variables which will persist after the parallelization.>
#pragma omp master {
   <your code here (only executed once, and by the master thread)>
}

<mutex lock the region. name allows the creation of unique mutex locks.>
#pragma omp critical [(name)] {
   <your code here (only one thread allowed in at a time)>
}

<force all threads to complete their operations before continuing>
#pragma omp barrier

<like critical, but only works for simple operations and structures contained in one line
of code>
#pragma omp atomic
<simple code operation, ex. a += 3; typical supported operations are
++,--,+,*,-,/,&,^,<<,>>,| on primitive data types>

<force a register flush of the variables so all threads see the same memory>
#pragma omp flush[(vars)]

<applies the private clause to the vars of any future parallelization constructs
encountered (a convenience routine)>
#pragma omp threadprivate(vars)
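
<example - a sketch combining atomic, barrier, and critical inside one parallel region;
the counters and printf are illustrative>

#include <omp.h>
#include <stdio.h>

int main(void) {
    int arrivals = 0;
    double total = 0.0;
    #pragma omp parallel shared(arrivals, total)
    {
        /* atomic: safe one-line update, cheaper than critical */
        #pragma omp atomic
        arrivals++;

        /* barrier: nobody continues until every thread has arrived */
        #pragma omp barrier

        /* critical: arbitrary code, one thread inside at a time */
        #pragma omp critical
        {
            total += omp_get_thread_num();
        }
    }
    printf("%d threads arrived, thread number total = %f\n", arrivals, total);
    return 0;
}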

Function Based Locking <nest versions allow recursive locking and take an
omp_nest_lock_t* instead>

void omp_init_[nest_]lock(omp_lock_t*) <make a generic mutex lock>
void omp_destroy_[nest_]lock(omp_lock_t*) <destroy a generic mutex lock>
void omp_set_[nest_]lock(omp_lock_t*) <block until mutex lock obtained>
void omp_unset_[nest_]lock(omp_lock_t*) <unlock the mutex lock>
int omp_test_[nest_]lock(omp_lock_t*) <is the lock currently locked by somebody?>
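
<example - a sketch of the lock functions guarding a shared total; the arithmetic is
illustrative>

#include <omp.h>
#include <stdio.h>

int main(void) {
    omp_lock_t lock;
    int total = 0;
    omp_init_lock(&lock);
    #pragma omp parallel shared(total)
    {
        /* block until the lock is free, like a named critical section
           but usable across separate functions */
        omp_set_lock(&lock);
        total += omp_get_thread_num();
        omp_unset_lock(&lock);
    }
    omp_destroy_lock(&lock);
    printf("total of thread numbers = %d\n", total);
    return 0;
}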

Settings and Control

int omp_get_num_threads() <returns the number of threads used for the parallel
   region in which the function was called>
int omp_get_thread_num() <get the unique thread number used to handle this
   iteration/section of a parallel construct. You may break up algorithms into parts
   based on this number.>
int omp_in_parallel() <are you in a parallel construct?>
int omp_get_max_threads() <get number of threads OpenMP can make>
int omp_get_num_procs() <get number of processors on this system>
int omp_get_dynamic() <is dynamic scheduling allowed?>
int omp_get_nested() <is nested parallelism allowed?>
double omp_get_wtime() <returns time (in seconds) of the system clock>
double omp_get_wtick() <number of seconds between ticks on the system clock>
void omp_set_num_threads(int) <set number of threads OpenMP can make>
void omp_set_dynamic(int) <allow dynamic scheduling (note this does not make
   dynamic scheduling the default)>
void omp_set_nested(int) <allow nested parallelism; parallel constructs within other
   parallel constructs can make new threads (note this tends to be unimplemented
   in many OpenMP implementations)>

<environment variables - implementation dependent, but here are some common ones>
OMP_NUM_THREADS "number" <maximum number of threads to use>
OMP_SCHEDULE "type,chunkSize" <default #pragma omp schedule settings>
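
<example - a sketch of the query, setting, and timing functions together; the team size
of 4 is illustrative>

#include <omp.h>
#include <stdio.h>

int main(void) {
    double start = omp_get_wtime();
    omp_set_num_threads(4);          /* request a team of 4 threads */
    #pragma omp parallel
    {
        #pragma omp master
        printf("team of %d threads on %d processors\n",
               omp_get_num_threads(), omp_get_num_procs());
    }
    printf("elapsed: %f seconds\n", omp_get_wtime() - start);
    return 0;
}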

Legend

vars is a comma separated list of variables
[optional parameters and directives]
<descriptions, comments, suggestions>
.... above directive can be used multiple times

For mistakes, suggestions, and comments please email e_berta@plutospin.com
