ATLAS Offline Software
IterateUntilCondition::Holder< Condition, Before, After, Funcs > Struct Template Reference

Condition, Before, After and Funcs must all be functor classes.

#include <IterateUntilCondition.h>


Static Public Member Functions

template<class ... Args>
static void execute (const bool use_native_sync, const dim3 &grid_size, const dim3 &block_size, size_t shared_memory, cudaStream_t stream, Storage *gpu_ptr, Args ... args)

Detailed Description

template<class Condition, class Before, class After, class ... Funcs>
struct IterateUntilCondition::Holder< Condition, Before, After, Funcs >

Condition, Before, After and Funcs must all be functor classes.

Each functor is invoked with two unsigned ints giving the grid size and the block index (for simplicity, only 1D grids are handled), a reference to a mutable Condition instance (except for Condition itself), and any arguments passed to this Holder's execute. The functors should not rely on the actual block indices, but the thread indices within a block are respected. Condition must return a bool, with true meaning the iteration has finished; all of the others return void. Condition may be (locally) stateful, since the same local instance is reused throughout the iterations, while the others must be stateless (Funcs in particular are constructed anew every iteration).

Definition at line 283 of file IterateUntilCondition.h.

Member Function Documentation

◆ execute()

template<class Condition, class Before, class After, class ... Funcs>
template<class ... Args>
void IterateUntilCondition::Holder< Condition, Before, After, Funcs >::execute ( const bool use_native_sync,
const dim3 & grid_size,
const dim3 & block_size,
size_t shared_memory,
cudaStream_t stream,
Storage * gpu_ptr,
Args ... args )
inline, static

Definition at line 286 of file IterateUntilCondition.h.

293 {
294#if CALORECGPU_ITERATE_UNTIL_CONDITION_INCLUDE_ASSERTS
296 assert(grid_size.y == 1);
297 assert(grid_size.z == 1);
298#endif
299
300 if (use_native_sync)
301 {
302 void * arg_ptrs[] = { static_cast<void *>(&args)... };
303
305 grid_size,
307 arg_ptrs,
309 stream);
310 }
311 else
312 {
313 cudaMemsetAsync(static_cast<BasicStorage *>(gpu_ptr), 0, sizeof(BasicStorage), stream);
314
316 }
317 }

The documentation for this struct was generated from the following file: IterateUntilCondition.h