ATLAS Offline Software
IterateUntilCondition::Holder< Condition, Before, After, Funcs > Struct Template Reference

Condition, Before, After and Funcs must all be functor classes. More...

#include <IterateUntilCondition.h>


Static Public Member Functions

template<class ... Args>
static void execute (const bool use_native_sync, const dim3 &grid_size, const dim3 &block_size, size_t shared_memory, cudaStream_t stream, Storage *gpu_ptr, Args ... args)

Detailed Description

template<class Condition, class Before, class After, class ... Funcs>
struct IterateUntilCondition::Holder< Condition, Before, After, Funcs >

Condition, Before, After and Funcs must all be functor classes.

They receive two unsigned ints for the grid size and the block index (for simplicity, only 1D grids are handled), a reference to a mutable Condition (except for Condition itself), and any arguments passed to this holder's execute. The functors should not use the actual block indices, but the thread indices inside the block are respected. Condition must return a boolean, with true meaning the iterations have finished; all the others return void. Condition may be (locally) stateful, since the same local instance is used throughout the iterations, while the others must be stateless (Funcs being constructed anew every iteration).

Definition at line 266 of file IterateUntilCondition.h.

Member Function Documentation

◆ execute()

template<class Condition, class Before, class After, class ... Funcs>
template<class ... Args>
void IterateUntilCondition::Holder< Condition, Before, After, Funcs >::execute ( const bool use_native_sync,
const dim3 & grid_size,
const dim3 & block_size,
size_t shared_memory,
cudaStream_t stream,
Storage * gpu_ptr,
Args ... args )
inline static

Definition at line 269 of file IterateUntilCondition.h.

{
#if CALORECGPU_ITERATE_UNTIL_CONDITION_INCLUDE_ASSERTS
  assert(grid_size.y == 1);
  assert(grid_size.z == 1);
#endif

  if (use_native_sync)
    {
      void * arg_ptrs[] = { static_cast<void *>(&args)... };

      /* ... */ ( /* ... */
                  grid_size,
                  /* ... */
                  arg_ptrs,
                  /* ... */
                  stream );
    }
  else
    {
      cudaMemsetAsync(static_cast<BasicStorage *>(gpu_ptr), 0, sizeof(BasicStorage), stream);

      /* ... */
    }
}

The documentation for this struct was generated from the following file:
IterateUntilCondition.h