libjmmcg
release_579_6_g8cffd
A C++ library containing an eclectic mix of useful, advanced components.
The fundamental way to specify the type of thread_pool that is required. More...
#include <thread_pool_aspects.hpp>
Classes
struct | algo_thread_wk
Some classes used as short-hands. More...
struct | algo_thread_wk_buffered
Some classes used as short-hands. More...
struct | thread_wk
Some classes used as short-hands. More...
struct | thread_wk< generic_traits::return_data::joinable, ThrW, WFlg, Del, AtCtr >
Some classes used as short-hands. More...
struct | thread_wk< generic_traits::return_data::nonjoinable, ThrW, WFlg, Del, AtCtr >
Some classes used as short-hands. More...
Public Types
typedef thread_os_traits< API, Mdl > | os_traits |
The all-important os-traits: used to obtain not only the threading model_traits and generic_traits which provide the abstraction to the underlying threading implementation in the api_threading_traits, but also the api_type, and therefore the api_lock_traits which contains the atomic locks and atomic counters used. So: rather important. More...
typedef CFG< os_traits > | cfg_type |
template<class V > | |
using | atomic_wrapper_t = typename os_traits::lock_traits::template atomic_counter_type< V > |
using | async_thread_wk_elem_type = typename private_::closure::thread_wk_async_t< result_traits_, os_traits, default_delete, atomic_wrapper_t, cfg_type > |
Some typedefs used as short-hands. More...
using | thread_wk_elem_type = typename async_thread_wk_elem_type::base_t |
template<class QM > | |
using | thread_pool_queue_details = typename queue_t::template thread_pool_queue_details< QM > |
The specific signalled_work_queue_type to be used in the thread_pool. More...
template<class QM > | |
using | signalled_work_queue_type = typename thread_pool_queue_details< QM >::container_type |
template<class QM > | |
using | exit_requested_type = typename thread_pool_queue_details< QM >::exit_requested_type |
template<class QM > | |
using | have_work_type = typename thread_pool_queue_details< QM >::have_work_type |
template<class QM > | |
using | pool_thread_queue_details = typename queue_t::template pool_thread_queue_details< QM > |
template<class QM > | |
using | statistics_type = typename pool_thread_queue_details< QM >::statistics_type |
Static Public Attributes
static constexpr generic_traits::return_data | result_traits_ = RD
Whether the thread_pool should implement returning data from the mutated work, i.e. support execution_contexts. More...
static constexpr unsigned long | GSSk = GSSkSz
The k-size for the batches used to implement GSS(k) scheduling; if >1, this is effectively the baker's ticket scheduling scheme. More...
static constexpr pool_traits::priority_mode_t | priority_mode = queue_t::priority_mode
An accessor for the priority mode that the thread_pool may support. More...
The fundamental way to specify the type of thread_pool that is required.
This class is used to encapsulate the OS-specific threading traits, atomic locks, etc for the thread_pool and other dependent classes.
Definition at line 639 of file thread_pool_aspects.hpp.
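To illustrate the pattern only (none of the names below belong to libjmmcg), here is a minimal, self-contained sketch of the traits-bundle idiom that pool_aspects follows: all of the policy choices (OS traits, batch size, statistics, ...) are collapsed into a single class, which dependent classes then take as their sole template parameter.

#include <cstddef>
#include <iostream>

// Illustrative stand-ins for the library's os-traits and statistics policies.
struct fake_os_traits {
    static constexpr char const* name() noexcept { return "fake_os"; }
};
template<class OSTraits>
struct fake_statistics {
    unsigned long work_added{};
};

// The traits bundle: one type carries every policy choice.
template<class OSTraits, std::size_t GSSk, template<class> class Stats>
struct example_aspects {
    using os_traits = OSTraits;
    static constexpr std::size_t batch_size = GSSk;
    using statistics_type = Stats<os_traits>;
};

// A dependent class only ever mentions the bundle, never the individual policies.
template<class Aspects>
struct example_pool {
    typename Aspects::statistics_type stats{};
};

int main() {
    using aspects = example_aspects<fake_os_traits, 1, fake_statistics>;
    example_pool<aspects> pool;
    std::cout << aspects::os_traits::name() << ' ' << aspects::batch_size
              << ' ' << pool.stats.work_added << '\n';
    return 0;
}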
using jmmcg::LIBJMMCG_VER_NAMESPACE::ppd::pool_aspects< RD, API, Mdl, PM, Comp, GSSkSz, Stats, CFG >::async_thread_wk_elem_type = typename private_::closure::thread_wk_async_t<result_traits_, os_traits, default_delete, atomic_wrapper_t, cfg_type> |
Some typedefs used as short-hands.
Definition at line 659 of file thread_pool_aspects.hpp.
using jmmcg::LIBJMMCG_VER_NAMESPACE::ppd::pool_aspects< RD, API, Mdl, PM, Comp, GSSkSz, Stats, CFG >::atomic_wrapper_t = typename os_traits::lock_traits::template atomic_counter_type<V> |
Definition at line 656 of file thread_pool_aspects.hpp.
typedef CFG<os_traits> jmmcg::LIBJMMCG_VER_NAMESPACE::ppd::pool_aspects< RD, API, Mdl, PM, Comp, GSSkSz, Stats, CFG >::cfg_type |
Definition at line 654 of file thread_pool_aspects.hpp.
using jmmcg::LIBJMMCG_VER_NAMESPACE::ppd::pool_aspects< RD, API, Mdl, PM, Comp, GSSkSz, Stats, CFG >::exit_requested_type = typename thread_pool_queue_details<QM>::exit_requested_type |
Definition at line 749 of file thread_pool_aspects.hpp.
using jmmcg::LIBJMMCG_VER_NAMESPACE::ppd::pool_aspects< RD, API, Mdl, PM, Comp, GSSkSz, Stats, CFG >::have_work_type = typename thread_pool_queue_details<QM>::have_work_type |
Definition at line 751 of file thread_pool_aspects.hpp.
typedef thread_os_traits<API, Mdl> jmmcg::LIBJMMCG_VER_NAMESPACE::ppd::pool_aspects< RD, API, Mdl, PM, Comp, GSSkSz, Stats, CFG >::os_traits |
The all-important os-traits: used to obtain not only the threading model_traits and generic_traits which provide the abstraction to the underlying threading implementation in the api_threading_traits, but also the api_type, and therefore the api_lock_traits which contains the atomic locks and atomic counters used. So: rather important.
Definition at line 653 of file thread_pool_aspects.hpp.
using jmmcg::LIBJMMCG_VER_NAMESPACE::ppd::pool_aspects< RD, API, Mdl, PM, Comp, GSSkSz, Stats, CFG >::pool_thread_queue_details = typename queue_t::template pool_thread_queue_details<QM> |
Definition at line 753 of file thread_pool_aspects.hpp.
using jmmcg::LIBJMMCG_VER_NAMESPACE::ppd::pool_aspects< RD, API, Mdl, PM, Comp, GSSkSz, Stats, CFG >::signalled_work_queue_type = typename thread_pool_queue_details<QM>::container_type |
Definition at line 747 of file thread_pool_aspects.hpp.
using jmmcg::LIBJMMCG_VER_NAMESPACE::ppd::pool_aspects< RD, API, Mdl, PM, Comp, GSSkSz, Stats, CFG >::statistics_type = typename pool_thread_queue_details<QM>::statistics_type |
Note that the parameter to Stats is not atomic: by design, performance has been preferred over accuracy. Locking reduces performance, and this library has been designed to be fast, so the statistics gathering is consequently less accurate; in particular the figures may be under-estimates. This is not as bad as it first appears, as most SMP architectures implement some form of cache-coherency protocol (e.g. MESI or MOESI) that corrects some of these inaccuracies.
A consequence of this is that 'valgrind --tool=helgrind' will report potential race-conditions if, for example, basic_statistics is used. This is not a problem: for speed, basic_statistics does not add any locking, so the race-conditions are to be expected. Please ignore those warnings, or use the no_statistics class instead.
Definition at line 763 of file thread_pool_aspects.hpp.
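As an illustration of the trade-off described above (and of why helgrind complains), here is a self-contained sketch of a deliberately non-locking statistics counter; it is not the library's basic_statistics, merely an analogue of the idea.

#include <iostream>
#include <thread>
#include <vector>

// No atomics, no locks: updates from different threads may be lost, so the
// total may be an under-estimate. helgrind will flag the increment as a race.
struct racy_statistics {
    unsigned long work_added{};
    void added_work() noexcept { ++work_added; }
};

int main() {
    racy_statistics stats;
    std::vector<std::thread> threads;
    for(int t = 0; t < 4; ++t) {
        threads.emplace_back([&stats] {
            for(int i = 0; i < 100000; ++i) {
                stats.added_work();
            }
        });
    }
    for(auto& th : threads) {
        th.join();
    }
    // Likely prints less than 400000: the cost of avoiding locking.
    std::cout << stats.work_added << '\n';
    return 0;
}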
using jmmcg::LIBJMMCG_VER_NAMESPACE::ppd::pool_aspects< RD, API, Mdl, PM, Comp, GSSkSz, Stats, CFG >::thread_pool_queue_details = typename queue_t::template thread_pool_queue_details<QM> |
The specific signalled_work_queue_type to be used in the thread_pool.
This class should combine a container with an atomic event. The event should be set when there are items in the queue and reset when the container becomes empty. This would allow threads to atomically wait upon the container for work to be added to it.
Definition at line 745 of file thread_pool_aspects.hpp.
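As a sketch of the contract described above, the following self-contained example combines a container with a signal that is raised whilst the container is non-empty, so that consumer threads can wait for work. It stands in for, but is not, the library's signalled_work_queue_type, and it uses std::condition_variable rather than an atomic event.

#include <condition_variable>
#include <deque>
#include <iostream>
#include <mutex>

template<class Work>
class signalled_queue {
public:
    void push_back(Work w) {
        {
            std::lock_guard<std::mutex> lock(mutex_);
            queue_.push_back(std::move(w));
        }
        have_work_.notify_one();    // "Set the event": there is now work.
    }
    Work pop_front() {    // Blocks until work is available.
        std::unique_lock<std::mutex> lock(mutex_);
        have_work_.wait(lock, [this] { return !queue_.empty(); });
        Work w = std::move(queue_.front());
        queue_.pop_front();    // The "event" is implicitly reset once empty.
        return w;
    }

private:
    std::mutex mutex_;
    std::condition_variable have_work_;
    std::deque<Work> queue_;
};

int main() {
    signalled_queue<int> q;
    q.push_back(42);
    std::cout << q.pop_front() << '\n';
    return 0;
}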
using jmmcg::LIBJMMCG_VER_NAMESPACE::ppd::pool_aspects< RD, API, Mdl, PM, Comp, GSSkSz, Stats, CFG >::thread_wk_elem_type = typename async_thread_wk_elem_type::base_t |
Definition at line 660 of file thread_pool_aspects.hpp.
static constexpr unsigned long jmmcg::LIBJMMCG_VER_NAMESPACE::ppd::pool_aspects< RD, API, Mdl, PM, Comp, GSSkSz, Stats, CFG >::GSSk = GSSkSz
The k-size for the batches used to implement GSS(k) scheduling; if >1, this is effectively the baker's ticket scheduling scheme.
The size of the batch to be taken in the GSS(k), or baker's, scheduling algorithm. Note that this is what I term "front_batch"ing: the batching occurs when tasks are extracted from the signalled_work_queue_type in the thread_pool, as opposed to when they are added to the thread_pool. A value of zero is not allowed. With an average optimizing compiler there should be no performance loss for a batch-size of one, and higher batch sizes should simply result in reduced contention on the signalled_work_queue_type within the thread_pool. A template parameter is used so that the implementation can allocate a fixed-size array of tasks on the stack, avoiding calls to the memory manager and reducing locking; the converse would defeat the point of GSS(k) scheduling, which is to reduce lock contention!
If GSSk>1 and the first closure_base-derived closure depends upon a later job to complete (with a dependency that is not managed by execution_contexts, i.e. a back-edge in the control-dependency graph, i.e. not a strictly nested dependency), then that sub-tree of dependent closure_bases will deadlock. This is because the processing loop in pool_thread::process() will wait for the first closure_base to complete, which depends upon the second (or later) closure_base in the batch, which will not be executed because the earlier closure_base is preventing that loop from continuing. Therefore, for GSSk>1, one must ensure that the dependency tree of the closure_bases has been carefully constructed. If all is well in sequential_mode yet fails with GSSk>1 using platform_api, try GSSk=1: if that works, this is your issue.
Definition at line 650 of file thread_pool_aspects.hpp.
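The following self-contained sketch illustrates "front_batch"ing with GSSk=3 and, via the strictly in-order execution of the batch, why a closure that waits for a later closure in the same batch would deadlock. It is only an illustration, not the processing loop in pool_thread::process().

#include <array>
#include <deque>
#include <functional>
#include <iostream>

int main() {
    constexpr std::size_t GSSk = 3;
    std::deque<std::function<void()>> queue;
    for(int i = 0; i < 7; ++i) {
        queue.push_back([i] { std::cout << "task " << i << '\n'; });
    }
    while(!queue.empty()) {
        // Up to GSSk tasks are moved into a fixed-size, stack-allocated array,
        // so the batch itself needs no heap allocation and only one pass of
        // contention on the shared queue (locking elided in this sketch).
        std::array<std::function<void()>, GSSk> batch;
        std::size_t taken = 0;
        while(taken < GSSk && !queue.empty()) {
            batch[taken++] = std::move(queue.front());
            queue.pop_front();
        }
        for(std::size_t i = 0; i < taken; ++i) {
            // Executed strictly in order: task i must finish before task i+1
            // starts, hence a wait on a later batch member can never be satisfied.
            batch[i]();
        }
    }
    return 0;
}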
static constexpr pool_traits::priority_mode_t jmmcg::LIBJMMCG_VER_NAMESPACE::ppd::pool_aspects< RD, API, Mdl, PM, Comp, GSSkSz, Stats, CFG >::priority_mode = queue_t::priority_mode
An accessor for the priority mode that the thread_pool may support.
Definition at line 766 of file thread_pool_aspects.hpp.
static constexpr generic_traits::return_data jmmcg::LIBJMMCG_VER_NAMESPACE::ppd::pool_aspects< RD, API, Mdl, PM, Comp, GSSkSz, Stats, CFG >::result_traits_ = RD
Whether the thread_pool should implement returning data from the mutated work, i.e. support execution_contexts.
Definition at line 641 of file thread_pool_aspects.hpp.
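As an analogy only (using the standard library, not the library's execution_context), the following sketch contrasts the two modes: joinable work hands a handle back to the caller through which the result of the mutated work may later be obtained, whereas nonjoinable work is fire-and-forget and returns nothing.

#include <future>
#include <iostream>
#include <thread>

int main() {
    // "joinable": the result of the work is transferred back to the caller,
    // std::future standing in for the library's execution_context.
    std::future<int> result = std::async(std::launch::async, [] { return 6 * 7; });
    std::cout << result.get() << '\n';

    // "nonjoinable": the work runs for its side effects only.
    std::thread fire_and_forget([] { std::cout << "done\n"; });
    fire_and_forget.join();    // Joined here only so the sketch exits cleanly.
    return 0;
}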