Parallel Iterations
Taskflow provides template functions that construct tasks to perform parallel iterations over a range of items.
Index-based Parallel Iterations
Index-based parallel-for performs parallel iterations over a range [first, last) with the given step size. The task created by tf::Taskflow::for_each_index represents parallel execution of the following loop:
// positive step
for(auto i=first; i<last; i+=step) {
  callable(i);
}

// negative step
for(auto i=first; i>last; i+=step) {
  callable(i);
}
We support only integer-based ranges. The range can go in either the positive or the negative direction.
taskflow.for_each_index(0, 100, 2, [](int i) { });    // 50 loops with a + step
taskflow.for_each_index(100, 0, -2, [](int i) { });   // 50 loops with a - step
Notice that both the positive and the negative direction are defined in terms of the range [first, last), where last is excluded. In the positive case, the 50 items are 0, 2, 4, 6, 8, ..., 96, 98. In the negative case, the 50 items are 100, 98, 96, 94, ..., 4, 2. An example of the Taskflow graph for the positive case under 12 workers is depicted below:
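A minimal, self-contained sketch of a program that creates and runs both of these parallel-for tasks is shown below; the tf::Executor-based execution and the <taskflow/taskflow.hpp> header path are assumptions based on common Taskflow usage rather than part of the example above:

#include <taskflow/taskflow.hpp>
#include <iostream>

int main() {

  tf::Executor executor;
  tf::Taskflow taskflow;

  // 50 iterations in the positive direction: 0, 2, 4, ..., 98
  taskflow.for_each_index(0, 100, 2, [](int i) {
    std::cout << "positive-step iteration on index " << i << '\n';
  });

  // 50 iterations in the negative direction: 100, 98, 96, ..., 2
  taskflow.for_each_index(100, 0, -2, [](int i) {
    std::cout << "negative-step iteration on index " << i << '\n';
  });

  executor.run(taskflow).wait();

  return 0;
}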
The index types, B, E, and S, are templates to preserve the variable types, and their underlying types must be of the same integral type (e.g., int, size_t, unsigned). By default, tf::Taskflow::for_each_index captures the range arguments by value. To create a parallel-for task whose range is resolved when the task runs, for example by a preceding task, wrap the arguments in std::ref:
int* vec;
int first, last;

auto init = taskflow.emplace([&](){
  first = 0;
  last  = 1000;
  vec   = new int[1000];
});

auto pf = taskflow.for_each_index(std::ref(first), std::ref(last), 1,
  [&] (int i) {
    std::cout << "parallel iteration on index " << vec[i] << '\n';
  }
);

// wrong! must use std::ref, or first and last are captured by copy
// auto pf = taskflow.for_each_index(first, last, 1, [&](int i) {
//   std::cout << "parallel iteration on index " << vec[i] << '\n';
// });

init.precede(pf);
When init finishes, the parallel-for task pf will see first as 0 and last as 1000 and perform parallel iterations over the 1000 items. This property is especially important for task graph parallelism, because users can define end-to-end parallelism through stateful closures that marshal parameter exchange between dependent tasks.
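As a sketch of such end-to-end parallelism, a third task can consume the data produced by the parallel-for once it completes. The fill and done task names and the 2 * i computation below are illustrative, not part of the example above:

int* vec = nullptr;
int first = 0, last = 0;

// decide the range and allocate the data at runtime
tf::Task init = taskflow.emplace([&](){
  first = 0;
  last  = 1000;
  vec   = new int[1000];
});

// fill each element in parallel; std::ref defers reading first and last
// until the parallel-for task actually runs
tf::Task fill = taskflow.for_each_index(std::ref(first), std::ref(last), 1, [&](int i){
  vec[i] = 2 * i;
});

// consume the results and release the memory after all iterations finish
tf::Task done = taskflow.emplace([&](){
  std::cout << "vec[999] = " << vec[999] << '\n';
  delete [] vec;
});

init.precede(fill);
fill.precede(done);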
Iterator-based Parallel Iterations
Iterator-based parallel-for performs parallel iterations over a range specified by two STL-styled iterators, first and last. The task created by tf::Taskflow::for_each represents parallel execution of the following loop:
for(auto i=first; i<last; i++) {
  callable(*i);
}
By default, tf::Taskflow::for_each applies the callable to each object obtained by dereferencing an iterator in the range [first, last). It is the user's responsibility to ensure the range remains valid throughout the execution of the parallel-for task. Iterators must have the post-increment operator (++) defined. This version of parallel-for applies to all iterable STL containers.
std::vector<int> vec = {1, 2, 3, 4, 5};

taskflow.for_each(vec.begin(), vec.end(), [](int i){
  std::cout << "parallel for on item " << i << '\n';
});

std::list<std::string> list = {"hi", "from", "t", "a", "s", "k", "f", "low"};

taskflow.for_each(list.begin(), list.end(), [](const std::string& str){
  std::cout << "parallel for on item " << str << '\n';
});
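As another illustration that any iterable STL container works, the sketch below iterates over an associative container; the container choice and its contents are made-up examples, and the callable simply receives the key-value pair obtained by dereferencing each iterator:

// requires <unordered_map>, <string>, and <iostream>
std::unordered_map<std::string, int> inventory = {
  {"apple", 3}, {"banana", 5}, {"cherry", 7}
};

// each parallel iteration receives one key-value pair of the map
taskflow.for_each(inventory.begin(), inventory.end(),
  [](const std::pair<const std::string, int>& item){
    std::cout << item.first << " -> " << item.second << '\n';
  }
);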
Similar to index-based parallel-for, the iterator types are templates to preserve the variable types, so users can leverage stateful closures. For example:
std::vector<int> vec;
std::vector<int>::iterator first, last;

tf::Task init = taskflow.emplace([&](){
  vec.resize(1000);
  first = vec.begin();
  last  = vec.end();
});

tf::Task pf = taskflow.for_each(std::ref(first), std::ref(last), [&](int i) {
  std::cout << "parallel iteration on item " << i << '\n';
});

// wrong! must use std::ref, or first and last are captured by copy
// tf::Task pf = taskflow.for_each(first, last, [&](int i) {
//   std::cout << "parallel iteration on item " << i << '\n';
// });

init.precede(pf);
When init finishes, the parallel-for task pf will see first pointing to the beginning of vec and last pointing to the end of vec and perform parallel iterations over the 1000 items. The two tasks form an end-to-end task graph where the parameters of the parallel-for are computed on the fly.
Partition Algorithms
At runtime, the parallel-for task automatically partitions the range into chunks and assigns each chunk to a task in the spawned subflow. Inspired by the scheduling algorithms of OpenMP, we support three partition algorithms: static partition, dynamic partition, and guided partition.
size_t first = 0, last = 1000, step = 1, chunk_size = 100;

auto task1 = taskflow.for_each_index_static (first, last, step, [](auto){ }, chunk_size);
auto task2 = taskflow.for_each_index_dynamic(first, last, step, [](auto){ }, chunk_size);
auto task3 = taskflow.for_each_index_guided (first, last, step, [](auto){ }, chunk_size);

// same syntax for the iterator-based parallel-for
// ...
Each of these methods takes an additional unsigned argument that specifies the chunk size.
- The static partition algorithm divides the workload into equal-size chunks. If the given chunk size is zero, it distributes the workload evenly across workers. Static partition has the lowest scheduling overhead but the least optimal workload distribution (i.e., load balancing).
- The dynamic partition algorithm dynamically assigns chunks to threads using a data dispatching loop. Dynamic partition has the highest scheduling overhead but the best workload distribution, in particular when the chunk size equals one.
- The guided partition algorithm (1) starts with big chunks proportional to the number of unassigned iterations divided by the number of workers and (2) then makes them progressively smaller until the chunk size reaches the given size. Guided partition strikes a balance between scheduling overhead and workload distribution. A rough model of this shrinking behavior is sketched after this list.
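The sketch below is not Taskflow code; it is a rough model, assuming 4 workers and a simple max(remaining/workers, chunk_size) rule, of how static and guided partitioning might split 1000 iterations. It only visualizes the difference described above and does not reproduce the library's exact chunking:

#include <algorithm>
#include <cstddef>
#include <iostream>

int main() {

  const std::size_t N = 1000;        // total iterations
  const std::size_t W = 4;           // number of workers (assumed)
  const std::size_t chunk_size = 100;

  // static partition with chunk size 0: one equal chunk per worker
  std::cout << "static (chunk size 0): " << W << " chunks of " << N / W << " iterations\n";

  // guided partition: chunks start at remaining/W and shrink toward chunk_size
  std::size_t remaining = N;
  std::cout << "guided chunk sizes   :";
  while(remaining > 0) {
    std::size_t chunk = std::min(std::max(remaining / W, chunk_size), remaining);
    std::cout << ' ' << chunk;
    remaining -= chunk;
  }
  std::cout << '\n';

  return 0;
}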
The picture below illustrates the three partition algorithms.

By default, tf::Taskflow::for_each and tf::Taskflow::for_each_index adopt the guided partition algorithm with a chunk size of one.