TGFF Packed Schedules
(Note that some new options may not be
compatible with this feature.)
Packed schedules are a recently added feature of TGFF. With TGFF's
default settings, there are no guarantees of feasibility for the
generated graphs: there may exist a path through the task graph whose
deadline is tighter than the minimal sum of execution times and
communication times for the tasks and communication events along that
path. Using this system, a guarantee of feasibility (on an unlimited
number of resources) could be given by setting deadlines on tasks
based on each task's depth and the sum of the average and multiplier
values for the tasks (this is not an option in the code as provided).
Of course, these deadlines may be extremely generous, and in most
cases instances generated in this way would fall into the
easy-to-solve category. Thus, a new generation technique has been
developed.
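For concreteness, the depth-based deadline rule just described could look like the following sketch. This is illustrative only: the function and variable names are invented, and communication delays are ignored.

```python
# Illustrative sketch (not TGFF code) of the depth-based deadline rule:
# on an unlimited number of resources, and ignoring communication
# delays, a task at depth d can always finish within
# d * (avg + mul), since no task takes longer than avg + mul.

def depth_based_deadlines(parents, avg, mul):
    """parents maps each task to a list of its immediate predecessors."""
    depth = {}

    def d(t):
        if t not in depth:
            depth[t] = 1 + max((d(p) for p in parents[t]), default=0)
        return depth[t]

    return {t: d(t) * (avg + mul) for t in parents}
```

As the text notes, such deadlines are generous; a chain of depth-3 tasks with avg = 10 and mul = 5, for example, receives deadlines 15, 30, and 45.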
The aims for the packed schedule TGFF generator are:
- Generate difficult, but feasible, task-graph problem instances.
- Generate instances which include the realistic effects of
communication (including resource contention, allowing
overlapped computation and communication)
- Allow user controls of many types, including how
difficult to make the instance; the number of
processors and communication resources for which a known
solution exists; level of heterogeneity (including
generation of homogeneous cases); etc.
- Generation targeted to both preemptive and non-preemptive
schedules for task execution and priority-based or
arbitrated communication styles. This goal in particular
is not yet fulfilled (see below).
- Creation of both periodic and non-periodic cases; this goal
  too has not yet been met; only non-periodic instances are
  generated.
With these goals in mind, the technique discussed below has
been implemented (but is not yet available--please feel free to
comment!). A new command, pack_schedule,
has been created; it takes the following 10 parameters:
- num_task_graphs -- (integer) the number of task graphs the
instance will contain.
- avg_tasks_per_pe -- (integer) the average number of tasks expected
for each processor in a known solution.
- avg_task_time -- (float) average task execution time.
- mul_task_time -- (float) task time multiplier (most task
execution times lie in the range [avg_task_time - mul_task_time,
avg_task_time + mul_task_time]).
- task_slack -- (float) the amount of time usually left between task
executions on a resource.
- -- (float) rounding of task execution times (0 = no
rounding, 1.0 = integer rounding).
- -- (integer) the number of different types of PEs to
generate. Note that the PE attributes are: cost,
n_interrupts, and interrupt_time (the latter two of these are
set to 0). The type table is a list of task
execution times. Cost is randomly selected from the
[50,150] range. All tasks are unique.
- -- (integer) the number of PEs, which are randomly
selected instances of the PE types, for which a solution
is known.
- -- (integer) the number of different types of COM
resources; attributes are: cost, n_interrupts (= 0 for
now), interrupt_time (also = 0 for now), datarate,
n_connects (= 8 now), and code (= 2 now). The datarate is
randomly selected from the range [50,150]. There is no
type table for COM resources (arc data sizes are placed
in a separate table).
- -- (integer) the number of COM resources, which are
randomly selected instances of the COM types, for which a
solution is known.
In addition to these parameters, which are directly part of
the pack_schedule directive, the method also makes use of the seed
and task_degree settings (more on these below).
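As an illustration of how the average, multiplier, and rounding parameters interact, here is a small sketch. It is not TGFF code, and the interpretation of intermediate rounding values as a rounding probability is an assumption of this sketch.

```python
import random

def sample_task_time(avg_task_time, mul_task_time, round_frac, rng=random):
    """Draw one task execution time from the documented range.

    round_frac is interpreted here as the probability of rounding the
    drawn value to an integer (0 = never, 1.0 = always); the exact
    semantics of intermediate values are an assumption of this sketch.
    """
    t = rng.uniform(avg_task_time - mul_task_time,
                    avg_task_time + mul_task_time)
    if rng.random() < round_frac:
        t = float(round(t))
    return t
```

For example, sample_task_time(10.0, 3.0, 1.0) always yields an integer-valued time in [7, 13].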
Using these parameters, the following generation algorithm has been
implemented:
- Randomly select PE and COM types for a solution
instance. Generate an ArcDataSizeTable
which will define the size of data communications for
each arc in the task-graphs.
- Set the period of all tasks to:
avg_tasks_per_pe * (avg_task_time + mul_task_time).
- For each PE instance in the solution, pack
its schedule. This entails generating tasks of random
sizes within the [avg_task_time - mul_task_time,
avg_task_time + mul_task_time] time range and placing
them task_slack apart. Note that a final task may be
placed which is shorter than this if space (before the
supposed period) on the resource does not allow.
- Randomly assign the tasks generated across all the
solution PEs to task graphs (there are num_task_graphs of these).
- For each pair of tasks, say t1 and t2,
within the same task-graph where t1.end
is less than t2.start
consider addition of an arc. If t1
and t2 are assigned to the same PE in
the known solution, then an arc with any datasize is
possible since we assume that communication of data on
the same resource takes no time (or could be included in
the task time). However, generating relatively
large data sizes for these arcs would be a
clue to co-design tools to place the sending
and receiving tasks on the same resource, thus these arcs
are sized so as to require, on average, 20% of avg_task_time,
assuming that the datarate of the first COM resource type
is used. This arc-data-size is written to the
ArcDataSizeTable. When t1 and t2
are assigned to different resources (PEs) then a
check is made across the COM types in the solution for
the largest amount of continuous open space
between the time points t1 and t2.
Using this open space computation, the associated size of
the data for each COM resource is computed (using its
datarate). The COM which supports the maximum
arc-data-size (assuming it is greater than 1) is chosen for the
solution; the usage of this COM is noted for
subsequent COM scheduling and this arc-data-size is
written to the ArcDataSizeTable. Note that arc generation
may be limited by using the task_degree
settings, otherwise arcs are generated without regard to
maximum fan-in/out restrictions.
- The execution time for each task on each PE type is
generated. If the PE type is the one on which the task
runs in the generated solution instance, then the
solution instance time is used for this task on that PE,
otherwise a value of 2 times this is used.
- Deadlines are placed on all terminal tasks
which coincide exactly with that of the solution.
Terminal tasks are those which have no children. Note
that an alternative idea would be to put deadlines on all
tasks in accordance with the known solution, however,
this would also be strong guidance for co-design tools!
- Generate the .tgff file and the .eps file, including the
known solution in the .tgff file as comments.
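The packing and arc-generation steps above can be sketched roughly as follows. This is a deliberately simplified illustration, not TGFF's actual code: the function names are invented, and the COM open-space sizing, type tables, and task_degree limits are omitted.

```python
import random

def pack_pe_schedule(avg_task_time, mul_task_time, task_slack, period, rng):
    """Pack one PE's timeline with tasks placed task_slack apart."""
    tasks, t = [], 0.0
    while t < period:
        dur = rng.uniform(avg_task_time - mul_task_time,
                          avg_task_time + mul_task_time)
        dur = min(dur, period - t)        # the final task may be shortened
        tasks.append((t, t + dur))        # (start, end) on this PE
        t += dur + task_slack
    return tasks

def generate(num_pes, num_task_graphs, avg_tasks_per_pe,
             avg_task_time, mul_task_time, task_slack, seed=0):
    rng = random.Random(seed)
    period = avg_tasks_per_pe * (avg_task_time + mul_task_time)
    # Pack every solution PE, remembering which PE each task ran on.
    tasks = [(pe, start, end)
             for pe in range(num_pes)
             for start, end in pack_pe_schedule(avg_task_time,
                                                mul_task_time,
                                                task_slack, period, rng)]
    # Randomly assign the packed tasks across the task graphs.
    graphs = [[] for _ in range(num_task_graphs)]
    for task in tasks:
        graphs[rng.randrange(num_task_graphs)].append(task)
    # Candidate arcs: within one graph, t1 must end before t2 starts.
    arcs = [(t1, t2)
            for g in graphs
            for t1 in g for t2 in g
            if t1[2] < t2[1]]
    return period, graphs, arcs
```

In the full method, each candidate arc is then either given a small intra-PE data size or sized against the largest open space on the solution COM resources, as described above.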
- This is a very heterogeneous problem; note in
particular that there is no guarantee that increasing the
cost of a processor (PE) generally speeds up tasks. In
fact, cost and execution time are generally uncorrelated. This
aspect is bothersome.
- A similar concern exists for COM resources; that is, right
now their datarate and cost are (statistically)
unrelated. Also, the method used to create arcs, i.e.,
filling the largest open space on COM resources, seems
to generate cases with high communication utilization.
That is, the method will tend to fill the
task_slack timeframes with communications;
perhaps an arc_factor parameter restricted to
the range (0,1) should be added to allow the user to help
avoid this? Comments?
- An unscrupulous co-design researcher can cheat by merely
looking for the PEs for which task times are 1/2 of
others and generate the known correct allocation
(although without looking at the commented solution one
would not know the number of instances to use of a
particular PE type). Of course, the 2X factor must be
randomized, perhaps to give 1X to 2X ranges, but even
this has its limits. That is, since it is always more
than 1X, a known solution exists for which the fastest PE
for each task should be chosen. If instead a 0.5X to 2X
range were used, then this might make for problem
instances that are too easy. Again, any suggestions?
- The deadlines are generated tight against the
known solution, but the user can use the task_slack
parameter to help loosen schedules. However,
inter-task slack time tends to get filled with
communications as mentioned above.
- Note that unlike using the other task-graph generation
method, the pack_schedule generation algorithm does not
guarantee fully connected task-graphs. The construction
method otherwise used in TGFF generates fully connected
task-graphs by construction, but here a single task-graph
may be composed of several unconnected pieces.
- Some thought should still be given to the general nature
of these instances with respect to local priority
scheduling on PEs, i.e., we would like cases
where many tasks simultaneously reach their ready state
and where priority scheduling of these becomes important.
- It is felt that, when relatively low task_slack values are
used, rather hard allocation/scheduling problem instances
will be generated by this method.
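As noted above, a task-graph produced by pack_schedule may consist of several unconnected pieces. A small illustrative check (not part of TGFF; names are invented) that counts the weakly connected pieces of one generated task-graph:

```python
def connected_components(tasks, arcs):
    """Count weakly connected pieces of one generated task-graph."""
    parent = {t: t for t in tasks}

    def find(t):
        while parent[t] != t:
            parent[t] = parent[parent[t]]   # path halving
            t = parent[t]
        return t

    for a, b in arcs:
        parent[find(a)] = find(b)           # union the two endpoints
    return len({find(t) for t in tasks})
```

A result greater than 1 indicates that the task-graph is not fully connected.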