The input data sets

Essentially, the results obtained on the 1978 reference machine allowed a set of scaling factors to be devised, which in turn led to two input data sets (then to three in 1985, and to four in 1995): it was deemed important that no measured CPU time should fall below 100 seconds, whatever the machine. The initial, and most commonly used, data set, DAT50, targets ordinary machines (mainframes, workstations, etc.): it takes 286.30 s on the IBM 3033-U and 2097 s on a Vax 11/780, but only around 40 s on a Cray X/MP. The DAT100 data set, approximately four times bigger, is more appropriate for the ("class VI") supercomputers, while the personal computers of the early eighties could only bear the DAT35 data set, six times smaller than the initial one.
data set   reference time (s)   typical time
DAT35            50.05          IBM/PC: 11 hours
DAT50           286.30          Vax 11/780: 30 min
DAT100         1212.12          Cray 1S: 200 s
DAT999        16385.1           Alpha 21164: 5 min
By 1995, a fourth data set, DAT999, was added to take care of the "killer micros", whose typical times fall far below the 100-second threshold: about 20 s with the DAT100 set, or even 6 s with the initial DAT50 set.
A fifth data set, DAT10K, is being prepared to handle the next generations of microprocessors (as of 1997).
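
To make the 100-second rule concrete, here is a minimal sketch, not part of the original benchmark, of how one might pick the smallest adequate data set for a machine of known speed. The function name pick_data_set and the speedup input are hypothetical; the reference times, the size ratios, and the Vax 11/780 figure are taken from the text and table above.

    # A minimal sketch, NOT part of the original benchmark: it illustrates
    # how the "no run under 100 seconds" rule drives the choice of data set.
    # pick_data_set and the speedup figure are hypothetical; reference times
    # come from the table above.

    # Reference times in seconds on the IBM 3033-U (from the table above).
    REFERENCE_TIMES = {
        "DAT35": 50.05,
        "DAT50": 286.30,
        "DAT100": 1212.12,
        "DAT999": 16385.1,
    }

    VAX_DAT50_TIME = 2097.0  # seconds a Vax 11/780 needs for DAT50


    def pick_data_set(speedup_vs_vax: float) -> str:
        """Return the smallest data set whose predicted run time is >= 100 s.

        speedup_vs_vax is the machine's estimated speed relative to a
        Vax 11/780 (an assumed input, not something the benchmark measures).
        """
        for name, ref_time in REFERENCE_TIMES.items():  # smallest first
            # Scale the Vax DAT50 time by the data-set size ratio
            # (reference times are proportional to data-set size),
            # then divide by the machine's speedup.
            size_ratio = ref_time / REFERENCE_TIMES["DAT50"]
            predicted = VAX_DAT50_TIME * size_ratio / speedup_vs_vax
            if predicted >= 100.0:
                return name
        return "DAT10K"  # fall back to the forthcoming fifth set


    # Example: a machine roughly 350x a Vax 11/780 (Alpha 21164 class)
    # would finish DAT50 in about 6 s and DAT100 in about 25 s, so only
    # DAT999 (predicted ~343 s, consistent with the "5 min" in the table)
    # satisfies the rule.
    print(pick_data_set(350.0))  # -> DAT999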

