Input Scaling

There is an old feature, still present in Fortran today, which makes reading integer data into REAL variables with formatted input potentially problematic. As an example, consider this program, which re-reads the same data item from a string using different REAL edit descriptors.

PROGRAM input_scaling
IMPLICIT NONE
REAL               :: three_darts
INTEGER            :: decimals
CHARACTER (LEN= 7) :: cformat
CHARACTER (LEN=18) :: cdata

PRINT *, 'Enter an integer'
READ *, cdata        ! hold the input as text so it can be re-read below

! Re-read the same field with F, E and G edit descriptors and with
! 0, 3, 6 and 9 digits after the decimal point
DO decimals = 0, 9, 3
  WRITE (cformat, "('(F18.', I1, ')')") decimals
  READ (cdata, FMT=cformat) three_darts
  PRINT *, 'With format "', cformat, '" the value obtained is ', three_darts

  WRITE (cformat, "('(E18.', I1, ')')") decimals
  READ (cdata, FMT=cformat) three_darts
  PRINT *, 'With format "', cformat, '" the value obtained is ', three_darts

  WRITE (cformat, "('(G18.', I1, ')')") decimals
  READ (cdata, FMT=cformat) three_darts
  PRINT *, 'With format "', cformat, '" the value obtained is ', three_darts
END DO

END PROGRAM input_scaling

The output from one run of this program is shown here:

 Enter an integer
180
 With format "(F18.0)" the value obtained is    180.000000    
 With format "(E18.0)" the value obtained is    180.000000    
 With format "(G18.0)" the value obtained is    180.000000    
 With format "(F18.3)" the value obtained is   0.180000007    
 With format "(E18.3)" the value obtained is   0.180000007    
 With format "(G18.3)" the value obtained is   0.180000007    
 With format "(F18.6)" the value obtained is    1.80000003E-04
 With format "(E18.6)" the value obtained is    1.80000003E-04
 With format "(G18.6)" the value obtained is    1.80000003E-04
 With format "(F18.9)" the value obtained is    1.80000001E-07
 With format "(E18.9)" the value obtained is    1.80000001E-07
 With format "(G18.9)" the value obtained is    1.80000001E-07

Reading the same piece of data, "180", into a REAL variable with different edit descriptors produces different stored values. The rule is that when the input field contains no decimal point, the value is treated as though a decimal point sat d digits in from the right of the field, where d is the digit after the dot in the Fw.d, Ew.d or Gw.d edit descriptor; in other words, the number read is multiplied by 10**(-d). If the field does contain a decimal point, the d of the edit descriptor is ignored.
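The second half of the rule is worth seeing in action: a decimal point in the data always wins. The following minimal sketch (the program and variable names are mine, not part of the example above) reads "180" and "180." with the same F18.3 descriptor:

PROGRAM implied_decimal_rule
IMPLICIT NONE
CHARACTER (LEN=18) :: no_point   = '180'
CHARACTER (LEN=18) :: with_point = '180.'
REAL               :: scaled, unscaled

! No decimal point in the field: F18.3 supplies one, so the value is 180 * 10**(-3)
READ (no_point, FMT='(F18.3)') scaled
PRINT *, 'Without a decimal point the value obtained is ', scaled

! A decimal point in the field overrides the d of the edit descriptor
READ (with_point, FMT='(F18.3)') unscaled
PRINT *, 'With a decimal point the value obtained is    ', unscaled

END PROGRAM implied_decimal_rule

The first READ yields 0.180, the second 180.0, even though the format is identical.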

In the 21st century this is, of course, a ridiculous feature and one to avoid at all costs. Best practice is that when reading into REAL variables with an E, F or G edit descriptor the digit after the decimal point should always be zero (for example F18.0), so that no accidental scaling can occur.
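As a sketch of that advice (again with invented names, not code taken from the example above), both an F18.0 descriptor and a list-directed read leave an integer-style field unscaled:

PROGRAM safe_real_input
IMPLICIT NONE
CHARACTER (LEN=18) :: cdata = '180'
REAL               :: unscaled

! A zero after the decimal point means a multiplier of 10**0, i.e. no scaling
READ (cdata, FMT='(F18.0)') unscaled
PRINT *, 'With format "(F18.0)" the value obtained is   ', unscaled

! List-directed input applies no implied decimal point at all
READ (cdata, FMT=*) unscaled
PRINT *, 'With list-directed input the value obtained is ', unscaled

END PROGRAM safe_real_input

Both READ statements store 180.0.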