Another book I was recently given is the ‘Fortran-Fibel’ by Hans Breuer. Written in 1969, it not only gave me an interesting overview of FORTRAN (FORmula TRANslation), one of the early programming languages, quite popular in the 1960s on mainframe computers for solving mathematical problems, but also of how mainframes were operated at the time and what the ‘cost of computing’ was back then.
Let’s have a look:
When programming in Fortran on a mainframe of the 1960s and 70s, each instruction line was punched onto one punch card. A Fortran compiler on the mainframe then converted the program into machine code. If a program was used more often, the machine code could itself be written to punch cards so the program would only have to be compiled once. A stack of 1000 punch cards was about 18 centimeters high. Now think about how quickly 1000 lines of code are written today… 125 punch cards, which at 80 columns per card could hold about 10,000 characters, cost around 1 DM (Deutsche Mark), roughly equivalent to 1 euro in 2018.
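To give an idea of what one statement per card meant in practice, here is a minimal sketch (not taken from the book) of a tiny program in the fixed card format of the time: columns 1–5 hold the statement label, column 6 a continuation mark, and columns 7–72 the statement itself, so each line below corresponds to exactly one card. Unit 5 conventionally referred to the card reader and unit 6 to the line printer.

```
C     A TINY FORTRAN IV STYLE PROGRAM, ONE STATEMENT PER PUNCH CARD
C     READ TWO NUMBERS FROM THE CARD READER (UNIT 5) AND PRINT
C     THEIR SUM ON THE LINE PRINTER (UNIT 6)
      READ (5,10) A, B
   10 FORMAT (2F10.3)
      S = A + B
      WRITE (6,20) S
   20 FORMAT (1X, 6HSUM = , F10.3)
      STOP
      END
```

Even this ten-card program would already have been a small but tangible stack of cardboard.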
Paper tape, which according to the author was better suited to recording and reading data than programs, cost around 1 DM per 50 meters, on which around 20,000 characters could be stored. The author much preferred punch cards over paper tape for two reasons. First, punch cards had the text that was punched into them printed along the top, so it was clear what was on each card, while paper tape just had holes. Changing code later on was therefore quite difficult: it was hard to find the problematic location, and it was no joy to fix things by cutting out the relevant section and splicing in a new piece of tape with scotch tape. Second, mainframes were mostly equipped with punch card readers and only rarely with paper tape readers, although machines existed to copy and translate the information on a paper tape onto punch cards. Hence the author’s conclusion that paper tape was much more suitable for storing data than for storing code. This was perhaps not universally so, as there are videos on YouTube and elsewhere that show how fan-folded paper tapes were used with DEC PDP computers to read programs into memory.
Other interesting details on the use of punch cards: program and data were separated by a punch card with a special encoding, and a number of special cards were always placed at the beginning of a program deck to let the machine and the human ‘operator’ know what the expected run time of the program was. The author notes that programs with a short run time were sometimes preferred and run earlier than other programs. However, if a program ran longer than advertised, it was terminated and had to be resubmitted for another run with an updated time card. The author also notes that it was not unusual to wait a day for the results, as the machines were quite busy and programs were not run straight away. This fact alone explains why the next step, interactive programming, compiling and executing on CRT (cathode ray tube) terminals with a keyboard, which came up at some point in the 1970s, was such a quantum leap in computing.
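The book does not reproduce a specific job deck, and the exact control cards differed from installation to installation, but as an illustration, on an IBM System/360 under OS/360 a complete Fortran job deck might have looked roughly like the sketch below (job name, account number and programmer name are made up; FORTGCLG was a cataloged procedure that compiled, link-edited and ran the program). The TIME parameter on the job card is the advertised run time mentioned above, and the /* cards are the special separator cards between program and data.

```
//SUMJOB   JOB  (1234),'EXAMPLE',TIME=1     EXPECTED RUN TIME: 1 MINUTE
//STEP1    EXEC FORTGCLG                    COMPILE, LINK-EDIT AND EXECUTE
//FORT.SYSIN DD *
C     ... THE FORTRAN SOURCE CARDS (E.G. THE EXAMPLE ABOVE) GO HERE ...
/*
//GO.SYSIN  DD *
     3.141     2.718
/*
//
```

If such a job exceeded its TIME limit, the operating system would cancel it, which matches the author’s description of programs being terminated when they ran longer than advertised.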
One other ‘shocking’ number given in the book is the amount of RAM in mainframes of the day. On page 13, the author says that a program with fewer than 50 lines of code that processes fewer than 1000 data values can usually be handled with a core memory of 8K (8192 words). There you go, this was the computing world of the 1960s and early 70s!