Century of Endeavour

Techno-economic analysis in the 60s

(c) Roy Johnston 2003

(comments to rjtechne at iol dot ie)

In this decade the present writer RJ made the transition from basic science to applied science, and then to techno-economic analysis.

From 1960 to '63 he contributed to the understanding of some problems of instrumentation and control of the brewing process.

Then from 1963 to 1970 he worked with Aer Lingus, initially on the real-time reservations system; by computer-based modelling of the system in a specified stochastic environment he showed that the first proposed version would be inadequate to meet the requirements. He then moved to economic planning, and did some work on modelling the economic performance of an aircraft in a given airline system in an assumed stochastic economic environment. The enhanced real-time system then came back on the agenda, was modelled again, found acceptable, and bought.

I expand below on these topics.

A Special-Purpose Computer

In 1959-60, in my last year in DIAS, we were working on trying to refine the methods of measurement of the momentum and rate of energy loss of highly-relativistic particles, using measurements of multiple Coulomb scattering and 'mean gap length' on particles of known energy captured in ionographic emulsion. The particle sources were the synchrotrons at CERN and Berkeley, and the electron linear accelerator at Stanford.

This involved masses of tedious measurements with a microscope on track trajectories, from which the momentum measure was derived as the average of the modulus of the second difference, a tedious and trivial calculation. It occurred to me that we had the means to hand with which it could be automated, and this we had some fun doing.

I subsequently described the work in a paper in Electronic Engineering (June 1963); the lag was due to my being with Guinness, and it sort of slipped in my priorities.

The abstract was as follows: 'A case is made for the development of a small special-purpose computer for processing of low-grade numerical information which occurs in quantities too small to justify computer processing on the usual scale. A machine is described which used Dekatron tubes and uniselector relays to compute the cumulative total of the moduli of second differences of a sequence of two-digit numbers, at a rate limited by the manual input rate (about one number per second)'.
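In modern terms the calculation the machine performed is easily stated: accumulate the moduli of the second differences of the stream of microscope readings. The sketch below reproduces only the arithmetic, not the Dekatron logic, and the readings shown are made up.

```python
def cumulative_second_difference_modulus(readings):
    """Accumulate |second difference| over a sequence of coordinate readings."""
    total = 0
    for i in range(2, len(readings)):
        second_diff = readings[i] - 2 * readings[i - 1] + readings[i - 2]
        total += abs(second_diff)
    return total

# A short run of two-digit readings (illustrative values only)
print(cumulative_second_difference_modulus([42, 45, 47, 51, 50, 53, 58, 57]))
```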

The Dekatron tube was used for storage and for display of the results; it was basically an obsolescent technology, earlier used for counting pulses from radioactive sources, but some work had earlier been done in Harwell by Barnes et al (Electronic Engineering 23, p286, 1951) on the use of the Dekatron in computing, and we adapted the concept. We did this as an alternative to using the then available computer capacity (a Hollerith HEC located in the Sugar Co in Tuam) because in the latter case the amount of work needed for data preparation of the input would have been comparable to the amount of work involved in doing the calculation itself. We had in fact used the Tuam HEC for least-squares curve-fitting, in another context.

"Typically... one had to deal with about 100 small numbers per measurement, and possibly of the order of 100,000 to 1M numbers in a complete experiment. In attempting to handle this type of information with existing commercially available equipment, difficultures ere encountered... use of an conventional adding machine is little faster than performing the arithmetic mentally. This holds also for commercial electromechanical differencing engines of the Babbage type. These were better adapted for processing multi-digit numbers where speed is not important... in order to achieve an appreciable improvement over mental arithmetic one requires a machine capable of working ... as fast as two-digit numbers can be tapped into a keyboard..."

In the paper I went into the overall ergonomics of various alternative data capture systems, showing that from the angle of matching output rates of the measurement system to the computer input rate, this made sense.

This was in effect my introduction to some basic techno-economic concepts relating production rates to information processing, and the experience was subsequently useful in industry. There was also a human dimension, in that the observers needed to keep some degree of hands-on control over the quality of the analysis of 'their' particle tracks. If the computation had been banished to a remote processor this would have been lost.

The machine worked for the duration of one experiment, in which we attempted to relate ionisation measurements to momentum on what was known as the 'relativistic rise' side of minimum ionisation. We had already pinned down with some precision the shape of the curve on the low-energy side of the minimum, and we were pushing the technology of nuclear emulsions to the limit.

In the event however the attention of the ionographic emulsion people moved in the direction of the cosmic ray 'heavy primaries' (where O'Ceallaigh and his DIAS group concentrated their attention after I left), and the attention of the high-energy charged particle people (in CERN, Berkeley and the major world centres) became increasingly directed towards the possibilities of bubble-chambers in magnetic fields, and our work turned out to be a blind alley. But it was a useful learning experience, and I regard it positively.


The Guinness Patents

There were three patents registered by Guinness with which I was associated during my period with Michael Ashe and the Production Research Department at Park Royal. The first, 976,663, is headed 'Examining Solutions Photoelectronically', and covered our Yeast Concentration Meter. The novelty was in the use of a photo-multiplier tube to pick up back-scattered light from a liquid containing suspended solids. This could not have been done with the usual type of photo-cell, the intensity being too low. It was I think the first example of the use of the photo-multiplier in industrial instrumentation.

The photomultiplier output signal was a linear measure of the concentration of yeast in beer, and it did not matter if the beer was dark. The then current nephelometry instruments were of course quite useless in Guinness for that reason, and anyway they had a non-linear output. It was a good instrument, with many possible applications, and was subsequently made commercially under licence by Evans Electroselenium. If it had been used outside brewing (eg in paper pulp) I understand I should have been entitled to a royalty, but I never pursued this.

I wrote this up in Research and Development for Industry, no 31, March 1964, giving examples of how it had been applied in the control of the Guinness pilot-scale continuous fermenters.


The second patent, 986,343, was headed 'Controlling Yeast Fermentation' and was in effect an attempt by Guinness to avoid infringing the then current patents of, if I remember correctly, Coutts and Labatt. The novelty was in splitting the feedstock between a yeast growth vessel and one or more main fermenter vessels. The yeast concentration in the main fermenters was kept high by a system for restricting the amount washed out: the yeast, being flocculent, wanted to settle, but was prevented from doing so by a stirrer. The outlet however was via a settling-tube, whose setting could be varied; this shielded the outgoing liquid from the action of the stirrer, thus holding back the flocculent yeast.

The main fermenter got a portion of the feedstock directly, the other portion feeding the vessel where yeast was grown, under aerobic conditions. The output of this growth vessel was fed into the main fermenter, replenishing the yeast supply therein, at a controlled rate.

We developed instrumentation to measure the oxygen level in the growth vessel, and to measure the gravity in the fermenters, and to keep track of the amount of yeast that came out. The rule was, everything must go downstream; no recycling of yeast. This latter feature dominated the Coutts-Labatt patents, but Guinness wanted to maintain their own system distinct from this, the key idea being to start with pure yeast culture under laboratory control, and not allow a population of wild or mutant yeasts to build up, as would happen under a recycling regime.

We did the best we could, but the system was wildly unstable, and would have required a sophisticated feed-forward control system, based on totally reliable instrumentation. There is a non-linear relationship between the gravity of the fermenting beer and the extent to which the yeast flocculates. The attempt to control yeast concentration in such a way as to achieve the target beer gravity at the outlet, while staying within the 'no recycling' constraint, was doomed to failure.

In the end Guinness went back to the classical batch process, which is self-stabilising and easily manageable. Again, the experience was interesting and useful; one can learn from failures perhaps more than if one has one's head swelled by successes. Above all, we had fun doing the work, and there was a great sense of team cohesion.

I should add that we attempted to develop an on-line gravity-meter, in association with Solartron-Schlumberger (currently a world-leading oilwell instrumentation specialist), which depended on the vibration frequency of a metal tube being modified by the specific gravity of the liquid in it. I don't think this led to a working prototype, but we had observed and measured the gravity effect in another Solartron instrument which depended on vibrations, designed to measure viscosity.


The third patent, 1,004,693, headed 'Continuous Production of Alcoholic Beverages', was a product of some laboratory prototyping; it never scaled up. (The work mentioned above, I should say, was on a large pilot scale, of the order of 100 barrels per day.) The novelty was to ferment wort to beer by trickling it down a column containing concentrated yeast, in the form of a carbon dioxide foam, the yeast being in the walls of the bubbles. The foam was generated by compressing the liquid and releasing the pressure through an orifice, much as the foam on the pint is generated to this day.

We got this to work, after a fashion, on the bench, but scale-up problems would have been horrendous, and it remains a curiosity. The patent agent however was very taken by it, and had great hopes, and Michael Ashe was persuaded to go through with the patenting.

I seem to recollect some similar process being reported recently in the New Scientist, and Denis Weaire in TCD has recently (ca 2000) done some basic work on bubbles, so if anyone reinvents it, I can claim 'prior art'!

We were able to model the performance of the 3-vessel continuous fermentation system using a set of differential equations, and at one stage we had this working on an analogue computer. I drafted a paper on this, and showed it to Sir Cyril Hinshelwood, who was then consulting with Guinness, and he thought it should be published, but I never got round to it. The key concept was in the separation of the aerobic growth from the anaerobic fermentation phases; these processes obeyed different dynamic laws. It was I think a fairly early example of a mathematical application in biodynamics.
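The original equations are not reproduced here, but the structure of such a model is easy to indicate. The fragment below is a minimal sketch only, assuming Monod-type aerobic growth in the growth vessel, a substrate-limited fermentation rate in the main fermenter, and a crude yeast-retention factor standing in for the settling tube; all parameter values are hypothetical.

```python
# Minimal sketch of a two-vessel continuous-fermentation model: a growth
# vessel (aerobic) feeding a main fermenter (anaerobic). The functional forms
# and all parameter values are assumptions, not the original equations.

def simulate(dt=0.01, t_end=50.0):
    Sg, Xg = 100.0, 1.0     # substrate and yeast in the growth vessel
    Sm, Xm = 100.0, 5.0     # substrate and yeast in the main fermenter

    D_g, D_m = 0.15, 0.10   # dilution rates (1/h)
    S_in = 100.0            # feedstock substrate level (gravity proxy)
    mu_max, Ks = 0.4, 10.0  # aerobic growth constants
    q_max, Kf = 0.8, 15.0   # anaerobic fermentation constants
    Y = 0.5                 # yield of yeast on substrate
    retention = 0.2         # fraction of main-fermenter yeast that washes out

    t = 0.0
    while t < t_end:
        # Growth vessel: yeast grows aerobically, consuming substrate
        mu = mu_max * Sg / (Ks + Sg)
        dSg = D_g * (S_in - Sg) - mu * Xg / Y
        dXg = mu * Xg - D_g * Xg

        # Main fermenter: fresh feed plus growth-vessel output; the settling
        # tube holds back most of the flocculent yeast (the retention factor)
        q = q_max * Sm / (Kf + Sm)
        dSm = D_m * (S_in - Sm) - q * Xm
        dXm = D_m * (Xg - retention * Xm)

        Sg += dSg * dt
        Xg += dXg * dt
        Sm += dSm * dt
        Xm += dXm * dt
        t += dt
    return Sg, Xg, Sm, Xm

print(simulate())
```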

[If I find the paper it is perhaps worth appending in full.]


Simulation of a Real-time Computer System

The project on the basis of which I was recruited to Aer Lingus was IBM's pilot for the marketing of its real-time airline reservations system in Europe. It had already been developed with American Airlines (AA), using second-generation equipment, and the Aer Lingus proposal was adapted from this.

After I joined in September 1963 the initial work was at the level of getting a statistical feel for the environment, and the requirements for the system in the specified environment. At the start of 1964 however there was consternation: it seemed that the AA system had saturated at about 1/3 of its planned capacity, and there was a major move to try to understand why.

It turned out that the planning had been done by real-time system engineers who had served their time in the space programme, where the data flows were on the whole steady. The stochastic environment presented by airline reservations was virgin territory. They had used average values in their calculations, not realising that in the system the situation would be dominated by queues of signals competing for service from the various components of the system.

The IBM people in Kingston NY set up a group to develop a detailed simulation of the system. What we did in Aer Lingus, at my suggestion, and with encouragement from Dr Paddy Doyle who was then the chief systems engineer with IBM on the Aer Lingus project, was to see what we could do analytically, using queue theory, as outlined in a book by Saaty which happened to be to hand (Elements of Queueing Theory, McGraw-Hill, 1961).

We did this, and it worked, giving results within a few percent of the detailed IBM Kingston simulation. I wrote it up for the 1965 Chicago conference of AGIFORS (the Airline Group of IFORS, the International Federation of Operational Research Societies) and I understand that it is on record in their archive, and may even be web-accessible, though I have not found this version. I have a copy of the paper as submitted.

The programme was written in Fortran and implemented initially on the Trinity College IBM 1620, courtesy of Dr John Byrne. A later version was run on the Aer Lingus IBM 1440 which had been installed to plug the gap between the old manual reservations system and the projected new one, which was re-designed to run on the 3rd-generation 360. I think we were just about the only, or one of the very few, users of the Fortran compiler which IBM had developed for their 1400 series; we had a sense of pioneering the bridge-building between commercial and scientific computing, opening up a capability which later became useful in fleet and route-system planning.

I quote from the introduction; the work '...arose out of a need to check rapidly the performance of hardware configurations prepared by the manufacturers at a time when little operational experience of real-time reservations systems existed, and when such experience as existed rendered the predictions of previous simulations open to question.'

The environment was characterised by a 'message input rate' and a 'message type distribution', derived from the distribution of reservations transactions of various types, each being characterised by a set of messages of predictable types. A 'message of type l' was defined in terms of D(k,l) and P(k,l), the numbers of 'seeks' for data and programme from mass-storage file k, C(l), the associated number of chunks of activity in the central processing unit (CPU), and A(l), the relative frequency with which this type occurs.

The system was composed of a CPU connected to several disk files by one or more channels, possibly a drum with its own channel, and an input-output channel. The CPU was capable of multi-processing, so that it had an interrupt facility characterised by an interrupt time. Each episode involving data transfer tied up a disk and associated channel for a predictable time; thus we had a system of queues with associated service times.

Again I quote: '...we have a system providing various services with associated service time distributions, and a distribution of messages of variable composition arriving...' at random with a Poissonian distribution. These '...make variable demands on the services provided by the system and occupy the system until all demands are met. This is clearly a classic queue theory situation and it is possible to find most of the necessary analytical formulae in the literature...'.

For the various steps in the process we assumed appropriate service time distributions, the most complex one being that of Pollaczek-Khintchine for arbitrary distributions of known variance. Mean system time depends sharply on service time variances. In the real-time system the 'system time' for one queue constituted a 'service time' for another; there were 'queues within queues', enhancing the overall variability, and this feature had been the downfall of the American Airlines system.
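For a single server with Poisson arrivals the Pollaczek-Khintchine result expresses the mean waiting time in terms of the arrival rate and the first two moments of the service time, which is why the variance mattered so much. A minimal sketch (illustrative only; the original work was done analytically and coded in Fortran):

```python
def pk_mean_times(arrival_rate, mean_service, service_variance):
    """Mean waiting and system times for an M/G/1 queue (Pollaczek-Khintchine)."""
    rho = arrival_rate * mean_service              # utilisation, must be < 1
    if rho >= 1.0:
        raise ValueError("queue is unstable: utilisation >= 1")
    second_moment = service_variance + mean_service ** 2
    mean_wait = arrival_rate * second_moment / (2.0 * (1.0 - rho))
    return mean_wait, mean_wait + mean_service     # waiting time, system time

# Example: 3 messages/s at a device with 0.2 s mean service time; doubling the
# service-time variance visibly lengthens the mean system time.
for variance in (0.01, 0.02):
    print(pk_mean_times(3.0, 0.2, variance))
```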

We were in effect able to calculate a 'demand density' for the 'message mix' currently in the system, generate a mean message time, and use this to adjust the number of messages in the system, dropping out the earliest and/or taking in the next one, whose type was randomly selected using the known type distribution; this was the only 'Monte Carlo' element in the model.
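The loop can be sketched roughly as follows. The message types, frequencies and per-message demands below are hypothetical, and the analytical queue-theory step is collapsed into a simple average, purely to show where the single Monte Carlo element (random selection of the next message type from the A(l) distribution) fits in.

```python
import random

# Hypothetical message-type table: name, relative frequency A(l), and an
# assumed mean service demand (seconds) per message of that type.
MESSAGE_TYPES = [
    ("availability", 0.60, 0.15),
    ("sell",         0.25, 0.30),
    ("retrieve",     0.15, 0.45),
]

def draw_demand():
    """The only Monte Carlo element: select the next message type using A(l)."""
    r = random.random()
    cumulative = 0.0
    for _name, freq, demand in MESSAGE_TYPES:
        cumulative += freq
        if r <= cumulative:
            return demand
    return demand

def run(steps=10_000, messages_in_system=8):
    """Maintain a rolling 'message mix': at each step record the mean message
    time of the current mix, drop the earliest message, and admit a new one
    of randomly selected type."""
    mix = [draw_demand() for _ in range(messages_in_system)]
    mean_times = []
    for _ in range(steps):
        mean_times.append(sum(mix) / len(mix))
        mix.pop(0)
        mix.append(draw_demand())
    return sum(mean_times) / len(mean_times)

print(run())
```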

We were able to generate message time distributions for three message rates, 2.5, 3.0 and 3.5 messages per second, and we showed how at the higher rates there developed excessively long tails, implying the need for massively expanded buffer storage space for the queues.

It turned out that the key bottleneck was in the access-channel for the mass storage, which was tied up for the total seek time, rather than for just the data transfer time. In the 3rd-generation system developed around the IBM360 the channels were multiplexed, and the performance thereby substantially enhanced.

This approach has I believe since become widely used, and there is much academic literature about it dating from the early 1970s, based probably on an independent re-discovery of the principle within the computer science community. I have however reason to believe that this 1965 paper describing 1964 work was a 'first', but because of the obscurity of its publication I have no evidence that it was ever referenced.

Fleet and Route Planning

Oisin O Siochru and I presented, also at the 1965 AGIFORS conference in Chicago, an outline of a fleet planning model which attempted to estimate the real cost of planning schedules to meet a peak demand, the peak-trough ratio for Aer Lingus being unusually high. This was a hard-core mathematical treatment, awash with partial derivatives and three-dimensional arrays, but we had run it with real data, and it had worked, generating an optimal mix of BAC One-Elevens and Viscounts on the London route. Oisin later became the economic planning manager.

We introduced an algorithm for loading the cost of the peak service with the cost of off-peak idle capacity, and we also introduced the concept of 'load-factor of marginal seats' which was related to the mean load factor by a power-law. The power selected was different, depending on whether you used larger aircraft or ran a higher frequency; in the latter case the load factor of marginal seats was closer to the average.
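The exponents themselves are not reproduced here, but the shape of the idea can be shown with an assumed functional form, in which the marginal-seat load factor is a power of the mean load factor and the exponent depends on how the extra capacity is provided:

```python
# Illustrative only: an assumed power-law linking the load factor of marginal
# seats to the mean load factor. The exponents here are hypothetical; in the
# 1965 paper they were environmental variables to be estimated from data.
def marginal_load_factor(mean_load_factor, exponent):
    return mean_load_factor ** exponent

mean_lf = 0.65
# Extra capacity via higher frequency keeps marginal seats closer to the mean
# load factor (exponent nearer 1) than extra capacity via larger aircraft.
print("higher frequency:", marginal_load_factor(mean_lf, 1.3))
print("larger aircraft :", marginal_load_factor(mean_lf, 2.0))
```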

The classification of the variables is of interest: we distinguished independent variables requiring strategic decisions (eg aircraft purchase) from those requiring tactical decisions (fly this route at that time) and recognised 'technical constraints' (time to fly a route with a specific type of aircraft). We identified as 'environmental variables' the exponents in the marginal load-factor functions. A key intermediate variable was a 3-dimensional array of partial derivatives of T(j,k) with regard to f(i,j), where the first is the number of revenue passengers in period j in direction k, and the second is the frequency of aircraft type i in period j. We also had as technical constraints the maximum hours an aircraft could fly in a single period, and the annual allowed maximum hours. There was in any period a threshold frequency above which utilisation became period-limited rather than annual-limited. The model was thus full of discontinuities and non-linearities, and its behaviour was complex and 'interesting'.

We went on to apply this approach to the route system as a whole, after testing it successfully on the London route, and we went on to try to develop a rationale for spreading the company overheads over the various routes equitably, in a programme to estimate route profitability in a total company model. The key concept here was the introduction of 'volume-dependent' and 'variability-dependent' components for the company overheads, the first being related to routine operational costs and the second being related to the marketing, planning and scheduling functions. This was, in effect, an attempt to bring 'entropy' into the cost aspect of a company model, and I think it would have been a global first, but I don't know. We did not get as far as doing this 'for real', though we did develop a substantial body of fleet, route and operational software, in Fortran, which I believe ran successfully for some time after I left in 1970.
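The actual allocation formulae are not to hand, but the idea can be illustrated with a hypothetical rule: one overhead pool is spread in proportion to route volume, the other in proportion to a variability measure such as the coefficient of variation of route traffic.

```python
# Hypothetical data and allocation rule, illustrating the split of overheads
# into volume-dependent and variability-dependent components.
from statistics import mean, pstdev

routes = {
    # route: monthly passenger thousands over a year (made-up figures)
    "DUB-LHR": [60, 62, 65, 70, 80, 95, 110, 112, 90, 75, 65, 62],
    "DUB-JFK": [10, 10, 12, 18, 30, 45, 55, 52, 30, 18, 12, 10],
}

volume_overhead = 1_000_000       # pool driven by routine operational costs
variability_overhead = 400_000    # pool driven by marketing/planning/scheduling

volumes = {r: sum(v) for r, v in routes.items()}
variability = {r: pstdev(v) / mean(v) for r, v in routes.items()}

total_volume = sum(volumes.values())
total_variability = sum(variability.values())

for r in routes:
    allocated = (volume_overhead * volumes[r] / total_volume
                 + variability_overhead * variability[r] / total_variability)
    print(r, round(allocated))
```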


The Computer as an Analytical Management Tool

What follows represents a distillation of the writer's 1960s Aer Lingus computing experience into a short paper, under the above title, which was published in the August-September 1970 issue of Léargas, the publication of An Foras Riaracháin, the Institute of Public Administration. It is techno-economic, but also has a socio-technical dimension.

There are two distinct and largely unrelated fields of work in which the computer has proved to be useful. These may be defined as the 'commercial' and the 'scientific'.

The commercial field seems to me to possess in most cases the following features:

(a) handling large volumes of data on a routine basis;
(b) restriction of the analytical capability of the computer itself to a few elementary operations;
(c) domination of the computer management by people who tend to be technique-oriented;
(d) preoccupation of the technicians with micro-problems;
(e) a tendency for programmes once written to be inflexible and difficult to adapt to take advantage of opportunities not seen at specification time.

The scientific field, which developed earlier, has remained in a world of its own; sophisticated numerical analytical techniques for solving difficult problems at the frontiers of knowledge have occupied the minds of the vanguard. Bread and butter is earned by engineering calculations: stress analysis, aerodynamics etc.

Commercial computer installations tend to be restricted by their place in the organisation to remain at the clerical rather than the analytical level. Installations connected with firms where technology is important tend to be under the engineers, and to be staffed by people for whom the solving of the stress equations is the main object in life.

There is fruitful work to be done in the bridging of these two worlds. This does not occur automatically. I have personal experience of one UK manufacturer of sophisticated hardware who, in trying to sell it to Irish users, failed to make the sale. It turned out that his computer was under the control of the engineers, while his sales department was doing longhand calculations to estimate how the hardware would work out in the prospective Irish customer's system.

It happened that the Irish firm possessed a micro-economic model of its production system, which had been developed for the purpose of choosing between the firms which sold hardware. On the basis of this model, the Irish firm bought hardware from a US competitor.

This story has in fact occurred twice in my experience, the second case being a matter of choice between two US firms.

(The hardware was aircraft, and the firm was Aer Lingus. RJ April 2001)

The moral of this story is that there is scope for the development of micro-economic models to check out if the capital it is proposed to invest in the new hardware will in fact pay off.

The Irish computer scene lends itself particularly well to the development of this type of computer application. Our universities produce many good analytical minds, who aspire to be among the world innovators. Often they look to the technologies of the large nations for their fulfilment. They have not yet realised that our future, as a nation with know-how and expertise, is to jump in and excel where the giants are blind.

Our economic life is not dominated by sophisticated hardware-producers in competition with each other. We can therefore apply our expertise to choosing between the produce of the giant competitors, as it can be used in economic applications in Ireland, or in any other small or medium economy.

In other words, we can set our minds to contracting to build models of economic systems to run on computers, use them to choose what hardware we buy in our own applications, and we can export this as a service to those who need it.

This type of computing is neither commercial data-processing nor is it in the tradition of scientific computing. Nor is it simply a mixture. It is best described as a bridge. It takes the technical coefficients from one side and the unit costs from the other and blends them into a model of an economic system interacting with an environment, subject to technical, marketing, policy etc constraints.

The type of mathematics used comes easily to the computer man who has grown up on the scientific side. There is more to it than mathematics: one needs a sense of scientific analysis when it comes to organising and clarifying the data. It is here that the would-be model-builder runs into the morass of the commercial data-processing world.

It is rare to find a commercial system that abstracts and classifies in such a way as to provide good inputs for an economic model. It will count nuts and bolts, and keep track of the money, but it seems rarely to occur to a commercial system analyst that there might be an interest in classifying costs on the basis of their dependence on capacity, or load, or event-counts, or any appropriate easily measurable statistic. Thus the planning model-builder often has to rely on manual input.

This, while being a weakness in that it makes it hard to link a planning model on to a data-processing system in a large organisation, is a strength in that it means that the model must be economically constructed with much thought put into selecting the really significant features. Micro-economic model building is therefore a mobile, portable, exportable asset.

Nor does a micro-economic model necessarily involve a large computer. The easiest way for this type of know-how to spread is to have cheap hardware readily available, without the need to queue for service. The model referred to above, which caused concern to the UK manufacturer, was developed on a small machine now regarded as obsolete and unsaleable by the computer manufacturers. Yet it is still perfectly serviceable. Such machines could be used to generalise the knowledge of how to solve problems on computers; it would be quite possible for the State to buy up all the 'obsolete' equipment for a song, organise a group to maintain it, and make computing services available cheaply to every town, technical college, vocational school, secondary school, firm and farm in the country. This, rather than some centralised super-machine, for which 100% utilisation is the dominant goal, is the way forward.

Some Techno-economic Modelling Concepts in 1970

After leaving Aer Lingus I attempted to promote the feasibility of computer use in 'what if' scenarios, and I ran a few seminars, mostly with postgraduate students, or at Operations Research Society events, for which I prepared some notes. These are not worth reproducing, as they are somewhat constrained by the then accessible technology, which was dominated by Fortran programming on a mainframe computer. They usually involved using a mainframe in quasi-conversational mode: do a run, look at the printed results, change an input punched card, and do another run, etc. This could often be done hands-on at night. The notes cover the following processes:

Evolution of a herd of livestock, taking into account the non-linear relationship between feed regime and animal weight, and a time-dependent price environment.

Overhead Costs and Manpower Planning: this constituted an attempt to relate management productivity to the variability of operating statistics, and introduced the 'entropy' concept. Scale economies were related to management costs as well as production unit-costs; large volume helps to smooth the statistics. Middle management costs were related to the variability of the statistics of their processes.

An airline 5-year plan, in two phases, the first being based on revenue, direct costs and projected operating statistics dependent on abstraction and analysis of current statistics. The second phase analysed route profitability after spreading all overhead costs in a manner related to route characteristics. The problem of how to interface the planning statistical database with the current routine data-collection system was addressed, specifying an abstraction process.

The two processes noted above were the basis for the somewhat negative episode noted on June 11 1967 in the Greaves Diaries.

Economics of Real-time Reservations Systems was treated in terms of accuracy and updating lag-time, and the relationship of these variables with the 'no-show' and 'no-record' statistics.



Copyright Dr Roy Johnston 2003