Creating the Needed Interface
Problems with Computers in DOD
Les Earnest describes the problems he observed in how the DOD tried to utilize computers during this early postwar period. He writes [Community Memory Mailing List, April 2, 1999]:
I also spent some time working on Air Force intelligence systems. One that was built at Strategic Air Command (SAC) Headquarters collected information world-wide and compiled it into a database on a batch processing computer, which could then be queried. Trouble was, by the time the data was key-punched, error-checked, corrected, error-checked again and inserted into the data base it was about three days old. However, much of this data had a useful life of only a few hours. Not surprisingly, the intelligence officers continued to use paper and grease-pencil displays backed up by enlisted personnel with filing cabinets full of information and with desk calculators. They queried the computer only when directed to do so.
In order to ensure that the multi-million dollar computer system was adequately utilized, the General in charge at Strategic Air Command Headquarters directed that each watch officer would submit at least two queries to the computer staff on each shift. After several months, we checked the logs and found that the number of queries had been exactly twice the number of intervening shifts, no more.
Earnest explains why computers could not simply be substituted for the way things had been done in the past; their use required a significant change in how things would be done. He writes (Ibid.):
A few months before the Hot Springs (1962-ed) conference I was loaned to the Central Intelligence Agency to join a team working on a year-long project at their headquarters whose goal was to "integrate" a number of intelligence files. After one month I concluded that the stated goal was impossible -- you can't take disparate files collected by various people with different and unknown selection and screening criteria and with different coding standards, then crunch them all together in a computer and come up with an integrated data base. I wrote a report saying that it would be necessary to first find out how the information was collected, then develop uniform standards that would encompass all the types of information that were being gathered. After that it would be possible to begin building a new, integrated database, but the historical files would have to remain separate. The head of my group said "Interesting idea, but it is not in our charter."
Just when I eventually left the project, he remarked, "You know, you were right." A short time later they terminated the group. As I recall, this politically-savvy leader later became the head of CIA.
Does that adequately explain why I was frustrated?
Similarly, the Defender Program needed real-time computing capability.
Explaining the nature of the computer systems needed, Herbert York, appointed Director of Defense Research and Engineering in January 1959, wrote (Barber, pg. III-55-56):
One of the next things we have to try to do is to design a computer system which is a big set of electronic hardware that does mathematics at a faster rate than it can be done any other way. We have to design some kind of a system that will notice that some of these are slowing down faster than others and automatically tell us that they are not the warhead. That means that there has to be designed a big piece of what is referred to as a logical machinery. In principle, if you have all of this data, afterward, in the next couple of weeks, you could look it over and decide which is what, but all of it has to be done in that one minute, and no human reaction could be fast enough. All of this has to be done by a machine which is designed, not only to observe these things and observe all of the tracks that they are making, and so on, but a machine which can actually decide that some of them are going too slow and are not the warhead, and therefore, shoot at this other one over here.
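York's description amounts to a real-time filtering problem: among many tracked objects, automatically discard those slowing down too quickly to be the heavy warhead, and do it faster than any human could. The sketch below is a minimal toy illustration of that discrimination logic, not the actual Defender design; the Track structure, the threshold, and all numbers are invented for illustration.

```python
# Toy illustration of the discrimination logic York describes:
# light decoys decelerate faster than a heavy warhead, so tracks
# whose deceleration exceeds a threshold are filtered out
# automatically. All names and numbers here are invented.

from dataclasses import dataclass

@dataclass
class Track:
    track_id: int
    speeds: list[float]  # observed speeds (km/s), oldest first

    def deceleration(self) -> float:
        """Average speed lost per observation interval."""
        return (self.speeds[0] - self.speeds[-1]) / (len(self.speeds) - 1)

DECOY_DECEL_THRESHOLD = 0.05  # km/s per interval; invented value

def likely_warheads(tracks: list[Track]) -> list[Track]:
    """Keep only tracks that are NOT slowing down like light decoys."""
    return [t for t in tracks if t.deceleration() < DECOY_DECEL_THRESHOLD]

if __name__ == "__main__":
    tracks = [
        Track(1, [7.0, 6.99, 6.98, 6.97]),  # slight deceleration: candidate warhead
        Track(2, [7.0, 6.8, 6.5, 6.1]),     # rapid deceleration: likely decoy
    ]
    for t in likely_warheads(tracks):
        print(f"track {t.track_id}: still a candidate warhead")
```

The point of York's "logical machinery" is exactly this kind of decision, made continuously over every track within the one minute available, which is why it could not be left to human reaction time.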
A New Form of Computing is Being Explored
During this period, research was going on in the Cambridge community to create a new form of computing, a form of interactive computing. Rejecting the batch-processing architecture of IBM, which was a leader in the computer industry and also a defense contractor supplying computers used by the DOD, scientists and engineers in the Cambridge community were excited by the prospect of a new form of computer architecture that computer scientist John McCarthy had proposed, called time-sharing.
The new concept made it possible for several individuals to interact with a computer directly and at the same time, effectively sharing the CPU's time. This was in contrast to the batch-processing method used in computers made by IBM and others in the industry, where the computer was kept running continuously by an operator but the user could not interact with it. Instead, an individual would put a program on punch cards, bring the cards to a computer center, and leave them there to be run by the operator, returning later in the day, or days later, to collect the printout.
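To make the contrast concrete, here is a minimal toy sketch of the two modes (my illustration, not period code): in batch mode each job runs to completion before the next begins, so the last user waits for everyone ahead in the queue, while in time-sharing the processor cycles through the jobs in short slices, so every user sees progress almost immediately. The job names and durations are invented.

```python
# Toy contrast between batch processing and round-robin time-sharing.
# Jobs are just (name, remaining_work) pairs; all numbers invented.
from collections import deque

def run_batch(jobs):
    """Run each job to completion before starting the next."""
    clock = 0
    for name, work in jobs:
        clock += work
        print(f"[batch]  t={clock:>2}: {name} finished")

def run_timeshared(jobs, quantum=1):
    """Round-robin: give each job a short time slice in turn."""
    clock = 0
    queue = deque(jobs)
    while queue:
        name, work = queue.popleft()
        step = min(quantum, work)
        clock += step
        if work - step > 0:
            queue.append((name, work - step))  # back of the line
        else:
            print(f"[shared] t={clock:>2}: {name} finished")

jobs = [("alice", 3), ("bob", 3), ("carol", 3)]
run_batch(jobs)       # carol gets nothing until t=9
run_timeshared(jobs)  # every job advances from the first time slice
```

Total throughput is the same in both modes; what time-sharing buys is responsiveness, which is precisely what made direct human-computer interaction possible.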
There were those in the Cambridge computer science community who recognized the need, then and for the future, for humans to be able to interact ever more directly with the computer (Interview with Licklider, Charles Babbage Institute):
(...) the community here in Cambridge and Boston -- was thinking of computing in a way essentially quite different, maybe even new, from people in other parts of the academic community...certainly different from people in industry.
In this context, J.C.R. Licklider, a psychologist, studied the human-computer relationship and concluded that it was crucial, for the present and the future, to develop a rapport, a symbiotic relationship, between the human and the computer. In a paper examining his own research activity, he found that much of his time was spent on tasks that merely put him in a position to think, with only a small percentage of his time used to actually think about his research problems. He felt that a computer could do many of those tasks, and that a partnership between the human and the computer was needed to facilitate scientific thinking. He also outlined the kinds of computer research needed to make the new form of computing possible. Published in 1960, his paper "Man-Computer Symbiosis" became a major part of the program for basic research in computer science in the U.S. throughout the 1960s.
In 1961 there was a series of lectures at MIT about the future of the computer. One of the lectures was given by John McCarthy, the young MIT professor, who described how time-sharing would make it possible for several people to use a computer at the same time and to interact with it directly. At the final lecture, John Pierce, a noted researcher at Bell Labs, described two streams in computer science research.
One stream was developing the human-computer rapport; the other was having the computer automate intelligent activity. Pierce proposed that, at the current state of computer development, the first was the primary research task. This area of research took exploring the human-computer relationship as its prime function; its objective was to identify which tasks in the relationship were most appropriate for each of the participants.
J.C.R. Licklider was also a participant in this lecture series. He contributed to the discussion by supporting McCarthy's observation that computers for the first time give us the tools to "come to grips in a serious way with intellectual processes. We have never had them before," he adds, "and we have made more progress since getting them than in decades past, despite the flowering of mathematics in the early part of the century." [Greenberger, pg. 319] His vision was that the human and the computer would function together in a partnership which would greatly enhance human intellectual capacity.
Vannevar Bush was the moderator at this lecture. Others who contributed included Claude Shannon, Walter Rosenblith, and Marvin Minsky.
J.C.R. Licklider Invited to ARPA
Shortly after this lecture series, J.C.R. Licklider was invited to join ARPA as the director of two newly created offices, the Command and Control Office and the Behavioral Science Office. Describing the invitation, Jack Ruina, the head of ARPA during the 1961-1963 period, writes that he realized the need for a new form of computing, different from number crunching, and that he invited Licklider, who had a vision of developing this form of computing, to join ARPA.
The description of the Command and Control program read [Barber, V-49]:
CCR was assigned to ARPA in June 1961. Its primary purpose is to support research on the conceptual aspects of command and control and to provide a better understanding of organizational, informational, and man-machine relationships.
However, Ruina was not happy with the Command and Control conception of computing. He writes [Ruina Interview, Babbage Institute]:
It was not my idea that ARPA should be working with computers, but somehow when the idea came to me from the staff (I was a director and it was a large staff), saying, well why don't we start a program along these lines, I was impressed....To my mind, the issue at the time was how to explore the potential power that was growing in hardware for applications other than straight number crunching.
Ruina realized that there was an important potential in the computer, but that it was not yet understood [Ruina, May 9, 1975, pg. V-52]:
And so in many applications it was rather clear that (...) the hardware was there but what to do with it clearly was lacking --- what to do with this tremendous power. So people came around and talked about this whole question of the organization and use of computers for other than purely numerical scientific calculations. It impressed me as being something that was important.
Licklider confirms Ruina's interest in the potential power of the computer. He writes that he was invited to create new uses of the computer "different from number crunching."
Licklider was interested in exploring how to create the new form of computing that would enable those using computers to interact with them [Barber, quoting Licklider, V-51]:
There was the belief in the heads of a number of people -- a small number -- that people could really become very much more effective in their thinking and decision making if they had the support of a computer system, good displays and so forth, good data bases, computation at your command. It was kind of an image that we were working toward the realization of....It really wasn't a command and control research program. It was an interactive computing program. And my belief was, and still is, you can't really do command and control outside the framework of such a thing....Of course, that wasn't believed by people in the command and control field.
He proposed that the problems that had been identified under the category of "Command and Control" were really problems that would be solved by research to develop "interactive computing" [Licklider Interview, Charles Babbage Institute]:
[The]...problems of command and control were essentially problems of man computer interaction....Every time I had the chance to talk, I said the mission is interactive computing....But I had pretty well wrapped up in me all of the topics that it would take to put interactive computing together. I deliberately talked about the intergalactic network, but deliberately did not try to do anything about netting them together because it was becoming very difficult just to get them to run.
This was a time at ARPA when there was recognition of and support for basic research, and Licklider was starting a new area of scientific research there. Licklider describes how he had two years to demonstrate progress in this new form of computing. In a way consistent with what Vannevar Bush had proposed, Licklider proceeded to fund university researchers, encouraging them to develop research programs. He funded Robert Fano at MIT to develop what came to be known as Project MAC, and he funded a University of California at Berkeley program in time-sharing because he felt the elements were there to make the research fruitful.
He funded John McCarthy at Stanford University and Marvin Minsky at MIT to develop research programs in artificial intelligence (AI), and he funded Herbert Simon and Allen Newell at CMU in their AI studies.
Licklider's vision for the development of computer research included the goal of making it possible for researchers, working on diverse machines with diverse operating systems, to collaborate, to share their problems, and to find what questions their research had in common. Thus they would be able to identify the generic issues of computer science rather than being bogged down in the particularities of their individual software or hardware.
But to do such study, a new kind of communication between computer and human was needed, and research in computer networking and packet switching would need to be supported. Discussing Licklider's vision, Ruina says that the need to understand the computer as a communication device was at the essence of Licklider's basic research program, and that Licklider's work at ARPA and its identification of basic research in computer science led to changing the name of the ARPA office that Licklider headed from Command and Control to the Information Processing Techniques Office. Ruina explains (Barber, V-53):
The ARPA program had thus quickly developed from an expedient solution to an embarrassing Departmental problem involving a specific piece of hardware to a far-reaching basic research program in advanced computer technology in many ways similar and complementary to the materials science program. By the beginning of 1964 this change was reflected in renaming the office Information Processing Techniques, a title that continued into the 1970s.
In a similar way, Licklider compliments the heads of ARPA during his first term there. He writes (Bartee, pg. 222):
I was also fortunate in that my two immediate superiors in the chain of command, Ruina and Sproull, did me the great favor of listening intently long enough to decide that they were fundamentally in support of what was going on. After that they spent little time heckling me. I thought of both of them as absolutely ideal bosses in providing support beyond what I would have been strong enough to supply had I been in their shoes. I was very happy with both of them.
Computer Science as Exploring Human-Computer Interaction
In this context, it is important to understand that there was no field of computer science at the time Licklider joined ARPA, though a notion of computer science was in the air. Licklider's research had led him to want to understand the theoretical underpinnings of human communication, and he felt the computer would be helpful in that effort.
Others, like Claude Shannon and Norbert Wiener, were part of the scientific community that Licklider had found very stimulating and instructive in the Cambridge area during the late 1940s and 1950s. (See for example "Netizens: On the History and Impact of Usenet and the Internet", Los Alamitos, pg. 76-95.) As a graduate student under Vannevar Bush, Claude Shannon studied the "laws underlying the communication of messages in man-made systems." Norbert Wiener, an MIT faculty member and a colleague of Bush, maintained that there was a set of common laws governing signals in humans and machines. Combining these questions, research in computer science in the 1960s could be formulated as exploring the laws of communication in machines (i.e., computers) and humans, and the rapport between them. A research program to create interactive computing is therefore a program exploring the relationship between the human and the computer and their interaction. (See for example Howard Rheingold, "Tools for Thought", New York, 1985, pgs. 115-151.)
Thus the research framework for computer science as outlined by these scientists concerned the laws of communication in humans and computers and their interrelationship. And the methodology was similar to that of the early Royal Society scientists in Great Britain: involve oneself directly in experimenting with the development of the new idea or technology.