Bibliography

Interlisp Bibliography

This bibliography is kept in sync with our Zotero collection library.

1959
McCarthy, John

LISP (for LISt Processor) is a programming system for the IBM 704 being developed by the Artificial Intelligence Group at MIT. We are developing it in order to program the Advice Taker which is to be a system for instructing a machine in a combination of declarative and imperative sentences.
1960
Deutsch, L. Peter

A program has been written for the PDP-1 providing a subset of the features of the LISP interpreter for the IBM 709/7090. This program, which contains no known bugs, will run on any PDP-1 with automatic divide. On machines with more than 4K of memory, it must be run in memory field 0.
It is assumed that the reader is familiar with 709 LISP in general and with the LISP 1.5 Programmer's Manual in particular.
McCarthy, John

A programming system called LISP (for LISt Processor) has been developed for the IBM 704 computer by the Artificial Intelligence group at M.I.T. The system was designed to facilitate experiments with a proposed system called the Advice Taker, whereby a machine could be instructed to handle declarative as well as imperative sentences and could exhibit "common sense" in carrying out its instructions. The original proposal for the Advice Taker was made in November 1958. The main requirement was a programming system for manipulating expressions representing formalized declarative and imperative sentences so that the Advice Taker system could make deductions.
1963
Slagle, James R.

A large high-speed general-purpose digital computer (IBM 7090) was programmed to solve elementary symbolic integration problems at approximately the level of a good college freshman. The program is called SAINT, an acronym for "Symbolic Automatic INTegrator." This paper discusses the SAINT program and its performance. SAINT performs indefinite integration. It also performs definite and multiple integration when these are trivial extensions of indefinite integration. It uses many of the methods and heuristics of students attacking the same problems. SAINT took an average of two minutes each to solve 52 of the 54 attempted problems taken from the Massachusetts Institute of Technology freshman calculus final examinations. Based on this and other experiments with SAINT, some conclusions concerning computer solution of such problems are: (1) Pattern recognition is of fundamental importance. (2) Great benefit would have been derived from a large memory and more convenient symbol manipulating facilities. (3) The solution of a symbolic integration problem by a commercially available computer is far cheaper and faster than by man.
1965
Deutsch, L. Peter; Lampson, Butler W.
1966
Weizenbaum, Joseph

Eliza is a program operating within the MAC time-sharing system at MIT which makes certain kinds of natural language conversation between man and computer possible. Input sentences are analyzed on the basis of decomposition rules which are triggered by key words appearing in the input text. Responses are generated by reassembly rules associated with selected decomposition rules. The fundamental technical problems with which ELIZA is concerned are: (1) the identification of key words, (2) the discovery of minimal context, (3) the choice of appropriate transformations, (4) generation of responses in the absence of key words, and (5) the provision of an editing capability for ELIZA "scripts". A discussion of some psychological issues relevant to the ELIZA approach as well as of future developments concludes the paper.
Bobrow, Daniel G.; Teitelman, Warren

This article describes a notation and a programming language for expressing, from within a LISP system, string transformations such as those performed in COMIT or SNOBOL. A simple transformation (or transformation rule) is specified by providing a pattern which must match the structure to be transformed and a format which specifies how to construct a new structure according to the segmentation specified by the pattern. The patterns and formats are greatly generalized versions of the left-half and right-half rules of COMIT and SNOBOL. For example, elementary patterns and formats can be variable names, results of computations, disjunctive sets, or repeating subpatterns; predicates can be associated with elementary patterns which check relationships among separated elements of the match; it is no longer necessary to restrict the operations to linear strings since elementary patterns can themselves match structures. The FLIP language has been implemented in LISP 1.5, and has been successfully used in such disparate tasks as editing LISP functions and parsing Kleene regular expressions.
Teitelman, Warren

PILOT is a programming system constructed in LISP. It is designed to facilitate the development of programs by easing the familiar sequence: write some code, run the program, make some changes, write some more code, run the program again, etc. As a program becomes more complex, making these changes becomes harder and harder because the implications of changes are harder to anticipate. In the PILOT system, the computer plays an active role in this evolutionary process by providing the means whereby changes can be effected immediately, and in ways that seem natural to the user. The user of PILOT feels that he is giving advice, or making suggestions, to the computer about the operation of his programs, and that the system then performs the work necessary. The PILOT system is thus an interface between the user and his program, monitoring both the requests of the user and the operation of his program. The user may easily modify the PILOT system itself by giving it advice about its own operation. This allows him to develop his own language and to shift gradually onto PILOT the burden of performing routine but increasingly complicated tasks. In this way, he can concentrate on the conceptual difficulties in the original problem, rather than on the niggling tasks of editing, rewriting, or adding to his programs. Two detailed examples are presented. PILOT is a first step toward computer systems that will help man to formulate problems in the same way they now help him to solve them. Experience with it supports the claim that such "symbiotic systems" allow the programmer to attack and solve more difficult problems.
Daniel G. Bobrow; Daniel L. Murphy
Bobrow, Daniel G.

Storage allocation, maintenance, and reclamation are handled automatically in LISP systems. Storage is allocated as needed, and a garbage collection process periodically reclaims storage no longer in use. A number of different garbage collection algorithms are described. A common property of most of these algorithms is that during garbage collection all other computation ceases. This is an untenable situation for programs which must respond to real time interrupts. The paper concludes with a proposal for an incremental garbage collection scheme which allows simultaneous computation and storage reclamation.
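The stop-the-world property noted in this abstract is easiest to see in the classical mark-and-sweep approach, one of the algorithms such surveys cover. Below is a minimal sketch in Common Lisp notation; it is illustrative only, not the paper's code, and MARK and *MARKED* are invented names.

  (defvar *marked* (make-hash-table :test #'eq))

  (defun mark (obj)
    ;; Recursively mark every cell reachable from OBJ.
    (when (and (consp obj) (not (gethash obj *marked*)))
      (setf (gethash obj *marked*) t)
      (mark (car obj))
      (mark (cdr obj))))

  ;; The sweep phase then reclaims every allocated cell absent from
  ;; *MARKED*. All other computation must cease for the duration of the
  ;; pass, which is exactly what an incremental scheme avoids by
  ;; interleaving small amounts of collection with ordinary computation.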
Bobrow, Daniel G.; Murphy, Daniel L.

In an ideal list-processing system there would be enough core memory to contain all the data and programs. The paper describes a number of techniques used to build a LISP system which utilizes a drum for its principal storage medium, with a surprisingly low time-penalty for use of this slow storage device. The techniques include careful segmentation of system programs, allocation of virtual memory to allow address arithmetic for type determination, and a special algorithm for building reasonably linearized lists. A scheme is described for binding variables which is good in this environment and allows for complete compatibility between compiled and interpreted programs with no special declarations.
Berkeley, Edmund Callis; Bobrow, Daniel Gureasko

Among the new languages for instructing computers is a remarkable one called LISP. The name comes from the first three letters of LIST and the first letter of PROCESSING. Not only is LISP a language for instructing computers but it is also a formal mathematical language, in the same way as elementary algebra when rigorously defined and used is a formal mathematical language.
LISP is designed primarily for processing data consisting of lists of symbols. It has been used for symbolic calculations in differential and integral calculus, electrical circuit theory, mathematical logic, game playing, and other fields of intelligent handling of symbols.
The purpose of the present article is to make a bridge between the ideas and terms of ordinary English and elementary mathematics, and the ideas and terms known and used by LISP programmers.
1967
Teitelman, Warren

The paper discusses some of the considerations involved in designing and implementing a pattern matching or COMIT feature inside of LISP. The programming language FLIP is presented here as a paradigm for such a feature. The design and implementation of FLIP discussed below emphasize compact notation and efficiency of operation. In addition, FLIP is a modular language and can be readily extended and generalized to include features found in other pattern driven languages such as CONVERT and SNOBOL. This makes it extremely versatile. The development of this paper proceeds from abstract considerations to specific details. The syntax and semantics of FLIP are presented first, followed by a discussion of the implementation, with special attention devoted to techniques used for reducing the number of conses required as well as improving search strategy. Finally, FLIP is treated as a working system and viewed from the user's standpoint. Here we present some of the additions and extensions to FLIP that have evolved out of almost two years of experimentation. These transform it from a notational system into a practical and useful programming system.
Bobrow, D. G.; Deutsch, L. P.; Murphy, D. L.

This is a preliminary memo describing the BBN LISP 1.69 system for the SDS 940 computer. It is a description of how the system is working now, except for those places clearly noted in the text below. Any difference between the descriptions given and actual operation found should be reported, in writing, to the authors. At the end of this memo there is a copy of the index to function descriptions in the document "The BBN LISP SYSTEM" (revised October 1966).
Deutsch, P.

The editor described here is implemented within the PDP-1 and SDS 940 time-sharing LISP systems, but can be used with minor changes within any LISP system which includes the capabilities of LISP 1.5. It was begun by the author in 1965 and later extended by Bobrow and Teitelman at BBN.
Teitelman, Warren

A memo listing the latest changes made to the 940 LISP Library.
Harrison, Malcolm

An introduction to LISP is given on an elementary level. Topics covered include the programming system, 240 exercises with solutions, debugging of LISP programs, and styles of programming. More advanced discussions are contained in the following articles: Techniques using LISP for automatically discovering interesting relations in data; Automation, using LISP, of inductive inference on sequences; Application of LISP to machine checking of mathematical proofs; METEOR: A LISP interpreter for string transformations; Notes on implementing LISP for the M-460 computer; LISP as the language for an incremental computer; The LISP system for the Q-32 computer; An auxiliary language for more natural expression -- the A-language. Some applications of the LISP programming language are given in the appendices.
Bobrow, Daniel G.; Murphy, Daniel L.

In an ideal list-processing system there would be enough core memory to contain all the data and programs. Described in this paper are a number of techniques that have been used to build a LISP system utilizing a drum for its principal storage medium, with a surprisingly low time penalty for use of this slow storage device. The techniques include careful segmentation of system programs, allocation of virtual memory to allow address arithmetic for type determination, and a special algorithm for building reasonably linearized lists. A scheme for binding variables is described which is good in this environment and allows for complete compatibility between compiled and interpreted programs with no special declarations.
Daniel G. Bobrow; D. Lucille Darley; L. Peter Deutsch; Daniel L. Murphy; Warren Teitelman

This report describes the LISP system implemented at BBN on the SDS 940 Computer. This LISP is an upward compatible extension of LISP 1.5 for the IBM 7090, with a number of new features which make it work well as an on-line language. These new features include tracing and conditional breakpoints in functions for debugging, and a sophisticated LISP-oriented editor. The BBN 940 LISP system has a large memory store (approximately 50,000 free words) utilizing special paging techniques for a drum to provide reasonable computation times. The system includes an interpreter, a fully compatible compiler, and an assembly language facility for inserting machine code subroutines.
1968
Sproull, Robert F.; Sutherland, Ivan E.

When compared with a drawing on paper, the pictures presented by today's computer display equipment are sadly lacking in resolution. Most modern display equipment uses 10 bit digital to analog converters, providing for display in a 1024 by 1024 square raster. The actual resolution available is usually somewhat less since adjacent spots or lines will overlap. Even large-screen displays have limited resolution, for although they give a bigger picture, they also draw wider lines so that the amount of material which can appear at one time is still limited. Users of larger paper drawings have become accustomed to having a great deal of material presented at once. The computer display scope alone cannot serve the many tasks which require relatively large drawings with fine details.
Bobrow, Daniel G.; Murphy, Daniel L.

The problem of the use of two levels of storage for programs is explored in the context of a LISP system which uses core memory as a buffer for a large virtual memory stored on a drum. Details of timing are given for one particular problem.
1969
Bobrow, D. G.

This first (long delayed) LISP Bulletin contains samples of most of those types of items which the editor feels are relevant to this publication. These include announcements of new (i.e. not previously announced here) implementations of LISP (or closely related) systems; quick tricks in LISP; abstracts of LISP related papers; short writeups and listings of useful programs; and longer articles on problems of general interest to the entire LISP community. Printing of these last articles in the Bulletin does not interfere with later publications in formal journals or books. Short write-ups of new features added to LISP are of interest, preferably upward compatible with LISP 1.5, especially if they are illustrated by programming examples.

This document describes the BBN-LISP system currently implemented on the SDS 940. It is a dialect of LISP 1.5; the differences between the IBM 7090 version and this system are described in Appendices 1 and 2. Principally, this system has been expanded from the LISP 1.5 on the 7090 in a number of different ways. BBN-LISP is designed to utilize a drum for storage and to provide the user a large virtual memory, with a relatively small penalty in speed (using special paging techniques described in Bobrow and Murphy 1967).
1971
Teitelman, W.; Bobrow, D. G.; Hartley, A. K.; Murphy, D. L.

This document describes the BBN-LISP system currently implemented on the DEC PDP-10 under the BBN TENEX time sharing system. BBN-LISP is designed to provide the user access to the large virtual memory allowed by TENEX with a relatively small penalty in speed (using special paging techniques described in Bobrow and Murphy, 1967). Additional data types have been added, including strings and hash association tables (hash links). This system has been designed to be a good on-line interactive system. Some of the features provided include sophisticated debugging facilities with tracing and conditional breakpoints, a sophisticated LISP-oriented editor within the system, and a compatible compiler and interpreter.
1972
Teitelman, Warren

This paper describes a research effort and programming system designed to facilitate the production of programs. Unlike automated programming, which focuses on developing systems that write programs, automated programmering involves developing systems which automate (or at least greatly facilitate) those tasks that a programmer performs other than writing programs: e.g., repairing syntactical errors to get programs to run in the first place, generating test cases, making tentative changes, retesting, undoing changes, reconfiguring, massive edits, et al., plus repairing and recovering from mistakes made during the above. When the system in which the programmer is operating is cooperative and helpful with respect to these activities, the programmer can devote more time and energy to the task of programming itself, i.e., to conceptualizing, designing and implementing. Consequently, he can be more ambitious, and more productive.
Teitelman, W.; Bobrow, D. G.; Hartley, A. K.; Murphy, D. L.

This document describes the BBN-LISP system currently implemented on the DEC PDP-10 under the BBN TENEX time sharing system. BBN-LISP is designed to provide the user access to the large virtual memory allowed by TENEX, with a relatively small penalty in speed (using special paging techniques described in Bobrow and Murphy, 1967). Additional data types have been added, including strings and hash association tables (hash links). This system has been designed to be a good on-line interactive system. Some of the features provided include sophisticated debugging facilities with tracing and conditional breakpoints, a sophisticated LISP-oriented editor within the system, and a compatible compiler and interpreter.
J778.SYSREM DOC on SYS05 (LISP features designed to aid the LISP programmer)
Masinter
Maurer, W.D.

LISP, the most important of the list processing languages, was developed in the early 1960s by John McCarthy and his students while he was on the faculty of MIT. It may be characterised as a functional language, a symbolic language, a list processing language, a recursive language, and a logical language. All of these facets of LISP are studied and brought together in this book.
The book is aimed at students who already know an algebraic language, such as FORTRAN or ALGOL. It is designed with a bias towards classroom teaching rather than self-teaching, although it may be used profitably in either way. There are exercises at the end of each chapter, and the answers to some of these exercises are given at the end of the book. The book has been used and tested in advanced programming courses for both undergraduate and graduate students at the University of California, Berkeley.
1973
Deutsch, L. Peter

This paper presents a machine designed for compact representation and rapid execution of LISP programs. The machine language is a factor of 2 to 5 more compact than S-expressions or conventional compiled code, and the compiler is extremely simple. The encoding scheme is potentially applicable to data as well as program. The machine also provides for user-defined data structures.
Reboh, Rene; Sacerdoti, Earl

A preliminary version of QLISP is described. QLISP permits free intermingling of QA4-like constructs with INTERLISP code. The preliminary version contains features similar to those of QA4 except for the backtracking of control environments. It provides several new features as well. This preliminary manual presumes a familiarity with both INTERLISP and the basic concepts of QA4. It is intended to update rather than replace the existing documentation of QA4.
Bobrow, Daniel G.; Wegbreit, Ben

Many control and access environment structures require that storage for a procedure activation exist at times when control is not nested within the procedure activated. This is straightforward to implement by dynamic storage allocation with linked blocks for each activation, but rather expensive in both time and space. This paper presents an implementation technique using a single stack to hold procedure activation storage which allows retention of that storage for durations not necessarily tied to control flow. The technique has the property that, in the simple case, it runs identically to the usual automatic stack allocation and deallocation procedure. Applications of this technique to multitasking, coroutines, backtracking, label-valued variables, and functional arguments are discussed. In the initial model, a single real processor is assumed, and the implementation assumes that multiple processes coordinate by passing control explicitly to one another. A multiprocessor implementation requires only a few changes to the basic technique, as described.
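A minimal sketch of the retention idea, in Common Lisp notation, with heap-allocated frames standing in for the paper's single-stack technique (the names and the frame layout are assumptions for illustration, not the paper's implementation):

  (defstruct frame
    (bindings '())   ; variable bindings of this activation
    (caller nil))    ; access link to the activating frame

  (defun make-activation (bindings caller)
    (make-frame :bindings bindings :caller caller))

  (defun frame-lookup (var frame)
    ;; Search this activation, then follow the caller chain.
    (loop for f = frame then (frame-caller f)
          while f
          do (let ((hit (assoc var (frame-bindings f))))
               (when hit (return (cdr hit))))))

  ;; Because a frame is retained independently of control flow, an inner
  ;; activation can outlive its caller's return, which is what coroutines,
  ;; backtracking, and functional arguments require:
  ;;   (defvar *outer* (make-activation '((x . 1)) nil))
  ;;   (defvar *inner* (make-activation '((y . 2)) *outer*))
  ;;   (frame-lookup 'x *inner*)   ; => 1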
Deutsch, L. Peter

Program verification refers to the idea that the intent or effect of a program can be stated in a precise way that is not a simple "rewording" of the program itself, and that one can prove (in the mathematical sense) that a program actually conforms to a given statement of intent. This thesis describes a software system which can verify (prove) some non-trivial programs automatically. The system described here is organized in a novel manner compared to most other theorem-proving systems. It has a great deal of specific knowledge about integers and arrays of integers, yet it is not "special-purpose", since this knowledge is represented in procedures which are separate from the underlying structure of the system. It also incorporates some knowledge, gained by the author from both experiment and introspection, about how programs are often constructed, and uses this knowledge to guide the proof process. It uses its knowledge, plus contextual information from the program being verified, to simplify the theorems dramatically as they are being constructed, rather than relying on a super-powerful proof procedure. The system also provides for interactive editing of programs and assertions, and for detailed human control of the proof process when the system cannot produce a proof (or counter-example) on its own.
Teitelman, Warren

INTERLISP (INTERactive LISP) is a LISP system currently implemented on the DEC PDP-10 under the BBN TENEX time sharing system. INTERLISP is designed to provide the user access to the large virtual memory allowed by TENEX, with a relatively small penalty in speed (using special paging techniques described in Bobrow and Murphy, 1967). Additional data types have been added, including strings, arrays, and hash association tables (hash links). The system includes a compatible compiler and interpreter. Machine code can be intermixed with INTERLISP expressions via the assemble directive of the compiler. The compiler also contains a facility for "block compilation" which allows a group of functions to be compiled as a unit, suppressing internal names. Each successive level of computation, from interpreted through compiled to block-compiled, provides greater speed at a cost of debugging ease.
Bobrow, Daniel G.; Raphael, Bertram

Tape 2 of a recorded lecture and demonstration in Interlisp.
1974
Teitelman, Warren
Balzer, Robert M.

This paper addresses the general problem of creating a suitable on-line environment for programming. The amount of software, and the effort required to produce it, to support such an on-line environment is very large relative to that needed to produce a programming language, and is largely responsible for the scarcity of such programming environments. The size of this effort was largely responsible for the scrapping of a major language (QA4) as a separate entity and its inclusion instead as a set of extensions in a LISP environment. The few systems which do exist (e.g., LISP, APL, BASIC, and PL/I) have greatly benefited their users and have strongly contributed to the widespread acceptance of the associated language.
Deutsch, P.

Several conflicting goals must be resolved in deciding on a set of display facilities for Lisp: ease of use, efficient access to hardware facilities, and device- and system-independence. This memo suggests a set of facilities constructed in two layers: a lower layer that gives direct access to the Alto bitmap capability, while retaining Lisp's tradition of freeing the programmer from storage allocation worries, and an upper layer that uses the lower (on the Alto) or a character-stream protocol (for VTS, on MAXC) to provide for writing strings, scrolling, editing, etc. on the screen.
Teitelman, Warren

Documentation for INTERLISP in the form of the INTERLISP Reference Manual is now available and may be obtained from Warren Teitelman, Xerox Palo Alto Research Center. The new manual replaces all existing documentation, and is completely up to date (as of January, 1974). The manual is available in either loose-leaf or bound form. The loose-leaf version (binders not supplied) comes with printed separator tabs between the chapters. The bound version also includes colored divider pages between chapters, and is printed on somewhat thinner paper than the loose-leaf version, in an effort to make it 'portable' (the manual being approximately 700 pages long). Both versions contain a complete master index (approximately 1600 entries), as well as a separate index for each chapter. Although the manual is intended primarily to be used for reference, many chapters, e.g., the programmer's assistant, do-what-I-mean, CLISP, etc., include introductory and tutorial material. The manual is available in machine-readable form, and an on-line question-answering system using the manual as a data base is currently being implemented.
Bobrow, Daniel G.; Raphael, Bertram

New directions in Artificial Intelligence research have led to the need for certain novel features to be embedded in programming languages. This paper gives an overview of the nature of these features, and their implementation in four principal families of AI languages: SAIL; PLANNER/CONNIVER; QLISP/INTERLISP; and POPLER/POP-2. The programming features described include: new data types and accessing mechanisms for stored expressions; more flexible control structures, including multiple processes and backtracking; pattern matching to allow comparison of a data item with a template, and extraction of labeled subexpressions; and deductive mechanisms which allow the programming system to carry out certain activities, including modifying the data base and deciding which subroutines to run next, using only constraints and guidelines set up by the programmer.
1975
Bobrow, Daniel G.

In current machine designs, a machine address gives the user direct access to a single piece of information, namely the contents of that machine word. This note is based on the observation that it is often useful to associate additional information, with some (relatively few) address locations determined at run time, without the necessity of preallocating the storage at all possible such addresses. That is, it can be useful to have an effective extra bit, field, or address in some words without every word having to contain a bit (or bits) to mark this as a special case. The key idea is that this extra associated information can be found by a table search. Although it could be found by any search technique (e.g. linear, binary sorted, etc.), we suggest that an appropriate low overhead mechanism is to use hash search on a table in which the key is the address of the cell to be augmented.
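In modern Lisp terms the mechanism amounts to an identity-keyed side table, with EQ-ness of the cell playing the role of its machine address. A minimal sketch (the function names are invented for illustration; these are not Interlisp's actual hash-link primitives):

  (defvar *hash-links* (make-hash-table :test #'eq))

  (defun put-hash-link (cell info)
    ;; Attach INFO to CELL without widening CELL's representation.
    (setf (gethash cell *hash-links*) info))

  (defun get-hash-link (cell)
    ;; Return the extra information for CELL, or NIL if none was attached.
    (gethash cell *hash-links*))

  ;; Only the relatively few augmented cells occupy table space; all other
  ;; cells pay nothing:
  ;;   (defvar *x* (list 1 2 3))
  ;;   (put-hash-link *x* 'origin-file)
  ;;   (get-hash-link *x*)   ; => ORIGIN-FILE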
Weyl, Stephen

This report describes the file system for the experimental large file management support system currently being implemented at SRI. INTERLISP, an interactive, development-oriented computer programming system, has been augmented to support applications requiring large data bases maintained on secondary store. The data base support programs are separated into two levels: an advanced file system and relational data base management procedures. The file system allows programmers to make full use of the capabilities of on-line random access devices using problem-related symbolic primitives rather than page and word numbers. It also performs several useful data storage functions such as data compression, sequencing, and generation of symbols which are unique for a file.
Anders Haraldson

This paper gives a tutorial introduction to INTERLISP/360-370, a subset of INTERLISP which can be implemented on IBM/360 and similar systems. It contains descriptions of a large number of functions in INTERLISP, with numerous examples, exercises, and solutions. The use of the editor, break package, advising, file handling, and the compiler is covered, and both interactive and batch use of the system are treated.
Deutsch, L. Peter

Program verification refers to the idea that the intent or effect of a program can be stated in a precise way that is not a simple "rewording" of the program itself, and that one can prove (in the mathematical sense) that a program actually conforms to a given statement of intent. This thesis describes a software system which can verify (prove) some non-trivial programs automatically. The system described here is organized in a novel manner compared to most other theorem-proving systems. It has a great deal of specific knowledge about integers and arrays of integers, yet it is not "special-purpose", since this knowledge is represented in procedures which are separate from the underlying structure of the system. It also incorporates some knowledge, gained by the author from both experiment and introspection, about how programs are often constructed, and uses this knowledge to guide the proof process. It uses its knowledge, plus contextual information from the program being verified, to simplify the theorems dramatically as they are being constructed, rather than relying on a super-powerful proof procedure. The system also provides for interactive editing of programs and assertions, and for detailed human control of the proof process when the system cannot produce a proof (or counter-example) on its own.
Deutsch, P.
1976
Deutsch, L. Peter; Bobrow, Daniel G.

This paper describes a new way of solving the storage reclamation problem for a system such as Lisp that allocates storage automatically from a heap, and does not require the programmer to give any indication that particular items are no longer useful or accessible. A reference count scheme for reclaiming non-self-referential structures, and a linearizing, compacting, copying scheme to reorganize all storage at the user's discretion are proposed. The algorithms are designed to work well in systems which use multiple levels of storage, and large virtual address space. They depend on the fact that most cells are referenced exactly once, and that reference counts need only be accurate when storage is about to be reclaimed. A transaction file stores changes to reference counts, and a multiple reference table stores the count for items which are referenced more than once.
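A compressed sketch of the two central structures, in Common Lisp notation, under the abstract's assumptions: counts default to one, only multiply-referenced cells appear in the table, and counts need only be accurate at reclamation time. None of this is the paper's code; NOTE-REF and RECONCILE are invented names.

  (defvar *multiref-table* (make-hash-table :test #'eq))  ; counts > 1 only
  (defvar *transactions* '())                             ; pending count changes

  (defun note-ref (cell delta)
    ;; Log a reference-count change (+1 or -1) instead of updating eagerly.
    (push (cons cell delta) *transactions*))

  (defun reconcile (reclaim-fn)
    ;; Apply the transaction log. Cells absent from the table implicitly
    ;; have a count of one; a cell whose count reaches zero is garbage
    ;; and is handed to RECLAIM-FN.
    (dolist (txn (nreverse *transactions*))
      (let* ((cell  (car txn))
             (count (+ (gethash cell *multiref-table* 1) (cdr txn))))
        (cond ((> count 1) (setf (gethash cell *multiref-table*) count))
              ((= count 1) (remhash cell *multiref-table*))  ; back to the default
              (t (remhash cell *multiref-table*)
                 (funcall reclaim-fn cell)))))
    (setf *transactions* '()))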
Teitelman, Warren

Clisp is an attempt to make Lisp programs easier to read and write by extending the syntax of Lisp to include infix operators, IF-THEN statements, FOR-DO-WHILE statements, and similar Algol-like constructs, without changing the structure or representation of the language. Clisp is implemented through Lisp's error handling machinery, rather than by modifying the interpreter. When an expression is encountered whose evaluation causes an error, the expression is scanned for possible Clisp constructs, which are then converted to the equivalent Lisp expressions. Thus, users can freely intermix Lisp and Clisp without having to distinguish which is which. Emphasis in the design and development of Clisp has been on the system aspects of such a facility, with the goal of producing a useful tool, not just another language. To this end, Clisp includes interactive error correction and many "do-what-I-mean" features.
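The flavor of the syntax, reconstructed from this description (the exact surface forms live in the Interlisp manual; the renderings below are illustrative assumptions, using Interlisp's list-first MAPC argument order):

  ;; A CLISP iterative statement and a plausible plain-Lisp equivalent:
  ;;   CLISP:  (for X in L do (PRINT X))
  ;;   LISP:   (MAPC L (FUNCTION PRINT))
  ;; An IF-THEN statement and its COND equivalent:
  ;;   CLISP:  (if N GT 0 then N else (MINUS N))
  ;;   LISP:   (COND ((GREATERP N 0) N) (T (MINUS N)))
  ;; Either CLISP form, being unevaluable as ordinary Lisp, raises an
  ;; error; the error machinery scans it and substitutes the translation.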
Shortliffe, Edward
Masinter, Larry
Bobrow, Robert; Grignetti, Mario

This report describes measurements performed for the purpose of determining areas of potential improvement to the efficiency of INTERLISP running under TENEX.
Clark, Douglas W.

This thesis is about list structures: how they are used in practice, how they can be moved and copied efficiently, and how they can be represented by space-saving encodings. The approach taken to these subjects is mainly empirical.
Measurement results are based on five large programs written in Interlisp, a sophisticated Lisp system that runs on the PDP-10.
Teitelman, W.

The contract covered by this annual report includes a variety of activities and services centering around the continued growth and well-being of INTERLISP, a large, interactive system widely used in the ARPA community for developing advanced and sophisticated computer-based systems.
Qlisp: a language for the interactive development of complex systems
Sacerdoti, Earl D.; Fikes, Richard E.; Reboh, Rene; Sagalowicz, Daniel; Waldinger, Richard J.; Wilber, B. Michael

This paper presents a functional overview of the features and capabilities of QLISP, one of the newest of the current generation of very high level languages developed for use in artificial intelligence (AI) research. QLISP is both a programming language and an interactive programming environment. It embeds an extended version of QA4, an earlier AI language, in INTERLISP, a widely available version of LISP with a variety of sophisticated programming aids.
Moore, J. Strother

INTERLISP is an interactive LISP system. It consists of a large and sophisticated collection of user support facilities (such as DWIM and the Programmer's Assistant [Tei]) built on top of a fairly conventional LISP language. We call this underlying conventional language "Virtual Machine" (or simply VM) LISP. The user support facilities are written entirely in VM LISP, and are in the public domain. Thus, if VM LISP is implemented on some machine, the rest of INTERLISP can be obtained from publicly available files. Although the INTERLISP System is extensively documented at the user level in the INTERLISP Reference Manual [2], it is not possible to implement the system from that documentation. The purpose of this document is to specify VM LISP as fully as possible from the implementor's point of view. Consequently, this document emphasises clarity and conciseness over intuitive appeal. It is expected that a prospective implementor will have access to the INTERLISP Reference Manual for explanations of the justification or implications of certain specifications. Furthermore, since its purpose is mainly a practical one (i.e., to tell an implementor what must be done), the document is not altogether formal. Because INTERLISP evolved under the rather sophisticated BBN TENEX time sharing system, it assumes the presence of capabilities (such as user-defined interrupt characters) which may not be found in the implementor's environment. If an implementor is forced by such circumstances to forego the implementation of certain INTERLISP features, the user-support facilities may not perform as described in the Reference Manual. The implementor assumes responsibility for the documentation of such deficiencies. A great deal of care has been taken in the preparation of this document to determine the assumptions made in the high-level facilities about features in the underlying VM. Because of the size and complexity of the system we cannot guarantee that we have identified them all, and therefore do not assure the prospective implementor that the rest of INTERLISP will run perfectly upon loading it into the just-implemented VM. However, this document goes a long way toward that admirable (and probably impossible) goal.
1977
Teitelman, Warren

This paper continues and extends previous work by the author in developing systems which provide the user with various forms of explicit and implicit assistance, and in general co-operate with the user in the development of his programs. The system described in this paper makes extensive use of a bit map display and pointing device (a mouse) to significantly enrich the user's interactions with the system, and to provide capabilities not possible with terminals that essentially emulate hard copy devices. For example, any text that is displayed on the screen can be pointed at and treated as input, exactly as though it were typed, i.e. the user can say use this expression or that value, and then simply point. The user views his programming environment through a collection of display windows, each of which corresponds to a different task or context. The user can manipulate the windows, or the contents of a particular window, by a combination of keyboard inputs or pointing operations. The technique of using different windows for different tasks makes it easy for the user to manage several simultaneous tasks and contexts, e.g. defining programs, testing programs, editing, asking the system for assistance, sending and receiving messages, etc., and to switch back and forth between these tasks at his convenience.
Teitelman, Warren

This paper continues and extends previous work by the author in developing systems which provide the user with various forms of explicit and implicit assistance, and in general cooperate with the user in the development of his programs. The system described in this paper makes extensive use of a bit map display and pointing device (a mouse) to significantly enrich the user's interactions with the system, and to provide capabilities not possible with terminals that essentially emulate hard copy devices. For example, any text that is displayed on the screen can be pointed at and treated as input, exactly as though it were typed, i.e., the user can say use this expression or that value, and then simply point. The user views his programming environment through a collection of display windows, each of which corresponds to a different task or context. The user can manipulate the windows, or the contents of a particular window, by a combination of keyboard inputs or pointing operations. The technique of using different windows for different tasks makes it easy for the user to manage several simultaneous tasks and contexts, e.g., defining programs, testing programs, editing, asking the system for assistance, sending and receiving messages, etc., and to switch back and forth between these tasks at his convenience.
Sproull, Robert F.

This report describes briefly a set of display primitives that we have developed at PARC to extend the capabilities of Interlisp [1]. These primitives are designed to operate a raster-scanned display, and concentrate on facilities for placing text carefully on the display and for moving chunks of an already-created display.
Pratt, V. R.

This position paper is intended to supply the committee with information about LISP that can come only from someone who has used LISP extensively yet who has also had a comparable exposure to other languages competitive with LISP. In my own case I used the implementation of ALGOL due to Randell and Russell (9) from 1964 to 1969 at the Basser Computing Department of the University of Sydney, and also taught ALGOL for approximately fifty contact hours in several departmental "crash courses". My LISP experience extends from 1960 to now. It is hoped that the deeper understanding of LISP that this paper attempts to supply will be of value to the committee in determining the optimal number of languages to be given full support by the department.
Burton, Richard R.

One of the major stumbling blocks to more effective use of computers by naive users is the lack of natural means of communication between the user and the computer system. This report discusses a paradigm for constructing efficient and friendly man-machine interface systems involving subsets of natural language for limited domains of discourse. As such, this work falls somewhere between highly constrained formal language query systems and unrestricted natural language understanding systems. The primary purpose of this research is not to advance our theoretical understanding of natural language but rather to put forth a set of techniques for embedding both semantic/conceptual and pragmatic information into a useful natural language interface module. Our intent has been to produce a front end system which enables the user to concentrate on his problem or task rather than making him worry about how to communicate his ideas or questions to the machine.
1978
Teitelman, Warren
Wertz, Harald

This paper presents a system (PHENARETE) which understands and improves incompletely defined LISP programs, such as those written by students beginning to program in LISP. This system takes as input the program without any additional information. In order to understand the program, the system meta-evaluates it, using a library of "pragmatic rules" describing the construction and correction of general program constructs, and a set of "specialists" describing the syntax and semantics of the standard LISP functions. The system can use its understanding of the program to detect errors in it, to debug them and, eventually, to justify its proposed modifications. This paper gives a brief survey of the working of the system, with emphasis on some commented examples.
Deutsch, L. Peter

This paper presents the design of an Interlisp system running on a microprogrammed minicomputer. We discuss the constraints imposed by compatibility requirements and by the hardware, the important design decisions, and the most prominent successes and failures of our design, and offer some suggestions for future designers of small Lisp systems. This extended abstract contains only qualitative results. Supporting measurement data will be presented at MICRO-11.
Deutsch, L. Peter

Interlisp and Standard Lisp are the only Lisp dialects for which anything like a comprehensive functional specification exists. The Interlisp Virtual Machine (VM) document speaks for itself: only the highlights appear just below.
The non-technical qualities of Interlisp are unique and deserve mention. Interlisp has an exhaustive and well-organized reference manual, which is available on-line to answer questions about particular functions or (to a lesser extent) topics. It is only through amazing amounts of labor expended on this manual that the proliferation of features in Interlisp has remained usable.
Greussay, Patrick

The design of a LISP interpreter that allows tail-recursive procedures to be interpreted iteratively is presented at the machine-language level. Iterative interpretation means that, without any program transformations, no environments and continuations will be stacked unless necessary. We apply a specific modification within a traditional stack-oriented LISP interpreter, without any non-recursive control structure. The design is compatible with value-cell as well as a-list LISP processors. We present a complete modified interpreter, written itself in LISP, and an informal proof that it meets its requirements.
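The core trick can be shown in a few lines: an expression in tail position replaces the current expression and the interpreter loops, instead of calling itself. A toy evaluator in Common Lisp notation (an illustration of the technique, not Greussay's code; the environment is a simple a-list):

  (defun tiny-eval (expr env)
    ;; ENV is an a-list of (symbol . value) pairs.
    (loop
      (cond ((symbolp expr) (return (cdr (assoc expr env))))
            ((atom expr) (return expr))
            ((eq (first expr) 'quote) (return (second expr)))
            ((eq (first expr) 'if)
             ;; Tail position: rebind EXPR and iterate; nothing is stacked.
             (setf expr (if (tiny-eval (second expr) env)
                            (third expr)
                            (fourth expr))))
            (t (return (apply (first expr)
                              (mapcar (lambda (a) (tiny-eval a env))
                                      (rest expr))))))))

  ;; (tiny-eval '(if (< x 0) (- x) x) '((x . -3)))   ; => 3
  ;; Arbitrarily deep chains of tail IFs run in constant stack space.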
Brachman, Ronald; Ciccarelli, Eugene; Greenfeld, Norton; Yonke, Martin

KLONE is a language designed for representing conceptual knowledge. This manual is intended to serve two kinds of readers: the reader who is new to the KLONE implementation, and the reader who is familiar with KLONE but needs particulars of a function on occasion. Its organization accommodates three basic kinds of lookup: getting familiar with what functions are available; deciding which function to use for some task (i.e. to see at a glance which functions might apply); and finding details of a particular function. While the manual concentrates on the third type of lookup, section 3 lists KLONE functions grouped logically; together with the introductory sections, this should facilitate the first two kinds of lookup.
Sandewall, Erik

Lisp systems have been used for highly interactive programming for more than a decade. During that time, special properties of the Lisp language (such as program/data equivalence) have enabled a certain style of interactive programming to develop, characterized by powerful interactive support for the programmer, nonstandard program structures, and nonstandard program development methods. The paper summarizes the Lisp style of interactive programming for readers outside the Lisp community, describes those properties of Lisp systems that were essential for the development of this style, and discusses some current and not yet resolved issues.
Chailloux, Jerome

VCMC1 is a virtual machine designed to observe "in vitro" the behaviour of VLISP interpreters. VCMC1 is entirely simulated in VLISP 10. We present a short description of the VCMC1 machine followed by the complete listing of the code of a VLISP interpreter. This interpreter incorporates the special feature for tail-recursive function calls.
Fiala, E. R.

The process of developing a computer system is not only inherently interesting; it also leads to significant organization concepts that the builders are often impelled to share with others. So it was in our development of the Maxc1 and Maxc2 time-sharing systems at the Xerox Palo Alto Research Center between 1971 and 1977. From this development came some ideas of system organization that are now seen to have contributed to the success of the effort:
• The high availability achieved is attributable to the simple microprogrammable organization of the machines.
• Microprogramming organization promotes simplicity by placing much of the complexity in firmware.
• This organization of a computer provides the environment for multiple instruction sets.
• Causes of failure in integrated circuitry were evenly distributed, but memory error correction was found to be important to overall reliability.
• Tools for software and firmware development and design automation are necessary for efficient development.
Chailloux, Jerome

A summary of G. Görz, "Die Verwendung von LISP an wissenschaftlichen Rechenzentren in der BRD" ("The Use of LISP at Scientific Computing Centers in the FRG"), IAB Nr. 63, Universität Erlangen-Nürnberg, Rechenzentrum, Dec. 1976.
1979
Teitelman, Warren

This paper continues and extends previous work by the author in developing systems which provide the user with various forms of explicit and implicit assistance, and in general co-operate with the user in the development of his programs. The system described in this paper makes extensive use of a bit map display and pointing device (a mouse) to significantly enrich the user's interactions with the system, and to provide capabilities not possible with terminals that essentially emulate hard copy devices. For example, any text that is displayed on the screen can be pointed at and treated as input, exactly as though it were typed, i.e. the user can say use this expression or that value, and then simply point. The user views his programming environment through a collection of display windows, each of which corresponds to a different task or context. The user can manipulate the windows, or the contents of a particular window, by a combination of keyboard inputs or pointing operations. The technique of using different windows for different tasks makes it easy for the user to manage several simultaneous tasks and contexts, e.g. defining programs, testing programs, editing, asking the system for assistance, sending and receiving messages, etc., and to switch back and forth between these tasks at his convenience.
Westfold, Stephen
Stefik, Mark

The Unit Package is an interactive knowledge representation system with representations for individuals, classes, indefinite individuals, and abstractions. Links between the nodes are structured with explicit definitional roles, types of inheritance, defaults, and various data formats. This paper presents the general ideas of the Unit Package and compares it with other current knowledge representation languages. The Unit Package was created for a hierarchical planning application, and is now in use by several AI projects.
McCarthy, J.
Bobrow, Daniel G.; Clark, Douglas W.

List structures provide a general mechanism for representing easily changed structured data, but can introduce inefficiencies in the use of space when fields of uniform size are used to contain pointers to data and to link the structure. Empirically determined regularity can be exploited to provide more space-efficient encodings without losing the flexibility inherent in list structures. The basic scheme is to provide compact pointer fields big enough to accommodate most values that occur in them and to provide “escape” mechanisms for exceptional cases. Several examples of encoding designs are presented and evaluated, including two designs currently used in Lisp machines. Alternative escape mechanisms are described, and various questions of cost and implementation are discussed. In order to extrapolate our results to larger systems than those measured, we propose a model for the generation of list pointers and we test the model against data from two programs. We show that according to our model, list structures with compact cdr fields will, as address space grows, continue to be compacted well with a fixed-width small field. Our conclusion is that with a microcodable processor, about a factor of two gain in space efficiency for list structure can be had for little or no cost in processing time.
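The heart of the scheme, a tiny cdr code plus an escape, can be sketched as follows in Common Lisp notation (illustrative only; real cdr-coding operates on raw memory words, and the full-pointer escape for shared or circular tails is omitted here):

  (defconstant +cdr-nil+  0)   ; the tail is NIL
  (defconstant +cdr-next+ 1)   ; the tail is the adjacent cell
  ;; a third, "escape" code would mark a full pointer stored out of line

  (defun compact-encode (list)
    ;; Encode an unshared, acyclic LIST as a vector of (code . car) pairs.
    (coerce (loop for (head . tail) on list
                  collect (cons (if tail +cdr-next+ +cdr-nil+) head))
            'vector))

  (defun compact-decode (vec)
    ;; Rebuild an ordinary list from the compact encoding.
    (loop for entry across vec collect (cdr entry)))

  ;; (compact-encode '(a b c))
  ;;   => #((1 . A) (1 . B) (0 . C))
  ;; (compact-decode (compact-encode '(a b c)))   ; => (A B C)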
Teitelman, Warren; Kaplan, Ronald M.
Bobrow, Daniel G.; Deutsch, L. Peter
Stoyan, Herbert

For the SIGPLAN conference on the history of programming languages held in Los Angeles this June, J. McCarthy had to write a paper about LISP history (1). He was well prepared to do this, because he had given a talk on LISP history in the summer of 1974 at M.I.T. (2) and has since contributed a lot of remarks and comments to my work on compiling a complete history of our language. His paper corresponds to the state of our knowledge in May of this year (1978), before D. Park found the original LISP 1 manual (3).
Sproull, Robert F.

Raster-scan display terminals can significantly improve the quality of interaction with conventional computer systems. The design of a graphics package to provide a "window" into the extensive programming environment of Interlisp is presented. Two aspects of the package are described: first, the functional view of display output and interactive input facilities as seen by the programmer, and second, the methods used to link the display terminal to the main computer via a packet-switched computer network. Recommendations are presented for designing operating systems and programming languages so as to simplify attaching display terminals. An appendix contains detailed documentation of the graphics package.
Sproull, Robert F.

Raster-scan display terminals can significantly improve the quality of interaction with conventional computer systems. The design of a graphics package to provide a “window” into the extensive programming environment of interlisp is presented. Two aspects of the package are described: first, the functional view of display output and interactive input facilities as seen by the programmer, and second, the methods used to link the display terminal to the main computer via a packet-switched computer network. Recommendations are presented for designing operating systems and programming languages so as to simplify attaching display terminals.
Cohen, Shimon

This paper describes the A-TABLE data type for LISP-based languages. The A-TABLE is introduced in an attempt to unify different structures such as the PASCAL record, SNOBOL table, and INTERLISP Funarg-block. A set of functions is defined to apply A-TABLES to: (1) creating, accessing and updating records; (2) managing associatively indexed tables; (3) providing context-dependent computations in processes and coroutines; (4) defining multivalued functions. We show how and why these functions can be efficiently implemented with respect to access, space, garbage collection and page faults. We compare the A-TABLE with other facilities - LIST, ARRAY, etc. It is suggested that the A-TABLE should be one of the data types in LISP-based systems, where it can fill the gap between the types "LIST" and "ARRAY".
Moore, J. Strother

The Interlisp Virtual Machine is the environment in which the Interlisp System is implemented. It includes such abstract objects as "Literal Atoms", "List Cells", "Integers", etc., the basic LISP functions for manipulating them, the underlying program control and variable binding mechanisms, the input/output facilities, and interrupt processing facilities. In order to Implement the Interlisp System (as described in "The Interlisp Reference Manual" by W. Teitelman, et. al.) on some physical machine, it is only necessary to implement the Interlisp Virtual Machine, since Virtual Machine compatible source code for the rest of the Interlisp System can be obtained from publicly available files. This document specifies the behavior of the Interlisp Virtual Machine from the implementor's point of view. That is, it is an attempt to make explicit those things which must be implemented to allow the Interlisp System to run on some machine.
Chailloux, Jérôme

This study presents the realization of three VLISP systems (VLISP is a dialect of LISP) developed at the University of Paris 8 - Vincennes, on the following machines: an 8-bit microprocessor (Intel 8080/Zilog Z80), the 16-bit PDP-11, and the 36-bit PDP-10. From these realizations an implementation model is extracted. Our study proposes a solution to the problems of constructing and evaluating such a system. These problems are: 1) The exhaustive description of the implementation. We propose a description based on the virtual, referential and prototype machine VCMC2. 2) The adequate representation of VLISP objects and functions. We have associated some natural properties with them and established a functional typology. 3) The efficiency of the interpreter (in words of core, execution time and power). Our interpreter performs, for its own needs, an optimal core allocation (in terms of calls to the CONS module). Direct access (needing only one memory access) to the values of variables and functions, together with a type classification of functions, allows direct invocation of all typed functions. 4) The power of the control structures. Our implementation's KIT generalizes the VLISP control structures SELF and ESCAPE, extends them with the new constructions EXIT, WHERE and LETF, and completely unifies their description and implementation. An incarnation of our model is given by the realization of a complete VLISP system on the referential machine VCMC2. The full code is given in the appendix.
1980
Rowan, William

A compiler has been written which compiles MACLISP into a compact intermediate language called 1-code, and an 1-code interpreter has been incorporated into an existing LISP system. The 1-code “machine” has a simple stack architecture and an instruction set specifically designed for LISP. Compiled programs consist of a string of eight-bit bytes of 1-code, and a local table of quantities used by the compiled code. The system has been used to compile most of the MACSYMA system, and algebraic expressions have been successfully evaluated. The system is about three times faster than interpreted LISP, and about 8 times more compact, if the 1-code and local table are compared in size to the dotted pairs needed for the uncompiled version. The possibility of enhancing the system to a true machine-independent LISP compiler is considered. In particular, the problem of varying instruction quantization size is examined, and a method is given for running 1-code programs on machines with different byte sizes.
Sandewall, Erik; Sörensen, Henrik; Strömberg, Claes

The SCREEN system is an experimental tool for development and maintenance of application software. It is organized as a System of Communicating REsidential ENvironments, where each environment may be e.g. a programming environment or an end-user environment. Environments are able to send and receive modules which contain programs and data, and the maintenance of an application (throughout the software life-cycle) is performed by communicating such modules. Each programming environment may send several modules to each of several end-user environments, to account for specialized user needs, as well as updates of those modules. End-user environments consist of a fixed framework, into which contributed modules can plug in. The framework is designed specifically for each class of applications, and can be viewed as a special-purpose operating system for that class of applications. The organizational principles used in the SCREEN system are an extension of the residential programming systems that have been developed for Lisp, but they have an increased emphasis on software engineering issues. They represent an alternative to the classical principles for the production and use of software, which are based on the use of compilers, linkage editors, and general-purpose operating systems.
Adding Type Declarations to Interlisp.
Kaplan, Ronald M.; Sheil, B. A.

The Interlisp programming system provides facilities which allow a programmer to supply type declarations for program variables, procedures and expressions. In some respects, these facilities resemble the type systems found in many contemporary high-level languages. However, the nature of Lisp and Lisp programming requires them to be significantly different from conventional type systems. Their most striking feature is a very powerful type description language which supports run-time, as well as static, type validation. In this paper, the design of the Interlisp type declaration facilities is described, illustrating how the characteristics of Lisp affected the design. Some suggestions are offered as to which aspects of these would prove useful in other contexts.
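The run-time side of such a type system has a rough analogue in standard Common Lisp, sketched below with invented names; this is not Interlisp's declaration syntax, only an illustration of a declarative type description validated when the code runs:

    ;; Run-time validation against a declarative type description.
    ;; Plain Common Lisp; the type and function are invented examples.
    (deftype probability () '(real 0 1))

    (defun mix (p x y)
      "Blend X and Y by the weight P, checking P on every call."
      (check-type p probability)   ; run-time, not merely static, validation
      (+ (* p x) (* (- 1 p) y)))

    ;; (mix 0.25 100 0)  =>  25.0
    ;; (mix 2 100 0)     signals an error: 2 is not of type PROBABILITY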
Deutsch, L. Peter

This paper describes in detail the most interesting aspects of ByteLisp, a transportable Lisp system architecture which implements the Interlisp dialect of Lisp, and its first implementation, on a microprogrammed minicomputer called the Alto. Two forthcoming related papers will deal with general questions of Lisp machine and system architecture, and detailed measurements of the Alto ByteLisp system described here. A highly condensed summary of the series was published at MICRO-11 in November 1978.
Goldstein, Ira P.; Bobrow, Daniel G.

Smalltalk is an object oriented programming language with behavior invoked by passing messages between objects. Objects with similar behavior are grouped into classes. These classes form a hierarchy. When an object receives a message, the class or one of its superclasses provides the corresponding method to be executed. We have built an experimental Personal Information Environment (PIE) in Smalltalk that extends this paradigm in several ways. A PIE object, called a node, can have multiple perspectives, each of which provides independent specialized behaviors for the object as a whole, thus providing multiple inheritance for nodes. Nodes have metadescription to guide viewing of the objects during browsing, provide default values, constrain the values of attributes, and define procedures to be run when values are sought or set. All nodes have unique names which allow objects to migrate between users and machines. Finally attribute lookup for nodes is context sensitive, thereby allowing alternative descriptions to be created and manipulated. This paper first reviews Smalltalk, then discusses our implementation of each of the above capabilities within PIE, a Smalltalk system for representing and manipulating designs. We then describe our experience with PIE applied to software development and technical writing. Our conclusion is that the resulting hybrid is a viable offspring for exploring design problems.
Allchin, James E.; Keller, Arthur M.; Wiederhold, Gio

A file access system, FLASH, for use in building database systems is described. It supports access from several languages, including Pascal, FORTRAN, and Interlisp. FLASH provides record-level access to a file with multiple indexes using symbolic keys. It is portable and written in Pascal, with support routines in DECSYSTEM-20 MACRO. The file access system is designed to run on computers of various sizes and capabilities, including micros. Concurrent and simultaneous access by several users is supported, given that the operating system provides multiprogramming. FLASH is designed to be highly reliable. It assumes the existence of underlying operating system file services that read or write named files directly. Transfer to files occurs in units which are efficient, typically a block.
Masinter, Larry Melvin

This dissertation describes a programming tool, implemented in Lisp, called SCOPE. The basic idea behind SCOPE can be stated simply: SCOPE analyzes a user's programs, remembers what it sees, is able to answer questions based on the facts it remembers, and is able to incrementally update the data base when a piece of the program changes. A variety of program information is available about cross references, data flow and program organization. Facts about programs are stored in a data base: to answer a question, SCOPE retrieves and makes inferences based on information in the data base. SCOPE is interactive because it keeps track of which parts of the programs have changed during the course of an editing and debugging session, and is able to automatically and incrementally update its data base. Because SCOPE performs whatever re-analysis is necessary to answer a question at the time the question is asked, SCOPE maintains the illusion that the data base is always up to date—other than the additional wait time, it is as if SCOPE knew the answer all along.
SCOPE's foundation is a representation system in which properties of pieces of programs can be expressed. The objects of SCOPE's language are pieces of programs and, in particular, definitions of symbols—e.g., the definition of a procedure or a data structure. SCOPE does not model properties of individual statements or expressions in the program; SCOPE knows only individual facts about procedures, variables, data structures, and other pieces of a program which can be assigned as the definitions of symbols. The facts are relations between the name of a definition and other symbols. For example, one of the relations that SCOPE keeps track of is Call: Call[FN₁,FN₂] holds if the definition whose name is FN₁ contains a call to a procedure named FN₂.
SCOPE has two interfaces: one to the user and one to other programs. The user interface is an English-like command language which allows for a uniform command structure and convenient defaults: the most frequently used commands are the easiest to type. All of the power available with the command language is accessible through the program interface as well. The compiler and various other utilities use the program interface.
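The Call relation is easy to picture in miniature. The toy below (standard Common Lisp; names and representation invented, and without SCOPE's incremental re-analysis machinery) records Call facts per definition and answers "who calls FN?" queries:

    ;; A toy cross-reference data base in the spirit of SCOPE's Call relation.
    (defvar *calls* (make-hash-table :test #'eq)
      "Maps a definition's name to the names its body appears to call.")

    (defun note-definition (name body)
      "(Re)analyze one definition, replacing any facts recorded earlier."
      (let ((callees '()))
        (labels ((walk (form)
                   (when (consp form)
                     (pushnew (car form) callees)   ; naive: every head is a call
                     (mapc #'walk (cdr form)))))
          (walk body))
        (setf (gethash name *calls*) callees)))

    (defun callers-of (fn)
      "Answer the query Call[?, FN]: which definitions call FN?"
      (loop for name being the hash-keys of *calls* using (hash-value callees)
            when (member fn callees) collect name))

    ;; (note-definition 'double '(plus x x))
    ;; (note-definition 'quad   '(double (double x)))
    ;; (callers-of 'double)  =>  (QUAD)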
McCarthy, John

LISP has survived for 21 years because it is an approximate local optimum in the space of programming languages. However, it has accumulated some barnacles that should be scraped off, and some long-standing opportunities for improvement have been neglected. It would benefit from some co-operative maintenance especially in creating and maintaining program libraries. Computer checked proofs of program correctness are now possible for pure LISP and some extensions, but more theory and some smoothing of the language itself are required before we can take full advantage of LISP's mathematical basis.
Masinter, Larry M.; Deutsch, L. Peter

We describe the local optimization phase of a compiler for translating the INTERLISP dialect of LISP into stack-architecture (0-address) instruction sets. We discuss the general organization of the compiler, and then describe the set of optimization techniques found most useful, based on empirical results gathered by compiling a large set of programs. The compiler and optimization phase are machine independent, in that they generate a stream of instructions for an abstract stack machine, which an assembler subsequently turns into the actual machine instructions. The compiler has been in successful use for several years, producing code for two different instruction sets.
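Local optimization of a 0-address instruction stream reduces largely to rewrite rules over adjacent instructions. A hedged sketch in standard Common Lisp; the instruction names and the single rule are invented, and the paper's empirically chosen rule set is far larger:

    ;; A peephole pass over a 0-address instruction stream.
    (defun peephole-pass (instrs)
      "One left-to-right pass: delete a PUSH immediately followed by POP."
      (cond ((null instrs) '())
            ((and (consp (first instrs))
                  (eq (car (first instrs)) 'push)
                  (eq (second instrs) 'pop))
             (peephole-pass (cddr instrs)))      ; the pair has no net effect
            (t (cons (first instrs) (peephole-pass (rest instrs))))))

    (defun peephole (instrs)
      "Iterate passes to a fixed point, since one rewrite can expose another."
      (let ((next (peephole-pass instrs)))
        (if (equal next instrs) instrs (peephole next))))

    ;; (peephole '((push x) (push y) pop pop car))  =>  (CAR)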
Model, Mitchell L.

Multiprocess environments comprising several intercommunicating LISP systems are straightforward to implement due to certain fundamental characteristics of the LISP language. Experiences with four methods of establishing the necessary communications linkages are described. The features of LISP which support experimentation with interprocess communication are identified. Two key characteristics of the language are important in this regard: 1. LISP programs can construct and interpret new code as they run; 2. Structures within LISP systems are accessible by name. The most flexible of the communication methods use only the ordinary LISP input and output functions, supplemented by a small amount of system-dependent code to create communication linkages that can be treated by LISP as file structures.
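The "ordinary input and output functions" approach rests on the two properties just listed: printed Lisp data can be re-read, and re-read code can be run. A minimal standard Common Lisp sketch, with an in-memory string stream standing in for the system-dependent communication link:

    ;; S-expression messaging over an ordinary character stream.
    (defun send-message (stream form)
      "Transmit FORM so the peer can reconstruct it with READ."
      (prin1 form stream)
      (terpri stream))

    (defun receive-message (stream)
      "Read one s-expression; the receiver may EVAL it, since Lisp
    programs can construct and interpret new code as they run."
      (read stream))

    ;; Demo: writer and reader sharing one in-memory "link".
    ;; (let ((link (make-string-input-stream
    ;;               (with-output-to-string (out)
    ;;                 (send-message out '(plus 1 2))))))
    ;;   (receive-message link))   =>  (PLUS 1 2)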
Emanuelson, Pär; Haraldsson, Anders

In INTERLISP we find a number of embedded languages, such as the iterative statement and the pattern match facility in the CLISP package, the editor and makefile languages, and so forth. In this paper we concentrate on the problem of extending the LISP language and discuss a method for compiling such extensions. We propose that the language be implemented through an interpreter (written in LISP) and that compilation of statements in such an embedded language be done through partial evaluation. The interpreter is partially evaluated with respect to the actual statements, and an object program in LISP is obtained. This LISP code can be further compiled to machine code by the standard LISP compiler. We have implemented the iterative statement and a CLISP-like pattern matcher and used a program manipulation system to generate object programs in LISP. Comparisons are made with the corresponding INTERLISP implementations, which use special-purpose compilers to generate the LISP code.
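The compile-by-partial-evaluation idea can be made concrete in a few lines. Below, a tiny invented pattern language gets an interpreter, and a "compiler" that simply unfolds that interpreter over a pattern known in advance, leaving residual LISP code; the program manipulation system the authors used is of course far more general. Standard Common Lisp:

    ;; A pattern is a list of literal symbols and variables (?-prefixed).
    (defun pat-var-p (s)
      (and (symbolp s) (char= (char (symbol-name s) 0) #\?)))

    ;; The interpreter: match PAT against INPUT, giving bindings or :FAIL.
    (defun match (pat input)
      (cond ((and (null pat) (null input)) '())
            ((or (null pat) (null input)) :fail)
            ((pat-var-p (car pat))
             (let ((rest (match (cdr pat) (cdr input))))
               (if (eq rest :fail)
                   :fail
                   (cons (cons (car pat) (car input)) rest))))
            ((eql (car pat) (car input)) (match (cdr pat) (cdr input)))
            (t :fail)))

    ;; "Compilation": unfold MATCH over a pattern fixed in advance, so the
    ;; residual code tests only what must be tested at run time.
    (defun compile-match (pat input-form)
      (if (null pat)
          `(if (null ,input-form) '() :fail)
          (let ((body (compile-match (cdr pat) `(cdr ,input-form))))
            (if (pat-var-p (car pat))
                `(if (null ,input-form)
                     :fail
                     (let ((rest ,body))
                       (if (eq rest :fail)
                           :fail
                           (cons (cons ',(car pat) (car ,input-form)) rest))))
                `(if (and (consp ,input-form)
                          (eql (car ,input-form) ',(car pat)))
                     ,body
                     :fail)))))

    ;; (compile-match '(move ?x to ?y) 'input) yields plain LISP in which
    ;; the interpreter's dispatch on the pattern has been evaluated away.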
Burton, Richard R.; Masinter, L. M.; Bobrow, Daniel G.; Haugeland, Willie Sue; Kaplan, Ronald M.; Sheil, B. A.

DoradoLisp is an implementation of the Interlisp programming system on a large personal computer. It has evolved from AltoLisp, an implementation on a less powerful machine. The major goal of the Dorado implementation was to eliminate the performance deficiencies of the previous system. This paper describes the current status of the system and discusses some of the issues that arose during its implementation. Among the techniques that helped us meet our performance goal were transferring much of the kernel software into Lisp, intensive use of performance measurement tools to determine the areas of worst performance, and use of the Interlisp programming environment to allow rapid and widespread improvements to the system code. The paper lists some areas in which performance was critical and offers some observations on how our experience might be useful to other implementations of Interlisp.
Burton, Richard R.; Kaplan, Ronald M.; Masinter, B.; Sheil, B. A.; Bell, A.; Bobrow, D. G.; Deutsch, L. P.; Haugeland, W. S.

This report consists of five papers on Interlisp-D, a refinement and implementation of the Interlisp virtual machine [Moore, 76] which supports the Interlisp programming system [Teitelman et al., 78] on the Dolphin and Dorado personal computers.
Brachman, Ronald J.; Smith, Brian C.

In the fall of 1978 we decided to produce a special issue of the SIGART Newsletter devoted to a survey of current knowledge representation research. We felt that there were two useful functions such an issue could serve. First, we hoped to elicit a clear picture of how people working in this subdiscipline understand knowledge representation research, to illuminate the issues on which current research is focused, and to catalogue what approaches and techniques are currently being developed. Second -- and this is why we envisaged the issue as a survey of many different groups and projects -- we wanted to provide a document that would enable the reader to acquire at least an approximate sense of how each of the many different research endeavours around the world fit into the field as a whole. It would of course be impossible to produce a final or definitive document accomplishing these goals; rather, we hoped that this survey could initiate a continuing dialogue on issues in representation, a project for which this newsletter seems the ideal forum. It has been many months since our original decision was made, but we are finally able to present the results of that survey. Perhaps more than anything else, it has emerged as a testament to an astounding range and variety of opinions held by many different people in many different places. The following few pages are intended as an introduction to the survey as a whole, and to this issue of the newsletter. We will briefly summarize the form that the survey took, discuss the strategies we followed in analyzing and tabulating responses, briefly review the overall sense we received from the answers that were submitted, and discuss various criticisms which were submitted along with the responses. The remainder of the volume has been designed to be roughly self-explanatory at each point, so that one may dip into it at different places at will. Certain conventions, however, particularly regarding indexing and tabulating, will also be explained in the remainder of this introduction. As editors, we are enormously grateful to the many people who devoted substantial effort to responding to our survey. It is our hope that the material presented here will be interesting and helpful to our readers, and that fruitful discussion of these and other issues will continue energetically and enthusiastically into the future.
Koomen, Johannes A. G. M.

Abstract machine definitions have been recognized as convenient and powerful tools for enhancing software portability. One such machine, the Interlisp Virtual Machine, is examined in this thesis. We present the Multilisp System as an implementation of the Virtual Machine and discuss some of the design criteria and difficulties encountered in mapping the Virtual Machine onto a particular environment. On the basis of our experience with Multilisp we indicate several weaknesses of the Virtual Machine which impair its adequacy as a basis for a portable Interlisp System.
1981
Alberga, C. N.; Brown, A. L.; Leeman, G. B.; Mikelsons, M.; Wegman, M. N.

In this paper we describe how we have combined a number of tools (most of which understand a particular programming language) into a single system to aid in the reading, writing, and running of programs. We discuss the efficacy and the structure of our system. For the last two years the system has been used to build itself; it currently consists of 500 kilobytes of machine code (25,000 lines of LISP/370 code) and approximately one hundred commands with large numbers of options. We will describe some of the experience we have gained in evolving this system. We first indicate the system components which users have found most important; some of the tools described here are new in the literature. Second, we emphasize how these tools form a synergistic union, and we illustrate this point with a number of examples. Third, we illustrate the use of various system commands in the development of a simple program. Fourth, we discuss the implementation of the system components and indicate how some of them have been generalized.
Sheil, Beau; Masinter, Larry M.

An hour-long forum lecture and demonstration delivered by Beau Sheil and Larry Masinter about Interlisp-D.
Sheil, Beau; de Kleer, Johan

Interlisp-D and MIT CADR Lisp Machine demos for Vancouver IJCAI Conference - Tape #1
Sheil, Beau

The Interlisp-D project was formed to develop a personal machine implementation of Interlisp for use as an environment for research in artificial intelligence and cognitive science [Burton et al., 80b]. This note describes the principal developments since our last report almost a year ago [Burton et al., 80a].
Masinter, Larry M.

Since November 1979, a group at the Information Sciences Institute of the University of Southern California has been working on an implementation of Interlisp for the DEC VAX-series computers. This report is a description of the current status, future prospects, and estimated character of that Interlisp-VAX implementation. It is the result of several days of discussion with those at ISI involved with the implementation (Dave Dyer, Hans Koomen, Ray Bates, Dan Lynch); with John L. White of MIT, who is working on an implementation of another Lisp for the VAX (NIL); with the implementors of Interlisp-Jericho at BBN (Alice Hartley, Norton Greenfeld, Martin Yonke, John Vittal, Frank Zdybel, Jeff Gibbons, Daryle Lewis); with the implementors of Franz Lisp and Berkeley Unix at U.C. Berkeley (Richard Fateman, Bill Joy, Keith Sklower, John Foderaro); and with my colleagues at Xerox PARC.
An earlier draft of this report was circulated to the parties involved in the Interlisp-VAX discussions. This document has been revised as a result of comments received.
Barstow, David R.

DED is a display-oriented editor that was designed to add the power and convenience of display terminals to INTERLISP's teletype-oriented structure editor. DED divides the display screen into a Prettyprint Region and an Interaction Region. The Prettyprint Region gives a prettyprinted view of the structure being edited; the Interaction Region contains the interaction between the user and INTERLISP's standard editor. DED's prettyprinter allows elision, and the user may zoom in or out to see the expression being edited in more or less detail. There are several arrow keys which allow the user to change quite easily the locus of attention in certain structural ways, as well as a menu-like facility for common command sequences. Together, these features provide a display facility that considerably augments INTERLISP's otherwise quite sophisticated user interface.
Davis, Randall; Austin, Howard; Carlbom, Ingrid; Frawley, Bud; Pruchnik, Paul; Sneiderman, Rich; Gilreath, J.

The DIPMETER ADVISOR program is an application of AI and expert-system techniques to the problem of inferring subsurface geologic structure. It synthesizes techniques developed in two previous lines of work, rule-based systems and signal understanding programs. This report on the prototype system has four main concerns. First, we describe the task and characterize the various bodies of knowledge required. Second, we describe the design of the system we have built and the level of performance it has currently reached. Third, we use this task as a case study and examine it in the light of other, related efforts, showing how particular characteristics of this problem have dictated a number of design decisions. We consider the character of the interpretation hypotheses generated and the sources of the expertise involved. Finally, we discuss future directions of this early effort. We describe the problem of "shallow knowledge" in expert systems and explain why this task appears to provide an attractive setting for exploring the use of deeper models.
Teitelman, Warren; Masinter, Larry

Integration, extensibility, and ease of modification made Interlisp unique and powerful. Its adaptations will enhance the power of the coming world of personal computing and advanced displays.
Moore, J. Strother

The TXDT package is a collection of INTERLISP programs designed for those who wish to build text editors in INTERLISP. TXDT provides a new INTERLISP data type, called a buffer, and programs for efficiently inserting, deleting, searching and manipulating text in buffers. Modifications may be made undoable. A unique feature of TXDT is that an address may be "stuck" to a character occurrence so as to follow that character wherever it is subsequently moved. TXDT also has provisions for fonts.
1982
Steele, Guy L.

A dialect of LISP called “COMMON LISP” is being cooperatively developed and implemented at several sites. It is a descendant of the MACLISP family of LISP dialects, and is intended to unify the several divergent efforts of the last five years. We first give an extensive history of LISP, particularly of the MACLISP branch, in order to explain in context the motivation for COMMON LISP. We enumerate the goals and non-goals of the language design, discuss the language features of primary interest, and then consider how these features help to meet the expressed goals. Finally, the status (as of May 1982) of six implementations of COMMON LISP is summarized.
Bates, Raymond L.; Dyer, David; Koomen, Johannes A. G. M.

This paper presents some of the issues involved in implementing Interlisp [19] on a VAX computer [24] with the goal of producing a version that runs under UNIX [17], specifically Berkeley VM/UNIX. This implementation has the following goals:
• To be compatible with and functionally equivalent to Interlisp-10.
• To serve as a basis for future Interlisp implementations on other mainframe computers. This goal requires that the implementation be portable.
• To support a large virtual address space.
• To achieve a reasonable speed.
The implementation draws directly from three sources, Interlisp-10 [19], Interlisp-D [5], and Multilisp [12]. Interlisp-10, the progenitor of all Interlisps, runs on the PDP-10 under the TENEX [2] and TOPS-20 operating systems. Interlisp-D, developed at Xerox Palo Alto Research Center, runs on personal computers also developed at PARC. Multilisp, developed at the University of British Columbia, is a portable interpreter containing a kernel of Interlisp, written in Pascal [9] and running on the IBM Series/370 and the VAX. The Interlisp-VAX implementation relies heavily on these implementations. In turn, Interlisp-D and Multilisp were developed from The Interlisp Virtual Machine Specification [15] by J Moore (subsequently referred to as the VM specification), which discusses what is needed to implement an Interlisp by describing an Interlisp Virtual Machine from the implementors' point of view. Approximately six man-years of effort have been spent exclusively in developing Interlisp-VAX, plus the benefit of many years of development for the previous Interlisp implementations.
Dawson, Jeffrey L.

This paper describes a real-time garbage collection algorithm for list processing systems. We identify two efficiency problems inherent to real-time garbage collectors, and give some evidence that the proposed algorithm tends to reduce these problems. In a virtual memory implementation, the algorithm restructures the cell storage area more compactly, thus reducing working sets. The algorithm also may provide a more garbage-free storage area at the end of the collection cycle, although this claim really must await empirical verification.
Bates, Raymond; Dyer, David; Koomen, Johannes; Saunders, Steven; Voreck, Donald

The Interlisp-VAX project was begun in mid-1979 to provide a newer, more powerful alternative to Interlisp-10 as a LISP environment suitable for research. The result is an efficient, portable, fully functional system compatible with other Interlisps and supporting a large virtual address space. Interlisp-VAX runs under two operating systems, VMS and UNIX. Its implementation on the VAX, one of the most popular machines in research facilities and college campuses today, assures it of a long, productive future.
LOOPS: Data and Object Oriented Programming for Interlisp
Bobrow, Daniel G.; Stefik, Mark

This paper summarizes the features of LOOPS and indicates how they support different knowledge representation features.
LOOPS is a programming system integrated into Interlisp. It includes 1) object-oriented programming with a non-hierarchical class structure; 2) user-extendible property list descriptions of the classes, their variables, and their methods (e.g., for documentation, defaults, constraints); 3) composite objects - a way of defining templates for related objects that are instantiated as a group; 4) data-oriented programming using active values - a way of having a procedure invoked when the value of a variable is set or read; 5) a knowledge base facility providing long-term storage of shared knowledge bases, support for the exchange of incremental updates (layers), and the representation of multiple alternatives.
This paper describes the features of LOOPS, how they facilitate implementation of a core of knowledge representation features, and our experience using it.
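Active values (item 4 above) are easy to miniaturize. A sketch in standard Common Lisp, with an invented representation; in LOOPS the procedures are attached to object variables rather than to a free-standing cell:

    ;; An active value: reading or setting it runs a user-supplied procedure.
    (defstruct (active-value (:conc-name av-))
      value
      (on-get nil)    ; called as (fn value) when the value is read
      (on-set nil))   ; called as (fn old new); its result is stored

    (defun av-read (av)
      (if (av-on-get av)
          (funcall (av-on-get av) (av-value av))
          (av-value av)))

    (defun av-write (av new)
      (setf (av-value av)
            (if (av-on-set av)
                (funcall (av-on-set av) (av-value av) new)
                new)))

    ;; A gauge-like monitor: every write is reported as a side effect.
    ;; (defvar *speed* (make-active-value
    ;;                   :value 0
    ;;                   :on-set (lambda (old new)
    ;;                             (format t "~&speed: ~a -> ~a~%" old new)
    ;;                             new)))
    ;; (av-write *speed* 55)   ; prints  speed: 0 -> 55
    ;; (av-read *speed*)       ; =>  55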
Finin, Tim

We describe an effort to translate the Interlisp KL-ONE system into FranzLisp to enable it to be run on a VAX. This effort has involved Tim Finin, Richard Duncan and Hassan Ait-Kaci from the University of Pennsylvania, Judy Weiner from Temple University, Jane Barnett from Computer Corporation of America and Jim Schmolze from Bolt Beranek and Newman. The primary motivation for this project was to make a version of KL-ONE available on a VAX-11/780. A VAX Interlisp is not yet available, although one is being written and will soon be available. Currently, the only substantial Lisp for a VAX is the Berkeley FranzLisp system. As a secondary motivation, we are interested in making KL-ONE more available in general - on a variety of Lisp dialects and machines.

Collection of 1982 emails reporting and discussing Interlisp-D bugs and system performance issues.
1983
Xerox

Interlisp began with an implementation of the Lisp programming language for the PDP-1 at Bolt Beranek and Newman in 1966. It was followed in 1967 by 940 Lisp, an upward-compatible implementation for the SDS-940 computer. 940 Lisp was the first Lisp system to demonstrate the feasibility of using software paging techniques and a large virtual memory in conjunction with a list-processing system [Bobrow & Murphy, 1967]. 940 Lisp was patterned after the Lisp 1.5 implementation for CTSS at MIT, with several new facilities added to take advantage of its timeshared, on-line environment. DWIM, the Do-What-I-Mean error correction facility, was introduced into this system in 1968 by Warren Teitelman [Teitelman, 1969].
Becker, Jeffrey M.

This paper describes the operation and internal structure of a program called AQINTERLISP. AQINTERLISP is an interactive INTERLISP-10 program for generalization and optimization of discriminant descriptions of object classes. The descriptions are expressed as disjunctive normal expressions in the variable-valued logic system. Such expressions are unions of conjunctive statements (complexes) involving relations on multiple-valued variables. Input data to the program are sets of VL1 events (sequences of attribute-value pairs) describing individual objects. Each event is associated with a given class name.

This section describes the Fugue.4 release of Interlisp-D. Fugue.4 is the Customer Version of Fugue.3, which is a significant enrichment of its predecessors, Fugue.2 and Fugue.0.
Novak, Jr, Gordon S.

GLISP is a high-level language that is compiled into LISP. It provides a versatile abstract-data-type facility with hierarchical inheritance of properties and object-centered programming. GLISP programs are shorter and more readable than equivalent LISP programs. The object code produced by GLISP is optimized, making it about as efficient as handwritten LISP. An integrated programming environment is provided, including automatic incremental compilation, interpretive programming features, and an intelligent display-based inspector/editor for data and data-type descriptions. GLISP code is relatively portable; the compiler and the data inspector are implemented for most major dialects of LISP and are available free or at nominal cost.
GLISP (Novak 1982, 1983a, 1983b) is a high-level language, based on LISP and including LISP as a sublanguage, that is compiled into LISP (which can be further compiled to machine language by the LISP compiler). The GLISP system runs within an existing LISP system and provides an integrated programming environment that includes automatic incremental compilation of GLISP programs, interactive execution and debugging, and display-based editing and inspection of data. Use of GLISP makes writing, debugging, and modifying programs significantly easier; at the same time, the code produced by the compiler is optimized so that its execution efficiency is comparable to that of handwritten LISP. This article describes features of GLISP and illustrates them with examples. Most of the syntax of GLISP is similar to LISP syntax or PASCAL syntax, so explicit treatment of GLISP syntax will be brief.
GLISP programs are compiled relative to a knowledge base of object descriptions, a form of abstract data types (Liskov et al. 1977; Wulf, London, and Shaw 1976). A primary goal of the use of abstract data types in GLISP is to make programming easier. The implementations of objects are described in a single place; the compiler uses the object descriptions to convert GLISP code written in terms of user objects into efficient LISP code written in terms of the implementations of the objects in LISP. This allows the implementations of objects to be changed without changing the code; it also allows the same code to be effective for objects that are implemented in different ways, and thereby allows the accumulation of programming knowledge in the form of generic programs. Figure 1 illustrates the combination of information from these three sources; the recursive use of abstract data types and generic programs in the compilation process provides multiplicative power for describing programs.
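The core of the scheme - compiling abstract property references into concrete accessors by consulting a separate object description - can be suggested in miniature. The description format and names below are invented for the sketch and are much simpler than GLISP's:

    ;; Compile property access against an object description, so abstract
    ;; code becomes concrete LISP accessors.  Toy format, not GLISP's.
    (defvar *object-descriptions*
      '((person (name . (first obj))      ; a person is the list (name age)
                (age  . (second obj)))))

    (defun compile-access (type property obj-form)
      "Translate PROPERTY of a TYPE instance into code over OBJ-FORM."
      (let* ((desc (cdr (assoc type *object-descriptions*)))
             (template (cdr (assoc property desc))))
        (unless template (error "~a has no property ~a" type property))
        (subst obj-form 'obj template)))

    ;; (compile-access 'person 'age '(car people))  =>  (SECOND (CAR PEOPLE))
    ;; Changing the stored description re-targets every compiled access
    ;; without touching the abstract source code.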
Schoen, Eric; Smith, Reid G.

In this paper, we discuss a display-oriented editor to aid in the construction of knowledge-based systems. We also report on our experiences concerning the utility of the editor.
Interlisp reference manual: Revised
Sannella, Michael

Interlisp is a programming system. A programming system consists of a programming language, a large number of predefined programs (or functions, to use the Lisp terminology) that can be used either as direct user commands or as subroutines in user programs, and an environment that supports the programmer by providing a variety of specialized programming tools. The language and predefined functions of Interlisp are rich, but similar to those of other modern programming languages. The Interlisp programming environment, on the other hand, is very distinctive. Its most salient characteristic is an integrated set of programming tools which know enough about Interlisp programming so that they can act as semi-autonomous, intelligent "assistants" to the programmer. In addition, the environment provides a completely self-contained world for creating, debugging and maintaining Interlisp programs.
This manual describes all three components of the Interlisp system. There are discussions about the content and structure of the language, about the pieces of the system that can be incorporated into user programs, and about the environment. The line between user code and the environment is thin and changing. Most users extend the environment with some special features of their own. Because Interlisp is so easily extended, the system has grown over time to incorporate many different ideas about effective and useful ways to program. This gradual accumulation over many years has resulted in a rich and diverse system.
Stefik, Mark; Bobrow, Daniel G; Mittal, Sanjay; Conway, Lynn

Early this year fifty people took an experimental course at Xerox PARC on knowledge programming in Loops. During the course, they extended and debugged small knowledge systems in a simulated economics domain called Truckin'. Everyone learned how to use the Loops environment, formulated the knowledge for their own program, and represented it in Loops. At the end of the course a knowledge competition was run so that the strategies used in the different systems could be compared. The punchline to this story is that almost everyone learned enough about Loops to complete a small knowledge system in only three days. Although one must exercise caution in extrapolating from small experiments, the results suggest that there is substantial power in integrating multiple programming paradigms.
Narain, Sanjai; McArthur, David; Klahr, Philip

ROSS is an object-oriented language developed for building knowledge-based simulations. SWIRL is a program written in ROSS that embeds knowledge about defensive and offensive air battle strategies. Given an initial configuration of military forces, SWIRL simulates the resulting air battle. We have implemented ROSS and SWIRL in several different Lisp environments. We report upon this experience by comparing the various environments in terms of cpu usage, real-time usage, and various user aids.
Schrag, Robert C.

Conversion of the LogLisp (Logic programming in Lisp) Artificial Intelligence programming environment from its original Rutgers/UCI-Lisp (RUCI-Lisp) implementation to an InterLisp implementation is described. This report may be useful to researchers wishing to convert LogLisp to yet another Lisp dialect, or to those wishing to convert other RUCI-Lisp programs into InterLisp. It is also intended to help users of the InterLisp version of LogLisp to understand the implementation. The conversion process is described at a level aimed toward potential translators who might benefit from approaches taken and lessons learned. General issues of conversion of Lisp software between dialects are discussed, use of InterLisp's dialect translation package is described, and specific issues of non-mechanizable conversion are addressed. The latter include dialect differences in function definitions, arrays, integer arithmetic, I/O, interrupts, and macros. Subsequent validation, compilation, and efficiency enhancement of the InterLisp version are then described. A brief user's guide to the InterLisp version and points of contact for information on LogLisp software distribution are also provided.

The OSU LOOPS class notebook contains the following materials.
1. three short papers concerning the INTERLISP environment and the LOOPS language [Notebook Section #II],
2. a set of exercises we will be using throughout the course [Notebook Sections #III - #XIII],
3. a set of descriptive materials about the TRUCKIN knowledge engineering game [Notebook Section #XIV],
4. samples of LOOPS gauges [Notebook Section #XV],
5. samples of LOOPS ClassBrowsers [Notebook Section #XVI],
6. samples of TRUCKIN game boards [Notebook Section #XVII],
7. LOOPS summary and manual [Notebook Section #XVIII], and
8. documentation for the SSI tool [Notebook Section #XIX].
Stefik, Mark; Bobrow, Daniel G.

LOOPS adds data-, object-, and rule-oriented programming to the procedure-oriented programming of Interlisp. In object-oriented programming, behavior is determined by responses of instances of classes to messages sent between these objects, with no direct access to the internal structure of an object. This approach makes it convenient to define program interfaces in terms of message protocols. Data-oriented programming is a dual of object-oriented programming, where behavior can occur as a side effect of direct access to (permanent) object state. This makes it easy to write programs which monitor the behavior of other programs. Rule-oriented programming is an alternative to programming in LISP. Programs in this paradigm are organized around recursively composable sets of pattern-action rules for use in expert system design. Rules make it convenient to describe flexible responses to a wide range of events. LOOPS is integrated into Interlisp, and thus provides access to the standard procedure-oriented programming of Lisp and use of the extensive environmental support of the Interlisp-D system.
Our experience suggests that programs are easier to build in a language when there is an available paradigm that matches the structure of the problem. The paradigms described here offer distinct ways of partitioning the organization of a program, as well as distinct ways of viewing the significance of side effects. LOOPS provides all these paradigms within a single environment. This manual is intended as the primary documentation for users of LOOPS. It describes the concepts and the programming facilities, and gives examples and scenarios for using LOOPS.
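Of these paradigms, the rule-oriented one is the easiest to miniaturize. A sketch in standard Common Lisp (invented representation, far simpler than LOOPS rule sets): fire the first applicable pattern-action rule, and repeat until quiescent:

    ;; A rule is (predicate . action); both take the current fact list.
    (defparameter *rules*
      (list (cons (lambda (facts) (and (member 'wet facts)
                                       (not (member 'slippery facts))))
                  (lambda (facts) (cons 'slippery facts)))
            (cons (lambda (facts) (and (member 'raining facts)
                                       (not (member 'wet facts))))
                  (lambda (facts) (cons 'wet facts)))))

    (defun forward-chain (rules facts)
      "Fire the first applicable rule until none applies; return the facts."
      (loop
        (let ((rule (find-if (lambda (r) (funcall (car r) facts)) rules)))
          (unless rule (return facts))
          (setf facts (funcall (cdr rule) facts)))))

    ;; (forward-chain *rules* '(raining))  =>  (SLIPPERY WET RAINING)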
Bobrow, Daniel Gureasko; Stefik, Mark

LOOPS adds data-, object-, and rule-oriented programming to the procedure-oriented programming of Interlisp. In object-oriented programming, behavior is determined by responses of instances of classes to messages sent between these objects, with no direct access to the internal structure of an object. This approach makes it convenient to define program interfaces in terms of message protocols. Data-oriented programming is a dual of object-oriented programming, where behavior can occur as a side effect of direct access to (permanent) object state. This makes it easy to write programs which monitor the behavior of other programs. Rule-oriented programming is an alternative to programming in LISP. Programs in this paradigm are organized around recursively composable sets of pattern-action rules for use in expert system design. Rules make it convenient to describe flexible responses to a wide range of events. LOOPS is integrated into Interlisp, and thus provides access to the standard procedure-oriented programming of Lisp and use of the extensive environmental support of the Interlisp-D system.
Our experience suggests that programs are easier to build in a language when there is an available paradigm that matches the structure of the problem. The paradigms described here offer distinct ways of partitioning the organization of a program, as well as distinct ways of viewing the significance of side effects. LOOPS provides all these paradigms within a single environment. This manual is intended as the primary documentation for users of LOOPS. It describes the concepts and the programming facilities, and gives examples and scenarios for using LOOPS.
Stefik, Mark; Bobrow, Daniel; Mittal, Sanjay; Conway, Lynn
1984
Waguespack, Leslie J.; Hass, David F.

We present the Computer Science Scholar's Workbench, a tool kit written in Pascal suitable for research and teaching. It has advantages over contemporary workbenches, UNIX and INTERLISP: a host to support the tool kit costs less than $3,000, the tools are free (available in source form from publications), and the tools are written in Pascal, which is widely used in academic environments. We discuss a) course requirements and problems unique to project-oriented software engineering classes, b) the tools we've chosen for the workbench, and c) how they may be used to ameliorate or solve many of the problems. We report our experience using the workbench and evaluate it in terms of cost, performance, portability, extensibility, and effectiveness.

These notes accompany the Carol Release of Interlisp-D. They describe changes made since the Fugue.4 Release of October, 1983 and the Fugue.6 Release of April, 1984. The most prominent feature is support for the Xerox 1108 local file system. A new release of TEdit provides a powerful, menu-driven interface. Xerox 1100 and 1132 users can now initialize Interlisp-D from disk partitions other than the currently booted partition.
Steele, Guy L.
Acuff, Richard

A chat stream is a connection between two processes oriented towards terminal service, but not necessarily restricted to that. A chat stream is inherently bi-directional so it is represented by two Interlisp-D streams; one each for input and output. The input stream is considered the primary handle on the connection and is used wherever operations are performed that are not inherently only input or output. The following operations are available for chat streams (as well as the normal stream operations). In general these operations return true if the operation was successful, NIL if it could not be done:
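The two-streams representation has a direct analogue in standard Common Lisp, sketched below; this is only the shape of the idea, not the Interlisp-D chat implementation, and string streams stand in for the network connection:

    ;; A "chat stream" shape: one input stream plus one output stream,
    ;; optionally glued into a single bidirectional handle.
    (defun make-toy-chat ()
      (let ((in  (make-string-input-stream "remote says hi"))
            (out (make-string-output-stream)))
        (values (make-two-way-stream in out)  ; one handle, both directions
                out)))                        ; kept so the demo can inspect it

    ;; (multiple-value-bind (chat out) (make-toy-chat)
    ;;   (format chat "local heard: ~a~%" (read-line chat))
    ;;   (get-output-stream-string out))
    ;;   =>  "local heard: remote says hi" (with a trailing newline)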
Foster, Gregg

CoLab is a laboratory to experiment with new forms of computer-assisted collaboration. We argue that current tools for supporting meetings are antique. We propose experiments using modern computational and display technologies to build tools for better support of meetings and cooperative problem solving. Research objectives are outlined, specifically: the goals of the project and our approach to computer-based support for cooperative problem solving and the experimental basis for CoLab. We take a quick tour of the CoLab meeting lab being constructed. We outline an example tool and discuss some possible future tools. Previous software systems for supporting group work and some past efforts at structuring group problem solving are described. We present dimensions of tool design and some experiments under consideration. The basic architecture and the software primitives for group use of computers are presented. We discuss the current status of the CoLab project and our immediate plans. We plan to use CoLab to explore the use of computer software and advanced display devices to enhance and extend group problem solving activity. It will also be used as a laboratory to investigate appropriate structures for computer-based meetings.

Colab Cognoter - Reel 3.
Gregg Foster, Colab project.

Colab Cognoter Demo - Reel 1.
Gregg Foster, Colab project.
Stoyan, Herbert

This paper describes the development of LISP from McCarthy's first research on the topic of programming languages for AI until the stage when the LISP 1 implementation had developed into a serious program (May 1959). We show the steps that led to LISP and the various proposals for LISP interpreters (between November 1958 and May 1959). The paper contains some corrections of details given in our book (32).

This section describes the Fugue.6 release of Interlisp-D. This version substantially improves the performance and reliability of key system components and fixes many bugs reported in earlier releases of Fugue.
Moon, David A.

This paper discusses garbage collection techniques used in a high-performance Lisp implementation with a large virtual memory, the Symbolics 3600. Particular attention is paid to practical issues and experience. In a large system problems of scale appear and the most straightforward garbage-collection techniques do not work well. Many of these problems involve the interaction of the garbage collector with demand-paged virtual memory. Some of the solutions adopted in the 3600 are presented, including incremental copying garbage collection, approximately depth-first copying, ephemeral objects, tagged architecture, and hardware assists. We discuss techniques for improving the efficiency of garbage collection by recognizing that objects in the Lisp world have a variety of lifetimes. The importance of designing the architecture and the hardware to facilitate garbage collection is stressed.
Bundy, Alan; Wallen, Lincoln

Major dialect of LISP <34>, designed for high-resolution, bit-mapped displays, distinguished by (a) use of an in-core editor for structures, and thus code, (b) a programming environment of tools for automatic error-correction, syntax (sic) extension and structure declaration/access, (c) implementation of almost-compatible dialects (Interlisp) on several machines, (d) extensive usage of display-orientated tools and facilities. Emphasis: personal Lisp workstation, user interface tools.
Odradek - A Prolog-Based Lisp Translator
Jellinek, Herb
Smith, Reid G.

We use our experience with the Dipmeter Advisor system for well-log interpretation as a case study to examine the development of commercial expert systems. We discuss the nature of these systems as we see them in the coming decade, characteristics of the evolution process, development methods, and skills required in the development team. We argue that the tools and ideas of rapid prototyping and successive refinement accelerate the development process. We note that different types of people are required at different stages of expert system development: those who are primarily knowledgeable in the domain, but who can use the framework to expand the domain knowledge; and those who can actually design and build expert systems. Finally, we discuss the problem of technology transfer and compare our experience with some of the traditional wisdom of expert system development.
Gabriel, Richard P.; McCarthy, John

As the need for high-speed computers increases, the need for multi-processors will become more apparent. One of the major stumbling blocks to the development of useful multi-processors has been the lack of a good multi-processing language—one which is both powerful and understandable to programmers. Among the most compute-intensive programs are artificial intelligence (AI) programs, and researchers hope that the potential degree of parallelism in AI programs is higher than in many other applications. In this paper we propose multi-processing extensions to Lisp. Unlike other proposed multi-processing Lisps, this one provides only a few very powerful and intuitive primitives rather than a number of parallel variants of familiar constructs.
Bates, Raymond L.; Dyer, David; Feber, Mark

This paper reports on recent developments of the ISI-Interlisp implementation of Interlisp on a VAX computer. ISI-Interlisp currently runs under the Berkeley VM/UNIX and VMS operating systems. Particular attention is paid to the current status of the implementation and the growing pains experienced in the last few years. Included is a discussion of the conversion from UNIX to VAX/VMS, recent modifications and improvements, current limitations, and projected goals. Since much of the recent effort has concerned performance tuning, our observations on this activity are included. ISI-Interlisp, formerly known as Interlisp-VAX, was reported on at the 1982 ACM Symposium on LISP and Functional Programming, August 1982 [1]. Experiences and recommendations since the 1982 LISP conference are presented.
Henderson, Austin

Trillium: A Design Environment for Copier Interfaces: The System and Its Impact on Design, Tape 1 of 2
Henderson, Austin

Trillium: A Design Environment for Copier Interfaces: The System and Its Impact on Design, Tape 2 of 2
Lenat, Douglas B.; Brown, John Seely

Seven years ago, the AM program was constructed as an experiment in learning by discovery. Its source of power was a large body of heuristics, rules which guided it toward fruitful topics of investigation, toward profitable experiments to perform, toward plausible hypotheses and definitions. Other heuristics evaluated those discoveries for utility and “interestingness”, and they were added to AM’s vocabulary of concepts. AM’s ultimate limitation apparently was due to its inability to discover new, powerful, domain-specific heuristics for the various new fields it uncovered. At that time, it seemed straightforward to simply add Heuretics (the study of heuristics) as one more field in which to let AM explore, observe, define, and develop. That task -- learning new heuristics by discovery -- turned out to be much more difficult than was realized initially, and we have just now achieved some successes at it. Along the way, it became clearer why AM had succeeded in the first place, and why it was so difficult to use the same paradigm to discover new heuristics. This paper discusses those recent insights. They spawn questions about “where the meaning really resides” in the concepts discovered by AM. This leads to an appreciation of the crucial and unique role of representation in theory formation, a role involving the relationship between Form and Content.
1985
Mostow, Jack; Cohen, Donald
Lenat, Douglas B.; Prakash, Mayank; Shepherd, Mary

MCC's CYC project is the building, over the coming decade, of a large knowledge base (or KB) of real-world facts and heuristics and - as a part of the KB itself - methods for efficiently reasoning over the KB. As the title of this article suggests, our hypothesis is that the two major limitations to building large intelligent programs might be overcome by using such a system. We briefly illustrate how common sense reasoning and analogy can widen the knowledge acquisition bottleneck. The next section (“How CYC Works”) illustrates how those same two abilities can solve problems of the type that stymie current expert systems. We then report how the project is being conducted currently: its strategic philosophy, its tactical methodology, and a case study of how we are currently putting that into practice. We conclude with a discussion of the project’s feasibility and timetable.
Burwell, A. D. M.

Report of a meeting held by the Geological Information Group at the British Petroleum Research Centre, Sunbury, 24 January 1985
This meeting, concerned mainly with computer manipulation of petroleum exploration data, attracted c. 95 participants. In addition to eight papers presented, there were two computer demonstrations of log analysis systems and a number of poster displays.
The morning session, concerned with large-scale, integrated hardware and software systems, was chaired by R. Howarth. R. Till of British Petroleum gave the opening paper concerning BP Exploration’s integrated database system. BP Exploration databases fall into three main groups: those containing largely numerical data; databases specifically concerned with text handling; and well-based databases. The ‘numerical’ databases, implemented under the ULTRA database management system (dbms), include a seismic data system, a generalized cartographic database and an earth constants database. Textual databases include a library information system and a Petroconsultants scout data database, both implemented under the BASIS dbms. The well-based systems include a generalized well-data database, a wireline log archive, storage and retrieval system, and a master well index; all three are implemented under the INGRES dbms. Two related BASIS databases contain geochemical and biostratigraphical data.
G. Baxter (co-author M. Hemingway) described the development of Britoil’s well log database which was prompted by the need to have rapid access to digitized wireline log data for c. 1500 wells on the UKCS. Early work involved both locating log information and digitizing those logs held in sepia form only. Each digitized log occupies approximately 1 Mbyte.
Adeli, H.; Paek, Y. J.

LISP appears to be the language of choice among developers of knowledge-based expert systems. Analysis of structures in the INTERLISP environment is discussed in this paper. An interactive INTERLISP program is presented for the analysis of frames which can be used as part of an expert system for computer-aided design of structures. Some of the concepts and characteristics of the INTERLISP language are explained by referring to the INTERLISP program.
Thompson, Henry

This proposal represents an attempt to provide a set of control primitives for CommonLoops which will
1) Support the existing Interlisp error handling mechanisms (including ERROR and friends, ERRORSET and friends, RESETLST and friends, ERRORTYPELST, BREAKCHECK and its consequences, and the relationships between ERRORX, FAULT1 and BREAK1), all in the context of spaghetti stacks and the existing process mechanisms;
2) Support the CommonLisp constructs catch, throw, and unwind-protect, and the relationship of unwind-protect to go and return(-from), error, cerror and warn (their basic interplay is sketched after this list);
3) Substantially reproduce the functionality of the ZetaLisp signalling facility;
4) Be a reasonably plausible attempt to take the high ground wrt whatever proposals the CommonLisp working party on error handling come up with;
5) Be a Good Thing in its own right.
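The interplay of the constructs named in point 2 is fixed by CommonLisp itself and is worth seeing in isolation: a throw unwinds past an unwind-protect, but the cleanup form still runs. A small self-contained illustration in standard Common Lisp, independent of the CommonLoops proposal:

    ;; THROW, CATCH and UNWIND-PROTECT composed.
    (defun risky (log)
      (unwind-protect
           (throw 'abort :failed)              ; non-local exit ...
        (vector-push-extend :cleaned log)))    ; ... still runs on the way out

    (defun demo ()
      (let ((log (make-array 0 :adjustable t :fill-pointer 0)))
        (values (catch 'abort (risky log))     ; =>  :FAILED
                log)))                         ; =>  #(:CLEANED)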

These notes document features of the Harmony release of Interlisp-D. Harmony is the successor to Carol, the June 1984 release of Interlisp-D.
Harmony is substantially more reliable than its predecessors: over 600 system bugs have been fixed. Support for NS file and print servers is now robust and reliable. Major improvements have been made to the font system. An advanced version of TEdit, the Interlisp text editor, is being released with Harmony. Image streams, which allow for the printing of arbitrary text and graphics on Interpress and Press printers, are supported by Harmony.
The following pages present detailed descriptions of these, and many other features, which constitute the Harmony release.
XEROX

The Koto release of Interlisp-D provides a wide range of added functionality, increased performance and improved reliability.
Central among these is that Koto is the first release of Interlisp that supports the new Xerox 1185/1186 artificial intelligence workstations, including the new features of these workstations such as the expanded 19" display and the PC emulation option. Of course, like previous releases of Interlisp, Koto also supports the other current members of the 1100 series of machines, specifically the 1132 and various models of the 1108.
Lehtola, A.; Jäppinen, H.; Nelimarkka, E.

This paper introduces a special programming environment for the definition of grammars and for the implementation of corresponding parsers. In natural language processing systems it is advantageous to have linguistic knowledge and processing mechanisms separated. Our environment accepts grammars consisting of binary dependency relations and grammatical functions. Well-formed expressions of functions and relations provide constituent surroundings for syntactic categories in the form of two-way automata. These relations, functions, and automata are described in a special definition language. In focusing on high-level descriptions a linguist may ignore computational details of the parsing process. He writes the grammar into a DPL description and a compiler translates it into efficient LISP code. The environment also has a tracing facility for the parsing process, grammar-sensitive lexical maintenance programs, and routines for the interactive graphic display of parse trees and grammar definitions. Translator routines are also available for the transport of compiled code between various LISP dialects. The environment itself exists currently in INTERLISP and FRANZLISP. This paper focuses on knowledge engineering issues and does not enter linguistic argumentation.
Halasz, Frank Geza

Lecture notes of an Interlisp-D course.

Contents:
• GRAPHCALLS: a new Lispusers package
• Bugs, Workarounds And Helpful Hints
• LOOPS Use At Ohio State
• CSRL: A Language for Designing Diagnostic Expert Systems
• LOOPS at Battelle, Columbus
• WELDEX: An expert system for interpreting radiographs of welds
• Programming tutors with LOOPS
• ANNOUNCEMENTS
• QUESTIONS

Contents:
• NOTECARDS
• Notes, Cautions and Helpful Hints
• A Shell for Intelligent Databases
• RED: a Red-Cell Antibody Identification Expert
• MDX/MYCIN
• Auto-Mech
• Announcements
• Interlisp and Loops Training Classes

Contents:
• SpinPro™: an Expert System for Optimizing Ultracentrifuge Runs
• GUIDON-WATCH: A graphic interface to a knowledge based system
• A KNOWLEDGE-BASED ENVIRONMENT FOR PROCESS PLANNING
• Programmer's Corner: Program and System Tips
• Notes, Cautions and Helpful Hints
• QUESTIONS and ANSWERS
• TSHOOT - A Recursive Expert Troubleshooting System
• EXPERT INSTRUCTIONAL SYSTEMS RESEARCH AT Learning Research and Development Center (LRDC)
• Announcements
• Interlisp and Loops Training Classes
• Xerox AI International Users' Group
Marshall, Kathy

Because NoteCards is a vehicle for current research and is still undergoing development, you may encounter occasional bugs or be frustrated by seeming deficiencies. This manual has been written with these considerations in mind.
Stefik, Mark; Bobrow, Daniel G.

Over the past few years object-oriented programming languages have become popular in the artificial intelligence community, often as add-ons to Lisp. This is an introduction to the concepts of object-oriented programming based on our experience with them in Loops, and secondarily a survey of some of the important variations and open issues that are being explored and debated among users of different dialects.
Gabriel, Richard P.

This is the final report of the Stanford Lisp Performance Study, which was conducted by the author during the period from February 1981 through October 1984. This report is divided into three major parts: the first is the theoretical background, which is an exposition of the factors that go into evaluating the performance of a Lisp system; the second part is a description of the Lisp implementations that appear in the benchmark study; and the last part is a description of the benchmark suite that was used during the bulk of the study and the results themselves.
Gabriel, Richard P.

The final report of the Stanford Lisp Performance Study, Performance and Evaluation of Lisp Systems is the first book to present descriptions of Lisp implementation techniques actually in use. It provides performance information using the tools of benchmarking to measure the various Lisp systems, and provides an understanding of the technical tradeoffs made during the implementation of a Lisp system. The study is divided into three parts. The first provides the theoretical background, outlining the factors that go into evaluating the performance of a Lisp system. The second part presents the Lisp implementations: MacLisp, MIT CADR, LMI Lambda, S-1 Lisp, Franz Lisp, NIL, Spice Lisp, Vax Common Lisp, Portable Standard Lisp, and Xerox D-Machine. A final part describes the benchmark suite that was used during the major portion of the study and the results themselves.
Friedland, Peter

A fundamental shift in the preferred approach to building applied artificial intelligence (AI) systems has taken place since the late 1960s. Previous work focused on the construction of general-purpose intelligent systems; the emphasis was on powerful inference methods that could function efficiently even when the available domain-specific knowledge was relatively meager. Today the emphasis is on the role of specific and detailed knowledge, rather than on reasoning methods.
The first successful application of this method, which goes by the name of knowledge-based or expert-system research, was the DENDRAL program at Stanford, a long-term collaboration between chemists and computer scientists for automating the determination of molecular structure from empirical formulas and mass spectral data. The key idea is that knowledge is power, for experts, be they human or machine, are often those who know more facts and heuristics about a domain than lesser problem solvers. The task of building an expert system, therefore, is predominantly one of “teaching” a system enough of these facts and heuristics to enable it to perform competently in a particular problem-solving context. Such a collection of facts and heuristics is commonly called a knowledge base. Knowledge-based systems are still dependent on inference methods that perform reasoning on the knowledge base, but experience has shown that simple inference methods like generate and test, backward-chaining, and forward-chaining are very effective in a wide variety of problem domains when they are coupled with powerful knowledge bases.
If this methodology remains preeminent, then the task of constructing knowledge bases becomes the rate-limiting factor in expert-system development. Indeed, a major portion of the applied AI research in the last decade has been directed at developing techniques and tools for knowledge representation. We are now in the third generation of such efforts. The first generation was marked by the development of enhanced AI languages like Interlisp and PROLOG. The second generation saw the development of knowledge representation tools at AI research institutions; Stanford, for instance, produced EMYCIN, The Unit System, and MRS. The third generation is now producing fully supported commercial tools like KEE and S.1. Each generation has seen a substantial decrease in the amount of time needed to build significant expert systems. Ten years ago prototype systems commonly took on the order of two years to show proof of concept; today such systems are routinely built in a few months.
Three basic methodologies—frames, rules, and logic—have emerged to support the complex task of storing human knowledge in an expert system. Each of the articles in this Special Section describes and illustrates one of these methodologies. “The Role of Frame-Based Representation in Reasoning,” by Richard Fikes and Tom Kehler, describes an object-centered view of knowledge representation, whereby all knowledge is partitioned into discrete structures (frames) having individual properties (slots). Frames can be used to represent broad concepts, classes of objects, or individual instances or components of objects. They are joined together in an inheritance hierarchy that provides for the transmission of common properties among the frames without multiple specification of those properties. The authors use the KEE knowledge representation and manipulation tool to illustrate the characteristics of frame-based representation for a variety of domain examples. They also show how frame-based systems can be used to incorporate a range of inference methods common to both logic and rule-based systems.
"Rule-Based Systems,” by Frederick Hayes-Roth, chronicles the history and describes the implementation of production rules as a framework for knowledge representation. In essence, production rules use IF conditions THEN conclusions and IF conditions THEN actions structures to construct a knowledge base. The autor catalogs a wide range of applications for which this methodology has proved natural and (at least partially) successful for replicating intelligent behavior. The article also surveys some already-available computational tools for facilitating the construction of rule-based knowledge bases and discusses the inference methods (particularly backward- and forward-chaining) that are provided as part of these tools. The article concludes with a consideration of the future improvement and expansion of such tools.
The third article, “Logic Programming,” by Michael Genesereth and Matthew Ginsberg, provides a tutorial introduction to the formal method of programming by description in the predicate calculus. Unlike traditional programming, which emphasizes how computations are to be performed, logic programming focuses on the what of objects and their behavior. The article illustrates the ease with which incremental additions can be made to a logic-oriented knowledge base, as well as the automatic facilities for inference (through theorem proving) and explanation that result from such formal descriptions. A practical example of diagnosis of digital device malfunctions is used to show how significant and complex problems can be represented in the formalism.
A note to the reader who may infer that the AI community is being split into competing camps by these three methodologies: Although each provides advantages in certain specific domains (logic where the domain can be readily axiomatized and where complete causal models are available, rules where most of the knowledge can be conveniently expressed as experiential heuristics, and frames where complex structural descriptions are necessary to adequately describe the domain), the current view is one of synthesis rather than exclusivity. Both logic and rule-based systems commonly incorporate frame-like structures to facilitate the representation of large amounts of factual information, and frame-based systems like KEE allow both production rules and predicate calculus statements to be stored within and activated from frames to do inference. The next generation of knowledge representation tools may even help users to select appropriate methodologies for each particular class of knowledge, and then automatically integrate the various methodologies so selected into a consistent framework for knowledge.
The Implementation of Device-Independent Graphics Through Imagestreams
Jellinek, Herb

The Interlisp-D system does all image creation through a set of functions and data structures for device-independent graphics, known popularly as DIG. DIG is achieved through the use of a special flavor of stream, known as an imagestream.
An imagestream, by convention, is any stream that has its IMAGEOPS field (described in detail below) set to a vector of meaningful graphical operations. Using imagestreams, we can write programs that draw and print on an output stream without regard to the underlying device, be it window, disk, Dover, 8044 or Diablo printers.
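The dispatch idea is easy to model. The Common Lisp sketch below is illustrative only (it is not the Interlisp-D IMAGEOPS representation), but it shows how client code can draw through a stream object without knowing which device implements the operations.

;;; Illustrative model of an imagestream: a structure whose IMAGEOPS slot
;;; maps operation names to device-specific closures.  Not the real API.
(defstruct imagestream
  imageops)   ; property list of operation name -> closure

(defun image-op (stream op &rest args)
  "Dispatch OP through STREAM's imageops, independent of the device."
  (apply (getf (imagestream-imageops stream) op) args))

(defun make-display-stream ()
  (make-imagestream
   :imageops (list :drawline
                   (lambda (x1 y1 x2 y2)
                     (format t "display: line (~a,~a)-(~a,~a)~%"
                             x1 y1 x2 y2)))))

(defun make-printer-stream ()
  (make-imagestream
   :imageops (list :drawline
                   (lambda (x1 y1 x2 y2)
                     (format t "printer: line op (~a,~a)-(~a,~a)~%"
                             x1 y1 x2 y2)))))

;; The same client call works on either stream:
;; (image-op (make-display-stream) :drawline 0 0 100 100)
;; (image-op (make-printer-stream) :drawline 0 0 100 100)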
Heering, Jan; Klint, Paul

Most programming environments are much too complex. One way of simplifying them is to reduce the number of mode-dependent languages the user has to be familiar with. As a first step towards this end, the feasibility of unified command/programming/debugging languages, and the concepts on which such languages have to be based, are investigated. The unification process is accomplished in two phases. First, a unified command/programming framework is defined and, second, this framework is extended by adding an integrated debugging capability to it. Strict rules are laid down by which to judge language concepts presenting themselves as candidates for inclusion in the framework during each phase. On the basis of these rules many of the language design questions that have hitherto been resolved this way or that, depending on the taste of the designer, lose their vagueness and can be decided in an unambiguous manner.
Fletcher, Charles R.

WORDPRO, a computer program written in Interlisp-D, implements Kintsch and Greeno's (1985) theory of the comprehension and solution of simple arithmetic word problems. The program is intended to demonstrate the sufficiency of that theory, to assist in communicating it to other researchers, and to serve as a tool for exploring the theory's consequences. In this paper, I address each of these goals. I describe the behavior of WORDPRO on a set of sample problems, show how empirical predictions are derived from it, and provide enough details of its implementation to allow other researchers to understand its output and (with the internal documentation) to test and modify it to meet their needs.
1986

The 1186 is an artificial intelligence development workstation that combines Xerox hardware and software to provide a wide variety of user applications. This chapter provides a brief overview of the 1186 workstation and its software environment.

Uncapher

Summaries of research performed by the Information Sciences Institute at the University of Southern California for the areas provided in this report: Common LISP framework; Explainable Expert Systems; Formalized Software Development; Mappings; Command Graphics; Internet Concepts Research; Strategic Computing Information System; Wideband Communication; KITSERV; VLSI; Advanced VLSI; Text Generation for Strategic Computing; Computer Research Support; Exportable Workstation Systems; New Computing Environment; Strategic Computing-Development Systems; Strategic Command, Control, and Communication Experiment Support.

This manual is designed to help you use Lafite. It assumes you understand the basic principles of using your Xerox Lisp workstation and the Interlisp-D environment.
Access-Oriented Programming for a Multiparadigm Environment
Stefik, Mark; Bobrow, Daniel G.; Kahn, Kenneth

In access-oriented programming, the fetching or storing of data causes user-defined operations to be invoked. Annotated values, a reification of the notion of storage cell, are used to implement active values for procedural activations and properties for structural annotation. The implementation satisfies a number of criteria described for efficiency of operation, and non-interference with respect to other paradigms of programming. The access-oriented programming paradigm has been integrated with the Loops multi-paradigm knowledge programming system, which also provides function-oriented, object-oriented and rule-oriented paradigms for users.
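As a rough sketch of the active-value idea (a hand-rolled model, not the Loops implementation), a storage cell can carry user-supplied operations that run on every fetch and store:

;;; Toy active value: a cell whose ON-GET and ON-PUT hooks run on every
;;; fetch and store.  Invented names; the Loops mechanism differs.
(defstruct (active-value (:conc-name av-))
  contents
  (on-get #'identity)                                      ; value -> value
  (on-put (lambda (new old) (declare (ignore old)) new)))  ; new, old -> stored

(defun av-fetch (av)
  (funcall (av-on-get av) (av-contents av)))

(defun av-store (av new)
  (setf (av-contents av)
        (funcall (av-on-put av) new (av-contents av))))

;; Example: trace every store, as a gauge attached to a slot might.
;; (defvar *cell* (make-active-value
;;                 :contents 0
;;                 :on-put (lambda (new old)
;;                           (format t "~a -> ~a~%" old new)
;;                           new)))
;; (av-store *cell* 42)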
Martz, Philip R.; Heffron, Matt; Griffith, Owen Mitch

The SpinPro™ Ultracentrifugation Expert System is a computer program that designs optimal ultracentrifugation procedures to satisfy the investigator's research requirements. SpinPro runs on the IBM PC/XT. Ultracentrifugation is a common method in the separation of biological materials. Its capabilities, however, are too often under-utilized. SpinPro addresses this problem by employing Artificial Intelligence (AI) techniques to design efficient and accurate ultracentrifugation procedures. To use SpinPro, the investigator describes the centrifugation problem in a question and answer dialogue. SpinPro then offers detailed advice on optimal and alternative procedures for performing the run. This advice results in cleaner and faster separations and improves the efficiency of the ultracentrifugation laboratory.
Wiederhold, Gio; Blum, Robert L.; Walker, Michael

A variety of types of linkages from knowledge bases to databases have been proposed, and a few have been implemented [MW84]. In this research note, we summarize a technique which was employed in a specific context: knowledge extraction from a copy of an existing clinical database. The knowledge base is also used to drive the extracting process. RX builds causal models in its domain to generate input for statistical hypothesis verification. We distinguish two information types: knowledge and data, and recognize four types of knowledge: categorical, definitional, causal (represented in frames), and operational, represented by rules. Based on our experience, we speculate about the generalization of the approach.
Lanning, Stan

The file CACHEOBJECT defines a Loops mixin that defines a protocol for instances that cache computed values.
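A comparable protocol can be sketched in portable CLOS; the names below are invented and only suggest the shape of such a mixin, not the CACHEOBJECT code itself:

;;; Hypothetical cache mixin.  Subclasses supply COMPUTE-VALUE; callers go
;;; through CACHED-VALUE, which memoizes results per key.
(defclass cache-mixin ()
  ((cache :initform (make-hash-table :test #'equal) :reader cache-of)))

(defgeneric compute-value (object key)
  (:documentation "Compute the possibly expensive value for KEY."))

(defun cached-value (object key)
  "Return the cached value for KEY, computing and storing it on a miss."
  (multiple-value-bind (value present-p)
      (gethash key (cache-of object))
    (if present-p
        value
        (setf (gethash key (cache-of object))
              (compute-value object key)))))

(defun flush-cache (object)
  "Discard all cached values, forcing recomputation."
  (clrhash (cache-of object)))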
Bobrow, Daniel G.; Kahn, Kenneth; Kiczales, Gregor; Masinter, Larry; Stefik, Mark; Zdybel, Frank

CommonLoops blends object-oriented programming smoothly and tightly with the procedure-oriented design of Lisp. Functions and methods are combined in a more general abstraction. Message passing is invoked via normal Lisp function call. Methods are viewed as partial descriptions of procedures. Lisp data types are integrated with object classes. With these integrations, it is easy to incrementally move a program between the procedure and object-oriented styles.
One of the most important properties of CommonLoops is its extensive use of meta-objects. We discuss three kinds of meta-objects: objects for classes, objects for methods, and objects for discriminators. We argue that these meta-objects make practical both efficient implementation and experimentation with new ideas for object-oriented programming.
CommonLoops' small kernel is powerful enough to implement the major object-oriented systems in use today.
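CommonLoops was a principal input to what became CLOS, so two of the claims above can be illustrated in portable Common Lisp: methods may specialize built-in Lisp types, and invoking one is an ordinary function call (the generic function below is an invented example):

;;; "Message passing" as plain function call, with methods specialized on
;;; ordinary Lisp data types.  DESCRIBE-THING is an invented example.
(defgeneric describe-thing (x))

(defmethod describe-thing ((x integer))
  (format nil "the integer ~d" x))

(defmethod describe-thing ((x string))
  (format nil "the ~d-character string ~s" (length x) x))

(defmethod describe-thing ((x t))   ; default for everything else
  (format nil "some object: ~a" x))

;; (describe-thing 7)       => "the integer 7"
;; (describe-thing "loops") => "the 5-character string \"loops\""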
Padget, Julian; Chailloux, Jérôme; Christaller, Thomas; DeMantaras, Ramon; Dalton, Jeff; Devin, Matthieu; Fitch, John; Krumnack, Timm; Neidl, Eugen; Papon, Eric; Pope, Stephen; Queinnec, Christian; Steels, Luc; Stoyan, Herbert

This paper reports work-in-progress within the LISP community on efforts to bring the LISP language to national and international standardisation. The paper discusses the objective criteria that have been established, how it is planned that these will be satisfied, when it is expected these will be fulfilled, and what is still open. The Common LISP definition has made a very valuable contribution to the standardisation of LISP and the current authors have learned much from that experience. The result is a rationale for how LISP could be standardised, along with identification of key features in the language and its environment, which together lead to a layered definition. This is followed by detail of the proposal for LISP standardisation based on the strategies that have been outlined.
Kaisler, Stephen H.

LISP, as a language, has been around for about 25 years. It was originally developed to support artificial intelligence (AI) research. At first, it seemed to be little noticed except by a small band of academics who implemented some of the early LISP interpreters and wrote some of the early AI programs. In the early 60’s, LISP began to diverge as various implementations were developed for different machines. McCarthy gives a short history of its early days.
Kaisler, Stephen H.

This text describes the features of a dialect of LISP known as INTERLISP. INTERLISP stands for "Interactive Lisp." It provides a rich program development and problem prototyping environment.
Stefik, Mark; Bobrow, Daniel; Kahn, Kenneth

The Loops knowledge programming system integrates function-oriented, object-oriented, rule-oriented, and—something not found in most other systems—access-oriented programming.
Xerox
Oldford, R. W.; Peters, S. C.

We discuss the design and implementation of object-oriented datatypes for a sophisticated statistical analysis environment. The discussion draws on our experience with an experimental statistical analysis system, called DINDE. DINDE resides in the integrated programming environment of a Xerox Interlisp-D machine running LOOPS. The discussion begins with our implementation of arrays, matrices, and vectors as objects in this environment. We then discuss an additional set of objects that are based on statistical abstractions rather than mathematical ones and describe their implementation in the DINDE environment.
Sheil, Beau

This chapter discusses power tools for programmers. Essentially all of the intelligent programming tools described in this volume are at most experimental prototypes. Given that these tools are still quite far from being commercial realities, it is worth noting that there is a completely different way in which artificial intelligence research can help programmers: artificial intelligence researchers are themselves programmers. Creating such programs is more a problem of exploration than implementation and does not conform to conventional software lifecycle models. The artificial intelligence programming community has always been faced with this kind of exploratory programming and has, therefore, had a head start on developing appropriate language, environment, and hardware features. In conventional programming technology, redundancy protects the design from unintentional change and restrains the programmer; the programming languages used in exploratory systems instead minimize and defer constraints on the programmer.
Bobrow, Daniel G.; Stefik, Mark J.

Programs are judged not only by whether they faithfully carry out the intended processing but also by whether they are understandable and easily changed. Programming systems for artificial intelligence applications use specialized languages, environments, and knowledge-based tools to reduce the complexity of the programming task. Language styles based on procedures, objects, logic, rules, and constraints reflect different models for organizing programs and facilitate program evolution and understandability. To make programming easier, multiple styles can be integrated as sublanguages in a programming environment. Programming environments provide tools that analyze programs and create informative displays of their structure. Programs can be modified by direct interaction with these displays. These tools and languages are helping computer scientists to regain a sense of control over systems that have become increasingly complex.
Henderson, D. Austin Jr.; Card, Stuart K.

A key constraint on the effectiveness of window-based human-computer interfaces is that the display screen is too small for many applications. This results in “window thrashing,” in which the user must expend considerable effort to keep desired windows visible. Rooms is a window manager that overcomes small screen size by exploiting the statistics of window access, dividing the user's workspace into a suite of virtual workspaces with transitions among them. Mechanisms are described for solving the problems of navigation and simultaneous access to separated information that arise from multiple workspaces.
Laird, John E.

This manual describes Version 4 of Soar, an architecture for problem solving and learning based on heuristic search and chunking. Version 4 is available as of January 1986 in Common Lisp, Franz-Lisp, Interlisp, and Zeta-Lisp. An introduction to the Soar system is presented, and information is provided about the following system aspects: (1) data representation in working memory; (2) production representation; (3) decision procedure; (4) subgoals; (5) default search control; (6) chunking; (7) encoding a task; (8) operator implementation goal tests and operator parallelism; (9) top-level variables and functions; (10) errors, warnings, and recovery hints; and (11) the installation of Soar. Also provided are a performance comparison of the time required to solve a simple problem in the Eight Puzzle on different Lisp systems in Version 4, release 1; a brief annotated Soar bibliography; and a distribution list for this report. Two appendices contain a list of search-control productions and a summary of functions and variables, and an index is provided.
SpinPro™ Ultracentrifugation Expert System Product Brochure

Beckman proudly introduces the first Expert System on Ultracentrifugation—a truly advanced software program for the Personal Computer. Not a simulation, it can produce detailed run plans, perform calculations, and provide information on sample materials, separation methods, and density gradients. It is designed to help you shorten run times, improve the quality of separations, and make more efficient use of your ultracentrifuge while affording considerable economies. Now you can have an expert advisor at your command at any time in your own lab—the SpinPro™ Ultracentrifugation Expert System!
Trigg, Randall H.; Suchman, Lucy A.; Halasz, Frank G.

This paper describes a project underway to investigate computer support for collaboration. In particular, we focus on experience with and extensions to NoteCards, a hypertext-based idea structuring system. The forms of collaboration discussed include draft-passing, simultaneous sharing and online presentations. The requirement that mutual intelligibility be maintained between collaborators leads to the need for support of annotative and procedural as well as substantive activities.
Teitelman, Warren

Both James Gosling and I currently work for SUN and the reason for my wanting to talk before he does is that I am talking about the past and James is talking about the future. I have been connected with eight window systems as a user, or as an implementor, or by being in the same building! I have been asked to give a historical view and my talk looks at window systems over ten years and features: the Smalltalk, DLisp (Interlisp), Interlisp-D, Tajo (Mesa Development Environment), Docs (Cedar), Viewers (Cedar), SunWindows and SunDew systems.
Henderson, D. A.

Trillium is a computer-based environment for simulating and experimenting with interfaces for simple machines. For the past four years it has been used by Xerox designers for fast prototyping and testing of interfaces for copiers and printers. This paper defines the class of “functioning frame” interfaces which Trillium is used to design, discusses the major concerns that have driven the design of Trillium, and describes the Trillium mechanisms chosen to satisfy them.
Malone, Thomas; Grant, Kenneth R.; Turbak, Franklyn A.

This paper describes an intelligent system to help people share and filter information communicated by computer-based messaging systems. The system exploits concepts from artificial intelligence such as frames, production rules, and inheritance networks, but it avoids the unsolved problems of natural language understanding by providing users with a rich set of semi-structured message templates. A consistent set of “direct manipulation” editors simplifies the use of the system by individuals, and an incremental enhancement path simplifies the adoption of the system by groups.
1987

The 1186 Hardware Installation provides information to aid you in installing the 1186 Artificial Intelligence workstation.
Karttunen, Lauri; Koskenniemi, Kimmo; Kaplan, Ronald M.

This paper describes a system for compiling two-level phonological or orthographical rules into finite-state transducers. The purpose of this system, called TWOL, is to aid the user in developing a set of such rules for morphological generation and recognition.

A user's guide that shows how to use Sketch both by drawing shapes in a window and by manipulating Sketch using the programmatic interface.

The objective of this user's guide is to show you how to use TEdit both by typing text to a window and by manipulating TEdit using the programmatic interface.
Mott, Peter; Brooke, Simon

This paper describes an inference system which lends itself to graphical representation. An implementation of the system is described, and its application in a legislation-based domain is illustrated. The methodology for knowledge elicitation which the system is intended to support is briefly indicated. The algorithm is described, and semantics for the system are given.
Tatar, Deborah G.; Weinreb, Daniel

Lisp has been around for more than twenty-five years. But for most of Lisp's lifetime, there haven't been any good books that teach the language. Only a few books were available, ranging from mediocre to awful.

AGAST is an attempt to produce a program that can write intelligent stories. With an eclectic combination of ideas from the work of both computer scientists and writers, we have produced the flexible core of what could be a very intelligent story teller.
Cunningham, Robert E.; Corbett, John D.; Bonar, Jeffrey G.

Chips is an interactive tool for developing software employing graphical human-computer interfaces on Xerox Lisp machines. For the programmer, it provides a rich graphical interface for the creation of rich graphical interfaces. In the service of an end user, it provides classes for modeling the graphical relationships of objects on the screen and maintaining constraints between them. Several large applications have been developed with Chips, including intelligent tutors for programming and electricity. Chips is implemented as a collection of customizable classes in the LOOPS object-oriented extensions to Interlisp-D. The three fundamental classes are (1) DomainObject, which defines objects of the application domain (the domain for which the interface is being built) and ties together the various functionalities provided by the Chips system; (2) DisplayObject, which defines mouse-sensitive graphical objects; and (3) Substrate, which defines specialized windows for displaying and storing collections of instances of DisplayObject. A programmer creates an interface by specializing existing DomainObjects and drawing new DisplayObjects with a graphics editor. Instances of DisplayObject and Substrate are assembled on screen to form the interface. Once the interface has been sketched in this manner, the programmer can build inward, creating all other parts of the application through the objects on the screen.

Report of the automated tests for the DEdit structure editor, run on 28 February 1987.

This report is for tests written and executed up to March 24, 1987 on the Basics>Full.Sysout generated 11-Mar-87. The following tests are for the integration of the new error system into the Interlisp environment.
Lai, Kum-Yew

There exists the technology today to build large-scale knowledge bases, hypertext systems, as well as intelligent information sharing systems. As these three kinds of technologies become more widely used, the need to integrate them into generic office tools becomes more urgent for two reasons. First, each technology by itself has its limitations that will become increasingly burdensome. Second, their integration produces a synergy that can overcome their individual limitations.
The goal of this paper is to articulate the synergy that arises from systems that integrate these separate technologies and to describe an implementation of such a system called Object Lens.

Welcome to the inaugural issue of HOTLINE! This issue covers the following topics:
• RS232 Chat
• SETQ and the File Manager
• Default MAKEFILE Environment
• Changing a saved file's Reading Environment
• SEdit and multiple Edit Data Fields
• MP 9303 on rebooting a partition
• Break, Font not found
• Sketch and Hardcopying data
• HORRIBLEVARS

In this issue of HOTLINE! three known problems and two frequently asked questions are addressed:
• Silent failure of MAKEFILE
• Unbound atom in Browser
• Control-C break in TOPS-20 TCP Chat window
• Koto-Lyric readtable inconsistency
• MP 0915 recovery on booting Systemtools

The following topics are covered in this issue:
• How to close open streams
• Saving macros in files
• NAME COMMANDS spontaneous redefinition
• Interfacing an 1186 to a VAX 11/780 and Sun 3/160
• Exporting Symbols from Packages
• Making TEdit Read Only
• Overriding the default compiler
• Changing the Default Executive type

This issue is devoted to hints in using SEdit in Lyric. The following topics are covered in this issue.
• Using the left cluster keys in SEdit
• Changing levels in SEdit
• Function keys in SEdit
• Changing fonts in SEdit
• Setting a default mode for SEdit
• Finding "?" in SEdit
• Changing the print case in SEdit
• Macros in SEdit
• SEdit DO-IT key does not work as documented.

The following topics are covered:
• Creating and interning symbols
• Accessing symbols in packages
• Packages and Readtables
• Difference between MAKE-PACKAGE, IN-PACKAGE and DEFPACKAGE
• Exporting symbols using DEFPACKAGE
• Building a file that exports symbols on loading
• Package prefix for symbols and their values
• Exporting symbols in name conflict
• Importing symbols that have name-conflict
• Deleting a package

• COPYFILE to floppy LOGXOR break
• Error found when installing a sysout from floppy: "File name not found"
• Error found when installing a sysout from floppy: "Floppy label error"
• Disk scavenging
• How to recover from internal garbage collection table overflow
• How to diagnose the cause of internal garbage collection table overflow
• LOGOUT resets the TTY parameters
• Open RS232 stream
• Using the left cluster keys in SEdit
• Changing fonts in SEdit

The following topics are covered in this issue:
• Koto 1186 MakeScript bug
• Standalone password protection
• "File System Resources Exceeded"
• Loading SYSEDIT without MASTERSCOPE
• The side effect of aborting a sysout procedure
• TCP FTP transmission problem of LCOM and DFASL files
• Unbound atom problem in TCP
• TCP Chat problem to Unix 4.3 hosts
• TCP Trivial File Transfer problem

The following topics are covered in this issue:
• Porting Common Lisp Files to Lyric
• Compiling Non-Xerox Common Lisp Files in Lyric
• XCL: EXEC Window Property Bug
• XCL:ADD-EXEC Window Property Bug
• Restoring Multiply Advised Functions
• Interpreted and Compiled Macros

The following topics are covered in this issue:
• Cannot boot Lisp volume after erasing Lispfiles
• Lyric doesn't immediately release files on NS servers
• XCL:storage-exhausted error
• Saving BITMAPS
• ADVISE not saved on file
• Redefinition of an Interlisp function
• Dwimify of I. S. OPRS
• OUTPUT a free variable in Interlisp Exec
• CL mapping functions
• 1987 Index
Trigg, Randall H.; Irish, Peggy M.

This paper reports on an investigation into the use of the NoteCards hypertext system for writing. We describe a wide variety of personal styles adopted by 20 researchers at Xerox as they “inhabit” NoteCards. This variety is displayed in each of their writing activities: notetaking, organizing and reorganizing their work, maintaining references and bibliographies, and preparing documents. In addition, we discuss the distinctive personal decisions made as to which activities are appropriate for NoteCards in the first place. Finally, we conclude with a list of recommendations for system designers arising from this work.

Alphabetic index of the Xerox Common Lisp Implementation Notes for the Lyric Release.
Mears, Lyn Ann; Rees, Ted

This primer is the equivalent of a tourist's guide book. It shows you the "sights" but it leaves out a lot of detail. Once you are comfortable with the basic LOOPS programming concepts and procedures described here, you can use the LOOPS Reference Manual as it was intended and fully exploit the capabilities of LOOPS. This primer was written with the beginner's viewpoint in mind.
It addresses strategic considerations, introduces basic procedures and methods, and provides numerous examples and pictures. The material in each chapter is presented with step-by-step instructions. While this primer does not assume you have any previous programming experience in LOOPS, it does assume you have a Xerox 1108/9 or a Xerox 1186 AI Workstation running the Cantilever version of LOOPS, and that you have experience with Interlisp-D and its programming environment.
Foderaro, John

In this issue we survey the Lisp programming environment provided on the family of Lisp machines built by Xerox. These machines, which once ran only Interlisp-D, are now said to run 'Xerox Lisp' which is a combination of Interlisp-D and Common Lisp.

This document is a part of the procedures describing how to run tests on the Xerox Lisp Environment. The following is a list of the tests that must be run by hand.
Halasz, Frank G.; Moran, Thomas P.; Trigg, Randall H.

NoteCards is an extensible environment designed to help people formulate, structure, compare, and manage ideas. NoteCards provides the user with a “semantic network” of electronic notecards interconnected by typed links. The system provides tools to organize, manage, and display the structure of the network, as well as a set of methods and protocols for creating programs to manipulate the information in the network. NoteCards is currently being used by more than 50 people engaged in idea processing tasks ranging from writing research papers through designing parts for photocopiers. In this paper we briefly describe NoteCards and the conceptualization of idea processing tasks that underlies its design. We then describe the NoteCards user community and several prototypical NoteCards applications. Finally, we discuss what we have learned about the system's strengths and weaknesses from our observations of the NoteCards user community.
Gladwin, Lee A.

This report is for tests written and executed up to February 28, 1987 on the Basics>Full.Sysout generated 21-Jan-87. The following tests are for the integration of the new error system into the Interlisp environment. The test plan for this report is {Erinyes}Lisp>Lyric>Plans>SEdit.NoteFile.
Dixon, Mike

TKDorado makes the full range of TEditKey commands available from the Dorado keyboard.
Stone, Jeffrey
DeMichiel, Linda G.; Gabriel, Richard P.

The Common Lisp Object System is an object-oriented system that is based on the concepts of generic functions, multiple inheritance, and method combination. All objects in the Object System are instances of classes that form an extension to the Common Lisp type system. The Common Lisp Object System is based on a meta-object protocol that renders it possible to alter the fundamental structure of the Object System itself. The Common Lisp Object System has been proposed as a standard for ANSI Common Lisp and has been tentatively endorsed by X3J13.
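A small example in standard CLOS conveys the generic functions and multiple inheritance the abstract mentions; the class and function names are invented:

;;; Multiple inheritance: DOCUMENT mixes in two superclasses and inherits
;;; behavior through the class precedence list.  Invented example.
(defclass persistent-mixin () ())
(defclass displayable-mixin () ())

(defclass document (persistent-mixin displayable-mixin)
  ((title :initarg :title :accessor title)))

(defgeneric save (obj))

(defmethod save ((obj persistent-mixin))
  (format t "saving ~a~%" obj))

;; DOCUMENT inherits SAVE via PERSISTENT-MIXIN:
;; (save (make-instance 'document :title "CLOS spec"))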
Shanor, Gordy G.

The Dipmeter Advisor is a knowledge-based system, linked to a computer workstation, designed to aid in the interpretation of dipmeter results through interaction between the interpreter and the "expert" system.
The system utilizes dipmeter results, other wireline log data, computer processed results such as LITHO*, and user-input local geological knowledge as the framework for the interpretation. A work session proceeds through a number of phases, which leads to first a structural, then a stratigraphic interpretation of the well data.
Conclusions made by the Dipmeter Advisor can be accepted, modified, or rejected by the interpreter at any stage of the work session. The user may also make his own conclusions and comments, which are stored as part of the final interpretation and become part of an updated knowledge-base for input to further field studies.
Myers, J. D.

During my tenure as Chairman of the Department of Medicine at the University of Pittsburgh, 1955 to 1970, two points became clear in regard to diagnosis in internal medicine. The first was that the knowledge base in that field had become vastly too large for any single person to encompass it. The second point was that the busy practitioner, even though he knew the items of information pertinent to his patient's correct diagnosis, often did not consider the right answer, particularly if the diagnosis was an unusual disease.
I resigned the position of Chairman in 1970 intending to resume my position as Professor of Medicine. However, the University saw fit to offer me the appointment as University Professor (Medicine). The University of Pittsburgh follows the practice of Harvard University, established by President James Bryant Conant in the late 1930s, in which a University Professor is a professor at large and reports only to the president of the university. He has no department, no school and is not under administrative supervision by a dean or vice-president. Thus the position allows maximal academic freedom. In this new position I felt strongly that I should conduct worthwhile research. It was almost fifteen years since I had worked in my chosen field of clinical investigation, namely splanchnic blood flow and metabolism, and I felt that research in that area had passed me by. Remembering the two points mentioned earlier — the excessive knowledge base of internal medicine and the problem of considering the correct diagnosis — I asked myself what could be done to correct these problems. It seemed that the computer with its huge memory could correct the first and I wondered if it could not help as well with the second.
At that point I knew no more about computers than the average layman so I sought advice. Dr. Gerhard Werner, our Chairman of Pharmacology, was working with computers in an attempt to map all of the neurological centers of the human brain stem with particular reference to their interconnections and functions. He was particularly concerned about the actions of pharmacological agents on this complex system. Working with him on this problem was Dr. Harry Pople, a computer scientist with special interest in “artificial intelligence”. The problem chosen was so complex and difficult that Werner and Pople were making little progress.
Gerhard listened patiently to my ideas and promptly stated that he thought the projects were feasible utilizing the computer. In regard to the diagnostic component of my ambition he strongly advised that “artificial intelligence” be used. Pople was brought into the discussion and was greatly interested, I believe because of the feasibility of the project and the recognition of its practical application to the practice of medicine.
The upshot was that Pople joined me in my project and Werner and Pople abandoned the work on the brain stem. Pople knew nothing about medicine and I knew nothing about computer science. Thus the first step in our collaboration was my analysis for Pople of the diagnostic process. I chose a goodly number of actual cases from clinical pathological conferences (CPCs) because they contained ample clinical data and because the correct diagnoses were known. At each small step of the way through the diagnostic process I was required to explain what the clinical information meant in context and my reasons for considering certain diagnoses. This provided Pople with insight into the diagnostic process. After analyzing dozens of such cases I felt as though I had undergone a sort of “psychoanalysis”. From this experience Pople wrote the first computer diagnostic programs seeking to emulate my diagnostic process. This has led certain “wags” to nickname our project “Jack in the box”. For this initial attempt Pople used the LISP computer language. We were granted access to the PROPHET PDP-10, a time-sharing mainframe maintained in Boston by the National Institutes of Health (NIH) but devoted particularly to pharmacological research. Thus we were interlopers.
The first name we applied to our project was DIALOG, for diagnostic logic, but this had to be dropped because the name was in conflict with a computer program already on the market and copyrighted. The next name chosen was INTERNIST for obvious reason. However, the American Society for Internal Medicine publishes a journal entitled “The Internist” and they objected to our use of INTERNIST although there seems to be little relationship or conflict between a printed journal and a computer software program. Rather than fight the issue we simply added the Roman numeral one to our title which then became INTERNIST-I, which continues to this day.
Pople's initial effort was unsuccessful, however. He had diligently incorporated details regarding anatomy and much basic pathophysiology, I believe because in my initial CPC analyses I had brought such items of information into consideration so that Pople could understand how I got from A to B, etc. The diagnostician in internal medicine knows, of course, much anatomy and pathophysiology, but these are brought into consideration in only a minority of diagnostic problems. He knows, for example, that the liver is in the right upper quadrant and just beneath the right leaf of the diaphragm. In most diagnostic instances this information is “subconscious”.
Our first computer diagnostic program included too many such details and as a result was very slow and frequently got into analytical “loops” from which it could not extricate itself. We decided that we had to simplify the program but by that juncture much of 1971 had passed on.
The new program was INTERNIST-I and even today most of the basic structure devised in 1972 remains intact. INTERNIST-I is written in INTERLISP and has operated on the PDP-10 and the DEC 2060. It has also been adapted to the VAX 780. Certain younger people have contributed significantly to the program, particularly Dr. Zachary Moraitis and Dr. Randolph Miller. The latter interrupted his regular medical school education to spend the year 1974-75 as a fellow in our laboratory and since finishing his formal medical education in 1979 has been active as a full time faculty member of the team. Several Ph.D. candidates in computer science have also made significant contributions as have dozens of medical students during electives on the project.
INTERNIST-I is really quite a simple system as far as its operating system or inference engine is concerned. Three basic numbers are involved in the ranking of elicited disease hypotheses. The first of these is the importance (IMPORT) of each of the more than 4,100 manifestations of disease contained in the knowledge base. IMPORTs are a global representation of the clinical importance of a given finding, graded from 1 to 5, the latter being maximal, focusing on how necessary it is to explain the manifestation regardless of the final diagnosis. Thus massive splenomegaly has an IMPORT of 5 whereas anorexia has an IMPORT of 1. Mathematical weights are assigned to IMPORT numbers on a non-linear scale.
The second basic number is the evoking strength (EVOKS), ranging from 0 to 5. The number answers the question: given a particular manifestation of disease, how strongly does one consider disease A versus all other diagnostic possibilities in a clinical situation? A zero indicates that a particular clinical manifestation is non-specific, i.e. so widely spread among diseases that the above question cannot be answered usefully. Again, anorexia is a good example of a non-specific manifestation. The EVOKS number 5, on the other hand, indicates that a manifestation is essentially pathognomonic for a particular disease.
The third basic number is the frequency (FREQ), which answers the question: given a particular disease, what is the frequency or incidence of occurrence of a particular clinical finding? FREQ numbers range from 1 to 5, one indicating that the finding is rare or unusual in the disease and 5 indicating that the finding is present in essentially all instances of the disease.
Each diagnosis which is evoked is ranked mathematically on the basis of support for it, both positive and negative. Like the IMPORT number, the values for EVOKS and FREQ numbers increase in a non-linear fashion. The establishment or conclusion of a diagnosis is not based on any absolute score, as in Bayesian systems, but on how much better the support for diagnosis A is as compared to its nearest competitor. This difference is anchored to the value of an EVOKS of 5, a pathognomonic finding. When the list of evoked diagnoses is ranked mathematically on the basis of EVOKS, FREQ and IMPORT, the list is partitioned based upon the similarity of support for individual diagnoses. Thus a heart disease is compared with other heart diseases and not brain diseases, since the patient may have a heart disorder and a brain disease concomitantly. Thus apples are compared with apples and not oranges.
When a diagnosis is concluded, the computer consults a list of interrelationships among diseases (LINKS) and bonuses are awarded, again in a non-linear fashion for numbers ranging from 1 to 5 — 1 indicating a weak interrelationship and 5 a universal interrelationship. Thus multiple interrelated diagnoses are preferred over independent ones provided the support for the second and other diagnoses is adequate. Good clinicians use this same rule of thumb. LINKS are of various types: PCED is used when disease A precedes disease B, e.g. acute rheumatic fever precedes early rheumatic valvular disease; PDIS - disease A predisposes to disease B, e.g. AIDS predisposes to pneumocystis pneumonia; CAUS - disease A causes disease B, e.g. thrombophlebitis of the lower extremities may cause pulmonary embolism; and COIN - there is a statistical interrelationship between disease A and disease B but scientific medical information is not explicit on the relationship, e.g. Hashimoto's thyroiditis coincides with pernicious anemia, both so called autoimmune diseases.
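For readers unfamiliar with this style of scoring, the toy Common Lisp sketch below mimics only the general shape of the computation: explained findings add EVOKS- and FREQ-derived credit, unexplained findings subtract IMPORT-derived penalty. The weight tables are invented, and INTERNIST-I's actual weights, partitioning, and LINK bonuses are far more elaborate.

;;; Toy scorer in the spirit of EVOKS/FREQ/IMPORT.  The non-linear weight
;;; tables below are invented placeholders.
(defparameter *evoks-weight*  #(1 4 10 20 40 80))  ; indexed by EVOKS 0-5
(defparameter *freq-weight*   #(0 1 4 7 15 30))    ; indexed by FREQ 1-5
(defparameter *import-weight* #(0 1 2 5 10 20))    ; indexed by IMPORT 1-5

(defun score-disease (profile findings)
  "PROFILE: list of (finding evoks freq) for one disease.
FINDINGS: list of (finding import) observed in the patient.
Credit findings the disease explains; penalize important unexplained ones."
  (let ((score 0))
    (dolist (f findings score)
      (let ((entry (assoc (first f) profile)))
        (if entry
            (incf score (+ (aref *evoks-weight* (second entry))
                           (aref *freq-weight* (third entry))))
            (decf score (aref *import-weight* (second f))))))))

;; (score-disease '((splenomegaly 3 4) (fever 1 5))
;;                '((splenomegaly 5) (anorexia 1)))
;; => 20 + 15 - 1 = 34 with the invented tables above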
The maximal number of correct diagnoses made in a single case analysis is, to my recollection, eleven. In working with INTERNIST-I during the remainder of the 1970s several important points about the system were learned or appreciated.
The first and foremost of these is the importance of a complete and accurate knowledge base. Omissions from a disease profile can be particularly troublesome. If a manifestation of disease is not listed on a disease profile the computer can only conclude that that manifestation does not occur in the disease, and if a patient demonstrates the particular manifestation it counts against the diagnosis. Fortunately, repeated exercise of the diagnostic system brings to attention many inadvertent omissions. It is important to establish the EVOKS and FREQ numbers as accurately as possible. Continual updating of the knowledge base, including newly described diseases and new information about diseases previously profiled, is critical. Dr. Edward Feigenbaum recognized the importance of the accuracy and completeness of knowledge bases as the prime requisite of expert systems of any sort. He emphasized this point in his keynote address to MEDINFO-86 (1).
Standardized, clear and explicit nomenclature is required in expressing disease names and particularly in naming the thousands of individual manifestations of disease. Such rigidity can make the use of INTERNIST-I difficult for the uninitiated user. Therefore, in QMR more latitude and guidance is provided the user. For example, the user of INTERNIST-I must enter ABDOMEN PAIN RIGHT UPPER QUADRANT exactly whereas in QMR the user may enter PAI ABD RUQ and the system recognizes the term as above.
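A toy matcher conveys the flavor of this looser input convention (an invented sketch, not QMR's algorithm): accept a term when every typed token is either a prefix of one of the term's words or the initials of a run of consecutive words, so PAI ABD RUQ finds ABDOMEN PAIN RIGHT UPPER QUADRANT.

;;; Invented abbreviation matcher, not QMR's actual lookup code.
(defun prefix-p (prefix word)
  (and (<= (length prefix) (length word))
       (string-equal prefix word :end2 (length prefix))))

(defun initialism-p (token words)
  "T if TOKEN spells the initials of some run of consecutive WORDS."
  (loop for tail on words
        thereis (and (>= (length tail) (length token))
                     (every #'char-equal token
                            (map 'string (lambda (w) (char w 0))
                                 (subseq tail 0 (length token)))))))

(defun matches-term-p (tokens term-words)
  "Every token must match TERM-WORDS as a word prefix or an initialism."
  (every (lambda (tok)
           (or (some (lambda (w) (prefix-p tok w)) term-words)
               (initialism-p tok term-words)))
         tokens))

;; (matches-term-p '("PAI" "ABD" "RUQ")
;;                 '("ABDOMEN" "PAIN" "RIGHT" "UPPER" "QUADRANT"))  => T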
The importance of “properties” attached to the great majority of clinical manifestations was solidly evident. Properties express such conditions that if A is true then B is automatically false (or true as the case may be). The properties also allow credit to be awarded for or against B as the case may be. Properties also provide order to the asking of questions in the interrogative mode. They also state prerequisites and unrequisites for various procedures. As examples, one generally does not perform a superficial lymph node biopsy unless lymph nodes are enlarged (prerequisite). Similarly, a percutaneous liver biopsy is inadvisable if the blood platelets are less than 50,000 (unrequisite).
It became clear quite early in the utilization of INTERNIST-I that systemic or multisystem diseases had an advantage versus localized disorders in diagnosis. This is because systemic diseases have very long and more inclusive manifestation lists. It became necessary, therefore, to subdivide systemic diseases into various components when appropriate. Systemic lupus erythematosus provides a good example. Lupus nephritis must be compared in our system with other renal diseases and such comparison is allowed by our partitioning algorithm. Likewise, cerebral lupus must be differentiated from other central nervous system disorders. Furthermore, either renal lupus or cerebral lupus can occur at times without significant clinical evidence of other systemic involvement. In order to reassemble the components of a systemic disease we devised the systemic LINK (SYST) which expresses the interrelationship of each subcomponent to the parent systemic disease.
It became apparent quite early that expert systems like INTERNIST do not deal well at all with the time axis of a disease, and this seems to be generally true of expert systems in “artificial intelligence”. Certain parameters dealing with time can be expressed by devising particular manifestations, e.g. a blood transfusion preceding the development of acute hepatitis B by 2 to 6 months. But time remains a problem which is yet to be solved satisfactorily, including in QMR.
It has been clearly apparent over the years that both the knowledge base and the diagnostic consultant programs of both INTERNIST-I and QMR have considerable educational value. The disease profiles, the list of diseases in which a given clinical manifestation occurs (ordered by EVOKS and FREQ), and the interconnections among diseases (LINKS) provide a quick and ready means of acquiring at least orienting clinical information. Such has proved useful not only to medical students and residents but to clinical practitioners as well. In the interrogative mode of the diagnostic systems the student will frequently ask “Why was that question asked?” An instructor can provide insight, or ready consultation of the knowledge base by the student will provide a simple semi-quantitative reason for the question.
Lastly, let the author state that working with INTERNIST-I and QMR over the years seems to have had real influence on his own diagnostic approaches and habits. Thus my original psycho-analysis when working with Pople has been reinforced.

The Xerox Common Lisp Implementation Notes cover several aspects of the Lyric release. In these notes you will find:
• An explanation of how Xerox Common Lisp extends the Common Lisp standard. For example, in Xerox Common Lisp the Common Lisp array-constructing function make-array has additional keyword arguments that enhance its functionality.
• An explanation of how several ambiguities in Steele's Common Lisp: the Language were resolved.
• A description of additional features that provide far more than extensions to Common Lisp.


The preliminary Lyric Release Notes provide reference material on the Xerox Lisp environment for the Lyric Beta release. You will find the following information in these notes:
• An overview of significant Xerox extensions to the Common Lisp language
• Discussion of how specific Common Lisp features have affected the Interlisp-D language and the Xerox Lisp environment.
• Notes reflecting the changes made to Interlisp-D, independent of Common Lisp, since the Koto release
• Known restrictions to the use of Xerox Lisp
Mears, Lyn Ann; Rees, Ted

This primer is the equivalent of a tourist's guide book. It shows you the "sights" but it leaves out a lot of detail. Once you are comfortable with the basic LOOPS programming concepts and procedures described here, you can use the LOOPS Reference Manual as it was intended and fully exploit the capabilities of LOOPS.
This primer was written with the beginner's viewpoint in mind. It addresses strategic considerations, introduces basic procedures and methods, and provides numerous examples and pictures. The material in each chapter is presented with step-by-step instructions.
1988

The 1108 User's Guide contains the information you need to begin using the 1108 Artificial Intelligence workstation.
Crowfoot, Norman

This paper describes a thesis project in which a visually-oriented design utility is constructed in Interlisp-D for the Xerox 1108 Artificial Intelligence Workstation. This utility aids in the design of Regular Expression Parsers by visually simulating the operation of a parser. A textual program, suitable for use in the construction of a compiler scanner or other similar processor, may be produced by the utility.
Swanson, Mark; Kessler, Robert; Lindstrom, Gary

An implementation of the Portable Standard Lisp (PSL) on the BBN Butterfly is described. Butterfly PSL is identical, syntactically and semantically, to implementations of PSL currently available on the VAX, Gould, and many 68000-based machines, except for the differences discussed in this paper. The differences include the addition of the future and touch constructs for explicit parallelism and an extension of the fluid binding mechanism to support the multiple environments required by concurrent tasks. As with all other PSL implementations, full compilation to machine code of the basic system and application source code is the normal mode, in contrast to the previous byte-code interpreter efforts. Also discussed are other required changes to the PSL system not visible in the syntax or semantics, e.g., compiler support for the future construct. Finally, the underlying hardware is described, and timings for basic operations and speedup results for two examples are given.
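The future/touch interface can be modeled in a few lines of portable Common Lisp. Note the caveat: this sketch forces the value lazily at the first touch, whereas Butterfly PSL evaluates the body concurrently on another processor; expensive-computation is a hypothetical placeholder.

;;; Interface model only: FUTURE captures a computation, TOUCH forces it.
;;; Real Butterfly PSL runs the body in parallel; this model is sequential.
(defstruct future thunk value done)

(defmacro future (form)
  "Return immediately with an unforced future for FORM."
  `(make-future :thunk (lambda () ,form)))

(defun touch (f)
  "Return F's value, forcing it on first use; pass non-futures through
unchanged (an implicit touch)."
  (if (future-p f)
      (progn
        (unless (future-done f)
          (setf (future-value f) (funcall (future-thunk f))
                (future-done f) t))
        (future-value f))
      f))

;; (let ((x (future (expensive-computation))))  ; returns at once
;;   ;; ... other work overlaps here in the real system ...
;;   (touch x))                                  ; obtains the value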
Shaw, Mildred L.

Discusses a distributed system for human–computer interaction based on a network of computers. The system aids group problem solving by enabling participants to share in a construct elicitation process based on repertory grid techniques that have applications in education, management, and expert systems development. In education, the learner is attempting to acquire a specific construct system for the subject matter; in management, people with different construct systems are attempting to work together toward common objectives; in expert systems development, the knowledge engineer is attempting to make overt and encode the relevant construction system of an expert. The participant construct system enables individuals to interact through networked personal computers to develop mutual understanding of a problem domain through the use of repertory grid techniques.
Koschmann, T.; Evens, Martha Walton

Object-oriented programming (OOP) is highly effective for problems involving hierarchical data categorization, leveraging inheritance and data encapsulation to promote structured implementation and maintainability. However, OOP environments often lack generalized facilities for deductive retrieval and pattern matching, which are crucial for knowledge-based applications. Conversely, logic programming languages like Prolog excel in these areas with built-in deductive retrieval through backtracking and pattern matching via unification, offering a natural representation of rule-based knowledge. Despite its strengths, Prolog’s declarative style is awkward for inherently procedural tasks, and it lacks a robust library for graphical interfaces and user interactions. Combining object-oriented and logic programming in a multiparadigm approach could harness the strengths of both paradigms, utilizing each for its optimal tasks. This integration seeks to preserve the advantages of both OOP and logic programming, addressing the challenge of effectively merging these paradigms in a unified application.
Masinter, Larry

This paper describes some of the activities of the "cleanup" sub-committee of the ANSI X3J13 group. It describes some fundamental assumptions of our work in this sub-committee, the process we use to consider changes, and a sampler of some of the changes we are considering.
Pitman, Kent M.
Bobrow, Daniel G.; DeMichiel, Linda G.; Gabriel, Richard P.; Keene, Sonya E.; Kiczales, Gregor; Moon, David A.

The Common Lisp Object System is an object-oriented extension to Common Lisp as defined in Common Lisp: The Language, by Guy L. Steele Jr. It is based on generic functions, multiple inheritance, declarative method combination, and a meta-object protocol. The first two chapters of this specification present a description of the standard Programmer Interface for the Common Lisp Object System. The first chapter contains a description of the concepts of the Common Lisp Object System, and the second contains a description of the functions and macros in the Common Lisp Object System Programmer Interface. The chapter "The Common Lisp Object System Meta-Object Protocol" describes how the Common Lisp Object System can be customized.

The fundamental objects of the Common Lisp Object System are classes, instances, generic functions, and methods. A class object determines the structure and behavior of a set of other objects, which are called its instances. Every Common Lisp object is an instance of a class. The class of an object determines the set of operations that can be performed on the object.

A generic function is a function whose behavior depends on the classes or identities of the arguments supplied to it. A generic function object contains a set of methods, a lambda-list, a method combination type, and other information. The methods define the class-specific behavior and operations of the generic function; a method is said to specialize a generic function. When invoked, a generic function executes a subset of its methods based on the classes of its arguments. A generic function can be used in the same ways that an ordinary function can be used in Common Lisp; in particular, a generic function can be used as an argument to funcall and apply and can be given a global or a local name.

A method is an object that contains a method function, a sequence of parameter specializers that specify when the given method is applicable, and a sequence of qualifiers that is used by the method combination facility to distinguish among methods. Each required formal parameter of each method has an associated parameter specializer, and the method will be invoked only on arguments that satisfy its parameter specializers. The method combination facility controls the selection of methods, the order in which they are run, and the values that are returned by the generic function. The Common Lisp Object System offers a default method combination type and provides a facility for declaring new types of method combination.
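For readers meeting these concepts for the first time, the following minimal sketch uses standard CLOS syntax; the class and function names are illustrative only. It shows a class, a generic function, a primary method that specializes it, and an :around method dispatched by the default (standard) method combination.

    ;; A class and a subclass with one slot.
    (defclass shape () ())
    (defclass circle (shape)
      ((radius :initarg :radius :reader radius)))

    ;; A generic function; its behavior lives in its methods.
    (defgeneric area (shape))

    ;; A primary method specializing AREA on CIRCLE.
    (defmethod area ((c circle))
      (* pi (radius c) (radius c)))

    ;; An :AROUND method; standard method combination runs it
    ;; around the primary method via CALL-NEXT-METHOD.
    (defmethod area :around ((s shape))
      (let ((result (call-next-method)))
        (format t "~&area computed: ~a~%" result)
        result))

    ;; Generic functions are ordinary functions:
    ;; (funcall #'area (make-instance 'circle :radius 2.0))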
Andrews, K.; Henry, R. R.; Yamamoto, W. K.

We have implemented an illustrated compiler for a simple block structured language. The compiler graphically displays its control and data structures, and so gives its viewers an intuitive understanding of compiler organization and operation. The illustrations were planned by hand and display information naturally and concisely.
Kohlsaat, Kat

The Action Request data base is the primary vehicle through which the state of Xerox Lisp, including outstanding problems, requested features, and the like, is tracked.
Foss, Carolyn L.

A drawback of browsing through nonlinear electronic documents is the accompanying sense of disorientation often reported by users. It is difficult to ascertain the layout of the document network, so people often lose their place or forget to follow up on tasks they meant to complete, while wondering if they are missing anything relevant to their interests. This paper critiques existing approaches to computerized support for browsing and describes four new types of browsers, which have been implemented in the Xerox NoteCards hypertext system. These browsers support the process of pursuing and returning from digressions and facilitate the integration of diverse sets of materials that have been browsed.
Harmon, Paul; Maus, Rex; Morrissey, William

Paul Harmon's 1985 classic Expert Systems: Artificial Intelligence in Business (with David King) gave many professionals their first taste of AI technology. Now Harmon returns, along with management training specialists William Morrissey and Rex Maus, with this timely, in-depth look at the enormous number of expert system-building tools and commercial applications now available.
Expert Systems Tools and Applications gives you a complete overview of today's expert system market: where it is and where it's going, how to use available expert system-building tools to facilitate the development of expert system applications, plus everything you'll want to consider when purchasing the latest AI applications, from capabilities to costs to hardware requirements.
Expert Systems Tools and Applications features:
• Small, mid-size, and large rule-based expert system-building tools as well as inductive and hybrid tools, with summary comparisons to help you decide which tools best suit your business needs
• Step-by-step guidance through the development stage, from task analysis, knowledge engineering, and prototype development to field-testing, implementing, and maintaining the system
• A complete catalog of available commercial expert system applications, organized by business area, from sales, management, and operations to programming, research, and service industries
If you're an executive, middle manager, or computer professional who's ready to extend your company's expert system efforts, Expert Systems Tools and Applications offers the technical advice and information you need to make informed AI decisions for improving the performance of your company.
PAUL HARMON, internationally recognized journalist and lecturer, edits Expert Systems Strategies, a monthly newsletter. WILLIAM MORRISSEY, Senior Consultant and Partner in Harmon Associates, manages the company's Workshops Division. REX MAUS is a management consultant who specializes in computer-related documentation and training.

The following topics are covered in this issue:
• Make Script!
• Porting CL files to Lyric
• Preceding DEFxxx's with comments in SEdit
• Long copyright strings
• Advice replicated when loaded more than once
• LOGOUT resets the TTY parameters
• CL mapping functions

The following topics are covered in this issue:
• How to recover from internal garbage collection table overflow
• Koto-Lyric readtable inconsistency
• Problems loading Sketch on 1186
• Is SEdit opening windows randomly?
• Using DEFSTRUCT
• Hard disk error after deleting file
• "Hard disk error - page not found"
• How to change a window's title font

The following topics are covered in this issue:
• Specifying default font for Sketch
• Specifying default font for TEdit
• DATE, GDATE functions have bad SIDE-EFFECTS-DATA
• Compiling nlambda expressions
• Saving bitmaps
• TEDIT PAGEFORMAT specifications
• Example use of TableBrowser
• Reporting a problem on the Hotline

The Lisp Library Modules manual describes the library modules. These modules can be loaded into your sysout to provide additional functionality to your Lisp environment.

The Lisp Release Notes provide current information about the Lisp software development environment. You will find the following information in these Notes:
• An overview of significant extensions to the Common Lisp language.
• Descriptions of new features that enhance the integration and implementation of Common Lisp into the Lisp environment.
• A summary of changes made in the Library modules, in the Sketch and TEdit tools, and in the 1108 and 1186 User's Guides.
• Discussions of how specific Common Lisp features have affected the Interlisp-D language.
• Notes reflecting the changes made to Interlisp-D, independent of Common Lisp.
• Known restrictions.

Documentation of the programs and libraries in the LispUsers collection of user-contributed Lisp software.

Documentation of the programs and libraries in the LispUsers collection of user-contributed software.
Lai, Kum-Yew; Malone, Thomas W.; Yu, Keh-Chiang

Object Lens allows unsophisticated computer users to create their own cooperative work applications using a set of simple, but powerful, building blocks. By defining and modifying templates for various semistructured objects, users can represent information about people, tasks, products, messages, and many other kinds of information in a form that can be processed intelligently by both people and their computers. By collecting these objects in customizable folders, users can create their own displays which summarize selected information from the objects in table or tree formats. Finally, by creating semiautonomous agents, users can specify rules for automatically processing this information in different ways at different times. The combination of these primitives provides a single consistent interface that integrates facilities for object-oriented databases, hypertext, electronic messaging, and rule-based intelligent agents. To illustrate the power of this combined approach, we describe several simple examples of applications (such as task tracking, intelligent message routing, and database retrieval) that we have developed in this framework.
Lai, Kum-Yew; Malone, Thomas W.; Yu, Keh-Chiang

Object Lens allows unsophisticated computer users to create their own cooperative work applications using a set of simple, but powerful, building blocks. By defining and modifying templates for various semistructured objects, users can represent information about people, tasks, products, messages, and many other kinds of information in a form that can be processed intelligently by both people and their computers. By collecting these objects in customizable folders, users can create their own displays which summarize selected information from the objects in table or tree formats. Finally, by creating semiautonomous agents, users can specify rules for automatically processing this information in different ways at different times. The combination of these primitives provides a single consistent interface that integrates facilities for object-oriented databases, hypertext, electronic messaging, and rule-based intelligent agents. To illustrate the power of this combined approach, we describe several simple examples of applications (such as task tracking, intelligent message routing, and database retrieval) that we have developed in this framework.

The purpose of this manual is to give you a comprehensive guide to using ROOMS at both the menu level and programmatically. This manual includes a Programmer's Guide that lets you take the cover off a little (well, tilt it up anyway).
Halasz, Frank G.

NoteCards, developed by a team at Xerox PARC, was designed to support the task of transforming a chaotic collection of unrelated thoughts into an integrated, orderly interpretation of ideas and their interconnections. This article presents NoteCards as a foil against which to explore some of the major limitations of the current generation of hypermedia systems, and characterizes the issues that must be addressed in designing the next generation systems.
Moran, Thomas D.; Russell, Daniel M.; Jordan, Daniel; Orr, Julian; Rypa, Marikka

In 1984, the Army Research Institute initiated a three-year project to study, design, and develop instructional environments to enhance the learning of procedural troubleshooting skills for the maintenance of complex machines. The goal of the project was to identify how artificial intelligence technologies could be used to create better technical proficiency instruction for maintenance personnel. Initially, the effort focused on the role of conceptual and procedural knowledge in troubleshooting, and on the ways that procedural skills can be learned as meaningful structures. Various types of computational tools were used to extract, analyze, and represent the structure of diagnostic procedures, expertise in troubleshooting in the field, the nature of mental models of complex machines, and the role of such models in causal reasoning. Simulation and qualitative modelling studies were conducted to determine the role of mental modelling in instruction, and to investigate how simulation of machine behavior and repair strategies can provide maintenance personnel with a means for understanding machine components, functions, and troubleshooting procedures. The investigation of instructional strategies for teaching diagnostic skills led to the development of an interactive design and development system, the Instructional Design Environment (IDE). IDE is a prototype interactive design and development system that assists instructional designers in the process of creating complex instruction. Put another way, it is essentially a knowledge structuring system, in which the knowledge is course content, structure, and instructional method. It accepts knowledge describing course goals as input, and then assists the designer in creating his output: courses. The system thus provides a way of articulating the design and development process, by helping to create a structure which explains why curriculum design and delivery decisions were made. IDE can aid in creating course designs, structuring course content, and creating instructional sequences for standard as well as adaptive delivery.
1989
Queinnec, Christian

The editor of Lisp Pointers has been asking me for a long time to write down my view of Lisp. I was even given permission to flame. This paper is the result and, naturally, is entirely my own opinion.
vanMelle, Bill

Email with a short note on the Interlisp-D function \COUNTREALPAGES.
Gendron, Robert F.; Stacy, E. Webb, Jr.; Ionescu, Tudor V.

A workstation that employs methods to construct computer programs through use of visual graphical representations. Computer programs are illustrated as visual road maps of the intended sequence of actions. Each operational entity in a program graph on the screen is represented as an elemental "atomic" unit, called a "Softron". The Softron is a multidimensional, graphical "atom" of programming information which has four modes of operation, termed "layers". The four layers are Normal, where the basic functionality of the application resides; Initialization/Reset, responsible both for the startup values of important variables and for their values at strategic checkpoints; Error, which handles conditions outside design limits; and Input/Output, which performs human input/output and other I/O tasks. Softrons reside in very general form in the workstation's library, and are optimized by the process of specialization. Softrons may be grouped to form new Softrons by a process called Logical Zoom (TM). Logically Zoomed Softrons may combine with other Softrons to form a computer program of arbitrary complexity.
Reboh, Rene; Risch, Tore J. M.

An expert system shell efficiently computes functions of variables in response to numeric or symbolic data values input by a user. The system comprises a Knowledge Base in the form of a network of functions, an Inference Engine for efficiently updating values in the knowledge base in response to changes in entered data, and a Forms System that manages interaction with the user. A knowledge engineer creates the network of functions, and defines the user screens and the connection between screen objects and variables in the function network. The system allows many different types of variables, including numeric and symbolic types. The system associates a probability distribution with every variable, and computes the probability distributions for the dependent variables from the probability distributions for the independent variables. A variable can store multiple values as tables of probability distributions keyed by one or more key variables. When a user action changes the probability distributions for any variable, the system automatically maintains the specified functional relationships among all the related variables.
Jain, Rekha

Expert systems are computer programmes that can reproduce the behaviour of human experts in specific problem domains. In many places, development of expert systems is the major focus of fifth generation software projects. Accordingly, enormous amounts of resources are being spent on work in this field. Expert systems have enjoyed considerable success in many scientific and technological applications but their application in the field of management is relatively recent.
In this article, Rekha Jain presents an overview of expert systems and addresses several issues that will be of interest to managers who are likely to consider using expert systems in their organizations.
Beckerle, Michael; Beiser, Paul; Duggan, Jerry; Kerns, Robert; Layer, Kevin; Linden, Thom; Masinter, Larry; Unietis, David

This is a proposal to the X3J13 committee for both extending and modifying the Common LISP language definition to provide a standard basis for Common LISP support of the variety of characters used to represent the languages of the international community.
This proposal was created by the Character Subcommittee of X3J13. We would like to acknowledge discussions with T. Yuasa and other members of the JIS Technical Working Group, comments from members of X3J13, and the proposals [Ida87], [Linden87], [Kerns87], and [Kurokawa88] for providing the motivation and direction for these extensions. As all these documents and discussions were created expressly for LISP standardization usage, we have borrowed freely from their ideas as well as the texts themselves.
Snow, Will

Email message reporting on an investigation into where the functions HILOC and LOWLOC are called in Interlisp-D and what they are used for.
Martz, Philip R.; Heffron, Matt; Kalbag, Suresh; Dyckes, Douglas F.; Voelker, Paul

Peptide synthesis is an important research tool. However, successful syntheses require considerable effort from the scientist. We have produced an expert system, the PepPro™ Peptide Synthesis Expert System, that helps the scientist improve peptide syntheses. To use PepPro the scientist enters the peptide to be synthesized. PepPro then applies its synthesis rules to analyze the peptide, to predict coupling problems, and to recommend solutions. PepPro produces a synthesis report that summarizes the analysis and recommendations. The program includes a capability that allows the scientist to write new synthesis rules and add them to the PepPro knowledge base. PepPro was developed on Xerox 11xx series workstations using Beckman's proprietary development environment (MP). We then compiled PepPro to run on the IBM PC. PepPro has limitations that derive from unpredictable events during a synthesis. Despite the limitations, PepPro provides several important benefits. The major one is that it makes peptide syntheses easier, less time-consuming, and more efficient.
1990
Greenfeld, Norton R.

Apparatus in a computer system provides source code analysis. The apparatus includes an analysis member which extracts programming semantics information from an input source code. The analysis member operates according to the programming language of the source code as defined by a grammar mechanism. The analysis member employs a database interface which enables the extracted programming semantics information to be placed in a user desired database for subsequent recall by a desired query system. The database and query system may be pre-existing elements which are supported by a digital processor independently of the analysis member. A relational database with an SQL query system may be used.
Saeed, Faisel

Expert Database Systems (EDS) has emerged in recent years as a powerful combination of disciplines like Artificial Intelligence, Database Management, Logic Programming, and Fuzzy System Theory. This new field incorporates the benefits of both data-based and knowledge-based systems and has generated great interest among the research, industrial and government communities. An International Workshop on EDS was held in South Carolina in October, 1984, which became the initiative for starting a series of International Conferences on EDS. The first conference on EDS was held in April, 1986 in South Carolina. The second conference, EDS'88, was held in Virginia on April 25-27, 1988. This conference was attended by 350 participants from Australia, Belgium, Brazil, Canada, Denmark, Egypt, England, Federal Republic of Germany, France, Ireland, Italy, Japan, Mexico, Netherlands, Singapore, the Soviet Union, Switzerland, and USA.
Steele, Guy L.
Lipkis, Thomas A.; Mark, William S.; Pirtle, Melvin W.

A computer-based tool, in the form of a computer system and method, for designing, constructing and interacting with any system containing or comprising concurrent asynchronous processes, such as a factory operation. In the system according to the invention a variety of development and execution tools are supported. The invention features a highly visual user presentation of a control system, including structure, specification, and operation, offering a user an interactive capability for rapid design, modification, and exploration of the operating characteristics of a control system comprising asynchronous processes. The invention captures a representation of the system (RS) that is equivalent to the actual system (AS)--rather than a simulation of the actual system. This allows the invention to perform tests and modification on RS instead of AS, yet get accurate results. RS and AS are equivalent because AS is generated directly from RS by an automated process. Effectively, pressing a button in the RS environment can "create" the AS version or any selected portion of it, by "downloading" a translation of the RS version that can be executed by a programmable processor in the AS environment. Information can flow both ways between AS and RS. That AS and RS can interact is important. This allows RS to "take on" the "state" of AS whenever desired, through an "uploading" procedure, thereby reflecting accurately the condition of AS at a specific point in time.
Nielsen, Jakob

Jakob Nielsen's trip report from the ACM Hypertext'89 conference. Includes a summary of Meyrowitz's discussion of open, integrating hypertext and the extent to which the Memex vision has been realized so far.
LISP Style & Design
Miller, Molly M.; Benson, Eric

These release notes provide warnings and information important to the successful running of Release 1.15-S of Medley for the Sun Workstation. These sections are followed by listings of known and fixed bugs in Release 1.15 of Medley.

These release notes provide warnings and information important to the successful running of Release 1.2-S of Medley for the Sun Workstation. These sections are followed by listings of known and fixed bugs in Release 1.2 of Medley. A section containing changes for specifying the size of UNIX process space follows the first (warning) section.
Jellinek, Herbert D.; Card, Stuart K.

Claims of increased pointing speed by users and manufacturers of variable-gain mice (“powermice”) have become rife. Yet, there have been no demonstrations of this claim, and theoretical considerations suggest it may not even be true. In this paper, the claim is tested. A search of the design space of powermice failed to find a design point that improved performance compared to a standard mouse. No setting for the gain for a constant-gain mouse was found that improved performance. No threshold setting for a variable gain mouse was found that improved performance. In fact, even gain and threshold combinations favored by powermouse enthusiasts failed to improve performance. It is suggested that the real source of enthusiasm for powermice is that users are willing to accept reduced pointing speed in return for a smaller desk footprint.
1991
Cunningham, Robert E.; Bonar, Jeffery G.; Corbett, John D.

A system and method for interactive design of user manipulable graphic elements. A computer has display and stored tasks wherein the appearance of graphic elements and methods for their manipulation are defined. Each graphic element is defined by at least one figure specification, one mask specification and one map specification. An interactive display editor program defines specifications of said graphic elements. An interactive program editor program defines programming data and methods associated with said graphic elements. A display program uses the figure, map and mask specifications for assembling graphic elements upon the display and enabling user manipulation of said graphic elements.

This manual describes the Users’ Modules for Xerox’s Lisp Object-Oriented Programming System, LOOPS (TM), to developers.
Gabriel, Richard P.

Lisp has done quite well over the last ten years: becoming nearly standardized, forming the basis of a commercial sector, achieving excellent performance, having good environments, able to deliver applications. Yet the Lisp community has failed to do as well as it could have. In this paper I look at the successes, the failures, and what to do next.

Encompassing release contents, instructions for installing Release 2.0, and information on using it. This Guide has been completely reorganized, and information about using the new installation script has been added.
Balban, Morton S.; Lan, Ming-Shong; Panos, Rodney M.

An apparatus and a method are disclosed for composing an imposition in terms of an arrangement of printing plates on selected of the image positions on selected units of a printing press to print a given edition, by first assigning each section of this edition to one of the press areas. Thereafter, each printing unit is examined to determine a utilization value thereof in terms of the placement of the printing plates on the image positions and the relative number of image positions to which printing plates are assigned with respect to the total number of image positions. Thereafter, a list of the image positions for each of the sections and its area, is constructed by examining one printing unit at a time in an order according to the placement of that printing unit in the array and examining its utilization value to determine whether or not to include a particular image position of that printing unit in the list. As a result, a list of the image positions is constructed in a sequence corresponding to numerical order of the pages in the section under consideration. Finally, that list of the image positions and the corresponding section and page numbers is displayed in a suitable fashion to inform a user of how to place the printing plates in the desired arrangement onto the printing units of the press to print this given edition.
Newman, William; Eldridge, Margery; Lamming, Michael

This paper presents one part of a broad research project entitled 'Activity-Based Information Retrieval' (AIR) which is being carried out at EuroPARC. The basic hypothesis of this project is that if contextual data about human activities can be automatically captured and later presented as recognisable descriptions of past episodes, then human memory of those past episodes can be improved. This paper describes an application called Pepys, designed to yield descriptions of episodes based on automatically collected location data. The program pays particular attention to meetings and other episodes involving two or more people. The episodes are presented to the user as a diary generated at the end of each day and distributed by electronic mail. The paper also discusses the methods used to assess the accuracy of the descriptions generated by the recogniser.
Allard, James R.; Hawkinson, Lowell B.
Stoyan, Herbert

In this chapter, some of the events of LISP development are recorded. Step by step, the implementers became independent of McCarthy. In 1962 the internal drive was stronger than McCarthy's proposals. The results are somewhat ambiguous.
Henderson, D. Austin; Card, Stuart K.; Maxwell, John T.

Workspaces provided by an object-based user interface appear to share windows and other display objects. Each workspace's data structure includes, for each window in that workspace, a linking data structure called a placement which links to the display system object which provides that window, which may be a display system object in a preexisting window system. The placement also contains display characteristics of the window when displayed in that workspace, such as position and size. Therefore, a display system object can be linked to several workspaces by a placement in each of the workspaces' data structures, and the window it provides to each of those workspaces can have unique display characteristics, yet appear to the user to be the same window or versions of the same window. As a result, the workspaces appear to be sharing a window. Workspaces can also appear to share a window if each workspace's data structure includes data linking to another workspace with a placement to the shared window. The user can invoke a switch between workspaces by selecting a display object called a door, and a back door to the previous workspace is created automatically so that the user is not trapped in a workspace. A display system object providing a window to a workspace being left remains active so that when that workspace is reentered, the window will have the same contents as when it disappeared. Also, the placements of a workspace are updated so that when the workspace is reentered its windows are organized the same as when the user left that workspace. The user can enter an overview display which shows a representation of each workspace and the windows it contains so that the user can navigate to any workspace from the overview.
1992
ACM

ACM Award Recipient page for Larry Masinter that shows the Award content.
Lee, Alison

Figure 1 from a paper by Lee ("Investigations into history tools for user support"): INTERLISP-D's HISTMENU displays a history of the commands issued to the Executive in the form of a menu. The user may select items from the menu (the window entitled History Window).
Denber, Michel J.

In a graphic display system, display control software is modified to impart motion to a pop-up menu to attract the attention of the user. The menu becomes animated when a control and comparison circuit confirms that a mouse driven cursor on the screen is moving away from the pop-up menu indicating that the operator is unaware of the menu's presence. The menu moves or "tags-along" after the cursor until the user takes notice and makes the appropriate selection.
Kurlander, David; Feiner, Steven

We describe enhancements to graphical search and replace that allow users to extend the capabilities of a graphical editor. Interactive constraint-based search and replace can search for objects that obey user-specified sets of constraints and automatically apply other constraints to modify these objects. We show how an interactive tool that employs this technique makes it possible for users to define sets of constraints graphically that modify existing illustrations or control the creation of new illustrations. The interface uses the same visual language as the editor and allows users to understand and create powerful rules without conventional programming. Rules can be saved and retrieved for use alone or in combination. Examples, generated with a working implementation, demonstrate applications to drawing beautification and transformation.
Lee, Alison

History tools allow users to access past interactions kept in a history and to incorporate them into the context of their current operations. Such tools appear in various forms in many of today’s computing systems, but despite their prevalence, they have received little attention as user support tools. This dissertation investigates, through a series of studies, history–based, user support tools. The studies focus on three primary factors influencing the utility of history–based, user support tools: design of history tools, support of a behavioural phenomenon in user interactions, and mental and physical effort associated with using history tools.
Design of history tools strongly influences a user’s perception of their utility. In surveying a wide collection of history tools, we identify seven independent uses of the information with no single history tool supporting all seven uses. Based on cognitive and behavioural considerations associated with the seven history uses, we propose several kinds of history information and history functions that need to be supported in new designs of history tools integrating all seven uses of history. An exploratory study of the UNIX environment reveals that user interactions exhibit a behavioural phenomenon, nominally referred to as locality. This is the phenomenon where users repeatedly reference a small group of commands during extended intervals of their session. We apply two concepts from computer memory research (i.e., working sets and locality) to examine this behavioural artifact and to propose a strategy for predicting repetitive opportunities and candidates. Our studies reveal that users exhibit locality in only 31% of their sessions whereas users repeat individual commands in 75% of their sessions. We also found that history tool use occurs primarily in locality periods. Thus, history tools which localize their prediction opportunities to locality periods can predict effectively the reuse candidates.
Finally, the effort, mental and physical, associated with using a history tool to expedite repetitive commands can influence a user’s decision to use history tools. We analyze the human information–processing operations involved in the task of specifying a recurrent command for a given approach and design (assuming that the command is fully generated and resides in the user’s working memory and that users exhibit expert, error–free task performance behaviour). We find that in most of the proposed history designs, users expend less physical effort at the expense of more mental effort. The increased mental effort can be alleviated by providing history tools which require simpler mental operations (e.g., working memory retrievals and perceptual processing). Also, we find that the typing approach requires less mental effort at the expense of more physical effort. Finally, despite the overhead associated with switching to the use of history tools, users (with a typing speed of 55 wpm or less) do expend less overall effort to specify recurrent commands (which have been generated and appear in working memory) using history tools compared to typing from scratch.
The results of the three sets of studies provide insights into current history tools and point favourably towards the use of history tools for user support, especially history tools that support the reuse of previous commands, but additional research into history tool designs and usability factors is needed. Our studies demonstrate the importance of considering various psychological and behavioural factors and the importance of different grains of analysis.

We developed this primer to provide a starting point for new Medley users, to enhance your excitement and challenge you with the potential before you.
Smith, Reid G.; Schoen, Eric J.

A declarative object-oriented approach to menu construction provides a mechanism for specifying the behavior, appearance and function of menus as part of an interactive user interface. Menus are constructed from interchangeable object building blocks to obtain the characteristics wanted without the need to write new code, while maintaining a coherent interface standard. The approach is implemented by dissecting interface menu behavior into modularized objects specifying orthogonal components of desirable menu behaviors. Once primary characteristics for orthogonal dimensions of menu behavior are identified, individual objects are constructed to provide specific alternatives for the behavior within the definitions of each dimension. Finally, specific objects from each dimension are combined to construct a menu having the desired selections of menu behaviors.
Goldman, Neil; Narayanaswamy, K.

The process of developing and evolving complex software systems is intrinsically exploratory in nature. Some prototyping activity is therefore inevitable in every stage of that process. Our program development and evolution methodology is predicated upon this observation. In this methodology, a prototype software system is developed as an approximation to an envisioned target system by compromising along one or more of the following dimensions: system performance, system functionality, or user interface. However, the prototype is not the end-product of the process. Instead, we support iterative evolution of the prototype towards the envisioned system by gradually dealing with the three general areas of compromise. This paper describes the methodology of using this alternative lifecycle; to wit, the programming language concepts and related implementation technology that support practice of the suggested methodology. We summarize the lessons we have learned in building and using this technology over the last several years.
Prakash, Atul; Knister, Michael J.

The ability to undo operations is a standard feature in most single-user interactive applications. However, most current collaborative applications that allow several users to work simultaneously on a shared document lack undo capabilities; those which provide undo generally provide only a global undo, in which the last change made by anyone to a document is undone, rather than allowing users to individually reverse their own changes. In this paper, we propose a general framework for undoing actions in collaborative systems. The framework takes into account the possibility of conflicts between different users' actions that may prevent a normal undo. The framework also allows selection of actions to undo based on who performed them, where they occurred, or any other appropriate criterion.
Rao, Ramana B.

A workspace data structure, such as a window hierarchy or network, includes functional data units that include data relating to workspace functionality. These functional data units are associated with data units corresponding to the workspaces such that a functional data unit can be replaced by a functional data unit compatible with a different set of functions without modifying the structure of other data units. Each workspace data unit may have a replaceably associated functional data unit called an input contract relating to its input functions and another called an output contract relating to its output functions. A parent workspace's data unit and the data units of its children may together have a replaceably associated functional data unit, called a windowing contract, relating to the windowing relationship between the parent and the children. The data structure may also include an auxiliary data unit associated between the data units of the parent and children windows, and the windowing contract may be associated with the auxiliary data unit. The contracts can be accessed and replaced by a processor in a system that includes the data structure. The contracts can be instances of classes in an object-oriented programming language, and can be replaceably associated by pointers associated with the system objects. Alternatively, a contract can be replaceably associated through dynamic multiple inheritance, with the superclasses of each workspace class including one or more contract classes such that changing the class of an instance of a workspace class serves to replace the contract.
1993
Mancoridis, Spiros

A Software Development Environment (SDE) is a set of tools that, at the very least, supports coding and possibly other software development activities. Related to SDEs are meta-SDEs, which are classes of SDEs that must be configured or populated by tools before they can be useful. We will use the generic term environment to refer to both SDEs and meta-SDEs. This paper presents a multi-dimensional taxonomy of environments. The primary dimensions of our taxonomy are scale and genericity. Scale distinguishes environments that are suitable for small-scale programming from those that are suitable for large-scale software development. Genericity differentiates monolithic environments from highly configurable and extendible ones. Secondary taxonomy dimensions include tool integration, which identifies the degree of interoperability and data sharing between tools, and the historical dimension, which gives insight into past and present research trends in these environments.
Wiil, Uffe K.; Leggett, John J.

An approach to flexible hyperbase (hypertext database) support predicated on the notion of extensibility is presented. The extensible hypertext platform (Hyperform) implements basic hyperbase services that can be tailored to provide specialised hyperbase support. Hyperform is based on an internal computational engine that provides an object-oriented extension language which allows new data model objects and operations to be added at run-time. Hyperform has a number of built-in classes to provide basic hyperbase features such as concurrency control, notification control (events), access control, version control and search and query. Each of these classes can be specialised using multiple inheritance to form virtually any type of hyperbase support needed in next-generation hypertext systems. This approach greatly reduces the effort required to provide high-quality customized hyperbase support for distributed hypertext applications. Hyperform is implemented and operational in Unix environments. This paper describes the Hyperform approach, discusses its advantages and disadvantages, and gives examples of simulating the HAM and the Danish HyperBase in Hyperform. Hyperform is compared with related work from the HAM generation of hyperbase systems and the current status of the project is reviewed.
Kazman, Rick; Kominek, John
Boyd, Mickey R.; Whalley, David B.

This paper describes two related tools developed to support the isolation and analysis of optimization errors in the vpo optimizer. Both tools rely on vpo identifying sequences of changes, referred to as transformations, that result in semantically equivalent (and usually improved) code. One tool determines the first transformation that causes incorrect output of the execution of the compiled program. This tool not only automatically isolates the illegal transformation, but also identifies the location and instant the transformation is performed in vpo. To assist in the analysis of an optimization error, a graphical optimization viewer was also implemented that can display the state of the generated instructions before and after each transformation performed by vpo. Unique features of the optimization viewer include reverse viewing (or undoing) of transformations and the ability to stop at breakpoints associated with the generated instructions. Both tools are useful independently. Together these tools form a powerful environment for facilitating the retargeting of vpo to a new machine and supporting experimentation with new optimizations. In addition, the optimization viewer can be used as a teaching aid in compiler classes.

This manual describes all three parts of Medley. There are discussions of the language, about the pieces of the system that can be incorporated into your programs, and about the environment.
Medley Advertisement: "Clos for DOS"
Medley Advertisement: "For Rapid Prototypes!"
Medley Advertisement: "For Rapid Prototypes!"
Medley Advertisement: "For Rapid Prototypes!"

This Guide describes Medley release 2.01 for DOS: The release contents, instructions for installing the release, and information on using it.
Denber, Michel J.; Jankowski, Henry P.

A method and apparatus are shown for improving bit-image quality in video display terminals and xerographic processors. In one embodiment, each scan line of a source image is ANDed with the scan line above to remove half-bits and thin halftones. In other embodiments, entire blocks of data are processed by bit-block transfer operations, such as ANDing a copy of the source image with a copy of itself shifted by one bit. Also, a source image can be compared to a shifted copy of itself to locate diagonal lines in order to place gray pixels bordering these lines.
Petrus, Edwin S.

This paper describes an experience with Lisp as an extension language for a large electronics CAD environment and the role it plays in software design automation. This paper is not about extension languages in general; for an analysis of extension languages in CAD, see [HNS90] and [Bar89]. Cadence is a full-range supplier of software-based Electronics CAD tools.
Steele, Guy L.; Gabriel, Richard P.

Lisp is the world's greatest programming language—or so its proponents think. The structure of Lisp makes it easy to extend the language or even to implement entirely new dialects without starting from scratch. Overall, the evolution of Lisp has been guided more by institutional rivalry, one-upsmanship, and the glee born of technical cleverness that is characteristic of the “hacker culture” than by sober assessments of technical requirements. Nevertheless this process has eventually produced both an industrial-strength programming language, messy but powerful, and a technically pure dialect, small but powerful, that is suitable for use by programming-language theoreticians. We pick up where McCarthy's paper in the first HOPL conference left off. We trace the development chronologically from the era of the PDP-6, through the heyday of Interlisp and MacLisp, past the ascension and decline of special purpose Lisp machines, to the present era of standardization activities. We then examine the technical evolution of a few representative language features, including both some notable successes and some notable failures, that illuminate design issues that distinguish Lisp from other programming languages. We also discuss the use of Lisp as a laboratory for designing other programming languages. We conclude with some reflections on the forces that have driven the evolution of Lisp.
1994
Denning, Peter J.; Dargan, Pamela A.
Prakash, Atul; Knister, Michael J.

The ability to undo operations is a standard feature in most single-user interactive applications. We propose a general framework for implementing undo in collaborative systems. The framework allows users to reverse their own changes individually, taking into account the possibility of conflicts between different users' operations that may prevent an undo. The proposed framework has been incorporated into DistEdit, a toolkit for building group text editors. Based on our experience with DistEdit's undo facilities, we discuss several issues that need to be taken into account in using the framework, in order to ensure that a reasonable undo behavior is provided to users. We show that the framework is also applicable to single-user systems, since the operations to undo can be selected not just on the basis of who performed them, but by any appropriate criterion, such as the document region in which the operations occurred or the time interval in which the operations were carried out.
Berlage, Thomas

It is important to provide a recovery operation for applications with a graphical user interface. A restricted linear undo mechanism can conveniently be implemented using object-oriented techniques. Although linear undo provides an arbitrarily long history, it is not possible to undo isolated commands from the history without undoing all following commands. Various undo models have been proposed to overcome this limitation, but they all ignore the problem that in graphical user interfaces a previous user action might not have a sensible interpretation in another state. Selective undo introduced here can undo isolated commands by copying them into the current state “if that is meaningful.” Furthermore, the semantics of selective undo are argued to be more natural for the user, because the mechanism only looks at the command to undo and the current state and does not depend on the history in between. The user interface for selective undo can also be implemented generically. Such a generic implementation is able to provide a consistent recovery mechanism in arbitrary applications.
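The command-object style of recovery that Berlage describes can be sketched briefly. The Lisp below is a hedged illustration, not the paper's implementation; all names are hypothetical, and meaningful-p stands in for the paper's "if that is meaningful" test against the current state.

    ;; Each command knows how to apply and reverse itself.
    (defclass command ()
      ((applied-p :initform nil :accessor applied-p)))

    (defgeneric apply-command (command state))
    (defgeneric reverse-command (command state))
    (defgeneric meaningful-p (command state)
      (:documentation "Can COMMAND still be undone in STATE?"))

    ;; Selective undo consults only the command and the current
    ;; state, not the history in between.
    (defun selective-undo (command state)
      (when (and (applied-p command) (meaningful-p command state))
        (reverse-command command state)
        (setf (applied-p command) nil)
        t))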
Pedersen, Jan O.; Halvorsen, Per-Kristian; Cutting, Douglass R.; Tukey, John W.; Bier, Eric A.; Bobrow, Daniel G.

An information retrieval system and method are provided in which an operator inputs one or more query words which are used to determine a search key for searching through a corpus of documents, and which returns any matches between the search key and the corpus of documents as a phrase containing the word data matching the query word(s), a non-stop (content) word next adjacent to the matching word data, and all intervening stop-words between the matching word data and the next adjacent non-stop word. The operator, after reviewing one or more of the returned phrases can then use one or more of the next adjacent non-stop-words as new query words to reformulate the search key and perform a subsequent search through the document corpus. This process can be conducted iteratively, until the appropriate documents of interest are located. The additional non-stop-words from each phrase are preferably aligned with each other (e.g., by columnation) to ease viewing of the "new" content words.
Kaplan, Ronald M.; Maxwell, John T. III

A text-compression technique utilizes a plurality of word-number mappers ("WNMs") in a frequency-ordered hierarchical structure. The particular structure of the set of WNMs depends on the specific encoding regime, but can be summarized as follows. Each WNM in the set is characterized by an ordinal WNM number and a WNM size (maximum number of tokens) that is in general a non-decreasing function of the WNM number. A given token is assigned a number pair, the first being one of the WNM numbers, and the second being the token's position or number in that WNM. Typically, the most frequently occurring tokens are mapped with a smaller-numbered WNM. The set of WNMs is generated on a first pass through the database to be compressed. The database is parsed into tokens, and a rank-order list based on the frequency of occurrence is generated. This list is partitioned in a manner to define the set of WNMs. Actual compression of the data base occurs on a second pass, using the set of WNMs generated on the first pass. The database is parsed into tokens, and for each token, the set of WNMs is searched to find the token. Once the token is found, it is assigned the appropriate number pair and is encoded. This proceeds until the entire database has been compressed.
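The two-pass scheme lends itself to a short sketch. The Lisp below is an illustrative reconstruction, not the patented implementation: pass one ranks tokens by descending frequency, and pass two assigns each token a (WNM-number . index) pair, with the most frequent tokens landing in the smallest-numbered mapper. The mapper sizes are invented for the example.

    ;; Pass 1: rank tokens by descending frequency.
    (defun rank-tokens (tokens)
      (let ((counts (make-hash-table :test #'equal)))
        (dolist (tok tokens)
          (incf (gethash tok counts 0)))
        (sort (loop for tok being the hash-keys of counts
                    collect tok)
              #'> :key (lambda (tok) (gethash tok counts)))))

    ;; Pass 2: assign each token a (wnm-number . index) pair.
    ;; The last mapper is treated as unbounded in this sketch.
    (defun assign-codes (ranked &optional (wnm-sizes '(16 256 65536)))
      (let ((codes (make-hash-table :test #'equal))
            (wnm 0)
            (index 0))
        (dolist (tok ranked codes)
          (when (>= index (or (nth wnm wnm-sizes)
                              most-positive-fixnum))
            (incf wnm)
            (setf index 0))
          (setf (gethash tok codes) (cons wnm index))
          (incf index))))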
Newquist, H. P. (Harvey P.)

xv, 488 p. ; 24 cm; Includes index
1995
Pitman, Kent M.

Much has been written about Lazy Evaluation in Lisp---less about the other end of the spectrum---Ambitious Evaluation. Ambition is a very subjective concept, though, and if you have some preconceived idea of what you think an Ambitious Evaluator might be about, you might want to set it aside for a few minutes because this probably isn't going to be what you expect.
Ungar, David

In Self 4.0, people write programs by directly constructing webs of objects in a larger world of objects. But in order to save or share these programs, the objects must be moved to other worlds. However, a concrete, directly constructed program is incomplete, in particular missing five items of information: which module to use, whether to transport an actual value or a counterfactual initial value, whether to create a new object in the new world or to refer to an existing one, whether an object is immutable with respect to transportation, and whether an object should be created by a low-level, concrete expression or an abstract, type-specific expression. In Self 4.0, the programmer records this extra information in annotations and attributes. Any system that saves directly constructed programs will have to supply this missing information somehow.
Wood, Larry
Malone, Thomas W.; Lai, Kum-Yew; Fry, Christopher

This article describes a series of tests of the generality of a “radically tailorable” tool for cooperative work. Users of this system can create applications by combining and modifying four kinds of building blocks: objects, views, agents, and links. We found that user-level tailoring of these primitives can provide most of the functionality found in well-known cooperative work systems such as gIBIS, Coordinator, Lotus Notes, and Information Lens. These primitives, therefore, appear to provide an elementary “tailoring language” out of which a wide variety of integrated information management and collaboration applications can be constructed by end users.
Kaplan, Ronald M.; Kay, Martin; Maxwell, John

An FSM data structure is encoded by generating a transition unit of data corresponding to each transition which leads ultimately to a final state of the FSM. Information about the states is included in the transition units, so that the encoded data structure can be written without state units of data. The incoming transition units to a final state each contain an indication of finality. The incoming transition units to a state which has no outgoing transition units each contain a branch ending indication. The outgoing transition units of each state are ordered into a comparison sequence for comparison with a received element, and all but the last outgoing transition unit contain an alternative indication of a subsequent alternative outgoing transition. The indications are incorporated with the label of each transition unit into a single byte, and the remaining byte values are allocated among a number of pointer data units, some of which begin full length pointers and some of which begin pointer indexes to tables where pointers are entered. The pointers may be used where a state has a large number of incoming transitions or where the block of transition units depending from a state is broken down to speed access. The first outgoing transition unit of a state is positioned immediately after one of the incoming transitions so that it may be found without a pointer. Each alternative outgoing transition unit is stored immediately after the block beginning with the previous outgoing transition unit so that it may be found by proceeding through the transition units until the number of alternative bits and the number of branch ending bits balance.
Anderson, Kenneth R.

In theory, abstraction is important, but in practice, so is performance. Thus, there is a struggle between an abstract description of an algorithm and its efficient implementation. This struggle can be mediated by using an interpreter or a compiler. An interpreter takes a program that is a high level abstract description of an algorithm and applies it to some data. Don't think of an interpreter as slow. An interpreter is important enough to software that it is often implemented in hardware. A compiler takes the program and produces another program, perhaps in another language. The resulting program is applied to some data by another interpreter.
Tanaka, Tomoyuki; Uzuhara, Shigeru

We consider the impact of introducing the future construct to the multiple value facility in Lisp (Common Lisp and Scheme). A natural way to accommodate this problem is by modifying the implementation of futures so that one future object returns (or resolves to) multiple values instead of one. We first show how such a straightforward modification fails to maintain the crucial characteristic of futures, namely that inserting futures in a functional program does not alter the result of the computation. A straightforward modification may result in the wrong number of values. We then present two methods, which we call the mv-context method and the mv-p flag method, to overcome this problem. Both of these methods have been tested in TOP-1 Common Lisp, an implementation of a parallel Common Lisp on the TOP-1 multiprocessor workstation. To our knowledge, this problem has never been analyzed nor solved in an implementation of parallel Lisp. We also present the technique of future chain elimination, inspired by this solution, which avoids the creation of unnecessary futures and processes at run time.
Steele, Guy L.

Maybe not as hot a topic in computer architecture as it used to be, but still of considerable interest, is parallelism. How do you make a faster computer? Just strap 20 or 200 or 2000 processors together? As we have learned, the architectural and hardware difficulties are immense (How do you connect them? A shared bus? A network? Is there a single system clock or many clocks?), and after these have been solved there remains the matter of programming.
Rao, Ramana; Pedersen, Jan O.; Hearst, Marti A.; Mackinlay, Jock D.; Card, Stuart K.; Masinter, Larry; Halvorsen, Per-Kristian; Robertson, George G.

Effective information access involves rich interactions between users and information residing in diverse locations. Users seek and retrieve information from the sources—for example, file servers, databases, and digital libraries—and use various tools to browse, manipulate, reuse, and generally process the information. We have developed a number of techniques that support various aspects of the process of user/information interaction. These techniques can be considered attempts to increase the bandwidth and quality of the interactions between users and information in an information workspace—an environment designed to support information work (see Figure 1).
1996
Stroustrup, Bjarne

This paper outlines the history of the C++ programming language. The emphasis is on the ideas, constraints, and people that shaped the language, rather than the minutiae of language features. Key design decisions relating to language features are discussed, but the focus is on the overall design goals and practical constraints. The evolution of C++ is traced from C with Classes to the current ANSI and ISO standards work and the explosion of use, interest, commercial activity, compilers, tools, environments, and libraries.
Tavani, Herman T.

The enclosed bibliography addendum includes over four hundred entries which focus primarily on recent works related to "CyberEthics," the "Future of Computing" and the "Quality of Life." Building on the original three parts of "A Computer Ethics Bibliography", the addendum serves as Part IV: "CyberEthics and the Future of Computing."
Part IV is comprised of Sections 11 and 12. Sources listed in Section 11, "CyberEthics & Information Infrastructures," focus on ethical and social issues related to cyberspace and the "networked society." Some sources in this section identify proposals and plans for designing a national and a global information infrastructure (an NII and a GII), while other sources examine issues related to "CyberEthics"—i.e., the cluster of ethical, social, legal, and political issues related to the internet and networked computers.
Issues considered under the rubric "CyberEthics" might, at first glance, seem as if they should be integrated into various sections of Part III, "Ethical Issues in Computing." Sources in those sections, however, consider ethical and social issues in computing that arise independently of computer networks. For example, issues related to computer monitoring, expert systems, intellectual property, software piracy, etc., arise regardless of whether computers happen to be networked to other computers or whether they function solely as "stand-alone" systems.
Some ethical and social issues currently associated with the use of computers arise precisely because computers are networked. Examples of such issues include free speech, obscenity, pornography, and other so-called "First-Amendment-related" issues in Cyberspace. Some of these "cyber-related" issues have come to the forefront of discussion and debate among politicians, computer manufacturers, computer users, and ordinary citizens. Terms such as "cyberpunk" and "cyberporn," "cyberlove" and "cyberadultery," as well as "cybercash" and "cybersovereignty" have recently crept into our lexicon, and have come to be associated with the controversy over civil liberties in cyberspace. Sources in Section 11 address these issues.
Section 12, to be published in a future issue of Computers and Society, contains a collection of sources related to the future of computing and the quality of life. Issues concerned with technological productivity and progress, human-computer interaction and interface design, and computer use in health and human services are grouped under the heading "quality of life." Providing a forum to discuss such issues, ACM/SIGCAS has sponsored two symposia whose theme and title has been "Computers and the Quality of Life." Many of the papers which were presented at these symposia, and also published in ACM Symposia Proceedings, are cited in Section 12.
An Appendix, which lists and annotates bibliographies related to computer ethics and computers in society, is also included in the bibliography addendum. The Appendix will be published with Section 12.
Wood, Amy
Reiss, Steven P.

This paper describes the design and motivations behind the Desert environment. The Desert environment has been created to demonstrate that the facilities typically associated with expensive data integration can be provided inexpensively in an open framework. It uses three integration mechanisms: control integration, simple data integration based on fragments, and a common editor. It offers a variety of capabilities including hyperlinks and the ability to create virtual files containing only the portions of the software that are relevant to the task at hand. It does this in an open environment that is compatible with existing tools and programs. The environment currently consists of a set of support facilities including a context database, a fragment database, scanners, and a ToolTalk interface, as well as a preliminary set of programming tools including a context manager and extensions to FrameMaker to support program editing and insets for non-textual software artifacts.
1997
Lightfoot, Jay M.

The global connectivity provided by the internet has changed the way organizations do business. One such change is the use of corporate websites to advertise products and promote customer good-will. Current website design techniques are inadequate for the creation and maintenance of effective sites. A new technique is described and demonstrated by this paper. The technique uses the NoteCards hypertext software environment. Using the technique results in websites that are easier to maintain and easier to use.
Law, Rob

This paper reviews empirical studies on debugging models and the findings associated with these models. There is a discussion on the evolution of program slicing applied to program debugging, and different generations of debugging tools are analyzed and criticized. Finally, a programming environment section provides examples of program maintenance tools.
Ungar, David; Lieberman, Henry; Fry, Christopher
Henderson, Austin

Tailoring is the technical and human art of modifying the functionality of technology while the technology is in use in the field. This position paper explores various styles of, and mechanisms for, tailoring in three research systems (Trillium, Rooms, and Buttons) created by the author to explore ways to enable players (end users) to achieve new behaviors from these systems appropriate to their particular circumstances.
1998
Ehrlich, Kate

An interview of Austin Henderson, a pioneer in the field of Human Computer Interaction.
Lampson, Butler W.; Pier, Kenneth A.

This paper describes the design goals, micro-architecture, and implementation of the microprogrammed processor for a compact high-performance personal computer. This computer supports a range of high-level language environments and high bandwidth I/O devices. Besides the processor, it has a cache, a memory map, main storage, and an instruction fetch unit; these are described in other papers. The processor can be shared among 16 microcode tasks, performing microcode context switches on demand with essentially no overhead. Conditional branches are done without any lookahead or delay. Micro-instructions are fairly tightly encoded and use an interesting variant on control field sharing. The processor implements a large number of internal registers, hardware stacks, a cyclic shifter/masker, and an arithmetic/logic unit, together with external data paths for instruction fetching, memory interface, and I/O, in a compact, pipelined organization. The machine has a 50 ns microcycle, and can execute a simple macroinstruction in one cycle; the available I/O bandwidth is 640 Mbits/sec. The entire machine, including disk, display, and network interfaces, is implemented with approximately 3000 MSI components, mostly ECL 10K; the processor is about 35% of this. In addition, there are up to 4 storage modules, each with about 300 16K or 64K RAMs and 200 MSI components, for a total of 8 Mbytes. Several prototypes are currently running.
Beesley, Kenneth R.

Finite-state morphology has been successful in the description and computational implementation of a wide variety of natural languages. However, the particular challenges of Arabic, and the limitations of some implementations of finite-state morphology, have led many researchers to believe that finite-state power was not sufficient to handle Arabic and other Semitic morphology. This paper illustrates how the morphotactics and the variation rules of Arabic have been described using only finite-state operations and how this approach has been implemented in a significant morphological analyzer/generator.
Konkin, Douglas P.; Oster, Gregory M.; Bunt, Richard B.

Software performance measurement can be a difficult and tedious procedure, and this difficulty may explain the lack of interest shown in software performance optimisation in all but the most demanding areas, such as parallel computation and embedded systems. This paper describes the measurement shim, an approach to software performance measurement which we have found to significantly reduce the effort required to make performance measurements. The measurement shim exploits the interfaces between software modules, and allows measurement at both data stream and procedure call interfaces. Experimental results indicate that the measurement shim provides high-quality data, and can be inserted with low impact on system performance.
Mackinlay, Jock D.; Card, Stuart K.; Robertson, George G.

The present invention relates to techniques for producing the perception of a moving viewpoint within a three-dimensional space presented on a display.
The invention provides techniques for operating a system to produce the perception of a moving viewpoint within a three-dimensional workspace. When the user indicates a point of interest on an object, the viewpoint can approach the point of interest asymptotically, with both radial and lateral motion. The orientation of the viewpoint can rotate to keep the point of interest in the field of view. The field of view can also be centered about the point of interest by rotating the viewpoint.
Nunberg, Geoffrey D.; Stansbury, Tayloe H.; Abbott, Curtis; Smith, Brian C.

The present invention relates to techniques for processing natural language text that take into account its punctuation. More specifically, the invention relates to data structures that include information about the punctuational structure of natural language text.
Malone, Thomas W.; Lai, Kum-Yew; Yu, Keh-Chiang; Berenson, Richard W.

A computer user interface includes a mechanism of graphically representing and displaying user-definable objects of multiple types. The object types that can be represented include data records, not limited to a particular kind of data, and agents. An agent processes information automatically on behalf of the user. Another mechanism allows a user to define objects, for example by using a template. These two mechanisms act together to allow each object to be displayed to the user and acted upon by the user in a uniform way regardless of type. For example, templates for defining objects allow a specification to be input by a user defining processing that can be performed by an agent.
1999
Deutsch, L. Peter; Finkbine, Ronald B.
Bier, Eric A.

A software architecture is provided for allowing users to impart various types of button behavior to ordinary human interpretable elements of electronic documents by associating hidden persistent character string button attributes to such elements. This architecture permits such buttons to be edited and searched through the use of the edit and search routines that are ordinarily provided by standard document editors.
Reiss, Steven P.

The Desert software engineering environment is a suite of tools developed to enhance programmer productivity through increased tool integration. It introduces an inexpensive form of data integration to provide additional tool capabilities and information sharing among tools, uses a common editor to give high-quality semantic feedback and to integrate different types of software artifacts, and builds virtual files on demand to address specific tasks. All this is done in an open and extensible environment capable of handling large software systems.
Pitman, Kent M.
2000
Albizuri-Romero, Miren Begoña

This paper provides a retrospective view of the adoption of CASE tools in organizations using some empirical data from various research studies in this field. First, relevant factors that influence the decision to adopt such a tool are discussed. Such factors include elements related to the organization adopting such a technology, as well as other characteristics associated with the application environment and the alternative development methods being used. Then, the advantages and disadvantages of using CASE tools are discussed and some critical success factors are identified. Finally, a taxonomy of CASE tools in the 90's is presented. The paper provides some explanations of why some organizations are successful in adopting CASE tools and gives recommendations for making a better use of such a technology.
2001
Lightfoot, Jay

Corporate websites are an important component in the world-wide web. The traditional way of creating these websites leads to a variety of structural problems that reduce the effectiveness of the websites. These problems are difficult to locate and correct using the ad hoc analysis methods that currently exist. This paper introduces a new technique to analyze and document websites. The technique uses the NoteCards hypertext environment to build a visual model of the website. The model is semantically rich, dynamically extensible, and allows interactive update. The result of using this technique is a website that is easier to use, easier to update, and fully documented. The technique is described and demonstrated on a small website.
Böhnke, Dorothea; Eggerth, Claudia
Affenzeller, Michael; Pichler, Franz; Mittelmann, Rudolf

CAST.FSM denotes a CAST tool which has been developed at the Institute of Systems Science at the University of Linz during the years 1986–1993. The first version of CAST.FSM was implemented in INTERLISP-D and LOOPS for the Siemens-Xerox workstation 5815 (“Dandelion”). CAST.FSM supports the application of the theory of finite state machines for hardware design tasks between the architecture level and the level of gate circuits. The application domain, to get practical experience for CAST.FSM, was the field of VLSI design of ASICs, where the theory of finite state machines can be applied to improve the testability of such circuits (“design for testability”) and to optimise the required silicon area of the circuit (“floor planning”). An overview of CAST as a whole and of CAST.FSM as a CAST tool is given in [11]. In our presentation we want to report on the re-engineering of CAST.FSM and on new types of applications of CAST.FSM which are currently under investigation. In this context we will distinguish between three different problems:
1. the implementation of CAST.FSM in ANSI Common Lisp and the design of a new user interface by Rudolf Mittelmann [5].
2. the search for systems-theoretical concepts in modelling intelligent hierarchical systems based on the past work of Arthur Koestler [3] following the concepts presented by Franz Pichler in [10].
3. the construction of hierarchical formal models (of multi-layer type) to study attributes which are assumed for SOHO-structures (SOHO = Self Organizing Hierarchical Order) of A. Koestler.
The latter problem will deserve the main attention in our presentation. In the present paper we will build such a hierarchical model following the concepts of parallel decomposition of finite state machines (FSMs) and interpret it as a multi-layer type of model.
2002
Allen, Eric; Cartwright, Robert; Stoler, Brian

DrJava is a pedagogic programming environment for Java that enables students to focus on designing programs, rather than learning how to use the environment. The environment provides a simple interface based on a "read-eval-print loop" that enables a programmer to develop, test, and debug Java programs in an interactive, incremental fashion. This paper gives an overview of DrJava including its pedagogic rationale, functionality, and implementation.
Filman, Robert E.; Barrett, Stuart; Lee, Diana D.; Linden, Ted
2003
LFG Grammar Writer’s Workbench Documentation
Kaplan, Ronald M.; Maxwell, John T.

The LFG Grammar-writer’s Workbench is a computational environment that assists in writing and debugging Lexical Functional Grammars (Kaplan & Bresnan, 1982). It provides linguists with a facility for writing syntactic rules, lexical entries, and simple morphological rules, and for testing and editing them.
Barron, David W.

Terminology concerning linkers and loaders is confusing, having changed over the years as technology has changed. In older mainframe operating systems, processing of a program between compiling and execution took place in two distinct stages. The function of the linker (or linkage editor) was to combine a number of independently compiled or assembled object files into a single load module, resolving cross-references and incorporating routines from libraries as required. The loader then prepared this module for execution, physically loaded it into memory, and started execution. Early versions of Unix (q.v.) blurred this distinction: the functions of the linker were incorporated into the C (q.v.) compiler in what was confusingly called the "load phase," and the actual loading was done as part of the "exec" operation that installed a new process image for execution.
Fateman, Richard; McCarthy, John

Fortran (q.v.) is the only language in widespread use that is older than Lisp (LISt Processor). Lisp owes its longevity to two facts. First, its core elements occupy a kind of local optimum in the "space" of programming languages, given the resistance to purely notational changes. Recursive use of conditional expressions, representation of symbolic information externally by lists and internally by list data structures (q.v.), and the representation of programs in the same way as data will probably have a very long life.
Fuqua, Paul; Slagle, James R.; Gini, Maria L.

The two elements of a computer program are the computations (the actions we want done) and the data (the things we want the actions done upon). The computations are defined using expressions in a computer language, combined to form procedures, which are in turn combined to form compound procedures and eventually programs. The ability to combine simple expressions into procedures is the key to using computer programs to model processes in the real world. Data is defined in a similar way: compound data objects are built from simple parts, like numbers, and combined to represent real-world objects that have complex properties. Compound procedures and compound data are used for the same purposes: to improve the modularity of the program and to raise the conceptual level of its design. One of the simplest and most widespread form of compound data is the list.
Bobrow, Daniel; Mittal, Sanjay; Lanning, Stanley; Stefik, Mark

The LOOPS (Lisp Object-Oriented Language) project was started to support development of expert systems project at PARC. We wanted a language that had many of the features of frame languages, such as objects, annotated values, inheritance, and attached procedures. We drew heavily on Smalltalk-80, which was being developed next door.
Frieder, Gideon

Shifting is the process of moving data in a storage device relative to the boundaries of the device (as opposed to moving it in and out of the device). The device in which the shift is performed is called a shift register. In order to discuss the various modes of the shift operation, we assume that the register in which the shift is to be performed is n bits wide, and number the bits from left to right, 1...n.
2005
Chapuis, Olivier; Roussel, Nicolas

Twenty years after the general adoption of overlapping windows and the desktop metaphor, modern window systems differ mainly in minor details such as window decorations or mouse and keyboard bindings. While a number of innovative window management techniques have been proposed, few of them have been evaluated and fewer have made their way into real systems. We believe that one reason for this is that most of the proposed techniques have been designed using a low fidelity approach and were never made properly available. In this paper, we present Metisse, a fully functional window system specifically created to facilitate the design, the implementation and the evaluation of innovative window management techniques. We describe the architecture of the system, some of its implementation details and present several examples that illustrate its potential.
2006
Kossow, Al
Chen, Wen-ke; Bhansali, Sanjay; Chilimbi, Trishul; Gao, Xiaofeng; Chuang, Weihaw

Many applications written in garbage collected languages have large dynamic working sets and poor data locality. We present a new system for continuously improving program data locality at run time with low overhead. Our system proactively reorganizes the heap by leveraging the garbage collector and uses profile information collected through a low-overhead mechanism to guide the reorganization at run time. The key contributions include making a case that garbage collection should be viewed as a proactive technique for improving data locality by triggering garbage collection for locality optimization independently of normal garbage collection for space, combining page and cache locality optimization in the same system, and demonstrating that sampling provides sufficiently detailed data access information to guide both page and cache locality optimization with low runtime overhead. We present experimental results obtained by modifying a commercial, state-of-the-art garbage collector to support our claims. Independently triggering garbage collection for locality optimization significantly improved optimization benefits. Combining page and cache locality optimizations in the same system provided larger average execution time improvements (17%) than either alone (page 8%, cache 7%). Finally, using sampling limited profiling overhead to less than 3% on average.
Kersten, Mik; Murphy, Gail C.

When working on a large software system, a programmer typically spends an inordinate amount of time sifting through thousands of artifacts to find just the subset of information needed to complete an assigned task. All too often, before completing the task the programmer must switch to working on a different task. These task switches waste time as the programmer must repeatedly find and identify the information relevant to the task-at-hand. In this paper, we present a mechanism that captures, models, and persists the elements and relations relevant to a task. We show how our task context model reduces information overload and focuses a programmer's work by filtering and ranking the information presented by the development environment. A task context is created by monitoring a programmer's activity and extracting the structural relationships of program artifacts. Operations on task contexts integrate with development environment features, such as structure display, search, and change management. We have validated our approach with a longitudinal field study of Mylar, our implementation of task context for the Eclipse development environment. We report a statistically significant improvement in the productivity of 16 industry programmers who voluntarily used Mylar for their daily work.
2007
Eisenberg, Andrew D.; Kiczales, Gregor

Most approaches to programming language extensibility have worked by pairing syntactic extension with semantic extension. We present an approach that works through a combination of presentation extension and semantic extension. We also present an architecture for this approach, an Eclipse-based implementation targeting the Java programming language, and examples that show how presentation extension, both with and without semantic extension, can make programs more expressive.
Stoyan, Herbert

This presentation will cover several themes connected with Lisp. There will be some part about history, some part about semantic equivalences of code pieces in Lisp, etc.
Floyd, Robert W.
Karttunen, Lauri

This article is a perspective on some important developments in semantics and in computational linguistics over the past forty years. It reviews two lines of research that lie at opposite ends of the field: semantics and morphology. The semantic part deals with issues from the 1970s such as discourse referents, implicative verbs, presuppositions, and questions. The second part presents a brief history of the application of finite-state transducers to linguistic analysis starting with the advent of two-level morphology in the early 1980s and culminating in successful commercial applications in the 1990s. It offers some commentary on the relationship, or the lack thereof, between computational and paper-and-pencil linguistics. The final section returns to the semantic issues and their application to currently popular tasks such as textual inference and question answering.
2008
Gabriel, Richard P.; Steele, Guy L.

In 1992 when we completed our first draft of the History of Programming Languages II paper, The Evolution of Lisp [1], it included sections on a theory or model of how complex language families like Lisp grew and evolved, and in particular, how and when diversity would bloom and consolidation would prune. The historian who worked with all the HOPL II authors, Michael S. Mahoney, did not believe our theory was substantiated properly, so he recommended removing the material and sticking with the narrative of Lisp's evolution. We stopped working on those sections, but they remained in the original text sources, removed with conditionals.
Trancón y Widemann, Baltasar

Reference-counting garbage collection is known to have problems with the collection of cyclically connected data. There are two historically significant styles of cycle-aware algorithms: The style of Brownbridge that maintains a subset of marked edges and the invariant that every cycle contains at least one marked edge, and the style of Martinez-Lins-Wachenchauzer (MLW) that involves local mark-and-scan procedures to detect cycles. The former is known to be difficult to design and implement correctly, and the latter to have pathological efficiency for a number of very typical situations. We present a novel algorithm that combines both approaches to obtain reasonably efficient local mark-and-scan phases with a marking invariant that is rather cheap to maintain. We demonstrate that the assumptions of this algorithm about mutator activity patterns make it well-suited, but not limited, to a functional programming technique for cyclic data. We evaluate the approach in comparison with simple and more sophisticated MLW algorithms using a simple benchmark based on that functional paradigm.
Pitman, Kent M.

This paper summarizes a talk given at "Lisp50@OOPSLA," the 50th Anniversary of Lisp workshop, Monday, October 20, 2008, an event co-located with OOPSLA'08 in Nashville, TN, in which I offered my personal, subjective account of how I came to be involved with Common Lisp and the Common Lisp standard, and of what I learned from the process. The account highlights the role of luck in the way various details of history played out, emphasizing the importance of seizing and making the best of the chance opportunities that life presents. The account further underscores the importance of understanding the role of controlling influences such as funding and intellectual property in shaping processes and outcomes. As noted by Louis Pasteur, "chance favors the prepared mind." The talk was presented extemporaneously from notes. As such, it covered the same general material as does this paper, although the two may differ in details of structure and content. It is suggested that the talk be viewed as an invitation to read this written text, and that the written account be deemed my position of record on all matters covered in the talk.
White, Jon L.; Bourbaki, Nickieben

I worked on Lisp design and implementation from the late 1960s almost until I retired about 5 years ago---and since then I've remained in the community by helping organize Lisp conferences. This means I've been in the thick of Lisp for most of its lifetime. In my talk there were a couple of points I wanted to make. First, computer hardware over the years has imposed constraints on the design of Lisp, ranging from gigantic machines in the early days---gigantic in size but miniscule in computing power---to tiny ones today (whose computing power was once considered "super".) Second, it was certain mindsets of the people involved in the design and implementation of Lisp that most strongly influenced its design---in particular, it was their educational background, driven by interests and talents, that had a great impact on the language.
Teitelman, Warren

I was first introduced to Lisp in 1962 as a first year graduate student at M.I.T. in a class taught by James Slagle. Having programmed in Fortran and assembly, I was impressed with Lisp's elegance. In particular, Lisp enabled expressing recursion in a manner that was so simple that many first time observers would ask the question, "Where does the program do the work?" (Answer - between the parentheses!) Lisp also provided the ability to manipulate programs, since Lisp programs were themselves data (S-expressions) the same as other list structures used to represent program data. This made Lisp an ideal language for writing programs that themselves constructed programs or proved things about programs. Since I was at M.I.T. to study Artificial Intelligence, program writing programs was something that interested me greatly.
Stoyan, Herbert

I acknowledge the help of David Elsweiler to get this paper more readable.
Miller, Mark

Provides an engaging narrative of the significant contributions and events in the history of Lisp, particularly focusing on the pivotal work done at Xerox PARC. It highlights the notable contributions of Warren Teitelman, detailing his innovative work on debugging Lisp programs and developing the first spell checker, undo system, and online help system. The document also delves into the intense debates and processes involved in the consolidation of various Lisp dialects into Common Lisp, featuring insights from key figures such as Guy Steele and Kent Pitman. Through personal anecdotes and historical recounting, it underscores the technical and philosophical clashes that shaped the evolution of Lisp and its community.
2009
Hou, Daqing; Wang, Yuejiao

Programmers spend much of their time interacting with Integrated Development Environments (IDEs), which help increase productivity by automating much of the clerical and administrative work. Like any useful software, IDEs are becoming more powerful and usable as new functionality is added and usability concerns addressed. In particular, the last decade has witnessed the rapid and steady growth of features and enhancements (changes) in major Java IDEs. It is of research interest to learn about the characteristics of these changes as well as salient patterns in their evolution trajectories as these can be useful to understand and guide both the design and evolution of similar systems. To this end, a total of 645 "What's New" entries in seven releases of the Eclipse IDE were analyzed both quantitatively and qualitatively under two models. Using the first, an activity-based, functional model, it is found that the vast majority of the changes are refinements or incremental additions to the feature architecture set up in early releases (1.0 and 2.0). Using the second, a usability-based model, a detailed usability analysis was performed to further characterize these changes in terms of their potential impact on how effectively programmers use the IDE. Findings and implications as well as results of selective comparison with two other popular IDEs are reported.
2010
Viriyakattiyaporn, Petcharat; Murphy, Gail C.

When performing software change tasks, software developers spend a substantial amount of their time navigating dependencies in the code. Despite the availability of numerous tools to aid such navigation, there is evidence to suggest that developers are not using these tools. In this paper, we introduce an active help system, called Spyglass, that suggests tools to aid program navigation as a developer works. We report on the results of a laboratory study that investigated two questions: will developers act upon suggestions from an active help system and will those suggestions improve developer behaviour? We found that with Spyglass we could make developers as aware of navigational tools as they are when requested to read a tutorial about such tools, with less up-front effort. We also found that we could improve developer behaviour as developers in the Spyglass group, after being given recommendations in the context of their work, navigated programming artifacts more efficiently than those in the tutorial group.
Turner, Roy M.

Writing a program and writing its documentation are often considered two separate tasks, leading to several problems: the documentation may never be written; when it is, it may be an afterthought; and when the program is modified, the needed changes to the documentation may be overlooked. Literate programming (LP), introduced by Donald Knuth, views a program and its documentation as an integrated whole: they are written together to inform both the computer and human readers. LP tools then extract the code for the computer and the documentation for further document processing. Unfortunately, existing LP tools are much more suited for compiled languages, where there is already a step between coding and executing and debugging the code. Lisp programming typically involves incremental development and testing, often highly interleaving coding with running portions of the code. Thus LP tools inject an artificial impediment into this process. LP/Lisp is a new LP tool designed specifically for Lisp and the usual style of programming using Lisp. The literate programming file is the Lisp file; LP markup and text resides in Lisp comments, where it does not interfere with running the code. LP/Lisp provides the usual literate programming services, such as code typesetting, syntactic sugaring, and the ability to split the code for expository purposes (a "chunk" mechanism). LP/Lisp, itself written in Lisp, is run on the code to produce the documentation.
Sybalsky, Jill Marci
Fabbrizio, Giuseppe Di; Klarlund, Nils

A method and apparatus are described for a programming language with fully undoable, timed reactive instructions. More specifically, the present invention relates to providing a multi-modal user interface for controlling the execution of fully undoable programs. An embodiment of the present invention includes a method for providing a multi-modal user interface that is enabled to control the order of execution of a program having fully undoable instructions using checkpoints associated with discrete locations within the program.
2011
Freeman, Dustin; Balakrishnan, Ravin

We present Tangible Actions, an ad-hoc, just-in-time, visual programming by example language designed for large multitouch interfaces. With the design of Tangible Actions, we contribute a continually-created system of programming tokens that occupy the same space as the objects they act on. Tangible Actions are created by the gestural actions of the user, and they allow the user to reuse and modify their own gestures with a lower interaction cost than the original gesture. We implemented Tangible Actions in three different tabletop applications, and ran an informal evaluation. While we found that study participants generally liked and understood Tangible Actions, having the objects and the actions co-located can lead to visual and interaction clutter.
Stefik, Mark

The Colab project at PARC was an experiment in creating an electronic meeting room. This project developed multi-user interfaces, telepointers, and other innovations at the time. This movie shows the Cognoter tool which was a multi-user brainstorming tool used for collaborative development of an outline for a paper.
Stefik, Mark

In 1983 the Knowledge Systems Area at Xerox PARC taught experimental courses on knowledge programming. The Truckin' knowledge competition was the final exam at the end of a one-week course. Students programmed their trucks to compete in the Truckin' simulation world — buying and selling goods, getting gas as needed, avoiding bandits, and so on. All of the trucks competed in the final. The winner was the truck with the most cash parked nearest Alice's Restaurant.
See https://www.markstefik.com/?page_id=359
Stefik, Mark

Xerox was starting a business of selling the Interlisp-D programming environment and AI workstations based on the Dandelion, D0, and Dorado computers. We decided to distribute Loops with Interlisp-D to all of the customers who were buying the workstations to develop expert systems. We decided to create a course on “knowledge programming,” explaining how to combine the various programming paradigms in Loops to create knowledge systems.
Lynn Conway suggested that a competition would energize classes for learning a computer language. The Truckin' knowledge competition was the equivalent of a final exam for a one-week course that we offered periodically at PARC to teach people about object-oriented programming.
2012

This clip looks at two examples of larger tutorial CAI systems that were developed by the Ontario Institute for Studies in Education, and Xerox PARC.
It is from Episode 7 of the classic 1983 television series, Bits and Bytes, which starred Luba Goy and Billy Van. It was produced by TVOntario, but is no longer available for purchase.
Oldford, Wayne; DesVignes, Guy

This video (in 3 pieces) describes the use of graphical programming with an example, showing the encapsulation of several steps of an analysis into a single reusable tool. An INTERLISP-D programming environment with the object oriented system LOOPS is used for software development. Work is on a Xerox Lisp Workstation (Xerox 1186).
First of 3 pieces of a single video.
First piece: Graphical Programming (1988) - Part 0
Contains:
"Opening"
- Introduction by a young Wayne Oldford (refers to an earlier video called "Data Analysis Networks in DINDE")
"Part 0 Statistical Analysis Maps"
- Review of the interactive data analysis network representation of a statistical analysis.
Second piece: Graphical Programming (1988) - Parts 1 and 2
Contains:
"Part 1 Toolboxes"
- Review of the elements of a statistical toolbox in DINDE.
"Part 2 The Analysis Path"
- Demonstrates exploration of a path in an existing analysis map and its representation as a pattern. It is shown how to capture this pattern in DINDE as a new program represented as an "AnalysisPath" object. This is what is meant by "graphical programming".
Third piece: Graphical Programming (1988) - Part 3
Contains:
"Part 3 Graphical Programming Example: Added Variable Plots"
- Demonstrates graphical programming by constructing an added variable plot. This is done by constructing the appropriate analysis path on some data, capturing the pattern, adding it to the toolbox, and then applying it to new data.
Oldford, Wayne; DesVignes, Guy

This video (in 3 pieces) describes the use of graphical programming with an example, showing the encapsulation of several steps of an analysis into a single reusable tool. An INTERLISP-D programming environment with the object oriented system LOOPS is used for software development. Work is on a Xerox Lisp Workstation (Xerox 1186).
Second of 3 pieces of a single video.
First piece: Graphical Programming (1988) - Part 0
Contains:
"Opening"
- Introduction by Wayne Oldford (refers to an earlier video called "Data Analysis Networks in DINDE")
"Part 0 Statistical Analysis Maps"
- Review of the interactive data analysis network representation of a statistical analysis.
Second piece: Graphical Programming (1988) - Parts 1 and 2
Contains:
"Part 1 Toolboxes"
- Review of the elements of a statistical toolbox in DINDE.
"Part 2 The Analysis Path"
- Demonstrates exploration of a path in an existing analysis map and its representation as a pattern. It is shown how to capture this pattern in DINDE as a new program represented as an "AnalysisPath" object. This is what is meant by "graphical programming".
Third piece: Graphical Programming (1988) - Part 3
Contains:
"Part 3 Graphical Programming Example: Added Variable Plots"
- Demonstrates graphical programming by constructing an added variable plot. This is done by constructing the appropriate analysis path on some data, capturing the pattern, adding it to the toolbox, and then applying it to new data.
2013
2014
Strandh, Robert

Garbage collection algorithms are divided into three main categories, namely mark-and-sweep, mark-and-compact, and copying collectors. The collectors in the mark-and-compact category are frequently overlooked, perhaps because they have traditionally been associated with greater cost than collectors in the other categories. Among the compacting collectors, the sliding collector has some advantages in that it preserves the relative age of objects. The main problem with the traditional sliding collector by Haddon and Waite [4] is that building address-forwarding tables is costly. We suggest an improvement to the existing algorithm that reverses the order between building the forwarding table and moving the objects. Our method improves performance of building the table, making the sliding collector a better contestant for young generations of objects (nurseries).
Smith, Robert

Common Lisp is a towering language that supports a plethora of functionality useful for both scientific and mathematical programming. However---except for a few notable systems such as Axiom, Macsyma/Maxima, and ACL2---Lisp has not taken center stage for such kinds of programming tasks. We will analyze existing systems, including computer algebra systems, technical computing systems, and other programming languages, and their utility in scientific and mathematical programming. Such a discussion will form a foundation for comparative study. Following that, we will expound on some features of Lisp that augment the expressiveness, simplicity, and utility of programs written in the language. In particular, we do so by way of three carefully selected pragmatic examples arising in fields ranging from the theory of special functions to numerical simulation.
Strandh, Robert

We describe a technique for generic dispatch that is adapted to modern computers where accessing memory is potentially quite expensive. Instead of the traditional hashing scheme used by PCL [6], we assign a unique number to each class, and the dispatch consists of comparisons of the number assigned to an instance with a certain number of (usually small) constant integers. While our implementation (SICL) is not yet in a state where we are able to get exact performance figures, a conservative simulation suggests that our technique is significantly faster than the one used in SBCL, which uses PCL, and indeed faster than the technique used by most high-performance Common Lisp implementations. Furthermore, existing work [7] using a similar technique in the context of static languages suggests that performance can improve significantly compared to table-based techniques.
Tannir, Adam

Being the second oldest high-level language still in widespread use (after Fortran), Lisp is often considered solely as an academic language well-suited for artificial intelligence. It is sometimes accused of having a (very (strange syntax)), only using lists as data types, being difficult to learn, using lots of memory, being inefficient and slow, as well as being dead, an ex-language. This talk, focusing on Common Lisp, aims to show that it is actually an elegant, unique, expressive, fast, extensible language for symbolic computation that is not difficult to learn and may even change the way you think about programming. Lisp is primarily a functional paradigm language, but supports object-oriented, imperative, and other programming models natively. Rapid prototyping, iterative development, multiprocessor development, and creation of domain-specific languages are all facilitated by Lisp. There will be a discussion of the origins and history of Lisp, followed by a demonstration of the language, features that migrated to and from other languages, and concluding with a look to what may be in store for the future.
Hosted by Adam Tannir
Myers, Brad

Scrollbars, in Interlisp-D, appear on a window only when they are needed.
src: https://vimeo.com/61556918
2015
Ensmenger, Nathan; Stachniak, Zbigniew; Rajaraman, Vaidyeswaran; Kidwell, Peggy Aldrich; Fidler, Bradley; Currie, Morgan; Cortada, James W.; Spicer, Dag; Copeland, Jack; Haeff, Andre A.; Murphy, Dan; Misa, Thomas J.; Alper, Meryl
Murphy, Dan
Emulation & Virtualization as Preservation Strategies
Rosenthal, David S.H.
Dyomkin, Vsevolod

At Grammarly, the foundation of our business, our core grammar engine, is written in Common Lisp. It currently processes more than a thousand sentences per…
Murphy, Dan

In the late 1960s, a small group of developers at Bolt, Beranek, and Newman (BBN) in Cambridge, Massachusetts, began work on a new computer operating system, including a kernel, system call API, and user command interface (shell). While such an undertaking, particularly with a small group, became rare in subsequent decades, it was not uncommon in the 1960s. During development, this OS was given the name TENEX. A few years later, TENEX was adopted by Digital Equipment Corporation (DEC) for its new line of large machines to be known as the DECSYSTEM-20, and the operating system was renamed to TOPS-20. The author followed TENEX (or vice versa) on this journey, and these are some reflections and observations from that journey. He touches on some of the technical aspects that made TENEX notable in its day and an influence on operating systems that followed as well as on some of the people and other facets involved in the various steps along the way.
2016
Schafmeister, Christian E.

CANDO is a compiled programming language designed for rapid prototyping and design of macromolecules and nanometer-scale materials. CANDO provides functionality to write programs that assemble atoms and residues into new molecules and construct three-dimensional coordinates for them. CANDO also provides functionality for searching molecules for substructures, automatically assigning atom types, identifying rings, carrying out conformational searching, and automatically determining stereochemistry, among other things. CANDO extends the Clasp implementation of the dynamic language Common Lisp. CANDO provides classes for representing atoms, residues, molecules and aggregates (collections of molecules) as primitive objects that are implemented in C++ and subject to automatic memory management, like every other object within the language. CANDO inherits all of the capabilities of Clasp, including the easy incorporation of C++ libraries using a C++ template programming library. This automatically builds wrapper code to expose the C++ functionality to the CANDO Common Lisp environment and the use of the LLVM library[1] to generate fast native code. A version of CANDO can be built that incorporates the Open Message Passing Interface C++ library[2], which allows CANDO to be run on supercomputers, in order to automatically setup, start, and analyze molecular mechanics simulations on large parallel computers. CANDO is currently available under the LGPL 2.0 license.
Bouvin, Niels Olof; Klokmose, Clemens Nylandsted

We show and analyze herein how Webstrates can augment the Web from a classical hypermedia perspective. Webstrates turns the DOM of Web pages into persistent and collaborative objects. We demonstrate how this can be applied to realize bidirectional links, shared collaborative annotations, and in-browser authorship and development.
Fisher, Lawrence M.
Allen, Paul G.

It’s one thing to read about a true breakthrough, something else to see it in action
2017

We as developers tend to separate our development tools by the stage of the development lifecycle: authoring, executing, building, or deployment. But this limits how much information each tool has at its disposal and therefore how much utility it can provide. For example, your IDE can show you the callers of a particular function, but because it is not involved in running your code it can't tell you how many times that function failed at runtime. Even worse, we end up with a lot of redundant implementations of the same functions – for example parsers – because it's easier than sharing the work.
At Replit we're growing a holistic development service from the ground up. At first our service just executed user code. Then it gained code intelligence capabilities like Lint. Then it understood the project structure and dependencies. Then it knew how to test code. And now it's growing to understand deployment. All this within a single service. We envision this to become a long-lived always-on service that understands your code in all its stages and can be at your disposal anywhere you are regardless of the device, platform or the programming language you're using.
de Kleer, Johan

It is with deep sorrow that we report the passing of former AAAI President Danny Bobrow on March 20, 2017. His family, friends, and colleagues from the Palo Alto Research Center and around the world recently gathered at PARC to commemorate his life and work.
Masinter, Larry

17 new photos added to shared album

Reuploaded from:
http://people.csail.mit.edu/riastradh...
Thanks to "lispm" on reddit for all the info:
https://www.reddit.com/r/lisp/comment...
From what I understand SEdit was developed later than DEdit. SEdit is documented first in the 1987 Lyric release of Interlisp-D, see Appendix B:
http://bitsavers.trailing-edge.com/pd...
SEdit is expanded in the virtual machine version of Interlisp-D, called Medley. See the Medley 1.0 release notes, appendix B:
http://bitsavers.trailing-edge.com/pd...
Some hints for using SEdit:
http://bitsavers.trailing-edge.com/pd...
If you want to try it out, maybe this contains the editors:
http://www2.parc.com/isl/groups/nltt/...
Balzer, Robert; Erman, Lee; Feather, Martin; Goldman, Neil; London, Philip; Wile, David; Wilczynski, David; Lingard, Robert; Mark, William; Mann, William; Moore, James; Pirtle, Mel; Dyer, David; Rizzi, William; Cohen, Danny; Barnett, Jeff; Kameny, Iris; Yemini, Yechiam

ISI is an off-campus research center in the University of Southern California's School of Engineering. The Institute engages in a broad set of research and application oriented projects in the computer sciences, ranging from advanced research efforts aimed at producing new concepts to operation of a major Arpanet computer facility.
2018
Korkut, Joomy; Christiansen, David Thrane

Dependently typed programming languages, such as Idris and Agda, feature rich interactive environments that use informative types to assist users with the construction of programs. However, these environments have been provided by the authors of the language, and users have not had an easy way to extend and customize them. We address this problem by extending Idris's metaprogramming facilities with primitives for describing new type-directed editing features, making Idris's editors as extensible as its elaborator.
Rhodes, Christophe

We describe our use of Lisp to generate teaching aids for an Algorithms and Data Structures course taught as part of the undergraduate Computer Science curriculum. Specifically, we have made use of the ease of construction of domain-specific languages in Lisp to build a restricted language with programs capable of being pretty-printed as pseudocode, interpreted as abstract instructions, and treated as data in order to produce modified distractor versions. We examine student performance, report on student and educator reflection, and discuss practical aspects of delivering the course using this teaching tool.
Jayaprakash, Rajesh

This abstract describes the design and implementation of pLisp, a Lisp dialect and integrated development environment modeled on Smalltalk that targets beginners.
2019
Becker, Brett A.; Denny, Paul; Pettit, Raymond; Bouchard, Durell; Bouvier, Dennis J.; Harrington, Brian; Kamil, Amir; Karkare, Amey; McDonald, Chris; Osera, Peter-Michael; Pearce, Janice L.; Prather, James

Diagnostic messages generated by compilers and interpreters such as syntax error messages have been researched for over half of a century. Unfortunately, these messages, which include error, warning, and run-time messages, present substantial difficulty and could be more effective, particularly for novices. Recent years have seen an increased number of papers in the area including studies on the effectiveness of these messages, improving or enhancing them, and their usefulness as a part of programming process data that can be used to predict student performance, track student progress, and tailor learning plans. Despite this increased interest, the long history of literature is quite scattered and has not been brought together in any digestible form. In order to help the computing education community (and related communities) to further advance work on programming error messages, we present a comprehensive, historical and state-of-the-art report on research in the area. In addition, we synthesise and present the existing evidence for these messages including the difficulties they present and their effectiveness. We finally present a set of guidelines, curated from the literature, classified on the type of evidence supporting each one (historical, anecdotal, and empirical). This work can serve as a starting point for those who wish to conduct research on compiler error messages, runtime errors, and warnings. We also make the bibtex file of our 300+ reference corpus publicly available. Collectively this report and the bibliography will be useful to those who wish to design better messages or those that aim to measure their effectiveness, more effectively.
Bouvin, Niels Olof

Fifty years since the beginning of the Internet, and three decades of the Dexter Hypertext Reference Model and the World Wide Web, mark an opportune time to take stock and consider how hypermedia has developed, and in which direction it might be headed. The modern Web has on one hand turned into a place where very few, very large companies control all major platforms, with some highly unfortunate consequences. On the other hand, it has also led to the creation of a highly flexible and nigh ubiquitous set of technologies and practices, which can be used as the basis for future hypermedia research, with the rise of computational notebooks as a prime example of a new kind of collaborative and highly malleable application.
Böcker, Heinz-Dieter
Barela, Anne

Via livingcomputers.org: Josh Dersch writes about research into the Xerox 8010 Information System (codenamed “Dandelion” during development), commonly referred to as the Star. The Star was envisioned as the centerpiece of the office of the future, combining high-resolution graphics with the now-familiar mouse, Ethernet networking for sharing and collaborating, and Xerox’s laser printer technology for faithful “WYSIWYG” document reproduction. It was a revolutionary system at a time when almost everyone else was using text-based systems.
2020
Clinger, William D.; Wand, Mitchell

The fully parenthesized Cambridge Polish syntax of Lisp, originally regarded as a temporary expedient to be replaced by more conventional syntax, possesses a peculiar virtue: A read procedure can parse it without knowing the syntax of any expressions, statements, definitions, or declarations it may represent. The result of that parsing is a list structure that establishes a standard representation for uninterpreted abstract syntax trees. This representation provides a convenient basis for macro processing, which allows the programmer to specify that some simple piece of abstract syntax should be replaced by some other, more complex piece of abstract syntax. As is well-known, this yields an abstraction mechanism that does things that procedural abstraction cannot, such as introducing new binding structures. The existence of that standard representation for uninterpreted abstract syntax trees soon led Lisp to a greater reliance upon macros than was common in other high-level languages. The importance of those features is suggested by the ten pages devoted to macros in an earlier ACM HOPL paper, “The Evolution of Lisp.” However, naïve macro expansion was a leaky abstraction, because the movement of a piece of syntax from one place to another might lead to the accidental rebinding of a program’s identifiers. Although this problem was recognized in the 1960s, it was 20 years before a reliable solution was discovered, and another 10 before a solution was discovered that was reliable, flexible, and efficient. In this paper, we summarize that early history with greater focus on hygienic macros, and continue the story by describing the further development, adoption, and influence of hygienic and partially hygienic macro technology in Scheme. The interplay between the desire for standardization and the development of new algorithms is a major theme of that story. We then survey the ways in which hygienic macro technology has been adapted into recent non-parenthetical languages. Finally, we provide a short history of attempts to provide a formal account of macro processing.
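As a small illustration of the accidental rebinding discussed above, consider a hypothetical `my-or` macro in standard Scheme; this sketch is illustrative only and is not drawn from the paper:

```scheme
;; Naively expanding (my-or e1 e2) into (let ((t e1)) (if t t e2))
;; captures any `t` the caller happens to use:
;;
;;   (let ((t #t)) (my-or #f t))
;;   ;; naive expansion: (let ((t #t)) (let ((t #f)) (if t t t))) => #f
;;
;; A hygienic macro system renames the `t` introduced by the macro,
;; leaving the caller's `t` untouched:

(define-syntax my-or
  (syntax-rules ()
    ((_)           #f)
    ((_ e)         e)
    ((_ e1 e2 ...) (let ((t e1)) (if t t (my-or e2 ...))))))

(let ((t #t))
  (my-or #f t))  ; => #t under hygienic expansion, not #f
```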
Hsu, Hansen

In commemoration of the 40th anniversary of the release of Smalltalk-80, the Computer History Museum is proud to announce a collaboration with Dan Ingalls to preserve and host the “Smalltalk Zoo.”
Card, Stuart

This interview is part of a series on Human Computer Interaction (HCI) conducted by the Charles Babbage Institute for ACM SIGCHI (Association for Computing Machinery Special Interest Group for Computer Human Interaction). HCI pioneer Stuart Card discusses his early education, attending Oberlin College, and helping lead its computer center, before the bulk of the interview focuses on his graduate education at Carnegie Mellon University working under Allen Newell, and his long and influential tenure at Xerox PARC. This includes his long and impactful collaboration with Newell and fellow Newell doctoral student Tom Moran. Newell, Card, and Moran were fundamentally important to theorizing early Human Computer Interaction, and the three co-wrote the widely used and deeply insightful textbook, The Psychology of Human Computer Interaction. Card provides an overview of his decades of work at Xerox PARC and various aspects of his research contributions to HCI models, information visualization, and information access (especially foraging theory). He also relates his move into managing research and a portion of his leadership roles at PARC and beyond, on important committees such as those of the National Academy of Sciences. He briefly expresses his ideas on the early institutional history of SIGCHI and its evolution. Regarding his work at PARC, Card discusses his influential work on computer mice research at greater length. Card became an adjunct professor at Stanford University. He is an ACM Fellow and was awarded SIGCHI’s Lifetime Research Achievement Award.

X3J13 is the name of a technical committee which was part of the International Committee for Information Technology Standards (INCITS, then named X3). The X3J13 committee was formed in 1986 to draw up an American National Standards Institute (ANSI) Common Lisp standard based on the first edition of the book Common Lisp the Language (also termed CLtL, or CLtL1), by Guy L. Steele Jr., which was formerly a de facto standard for the language. The primary output of X3J13 was an American National Standard for programming language Common Lisp (X3.226/1994), approved December 8, 1994. X3J13 later worked with International Organization for Standardization (ISO) working group SC22/WG16 on an internationally standardised dialect of Lisp named ISLISP.
Living Computers: Museum+Labs

A Xerox Star 8010 emulator, developed in the livingcomputermuseum/Darkstar repository on GitHub.
2021
Myers, Brad

All the Widgets 2: Menus
Brad Myers, Carnegie Mellon University
CHI '90 Special Issue: All The Widgets
WEB: http://www.cs.umd.edu/hcil/chivideosl...
Editor: Brad Myers (Carnegie Mellon University)
Location: Austin, USA
Cardoso-Llach, Daniel; Kaltman, Eric; Erdolu, Emek; Furste, Zachary

This paper explores the potential of distributed emulation networks to support research and pedagogy into historical and sociotechnical aspects of software. Emulation is a type of virtualization that re-creates the conditions for a piece of legacy software to operate on a modern system. The paper first offers a review of Computer-Supported Cooperative Work (CSCW), Human-Computer Interaction (HCI), and Science and Technology Studies (STS) literature engaging with software as historical and sociotechnical artifacts, and with emulation as a vehicle of scholarly inquiry. It then documents the novel use of software emulations as a pedagogical resource and research tool for legacy software systems analysis. This is accomplished through the integration of the Emulation as a Service Infrastructure (EaaSI) distributed emulation network into a university-level course focusing on computer-aided design (CAD). The paper offers a detailed case study of a pedagogical experience oriented to incorporate emulations into software research and learning. It shows how emulations allow for close, user-centered analyses of software systems that highlight both their historical evolution and core interaction concepts, and how they shape the work practices of their users.

The Code of Best Practices in Fair Use for Software Preservation provides clear guidance on the legality of archiving legacy software to ensure continued access to digital files of all...
Masad, Amjad

I'm fascinated by the idea of computers doing The Right Thing without explicit user input. Today this is most apparent in autocorrect, but the idea -- in a more advanced form -- goes back to the early...

The Dipmeter Advisor was an early expert system developed in the 1980s by Schlumberger with the help of artificial-intelligence workers at MIT to aid in the analysis of data gathered during oil exploration. The Advisor was not merely an inference engine and a knowledge base of ~90 rules, but a full-fledged workstation, running on one of Xerox's 1100 Dolphin Lisp machines (or more generally on Xerox's "1100 Series Scientific Information Processors" line) and written in INTERLISP-D, with a pattern recognition layer which in turn fed a GUI menu-driven interface. It was developed by a number of people, including Reid G. Smith, James D. Baker, and Robert L. Young. It was primarily influential not because of any great technical leaps, but rather because it was so successful for Schlumberger's oil divisions and because it was one of the few success stories of the AI bubble to receive wide publicity before the AI winter.
The AI rules of the Dipmeter Advisor were primarily derived from Al Gilreath, a Schlumberger interpretation engineer who developed the "red, green, blue" pattern method of dipmeter interpretation.
Unfortunately this method had limited application in more complex geological environments outside the Gulf Coast, and the Dipmeter Advisor was primarily used within Schlumberger as a graphical display tool to assist interpretation by trained geoscientists, rather than as an AI tool for use by novice interpreters. However, the tool pioneered a new approach to workstation-assisted graphical interpretation of geological information.

An HTML document containing what appear to be lists of emails sent between 1969 and 1997.

Interlisp (also seen with a variety of capitalizations) is a programming environment built around a version of the programming language Lisp. Interlisp development began in 1966 at Bolt, Beranek and Newman (renamed BBN Technologies) in Cambridge, Massachusetts, with Lisp implemented for the Digital Equipment Corporation (DEC) PDP-1 computer by Danny Bobrow and D. L. Murphy. In 1970, Alice K. Hartley implemented BBN LISP, which ran on PDP-10 machines running the operating system TENEX (renamed TOPS-20). In 1973, when Danny Bobrow, Warren Teitelman and Ronald Kaplan moved from BBN to the Xerox Palo Alto Research Center (PARC), it was renamed Interlisp. Interlisp became a popular Lisp development tool for artificial intelligence (AI) researchers at Stanford University and elsewhere in the community of the Defense Advanced Research Projects Agency (DARPA). Interlisp was notable for integrating interactive development tools into an integrated development environment (IDE), such as a debugger, an automatic correction tool for simple errors (via do what I mean (DWIM) software design), and analysis tools.
Kaisler, Stephen Hendrick

This volume focuses on a set of tools for the interactive programming interface for Medley Interlisp. I tried to select the tools that I felt were extremely useful to the Interlisp user. Some tools were omitted due to space limitations. The decisions are solely mine.
Ingalls, Daniel
Proven, Liam

I must be mellowing in my old age (possibly as opposed to bellowing) because I have been getting praise and compliments recently on comments in various places. Don't worry, there are still people angrily shouting at me as well. This was the earlier comment, I think... There was a slightly…
Malone, Thomas W.

An intelligent system for information sharing and coordination.
Published in two videotapes: issue 27, and issue 33-34 of ACM SIGGRAPH Video Review (issue 27 appeared in the same tape as issue 26, i.e. the CHI '87 Electronic Theater).
Video Chair: Richard J. Beach (Xerox PARC)
Location: Toronto, Canada
2022
trhawes

I'm a retro enthusiast who loves Lisp, so naturally, I'd want to show off my Medley Interlisp virtual machine (emulating a Xerox Lisp Machine). Someone had included FreeBSD support for the project. I contributed makefiles for the amd64 and aarch64 architectures. The project isn't in ports; it resides on GitHub. Super-easy to get running on FreeBSD.
Kaisler, Stephen Hendrick

In this volume, I explore the features of Interlisp-D: The Interactive Programming Environment. Interlisp-D was a rehosting of Interlisp to a new class of powerful, microprogrammed computer systems specifically designed to execute Lisp and other high-level languages efficiently.
2023
infoprogL

Interlisp is generally considered to be the most extensive programming environment in existence.
Tutorial transcription: cl1p.net/interlisp1txt (the link will expire after some days, so save it).
Henkel-Wallace, David (gumby)

Many people who read about Lisp Machines are not aware that the InterLisp-D world and the MIT world (CADR, LMI, Symbolics etc.) had significantly different approaches to how the systems should work, so even if you have read or used the MIT-style systems you will learn a lot by using Medley. I came from MIT out to PARC for a year, and later moved CYC from D machines to Symbolics machines (a complete reimplementation using a different fundamental architecture), so I have good experience with them both.
At heart, the InterLisp language itself isn't that different from the MIT Lisps, as Interlisp started down the road at BBN and there was a lot of cross-fertilization in both directions. And Common Lisp, while heavily based on the "MIT" model, has a lot of Interlisp influence in it.
BALISP

An Introduction to the Medley Interlisp Project: recording of a talk given at a Bay Area Lisp & Scheme Users Group event on March 18, 2023.
California State University Channel Islands SHFT group

This is a short demo showcasing some basic controls and typing shortcuts in Medley Online.
California State University Channel Islands SHFT group

This is a short demo showcasing how easy it is to start using Medley Online.
California State University Channel Islands SHFT group

This is a short demo showcasing the different features of the sidebar in Medley Online.
Amoroso, Paolo

Imagine someone let you into an alien spaceship they landed in your backyard, sat you at the controls, and encouraged you to fly the ship...
2024
Undated
shih:mv:envos

Here's a proposed design for big bitmaps (for Maiko Color).

This document specifies the interfaces to the automated test harness. The harness is composed of two parts: the top-level tester and the individual test handlers.

The Call-C-Function MISCN opcode. DEFFOREIGN—Define a foreign function for Lisp.

User manual of the CLOS class browser.

Descriptions of the DO-TEST-* functions of the Interlisp-D automated testing infrastructure.

Although Envos Corp., an artificial intelligence spin-off of the Xerox Corp., folded back into Xerox last spring after nine months in operation, the parent company is “absolutely” committed to developing similar ventures in the future, according to Xerox spokesman Peter Hawes. “We have been trying to identify [Xerox] technologies,” says Hawes, “and choose which ... might lend themselves to alternative exploitation.”

Table of the Common Lisp equivalents of Interlisp data types and functions.
Interlisp-Ad-Venue.jpg

Table of Interlisp-D system data structure sizes.
Biggs, Melissa

This document provides a template and instructions for formatting the Lisp Users’ module documentation. This template applies primarily to standalone workstation users. Using the Lisp Library module TEdit, and this document, you should be able to create a standard Lisp Users’ module for the Lisp Users’ manual. This document gives you the written specifications for formatting your document. The specifications are given in the order in which you would most likely use them to format a document, with the basic text and margins described first, then the various levels of headings, then special elements such as page numbers.
Lanning, Stan

LOOPS-FB adds a command to the Lisp File Browsers for opening Loops browsers on files.
Maxwell, John

Documentation of a LispUsers module. The LispNerd provides a menu-based interface to the Interlisp Reference Manual.

Documentation of the main testing entry points, useful functions for building tests, commands and functions for running tests, and internal functions of Interlisp-D's test harness.
Rindfleisch

Notes on the Interlisp-D test results of the {Medley}test directory.

Instructions for running test cases for entries in the Action Request issue tracking database of Interlisp-D and recording the results.

Documentation of a script for testing the INSPECTALLFIELDSFLG system variable of Interlisp on Interlisp-D.
Script for testing Inspect macro interface

Documentation of a script for testing the Inspect macro interface in Interlisp code on Interlisp-D.

Documentation of a script for testing defstruct and the inspector with defstruct in Common Lisp code on Interlisp-D.

Documentation of a script to check code inspectors on stack frames of break windows on Interlisp-D.

Documentation of the scripts for testing the record package of Interlisp-D.
Semantics of Procedures: A Cognitive Basis for Maintenance Training Competency
Moran, Thomas D.; Russell, Daniel M.; Jordan, Daniel; Jensen, Anne-Marie; Orr, Julian

Description of the purpose of the special files in the ARs> directory of the Interlisp-D testing infrastructure.

This is the preliminary documentation for the first experimental version of the Test Apprentice. The purpose of this tool is to help with testing. It is eventually intended to generate and execute tests (it would be an AI application). In its current state it is just learning by watching what other testers do. But it is useful in this state because it can repeat exactly what other testers have done before.

The Interlisp-D testing system is an integrated system built for creating, managing and using a large set of programmed tests for testing the correctness and the performance of the Interlisp-D programming environment.
The system consists of three parts: the test driver, the database management system, and a graphic control tool. In addition, there are various tools for helping the test builders in the process of creating new tests.

This document should be used as a guide for users of the testing system; it assumes that the reader has read "The Interlisp-D Testing System" document.

Useful test utilities of the Interlisp-D test harness.

Notes on the results of a series of tests carried out on Interlisp-D.

Documentation of a script for testing the programmatic interface to the INSPECTW facility of Interlisp on Interlisp-D.

Descriptions of DLD-* opcodes of the Maiko Virtual Machine of Interlisp.
Bane, Bob

UNIXMAIL is a new mail sending and receiving mode for Lafite. It sends mail via Unix hosts using the SMTP mail transfer protocol and can receive mail either by reading a Unix mail spool file or by calling the Berkeley mail program.

Description of the testing directory structure and file naming conventions of the Interlisp-D testing infrastructure at Xerox AIS.