Bibliography

Interlisp Bibliography

The following table represents a snapshot of the information captured in the Interlisp Zotero library.

Author Title Abstract Notes
1963
Slagle, James R. A Heuristic Program that Solves Symbolic Integration Problems in Freshman Calculus
A large high-speed general-purpose digital computer (IBM 7090) was programmed to solve elementary symbolic integration problems at approximately the level of a good college freshman. The program is called SAINT, an acronym for "Symbolic Automatic INTegrator." This paper discusses the SAINT program and its performance. SAINT performs indefinite integration. It also performs definite and multiple integration when these are trivial extensions of indefinite integration. It uses many of the methods and heuristics of students attacking the same problems. SAINT took an average of two minutes each to solve 52 of the 54 attempted problems taken from the Massachusetts Institute of Technology freshman calculus final examinations. Based on this and other experiments with SAINT, some conclusions concerning computer solution of such problems are: (1) Pattern recognition is of fundamental importance. (2) Great benefit would have been derived from a large memory and more convenient symbol manipulating facilities. (3) The solution of a symbolic integration problem by a commercially available computer is far cheaper and faster than by man.
1965
Deutsch, L. Peter ;
Lampson, Butler W.
930 LISP Reference Manual
1966
Berkeley, Edmund Callis ;
Bobrow, Daniel Gureasko
The programming language LISP: Its operation and applications
Bobrow, Daniel G. ;
Murphy, Daniel L.
THE STRUCTURE OF A LISP SYSTEM USING TWO-LEVEL STORAGE, SCIENTIFIC REPORT
In an ideal list-processing system there would be enough core memory to contain all the data and programs. The paper describes a number of techniques used to build a LISP system which utilizes a drum for its principal storage medium, with a surprisingly low time-penalty for use of this slow storage device. The techniques include careful segmentation of system programs, allocation of virtual memory to allow address arithmetic for type determination, and a special algorithm for building reasonably linearized lists. A scheme is described for binding variables which is good in this environment and allows for complete compatibility between compiled and interpreted programs with no special declarations.
Section: Technical Reports
Bobrow, Daniel G. ;
Teitelman, Warren
Format-directed list processing in LISP
This article describes a notation and a programming language for expressing, from within a LISP system, string transformations such as those performed in COMIT or SNOBOL. A simple transformation (or transformation rule) is specified by providing a pattern which must match the structure to be transformed and a format which specifies how to construct a new structure according to the segmentation specified by the pattern. The patterns and formats are greatly generalized versions of the left-half and right-half rules of COMIT and SNOBOL. For example, elementary patterns and formats can be variable names, results of computations, disjunctive sets, or repeating subpatterns; predicates can be associated with elementary patterns which check relationships among separated elements of the match; it is no longer necessary to restrict the operations to linear strings since elementary patterns can themselves match structures. The FLIP language has been implemented in LISP 1.5, and has been successfully used in such disparate tasks as editing LISP functions and parsing Kleene regular expressions.
Daniel G. Bobrow ;
Daniel L. Murphy
Preliminary Specification for BBN 940 LISP
1967
Teitelman, Warren DESIGN AND IMPLEMENTATION OF FLIP, A LISP FORMAT DIRECTED LIST PROCESSOR
The paper discusses some of the considerations involved in designing and implementing a pattern matching or COMIT feature inside of LISP. The programming language FLIP is presented here as a paradigm for such a feature. The design and implementation of FLIP discussed below emphasizes compact notation and efficiency of operation. In addition, FLIP is a modular language and can be readily extended and generalized to include features found in other pattern driven languages such as CONVERT and SNOBOL. This makes it extremely versatile. The development of this paper proceeds from abstract considerations to specific details. The syntax and semantics of FLIP are presented first, followed by a discussion of the implementation with special attention devoted to techniques used for reducing the number of conses required as well as improving search strategy. Finally, FLIP is treated as a working system and viewed from the user's standpoint. Here we present some of the additions and extensions to FLIP that have evolved out of almost two years of experimentation. These transform it from a notational system into a practical and useful programming system.
Section: Technical Reports
Bobrow, Daniel G. ;
Darley, D. Lucille ;
Deutsch, L. Peter ;
Murphy, Daniel L. ;
Teitelman, Warren
THE BBN 940 LISP SYSTEM
The report describes the LISP system implemented at BBN on the SDS 940 Computer. This LISP is an upward compatible extension of LISP 1.5 for the IBM 7090, with a number of new features which make it work well as an on-line language. These new features include tracing, conditional breakpoints in functions for debugging, and a sophisticated LISP-oriented editor. The BBN 940 LISP SYSTEM has a large memory store (approximately 50,000 free words) utilizing special paging techniques for a drum to provide reasonable computation times. The system includes an interpreter, a fully compatible compiler, and an assembly language facility for inserting machine code subroutines.
Section: Technical Reports
Bobrow, Daniel G. ;
Murphy, Daniel L.
Structure of a LISP system using two-level storage: Communications of the ACM
In an ideal list-processing system there would be enough core memory to contain all the data and programs. Described in this paper are a number of techniques that have been used to build a LISP system utilizing a drum for its principal storage medium, with a surprisingly low time penalty for use of this slow storage device. The techniques include careful segmentation of system programs, allocation of virtual memory to allow address arithmetic for type determination, and a special algorithm for building reasonably linearized lists. A scheme for binding variables is described which is good in this environment and allows for complete compatibility between compiled and interpreted programs with no special declarations.
Teitelman, Warren Recent Improvements to 940 LISP Library
Daniel G. Bobrow ;
D. Lucille Darley ;
L. Peter Deutsch ;
Daniel L. Murphy ;
Warren Teitelman
The BBN-LISP System
This report describes the LISP system implemented at BBN on the SDS 940 Computer. This LISP is an upward compatible extension of LISP 1.5 for the IBM 7090, with a number of new features which make it work well as an on-line language. These new features include tracing, conditional breakpoints in functions for debugging, and a sophisticated LISP-oriented editor. The BBN 940 LISP SYSTEM has a large memory store (approximately 50,000 free words) utilizing special paging techniques for a drum to provide reasonable computation times. The system includes an interpreter, a fully compatible compiler, and an assembly language facility for inserting machine code subroutines.
1968
Sproull, Robert F. ;
Sutherland, Ivan E.
A clipping divider
When compared with a drawing on paper, the pictures presented by today's computer display equipment are sadly lacking in resolution. Most modern display equipment uses 10 bit digital to analog converters, providing for display in a 1024 by 1024 square raster. The actual resolution available is usually somewhat less since adjacent spots or lines will overlap. Even large-screen displays have limited resolution, for although they give a bigger picture, they also draw wider lines so that the amount of material which can appear at one time is still limited. Users of larger paper drawings have become accustomed to having a great deal of material presented at once. The computer display scope alone cannot serve the many tasks which require relatively large drawings with fine details.
Bobrow, Daniel G. ;
Murphy, Daniel L.
A note on the efficiency of a LISP computation in a paged machine
The problem of the use of two levels of storage for programs is explored in the context of a LISP system which uses core memory as a buffer for a large virtual memory stored on a drum. Details of timing are given for one particular problem.
1969
The BBN-LISP system: Reference Manual
This document describes the BBN-LISP system currently implemented on the SDS 940. It is a dialect of LISP 1.5, and the differences between the IBM 7090 version and this system are described in Appendices 1 and 2. Principally, this system has been expanded from the LISP 1.5 on the 7090 in a number of different ways. BBN-LISP is designed to utilize a drum for storage and to provide the user a large virtual memory, with a relatively small penalty in speed (using special paging techniques described in Bobrow and Murphy 1967).
Bobrow, D. G. LISP bulletin
This first (long delayed) LISP Bulletin contains samples of most of those types of items which the editor feels are relevant to this publication. These include announcements of new (i.e. not previously announced here) implementations of LISP (or closely related) systems; quick tricks in LISP; abstracts of LISP related papers; short writeups and listings of useful programs; and longer articles on problems of general interest to the entire LISP community. Printing of these last articles in the Bulletin does not interfere with later publications in formal journals or books. Short write-ups of new features added to LISP are of interest, preferably upward compatible with LISP 1.5, especially if they are illustrated by programming examples.
1971
Teitelman, W. ;
Bobrow, D. G. ;
Hartley, A. K. ;
Murphy, D. L.
BBN - LISP, TENEX Reference Manual
1972
J778.SYSREM DOC on SYS05 (LISP features designed to aid the LISP programmer)
Catalog Number: 102721861
Category: Program Listing
Collection Title: Gift of Larry Masinter
Credit: Gift of Larry Masinter
Teitelman, W. ;
Bobrow, D. G. ;
Hartley, A. K. ;
Murphy, D. L.
BBN - LISP, TENEX Reference Manual, Revised
Teitelman, Warren Automated programmering: the programmer's assistant
This paper describes a research effort and programming system designed to facilitate the production of programs. Unlike automated programming, which focuses on developing systems that write programs, automated programmering involves developing systems which automate (or at least greatly facilitate) those tasks that a programmer performs other than writing programs: e.g., repairing syntactical errors to get programs to run in the first place, generating test cases, making tentative changes, retesting, undoing changes, reconfiguring, massive edits, et al., plus repairing and recovering from mistakes made during the above. When the system in which the programmer is operating is cooperative and helpful with respect to these activities, the programmer can devote more time and energy to the task of programming itself, i.e., to conceptualizing, designing and implementing. Consequently, he can be more ambitious, and more productive.
1973
Reboh, Rene ;
Sacerdoti, Earl
A Preliminary QLISP Manual
A preliminary version of QLISP is described. QLISP permits free intermingling of QA4-like constructs with INTERLISP code. The preliminary version contains features similar to those of QA4 except for the backtracking of control environments. It provides several new features as well. This preliminary manual presumes a familiarity with both INTERLISP and the basic concepts of QA4. It is intended to update rather than replace the existing documentation of QA4.
Section: Technical Reports
Bobrow, Daniel G. ;
Wegbreit, Ben
A model and stack implementation of multiple environments
Many control and access environment structures require that storage for a procedure activation exist at times when control is not nested within the procedure activated. This is straightforward to implement by dynamic storage allocation with linked blocks for each activation, but rather expensive in both time and space. This paper presents an implementation technique using a single stack to hold procedure activation storage which allows retention of that storage for durations not necessarily tied to control flow. The technique has the property that, in the simple case, it runs identically to the usual automatic stack allocation and deallocation procedure. Applications of this technique to multitasking, coroutines, backtracking, label-valued variables, and functional arguments are discussed. In the initial model, a single real processor is assumed, and the implementation assumes multiple processes coordinate by passing control explicitly to one another. A multiprocessor implementation requires only a few changes to the basic technique, as described.
Deutsch, L. Peter A LISP machine with very compact programs
This paper presents a machine designed for compact representation and rapid execution of LISP programs. The machine language is a factor of 2 to 5 more compact than S-expressions or conventional compiled code, and the compiler is extremely simple. The encoding scheme is potentially applicable to data as well as program. The machine also provides for user-defined data structures.
Teitelman, Warren INTERLISP
INTERLISP (INTERactive LISP) is a LISP system currently implemented on the DEC PDP-10 under the BBN TENEX time sharing system<*R1>. INTERLISP is designed to provide the user access to the large virtual memory allowed by TENEX, with a relatively small penalty in speed (using special paging techniques described in <*R2>). Additional data types have been added, including strings, arrays, and hash association tables (hash links). The system includes a compatible compiler and interpreter. Machine code can be intermixed with INTERLISP expressions via the assemble directive of the compiler. The compiler also contains a facility for "block compilation" which allows a group of functions to be compiled as a unit, suppressing internal names. Each successive level of computation, from interpreted through compiled, to block-compiled provides greater speed at a cost of debugging ease.
Deutsch, L. Peter An interactive program verifier
Program verification refers to the idea that the intent or effect of a program can be stated in a precise way that is not a simple "rewording" of the program itself, and that one can prove (in the mathematical sense) that a program actually conforms to a given statement of intent. This thesis describes a software system which can verify (prove) some non-trivial programs automatically. The system described here is organized in a novel manner compared to most other theorem-proving systems. It has a great deal of specific knowledge about integers and arrays of integers, yet it is not "special-purpose", since this knowledge is represented in procedures which are separate from the underlying structure of the system. It also incorporates some knowledge, gained by the author from both experiment and introspection, about how programs are often constructed, and uses this knowledge to guide the proof process. It uses its knowledge, plus contextual information from the program being verified, to simplify the theorems dramatically as they are being constructed, rather than relying on a super-powerful proof procedure. The system also provides for interactive editing of programs and assertions, and for detailed human control of the proof process when the system cannot produce a proof (or counter-example) on its own.
1974
Teitelman, Warren INTERLISP documentation
Documentation for INTERLISP in the form of the INTERLISP Reference Manual is now available and may be obtained from Warren Teitelman, Xerox Palo Alto Research Center. The new manual replaces all existing documentation, and is completely up to date (as of January, 1974). The manual is available in either loose-leaf or bound form. The loose-leaf version (binders not supplied) comes with printed separator tabs between the chapters. The bound version also includes colored divider pages between chapters, and is printed on somewhat thinner paper than the loose-leaf version, in an effort to make it 'portable' (the manual being approximately 700 pages long). Both versions contain a complete master index (approximately 1600 entries), as well as a separate index for each chapter. Although the manual is intended primarily to be used for reference, many chapters, e.g., the programmer's assistant, do-what-I-mean, CLISP, etc., include introductory and tutorial material. The manual is available in machine-readable form, and an on-line question-answering system using the manual as a data base is currently being implemented.
Deutsch, P. Display primitives in Lisp
Several conflicting goals must be resolved in deciding on a set of display facilities for Lisp: ease of use, efficient access to hardware facilities, and device and system independence. This memo suggests a set of facilities constructed in two layers: a lower layer that gives direct access to the Alto bitmap capability, while retaining Lisp's tradition of freeing the programmer from storage allocation worries, and an upper layer that uses the lower (on the Alto) or a character-stream protocol (for VTS on MAXC) to provide for writing strings, scrolling, editing, etc. on the screen.
1975
Haraldson, Anders LISP-details: INTERLISP/360-370
This paper gives a tutorial introduction to INTERLISP/360-370, a subset of INTERLISP which can be implemented on IBM/360 and similar systems. Descriptions of a large number of functions in INTERLISP, with numerous examples, exercises, and solutions, are included. The use of edit, break, advice, file handling, and the compiler is described, and both interactive and batch use of the system are covered.
Weyl, Stephen An Interlisp Relational Data Base System.
This report describes the file system for the experimental large file management support system currently being implemented at SRI. INTERLISP, an interactive, development-oriented computer programming system, has been augmented to support applications requiring large data bases maintained on secondary store. The data base support programs are separated into two levels: an advanced file system and relational data base management procedures. The file system allows programmers to make full use of the capabilities of on-line random access devices using problem related symbolic primitives rather than page and word numbers. It also performs several useful data storage functions such as data compression, sequencing, and generation of symbols which are unique for a file.
Section: Technical Reports
Bobrow, Daniel G. A note on hash linking
In current machine designs, a machine address gives the user direct access to a single piece of information, namely the contents of that machine word. This note is based on the observation that it is often useful to associate additional information, with some (relatively few) address locations determined at run time, without the necessity of preallocating the storage at all possible such addresses. That is, it can be useful to have an effective extra bit, field, or address in some words without every word having to contain a bit (or bits) to mark this as a special case. The key idea is that this extra associated information can be found by a table search. Although it could be found by any search technique (e.g. linear, binary sorted, etc.), we suggest that an appropriate low overhead mechanism is to use hash search on a table in which the key is the address of the cell to be augmented.
Deutsch, L. Peter PIVOT source listing
Program verification refers to the idea that the intent or effect of a program can be stated in a precise way that is not a simple "rewording" of the program itself, and that one can prove (in the mathematical sense) that a program actually conforms to a given statement of intent. This thesis describes a software system which can verify (prove) some non-trivial programs automatically. The system described here is organized in a novel manner compared to most other theorem-proving systems. It has a great deal of specific knowledge about integers and arrays of integers, yet it is not "special-purpose", since this knowledge is represented in procedures which are separate from the underlying structure of the system. It also incorporates some knowledge, gained by the author from both experiment and introspection, about how programs are often constructed, and uses this knowledge to guide the proof process. It uses its knowledge, plus contextual information from the program being verified, to simplify the theorems dramatically as they are being constructed, rather than relying on a super-powerful proof procedure. The system also provides for interactive editing of programs and assertions, and for detailed human control of the proof process when the system cannot produce a proof (or counter-example) on its own.
Deutsch, P. Status Report on Alto Lisp
1976
Moore, J. Strother The Interlisp Virtual Machine Specification
The INTERLISP Virtual Machine is the environment in which the INTERLISP System is implemented. It includes such abstract objects as "Literal Atoms", "List Cells", "Integers", etc., the basic LISP functions for manipulating them, the underlying program control and variable binding mechanisms, the input/output facilities, and interrupt processing facilities. In order to implement the INTERLISP System (as described in The INTERLISP Reference Manual by W. Teitelman, et al.) on some physical machine, it is only necessary to implement the INTERLISP Virtual Machine, since Virtual Machine compatible source code for the rest of the INTERLISP System can be obtained from publicly available files. This document specifies the behavior of the INTERLISP Virtual Machine from the implementor's point of view. That is, it is an attempt to make explicit those things which must be implemented to allow the INTERLISP System to run on some machine. KEY WORDS AND PHRASES: programming language semantics, LISP, dynamic storage allocation, interpreters, spaghetti stacks, abstract data types, function objects, FUNARGs, applicative programming languages, control structures, interactive systems, DWIM, programmer's assistant, automatic error correction, eval, error handling, interrupt
Sacerdoti, Earl D. ;
Fikes, Richard E. ;
Reboh, Rene ;
Sagalowicz, Daniel ;
Waldinger, Richard J. ;
Wilber, B. Michael
Qlisp: a language for the interactive development of complex systems
Clark, Douglas W. List Structure: Measurements, Algorithms, and Encodings
This thesis is about list structures: how they are used in practice, how they can be moved and copied efficiently, and how they can be represented by space-saving encodings. The approach taken to these subjects is mainly empirical. Measurement results are based on five large programs written in Interlisp, a sophisticated Lisp system that runs on the PDP-10.
Bobrow, Robert ;
Grignetti, Mario
Interlisp performance measurements
This report describes measurements performed for the purpose of determining areas of potential improvement to the efficiency of INTERLISP running under TENEX.
Teitelman, W. Proposal for Research on Interlisp and Network-Based Systems
The contract covered by this annual report includes a variety of activities and services centering around the continued growth and well-being of INTERLISP, a large, interactive system widely used in the ARPA community for developing advanced and sophisticated computer-based systems.
Section: Technical Reports
Deutsch, L. Peter ;
Bobrow, Daniel G.
An efficient, incremental, automatic garbage collector
This paper describes a new way of solving the storage reclamation problem for a system such as Lisp that allocates storage automatically from a heap, and does not require the programmer to give any indication that particular items are no longer useful or accessible. A reference count scheme for reclaiming non-self-referential structures, and a linearizing, compacting, copying scheme to reorganize all storage at the user's discretion are proposed. The algorithms are designed to work well in systems which use multiple levels of storage, and large virtual address space. They depend on the fact that most cells are referenced exactly once, and that reference counts need only be accurate when storage is about to be reclaimed. A transaction file stores changes to reference counts, and a multiple reference table stores the count for items which are referenced more than once.
Teitelman, Warren Clisp: Conversational Lisp
Clisp is an attempt to make Lisp programs easier to read and write by extending the syntax of Lisp to include infix operators, IF-THEN statements, FOR-DO-WHILE statements, and similar Algol-like constructs, without changing the structure or representation of the language. Clisp is implemented through Lisp's error handling machinery, rather than by modifying the interpreter. When an expression is encountered whose evaluation causes an error, the expression is scanned for possible Clisp constructs, which are then converted to the equivalent Lisp expressions. Thus, users can freely intermix Lisp and Clisp without having to distinguish which is which. Emphasis in the design and development of Clisp has been on the system aspects of such a facility, with the goal of producing a useful tool, not just another language. To this end, Clisp includes interactive error correction and many "do-what-I-mean" features.
Conference Name: IEEE Transactions on Computers
Masinter, Larry PARCMESSAGE.TXT.1.
1977
Teitelman, Warren A display oriented programmer's assistant
This paper continues and extends previous work by the author in developing systems which provide the user with various forms of explicit and implicit assistance, and in general cooperate with the user in the development of his programs. The system described in this paper makes extensive use of a bit map display and pointing device (a mouse) to significantly enrich the user's interactions with the system, and to provide capabilities not possible with terminals that essentially emulate hard copy devices. For example, any text that is displayed on the screen can be pointed at and treated as input, exactly as though it were typed, i.e., the user can say use this expression or that value, and then simply point. The user views his programming environment through a collection of display windows, each of which corresponds to a different task or context. The user can manipulate the windows, or the contents of a particular window, by a combination of keyboard inputs or pointing operations. The technique of using different windows for different tasks makes it easy for the user to manage several simultaneous tasks and contexts, e.g., defining programs, testing programs, editing, asking the system for assistance, sending and receiving messages, etc., and to switch back and forth between these tasks at his convenience.
Burton, Richard R. Semantic grammar: an engineering technique for constructing natural language understanding systems
One of the major stumbling blocks to more effective use of computers by naive users is the lack of natural means of communication between the user and the computer system. This report discusses a paradigm for constructing efficient and friendly man-machine interface systems involving subsets of natural language for limited domains of discourse. As such this work falls somewhere between highly constrained formal language query systems and unrestricted natural language understanding systems. The primary purpose of this research is not to advance our theoretical understanding of natural language but rather to put forth a set of techniques for embedding both semantic/conceptual and pragmatic information into a useful natural language interface module. Our intent has been to produce a front end system which enables the user to concentrate on his problem or task rather than making him worry about how to communicate his ideas or questions to the machine.
Sproull, Robert F. INTERLISP DISPLAY PRIMITIVES
This report briefly describes a set of display primitives that we have developed at PARC to extend the capabilities of Interlisp [1]. These primitives are designed to operate a raster-scanned display, and concentrate on facilities for placing text carefully on the display and for moving chunks of an already-created display.
Teitelman, Warren A Display Oriented Programmers Assistant
This paper continues and extends previous work by the author in developing systems which provide the user with various forms of explicit and implicit assistance, and in general co-operate with the user in the development of his programs. The system described in this paper makes extensive use of a bit map display and pointing device (a mouse) to significantly enrich the user's interactions with the system, and to provide capabilities not possible with terminals that essentially emulate hard copy devices. For example, any text that is displayed on the screen can be pointed at and treated as input, exactly as though it were typed, i.e. the user can say use this expression or that value, and then simply point. The user views his programming environment through a collection of display windows, each of which corresponds to a different task or context. The user can manipulate the windows, or the contents of a particular window, by a combination of keyboard inputs or pointing operations. The technique of using different windows for different tasks makes it easy for the user to manage several simultaneous tasks and contexts, e.g. defining programs, testing programs, editing, asking the system for assistance, sending and receiving messages, etc. and to switch back and forth between these tasks at his convenience.
1978
Fiala, E. R. The Maxc Systems
Publisher: Institute of Electrical and Electronics Engineers
Report Number: 11
Fatcat ID: release_q5bn52bkmvgfnfpyj3fa6wfnzy
Brachman, Ronald ;
Ciccarelli, Eugene ;
Greenfeld, Norton ;
Yonke, Martin
KLONE Reference Manual
DOI: 10.21236/ADA122437
Deutsch, L. Peter Experience with a microprogrammed Interlisp system
This paper presents the design of an Interlisp system running on a microprogrammed minicomputer. We discuss the constraints imposed by compatibility requirements and by the hardware, the important design decisions, and the most prominent successes and failures of our design, and offer some suggestions for future designers of small Lisp systems. This extended abstract contains only qualitative results. Supporting measurement data will be presented at MICRO-11.
Deutsch, L. Peter INSIDE INTERLISP: TWO IMPLEMENTATIONS
1979
Teitelman, Warren ;
Kaplan, Ron
new-lisp-messages.txt.1.
Sproull, Robert F. Raster graphics for interactive programming environments
Raster-scan display terminals can significantly improve the quality of interaction with conventional computer systems. The design of a graphics package to provide a “window” into the extensive programming environment of interlisp is presented. Two aspects of the package are described: first, the functional view of display output and interactive input facilities as seen by the programmer, and second, the methods used to link the display terminal to the main computer via a packet-switched computer network. Recommendations are presented for designing operating systems and programming languages so as to simplify attaching display terminals.
Bobrow, Daniel G. ;
Deutsch, L. Peter
Extending Interlisp for modularization and efficiency
Sproull, Robert F. Raster Graphics for Interactive Programming Environments
Raster-scan display terminals can significantly improve the quality of interaction with conventional computer systems. The design of a graphics package to provide a "window" into the extensive programming environment of Interlisp is presented. Two aspects of the package are described: first, the functional view of display output and interactive input facilities as seen by the programmer, and second, the methods used to link the display terminal to the main computer via a packet-switched computer network. Recommendations are presented for designing operating systems and programming languages so as to simplify attaching display terminals. An appendix contains detailed documentation of the graphics package.
Teitelman, Warren A display oriented programmer's assistant
This paper continues and extends previous work by the author in developing systems which provide the user with various forms of explicit and implicit assistance, and in general co-operate with the user in the development of his programs. The system described in this paper makes extensive use of a bit map display and pointing device (a mouse) to significantly enrich the user's interactions with the system, and to provide capabilities not possible with terminals that essentially emulate hard copy devices. For example, any text that is displayed on the screen can be pointed at and treated as input, exactly as though it were typed, i.e. the user can say use this expression or that value, and then simply point. The user views his programming environment through a collection of display windows, each of which corresponds to a different task or context. The user can manipulate the windows, or the contents of a particular window, by a combination of keyboard inputs or pointing operations. The technique of using different windows for different tasks makes it easy for the user to manage several simultaneous tasks and contexts, e.g. defining programs, testing programs, editing, asking the system for assistance, sending and receiving messages, etc. and to switch back and forth between these tasks at his convenience.
Moore, J. Strother The INTERLISP Virtual Machine Specification: Revised
The INTERLISP Virtual Machine is the environment in which the INTERLISP System is implemented. It includes such abstract objects as Literal Atoms, List Cells, Integers, etc., the basic LISP functions for manipulating them, the underlying program control and variable binding mechanisms, the input/output facilities, and interrupt processing facilities. In order to implement the INTERLISP System (as described in The INTERLISP Reference Manual by W. Teitelman, et al.) on some physical machine, it is only necessary to implement the INTERLISP Virtual Machine, since Virtual Machine compatible source code for the rest of the INTERLISP System can be obtained from publicly available files. This document specifies the behavior of the INTERLISP Virtual Machine from the implementor's point of view. That is, it is an attempt to make explicit those things that must be implemented to allow the INTERLISP System to run on some machine.
Bobrow, Daniel G. ;
Clark, Douglas W.
Compact Encodings of List Structure
List structures provide a general mechanism for representing easily changed structured data, but can introduce inefficiencies in the use of space when fields of uniform size are used to contain pointers to data and to link the structure. Empirically determined regularity can be exploited to provide more space-efficient encodings without losing the flexibility inherent in list structures. The basic scheme is to provide compact pointer fields big enough to accommodate most values that occur in them and to provide “escape” mechanisms for exceptional cases. Several examples of encoding designs are presented and evaluated, including two designs currently used in Lisp machines. Alternative escape mechanisms are described, and various questions of cost and implementation are discussed. In order to extrapolate our results to larger systems than those measured, we propose a model for the generation of list pointers and we test the model against data from two programs. We show that according to our model, list structures with compact cdr fields will, as address space grows, continue to be compacted well with a fixed-width small field. Our conclusion is that with a microcodable processor, about a factor of two gain in space efficiency for list structure can be had for little or no cost in processing time.
1980
Kaplan, Ronald M. ;
Sheil, B. A.
Adding Type Declarations to Interlisp.
Masinter, Larry Melvin Global Program Analysis in an Interactive Environment by Larry Melvin Masinter SSL-80-1 JANUARY 1980
This dissertation describes a programming tool, implemented in Lisp, called SCOPE. The basic idea behind SCOPE can be stated simply: SCOPE analyzes a user's programs, remembers what it sees, is able to answer questions based on the facts it remembers, and is able to incrementally update the data base when a piece of the program changes. A variety of program information is available about cross references, data flow and program organization. Facts about programs are stored in a data base; to answer a question, SCOPE retrieves and makes inferences based on information in the data base. SCOPE is interactive because it keeps track of which parts of the programs have changed during the course of an editing and debugging session, and is able to automatically and incrementally update its data base. Because SCOPE performs whatever re-analysis is necessary to answer the question when the question is asked, SCOPE maintains the illusion that the data base is always up to date: other than the additional wait time, it is as if SCOPE knew the answer all along. SCOPE's foundation is a representation system in which properties of pieces of programs can be expressed. The objects of SCOPE's language are pieces of programs, and in particular, definitions of symbols, e.g., the definition of a procedure or a data structure. SCOPE does not model properties of individual statements or expressions in the program: SCOPE knows only individual facts about procedures, variables, data structures and other pieces of a program which can be assigned as the definition of symbols. The facts are relations between the name of a definition and other symbols. For example, one of the relations that SCOPE keeps track of is Call: Call(FN1, FN2) holds if the definition whose name is FN1 contains a call to a procedure named FN2. SCOPE has two interfaces: one to the user and one to other programs.
The user interface is an English-like command language which allows for a uniform command structure and convenient defaults: the most frequently used commands are the easiest to type. All of the power available with the command language is accessible through the program interface as well. The compiler and various other utilities use the program interface.
Koomen, Johannes A. G. M. The interlisp virtual machine: study of its design and its implementation as multilisp
Abstract machine definitions have been recognized as convenient and powerful tools for enhancing software portability. One such machine, the Interlisp Virtual Machine, is examined in this thesis. We present the Multilisp System as an implementation of the Virtual Machine and discuss some of the design criteria and difficulties encountered in mapping the Virtual Machine onto a particular environment. On the basis of our experience with Multilisp we indicate several weaknesses of the Virtual Machine which impair its adequacy as a basis for a portable Interlisp System.
DOI: 10.14288/1.0051801
Masinter, Larry M. ;
Deutsch, L. Peter
Local optimization in a compiler for stack-based Lisp machines
We describe the local optimization phase of a compiler for translating the INTERLISP dialect of LISP into stack-architecture (0-address) instruction sets. We discuss the general organization of the compiler, and then describe the set of optimization techniques found most useful, based on empirical results gathered by compiling a large set of programs. The compiler and optimization phase are machine independent, in that they generate a stream of instructions for an abstract stack machine, which an assembler subsequently turns into the actual machine instructions. The compiler has been in successful use for several years, producing code for two different instruction sets.
Deutsch, L. Peter ByteLisp and its Alto implementation
This paper describes in detail the most interesting aspects of ByteLisp, a transportable Lisp system architecture which implements the Interlisp dialect of Lisp, and its first implementation, on a microprogrammed minicomputer called the Alto. Two forthcoming related papers will deal with general questions of Lisp machine and system architecture, and detailed measurements of the Alto ByteLisp system described here. A highly condensed summary of the series was published at MICRO-11 in November 1978.
Burton, Richard R. ;
Masinter, L. M. ;
Bobrow, Daniel G. ;
Haugeland, Willie Sue ;
Kaplan, Ronald M. ;
Sheil, B. A.
Overview and status of DoradoLisp
DoradoLisp is an implementation of the Interlisp programming system on a large personal computer. It has evolved from AltoLisp, an implementation on a less powerful machine. The major goal of the Dorado implementation was to eliminate the performance deficiencies of the previous system. This paper describes the current status of the system and discusses some of the issues that arose during its implementation. Among the techniques that helped us meet our performance goal were transferring much of the kernel software into Lisp, intensive use of performance measurement tools to determine the areas of worst performance, and use of the Interlisp programming environment to allow rapid and widespread improvements to the system code. The paper lists some areas in which performance was critical and offers some observations on how our experience might be useful to other implementations of Interlisp.
Brachman, Ronald J. ;
Smith, Brian C.
Special issue on knowledge representation
In the fall of 1978 we decided to produce a special issue of the SIGART Newsletter devoted to a survey of current knowledge representation research. We felt that there were two useful functions such an issue could serve. First, we hoped to elicit a clear picture of how people working in this subdiscipline understand knowledge representation research, to illuminate the issues on which current research is focused, and to catalogue what approaches and techniques are currently being developed. Second -- and this is why we envisaged the issue as a survey of many different groups and projects -- we wanted to provide a document that would enable the reader to acquire at least an approximate sense of how each of the many different research endeavours around the world fit into the field as a whole. It would of course be impossible to produce a final or definitive document accomplishing these goals: rather, we hoped that this survey could initiate a continuing dialogue on issues in representation, a project for which this newsletter seems the ideal forum. It has been many months since our original decision was made, but we are finally able to present the results of that survey. Perhaps more than anything else, it has emerged as a testament to an astounding range and variety of opinions held by many different people in many different places. The following few pages are intended as an introduction to the survey as a whole, and to this issue of the newsletter. We will briefly summarize the form that the survey took, discuss the strategies we followed in analyzing and tabulating responses, briefly review the overall sense we received from the answers that were submitted, and discuss various criticisms which were submitted along with the responses. The remainder of the volume has been designed to be roughly self-explanatory at each point, so that one may dip into it at different places at will.
Certain conventions, however, particularly regarding indexing and tabulating, will also be explained in the remainder of this introduction. As editors, we are enormously grateful to the many people who devoted substantial effort to responding to our survey. It is our hope that the material presented here will be interesting and helpful to our readers, and that fruitful discussion of these and other issues will continue energetically and enthusiastically into the future.
Burton, Richard R. ;
Kaplan, Ronald M. ;
Masinter, B. ;
Sheil, B. A. ;
Bell, A. ;
Bobrow, D. G. ;
Deutsch, L. P. ;
Haugeland, W. S.
Papers on interlisp-D
This report consists of five papers on Interlisp-D, a refinement and implementation of the Interlisp virtual machine [Moore, 76] which supports the Interlisp programming system [Teitelman et al., 78] on the Dolphin and Dorado personal computers.
1981
Moore, J. Strother The TXDT Package-Interlisp Text Editing Primitives
The TXDT package is a collection of INTERLISP programs designed for those who wish to build text editors in INTERLISP. TXDT provides a new INTERLISP data type, called a buffer, and programs for efficiently inserting, deleting, searching and manipulating text in buffers. Modifications may be made undoable. A unique feature of TXDT is that an address may be "stuck" to a character occurrence so as to follow that character wherever it is subsequently moved. TXDT also has provisions for fonts.
Davis, Randall ;
Austin, Howard ;
Carlbom, Ingrid ;
Frawley, Bud ;
Pruchnik, Paul ;
Sneiderman, Rich ;
Gilreath, J.
The DIPMETER ADVISOR: Interpretation of Geologic Signals.
The DIPMETER ADVISOR program is an application of AI and Expert System techniques to the problem of inferring subsurface geologic structure. It synthesizes techniques developed in two previous lines of work, rule-based systems and signal understanding programs. This report on the prototype system has four main concerns. First, we describe the task and characterize the various bodies of knowledge required. Second, we describe the design of the system we have built and the level of performance it has currently reached. Third, we use this task as a case study and examine it in the light of other, related efforts, showing how particular characteristics of this problem have dictated a number of design decisions. We consider the character of the interpretation hypotheses generated and the sources of the expertise involved. Finally, we discuss future directions of this early effort. We describe the problem of "shallow knowledge" in expert systems and explain why this task appears to provide an attractive setting for exploring the use of deeper models.
Teitelman, W. ;
Masinter, L.
The Interlisp Programming Environment
Integration, extensibility, and ease of modification made Interlisp unique and powerful. Its adaptations will enhance the power of the coming world of personal computing and advanced displays.
Publisher: IEEE Computer Society
Masinter, Larry M. Interlisp-VAX: A Report
Barstow, David R. Overview of a display-oriented editor for INTERLISP
DED is a display-oriented editor that was designed to add the power and convenience of display terminals to INTERLISP's teletype-oriented structure editor. DED divides the display screen into a Prettyprint Region and an Interaction Region. The Prettyprint Region gives a prettyprinted view of the structure being edited; the Interaction Region contains the interaction between the user and INTERLISP's standard editor. DED's prettyprinter allows elision, and the user may zoom in or out to see the expression being edited with more or less detail. There are several arrow keys which allow the user to change quite easily the locus of attention in certain structural ways, as well as a menu-like facility for common command sequences. Together, these features provide a display facility that considerably augments INTERLISP's otherwise quite sophisticated user interface.
Sheil, Beau Interlisp-D: further steps in the flight from time-sharing
The Interlisp-D project was formed to develop a personal machine implementation of Interlisp for use as an environment for research in artificial intelligence and cognitive science [Burton et al., 80b]. This note describes the principal developments since our last report almost a year ago [Burton et al., 80a].
1982
Finin, Tim Translating KL-One from interlisp to Franzlisp
We describe an effort to translate the Interlisp KL-ONE system into Franzlisp to enable it to be run on a VAX. This effort has involved Tim Finin, Richard Duncan and Hassan Ait-Kaci from the University of Pennsylvania, Judy Weiner from Temple University, Jane Barnett from Computer Corporation of America and Jim Schmolze from Bolt Beranek and Newman. The primary motivation for this project was to make a version of KL-ONE available on a VAX-11/780. A VAX Interlisp is not yet available, although one is being written and will soon be available. Currently, the only substantial Lisp for a VAX is the Berkeley FranzLisp system. As a secondary motivation, we are interested in making KL-ONE more available in general, on a variety of Lisp dialects and machines.
Bates, Raymond L. ;
Dyer, David ;
Koomen, Johannes A. G. M.
Implementation of Interlisp on the VAX
This paper presents some of the issues involved in implementing Interlisp [19] on a VAX computer [24] with the goal of producing a version that runs under UNIX [17], specifically Berkeley VM/UNIX. This implementation has the following goals: • To be compatible with and functionally equivalent to Interlisp-10. • To serve as a basis for future Interlisp implementations on other mainframe computers. This goal requires that the implementation be portable. • To support a large virtual address space. • To achieve a reasonable speed. The implementation draws directly from three sources, Interlisp-10 [19], Interlisp-D [5], and Multilisp [12]. Interlisp-10, the progenitor of all Interlisps, runs on the PDP-10 under the TENEX [2] and TOPS-20 operating systems. Interlisp-D, developed at Xerox Palo Alto Research Center, runs on personal computers also developed at PARC. Multilisp, developed at the University of British Columbia, is a portable interpreter containing a kernel of Interlisp, written in Pascal [9] and running on the IBM Series/370 and the VAX. The Interlisp-VAX implementation relies heavily on these implementations. In turn, Interlisp-D and Multilisp were developed from The Interlisp Virtual Machine Specification [15] by J Moore (subsequently referred to as the VM specification), which discusses what is needed to implement an Interlisp by describing an Interlisp Virtual Machine from the implementors' point of view. Approximately six man-years of effort have been spent exclusively in developing Interlisp-VAX, plus the benefit of many years of development for the previous Interlisp implementations.
Bates, Raymond ;
Dyer, David ;
Koomen, Johannes ;
Saunders, Steven ;
Voreck, Donald
Interlisp-VAX Users Manual
1983
Stefik, Mark ;
Bobrow, Daniel G
The LOOPS Manual
Stefik, Mark ;
Bobrow, Daniel G ;
Mittal, Sanjay ;
Conway, Lynn
KNOWLEDGE PROGRAMMING IN LOOPS:
Early this year fifty people took an experimental course at Xerox PARC on knowledge programming in Loops. During the course, they extended and debugged small knowledge systems in a simulated economics domain called Truckin. Everyone learned how to use the Loops environment, formulated the knowledge for their own program, and represented it in Loops. At the end of the course a knowledge competition was run so that the strategies used in the different systems could be compared. The punchline to this story is that almost everyone learned enough about Loops to complete a small knowledge system in only three days. Although one must exercise caution in extrapolating from small experiments, the results suggest that there is substantial power in integrating multiple programming paradigms.
Novak, Jr, Gordon S. GLISP: A Lisp-based Programming System with Data Abstraction
Sannella, Michael Interlisp reference manual: Revised
OCLC: 11098633
Schoen, Eric ;
Smith, Reid G
IMPULSE: A Display Oriented Editor for STROBE
In this paper, we discuss a display-oriented editor to aid in the construction of knowledge-based systems. We also report on our experiences concerning the utility of the editor.
Stefik, Mark ;
Bobrow, Daniel ;
Mittal, Sanjay ;
Conway, Lynn
Truckin’ and the Knowledge Competitions | MJSBlog
Schrag, Robert C. Notes on the Conversion of LogLisp from Rutgers/UCI-Lisp to InterLisp,
Conversion of the LogLisp (Logic programming in Lisp) Artificial Intelligence programming environment from its original Rutgers/UCI-Lisp (RUCI-Lisp) implementation to an InterLisp implementation is described. This report may be useful to researchers wishing to convert LogLisp to yet another Lisp dialect, or to those wishing to convert other RUCI-Lisp programs into InterLisp. It is also intended to help users of the InterLisp version of LogLisp to understand the implementation. The conversion process is described at a level aimed toward potential translators who might benefit from approaches taken and lessons learned. General issues of conversion of Lisp software between dialects are discussed, use of InterLisp's dialect translation package is described, and specific issues of non-mechanizable conversion are addressed. The latter include dialect differences in function definitions, arrays, integer arithmetic, I/O, interrupts, and macros. Subsequent validation, compilation, and efficiency enhancement of the InterLisp version are then described. A brief user's guide to the InterLisp version and points of contact for information on LogLisp software distribution are also provided.
Section: Technical Reports
Narain, Sanjai ;
McArthur, David ;
Klahr, Philip
Large-scale system development in several lisp environments
ROSS [7] is an object-oriented language developed for building knowledge-based simulations [4]. SWIRL [5, 6] is a program written in ROSS that embeds knowledge about defensive and offensive air battle strategies. Given an initial configuration of military forces, SWIRL simulates the resulting air battle. We have implemented ROSS and SWIRL in several different Lisp environments. We report upon this experience by comparing the various environments in terms of cpu usage, real-time usage, and various user aids.
Xerox Interlisp reference manual
Interlisp began with an implementation of the Lisp programming language for the PDP-1 at Bolt Beranek and Newman in 1966. It was followed in 1967 by 940 Lisp, an upward compatible implementation for the SDS-940 computer. 940 Lisp was the first Lisp system to demonstrate the feasibility of using software paging techniques and a large virtual memory in conjunction with a list-processing system [Bobrow & Murphy, 1967]. 940 Lisp was patterned after the Lisp 1.5 implementation for CTSS at MIT, with several new facilities added to take advantage of its timeshared, on-line environment. DWIM, the Do-What-I-Mean error correction facility, was introduced into this system in 1968 by Warren Teitelman [Teitelman, 1969].
OCLC: 802551877
Becker, Jeffrey M. AQINTERLISP: An INTERLISP Program for Inductive Generalization of VL1 Event Sets
1984
Bundy, Alan ;
Wallen, Lincoln
Interlisp-D
Major dialect of LISP <34>, designed for high-resolution, bit-mapped display, distinguished by (a) use of in-core editor for structures, and thus code, (b) programming environment of tools for automatic error-correction, syntax (sic) extension and structure declaration/access, (c) implementation of almost-compatible dialects (Interlisp <X>) on several machines, (d) extensive usage of display orientated tools and facilities. Emphasis: Personal Lisp workstation, user interface tools.
DOI: 10.1007/978-3-642-96868-6_103
Smith, Reid G. On the Development of Commercial Expert Systems
We use our experience with the Dipmeter Advisor system for well-log interpretation as a case study to examine the development of commercial expert systems. We discuss the nature of these systems as we see them in the coming decade, characteristics of the evolution process, development methods, and skills required in the development team. We argue that the tools and ideas of rapid prototyping and successive refinement accelerate the development process. We note that different types of people are required at different stages of expert system development: those who are primarily knowledgeable in the domain, but who can use the framework to expand the domain knowledge; and those who can actually design and build expert systems. Finally, we discuss the problem of technology transfer and compare our experience with some of the traditional wisdom of expert system development.
Number: 3
Bates, Raymond L. ;
Dyer, David ;
Feber, Mark
Recent developments in ISI-interlisp
This paper reports on recent developments of the ISI- Interlisp implementation of Interlisp on a VAX computer. ISI-Interlisp currently runs under UNIX, specifically the Berkeley VM/UNIX and VMS operating systems. Particular attention is paid to the current status of the implementation and the growing pains experienced in the last few years. Included is a discussion of the conversion from UNIX to VAX/VMS, recent modifications and improvements, current limitations, and projected goals. Since much of the recent effort has concerned performance tuning, our observations on this activity are included. ISI-Interlisp, formerly known as Interlisp-VAX, was reported in 1982 ACM Symposium on LISP and Functional Programming, August 1982 [1]. Experiences and recommendations since the 1982 LISP conference are presented.
1985
Gabriel, Richard P. Performance and evaluation of LISP systems
The final report of the Stanford Lisp Performance Study, Performance and Evaluation of Lisp Systems is the first book to present descriptions of Lisp implementation techniques actually in use. It provides performance information using the tools of benchmarking to measure the various Lisp systems, and provides an understanding of the technical tradeoffs made during the implementation of a Lisp system. The study is divided into three parts. The first provides the theoretical background, outlining the factors that go into evaluating the performance of a Lisp system. The second part presents the Lisp implementations: MacLisp, MIT CADR, LMI Lambda, S-1 Lisp, Franz Lisp, NIL, Spice Lisp, Vax Common Lisp, Portable Standard Lisp, and Xerox D-Machine. A final part describes the benchmark suite that was used during the major portion of the study and the results themselves.
Lenat, Douglas B. ;
Prakash, Mayank ;
Shepherd, Mary
CYC: Using common sense knowledge to overcome brittleness and knowledge acquisition bottlenecks
The major limitations in building large software have always been (a) its brittleness when confronted by problems that were not foreseen by its builders, and (b) the amount of manpower required. The recent history of expert systems, for example, highlights how constricting the brittleness and knowledge acquisition bottlenecks are. Moreover, standard software methodology (e.g., working from a detailed "spec") has proven of little use in AI, a field which by definition tackles ill-structured problems. How can these bottlenecks be widened? Attractive, elegant answers have included machine learning, automatic programming, and natural language understanding. But decades of work on such systems have convinced us that each of these approaches has difficulty "scaling up" for want of a substantial base of real world knowledge.
Fletcher, Charles R. Understanding and solving arithmetic word problems: A computer simulation
XEROX 1985: Harmony and Intermezzo releases; Koto release (for Xerox 1186); some bits of Common Lisp
Anonymous Koto INTERLISP-D RELEASE NOTES
The Koto release of Interlisp-D provides a wide range of added functionality, increased performance and improved reliability. Central among these is that Koto is the first release of Interlisp that supports the new Xerox 1185/1186 artificial intelligence work stations, including the new features of these work stations such as the expanded 19" display and the PC emulation option. Of course, like previous releases of Interlisp, Koto also supports the other current members of the 1100 series of machines, specifically the 1132 and various models of the 1108.
Lehtola, A. ;
Jäppinen, H. ;
Nelimarkka, E.
Language-based environment for natural language parsing
This paper introduces a special programming environment for the definition of grammars and for the implementation of corresponding parsers. In natural language processing systems it is advantageous to have linguistic knowledge and processing mechanisms separated. Our environment accepts grammars consisting of binary dependency relations and grammatical functions. Well-formed expressions of functions and relations provide constituent surroundings for syntactic categories in the form of two-way automata. These relations, functions, and automata are described in a special definition language. In focusing on high level descriptions a linguist may ignore computational details of the parsing process. He writes the grammar into a DPL-description and a compiler translates it into efficient LISP-code. The environment also has a tracing facility for the parsing process, grammar-sensitive lexical maintenance programs, and routines for the interactive graphic display of parse trees and grammar definitions. Translator routines are also available for the transport of compiled code between various LISP-dialects. The environment itself exists currently in INTERLISP and FRANZLISP. This paper focuses on knowledge engineering issues and does not enter linguistic argumentation.
Interlisp-D Reference Manual, Volume III: Input/Output
OCLC: 802551877
Interlisp-D Reference Manual, Volume I: Language
OCLC: 802551877
Interlisp-D Reference Manual, Volume II: Environment
OCLC: 802551877
Burwell, A. D. M. Computer manipulation of geological exploration data
Report of a meeting held by the Geological Information Group at the British Petroleum Research Centre, Sunbury, 24 January 1985 This meeting, concerned mainly with computer manipulation of petroleum exploration data, attracted c. 95 participants. In addition to eight papers presented, there were two computer demonstrations of log analysis systems and a number of poster displays. The morning session, concerned with large-scale, integrated hardware and software systems, was chaired by R. Howarth. R. Till of British Petroleum gave the opening paper concerning BP Exploration’s integrated database system. BP Exploration databases fall into three main groups: those containing largely numerical data; databases specifically concerned with text handling; and well-based databases. The ‘numerical’ databases, implemented under the ULTRA database management system (dbms), include a seismic data system, a generalized cartographic database and an earth constants database. Textual databases include a library information system and a Petroconsultants scout data database, both implemented under the BASIS dbms. The well-based systems include a generalized well-data database, a wireline log archive, storage and retrieval system, and a master well index; all three are implemented under the INGRES dbms. Two related BASIS databases contain geochemical and biostratigraphical data. G. Baxter (co-author M. Hemingway) described the development of Britoil’s well log database which was prompted by the need to have rapid access to digitized wireline log data for c. 1500 wells on the UKCS. Early work involved both locating log information and digitizing those logs held in sepia form only. Each digitized log occupies approximately 1 Mbyte.
1986
Kaisler, Stephen H. INTERLISP: The Language and Its Usage
LISP, as a language, has been around for about 25 years [mcca78]. It was originally developed to support artificial intelligence (AI) research. At first, it seemed to be little noticed except by a small band of academics who implemented some of the early LISP interpreters and wrote some of the early AI programs. In the early 60’s, LISP began to diverge as various implementations were developed for different machines. McCarthy [mcca78] gives a short history of its early days.
Teitelman, Warren Ten Years of Window Systems - A Retrospective View
Both James Gosling and I currently work for SUN and the reason for my wanting to talk before he does is that I am talking about the past and James is talking about the future. I have been connected with eight window systems as a user, or as an implementor, or by being in the same building! I have been asked to give a historical view and my talk looks at window systems over ten years and features: the Smalltalk, DLisp (Interlisp), Interlisp-D, Tajo (Mesa Development Environment), Docs (Cedar), Viewers (Cedar), SunWindows and SunDew systems.
Adeli, H. ;
Paek, Y. J.
Computer-aided analysis of structures in INTERLISP environment
LISP appears to be the language of choice among the developers of knowledge-based expert systems. Analysis of structures in the INTERLISP environment is discussed in this paper. An interactive INTERLISP program is presented for analysis of frames which can be used as part of an expert system for computer-aided design of structures. Some of the concepts and characteristics of the INTERLISP language are explained by referring to the INTERLISP program.
Oldford, R. W. ;
Peters, S. C.
Object-Oriented Data Representations for Statistical Data Analysis
We discuss the design and implementation of object-oriented datatypes for a sophisticated statistical analysis environment. The discussion draws on our experience with an experimental statistical analysis system, called DINDE. DINDE resides in the integrated programming environment of a Xerox Interlisp-D machine running LOOPS. The discussion begins with our implementation of arrays, matrices, and vectors as objects in this environment. We then discuss an additional set of objects that are based on statistical abstractions rather than mathematical ones and describe their implementation in the DINDE environment.
Stefik, Mark ;
Bobrow, Daniel ;
Kahn, Kenneth
Integrating Access-Oriented Programming into a Multiparadigm Environment
The Loops knowledge programming system integrates function-oriented, system object-oriented, rule-oriented, and—something not found in most other systems—access-oriented programming.
Martz, Philip R. ;
Heffron, Matt ;
Griffith, Owen Mitch
An Expert System for Optimizing Ultracentrifugation Runs
The SpinPro™ Ultracentrifugation Expert System is a computer program that designs optimal ultracentrifugation procedures to satisfy the investigator's research requirements. SpinPro runs on the IBM PC/XT. Ultracentrifugation is a common method in the separation of biological materials. Its capabilities, however, are too often under-utilized. SpinPro addresses this problem by employing Artificial Intelligence (AI) techniques to design efficient and accurate ultracentrifugation procedures. To use SpinPro, the investigator describes the centrifugation problem in a question and answer dialogue. SpinPro then offers detailed advice on optimal and alternative procedures for performing the run. This advice results in cleaner and faster separations and improves the efficiency of the ultracentrifugation laboratory.
Section: 23
DOI: 10.1021/bk-1986-0306.ch023
Henderson, D. Austin Jr. ;
Card, Stuart K.
Rooms: The Use of Multiple Virtual Workspaces to Reduce Space Contention in a Window-Based Graphical User Interface
A key constraint on the effectiveness of window-based human-computer interfaces is that the display screen is too small for many applications. This results in “window thrashing,” in which the user must expend considerable effort to keep desired windows visible. Rooms is a window manager that overcomes small screen size by exploiting the statistics of window access, dividing the user's workspace into a suite of virtual workspaces with transitions among them. Mechanisms are described for solving the problems of navigation and simultaneous access to separated information that arise from multiple workspaces.
Henderson, D. A. The Trillium user interface design environment
Trillium is a computer-based environment for simulating and experimenting with interfaces for simple machines. For the past four years it has been used by Xerox designers for fast prototyping and testing of interfaces for copiers and printers. This paper defines the class of “functioning frame” interfaces which Trillium is used to design, discusses the major concerns that have driven the design of Trillium, and describes the Trillium mechanisms chosen to satisfy them.
Trigg, Randall H. ;
Suchman, Lucy A. ;
Halasz, Frank G.
Supporting collaboration in notecards
This paper describes a project underway to investigate computer support for collaboration. In particular, we focus on experience with and extensions to NoteCards, a hypertext-based idea structuring system. The forms of collaboration discussed include draft-passing, simultaneous sharing and online presentations. The requirement that mutual intelligibility be maintained between collaborators leads to the need for support of annotative and procedural as well as substantive activities.
Sheil, Beau POWER TOOLS FOR PROGRAMMERS
This chapter discusses the power tools for programmers. Essentially, all of the intelligent programming tools described in this volume are at most experimental prototypes. Given that these tools are still quite far from being commercial realities, it is worthwhile to note that there is a completely different way in which artificial intelligence research can help programmers. Artificial intelligence researchers are themselves programmers. Creating such programs is more a problem of exploration than implementation and does not conform to conventional software lifecycle models. The artificial intelligence programming community has always been faced with this kind of exploratory programming and has, therefore, had a head start on developing appropriate language, environment, and hardware features. Redundancy protects the design from unintentional change, the conventional programming technology restrains the programmer, and the programming languages used in exploratory systems minimize and defer constraints on the programmer.
DOI: 10.1016/B978-0-934613-12-5.50048-3
Bobrow, Daniel G. ;
Kahn, Kenneth ;
Kiczales, Gregor ;
Masinter, Larry ;
Stefik, Mark ;
Zdybel, Frank
CommonLoops: merging Lisp and object-oriented programming
CommonLoops blends object-oriented programming smoothly and tightly with the procedure-oriented design of Lisp. Functions and methods are combined in a more general abstraction. Message passing is invoked via normal Lisp function call. Methods are viewed as partial descriptions of procedures. Lisp data types are integrated with object classes. With these integrations, it is easy to incrementally move a program between the procedure and object-oriented styles. One of the most important properties of CommonLoops is its extensive use of meta-objects. We discuss three kinds of meta-objects: objects for classes, objects for methods, and objects for discriminators. We argue that these meta-objects make practical both efficient implementation and experimentation with new ideas for object-oriented programming. CommonLoops' small kernel is powerful enough to implement the major object-oriented systems in use today.
Halasz, Frank G. ;
Moran, Thomas P. ;
Trigg, Randall H.
Notecards in a nutshell
NoteCards is an extensible environment designed to help people formulate, structure, compare, and manage ideas. NoteCards provides the user with a “semantic network” of electronic notecards interconnected by typed links. The system provides tools to organize, manage, and display the structure of the network, as well as a set of methods and protocols for creating programs to manipulate the information in the network. NoteCards is currently being used by more than 50 people engaged in idea processing tasks ranging from writing research papers through designing parts for photocopiers. In this paper we briefly describe NoteCards and the conceptualization of idea processing tasks that underlies its design. We then describe the NoteCards user community and several prototypical NoteCards applications. Finally, we discuss what we have learned about the system's strengths and weaknesses from our observations of the NoteCards user community.
Xerox Artificial Intelligence Systems, Interlisp-D: A Friendly Primer
1987
Stone, Jeffrey The AAAI-86 Conference Exhibits: New Directions for Commercial Artificial Intelligence
Karttunen, Lauri ;
Koskenniemi, Kimmo ;
Kaplan, Ronald M.
A Compiler for Two-level Phonological Rules
This paper describes a system for compiling two-level phonological or orthographical rules into finite-state transducers. The purpose of this system, called TWOL, is to aid the user in developing a set of such rules for morphological generation and recognition.
Cunningham, Robert E. ;
Corbett, John D. ;
Bonar, Jeffrey G.
Chips: A Tool for Developing Software Interfaces Interactively.
Chips is an interactive tool for developing software employing graphical human-computer interfaces on Xerox Lisp machines. For the programmer, it provides a rich graphical interface for the creation of rich graphical interfaces. In the service of an end user, it provides classes for modeling the graphical relationships of objects on the screen and maintaining constraints between them. Several large applications have been developed with Chips, including intelligent tutors for programming and electricity. Chips is implemented as a collection of customizable classes in the LOOPS object-oriented extensions to Interlisp-D. The three fundamental classes are (1) DomainObject, which defines objects of the application domain — the domain for which the interface is being built — and ties together the various functionalities provided by the Chips system; (2) DisplayObject, which defines mouse-sensitive graphical objects; and (3) Substrate, which defines specialized windows for displaying and storing collections of instances of DisplayObject. A programmer creates an interface by specializing existing DomainObjects and drawing new DisplayObjects with a graphics editor. Instances of DisplayObject and Substrate are assembled on screen to form the interface. Once the interface has been sketched in this manner, the programmer can build inward, creating all other parts of the application through the objects on the screen.
Section: Technical Reports
Shanor, Gordy G. The Dipmeter Advisor - A dipmeter interpretation workstation
The Dipmeter Advisor is a knowledge-based system, linked to a computer workstation, designed to aid in the interpretation of dipmeter results through interaction between the interpreter and the "expert" system. The system utilizes dipmeter results, other wireline log data, computer processed results such as LITHO*, and user-input local geological knowledge as the framework for the interpretation. A work session proceeds through a number of phases, which leads to first a structural, then a stratigraphic interpretation of the well data. Conclusions made by the Dipmeter Advisor can be accepted, modified, or rejected by the interpreter at any stage of the work session. The user may also make his own conclusions and comments, which are stored as part of the final interpretation and become part of an updated knowledge-base for input to further field studies.
Gladwin, Lee A. Review of Interlisp: The Language and Its Usage
Publisher: IEEE Computer Society
Myers, J. D. The background of INTERNIST I and QMR
During my tenure as Chairman of the Department of Medicine at the University of Pittsburgh, 1955 to 1970, two points became clear in regard to diagnosis in internal medicine. The first was that the knowledge base in that field had become vastly too large for any single person to encompass it. The second point was that the busy practitioner, even though he knew the items of information pertinent to his patient's correct diagnosis, often did not consider the right answer, particularly if the diagnosis was an unusual disease. I resigned the position of Chairman in 1970 intending to resume my position as Professor of Medicine. However, the University saw fit to offer me the appointment as University Professor (Medicine). The University of Pittsburgh follows the practice of Harvard University, established by President James Bryant Conant in the late 1930s, in which a University Professor is a professor at large and reports only to the president of the university. He has no department, no school and is not under administrative supervision by a dean or vice-president. Thus the position allows maximal academic freedom. In this new position I felt strongly that I should conduct worthwhile research. It was almost fifteen years since I had worked in my chosen field of clinical investigation, namely splanchnic blood flow and metabolism, and I felt that research in that area had passed me by. Remembering the two points mentioned earlier — the excessive knowledge base of internal medicine and the problem of considering the correct diagnosis — I asked myself what could be done to correct these problems. It seemed that the computer with its huge memory could correct the first and I wondered if it could not help as well with the second. At that point I knew no more about computers than the average layman so I sought advice. Dr.
Gerhard Werner, our Chairman of Pharmacology, was working with computers in an attempt to map all of the neurological centers of the human brain stem with particular reference to their interconnections and functions. He was particularly concerned about the actions of pharmacological agents on this complex system. Working with him on this problem was Dr. Harry Pople, a computer scientist with special interest in “artificial intelligence”. The problem chosen was so complex and difficult that Werner and Pople were making little progress. Gerhard listened patiently to my ideas and promptly stated that he thought the projects were feasible utilizing the computer. In regard to the diagnostic component of my ambition he strongly advised that “artificial intelligence” be used. Pople was brought into the discussion and was greatly interested, I believe because of the feasibility of the project and the recognition of its practical application to the practice of medicine. The upshot was that Pople joined me in my project and Werner and Pople abandoned the work on the brain stem. Pople knew nothing about medicine and I knew nothing about computer science. Thus the first step in our collaboration was my analysis for Pople of the diagnostic process. I chose a goodly number of actual cases from clinical pathological conferences (CPCs) because they contained ample clinical data and because the correct diagnoses were known. At each small step of the way through the diagnostic process I was required to explain what the clinical information meant in context and my reasons for considering certain diagnoses. This provided to Pople insight into the diagnostic process. After analyzing dozens of such cases I felt as though I had undergone a sort of “psychoanalysis”. From this experience Pople wrote the first computer diagnostic programs seeking to emulate my diagnostic process. This has led certain “wags” to nickname our project “Jack in the box”. 
For this initial attempt Pople used the LISP computer language. We were granted access to the PROPHET PDP-10, a time-sharing mainframe maintained in Boston by the National Institutes of Health (NIH) but devoted particularly to pharmacological research. Thus we were interlopers. The first name we applied to our project was DIALOG, for diagnostic logic, but this had to be dropped because the name was in conflict with a computer program already on the market and copyrighted. The next name chosen was INTERNIST for obvious reason. However, the American Society for Internal Medicine publishes a journal entitled “The Internist” and they objected to our use of INTERNIST although there seems to be little relationship or conflict between a printed journal and a computer software program. Rather than fight the issue we simply added the Roman numeral one to our title which then became INTERNIST-I, which continues to this day. Pople's initial effort was unsuccessful, however. He diligently had incorporated details regarding anatomy and much basic pathophysiology, I believe because in my initial CPC analyses I had brought into consideration such items of information so that Pople could understand how I got from A to B etc. The diagnostician in internal medicine knows, of course, much anatomy and patho-physiology but these are brought into consideration in only a minority of diagnostic problems. He knows, for example, that the liver is in the right upper quadrant and just beneath the right leaf of the diaphragm. In most diagnostic instances this information is “subconscious”. Our first computer diagnostic program included too many such details and as a result was very slow and frequently got into analytical “loops” from which it could not extricate itself. We decided that we had to simplify the program but by that juncture much of 1971 had passed on. The new program was INTERNIST-I and even today most of the basic structure devised in 1972 remains intact. 
INTERNIST-I is written in INTERLISP and has operated on the PDP-10 and the DEC 2060. It has also been adapted to the VAX 780. Certain younger people have contributed significantly to the program, particularly Dr. Zachary Moraitis and Dr. Randolph Miller. The latter interrupted his regular medical school education to spend the year 1974-75 as a fellow in our laboratory and since finishing his formal medical education in 1979 has been active as a full-time faculty member of the team. Several Ph.D. candidates in computer science have also made significant contributions as have dozens of medical students during electives on the project. INTERNIST-I is really quite a simple system as far as its operating system or inference engine is concerned. Three basic numbers are involved in and manipulated in the ranking of elicited disease hypotheses. The first of these is the importance (IMPORT) of each of the more than 4,100 manifestations of disease which are contained in the knowledge base. IMPORTS are a global representation of the clinical importance of a given finding graded from 1 to 5, the latter being maximal, focusing on how necessary it is to explain the manifestation regardless of the final diagnosis. Thus massive splenomegaly has an IMPORT of 5 whereas anorexia has an IMPORT of 1. Mathematical weights are assigned to IMPORT numbers on a non-linear scale. The second basic number is the evoking strength (EVOKS), the numbers ranging from 0 to 5. The number answers the question: given a particular manifestation of disease, how strongly does one consider disease A versus all other diagnostic possibilities in a clinical situation? A zero indicates that a particular clinical manifestation is non-specific, i.e. so widely spread among diseases that the above question cannot be answered usefully. Again, anorexia is a good example of a non-specific manifestation.
The EVOKS number 5, on the other hand, indicates that a manifestation is essentially pathognomonic for a particular disease. The third basic number is the frequency (FREQ), which answers the question: given a particular disease, what is the frequency or incidence of occurrence of a particular clinical finding? FREQ numbers range from 1 to 5, one indicating that the finding is rare or unusual in the disease and 5 indicating that the finding is present in essentially all instances of the disease. Each diagnosis which is evoked is ranked mathematically on the basis of support for it, both positive and negative. Like the IMPORT number, the values for EVOKS and FREQ numbers increase in a non-linear fashion. The establishment or conclusion of a diagnosis is not based on any absolute score, as in Bayesian systems, but on how much better the support of diagnosis A is compared to its nearest competitor. This difference is anchored to the value of an EVOKS of 5, a pathognomonic finding. When the list of evoked diagnoses is ranked mathematically on the basis of EVOKS, FREQ and IMPORT, the list is partitioned based upon the similarity of support for individual diagnoses. Thus a heart disease is compared with other heart diseases and not brain diseases since the patient may have a heart disorder and a brain disease concomitantly. Thus apples are compared with apples and not oranges. When a diagnosis is concluded, the computer consults a list of interrelationships among diseases (LINKS) and bonuses are awarded, again in a non-linear fashion for numbers ranging from 1 to 5 — 1 indicating a weak interrelationship and 5 a universal interrelationship. Thus multiple interrelated diagnoses are preferred over independent ones provided the support for the second and other diagnoses is adequate. Good clinicians use this same rule of thumb. LINKS are of various types: PCED is used when disease A precedes disease B, e.g.
acute rheumatic fever precedes early rheumatic valvular disease; PDIS - disease A predisposes to disease B, e.g. AIDS predisposes to pneumocystis pneumonia; CAUS - disease A causes disease B, e.g. thrombophlebitis of the lower extremities may cause pulmonary embolism; and COIN - there is a statistical interrelationship between disease A and disease B but scientific medical information is not explicit on the relationship, e.g. Hashimoto's thyroiditis coincides with pernicious anemia, both so called autoimmune diseases. The maximal number of correct diagnoses made in a single case analysis is, to my recollection, eleven. In working with INTERNIST-I during the remainder of the 1970s several important points about the system were learned or appreciated. The first and foremost of these is the importance of a complete and accurate knowledge base. Omissions from a disease profile can be particularly troublesome. If a manifestation of disease is not listed on a disease profile the computer can only conclude that that manifestation does not occur in the disease, and if a patient demonstrates the particular manifestation it counts against the diagnosis. Fortunately, repeated exercise of the diagnostic system brings to attention many inadvertent omissions. It is important to establish the EVOKS and FREQ numbers as accurately as possible. Continual updating of the knowledge base, including newly described diseases and new information about diseases previously profiled, is critical. Dr. Edward Feigenbaum recognized the importance of the accuracy and completeness of knowledge bases as the prime requisite of expert systems of any sort. He emphasized this point in his keynote address to MEDINFO-86 (1). Standardized, clear and explicit nomenclature is required in expressing disease names and particularly in naming the thousands of individual manifestations of disease. Such rigidity can make the use of INTERNIST-I difficult for the uninitiated user. 
Therefore, in QMR more latitude and guidance are provided to the user. For example, the user of INTERNIST-I must enter ABDOMEN PAIN RIGHT UPPER QUADRANT exactly whereas in QMR the user may enter PAI ABD RUQ and the system recognizes the term as above. The importance of “properties” attached to the great majority of clinical manifestations was solidly evident. Properties express conditions such that if A is true then B is automatically false (or true, as the case may be). The properties also allow credit to be awarded for or against B as the case may be. Properties also provide order to the asking of questions in the interrogative mode. They also state prerequisites and unrequisites for various procedures. As examples, one generally does not perform a superficial lymph node biopsy unless lymph nodes are enlarged (prerequisite). Similarly, a percutaneous liver biopsy is inadvisable if the blood platelets are less than 50,000 (unrequisite). It became clear quite early in the utilization of INTERNIST-I that systemic or multisystem diseases had an advantage versus localized disorders in diagnosis. This is because systemic diseases have very long and more inclusive manifestation lists. It became necessary, therefore, to subdivide systemic diseases into various components when appropriate. Systemic lupus erythematosus provides a good example. Lupus nephritis must be compared in our system with other renal diseases and such comparison is allowed by our partitioning algorithm. Likewise, cerebral lupus must be differentiated from other central nervous system disorders. Furthermore, either renal lupus or cerebral lupus can occur at times without significant clinical evidence of other systemic involvement. In order to reassemble the components of a systemic disease we devised the systemic LINK (SYST) which expresses the interrelationship of each subcomponent to the parent systemic disease.
It became apparent quite early that expert systems like INTERNIST do not deal with the time axis of a disease well at all, and this seems to be generally true of expert systems in “artificial intelligence”. Certain parameters dealing with time can be expressed by devising particular manifestations, e.g. a blood transfusion preceding the development of acute hepatitis B by 2 to 6 months. But time remains a problem which is yet to be solved satisfactorily, even in QMR. It has been clearly apparent over the years that both the knowledge base and the diagnostic consultant programs of both INTERNIST-I and QMR have considerable educational value. The disease profiles, the list of diseases in which a given clinical manifestation occurs (ordered by EVOKS and FREQ), and the interconnections among diseases (LINKS) provide a quick and ready means of acquiring at least orienting clinical information. Such has proved useful not only to medical students and residents but to clinical practitioners as well. In the interrogative mode of the diagnostic systems the student will frequently ask “Why was that question asked?” An instructor can provide insight, or ready consultation of the knowledge base by the student will provide a simple semi-quantitative reason for the question. Lastly, let the author state that working with INTERNIST-I and QMR over the years seems to have had real influence on his own diagnostic approaches and habits. Thus my original psychoanalysis when working with Pople has been reinforced.
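The IMPORT/EVOKS/FREQ ranking that Myers describes can be sketched in a few lines. This is a minimal illustration only: the weight tables, disease profiles, and function names below are invented for demonstration, since the text gives the 0-5 scales but not the actual non-linear weight values used by INTERNIST-I, and the partitioning and LINKS mechanisms are omitted.

```python
# Sketch of an INTERNIST-I-style scoring heuristic. The non-linear weight
# tables below are ASSUMED values for illustration, indexed by score 0..5.
EVOKS_W  = [0, 1, 4, 10, 20, 40]   # evoking strength: finding -> disease
FREQ_W   = [0, 1, 4, 7, 15, 30]    # frequency of finding within a disease
IMPORT_W = [0, 2, 6, 10, 20, 40]   # global importance of a finding

def score(profile, present, absent):
    """profile maps finding -> (EVOKS, FREQ); present/absent are sets."""
    s = 0
    for finding, (evoks, freq) in profile.items():
        if finding in present:
            s += EVOKS_W[evoks]      # credit for findings the disease explains
        elif finding in absent:
            s -= FREQ_W[freq]        # penalty: expected finding is absent
    return s

def rank(diseases, present, absent, imports):
    """Rank disease hypotheses by support, positive and negative."""
    scored = []
    for name, profile in diseases.items():
        s = score(profile, present, absent)
        # Penalize important present findings the disease cannot explain.
        for finding in present:
            if finding not in profile:
                s -= IMPORT_W[imports.get(finding, 1)]
        scored.append((s, name))
    return sorted(scored, reverse=True)

# Toy example with two hypothetical disease profiles.
diseases = {
    "disease_a": {"fever": (3, 4), "rash": (4, 3)},
    "disease_b": {"fever": (2, 5)},
}
ranked = rank(diseases, {"fever", "rash"}, set(), {"fever": 2, "rash": 3})
# disease_a explains both findings and ranks first.
```

As in the system described, a conclusion would then rest not on the absolute score but on the margin between the leading hypothesis and its nearest competitor within a partition of similar diseases.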
Anonymous XEROX COMMON LISP IMPLEMENTATION NOTES
The Xerox Common Lisp Implementation Notes cover several aspects of the Lyric release. In these notes you will find:
• An explanation of how Xerox Common Lisp extends the Common Lisp standard. For example, in Xerox Common Lisp the Common Lisp array-constructing function make-array has additional keyword arguments that enhance its functionality.
• An explanation of how several ambiguities in Steele's Common Lisp: the Language were resolved.
• A description of additional features that provide far more than extensions to Common Lisp.
Mears, Lyn Ann ;
Rees, Ted
Artificial Intelligence Systems Xerox LOOPS, A Friendly Primer
Trigg, Randall H. ;
Irish, Peggy M.
Hypertext habitats: experiences of writers in NoteCards
This paper reports on an investigation into the use of the NoteCards hypertext system for writing. We describe a wide variety of personal styles adopted by 20 researchers at Xerox as they “inhabit” NoteCards. This variety is displayed in each of their writing activities: notetaking, organizing and reorganizing their work, maintaining references and bibliographies, and preparing documents. In addition, we discuss the distinctive personal decisions made as to which activities are appropriate for NoteCards in the first place. Finally, we conclude with a list of recommendations for system designers arising from this work.
1988
Shaw, Mildred L. An interactive knowledge-based system for group problem solving
Discusses a distributed system for human–computer interaction based on a network of computers. The system aids group problem solving by enabling participants to share in a construct elicitation process based on repertory grid techniques that have applications in education, management, and expert systems development. In education, the learner is attempting to acquire a specific construct system for the subject matter; in management, people with different construct systems are attempting to work together toward common objectives; in expert systems development, the knowledge engineer is attempting to make overt and encode the relevant construction system of an expert. The participant construct system enables individuals to interact through networked personal computers to develop mutual understanding of a problem domain through the use of repertory grid techniques.
Place: US
Publisher: Institute of Electrical & Electronics Engineers Inc
Crowfoot, Norman A visual aid for designing regular expression parsers
This paper describes a thesis project in which a visually-oriented design utility is constructed in Interlisp-D for the Xerox 1108 Artificial Intelligence Workstation. This utility aids in the design of regular expression parsers by visually simulating the operation of a parser. A textual program, suitable for use in the construction of a compiler scanner or other similar processor, may be produced by the utility.
Koschmann, T. ;
Evens, M.W.
Bridging the gap between object-oriented and logic programming
Andrews, K. ;
Henry, R. R. ;
Yamamoto, W. K.
Design and implementation of the UW Illustrated compiler
We have implemented an illustrated compiler for a simple block structured language. The compiler graphically displays its control and data structures, and so gives its viewers an intuitive understanding of compiler organization and operation. The illustrations were planned by hand and display information naturally and concisely.
Halasz, Frank G. Reflections on NoteCards: seven issues for the next generation of hypermedia systems
NoteCards, developed by a team at Xerox PARC, was designed to support the task of transforming a chaotic collection of unrelated thoughts into an integrated, orderly interpretation of ideas and their interconnections. This article presents NoteCards as a foil against which to explore some of the major limitations of the current generation of hypermedia systems, and characterizes the issues that must be addressed in designing the next generation systems.
1989
Martz, Philip R. ;
Heffron, Matt ;
Kalbag, Suresh ;
Dyckes, Douglas F. ;
Voelker, Paul
The PepPro™ Peptide Synthesis Expert System
Peptide synthesis is an important research tool. However, successful syntheses require considerable effort from the scientist. We have produced an expert system, the PepPro™ Peptide Synthesis Expert System, that helps the scientist improve peptide syntheses. To use PepPro the scientist enters the peptide to be synthesized. PepPro then applies its synthesis rules to analyze the peptide, to predict coupling problems, and to recommend solutions. PepPro produces a synthesis report that summarizes the analysis and recommendations. The program includes a capability that allows the scientist to write new synthesis rules and add them to the PepPro knowledge base. PepPro was developed on Xerox 11xx series workstations using Beckman’s proprietary development environment (MP). We then compiled PepPro to run on the IBM PC. PepPro has limitations that derive from unpredictable events during a synthesis. Despite the limitations, PepPro provides several important benefits. The major one is that it makes peptide syntheses easier, less time-consuming, and more efficient.
1990
Lipkis, Thomas A. ;
Mark, William S. ;
Pirtle, Melvin W.
Design system using visual language
A computer-based tool, in the form of a computer system and method, for designing, constructing and interacting with any system containing or comprising concurrent asynchronous processes, such as a factory operation. In the system according to the invention a variety of development and execution tools are supported. The invention features a highly visual user presentation of a control system, including structure, specification, and operation, offering a user an interactive capability for rapid design, modification, and exploration of the operating characteristics of a control system comprising asynchronous processes. The invention captures a representation of the system (RS) that is equivalent to the actual system (AS)--rather than a simulation of the actual system. This allows the invention to perform tests and modification on RS instead of AS, yet get accurate results. RS and AS are equivalent because AS is generated directly from RS by an automated process. Effectively, pressing a button in the RS environment can "create" the AS version or any selected portion of it, by "downloading" a translation of the RS version that can be executed by a programmable processor in the AS environment. Information can flow both ways between AS and RS. That AS and RS can interact is important. This allows RS to "take on" the "state" of AS whenever desired, through an "uploading" procedure, thereby reflecting accurately the condition of AS at a specific point in time.
Nielsen, Jakob Hypertext'89 Trip Report: Article by Jakob Nielsen
Jakob Nielsen's trip report from the ACM Hypertext'89 conference. Includes summary of Meyrowitz' discussion of open integrating hypertext and the extent to which the Memex vision has been realized so far.
1991
Cunningham, Robert E. ;
Bonar, Jeffery G. ;
Corbett, John D.
Interactive method of developing software interfaces
A system and method for interactive design of user manipulable graphic elements. A computer has display and stored tasks wherein the appearance of graphic elements and methods for their manipulation are defined. Each graphic element is defined by at least one figure specification, one mask specification and one map specification. An interactive display editor program defines specifications of said graphic elements. An interactive program editor program defines programming data and methods associated with said graphic elements. A display program uses the figure, map and mask specifications for assembling graphic elements upon the display and enabling user manipulation of said graphic elements.
Balban, Morton S. ;
Lan, Ming-Shong ;
Panos, Rodney M.
Method of and apparatus for composing a press imposition
An apparatus and a method are disclosed for composing an imposition in terms of an arrangement of printing plates on selected ones of the image positions on selected units of a printing press to print a given edition, by first assigning each section of this edition to one of the press areas. Thereafter, each printing unit is examined to determine a utilization value thereof in terms of the placement of the printing plates on the image positions and the relative number of image positions to which printing plates are assigned with respect to the total number of image positions. Thereafter, a list of the image positions for each of the sections and its area is constructed by examining one printing unit at a time in an order according to the placement of that printing unit in the array and examining its utilization value to determine whether or not to include a particular image position of that printing unit in the list. As a result, a list of the image positions is constructed in a sequence corresponding to numerical order of the pages in the section under consideration. Finally, that list of the image positions and the corresponding section and page numbers is displayed in a suitable fashion to inform a user of how to place the printing plates in the desired arrangement onto the printing units of the press to print this given edition.
Henderson, D. Austin ;
Card, Stuart K. ;
Maxwell, John T.
User interface with multiple workspaces for sharing display system objects
Workspaces provided by an object-based user interface appear to share windows and other display objects. Each workspace's data structure includes, for each window in that workspace, a linking data structure called a placement which links to the display system object which provides that window, which may be a display system object in a preexisting window system. The placement also contains display characteristics of the window when displayed in that workspace, such as position and size. Therefore, a display system object can be linked to several workspaces by a placement in each of the workspaces' data structures, and the window it provides to each of those workspaces can have unique display characteristics, yet appear to the user to be the same window or versions of the same window. As a result, the workspaces appear to be sharing a window. Workspaces can also appear to share a window if each workspace's data structure includes data linking to another workspace with a placement to the shared window. The user can invoke a switch between workspaces by selecting a display object called a door, and a back door to the previous workspace is created automatically so that the user is not trapped in a workspace. A display system object providing a window to a workspace being left remains active so that when that workspace is reentered, the window will have the same contents as when it disappeared. Also, the placements of a workspace are updated so that when the workspace is reentered its windows are organized the same as when the user left that workspace. The user can enter an overview display which shows a representation of each workspace and the windows it contains so that the user can navigate to any workspace from the overview.
Newman, William ;
Eldridge, Margery ;
Lamming, Michael
PEPYS: Generating Autobiographies by Automatic Tracking
This paper presents one part of a broad research project entitled 'Activity-Based Information Retrieval' (AIR) which is being carried out at EuroPARC. The basic hypothesis of this project is that if contextual data about human activities can be automatically captured and later presented as recognisable descriptions of past episodes, then human memory of those past episodes can be improved. This paper describes an application called Pepys, designed to yield descriptions of episodes based on automatically collected location data. The program pays particular attention to meetings and other episodes involving two or more people. The episodes are presented to the user as a diary generated at the end of each day and distributed by electronic mail. The paper also discusses the methods used to assess the accuracy of the descriptions generated by the recogniser.
1992
Rao, Ramana B. Window system with independently replaceable window functionality
A workspace data structure, such as a window hierarchy or network, includes functional data units that include data relating to workspace functionality. These functional data units are associated with data units corresponding to the workspaces such that a functional data unit can be replaced by a functional data unit compatible with a different set of functions without modifying the structure of other data units. Each workspace data unit may have a replaceably associated functional data unit called an input contract relating to its input functions and another called an output contract relating to its output functions. A parent workspace's data unit and the data units of its children may together have a replaceably associated functional data unit, called a windowing contract, relating to the windowing relationship between the parent and the children. The data structure may also include an auxiliary data unit associated between the data units of the parent and children windows, and the windowing contract may be associated with the auxiliary data unit. The contracts can be accessed and replaced by a processor in a system that includes the data structure. The contracts can be instances of classes in an object-oriented programming language, and can be replaceably associated by pointers associated with the system objects. Alternatively, a contract can be replaceably associated through dynamic multiple inheritance, with the superclasses of each workspace class including one or more contract classes such that changing the class of an instance of a workspace class serves to replace the contract.
Denber, Michel J. Graphics display system with improved dynamic menu selection
In a graphic display system, display control software is modified to impart motion to a pop-up menu to attract the attention of the user. The menu becomes animated when a control and comparison circuit confirms that a mouse driven cursor on the screen is moving away from the pop-up menu indicating that the operator is unaware of the menu's presence. The menu moves or "tags-along" after the cursor until the user takes notice and makes the appropriate selection.
Smith, Reid G. ;
Schoen, Eric J.
Object-oriented framework for menu definition
A declarative object-oriented approach to menu construction provides a mechanism for specifying the behavior, appearance and function of menus as part of an interactive user interface. Menus are constructed from interchangeable object building blocks to obtain the characteristics wanted without the need to write new code, while maintaining a coherent interface standard. The approach is implemented by dissecting interface menu behavior into modularized objects specifying orthogonal components of desirable menu behaviors. Once primary characteristics for orthogonal dimensions of menu behavior are identified, individual objects are constructed to provide specific alternatives for the behavior within the definitions of each dimension. Finally, specific objects from each dimension are combined to construct a menu having the desired selections of menu behaviors.
Lee, Alison Investigations into history tools for user support
History tools allow users to access past interactions kept in a history and to incorporate them into the context of their current operations. Such tools appear in various forms in many of today’s computing systems, but despite their prevalence, they have received little attention as user support tools. This dissertation investigates, through a series of studies, history-based user support tools. The studies focus on three primary factors influencing the utility of history-based user support tools: design of history tools, support of a behavioural phenomenon in user interactions, and mental and physical effort associated with using history tools. Design of history tools strongly influences a user’s perception of their utility. In surveying a wide collection of history tools, we identify seven independent uses of the information with no single history tool supporting all seven uses. Based on cognitive and behavioural considerations associated with the seven history uses, we propose several kinds of history information and history functions that need to be supported in new designs of history tools integrating all seven uses of history. An exploratory study of the UNIX environment reveals that user interactions exhibit a behavioural phenomenon, nominally referred to as locality. This is the phenomenon where users repeatedly reference a small group of commands during extended intervals of their session. We apply two concepts from computer memory research (i.e., working sets and locality) to examine this behavioural artifact and to propose a strategy for predicting repetitive opportunities and candidates. Our studies reveal that users exhibit locality in only 31% of their sessions whereas users repeat individual commands in 75% of their sessions. We also found that history tool use occurs primarily in locality periods. Thus, history tools which localize their prediction opportunities to locality periods can predict effectively the reuse candidates.
Finally, the effort, mental and physical, associated with using a history tool to expedite repetitive commands can influence a user’s decision to use history tools. We analyze the human information–processing operations involved in the task of specifying a recurrent command for a given approach and design (assuming that the command is fully generated and resides in the user’s working memory and that users exhibit expert, error–free task performance behaviour). We find that in most of the proposed history designs, users expend less physical effort at the expense of more mental effort. The increased mental effort can be alleviated by providing history tools which require simpler mental operations (e.g., working memory retrievals and perceptual processing). Also, we find that the typing approach requires less mental effort at the expense of more physical effort. Finally, despite the overhead associated with switching to the use of history tools, users (with a typing speed of 55 wpm or less) do expend less overall effort to specify recurrent commands (which have been generated and appear in working memory) using history tools compared to typing from scratch. The results of the three sets of studies provide insights into current history tools and point favourably towards the use of history tools for user support, especially history tools that support the reuse of previous commands, but additional research into history tool designs and usability factors is needed. Our studies demonstrate the importance of considering various psychological and behavioural factors and the importance of different grains of analysis.
Kurlander, David ;
Feiner, Steven
Interactive constraint-based search and replace
We describe enhancements to graphical search and replace that allow users to extend the capabilities of a graphical editor. Interactive constraint-based search and replace can search for objects that obey user-specified sets of constraints and automatically apply other constraints to modify these objects. We show how an interactive tool that employs this technique makes it possible for users to define sets of constraints graphically that modify existing illustrations or control the creation of new illustrations. The interface uses the same visual language as the editor and allows users to understand and create powerful rules without conventional programming. Rules can be saved and retrieved for use alone or in combination. Examples, generated with a working implementation, demonstrate applications to drawing beautification and transformation.
1993
Denber, Michel J. ;
Jankowski, Henry P.
Method and apparatus for thinning printed images
A method and apparatus are shown for improving bit-image quality in video display terminals and xerographic processors. In one embodiment, each scan line of a source image is ANDed with the scan line above to remove half-bits and thin halftones. In other embodiments, entire blocks of data are processed by bit-block transfer operations, such as ANDing a copy of the source image with a copy of itself shifted by one bit. Also, a source image can be compared to a shifted copy of itself to locate diagonal lines in order to place gray pixels bordering these lines.
FAQ: Lisp Implementations and Mailing Lists 4/7 [Monthly posting] - [4-1] Commercial Common Lisp implementations.
Wiil, Uffe K. ;
Leggett, John J.
Hyperform: using extensibility to develop dynamic, open, and distributed hypertext systems
An approach to flexible hyperbase (hypertext database) support predicated on the notion of extensibility is presented. The extensible hypertext platform (Hyperform) implements basic hyperbase services that can be tailored to provide specialised hyperbase support. Hyperform is based on an internal computational engine that provides an object-oriented extension language which allows new data model objects and operations to be added at run-time. Hyperform has a number of built-in classes to provide basic hyperbase features such as concurrency control, notification control (events), access control, version control and search and query. Each of these classes can be specialised using multiple inheritance to form virtually any type of hyperbase support needed in next-generation hypertext systems. This approach greatly reduces the effort required to provide high-quality customized hyperbase support for distributed hypertext applications. Hyperform is implemented and operational in Unix environments. This paper describes the Hyperform approach, discusses its advantages and disadvantages, and gives examples of simulating the HAM and the Danish HyperBase in Hyperform. Hyperform is compared with related work from the HAM generation of hyperbase systems and the current status of the project is reviewed.
Boyd, Mickey R. ;
Whalley, David B.
Isolation and analysis of optimization errors
This paper describes two related tools developed to support the isolation and analysis of optimization errors in the vpo optimizer. Both tools rely on vpo identifying sequences of changes, referred to as transformations, that result in semantically equivalent (and usually improved) code. One tool determines the first transformation that causes incorrect output of the execution of the compiled program. This tool not only automatically isolates the illegal transformation, but also identifies the location and instant the transformation is performed in vpo. To assist in the analysis of an optimization error, a graphical optimization viewer was also implemented that can display the state of the generated instructions before and after each transformation performed by vpo. Unique features of the optimization viewer include reverse viewing (or undoing) of transformations and the ability to stop at breakpoints associated with the generated instructions. Both tools are useful independently. Together these tools form a powerful environment for facilitating the retargeting of vpo to a new machine and supporting experimentation with new optimizations. In addition, the optimization viewer can be used as a teaching aid in compiler classes.
1994
Jan O. Pedersen ;
Per-Kristian Halvorsen ;
Douglass R. Cutting ;
John W. Tukey ;
Eric A. Bier ;
Daniel G. Bobrow
Iterative technique for phrase query formation and an information retrieval...
An information retrieval system and method are provided in which an operator inputs one or more query words which are used to determine a search key for searching through a corpus of documents, and which returns any matches between the search key and the corpus of documents as a phrase containing the word data matching the query word(s), a non-stop (content) word next adjacent to the matching word data, and all intervening stop-words between the matching word data and the next adjacent non-stop word. The operator, after reviewing one or more of the returned phrases can then use one or more of the next adjacent non-stop-words as new query words to reformulate the search key and perform a subsequent search through the document corpus. This process can be conducted iteratively, until the appropriate documents of interest are located. The additional non-stop-words from each phrase are preferably aligned with each other (e.g., by columnation) to ease viewing of the "new" content words.
Kaplan, Ronald M. ;
Maxwell, John T. III
Text-compression technique using frequency-ordered array of word-number mappers
A text-compression technique utilizes a plurality of word-number mappers ("WNMs") in a frequency-ordered hierarchical structure. The particular structure of the set of WNMs depends on the specific encoding regime, but can be summarized as follows. Each WNM in the set is characterized by an ordinal WNM number and a WNM size (maximum number of tokens) that is in general a non-decreasing function of the WNM number. A given token is assigned a number pair, the first being one of the WNM numbers, and the second being the token's position or number in that WNM. Typically, the most frequently occurring tokens are mapped with a smaller-numbered WNM. The set of WNMs is generated on a first pass through the database to be compressed. The database is parsed into tokens, and a rank-order list based on the frequency of occurrence is generated. This list is partitioned in a manner to define the set of WNMs. Actual compression of the data base occurs on a second pass, using the set of WNMs generated on the first pass. The database is parsed into tokens, and for each token, the set of WNMs is searched to find the token. Once the token is found, it is assigned the appropriate number pair and is encoded. This proceeds until the entire database has been compressed.
1995
Kaplan, Ronald M. ;
Kay, Martin ;
Maxwell, John
Finite state machine data storage where data transition is accomplished without the use of pointers
An FSM data structure is encoded by generating a transition unit of data corresponding to each transition which leads ultimately to a final state of the FSM. Information about the states is included in the transition units, so that the encoded data structure can be written without state units of data. The incoming transition units to a final state each contain an indication of finality. The incoming transition units to a state which has no outgoing transition units each contain a branch ending indication. The outgoing transition units of each state are ordered into a comparison sequence for comparison with a received element, and all but the last outgoing transition unit contain an alternative indication of a subsequent alternative outgoing transition. The indications are incorporated with the label of each transition unit into a single byte, and the remaining byte values are allocated among a number of pointer data units, some of which begin full length pointers and some of which begin pointer indexes to tables where pointers are entered. The pointers may be used where a state has a large number of incoming transitions or where the block of transition units depending from a state is broken down to speed access. The first outgoing transition unit of a state is positioned immediately after one of the incoming transitions so that it may be found without a pointer. Each alternative outgoing transition unit is stored immediately after the block beginning with the previous outgoing transition unit so that it may be found by proceeding through the transition units until the number of alternative bits and the number of branch ending bits balance.
Rao, Ramana ;
Pedersen, Jan O. ;
Hearst, Marti A. ;
Mackinlay, Jock D. ;
Card, Stuart K. ;
Masinter, Larry ;
Halvorsen, Per-Kristian ;
Robertson, George C.
Rich interaction in the digital library
Effective information access involves rich interactions between users and information residing in diverse locations. Users seek and retrieve information from the sources—for example, file servers, databases, and digital libraries—and use various tools to browse, manipulate, reuse, and generally process the information. We have developed a number of techniques that support various aspects of the process of user/information interaction. These techniques can be considered attempts to increase the bandwidth and quality of the interactions between users and information in an information workspace—an environment designed to support information work (see Figure 1).
Pitman, Kent M. Ambitious evaluation: a new reading of an old issue
Much has been written about Lazy Evaluation in Lisp---less about the other end of the spectrum---Ambitious Evaluation. Ambition is a very subjective concept, though, and if you have some preconceived idea of what you think an Ambitious Evaluator might be about, you might want to set it aside for a few minutes because this probably isn't going to be what you expect.
Anderson, Kenneth R. Freeing the essence of a computation
In theory, abstraction is important, but in practice, so is performance. Thus, there is a struggle between an abstract description of an algorithm and its efficient implementation. This struggle can be mediated by using an interpreter or a compiler. An interpreter takes a program that is a high level abstract description of an algorithm and applies it to some data. Don't think of an interpreter as slow. An interpreter is important enough to software that it is often implemented in hardware. A compiler takes the program and produces another program, perhaps in another language. The resulting program is applied to some data by another interpreter.
1996
Tavani, Herman T. Cyberethics and the future of computing
1998
Nunberg, Geoffrey D. ;
Stansbury, Tayloe H. ;
Abbott, Curtis ;
Smith, Brian C.
Method for manipulating digital text data
Malone, Thomas W. ;
Lai, Kum-Yew ;
Yu, Keh-Chiang ;
Berenson, Richard W.
Object-oriented computer user interface
A computer user interface includes a mechanism of graphically representing and displaying user-definable objects of multiple types. The object types that can be represented include data records, not limited to a particular kind of data, and agents. An agent processes information automatically on behalf of the user. Another mechanism allows a user to define objects, for example by using a template. These two mechanisms act together to allow each object to be displayed to the user and acted upon by the user in a uniform way regardless of type. For example, templates for defining objects allow a specification to be input by a user defining processing that can be performed by an agent.
Mackinlay, Jock D. ;
Card, Stuart K. ;
Robertson, George G.
Image display systems
Ehrlich, Kate A conversation with Austin Henderson
2001
Affenzeller, Michael ;
Pichler, Franz ;
Mittelmann, Rudolf
On CAST.FSM Computation of Hierarchical Multi-layer Networks of Automata
CAST.FSM denotes a CAST tool which has been developed at the Institute of Systems Science at the University of Linz during the years 1986–1993. The first version of CAST.FSM was implemented in INTERLISP-D and LOOPS for the Siemens-Xerox workstation 5815 (“Dandelion”). CAST.FSM supports the application of the theory of finite state machines for hardware design tasks between the architecture level and the level of gate circuits. The application domain, to get practical experience for CAST.FSM, was the field of VLSI design of ASICs, where the theory of finite state machines can be applied to improve the testability of such circuits (“design for testability”) and to optimise the required silicon area of the circuit (“floor planning”). An overview of CAST as a whole and of CAST.FSM as a CAST tool is given in [11]. In our presentation we want to report on the re-engineering of CAST.FSM and on new types of applications of CAST.FSM which are currently under investigation. In this context we will distinguish between three different problems: 1. the implementation of CAST.FSM in ANSI Common Lisp and the design of a new user interface by Rudolf Mittelmann [5]. 2. the search for systems-theoretical concepts in modelling intelligent hierarchical systems based on the past work of Arthur Koestler [3] following the concepts presented by Franz Pichler in [10]. 3. the construction of hierarchical formal models (of multi-layer type) to study attributes which are assumed for SOHO-structures (SOHO = Self Organizing Hierarchical Order) of A. Koestler. The latter problem will deserve the main attention in our presentation. In the present paper we will build such a hierarchical model following the concepts of parallel decomposition of finite state machines (FSMs) and interpret it as a multi-layer type of model.
Series Title: Lecture Notes in Computer Science
DOI: 10.1007/3-540-45654-6_3
2003
Bobrow, Daniel ;
Mittal, Sanjay ;
Lanning, Stanley ;
Stefik, Mark
Programming Languages -- The LOOPS Project (1982-1986)
The LOOPS (Lisp Object-Oriented Language) project was started to support the development of an expert systems project at PARC. We wanted a language that had many of the features of frame languages, such as objects, annotated values, inheritance, and attached procedures. We drew heavily on Smalltalk-80, which was being developed next door.
2006
Kossow, Al PDP-10 software archive
2007
Karttunen, Lauri Word play
This article is a perspective on some important developments in semantics and in computational linguistics over the past forty years. It reviews two lines of research that lie at opposite ends of the field: semantics and morphology. The semantic part deals with issues from the 1970s such as discourse referents, implicative verbs, presuppositions, and questions. The second part presents a brief history of the application of finite-state transducers to linguistic analysis starting with the advent of two-level morphology in the early 1980s and culminating in successful commercial applications in the 1990s. It offers some commentary on the relationship, or the lack thereof, between computational and paper-and-pencil linguistics. The final section returns to the semantic issues and their application to currently popular tasks such as textual inference and question answering.
2008
Lisp50 Notes part V: Interlisp, PARC, and the Common Lisp Consolidation Wars
Okay, quick… how many people besides Alan Kay can you name that worked at Xerox PARC? Not very many, eh? Yeah, that’s your loss. It wasn’t all about the guys like Kay that get all…
Teitelman, Warren History of Interlisp
I was first introduced to Lisp in 1962 as a first year graduate student at M.I.T. in a class taught by James Slagle. Having programmed in Fortran and assembly, I was impressed with Lisp's elegance. In particular, Lisp enabled expressing recursion in a manner that was so simple that many first time observers would ask the question, "Where does the program do the work?" (Answer - between the parentheses!) Lisp also provided the ability to manipulate programs, since Lisp programs were themselves data (S-expressions) the same as other list structures used to represent program data. This made Lisp an ideal language for writing programs that themselves constructed programs or proved things about programs. Since I was at M.I.T. to study Artificial Intelligence, program writing programs was something that interested me greatly.
2010
Jill Marci Sybalsky Medley
2011
Mark Stefik Truckin Knowledge Competition (1983)
In 1983 the Knowledge Systems Area at Xerox PARC taught experimental courses on knowledge programming. The Truckin' knowledge competition was the final exam at the end of a one week course. Students programmed their trucks to compete in Truckin' simulation world -- buying and selling goods, getting gas as needed, avoiding bandits, and so on. All of the trucks competed in the final. The winner was the truck with the most cash parked nearest Alice's Restaurant. See www.markstefik.com/?page_id=359
Stefik, Mark The Colab Movie (1987)
The Colab project at PARC was an experiment in creating an electronic meeting room. The project developed multi-user interfaces, telepointers, and other innovations of the time. This movie shows the Cognoter tool, a multi-user brainstorming tool used for collaborative development of an outline for a paper. See www.markstefik.com/?page_id=155
2012
Oldford, Wayne Graphical Programming (1988) - Part 0
Guy DesVignes and R. Wayne Oldford, 1988. This video (in 3 pieces) describes the use of graphical programming with an example, showing the encapsulation of several steps of an analysis into a single reusable tool. An INTERLISP-D programming environment with the object-oriented system LOOPS is used for software development. Work is on a Xerox Lisp Workstation (Xerox 1186). First of 3 pieces of a single video. First piece: Graphical Programming (1988) - Part 0. Contains: "Opening" - introduction by a young Wayne Oldford (refers to an earlier video called "Data Analysis Networks in DINDE"); "Part 0 Statistical Analysis Maps" - review of the interactive data analysis network representation of a statistical analysis. Second piece: Graphical Programming (1988) - Parts 1 and 2. Contains: "Part 1 Toolboxes" - review of the elements of a statistical toolbox in DINDE; "Part 2 The Analysis Path" - demonstrates exploration of a path in an existing analysis map and its representation as a pattern. It is shown how to capture this pattern in DINDE as a new program represented as an "AnalysisPath" object. This is what is meant by "graphical programming". Third piece: Graphical Programming (1988) - Part 3. Contains: "Part 3 Graphical Programming Example: Added Variable Plots" - demonstrates graphical programming by constructing an added variable plot. This is done by constructing the appropriate analysis path on some data, capturing the pattern, adding it to the toolbox, and then applying it to new data. "Summary". Sound has been cleaned up a little. Complete video also available in whole at stat-graphics.org/movies/graphical-programming.html
Oldford, Wayne Graphical Programming (1988) - Parts 1 and 2
Guy DesVignes and R. Wayne Oldford, 1988. This video (in 3 pieces) describes the use of graphical programming with an example, showing the encapsulation of several steps of an analysis into a single reusable tool. An INTERLISP-D programming environment with the object-oriented system LOOPS is used for software development. Work is on a Xerox Lisp Workstation (Xerox 1186). Second of 3 pieces of a single video. First piece: Graphical Programming (1988) - Part 0. Contains: "Opening" - introduction by Wayne Oldford (refers to an earlier video called "Data Analysis Networks in DINDE"); "Part 0 Statistical Analysis Maps" - review of the interactive data analysis network representation of a statistical analysis. Second piece: Graphical Programming (1988) - Parts 1 and 2. Contains: "Part 1 Toolboxes" - review of the elements of a statistical toolbox in DINDE; "Part 2 The Analysis Path" - demonstrates exploration of a path in an existing analysis map and its representation as a pattern. It is shown how to capture this pattern in DINDE as a new program represented as an "AnalysisPath" object. This is what is meant by "graphical programming". Third piece: Graphical Programming (1988) - Part 3. Contains: "Part 3 Graphical Programming Example: Added Variable Plots" - demonstrates graphical programming by constructing an added variable plot. This is done by constructing the appropriate analysis path on some data, capturing the pattern, adding it to the toolbox, and then applying it to new data. "Summary". Sound has been cleaned up a little. Complete video also available in whole at stat-graphics.org/movies/graphical-programming.html
Computer-Assisted Instruction (Bits and Bytes, Episode 7)
This clip looks at two examples of larger tutorial CAI systems, developed by the Ontario Institute for Studies in Education and by Xerox PARC. It is from Episode 7 of the classic 1983 television series, Bits and Bytes, which starred Luba Goy and Billy Van. It was produced by TVOntario, but is no longer available for purchase.
2013
Interlisp was the so-called "west coast" Lisp that emphasized an interactive pro... | Hacker News
2015
Murphy, Dan TENEX and TOPS-20
In the late 1960s, a small group of developers at Bolt, Beranek, and Newman (BBN) in Cambridge, Massachusetts, began work on a new computer operating system, including a kernel, system call API, and user command interface (shell). While such an undertaking, particularly with a small group, became rare in subsequent decades, it was not uncommon in the 1960s. During development, this OS was given the name TENEX. A few years later, TENEX was adopted by Digital Equipment Corporation (DEC) for its new line of large machines to be known as the DECSYSTEM-20, and the operating system was renamed to TOPS-20. The author followed TENEX (or vice versa) on this journey, and these are some reflections and observations from that journey. He touches on some of the technical aspects that made TENEX notable in its day and an influence on operating systems that followed as well as on some of the people and other facets involved in the various steps along the way.
Publisher: IEEE
Rosenthal, David S.H. Emulation & Virtualization as Preservation Strategies
2016
Allen, Paul G. Xerox Alto Is Rebuilt and Reconnected by the Living Computer Museum
It’s one thing to read about a true breakthrough, something else to see it in action
2017
Lisp Editing in the 80s - Interlisp SEdit
Reuploaded from: http://people.csail.mit.edu/riastradh...​ Thanks to "lispm" on reddit for all the info: https://www.reddit.com/r/lisp/comment...​ From what I understand SEdit was developed later than DEdit. SEdit is documented first in the 1987 Lyric release of Interlisp-D, see Appendix B: http://bitsavers.trailing-edge.com/pd...​ SEdit is expanded in the virtual machine version of Interlisp-D, called Medley. See the Medley 1.0 release notes, appendix B: http://bitsavers.trailing-edge.com/pd...​ Some hints for using SEdit http://bitsavers.trailing-edge.com/pd...​ If you want to try it out, maybe this contains the editors: http://www2.parc.com/isl/groups/nltt/...​
de Kleer, Johan Daniel G. Bobrow: In Memoriam
It is with deep sorrow that we report the passing of former AAAI President Danny Bobrow on March 20, 2017. His family, friends, and colleagues from the Palo Alto Research Center and around the world recently gathered at PARC to commemorate his life and work.
Publisher: Association for the Advancement of Artificial Intelligence (AAAI)
Report Number: 38
Fatcat ID: release_wg4g7ikocbagxinabbk2eni52q
Masinter, Larry Interlisp-D at AAAI-82
17 new photos added to shared album
2019
Barela, Anne Introducing Darkstar: A Xerox Star Emulator
Via livingcomputers.org: Josh Dersch writes about research into the Xerox 8010 Information System (codenamed "Dandelion" during development) and commonly referred to as the Star. The Star was envisioned as the center point of the office of the future, combining high-resolution graphics with the now-familiar mouse, Ethernet networking for sharing and collaborating, and Xerox's laser printer technology for faithful "WYSIWYG" document reproduction. A revolutionary system when most everyone else was using text-based systems.
Bouvin, Niels Olof From NoteCards to Notebooks: There and Back Again
Fifty years since the beginning of the Internet, and three decades of the Dexter Hypertext Reference Model and the World Wide Web, mark an opportune time to take stock and consider how hypermedia has developed, and in which direction it might be headed. The modern Web has on the one hand turned into a place where very few, very large companies control all major platforms, with some highly unfortunate consequences. On the other hand, it has also led to the creation of a highly flexible and nigh ubiquitous set of technologies and practices, which can be used as the basis for future hypermedia research, with the rise of computational notebooks as a prime example of a new kind of collaborative and highly malleable application.
2020
Attendees | Larry Masinter, The Medley Interlisp Project: Status and Plans | Meetup
Hsu, Hansen Introducing the Smalltalk Zoo
In commemoration of the 40th anniversary of the release of Smalltalk-80, the Computer History Museum is proud to announce a collaboration with Dan Ingalls to preserve and host the “Smalltalk Zoo.”
Section: Software History Center
Living Computers: Museum + Labs livingcomputermuseum/Darkstar
A Xerox Star 8010 Emulator. Contribute to livingcomputermuseum/Darkstar development by creating an account on GitHub.
original-date: 2019-01-15T20:40:02Z
2021
Malone, Thomas W. The Information Lens
The Information Lens. Thomas Malone (MIT). CHI+GI '87 Technical Video Program. Abstract: An intelligent system for information sharing and coordination (subtitle from the video). Web: http://www.cs.umd.edu/hcil/chivideosl... Published in two videotapes: issue 27 and issue 33-34 of ACM SIGGRAPH Video Review (issue 27 appeared in the same tape as issue 26, i.e. the CHI '87 Electronic Theater). Video Chair: Richard J. Beach (Xerox PARC). Location: Toronto, Canada
Dipmeter Advisor
The Dipmeter Advisor was an early expert system developed in the 1980s by Schlumberger with the help of artificial-intelligence workers at MIT to aid in the analysis of data gathered during oil exploration. The Advisor was not merely an inference engine and a knowledge base of ~90 rules, but a full-fledged workstation, running on one of Xerox's 1100 Dolphin Lisp machines (or in general on Xerox's "1100 Series Scientific Information Processors" line) and written in INTERLISP-D, with a pattern recognition layer which in turn fed a GUI menu-driven interface. It was developed by a number of people, including Reid G. Smith, James D. Baker, and Robert L. Young. It was primarily influential not because of any great technical leaps, but rather because it was so successful for Schlumberger's oil divisions and because it was one of the few success stories of the AI bubble to receive wide publicity before the AI winter. The AI rules of the Dipmeter Advisor were primarily derived from Al Gilreath, a Schlumberger interpretation engineer who developed the "red, green, blue" pattern method of dipmeter interpretation. Unfortunately this method had limited application in more complex geological environments outside the Gulf Coast, and the Dipmeter Advisor was primarily used within Schlumberger as a graphical display tool to assist interpretation by trained geoscientists, rather than as an AI tool for use by novice interpreters. However, the tool pioneered a new approach to workstation-assisted graphical interpretation of geological information.
Page Version ID: 1060421060
Interlisp
Interlisp (also seen with a variety of capitalizations) is a programming environment built around a version of the programming language Lisp. Interlisp development began in 1966 at Bolt, Beranek and Newman (renamed BBN Technologies) in Cambridge, Massachusetts with Lisp implemented for the Digital Equipment Corporation (DEC) PDP-1 computer by Danny Bobrow and D. L. Murphy. In 1970, Alice K. Hartley implemented BBN LISP, which ran on PDP-10 machines running the operating system TENEX (renamed TOPS-20). In 1973, when Danny Bobrow, Warren Teitelman and Ronald Kaplan moved from BBN to the Xerox Palo Alto Research Center (PARC), it was renamed Interlisp. Interlisp became a popular Lisp development tool for artificial intelligence (AI) researchers at Stanford University and elsewhere in the community of the Defense Advanced Research Projects Agency (DARPA). Interlisp was notable for integrating interactive development tools into an integrated development environment (IDE), such as a debugger, an automatic correction tool for simple errors (via do what I mean (DWIM) software design), and analysis tools.
Cardoso-Llach, Daniel ;
Kaltman, Eric ;
Erdolu, Emek ;
Furste, Zachary
An Archive of Interfaces: Exploring the Potential of Emulation for Software Research, Pedagogy, and Design
This paper explores the potential of distributed emulation networks to support research and pedagogy into historical and sociotechnical aspects of software. Emulation is a type of virtualization that re-creates the conditions for a piece of legacy software to operate on a modern system. The paper first offers a review of Computer-Supported Cooperative Work (CSCW), Human-Computer Interaction (HCI), and Science and Technology Studies (STS) literature engaging with software as historical and sociotechnical artifacts, and with emulation as a vehicle of scholarly inquiry. It then documents the novel use of software emulations as a pedagogical resource and research tool for legacy software systems analysis. This is accomplished through the integration of the Emulation as a Service Infrastructure (EaaSI) distributed emulation network into a university-level course focusing on computer-aided design (CAD). The paper offers a detailed case study of a pedagogical experience oriented to incorporate emulations into software research and learning. It shows how emulations allow for close, user-centered analyses of software systems that highlight both their historical evolution and core interaction concepts, and how they shape the work practices of their users.
2022
Lisp and Symbolic Computation
Undated
1979-stefik-examination-of-frame-structured.pdf
1987-Themes-Variations-Stefik-Loops-.pdf
1982-Bobrow-Stefik-Data-Object-Pgming.pdf
1986-commonloops-oopsla66.pdf
ahc_20150101_jan_2015.pdf
Title: IEEE Annals of the History of Computing
Volume: 31 #1
Date: 2022-01-01
ED285570.pdf
Title: 1985 Annual Technical Report: A Research Program in Computer Technology
Pages: 156
Publisher: University of Southern California, Marina Del Rey. Information Sciences Inst.
Report No.: ISI/SR-86-170
Date: 1986-12
1986-access-oriented.pdf
Bobrow, G ;
Stefik, J
Perspectives on Artificial Intelligence Programming
Xerox Alto Emulator
Rindfleisch NOTES ON XEROX LISP MACH DEMO
Industry Briefs
Although Envos Corp., an artificial intelligence spin-off of the Xerox Corp., folded back into Xerox last spring after nine months in operation, the parent company is "absolutely" committed to developing similar ventures in the future, according to Xerox spokesman Peter Hawes. "We have been trying to identify [Xerox] technologies," says Hawes, "and choose which...might lend themselves to alternative exploitation."
Henderson, Austin Tailoring Mechanisms in Three Research Technologies.
Tailoring is the technical and human art of modifying the functionality of technology while the technology is in use in the field. This position paper explores various styles of, and mechanisms for, tailoring in three research systems (Trillium, Rooms, and Buttons) created by the author to explore ways to enable players (end users) to achieve new behaviors from these systems appropriate to their particular circumstances.
Truckin’ and the Knowledge Competitions | MJSBlog
Foster, Gregg CoLab, Tools for Computer Based Cooperation
Xerox 1108 Artificial Intelligence Workstations
Xerox_Globalview_Document_Services_for_Sun_Technical_Reference_Manual_Jun92.pdf