Information retrieval typically involves accessing textual information from a database in response to a user's vague information need. Hypertext or hypermedia, on the other hand, involves a user browsing through a database of textual or multimedia information in response to a variety of types of information need. Thus information retrieval can be said to follow a searching metaphor while hypertext follows a browsing metaphor. Initially, these two technologies for information access appear to be very different and almost competing in nature. In this article we briefly review information retrieval systems, examine hypertext systems, and explore if and how these two techniques for accessing information can be integrated or combined, specifically by exploring how guided tours could be used.
One of the most attractive ways of accessing multimedia and even simply textual information in the last few years has been to use hypertext techniques. In this paper we discuss one of the application areas of hypertext systems, namely computer-assisted learning (CAL). Learning as a cognitive process is briefly reviewed, as are current approaches to using computers for instruction, i.e. intelligent tutoring and computer-based training (CBT). We then discuss the current approaches to using hypertext techniques in CAL and some of their inadequacies. This leads us on to an outline of the work we are doing to enhance the functionality of hypertext delivery systems by providing them with the capability to dynamically create guided tours through the hypertext, in response to individual student or user needs. Such an enhancement combines the discipline of CBT, the individual tailoring of intelligent tutoring and the freedom of hypertext browsing. Our plans for implementing such a facility are presented.
One of the many potentially useful applications to emerge for hypertext in recent years is in the area of computer-aided learning. This paper describes the results of an experiment in which undergraduate students used a hypertext about database systems as part of their coursework. The hypertext was offered as a source of learning information in addition to recommended textbooks and lectures. Students were surveyed on when and how they used the hypertext, for what specific purposes, and what types of searching they preferred. The results and analysis of this survey of actual hypertext usage are presented, and our conclusions confirm earlier published results that current hypertext products have obvious shortcomings and that hypertext in general cannot replace conventional teaching but should be regarded as adjunct or complementary courseware.
Graphic systems which make extensive use of icons are particularly well suited to object-oriented development. By exploiting the class and inheritance mechanisms of C++, an icon-based graphical interface can be developed. This paper examines some of the key features of object-oriented development and C++, with examples from the development of a set of classes for the user interface of a Supervisory Control and Data Acquisition (SCADA) system.
The PARCMAN system is an information system to provide parking guidance to motorists entering a city. This document highlights the data requirements of the PARCMAN system and the flow of information through the system.
Image filtering is the process by which certain spatial frequencies are removed from an image; it is important for edge detection and enhancement. This paper explores the application of an array of cellular automata, or processing elements, each operating on a 3 by 3 neighbourhood, to the implementation of this process.
Parking guidance and information (PGI) systems communicate information to a motorist concerning the location, direction and availability of parking spaces via roadside electronic displays. This paper discusses the issues involved in developing a centralised parking management and control system, including driver information requirements, dissemination of predictive information and parking control. An integrated management system incorporating an adaptive filtering approach is proposed for the predictive PGI system, and an outline hardware infrastructure is discussed.
Methods for obtaining the parameter estimates of transition types in stress luminescence experiments have not been investigated in great detail. The emphasis for any particular solution to this problem is to devise a method which allows the experimenenter to handle large interactive secular matrices, to distinguish between components in the experimental results, to perform a number of iterative fits giving the best common estimates for these components and hence to obtain the transition type as rapidly and efficiently as possible. This paper discusses three possible approaches which we have considered. These include working with the characteristic polynomial, solving the eigenvalue equations of the model and using a minimisation technique directly on the secular matrix. To date this third method has provided us with the most useful results.
The evolution of software design methodologies originally tended to be constrained by the limitations of the languages available, which were in turn constrained by available hardware. The formula of a design methodology constrained by a language has, of late, been reversed, with programming languages being designed to facilitate the implementation of modern design paradigms. One such design which has gained widespread acceptance is that of object-oriented design, based on the concept of the object model and supported by object-oriented languages like Smalltalk, Eiffel, Ada and C++. This paper outlines the object model and the language features required to fully support this model for software development.
Object-Z is a formal specification language which has been proposed as an extension of the language Z. It allows for classes to be represented within the specification and for an inheritance relationship to exist between classes. This paper deals with the semantics of the use of inheritance within Object-Z. The suggested approach is a conservative one as we wish to keep all of the benefits of Object-Z, while basing them on sound semantic principles already derived for Z itself.
All taxation authorities carry out a process involving the selection of companies for detailed taxation assessment. This process is intended to encourage compliance with tax laws and to maximise returns to the exchequer, so the selection is of critical importance. This paper addresses the problems of computerising this selection process and puts forward the outline of a computer-based system for this purpose.
To date, ray tracing has proven to be the most successful way of generating photorealistic images using a computer. This article presents a review of ray tracing, covering different modes of light transport and reflection from surfaces with different characteristics.
GRIFON is a graphical interface to ONTOS. It presents the structure and features of the underlying object-oriented database in a graphical manner, allowing the user to interact with the database through direct manipulation. This paper looks at GRIFON, its features and its background. The development of the system is also described, outlining its approach to the representation of data structures and the algorithms used to represent these graphically. Although GRIFON is a prototype, its development brought forth a number of issues in the area of OO databases and interfaces which might warrant further investigation.
The basic building block of a Z specification is the schema, and we can use these schemas in logical expressions where they are treated as if they were propositions. This paper presents an enhancement of this schema calculus which allows a predicate-like version of the schema to be used; this can facilitate operations where the variables of a schema are not always treated as a unit. We explore the use of this notation in the standard Z schema operations, in the proof obligations for specification refinement and in the formulation of a weakest prespecification operator.
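To recall the standard calculus that the paper extends, a schema operation such as conjunction merges the declarations of its operands and conjoins their predicates. A small illustration, using invented schemas S and T rather than any from the paper:

```latex
S \mathrel{\widehat{=}} [\, x : \mathbb{Z} \mid x > 0 \,]
\qquad
T \mathrel{\widehat{=}} [\, x, y : \mathbb{Z} \mid y = x + 1 \,]
```

```latex
S \land T \;=\; [\, x, y : \mathbb{Z} \mid x > 0 \land y = x + 1 \,]
```

Treating schemas as propositions in this way is what makes expressions like S ∧ T well formed; the paper's predicate-like version refines this by letting the predicate part be manipulated without carrying the whole declaration along.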
This paper identifies a computer-based approach to yield management in an airline reservation system, which allocates seats to fares on each flight. Different approaches to this problem are investigated and our main conclusion is that this process cannot be completely modelled using existing expert systems technology.
Control of versions of software modules in the development of large systems is of critical importance. This paper examines the problems of version control in fourth-generation languages (4GLs), with particular reference to the 4GL RULER as used in a large organisation.
Despite the advantages associated with the use of formal methods, practising software engineers are reluctant to abandon existing approaches to software development. This paper investigates the possibility of introducing the formal specification language LOTOS into a development process which is already based on the use of structured analysis. The approach suggested involves using structured analysis specifications as the basis for the generation of LOTOS specifications. This would allow an evolutionary introduction of formal methods without requiring existing methods to be completely abandoned.
The application of automatic natural language processing techniques to the indexing and retrieval of textual information has been a target of information retrieval researchers for some time. Incorporating semantic-level processing of language into retrieval has led to conceptual information retrieval, which is effective but usually restricted in its domain. Syntactic-level analysis is domain-independent but has not yet yielded significant improvements in retrieval quality. This paper describes a process whereby a morpho-syntactic analysis of phrases or user queries is used to generate a structured representation of text. A process of matching these structured representations is then described which generates a metric value, or score, indicating the degree of match between phrases; this score can then be used for ranking the phrases. To evaluate the effectiveness of the matching and scoring of phrases, some experiments are described which indicate that the method is quite useful. Ultimately the phrase matching technique described here would be used as part of an overall document retrieval strategy, and some future work in this direction is outlined.