Fuzzy set theory was first developed by Zadeh in 1965 as an extension of traditional set theory, together with fuzzy logic for manipulating fuzzy sets. Since its introduction, fuzzy logic has attracted the attention of many researchers in mathematics and engineering. Combining multivalued logic, probability theory, AI and neural networks, fuzzy logic provides an alternative digital control methodology that simulates human thinking by incorporating the imprecision inherent in all physical systems. In this paper a brief survey of fuzzy logic control is presented. Some of the basic concepts of fuzzy set theory and fuzzy logic are briefly summarised, various fuzzy reasoning methods are examined, and key issues in applying fuzzy logic to industrial applications are outlined. Some case studies are also presented.
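The basic mechanics surveyed here, membership functions and rule evaluation, can be illustrated with a minimal sketch. The triangular sets, the two-rule heater controller and all names below are illustrative inventions, not taken from the paper:

```c
/* Triangular membership function: degree of membership of x in the
 * fuzzy set defined by (a, b, c), peaking at b. */
double tri(double x, double a, double b, double c) {
    if (x <= a || x >= c) return 0.0;
    return x < b ? (x - a) / (b - a) : (c - x) / (c - b);
}

/* A toy Sugeno-style controller with two rules over a temperature input:
 *   IF temp is COLD THEN heater = 0.9
 *   IF temp is HOT  THEN heater = 0.1
 * Each rule fires to the degree its antecedent is satisfied; the crisp
 * output is the firing-strength-weighted average (defuzzification). */
double heater_output(double temp) {
    double cold = tri(temp, -10.0, 0.0, 20.0);
    double hot  = tri(temp,  10.0, 30.0, 50.0);
    double num  = cold * 0.9 + hot * 0.1;
    double den  = cold + hot;
    return den > 0.0 ? num / den : 0.5;  /* default if no rule fires */
}
```

Between the two set peaks the output varies smoothly, which is the behaviour that makes fuzzy controllers attractive for plants with imprecise models.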
The research on which this paper is based was to formally specify a persistent object management system and to implement a part of this specification in the form of a prototype. The prototype is called the Persistent Object Store Manager (POSM) and was implemented in C++ using the IBM OS/2 Operating System. POSM is formally specified in this paper: the data model it uses is rigorously defined in the Z notation, as are its operations and state space. Example schemas for the operation of the node management component of the prototype are presented. The choice of the Z notation is justified and the benefits of using Z in this research are discussed.
Current approaches to text retrieval based on indexing by words or index terms and retrieving by specifying a Boolean combination of keywords are well known, as are their limitations. Statistical approaches to retrieval as exemplified in commercial products like STATUS/IQ and Personal Librarian are somewhat better but still have their weaknesses. Approaches to the indexing and retrieval of text based on techniques of automatic natural language processing (NLP) may soon start to realise their undoubted potential in terms of improving the quality and effectiveness of information retrieval. In this article we will explore what that potential is. We will divide information retrieval functionality into conceptual and traditional information retrieval and we will examine some of the current attempts at using various NLP techniques in both the indexing and retrieval operations.
Fractal block coding is a new coding scheme which compresses image data by exploiting self-similarity within the picture to be coded. Arbitrary digital grey-tone images are modelled as fixed points of contractive transformations on the image space. The transformation coefficients are used to iterate successively from any initial image to a fixed point which is close to the original image. In previous research on fractal block coding, the target block size is fixed, trading image quality against compression ratio. In this paper a multi-level fractal block coding scheme is proposed, which includes two-level fractal block coding as a special case and which can achieve good image quality at a higher compression ratio.
Since the introduction of software measurement theory in the early seventies it has been accepted that in order to control software it must first be measured. Unambiguous and reproducible measurements are considered to be the most useful in controlling software productivity, costs and quality, and diverse sets of measurements are required to cover all aspects of software. This paper focuses on measurements for rule-based language systems and also describes a process for developing measures for other non-standard 3GL development tools. Using "KEL" as an example, the method allows the re-use of existing measures and indicates if and where new measures are required. As software engineering continues to generate more diverse methods of system development, it is important to continually update our methods of measurement and control.
In using any hypertext system a user will encounter many technical problems, which have been well documented in the literature. Two of the more serious problems with using hypertext are user disorientation and the retrieval of information. Another, less often addressed, problem is that of the logical sequencing of nodes. In the work reported in this paper we address these three problems by combining Hammond and Allinson's guided tour metaphor with Frisse's information retrieval techniques to dynamically create guided tours for users in direct response to a user's query. One of the features of our method is that we take advantage of the typing of information links in the hypertext to generate a tour which has a judicious sequencing of nodes, rather than a simple presentation of hypertext nodes in order of similarity to the user's query. Our method was empirically tested on a population of 125 users who generated a total of 973 individual tours, and all user actions and responses to questions were logged. The results of this evaluation are presented in this paper.
Reliability techniques are used in the risk analysis of mechanical and electronic systems such as nuclear installations, chemical complexes and aerospace operations. They are not, as yet, widely used in the field of hydrocarbon exploration and production, despite the fact that this area supplies 60% of the national demand for fuel products. This paper deals with the development of a generic software tool for the generation of reliability models in accordance with the design specifications of engineering systems. With sufficient reliability data the end tool will be a significant aid to both system designers and operators in the optimal design and maintenance of off-shore systems.
This paper describes the implementation of a prototype object management system called the Persistent Object Storage Manager (POSM), implemented in C++ using the IBM OS/2 Operating System. Before implementing an object management system, several issues have to be resolved. For instance, in the absence of a standard formal model of objects in the field of object-oriented database systems, which model should be used? Should a new model be specified? Should the best features of existing models be merged into a hybrid? What operations will the system support? Will existing technology, for example relational databases, be used and extended? Clearly these issues have a major impact on the specification and design of an object management system. Object-oriented databases are a developing technology and a promising area of research. This paper is concerned with the implementation of a prototype object management system for use by applications programmers. The prototype is described as an object management system to differentiate it from an object-oriented database system, the difference being that an object management system provides a subset of the functionality expected in an object-oriented database system in the form of a kernel.
A PC-based implementation of a simple non-interactive identity-based key exchange algorithm suggested by Maurer and Yacobi is described. In this system the user's public key is simply their own identity. The security of the method depends on the generation of special trap-door primes, for which it is infeasible to factor the product of two or more, yet for which it is relatively easy to calculate discrete logarithms.
Classic secret key block ciphers can be vulnerable to chosen plaintext and known plaintext attacks. The recent discovery of differential cryptanalysis makes it even more imperative to develop defences against these attacks. This paper describes a modified version of Cipher-Block Chaining called Cipher-Block Chaining with Plaintext FeedForward and Initial Random Block (CBC-PFF/IRB). This effectively denies the cryptanalyst any known plaintext from which to launch a possible cryptanalytic attack.
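The exact CBC-PFF/IRB construction is defined in the paper; purely to illustrate the style of mode involved, the sketch below shows one plausible reading — standard CBC seeded with a random initial block (IRB), with the previous plaintext block additionally XORed forward into each ciphertext block — using a toy byte-wise cipher as a stand-in for a real block cipher. The feed-forward placement, the function names and the toy cipher are all assumptions, not the paper's design:

```c
#include <stdint.h>
#include <string.h>

#define BLK 8  /* block size in bytes (illustrative) */

/* Toy byte-wise "block cipher": rotate each byte left 3, XOR the key.
 * A stand-in only -- a real design would use a proper block cipher. */
static void toy_encrypt(uint8_t out[BLK], const uint8_t in[BLK],
                        const uint8_t key[BLK]) {
    for (int i = 0; i < BLK; i++)
        out[i] = (uint8_t)(((in[i] << 3) | (in[i] >> 5)) ^ key[i]);
}
static void toy_decrypt(uint8_t out[BLK], const uint8_t in[BLK],
                        const uint8_t key[BLK]) {
    for (int i = 0; i < BLK; i++) {
        uint8_t t = (uint8_t)(in[i] ^ key[i]);
        out[i] = (uint8_t)((t >> 3) | (t << 5));
    }
}

/* CBC with plaintext feed-forward and an initial random block:
 *   C[0] = IRB (transmitted),  P[0] defined as the zero block,
 *   C[i] = E(P[i] XOR C[i-1]) XOR P[i-1]
 * Each ciphertext block thus also depends on the previous plaintext,
 * so a captured plaintext/ciphertext pair no longer exposes a clean
 * input/output pair of the underlying cipher. */
void cbcpff_encrypt(uint8_t *ct, const uint8_t *pt, size_t nblocks,
                    const uint8_t key[BLK], const uint8_t irb[BLK]) {
    uint8_t prev_c[BLK], prev_p[BLK], tmp[BLK];
    memcpy(ct, irb, BLK);              /* C[0] = IRB */
    memcpy(prev_c, irb, BLK);
    memset(prev_p, 0, BLK);
    for (size_t b = 0; b < nblocks; b++) {
        for (int i = 0; i < BLK; i++) tmp[i] = pt[b*BLK+i] ^ prev_c[i];
        toy_encrypt(&ct[(b+1)*BLK], tmp, key);
        for (int i = 0; i < BLK; i++) ct[(b+1)*BLK+i] ^= prev_p[i];
        memcpy(prev_c, &ct[(b+1)*BLK], BLK);
        memcpy(prev_p, &pt[b*BLK], BLK);
    }
}

/* Inverse: P[i] = D(C[i] XOR P[i-1]) XOR C[i-1]. */
void cbcpff_decrypt(uint8_t *pt, const uint8_t *ct, size_t nblocks,
                    const uint8_t key[BLK]) {
    uint8_t prev_p[BLK], tmp[BLK];
    memset(prev_p, 0, BLK);
    for (size_t b = 0; b < nblocks; b++) {
        for (int i = 0; i < BLK; i++) tmp[i] = ct[(b+1)*BLK+i] ^ prev_p[i];
        toy_decrypt(&pt[b*BLK], tmp, key);
        for (int i = 0; i < BLK; i++) pt[b*BLK+i] ^= ct[b*BLK+i];
        memcpy(prev_p, &pt[b*BLK], BLK);
    }
}
```

Because decryption needs the previously recovered plaintext, the mode is strictly sequential, the usual price of feed-forward chaining.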
This paper introduces the concept of continuous improvement of software through assessment and describes some models for doing exactly this, including ISO 9001 and the Capability Maturity Model. We then compare these models and derive an evolutionary model which depicts the stages of software engineering evolution. We use a target organisation to understand how this model may be used in practice and discuss how the model would actually work.
This paper presents a PC or workstation based simulation tool which can be used to study the effects of parameters used in multiprocessor implementations of combinatorial optimisation or constraint satisfaction problems. The search space for such algorithms is usually represented as a tree. Some results of using the tool are presented for a case of an integer valued Linear Programming problem.
Problems of producing high-quality software at reasonable cost, in a changing environment and for large-scale systems such as those in the telecommunications world, are very acute. Object-oriented technologies seem to be the most promising in this area. This paper examines the current block design methods used in the telecommunications industry, which offer some of the features of object-oriented design. A design method is proposed which adopts ideas from the best-known OO design methods proposed to date. Comments and experiences from prototype work using this approach are included.
SSADM is the preferred information systems development method in the public service. This paper is based on five specific case studies of project development using SSADM, and a new model is proposed based on the conclusions reached. A list of questions relevant to developing a transfer strategy is included, as are recommendations for SSADM support.
This paper examines all aspects of the OO paradigm with emphasis on OO system specification and design. We describe the technology behind the OO paradigm and OO analysis and design approaches. We also describe some of the problems encountered when the Booch OO method was used to specify a system that had already been developed in C++ without any formal specification or design process.
The advantages of using a state-based formal specification technique in the development process for telecommunications services are examined. The Z notation is used to generate a specification of the Transfer On Busy service for use in the Intelligent Network environment. Rapid provisioning of telecommunications services is achieved by constructing them from a suite of reusable components. The ability of a formal specification in Z to support the concept of reuse at the specification level is examined. A mechanism for the detection of potential service interaction problems in the Intelligent Network is proposed, and opportunities to automate the production of test suites from the specification are examined.
PCTE/ECLIPSE is a SUN-based environment and a set of tools for building prototype system/software design methods. It provides tools to enable designers to custom-build their own method. The core idea is to provide a base of libraries and functions called the Tool Builder's Kit. This paper reports on a project which involved using the Tool Builder's Kit to implement a specific method for software design. The method is of a general type and contains features common in industry standards.
Semantic query optimisation has been proposed as an extension of conventional query optimisation in order to make use of integrity constraints from within a relational DBMS. Research has shown that SQO can reduce the time taken to answer user queries, but most of this research has been on "academic" rather than real-life databases. This paper tries to establish whether SQO can reduce the cost of query execution in a business RDBMS. A set of queries is examined using both conventional query optimisation and SQO techniques, and the results are compared.
A testing methodology suitable for use in a Windows environment is developed here based on a simple model of roles in a software project involving client, manager, designer and developer. General testing problems and testing problems specific to Windows are presented. A general framework for software testing is developed from the model. Alternative development options for Windows are discussed, including QuickC and Visual Basic. The importance of test materials is stressed, with a hypertext representation being suggested.
This paper examines issues involved in emulating CISC processors with RISC technology while maintaining or improving performance. Case studies of existing emulations and a model of processor throughput are used as a basis for determining the appropriate division between hardware and software for an efficient emulation. An approach is selected which provides a microcoded bit-slice front-end to handle CISC addressing modes and some core instructions for the RISC machine. This approach is evaluated in detail by using it as the basis for an emulation of a DIGITAL PDP-11 processor by an Intel i960 RISC processor. Analysis indicates that the emulation out-performs the original system by a substantial margin while requiring less than 30% of the microcode needed in a fully bit-sliced approach.
This review article will attempt to highlight the key areas of research in the general area of texture and give a broad appreciation of the models, analysis and synthesis techniques involved. A number of application examples will also be given.
This paper considers the problem of partitioning m tasks among n nodes, where each of the m tasks has a different computational length and precedence constraints, and may pass data to and/or receive data from any of the other (m-1) tasks. The paper presents a heuristic with complexity of approximately O(m²) which appears to minimise the total execution time of the overall program.
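The paper's heuristic is its own; for orientation, the family it belongs to — greedy list scheduling of precedence-constrained tasks onto identical nodes — can be sketched as follows. The task data, the tie-breaking rule and the omission of communication costs are simplifications for illustration:

```c
#define MAXT 32
#define MAXP 8

/* Schedule m tasks with lengths len[] and precedences pred[][]
 * (pred[i][j] = 1 means task j must finish before task i starts)
 * on n identical nodes.  Greedy list scheduling: repeatedly pick a
 * ready task and place it on the node where it can start earliest.
 * Assumes the precedence graph is acyclic.  Returns the makespan. */
int list_schedule(int m, int n, const int len[], int pred[][MAXT]) {
    int finish[MAXT];            /* finish time of each scheduled task */
    int done[MAXT] = {0};
    int node_free[MAXP] = {0};   /* time each node next becomes free */
    int scheduled = 0, makespan = 0;

    while (scheduled < m) {
        for (int t = 0; t < m; t++) {
            if (done[t]) continue;
            int ready = 1, est = 0;  /* earliest start from predecessors */
            for (int j = 0; j < m; j++) {
                if (pred[t][j]) {
                    if (!done[j]) { ready = 0; break; }
                    if (finish[j] > est) est = finish[j];
                }
            }
            if (!ready) continue;
            int best = 0;            /* node that frees up earliest */
            for (int p = 1; p < n; p++)
                if (node_free[p] < node_free[best]) best = p;
            int start = est > node_free[best] ? est : node_free[best];
            finish[t] = start + len[t];
            node_free[best] = finish[t];
            done[t] = 1;
            scheduled++;
            if (finish[t] > makespan) makespan = finish[t];
        }
    }
    return makespan;
}
```

A heuristic of this kind trades optimality (the problem is NP-hard in general) for low polynomial cost, which is the same trade the paper's O(m²) heuristic makes.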
Document retrieval systems in the past have used single strategies to retrieve documents from within a collection using an appropriate representation of the document collection. To improve on this, some researchers have used a number of retrieval strategies and selected the best strategy to use based on the particular query entered. However, none have tried to use more than one strategy in combination to retrieve documents from a text collection. This paper presents a retrieval system, OSCAR, in which Bayesian inference is used to combine the results from more than one retrieval strategy, in an attempt to build a retrieval strategy which achieves a higher quality of retrieval than any of the individual strategies used in isolation. The retrieval model is described together with some experiments. Retrieval effectiveness is evaluated using precision and recall measurements, and the results show that an improvement in retrieval quality can be obtained when more than one retrieval strategy is used in combination.
Most pseudo-random number generators are not truly random. This paper describes a portable C version of a random number generator proposed by Marsaglia et al., with a period of just under 2^1178 (approximately 10^354), which for all intents and purposes is random.
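The cited generator's exact parameters and code belong to Marsaglia et al.; a generic subtract-with-borrow generator of the same family can be sketched as follows, with the lags and the seeding routine chosen purely for illustration (the period of such a generator grows roughly as b^r for base b and long lag r, which is how astronomically long periods are reached):

```c
#include <stdint.h>

#define LAG_R 37   /* long lag  (illustrative choice) */
#define LAG_S 24   /* short lag */

typedef struct {
    uint32_t x[LAG_R];  /* circular buffer of the last LAG_R outputs */
    uint32_t carry;     /* the "borrow" carried between steps */
    int i, j;           /* positions of x[n-LAG_R] and x[n-LAG_S] */
} swb_state;

/* Fill the lag table from a 32-bit seed using a simple LCG. */
void swb_seed(swb_state *g, uint32_t seed) {
    for (int k = 0; k < LAG_R; k++) {
        seed = seed * 1664525u + 1013904223u;
        g->x[k] = seed;
    }
    g->carry = 0;
    g->i = LAG_R - 1;
    g->j = LAG_S - 1;
}

/* x[n] = (x[n-s] - x[n-r] - carry) mod 2^32; a borrow sets the next
 * carry.  The new value overwrites the oldest slot in the buffer. */
uint32_t swb_next(swb_state *g) {
    int64_t t = (int64_t)g->x[g->j] - g->x[g->i] - g->carry;
    if (t < 0) { t += 4294967296LL; g->carry = 1; }
    else       { g->carry = 0; }
    uint32_t out = (uint32_t)t;
    g->x[g->i] = out;
    if (--g->i < 0) g->i = LAG_R - 1;
    if (--g->j < 0) g->j = LAG_R - 1;
    return out;
}
```

Only integer subtraction and index arithmetic are needed, which is what makes this family both fast and easy to write portably in C.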
In this paper we investigate the best-known identity-based key exchange schemes. The schemes of E. Okamoto, Girault and Maurer-Yacobi, as well as those of Gunther, Bauspiess-Knobloch and T. Okamoto-Ohta, are described. Some of these schemes are based on the difficulty of computing discrete logarithms. We compare all the above-mentioned schemes and clarify the relationships that link a user's identification and public information with his/her secret key in each scheme. We also compute the number of data exchanges as well as the number of modular multiplications required in each protocol, since these are considered the two major time and communication overheads in this type of protocol.
Many computer applications are designed solely for interactive use, and the running of such applications can be difficult to automate as it often requires a full-duplex flow of information. In this paper an application environment is developed which can automate any application that can run in an X terminal (xterm). We review similar approaches to automating human-computer interaction and provide sample scripts illustrating the approach we have taken.
This paper presents a prospectus for how an urban rail network for the Dublin area should be developed.
This paper describes the model used by the SEAU Procedure Management System for representing procedures. Procedure management systems are introduced and some important issues in the design of such systems are discussed. The model used in the SEAU Procedure Management System is then presented.