

Working Papers for 1999



CA-0199:
The development of an Agent Based Critiquing System Architecture for a project management tool: Prompter,
Eamon Gaffney, Tony Moynihan and Rory O'Connor.

This paper describes work undertaken within the context of the P3 (Project and Process Prompter) Project to develop the Prompter tool, a 'decision-support tool to assist in the planning and managing of a software development project'. Prompter has the ability to help software project managers assimilate best practice and 'know how' in the field of software project management and incorporate expert critiquing to assist with solving the complex problems associated with software project management. This paper focuses on a component of the Prompter tool known as the Daemon architecture, which is responsible for the dynamic generation of advice from the tool.
Keywords: Project management, intelligent agent, CORBA, Java

CA-0299:
A Study of Dynamic Bytecode Execution Frequencies of the Java Virtual Machine,
John Waldron.

Java is an object-oriented language that has grown in popularity since its release in 1996 and is particularly interesting because it uses a bytecode intermediate language to represent programs, so that the same program can be run unchanged on machines with different underlying instruction sets. The bytecodes executed during the interpretation of some real Java programs were studied; this is important because most code loaded during execution is not used often enough to make it worth spending time just-in-time compiling it to native code. To measure dynamic bytecode usage it was necessary to modify the source code of Kaffe, a Java Virtual Machine. A selection of programs was measured to compare the way different applets and applications use the bytecodes, and very similar patterns of usage were found in all cases. For the set of programs studied, most of the bytecodes are used at least once during execution; however, a small subset of the bytecodes is executed with very high frequency. The data in this paper calls into question the stack-based design of the intermediate representation of Java programs: since the bytecodes occupy on average only twelve percent of a class file, an intermediate representation that is less compact but executes more efficiently might be possible.
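
A minimal sketch of the kind of per-opcode counting such a study requires is given below. It is illustrative only: the instrumented virtual machine in the paper is Kaffe, which is written in C, and the class and method names here are hypothetical.

    // Hypothetical sketch: tally how often each bytecode is interpreted.
    // A real measurement would hook this into the VM's dispatch loop (Kaffe, in C);
    // this Java class only illustrates the bookkeeping.
    public class BytecodeProfiler {
        private final long[] counts = new long[256];   // one counter per opcode value

        // Called once for every interpreted instruction.
        public void record(int opcode) {
            counts[opcode & 0xFF]++;
        }

        // Print opcodes in descending order of dynamic frequency.
        public void report() {
            java.util.List<Integer> ops = new java.util.ArrayList<>();
            for (int i = 0; i < 256; i++) if (counts[i] > 0) ops.add(i);
            ops.sort((a, b) -> Long.compare(counts[b], counts[a]));
            long total = 0;
            for (int op : ops) total += counts[op];
            for (int op : ops) {
                System.out.printf("opcode 0x%02x: %d (%.2f%%)%n",
                        op, counts[op], 100.0 * counts[op] / total);
            }
        }
    }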

CA-0399:
3D Training Environments: VRML and its use in Interactive Task-based Simulations,
Seamus Brady, Carol O'Sullivan.

Training in the IT sector is vital. With technologies evolving ever faster, people are required to learn new skills all the time. Instructor-led training (ILT) is a popular and useful method of training, but it is expensive, and companies are looking towards computer-based training (CBT) to provide cheaper methods of educating the workforce. There are already many CBT applications in the marketplace, all of which use 2D interfaces. This thesis examines the requirements for a 3D interactive training application and looks at the technologies that exist to support and implement such an application. The goal is to build a prototype interactive training application that allows users to train with 3D models and have the application monitor and guide them through training tasks.

CA-0499:
Heuristical Real-time Shadows: Improving the creation of shadows in real-time applications in computer graphics,
David Meaney, Carol O'Sullivan.

Computer-generated graphical scenes benefit greatly from the inclusion of accurately rendered shadows. Shadows contribute to the realism of a scene, and also provide important depth cues, providing information relating to the relative position of objects within a scene. However, shadow generation imposes a significant penalty in terms of the time required to render a scene, especially as the complexity of the scene and the number of polygons needed increases. The time needed to generate realistic scenes with shadows becomes an issue warranting serious consideration in the area of real-time scenes. For this reason, real-time scene generation benefits from the use of a heuristical approach to the determination of shadow areas. For example, the shape of a moving object is perceived less accurately by a viewer than the shape of a static object. So it is reasonable to say that less effort - and therefore less time - should be expended on realising the shadow of a moving object compared to that of a static object. The objective of this thesis is to investigate a number of heuristics that might be employed to facilitate real-time animation at acceptable frame rates, and also to illustrate the use of some of these techniques in an OpenGL application, specifically written for this purpose.
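
By way of illustration only (the heuristics investigated in the thesis and its OpenGL implementation are not reproduced here), one hypothetical heuristic of this kind selects a cheaper shadow representation for fast-moving objects:

    // Hypothetical motion-based shadow level-of-detail heuristic: faster-moving
    // objects receive coarser (cheaper) shadows. Thresholds are placeholders.
    public class ShadowLod {
        enum Quality { HIGH, MEDIUM, LOW }

        // speed in scene units per frame
        static Quality shadowQuality(double speed) {
            if (speed < 0.1) return Quality.HIGH;     // static or near-static object
            if (speed < 2.0) return Quality.MEDIUM;   // slowly moving object
            return Quality.LOW;                       // fast-moving object: shape perceived less accurately
        }

        public static void main(String[] args) {
            System.out.println(shadowQuality(0.0));   // HIGH
            System.out.println(shadowQuality(5.0));   // LOW
        }
    }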

CA-0599:
A Formal Specification of a Safe ALGEBRA - "A",
Lejla Rovcanin, John Murphy.

It is common knowledge that the database area lacks a theoretically based data model that could be a foundation for the next generation of databases. The book by Hugh Darwen and Chris Date, "Foundation for Object/Relational Databases: The Third Manifesto" [Addison Wesley Longman, 1998], defines such a foundation: a new relational algebra called A. The new algebra is based on set theory and first order predicate logic, and is proposed as a solution to problems identified in the original relational algebra [E.F. Codd, "A Relational Model of Data for Large Shared Data Banks," CACM 13(6), June 1970]: operator duplication and lack of support for complementation and recursion. We give a concise analysis of the original formal specification of the A algebra. We also formally specify the A operators in Z notation and offer a solution to shortcomings of the original (Date and Darwen) definition of A, namely the unrestricted, "unsafe" definitions of the complementation operator and the transitive closure operator.
Key words: relational, algebra, model, database, first order logic.
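
As a rough illustration of the safety issue (this is an assumption about the intent, not the authors' Z specification): an unrestricted complement ranges over all conceivable tuples of the appropriate heading, whereas a safe variant is bounded by a finite universe U of admissible tuples, for example

    % unrestricted ("unsafe") complement of a relation r
    \langle \mathrm{NOT} \rangle (r) \;=\; \{\, t \mid t \notin r \,\}
    % a "safe" variant restricted to a finite universe U of admissible tuples
    \langle \mathrm{NOT} \rangle_{U} (r) \;=\; \{\, t \in U \mid t \notin r \,\}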

CA-0699:
Automatic Face Identification Techniques - A Comparison with Human Cognition,
Peter Coughlan, Alistair Sutherland.

The main question of this dissertation, based on the existing work in the field of Automatic Face Identification (its conditions and its results), is whether Face Identification by computers is at a point where it can be considered accurate and efficient. In the dissertation, the psychology of face recognition by humans will be explored and experiments conducted, in order to present accurate information on human abilities which can then be compared to the results achieved in the field of the computerized equivalent. The dissertation aims to illustrate that the factors which can have strong effects on the success, and failure, of computerized face recognition systems are dealt with easily by the human process of face recognition. This will be illustrated by examining some face recognition techniques (most notably Hidden Markov Models) and the environments under which they are considered efficient face recognition techniques. The conclusions drawn by this dissertation can then be compared with the success of other forms of Computerized Recognition in subject areas such as speech and text processing.
Keywords: Face Identification, Hidden Markov Models, Feature Extraction, Feature Saliency, Optical Flow, Image Segmentation.

CA-0799:
Evaluation of Automatic Shot Boundary Detection on a Large Video Test Suite,
Colin O'Toole, Noel Murphy, Sean Marlow, Alan Smeaton.

The challenge facing the indexing of digital video information in order to support browsing and retrieval by users is to design systems that can accurately and automatically process large amounts of heterogeneous video. The segmentation of video material into shots and scenes is the basic operation in the analysis of video content. This paper presents a detailed evaluation of a histogram-based shot cut detector based on eight hours of TV broadcast video. Our observations are that the selection of similarity thresholds for determining shot boundaries in such broadcast video is difficult and necessitates the development of systems that employ adaptive thresholding in order to address the huge variation of characteristics prevalent in TV broadcast video.
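
A minimal sketch of the histogram comparison underlying such a detector follows; the histogram size, colour space and the fixed threshold are placeholders, and the paper's conclusion is precisely that a fixed threshold should give way to an adaptive one.

    // Sketch of histogram-based shot cut detection: a cut is declared when the
    // normalised bin-wise difference between consecutive frame histograms exceeds
    // a threshold. Bin count and threshold values are illustrative only.
    public class ShotCutDetector {
        static double histogramDifference(int[] h1, int[] h2) {
            long diff = 0, total = 0;
            for (int i = 0; i < h1.length; i++) {
                diff += Math.abs(h1[i] - h2[i]);
                total += h1[i] + h2[i];
            }
            return total == 0 ? 0.0 : (double) diff / total;   // in [0, 1]
        }

        static boolean isShotCut(int[] prevHist, int[] currHist, double threshold) {
            return histogramDifference(prevHist, currHist) > threshold;
        }
    }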

CA-0899:
User interface issues for browsing digital video,
Hyowon Lee, Jonathan Furner, Alan Smeaton.

In this paper we examine a suite of systems for content-based indexing and browsing of digital video, and we identify a superset of the features and functions provided by these systems. From our classification of these we find that they are predominantly technology-driven, with little attention paid to actual user requirements. As part of our work we are developing an application for content-based browsing of digital video which will incorporate the most desirable but achievable of the functions of other systems. This will be achieved via a series of continuously refined demonstrator systems from Spring 1999 onwards, which will be subjected to analysis of performance in terms of user needs.

CA-0999:
Optimal Parameters for Segmenting a Stream of Audio into Speech Documents,
Gerard Quinn, Alan Smeaton.

Indexing and retrieval of spoken documents is a desirable feature in a digital library as there is a wealth of contemporary information available to us uniquely in this medium. In this paper we describe experiments carried out on the TREC 6 SDR collection to determine the optimal parameters for our speech IR system. In the TREC task the data corresponded to complete news broadcasts and the boundaries between news stories were marked up manually. In an operational news speech retrieval system such as our own, news story boundaries are not always part of the speech data, making it difficult to automatically detect shifts in stories being broadcast. We describe our approach of splitting the stream of audio into speech documents of fixed length and analyse the results from each method culminating in an optimal solution.
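
A sketch of the fixed-length segmentation idea is shown below; the window length, and whether adjacent documents overlap, are treated here as placeholder parameters rather than the values determined in the paper.

    // Sketch: split an audio stream of known duration into fixed-length "speech
    // documents". Window length and overlap are placeholders; the paper determines
    // suitable values experimentally.
    public class AudioSegmenter {
        // Returns {start, end} times in seconds for each document.
        static java.util.List<double[]> segments(double totalSeconds,
                                                 double windowSeconds,
                                                 double overlapSeconds) {
            java.util.List<double[]> result = new java.util.ArrayList<>();
            double step = windowSeconds - overlapSeconds;
            for (double start = 0; start < totalSeconds; start += step) {
                result.add(new double[] { start, Math.min(start + windowSeconds, totalSeconds) });
            }
            return result;
        }

        public static void main(String[] args) {
            for (double[] seg : segments(120.0, 30.0, 5.0)) {
                System.out.printf("%.0f-%.0f s%n", seg[0], seg[1]);
            }
        }
    }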

CA-1099:
Multimedia Information Retrieval: MIDI as a format for Content Based Retrieval of Audio,
John McDonagh, Alan Smeaton.

The purpose of this paper is to investigate the role that MIDI plays in the arena of Multimedia Information Retrieval. MIDI is a format which encodes and relays performance-related information about digital music, and which is used to generate music from a synthesiser. Current research has used this performance information to develop the concept of content-based retrieval of audio. This paper develops this thinking by analysing the existing research in content-based retrieval of non-speech audio and assessing whether or not MIDI is a good format to support such retrieval.
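
One technique commonly used in this area, given here only as a hedged illustration rather than as the paper's own method, reduces the MIDI note stream to a coarse melodic contour so that queries can be matched as strings:

    // Illustrative only: reduce a sequence of MIDI note numbers to a melodic
    // contour string of U (up), D (down) and S (same), which can be matched
    // against contours derived from a sung or played query.
    public class MelodicContour {
        static String contour(int[] midiNotes) {
            StringBuilder sb = new StringBuilder();
            for (int i = 1; i < midiNotes.length; i++) {
                int delta = midiNotes[i] - midiNotes[i - 1];
                sb.append(delta > 0 ? 'U' : delta < 0 ? 'D' : 'S');
            }
            return sb.toString();
        }

        public static void main(String[] args) {
            // Opening notes of "Twinkle Twinkle Little Star": C4 C4 G4 G4 A4 A4 G4
            System.out.println(contour(new int[] {60, 60, 67, 67, 69, 69, 67}));  // SUSUSD
        }
    }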

CA-1199:
Transfer Constructors,
Josef van Genabith, Anette Frank, Michael Dorna.

We present a modular, lexicalized, reversible and ambiguity-preserving approach to semantics-based transfer, operating on sets of linear logic meaning and transfer constructors. In many cases, transfer on sets of meaning constructors (rather than on derived, disambiguated meaning assignments) obviates the need for spurious multiple transfer on disambiguations. We concentrate on adjuncts and embedded head-switching phenomena.

CA-1299:
Syntactic and Semantic Transfer with F-Structures,
Michael Dorna, Anette Frank, Martin Emele, Josef van Genabith.

We present two approaches for syntactic and semantic transfer based on LFG f-structures and compare the results with existing co-description and restriction operator based approaches, focusing on aspects of ambiguity preserving transfer, complex cases of syntactic structural mismatches as well as on modularity and reusability. The two transfer approaches are interfaced with an existing, implemented transfer component (VerbMobil), by translating f-structures into a term language, and by interfacing f-structure representations with an existing semantic based transfer approach, respectively.

CA-1399:
Glue, Underspecification and Translation,
Josef van Genabith, Anette Frank, Dick Crouch.

[Gupta and Lamping, 98] show how linear logic based meaning constructors in the glue language semantics of [Dalrymple et al., 96] can be converted to "almost Horn clause" form using the compilation method of [Hepple, 98]. Efficient linear logic deductions deliver skeleton and modifier resources resembling Underspecified Discourse Representation Structures (UDRSs) [Reyle, 93]. In this paper we present an alternative method: we directly encode UDRT-based semantic representations in linear logic based meaning constructors. The meaning constructors thus obtained are in Horn clause form and deductions are linear in the size of semantic contributions. As an example we show how this encoding can be used advantageously in an ambiguity-preserving transfer based machine translation scenario, where it avoids the head-switching problem encountered by the otherwise related approach of [van Genabith et al., 98].

CA-1499:
Data-Driven Compilation of LFG Semantic Forms,
Josef van Genabith, Louisa Sadler, Andy Way.

In a recent paper, [van Genabith et al., 99] describe a semi-automatic method for annotating tree banks with high level Lexical Functional Grammar (LFG) f-structure representations. First, a CF-PSG is automatically induced from the tree bank using the method described in [Charniak, 96]. The CF-PSG is then manually annotated with functional schemata. The resulting LFG is then used to deterministically "reparse" the original tree bank representations, simply following the c-structure defined by the original annotations, thereby inducing f-structures corresponding to the original c-structures. The annotated grammars, however, are not yet stand-alone LFGs: they cannot be used to analyse freely occurring strings, as opposed to strings annotated with c-structures as in the original tree bank. What is missing is the LFG account of subcategorization in terms of semantic forms and completeness and coherence constraints. In the present paper we develop automatic and semi-automatic methods for compiling LFG semantic forms from the tree banks annotated with f-structure representations, effectively turning the grammars developed in the original approach into stand-alone LFG grammars. In addition, the method provides full semantic forms as PRED values for the f-structures obtained from the input tree bank.

CA-1599:
Analysis of Factors Affecting Assembly Language Programming Ability,
John T. Waldron, Jane M. Horgan, Gary Keogh.

The level of programming skill achieved by a method of learning introductory computer architecture with a strong emphasis on assembly language programming, as part of a Graduate Diploma in Information Technology in the School of Computing, Dublin City University, is studied. The philosophy of the computer architecture course was that assessment defines the curriculum from the point of view of the student, and that the best way of understanding computer architecture is through mastery learning of assembly language programming, using automatically corrected, invigilated programming exams. XSPIM, a virtual machine that runs programs for the MIPS R2000/R3000 computers, was used, since the architecture of the MIPS is an ideal example of a simple, clean RISC machine. One of the advantages of automatic assessment is that precise times taken to write particular assembly programs are available, so it is possible to take account of this when measuring programming competency. Neither previous degree type nor degree class had a significant impact on the level of assembly language programming skill achieved, but whether or not the student had experience of one high level language before starting the course was significant. Surprisingly, it was found that those who did not prepare their programs in advance of going to the computer significantly outperformed those who did. The frequency with which students logged on to the system, the typical duration of their logon sessions and the frequency with which they used the XSPIM programming tool did not appear significant, but the frequency of MIPSMARK test tool use and the order in which assignments were completed relative to other students were important.

CA-1699:
Dynamic Profiling of Local Variable and Operand Stack Activity in the Java Virtual Machine,
John Waldron.

Java has grown in popularity since its release in 1996 and is particularly interesting because programs run on a virtual machine, so that a program has network mobility and can be transferred over the internet and run unchanged on machines with different underlying instruction sets. The purpose of this paper is to study the way some real object-oriented Java programs use the stack frame of the Java Virtual Machine during execution. Dynamic measurements of parameter size, parameter and return types, temporary variable size and operand stack size were made for every method call during the execution of both program and API (Application Programming Interface) methods for the test suite studied. During the execution of object-oriented programs, a high number of method calls with a local variable array size of one or two was observed; these may be instance methods that get an object field and set an object field respectively. An object-oriented design with a consistent interface to allow for inheritance can in practice result in a significant number of methods being executed that do nothing, and hence need a zero-sized operand stack.
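
The observation about small frames can be illustrated with an ordinary getter/setter pair; this is an illustration of the effect described above, not code from the paper's test suite.

    // Compiled to bytecode, getX() needs a local variable array of size one (this),
    // setX(int) needs size two (this and x), and a method that does nothing needs
    // an operand stack of size zero.
    public class Point {
        private int x;

        public int getX() { return x; }           // local array size 1: [this]
        public void setX(int x) { this.x = x; }   // local array size 2: [this, x]
        public void doNothing() { }               // empty body: zero-sized operand stack
    }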

CA-1799:
Metaphors and Type Theory,
Josef van Genabith.

Metaphorical use of language is often thought to be at odds with compositional, truth-conditional approaches to semantics: after all, metaphors are literally false. In this paper we sketch an approach to simple metaphors based on standard type theory. The approach is classical: we do not invent a new logic. The approach models sense extension in that a supertype extends the predicate given in a particular sentence while avoiding extending the original predicate. Metaphors are interpreted as reduced similes. A compositional syntax/semantics interface is provided. The approach readily translates into a simple Prolog implementation.

CA-1899:
Poisson sampling revisited: A trip down the catwalk,
Jane M. Horgan.

Using a model-based approach, we derive conditions for which the gains in efficiency of a ratio estimator with Poisson sampling relative to the customary estimators used in conjunction with unequal probability with-replacement and fixed-sample-size without-replacement selection schemes are greatest.
Key Words: Efficiency; Hansen-Hurwitz and Horvitz-Thompson estimators; Model-based and design-based estimation; Ratio estimator; Superpopulation.
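
For orientation only (these are the standard estimator definitions, not the paper's model-based derivation): with inclusion probabilities pi_i under Poisson sampling, and an auxiliary variable x with known population total X, the Horvitz-Thompson estimator of the total Y and a ratio-adjusted alternative take the forms

    % Horvitz-Thompson estimator of the population total Y
    \hat{Y}_{HT} \;=\; \sum_{i \in s} \frac{y_i}{\pi_i}
    % ratio estimator using auxiliary variable x with known total X
    \hat{Y}_{R} \;=\; X \cdot \frac{\sum_{i \in s} y_i / \pi_i}{\sum_{i \in s} x_i / \pi_i}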

CA-1999:
A Statistical Refinement Method for Word Shape Token Querying of Document Images,
Jerh O'Connor, Alan Smeaton.

Word Shape Tokens (WSTs) are tokens used to represent words based on the overall shape or contour of a word as it appears in printed text. A character shape code (CSC) mapping function is used to aggregate similarly shaped letters such as "g" and "y" into one single code to represent those letters. The rationale behind this is that it is far easier and more accurate to map a scanned image of a word or letter into its WST representation than it is to map it into its full ASCII representation. In previous work we showed that user-mediated selection of WSTs for querying document images improved system performance. In the work reported here we use a statistically derived dataset to help determine whether or not a particular WST from a scanned document image actually matches a query term WST. We do this by comparing the preceding and following WSTs of each WST in a document against previously collected frequency data for a large set of WST occurrences.
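
A hypothetical sketch of a character shape code mapping is given below; the shape classes (ascender, descender, x-height) follow the general idea described above, but the actual code alphabet used in the paper is not reproduced.

    // Hypothetical character shape code (CSC) mapping: letters are collapsed into
    // shape classes, so a scanned word maps to a Word Shape Token (WST).
    public class ShapeCode {
        static char csc(char c) {
            if (Character.isUpperCase(c) || Character.isDigit(c)) return 'A';  // tall glyphs
            char l = Character.toLowerCase(c);
            if ("bdfhklt".indexOf(l) >= 0) return 'A';   // ascenders
            if ("gjpqy".indexOf(l) >= 0)   return 'D';   // descenders ("g" and "y" share a code)
            return 'x';                                  // x-height letters
        }

        static String wordShapeToken(String word) {
            StringBuilder sb = new StringBuilder();
            for (char c : word.toCharArray()) sb.append(csc(c));
            return sb.toString();
        }

        public static void main(String[] args) {
            System.out.println(wordShapeToken("gay"));   // DxD
        }
    }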

CA-2099:
Using World Wide Web Technology to Implement Tailoring of Hypertext for Different Abilities of Users,
Grace Johnston, Mark Roantree.

CA-2199:
The Interval Routing Problem using Hamiltonian Paths,
Anthony Ryan, Micheal O'hEigeartaigh.

CA-2299:
Profiling Java memory demographics for Garbage Collection purposes,
Owen Harrison, John Waldron.

John Ellis wrote an amusing yet very true account of garbage collection (GC) in 1993, stating that it was GC researchers' opportunity to put their money where their mouth was and show the programming world that automated memory management is a realistic and viable option in modern object-oriented (OO) languages. He suggested that it could be the last chance GC would have of making it onto the average programmer's desktop; fortunately, with Java this is now a reality. Java takes full advantage of GC, keeping the language pure by not requiring programmers to manually track memory across objects. Now that GC has a large foot in the door, how do we intend to keep it there? The answer is to make it more competitive with the performance of languages with manual memory management, such as C. At the moment there are many papers discussing the issues of profiling garbage collection with respect to specific algorithms, but work on improving garbage collection schemes seems to have bypassed analysis of the Java language itself and its surrounding environment. The performance claims of most algorithms depend on particular characteristics of heap objects, yet there is no encompassing survey that analyses how heap tendencies of the Java language affect potential garbage collection techniques. Given such survey information, one could then selectively sift through possible collection approaches, possibly arriving at new ones. I set out to lay down a list of factors that need to be profiled and how these factors relate back to potential garbage collectors. I also propose a specification for a system that will allow the required versatile and verbose profiling of the Java heap.
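
As a hedged sketch of the kind of per-object demographics such a profiling system might record (the actual list of factors and the system specification are the subject of the paper), consider:

    // Illustrative record of heap demographics a profiler might gather per object:
    // type, size and allocation/death times, from which lifetime distributions and
    // other statistics relevant to garbage collector design can be derived.
    public class HeapObjectRecord {
        final String type;       // class name of the allocated object
        final int sizeBytes;     // size of the object in bytes
        final long allocTime;    // logical time (e.g. bytes allocated so far) at allocation
        long deathTime = -1;     // logical time at which the object became unreachable

        HeapObjectRecord(String type, int sizeBytes, long allocTime) {
            this.type = type;
            this.sizeBytes = sizeBytes;
            this.allocTime = allocTime;
        }

        long lifetime() { return deathTime < 0 ? -1 : deathTime - allocTime; }
    }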

CA-2399:
A review of development methodologies for real-time systems, using a practical example,
John Woods, Charlie Daly.

Much research has been done to develop effective methodologies for systems analysis and design, giving rise to widely practised approaches such as Jackson and SSADM. However, the focus of this research has been mainly on the business information environment, with less being written about developing real-time systems. In this paper one methodology, Yourdon Structured Method, is used to develop a simple example real-time system. The effectiveness of the method is discussed and compared with other current methodologies. The example system produced is a domestic water heating controller.

CA-2499:
A study of current UNIX filesystem innovation,
John Looney, John Waldron.

UNIX has been around since the seventies, yet very few novel features (as opposed to performance- or stability-enhancing features) have been added to filesystems, compared to other parts of the operating system. This paper aims to note the innovative features that have made their way into both mainstream UNIX and more specialised software, with a view to studying their implementability on a freeware UNIX such as Linux. Part of this study also involves investigating the process of developing freeware (in particular Linux) software. Ten years ago there were few freely available tools with which to study and test a modification to a modern operating system, as commercial UNIXes were closed-source with proprietary modifications, and the free BSD derivatives were not widely known about, or used. Linux is now widely used and understood, and its simple programming model lends itself well to study and development.

CA-2599:
Real-Time Animation of Objects Modelled using Constructive Solid Geometry,
John O'Loughlin, Carol O'Sullivan.

Computer graphics have long been used to help people visualise complex entities such as machine components and assemblies. Systems such as Computer-Aided Design (CAD) applications allow designers to create models and designs for objects by providing interactive tools that can be manipulated by designers. An important facet of interactive design is that designers should be able to see the effects of changes they make on models immediately, i.e. in real-time. In this paper we are interested in applications where renderings of complex objects, modelled using Constructive Solid Geometry (CSG), can be updated quickly enough to allow interactive use. The algorithms developed in this paper to allow real-time rendering of CSG-based objects require no specialised hardware or software to perform their tasks. They merely need a display adapter with stencil buffer support, and a graphics language capable of manipulating the colour, depth and stencil buffers provided by the display hardware. They can be implemented on low-cost PC-level systems and still prove capable of generating the required number of frames per second to allow real-time screen updates of CSG-modelled objects. Through empirical studies and observation we have identified that the algorithms developed here provide better performance than equivalent algorithms.
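
For context, a CSG model is typically represented as a tree whose leaves are primitive solids and whose internal nodes are boolean set operations; the following is a hedged structural sketch only, and the paper's stencil-buffer rendering of such trees is not reproduced here.

    // Minimal sketch of a CSG tree: leaves are primitive solids, internal nodes are
    // boolean operations (union, intersection, difference).
    public abstract class CsgNode {
        enum Op { UNION, INTERSECTION, DIFFERENCE }

        static final class Primitive extends CsgNode {
            final String shape;                      // e.g. "sphere", "box"
            Primitive(String shape) { this.shape = shape; }
        }

        static final class Operation extends CsgNode {
            final Op op;
            final CsgNode left, right;
            Operation(Op op, CsgNode left, CsgNode right) {
                this.op = op; this.left = left; this.right = right;
            }
        }

        public static void main(String[] args) {
            // A box with a sphere subtracted from it.
            CsgNode model = new Operation(Op.DIFFERENCE,
                    new Primitive("box"), new Primitive("sphere"));
            System.out.println(model instanceof Operation);   // true
        }
    }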

CA-2699:
The Application of the Personal Software Process to Software Maintenance,
Francis Walsh, Howard Duncan.

The Personal Software Process (hereafter referred to by the acronym PSP) was set forth by Watts S. Humphrey of Carnegie-Mellon University, Pittsburgh, USA, as a means of allowing software engineers to improve the quality of their software by adopting a measured and defined personal process for writing that software. The PSP built upon work done by the author and other researchers at Carnegie-Mellon aimed at improving software processes and the capability of software producing organisations, especially the Capability Maturity Model (CMM). The author argued that the PSP was applicable to all types of software engineering activity. On reading the books which describe the PSP, it is clear that the author's chief aim is to improve the development of new software, rather than the maintenance of existing software. The aims of this paper are to assess the applicability of the PSP to the task of maintaining and enhancing pre-existing software in a commercial environment, to amend the PSP where necessary in order to facilitate such applicability, and to examine the issues in implementing the PSP in a commercial environment. To achieve these objectives, the nature and aims of the PSP are first examined. Then the nature of software maintenance and its peculiar problems are explored. Each of the seven sub-levels of the PSP is assessed for applicability to software maintenance, and additions and amendments are made as necessary. Finally, the issues that need to be addressed in implementing the PSP for the improvement of the software maintenance function of a commercial organisation are dealt with, and conclusions on the application of the PSP to software maintenance are given.

CA-2799:
Dynamic Bytecode Comparison by Different Compilers,
Charles Daly, David Gray, Jane Horgan, John Waldron.

The purpose of this paper is to look at factors which influence overall bytecode usage in the Java Virtual Machine, such as the number of Application Programming Interface (API) methods executed, and the compiler that was used to generate the class files. The dynamic percentages of instructions and non-native method calls in both API and program methods used by a selection of different programs were studied to compare the way different applets and applications use the bytecodes. It was found that 74.1% of bytecodes executed and 67.1% of Java methods are in the API. This suggests that one way to improve the speed of Java programs would be to compile the API methods to native instructions and save these on disk in a standard format, cutting the time spent interpreting bytecode. Non-API program methods executed only 18.1 bytecodes on average per invocation, and Java API methods 23.8 bytecodes on average. One interesting factor to note is that more static methods than virtual methods were invoked by the non-API code. Remarkably similar frequencies were obtained using the different compilers, because many traditional optimisations performed by compilers cannot be used when converting a Java program to bytecode.

CA-2899:
Abstract Project Model: A Description of a Generic Software Project,
Mark Johnston, Tony Moynihan.

This paper describes work undertaken within the context of the P3 (Project and Process Prompter) Project to develop the Prompter tool, a 'decision support tool to assist in the planning and managing of a software development project'. Prompter has the ability to help software project managers assimilate best practice and 'know-how' in the field of software project management and incorporate expert critiquing to assist with solving the complex problems associated with software project management. This paper focuses on a component of the Prompter tool known as the Abstract Project Model (APM). The Abstract Project Model represents the starting point of a user's project in a generic fashion: it is in effect a template for a project description, in a similar way to how a template can be chosen for a document in MS Word. The difference is that the APM is a template for a real-world software project.

CA-2999:
Effective Algorithms for Facility Layout and VLSI Circuit Design,
Theodore Stamoulakatos, Micheal O'hEigeartaigh.

We present a classification of published analytical methods for solving the facility layout problem. These methods are divided into optimal and heuristic algorithms. Among the optimal methods considered are branch-and-bound and cutting plane algorithms. We also review heuristic methods: construction algorithms, improvement algorithms, fuzzy set based algorithms, hybrid methods and graph theoretic methods, which are some of the proposed methods for solving the facility layout problem. Next we consider extensions of these methods and new algorithms for design problems that arise in the layout of Very-Large-Scale-Integrated (VLSI) circuits. The term VLSI reflects the capability of the semiconductor industry to fabricate a complex electronic circuit consisting of thousands of components on a single silicon base called a wafer. The VLSI routing problem is then described analytically, and wire routing algorithms for solving it, as well as the PCB (Printed Circuit Board) routing problem, are presented. These types of algorithms have two phases. The first phase is concerned with the placement of the chips on the printed circuit board and is divided into initial and improvement placement. The second phase addresses the routing of the wires following a search procedure and consists of loose routing (channel assignment) and final routing (track assignment). Two new algorithms, the 'Placement algorithm' and the 'NGR algorithm', have been introduced and tested. The results we obtained from these algorithms show that both can be used as a base for further research and improvements. The 'Placement algorithm' produced results almost identical to those obtained from the optimal solution in terms of the grid dimensions needed, whereas the 'NGR algorithm' performed adequately well in terms of channel utilisation compared with two other widely used global channel routers. Moreover, the results we obtained from both algorithms in terms of total interconnection length as well as CPU time were generally competitive, outperforming in some cases the algorithms they were compared with.

CA-3099:
Design By Contract: Implementations and Implications,
Gavin Scott, Renaat Verbruggen.

Design By Contract (DBC) is a paradigm that involves specifying contracts between routines and their clients in the form of assertions. These assertions can capture the class invariant and routine preconditions and postconditions. Eiffel provides language support for such assertions. It also supports implementation assertions in the form of check assertions, which are used to state developer assumptions, and loop assertions, which are used to guarantee loop termination. The Eiffel runtime system will monitor these assertions and generate exceptions if the assertions are violated. Eiffel also provides language support for the strategies of recovery and organised failure, which are required to deal with raised exceptions. DBC is a powerful mechanism for ensuring the development of well-behaved Object Oriented software systems. Neither C++ nor Java provides direct language support for DBC. A strategy for providing DBC assertion specification and monitoring support in C++ and Java is presented. This strategy involves the development of a set of APIs, a set of patterns for structuring classes and methods and a set of idioms for using the APIs. CORBA's Interface Definition Language (IDL) is used to specify certain elements of the APIs. This ensures compatibility between the C++ and Java DBC mechanisms and allows C++ and Java DBC integration in a multi-tiered software system. DBC also proves to be a powerful philosophy when applied to the discipline of testing. The design of an OO framework exploiting the principles of contracts is presented. It turns out that a number of testing strategies are supported by this framework, irrespective of whether or not DBC was employed in the implementation of the software under test. Strategies for C++ and Java support of the remaining elements of the DBC mechanism are sketched, coupled with a list of potential research areas concerning the application of DBC.
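
A hedged sketch of how precondition, postcondition and invariant checks might be expressed in Java without direct language support is shown below; the Contract helper and its method names are illustrative and are not the APIs, patterns or IDL definitions developed in the paper.

    // Illustrative assertion helper in the spirit of an API-based DBC strategy.
    final class Contract {
        static void require(boolean cond, String msg) {   // precondition
            if (!cond) throw new IllegalStateException("Precondition violated: " + msg);
        }
        static void ensure(boolean cond, String msg) {    // postcondition
            if (!cond) throw new IllegalStateException("Postcondition violated: " + msg);
        }
    }

    class Account {
        private int balance;

        private boolean invariant() { return balance >= 0; }   // class invariant

        void withdraw(int amount) {
            Contract.require(amount > 0 && amount <= balance, "0 < amount <= balance");
            int old = balance;
            balance -= amount;
            Contract.ensure(balance == old - amount && invariant(), "balance reduced by amount");
        }
    }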

CA-3199:
Browsing Technical Documents: Document Modelling And User Interface Design,
Donal Fitzpatrick, Alex Monaghan.

Though non-technical publications are now readily available to blind people, the degree of access diminishes as the amount of technical information increases. This makes the procurement of such information both difficult and time-consuming. It is also true to say that once technical documents have been acquired, reading them is a struggle and far from a pleasurable experience. The following discussion presents some research currently in progress at Dublin City University which aims to alleviate many of the problems encountered by blind readers who need to access highly technical material.

CA-3299:
TechRead: A System for Deriving Braille and Spoken Output from LaTeX Documents,
Donal Fitzpatrick, Alex Monaghan.

One of the most difficult aspects of research for a blind student is the unavailability of technical material in a format accessible to them. To date, much of the effort of transforming documents into either Braille or spoken output has been in the literary rather than in the technical or scientific areas. For example, the majority of the spoken text produced by existing screen access technology does not harness the capabilities of synthetic speech devices but instead outputs the material using a monotone. TechRead, on the other hand, is a system which, it is hoped, will render technical documents more accessible to blind people. This software will take LaTeX documents as input and produce Braille or spoken output from them. The main aims of TechRead are as follows:

This paper discusses the fundamental principles underlying this system. It aims to show how the LaTeX document is transformed into an internal representation of the document, and from this to either Braille or spoken output. The final section discusses how the system will be expanded to cater for mathematical material, and our beliefs that the ideas contained in the system can be used to improve screen access technology.

CA-3399:
DCU-SANE - A Neurogenetic Algorithm with Real-Coded Crossover and Mutation Operators,
Vincent M. Breheny, Seán O'Nualláin.

This research focuses on the development of the DCU-SANE neurogenetic algorithm. DCU-SANE uses a genetic algorithm with real-coded crossover and mutation operators as the method to construct and train an artificial neural network. SANE (Symbiotic Adaptive Neuro-Evolution) is a recently developed neurogenetic algorithm with superior performance to earlier approaches on the cart-broom control problem. DCU-SANE was developed by modifying the original SANE neurogenetic algorithm as follows: the development of a real-coded neuron encoding scheme to facilitate the use of real-coded crossover and mutation operators; the addition of the real-coded average and BLX-alpha crossover operators and the non-uniform mutation operator; and the development of a crossover mechanism and a connection-swapping mutation operator for the DCU-SANE neuron encoding scheme. DCU-SANE applied to the cart-broom control problem produced significantly improved performance results compared to those of SANE. DCU-SANE reduced the average number of training episodes necessary to balance the cart-broom system by 11%, in a comparable execution time and with comparable generalisation ability.
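
A hedged sketch of the BLX-alpha crossover idea for real-coded genes follows; the gene layout, the alpha value and the random seed are illustrative and do not reproduce DCU-SANE's neuron encoding scheme.

    // Illustrative BLX-alpha crossover: each child gene is drawn uniformly from the
    // interval spanned by the two parent genes, widened on each side by alpha times
    // the interval length.
    import java.util.Random;

    public class BlxAlphaCrossover {
        static double[] crossover(double[] p1, double[] p2, double alpha, Random rng) {
            double[] child = new double[p1.length];
            for (int i = 0; i < p1.length; i++) {
                double lo = Math.min(p1[i], p2[i]);
                double hi = Math.max(p1[i], p2[i]);
                double d = hi - lo;
                child[i] = (lo - alpha * d) + rng.nextDouble() * (d + 2 * alpha * d);
            }
            return child;
        }

        public static void main(String[] args) {
            double[] child = crossover(new double[] {0.2, -1.0}, new double[] {0.8, 1.0},
                                       0.5, new Random(42));
            System.out.println(java.util.Arrays.toString(child));
        }
    }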

CA-3499:
A Logical Framework for Horizontal and Vertical Development in a State-based Setting,
Claus Pahl.

We will present a formal, logic-oriented semantical framework for state-based specification. This framework is based on a state-based computational model. On top of this model, a dynamic logic formalising invariants and pre- and postconditions for state transitions is defined. Implementation and refinement relations are investigated as a concept for vertical development. These relations are also important constructs in the definition of concepts for a module or component language with parameterised modules for horizontal development. This paper integrates various aspects of state-based specification in a coherent framework. It can serve in the exploration and integration of new features into an existing specification formalism which matches the abstract notion of a state-based algebra. Modules, denoting these algebras, are the central building blocks of the framework.
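
As a small illustration in standard dynamic logic notation (not the paper's concrete syntax), an operation alpha with precondition pre, postcondition post and invariant I can be characterised by formulas of the form

    % state transition alpha satisfies its pre/postcondition specification
    pre \;\rightarrow\; [\alpha]\, post
    % and preserves the invariant I
    I \;\rightarrow\; [\alpha]\, I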

CA-3599:
An Algorithm For Scheduling Groups Of Jobs On A Single Machine,
Alexandros Diamantidis, Micheal O'hEigeartaigh.

Our algorithm considers the problem of scheduling f groups of n jobs on a single machine. The problem is to find the optimal common flow allowance k* for the common due-date assignment method, and the optimal job sequence s*, to minimise a penalty function of missing due-dates. It is assumed that no penalty is incurred if the deviation of a job's completion time from the due-date is sufficiently small. A numerical example is provided to illustrate the use of the results to determine the optimal solution to the due-date determination and sequencing problem.
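
For orientation, and with notation that is illustrative rather than taken from the paper: if the common flow allowance k assigns every job the same due date d = k, C_j(s) is the completion time of job j under sequence s, and Delta is the tolerance within which no penalty is charged, then an objective of the kind described can be written as

    \min_{k,\; s} \; \sum_{j=1}^{n} w_j \, \max\bigl( 0,\; \lvert C_j(s) - k \rvert - \Delta \bigr)

where the weights w_j (possibly group-dependent) are also an assumption of this sketch.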