The emergence of the so-called "Information Society" is making ever larger amounts of online information available to people. I am interested in supervising projects aimed at developing systems for the automated handling of digital information, to deliver information that people are looking for or might find useful. Some of the functionality of these systems aims to improve the efficiency of tasks that can be carried out using paper documents; but computing technologies also enable this information to be used in new ways, to personalise information delivery or to address new tasks which would have been difficult or impossible to support in the past.
Visitors to museums, popular tourist sites, exhibitions, etc. frequently follow similar paths or may select particular items of interest. Emerging location-monitoring technologies are now making it possible to track their exact paths around the place that they are visiting. This information can then be used to create a personalised record of their visit, recommend other exhibits that they may find interesting, or even suggest somewhere else to visit. Records from previous visitors can also be used to recommend items to new visitors who are interested in the same things.
Various projects are available to address the opportunities and challenges of interactive multimodal museums and exhibitions. A project may explore the use of the UbiSense location-monitoring hardware recently installed in the School of Computing to capture a user's path through a fixed space. This information could be used to record personalised souvenirs or to make recommendations. Another possibility would be to automatically compose personalised multimedia displays to augment an exhibition, for example to tell you more about the history of a place with respect to people, buildings, technology, writers, artists or society in general.
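As a starting point, the idea of using past visitors' paths to make recommendations can be sketched with a simple co-occurrence approach. Everything here is illustrative: the exhibit names, data and scoring are assumptions, not part of any existing system, and a real project would work from location-tracking logs.

```python
# Hypothetical sketch: recommending exhibits from past visitor paths.
# Each past visit is the list of exhibits a visitor stopped at (as might
# be captured by UbiSense-style tracking). For a new visitor's partial
# path, we recommend the exhibits that most often co-occurred with the
# ones they have already seen.
from collections import Counter
from itertools import combinations

def build_cooccurrence(past_visits):
    """Count how often each pair of exhibits appears in the same visit."""
    co = Counter()
    for visit in past_visits:
        for a, b in combinations(set(visit), 2):
            co[frozenset((a, b))] += 1
    return co

def recommend(current_path, past_visits, top_n=3):
    co = build_cooccurrence(past_visits)
    seen = set(current_path)
    scores = Counter()
    for visit in past_visits:
        for exhibit in set(visit) - seen:
            scores[exhibit] += sum(co[frozenset((exhibit, s))] for s in seen)
    return [e for e, _ in scores.most_common(top_n)]

past = [["egypt", "mummies", "greek"],
        ["egypt", "mummies", "vikings"],
        ["greek", "roman"]]
print(recommend(["egypt"], past))  # exhibits most often seen with "egypt" first
```

A fuller system would weight by visit recency or dwell time at each exhibit; the co-occurrence counts here are the simplest possible signal.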
These projects would be suitable for CA4 students interested in learning about multimedia search and composition, information retrieval, recommendation and location-aware computing.
Standard hypertext systems are constructed by hand to provide a structured network of interlinked content. A web site provides an example of a simple hypertext system. Hypertext can be used in a range of online applications, including systems for e-learning where users are given an interlinked environment in which to learn about a topic. A simple standard example would be an online version of a textbook containing many links to other parts of the book, enabling the user to easily jump to topics they wish to explore. A rapidly expanding example of a hypertext information system is Wikipedia, which increasingly provides a means to easily find the key details of topics, people, places, etc.
Hypertext systems are developed completely manually, which is very expensive, and the content is limited to that included by the developer. The idea of this project would be to combine hypertext systems with automatic crawling of related content from the web. For example, as a starting point, individual pages in a hypertext system could be used as search queries to the web. The retrieved content could then be added to the hypertext system. There are many ways that this could be made more sophisticated using ideas from research in adaptive hypermedia systems and information retrieval.
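The "page as query" starting point can be sketched very simply: reduce the text of a page to its most frequent content words and use those as a query to a web search engine. The stopword list and plain term-frequency scoring below are simplifying assumptions; a real system would use TF-IDF weights from a background corpus and probably stemming.

```python
# Illustrative sketch: turning a hypertext page into a search query by
# keeping its most frequent non-stopword terms. All details (stopword
# list, scoring, example text) are assumptions for illustration.
import re
from collections import Counter

STOPWORDS = {"the", "a", "an", "of", "to", "and", "in", "is", "are",
             "for", "on", "with", "that", "this", "it", "as", "be"}

def page_to_query(page_text, num_terms=5):
    words = re.findall(r"[a-z]+", page_text.lower())
    counts = Counter(w for w in words if w not in STOPWORDS and len(w) > 2)
    return " ".join(term for term, _ in counts.most_common(num_terms))

page = ("Hypertext systems link documents into a navigable network. "
        "Each hypertext node links to related documents, and links "
        "between documents form the structure of the hypertext.")
print(page_to_query(page))
```

The resulting query string would then be submitted to a search engine, and the retrieved pages added as new nodes and links in the hypertext.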
This project would be suitable for CA4 students interested in learning about adaptive hypermedia systems, and web retrieval and indexing methods.
When using search engines, users sometimes have only a vague idea of what they want to know. In order to address their information need they can end up browsing between a large number of documents, developing their knowledge; in effect, finding out what they need to know before ultimately finding the answer to their question.
Existing search engines fail to address the needs of this type of user effectively. After they enter their vague query, a ranked list of individual documents is returned, and the user must then browse among these separate documents in an attempt to find out what they need to know. The aim of this project is to consider the needs of this class of web user and to develop search technologies and an interface to enable the user to explore an integrated space of information relating to a vague topic. This would involve features such as automatically linking documents with related content and indicating these links to the user. For example, if the user wants to find out more about a specific topic within a document, they should be directed straight to potentially useful additional documents. This project would involve the use of information retrieval techniques, document link analysis and the exploration of novel web browsing interfaces and metaphors. It is expected that the project would make use of the SPIRIT web collection.
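The automatic-linking idea can be sketched with a standard vector-space approach: represent each document as a term-frequency vector and propose a link between any pair whose cosine similarity exceeds a threshold. The threshold and toy documents below are assumptions; a real system over a collection like SPIRIT would use TF-IDF weighting and an inverted index rather than all-pairs comparison.

```python
# Minimal sketch of automatically linking documents with related
# content via cosine similarity of term-frequency vectors.
import math
from collections import Counter
from itertools import combinations

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def propose_links(docs, threshold=0.3):
    """Return document pairs similar enough to link (toy threshold)."""
    vectors = {name: Counter(text.lower().split()) for name, text in docs.items()}
    return [(x, y) for x, y in combinations(vectors, 2)
            if cosine(vectors[x], vectors[y]) >= threshold]

docs = {
    "d1": "castles and towers of medieval scotland",
    "d2": "medieval castles in the scottish highlands",
    "d3": "train timetables for commuter routes",
}
print(propose_links(docs))  # only the two castle documents are linked
```

In the envisaged interface, these proposed links would be surfaced to the user while browsing, so a topic within one document leads directly to related documents.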
This project would be suitable for CA4 students interested in learning more about web retrieval methods and contributing to the development of new browsing and navigation tools for web data.
It is now common practice for lecturers to make electronic versions of their lecture slides available on webpages to support module teaching. This only represents a starting point for the potential of electronic information management tools to support teaching and learning. For example, an existing project developed a novel application to automatically link lecture slides to audio recordings of the relevant section of the associated lecture. Using this application students can play back the original lecture material while viewing the slide. This can be very useful since lectures often contain descriptions and examples not available in textbooks. This approach is potentially very powerful, for example allowing automated search for material within lectures, rather than having to listen to a complete audio recording to find the relevant part.
The aim of this project would be to explore methods by which this approach could be extended to further linking of study resources and information. One possible area of investigation is the automated generation of search queries from lecture slides and audio recordings; these might be used, for example, to find relevant sections of online textbooks or to locate web pages containing related material. Another is the automated linking of materials between lectures, so that a student could simply follow links to related material. The system could be personalised for individual students by linking slides and the presentation to notes made by the student, either during a lecture or in private study. Such notes would provide further information for automated information seeking.
The overall aim is to develop a learning environment which integrates the available information resources, improving the efficiency with which students navigate related information through searching and linking, and bringing to their attention resources that they might otherwise not be able to locate.
This project would be suitable for CA4 students interested in learning more about existing information retrieval methods and their use in a novel application.
Information can be delivered to a user using different media, e.g. as text or graphics, or as the output of a speech synthesiser. The most appropriate form of information delivery for mobile applications will often depend on the activities of the user. For example, a user who is walking or driving is best served by audio delivery, while a user who is stationary and able to view a screen can often obtain information more efficiently if it is delivered visually. The bandwidth of visual information delivery is much higher than that of audio delivery, so it will often be necessary to automatically summarise text documents before they can be delivered via audio.
The objective of this project would be to develop a prototype information delivery management agent. The agent would monitor a user's activity based on simulated context input and then decide on how and when to deliver information.
The project would require formal analysis of user context and how this can be described for an agent; automated transformation of information into the appropriate form, e.g. showing directions on a graphical map or delivering them via speech synthesis; and summarisation of text information for audio delivery. The system could be extended to include automated learning of user responses to agent behaviour and corresponding modification of the agent. Techniques covered could include agent technology, context-aware technology, automated summarisation, speech synthesis and information visualisation.
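The agent's core decision can be sketched as a small set of rules over a simulated context record, with text bound for audio delivery first shortened by a naive extractive summariser (keep the first N sentences). The context fields, rules and example message below are illustrative assumptions, not a fixed design; a real agent would use richer context and a proper summarisation method.

```python
# Hedged sketch of a delivery-management agent: simulated context drives
# a rule-based choice of modality, and audio-bound text is summarised.
def summarise(text, max_sentences=2):
    """Naive extractive summary: keep the first few sentences."""
    sentences = [s.strip() for s in text.split(".") if s.strip()]
    return ". ".join(sentences[:max_sentences]) + "."

def choose_delivery(context, message):
    """Return (modality, payload) for a simulated user context."""
    if context["activity"] in ("walking", "driving"):
        # Eyes-busy user: deliver a spoken summary.
        return ("audio", summarise(message))
    if context["screen_available"]:
        return ("visual", message)
    return ("audio", summarise(message))

ctx = {"activity": "walking", "screen_available": False}
msg = ("Turn left at the library. Continue for 200 metres. "
       "The meeting room is on the second floor. Coffee is provided.")
print(choose_delivery(ctx, msg))
```

The extension mentioned above would replace the fixed rules with behaviour learned from user responses, e.g. whether the user re-requests a visual version after an audio delivery.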
This project would be suitable for CA4 students with interests in learning about machine learning and agent technology, and their application to data management in mobile context-aware environments.
Context-aware computing systems are designed to behave differently depending on user context, for example different weather conditions, the user's location, or who the user is with. When testing these systems it is often difficult to examine all anticipated situations, since many contexts are not available on demand; we cannot make it rain or change the temperature, for example, to test how a system responds.
One approach to this problem is to build virtual worlds where the user is in control of the context. The output of environment sensors can be varied exactly as it would in the real world, and the behaviour of the context-aware system monitored. In general terms, the context-aware application does not need to know that it is dealing with simulated as opposed to real data.
One approach to simulating environments for context-aware testing is the use of computer games development engines which enable programmers to develop new worlds and control the environment. The aim of this project would be to use an existing games engine to develop a simulated context-aware environment and integrate this with a basic context-aware information management tool.
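The key design point, that the application cannot tell simulated readings from real ones, can be sketched by hiding the data source behind a common sensor interface. Here the simulated sensor simply replays a script; in a full project the readings would be driven by the games-engine world. All class and field names are assumptions for illustration.

```python
# Sketch: a context-aware application reads from a sensor interface;
# a scripted simulated sensor is substituted without the application
# knowing or caring that the data is not real.
class SimulatedWeatherSensor:
    """Replays scripted readings as if they came from real hardware."""
    def __init__(self, script):
        self.script = iter(script)

    def read(self):
        return next(self.script)

def context_aware_advice(sensor):
    # The application only sees the read() interface; simulated and
    # real sensors are interchangeable.
    reading = sensor.read()
    if reading["raining"]:
        return "Suggest indoor exhibits"
    return "Suggest the sculpture garden"

sensor = SimulatedWeatherSensor([{"raining": True}, {"raining": False}])
print(context_aware_advice(sensor))  # behaviour under simulated rain
print(context_aware_advice(sensor))  # behaviour once the "rain" stops
```

Swapping the scripted list for a feed of values exported from the games engine would let a tester walk an avatar through arbitrary weather and locations while observing the application's responses.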
This project would be suitable for CA4 students interested in learning about context-aware information management and virtual world simulation.
Information retrieval algorithms developed for text retrieval systems, such as web search engines, can be successfully applied to various other media, including the management of spoken data and scanned images of printed documents. The objective of this project would be to investigate their application to the retrieval and analysis of music files, e.g. MIDI files or other music score representations. The aim would be to explore the effectiveness of information retrieval algorithms in melody retrieval and then to extend this work in one of several possible ways. One possible extension would be to use the retrieval methods to develop a form of automated musical analysis tool to look for similar melodies or examine the structure of a piece of music or a song. This might be further extended to develop a tool giving a graphical representation of song structure, perhaps extending further to find songs with similar high-level structures.
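One common starting point for melody retrieval can be sketched as follows: melodies (here as MIDI note numbers) are reduced to pitch-interval sequences, which makes matching transposition-invariant, and candidates are ranked by how many interval trigrams they share with the query. The toy melodies and the trigram choice are assumptions; a real system would index large collections of MIDI files and use weighted n-gram matching.

```python
# Illustrative sketch of transposition-invariant melody retrieval
# using pitch-interval n-grams (trigrams here).
def intervals(notes):
    """Convert absolute MIDI note numbers to successive pitch intervals."""
    return tuple(b - a for a, b in zip(notes, notes[1:]))

def trigrams(seq):
    return {seq[i:i + 3] for i in range(len(seq) - 2)}

def rank_melodies(query_notes, collection):
    """Rank melodies by shared interval trigrams with the query."""
    q = trigrams(intervals(query_notes))
    scored = [(name, len(q & trigrams(intervals(notes))))
              for name, notes in collection.items()]
    return sorted(scored, key=lambda x: -x[1])

collection = {
    "tune_a": [60, 62, 64, 65, 67, 65, 64, 62, 60],
    "tune_b": [60, 60, 67, 67, 69, 69, 67],
}
# The query is the opening of tune_a transposed up a tone, so its
# intervals (and hence trigrams) still match.
query = [62, 64, 66, 67, 69]
print(rank_melodies(query, collection))
```

The same trigram profiles could feed the structural-analysis extension: repeated sections of a song share interval n-grams, so comparing a song against itself highlights its internal structure.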
But what have emotions to do with computers?
Well, recent neurological evidence indicates that emotions are an essential component of human reasoning, even in apparently completely rational decision making. Furthermore, emotional expression is a natural and significant part of human interaction. Traditional human-computer interaction studies take no account of any emotional element in the interaction process. Your computer doesn't know if you are frustrated, interested, tired, rushed, or bored. If computer interfaces are to be made truly intelligent, it can be argued that they must include emotional processing. Thus your computer would be able to respond to you in a more pleasing manner by learning about you and your emotional states.
The main centre for work in Affective Computing is at the MIT Media Laboratory, which has much more information on this subject.
Affective Response in Human-Computer Interaction
Traditional models of human-computer interaction are centred on the tasks and actions which a user must perform to achieve an objective. For example, editing a document involves cutting and pasting text, which involves a sequence of keyboard and mouse actions.
A possible project in affective computing would be to explore the extension of traditional user interface design, such as task analysis and interface planning, to include emotional elements. This could lead to the implementation of a test interface and some simulated experiments with test users, or possibly to the exploration of some form of machine learning, within the interface, of the association between user actions and their current affective state.
In general, I'm open to any ideas you may have for projects in this area. I suggest you have a look at the MIT webpage for background information.
Last modified: 14th October 2008