This document is the final report for a research project aimed at producing a prototype system for on-line translation of typed dialogues between speakers of different natural languages. The work was carried out jointly by SICS and SRI Cambridge. The resulting prototype system (called the Bilingual Conversation Interpreter, or BCI) translates between English and Swedish in both directions. The major components of the BCI are two copies of the SRI Core Language Engine, equipped with English and Swedish grammars respectively. These are linked by the transfer and disambiguation components. Translation takes place by analyzing the source-language sentence into Quasi Logical Form (QLF), a linguistically motivated logical representation, transferring this into a target-language QLF, and generating a target-language sentence. When ambiguities occur that cannot be resolved automatically, they are clarified by querying the appropriate user. The clarification dialogue presupposes no knowledge of either linguistics or the other language. The prototype system has broad grammatical coverage, an initial vocabulary of about 1000 words together with vocabulary expansion tools, and a set of English-Swedish transfer rules. The formalism developed for coding this linguistic information makes it relatively easy to extend the system. We believe that the project was successful in demonstrating the feasibility of using these techniques for interactive translation applications, and provides a sound basis for development of a large-scale message translator system with potential for commercial exploitation. The main sections of this report are the following:
* A non-technical introduction, summarizing the BCI's design, and containing a sample session.
* An overview of the Swedish version of the CLE.
* A detailed discussion of the theory and practice of QLF transfer.
* A description of the interactive disambiguation method.
* Suggestions for possible follow-on projects aimed in the direction of practically usable commercial systems.
This document is an introduction to a research project aimed at producing a prototype system for on-line translation of typed dialogues between speakers of different natural languages. The work was carried out jointly by SICS and SRI Cambridge. The resulting prototype system (called the Bilingual Conversation Interpreter, or BCI) translates between English and Swedish in both directions. The major components of the BCI are two copies of the SRI Core Language Engine, equipped with English and Swedish grammars respectively. These are linked by the transfer and disambiguation components. Translation takes place by analyzing the source-language sentence into Quasi Logical Form (QLF), a linguistically motivated logical representation, transferring this into a target-language QLF, and generating a target-language sentence. We believe that the project was successful in demonstrating the feasibility of using these techniques for interactive translation applications, and provides a sound basis for development of a large-scale message translator system. The final section of the paper points to several possible follow-on projects aimed in the direction of practically usable commercial systems.
Gaming is a highly relevant application area for Intelligent Agents and Human Computer Interaction (HCI). Computer games bring us a full set of new gaming experiences where synthetic characters take on the main role. Using affective input in the interaction with a game, and in particular with a character, is a recent and fairly unexplored dimension. This video presents a study of a tangible interaction device for affective input and its use in a role-playing game where emotions are part of the game logic.
The DUMAS project develops speech-based applications that are adaptable to different users and domains. The paper describes the project's robust semantic analysis strategy, used both in the generic framework for the development of multilingual speech-based dialogue systems which is the main project goal, and in the initial test application, a mobile phone-based e-mail interface.
When developing adaptive speech-based multilingual interaction systems, we need representative data on the user's behaviour. In this paper we focus on a data collection method pertaining to adaptation in the user's interaction with the system. We describe a multi-session group scenario for Wizard of Oz studies with two novel features: firstly, instead of doing solo sessions with a static mailbox, our test users communicated with each other in a group of six, and secondly, the communication took place over several sessions in a period of five to eight days. The paper discusses our data collection studies using the method, concentrating on the usefulness of the method in terms of naturalness of the interaction and long-term developments.
The paper outlines a method for automatic lexical acquisition using three-layered back-propagation networks. Several experiments have been carried out where the performance of different network architectures has been compared on two tasks: overall part-of-speech (noun, adjective or verb) classification and classification by a set of 13 possible output categories. The best results for the simple task were obtained by networks consisting of 204-212 input neurons and 40 hidden-layer neurons, reaching a classification rate of 93.6%. The best result for the more complex task was 96.4%, which was achieved by a net with 423 input neurons and 80 hidden-layer neurons. These results are rather promising, and the paper compares them to the performance reported by rule-based and purely statistical methods; a comparison that shows the neural network approach to be fully comparable to the statistical approach. The rule-based method is, however, still better, even though it should be noted that the task that the rule-based system performs is somewhat different from that of the neural net.
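The kind of three-layered back-propagation network described above can be sketched in a few lines. This is a minimal illustration, not the paper's setup: the layer sizes, learning rate, and the toy "first feature determines the class" task are invented for the sketch, and are far smaller than the 204-423 input neurons used in the experiments.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class ThreeLayerNet:
    """Input layer, one sigmoid hidden layer, sigmoid output layer."""
    def __init__(self, n_in, n_hidden, n_out, lr=0.5):
        self.W1 = rng.normal(0, 0.5, (n_in, n_hidden))
        self.W2 = rng.normal(0, 0.5, (n_hidden, n_out))
        self.lr = lr

    def forward(self, x):
        self.h = sigmoid(x @ self.W1)       # hidden activations
        self.o = sigmoid(self.h @ self.W2)  # output activations
        return self.o

    def train_step(self, x, target):
        o = self.forward(x)
        # Standard backprop for squared error with sigmoid units.
        delta_o = (o - target) * o * (1 - o)
        delta_h = (delta_o @ self.W2.T) * self.h * (1 - self.h)
        self.W2 -= self.lr * np.outer(self.h, delta_o)
        self.W1 -= self.lr * np.outer(x, delta_h)

# Toy stand-in task: binary feature vectors whose first bit determines
# the class (imagine a morphological cue such as "ends in -ly").
X = rng.integers(0, 2, (200, 8)).astype(float)
Y = np.eye(2)[X[:, 0].astype(int)]

net = ThreeLayerNet(8, 5, 2)
for _ in range(50):
    for x, y in zip(X, Y):
        net.train_step(x, y)

acc = (np.array([net.forward(x).argmax() for x in X]) == Y.argmax(axis=1)).mean()
```

On this trivially separable toy data the net converges quickly; the paper's real input encodings of word contexts are of course much harder.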
In contrast to most recent models that generate an entire image at once, the paper introduces a new architecture for generating images one pixel at a time using a Compositional Pattern-Producing Network (CPPN) as the generator part in a Generative Adversarial Network (GAN), allowing for effective generation of visually interesting images with artistic value, at arbitrary resolutions independent of the dimensions of the training data. The architecture, as well as accompanying (hyper-) parameters, for training CPPNs using recent GAN stabilisation techniques is shown to generalise well across many standard datasets. Rather than relying on just a latent noise vector (entangling various features with each other), mutual information maximisation is utilised to get disentangled representations, removing the requirement to use labelled data and giving the user control over the generated images. A web application for interacting with pre-trained models was also created, unique in the offered level of interactivity with an image-generating GAN.
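The key property of a CPPN, pixel-by-pixel generation at arbitrary resolution, follows from the image being a function of pixel coordinates. The sketch below illustrates only that property, with an untrained random network; in the architecture described above, the weights would come from GAN training, and the network would be fed a latent/information vector as well.

```python
import numpy as np

# Toy CPPN: map each pixel's (x, y, radius) coordinates to an intensity.
# Because the image is a function of coordinates, the same network can be
# rendered at any resolution simply by evaluating a denser grid.
rng = np.random.default_rng(42)
W1 = rng.normal(0, 1.5, (3, 16))
W2 = rng.normal(0, 1.5, (16, 16))
W3 = rng.normal(0, 1.5, (16, 1))

def cppn_image(size):
    xs = np.linspace(-1, 1, size)
    x, y = np.meshgrid(xs, xs)
    r = np.sqrt(x**2 + y**2)                       # distance from centre
    coords = np.stack([x, y, r], axis=-1).reshape(-1, 3)
    h = np.tanh(coords @ W1)
    h = np.tanh(h @ W2)
    out = 1.0 / (1.0 + np.exp(-(h @ W3)))          # intensities in (0, 1)
    return out.reshape(size, size)

img_small = cppn_image(32)    # 32x32 render
img_large = cppn_image(256)   # the same "image", rendered at 256x256
```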
The paper addresses using artificial neural networks for classification of Amharic news items. Amharic is the language for countrywide communication in Ethiopia and has its own writing system containing extensive systematic redundancy. It is quite dialectally diversified and probably representative of the languages of a continent that so far has received little attention within the language processing field. The experiments investigated document clustering around user queries using Self-Organizing Maps, an unsupervised learning neural network strategy. The best ANN model showed a precision of 60.0% when trying to cluster unseen data, and a 69.5% precision when trying to classify it.
In his novel "The Hitch Hiker's Guide to the Galaxy", Douglas Adams describes the Babel fish, a fish which you are supposed to stick into your ear. It then works as a universal interpreter and can take utterances spoken in one language and instantly translate them into speech in some other language. What do we need to do in order to be able to build a device with the same general functionality?
The essay describes some of the main problems which meet us when trying to process human language on a computer. The overall approach is to look at what we would need to do in order to be able to build a device with the same general functionality as Douglas Adams' Babel fish. That is, a device which can take utterances spoken in one language and instantly translate them into speech in some other language.
The paper describes S-VEX, the lexical acquisition component of the Swedish Core Language Engine (S-CLE). The S-CLE is a general purpose natural language processing system for Swedish developed from its English counterpart, the SRI Core Language Engine. In parallel with the development of the S-CLE, a Swedish version of the English VEX (Vocabulary EXpander) system was designed. S-VEX allows for the creation of lexicon entries by users with knowledge of an application domain but not of linguistics or of the detailed workings of the system. The approach taken is based on eliciting grammaticality judgments and information about inflected forms interactively from the user. The S-VEX system and the lexicon of the S-CLE are described, as well as the problems of the specific lexical acquisition task and their solutions.
Semantic Morphology addresses the problem of designing the rules needed for mapping between the semantic lexicon and semantic grammar. The text discusses the relation between semantics, lexicon, and morphology in unification-based grammars and builds on the current trends in Computational Semantics to use underspecification and compositionality. The approach to Semantic Morphology advocated here assumes compositional word formation from (semantic) word roots and affixes that are given their own entries in the semantic lexicon. Different feature usages are then utilized to reach the intended surface word-form matches, with the correct feature settings.
The implementation of a unification-based lexicon is discussed as well as the morphological rules needed for mapping between the lexicon and grammar. It is shown how different feature usages can be utilised in the implementation to reach the intended surface word-form matches, with the correct feature settings. A novelty is the way features are used as binary signals in the compositional morphology. Finally, the lexicon coverage of the closed-class words and of the different types of open-class words is evaluated.
The DUMAS project constructs a generic framework for the development of multilingual speech-based dialogue systems. As an initial test of the generic framework we will build a mobile phone-based e-mail interface whose functionality can be adapted to different users, different situations and tasks. The paper describes the semantic processing which we envision will be needed in that type of system, for the actual e-mail system commands on the one hand, and for extracting knowledge from the e-mails as such on the other.
The lack of persons trained in computational linguistic methods is a severe obstacle to making the Internet and computers accessible to people all over the world in their own languages. The paper discusses the experiences of designing and teaching an introductory course in Natural Language Processing to graduate computer science students at Addis Ababa University, Ethiopia, in order to initiate the education of computational linguists in the Horn of Africa region.
The paper describes how the Swedish Core Language Engine (S-CLE), a general-purpose natural-language processing system, was extended to process questions posed to a Prolog database. Previous work on the S-CLE has included processing up to the level of quasi-logical form (QLF); here we address the problems encountered when extending that processing to ``pure'' logical form (LF) and translating the logical forms into questions to the SNACK-85 regional database. We also show how some natural-language answers were generated from the original question-QLFs and the answers obtained from the database.
Traditionally, the level of reusability of language processing resources within the research community has been very low. Most of the recycling of linguistic resources has been concerned with reuse of data, e.g., corpora, lexica, and grammars, while the algorithmic resources far too seldom have been shared between different projects and institutions. As a consequence, researchers who are willing to reuse somebody else's processing components have been forced to invest major efforts into issues of integration, inter-process communication, and interface design. In this paper, we discuss the experiences drawn from the SVENSK project regarding the issues of reusability of language engineering software, as well as some of the challenges for the research community which are prompted by them. Their main characteristics can be laid out along three dimensions: technical/software challenges, linguistic challenges, and `political' challenges. In the end, the unavoidable conclusion is that it definitely is time to bring more aspects of engineering into the Computational Linguistics community!
The paper describes a set of experiments involving the application of three state-of-the-art part-of-speech taggers to Ethiopian Amharic, using three different tagsets. The taggers showed worse performance than previously reported results for English, in particular having problems with unknown words. The best results were obtained using a Maximum Entropy approach, while HMM-based and SVM-based taggers got comparable results.
Active learning techniques were employed for classification of dialogue acts over two dialogue corpora, the English human-human Switchboard corpus and the Spanish human-machine Dihana corpus. It is shown clearly that active learning improves on a baseline obtained through a passive learning approach to tagging the same data sets. An error reduction of 7% was obtained on Switchboard, while a factor 5 reduction in the amount of labeled data needed for classification was achieved on Dihana. The passive Support Vector Machine learner used as baseline in itself significantly improves the state of the art in dialogue act classification on both corpora. On Switchboard it gives a 31% error reduction compared to the previously best reported result.
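The general pool-based active learning scheme behind such experiments is simple: repeatedly train on the labelled set, then query the label of the pool example the current model is least certain about. The sketch below uses an invented toy classifier and toy two-class data standing in for the dialogue-act setup; it is not the paper's SVM configuration.

```python
import numpy as np

rng = np.random.default_rng(1)

class NearestCentroid:
    """Tiny probabilistic classifier used as a stand-in learner."""
    def fit(self, X, y):
        self.classes = np.unique(y)
        self.centroids = np.array([X[y == c].mean(axis=0) for c in self.classes])
        return self

    def predict_proba(self, X):
        d = np.linalg.norm(X[:, None, :] - self.centroids[None, :, :], axis=2)
        inv = 1.0 / (d + 1e-9)
        return inv / inv.sum(axis=1, keepdims=True)

# Toy pool: two Gaussian blobs standing in for two dialogue-act classes.
X_pool = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(4, 1, (100, 2))])
y_pool = np.array([0] * 100 + [1] * 100)

labelled = [0, 1, 100, 101]          # small seed set covering both classes
for _ in range(20):                  # query budget
    clf = NearestCentroid().fit(X_pool[labelled], y_pool[labelled])
    proba = clf.predict_proba(X_pool)
    margin = np.abs(proba[:, 0] - proba[:, 1])   # small margin = uncertain
    margin[labelled] = np.inf                    # never re-query a label
    labelled.append(int(margin.argmin()))        # ask the "oracle" for it

clf = NearestCentroid().fit(X_pool[labelled], y_pool[labelled])
acc = (clf.predict_proba(X_pool).argmax(axis=1) == y_pool).mean()
```

The point of the scheme is that the 24 labels spent here are chosen where they are most informative, which is how a factor-of-5 reduction in labelling effort, as reported above, becomes possible.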
We argue that bidding in the game of Contract Bridge can profitably be regarded as a micro-world suitable for experimenting with pragmatics. We sketch an analysis in which a "bidding system" is treated as the semantics of an artificial language, and show how this "language", despite its apparent simplicity, is capable of supporting a wide variety of common speech acts parallel to those in natural languages; we also argue that the reason for the relatively unsuccessful nature of previous attempts to write strong Bridge playing programs has been their failure to address the need to reason explicitly about knowledge, pragmatics, probabilities and plans. We give an overview of Pragma, a system currently under development at SICS, which embodies these ideas in concrete form, using a combination of rule-based inference, stochastic simulation, and "neural-net" learning. Examples are given illustrating the functionality of the system in its current form.
The paper describes a Swedish-language customization (S-CLE) of the SRI Core Language Engine, which has been developed at SICS from the original English-language version by replacing English-specific modules with corresponding Swedish-language versions. The S-CLE is intended to be used as a building block in a broad range of applications, such as database query systems, machine translation systems, NL front-ends, speech-to-text/text-to-speech systems, and so on. Examples of the first two types of application already exist. The main part of the S-CLE is an extensive Swedish grammar that is compiled into parsing and generation modules. The grammar formalism is a type of unification grammar loosely based on Generalized Phrase Structure Grammar (GPSG). Generation is performed using the Semantic-Head-Driven algorithm. Analysis turns sentences into ``Quasi-Logical Form'' (QLF), a logical-form representation, while generation works in the opposite direction. Intermediate stages include processing of morphology, syntax and semantics. For knowledge-base applications, a separate module can convert QLFs into conventional scoped logical forms. After two-and-a-half years of work (approximately 45 person months), the first prototype system has a vocabulary of about 1900 words and covers a fairly broad range of possible grammatical constructions. Based on our experience in this project, we present in this paper detailed arguments to support the claim that customization of an English-language NLP system is a highly cost-effective way of constructing Swedish language systems with corresponding functionality.
The paper describes the Swedish involvement in the EU project DUMAS (Dynamic Universal Mobility for Adaptive Speech Interfaces), a project which aims at developing multilingual speech-based applications, and more specifically, investigating adaptive multilingual interaction techniques to handle both spoken and text input and to provide coordinated linguistic responses to the user. The project has a clear focus on Northern Europe with two of the eight partners coming from Sweden and four from Finland; and the languages we aim at treating are English, Swedish and Finnish. We will construct an agent-based generic framework for multilingual speech applications, supporting adaptivity to both the individual user and the particular domain. Applications based on the general architecture will benefit from the advantages of fault-tolerant semantic analysis, which combined with the dialogue management routines will handle user interaction in a very robust manner. As an initial such application, we are building a mobile phone-based e-mail interface that will deal with multilingual issues in several forms and environments, and whose functionality can be adapted to different users, different situations and tasks. Such a system produces speech output only (in the form of spoken responses and read e-mails) to the user, but gets two types of input: user speech and textual e-mail messages. It must be able to distinguish between languages, both in e-mails and in the user utterances. The contents of a user's inbox must be continuously analysed in order to enable advanced search functions.
Sentiment analysis is a contextual analysis of text, identifying the social sentiment to better understand the source material. The article addresses sentiment analysis of an English-Hindi and English-Bengali code-mixed textual corpus collected from social media. Code-mixing is an amalgamation of multiple languages, which previously was mainly associated with spoken language. However, social media users also deploy it to communicate in ways that tend to be somewhat casual. The coarse nature of social media text poses challenges for many language processing applications. Here, the focus is on the low predictive nature of traditional machine learners when compared to Deep Learning counterparts, including the contextual language representation model BERT (Bidirectional Encoder Representations from Transformers), on the task of extracting user sentiment from code-mixed texts. Three deep learners (a BiLSTM CNN, a Double BiLSTM and an Attention-based model) attained accuracy 20-60% greater than traditional approaches on code-mixed data, and were for comparison also tested on monolingual English data.
Information access tasks need flexible text understanding. While full text understanding remains a distant and possibly unattainable goal, to deliver better information access performance we must advance content analysis beyond the simple algorithms used today -- and the dynamic nature of both information needs and information sources will make a flexible model or set of models a necessity. Models must either be adaptive or easily adapted by some form of low-cost intervention, and they must support incremental knowledge build-up. The first requirement involves acquisition of information from unstructured data; the second involves defining an inspectable and transparent model and developing an understanding of knowledge-intensive interaction. Text understanding needs a theory. Knowledge modeling, semantics, and ontology construction are areas marked by the absence of significant consensus, either in points of theory or scope of application. Even the terminology and success criteria of these somewhat overlapping fields are fragmented. Some approaches to content modeling lay claim to psychological realism, others to inspectability; some are portable, others transparent; some are robust, others logically sound; some efficient, others scalable. Information access tasks give focus to modeling. It is too much to hope for a set of standards to emerge from the intellectually fairly volatile and fragmented area of semantics or cognitive modeling. But in our application areas -- namely, those in the general field of information access -- external success criteria are better established, and it is worth compromising on theoretical underpinnings in the name of performance.
This document is the first-year report for a project whose long-term goal is the construction of a practically useful system capable of translating continuous spoken language within a restricted domain. The main deliverable resulting from the first year is a prototype, the Spoken Language Translator (SLT), which can translate queries from spoken English to spoken Swedish in the domain of air travel planning. The system was developed by SRI International, the Swedish Institute of Computer Science, and Telia Research AB. Most of it is constructed from previously existing pieces of software, which have been adapted for use in the speech translation task with as few changes as possible. The main components are connected together in a pipelined sequence as follows. The input signal is processed by SRI's DECIPHER(TM), a speaker-independent continuous speech recognition system. It produces a set of speech hypotheses which is passed to the English-language processor, the SRI Core Language Engine (CLE), a general natural-language processing system. The CLE grammar associates each speech hypothesis with a set of possible logical-form-like representations, typically producing 5 to 50 logical forms per hypothesis. A preference component is then used to give each of them a numerical score reflecting its linguistic plausibility. When the preference component has made its choice, the highest-scoring logical form is passed to the transfer component, which uses a set of simple non-deterministic recursive pattern-matching rules to rewrite it into a set of possible corresponding Swedish representations. The preference component is now invoked again, to select the most plausible transferred logical form. The result is fed to a second copy of the CLE, which uses a Swedish-language grammar and lexicon developed at SICS to convert the form into a Swedish string and an associated syntax tree.
Finally, the string and tree are passed to the Telia Prophon speech synthesizer, which utilizes polyphone synthesis to produce the spoken Swedish utterance. The system's current performance figures, measured on previously unseen test data, are as follows. For sentences of length 12 words and under, 65% of all utterances are such that the top-scoring speech hypothesis is an acceptable one. If the speech hypothesis is correct, then a translation is produced in 80% of the cases; and 90% of all translations produced are acceptable. Nearly all incorrect translations are incorrect due to their containing errors in grammar or naturalness of expression, with errors due to divergence in meaning between the source and target sentences accounting for less than 1% of all translations. Making fairly conservative extrapolations from the current SLT prototype, we believe that simply continuing the basic development strategy could within three to five years produce an enhanced version, which recognized about 90% of the short sentences (12 words or less) in a specific domain, and produced acceptable translations for about 95-97% of the sentences correctly recognized. Since the greater part of the system's knowledge would reside in domain-independent grammars and lexicons, it would be possible to port it to new domains with a fairly modest expenditure of effort.
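The pipelined sequence described above can be summarised as a chain of stages, each consuming the previous stage's output. The sketch below is purely skeletal: every function is a placeholder with an invented example query, standing in for the real components (DECIPHER, the two CLE copies, the transfer rules, and the Prophon synthesizer), which are of course vastly more complex.

```python
def recognise(audio):
    # Speech recogniser: audio -> ranked list of speech hypotheses.
    return ["show me flights to stockholm"]

def analyse(hypothesis):
    # Source-language CLE: hypothesis -> candidate logical forms.
    return [("show_flights", {"dest": "stockholm"})]

def score(logical_forms):
    # Preference component: pick the most plausible candidate.
    # (Here trivially the first; the real component assigns numeric scores.)
    return logical_forms[0]

def transfer(lf):
    # Non-deterministic transfer rules: source LF -> target LF candidates.
    predicate, args = lf
    return [("visa_flyg", {"dest": args["dest"]})]

def generate(lf):
    # Target-language CLE: target LF -> Swedish string (and syntax tree,
    # omitted here), which would then go to the speech synthesizer.
    predicate, args = lf
    return f"visa flyg till {args['dest']}"

def translate(audio):
    hypotheses = recognise(audio)
    lf = score(analyse(hypotheses[0]))
    target_lf = score(transfer(lf))
    return generate(target_lf)

result = translate(b"...")  # placeholder audio input
```

Note how the preference component is invoked twice, once after analysis and once after transfer, mirroring the description above.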
The paper describes an experiment on a set of translated sentences obtained from a large group of informants. We discuss the question of transfer equivalence, noting that several target-language translations of a given source-language sentence will be more or less equivalent. Different equivalence classes should form clusters in the set of translated sentences. The main topic of the paper is to examine how these clusters can be found: we consider --- and discard as inappropriate --- several different methods of examining the sentence set, including traditional syntactic analysis, finding the most likely translation with statistical methods, and simple string distance measures.
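One of the methods mentioned above, clustering by a simple string distance, can be sketched as Levenshtein distance combined with greedy single-link clustering. The threshold and the example sentences below are invented for illustration; the paper's point is precisely that such surface measures are too crude on their own.

```python
def levenshtein(a, b):
    # Classic dynamic-programming edit distance (two-row variant).
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                # deletion
                           cur[j - 1] + 1,             # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def cluster(sentences, max_dist):
    # Greedy single-link clustering: join a sentence to the first cluster
    # containing a near-enough member, otherwise start a new cluster.
    clusters = []
    for s in sentences:
        for c in clusters:
            if any(levenshtein(s, t) <= max_dist for t in c):
                c.append(s)
                break
        else:
            clusters.append([s])
    return clusters

translations = [
    "the cat sat on the mat",
    "the cat sat on a mat",
    "a feline rested upon the rug",
]
groups = cluster(translations, max_dist=3)
```

The first two translations end up in one cluster, but the third, equivalent in meaning yet lexically distant, is left on its own, illustrating why string distance alone fails to capture transfer equivalence.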
A community in social networks is generally assumed to be composed of a group of individuals with similar characteristics. Although there has been a plethora of work on understanding network topologies (edge density, clustering coefficient, etc.) within an online community, the psycho-sociological composition of social network communities has hardly been studied. The present paper aims to analyse communities as compositions of induced psycholinguistic and sociolinguistic variables (Personalities, Values and Ethics) across individuals in social media networks. The motivation behind this analysis is to understand the behavioural characteristics at the individual as well as the societal level in social networks. To this end, three studies were carried out on six different datasets: three Twitter corpora, two Facebook corpora, and an Essay corpus, annotated with Values and Ethics of the users. The first study created automatic models to determine the Personality and Values of individuals by analysing their language usage and social media behaviour. The second examined the characteristics, or blend of characteristics, of individuals within an online community. The third generated a map of values and ethics for India, a multi-lingual and multi-cultural country. Striking similarities to general intuitive perception could be observed, i.e., the results obtained in the study resemble our general perception of the cities and towns of India.
To find out how users' social media behaviour and language are related to their ethical practices, the paper investigates applying Schwartz' psycholinguistic model of societal sentiment to social media text. The analysis is based on corpora collected from user essays as well as social media (Facebook and Twitter). Several experiments were carried out on the corpora to classify the ethical values of users, incorporating Linguistic Inquiry Word Count analysis, n-grams, topic models, psycholinguistic lexica, speech-acts, and nonlinguistic information, while applying a range of machine learners (Support Vector Machines, Logistic Regression, and Random Forests) to identify the best linguistic and non-linguistic features for automatic classification of values and ethics.
Previous work on music generation and transformation has commonly targeted single instrument or single melody music. Here, in contrast, five music genres are used with the goal to achieve selective remixing by using domain transfer methods on spectrogram images of music. A pipeline architecture comprised of two independent generative adversarial network models was created. The first applies features from one of the genres to constant-Q transform spectrogram images to perform style transfer. The second network turns a spectrogram into a real-value tensor representation which is approximately reconstructed back into audio. The system was evaluated experimentally and through a survey. Due to the increased complexity involved in processing high sample rate music with homophonic or polyphonic audio textures, the system’s audio output was considered to be low quality, but the style transfer produced noticeable selective remixing on most of the music tracks evaluated.
The paper discusses the lessons we have learned from the work on building a reusable toolset for Swedish within the framework of GATE, the General Architecture for Text Engineering, from the University of Sheffield, UK. We describe our toolbox SVENSK and the reasons behind the choices made in the design, as well as the overall conclusions for language processing toolbox design which can be drawn.
The SVENSK project is developing an integrated toolbox of language processing components and resources for Swedish. SVENSK employs GATE, the General Architecture for Text Engineering from the University of Sheffield, as a platform in which the components are to be integrated. The goal is that the resources included in SVENSK should be freely available for noncommercial use. A wide range of different modules have been incorporated so far: in-house modules, commercially available modules, and modules from academia. The results of the integration of the modules in the GATE environment are very encouraging: it is possible to mix modules from different sources, written in programming languages from completely different paradigms, and have them interact with each other, thus maintaining a high degree of reuse of algorithmic resources. However, the use of Tcl/Tk and the associated API for processing structurally relatively complex data is time consuming and considerably slows the processing in GATE.
The paper discusses an Amharic speaker-independent continuous speech recognizer based on an HMM/ANN hybrid approach. The model was constructed at the sub-word level using context-dependent phone parts, with the help of the CSLU Toolkit. Promising results of 74.28% word and 39.70% sentence recognition rates were achieved. These are the best figures reported so far for speech recognition for the Amharic language.
The paper introduces a Mobile Companion prototype, which helps users to plan and keep track of their exercise activities via an interface based mainly on speech input and output. The Mobile Companion runs on a PDA and is based on a stand-alone, speaker-independent solution, making it fairly unique among mobile spoken dialogue systems, where the common solution is to run the ASR on a separate server or to restrict the speech input to some specific set of users. The prototype uses a GPS receiver to collect position, distance and speed data while the user is exercising, and allows the data to be compared to previous exercises. It communicates over the mobile network with a stationary system, placed in the user’s home. This allows plans for exercise activities to be downloaded from the stationary to the mobile system, and exercise result data to be uploaded once an exercise has been completed.
Multimodal conversational spoken dialogues using physical and virtual agents provide a potential interface to motivate and support users in the domain of health and fitness. The paper presents a multimodal conversational Companion system focused on health and fitness, which has both a stationary and a mobile component.
Work on Abusive Language Detection has tackled a wide range of subtasks and domains. As a result of this, there exists a great deal of redundancy and non-generalisability between datasets. Through experiments on cross-dataset training and testing, the paper reveals that the preconceived notion of including more non-abusive samples in a dataset (to emulate reality) may have a detrimental effect on the generalisability of a model trained on that data. Hence a hierarchical annotation model is utilised here to reveal redundancies in existing datasets and to help reduce redundancy in future efforts.
Multimodal conversational spoken dialogues using physical and virtual agents provide a potential interface to motivate and support users in the domain of health and fitness. In this paper we present how such multimodal conversational Companions can be implemented to support their owners in various pervasive and mobile settings. In particular, we focus on different forms of multimodality and system architectures for such interfaces.