  • 1.
    Abrahamsson, Henrik
    RISE, Swedish ICT, SICS. School of Innovation, Design and Engineering.
    Network overload avoidance by traffic engineering and content caching (2012). Doctoral thesis, monograph (Other academic)
    Abstract [en]

    The Internet traffic volume continues to grow at a great rate, now driven by video and TV distribution. For network operators it is important to avoid congestion in the network and to meet service level agreements with their customers. This thesis presents work on two methods operators can use to reduce link loads in their networks: traffic engineering and content caching. The thesis studies access patterns for TV and video and the potential for caching. The investigation is done both by simulation and by analysis of logs from a large TV-on-Demand system over four months. The results show that a small set of programs accounts for a large fraction of the requests and that a comparatively small local cache can be used to significantly reduce the peak link loads during prime time. The investigation also demonstrates how the popularity of programs changes over time and shows that the access pattern in a TV-on-Demand system depends strongly on the content type. For traffic engineering the objective is to avoid congestion in the network and to make better use of available resources by adapting the routing to the current traffic situation. The main challenge for traffic engineering in IP networks is to cope with the dynamics of Internet traffic demands. This thesis proposes L-balanced routing, which routes traffic on the shortest paths possible while ensuring that no link is utilised above a given level L. L-balanced routing gives efficient routing of traffic and controlled spare capacity to handle unpredictable changes in traffic. We present an L-balanced routing algorithm and a heuristic search method for finding L-balanced weight settings for the legacy routing protocols OSPF and IS-IS. We show that the search and the resulting weight settings work well in real network scenarios.
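
    To make the caching claim concrete, here is a small, hypothetical simulation (mine, not the thesis'): requests follow a Zipf-like popularity distribution, and a local LRU cache holding just 2% of the catalogue absorbs a large share of requests, which is what keeps peak load off the backhaul link. All parameters are assumptions for illustration.

```python
import random
from collections import OrderedDict

CATALOGUE = 10_000   # distinct programs (assumed)
CACHE_SIZE = 200     # local cache: 2% of the catalogue
N_REQUESTS = 100_000
ZIPF_S = 1.0         # popularity skew (assumed)

# popularity of the program at rank r is proportional to 1/r^s
weights = [1.0 / (r ** ZIPF_S) for r in range(1, CATALOGUE + 1)]
requests = random.choices(range(CATALOGUE), weights=weights, k=N_REQUESTS)

cache = OrderedDict()   # OrderedDict as an LRU: most recent entry last
hits = 0
for program in requests:
    if program in cache:
        hits += 1
        cache.move_to_end(program)     # refresh recency on a hit
    else:
        cache[program] = True
        if len(cache) > CACHE_SIZE:
            cache.popitem(last=False)  # evict the least recently used

print(f"requests served from the local cache: {hits / N_REQUESTS:.0%}")
```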

  • 2.
    Ahlgren, Bengt
    RISE, Swedish ICT, SICS, Decisions, Networks and Analytics lab.
    Improving computer communication performance by reducing memory bandwidth consumption (1997). Doctoral thesis, monograph (Other academic)
  • 3.
    Al-Shishtawy, Ahmad
    RISE, Swedish ICT, SICS, Computer Systems Laboratory.
    Self-management for large-scale distributed systems (2012). Doctoral thesis, monograph (Other academic)
    Abstract [en]

    Autonomic computing aims at making computing systems self-managing by using autonomic managers in order to reduce obstacles caused by management complexity. This thesis presents results of research on self-management for large-scale distributed systems. This research was motivated by the increasing complexity of computing systems and their management. In the first part, we present our platform, called Niche, for programming self-managing component-based distributed applications. In our work on Niche, we have faced and addressed the following four challenges in achieving self-management in a dynamic environment characterized by volatile resources and high churn: resource discovery, robust and efficient sensing and actuation, management bottleneck, and scale. We present results of our research on addressing the above challenges. Niche implements the autonomic computing architecture, proposed by IBM, in a fully decentralized way. Niche supports a network-transparent view of the system architecture, simplifying the design of distributed self-management. Niche provides a concise and expressive API for self-management. The implementation of the platform relies on the scalability and robustness of structured overlay networks. We proceed by presenting a methodology for designing the management part of a distributed self-managing application. We define design steps that include partitioning of management functions and orchestration of multiple autonomic managers. In the second part, we discuss robustness of management and data consistency, which are necessary in a distributed system. Dealing with the effect of churn on management increases the complexity of the management logic and thus makes its development time-consuming and error-prone. We propose the abstraction of Robust Management Elements, which are able to heal themselves under continuous churn. Our approach is based on replicating a management element using finite state machine replication with a reconfigurable replica set. Our algorithm automates the reconfiguration (migration) of the replica set in order to tolerate continuous churn. For data consistency, we propose a majority-based distributed key-value store supporting multiple consistency levels that is based on a peer-to-peer network. The store enables a tradeoff between high availability and data consistency. Using majorities avoids potential drawbacks of master-based consistency control, namely a single point of failure and a potential performance bottleneck. In the third part, we investigate self-management for Cloud-based storage systems, focusing on elasticity control using elements of control theory and machine learning. We have conducted research on a number of different designs of an elasticity controller, including a state-space feedback controller and a controller that combines feedback and feedforward control. We describe our experience in designing an elasticity controller for a Cloud-based key-value store using a state-space model that enables trading off performance for cost. We describe the steps in designing an elasticity controller. We continue by presenting the design and evaluation of ElastMan, an elasticity controller for Cloud-based elastic key-value stores that combines feedforward and feedback control.
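
    The majority-based consistency idea can be sketched as follows (an illustration of the general quorum technique, not the Niche or ElastMan code): a write commits once a majority of replicas acknowledge it, and a read consults a majority and takes the highest-versioned value, so every read quorum overlaps every write quorum.

```python
class Replica:
    def __init__(self):
        self.version, self.value = 0, None

def write(replicas, value, version):
    # in reality parallel RPCs; the write commits once a majority acks
    acks = 0
    for r in replicas:
        if version > r.version:
            r.version, r.value = version, value
        acks += 1
        if acks > len(replicas) // 2:
            return True          # committed: a majority stores the write
    return False

def read(replicas):
    # ask any majority and take the freshest value; any two majorities
    # intersect, so the latest committed write is always observed
    quorum = replicas[: len(replicas) // 2 + 1]
    freshest = max(quorum, key=lambda r: r.version)
    return freshest.value, freshest.version

replicas = [Replica() for _ in range(5)]
write(replicas, "v1", version=1)
print(read(replicas))            # ('v1', 1)
```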

  • 4.
    Arad, Cosmin
    RISE, Swedish ICT, SICS. School of Information and Communication Technology.
    Programming Model and Protocols for Reconfigurable Distributed Systems (2013). Doctoral thesis, monograph (Other academic)
    Abstract [en]

    Distributed systems are everywhere. From large datacenters to mobile devices, an ever richer assortment of applications and services relies on distributed systems, infrastructure, and protocols. Despite their ubiquity, testing and debugging distributed systems remains notoriously hard. Moreover, aside from inherent design challenges posed by partial failure, concurrency, or asynchrony, there remain significant challenges in the implementation of distributed systems. These programming challenges stem from the increasing complexity of the concurrent activities and reactive behaviors in a distributed system on the one hand, and the need to effectively leverage the parallelism offered by modern multi-core hardware, on the other hand. This thesis contributes Kompics, a programming model designed to alleviate some of these challenges. Kompics is a component model and programming framework for building distributed systems by composing message-passing concurrent components. Systems built with Kompics leverage multi-core machines out of the box, and they can be dynamically reconfigured to support hot software upgrades. A simulation framework enables deterministic execution replay for debugging, testing, and reproducible behavior evaluation for large-scale Kompics distributed systems. The same system code is used for both simulation and production deployment, greatly simplifying the system development, testing, and debugging cycle. We highlight the architectural patterns and abstractions facilitated by Kompics through a case study of a non-trivial distributed key-value storage system. CATS is a scalable, fault-tolerant, elastic, and self-managing key-value store which trades off service availability for guarantees of atomic data consistency and tolerance to network partitions. We present the composition architecture for the numerous protocols employed by the CATS system, as well as our methodology for testing the correctness of key CATS algorithms using the Kompics simulation framework. Results from a comprehensive performance evaluation attest that CATS achieves its claimed properties and delivers a level of performance competitive with similar systems which provide only weaker consistency guarantees. More importantly, this testifies that Kompics admits efficient system implementations. Its use as a teaching framework, as well as its use for rapid prototyping, development, and evaluation of a myriad of scalable distributed systems both within and outside our research group, confirms the practicality of Kompics.
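
    The component-model idea can be conveyed in miniature. The sketch below is a toy Python rendering (Kompics itself is a Java framework; all names here are invented) of message-passing components whose handlers are driven by one scheduler; running handlers one at a time is what makes deterministic simulated execution possible with unchanged component code.

```python
from collections import deque

class Component:
    def __init__(self):
        self.inbox = deque()
        self.handlers = {}                  # event type -> handler

def trigger(component, event):
    component.inbox.append(event)           # asynchronous delivery

def run(components):
    # Sequential scheduler: one handler at a time gives deterministic
    # replay, the property a simulation framework can exploit.
    while any(c.inbox for c in components):
        for c in list(components):
            while c.inbox:
                event = c.inbox.popleft()
                c.handlers[type(event)](event)

class Ping: pass
class Pong: pass

pinger, ponger = Component(), Component()
pinger.handlers[Pong] = lambda e: print("pinger: got pong")
ponger.handlers[Ping] = lambda e: trigger(pinger, Pong())

trigger(ponger, Ping())
run([pinger, ponger])   # prints "pinger: got pong"
```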

  • 5.
    Ardelius, John
    RISE, Swedish ICT, SICS. KTH Royal Institute of Technology, Sweden.
    On the Performance Analysis of Large Scale, Dynamic, Distributed and Parallel Systems (2013). Doctoral thesis, monograph (Other academic)
    Abstract [en]

    Evaluating the performance of large distributed applications is an important and non-trivial task. With the onset of Internet-wide applications there is an increasing need to quantify reliability, dependability and performance of these systems, both as a guide in system design and as a means to understand the fundamental properties of large-scale distributed systems. Previous research has mainly focused either on formalised models, where system properties can be deduced and verified using rigorous mathematics, or on measurements and experiments on deployed applications. Our aim in this thesis is to study models on an abstraction level lying between the two ends of this spectrum. We adopt a model of distributed systems inspired by methods used in the study of large-scale systems of particles in physics, and model the application nodes as a set of interacting particles, each with an internal state, whose actions are specified by the application program. We apply our modeling and performance evaluation methodology to four different distributed and parallel systems. The first system is the distributed hash table (DHT) Chord running in a dynamic environment. We study the system under two scenarios. First we study how performance (in terms of lookup latency) is affected on a network with finite communication latency. We show that an average delay in conjunction with other parameters describing changes in the network (such as timescales for network repair and join and leave processes) induces fundamentally different system performance. We also verify our analytical predictions via simulations. In the second scenario we introduce network address translators (NATs) to the network model. This makes the overlay topology non-transitive, and we explore the implications of this fact for various performance metrics such as lookup latency, consistency and load balance. The latter analysis is mainly simulation-based. Even though these two studies focus on a specific DHT, many of our results can easily be translated to other similar ring-based DHTs with long-range links, and the same methodology can be applied even to DHTs based on other geometries. The second type of system studied is an unstructured gossip protocol running a distributed version of the well-known Bellman-Ford algorithm. The algorithm, called GAP, generates a spanning tree over the participating nodes, and the question we set out to study is how reliable this structure is (in terms of generating accurate aggregate values at the root) in the presence of node churn. All our analytical results are also verified using simulations. The third system studied is a content distribution network (CDN) of interconnected caches in an aggregation access network. In this model, content which sits at the leaves of the cache hierarchy tree is requested by end users. Requests can then either be served by the first cache level or sent further up the tree. We study the performance of the whole system under two cache eviction policies, namely LRU and LFU. We compare our analytical results with traces from related caching systems. The last system is a work stealing heuristic for task distribution on the TileraPro64 chip. This system has access to a shared memory and is therefore classified as a parallel system. We create a model for the dynamic generation of tasks as well as how they are executed and distributed among the participating nodes. We study how the heuristic scales when the number of nodes exceeds the number of processors on the chip, as well as how different work stealing policies compare with each other. The work on this model is mainly simulation-based.
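
    The Chord scenario can be illustrated with a toy model (my own sketch of standard greedy finger routing on a static ring, not the thesis' analytical model), which reproduces the familiar hop count of roughly 0.5*log2(N):

```python
import math, random

M = 16                       # identifier bits: positions 0 .. 2**M - 1
N = 1024                     # number of nodes (static: no churn modelled)
ring = sorted(random.sample(range(2 ** M), N))

def successor(ident):
    # first node at or clockwise after `ident` on the ring
    for n in ring:
        if n >= ident:
            return n
    return ring[0]           # wrap around

# finger i of a node points to successor(node + 2**i)
fingers = {n: [successor((n + 2 ** i) % 2 ** M) for i in range(M)]
           for n in ring}

def lookup_hops(start, key):
    node, hops, target = start, 0, successor(key)
    while node != target:
        # greedy routing: furthest finger that does not overshoot the key
        ahead = [f for f in fingers[node]
                 if 0 < (f - node) % 2 ** M <= (key - node) % 2 ** M]
        node = (max(ahead, key=lambda f: (f - node) % 2 ** M)
                if ahead else fingers[node][0])   # fall back to successor
        hops += 1
    return hops

samples = [lookup_hops(random.choice(ring), random.randrange(2 ** M))
           for _ in range(2000)]
print(f"average hops: {sum(samples) / len(samples):.2f}, "
      f"0.5*log2(N) = {0.5 * math.log2(N):.2f}")
```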

  • 6.
    Armstrong, Joe
    RISE, Swedish ICT, SICS.
    Making reliable distributed systems in the presence of software errors (2003). Doctoral thesis, monograph (Other academic)
    Abstract [en]

    The work described in this thesis is the result of a research program started in 1981 to find better ways of programming Telecom applications. These applications are large programs which, despite careful testing, will probably contain many errors when the program is put into service. We assume that such programs do contain errors, and investigate methods for building reliable systems despite such errors. The research has resulted in the development of a new programming language (called Erlang), together with a design methodology and a set of libraries for building robust systems (called OTP). At the time of writing, the technology described here is used in a number of major Ericsson and Nortel products. A number of small companies have also been formed which exploit the technology. The central problem addressed by this thesis is the problem of constructing reliable systems from programs which may themselves contain errors. Constructing such systems imposes a number of requirements on any programming language that is to be used for the construction. I discuss these language requirements, and show how they are satisfied by Erlang. Problems can be solved in a programming language, or in the standard libraries which accompany the language. I argue that certain of the requirements necessary to build a fault-tolerant system are solved in the language, while others are solved in the standard libraries. Together these form a basis for building fault-tolerant software systems. No theory is complete without proof that the ideas work in practice. To demonstrate that these ideas work in practice I present a number of case studies of large commercially successful products which use this technology. At the time of writing the largest of these projects is a major Ericsson product, having over a million lines of Erlang code. This product (the AXD301) is thought to be one of the most reliable products ever made by Ericsson. Finally, I ask whether the goal of finding better ways to program Telecom applications was fulfilled, and I point to areas where I think the system could be improved.

  • 7.
    Cakici, Baki
    RISE, Swedish ICT, SICS. Stockholm University, Sweden.
    The Informed Gaze: On the Implications of ICT-Based Surveillance (2013). Doctoral thesis, monograph (Other academic)
    Abstract [en]

    Information and communication technologies are not value-neutral. I examine two domains, public health surveillance and sustainability, in five papers covering: (i) the design and development of a software package for computer-assisted outbreak detection; (ii) a workflow for using simulation models to provide policy advice and a list of challenges for its practice; (iii) an analysis of design documents from three smart home projects presenting intersecting visions of sustainability; (iv) an analysis of EU-financed projects dealing with sustainability and ICT; (v) an analysis of the consequences of design choices when creating surveillance technologies. My contributions include three empirical studies of surveillance discourses where I identify the forms of action that are privileged and the values that are embedded into them. In these discourses, the presence of ICT entails increased surveillance, privileging technological expertise, and prioritising centralised forms of knowledge.

  • 8.
    Carlsson, Mats
    RISE, Swedish ICT, SICS, Computer Systems Laboratory.
    Design and Implementation of an OR-Parallel Prolog Engine (1990). Doctoral thesis, monograph (Other academic)
  • 9.
    Cöster, Rickard
    RISE, Swedish ICT, SICS. Department of Computer and System Sciences.
    Algorithms and Representations for Personalised Information Access (2005). Doctoral thesis, monograph (Other academic)
    Abstract [en]

    Personalised information access systems use historical feedback data, such as implicit and explicit ratings for textual documents and other items, to better locate the right or relevant information for individual users. Three topics in personalised information access are addressed: learning from relevance feedback and document categorisation by the use of concept-based text representations, the need for scalable and accurate algorithms for collaborative filtering, and the integration of textual and collaborative information access. Two concept-based representations are investigated that both map a sparse high-dimensional term space to a dense concept space. For learning from relevance feedback, it is found that the representation combined with the proposed learning algorithm can improve the results of novel queries, when queries are more elaborate than a few terms. For document categorisation, the representation is found useful as a complement to a traditional word-based one. For collaborative filtering, two algorithms are proposed: the first for the case where there are a large number of users and items, and the second for use in a mobile device. It is demonstrated that memory-based collaborative filtering can be more efficiently implemented using inverted files, with equal or better accuracy, and that there is little reason to use the traditional in-memory vector approach when the data is sparse. An empirical evaluation of the algorithm for collaborative filtering on mobile devices shows that it can generate accurate predictions at a high speed using a small amount of resources. For integration, a system architecture is proposed where various combinations of content-based and collaborative filtering can be implemented. The architecture is general in the sense that it provides an abstract representation of documents and user profiles, and provides a mechanism for incorporating new retrieval and filtering algorithms at any time. In conclusion, this thesis demonstrates that information access systems can be personalised using scalable and accurate algorithms and representations for the increased benefit of the user.
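
    The inverted-file idea for collaborative filtering can be sketched briefly (a generic illustration, not the thesis' implementation): store for each item the users who rated it, so similarity computations touch only users who co-rated something, which is what makes sparse data cheap.

```python
from collections import defaultdict

# ratings: user -> {item: rating}; sparse, as usual in filtering data
ratings = {
    "u1": {"a": 5, "b": 3},
    "u2": {"a": 4, "c": 1},
    "u3": {"b": 2, "c": 5},
}

# Inverted file: item -> list of (user, rating). User-user similarity
# then iterates only over co-raters instead of all user pairs.
inverted = defaultdict(list)
for user, items in ratings.items():
    for item, r in items.items():
        inverted[item].append((user, r))

def similarities(user):
    sims = defaultdict(float)
    for item, r in ratings[user].items():
        for other, r2 in inverted[item]:
            if other != user:
                sims[other] += r * r2   # unnormalised cosine numerator
    return sims

def predict(user, item):
    # similarity-weighted average of the co-raters' ratings for `item`
    sims = similarities(user)
    num = den = 0.0
    for other, r in inverted[item]:
        if other != user and other in sims:
            num += sims[other] * r
            den += sims[other]
    return num / den if den else None

print(predict("u1", "c"))   # ~1.92 for this toy data
```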

  • 10.
    Espinoza, Fredrik
    RISE, Swedish ICT, SICS.
    Individual service provisioning (2003). Doctoral thesis, monograph (Other academic)
    Abstract [en]

    Computer usage is once again going through changes. Leaving behind the experiences of mainframes with terminal access and personal computers with graphical user interfaces, we are now headed for handheld devices and ubiquitous computing; we are facing the prospect of interacting with electronic services. These network-enabled functional components provide benefit to users regardless of their whereabouts, access method, or access device. The marketplace is also changing, from suppliers of monolithic off-the-shelf applications to open source and collaboratively developed specialized services. It is within this new arena of computing that we describe Individual Service Provisioning, a design and implementation that enables end users to create and provision their own services. Individual Service Provisioning consists of three components: a personal service environment, in which users can access and manage their services; ServiceDesigner, a tool with which to create new services; and the provisioning system, which turns end users into service providers.

  • 11.
    Frecon, Emmanuel
    RISE, Swedish ICT, SICS.
    DIVE on the internet (2004). Doctoral thesis, monograph (Other academic)
    Abstract [en]

    This dissertation reports research and development of a platform for Collaborative Virtual Environments (CVEs). It has particularly focused on two major challenges: supporting the rapid development of scalable applications and easing their deployment on the Internet. This work employs a research method based on prototyping and refinement and promotes the use of this method for application development. A number of the solutions herein are in line with other CVE systems. One of the strengths of this work lies in its global approach to the issues raised by CVEs and the recognition that such complex problems are best tackled using a multi-disciplinary approach that understands both user and system requirements. CVE application deployment is aided by an overlay network that is able to complement any IP multicast infrastructure in place. Apart from complementing a weakly deployed worldwide multicast, this infrastructure provides a certain degree of introspection, remote control and visualisation. As such, it forms an important aid in assessing the scalability of running applications. This scalability is further facilitated by specialised object distribution algorithms and an open framework for the implementation of novel partitioning techniques. CVE application development is eased by a scripting language, which enables rapid development and favours experimentation. This scripting language interfaces with many aspects of the system and enables the prototyping of distribution-related components as well as user interfaces. It is the key construct of a distributed environment to which components, written in different languages, connect and on which they operate in a network-abstracted manner. The solutions proposed are exemplified and strengthened by three collaborative applications. The Dive room system is a virtual environment modelled after the room metaphor and supporting asynchronous and synchronous cooperative work. WebPath is a companion application to a Web browser that seeks to make the current history of page visits more visible and usable. Finally, the London travel demonstrator supports travellers by providing an environment where they can explore the city, utilise group collaboration facilities, rehearse particular journeys and access tourist information data.

  • 12.
    Fredlund, Lars-Åke
    RISE - Research Institutes of Sweden, ICT, SICS.
    A framework for reasoning about Erlang code (2001). Doctoral thesis, monograph (Other academic)
    Abstract [en]

    We present a framework for formal reasoning about the behaviour of software written in Erlang, a functional programming language with prominent support for process-based concurrency, message passing communication and distribution. The framework contains the following key ingredients: a specification language based on the mu-calculus and first-order predicate logic, a hierarchical small-step structural operational semantics of Erlang, a judgement format allowing parameterised behavioural assertions, and a Gentzen-style proof system for proving validity of such assertions. The proof system supports property decomposition through a cut rule and handles program recursion through well-founded induction. An implementation is available in the form of a proof assistant tool for checking the correctness of proof steps. The tool offers support for automatic proof discovery through higher-level rules tailored to Erlang, as illustrated in several case studies.

  • 13.
    Gillblad, Daniel
    RISE, Swedish ICT, SICS, Decisions, Networks and Analytics lab.
    On Practical Machine Learning and Data Analysis (2008). Doctoral thesis, monograph (Other academic)
    Abstract [en]

    This thesis discusses and addresses some of the difficulties associated with practical machine learning and data analysis. Introducing data-driven methods in, for example, industrial and business applications can lead to large gains in productivity and efficiency, but the cost and complexity are often overwhelming. Creating machine learning applications in practice often involves a large amount of manual labour, which often needs to be performed by an experienced analyst who may lack significant experience with the application area. We will here discuss some of the hurdles faced in a typical analysis project and suggest measures and methods to simplify the process. One of the most important issues when applying machine learning methods to complex data, such as that from industrial applications, is that the processes generating the data are modelled in an appropriate way. Relevant aspects have to be formalised and represented in a way that allows us to perform our calculations in an efficient manner. We present a statistical modelling framework, Hierarchical Graph Mixtures, based on a combination of graphical models and mixture models. It allows us to create consistent, expressive statistical models that simplify the modelling of complex systems. Using a Bayesian approach, we allow for encoding of prior knowledge and make the models applicable in situations where relatively little data are available. Detecting structure in data, such as clusters and dependency structure, is very important both for understanding an application area and for specifying the structure of e.g. a hierarchical graph mixture. We will discuss how this structure can be extracted for sequential data. By using the inherent dependency structure of sequential data, we construct an information-theoretic measure of correlation that does not suffer from the problems most common correlation measures have with this type of data. In many diagnosis situations it is desirable to perform a classification in an iterative and interactive manner. The matter is often complicated by very limited amounts of knowledge and examples when a new system to be diagnosed is initially brought into use. We describe how to create an incremental classification system based on a statistical model that is trained from empirical data, and show how the limited available background information can still be used initially for a functioning diagnosis system. To minimise the effort with which results are achieved within data analysis projects, we need to address not only the models used, but also the methodology and applications that can help simplify the process. We present a methodology for data preparation and a software library intended for rapid analysis, prototyping, and deployment. Finally, we study a few example applications, presenting tasks within classification, prediction and anomaly detection. The examples include demand prediction for supply chain management, approximating complex simulators for increased speed in parameter optimisation, and fraud detection and classification within a media-on-demand system.
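
    The information-theoretic correlation measure for sequential data can be illustrated, in simplified form, by estimating the mutual information between a symbol sequence and a lagged copy of itself (this sketch is mine; the thesis' measure is more elaborate):

```python
import math
from collections import Counter

def mutual_information(xs, ys):
    # I(X;Y) = sum_{x,y} p(x,y) * log2( p(x,y) / (p(x) p(y)) ),
    # estimated from empirical counts of the paired observations
    n = len(xs)
    pxy = Counter(zip(xs, ys))
    px, py = Counter(xs), Counter(ys)
    return sum((c / n) * math.log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

def lagged_dependency(seq, lag):
    # correlation-like score between a sequence and its lag-shifted self
    return mutual_information(seq[:-lag], seq[lag:])

alternating = list("abababababababababab")
constant = list("aaaaaaaaaaaaaaaaaaaa")
print(lagged_dependency(alternating, 1))  # ~1 bit: fully predictable
print(lagged_dependency(constant, 1))     # 0 bits: nothing to predict
```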

  • 14.
    Jacobsson, Mattias
    RISE, Swedish ICT, SICS, Computer Systems Laboratory.
    Tinkering With Interactive Materials - Studies, Concepts and Prototypes (2013). Doctoral thesis, monograph (Other academic)
    Abstract [en]

    The concept of tinkering is a central practice within research in the field of Human-Computer Interaction, dealing with new interactive forms and technologies. In this thesis, tinkering is discussed not only as a practice for interaction design in general, but as an attitude that calls for a deeper reflection over research practices, knowledge generation and the recent movements in the direction of materials and materiality within the field. The presented research exemplifies practices and studies in relation to interactive technology through a number of projects, all revolving around the design of and interaction with physical interactive artifacts. In particular, nearly all projects focus on robotic artifacts for consumer settings. Three main contributions are presented in terms of studies, prototypes and concepts, together with a conceptual discussion around tinkering framed as an attitude within interaction design. The results from this research revolve around how grounding is achieved, partly through studies of existing interaction and partly through how tinkering-oriented activities generate knowledge in relation to design concepts, built prototypes and real-world interaction.

  • 15.
    Kreuger, Per
    RISE, Swedish ICT, SICS, Decisions, Networks and Analytics lab.
    Computational Issues in Calculi of Partial Inductive Definitions (1995). Doctoral thesis, monograph (Other academic)
    Abstract [en]

    We study the properties of a number of algorithms proposed to explore the computational space generated by a very simple and general idea: the notion of a mathematical definition, together with a number of suggested formal interpretations of this idea. Theories of partial inductive definitions (PID) constitute a class of logics based on the notion of an inductive definition. Formal systems based on this notion can be used to generalize Horn logic and naturally allow and suggest extensions which differ in interesting ways from generalizations based on first-order predicate calculus. For example, the notion of completion generated by a calculus of PID and the resulting notion of negation is completely natural and does not require externally motivated procedures such as "negation as failure". For this reason, computational issues arising in these calculi deserve closer inspection. This work discusses a number of finitary theories of PID and analyzes the algorithmic and semantic issues that arise in each of them. There has been significant work on implementing logic programming languages in this setting, and we briefly present the programming language and knowledge modelling tool GCLA II, in which many of the computational problems discussed arise naturally in practice.

  • 16.
    Laaksolahti, Jarmo
    RISE, Swedish ICT, SICS, Computer Systems Laboratory.
    Plot, Spectacle, and Experience: Contributions to the Design and Evaluation of Interactive Storytelling (2008). Doctoral thesis, monograph (Other academic)
    Abstract [en]

    Interactive storytelling is a new form of storytelling emerging in the crossroads of many scholarly, artistic, and industrial traditions. In interactive stories the reader/spectator moves from being a receiver of a story to an active participant. By allowing participants to influence the progression and outcome of the story, new experiences arise. This thesis has worked on three aspects of interactive storytelling: plot, spectacle, and experience. The first aspect is concerned with finding methods for combining the linear structure of a story with the freedom of action required for an interactive experience. Our contribution has focused on a method for avoiding unwanted plot twists by predicting the progression of a story and altering its course if such twists are detected. The second aspect is concerned with supporting the storytelling process at the level of spectacle. In Aristotelian terms, spectacle refers to the sensory display that meets the audience of a drama and is ultimately what causes the experience. Our contribution focuses on making changing emotions and social relations, which in our vision are important elements of dramatic stories, graphically salient to players at the level of spectacle. As a result we have broadened the view of what is important for interactive storytelling, as well as of what makes characters believable. So far, little research has been done on evaluating interactive stories. Experience, the third aspect, is concerned with finding qualitative methods for evaluating the experience of playing an interactive story. In particular we were interested in finding methods that could tell us something about how a player's experience evolves over time, in addition to qualities such as agency that have been claimed to be characteristic of interactive stories. Our contribution consists of two methods that we have developed and adapted for the purpose of evaluating interactive stories and that can provide such information. The methods have been evaluated on three different interactive storytelling games.

  • 17.
    Marsh, Ian
    RISE - Research Institutes of Sweden, ICT, SICS.
    Quality aspects of Internet telephony (2009). Doctoral thesis, monograph (Other academic)
    Abstract [en]

    Internet telephony has had a tremendous impact on how people communicate. Many now maintain contact using some form of Internet telephony. Therefore the motivation for this work has been to address the quality aspects of real-world Internet telephony for both fixed and wireless telecommunication. The focus has been on the quality aspects of voice communication, since poor quality often leads to user dissatisfaction. The scope of the work has been broad in order to address the main factors within IP-based voice communication. The first four chapters of this dissertation constitute the background material. The first chapter outlines where Internet telephony is deployed today. It also motivates the topics and techniques used in this research. The second chapter provides the background on Internet telephony, including signalling, speech coding and voice internetworking. The third chapter focuses solely on quality measures for packetised voice systems, and finally the fourth chapter is devoted to the history of voice research. The appendix of this dissertation constitutes the research contributions. It includes an examination of the access network, focusing on how calls are multiplexed in wired and wireless systems. Subsequently, in the wireless case, we consider how to hand over calls from 802.11 networks to the cellular infrastructure. We then consider the Internet backbone, where most of our work is devoted to measurements specifically for Internet telephony. The applications of these measurements have been estimating telephony arrival processes, measuring call quality, and quantifying the trend in Internet telephony quality over several years. We also consider the end systems, since they are responsible for reconstructing a voice stream given loss and delay constraints. Finally we estimate voice quality using the ITU proposal PESQ and the packet loss process. The main contribution of this work is a systematic examination of Internet telephony. We describe several methods to enable adaptable solutions for maintaining consistent voice quality. We have also found that relatively small technical changes can lead to substantial user quality improvements. A second contribution of this work is a suite of software tools designed to ascertain voice quality in IP networks. Some of these tools are in use within commercial systems today.
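
    One standard way to map measured packet loss to a speech-quality score, in the spirit of the quality measures discussed here, is the ITU-T G.107 E-model with its R-to-MOS mapping. The sketch below ignores delay impairments, and the G.711 coefficients are my assumption; codec impairment tables vary.

```python
def mos_from_loss(loss_pct, ie=0.0, bpl=4.3):
    """Estimate MOS from random packet loss via the ITU-T G.107 E-model,
    ignoring delay. ie/bpl defaults are the usual G.711 values."""
    # effective equipment impairment grows with the loss percentage
    ie_eff = ie + (95.0 - ie) * loss_pct / (loss_pct + bpl)
    r = 93.2 - ie_eff                    # transmission rating factor
    r = max(0.0, min(100.0, r))
    # standard R -> MOS mapping from G.107
    return 1.0 + 0.035 * r + r * (r - 15.0) * (100.0 - r) * 7e-6

for loss in (0, 1, 2, 5, 10):
    print(f"{loss:>3}% loss -> MOS ~ {mos_from_loss(loss):.2f}")
```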

  • 18.
    Montelius, Johan
    RISE, Swedish ICT, SICS.
    Exploiting Fine-grain Parallelism in Concurrent Constraint Languages (1997). Doctoral thesis, monograph (Other academic)
    Abstract [en]

    This dissertation presents the design, implementation, and evaluation of a system that exploits fine-grain implicit parallelism in a concurrent constraint programming language. The system is able to outperform a C implementation of an algorithm with complex dependencies without any user annotations. The concurrent constraint programming language AKL is used as the source programming language. A program is divided during runtime into tasks that are distributed over available processors. The system is unique in that it handles both and-parallel execution of goals and or-parallel execution of encapsulated search. A parallel binding scheme for a hierarchical constraint store is presented. The binding scheme allows encapsulated search to be performed in parallel. The design is justified with empirical data from the implementation. The scheme is the most efficient parallel scheme yet presented for deep concurrent constraint systems. The system was implemented on a high-performance shared-memory multiprocessor. Extensive measurements were done on the system using both smaller benchmarks and real-life programs. The evaluation includes detailed instruction-level simulation, including cache performance, to explain the behavior of the system.

  • 19.
    Nylander, Stina
    RISE, Swedish ICT, SICS, Software and Systems Engineering Laboratory.
    Design and Implementation of Multi-Device Services (2007). Doctoral thesis, monograph (Other academic)
    Abstract [en]

    We present a method for developing multi-device services which allows for the creation of services that are adapted to a wide range of devices. Users have a wide selection of electronic services at their disposal, such as shopping, banking, gaming, and messaging. They interact with these services using the computing devices they prefer or have access to, which can vary between situations. In some cases, the service that they want to use works with the device they have access to, and sometimes it does not. Thus, in order for users to experience their full benefits, electronic services will need to become more flexible. They will need to be multi-device services, i.e. be accessible from different devices. We show that multi-device services are often used in different ways on different devices due to variations in device capabilities, purpose of use, context of use, and usability. This suggests that multi-device services not only need to be accessible from more than one device, they also need to be able to present functionality and user interfaces that suit various devices and situations of use. The key problem addressed in this work is that there are too many device-service combinations for developing a service version for each device. Instead, there is a need for new methods for developing multi-device services which allow the creation of services that are adapted to various devices and situations. The challenge of designing and implementing multi-device services has been addressed in two ways in the present work: through the study of real-life use of multi-device services and through the creation of a development method for multi-device services. Studying the use of multi-device services has generated knowledge about how to design such services so that they give users the most value. The work with development methods has resulted in a design model building on the separation of form and content, thus making it possible to create different presentations of the same content. In concrete terms, the work has resulted in design guidelines for multi-device services and a system prototype based on the principles of separation between form and content, and presentation control.
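
    The separation of form and content can be shown in miniature (an invented example of the principle, not the thesis' prototype): one device-neutral content description, with a presentation chosen per device class at request time.

```python
# Content is device-neutral; each device class supplies its own
# presentation of the same content.
content = {"title": "Inbox", "items": ["Pay invoice", "Call Kim"]}

def render_desktop(c):
    rows = "".join(f"<li>{i}</li>" for i in c["items"])
    return f"<h1>{c['title']}</h1><ul>{rows}</ul>"

def render_phone(c):
    # terse, line-oriented presentation for a small screen
    return "\n".join([c["title"].upper()] + [f"* {i}" for i in c["items"]])

renderers = {"desktop": render_desktop, "phone": render_phone}

def serve(device_class):
    return renderers[device_class](content)   # same content, new form

print(serve("phone"))
```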

  • 20.
    Olsson, Fredrik
    RISE, Swedish ICT, SICS.
    Bootstrapping Named Entity Annotation by Means of Active Machine Learning: A Method for Creating Corpora (2008). Doctoral thesis, monograph (Other academic)
    Abstract [en]

    This thesis describes the development and in-depth empirical investigation of a method, called BootMark, for bootstrapping the marking up of named entities in textual documents. The reason for working with documents, as opposed to for instance sentences or phrases, is that the BootMark method is concerned with the creation of corpora. The claim made in the thesis is that BootMark requires a human annotator to manually annotate fewer documents in order to produce a named entity recognizer with a given performance, than would be needed if the documents forming the basis for the recognizer were randomly drawn from the same corpus. The intention is then to use the created named entity recognizer as a pre-tagger and thus eventually turn the manual annotation process into one in which the annotator reviews system-suggested annotations rather than creating new ones from scratch. The BootMark method consists of three phases: (1) Manual annotation of a set of documents; (2) Bootstrapping – active machine learning for the purpose of selecting which document to annotate next; (3) The remaining unannotated documents of the original corpus are marked up using pre-tagging with revision. Five emerging issues are identified, described and empirically investigated in the thesis. Their common denominator is that they all depend on the realization of the named entity recognition task, and as such, require the context of a practical setting in order to be properly addressed. The emerging issues are related to: (1) the characteristics of the named entity recognition task and the base learners used in conjunction with it; (2) the constitution of the set of documents annotated by the human annotator in phase one in order to start the bootstrapping process; (3) the active selection of the documents to annotate in phase two; (4) the monitoring and termination of the active learning carried out in phase two, including a new intrinsic stopping criterion for committee-based active learning; and (5) the applicability of the named entity recognizer created during phase two as a pre-tagger in phase three. The outcomes of the empirical investigations concerning the emerging issues support the claim made in the thesis. The results also suggest that while the recognizer produced in phases one and two is as useful for pre-tagging as a recognizer created from randomly selected documents, the applicability of the recognizer as a pre-tagger is best investigated by conducting a user study involving real annotators working on a real named entity recognition task.
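
    Phase two's active selection can be rendered generically as query-by-committee with vote entropy (a common formulation of the idea; BootMark's actual base learners and features are not shown, and the toy committee below is invented):

```python
import math

def vote_entropy(votes):
    # committee disagreement: entropy of the label vote distribution
    n = len(votes)
    return -sum((votes.count(l) / n) * math.log2(votes.count(l) / n)
                for l in set(votes))

def select_next(pool, committee):
    # pick the unannotated document the committee disagrees on most
    return max(pool, key=lambda doc: vote_entropy([c(doc) for c in committee]))

# Toy committee: three "recognizers" that disagree on capitalised tokens.
committee = [
    lambda doc: "entity" if doc[:1].isupper() else "other",
    lambda doc: "entity" if doc.isupper() else "other",
    lambda doc: "other",
]
pool = ["stockholm", "Stockholm", "SICS", "annotation"]
print(select_next(pool, committee))   # a maximally contested document

# The surrounding BootMark-style loop (human annotation elided):
#   1. manually annotate a seed set of documents
#   2. repeat: retrain the committee, select_next(), annotate,
#      until the stopping criterion fires
#   3. pre-tag the remaining documents; the human revises suggestions
```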

  • 21.
    Olsson, Tomas
    RISE, Swedish ICT, SICS.
    A Data-Driven Approach to Remote Fault Diagnosis of Heavy-duty Machines (2015). Doctoral thesis, monograph (Other academic)
    Abstract [en]

    Heavy-duty machines are equipment constructed for working under rough conditions, and their design is meant to withstand heavy workloads. However, technical development over the last decades in cheap electronic components has led to an increase of electrical systems in the traditionally mainly mechanical systems of heavy-duty machines. As the complexity of these machines increases, so does the complexity of detecting and diagnosing machine faults. However, the addition of new electrical systems, such as on-board computational power and telematics, makes it possible to add new sensors that measure signals relevant for fault detection and diagnosis, and to process signals on-board or off-board the machines. In this thesis, we address the diagnostic problem by investigating data-driven methods for remote diagnosis of heavy-duty machines, where a part of the analysis is performed on-board the machine (fault detection), while another part is performed off-board the machine (fault classification). We propose a diagnostic framework in which we use a novel combination of methods for each step in the diagnosis. On-board the machine, we use logistic regression as an anomaly detector to detect faults, which leads to a stream of individual cases classified as anomalous or not. Then, either on-board or off-board, we can use a probabilistic anomaly detector to identify whether the stream of cases is truly anomalous when we look at the stream of cases as a group. The anomalous group of cases is called a composite case. Thereafter, off-board the machine, each anomalous individual case is classified into a fault type using a case-based reasoning approach to fault diagnosis. In the final step, we fuse the individual classifications into a single aggregated classification for the composite case. In order to be able to assess the reliability of a diagnosis, we also propose a novel case-based approach to estimating the reliability of probabilistic predictions. It can, for instance, be used for assessing the confidence of the classification of a composite case given historical data of the predictive reliability.
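
    The two-stage detection idea, individual cases flagged on-board and a stream then judged as a group, can be sketched generically. Here the group-level test simply asks whether the flagged fraction in a window is improbably high given the detector's false-alarm rate; this is my simplification, not the thesis' probabilistic detector.

```python
import math

def group_anomaly(flags, false_alarm_rate=0.05, threshold=3.0):
    # Is the number of flagged cases in the window more than `threshold`
    # standard deviations above what false alarms alone would produce?
    n, k = len(flags), sum(flags)
    mean = n * false_alarm_rate
    std = math.sqrt(n * false_alarm_rate * (1.0 - false_alarm_rate))
    return (k - mean) / std > threshold

# stream of per-case decisions from the on-board anomaly detector
window = [False] * 40 + [True] * 10    # a burst of flagged cases
print(group_anomaly(window))           # True: treat as a composite case
```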

  • 22.
    Paladi, Nicolae
    RISE - Research Institutes of Sweden, ICT, SICS. Lund University, Sweden.
    Trust but verify: trust establishment mechanisms in infrastructure clouds (2017). Doctoral thesis, monograph (Other academic)
    Abstract [en]

    In the cloud computing service model, users consume computation resources provided through the Internet, often without any awareness of the cloud service provider that owns and operates the supporting hardware infrastructure. This marks an important change compared to earlier models of computation, for example when such supporting hardware infrastructure was under the control of the user. Given the ever increasing importance of computing, the shift to cloud computing raises several challenging issues, which include protecting the computation and ancillary resources such as network communication and the stored or produced data. While the potential risks for data isolation and confidentiality in cloud infrastructure are somewhat known, they are obscured by the convenience of the service model and the claimed trustworthiness of cloud service providers, backed by reputation and contractual agreements. Ongoing research on cloud infrastructure has the potential to strengthen the security guarantees of computation, data and communication for users of cloud computing. This thesis is part of such research efforts. It focuses on assessing the trustworthiness of components of the cloud network and computing infrastructure, on controlling access to data and network resources, and on select aspects of cloud computing security. The contributions of the thesis include mechanisms to verify or enforce security in cloud infrastructure. Such mechanisms have the potential both to help cloud service providers strengthen the security of their deployments and to empower users to obtain guarantees regarding security aspects of service level agreements. By leveraging functionality of components such as the Trusted Platform Module, the thesis presents mechanisms to provide user guarantees regarding the integrity of the computing environment and the geographic location of plaintext data, as well as to allow users to maintain control over the cryptographic keys for integrity and confidentiality protection of data stored in remote infrastructure. Furthermore, the thesis leverages recent innovations for platform security, such as Software Guard Extensions, to introduce mechanisms to verify the integrity of the network infrastructure in the Software-Defined Networking model. A final contribution of the thesis is a mechanism for access control of resources in the Software-Defined Networking model.
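
    One primitive behind the Trusted Platform Module mechanisms mentioned here is worth spelling out: a platform configuration register (PCR) can only be extended, never set directly, so a verifier can recompute the expected hash chain from a boot event log. The sketch below uses SHA-256 (as in TPM 2.0); it is an illustration of the primitive, not code from the thesis.

```python
import hashlib

def extend(pcr, measurement):
    # PCR_new = H(PCR_old || H(component)): registers only accumulate,
    # so the final value commits to the whole measured boot sequence.
    return hashlib.sha256(pcr + hashlib.sha256(measurement).digest()).digest()

def expected_pcr(event_log):
    pcr = b"\x00" * 32                  # PCRs start zeroed at boot
    for component in event_log:
        pcr = extend(pcr, component)
    return pcr

boot_log = [b"bootloader", b"kernel", b"hypervisor"]
quoted = expected_pcr(boot_log)         # value the TPM would report

# A verifier recomputes the chain from the log and compares:
print(expected_pcr(boot_log) == quoted)                              # True
print(expected_pcr([b"bootloader", b"evil", b"hypervisor"]) == quoted)  # False
```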

  • 23.
    Payberah, Amir
    RISE, Swedish ICT, SICS.
    Live Streaming in P2P and Hybrid P2P-Cloud Environments for the Open Internet (2013). Doctoral thesis, monograph (Other academic)
    Abstract [en]

    Peer-to-Peer (P2P) live media streaming is an emerging technology that reduces the barrier to streaming live events over the Internet. However, providing a high-quality media stream using P2P overlay networks is challenging and gives rise to a number of issues: (i) how to guarantee quality of service (QoS) in the presence of dynamism, (ii) how to incentivize nodes to participate in media distribution, (iii) how to avoid bottlenecks in the overlay, and (iv) how to deal with nodes that reside behind Network Address Translation gateways (NATs). In this thesis, we answer the above research questions in the form of new algorithms and systems. First of all, we address problems (i) and (ii) by presenting our P2P live media streaming solutions: Sepidar, which is a multiple-tree overlay, and GLive, which is a mesh overlay. In both models, nodes with higher upload bandwidth are positioned closer to the media source. This structure reduces the playback latency and increases the playback continuity at nodes, and also incentivizes the nodes to provide more upload bandwidth. We use a reputation model to improve the participation of nodes in media distribution in Sepidar and GLive. In both systems, nodes audit the behaviour of their directly connected nodes by getting feedback from other nodes. Nodes that upload more of the stream get a relatively higher reputation, and proportionally higher-quality streams. To construct our streaming overlay, we present a distributed market model inspired by Bertsekas' auction algorithm, although our model does not rely on a central server with global knowledge. In our model, each node has only partial information about the system. Nodes acquire knowledge of the system by sampling nodes using the Gradient overlay, which facilitates the discovery of nodes with similar upload bandwidth. We address the bottleneck problem, problem (iii), by presenting CLive, which satisfies real-time constraints on the delay between the generation of the stream and its actual delivery to users. We resolve this problem by borrowing some resources (helpers) from the cloud when needed. In our approach, helpers are added on demand to the overlay to increase the total available bandwidth, thus increasing the probability of receiving the video on time. As the use of cloud resources costs money, we model the problem as the minimization of the economic cost, provided that a set of constraints on QoS is satisfied. Finally, we solve the NAT problem, problem (iv), by presenting two NAT-aware peer sampling services (PSSs): Gozar and Croupier. Traditional gossip-based PSSs break down when a high percentage of nodes are behind NATs. We overcome this problem in Gozar using one-hop relaying to communicate with the nodes behind NATs. Croupier similarly implements a gossip-based PSS, but without the use of relaying.
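
    A gossip-based peer sampling service of the kind Gozar and Croupier build on can be sketched in a few lines (NAT traversal, the part those systems actually add, is omitted): each node keeps a small partial view and periodically swaps half of it with a random neighbour, keeping every view a fresh sample of the network.

```python
import random

VIEW_SIZE = 8
N = 100

# every node starts with a random partial view (bootstrap assumed)
views = {n: random.sample([m for m in range(N) if m != n], VIEW_SIZE)
         for n in range(N)}

def shuffle(a):
    # `a` and a random neighbour `b` swap half their views; over many
    # rounds each view remains a small, fresh sample of the network
    b = random.choice(views[a])
    send = random.sample(views[a], VIEW_SIZE // 2)
    reply = random.sample(views[b], VIEW_SIZE // 2)
    for node, incoming, outgoing in ((a, reply, send), (b, send, reply)):
        kept = [p for p in views[node] if p not in outgoing]
        new = [p for p in incoming if p != node and p not in kept]
        views[node] = (kept + new)[:VIEW_SIZE]

for _ in range(10_000):
    shuffle(random.randrange(N))
print(views[0])    # node 0's current random sample of peers
```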

  • 24.
    Rahimian, Fatemeh
    RISE, Swedish ICT, SICS. KTH Royal Institute of Technology, Sweden.
    Gossip-based Algorithms for Information Dissemination and Graph Clustering (2014). Doctoral thesis, monograph (Other academic)
    Abstract [en]

    Decentralized algorithms are becoming ever more prevalent in almost all real-world applications that are either data intensive, computation intensive or both. This thesis presents a few decentralized solutions for large-scale (i) data dissemination, (ii) graph partitioning, and (iii) data disambiguation. All these solutions are based on gossip, a lightweight peer-to-peer data exchange protocol, and are thus appropriate for execution in a distributed environment. For efficient data dissemination, we make use of the publish/subscribe communication model and provide two distributed solutions, one for topic-based and one for content-based subscriptions, named Vitis and Vinifera respectively. These systems propagate large quantities of data to interested users with a relatively low overhead. Without any central coordinator and only with the use of gossip, we build a novel topology that enables efficient routing in an unstructured overlay. We construct a hybrid system by injecting structure into an otherwise unstructured network. The resulting structure resembles a navigable small-world network that spans along clusters of nodes that have similar subscriptions. The properties of such an overlay make it an ideal platform for efficient data dissemination in large-scale systems. Our solutions significantly outperform their counterparts on various subscription and churn scenarios, from both synthetic models and real-world traces. We then investigate how gossiping protocols can be used, not for overlay construction, but for operating on fixed overlay topologies, which resemble graphs. In particular we study the NP-complete problem of graph partitioning and present a distributed partitioning solution for very large graphs. This solution, called Ja-be-Ja, is based on local search and does not require access to the entire graph simultaneously. It is therefore appropriate for graphs that cannot even fit into the memory of a single computer. Once again gossip-based algorithms prove efficient, as they enable implementing lightweight peer sampling services, which supply graph nodes with partial knowledge about other nodes in the graph. The performance of our partitioning algorithm is comparable to centralized graph partitioning algorithms, and yet it is scalable and can be executed on several machines in parallel or even in a completely distributed peer-to-peer overlay. It can be used for both edge-cut and vertex-cut partitioning of graphs and can produce partition sizes of any given distribution. We further extend the use of gossiping protocols to find natural clusters in a graph instead of producing a given number of partitions. This problem, known as graph community detection, has extensive applications in various fields and communities. We take our community detection algorithm to the realm of linguistics and address a well-known problem of data disambiguation. In particular, we provide a parallel community detection algorithm for the cross-document coreference problem. We operate on graphs that we construct by representing documents' keywords as nodes and the co-location of those keywords in a document as edges. We then exploit the particular nature of such graphs, namely that coreferent words are topologically clustered and can thus be efficiently discovered by our community detection algorithm.
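
    The core of Ja-be-Ja-style partitioning is compact enough to sketch (simplified: uniform partner sampling, no simulated annealing): two nodes swap colours only if the swap reduces the number of cut edges around them.

```python
import random

def edge_cut(graph, colours):
    return sum(colours[u] != colours[v]
               for u in graph for v in graph[u]) // 2

def try_swap(graph, colours, a, b):
    # swap the colours of a and b iff it lowers the cut around them
    if colours[a] == colours[b]:
        return
    def local_cut():
        return sum(colours[nb] != colours[n]
                   for n in (a, b) for nb in graph[n])
    before = local_cut()
    colours[a], colours[b] = colours[b], colours[a]      # tentative swap
    if local_cut() >= before:
        colours[a], colours[b] = colours[b], colours[a]  # revert

# toy graph: two natural clusters joined by a single edge
graph = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3],
         3: [2, 4, 5], 4: [3, 5], 5: [3, 4]}
colours = {n: n % 2 for n in graph}       # balanced but poor colouring
print("cut before:", edge_cut(graph, colours))
nodes = list(graph)
for _ in range(200):
    try_swap(graph, colours, *random.sample(nodes, 2))
print("cut after:", edge_cut(graph, colours))   # tends towards 1
```

    Because swaps preserve the number of nodes of each colour, the partition stays balanced; the full algorithm adds simulated annealing to escape local optima.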

  • 25.
    Rasmusson, Lars
    RISE, Swedish ICT, SICS, Computer Systems Laboratory.
    Network capacity sharing with QoS as a financial derivative pricing problem: algorithms and network (2002). Doctoral thesis, monograph (Other academic)
    Abstract [en]

    A design of an automatic network capacity market, often referred to as a bandwidth market, is presented. Three topics are investigated. First, a network model is proposed. The proposed model is based upon a trisection of the participant roles into network users, network owners, and market middlemen. The network capacity is defined in a way that allows it to be traded and to have a well-defined price. The network devices are modeled as core nodes, access nodes, and border nodes. Requirements on these are given. It is shown how their functionalities can be implemented in a network. Second, a simulated capacity market is presented, and a statistical method for estimating the price dynamics in the market is proposed. A method for pricing network services based on shared capacity is proposed, in which the price of a service is equivalent to that of a financial derivative contract on a number of simple capacity shares. Third, protocols for the interaction between the participants are proposed. The market participants need to commit to contracts with an auditable protocol with a small overhead. The proposed protocol is based on a public key infrastructure and on known protocols for multi-party contract signing. The proposed model allows network capacity to be traded in a manner that utilizes the network efficiently. A new feature of this market model, compared to other network capacity markets, is that the prices are not controlled by the network owners. It is the end-users who, through middlemen, trade capacity among themselves. Therefore, financial, rather than control-theoretic, methods are used for the pricing of capacity.
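
    The pricing idea, a QoS-bearing service valued as a derivative on simple capacity shares, can be illustrated with a plain Monte Carlo valuation of a European call option, assuming (purely for illustration) that the capacity share price follows geometric Brownian motion; the thesis estimates its own price dynamics rather than assuming this model.

```python
import math, random

def mc_call_price(s0, strike, r, sigma, t, paths=100_000):
    """Risk-neutral Monte Carlo price of a European call: the
    discounted expected payoff max(S_T - K, 0) under GBM dynamics."""
    payoff_sum = 0.0
    for _ in range(paths):
        z = random.gauss(0.0, 1.0)
        st = s0 * math.exp((r - 0.5 * sigma ** 2) * t
                           + sigma * math.sqrt(t) * z)
        payoff_sum += max(st - strike, 0.0)
    return math.exp(-r * t) * payoff_sum / paths

# price of the right to buy one capacity share at a fixed price
# in 30 days (all parameters are illustrative assumptions)
print(mc_call_price(s0=1.0, strike=1.1, r=0.03, sigma=0.8, t=30 / 365))
```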

    Download full text (pdf)
    FULLTEXT01
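
    The derivative-pricing view above can be made concrete with a standard Monte Carlo sketch. Note that the thesis estimates price dynamics statistically from a simulated market; the geometric Brownian motion and all parameter values below are illustrative stand-ins, not the model used in the thesis.

```python
import math, random

def mc_capacity_option(spot, strike, rate, vol, maturity, n_paths=100_000):
    """Monte Carlo price of a European call on one capacity share under GBM."""
    payoff_sum = 0.0
    for _ in range(n_paths):
        z = random.gauss(0.0, 1.0)
        s_t = spot * math.exp((rate - 0.5 * vol**2) * maturity
                              + vol * math.sqrt(maturity) * z)
        payoff_sum += max(s_t - strike, 0.0)
    return math.exp(-rate * maturity) * payoff_sum / n_paths

# e.g. a 30-day right to buy one capacity share at 1.0 price units:
print(mc_capacity_option(spot=1.0, strike=1.0, rate=0.02, vol=0.5, maturity=30/365))
```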
  • 26.
    Raza, Shahid
    RISE, Swedish ICT, SICS. Department of Computer Science and Engineering.
    Lightweight Security Solutions for the Internet of Things2013Doctoral thesis, monograph (Other academic)
    Abstract [en]

    The future Internet will be an IPv6 network interconnecting traditional computers and a large number of smart objects or networks such as Wireless Sensor Networks (WSNs). This Internet of Things (IoT) will be the foundation of many services and our daily life will depend on its availability and reliable operations. Therefore, among many other issues, the challenge of implementing secure communication in the IoT must be addressed. The traditional Internet has established and tested ways of securing networks. The IoT is a hybrid network of the Internet and resource-constrained networks, and it is therefore reasonable to explore the options of using security mechanisms standardized for the Internet in the IoT. The IoT requires multi-faceted security solutions where the communication is secured with confidentiality, integrity, and authentication services; the network is protected against intrusions and disruptions; and the data inside a sensor node is stored in an encrypted form. Using standardized mechanisms, communication in the IoT can be secured at different layers: at the link layer with IEEE 802.15.4 security, at the network layer with IP security (IPsec), and at the transport layer with Datagram Transport Layer Security (DTLS). Even when the IoT is secured with encryption and authentication, sensor nodes are exposed to wireless attacks both from inside the WSN and from the Internet. Hence an Intrusion Detection System (IDS) and firewalls are needed. Since the nodes inside WSNs can be captured and cloned, protection of stored data is also important. This thesis has three main contributions. (i) It enables secure communication in the IoT using lightweight compressed yet standard compliant IPsec, DTLS, and IEEE 802.15.4 link layer security; and it discusses the pros and cons of each of these solutions. The proposed security solutions are implemented and evaluated in an IoT setup on real hardware. (ii) This thesis also presents the design, implementation, and evaluation of a novel IDS for the IoT. (iii) Last but not least, it also provides mechanisms to protect data inside constrained nodes. The experimental evaluation of the different solutions shows that the resource-constrained devices in the IoT can be secured with IPsec, DTLS, and 802.15.4 security; can be efficiently protected against intrusions; and the proposed combined secure storage and communication mechanisms can significantly reduce the security-related operations and energy consumption.

    Download full text (pdf)
    FULLTEXT01
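
    To make the link-layer option concrete: IEEE 802.15.4 security builds on AES-CCM authenticated encryption, which the `cryptography` Python package exposes directly. The sketch below is a generic AES-CCM round trip with dummy key, nonce and frame fields, not the compressed, sensor-node implementations evaluated in the thesis.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESCCM

key = AESCCM.generate_key(bit_length=128)
aead = AESCCM(key, tag_length=8)   # 802.15.4 MIC-64 uses an 8-byte tag
nonce = os.urandom(13)             # 13-byte nonce, as in 802.15.4 CCM*
header = b"frame-header"           # authenticated but not encrypted
payload = b"sensor reading: 21.5C"

ciphertext = aead.encrypt(nonce, payload, header)
assert aead.decrypt(nonce, ciphertext, header) == payload
```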
  • 27.
    Rost, Mattias
    RISE, Swedish ICT, SICS.
    Mobility is the Message: Experiments with Mobile Media Sharing2013Doctoral thesis, monograph (Other academic)
    Abstract [en]

    This thesis explores new mobile media sharing applications by building, deploying, and studying their use. While we share media in many different ways, both on the web and on mobile phones, there are few ways of sharing media with people physically near us. Three systems that were designed and built are studied: Push!Music, Columbus, and Portrait Catalog, as well as a fourth, commercially available system, Foursquare. This thesis offers four contributions: First, it explores the design space of co-present media sharing through the four test systems. Second, through user studies of these systems, it reports on how they come to be used. Third, it explores new ways of conducting trials as the technical mobile landscape has changed. Last, we look at how the technical solutions demonstrate lines of thinking different from how similar solutions might look today. Through a Human-Computer Interaction methodology of design, build, and study, we look at the systems through the lens of embodied interaction and examine how they come to be in use. Using Goffman's understanding of social order, we see how these mobile media sharing systems allow people to actively present themselves through media. In turn, using McLuhan's way of understanding media, we reflect on how these new systems enable a new type of medium, distinct from web-centric media, and how this relates directly to mobility. While media sharing takes place everywhere in western society, it is still tied to the way media is shared through computers, which, although often mobile, do not take the mobile setting into account. The systems in this thesis treat mobility as an opportunity for design. It remains to be seen how mobile media sharing will come to present itself in people's everyday life and, when it does, how we will come to understand it and how it will transform society as a medium distinct from those before. This thesis gives a glimpse of what this future will look like.

    Download full text (pdf)
    FULLTEXT01
  • 28.
    Sadighi, Babak
    RISE, Swedish ICT, SICS.
    Decentralised Privilege Management for Access Control2005Doctoral thesis, monograph (Other academic)
    Abstract [en]

    The Internet and more recent technologies such as web services, grid computing, utility computing and peer-to-peer computing have created possibilities for very dynamic collaborations and business transactions where information and computational resources may be accessed and shared among autonomous and administratively independent organisations. In these types of collaborations, there is no single authority who can define access policies for all the shared resources. More sophisticated mechanisms are needed to enable flexible administration and enforcement of access policies. The challenge is to develop mechanisms that preserve a high level of control over the administration and the enforcement of policies, whilst supporting the required administrative flexibility. We introduce two new frameworks to address this issue. In the first part of the thesis we develop a formal framework and an associated calculus for delegation of administrative authority, within and across organisational boundaries, with possibilities to define various restrictions on propagation and revocation. The extended version of the framework allows reasoning with named groups of users, objects, and actions, and a specific subsumes relation between these groups. We also extend current discretionary access control models with the concept of ability, as a way of specifying when a user is able to perform an action even though not permitted to do so. This feature allows us to model detective access control (unauthorised accesses are logged for post-validation resulting in recovery and/or punitive actions) in addition to traditional preventative access control (providing mechanisms that guarantee no unauthorised access can take place). Detective access control is useful when prevention is either physically or economically impossible, or simply undesirable for one reason or another. In the second part of the thesis, we develop a formal framework for contract-based access control to shared resources among independent organisations. We introduce the notion of entitlement in the context of access control models as an access permission supported by an obligation agreed in a contract between the access requester and the resource provider. The framework allows us to represent the obligations in a contract in a structured way and to reason about their fulfilments and violations.
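
    The flavor of the delegation framework can be suggested with a toy chain check: a permission is effective only if an unrevoked chain of delegations, each intermediate link allowed to redelegate, leads back to the root authority. All names and the simple validity rule below are my own illustration, far simpler than the thesis calculus (which also covers groups and restricted revocation).

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Delegation:
    grantor: str
    grantee: str
    permission: str
    redelegable: bool   # may the grantee pass the permission on?

def has_permission(user, permission, root, delegations, revoked):
    """Search for a valid chain root -> ... -> user (assumes no cycles)."""
    def reachable(holder):
        if holder == root:
            return True
        for d in delegations:
            if (d.grantee == holder and d.permission == permission
                    and d not in revoked
                    and (d.grantee == user or d.redelegable)  # intermediates must redelegate
                    and reachable(d.grantor)):
                return True
        return False
    return reachable(user)

chain = [Delegation("root", "alice", "read", True),
         Delegation("alice", "bob", "read", False)]
print(has_permission("bob", "read", "root", chain, revoked=set()))  # True
```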

  • 29.
    Saulsbury, Ashley
    RISE - Research Institutes of Sweden, ICT, SICS.
    Attacking latency bottlenecks in distributed shared memory systems1999Doctoral thesis, monograph (Other academic)
  • 30.
    Shafaat, Tallat M.
    RISE, Swedish ICT, SICS, Computer Systems Laboratory. School of Information and Communication Technology.
    Partition Tolerance and Data Consistency in Structured Overlay Networks2013Doctoral thesis, monograph (Other academic)
    Abstract [en]

    Structured overlay networks form a major class of peer-to-peer systems, which are used to build scalable, fault-tolerant and self-managing distributed applications. This thesis presents algorithms for structured overlay networks, on the routing and data levels, in the presence of network and node dynamism. On the routing level, we provide algorithms for maintaining the structure of the overlay and for handling extreme churn scenarios such as bootstrapping, network partitions and mergers. Since any long-lived Internet-scale distributed system is destined to face network partitions, we believe structured overlays should intrinsically be able to handle partitions and mergers. In this thesis, we discuss mechanisms for detecting a network partition and merger, and provide algorithms for merging multiple ring-based overlays. Next, we present a decentralized algorithm for estimating the number of nodes in a peer-to-peer system. Lastly, we discuss the causes of routing anomalies (lookup inconsistencies), their effect on data consistency, and mechanisms on the routing level that reduce data inconsistency. On the data level, we provide algorithms for achieving strong consistency and partition tolerance in structured overlays. Based on our solutions on the routing and data levels, we build a distributed key-value store for dynamic, partially synchronous networks, which is linearizable, self-managing, elastic, and exhibits unlimited linear scalability. Finally, we present a replication scheme for structured overlays that is less sensitive to churn than existing schemes, and that allows different replication degrees for different key ranges, which enables using a higher number of replicas for hotspots and critical data.

    Download full text (pdf)
    FULLTEXT01
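
    For the node-counting problem mentioned above, a classic gossip-based estimator works by averaging: one node starts with value 1, all others with 0, and repeated pairwise averaging drives every local value toward 1/N, so each node can estimate N as the reciprocal of its value. The sketch below uses uniform random peer selection as a stand-in for a peer-sampling service and is not necessarily the exact estimator developed in the thesis.

```python
import random

def estimate_size(n_nodes, rounds=50):
    """Gossip averaging: values converge to 1/N, so 1/value estimates N."""
    values = [0.0] * n_nodes
    values[0] = 1.0                      # exactly one node starts with 1
    for _ in range(rounds):
        for u in range(n_nodes):
            v = random.randrange(n_nodes)   # stand-in for peer sampling
            avg = (values[u] + values[v]) / 2
            values[u] = values[v] = avg     # pairwise average preserves the sum
    return [1.0 / x for x in values]

print(estimate_size(100)[:3])   # each entry should be close to 100
```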
  • 31.
    Simsarian, Kristian
    RISE, Swedish ICT, SICS. Department of Numerical Analysis and Computing Science.
    Toward human-robot collaboration2000Doctoral thesis, monograph (Other academic)
    Download full text (pdf)
    FULLTEXT01
  • 32.
    Sjödin, Peter
    RISE, Swedish ICT, SICS.
    From LOTOS specifications to distributed implementations1992Doctoral thesis, monograph (Other academic)
    Abstract [en]

    This thesis presents a technique for the automatic generation of implementations from formal LOTOS specifications. The technique focuses on implementations executing in distributed systems where processes communicate by asynchronous message passing. Implementations are specified in an executable subset of LOTOS. Such specifications are translated into implementations that execute supported by a run-time system. Implementations are described in LOTOS. The run-time system contains a distributed synchronizer, which implements a distributed algorithm for process synchronization, including synchronization with the environment. The implementation technique is shown to give correct implementations by proving that the distributed synchronizer is testing equivalent to a simple, centralized synchronizer, and that implementations, together with the centralized synchronizer, are testing equivalent to their specification. A new proof method, based on induction on transitions, is used for proving testing equivalence. The method is applicable to a subset of non-divergent specifications. Performance and improvements of the distributed algorithm are discussed.
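
    The centralized synchronizer that serves as the correctness reference can be pictured as a small gate-matching service: processes post offers on gates, and an action fires once every required participant has offered. The sketch below is my own toy rendering of that idea, not the LOTOS run-time system itself.

```python
from collections import defaultdict

class Synchronizer:
    def __init__(self, participants_per_gate):
        self.needed = participants_per_gate   # e.g. {"g": {"P", "Q"}}
        self.offers = defaultdict(set)

    def offer(self, process, gate):
        """A process offers to synchronize on `gate`; fire when all arrive."""
        self.offers[gate].add(process)
        if self.offers[gate] >= self.needed[gate]:   # all participants present
            self.offers[gate].clear()
            return f"synchronized on {gate}"
        return "waiting"

sync = Synchronizer({"g": {"P", "Q"}})
print(sync.offer("P", "g"))   # waiting
print(sync.offer("Q", "g"))   # synchronized on g
```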

  • 33.
    Steinert, Rebecca
    RISE, Swedish ICT, SICS, Decisions, Networks and Analytics lab.
    Probabilistic Fault Management in Networked Systems2014Doctoral thesis, monograph (Other academic)
    Abstract [en]

    Technical advances in network communication systems (e.g. radio access networks) combined with evolving concepts based on virtualization (e.g. clouds) require new management algorithms in order to handle the increasing complexity in network behavior and variability in the network environment. Current network management operations are primarily centralized and deterministic, and are carried out via automated scripts and manual interventions, which work for mid-sized and fairly static networks. The next generation of communication networks and systems will be of significantly larger size and complexity, and will require scalable and autonomous management algorithms in order to meet operational requirements on reliability, failure resilience, and resource-efficiency. A promising approach to address these challenges is the development of probabilistic management algorithms, following three main design goals. The first goal relates to all aspects of scalability, ranging from efficient usage of network resources to computational efficiency. The second goal relates to adaptability in keeping the models up-to-date so that they accurately reflect the network state. The third goal relates to reliability in the algorithm performance, in the sense of improved performance predictability and simplified algorithm control. This thesis is about probabilistic approaches to fault management that follow the concepts of probabilistic network management (PNM). An overview of existing network management algorithms and methods in relation to PNM is provided. The concepts of PNM and the implications of employing PNM algorithms are presented and discussed. Moreover, some of the practical differences between using a probabilistic fault detection algorithm and a deterministic method are investigated. Further, six probabilistic fault management algorithms that implement different aspects of PNM are presented. The algorithms are highly decentralized, adaptive and autonomous, and cover several problem areas, such as probabilistic fault detection and controllable detection performance; distributed and decentralized change detection in modeled link metrics; root-cause analysis in virtual overlays; event-correlation and pattern mining in data logs; and probabilistic failure diagnosis. The probabilistic models (largely based on Bayesian parameter estimation) are memory-efficient and can be used and re-used for multiple purposes, such as performance monitoring, detection, and self-adjustment of the algorithm behavior.
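
    As a concrete instance of the Bayesian parameter estimation mentioned above, the sketch below maintains a Beta posterior over a link's probe-loss probability and raises an alarm when the posterior mean crosses a threshold. The prior, the alarm rule and the threshold are illustrative assumptions, not the detectors specified in the thesis.

```python
class LinkMonitor:
    def __init__(self, alpha=1.0, beta=1.0):
        self.alpha, self.beta = alpha, beta   # Beta prior over loss probability

    def observe(self, probe_lost):
        """Update the posterior with one probe outcome (True = lost)."""
        if probe_lost:
            self.alpha += 1
        else:
            self.beta += 1

    def loss_estimate(self):
        return self.alpha / (self.alpha + self.beta)   # posterior mean

    def alarm(self, threshold=0.1):
        return self.loss_estimate() > threshold        # illustrative rule

m = LinkMonitor()
for lost in [False, False, True, True, True]:
    m.observe(lost)
print(m.loss_estimate(), m.alarm())
```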

  • 34.
    Ståhl, Anna
    RISE, Swedish ICT, SICS, Computer Systems Laboratory.
    Designing for Interactional Empowerment2015Doctoral thesis, monograph (Other academic)
    Abstract [en]

    This thesis further defines how to reach Interactional Empowerment through design for users. Interactional Empowerment is an interaction design program within the general area of affective interaction, focusing on the users' ability to reflect, express themselves and engage in profound meaning-making. This has been explored through the design of three systems, eMoto, Affective Diary and Affective Health, which all mirror users' emotions or bodily reactions in interaction in some way. From these design processes and users' encounters with the systems I have extracted one experiential quality, Evocative Balance, and several embryos of experiential qualities. Evocative Balance refers to interaction experiences in which familiarity and resonance with lived experience are balanced with suggestiveness and openness to interpretation. The development of the concept of evocative balance is reported over the course of the three significant design projects, each exploring aspects of Interactional Empowerment in terms of representing bodily experiences in reflective and communicative settings. By providing accounts of evocative balance in play in the three projects, analyzing a number of other relevant interaction design experiments, and discussing evocative balance in relation to existing concepts within affective interaction, we offer a multi-grounded construct that can be appropriated by other interaction design researchers and designers. This thesis aims to mirror a designerly way of working, which is recognized by its multi-groundedness, its focus on the knowledge that resides in the design process, and its slightly different approach to the view of knowledge, its extension and its rigour. It provides a background to the state of the art in the design community and exemplifies these theoretical standpoints in the design processes of the three design cases. This practical example of how to extend a designer's knowledge can serve as an example for design researchers working in a similar way.

  • 35.
    Tsiftes, Nicolas
    RISE, Swedish ICT, SICS, Computer Systems Laboratory.
    Storage-Centric System Architectures for Networked, Resource-Constrained Devices2015Doctoral thesis, monograph (Other academic)
  • 36.
    Täckström, Oscar
    RISE, Swedish ICT, SICS. Department of Linguistics and Philology.
    Predicting Linguistic Structure with Incomplete and Cross-Lingual Supervision2013Doctoral thesis, monograph (Other academic)
    Abstract [en]

    Contemporary approaches to natural language processing are predominantly based on statistical machine learning from large amounts of text, which has been manually annotated with the linguistic structure of interest. However, such complete supervision is currently only available for the world's major languages, in a limited number of domains and for a limited range of tasks. As an alternative, this dissertation considers methods for linguistic structure prediction that can make use of incomplete and cross-lingual supervision, with the prospect of making linguistic processing tools more widely available at a lower cost. An overarching theme of this work is the use of structured discriminative latent variable models for learning with indirect and ambiguous supervision; as instantiated, these models admit rich model features while retaining efficient learning and inference properties. The first contribution to this end is a latent-variable model for fine-grained sentiment analysis with coarse-grained indirect supervision. The second is a model for cross-lingual word-cluster induction and the application thereof to cross-lingual model transfer. The third is a method for adapting multi-source discriminative cross-lingual transfer models to target languages, by means of typologically informed selective parameter sharing. The fourth is an ambiguity-aware self- and ensemble-training algorithm, which is applied to target language adaptation and relexicalization of delexicalized cross-lingual transfer parsers. The fifth is a set of sequence-labeling models that combine constraints at the level of tokens and types, and an instantiation of these models for part-of-speech tagging with incomplete cross-lingual and crowdsourced supervision. In addition to these contributions, comprehensive overviews are provided of structured prediction with no or incomplete supervision, as well as of learning in the multilingual and cross-lingual settings. Through careful empirical evaluation, it is established that the proposed methods can be used to create substantially more accurate tools for linguistic processing, compared both to unsupervised methods and to recently proposed cross-lingual methods. The empirical support for this claim is particularly strong in the latter case; our models for syntactic dependency parsing and part-of-speech tagging achieve the hitherto best published results for a large number of target languages, in the setting where no annotated training data is available in the target language.

    Download full text (pdf)
    FULLTEXT01
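
    The token- and type-level constraints can be illustrated with a deliberately tiny tagger: a type-level tag dictionary prunes each word's candidate labels before decoding. The dictionary, the scoring function and the greedy decoder below are stand-ins for the structured discriminative models actually used in the thesis.

```python
TAG_DICT = {"the": {"DET"}, "dog": {"NOUN"}, "runs": {"VERB", "NOUN"}}
ALL_TAGS = {"DET", "NOUN", "VERB"}

def score(word, tag, prev_tag):
    # Toy score: prefer VERB after NOUN; a real model would be learned.
    return 1.0 if (prev_tag == "NOUN" and tag == "VERB") else 0.5

def tag_sentence(words):
    prev, out = "<s>", []
    for w in words:
        candidates = TAG_DICT.get(w, ALL_TAGS)   # type-level constraint
        best = max(candidates, key=lambda t: score(w, t, prev))
        out.append(best)
        prev = best
    return out

print(tag_sentence(["the", "dog", "runs"]))   # ['DET', 'NOUN', 'VERB']
```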
  • 37.
    Voigt, Thiemo
    RISE, Swedish ICT, SICS, Computer Systems Laboratory.
    Architectures for service differentiation in overloaded Internet servers2002Doctoral thesis, monograph (Other academic)
    Abstract [en]

    Web servers become overloaded when one or several server resources such as the network interface, CPU and disk become overutilized. Server overload leads to low server throughput and long response times experienced by the clients. Traditional server design includes only marginal or no support for overload protection. This thesis presents the design, implementation and evaluation of architectures that provide overload protection and service differentiation in web servers. During server overload not all requests can be processed in a timely manner. Therefore, it is desirable to perform service differentiation, i.e., to service requests that are regarded as more important than others. Since requests that are eventually discarded also consume resources, admission control should be performed as early as possible in the lifetime of a web transaction. Depending on the workload, some server resources can be overutilized while the demand on other resources is low, because certain types of requests utilize one resource more than others. The implementation of admission control in the kernel of the operating system shows that this approach is more efficient and scalable than implementing the same scheme in user space. We also present an admission control architecture that performs admission control based on the current server resource utilization combined with knowledge about the resource consumption of requests. Experiments demonstrate more than 40% higher throughput during overload compared to a standard server, and response times that are several orders of magnitude lower. This thesis also presents novel architectures and implementations of operating system support for predictable service guarantees. The Nemesis operating system provides applications with a guaranteed communication service using the developed TCP/IP implementation and the scheduling of server resources. SILK (Scout in the Linux kernel) is a new networking stack for the Linux operating system that is based on the Scout operating system. Experiments show that SILK enables prioritizing and other forms of service differentiation between network connections while running unmodified Linux applications.

    Download full text (pdf)
    fulltext
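
    The utilization-aware admission control described above can be sketched as a simple gate: estimate each resource's utilization after accepting a request, reject low-priority requests that would push any resource past a limit, and let high-priority requests through. The cost table, threshold and priority rule below are my illustrative assumptions, not the kernel-level implementation evaluated in the thesis.

```python
RESOURCE_LIMIT = 0.9          # reject when any resource would exceed 90%
COST = {"static": {"cpu": 0.01, "disk": 0.02},    # assumed per-request costs
        "dynamic": {"cpu": 0.05, "disk": 0.01}}

def admit(request_type, priority, utilization):
    """Admit iff projected utilization stays under the limit, or priority is high."""
    projected = {r: utilization[r] + COST[request_type][r] for r in utilization}
    if all(u <= RESOURCE_LIMIT for u in projected.values()):
        return True
    return priority == "high"   # service differentiation under overload

print(admit("dynamic", "low",  {"cpu": 0.88, "disk": 0.30}))  # False: CPU would exceed
print(admit("dynamic", "high", {"cpu": 0.88, "disk": 0.30}))  # True: prioritized
```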