Requirements prioritization plays an important role in driving project success during software development. The literature reveals that existing requirements prioritization approaches ignore vital factors such as interdependencies between requirements. Existing requirements prioritization approaches are also generally time-consuming and involve substantial manual effort. Moreover, these approaches show substantial limitations in terms of the number of requirements they can consider. There is some evidence suggesting that models could have a useful role in the analysis of requirements interdependencies and their visualization, contributing towards the improvement of the overall requirements prioritization process. However, to date, only a handful of studies have focused on model-based strategies for requirements prioritization, considering only conflict-free functional requirements. This paper uses a meta-model-based approach to help the requirements analyst model the requirements, stakeholders, and interdependencies between requirements. The model instance is then processed by our modified PageRank algorithm to prioritize the given requirements. An experiment was conducted, comparing our modified PageRank algorithm's efficiency and accuracy with five existing requirements prioritization methods. We also compared our results with a baseline prioritized list of 104 requirements prepared by 28 graduate students. Our results show that our modified PageRank algorithm was able to prioritize the requirements more effectively and efficiently than the other prioritization methods.
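To make the idea concrete, the following is a minimal sketch of a PageRank-style ranking over a requirements dependency graph, where a requirement that many others depend on accumulates a higher score. The example requirements, dependency relation, damping factor, and iteration count are illustrative assumptions, not the paper's actual modified algorithm.

# Sketch only: PageRank-style ranking over a requirements dependency graph.
# The requirements, dependencies and parameters below are illustrative.
def prioritize(requirements, depends_on, damping=0.85, iterations=100):
    """Rank requirements so that a requirement many others depend on scores high."""
    n = len(requirements)
    rank = {r: 1.0 / n for r in requirements}
    for _ in range(iterations):
        new_rank = {r: (1.0 - damping) / n for r in requirements}
        for r in requirements:
            # An edge r -> d means r depends on d, so r "votes" for d.
            targets = depends_on.get(r) or requirements  # dangling node: spread evenly
            share = damping * rank[r] / len(targets)
            for d in targets:
                new_rank[d] += share
        rank = new_rank
    return sorted(requirements, key=rank.get, reverse=True)

reqs = ["R1", "R2", "R3", "R4"]
deps = {"R2": ["R1"], "R3": ["R1", "R2"], "R4": ["R2"]}
print(prioritize(reqs, deps))  # R1 and R2, which others depend on, rank first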
The use of requirements information in testing is a well-recognized practice in the software development life cycle. The literature reveals that existing test prioritization and selection approaches neglect vital factors affecting test priorities, such as interdependencies between requirement specifications. We believe that models may play a positive role in specifying these interdependencies and prioritizing tests based on them. However, to date, few studies make use of requirements interdependencies for test case prioritization. This paper uses a meta-model to aid in modeling requirements, their related tests, and the interdependencies between them. The instance of this meta-model is then processed by our modified PageRank algorithm to prioritize the requirements. The requirement priorities are then propagated to the related test cases in the test model, and test cases are selected based on coverage of extra-functional properties. We demonstrate the applicability of our proposed approach on a small example case.
Short-term traffic prediction allows Intelligent Transport Systems to proactively respond to events before they happen. With the rapid increase in the amount, quality, and detail of traffic data, new techniques are required that can exploit the information in the data in order to provide better results while being able to scale and cope with increasing amounts of data and growing cities. We propose and compare three models for short-term road traffic density prediction based on Long Short-Term Memory (LSTM) neural networks. We have trained the models using real traffic data collected by the Motorway Control System in Stockholm, which monitors highways and collects flow and speed data per lane every minute from radar sensors. In order to deal with the challenge of scale and to improve prediction accuracy, we propose to partition the road network into road stretches and junctions, and to model each of the partitions with one or more LSTM neural networks. Our evaluation results show that partitioning of roads improves the prediction accuracy by reducing the root mean square error by a factor of 5. We show that we can reduce the complexity of the LSTM network by limiting the number of input sensors, on average to 35% of the original number, without compromising the prediction accuracy.
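As an illustration of the per-partition modeling, the sketch below trains one small LSTM network on a single road-stretch partition, assuming a matrix of per-minute sensor readings and a one-step-ahead density target. The shapes, layer sizes, and training settings are placeholder assumptions, not the configuration evaluated in the paper.

# Sketch only: one LSTM model per road-network partition.
# Data shapes and hyper-parameters are illustrative placeholders.
import numpy as np
import tensorflow as tf

lookback, n_sensors = 30, 12                      # 30 minutes of history, 12 sensors
X = np.random.rand(1000, lookback, n_sensors).astype("float32")
y = np.random.rand(1000, 1).astype("float32")     # density one minute ahead

model = tf.keras.Sequential([
    tf.keras.layers.LSTM(64, input_shape=(lookback, n_sensors)),
    tf.keras.layers.Dense(1),                     # predicted density for the partition
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=5, batch_size=32, verbose=0)
print(model.predict(X[:1], verbose=0))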
Real-time road congestion detection allows improving traffic safety and route planning. In this work, we propose to use streaming graph processing algorithms for road congestion detection and evaluate their accuracy and performance. We represent road infrastructure sensors in the form of a directed weighted graph and adapt the Connected Components algorithm and some existing graph processing algorithms, originally used for community detection in social network graphs, to the task of road congestion detection. In our approach, we detect Connected Components or communities of sensors with similarly weighted edges that reflect different traffic states, e.g., free flow or congestion, in the regions covered by the detected sensor groups. We have adapted and implemented the Connected Components and community detection algorithms for detecting groups in the weighted sensor graphs in both batch and streaming manner. We evaluate our approach by building and processing the road infrastructure sensor graph for Stockholm's highways using real-world data from the Motorway Control System operated by the Swedish traffic authority. Our results indicate that the Connected Components and DenGraph community detection algorithms can detect congestion with an accuracy of up to approximately 94% and 88%, respectively. The Louvain Modularity algorithm for community detection fails to detect congestion regions for the sparsely connected graphs representing the roads considered in this study. The Hierarchical Clustering algorithm using speed and density readings is able to detect congestion, but without finer details such as shockwaves.
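The following sketch illustrates the core step of the connected-components variant: keep only sensor-to-sensor edges whose measured speed is below a congestion threshold and report each remaining weakly connected component as a congested region. The toy graph, edge weights, and the 30 km/h threshold are assumptions for illustration.

# Sketch only: congestion regions as connected components of "slow" edges.
import networkx as nx

G = nx.DiGraph()
# Edges follow the driving direction; weight = average speed between sensors (km/h).
G.add_weighted_edges_from([
    ("s1", "s2", 15), ("s2", "s3", 20),   # slow segment
    ("s3", "s4", 80), ("s4", "s5", 85),   # free flow
])

CONGESTION_SPEED = 30  # km/h, illustrative threshold
slow_edges = [(u, v) for u, v, w in G.edges(data="weight") if w < CONGESTION_SPEED]
for region in nx.weakly_connected_components(G.edge_subgraph(slow_edges)):
    print("congested region:", region)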
We point out the risks of protecting relational databases via Searchable Symmetric Encryption (SSE) schemes by proposing an inference attack exploiting the structural properties of relational databases. We show that record-injection attacks mounted on relational databases have worse consequences than their file-injection counterparts on unstructured databases. Moreover, we discuss some techniques to reduce the effectiveness of inference attacks exploiting the access pattern leakage existing in SSE schemes. To the best of our knowledge, this is the first work that investigates the security of relational databases protected by SSE schemes.
Searchable symmetric encryption (SSE) schemes are commonly proposed to enable search over protected unstructured documents such as email archives or any set of sensitive text files. However, some SSE schemes have recently been proposed to protect relational databases. Most previous attacks on SSE schemes have targeted only their common use case, protecting unstructured data. In this work, we propose a new inference attack on relational databases protected via SSE schemes. Our inference attack enables a passive adversary with only basic knowledge about the meta-data of the target relational database to recover the attribute names of some observed queries. This violates query privacy, since the attribute name of a query is secret.
Large network operators have thousands or tens of thousands of access aggregation links that they need to manage and dimension properly. Measuring and understanding the traffic characteristics on these types of links is therefore essential. What do the traffic intensity characteristics look like on different timescales, from days down to milliseconds? How do the characteristics differ if we compare links with the same capacity but with different types of clients and access technologies? How do the traffic characteristics differ from those on core network links? These are the types of questions we set out to investigate in this paper. We present the results of packet-level measurements on three different 1 Gbit/s aggregation links in an operational IP network. We see large differences in traffic characteristics between the three links. We observe highly skewed link load probability densities on timescales relevant for buffering (i.e., 10 milliseconds). We demonstrate the existence of large traffic spikes on short timescales (10-100 ms) and show their impact on link delay. We also find that these traffic bursts are often caused by only one or a few IP flows.
Connected vehicles can make road traffic safer and more efficient, but require the mobile networks to handle time-critical applications. Using the MONROE mobile broadband measurement testbed, we conduct a multi-access measurement study on buses. The objective is to understand what network performance connected vehicles can expect in today's mobile networks, in terms of transaction times and availability. The goal is also to understand to what extent access to several operators in parallel can improve communication performance. In our measurement experiments we repeatedly transfer warning messages from moving buses to a stationary server. We triplicate the messages and always perform three transactions in parallel over three different cellular operators. This creates a dataset with which we can compare the operators in an objective way and study the potential for multi-access. In this paper we use the triple-access dataset to evaluate single-access selection strategies, where one operator is chosen for each transaction. We show that if we have access to three operators and, for each transaction, choose the operator with the best access technology and best signal quality, then we can significantly improve availability and transaction times compared to the individual operators. The median transaction time improves by 6% compared to the best single operator and by 61% compared to the worst single operator. The 90th-percentile transaction time improves by 23% compared to the best single operator and by 65% compared to the worst single operator.
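A simple way to express the studied selection rule is sketched below: for each transaction, pick the operator with the best access technology and break ties on signal quality. The technology ranking and the sample readings are illustrative assumptions.

# Sketch only: single-access selection by access technology, then signal quality.
TECH_RANK = {"LTE": 3, "HSPA": 2, "EDGE": 1}      # illustrative ordering

def select_operator(readings):
    """readings: list of (operator, technology, signal_dbm) tuples."""
    return max(readings, key=lambda r: (TECH_RANK.get(r[1], 0), r[2]))[0]

sample = [("op_A", "LTE", -95), ("op_B", "LTE", -80), ("op_C", "HSPA", -70)]
print(select_operator(sample))   # op_B: same technology as op_A, stronger signal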
Dynamic Transfer Mode (DTM) is a ring-based MAN technology that provides a channel abstraction with a dynamically adjustable capacity. TCP is a reliable end-to-end transport protocol capable of adjusting its rate. The primary goal of this work is to investigate the coupling of dynamically allocating bandwidth to TCP flows with the effect this has on the congestion control mechanism of TCP. In particular, we wanted to find scenarios where this scheme does not work, where either all the link capacity is allocated to TCP or congestion collapse occurs and no capacity is allocated to TCP. We have created a simulation environment using ns-2 to investigate TCP over networks with a variable-capacity link. We begin with a single TCP Tahoe flow over a fixed-bandwidth link and progressively add more complexity to understand the behaviour of dynamically adjusting link capacity to TCP and vice versa.
Three-dimensional interactive audio has a variety of potential uses in human-machine interfaces. After lagging seriously behind the visual components, the importance of sound is now becoming increasingly accepted. This paper mainly discusses the background and techniques needed to implement three-dimensional audio in computer interfaces. A case study of a system for three-dimensional audio, implemented by the author, is described in detail. The audio system was moreover integrated with a virtual reality system, and conclusions from user tests and use of the audio system are presented along with proposals for future work at the end of the paper. The thesis begins with a definition of three-dimensional audio and a survey of the human auditory system, to give the reader the knowledge needed to understand what three-dimensional audio is and how human auditory perception works.
FlyZone is a testbed architecture to experiment with aerial drone applications. Unlike most existing drone testbeds that focus on low-level mechanical control, FlyZone offers a high-level API and features geared towards experimenting with application-level functionality. These include the emulation of environment influences, such as wind, and the automatic monitoring of developer-provided safety constraints, for example, to mimic obstacles. We conceive novel solutions to achieve this functionality, including a hardware/software architecture that maximizes decoupling from the main application and a custom visual localization technique expressly designed for testbed operation. We deploy two instances of FlyZone and study performance and effectiveness. We demonstrate that we realistically emulate the environment influence with a positioning error bound by the size of the smallest drone we test, that our localization technique provides a root mean square error of 9.2cm, and that detection of violations to safety constraints happens with a 50ms worst-case latency. We also report on how FlyZone supported developing three real-world drone applications, and discuss a user study demonstrating the benefits of FlyZone compared to drone simulators.
Cyberphysical systems (CPS) integrate embedded sensors, actuators, and computing elements for controlling physical processes. Due to the intimate interactions with the surrounding environment, CPS software must continuously adapt to changing conditions. Enacting adaptation decisions is often subject to strict time requirements to ensure control stability, while CPS software must operate within the tight resource constraints that characterize CPS platforms. Developers are typically left without dedicated programming support to cope with these aspects. As a result, they either neglect functional or timing issues that may potentially arise, or invest significant effort to implement hand-crafted solutions. We provide programming constructs that allow developers to simplify the specification of adaptive processing and to rely on well-defined time semantics. Our evaluation shows that using these constructs simplifies implementations while reducing developers' effort, at the price of a modest memory and processing overhead.
We present design concepts, programming constructs, and automatic verification techniques to support the development of adaptive Wireless Sensor Network (WSN) software. WSNs operate at the interface between the physical world and the computing machine, and are hence exposed to unpredictable environment dynamics. WSN software must adapt to these dynamics to maintain dependable and efficient operation. While significant literature exists on the necessary adaptation logic, developers are left without proper support in materializing such a logic in a running system. Our work fills this gap with three key contributions: i) design concepts help developers organize the necessary adaptive functionality and understand their relations, ii) dedicated programming constructs simplify the implementations, iii) custom verification techniques allow developers to check the correctness of their design before deployment. We implement dedicated tool support to tie the three contributions, facilitating their practical application. Our evaluation considers representative WSN applications to analyze code metrics, synthetic simulations, and cycle-accurate emulation of popular WSN platforms. The results indicate that our work is effective in simplifying the development of adaptive WSN software; for example, implementations are provably easier to test and to maintain, the run-time overhead of our dedicated programming construct is negligible, and our verification techniques return results in a matter of seconds.
In this paper, a practical non-stationary three-dimensional (3-D) channel model for massive multiple-input multiple-output (MIMO) systems, considering beam patterns for different antenna elements, is proposed. The beam patterns, obtained using dipole antenna elements with different phase excitations toward different directions of travel (DoTs), contribute various correlation weights to the rays related to/from the cluster, thus providing different elevation angles of arrival (EAoAs) and elevation angles of departure (EAoDs) for each antenna element. We also include the movement of the user, which makes our channel a non-stationary model of clusters at the receiver (RX) on both the time and array axes. In addition, the impact of these factors on 3-D massive MIMO channels is investigated via statistical properties, including received spatial correlation. Additionally, the impact of elevation/azimuth angles of arrival on received spatial correlation is discussed. Furthermore, the proposed 3-D channel models are experimentally validated with respect to the azimuth and elevation angles of the polarized antenna, and are evaluated and compared through simulations. The proposed 3-D generic models are verified using relevant measurement data.
Packet loss is an important parameter for dimensioning network links or traffic classes carrying IP telephony traffic. We present a model based on the Markov modulated Poisson process (MMPP) which calculates packet loss probabilities for a set of superposed voice input sources and the specified link properties. We do not introduce another new model to the community; rather, we try to verify one of the existing models via extensive simulation and a real-world implementation. A plethora of excellent research on queueing theory remains in the domain of ATM research, and we attempt to highlight its validity for the IP telephony community. Packet-level simulations show very good correspondence with the predictions of the model. Our main contribution is the verification of the MMPP model with measurements in a laboratory environment. The loss rates predicted by the model are in general close to the measured loss rates and the loss rates obtained with simulation. The general conclusion is that the MMPP-based model is a tool well suited for dimensioning links carrying packetized voice in a system with limited buffer space.
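To illustrate the kind of packet-level simulation used to cross-check such a model, the sketch below superposes on/off voice sources in discrete time slots and feeds them into a finite buffer, counting the fraction of packets lost to overflow. The source count, transition probabilities, service rate, and buffer size are illustrative assumptions, not the parameters of the actual study.

# Sketch only: slot-based simulation of superposed on/off voice sources
# feeding a finite buffer; all parameters are illustrative.
import random

N_SOURCES, P_ON, P_OFF = 50, 0.05, 0.10     # per-slot on/off transition probabilities
SERVICE_PER_SLOT, BUFFER, SLOTS = 30, 20, 200_000

on = [False] * N_SOURCES
queue = arrived = lost = 0
for _ in range(SLOTS):
    for i in range(N_SOURCES):
        on[i] = (random.random() < P_ON) if not on[i] else (random.random() >= P_OFF)
    arrivals = sum(on)                      # one packet per active source and slot
    arrived += arrivals
    lost += max(0, queue + arrivals - BUFFER)
    queue = max(0, min(BUFFER, queue + arrivals) - SERVICE_PER_SLOT)
print("simulated loss probability:", lost / arrived)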
Transmitting telephone calls over the Internet causes problems not present in current telephone technology, such as packet loss and delay due to queueing in routers. In this undergraduate thesis we study how a Markov modulated Poisson process is applied as an arrival process to a multiplexer, and we study the performance in terms of loss probability. The input consists of the superposition of independent voice sources. The predictions of the model are compared with results obtained from simulations of the multiplexer made with a network simulator. The buffer occupancy distribution is also studied, and we see how this distribution changes as the load increases.
In this paper we review previous work on the applicability and performance of Integrated Layer Processing (ILP). ILP has been shown to clearly improve computer communication performance when integrating simple data manipulation functions, but the situation has been less clear for more complex functions and complete systems. We discuss complications when applying ILP to protocol stacks, the requirements ILP places on the communication subsystem, caching aspects, the importance of the processor registers, and a model for predicting the performance of data manipulation functions. We conclude that the main drawback of ILP is its limited applicability to complex data manipulation functions. The performance to expect from an ILP implementation also depends heavily on the protocol architecture and the host system architecture.
The existing Internet ecosystem is a result of decades of evolution. It has managed to scale well beyond the original aspirations. Evolution, though, has highlighted a certain degree of inadequacies that are well documented. In this position paper we present the design considerations for a re-architected global networking architecture which delivers dissemination and non-dissemination objects only to consenting recipients, reduces unwanted traffic, links information producers with consumers independently of the hosts involved, and connects the digital with the physical world. We consider issues ranging from the proposed object identifier/locator split to security and trust as we transition towards a Network of Information, and relate our work to the emerging paradigm of publish/subscribe architectures. We introduce the fundamental components of a Network of Information, i.e., name resolution, routing, storage, and search, and close this paper with a discussion about future work.
We present the latency-aware multipath scheduler ZQTRTT that takes advantage of the multipath opportunities in information-centric networking. The goal of the scheduler is to use the (single) lowest latency path for transaction-oriented flows, and use multiple paths for bulk data flows. A new estimator called zero queue time ratio is used for scheduling over multiple paths. The objective is to distribute the flow over the paths so that the zero queue time ratio is equal on the paths, that is, so that each path is ‘pushed’ equally hard by the flow without creating unwanted queueing. We make an initial evaluation using simulation that shows that the scheduler meets our objectives.
Many current implementations of communication subsystems on workstation-class computers transfer communication data to and from primary memory several times. This is due to software copying between user and operating system address spaces, presentation layer data conversion, and other data manipulation functions. The consequence is that memory bandwidth is one of the major performance bottlenecks limiting high-speed communication on these systems. We propose a communication subsystem architecture with a minimal-copy data path to widen this bottleneck. The architecture is tailored for protocol implementations using Integrated Layer Processing (ILP) and Application Layer Framing (ALF). We choose to implement these protocols in the address space of the application program. We present a new application program interface (API) between the protocols and the communication service in the operating system kernel. The API does not copy data, but instead passes pointers to page-size data buffers. We analyze and discuss ILP loop and cache memory requirements on these buffers. Initial experiments show that the API can increase communication performance by 50% compared to a standard BSD Unix socket interface.
Information-centric networking (ICN), with its design around name-based forwarding and in-network caching, holds great promise to become a key architecture for the future Internet. Many proposed ICN hop-by-hop congestion control schemes assume a fixed and known link capacity, which rarely - if ever - holds true for wireless links. Firstly, we demonstrate that although these congestion control schemes are able to utilise the available wireless link capacity fairly well, they greatly fail to keep the delay low. In fact, they essentially offer the same delay as in the case with no hop-by-hop, only end-to-end, congestion control. Secondly, we show that by complementing these schemes with an easy-to-implement, packet-train capacity estimator, we reduce the delay to a level significantly lower than what is obtained with only end-to-end congestion control, while still being able to keep the link utilisation at a high level.
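As a rough illustration of what such a packet-train estimator could look like, the sketch below derives a capacity estimate from the receive timestamps of back-to-back packets in a train. The function name, the use of the median gap, and the sample numbers are illustrative assumptions rather than the exact estimator evaluated in the paper.

# Sketch only: capacity estimate from packet-train receive timestamps.
import statistics

def estimate_capacity(arrival_times_s, packet_size_bytes):
    gaps = [b - a for a, b in zip(arrival_times_s, arrival_times_s[1:]) if b > a]
    return packet_size_bytes * 8 / statistics.median(gaps)   # bits per second

# Hypothetical train of 1500-byte packets received on a roughly 10 Mbit/s link.
times = [0.0000, 0.0012, 0.0024, 0.0037, 0.0049]
print(f"{estimate_capacity(times, 1500) / 1e6:.1f} Mbit/s")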
Service assurance for cloud applications is a challenging task and an active area of research for academia and industry. One promising approach is to utilize machine learning for service quality prediction and fault detection so that suitable mitigation actions can be executed. In our previous work, we have shown how to predict service-level metrics in real time just from operational data gathered at the server side. This gives the service provider early indications of whether the platform can support the current load demand. This paper provides the logical next step: we extend our work by proposing an automated detection and diagnostic capability for the performance faults manifesting themselves in cloud and datacenter environments. This is a crucial task to maintain the smooth operation of running services and minimize downtime. We demonstrate the effectiveness of our approach, which exploits the interpretative capabilities of Self-Organizing Maps (SOMs) to automatically detect and localize different performance faults for cloud services.
Automated detection and diagnosis of the performance faults in cloud and datacenter environments is a crucial task to maintain smooth operation of different services and minimize downtime. We demonstrate an effective machine learning approach based on detecting metric correlation stability violations (CSV) for automated localization of performance faults for datacenter services running under dynamic load conditions.
The aim of this work is to apply and evaluate different chemometric approaches employing several machine learning techniques in order to characterize the moisture content in biomass from data obtained by Near Infrared (NIR) spectroscopy. The approach comprises three main parts: a) data pre-processing, b) wavelength selection, and c) development of a regression model enabling moisture content measurement. Standard Normal Variate (SNV), Multiplicative Scatter Correction, and Savitzky-Golay first (SG1) and second (SG2) derivatives, as well as their combinations, were applied for data pre-processing. A genetic algorithm (GA) and iterative PLS (iPLS) were used for wavelength selection. Artificial Neural Networks (ANN), Gaussian Process Regression (GPR), Support Vector Regression (SVR), and traditional Partial Least Squares (PLS) regression were employed as machine learning regression methods. Results show that SNV combined with the SG1 derivative performs best in data pre-processing. The GA is the most effective method for wavelength selection, and GPR achieves high accuracy in regression modeling while having low demands on computation time. Overall, the machine learning techniques demonstrate great potential for use in future NIR spectroscopy applications.
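A compact sketch of such a pipeline is given below: SNV scaling of each spectrum, a Savitzky-Golay first derivative, and Gaussian Process Regression on the pre-processed spectra. The random placeholder spectra, window size, and kernel choice are illustrative assumptions, and the wavelength-selection step is omitted.

# Sketch only: SNV + Savitzky-Golay first derivative + GPR on NIR spectra.
import numpy as np
from scipy.signal import savgol_filter
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

def snv(spectra):
    """Standard Normal Variate: centre and scale each spectrum individually."""
    return (spectra - spectra.mean(axis=1, keepdims=True)) / spectra.std(axis=1, keepdims=True)

X_raw = np.random.rand(100, 256)        # 100 placeholder spectra, 256 wavelengths
y = np.random.rand(100)                 # placeholder reference moisture content

X = savgol_filter(snv(X_raw), window_length=11, polyorder=2, deriv=1, axis=1)
model = GaussianProcessRegressor(kernel=RBF() + WhiteKernel()).fit(X, y)
print(model.predict(X[:3]))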
This paper presents the design and implementation of the software for a run-time assurance infrastructure in the E-care@home system. An experimental evaluation is conducted to verify that the run-time assurance infrastructure is functioning correctly, and to enable detecting performance degradation in experimental IoT network deployments within the context of E-care@home.
Transiently-powered computers (TPCs) lay the basis for a battery-less Internet of Things, using energy harvesting and small capacitors to power their operation. This power supply is characterized by extreme variations in supply voltage, as capacitors charge when harvesting energy and discharge when computing. We experimentally find that these variations cause marked fluctuations in clock speed and power consumption, which determine energy efficiency. We demonstrate that it is possible to accurately model and concretely capitalize on these fluctuations. We derive an energy model as a function of supply voltage and develop EPIC, a compile-time energy analysis tool. We use EPIC to substitute for the constant power assumption in existing analysis techniques, giving programmers accurate information on worst-case energy consumption of programs. When using EPIC with existing TPC system support, run-time energy efficiency drastically improves, eventually leading up to a 350% speedup in the time to complete a fixed workload. Further, when using EPIC with existing debugging tools, programmers avoid unnecessary program changes that hurt energy efficiency.
Embedded devices running on ambient energy perform computations intermittently, depending upon energy availability. System support ensures forward progress of programs through state checkpointing in non-volatile memory. Checkpointing is, however, expensive in energy and adds to execution times. To reduce this overhead, we present DICE, a system design that efficiently achieves differential checkpointing in intermittent computing. Distinctive traits of DICE are its software-only nature and its ability to operate only in volatile main memory to determine differentials. DICE works with arbitrary programs using automatic code instrumentation, thus requiring no programmer intervention, and can be integrated with both reactive (Hibernus) and proactive (MementOS, HarvOS) checkpointing systems. By reducing the cost of checkpoints, DICE markedly improves performance. For example, using DICE, Hibernus requires one order of magnitude less time to complete a fixed workload in real-world settings.
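The essence of differential checkpointing can be sketched in a few lines: compare the current main-memory state against the previous checkpoint and persist only the words that changed. The word-level granularity, data layout, and function name are illustrative assumptions, not DICE's actual in-memory diffing design.

# Sketch only: persist just the memory words that changed since the last checkpoint.
def diff_checkpoint(prev_snapshot, current_memory):
    """Return {address: word} for every word that differs from the previous checkpoint."""
    return {
        addr: word
        for addr, (old, word) in enumerate(zip(prev_snapshot, current_memory))
        if old != word
    }

previous = [0x00, 0x1A, 0x2B, 0x3C]
current  = [0x00, 0xFF, 0x2B, 0x3D]
print(diff_checkpoint(previous, current))  # only addresses 1 and 3 go to non-volatile memory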
Many protocols in low-power wireless networks require a leader to bootstrap and maintain their operation. For example, Chaos and Glossy networks need an initiator to synchronize and initiate the communication rounds. Commonly, these protocols use a fixed, compile-time defined node as the leader. In this work, we tackle the challenge of dynamically bootstrapping the network and electing a leader in low-power wireless scenarios.
Many protocols in low-power wireless networks require a root node or a leader to bootstrap and maintain their operation. For example, Chaos and Glossy networks need an initiator to synchronize and initiate the communication rounds. Commonly, these protocols use a fixed, compile-time defined node as the leader. In this work, we tackle the challenge of dynamically bootstrapping the network and electing a leader in low-power wireless scenarios, and we focus on Chaos-style networks.
This is a state of the art report and literature overview of practical methods for constructing and analysing real-time systems. It covers operating system support, monitoring methods, and execution time prediction through simulation.
Testing is the predominant software quality assurance method today, but it has a major flaw --- it cannot reliably catch race conditions, intermittent errors caused by factors that cannot be controlled during testing, such as unpredictable timing behaviour in concurrent software. We present entropy injection, an extension of traditional test methods, which enables developers to create tests for arbitrary types of race conditions in any software application, reusing the application's existing test cases. An entropy injector runs the software under test in an instruction set simulator, where all factors that normally are unpredictable can be explicitly controlled. The injector provokes race condition defects by artificially changing the timing behaviour of the simulated processors, hardware devices, clocks, and input models. Provoked defects can be debugged by developers in a non-intrusive, programmable debugger, which allows race condition defects to be reproduced and provides access to all software state in a distributed system. Developers can use its services to create application-specific injection strategies and directed regression test cases that monitor application state and test specific interleavings of events. Our proof-of-concept entropy injector implementation, Njord, is built on Nornir, a debugger environment based on the complete system simulator Simics. Njord provokes test case failures by suspending simulated processors, thereby injecting delays into the processes of a concurrent application. We demonstrate Njord on a small test routine, and show how a developer can write a race condition regression test that triggers errors with very high probability, or provokes errors with good probability without using application knowledge.
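The sketch below illustrates the underlying principle on a plain multi-threaded program rather than a simulated system: injecting small random delays at a sensitive point makes an otherwise rare lost-update race almost certain to show up in an ordinary test. The shared counter and the delay range are illustrative assumptions; they merely stand in for Njord's suspension of simulated processors.

# Sketch only: perturbing timing to provoke a race that a plain test rarely hits.
import random
import threading
import time

counter = 0

def unsafe_increment():
    global counter
    value = counter
    time.sleep(random.uniform(0, 0.001))   # injected entropy: perturb the timing
    counter = value + 1                    # unsynchronized read-modify-write

threads = [threading.Thread(target=unsafe_increment) for _ in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print("counter =", counter, "(expected 10; a smaller value exposes the race)")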
We present holistic debugging, a novel method for observing execution of complex and distributed software. It builds on an instruction set simulator, which provides reproducible experiments and non-intrusive probing of state in a distributed system. Instruction set simulators, however, provide low-level information, so a holistic debugger contains a translation framework that maps this information to higher abstraction level observation tools, such as source code debuggers. We have created Nornir, a proof-of-concept holistic debugger, built on the simulator Simics. For each observed process in the simulated system, Nornir creates an abstraction translation stack, with virtual machine translators that map machine-level storage contents (e.g. physical memory, registers) provided by Simics, to application-level data (e.g. virtual memory contents) by parsing the data structures of operating systems and virtual machines. Nornir includes a modified version of the GNU debugger (GDB), which supports non-intrusive symbolic debugging of distributed applications. Nornir's main interface is a debugger shepherd, a programmable interface that controls multiple debuggers, and allows users to coherently inspect the entire state of heterogeneous, distributed applications. It provides a robust observation platform for construction of new observation tools.
We present a temporal debugger, capable of examining time flow of soft real-time applications in Unix systems. The debugger is based on a simulator modelling an entire workstation in sufficient detail to run unmodified operating systems and applications. It provides a deterministic and non-intrusive debugging environment, allowing reproducible presentation of program time flow. The primary contribution of this paper is virtual machine translation, a technique necessary to debug applications in a simulated Unix system. We show how a virtual machine translator maps low-level data, provided by the simulator, to data useful to a symbolic debugger. The translator operates by parsing data structures in the target operating system and has been implemented for the GNU debugger and simulated Linux systems.
This paper presents a rationale for structuring a distributed human factors laboratory for future air systems. The term 'distributed' herein refers to two aspects: content and geography. As for content, the laboratory is structured in two levels, namely individual and team. As for geography, the laboratory infrastructure is distributed across three physically separate facilities, namely the Department of Computer and Information Science (IDA) and the Department of Management and Engineering (IEI) at Linköping University, Sweden, and the Competence Center in Manufacturing at the Aeronautics Institute of Technology (ITA), Brazil.
A method for implementing cut in parallel execution of Prolog is presented. It takes advantage of the efficient implementation of cut in the sequential WAM. Although it restricts the parallelism, it is simple and adds only a small overhead over the sequential scheme. The method can be used in parallel execution of Prolog on shared and nonshared memory multiprocessors.
A method for OR parallel execution of Horn clause programs on a shared memory multiprocessor is presented. The shared memory contains only control information that guides processors requesting a job to independently construct the environment required to get a new job from the other processors without degrading performance. Each processor has a local memory containing its own binding environment. This reduces the traffic to the shared memory and allows each processor to process its job with high performance. Each processor is almost the same as the Warren Abstract Machine (WAM). The modifications to the WAM needed to support the method are described in detail. A method for nonshared memory multiprocessor architectures is outlined.
We present the principles of OR-parallel execution of Prolog on a special parallel inference machine, named the BC-machine. The machine is a combined local and shared memory multiprocessor with a special interconnection network. The network allows write operations of an active processor to be broadcast to several idle processors simultaneously. The shared memory is mainly used for sharing some control information among the processors in the system. The amount of shared control information is small and is accessed relatively seldom. The execution model is based on the local address space approach. It allows all the technology developed for standard Prolog to be used without loss of efficiency. We expect that the model substantially reduces the copying overhead in comparison with all previous related models. This reduction is due to our new idea of dynamic load balancing and the broadcast mechanism supported by the network. A prototype system of 9 processors is under construction at the Swedish Institute of Computer Science in Stockholm.
We present a garbage-collection algorithm, suitable for loosely coupled multiprocessor systems, in which the processing elements (PEs) share only the communication medium. The algorithm is global, i.e. it involves all the PEs in the system. It allows space compaction, and it uses a system-wide marking phase to mark all accessible objects, in which a combination of parallel breadth-first/depth-first strategies is used for tracing the object graphs according to a decentralized credit mechanism that regulates the number of garbage-collection messages in the system. The credit mechanism is crucial for determining the space requirement of the garbage-collection messages. A variation of the above algorithm is also presented for systems with high locality of reference. It allows each PE to first perform its local garbage collection, and only invokes the global garbage collection when the space freed by the local collector is insufficient.
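The following sketch conveys the flavour of the credit mechanism: every marking activity carries a credit that bounds how far it may fan out breadth-first; once the credit cannot be split further, marking continues depth-first. The object graph, the initial credit, and the recursive formulation (standing in for marking messages exchanged between PEs) are illustrative assumptions.

# Sketch only: credit-regulated marking mixing breadth-first fan-out with
# depth-first continuation; recursion stands in for marking messages.
def mark(graph, obj, credit, marked):
    if obj in marked:
        return
    marked.add(obj)
    children = [c for c in graph.get(obj, []) if c not in marked]
    if credit > 1 and len(children) > 1:
        share = max(credit // len(children), 1)   # split the credit among branches
        for child in children:
            mark(graph, child, share, marked)
    else:
        for child in children:                    # credit exhausted: go depth-first
            mark(graph, child, 1, marked)

objects = {"a": ["b", "c"], "b": ["d"], "c": ["d", "e"], "e": []}
reachable = set()
mark(objects, "a", credit=4, marked=reachable)
print(reachable)  # {'a', 'b', 'c', 'd', 'e'}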
We present a model for OR parallel execution of Horn clause programs on a combined local and shared memory multiprocessor system. In this model, the shared memory only contains control information that guides processors requesting a job to independently construct the environment required to get a new job. Each processor has a local memory containing its own binding environment. This reduces the traffic to the shared memory and allows each processor to process its job with high performance. Each processor is almost the same as Warren's Abstract Machine (WAM). A method for nonshared memory multiprocessor architectures is outlined. We also present some preliminary results of an experimental investigation of the model.
Structured peer-to-peer overlay networks have recently emerged as a good candidate infrastructure for building novel large-scale and robust Internet applications in which participating peers share computing resources as equals. In the past three years, various structured peer-to-peer overlay networks have been proposed, and probably more are to come. We present a framework for understanding, analyzing, and designing structured peer-to-peer overlay networks. The main objective of the paper is to provide practical guidelines for the design of structured overlay networks by identifying a fundamental element in their construction: the embedding of k-ary trees. A number of effective techniques for maintaining these overlay networks are then discussed. The proposed framework has been effective in the development of the DKS system.
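To illustrate the k-ary tree idea, the sketch below computes, for a small identifier space, the interval offsets that a node keeps one routing pointer for at each level; with N identifiers this yields log_k(N) levels, which is why lookups take a logarithmic number of hops. The space size, k, and routing-table layout are illustrative assumptions rather than DKS's actual tables.

# Sketch only: k-ary interval division of an identifier space, one level per hop.
def routing_levels(space_size, k):
    """Return, per level, the offsets of the k sub-intervals a node keeps pointers for."""
    levels, width = [], space_size
    while width > 1:
        width //= k
        levels.append([i * width for i in range(k)])
    return levels

# A 64-identifier space with k = 4 gives log_4(64) = 3 levels.
for depth, offsets in enumerate(routing_levels(64, 4)):
    print(f"level {depth}: sub-interval offsets {offsets}")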
In this article we outline the details of an ontology, called SmartEnv, proposed as a representational model to assist the development process of smart (i.e., sensorized) environments. The SmartEnv ontology is described in terms of its modules, representing different aspects of a smart environment, including physical and conceptual ones. We propose the use of the Ontology Design Pattern (ODP) paradigm in order to modularize our proposed solution, while at the same time avoiding strong dependencies between the modules, in order to manage the representational complexity of the ontology. The ODP paradigm and related methodologies enable incremental construction of ontologies by first creating and then linking small modules. Most modules (patterns) of the SmartEnv ontology are inspired by, and aligned with, the Semantic Sensor Network (SSN) ontology, but with extra interlinks to provide further precision and cover more representational aspects. The result is a network of 8 ontology patterns together forming a generic representation for a smart environment. The patterns have been submitted to the ODP portal and are available online at stable URIs.
In this position paper we briefly introduce the SmartEnv ontology, which relies on the Semantic Sensor Network (SSN) ontology and is used to represent different aspects of smart and sensorized environments. We also describe the E-care@home project, which aims at providing an IoT-based healthcare system for elderly people in their homes. Furthermore, we discuss the role of SmartEnv in E-care@home and how it needs to be further extended to achieve semantic interoperability, one of the challenges in the development of autonomous healthcare systems at home.
Smart home environments have a significant potential to provide for long-term monitoring of users with special needs in order to promote the possibility to age at home. Such environments are typically equipped with a number of heterogeneous sensors that monitor both health and environmental parameters. This paper presents a framework called E-care@home, consisting of an IoT infrastructure, which provides information with an unambiguous, shared meaning across IoT devices, end-users, relatives, health and care professionals and organizations. We focus on integrating measurements gathered from heterogeneous sources by using ontologies in order to enable semantic interpretation of events and context awareness. Activities are deduced using an incremental answer set solver for stream reasoning. The paper demonstrates the proposed framework using an instantiation of a smart environment that is able to perform context recognition based on the activities and the events occurring in the home.