Publications (10 of 21)
John, W., Balador, A., Taghia, J., Johnsson, A., Sjöberg, J., Marsh, I., . . . Dowling, J. (2023). ANIARA Project - Automation of Network Edge Infrastructure and Applications with Artificial Intelligence. ACM SIGAda Ada Letters, 42(2), 92-95
2023 (English). In: ACM SIGAda Ada Letters, Vol. 42, no. 2, p. 92-95. Article in journal (Refereed). Published.
Abstract [en]

Emerging use cases like smart manufacturing and smart cities pose latency requirements that cannot be satisfied by traditional centralized infrastructure. Edge networks, which bring computational capacity closer to users and clients, are a promising solution for supporting these critical low-latency services. Unlike traditional centralized networks, the edge is distributed by nature and is usually equipped with limited compute capacity. This makes the edge a complex network to operate, subject to failures of different kinds, and novel solutions are needed to make it work in practice. To reduce complexity, edge application technology enablers and advanced infrastructure and application orchestration techniques need to be in place, with AI and ML as key players.

National Category
Communication Systems
Identifiers
urn:nbn:se:ri:diva-66258 (URN), 10.1145/3591335.3591347 (DOI)
Available from: 2023-09-11. Created: 2023-09-11. Last updated: 2023-09-11. Bibliographically approved.
Armgarth, A., Pantzare, S., Arven, P., Lassnig, R., Jinno, H., Gabrielsson, E., . . . Berggren, M. (2021). A digital nervous system aiming toward personalized IoT healthcare. Scientific Reports, 11(1), Article ID 7757.
2021 (English). In: Scientific Reports, E-ISSN 2045-2322, Vol. 11, no. 1, article id 7757. Article in journal (Refereed). Published.
Abstract [en]

Body area networks (BANs), cloud computing, and machine learning are platforms that can potentially enable advanced healthcare outside the hospital. By applying distributed sensors and drug delivery devices on/in our body and connecting to such communication and decision-making technology, a system for remote diagnostics and therapy is achieved with additional autoregulation capabilities. Challenges with such autarchic on-body healthcare schemes relate to integrity and safety, and to the interfacing and transduction of electronic signals into biochemical signals, and vice versa. Here, we report a BAN comprising flexible on-body organic bioelectronic sensors and actuators utilizing two parallel pathways for communication and decision-making. Data, recorded from strain sensors detecting body motion, are both securely transferred to the cloud for machine learning and improved decision-making, and sent through the body using a secure body-coupled communication protocol to auto-actuate delivery of neurotransmitters, all within seconds. We conclude that highly stable and accurate sensing, from multiple sensors, is needed to enable robust decision-making and to limit the frequency of retraining. The holistic platform resembles the self-regulatory properties of the nervous system, i.e., the ability to sense, communicate, decide, and react accordingly, thus operating as a digital nervous system. © 2021, The Author(s).

Place, publisher, year, edition, pages
Nature Research, 2021
National Category
Computer Sciences
Identifiers
urn:nbn:se:ri:diva-52956 (URN), 10.1038/s41598-021-87177-z (DOI), 2-s2.0-85104084403 (Scopus ID)
Note

Funding details: Stiftelsen för Strategisk Forskning, SSF; Funding details: VINNOVA; Funding details: Japan Science and Technology Agency, JST; Funding details: Linköpings Universitet, LiU; Funding details: Knut och Alice Wallenbergs Stiftelse; Funding details: Centrum för Industriell Informationsteknologi, Linköpings Universitet, CENIIT, LiU; Funding text 1: Major funding for this work was provided by the Swedish Foundation for Strategic Research, Vinnova, and the Japanese Science and Technology Agency. Additional funding was provided by grants from the Knut and Alice Wallenberg Foundation and the Önnesjö Foundation. We wish to thank Andrey Maleev and Eric Claar (Linköping University) for electronic back-end design and implementation, Dr Tomoyuki Yokota and Hanbit Jin (University of Tokyo) for aid with sensor development and input, and Theofilos Kakantousis and Robin Andersson (RISE SICS). The authors also thank Thor Balkhed (Linköping University) for filming, Jonas Askergren (NyTeknik) for inspiration and assistance with Fig. 1, Per Janson and Dr Robert Brooke (conceptualized.tech) for visualization input and movie editing, and Dr Jae Joon Kim for significant assistance in reviewing the manuscript.

Available from: 2021-04-23. Created: 2021-04-23. Last updated: 2024-04-09. Bibliographically approved.
Bux, M., Brandt, J., Witt, C., Dowling, J. & Leser, U. (2017). Hi-WAY: Execution of scientific workflows on Hadoop YARN. In: Advances in Database Technology - EDBT. Paper presented at 20th International Conference on Extending Database Technology, EDBT 2017, 21 March 2017 through 24 March 2017 (pp. 668-679). OpenProceedings.org
2017 (English). In: Advances in Database Technology - EDBT, OpenProceedings.org, 2017, p. 668-679. Conference paper, Published paper (Refereed).
Abstract [en]

Scientific workflows provide a means to model, execute, and exchange the increasingly complex analysis pipelines necessary for today’s data-driven science. However, existing scientific workflow management systems (SWfMSs) are often limited to a single workflow language and lack adequate support for large-scale data analysis. On the other hand, current distributed dataflow systems are based on a semi-structured data model, which makes integration of arbitrary tools cumbersome or forces re-implementation. We present the scientific workflow execution engine Hi-WAY, which implements a strict black-box view on tools to be integrated and data to be processed. Its generic yet powerful execution model allows Hi-WAY to execute workflows specified in a multitude of different languages. Hi-WAY compiles workflows into schedules for Hadoop YARN, harnessing its proven scalability. It allows for iterative and recursive workflow structures and optimizes performance through adaptive and data-aware scheduling. Reproducibility of workflow executions is achieved through automated setup of infrastructures and re-executable provenance traces. In this application paper we discuss limitations of current SWfMSs regarding scalable data analysis, describe the architecture of Hi-WAY, highlight its most important features, and report on several large-scale experiments from different scientific domains. © 2017, Copyright is with the authors.
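
As a rough illustration of the "strict black-box view" described above (this is not Hi-WAY's actual code or API; Task, pick_worker, and the worker and file names are hypothetical), a task can be modeled as an opaque command with declared input and output files, which is what makes data-aware placement possible:

```python
# Hypothetical sketch (not Hi-WAY's API): a black-box task is just a command
# plus declared input/output files, and placement prefers the worker that
# already holds the most of the task's inputs.
from dataclasses import dataclass, field

@dataclass
class Task:
    command: str                       # opaque tool invocation
    inputs: set = field(default_factory=set)
    outputs: set = field(default_factory=set)

def pick_worker(task: Task, workers: dict) -> str:
    """Data-aware placement: choose the worker caching the most input files."""
    return max(workers, key=lambda w: len(task.inputs & workers[w]))

workers = {"node1": {"ref.fa"}, "node2": {"ref.fa", "reads.fq"}}
task = Task("bwa mem ref.fa reads.fq > out.sam",
            inputs={"ref.fa", "reads.fq"}, outputs={"out.sam"})
print(pick_worker(task, workers))      # -> node2, which already holds both inputs
```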

Place, publisher, year, edition, pages
OpenProceedings.org, 2017
National Category
Computer and Information Sciences
Identifiers
urn:nbn:se:ri:diva-38105 (URN), 10.5441/002/edbt.2017.87 (DOI), 2-s2.0-85046452463 (Scopus ID), 9783893180738 (ISBN)
Conference
20th International Conference on Extending Database Technology, EDBT 2017, 21 March 2017 through 24 March 2017
Available from: 2019-03-08. Created: 2019-03-08. Last updated: 2023-05-22. Bibliographically approved.
Ismail, M., Gebremeskel, E., Kakantousis, T., Berthou, G. & Dowling, J. (2017). Hopsworks: Improving User Experience and Development on Hadoop with Scalable, Strongly Consistent Metadata. In: Proceedings - International Conference on Distributed Computing Systems. Paper presented at 37th IEEE International Conference on Distributed Computing Systems, ICDCS 2017, 5 June 2017 through 8 June 2017 (pp. 2525-2528).
2017 (English). In: Proceedings - International Conference on Distributed Computing Systems, 2017, p. 2525-2528. Conference paper, Published paper (Refereed).
Abstract [en]

Hadoop is a popular system for storing, managing, and processing large volumes of data, but it has bare-bones internal support for metadata, as metadata is a bottleneck and less means more scalability. The result is a scalable platform with rudimentary access control that is neither user- nor developer-friendly. Also, metadata services that are built on Hadoop, such as SQL-on-Hadoop, access control, data provenance, and data governance, are necessarily implemented as eventually consistent services, resulting in increased development effort and more brittle software. In this paper, we present a new project-based multi-tenancy model for Hadoop, built on a new distribution of Hadoop that provides a distributed database backend for the Hadoop Distributed Filesystem's (HDFS) metadata layer. We extend Hadoop's metadata model to introduce projects, datasets, and project-users as new core concepts that enable a user-friendly, UI-driven Hadoop experience. As our metadata service is backed by a transactional database, developers can easily extend metadata by adding new tables and ensure the strong consistency of extended metadata using both transactions and foreign keys.
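
As a toy illustration of the claim above, that extended metadata can be added as new tables whose consistency is enforced by transactions and foreign keys, the sketch below uses sqlite3 as a stand-in for the distributed transactional backend; the table and column names (projects, datasets, dataset_tags) are invented and are not the Hopsworks/HopsFS schema:

```python
# Hypothetical sketch (not the Hopsworks/HopsFS schema): extended metadata as
# extra tables, kept consistent with foreign keys and a transaction. sqlite3
# stands in for the distributed, transactional metadata database.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("PRAGMA foreign_keys = ON")
con.execute("CREATE TABLE projects (id INTEGER PRIMARY KEY, name TEXT UNIQUE)")
con.execute("CREATE TABLE datasets (id INTEGER PRIMARY KEY, "
            "project_id INTEGER NOT NULL REFERENCES projects(id), name TEXT)")
con.execute("CREATE TABLE dataset_tags (dataset_id INTEGER NOT NULL "
            "REFERENCES datasets(id), tag TEXT)")  # an 'extended metadata' table

with con:  # one transaction: either all three rows land, or none do
    con.execute("INSERT INTO projects VALUES (1, 'genomics')")
    con.execute("INSERT INTO datasets VALUES (10, 1, 'reads')")
    con.execute("INSERT INTO dataset_tags VALUES (10, 'fastq')")

# Extended metadata pointing at a non-existent dataset is rejected outright.
try:
    con.execute("INSERT INTO dataset_tags VALUES (999, 'orphan')")
except sqlite3.IntegrityError as err:
    print("rejected:", err)
```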

Keywords
Data Management, Dynamic Roles, Hadoop, Multi-tenancy, Access control, Data flow analysis, Information management, Metadata, Data provenance, Distributed database, Metadata services, Strong consistency, Transactional database, Distributed computer systems
National Category
Natural Sciences
Identifiers
urn:nbn:se:ri:diva-30835 (URN), 10.1109/ICDCS.2017.41 (DOI), 2-s2.0-85027275789 (Scopus ID), 9781538617915 (ISBN)
Conference
37th IEEE International Conference on Distributed Computing Systems, ICDCS 2017, 5 June 2017 through 8 June 2017
Available from: 2017-09-07. Created: 2017-09-07. Last updated: 2023-05-22. Bibliographically approved.
Niazi, S., Ismail, M., Berthou, G. & Dowling, J. (2015). Leader Election Using NewSQL Database Systems (16ed.). In: Distributed Applications and Interoperable Systems. Paper presented at 15th IFIP WG 6.1 International Conference on Distributed Applications and Interoperable Systems (DAIS 2015), June 2-4, 2015, Grenoble, France (pp. 158-172). Springer, 9038
2015 (English). In: Distributed Applications and Interoperable Systems, Springer, 2015, 16, Vol. 9038, p. 158-172. Conference paper, Published paper (Refereed).
Abstract [en]

Leader election protocols are a fundamental building block for replicated distributed services. They ease the design of leader-based coordination protocols that tolerate failures. In partially synchronous systems, designing a leader election algorithm that does not permit multiple leaders while the system is unstable is a complex task. As a result, many production systems use third-party distributed coordination services, such as ZooKeeper and Chubby, to provide a reliable leader election service. However, adding a third-party service such as ZooKeeper to a distributed system incurs additional operational costs and complexity. ZooKeeper instances must be kept running on at least three machines to ensure high availability. In this paper, we present a novel leader election protocol using NewSQL databases for partially synchronous systems that ensures at most one leader at any given time. The leader election protocol uses the database as distributed shared memory. Our work enables distributed systems that already use NewSQL databases to save the operational overhead of managing an additional third-party service for leader election. Our main contribution is the design, implementation, and validation of a practical leader election algorithm, based on NewSQL databases, that has performance comparable to a leader election implementation using a state-of-the-art distributed coordination service, ZooKeeper.
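
To illustrate only the "database as distributed shared memory" idea from the abstract, the sketch below shows a much-simplified lease-style variant, with sqlite3 standing in for a NewSQL database; it is not the paper's protocol (which handles partial synchrony, failure detection, and contention far more carefully), and try_acquire and the lease table are hypothetical names:

```python
# Hypothetical sketch (not the paper's protocol): a lease row in a SQL table is
# used as shared memory; whoever updates it transactionally while it is free or
# expired is the single leader until the lease runs out.
import sqlite3, time

LEASE_SECONDS = 5

def try_acquire(con, node_id):
    """Return True if node_id holds the leader lease after this call."""
    now = time.time()
    with con:  # one transaction arbitrates concurrent attempts
        row = con.execute("SELECT holder, expires FROM lease WHERE id = 1").fetchone()
        if row is None or row[0] == node_id or row[1] < now:
            con.execute("REPLACE INTO lease (id, holder, expires) VALUES (1, ?, ?)",
                        (node_id, now + LEASE_SECONDS))
            return True
    return False

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE lease (id INTEGER PRIMARY KEY, holder TEXT, expires REAL)")
print(try_acquire(con, "node-A"))   # True: the lease was free
print(try_acquire(con, "node-B"))   # False: node-A still holds a valid lease
```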

Place, publisher, year, edition, pages
Springer, 2015. Edition: 16
Series
Lecture Notes in Computer Science (LNCS), ISSN 0302-9743, E-ISSN 1611-3349 ; 9038
National Category
Computer and Information Sciences
Identifiers
urn:nbn:se:ri:diva-24414 (URN), 10.1007/978-3-319-19129-4_13 (DOI), 2-s2.0-84937458428 (Scopus ID), 978-3-319-19128-7 (ISBN), 978-3-319-19129-4 (ISBN)
Conference
15th IFIP WG 6.1 International Conference on Distributed Applications and Interoperable Systems (DAIS 2015), June 2-4, 2015, Grenoble, France
Available from: 2016-10-31. Created: 2016-10-31. Last updated: 2023-05-22. Bibliographically approved.
Bux, M., Brandt, J., Lipka, C., Hakimzadeh, K., Dowling, J. & Leser, U. (2015). SAASFEE: Scalable Scientific Workflow Execution Engine (10ed.). In: Proceedings of the VLDB Endowment. Paper presented at 41st International Conference on Very Large Data Bases, August 31 - September 4, 2015, Kohala Coast, US (pp. 1892-1903), 8
2015 (English). In: Proceedings of the VLDB Endowment, 2015, 10, Vol. 8, p. 1892-1903. Conference paper, Published paper (Refereed).
Abstract [en]

Across many fields of science, primary data sets like sensor read-outs, time series, and genomic sequences are analyzed by complex chains of specialized tools and scripts exchanging intermediate results in domain-specific file formats. Scientific workflow management systems (SWfMSs) support the development and execution of these tool chains by providing workflow specification languages, graphical editors, fault-tolerant execution engines, etc. However, many SWfMSs are not prepared to handle large data sets because of inadequate support for distributed computing. On the other hand, most SWfMSs that do support distributed computing only allow static task execution orders. We present SAASFEE, a SWfMS which runs arbitrarily complex workflows on Hadoop YARN. Workflows are specified in Cuneiform, a functional workflow language focusing on parallelization and easy integration of existing software. Cuneiform workflows are executed on Hi-WAY, a higher-level scheduler for running workflows on YARN. Distinct features of SAASFEE are the ability to execute iterative workflows, an adaptive task scheduler, re-executable provenance traces, and compatibility with selected other workflow systems. In the demonstration, we present all components of SAASFEE using real-life workflows from the field of genomics.
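
As a loose analogue of the execution pattern a functional workflow language expresses (this is not Cuneiform syntax, and the tool call is a placeholder), existing software is wrapped as a black-box function and independent invocations fan out in parallel:

```python
# Hypothetical sketch (not Cuneiform): wrap an existing command-line tool as a
# black-box function and run independent invocations concurrently, mimicking
# the data-parallel fan-out described in the abstract.
from concurrent.futures import ThreadPoolExecutor
import subprocess

def run_tool(sample: str) -> str:
    # `echo` is a placeholder for a real tool such as a sequence aligner.
    result = subprocess.run(["echo", f"processed {sample}"],
                            capture_output=True, text=True)
    return result.stdout.strip()

samples = ["sample1.fq", "sample2.fq", "sample3.fq"]
with ThreadPoolExecutor() as pool:      # independent tasks run in parallel
    outputs = list(pool.map(run_tool, samples))
print(outputs)
```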

Series
Proceedings of the VLDB Endowment, ISSN 2150-8097 ; 8
National Category
Computer and Information Sciences
Identifiers
urn:nbn:se:ri:diva-24506 (URN), 10.14778/2824032.2824094 (DOI), 2-s2.0-84953879839 (Scopus ID)
Conference
41st International Conference on Very Large Data Bases, August 31 - September 4, 2015, Kohala Coast, US
Available from: 2016-10-31. Created: 2016-10-31. Last updated: 2023-05-22. Bibliographically approved.
Hakimzadeh, K., Peiro Sajjad, H. & Dowling, J. (2014). Scaling HDFS with a Strongly Consistent Relational Model for Metadata (8ed.). Paper presented at DAIS 2014 (pp. 38-51), 8460
2014 (English). Conference paper, Published paper (Refereed).
Abstract [en]

The Hadoop Distributed File System (HDFS) scales to store tens of petabytes of data despite the fact that the entire file system's metadata must fit on the heap of a single Java virtual machine. The size of HDFS' metadata is limited to under 100 GB in production, as garbage collection events in bigger clusters result in heartbeats timing out to the metadata server (NameNode). In this paper, we address the problem of how to migrate HDFS' metadata to a relational model, so that we can support larger amounts of storage on a shared-nothing, in-memory, distributed database. Our main contribution is that we show how to provide at least as strong consistency semantics as HDFS while adding support for a multiple-writer, multiple-reader concurrency model. We guarantee freedom from deadlocks by logically organizing inodes (and their constituent blocks and replicas) into a hierarchy and having all metadata operations agree on a global order for acquiring both explicit locks and implicit locks on subtrees in the hierarchy. We use transactions with pessimistic concurrency control to ensure the safety and progress of metadata operations. Finally, we show how to improve the performance of our solution by introducing a snapshotting mechanism at NameNodes that minimizes the number of roundtrips to the database.
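
The deadlock-freedom argument above rests on every metadata operation acquiring locks in one global order over the inode hierarchy. The sketch below shows only that ordering discipline in miniature; with_inodes_locked and the path set are hypothetical, and the real system takes the corresponding row locks inside database transactions rather than in-process locks:

```python
# Hypothetical sketch: every metadata operation sorts the inodes it touches
# into one global order (here, by path) and acquires locks strictly in that
# order, so two concurrent operations can never wait on each other in a cycle.
import threading

inode_locks = {p: threading.Lock() for p in ["/", "/user", "/user/a", "/user/b"]}

def with_inodes_locked(paths, operation):
    ordered = sorted(set(paths))            # the single global acquisition order
    for p in ordered:
        inode_locks[p].acquire()
    try:
        return operation()
    finally:
        for p in reversed(ordered):
            inode_locks[p].release()

# Both calls name the same inodes in different orders, yet lock them identically.
with_inodes_locked(["/user/b", "/user/a"], lambda: print("first operation done"))
with_inodes_locked(["/user/a", "/user/b"], lambda: print("second operation done"))
```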

National Category
Computer and Information Sciences
Identifiers
urn:nbn:se:ri:diva-24359 (URN), 10.1007/978-3-662-43352-2_4 (DOI), 2-s2.0-84902601785 (Scopus ID)
Conference
DAIS 2014
Available from: 2016-10-31. Created: 2016-10-31. Last updated: 2023-05-22. Bibliographically approved.
Payberah, A. H., Kavalionak, H., Montresor, A., Dowling, J. & Haridi, S. (2013). Lightweight gossip-based distribution estimation. In: IEEE International Conference on Communications. Paper presented at 2013 IEEE International Conference on Communications, ICC 2013, 9 June 2013 through 13 June 2013, Budapest (pp. 3439-3443). Institute of Electrical and Electronics Engineers Inc., Article ID 6655081.
2013 (English). In: IEEE International Conference on Communications, Institute of Electrical and Electronics Engineers Inc., 2013, p. 3439-3443, article id 6655081. Conference paper, Published paper (Refereed).
Abstract [en]

Monitoring the global state of an overlay network is vital for the self-management of peer-to-peer (P2P) systems. Gossip-based algorithms are a well-known technique that can provide nodes locally with aggregated knowledge about the state of the overlay network. In this paper, we present a gossip-based protocol to estimate the global distribution of attribute values stored across a set of nodes in the system. Our algorithm estimates the distribution both efficiently and accurately. The key contribution of our algorithm is that it has substantially lower overhead than existing distribution estimation algorithms. We evaluated our system in simulation and compared it against state-of-the-art solutions. The results show accuracy similar to its counterparts, but with communication overhead an order of magnitude lower.
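
For context only, the sketch below shows generic gossip aggregation of per-node histograms, which converges to the global distribution of an attribute; it is not the paper's algorithm, whose point is precisely to reach such estimates with far lower communication overhead than this kind of full-histogram exchange:

```python
# Hypothetical sketch: each node keeps a normalized histogram of the attribute
# value it started with and repeatedly averages it with a random peer's
# histogram (push-pull); after enough rounds every node approximates the
# global distribution.
import random

def make_histogram(value, buckets=4):
    h = [0.0] * buckets                       # attribute values lie in [0, 100)
    h[min(value * buckets // 100, buckets - 1)] = 1.0
    return h

random.seed(1)
nodes = [make_histogram(random.randrange(100)) for _ in range(50)]

for _ in range(2000):                         # gossip exchanges
    a, b = random.sample(range(len(nodes)), 2)
    avg = [(x + y) / 2 for x, y in zip(nodes[a], nodes[b])]
    nodes[a] = nodes[b] = avg                 # both peers keep the average

print([round(x, 2) for x in nodes[0]])        # ~ global fraction of values per bucket
```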

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers Inc., 2013
Keywords
Algorithms, Distributed computer systems, Overlay networks, Peer to peer networks, Attribute values, Communication overheads, Distribution estimation, Distribution estimation algorithms, Global distribution, Gossip-based algorithms, Gossip-based protocol, Peer-to-Peer system, Estimation
National Category
Engineering and Technology
Identifiers
urn:nbn:se:ri:diva-47633 (URN), 10.1109/ICC.2013.6655081 (DOI), 2-s2.0-84891358815 (Scopus ID), 9781467331227 (ISBN)
Conference
2013 IEEE International Conference on Communications, ICC 2013, 9 June 2013 through 13 June 2013, Budapest
Available from: 2020-08-28. Created: 2020-08-28. Last updated: 2023-06-07. Bibliographically approved.
Payberah, A. H., Dowling, J., Rahimian, F. & Haridi, S. (2012). Distributed optimization of P2P live streaming overlays. Computing, 94(8-10), 621-647
2012 (English). In: Computing, ISSN 0010-485X, E-ISSN 1436-5057, Vol. 94, no. 8-10, p. 621-647. Article in journal (Refereed). Published.
Abstract [en]

Peer-to-peer live media streaming over the Internet is becoming increasingly popular, though it is still a challenging problem. Nodes should receive the stream with respect to intrinsic timing constraints, while the overlay should adapt to changes in the network and nodes should be incentivized to contribute their resources. In this work, we meet these contradictory requirements simultaneously by introducing a distributed market model to build an efficient overlay for live media streaming. Using our market model, we construct two different overlay topologies, tree-based and mesh-based, which are the two dominant approaches to media distribution. First, we build an approximately minimal-height multiple-tree data dissemination overlay, called Sepidar. Next, we extend our model, in GLive, to make it more robust in dynamic networks by replacing the tree structure with a mesh. We show in simulation that the mesh-based overlay outperforms the multiple-tree overlay. We compare the performance of our two systems with the state-of-the-art NewCoolstreaming, and observe that they provide better playback continuity and lower playback latency than NewCoolstreaming under a variety of experimental scenarios. Although our distributed market model can be run against a random sample of nodes, we improve its convergence time by executing it against a sample of nodes taken from the Gradient overlay. The evaluations show that the streaming overlays converge faster when our market model works on top of the Gradient overlay.
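
To convey only the incentive idea of the market model described above (nodes that contribute more upload capacity obtain better positions in the overlay), the toy greedy matching below pairs bidders with upload slots; it is not the Sepidar/GLive auction algorithm, and all node names and bid values are invented:

```python
# Hypothetical sketch (not the Sepidar/GLive auction): downloaders bid for
# upload slots with their own upload contribution, and slots go to the highest
# bidders first, so contributing more buys a better parent in the overlay.
sellers = {"nodeA": 2, "nodeB": 1}                 # free upload slots per node
bidders = {"nodeX": 10, "nodeY": 5, "nodeZ": 1}    # bid = own upload contribution

assignments = []
for bidder, _bid in sorted(bidders.items(), key=lambda kv: -kv[1]):
    for seller, free_slots in sellers.items():
        if free_slots > 0:
            sellers[seller] -= 1
            assignments.append((bidder, seller))
            break

print(assignments)   # highest contributors get matched to a parent first
```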

Keywords
Auction algorithm, Distributed algorithms, Market-based algorithms, P2P live streaming, The Gradient overlay
National Category
Engineering and Technology
Identifiers
urn:nbn:se:ri:diva-51862 (URN), 10.1007/s00607-012-0195-y (DOI), 2-s2.0-84867096227 (Scopus ID)
Available from: 2021-01-12. Created: 2021-01-12. Last updated: 2023-05-22. Bibliographically approved.
Dowling, J. & Payberah, A. (2012). Shuffling with a Croupier: Nat-Aware Peer-Sampling (8ed.). Paper presented at ICDCS 2012 (pp. 102-111), Article ID 6257983.
2012 (English). Conference paper, Published paper (Refereed).
Abstract [en]

Despite much recent research on peer-to-peer (P2P) protocols for the Internet, there have been relatively few practical protocols designed to explicitly account for Network Address Translation gateways (NATs). Those P2P protocols that do handle NATs circumvent them using relaying and hole-punching techniques to route packets to nodes residing behind NATs. In this paper, we present Croupier, a peer sampling service (PSS) that provides uniform random samples of nodes in the presence of NATs in the network. It is the first NAT-aware PSS that works without the use of relaying or hole-punching. By removing the need for relaying and hole-punching, we decrease the complexity and overhead of our protocol as well as increase its robustness to churn and failure. We evaluated Croupier in simulation, and, in comparison with existing NAT-aware PSS’, our results show similar randomness properties, but improved robustness in the presence of both high percentages of nodes behind NATs and massive node failures. Croupier also has substantially lower protocol overhead.
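
As a loose caricature of the structure described above, where public (non-NATed) nodes run the shuffle and carry descriptors of private nodes in their views so that private nodes are sampled without relaying or hole-punching, the sketch below is not the Croupier protocol; views, shuffle, and the node names are hypothetical:

```python
# Hypothetical sketch (not Croupier): only public nodes run the shuffle, but
# their views mix descriptors of public and private (NATed) nodes, so private
# nodes appear in samples without being contacted directly.
import random

random.seed(0)
public = ["pub1", "pub2", "pub3"]
private = [f"priv{i}" for i in range(6)]

# Each public node ("croupier") holds a small partial view of the whole overlay.
views = {p: random.sample(public + private, 4) for p in public}

def shuffle(a, b, k=2):
    """Public nodes a and b exchange k random descriptors from their views."""
    send_a, send_b = random.sample(views[a], k), random.sample(views[b], k)
    views[a] = list(dict.fromkeys(views[a] + send_b))[-4:]   # keep views bounded
    views[b] = list(dict.fromkeys(views[b] + send_a))[-4:]

shuffle("pub1", "pub2")
print(views["pub1"], views["pub2"])   # private descriptors circulate via public nodes
```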

Keywords
Gossip peer sampling, NAT, P2P networks
National Category
Computer and Information Sciences
Identifiers
urn:nbn:se:ri:diva-24022 (URN), 10.1109/ICDCS.2012.19 (DOI), 2-s2.0-84866901273 (Scopus ID)
Conference
ICDCS 2012
Projects
KTH’s TNG
Available from: 2016-10-31. Created: 2016-10-31. Last updated: 2023-05-22. Bibliographically approved.
Identifiers
ORCID iD: orcid.org/0000-0002-9484-6714
