Several studies of Internet traffic have shown that a small percentage of the flows dominates the traffic. This is often referred to as the mice-and-elephants phenomenon. It has been proposed that this might be one of very few invariants of Internet traffic, and that the property could be used for traffic engineering purposes: a major part of the traffic could be controlled in a scalable way by keeping track of only a small number of flows. For this to work, however, the large flows must also be stable, in the sense that they remain among the largest flows over long periods of time. In this work we analyse packet traces of Internet traffic and study the temporal characteristics of large aggregated traffic flows defined by destination address prefixes.
Large network operators have thousands or tens of thousands of access aggregation links that they need to manage and dimension properly. Measuring and understanding the traffic characteristics on this type of link is therefore essential. What do the traffic intensity characteristics look like on different timescales, from days down to milliseconds? How do the characteristics differ between links with the same capacity but different types of clients and access technologies? How do the traffic characteristics differ from those on core network links? These are the types of questions we set out to investigate in this paper. We present the results of packet-level measurements on three different 1 Gbit/s aggregation links in an operational IP network. We see large differences in traffic characteristics between the three links. We observe highly skewed link load probability densities on timescales relevant for buffering (i.e., 10 milliseconds). We demonstrate the existence of large traffic spikes on short timescales (10-100 ms) and show their impact on link delay. We also find that these traffic bursts are often caused by only one or a few IP flows.
Connected vehicles can make road traffic safer and more efficient, but require the mobile networks to handle time-critical applications. Using the MONROE mobile broadband measurement testbed we conduct a multi-access measurement study on buses. The objective is to understand what network performance connected vehicles can expect in today's mobile networks, in terms of transaction times and availability. The goal is also to understand to what extent access to several operators in parallel can improve communication performance. In our measurement experiments we repeatedly transfer warning messages from moving buses to a stationary server. We triplicate the messages and always perform three transactions in parallel over three different cellular operators. This creates a dataset with which we can compare the operators in an objective way and with which we can study the potential for multi-access. In this paper we use the triple-access dataset to evaluate single-access selection strategies, where one operator is chosen for each transaction. We show that if we have access to three operators and for each transaction choose the operator with the best access technology and best signal quality, then we can significantly improve availability and transaction times compared to the individual operators. The median transaction time improves by 6% compared to the best single operator and by 61% compared to the worst single operator. The 90th-percentile transaction time improves by 23% compared to the best single operator and by 65% compared to the worst single operator.
Integrated Layer Processing is an implementation technique for data manipulation functions in communication protocols. The purpose of this technique is to increase communication performance. It reduces the number of memory accesses and thus relieves the memory bandwidth bottleneck. Integrated Layer Processing can, however, in some situations substantially increase the number of memory accesses, and therefore instead reduce performance. The main reason is contention for processor registers. We present a performance model that captures the memory behavior of data manipulation functions for both integrated and sequential implementations. By comparing the model to measurements of real and synthetic data manipulation functions, we show that the model accurately predicts the performance. The model can be used to assess whether an integrated implementation will perform better or worse than a sequential implementation. The situations where integration would reduce performance can then be avoided without spending a lot of effort on a more complex integrated implementation.
We define a global routing mechanism for the NetInf protocol, part of the NetInf information-centric networking architecture. The mechanism makes use of two levels of aggregation in order to provide the scalability needed for a global network. An anticipated $10^{15}$ individual named data objects are aggregated into on the order of 500K routing hints, which is feasible to handle with existing routing technology. The hints are then used to forward requests for named data towards the publisher.
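The aggregation idea can be illustrated with a small sketch of how a request for a named data object is matched against a table of routing hints. The names, prefixes and hint values below are illustrative assumptions, not the actual NetInf naming or hint scheme:

```python
# Sketch: forwarding a request for a named data object using a small
# table of aggregated routing hints (longest-prefix match on the name).
# All names and hint values are hypothetical examples.

def longest_prefix_hint(hint_table, name):
    """Return the routing hint whose prefix is the longest match for name."""
    best = None
    for prefix, hint in hint_table.items():
        if name.startswith(prefix) and (best is None or len(prefix) > len(best[0])):
            best = (prefix, hint)
    return best[1] if best else None

hints = {
    "ni://example.com/": "AS-65001",          # coarse hint for the publisher
    "ni://example.com/video/": "AS-65002",    # more specific hint
}

print(longest_prefix_hint(hints, "ni://example.com/video/clip42"))
```

In a real deployment the roughly 500K hints would live in a longest-prefix-match structure such as a trie, analogous to today's IP forwarding table lookups, rather than a linear scan.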
Packet loss is an important parameter for dimensioning network links or traffic classes carrying IP telephony traffic. We present a model based on the Markov modulated Poisson process (MMPP) which calculates packet loss probabilities for a set of superposed voice input sources and the specified link properties. We do not introduce another new model to the community; rather, we try to verify one of the existing models via extensive simulation and a real-world implementation. A plethora of excellent research on queuing theory is still in the domain of ATM researchers, and we attempt to highlight its validity to the IP telephony community. Packet-level simulations show very good correspondence with the predictions of the model. Our main contribution is the verification of the MMPP model with measurements in a laboratory environment. The loss rates predicted by the model are in general close to the measured loss rates and the loss rates obtained with simulation. The general conclusion is that the MMPP-based model is a tool well suited for dimensioning links carrying packetized voice in a system with limited buffer space.
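As a rough illustration of the dimensioning problem the model addresses, the following toy discrete-time simulation superposes on-off voice sources in front of a finite buffer and estimates the resulting loss ratio. This is not the analytical MMPP model of the paper, and all parameter values are illustrative:

```python
import random

def voice_loss_sim(n_sources=40, p_on=0.02, p_off=0.03,
                   p_emit=0.5, service_rate=12, buffer_size=30,
                   slots=100_000, seed=7):
    """Toy estimate of packet loss for superposed on-off voice sources
    feeding a finite FIFO buffer. Each source is a two-state Markov chain
    (talkspurt/silence); an ON source emits a packet per slot with
    probability p_emit; the link serves service_rate packets per slot."""
    rng = random.Random(seed)
    on = [False] * n_sources
    queue = 0
    arrived = lost = 0
    for _ in range(slots):
        # State transitions for each source.
        for i in range(n_sources):
            if on[i]:
                if rng.random() < p_off:
                    on[i] = False
            elif rng.random() < p_on:
                on[i] = True
        # Arrivals from sources currently in a talkspurt.
        for i in range(n_sources):
            if on[i] and rng.random() < p_emit:
                arrived += 1
                if queue < buffer_size:
                    queue += 1
                else:
                    lost += 1
        # Service: drain up to service_rate packets.
        queue = max(0, queue - service_rate)
    return lost / arrived if arrived else 0.0

print(voice_loss_sim())
```

With the chosen parameters the offered load is about two thirds of the service rate, so the simulated loss ratio is small; pushing the load toward one makes the burstiness of the on-off sources visible in the loss figure, which is exactly the regime where an analytical model is valuable for dimensioning.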
Transmitting telephone calls over the Internet causes problems not present in current telephone technology, such as packet loss and delay due to queueing in routers. In this undergraduate thesis we study how a Markov modulated Poisson process is applied as an arrival process to a multiplexer, and we study the performance in terms of loss probability. The input consists of the superposition of independent voice sources. The predictions of the model are compared with results obtained from simulations of the multiplexer made with a network simulator. The buffer occupancy distribution is also studied, and we see how this distribution changes as the load increases.
A new network architecture for the Internet needs ingredients from three approaches: information-centric networking, cloud computing integrated with networking, and open connectivity. Information-centric networking considers pieces of information as first-class entities of a networking architecture, rather than only indirectly identifying and manipulating them via a node hosting that information; this way, information becomes independent of the devices it is stored in, enabling efficient and application-independent information caching in the network. Cloud networking offers a combination and integration of cloud computing and virtual networking. It is a solution that distributes the benefits of cloud computing more deeply into the network, and provides a tighter integration of virtualisation features at the computing and networking levels. To support these concepts, open connectivity services need to provide advanced transport and networking mechanisms, making use of network and path diversity (even leveraging direct optical paths) and encoding techniques, and dealing with ubiquitous mobility of users, content and information objects in a unified way.
This paper considers the issues of a real-time transport service needed by multimedia applications for transferring digital video and audio. Three classes of transport service are defined with different levels of real-time constraints. Methods for error control are considered for the classes, and the classes are discussed with respect to the application requirements.
Integrated Layer Processing (ILP) has been presented as an implementation technique to improve communication protocol performance by reducing the number of memory references. Previous research has however not pointed out that in some circumstances ILP can significantly increase the number of memory references, resulting in lower communication throughput. We explore the performance effects of applying ILP to data manipulation functions with varying characteristics. The functions are generated from a set of parameters including input and output block size, state size and number of instructions. We present experimental data for varying function state sizes, number of integrated functions and instruction counts. The results clearly show that the aggregated state of the functions must fit in registers for ILP to be competitive.
In this paper we review previous work on the applicability and performance of Integrated Layer Processing (ILP). ILP has been shown to clearly improve computer communication performance when integrating simple data manipulation functions, but the situation has been less clear for more complex functions and complete systems. We discuss complications when applying ILP to protocol stacks, the requirements of ILP on the communication subsystem, caching aspects, the importance of the processor registers, and a model for predicting the performance of data manipulation functions. We conclude that the main drawback of ILP is its limited applicability to complex data manipulation functions. The performance to expect from an ILP implementation also depends heavily on the protocol architecture and the host system architecture.
Cache memory behavior is becoming more and more important as the speed of CPUs increases faster than the speed of memories. The operation of caches is statistical, which means that system-level performance becomes unpredictable. In this paper we investigate the worst-case behavior of cache line conflicts in the context of communication protocols implemented using Integrated Layer Processing. The goal of our work is to control the cache by placing communication buffers and code in non-conflicting positions in the cache. The result would be higher and more predictable performance. Our first results indicate that the worst case can be almost four times slower than the best case.
We present a so-called no-copy Application Programming Interface (API) for communication. The interface avoids copying when data is transferred between the application and operating system kernel address spaces. The API is an extension to the socket interface for SunOS, and has been implemented on Sun SPARCstations equipped with Fore Systems ATM adapters. Throughput for the no-copy API is 85 Mbit/s for 8K UDP messages, to be compared to 57 Mbit/s for the regular API on the SPARCstation 2. Processing times through the TCP and UDP stacks are reduced by up to 30% for the SPARCstation 2 and by more than 50% for the SPARCstation 10.
The first age of Internet architectural thinking concentrated on defining the correct principles for designing a packet-switched network and its application protocol suites. Although these same principles remain valid today, they do not address the question of how to reason about the evolution of the Internet or its interworking with other networks of very different heritages. This paper proposes a complementary methodology, motivated by the view that evolution and interworking flexibility are determined not so much by the principles applied during initial design, but by the choice of fundamental components or "design invariants" in terms of which the design is expressed. The paper discusses the characteristics of such invariants, including examples from the Internet and other networks, and considers what attributes of invariants best support architectural flexibility.
The report addresses digitalisation – the introduction of new digital technology – in the management of bridges. The scope is a pre-study with the aim of identifying the need for future research for the long-term development of bridge management. A fundamental premise was that digitalisation should reduce the need for costly maintenance measures while maintaining a high level of safety for our bridges. The goals of the project were to gather information about the digital information models created during the investment phase, evaluate the handover of digital models to the management phase, and assess the potential benefit of digital data collection for condition assessment and maintenance planning. An important part of this was the description of today's management systems and how they could be developed. The studies were conducted through a questionnaire survey with respondents from consulting firms active in bridge design, interviews with technical experts, and literature searches. The results show that bridge design today is mainly carried out using building information modelling (BIM). The focus is on the construction phase, where coordination and communication are judged to be the greatest benefits. The handover to the management phase, however, consists of as-built drawings in the form of simple drawing files. Although Trafikverket's strategy for BIM states that an information model should live on throughout the bridge's entire lifetime, there are doubts as to whether a model from the design phase is suitable as a management model. Instead, other methods for creating a model of the as-built structure are highlighted, for example optical methods for scanning and photogrammetry. The management systems should be developed with functions for storing and making available large amounts of digital information from sensors and machine-based inspections. The purpose is to reduce the uncertainties about the as-built structure and the degree of deterioration, in order ultimately to create a better basis for decisions on maintenance measures.
A future scenario is a digital twin that mirrors the real structure and is continuously updated with sensor data. Regarding measurement hardware, sensors and systems need to be developed with respect to energy consumption, energy harvesting and maintenance, for example through combinations of replaceable components with short lifetimes and other parts with long lifetimes. Fibre-optic sensors show promising properties, but development is needed to make them more cost-effective relative to conventional sensors.
The existing Internet ecosystem is a result of decades of evolution. It has managed to scale well beyond the original aspirations. Evolution, though, has highlighted a certain degree of inadequacy that is well documented. In this position paper we present the design considerations for a re-architected global networking architecture which delivers dissemination and non-dissemination objects only to consenting recipients (reducing unwanted traffic), links information producers with consumers independently of the hosts involved, and connects the digital with the physical world. We consider issues ranging from the proposed object identifier/locator split to security and trust as we transition towards a Network of Information, and relate our work to the emerging paradigm of publish/subscribe architectures. We introduce the fundamental components of a Network of Information, i.e., name resolution, routing, storage, and search, and close this paper with a discussion about future work.
The information-centric networking (ICN) concept is a significant common approach of several Future Internet research activities. The approach leverages in-network caching, multi-party communication through replication, and interaction models decoupling senders and receivers. The goal is to provide a network infrastructure service that is better suited to today's use, in particular content distribution and mobility, and that is more resilient to disruptions and failures. The ICN approach is being explored by a number of research projects. We compare and discuss design choices and features of proposed ICN architectures, focussing on the following main components: named data objects, naming and security, API, routing and transport, and caching. We also discuss the advantages of the ICN approach in general.
Ambient Networks interconnect independent realms that may use different local network technologies and may belong to different administrative or legal entities. At the core of these advanced internetworking concepts is a flexible naming architecture based on dynamic indirections between names, addresses and identities. This paper gives an overview of the connectivity abstractions of Ambient Networks and then describes its naming architecture in detail, comparing and contrasting them to other related next-generation network architectures.
Providing end-to-end communication in heterogeneous internetworking environments is a challenge. Two fundamental problems are bridging between different internetworking technologies and hiding of network complexity and differences from both applications and application developers. This paper presents abstraction and naming mechanisms that address these challenges in the Ambient Networks project. Connectivity abstractions hide the differences of heterogeneous internetworking technologies and enable applications to operate across them. A common naming framework enables end-to-end communication across otherwise independent internetworks and supports advanced networking capabilities, such as indirection or delegation, through dynamic bindings between named entities.
Information-centric networks (ICNs) intrinsically support multipath transfer and have thus been seen as an exciting paradigm for IoT and edge computing, not least in the context of 5G mobile networks. One key to ICN's success in these and other networks that have to support a diverse set of services over a heterogeneous network infrastructure is to schedule traffic over the available network paths efficiently. This paper presents and evaluates ZQTRTT, a multipath scheduling scheme for ICN that load balances bulk traffic over available network paths and schedules latency-sensitive, non-bulk traffic to reduce its transfer delay. A new metric called zero queueing time (ZQT) ratio estimates path load and is used to compute forwarding fractions for load balancing. In particular, the paper shows through a simulation campaign that ZQTRTT can accommodate the demands of both latency-sensitive and -insensitive traffic as well as evenly distribute traffic over available network paths.
We present the latency-aware multipath scheduler ZQTRTT that takes advantage of the multipath opportunities in information-centric networking. The goal of the scheduler is to use the (single) lowest latency path for transaction-oriented flows, and use multiple paths for bulk data flows. A new estimator called zero queue time ratio is used for scheduling over multiple paths. The objective is to distribute the flow over the paths so that the zero queue time ratio is equal on the paths, that is, so that each path is ‘pushed’ equally hard by the flow without creating unwanted queueing. We make an initial evaluation using simulation that shows that the scheduler meets our objectives.
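The equalisation objective can be sketched as an iterative weight update: traffic is shifted toward paths whose zero queue time ratio lies above the weighted mean, until the ratios even out across paths. The update rule below is an illustrative sketch under that assumption, not the actual ZQTRTT algorithm:

```python
def update_weights(weights, zqt_ratios, step=0.1):
    """One illustrative equalisation step for multipath forwarding weights.

    weights     -- current forwarding fractions per path (sum to 1)
    zqt_ratios  -- observed zero queue time ratio per path (0..1);
                   a high ratio means the path rarely queues, i.e. it
                   has headroom and can take more traffic.
    """
    # Weighted mean ZQT ratio over all paths.
    mean = sum(w * z for w, z in zip(weights, zqt_ratios))
    # Shift weight toward paths above the mean, away from paths below it.
    new = [max(0.0, w + step * w * (z - mean))
           for w, z in zip(weights, zqt_ratios)]
    total = sum(new)
    return [w / total for w in new]

# Path 0 shows almost no queueing, path 1 queues often:
# the update moves forwarding weight toward path 0.
print(update_weights([0.5, 0.5], [0.9, 0.3]))
```

Repeating the update as new zero queue time measurements arrive drives the per-path ratios toward equality, which matches the stated objective of pushing each path equally hard without creating unwanted queueing.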
Many current implementations of communication subsystems on workstation-class computers transfer communication data to and from primary memory several times. This is due to software copying between user and operating system address spaces, presentation layer data conversion and other data manipulation functions. The consequence is that memory bandwidth is one of the major performance bottlenecks limiting high-speed communication on these systems. We propose a communication subsystem architecture with a minimal-copy data path to widen this bottleneck. The architecture is tailored for protocol implementations using Integrated Layer Processing (ILP) and Application Layer Framing (ALF). We choose to implement these protocols in the address space of the application program. We present a new application program interface (API) between the protocols and the communication service in the operating system kernel. The API does not copy data, but instead passes pointers to page-size data buffers. We analyze and discuss ILP loop and cache memory requirements on these buffers. Initial experiments show that the API can increase the communication performance by 50% compared to a standard BSD Unix socket interface.
The Internet of Things (IoT) for smart cities needs accessible open data and open systems, so that industries and citizens can develop new services and applications. As an example, the authors provide a case study of the GreenIoT platform in Uppsala, Sweden.
Information-centric networking~(ICN) has been introduced as a potential future networking architecture. ICN promises an architecture that makes information independent from location, application, storage, and transportation. Still, it is not without challenges. Notably, there are several outstanding issues regarding congestion control: Since ICN is more or less oblivious to the location of information, it opens up for a single application flow to have several sources, something which blurs the notion of transport flows and makes it very difficult to employ traditional end-to-end congestion control schemes in these networks. Instead, ICN networks often make use of hop-by-hop congestion control schemes. However, these schemes are also tainted with problems; e.g., several of the proposed ICN congestion controls assume fixed link capacities that are known beforehand. Since this is seldom the case, this paper evaluates the consequences that variable link capacities have on a hop-by-hop congestion control scheme, such as the one employed by the Multipath-aware ICN Rate-based Congestion Control~(MIRCC), in terms of latency, throughput, and link usage. The evaluation was carried out in the OMNeT++ simulator, and demonstrates how seemingly small variations in link capacity significantly deteriorate both latency and throughput, and often result in inefficient network link usage.
Information-centric networking (ICN), with its design around name-based forwarding and in-network caching, holds great promise to become a key architecture for the future Internet. Many proposed ICN hop-by-hop congestion control schemes assume a fixed and known link capacity, which rarely - if ever - holds true for wireless links. Firstly, we demonstrate that although these congestion control schemes are able to utilise the available wireless link capacity fairly well, they greatly fail to keep the delay low. In fact, they essentially offer the same delay as in the case with no hop-by-hop, only end-to-end, congestion control. Secondly, we show that by complementing these schemes with an easy-to-implement, packet-train capacity estimator, we reduce the delay to a level significantly lower than what is obtained with only end-to-end congestion control, while still being able to keep the link utilisation at a high level.
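The packet-train idea can be sketched as follows: packets sent back to back are spaced out by the bottleneck link, so its capacity can be estimated from the per-packet dispersion at the receiver. The function below shows the generic dispersion technique; the estimator details of the paper are not reproduced here:

```python
def train_capacity(arrival_times, packet_size_bits):
    """Estimate bottleneck capacity (bit/s) from a back-to-back packet train.

    arrival_times    -- receive timestamps (seconds) of the train's packets
    packet_size_bits -- size of each packet in bits

    Uses the median inter-arrival gap, which makes the estimate robust
    against individual gaps stretched by cross traffic or jitter.
    """
    gaps = sorted(b - a for a, b in zip(arrival_times, arrival_times[1:]))
    median_gap = gaps[len(gaps) // 2]
    return packet_size_bits / median_gap

# A train of 1500-byte packets arriving 1 ms apart suggests ~12 Mbit/s.
times = [i * 0.001 for i in range(11)]
print(train_capacity(times, 12000))
```

Feeding such an estimate into a hop-by-hop congestion controller replaces the fixed-capacity assumption with a measured value, which is what allows the queueing delay on a varying wireless link to be kept low.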
We modified a commercial Android TV app to use NetInf ICN transport. It was straightforward to adapt the standard HTTP Live Streaming to NetInf naming and network service. We demonstrate that NetInf's in-network caching and request aggregation result in efficient live TV distribution.
Many IoT applications are inherently information-centric, making it advantageous to use ICN transport. We demonstrate CCN-lite ported to run on Contiki sensor motes with limited processing and storage resources. We show a method for mapping streams of sensor data to a stream of immutable CCN named data objects, and an adaptive probing method to find the newest value. We also demonstrate interoperation between MQTT and CCN via a gateway. A higher level goal is to use ICN as an open interface for accessing IoT data.
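The adaptive probing for the newest value can be sketched as an exponential probe followed by a binary search over sequence-numbered object names, assuming the stream maps each reading to an immutable object with sequence numbers 0..N. The `exists` callback below is a hypothetical stand-in for a CCN interest/data exchange:

```python
def find_newest(exists, start=0):
    """Find the highest existing sequence number, assuming objects
    start..N exist contiguously. `exists(seq)` stands in for issuing
    an interest for the object named with that sequence number.

    Uses an exponential probe to bracket N, then a binary search,
    so only O(log N) interests are needed instead of N."""
    if not exists(start):
        return None
    # Exponential probe: double the step until we overshoot.
    step = 1
    lo = start
    while exists(lo + step):
        lo += step
        step *= 2
    hi = lo + step  # first sequence number known to be missing (or beyond)
    # Binary search in (lo, hi) for the last existing object.
    while lo + 1 < hi:
        mid = (lo + hi) // 2
        if exists(mid):
            lo = mid
        else:
            hi = mid
    return lo

# Example: a sensor has published readings 0..37.
print(find_newest(lambda seq: seq <= 37))
```

Because each probed object is immutable and cacheable, repeated "find newest" requests from different consumers can be aggregated and served from in-network caches, which is the property that makes ICN transport attractive for constrained IoT deployments.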
SPION: Secure Protocols in OSI Networks. This report describes how security services can be realized in a computer network using the protocols of the Open Systems Interconnection (OSI) reference model for communication. The report starts with defining security requirements for a "typical" local area network in a company, university or similar organization. It is assumed that the organization does not use the network for transfer of extremely sensitive information, such as military secrets. A set of security services, as specified in the OSI security architecture, are selected in order to satisfy the requirements. The selected services are then placed in suitable layers of the OSI model according to the criteria in the security architecture, and to the taste of the authors. The report concentrates on the transport layer. An extension of the OSI transport protocol, class 4, including security services is described in detail. The protocol is a fully compatible extension of the standard transport protocol. Key management is another topic which is included in the report. A key management system for handling public keys and digital signatures based on an article by Dorothy E. Denning is described. The system includes functions for distributing and validating public keys, and registering and later verifying digital signatures. A key management protocol supporting these functions is defined for communication between ordinary open systems and special key server systems.
This report is an effort to describe the state-of-the-art in computer network security focusing on the OSI Security architecture. Other sources of information include the NCSC "Trusted Network Interpretation of the TCSEC". The report describes the security threats imposed on networks and the countermeasures available. It gives a detailed description of the security services defined in the OSI Security architecture and the mechanisms proposed for realizing these services. An overview of security management with emphasis on key management is also included. The report contains numerous references to books and articles in the field of network security.
We describe experiences and insights from adapting the Subversion version control system to use the network service of two information-centric networking (ICN) prototypes: OpenNetInf and CCNx. The evaluation is done using a local collaboration scenario, common in our own project work, where a group of people meet and share documents through a Subversion repository. The measurements show a performance benefit already with two clients in some of the studied scenarios, despite being done on un-optimised research prototypes. The conclusion is that ICN clearly is beneficial also for non-mass-distribution applications. It was straightforward to adapt Subversion to fetch updated files from the repository using the ICN network service. The adaptation however neglected access control, which will need a different approach in ICN than an authenticated SSL tunnel. Another insight from the experiments is that care needs to be taken when implementing the heavy ICN hash and signature calculations. In the prototypes, these are done serially, but we see an opportunity for parallelisation, making use of current multi-core processors.
This document specifies how the NetInf information-centric network service can be used for transport of live video streaming. To illustrate this it describes a prototype system that was developed to be used at "events with large crowds", e.g., sports events. The specification defines how the used video format is mapped to NetInf named data objects (NDOs). It also describes how NetInf messages are used to transfer the NDOs.
DTM, dynamic synchronous transfer mode, is a new time division multiplexing technique for fiber networks currently being developed and implemented at the Royal Institute of Technology in Stockholm, Sweden. This paper describes the hardware and software aspects of the design of an SBus host interface to the DTM network for a Sun SPARCstation. The interface is based on a dual-port memory residing on the interface card and accessible over the SBus from the host CPU. The host operating system allocates message buffers directly in this memory. The interface has hardware support for segmenting and reassembling packets to and from the data units of the DTM. The software part of the interface manages the shared memory and the virtual circuits provided by the DTM network.
We prove a result concerning objective functions that can be used to obtain efficient and balanced solutions to the multi-commodity network flow problem. This type of solution is of interest when routing traffic in the Internet. A particular case of the result proved here (see Corollary 2 below) was stated without proof in a previous paper.
Cooperative Intelligent Transport Systems (C-ITS) make road traffic safer and more efficient, but require the mobile networks to handle time-critical applications. While some applications may need new dedicated communication technologies such as IEEE 802.11p or 5G, other applications can use current cellular networks. This study evaluates the performance that connected vehicles can expect from existing networks, and estimates the potential gain of multi-access by simultaneously transmitting over several operators. We upload time-critical warning messages from buses in Sweden, and characterise transaction times and network availability. We conduct the experiments with different protocols: UDP, TCP, and HTTPS. Our results show that when using UDP, the median transaction time for sending a typical warning message is 135 ms. We also show that multi-access can bring this value down to 73 ms. For time-critical applications requiring transaction times under 200 ms, multi-access can increase the availability of the network from 57.4% to 92.0%.