Searchable symmetric encryption (SSE) schemes are commonly proposed to enable search in protected unstructured documents such as email archives or other sets of sensitive text files. Recently, however, some SSE schemes have been proposed to protect relational databases. Most previous attacks on SSE schemes have targeted only their common use case, protecting unstructured data. In this work, we propose a new inference attack on relational databases protected via SSE schemes. Our inference attack enables a passive adversary with only basic knowledge of the metadata of the target relational database to recover the attribute names of some observed queries. This violates query privacy, since the attribute name of a query is meant to be secret.
We propose a simple and efficient searchable symmetric encryption scheme based on a Bitmap index that evaluates Boolean queries. Our scheme provides a practical solution in settings where communications and computations are very constrained as it offers a suitable trade-off between privacy and performance.
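The bitmap-index idea behind such a scheme can be sketched in a few lines, with the encryption layer omitted: each keyword maps to a bit vector with one bit per document, so conjunctive and disjunctive Boolean queries reduce to bitwise AND/OR over the bitmaps. All names and the plaintext structure below are illustrative assumptions, not taken from the scheme itself.

```python
# Sketch of Boolean query evaluation over a (plaintext) bitmap index.
# Each keyword maps to an integer used as a bit vector, one bit per document.

class BitmapIndex:
    def __init__(self, num_docs):
        self.num_docs = num_docs
        self.bitmaps = {}  # keyword -> int bit vector

    def add(self, keyword, doc_id):
        # Set the bit for doc_id in the keyword's bitmap.
        self.bitmaps[keyword] = self.bitmaps.get(keyword, 0) | (1 << doc_id)

    def lookup(self, keyword):
        return self.bitmaps.get(keyword, 0)

    def query_and(self, *keywords):
        # Conjunction: bitwise AND across all keyword bitmaps.
        result = (1 << self.num_docs) - 1
        for kw in keywords:
            result &= self.lookup(kw)
        return [d for d in range(self.num_docs) if result >> d & 1]

    def query_or(self, *keywords):
        # Disjunction: bitwise OR across all keyword bitmaps.
        result = 0
        for kw in keywords:
            result |= self.lookup(kw)
        return [d for d in range(self.num_docs) if result >> d & 1]

index = BitmapIndex(num_docs=4)
index.add("alice", 0); index.add("alice", 2)
index.add("bob", 2); index.add("bob", 3)
print(index.query_and("alice", "bob"))  # → [2]
print(index.query_or("alice", "bob"))   # → [0, 2, 3]
```

In an SSE setting the keywords would be replaced by pseudorandom tokens and the bitmaps stored encrypted; the bitwise evaluation itself is what keeps the query cost low in constrained settings.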
In this deliverable from the SeCoHeat project, the profits that can be made with 1 MWh of electricity production capacity on existing ancillary service markets are evaluated for 2020 and 2021. Profits are evaluated for four different marginal production costs corresponding to the following fuels for a CHP power plant: waste (assumed fuel price: 0 kr/MWh), recycled wood (10 kr/MWh), wood chips (20 kr/MWh) and wood pellets (30 kr/MWh). The results show that, except for wood chips and wood pellets in 2020, the most profitable ancillary service markets are FFR (fast frequency response) and aFRR down (automatic frequency restoration reserves for down-regulation). The reasons are that (1) producers do not have to withhold capacity from the day-ahead market when they participate in these two markets and (2) producers get compensated for the capacity reserved for the ancillary service markets. For wood chips, the FFR market was the most profitable in 2020, followed by the mFRR down market (manual frequency restoration reserves for down-regulation). The reason the mFRR down market is more profitable than the aFRR down market for this fuel is that the profits from mFRR down depend on the avoided fuel costs, which are higher for wood chips than for waste and recycled wood. In 2021, all prices started increasing significantly, which decreased the relative profitability of the mFRR down market compared to other markets. For wood pellets, the mFRR down market was also the second most profitable market in 2020, for the same reasons. The most profitable one in 2020 was the mFRR up market (manual frequency restoration reserves for up-regulation). The reason is that the higher fuel price of these two fuels entails low participation in the day-ahead market. Therefore, withholding capacity from the day-ahead market to be able to participate on the mFRR up market brings additional profits.
In 2021, however, day-ahead prices started increasing significantly (a trend that continued into 2022) and the mFRR up market became the least profitable market for these two fuels. The profit evaluation performed in this deliverable is purely economic. It does not include the sector coupling to the heat sector (which entails a limitation of the available electricity production capacity but also the possibility to store heat if storage is available), nor does it include other technical limitations such as ramp rates. These aspects will be considered in follow-up work in this project. This report has been compiled within the scope of the project SeCoHeat - Sector coupling of district heating with the electricity system: profitability and operation. The project is financed by the Research and Development Foundation of Göteborg Energi.
Recently, there has been an increase in apartments with a large number of inhabitants, i.e., high residential density. This is partly due to a housing shortage in general but also to increased migration, particularly in suburbs of major cities. This paper specifies issues that might be caused by high residential density by investigating the technical parameters influenced in Swedish apartments that are likely to have high residential density. Interviews with 11 employees at housing companies were conducted to identify issues that might be caused by high residential density. Furthermore, simulations were conducted based on extreme conditions described in the interviews to determine the impact on energy use, indoor environmental quality, and moisture loads. In addition, the impact of measures to mitigate the identified issues was determined. Measures such as demand-controlled ventilation, an increased constant ventilation rate, and moisture buffering are shown to reduce the risk of thermal discomfort, mold growth, and diminished indoor air quality, while still achieving a lower energy use than in a normally occupied apartment. The results of this study can be used by authorities to formulate incentives and/or recommendations for housing owners to implement measures to ensure good indoor environmental quality for all, irrespective of residential density conditions.
During the last few years, there has been an increased number of overcrowded apartments, due to increased migration but also a housing shortage in general, particularly in the suburbs of major cities. The question is how the indoor environment in these apartments is affected by the high number of occupants and how the problems related to high residential density can be overcome. This paper aims to specify the problem by investigating and analysing the technical parameters influenced by residential density in Swedish apartments built between 1965 and 1974. To map the situation, 11 interviews with employees at housing companies were conducted. Based on extreme conditions described in the interviews, simulations of the indoor climate and moisture risks at some vulnerable parts of the constructions were made. Simulations focused on moisture loads and CO2 concentrations as functions of residential density and ventilation rate. Finally, measures to combat problems associated with overcrowding are suggested. The aim is that the results should be used by authorities to formulate incentives and/or recommendations for housing companies to take actions to ensure a good indoor environment for all, irrespective of residential density conditions.
Characterization of the surface activity of previously obtained polymerizable dialkyl maleates is performed to find out the relation between the structure of the surfactants and their performance. These polymerizable surfactants were synthesized for use in emulsion polymerization. Three groups of dialkyl maleates (nonionic, cationic and zwitterionic) with different chain lengths of the hydrophobic alkyl groups are investigated. Critical micelle concentration (cmc) values are determined for the water-soluble surfactants. It is found that the cmc decreases with increasing chain length of the hydrophobic alkyl group. For the nonionic and cationic surfactants, the interfacial tension at the interface between water and dodecane is measured. Droplet size in oil-in-water (O/W) emulsions is determined for all given surfactants. Cationic and zwitterionic dialkyl maleates with the longest investigated alkyl chains (R = C16H33, C17H35) provide good stability of O/W emulsions. To compare the obtained results, measurements with well-known surfactants, the nonionic nonylphenol-poly(ethylene oxide) (NPEO10) and the cationic hexadecyltrimethylammonium bromide (CTAB), are performed.
Deep Learning (DL) techniques are at the heart of most future advanced software functions in Critical Autonomous AI-based Systems (CAIS), where they also represent a major competitive factor. Hence, the economic success of CAIS industries (e.g., automotive, space, railway) depends on their ability to design, implement, qualify, and certify DL-based software products under bounded effort/cost. However, there is a fundamental gap between Functional Safety (FUSA) requirements on CAIS and the nature of DL solutions. This gap stems from the development process of DL libraries and affects high-level safety concepts such as (1) explainability and traceability, (2) suitability for varying safety requirements, (3) FUSA-compliant implementations, and (4) real-time constraints. As a matter of fact, the data-dependent and stochastic nature of DL algorithms clashes with current FUSA practice, which instead builds on deterministic, verifiable, and pass/fail test-based software. The SAFEXPLAIN project tackles these challenges by providing a flexible approach to allow the certification, and hence adoption, of DL-based solutions in CAIS, building on: (1) DL solutions that provide end-to-end traceability, with specific approaches to explain whether predictions can be trusted and strategies to reach (and prove) correct operation, in accordance with certification standards; (2) alternative and increasingly sophisticated design safety patterns for DL with varying criticality and fault tolerance requirements; (3) DL library implementations that adhere to safety requirements; and (4) computing platform configurations, to regain determinism, and probabilistic timing analyses, to handle the remaining non-determinism.
The success of the P2P idea has created a huge diversity of approaches, among which overlay networks, for example, Gnutella, Kazaa, Chord, Pastry, Tapestry, P-Grid, or DKS, have received specific attention from both developers and researchers. A wide variety of algorithms, data structures, and architectures have been proposed. The terminologies and abstractions used, however, have become quite inconsistent, since the P2P paradigm has attracted people from many different communities, including networking, databases, distributed systems, graph theory, complexity theory, and biology. In this paper we propose a reference model for overlay networks which is capable of modeling different approaches in this domain in a generic manner. It is intended to allow researchers and users to assess the properties of concrete systems, to establish a common vocabulary for scientific discussion, to facilitate the qualitative comparison of the systems, and to serve as the basis for defining a standardized API to make overlay networks interoperable.
The effect of biofilm formation on passive stainless steel in seawater environments is of primary importance, since it leads to potential ennoblement of surfaces and subsequently to localized corrosion such as pitting and crevice corrosion. This study aims at developing an ecofriendly alginate biopolymer, containing both non-toxic calcium and a limited amount of biocidal zinc ions, that inhibits this effect. For this purpose, calcium alginate containing less than 1% zinc ions, localized in the vicinity of the steel surface in natural and renewed seawater, is shown to significantly reduce the ennoblement process of steel. After 1 month of immersion, a mass loss of only 4% of the active material is observed, thereby enabling long-term protection of steel in real environments.
Nanocellulose (NC)-based hybrid coatings and films containing CeO2 and SiO2 nanoparticles (NPs) to impart UV screening and hardness properties, respectively, were prepared by solvent casting. The NC film-forming component (75 wt % of the overall solids) was composed entirely of cellulose nanocrystals (CNCs) or of CNCs combined with cellulose nanofibrils (CNFs). Zeta potential measurements indicated that the four NP types (CNC, CNF, CeO2, and SiO2) were stably dispersed in water and negatively charged at pH values between 6 and 9. The combination of NPs within this pH range ensured uniform formulations and homogeneous coatings and films, which blocked UV light, the extent of which depended on film thickness and CeO2 NP content, while maintaining good transparency in the visible spectrum (∼80%). The addition of a low amount of CNFs (1%) reduced the film hardness, but this effect was compensated by the addition of SiO2 NPs. Chiral nematic self-assembly was observed in the mixed NC film; however, this ordering was disrupted by the addition of the oxide NPs. The roughness of the hybrid coatings was reduced by the inclusion of oxide NPs into the NC matrix perhaps because the spherical oxide NPs were able to pack into the spaces between cellulose fibrils. We envision these hybrid coatings and films in barrier applications, photovoltaics, cosmetic formulations, such as sunscreens, and for the care and maintenance of wood and glass surfaces, or other surfaces that require a smooth, hard, and transparent finish and protection from UV damage.
counterions in the suspensions. The results suggest that there is a threshold surface charge density (∼0.3%S) above which effective volume considerations are dominant across the concentration range relevant to liquid crystalline phase formation. Above this threshold value, phase separation occurs at the same effective volume fraction of CNCs (∼10 vol %), with a corresponding increase in critical concentration due to the decrease in effective diameter that occurs with increasing surface charge. Below or near this threshold value, the formation of end-to-end aggregates may favor gelation and interfere with ordered phase formation.
From a circular economy perspective, one-pot strategies for the isolation of cellulose nanomaterials at a high yield and with multifunctional properties are attractive. Here, the effects of lignin content (bleached vs unbleached softwood kraft pulp) and sulfuric acid concentration on the properties of crystalline lignocellulose isolates and their films are explored. Hydrolysis at 58 wt % sulfuric acid resulted in both cellulose nanocrystals (CNCs) and microcrystalline cellulose at a relatively high yield (>55%), whereas hydrolysis at 64 wt % gave CNCs at a lower yield (<20%). CNCs from the 58 wt % hydrolysis were more polydisperse than those from the 64 wt % hydrolysis and had a higher average aspect ratio (1.5-2×), a lower surface charge (2×), and a higher shear viscosity (100-1000×). Hydrolysis of unbleached pulp additionally yielded spherical nanoparticles (NPs) that were <50 nm in diameter and identified as lignin by nanoscale Fourier transform infrared spectroscopy and IR imaging. Chiral nematic self-organization was observed in films from CNCs isolated at 64 wt % but not from the more heterogeneous CNC qualities produced at 58 wt %. All films degraded to some extent under simulated sunlight trials, but these effects were less pronounced in lignin-NP-containing films, suggesting a protective feature, although the hemicellulose content and CNC crystallinity may be implicated as well. Finally, heterogeneous CNC compositions obtained at a high yield and with improved resource efficiency are suggested for specific nanocellulose uses, for instance, as thickeners or reinforcing fillers, representing a step toward the development of application-tailored CNC grades.
The biotechnological applications of cellulose nanocrystals (CNCs) continue to grow due to their sustainable nature, impressive mechanical, rheological, and emulsifying properties, upscaled production capacity, and compatibility with other materials, such as protein and polysaccharides. In this study, hydrogels from CNCs and pectin, a plant cell wall polysaccharide broadly used in food and pharma, were produced by calcium ion-mediated internal ionotropic gelation (IG). In the absence of pectin, a minimum of 4 wt% CNC was needed to produce self-supporting gels by internal IG, whereas the addition of pectin at 0.5 wt% enabled hydrogel formation at CNC contents as low as 0.5 wt%. Experimental data indicate that CNCs and pectin interact to give robust and self-supporting hydrogels at solid contents below 2.5 %. Potential applications of these gels could be as carriers for controlled release, scaffolds for cell growth, or wherever else distinct and porous network morphologies are required.
Proteoheparan sulphate can be adsorbed to a methylated silica surface in a monomolecular layer via its transmembrane hydrophobic protein core domain. As a result of electrostatic repulsion, its anionic glycosaminoglycan side chains are stretched out into the blood substitute solution, thereby representing one receptor site for specific lipoprotein binding through basic amino acid-rich residues within their apolipoproteins. The binding process was studied by ellipsometric techniques suggesting that high-density lipoprotein (HDL) has a high binding affinity and a protective effect on interfacial heparan sulphate proteoglycan layers with respect to low-density lipoprotein (LDL) and Ca2+ complexation. Low-density lipoprotein was found to deposit strongly at the proteoheparan sulphate-coated surface, particularly in the presence of Ca2+, apparently through complex formation 'proteoglycan-LDL-calcium'. This ternary complex build-up may be interpreted as arteriosclerotic nanoplaque formation on the molecular level responsible for the arteriosclerotic primary lesion. On the other hand, HDL bound to heparan sulphate proteoglycan protected against LDL deposition and completely suppressed calcification of the proteoglycan-lipoprotein complex. In addition, HDL was able to decelerate the ternary complex deposition. Therefore, HDL attached to its proteoglycan receptor sites is thought to raise a multidomain barrier, selection and control motif for transmembrane and paracellular lipoprotein uptake into the arterial wall. Although much remains unclear regarding the mechanism of lipoprotein depositions at proteoglycan-coated surfaces, it seems clear that the use of such systems offers possibilities for investigating lipoprotein deposition at a 'nanoscopic' level under close to physiological conditions. 
In particular, Ca2+-promoted LDL deposition and the protective effect of HDL even at high Ca2+ and LDL concentrations agree well with previous clinical observations regarding risk and beneficial factors for early stages of atherosclerosis. Considering this, the system was tested on its reliability in a biosensor application in order to unveil possible acute pleiotropic effects of the lipid lowering drug fluvastatin. The very low-density lipoprotein (VLDL)/intermediate-density lipoprotein (IDL)/LDL plasma fraction from a high risk patient with dyslipoproteinaemia and type 2 diabetes mellitus showed beginning arteriosclerotic nanoplaque formation already at a normal blood Ca2+ concentration, with a strong increase at higher Ca2+ concentrations. Fluvastatin, whether applied to the patient (one single 80 mg slow release matrix tablet) or acutely in the experiment (2.2 μmol L-1), markedly slowed down this process of ternary aggregational nanoplaque complexation at all Ca2+ concentrations used. This action resulted without any significant change in lipid concentrations of the patient. Furthermore, after ternary complex build-up, fluvastatin, similar to HDL, was able to reduce nanoplaque adsorption and size. These immediate effects of fluvastatin have to be taken into consideration while interpreting the clinical outcome of long-term studies.
The effects of divalent salts (CaCl2, MgCl2 and BaCl2) in promoting the adsorption of a weakly charged polyelectrolyte (polyacrylic acid, PAA, Mw ~ 250,000 g/mol) on mica surfaces, and their role in tuning the nature of interactions between such adsorbed polyelectrolyte layers, were studied using the interferometric surface forces apparatus. With mica surfaces in 3 mM MgCl2 solutions at pH ~8.0-9.0, the addition of 10 ppm PAA resulted in a long-range attractive bridging force and a short-range repulsive steric force. This force profile indicates a low surface coverage and weak adsorption. The range of the force can be related to the characteristic length scale RG of the polyelectrolyte chains using a scaling description. An increase of the PAA concentration to 50 ppm changed the attractive force profile to a monotonic, long-range repulsive interaction extending up to 600 Å, due to the increased surface coverage of polyelectrolyte chains on the mica surfaces. Comparison of the measured forces with a scaling mean-field model suggests that the adsorbed polyelectrolyte chains are stretched, which eventually gives rise to a polyelectrolyte brush-like structure. When the mica surfaces were preincubated in 3 mM CaCl2 at pH ~8.0-9.0, in contrast to the case of 3 mM MgCl2, the addition of 10 ppm PAA resulted in a more complex force profile: long-range repulsive forces extending up to 800 Å, followed by an attractive force regime and a second repulsive force regime at shorter separations. The long-range electrosteric forces can be attributed to strong adsorption of polyelectrolyte chains on the mica surfaces (high surface coverage), which is facilitated by the presence of Ca2+ ions, while the intermediate-range attractive forces can be ascribed to Ca2+-assisted bridging between adsorbed polyelectrolyte chains. Various relaxation processes present in this system are also interesting to note.
In contrast to both MgCl2 and CaCl2 systems, with mica surfaces in 3 mM BaCl2 solution at pH ~8.0-9.0, the addition of 10 ppm PAA resulted in precipitation of polyelectrolyte chains on mica surfaces, resulting in an extremely long-range monotonic repulsive force profile. In summary, our study showed that divalent counterions (Mg2+, Ca2+, and Ba2+) exhibit significantly different behavior in promoting PAA adsorption on mica surfaces, modifying and controlling various surface interactions.
Wind-induced dynamic excitation is becoming a governing design action determining size and shape of modern Tall Timber Buildings (TTBs). The wind actions generate dynamic loading, causing discomfort or annoyance for occupants due to the perceived horizontal sway, i.e. vibration serviceability failure. Although some TTBs have been instrumented and measured to estimate their key dynamic properties (natural frequencies and damping), no systematic evaluation of dynamic performance pertinent to wind loading has been performed for the new and evolving construction technology used in TTBs. The DynaTTB project, funded by the ForestValue research program, mixes on-site measurements on existing buildings excited by heavy shakers, for identification of the structural system, with laboratory identification of the mechanical features of building elements, coupled with numerical modelling of timber structures. The goal is to identify and quantify the causes of vibration energy dissipation in modern TTBs and provide key elements to FE modellers.
The first building, from a list of 8, was modelled and tested at full scale in December 2019. Some results are presented in this paper. Four other buildings will be modelled and tested in spring 2020.
Wind-induced dynamic excitation is a governing design action determining size and shape of modern Tall Timber Buildings (TTBs). The wind actions generate dynamic loading, causing discomfort or annoyance for occupants due to the perceived horizontal sway, i.e. vibration serviceability problem. Although some TTBs have been instrumented and measured to estimate their key dynamic properties (eigenfrequencies, mode shapes and damping), no systematic evaluation of dynamic performance pertinent to wind loading had been performed for the new and evolving construction technologies used in TTBs. The DynaTTB project, funded by the ForestValue research program, mixed on site measurements on existing buildings excited by mass inertia shakers (forced vibration) and/or the wind loads (ambient vibration), for identification of the structural system, with laboratory identification of building elements mechanical features, coupled with numerical modelling of timber structures. The goal is to identify and quantify the causes of vibration energy dissipation in modern TTBs and provide key elements to finite element models. This paper presents an overview of the results of the project and the proposed Guidelines for design of TTBs in relation to their dynamic properties.
We report on the synthesis, microstructure and mass transport properties of a colloidal hydrogel self-assembled from a mixture of colloidal silica and nontronite clay plates at different particle concentrations. The gel structure had uniaxial long-range anisotropy caused by alignment of the clay particles in a strong external magnetic field. After gelation the colloidal silica covered the clay particle network, fixing the orientation of the clay plates. Comparing gels with a clay concentration between 0 and 0.7 vol%, the magnetically oriented gels had a maximum water permeability and self-diffusion coefficient at 0.3 and 0.7 vol% clay, respectively. Hence the specific clay concentration resulting in the highest liquid flux was pressure dependent. This study gives new insight into the effect of anisotropy, particle concentration and bound water on mass transport properties in nano/microporous materials. Such findings merit consideration when designing porous composite materials for use in, for example, fuel cells, chromatography and membrane technology.
The Internet traffic volume continues to grow at a great rate, now driven by video and TV distribution. For network operators it is important to avoid congestion in the network and to meet service level agreements with their customers. This thesis presents work on two methods operators can use to reduce link loads in their networks: traffic engineering and content caching. The thesis studies access patterns for TV and video and the potential for caching. The investigation is done both by simulation and by analysis of logs from a large TV-on-Demand system over four months. The results show that there is a small set of programs that account for a large fraction of the requests and that a comparatively small local cache can be used to significantly reduce the peak link loads during prime time. The investigation also demonstrates how the popularity of programs changes over time and shows that the access pattern in a TV-on-Demand system very much depends on the content type. For traffic engineering, the objective is to avoid congestion in the network and to make better use of available resources by adapting the routing to the current traffic situation. The main challenge for traffic engineering in IP networks is to cope with the dynamics of Internet traffic demands. This thesis proposes L-balanced routings, which route the traffic on the shortest paths possible while making sure that no link is utilised to more than a given level L. L-balanced routing gives efficient routing of traffic and controlled spare capacity to handle unpredictable changes in traffic. We present an L-balanced routing algorithm and a heuristic search method for finding L-balanced weight settings for the legacy routing protocols OSPF and IS-IS. We show that the search and the resulting weight settings work well in real network scenarios.
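The L-balance criterion described above can be illustrated with a small sketch, assuming a toy topology: demands are routed on shortest paths and the routing is accepted only if no link carries more than the fraction L of its capacity. The graph, capacities, and demand volumes below are invented for illustration; this is not the thesis's search algorithm.

```python
# Illustrative check of the L-balance property: traffic follows shortest
# paths, but no link may be utilised above a fraction L of its capacity.

import heapq
from collections import defaultdict

def shortest_path(graph, src, dst):
    # Dijkstra over a weighted directed graph given as {u: {v: weight}}.
    dist = {src: 0}
    prev = {}
    heap = [(0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue
        for v, w in graph[u].items():
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                prev[v] = u
                heapq.heappush(heap, (d + w, v))
    path = [dst]
    while path[-1] != src:
        path.append(prev[path[-1]])
    return list(reversed(path))

def is_l_balanced(graph, capacities, demands, L):
    # Route each demand on its shortest path and accumulate link loads.
    load = defaultdict(float)
    for (src, dst), volume in demands.items():
        path = shortest_path(graph, src, dst)
        for u, v in zip(path, path[1:]):
            load[(u, v)] += volume
    # L-balanced: every link utilisation stays at or below L.
    return all(load[e] <= L * capacities[e] for e in load)

graph = {"A": {"B": 1, "C": 1}, "B": {"D": 1}, "C": {"D": 1}, "D": {}}
capacities = {("A", "B"): 10, ("B", "D"): 10, ("A", "C"): 10, ("C", "D"): 10}
demands = {("A", "D"): 6}
print(is_l_balanced(graph, capacities, demands, L=0.7))  # → True
print(is_l_balanced(graph, capacities, demands, L=0.5))  # → False
```

The heuristic weight search in the thesis would, in this picture, adjust the link weights until such a check passes for the target level L while keeping the paths shortest under those weights.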
Measurement and analysis of real traffic are important to gain knowledge about the characteristics of the traffic. Without measurements, it is impossible to build realistic traffic models. Data traffic has only recently been found to have self-similar properties. In this thesis work, traffic captured on the network at SICS and on the Supernet is shown to have this fractal-like behaviour. The traffic is also examined with respect to which protocols and packet sizes are present and in what proportions. In the SICS trace most packets are small, TCP is shown to be the predominant transport protocol, and NNTP the most common application. In contrast to this, large UDP packets sent between ports outside the well-known range dominate the Supernet traffic. Finally, characteristics of the client side of the WWW traffic are examined more closely. In order to extract useful information from the packet trace, web browsers' use of TCP and HTTP is investigated, including new features in HTTP/1.1 such as persistent connections and pipelining. Empirical probability distributions are derived describing session lengths, time between user clicks, and the amount of data transferred due to a single user click. These probability distributions make up a simple model of WWW sessions.
Several studies of Internet traffic have shown that a small percentage of the flows dominates the traffic. This is often referred to as the mice-and-elephants phenomenon. It has been proposed that this might be one of very few invariants of Internet traffic and that this property could be used for traffic engineering purposes. The idea is that one could, in a scalable way, control a major part of the traffic by keeping track of only a small number of flows. For this to work, however, the large flows must also be stable, in the sense that they remain among the largest flows over long periods of time. In this work we analyse packet traces of Internet traffic and study the temporal characteristics of large aggregated traffic flows defined by destination address prefixes.
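The kind of aggregation described, grouping traffic by destination prefix and picking out the few "elephants" that carry most of the bytes, can be sketched as follows. The packet records and the 80% share threshold are invented for illustration, not taken from the analysed traces.

```python
# Sketch: aggregate per-packet byte counts by destination /24 prefix and
# report the smallest set of prefixes that carries a given share of the
# total traffic - the "elephants".

from collections import Counter
from ipaddress import ip_network

def prefix_of(dst, length=24):
    # Map a destination address to its enclosing /24 network.
    return str(ip_network(f"{dst}/{length}", strict=False))

def elephants(packets, share=0.8):
    volume = Counter()
    for dst, nbytes in packets:
        volume[prefix_of(dst)] += nbytes
    total = sum(volume.values())
    picked, covered = [], 0
    # Greedily take the largest prefixes until the share is covered.
    for prefix, nbytes in volume.most_common():
        if covered >= share * total:
            break
        picked.append(prefix)
        covered += nbytes
    return picked

packets = [
    ("10.0.0.5", 1500), ("10.0.0.9", 1500), ("10.0.0.77", 1500),
    ("10.0.1.2", 400), ("192.168.7.3", 100),
]
print(elephants(packets, share=0.8))  # → ['10.0.0.0/24']
```

The stability question studied in the paper then amounts to running this aggregation over successive time windows and checking how often the same prefixes stay in the elephant set.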
Large network operators have thousands or tens of thousands of access aggregation links that they need to manage and dimension properly. Measuring and understanding the traffic characteristics on these types of links is therefore essential. What do the traffic intensity characteristics look like on different timescales, from days down to milliseconds? How do the characteristics differ between links with the same capacity but with different types of clients and access technologies? How do the traffic characteristics differ from those on core network links? These are the types of questions we set out to investigate in this paper. We present the results of packet-level measurements on three different 1 Gbit/s aggregation links in an operational IP network. We see large differences in traffic characteristics between the three links. We observe highly skewed link load probability densities on timescales relevant for buffering (i.e., 10 milliseconds). We demonstrate the existence of large traffic spikes on short timescales (10-100 ms) and show their impact on link delay. We also find that these traffic bursts are often caused by only one or a few IP flows.
Connected vehicles can make road traffic safer and more efficient, but require the mobile networks to handle time-critical applications. Using the MONROE mobile broadband measurement testbed we conduct a multi-access measurement study on buses. The objective is to understand what network performance connected vehicles can expect in today's mobile networks, in terms of transaction times and availability. The goal is also to understand to what extent access to several operators in parallel can improve communication performance. In our measurement experiments we repeatedly transfer warning messages from moving buses to a stationary server. We triplicate the messages and always perform three transactions in parallel over three different cellular operators. This creates a dataset with which we can compare the operators in an objective way and with which we can study the potential for multi-access. In this paper we use the triple-access dataset to evaluate single-access selection strategies, where one operator is chosen for each transaction. We show that if we have access to three operators and for each transaction choose the operator with the best access technology and best signal quality, then we can significantly improve availability and transaction times compared to the individual operators. The median transaction time improves by 6% compared to the best single operator and by 61% compared to the worst single operator. The 90th-percentile transaction time improves by 23% compared to the best single operator and by 65% compared to the worst single operator.
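The single-access selection strategy described, choosing per transaction the operator with the best access technology and, as a tie-breaker, the best signal quality, can be sketched roughly as below. The record layout, field names, technology ranking, and numbers are all invented for illustration; the paper's dataset and metrics are not reproduced here.

```python
# Sketch of single-access selection over a triple-access dataset: each
# transaction has three parallel measurements (one per operator), and we
# pick the operator with the best (technology, signal) pair.

TECH_RANK = {"LTE": 2, "3G": 1, "2G": 0}  # assumed ordering, higher is better

def select_operator(measurements):
    # measurements: list of per-operator dicts for one transaction.
    return max(measurements, key=lambda m: (TECH_RANK[m["tech"]], m["rssi"]))

def median(values):
    s = sorted(values)
    n = len(s)
    return s[n // 2] if n % 2 else (s[n // 2 - 1] + s[n // 2]) / 2

# Two transactions, each measured over operators A, B, C in parallel.
transactions = [
    [{"op": "A", "tech": "LTE", "rssi": -80, "time": 0.9},
     {"op": "B", "tech": "3G", "rssi": -70, "time": 2.5},
     {"op": "C", "tech": "LTE", "rssi": -95, "time": 1.4}],
    [{"op": "A", "tech": "3G", "rssi": -85, "time": 3.0},
     {"op": "B", "tech": "LTE", "rssi": -75, "time": 0.8},
     {"op": "C", "tech": "2G", "rssi": -90, "time": 6.0}],
]

chosen = [select_operator(t) for t in transactions]
print([m["op"] for m in chosen])                       # → ['A', 'B']
print(round(median([m["time"] for m in chosen]), 2))   # → 0.85
```

Evaluating such a strategy against each single operator's median and 90th-percentile transaction times is what yields improvement figures of the kind reported in the paper.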
Today video and TV distribution dominate Internet traffic and the increasing demand for high-bandwidth multimedia services puts pressure on Internet service providers. In this paper we simulate TV distribution with time-shift and investigate the effect of introducing a local cache close to the viewers. We study what impact TV program popularity, program set size, cache replacement policy and other factors have on the caching efficiency. The simulation results show that introducing a local cache close to the viewers significantly reduces the network load from TV-on-Demand services. By caching 4% of the program volume we can decrease the peak load during prime time by almost 50%. We also show that the TV program type and how program popularity changes over time can have a big influence on cache hit ratios and the resulting link loads.
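A cache simulation of this kind can be sketched in a few lines: replay a request sequence against a small cache and measure the hit ratio. The request sequence below and the use of an LRU replacement policy are illustrative assumptions, not the paper's exact simulator or workload.

```python
# Minimal cache simulation sketch: replay program requests against an LRU
# cache and report the fraction of requests served from the cache.

from collections import OrderedDict

def simulate_lru(requests, cache_size):
    cache = OrderedDict()
    hits = 0
    for program in requests:
        if program in cache:
            hits += 1
            cache.move_to_end(program)      # mark as most recently used
        else:
            if len(cache) >= cache_size:
                cache.popitem(last=False)   # evict least recently used
            cache[program] = True
    return hits / len(requests)

# Skewed popularity: a few programs receive most of the requests.
requests = ["p1", "p2", "p1", "p3", "p1", "p2", "p4", "p1"]
print(simulate_lru(requests, cache_size=2))  # → 0.25
print(simulate_lru(requests, cache_size=3))  # → 0.5
```

In a full study the hit ratio would be translated into link load over time, which is how a cache holding a few percent of the program volume can cut the prime-time peak load substantially.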
IPTV, where television is distributed over the Internet Protocol in a single operator network, has become popular and widespread. Many telecom and broadband companies have become TV providers and distribute TV channels using multicast over their backbone networks. IPTV also means an evolution to time-shifted television, where viewers can now often choose to watch programs at any time. However, distributing individual TV streams to each viewer requires a lot of bandwidth and is a big challenge for TV operators. In this paper we present an empirical IPTV workload model, simulate IPTV distribution with time-shift, and show that local caching can limit the bandwidth requirements significantly.
The focus of this paper is on traffic engineering in ambient networks. We describe and categorize different alternatives for making the routing more adaptive to the current traffic situation and discuss the challenges that ambient networks pose for traffic engineering methods. One of the main objectives of traffic engineering is to avoid congestion by controlling and optimising the routing function, or in short, to put the traffic where the capacity is. The main challenge for traffic engineering in ambient networks is to cope with the dynamics of both topology and traffic demands. Mechanisms are needed that can handle traffic load dynamics in scenarios with sudden changes in traffic demand and dynamically distribute traffic to benefit from available resources. Trade-offs between optimality, stability and signaling overhead that are important for traffic engineering methods in the fixed Internet become even more critical in a dynamic ambient environment.
Communication networks are vital for society and network availability is therefore crucial. There is a huge potential in using network telemetry data and machine learning algorithms to proactively detect anomalies and remedy problems before they affect the customers. In practice, however, there are many steps on the way to get there. In this paper we present ongoing development work on efficient data collection pipelines, anomaly detection algorithms and analysis of traffic patterns and predictability.
Dynamic Transfer Mode (DTM) is a ring-based MAN technology that provides a channel abstraction with a dynamically adjustable capacity. TCP is a reliable end-to-end transport protocol capable of adjusting its rate. The primary goal of this work is to investigate the coupling between dynamically allocating bandwidth to TCP flows and the effect this has on the congestion control mechanism of TCP. In particular we wanted to find scenarios where this scheme does not work, where either all the link capacity is allocated to TCP or congestion collapse occurs and no capacity is allocated to TCP. We have created a simulation environment using ns-2 to investigate TCP over networks which have a variable capacity link. We begin with a single TCP Tahoe flow over a fixed bandwidth link and progressively add more complexity to understand the behaviour of dynamically adjusting link capacity to TCP and vice versa.
Today increasingly large volumes of TV and video are distributed over IP-networks and over the Internet. It is therefore essential for traffic and cache management to understand TV program popularity and access patterns in real networks. In this paper we study access patterns in a large TV-on-Demand system over four months. We study user behaviour and program popularity and its impact on caching. The demand varies a lot in daily and weekly cycles. There are large peaks in demand, especially on Friday and Saturday evenings, that need to be handled. We see that the cacheability, the share of requests that are not first-time requests, is very high. Furthermore, there is a small set of programs that account for a large fraction of the requests. We also find that the share of requests for the most popular programs grows during prime time, and the change rate among them decreases. This is important for caching: the cache hit ratio increases during prime time when the demand is the highest, and caching makes the biggest difference when it matters most. We also study the popularity (in terms of number of requests and rank) of individual programs and how that changes over time. Finally, we see that the type of programs offered determines what the access pattern will look like.
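The cacheability metric used above, the share of requests that are not first-time requests, can be computed directly from a request log. The minimal sketch below assumes the log is a simple list of program identifiers; the actual TV-on-Demand trace format is not reproduced here.

```python
def cacheability(requests):
    """Share of requests that are repeats of an earlier request.
    An idealised cache of unlimited size could serve exactly these
    requests from cache."""
    seen = set()
    repeats = 0
    for prog in requests:
        if prog in seen:
            repeats += 1
        else:
            seen.add(prog)
    return repeats / len(requests)

# Hypothetical trace: 3 of the 6 requests repeat an earlier program.
trace = ["news", "show1", "news", "show2", "news", "show1"]
print(cacheability(trace))  # -> 0.5
```

This is an upper bound on the hit ratio of any real cache, since a finite cache may have evicted a program before it is requested again.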