1 - 12 of 12
  • 1.
    Al-Shishtawy, Ahmad
    et al.
    RISE, Swedish ICT, SICS, Computer Systems Laboratory.
    Asif Fayyaz, Muhammad
    Popov, Konstantin
    RISE, Swedish ICT, SICS, Computer Systems Laboratory.
    Vlassov, Vladimir
    Achieving Robust Self-Management for Large-Scale Distributed Applications, 2010. Report (Other academic)
    Abstract [en]

Autonomic managers are the main architectural building blocks for constructing self-management capabilities of computing systems and applications. One of the major challenges in developing self-managing applications is the robustness of the management elements that form autonomic managers. We believe that transparent handling of the effects of resource churn (joins/leaves/failures) on management should be an essential feature of a platform for self-managing large-scale dynamic distributed applications, because it facilitates the development of robust autonomic managers and hence improves the robustness of self-managing applications. This feature can be achieved by providing a robust management element abstraction that hides churn from the programmer. In this paper, we present a generic approach to achieving robust services that is based on finite state machine replication with dynamic reconfiguration of replica sets. We contribute a decentralized algorithm that maintains the set of nodes hosting service replicas in the presence of churn. We use this approach to implement robust management elements as robust services that can operate despite churn. Our decentralized algorithm uses peer-to-peer replica placement schemes to automate replicated state machine migration in order to tolerate churn. The algorithm exploits the lookup and failure detection facilities of a structured overlay network to manage the set of active replicas. Using the proposed approach, we can achieve a long-running and highly available service, without human intervention, in the presence of resource churn. To validate and evaluate our approach, we have implemented a prototype that includes the proposed algorithm.
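The replica-placement idea the abstract describes can be sketched with consistent hashing on an identifier ring — a minimal illustration of the technique, not the paper's actual decentralized algorithm; the hash scheme, ring size, and all names below are assumptions:

```python
import hashlib

RING = 2**32  # assumed identifier-ring size

def node_id(name: str) -> int:
    # Position a node (or service key) on the identifier ring.
    return int(hashlib.sha256(name.encode()).hexdigest(), 16) % RING

def replica_set(service_key: str, nodes: set, k: int = 3) -> list:
    # The k live nodes that follow the service key clockwise on the ring
    # host the replicated-state-machine replicas for that service.
    key = node_id(service_key)
    return sorted(nodes, key=lambda n: (node_id(n) - key) % RING)[:k]

def handle_churn(service_key: str, old_nodes: set, new_nodes: set, k: int = 3) -> dict:
    # Reconfiguration step: compare placements before and after churn and
    # report which replicas must be created (by state transfer) or retired.
    before = set(replica_set(service_key, old_nodes, k))
    after = set(replica_set(service_key, new_nodes, k))
    return {"create": after - before, "retire": before - after}
```

Because every node can compute `replica_set` locally from overlay lookups, no central coordinator is needed to decide where replicas migrate after a join or failure.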

  • 2.
    Bhatti, Muhammad Khurram
    et al.
    Information Technology University (ITU), Pakistan.
    Oz, Isil
    Izmir Institute of Technology, Turkey.
    Amin, Sarah
    Information Technology University (ITU), Pakistan.
    Mushtaq, Maria
    Information Technology University (ITU), Pakistan.
    Farooq, Umer
    Dhofar University, Oman.
    Popov, Konstantin
    RISE - Research Institutes of Sweden, ICT, SICS.
    Brorsson, Mats
    KTH Royal Institute of Technology, Sweden.
    Locality-aware task scheduling for homogeneous parallel computing systems, 2018. In: Computing, ISSN 0010-485X, E-ISSN 1436-5057, Vol. 100, no 6, p. 557-595. Article in journal (Refereed)
    Abstract [en]

In systems with a complex many-core cache hierarchy, exploiting data locality can significantly reduce the execution time and energy consumption of parallel applications. Locality can be exploited at various hardware and software layers. For instance, by implementing private and shared caches in a multi-level fashion, recent hardware designs are already optimised for locality. However, this is of little use if software scheduling does not cast the execution in a manner that exploits the locality available in the programs themselves. Since programs for parallel systems consist of tasks executed simultaneously, task scheduling becomes crucial for performance on multi-level cache architectures. This paper presents a heuristic algorithm for homogeneous multi-core systems called locality-aware task scheduling (LeTS). The LeTS heuristic is a work-conserving algorithm that takes into account both locality and load balancing in order to reduce the execution time of target applications. The working principle of LeTS is based on two distinctive phases, namely: the working task group formation phase (WTG-FP) and the working task group ordering phase (WTG-OP). The WTG-FP forms groups of tasks in order to capture data reuse across tasks, while the WTG-OP determines an optimal order of execution for task groups that minimizes the reuse distance of shared data between tasks. We have performed experiments using randomly generated task graphs, varying three major performance parameters: (1) communication-to-computation ratio (CCR) between 0.1 and 1.0, (2) application size, i.e., task graphs comprising 50, 100, and 300 tasks per graph, and (3) number of cores, with 2-, 4-, 8-, and 16-core execution scenarios. We have also performed experiments using selected real-world applications. The LeTS heuristic reduces the overall execution time of applications by exploiting inter-task data locality. Results show that LeTS outperforms state-of-the-art algorithms in amortizing inter-task communication cost.
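The two phases can be illustrated with a toy greedy sketch — hypothetical code, not the published LeTS algorithm; task names, the data-set representation, and the capacity parameter are all assumptions:

```python
def form_groups(task_data: dict, capacity: int) -> list:
    # Greedy analogue of the group-formation phase (WTG-FP): place each task
    # into the first open group it shares data with, capping group size at
    # the number of tasks one core's cache is assumed to hold.
    groups: list = []
    for task, data in task_data.items():
        for g in groups:
            if any(task_data[t] & data for t in g) and len(g) < capacity:
                g.append(task)
                break
        else:
            groups.append([task])
    return groups

def order_groups(groups: list, task_data: dict) -> list:
    # Greedy analogue of the ordering phase (WTG-OP): after the first group,
    # always run next the group sharing the most data with the one just
    # executed, shortening the reuse distance of that shared data.
    def footprint(g):
        return set().union(*(task_data[t] for t in g))
    order, remaining = [groups[0]], list(groups[1:])
    while remaining:
        last = footprint(order[-1])
        nxt = max(remaining, key=lambda g: len(footprint(g) & last))
        remaining.remove(nxt)
        order.append(nxt)
    return order
```

Grouping first captures reuse across tasks; ordering then keeps groups with overlapping data footprints adjacent in the schedule.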

  • 3. De Palma, Noel
    et al.
    Parlavanzas, Nikos
    Popov, Konstantin
    RISE, Swedish ICT, SICS, Computer Systems Laboratory.
    Brand, Per
    RISE, Swedish ICT, SICS, Computer Systems Laboratory.
    Vlassov, Vladimir
    Tools for Autonomic Computing, 2009. In: 5th International Conference on Autonomic and Autonomous Systems (ICAS 2009), IEEE Computer Society, 2009, p. 313-320. Conference paper (Refereed)
  • 4.
    Faxén, Karl-Filip
    et al.
    RISE, Swedish ICT, SICS.
    Popov, Konstantin
    RISE, Swedish ICT, SICS, Computer Systems Laboratory.
    Janson, Sverker
    RISE, Swedish ICT, SICS, Computer Systems Laboratory.
    Albertsson, Lars
    RISE, Swedish ICT, SICS.
    Embla - Data Dependence Profiling for Parallel Programming, 2008. In: Proceedings of the 2008 International Conference on Complex, Intelligent and Software Intensive Systems, 2008, p. 780-785. Conference paper (Refereed)
    Abstract [en]

With the proliferation of multicore processors, there is an urgent need for tools and methodologies supporting the parallelization of existing applications. In this paper, we present a novel tool for aiding programmers in parallelizing programs. The tool, Embla, is based on the Valgrind framework and allows the user to discover the data dependences in a sequential program, thereby exposing opportunities for parallelization. Embla performs an off-line dynamic analysis and records dependences as they arise during program execution. It reports an optimistic view of parallelizable sequences, ignoring dependences that do not arise during execution. Moreover, since the tool instruments the machine code of the program, it is largely language independent. Since Embla finds the dependences that occur for particular executions, the confidence one would assign to its results depends on whether different executions yield different (bad) or largely the same (good) dependences. We present a preliminary investigation into this issue using 84 different inputs to the SPEC CPU 2006 benchmark 403.gcc. The results indicate a strong correlation between coverage and finding dependences; executing the entire program is likely to reveal all dependences.
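The core of such a dynamic dependence analysis can be sketched as a last-writer shadow memory over an instruction-level memory trace — a toy illustration under an assumed trace format, not Embla's implementation:

```python
def record_dependences(trace):
    # Off-line dynamic analysis in miniature: replay a memory trace and
    # record read-after-write dependences between the instructions that
    # actually touched each address, as Embla does at machine-code level.
    last_writer: dict = {}  # shadow memory: address -> last writing instruction
    deps: set = set()       # (producer, consumer) dependence pairs
    for instr, op, addr in trace:
        if op == "r" and addr in last_writer:
            deps.add((last_writer[addr], instr))
        elif op == "w":
            last_writer[addr] = instr
    return deps
```

Running the same analysis over traces from several inputs and comparing the resulting dependence sets mirrors the coverage experiment described above.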

  • 5. Groleau, William
    et al.
    Vlassov, Vladimir
    Popov, Konstantin
    RISE, Swedish ICT, SICS, Computer Systems Laboratory.
    Towards Semantics-Based Resource Discovery for the Grid, 2005. Conference paper (Refereed)
  • 6. Nimar, Gustaf
    et al.
    Vlassov, Vladimir
    Popov, Konstantin
    RISE, Swedish ICT, SICS, Computer Systems Laboratory.
    Practical Experience in Building an Agent System for Semantics-Based Provision and Selection of Grid Services, 2005. Conference paper (Refereed)
  • 7.
    Oz, Isil
    et al.
    Izmir Institute of Technology, Turkey.
    Bhatti, Mohammad. K.
    Information Technology University (ITU), Pakistan.
    Popov, Konstantin
    RISE - Research Institutes of Sweden, ICT, SICS.
    Brorsson, Mats
    KTH Royal Institute of Technology, Sweden.
    Regression-Based Prediction for Task-Based Program Performance, 2019. In: Journal of Circuits, Systems and Computers, ISSN 0218-1266, Vol. 8, no 4, article id 1950060. Article in journal (Refereed)
    Abstract [en]

As multicore systems evolve by increasing the number of parallel execution units, parallel programming models have been released to exploit parallelism in applications. The task-based programming model uses task abstractions to specify parallel tasks and schedules tasks onto processors at runtime. To increase efficiency and achieve the highest performance, it is necessary to identify which runtime configuration is needed and how processor cores must be shared among tasks. Exploring the design space of all possible scheduling and runtime options, especially for large input data, becomes infeasible and calls for statistical modeling. Regression-based modeling determines the effects of multiple factors on a response variable and makes predictions based on statistical analysis. In this work, we propose a regression-based modeling approach to predict task-based program performance for different scheduling parameters with variable data size. We execute a set of task-based programs while varying the runtime parameters, and conduct a systematic measurement of the factors influencing execution time. Our approach uses executions with different configurations for a set of input data, and derives regression models to predict execution time for larger input data. Our results show that the regression models provide accurate predictions for validation inputs, with a mean error rate as low as 6.3%, and 14% on average across four task-based programs.
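As a hedged illustration of the approach — not the paper's models; the single `size/cores` feature and all names below are assumptions — a least-squares fit over runs with small inputs can extrapolate execution time for larger ones:

```python
def fit(samples):
    # Least-squares fit of: time ~ a * (size / cores) + b,
    # a deliberately tiny stand-in for the paper's multi-factor regressions.
    xs = [size / cores for size, cores, _ in samples]
    ys = [t for _, _, t in samples]
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)  # slope
    b = (sy - a * sx) / n                          # intercept
    return a, b

def predict(model, size, cores):
    # Predict execution time for an unseen (size, cores) configuration.
    a, b = model
    return a * (size / cores) + b
</```

Given `(size, cores, time)` triples from small runs, `predict(fit(samples), big_size, cores)` estimates the large-input execution time; the paper's models use more factors and validate against held-out inputs.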

  • 8.
    Popov, Konstantin
    et al.
    RISE, Swedish ICT, SICS, Computer Systems Laboratory.
    Brand, Per
    RISE, Swedish ICT, SICS, Computer Systems Laboratory.
    Haridi, Seif
    RISE, Swedish ICT, SICS, Computer Systems Laboratory.
    An efficient marshaling framework for distributed systems, 2003. Conference paper (Refereed)
    Abstract [en]

An efficient (un)marshaling framework is presented. It is designed for distributed applications implemented in languages such as C++. A marshaler/unmarshaler pair converts arbitrary structured data between its host and network representations. This technology can also be used for persistent storage. Our framework simplifies the design of efficient and flexible marshalers. Network latency is reduced by concurrent execution of (un)marshaling and network operations. The framework is used in Mozart, a distributed programming system that implements Oz, a multi-paradigm concurrent language. Mozart, including the implementation of the framework, is available at http://www.mozart-oz.org.
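A minimal sketch of marshaling arbitrary structured data — here Python lists stand in for C++ object graphs, and the tagged-token format is an assumption, not Mozart's wire format. Back-references let shared and even cyclic structure round-trip:

```python
def marshal(value):
    # Depth-first marshaler: each list gets an index on first visit; later
    # occurrences emit a back-reference instead of being serialized again.
    out: list = []
    seen: dict = {}  # id(obj) -> index of first occurrence
    def walk(v):
        if isinstance(v, list):
            if id(v) in seen:
                out.append(("ref", seen[id(v)]))
                return
            seen[id(v)] = len(seen)
            out.append(("list", len(v)))
            for item in v:
                walk(item)
        else:
            out.append(("atom", v))
    walk(value)
    return out

def unmarshal(stream):
    # Rebuild the graph; nodes are registered before their contents are read,
    # so back-references resolve even inside cycles.
    pos = 0
    table: list = []
    def read():
        nonlocal pos
        tag, arg = stream[pos]
        pos += 1
        if tag == "atom":
            return arg
        if tag == "ref":
            return table[arg]
        node: list = []
        table.append(node)
        for _ in range(arg):
            node.append(read())
        return node
    return read()
```

For example, a list that contains itself marshals to a finite token stream and unmarshals back to a cyclic list.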

  • 9.
    Popov, Konstantin
    et al.
    RISE, Swedish ICT, SICS, Computer Systems Laboratory.
    Rafea, Mahmoud
    Haridi, Seif
    RISE, Swedish ICT, SICS, Computer Systems Laboratory.
    Holmgren, Fredrik
    RISE, Swedish ICT, SICS, Computer Systems Laboratory.
    Brand, Per
    RISE, Swedish ICT, SICS, Computer Systems Laboratory.
    Vlassov, Vladimir
    Parallel agent-based simulation on a cluster of workstations, 2003. In: Parallel Processing Letters, ISSN 0129-6264, Vol. 13, no 4, p. 629-641. Article in journal (Refereed)
    Abstract [en]

We discuss a parallel implementation of an agent-based simulation. Our approach adapts a sequential simulator for large-scale simulation on a cluster of workstations. We target discrete-time simulation models that capture the behavior of Web users and Web sites. Web users are connected with each other in a graph resembling a social network. Web sites are connected in a similar graph. Users are stateful entities. At each time step, they exhibit behaviours such as visiting bookmarked sites, exchanging information about Web sites in the "word-of-mouth" style, and updating bookmarks. We study the real-world phenomenon of emergent aggregate behavior of the Internet population. The system distributes data among workstations, which allows large-scale simulations infeasible on a stand-alone computer. The model properties cause traffic between workstations proportional to partition sizes. Network latency is hidden by the concurrent simulation of multiple users. The system is implemented in Mozart, which provides multithreading, dataflow variables, component-based software development, and network transparency. Currently we can simulate up to 1 million Web users on 100 million Web sites using a cluster of 16 computers; a simulation step takes a few seconds, and for a problem of the same size, parallel simulation offers speedups between 11 and 14.
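One synchronous time step of such a model can be sketched as follows — a deterministic toy version (each user visits their newest bookmark and recommends it to every friend); the partition maps and all names are assumptions, not the Mozart implementation:

```python
def simulation_step(bookmarks, friends, site_partition, user_partition):
    # One discrete time step: every user visits a bookmarked site, then
    # passes it on "word-of-mouth" style to each friend, who bookmarks it.
    # Visits to sites held by another partition are tallied as the
    # inter-workstation traffic proportional to partition sizes.
    cross_traffic = 0
    recommendations = []
    for user, marks in bookmarks.items():
        site = marks[-1]  # simplistic behaviour: visit the newest bookmark
        if site_partition[site] != user_partition[user]:
            cross_traffic += 1
        for friend in friends.get(user, []):
            recommendations.append((friend, site))
    for friend, site in recommendations:  # apply updates synchronously
        if site not in bookmarks[friend]:
            bookmarks[friend].append(site)
    return cross_traffic
```

The returned tally of cross-partition visits corresponds to the per-step network traffic the abstract describes; hiding its latency behind the concurrent simulation of other users is the key to the reported speedups.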

  • 10.
    Popov, Konstantin
    et al.
    RISE, Swedish ICT, SICS, Computer Systems Laboratory.
    Vlassov, Vladimir
    Rafea, Mahmoud
    Holmgren, Fredrik
    RISE, Swedish ICT, SICS, Computer Systems Laboratory.
    Brand, Per
    RISE, Swedish ICT, SICS, Computer Systems Laboratory.
    Haridi, Seif
    RISE, Swedish ICT, SICS, Computer Systems Laboratory.
    Parallel agent-based simulation on a cluster of workstations, 2003. Conference paper (Refereed)
    Abstract [en]

We discuss a parallel implementation of an agent-based simulation. Our approach adapts a sequential simulator for large-scale simulation on a cluster of workstations. We target discrete-time simulation models that capture the behavior of the WWW. We study the real-world phenomenon of emergent aggregate behavior of the Internet population. The system distributes data among workstations, which allows large-scale simulations infeasible on a stand-alone computer. The model properties cause traffic between workstations proportional to partition sizes. Network latency is hidden by the concurrent simulation of multiple users. The system is implemented in Mozart, which provides multithreading, dataflow variables, component-based software development, and network transparency. Currently we can simulate up to 10^6 Web users on 10^4 Web sites using a cluster of 16 computers; a simulation step takes a few seconds, and for a problem of the same size, parallel simulation offers speedups between 11 and 14.

  • 11. Rafea, Mahmoud
    et al.
    Holmgren, Fredrik
    RISE - Research Institutes of Sweden, ICT, SICS.
    Popov, Konstantin
    RISE, Swedish ICT, SICS, Computer Systems Laboratory.
    Application architecture of the Internet simulation model: web word of mouth (WoM), 2002. Conference paper (Refereed)
  • 12.
    Rafea, Mahmoud
    et al.
    RISE, Swedish ICT, SICS.
    Popov, Konstantin
    RISE, Swedish ICT, SICS, Computer Systems Laboratory.
    Brand, Per
    RISE, Swedish ICT, SICS, Computer Systems Laboratory.
    Holmgren, Fredrik
    RISE, Swedish ICT, SICS, Computer Systems Laboratory.
    Parallel distributed algorithms of the beta-model of the small world graphs, 2003. Conference paper (Refereed)
    Abstract [en]

The research goal is to develop a large-scale agent-based simulation environment to support implementations of Internet simulation applications. Small-world (SW) graphs are used to model Web sites and social networks of Internet users. Each vertex represents the identity of a simple agent. In order to cope with scalability issues, we have to consider distributed parallel processing. The focus of this paper is to present two parallel-distributed algorithms for the construction of a particular type of SW graph called the beta-model. The first algorithm serializes the graph construction, while the second constructs the graph in parallel.
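For reference, the beta-model is the Watts-Strogatz construction: a ring lattice whose edges are each rewired with probability beta. A minimal sequential sketch (the paper's contribution is the parallel-distributed construction, which is not shown here):

```python
import random

def beta_model(n, k, beta, seed=0):
    # Start from a ring lattice where each vertex links to its k/2 nearest
    # neighbours on each side, then rewire each edge with probability beta
    # to a random endpoint (avoiding self-loops and duplicate edges).
    rng = random.Random(seed)
    edges = set()
    for v in range(n):
        for j in range(1, k // 2 + 1):
            edges.add((v, (v + j) % n))
    rewired = set()
    for u, v in sorted(edges):
        if rng.random() < beta:
            w = rng.randrange(n)
            while w == u or (u, w) in rewired or (w, u) in rewired:
                w = rng.randrange(n)
            rewired.add((u, w))
        else:
            rewired.add((u, v))
    return rewired
```

With beta = 0 the ring lattice is returned unchanged; with beta = 1 every edge is rewired, approximating a random graph, while intermediate values give the small-world regime of short paths with high clustering.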
