Results 1 - 10 of 1728. Search took: 0.033 seconds
[en] The paper addresses the issues of solving complex problems that require supercomputers or multiprocessor clusters, which are available to most researchers nowadays. Efficient distribution of high-performance computing resources according to actual application needs has been a major research topic since high-performance computing (HPC) technologies became widely adopted. At the same time, comfortable and transparent access to these resources has been a key user requirement. In this paper we discuss approaches to building a virtual private supercomputer available at the user's desktop: a virtual computing environment tailored specifically for a target user with a particular target application. We describe and evaluate possibilities for creating the virtual supercomputer based on lightweight virtualization technologies, and analyze the efficiency of our approach compared to traditional methods of HPC resource management.
[en] Requirements for access control, especially authorization, in practical computing environments are listed and discussed. These are used as the basis for a critique of existing access control mechanisms, which are found to present difficulties. A new mechanism, free of many of these difficulties, is then described and critiqued.
[en] Complete text of publication follows. Studies of space physics phenomena generally require exploiting multi-observatory and multi-instrument data. Given the volume and sometimes the complexity of the data, searching for events of interest, accessing the data, extracting sub-databases or building statistical databases is often time- and energy-consuming work. To help researchers analyse space physics data, several teams have developed tools or on-line services for public use; AMDA, QSAS and CLWEB are among them. AMDA (Automated Multi-Dataset Analysis) is a web-based facility for on-line analysis of space physics time series data coming from either its local database or distant ones. This tool allows the user to perform classical on-line manipulations such as data visualization, parameter computation or data extraction. AMDA also offers innovative functionality such as event search on the content of the data, in either a visual or an automated way, and the generation, use and management of time-tables. QSAS and CL are standalone, flexible software packages for detailed analysis of space physics data. These tools allow the user to perform high-level manipulations (distribution functions, partial moment computation, the CLUSTER curlometer) on many datasets (CLUSTER, THEMIS, GEOTAIL, STEREO). CLWEB is an on-line version of CL. We will show how these tools can be used in a complementary way by exploiting time-tables, which can be seen as a building block of upcoming virtual observatories in space physics.
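The content-based event search and time-table generation described in this abstract can be illustrated with a toy sketch (the function and variable names below are invented for illustration and are not AMDA's actual interface): a threshold scan over a time series yields a time-table of intervals where a parameter exceeds a given value.

```python
from datetime import datetime, timedelta

def event_search(times, values, threshold):
    """Scan a time series and return a time-table: a list of
    (start, end) intervals during which the parameter exceeds
    `threshold`. Purely illustrative; AMDA's real search engine
    and data model are not reproduced here."""
    intervals = []
    start = None
    for t, v in zip(times, values):
        if v > threshold and start is None:
            start = t                      # interval opens
        elif v <= threshold and start is not None:
            intervals.append((start, t))   # interval closes
            start = None
    if start is not None:                  # still open at end of data
        intervals.append((start, times[-1]))
    return intervals

# Toy one-minute-cadence series with two excursions above 4.0
t0 = datetime(2010, 1, 1)
times = [t0 + timedelta(minutes=i) for i in range(6)]
values = [1.0, 5.0, 6.0, 2.0, 7.0, 1.0]
table = event_search(times, values, threshold=4.0)
```

A real search would combine several parameters and conditions, but the core output, a list of start/end times, is the same kind of time-table the abstract describes.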
[en] During the running phase of the LHC experiments, the requirements on computing centers for the LHC experiments, including CMS, are very strict, because a huge amount of data must be processed at very high speed. For this purpose, a special distributed global grid infrastructure named WLCG has been constructed. The JINR LIT and 'Kharkov Institute of Physics and Technology' (NSC KIPT) computing centers, integrated into the WLCG infrastructure and certified for the CMS experiment as CMS Tier2 centers, demonstrate a high level of operational reliability. These centers' computing resources are mostly dedicated to the analysis of CMS data. A proper configuration of the WLCG elements at the JINR LIT and NSC KIPT grid sites (especially the Storage Elements) must be provided, and current versions of the CMS specialized software must be supported, to make possible the reconstruction and analysis of data registered by the CMS detector. The readiness of the JINR LIT and NSC KIPT computing centers for the LHC start-up is discussed, and the results of scale testing of the JINR LIT and NSC KIPT grid sites are presented.
[en] The rapid increase in the availability of high-performance, cost-effective RISC/UNIX workstations has been both a blessing and a curse. The blessing of having extremely powerful computing engines available on the desktop is well known to many users. The user has tremendous freedom, flexibility, and control of his environment. That freedom can, however, become the curse of distributed computing. The user must become a system manager to some extent; he must worry about backups, maintenance, upgrades, etc. Traditionally these activities have been the responsibility of a central computing group. The central computing group, however, may find that it can no longer provide all of the traditional services. With the plethora of workstations now found on so many desktops throughout the entire campus or lab, the central computing group may be swamped by support requests. This talk will address several of these computer support and management issues by discussing the approach taken at the Superconducting Super Collider Laboratory (SSCL). In addition, a brief review of the future directions of commercial products for distributed computing and management will be given.
[en] Among the key factors determining the processes of transcription and translation are the distributions of the electrostatic potentials of DNA, RNA and proteins. Computing electrostatic distributions and structure maps of biopolymers is time-consuming and requires large computational resources. We developed procedures for organizing massive calculations of electrostatic potentials and structure maps for biopolymers in a distributed computing environment (several thousand cores).
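Each grid point's potential is independent of every other, which is what makes the massive distribution described in this abstract possible. As a minimal sketch under assumed names (the paper's actual procedures are not reproduced here), the Coulomb potential V = k &middot; &Sigma; q_i / r_i at each point can be farmed out to worker processes:

```python
import math
from functools import partial
from multiprocessing import Pool

K = 8.9875517873681764e9  # Coulomb constant, N*m^2/C^2

def potential_at(point, charges):
    """Electrostatic potential V = K * sum(q_i / r_i) at one grid
    point. `charges` is a list of (q, (x, y, z)) tuples. Names and
    units here are illustrative, not taken from the paper."""
    v = 0.0
    for q, center in charges:
        v += q / math.dist(point, center)
    return K * v

def potential_map(grid, charges, workers=4):
    """Spread independent grid points across worker processes: a toy
    analogue of distributing the map over thousands of cores."""
    with Pool(workers) as pool:
        return pool.map(partial(potential_at, charges=charges), grid)
```

On an actual cluster the `Pool` would be replaced by batches of jobs over thousands of cores; note also that `Pool` requires an `if __name__ == "__main__":` guard under spawn-based start methods.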
[en] We present MPWide, a platform-independent communication library for performing message passing between computers. Our library allows coupling of several local message passing interface (MPI) applications through a long-distance network and is specifically optimized for such communications. The implementation is deliberately kept lightweight and platform independent, and the library can be installed and used without administrative privileges. The only requirements are a C++ compiler and at least one open port to a wide-area network on each site. In this paper we present the library, describe the user interface, present performance tests and apply MPWide in a large-scale cosmological N-body simulation on a network of two computers, one in Amsterdam and the other in Tokyo.
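MPWide itself is a C++ library, so the following Python stand-in is only a sketch of the underlying pattern (length-prefixed messages over a single open TCP port per site); the function names are illustrative, not MPWide's interface:

```python
import socket
import struct

def send_msg(sock, payload: bytes):
    """Length-prefixed send over one long-distance channel:
    an 8-byte big-endian length header, then the payload."""
    sock.sendall(struct.pack("!Q", len(payload)) + payload)

def recv_msg(sock) -> bytes:
    """Read the 8-byte length header, then exactly that many bytes."""
    header = _recv_exact(sock, 8)
    (length,) = struct.unpack("!Q", header)
    return _recv_exact(sock, length)

def _recv_exact(sock, n):
    # TCP recv may return fewer bytes than requested; loop until done.
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("peer closed the connection")
        buf += chunk
    return buf
```

Coupling two MPI applications then amounts to serializing boundary data on one site, pushing it through such a channel to the open port on the other site, and deserializing it there.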
[en] Distributed computing, and in particular Grid computing, enables physicists to use thousands of CPU-days' worth of computing every day by submitting thousands of compute jobs. Unfortunately, a small fraction of these jobs regularly fail; the reasons vary from disk and network problems to bugs in the user code. A subset of these failures results in jobs being stuck for long periods of time. In order to debug such failures, interactive monitoring is highly desirable: users need to browse through the job log files and check the status of the running processes. Batch systems typically don't provide such services; at best, users get job logs at job termination, and even this may not be possible if the job is stuck in an infinite loop. In this paper we present a novel approach that uses the regular batch-system capabilities of Condor to let users access the logs and processes of any running job. This does not provide true interactive access, so commands like vi are not viable, but it does allow operations like ls, cat, top, ps, lsof, netstat and dumping the stack of any process owned by the user; we call this pseudo-interactive monitoring. It is worth noting that the same method can be used to monitor Grid jobs in a glidein-based environment. We further believe that the same mechanism could be applied to many other batch systems.
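A minimal sketch of the pseudo-interactive idea (illustrative only; this is not Condor's actual plumbing, and all names below are invented): each probe runs one whitelisted, non-interactive command as a fresh batch-style invocation, which is why ls or ps work while an editor like vi cannot.

```python
import shlex
import subprocess

# Read-only diagnostics the abstract lists as permitted operations.
ALLOWED = {"ls", "cat", "ps", "top", "lsof", "netstat"}

def run_probe(command: str, workdir: str = ".") -> str:
    """Run one whitelisted diagnostic command in the job's working
    directory and return its output. Each call is a self-contained
    batch invocation, so there is no interactive session to hold open."""
    argv = shlex.split(command)
    if not argv or argv[0] not in ALLOWED:
        raise PermissionError(f"command {argv[:1]} not whitelisted")
    result = subprocess.run(argv, cwd=workdir, capture_output=True,
                            text=True, timeout=30)
    return result.stdout
```

In the glidein setting, such probes would be shipped to the worker node and their output returned through the batch system's regular file-transfer machinery.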