Results 1 - 10 of 481
[en] A fluid dynamic analysis of a commercial, counter-flow Ranque-Hilsch Vortex Tube (RHVT), Exair 25 SCFM, has been performed in this work both experimentally and numerically; in particular, the RHVT cooling power and temperature separation performances have been tested in both direct cooling employment (jet impingement) and indirect cooling employment (supplying cold plates). The experimental techniques used in this work proved unable to produce detailed information about the velocity and temperature fields inside the tube and at both exits. Hence, numerical simulation of the flow inside the tube has been conducted using the commercial CFD code FLUENT 6.3.26. The compressible, turbulent, highly swirling flow inside the RHVT has been simulated using both RANS and LES approaches. In particular, several turbulence closures have been used in the RANS simulations and the results have been compared with the LES ones. The Large Eddy Simulations have been performed using the Smagorinsky sub-grid model.
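In the standard Smagorinsky sub-grid model referred to in this abstract, the unresolved scales are closed through an eddy viscosity; the textbook form is shown below (the constant C_s and filter width Δ used by the authors are not stated in the abstract):

```latex
\nu_t = (C_s \Delta)^2 \, |\bar{S}|, \qquad
|\bar{S}| = \sqrt{2\,\bar{S}_{ij}\bar{S}_{ij}}, \qquad
\bar{S}_{ij} = \frac{1}{2}\left(
  \frac{\partial \bar{u}_i}{\partial x_j} +
  \frac{\partial \bar{u}_j}{\partial x_i}\right)
```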
[en] The importance of numerical simulations in astrophysics is constantly growing, because of the complexity, the multi-scaling properties and the non-linearity of many physical phenomena. In particular, cosmological and galaxy-sized simulations of structure formation have cast light on different aspects, giving answers to many questions, but raising a number of new issues to be investigated. Over the last decade, great effort has been devoted in Padova to developing a tool explicitly designed to study the problem of galaxy formation and evolution, with particular attention to early-type galaxies. To this aim, many simulations have been run on CINECA supercomputers (see publications list below). The next step is the new release of EvoL, a Fortran N-body code capable of following in great detail many different aspects of stellar, interstellar and cosmological physics. In particular, special care has been paid to the properties of stars and their interplay with the surrounding interstellar medium (ISM), as well as to the multiphase nature of the ISM, to the setting of the initial and boundary conditions, and to the correct description of gas physics via modern formulations of the classical Smoothed Particle Hydrodynamics algorithms. Moreover, a powerful tool to compare numerical predictions with observables has been developed, self-consistently closing the whole package. A library of new simulations, run with EvoL on CINECA supercomputers, is to be built over the coming years, while new physics, including magnetic properties of matter and more exotic energy feedback effects, is to be added.
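At the heart of the classical Smoothed Particle Hydrodynamics formulations mentioned above is a kernel-weighted density estimate. A minimal 1D sketch follows (this is a generic illustration, not EvoL's actual implementation; function names and the O(N²) neighbour loop are simplifying assumptions):

```python
import numpy as np

def cubic_spline_kernel(r, h):
    """Standard 1D cubic spline (M4) kernel with compact support 2h."""
    q = r / h
    sigma = 2.0 / (3.0 * h)  # 1D normalisation constant
    w = np.where(q < 1.0, 1.0 - 1.5 * q**2 + 0.75 * q**3,
        np.where(q < 2.0, 0.25 * (2.0 - q)**3, 0.0))
    return sigma * w

def sph_density(positions, masses, h):
    """SPH density estimate: rho_i = sum_j m_j W(|x_i - x_j|, h)."""
    r = np.abs(positions[:, None] - positions[None, :])
    return (masses[None, :] * cubic_spline_kernel(r, h)).sum(axis=1)
```

Production SPH codes replace the all-pairs distance matrix with a tree or cell-list neighbour search, since the kernel vanishes beyond 2h.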
[en] The aim of simulating the breakdown phase of a Plasma Focus (PF) discharge stems from the need to fully understand the dynamics of such a device, in order to retrieve useful information for the design and optimization of the machine itself. PFs are compact devices able to generate, accelerate, compress and confine a plasma by means of strongly varying electric and magnetic fields. In the final phase of the discharge, the generated plasma collapses into a high-density region (the focus) where nuclear reactions occur. The choice of the gases composing the plasma tunes the nuclear reactions, so that the device can serve as a possible neutron-free generator of Short-Lived Radioisotopes (SLRs) for PET (e.g., 18F and 15O), as well as a source of neutrons or collimated electron beams for radio-therapy applications. An electrostatic-collisional Particle-In-Cell (PIC) code for Plasma Focus devices (es-cPIF) has already been developed to investigate the breakdown phenomenon and the formation of the plasma seed, the preliminary plasma spot, within the device: exact knowledge of the phase space distribution function (which deviates strongly from the Maxwellian equilibrium one) is indeed a fundamental basis for the whole discharge simulation. In order to extend the present simulations towards the complete evolution of the plasma seed into a running plasma sheath, the code is being restructured for strong parallelization and for the inclusion of Structured Adaptive Mesh Refinement (SAMR) capabilities. In this paper the development framework as well as the software design architecture are presented, together with the features that will be provided by the new SAMRes-cPIF code.
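The core of any electrostatic PIC cycle of the kind es-cPIF implements is a gather-push loop: interpolate the grid field to each particle position, then advance the particles with a time-centred leapfrog. A minimal generic sketch (not the es-cPIF code itself; the function names and the linear interpolation choice are illustrative assumptions):

```python
import numpy as np

def gather_field(x, grid_E, dx):
    """Linear (cloud-in-cell) interpolation of the grid field to particles."""
    j = np.floor(x / dx).astype(int)
    w = x / dx - j                  # fractional distance past the left node
    return (1.0 - w) * grid_E[j] + w * grid_E[j + 1]

def leapfrog_push(x, v, E_at_x, qm, dt):
    """One electrostatic leapfrog step (v staggered by dt/2 from x).
    qm is the charge-to-mass ratio of the species."""
    v_new = v + qm * E_at_x * dt    # kick: accelerate in the local field
    x_new = x + v_new * dt          # drift: stream with the updated velocity
    return x_new, v_new
```

In a full PIC code this pair is completed each time step by a charge-deposit step onto the grid and a Poisson solve for the self-consistent field; the collisional part of an electrostatic-collisional code adds a Monte Carlo collision step on top.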
[en] Virtualisation is now a proven software technology that is rapidly transforming the IT landscape and fundamentally changing the way people perform computations and implement services. Recently, all major software producers (e.g., Microsoft and Red Hat) have developed or acquired virtualisation technologies. Our institute (http://www.CNAF.INFN.it) is a Tier 1 for the experiments carried out at the Large Hadron Collider at CERN (http://lhc.web.CERN.ch/lhc/) and is experiencing several benefits from virtualisation technologies, such as improved fault tolerance, efficient hardware resource usage and increased security. Currently, the virtualisation solution we have adopted is Xen, which is well supported by the Scientific Linux distribution widely used by the High-Energy Physics (HEP) community. Since Scientific Linux is based on Red Hat ES, we felt the need to investigate the performance and usability differences with the new KVM technology, recently acquired by Red Hat. The case study of this work is the Tier 2 site for the LHCb experiment hosted at our institute; all the major grid elements for this Tier 2 run smoothly on Xen virtual machines. We will investigate the impact on performance and stability that a migration to KVM would entail for the Tier 2 site, as well as the effort required by a system administrator to carry out the migration.
[en] We present an overview of current modelling capabilities and numerical challenges in the simulation of scalar dispersion phenomena in complex flows. Results from the simulation of a passive plume emitted from a line source downstream of a square obstacle are summarized to provide an example of a basic test case in which the reliability of computational techniques can be carefully established.
[en] Computer simulations have become a widely used and powerful tool to study the behaviour of many-particle and many-interaction systems and processes, such as nucleic acid dynamics, drug-DNA interactions, enzymatic processes, membranes and antibiotics. The increased reliability of computational techniques has made it possible to plan a bottom-up approach in drug design, i.e. designing molecules with improved properties starting from knowledge of the molecular mechanisms. However, in silico techniques have to face the fact that the number of degrees of freedom involved in biological systems is very large, while the time scale of several biological processes is not accessible to standard simulations. Algorithms and methods have been, and are still being, developed to bridge these gaps. Here we review the activities of our group focused on the time-scale bottleneck and, in particular, on the use of the metadynamics scheme, which allows the investigation of rare events in reasonable computer time without reducing the accuracy of the calculation. We have devoted particular attention to the microscopic-level characterization of the translocation of antibiotics through membrane pores, aiming at the identification of structural and dynamical features helpful for rational drug design.
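Metadynamics, as used above, accelerates rare events by depositing repulsive Gaussians along a collective variable (CV) at the values already visited, progressively filling free-energy minima. A minimal sketch of the history-dependent bias and its force on the CV (a generic illustration with hypothetical names, not the group's production setup):

```python
import numpy as np

def bias_potential(s, centers, height, width):
    """V(s) = sum of Gaussians deposited at previously visited CV values."""
    centers = np.asarray(centers, dtype=float)
    return height * np.exp(-(s - centers) ** 2 / (2.0 * width ** 2)).sum()

def bias_force(s, centers, height, width):
    """-dV/ds: the force pushing the system away from already-visited states."""
    centers = np.asarray(centers, dtype=float)
    g = np.exp(-(s - centers) ** 2 / (2.0 * width ** 2))
    return (height / width ** 2) * ((s - centers) * g).sum()
```

In a simulation, a new center is appended to `centers` every fixed number of MD steps; in the long-time limit the accumulated bias approximates the negative of the free-energy profile along the CV.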
[en] We present ab initio calculations of the excited-state properties of indole in water. Indole and water are first studied separately and then in solution. Calculations are performed within DFT and in the framework of the many-body Green's function formalism. The geometries are determined by classical and mixed quantum-classical dynamics. The optical-absorption spectra, with the inclusion of excitonic effects, are calculated by solving the Bethe-Salpeter equation (BSE) after the Kohn-Sham eigenvalues have been corrected using the GW method.
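The GW correction of the Kohn-Sham eigenvalues referred to above is conventionally written as a first-order quasiparticle shift (standard textbook form, with Σ the self-energy, V_xc the exchange-correlation potential, G the Green's function and W the screened Coulomb interaction):

```latex
E_i^{\mathrm{QP}} = \varepsilon_i^{\mathrm{KS}}
+ \left\langle \psi_i^{\mathrm{KS}} \middle|\,
  \Sigma\!\left(E_i^{\mathrm{QP}}\right) - V_{xc}
  \,\middle| \psi_i^{\mathrm{KS}} \right\rangle,
\qquad \Sigma = \mathrm{i}\, G W
```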
[en] We discuss the possibility of implementing asynchronous replica-exchange (or parallel tempering) molecular dynamics. In our scheme, the exchange attempts are driven by asynchronous messages sent by one of the computing nodes, so that different replicas are allowed to perform different numbers of time steps between subsequent attempts. The implementation is simple and based on the Message Passing Interface (MPI). We illustrate the advantages of our scheme with respect to the standard synchronous algorithm and we benchmark it for a model Lennard-Jones liquid on an IBM LS21 BladeCenter cluster.
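Whether attempted synchronously or asynchronously, a replica exchange is accepted with the standard Metropolis probability min(1, exp[(β_i − β_j)(E_i − E_j)]), which preserves detailed balance across the temperature ladder. A minimal sketch of that acceptance test (generic illustration; the function name and signature are not from the paper):

```python
import math
import random

def exchange_accepted(beta_i, beta_j, energy_i, energy_j, rng=random.random):
    """Metropolis acceptance for swapping the configurations of two
    replicas at inverse temperatures beta_i and beta_j."""
    delta = (beta_i - beta_j) * (energy_i - energy_j)
    # delta >= 0 is always accepted; otherwise accept with prob exp(delta)
    return delta >= 0.0 or rng() < math.exp(delta)
```

In the asynchronous scheme described above, the pair (i, j) to test is chosen by the node that emits the exchange message, using each replica's most recently reported energy rather than energies sampled at a globally synchronized time step.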
[en] Support for the delivery of complex services is becoming a key point in current internet technology. Current trends in internet applications are characterized by the on-demand delivery of ever-growing amounts of content. The future internet of services will have to deliver content-intensive applications to users with quality-of-service and security guarantees. This paper describes the Reservoir project and the challenge of reliable and effective delivery of services as utilities in a commercial scenario. It starts by analyzing the needs of a future infrastructure provider and introducing the key concept of a service-oriented architecture that combines virtualisation-aware grid with grid-aware virtualisation, while being driven by business service management. The article then focuses on the benefits and innovations derived from the Reservoir approach. Finally, a high-level view of the overall Reservoir architecture is presented.