Mkrtchyan, Tigran; Adeyemi, F.; Ashish, A.; Behrmann, G.; Fuhrmann, P.
Fermi National Accelerator Laboratory (FNAL), Batavia, IL (United States). Funding organisation: USDOE Office of Science - SC, High Energy Physics (HEP) (SC-25) (United States), 2017
Abstract
For over a decade, dCache.org has delivered robust software used at more than 80 universities and research institutes around the world, allowing these sites to provide reliable storage services for the WLCG experiments as well as many other scientific communities. The flexible architecture of dCache allows running it in a wide variety of configurations and on a wide range of platforms - from an SoC-based all-in-one Raspberry Pi up to hundreds of nodes in a multi-petabyte installation. Due to the lack of managed storage at the time, dCache implemented data placement, replication and data integrity directly. Today, many alternatives are available: S3, GlusterFS, CEPH and others. While such solutions position themselves as scalable storage systems, they cannot be used by many scientific communities out of the box. The absence of community-accepted authentication and authorization mechanisms, the use of product-specific protocols and the lack of a namespace are some of the reasons that prevent wide-scale adoption of these alternatives. Most of these limitations are already solved by dCache. By delegating low-level storage management functionality to the above-mentioned new systems and providing the missing layer through dCache, we provide a solution which combines the benefits of both worlds - industry-standard storage building blocks with the access protocols and authentication required by scientific communities. In this paper, we focus on CEPH, a popular software package for clustered storage that supports file, block and object interfaces. CEPH is often used in modern computing centers, for example as a backend to OpenStack services. We will show prototypes of dCache running with a CEPH backend and discuss the benefits and limitations of such an approach. As a result, we will also outline the roadmap for supporting 'delegated storage' within the dCache releases.
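The layering the abstract describes - dCache supplying the namespace, access protocols and authentication while delegating low-level object placement to a backend such as CEPH - can be illustrated with a minimal sketch. All class and method names below are hypothetical illustrations of the architectural idea, not dCache's or CEPH's actual APIs; an in-memory dictionary stands in for a real CEPH pool.

```python
# Hypothetical sketch of "delegated storage": a namespace layer (the dCache
# role) maps logical file paths to object IDs and delegates byte storage to a
# pluggable object-store backend (the CEPH role). Illustrative only.

from abc import ABC, abstractmethod


class ObjectStore(ABC):
    """Low-level object storage backend, e.g. a CEPH/RADOS pool."""

    @abstractmethod
    def put(self, oid: str, data: bytes) -> None: ...

    @abstractmethod
    def get(self, oid: str) -> bytes: ...


class InMemoryStore(ObjectStore):
    """Stand-in for a real CEPH pool, used here only for illustration."""

    def __init__(self) -> None:
        self._objects: dict[str, bytes] = {}

    def put(self, oid: str, data: bytes) -> None:
        self._objects[oid] = data

    def get(self, oid: str) -> bytes:
        return self._objects[oid]


class Namespace:
    """The layer dCache adds on top: a path namespace that delegates
    placement, replication and integrity to the backend."""

    def __init__(self, backend: ObjectStore) -> None:
        self._backend = backend
        self._paths: dict[str, str] = {}  # logical path -> object id
        self._next_id = 0

    def write(self, path: str, data: bytes) -> None:
        oid = f"obj-{self._next_id}"
        self._next_id += 1
        self._paths[path] = oid
        self._backend.put(oid, data)  # low-level storage is delegated

    def read(self, path: str) -> bytes:
        return self._backend.get(self._paths[path])


ns = Namespace(InMemoryStore())
ns.write("/wlcg/data/file1", b"event data")
print(ns.read("/wlcg/data/file1"))  # b'event data'
```

Swapping `InMemoryStore` for a backend that talks to a real object store is the design point: the front-end namespace and protocols stay unchanged while the storage building blocks come from an industry-standard system.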
Source
FERMILAB-CONF--17-647-CD; OSTIID--1420913; AC02-07CH11359; Available from https://www.osti.gov/pages/servlets/purl/1420913; DOE Accepted Manuscript full text, or the publisher's Best Available Version, will be available free of charge after the embargo period; Country of input: United States
Record Type
Journal Article
Journal
Journal of Physics: Conference Series; ISSN 1742-6588; v. 898(6)
