Red Hat Hyperscale Ceph Storage
Ceph Storage for next-generation providers
- Open Software
- Open Hardware
- Software defined
- Scalable
$AUD $153,321.00
*RRP Pricing*
- Ex Tax: $153,321.00
Red Hat Ceph Hyperscale Storage
Scalable, open, software-defined
Red Hat Ceph Storage is a scalable, open, software-defined storage platform that combines the most stable version of the Ceph storage system with a Ceph management platform, deployment utilities, and support services. Red Hat Ceph Storage is designed for cloud infrastructure and web-scale object storage. Ceph is designed to run on commodity hardware, which makes building and maintaining petabyte- to exabyte-scale data clusters economically feasible.
Ceph Calamari
Ceph Calamari is a management and monitoring system for a Ceph storage cluster. It provides a dashboard user interface that makes Ceph cluster monitoring simple and convenient. Calamari was initially part of Inktank's Ceph Enterprise product offering and has since been open-sourced by Red Hat. One node of the Hyperscale appliance is dedicated to Calamari and uses the internal SSD as its storage.
Ceph Monitor
The Ceph monitor is a data store for the health of the entire cluster, and it contains the cluster log. Red Hat strongly recommends at least three monitors for a cluster quorum in production, though for PoC purposes a single RackGo X node is used. Monitor nodes typically have fairly modest CPU and memory requirements, but because logs are stored on the monitor node's local disk(s), it is important to provision sufficient disk space. The node uses its internal SSD for storing data.
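The three-monitor recommendation comes from simple majority voting: a cluster of N monitors keeps quorum only while more than half of them are up. A small sketch of that arithmetic (illustrative only, not Ceph code):

```python
def quorum_size(n_monitors: int) -> int:
    """Minimum number of monitors that must agree (simple majority)."""
    return n_monitors // 2 + 1

def tolerated_failures(n_monitors: int) -> int:
    """How many monitors can fail while the cluster keeps quorum."""
    return n_monitors - quorum_size(n_monitors)

for n in (1, 3, 5):
    print(f"{n} monitors: quorum={quorum_size(n)}, "
          f"tolerates {tolerated_failures(n)} failure(s)")
```

With the single-monitor PoC layout, any monitor failure loses quorum; three monitors is the production minimum because it is the smallest odd count that tolerates a failure.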
Ceph OSD
ceph-osd is the object storage daemon for the Ceph distributed file system. It is responsible for storing objects on a local file system and providing access to them over the network. The PoC dedicates 3 RackGo X nodes to OSDs and connects 2 JBRs as the storage pool. Each JBR consists of 2 sets of 14 HDDs; altogether 3 × 14 drives are dedicated to OSD storage. The setup ensures that network interfaces, controllers, and drive throughput leave no bottlenecks (e.g., fast drives paired with a network too slow to accommodate them). The datapath is 40G, and the JBR drives are connected through a high-speed LSI SAS interface.
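The drive-versus-network balance above can be sanity-checked with rough numbers. Assuming around 150 MB/s of sustained sequential throughput per SAS HDD (an assumption for illustration, not a measured figure from the PoC), the 14 drives behind one OSD node stay comfortably under a 40 Gb/s link:

```python
# Rough bottleneck check for one OSD node (all figures are assumptions).
HDD_MBPS = 150          # assumed sustained sequential throughput per SAS HDD, MB/s
DRIVES_PER_NODE = 14    # one 14-drive JBR set per OSD node, as in the PoC
NETWORK_GBPS = 40       # 40G datapath

aggregate_mbps = HDD_MBPS * DRIVES_PER_NODE  # combined drive throughput, MB/s
aggregate_gbps = aggregate_mbps * 8 / 1000   # convert MB/s to Gb/s

print(f"Aggregate drive throughput: {aggregate_gbps:.1f} Gb/s "
      f"vs {NETWORK_GBPS} Gb/s network")
# 14 drives x 150 MB/s = 2100 MB/s = 16.8 Gb/s, well under the 40G link
```

If the per-drive figure or drive count grows, the same arithmetic shows when the network, rather than the drives, becomes the limiting factor.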
Installation Architecture
Red Hat Ceph uses five nodes and 3 JBR channels for installation. Calamari and the monitor use 1 node each, and the OSDs use the remaining 3 nodes. Ceph relies on packages in the Red Hat Enterprise Linux 7 Base content set, so each Ceph node must be able to access the full Red Hat Enterprise Linux 7 Base content. To do so, the Ceph nodes are connected over the Internet to the Red Hat Content Delivery Network (CDN) and registered with the Red Hat Customer Portal. Once installation is complete, Calamari provides an interface for configuring the storage across the Ceph drives.
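The five-node allocation described above can be summarized as a simple role map (the hostnames are hypothetical placeholders, not from the PoC):

```python
from collections import Counter

# Role map for the five-node PoC layout: one Calamari node,
# one monitor node, and three OSD nodes (hostnames are placeholders).
roles = {
    "node1": "calamari",
    "node2": "monitor",
    "node3": "osd",
    "node4": "osd",
    "node5": "osd",
}

counts = Counter(roles.values())
print(counts)  # one calamari, one monitor, three osd
```

In a production deployment the monitor count would grow to three or more, as noted in the monitor section.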
PoC Infrastructure | 
---|---
Compute | RackGo X F06A: 4 nodes per chassis, 2 × Intel E5-2698 v3 CPUs, 2 × 32GB DDR3 DIMMs, 2 × 240GB SATA SSDs, LSI 3008 SAS controller, Mellanox OCP 40G network adapter
Storage | 2U JBR (JBOD): lock-in Mini-SAS module, front-load screw-less HDD trays, supports up to 28 × 3.5" SAS HGST drives
Network | T5032-LY6 switch: 32 QSFP+ ports (40Gb/s); T1048-LY4 switch: 48 × 1GE ports (1Gb/s), 2 × 1/10G SFP+ ports
Title | Version | Date | Size
---|---|---|---
HyperScalers-PoC-ScaleIO_CEPH | 1 | 3-8-20 | 908KB
Tags: RedHat, CEPH, Hyperscale, Storage, NAS, SAN, block, store, large, data