Technology Blog

A LeoStor storage configuration scheme for campus video surveillance

LoongShine Network
Abstract: With a single-volume capacity of up to 300 PB, a global file system, non-stop transparent horizontal scaling, excellent and predictable throughput for both large and small files, and robust handling of failed disks and network faults, LeoStor outperforms traditional NAS-over-FC-SAN and Ceph-based storage architectures. Its strong parallel file read/write performance meets the requirements of video recording, intelligent analysis, and data structuring, making the overall system more cost-effective.

Overall requirements

Campus video surveillance covers scenes such as universities, software parks, industrial parks, business incubators, large residential properties, large shopping centers, large office buildings, large data centers, streets, and amusement parks, as well as edge storage. The number of HD cameras is typically 100–500, often with a need for AI video analysis. For example, a university upgraded to 200 high-definition cameras in a technical renovation: 1080p resolution, a 6 Mbps stream per camera, and video retained for 3 months. Structured AI video analysis is also required, including perimeter and area intrusion detection, roll call, mask-wearing detection, and average-speed measurement of vehicles. The storage requirements are as follows:

Item                       Value
Bitrate                    6 Mbps
Single camera, 3 months    3.8 TB
200 cameras, 3 months      755 TB
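As a sanity check, retention capacity can be estimated from bitrate and retention period. A minimal sketch (the `duty_cycle` parameter is an assumption, not from the document; note that continuous recording at the full 6 Mbps for 90 days works out to roughly 5.8 TB per camera, so the 3.8 TB figure presumably reflects a lower effective duty cycle or average bitrate, e.g. motion-triggered recording):

```python
def retention_tb(bitrate_mbps: float, days: int, duty_cycle: float = 1.0) -> float:
    """Estimate storage (decimal TB) for one camera over a retention period."""
    seconds = days * 86_400
    bits = bitrate_mbps * 1e6 * seconds * duty_cycle
    return bits / 8 / 1e12  # bits -> bytes -> TB

per_camera = retention_tb(6, 90)  # continuous 6 Mbps, 90 days: ~5.83 TB
fleet = 200 * 3.8                 # the document's figure: 760 TB, quoted as 755 TB
```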

Storage design

Video surveillance data consists mainly of MB-scale video files, supplemented by massive numbers of KB-scale images captured from video clips by structured AI analysis software. Considering the characteristics of the video workload, storage-server failure rates, and maximum effective space utilization, LeoRaid modes such as 2+M:M and 4+M:M (M = 1, 2) are well suited to video storage. Depending on the actual situation, this scheme can use the LeoRaid 4+2:2 or LeoRaid 2+2:2 redundancy mode. When the number of nodes does not meet the redundancy conditions, the system automatically falls back from Server Raid to Disk Raid.

For cost reasons, metadata nodes and storage nodes can be co-deployed, with applications converged onto the same hardware: KVM virtual machines or Docker containers and GPU cards can be configured to run video recording, analysis, streaming, data-center, or situational-monitoring software. On KVM hyper-converged nodes, each hard disk can support 50+ cameras, and multiple nodes share the same global file directory. Under full load, the latency of simultaneous recording, writing, and deletion stays below 1 second (as measured by the customer), which meets the business requirements.
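The 50-cameras-per-disk figure can be sanity-checked against disk bandwidth. A rough sketch (the ~150 MB/s sequential-write rate for a 7,200 rpm SATA disk is an assumption, not from the document):

```python
CAMERAS_PER_DISK = 50
BITRATE_MBPS = 6                    # per-camera stream from the requirements table
HDD_SEQ_WRITE_MBPS = 150 * 8        # assumed ~150 MB/s sequential write, in Mbit/s

load_mbps = CAMERAS_PER_DISK * BITRATE_MBPS  # 300 Mbit/s, i.e. 37.5 MB/s
headroom = HDD_SEQ_WRITE_MBPS / load_mbps    # ~4x margin over the write load
```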

Capacity after redundancy:

  1. LeoRaid 4+2 requires a minimum of 3 nodes. With 3 nodes, only Disk Raid is supported; the utilization rate is 66%, so raw capacity (1) = 755 TB ÷ 66% = 1144 TB;
  2. LeoRaid 2+2 requires a minimum of 2 nodes. With 2 nodes, Server Raid is supported and 1 node can fail; Disk Raid is also supported, tolerating the simultaneous failure of two hard disks. The utilization rate is 50%, so raw capacity (2) = 755 TB ÷ 50% = 1510 TB;
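The two raw-capacity figures follow directly from usable capacity and utilization (a minimal sketch using the utilization values stated above):

```python
import math

def raw_capacity_tb(usable_tb: float, utilization: float) -> int:
    """Raw capacity needed so that usable_tb survives erasure-coding overhead."""
    return math.ceil(usable_tb / utilization)

raw_4p2 = raw_capacity_tb(755, 0.66)  # LeoRaid 4+2 -> 1144 TB
raw_2p2 = raw_capacity_tb(755, 0.50)  # LeoRaid 2+2 -> 1510 TB
```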

Comparative advantages over competing systems:

  1. Versus Ceph distributed storage: a higher degree of productization, a clearer scaling architecture, simpler converged deployment, higher hard-disk utilization, more predictable durability, and faster data recovery;
  2. Versus multi-unit large-capacity NVRs: transparent to applications rather than application-bound, easier horizontal scaling, better performance, higher reliability, and faster data recovery than Raid mode;
  3. Versus hyper-converged systems: higher stability, easier storage expansion, higher hard-disk utilization, and lower delivery cost;

MDS node configuration

OSS storage nodes and MDS metadata nodes can be deployed together to reduce costs. It is recommended that each metadata node be configured with 960 GB SSDs.

Item             Configuration
Number of nodes  2, integrated deployment
Metadata disks   2 × 960 GB Samsung/Intel enterprise SATA SSD, JBOD
Network          2 × 10GE

OSS node configuration

The LeoRaid 4+2 redundancy scheme requires at least three nodes, and each node needs two metadata disks.

Single disk capacity   Disks required   24-bay enclosures   36-bay enclosures
6TB                    191              9                   6
8TB                    143              7                   5
10TB                   115              6                   4
12TB                   96               5                   3
14TB                   82               4                   3
16TB                   72               4                   3

Note: in the original table, green-highlighted rows mark the recommended configurations.

The LeoRaid 2+2 redundancy scheme requires at least two nodes, and each node needs two metadata disks.

Single disk capacity   Disks required   24-bay enclosures   36-bay enclosures
6TB                    252              12                  8
8TB                    189              9                   6
10TB                   151              7                   5
12TB                   126              6                   4
14TB                   108              5                   4
16TB                   95               5                   3

Note: in the original table, green-highlighted rows mark the recommended configurations.
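Both tables can be reproduced from the raw-capacity figures. The enclosure counts match if two bays per enclosure are assumed to be reserved for metadata SSDs (an inference from the 34 × 16 TB data disks in the 36-bay hardware configuration below, not stated explicitly):

```python
import math

def plan(raw_tb: int, disk_tb: int, bays: int, reserved: int = 2):
    """Disks and enclosures needed for a given raw capacity and disk size,
    with `reserved` bays per enclosure set aside for metadata SSDs."""
    disks = math.ceil(raw_tb / disk_tb)
    enclosures = math.ceil(disks / (bays - reserved))
    return disks, enclosures

# LeoRaid 4+2, 1144 TB raw, 16 TB disks:
assert plan(1144, 16, 24) == (72, 4)
assert plan(1144, 16, 36) == (72, 3)
# LeoRaid 2+2, 1510 TB raw, 6 TB disks:
assert plan(1510, 6, 24) == (252, 12)
assert plan(1510, 6, 36) == (252, 8)
```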

  • Hardware configuration

    1. Model: DF3600;
    2. Form factor: 4U chassis with 36 × 3.5" drive bays;
    3. Number of nodes: 2 or 3;
    4. Storage disks: 34 × 16 TB Seagate enterprise SATA mechanical disks, attached via HBA;
    5. CPU: 1 × Intel Xeon Silver, or 1 × E5-2620 v4 (2 CPUs for the application-converged configuration);
    6. Memory: 64 GB (128 GB for the application-converged configuration);
    7. Network: 2 × 10GE;
    8. Recommended 10 Gigabit switches: Shengke, Maipu, and Jiteng.