Inspur Deep Learning Appliance
This deep learning appliance is built on an MPI-optimized version of the Caffe framework developed by Inspur. It is an integrated software/hardware solution that combines MPI parallelism with multiple GPU accelerators for big-data processing: the Caffe framework performs feature extraction and deep learning on massive datasets, while GPUs provide parallel acceleration to achieve near-linear speedups. The appliance is easy to administer through Inspur's cluster management software, ClusterEngine, which integrates job submission, data management, monitoring, and reporting.
Deep Learning is a rapidly growing area; it enables many groups to achieve groundbreaking results in vision, speech, language, robotics, urban security, medical imaging and other areas.
However, long training times pose a serious challenge to system efficiency and resource utilization. To address this, Inspur designed a deep learning appliance based on hardware/software integration with multi-GPU parallel acceleration, built on the Caffe framework, that achieves near-linear speedups.
Inspur Deep Learning Appliance speed-up ratio
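The near-linear scaling described above typically comes from synchronous data parallelism: each GPU computes gradients on its own shard of the mini-batch, and the gradients are averaged across workers (for example with an MPI allreduce) before every weight update. The following is a minimal pure-Python sketch of that averaging step under those assumptions; it is illustrative only and not Inspur's implementation.

```python
# Illustrative sketch of synchronous data-parallel gradient averaging,
# the scheme MPI-parallel training commonly uses (not Inspur's code).

def local_gradients(shard, weight):
    """Each worker computes gradients on its own data shard.
    Here: gradient of squared error for the toy model y = w * x."""
    return [2 * (weight * x - y) * x for x, y in shard]

def allreduce_mean(per_worker_grads):
    """Stand-in for MPI_Allreduce: average gradients across all workers."""
    n = sum(len(g) for g in per_worker_grads)
    return sum(sum(g) for g in per_worker_grads) / n

# Four simulated "workers", each holding a shard of (x, y) pairs for y = 3x.
data = [(x, 3.0 * x) for x in range(1, 9)]
shards = [data[i::4] for i in range(4)]

w = 0.0
for _ in range(200):  # synchronous SGD steps
    grads = [local_gradients(s, w) for s in shards]
    w -= 0.01 * allreduce_mean(grads)  # identical update on every worker

print(round(w, 3))  # converges toward the true weight 3.0
```

Because every worker applies the same averaged gradient, all replicas stay in sync, and adding workers shrinks each shard's compute without changing the result, which is what makes near-linear speedup possible when communication cost stays small.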
Key Features & Benefits
- High speedup
Supports MPI parallelism and GPU acceleration; the software/hardware integration delivers a high performance-to-price ratio and near-linear speedup
- Unified installation package
Integrates Caffe, CUDA, and run-time libraries for simple installation
- Mainstream parallel file systems supported
Supports mainstream parallel file systems such as Lustre for high-performance I/O
- Unified system management
Integrates with Inspur's HPC service platform ClusterEngine, supporting cluster management, license management, job scheduling, and more
- Interactive GUI
Input/output file management, data upload/download, user management.
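Whether a multi-GPU run is "nearly linear" can be judged from wall-clock training times alone: speedup is the serial time divided by the parallel time, and parallel efficiency is that speedup divided by the GPU count. A small helper that computes both (the timings below are hypothetical examples, not Inspur benchmark results):

```python
# Compute speedup and parallel efficiency from measured wall-clock
# training times. The example timings are hypothetical, not Inspur
# benchmark data.

def speedup(t_serial, t_parallel):
    """How many times faster the parallel run is."""
    return t_serial / t_parallel

def efficiency(t_serial, t_parallel, n_gpus):
    """Fraction of ideal linear speedup actually achieved (1.0 = linear)."""
    return speedup(t_serial, t_parallel) / n_gpus

# Example: a 1-GPU run takes 100 hours; a 4-GPU run takes 27 hours.
s = speedup(100.0, 27.0)
e = efficiency(100.0, 27.0, 4)
print(f"speedup={s:.2f}, efficiency={e:.2f}")  # ~3.70x, ~0.93
```

An efficiency close to 1.0, as in this hypothetical case, is what "nearly linear speed-up" means in practice.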
“The Deep Learning Appliance is a trusted tool and solution that we rely on extensively. It is highly efficient and easy to use, and has accelerated our current research by more than a three-fold speedup over our existing infrastructure.”
End user feedback