This is a copy of the original 🌸 BLOOMChat weights, repackaged for efficient use with DeepSpeed-Inference 🚀. In this repo the original tensors are split into 8 shards targeting 8 GPUs, which lets the user run the model with DeepSpeed-Inference tensor parallelism.
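Loading the sharded weights might look like the sketch below. This is an illustrative example, not verbatim from this repo: the repo id `sambanovasystems/BLOOMChat-176B-v1` is a placeholder for the actual path, and it assumes a machine with 8 GPUs launched via the `deepspeed` CLI.

```python
# Hedged sketch: repo id and launch setup are assumptions, not from this card.
import torch
import deepspeed
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "sambanovasystems/BLOOMChat-176B-v1"  # placeholder; use this repo's id

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.float16)

# Shard the model across 8 GPUs with DeepSpeed-Inference tensor parallelism.
ds_model = deepspeed.init_inference(
    model,
    mp_size=8,                       # one shard per GPU, matching this repo's layout
    dtype=torch.float16,
    replace_with_kernel_inject=True,  # use DeepSpeed's optimized inference kernels
)

inputs = tokenizer("Hello, how are you?", return_tensors="pt").to("cuda")
outputs = ds_model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

A script like this would typically be launched with `deepspeed --num_gpus 8 run.py`, so that each of the 8 processes picks up its own shard.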

For specific details about the BLOOMChat model itself, please see the original BLOOMChat model card.

This work was performed using AI/HPC resources (the Jean Zay supercomputer) from GENCI-IDRIS.
