Upload folder using huggingface_hub
README.md CHANGED
@@ -7,7 +7,7 @@ pipeline_tag: image-text-to-text
 
 [\[📂 GitHub\]](https://github.com/OpenGVLab/InternVL) [\[🆕 Blog\]](https://internvl.github.io/blog/) [\[📜 InternVL 1.0 Paper\]](https://arxiv.org/abs/2312.14238) [\[📜 InternVL 1.5 Report\]](https://arxiv.org/abs/2404.16821)
 
-[\[🗨️ Chat Demo\]](https://internvl.opengvlab.com/) [\[🤗 HF Demo\]](https://huggingface.co/spaces/OpenGVLab/InternVL) [\[🚀 Quick Start\]](#quick-start) [\[📖 中文解读\]](https://zhuanlan.zhihu.com/p/
+[\[🗨️ Chat Demo\]](https://internvl.opengvlab.com/) [\[🤗 HF Demo\]](https://huggingface.co/spaces/OpenGVLab/InternVL) [\[🚀 Quick Start\]](#quick-start) [\[📖 中文解读\]](https://zhuanlan.zhihu.com/p/706547971) \[🌟 [魔搭社区](https://modelscope.cn/organization/OpenGVLab) | [教程](https://mp.weixin.qq.com/s/OUaVLkxlk1zhFb1cvMCFjg) \]
 
 ## Introduction
 
@@ -426,6 +426,32 @@ sess = pipe.chat('What is the woman doing?', session=sess, gen_config=gen_config
 print(sess.response.text)
 ```
+
+#### Service
+
+For lmdeploy v0.5.0, please configure the chat template first. Create the following JSON file, `chat_template.json`:
+
+```json
+{
+  "model_name": "internlm2",
+  "meta_instruction": "我是书生·万象,英文名是InternVL,是由上海人工智能实验室及多家合作单位联合开发的多模态大语言模型。人工智能实验室致力于原始技术创新,开源开放,共享共创,推动科技进步和产业发展。",
+  "stop_words": ["<|im_start|>", "<|im_end|>"]
+}
+```
+
+LMDeploy's `api_server` packs a model into a service with a single command, and the RESTful APIs it provides are compatible with OpenAI's interfaces. Below is an example of starting the service:
+
+```shell
+lmdeploy serve api_server OpenGVLab/InternVL2-26B --backend turbomind --chat-template chat_template.json
+```
+
+The default port of `api_server` is `23333`. After the server is launched, you can communicate with it from the terminal through `api_client`:
+
+```shell
+lmdeploy serve api_client http://0.0.0.0:23333
+```
+
+You can browse and try out the `api_server` APIs in the Swagger UI at `http://0.0.0.0:23333`, or read the API specification [here](https://github.com/InternLM/lmdeploy/blob/main/docs/en/serving/restful_api.md).
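Because the RESTful APIs above are OpenAI-compatible, the running server can also be queried from code rather than through `api_client`. Below is a minimal Python sketch using only the standard library; the endpoint path `/v1/chat/completions`, the default host/port, and the response layout follow OpenAI's chat-completions convention and the example above, so verify them against your lmdeploy version:

```python
import json
from urllib import request


def build_chat_payload(prompt, model="OpenGVLab/InternVL2-26B"):
    """Compose an OpenAI-style chat-completions request body."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.8,
    }


def ask(prompt, base_url="http://0.0.0.0:23333"):
    """POST the payload to the OpenAI-compatible endpoint and return the reply text."""
    req = request.Request(
        f"{base_url}/v1/chat/completions",
        data=json.dumps(build_chat_payload(prompt)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        body = json.load(resp)
    # Standard OpenAI chat-completions response shape.
    return body["choices"][0]["message"]["content"]
```

With the server from the previous step running, `ask('What is the woman doing?')` would return the model's reply text.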
 
 ## License
 
 This project is released under the MIT license, while InternLM is licensed under the Apache-2.0 license.
 
@@ -619,6 +645,32 @@ sess = pipe.chat('What is the woman doing?', session=sess, gen_config=gen_config
 print(sess.response.text)
 ```
+
+#### API Deployment
+
+For lmdeploy v0.5.0, please configure the chat template first. Create the following JSON file, `chat_template.json`:
+
+```json
+{
+  "model_name": "internlm2",
+  "meta_instruction": "我是书生·万象,英文名是InternVL,是由上海人工智能实验室及多家合作单位联合开发的多模态大语言模型。人工智能实验室致力于原始技术创新,开源开放,共享共创,推动科技进步和产业发展。",
+  "stop_words": ["<|im_start|>", "<|im_end|>"]
+}
+```
+
+LMDeploy's `api_server` packs a model into a service with a single command, and the RESTful APIs it provides are compatible with OpenAI's interfaces. Below is an example of starting the service:
+
+```shell
+lmdeploy serve api_server OpenGVLab/InternVL2-26B --backend turbomind --chat-template chat_template.json
+```
+
+The default port of `api_server` is `23333`. After the server is launched, you can communicate with it from the terminal through `api_client`:
+
+```shell
+lmdeploy serve api_client http://0.0.0.0:23333
+```
+
+You can browse and try out the `api_server` APIs in the Swagger UI at `http://0.0.0.0:23333`, or read the API specification [here](https://github.com/InternLM/lmdeploy/blob/main/docs/en/serving/restful_api.md).
+
 ## License
 
 This project is released under the MIT license, while InternLM is licensed under the Apache-2.0 license.
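To avoid hand-editing mistakes in `chat_template.json`, the file shown above can also be generated and sanity-checked programmatically. A small stand-alone sketch (stdlib only; the field values simply mirror the template earlier in this card):

```python
import json
from pathlib import Path

# Field values mirror the chat_template.json example above (lmdeploy v0.5.0).
CHAT_TEMPLATE = {
    "model_name": "internlm2",
    "meta_instruction": "我是书生·万象,英文名是InternVL,是由上海人工智能实验室及多家合作单位联合开发的多模态大语言模型。人工智能实验室致力于原始技术创新,开源开放,共享共创,推动科技进步和产业发展。",
    "stop_words": ["<|im_start|>", "<|im_end|>"],
}


def write_chat_template(path):
    """Write the template as UTF-8 JSON and read it back as a sanity check."""
    p = Path(path)
    # ensure_ascii=False keeps the Chinese meta_instruction readable on disk.
    p.write_text(json.dumps(CHAT_TEMPLATE, ensure_ascii=False, indent=2), encoding="utf-8")
    return json.loads(p.read_text(encoding="utf-8"))
```

`lmdeploy serve api_server ... --chat-template chat_template.json` can then point at the generated file.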