Weiyun1025 committed
Commit bc7d877 · verified · 1 Parent(s): dcfc2ed

Upload README.md with huggingface_hub

Files changed (1)
  1. README.md +9 -9
README.md CHANGED
@@ -516,12 +516,12 @@ LMDeploy abstracts the complex inference process of multi-modal Vision-Language
 #### A 'Hello, world' Example
 
 ```python
-from lmdeploy import pipeline, TurbomindEngineConfig
+from lmdeploy import pipeline, TurbomindEngineConfig, ChatTemplateConfig
 from lmdeploy.vl import load_image
 
 model = 'OpenGVLab/InternVL3-9B'
 image = load_image('https://raw.githubusercontent.com/open-mmlab/mmdeploy/main/tests/data/tiger.jpeg')
-pipe = pipeline(model, backend_config=TurbomindEngineConfig(session_len=16384, tp=1))
+pipe = pipeline(model, backend_config=TurbomindEngineConfig(session_len=16384, tp=1), chat_template_config=ChatTemplateConfig(model_name='internvl2_5'))
 response = pipe(('describe this image', image))
 print(response.text)
 ```
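Besides the `(prompt, image)` tuple used in this example, LMDeploy's pipeline also accepts message lists in OpenAI format, which the multi-turn section further down in this diff refers to. A minimal sketch, not part of this diff, assuming the `pipe` object from the example above:

```python
# Sketch (assumption, not in this diff): the same request expressed as
# an OpenAI-format message list, reusing the `pipe` object from above.
messages = [dict(role='user', content=[
    dict(type='text', text='describe this image'),
    dict(type='image_url', image_url=dict(
        url='https://raw.githubusercontent.com/open-mmlab/mmdeploy/main/tests/data/tiger.jpeg')),
])]
response = pipe(messages)
print(response.text)
```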
@@ -533,12 +533,12 @@ If `ImportError` occurs while executing this case, please install the required d
 When dealing with multiple images, you can put them all in one list. Keep in mind that multiple images will lead to a higher number of input tokens, and as a result, the size of the context window typically needs to be increased.
 
 ```python
-from lmdeploy import pipeline, TurbomindEngineConfig
+from lmdeploy import pipeline, TurbomindEngineConfig, ChatTemplateConfig
 from lmdeploy.vl import load_image
 from lmdeploy.vl.constants import IMAGE_TOKEN
 
 model = 'OpenGVLab/InternVL3-9B'
-pipe = pipeline(model, backend_config=TurbomindEngineConfig(session_len=16384, tp=1))
+pipe = pipeline(model, backend_config=TurbomindEngineConfig(session_len=16384, tp=1), chat_template_config=ChatTemplateConfig(model_name='internvl2_5'))
 
 image_urls=[
     'https://raw.githubusercontent.com/open-mmlab/mmdeploy/main/demo/resources/human-pose.jpg',
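This hunk ends mid-list; in the README the example goes on to load every URL and index the images in the prompt with `IMAGE_TOKEN`. A sketch of that continuation, assuming the `pipe` and `image_urls` defined above:

```python
# Sketch of the continuation: load every URL, then reference each
# image in the prompt via IMAGE_TOKEN so the model can tell them apart.
images = [load_image(img_url) for img_url in image_urls]
prompt = f'Image-1: {IMAGE_TOKEN}\nImage-2: {IMAGE_TOKEN}\ndescribe these two images'
response = pipe((prompt, images))
print(response.text)
```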
@@ -556,11 +556,11 @@ print(response.text)
 Conducting inference with batch prompts is quite straightforward; just place them within a list structure:
 
 ```python
-from lmdeploy import pipeline, TurbomindEngineConfig
+from lmdeploy import pipeline, TurbomindEngineConfig, ChatTemplateConfig
 from lmdeploy.vl import load_image
 
 model = 'OpenGVLab/InternVL3-9B'
-pipe = pipeline(model, backend_config=TurbomindEngineConfig(session_len=16384, tp=1))
+pipe = pipeline(model, backend_config=TurbomindEngineConfig(session_len=16384, tp=1), chat_template_config=ChatTemplateConfig(model_name='internvl2_5'))
 
 image_urls=[
     "https://raw.githubusercontent.com/open-mmlab/mmdeploy/main/demo/resources/human-pose.jpg",
@@ -576,11 +576,11 @@ print(response)
 There are two ways to do the multi-turn conversations with the pipeline. One is to construct messages according to the format of OpenAI and use above introduced method, the other is to use the `pipeline.chat` interface.
 
 ```python
-from lmdeploy import pipeline, TurbomindEngineConfig, GenerationConfig
+from lmdeploy import pipeline, TurbomindEngineConfig, GenerationConfig, ChatTemplateConfig
 from lmdeploy.vl import load_image
 
 model = 'OpenGVLab/InternVL3-9B'
-pipe = pipeline(model, backend_config=TurbomindEngineConfig(session_len=16384, tp=1))
+pipe = pipeline(model, backend_config=TurbomindEngineConfig(session_len=16384, tp=1), chat_template_config=ChatTemplateConfig(model_name='internvl2_5'))
 
 image = load_image('https://raw.githubusercontent.com/open-mmlab/mmdeploy/main/demo/resources/human-pose.jpg')
 gen_config = GenerationConfig(top_k=40, top_p=0.8, temperature=0.8)
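The `pipeline.chat` calls themselves fall outside the hunk; the `print(sess.response.text)` context in the next hunk header indicates the shape of the continuation. A sketch, where the follow-up question is an assumption for illustration:

```python
# Sketch of the continuation: the first chat() call opens a session;
# follow-up turns pass the returned session back in.
sess = pipe.chat(('describe this image', image), gen_config=gen_config)
print(sess.response.text)
# The follow-up question below is an assumption, not part of this diff.
sess = pipe.chat('What is the woman doing?', session=sess, gen_config=gen_config)
print(sess.response.text)
```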
@@ -595,7 +595,7 @@ print(sess.response.text)
 LMDeploy's `api_server` enables models to be easily packed into services with a single command. The provided RESTful APIs are compatible with OpenAI's interfaces. Below are an example of service startup:
 
 ```shell
-lmdeploy serve api_server OpenGVLab/InternVL3-9B --server-port 23333 --tp 1
+lmdeploy serve api_server OpenGVLab/InternVL3-9B --chat-template internvl2_5 --server-port 23333 --tp 1
 ```
 
 To use the OpenAI-style interface, you need to install OpenAI:
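The README goes on to install the `openai` package (`pip install openai`) and query the server through the standard OpenAI client. A sketch of such a call, assuming the server started above is listening on port 23333 (`YOUR_API_KEY` is a placeholder):

```python
from openai import OpenAI

# Point the standard OpenAI client at the local api_server.
client = OpenAI(api_key='YOUR_API_KEY', base_url='http://0.0.0.0:23333/v1')
model_name = client.models.list().data[0].id
response = client.chat.completions.create(
    model=model_name,
    messages=[{
        'role': 'user',
        'content': [
            {'type': 'text', 'text': 'describe this image'},
            {'type': 'image_url', 'image_url': {
                'url': 'https://raw.githubusercontent.com/open-mmlab/mmdeploy/main/tests/data/tiger.jpeg'}},
        ],
    }],
    temperature=0.8,
    top_p=0.8)
print(response.choices[0].message.content)
```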
 