zeroMN committed
Commit 036cdb2 · verified · 1 parent: df8187b

Update README.md

Files changed (1)
  1. README.md +0 -47
README.md CHANGED
@@ -39,53 +39,6 @@ pipeline_tag: text-generation
  ## Multi-Modal Model
  # Model Card for Evolutionary
 
- ## Model Details
- model_name: Evolutionary Multi-Modal Model
- model_type: transformer
- license: mit
- language: en zh
- datasets:
- - "Custom"
- tags:
- - text-generation
- - code-generation
- - speech-recognition
- - multi-modal
- - evolutionary
- base_model: facebook/bart-base
- finetuned_from: gpt2, bert-base-uncased, facebook/wav2vec2-base-960h, openai/clip-vit-base-patch32
- dataset: Custom Multi-Modal Dataset
-
- metrics:
- - perplexity
- - bleu
- - wer
- - cer
-
- library_name: transformers
- pipeline_tag: text-generation
- inference:
-   parameters:
-     max_length: 50
-     top_k: 50
-     top_p: 0.95
-     temperature: 1.2
-     do_sample: true
-
- speech_recognition:
-   waveform_path: "C:/Users/baby7/Desktop/权重参数/sample-15s.wav"
-   task: "speech_recognition"
-   output_audio_key: "Transcription"
-
- text_generation:
-   input_text: "What is the future of AI?"
-   task: "text_generation"
-   output_text_key: "Generated Text"
-
- code_generation:
-   input_code: "def add(a, b): return"
-   task: "code_generation"
-   output_code_key: "Generated Code"
  ### Model Description
 
  This model, named `Evolutionary Multi-Modal Model`, is a multimodal transformer designed to handle a variety of tasks including vision and audio processing. It is built on top of the `adapter-transformers` and `transformers` libraries and is intended to be a versatile base model for both direct use and fine-tuning.
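For context, the decoding settings in the removed `inference.parameters` block map one-to-one onto `transformers` generation arguments. A minimal sketch of how they would be applied, assuming the model can be served through a plain text-generation pipeline (the repository id below is a placeholder, not the actual repo name):

```python
# Sketch only: placeholder repository id; the model may need custom loading
# code rather than a plain text-generation pipeline.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="zeroMN/evolutionary-multi-modal",  # hypothetical repo id
)

# Decoding settings taken from the removed `inference.parameters` metadata.
output = generator(
    "What is the future of AI?",
    max_length=50,
    top_k=50,
    top_p=0.95,
    temperature=1.2,
    do_sample=True,
)
print(output[0]["generated_text"])
```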
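Likewise, the removed `speech_recognition` block describes an ASR task. A hedged sketch of that flow, using one of the listed source checkpoints (`facebook/wav2vec2-base-960h`) rather than this repository's own audio head, and an illustrative local file in place of the original path:

```python
# Sketch only: uses a listed source checkpoint, not this repo's audio head,
# and a local WAV file standing in for the path in the removed metadata.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="facebook/wav2vec2-base-960h",
)

result = asr("sample-15s.wav")  # e.g. a short mono WAV clip
print(result["text"])           # the "Transcription" output named in the metadata
```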
 