fxmarty committed
Commit 1eaee47
1 Parent(s): 5e64819

Adding regression benchmark for the transformers SHA 55db70c63de2c07b6ffe36f24c0e7df8f967e935

Files changed (21)
  1. raw_results/2023-08-10_20:06:29_55db70c63de2c07b6ffe36f24c0e7df8f967e935/llama_1gpu_inference/0/inference_results.csv +1 -1
  2. raw_results/2023-08-10_20:06:29_55db70c63de2c07b6ffe36f24c0e7df8f967e935/llama_1gpu_inference/0/main.log +23 -23
  3. raw_results/2023-08-10_20:06:29_55db70c63de2c07b6ffe36f24c0e7df8f967e935/llama_1gpu_inference/1/main.log +10 -15
  4. raw_results/2023-08-10_20:06:29_55db70c63de2c07b6ffe36f24c0e7df8f967e935/llama_1gpu_inference/2/hydra_config.yaml +66 -0
  5. raw_results/2023-08-10_20:06:29_55db70c63de2c07b6ffe36f24c0e7df8f967e935/llama_1gpu_inference/2/inference_results.csv +2 -0
  6. raw_results/2023-08-10_20:06:29_55db70c63de2c07b6ffe36f24c0e7df8f967e935/llama_1gpu_inference/2/main.log +23 -0
  7. raw_results/2023-08-10_20:06:29_55db70c63de2c07b6ffe36f24c0e7df8f967e935/llama_1gpu_inference/3/hydra_config.yaml +66 -0
  8. raw_results/2023-08-10_20:06:29_55db70c63de2c07b6ffe36f24c0e7df8f967e935/llama_1gpu_inference/3/main.log +10 -0
  9. raw_results/2023-08-10_20:06:29_55db70c63de2c07b6ffe36f24c0e7df8f967e935/llama_1gpu_inference/4/hydra_config.yaml +66 -0
  10. raw_results/2023-08-10_20:06:29_55db70c63de2c07b6ffe36f24c0e7df8f967e935/llama_1gpu_inference/4/inference_results.csv +2 -0
  11. raw_results/2023-08-10_20:06:29_55db70c63de2c07b6ffe36f24c0e7df8f967e935/llama_1gpu_inference/4/main.log +23 -0
  12. raw_results/2023-08-10_20:06:29_55db70c63de2c07b6ffe36f24c0e7df8f967e935/llama_1gpu_inference/5/hydra_config.yaml +66 -0
  13. raw_results/2023-08-10_20:06:29_55db70c63de2c07b6ffe36f24c0e7df8f967e935/llama_1gpu_inference/5/main.log +10 -0
  14. raw_results/2023-08-10_20:06:29_55db70c63de2c07b6ffe36f24c0e7df8f967e935/llama_1gpu_inference/6/hydra_config.yaml +66 -0
  15. raw_results/2023-08-10_20:06:29_55db70c63de2c07b6ffe36f24c0e7df8f967e935/llama_1gpu_inference/6/main.log +13 -0
  16. raw_results/2023-08-10_20:06:29_55db70c63de2c07b6ffe36f24c0e7df8f967e935/llama_1gpu_inference/7/hydra_config.yaml +66 -0
  17. raw_results/2023-08-10_20:06:29_55db70c63de2c07b6ffe36f24c0e7df8f967e935/llama_1gpu_inference/7/main.log +10 -0
  18. raw_results/2023-08-10_20:06:29_55db70c63de2c07b6ffe36f24c0e7df8f967e935/pytorch_bert_inference/0/inference_results.csv +1 -1
  19. raw_results/2023-08-10_20:06:29_55db70c63de2c07b6ffe36f24c0e7df8f967e935/pytorch_bert_inference/0/main.log +20 -20
  20. raw_results/2023-08-10_20:06:29_55db70c63de2c07b6ffe36f24c0e7df8f967e935/pytorch_gpt2_inference/0/inference_results.csv +1 -1
  21. raw_results/2023-08-10_20:06:29_55db70c63de2c07b6ffe36f24c0e7df8f967e935/pytorch_gpt2_inference/0/main.log +22 -22
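
The regression check itself amounts to comparing the metrics in each run's inference_results.csv against the corresponding file from the previously benchmarked commit. A minimal sketch of such a comparison (the `previous/` path, the helper function and the percentage printout are illustrative assumptions, not files or tooling from this repository):

```python
import csv
from pathlib import Path

# Hypothetical paths: the previous snapshot's CSV vs. the one added in this commit.
OLD = Path("previous/llama_1gpu_inference/0/inference_results.csv")
NEW = Path("raw_results/2023-08-10_20:06:29_55db70c63de2c07b6ffe36f24c0e7df8f967e935/"
           "llama_1gpu_inference/0/inference_results.csv")

def load_metrics(path: Path) -> dict[str, float]:
    """Read the single data row of an inference_results.csv into {column: value}."""
    with path.open(newline="") as f:
        row = next(csv.DictReader(f))
    return {k: float(v) for k, v in row.items() if k}  # skip the unnamed index column

old, new = load_metrics(OLD), load_metrics(NEW)
for metric in sorted(old.keys() & new.keys()):
    delta = (new[metric] - old[metric]) / old[metric] * 100
    print(f"{metric:45s} {old[metric]:>14.4f} -> {new[metric]:>14.4f} ({delta:+.1f}%)")
```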
raw_results/2023-08-10_20:06:29_55db70c63de2c07b6ffe36f24c0e7df8f967e935/llama_1gpu_inference/0/inference_results.csv CHANGED
@@ -1,2 +1,2 @@
  ,forward.peak_memory(MB),forward.latency(s),forward.throughput(samples/s),generate.latency(s),generate.throughput(tokens/s)
- 0,16195.125247999998,0.031,32.3,7.71,25.9
+ 0,80330.22771199999,0.0318,31.4,6.03,33.2
raw_results/2023-08-10_20:06:29_55db70c63de2c07b6ffe36f24c0e7df8f967e935/llama_1gpu_inference/0/main.log CHANGED
@@ -1,23 +1,23 @@
- [2023-08-10 21:20:41,510][benchmark][INFO] - Configuring inference benchmark
- [2023-08-10 21:20:41,511][benchmark][INFO] - + Setting seed(42)
- [2023-08-10 21:20:41,806][pytorch][INFO] - + Infered AutoModel class AutoModelForCausalLM for task text-generation and model_type llama
- [2023-08-10 21:20:41,807][backend][INFO] - Configuring pytorch backend
- [2023-08-10 21:20:41,807][backend][INFO] - + Checking initial device isolation
- [2023-08-10 21:20:41,949][backend][INFO] - + Checking contineous device isolation
- [2023-08-10 21:20:41,964][pytorch][INFO] - + Disabling gradients
- [2023-08-10 21:20:41,965][pytorch][INFO] - + Loading pretrained model weights in dtype: float16 on device: cuda
- [2023-08-10 21:21:51,892][pytorch][INFO] - + Turning on eval mode
- [2023-08-10 21:21:51,893][inference][INFO] - Running inference benchmark
- [2023-08-10 21:22:00,458][inference][INFO] - + Tracking forward pass peak memory
- [2023-08-10 21:22:01,718][memory_tracker][INFO] - Peak memory usage: 16195.125247999998 MB
- [2023-08-10 21:22:01,719][inference][INFO] - + Forward pass peak memory: 16195.125247999998 (MB)
- [2023-08-10 21:22:01,719][inference][INFO] - + Warming up the forward pass
- [2023-08-10 21:22:02,031][inference][INFO] - + Tracking forward pass latency and throughput
- [2023-08-10 21:22:22,364][inference][INFO] - + Forward pass latency: 3.10e-02 (s)
- [2023-08-10 21:22:22,365][inference][INFO] - + Forward pass throughput: 32.30 (samples/s)
- [2023-08-10 21:22:22,366][inference][INFO] - + Warming up the generation pass
- [2023-08-10 21:22:30,792][inference][INFO] - + Tracking generation latency and throughput
- [2023-08-10 21:22:53,923][inference][INFO] - + Generation pass latency: 7.71e+00 (s)
- [2023-08-10 21:22:53,925][inference][INFO] - + Generation pass throughput: 25.90 (tokens/s)
- [2023-08-10 21:22:53,925][inference][INFO] - Saving inference results
- [2023-08-10 21:22:53,936][backend][INFO] - Cleaning backend
+ [2023-08-10 21:25:46,773][benchmark][INFO] - Configuring inference benchmark
+ [2023-08-10 21:25:46,774][benchmark][INFO] - + Setting seed(42)
+ [2023-08-10 21:25:47,065][pytorch][INFO] - + Infered AutoModel class AutoModelForCausalLM for task text-generation and model_type llama
+ [2023-08-10 21:25:47,065][backend][INFO] - Configuring pytorch backend
+ [2023-08-10 21:25:47,065][backend][INFO] - + Checking initial device isolation
+ [2023-08-10 21:25:47,490][backend][INFO] - + Checking contineous device isolation
+ [2023-08-10 21:25:47,512][pytorch][INFO] - + Disabling gradients
+ [2023-08-10 21:25:47,514][pytorch][INFO] - + Loading pretrained model weights in dtype: float16 on device: cuda
+ [2023-08-10 21:27:15,232][pytorch][INFO] - + Turning on eval mode
+ [2023-08-10 21:27:15,234][inference][INFO] - Running inference benchmark
+ [2023-08-10 21:27:23,248][inference][INFO] - + Tracking forward pass peak memory
+ [2023-08-10 21:27:24,569][memory_tracker][INFO] - Peak memory usage: 80330.22771199999 MB
+ [2023-08-10 21:27:24,570][inference][INFO] - + Forward pass peak memory: 80330.22771199999 (MB)
+ [2023-08-10 21:27:24,570][inference][INFO] - + Warming up the forward pass
+ [2023-08-10 21:27:24,888][inference][INFO] - + Tracking forward pass latency and throughput
+ [2023-08-10 21:27:45,192][inference][INFO] - + Forward pass latency: 3.18e-02 (s)
+ [2023-08-10 21:27:45,193][inference][INFO] - + Forward pass throughput: 31.40 (samples/s)
+ [2023-08-10 21:27:45,193][inference][INFO] - + Warming up the generation pass
+ [2023-08-10 21:27:51,915][inference][INFO] - + Tracking generation latency and throughput
+ [2023-08-10 21:28:16,026][inference][INFO] - + Generation pass latency: 6.03e+00 (s)
+ [2023-08-10 21:28:16,029][inference][INFO] - + Generation pass throughput: 33.20 (tokens/s)
+ [2023-08-10 21:28:16,029][inference][INFO] - Saving inference results
+ [2023-08-10 21:28:16,037][backend][INFO] - Cleaning backend
raw_results/2023-08-10_20:06:29_55db70c63de2c07b6ffe36f24c0e7df8f967e935/llama_1gpu_inference/1/main.log CHANGED
@@ -1,15 +1,10 @@
- [2023-08-10 21:22:54,427][benchmark][INFO] - Configuring inference benchmark
- [2023-08-10 21:22:54,428][benchmark][INFO] - + Setting seed(42)
- [2023-08-10 21:22:54,645][pytorch][INFO] - + Infered AutoModel class AutoModelForCausalLM for task text-generation and model_type llama
- [2023-08-10 21:22:54,645][backend][INFO] - Configuring pytorch backend
- [2023-08-10 21:22:54,646][backend][INFO] - + Checking initial device isolation
- [2023-08-10 21:22:54,750][backend][INFO] - + Checking contineous device isolation
- [2023-08-10 21:22:54,786][pytorch][INFO] - + Disabling gradients
- [2023-08-10 21:22:54,787][pytorch][INFO] - + Loading pretrained model weights in dtype: float32 on device: cuda
- [2023-08-10 21:23:12,100][pytorch][INFO] - + Turning on eval mode
- [2023-08-10 21:23:12,102][inference][INFO] - Running inference benchmark
- [2023-08-10 21:23:20,736][inference][INFO] - + Tracking forward pass peak memory
- [2023-08-10 21:23:20,812][memory_tracker][INFO] - Peak memory usage: 30317.346815999997 MB
- [2023-08-10 21:23:20,812][inference][INFO] - + Forward pass peak memory: 30317.346815999997 (MB)
- [2023-08-10 21:23:20,813][inference][INFO] - + Warming up the forward pass
- [2023-08-10 21:23:22,942][inference][INFO] - + Tracking forward pass latency and throughput
+ [2023-08-10 21:28:16,493][benchmark][INFO] - Configuring inference benchmark
+ [2023-08-10 21:28:16,494][benchmark][INFO] - + Setting seed(42)
+ [2023-08-10 21:28:16,695][pytorch][INFO] - + Infered AutoModel class AutoModelForCausalLM for task text-generation and model_type llama
+ [2023-08-10 21:28:16,695][backend][INFO] - Configuring pytorch backend
+ [2023-08-10 21:28:16,696][backend][INFO] - + Checking initial device isolation
+ [2023-08-10 21:28:17,024][backend][INFO] - + Checking contineous device isolation
+ [2023-08-10 21:28:17,063][pytorch][INFO] - + Disabling gradients
+ [2023-08-10 21:28:17,064][pytorch][INFO] - + Loading pretrained model weights in dtype: float32 on device: cuda
+ [2023-08-10 21:28:17,294][main][ERROR] - Error during benchmarking: CUDA out of memory. Tried to allocate 172.00 MiB (GPU 0; 79.35 GiB total capacity; 18.39 GiB already allocated; 33.12 MiB free; 18.40 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
+ [2023-08-10 21:28:17,295][backend][INFO] - Cleaning backend
raw_results/2023-08-10_20:06:29_55db70c63de2c07b6ffe36f24c0e7df8f967e935/llama_1gpu_inference/2/hydra_config.yaml ADDED
@@ -0,0 +1,66 @@
+ backend:
+ name: pytorch
+ version: 2.0.1+cu117
+ _target_: optimum_benchmark.backends.pytorch.PyTorchBackend
+ inter_op_num_threads: null
+ intra_op_num_threads: null
+ initial_isolation_check: true
+ continous_isolation_check: true
+ delete_cache: false
+ no_weights: false
+ torch_dtype: float16
+ device_map: null
+ load_in_8bit: false
+ load_in_4bit: false
+ bettertransformer: false
+ torch_compile: false
+ torch_compile_config:
+ fullgraph: false
+ dynamic: false
+ backend: inductor
+ mode: null
+ options: null
+ disable: false
+ amp_autocast: false
+ amp_dtype: null
+ disable_grad: true
+ eval_mode: true
+ benchmark:
+ name: inference
+ _target_: optimum_benchmark.benchmarks.inference.InferenceBenchmark
+ seed: 42
+ memory: true
+ warmup_runs: 10
+ benchmark_duration: 20
+ input_shapes:
+ batch_size: 2
+ sequence_length: 200
+ num_choices: 4
+ width: 64
+ height: 64
+ num_channels: 3
+ point_batch_size: 3
+ nb_points_per_image: 2
+ feature_size: 80
+ nb_max_frames: 3000
+ audio_sequence_length: 16000
+ new_tokens: 200
+ experiment_name: llama_1gpu_inference
+ model: togethercomputer/LLaMA-2-7B-32K
+ device: cuda
+ task: text-generation
+ hub_kwargs:
+ revision: main
+ cache_dir: null
+ force_download: false
+ local_files_only: false
+ environment:
+ optimum_version: 1.11.0
+ transformers_version: 4.32.0.dev0
+ accelerate_version: 0.21.0
+ diffusers_version: null
+ python_version: 3.10.12
+ system: Linux
+ cpu: ' Intel(R) Xeon(R) Platinum 8275CL CPU @ 3.00GHz'
+ cpu_count: 96
+ cpu_ram_mb: 1204539.797504
raw_results/2023-08-10_20:06:29_55db70c63de2c07b6ffe36f24c0e7df8f967e935/llama_1gpu_inference/2/inference_results.csv ADDED
@@ -0,0 +1,2 @@
+ ,forward.peak_memory(MB),forward.latency(s),forward.throughput(samples/s),generate.latency(s),generate.throughput(tokens/s)
+ 0,82039.406592,0.0331,60.4,6.27,63.8
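
The throughput columns in these CSVs appear to follow directly from the latencies and the configured input shapes: forward throughput ≈ batch_size / forward latency, and generation throughput ≈ batch_size × new_tokens / generation latency. A quick sanity check against the row above, assuming batch_size: 2 and new_tokens: 200 from this run's hydra_config.yaml (the formulas are inferred from the numbers, not taken from optimum-benchmark's source):

```python
# Reproduce the reported throughputs from the latencies above, assuming
# throughput = batch_size / latency for the forward pass and
# throughput = batch_size * new_tokens / latency for generation.
batch_size, new_tokens = 2, 200
forward_latency_s, generate_latency_s = 0.0331, 6.27

print(round(batch_size / forward_latency_s, 1))                # 60.4 samples/s
print(round(batch_size * new_tokens / generate_latency_s, 1))  # 63.8 tokens/s
```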
raw_results/2023-08-10_20:06:29_55db70c63de2c07b6ffe36f24c0e7df8f967e935/llama_1gpu_inference/2/main.log ADDED
@@ -0,0 +1,23 @@
+ [2023-08-10 21:28:17,673][benchmark][INFO] - Configuring inference benchmark
+ [2023-08-10 21:28:17,674][benchmark][INFO] - + Setting seed(42)
+ [2023-08-10 21:28:17,884][pytorch][INFO] - + Infered AutoModel class AutoModelForCausalLM for task text-generation and model_type llama
+ [2023-08-10 21:28:17,884][backend][INFO] - Configuring pytorch backend
+ [2023-08-10 21:28:17,885][backend][INFO] - + Checking initial device isolation
+ [2023-08-10 21:28:18,203][backend][INFO] - + Checking contineous device isolation
+ [2023-08-10 21:28:18,238][pytorch][INFO] - + Disabling gradients
+ [2023-08-10 21:28:18,239][pytorch][INFO] - + Loading pretrained model weights in dtype: float16 on device: cuda
+ [2023-08-10 21:28:28,915][pytorch][INFO] - + Turning on eval mode
+ [2023-08-10 21:28:28,917][inference][INFO] - Running inference benchmark
+ [2023-08-10 21:28:36,839][inference][INFO] - + Tracking forward pass peak memory
+ [2023-08-10 21:28:36,885][memory_tracker][INFO] - Peak memory usage: 82039.406592 MB
+ [2023-08-10 21:28:36,885][inference][INFO] - + Forward pass peak memory: 82039.406592 (MB)
+ [2023-08-10 21:28:36,886][inference][INFO] - + Warming up the forward pass
+ [2023-08-10 21:28:37,754][inference][INFO] - + Tracking forward pass latency and throughput
+ [2023-08-10 21:29:11,298][inference][INFO] - + Forward pass latency: 3.31e-02 (s)
+ [2023-08-10 21:29:11,299][inference][INFO] - + Forward pass throughput: 60.40 (samples/s)
+ [2023-08-10 21:29:11,300][inference][INFO] - + Warming up the generation pass
+ [2023-08-10 21:29:18,363][inference][INFO] - + Tracking generation latency and throughput
+ [2023-08-10 21:29:43,432][inference][INFO] - + Generation pass latency: 6.27e+00 (s)
+ [2023-08-10 21:29:43,434][inference][INFO] - + Generation pass throughput: 63.80 (tokens/s)
+ [2023-08-10 21:29:43,434][inference][INFO] - Saving inference results
+ [2023-08-10 21:29:43,441][backend][INFO] - Cleaning backend
raw_results/2023-08-10_20:06:29_55db70c63de2c07b6ffe36f24c0e7df8f967e935/llama_1gpu_inference/3/hydra_config.yaml ADDED
@@ -0,0 +1,66 @@
+ backend:
+ name: pytorch
+ version: 2.0.1+cu117
+ _target_: optimum_benchmark.backends.pytorch.PyTorchBackend
+ inter_op_num_threads: null
+ intra_op_num_threads: null
+ initial_isolation_check: true
+ continous_isolation_check: true
+ delete_cache: false
+ no_weights: false
+ torch_dtype: float32
+ device_map: null
+ load_in_8bit: false
+ load_in_4bit: false
+ bettertransformer: false
+ torch_compile: false
+ torch_compile_config:
+ fullgraph: false
+ dynamic: false
+ backend: inductor
+ mode: null
+ options: null
+ disable: false
+ amp_autocast: false
+ amp_dtype: null
+ disable_grad: true
+ eval_mode: true
+ benchmark:
+ name: inference
+ _target_: optimum_benchmark.benchmarks.inference.InferenceBenchmark
+ seed: 42
+ memory: true
+ warmup_runs: 10
+ benchmark_duration: 20
+ input_shapes:
+ batch_size: 2
+ sequence_length: 200
+ num_choices: 4
+ width: 64
+ height: 64
+ num_channels: 3
+ point_batch_size: 3
+ nb_points_per_image: 2
+ feature_size: 80
+ nb_max_frames: 3000
+ audio_sequence_length: 16000
+ new_tokens: 200
+ experiment_name: llama_1gpu_inference
+ model: togethercomputer/LLaMA-2-7B-32K
+ device: cuda
+ task: text-generation
+ hub_kwargs:
+ revision: main
+ cache_dir: null
+ force_download: false
+ local_files_only: false
+ environment:
+ optimum_version: 1.11.0
+ transformers_version: 4.32.0.dev0
+ accelerate_version: 0.21.0
+ diffusers_version: null
+ python_version: 3.10.12
+ system: Linux
+ cpu: ' Intel(R) Xeon(R) Platinum 8275CL CPU @ 3.00GHz'
+ cpu_count: 96
+ cpu_ram_mb: 1204539.797504
raw_results/2023-08-10_20:06:29_55db70c63de2c07b6ffe36f24c0e7df8f967e935/llama_1gpu_inference/3/main.log ADDED
@@ -0,0 +1,10 @@
+ [2023-08-10 21:29:43,918][benchmark][INFO] - Configuring inference benchmark
+ [2023-08-10 21:29:43,920][benchmark][INFO] - + Setting seed(42)
+ [2023-08-10 21:29:44,130][pytorch][INFO] - + Infered AutoModel class AutoModelForCausalLM for task text-generation and model_type llama
+ [2023-08-10 21:29:44,130][backend][INFO] - Configuring pytorch backend
+ [2023-08-10 21:29:44,130][backend][INFO] - + Checking initial device isolation
+ [2023-08-10 21:29:44,449][backend][INFO] - + Checking contineous device isolation
+ [2023-08-10 21:29:44,484][pytorch][INFO] - + Disabling gradients
+ [2023-08-10 21:29:44,485][pytorch][INFO] - + Loading pretrained model weights in dtype: float32 on device: cuda
+ [2023-08-10 21:29:44,710][main][ERROR] - Error during benchmarking: CUDA out of memory. Tried to allocate 64.00 MiB (GPU 0; 79.35 GiB total capacity; 18.09 GiB already allocated; 17.12 MiB free; 18.10 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
+ [2023-08-10 21:29:44,710][backend][INFO] - Cleaning backend
raw_results/2023-08-10_20:06:29_55db70c63de2c07b6ffe36f24c0e7df8f967e935/llama_1gpu_inference/4/hydra_config.yaml ADDED
@@ -0,0 +1,66 @@
+ backend:
+ name: pytorch
+ version: 2.0.1+cu117
+ _target_: optimum_benchmark.backends.pytorch.PyTorchBackend
+ inter_op_num_threads: null
+ intra_op_num_threads: null
+ initial_isolation_check: true
+ continous_isolation_check: true
+ delete_cache: false
+ no_weights: false
+ torch_dtype: float16
+ device_map: null
+ load_in_8bit: false
+ load_in_4bit: false
+ bettertransformer: false
+ torch_compile: false
+ torch_compile_config:
+ fullgraph: false
+ dynamic: false
+ backend: inductor
+ mode: null
+ options: null
+ disable: false
+ amp_autocast: false
+ amp_dtype: null
+ disable_grad: true
+ eval_mode: true
+ benchmark:
+ name: inference
+ _target_: optimum_benchmark.benchmarks.inference.InferenceBenchmark
+ seed: 42
+ memory: true
+ warmup_runs: 10
+ benchmark_duration: 20
+ input_shapes:
+ batch_size: 4
+ sequence_length: 200
+ num_choices: 4
+ width: 64
+ height: 64
+ num_channels: 3
+ point_batch_size: 3
+ nb_points_per_image: 2
+ feature_size: 80
+ nb_max_frames: 3000
+ audio_sequence_length: 16000
+ new_tokens: 200
+ experiment_name: llama_1gpu_inference
+ model: togethercomputer/LLaMA-2-7B-32K
+ device: cuda
+ task: text-generation
+ hub_kwargs:
+ revision: main
+ cache_dir: null
+ force_download: false
+ local_files_only: false
+ environment:
+ optimum_version: 1.11.0
+ transformers_version: 4.32.0.dev0
+ accelerate_version: 0.21.0
+ diffusers_version: null
+ python_version: 3.10.12
+ system: Linux
+ cpu: ' Intel(R) Xeon(R) Platinum 8275CL CPU @ 3.00GHz'
+ cpu_count: 96
+ cpu_ram_mb: 1204539.797504
raw_results/2023-08-10_20:06:29_55db70c63de2c07b6ffe36f24c0e7df8f967e935/llama_1gpu_inference/4/inference_results.csv ADDED
@@ -0,0 +1,2 @@
+ ,forward.peak_memory(MB),forward.latency(s),forward.throughput(samples/s),generate.latency(s),generate.throughput(tokens/s)
+ 0,83182.354432,0.0396,101.0,6.84,117.0
raw_results/2023-08-10_20:06:29_55db70c63de2c07b6ffe36f24c0e7df8f967e935/llama_1gpu_inference/4/main.log ADDED
@@ -0,0 +1,23 @@
+ [2023-08-10 21:29:45,091][benchmark][INFO] - Configuring inference benchmark
+ [2023-08-10 21:29:45,092][benchmark][INFO] - + Setting seed(42)
+ [2023-08-10 21:29:45,287][pytorch][INFO] - + Infered AutoModel class AutoModelForCausalLM for task text-generation and model_type llama
+ [2023-08-10 21:29:45,287][backend][INFO] - Configuring pytorch backend
+ [2023-08-10 21:29:45,288][backend][INFO] - + Checking initial device isolation
+ [2023-08-10 21:29:45,601][backend][INFO] - + Checking contineous device isolation
+ [2023-08-10 21:29:45,636][pytorch][INFO] - + Disabling gradients
+ [2023-08-10 21:29:45,637][pytorch][INFO] - + Loading pretrained model weights in dtype: float16 on device: cuda
+ [2023-08-10 21:29:56,097][pytorch][INFO] - + Turning on eval mode
+ [2023-08-10 21:29:56,099][inference][INFO] - Running inference benchmark
+ [2023-08-10 21:30:03,868][inference][INFO] - + Tracking forward pass peak memory
+ [2023-08-10 21:30:03,918][memory_tracker][INFO] - Peak memory usage: 83182.354432 MB
+ [2023-08-10 21:30:03,918][inference][INFO] - + Forward pass peak memory: 83182.354432 (MB)
+ [2023-08-10 21:30:03,919][inference][INFO] - + Warming up the forward pass
+ [2023-08-10 21:30:04,683][inference][INFO] - + Tracking forward pass latency and throughput
+ [2023-08-10 21:30:55,393][inference][INFO] - + Forward pass latency: 3.96e-02 (s)
+ [2023-08-10 21:30:55,395][inference][INFO] - + Forward pass throughput: 101.00 (samples/s)
+ [2023-08-10 21:30:55,395][inference][INFO] - + Warming up the generation pass
+ [2023-08-10 21:31:04,341][inference][INFO] - + Tracking generation latency and throughput
+ [2023-08-10 21:31:24,854][inference][INFO] - + Generation pass latency: 6.84e+00 (s)
+ [2023-08-10 21:31:24,856][inference][INFO] - + Generation pass throughput: 117.00 (tokens/s)
+ [2023-08-10 21:31:24,856][inference][INFO] - Saving inference results
+ [2023-08-10 21:31:24,862][backend][INFO] - Cleaning backend
raw_results/2023-08-10_20:06:29_55db70c63de2c07b6ffe36f24c0e7df8f967e935/llama_1gpu_inference/5/hydra_config.yaml ADDED
@@ -0,0 +1,66 @@
+ backend:
+ name: pytorch
+ version: 2.0.1+cu117
+ _target_: optimum_benchmark.backends.pytorch.PyTorchBackend
+ inter_op_num_threads: null
+ intra_op_num_threads: null
+ initial_isolation_check: true
+ continous_isolation_check: true
+ delete_cache: false
+ no_weights: false
+ torch_dtype: float32
+ device_map: null
+ load_in_8bit: false
+ load_in_4bit: false
+ bettertransformer: false
+ torch_compile: false
+ torch_compile_config:
+ fullgraph: false
+ dynamic: false
+ backend: inductor
+ mode: null
+ options: null
+ disable: false
+ amp_autocast: false
+ amp_dtype: null
+ disable_grad: true
+ eval_mode: true
+ benchmark:
+ name: inference
+ _target_: optimum_benchmark.benchmarks.inference.InferenceBenchmark
+ seed: 42
+ memory: true
+ warmup_runs: 10
+ benchmark_duration: 20
+ input_shapes:
+ batch_size: 4
+ sequence_length: 200
+ num_choices: 4
+ width: 64
+ height: 64
+ num_channels: 3
+ point_batch_size: 3
+ nb_points_per_image: 2
+ feature_size: 80
+ nb_max_frames: 3000
+ audio_sequence_length: 16000
+ new_tokens: 200
+ experiment_name: llama_1gpu_inference
+ model: togethercomputer/LLaMA-2-7B-32K
+ device: cuda
+ task: text-generation
+ hub_kwargs:
+ revision: main
+ cache_dir: null
+ force_download: false
+ local_files_only: false
+ environment:
+ optimum_version: 1.11.0
+ transformers_version: 4.32.0.dev0
+ accelerate_version: 0.21.0
+ diffusers_version: null
+ python_version: 3.10.12
+ system: Linux
+ cpu: ' Intel(R) Xeon(R) Platinum 8275CL CPU @ 3.00GHz'
+ cpu_count: 96
+ cpu_ram_mb: 1204539.797504
raw_results/2023-08-10_20:06:29_55db70c63de2c07b6ffe36f24c0e7df8f967e935/llama_1gpu_inference/5/main.log ADDED
@@ -0,0 +1,10 @@
+ [2023-08-10 21:31:25,367][benchmark][INFO] - Configuring inference benchmark
+ [2023-08-10 21:31:25,367][benchmark][INFO] - + Setting seed(42)
+ [2023-08-10 21:31:25,553][pytorch][INFO] - + Infered AutoModel class AutoModelForCausalLM for task text-generation and model_type llama
+ [2023-08-10 21:31:25,554][backend][INFO] - Configuring pytorch backend
+ [2023-08-10 21:31:25,554][backend][INFO] - + Checking initial device isolation
+ [2023-08-10 21:31:25,870][backend][INFO] - + Checking contineous device isolation
+ [2023-08-10 21:31:25,906][pytorch][INFO] - + Disabling gradients
+ [2023-08-10 21:31:25,906][pytorch][INFO] - + Loading pretrained model weights in dtype: float32 on device: cuda
+ [2023-08-10 21:31:26,126][main][ERROR] - Error during benchmarking: CUDA out of memory. Tried to allocate 64.00 MiB (GPU 0; 79.35 GiB total capacity; 17.98 GiB already allocated; 7.12 MiB free; 17.99 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
+ [2023-08-10 21:31:26,126][backend][INFO] - Cleaning backend
raw_results/2023-08-10_20:06:29_55db70c63de2c07b6ffe36f24c0e7df8f967e935/llama_1gpu_inference/6/hydra_config.yaml ADDED
@@ -0,0 +1,66 @@
+ backend:
+ name: pytorch
+ version: 2.0.1+cu117
+ _target_: optimum_benchmark.backends.pytorch.PyTorchBackend
+ inter_op_num_threads: null
+ intra_op_num_threads: null
+ initial_isolation_check: true
+ continous_isolation_check: true
+ delete_cache: false
+ no_weights: false
+ torch_dtype: float16
+ device_map: null
+ load_in_8bit: false
+ load_in_4bit: false
+ bettertransformer: false
+ torch_compile: false
+ torch_compile_config:
+ fullgraph: false
+ dynamic: false
+ backend: inductor
+ mode: null
+ options: null
+ disable: false
+ amp_autocast: false
+ amp_dtype: null
+ disable_grad: true
+ eval_mode: true
+ benchmark:
+ name: inference
+ _target_: optimum_benchmark.benchmarks.inference.InferenceBenchmark
+ seed: 42
+ memory: true
+ warmup_runs: 10
+ benchmark_duration: 20
+ input_shapes:
+ batch_size: 16
+ sequence_length: 200
+ num_choices: 4
+ width: 64
+ height: 64
+ num_channels: 3
+ point_batch_size: 3
+ nb_points_per_image: 2
+ feature_size: 80
+ nb_max_frames: 3000
+ audio_sequence_length: 16000
+ new_tokens: 200
+ experiment_name: llama_1gpu_inference
+ model: togethercomputer/LLaMA-2-7B-32K
+ device: cuda
+ task: text-generation
+ hub_kwargs:
+ revision: main
+ cache_dir: null
+ force_download: false
+ local_files_only: false
+ environment:
+ optimum_version: 1.11.0
+ transformers_version: 4.32.0.dev0
+ accelerate_version: 0.21.0
+ diffusers_version: null
+ python_version: 3.10.12
+ system: Linux
+ cpu: ' Intel(R) Xeon(R) Platinum 8275CL CPU @ 3.00GHz'
+ cpu_count: 96
+ cpu_ram_mb: 1204539.797504
raw_results/2023-08-10_20:06:29_55db70c63de2c07b6ffe36f24c0e7df8f967e935/llama_1gpu_inference/6/main.log ADDED
@@ -0,0 +1,13 @@
+ [2023-08-10 21:31:26,507][benchmark][INFO] - Configuring inference benchmark
+ [2023-08-10 21:31:26,509][benchmark][INFO] - + Setting seed(42)
+ [2023-08-10 21:31:26,704][pytorch][INFO] - + Infered AutoModel class AutoModelForCausalLM for task text-generation and model_type llama
+ [2023-08-10 21:31:26,705][backend][INFO] - Configuring pytorch backend
+ [2023-08-10 21:31:26,705][backend][INFO] - + Checking initial device isolation
+ [2023-08-10 21:31:27,027][backend][INFO] - + Checking contineous device isolation
+ [2023-08-10 21:31:27,062][pytorch][INFO] - + Disabling gradients
+ [2023-08-10 21:31:27,063][pytorch][INFO] - + Loading pretrained model weights in dtype: float16 on device: cuda
+ [2023-08-10 21:31:37,679][pytorch][INFO] - + Turning on eval mode
+ [2023-08-10 21:31:37,681][inference][INFO] - Running inference benchmark
+ [2023-08-10 21:31:45,516][inference][INFO] - + Tracking forward pass peak memory
+ [2023-08-10 21:31:45,833][main][ERROR] - Error during benchmarking: CUDA out of memory. Tried to allocate 392.00 MiB (GPU 0; 79.35 GiB total capacity; 17.11 GiB already allocated; 101.12 MiB free; 17.90 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
+ [2023-08-10 21:31:45,833][backend][INFO] - Cleaning backend
raw_results/2023-08-10_20:06:29_55db70c63de2c07b6ffe36f24c0e7df8f967e935/llama_1gpu_inference/7/hydra_config.yaml ADDED
@@ -0,0 +1,66 @@
+ backend:
+ name: pytorch
+ version: 2.0.1+cu117
+ _target_: optimum_benchmark.backends.pytorch.PyTorchBackend
+ inter_op_num_threads: null
+ intra_op_num_threads: null
+ initial_isolation_check: true
+ continous_isolation_check: true
+ delete_cache: false
+ no_weights: false
+ torch_dtype: float32
+ device_map: null
+ load_in_8bit: false
+ load_in_4bit: false
+ bettertransformer: false
+ torch_compile: false
+ torch_compile_config:
+ fullgraph: false
+ dynamic: false
+ backend: inductor
+ mode: null
+ options: null
+ disable: false
+ amp_autocast: false
+ amp_dtype: null
+ disable_grad: true
+ eval_mode: true
+ benchmark:
+ name: inference
+ _target_: optimum_benchmark.benchmarks.inference.InferenceBenchmark
+ seed: 42
+ memory: true
+ warmup_runs: 10
+ benchmark_duration: 20
+ input_shapes:
+ batch_size: 16
+ sequence_length: 200
+ num_choices: 4
+ width: 64
+ height: 64
+ num_channels: 3
+ point_batch_size: 3
+ nb_points_per_image: 2
+ feature_size: 80
+ nb_max_frames: 3000
+ audio_sequence_length: 16000
+ new_tokens: 200
+ experiment_name: llama_1gpu_inference
+ model: togethercomputer/LLaMA-2-7B-32K
+ device: cuda
+ task: text-generation
+ hub_kwargs:
+ revision: main
+ cache_dir: null
+ force_download: false
+ local_files_only: false
+ environment:
+ optimum_version: 1.11.0
+ transformers_version: 4.32.0.dev0
+ accelerate_version: 0.21.0
+ diffusers_version: null
+ python_version: 3.10.12
+ system: Linux
+ cpu: ' Intel(R) Xeon(R) Platinum 8275CL CPU @ 3.00GHz'
+ cpu_count: 96
+ cpu_ram_mb: 1204539.797504
raw_results/2023-08-10_20:06:29_55db70c63de2c07b6ffe36f24c0e7df8f967e935/llama_1gpu_inference/7/main.log ADDED
@@ -0,0 +1,10 @@
+ [2023-08-10 21:31:46,234][benchmark][INFO] - Configuring inference benchmark
+ [2023-08-10 21:31:46,235][benchmark][INFO] - + Setting seed(42)
+ [2023-08-10 21:31:46,542][pytorch][INFO] - + Infered AutoModel class AutoModelForCausalLM for task text-generation and model_type llama
+ [2023-08-10 21:31:46,543][backend][INFO] - Configuring pytorch backend
+ [2023-08-10 21:31:46,543][backend][INFO] - + Checking initial device isolation
+ [2023-08-10 21:31:46,863][backend][INFO] - + Checking contineous device isolation
+ [2023-08-10 21:31:46,898][pytorch][INFO] - + Disabling gradients
+ [2023-08-10 21:31:46,899][pytorch][INFO] - + Loading pretrained model weights in dtype: float32 on device: cuda
+ [2023-08-10 21:31:47,007][main][ERROR] - Error during benchmarking: CUDA out of memory. Tried to allocate 500.00 MiB (GPU 0; 79.35 GiB total capacity; 17.11 GiB already allocated; 101.12 MiB free; 17.90 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
+ [2023-08-10 21:31:47,007][backend][INFO] - Cleaning backend
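
Runs 1, 3, 5 and 7 above (the float32 variants) all aborted with CUDA out-of-memory errors, so each leaves a main.log but no inference_results.csv. When aggregating raw_results, such failed sub-runs can be flagged with a scan along these lines (a sketch; the glob pattern and error marker are assumptions based on the logs above, not an existing script in this repository):

```python
from pathlib import Path

# Flag benchmark sub-runs whose main.log records an error (e.g. CUDA OOM);
# those directories carry no inference_results.csv to aggregate.
ROOT = Path("raw_results/2023-08-10_20:06:29_55db70c63de2c07b6ffe36f24c0e7df8f967e935")

for log in sorted(ROOT.glob("*/*/main.log")):
    error_lines = [l for l in log.read_text().splitlines() if "[main][ERROR]" in l]
    if error_lines:
        # Print the run directory and the start of the first error message.
        reason = error_lines[0].split(" - ", 1)[-1]
        print(f"FAILED {log.parent.relative_to(ROOT)}: {reason[:80]}")
```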
raw_results/2023-08-10_20:06:29_55db70c63de2c07b6ffe36f24c0e7df8f967e935/pytorch_bert_inference/0/inference_results.csv CHANGED
@@ -1,2 +1,2 @@
  ,forward.peak_memory(MB),forward.latency(s),forward.throughput(samples/s)
- 0,460.652544,0.00386,259.0
+ 0,459.374592,0.00379,264.0
raw_results/2023-08-10_20:06:29_55db70c63de2c07b6ffe36f24c0e7df8f967e935/pytorch_bert_inference/0/main.log CHANGED
@@ -1,20 +1,20 @@
- [2023-08-10 21:23:43,769][benchmark][INFO] - Configuring inference benchmark
- [2023-08-10 21:23:43,769][benchmark][INFO] - + Setting seed(42)
- [2023-08-10 21:23:44,107][pytorch][INFO] - + Infered AutoModel class AutoModelForSequenceClassification for task text-classification and model_type bert
- [2023-08-10 21:23:44,107][backend][INFO] - Configuring pytorch backend
- [2023-08-10 21:23:44,107][backend][INFO] - + Checking initial device isolation
- [2023-08-10 21:23:44,108][backend][INFO] - + Checking contineous device isolation
- [2023-08-10 21:23:44,109][pytorch][INFO] - + Disabling gradients
- [2023-08-10 21:23:44,110][pytorch][INFO] - + Loading pretrained model weights in dtype: None on device: cpu
- [2023-08-10 21:23:44,701][pytorch][INFO] - + Turning on eval mode
- [2023-08-10 21:23:44,702][inference][INFO] - Running inference benchmark
- [2023-08-10 21:23:44,824][dummy_input][INFO] - Generating dummy input for: ['input_ids', 'attention_mask', 'token_type_ids']
- [2023-08-10 21:23:44,825][inference][INFO] - + Tracking forward pass peak memory
- [2023-08-10 21:23:44,879][inference][INFO] - + Forward pass peak memory: 460.652544 (MB)
- [2023-08-10 21:23:44,880][dummy_input][INFO] - Generating dummy input for: ['input_ids', 'attention_mask', 'token_type_ids']
- [2023-08-10 21:23:44,882][inference][INFO] - + Warming up the forward pass
- [2023-08-10 21:23:44,914][inference][INFO] - + Tracking forward pass latency and throughput
- [2023-08-10 21:23:55,009][inference][INFO] - + Forward pass latency: 3.86e-03 (s)
- [2023-08-10 21:23:55,011][inference][INFO] - + Forward pass throughput: 259.00 (samples/s)
- [2023-08-10 21:23:55,012][inference][INFO] - Saving inference results
- [2023-08-10 21:23:55,024][backend][INFO] - Cleaning backend
+ [2023-08-10 21:31:51,183][benchmark][INFO] - Configuring inference benchmark
+ [2023-08-10 21:31:51,184][benchmark][INFO] - + Setting seed(42)
+ [2023-08-10 21:31:51,374][pytorch][INFO] - + Infered AutoModel class AutoModelForSequenceClassification for task text-classification and model_type bert
+ [2023-08-10 21:31:51,374][backend][INFO] - Configuring pytorch backend
+ [2023-08-10 21:31:51,374][backend][INFO] - + Checking initial device isolation
+ [2023-08-10 21:31:51,374][backend][INFO] - + Checking contineous device isolation
+ [2023-08-10 21:31:51,376][pytorch][INFO] - + Disabling gradients
+ [2023-08-10 21:31:51,377][pytorch][INFO] - + Loading pretrained model weights in dtype: None on device: cpu
+ [2023-08-10 21:31:51,961][pytorch][INFO] - + Turning on eval mode
+ [2023-08-10 21:31:51,961][inference][INFO] - Running inference benchmark
+ [2023-08-10 21:31:52,083][dummy_input][INFO] - Generating dummy input for: ['input_ids', 'attention_mask', 'token_type_ids']
+ [2023-08-10 21:31:52,084][inference][INFO] - + Tracking forward pass peak memory
+ [2023-08-10 21:31:52,135][inference][INFO] - + Forward pass peak memory: 459.374592 (MB)
+ [2023-08-10 21:31:52,136][dummy_input][INFO] - Generating dummy input for: ['input_ids', 'attention_mask', 'token_type_ids']
+ [2023-08-10 21:31:52,138][inference][INFO] - + Warming up the forward pass
+ [2023-08-10 21:31:52,169][inference][INFO] - + Tracking forward pass latency and throughput
+ [2023-08-10 21:32:02,266][inference][INFO] - + Forward pass latency: 3.79e-03 (s)
+ [2023-08-10 21:32:02,269][inference][INFO] - + Forward pass throughput: 264.00 (samples/s)
+ [2023-08-10 21:32:02,269][inference][INFO] - Saving inference results
+ [2023-08-10 21:32:02,285][backend][INFO] - Cleaning backend
raw_results/2023-08-10_20:06:29_55db70c63de2c07b6ffe36f24c0e7df8f967e935/pytorch_gpt2_inference/0/inference_results.csv CHANGED
@@ -1,2 +1,2 @@
  ,forward.peak_memory(MB),forward.latency(s),forward.throughput(samples/s),generate.latency(s),generate.throughput(tokens/s)
- 0,464.027648,0.00389,257.0,0.551,181.0
+ 0,463.53203199999996,0.0036,278.0,0.491,204.0
raw_results/2023-08-10_20:06:29_55db70c63de2c07b6ffe36f24c0e7df8f967e935/pytorch_gpt2_inference/0/main.log CHANGED
@@ -1,22 +1,22 @@
- [2023-08-10 21:23:59,496][benchmark][INFO] - Configuring inference benchmark
- [2023-08-10 21:23:59,497][benchmark][INFO] - + Setting seed(42)
- [2023-08-10 21:23:59,681][pytorch][INFO] - + Infered AutoModel class AutoModelForCausalLM for task text-generation and model_type gpt2
- [2023-08-10 21:23:59,681][backend][INFO] - Configuring pytorch backend
- [2023-08-10 21:23:59,681][backend][INFO] - + Checking initial device isolation
- [2023-08-10 21:23:59,681][backend][INFO] - + Checking contineous device isolation
- [2023-08-10 21:23:59,683][pytorch][INFO] - + Disabling gradients
- [2023-08-10 21:23:59,683][pytorch][INFO] - + Loading pretrained model weights in dtype: None on device: cpu
- [2023-08-10 21:24:00,343][pytorch][INFO] - + Turning on eval mode
- [2023-08-10 21:24:00,343][inference][INFO] - Running inference benchmark
- [2023-08-10 21:24:00,548][inference][INFO] - + Tracking forward pass peak memory
- [2023-08-10 21:24:00,599][inference][INFO] - + Forward pass peak memory: 464.027648 (MB)
- [2023-08-10 21:24:00,600][inference][INFO] - + Warming up the forward pass
- [2023-08-10 21:24:00,634][inference][INFO] - + Tracking forward pass latency and throughput
- [2023-08-10 21:24:10,726][inference][INFO] - + Forward pass latency: 3.89e-03 (s)
- [2023-08-10 21:24:10,729][inference][INFO] - + Forward pass throughput: 257.00 (samples/s)
- [2023-08-10 21:24:10,729][inference][INFO] - + Warming up the generation pass
- [2023-08-10 21:24:11,268][inference][INFO] - + Tracking generation latency and throughput
- [2023-08-10 21:24:21,742][inference][INFO] - + Generation pass latency: 5.51e-01 (s)
- [2023-08-10 21:24:21,743][inference][INFO] - + Generation pass throughput: 181.00 (tokens/s)
- [2023-08-10 21:24:21,743][inference][INFO] - Saving inference results
- [2023-08-10 21:24:21,756][backend][INFO] - Cleaning backend
+ [2023-08-10 21:32:06,170][benchmark][INFO] - Configuring inference benchmark
+ [2023-08-10 21:32:06,172][benchmark][INFO] - + Setting seed(42)
+ [2023-08-10 21:32:06,352][pytorch][INFO] - + Infered AutoModel class AutoModelForCausalLM for task text-generation and model_type gpt2
+ [2023-08-10 21:32:06,352][backend][INFO] - Configuring pytorch backend
+ [2023-08-10 21:32:06,352][backend][INFO] - + Checking initial device isolation
+ [2023-08-10 21:32:06,352][backend][INFO] - + Checking contineous device isolation
+ [2023-08-10 21:32:06,354][pytorch][INFO] - + Disabling gradients
+ [2023-08-10 21:32:06,354][pytorch][INFO] - + Loading pretrained model weights in dtype: None on device: cpu
+ [2023-08-10 21:32:06,993][pytorch][INFO] - + Turning on eval mode
+ [2023-08-10 21:32:06,994][inference][INFO] - Running inference benchmark
+ [2023-08-10 21:32:07,195][inference][INFO] - + Tracking forward pass peak memory
+ [2023-08-10 21:32:07,245][inference][INFO] - + Forward pass peak memory: 463.53203199999996 (MB)
+ [2023-08-10 21:32:07,246][inference][INFO] - + Warming up the forward pass
+ [2023-08-10 21:32:07,280][inference][INFO] - + Tracking forward pass latency and throughput
+ [2023-08-10 21:32:17,381][inference][INFO] - + Forward pass latency: 3.60e-03 (s)
+ [2023-08-10 21:32:17,384][inference][INFO] - + Forward pass throughput: 278.00 (samples/s)
+ [2023-08-10 21:32:17,385][inference][INFO] - + Warming up the generation pass
+ [2023-08-10 21:32:17,892][inference][INFO] - + Tracking generation latency and throughput
+ [2023-08-10 21:32:28,205][inference][INFO] - + Generation pass latency: 4.91e-01 (s)
+ [2023-08-10 21:32:28,206][inference][INFO] - + Generation pass throughput: 204.00 (tokens/s)
+ [2023-08-10 21:32:28,206][inference][INFO] - Saving inference results
+ [2023-08-10 21:32:28,221][backend][INFO] - Cleaning backend