Joctor committed on
Commit
fb6a006
1 Parent(s): 4e3dc60

End of training

README.md ADDED
@@ -0,0 +1,58 @@
+ ---
+ library_name: peft
+ tags:
+ - generated_from_trainer
+ base_model: llava-hf/llava-1.5-7b-hf
+ datasets:
+ - imagefolder
+ model-index:
+ - name: llava-hf/llava-1.5-7b-hf
+ results: []
+ ---
+
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
+ # llava-hf/llava-1.5-7b-hf
+
+ This model is a fine-tuned version of [llava-hf/llava-1.5-7b-hf](https://huggingface.co/llava-hf/llava-1.5-7b-hf) on the imagefolder dataset.
+
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
+
+ ## Training and evaluation data
+
+ More information needed
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training:
+ - learning_rate: 0.0002
+ - train_batch_size: 2
+ - eval_batch_size: 8
+ - seed: 42
+ - gradient_accumulation_steps: 4
+ - total_train_batch_size: 8
+ - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+ - lr_scheduler_type: constant
+ - num_epochs: 1
+ - mixed_precision_training: Native AMP
+
+ ### Training results
+
+
+
+ ### Framework versions
+
+ - PEFT 0.10.0
+ - Transformers 4.39.3
+ - Pytorch 2.1.2
+ - Datasets 2.18.0
+ - Tokenizers 0.15.2
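As a quick sanity check on the hyperparameter table above, the reported `total_train_batch_size: 8` follows directly from the per-device batch size and gradient accumulation. A minimal sketch (assuming a single GPU, which matches the Kaggle session in the logs below):

```python
# Values taken from the model card's hyperparameter list.
train_batch_size = 2             # per-device train batch size
gradient_accumulation_steps = 4  # micro-batches accumulated per optimizer step
num_devices = 1                  # assumption: a single-GPU Kaggle run

# Effective batch size seen by each optimizer step.
total_train_batch_size = train_batch_size * gradient_accumulation_steps * num_devices
print(total_train_batch_size)  # 8, matching total_train_batch_size in the card
```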
adapter_config.json ADDED
@@ -0,0 +1,29 @@
+ {
+ "alpha_pattern": {},
+ "auto_mapping": null,
+ "base_model_name_or_path": "llava-hf/llava-1.5-7b-hf",
+ "bias": "none",
+ "fan_in_fan_out": false,
+ "inference_mode": true,
+ "init_lora_weights": true,
+ "layer_replication": null,
+ "layers_pattern": null,
+ "layers_to_transform": null,
+ "loftq_config": {},
+ "lora_alpha": 32,
+ "lora_dropout": 0.1,
+ "megatron_config": null,
+ "megatron_core": "megatron.core",
+ "modules_to_save": null,
+ "peft_type": "LORA",
+ "r": 8,
+ "rank_pattern": {},
+ "revision": null,
+ "target_modules": [
+ "v_proj",
+ "q_proj"
+ ],
+ "task_type": "CAUSAL_LM",
+ "use_dora": false,
+ "use_rslora": false
+ }
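The adapter config above implies a small trainable footprint. A back-of-the-envelope count, assuming (as PEFT's name-based matching would do for LLaVA-1.5) that `q_proj`/`v_proj` are wrapped in both the 32 LLaMA decoder layers (hidden size 4096) and the 24 CLIP vision-tower layers (hidden size 1024), with `r: 8`:

```python
r = 8  # LoRA rank from adapter_config.json

def lora_params(hidden: int, layers: int, modules: int = 2) -> int:
    # Each wrapped square Linear gets two factors: A (r x hidden) and B (hidden x r).
    return layers * modules * 2 * r * hidden

text = lora_params(hidden=4096, layers=32)    # LLaMA-7B language model
vision = lora_params(hidden=1024, layers=24)  # CLIP ViT-L/336 vision tower
total = text + vision
print(total)      # 4,980,736 trainable parameters
print(total * 4)  # 19,922,944 bytes in fp32
```

At 4 bytes per fp32 parameter this comes to about 19.9 MB, which is consistent with the 19,957,360-byte `adapter_model.safetensors` added in this commit (the remainder being the safetensors header).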
adapter_model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:26744900c060e7e706c9e90367e738ce6060ebf2f262888bedc9907a053f2cc2
+ size 19957360
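The three lines committed for `adapter_model.safetensors` are not the weights themselves but a Git LFS pointer: `key value` pairs giving the spec version, the SHA-256 of the real content, and its size in bytes. A minimal parser sketch for such pointer files:

```python
def parse_lfs_pointer(text: str) -> dict:
    """Parse a Git LFS pointer file into a dict of its key/value fields."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")  # split on the first space only
        fields[key] = value
    return fields

pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:26744900c060e7e706c9e90367e738ce6060ebf2f262888bedc9907a053f2cc2
size 19957360"""

info = parse_lfs_pointer(pointer)
print(info["size"])  # 19957360
```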
runs/May16_07-00-48_cc8faff11463/events.out.tfevents.1715842849.cc8faff11463.34.0 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d6f15e5ac4ad2ad0db43313180d688cc7c5e09a8a1dc4e2b5ec0383e87cd91bc
+ size 15032
training_args.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d32ed4389cb7bb0f25f2133648f8dafc6556bf91207714235dd650c61b42ff29
+ size 4920
wandb/debug-internal.log ADDED
The diff for this file is too large to render. See raw diff
 
wandb/debug.log ADDED
@@ -0,0 +1,50 @@
+ 2024-05-16 07:00:49,430 INFO MainThread:34 [wandb_setup.py:_flush():76] Current SDK version is 0.16.6
+ 2024-05-16 07:00:49,431 INFO MainThread:34 [wandb_setup.py:_flush():76] Configure stats pid to 34
+ 2024-05-16 07:00:49,431 INFO MainThread:34 [wandb_setup.py:_flush():76] Loading settings from /root/.config/wandb/settings
+ 2024-05-16 07:00:49,431 INFO MainThread:34 [wandb_setup.py:_flush():76] Loading settings from /kaggle/working/wandb/settings
+ 2024-05-16 07:00:49,431 INFO MainThread:34 [wandb_setup.py:_flush():76] Loading settings from environment variables: {}
+ 2024-05-16 07:00:49,431 INFO MainThread:34 [wandb_setup.py:_flush():76] Inferring run settings from compute environment: {'program': '<python with no main file>'}
+ 2024-05-16 07:00:49,431 INFO MainThread:34 [wandb_setup.py:_flush():76] Applying login settings: {'api_key': '***REDACTED***'}
+ 2024-05-16 07:00:49,431 INFO MainThread:34 [wandb_setup.py:_flush():76] Applying login settings: {'api_key': '***REDACTED***'}
+ 2024-05-16 07:00:49,431 INFO MainThread:34 [wandb_setup.py:_flush():76] Applying login settings: {}
+ 2024-05-16 07:00:49,431 INFO MainThread:34 [wandb_init.py:_log_setup():521] Logging user logs to /kaggle/working/wandb/run-20240516_070049-8ecq2cdf/logs/debug.log
+ 2024-05-16 07:00:49,431 INFO MainThread:34 [wandb_init.py:_log_setup():522] Logging internal logs to /kaggle/working/wandb/run-20240516_070049-8ecq2cdf/logs/debug-internal.log
+ 2024-05-16 07:00:49,431 INFO MainThread:34 [wandb_init.py:_jupyter_setup():467] configuring jupyter hooks <wandb.sdk.wandb_init._WandbInit object at 0x7f2ce0184cd0>
+ 2024-05-16 07:00:49,432 INFO MainThread:34 [wandb_init.py:init():561] calling init triggers
+ 2024-05-16 07:00:49,432 INFO MainThread:34 [wandb_init.py:init():568] wandb.init called with sweep_config: {}
+ config: {}
+ 2024-05-16 07:00:49,432 INFO MainThread:34 [wandb_init.py:init():611] starting backend
+ 2024-05-16 07:00:49,432 INFO MainThread:34 [wandb_init.py:init():615] setting up manager
+ 2024-05-16 07:00:49,434 INFO MainThread:34 [backend.py:_multiprocessing_setup():105] multiprocessing start_methods=fork,spawn,forkserver, using: spawn
+ 2024-05-16 07:00:49,438 INFO MainThread:34 [wandb_init.py:init():623] backend started and connected
+ 2024-05-16 07:00:49,449 INFO MainThread:34 [wandb_run.py:_label_probe_notebook():1299] probe notebook
+ 2024-05-16 07:00:49,786 INFO MainThread:34 [wandb_init.py:init():715] updated telemetry
+ 2024-05-16 07:00:49,791 INFO MainThread:34 [wandb_init.py:init():748] communicating run to backend with 90.0 second timeout
+ 2024-05-16 07:00:49,943 INFO MainThread:34 [wandb_run.py:_on_init():2357] communicating current version
+ 2024-05-16 07:00:50,031 INFO MainThread:34 [wandb_run.py:_on_init():2366] got version response upgrade_message: "wandb version 0.17.0 is available! To upgrade, please run:\n $ pip install wandb --upgrade"
+
+ 2024-05-16 07:00:50,033 INFO MainThread:34 [wandb_init.py:init():799] starting run threads in backend
+ 2024-05-16 07:01:06,114 INFO MainThread:34 [wandb_run.py:_console_start():2335] atexit reg
+ 2024-05-16 07:01:06,114 INFO MainThread:34 [wandb_run.py:_redirect():2190] redirect: wrap_raw
+ 2024-05-16 07:01:06,114 INFO MainThread:34 [wandb_run.py:_redirect():2255] Wrapping output streams.
+ 2024-05-16 07:01:06,114 INFO MainThread:34 [wandb_run.py:_redirect():2280] Redirects installed.
+ 2024-05-16 07:01:06,116 INFO MainThread:34 [wandb_init.py:init():842] run started, returning control to user process
+ 2024-05-16 07:01:06,122 INFO MainThread:34 [wandb_run.py:_config_callback():1347] config_cb None None {'ignore_index': -100, 'image_token_index': 32000, 'projector_hidden_act': 'gelu', 'vision_feature_select_strategy': 'default', 'vision_feature_layer': -2, 'vision_config': {'return_dict': True, 'output_hidden_states': False, 'output_attentions': False, 'torchscript': False, 'torch_dtype': None, 'use_bfloat16': False, 'tf_legacy_loss': False, 'pruned_heads': {}, 'tie_word_embeddings': True, 'chunk_size_feed_forward': 0, 'is_encoder_decoder': False, 'is_decoder': False, 'cross_attention_hidden_size': None, 'add_cross_attention': False, 'tie_encoder_decoder': False, 'max_length': 20, 'min_length': 0, 'do_sample': False, 'early_stopping': False, 'num_beams': 1, 'num_beam_groups': 1, 'diversity_penalty': 0.0, 'temperature': 1.0, 'top_k': 50, 'top_p': 1.0, 'typical_p': 1.0, 'repetition_penalty': 1.0, 'length_penalty': 1.0, 'no_repeat_ngram_size': 0, 'encoder_no_repeat_ngram_size': 0, 'bad_words_ids': None, 'num_return_sequences': 1, 'output_scores': False, 'return_dict_in_generate': False, 'forced_bos_token_id': None, 'forced_eos_token_id': None, 'remove_invalid_values': False, 'exponential_decay_length_penalty': None, 'suppress_tokens': None, 'begin_suppress_tokens': None, 'architectures': None, 'finetuning_task': None, 'id2label': {0: 'LABEL_0', 1: 'LABEL_1'}, 'label2id': {'LABEL_0': 0, 'LABEL_1': 1}, 'tokenizer_class': None, 'prefix': None, 'bos_token_id': None, 'pad_token_id': None, 'eos_token_id': None, 'sep_token_id': None, 'decoder_start_token_id': None, 'task_specific_params': None, 'problem_type': None, '_name_or_path': '', 'model_type': 'clip_vision_model', 'vocab_size': 32000, 'hidden_size': 1024, 'intermediate_size': 4096, 'projection_dim': 768, 'num_hidden_layers': 24, 'num_attention_heads': 16, 'num_channels': 3, 'patch_size': 14, 'image_size': 336, 'initializer_range': 0.02, 'initializer_factor': 1.0, 'attention_dropout': 0.0, 'layer_norm_eps': 1e-05, 
'hidden_act': 'quick_gelu'}, 'text_config': {'vocab_size': 32064, 'max_position_embeddings': 4096, 'hidden_size': 4096, 'intermediate_size': 11008, 'num_hidden_layers': 32, 'num_attention_heads': 32, 'num_key_value_heads': 32, 'hidden_act': 'silu', 'initializer_range': 0.02, 'rms_norm_eps': 1e-05, 'pretraining_tp': 1, 'use_cache': True, 'rope_theta': 10000.0, 'rope_scaling': None, 'attention_bias': False, 'attention_dropout': 0.0, 'return_dict': True, 'output_hidden_states': False, 'output_attentions': False, 'torchscript': False, 'torch_dtype': 'float16', 'use_bfloat16': False, 'tf_legacy_loss': False, 'pruned_heads': {}, 'tie_word_embeddings': False, 'chunk_size_feed_forward': 0, 'is_encoder_decoder': False, 'is_decoder': False, 'cross_attention_hidden_size': None, 'add_cross_attention': False, 'tie_encoder_decoder': False, 'max_length': 20, 'min_length': 0, 'do_sample': False, 'early_stopping': False, 'num_beams': 1, 'num_beam_groups': 1, 'diversity_penalty': 0.0, 'temperature': 1.0, 'top_k': 50, 'top_p': 1.0, 'typical_p': 1.0, 'repetition_penalty': 1.0, 'length_penalty': 1.0, 'no_repeat_ngram_size': 0, 'encoder_no_repeat_ngram_size': 0, 'bad_words_ids': None, 'num_return_sequences': 1, 'output_scores': False, 'return_dict_in_generate': False, 'forced_bos_token_id': None, 'forced_eos_token_id': None, 'remove_invalid_values': False, 'exponential_decay_length_penalty': None, 'suppress_tokens': None, 'begin_suppress_tokens': None, 'architectures': ['LlamaForCausalLM'], 'finetuning_task': None, 'id2label': {0: 'LABEL_0', 1: 'LABEL_1'}, 'label2id': {'LABEL_0': 0, 'LABEL_1': 1}, 'tokenizer_class': None, 'prefix': None, 'bos_token_id': 1, 'pad_token_id': None, 'eos_token_id': 2, 'sep_token_id': None, 'decoder_start_token_id': None, 'task_specific_params': None, 'problem_type': None, '_name_or_path': 'lmsys/vicuna-7b-v1.5', 'model_type': 'llama'}, 'return_dict': True, 'output_hidden_states': False, 'output_attentions': False, 'torchscript': False, 'torch_dtype': 
'bfloat16', 'use_bfloat16': False, 'tf_legacy_loss': False, 'pruned_heads': {}, 'tie_word_embeddings': False, 'chunk_size_feed_forward': 0, 'is_encoder_decoder': False, 'is_decoder': False, 'cross_attention_hidden_size': None, 'add_cross_attention': False, 'tie_encoder_decoder': False, 'max_length': 20, 'min_length': 0, 'do_sample': False, 'early_stopping': False, 'num_beams': 1, 'num_beam_groups': 1, 'diversity_penalty': 0.0, 'temperature': 1.0, 'top_k': 50, 'top_p': 1.0, 'typical_p': 1.0, 'repetition_penalty': 1.0, 'length_penalty': 1.0, 'no_repeat_ngram_size': 0, 'encoder_no_repeat_ngram_size': 0, 'bad_words_ids': None, 'num_return_sequences': 1, 'output_scores': False, 'return_dict_in_generate': False, 'forced_bos_token_id': None, 'forced_eos_token_id': None, 'remove_invalid_values': False, 'exponential_decay_length_penalty': None, 'suppress_tokens': None, 'begin_suppress_tokens': None, 'architectures': ['LlavaForConditionalGeneration'], 'finetuning_task': None, 'id2label': {0: 'LABEL_0', 1: 'LABEL_1'}, 'label2id': {'LABEL_0': 0, 'LABEL_1': 1}, 'tokenizer_class': None, 'prefix': None, 'bos_token_id': None, 'pad_token_id': 32001, 'eos_token_id': None, 'sep_token_id': None, 'decoder_start_token_id': None, 'task_specific_params': None, 'problem_type': None, '_name_or_path': 'llava-hf/llava-1.5-7b-hf', 'transformers_version': '4.39.3', 'model_type': 'llava', 'quantization_config': {'quant_method': 'QuantizationMethod.BITS_AND_BYTES', '_load_in_8bit': False, '_load_in_4bit': True, 'llm_int8_threshold': 6.0, 'llm_int8_skip_modules': None, 'llm_int8_enable_fp32_cpu_offload': False, 'llm_int8_has_fp16_weight': False, 'bnb_4bit_quant_type': 'nf4', 'bnb_4bit_use_double_quant': True, 'bnb_4bit_compute_dtype': 'bfloat16', 'bnb_4bit_quant_storage': 'uint8', 'load_in_4bit': True, 'load_in_8bit': False}, 'output_dir': './', 'overwrite_output_dir': False, 'do_train': False, 'do_eval': False, 'do_predict': False, 'evaluation_strategy': 'no', 'prediction_loss_only': False, 
'per_device_train_batch_size': 2, 'per_device_eval_batch_size': 8, 'per_gpu_train_batch_size': None, 'per_gpu_eval_batch_size': None, 'gradient_accumulation_steps': 4, 'eval_accumulation_steps': None, 'eval_delay': 0, 'learning_rate': 0.0002, 'weight_decay': 0.0, 'adam_beta1': 0.9, 'adam_beta2': 0.999, 'adam_epsilon': 1e-08, 'max_grad_norm': 1.0, 'num_train_epochs': 1, 'max_steps': -1, 'lr_scheduler_type': 'constant', 'lr_scheduler_kwargs': {}, 'warmup_ratio': 0.0, 'warmup_steps': 0, 'log_level': 'passive', 'log_level_replica': 'warning', 'log_on_each_node': True, 'logging_dir': './runs/May16_07-00-48_cc8faff11463', 'logging_strategy': 'steps', 'logging_first_step': False, 'logging_steps': 1, 'logging_nan_inf_filter': True, 'save_strategy': 'epoch', 'save_steps': 500, 'save_total_limit': 3, 'save_safetensors': True, 'save_on_each_node': False, 'save_only_model': False, 'no_cuda': False, 'use_cpu': False, 'use_mps_device': False, 'seed': 42, 'data_seed': None, 'jit_mode_eval': False, 'use_ipex': False, 'bf16': False, 'fp16': True, 'fp16_opt_level': 'O1', 'half_precision_backend': 'auto', 'bf16_full_eval': False, 'fp16_full_eval': False, 'tf32': None, 'local_rank': 0, 'ddp_backend': None, 'tpu_num_cores': None, 'tpu_metrics_debug': False, 'debug': [], 'dataloader_drop_last': False, 'eval_steps': None, 'dataloader_num_workers': 0, 'dataloader_prefetch_factor': None, 'past_index': -1, 'run_name': './', 'disable_tqdm': False, 'remove_unused_columns': False, 'label_names': None, 'load_best_model_at_end': False, 'metric_for_best_model': None, 'greater_is_better': None, 'ignore_data_skip': False, 'fsdp': [], 'fsdp_min_num_params': 0, 'fsdp_config': {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}, 'fsdp_transformer_layer_cls_to_wrap': None, 'accelerator_config': {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True}, 'deepspeed': None, 'label_smoothing_factor': 0.0, 'optim': 
'adamw_bnb_8bit', 'optim_args': None, 'adafactor': False, 'group_by_length': False, 'length_column_name': 'length', 'report_to': ['tensorboard', 'wandb'], 'ddp_find_unused_parameters': None, 'ddp_bucket_cap_mb': None, 'ddp_broadcast_buffers': None, 'dataloader_pin_memory': True, 'dataloader_persistent_workers': False, 'skip_memory_metrics': True, 'use_legacy_prediction_loop': False, 'push_to_hub': False, 'resume_from_checkpoint': None, 'hub_model_id': None, 'hub_strategy': 'every_save', 'hub_token': '<HUB_TOKEN>', 'hub_private_repo': False, 'hub_always_push': False, 'gradient_checkpointing': False, 'gradient_checkpointing_kwargs': None, 'include_inputs_for_metrics': False, 'fp16_backend': 'auto', 'push_to_hub_model_id': None, 'push_to_hub_organization': None, 'push_to_hub_token': '<PUSH_TO_HUB_TOKEN>', 'mp_parameters': '', 'auto_find_batch_size': False, 'full_determinism': False, 'torchdynamo': None, 'ray_scope': 'last', 'ddp_timeout': 1800, 'torch_compile': False, 'torch_compile_backend': None, 'torch_compile_mode': None, 'dispatch_batches': None, 'split_batches': None, 'include_tokens_per_second': False, 'include_num_input_tokens_seen': False, 'neftune_noise_alpha': None, 'optim_target_modules': None}
+ 2024-05-16 07:16:16,067 INFO MainThread:34 [jupyter.py:save_ipynb():373] not saving jupyter notebook
+ 2024-05-16 07:16:16,068 INFO MainThread:34 [wandb_init.py:_pause_backend():432] pausing backend
+ 2024-05-16 07:16:16,075 INFO MainThread:34 [wandb_init.py:_resume_backend():437] resuming backend
+ 2024-05-16 07:16:18,184 INFO MainThread:34 [jupyter.py:save_ipynb():373] not saving jupyter notebook
+ 2024-05-16 07:16:18,184 INFO MainThread:34 [wandb_init.py:_pause_backend():432] pausing backend
+ 2024-05-16 07:18:56,636 INFO MainThread:34 [wandb_init.py:_resume_backend():437] resuming backend
+ 2024-05-16 07:18:56,684 INFO MainThread:34 [jupyter.py:save_ipynb():373] not saving jupyter notebook
+ 2024-05-16 07:18:56,684 INFO MainThread:34 [wandb_init.py:_pause_backend():432] pausing backend
+ 2024-05-16 07:19:43,460 INFO MainThread:34 [wandb_init.py:_resume_backend():437] resuming backend
+ 2024-05-16 07:19:43,507 INFO MainThread:34 [jupyter.py:save_ipynb():373] not saving jupyter notebook
+ 2024-05-16 07:19:43,507 INFO MainThread:34 [wandb_init.py:_pause_backend():432] pausing backend
+ 2024-05-16 07:19:54,270 INFO MainThread:34 [wandb_init.py:_resume_backend():437] resuming backend
+ 2024-05-16 07:19:54,320 INFO MainThread:34 [jupyter.py:save_ipynb():373] not saving jupyter notebook
+ 2024-05-16 07:19:54,320 INFO MainThread:34 [wandb_init.py:_pause_backend():432] pausing backend
+ 2024-05-16 07:20:06,268 INFO MainThread:34 [wandb_init.py:_resume_backend():437] resuming backend
+ 2024-05-16 07:20:06,319 INFO MainThread:34 [jupyter.py:save_ipynb():373] not saving jupyter notebook
+ 2024-05-16 07:20:06,319 INFO MainThread:34 [wandb_init.py:_pause_backend():432] pausing backend
+ 2024-05-16 07:20:19,142 INFO MainThread:34 [wandb_init.py:_resume_backend():437] resuming backend
wandb/run-20240516_070049-8ecq2cdf/files/conda-environment.yaml ADDED
File without changes
wandb/run-20240516_070049-8ecq2cdf/files/config.yaml ADDED
@@ -0,0 +1,809 @@
1
+ wandb_version: 1
2
+
3
+ _wandb:
4
+ desc: null
5
+ value:
6
+ python_version: 3.10.13
7
+ cli_version: 0.16.6
8
+ framework: huggingface
9
+ huggingface_version: 4.39.3
10
+ is_jupyter_run: true
11
+ is_kaggle_kernel: true
12
+ start_time: 1715842849.0
13
+ t:
14
+ 1:
15
+ - 1
16
+ - 2
17
+ - 3
18
+ - 5
19
+ - 11
20
+ - 12
21
+ - 49
22
+ - 51
23
+ - 53
24
+ - 55
25
+ - 71
26
+ - 84
27
+ - 98
28
+ - 105
29
+ 2:
30
+ - 1
31
+ - 2
32
+ - 3
33
+ - 5
34
+ - 11
35
+ - 12
36
+ - 49
37
+ - 51
38
+ - 53
39
+ - 55
40
+ - 71
41
+ - 84
42
+ - 98
43
+ - 105
44
+ 3:
45
+ - 7
46
+ - 23
47
+ - 62
48
+ 4: 3.10.13
49
+ 5: 0.16.6
50
+ 6: 4.39.3
51
+ 8:
52
+ - 1
53
+ - 2
54
+ - 5
55
+ 9:
56
+ 1: transformers_trainer
57
+ 13: linux-x86_64
58
+ m:
59
+ - 1: train/global_step
60
+ 6:
61
+ - 3
62
+ - 1: train/loss
63
+ 5: 1
64
+ 6:
65
+ - 1
66
+ - 1: train/grad_norm
67
+ 5: 1
68
+ 6:
69
+ - 1
70
+ - 1: train/learning_rate
71
+ 5: 1
72
+ 6:
73
+ - 1
74
+ - 1: train/epoch
75
+ 5: 1
76
+ 6:
77
+ - 1
78
+ ignore_index:
79
+ desc: null
80
+ value: -100
81
+ image_token_index:
82
+ desc: null
83
+ value: 32000
84
+ projector_hidden_act:
85
+ desc: null
86
+ value: gelu
87
+ vision_feature_select_strategy:
88
+ desc: null
89
+ value: default
90
+ vision_feature_layer:
91
+ desc: null
92
+ value: -2
93
+ vision_config:
94
+ desc: null
95
+ value:
96
+ return_dict: true
97
+ output_hidden_states: false
98
+ output_attentions: false
99
+ torchscript: false
100
+ torch_dtype: null
101
+ use_bfloat16: false
102
+ tf_legacy_loss: false
103
+ pruned_heads: {}
104
+ tie_word_embeddings: true
105
+ chunk_size_feed_forward: 0
106
+ is_encoder_decoder: false
107
+ is_decoder: false
108
+ cross_attention_hidden_size: null
109
+ add_cross_attention: false
110
+ tie_encoder_decoder: false
111
+ max_length: 20
112
+ min_length: 0
113
+ do_sample: false
114
+ early_stopping: false
115
+ num_beams: 1
116
+ num_beam_groups: 1
117
+ diversity_penalty: 0.0
118
+ temperature: 1.0
119
+ top_k: 50
120
+ top_p: 1.0
121
+ typical_p: 1.0
122
+ repetition_penalty: 1.0
123
+ length_penalty: 1.0
124
+ no_repeat_ngram_size: 0
125
+ encoder_no_repeat_ngram_size: 0
126
+ bad_words_ids: null
127
+ num_return_sequences: 1
128
+ output_scores: false
129
+ return_dict_in_generate: false
130
+ forced_bos_token_id: null
131
+ forced_eos_token_id: null
132
+ remove_invalid_values: false
133
+ exponential_decay_length_penalty: null
134
+ suppress_tokens: null
135
+ begin_suppress_tokens: null
136
+ architectures: null
137
+ finetuning_task: null
138
+ id2label:
139
+ '0': LABEL_0
140
+ '1': LABEL_1
141
+ label2id:
142
+ LABEL_0: 0
143
+ LABEL_1: 1
144
+ tokenizer_class: null
145
+ prefix: null
146
+ bos_token_id: null
147
+ pad_token_id: null
148
+ eos_token_id: null
149
+ sep_token_id: null
150
+ decoder_start_token_id: null
151
+ task_specific_params: null
152
+ problem_type: null
153
+ _name_or_path: ''
154
+ model_type: clip_vision_model
155
+ vocab_size: 32000
156
+ hidden_size: 1024
157
+ intermediate_size: 4096
158
+ projection_dim: 768
159
+ num_hidden_layers: 24
160
+ num_attention_heads: 16
161
+ num_channels: 3
162
+ patch_size: 14
163
+ image_size: 336
164
+ initializer_range: 0.02
165
+ initializer_factor: 1.0
166
+ attention_dropout: 0.0
167
+ layer_norm_eps: 1.0e-05
168
+ hidden_act: quick_gelu
169
+ text_config:
170
+ desc: null
171
+ value:
172
+ vocab_size: 32064
173
+ max_position_embeddings: 4096
174
+ hidden_size: 4096
175
+ intermediate_size: 11008
176
+ num_hidden_layers: 32
177
+ num_attention_heads: 32
178
+ num_key_value_heads: 32
179
+ hidden_act: silu
180
+ initializer_range: 0.02
181
+ rms_norm_eps: 1.0e-05
182
+ pretraining_tp: 1
183
+ use_cache: true
184
+ rope_theta: 10000.0
185
+ rope_scaling: null
186
+ attention_bias: false
187
+ attention_dropout: 0.0
188
+ return_dict: true
189
+ output_hidden_states: false
190
+ output_attentions: false
191
+ torchscript: false
192
+ torch_dtype: float16
193
+ use_bfloat16: false
194
+ tf_legacy_loss: false
195
+ pruned_heads: {}
196
+ tie_word_embeddings: false
197
+ chunk_size_feed_forward: 0
198
+ is_encoder_decoder: false
199
+ is_decoder: false
200
+ cross_attention_hidden_size: null
201
+ add_cross_attention: false
202
+ tie_encoder_decoder: false
203
+ max_length: 20
204
+ min_length: 0
205
+ do_sample: false
206
+ early_stopping: false
207
+ num_beams: 1
208
+ num_beam_groups: 1
209
+ diversity_penalty: 0.0
210
+ temperature: 1.0
211
+ top_k: 50
212
+ top_p: 1.0
213
+ typical_p: 1.0
214
+ repetition_penalty: 1.0
215
+ length_penalty: 1.0
216
+ no_repeat_ngram_size: 0
217
+ encoder_no_repeat_ngram_size: 0
218
+ bad_words_ids: null
219
+ num_return_sequences: 1
220
+ output_scores: false
221
+ return_dict_in_generate: false
222
+ forced_bos_token_id: null
223
+ forced_eos_token_id: null
224
+ remove_invalid_values: false
225
+ exponential_decay_length_penalty: null
226
+ suppress_tokens: null
227
+ begin_suppress_tokens: null
228
+ architectures:
229
+ - LlamaForCausalLM
230
+ finetuning_task: null
231
+ id2label:
232
+ '0': LABEL_0
233
+ '1': LABEL_1
234
+ label2id:
235
+ LABEL_0: 0
236
+ LABEL_1: 1
237
+ tokenizer_class: null
238
+ prefix: null
239
+ bos_token_id: 1
240
+ pad_token_id: null
241
+ eos_token_id: 2
242
+ sep_token_id: null
243
+ decoder_start_token_id: null
244
+ task_specific_params: null
245
+ problem_type: null
246
+ _name_or_path: lmsys/vicuna-7b-v1.5
247
+ model_type: llama
248
+ return_dict:
249
+ desc: null
250
+ value: true
251
+ output_hidden_states:
252
+ desc: null
253
+ value: false
254
+ output_attentions:
255
+ desc: null
256
+ value: false
257
+ torchscript:
258
+ desc: null
259
+ value: false
260
+ torch_dtype:
261
+ desc: null
262
+ value: bfloat16
263
+ use_bfloat16:
264
+ desc: null
265
+ value: false
266
+ tf_legacy_loss:
267
+ desc: null
268
+ value: false
269
+ pruned_heads:
270
+ desc: null
271
+ value: {}
272
+ tie_word_embeddings:
273
+ desc: null
274
+ value: false
275
+ chunk_size_feed_forward:
276
+ desc: null
277
+ value: 0
278
+ is_encoder_decoder:
279
+ desc: null
280
+ value: false
281
+ is_decoder:
282
+ desc: null
283
+ value: false
284
+ cross_attention_hidden_size:
285
+ desc: null
286
+ value: null
287
+ add_cross_attention:
288
+ desc: null
289
+ value: false
290
+ tie_encoder_decoder:
291
+ desc: null
292
+ value: false
293
+ max_length:
294
+ desc: null
295
+ value: 20
296
+ min_length:
297
+ desc: null
298
+ value: 0
299
+ do_sample:
300
+ desc: null
301
+ value: false
302
+ early_stopping:
303
+ desc: null
304
+ value: false
305
+ num_beams:
306
+ desc: null
307
+ value: 1
308
+ num_beam_groups:
309
+ desc: null
310
+ value: 1
311
+ diversity_penalty:
312
+ desc: null
313
+ value: 0.0
314
+ temperature:
315
+ desc: null
316
+ value: 1.0
317
+ top_k:
318
+ desc: null
319
+ value: 50
320
+ top_p:
321
+ desc: null
322
+ value: 1.0
323
+ typical_p:
324
+ desc: null
325
+ value: 1.0
326
+ repetition_penalty:
327
+ desc: null
328
+ value: 1.0
329
+ length_penalty:
330
+ desc: null
331
+ value: 1.0
332
+ no_repeat_ngram_size:
333
+ desc: null
334
+ value: 0
335
+ encoder_no_repeat_ngram_size:
336
+ desc: null
337
+ value: 0
338
+ bad_words_ids:
339
+ desc: null
340
+ value: null
341
+ num_return_sequences:
342
+ desc: null
343
+ value: 1
344
+ output_scores:
345
+ desc: null
346
+ value: false
347
+ return_dict_in_generate:
348
+ desc: null
349
+ value: false
350
+ forced_bos_token_id:
351
+ desc: null
352
+ value: null
353
+ forced_eos_token_id:
354
+ desc: null
355
+ value: null
356
+ remove_invalid_values:
357
+ desc: null
358
+ value: false
359
+ exponential_decay_length_penalty:
360
+ desc: null
361
+ value: null
362
+ suppress_tokens:
363
+ desc: null
364
+ value: null
365
+ begin_suppress_tokens:
366
+ desc: null
367
+ value: null
368
+ architectures:
369
+ desc: null
370
+ value:
371
+ - LlavaForConditionalGeneration
372
+ finetuning_task:
373
+ desc: null
374
+ value: null
375
+ id2label:
376
+ desc: null
377
+ value:
378
+ '0': LABEL_0
379
+ '1': LABEL_1
380
+ label2id:
381
+ desc: null
382
+ value:
383
+ LABEL_0: 0
384
+ LABEL_1: 1
385
+ tokenizer_class:
386
+ desc: null
387
+ value: null
388
+ prefix:
389
+ desc: null
390
+ value: null
391
+ bos_token_id:
392
+ desc: null
393
+ value: null
394
+ pad_token_id:
395
+ desc: null
396
+ value: 32001
397
+ eos_token_id:
398
+ desc: null
399
+ value: null
400
+ sep_token_id:
401
+ desc: null
402
+ value: null
403
+ decoder_start_token_id:
404
+ desc: null
405
+ value: null
406
+ task_specific_params:
407
+ desc: null
408
+ value: null
409
+ problem_type:
410
+ desc: null
411
+ value: null
412
+ _name_or_path:
413
+ desc: null
414
+ value: llava-hf/llava-1.5-7b-hf
415
+ transformers_version:
416
+ desc: null
417
+ value: 4.39.3
418
+ model_type:
419
+ desc: null
420
+ value: llava
421
+ quantization_config:
422
+ desc: null
423
+ value:
424
+ quant_method: QuantizationMethod.BITS_AND_BYTES
425
+ _load_in_8bit: false
426
+ _load_in_4bit: true
427
+ llm_int8_threshold: 6.0
428
+ llm_int8_skip_modules: null
429
+ llm_int8_enable_fp32_cpu_offload: false
430
+ llm_int8_has_fp16_weight: false
431
+ bnb_4bit_quant_type: nf4
432
+ bnb_4bit_use_double_quant: true
433
+ bnb_4bit_compute_dtype: bfloat16
434
+ bnb_4bit_quant_storage: uint8
435
+ load_in_4bit: true
436
+ load_in_8bit: false
437
+ output_dir:
438
+ desc: null
439
+ value: ./
440
+ overwrite_output_dir:
441
+ desc: null
442
+ value: false
443
+ do_train:
444
+ desc: null
445
+ value: false
446
+ do_eval:
447
+ desc: null
448
+ value: false
449
+ do_predict:
450
+ desc: null
451
+ value: false
452
+ evaluation_strategy:
453
+ desc: null
454
+ value: 'no'
455
+ prediction_loss_only:
456
+ desc: null
457
+ value: false
458
+ per_device_train_batch_size:
459
+ desc: null
460
+ value: 2
461
+ per_device_eval_batch_size:
462
+ desc: null
463
+ value: 8
464
+ per_gpu_train_batch_size:
465
+ desc: null
466
+ value: null
467
+ per_gpu_eval_batch_size:
468
+ desc: null
469
+ value: null
470
+ gradient_accumulation_steps:
471
+ desc: null
472
+ value: 4
473
+ eval_accumulation_steps:
474
+ desc: null
475
+ value: null
476
+ eval_delay:
477
+ desc: null
478
+   value: 0
+ learning_rate:
+   desc: null
+   value: 0.0002
+ weight_decay:
+   desc: null
+   value: 0.0
+ adam_beta1:
+   desc: null
+   value: 0.9
+ adam_beta2:
+   desc: null
+   value: 0.999
+ adam_epsilon:
+   desc: null
+   value: 1.0e-08
+ max_grad_norm:
+   desc: null
+   value: 1.0
+ num_train_epochs:
+   desc: null
+   value: 1
+ max_steps:
+   desc: null
+   value: -1
+ lr_scheduler_type:
+   desc: null
+   value: constant
+ lr_scheduler_kwargs:
+   desc: null
+   value: {}
+ warmup_ratio:
+   desc: null
+   value: 0.0
+ warmup_steps:
+   desc: null
+   value: 0
+ log_level:
+   desc: null
+   value: passive
+ log_level_replica:
+   desc: null
+   value: warning
+ log_on_each_node:
+   desc: null
+   value: true
+ logging_dir:
+   desc: null
+   value: ./runs/May16_07-00-48_cc8faff11463
+ logging_strategy:
+   desc: null
+   value: steps
+ logging_first_step:
+   desc: null
+   value: false
+ logging_steps:
+   desc: null
+   value: 1
+ logging_nan_inf_filter:
+   desc: null
+   value: true
+ save_strategy:
+   desc: null
+   value: epoch
+ save_steps:
+   desc: null
+   value: 500
+ save_total_limit:
+   desc: null
+   value: 3
+ save_safetensors:
+   desc: null
+   value: true
+ save_on_each_node:
+   desc: null
+   value: false
+ save_only_model:
+   desc: null
+   value: false
+ no_cuda:
+   desc: null
+   value: false
+ use_cpu:
+   desc: null
+   value: false
+ use_mps_device:
+   desc: null
+   value: false
+ seed:
+   desc: null
+   value: 42
+ data_seed:
+   desc: null
+   value: null
+ jit_mode_eval:
+   desc: null
+   value: false
+ use_ipex:
+   desc: null
+   value: false
+ bf16:
+   desc: null
+   value: false
+ fp16:
+   desc: null
+   value: true
+ fp16_opt_level:
+   desc: null
+   value: O1
+ half_precision_backend:
+   desc: null
+   value: auto
+ bf16_full_eval:
+   desc: null
+   value: false
+ fp16_full_eval:
+   desc: null
+   value: false
+ tf32:
+   desc: null
+   value: null
+ local_rank:
+   desc: null
+   value: 0
+ ddp_backend:
+   desc: null
+   value: null
+ tpu_num_cores:
+   desc: null
+   value: null
+ tpu_metrics_debug:
+   desc: null
+   value: false
+ debug:
+   desc: null
+   value: []
+ dataloader_drop_last:
+   desc: null
+   value: false
+ eval_steps:
+   desc: null
+   value: null
+ dataloader_num_workers:
+   desc: null
+   value: 0
+ dataloader_prefetch_factor:
+   desc: null
+   value: null
+ past_index:
+   desc: null
+   value: -1
+ run_name:
+   desc: null
+   value: ./
+ disable_tqdm:
+   desc: null
+   value: false
+ remove_unused_columns:
+   desc: null
+   value: false
+ label_names:
+   desc: null
+   value: null
+ load_best_model_at_end:
+   desc: null
+   value: false
+ metric_for_best_model:
+   desc: null
+   value: null
+ greater_is_better:
+   desc: null
+   value: null
+ ignore_data_skip:
+   desc: null
+   value: false
+ fsdp:
+   desc: null
+   value: []
+ fsdp_min_num_params:
+   desc: null
+   value: 0
+ fsdp_config:
+   desc: null
+   value:
+     min_num_params: 0
+     xla: false
+     xla_fsdp_v2: false
+     xla_fsdp_grad_ckpt: false
+ fsdp_transformer_layer_cls_to_wrap:
+   desc: null
+   value: null
+ accelerator_config:
+   desc: null
+   value:
+     split_batches: false
+     dispatch_batches: null
+     even_batches: true
+     use_seedable_sampler: true
+ deepspeed:
+   desc: null
+   value: null
+ label_smoothing_factor:
+   desc: null
+   value: 0.0
+ optim:
+   desc: null
+   value: adamw_bnb_8bit
+ optim_args:
+   desc: null
+   value: null
+ adafactor:
+   desc: null
+   value: false
+ group_by_length:
+   desc: null
+   value: false
+ length_column_name:
+   desc: null
+   value: length
+ report_to:
+   desc: null
+   value:
+   - tensorboard
+   - wandb
+ ddp_find_unused_parameters:
+   desc: null
+   value: null
+ ddp_bucket_cap_mb:
+   desc: null
+   value: null
+ ddp_broadcast_buffers:
+   desc: null
+   value: null
+ dataloader_pin_memory:
+   desc: null
+   value: true
+ dataloader_persistent_workers:
+   desc: null
+   value: false
+ skip_memory_metrics:
+   desc: null
+   value: true
+ use_legacy_prediction_loop:
+   desc: null
+   value: false
+ push_to_hub:
+   desc: null
+   value: false
+ resume_from_checkpoint:
+   desc: null
+   value: null
+ hub_model_id:
+   desc: null
+   value: null
+ hub_strategy:
+   desc: null
+   value: every_save
+ hub_token:
+   desc: null
+   value: <HUB_TOKEN>
+ hub_private_repo:
+   desc: null
+   value: false
+ hub_always_push:
+   desc: null
+   value: false
+ gradient_checkpointing:
+   desc: null
+   value: false
+ gradient_checkpointing_kwargs:
+   desc: null
+   value: null
+ include_inputs_for_metrics:
+   desc: null
+   value: false
+ fp16_backend:
+   desc: null
+   value: auto
+ push_to_hub_model_id:
+   desc: null
+   value: null
+ push_to_hub_organization:
+   desc: null
+   value: null
+ push_to_hub_token:
+   desc: null
+   value: <PUSH_TO_HUB_TOKEN>
+ mp_parameters:
+   desc: null
+   value: ''
+ auto_find_batch_size:
+   desc: null
+   value: false
+ full_determinism:
+   desc: null
+   value: false
+ torchdynamo:
+   desc: null
+   value: null
+ ray_scope:
+   desc: null
+   value: last
+ ddp_timeout:
+   desc: null
+   value: 1800
+ torch_compile:
+   desc: null
+   value: false
+ torch_compile_backend:
+   desc: null
+   value: null
+ torch_compile_mode:
+   desc: null
+   value: null
+ dispatch_batches:
+   desc: null
+   value: null
+ split_batches:
+   desc: null
+   value: null
+ include_tokens_per_second:
+   desc: null
+   value: false
+ include_num_input_tokens_seen:
+   desc: null
+   value: false
+ neftune_noise_alpha:
+   desc: null
+   value: null
+ optim_target_modules:
+   desc: null
+   value: null
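The `desc`/`value` pairs in the config above are how wandb snapshots each HuggingFace `TrainingArguments` field for the run. As a minimal plain-Python sketch (the `flatten` helper is illustrative, not part of wandb; the sample entries are copied from the config above), recovering a flat argument dict from that shape looks like:

```python
# wandb's config.yaml wraps every TrainingArguments field as {desc, value}.
# A few entries copied from the run config above:
raw = {
    "learning_rate": {"desc": None, "value": 0.0002},
    "lr_scheduler_type": {"desc": None, "value": "constant"},
    "num_train_epochs": {"desc": None, "value": 1},
    "optim": {"desc": None, "value": "adamw_bnb_8bit"},
    "fp16": {"desc": None, "value": True},
}

def flatten(config: dict) -> dict:
    """Drop the desc wrapper and keep only the recorded values."""
    return {key: entry["value"] for key, entry in config.items()}

args = flatten(raw)
print(args["learning_rate"])  # 0.0002
```

This matches the hyperparameters listed in the README (lr 2e-4, constant schedule, 1 epoch, fp16), with the optimizer recorded here as 8-bit AdamW from bitsandbytes.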
wandb/run-20240516_070049-8ecq2cdf/files/output.log ADDED
@@ -0,0 +1,16 @@
+ /opt/conda/lib/python3.10/site-packages/torch/utils/checkpoint.py:429: UserWarning: torch.utils.checkpoint: please pass in use_reentrant=True or use_reentrant=False explicitly. The default value of use_reentrant will be updated to be False in the future. To maintain current behavior, pass use_reentrant=True. It is recommended that you use use_reentrant=False. Refer to docs for more details on the differences between the two variants.
+ warnings.warn(
+ /opt/conda/lib/python3.10/site-packages/torch/utils/checkpoint.py:61: UserWarning: None of the inputs have requires_grad=True. Gradients will be None
+ warnings.warn(
+ `use_cache=True` is incompatible with gradient checkpointing. Setting `use_cache=False`.
+ /opt/conda/lib/python3.10/site-packages/transformers/models/llava/configuration_llava.py:144: FutureWarning: The `vocab_size` attribute is deprecated and will be removed in v4.42, Please use `text_config.vocab_size` instead.
+ warnings.warn(
+ /opt/conda/lib/python3.10/site-packages/transformers/models/llava/configuration_llava.py:104: FutureWarning: The `vocab_size` argument is deprecated and will be removed in v4.42, since it can be inferred from the `text_config`. Passing this argument has no effect
+ warnings.warn(
+ /opt/conda/lib/python3.10/site-packages/transformers/models/llava/configuration_llava.py:144: FutureWarning: The `vocab_size` attribute is deprecated and will be removed in v4.42, Please use `text_config.vocab_size` instead.
+ warnings.warn(
+ /opt/conda/lib/python3.10/site-packages/transformers/models/llava/configuration_llava.py:104: FutureWarning: The `vocab_size` argument is deprecated and will be removed in v4.42, since it can be inferred from the `text_config`. Passing this argument has no effect
+ warnings.warn(
+ Token has not been saved to git credential helper. Pass `add_to_git_credential=True` if you want to set the git credential as well.
+ Token is valid (permission: write).
+ Your token has been saved to /root/.cache/huggingface/token
wandb/run-20240516_070049-8ecq2cdf/files/requirements.txt ADDED
@@ -0,0 +1,865 @@
1
+ Babel==2.14.0
2
+ Boruta==0.3
3
+ Brotli==1.0.9
4
+ CVXcanon==0.1.2
5
+ Cartopy==0.23.0
6
+ Cython==3.0.8
7
+ Deprecated==1.2.14
8
+ Farama-Notifications==0.0.4
9
+ Flask==3.0.3
10
+ Geohash==1.0
11
+ GitPython==3.1.41
12
+ ImageHash==4.3.1
13
+ Janome==0.5.0
14
+ Jinja2==3.1.2
15
+ LunarCalendar==0.0.9
16
+ Mako==1.3.3
17
+ Markdown==3.5.2
18
+ MarkupSafe==2.1.3
19
+ MarkupSafe==2.1.5
20
+ Pillow==9.5.0
21
+ PuLP==2.8.0
22
+ PyArabic==0.6.15
23
+ PyJWT==2.8.0
24
+ PyMeeus==0.5.12
25
+ PySocks==1.7.1
26
+ PyUpSet==0.1.1.post7
27
+ PyWavelets==1.5.0
28
+ PyYAML==6.0.1
29
+ Pygments==2.17.2
30
+ Pympler==1.0.1
31
+ QtPy==2.4.1
32
+ Rtree==1.2.0
33
+ SQLAlchemy==2.0.25
34
+ SecretStorage==3.3.3
35
+ Send2Trash==1.8.2
36
+ Shapely==1.8.5.post1
37
+ Shimmy==1.3.0
38
+ SimpleITK==2.3.1
39
+ TPOT==0.12.1
40
+ Theano-PyMC==1.1.2
41
+ Theano==1.0.5
42
+ Wand==0.6.13
43
+ Werkzeug==3.0.2
44
+ absl-py==1.4.0
45
+ accelerate==0.29.3
46
+ access==1.1.9
47
+ affine==2.4.0
48
+ aiobotocore==2.12.3
49
+ aiofiles==22.1.0
50
+ aiohttp-cors==0.7.0
51
+ aiohttp==3.9.1
52
+ aioitertools==0.11.0
53
+ aiorwlock==1.3.0
54
+ aiosignal==1.3.1
55
+ aiosqlite==0.19.0
56
+ albumentations==1.4.0
57
+ alembic==1.13.1
58
+ altair==5.3.0
59
+ annotated-types==0.6.0
60
+ annoy==1.17.3
61
+ anyio==4.2.0
62
+ apache-beam==2.46.0
63
+ aplus==0.11.0
64
+ appdirs==1.4.4
65
+ archspec==0.2.3
66
+ argon2-cffi-bindings==21.2.0
67
+ argon2-cffi==23.1.0
68
+ array-record==0.5.0
69
+ arrow==1.3.0
70
+ arviz==0.18.0
71
+ astroid==3.1.0
72
+ astropy-iers-data==0.2024.4.15.2.45.49
73
+ astropy==6.0.1
74
+ asttokens==2.4.1
75
+ astunparse==1.6.3
76
+ async-lru==2.0.4
77
+ async-timeout==4.0.3
78
+ attrs==23.2.0
79
+ audioread==3.0.1
80
+ autopep8==2.0.4
81
+ backoff==2.2.1
82
+ bayesian-optimization==1.4.3
83
+ beatrix_jupyterlab==2023.128.151533
84
+ beautifulsoup4==4.12.2
85
+ bitsandbytes==0.43.1
86
+ blake3==0.2.1
87
+ bleach==6.1.0
88
+ blessed==1.20.0
89
+ blinker==1.7.0
90
+ blis==0.7.10
91
+ blosc2==2.6.2
92
+ bokeh==3.4.1
93
+ boltons==23.1.1
94
+ boto3==1.26.100
95
+ botocore==1.34.69
96
+ bq_helper==0.4.1
97
+ bqplot==0.12.43
98
+ branca==0.7.1
99
+ brewer2mpl==1.4.1
100
+ brotlipy==0.7.0
101
+ cached-property==1.5.2
102
+ cachetools==4.2.4
103
+ cachetools==5.3.2
104
+ catalogue==2.0.10
105
+ catalyst==22.4
106
+ catboost==1.2.3
107
+ category-encoders==2.6.3
108
+ certifi==2024.2.2
109
+ cesium==0.12.1
110
+ cffi==1.16.0
111
+ charset-normalizer==3.3.2
112
+ chex==0.1.86
113
+ cleverhans==4.0.0
114
+ click-plugins==1.1.1
115
+ click==8.1.7
116
+ cligj==0.7.2
117
+ cloud-tpu-client==0.10
118
+ cloud-tpu-profiler==2.4.0
119
+ cloudpathlib==0.16.0
120
+ cloudpickle==2.2.1
121
+ cloudpickle==3.0.0
122
+ cmdstanpy==1.2.2
123
+ colorama==0.4.6
124
+ colorcet==3.1.0
125
+ colorful==0.5.6
126
+ colorlog==6.8.2
127
+ colorlover==0.3.0
128
+ comm==0.2.1
129
+ conda-libmamba-solver==23.7.0
130
+ conda-package-handling==2.2.0
131
+ conda==23.7.4
132
+ conda_package_streaming==0.9.0
133
+ confection==0.1.4
134
+ contextily==1.6.0
135
+ contourpy==1.2.0
136
+ contourpy==1.2.1
137
+ convertdate==2.4.0
138
+ crcmod==1.7
139
+ cryptography==41.0.7
140
+ cuda-python==12.4.0
141
+ cudf==23.8.0
142
+ cufflinks==0.17.3
143
+ cuml==23.8.0
144
+ cupy==13.0.0
145
+ cycler==0.12.1
146
+ cymem==2.0.8
147
+ cytoolz==0.12.3
148
+ daal4py==2024.3.0
149
+ daal==2024.3.0
150
+ dacite==1.8.1
151
+ dask-cuda==23.8.0
152
+ dask-cudf==23.8.0
153
+ dask-expr==1.0.11
154
+ dask==2024.4.1
155
+ dataclasses-json==0.6.4
156
+ dataproc_jupyter_plugin==0.1.66
157
+ datasets==2.18.0
158
+ datashader==0.16.0
159
+ datatile==1.0.3
160
+ db-dtypes==1.2.0
161
+ deap==1.4.1
162
+ debugpy==1.8.0
163
+ decorator==5.1.1
164
+ deepdiff==7.0.1
165
+ defusedxml==0.7.1
166
+ deprecation==2.1.0
167
+ descartes==1.1.0
168
+ dill==0.3.8
169
+ dipy==1.9.0
170
+ distlib==0.3.8
171
+ distributed==2023.7.1
172
+ distro==1.9.0
173
+ dm-tree==0.1.8
174
+ docker-pycreds==0.4.0
175
+ docker==7.0.0
176
+ docopt==0.6.2
177
+ docstring-parser==0.15
178
+ docstring-to-markdown==0.15
179
+ docutils==0.21.1
180
+ earthengine-api==0.1.399
181
+ easydict==1.13
182
+ easyocr==1.7.1
183
+ ecos==2.0.13
184
+ eli5==0.13.0
185
+ emoji==2.11.0
186
+ en-core-web-lg==3.7.1
187
+ en-core-web-sm==3.7.1
188
+ entrypoints==0.4
189
+ ephem==4.1.5
190
+ esda==2.5.1
191
+ essentia==2.1b6.dev1110
192
+ et-xmlfile==1.1.0
193
+ etils==1.6.0
194
+ exceptiongroup==1.2.0
195
+ executing==2.0.1
196
+ explainable-ai-sdk==1.3.3
197
+ fastai==2.7.14
198
+ fastapi==0.108.0
199
+ fastavro==1.9.3
200
+ fastcore==1.5.29
201
+ fastdownload==0.0.7
202
+ fasteners==0.19
203
+ fastjsonschema==2.19.1
204
+ fastprogress==1.0.3
205
+ fastrlock==0.8.2
206
+ fasttext==0.9.2
207
+ feather-format==0.4.1
208
+ featuretools==1.30.0
209
+ filelock==3.13.1
210
+ fiona==1.9.6
211
+ fitter==1.7.0
212
+ flake8==7.0.0
213
+ flashtext==2.7
214
+ flatbuffers==23.5.26
215
+ flax==0.8.2
216
+ folium==0.16.0
217
+ fonttools==4.47.0
218
+ fonttools==4.51.0
219
+ fqdn==1.5.1
220
+ frozendict==2.4.2
221
+ frozenlist==1.4.1
222
+ fsspec==2024.2.0
223
+ fsspec==2024.3.1
224
+ funcy==2.0
225
+ fury==0.10.0
226
+ future==1.0.0
227
+ fuzzywuzzy==0.18.0
228
+ gast==0.5.4
229
+ gatspy==0.3
230
+ gcsfs==2024.2.0
231
+ gensim==4.3.2
232
+ geographiclib==2.0
233
+ geojson==3.1.0
234
+ geopandas==0.14.3
235
+ geoplot==0.5.1
236
+ geopy==2.4.1
237
+ geoviews==1.12.0
238
+ ggplot==0.11.5
239
+ giddy==2.3.5
240
+ gitdb==4.0.11
241
+ google-ai-generativelanguage==0.6.2
242
+ google-api-core==2.11.1
243
+ google-api-core==2.18.0
244
+ google-api-python-client==2.126.0
245
+ google-apitools==0.5.31
246
+ google-auth-httplib2==0.2.0
247
+ google-auth-oauthlib==1.2.0
248
+ google-auth==2.26.1
249
+ google-cloud-aiplatform==0.6.0a1
250
+ google-cloud-artifact-registry==1.10.0
251
+ google-cloud-automl==1.0.1
252
+ google-cloud-bigquery==2.34.4
253
+ google-cloud-bigtable==1.7.3
254
+ google-cloud-core==2.4.1
255
+ google-cloud-datastore==2.19.0
256
+ google-cloud-dlp==3.14.0
257
+ google-cloud-jupyter-config==0.0.5
258
+ google-cloud-language==2.13.3
259
+ google-cloud-monitoring==2.18.0
260
+ google-cloud-pubsub==2.19.0
261
+ google-cloud-pubsublite==1.9.0
262
+ google-cloud-recommendations-ai==0.7.1
263
+ google-cloud-resource-manager==1.11.0
264
+ google-cloud-spanner==3.40.1
265
+ google-cloud-storage==1.44.0
266
+ google-cloud-translate==3.12.1
267
+ google-cloud-videointelligence==2.13.3
268
+ google-cloud-vision==2.8.0
269
+ google-crc32c==1.5.0
270
+ google-generativeai==0.5.1
271
+ google-pasta==0.2.0
272
+ google-resumable-media==2.7.0
273
+ googleapis-common-protos==1.62.0
274
+ gplearn==0.4.2
275
+ gpustat==1.0.0
276
+ gpxpy==1.6.2
277
+ graphviz==0.20.3
278
+ greenlet==3.0.3
279
+ grpc-google-iam-v1==0.12.7
280
+ grpcio-status==1.48.1
281
+ grpcio-status==1.48.2
282
+ grpcio==1.51.1
283
+ grpcio==1.60.0
284
+ gviz-api==1.10.0
285
+ gym-notices==0.0.8
286
+ gym==0.26.2
287
+ gymnasium==0.29.0
288
+ h11==0.14.0
289
+ h2o==3.46.0.1
290
+ h5netcdf==1.3.0
291
+ h5py==3.10.0
292
+ haversine==2.8.1
293
+ hdfs==2.7.3
294
+ hep-ml==0.7.2
295
+ hijri-converter==2.3.1
296
+ hmmlearn==0.3.2
297
+ holidays==0.24
298
+ holoviews==1.18.3
299
+ hpsklearn==0.1.0
300
+ html5lib==1.1
301
+ htmlmin==0.1.12
302
+ httpcore==1.0.5
303
+ httplib2==0.21.0
304
+ httptools==0.6.1
305
+ httpx==0.27.0
306
+ huggingface-hub==0.22.2
307
+ hunspell==0.5.5
308
+ hydra-slayer==0.5.0
309
+ hyperopt==0.2.7
310
+ hypertools==0.8.0
311
+ idna==3.6
312
+ igraph==0.11.4
313
+ imagecodecs==2024.1.1
314
+ imageio==2.33.1
315
+ imbalanced-learn==0.12.2
316
+ imgaug==0.4.0
317
+ importlib-metadata==6.11.0
318
+ importlib-metadata==7.0.1
319
+ importlib-resources==6.1.1
320
+ inequality==1.0.1
321
+ iniconfig==2.0.0
322
+ ipydatawidgets==4.3.5
323
+ ipykernel==6.28.0
324
+ ipyleaflet==0.18.2
325
+ ipympl==0.7.0
326
+ ipython-genutils==0.2.0
327
+ ipython-genutils==0.2.0
328
+ ipython-sql==0.5.0
329
+ ipython==8.20.0
330
+ ipyvolume==0.6.3
331
+ ipyvue==1.11.0
332
+ ipyvuetify==1.9.4
333
+ ipywebrtc==0.6.0
334
+ ipywidgets==7.7.1
335
+ isoduration==20.11.0
336
+ isort==5.13.2
337
+ isoweek==1.3.3
338
+ itsdangerous==2.2.0
339
+ jaraco.classes==3.3.0
340
+ jax-jumpy==1.0.0
341
+ jax==0.4.23
342
+ jaxlib==0.4.23.dev20240116
343
+ jedi==0.19.1
344
+ jeepney==0.8.0
345
+ jieba==0.42.1
346
+ jmespath==1.0.1
347
+ joblib==1.4.0
348
+ json5==0.9.14
349
+ jsonpatch==1.33
350
+ jsonpointer==2.4
351
+ jsonschema-specifications==2023.12.1
352
+ jsonschema==4.20.0
353
+ jupyter-console==6.6.3
354
+ jupyter-events==0.9.0
355
+ jupyter-http-over-ws==0.0.8
356
+ jupyter-lsp==1.5.1
357
+ jupyter-server-mathjax==0.2.6
358
+ jupyter-ydoc==0.2.5
359
+ jupyter_client==7.4.9
360
+ jupyter_client==8.6.0
361
+ jupyter_core==5.7.1
362
+ jupyter_server==2.12.5
363
+ jupyter_server_fileid==0.9.1
364
+ jupyter_server_proxy==4.1.0
365
+ jupyter_server_terminals==0.5.1
366
+ jupyter_server_ydoc==0.8.0
367
+ jupyterlab-lsp==5.1.0
368
+ jupyterlab-widgets==3.0.9
369
+ jupyterlab==4.1.6
370
+ jupyterlab_git==0.44.0
371
+ jupyterlab_pygments==0.3.0
372
+ jupyterlab_server==2.25.2
373
+ jupytext==1.16.0
374
+ kaggle-environments==1.14.3
375
+ kaggle==1.6.12
376
+ kagglehub==0.2.3
377
+ keras-cv==0.8.2
378
+ keras-nlp==0.9.3
379
+ keras-tuner==1.4.6
380
+ keras==3.2.1
381
+ kernels-mixer==0.0.7
382
+ keyring==24.3.0
383
+ keyrings.google-artifactregistry-auth==1.1.2
384
+ kfp-pipeline-spec==0.2.2
385
+ kfp-server-api==2.0.5
386
+ kfp==2.5.0
387
+ kiwisolver==1.4.5
388
+ kmapper==2.0.1
389
+ kmodes==0.12.2
390
+ korean-lunar-calendar==0.3.1
391
+ kornia==0.7.2
392
+ kornia_rs==0.1.3
393
+ kt-legacy==1.0.5
394
+ kubernetes==26.1.0
395
+ langcodes==3.3.0
396
+ langid==1.1.6
397
+ lazy_loader==0.3
398
+ learntools==0.3.4
399
+ leven==1.0.4
400
+ libclang==16.0.6
401
+ libmambapy==1.5.0
402
+ libpysal==4.9.2
403
+ librosa==0.10.1
404
+ lightgbm==4.2.0
405
+ lightning-utilities==0.11.2
406
+ lime==0.2.0.1
407
+ line-profiler==4.1.2
408
+ linkify-it-py==2.0.3
409
+ llvmlite==0.41.1
410
+ llvmlite==0.42.0
411
+ lml==0.1.0
412
+ locket==1.0.0
413
+ loguru==0.7.2
414
+ lxml==5.2.1
415
+ lz4==4.3.3
416
+ mamba==1.5.0
417
+ mapclassify==2.6.1
418
+ markdown-it-py==3.0.0
419
+ marshmallow==3.21.1
420
+ matplotlib-inline==0.1.6
421
+ matplotlib-venn==0.11.10
422
+ matplotlib==3.7.5
423
+ matplotlib==3.8.4
424
+ mccabe==0.7.0
425
+ mdit-py-plugins==0.4.0
426
+ mdurl==0.1.2
427
+ memory-profiler==0.61.0
428
+ menuinst==2.0.1
429
+ mercantile==1.2.1
430
+ mgwr==2.2.1
431
+ missingno==0.5.2
432
+ mistune==0.8.4
433
+ mizani==0.11.1
434
+ ml-dtypes==0.2.0
435
+ mlcrate==0.2.0
436
+ mlens==0.2.3
437
+ mlxtend==0.23.1
438
+ mne==1.6.1
439
+ mnist==0.2.2
440
+ momepy==0.7.0
441
+ more-itertools==10.2.0
442
+ mpld3==0.5.10
443
+ mpmath==1.3.0
444
+ msgpack==1.0.7
445
+ multidict==6.0.4
446
+ multimethod==1.10
447
+ multipledispatch==1.0.0
448
+ multiprocess==0.70.16
449
+ munkres==1.1.4
450
+ murmurhash==1.0.10
451
+ mypy-extensions==1.0.0
452
+ namex==0.0.8
453
+ nb-conda-kernels==2.3.1
454
+ nb_conda==2.2.1
455
+ nbclassic==1.0.0
456
+ nbclient==0.5.13
457
+ nbconvert==6.4.5
458
+ nbdime==3.2.0
459
+ nbformat==5.9.2
460
+ ndindex==1.8
461
+ nest-asyncio==1.5.8
462
+ networkx==3.2.1
463
+ nibabel==5.2.1
464
+ nilearn==0.10.4
465
+ ninja==1.11.1.1
466
+ nltk==3.2.4
467
+ nose==1.3.7
468
+ notebook==6.5.4
469
+ notebook==6.5.6
470
+ notebook_executor==0.2
471
+ notebook_shim==0.2.3
472
+ numba==0.58.1
473
+ numba==0.59.1
474
+ numexpr==2.10.0
475
+ numpy==1.26.4
476
+ nvidia-ml-py==11.495.46
477
+ nvtx==0.2.10
478
+ oauth2client==4.1.3
479
+ oauthlib==3.2.2
480
+ objsize==0.6.1
481
+ odfpy==1.4.1
482
+ olefile==0.47
483
+ onnx==1.16.0
484
+ opencensus-context==0.1.3
485
+ opencensus==0.11.4
486
+ opencv-contrib-python==4.9.0.80
487
+ opencv-python-headless==4.9.0.80
488
+ opencv-python==4.9.0.80
489
+ openpyxl==3.1.2
490
+ openslide-python==1.3.1
491
+ opentelemetry-api==1.22.0
492
+ opentelemetry-exporter-otlp-proto-common==1.22.0
493
+ opentelemetry-exporter-otlp-proto-grpc==1.22.0
494
+ opentelemetry-exporter-otlp-proto-http==1.22.0
495
+ opentelemetry-exporter-otlp==1.22.0
496
+ opentelemetry-proto==1.22.0
497
+ opentelemetry-sdk==1.22.0
498
+ opentelemetry-semantic-conventions==0.43b0
499
+ opt-einsum==3.3.0
500
+ optax==0.2.2
501
+ optree==0.11.0
502
+ optuna==3.6.1
503
+ orbax-checkpoint==0.5.9
504
+ ordered-set==4.1.0
505
+ orjson==3.9.10
506
+ ortools==9.4.1874
507
+ osmnx==1.9.2
508
+ overrides==7.4.0
509
+ packaging==21.3
510
+ pandas-datareader==0.10.0
511
+ pandas-profiling==3.6.6
512
+ pandas-summary==0.2.0
513
+ pandas==2.1.4
514
+ pandas==2.2.2
515
+ pandasql==0.7.3
516
+ pandocfilters==1.5.0
517
+ panel==1.4.1
518
+ papermill==2.5.0
519
+ param==2.1.0
520
+ parso==0.8.3
521
+ partd==1.4.1
522
+ path.py==12.5.0
523
+ path==16.14.0
524
+ pathos==0.3.2
525
+ pathy==0.10.3
526
+ patsy==0.5.6
527
+ pdf2image==1.17.0
528
+ peft==0.10.0
529
+ pettingzoo==1.24.0
530
+ pexpect==4.8.0
531
+ pexpect==4.9.0
532
+ phik==0.12.4
533
+ pickleshare==0.7.5
534
+ pillow==10.3.0
535
+ pip==23.3.2
536
+ pkgutil_resolve_name==1.3.10
537
+ platformdirs==4.2.0
538
+ plotly-express==0.4.1
539
+ plotly==5.18.0
540
+ plotnine==0.13.4
541
+ pluggy==1.4.0
542
+ pointpats==2.4.0
543
+ polars==0.20.21
544
+ polyglot==16.7.4
545
+ pooch==1.8.1
546
+ pox==0.3.4
547
+ ppca==0.0.4
548
+ ppft==1.7.6.8
549
+ preprocessing==0.1.13
550
+ preshed==3.0.9
551
+ prettytable==3.9.0
552
+ progressbar2==4.4.2
553
+ prometheus-client==0.19.0
554
+ promise==2.3
555
+ prompt-toolkit==3.0.42
556
+ prompt-toolkit==3.0.43
557
+ prophet==1.1.1
558
+ proto-plus==1.23.0
559
+ protobuf==3.20.3
560
+ protobuf==4.21.12
561
+ psutil==5.9.3
562
+ psutil==5.9.7
563
+ ptyprocess==0.7.0
564
+ pudb==2024.1
565
+ pure-eval==0.2.2
566
+ py-cpuinfo==9.0.0
567
+ py-spy==0.3.14
568
+ py4j==0.10.9.7
569
+ pyLDAvis==3.4.1
570
+ pyOpenSSL==23.3.0
571
+ pyaml==23.12.0
572
+ pyarrow-hotfix==0.6
573
+ pyarrow==15.0.2
574
+ pyasn1-modules==0.3.0
575
+ pyasn1==0.5.1
576
+ pybind11==2.12.0
577
+ pyclipper==1.3.0.post5
578
+ pycodestyle==2.11.1
579
+ pycosat==0.6.6
580
+ pycparser==2.21
581
+ pycryptodome==3.20.0
582
+ pyct==0.5.0
583
+ pycuda==2024.1
584
+ pydantic==2.5.3
585
+ pydantic==2.7.0
586
+ pydantic_core==2.14.6
587
+ pydantic_core==2.18.1
588
+ pydegensac==0.1.2
589
+ pydicom==2.4.4
590
+ pydocstyle==6.3.0
591
+ pydot==1.4.2
592
+ pydub==0.25.1
593
+ pyemd==1.0.0
594
+ pyerfa==2.0.1.4
595
+ pyexcel-io==0.6.6
596
+ pyexcel-ods==0.6.0
597
+ pyflakes==3.2.0
598
+ pygltflib==1.16.2
599
+ pykalman==0.9.7
600
+ pylibraft==23.8.0
601
+ pylint==3.1.0
602
+ pymc3==3.11.4
603
+ pymongo==3.13.0
604
+ pynndescent==0.5.12
605
+ pynvml==11.4.1
606
+ pynvrtc==9.2
607
+ pyparsing==3.1.1
608
+ pyparsing==3.1.2
609
+ pypdf==4.2.0
610
+ pyproj==3.6.1
611
+ pysal==24.1
612
+ pyshp==2.3.1
613
+ pytesseract==0.3.10
614
+ pytest==8.1.1
615
+ python-bidi==0.4.2
616
+ python-dateutil==2.9.0.post0
617
+ python-dotenv==1.0.0
618
+ python-json-logger==2.0.7
619
+ python-louvain==0.16
620
+ python-lsp-jsonrpc==1.1.2
621
+ python-lsp-server==1.11.0
622
+ python-slugify==8.0.4
623
+ python-utils==3.8.2
624
+ pythreejs==2.4.2
625
+ pytoolconfig==1.3.1
626
+ pytools==2024.1.1
627
+ pytorch-ignite==0.5.0.post2
628
+ pytorch-lightning==2.2.2
629
+ pytz==2023.3.post1
630
+ pytz==2024.1
631
+ pyu2f==0.1.5
632
+ pyviz_comms==3.0.2
633
+ pyzmq==24.0.1
634
+ pyzmq==25.1.2
635
+ qgrid==1.3.1
636
+ qtconsole==5.5.1
637
+ quantecon==0.7.2
638
+ qudida==0.0.4
639
+ raft-dask==23.8.0
640
+ rasterio==1.3.10
641
+ rasterstats==0.19.0
642
+ ray-cpp==2.9.0
643
+ ray==2.9.0
644
+ referencing==0.32.1
645
+ regex==2023.12.25
646
+ requests-oauthlib==1.3.1
647
+ requests-toolbelt==0.10.1
648
+ requests==2.31.0
649
+ retrying==1.3.3
650
+ retrying==1.3.4
651
+ rfc3339-validator==0.1.4
652
+ rfc3986-validator==0.1.1
653
+ rgf-python==3.12.0
654
+ rich-click==1.7.4
655
+ rich==13.7.0
656
+ rich==13.7.1
657
+ rmm==23.8.0
658
+ rope==1.13.0
659
+ rpds-py==0.16.2
660
+ rsa==4.9
661
+ ruamel-yaml-conda==0.15.100
662
+ ruamel.yaml.clib==0.2.7
663
+ ruamel.yaml==0.17.40
664
+ s2sphere==0.2.5
665
+ s3fs==2024.2.0
666
+ s3transfer==0.6.2
667
+ safetensors==0.4.3
668
+ scattertext==0.1.19
669
+ scikit-image==0.22.0
670
+ scikit-learn-intelex==2024.3.0
671
+ scikit-learn==1.2.2
672
+ scikit-multilearn==0.2.0
673
+ scikit-optimize==0.10.1
674
+ scikit-plot==0.3.7
675
+ scikit-surprise==1.1.3
676
+ scipy==1.11.4
677
+ scipy==1.13.0
678
+ seaborn==0.12.2
679
+ segment_anything==1.0
680
+ segregation==2.5
681
+ semver==3.0.2
682
+ sentencepiece==0.2.0
683
+ sentry-sdk==1.45.0
684
+ setproctitle==1.3.3
685
+ setuptools-git==1.2
686
+ setuptools-scm==8.0.4
687
+ setuptools==69.0.3
688
+ shap==0.44.1
689
+ shapely==2.0.4
690
+ shellingham==1.5.4
691
+ shtab==1.7.1
692
+ simpervisor==1.0.0
693
+ simplejson==3.19.2
694
+ six==1.16.0
695
+ sklearn-pandas==2.2.0
696
+ slicer==0.0.7
697
+ smart-open==6.4.0
698
+ smmap==5.0.1
699
+ sniffio==1.3.0
700
+ snowballstemmer==2.2.0
701
+ snuggs==1.4.7
702
+ sortedcontainers==2.4.0
703
+ soundfile==0.12.1
704
+ soupsieve==2.5
705
+ soxr==0.3.7
706
+ spacy-legacy==3.0.12
707
+ spacy-loggers==1.0.5
708
+ spacy==3.7.3
709
+ spaghetti==1.7.5.post1
710
+ spectral==0.23.1
711
+ spglm==1.1.0
712
+ sphinx-rtd-theme==0.2.4
713
+ spint==1.0.7
714
+ splot==1.1.5.post1
715
+ spopt==0.6.0
716
+ spreg==1.4.2
717
+ spvcm==0.3.0
718
+ sqlparse==0.4.4
719
+ squarify==0.4.3
720
+ srsly==2.4.8
721
+ stable-baselines3==2.1.0
722
+ stack-data==0.6.2
723
+ stack-data==0.6.3
724
+ stanio==0.5.0
725
+ starlette==0.32.0.post1
726
+ statsmodels==0.14.1
727
+ stemming==1.0.1
728
+ stop-words==2018.7.23
729
+ stopit==1.1.2
730
+ stumpy==1.12.0
731
+ sympy==1.12
732
+ tables==3.9.2
733
+ tabulate==0.9.0
734
+ tangled-up-in-unicode==0.2.0
735
+ tbb==2021.12.0
736
+ tblib==3.0.0
737
+ tenacity==8.2.3
738
+ tensorboard-data-server==0.7.2
739
+ tensorboard-plugin-profile==2.15.0
740
+ tensorboard==2.15.1
741
+ tensorboardX==2.6.2.2
742
+ tensorflow-cloud==0.1.16
743
+ tensorflow-datasets==4.9.4
744
+ tensorflow-decision-forests==1.8.1
745
+ tensorflow-estimator==2.15.0
746
+ tensorflow-hub==0.16.1
747
+ tensorflow-io-gcs-filesystem==0.35.0
748
+ tensorflow-io==0.35.0
749
+ tensorflow-metadata==0.14.0
750
+ tensorflow-probability==0.23.0
751
+ tensorflow-serving-api==2.14.1
752
+ tensorflow-text==2.15.0
753
+ tensorflow-transform==0.14.0
754
+ tensorflow==2.15.0
755
+ tensorstore==0.1.56
756
+ termcolor==2.4.0
757
+ terminado==0.18.0
758
+ testpath==0.6.0
759
+ text-unidecode==1.3
760
+ textblob==0.18.0.post0
761
+ texttable==1.7.0
762
+ tf_keras==2.15.1
763
+ tfp-nightly==0.24.0.dev0
764
+ thinc==8.2.2
765
+ threadpoolctl==3.2.0
766
+ tifffile==2023.12.9
767
+ timm==0.9.16
768
+ tinycss2==1.2.1
769
+ tobler==0.11.2
770
+ tokenizers==0.15.2
771
+ toml==0.10.2
772
+ tomli==2.0.1
773
+ tomlkit==0.12.4
774
+ toolz==0.12.1
775
+ torch==2.1.2
776
+ torchaudio==2.1.2
777
+ torchdata==0.7.1
778
+ torchinfo==1.8.0
779
+ torchmetrics==1.3.2
780
+ torchtext==0.16.2
781
+ torchvision==0.16.2
782
+ tornado==6.3.3
783
+ tqdm==4.66.1
784
+ traceml==1.0.8
785
+ traitlets==5.9.0
786
+ traittypes==0.2.1
787
+ transformers==4.39.3
788
+ treelite-runtime==3.2.0
789
+ treelite==3.2.0
790
+ trl==0.8.6
791
+ truststore==0.8.0
792
+ trx-python==0.2.9
793
+ tsfresh==0.20.2
794
+ typeguard==4.1.5
795
+ typer==0.9.0
796
+ typer==0.9.4
797
+ types-python-dateutil==2.8.19.20240106
798
+ typing-inspect==0.9.0
799
+ typing-utils==0.1.0
800
+ typing_extensions==4.9.0
801
+ tyro==0.8.4
802
+ tzdata==2023.4
803
+ uc-micro-py==1.0.3
804
+ ucx-py==0.33.0
805
+ ujson==5.9.0
806
+ umap-learn==0.5.6
807
+ unicodedata2==15.1.0
808
+ update-checker==0.18.0
809
+ uri-template==1.3.0
810
+ uritemplate==3.0.1
811
+ urllib3==1.26.18
812
+ urllib3==2.1.0
813
+ urwid==2.6.10
814
+ urwid_readline==0.14
815
+ uvicorn==0.25.0
816
+ uvloop==0.19.0
817
+ vaex-astro==0.9.3
818
+ vaex-core==4.17.1
819
+ vaex-hdf5==0.14.1
820
+ vaex-jupyter==0.8.2
821
+ vaex-ml==0.18.3
822
+ vaex-server==0.9.0
823
+ vaex-viz==0.5.4
824
+ vaex==4.17.0
825
+ vec_noise==1.1.4
826
+ vecstack==0.4.0
827
+ virtualenv==20.21.0
828
+ visions==0.7.5
829
+ vowpalwabbit==9.9.0
830
+ vtk==9.3.0
831
+ wandb==0.16.6
832
+ wasabi==1.1.2
833
+ watchfiles==0.21.0
834
+ wavio==0.0.8
835
+ wcwidth==0.2.13
836
+ weasel==0.3.4
837
+ webcolors==1.13
838
+ webencodings==0.5.1
839
+ websocket-client==1.7.0
840
+ websockets==12.0
841
+ wfdb==4.1.2
842
+ whatthepatch==1.0.5
843
+ wheel==0.42.0
844
+ widgetsnbextension==3.6.6
845
+ witwidget==1.8.1
846
+ woodwork==0.30.0
847
+ wordcloud==1.9.3
848
+ wordsegment==1.3.1
849
+ wrapt==1.14.1
850
+ xarray-einstats==0.7.0
851
+ xarray==2024.3.0
852
+ xgboost==2.0.3
853
+ xvfbwrapper==0.2.9
854
+ xxhash==3.4.1
855
+ xyzservices==2024.4.0
856
+ y-py==0.6.2
857
+ yapf==0.40.2
858
+ yarl==1.9.3
859
+ yarl==1.9.4
860
+ ydata-profiling==4.6.4
861
+ yellowbrick==1.5
862
+ ypy-websocket==0.8.4
863
+ zict==3.0.0
864
+ zipp==3.17.0
865
+ zstandard==0.22.0
wandb/run-20240516_070049-8ecq2cdf/files/wandb-metadata.json ADDED
@@ -0,0 +1,66 @@
+ {
+   "os": "Linux-5.15.133+-x86_64-with-glibc2.31",
+   "python": "3.10.13",
+   "heartbeatAt": "2024-05-16T07:00:50.065608",
+   "startedAt": "2024-05-16T07:00:49.429145",
+   "docker": null,
+   "cuda": null,
+   "args": [],
+   "state": "running",
+   "program": "kaggle.ipynb",
+   "codePathLocal": null,
+   "root": "/kaggle/working",
+   "host": "cc8faff11463",
+   "username": "root",
+   "executable": "/opt/conda/bin/python3.10",
+   "cpu_count": 2,
+   "cpu_count_logical": 4,
+   "cpu_freq": {
+     "current": 2000.188,
+     "min": 0.0,
+     "max": 0.0
+   },
+   "cpu_freq_per_core": [
+     {
+       "current": 2000.188,
+       "min": 0.0,
+       "max": 0.0
+     },
+     {
+       "current": 2000.188,
+       "min": 0.0,
+       "max": 0.0
+     },
+     {
+       "current": 2000.188,
+       "min": 0.0,
+       "max": 0.0
+     },
+     {
+       "current": 2000.188,
+       "min": 0.0,
+       "max": 0.0
+     }
+   ],
+   "disk": {
+     "/": {
+       "total": 8062.387607574463,
+       "used": 5611.894302368164
+     }
+   },
+   "gpu": "Tesla T4",
+   "gpu_count": 2,
+   "gpu_devices": [
+     {
+       "name": "Tesla T4",
+       "memory_total": 16106127360
+     },
+     {
+       "name": "Tesla T4",
+       "memory_total": 16106127360
+     }
+   ],
+   "memory": {
+     "total": 31.357559204101562
+   }
+ }
wandb/run-20240516_070049-8ecq2cdf/files/wandb-summary.json ADDED
@@ -0,0 +1 @@
+ {"train/loss": 1.0685, "train/grad_norm": 1.4457346200942993, "train/learning_rate": 0.0002, "train/epoch": 0.99, "train/global_step": 45, "_timestamp": 1715843776.0617712, "_runtime": 926.6233122348785, "_step": 45, "train_runtime": 926.6837, "train_samples_per_second": 0.393, "train_steps_per_second": 0.049, "total_flos": 783933541933056.0, "train_loss": 1.7486182438002693}
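The throughput fields in the summary above can be cross-checked by hand. This is a small sketch using the reported numbers (45 optimizer steps, ~926.7 s runtime, and the effective batch size of 8 from the README), not wandb's exact computation; wandb divides by the true number of training samples, so the per-sample figure lands slightly below the reported 0.393:

```python
# Cross-check of the throughput numbers in wandb-summary.json above.
train_runtime = 926.6837          # seconds, from the summary
global_step = 45                  # optimizer steps, from the summary
total_train_batch_size = 8        # 2 per device x 4 grad-accum steps (README)

steps_per_second = global_step / train_runtime
samples_per_second = steps_per_second * total_train_batch_size

print(round(steps_per_second, 3))    # 0.049, matching train_steps_per_second
print(round(samples_per_second, 3))  # 0.388, close to the reported 0.393
```

The small gap on the per-sample figure is consistent with an epoch of 0.99: the last partial batch contributes samples without a full optimizer step.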
wandb/run-20240516_070049-8ecq2cdf/logs/debug-internal.log ADDED
The diff for this file is too large to render. See raw diff
 
wandb/run-20240516_070049-8ecq2cdf/logs/debug.log ADDED
@@ -0,0 +1,50 @@
+ 2024-05-16 07:00:49,430 INFO MainThread:34 [wandb_setup.py:_flush():76] Current SDK version is 0.16.6
+ 2024-05-16 07:00:49,431 INFO MainThread:34 [wandb_setup.py:_flush():76] Configure stats pid to 34
+ 2024-05-16 07:00:49,431 INFO MainThread:34 [wandb_setup.py:_flush():76] Loading settings from /root/.config/wandb/settings
+ 2024-05-16 07:00:49,431 INFO MainThread:34 [wandb_setup.py:_flush():76] Loading settings from /kaggle/working/wandb/settings
+ 2024-05-16 07:00:49,431 INFO MainThread:34 [wandb_setup.py:_flush():76] Loading settings from environment variables: {}
+ 2024-05-16 07:00:49,431 INFO MainThread:34 [wandb_setup.py:_flush():76] Inferring run settings from compute environment: {'program': '<python with no main file>'}
+ 2024-05-16 07:00:49,431 INFO MainThread:34 [wandb_setup.py:_flush():76] Applying login settings: {'api_key': '***REDACTED***'}
+ 2024-05-16 07:00:49,431 INFO MainThread:34 [wandb_setup.py:_flush():76] Applying login settings: {'api_key': '***REDACTED***'}
+ 2024-05-16 07:00:49,431 INFO MainThread:34 [wandb_setup.py:_flush():76] Applying login settings: {}
+ 2024-05-16 07:00:49,431 INFO MainThread:34 [wandb_init.py:_log_setup():521] Logging user logs to /kaggle/working/wandb/run-20240516_070049-8ecq2cdf/logs/debug.log
+ 2024-05-16 07:00:49,431 INFO MainThread:34 [wandb_init.py:_log_setup():522] Logging internal logs to /kaggle/working/wandb/run-20240516_070049-8ecq2cdf/logs/debug-internal.log
+ 2024-05-16 07:00:49,431 INFO MainThread:34 [wandb_init.py:_jupyter_setup():467] configuring jupyter hooks <wandb.sdk.wandb_init._WandbInit object at 0x7f2ce0184cd0>
+ 2024-05-16 07:00:49,432 INFO MainThread:34 [wandb_init.py:init():561] calling init triggers
+ 2024-05-16 07:00:49,432 INFO MainThread:34 [wandb_init.py:init():568] wandb.init called with sweep_config: {}
+ config: {}
+ 2024-05-16 07:00:49,432 INFO MainThread:34 [wandb_init.py:init():611] starting backend
+ 2024-05-16 07:00:49,432 INFO MainThread:34 [wandb_init.py:init():615] setting up manager
18
+ 2024-05-16 07:00:49,434 INFO MainThread:34 [backend.py:_multiprocessing_setup():105] multiprocessing start_methods=fork,spawn,forkserver, using: spawn
19
+ 2024-05-16 07:00:49,438 INFO MainThread:34 [wandb_init.py:init():623] backend started and connected
20
+ 2024-05-16 07:00:49,449 INFO MainThread:34 [wandb_run.py:_label_probe_notebook():1299] probe notebook
21
+ 2024-05-16 07:00:49,786 INFO MainThread:34 [wandb_init.py:init():715] updated telemetry
22
+ 2024-05-16 07:00:49,791 INFO MainThread:34 [wandb_init.py:init():748] communicating run to backend with 90.0 second timeout
23
+ 2024-05-16 07:00:49,943 INFO MainThread:34 [wandb_run.py:_on_init():2357] communicating current version
24
+ 2024-05-16 07:00:50,031 INFO MainThread:34 [wandb_run.py:_on_init():2366] got version response upgrade_message: "wandb version 0.17.0 is available! To upgrade, please run:\n $ pip install wandb --upgrade"
25
+
26
+ 2024-05-16 07:00:50,033 INFO MainThread:34 [wandb_init.py:init():799] starting run threads in backend
27
+ 2024-05-16 07:01:06,114 INFO MainThread:34 [wandb_run.py:_console_start():2335] atexit reg
28
+ 2024-05-16 07:01:06,114 INFO MainThread:34 [wandb_run.py:_redirect():2190] redirect: wrap_raw
29
+ 2024-05-16 07:01:06,114 INFO MainThread:34 [wandb_run.py:_redirect():2255] Wrapping output streams.
30
+ 2024-05-16 07:01:06,114 INFO MainThread:34 [wandb_run.py:_redirect():2280] Redirects installed.
31
+ 2024-05-16 07:01:06,116 INFO MainThread:34 [wandb_init.py:init():842] run started, returning control to user process
32
+ 2024-05-16 07:01:06,122 INFO MainThread:34 [wandb_run.py:_config_callback():1347] config_cb None None {'ignore_index': -100, 'image_token_index': 32000, 'projector_hidden_act': 'gelu', 'vision_feature_select_strategy': 'default', 'vision_feature_layer': -2, 'vision_config': {'return_dict': True, 'output_hidden_states': False, 'output_attentions': False, 'torchscript': False, 'torch_dtype': None, 'use_bfloat16': False, 'tf_legacy_loss': False, 'pruned_heads': {}, 'tie_word_embeddings': True, 'chunk_size_feed_forward': 0, 'is_encoder_decoder': False, 'is_decoder': False, 'cross_attention_hidden_size': None, 'add_cross_attention': False, 'tie_encoder_decoder': False, 'max_length': 20, 'min_length': 0, 'do_sample': False, 'early_stopping': False, 'num_beams': 1, 'num_beam_groups': 1, 'diversity_penalty': 0.0, 'temperature': 1.0, 'top_k': 50, 'top_p': 1.0, 'typical_p': 1.0, 'repetition_penalty': 1.0, 'length_penalty': 1.0, 'no_repeat_ngram_size': 0, 'encoder_no_repeat_ngram_size': 0, 'bad_words_ids': None, 'num_return_sequences': 1, 'output_scores': False, 'return_dict_in_generate': False, 'forced_bos_token_id': None, 'forced_eos_token_id': None, 'remove_invalid_values': False, 'exponential_decay_length_penalty': None, 'suppress_tokens': None, 'begin_suppress_tokens': None, 'architectures': None, 'finetuning_task': None, 'id2label': {0: 'LABEL_0', 1: 'LABEL_1'}, 'label2id': {'LABEL_0': 0, 'LABEL_1': 1}, 'tokenizer_class': None, 'prefix': None, 'bos_token_id': None, 'pad_token_id': None, 'eos_token_id': None, 'sep_token_id': None, 'decoder_start_token_id': None, 'task_specific_params': None, 'problem_type': None, '_name_or_path': '', 'model_type': 'clip_vision_model', 'vocab_size': 32000, 'hidden_size': 1024, 'intermediate_size': 4096, 'projection_dim': 768, 'num_hidden_layers': 24, 'num_attention_heads': 16, 'num_channels': 3, 'patch_size': 14, 'image_size': 336, 'initializer_range': 0.02, 'initializer_factor': 1.0, 'attention_dropout': 0.0, 'layer_norm_eps': 1e-05, 'hidden_act': 'quick_gelu'}, 'text_config': {'vocab_size': 32064, 'max_position_embeddings': 4096, 'hidden_size': 4096, 'intermediate_size': 11008, 'num_hidden_layers': 32, 'num_attention_heads': 32, 'num_key_value_heads': 32, 'hidden_act': 'silu', 'initializer_range': 0.02, 'rms_norm_eps': 1e-05, 'pretraining_tp': 1, 'use_cache': True, 'rope_theta': 10000.0, 'rope_scaling': None, 'attention_bias': False, 'attention_dropout': 0.0, 'return_dict': True, 'output_hidden_states': False, 'output_attentions': False, 'torchscript': False, 'torch_dtype': 'float16', 'use_bfloat16': False, 'tf_legacy_loss': False, 'pruned_heads': {}, 'tie_word_embeddings': False, 'chunk_size_feed_forward': 0, 'is_encoder_decoder': False, 'is_decoder': False, 'cross_attention_hidden_size': None, 'add_cross_attention': False, 'tie_encoder_decoder': False, 'max_length': 20, 'min_length': 0, 'do_sample': False, 'early_stopping': False, 'num_beams': 1, 'num_beam_groups': 1, 'diversity_penalty': 0.0, 'temperature': 1.0, 'top_k': 50, 'top_p': 1.0, 'typical_p': 1.0, 'repetition_penalty': 1.0, 'length_penalty': 1.0, 'no_repeat_ngram_size': 0, 'encoder_no_repeat_ngram_size': 0, 'bad_words_ids': None, 'num_return_sequences': 1, 'output_scores': False, 'return_dict_in_generate': False, 'forced_bos_token_id': None, 'forced_eos_token_id': None, 'remove_invalid_values': False, 'exponential_decay_length_penalty': None, 'suppress_tokens': None, 'begin_suppress_tokens': None, 'architectures': ['LlamaForCausalLM'], 'finetuning_task': None, 'id2label': {0: 'LABEL_0', 1: 'LABEL_1'}, 'label2id': {'LABEL_0': 0, 'LABEL_1': 1}, 'tokenizer_class': None, 'prefix': None, 'bos_token_id': 1, 'pad_token_id': None, 'eos_token_id': 2, 'sep_token_id': None, 'decoder_start_token_id': None, 'task_specific_params': None, 'problem_type': None, '_name_or_path': 'lmsys/vicuna-7b-v1.5', 'model_type': 'llama'}, 'return_dict': True, 'output_hidden_states': False, 'output_attentions': False, 'torchscript': False, 'torch_dtype': 'bfloat16', 'use_bfloat16': False, 'tf_legacy_loss': False, 'pruned_heads': {}, 'tie_word_embeddings': False, 'chunk_size_feed_forward': 0, 'is_encoder_decoder': False, 'is_decoder': False, 'cross_attention_hidden_size': None, 'add_cross_attention': False, 'tie_encoder_decoder': False, 'max_length': 20, 'min_length': 0, 'do_sample': False, 'early_stopping': False, 'num_beams': 1, 'num_beam_groups': 1, 'diversity_penalty': 0.0, 'temperature': 1.0, 'top_k': 50, 'top_p': 1.0, 'typical_p': 1.0, 'repetition_penalty': 1.0, 'length_penalty': 1.0, 'no_repeat_ngram_size': 0, 'encoder_no_repeat_ngram_size': 0, 'bad_words_ids': None, 'num_return_sequences': 1, 'output_scores': False, 'return_dict_in_generate': False, 'forced_bos_token_id': None, 'forced_eos_token_id': None, 'remove_invalid_values': False, 'exponential_decay_length_penalty': None, 'suppress_tokens': None, 'begin_suppress_tokens': None, 'architectures': ['LlavaForConditionalGeneration'], 'finetuning_task': None, 'id2label': {0: 'LABEL_0', 1: 'LABEL_1'}, 'label2id': {'LABEL_0': 0, 'LABEL_1': 1}, 'tokenizer_class': None, 'prefix': None, 'bos_token_id': None, 'pad_token_id': 32001, 'eos_token_id': None, 'sep_token_id': None, 'decoder_start_token_id': None, 'task_specific_params': None, 'problem_type': None, '_name_or_path': 'llava-hf/llava-1.5-7b-hf', 'transformers_version': '4.39.3', 'model_type': 'llava', 'quantization_config': {'quant_method': 'QuantizationMethod.BITS_AND_BYTES', '_load_in_8bit': False, '_load_in_4bit': True, 'llm_int8_threshold': 6.0, 'llm_int8_skip_modules': None, 'llm_int8_enable_fp32_cpu_offload': False, 'llm_int8_has_fp16_weight': False, 'bnb_4bit_quant_type': 'nf4', 'bnb_4bit_use_double_quant': True, 'bnb_4bit_compute_dtype': 'bfloat16', 'bnb_4bit_quant_storage': 'uint8', 'load_in_4bit': True, 'load_in_8bit': False}, 'output_dir': './', 'overwrite_output_dir': False, 'do_train': False, 'do_eval': False, 'do_predict': False, 'evaluation_strategy': 'no', 'prediction_loss_only': False, 'per_device_train_batch_size': 2, 'per_device_eval_batch_size': 8, 'per_gpu_train_batch_size': None, 'per_gpu_eval_batch_size': None, 'gradient_accumulation_steps': 4, 'eval_accumulation_steps': None, 'eval_delay': 0, 'learning_rate': 0.0002, 'weight_decay': 0.0, 'adam_beta1': 0.9, 'adam_beta2': 0.999, 'adam_epsilon': 1e-08, 'max_grad_norm': 1.0, 'num_train_epochs': 1, 'max_steps': -1, 'lr_scheduler_type': 'constant', 'lr_scheduler_kwargs': {}, 'warmup_ratio': 0.0, 'warmup_steps': 0, 'log_level': 'passive', 'log_level_replica': 'warning', 'log_on_each_node': True, 'logging_dir': './runs/May16_07-00-48_cc8faff11463', 'logging_strategy': 'steps', 'logging_first_step': False, 'logging_steps': 1, 'logging_nan_inf_filter': True, 'save_strategy': 'epoch', 'save_steps': 500, 'save_total_limit': 3, 'save_safetensors': True, 'save_on_each_node': False, 'save_only_model': False, 'no_cuda': False, 'use_cpu': False, 'use_mps_device': False, 'seed': 42, 'data_seed': None, 'jit_mode_eval': False, 'use_ipex': False, 'bf16': False, 'fp16': True, 'fp16_opt_level': 'O1', 'half_precision_backend': 'auto', 'bf16_full_eval': False, 'fp16_full_eval': False, 'tf32': None, 'local_rank': 0, 'ddp_backend': None, 'tpu_num_cores': None, 'tpu_metrics_debug': False, 'debug': [], 'dataloader_drop_last': False, 'eval_steps': None, 'dataloader_num_workers': 0, 'dataloader_prefetch_factor': None, 'past_index': -1, 'run_name': './', 'disable_tqdm': False, 'remove_unused_columns': False, 'label_names': None, 'load_best_model_at_end': False, 'metric_for_best_model': None, 'greater_is_better': None, 'ignore_data_skip': False, 'fsdp': [], 'fsdp_min_num_params': 0, 'fsdp_config': {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}, 'fsdp_transformer_layer_cls_to_wrap': None, 'accelerator_config': {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True}, 'deepspeed': None, 'label_smoothing_factor': 0.0, 'optim': 'adamw_bnb_8bit', 'optim_args': None, 'adafactor': False, 'group_by_length': False, 'length_column_name': 'length', 'report_to': ['tensorboard', 'wandb'], 'ddp_find_unused_parameters': None, 'ddp_bucket_cap_mb': None, 'ddp_broadcast_buffers': None, 'dataloader_pin_memory': True, 'dataloader_persistent_workers': False, 'skip_memory_metrics': True, 'use_legacy_prediction_loop': False, 'push_to_hub': False, 'resume_from_checkpoint': None, 'hub_model_id': None, 'hub_strategy': 'every_save', 'hub_token': '<HUB_TOKEN>', 'hub_private_repo': False, 'hub_always_push': False, 'gradient_checkpointing': False, 'gradient_checkpointing_kwargs': None, 'include_inputs_for_metrics': False, 'fp16_backend': 'auto', 'push_to_hub_model_id': None, 'push_to_hub_organization': None, 'push_to_hub_token': '<PUSH_TO_HUB_TOKEN>', 'mp_parameters': '', 'auto_find_batch_size': False, 'full_determinism': False, 'torchdynamo': None, 'ray_scope': 'last', 'ddp_timeout': 1800, 'torch_compile': False, 'torch_compile_backend': None, 'torch_compile_mode': None, 'dispatch_batches': None, 'split_batches': None, 'include_tokens_per_second': False, 'include_num_input_tokens_seen': False, 'neftune_noise_alpha': None, 'optim_target_modules': None}
+ 2024-05-16 07:16:16,067 INFO MainThread:34 [jupyter.py:save_ipynb():373] not saving jupyter notebook
+ 2024-05-16 07:16:16,068 INFO MainThread:34 [wandb_init.py:_pause_backend():432] pausing backend
+ 2024-05-16 07:16:16,075 INFO MainThread:34 [wandb_init.py:_resume_backend():437] resuming backend
+ 2024-05-16 07:16:18,184 INFO MainThread:34 [jupyter.py:save_ipynb():373] not saving jupyter notebook
+ 2024-05-16 07:16:18,184 INFO MainThread:34 [wandb_init.py:_pause_backend():432] pausing backend
+ 2024-05-16 07:18:56,636 INFO MainThread:34 [wandb_init.py:_resume_backend():437] resuming backend
+ 2024-05-16 07:18:56,684 INFO MainThread:34 [jupyter.py:save_ipynb():373] not saving jupyter notebook
+ 2024-05-16 07:18:56,684 INFO MainThread:34 [wandb_init.py:_pause_backend():432] pausing backend
+ 2024-05-16 07:19:43,460 INFO MainThread:34 [wandb_init.py:_resume_backend():437] resuming backend
+ 2024-05-16 07:19:43,507 INFO MainThread:34 [jupyter.py:save_ipynb():373] not saving jupyter notebook
+ 2024-05-16 07:19:43,507 INFO MainThread:34 [wandb_init.py:_pause_backend():432] pausing backend
+ 2024-05-16 07:19:54,270 INFO MainThread:34 [wandb_init.py:_resume_backend():437] resuming backend
+ 2024-05-16 07:19:54,320 INFO MainThread:34 [jupyter.py:save_ipynb():373] not saving jupyter notebook
+ 2024-05-16 07:19:54,320 INFO MainThread:34 [wandb_init.py:_pause_backend():432] pausing backend
+ 2024-05-16 07:20:06,268 INFO MainThread:34 [wandb_init.py:_resume_backend():437] resuming backend
+ 2024-05-16 07:20:06,319 INFO MainThread:34 [jupyter.py:save_ipynb():373] not saving jupyter notebook
+ 2024-05-16 07:20:06,319 INFO MainThread:34 [wandb_init.py:_pause_backend():432] pausing backend
+ 2024-05-16 07:20:19,142 INFO MainThread:34 [wandb_init.py:_resume_backend():437] resuming backend
wandb/run-20240516_070049-8ecq2cdf/run-8ecq2cdf.wandb ADDED
Binary file (51.6 kB). View file