Update app.py

app.py CHANGED
@@ -98,6 +98,25 @@ dataset[0]["audio"]
# **4**. Create a function to preprocess the audio `array` with the feature extractor, and truncate and pad the sequences into tidy rectangular tensors. The most important thing to remember is to call the audio `array` in the feature extractor since the `array` - the actual speech signal - is the model input.
#
# Once you have a preprocessing function, use the [map()](https://huggingface.co/docs/datasets/main/en/package_reference/main_classes#datasets.Dataset.map) function to speed up processing by applying the function to batches of examples in the dataset.
+import torch
+
+device = torch.device("cuda" if torch.cuda.is_available() else "cpu")  # Use GPU if available
+model.to(device)  # Move model to the device
+
+optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)  # Example optimizer
+
+for epoch in range(num_epochs):  # Replace num_epochs
+    for batch in dataloader:
+        input_values = batch["input_values"].to(device)
+        labels = batch["labels"].to(device)
+
+        optimizer.zero_grad()
+        outputs = model(input_values, labels=labels)
+        loss = outputs.loss
+        loss.backward()
+        optimizer.step()
+
+    print(f"Epoch: {epoch+1}, Loss: {loss.item()}")  # Print loss for monitoring

# %%
def preprocess_function(examples):
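The hunk is cut off at the `preprocess_function` definition, so its body is not shown above. For context, a preprocessing function of this kind usually follows the pattern sketched below; `feature_extractor`, the 16000-sample `max_length`, the `padding="max_length"` choice, and the `encoded_dataset` name are assumptions for illustration, not code taken from this Space's app.py.

# Hedged sketch: assumes `feature_extractor` and `dataset` were created earlier
# in app.py; the max_length and padding settings are illustrative.
def preprocess_function(examples):
    # Pass the raw speech signals (the "array" field) to the feature extractor.
    audio_arrays = [x["array"] for x in examples["audio"]]
    inputs = feature_extractor(
        audio_arrays,
        sampling_rate=feature_extractor.sampling_rate,
        max_length=16000,        # truncate/pad to a fixed length -> rectangular tensors
        truncation=True,
        padding="max_length",
    )
    return inputs

# Apply the function to batches of examples to speed up preprocessing.
encoded_dataset = dataset.map(preprocess_function, batched=True)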
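The training loop added in this commit refers to `model`, `dataloader`, and `num_epochs`, which the hunk itself does not define; they are presumably set up elsewhere in app.py. A minimal sketch of that setup, assuming the `encoded_dataset` from the previous sketch and a classification model loaded earlier in the script, could look like the following (the batch size, epoch count, and `label` column name are illustrative assumptions):

# Hedged sketch of the setup the added loop relies on; `model` is assumed to be
# an audio-classification model already loaded earlier in app.py.
from torch.utils.data import DataLoader

num_epochs = 3  # illustrative value; replace with the desired number of epochs

# Assumes the dataset has a "label" column; rename it to "labels" to match the
# key the training loop reads and the keyword the model's forward() expects.
train_dataset = encoded_dataset.rename_column("label", "labels")
train_dataset.set_format(type="torch", columns=["input_values", "labels"])

# Default collation is enough here because preprocessing already padded and
# truncated every sequence to the same length.
dataloader = DataLoader(train_dataset, batch_size=8, shuffle=True)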