Upload 6 files
- .gitattributes +3 -0
- README.md +66 -3
- assets/fc.png +3 -0
- assets/prompts.py +46 -0
- test.jsonl +3 -0
- train.jsonl +3 -0
- valid.jsonl +3 -0
.gitattributes
CHANGED
@@ -57,3 +57,6 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 # Video files - compressed
 *.mp4 filter=lfs diff=lfs merge=lfs -text
 *.webm filter=lfs diff=lfs merge=lfs -text
+test.jsonl filter=lfs diff=lfs merge=lfs -text
+train.jsonl filter=lfs diff=lfs merge=lfs -text
+valid.jsonl filter=lfs diff=lfs merge=lfs -text
README.md
CHANGED
@@ -1,3 +1,66 @@
# Java Method to Test ChatML

This dataset is based on the [methods2test](https://github.com/microsoft/methods2test) dataset from Microsoft. It follows the ChatML template format: `[{'role': '', 'content': ''}, {...}]`.
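
For illustration, a single record in that format looks roughly like this (an abbreviated sketch built from the direct-request template in `assets/prompts.py`; real records embed complete Java methods and their JUnit tests):

```python
# Abbreviated sketch of one ChatML-style record; "..." marks truncation,
# real records carry full Java methods and generated JUnit tests.
record = [
    {"role": "system", "content": "You are an expert in Java programming and testing."},
    {"role": "user", "content": "Could you write a JUnit test for this Java code? ..."},
    {"role": "assistant", "content": "I'd be happy to write a JUnit test for your Java code. ..."},
]
```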

Originally, methods2test contains only Java methods at different levels of granularity along with their corresponding test cases. The different focal method segmentations are illustrated here:

![Focal methods segmentation](./assets/fc.png)

To simulate a conversation between a Java developer and an AI assistant, I introduce two key parameters:

- The prompt template
- The input code choice

## Prompt Template

The prompts simulate a user asking an AI assistant to generate a test case. For this, I used six different prompt templates, replicating real-life scenarios in which a user might ask an assistant to generate test cases:

1. Direct request (the default)
2. Educational question
3. Code review
4. Bug fixing
5. Debugging (similar to bug fixing, but framed around understanding the code's behavior)
6. TDD approach (Test-Driven Development)

You can find these prompts in the [prompts](./assets/prompts.py) file.
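
As a minimal sketch, one of the six templates can be drawn at random per example (I assume a uniform choice here; the original generation script may weight them differently):

```python
import random

from assets.prompts import PromptTemplate  # import path assumes the repo root as cwd

# Assumption: templates are drawn uniformly at random per example.
template = random.choice(list(PromptTemplate))
print(template.value)  # e.g. "code_review"
```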

## Input Code

The input code is selected from one of the six possibilities described in the "Focal methods segmentation" figure. To simulate real-life scenarios, I randomly select among these possibilities using the following distribution:

| Split | fm  | +fc | +c  | +m  | +f  |
|-------|-----|-----|-----|-----|-----|
| Train | 10% | 50% | 0%  | 10% | 30% |
| Test  | 0%  | 50% | 0%  | 0%  | 50% |

There is a bit more diversity in the training data to prevent underfitting during LLM fine-tuning. If you prefer, you can merge the splits and shuffle the data.
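
The sampling itself could be as simple as the following sketch (my reconstruction from the table above, not the original script):

```python
import random

# Level names and per-split weights come from the table above;
# the function itself is my illustrative reconstruction.
LEVELS = ["fm", "+fc", "+c", "+m", "+f"]
WEIGHTS = {
    "train": [0.10, 0.50, 0.00, 0.10, 0.30],
    "test":  [0.00, 0.50, 0.00, 0.00, 0.50],
}

def sample_level(split: str) -> str:
    """Pick an input-code granularity according to the split's distribution."""
    return random.choices(LEVELS, weights=WEIGHTS[split], k=1)[0]
```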

## Summary

This dataset is a great starting point for building a powerful Java coding assistant or for complementing a larger knowledge base to train more advanced foundational models.

If you have any suggestions or spot any bugs, feel free to open a discussion! 😉 If you find it useful, don't hesitate to give it a like. Peace! ✌️

## Usage and Citation

To use it with [🤗datasets](https://huggingface.co/docs/datasets/):

```python
from datasets import load_dataset

ds = load_dataset("random-long-int/Java_method2test_chatml", split="train")
```
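
To get a quick look at the structure, you can inspect the columns and the first record:

```python
# Quick sanity check on the loaded split.
print(ds.column_names)  # available columns
print(ds[0])            # first conversation record
```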

To cite it:

```bibtex
@misc{letoquart2025java,
  author       = {Pierre-Louis Létoquart},
  title        = {Dataset: Java Method to Test ChatML},
  year         = {2025},
  publisher    = {Hugging Face},
  journal      = {Hugging Face Hub},
  howpublished = {\url{https://huggingface.co/datasets/random-long-int/Java_method2test_chatml}},
}
```
assets/fc.png
ADDED
Git LFS Details
assets/prompts.py
ADDED
@@ -0,0 +1,46 @@
from enum import Enum


class PromptTemplate(Enum):
    """The six conversation scenarios used to frame each method-to-test pair."""

    DIRECT_REQUEST = "direct_request"
    EDUCATIONAL = "educational"
    CODE_REVIEW = "code_review"
    BUG_FIXING = "bug_fixing"
    DEBUGGING = "debugging"
    TDD_APPROACH = "tdd_approach"


def get_system_content(template: PromptTemplate) -> str:
    """Return the system message for the given template."""
    system_contents = {
        PromptTemplate.DIRECT_REQUEST: "You are an expert in Java programming and testing.",
        PromptTemplate.EDUCATIONAL: "You are a helpful Java tutor.",
        PromptTemplate.CODE_REVIEW: "You are a senior Java developer helping with code reviews.",
        PromptTemplate.BUG_FIXING: "You are an AI assistant specializing in Java development.",
        PromptTemplate.DEBUGGING: "You are a Java debugging expert.",
        PromptTemplate.TDD_APPROACH: "You are an AI assistant focusing on Test-Driven Development.",
    }
    return system_contents[template]


def get_user_content(template: PromptTemplate, input_code: str) -> str:
    """Return the user message for the given template, embedding the focal method."""
    user_contents = {
        PromptTemplate.DIRECT_REQUEST: f"Could you write a JUnit test for this Java code?\n\n```java\n{input_code}\n```",
        PromptTemplate.EDUCATIONAL: f"I'm learning about unit testing in Java. I have this code and I need to write a test for it, but I'm not sure where to start:\n\n```java\n{input_code}\n```",
        PromptTemplate.CODE_REVIEW: f"I'm doing a code review for a colleague and noticed there aren't any tests for this code. Could you help me create a comprehensive unit test?\n\n```java\n{input_code}\n```",
        PromptTemplate.BUG_FIXING: f"I think there might be a bug in this Java code. Can you write a test that would help me verify if it's working correctly?\n\n```java\n{input_code}\n```",
        PromptTemplate.DEBUGGING: f"I'm having trouble debugging this Java code. Could you create a test that would help me understand what's happening?\n\n```java\n{input_code}\n```",
        PromptTemplate.TDD_APPROACH: f"I'm practicing Test-Driven Development. I've written this Java code after writing tests first. Can you show me what kind of tests I should have written?\n\n```java\n{input_code}\n```",
    }
    return user_contents[template]


def get_assistant_content(template: PromptTemplate, output_test: str) -> str:
    """Return the assistant message for the given template, embedding the test case."""
    assistant_contents = {
        PromptTemplate.DIRECT_REQUEST: f"I'd be happy to write a JUnit test for your Java code. Here's what I came up with:\n\n```java\n{output_test}\n```\n\nThis test covers the main functionality of your code. Let me know if you need any clarification or have questions about how it works!",
        PromptTemplate.EDUCATIONAL: f"Unit testing is an important skill! For your code, I'd create a test like this:\n\n```java\n{output_test}\n```\n\nLet me explain the key parts of this test:\n1. We create test methods with the @Test annotation\n2. We setup the necessary objects and inputs\n3. We call the methods we want to test\n4. We use assertions to verify the expected behavior",
        PromptTemplate.CODE_REVIEW: f"You're right to ensure proper test coverage. Here's a comprehensive unit test for the code:\n\n```java\n{output_test}\n```\n\nThis test should verify the key functionality and handle important edge cases. You might also want to suggest that your colleague consider additional test scenarios depending on the requirements.",
        PromptTemplate.BUG_FIXING: f"To check if your code has a bug, we can write a test that verifies its expected behavior. Here's a test that should help:\n\n```java\n{output_test}\n```\n\nThis test covers the main functionality. If it passes, your code is likely working as intended. If it fails, the assertion messages should give you clues about where the bug might be.",
        PromptTemplate.DEBUGGING: f"A well-written test can definitely help with debugging. Here's a test that should help isolate and understand the behavior:\n\n```java\n{output_test}\n```\n\nWhen you run this test, pay attention to which assertions pass and which fail. This will help you narrow down where the issue might be occurring in your code.",
        PromptTemplate.TDD_APPROACH: f"In Test-Driven Development, you'd typically write tests like these before implementing the code:\n\n```java\n{output_test}\n```\n\nThese tests follow the TDD approach by:\n1. Testing each requirement separately\n2. Starting with the simplest cases\n3. Gradually adding more complex scenarios\n4. Verifying both correct behavior and edge cases",
    }
    return assistant_contents[template]
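
For completeness, here is how the three helpers could be combined into one ChatML record (a sketch; `build_conversation` is my illustrative name, not part of the file above):

```python
from assets.prompts import (  # import path assumes the repo root as cwd
    PromptTemplate,
    get_assistant_content,
    get_system_content,
    get_user_content,
)


def build_conversation(template: PromptTemplate, input_code: str, output_test: str) -> list:
    """Illustrative helper (not in prompts.py): build one ChatML record."""
    return [
        {"role": "system", "content": get_system_content(template)},
        {"role": "user", "content": get_user_content(template, input_code)},
        {"role": "assistant", "content": get_assistant_content(template, output_test)},
    ]
```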
test.jsonl
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:7ff5dcf3190ae02309a31133ba0edc6e65890d0b3061d75097d7210f0f5c9a90
size 360958254
train.jsonl
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:7cf191738c5db01616225978c83d6062391fc93f703bea356afa1d9d4ae65882
size 1180530107
valid.jsonl
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:f5b79e3771051cc173b8cda4c5654c4da4aa86258025b633ff7d9bac2933b701
size 131637608