---
license: apache-2.0
---

# Machine Learning for Two-Sample Testing under Right-Censored Data: A Simulation Study
- [Petr PHILONENKO](https://orcid.org/0000-0002-6295-4470), Ph.D. in Computer Science;
- [Sergey POSTOVALOV](https://orcid.org/0000-0003-3718-1936), D.Sc. in Computer Science.

The paper can be downloaded [here](https://arxiv.org/abs/2409.08201).

# About
This dataset is a supplement to the [GitHub repository](https://github.com/pfilonenko/ML_for_TwoSampleTesting) and the paper, which address the two-sample problem under right-censored observations using Machine Learning.
The problem is formulated as H0: S1(t)=S2(t) versus H1: S1(t)≠S2(t), where S1(t) and S2(t) are the survival functions of samples X1 and X2.

This dataset contains synthetic data simulated with the Monte Carlo method and Inverse Transform Sampling.
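
For intuition only, the minimal Python sketch below shows how a right-censored sample can be generated with inverse transform sampling: event times are drawn by inverting a survival distribution, censoring times come from a second distribution, and each observation keeps the smaller of the two together with a censoring indicator. This is not the paper's simulation code (the actual C++ code is shown in the [Simulation](#simulation) section); the exponential distributions and rates are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_right_censored(n, event_rate=1.0, censor_rate=0.5):
    """Illustrative inverse transform sampling with exponential event/censoring times."""
    u = rng.uniform(size=n)
    v = rng.uniform(size=n)
    event_time = -np.log(1.0 - u) / event_rate      # F^{-1}(u) for Exp(event_rate)
    censor_time = -np.log(1.0 - v) / censor_rate    # F^{-1}(v) for Exp(censor_rate)
    time = np.minimum(event_time, censor_time)      # observed time
    delta = (event_time <= censor_time).astype(int) # 1 = event observed, 0 = right-censored
    return time, delta

# Two samples simulated under H0 (same survival function)
t1, d1 = simulate_right_censored(100)
t2, d2 = simulate_right_censored(100)
```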

**Contents**
- [About](#about)
- [Citing](#citing)
- [Repository](#repository)
- [Fields](#fields)
- [Simulation](#simulation)
    - [main.cpp](#maincpp)
    - [simulation_for_machine_learning.h](#simulation_for_machine_learningh)

# Citing
~~~
@misc {petr_philonenko_2024,
	author       = { {Petr Philonenko} },
	title        = { ML_for_TwoSampleTesting (Revision a4ae672) },
	year         = 2024,
	url          = { https://huggingface.co/datasets/pfilonenko/ML_for_TwoSampleTesting },
	doi          = { 10.57967/hf/2978 },
	publisher    = { Hugging Face }
}
~~~

# Repository

The files of this dataset have the following structure:
~~~
data
├── 1_raw
│   └── two_sample_problem_dataset.tsv.gz    (121,986,000 rows)
├── 2_samples
│   ├── sample_train.tsv.gz                   (24,786,000 rows)
│   └── sample_simulation.tsv.gz              (97,200,000 rows)
└── 3_dataset_with_ML_pred
    └── dataset_with_ML_pred.tsv.gz           (97,200,000 rows)
~~~

- **two_sample_problem_dataset.tsv.gz** contains the raw simulated data. In the [GitHub repository](https://github.com/pfilonenko/ML_for_TwoSampleTesting), this file must be located in _ML_for_TwoSampleTesting/proposed_ml_for_two_sample_testing/data/1_raw/_
- **sample_train.tsv.gz** and **sample_simulation.tsv.gz** are the train and test samples split from **two_sample_problem_dataset.tsv.gz**. In the [GitHub repository](https://github.com/pfilonenko/ML_for_TwoSampleTesting), these files must be located in _ML_for_TwoSampleTesting/proposed_ml_for_two_sample_testing/data/2_samples/_
- **dataset_with_ML_pred.tsv.gz** is the test sample supplemented by the predictions of the proposed ML-methods. In the [GitHub repository](https://github.com/pfilonenko/ML_for_TwoSampleTesting), this file must be located in _ML_for_TwoSampleTesting/proposed_ml_for_two_sample_testing/data/3_dataset_with_ML_pred/_
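
As a minimal loading sketch (not part of the official repository), the gzipped files can be read directly with pandas. The tab separator is an assumption based on the `.tsv` extension; adjust `sep` if the files use a different delimiter (the simulation code writes semicolon-separated rows).

```python
import pandas as pd

# Assumption: tab-separated values; switch to sep=";" if needed
train = pd.read_csv("data/2_samples/sample_train.tsv.gz", sep="\t", compression="gzip")
test = pd.read_csv("data/2_samples/sample_simulation.tsv.gz", sep="\t", compression="gzip")

print(train.shape, test.shape)
print(train.columns.tolist())
```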

# Fields
These files contain the following fields:

1) PARAMETERS OF SAMPLE SIMULATION
- **iter** is the iteration number of the Monte Carlo replication (37,650 in total);
- **sample** is the sample type (train, val, test), used to split the dataset into train-validation-test parts for ML-model training;
- **H0_H1** is the true hypothesis: if **H0**, samples X1 and X2 were simulated under S1(t)=S2(t); if **H1**, they were simulated under S1(t)≠S2(t);
- **Hi** is the alternative (H01-H09, H11-H19, or H21-H29) defining the competing hypotheses S1(t) and S2(t); a detailed description of these alternatives can be found in the paper;
- **n1** is the size of sample 1;
- **n2** is the size of sample 2;
- **perc** is the specified (expected) censoring rate for samples 1 and 2;
- **real_perc1** is the actual censoring rate of sample 1;
- **real_perc2** is the actual censoring rate of sample 2;

2) STATISTICS OF CLASSICAL TWO-SAMPLE TESTS
- **Peto_test** is a statistic of the Peto and Peto's Generalized Wilcoxon test (computed on the two samples simulated under the parameters described above);
- **Gehan_test** is a statistic of the Gehan’s Generalized Wilcoxon test;
- **logrank_test** is a statistic of the logrank test;
- **CoxMantel_test** is a statistic of the Cox-Mantel test;
- **BN_GPH_test** is a statistic of the Bagdonavičius-Nikulin test (Generalized PH model);
- **BN_MCE_test** is a statistic of the Bagdonavičius-Nikulin test (Multiple Crossing-Effect model);
- **BN_SCE_test** is a statistic of the Bagdonavičius-Nikulin test (Single Crossing-Effect model);
- **Q_test** is a statistic of the Q-test;
- **MAX_Value_test** is a statistic of the Maximum Value test;
- **MIN3_test** is a statistic of the MIN3 test;
- **WLg_logrank_test** is a statistic of the Weighted Logrank test (weighted function: 'logrank');
- **WLg_TaroneWare_test** is a statistic of the Weighted Logrank test (weighted function: 'Tarone-Ware');
- **WLg_Breslow_test** is a statistic of the Weighted Logrank test (weighted function: 'Breslow');
- **WLg_PetoPrentice_test** is a statistic of the Weighted Logrank test (weighted function: 'Peto-Prentice');
- **WLg_Prentice_test** is a statistic of the Weighted Logrank test (weighted function: 'Prentice');
- **WKM_test** is a statistic of the Weighted Kaplan-Meier test;

3) STATISTICS OF THE PROPOSED ML-METHODS FOR TWO-SAMPLE PROBLEM
- **CatBoost_test** is a statistic of the proposed ML-method based on the CatBoost framework;
- **XGBoost_test** is a statistic of the proposed ML-method based on the XGBoost framework;
- **LightAutoML_test** is a statistic of the proposed ML-method based on the LightAutoML (LAMA) framework;
- **SKLEARN_RF_test** is a statistic of the proposed ML-method based on Random Forest (implemented in sklearn);
- **SKLEARN_LogReg_test** is a statistic of the proposed ML-method based on Logistic Regression (implemented in sklearn);
- **SKLEARN_GB_test** is a statistic of the proposed ML-method based on Gradient Boosting Machine (implemented in sklearn).
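
To illustrate how these fields can be combined, the sketch below estimates the empirical power of a test: the critical value is taken as a quantile of the statistic over rows simulated under H0, and power is the rejection rate over rows simulated under H1. The column names follow the field list above; the right-tailed rejection region, the 0.05 level, and the tab separator are simplifying assumptions for illustration (some of the listed tests are two-sided in the paper).

```python
import pandas as pd

# Assumption: tab-separated values; switch to sep=";" if needed
df = pd.read_csv("data/3_dataset_with_ML_pred/dataset_with_ML_pred.tsv.gz", sep="\t")

def empirical_power(df, stat_col, alpha=0.05):
    """Rough power estimate assuming a right-tailed rejection region (illustration only)."""
    h0 = df.loc[df["H0_H1"] == "H0", stat_col]
    h1 = df.loc[df["H0_H1"] == "H1", stat_col]
    critical_value = h0.quantile(1.0 - alpha)   # empirical critical value under H0
    return (h1 > critical_value).mean()         # rejection rate under H1

for col in ["logrank_test", "CatBoost_test"]:
    print(col, empirical_power(df, col))
```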

# Simulation

For this dataset, the full source code (C++) is available [here](https://github.com/pfilonenko/ML_for_TwoSampleTesting/tree/main/dataset/simulation). 
It makes it possible to reproduce and extend the Monte Carlo simulation. Below, we present two fragments of the source code (**main.cpp** and **simulation_for_machine_learning.h**) that illustrate the main steps of the simulation process.
### main.cpp
```C++
#include"simulation_for_machine_learning.h"

// Select two-sample tests
vector<HomogeneityTest*> AllTests()
{
	vector<HomogeneityTest*> D;
	
	// ---- Classical Two-Sample tests for Uncensored Case ----
	//D.push_back( new HT_AndersonDarlingPetitt );
	//D.push_back( new HT_KolmogorovSmirnovTest );
	//D.push_back( new HT_LehmannRosenblatt );
	
	// ---- Two-Sample tests for Right-Censored Case ----
	D.push_back( new HT_Peto );
	D.push_back( new HT_Gehan );
	D.push_back( new HT_Logrank );
	
	D.push_back( new HT_BagdonaviciusNikulinGeneralizedCox );
	D.push_back( new HT_BagdonaviciusNikulinMultiple );
	D.push_back( new HT_BagdonaviciusNikulinSingle );

	D.push_back( new HT_QTest );			//Q-test
	D.push_back( new HT_MAX );				//Maximum Value test
	D.push_back( new HT_SynthesisTest );	//MIN3 test
	
	D.push_back( new HT_WeightedLogrank("logrank") );
	D.push_back( new HT_WeightedLogrank("Tarone-Ware") );
	D.push_back( new HT_WeightedLogrank("Breslow") );
	D.push_back( new HT_WeightedLogrank("Peto-Prentice") );
	D.push_back( new HT_WeightedLogrank("Prentice") );
	
	D.push_back( new HT_WeightedKaplanMeyer );
		
	return D;
}

// Example of two-sample testing using this code
void EXAMPLE_1(vector<HomogeneityTest*> &D)
{
	// load the samples
	Sample T1(".//samples//1Chemotherapy.txt");
	Sample T2(".//samples//2Radiotherapy.txt");

	// two-sample testing through selected tests
	for(int j=0; j<D.size(); j++)
	{
		char test_name[512];
		D[j]->TitleTest(test_name);
		

		double Sn = D[j]->CalculateStatistic(T1, T2);
		double pvalue = D[j]->p_value(T1, T2, 27000);  // 27,000 replications according to the Kolmogorov theorem => simulation error MAX||G(S|H0)-Gn(S|H0)|| <= 0.01

		printf("%s\n", test_name);
		printf("\t Sn: %lf\n", Sn);
		printf("\t pv: %lf\n", pvalue);
		printf("--------------------------------\n");
	}
}

// Example of the dataset simulation for the proposed ML-method
void EXAMPLE_2(vector<HomogeneityTest*> &D)
{
	// Run dataset (train or test sample) simulation (results in ".//to_machine_learning_2024//")
	simulation_for_machine_learning sm(D);
}

// init point
int main()
{
	// Set the number of threads
	int k = omp_get_max_threads() - 1;
	omp_set_num_threads( k );

	// Select two-sample tests
	auto D = AllTests();
	
	// Example of two-sample testing using this code
	EXAMPLE_1(D);

	// Example of the dataset simulation for the proposed ML-method
	EXAMPLE_2(D);

	// Freeing memory
	ClearMemory(D);
	
	printf("The mission is completed.\n");
	return 0;
}
```
### simulation_for_machine_learning.h
```C++
#ifndef simulation_for_machine_learning_H
#define simulation_for_machine_learning_H

#include"HelpFucntions.h"

// Object of the data simulation for training of the proposed ML-method
class simulation_for_machine_learning{
	private:
		// p-value computation using the Test and Test Statistic (Sn)
		double pvalue(double Sn, HomogeneityTest* Test)
		{
			auto f = Test->F( Sn );
			double pv = 0;
			if( Test->TestType() == "right" )       // compare std::string values, not char* pointers
				pv = 1.0 - f;
			else if( Test->TestType() == "left" )
				pv = f;
			else                                    // "double" (two-sided)
				pv = 2.0*min( f, 1.0-f );
			return pv;
		}

		// Process of simulation
		void Simulation(int iter, vector<HomogeneityTest*> &D, int rank, mt19937boost Gw)
		{
			// prepare the output file name
			char file_to_save[512];
			sprintf(file_to_save,".//to_machine_learning_2024//to_machine_learning[rank=%d].csv", rank);

			// if it is the first iteration, the table header must be written
			if( iter == 0 )
			{
				FILE *ou = fopen(file_to_save,"w");
				fprintf(ou, "num;H0/H1;model;n1;n2;perc;real_perc1;real_perc2;");
				for(int i=0; i<D.size(); i++)
				{
					char title_of_test[512];
					D[i]->TitleTest(title_of_test);
					fprintf(ou, "Sn [%s];p-value [%s];", title_of_test, title_of_test);
				}
				fprintf(ou, "\n");
				fclose(ou);
			}

			// Getting list of the Alternative Hypotheses (H01 - H27)
			vector<int> H;
			int l = 1;
			for(int i=100; i<940; i+=100)			// Groups of Alternative Hypotheses (I, II, III, IV, V, VI, VII, VIII, IX)
			{
				for(int j=10; j<40; j+=10)			// Alternative Hypotheses in the Group (e.g., H01, H02, H03 into the I and so on)
					//for(int l=1; l<4; l++)		// various families of distribution of censoring time F^C(t)
						H.push_back( 1000+i+j+l );
			}

			// Sample sizes
			vector<int> sample_sizes;
			sample_sizes.push_back( 20 );	// n1 = n2 = 20
			sample_sizes.push_back( 30 );	// n1 = n2 = 30
			sample_sizes.push_back( 50 );	// n1 = n2 = 50
			sample_sizes.push_back( 75 );	// n1 = n2 = 75
			sample_sizes.push_back( 100 );	// n1 = n2 = 100
			sample_sizes.push_back( 150 );	// n1 = n2 = 150
			sample_sizes.push_back( 200 );	// n1 = n2 = 200
			sample_sizes.push_back( 300 );	// n1 = n2 = 300
			sample_sizes.push_back( 500 );	// n1 = n2 = 500
			sample_sizes.push_back( 1000 );	// n1 = n2 = 1000

			// Simulation (Getting H, Simulation samples, Computation of the test statistics & Save to file)
			for(int i = 0; i<H.size(); i++)
			{
				int Hyp = H[i];
		
				if(rank == 0)
					printf("\tH = %d\n",Hyp);

				for(int per = 0; per<51; per+=10)
				{
					// ---- Getting Hi ----
					AlternativeHypotheses H0_1(Hyp,1,0), H0_2(Hyp,2,0);
					AlternativeHypotheses H1_1(Hyp,1,per), H1_2(Hyp,2,per);

					for(int jj=0; jj<sample_sizes.size(); jj++)
					{
						int n = sample_sizes[jj];

						// ---- Simulation samples ----
						//competing hypothesis H0
						Sample A0(*H0_1.D,n,Gw);
						Sample B0(*H0_1.D,n,Gw);
						if( per > 0 )
						{
							A0.CensoredTypeThird(*H1_1.D,Gw);
							B0.CensoredTypeThird(*H1_1.D,Gw);
						}

						//competing hypothesis H1
						Sample A1(*H0_1.D,n,Gw);
						Sample B1(*H0_2.D,n,Gw);
						if( per > 0 )
						{
							A1.CensoredTypeThird(*H1_1.D,Gw);
							B1.CensoredTypeThird(*H1_2.D,Gw);
						}

						// ---- Computation of the test statistics & Save to file ----
						//Sn and p-value computation under H0
						FILE *ou = fopen(file_to_save, "a");
						auto perc1 = A0.RealCensoredPercent();
						auto perc2 = B0.RealCensoredPercent();
						fprintf(ou,"%d;", iter);
						fprintf(ou,"H0;");
						fprintf(ou,"%d;", Hyp);
						fprintf(ou,"%d;%d;", n,n);
						fprintf(ou,"%d;%lf;%lf", per, perc1, perc2);
						for(int j=0; j<D.size(); j++)
						{
							auto Sn_H0 = D[j]->CalculateStatistic(A0, B0);
							auto pv_H0 = 0.0;	// skip computation here (the p-value is computed later in the ML framework)
							fprintf(ou, ";%lf;0", Sn_H0);
						}
						fprintf(ou, "\n");

						//Sn and p-value computation under H1
						perc1 = A1.RealCensoredPercent();
						perc2 = B1.RealCensoredPercent();
						fprintf(ou,"%d;", iter);
						fprintf(ou,"H1;");
						fprintf(ou,"%d;", Hyp);
						fprintf(ou,"%d;%d;", n,n);
						fprintf(ou,"%d;%lf;%lf", per, perc1, perc2);
						for(int j=0; j<D.size(); j++)
						{
							auto Sn_H1 = D[j]->CalculateStatistic(A1, B1);
							auto pv_H1 = 0.0;  // skip computation here (the p-value is computed later in the ML framework)
							fprintf(ou, ";%lf;0", Sn_H1);
						}
						fprintf(ou, "\n");
						fclose( ou );
					}
				}
			}
		}

	public:
		// Constructor of the class
		simulation_for_machine_learning(vector<HomogeneityTest*> &D)
		{
			int N = 37650;	// number of the Monte-Carlo replications
			#pragma omp parallel for
			for(int k=0; k<N; k++)
			{
				int rank = omp_get_thread_num();
				auto gen = GwMT19937[rank];
		
				if(rank == 0)
					printf("\r%d", k);

				Simulation(k, D, rank, gen);
			}
		}
};

#endif
```