This is a demo of the toolchain. The `qalgo` folder contains the quantum algorithm to be benchmarked, organized into the following subfolders (a sketch of the expected layout follows the list):

- gen_files: The main machinery that, given the QASM files, generates the associated run files, job scripts and scope files (the scripts that scope the run files into the environment of the container)
- qasm_files: The quantum algorithm represented in the QASM instruction set
- qasm_parser: The parser that translates the QASM instructions into the instruction set of the simulation package
- simulation package folder: The folder for the simulation package with the associated run files and job scripts (here `qiskit`, created in step 1 below)
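
For orientation, a sketch of the expected demo layout (illustrative; the repository itself is authoritative):

```
qalgo/
├── gen_files/      # generator for run files, job scripts and scope files
├── qasm_files/     # QASM representation of the quantum algorithm
├── qasm_parser/    # QASM-to-simulation-package translator
└── qiskit/         # simulation package folder (created in step 1 below)
```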

To generate the associated files (in this instance, the qiskit files for the Heisenberg dynamics benchmark):

1. Create a `qiskit` folder in the `qalgo` folder

- `mkdir -v qiskit`

2. Change into the `gen_files` folder and run the generator (see the sketch below)

- `python3 create_dir_struct.py` (the options can be set inside the file)
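
Spelled out as shell commands, step 2 amounts to the following sketch (the comment about options is illustrative; the actual option names are defined inside `create_dir_struct.py`):

```
# From the qalgo folder: enter the generator folder and run it.
cd gen_files
# Set the desired options inside create_dir_struct.py before running
# (e.g. which simulation package to target -- illustrative, see the file).
python3 create_dir_struct.py
```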

3. In the `qiskit` folder the following folders are generated:

- bash_scripts_st_dp: bash scripts that scope the run files into the environment of the container
- data_st_dp: data folder to store the time statistics
- error_st_dp: error folder to store the errors of the job
- output_st_dp: output folder to store the output of the job
- job_scripts_st_dp: job scripts folder that has the jobs to be executed on the cluster (tailored for the PSI cluster)
- run_files_st_dp: translated run files from qasm to qiskit instruction set

4. Run `run_jobs_st_dp.sh`: it pushes all the jobs in the `bash_scripts_st_dp` folder onto the cluster (a minimal sketch of such a submit script follows)
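
A minimal sketch of what such a submit script could look like, assuming a Slurm-based scheduler (`sbatch`); whether the job scripts or the bash scripts are submitted is determined by the actual `run_jobs_st_dp.sh` in the repo, which is authoritative:

```
#!/bin/bash
# Submit every generated job script to the cluster scheduler.
# Assumes Slurm (sbatch); adapt to the scheduler actually in use.
for job in job_scripts_st_dp/*.sh; do
    sbatch "$job"
done
```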

=============================================================================================================================

Note: 

1. The above instructions are tailored for a demo, i.e. a minimal reproduction of the toolchain and its files (for
   the full version, please see the files in the folder `toolchain`)

2. The singularity containers are not included in this repo but are available in the Zenodo repo. They can be
   accessed using the following commands, or as done in the job scripts:

   - `singularity shell qucos_sim.sif`
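
   Alternatively, a generated scope script can be run inside the container non-interactively (illustrative; replace
   the placeholder path with an actual file from `bash_scripts_st_dp`):

```
# Run a generated scope script inside the CPU container without opening
# an interactive shell (the script path below is a placeholder).
singularity exec qucos_sim.sif bash bash_scripts_st_dp/<scope_script>.sh
```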

3. The containerized image provides conda environments for all the packages (see also `bash_script_sample.sh` in the
   Zenodo repo), which are used as follows:

```
   #!/bin/bash -l
   . /app/etc/profile.d/conda.sh
   conda activate sim_package
   python3 /home/<translated_run_file.py>

   # Available conda environments on the CPU singularity image: qucos_sim.sif
   # They can be activated by issuing the commands above, replacing sim_package
   # with the package of choice from the list below:
   # sim_braket
   # sim_cirq
   # sim_hybridq
   # sim_myqlm
   # sim_myqlm_cpp
   # sim_pennylane
   # sim_pennylane_l
   # sim_projectq
   # sim_qibo
   # sim_qibojit
   # sim_qiskit
   # sim_qpanda
   # sim_qsimcirq
   # sim_qulacs
   # sim_svsim

   # Available conda environments on the GPU singularity image: qucos_gpu_sim.sif
   # They can be activated by issuing the commands above, replacing sim_package
   # with the package of choice from the list below:
   # sim_hybridq_gpu
   # sim_pennylane_l_gpu
   # sim_qcgpu_gpu
   # sim_qibojit_gpu
   # sim_qiskit_gpu
   # sim_qpanda_gpu
   # sim_qsimcirq_gpu
   # sim_qulacs_gpu
   # sim_svsim_gpu
```

4. The use of a containerized image means that the packages installed in the container need no additional dependencies
   and can be used straight out of the box. To check the dependencies, issue the command `pip list` once the
   corresponding conda environment is activated (conda environments listed above); an example follows.
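
For example, to inspect the dependencies of the qiskit environment inside the CPU image:

```
# Open an interactive shell in the CPU container ...
singularity shell qucos_sim.sif
# ... then, inside the container shell, activate an environment and list its packages:
. /app/etc/profile.d/conda.sh
conda activate sim_qiskit
pip list
```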