import reflex as rx

p2 = '''
# Steps
### Dataset Selection
We begin with the <a href="https://huggingface.co/datasets/layoric/labeled-multiple-choice-explained" target="_blank">layoric/labeled-multiple-choice-explained</a> dataset, which includes reasoning provided by GPT-3.5-turbo. These reasoning explanations serve as a starting point but may differ from Falcon's reasoning style.

0. <i><a href="https://huggingface.co/derek-thomas/prompt-order-experiment/blob/main/00-poe-generate-falcon-reasoning.ipynb" target="_blank">00-poe-generate-falcon-reasoning.ipynb</a></i>: To align with Falcon, we create a refined dataset: <a href="https://huggingface.co/datasets/derek-thomas/labeled-multiple-choice-explained-falcon-reasoning" target="_blank">derek-thomas/labeled-multiple-choice-explained-falcon-reasoning</a>.
1. <i><a href="https://huggingface.co/derek-thomas/prompt-order-experiment/blob/main/01-poe-dataset-creation.ipynb" target="_blank">01-poe-dataset-creation.ipynb</a></i>: Then we create our prompt experiments.
2. <i><a href="https://huggingface.co/derek-thomas/prompt-order-experiment/blob/main/02-autotrain.ipynb" target="_blank">02-autotrain.ipynb</a></i>: We generate AutoTrain jobs on Spaces to train our models.
3. <i><a href="https://huggingface.co/derek-thomas/prompt-order-experiment/blob/main/03-poe-token-count-exploration.ipynb" target="_blank">03-poe-token-count-exploration.ipynb</a></i>: We do some quick analysis so we can optimize our TGI settings.
4. <i><a href="https://huggingface.co/derek-thomas/prompt-order-experiment/blob/main/04-poe-eval.ipynb" target="_blank">04-poe-eval.ipynb</a></i>: We finally evaluate our trained models.

**The flowchart is _Clickable_**
'''


def mermaid_svg():
    # Read the pre-rendered flowchart SVG and embed it directly as raw HTML.
    with open('assets/prompt-order-experiment.svg', 'r') as file:
        svg_content = file.read()
    return rx.html(
        f'<div style="width: 300%; height: auto;">{svg_content}</div>'
    )


def page():
    # Stack the markdown intro above the clickable flowchart.
    return rx.vstack(
        rx.markdown(p2),
        mermaid_svg(),
    )
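

# A minimal sketch of how this page might be wired into a Reflex app.
# This is an assumption for illustration: in this project the App object and
# routing likely live in a separate module, and the route/title used here are
# hypothetical, not taken from the original source.
app = rx.App()
app.add_page(page, route="/", title="Prompt Order Experiment")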