{ "cells": [ { "cell_type": "markdown", "id": "b0a2c873-7447-4402-85b5-facbb0d7c0a3", "metadata": {}, "source": [ "# DNAGPT2- The Best Beginner's Guide to Gene Sequence Large Language Models\n", "\n", "### 1. Overview\n", "Large language models have long transcended the NLP research domain, becoming a cornerstone for AI in science. Gene sequences in bioinformatics are most similar to natural language, making the application of large models to biological sequence studies a hot research direction in recent years. The 2024 Nobel Prize in Chemistry awarded to AlphaFold for predicting protein structures has further illuminated the future path for biological research.\n", "\n", "However, for most biologists, large models remain unfamiliar territory. Until 2023, models like GPT were niche topics within NLP research, only gaining public attention due to the emergence of ChatGPT.\n", "\n", "Most biology + large model research has emerged post-2023, but the significant interdisciplinary gap means these studies are typically collaborative efforts by large companies and teams. Replicating or learning from this work is challenging for many researchers, as evidenced by the issues sections of top papers on GitHub.\n", "\n", "On one hand, large models are almost certain to shape the future of biological research; on the other, many researchers hesitate at the threshold of large models. Providing a bridge over this gap is thus an urgent need.\n", "\n", "DNAGTP2 serves as this bridge, aiming to facilitate more biologists in overcoming the large model barrier and leveraging these powerful tools to advance their work.\n", "\n", "### 2. Tutorial Characteristics\n", "This tutorial is characterized by:\n", "\n", "1. **Simplicity**: Simple code entirely built using Hugging Face’s standard libraries.\n", "2. **Simplicity**: Basic theoretical explanations with full visual aids.\n", "3. 
**Simplicity**: Classic paper cases that are easy to understand.\n", "\n", "Despite its simplicity, the tutorial covers comprehensive content: building a tokenizer and training GPT and BERT models from scratch, fine-tuning LLaMA models, basic DeepSpeed multi-GPU distributed training, and applying SOTA models such as LucaOne and ESM3. These topics unfold progressively through typical biological tasks such as sequence classification, structure prediction, and regression analysis.\n", "\n", "### Target Audience\n", "1. Researchers and students in biology, especially bioinformatics.\n", "2. Beginners to large models, whether in biology or beyond.\n", "\n", "### 3. Tutorial Outline\n", "#### 1 Data and Environment\n", "1.1 Introduction to Large Model Runtime Environments \n", "1.2 Pre-training and Fine-tuning Data Related to Genes \n", "1.3 Basic Usage of the Datasets Library \n", "\n", "#### 2 Building DNA GPT2/BERT Large Models from Scratch\n", "2.1 Building a DNA Tokenizer \n", "2.2 Training a DNA GPT2 Model from Scratch \n", "2.3 Training a DNA BERT Model from Scratch \n", "2.4 Feature Extraction for Biological Sequences Using Gene Large Models \n", "2.5 Building Large Models Based on Multimodal Data \n", "\n", "#### 3 Biological Sequence Tasks Using Gene Large Models\n", "3.1 Sequence Classification Task \n", "3.2 Structure Prediction Task \n", "3.3 Multi-sequence Interaction Analysis \n", "3.4 Function Prediction Task \n", "3.5 Regression Tasks \n", "\n", "#### 4 Entering the ChatGPT Era: Gene Instruction Building and Fine-tuning\n", "4.1 Expanding the LLaMA Vocabulary Based on Gene Data \n", "4.2 Introduction to DeepSpeed Distributed Training \n", "4.3 Continual Pre-training of the LLaMA Model Based on Gene Data \n", "4.4 Classification Tasks Using the LLaMA-gene Large Model \n", "4.5 Instruction Fine-tuning Based on the LLaMA-gene Large Model \n", "\n", "#### 5 Overview of SOTA Large Model Applications in Biology\n", "5.1 Application of DNABERT2 \n", "5.2 Usage of 
LucaOne \n", "5.3 Usage of ESM3 \n", "5.4 Application of MedGPT \n", "5.5 Application of LLaMA-gene" ] }, { "cell_type": "code", "execution_count": null, "id": "1453bac8-82dc-4f1c-869d-399c99611c52", "metadata": {}, "outputs": [], "source": [] } ], "metadata": { "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.12.3" } }, "nbformat": 4, "nbformat_minor": 5 }