Commit 912ea24

changed the format of the GCP
1 parent 2dd6da9 commit 912ea24

6 files changed

Lines changed: 403 additions & 77 deletions

GoogleCloud/Submodule_01_prog_setup.ipynb

Lines changed: 120 additions & 25 deletions
@@ -6,11 +6,64 @@
66
"metadata": {},
77
"source": [
88
"# MDIBL Transcriptome Assembly Learning Module\n",
9-
"# Notebook 1: Setup\n",
9+
"# Notebook 1: Setup"
10+
]
11+
},
12+
{
13+
"cell_type": "markdown",
14+
"id": "f62d616c",
15+
"metadata": {},
16+
"source": [
17+
"## Overview\n",
1018
"\n",
1119
"This notebook is designed to configure your virtual machine (VM) to have the proper tools and data in place to run the transcriptome assembly training module."
1220
]
1321
},
22+
{
23+
"cell_type": "markdown",
24+
"id": "60145056",
25+
"metadata": {},
26+
"source": [
27+
"## Learning Objectives\n",
28+
"\n",
29+
"1. **Understand and utilize shell commands within Jupyter Notebooks:** The notebook explicitly teaches the difference between `!` and `%` prefixes for executing shell commands, and how to navigate directories using `cd` and `pwd`.\n",
30+
"\n",
31+
"2. **Set up the necessary software:** Students will install and configure essential tools including:\n",
32+
" * Java (a prerequisite for Nextflow).\n",
33+
" * Mambaforge (a package manager for bioinformatics tools).\n",
34+
" * `sra-tools`, `perl-dbd-sqlite`, and `perl-dbi` (specific bioinformatics packages).\n",
35+
" * Nextflow (a workflow management system).\n",
36+
" * `gsutil` (for interacting with Google Cloud Storage).\n",
37+
"\n",
38+
"3. **Download and organize necessary data:** Students will download the TransPi transcriptome assembly software and its associated resources (databases, scripts, configuration files) from a Google Cloud Storage bucket. This includes understanding the directory structure and file organization.\n",
39+
"\n",
40+
"4. **Manage file permissions:** Students will use the `chmod` command to set executable permissions for the necessary files and directories within the TransPi software.\n",
41+
"\n",
42+
"5. **Navigate file paths:** The notebook provides examples and explanations for using relative file paths (e.g., `./`, `../`) within shell commands."
43+
]
44+
},
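Objective 5 above covers relative paths such as `./` and `../`. As a quick illustration (plain Python, assuming a working directory of `/home/jupyter` as used later in the module), here is how those prefixes resolve:

```python
import posixpath

# Assumed working directory (matches the module's VM setup).
cwd = "/home/jupyter"

# "./" means "relative to the current directory".
print(posixpath.normpath(posixpath.join(cwd, "./TransPi/bin")))
# → /home/jupyter/TransPi/bin

# "../" means "one directory up".
print(posixpath.normpath(posixpath.join(cwd, "../")))
# → /home
```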
45+
{
46+
"cell_type": "markdown",
47+
"id": "549be731",
48+
"metadata": {},
49+
"source": [
50+
"## Prerequisites\n",
51+
"\n",
52+
"* **Operating System:** A Linux-based system is assumed (commands like `apt`, `uname` are used). The specific distribution isn't specified, but a Debian-based system is likely.\n",
53+
"* **Shell Access:** The ability to execute shell commands from within the Jupyter Notebook environment (using `!` and `%`).\n",
54+
"* **Java Development Kit (JDK):** Required for Nextflow.\n",
55+
"* **Miniforge:** A package manager for installing bioinformatics tools.\n",
56+
"* **`gsutil`:** The Google Cloud Storage command-line tool. This is crucial for downloading data from Google Cloud Storage."
57+
]
58+
},
59+
{
60+
"cell_type": "markdown",
61+
"id": "a92f62a0",
62+
"metadata": {},
63+
"source": [
64+
"## Get Started"
65+
]
66+
},
1467
{
1568
"cell_type": "markdown",
1669
"id": "958495ce-339d-4d4d-a621-9ede79a7363c",
@@ -71,7 +124,7 @@
71124
"metadata": {},
72125
"outputs": [],
73126
"source": [
74-
"!pwd"
127+
"! pwd"
75128
]
76129
},
77130
{
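The `! pwd` cell above uses the `!` prefix. In Jupyter, `!` runs a command in a throwaway subshell, so state changes such as `cd` do not persist, whereas `%cd` changes the kernel's own working directory. A minimal plain-Python sketch of that distinction, with `subprocess` standing in for `!`:

```python
import os
import subprocess
import tempfile

start = os.getcwd()
target = tempfile.mkdtemp()   # a fresh directory, guaranteed to differ from start

# Like `!cd target`: the change happens in a child shell and is lost.
subprocess.run(f'cd "{target}"', shell=True, check=True)
print(os.getcwd() == start)   # → True: the parent process is unaffected

# Like `%cd target`: change this process's own working directory.
os.chdir(target)
print(os.getcwd() == start)   # → False
os.chdir(start)               # restore the original directory
```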
@@ -89,19 +142,17 @@
89142
"metadata": {},
90143
"outputs": [],
91144
"source": [
92-
"!sudo apt update\n",
93-
"!sudo apt-get install default-jdk -y\n",
94-
"!java -version"
145+
"! sudo apt update\n",
146+
"! sudo apt-get install default-jdk -y\n",
147+
"! java -version"
95148
]
96149
},
97150
{
98151
"cell_type": "markdown",
99152
"id": "7b3ffb16-3395-4c01-9774-ee568e815490",
100153
"metadata": {},
101154
"source": [
102-
"**Step 3:** Install Mambaforge, which is needed to support the information held within the TransPi databases.\n",
103-
"\n",
104-
">Mambaforge is a package manager."
155+
"**Step 3:** Install Miniforge (a package manager), which is needed to install and manage the tools that work with the TransPi databases."
105156
]
106157
},
107158
{
@@ -111,9 +162,45 @@
111162
"metadata": {},
112163
"outputs": [],
113164
"source": [
114-
"!curl -L -O https://github.com/conda-forge/miniforge/releases/latest/download/Mambaforge-$(uname)-$(uname -m).sh\n",
115-
"!bash Mambaforge-$(uname)-$(uname -m).sh -b -p $HOME/mambaforge\n",
116-
"!~/mambaforge/bin/mamba install -c bioconda sra-tools perl-dbd-sqlite perl-dbi -y"
165+
"! curl -L -O https://github.com/conda-forge/miniforge/releases/latest/download/Miniforge3-$(uname)-$(uname -m).sh\n",
166+
"! bash Miniforge3-$(uname)-$(uname -m).sh -b -p $HOME/miniforge"
167+
]
168+
},
169+
{
170+
"cell_type": "markdown",
171+
"id": "c5584e2e",
172+
"metadata": {},
173+
"source": [
174+
"Next, add Miniforge's `bin` directory to the `PATH` environment variable."
175+
]
176+
},
177+
{
178+
"cell_type": "code",
179+
"execution_count": null,
180+
"id": "ad030cd1",
181+
"metadata": {},
182+
"outputs": [],
183+
"source": [
184+
"import os\n",
185+
"os.environ[\"PATH\"] += os.pathsep + os.environ[\"HOME\"]+\"/miniforge/bin\""
186+
]
187+
},
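The cell above extends `PATH` for the current kernel session only; nothing is written to a shell profile. Here is a sketch of the same mechanism using a throwaway directory and a hypothetical tool name (`mytool_demo`), verified with `shutil.which`:

```python
import os
import shutil
import stat
import tempfile

# A throwaway directory with a fake executable named "mytool_demo"
# (hypothetical name; stands in for miniforge/bin and its tools).
tool_dir = tempfile.mkdtemp()
tool = os.path.join(tool_dir, "mytool_demo")
with open(tool, "w") as fh:
    fh.write("#!/bin/sh\necho ok\n")
os.chmod(tool, os.stat(tool).st_mode | stat.S_IXUSR | stat.S_IXGRP | stat.S_IXOTH)

print(shutil.which("mytool_demo"))              # → None (not on PATH yet)
os.environ["PATH"] += os.pathsep + tool_dir     # same pattern as the cell above
print(shutil.which("mytool_demo") is not None)  # → True
```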
188+
{
189+
"cell_type": "markdown",
190+
"id": "7b930ad7",
191+
"metadata": {},
192+
"source": [
193+
"Next, using Miniforge and bioconda, install the tools that will be used in this tutorial."
194+
]
195+
},
196+
{
197+
"cell_type": "code",
198+
"execution_count": null,
199+
"id": "4d4dd51e",
200+
"metadata": {},
201+
"outputs": [],
202+
"source": [
203+
"! mamba install -c bioconda sra-tools perl-dbd-sqlite perl-dbi -y"
117204
]
118205
},
119206
{
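After the `mamba install` above, it is worth confirming the new commands actually landed on `PATH`. A small helper sketch: `prefetch` and `fasterq-dump` are the usual sra-tools entry points, but adjust the names if your install differs.

```python
import shutil

def missing_tools(tools):
    """Return the subset of command names not found on PATH."""
    return [t for t in tools if shutil.which(t) is None]

# prefetch and fasterq-dump ship with sra-tools.
missing = missing_tools(["prefetch", "fasterq-dump"])
if missing:
    print("Not yet on PATH:", missing)
else:
    print("All tools found.")
```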
@@ -131,9 +218,9 @@
131218
"metadata": {},
132219
"outputs": [],
133220
"source": [
134-
"!curl https://get.nextflow.io | bash\n",
135-
"!chmod +x nextflow\n",
136-
"!./nextflow self-update"
221+
"! curl https://get.nextflow.io | bash\n",
222+
"! chmod +x nextflow\n",
223+
"! ./nextflow self-update"
137224
]
138225
},
139226
{
@@ -152,7 +239,7 @@
152239
"metadata": {},
153240
"outputs": [],
154241
"source": [
155-
"!gsutil -m cp -r gs://nigms-sandbox/nosi-inbremaine-storage/TransPi ./"
242+
"! gsutil -m cp -r gs://nigms-sandbox/nosi-inbremaine-storage/TransPi ./"
156243
]
157244
},
158245
{
@@ -190,7 +277,7 @@
190277
"metadata": {},
191278
"outputs": [],
192279
"source": [
193-
"!gsutil -m cp -r gs://nigms-sandbox/nosi-inbremaine-storage/resources ./"
280+
"! gsutil -m cp -r gs://nigms-sandbox/nosi-inbremaine-storage/resources ./"
194281
]
195282
},
196283
{
@@ -234,7 +321,7 @@
234321
"metadata": {},
235322
"outputs": [],
236323
"source": [
237-
"!chmod -R +x ./TransPi/bin"
324+
"! chmod -R +x ./TransPi/bin"
238325
]
239326
},
240327
{
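The `chmod -R +x ./TransPi/bin` cell above adds execute bits recursively. For intuition, an equivalent sketch in plain Python, demonstrated on a throwaway directory rather than the real `TransPi/bin`:

```python
import os
import stat
import tempfile

def add_exec_bits(root):
    """Recursively add u+x, g+x, o+x to every file and directory under root."""
    exec_bits = stat.S_IXUSR | stat.S_IXGRP | stat.S_IXOTH
    for dirpath, dirnames, filenames in os.walk(root):
        for name in dirnames + filenames:
            path = os.path.join(dirpath, name)
            os.chmod(path, os.stat(path).st_mode | exec_bits)

# Demo on a temporary tree, not the real TransPi/bin.
root = tempfile.mkdtemp()
script = os.path.join(root, "demo.sh")
with open(script, "w") as fh:
    fh.write("#!/bin/sh\n")
add_exec_bits(root)
print(os.access(script, os.X_OK))   # → True
```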
@@ -295,22 +382,30 @@
295382
},
296383
{
297384
"cell_type": "markdown",
298-
"id": "f80a7bab-98ae-45a6-845f-ad3c4138575a",
385+
"id": "ffec658a",
299386
"metadata": {},
300387
"source": [
301-
"## When you are ready, proceed to the next notebook: [`Submodule_02_basic_assembly.ipynb`](./Submodule_02_basic_assembly.ipynb)."
388+
"## Conclusion\n",
389+
"\n",
390+
"This notebook configured the virtual machine for the MDIBL Transcriptome Assembly Learning Module. We updated the system, installed the necessary software, including Java, Miniforge, and Nextflow, and downloaded the TransPi program and its associated resources from Google Cloud Storage. The `chmod` command made the TransPi scripts executable. The VM is now prepared for the next notebook, `Submodule_02_basic_assembly.ipynb`, which delves into the transcriptome assembly process itself. Completing these steps is essential for running the subsequent modules."
302391
]
303392
},
304393
{
305-
"cell_type": "code",
306-
"execution_count": null,
307-
"id": "934165c2-8fbd-4801-979f-6db5d1e592ea",
394+
"cell_type": "markdown",
395+
"id": "666c1e4d",
308396
"metadata": {},
309-
"outputs": [],
310-
"source": []
397+
"source": [
398+
"## Clean Up\n",
399+
"\n",
400+
"Remember to proceed to the next notebook [`Submodule_02_basic_assembly.ipynb`](./Submodule_02_basic_assembly.ipynb) or shut down your instance if you are finished."
401+
]
311402
}
312403
],
313-
"metadata": {},
404+
"metadata": {
405+
"language_info": {
406+
"name": "python"
407+
}
408+
},
314409
"nbformat": 4,
315410
"nbformat_minor": 5
316411
}

GoogleCloud/Submodule_02_basic_assembly.ipynb

Lines changed: 68 additions & 4 deletions
@@ -8,6 +8,8 @@
88
"# MDIBL Transcriptome Assembly Learning Module\n",
99
"# Notebook 2: Performing a \"Standard\" basic transcriptome assembly\n",
1010
"\n",
11+
"## Overview\n",
12+
"\n",
1113
"In this notebook, we will set up and run a basic transcriptome assembly, using the analysis pipeline as defined by the TransPi Nextflow workflow. The steps to be carried out are the following, and each is described in more detail in the Background material notebook.\n",
1214
"\n",
1315
"- Sequence Quality Control (QC): removing adapters and low-quality sequences.\n",
@@ -23,12 +25,58 @@
2325
"> **Figure 1:** TransPi workflow for a basic transcriptome assembly run."
2426
]
2527
},
28+
{
29+
"cell_type": "markdown",
30+
"id": "062784ec",
31+
"metadata": {},
32+
"source": [
33+
"## Learning Objectives\n",
34+
"\n",
35+
"1. **Understanding the TransPi Workflow:** Learners will gain a conceptual understanding of the TransPi workflow, including its individual steps and their order. This involves understanding the purpose of each stage (QC, normalization, assembly, integration, assessment, annotation, and reporting).\n",
36+
"\n",
37+
"2. **Executing a Transcriptome Assembly:** Learners will learn how to run a transcriptome assembly using Nextflow and the TransPi pipeline, including setting necessary parameters (e.g., k-mer size, read length). They will learn how to interpret the command-line interface for executing Nextflow workflows.\n",
38+
"\n",
39+
"3. **Interpreting Nextflow Output:** Learners will learn to navigate and understand the directory structure generated by the TransPi workflow. This includes interpreting the output from various tools such as FastQC, FastP, Trinity, TransAbyss, SOAP, rnaSpades, Velvet/Oases, EvidentialGene, rnaQuast, BUSCO, DIAMOND/BLAST, HMMER/Pfam, and TransDecoder. This involves understanding the different types of output files generated and how to extract relevant information from them (e.g., assembly statistics, annotation results).\n",
40+
"\n",
41+
"4. **Assessing Transcriptome Quality:** Learners will understand how to assess the quality of a transcriptome assembly using metrics generated by rnaQuast and BUSCO.\n",
42+
"\n",
43+
"5. **Interpreting Annotation Results:** Learners will learn to interpret the results of transcriptome annotation using tools like DIAMOND/BLAST and HMMER/Pfam, understanding what information they provide regarding protein function and domains.\n",
44+
"\n",
45+
"6. **Utilizing Workflow Management Systems:** Learners will gain practical experience using Nextflow, a workflow management system, to execute a complex bioinformatics pipeline. This includes understanding the benefits of using a defined workflow for reproducibility and efficiency.\n",
46+
"\n",
47+
"7. **Working with Jupyter Notebooks:** The notebook itself provides a practical example of how to integrate command-line tools within a Jupyter Notebook environment."
48+
]
49+
},
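Objective 2 above mentions parameters such as k-mer size and read length. As a rule of thumb (assumed here, not taken from TransPi's source), de Bruijn graph assemblers expect odd k-mer sizes smaller than the read length. A hypothetical validator sketch for such a parameter list:

```python
def validate_kmers(kmers, read_length):
    """Return a list of problems with the proposed k-mer sizes.

    Encodes the common assembler constraint (an assumption, not TransPi's
    actual code): each k should be odd and smaller than the read length.
    """
    problems = []
    for k in kmers:
        if k % 2 == 0:
            problems.append(f"k={k} is even")
        elif k >= read_length:
            problems.append(f"k={k} >= read length {read_length}")
    return problems

print(validate_kmers([25, 41, 57], 75))   # → []
print(validate_kmers([26, 81], 75))       # → ['k=26 is even', 'k=81 >= read length 75']
```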
50+
{
51+
"cell_type": "markdown",
52+
"id": "abf9345c",
53+
"metadata": {},
54+
"source": [
55+
"## Prerequisites\n",
56+
"\n",
57+
"* **Nextflow:** A workflow management system used to execute the TransPi pipeline. \n",
58+
"* **Docker:** Used for containerization of the various bioinformatics tools within the workflow. This avoids the need for local installation of numerous packages.\n",
59+
"* **TransPi:** The specific Nextflow pipeline for transcriptome assembly. The notebook assumes it's present in the `/home/jupyter` directory.\n",
60+
"* **Bioinformatics Tools (within TransPi):** The workflow utilizes several bioinformatics tools. These are packaged within Docker containers, but the notebook expects that TransPi is configured correctly to access and use them:\n",
61+
" * FastQC: Sequence quality control.\n",
62+
" * FastP: Read preprocessing (trimming, adapter removal).\n",
63+
" * Trinity, TransAbyss, SOAPdenovo-Trans, rnaSpades, Velvet/Oases: Transcriptome assemblers.\n",
64+
" * EvidentialGene: Transcriptome integration and reduction.\n",
65+
" * rnaQuast: Transcriptome assessment.\n",
66+
" * BUSCO: Assessment of completeness of the assembled transcriptome.\n",
67+
" * DIAMOND/BLAST: Protein alignment for annotation.\n",
68+
" * HMMER/Pfam: Protein domain assignment for annotation.\n",
69+
" * Bowtie2: Read mapping for assembly validation.\n",
70+
" * TransDecoder: ORF prediction and coding region identification.\n",
71+
" * Trinotate: Functional annotation of transcripts."
72+
]
73+
},
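The prerequisites above include rnaQuast and BUSCO for assembly assessment. One headline statistic rnaQuast reports is N50; a self-contained sketch of its definition, for intuition only:

```python
def n50(lengths):
    """N50: the contig length at which contigs of that length or longer
    contain at least half of the total assembly length."""
    total = sum(lengths)
    running = 0
    for length in sorted(lengths, reverse=True):
        running += length
        if running >= total / 2:
            return length
    return 0

# Toy contig lengths, not real assembly output.
print(n50([2, 3, 4, 5, 6]))   # → 5
```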
2674
{
2775
"cell_type": "markdown",
2876
"id": "6cd0f4f2-5559-4675-9e97-24b0548b31af",
2977
"metadata": {},
3078
"source": [
31-
"## Time to get started! \n",
79+
"## Get Started \n",
3280
"\n",
3381
"**Step 1:** Make sure you are in the correct local working directory as in `01_prog_setup.ipynb`.\n",
3482
"> It should be `/home/jupyter`."
@@ -278,14 +326,30 @@
278326
},
279327
{
280328
"cell_type": "markdown",
281-
"id": "b96dd6bb-a8ed-44bf-b1f4-bb284f8f0f3e",
329+
"id": "b82f0b3a",
330+
"metadata": {},
331+
"source": [
332+
"## Conclusion\n",
333+
"\n",
334+
"This Jupyter Notebook demonstrated a complete transcriptome assembly workflow using the TransPi Nextflow pipeline. We successfully executed the pipeline, encompassing quality control, normalization, multiple assembly generation with Trinity, TransAbyss, SOAP, rnaSpades, and Velvet/Oases, integration via EvidentialGene, and subsequent assessment using rnaQuast and BUSCO. The final assembly underwent annotation with DIAMOND/BLAST and HMMER/Pfam, culminating in comprehensive reports detailing the entire process and the resulting transcriptome characteristics. The generated output, accessible in the `basicRun/output` directory, provides a rich dataset for further investigation and analysis, including detailed quality metrics, assembly statistics, and functional annotations. This module provided a practical introduction to automated transcriptome assembly, highlighting the efficiency and reproducibility offered by integrated workflows like TransPi. Further exploration of the detailed output is encouraged, and the subsequent notebook focuses on a more in-depth annotation analysis."
335+
]
336+
},
337+
{
338+
"cell_type": "markdown",
339+
"id": "b68484f3",
282340
"metadata": {},
283341
"source": [
284-
"## When you are ready, proceed to the next notebook: [`Submodule_03_annotation_only.ipynb`](Submodule_03_annotation_only.ipynb)."
342+
"## Clean Up\n",
343+
"\n",
344+
"Remember to proceed to the next notebook [`Submodule_03_annotation_only.ipynb`](Submodule_03_annotation_only.ipynb) or shut down your instance if you are finished."
285345
]
286346
}
287347
],
288-
"metadata": {},
348+
"metadata": {
349+
"language_info": {
350+
"name": "python"
351+
}
352+
},
289353
"nbformat": 4,
290354
"nbformat_minor": 5
291355
}
