In molecular biology and genetics, DNA annotation or genome annotation is the process of describing the structure and function of the components of a genome,[2] by analyzing and interpreting them in order to extract their biological significance and understand the biological processes in which they participate.[3] Among other things, it identifies the locations of genes and all the coding regions in a genome and determines what those genes do.[4]
Annotation is performed after a genome is sequenced and assembled, and it is a necessary step of genome analysis before the sequence is deposited in a database and described in a published article. Although describing individual genes and their products or functions is sufficient for the description to qualify as an annotation, the depth of analysis reported in the literature for different genomes varies widely, with some reports including additional information that goes beyond a simple annotation.[5] Furthermore, owing to the size and complexity of sequenced genomes, DNA annotation is not performed manually but is instead automated by computational means. Nevertheless, the conclusions drawn from the results still require manual expert analysis.[6]
DNA annotation is classified into two categories: structural annotation, which identifies and demarcates elements in a genome, and functional annotation, which assigns functions to these elements.[7] This is not the only way in which it has been categorized, as several alternatives, such as dimension-based[8] and level-based classifications,[3] have also been proposed.
History
The first generation of genome annotators used local ab initio methods, which are based solely on the information that can be extracted from the DNA sequence on a local scale, that is, one open reading frame (ORF) at a time.[9][10] They appeared out of the necessity to handle the enormous amount of data produced by the Maxam-Gilbert and Sanger DNA sequencing techniques developed in the late 1970s. The first software used to analyze sequencing reads was the Staden Package, created by Rodger Staden in 1977.[11] It performed several tasks related to annotation, such as base and codon counts. In fact, codon usage was the main strategy used by several early protein coding sequence (CDS) prediction methods,[12][13][14] based on the assumption that the most highly translated regions in a genome contain the codons whose corresponding tRNAs (the molecules responsible for carrying amino acids to the ribosome during protein synthesis) are most abundant, allowing more efficient translation.[15] Conversely, rarer synonymous codons were known to be more frequent in proteins expressed at lower levels.[13][16]
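The codon-usage idea can be illustrated with a short computation. Below is a minimal sketch in Python, not a reconstruction of any historical tool: the reference table and sequences are invented, and a candidate region is scored by how well its codon frequencies match codon usage derived from (here, stand-in) highly expressed genes.

```python
from collections import Counter

def codon_frequencies(seq):
    """Count in-frame codons and return their relative frequencies."""
    codons = [seq[i:i + 3] for i in range(0, len(seq) - 2, 3)]
    counts = Counter(codons)
    total = sum(counts.values())
    return {codon: n / total for codon, n in counts.items()}

def usage_similarity(candidate, reference_freqs):
    """Naive codon-usage score: weight each of the candidate's codons by
    its frequency in the reference set. Highly expressed coding regions
    should favor codons that are common in the reference."""
    freqs = codon_frequencies(candidate)
    return sum(reference_freqs.get(c, 0.0) * f for c, f in freqs.items())

# Hypothetical reference usage built from invented "highly expressed" genes.
reference = codon_frequencies("ATGAAAGAAGCTGCTAAAGAAGCTGCTAAATAA")
candidate = "ATGAAAGCTGAAGCTAAAGAAGCTGCTTAA"
print(f"usage similarity: {usage_similarity(candidate, reference):.3f}")
```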
The advent of complete genomes in the 1990s (the first being the genome of Haemophilus influenzae, sequenced in 1995) introduced a second generation of annotators. Like the previous generation, they performed annotation through ab initio methods, but now applied on a genome-wide scale.[9][10] Markov models are the driving force behind many algorithms used within annotators of this generation;[17][18] these models can be thought of as directed graphs in which nodes represent different genomic signals (such as transcription and translation start sites) connected by arrows representing the scanning of the sequence. For a Markov model to detect a genomic signal, it must first be trained on a set of known genomic signals.[19] In the context of annotation, the output of a Markov model includes the probability of each kind of genomic element at each position of the genome, and an accurate Markov model will assign high probabilities to correct annotations and low probabilities to incorrect ones.[20]
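A minimal sketch of this idea follows, with toy training sequences in place of real trained signal models: two first-order Markov chains are estimated, one from "coding" and one from "noncoding" examples, and a query sequence is scored by the log-odds between the two models.

```python
import math
from collections import defaultdict

def train_markov(sequences):
    """Estimate first-order transition probabilities P(next | current)
    from training sequences (a toy stand-in for training on known
    genomic signals)."""
    counts = defaultdict(lambda: defaultdict(int))
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):
            counts[a][b] += 1
    model = {}
    for a, nxt in counts.items():
        total = sum(nxt.values())
        model[a] = {b: n / total for b, n in nxt.items()}
    return model

def log_likelihood(model, seq, floor=1e-6):
    """Log-probability of a sequence under a transition model; unseen
    transitions get a small floor probability."""
    return sum(math.log(model.get(a, {}).get(b, floor))
               for a, b in zip(seq, seq[1:]))

# Made-up training data: GC-rich "coding" vs AT-rich "noncoding".
coding = train_markov(["ATGGCGGCGCTGAGCGGC", "ATGGGCGCCGGCCTGTGA"])
noncoding = train_markov(["TTATATAATTATTAAATA", "AATTTATTATAATATATT"])

query = "GCGGCCGCTGGC"
score = log_likelihood(coding, query) - log_likelihood(noncoding, query)
print(f"log-odds (coding vs noncoding): {score:.2f}")  # positive favors coding
```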
As more sequenced genomes became available in the early and mid-2000s, coupled with the numerous protein sequences obtained experimentally, genome annotators began employing homology-based methods, launching the third generation of genome annotation. These new methods allowed annotators not only to infer genomic elements through statistical means (as in previous generations) but also to compare the sequence being annotated with already existing, validated sequences. These so-called combiner annotators, which perform both ab initio and homology-based annotation, require fast alignment algorithms to identify regions of homology.[2][9][10]
In the late 2000s, genome annotation shifted its attention towards identifying non-coding regions in DNA, which became possible thanks to the appearance of methods to analyze transcription factor binding sites, DNA methylation sites, and chromatin structure, together with other RNA and regulatory region analysis techniques. Other genome annotators also began to focus on population-level studies represented by the pangenome; by doing so, for instance, annotation pipelines ensure that core genes of a clade are also found in new genomes of the same clade. Both annotation strategies constitute the fourth generation of genome annotators.[9][10]
By the 2010s, the genome sequences of more than a thousand human individuals (through the 1000 Genomes Project) and of several model organisms had become available. Even so, genome annotation remains a major challenge for scientists investigating the human genome and other genomes.[21][22]
Structural annotation
Repeat identification and masking
The first step of structural annotation is the identification and masking of repeats, which include low-complexity sequences (such as AGAGAGAG, or homopolymeric stretches like TTTTTTTTT) and transposons (larger elements with several copies across the genome).[2][24] Repeats are a major component of both prokaryotic and eukaryotic genomes; for instance, repeats make up anywhere from 0% to over 42% of prokaryotic genomes,[25] and about three quarters of the human genome is composed of repetitive elements.[26]
Identifying repeats is difficult for two main reasons: they are poorly conserved, and their boundaries are not clearly defined. Because of this, repeat libraries must be built for the genome of interest, which can be accomplished with one of the following methods:[24][27]
De novo methods. Repeats are identified in a self-genome comparison by detecting and grouping pairs of sequences at different locations whose similarity is above a minimum threshold of sequence conservation, thus requiring no prior information about repeat structure or sequence. The disadvantage of these methods is that they identify any repeated sequence, not just transposons, and may include conserved coding sequences (CDS), making careful post-processing indispensable to remove those sequences. They may also leave out related regions that have degraded over time and may group elements that share no evolutionary history (a minimal sketch of the self-comparison idea is given after this list).[28]
Homology-based methods. Repeats are identified by similarity (homology) to known repeats stored in a curated database. These methods are more likely than de novo methods to find real transposons, even at low copy numbers, but are biased towards previously identified families.
Structure-based methods. Repeats are identified based on models of their structure, rather than repetition or similarity. They are capable of identifying real transposons (just like the homology-based ones), but are not biased by known elements. However, they are highly specific to each class of repeat, and, as such, are less universally applicable.
Comparative genomic methods. Repeats are identified as disruptions in one or more sequences of a multiple sequence alignment caused by large insertions. Although this strategy avoids the poorly defined boundary problem of other methods, it is highly dependent on assembly quality and on the level of transposon activity in the genomes in question.
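As a toy illustration of the de novo self-comparison idea referenced above (exact k-mer matches stand in for a similarity threshold; the sequence and parameters are invented):

```python
from collections import defaultdict

def find_exact_repeats(genome, k=6, min_copies=2):
    """Index every k-mer by its positions and report those occurring at
    least min_copies times -- a crude self-comparison that, like real
    de novo methods, flags *any* repeated sequence, transposon or not."""
    positions = defaultdict(list)
    for i in range(len(genome) - k + 1):
        positions[genome[i:i + k]].append(i)
    return {kmer: pos for kmer, pos in positions.items()
            if len(pos) >= min_copies}

genome = "TTAGGCATCGGCATCGTTTTGGCATCGAA"
for kmer, pos in find_exact_repeats(genome).items():
    print(kmer, "at positions", pos)
```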
After the repetitive regions in a genome have been identified, they are masked. Masking means replacing the letters of the nucleotides (A, C, G, or T) with other letters, marking these regions as repetitive so that downstream analyses treat them accordingly. Repetitive regions can cause performance issues if they are not masked, and may even produce false evidence for gene annotation (for example, treating an open reading frame (ORF) in a transposon as an exon).[24] Depending on the letters used for replacement, masking is classified as soft or hard: in soft masking, repetitive regions are indicated with lowercase letters (a, c, g, or t), whereas in hard masking, the letters of these regions are replaced with N's. Soft masking can thus be used, for example, to exclude word matches and avoid initiating an alignment in those regions, while hard masking, in addition to all of this, can also exclude masked regions from alignment scores.[29][30]
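A minimal sketch of the two masking conventions, assuming repeat coordinates have already been identified (the sequence and intervals below are invented):

```python
def soft_mask(seq, intervals):
    """Lowercase the repeat intervals (half-open start/end pairs)."""
    chars = list(seq)
    for start, end in intervals:
        chars[start:end] = seq[start:end].lower()
    return "".join(chars)

def hard_mask(seq, intervals):
    """Replace the repeat intervals with N's."""
    chars = list(seq)
    for start, end in intervals:
        chars[start:end] = "N" * (end - start)
    return "".join(chars)

seq = "TTAGGCAGAGAGAGGCATTC"
repeats = [(6, 14)]             # hypothetical low-complexity region
print(soft_mask(seq, repeats))  # TTAGGCagagagagGCATTC
print(hard_mask(seq, repeats))  # TTAGGCNNNNNNNNGCATTC
```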
Evidence alignment
The next step after genome masking usually involves aligning all available transcript and protein evidence with the analyzed genome, that is, aligning all known expressed sequence tags (ESTs), RNAs, and proteins of the organism being annotated with its genome.[31] Although optional, this step can improve gene sequence elucidation because RNAs and proteins are direct products of coding sequences.[19]
If RNA-Seq data are available, they may be used to annotate and quantify all of the genes and their isoforms in the corresponding genome, providing not only their locations but also their rates of expression.[32] However, transcripts provide insufficient information for gene prediction on their own: they might be unobtainable for some genes, they may encode operons of more than one gene, and their start and stop codons cannot always be determined because of frameshifts and translation initiation factors.[19] To address this problem, proteogenomics-based approaches are employed, which use information from expressed proteins, often derived from mass spectrometry.[33]
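As an illustration of expression quantification, the following sketch computes transcripts per million (TPM), a common normalization of RNA-Seq read counts; the gene names, counts, and lengths are invented.

```python
def tpm(counts, lengths_bp):
    """Transcripts per million: normalize read counts by transcript
    length (in kilobases), then scale so the values sum to one million."""
    rpk = [c / (l / 1000) for c, l in zip(counts, lengths_bp)]
    scale = sum(rpk) / 1e6
    return [x / scale for x in rpk]

# Hypothetical genes: read counts and transcript lengths in bp.
counts = [500, 1200, 300]
lengths = [1000, 4000, 600]
for gene, value in zip(["geneA", "geneB", "geneC"], tpm(counts, lengths)):
    print(f"{gene}: {value:,.0f} TPM")
```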
Splice identification
Annotation of eukaryotic genomes has an extra layer of difficulty due to RNA splicing, a post-transcriptional process in which introns (non-coding regions) are removed and exons (coding regions) are joined.[23] Eukaryotic coding sequences (CDS) are therefore discontinuous, and, to ensure their proper identification, intronic regions must be filtered out. To do so, annotation pipelines must find the exon-intron boundaries, and multiple methodologies have been developed for this purpose. One solution is to use known exon boundaries for alignment; for instance, many introns begin with GT and end with AG.[31] This approach, however, cannot detect novel boundaries, so alternatives exist, such as machine learning algorithms trained on known exon boundaries and quality information to predict new ones.[34] Predictors of new exon boundaries usually require efficient data-compression and alignment algorithms, but they are prone to failure at boundaries located in regions with low sequence coverage or high sequencing error rates.[35][36]
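A minimal sketch of the canonical-dinucleotide idea mentioned above: it simply enumerates candidate introns that begin with GT and end with AG (the sequence and length limits are invented; real pipelines combine this weak signal with alignments or trained models).

```python
def candidate_introns(seq, min_len=4, max_len=50):
    """Return (start, end) pairs of subsequences that begin with the
    GT donor dinucleotide and end with the AG acceptor dinucleotide."""
    candidates = []
    for i in range(len(seq) - 1):
        if seq[i:i + 2] != "GT":
            continue
        for j in range(i + min_len, min(i + max_len, len(seq)) - 1):
            if seq[j:j + 2] == "AG":
                candidates.append((i, j + 2))  # half-open interval
    return candidates

seq = "CAGGTAAGTCTGATCCTTTAGGAT"
for start, end in candidate_introns(seq):
    print(f"possible intron {seq[start:end]} at [{start}, {end})")
```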
Feature prediction (coding and noncoding sequences)
A genome is divided into coding and noncoding regions, and the last step of structural annotation consists in identifying these features within the genome. Gene prediction is in fact the primary task in genome annotation, which is why numerous methods have been developed for this purpose.[19] Gene prediction is a misleading term, as most gene predictors only identify coding sequences (CDS) and do not report untranslated regions (UTRs); for this reason, CDS prediction has been proposed as a more accurate term.[24] CDS predictors detect genome features through methods called sensors, which include signal sensors that identify functional site signals such as promoters and polyA sites, and content sensors that classify DNA sequences into coding and noncoding content.[37] Whereas prokaryotic CDS predictors mostly deal with open reading frames (ORFs), which are segments of DNA between a start and a stop codon (a minimal ORF-scanning sketch follows the list below), eukaryotic CDS predictors face a more difficult problem because of the complex organization of eukaryotic genes.[3] CDS prediction methods can be classified into three broad categories:[2][31]
Ab initio methods (also called statistical, intrinsic, or de novo). CDS prediction is based solely on the information that can be extracted from the DNA sequence. They rely on statistical methods such as the hidden Markov model (HMM). Some methods employ two or more genomes to infer local mutation rates and patterns along the genome.[38]
Homology-based methods (also called empirical, evidence-driven, or extrinsic). CDS prediction is based on similarity to known sequences. Specifically, it performs alignments of the analyzed sequence with expressed sequence tags (ESTs), complementary DNA (cDNA), or protein sequences.
Combiners. CDS prediction is done by a combination of both methods mentioned above.
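The ORF scanning mentioned above can be sketched as follows (one strand only, with an invented sequence; real CDS predictors add scoring by signal and content sensors):

```python
STOP_CODONS = {"TAA", "TAG", "TGA"}

def find_orfs(seq, min_codons=3):
    """Scan all three reading frames of one strand and return
    (start, end, frame) for stretches running from ATG to a stop codon."""
    orfs = []
    for frame in range(3):
        start = None
        for i in range(frame, len(seq) - 2, 3):
            codon = seq[i:i + 3]
            if start is None and codon == "ATG":
                start = i
            elif start is not None and codon in STOP_CODONS:
                if (i + 3 - start) // 3 >= min_codons:
                    orfs.append((start, i + 3, frame))
                start = None
    return orfs

seq = "CCATGGCTGAATAAGTATGAAACGTTGATT"
for start, end, frame in find_orfs(seq):
    print(f"ORF {seq[start:end]} in frame {frame}")
```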
Functional annotation
Functional annotation assigns functions to the genomic elements found by structural annotation,[7] relating them to biological processes such as the cell cycle, cell death, development, and metabolism.[3] It can also serve as an additional quality check by identifying elements that may have been annotated in error.[2]
Functional annotation of genes requires a controlled vocabulary (or ontology) to name the predicted functional features. However, because there are numerous ways to define gene functions, the annotation process may be hindered when it is performed by different research groups. As such, a standardized controlled vocabulary must be employed, the most comprehensive of which is the Gene Ontology (GO). It classifies functional properties into one of three categories (molecular function, biological process, and cellular component) and organizes them in a directed acyclic graph, in which every node is a particular function, and every edge (or arrow) between two nodes indicates a parent-child or subcategory-category relationship.[40][41] As of 2020, GO is the most widely used controlled vocabulary for functional annotation of genes, followed by the MIPS Functional Catalog (FunCat).[42]
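The DAG structure matters in practice because annotating a gene product with a term implies annotation with all of that term's ancestors. A minimal sketch with an invented GO-like fragment:

```python
# Each term maps to its parents (a tiny invented GO-like fragment).
parents = {
    "kinase activity": ["catalytic activity"],
    "catalytic activity": ["molecular function"],
    "molecular function": [],
}

def with_ancestors(term, hierarchy):
    """Return the term plus all ancestor terms, following the rule that
    annotating a term implies annotating every term above it."""
    terms = set()
    stack = [term]
    while stack:
        current = stack.pop()
        if current not in terms:
            terms.add(current)
            stack.extend(hierarchy.get(current, []))
    return terms

print(sorted(with_ancestors("kinase activity", parents)))
# ['catalytic activity', 'kinase activity', 'molecular function']
```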
Some conventional methods for functional annotation are homology-based and rely on local alignment search tools.[40] Their premise is that high sequence conservation between two genomic elements implies that their function is conserved as well. Pairs of homologous sequences that arose through paralogy, orthology, or xenology usually perform a similar function. However, orthologous sequences should be treated with caution for two reasons: (1) they might have different names depending on when they were originally annotated, and (2) they may not perform the same functional role in two different organisms. Annotators often refer to an analogous sequence when no paralogy, orthology, or xenology is found.[19] Homology-based methods have several drawbacks, such as errors in the database, low sensitivity/specificity, the inability to distinguish between orthologs and paralogs,[43] artificially high scores due to the presence of low-complexity regions, and significant variation within a protein family.[44]
Functional annotation can also be performed through probabilistic methods. The distribution of hydrophilic and hydrophobic amino acids indicates whether a protein is located in solution or in a membrane, and specific sequence motifs provide information on post-translational modifications and on the final location of a given protein.[19] Probabilistic methods may be paired with a controlled vocabulary such as GO; for example, protein-protein interaction (PPI) networks usually place proteins with similar functions close to each other.[45]
Machine learning methods are also used to generate functional annotations for novel proteins based on GO terms. Generally, they consist of constructing a binary classifier for each GO term; these classifiers are then combined to make predictions on individual GO terms (forming a multiclass classifier), for which confidence scores are obtained. The support vector machine (SVM) is the most widely used binary classifier in functional annotation; however, other algorithms, such as k-nearest neighbors (kNN) and convolutional neural networks (CNN), have also been employed.[40]
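A minimal sketch of the one-binary-classifier-per-GO-term scheme using scikit-learn's SVC; the feature vectors and labels are random placeholders for real protein features such as sequence-derived descriptors:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 16))            # placeholder protein feature vectors
go_terms = ["GO:0003677", "GO:0016301"]  # example term IDs
Y = rng.integers(0, 2, size=(60, len(go_terms)))  # placeholder labels

# One binary classifier per GO term, each yielding a confidence score.
classifiers = {}
for j, term in enumerate(go_terms):
    classifiers[term] = SVC(probability=True).fit(X, Y[:, j])

new_protein = rng.normal(size=(1, 16))
for term, clf in classifiers.items():
    confidence = clf.predict_proba(new_protein)[0, 1]
    print(f"{term}: confidence {confidence:.2f}")
```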
Binary or multiclass classification methods for functional annotation generally produce less accurate results because they do not take into account the interrelations between GO terms. More advanced methods that consider these interrelations do so by either a flat or hierarchical approach, which are distinguished by the fact that the former does not take into account the ontology structure, while the latter does. Some of these methods compress the GO terms by matrix factorization or by hashing, thus boosting their performance.[42]
Noncoding sequence function prediction
Noncoding sequences (ncDNA) are those that do not code for proteins. They include elements such as pseudogenes, segmental duplications, binding sites and RNA genes.[28]
Pseudogenes are mutated copies of protein-coding genes that lost their coding function due to a disruption in their open reading frame (ORF), making them untranslatable.[28] They may be identified using one of the following two methods:[46]
Homology-based method. Pseudogenes are identified by searching for sequences that are similar to functional genes but contain mutations that disrupt their ORF. This method can determine neither the evolutionary relationship between a pseudogene and its parent gene nor the time elapsed since the disabling event.
Phylogeny-based method. Pseudogenes are identified by means of a phylogenetic analysis. First, a species tree for the species of interest and a phylogenetic tree of the gene (or gene family) of interest are constructed. The two are then compared to identify a species that has lost the gene. Next, within the genome of the species where the gene was not found, a sequence is sought that is orthologous to the gene identified in the closest species. Finally, if this orthologous sequence has a disruption in its ORF (and meets other criteria based on, for example, RNA-Seq data analysis or the dN/dS ratio), the sequence is indeed a pseudogene.
Segmental duplications are DNA segments of more than 1000 base pairs that are repeated in the genome with more than 90% sequence identity. Two strategies used for their identification are WGAC and WSSD:[47]
Whole-Genome Assembly Comparison (WGAC). It aligns the entire genome to itself in order to identify repeated sequences after filtering out common repeats; it does not require having the original reads used for the assembly.
Whole-genome Shotgun Sequence Detection (WSSD). It aligns the original reads with the assembled genome and searches for regions with a higher read depth than the average, which usually are signals of duplication. Segmental duplications identified by this method but not by WGAC are likely collapsed duplications, which means that they were mistakenly aligned to the same region.[48]
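The read-depth idea behind WSSD can be sketched as follows (the window depths and threshold are invented; a real implementation works from actual read alignments):

```python
import statistics

def flag_duplicated_windows(depths, z=3.0):
    """Flag windows whose read depth exceeds the genome-wide mean by
    more than z standard deviations, a signal of possible duplication."""
    mean = statistics.mean(depths)
    stdev = statistics.stdev(depths)
    return [i for i, d in enumerate(depths) if d > mean + z * stdev]

# Hypothetical per-window read depths along an assembly.
depths = [30, 28, 31, 29, 30, 95, 97, 30, 29, 31, 28, 30]
print("candidate duplicated windows:", flag_duplicated_windows(depths, z=2.0))
```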
Binding sites in DNA, such as those recognized by transcription factors, can be predicted with one of two kinds of methods:
Sequence similarity based methods. They consist in the identification of sequences homologous to known DNA binding sites, or in aligning them with query proteins. Their performance is usually low because DNA binding sequences are less conserved.
Structure based methods. They employ the three-dimensional structural information of proteins to predict the locations of DNA binding sites.
Noncoding RNA (ncRNA), produced by RNA genes, is RNA that is not translated into a protein. It includes molecules such as tRNA, rRNA, snoRNA, and microRNA, as well as noncoding mRNA-like transcripts. Ab initio prediction of RNA genes in a single genome often yields inaccurate results (with the exception of miRNA), so multi-genome comparative methods are used instead. These methods focus on the secondary structures of ncRNA, which are conserved in related species even when the sequence is not; by performing a multiple sequence alignment, more useful information can therefore be obtained for their prediction. Homology search may also be employed to identify RNA genes, but this procedure is complicated, especially in eukaryotes, due to the presence of a large number of repeats and pseudogenes.[50]
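Structure-aware prediction can be illustrated with the classic Nussinov base-pair-maximization algorithm, shown below as a simplified stand-in for the energy- and covariance-based models real tools use (the sequence and minimum loop length are invented):

```python
def nussinov(seq, min_loop=3):
    """Maximize Watson-Crick/wobble base pairs by dynamic programming.
    Returns the filled DP table; N[i][j] is the best pair count for the
    subsequence i..j."""
    pairs = {("A", "U"), ("U", "A"), ("G", "C"),
             ("C", "G"), ("G", "U"), ("U", "G")}
    n = len(seq)
    N = [[0] * n for _ in range(n)]
    for span in range(min_loop + 1, n):
        for i in range(n - span):
            j = i + span
            best = N[i + 1][j]                     # i left unpaired
            for k in range(i + min_loop + 1, j + 1):
                if (seq[i], seq[k]) in pairs:      # i pairs with k
                    left = N[i + 1][k - 1]
                    right = N[k + 1][j] if k + 1 <= j else 0
                    best = max(best, 1 + left + right)
            N[i][j] = best
    return N

seq = "GGGAAAUCC"
table = nussinov(seq)
print(f"maximum base pairs for {seq}: {table[0][len(seq) - 1]}")
```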
Visualization
File formats
Visualization of annotations in a genome browser requires a descriptive output file that specifies the intron-exon structure of each annotation, its start and stop codons, UTRs, and alternative transcripts, and that ideally includes information about the sequence alignments and gene predictions supporting each gene model. Commonly used formats for describing annotations are GenBank, GFF3, GTF, BED, and EMBL.[24] Some of these formats use controlled vocabularies and ontologies to define their descriptive terminologies and to guarantee interoperability between analysis and visualization tools.[2]
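As an illustration, the sketch below parses a single GFF3-style feature line; the record itself is invented but follows the format's tab-delimited nine-column layout:

```python
GFF3_COLUMNS = ["seqid", "source", "type", "start", "end",
                "score", "strand", "phase", "attributes"]

def parse_gff3_line(line):
    """Split one GFF3 feature line into a dict of its nine columns,
    expanding the key=value pairs of the attributes column."""
    fields = dict(zip(GFF3_COLUMNS, line.rstrip("\n").split("\t")))
    fields["attributes"] = dict(
        item.split("=", 1) for item in fields["attributes"].split(";") if item
    )
    return fields

# A hypothetical gene feature in GFF3 format.
record = "chr1\texample\tgene\t1300\t9000\t.\t+\t.\tID=gene0001;Name=abc-1"
print(parse_gff3_line(record))
```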
Genomic browsers
Genomic browsers are software tools that provide a graphical interface for the analysis and visualization of large genomic sequences and annotation data, helping users gain biological insight.[52][31][53]
Genomic browsers can be divided into web-based and stand-alone browsers. The former use information from databases and can be classified into multiple-species browsers (which integrate the sequences and annotations of multiple organisms and promote cross-species comparative analysis) and species-specific browsers (which focus on a single organism and its annotations). The latter are not necessarily linked to a specific genome database but are general-purpose browsers that can be downloaded and installed as an application on a local computer.[54][19]
Comparative visualization of genomes
Comparative genomics aims to identify similarities and differences in genomic features, as well as to examine evolutionary relationships between organisms.[55] Visualization tools capable of illustrating the comparative behavior between two or more genomes are essential for this approach, and can be classified into three categories based on the representation of the relationships between the compared genomes:[19]
Dot plots: This scheme can only show the alignment of two genomes: one genome is represented along the horizontal axis and the other along the vertical axis, and the dots in the plot represent the genomic elements that are similar between the two annotations (a minimal sketch is given after this list).
Linear representation: This representation uses multiple linear tracks to represent multiple genomes and their features, where a "track" refers to a specific type of genomic feature at a given genomic location.
Circular representation: This representation facilitates comparison of whole microbial or viral genomes. In this visualization mode, concentric circles and arcs are used to represent genomic sections.
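The dot-plot scheme referenced in the list above can be sketched in a few lines with matplotlib (the sequences and word size are invented): each dot marks a pair of positions at which the two genomes share an identical k-mer.

```python
import matplotlib.pyplot as plt

def dotplot_points(seq_x, seq_y, k=3):
    """Return (x, y) coordinates where the k-mer starting at position x
    of seq_x equals the k-mer starting at position y of seq_y."""
    return [(x, y)
            for x in range(len(seq_x) - k + 1)
            for y in range(len(seq_y) - k + 1)
            if seq_x[x:x + k] == seq_y[y:y + k]]

seq_x = "ATGGCGTACGATCGTA"
seq_y = "ATGGCGTTACGATCTA"
pts = dotplot_points(seq_x, seq_y)
plt.scatter([p[0] for p in pts], [p[1] for p in pts], s=10)
plt.xlabel("genome A position")
plt.ylabel("genome B position")
plt.title("Dot plot of shared 3-mers")
plt.show()
```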
Quality control
The quality of the sequence assembly influences the quality of the annotation, so it is important to assess assembly quality before performing the subsequent annotation steps.[31] Three metrics have been used to quantify the quality of a genome annotation: recall, precision, and accuracy, although these measures tend to appear in discussions of prediction accuracy rather than being reported explicitly in annotation projects.[56]
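At the nucleotide level these metrics reduce to counts of true and false positives and negatives. A minimal sketch, assuming the reference and predicted annotations are given as per-base coding (1) / noncoding (0) labels (the label vectors are invented):

```python
def annotation_metrics(reference, predicted):
    """Per-base recall, precision, and accuracy for two equal-length
    sequences of 1 (coding) / 0 (noncoding) labels."""
    tp = sum(r == p == 1 for r, p in zip(reference, predicted))
    tn = sum(r == p == 0 for r, p in zip(reference, predicted))
    fp = sum(r == 0 and p == 1 for r, p in zip(reference, predicted))
    fn = sum(r == 1 and p == 0 for r, p in zip(reference, predicted))
    return {
        "recall": tp / (tp + fn),
        "precision": tp / (tp + fp),
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
    }

reference = [1, 1, 1, 0, 0, 0, 1, 1, 0, 0]
predicted = [1, 1, 0, 0, 0, 1, 1, 1, 0, 0]
print(annotation_metrics(reference, predicted))
```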
Community annotation approaches are valuable techniques for quality control and standardization in genome annotation. An annotation jamboree that took place in 2002 led to the creation of the annotation standards used by the Sanger Institute's Human and Vertebrate Analysis Project (HAVANA).[57][20]
Re-annotation
Annotation projects often rely on previous annotations of an organism's genome; however, these older annotations may contain errors that can propagate to new annotations. As new genome analysis technologies are developed and richer databases become available, the annotation of some older genomes may be updated. This process, known as re-annotation, can provide users with new information about the genome, including details about genes and protein functions. Re-annotation is therefore a useful approach for quality control.[56][58]
Community annotation
Community annotation consists of engaging a community (both scientific and nonscientific) in genome annotation projects. It can be classified into the following six categories:[59][3]
Factory model: Annotation is performed by a completely automated pipeline.
Museum model: Manual curation by experts is involved to interpret the results of an annotation project.
Cottage industry model: Annotation is decentralized and is the result of the effort from different part-time curators.
Party or jamboree model: Consists of a short intensive workshop with leading curators from the community. It was first used in the Drosophila melanogaster genome annotation project.[60]
Blessed annotator: A variation of the museum model, applied in the Knockout Mouse Project (KOMP), in which curators go through a training period prior to annotation, and are then given access to annotation tools to continue their work.
Gatekeeper approach: It is a combination of the jamboree and cottage industry models. It begins with an annotation workshop, followed by a decentralized collaboration to extend and refine the initial annotation. It has been used for multiple species data.
A community annotation is said to be supervised when a coordinator manages the project by assigning the annotation of specific items to a select number of experts. When, instead, anyone can join a project and coordination is accomplished in a decentralized manner, it is called unsupervised community annotation. Supervised community annotation is short-lived and limited to the duration of the event, whereas its unsupervised counterpart does not have this limitation. However, the latter has been less successful than the former, presumably due to a lack of time, motivation, incentive, and/or communication.[61]
Wikipedia has multiple WikiProjects aimed at improving annotation. The Gene WikiProject, for instance, operates a bot that harvests gene data from research databases and creates gene stubs on that basis.[62]
The RNA WikiProject seeks to write articles that describe individual RNAs and RNA families in an accessible way.[63]
Applications
Disease diagnosis
Gene Ontology is used by researchers to establish disease-gene relationships, as GO helps identify novel genes and alterations in their expression, distribution, and function under different sets of conditions, such as disease versus health.[41]
Databases of these disease-gene relationships have been created for different organisms, such as the Plant-Pathogen Ontology,[64] the Plant-Associated Microbe Gene Ontology,[65] and DisGeNET.[66] Others have been implemented in pre-existing databases, such as the Rat Disease Ontology in the Rat Genome Database.[67]
Bioremediation
A great diversity of catabolic enzymes involved in hydrocarbon degradation by some bacterial strains are encoded by genes located in their mobile genetic elements (MGEs). The study of these elements is of great importance in the field of bioremediation, since the inoculation of wild-type or genetically modified strains carrying these MGEs has recently been explored as a way to confer hydrocarbon-degradation capacities.[68]
In 2013, Phale et al.[69] published the genome annotation of a strain of Pseudomonas putida (CSV86), a bacterium known for its preference for naphthalene and other aromatic compounds over glucose as a carbon and energy source.
To find the MGEs of this bacterium, its genome was annotated using RAST and the NCBI Prokaryotic Genome Annotation Pipeline (PGAP), and nine mobile elements were identified with the Insertion Sequence (IS) Finder database. This analysis resulted in the localization of the upper-pathway genes of naphthalene degradation,[70] right next to the genes encoding tRNA-Gly and an integrase; the identification of the genes encoding enzymes involved in the degradation of salicylate, benzoate, 4-hydroxybenzoate, phenylacetic acid, and hydroxyphenylacetic acid; and the recognition of an operon involved in glucose transport in the strain.
Gene Ontology analysis is of great importance in functional annotation, and in bioremediation specifically it can be applied to relate the genes of a microorganism to their functions and to their role in the remediation of certain contaminants. This was the approach used in the investigation of Halomonas zincidurans strain B6(T), a bacterium with thirty-one genes encoding resistance to heavy metals, especially zinc,[71] and of Stenotrophomonas sp. DDT-1, a strain capable of using DDT as its sole carbon and energy source,[72] to mention a few examples.
Software
Genes in a eukaryotic genome can be annotated using various annotation tools,[73] such as FINDER.[74] A modern annotation pipeline can support a user-friendly web interface and software containerization, as MOSGA does.[75][76] Modern annotation pipelines for prokaryotic genomes include Bakta,[77] Prokka,[51] and PGAP.[78]
As a general method, dcGO[80] has an automated procedure for statistically inferring associations between ontology terms and protein domains or combinations of domains from the existing gene/protein-level annotations.
A variety of software tools have been developed that allow scientists to view and share genome annotations, such as MAKER.
Genome annotation is an active area of investigation and involves a number of different organizations in the life science community, which publish the results of their efforts in publicly available biological databases accessible via the web and other electronic means.
^ Grosjean H, Fiers W (June 1982). "Preferential codon usage in prokaryotic genes: the optimal codon-anticodon interaction energy and the selective codon usage in efficiently expressed genes". Gene. 18 (3): 199–209. doi:10.1016/0378-1119(82)90157-3. PMID 6751939.
^ Garber M, Grabherr MG, Guttman M, Trapnell C (June 2011). "Computational methods for transcriptome annotation and quantification using RNA-seq". Nature Methods. 8 (6): 469–477. doi:10.1038/nmeth.1613. PMID 21623353. S2CID 205419756.
^ McHardy AC, Kloetgen A (2017). "Finding Genes in Genome Sequence". In Keith JM (ed.). Bioinformatics. Methods in Molecular Biology. Vol. 1525 (2nd ed.). New York: Springer. pp. 271–291. doi:10.1007/978-1-4939-6622-6_11. ISBN 978-1-4939-6622-6. PMID 27896725.
^ Brent MR, Guigó R (June 2004). "Recent advances in gene structure prediction". Current Opinion in Structural Biology. 14 (3): 264–272. doi:10.1016/j.sbi.2004.05.007. PMID 15193305.
^ a b Saxena R, Bishnoi R, Singla D (2021). "Gene Ontology: application and importance in functional annotation of the genomic data". In Singh B, Pathak RK (eds.). Bioinformatics: Methods and Applications. London: Academic Press. pp. 145–157. doi:10.1016/B978-0-323-89775-4.00015-8. ISBN 978-0-323-89775-4.
^ Cooper L, Jaiswal P (2016). "The Plant Ontology: A Tool for Plant Genomics". In Edwards D (ed.). Plant Bioinformatics. Methods in Molecular Biology. Vol. 1374 (2nd ed.). Totowa, N.J.: Humana Press. pp. 89–114. doi:10.1007/978-1-4939-3167-5_5. ISBN 978-1-4939-3167-5. PMID 26519402.