International Review of Neurobiology 2014 (Review)
Transcriptome studies have revealed a surprisingly high level of variation among individuals in the expression of key genes in the CNS under both normal and experimental conditions. Ten-fold variation is common, yet the specific causes and consequences of this variation are largely unknown. By combining classic gene mapping methods (family linkage studies and genome-wide association) with high-throughput genomics, it is now possible to define quantitative trait loci (QTLs), single-gene variants, and even single SNPs and indels that control gene expression in different brain regions and cells. This review considers some of the major technical and conceptual challenges in analyzing variation in expression in the CNS, with a focus on mRNAs rather than noncoding RNAs or proteins. At one level of analysis, this work has been highly successful, and we finally have techniques that can be used to track down small numbers of loci that control expression in the CNS. But at a higher level of analysis, we still do not understand the genetic architecture of gene expression in the brain, the consequences of expression QTLs on protein levels or cell function, or the combined impact of expression differences on behavior and disease risk. These important gaps are likely to be bridged over the next several decades using (1) much larger sample sizes, (2) more powerful RNA sequencing and proteomic methods, and (3) novel statistical and computational models to predict genome-to-phenome relations.
Topics: Animals; Central Nervous System; Chromosome Mapping; Gene Expression; History, 20th Century; Humans; Microarray Analysis; Quantitative Trait Loci; Transcriptome
PubMed: 25172476
DOI: 10.1016/B978-0-12-801105-8.00008-4
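The association step behind the eQTL mapping surveyed in the abstract above can be sketched as a per-SNP regression of expression on genotype dosage. The data, sample sizes, and effect size below are simulated, illustrative assumptions, not from the review:

```python
import numpy as np

# Hypothetical minimal eQTL scan on simulated data (illustrative only).
rng = np.random.default_rng(0)
n_samples, n_snps = 100, 5
genotypes = rng.integers(0, 3, size=(n_samples, n_snps)).astype(float)  # dosages 0/1/2
# Simulate expression driven by SNP 2 plus noise (an "expression QTL").
expression = 1.5 * genotypes[:, 2] + rng.normal(0, 1, n_samples)

def eqtl_scan(expr, geno):
    """Per-SNP slopes of a simple linear regression of expression on dosage."""
    g = geno - geno.mean(axis=0)           # center dosages
    e = expr - expr.mean()                 # center expression
    return g.T @ e / (g ** 2).sum(axis=0)  # least-squares slope per SNP

slopes = eqtl_scan(expression, genotypes)
best = int(np.argmax(np.abs(slopes)))      # should recover the simulated causal SNP
```

Real eQTL scans add covariates, permutation-based significance, and multiple-testing correction across millions of SNP-gene pairs; this only illustrates the core regression.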
Gene Jul 2019
Due to the rapid development of DNA microarray technology, large numbers of microarray datasets have been generated, and classifying these data has proved useful for cancer diagnosis, treatment and prevention. However, microarray data classification remains a challenging task, since gene expression data typically contain a huge number of genes but only a small number of samples. As a result, computational methods for reducing the dimension of microarray data are necessary. In this paper, we introduce a computational gene selection model for microarray data classification via adaptive hypergraph embedded dictionary learning (AHEDL). Specifically, a dictionary is learned from the feature space of the original high-dimensional microarray data, and this learned dictionary is used to represent the original genes with a reconstruction coefficient matrix. We then use an l2,1-norm regularization to impose row sparsity on the coefficient matrix for selecting discriminative genes. Meanwhile, in order to capture the local manifold geometrical structure of the original microarray data in a high-order manner, a hypergraph is adaptively learned and embedded into the model. An iterative updating algorithm is designed to solve the optimization problem. To validate the efficacy of the proposed model, we conducted experiments on six publicly available microarray data sets; the results demonstrate that AHEDL outperforms other state-of-the-art methods in microarray data classification.
Topics: Algorithms; Big Data; Computational Biology; Data Analysis; Humans; Microarray Analysis
PubMed: 31085273
DOI: 10.1016/j.gene.2019.04.060
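As a rough sketch of the row-sparsity idea in the abstract above (not the authors' AHEDL implementation): once genes are represented in a learned dictionary via a coefficient matrix, the l2,1-style row norms of that matrix can rank and select genes. The random dictionary and plain least-squares coding below are simplifying assumptions; all names and sizes are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
n_genes, n_samples, n_atoms = 50, 20, 8
X = rng.normal(size=(n_genes, n_samples))   # expression: genes x samples
D = rng.normal(size=(n_atoms, n_samples))   # stand-in "learned dictionary"

# Code each gene in the dictionary by least squares: X ≈ A @ D.
# (AHEDL instead learns D and a row-sparse A jointly, with an adaptive
# hypergraph regularizer; least squares is used here only to illustrate.)
A_t, *_ = np.linalg.lstsq(D.T, X.T, rcond=None)
A = A_t.T                                   # coefficients: genes x atoms

# Row-sparsity scoring: genes whose coefficient rows have large l2 norm
# are treated as the most discriminative and retained.
row_scores = np.linalg.norm(A, axis=1)
top_genes = np.argsort(row_scores)[::-1][:10]
```

In the full model, the l2,1 penalty drives uninformative rows of A toward zero during optimization, so selection falls out of the learned coefficients rather than a post-hoc ranking.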
Recent Patents on Biotechnology 2008 (Review)
A nanoparticle is a microscopic particle with at least one dimension less than 100 nm, and nanoparticles are the subject of intense scientific research. In recent years, the use of gold nanoparticles in place of fluorescent dyes and enzyme conjugates in biochips has become common. For example, Au nanoparticle labeling has been applied in many DNA-detection methods; a novel readout scheme for gold nanoparticle-based DNA microarrays, relying on "Laser-Induced Scattering around a nanoAbsorber" and a nanogold electrode, has been studied; and colorimetric detection using a gold label plus silver staining has also been developed. The technology is a productive combination of gene technology and nanotechnology. Meanwhile, scientists from a number of countries have turned their attention to the application of nanoparticles in biochips and have obtained new patents in this area.
Topics: Equipment Design; Microarray Analysis; Nanoparticles; Nanotechnology; Patents as Topic; Technology Assessment, Biomedical
PubMed: 19075853
DOI: 10.2174/187220808783330938
Biointerphases Sep 2010 (Review)
Enzymes are an integral part of biological systems. They constitute a significant proportion (an estimated 18%-29%) of all proteins expressed within eukaryotic genomes. It thus comes as no major surprise that enzymes have been implicated in many diseases and form the second largest group of drug targets, after receptors. Despite their involvement in a multitude of physiological processes, only a limited number of enzymes have thus far been well characterized. Consequently, little is understood about the physiological roles, substrate specificity, and downstream targets of the vast majority of these important proteins. In order to facilitate the biological characterization of enzymes, as well as their adoption as drug targets, there is a need for global "-omics" solutions that bridge the gap in understanding these proteins and their interactions. Herein the authors showcase how microarray methods can be adopted to investigate enzymes and their properties in a high-throughput manner. They focus on several major classes of enzymes, including kinases, phosphatases, and proteases. As a result of research efforts over the last decade, these groups of enzymes have become readily amenable to microarray-based profiling methods. The authors also describe the specific design considerations required to develop the appropriate chemical tools and libraries to characterize each enzyme class. These include peptide substrates, activity-based probes, and chemical compound libraries, which may be rapidly assembled using efficient combinatorial synthesis or "click chemistry" strategies. Taken together, microarrays offer a powerful means to study and profile enzymes, and also to discover potent small molecules with which to modulate enzyme activity.
Topics: Enzymes; High-Throughput Screening Assays; Humans; Microarray Analysis; Protein Array Analysis
PubMed: 21171709
DOI: 10.1116/1.3462969
Biomolecular Engineering Oct 2006 (Review)
Microarray technologies provide powerful tools for biomedical research and medicine, since arrays can be configured to monitor the presence of molecular signatures in a highly parallel fashion and can be set up to detect nucleic acids (DNA microarrays), proteins (antibody-based microarrays), or different types of cells. Microfluidics, on the other hand, provides the ability to analyze small volumes (micro-, nano-, or even picoliters) of sample, minimize costly reagent consumption, automate sample preparation, and reduce sample processing time. The marriage of microarray technologies with the emerging field of microfluidics offers a number of advantages, such as reduced reagent cost, shorter hybridization assay times, high-throughput sample processing, and the ability to integrate and automate the front-end sample processing steps. However, this marriage also poses challenges: developing low-cost manufacturing methods for the fluidic chips; providing good interfaces to the macro-world; minimizing non-specific analyte/wall interactions due to the high surface-to-volume ratio associated with microfluidics; developing materials that accommodate the optical readout phases of the assay; and fully integrating peripheral (optical and electrical) components with the microfluidics to produce autonomous systems appropriate for point-of-care testing. In this review, we provide an overview of recent advances in coupling DNA, protein, and cell microarrays to microfluidics and discuss the improvements required to carry these technologies into biomedical and clinical applications.
Topics: Biological Assay; Equipment Design; Equipment Failure Analysis; Gene Expression Profiling; Microarray Analysis; Microfluidic Analytical Techniques; Systems Integration
PubMed: 16905357
DOI: 10.1016/j.bioeng.2006.03.002
PloS One 2020
Microarray batch effect (BE) has been the primary bottleneck for large-scale integration of data from multiple experiments. Current BE correction methods either need known batch identities (ComBat) or risk overcorrecting by removing true but unknown biological differences (Surrogate Variable Analysis, SVA). It is well known that experimental conditions such as array or reagent batches, PCR amplification, or ozone levels can affect the measured expression levels; often the direction of perturbation of the measured expression is the same in different datasets. However, no existing BE correction algorithm attempts to estimate the individual effects of such technical differences and use them to correct expression data. In this manuscript, we show that a set of signatures, each a vector whose length equals the number of probes, calculated on a reference set of microarray samples can predict much of the batch effect in other validation sets. We present a rationale for selecting a reference set of samples designed to estimate technical differences without removing biological differences. Putting both together, we introduce the Batch Effect Signature Correction (BESC) algorithm, which uses the batch effect signatures (BES) calculated on the reference set to efficiently predict and remove BE. Using two independent validation sets, we show that BESC is capable of removing batch effect without removing unknown but true biological differences. Much of the variation due to batch effects is shared between different microarray datasets. That shared information can be used to predict signatures (i.e., directions of perturbation) due to batch effects in new datasets. The correction can be precomputed without using the samples to be corrected (blind), applied to each sample individually (single sample), and corrects only known technical effects without removing known or unknown biological differences (conservative).
These three characteristics make it ideal for high-throughput correction of samples in a microarray data repository. We also compare the performance of BESC to three other batch correction methods: SVA, Removing Unwanted Variation (RUV), and Hidden Covariates with Prior (HCP). An R package, besc, implementing the algorithm is available from http://explainbio.com.
Topics: Algorithms; Gene Expression Profiling; Microarray Analysis
PubMed: 32271844
DOI: 10.1371/journal.pone.0231446
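The core signature-removal idea described in the abstract above can be illustrated with a small linear-algebra sketch. This is not the besc R package; the signatures, loadings, and sizes are simulated assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)
n_probes, n_sigs = 200, 3
# Precomputed batch-effect signatures: one vector over probes per effect,
# standing in for the reference-set-derived BES.
signatures = rng.normal(size=(n_probes, n_sigs))
biology = rng.normal(size=n_probes)                 # true expression
batch_load = np.array([2.0, -1.0, 0.5])             # per-sample batch loadings
sample = biology + signatures @ batch_load          # observed (batch-affected)

def remove_signatures(x, S):
    """Subtract the least-squares projection of x onto the columns of S."""
    coef, *_ = np.linalg.lstsq(S, x, rcond=None)
    return x - S @ coef

corrected = remove_signatures(sample, signatures)
```

Note that projecting out the signature subspace also removes the (typically small) component of true biology lying in that subspace, which is why the choice of reference set, designed to capture technical rather than biological variation, matters in BESC.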
EMBO Reports May 2004
Workshop on Genomic Approaches to Microarray Data Analysis
Topics: Computational Biology; Gene Expression Profiling; Genes, Fungal; Microarray Analysis; Saccharomyces cerevisiae Proteins
PubMed: 15105828
DOI: 10.1038/sj.embor.7400156
Methods in Molecular Biology (Clifton,... 2009 (Review)
Bio-cell chips are microarrays composed of collections of cell spots attached to a surface. They hold intact cells and therefore enable the study of gene-gene and gene-protein interactions in a cell with three-dimensional positional information. The authors developed a 16 × 6 array bio-cell chip comprising a 1-mm-thick perforated polydimethylsiloxane (PDMS) layer on a lattice-patterned 25 mm × 75 mm glass slide. The perforations in the PDMS layer formed cylindrical wells of volume approximately 1.7 µL, which were used to seed cells. The authors constructed bio-cell chips using mononuclear cells from bone marrow specimens and subjected them to fluorescence in situ hybridization (FISH). Bio-cell chip technology is compatible with standard clinical diagnostic protocols, requires smaller samples, provides results quickly, and is highly cost-effective. In addition, bio-cell chips can be used as a platform for distributing real samples for research purposes. These features make them a potential tool for basic research and for clinical diagnosis.
Topics: Biological Assay; Biosensing Techniques; Cell Culture Techniques; Cell Physiological Phenomena; Equipment Design; Equipment Failure Analysis; Gene Expression Profiling; Microarray Analysis; Proteome; Signal Transduction
PubMed: 19212720
DOI: 10.1007/978-1-59745-372-1_10
Biomedical Microdevices Jun 2005
Topics: Biological Assay; Microarray Analysis; Microfluidic Analytical Techniques; Nanotechnology
PubMed: 15940425
DOI: 10.1007/s10544-005-1590-3
Methods in Molecular Biology (Clifton,... 2012
Glycolipid-protein interactions are increasingly recognised as critical to numerous and diverse biological processes, including immune recognition, cell-cell signalling, pathogen adherence, and virulence factor binding. Previously, such carbohydrate-lectin interactions have been assessed in vitro largely by assaying protein binding against purified preparations of single glycolipids. Recent observations show that certain disease-associated autoantibodies and other lectins bind only to complexes formed by two different gangliosides. However, investigating such 1:1 glycolipid complexes can prove technically arduous. To address this problem, we have developed a semi-automated system for assaying lectin binding to large numbers of glycolipid complexes simultaneously. This employs an automated thin-layer chromatography sampler. Single glycolipids and their heterodimeric complexes are prepared in microvials. The autosampler is then used to print reproducible arrays of glycolipid complexes onto polyvinylidene difluoride membranes affixed to glass slides. A printing density of 300 antigen spots per slide is achievable. Following overnight drying, these arrays can then be probed with the lectin(s) of interest. Detection of binding is by way of a horseradish peroxidase-linked secondary antibody driving a chemiluminescent reaction rendered on radiographic film. Image analysis software can then be used to measure signal intensity for quantification.
Topics: Glycolipids; Lectins; Microarray Analysis
PubMed: 22057541
DOI: 10.1007/978-1-61779-373-8_28
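The final image-analysis step of the protocol above, measuring signal intensity per antigen spot, can be approximated as background-subtracted mean intensity over a spot mask. The geometry, spot location, and values below are illustrative assumptions, not from the chapter:

```python
import numpy as np

def spot_intensity(image, row, col, radius=3):
    """Mean intensity inside a circular spot minus the mean of a local
    background annulus around it (a common quantification scheme)."""
    y, x = np.ogrid[:image.shape[0], :image.shape[1]]
    dist2 = (y - row) ** 2 + (x - col) ** 2
    fg = image[dist2 <= radius ** 2]                            # spot pixels
    bg = image[(dist2 > radius ** 2) & (dist2 <= (2 * radius) ** 2)]  # annulus
    return fg.mean() - bg.mean()

# Toy scanned image: uniform background with one bright "spot".
img = np.full((20, 20), 10.0)
img[8:13, 8:13] += 50.0            # bright region centered near (10, 10)
signal = spot_intensity(img, 10, 10)
```

Real array scans would first locate each spot from the print layout (here, the autosampler grid), then apply this measurement per spot before comparing lectin-binding intensities.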