Cureus May 2022
An Evaluation of the Relation Between Variation in Arch Forms and Relative Parallelism of the Occlusal Plane to the Line Joining the Inferior Border of Ala of the Nose With Different Tragal Levels of the Ear in Dentulous Subjects: An In Vivo Study.
Introduction The purpose of this study was to evaluate which of three positions on the tragus (superior, middle, and inferior), when connected to the inferior border of the ala of the nose, produced a line most parallel to the naturally existing occlusal plane in dentate subjects. The study also evaluated the correlation between variation in arch form and the relative parallelism of the occlusal plane to the ala-tragal line at the different tragal levels. Methods This study included 1405 subjects between the ages of 18 and 35 years. A custom-made occlusal plane analyzer was used to check the relative parallelism between the existing occlusal plane and the ala-tragal line with the tragus divided into superior, middle, and inferior portions. The Fox plane of the analyzer was placed on the occlusal plane, and the paralleling rod was adjusted until parallelism was obtained. The point on the tragus (superior, middle, or inferior) at which parallelism existed was recorded. Inter-canine and inter-molar distances were also measured to determine the arch form, which was then related to the tragal position at which the ala-tragal line was parallel to the occlusal plane. The assessment was done on both the right and left sides of each subject. Results Of the 2810 tragi examined, parallelism was most commonly established at the inferior part of the tragus, which accounted for 47% of the total. Seventy-one percent of the subjects showed an ovoid arch form.
When arch form was compared with the level of the occlusal plane, 46.8% of subjects with a tapered arch form, 54.5% with a square arch form, and 46.0% with an ovoid arch form had the occlusal plane level with the inferior portion of the tragus. Conclusion In 47% of the tragi studied, the occlusal plane was parallel to a line joining the inferior border of the ala of the nose to the inferior part of the tragus, and this held irrespective of arch form. Thus, tragal position showed no correlation with variation in arch form.
PubMed: 35706729
DOI: 10.7759/cureus.24925
Frontiers in Neuroinformatics 2021
Fiber clustering methods are typically used in brain research to study the organization of white matter bundles from large diffusion MRI tractography datasets. These methods enable exploratory bundle inspection using visualization and other methods that require identifying brain white matter structures in individuals or a population. Some applications, such as real-time visualization and inter-subject clustering, need fast and high-quality intra-subject clustering algorithms. This work proposes a parallel algorithm using a General Purpose Graphics Processing Unit (GPGPU) for fiber clustering based on the FFClust algorithm. The proposed GPGPU implementation exploits data parallelism using both multicore and GPU fine-grained parallelism present in commodity architectures, including current laptops and desktop computers. Our approach implements all FFClust steps in parallel, improving execution times in all of them. In addition, our parallel approach includes a parallel Kmeans++ algorithm implementation and defines a new variant of Kmeans++ to reduce the impact of choosing outliers as initial centroids. The results show that our approach provides clustering quality results very similar to FFClust, and it requires an execution time of 3.5 s for processing about a million fibers, achieving a speedup of 11.5 times compared to FFClust.
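The outlier sensitivity the authors address stems from the D² weighting in standard K-means++ seeding, which gives isolated fibers a disproportionately high chance of being chosen as initial centroids. A minimal NumPy sketch of that baseline seeding (the function name and array layout are illustrative, not FFClust's actual API):

```python
import numpy as np

def kmeans_pp_init(points, k, rng=None):
    """Standard K-means++ seeding: each new centroid is sampled with
    probability proportional to its squared distance to the nearest
    centroid chosen so far. Outliers get large weights here, which is
    the behavior the paper's Kmeans++ variant mitigates."""
    rng = np.random.default_rng(rng)
    n = len(points)
    centroids = [points[rng.integers(n)]]  # first centroid: uniform pick
    for _ in range(k - 1):
        # squared distance of every point to its nearest chosen centroid
        d2 = np.min(
            ((points[:, None, :] - np.array(centroids)[None, :, :]) ** 2).sum(-1),
            axis=1,
        )
        probs = d2 / d2.sum()              # D^2 weighting
        centroids.append(points[rng.choice(n, p=probs)])
    return np.array(centroids)
```

On the GPU, the dominant cost (the nearest-centroid distance computation) is embarrassingly parallel across points, which is what makes this step a good fit for fine-grained parallelism.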
PubMed: 34539370
DOI: 10.3389/fninf.2021.727859
BMC Bioinformatics Jan 2020
BACKGROUND
Studies using quantitative experimental methods have shown that the intracellular spatial distribution of molecules plays a central role in many cellular systems. Spatially resolved computer simulations can integrate quantitative data from these experiments to construct physically accurate models of the systems. Although computationally expensive, microscopic-resolution reaction-diffusion simulators, such as Spatiocyte, can directly capture intracellular effects such as diffusion-limited reactions and volume exclusion from crowded molecules by explicitly representing individual diffusing molecules in space. To alleviate the steep computational cost typically associated with the simulation of large or crowded intracellular compartments, we present a parallelized Spatiocyte method called pSpatiocyte.
RESULTS
The new high-performance method employs unique parallelization schemes on a hexagonal close-packed (HCP) lattice to efficiently exploit the resources of common workstations and large distributed-memory parallel computers. We introduce a coordinate system for fast access to HCP lattice voxels, a parallelized event scheduler, a parallelized version of Gillespie's direct method for unimolecular reactions, and a parallelized event process for diffusion and bimolecular reactions. We verified the correctness of pSpatiocyte's reaction and diffusion processes by comparison to theory. To evaluate the performance of pSpatiocyte, we performed a series of parallelized diffusion runs on the RIKEN K computer. In the case of fine lattice discretization with low voxel occupancy, pSpatiocyte exhibited 74% parallel efficiency and achieved a speedup of 7686 times with 663552 cores compared to the runtime with 64 cores. In weak scaling, pSpatiocyte obtained efficiencies of at least 60% with up to 663552 cores. When executing the Michaelis-Menten benchmark model on an eight-core workstation, pSpatiocyte required 45- and 55-fold shorter runtimes than Smoldyn and the parallel version of ReaDDy, respectively. As a high-performance application example, we study the dual phosphorylation-dephosphorylation cycle of the MAPK system, a typical reaction network motif in cell signaling pathways.
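The serial sampling loop that Gillespie's direct method performs, and that pSpatiocyte parallelizes on its lattice, can be sketched for a set of unimolecular decays (this plain-Python sketch is illustrative and is not pSpatiocyte's implementation):

```python
import math
import random

def gillespie_direct(rates, counts, t_end, seed=0):
    """Serial sketch of Gillespie's direct method for unimolecular
    reactions S_i -> products with rate constants k_i. Returns the
    trajectory of (time, molecule counts) after each reaction event."""
    rng = random.Random(seed)
    counts = list(counts)
    t, trajectory = 0.0, [(0.0, tuple(counts))]
    while t < t_end:
        propensities = [k * n for k, n in zip(rates, counts)]
        a0 = sum(propensities)
        if a0 == 0:                              # nothing left to react
            break
        t += -math.log(1.0 - rng.random()) / a0  # exponential waiting time
        # choose reaction i with probability propensities[i] / a0
        r, acc = rng.random() * a0, 0.0
        for i, a in enumerate(propensities):
            acc += a
            if r < acc:
                break
        counts[i] -= 1                           # one S_i molecule decays
        trajectory.append((t, tuple(counts)))
    return trajectory
```

The two random draws per event (waiting time and reaction index) are inherently sequential for one compartment, which is why parallelizing this method across a spatial lattice, as pSpatiocyte does, is nontrivial.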
CONCLUSIONS
pSpatiocyte demonstrates good accuracies, fast runtimes and a significant performance advantage over well-known microscopic particle methods in large-scale simulations of intracellular reaction-diffusion systems. The source code of pSpatiocyte is available at https://spatiocyte.org.
Topics: Algorithms; Computer Simulation; Diffusion; MAP Kinase Signaling System; Models, Biological; Phosphorylation; Software
PubMed: 31996129
DOI: 10.1186/s12859-019-3338-8
Computer Methods and Programs in Biomedicine Aug 2021
BACKGROUND AND OBJECTIVE
Recent research has reported methods that reconstruct cardiac MR images acquired with acceleration factors as high as 15 in Cartesian coordinates. However, the computational cost of these techniques is quite high, taking about 40 min of CPU time on a typical current machine. This delay between acquisition and final result can completely rule out the use of MRI in clinical environments in favor of other techniques, such as CT. Nevertheless, the reconstruction methods reported elsewhere can be parallelized to a high degree, which makes them suitable for GPU-type computing devices. This paper contributes a vendor-independent, device-agnostic implementation of such a method to reconstruct 2D motion-compensated, compressed-sensing MRI sequences in clinically viable times.
METHODS
By leveraging our OpenCLIPER framework, the proposed system works on any computing device (CPU, GPU, DSP, FPGA, etc.) for which an OpenCL implementation is available, and development is significantly simplified compared to a pure OpenCL implementation. In OpenCLIPER, the problem is partitioned into independent black boxes that may be connected as needed, while device initialization and maintenance are handled automatically. Parallel implementations of a groupwise FFD-based registration method and a multicoil extension of the NESTA algorithm have been carried out as OpenCLIPER processes. Our platform also includes significant development and debugging aids. HIP code and precompiled libraries can be integrated seamlessly as well, since OpenCLIPER makes data objects shareable between OpenCL and HIP. This also opens an opportunity to include CUDA source code (via HIP) in prospective developments.
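The black-box partitioning can be pictured with a minimal host-side sketch (plain Python rather than OpenCL, and the class names are illustrative, not OpenCLIPER's actual API): each stage is an independent unit with a single entry point, and the framework, not the stages, handles chaining and hand-off.

```python
from typing import Callable, List

class Process:
    """Stand-in for a 'black box' processing stage: it owns its logic
    and exposes one run() entry point, knowing nothing of its neighbors."""
    def __init__(self, name: str, fn: Callable):
        self.name, self.fn = name, fn

    def run(self, data):
        return self.fn(data)

class Pipeline:
    """Chains independently developed stages; in the real framework,
    device setup and data transfers would also be managed here."""
    def __init__(self, stages: List[Process]):
        self.stages = stages

    def run(self, data):
        for stage in self.stages:        # each stage consumes the
            data = stage.run(data)       # previous stage's output
        return data
```

The design pays off when stages (registration, solver, etc.) are written by different people or target different devices: only the data contract between stages needs to be agreed on.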
RESULTS
The proposed solution can reconstruct a whole 12-14 slice CINE volume, acquired with 19-32 coils, 20 phases, and an acceleration factor ranging from 4 to 8, in a few seconds, with results comparable to another popular platform (BART). If motion compensation is included, reconstruction time is on the order of one minute.
CONCLUSIONS
We have obtained clinically viable times on GPUs from different vendors, with runtimes on some platforms that do not correspond to their market price. We also contribute a parallel groupwise registration subsystem for motion estimation/compensation and a parallel multicoil NESTA subsystem for solving the l1-l2-norm problem.
Topics: Algorithms; Magnetic Resonance Imaging; Prospective Studies; Radiography; Software
PubMed: 34029830
DOI: 10.1016/j.cmpb.2021.106143
BMC Bioinformatics Feb 2016
BACKGROUND
Metagenomics is a genomics research discipline devoted to the study of microbial communities in environmental samples and human and animal organs and tissues. Sequenced metagenomic samples usually comprise reads from a large number of different bacterial communities and hence tend to result in large file sizes, typically ranging between 1-10 GB. This leads to challenges in analyzing, transferring and storing metagenomic data. In order to overcome these data processing issues, we introduce MetaCRAM, the first de novo, parallelized software suite specialized for FASTA and FASTQ format metagenomic read processing and lossless compression.
RESULTS
MetaCRAM integrates algorithms for taxonomy identification and assembly, introduces parallel execution methods, and enables genome reference selection and CRAM-based compression. MetaCRAM also uses novel reference-based compression methods designed through extensive studies of integer compression techniques and through fitting of empirical distributions of metagenomic read-reference positions. MetaCRAM is a lossless method compatible with standard CRAM formats, and it allows for fast selection of relevant files in the compressed domain via maintenance of taxonomy information. The performance of MetaCRAM as a stand-alone compression platform was evaluated on various metagenomic samples from the NCBI Sequence Read Archive, showing 2- to 4-fold compression ratio improvements compared to gzip. On average, the compressed file sizes were 2-13 percent of the original raw metagenomic file sizes.
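The flavor of reference-based integer compression can be illustrated with the simplest possible scheme: sort the read-start positions on the reference, delta-encode them, and pack the (mostly small) gaps with a variable-length byte code. This is a toy stand-in for illustration, not MetaCRAM's actual fitted codec:

```python
def varint_encode(nums):
    """LEB128-style code: 7 payload bits per byte, high bit = continue.
    Small integers (small gaps between sorted positions) take one byte."""
    out = bytearray()
    for n in nums:
        while n >= 0x80:
            out.append((n & 0x7F) | 0x80)
            n >>= 7
        out.append(n)
    return bytes(out)

def varint_decode(data):
    nums, n, shift = [], 0, 0
    for b in data:
        n |= (b & 0x7F) << shift
        shift += 7
        if not b & 0x80:                 # high bit clear: number complete
            nums.append(n)
            n, shift = 0, 0
    return nums

def compress_positions(positions):
    """Sorted alignment positions -> delta gaps -> varint bytes."""
    positions = sorted(positions)
    deltas = [positions[0]] + [b - a for a, b in zip(positions, positions[1:])]
    return varint_encode(deltas)

def decompress_positions(blob):
    """Inverse: varint bytes -> gaps -> cumulative positions."""
    positions, acc = [], 0
    for d in varint_decode(blob):
        acc += d
        positions.append(acc)
    return positions
```

The gain over fixed-width integers grows with coverage: the denser the alignments, the smaller the gaps, and the more positions fit in a single byte each. Fitting the code to the empirical gap distribution, as MetaCRAM does, tightens this further.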
CONCLUSIONS
We described the first architecture for reference-based, lossless compression of metagenomic data. The compression scheme proposed offers significantly improved compression ratios as compared to off-the-shelf methods such as zip programs. Furthermore, it enables running different components in parallel and it provides the user with taxonomic and assembly information generated during execution of the compression pipeline.
AVAILABILITY
The MetaCRAM software is freely available at http://web.engr.illinois.edu/~mkim158/metacram.html. The website also contains a README file and other instructions for running the code. Note that running the code requires a minimum of 16 GB of RAM. In addition, a virtual box is set up on a 4 GB RAM machine for users to run a simple demonstration.
Topics: Classification; Data Compression; Genomics; High-Throughput Nucleotide Sequencing; Humans; Metagenomics
PubMed: 26895947
DOI: 10.1186/s12859-016-0932-x
Cognitive Science Dec 2020
As modern deep networks become more complex and approach human-like capabilities in certain domains, the question arises as to how the representations and decision rules they learn compare to those in humans. In this work, we study representations of sentences in one such artificial system for natural language processing. We first present a diagnostic test dataset to examine the degree of abstract composable structure represented. Analyzing performance on these diagnostic tests indicates a lack of systematicity in representations and decision rules, and reveals a set of heuristic strategies. We then investigate the effect of training distribution on learning these heuristic strategies, and we study changes in these representations with various augmentations to the training set. Our results reveal parallels to the analogous representations in people. We find that these systems can learn abstract rules and generalize them to new contexts under certain circumstances, similar to human zero-shot reasoning. However, we also note some shortcomings in this generalization behavior, similar to human judgment errors like belief bias. Studying these parallels suggests new ways to understand psychological phenomena in humans and informs strategies for building artificial intelligence with human-like language understanding.
Topics: Comprehension; Heuristics; Humans; Language; Machine Learning; Natural Language Processing
PubMed: 33340161
DOI: 10.1111/cogs.12925
PLoS Computational Biology Apr 2023
Addressing many of the major outstanding questions in the fields of microbial evolution and pathogenesis will require analyses of populations of microbial genomes. Although population genomic studies provide the analytical resolution to investigate evolutionary and mechanistic processes at fine spatial and temporal scales (precisely the scales at which these processes occur), microbial population genomic research is currently hindered by the practicalities of obtaining sufficient quantities of the relatively pure microbial genomic DNA necessary for next-generation sequencing. Here we present swga2.0, an optimized and parallelized pipeline for designing selective whole genome amplification (SWGA) primer sets. Unlike previous methods, swga2.0 incorporates active and machine learning methods to evaluate the amplification efficacy of individual primers and primer sets. Additionally, swga2.0 optimizes primer set search and evaluation strategies, including parallelization at each stage of the pipeline, to dramatically decrease program runtime. Here we describe the swga2.0 pipeline, including the empirical data used to identify primer and primer set characteristics that improve amplification performance. Additionally, we evaluate the novel swga2.0 pipeline by designing primer sets that successfully amplify Prevotella melaninogenica, an important component of the lung microbiome in cystic fibrosis patients, from samples dominated by human DNA.
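The core selection pressure in SWGA primer design, namely motifs that are frequent in the target genome but rare in the background, can be caricatured as a frequency ratio. The real swga2.0 pipeline scores primers with learned models of amplification efficacy, so the functions below are purely illustrative:

```python
def primer_score(primer, target_genome, background_genome):
    """Toy SWGA-style criterion: reward binding motifs frequent in the
    target (e.g., microbial) and rare in the background (e.g., human)
    sequence. Uses plain substring counts for simplicity."""
    hits_target = target_genome.count(primer)
    hits_background = background_genome.count(primer)
    return hits_target / (hits_background + 1)   # +1 avoids div-by-zero

def rank_primers(primers, target_genome, background_genome):
    """Order candidate primers by descending target specificity."""
    return sorted(
        primers,
        key=lambda p: primer_score(p, target_genome, background_genome),
        reverse=True,
    )
```

Real pipelines must also score primer *sets* jointly (spacing of binding sites, primer-primer interactions), which is where the combinatorial search cost, and hence the value of parallelization, comes from.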
Topics: Humans; Genomics; Sequence Analysis, DNA; Genome; DNA
PubMed: 37068103
DOI: 10.1371/journal.pcbi.1010137
Journal of Evolutionary Biology Mar 2023
While we know that climate change can potentially cause rapid phenotypic evolution, our understanding of the genetic basis and degree of genetic parallelism of rapid evolutionary responses to climate change is limited. In this study, we combined the resurrection approach with an evolve-and-resequence design to examine genome-wide evolutionary changes following drought. We exposed genetically similar replicate populations of the annual plant Brassica rapa, derived from a field population in southern California, to four generations of experimental drought or watered conditions in a greenhouse. Genome-wide sequencing of ancestral and descendant population pools identified hundreds of SNPs that showed evidence of rapid evolution in response to drought. Several of these were in stress response genes, and two were identified in a prior study of drought response in this species. However, almost all genetic changes were unique among experimental populations, indicating that the evolutionary changes were largely nonparallel, despite the fact that genetically similar replicates of the same founder population had experienced controlled and consistent selection regimes. This nonparallelism of evolution at the genetic level is potentially due to polygenic adaptation allowing multiple different genetic routes to similar phenotypic outcomes. Our findings help to elucidate the relationship between rapid phenotypic and genomic evolution and shed light on the degree of parallelism and predictability of genomic evolution under environmental change.
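The central evolve-and-resequence comparison, finding SNPs whose allele frequency shifts between the ancestral and descendant pools, reduces to a per-site frequency difference. A minimal sketch (the count layout and the flat threshold are illustrative; a real analysis would use a drift-aware statistical test):

```python
def allele_freq_changes(ancestral, descendant):
    """Per-SNP change in alternate-allele frequency between pooled
    sequencing samples. Each entry is a (ref_count, alt_count) pair
    of read counts at one SNP."""
    changes = []
    for (a_ref, a_alt), (d_ref, d_alt) in zip(ancestral, descendant):
        f_anc = a_alt / (a_ref + a_alt)
        f_desc = d_alt / (d_ref + d_alt)
        changes.append(f_desc - f_anc)
    return changes

def candidate_snps(ancestral, descendant, threshold=0.3):
    """Indices of SNPs whose frequency shift exceeds a toy threshold."""
    deltas = allele_freq_changes(ancestral, descendant)
    return [i for i, d in enumerate(deltas) if abs(d) >= threshold]
```

Nonparallelism then shows up as candidate sets that barely overlap between replicate populations, even though each replicate experienced the same selection regime.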
Topics: Brassica rapa; Biological Evolution; Droughts; Genome; Evolution, Molecular
PubMed: 36721268
DOI: 10.1111/jeb.14152
Hepatology (Baltimore, Md.) Apr 2020
BACKGROUND AND AIMS
Biliary atresia (BA) is a devastating neonatal cholangiopathy that progresses to fibrosis and end-stage liver disease by 2 years of age. Portoenterostomy may reestablish biliary drainage, but, despite drainage, virtually all afflicted patients develop fibrosis and progress to end-stage liver disease requiring liver transplantation for survival.
APPROACH AND RESULTS
In the murine model of BA, rhesus rotavirus (RRV) infection of newborn pups results in a cholangiopathy paralleling human BA and has been used to study mechanistic aspects of the disease. Unfortunately, nearly all RRV-infected pups succumb by day of life 14. Thus, in this study we generated an RRV-TUCH rotavirus reassortant (designated as T) that, when injected into newborn mice, causes an obstructive jaundice phenotype with lower mortality rates. Of the mice that survived, 63% developed Ishak stage 3-5 fibrosis with histopathological signs of inflammation/fibrosis and bile duct obstruction.
CONCLUSIONS
This model of rotavirus-induced neonatal fibrosis will provide an opportunity to study disease pathogenesis and has potential to be used in preclinical studies with an objective to identify therapeutic targets that may alter the course of BA.
Topics: Animals; Biliary Atresia; Cell Line; Chlorocebus aethiops; Disease Models, Animal; Humans; Jaundice, Obstructive; Liver Cirrhosis; Mice; Mice, Inbred BALB C; Reassortant Viruses; Rotavirus
PubMed: 31442322
DOI: 10.1002/hep.30907
Neurobiology of Language (Cambridge, Mass.) 2023
Sentence structure, or syntax, is potentially a uniquely creative aspect of the human mind. Neuropsychological experiments in the 1970s suggested parallel syntactic production and comprehension deficits in agrammatic Broca's aphasia, thought to result from damage to syntactic mechanisms in Broca's area in the left frontal lobe. This hypothesis converged with developments in linguistic theory concerning central syntactic mechanisms supporting both language production and comprehension. However, the evidence supporting an association among receptive syntactic deficits, expressive agrammatism, and damage to frontal cortex is equivocal. In addition, the relationship among a distinct grammatical production deficit in aphasia, paragrammatism, and receptive syntax has not been assessed. We used lesion-symptom mapping in three partially overlapping groups of left-hemisphere stroke patients to investigate these issues: grammatical production deficits in a primary group of 53 subjects and syntactic comprehension in larger samples (n = 130 and 218) that overlapped with the primary group. Paragrammatic production deficits were significantly associated with multiple analyses of syntactic comprehension, particularly when incorporating lesion volume as a covariate, but agrammatic production deficits were not. Impaired performance on syntactic comprehension was significantly associated with damage to temporal lobe regions, which were also implicated in paragrammatism, but not with the inferior and middle frontal regions implicated in expressive agrammatism. Our results provide strong evidence against the overarching agrammatism hypothesis. By contrast, they suggest the possibility of an alternative grammatical parallelism hypothesis rooted in paragrammatism and a central syntactic system in the posterior temporal lobe.
PubMed: 37946730
DOI: 10.1162/nol_a_00117