Neurosurgical Review Feb 2023 (Review)
The radial nerve is the largest branch of the posterior cord of the brachial plexus and one of its five terminal branches. Entrapment of the radial nerve at the elbow is the third most common compressive neuropathy of the upper limb, after carpal tunnel and cubital tunnel syndromes. Because the incidence is relatively low and many structures can compress the nerve along its course, entrapment of the radial nerve or its branches can pose a considerable clinical challenge. Several of these compressing structures reflect normal or variant anatomy. The most common compressive neuropathy of the radial nerve is posterior interosseous nerve syndrome. Appropriate treatment requires familiarity with the anatomical traits that influence the presenting symptoms and the related prognoses. The aim of this study is to describe the compressive neuropathies of the radial nerve, emphasizing the anatomical perspective and highlighting the traps awaiting physicians who evaluate these entrapments.
Topics: Humans; Radial Neuropathy; Radial Nerve; Nerve Compression Syndromes; Upper Extremity; Elbow Joint
PubMed: 36781706
DOI: 10.1007/s10143-023-01944-2 -
Neural Computing & Applications 2021
Deep neural networks (DNNs) have demonstrated superior performance in most learning tasks. However, a DNN typically contains a large number of parameters and operations, requiring a high-end processing platform for high-speed execution. To address this challenge, hardware-software co-design strategies, which involve joint DNN optimization and hardware implementation, can be applied. These strategies reduce the parameters and operations of the DNN and fit it onto a low-resource processing platform. In this paper, a DNN model is used to analyze data captured with an electrochemical method to determine the concentration of a neurotransmitter and identify the recording electrode. Next, a DNN miniaturization algorithm combining pruning and compression is introduced to reduce the DNN's resource utilization. Here, the DNN is made sparse by pruning a percentage of its weights, and the Lempel-Ziv-Welch (LZW) algorithm is then applied to compress the sparse DNN. Next, a DNN overlay combining decompression of the DNN parameters with DNN inference is developed to allow execution of the DNN on the FPGA of a PYNQ-Z2 board. This approach avoids the need for a complex quantization algorithm. It compresses the DNN by a factor of 6.18, leading to about a 50% reduction in resource utilization on the FPGA.
PubMed: 34025038
DOI: 10.1007/s00521-021-06113-4 -
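The pruning-plus-LZW miniaturization step described above can be sketched in plain Python. This is an illustrative stand-in, not the paper's implementation: `prune` performs simple magnitude pruning, and `lzw_encode` is a textbook Lempel-Ziv-Welch coder applied to a serialized weight list.

```python
def prune(weights, fraction):
    """Zero out roughly the smallest-magnitude `fraction` of weights."""
    k = int(len(weights) * fraction)
    if k == 0:
        return list(weights)
    thresh = sorted(abs(w) for w in weights)[k - 1]
    return [0.0 if abs(w) <= thresh else w for w in weights]

def lzw_encode(data: bytes) -> list:
    """Textbook LZW: emit dictionary indices, growing the dictionary as we go."""
    table = {bytes([i]): i for i in range(256)}
    w, out = b"", []
    for byte in data:
        wb = w + bytes([byte])
        if wb in table:
            w = wb
        else:
            out.append(table[w])
            table[wb] = len(table)
            w = bytes([byte])
    if w:
        out.append(table[w])
    return out

# Pruned weight lists serialize with many repeated zero runs,
# which is exactly the redundancy LZW exploits.
sparse = prune([0.9, -0.1, 0.5, 0.05, -0.02, 0.7, 0.01, -0.6], 0.5)
codes = lzw_encode(repr(sparse).encode())
```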
Entropy (Basel, Switzerland) Mar 2021
Detection of the temporal reversibility of a given process is an interesting time series analysis scheme that enables useful characterisation of processes and offers insight into the underlying mechanisms generating the time series. Reversibility detection measures have been widely employed in the study of ecological, epidemiological and physiological time series. Further, the time reversal of given data provides a promising tool for the analysis of causality measures as well as for studying the causal properties of processes. In this work, the authors' recently proposed Compression-Complexity Causality (CCC) measure is shown to be free of the assumption that the "cause precedes the effect", making it a promising tool for causal analysis of reversible processes. CCC is a data-driven interventional measure of causality (second rung on the ladder of causation) that is based on Effort-To-Compress (ETC), a well-established, robust method for characterizing the complexity of time series for analysis and classification. For the detection of the temporal reversibility of processes, we propose a novel ETC-based asymmetry measure. This measure compares the probability of occurrence of patterns at different scales between the forward-time and time-reversed process using ETC. We test the performance of the measure on a number of simulated processes and demonstrate its effectiveness in determining the asymmetry of real-world time series of sunspot numbers, digits of the transcendental number π and heart interbeat interval variability.
PubMed: 33802138
DOI: 10.3390/e23030327 -
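A minimal version of the ETC-based reversibility idea can be sketched as follows, assuming ETC is computed by non-sequential recursive pair substitution (NSRPS) on integer symbol sequences; the paper's exact measure, normalization and multi-scale comparison will differ.

```python
from collections import Counter

def etc(seq):
    """Effort-To-Compress: number of NSRPS passes until the sequence
    becomes constant. Assumes integer symbols."""
    s = list(seq)
    steps = 0
    while len(set(s)) > 1:
        pairs = Counter(zip(s, s[1:]))       # frequency of adjacent pairs
        pair = max(pairs, key=pairs.get)     # most frequent pair
        new = max(s) + 1                     # fresh symbol for the substitution
        out, i = [], 0
        while i < len(s):
            if i + 1 < len(s) and (s[i], s[i + 1]) == pair:
                out.append(new); i += 2
            else:
                out.append(s[i]); i += 1
        s = out
        steps += 1
    return steps

def asymmetry(seq):
    """Crude reversibility proxy: ETC difference between the forward
    and time-reversed sequence."""
    return abs(etc(seq) - etc(seq[::-1]))
```

An asymmetry near zero suggests the sequence is statistically similar under time reversal, i.e. temporally reversible.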
Nature Communications Nov 2023
During tumor progression, cancer-associated fibroblasts (CAFs) accumulate in tumors and produce an excessive extracellular matrix (ECM), forming a capsule that enwraps cancer cells. This capsule acts as a barrier that restricts tumor growth, leading to a buildup of intratumoral pressure. Combining genetic and physical manipulations in vivo with microfabrication and force measurements in vitro, we found that the CAF capsule is not a passive barrier but instead actively compresses cancer cells using actomyosin contractility. Abrogation of CAF contractility in vivo leads to the dissipation of compressive forces and impairment of capsule formation. By mapping CAF force patterns in 3D, we show that compression is a CAF-intrinsic property independent of cancer cell growth. Supracellular coordination of CAFs is achieved through fibronectin cables that serve as scaffolds allowing force transmission. Cancer cells mechanosense CAF compression, resulting in altered localization of the transcriptional regulator YAP and a decrease in proliferation. Our study unveils that the contractile capsule actively compresses cancer cells, modulates their mechanical signaling, and reorganizes tumor morphology.
Topics: Cancer-Associated Fibroblasts; Mechanotransduction, Cellular; Cell Line, Tumor; Fibroblasts; Tumor Microenvironment; Neoplasms
PubMed: 37907483
DOI: 10.1038/s41467-023-42382-4 -
Journal of Biomedical Informatics May 2021
Causal inference is one of the most fundamental problems across all domains of science. We address the problem of inferring a causal direction from two observed discrete symbolic sequences X and Y. We present a framework which relies on lossless compressors for inferring context-free grammars (CFGs) from sequence pairs and quantifies the extent to which the grammar inferred from one sequence compresses the other sequence. We infer X causes Y if the grammar inferred from X better compresses Y than in the other direction. To put this notion to practice, we propose three models that use the Compression-Complexity Measures (CCMs) - Lempel-Ziv (LZ) complexity and Effort-To-Compress (ETC) to infer CFGs and discover causal directions without demanding temporal structures. We evaluate these models on synthetic and real-world benchmarks and empirically observe performances competitive with current state-of-the-art methods. Lastly, we present two unique applications of the proposed models for causal inference directly from pairs of genome sequences belonging to the SARS-CoV-2 virus. Using numerous sequences, we show that our models capture causal information exchanged between genome sequence pairs, presenting novel opportunities for addressing key issues in sequence analysis to investigate the evolution of virulence and pathogenicity in future applications.
Topics: Algorithms; COVID-19; Causality; Data Compression; Humans; Models, Theoretical; SARS-CoV-2
PubMed: 33722730
DOI: 10.1016/j.jbi.2021.103724 -
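The direction-inference rule can be imitated with an off-the-shelf compressor instead of the CCM-inferred grammars used in the paper: prime DEFLATE with one sequence as a preset dictionary and count the bytes it then needs for the other. The `cost` helper and the zlib stand-in are illustrative assumptions, not the authors' models.

```python
import zlib

def cost(target: bytes, source: bytes) -> int:
    """Bytes needed to encode `target` by a compressor primed with a
    preset dictionary built from `source`."""
    c = zlib.compressobj(level=9, zdict=source)
    return len(c.compress(target) + c.flush())

def infer_direction(x: bytes, y: bytes) -> str:
    """Infer X -> Y if a model built from X compresses Y better
    than a model built from Y compresses X."""
    return "X causes Y" if cost(y, x) < cost(x, y) else "Y causes X"
```

The design choice mirrors the paper's asymmetry test: whichever sequence yields the better "grammar" for the other is taken as the cause.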
Optics Express Jun 2023
Recently introduced, spaceplates achieve the propagation of light over a distance greater than their thickness. In this way, they compress optical space, reducing the required distance between optical elements in an imaging system. Here we introduce a spaceplate based on conventional optics in a 4-f arrangement, mimicking the transfer function of free space in a thinner system; we term this device a three-lens spaceplate. It is broadband, polarization-independent, and can be used for meter-scale space compression. We experimentally measure compression ratios up to 15.6, replacing up to 4.4 meters of free space, three orders of magnitude greater than current optical spaceplates. We demonstrate that three-lens spaceplates reduce the length of a full-color imaging system, albeit with reductions in resolution and contrast. We present theoretical limits on the numerical aperture and the compression ratio. Our design presents a simple, accessible, cost-effective method for optically compressing large amounts of space.
PubMed: 37381385
DOI: 10.1364/OE.487255 -
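Under the usual definition of a spaceplate's compression ratio, R = d_eff / L (effective free-space distance replaced, divided by the physical device length), the quoted figures imply a device roughly 0.28 m long. A quick check, assuming that definition applies here:

```python
# Figures quoted in the abstract
d_eff = 4.4           # metres of free space replaced
R = 15.6              # measured compression ratio

# Implied physical length of the three-lens system (assuming R = d_eff / L)
L = d_eff / R
print(round(L, 2))    # ~0.28 m
```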
PloS One 2020
The development of high-throughput sequencing technology has generated huge amounts of DNA data. Many general compression algorithms, such as LZ77, are not ideal for compressing DNA data. On the basis of Nour and Sharawi's method, we propose a new lossless, reference-free method to improve compression performance. The original sequences are converted into eight intermediate files and six final files; the LZ77 algorithm is then used to compress the six final files. The results show that compression time is decreased by 83% and decompression time by 54% on average. The compression rate is almost the same as that of Nour and Sharawi's method, which is the fastest method so far. Moreover, our method has a wider range of application than Nour and Sharawi's method. Compared with some very advanced compression tools, such as XM and FCM-Mx, our method's compression time is much smaller, decreasing the time by more than 90% on average.
Topics: Algorithms; DNA; Data Compression; Genomics; High-Throughput Nucleotide Sequencing; Sequence Analysis, DNA; Software
PubMed: 33237908
DOI: 10.1371/journal.pone.0238220 -
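The idea of transforming sequences before LZ compression can be illustrated with the common 2-bit packing of bases, with zlib's DEFLATE (LZ77 + Huffman coding) standing in for plain LZ77; this is a generic sketch, not Nour and Sharawi's eight-file transformation.

```python
import zlib

CODE = {"A": 0, "C": 1, "G": 2, "T": 3}

def pack(seq: str) -> bytes:
    """Pack 4 bases per byte, 2 bits each. A real codec would also store
    len(seq) so a trailing partial byte can be decoded unambiguously."""
    out = bytearray()
    for i in range(0, len(seq), 4):
        b = 0
        for base in seq[i:i + 4]:
            b = (b << 2) | CODE[base]
        out.append(b)
    return bytes(out)

raw = "ACGT" * 100
compressed = zlib.compress(pack(raw))  # pack first, then LZ-compress
```

Packing alone already quarters the size; the LZ stage then removes whatever repetition survives in the packed bytes.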
Proceedings of the National Academy of... Aug 2021
Many complex networks depend upon biological entities for their preservation. Such entities, from human cognition to evolution, must first encode and then replicate those networks under marked resource constraints. Networks that survive are those that are amenable to constrained encoding, or, in other words, are compressible. But how compressible is a network? And what features make one network more compressible than another? Here, we answer these questions by modeling networks as information sources before compressing them using rate-distortion theory. Each network yields a unique rate-distortion curve, which specifies the minimal amount of information that remains at a given scale of description. A natural definition then emerges for the compressibility of a network: the amount of information that can be removed via compression, averaged across all scales. Analyzing an array of real and model networks, we demonstrate that compressibility increases with two common network properties: transitivity (or clustering) and degree heterogeneity. These results indicate that hierarchical organization, which is characterized by modular structure and heterogeneous degrees, facilitates compression in complex networks. Generally, our framework sheds light on the interplay between a network's structure and its capacity to be compressed, enabling investigations into the role of compression in shaping real-world networks.
Topics: Algorithms; Cluster Analysis; Community Networks; Computer Communication Networks; Data Compression; Humans; Models, Theoretical; Random Allocation
PubMed: 34349019
DOI: 10.1073/pnas.2023473118 -
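Transitivity, one of the two properties the authors link to compressibility, has a simple closed form: the fraction of connected triples that are closed into triangles. A small sketch over an adjacency dictionary (undirected graph assumed):

```python
from itertools import combinations

def transitivity(adj):
    """Global clustering coefficient of an undirected graph given as a
    dict mapping each vertex to the set of its neighbours."""
    closed = triples = 0
    for v, nbrs in adj.items():
        nbrs = sorted(nbrs)
        triples += len(nbrs) * (len(nbrs) - 1) // 2  # triples centred on v
        for a, b in combinations(nbrs, 2):
            if b in adj[a]:                          # the triple is closed
                closed += 1
    return closed / triples if triples else 0.0
```

A triangle scores 1.0 and a star (hub with unconnected leaves) scores 0.0, matching the intuition that tightly clustered networks carry more removable redundancy.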
Biomedical Engineering Letters May 2020
Electrocardiogram (ECG) data compression has numerous applications. The time needed to generate compressed samples is a vital factor for ambulatory devices, since data should be sent to the physician as soon as possible. In addition, some wearable ECG recorders have limited power and may only be capable of running simple algorithms. With the aim of increasing the speed and simplicity of the compressor, we propose a system architecture that can generate compressed ECG samples by a linear method with a compression ratio (CR) of 75%. We exploit the sparsity of the ECG signal and propose a system based on compressed sensing (CS) that can compress ECG samples almost in real time. We apply CS with a very small sensing matrix in order to accelerate the compression phase and thereby reduce power consumption. In the recovery phase, we use the recently developed Kronecker technique to improve the quality of the recovered signal. The system is designed from full-adder/subtractor (FAS) units and shift registers, without any external processor or training algorithm.
PubMed: 32431956
DOI: 10.1007/s13534-020-00148-7
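The linear sensing step of compressed sensing, y = Φx, can be sketched in software. The ±1 Bernoulli matrix, window size and seed below are illustrative assumptions; the paper's FAS/shift-register hardware and its Kronecker-based recovery phase are not modeled here.

```python
import random
random.seed(0)  # reproducible sensing matrix

def sense(x, m):
    """Compress n samples into m measurements, y = Phi @ x, using a
    random +/-1 Bernoulli sensing matrix Phi of shape (m, n)."""
    n = len(x)
    phi = [[random.choice((-1, 1)) for _ in range(n)] for _ in range(m)]
    y = [sum(p * xi for p, xi in zip(row, x)) for row in phi]
    return phi, y

window = [float(i % 7) for i in range(32)]  # toy stand-in for an ECG window
phi, y = sense(window, len(window) // 4)    # keep 8 of 32 values: CR = 75%
```

Because each measurement is a sum of signed samples, the compression stage needs only adders and subtractors, which is what makes it attractive for low-power hardware.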