Science Advances Nov 2021
A quantum processor to import, process, and export optical quantum states is a common core technology enabling various photonic quantum information processing. However, there has been no photonic processor that is simultaneously universal, scalable, and programmable. Here, we report on an original loop-based single-mode versatile photonic quantum processor that is designed to be universal, scalable, and programmable. Our processor can perform arbitrarily many steps of programmable quantum operations on a given single-mode optical quantum state by time-domain processing in a dynamically controlled loop-based optical circuit. We use this processor to demonstrate programmable single-mode Gaussian gates and multistep squeezing gates. In addition, we prove that the processor can perform universal quantum operations by injecting appropriate ancillary states and can be straightforwardly extended to a multimode processor. These results show that our processor is programmable, scalable, and potentially universal, making it suitable for general-purpose applications.
PubMed: 34767450
DOI: 10.1126/sciadv.abj6624
Physics in Medicine and Biology Feb 2021
For positron emission tomography (PET) online data acquisition, a centralized coincidence processor (CCP) with single-thread data processing has been used to select coincidence events for many PET scanners. A CCP has the advantages of a highly integrated circuit, compact connections between detector front ends and system electronics, and centralized control of data processing and decision making. However, it also has the drawbacks of data processing delay, difficulty in handling very high count rates of single and coincidence events, and complicated algorithms to implement. These problems are exacerbated when implementing a CCP on a field-programmable gate array (FPGA) due to increased routing congestion and reduced data throughput. Industry has applied non-centralized or distributed data processing to solve these problems, but those solutions remain either proprietary or lack full disclosure of technical details, making the techniques unclear and difficult to adapt for most research communities. In this study, we investigated a set of distributed coincidence processors (DCP) that can address the CCP problems and be implemented relatively easily. Each coincidence processor exclusively connects one detector pair and selects coincidence events from that detector pair only, which breaks a centralized coincidence process into a collection of independent and parallel processes. A DCP can significantly minimize the data processing delay, maximize coincidence count rates, and simplify implementation, since a single coincidence processor is implemented for one detector pair and then replicated for the rest. A prototype DCP with 42 coincidence processors was implemented on an off-the-shelf FPGA development board for a small PET system with 12 detectors configured as 42 detector pairs. DCP performance was tested with both pulsed signals and gamma-ray interactions. There was no coincidence data loss up to the detector's maximum singles count rate (250 k/s).
Approximately 1.2 k registers were utilized for each coincidence processor, and the FPGA resource utilization was proportional to the number of coincidence processors. Coincidence timing spectra confirmed that coincidence events were accurately acquired. In conclusion, complementary to a CCP, a DCP can provide high count-rate capability with a simplified algorithm for implementation, and is potentially a practical solution for online acquisition in a PET system with a large number of detector pairs or for ultrahigh-throughput imaging.
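The per-pair selection idea described above can be sketched in software. This is an illustrative analogue of one DCP coincidence processor, not the paper's FPGA implementation; the function name and the 6 ns window value are assumptions for the example only:

```python
def match_pair(singles_a, singles_b, window_ns=6):
    """Select coincidence events for ONE detector pair, mirroring the
    DCP idea that each coincidence processor handles exactly one pair.
    `singles_a` / `singles_b` are time-sorted event timestamps (ns)."""
    coincidences = []
    i = j = 0
    while i < len(singles_a) and j < len(singles_b):
        dt = singles_a[i] - singles_b[j]
        if abs(dt) <= window_ns:
            # timestamps fall within the coincidence window: record a pair
            coincidences.append((singles_a[i], singles_b[j]))
            i += 1
            j += 1
        elif dt < 0:
            i += 1  # event from detector A is too early; advance A
        else:
            j += 1  # event from detector B is too early; advance B
    return coincidences
```

Because each processor sees only its own pair's singles streams, the instances run independently, which is what lets the design replicate one processor 42 times rather than route all streams through a single bottleneck.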
Topics: Algorithms; Gamma Rays; Humans; Image Processing, Computer-Assisted; Positron-Emission Tomography; Software
PubMed: 33590827
DOI: 10.1088/1361-6560/abde85
Nanomaterials (Basel, Switzerland) Jun 2021
Review
In emerging artificial intelligence applications, massive matrix operations require high computing speed and energy efficiency. Optical computing can realize high-speed parallel information processing with ultra-low energy consumption on photonic integrated platforms or in free space, which can well meet these domain-specific demands. In this review, we first introduce the principles of photonic matrix computing implemented by three mainstream schemes, and then review the research progress of optical neural networks (ONNs) based on photonic matrix computing. In addition, we discuss the advantages of optical computing architectures over electronic processors as well as current challenges of optical computing, and highlight some promising prospects for future development.
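As an illustration of the building block behind mesh-based photonic matrix computing (one of the mainstream schemes the review covers), the 2x2 transfer matrix of a programmable Mach-Zehnder interferometer can be written down directly. The phase convention below is one of several in use and is chosen for illustration only:

```python
import cmath
import math

def mzi(theta, phi):
    """2x2 transfer matrix of a Mach-Zehnder interferometer with internal
    phase shift `theta` and input phase shift `phi`. Meshes of such blocks
    are one way programmable photonic circuits realize arbitrary unitary
    matrices (and, with an added diagonal amplitude layer, arbitrary
    matrices), so a matrix-vector product happens in a single optical pass."""
    c = math.cos(theta / 2)
    s = math.sin(theta / 2)
    p = cmath.exp(1j * phi)
    # Unitary up to a global phase; sign/phase convention is illustrative.
    return [[p * s, c],
            [p * c, -s]]
```

Composing a triangular or rectangular mesh of these blocks gives the programmable unitary layer; the matrix elements are set by tuning the phases, which is why reprogramming the "weights" of an ONN amounts to retuning phase shifters.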
PubMed: 34206814
DOI: 10.3390/nano11071683
JMIR Bioinformatics and Biotechnology May 2024
Review
BACKGROUND
Genetic data are widely considered inherently identifiable. However, genetic data sets come in many shapes and sizes, and the feasibility of privacy attacks depends on their specific content. Assessing the reidentification risk of genetic data is complex, yet there is a lack of guidelines or recommendations that support data processors in performing such an evaluation.
OBJECTIVE
This study aims to gain a comprehensive understanding of the privacy vulnerabilities of genetic data and create a summary that can guide data processors in assessing the privacy risk of genetic data sets.
METHODS
We conducted a 2-step search, in which we first identified 21 reviews published between 2017 and 2023 on the topic of genomic privacy and then analyzed all references cited in the reviews (n=1645) to identify 42 unique original research studies that demonstrate a privacy attack on genetic data. We then evaluated the type and components of genetic data exploited for these attacks as well as the effort and resources needed for their implementation and their probability of success.
RESULTS
From our literature review, we derived 9 nonmutually exclusive features of genetic data that are both inherent to any genetic data set and informative about privacy risk: biological modality, experimental assay, data format or level of processing, germline versus somatic variation content, content of single nucleotide polymorphisms, short tandem repeats, aggregated sample measures, structural variants, and rare single nucleotide variants.
CONCLUSIONS
On the basis of our literature review, the evaluation of these 9 features covers the great majority of privacy-critical aspects of genetic data and thus provides a foundation and guidance for assessing genetic data risk.
PubMed: 38935957
DOI: 10.2196/54332
BioRxiv : the Preprint Server For... Dec 2023
Here, we present FLiPPR, or FragPipe LiP (limited proteolysis) Processor, a tool that facilitates the analysis of data from limited proteolysis mass spectrometry (LiP-MS) experiments following primary search and quantification in FragPipe. LiP-MS has emerged as a method that can provide proteome-wide information on protein structure and has been applied to a range of biological and biophysical questions. Although LiP-MS can be carried out with standard laboratory reagents and mass spectrometers, analyzing the data can be slow and poses unique challenges compared to typical quantitative proteomics workflows. To address this, we leverage the fast, sensitive, and accurate search and label-free quantification algorithms in FragPipe and then process its output in FLiPPR. FLiPPR formalizes a specific data imputation heuristic that carefully uses missing data in LiP-MS experiments to report on the most significant structural changes. Moreover, FLiPPR introduces a new data merging scheme (from ions to cut-sites) and a protein-centric multiple hypothesis correction scheme, collectively enabling processed LiP-MS datasets to be more robust and less redundant. These improvements substantially strengthen statistical trends when previously published data are reanalyzed with the FragPipe/FLiPPR workflow. As a final feature, FLiPPR facilitates the collection of structural metadata to identify correlations between experiments and structural features. We hope that FLiPPR will lower the barrier for more users to adopt LiP-MS, standardize statistical procedures for LiP-MS data analysis, and systematize output to facilitate eventual larger-scale integration of LiP-MS data.
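The abstract does not specify which correction procedure FLiPPR's protein-centric scheme uses, but the general idea of correcting within each protein's family of cut-sites (rather than across the whole dataset at once) can be sketched with the Benjamini-Hochberg procedure as a stand-in; the function names and grouping are illustrative assumptions:

```python
def benjamini_hochberg(pvals):
    """Benjamini-Hochberg adjusted p-values (q-values) for one family."""
    n = len(pvals)
    order = sorted(range(n), key=lambda k: pvals[k])
    adjusted = [0.0] * n
    prev = 1.0
    # walk from the largest p-value down, enforcing monotonicity
    for rank_from_end, idx in enumerate(reversed(order)):
        rank = n - rank_from_end  # 1-based rank of this p-value
        prev = min(prev, pvals[idx] * n / rank)
        adjusted[idx] = prev
    return adjusted

def protein_centric_correction(site_pvals_by_protein):
    """Apply the correction separately within each protein's cut-site
    family, in the spirit of a protein-centric scheme."""
    return {prot: benjamini_hochberg(pv)
            for prot, pv in site_pvals_by_protein.items()}
```

Grouping by protein keeps the multiplicity burden local: a protein with few quantified cut-sites is not penalized for the thousands of tests performed elsewhere in the proteome.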
PubMed: 38106106
DOI: 10.1101/2023.12.04.569947
European Archives of... Oct 2022
PURPOSE
The Vibrant Soundbridge (VSB) was introduced in 1996, and the fourth generation of its audio processor was recently released. This clinical study evaluates the audiological performance and subjective satisfaction of the new SAMBA 2 audio processor compared to its predecessor, the SAMBA.
METHOD
Fifteen VSB users tested both audio processors for approximately 3 weeks. Air conduction and bone conduction thresholds and unaided and aided sound field thresholds were measured with both devices. Speech performance in quiet (Freiburg monosyllables) and noise (OLSA) was evaluated as well as subjective listening effort (ACALES) and questionnaire outcomes (SSQ12 and APSQ). In addition, data from 16 subjects with normal hearing were gathered on sound field tests and ACALES.
RESULTS
Both audio processors showed substantial improvement compared to the unaided condition. The SAMBA and SAMBA 2 had comparable sound field thresholds, while the SAMBA 2 yielded significantly better speech perception in quiet and in noise, reduced listening effort, and improved subjective satisfaction compared with the SAMBA.
CONCLUSION
The SAMBA 2 audio processor, compared to its predecessor the SAMBA, offers improved performance across the parameters investigated in this study. Patients with a VSB implant would benefit from an upgrade to the SAMBA 2.
Topics: Bone Conduction; Hearing; Hearing Aids; Humans; Ossicular Prosthesis; Speech Perception
PubMed: 34874465
DOI: 10.1007/s00405-021-07207-4
Journal of Clinical Medicine Dec 2020
Review
The islet purification step in clinical islet isolation is important for minimizing the risks associated with intraportal infusion. Continuous density gradient with a COBE 2991 cell processor is commonly used for clinical islet purification. However, the high shear force involved in purification with the COBE 2991 cell processor causes mechanical damage to the islets. We and other groups have demonstrated human/porcine islet purification using large cylindrical plastic bottles. Shear stress can be minimized or eliminated with large cylindrical plastic bottles because the bottles have no narrow segment and no centrifugation is required during the tissue loading and collection steps of islet purification. This review describes current advances in islet purification from large mammals and humans using a COBE 2991 cell processor versus large cylindrical plastic bottles.
PubMed: 33374512
DOI: 10.3390/jcm10010010
Chemical Science Jul 2021
The implementation of a quantum computer requires both protecting information from environmental noise and implementing quantum operations efficiently. Achieving this with a fully fault-tolerant platform, in which quantum gates are implemented within quantum-error-corrected units, poses stringent requirements on the coherence and control of the hardware. A more feasible architecture could consist of connected memories, which support error correction by enhancing coherence, and processing units, which ensure fast manipulations. We present here a supramolecular {CrNi}-Cu system that could form the elementary unit of this platform, in which the electronic spin-1/2 of {CrNi} provides the processor and the naturally isolated nuclear spin-3/2 of the Cu ion is used to encode a logical unit with embedded quantum error correction. We demonstrate by realistic simulations that microwave pulses allow us to rapidly implement gates on the processor and to swap information between the processor and the quantum memory. By combining storage in the Cu nuclear spin with quantum error correction, information can be protected for times much longer than the processor coherence time.
PubMed: 34276940
DOI: 10.1039/d1sc01506k
Nature Apr 2022
The ability to engineer parallel, programmable operations between desired qubits within a quantum processor is key for building scalable quantum information systems. In most state-of-the-art approaches, qubits interact locally, constrained by the connectivity associated with their fixed spatial layout. Here we demonstrate a quantum processor with dynamic, non-local connectivity, in which entangled qubits are coherently transported in a highly parallel manner across two spatial dimensions, between layers of single- and two-qubit operations. Our approach makes use of neutral atom arrays trapped and transported by optical tweezers; hyperfine states are used for robust quantum information storage, and excitation into Rydberg states is used for entanglement generation. We use this architecture to realize programmable generation of entangled graph states, such as cluster states and a seven-qubit Steane code state. Furthermore, we shuttle entangled ancilla arrays to realize a surface code state with thirteen data and six ancillary qubits and a toric code state on a torus with sixteen data and eight ancillary qubits. Finally, we use this architecture to realize a hybrid analogue-digital evolution and use it for measuring entanglement entropy in quantum simulations, experimentally observing non-monotonic entanglement dynamics associated with quantum many-body scars. Realizing a long-standing goal, these results provide a route towards scalable quantum processing and enable applications ranging from simulation to metrology.
PubMed: 35444318
DOI: 10.1038/s41586-022-04592-6
Micromachines Jan 2021
The development of the mobile industry brings about demand for high-performance embedded systems that meet the requirements of user-centered applications. Because memory resources are limited, employing compressed data is efficient for an embedded system. However, the workload of data decompression creates a severe bottleneck for the embedded processor. One way to alleviate the bottleneck is to integrate a hardware accelerator alongside the processor, constructing a system-on-chip (SoC) for the embedded system. In this paper, we propose a lossless decompression accelerator for an embedded processor, which supports LZ77 decompression and static Huffman decoding for the inflate algorithm. The accelerator is implemented on a field programmable gate array (FPGA) to verify its functional suitability and fabricated in a Samsung 65 nm complementary metal-oxide-semiconductor (CMOS) process. The performance of the accelerator is evaluated with the Canterbury corpus benchmark and achieves a throughput of up to 20.7 MB/s at a 50 MHz system clock frequency.
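The LZ77 back-reference expansion that such an accelerator performs in hardware can be sketched in software. This is a generic illustration of the inflate copy step, not the paper's hardware design; the function name is an assumption:

```python
def lz77_copy(output, distance, length):
    """Core LZ77 back-reference expansion used in inflate-style
    decompression: copy `length` bytes starting `distance` bytes back
    in the already-decoded output. Copying byte by byte is deliberate,
    so overlapping references (distance < length) repeat the recent
    output, e.g. run-length-style patterns."""
    start = len(output) - distance
    for k in range(length):
        output.append(output[start + k])
    return output
```

The sequential dependence visible here (each copied byte may depend on a byte produced just before it) is part of what makes software decompression slow and motivates a dedicated hardware datapath.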
PubMed: 33572563
DOI: 10.3390/mi12020145