IEEE Transactions on Pattern Analysis and Machine Intelligence, Oct 2022
We present the first systematic study on concealed object detection (COD), which aims to identify objects that are visually embedded in their background. The high intrinsic similarity between concealed objects and their background makes COD far more challenging than traditional object detection/segmentation. To better understand this task, we collect a large-scale dataset, called COD10K, which consists of 10,000 images covering concealed objects in diverse real-world scenarios from 78 object categories. Further, we provide rich annotations, including object categories, object boundaries, challenging attributes, object-level labels, and instance-level annotations. COD10K is the largest COD dataset to date, with the richest annotations, which enables comprehensive concealed object understanding and can even be used to help progress several other vision tasks, such as detection, segmentation, and classification. Motivated by how animals hunt in the wild, we also design a simple but strong baseline for COD, termed the Search Identification Network (SINet). Without any bells and whistles, SINet outperforms twelve cutting-edge baselines on all datasets tested, making it a robust, general architecture that could serve as a catalyst for future research in COD. Finally, we provide some interesting findings and highlight several potential applications and future directions. To spark research in this new field, our code, dataset, and online demo are available at our project page: http://mmcheng.net/cod.
Topics: Algorithms; Animals; Image Interpretation, Computer-Assisted
PubMed: 34061739
DOI: 10.1109/TPAMI.2021.3085766
NDSS Symposium, 2023
When sharing relational databases with other parties, in addition to providing a high-quality (high-utility) database to the recipients, a database owner also aims to have (i) privacy guarantees for the data entries and (ii) liability guarantees (via fingerprinting) in case of unauthorized redistribution. However, (i) and (ii) are orthogonal objectives, because when sharing a database with multiple recipients, privacy via data sanitization requires adding noise once (and sharing the same noisy version with all recipients), whereas liability via unique fingerprint insertion requires adding different noise to each shared copy to distinguish the recipients. Although achieving (i) and (ii) together is possible in a naïve way (e.g., differentially-private database perturbation or synthesis followed by fingerprinting), this approach significantly degrades the utility of the shared databases. In this paper, we achieve privacy and liability guarantees simultaneously by proposing a novel entry-level differentially-private (DP) fingerprinting mechanism for relational databases that avoids large utility degradation. The proposed mechanism fulfills the privacy and liability requirements by leveraging the inherent randomization of fingerprinting and transforming it into provable privacy guarantees. Specifically, we devise a bit-level randomized response scheme to achieve a differential privacy guarantee for arbitrary data entries when sharing the entire database, and then, based on this, we develop an entry-level DP fingerprinting mechanism. We theoretically analyze the connections between privacy, fingerprint robustness, and database utility by deriving closed-form expressions. We also propose a sparse vector technique-based solution to control the cumulative privacy loss when fingerprinted copies of a database are shared with multiple recipients.
We experimentally show that our mechanism achieves strong fingerprint robustness (e.g., the fingerprint cannot be compromised even if the malicious database recipient modifies/distorts more than half of the entries in its received fingerprinted copy), and higher database utility compared to various baseline methods (e.g., application-dependent database utility of the shared database achieved by the proposed mechanism is higher than that of the considered baselines).
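The bit-level randomized response this mechanism builds on can be sketched as follows. This is a minimal illustration of the standard randomized response primitive, not the paper's implementation; the function names and the per-bit epsilon are assumptions for illustration only:

```python
import math
import random

def randomized_response_bit(bit: int, epsilon: float) -> int:
    """Report the true bit with probability e^eps / (1 + e^eps), else flip it.
    This gives an epsilon-differential-privacy guarantee for a single bit."""
    p_keep = math.exp(epsilon) / (1.0 + math.exp(epsilon))
    return bit if random.random() < p_keep else 1 - bit

def perturb_entry(value: int, n_bits: int, epsilon: float) -> int:
    """Apply bit-level randomized response to each bit of an integer entry.
    By sequential composition, the whole entry is (n_bits * epsilon)-DP."""
    out = 0
    for i in range(n_bits):
        bit = (value >> i) & 1
        out |= randomized_response_bit(bit, epsilon) << i
    return out
```

With a large epsilon the entry is reported almost verbatim; with a small epsilon each bit is close to a fair coin, which is the randomization the fingerprint can be hidden inside.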
PubMed: 37275390
DOI: 10.14722/ndss.2023.24693
IEEE Transactions on Pattern Analysis and Machine Intelligence, Sep 2022
Vision and language understanding techniques have achieved remarkable progress, but they still struggle with problems involving very fine-grained details. For example, when a robot is told to "bring me the book in the girl's left hand", most existing methods fail if the girl holds one book in her left hand and another in her right. In this work, we introduce a new task named human-centric relation segmentation (HRS), a fine-grained case of human-object interaction detection (HOI-det). HRS aims to predict the relations between a human and surrounding entities and to identify the relation-correlated human parts, which are represented as pixel-level masks. For the exemplar case above, our HRS task produces results in the form of the relation triplet 〈girl [left hand], hold, book〉 and extracts segmentation masks of the book, with which the robot can easily accomplish the grabbing task. Correspondingly, we collect a new Person In Context (PIC) dataset for this task, which contains 17,122 high-resolution images with densely annotated entity segmentations and relations, covering 141 object categories, 23 relation categories, and 25 semantic human parts. We also propose a Simultaneous Matching and Segmentation (SMS) framework as a solution to the HRS task. It contains three parallel branches for entity segmentation, subject-object matching, and human parsing. Specifically, the entity segmentation branch obtains entity masks via dynamically generated conditional convolutions; the subject-object matching branch detects the existence of any relations, links the corresponding subjects and objects by displacement estimation, and classifies the interacted human parts; and the human parsing branch generates pixelwise human part labels. The outputs of the three branches are fused to produce the final HRS results. Extensive experiments on the PIC and V-COCO datasets show that the proposed SMS method outperforms baselines at a 36 FPS inference speed.
Notably, SMS outperforms the best-performing baseline m-KERN at only 17.6 percent of its time cost. The dataset and code will be released at http://picdataset.com/challenge/index/.
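The idea of predicting entity masks with dynamically generated conditional convolutions can be illustrated with a minimal sketch. The shapes, names, and the single 1x1-kernel simplification below are assumptions for illustration only, not the SMS architecture:

```python
import numpy as np

def dynamic_conditional_mask(feat: np.ndarray, inst_weights: np.ndarray) -> np.ndarray:
    """Predict one entity's mask from a shared feature map using per-instance
    convolution weights generated by the network (here simplified to one
    1x1 kernel): each instance gets its own filter, applied to shared features."""
    # feat: (C, H, W) shared feature map; inst_weights: (C,) per-instance kernel
    logits = np.einsum("c,chw->hw", inst_weights, feat)
    return 1.0 / (1.0 + np.exp(-logits))  # per-pixel mask probability

rng = np.random.default_rng(0)
feat = rng.normal(size=(8, 16, 16))       # toy shared features
inst_weights = rng.normal(size=(8,))      # toy dynamically generated kernel
mask = dynamic_conditional_mask(feat, inst_weights)
```

The point of the dynamic-kernel design is that the number of predicted masks is not fixed at training time: one generated kernel per detected instance yields one mask per instance.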
Topics: Algorithms; Centric Relation; Female; Humans; Semantics
PubMed: 33905323
DOI: 10.1109/TPAMI.2021.3075846
Law and Human Behavior, Oct 2022
OBJECTIVES
We tested the effect of true and fabricated baseline statements from the same sender on veracity judgments.
HYPOTHESES
We predicted that presenting a combination of true and fabricated baseline statements would improve truth and lie detection accuracy, while presenting a true baseline would improve only truth detection, and presenting a fabricated baseline would only improve lie detection compared with presenting no baseline statement.
METHOD
In a 4 × 2 within-subjects design, 142 student participants (mean age = 23.47 years; 118 female) read no baseline statement, a true baseline statement, a fabricated baseline statement, and a combination of a true and a fabricated baseline statement from 29 different senders. Participants then rated the veracity of a true or fabricated target statement from the same 29 senders.
RESULTS
Logistic mixed-effects models with senders and participants as random effects showed no significant differences in overall veracity judgment accuracy between the no-baseline condition (51%) and either the true-baseline (44%) or the fabricated-baseline (49%) condition. Equivalence tests failed to show the predicted equivalence of these accuracy rates. Separate analyses of truth and lie detection rates confirmed the assumed improvement of lie detection in the combination-of-true-and-fabricated-baseline condition (accuracy improved from 39% to 61%). No other truth or lie detection rate changed significantly, except that, unexpectedly, a true baseline reduced truth detection accuracy (from 64% to 49%).
CONCLUSIONS
Baseline statements largely did not affect judgment accuracy and, in the case of true baselines, even had a negative impact on truth detection. The rather small positive effect of two baseline statements on lie detection suggests an avenue for further research, especially with expert raters.
Topics: Adult; Female; Humans; Judgment; Lie Detection; Students; Young Adult
PubMed: 36107688
DOI: 10.1037/lhb0000493
Journal of Pharmacokinetics and Pharmacodynamics, Oct 2021
The relationship between drug concentration and QTc interval is typically evaluated by applying the standard analysis model proposed in a scientific whitepaper by Garnett et al. (https://doi.org/10.1007/s10928-017-9558-5). The model is a mixed-effects model in which a baseline QTc interval is included as a covariate. Two or more baseline QTc intervals are sometimes observed for a study participant, such as time-matched baselines on a baseline day in parallel studies, or pre-dose baselines in each period in crossover studies. In such situations, the baseline adjustments are not straightforward, because these baselines correlate not only with the corresponding QTc intervals after drug administration, but also with other QTc intervals at different timepoints for parallel studies, or those in different periods for crossover studies. In this study, we compared three analysis models through simulations and clinical study examples in settings in which two or more baselines were observed for a subject. We compared a model without baseline adjustment, a model with baseline adjustment, and a model in which both the baseline and the baseline mean were included as covariates. In the simulations and clinical study examples, the model with baseline and baseline mean as covariates demonstrated higher accuracy and power than the other models. This model assumed a specific covariance structure in QTc intervals, which closely approximated the correlations between QTc intervals within and between days. When there are two or more baselines in concentration-QTc analyses, the baseline mean should be included as a covariate in addition to the corresponding baseline.
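The recommended covariate structure can be sketched in a simplified form. The sketch below fits only the fixed-effects part of the model (QTc ~ concentration + baseline + baseline mean) by ordinary least squares on simulated data; the random-effects structure of the full mixed model, and all names and values here, are omitted or invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)
n_subj, n_time = 40, 4

# Simulated data: each subject has a time-matched baseline QTc per timepoint.
subj_effect = rng.normal(0.0, 8.0, n_subj)                      # between-subject shift
baseline = 400.0 + subj_effect[:, None] + rng.normal(0.0, 5.0, (n_subj, n_time))
conc = rng.uniform(0.0, 100.0, (n_subj, n_time))                # drug concentration
true_slope = 0.05                                               # ms per concentration unit
qtc = baseline + true_slope * conc + rng.normal(0.0, 3.0, (n_subj, n_time))

# Design matrix with both the time-matched baseline and the per-subject
# baseline mean as covariates, as the abstract recommends.
baseline_mean = baseline.mean(axis=1, keepdims=True) * np.ones_like(baseline)
X = np.column_stack([np.ones(qtc.size), conc.ravel(),
                     baseline.ravel(), baseline_mean.ravel()])
coef, *_ = np.linalg.lstsq(X, qtc.ravel(), rcond=None)
slope_est = coef[1]   # estimated concentration-QTc slope
```

Including the baseline mean lets the model separate within-subject baseline fluctuation from the subject's overall baseline level, which is the distinction the recommended covariance structure exploits.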
Topics: Cross-Over Studies; Electrocardiography; Heart Rate; Humans; Long QT Syndrome; Pharmaceutical Preparations
PubMed: 33977390
DOI: 10.1007/s10928-021-09758-9
Perspectives on Behavior Science, Sep 2022
Multiple baseline designs, both concurrent and nonconcurrent, are the predominant experimental design in modern applied behavior analytic research and are increasingly employed in other disciplines. In the past, there was significant controversy regarding the relative rigor of concurrent and nonconcurrent multiple baseline designs. The consensus in recent textbooks and methodological papers is that nonconcurrent designs are less rigorous than concurrent designs because of their presumed limited ability to address the threat of coincidental events (i.e., history). This skepticism of nonconcurrent designs stems from an emphasis on the importance of across-tier comparisons and the relatively low importance placed on replicated within-tier comparisons for addressing threats to internal validity and establishing experimental control. In this article, we argue that the primary reliance on across-tier comparisons and the resulting deprecation of nonconcurrent designs are not well justified. We first define multiple baseline designs, describe common threats to internal validity, and delineate the two bases for controlling these threats. Second, we briefly summarize historical methodological writing and current textbook treatment of these designs. Third, we explore how concurrent and nonconcurrent multiple baselines address each of the main threats to internal validity. Finally, we make recommendations for more rigorous use, reporting, and evaluation of multiple baseline designs.
PubMed: 36249165
DOI: 10.1007/s40614-022-00326-1
Brain Sciences, Aug 2021
Event-related mu-rhythm activity has become a common tool for the investigation of different socio-cognitive processes in pediatric populations. The estimation of mu-rhythm desynchronization/synchronization (mu-ERD/ERS) in a specific task is usually computed in relation to a baseline condition. In the present study, we investigated the effect that different types of baseline might have on toddler mu-ERD/ERS related to an action observation (AO) and action execution (AE) task. Specifically, we compared mu-ERD/ERS values computed using as a baseline: (1) the observation of a static image (BL1) and (2) a period of stillness (BL2). Our results showed that the majority of the subjects suppressed the mu-rhythm in response to the task and presented a greater mu-ERD for one of the two baselines. In some cases, one of the two baselines was not even able to produce a significant mu-ERD, and the preferred baseline varied among subjects, although most of them were more sensitive to BL1, suggesting that it could be a good baseline to elicit mu-rhythm modulations in toddlers. These results suggest several considerations for the design and analysis of mu-rhythm studies involving pediatric subjects: in particular, the importance of verifying mu-rhythm activity during the baseline, the relevance of single-subject analysis, the possibility of including more than one baseline condition, and caution in the choice of baseline and in the interpretation of results of studies investigating mu-rhythm activity in pediatric populations.
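The baseline-relative mu-ERD/ERS computation the study depends on is conventionally the relative band-power change with respect to the baseline period. A minimal sketch (function name and toy power values are illustrative):

```python
import numpy as np

def mu_erd_percent(task_power: np.ndarray, baseline_power: np.ndarray) -> np.ndarray:
    """Event-related (de)synchronization relative to a baseline period:
    negative values indicate desynchronization (ERD), positive values
    indicate synchronization (ERS)."""
    p_base = baseline_power.mean()
    return (task_power - p_base) / p_base * 100.0

# Toy example: mu-band power of 2.0 during baseline drops to 1.5 in the task,
# i.e., a 25% desynchronization.
baseline_power = np.full(50, 2.0)
task_power = np.full(50, 1.5)
erd = mu_erd_percent(task_power, baseline_power)
```

Because the baseline power sits in the denominator, a noisy or task-contaminated baseline directly biases the ERD estimate, which is why the choice between BL1 and BL2 matters.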
PubMed: 34573178
DOI: 10.3390/brainsci11091159
IEEE Transactions on Pattern Analysis and Machine Intelligence, Jul 2022
Text encoding is one of the most important steps in Natural Language Processing (NLP). It is done well by the self-attention mechanism in the current state-of-the-art Transformer encoder, which has brought about significant improvements in the performance of many NLP tasks. Though the Transformer encoder may effectively capture general information in its resulting representations, the backbone information, meaning the gist of the input text, is not specifically focused on. In this paper, we propose explicit and implicit text compression approaches to enhance Transformer encoding, and we evaluate models using these approaches on several typical downstream tasks that rely heavily on the encoding. Our explicit text compression approaches use dedicated models to compress text, while our implicit text compression approach simply adds an additional module to the main model to handle text compression. We propose three ways of integration, namely backbone source-side fusion, target-side fusion, and both-side fusion, to integrate the backbone information into Transformer-based models for various downstream tasks. Our evaluation on benchmark datasets shows that the proposed explicit and implicit text compression approaches improve results in comparison to strong baselines. We therefore conclude that, compared with the baseline models, text compression helps the encoders learn better language representations.
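One way backbone source-side fusion could be realized is cross-attention from the source token states to the compressed backbone states, with the attended context gated back into the source encoding. This sketch is an assumption for illustration, not the paper's exact architecture; the gating weight `w_gate` and all shapes are invented:

```python
import numpy as np

def softmax(x: np.ndarray, axis: int = -1) -> np.ndarray:
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def backbone_source_side_fusion(h_src: np.ndarray, h_bb: np.ndarray,
                                w_gate: float = 0.5) -> np.ndarray:
    """Fuse compressed-text (backbone) states into the source encoding:
    each source token attends over the backbone states, and the attended
    context is added back with a fixed gate."""
    d = h_src.shape[-1]
    attn = softmax(h_src @ h_bb.T / np.sqrt(d))   # (src_len, bb_len) attention
    context = attn @ h_bb                          # backbone info per source token
    return h_src + w_gate * context

src = np.random.default_rng(0).normal(size=(10, 16))  # toy source token states
bb = np.random.default_rng(1).normal(size=(4, 16))    # toy compressed backbone states
fused = backbone_source_side_fusion(src, bb)
```

Target-side and both-side fusion would follow the same pattern, with the decoder (or both encoder and decoder) attending to the backbone states instead.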
PubMed: 33577448
DOI: 10.1109/TPAMI.2021.3058341
American Journal of Primatology, Oct 2022 (Review)
Recent studies have highlighted the important role that individual learning mechanisms and different forms of enhancement play in the acquisition of novel behaviors by naïve individuals. A considerable subset of these studies has focused on tool innovation by our closest living relatives, the great apes, to better understand the evolution of technology in our own lineage. To isolate the role that individual learning plays in great ape tool innovation, researchers usually employ what are known as baseline tests. Although these baselines are commonly used in behavioral studies in captivity, the length of these tests, in terms of number of trials and duration, remains unstandardized across studies. To address this methodological issue, we conducted a literature review of great ape tool innovation studies conducted in zoological institutions and compiled various methodological data, including the timing of innovation. Our literature review revealed an early innovation tendency in great apes, which was particularly pronounced when simple forms of tool use were investigated. In the majority of experiments where tool innovation took place, it occurred within the first trial and/or the first hour of testing. We discuss different possible sources of variation in the latency to innovate, such as testing setup, species, and task. We hope that our literature review helps researchers design more data-informed, resource-efficient experiments on tool innovation in our closest living relatives.
Topics: Animals; Hominidae; Learning
PubMed: 34339543
DOI: 10.1002/ajp.23311
Marine Pollution Bulletin, Jul 2021
Arsenic (As) and antimony (Sb) are toxic metalloids widely distributed in coastal sediments, but their geochemical baselines are seldom studied. In this study, sediment samples were collected from Jiaozhou Bay (JZB) to evaluate their baselines, contamination, and ecological risk. Results showed that As and Sb concentrations ranged from 3.15 to 11.94 mg/kg and from 0.20 to 0.61 mg/kg, respectively. Sc and Fe performed well in developing geochemical baseline functions for the metalloids. Organic matter content and clay had significant positive correlations with metalloid abundance in the sediments (p < 0.01). In the JZB, As and Sb were not enriched in the sediments, with enrichment factors below 1. Furthermore, the contamination degrees of As and Sb in the JZB were low. In addition, the ecological risks of As and Sb in the JZB were relatively low, with risk indices of 4.02-12.70 and 1.68-5.09, respectively.
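The enrichment factor used to judge whether a metalloid is enriched relative to its geochemical baseline normalizes the metal to a conservative reference element (here Sc, one of the two normalizers the study found effective). A minimal sketch; the sample concentrations below are invented:

```python
def enrichment_factor(metal: float, normalizer: float,
                      metal_baseline: float, normalizer_baseline: float) -> float:
    """EF = (M / Sc)_sample / (M / Sc)_baseline.
    EF at or below 1 indicates no enrichment relative to the baseline."""
    return (metal / normalizer) / (metal_baseline / normalizer_baseline)

# Toy values in mg/kg: an As sample compared against its local baseline.
as_sample, sc_sample = 8.0, 12.0
as_baseline, sc_baseline = 9.0, 12.0
ef_as = enrichment_factor(as_sample, sc_sample, as_baseline, sc_baseline)
```

Normalizing to Sc corrects for grain-size and mineralogical dilution, so an EF below 1, as reported for As and Sb in the JZB, reflects a genuine lack of anthropogenic enrichment rather than a sampling artifact.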
Topics: Antimony; Arsenic; Bays; China; Environmental Monitoring; Geologic Sediments; Water Pollutants, Chemical
PubMed: 33940376
DOI: 10.1016/j.marpolbul.2021.112431