Multitask Learning for Brain-Computer Interfaces:
Advanced Science, pages: 1800239, May 2018 (article)
Soft actuators have demonstrated potential in a range of applications, including soft robotics, artificial muscles, and biomimetic devices. However, the majority of current soft actuators suffer from the lack of real-time sensory feedback, prohibiting their effective sensing and multitask function. Here, a promising strategy is reported to design bilayer electrothermal actuators capable of simultaneous actuation and sensation (i.e., self-sensing actuators), merely through two input electric terminals. Decoupled electrothermal stimulation and strain sensation is achieved by the optimal combination of graphite microparticles and carbon nanotubes (CNTs) in the form of hybrid films. By finely tuning the charge transport properties of hybrid films, the signal-to-noise ratio (SNR) of self-sensing actuators is remarkably enhanced to over 66. As a result, self-sensing actuators can actively track their displacement and distinguish the touch of soft and hard objects.
Proceedings of the 3rd ACM conference on Learning @ Scale, pages: 369-378, (Editors: Haywood, J. and Aleven, V. and Kay, J. and Roll, I.), ACM, L@S, April 2016, (An earlier version of this paper had been presented at the ICML 2015 workshop for Machine Learning for Education.) (conference)
In Advances in Neural Information Processing Systems 26, pages: 225-233, (Editors: C.J.C. Burges and L. Bottou and M. Welling and Z. Ghahramani and K.Q. Weinberger), 27th Annual Conference on Neural Information Processing Systems (NIPS), 2013 (inproceedings)
In Advances in Neural Information Processing Systems 24, pages: 379-387, (Editors: J Shawe-Taylor and RS Zemel and P Bartlett and F Pereira and KQ Weinberger), Twenty-Fifth Annual Conference on Neural Information Processing Systems (NIPS), 2011 (inproceedings)
We study the family of p-resistances on graphs for p ≥ 1. This family generalizes the standard resistance distance. First, we prove that for any fixed graph, the p-resistance coincides with the shortest path distance for p = 1, coincides with the standard resistance distance for p = 2, and converges to the inverse of the minimal s-t-cut in the graph as p → ∞. Second, we consider the special case of random geometric graphs (such as k-nearest neighbor graphs) when the number n of vertices in the graph tends to infinity. We prove that an interesting phase transition takes place: there exist two critical thresholds p^* and p^** such that if p < p^*, the p-resistance depends on meaningful global properties of the graph, whereas if p > p^**, it only depends on trivial local quantities and does not convey any useful information. We can explicitly compute the critical values: p^* = 1 + 1/(d-1) and p^** = 1 + 1/(d-2), where d is the dimension of the underlying space (we believe that the small gap between p^* and p^** is an artifact of our proofs). We also relate our findings to Laplacian regularization and suggest using q-Laplacians as regularizers, where q satisfies 1/p^* + 1/q = 1.
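As a concrete illustration of the family described above (this is our own sketch, not the paper's code): the p-resistance between s and t can be expressed as the minimum of the sum of |i_e|^p over all unit s-t flows i, which for p = 2 recovers the standard effective resistance. The graph, variable names, and solver choice here are assumptions for the example.

```python
# Numerical sketch: p-resistance as a minimum-energy unit flow, compared
# against the standard effective resistance (Laplacian pseudoinverse) for p=2.
import numpy as np
from scipy.optimize import minimize

edges = [(0, 1), (1, 2), (0, 2)]      # small triangle graph, unit resistances
n, m = 3, len(edges)

# Signed incidence matrix: column e has +1 at its tail and -1 at its head.
B = np.zeros((n, m))
for e, (u, v) in enumerate(edges):
    B[u, e], B[v, e] = 1.0, -1.0

s, t = 0, 2
demand = np.zeros(n)
demand[s], demand[t] = 1.0, -1.0

def p_resistance(p):
    """Minimize sum_e |i_e|^p over unit s-t flows (one redundant row dropped)."""
    cons = {"type": "eq", "fun": lambda i: (B @ i - demand)[:-1]}
    res = minimize(lambda i: np.sum(np.abs(i) ** p),
                   x0=np.zeros(m), constraints=cons, method="SLSQP")
    return res.fun

# Effective resistance via the Laplacian pseudoinverse, for comparison;
# for this triangle it equals 2/3 (edge of resistance 1 in parallel with 2).
L = B @ B.T
R_eff = demand @ np.linalg.pinv(L) @ demand
```

For p = 2 the flow optimum and the pseudoinverse formula agree, matching the coincidence stated in the abstract.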
In Proceedings of the IEEE International Conference on Data Mining (ICDM 2010), pages: 18-27, (Editors: Webb, G. I., B. Liu, C. Zhang, D. Gunopulos, X. Wu), IEEE, Piscataway, NJ, USA, IEEE International Conference on Data Mining (ICDM), December 2010 (inproceedings)
We consider the problem of local graph clustering, where the aim is to discover the local cluster corresponding to a point of interest. The most popular algorithms to solve this problem start a random walk at the point of interest and let it run until some stopping criterion is met. The vertices visited are then considered the local cluster. We suggest a more powerful alternative, the multi-agent random walk. It consists of several agents connected by a fixed rope of length l. All agents move independently like a standard random walk on the graph, but they are constrained to have distance at most l from each other. The main insight is that for several agents it is harder to simultaneously travel over the bottleneck of a graph than for just one agent. Hence, the multi-agent random walk has less tendency to mistakenly merge two different clusters than the original random walk. In our paper we analyze the multi-agent random walk theoretically and compare it experimentally to the major local graph clustering algorithms from the literature. We find that our multi-agent random walk consistently outperforms these algorithms.
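The rope-constrained walk can be sketched in a few lines. This is a minimal toy simulation under our own assumptions (move one agent per step, reject moves that would stretch the rope beyond l); the paper's actual dynamics may differ in detail.

```python
# Illustrative multi-agent random walk: agents move like ordinary random
# walkers, but a proposed move is accepted only if every pairwise graph
# distance stays within the rope length l.
import random
from collections import deque

def bfs_dist(adj, u, v):
    """Shortest-path distance between u and v by breadth-first search."""
    seen, q = {u: 0}, deque([u])
    while q:
        x = q.popleft()
        if x == v:
            return seen[x]
        for y in adj[x]:
            if y not in seen:
                seen[y] = seen[x] + 1
                q.append(y)
    return float("inf")

def multi_agent_step(adj, pos, l, rng):
    """Pick one agent, propose a random neighbor, keep it if the rope allows."""
    k = rng.randrange(len(pos))
    proposal = rng.choice(adj[pos[k]])
    others = [p for i, p in enumerate(pos) if i != k]
    if all(bfs_dist(adj, proposal, p) <= l for p in others):
        pos[k] = proposal
    return pos

# Toy graph: two triangles {0,1,2} and {3,4,5} joined by the bottleneck 2-3.
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3],
       3: [2, 4, 5], 4: [3, 5], 5: [3, 4]}
rng = random.Random(0)
pos = [0, 1]                      # both agents start in the left cluster
for _ in range(200):
    pos = multi_agent_step(adj, pos, l=1, rng=rng)
```

With a short rope (l = 1 here), both agents must reach the bridge vertices nearly simultaneously to cross, which is exactly the mechanism that suppresses accidental jumps between clusters.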
Brain-computer interfaces (BCIs) are limited in their applicability in everyday settings by the current necessity to record subject-specific calibration data prior to actual use of the BCI for communication. In this work, we utilize the framework of multitask learning to construct a BCI that can be used without any subject-specific calibration process, i.e., with zero training data. In BCIs based on EEG or MEG, the predictive function of a subject's intention is commonly modeled as a linear combination of some features derived from spatial and spectral recordings. The coefficients of this combination correspond to the importance of the features for predicting the intention of the subject. These coefficients are usually learned separately for each subject due to inter-subject variability. Principal feature characteristics, however, are known to remain invariant across subjects. For example, it is well known that in motor imagery paradigms spectral power in the mu- and beta-frequency ranges (roughly 8-14 Hz and 20-30 Hz, respectively) over sensorimotor areas provides most information on a subject's intention. Based on this assumption, we define the intention prediction function as a combination of subject-invariant and subject-specific models, and propose a machine learning method that infers these models jointly using data from multiple subjects. This framework leads to an out-of-the-box intention predictor, where the subject-invariant model can be employed immediately for a subject with no prior data. We present a computationally efficient method to further improve this BCI to incorporate subject-specific variations as such data becomes available. To overcome the problem of high-dimensional feature spaces in this context, we further present a new method for finding the relevance of different recording channels according to actions performed by subjects. Usually, the BCI feature representation is a concatenation of spectral features extracted from different channels.
This representation, however, is redundant, as recording channels at different spatial locations typically measure overlapping sources within the brain due to volume conduction. We address this problem by assuming that the relevance of different spectral bands is invariant across channels, while learning different weights for each recording electrode. This framework allows us to significantly reduce the feature space dimensionality without discarding potentially useful information. Furthermore, the resulting out-of-the-box BCI can be adapted to different experimental setups, for example EEG caps with different numbers of channels, as long as there exists a mapping across channels in different setups. We demonstrate the feasibility of our approach on a set of experimental EEG data recorded during a standard two-class motor imagery paradigm from a total of ten healthy subjects. Specifically, we show that satisfactory classification results can be achieved with zero training data, and that combining prior recordings with subject-specific calibration data substantially outperforms using subject-specific data only.
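The decomposition into subject-invariant and subject-specific models described above can be sketched with a simple alternating least-squares fit. All names, penalty terms, and the synthetic data below are our own illustrative assumptions, not the paper's method or parameters.

```python
# Hedged sketch: model each subject's linear decoder as a shared weight
# vector w0 plus a subject-specific deviation v_s, fitted jointly by
# alternating ridge regressions over all subjects' data.
import numpy as np

def fit_multitask(Xs, ys, lam0=1.0, lam1=10.0, iters=50):
    d = Xs[0].shape[1]
    w0 = np.zeros(d)
    vs = [np.zeros(d) for _ in Xs]
    for _ in range(iters):
        # Update the shared, subject-invariant model on pooled residuals.
        A = lam0 * np.eye(d)
        b = np.zeros(d)
        for X, y, v in zip(Xs, ys, vs):
            A += X.T @ X
            b += X.T @ (y - X @ v)
        w0 = np.linalg.solve(A, b)
        # Update each subject-specific deviation on that subject's residuals.
        for i, (X, y) in enumerate(zip(Xs, ys)):
            A = X.T @ X + lam1 * np.eye(d)
            vs[i] = np.linalg.solve(A, X.T @ (y - X @ w0))
    return w0, vs

# Synthetic data: three subjects whose true decoders are small perturbations
# of a common weight vector.  A brand-new subject with no calibration data
# would be decoded with w0 alone ("out of the box").
rng = np.random.default_rng(0)
w_true = np.array([1.0, -2.0, 0.5])
Xs = [rng.normal(size=(40, 3)) for _ in range(3)]
ys = [X @ (w_true + 0.1 * rng.normal(size=3)) for X in Xs]
w0, vs = fit_multitask(Xs, ys)
```

The strong penalty on the deviations (lam1 > lam0) pushes shared structure into w0, so w0 recovers the common decoder while each v_s absorbs only its subject's idiosyncrasies.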
In JMLR Workshop and Conference Proceedings Volume 9: AISTATS 2010, pages: 17-24, (Editors: Teh, Y.W., M. Titterington), JMLR, Cambridge, MA, USA, Thirteenth International Conference on Artificial Intelligence and Statistics, May 2010 (inproceedings)
Brain-computer interfaces (BCIs) are limited in their applicability in everyday settings by the current necessity to record subject-specific calibration data prior to actual use of the BCI for communication. In this paper, we utilize the framework of multitask learning to construct a BCI that can be used without any subject-specific calibration process. We discuss how this out-of-the-box BCI can be further improved in a computationally efficient manner as subject-specific data becomes available. The feasibility of the approach is demonstrated on two sets of experimental EEG data recorded during a standard two-class motor imagery paradigm from a total of 19 healthy subjects. Specifically, we show that satisfactory classification results can be achieved with zero training data, and combining prior recordings with subject-specific calibration data substantially outperforms using subject-specific data only. Our results further show that transfer between recordings under slightly different experimental setups is feasible.
Our goal is to understand the principles of Perception, Action and Learning in autonomous systems that successfully interact with complex environments, and to use this understanding to design future systems.