Selected Publications

We propose a novel Learning-to-Protect algorithm that automatically learns a model-based protector from a set of non-private learning tasks.
In AAAI’21, 2021

We analyze the convergence of private gradient descent using dynamic schedules.
preprint, 2021

In this paper, we focus on subspace-based learning problems, where data elements are linear subspaces instead of vectors. To handle this kind of data, Grassmann kernels were proposed to measure the space structure and used with classifiers, e.g., Support Vector Machines (SVMs). However, existing discriminative algorithms mostly ignore the instability of subspaces, which can cause classifiers to be misled by disturbed instances. We therefore propose accounting for all potential disturbances of subspaces during learning to obtain more robust classifiers.
In KDD’18, 2018

Recent Publications

Learning Model-Based Privacy Protection under Budget Constraints. In AAAI’21, 2021.


On Dynamic Noise Influence in Differentially Private Learning. preprint, 2021.


Sequential Data Classification in the Space of Liquid State Machines. In ECML’16, 2016.



Recent Posts


Keras provides very convenient tools for fast prototyping of Machine Learning models, especially neural networks. You can pass metric functions when compiling a model to evaluate the learned models. However, in current versions (after v2.0.0), Keras no longer provides widely used binary-classification metrics such as recall and F1-score. The reason is explained in Keras issue #5794. In this post, we discuss a workaround to evaluate these metrics with Keras.
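One common workaround is to compute these metrics outside Keras, on the full set of validation predictions (e.g., after `model.predict` or inside a `Callback`), rather than as batch-wise metric functions. A minimal NumPy sketch of the metric computations, using hypothetical example data in place of real model outputs:

```python
import numpy as np

def recall_score(y_true, y_pred):
    # recall = true positives / actual positives
    tp = np.sum((y_true == 1) & (y_pred == 1))
    positives = np.sum(y_true == 1)
    return tp / positives if positives else 0.0

def precision_score(y_true, y_pred):
    # precision = true positives / predicted positives
    tp = np.sum((y_true == 1) & (y_pred == 1))
    predicted_pos = np.sum(y_pred == 1)
    return tp / predicted_pos if predicted_pos else 0.0

def f1_score(y_true, y_pred):
    # harmonic mean of precision and recall
    p = precision_score(y_true, y_pred)
    r = recall_score(y_true, y_pred)
    return 2 * p * r / (p + r) if (p + r) else 0.0

# Stand-in for probabilities returned by model.predict(x_val):
y_true = np.array([1, 0, 1, 1, 0])
y_prob = np.array([0.9, 0.4, 0.3, 0.8, 0.6])
y_pred = (y_prob >= 0.5).astype(int)   # threshold the scores
print(f1_score(y_true, y_pred))
```

Because these values are computed over the whole validation set at once, they avoid the batch-averaging bias that led Keras to remove the built-in versions.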


When refactoring code, we often need to extract features duplicated across different methods or functions. A neat trick in Python 3 is to decorate the stripped-down basic functions with a decorator that carries the shared feature.
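A minimal sketch of this pattern, with a hypothetical shared feature (timing) extracted into a decorator and applied to two stripped-down functions:

```python
import functools
import time

def with_timing(func):
    """Shared feature pulled out of each function: time the wrapped call."""
    @functools.wraps(func)   # preserve the wrapped function's name and docstring
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = func(*args, **kwargs)
        wrapper.last_elapsed = time.perf_counter() - start
        return result
    return wrapper

@with_timing
def load_data(n):
    # stripped-down basic function: only its own logic remains
    return list(range(n))

@with_timing
def transform(xs):
    return [x * 2 for x in xs]

print(transform(load_data(3)))   # → [0, 2, 4]
```

Each function keeps only its core logic, while the duplicated bookkeeping lives in one place; `functools.wraps` keeps introspection (names, docstrings) intact.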