We propose several simple feature maps that lead to a collection of interpretable kernels with varying degrees of freedom. Each proposed feature map increases the dimension of the input data only marginally, so the resulting models can be trained quickly and the obtained results can easily be interpreted. The details of this study are given in our paper.
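As a minimal illustration of the general idea (the actual feature maps are defined in the paper; the map below is a hypothetical stand-in), an explicit feature map `phi` that adds only one extra dimension per sample induces the kernel k(x, y) = ⟨phi(x), phi(y)⟩:

```python
import numpy as np

def phi(X):
    """Hypothetical explicit feature map: append the squared Euclidean
    norm of each sample, increasing the input dimension by only one."""
    X = np.asarray(X, dtype=float)
    return np.hstack([X, (X ** 2).sum(axis=1, keepdims=True)])

def kernel(X, Y):
    """Kernel induced by the explicit map: k(x, y) = <phi(x), phi(y)>."""
    return phi(X) @ phi(Y).T

# The induced Gram matrix is symmetric positive semidefinite by construction.
X = np.random.default_rng(0).normal(size=(5, 3))
K = kernel(X, X)
print(np.allclose(K, K.T))
```

Because the map is explicit and low-dimensional, models can be trained in the primal with linear solvers instead of forming an n-by-n Gram matrix.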
All our code is implemented in Python 3.7, and we use the following packages:
We provide the following tutorials to demonstrate our implementation.
- For the proposed feature maps, we refer to the pages FeatureMaps and Kernels. We also provide the same tutorials as two notebooks, notebook one and notebook two, respectively.
- To obtain a row of Table 1, we refer to the page Table 1 or to the notebook.
- To obtain a row of Table 2 and Table 3, we refer to the page Table_2_3 or to the notebook.
- To obtain Figure 3, we refer to the page Figure_3 or to the notebook.
- To obtain Figure 5, we refer to the page Figure_5 or to the notebook.
We provide the following scripts to reproduce the numerical experiments that we have reported in our paper.
- In this tutorial, we explain how to apply the proposed feature maps with logistic regression to binary and multi-class classification problems. A notebook version of this tutorial can be found here.
- We also provide the following code snippets, which reproduce Tables 2-3 and Figure 5 in our paper, this time with logistic regression.
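The logistic-regression workflow described above can be sketched as follows. This is a hedged, self-contained example, not the paper's actual script: the feature map `add_mean_feature` is a hypothetical stand-in for one of the proposed maps, and the Iris data set stands in for the data sets used in the paper.

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import FunctionTransformer, StandardScaler

# Hypothetical stand-in for one of the proposed feature maps:
# append the per-sample mean, adding a single extra input dimension.
def add_mean_feature(X):
    return np.hstack([X, X.mean(axis=1, keepdims=True)])

# Multi-class classification data (placeholder for the paper's data sets).
X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Standardize, apply the explicit feature map, then fit logistic regression.
clf = make_pipeline(
    StandardScaler(),
    FunctionTransformer(add_mean_feature),
    LogisticRegression(max_iter=1000),
)
clf.fit(X_tr, y_tr)
print(round(clf.score(X_te, y_te), 3))
```

Wrapping the map in a `FunctionTransformer` inside a pipeline keeps the scaling and the feature map inside cross-validation, which avoids leakage when reproducing table rows.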