Each folder in this repository corresponds to a method or tool for transfer/meta-learning. xfer-ml
is a standalone MXNet library (installable with pip) that largely automates deep transfer learning. Each of the remaining folders contains research code for a novel method in transfer or meta-learning, implemented in a variety of frameworks (not necessarily MXNet).
In more detail:
- xfer-ml: A library for quick and easy transfer of knowledge stored in deep neural networks implemented in MXNet. xfer-ml works with data of arbitrary numeric format and applies to the common cases of image and text data. It can be used as an end-to-end pipeline that spans feature extraction to training a repurposer, the object that then carries out predictions on the target task (see the sketch after this list). You can also use individual components of the library in your own pipeline: for example, the feature extractor to pull features out of deep neural networks, or ModelHandler, which lets you build and modify neural networks quickly even if you are not an MXNet expert.
- leap: MXNet implementation of "leap", the meta-gradient path learner (link) by S. Flennerhag, P. G. Moreno, N. Lawrence, A. Damianou, which appeared at ICLR 2019.
- nn_similarity_index: PyTorch code for comparing trained neural networks using both feature and gradient information. The method is used in the arXiv paper (link) by S. Tang, W. Maddox, C. Dickens, T. Diethe and A. Damianou.
- finite_ntk: PyTorch implementation of finite width neural tangent kernels from the paper Fast Adaptation with Linearized Neural Networks (link), by W. Maddox, S. Tang, P. G. Moreno, A. G. Wilson, and A. Damianou, which appeared at AISTATS 2021.
- synthetic_info_bottleneck: PyTorch implementation of the Synthetic Information Bottleneck algorithm for few-shot classification on Mini-ImageNet, used in the paper Empirical Bayes Transductive Meta-Learning with Synthetic Gradients (link) by S. X. Hu, P. G. Moreno, Y. Xiao, X. Shen, G. Obozinski, N. Lawrence and A. Damianou, which appeared at ICLR 2020.
- var_info_distil: PyTorch implementation of the paper Variational Information Distillation for Knowledge Transfer (link) by S. Ahn, S. X. Hu, A. Damianou, N. Lawrence and Z. Dai, which appeared at CVPR 2019.
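For a feel of the xfer-ml workflow described above, here is a minimal sketch. The model prefix, the layer name 'fc7', and the random data are illustrative placeholders; check the exact call signatures against the xfer-ml documentation:

```python
import mxnet as mx
import numpy as np
import xfer

# Load a pre-trained MXNet source model (placeholder prefix and epoch).
source_model = mx.mod.Module.load('imagenet-resnet', 0)

# Target-task data wrapped in MXNet iterators (random placeholders here).
train_iter = mx.io.NDArrayIter(np.random.rand(40, 3, 224, 224),
                               np.random.randint(0, 2, 40), batch_size=8)
test_iter = mx.io.NDArrayIter(np.random.rand(8, 3, 224, 224), batch_size=8)

# Fit a logistic-regression repurposer on features extracted from 'fc7',
# then predict labels on the target task.
repurposer = xfer.LrRepurposer(source_model, feature_layer_names=['fc7'])
repurposer.repurpose(train_iter)
predictions = repurposer.predict_label(test_iter)
```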
Navigate to the corresponding folder for more details.
To contribute to one of the existing projects, please read the contribution guidelines in the corresponding folder.
The code under this repository is licensed under the Apache 2.0 License.
Bumps ipython from 7.16.3 to 8.10.0.

Release notes, sourced from ipython's releases: see https://pypi.org/project/ipython/ ("We do not use GitHub releases anymore. Please see PyPI https://pypi.org/project/ipython/").

Commits:
- 15ea1ed release 8.10.0
- 560ad10 DOC: Update what's new for 8.10 (#13939)
- 7557ade DOC: Update what's new for 8.10
- 385d693 Merge pull request from GHSA-29gw-9793-fvw7
- e548ee2 Swallow potential exceptions from showtraceback() (#13934)
- 0694b08 MAINT: mock slowest test. (#13885)
- 8655912 MAINT: mock slowest test.
- a011765 Isolate the attack tests with setUp and tearDown methods
- c7a9470 Add some regression tests for this change
- fd34cf5 Swallow potential exceptions from showtraceback()

Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.
Bug

I think `test_year` is supposed to be `train_year` on line 55, right?
https://github.com/amzn/xfer/blob/dd4a6a27ca00406df83eec5916d3a76a1a798248/finite_ntk/data.py#L40-L55
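For what it's worth, a tiny illustration of the suspected typo with placeholder data (not the repo's actual code, which I'm only paraphrasing from the linked lines):

```python
import numpy as np

# Placeholder data; the point is only which year the training mask uses.
years = np.array([2015, 2016, 2016, 2017])
train_year, test_year = 2016, 2017

train_mask = years == train_year  # suspected fix: line 55 compares to test_year
test_mask = years == test_year
print(train_mask, test_mask)
```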
Questions

I see the variables `inside`, `extent`, and `grid_x` being declared but not used in the malaria experiments. I'm looking to replicate the experiments in JAX, so I was wondering what the original purpose of these variables was. In particular, what are the sparse tensor for marking Nigeria and `inside` supposed to be doing?
https://github.com/amzn/xfer/blob/dd4a6a27ca00406df83eec5916d3a76a1a798248/finite_ntk/data.py#L80-L100
P.S. thank you for the work and for releasing the code!
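In case it helps frame the question, here is my guess at what such variables are typically for, written as a runnable NumPy sketch; the bounding box and grid resolution are invented placeholders, not values from the repo:

```python
import numpy as np

# Rough bounding box for Nigeria (invented numbers, for illustration only).
lon_min, lon_max, lat_min, lat_max = 2.7, 14.7, 4.3, 13.9
extent = (lon_min, lon_max, lat_min, lat_max)  # e.g. imshow plotting bounds

# Evaluation grid over the bounding box.
xs = np.linspace(lon_min, lon_max, 100)
ys = np.linspace(lat_min, lat_max, 100)
grid_x, grid_y = np.meshgrid(xs, ys)

# A 0/1 marker of which grid cells fall inside the country border; a real
# implementation would test points against the border polygon, while the
# box itself stands in for it in this sketch.
inside = np.ones_like(grid_x)
print(extent, grid_x.shape, inside.sum())
```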
Adds the code to reproduce Figure 7b (Olivetti dataset) and Table 1 (unsupervised-to-supervised experiments) of https://arxiv.org/pdf/2103.01439.pdf, which had not previously been included in the public release.
Let me know if I still need to modify licenses / attribution here.
@shuaitang @pgmoren
By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.
Maybe I am missing something, but I don't see where the `current_pars` argument is used inside the losses. Is it used? If not, why is it passed?
https://github.com/amzn/xfer/blob/dd4a6a27ca00406df83eec5916d3a76a1a798248/finite_ntk/experiments/cifar/losses.py#L22
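For context, one common reason a loss receives the current parameters (purely my assumption here, not the repo's documented design) is a proximal penalty that keeps adapted weights near the linearization point; a sketch:

```python
import torch
import torch.nn.functional as F

# Hypothetical sketch: `current_pars` anchors a proximal regularizer.
def proximal_loss(logits, target, model, current_pars, weight=1e-3):
    base = F.cross_entropy(logits, target)
    prox = sum((p - q.detach()).pow(2).sum()
               for p, q in zip(model.parameters(), current_pars))
    return base + weight * prox

model = torch.nn.Linear(4, 3)
anchor = [p.clone() for p in model.parameters()]
logits = model(torch.randn(2, 4))
loss = proximal_loss(logits, torch.tensor([0, 2]), model, anchor)
```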
Hello, great work! Thank you for sharing the code. I tried to start Step 2 in the README: python main.py --config config/miniImageNet_1shot.yaml --seed 100 --gpu 0

First, I configured the miniImageNet_1shot.json file path in config.py like this:
    def get_args():
        """Create argparser for frequent configurations.

        :return: argparser object
        """
        argparser = argparse.ArgumentParser(description=__doc__)
        argparser.add_argument(
            '-c', '--config',
            metavar='C',
            default="/home/dy/PP/FSL/sib_meta_learn/data/Mini-ImageNet/val1000Episode_5_way_5_shot.json",
            help='The Configuration file')
Then, when I run main.py, there's an error. Would you please help me solve the problem? I'd appreciate it!
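In case it is useful to anyone hitting the same issue, a small runnable sketch of how argparse treats that default (my reading of the config.py above, not an official fix): a value passed with --config on the command line overrides the hard-coded default, so Step 2's YAML path can be supplied without editing config.py:

```python
import argparse

argparser = argparse.ArgumentParser()
argparser.add_argument('-c', '--config', metavar='C',
                       default='some/default/path.json',
                       help='The Configuration file')

# Simulate the README's Step 2 invocation: the CLI value wins.
args = argparser.parse_args(['--config', 'config/miniImageNet_1shot.yaml'])
print(args.config)  # -> config/miniImageNet_1shot.yaml
```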