Transfer Learning library for Deep Neural Networks.

amzn, updated 2023-02-10 23:10:07

Xfer

Transfer and meta-learning in Python


Each folder in this repository corresponds to a method or tool for transfer/meta-learning. xfer-ml is a standalone MXNet library (installable with pip) that largely automates deep transfer learning. Each of the remaining folders contains research code for a novel method in transfer or meta-learning, implemented in a variety of frameworks (not necessarily MXNet).

In more detail:

- xfer-ml: A library that allows quick and easy transfer of knowledge stored in deep neural networks implemented in MXNet. xfer-ml can be used with data of arbitrary numeric format, and can be applied to the common cases of image or text data. It can be used as an end-to-end pipeline, from extracting features to training a repurposer, where the repurposer is the object that carries out predictions in the target task. You can also use individual components of the library in your own pipeline: for example, the feature extractor to extract features from deep neural networks, or ModelHandler, which allows for quick building of neural networks even if you are not an MXNet expert. (A minimal usage sketch follows this list.)
- leap: MXNet implementation of "leap", the meta-gradient path learner (link) by S. Flennerhag, P. G. Moreno, N. Lawrence, A. Damianou, which appeared at ICLR 2019.
- nn_similarity_index: PyTorch code for comparing trained neural networks using both feature and gradient information. The method is described in the arXiv paper (link) by S. Tang, W. Maddox, C. Dickens, T. Diethe and A. Damianou.
- finite_ntk: PyTorch implementation of finite-width neural tangent kernels from the paper Fast Adaptation with Linearized Neural Networks (link) by W. Maddox, S. Tang, P. G. Moreno, A. G. Wilson, and A. Damianou, which appeared at AISTATS 2021.
- synthetic_info_bottleneck: PyTorch implementation of the Synthetic Information Bottleneck algorithm for few-shot classification on Mini-ImageNet, used in the paper Empirical Bayes Transductive Meta-Learning with Synthetic Gradients (link) by S. X. Hu, P. G. Moreno, Y. Xiao, X. Shen, G. Obozinski, N. Lawrence and A. Damianou, which appeared at ICLR 2020.
- var_info_distil: PyTorch implementation of the paper Variational Information Distillation for Knowledge Transfer (link) by S. Ahn, S. X. Hu, A. Damianou, N. Lawrence, Z. Dai, which appeared at CVPR 2019.
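
To make the xfer-ml workflow above concrete, here is a minimal sketch of repurposing a pre-trained MXNet model for a new task. It assumes `train_iterator` and `test_iterator` are `mx.io.DataIter` objects over the target-task data, and the class and argument names (`LrRepurposer`, `feature_layer_names`, `predict_label`) mirror the xfer-ml demos; check the xfer-ml documentation for the exact API.

```python
import mxnet as mx
import xfer

# Load a pre-trained source model saved as 'resnet-symbol.json' / 'resnet-0000.params'.
source_model = mx.mod.Module.load('resnet', 0, label_names=['softmax_label'])

# Repurpose: extract features from the named layer(s) of the source model and
# train a logistic-regression repurposer on the target task's training data.
repurposer = xfer.LrRepurposer(source_model=source_model,
                               feature_layer_names=['flatten0'])
repurposer.repurpose(train_iterator)  # train_iterator: mx.io.DataIter over target data

# Predict target-task labels with the repurposed model.
predictions = repurposer.predict_label(test_iterator)
```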

Navigate to the corresponding folder for more details.

Contributing

You may contribute to the existing projects by reading the individual contribution guidelines in each corresponding folder.

License

The code under this repository is licensed under the Apache 2.0 License.

Issues

Bump ipython from 7.16.3 to 8.10.0 in /xfer-ml/docs/demos

opened on 2023-02-10 23:10:06 by dependabot[bot]

Bumps ipython from 7.16.3 to 8.10.0.

Release notes

Sourced from ipython's releases.

We do not use GitHub releases anymore. Please see PyPI: https://pypi.org/project/ipython/

Commits
  • 15ea1ed release 8.10.0
  • 560ad10 DOC: Update what's new for 8.10 (#13939)
  • 7557ade DOC: Update what's new for 8.10
  • 385d693 Merge pull request from GHSA-29gw-9793-fvw7
  • e548ee2 Swallow potential exceptions from showtraceback() (#13934)
  • 0694b08 MAINT: mock slowest test. (#13885)
  • 8655912 MAINT: mock slowest test.
  • a011765 Isolate the attack tests with setUp and tearDown methods
  • c7a9470 Add some regression tests for this change
  • fd34cf5 Swallow potential exceptions from showtraceback()
  • Additional commits viewable in compare view


Dependabot compatibility score

Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.


Dependabot commands and options
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
- `@dependabot use these labels` will set the current labels as the default for future PRs for this repo and language
- `@dependabot use these reviewers` will set the current reviewers as the default for future PRs for this repo and language
- `@dependabot use these assignees` will set the current assignees as the default for future PRs for this repo and language
- `@dependabot use this milestone` will set the current milestone as the default for future PRs for this repo and language

You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/amzn/xfer/network/alerts).

finite_ntk malaria data generation bug + questions

opened on 2022-04-14 22:38:36 by tingtang2

Bug

I think `test_year` is supposed to be `train_year` in line 55, right?

https://github.com/amzn/xfer/blob/dd4a6a27ca00406df83eec5916d3a76a1a798248/finite_ntk/data.py#L40-L55
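
For readers who don't want to open data.py, here is a hypothetical illustration of the kind of mix-up being reported; the variable names below are made up for illustration and are not copied from the actual file.

```python
import numpy as np

# Illustrative only: a year-based train/test split (names are made up, not from data.py).
years = np.array([2012, 2012, 2013, 2013, 2014])
train_year, test_year = 2013, 2014

train_mask = years == test_year   # suspected bug: should use `train_year`
test_mask = years == test_year

# With the bug the two masks are identical, so the "train" set is drawn from the test year.
assert (train_mask == test_mask).all()

# The presumed intent:
train_mask = years == train_year
assert not (train_mask == test_mask).all()
```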

Questions

I see the variables `inside`, `extent`, and `grid_x` being declared but not used in the malaria experiments. I'm looking to replicate the experiments in JAX, so I was wondering what the original purpose of these variables was. In particular, what are the sparse tensor for marking Nigeria and `inside` supposed to be doing?

https://github.com/amzn/xfer/blob/dd4a6a27ca00406df83eec5916d3a76a1a798248/finite_ntk/data.py#L80-L100

P.S. thank you for the work and for releasing the code!

Add unsupervised learning experiments for linear_ntk

opened on 2021-10-29 12:00:20 by wjmaddox

Adds the code to reproduce Figure 7b (Olivetti dataset) and Table 1 (unsupervised-to-supervised experiments) of https://arxiv.org/pdf/2103.01439.pdf, which had not previously been included in the public version.

Let me know if I still need to modify licenses / attribution here.

@shuaitang @pgmoren

By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.

current_pars not used

opened on 2021-10-28 19:58:54 by evcu

Maybe I am missing something, but I don't see where the `current_pars` argument is used inside the losses. Is it used? If not, why is it passed?

https://github.com/amzn/xfer/blob/dd4a6a27ca00406df83eec5916d3a76a1a798248/finite_ntk/experiments/cifar/losses.py#L22

when `--config config/miniImageNet_1shot.json`, AttributeError: 'list' object has no attribute 'items' for EasyDict

opened on 2021-06-24 08:05:53 by DanielaPlusPlus

Hello, great work! Thank you for sharing the code. I tried to begin Step 2 in the README: `python main.py --config config/miniImageNet_1shot.yaml --seed 100 --gpu 0`

First, I configured the miniImageNet_1shot.json file path in config.py like this:

```python
def get_args():
    """Create argparser for frequent configurations.

    :return: argparser object
    """
    argparser = argparse.ArgumentParser(description=__doc__)
    argparser.add_argument(
        '-c', '--config',
        metavar='C',
        default="/home/dy/PP/FSL/sib_meta_learn/data/Mini-ImageNet/val1000Episode_5_way_5_shot.json",
        help='The Configuration file')
```

Then, when I run main.py, I get the AttributeError from the title (screenshot of the traceback omitted):
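
A minimal, self-contained reproduction of this kind of error, under the assumption that the config loader builds an `EasyDict` directly from the parsed file: a JSON file whose top level is a list (such as an episode file) has no `.items()`, whereas a config mapping does.

```python
from easydict import EasyDict

# A mapping works: EasyDict iterates over .items() of the input.
cfg = EasyDict({"nStep": 3, "seed": 100})

# A top-level list (e.g. a list of episodes) has no .items(), so construction fails.
try:
    EasyDict([{"episode": 0}, {"episode": 1}])
except AttributeError as err:
    print(err)  # 'list' object has no attribute 'items'
```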

Would you please help me solve the problem? I'd appreciate it!

Fix scikit errors

opened on 2020-01-13 10:24:26 by jnkm

Issue #, if available:

Description of changes:

By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.

transfer-learning mxnet neural-network python machine-learning