Machine Learning and Deep Learning library in Python using NumPy and Matplotlib.

This repository gives beginners and newcomers in the field of AI and ML a chance to understand the inner workings of popular learning algorithms. It presents implementations of ML and DL algorithms in pure Python, using only NumPy as a backend for linear-algebra computations, in a form that is simple to read and analyze.

The goal of this repository is not the most efficient implementation but the most transparent one, so that anyone with even a little knowledge of the field can contribute and learn.
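As a taste of that style, here is what a transparent, NumPy-only implementation can look like (an illustrative sketch of a sigmoid activation, not the library's actual code):

```python
import numpy as np

# Illustrative sketch: a sigmoid activation written in pure NumPy,
# the kind of transparent implementation this repository favours.
def sigmoid(z):
    """Map real-valued inputs to the (0, 1) interval."""
    return 1.0 / (1.0 + np.exp(-z))

print(sigmoid(np.array([-1.0, 0.0, 1.0])))
```

Because everything is plain NumPy, the whole computation can be stepped through with a debugger or reproduced by hand.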
You can install the library by running the following command:

```bash
python3 setup.py install
```
For development purposes, you can use the `develop` option instead, as shown below:

```bash
python3 setup.py develop
```
To test your patch locally, run:

```bash
python3 -m pytest --doctest-modules --cov=./ --cov-report=html
```

Then open `htmlcov/index.html` in your browser to view the coverage report. Try to ensure that the coverage does not decrease by more than 1% for your patch.

Follow the steps below to get started with contributing to the repository.
Clone the project to your local environment. Use

```bash
git clone https://github.com/RoboticsClubIITJ/ML-DL-implementation/
```

to get a local copy of the source code.
Install dependencies: you can use pip to install the dependencies on your computer. To install, use

```bash
pip install -r requirements.txt
```
Installation: use `python setup.py develop` if you want to set up for development, or `python setup.py install` if you only want to try and test out the repository.
Make changes: work on an existing issue or create one. Once assigned, you can start working on the issue.

While you are working, please make sure you follow standard programming guidelines. When you send us a PR, your code will be checked for PEP8 formatting, and tests will soon be added so that your code does not break already existing code. Use tools like flake8 to check your code for correct formatting.
| Activations | Location | Optimizers | Location | Models | Location | Backend | Location | Utils | Location |
| :------------ | ------------: | :------------ | ------------: | :------------ | ------------: | :------------ | ------------: | :------------ | ------------: |
| Sigmoid | activations.py | Gradient Descent | optimizers.py | Linear Regression | models.py | Autograd | autograd.py | Bell Curve | preprocessor_utils.py |
| Tanh | activations.py | Stochastic Gradient Descent | optimizers.py | Logistic Regression | models.py | Tensor | tensor.py | Standard_Scaler | preprocessor_utils.py |
| Softmax | activations.py | Mini Batch Gradient Descent | optimizers.py | Decision Tree Classifier | models.py | Functions | functional.py | MaxAbs_Scaler | preprocessor_utils.py |
| Softsign | activations.py | Momentum Gradient Descent | optimizers.py | KNN Classifier/Regressor | models.py | | | Z_Score_Normalization | preprocessor_utils.py |
| Relu | activations.py | Nesterov Accelerated Descent | optimizers.py | Naive Bayes | models.py | | | Mean_Normalization | preprocessor_utils.py |
| Leaky Relu | activations.py | Adagrad | optimizers.py | Gaussian Naive Bayes | models.py | | | Min Max Normalization | preprocessor_utils.py |
| Elu | activations.py | Adadelta | optimizers.py | Multinomial Naive Bayes | models.py | | | Feature Clipping | preprocessor_utils.py |
| Swish | activations.py | Adam | optimizers.py | Polynomial Regression | models.py | | | | |
| Unit Step | activations.py | | | Bernoulli Naive Bayes | models.py | | | | |
| | | | | Random Forest Classifier | models.py | | | | |
| | | | | K Means Clustering | models.py | | | | |
| | | | | Divisive Clustering | models.py | | | | |
| | | | | Agglomerative Clustering | models.py | | | | |
| | | | | Bayes Optimization | models.py | | | | |
| | | | | Numerical Outliers | models.py | | | | |
| | | | | Principal Component Analysis | models.py | | | | |
| | | | | Z_Score | models.py | | | | |
| | | | | Sequential Neural Network | models.py | | | | |
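To illustrate how the pieces in the table above fit together, here is a hedged sketch of batch gradient descent fitting a linear regression in pure NumPy. It is illustrative only and does not use the repository's actual `optimizers.py`/`models.py` API:

```python
import numpy as np

# Illustrative sketch: batch gradient descent minimizing mean squared
# error for linear regression, written in plain NumPy.
def fit_linear_regression(X, y, lr=0.1, epochs=500):
    X = np.c_[np.ones(len(X)), X]              # prepend a bias column
    w = np.zeros(X.shape[1])                   # [bias, slope, ...]
    for _ in range(epochs):
        grad = 2.0 / len(y) * X.T @ (X @ w - y)  # d(MSE)/dw
        w -= lr * grad
    return w

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(100, 1))
y = 3.0 * X[:, 0] + 2.0                        # true bias 2, slope 3
w = fit_linear_regression(X, y)
print(w)  # approximately [2.0, 3.0]
```

The same update rule, `w -= lr * grad`, is what the Gradient Descent entry in the table implements; the other optimizers refine how that step is computed.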
| Loss Functions | Location | Regularizer | Location | Metrics | Location |
| :------------ | ------------: | :------------ | ------------: | :------------ | ------------: |
| Mean Squared Error | loss_func.py | L1_Regularizer | regularizer.py | Confusion Matrix | metrics.py |
| Logarithmic Error | loss_func.py | L2_Regularizer | regularizer.py | Precision | metrics.py |
| Absolute Error | loss_func.py | | | Accuracy | metrics.py |
| Cosine Similarity | loss_func.py | | | Recall | metrics.py |
| Log_cosh | loss_func.py | | | F1 Score | metrics.py |
| Huber | loss_func.py | | | F-B Theta | metrics.py |
| Mean Squared Log Error | loss_func.py | | | Specificity | metrics.py |
| Mean Absolute Percentage Error | loss_func.py | | | | |
Fixes #9
The unit test approximately compares the output of the implemented loss function with that of the corresponding TensorFlow loss function on the same test example.
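A minimal sketch of such a test, with the TensorFlow reference replaced by a hand-computed value so it can run without TensorFlow installed (the actual test would call the corresponding `tf.keras.losses` function instead):

```python
import numpy as np

# Stand-in for the implemented loss function under test.
def mean_squared_error(y_pred, y_true):
    return np.mean((y_pred - y_true) ** 2)

def test_mean_squared_error():
    y_pred = np.array([1.0, 2.0, 3.0])
    y_true = np.array([1.0, 2.0, 4.0])
    # Reference value computed by hand: ((0)^2 + (0)^2 + (1)^2) / 3.
    np.testing.assert_allclose(mean_squared_error(y_pred, y_true), 1.0 / 3.0)

test_mean_squared_error()
```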
Describe the bug
In `MLlib/loss_func.py`, the `loss` method of the `MeanSquaredLogLoss` class misbehaves because of an incorrect implementation of the loss formula and the use of an incorrect attribute of the imported `Sigmoid` class.
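For reference, a minimal NumPy sketch of the textbook mean squared logarithmic error formula (illustrative only; the repository's class may differ in interface and in how it applies the sigmoid):

```python
import numpy as np

# Textbook mean squared logarithmic error:
#   MSLE = mean((log(1 + y_pred) - log(1 + y_true))^2)
# np.log1p(x) computes log(1 + x) with better numerical accuracy near 0.
def mean_squared_log_error(y_pred, y_true):
    return np.mean((np.log1p(y_pred) - np.log1p(y_true)) ** 2)

print(mean_squared_log_error(np.array([1.0, 2.0]), np.array([1.0, 2.0])))  # 0.0
```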
Is your feature request related to a problem? Please describe.
In `MLlib/loss_func.py`, the docstrings of the loss functions require correction, particularly the dimensions of the input vectors and the output data types of some of the loss functions and their derivatives.
Describe the solution you'd like
The docstrings should be corrected to state the valid parameter data types, the output data types, and, where required, the appropriate dimensions of the input vectors.
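One possible shape for such a corrected docstring, using NumPy-style sections that spell out input dimensions and the return type (illustrative only; the real functions live in `MLlib/loss_func.py` and may follow a different docstring convention):

```python
import numpy as np

def mean_absolute_error(Y_pred, Y_true):
    """Compute the mean absolute error.

    Parameters
    ----------
    Y_pred : numpy.ndarray of float, shape (n_samples,)
        Predicted values.
    Y_true : numpy.ndarray of float, shape (n_samples,)
        Ground-truth values.

    Returns
    -------
    float
        The mean of ``|Y_pred - Y_true|`` over all samples.
    """
    return float(np.mean(np.abs(Y_pred - Y_true)))
```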
Added activation and deactivation checks for the Relu and LeakyRelu functions.
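A sketch of what such paired activation ("forward") and deactivation ("derivative") functions typically look like in NumPy (illustrative; the repository's versions live in `MLlib/activations.py`):

```python
import numpy as np

# ReLU: pass positives through, zero out negatives.
def relu(z):
    return np.maximum(0.0, z)

# Derivative of ReLU: 1 where the input was positive, 0 elsewhere.
def relu_derivative(z):
    return np.where(z > 0, 1.0, 0.0)

# Leaky ReLU: scale negatives by a small slope alpha instead of zeroing.
def leaky_relu(z, alpha=0.01):
    return np.where(z > 0, z, alpha * z)

# Derivative of Leaky ReLU: 1 for positives, alpha for negatives.
def leaky_relu_derivative(z, alpha=0.01):
    return np.where(z > 0, 1.0, alpha)
```

Checking the forward function and its derivative as a pair catches sign and slope mistakes that either one alone can hide.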
Here is the link to the code file: https://colab.research.google.com/drive/1tCT28q7D3YlMnGmVLEiFWsAdT2hlW-ZY?usp=sharing
Fixed some library setup issues.
python numpy machine-learning deep-learning nwoc woc matplotlib statistics hacktoberfest