📈🔍 Let Python do AB testing analysis

tlentali, updated 🕥 2023-02-10 23:10:21



le AB is a Python library for AB testing analysis.

⚡️ Quick start


Before launching your AB test, you can compute the required sample size per variation:

```python
>>> from leab import before

>>> ab_test = before.leSample(conversion_rate=20,
...                           min_detectable_effect=2)
>>> ab_test.get_size_per_variation()
6347
```
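For intuition, here is a minimal sketch of the kind of sample-size computation behind `leSample`: the standard two-proportion formula popularised by Evan Miller, using only the standard library. The exact formula and rounding leab uses internally are assumptions here, but this sketch reproduces the 6347 from the quick start.

```python
from math import sqrt
from statistics import NormalDist


def sample_size_per_variation(p1, min_detectable_effect, alpha=0.05, power=0.8):
    """Per-variation sample size for detecting an absolute lift of
    `min_detectable_effect` over a baseline conversion rate `p1`
    (two-sided test, Evan Miller-style formula)."""
    p2 = p1 + min_detectable_effect
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided significance
    z_power = NormalDist().inv_cdf(power)          # desired statistical power
    n = (z_alpha * sqrt(2 * p1 * (1 - p1))
         + z_power * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return round(n / min_detectable_effect ** 2)


print(sample_size_per_variation(0.20, 0.02))  # 6347
```

Note how the required size grows quadratically as the minimum detectable effect shrinks: halving the effect you want to detect roughly quadruples the sample you need.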

Once the required sample size is reached, you can compare the means obtained from A vs. B:

```python
>>> from leab import after
>>> from leab import leDataset

>>> data = leDataset.SampleLeAverage()

>>> ab_test = after.leAverage(data.A, data.B)
>>> ab_test.get_verdict()
'Sample A mean is greater'
```
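Under the hood, a verdict like this comes from a two-sample test on the means. Below is a simplified, self-contained sketch using a plain z-test, which is a reasonable approximation for large samples; leab's actual test statistic and verdict wording are assumptions here.

```python
import random
from math import sqrt
from statistics import NormalDist, mean, variance


def compare_means(a, b, alpha=0.05):
    """Two-sample z-test on means (normal approximation, adequate for
    large samples). A simplified stand-in, not leab's implementation."""
    se = sqrt(variance(a) / len(a) + variance(b) / len(b))  # Welch-style standard error
    z = (mean(a) - mean(b)) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))  # two-sided p-value
    if p_value >= alpha:
        return 'No significant difference'
    return 'Sample A mean is greater' if z > 0 else 'Sample B mean is greater'


# Hypothetical data: variation A averages 10, variation B averages 9.
random.seed(42)
sample_a = [random.gauss(10.0, 2.0) for _ in range(500)]
sample_b = [random.gauss(9.0, 2.0) for _ in range(500)]
print(compare_means(sample_a, sample_b))  # 'Sample A mean is greater'
```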

🛠 Installation

:snake: You need to install Python 3.6 or above.

Installation can be done by using pip.
There are wheels available for Linux, MacOS, and Windows.

```bash
pip install leab
```

You can also install the latest development version like so:

```bash
pip install git+https://github.com/tlentali/leab
```

Or, through SSH:

```bash
pip install git+ssh://git@github.com/tlentali/leab.git
```

🥄 Philosophy

> "Life is a sum of all our choices."
>
> Albert Camus

Get ready to make a decision!

AB testing has never been more popular, especially at Internet-based companies.
Even though each test is unique, the same questions come up again and again:

  • When is my test going to be statistically significant?
  • Is A more successful than B?
  • Does A generate more than B?

Strong statistical knowledge is required to handle them correctly from start to end.
To answer those questions in a simple and robust way, we built le AB.
Let Python do AB testing analysis!

🔥 Features

Here are some benefits of using le AB:

  • Sample size: how many subjects are needed for my AB test?
  • Test duration: how many days are needed for my AB test?
  • Rate of success: does the rate of success differ across two groups?
  • Average value: does the average value differ across two groups?
  • HTML auto-report: generate an HTML report (example) for the sample size part.
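To illustrate the "rate of success" comparison, a textbook pooled two-proportion z-test can be sketched as follows. This is a standard stand-in for comparing conversion rates across two groups, not necessarily the exact test leab runs, and the counts below are hypothetical.

```python
from math import sqrt
from statistics import NormalDist


def rates_differ(conv_a, n_a, conv_b, n_b, alpha=0.05):
    """Pooled two-proportion z-test: True if the success rates of the
    two groups differ significantly at level alpha."""
    pa, pb = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)          # pooled success rate
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (pa - pb) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))      # two-sided p-value
    return p_value < alpha


# 1300/6347 conversions in A vs 1450/6347 in B (hypothetical numbers).
print(rates_differ(1300, 6347, 1450, 6347))  # True
```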

Next release features:

  • HTML auto-report: generate an HTML report for the after part.
  • Sequential sampling: how many conversions are needed for a sequential AB test?
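For a flavour of what sequential sampling involves, here is a sketch in the spirit of Evan Miller's "Simple Sequential A/B Testing" stopping rule, as best I recall it: pick a total conversion budget up front, then stop early with a winner as soon as one variation leads by about twice the square root of that budget. Both the threshold and the helper itself are assumptions for illustration, not leab's planned API.

```python
from math import sqrt


def sequential_verdict(conv_a, conv_b, n_max):
    """Sequential stopping rule (assumed form): with a pre-chosen budget
    of n_max total conversions, declare a winner once one variation
    leads by 2 * sqrt(n_max) conversions, or stop with no winner when
    the budget is exhausted."""
    lead = 2 * sqrt(n_max)
    if conv_a - conv_b >= lead:
        return 'A wins'
    if (conv_b - conv_a) >= lead:
        return 'B wins'
    if conv_a + conv_b >= n_max:
        return 'No winner'
    return 'Keep collecting data'


print(sequential_verdict(400, 320, 1000))  # 'A wins': lead of 80 >= 2 * sqrt(1000) ~ 63.2
```

The appeal of the sequential approach is that you peek after every conversion without inflating the false-positive rate, because the stopping boundary is fixed in advance.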

🔗 Useful links

🙏 Thanks

This project takes its inspiration from Evan Miller's great work.

Thank you so much, Evan M., for your work; it has saved our lives so many times!

A big thanks to Max Halford too, who inspired the structure of this project, particularly the docs and tests.
Have a look at Creme-ml, it's just amazingly done!

To finish, thanks to all of you who use or are going to use this lib, hope it helps !

🖖 Contributing

Feel free to contribute in any way you like, we're always open to new ideas and approaches. If you want to contribute to the code base please check out the CONTRIBUTING.md file. Also take a look at the issue tracker and see if anything takes your fancy.

This project follows the all-contributors specification. Again, contributions of any kind are welcome!

  • tlentali: 📆 💻
  • JLouedec: 📝
  • RomainSa: 📝

📜 License

le AB is free and open-source software licensed under the 3-clause BSD license.

Issues

Bump ipython from 7.16.3 to 8.10.0

opened on 2023-02-10 23:10:21 by dependabot[bot]

Bumps ipython from 7.16.3 to 8.10.0.

Release notes

Sourced from ipython's releases.

See https://pypi.org/project/ipython/

We do not use GitHub release anymore. Please see PyPI https://pypi.org/project/ipython/

Commits
  • 15ea1ed release 8.10.0
  • 560ad10 DOC: Update what's new for 8.10 (#13939)
  • 7557ade DOC: Update what's new for 8.10
  • 385d693 Merge pull request from GHSA-29gw-9793-fvw7
  • e548ee2 Swallow potential exceptions from showtraceback() (#13934)
  • 0694b08 MAINT: mock slowest test. (#13885)
  • 8655912 MAINT: mock slowest test.
  • a011765 Isolate the attack tests with setUp and tearDown methods
  • c7a9470 Add some regression tests for this change
  • fd34cf5 Swallow potential exceptions from showtraceback()
  • Additional commits viewable in compare view



Releases

colab_playground 2020-09-27 17:12:19

Content addition:

  • Colab notebook
  • get size by variation and get total size in leSample

Unittests & CI 2020-05-08 09:25:32

  • unittests added with 80% code coverage
  • Travis CI testing:
    • several Python versions
    • unittests
    • code coverage
  • report live example on tlentali.github.io linked in README
  • contribution and code of conduct files split
  • logo linked by URL in README

leReport 2020-05-01 14:45:02

  • leMean is renamed leAverage
  • MIT license becomes BSD 3-clause license
  • summary methods are renamed compute
  • HTML report leReport for the before module is added
  • CSV and HTML templates are shipped with the lib

2020-03-27 21:03:40

PyPI checked

Thomas Lentali

Data Scientist @betclic | [email protected] @BIGDATALABCARTEGIE

GitHub Repository Homepage

python data-science data-analysis ab-testing analysis analytics statistics abtest