This repository contains code for wireless fingerprinting.

metehancekic, updated 2023-02-10 23:09:41

Repository for the paper: Wireless Fingerprinting via Deep Learning: The Impact of Confounding Factors

This repository contains scripts to simulate the effect of channel and CFO variations on wireless fingerprinting using complex-valued CNNs. It also provides a simulation-based dataset built on models of some typical circuit nonlinearities, along with augmentation techniques, estimation techniques, and combinations of the two. The repository consists of four folders (cxnn, preproc, tests, and data) plus a number of scripts at the top level.

Simulated Dataset

We have created a simulation-based WiFi dataset built on models of some typical nonlinearities. We implement two kinds of circuit-level impairments: I/Q imbalance and power amplifier nonlinearity. The training set consists of 200 signals per device for 19 devices (classes); the validation and test sets contain 100 signals per device. Overall, the dataset contains 3800 signals for training, 1900 for validation, and 1900 for testing. The dataset can be downloaded as an npz file from this Box link and needs to be copied into the data subdirectory.
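Once downloaded, the archive can be opened with NumPy. The key names and shapes below are assumptions for illustration (list the real ones with `np.load("data/simulations.npz").files`); a tiny stand-in archive is fabricated here so the snippet is self-contained:

```python
import numpy as np

# Fabricate a small stand-in archive so this snippet runs on its own.
# The real file is data/simulations.npz; its key names and shapes may
# differ -- inspect them with np.load("data/simulations.npz").files.
rng = np.random.default_rng(0)
np.savez("simulations_demo.npz",
         x_train=rng.standard_normal((3800, 1600, 2)),  # I/Q as two real channels
         y_train=rng.integers(0, 19, size=3800))        # labels for 19 devices

data = np.load("simulations_demo.npz")
print(data.files)                         # arrays stored in the archive
x, y = data["x_train"], data["y_train"]
print(x.shape, y.shape)
```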

Further details can be found in our paper.
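For intuition about the impairments being modeled: one standard baseband model of I/Q imbalance maps a clean complex signal x to mu*x + nu*conj(x), where mu and nu are set by the gain and phase mismatch between the I and Q branches. The sketch below uses this generic textbook model with made-up mismatch values; it is not the exact parameterization used in simulators.py:

```python
import numpy as np

def iq_imbalance(x, gain_mismatch_db=0.5, phase_mismatch_deg=2.0):
    """Apply a common I/Q imbalance model: y = mu*x + nu*conj(x)."""
    g = 10 ** (gain_mismatch_db / 20.0)      # linear gain mismatch
    phi = np.deg2rad(phase_mismatch_deg)     # phase mismatch in radians
    mu = 0.5 * (1 + g * np.exp(-1j * phi))
    nu = 0.5 * (1 - g * np.exp(1j * phi))
    return mu * x + nu * np.conj(x)

x = np.exp(2j * np.pi * 0.05 * np.arange(64))  # a clean complex tone
y = iq_imbalance(x)
# The conj(x) term leaks energy into the image frequency of the tone,
# which is the device-specific signature a fingerprinting network can learn.
```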

Building the environment and running the code

This repo can be cloned via the following command:

```bash
git clone https://github.com/metehancekic/wireless-fingerprinting.git
```

Since the complex-valued neural networks are implemented in Keras with the Theano backend, the modules listed in requirements.txt must be installed to run the experiments. We strongly recommend installing miniconda, creating a virtual environment, and running the following commands, which build the environment needed to run the code in this repository.

```bash
conda create -n cxnn python=2.7
conda activate cxnn
pip install -r requirements.txt
conda install mkl-service
conda install -c conda-forge resampy
```

For GPU usage:

```bash
conda install -c anaconda pygpu
```

with the following CUDA and cuDNN versions:

CUDA 9.0.176
cuDNN 7.3.1

The code with default parameters (without channel and CFO) can be run using:

```bash
KERAS_BACKEND=theano python cfo_channel_training_simulations.py
KERAS_BACKEND=theano python cfo_channel_testing_simulations.py
```

Controlled experiments emulating the effect of frequency drift and channel variations are configured via experiment_setup.py, or the options can be passed explicitly on the command line. All hyper-parameters for these experiments are in configs_train.json and configs_test.json for the training and testing code, respectively.
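For intuition about what "frequency drift" means here: a carrier frequency offset (CFO) of delta_f multiplies baseband sample n by exp(j*2*pi*delta_f*n/fs), i.e. a steadily rotating phase. A minimal sketch of that effect (the 20 MHz sampling rate and 40 kHz offset are illustrative values, not the repo's defaults):

```python
import numpy as np

def apply_cfo(x, delta_f_hz, fs_hz):
    """Rotate baseband samples by a carrier frequency offset delta_f."""
    n = np.arange(len(x))
    return x * np.exp(2j * np.pi * delta_f_hz * n / fs_hz)

fs = 20e6                                      # e.g. 20 MHz WiFi sampling rate
x = np.ones(128, dtype=complex)                # trivially constant signal
y = apply_cfo(x, delta_f_hz=40e3, fs_hz=fs)    # e.g. 40 kHz offset
# Each sample is rotated a little further than the last; magnitudes
# are unchanged, only the phase trajectory carries the offset.
```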

For detailed information about the arguments, use:

```bash
KERAS_BACKEND=theano python cfo_channel_training_simulations.py --help
```

```
usage: cfo_channel_testing_simulations.py [-h]
          [-a {reim,reim2x,reimsqrt2x,magnitude,phase,re,im,modrelu,crelu}]
          [-phy_ch] [-phy_cfo] [-comp_cfo] [-eq_tr] [-aug_ch] [-aug_cfo]
          [-res] [-eq_test] [-comp_cfo_test] [-aug_ch_test] [-aug_cfo_test]

optional arguments:
  -h, --help            show this help message and exit
  -a {reim,reim2x,reimsqrt2x,magnitude,phase,re,im,modrelu,crelu},
  --architecture {reim,reim2x,reimsqrt2x,magnitude,phase,re,im,modrelu,crelu}
                        Architecture

setup: Setup for experiments

  -phy_ch, --physical_channel
                        Emulate the effect of channel variations, default = False
  -phy_cfo, --physical_cfo
                        Emulate the effect of frequency variations, default = False
  -comp_cfo, --compensate_cfo
                        Compensate frequency of training set, default = False
  -eq_tr, --equalize_train
                        Equalize training set, default = False
  -aug_ch, --augment_channel
                        Augment channel for training set, default = False
  -aug_cfo, --augment_cfo
                        Augment CFO for training set, default = False
  -res, --obtain_residuals
                        Obtain residuals for both train and test set, default = False
  -comp_cfo_test, --compensate_cfo_test
                        Compensate frequency of test set, default = False

test setup: Test Setup for experiments

  -eq_test, --equalize_test
                        Equalize test set, default = False
  -aug_ch_test, --augment_channel_test
                        Augment channel for test set, default = False
  -aug_cfo_test, --augment_cfo_test
                        Augment CFO for test set, default = False
```

Running the code with the different-day scenario (channel, CFO):

```bash
KERAS_BACKEND=theano python cfo_channel_training_simulations.py --physical_channel --physical_cfo --augment_channel --augment_cfo
KERAS_BACKEND=theano python cfo_channel_testing_simulations.py --physical_channel --physical_cfo --augment_channel --augment_cfo --augment_channel_test --augment_cfo_test
```
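Channel augmentation, at its simplest, convolves each training signal with a freshly drawn random multipath channel so the network cannot key on any single channel realization. A toy sketch of that idea (the tap count and exponential power profile are made-up choices, not the repo's fading_model.py):

```python
import numpy as np

def augment_channel(x, num_taps=4, rng=None):
    """Convolve x with a randomly drawn Rayleigh-fading multipath channel."""
    if rng is None:
        rng = np.random.default_rng()
    # Complex Gaussian taps with an exponentially decaying power profile.
    powers = np.exp(-np.arange(num_taps))
    powers /= powers.sum()
    taps = np.sqrt(powers / 2) * (rng.standard_normal(num_taps)
                                  + 1j * rng.standard_normal(num_taps))
    return np.convolve(x, taps, mode="same")

rng = np.random.default_rng(0)
x = rng.standard_normal(256) + 1j * rng.standard_normal(256)
y = augment_channel(x, rng=rng)   # a different channel draw per training example
```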

Module Structure

```
project
│   README.md
│   cfo_channel_training_simulations.py   Training code for all the experiments
│   cfo_channel_testing_simulations.py    Testing code from checkpoints
│   configs_train.json                    All hyper-parameters for training
│   configs_test.json                     All hyper-parameters for testing
│   simulators.py                         All simulations (CFO, channel, residuals, etc.) as functions
│   experiment_setup.py                   Experiment setup (apply different-day channel/CFO effects or not)
│
└───cxnn
│   │   models.py                         Neural network architectures
│   │   train.py                          Training function
│   │   train_network_reim_mag.py         Training function for real and complex networks
│   │
│   └───complexnn
│       │   ...                           Complex-valued neural network implementation
│
└───preproc
│   │   fading_model.py                   Signal processing tools (fading models, etc.)
│   │   preproc_wifi                      Preprocessing tools (equalization, etc.)
│
└───tests
│   │   test_aug_analysis.py              Analysis of augmentation experiments
│   │   visualize_offset.py               Visualization of frequency offsets
│
└───data
    │   simulations.npz                   Simulated WiFi dataset
```

Complex-valued CNNs

Complex layers are from Trabelsi et al., "Deep Complex Networks", ICLR 2018 (MIT license), with the addition of cxnn/complexnn/activations.py, which contains the ModReLU activation function.
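ModReLU thresholds the magnitude of a complex activation while preserving its phase: f(z) = ReLU(|z| + b) * z/|z|, with a learnable real bias b (usually initialized negative). A NumPy sketch of the idea only; the repo's actual Keras/Theano layer lives in cxnn/complexnn/activations.py:

```python
import numpy as np

def modrelu(z, b):
    """ModReLU: relu(|z| + b) * z/|z|, applied elementwise to complex z.

    Activations with magnitude below -b are zeroed; survivors are shrunk
    in magnitude but keep their original phase.
    """
    mag = np.abs(z)
    scale = np.maximum(mag + b, 0.0) / np.maximum(mag, 1e-9)  # avoid div-by-zero
    return scale * z

z = np.array([1 + 1j, 0.1 + 0.1j, -2 + 0j])
out = modrelu(z, b=-0.5)
# out[1] is zeroed (|z| = 0.14 < 0.5); out[0] and out[2] keep their phase.
```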

Citation

```
@inproceedings{fingerprinting2019globecom,
  author    = {Gopalakrishnan, Soorya and Cekic, Metehan and Madhow, Upamanyu},
  booktitle = {IEEE Global Communications Conference (Globecom)},
  location  = {Waikoloa, Hawaii},
  title     = {Robust Wireless Fingerprinting via Complex-Valued Neural Networks},
  year      = {2019}
}

@inproceedings{9723393,
  author    = {Cekic, Metehan and Gopalakrishnan, Soorya and Madhow, Upamanyu},
  booktitle = {2021 55th Asilomar Conference on Signals, Systems, and Computers},
  title     = {Wireless Fingerprinting via Deep Learning: The Impact of Confounding Factors},
  year      = {2021},
  pages     = {677-684},
  doi       = {10.1109/IEEECONF53345.2021.9723393}
}
```

Issues

Bump ipython from 7.16.3 to 8.10.0

opened on 2023-02-10 23:09:41 by dependabot[bot]

Bumps ipython from 7.16.3 to 8.10.0.


question about dataset

opened on 2022-09-20 12:17:40 by Jiayangan

This is an excellent and exhaustive work. Thanks to the author for sharing the code and the corresponding dataset. If it's convenient, I have two questions:

  1. Have you tried testing on the following two real-world datasets from Northeastern University, or on other public datasets?
     (1) https://wiot.northeastern.edu/wp-content/uploads/2020/07/dataset_release.pdf
     (2) https://genesys-lab.org/oracle

  2. Can you share the code for generating the simulation dataset?

complex network accuracy on wifi

opened on 2021-03-10 02:16:16 by pelavoie

Hi, I cannot reproduce the complex network accuracy of 99.62% on the clean WiFi dataset reported in Table I. With the code from this repository, or in a custom training harness, I get performance equivalent to or worse than the "Real 2x" architecture (<98%). Is there something I am missing?

Metehan Cekic

Applied Scientist @AWS
