FreiHAND is a dataset for the evaluation and training of deep neural networks that estimate hand pose and shape from single color images; it was proposed in our paper. Its current version contains 32560 unique training samples and 3960 unique evaluation samples. The training samples were recorded in front of a green screen, which allows for background removal. We provide 3 additional sample versions for each training sample, each using a different post-processing strategy, yielding a total training set size of 130240 samples.
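The four versions of each sample are typically addressed through a single flat image index. Below is a minimal sketch of that mapping; the version names and the block-wise numbering are assumptions about the data layout, not guaranteed by this README:

```
# Sketch: map a (sample_id, version) pair to a flat image index, assuming
# the 130240 training images are numbered block-wise by version.
NUM_UNIQUE = 32560                           # unique training samples
VERSIONS = ['gs', 'hom', 'sample', 'auto']   # assumed names: green screen + 3 post-processed

def image_index(sample_id, version):
    assert 0 <= sample_id < NUM_UNIQUE
    return VERSIONS.index(version) * NUM_UNIQUE + sample_id

print(image_index(0, 'gs'))    # -> 0
print(image_index(0, 'auto'))  # -> 97680
```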
This repository contains a collection of scripts that show how the dataset can be used. See the project page for additional information.
An extended version of this dataset, with calibration data and multiple views, is released as HanCo. Due to ongoing problems with the Codalab evaluation server we have decided to release the evaluation split annotations publicly on our dataset page.
Download the dataset. See the project page for instructions.
Install basic requirements:
```
virtualenv -p python2.7 ./venv
source venv/bin/activate
pip install numpy matplotlib scikit-image transforms3d tqdm opencv-python cython
```
Assuming ${DB_PATH} is the path to where you unpacked the dataset (i.e. the directory the ./training/ and ./evaluation/ folders branch off from), you should now be able to run the following to show some dataset samples. In my case ${DB_PATH} holds the value ~/FreiHAND_pub_v2/.
```
python view_samples.py ${DB_PATH}
python view_samples.py ${DB_PATH} --show_eval
```
The script provides a couple of other parameters you might want to try. Note that for visualization of the hand shape you need to follow the Advanced setup.
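If you want to access the annotations directly instead of going through view_samples.py, they are plain JSON. Below is a minimal sketch; the file names training_K.json and training_xyz.json and the camera-coordinate convention are assumptions based on the published data layout:

```
import json
import os
import numpy as np

db_path = os.path.expanduser('~/FreiHAND_pub_v2/')  # your ${DB_PATH}

# Assumed layout: one list entry per unique training sample.
with open(os.path.join(db_path, 'training_K.json')) as f:
    K_list = json.load(f)    # 3x3 camera intrinsics per sample
with open(os.path.join(db_path, 'training_xyz.json')) as f:
    xyz_list = json.load(f)  # 21x3 keypoints, assumed to be in camera coordinates

idx = 0
K = np.array(K_list[idx])
xyz = np.array(xyz_list[idx])

# Project the 3D keypoints into the image: uv ~ K * xyz, then dehomogenize.
uv = np.dot(K, xyz.T).T
uv = uv[:, :2] / uv[:, 2:]
print(uv.round(1))  # 21x2 pixel coordinates
```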
Download Models & Code from the MANO website: http://mano.is.tue.mpg.de
Assuming ${MANO_PATH} contains the path to where you unpacked the downloaded archive, use the provided script to enable visualization of the hand shape fit. See the section Mano version for a known caveat.
```
python setup_mano.py ${MANO_PATH}
```
Install OpenDR
Getting OpenDR installed can be tricky. Maybe you are lucky and the pip install works for your system:

```
pip install opendr
```

But there is a known issue with one of its dependencies. On my system the pip install didn't work, so I installed the package using:

```
bash install_opendr.sh
```
Visualize samples with rendered MANO shapes:

```
python view_samples.py ${DB_PATH} --mano
```
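For orientation, rendering a mesh with OpenDR looks roughly like the sketch below. This is generic OpenDR usage with dummy data, not the exact code of view_samples.py; in practice verts would be the 778x3 MANO vertices of a sample and K its 3x3 intrinsics:

```
import numpy as np
from opendr.camera import ProjectPoints
from opendr.renderer import ColoredRenderer

# Dummy stand-ins: a single triangle 0.5 m in front of the camera.
verts = np.array([[-0.05, -0.05, 0.5],
                  [ 0.05, -0.05, 0.5],
                  [ 0.00,  0.05, 0.5]])
faces = np.array([[0, 1, 2]], dtype=np.uint32)
K = np.array([[480.0, 0.0, 112.0],
              [0.0, 480.0, 112.0],
              [0.0, 0.0, 1.0]])  # assumed intrinsics for a 224x224 image

rn = ColoredRenderer()
rn.camera = ProjectPoints(v=verts, rt=np.zeros(3), t=np.zeros(3),
                          f=K[[0, 1], [0, 1]], c=K[:2, 2], k=np.zeros(5))
rn.frustum = {'near': 0.1, 'far': 10.0, 'width': 224, 'height': 224}
rn.set(v=verts, f=faces, vc=np.ones_like(verts), bgcolor=np.zeros(3))

img = rn.r  # 224x224x3 float image in [0, 1]
```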
Update: Due to ongoing problems with the Codalab evaluation server we have decided to release the evaluation split annotations publicly on our dataset page.
In order to ensure a fair and consistent protocol, evaluation of your algorithm on FreiHAND is handled through Codalab.
Make predictions for the evaluation dataset. The code provided here predicts zeros for all joints and vertices.
```
python pred.py ${DB_PATH}
```
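For reference, the zero predictor essentially boils down to the following sketch; the exact pred.json structure (a list holding one 21x3 keypoint array and one 778x3 vertex array per evaluation sample) is an assumption about what pred.py writes:

```
import json

NUM_EVAL = 3960  # evaluation samples

# All-zero predictions: 21 joints and 778 MANO vertices per sample.
xyz_pred_list = [[[0.0] * 3 for _ in range(21)] for _ in range(NUM_EVAL)]
verts_pred_list = [[[0.0] * 3 for _ in range(778)] for _ in range(NUM_EVAL)]

with open('pred.json', 'w') as f:
    json.dump([xyz_pred_list, verts_pred_list], f)
```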
Zip the pred.json file:

```
zip -j pred.zip pred.json
```
Upload pred.zip to our Codalab competition website (Participate -> Submit).
Wait for the evaluation server to report back your results, then publish them to the leaderboard. The zero predictor will give you the following results:
```
Keypoint error          70.79 cm
Keypoint error aligned   4.73 cm
Mesh error              70.84 cm
Mesh error aligned       5.07 cm
F@5mm=0.0, F@15mm=0.0
F_aligned@5mm=0.001, F_aligned@15mm=0.031
```
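The "aligned" variants factor out the global pose and scale ambiguity before measuring the error. A common way to do this is Procrustes alignment of the prediction to the ground truth; the sketch below illustrates the idea and is not the exact evaluation server code:

```
import numpy as np

def procrustes_align(pred, gt):
    # Align pred to gt with the similarity transform (rotation, scale,
    # translation) minimizing the summed squared point-to-point distance.
    mu_p, mu_g = pred.mean(0), gt.mean(0)
    p, g = pred - mu_p, gt - mu_g
    U, S, Vt = np.linalg.svd(np.dot(p.T, g))
    R = np.dot(U, Vt)
    if np.linalg.det(R) < 0:  # guard against reflections
        U[:, -1] *= -1
        S[-1] *= -1
        R = np.dot(U, Vt)
    scale = S.sum() / (p ** 2).sum()
    return scale * np.dot(p, R) + mu_g

gt = np.random.rand(21, 3)
pred = np.random.rand(21, 3)
err = np.linalg.norm(pred - gt, axis=1).mean()
err_aligned = np.linalg.norm(procrustes_align(pred, gt) - gt, axis=1).mean()
print('error: %.3f, aligned error: %.3f' % (err, err_aligned))
```

Note that a degenerate prediction with all points equal makes the scale term in this sketch ill-defined (p is all zeros), so real evaluation code needs an extra guard there.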
Modify pred.py to use your method for making shape predictions and see how well it performs compared to the baselines on our leaderboard.
The following paragraph is only relevant if you downloaded the dataset before Oct. 22.
This dataset was developed with the original version of the MANO shape model, which differs from the one that is currently (Sept. 2019) offered for download.
If the change in version is not dealt with properly, this results in mismatches between the provided keypoint and vertex coordinates and the ones the hand model implementation yields when the respective hand model parameters are applied to it. Therefore an updated dataset version is now offered for download, in which all annotations were converted to be compatible with the new MANO version.
The conversion is approximate, but the residual vertex deviation between versions is 0.6mm on average and does not exceed 6mm for any sample. Manual inspection of the samples with the largest deviations showed that the new fits even look slightly better than the old ones.
Additionally, the new version correctly accounts for perspective correction by warping the images when non-centered image patches are cropped.
If you downloaded the dataset or cloned the repository before Oct. 22, you should update to the new version, i.e. download the new data and update the code repository.
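To make the quoted numbers concrete, the deviation between two annotation versions can be measured as below; the file names for the old and the converted vertex annotations are hypothetical:

```
import json
import numpy as np

# Hypothetical file names for the old and the converted vertex annotations.
with open('training_verts_old.json') as f:
    verts_old = np.array(json.load(f))  # N x 778 x 3, in meters
with open('training_verts.json') as f:
    verts_new = np.array(json.load(f))

# Per-vertex deviation in millimeters.
dev = np.linalg.norm(verts_old - verts_new, axis=2) * 1000.0
print('mean deviation: %.2f mm' % dev.mean())
print('max deviation:  %.2f mm' % dev.max())
```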
This dataset is provided for research purposes only and without any warranty. Any commercial use is prohibited. If you use the dataset or parts of it in your research, you must cite the respective paper.
```
@InProceedings{Freihand2019,
author = {Christian Zimmermann and Duygu Ceylan and Jimei Yang and Bryan Russell and Max Argus and Thomas Brox},
title = {FreiHAND: A Dataset for Markerless Capture of Hand Pose and Shape from Single RGB Images},
booktitle = {IEEE International Conference on Computer Vision (ICCV)},
year = {2019},
url = {"https://lmb.informatik.uni-freiburg.de/projects/freihand/"}
}
```
Hi, I want to test my results on your Codalab, but it seems something is wrong and I can't get the score. Can you publish the evaluation code so that I can compute my eval results?
Hi, how can I get version 1 of the dataset?
```
WARNING: Your kernel does not support swap limit capabilities or the cgroup is not mounted. Memory limited without swap.
/opt/conda/lib/python2.7/site-packages/matplotlib/font_manager.py:273: UserWarning: Matplotlib is building the font cache using fc-list. This may take a moment.
  warnings.warn('Matplotlib is building the font cache using fc-list. This may take a moment.')
Exception:
Traceback (most recent call last):
  File "/opt/conda/lib/python2.7/site-packages/pip/basecommand.py", line 215, in main
    status = self.run(options, args)
  File "/opt/conda/lib/python2.7/site-packages/pip/commands/install.py", line 335, in run
    wb.build(autobuilding=True)
  File "/opt/conda/lib/python2.7/site-packages/pip/wheel.py", line 749, in build
    self.requirement_set.prepare_files(self.finder)
  File "/opt/conda/lib/python2.7/site-packages/pip/req/req_set.py", line 380, in prepare_files
    ignore_dependencies=self.ignore_dependencies))
  File "/opt/conda/lib/python2.7/site-packages/pip/req/req_set.py", line 620, in _prepare_file
    session=self.session, hashes=hashes)
  File "/opt/conda/lib/python2.7/site-packages/pip/download.py", line 821, in unpack_url
    hashes=hashes
  File "/opt/conda/lib/python2.7/site-packages/pip/download.py", line 659, in unpack_http_url
    hashes)
  File "/opt/conda/lib/python2.7/site-packages/pip/download.py", line 853, in _download_http_url
    stream=True,
  File "/opt/conda/lib/python2.7/site-packages/pip/_vendor/requests/sessions.py", line 488, in get
    return self.request('GET', url, **kwargs)
  File "/opt/conda/lib/python2.7/site-packages/pip/download.py", line 386, in request
    return super(PipSession, self).request(method, url, *args, **kwargs)
  File "/opt/conda/lib/python2.7/site-packages/pip/_vendor/requests/sessions.py", line 475, in request
    resp = self.send(prep, **send_kwargs)
  File "/opt/conda/lib/python2.7/site-packages/pip/_vendor/requests/sessions.py", line 596, in send
    r = adapter.send(request, **kwargs)
  File "/opt/conda/lib/python2.7/site-packages/pip/_vendor/cachecontrol/adapter.py", line 47, in send
    resp = super(CacheControlAdapter, self).send(request, **kw)
  File "/opt/conda/lib/python2.7/site-packages/pip/_vendor/requests/adapters.py", line 497, in send
    raise SSLError(e, request=request)
SSLError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:661)
You are using pip version 9.0.1, however version 21.3.1 is available.
You should consider upgrading via the 'pip install --upgrade pip' command.
Traceback (most recent call last):
  File "/tmp/codalab/tmpTReBmp/run/program/eval.py", line 20, in <module>
    import open3d as o3d
ImportError: No module named open3d
```
Hi, is it possible for the evaluation to also include a 2D pose estimation task?
Thanks for your excellent work. Can you provide the code for converting the data to .bvh files?
Hi everyone, I've used the training_mano annotations with the pytorch3d renderer (https://github.com/facebookresearch/pytorch3d) and a PyTorch implementation of the MANO hand model from https://github.com/otaheri/MANO. I'm facing some issues setting up the camera parameters to get the same hand pose rendering in 2D image space. I guess I need to know the camera coordinates relative to the world coordinates. Could anyone help me find this information? Any documentation on the camera extrinsic parameters that were used for each image in the dataset would be helpful. Thanks.