Python tools for BigDataViewer.
You can install the package from source
```bash
python setup.py install
```
or via conda:
```bash
conda install -c conda-forge pybdv
```
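A quick way to verify that the installation worked (just a sanity check, not part of the official instructions):
```bash
python -c "import pybdv; print(pybdv.__file__)"
```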
Write out numpy array `volume` to bdv format:
```python
import numpy as np
from pybdv import make_bdv

volume = np.random.rand(64, 64, 64)  # the 3d numpy array to write out
out_path = '/path/to/out'
scale_factors = [[2, 2, 2], [2, 2, 2], [4, 4, 4]]
mode = 'mean'
make_bdv(volume, out_path, downscale_factors=scale_factors, downscale_mode=mode,
         resolution=[0.5, 0.5, 0.5], unit='micrometer')
```
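To check what was written, the hdf5 output can be inspected with h5py; each scale level of the multi-scale pyramid shows up as a separate dataset (a quick sketch, assuming the data ended up at `/path/to/out.h5`):
```python
import h5py

# list all datasets in the output file together with their shapes
with h5py.File('/path/to/out.h5', 'r') as f:
    f.visititems(lambda name, obj: print(name, getattr(obj, 'shape', '')))
```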
Convert hdf5 dataset to bdv format:
```python
from pybdv import convert_to_bdv

in_path = '/path/to/in.h5'
in_key = 'data'
out_path = '/path/to/out'
convert_to_bdv(in_path, in_key, out_path, resolution=[0.5, 0.5, 0.5], unit='micrometer')
```
You can also call `convert_to_bdv` via the command line:
```bash
convert_to_bdv /path/to/in.h5 data /path/to/out --downscale_factors "[[2, 2, 2], [2, 2, 2], [4, 4, 4]]" --downscale_mode nearest --resolution 0.5 0.5 0.5 --unit micrometer
```
The downscale factors need to be encoded as a json list.
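If you assemble the command programmatically, the json string for the factors can be produced with the standard library (just a small convenience sketch):
```python
import json

scale_factors = [[2, 2, 2], [2, 2, 2], [4, 4, 4]]
# json.dumps yields the string expected by --downscale_factors
print(json.dumps(scale_factors))
```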
BigDataViewer core also supports an n5-based data format. The data can be converted to this format by passing an output path with the n5 ending, e.g. `/path/to/out.n5`. In order to support this, you need to install z5py.
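For example, reusing `convert_to_bdv` from above, the same call writes n5 output when the output path ends in n5 (sketch only, paths are placeholders):
```python
from pybdv import convert_to_bdv

# an output path ending in .n5 selects the n5-based bdv format (requires z5py)
convert_to_bdv('/path/to/in.h5', 'data', '/path/to/out.n5',
               resolution=[0.5, 0.5, 0.5], unit='micrometer')
```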
If elf is available, additional file input formats are supported. For example, it is possible to convert inputs from tif slices:
```python
import os
import imageio
import numpy as np
from pybdv import convert_to_bdv

# create some example tif slices
input_path = './slices'
os.makedirs(input_path, exist_ok=True)
n_slices = 25
shape = (256, 256)
for slice_id in range(n_slices):
    imageio.imsave('./slices/im%03i.tif' % slice_id,
                   np.random.randint(0, 255, size=shape, dtype='uint8'))

input_key = '*.tif'
output_path = 'from_slices.h5'
convert_to_bdv(input_path, input_key, output_path)
```
or tif stacks:
```python
import imageio
import numpy as np
from pybdv import convert_to_bdv

# create an example tif stack
input_path = './stack.tif'
shape = (25, 256, 256)
imageio.volsave(input_path, np.random.randint(0, 255, size=shape, dtype='uint8'))

# for a tif stack the key is passed as an empty string
input_key = ''
output_path = 'from_stack.h5'
convert_to_bdv(input_path, input_key, output_path)
```
Data can also be added on the fly, using `pybdv.initialize_dataset` to create the bdv file and then `BdvDataset` to add (and downscale) new sub-regions of the data on the fly. See examples/on-the-fly.py for details.
You can use the `pybdv.make_bdv_from_dask_array` function and pass a dask array. Currently only zarr and n5 outputs are supported, with limited options for downsampling (using `dask.array.coarsen`). See examples/dask_array.py for details.
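A rough sketch of the intended usage; the positional arguments follow the text above, but the keyword arguments are an assumption (presumed to mirror `make_bdv`), so examples/dask_array.py remains the authoritative reference:
```python
import dask.array as da
from pybdv import make_bdv_from_dask_array

# a lazily evaluated dask array; in practice this would come from zarr/n5 storage
volume = da.random.randint(0, 255, size=(256, 256, 256), chunks=(64, 64, 64)).astype('uint8')

# the output has to be zarr or n5; the resolution/unit kwargs are assumed here
make_bdv_from_dask_array(volume, '/path/to/out.n5',
                         resolution=[0.5, 0.5, 0.5], unit='micrometer')
```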
Hi Constantin,
I reinstalled my conda env and pybdv recently. I was processing a file as usual and got an error:
```
(/das/work/p15/p15889/Maxim_LCT) [[email protected] scripts_batch]$ Downsample scale 1 / 5
/das/work/p15/p15889/Maxim_LCT/lib/python3.6/site-packages/pybdv/downsample.py:51: UserWarning: Downscaling with mode 'interpolate' may lead to different results depending on the chunk size
  warn("Downscaling with mode 'interpolate' may lead to different results depending on the chunk size")
  0%|          | 0/16384 [00:00<?, ?it/s]
Traceback (most recent call last):
  File "5sec_185deg_4ppd_ra_dev.py", line 208, in <module>
    unit='micrometer', setup_name = data_name)
  File "/das/work/p15/p15889/Maxim_LCT/lib/python3.6/site-packages/pybdv/converter.py", line 461, in make_bdv
    overwrite=overwrite_data)
  File "/das/work/p15/p15889/Maxim_LCT/lib/python3.6/site-packages/pybdv/converter.py", line 204, in make_scales
    overwrite=overwrite)
  File "/das/work/p15/p15889/Maxim_LCT/lib/python3.6/site-packages/pybdv/downsample.py", line 172, in downsample
    sample_chunk(bb)
  File "/das/work/p15/p15889/Maxim_LCT/lib/python3.6/site-packages/pybdv/downsample.py", line 163, in sample_chunk
    outp = downsample_function(inp, factor, out_shape)
  File "/das/work/p15/p15889/Maxim_LCT/lib/python3.6/site-packages/pybdv/downsample.py", line 15, in ds_interpolate
    anti_aliasing=order > 0, preserve_range=True)
TypeError: resize() got an unexpected keyword argument 'anti_aliasing'
```
Do you have an idea what could be the problem?
auto-fill scales from N5 attributes
Hi @constantinpape,
Thank you for this package! Is there any chance you will "soon" upgrade your on-the-fly 2D example to a minimal 3D, multi-channel, multi-timepoint one?
Thank you in advance,
Romain
Fortunately it is only necessary for the `interpolate` mode; for the other modes the 'natural' halo that results from upscaling the bounding box is sufficient.
Hi,
what can we do to make pybdv applicable to the following scenario: `AffineTransform` and potentially other attributes. My idea is to just use `write_xml_metadata` and have it point to the data container. However, this cannot be done using relative paths if the user does not have write access there; so far you have that hardcoded in this function. I will give it a try by finding the common path and then using a relative directory listing, but this will fail under Windows when different shares are mounted as different drives...
Any ideas how to solve that? S3 storage for this data?
- Add `make_bdv_from_dask_array` for dask array support by @boazmohar
- Better support for on-the-fly processing:
  - adds `initialize_bdv` function to create an empty bdv dataset
- Utility functions to read resolution, file type and scale factors from bdv files