python tools for BigDataViewer

constantinpape, updated 🕥 2023-01-17 18:12:00


pyBDV

Python tools for BigDataViewer.

Installation

You can install the package from source via `python setup.py install` or via conda: `conda install -c conda-forge pybdv`.

Usage

Python

Write out a numpy array volume to bdv format:

```python
import numpy as np
from pybdv import make_bdv

# example input: a 3d numpy array
volume = np.random.randint(0, 255, size=(64, 64, 64), dtype='uint8')

out_path = '/path/to/out'

# the scale factors determine the levels of the multi-scale pyramid
# that will be created by pybdv.
# the downscaling factors are interpreted relative to the previous factor
# (rather than absolute) and the zeroth scale level (corresponding to [1, 1, 1])
# is implicit, i.e. DON'T specify it
scale_factors = [[2, 2, 2], [2, 2, 2], [4, 4, 4]]

# the downscale mode determines the method for downscaling:
# - interpolate: cubic interpolation
# - max: downscale by maximum
# - mean: downscale by averaging
# - min: downscale by minimum
# - nearest: nearest neighbor downscaling
mode = 'mean'

# specify a resolution of 0.5 micron per pixel (for the zeroth scale level)
make_bdv(volume, out_path, downscale_factors=scale_factors,
         downscale_mode=mode, resolution=[0.5, 0.5, 0.5],
         unit='micrometer')
```

Convert an hdf5 dataset to bdv format:

```python
from pybdv import convert_to_bdv

in_path = '/path/to/in.h5'
in_key = 'data'
out_path = '/path/to/out'

# the keyword arguments are the same as for 'make_bdv'
convert_to_bdv(in_path, in_key, out_path,
               resolution=[0.5, 0.5, 0.5], unit='micrometer')
```

Command line

You can also call `convert_to_bdv` via the command line:

```bash
convert_to_bdv /path/to/in.h5 data /path/to/out --downscale_factors "[[2, 2, 2], [2, 2, 2], [4, 4, 4]]" --downscale_mode nearest --resolution 0.5 0.5 0.5 --unit micrometer
```

The downscale factors need to be encoded as a JSON list.

Conversion to n5-bdv format

BigDataViewer core also supports an n5-based data format. The data can be converted to this format by passing an output path with an n5 extension: `/path/to/out.n5`. In order to use this format, you need to install z5py.
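
For example, the hdf5 conversion from above can target the n5-bdv format just by changing the output extension. This is only a sketch with placeholder paths, reusing the `convert_to_bdv` call shown earlier:

```python
from pybdv import convert_to_bdv

# the .n5 extension of the output path selects the n5-bdv format
# (this requires z5py to be installed)
convert_to_bdv('/path/to/in.h5', 'data', '/path/to/out.n5',
               resolution=[0.5, 0.5, 0.5], unit='micrometer')
```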

Advanced IO options

If elf is available, additional input file formats are supported. For example, it is possible to convert inputs from tif slices:

```python
import os

import imageio
import numpy as np
from pybdv import convert_to_bdv

# create some example tif slices
input_path = './slices'
os.makedirs(input_path, exist_ok=True)
n_slices = 25
shape = (256, 256)

for slice_id in range(n_slices):
    # the slice id needs to be part of the file name, otherwise
    # the same file would be overwritten in every iteration
    imageio.imsave('./slices/im%03i.tif' % slice_id,
                   np.random.randint(0, 255, size=shape, dtype='uint8'))

# the key is a glob pattern that matches the tif slices
input_key = '*.tif'
output_path = 'from_slices.h5'
convert_to_bdv(input_path, input_key, output_path)
```

or tif stacks:

```python
import imageio
import numpy as np
from pybdv import convert_to_bdv

# create an example tif stack
input_path = './stack.tif'
shape = (25, 256, 256)
imageio.volsave(input_path, np.random.randint(0, 255, size=shape, dtype='uint8'))

# no key is needed for a tif stack, so it is left empty
input_key = ''
output_path = 'from_stack.h5'
convert_to_bdv(input_path, input_key, output_path)
```

On-the-fly processing

Data can also be added on the fly: use pybdv.initialize_dataset to create the bdv file and then BdvDataset to add (and downscale) new sub-regions of the data as they become available. See examples/on-the-fly.py for details.
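
The sketch below only illustrates the intended workflow; the import paths, function signatures and the write interface of `BdvDataset` are assumptions here (the release notes refer to the creation function as `initialize_bdv`), so please consult examples/on-the-fly.py for the authoritative version.

```python
# rough sketch of on-the-fly writing; the signatures below are assumptions
import numpy as np
import pybdv

path = 'on_the_fly.n5'
shape = (256, 256, 256)

# create an empty bdv dataset (assumed signature)
pybdv.initialize_bdv(path, shape, dtype='uint8')

# wrap it as a BdvDataset; written sub-regions are downscaled on the fly
# (assumed constructor and slice-assignment interface)
ds = pybdv.BdvDataset(path, timepoint=0, setup_id=0)

# write one sub-region; the scale pyramid is updated for this region
block = np.random.randint(0, 255, size=(64, 64, 64), dtype='uint8')
ds[0:64, 0:64, 0:64] = block
```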

Dask array support

You can use the pybdv.make_bdv_from_dask_array function and pass a dask array. Currently only zarr and n5 outputs are supported, and only a limited set of downsampling options is available (using dask.array.coarsen). See examples/dask_array.py for details.
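
A minimal sketch of such a call is shown below. The keyword arguments are assumed to mirror `make_bdv` and are not verified against the actual signature; see examples/dask_array.py for the real usage.

```python
import dask.array as da
from pybdv import make_bdv_from_dask_array

# a chunked dask array as input
data = da.random.random((256, 256, 256), chunks=(64, 64, 64))

# the output must be zarr or n5; downsampling uses dask.array.coarsen,
# so only reduction-based modes (e.g. mean) can be expected to work
make_bdv_from_dask_array(data, 'from_dask.n5',
                         downscale_factors=[[2, 2, 2], [2, 2, 2]],
                         resolution=[0.5, 0.5, 0.5], unit='micrometer')
```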

Issues

Add support for new hdf5 dtype handling

opened on 2023-01-17 11:58:47 by constantinpape

Interpolation mode failed in the newest version?

opened on 2022-04-20 16:05:49 by maximka48

Hi Constantin

I reinstalled conda env and pybdv recently. Was processing a file as usual and got an error.

```
(/das/work/p15/p15889/Maxim_LCT) [[email protected] scripts_batch]$
Downsample scale 1 / 5
/das/work/p15/p15889/Maxim_LCT/lib/python3.6/site-packages/pybdv/downsample.py:51: UserWarning: Downscaling with mode 'interpolate' may lead to different results depending on the chunk size
  warn("Downscaling with mode 'interpolate' may lead to different results depending on the chunk size")
  0%|          | 0/16384 [00:00<?, ?it/s]
Traceback (most recent call last):
  File "5sec_185deg_4ppd_ra_dev.py", line 208, in <module>
    unit='micrometer', setup_name = data_name)
  File "/das/work/p15/p15889/Maxim_LCT/lib/python3.6/site-packages/pybdv/converter.py", line 461, in make_bdv
    overwrite=overwrite_data)
  File "/das/work/p15/p15889/Maxim_LCT/lib/python3.6/site-packages/pybdv/converter.py", line 204, in make_scales
    overwrite=overwrite)
  File "/das/work/p15/p15889/Maxim_LCT/lib/python3.6/site-packages/pybdv/downsample.py", line 172, in downsample
    sample_chunk(bb)
  File "/das/work/p15/p15889/Maxim_LCT/lib/python3.6/site-packages/pybdv/downsample.py", line 163, in sample_chunk
    outp = downsample_function(inp, factor, out_shape)
  File "/das/work/p15/p15889/Maxim_LCT/lib/python3.6/site-packages/pybdv/downsample.py", line 15, in ds_interpolate
    anti_aliasing=order > 0, preserve_range=True)
TypeError: resize() got an unexpected keyword argument 'anti_aliasing'
```

Do you have an idea what could be the problem?

metadata - auto-fill scales from N5 attributes

opened on 2022-01-03 16:34:51 by martinschorb

auto-fill scales from N5 attributes

on the fly example 3D, multi channels, multi timepoints

opened on 2021-09-10 09:06:10 by romainGuiet

Hi @constantinpape,

Thank you for this package. Is there any chance you will "soon" upgrade your on-the-fly 2D example to a minimal 3D, multi-channel, multi-timepoint one?

Thank you in advance,

Romain

Halo computation in downsample function does not work correctly

opened on 2020-11-18 21:35:00 by constantinpape

Fortunately, it is only necessary for the interpolate mode; for the other modes the 'natural' halo that results from upscaling the bounding box is sufficient.

dealing with absolute paths

opened on 2020-05-11 13:07:22 by martinschorb

Hi,

What can we do to make pybdv applicable to the following scenario:

  • 2 large datasets in some common storage location (maybe even separate group shares)
  • no write access to those
  • I want to register them to each other
  • I need to create matching bdv xml files pointing to the data but with modified AffineTransform and potentially other attributes.
  • so basically I need some means of creating additional valid BDV xml files without having write access to the data directory.

My idea is to just use write_xml_metadata and have it point to the data container. This however cannot be done using relative paths if the user does not have write access there. So far you have that hardcoded in this function. I will give it a try by finding the common path and then using relative directory listing, however, this will fail under Windows when different shares are mounted as different drives...

Any ideas how to solve that? S3 storage for this data?

Releases

Add dask array support 2021-11-21 11:00:13

Add make_bdv_from_dask_array for dask array support by @boazmohar

Better on the fly support 2021-09-20 10:49:53

Better support for on-the-fly processing:

  • adds initialize_bdv function to create an empty bdv dataset

Add functionality for reading and writing setup names 2021-03-25 10:43:17

Add bdv dataset 2021-01-21 10:41:39

  • Add bdv dataset to write to all scale levels of a file in bdv format from python; implemented by @jhennies
  • Fix issues in downscaling methods
  • Work on support for zarr (not working yet due to https://github.com/zarr-developers/zarr-python/issues/693)

Custom attributes and over-write functionality 2020-07-20 14:25:21

Add utility functions 2020-02-16 12:36:29

Utility functions to read resolution, file type and scale factors from bdv files.

Constantin Pape

Group leader at Uni Goettingen. I work on deep learning and computer vision solutions for large-scale bio-image analysis.

GitHub Repository

bigdataviewer image-pyramid 3d-viewer bdv