
Show HN: Streaming gigabyte medical images from S3 without downloading them

98 points | 10 hours | github.com
mlhpdx 58 minutes ago

A while back I worked on a project where S3 held giant zip files containing zip files (turtles all the way down), and we made good use of range requests. I came up with seekable-s3-stream[1] to generalize working with them via an idiomatic C# stream.

[1] https://github.com/mlhpdx/seekable-s3-stream
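
For readers who don't know the pattern: the core trick is exposing an S3 object as a seekable, file-like thing whose reads become HTTP range requests. A rough Python sketch of the idea (not the linked C# library; bucket/key names are made up):

    import boto3

    # Sketch only: read() issues one range request per call; nothing is
    # downloaded up front beyond the object's size.
    class S3RangeReader:
        def __init__(self, bucket: str, key: str):
            self._s3 = boto3.client("s3")
            self._bucket, self._key = bucket, key
            self._pos = 0
            self._size = self._s3.head_object(Bucket=bucket, Key=key)["ContentLength"]

        def seek(self, offset: int) -> None:
            self._pos = offset

        def read(self, n: int) -> bytes:
            end = min(self._pos + n, self._size) - 1
            if end < self._pos:
                return b""
            resp = self._s3.get_object(
                Bucket=self._bucket, Key=self._key,
                Range=f"bytes={self._pos}-{end}",
            )
            data = resp["Body"].read()
            self._pos += len(data)
            return data

    # reader = S3RangeReader("my-bucket", "outer.zip")
    # reader.seek(1024); header = reader.read(512)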

el_pa_b 51 minutes ago

Nice!

tomnicholas 12 hours ago

The generalized form of this range-request-based streaming approach looks something like my project VirtualiZarr [0].

Many of these scientific file formats (HDF5, netCDF, TIFF/COG, FITS, GRIB, JPEG and more) are essentially just contiguous multidimensional array(/"tensor") chunks embedded alongside metadata about what's in the chunks. Efficiently fetching these from object storage is just about efficiently fetching the metadata up front so you know where the chunks you want are [1].

The data model of Zarr [2] generalizes this pattern pretty well, so that when backed by Icechunk [3], you can store a "datacube" of "virtual chunk references" that point at chunks anywhere inside the original files on S3.

This allows you to stream data out as fast as the S3 network connection allows [4], and then you're free to pull that directly, or build tile servers on top of it [5].
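
To make the "virtual chunk reference" idea concrete, here is a hypothetical sketch (the manifest layout and names are made up, not Icechunk's actual format): each reference is just a path plus byte offset and length inside the original file, and reading a chunk is a single range request.

    import fsspec

    # Hypothetical manifest: chunk key -> (path, offset, length) inside the
    # original file on S3. Real stores keep much richer metadata than this.
    chunk_refs = {
        "temperature/0.0": ("s3://my-bucket/era5_2020.nc", 8_388_608, 1_048_576),
        "temperature/0.1": ("s3://my-bucket/era5_2020.nc", 9_437_184, 1_048_576),
    }

    fs = fsspec.filesystem("s3", anon=True)

    def read_chunk(key: str) -> bytes:
        path, offset, length = chunk_refs[key]
        # One range request per chunk; the enclosing file is never downloaded.
        return fs.cat_file(path, start=offset, end=offset + length)

    raw = read_chunk("temperature/0.0")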

In the Pangeo project and at Earthmover we do all this for weather and climate science data. But the underlying OSS stack is domain-agnostic, so it works for all sorts of multidimensional array data, and VirtualiZarr has a plugin system for parsing different scientific file formats.

I would love to see if someone could create a virtual Zarr store pointing at this WSI data!

[0]: https://virtualizarr.readthedocs.io/en/stable/

[1]: https://earthmover.io/blog/fundamentals-what-is-cloud-optimi...

[2]: https://earthmover.io/blog/what-is-zarr

[3]: https://earthmover.io/blog/icechunk-1-0-production-grade-clo...

[4]: https://earthmover.io/blog/i-o-maxing-tensors-in-the-cloud

[5]: https://earthmover.io/blog/announcing-flux

el_pa_b 52 minutes ago

Thanks for sharing! I agree that newer scientific formats will need to think hard about how they can be read directly from cloud storage.

rwmj 7 hours ago

https://dicom.nema.org/dicom/dicomwsi/

Interesting guide to the Whole Slide Images (WSI) format. The surprising thing for me is that compression is used, and they note that it does not affect diagnostic use.

Back in the day we used TIFF for a similar application (X-ray detector images).

yread 3 hours ago

Digital pathology images are just a lot bigger than radiology images; we regularly see slides of 500k x 500k pixels.

el_pa_b 2 hours ago

Yes, they can be huge, and for modalities like multiplex immunofluorescence with up to 20 channels, you're often dealing with very faint proteomic signals. Preserving that signal is critical, and compression can destroy it quickly.

yread 1 hour ago

CODEX can do up to 120 channels, I think. They are also 16/32-bit. They are usually just deflate-compressed.

matthberg 8 hours ago

Seems very similar to how maps work on the web these days, in particular Protomaps files [0]. I wonder if you could view the medical images in Leaflet or another frontend map library with the addition of a shim layer? Cool work!

0: https://protomaps.com/

el_pa_b 8 hours ago

Thanks! Indeed, digital pathology, satellite imaging, and geospatial data share a lot of computational problems: efficient storage and fast spatial retrieval/indexing. I think this could be doable.

As for digital pathology, the field is very much tied to scanner-vendor proprietary formats (SVS, NDPI, MRXS, etc.).

tokyovigilante 7 hours ago

This is really a job for JPEG-XL, which supports decoding portions of larger images and has recently been added to the DICOM standard.

iberator 3 hours ago

No. JPEG compression sucks. Medical data should not be compressed lossily. PNG and TIFF for the win

vrighter 3 hours ago

Unlike JPEG, JPEG-XL supports lossless compression too.

nszceta 58 minutes ago

The original JPEG supports a lossless mode.

JPEG-LL refers to the lossless mode of the original JPEG standard (ISO/IEC 10918-1 or ITU-T T.81), also known as JPEG Lossless. It is not to be confused with JPEG-LS (ISO/IEC 14495-1, Transfer Syntax 1.2.840.10008.1.2.4.80), which offers better ratios and speed via the LOCO-I algorithm. JPEG-LL is older and less efficient, yet more widely implemented in legacy systems.

The lossless mode in JPEG-XL is superior to all of those.
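
For DICOM specifically, the codec in use is visible in the transfer syntax UID. A small pydicom sketch (the file name is hypothetical) that distinguishes the two lossless JPEG flavours mentioned above:

    import pydicom

    ds = pydicom.dcmread("slide_frame.dcm")
    ts = ds.file_meta.TransferSyntaxUID
    print(ts, ts.name)

    if str(ts) == "1.2.840.10008.1.2.4.80":
        print("JPEG-LS lossless (LOCO-I)")
    elif str(ts) == "1.2.840.10008.1.2.4.70":
        print("original JPEG lossless (process 14, SV1)")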

dmd 6 hours ago

Or IIIF.

Sleaker 2 hours ago

Maybe a bit pedantic, but if you're streaming it, then you're still downloading portions of it, yah? Just not persisting the whole thing locally before viewing it.

Edit: Looks like this is a slight discrepancy between the HN title and the GitHub description.

el_pa_b 2 hours ago

Yes, I agree. I'm not persisting the WSI locally, which creates a smoother user experience. But I do need to transfer tiles from server to client. They are stored in an LRU cache and evicted if not used.
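
A minimal sketch of that caching pattern (names are hypothetical, not WSIStreamer's actual internals): tiles keyed by pyramid level and tile coordinates, with least-recently-used eviction.

    from functools import lru_cache

    def _fetch_tile(level: int, col: int, row: int) -> bytes:
        # Stand-in for the real work: range-read and decode one tile from S3.
        return f"tile {level}/{col}/{row}".encode()

    @lru_cache(maxsize=512)
    def get_tile(level: int, col: int, row: int) -> bytes:
        return _fetch_tile(level, col, row)

    tile = get_tile(2, 10, 7)  # a repeat call with the same key is a cache hit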

yread 3 hours ago

You could probably do it completely client-side. I have a parser for 12 scanner formats in JS. It doesn't read the pixels, just parses metadata, but JPEG is easy and the most common anyway.

lametti 7 hours ago

Interesting - I'm not so familiar with S3, but I wonder if this would work for WSI stored on-premises. Lower network requirements and a lightweight web viewer would be very advantageous in this use case. I'll have to try it out!

el_pa_b 7 hours ago

When WSI are stored on-premises, they are typically stored on hard drives with a filesystem. If you have a filesystem, you can use OpenSlide and a viewer like OpenSeadragon to visualize the slide.
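
For reference, that filesystem path looks roughly like this with OpenSlide's Python bindings (the file name is made up):

    import openslide

    slide = openslide.OpenSlide("tumor_001.svs")
    print(slide.dimensions, slide.level_count)

    # Read a 512x512 region at level 0 starting at pixel (10000, 10000).
    region = slide.read_region((10_000, 10_000), 0, (512, 512))
    region.convert("RGB").save("region.png")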

WSIStreamer is relevant for storage systems without a filesystem. In this case, OpenSlide cannot work (it needs to seek and open the file).

isuckatcoding 1 hour ago

Is there a visual demo of this?

invaderJ1m 5 hours ago

How does this compare to things like COGs (Cloud Optimised GeoTIFFs) or other binary blob + index raster pyramid formats?

Was there a requirement to work with these formats directly without converting?

el_pa_b 3 hours ago

Yes, there is a requirement to work with the vendor format. For instance, TCGA (The Cancer Genome Atlas - a large dataset of 12k+ human tumor cases) has mostly .svs files (scanned with an Aperio scanner). We tend to work with these formats as they contain all the metadata we need.

Sometimes we re-write the image in a pyramidal TIFF format (this has happened to me a few times, when NDPI images had only the highest-resolution level and no pyramid), in which case COGs could work.
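
One way to do that re-write, as a sketch (assuming pyvips/libvips can open the source slide; file names are placeholders), is to save a tiled, pyramidal TIFF:

    import pyvips

    image = pyvips.Image.new_from_file("single_level.ndpi", access="sequential")
    image.tiffsave(
        "pyramidal.tif",
        tile=True, tile_width=256, tile_height=256,
        pyramid=True,
        compression="jpeg", Q=85,
        bigtiff=True,
    )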

Nora23 7 hours ago

How does this handle images with different compression formats?

el_pa_b 3 hours ago

Currently we only support TIFF and SVS with JPEG and JPEG2000 compression formats. I plan on supporting more file extensions (e.g. NDPI, MRXS) in the future, each with their own compression formats.
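
Since SVS is TIFF-based, the compression of each page/level can be inspected up front, e.g. with tifffile (a sketch; the file name is hypothetical):

    import tifffile

    with tifffile.TiffFile("slide.svs") as tf:
        for i, page in enumerate(tf.pages):
            # e.g. JPEG or one of the Aperio JPEG 2000 variants
            print(i, page.shape, page.compression)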

andrewstuart 4 hours ago

Please don’t use AWS S3; there are vast numbers of much cheaper compatible choices.

lijok 2 hours ago

I guess by "compatible" you mean the data plane.

There are choices that speak the S3 data plane API (GetObject, ListBucket, etc).

There are no alternatives that support most of the AWS S3 functionality, such as replication and event notifications.
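
In practice, "speaks the S3 data plane" means you can often just point an S3 client at a different endpoint; a sketch with boto3 (endpoint and credentials are placeholders):

    import boto3

    s3 = boto3.client(
        "s3",
        endpoint_url="https://storage.example.com",
        aws_access_key_id="...",
        aws_secret_access_key="...",
    )
    obj = s3.get_object(Bucket="slides", Key="tumor_001.svs", Range="bytes=0-65535")
    header = obj["Body"].read()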

el_pa_b 3 hours ago

As data scientists, we usually don't get to choose. It's usually up to the hospital or digital lab's CISO to decide where the digitized slides are stored, and S3 is a fairly common option.

That being said, I plan to support more cloud platforms in the future, starting with GCP.

kube-system 2 hours ago

“Cheap” is not always the #1 requirement for a project.

thenaturalist 4 hours ago

Pretty bold half-claim while not backing it up with a single data point. :D

tonyhart7 7 hours ago

hey, I need this
