# ImageryClient
Connectomics data often combines microscopy imagery with segmentation: labels that assign each voxel to a distinct object in that imagery.
While tools like [Neuroglancer](https://github.com/google/neuroglancer) are great for exploring the data, a common task is to make figures overlaying 2d images and segmentation sliced from the larger volume.
ImageryClient is designed to make it easy to generate aligned cutouts from imagery and segmentation, and make it efficient to produce attractive, publication-ready overlay images.
Because of the size of these volumes, cloud-based serverless n-d array file storage systems are often used to host this data.
[CloudVolume](https://github.com/seung-lab/cloud-volume/) has become an excellent general purpose tool for accessing such data.
However, imagery and segmentation for the same data are hosted at distinct cloud locations and can differ in basic properties like base resolution.
Moreover, imagery and segmentation data mean intrinsically different things.
Values in imagery indicate pixel intensity in order to produce a picture, while values in segmentation indicate the object id at a given location.
ImageryClient acts as a front end for making aligned cutouts from multiple cloudvolume sources, splitting segmentations into masks for each object, and more.
We make use of [Numpy arrays](http://numpydoc.readthedocs.io) and [Pillow Images](https://pillow.readthedocs.io/) to represent data.
Please see the appropriate documentation for information about saving data to image files and more.
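For instance, saving a cutout to disk once it is wrapped as a PIL Image is a one-liner. A minimal sketch, using a synthetic uint8 array in place of a real cutout and an arbitrary filename:

```python
import numpy as np
from PIL import Image

# Synthetic stand-in for a 2d image cutout.
image = (np.random.rand(64, 64) * 255).astype('uint8')

# Wrap the array as a PIL Image and save it as a PNG.
Image.fromarray(image).save('cutout.png')
```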
## How to use ImageryClient
Here, we will use the ImageryClient to get some data from the [Kasthuri et al. 2014 dataset](https://neuroglancer-demo.appspot.com/fafb.html#!gs://fafb-ffn1/main_ng.json) hosted by Google.
In its simplest form, we just initialize an ImageryClient object with an image cloudpath and a segmentation cloudpath.
Values are taken from the layers in the linked neuroglancer state.
```python
import imageryclient as ic
img_src = 'precomputed://gs://neuroglancer-public-data/kasthuri2011/image_color_corrected'
seg_src = 'precomputed://gs://neuroglancer-public-data/kasthuri2011/ground_truth'
img_client = ic.ImageryClient(image_source=img_src,
                              segmentation_source=seg_src)
```
### Bounds
We need bounds to make a cutout.
The imagery client takes bounds as a pair of points: upper-left and lower-right.
Since we often want to center an image on a particular analysis point, `bounds_from_center` can help produce 2d or 3d bounds around a specified point.
The first argument is the center, subsequent ones set width/height/depth.
Note that points are by default in voxel space for mip 0, and thus correspond to values shown in Neuroglancer.
```python
ctr = [5319, 8677, 1201]
image_size = 400
bounds = ic.bounds_from_center(ctr, width=image_size, height=image_size, depth=1)
```
### Imagery
We can download an image cutout from the bounds directly as a numpy array:
```python
image = img_client.image_cutout(bounds)
# Use PIL to visualize
from PIL import Image
Image.fromarray(image.T)
```

#### An alternative to bounds
When upper and lower bounds are specified, the field of view that is downloaded stays fixed while the pixel resolution changes with mip level.
Alternatively, one might want an image with a specific size in pixels at a specific mip level, accepting whatever field of view that provides.
This can be done in `image_cutout` by passing the center point in place of bounds and specifying `voxel_dimensions` as a 2- or 3-element array.
The center point _will_ be adjusted to the mip level as needed, while the dimensions will not.
```python
image = img_client.image_cutout(ctr, voxel_dimensions=(image_size, image_size))
```
If you specify a mip level, this approach will always yield an image of the same pixel size, while a bounds-based cutout gets smaller with increasing mips as the effective resolution gets coarser.
For example, using bounds:
```python
image = img_client.image_cutout(bounds, mip=3)
Image.fromarray(image.T)
```

And using specified voxel dimensions:
```python
image = img_client.image_cutout(ctr, mip=3, voxel_dimensions=(image_size, image_size))
Image.fromarray(image.T)
```

### Segmentations
An aligned segmentation cutout is retrieved similarly.
Note that segmentations show segment ids, and are not directly visualizable.
However, we can convert to a uint8 greyscale and see the gist, although there are many better approaches to coloring segmentations.
```python
seg = img_client.segmentation_cutout(bounds)
import numpy as np
Image.fromarray((seg.T / np.max(seg) * 255).astype('uint8'))
```

Specific root ids can also be specified. All pixels outside those root ids have a value of 0.
```python
root_ids = [2282, 4845]
seg = img_client.segmentation_cutout(bounds, root_ids=root_ids)
Image.fromarray((seg.T / np.max(seg) * 255).astype('uint8'))
```

### Split segmentations
It's often convenient to split out the segmentation for each root id as a distinct mask. These "split segmentations" come back as a dictionary with root id as key and binary mask as value.
```python
split_seg = img_client.split_segmentation_cutout(bounds, root_ids=root_ids)
Image.fromarray((split_seg[root_ids[0]].T * 255).astype('uint8'))
```

### Aligned cutouts
Aligned image and segmentations can be downloaded in one call, as well.
If the lowest-mip data in each differs in resolution, the lower-resolution data can optionally be upsampled to the higher resolution in order to produce aligned overlays.
Root ids and split segmentations can be optionally specified. This is the best option if your primary goal is overlay images.
```python
image, segs = img_client.image_and_segmentation_cutout(bounds,
                                                       split_segmentations=True,
                                                       root_ids=root_ids)
```
## Producing overlays
Now let's produce an overlay of segmentation and imagery to highlight a particular synapse.
Overlays are returned as a [PIL Image](https://pillow.readthedocs.io/en/stable/), which has convenient saving options but can also be converted to an RGBA array via a simple `np.array` call.
Note that if imagery isn't specified, the segmentations are colored but not put over another image.
Segmentations must be either a list or a dict, such as comes out of split segmentation cutouts.
```python
ic.composite_overlay(segs, imagery=image)
```
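The conversion back to an array works as described above. A minimal sketch, using a synthetic RGBA image in place of a real overlay (the size and color here are arbitrary):

```python
import numpy as np
from PIL import Image

# Synthetic stand-in for an overlay returned by composite_overlay.
overlay = Image.new('RGBA', (400, 400), (255, 0, 0, 128))

# np.array converts a PIL Image to an array of shape (height, width, 4).
arr = np.array(overlay)
```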

### Aesthetic options
Colors are chosen by default from the perceptually uniform discrete [HUSL palette](https://seaborn.pydata.org/generated/seaborn.husl_palette.html) as implemented in Seaborn, and any color scheme available through Seaborn's [color_palette](https://seaborn.pydata.org/generated/seaborn.color_palette.html?highlight=color_palette) function is similarly easy to specify.
The overlay alpha is similarly easy to set.
```python
ic.composite_overlay(segs, imagery=image, palette='tab10', alpha=0.4)
```

Colors can also be specified in the same form as the segmentations, e.g. a dictionary of root id to RGB tuple.
```python
colors = {2282: (0, 1, 1),  # cyan
          4845: (1, 0, 0)}  # red
ic.composite_overlay(segs, imagery=image, colors=colors)
```

### Outline options
While the overlay guides the eye, it can also obscure the imagery.
Because of that, one can also use highly configurable outlines instead of solid overlays.
The default option puts the outlines along the outside of the segmentations, but omits lines where two segmentations touch.
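The idea behind this default can be sketched in plain numpy: the outline of a mask is the ring of pixels just outside it (a one-pixel dilation minus the mask), and any ring pixel that falls inside another segmentation is dropped so shared borders stay clean. This is only an illustration of the concept, not ImageryClient's implementation; the helper names here are hypothetical.

```python
import numpy as np

def dilate(mask):
    """Binary dilation by one pixel (4-connectivity) via shifted copies."""
    out = mask.copy()
    out[1:, :] |= mask[:-1, :]
    out[:-1, :] |= mask[1:, :]
    out[:, 1:] |= mask[:, :-1]
    out[:, :-1] |= mask[:, 1:]
    return out

def outside_outlines(masks):
    """One-pixel outlines just outside each mask, omitting any pixel
    that falls inside another mask (where two segmentations touch)."""
    outlines = {}
    for rid, mask in masks.items():
        ring = dilate(mask) & ~mask
        for other_id, other in masks.items():
            if other_id != rid:
                ring &= ~other  # drop the border shared with a neighbor
        outlines[rid] = ring
    return outlines

# Two touching square segments on a small grid.
a = np.zeros((6, 6), dtype=bool); a[2:4, 1:3] = True
b = np.zeros((6, 6), dtype=bool); b[2:4, 3:5] = True
outlines = outside_outlines({1: a, 2: b})
```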