Published: September 20, 2019 (Vol 9, Iss 18) DOI: 10.21769/BioProtoc.3365
Reviewed by: Zinan Zhou, Dr. Kalpa Mehta, Farah Haque
Abstract
Precise spatiotemporal regulation is the foundation for the healthy development and maintenance of living organisms. All cells must correctly execute their function in the right place at the right time. Cellular motion is thus an important dynamic readout of signaling in key disease-relevant molecular pathways. However, despite the rapid advancement of imaging technology, a comprehensive quantitative description of motion imaged under different imaging modalities at all spatiotemporal scales (molecular, cellular and tissue-level) is still lacking. Generally, cells move either 'individually' or 'collectively' as a group with nearby cells. Current computational tools focus specifically on one or the other regime, limiting their general applicability. To address this, we recently developed and reported a new computational framework, Motion Sensing Superpixels (MOSES). Combining the individual advantages of single-cell trackers for analyzing individual cell motion and of particle image velocimetry (PIV) for analyzing collective cell motion, MOSES enables 'mesoscale' analysis of both single-cell and collective motion over arbitrarily long times. At the same time, MOSES readily complements existing single-cell tracking workflows with additional characterization of global motion patterns and analysis of interactions between cells, and also operates directly on PIV-extracted motion fields to yield rich motion trajectories analogous to single-cell tracks, suitable for high-throughput motion phenotyping. This protocol provides a step-by-step practical guide for those interested in applying MOSES to their own datasets. It highlights the salient features of a MOSES analysis and demonstrates the ease-of-use and wide applicability of MOSES to biological imaging through demo analyses, with ready-to-use code snippets, of four datasets from different microscope modalities: phase-contrast, fluorescent, lightsheet and intra-vital microscopy. In addition, we discuss critical points of consideration in the analysis.
Keywords: Biological motion

Background
In general, cells exhibit two types of movement: 'single-cell' or 'individual' migration, in which each cell migrates independently, and 'collective' migration, in which nearby cells migrate as a group in a coordinated fashion. Current computational tools focus on one or the other regime. To date, numerous single-cell tracking methods have been developed that extract the movement trajectories of individual cells to build rich motion feature descriptors suitable for the unbiased analysis of high-throughput screens (Padfield et al., 2011; Meijering et al., 2012; Maška et al., 2014; Schiegg et al., 2015; Nketia et al., 2017). However, the performance of these algorithms requires accurate delineation or segmentation of individual cells and their subsequent temporal association between video frames. This process is unfortunately difficult to generalize across applications, requires significant expertise and scales poorly computationally with increasing cell number. Single-cell tracking thus places an inherent upper limit on the time duration that can be imaged and tracked. In contrast, collective motion analysis tools such as particle image velocimetry (PIV) exploit local correlation between image patch intensity values to derive velocities for all image pixels between all pairs of frames (Szabó et al., 2006; Petitjean et al., 2010; Milde et al., 2012). Advantageously, this avoids errors introduced by image segmentation, is much easier to use for non-experts and is computationally efficient, scaling only with the number of regions of interest (ROIs) the image is divided into. Unfortunately, despite attempts to extract more descriptive motion parameters beyond PIV velocity, such as appearance parameters (Neumann et al., 2006; Zaritsky et al., 2012, 2014, 2015 and 2017), there is still no systematic characterization of PIV-extracted velocity fields that yields 'signatures' as rich as those afforded by single-cell track measurements over arbitrarily long times for clustering motion patterns within and across videos. How the pixel-based or ROI measurements relate to particular individual cells or cell groups within the moving collectives has also been underexplored. As such, PIV-based analyses have had limited success when studying phenomena associated with complex cellular movement, such as boundary formation and chemoattraction in vivo.
Recently we developed Motion Sensing Superpixels (MOSES) (Zhou et al., 2019), a computational framework that marries the rich motion features offered by single-cell tracking trajectories with the ease-of-use and computational efficiency of segmentation-free PIV methods. MOSES achieves this by, first, extending PIV-type methods to generate long-time motion trajectories, analogous to single-cell tracks, for user-defined ROIs called superpixels and, second, constructing a mesh over the extracted motion trajectories to systematically capture the spatiotemporal interactions between neighboring ROIs. Through ready-to-use code snippets, this protocol details how to do this in practice using the previously published MOSES code as a Python library.
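To make this concrete, the two steps correspond to two library calls that recur throughout this protocol. Below is a minimal sketch, assuming vidstack is a (frames, rows, cols) grayscale video array already loaded into memory; the parameter values shown are those used in the Procedure.
from MOSES.Optical_Flow_Tracking.superpixel_track import compute_grayscale_vid_superpixel_tracks
from MOSES.Motion_Analysis.mesh_statistics_tools import construct_MOSES_mesh
# step 1: extract long-time superpixel trajectories from the video.
optical_flow_params = dict(pyr_scale=0.5, levels=3, winsize=15, iterations=3, poly_n=5, poly_sigma=1.2, flags=0)
_, meantracks = compute_grayscale_vid_superpixel_tracks(vidstack, optical_flow_params, 1000)
# step 2: build a mesh over the trajectories to capture neighbor interactions over time.
spixel_size = meantracks[1,0,1] - meantracks[1,0,0]
mesh_strain_time, mesh_neighborlist = construct_MOSES_mesh(meantracks, dist_thresh=1.2, spixel_size=spixel_size)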
Equipment
Software
Procedure
We describe the general protocol to extract superpixel long-time trajectories and motion measurements as described in our original publication (Zhou et al., 2019). The code snippets presented in this section are also provided in the supplementary Python script file ('general_analysis_protocol.py') to aid re-implementation. Adaptation of the basic procedure for the analysis of datasets acquired by different microscope modalities is detailed in the Data Analysis section. The code and protocols should work for both Python 2 and 3 installations; they were originally developed using Python 2.7 under a Linux Mint Cinnamon 17 operating system. Throughout, we assume basic familiarity with command-line prompts and terminals, such as folder navigation using the cd or dir commands, and basic Python usage, including importing Python modules with import, array indexing with NumPy and plotting with Matplotlib (illustrated briefly below).
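For readers less familiar with these basics, the following toy lines illustrate the conventions used by all later snippets; they are purely illustrative and not part of the protocol.
import numpy as np
import pylab as plt
# NumPy array indexing: take elements 2 to 4 of a 1D array.
a = np.arange(10)
print(a[2:5])
# Matplotlib plotting via the pylab interface used throughout this protocol.
plt.figure()
plt.plot(a, a**2)
plt.show()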
import scipy.io as spio
import os
# infile, meantracks_r and meantracks_g come from the preceding track extraction steps.
# derive a save name from the input video filename, e.g., 'video.tif' -> 'meantracks_video.mat'.
fname = os.path.split(infile)[-1]
savetracksmat = ('meantracks_' + fname).replace('.tif', '.mat')
# save the red and green superpixel tracks into one MATLAB-compatible .mat file.
spio.savemat(savetracksmat, {'meantracks_r': meantracks_r, 'meantracks_g': meantracks_g})
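The saved file can be reloaded later, e.g., to resume an analysis without recomputing the tracks; a minimal check of the round trip:
# reload the saved tracks and verify the array shapes.
tracks = spio.loadmat(savetracksmat)
print(tracks['meantracks_r'].shape, tracks['meantracks_g'].shape)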
Data analysis
The demo analyses presented in this section expand upon the general protocol detailed above by walking through the standard steps of motion analysis with MOSES on various biological imaging datasets. The steps below illustrate how to adapt MOSES to handle a breadth of different datasets. The analyses are additionally provided as supplementary Python script files (.py).
Analysis 1: Single Cell Tracking (Phase Contrast Microscopy) (Supplementary files)
We use the data from the Cell Tracking Challenge (http://celltrackingchallenge.net/) to illustrate how to use MOSES for single cell tracking analysis. As cell division is not currently explicitly modeled within the motion extraction in MOSES, this analysis is most effective for videos where cell proliferation is minimal or when extraction of the global ‘motion’ pattern is more important.
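Cell Tracking Challenge sequences are distributed as folders of sequentially numbered 2D .tif frames. One way to assemble such a folder into the (frames, rows, cols) grayscale array expected by the track extraction is sketched below; the dataset name and folder layout are illustrative and should be adapted to the sequence you download.
import os
import glob
import numpy as np
from skimage.io import imread
from MOSES.Optical_Flow_Tracking.superpixel_track import compute_grayscale_vid_superpixel_tracks
# stack the individual frames of one downloaded sequence into a video array.
framefiles = sorted(glob.glob(os.path.join('../Data/CTC/PhC-C2DL-PSC/01', 't*.tif')))
vidstack = np.array([imread(f) for f in framefiles])
# extract superpixel tracks exactly as in the general protocol.
optical_flow_params = dict(pyr_scale=0.5, levels=3, winsize=15, iterations=3, poly_n=5, poly_sigma=1.2, flags=0)
_, meantracks = compute_grayscale_vid_superpixel_tracks(vidstack, optical_flow_params, 1000)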
Advanced analysis:
Analysis 2: Two Cell Epithelial Sheet Analysis (Fluorescence Microscopy) (Supplementary files)
The main analysis steps for the migration patterns of the two-color epithelial sheets presented in our previous work (Zhou et al., 2019) were documented in the main protocol section. Here we present extended procedures targeted more specifically at the analysis of two-color epithelial sheets.
from MOSES.Motion_Analysis.wound_statistics_tools import boundary_superpixel_meantracks_RGB
# locate the boundary between the red and green sheets from their superpixel tracks.
boundary_curves, curves_lines, curve_img, boundary_line = boundary_superpixel_meantracks_RGB(vidstack, meantracks_r, meantracks_g, movement_thresh=0.2, t_av_motion=5, robust=False, lenient=False, debug_visual=False, max_dist=1.5, y_bins=50, y_frac_thresh=0.50)
from MOSES.Motion_Analysis.wound_close_sweepline_area_segmentation import wound_sweep_area_segmentation
# estimate the frame at which the gap between the two sheets closes.
wound_closure_frame = wound_sweep_area_segmentation(vidstack, spixel_size, max_frame=50, n_sweeps=50, n_points_keep=1, n_clusters=2, p_als=0.001, to_plot=True)
print('predicted gap closure frame is: %d' %(wound_closure_frame))

from MOSES.Motion_Analysis.mesh_statistics_tools import compute_max_vccf_cells_before_after_gap
# maximum velocity cross-correlation between the two sheets before and after gap closure.
(max_vccf_before, _), (max_vccf_after, _) = compute_max_vccf_cells_before_after_gap(meantracks_r, meantracks_g, wound_heal_frame=wound_closure_frame, err_frame=5)
print('max velocity cross-correlation before: %.3f' %(max_vccf_before))
print('max velocity cross-correlation after: %.3f' %(max_vccf_after))

from MOSES.Motion_Analysis.mesh_statistics_tools import compute_spatial_correlation_function
spatial_corr_curve, (spatial_corr_pred, a_value, b_value, r_value) = compute_spatial_correlation_function(meantracks_r, wound_closure_frame, wound_heal_err=5, dist_range=np.arange(1,6,1))
# plot the measured curve and the curve fitted to y = a*exp(-x/b) to get the (a,b) parameters.
plt.figure()
plt.title('Fitted Spatial Correlation: a=%.3f, b=%.3f' %(a_value, b_value))
plt.plot(np.arange(1,6,1),spatial_corr_curve, 'ko', label='measured')
plt.plot(np.arange(1,6,1), spatial_corr_pred, 'g-', label='fitted')
plt.xlabel('Distance (Number of Superpixels)')
plt.ylabel('Spatial Correlation')
plt.legend(loc='best')
plt.show()
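Since the distances in the fit are measured in superpixel units, the fitted decay length b can be converted into pixels with the superpixel size computed earlier (a small convenience step, not part of the original snippet):
# convert the fitted correlation (decay) length from superpixel units to pixels.
b_pixels = b_value * spixel_size
print('correlation length: %.1f pixels' %(b_pixels))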
from MOSES.Utility_Functions.file_io import detect_experiments
# detect experiment folders as subfolders under a top-level directory.
rootfolder = '../Data/Motion_Map_Videos'
expt_folders = detect_experiments(rootfolder, exclude=['meantracks','optflow'], level1=False)
print(expt_folders) # print the detected folder names.

import glob
import numpy as np
# detect each .tif file in each folder.
videofiles = [glob.glob(os.path.join(rootfolder, expt_folder, '*.tif')) for expt_folder in expt_folders]
# build matching condition labels; two folders of three videos each should give [[0,0,0],[1,1,1]].
labels = [[i]*len(videofiles[i]) for i in range(len(videofiles))]
# flatten everything into single flat arrays.
videofiles = np.hstack(videofiles)
labels = np.hstack(labels)
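As a quick sanity check before the batch computation below, printing both arrays should show one file path per video and the matching condition labels, e.g., [0 0 0 1 1 1] for two folders of three videos each:
print(videofiles)
print(labels)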
from MOSES.Utility_Functions.file_io import read_multiimg_PIL
from MOSES.Optical_Flow_Tracking.superpixel_track import compute_grayscale_vid_superpixel_tracks
from MOSES.Motion_Analysis.mesh_statistics_tools import construct_MOSES_mesh, compute_MOSES_mesh_strain_curve
# set motion extraction parameters (these keywords follow OpenCV's Farneback optical flow).
n_spixels = 1000
optical_flow_params = dict(pyr_scale=0.5, levels=3, winsize=15, iterations=3, poly_n=5, poly_sigma=1.2, flags=0)
n_videos = len(videofiles)
# initialise a list to collect the computed mesh strain curves.
mesh_strain_all = []
for ii in range(n_videos):
    videofile = videofiles[ii]
    print(ii, videofile)
    vidstack = read_multiimg_PIL(videofile)
    # 1. compute superpixel tracks for the red and green channels.
    _, meantracks_r = compute_grayscale_vid_superpixel_tracks(vidstack[:,:,:,0], optical_flow_params, n_spixels)
    _, meantracks_g = compute_grayscale_vid_superpixel_tracks(vidstack[:,:,:,1], optical_flow_params, n_spixels)
    spixel_size = meantracks_r[1,0,1] - meantracks_r[1,0,0]
    # 2a. compute the MOSES mesh for each channel.
    MOSES_mesh_strain_time_r, MOSES_mesh_neighborlist_r = construct_MOSES_mesh(meantracks_r, dist_thresh=1.2, spixel_size=spixel_size)
    MOSES_mesh_strain_time_g, MOSES_mesh_neighborlist_g = construct_MOSES_mesh(meantracks_g, dist_thresh=1.2, spixel_size=spixel_size)
    # 2b. compute the MOSES mesh strain curve for the video.
    mesh_strain_r = compute_MOSES_mesh_strain_curve(MOSES_mesh_strain_time_r, normalise=False)
    mesh_strain_g = compute_MOSES_mesh_strain_curve(MOSES_mesh_strain_time_g, normalise=False)
    mesh_strain_curve_video = .5*(mesh_strain_r + mesh_strain_g)
    # (optional normalization)
    normalised_mesh_strain_curve_video = mesh_strain_curve_video / np.max(mesh_strain_curve_video)
    # 3. append the computed mesh strain curve.
    mesh_strain_all.append(normalised_mesh_strain_curve_video)

# stack all the mesh strain curves into one array.
mesh_strain_all = np.vstack(mesh_strain_all)
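At this point mesh_strain_all is an (n_videos, n_frames) array holding one normalized mesh strain curve per video, which is the input expected by the dimensionality reduction below; a quick check:
print(mesh_strain_all.shape)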
from sklearn.decomposition import PCA
# initialise the PCA model.
pca_model = PCA(n_components=2, random_state=0)
# 1. learn the PCA using only the 5% FBS condition (label==1).
pca_5_percent_mesh = pca_model.fit_transform(mesh_strain_all[labels==1])
# 2. project the 0% FBS condition (label==0) into the same space.
pca_0_percent_mesh = pca_model.transform(mesh_strain_all[labels==0])
fig, ax = plt.subplots(figsize=(3,3))
ax.plot(pca_5_percent_mesh[:,0], pca_5_percent_mesh[:,1], 'o', ms=10, color='g', label='5% FBS')
ax.plot(pca_0_percent_mesh[:,0], pca_0_percent_mesh[:,1], 'o', ms=10, color='r', label='0% FBS')
ax.set_xlim([-2,2])
ax.set_ylim([-2,2])
plt.legend(loc='best')
plt.show()

from MOSES.Track_Filtering.filter_meantracks_superpixels import filter_red_green_tracks
# filter the red and green tracks jointly given the image dimensions.
meantracks_r_filt, meantracks_g_filt = filter_red_green_tracks(meantracks_r, meantracks_g, img_shape=(n_rows, n_cols), frame2=1)
Analysis 3: Analysis of Drosophila Motion in Development (SiMView Lightsheet Microscopy) (Supplementary files)
To demonstrate the ability of MOSES to jointly analyze global tissue and local cellular motion patterns, we analyze the supplementary videos of Drosophila development published by Amat et al., 2014. In addition, this analysis illustrates i) how to speed up processing of larger videos that contain many temporal frames or large video frames, ii) how to cluster tracks to reveal local spatial patterns and iii) how to work with general video formats such as .avi, .mp4 and .mov.
def read_movie(moviefile, resize=1.):
    from moviepy.editor import VideoFileClip
    from tqdm import tqdm
    from skimage.transform import rescale
    import numpy as np
    vidframes = []
    clip = VideoFileClip(moviefile)
    for frame in tqdm(clip.iter_frames()):
        # downsample each frame in x and y by the given factor; multichannel=True stops the
        # color channel axis being rescaled too (use channel_axis=-1 in newer scikit-image).
        vidframes.append(np.uint8(rescale(frame, 1./resize, preserve_range=True, multichannel=True)))
    vidframes = np.array(vidframes)
    return vidframes
moviefile = '../Data/Videos/nmeth.3036-sv1.avi'
# read in the movie, downsampling frames by a factor of 4.
movie = read_movie(moviefile, resize=4.)
n_frame, n_rows, n_cols, n_channels = movie.shape
print('Size of video: (%d,%d,%d,%d)' %(n_frame, n_rows, n_cols, n_channels))

from MOSES.Optical_Flow_Tracking.superpixel_track import compute_grayscale_vid_superpixel_tracks_FB
# motion extraction parameters.
optical_flow_params = dict(pyr_scale=0.5, levels=5, winsize=21, iterations=5, poly_n=5, poly_sigma=1.2, flags=0)
# number of superpixels
n_spixels = 1000
# extract forward and backward tracks.
optflow, meantracks_F, meantracks_B = compute_grayscale_vid_superpixel_tracks_FB(movie[:,:,:,1], optical_flow_params, n_spixels, dense=True)
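The forward tracks seed superpixels at the first frame and follow the motion forward in time, while the backward tracks do the same on the time-reversed video; the two are used further below to localize motion sinks and sources respectively. Both are arrays indexed as (superpixel, frame, (row, col)), which a quick check confirms:
# both track arrays have shape (n_superpixels, n_frames, 2).
print(meantracks_F.shape, meantracks_B.shape)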
import numpy as np
# flatten each track's (row, col) coordinate time series into one feature vector per superpixel.
X = np.hstack([meantracks_F[:,:,0], meantracks_F[:,:,1]])

from sklearn.decomposition import PCA
# reduce the track feature vectors to 3 principal components before clustering.
pca_model = PCA(n_components=3, whiten=True, random_state=0)
X_pca = pca_model.fit_transform(X)

from sklearn import mixture
n_clusters = 10
gmm = mixture.GaussianMixture(n_components=n_clusters, covariance_type='full', random_state=0)
gmm.fit(X_pca)
# get the labels
track_labels = gmm.predict(X_pca)
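The number of clusters (10 here) is a user-set parameter; one common heuristic, sketched below and not part of the original protocol, is to compare the Bayesian information criterion across candidate values and favor a low-BIC model:
# compare BIC across candidate numbers of clusters; lower values indicate a better trade-off.
for k in range(2, 16):
    candidate = mixture.GaussianMixture(n_components=k, covariance_type='full', random_state=0).fit(X_pca)
    print(k, candidate.bic(X_pca))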
from MOSES.Visualisation_Tools.track_plotting import plot_tracks
import seaborn as sns
import pylab as plt
# generate colours for each unique cluster.
cluster_colors = sns.color_palette('hls', n_clusters)
# overlay cluster tracks and clustered superpixels.
fig, ax = plt.subplots(nrows=1, ncols=3, figsize=(15,15))
ax[0].imshow(movie[0]); ax[0].grid('off'); ax[0].axis('off'); ax[0].set_title('Initial Points')
ax[1].imshow(movie[-1]); ax[1].grid('off'); ax[1].axis('off'); ax[1].set_title('Final Points')
ax[2].imshow(movie[0]); ax[2].grid('off'); ax[2].axis('off'); ax[2].set_title('Clustered Tracks')
for ii, lab in enumerate(np.unique(track_labels)):
    # plot coloured initial points.
    ax[0].plot(meantracks_F[track_labels==lab,0,1],
               meantracks_F[track_labels==lab,0,0], 'o', color=cluster_colors[ii], alpha=1)
    # plot coloured final points.
    ax[1].plot(meantracks_F[track_labels==lab,-1,1],
               meantracks_F[track_labels==lab,-1,0], 'o', color=cluster_colors[ii], alpha=1)
    # plot coloured tracks.
    plot_tracks(meantracks_F[track_labels==lab], ax[2], color=cluster_colors[ii], lw=1.0, alpha=0.7)
plt.show()

from MOSES.Motion_Analysis.mesh_statistics_tools import compute_motion_saliency_map
from skimage.exposure import equalize_hist
from skimage.filters import gaussian
# specify a large threshold to capture long distances.
dist_thresh = 20
spixel_size = meantracks_F[1,0,1] - meantracks_F[1,0,0]
motion_saliency_F, motion_saliency_spatial_time_F = compute_motion_saliency_map(meantracks_F, dist_thresh=dist_thresh, shape=movie.shape[1:-1], max_frame=None, filt=1, filt_size=spixel_size)
motion_saliency_B, motion_saliency_spatial_time_B = compute_motion_saliency_map(meantracks_B, dist_thresh=dist_thresh, shape=movie.shape[1:-1], max_frame=None, filt=1, filt_size=spixel_size)
# smooth the discrete-looking motion saliency maps.
motion_saliency_F_smooth = gaussian(motion_saliency_F, spixel_size/2.)
motion_saliency_B_smooth = gaussian(motion_saliency_B, spixel_size/2.)
# visualise the computed results.
fig, ax = plt.subplots(nrows=1, ncols=2, figsize=(15,15))
ax[0].imshow(movie[0], cmap='gray'); ax[0].grid('off'); ax[0].axis('off')
ax[1].imshow(movie[0], cmap='gray'); ax[1].grid('off'); ax[1].axis('off')
ax[0].set_title('Motion Sinks'); ax[1].set_title('Motion Sources')
ax[0].imshow(equalize_hist(motion_saliency_F_smooth), cmap='coolwarm', alpha=0.5, vmin=0, vmax=1)
ax[1].imshow(equalize_hist(motion_saliency_B_smooth), cmap='coolwarm', alpha=0.5, vmin=0, vmax=1)
plt.show()

Analysis 4: Intra-vital Imaging Analysis (Multiphoton Microscopy) (Supplementary files)
Intra-vital imaging has enabled direct in-vivo imaging of the movement of fluorescently tagged cells in their native microenvironment. A common application is the study of immune cell behavior with respect to the local vasculature and extracellular matrix in tissue (Li et al., 2012). Typically it is desired to extract the global motion patterns of individual cell populations despite their stochastic individual motion. Automatic single-cell tracking approaches are frequently unreliable here because of the lower imaging magnification and poorer spatial resolution of the acquired images compared to in-vitro imaging; indeed, even humans find it difficult to manually distinguish between individually migrating fluorescent immune cells. Here we demonstrate how MOSES can nevertheless extract the global motion pattern of individual neutrophils moving towards a laser-induced wound site for quantitative analysis.
moviefile = '../Data/Videos/nprot.2011.438-S5.mov'
movie = read_movie(moviefile, resize=1.)

from MOSES.Optical_Flow_Tracking.superpixel_track import compute_grayscale_vid_superpixel_tracks_FB
# motion extraction parameters.
optical_flow_params = dict(pyr_scale=0.5, levels=5, winsize=21, iterations=5, poly_n=5, poly_sigma=1.2, flags=0)
# number of superpixels
n_spixels = 1000
# extract superpixel tracks for the 2nd or 'green' GFP+ cell channel.
optflow, meantracks_F, meantracks_B = compute_grayscale_vid_superpixel_tracks_FB(movie[:,:,:,1], optical_flow_params, n_spixels, dense=True)

from MOSES.Visualisation_Tools.motion_field_visualisation import view_ang_flow
import numpy as np
# average the per-frame optical flow over time and render its direction as color.
mean_opt_flow = np.mean(optflow, axis=0)
mean_colored_flow = view_ang_flow(mean_opt_flow)
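The resulting color-coded flow image can be displayed directly to get a first qualitative impression of the dominant motion direction (a minimal sketch):
import pylab as plt
fig, ax = plt.subplots()
ax.imshow(mean_colored_flow)
ax.axis('off')
plt.show()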
from MOSES.Motion_Analysis.tracks_statistics_tools import average_displacement_tracks
# average displacement of the forward tracks.
mean_disps_tracks = average_displacement_tracks(meantracks_F)
# parse out the mean (u,v) velocities of tracks.
U_tra = mean_disps_tracks[:,1]
# negated to convert image (row) coordinates to the (x,y) convention used by matplotlib's quiver plot.
V_tra = -mean_disps_tracks[:,0]
Mag_tra = np.hypot(U_tra, V_tra)
# compute the mean (x,y) position of tracks.
X_mean = np.mean(meantracks_F[:,:,1], axis=-1)
Y_mean = np.mean(meantracks_F[:,:,0], axis=-1)
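The mean positions and (u,v) velocities computed above can now be overlaid on the first frame using the matplotlib quiver plot mentioned in the comment; a minimal sketch, with arrow color encoding speed:
import pylab as plt
# overlay mean track velocities on the first frame; arrow color encodes speed.
fig, ax = plt.subplots(figsize=(8,8))
ax.imshow(movie[0])
ax.quiver(X_mean, Y_mean, U_tra, V_tra, Mag_tra, cmap='coolwarm')
ax.axis('off')
plt.show()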
from MOSES.Motion_Analysis.mesh_statistics_tools import compute_motion_saliency_map
from skimage.exposure import equalize_hist
from skimage.filters import gaussian
# specify a large threshold to capture long distances.
dist_thresh = 5
spixel_size = meantracks_F[1,0,1] - meantracks_F[1,0,0]
# compute the forward and backward motion saliency maps.
motion_saliency_F, motion_saliency_spatial_time_F = compute_motion_saliency_map(meantracks_F, dist_thresh=dist_thresh, shape=movie.shape[1:-1], max_frame=None, filt=1, filt_size=spixel_size)
motion_saliency_B, motion_saliency_spatial_time_B = compute_motion_saliency_map(meantracks_B, dist_thresh=dist_thresh, shape=movie.shape[1:-1], max_frame=None, filt=1, filt_size=spixel_size)
# smooth the discrete-looking motion saliency maps.
motion_saliency_F_smooth = gaussian(motion_saliency_F, spixel_size/2.)
motion_saliency_B_smooth = gaussian(motion_saliency_B, spixel_size/2.)

Notes
We highlight key common considerations that arise when conducting a MOSES analysis: extracting superpixel tracks, computing the motion saliency map and constructing motion maps. In general, MOSES is reproducible and produces consistent, interpretable results for all types of imaging as long as its primary modeling assumptions are satisfied.
Acknowledgments
This work is an accompaniment to our original methods paper published in eLife (Zhou et al., 2019) and extends the original work to further showcase the practical use of MOSES on a wide variety of imaging datasets. We thank Prof. Hiroshi Nakagawa for the generous donation of EPC2 cells originally used in the study of Harada et al., 2013. We thank Mark Shipman for technical assistance with timelapse microscopy in the collection of the two cell population dataset. This work was mainly funded by the Ludwig Institute for Cancer Research (LICR) with additional support from a CRUK grant to XL (C9720/A18513). FYZ is funded through the EPSRC Life Sciences Interface Doctoral Training Centre (EP/F500394/1) and LICR; CRP and XL are funded by LICR; RPO is supported by LICR, the Oxford Health Services Research Committee and the Oxford University Clinical Academic Graduate School, supported by the National Institute for Health Research (NIHR) Biomedical Research Centre based at the Oxford University Hospitals Trust, Oxford; MJW is supported by CRUK (C5255/A19498, through an Oxford Cancer Research Centre Clinical Research Training Fellowship); and JR is funded by LICR and the EPSRC SeeBiByte Programme Grant (EP/M013774/1). The views expressed herein are those of the authors and not necessarily those of the NHS, the NIHR or the Department of Health.
Competing interests
A patent is pending for MOSES (UK application no. GB1716893.1, International application no. PCT/GB2018/052935). MOSES is available open-source and free for all academic and non-profit users under a Ludwig academic and non-profit license.
References
Article Information

Copyright
Zhou et al. This article is distributed under the terms of the Creative Commons Attribution License (CC BY 4.0).

How to cite
Readers should cite both the Bio-protocol article and the original research article where this protocol was used.

Categories
Cell Biology > Cell imaging > Live-cell imaging
Cell Biology > Tissue analysis > Tissue imaging