Welcome to ShortScience.org!
[link]
Problem
=======
Brain MRI segmentation using an adversarial training approach.

Dataset
=======
55 T1-weighted brain MR images (35 adults and 20 elderly subjects) with corresponding label maps.

Contributions
=============
1. The authors add an adversarial loss on top of the traditional segmentation loss.
2. The authors compare two generator (segmentor) models: a fully convolutional network (FCN) and a dilated network.

https://i.imgur.com/orhWhoM.png

Dilated network
---------------
Using dilated convolutions allows a larger receptive field with fewer trainable weights (compared to the FCN option). However, the authors claim the adversarial loss contributes more when the FCN model is used.
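As a rough illustration of the receptive-field argument (my own sketch, not the paper's architecture), the function below compares a stack of plain 3x3 convolutions with a stack whose dilation rates grow exponentially: the parameter count is identical, but the receptive field grows much faster.

```python
# Sketch (illustrative only): receptive field of a stride-1 stack of
# 3x3 convolutions, with and without exponentially growing dilation.

def receptive_field(kernel_size, dilations):
    """Receptive field of a stride-1 conv stack with the given dilations."""
    rf = 1
    for d in dilations:
        rf += (kernel_size - 1) * d
    return rf

# Six plain 3x3 convs vs. six 3x3 convs with dilations 1, 1, 2, 4, 8, 16.
plain = receptive_field(3, [1] * 6)                 # -> 13
dilated = receptive_field(3, [1, 1, 2, 4, 8, 16])   # -> 65
print(plain, dilated)
```

Both stacks have the same number of trainable weights, yet the dilated stack covers a 65-pixel context instead of 13.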
[link]
1. U-Net learns segmentation from images in an end-to-end fashion.
2. The challenges it addresses are:
   * Very few annotated images (approx. 30 per application).
   * Touching objects of the same class.

# How:
* The input image is fed into the network, the data is propagated along all possible paths, and at the end a segmentation map comes out.
* In the U-Net architecture, each blue box corresponds to a multi-channel feature map. The number of channels is denoted on top of the box. The x-y size is given at the lower-left edge of the box. White boxes represent copied feature maps. The arrows denote the different operations.

https://i.imgur.com/Usxmv6r.png

* Each step applies two 3x3 convolutions (unpadded), each followed by a rectified linear unit (ReLU), and a 2x2 max pooling operation with stride 2 for down-sampling. At each down-sampling step the number of feature channels is doubled.
* The contracting path (left side, top to bottom) increases the number of feature channels while reducing spatial resolution; the expansive path (right side, bottom to top) consists of a sequence of up-convolutions and concatenations with the corresponding high-resolution features from the contracting path.
* The network has no fully connected layers and only uses the valid part of each convolution, i.e., the segmentation map only contains the pixels for which the full context is available in the input image.

## Challenges:
1. Overlap-tile strategy for seamless segmentation of arbitrarily large images:
   * To predict the pixels in the border region of the image, the missing context is extrapolated by mirroring the input image.
   * In the figure, segmentation of the yellow area uses input data from the blue area; the raw data outside the image is extrapolated by mirroring.

   https://i.imgur.com/NUbBRUG.png

2. Augmenting training data using deformations:
   * They use excessive data augmentation by applying elastic deformations to the available training images.
   * The network thus learns invariance to such deformations without needing to see these transformations in the annotated image corpus.
   * Deformation is the most common variation in tissue, and realistic deformations can be simulated efficiently.

   https://i.imgur.com/CyC8Hmd.png

3. Segmentation of touching objects of the same class:
   * They propose a weighted loss, in which the separating background labels between touching cells obtain a large weight in the loss function.
   * To enforce separation of touching objects, the segmentation masks used for training insert background between touching objects, and each pixel gets its own loss weight.

   https://i.imgur.com/ds7psDB.png

4. Segmentation of neuronal structures in electron microscopy (EM):
   * An ongoing challenge since ISBI 2012; the dataset contains structures with low contrast, fuzzy membranes, and other cell components.
   * The training data is a set of 30 images (512x512 pixels) from serial-section transmission electron microscopy of the Drosophila first instar larva ventral nerve cord (VNC). Each image comes with a corresponding fully annotated ground-truth segmentation map for cells (white) and membranes (black).
   * An evaluation can be obtained by sending the predicted membrane probability map to the organizers. The evaluation thresholds the map at 10 different levels and computes the warping error, the Rand error, and the pixel error.

### Results:
* The u-net (averaged over 7 rotated versions of the input data) achieves, without any further pre- or post-processing, a warping error of 0.0003529, a Rand error of 0.0382, and a pixel error of 0.0611.

https://i.imgur.com/6BDrByI.png

* In the ISBI cell tracking challenge 2015, one of the datasets contains phase contrast microscopy of cells with strong shape variations, weak outer borders, strong irrelevant inner borders, and cytoplasm with the same structure as the background.
https://i.imgur.com/vDflYEH.png

* The first dataset, PHC-U373, contains Glioblastoma-astrocytoma U373 cells on a polyacrylamide substrate recorded by phase contrast microscopy. It contains 35 partially annotated training images. Here the u-net achieves an average IOU ("intersection over union") of 92%, which is significantly better than the second-best algorithm with 83%.

https://i.imgur.com/of4rAYP.png

* The second dataset, DIC-HeLa, contains HeLa cells on flat glass recorded by differential interference contrast (DIC) microscopy. It contains 20 partially annotated training images. Here the u-net achieves an average IOU of 77.5%, which is significantly better than the second-best algorithm with 46%.

https://i.imgur.com/Y9wY6Lc.png
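For reference, the IOU score used in these comparisons can be sketched as follows (a minimal NumPy version for binary masks; the challenge's official evaluation may differ in details):

```python
# Minimal sketch of intersection over union (IOU) for binary masks.
import numpy as np

def iou(pred, target):
    """IOU of two boolean masks of the same shape."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    union = np.logical_or(pred, target).sum()
    if union == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return np.logical_and(pred, target).sum() / union

a = np.array([[1, 1, 0], [0, 1, 0]])
b = np.array([[1, 0, 0], [0, 1, 1]])
print(iou(a, b))  # intersection 2, union 4 -> 0.5
```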
[link]
The regulation of filopodia plays a crucial role during neuronal development and synaptogenesis. Axonal filopodia, which are known to originate presynaptic specializations, are regulated in response to neurotrophic factors. The structural components of filopodia are actin filaments, whose dynamics and organization are controlled by ensembles of actin-binding proteins. How neurotrophic factors regulate these latter proteins remains, however, poorly defined. Here, using a combination of mouse genetic, biochemical, and cell biological assays, we show that genetic removal of Eps8, an actin-binding and regulatory protein enriched in the growth cones and developing processes of neurons, significantly augments the number and density of vasodilator-stimulated phosphoprotein (VASP)-dependent axonal filopodia. The reintroduction of Eps8 wild type (WT), but not an Eps8 capping-defective mutant, into primary hippocampal neurons restored axonal filopodia to WT levels. We further show that the actin barbed-end capping activity of Eps8 is inhibited by brain-derived neurotrophic factor (BDNF) treatment through MAPK-dependent phosphorylation of Eps8 residues S624 and T628. Additionally, an Eps8 mutant, impaired in the MAPK target sites (S624A/T628A), displays increased association to actin-rich structures, is resistant to BDNF-mediated release from microfilaments, and inhibits BDNF-induced filopodia. The opposite is observed for a phosphomimetic Eps8 (S624E/T628E) mutant. Thus, collectively, our data identify Eps8 as a critical capping protein in the regulation of axonal filopodia and delineate a molecular pathway by which BDNF, through MAPK-dependent phosphorylation of Eps8, stimulates axonal filopodia formation, a process with crucial impacts on neuronal development and synapse formation.

Neurons communicate with each other via specialized cell-cell junctions called synapses.
The proper formation of synapses ("synaptogenesis") is crucial to the development of the nervous system, but the molecular pathways that regulate this process are not fully understood. External cues, such as brain-derived neurotrophic factor (BDNF), trigger synaptogenesis by promoting the formation of axonal filopodia, thin extensions projecting outward from a growing axon. Filopodia are formed by elongation of actin filaments, a process that is regulated by a complex set of actin-binding proteins. Here, we reveal a novel molecular circuit underlying BDNF-stimulated filopodia formation through the regulated inhibition of actin-capping factor activity. We show that the actin-capping protein Eps8 down-regulates axonal filopodia formation in neurons in the absence of neurotrophic factors. In contrast, in the presence of BDNF, the kinase MAPK becomes activated and phosphorylates Eps8, leading to inhibition of its actin-capping function and stimulation of filopodia formation. Our study, therefore, identifies actin-capping factor inhibition as a critical step in axonal filopodia formation and likely in new synapse formation.
[link]
Pathogen perception by the plant innate immune system is of central importance to plant survival and productivity. The Arabidopsis protein RIN4 is a negative regulator of plant immunity. In order to identify additional proteins involved in RIN4-mediated immune signal transduction, we purified components of the RIN4 protein complex. We identified six novel proteins that had not previously been implicated in RIN4 signaling, including the plasma membrane (PM) H+-ATPases AHA1 and/or AHA2. RIN4 interacts with AHA1 and AHA2 both in vitro and in vivo. RIN4 overexpression and knockout lines exhibit differential PM H+-ATPase activity. PM H+-ATPase activation induces stomatal opening, enabling bacteria to gain entry into the plant leaf; inactivation induces stomatal closure thus restricting bacterial invasion. The rin4 knockout line exhibited reduced PM H+-ATPase activity and, importantly, its stomata could not be re-opened by virulent Pseudomonas syringae. We also demonstrate that RIN4 is expressed in guard cells, highlighting the importance of this cell type in innate immunity. These results indicate that the Arabidopsis protein RIN4 functions with the PM H+-ATPase to regulate stomatal apertures, inhibiting the entry of bacterial pathogens into the plant leaf during infection.

Author Summary

Plants are continuously exposed to microorganisms. In order to resist infection, plants rely on their innate immune system to inhibit both pathogen entry and multiplication. We investigated the function of the Arabidopsis protein RIN4, which acts as a negative regulator of plant innate immunity. We biochemically identified six novel RIN4-associated proteins and characterized the association between RIN4 and the plasma membrane H+-ATPase pump. Our results indicate that RIN4 functions in concert with this pump to regulate leaf stomata during the innate immune response, when stomata close to block the entry of bacterial pathogens into the leaf interior.
[link]
Lee et al. propose a variant of adversarial training in which a generator is trained simultaneously to generate adversarial perturbations. This approach follows the idea that it is possible to "learn" how to generate adversarial perturbations (as in [1]). In this case, the authors use the gradient of the classifier with respect to the input as a hint for the generator. Both generator and classifier are then trained in an adversarial setting (analogously to generative adversarial networks); see the paper for details.

[1] Omid Poursaeed, Isay Katsman, Bicheng Gao, Serge Belongie. Generative Adversarial Perturbations. ArXiv, abs/1712.02328, 2017.
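The alternating game can be caricatured in a few lines. The following is a deliberately simplified NumPy sketch with a linear classifier and a one-parameter "generator" (the paper trains neural networks for both); the generator's update rule here is my own crude stand-in for "maximize the classifier's loss".

```python
# Simplified sketch of the alternating game: the classifier's input
# gradient serves as the hint handed to the generator, which produces
# the perturbation the classifier is then trained against.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(64, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)   # toy linearly separable labels

w = np.zeros(2)   # classifier f(x) = sigmoid(w @ x)
v = 0.1           # "generator": a single learnable perturbation scale

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(200):
    # hint: gradient of the cross-entropy loss w.r.t. the input
    p = sigmoid(X @ w)
    grad_x = (p - y)[:, None] * w[None, :]
    delta = v * np.sign(grad_x)              # generator's perturbation
    X_adv = X + delta

    # classifier step: descend the loss on perturbed inputs
    p_adv = sigmoid(X_adv @ w)
    w -= 0.1 * ((p_adv - y)[:, None] * X_adv).mean(axis=0)

    # generator step: grow the scale while the attack is weak, else shrink
    # (crude stand-in for maximizing the classifier's loss)
    loss = -(y * np.log(p_adv + 1e-9)
             + (1 - y) * np.log(1 - p_adv + 1e-9)).mean()
    v = min(0.5, v * (1.02 if loss < 0.5 else 0.98))

accuracy = ((sigmoid(X @ w) > 0.5) == (y > 0.5)).mean()
```

Despite training only on perturbed inputs, the classifier still recovers the correct decision direction on this toy problem, which is the intuition behind using such a generator as a regularizer.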