Edge, contour, or boundary detection in 2D images has been an area of active research, producing a variety of algorithms. However, owing to the wide variety of image types and content, developing automatic segmentation algorithms has proven challenging, while manual segmentation is tedious and time-consuming. Previous approaches have tried to incorporate higher-level constraints, energy functionals (snakes), or global properties (graph-based methods), yet none fully satisfies the criterion of complete automation. To this end, Barrett et al. propose a graph-based boundary extraction algorithm called Interactive Live-Wire, an extension of the original live-wire algorithm presented in Mortensen et al. [1]. The algorithm is built upon a reformulation of segmentation as a graph search: an image $I$ is converted to an undirected graph with edges from each pixel $p$ to all of its 8-connected neighbouring pixels. Each pixel (node) is assigned a local cost according to a function described below. Segmentation then becomes the problem of finding the shortest path from a pixel $p$ (seed) to another pixel $q$ (free goal point), i.e. the path of minimum cumulative cost. The local cost function is defined as $l(p,q) = w_G \cdot f_G(q) + w_Z \cdot f_Z(q) + w_D \cdot f_D(p,q)$, where $w_i$, $i \in \{G, Z, D\}$, are weight coefficients controlling the relative importance of the three terms: the gradient magnitude ($f_G(q)$), the Laplacian zero-crossing feature ($f_Z(q)$), and the gradient direction feature ($f_D(p,q)$). The first term, $f_G(q)$, is a strong measure of edge strength and is heavily weighted. The term $f_Z(q)$ provides a second-order measure of edge localization in the form of zero-crossing information. The third term, $f_D(p,q)$, adds a smoothness constraint to the live-wire boundary by assigning a high cost to rapidly changing gradient directions.
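To make the role of the terms concrete, here is a minimal sketch (not the authors' implementation) of how the two static terms of the local cost could be computed per pixel, assuming Sobel gradients and a Laplacian filter; the function name `local_costs` is my own, and the direction term $f_D$ is left out since it depends on the link $(p,q)$ rather than on $q$ alone:

```python
import numpy as np
from scipy import ndimage

def local_costs(image, w_g=0.43, w_z=0.43):
    """Static per-pixel cost terms for the live-wire graph (sketch).

    Returns f_G (gradient magnitude term), f_Z (Laplacian
    zero-crossing term), and their weighted sum. The pairwise
    direction term f_D(p, q) would be added during the graph search.
    """
    img = image.astype(float)

    # Gradient magnitude, inverted so strong edges get low cost.
    gx = ndimage.sobel(img, axis=1)
    gy = ndimage.sobel(img, axis=0)
    grad = np.hypot(gx, gy)
    f_g = 1.0 - grad / grad.max()

    # Laplacian zero-crossing: following the stated definition
    # literally, f_Z = 0 where I_L is zero or a neighbor has
    # opposite sign, 1 otherwise (wrap-around at borders ignored).
    lap = ndimage.laplace(img)
    signs = np.sign(lap)
    f_z = np.ones_like(img)
    for shift in ((0, 1), (1, 0)):
        rolled = np.roll(signs, shift, axis=(0, 1))
        f_z[signs * rolled < 0] = 0.0
    f_z[lap == 0] = 0.0

    return f_g, f_z, w_g * f_g + w_z * f_z
```

The combined map gives the static part of $l(p,q)$; a search routine would add $w_D \cdot f_D(p,q)$ per link while expanding.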
The algorithm also exhibits some other features, namely boundary freezing, on-the-fly learning, and data-driven cooling. Boundary freezing proves useful when the live-wire segmentation digresses from the desired object boundary during interactive use: the boundary can be "frozen" just before the digression point by specifying another seed point, up to which the boundary is fixed and no longer allowed to change. On-the-fly learning adds robustness by learning the underlying cost distribution of a known "good" boundary section and using it to guide the live wire toward boundaries with a similar distribution. Data-driven path cooling lets the live wire generate new seed points automatically as a function of image data and path properties: pixels on "stable" paths cool down and eventually freeze, producing new seed points. The results report that the average time taken to segment a region with live wire was roughly 4.6 times less than manual human tracing; live wire provided the same accuracy as manual tracing in a fraction of the time, with high reproducibility. However, the method does not provide a way to "snap out" of an *automatically* frozen live-wire segment, on-the-fly training can fail where the object's edges change too quickly, and little implementation-related information is provided, especially for the freezing and training components.
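As an illustration of the on-the-fly learning idea (not the paper's exact procedure), one could sample the gradient-magnitude distribution along the last $t$ pixels of a known good boundary, weight recent pixels more heavily, and invert the resulting histogram into a cost lookup; `trained_gradient_cost` and its parameters are hypothetical:

```python
import numpy as np

def trained_gradient_cost(grad_mag, boundary_pixels, n_bins=64, t=32):
    """Sketch of on-the-fly training: map gradient magnitudes to
    costs using the distribution sampled along a known 'good'
    boundary.

    grad_mag: 2-D array of gradient magnitudes scaled to [0, 1].
    boundary_pixels: (row, col) pixels on the trained boundary,
        most recent last; only the last t contribute, with recent
        pixels weighted more heavily (a sliding training window).
    Returns a lookup table cost[bin]: gradient values seen often
    along the trained boundary get low cost, unseen values high.
    """
    recent = boundary_pixels[-t:]
    # Linearly increasing weights: the most recent pixel counts most.
    weights = np.linspace(1.0, len(recent), num=len(recent))
    hist = np.zeros(n_bins)
    for (r, c), w in zip(recent, weights):
        b = min(int(grad_mag[r, c] * n_bins), n_bins - 1)
        hist[b] += w
    hist /= hist.max()
    return 1.0 - hist  # invert: common trained values -> low cost
```

The live wire would then look up each candidate pixel's gradient bin in this table instead of using the static $f_G$, so it follows the edge currently being traced rather than the strongest edge nearby.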

---

## **Introduction**

This paper presents a novel interactive tool for efficient and reproducible boundary extraction with minimal user input. Even when the user is not very accurate in manually marking the seed points, the algorithm snaps the boundary to the nearest strong object edge. Unlike active contour models, where the user does not know what the final boundary will look like until energy minimization completes (and, if the result is unsatisfactory, must repeat the entire process), this algorithm is interactive, so the user sees the "live-wire boundary snapping" as it happens. Moreover, "boundary cooling" and "on-the-fly training" are two novel contributions of the paper which help reduce user input while maintaining the stability of the boundary.

## **Method**

Modeling the boundary detection problem as a graph search, the objective is to find the optimal path from a start node (pixel) to a goal node (pixel), where the total cost of any path is the sum of its local costs. Let the local cost of a directed edge from pixel ${p}$ to a neighboring pixel ${q}$ be $$ l({p}, {q}) = \underbrace{0.43f_Z({q})}_{\text{Laplacian zero-crossing}} + \underbrace{0.43f_G({q})}_{\text{gradient magnitude}} + \underbrace{0.14f_D({p}, {q})}_{\text{gradient direction}} $$ where the weights have been empirically chosen by the authors. The gradient magnitude term $f_G({q})$ is defined so that higher image gradients correspond to lower costs, providing a "first-order" positioning of the live-wire boundary. Given the gradient magnitude $G$, $$ f_G = 1 - \frac{G}{\max(G)} $$ The Laplacian zero-crossing term is a binary edge feature and acts as a "second-order" fine-tuning term for boundary localization: $$ f_Z({q}) = \left\{ \begin{array}{@{}ll@{}} 0, & \text{if}\ I_L({q}) = 0 \text{ or a neighbor has opposite sign} \\ 1, & \text{otherwise} \\ \end{array}\right. $$ where $I_L$ represents the convolution of the image with a Laplacian edge operator. The gradient direction term penalizes sharp changes in boundary direction and therefore effectively adds a boundary smoothness constraint. Let ${D(p)}$ be the unit vector normal to the image gradient at pixel ${p}$. Then the gradient direction cost can be represented as: $$ f_D({p}, {q}) = \frac{2}{3\pi}\big\{\cos^{-1}[d_p({p}, {q})] + \cos^{-1}[d_q({p}, {q})]\big\} $$ where $$ d_p({p}, {q}) = {D(p)} \cdot {L(p,q)}, \qquad d_q({p}, {q}) = {L(p,q)} \cdot {D(q)}, $$ and $$ {L(p,q)} = \left\{ \begin{array}{@{}ll@{}} {q-p}, & \text{if}\ {D(p)} \cdot ({q-p}) \geq 0 \\ {p-q}, & \text{otherwise} \\ \end{array}\right. $$ where ${L(p,q)}$ represents the normalized bidirectional link between pixels ${p}$ and ${q}$, oriented so that the angle between ${D(p)}$ and the link direction is minimized. Intuitively, this cost is low when the gradient directions at the two pixels are similar to each other and to the link between them. Starting at a user-selected seed point, the boundary search expands in order of minimum cumulative cost, creating a dynamic "wavefront" along the directions of highest interest (which tend to lie along edges). For a path of length $n$, the boundary growing requires $n$ iterations, which is significantly more efficient than previously used approaches. For "on-the-fly" training, a distribution of boundary features (such as the image gradient $f_G$) is sampled from the closest $t$ pixels of the current boundary, weighted by a monotonically decreasing function (either linear or Gaussian) that determines each pixel's contribution; the algorithm then follows the edge of current interest (rather than the strongest), associating lower costs with current edge features and higher costs with features not belonging to the current edge.
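The wavefront expansion from the seed is essentially Dijkstra's algorithm over the 8-connected pixel grid. Below is a minimal sketch under that reading, assuming a precomputed per-pixel cost map (the pairwise direction term is omitted for brevity); the function name `live_wire_paths` is my own:

```python
import heapq
import numpy as np

def live_wire_paths(cost, seed):
    """Dijkstra-style expansion from a seed over an 8-connected grid.

    cost: 2-D array of per-pixel local costs (direction term
    omitted for brevity). Returns (total, prev), where total[p] is
    the minimum cumulative cost from the seed to p, and prev maps
    each pixel to its predecessor, so the optimal boundary to any
    free point can be traced back to the seed.
    """
    rows, cols = cost.shape
    total = np.full((rows, cols), np.inf)
    prev = {}
    total[seed] = 0.0
    heap = [(0.0, seed)]
    while heap:
        d, (r, c) = heapq.heappop(heap)
        if d > total[r, c]:
            continue  # stale heap entry, already relaxed
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                if dr == 0 and dc == 0:
                    continue
                nr, nc = r + dr, c + dc
                if 0 <= nr < rows and 0 <= nc < cols:
                    # Scale diagonal links by sqrt(2) so distance
                    # is measured consistently across the grid.
                    step = cost[nr, nc] * (np.sqrt(2) if dr and dc else 1.0)
                    nd = d + step
                    if nd < total[nr, nc]:
                        total[nr, nc] = nd
                        prev[(nr, nc)] = (r, c)
                        heapq.heappush(heap, (nd, (nr, nc)))
    return total, prev
```

Once `prev` is computed, moving the free point only requires walking the predecessor chain back to the seed, which is what makes the interactive "live" behavior cheap.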
The algorithm also exhibits what the paper calls "data-driven path cooling": as the cursor (and therefore the free point) moves further away from the seed point, progressively longer stable portions of the boundary become fixed, and only the new parts of the boundary need to be updated.

## **Results and Discussion**

Using the algorithm, boundaries are extracted in one-fifth of the time required to trace them manually, with 4.4x higher accuracy and 4.8x higher reproducibility. However, the paper admits that boundary detection can be difficult for objects with "hair-like boundaries". Moreover, this technique cannot be extended to N-dimensional images.