S. GAO ET AL.
hypercube search space for the algorithm. Thereafter, the GPP is
transformed into an optimization problem, in which the objective of
the implemented TGSA is to solve the combinatorial problem
of edge placement. It is important to highlight that TGSA
is able to produce several optimal planar subgraphs, comparable
to those obtained by other approaches. This is the crucial feature
of population-based algorithms in general, and of the algorithm,
TGSA, used in this research work in particular. Finally, in order
to show the accuracy of the proposed TGSA, a fair comparison is
made with six other meta-heuristics. Experimental results verify
the effectiveness of the proposed algorithm.
This paper is structured as follows. Section 2 presents a brief
literature review of the algorithms used to solve the GPP. The
review also indicates that gravitational-search-based
algorithms have never been used for such a study. Section 3 gives
a general description of the GSA. Section 4 illustrates the problem
representation, the designs, and the application of TGSA on the
GPP. Simulation results are summarized and discussed in Section
5. Finally, some general remarks are given to conclude this paper.
2. Related Works
For finding the MPS of a given graph, numerous heuristics have
been proposed. A widely used standard process is to start with a
spanning tree of the input graph G = (V , E ), and to iteratively
try to add the remaining edges one by one. In every step, a
planarity testing algorithm is called for the obtained graph. If the
addition of an edge would lead to a nonplanar graph, then the
edge is disregarded; otherwise, the edge is added permanently
to the planar graph obtained so far. Within this algorithmic
framework, Jayakumar et al. [4] proposed a near-maximal planarity
testing algorithm with computational complexity O(|E|²) based
on the PQ-tree technique [3]. Kant [5] presented a corrected
and more generalized version of Jayakumar’s algorithm. Cai
et al. [16] and Battista and Tamassia [17] proposed efficient
algorithms with the same complexity bound of O(|V| log |E|),
which were derived from the Hopcroft–Tarjan planarity testing
algorithm [18] and the incremental planarity testing algorithm [17],
respectively. Furthermore, linear-time O(|V| + |E|) algorithms
can be found in Refs [7],[19]. These heuristics were developed
primarily to reduce computational time.
In addition, Poranen [20] proposed a hybrid simulated annealing
method based on the planarity test [21] for MPS, claiming that the
hybrid algorithm outperformed earlier heuristics in terms of
solution quality and computational time. However, its deficiency
was also evident because it performed the planarity test numerous
times during its execution. If an implementation for the planarity
test was not available, the algorithm could not be used to solve
the problem.
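The incremental framework described above (start from a spanning tree, add the remaining edges one by one, keep an edge only if the graph stays planar) can be sketched in a few lines. The sketch below is illustrative rather than any of the cited algorithms: it uses networkx's `check_planarity` (a linear-time planarity test) as the testing subroutine, and the function name is an assumption.

```python
import networkx as nx

def near_maximal_planar_subgraph(G):
    """Greedy heuristic sketch: start from a spanning tree of G and
    try to add the remaining edges one by one, keeping an edge only
    if the subgraph remains planar."""
    tree = nx.minimum_spanning_tree(G)   # any spanning tree/forest works
    planar = nx.Graph()
    planar.add_nodes_from(G.nodes())
    planar.add_edges_from(tree.edges())
    for u, v in G.edges():
        if planar.has_edge(u, v):
            continue
        planar.add_edge(u, v)
        is_planar, _ = nx.check_planarity(planar)
        if not is_planar:
            planar.remove_edge(u, v)     # this edge would break planarity
    return planar
```

Note that the result is only *near*-maximal: the outcome depends on the order in which edges are tried, which is exactly the weakness that the more refined heuristics cited above address.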
In general, the algorithms proposed for MPS cannot be directly
adopted for the GPP since finding MPS is just one of its objectives,
but their basic functions can motivate (or be incorporated into) the
design of the GPP solving algorithms. In the literature, some meta-
heuristics were proposed to deal with the MPS finding task and
plane embedding task simultaneously. Among these algorithms,
neural network learning methods, two-phase graph planarization
algorithms (TGPAs), genetic algorithms, and artificial immune
algorithms exhibited promising performance.
Using the neural network techniques, Takefuji and Lee pre-
sented a parallel planarization algorithm (TLNN) for generating
a near-maximal planar subgraph within O(1) time [8],[22], and
claimed superior performance to previously published algorithms.
An improved Hopfield network learning algorithm (WNN) with
a gradient ascent technique for alleviating the inherent drawbacks
(typically the local optima problem) of Takefuji and Lee’s algo-
rithm was proposed by Wang et al. [9]. Zhang and Qin [10]
combined the Hopfield network with the simulated annealing strat-
egy (ZNN), aiming to increase the algorithm's flexibility and
further achieve better performance.
In addition, Goldschmidt and Takvorian [23] presented a TGPA,
based on which Resende and Ribeiro [24] proposed a greedy
randomized adaptive search procedure (GRASP) by incorporating
more randomness into it. GRASP had two phases: solution
construction phase and solution improvement phase. GRASP
mainly focused on the randomized generation of high-quality
starting solutions through refined construction phases and
sophisticated management of the solutions, while the subsequent
solution improvement phase was usually performed by a rather
simple local search. The first phase in GRASP was an enhanced
version of the TGPA that randomly chose one of the best
candidates in the restricted candidate list. The second phase in
GRASP aimed at minimizing the number of edges that needed
to be removed to eliminate all edge crossings with respect to the
first-phase sequence, which was realized by implementing a simple
local search within a slightly restricted neighborhood. Simulation
results indicated that GRASP could obtain results better than, or
competitive with, those of TGPA and branch-and-cut algorithms
[25,26] for graphs having up to 300 vertices or 1500 edges.
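The two-phase structure described above can be summarized by a generic GRASP skeleton. This is a sketch with illustrative names, not the implementation of Ref. [24]; `alpha` stands for the greediness parameter of the restricted candidate list.

```python
import random

def grasp(construct, local_search, cost, iterations=100, alpha=0.3):
    """Generic GRASP skeleton: each iteration builds a randomized
    greedy solution (the construction phase), then improves it with
    a simple local search; the best solution found is returned."""
    best, best_cost = None, float("inf")
    for _ in range(iterations):
        s = construct(alpha)     # phase 1: randomized greedy construction
        s = local_search(s)      # phase 2: neighborhood improvement
        c = cost(s)
        if c < best_cost:
            best, best_cost = s, c
    return best
```

In the GPP instantiation, `construct` would produce a vertex sequence via the randomized TGPA first phase, and `local_search` would try to reduce the number of edges removed to eliminate crossings with respect to that sequence.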
In addition, the applicability of genetic algorithms to solving the
GPP was verified by Comellas in Ref. [11]. An effective
genetic algorithm (EGA) that performs crossover and mutation
conditionally rather than probabilistically can be found
in Ref. [12]. Most recently, the authors [13] proposed a
multilayered artificial immune system (MAIS) for the GPP, which
took advantage of search-space minimization and feature-extraction
techniques, exhibiting excellent performance on all
tested benchmark problems. It is to be noted that this section does
not aim to provide a comprehensive literature review, but intends
to give some insight into the ideas of the proposed algorithms,
and further verify that gravitational search-based algorithms have
never been studied for solving the GPP.
3. Gravitational Search Algorithms
Meta-heuristic algorithms mimic biological or physical processes.
One of the newest meta-heuristics inspired by physical laws is
the GSA. It was originally presented by Rashedi et al. [14] for
optimizing continuous nonlinear
functions, and then widely applied to many engineering problems
[27–29]. The algorithm is based on Newtonian gravity: every
particle in the universe attracts every other particle with a force
that is directly proportional to the product of their masses and
inversely proportional to the square of the distance between them.
Figure 1 depicts the concept of the gravity law used in GSA.
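As a concrete illustration (not taken from the original paper), the pairwise Newtonian attraction described above can be computed as follows; the function name and parameters are illustrative:

```python
import numpy as np

def newtonian_forces(positions, masses, G=1.0, eps=1e-12):
    """Net Newtonian force on each agent: agent i is attracted to
    agent j with magnitude G * M_i * M_j / R_ij**2, directed from
    i toward j, where R_ij is the Euclidean distance."""
    n, dim = positions.shape
    forces = np.zeros((n, dim))
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            diff = positions[j] - positions[i]
            r = np.linalg.norm(diff)
            # diff / r is the unit vector from i to j; dividing by
            # r**2 gives the inverse-square law (eps avoids r = 0)
            forces[i] += G * masses[i] * masses[j] * diff / (r**3 + eps)
    return forces
```

It is worth noting that GSA as formulated by Rashedi et al. [14] uses the distance R rather than R² in the denominator, a modification reported to work better in practice; the sketch above follows the physical law as stated in the text.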
In GSA, agents are considered as solutions and their perfor-
mance is measured by their masses. Each object in GSA has two
Fig. 1. Gravity law used in GSA: every agent accelerates in the
direction of the resultant force acting on it from the other agents
40 IEEJ Trans 9: 39–48 (2014)