The Hypercube Wiring Problem




This Page...

This page summarizes the thoughts, investigations and results compiled by researchers in the Department of Computer Science at the University of Saskatchewan pertaining to the hypercube wiring problem. The page mostly deals with Constraint Logic Programming implementations to solve the problem and theoretical analysis of the problem. The page is composed by Alpesh Patel. The researchers whose ideas are presented here are

Assistance with this research was also provided by Andrew Hughes, Xiaoshu Chen, and Mark Keil.


Problem Description

What is a Hypercube?

A D-dimensional hypercube (D-cube for short) is a network of connected points (a graph) with the following properties:

  1. The number of nodes, N, is 2^D.
  2. Each node is connected to D other nodes.
  3. There is a path from any node to any other node that has length less than or equal to D.

The most obvious example is the ordinary 3-D cube with its 8 points and 12 connections. The D-cube can be constructed recursively as follows:

  1. The 1-cube is simply 2 points connected by an arc. As a convention, we label the points 0 and 1.

  2. The D-cube is made starting with two labeled (D-1)-cubes. Precede all labels from one of the (D-1)-cubes with 0, and precede all labels from the other (D-1)-cube with 1. Connect each point in one (D-1)-cube with the one point in the other (D-1)-cube that has the same label except for the first bit. For example, the following figure shows how the 3-cube can be constructed from two 2-cubes.

Note that each node has been given a unique D-digit binary label. The binary naming is natural for the D-cube, because the D-cube contains 2^D nodes, and there are exactly 2^D possible D-bit names. The points in the D-cube thus use up all possible D-bit names. Furthermore, while this may not be obvious from the recursive definition above, the labeling is such that two nodes are connected if and only if their binary labels differ at exactly 1 bit. For example, for the 3-dimensional hypercube, the component labelled 4 (100) is connected to components labelled 5 (101), 6 (110) and 0 (000).
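The recursive construction and the one-bit-difference property can be checked with a short script (sketched in Python for illustration; the function name build_dcube is ours, and labels are integers whose binary form is the D-bit name):

```python
# Build the D-cube by the recursive doubling construction, then verify that
# two nodes are adjacent iff their binary labels differ in exactly one bit.

def build_dcube(D):
    """Edge set of the D-cube, built by the recursive doubling construction."""
    if D == 1:
        return {frozenset((0, 1))}          # the 1-cube: points 0 and 1
    edges = set()
    prefix_bit = 1 << (D - 1)
    for e in build_dcube(D - 1):
        a, b = tuple(e)
        edges.add(frozenset((a, b)))                            # labels prefixed with 0
        edges.add(frozenset((prefix_bit | a, prefix_bit | b)))  # labels prefixed with 1
    for v in range(prefix_bit):             # connect each point with its twin
        edges.add(frozenset((v, v | prefix_bit)))
    return edges

edges = build_dcube(3)
assert len(edges) == 12                     # the 3-cube's 12 connections
for e in edges:
    a, b = tuple(e)
    assert bin(a ^ b).count("1") == 1       # adjacent iff labels differ in one bit
neighbours = {x for e in edges if 4 in e for x in e} - {4}
assert neighbours == {5, 6, 0}              # node 100 -> 101, 110, 000
```

The final assertion reproduces the example above: component 4 (100) is connected to components 5 (101), 6 (110) and 0 (000).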

Lastly, it is the points and connections that define the D-cube, not the shapes formed by the points or any other such geometrical property. There are many ways to draw eight points connected as they are in the 3-cube in Figure 1, and these are all valid 3-cubes. The 3-cube does not require right angles and square faces as in the typical geometrical cube. The D-cube is just a network; it is a graph.

The Hypercube as Hardware

The hypercube is explained above as a collection of points with various interconnections. It can also be considered a hardware device. Simply consider the points or nodes to be network components (henceforth called components or nodes) and consider the connections to be wires connecting the components.

In one method of building the hypercube hardware device, the nodes are placed in a line, with channels or tracks parallel to the line. Wire is run down from the nodes to the channels, and then connections are made on the channels. Each channel is wide enough for one connecting wire, so connections cannot overlap on the same channel. Figure 2 shows a three channel layout for the 2-cube.

Overlapping Connections

When it is said that connections cannot "overlap" on the same channel, the term "overlap" is slightly ambiguous. Consider, for example, Figure 2 above. It is clear that the connection from 00 to 01 would overlap the connection from 00 to 10 if the two connections were placed in the same channel. What about the connections from 00 to 10 and from 10 to 11? The ends of the wires (i.e. the connection endpoints) "overlap" at node position 2. Does that count as "overlap"?

It turns out that connections that "overlap" at their endpoints can only be placed on the same channel if a more expensive silicon technology is used. We call the case where such connections can be placed on the same channel the "abutting allowed" case. The case where abutting is not allowed we call "gap required". (Where the case being studied is not noted explicitly, assume the "gap required" case.) When we noted these facts, it became part of our study to determine if allowing abutting would reduce the number of channels sufficiently to justify the cost of the improved silicon technology required.

Statement of the Problem

The preceding sections describe the hypercube as a graph and as a hardware device. With this description understood, the actual wiring problem can be stated as follows: The problem is to order the 2^D components on a line and to then place the connections in channels parallel to the line so as to minimize the number of channels required to wire the connections of the D-cube.

To motivate the investigation, Figure 3 shows how the number of channels required changes when connections are placed in channels differently (3-a vs. 3-b) or when the nodes are ordered differently (3-b vs. 3-c).


SubProblem 1: Placing the Connections Given Fixed Node Order

Before trying to solve the entire problem of ordering the nodes and then allocating channels, we began by trying to find an optimal channel allocation given a fixed node order. For this purpose, two node orders were used -- numeric and grey-code.

Optimizing Solutions: Using ECLiPSe to Prove Optimality

The Basic Approach: Defining the Variables and Setting Up the Constraints

The approach that served as the basis for many good implementations looks at each node position separately to determine which pairs of connections overlap.

First, every connection is associated with an integer domain variable representing the channel it is placed in. Before the connection is placed in a channel, the domain of the variable will include all channels the connection can be placed in (i.e. each integer in the domain represents a different channel). Once the connection is placed in a channel, the domain will only contain that channel's integer. A connection is placed in a channel by instantiating the connection's domain variable to the number of the channel. Two connections cannot be on the same channel (i.e. have domain variables instantiated to the same channel number) if they would overlap on the channel.

For each node position, a list of the domain variables for all connections crossing or ending at that node is collected. Any pair of these connections would overlap if placed in the same channel, so these connections must each be on separate channels (i.e. their domain variables must be instantiated to values that are pairwise distinct). The variables are constrained to be pairwise distinct by calling the ECLiPSe built-in alldistinct/1 with this list of domain variables as the argument.
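The per-position grouping can be sketched outside of ECLiPSe as well (Python used here for illustration; in the actual programs each group is a list of domain variables passed to alldistinct/1):

```python
# For each node position, collect the connections (as (left, right) position
# intervals) that cross or end there. Each such group must occupy pairwise
# distinct channels -- the role played by alldistinct/1 in the CLP programs.

def overlap_groups(connections, n_positions):
    groups = []
    for p in range(n_positions):
        groups.append([c for c in connections if c[0] <= p <= c[1]])
    return groups

# 2-cube with nodes in numeric order 00,01,10,11 (positions 0..3)
conns = [(0, 1), (2, 3), (0, 2), (1, 3)]
groups = overlap_groups(conns, 4)
# position 1 is crossed or touched by three connections, so (as in the
# three-channel layout of Figure 2) at least three channels are needed
assert max(len(g) for g in groups) == 3
```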

Optimizing Strategy Characteristics

Once the constraints on variables are established, instantiations of the variables must be tried systematically to find valid combinations of values (i.e. allocation of connections to channels), and moreover, to find the valid combination with the lowest cost (i.e. requiring the fewest channels). The approach used to find the optimal combination is called branch-and-bound. The basic plan is to begin trying all combinations. Through constraints, however, the program will frequently detect that a group of combinations cannot include the optimal valid combination. This detection of a bad group of combinations can happen without each individual combination being tested separately. These combinations can be eliminated (i.e. pruned). A good solution must perform substantial pruning to reduce the exponential number of combinations to a number that can be checked in reasonable time. At least two parameters characterize the behaviour of such an optimizing strategy. These parameters are:

  1. instantiation order - the order in which connections are placed in tracks (i.e. the order in which we consider connection variables)
  2. search for better solutions - once a solution is found, the method in which better solutions (i.e. valid solutions with lower cost) are searched for

The following describes the options used for these two parameters. Most options are facilitated directly by ECLiPSe predicates. These predicates are identified below. The remaining options were implemented explicitly.

  1. Instantiation Order of Variables

  2. Search for Better Solutions

Branch-and-Bound: Proving Optimality Efficiently

Branch-and-bound is an established method to determine which combination of a large set of possible combinations is "best", where "best" means "has the lowest cost as defined by the program". To prove any combination to be optimal, every other possible combination must be determined to be no better than the optimal one. At first, one might think that to prove optimality is to determine the cost for every possible combination. This is not quite true. By using the branch-and-bound optimizing strategies in ECLiPSe, only a limited number of combinations need be tested to prove that a particular one is optimal.

We can consider the possible combinations as a tree. First, order the connections in any convenient way. From the root of the tree, there is one child for each possible allocation of the first connection. Each child has a child of its own (i.e. a grand-child of the root) for each possible allocation of the second connection. Figure 4 shows a portion of the tree for the 2-cube.

Branch-and-bound is a process for traversing this tree of combinations systematically, ruling out portions of the tree (i.e. PRUNING the tree) when possible, and determining the optimal combination (i.e. the combination with the lowest cost as defined by the problem). To do this, a traversal of the tree is performed until a valid (but not necessarily optimal) solution is found. A constraint is then added that all subsequent solutions must be better than this solution. In other words, a bound is placed on the cost of any subsequent solutions. This bound is what allows pruning to take place.

Suppose we reach a node somewhere in the middle of the tree. There may be hundreds or even thousands of descendant combinations of that node which are valid combinations. If, however, it is determined through constraints that all descendant combinations will have a cost at least as large as the best combination found thus far, then the optimal solution cannot be a descendant of that node. There is no need to traverse the plethora of nodes in this part of the tree, and so all these nodes are pruned (i.e. ignored in the traversal). Any area of the tree which we prove through constraints cannot produce solutions better than the best solution found thus far is ignored. Through pruning, a great portion of the tree can be ignored, which makes it possible to account for all combinations (i.e. prove optimality) in reasonable time. In short, this technique involves BRANCHING through the tree and BOUNDING solutions to be better than the best solution found thus far. Note that for substantial pruning to take place, the implementation must:

  1. find a good (i.e. near optimal) solution quickly, and
  2. define constraints so that the bound on subsequent solutions propagates information well (i.e. allows the program to detect quickly that large groups of combinations cannot produce solutions better than the best found so far)
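The branching and bounding described above can be sketched as a small program (an illustrative Python sketch, not the ECLiPSe implementation; connections are (left, right) position intervals and "gap required" overlap is assumed):

```python
# Minimal branch-and-bound channel allocator: branch on the channel chosen
# for each connection in turn; bound by abandoning any partial assignment
# that already uses at least as many channels as the best solution found.

def conflicts(a, b):
    """Gap-required overlap: intervals conflict even if they only touch."""
    return not (a[1] < b[0] or b[1] < a[0])

def branch_and_bound(conns):
    best = [len(conns)]                          # trivial bound: one channel each
    def place(i, assignment):
        used = max(assignment, default=-1) + 1
        if used >= best[0]:                      # BOUND: cannot beat best so far
            return
        if i == len(conns):
            best[0] = used                       # complete, improved solution
            return
        for ch in range(min(used + 1, best[0])):         # BRANCH on channel
            if all(assignment[j] != ch or not conflicts(conns[i], conns[j])
                   for j in range(i)):
                place(i + 1, assignment + [ch])
    place(0, [])
    return best[0]

conns = [(0, 1), (2, 3), (0, 2), (1, 3)]         # 2-cube, numeric node order
assert branch_and_bound(conns) == 3
```

Allowing each connection at most one fresh channel (used + 1) is a simple symmetry-breaking device; the real programs achieve the same effect differently (see Fixing Long Connections below).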

Instantiation Predicates

The predicate used to instantiate variables (i.e. place a connection in a track) was originally indomain/1, through which the program could systematically check all possible track allocations. Then, mindomain/2 was tested, which only instantiates a variable to the lowest value in its domain, and thus does not check all possible channel allocations. At the time it was not known what effect mindomain/2 had on the correctness of the implementation (see Greedy Solutions ). For the cases tested at the time, mindomain/2 gave the same answers as indomain/1, only faster.

Early Results -- Run Times with the Basic Approach

Table 1 displays the time taken to find an optimal solution to the hypercube wiring problem in three dimensions (program: wires_04.pl) and four dimensions (program: wires_05.pl). Times for six different optimizing strategies are shown. These six strategies employ the six combinations of the optimizing strategy parameters listed above (instantiation order and search for better solutions).

The table below shows run times with both indomain/1 and mindomain/2, except for the 4-D hypercube. Predicates using indomain/1 with the 4-D hypercube take at least 48 hours to give a solution!

Conclusions:

  1. It is inefficient for this problem to let ECLiPSe determine which variables are most constrained. This is made clear from the poor performance of optimizers 1b and 2b, where constraint count is used along with domain size to determine instantiation order.

  2. Connection length is generally a good heuristic for instantiation order. This is likely because long connections are more likely to have reduced domains as connections are added, and also because long connections will be subject to more overlap constraints. In other words, connection length efficiently and approximately implements both domain size and constraint count heuristics without explicitly calculating either.

  3. Backtracking for more solutions appears faster than restarting a search for d=3, despite the fact that the backtracking optimizing predicate (minimize) attempts more solutions than the other optimizing predicate (min_max).

Fixing Long Connections

The early implementations could not solve the D=4 case in reasonable time, so modifications were required to improve performance. In looking for possible improvements, we realized that the first implementation tried multiple permutations of essentially the same solution. For example, if we take an optimal solution, and swap all connections in track 1 with all connections in track 2, we still have essentially the same solution. It is wasteful to try both of these permutations. The main change made in the next version was to prevent the checking of multiple permutations of the same solution.

To reduce the number of redundant solutions found, note that each connection that spans more than half a full track will overlap every other connection that spans more than half a full track. Suppose there are four such connections. There are 24 (4!) ways to place these connections in the first four tracks, but to try more than one of these placements is redundant, so all such long connections are identified and fixed to a track (i.e. so that no backtracking will be done on their track assignment). The performance of these improved solutions is given below.
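The long-connection fix can be sketched as follows (illustrative Python, assuming "spans more than half a full track" means an interval longer than half the distance between the first and last node positions):

```python
# Identify connections spanning more than half of the full track; any two
# such connections must overlap, so they can be pinned to channels 0,1,2,...
# up front and excluded from backtracking.

def long_connections(conns, n_positions):
    half = (n_positions - 1) / 2             # half the full track length
    return [c for c in conns if c[1] - c[0] > half]

# 3-cube, numeric order: exactly the four dimension-2 connections qualify
conns = [(i, j) for i in range(8) for j in range(i + 1, 8)
         if bin(i ^ j).count("1") == 1]
longs = long_connections(conns, 8)
assert longs == [(0, 4), (1, 5), (2, 6), (3, 7)]
fixed = {c: ch for ch, c in enumerate(longs)}    # pin them to channels 0..3
```

For D=3 this finds exactly four long connections, matching the example of four connections given above.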

Arbitrary Dimension: The Connection Maker

To spare the cumbersome task of making a new implementation for each dimension, a connection making module for ECLiPSe was made, and subsequent implementations incorporate the module.

The five versions wires_09.pl to wires_13.pl allow analysis of almost any potential hypercube layout. The cases they each deal with and run times are shown in table 3.

Greedy Solutions: Big Performance Gains

The dramatic improvement in performance achieved by mindomain/2 (see Table 1 ) prompted inquiry into the consequences of its use. Use of mindomain corresponds to use of the greedy algorithm (sort the connections by increasing left endpoint, then assign each connection to the lowest track number possible). By studying graph analogies to this wiring problem, two important facts were verified:

  1. The greedy algorithm gives an optimal solution. In CLP terms, mindomain/2 ignores most of the combination tree, but for this particular problem, it does not ignore all optimal combinations, so an optimal combination can be found.
  2. The number of channels required equals the maximum number of connections that overlap a common node position (i.e. the size of the largest set of connections such that any pair of connections in the set cannot be placed on the same channel).
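Both facts are easy to exercise on a small case (illustrative Python sketch; the function names are ours, and "gap required" overlap is assumed, so a channel is free only strictly to the right of its last wire):

```python
# Fact 1: greedy allocation (lowest free channel, connections sorted by left
# endpoint) -- the behaviour mindomain/2 effectively produces.
# Fact 2: the channel count equals the maximum number of connections that
# cross or touch a common node position. Both are checked for the 3-cube.

def greedy_channels(conns):
    channel_right_ends = []                  # right end of last wire per channel
    for left, right in sorted(conns):
        for ch, end in enumerate(channel_right_ends):
            if end < left:                   # gap required: strictly to the right
                channel_right_ends[ch] = right
                break
        else:
            channel_right_ends.append(right) # open a new channel
    return len(channel_right_ends)

def max_density(conns, n_positions):
    return max(sum(1 for c in conns if c[0] <= p <= c[1])
               for p in range(n_positions))

conns = [(i, j) for i in range(8) for j in range(i + 1, 8)
         if bin(i ^ j).count("1") == 1]      # 3-cube, numeric node order
assert greedy_channels(conns) == max_density(conns, 8) == 6
```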

The Problem in Graph-Colouring Terms

The problem of ordering the components in a line and then placing the connections in tracks can be studied using the analogous problem of graph-colouring.

Colouring a graph means assigning a colour (i.e. some distinct label) to each vertex. A colouring of a graph is valid if and only if no two vertices connected by an edge have the same colour. The minimum number of colours in which a graph can be coloured is the chromatic number of the graph.

We can now draw the analogy between hypercube wiring and graph colouring. First a graph must be made to represent the wiring.

The hypercube graph itself will not suffice because it does not contain sufficient information about the wiring layout. We can make a graph that has one vertex for each connection in the wiring (not for each node in the cube).

We connect a pair of vertices with an edge if and only if the wiring connections represented by those vertices cannot be placed on the same channel. Colouring a vertex is the same as placing the connection corresponding to that vertex in a wiring channel. Vertices in the graph that are joined by an edge cannot have the same colour. Similarly, the connections corresponding to those vertices cannot be placed on the same channel. Minimizing the number of channels is exactly analogous to minimizing the number of colours used to colour the graph.

The graph representation of the wiring as was just described has an edge between vertices whenever the corresponding wiring connections cannot be placed on the same channel (i.e. the connections would overlap or touch). It is not known, however, which pairs of connections overlap until the components are ordered in a line. For example, in the layout in Figure 3-b, the connections from 00 to 01, and from 10 to 11 can be placed on the same channel, but for the layout in Figure 3-c, which uses a different ordering of the components, these two connections overlap, so they must be placed on different channels.

By selecting an order of the components, it can be determined which pairs of connections overlap, and then the graph representation can be constructed. Given the graph, the chromatic number can be found by finding the colouring that uses the least colours. The wiring problem can be restated in graph terms as "order the components in a line to produce the graph representation of the wiring that has the lowest chromatic number, and find a colouring of the graph that uses the chromatic number of colours".
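The construction of the graph representation from a node order can be sketched directly (illustrative Python; for the example we assume, as in the 2-cube discussion, that one layout uses the numeric order and the other swaps nodes 01 and 10):

```python
# Build the graph representation for a given node order: one vertex per
# wiring connection, an edge whenever the two connections' position
# intervals share a node position (gap-required case).

def conflict_graph(order):
    pos = {node: p for p, node in enumerate(order)}
    conns = [(node, other) for node in order for other in order
             if node < other and bin(node ^ other).count("1") == 1]
    intervals = {c: tuple(sorted((pos[c[0]], pos[c[1]]))) for c in conns}
    edges = {(a, b) for a in conns for b in conns if a < b
             and intervals[a][0] <= intervals[b][1]
             and intervals[b][0] <= intervals[a][1]}
    return conns, edges

# With the numeric order the connections 00-01 and 10-11 do not conflict;
# swapping nodes 01 and 10 makes their intervals overlap.
_, e1 = conflict_graph([0, 1, 2, 3])
_, e2 = conflict_graph([0, 2, 1, 3])
assert ((0, 1), (2, 3)) not in e1
assert ((0, 1), (2, 3)) in e2
```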

Functional Language Implementation

Concurrent with investigation into graph models of the wiring, a functional implementation was devised for the problem. This implementation treats each channel as a gap. Whenever a connection is placed at the end of a gap, a new shorter gap is returned. If a connection is placed in the middle of a gap, the two shorter gaps remaining on either side of the connection are returned. The program maintains a (queue or stack?) of gaps, and places in each gap the connection that best fills the gap. Because it was not based on either of the graph-theory results described above, the solutions it generated were very likely, but not provably, optimal. Table 4 shows the results collected from that implementation.

Rapid CLP Implementations

Weeks after the functional implementation was devised, we used the graph theory findings to create rapid implementations for the problem that give provably optimal channel allocations. These implementations are of the following two types:

  1. Greedy - This type of implementation does not even express the problem in terms of constraints. It simply uses the greedy algorithm to produce an optimal channel allocation. This method is the fastest we have devised. The version implementing this method is wires_ga.pl.

  2. Maximum Clique by Bit Vector -- This type of implementation expresses each connection as a bit vector. The coordinates in the bit vector correspond to the node positions along a channel. The bit vector for a connection consists of 1's along the interval of node positions that the connection touches or crosses, and 0's everywhere else. For example, the connection from 00 to 01 in Figure 2 would have bit vector [1, 1, 0, 0]. If the bit vectors are added together, each coordinate in the sum vector is the number of connections to cross or touch the corresponding node position. The maximum of these coordinates is the size of the maximum clique of the representative graph, so it is the number of channels required. This type of implementation is very fast, and while not quite as fast as the greedy implementation, this method is extendible to the problem of determining the optimal order. This method is used in wires_14.pl, wires_15.pl and wires_16.pl.
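The bit vector computation is compact enough to sketch in full (illustrative Python, not the ECLiPSe code):

```python
# Sum the 0/1 bit vectors of all connections; the maximum coordinate of the
# sum is the maximum clique size of the representative graph, and hence the
# number of channels required.

def channels_by_bit_vectors(conns, n_positions):
    total = [0] * n_positions
    for left, right in conns:
        for p in range(left, right + 1):     # 1's where the wire touches/crosses
            total[p] += 1
    return max(total)

# 2-cube of Figure 2: connection 00-01 contributes the vector [1, 1, 0, 0]
conns = [(0, 1), (2, 3), (0, 2), (1, 3)]
assert channels_by_bit_vectors(conns, 4) == 3
```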

Table 5 summarizes run times for these implementations. As a reference, results for wires_09 are included; it was devised before we knew of the graph analogies, and so does not exploit them.

Note that the efficiency of the bit vector method is greatly improved over the old method of placing connections. This should facilitate efficient solutions to find the optimal order.

Also note that wires_16.pl is much slower than the wires_14.pl and wires_15.pl. This is because wires_16.pl, in determining which node position is the most commonly occupied (i.e. the position that determines the number of channels required), prunes positions that can be proved early in the computation not to be the busiest. The natural question arising is "Why would pruning SLOW the implementation?" The reason is that the overhead incurred far exceeds the gains of pruning. The expensive part of the computation is not determining which node position is the busiest, so to incur overhead in an attempt to speed that determination is wasteful. Technically, the number of node positions is exponential in D, but the number of connections is even larger, and the number of combinations for placing connections is larger still, so connection placement is the part of the computation where pruning is efficient.

Using the rapid CLP implementations, data on various layouts was gathered. Table 6 summarizes characteristics of optimal hypercube layouts that use numeric node ordering or grey-code ordering.

Formula for Chromatic Number

After the functional implementation provided results up to D=7, a search began for an order-independent lower bound on the number of channels. One was proposed, namely D + 2^(D-1) - 1. Our suspicion had been that the numeric node order would result in the minimum number of channels, so the lower bound was then compared to the number of channels required for the numeric order (as calculated by the functional implementation) to test the tightness of the lower bound. The difference between the channels required and this lower bound was found to increase exponentially, as shown in column LBD (lower bound difference) in Table 7.

A guess was then taken to estimate the lower bound difference. The estimate used was 2^(D-3)-1. The values for this estimate are in column LBDE (lower bound difference estimate) of Table 7. This estimate did not predict the LBD exactly beyond D=5, and in fact the error in this estimate also increased exponentially. More importantly, the error in LBDE (i.e. ELBDE in Table 7) was the same sequence of numbers as appeared in the LBD. The reemergence of the same sequence of numbers suggested a systematic way to add exponential terms to predict the LBD. For example, ELBDE could be estimated as 2^(D-5)-1, and this would be correct for D=6 and D=7, but the error in the estimate would then be 1 for D=8, 3 for D=9, 8 for D=10 and so on. By systematically including one more exponential term (i.e. 2^(D-7)-1, 2^(D-9)-1,...) every time the dimension is increased by 2, the LBD can be expressed exactly as a sum of exponential terms. Naturally, the number of channels is the lower bound plus the lower bound difference, so by getting a formula for LBD, one could construct a formula for the number of channels. The formula is:

  channels(D) = D + (2^(D-1) - 1) + (2^(D-3) - 1) + (2^(D-5) - 1) + ...

where a term (2^(D-2k-1) - 1) is included for each k = 0, 1, 2, ... as long as the exponent D-2k-1 is at least 1.

Note that the formula was correctly predicting the results obtained from the functional implementation, which was not proven correct. It was at this time that the greedy ECLiPSe solution was devised. Using this provably correct method of finding the optimal number of channels required, the formula was found to correctly predict the optimal number of channels for all values of D that were tested (2-10). Furthermore, it allowed us to confirm that the functional implementation was giving accurate results, because its answers agreed with those of the greedy solution.
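The agreement can be reproduced for small D (illustrative Python sketch: the formula is taken as the lower bound D + 2^(D-1) - 1 plus the correction terms 2^(D-3)-1, 2^(D-5)-1, ..., added while the exponent stays at least 1, and the exact count for the numeric order is computed as the maximum number of connections meeting at one node position):

```python
# Compare the channel-count formula with the exact channel count for the
# numeric node order, for small dimensions.

def formula(D):
    total = D
    e = D - 1
    while e >= 1:                   # terms 2^(D-1)-1, 2^(D-3)-1, 2^(D-5)-1, ...
        total += 2 ** e - 1
        e -= 2
    return total

def exact(D):
    n = 2 ** D
    conns = [(i, j) for i in range(n) for j in range(i + 1, n)
             if bin(i ^ j).count("1") == 1]
    return max(sum(1 for c in conns if c[0] <= p <= c[1]) for p in range(n))

for D in range(2, 7):
    assert formula(D) == exact(D)   # e.g. 3, 6, 12, 23, 45 channels
```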

The formula was proven by studying the graph representation of the wiring. The number of wiring channels required equals the chromatic number of the graph representation. This chromatic number equals the size of the largest clique (complete sub-graph) in the graph representation. A two-step recurrence relation was found for the size of the largest clique. The formula satisfies both the recurrence relation and sufficient base cases, and so it is proven.


Sub-Problem 2: Finding the Optimal Node Order

While there is a greedy algorithm to solve the first sub-problem (minimizing the number of channels given a fixed node order), the problem of finding the optimal node order is much more difficult. The same graph theory that proves the correctness of the greedy algorithm confirms that finding the optimal node order for an arbitrary wiring is NP-hard. Our work with this problem has involved three activities:

  1. Testing Various Node Orders
  2. Characterizing Node Orders
  3. Designing Solutions Based on Graph Theory

Testing Node Orders

Some experimenting with a diverse cross-section of node orders was done just to get familiar with general relationships between node order and the number of channels required. For this experimentation to lead to relevant discovery, a D-cube size is needed that is small enough for orders to be tested quickly, but large enough for there to be non-trivial differences amongst orders. D=3 was chosen. There are 40,320 (8!) possible orders, so there is certainly variety, and any one order can be analyzed quickly to determine the minimum number of wiring channels required if that order is used.

The following lists give orders for D=3 that result in optimal (i.e. 6 track) and sub-optimal (i.e. greater than 6 tracks) layouts respectively. The order is a list of the node identifiers as they are laid out from left to right.

Optimal Orders for D=3
  1. [0,1,2,3,4,5,6,7] -- numeric
  2. [4,5,6,7,0,1,2,3] -- numeric with rotation of 4
  3. [0,2,1,3,4,5,6,7] -- numeric with switch of 1 and 2
  4. [1,0,2,3,4,5,6,7] -- numeric with switch of 0 and 1
  5. [0,1,3,2,6,7,5,4] -- grey-code
  6. [3,2,6,7,5,4,0,1] -- grey-code with rotation of 2
  7. [6,7,5,4,0,1,3,2] -- grey-code with rotation of 4
  8. [0,2,4,6,1,3,5,7] -- even then odd
Sub-Optimal Orders for D=3
  1. [1,2,3,4,5,6,7,0] -- 9 tracks (numeric with rotation of 1)
  2. [2,3,4,5,6,7,0,1] -- 9 tracks (numeric with rotation of 2)
  3. [2,4,6,1,3,5,7,0] -- 9 tracks (rotation by 1 of numeric order delta-bits)
  4. [1,3,2,6,7,5,4,0] -- 7 tracks (grey-code with rotation of 1)
  5. [0,3,5,6,7,4,1,2] -- 12 tracks (worst possible, 1 track per connection)
  6. [0,3,5,6,2,1,7,4] -- 12 tracks (worst possible, 1 track per connection)
  7. [0,2,4,6,7,5,3,1] -- 7 tracks
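Any of the orders above can be evaluated mechanically (illustrative Python sketch; by the graph results of the previous section, the track count for a fixed order is the maximum number of connections meeting at one node position):

```python
# Count the tracks required by a given node order of the 3-cube: map each
# hypercube connection to a position interval, then take the maximum number
# of intervals covering any one position.

def tracks_for_order(order):
    pos = {node: p for p, node in enumerate(order)}
    n = len(order)
    intervals = [tuple(sorted((pos[a], pos[b])))
                 for a in range(n) for b in range(a + 1, n)
                 if bin(a ^ b).count("1") == 1]
    return max(sum(1 for lo, hi in intervals if lo <= p <= hi)
               for p in range(n))

assert tracks_for_order([0, 1, 2, 3, 4, 5, 6, 7]) == 6    # numeric: optimal
assert tracks_for_order([1, 2, 3, 4, 5, 6, 7, 0]) == 9    # numeric, rotation of 1
assert tracks_for_order([0, 3, 5, 6, 7, 4, 1, 2]) == 12   # worst possible
```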

Node Order Trends

We have tried to identify the key trait that makes an order optimal. No one decisive pattern has been found, but some useful trends have been noted.

Characterizing Node Orders

For the D-cube there are 2^D nodes and thus (2^D)! possible node orders. To reduce the number of orders we need to consider, we have tried to characterize node orders more generally. We have tried two methods of characterizing node orders -- delta-bits sequence and the xor pattern.

Delta-Bits Sequence

The delta-bits sequence describes a node order in terms of the number of bits that change from the binary label of one node to the binary label of the node to the right. For example, consider the numeric order [0,1,2,3,4,5,6,7]. In binary, this order is [000,001,010,011,100,101,110,111], so the delta-bits sequence is [1,2,1,3,1,2,1,3]. Note that the i-th term in the sequence is the number of bits that change from the i-th node to the (i+1)-th node, and that the sequence is taken cyclically: the last term compares the last node with the first.
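The sequence is a one-liner to compute (illustrative Python; the cyclic wrap-around assumed here is what makes the sequence have 2^D terms, matching the worked example):

```python
# Delta-bits sequence of a node order, taken cyclically: term i counts the
# bits that change between positions i and i+1 (the last term wraps around).

def delta_bits(order):
    return [bin(order[i] ^ order[(i + 1) % len(order)]).count("1")
            for i in range(len(order))]

assert delta_bits([0, 1, 2, 3, 4, 5, 6, 7]) == [1, 2, 1, 3, 1, 2, 1, 3]
```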

Xor Matrix

The xor matrix is a more specific version of the delta-bits sequence. Rather than just indicating how many bits change, the xor matrix indicates which bits change. The matrix has D rows (one for each bit position) and 2^D columns (one for each node position). The element in the i-th row and j-th column is the exclusive-or (1 if true, 0 if false) of the i-th bit of the node in the j-th node position and the i-th bit of the node in the (j+1)-th node position, with positions taken cyclically as for the delta-bits sequence. For example, consider the numeric order of nodes for D=3. The first column of the xor matrix indicates which bits changed from the first node position (which contains node 000) to the second position (which contains node 001), so the column would be
[1,
0,
0], because only the first (the right-most) bit is different between 000 and 001.
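The matrix construction follows directly (illustrative Python; row 0 is taken to be the right-most bit, matching the example column above, and columns wrap cyclically as with the delta-bits sequence):

```python
# Xor matrix of a node order: row i compares bit i (counting from the
# right-most bit) of the nodes in positions j and j+1, cyclically.

def xor_matrix(order, D):
    n = len(order)
    return [[(order[j] >> i & 1) ^ (order[(j + 1) % n] >> i & 1)
             for j in range(n)]
            for i in range(D)]

m = xor_matrix([0, 1, 2, 3, 4, 5, 6, 7], 3)
first_column = [m[i][0] for i in range(3)]
assert first_column == [1, 0, 0]    # only the right-most bit differs: 000 -> 001
```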

One can generalize the xor matrix further by sorting the rows. All empirical evidence and all intuition suggest that the order of the rows is irrelevant. Rearranging the rows of the xor matrix produces a different node order, but in every case we have tried, the overall layout is unchanged: one finds connections in exactly the same places. Rearranging the rows just results in a systematic switching of which node is where and which connection is where.

One intuitive conjecture to support the claim that rows of the xor matrix can be rearranged without changing the layout is as follows:

"Given a node order NO, and another order NO' obtained by rearranging the rows of the xor matrix, there will be a connection from node position X to node position Y when order NO' is used iff there is a connection between X and Y when order NO is used."

We have not proven this conjecture, but if it is true, then rearranging the rows of the xor matrix does not change the overall picture of connections at all.

We can exploit the fact that xor matrix rows can be rearranged by imposing an order on the rows of the xor matrix. Any node order that gives an xor matrix that does not comply with the order can be eliminated, because there will be a different order that does comply and gives the exact same picture of connections (and thus requires the same number of wiring tracks).

We have two basic suggestions for sorting the rows of the xor matrix. One way to sort them is by row sum. Another way is by the position of the row's leading (i.e. left-most) 1. The two criteria can also be ranked and used together (e.g. sort first by left-most leading 1 and second by decreasing row sum).

The best overall sorting idea devised is an extension of the leading-1 idea. Generally speaking, several rows can have a leading 1 in the same position, so to break the tie, we can sort these rows by the position of their next (i.e. second left-most) 1. The idea extends recursively until a strict order is imposed on the rows.
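This recursive tie-breaking is, if we read each row as a bit string, exactly decreasing lexicographic order (a sketch of the canonical-form idea in Python; the observation about lexicographic order is ours):

```python
# Canonical row order for the xor matrix: the recursive leading-1 tie-break
# (left-most 1 first, then next 1, and so on) coincides with sorting the
# rows, read as bit strings, in decreasing lexicographic order.

def canonical_rows(matrix):
    return sorted(matrix, reverse=True)      # rows (lists of 0/1) sort lexicographically

rows = [[0, 1, 0, 1], [1, 0, 0, 0], [0, 1, 1, 0]]
assert canonical_rows(rows) == [[1, 0, 0, 0], [0, 1, 1, 0], [0, 1, 0, 1]]
```

Any node order whose xor matrix is not already in this canonical row order can then be discarded, as argued above.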

Comparison of Node Order Generalizations

The difference between the delta-bits sequence and the xor matrix, in terms of how well they characterize node orders, is slight but important. We know that a delta-bits sequence describes at least as many orders as the sorted xor matrix, but we needed to determine how much more general, if at all, the delta-bits sequence is, and if it categorizes orders that produce similar layouts. We are relatively confident that the xor matrix characterization does not group together orders producing different layouts. The following example demonstrates that the delta-bits sequence, however, does group together orders that give different layouts. This means that the delta-bits sequence is too general for our purposes. The two orders listed below have the same delta-bits sequence [1,1,1,1,1,1,1,1], but not the same xor matrix (i.e. not even the same matrix rows in a different order).

The rows in the xor matrix for Order 1 are the reverse of the rows for the xor matrix in Order 2. It turns out that Order 1 results in a 6 track (i.e. optimal for D=3) layout, while Order 2 results in a 9 track layout. This one example is sufficient to show that the delta-bits sequence is not quite adequate to characterize node orders for our purposes.

Graph Theory Solutions to Find the Optimal Order

This refers to an implementation idea to find the number of channels required with the optimal node order. It is based on using bit vectors to represent connections and to determine the maximum clique size (see ECLiPSe Particulars).


Alpesh Patel
aap130@cs.usask.ca