BitcodeWriter: Emit uniqued subgraphs after all distinct nodes

Since forward references for uniqued node operands are expensive (and
those for distinct node operands are cheap due to
DistinctMDOperandPlaceholder), minimize forward references in uniqued
node operands.

Moreover, guarantee that when a cycle is broken by a distinct node, none
of the uniqued nodes have any forward references.  In
ValueEnumerator::EnumerateMetadata, enumerate uniqued node subgraphs
first, delaying distinct nodes until all uniqued nodes have been
handled.  This guarantees that uniqued nodes only have forward
references when there is a uniquing cycle (since r267276 changed
ValueEnumerator::organizeMetadata to partition distinct nodes in front
of uniqued nodes as a post-pass).

Note that a single uniqued subgraph can hit multiple distinct nodes at
its leaves.  Ideally these would themselves be emitted in post-order,
but this commit doesn't attempt that; I think it requires an extra pass
through the edges, which I'm not convinced is worth it (since
DistinctMDOperandPlaceholder makes forward references quite cheap
between distinct nodes).

I've added two testcases:

  - test/Bitcode/mdnodes-distinct-in-post-order.ll is just like
    test/Bitcode/mdnodes-in-post-order.ll, except with distinct nodes
    instead of uniqued ones.  This confirms that, in the absence of
    uniqued nodes, distinct nodes are still emitted in post-order.

  - test/Bitcode/mdnodes-distinct-nodes-break-cycles.ll is the minimal
    example where a naive post-order traversal would cause one uniqued
    node to forward-reference another.  IOW, it's the motivating test.

llvm-svn: 267278
Author: Duncan P. N. Exon Smith
Date: 2016-04-23 04:59:22 +00:00
Parent: 498b4977ba
Commit: 30805b2417
4 changed files with 91 additions and 1 deletion


@@ -567,6 +567,12 @@ void ValueEnumerator::dropFunctionFromMetadata(
 }
 
 void ValueEnumerator::EnumerateMetadata(unsigned F, const Metadata *MD) {
+  // It's vital for reader efficiency that uniqued subgraphs are done in
+  // post-order; it's expensive when their operands have forward references.
+  // If a distinct node is referenced from a uniqued node, it'll be delayed
+  // until the uniqued subgraph has been completely traversed.
+  SmallVector<const MDNode *, 32> DelayedDistinctNodes;
+
   // Start by enumerating MD, and then work through its transitive operands in
   // post-order.  This requires a depth-first search.
   SmallVector<std::pair<const MDNode *, MDNode::op_iterator>, 32> Worklist;
@@ -584,7 +590,12 @@ void ValueEnumerator::EnumerateMetadata(unsigned F, const Metadata *MD) {
     if (I != N->op_end()) {
       auto *Op = cast<MDNode>(*I);
       Worklist.back().second = ++I;
-      Worklist.push_back(std::make_pair(Op, Op->op_begin()));
+
+      // Delay traversing Op if it's a distinct node and N is uniqued.
+      if (Op->isDistinct() && !N->isDistinct())
+        DelayedDistinctNodes.push_back(Op);
+      else
+        Worklist.push_back(std::make_pair(Op, Op->op_begin()));
       continue;
     }
@@ -592,6 +603,14 @@ void ValueEnumerator::EnumerateMetadata(unsigned F, const Metadata *MD) {
     Worklist.pop_back();
     MDs.push_back(N);
     MetadataMap[N].ID = MDs.size();
+
+    // Flush out any delayed distinct nodes; these are all the distinct nodes
+    // that are leaves of the last uniqued subgraph.
+    if (Worklist.empty() || Worklist.back().first->isDistinct()) {
+      for (const MDNode *N : DelayedDistinctNodes)
+        Worklist.push_back(std::make_pair(N, N->op_begin()));
+      DelayedDistinctNodes.clear();
+    }
   }
 }