Commit Graph

11480 Commits

Author SHA1 Message Date
Eric Christopher
1b4b1e559a Grammar-o.
llvm-svn: 128004
2011-03-21 18:06:21 +00:00
Bill Wendling
00f0cddfd4 We need to pass the TargetMachine object to the InstPrinter if we are printing
the alias of an InstAlias instead of the thing being aliased, because we need to
know the features that are valid for an InstAlias.

This is part of a work-in-progress.

llvm-svn: 127986
2011-03-21 04:13:46 +00:00
Jakob Stoklund Olesen
35502423de Process all dead defs after rematerializing during splitting.
llvm-svn: 127973
2011-03-20 19:46:23 +00:00
Jakob Stoklund Olesen
e55003fb04 Also eliminate redundant spills downstream of inserted reloads.
This can happen when multiple sibling registers are spilled after live range
splitting.

llvm-svn: 127965
2011-03-20 05:44:58 +00:00
Jakob Stoklund Olesen
39488642d3 Change an argument to a LiveInterval instead of a register number to save some redundant lookups.
llvm-svn: 127964
2011-03-20 05:44:55 +00:00
Jakob Stoklund Olesen
ccacd0df19 Replace a broken LiveInterval::MergeValueInAsValue() with something simpler.
llvm-svn: 127960
2011-03-19 23:02:49 +00:00
Jakob Stoklund Olesen
8698507fe1 Add debug output.
llvm-svn: 127959
2011-03-19 23:02:47 +00:00
Evan Cheng
b1f3b4989f Minor code re-structuring.
llvm-svn: 127952
2011-03-19 17:03:16 +00:00
Nadav Rotem
e7a101ccab Add support for legalizing UINT_TO_FP of vectors on platforms which do
not have native support for this operation (such as X86).
The legalized code uses two vector INT_TO_FP operations and is faster
than scalarizing.

llvm-svn: 127951
2011-03-19 13:09:10 +00:00
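
A scalar, per-lane sketch of the split-into-halves idea, which I take to be the shape of the two-INT_TO_FP expansion mentioned above (my illustration under made-up names, not the LLVM legalization code; the use of double is mine):

    #include <cassert>
    #include <cstdint>

    // Per-lane sketch: a 32-bit unsigned value may not fit the signed
    // conversion the target provides, but each of its 16-bit halves does.
    // Convert the halves with signed int-to-fp and recombine; the vector
    // legalization does the equivalent with two vector INT_TO_FP nodes.
    static double uintToFPViaSignedHalves(std::uint32_t X) {
      std::int32_t Hi = static_cast<std::int32_t>(X >> 16);     // non-negative
      std::int32_t Lo = static_cast<std::int32_t>(X & 0xFFFF);  // non-negative
      return static_cast<double>(Hi) * 65536.0 + static_cast<double>(Lo);
    }

    int main() {
      for (std::uint64_t X = 0; X <= 0xFFFFFFFFu; X += 12345)   // sampled check
        assert(uintToFPViaSignedHalves(static_cast<std::uint32_t>(X)) ==
               static_cast<double>(static_cast<std::uint32_t>(X)));
      return 0;
    }
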
Stuart Hastings
12d5312622 Reapply 127939 since Daniel fixed the breakage. <rdar://problem/9012638>
llvm-svn: 127944
2011-03-19 02:42:31 +00:00
Stuart Hastings
08b4daa191 Revert 127939. <rdar://problem/9012638>
llvm-svn: 127943
2011-03-19 02:33:56 +00:00
Stuart Hastings
83d4a28d1f Revise r126127 to address Daniel's comments. <rdar://problem/9012638>
llvm-svn: 127939
2011-03-19 01:32:01 +00:00
Jim Grosbach
7b162490fd Beginnings of MC-JIT code generation.
Proof-of-concept code that code-gens a module to an in-memory MachO object.
This will be hooked up to a run-time dynamic linker library (see llvm-rtdyld
for similarly conceptual work on that part), which will take the compiled
object and link it together with the rest of the system, providing back to the
JIT a table of available symbols which will be used to respond to the
getPointerTo*() queries.

llvm-svn: 127916
2011-03-18 22:48:41 +00:00
Jakob Stoklund Olesen
816f5f4c2a Extend live debug values down the dominator tree by following copies.
The llvm.dbg.value intrinsic refers to SSA values, not virtual registers, so we
should be able to extend the range of a value by tracking that value through
register copies. This greatly improves the debug value tracking for function
arguments that for some reason are copied to a second virtual register at the
end of the entry block.

We only extend the debug value range where its register is killed. All original
llvm.dbg.value locations are still respected.

Copies from physical registers are ignored. That should not be a problem since
the entry block already adds DBG_VALUE instructions for the virtual registers
holding the function arguments.

llvm-svn: 127912
2011-03-18 21:42:19 +00:00
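
A very loose, self-contained toy of the "follow the value through a killing copy" idea (the Copy/VarLoc names and the whole setup are hypothetical, not the LiveDebugVariables code): when a copy kills the register currently holding a variable's value, the variable's location can move to the copy's destination instead of ending there.

    #include <map>
    #include <string>
    #include <vector>

    // Toy model: VarLoc maps each variable name to the register currently
    // holding its value.  A copy that kills that register lets the location
    // follow the value into the destination register, extending the range.
    struct Copy { int Dst, Src; bool KillsSrc; };

    static void extendThroughCopies(std::map<std::string, int> &VarLoc,
                                    const std::vector<Copy> &Copies) {
      for (const Copy &C : Copies)
        for (auto &V : VarLoc)
          if (C.KillsSrc && V.second == C.Src)
            V.second = C.Dst;   // the value lives on in the copy destination
    }

    int main() {
      std::map<std::string, int> VarLoc = {{"argc", 1}};
      extendThroughCopies(VarLoc, {{2, 1, true}});  // %2 = COPY killed %1
      return VarLoc["argc"] == 2 ? 0 : 1;
    }
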
Jakob Stoklund Olesen
27320cb864 Hoist spills when the same value is known to be in less loopy sibling registers.
Stack slot real estate is virtually free compared to registers, so it is
advantageous to spill earlier even though the same value is now kept in both a
register and a stack slot.

Also eliminate redundant spills by extending the stack slot live range
underneath reloaded registers.

This can trigger a dead code elimination, removing copies and even reloads that
were only feeding spills.

llvm-svn: 127868
2011-03-18 04:23:06 +00:00
Jakob Stoklund Olesen
fdc09941f2 Accept instructions that read undefined values.
This is not supposed to happen, but I have seen the x86 rematter getting
confused when rematerializing partial redefs.

llvm-svn: 127857
2011-03-18 03:06:04 +00:00
Jakob Stoklund Olesen
c099dde918 Be more accurate about the slot index at which a register is read when dealing
with defs and early clobbers.

Assert when trying to find an undefined value.

llvm-svn: 127856
2011-03-18 03:06:02 +00:00
Benjamin Kramer
cfcea12fe2 BuildUDIV: If the divisor is even, we can simplify the fixup of the multiplied value by introducing an early shift.
This allows us to compile "unsigned foo(unsigned x) { return x/28; }" into
	shrl	$2, %edi
	imulq	$613566757, %rdi, %rax
	shrq	$32, %rax
	ret

instead of
	movl    %edi, %eax
	imulq   $613566757, %rax, %rcx
	shrq    $32, %rcx
	subl    %ecx, %eax
	shrl    %eax
	addl    %ecx, %eax
	shrl    $4, %eax

on x86_64

llvm-svn: 127829
2011-03-17 20:39:14 +00:00
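
The arithmetic behind the even-divisor shortcut is worth spelling out: 28 = 4 * 7, so x/28 == (x>>2)/7, and after the shift the value is below 2^30, which is small enough that a single multiply-high by the magic constant 613566757 (roughly 2^32/7) is already exact, so the subtract/shift/add fixup disappears. A sketch of that reasoning (my illustration, not the BuildUDIV code; the helper name is made up):

    #include <cassert>
    #include <cstdint>

    // x / 28 == (x >> 2) / 7.  After the shift the value is below 2^30, so
    // the 32x32->64 multiply by 613566757 (about 2^32 / 7) rounds down to
    // the exact quotient and no fixup sequence is needed.
    static std::uint32_t udiv28(std::uint32_t X) {
      std::uint32_t Y = X >> 2;  // strip the power-of-two factor of the divisor
      return static_cast<std::uint32_t>(
          (static_cast<std::uint64_t>(Y) * 613566757u) >> 32);
    }

    int main() {
      for (std::uint64_t X = 0; X <= 0xFFFFFFFFu; X += 9973)  // sampled check
        assert(udiv28(static_cast<std::uint32_t>(X)) ==
               static_cast<std::uint32_t>(X) / 28);
      return 0;
    }
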
Jakob Stoklund Olesen
8630840c30 Dead code elimination may separate the live interval into multiple connected components.
I have convinced myself that it can only happen when a phi value dies. When it
happens, allocate new virtual registers for the components.

llvm-svn: 127827
2011-03-17 20:37:07 +00:00
Cameron Zwarich
2ef0c69df1 Move more logic into getTypeForExtArgOrReturn.
llvm-svn: 127809
2011-03-17 14:53:37 +00:00
Cameron Zwarich
34e7b3f77e Rename getTypeForExtendedInteger() to getTypeForExtArgOrReturn().
llvm-svn: 127807
2011-03-17 14:21:56 +00:00
Jakob Stoklund Olesen
315b42c354 Rewrite instructions as part of ConnectedVNInfoEqClasses::Distribute.
llvm-svn: 127779
2011-03-17 00:23:45 +00:00
Jakob Stoklund Olesen
e14b2b226f Add a LiveRangeEdit delegate callback before shrinking a live range.
The register allocator needs to adjust its live interval unions when that happens.

llvm-svn: 127774
2011-03-16 22:56:16 +00:00
Jakob Stoklund Olesen
c738c96519 Erase virtual registers that are unused after DCE.
llvm-svn: 127773
2011-03-16 22:56:13 +00:00
Jakob Stoklund Olesen
e29d63e98a Tag cached interference with a user-provided tag instead of the virtual register number.
The live range of a virtual register may change which invalidates the cached
interference information.

llvm-svn: 127772
2011-03-16 22:56:11 +00:00
Jakob Stoklund Olesen
557a82c099 Clarify debugging output.
llvm-svn: 127771
2011-03-16 22:56:08 +00:00
Cameron Zwarich
ac106273d4 The x86-64 ABI says that a bool is only guaranteed to be sign-extended to a byte
rather than an int. Thankfully, this only causes LLVM to miss optimizations, not
generate incorrect code.

This just fixes the zext at the return. We still insert an i32 ZextAssert when
reading a function's arguments, but it is followed by a truncate and another i8
ZextAssert so it is not optimized.

llvm-svn: 127766
2011-03-16 22:20:18 +00:00
Cameron Zwarich
d1ad9bc277 Don't recompute something that we already have in a local variable.
llvm-svn: 127764
2011-03-16 22:20:07 +00:00
Daniel Dunbar
fd95b016fb Revert r127757, "Patch to fix a dwarf relocation problem on ARM. One-line fix
plus the test where it used to break.", which broke Clang self-host of a
Debug+Asserts compiler on OS X.

llvm-svn: 127763
2011-03-16 22:16:39 +00:00
Renato Golin
a3aeafeb35 Patch to fix a dwarf relocation problem on ARM. One-line fix plus the test where it used to break.
llvm-svn: 127757
2011-03-16 21:05:52 +00:00
Jakob Stoklund Olesen
a0d5ec10d1 Trace back through sibling copies to hoist spills and find rematerializable defs.
After live range splitting, an original value may be available in multiple
registers. Tracing back through the registers containing the same value, find
the best place to insert a spill, determine if the value has already been
spilled, or discover a reaching def that may be rematerialized.

This is only the analysis part. The information is not used for anything yet.

llvm-svn: 127698
2011-03-15 21:13:25 +00:00
Jakob Stoklund Olesen
32210de3e4 Preserve both isPHIDef and isDefByCopy bits when copying parent values.
llvm-svn: 127697
2011-03-15 21:13:22 +00:00
Evan Cheng
e4b8ac9fef Add a peephole optimization to optimize pairs of bitcasts. e.g.
v2 = bitcast v1
...
v3 = bitcast v2
...
   = v3
=>
v2 = bitcast v1
...
   = v1
if v1 and v3 are in the same register class.

Bitcasts between i32 and fp (and other types) are often not nops since the
types are in different register classes. These bitcast instructions are often
left behind because they are in different basic blocks and cannot be
eliminated by dag combine.

rdar://9104514

llvm-svn: 127668
2011-03-15 05:13:13 +00:00
Evan Cheng
c5c2cfa381 sext(undef) = 0, because the top bits will all be the same.
zext(undef) = 0, because the top bits will be zero.

llvm-svn: 127649
2011-03-15 02:22:10 +00:00
Bill Wendling
5c25a92011 There are some situations which can cause the URoR hack to infinitely recurse
and then go kablooie. The problem was that it started tracking the PHI nodes anew
on each call into this function, which it didn't need to do. Because the recursion
didn't know that a PHINode had already been visited, it would go ahead and call
itself.

There is a testcase, but unfortunately it's too big to add. This problem will go
away with the EH rewrite.
<rdar://problem/8856298>

llvm-svn: 127640
2011-03-15 01:03:17 +00:00
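
The general shape of that kind of fix, sketched with made-up names (this is not the URoR code): carry one visited set through the whole recursion so that a node reached again through a cycle stops the walk instead of recursing forever.

    #include <set>
    #include <vector>

    // A cycle of mutually referencing nodes (as PHI nodes can form) makes a
    // naive recursion call itself forever; a visited set that persists
    // across the recursive calls breaks the cycle.
    struct Node { std::vector<Node *> Preds; };

    static void visit(Node *N, std::set<Node *> &Visited) {
      if (!Visited.insert(N).second)
        return;                    // already seen on this walk: stop recursing
      for (Node *P : N->Preds)
        visit(P, Visited);
    }

    int main() {
      Node A, B;
      A.Preds = {&B};
      B.Preds = {&A};              // the cycle that would otherwise never end
      std::set<Node *> Visited;
      visit(&A, Visited);          // terminates: each node is entered once
      return 0;
    }
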
Jakob Stoklund Olesen
59a549b7ec Place context in member variables instead of passing around pointers.
Use the opportunity to get rid of the trailing underscore variable names.

llvm-svn: 127618
2011-03-14 20:57:14 +00:00
Jakob Stoklund Olesen
a00bab24c2 Rename members to match LLVM naming conventions more closely.
Remove the unused reserved_ bit vector, no functional change intended.

This doesn't break 'svn blame'; this file really is all my fault.

llvm-svn: 127607
2011-03-14 19:56:43 +00:00
Evan Cheng
37139edc8c BIT_CONVERT has been renamed to BITCAST.
llvm-svn: 127600
2011-03-14 18:19:52 +00:00
Evan Cheng
d2f3b01797 Minor optimization. sign-ext/anyext of undef is still undef.
llvm-svn: 127598
2011-03-14 18:15:55 +00:00
Jakob Stoklund Olesen
e1539cc5b6 Now that we are deleting unused live intervals during allocation, pointers may be reused.
Use the virtual register number as a cache tag instead. They are not reused.

llvm-svn: 127561
2011-03-13 01:29:32 +00:00
Jakob Stoklund Olesen
43a87501b3 Tell the register allocator about new unused virtual registers.
This allows the allocator to free any resources used by the virtual register,
including physical register assignments.

llvm-svn: 127560
2011-03-13 01:23:11 +00:00
Duncan Sands
b847bf547b Speculatively revert commit 127478 (jsjodin) in an attempt to fix the
llvm-gcc-i386-linux-selfhost and llvm-x86_64-linux-checks buildbots.
The original log entry:
Remove an optimization emitting a reference instead of a label difference, since
it can create more relocations. Removed the isBaseAddressKnownZero method,
because it is no longer used.

llvm-svn: 127540
2011-03-12 13:07:37 +00:00
Jakob Stoklund Olesen
e77005ef88 Include snippets in the live stack interval.
llvm-svn: 127530
2011-03-12 04:25:36 +00:00
Jakob Stoklund Olesen
a86595e06b Spill multiple registers at once.
Live range splitting can create a number of small live ranges containing only a
single real use. Spill these small live ranges along with the large range they
are connected to with copies. This enables memory operand folding and maximizes
the spill-to-fill distance.

Work in progress with known bugs.

llvm-svn: 127529
2011-03-12 04:17:20 +00:00
Jakob Stoklund Olesen
dae1dc1f01 That's it, I am declaring this a failure of the C++03 STL.
There are too many compatibility problems with using mixed types in
std::upper_bound, and I don't want to spend 110 lines of boilerplate setting up
a call to a 10-line function. Binary search is not /that/ hard to implement
correctly.

I tried terminating the binary search with a linear search, but against my
expectation that actually made the algorithm slower. Most live intervals have
fewer than 4 segments. The early test against endIndex() does pay off, and this
version is 25% faster than plain std::upper_bound().

llvm-svn: 127522
2011-03-12 01:50:35 +00:00
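
A rough sketch of the shape being described, using a simplified Segment type and my own names (not the LiveInterval code): an upper-bound style search over segment end points, where the up-front comparison against the last end index plays the role of the early test against endIndex().

    #include <cassert>
    #include <cstddef>
    #include <vector>

    // Find the first segment whose end index is strictly greater than Idx.
    struct Segment { unsigned Start, End; };

    static std::size_t findFirstEndingAfter(const std::vector<Segment> &Segs,
                                            unsigned Idx) {
      if (Segs.empty() || Idx >= Segs.back().End)
        return Segs.size();                 // early test: past the last segment
      std::size_t Lo = 0, Hi = Segs.size() - 1;  // Segs[Hi].End > Idx is known
      while (Lo < Hi) {
        std::size_t Mid = Lo + (Hi - Lo) / 2;
        if (Segs[Mid].End > Idx)
          Hi = Mid;
        else
          Lo = Mid + 1;
      }
      return Lo;
    }

    int main() {
      std::vector<Segment> Segs = {{0, 4}, {6, 10}, {12, 20}};
      assert(findFirstEndingAfter(Segs, 3) == 0);
      assert(findFirstEndingAfter(Segs, 10) == 2);
      assert(findFirstEndingAfter(Segs, 25) == 3);
      return 0;
    }
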
Cameron Zwarich
4d7d728594 Fix the GCC test suite issue exposed by r127477, which was caused by stack
protector insertion not working correctly with unreachable code. Since that
revision was rolled out, this test doesn't actually fail before this fix.

llvm-svn: 127497
2011-03-11 21:51:56 +00:00
Owen Anderson
66443c034d Teach FastISel to support register-immediate-immediate instructions.
llvm-svn: 127496
2011-03-11 21:33:55 +00:00
Jan Sjödin
f3f78583f9 Remove an optimization emitting a reference instead of a label difference, since it can create more relocations. Removed the isBaseAddressKnownZero method, because it is no longer used.
llvm-svn: 127478
2011-03-11 19:37:02 +00:00
Andrew Trick
710d5da306 Replace the -dag-chain-limit flag with a constant. It has survived a release cycle without being touched, so it no longer needs to pollute the hidden-help text.
llvm-svn: 127468
2011-03-11 17:46:59 +00:00
John Wiegley
8559f5914c Fix use of CompEnd predicate to be standards conforming
The existing CompEnd predicate does not define a strict weak order as required
by the C++03 standard; therefore, its use as a predicate to std::upper_bound
is invalid. For a discussion of this issue, see
http://www.open-std.org/jtc1/sc22/wg21/docs/lwg-defects.html#270

This patch replaces the asymmetrical comparison with an iterator adaptor that
achieves the same effect while being strictly standard-conforming by ensuring
an apples-to-apples comparison.

llvm-svn: 127462
2011-03-11 08:54:34 +00:00
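
A self-contained sketch of the adaptor idea, with made-up Segment/EndIterator types rather than the ones in the patch: wrapping the segment iterator so that dereferencing yields the end index means every comparison std::upper_bound performs is index against index, which trivially satisfies the strict weak ordering requirement.

    #include <algorithm>
    #include <cassert>
    #include <cstddef>
    #include <iterator>
    #include <vector>

    struct Segment { unsigned Start, End; };

    // Random-access adaptor over a segment iterator that dereferences to the
    // segment's end index, so std::upper_bound only ever compares two indices.
    struct EndIterator {
      using iterator_category = std::random_access_iterator_tag;
      using value_type = unsigned;
      using difference_type = std::ptrdiff_t;
      using pointer = const unsigned *;
      using reference = unsigned;

      std::vector<Segment>::const_iterator I;

      unsigned operator*() const { return I->End; }
      EndIterator &operator++() { ++I; return *this; }
      EndIterator &operator+=(difference_type N) { I += N; return *this; }
      friend difference_type operator-(EndIterator A, EndIterator B) { return A.I - B.I; }
      friend bool operator==(EndIterator A, EndIterator B) { return A.I == B.I; }
      friend bool operator!=(EndIterator A, EndIterator B) { return A.I != B.I; }
    };

    int main() {
      std::vector<Segment> Segs = {{0, 4}, {6, 10}, {12, 20}};
      EndIterator B{Segs.begin()}, E{Segs.end()};
      // First segment whose end index is strictly greater than 10.
      EndIterator It = std::upper_bound(B, E, 10u);
      assert(It.I->Start == 12);
      return 0;
    }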