The code paths for MTE enabled and disabled were interleaved, which made
each path harder to follow at both the source and the assembly level. In
this change, we move the parts where the two paths diverge into separate
functions and apply minor refactors to the code structure.
Since setting COMPILER_RT_ASAN_SHADOW_SCALE_DEFINITION was removed in
commit 8421fa5d53,
clean up the remaining uses of COMPILER_RT_ASAN_SHADOW_SCALE_DEFINITION.
...which caused issues like
> ==42==ERROR: AddressSanitizer failed to deallocate 0x32 (50) bytes at
address 0x117e0000 (error code: 28)
> ==42==Cannot dump memory map on emscriptenAddressSanitizer: CHECK
failed: sanitizer_common.cpp:81 "((0 && "unable to unmmap")) != (0)"
(0x0, 0x0) (tid=288045824)
> #0 0x14f73b0c in __asan::CheckUnwind()+0x14f73b0c
(this.program+0x14f73b0c)
> #1 0x14f8a3c2 in __sanitizer::CheckFailed(char const*, int, char
const*, unsigned long long, unsigned long long)+0x14f8a3c2
(this.program+0x14f8a3c2)
> #2 0x14f7d6e1 in __sanitizer::ReportMunmapFailureAndDie(void*,
unsigned long, int, bool)+0x14f7d6e1 (this.program+0x14f7d6e1)
> #3 0x14f81fbd in __sanitizer::UnmapOrDie(void*, unsigned
long)+0x14f81fbd (this.program+0x14f81fbd)
> #4 0x14f875df in __sanitizer::SuppressionContext::ParseFromFile(char
const*)+0x14f875df (this.program+0x14f875df)
> #5 0x14f74eab in __asan::InitializeSuppressions()+0x14f74eab
(this.program+0x14f74eab)
> #6 0x14f73a1a in __asan::AsanInitInternal()+0x14f73a1a
(this.program+0x14f73a1a)
when trying to use an ASan suppressions file under Emscripten: even
though such a partial munmap would be considered OK by SUSv4, the
Emscripten runtime states "We don't support partial munmapping" (see
<f4115eb2c3>
"Implement MAP_ANONYMOUS on top of malloc in STANDALONE_WASM mode
(#16289)").
Co-authored-by: Stephan Bergmann <stephan.bergmann@allotropia.de>
libfuzzer's -jobs option will, depending on the number of CPUs, spin up
a
WorkerThread and end up printing the log file using CopyFileToErr.
This leads to an MSan false positive. This patch disables the MSan
interceptor checks,
similarly to other instances in https://reviews.llvm.org/D48891
Side-note: this false positive issue first appeared when printf()
was replaced by puts() (90b4d1bcb2).
The interceptor check was always present; however, MSan does not enable
check_printf by default.
This pulls out `ContextNode` as we need to use it pretty much as-is to implement a writer. The writer will be implemented on the LLVM side because it takes a dependency on BitStreamWriter.
Since we can't reuse a header between compiler-rt and llvm, we use a header file that is copied on both sides, and test that the two copies are identical.
This change also adds the other pieces necessary for compiler-rt/ctx_profile testing.
APIs for contextual profiling. `ContextNode` is the call context-specific counter buffer. `ContextRoot` is associated with those functions that constitute roots into interesting call graphs, and is the object off which we hang the `Arena`s used for allocating `ContextNode`s, as well as the `ContextNode` corresponding to such functions. Graphs of `ContextNode`s are accessible by one thread at a time.
(Tracking Issue: #89287, more details in the RFC referenced there)
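As a rough mental model only (field names and layout here are simplified and illustrative, not the actual shared header), the relationship between the two types might be pictured as:
```cpp
#include <cstdint>

// Illustrative sketch of a call context-specific counter buffer.
struct ContextNodeSketch {
  uint64_t Guid;                // identifies the callee function
  uint32_t NumCounters;         // counters owned by this context
  uint32_t NumCallsites;        // callsites from which subcontexts hang
  ContextNodeSketch *Next;      // other contexts reached from the same callsite
  // Trailing counter and subcontext arrays would follow in the arena allocation.
};

// Illustrative sketch of the per-root bookkeeping object.
struct ContextRootSketch {
  void *CurrentArena;           // arenas used to allocate ContextNodes
  ContextNodeSketch *RootNode;  // the root function's own ContextNode
  // The graph hanging off RootNode is accessed by one thread at a time.
};
```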
`Region->Exhausted` indicates that we have no more pages from which to
create new blocks in the region. It has a different meaning from a region
allocation failure.
Also fix a minor lint issue in popBlocks().
When compiling the common sanitizer libraries, there are many warnings
about format specifiers, similar to:
compiler-rt/lib/sanitizer_common/sanitizer_symbolizer_markup.cpp:31:32: warning: format specifies type 'void *' but the argument has type 'uptr' (aka 'unsigned long') [-Wformat]
31 | buffer->AppendF(kFormatData, DI->start);
| ~~~~~~~~~~~ ^~~~~~~~~
compiler-rt/lib/sanitizer_common/sanitizer_symbolizer_markup_constants.h:33:46: note: format string is defined here
33 | constexpr const char *kFormatData = "{{{data:%p}}}";
| ^~
| %lu
compiler-rt/lib/sanitizer_common/sanitizer_symbolizer_markup.cpp:46:43: warning: format specifies type 'void *' but the argument has type 'uptr' (aka 'unsigned long') [-Wformat]
46 | buffer->AppendF(kFormatFrame, frame_no, address);
| ~~~~~~~~~~~~ ^~~~~~~
compiler-rt/lib/sanitizer_common/sanitizer_symbolizer_markup_constants.h:36:48: note: format string is defined here
36 | constexpr const char *kFormatFrame = "{{{bt:%u:%p}}}";
| ^~
| %lu
...
This is because `uptr` is platform-dependent and can be
`unsigned long long`, `unsigned long`, or `unsigned int`.
To fix the warnings, cast the arguments to the types the format strings
expect.
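As a standalone illustration of the shape of the fix (printf stands in for buffer->AppendF, and the alias used for `uptr` is an assumption):
```cpp
#include <cstdint>
#include <cstdio>

using uptr = uintptr_t;  // stand-in for the platform-dependent sanitizer uptr

int main() {
  uptr address = 0x117e0000;
  // Passing 'address' directly to "%p" triggers -Wformat on platforms where
  // uptr is not the same type as 'void *'; casting silences the warning.
  std::printf("{{{data:%p}}}\n", reinterpret_cast<void *>(address));
  return 0;
}
```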
Trying to address the build failure on the `clang-ve-ninja` bot, which
appears hard to repro locally. The target isn't needed currently (there
are unit tests exercising the new functionality). Removing it for now
to green-ify the build bot.
Downstream disabled EnableContiguousRegions on RISCV-64 to avoid
running out of virtual memory, but our tests still use the internal
FuchsiaConfig class, which therefore needs to be changed too.
Reverts llvm/llvm-project#88965
This caused a test suite failure:
https://lab.llvm.org/buildbot/#/builders/185/builds/6583
NOEXE: test-suite::aarch64-acle-fmv-features.test
```
/home/tcwg-buildbot/worker/clang-aarch64-lld-2stage/test/test-suite/SingleSource/UnitTests/AArch64/acle-fmv-features.c:98:1: error: redefinition of 'check_sha1'
98 | CHECK(sha1, {
| ^
/home/tcwg-buildbot/worker/clang-aarch64-lld-2stage/test/test-suite/SingleSource/UnitTests/AArch64/acle-fmv-features.c:36:17: note: expanded from macro 'CHECK'
36 | static void check_##X(void) { \
| ^
<scratch space>:150:1: note: expanded from here
150 | check_sha1
| ^
```
I presume that the useless features need to be removed from the fmv test
as well.
As explained in https://github.com/ARM-software/acle/pull/315 we
are deprecating features which aren't adding any value. These are:
sha1, pmull, dit, dgh, ebf16, sve-bf16, sve-ebf16, sve-i8mm,
sve2-pmull128, memtag2, memtag3, ssbs2, bti, ls64_v, ls64_accdata
The code in this file dates back to 2012 when Clang's support for atomic
builtins was still quite limited. The bugs referenced in the comment
at the top of the file have long been fixed and using the compiler
builtins directly should now generate slightly better code.
Additionally, this allows using the atomic builtin header for platforms
where the `__sync_*` builtins are lacking (e.g. Arm Morello).
This change does not introduce any code generation changes for
__tsan_read*/__tsan_write* or __tsan_func_{entry,exit} on x86, which
indicates the previously noted compiler issues have been fixed.
We also have to touch the non-clang codepaths here since the only way we
can make this work easily is by making the memory_order enum match the
compiler-provided macros, so we have to update the debug checks that
assumed the enum was always a bitflag.
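Concretely, matching the compiler-provided macros means the enumerators become small sequential values rather than a bitmask; a minimal sketch of the idea (not the exact tsan definitions):
```cpp
// __ATOMIC_RELAXED, __ATOMIC_ACQUIRE, etc. are predefined by Clang/GCC, so an
// enum mirroring them can be handed straight to the __atomic_* builtins.
enum class morder : int {
  relaxed = __ATOMIC_RELAXED,  // 0
  consume = __ATOMIC_CONSUME,  // 1
  acquire = __ATOMIC_ACQUIRE,  // 2
  release = __ATOMIC_RELEASE,  // 3
  acq_rel = __ATOMIC_ACQ_REL,  // 4
  seq_cst = __ATOMIC_SEQ_CST,  // 5
};

// With sequential values, a bitflag-style debug check such as
// "(mo & acquire) != 0" no longer works; checks must compare orders instead.
static_assert(static_cast<int>(morder::acquire) == __ATOMIC_ACQUIRE,
              "enum matches the compiler macro");
```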
The one downside of this change is that 32-bit MIPS now definitely
requires libatomic (but that may already have been needed for RMW ops).
Reviewed By: dvyukov
Pull Request: https://github.com/llvm/llvm-project/pull/84439
`__xray_customevent` and `__xray_typedevent` are built-in functions in
Clang. With -fxray-instrument, they are lowered to `__xray_CustomEvent`
(with 2 arguments) or `__xray_TypedEvent` (with 3 arguments).
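For reference, a minimal usage sketch of the builtins (the exact prototypes are assumed here from the argument counts above; without -fxray-instrument the calls should compile away):
```cpp
// Compile with: clang++ -fxray-instrument xray_events.cpp
[[clang::xray_always_instrument]] void handle_request() {
  const char msg[] = "request-start";
  // Lowered to a call to __xray_CustomEvent under -fxray-instrument.
  __xray_customevent(msg, sizeof(msg));
  // The typed variant carries an extra event-type argument and is lowered to
  // __xray_TypedEvent.
  __xray_typedevent(/*event_type=*/1, msg, sizeof(msg));
}

int main() {
  handle_request();
  return 0;
}
```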
xray patching is supported for shared objects, but they may contain
`__xray_customevent` and `__xray_typedevent` references that need to be
satisfied by default visibility definitions exported by the executable.
Since df54f627fa, lld, like GNU ld, catches this scenario at link time.
This fixes https://github.com/llvm/llvm-project/issues/87324.
We haven't been able to come up with a minimal reproducer, but we can
reliably avoid this failure with the following fix. Prior to the
GetGlobalLowLevelAllocator change, the old LowLevelAllocator acquired a
lock associated with it, preventing that specific allocator from being
accessed at the same time by many threads. With the
GetGlobalLowLevelAllocator change, I had accidentally replaced it without
taking the lock into account, so we can have a data race if the
allocator is used at any point while a thread is being created. The
global allocator can be used for flag parsing or registering asan
globals.
This removes the requirement that we reserve the memory for all regions
up front. That reservation needs a huge number of contiguous pages,
which can be a challenge in certain cases. Therefore, add a new flag,
EnableContiguousRegions, to indicate whether we want to allocate all the
regions next to each other.
Note that once EnableContiguousRegions is disabled, EnableRandomOffset
becomes irrelevant because the base of each region is already random.
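As an illustration of where the flag lives (a simplified config fragment, not the literal scudo allocator_config.h or FuchsiaConfig definitions):
```cpp
// Hypothetical primary-allocator config fragment for SizeClassAllocator64.
struct ExamplePrimaryConfig {
  // When true, all regions are reserved as one contiguous block up front.
  static const bool EnableContiguousRegions = false;
  // Irrelevant when EnableContiguousRegions is false, since each region's
  // base is then already randomized by the separate reservations.
  static const bool EnableRandomOffset = true;
};
```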
Unlike the other compiler-rt unit tests, MemProf was not using the
`generate_compiler_rt_tests()` helper that ensures the test is compiled
using the test compiler (generally the Clang binary built earlier).
This was exposed by https://github.com/llvm/llvm-project/pull/83088
because it started adding Clang-specific flags to
COMPILER_RT_UNITTEST_CFLAGS if the compiler ID matched "Clang".
This change should fix the buildbots that compile compiler-rt using
a GCC compiler with LLVM_ENABLE_PROJECTS=compiler-rt.
Reviewed By: vitalybuka
Pull Request: https://github.com/llvm/llvm-project/pull/88074
Fix since #75481 got reverted.
- Explicitly set BitfieldBits to 0 to avoid an uninitialized field member
for the integer checks:
```diff
- llvm::ConstantInt::get(Builder.getInt8Ty(), Check.first)};
+ llvm::ConstantInt::get(Builder.getInt8Ty(), Check.first),
+ llvm::ConstantInt::get(Builder.getInt32Ty(), 0)};
```
- `Value **Previous` was erroneously `Value *Previous` in
`CodeGenFunction::EmitWithOriginalRHSBitfieldAssignment`, fixed now.
- Update the following:
```diff
- if (Kind == CK_IntegralCast) {
+ if (Kind == CK_IntegralCast || Kind == CK_LValueToRValue) {
```
The cast kind is CK_LValueToRValue when going from, e.g., char to char,
and CK_IntegralCast otherwise.
- Make sure that `Value *Previous = nullptr;` is initialized (see
1189e87951)
- Add another extensive testcase
`ubsan/TestCases/ImplicitConversion/bitfield-conversion.c`
---------
Co-authored-by: Vitaly Buka <vitalybuka@gmail.com>
This patch implements the implicit truncation and implicit sign change
checks for bitfields using UBSan. E.g.,
`-fsanitize=implicit-bitfield-truncation` and
`-fsanitize=implicit-bitfield-sign-change`.
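For instance, with a hypothetical struct, stores like the following would be flagged at runtime:
```cpp
// Compile with:
//   clang++ -fsanitize=implicit-bitfield-truncation,implicit-bitfield-sign-change bitfield.cpp
#include <cstdio>

struct S {
  unsigned int u : 3;  // can represent 0..7
  int s : 4;           // can represent -8..7
};

int main() {
  S x{};
  unsigned int big = 10;
  x.u = big;   // 10 does not fit in 3 bits -> implicit-bitfield-truncation
  x.s = 200u;  // sign and value change in a 4-bit signed field -> sign-change check
  std::printf("%u %d\n", x.u, x.s);
  return 0;
}
```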
Clean up all of the calls and remove the redundant == 0 checks.
There is only one small visible change: for non-Android, the memalign
function will now fail if the alignment is zero. Previously, this would
have passed.
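For illustration (assuming a glibc-style memalign declaration; the exact failure mode depends on the allocator's error-reporting settings):
```cpp
#include <cstdio>
#include <malloc.h>

int main() {
  // With this change, a zero alignment is rejected on non-Android targets;
  // previously this call would have been treated as valid.
  void *p = memalign(/*alignment=*/0, /*size=*/100);
  std::printf("memalign(0, 100) -> %p\n", p);
  return 0;
}
```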