25 Commits

Author SHA1 Message Date
Rainchus
6e7a5bdb2f Add dmtc1 and dmfc1 functionality to the recompiler (#134)
* make osPiReadIo no longer ignored

* remove added dmtc1/dmfc1 functionality

* add dmtc1 and dmfc1 to recompiler
2025-07-07 01:55:15 -04:00
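For reference: unlike mfc1/mtc1, which move 32 bits (with sign extension on reads), dmtc1/dmfc1 move a full 64-bit doubleword between a GPR and an FPR. A minimal C sketch of the semantics, illustrative only and not the recompiler's actual codegen:

typedef uint64_t gpr;

/* Placeholder helpers, not the recompiler's context layout. */
static inline void do_dmtc1(gpr rt, uint64_t* fs) { *fs = rt; } /* GPR -> FPR, all 64 bits */
static inline void do_dmfc1(gpr* rt, uint64_t fs) { *rt = fs; } /* FPR -> GPR, no truncation or sign extension */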
MelonSpeedruns
e76668356b Fixed paths with spaces causing Compress-Archive to fail. (#141)
* Fixed paths with spaces not compressing properly.

Needs testing on Linux and Mac!

* Fixed path for additional files
2025-07-07 01:53:02 -04:00
Wiseguy
3531bc0317 Optional dependencies for mod tool and add dependency name vector in recompiler context (#147) 2025-07-07 01:52:18 -04:00
Rainchus
989a86b369 Make osPiReadIo no longer ignored (#133) 2025-02-27 16:58:59 -05:00
Wiseguy
d660733116 Implement reloc pairing GNU extension (#131) 2025-02-13 18:20:48 -05:00
Wiseguy
8781eb44ac Add a mechanism to provide original section indices for jump table regeneration (#130) 2025-02-11 22:36:33 -05:00
Wiseguy
be65d37760 Added config option parsing for mod toml and populate them in the mod manifest (#129) 2025-01-31 02:36:33 -05:00
Wiseguy
2af6f2d161 Implement shim function generation (#128) 2025-01-30 23:48:20 -05:00
Wiseguy
198de1b5cf Move handling of HI16/LO16 relocs for non-relocatable reference sections into elf parsing by patching the output binary (fixes patch regeneration) (#127) 2025-01-30 02:54:27 -05:00
Wiseguy
b18e0ca2dd Fix function signature for RSP microcodes that don't have overlays (#126) 2025-01-26 22:32:15 -05:00
Wiseguy
b2d07ecd5a Renamed mod manifest to mod.json, added display_name, description, short_description fields (#125) 2025-01-26 21:57:00 -05:00
Wiseguy
38df8e3ddc Mod function hooking (#124)
* Add function hooks to mod symbol format

* Add function sizes to section function tables

* Add support for function hooks in live generator

* Add an option to the context to force function lookup for all non-relocated function calls

* Include relocs in overlay data

* Include R_MIPS_26 relocs in symbol file dumping/parsing

* Add manual patch symbols (syms.ld) to the output overlay file and relocs

* Fix which relocs were being emitted for patch sections

* Fix sign extension issue with mfc1, add TODO for banker's rounding
2025-01-26 21:52:46 -05:00
Ethan Lafrenais
36b5d9ae33 PIC Jump Table Support (#120)
* Support for $gp relative jump table calls
2025-01-16 00:40:50 -05:00
Ethan Lafrenais
916d16417e RSPRecomp overlay support (#118)
* RSPRecomp overlay support

* Change overlay_slot.offset config to text_address
2025-01-16 00:32:29 -05:00
Ethan Lafrenais
53ffee96fd Add ldl, ldr, sdl, sdr implementations (#119) 2025-01-12 22:43:46 -05:00
LittleCube
49bf144b0d Add TRACE_RETURN (#117) 2025-01-04 22:10:29 -05:00
LittleCube
351482e9c6 Fix TRACE_ENTRY and move function_sizes (#112) 2025-01-04 21:49:31 -05:00
Wiseguy
6dafc108f3 Skip internal symbols when dumping context (#116) 2025-01-02 20:50:46 -05:00
Wiseguy
fc696046da Fix some calling convention issues with the live recompiler (#115) 2024-12-31 19:12:54 -05:00
Wiseguy
66062a06e9 Implement live recompiler (#114)
This commit implements the "live recompiler", another backend for the recompiler that generates platform-specific assembly at runtime. This is still static recompilation rather than dynamic recompilation: it still requires up-front information about the binary being recompiled and leverages the same static analysis that the C recompiler uses. However, like dynamic recompilation, it's aimed at recompiling binaries at runtime, mainly for modding purposes.

The live recompiler leverages a library called sljit to generate platform-specific code. This library provides an API that's implemented on several platforms, including the main targets of this component: x86_64 and ARM64.

Performance is expected to be slower than the C recompiler's output, but should still be plenty fast enough to run large amounts of recompiled code without issue. Considering these ROMs can often be run through an interpreter and still hit full speed, performance should not be a concern for running native code, even if it's less optimal than the C recompiler's codegen.

As mentioned earlier, the main use of the live recompiler will be for loading mods in the N64Recomp runtime. This makes it so that modders don't need to ship platform-specific binaries for their mods, and allows fixing bugs with recompilation down the line without requiring modders to update their binaries.

This PR also includes a utility for testing the live recompiler. It accepts binaries in a custom format which contain the instructions, input data, and target data. Documentation for the test format, as well as most of the tests that were used to validate the live recompiler, can be found here. The few remaining tests were hacked-together binaries that I put together very hastily, so they need to be cleaned up and will probably be uploaded at a later date. The only test in that suite that doesn't currently succeed is the div test, due to unknown behavior when the two operands aren't properly sign-extended to 64 bits. This has no bearing on practical usage, since the inputs will always be sign-extended as expected.
2024-12-31 16:11:40 -05:00
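For orientation, the live recompiler's lifecycle as exercised by the test harness added in this PR looks roughly like the following, condensed from the diff below (error handling omitted; names taken from the test code):

// Assumes a populated N64Recomp::Context (sections, functions, relocs)
// and a section_addresses vector, as set up in the test harness below.
N64Recomp::LiveGeneratorInputs inputs {
    .switch_error = test_switch_error,                    // runtime callbacks
    .get_function = test_get_function,
    .reference_section_addresses = nullptr,
    .local_section_addresses = section_addresses.data()
};
N64Recomp::LiveGenerator generator{ context.functions.size(), inputs };
std::vector<std::vector<uint32_t>> static_funcs{};
for (size_t i = 0; i < context.functions.size(); i++) {
    std::ostringstream dummy{}; // live backend shares the C backend's interface
    N64Recomp::recompile_function_live(generator, context, i, dummy, static_funcs, true);
}
N64Recomp::LiveGeneratorOutput output = generator.finish(); // sljit emits native code here
output.functions[0](rdram.data(), &ctx);                    // call into the generated code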
LittleCube
0d0e93e979 Check if mod context is good in mod tool (#113) 2024-12-26 18:49:22 -05:00
LittleCube
17438755a1 Implement nrm filename toml input, renaming list, trace mode, and context dumping flag (#111)
* implement nrm filename toml input

* change name of mod toml setting to 'mod_filename'

* add renaming and re mode

* fix --dump-context arg, fix entrypoint detection

* refactor re_mode to function_trace_mode

* adjust trace mode to use a general TRACE_ENTRY() macro

* fix some renaming and trace mode comments, revert no toml entrypoint code, add TODO to broken block

* fix arg2 check and usage string
2024-12-24 02:10:26 -05:00
LittleCube
d33d381617 Show error when the zip command is not found on Linux (#94)
* Show error when the zip command is not found on Linux

* Exit on Linux if the zip command is not found
2024-09-15 17:59:19 -04:00
Wiseguy
d5ab74220d Various mod fixes (#95)
* Terminate offline mod recompilation if any functions fail to recompile

* Fixed edge case with switch case jump table detection when lo16 immediate is exactly 0

* Prevent emitting duplicate reference symbol defines in offline mod recompilation

* Fix function calls and add missing runtime function pointers in offline mod recompiler
2024-09-12 18:54:08 -04:00
Wiseguy
cc71b31b09 Modding Support PR 2 (Finished mod tool base feature set and improvements for use in N64ModernRuntime) (#93)
* Remove reference context from parse_mod_symbols argument

* Add support for special dependency names (self and base recomp), fix non-compliant offline mod recompiler output

* Fix export names not being set on functions when parsing mod syms, add missing returns to mod parsing

* Switch offline mod recompilation to use a base global event index instead of per-event global indices

* Add support for creating events in normal recompilation

* Output recomp API version in offline mod recompiler

* Removed dependency version from mod symbols (moved to manifest)

* Added mod manifest generation to mod tool

* Implement mod file creation in Windows

* Fixed some error prints not using stderr

* Implement mod file creation on posix systems

* De-hardcode symbol file path for offline mod recompiler

* Fix duplicate import symbols issue and prevent emitting unused imports
2024-09-09 22:49:57 -04:00
27 changed files with 5488 additions and 771 deletions

.gitignore vendored

@@ -6,8 +6,8 @@
*.elf
*.z64
# Output C files
test/funcs
# Local working data
tests
# Linux build output
build/
@@ -42,12 +42,6 @@ bld/
# Visual Studio 2015/2017 cache/options directory
.vs/
# Libraries (binaries that aren't in the repo)
test/Lib
# RT64 (since it's not public yet)
test/RT64
# Runtime files
imgui.ini
rt64.log

.gitmodules vendored

@@ -10,3 +10,6 @@
[submodule "lib/tomlplusplus"]
path = lib/tomlplusplus
url = https://github.com/marzer/tomlplusplus
[submodule "lib/sljit"]
path = lib/sljit
url = https://github.com/zherczeg/sljit


@@ -164,3 +164,32 @@ target_sources(OfflineModRecomp PRIVATE
)
target_link_libraries(OfflineModRecomp fmt rabbitizer tomlplusplus::tomlplusplus N64Recomp)
# Live recompiler
project(LiveRecomp)
add_library(LiveRecomp)
target_sources(LiveRecomp PRIVATE
${CMAKE_CURRENT_SOURCE_DIR}/LiveRecomp/live_generator.cpp
${CMAKE_CURRENT_SOURCE_DIR}/lib/sljit/sljit_src/sljitLir.c
)
target_include_directories(LiveRecomp PRIVATE
${CMAKE_CURRENT_SOURCE_DIR}/lib/sljit/sljit_src
)
target_link_libraries(LiveRecomp N64Recomp)
# Live recompiler test
project(LiveRecompTest)
add_executable(LiveRecompTest)
target_sources(LiveRecompTest PRIVATE
${CMAKE_CURRENT_SOURCE_DIR}/LiveRecomp/live_recompiler_test.cpp
)
target_include_directories(LiveRecompTest PRIVATE
${CMAKE_CURRENT_SOURCE_DIR}/lib/sljit/sljit_src
)
target_link_libraries(LiveRecompTest LiveRecomp)

File diff suppressed because it is too large.


@@ -0,0 +1,364 @@
#include <fstream>
#include <chrono>
#include <filesystem>
#include <cinttypes>
#include "sljitLir.h"
#include "recompiler/live_recompiler.h"
#include "recomp.h"
static std::vector<uint8_t> read_file(const std::filesystem::path& path, bool& found) {
std::vector<uint8_t> ret;
found = false;
std::ifstream file{ path, std::ios::binary};
if (file.good()) {
file.seekg(0, std::ios::end);
ret.resize(file.tellg());
file.seekg(0, std::ios::beg);
file.read(reinterpret_cast<char*>(ret.data()), ret.size());
found = true;
}
return ret;
}
uint32_t read_u32_swap(const std::vector<uint8_t>& vec, size_t offset) {
return byteswap(*reinterpret_cast<const uint32_t*>(&vec[offset]));
}
uint32_t read_u32(const std::vector<uint8_t>& vec, size_t offset) {
return *reinterpret_cast<const uint32_t*>(&vec[offset]);
}
std::vector<uint8_t> rdram;
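// RDRAM is stored byteswapped: each aligned 32-bit word holds big-endian N64
// data in a little-endian host word, so byte copies and compares XOR the byte
// index with 3 to land on the right host byte.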
void byteswap_copy(uint8_t* dst, uint8_t* src, size_t count) {
for (size_t i = 0; i < count; i++) {
dst[i ^ 3] = src[i];
}
}
bool byteswap_compare(uint8_t* a, uint8_t* b, size_t count) {
for (size_t i = 0; i < count; i++) {
if (a[i ^ 3] != b[i]) {
return false;
}
}
return true;
}
enum class TestError {
Success,
FailedToOpenInput,
FailedToRecompile,
UnknownStructType,
DataDifference
};
struct TestStats {
TestError error;
uint64_t codegen_microseconds;
uint64_t execution_microseconds;
uint64_t code_size;
};
void write1(uint8_t* rdram, recomp_context* ctx) {
MEM_B(0, ctx->r4) = 1;
}
recomp_func_t* test_get_function(int32_t vram) {
if (vram == 0x80100000) {
return write1;
}
assert(false);
return nullptr;
}
void test_switch_error(const char* func, uint32_t vram, uint32_t jtbl) {
printf(" Switch-case out of bounds in %s at 0x%08X for jump table at 0x%08X\n", func, vram, jtbl);
}
TestStats run_test(const std::filesystem::path& tests_dir, const std::string& test_name) {
std::filesystem::path input_path = tests_dir / (test_name + "_data.bin");
std::filesystem::path data_dump_path = tests_dir / (test_name + "_data_out.bin");
bool found;
std::vector<uint8_t> file_data = read_file(input_path, found);
if (!found) {
printf("Failed to open file: %s\n", input_path.string().c_str());
return { TestError::FailedToOpenInput };
}
// Parse the test file.
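// Header layout (all fields byteswapped u32):
//   0x00 text_offset        0x04 text_length
//   0x08 init_data_offset   0x0C good_data_offset
//   0x10 data_length        0x14 text_address
//   0x18 data_address       0x1C next_struct_address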
uint32_t text_offset = read_u32_swap(file_data, 0x00);
uint32_t text_length = read_u32_swap(file_data, 0x04);
uint32_t init_data_offset = read_u32_swap(file_data, 0x08);
uint32_t good_data_offset = read_u32_swap(file_data, 0x0C);
uint32_t data_length = read_u32_swap(file_data, 0x10);
uint32_t text_address = read_u32_swap(file_data, 0x14);
uint32_t data_address = read_u32_swap(file_data, 0x18);
uint32_t next_struct_address = read_u32_swap(file_data, 0x1C);
recomp_context ctx{};
byteswap_copy(&rdram[text_address - 0x80000000], &file_data[text_offset], text_length);
byteswap_copy(&rdram[data_address - 0x80000000], &file_data[init_data_offset], data_length);
// Build recompiler context.
N64Recomp::Context context{};
// Move the file data into the context.
context.rom = std::move(file_data);
context.sections.resize(2);
// Create a section for the function to exist in.
context.sections[0].ram_addr = text_address;
context.sections[0].rom_addr = text_offset;
context.sections[0].size = text_length;
context.sections[0].name = ".text";
context.sections[0].executable = true;
context.sections[0].relocatable = true;
context.section_functions.resize(context.sections.size());
// Create a section for .data (used for relocations)
context.sections[1].ram_addr = data_address;
context.sections[1].rom_addr = init_data_offset;
context.sections[1].size = data_length;
context.sections[1].name = ".data";
context.sections[1].executable = false;
context.sections[1].relocatable = true;
size_t start_func_index;
uint32_t function_desc_address = 0;
uint32_t reloc_desc_address = 0;
// Read any extra structs.
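// Extra structs form a singly linked list: +0x00 holds the struct type
// (1 = function desc, 2 = relocation desc) and +0x04 the next struct's
// address (0 terminates the list).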
while (next_struct_address != 0) {
uint32_t cur_struct_address = next_struct_address;
uint32_t struct_type = read_u32_swap(context.rom, next_struct_address + 0x00);
next_struct_address = read_u32_swap(context.rom, next_struct_address + 0x04);
switch (struct_type) {
case 1: // Function desc
function_desc_address = cur_struct_address;
break;
case 2: // Relocation
reloc_desc_address = cur_struct_address;
break;
default:
printf("Unknown struct type %u\n", struct_type);
return { TestError::UnknownStructType };
}
}
// Check if a function description exists.
if (function_desc_address == 0) {
// No function description, so treat the whole thing as one function.
// Get the function's instruction words.
std::vector<uint32_t> text_words{};
text_words.resize(text_length / sizeof(uint32_t));
for (size_t i = 0; i < text_words.size(); i++) {
text_words[i] = read_u32(context.rom, text_offset + i * sizeof(uint32_t));
}
// Add the function to the context.
context.functions_by_vram[text_address].emplace_back(context.functions.size());
context.section_functions.emplace_back(context.functions.size());
context.sections[0].function_addrs.emplace_back(text_address);
context.functions.emplace_back(
text_address,
text_offset,
text_words,
"test_func",
0
);
start_func_index = 0;
}
else {
// Use the function description.
uint32_t num_funcs = read_u32_swap(context.rom, function_desc_address + 0x08);
start_func_index = read_u32_swap(context.rom, function_desc_address + 0x0C);
for (size_t func_index = 0; func_index < num_funcs; func_index++) {
uint32_t cur_func_address = read_u32_swap(context.rom, function_desc_address + 0x10 + 0x00 + 0x08 * func_index);
uint32_t cur_func_length = read_u32_swap(context.rom, function_desc_address + 0x10 + 0x04 + 0x08 * func_index);
uint32_t cur_func_offset = cur_func_address - text_address + text_offset;
// Get the function's instruction words.
std::vector<uint32_t> text_words{};
text_words.resize(cur_func_length / sizeof(uint32_t));
for (size_t i = 0; i < text_words.size(); i++) {
text_words[i] = read_u32(context.rom, cur_func_offset + i * sizeof(uint32_t));
}
// Add the function to the context.
context.functions_by_vram[cur_func_address].emplace_back(context.functions.size());
context.section_functions.emplace_back(context.functions.size());
context.sections[0].function_addrs.emplace_back(cur_func_address);
context.functions.emplace_back(
cur_func_address,
cur_func_offset,
std::move(text_words),
"test_func_" + std::to_string(func_index),
0
);
}
}
// Check if a relocation description exists.
if (reloc_desc_address != 0) {
uint32_t num_relocs = read_u32_swap(context.rom, reloc_desc_address + 0x08);
for (uint32_t reloc_index = 0; reloc_index < num_relocs; reloc_index++) {
uint32_t cur_desc_address = reloc_desc_address + 0x0C + reloc_index * 4 * sizeof(uint32_t);
uint32_t reloc_type = read_u32_swap(context.rom, cur_desc_address + 0x00);
uint32_t reloc_section = read_u32_swap(context.rom, cur_desc_address + 0x04);
uint32_t reloc_address = read_u32_swap(context.rom, cur_desc_address + 0x08);
uint32_t reloc_target_offset = read_u32_swap(context.rom, cur_desc_address + 0x0C);
context.sections[0].relocs.emplace_back(N64Recomp::Reloc{
.address = reloc_address,
.target_section_offset = reloc_target_offset,
.symbol_index = 0,
.target_section = static_cast<uint16_t>(reloc_section),
.type = static_cast<N64Recomp::RelocType>(reloc_type),
.reference_symbol = false
});
}
}
std::vector<std::vector<uint32_t>> dummy_static_funcs{};
std::vector<int32_t> section_addresses{};
section_addresses.emplace_back(text_address);
section_addresses.emplace_back(data_address);
auto before_codegen = std::chrono::system_clock::now();
N64Recomp::LiveGeneratorInputs generator_inputs {
.switch_error = test_switch_error,
.get_function = test_get_function,
.reference_section_addresses = nullptr,
.local_section_addresses = section_addresses.data()
};
// Create the sljit compiler and the generator.
N64Recomp::LiveGenerator generator{ context.functions.size(), generator_inputs };
for (size_t func_index = 0; func_index < context.functions.size(); func_index++) {
std::ostringstream dummy_ostream{};
//sljit_emit_op0(compiler, SLJIT_BREAKPOINT);
if (!N64Recomp::recompile_function_live(generator, context, func_index, dummy_ostream, dummy_static_funcs, true)) {
return { TestError::FailedToRecompile };
}
}
// Generate the code.
N64Recomp::LiveGeneratorOutput output = generator.finish();
auto after_codegen = std::chrono::system_clock::now();
auto before_execution = std::chrono::system_clock::now();
int old_rounding = fegetround();
// Run the generated code.
ctx.r29 = 0xFFFFFFFF80000000 + rdram.size() - 0x10; // Set the stack pointer.
output.functions[start_func_index](rdram.data(), &ctx);
fesetround(old_rounding);
auto after_execution = std::chrono::system_clock::now();
// Check the result of running the code.
bool good = byteswap_compare(&rdram[data_address - 0x80000000], &context.rom[good_data_offset], data_length);
// Dump the data if the results don't match.
if (!good) {
std::ofstream data_dump_file{ data_dump_path, std::ios::binary };
std::vector<uint8_t> data_swapped;
data_swapped.resize(data_length);
byteswap_copy(data_swapped.data(), &rdram[data_address - 0x80000000], data_length);
data_dump_file.write(reinterpret_cast<char*>(data_swapped.data()), data_length);
return { TestError::DataDifference };
}
// Return the test's stats.
TestStats ret{};
ret.error = TestError::Success;
ret.codegen_microseconds = std::chrono::duration_cast<std::chrono::microseconds>(after_codegen - before_codegen).count();
ret.execution_microseconds = std::chrono::duration_cast<std::chrono::microseconds>(after_execution - before_execution).count();
ret.code_size = output.code_size;
return ret;
}
int main(int argc, const char** argv) {
if (argc < 3) {
printf("Usage: %s [test directory] [test 1] ...\n", argv[0]);
return EXIT_SUCCESS;
}
N64Recomp::live_recompiler_init();
rdram.resize(0x8000000);
// Skip the first argument (program name) and second argument (test directory).
int count = argc - 1 - 1;
int passed_count = 0;
std::vector<size_t> failed_tests{};
for (size_t test_index = 0; test_index < count; test_index++) {
const char* cur_test_name = argv[2 + test_index];
printf("Running test: %s\n", cur_test_name);
TestStats stats = run_test(argv[1], cur_test_name);
switch (stats.error) {
case TestError::Success:
printf(" Success\n");
printf(" Generated %" PRIu64 " bytes in %" PRIu64 " microseconds and ran in %" PRIu64 " microseconds\n",
stats.code_size, stats.codegen_microseconds, stats.execution_microseconds);
passed_count++;
break;
case TestError::FailedToOpenInput:
printf(" Failed to open input data file\n");
break;
case TestError::FailedToRecompile:
printf(" Failed to recompile\n");
break;
case TestError::UnknownStructType:
printf(" Unknown additional data struct type in test data\n");
break;
case TestError::DataDifference:
printf(" Output data did not match, dumped to file\n");
break;
}
if (stats.error != TestError::Success) {
failed_tests.emplace_back(test_index);
}
printf("\n");
}
printf("Passed %d/%d tests\n", passed_count, count);
if (!failed_tests.empty()) {
printf(" Failed: ");
for (size_t i = 0; i < failed_tests.size(); i++) {
size_t test_index = failed_tests[i];
printf("%s", argv[2 + test_index]);
if (i != failed_tests.size() - 1) {
printf(", ");
}
}
printf("\n");
}
return 0;
}


@@ -3,7 +3,7 @@
#include <vector>
#include <span>
#include "n64recomp.h"
#include "recompiler/context.h"
#include "rabbitizer.hpp"
static std::vector<uint8_t> read_file(const std::filesystem::path& path, bool& found) {
@@ -24,17 +24,9 @@ static std::vector<uint8_t> read_file(const std::filesystem::path& path, bool& f
return ret;
}
const std::filesystem::path func_reference_syms_file_path {
"C:/n64/MMRecompTestMod/Zelda64RecompSyms/mm.us.rev1.syms.toml"
};
const std::vector<std::filesystem::path> data_reference_syms_file_paths {
"C:/n64/MMRecompTestMod/Zelda64RecompSyms/mm.us.rev1.datasyms.toml",
"C:/n64/MMRecompTestMod/Zelda64RecompSyms/mm.us.rev1.datasyms_static.toml"
};
int main(int argc, const char** argv) {
if (argc != 4) {
printf("Usage: %s [mod symbol file] [ROM] [output C file]\n", argv[0]);
if (argc != 5) {
printf("Usage: %s [mod symbol file] [mod binary file] [recomp symbols file] [output C file]\n", argv[0]);
return EXIT_SUCCESS;
}
bool found;
@@ -54,7 +46,7 @@ int main(int argc, const char** argv) {
std::vector<uint8_t> dummy_rom{};
N64Recomp::Context reference_context{};
if (!N64Recomp::Context::from_symbol_file(func_reference_syms_file_path, std::move(dummy_rom), reference_context, false)) {
if (!N64Recomp::Context::from_symbol_file(argv[3], std::move(dummy_rom), reference_context, false)) {
printf("Failed to load provided function reference symbol file\n");
return EXIT_FAILURE;
}
@@ -73,12 +65,14 @@ int main(int argc, const char** argv) {
N64Recomp::Context mod_context;
N64Recomp::ModSymbolsError error = N64Recomp::parse_mod_symbols(symbol_data_span, rom_data, sections_by_vrom, reference_context, mod_context);
N64Recomp::ModSymbolsError error = N64Recomp::parse_mod_symbols(symbol_data_span, rom_data, sections_by_vrom, mod_context);
if (error != N64Recomp::ModSymbolsError::Good) {
fprintf(stderr, "Error parsing mod symbols: %d\n", (int)error);
return EXIT_FAILURE;
}
mod_context.import_reference_context(reference_context);
// Populate R_MIPS_26 reloc symbol indices. Start by building a map of vram address to matching reference symbols.
std::unordered_map<uint32_t, std::vector<size_t>> reference_symbols_by_vram{};
for (size_t reference_symbol_index = 0; reference_symbol_index < mod_context.num_regular_reference_symbols(); reference_symbol_index++) {
@@ -124,7 +118,8 @@ int main(int argc, const char** argv) {
std::vector<std::vector<uint32_t>> static_funcs_by_section{};
static_funcs_by_section.resize(mod_context.sections.size());
std::ofstream output_file { argv[3] };
const char* output_file_path = argv[4];
std::ofstream output_file { output_file_path };
RabbitizerConfig_Cfg.pseudos.pseudoMove = false;
RabbitizerConfig_Cfg.pseudos.pseudoBeqz = false;
@@ -134,6 +129,9 @@ int main(int argc, const char** argv) {
output_file << "#include \"mod_recomp.h\"\n\n";
// Write the API version.
output_file << "RECOMP_EXPORT uint32_t recomp_api_version = 1;\n\n";
output_file << "// Values populated by the runtime:\n\n";
// Write import function pointer array and defines (i.e. `#define testmod_inner_import imported_funcs[0]`)
@@ -143,35 +141,60 @@ int main(int argc, const char** argv) {
const auto& import = mod_context.import_symbols[import_index];
output_file << "#define " << import.base.name << " imported_funcs[" << import_index << "]\n";
}
output_file << "RECOMP_EXPORT recomp_func_t* imported_funcs[" << num_imports << "] = {};\n";
output_file << "RECOMP_EXPORT recomp_func_t* imported_funcs[" << std::max(size_t{1}, num_imports) << "] = {0};\n";
output_file << "\n";
// Use reloc list to write reference symbol function pointer array and defines (i.e. `#define func_80102468 reference_symbol_funcs[0]`)
output_file << "// Array of pointers to functions from the original ROM with defines to alias their names.\n";
std::unordered_set<std::string> written_reference_symbols{};
size_t num_reference_symbols = 0;
for (const auto& section : mod_context.sections) {
for (const auto& reloc : section.relocs) {
if (reloc.type == N64Recomp::RelocType::R_MIPS_26 && reloc.reference_symbol && mod_context.is_regular_reference_section(reloc.target_section)) {
const auto& sym = mod_context.get_reference_symbol(reloc.target_section, reloc.symbol_index);
output_file << "#define " << sym.name << " reference_symbol_funcs[" << num_reference_symbols << "]\n";
// Prevent writing multiple of the same define. This means there are duplicate symbols in the array if a function is called more than once,
// but only the first of each set of duplicates is referenced. This is acceptable, since offline mod recompilation is mainly meant for debug purposes.
if (!written_reference_symbols.contains(sym.name)) {
output_file << "#define " << sym.name << " reference_symbol_funcs[" << num_reference_symbols << "]\n";
written_reference_symbols.emplace(sym.name);
}
num_reference_symbols++;
}
}
}
output_file << "RECOMP_EXPORT recomp_func_t* reference_symbol_funcs[" << num_reference_symbols << "] = {};\n\n";
// C doesn't allow 0-sized arrays, so always add at least one member to all arrays. The actual size will be pulled from the mod symbols.
output_file << "RECOMP_EXPORT recomp_func_t* reference_symbol_funcs[" << std::max(size_t{1},num_reference_symbols) << "] = {0};\n\n";
// Write provided event array (maps internal event indices to global ones).
output_file << "// Mapping of internal event indices to global ones.\n";
output_file << "RECOMP_EXPORT uint32_t event_indices[" << mod_context.event_symbols.size() <<"] = {};\n\n";
output_file << "// Base global event index for this mod's events.\n";
output_file << "RECOMP_EXPORT uint32_t base_event_index;\n\n";
// Write the event trigger function pointer.
output_file << "// Pointer to the runtime function for triggering events.\n";
output_file << "RECOMP_EXPORT void (*recomp_trigger_event)(uint8_t* rdram, recomp_context* ctx, uint32_t) = NULL;\n\n";
// Write the get_function pointer.
// Write the get_function pointer.
output_file << "// Pointer to the runtime function for looking up functions from vram address.\n";
output_file << "RECOMP_EXPORT recomp_func_t* (*get_function)(int32_t vram) = NULL;\n\n";
// Write the cop0_status_write pointer.
output_file << "// Pointer to the runtime function for performing a cop0 status register write.\n";
output_file << "RECOMP_EXPORT void (*cop0_status_write)(recomp_context* ctx, gpr value) = NULL;\n\n";
// Write the cop0_status_read pointer.
output_file << "// Pointer to the runtime function for performing a cop0 status register read.\n";
output_file << "RECOMP_EXPORT gpr (*cop0_status_read)(recomp_context* ctx) = NULL;\n\n";
// Write the switch_error pointer.
output_file << "// Pointer to the runtime function for reporting switch case errors.\n";
output_file << "RECOMP_EXPORT void (*switch_error)(const char* func, uint32_t vram, uint32_t jtbl) = NULL;\n\n";
// Write the do_break pointer.
output_file << "// Pointer to the runtime function for handling the break instruction.\n";
output_file << "RECOMP_EXPORT void (*do_break)(uint32_t vram) = NULL;\n\n";
// Write the section_addresses pointer.
output_file << "// Pointer to the runtime's array of loaded section addresses for the base ROM.\n";
output_file << "RECOMP_EXPORT int32_t* reference_section_addresses = NULL;\n\n";
@@ -179,12 +202,31 @@ int main(int argc, const char** argv) {
// Write the local section addresses pointer array.
size_t num_sections = mod_context.sections.size();
output_file << "// Array of this mod's loaded section addresses.\n";
output_file << "RECOMP_EXPORT int32_t section_addresses[" << num_sections << "] = {};\n\n";
output_file << "RECOMP_EXPORT int32_t section_addresses[" << std::max(size_t{1}, num_sections) << "] = {0};\n\n";
// Create a set of the export indices to avoid renaming them.
std::unordered_set<size_t> export_indices{mod_context.exported_funcs.begin(), mod_context.exported_funcs.end()};
// Name all the functions in a first pass so function calls emitted in the second are correct. Also emit function prototypes.
output_file << "// Function prototypes.\n";
for (size_t func_index = 0; func_index < mod_context.functions.size(); func_index++) {
auto& func = mod_context.functions[func_index];
func.name = "mod_func_" + std::to_string(func_index);
N64Recomp::recompile_function(mod_context, func, output_file, static_funcs_by_section, true);
// Don't rename exports since they already have a name from the mod symbol file.
if (!export_indices.contains(func_index)) {
func.name = "mod_func_" + std::to_string(func_index);
}
output_file << "RECOMP_FUNC void " << func.name << "(uint8_t* rdram, recomp_context* ctx);\n";
}
output_file << "\n";
// Perform a second pass for recompiling all the functions.
for (size_t func_index = 0; func_index < mod_context.functions.size(); func_index++) {
if (!N64Recomp::recompile_function(mod_context, func_index, output_file, static_funcs_by_section, true)) {
output_file.close();
std::error_code ec;
std::filesystem::remove(output_file_path, ec);
return EXIT_FAILURE;
}
}
return EXIT_SUCCESS;


@@ -29,7 +29,7 @@ For relocatable overlays, the tool will modify supported instructions possessing
Support for relocations for TLB mapping is coming in the future, which will add the ability to provide a list of MIPS32 relocations so that the runtime can relocate them on load. Combining this with the functionality used for relocatable overlays should allow running most TLB mapped code without incurring a performance penalty on every RAM access.
## How to Use
The recompiler is configured by providing a toml file in order to configure the recompiler behavior, which is the only argument provided to the recompiler. The toml is where you specify input and output file paths, as well as optionally stub out specific functions, skip recompilation of specific functions, and patch single instructions in the target binary. There is also planned functionality to be able to emit hooks in the recompiler output by adding them to the toml (the `[[patches.func]]` and `[[patches.hook]]` sections of the linked toml below), but this is currently unimplemented. Documentation on every option that the recompiler provides is not currently available, but an example toml can be found in the Zelda 64: Recompiled project [here](https://github.com/Mr-Wiseguy/Zelda64Recomp/blob/dev/us.rev1.toml).
The recompiler is configured by providing a toml file in order to configure the recompiler behavior, which is the first argument provided to the recompiler. The toml is where you specify input and output file paths, as well as optionally stub out specific functions, skip recompilation of specific functions, and patch single instructions in the target binary. There is also planned functionality to be able to emit hooks in the recompiler output by adding them to the toml (the `[[patches.func]]` and `[[patches.hook]]` sections of the linked toml below), but this is currently unimplemented. Documentation on every option that the recompiler provides is not currently available, but an example toml can be found in the Zelda 64: Recompiled project [here](https://github.com/Mr-Wiseguy/Zelda64Recomp/blob/dev/us.rev1.toml).
Currently, the only way to provide the required metadata is by passing an elf file to this tool. The easiest way to get such an elf is to set up a disassembly or decompilation of the target binary, but there will be support for providing the metadata via a custom format to bypass the need to do so in the future.


@@ -149,7 +149,7 @@ std::string_view c0_reg_write_action(int cop0_reg) {
case Cop0Reg::RSP_COP0_SP_DRAM_ADDR:
return "SET_DMA_DRAM";
case Cop0Reg::RSP_COP0_SP_MEM_ADDR:
return "SET_DMA_DMEM";
return "SET_DMA_MEM";
case Cop0Reg::RSP_COP0_SP_RD_LEN:
return "DO_DMA_READ";
case Cop0Reg::RSP_COP0_SP_WR_LEN:
@@ -161,6 +161,10 @@ std::string_view c0_reg_write_action(int cop0_reg) {
}
bool is_c0_reg_write_dma_read(int cop0_reg) {
return static_cast<Cop0Reg>(cop0_reg) == Cop0Reg::RSP_COP0_SP_RD_LEN;
}
std::optional<int> get_rsp_element(const rabbitizer::InstructionRsp& instr) {
if (instr.hasOperand(rabbitizer::OperandType::rsp_vt_elementhigh)) {
return instr.GetRsp_elementhigh();
@@ -193,7 +197,32 @@ BranchTargets get_branch_targets(const std::vector<rabbitizer::InstructionRsp>&
return ret;
}
bool process_instruction(size_t instr_index, const std::vector<rabbitizer::InstructionRsp>& instructions, std::ofstream& output_file, const BranchTargets& branch_targets, const std::unordered_set<uint32_t>& unsupported_instructions, bool indent, bool in_delay_slot) {
struct ResumeTargets {
std::unordered_set<uint32_t> non_delay_targets;
std::unordered_set<uint32_t> delay_targets;
};
void get_overlay_swap_resume_targets(const std::vector<rabbitizer::InstructionRsp>& instrs, ResumeTargets& targets) {
bool is_delay_slot = false;
for (const auto& instr : instrs) {
InstrId instr_id = instr.getUniqueId();
int rd = (int)instr.GetO32_rd();
if (instr_id == InstrId::rsp_mtc0 && is_c0_reg_write_dma_read(rd)) {
uint32_t vram = instr.getVram();
targets.non_delay_targets.insert(vram);
if (is_delay_slot) {
targets.delay_targets.insert(vram);
}
}
is_delay_slot = instr.hasDelaySlot();
}
}
bool process_instruction(size_t instr_index, const std::vector<rabbitizer::InstructionRsp>& instructions, std::ofstream& output_file, const BranchTargets& branch_targets, const std::unordered_set<uint32_t>& unsupported_instructions, const ResumeTargets& resume_targets, bool has_overlays, bool indent, bool in_delay_slot) {
const auto& instr = instructions[instr_index];
uint32_t instr_vram = instr.getVram();
@@ -236,7 +265,7 @@ bool process_instruction(size_t instr_index, const std::vector<rabbitizer::Instr
auto print_unconditional_branch = [&]<typename... Ts>(fmt::format_string<Ts...> fmt_str, Ts&& ...args) {
if (instr_index < instructions.size() - 1) {
uint32_t next_vram = instr_vram + 4;
process_instruction(instr_index + 1, instructions, output_file, branch_targets, unsupported_instructions, false, true);
process_instruction(instr_index + 1, instructions, output_file, branch_targets, unsupported_instructions, resume_targets, has_overlays, false, true);
}
print_indent();
fmt::print(output_file, fmt_str, std::forward<Ts>(args)...);
@@ -247,7 +276,7 @@ bool process_instruction(size_t instr_index, const std::vector<rabbitizer::Instr
fmt::print(output_file, "{{\n ");
if (instr_index < instructions.size() - 1) {
uint32_t next_vram = instr_vram + 4;
process_instruction(instr_index + 1, instructions, output_file, branch_targets, unsupported_instructions, true, true);
process_instruction(instr_index + 1, instructions, output_file, branch_targets, unsupported_instructions, resume_targets, has_overlays, true, true);
}
fmt::print(output_file, " ");
fmt::print(output_file, fmt_str, std::forward<Ts>(args)...);
@@ -508,8 +537,18 @@ bool process_instruction(size_t instr_index, const std::vector<rabbitizer::Instr
case InstrId::rsp_mtc0:
{
std::string_view write_action = c0_reg_write_action(rd);
if (has_overlays && is_c0_reg_write_dma_read(rd)) {
// DMA read, do overlay swap if reading into IMEM
fmt::print(output_file,
" if (dma_mem_address & 0x1000) {{\n"
" ctx->resume_address = 0x{:04X};\n"
" ctx->resume_delay = {};\n"
" goto do_overlay_swap;\n"
" }}\n",
instr_vram, in_delay_slot ? "true" : "false");
}
if (!write_action.empty()) {
print_line("{}({}{})", write_action, ctx_gpr_prefix(rt), rt); \
print_line("{}({}{})", write_action, ctx_gpr_prefix(rt), rt);
}
break;
}
@@ -520,6 +559,17 @@ bool process_instruction(size_t instr_index, const std::vector<rabbitizer::Instr
}
}
// Write overlay swap resume labels
if (in_delay_slot) {
if (resume_targets.delay_targets.contains(instr_vram)) {
fmt::print(output_file, "R_{:04X}_delay:\n", instr_vram);
}
} else {
if (resume_targets.non_delay_targets.contains(instr_vram)) {
fmt::print(output_file, "R_{:04X}:\n", instr_vram);
}
}
return true;
}
@@ -538,10 +588,24 @@ void write_indirect_jumps(std::ofstream& output_file, const BranchTargets& branc
" \" r16 = %08X r17 = %08X r18 = %08X r19 = %08X r20 = %08X r21 = %08X r22 = %08X r23 = %08X\\n\"\n"
" \" r24 = %08X r25 = %08X r26 = %08X r27 = %08X r28 = %08X r29 = %08X r30 = %08X r31 = %08X\\n\",\n"
" 0, r1, r2, r3, r4, r5, r6, r7, r8, r9, r10, r11, r12, r13, r14, r15, r16,\n"
" r17, r18, r19, r20, r21, r22, r23, r24, r25, r26, r27, r29, r30, r31);\n"
" r17, r18, r19, r20, r21, r22, r23, r24, r25, r26, r27, r28, r29, r30, r31);\n"
" return RspExitReason::UnhandledJumpTarget;\n", output_function_name);
}
void write_overlay_swap_return(std::ofstream& output_file) {
fmt::print(output_file,
"do_overlay_swap:\n"
" ctx->r1 = r1; ctx->r2 = r2; ctx->r3 = r3; ctx->r4 = r4; ctx->r5 = r5; ctx->r6 = r6; ctx->r7 = r7;\n"
" ctx->r8 = r8; ctx->r9 = r9; ctx->r10 = r10; ctx->r11 = r11; ctx->r12 = r12; ctx->r13 = r13; ctx->r14 = r14; ctx->r15 = r15;\n"
" ctx->r16 = r16; ctx->r17 = r17; ctx->r18 = r18; ctx->r19 = r19; ctx->r20 = r20; ctx->r21 = r21; ctx->r22 = r22; ctx->r23 = r23;\n"
" ctx->r24 = r24; ctx->r25 = r25; ctx->r26 = r26; ctx->r27 = r27; ctx->r28 = r28; ctx->r29 = r29; ctx->r30 = r30; ctx->r31 = r31;\n"
" ctx->dma_mem_address = dma_mem_address;\n"
" ctx->dma_dram_address = dma_dram_address;\n"
" ctx->jump_target = jump_target;\n"
" ctx->rsp = rsp;\n"
" return RspExitReason::SwapOverlay;\n");
}
#ifdef _MSC_VER
inline uint32_t byteswap(uint32_t val) {
return _byteswap_ulong(val);
@@ -552,6 +616,16 @@ constexpr uint32_t byteswap(uint32_t val) {
}
#endif
struct RSPRecompilerOverlayConfig {
size_t offset;
size_t size;
};
struct RSPRecompilerOverlaySlotConfig {
size_t text_address;
std::vector<RSPRecompilerOverlayConfig> overlays;
};
struct RSPRecompilerConfig {
size_t text_offset;
size_t text_size;
@@ -561,6 +635,7 @@ struct RSPRecompilerConfig {
std::string output_function_name;
std::vector<uint32_t> extra_indirect_branch_targets;
std::unordered_set<uint32_t> unsupported_instructions;
std::vector<RSPRecompilerOverlaySlotConfig> overlay_slots;
};
std::filesystem::path concat_if_not_empty(const std::filesystem::path& parent, const std::filesystem::path& child) {
@@ -666,6 +741,76 @@ bool read_config(const std::filesystem::path& config_path, RSPRecompilerConfig&
const toml::array* unsupported_instructions_array = unsupported_instructions_data.as_array();
ret.unsupported_instructions = toml_to_set<uint32_t>(unsupported_instructions_array);
}
// Overlay slots (optional)
const toml::node_view overlay_slots = config_data["overlay_slots"];
if (overlay_slots.is_array()) {
const toml::array* overlay_slots_array = overlay_slots.as_array();
int slot_idx = 0;
overlay_slots_array->for_each([&](toml::table slot){
RSPRecompilerOverlaySlotConfig slot_config;
std::optional<uint32_t> text_address = slot["text_address"].value<uint32_t>();
if (text_address.has_value()) {
slot_config.text_address = text_address.value();
}
else {
throw toml::parse_error(
fmt::format("Missing text_address in config file at overlay slot {}", slot_idx).c_str(),
config_data.source());
}
// Overlays per slot
const toml::node_view overlays = slot["overlays"];
if (overlays.is_array()) {
const toml::array* overlay_array = overlays.as_array();
int overlay_idx = 0;
overlay_array->for_each([&](toml::table overlay){
RSPRecompilerOverlayConfig overlay_config;
std::optional<uint32_t> offset = overlay["offset"].value<uint32_t>();
if (offset.has_value()) {
overlay_config.offset = offset.value();
}
else {
throw toml::parse_error(
fmt::format("Missing offset in config file at overlay slot {} overlay {}", slot_idx, overlay_idx).c_str(),
config_data.source());
}
std::optional<uint32_t> size = overlay["size"].value<uint32_t>();
if (size.has_value()) {
overlay_config.size = size.value();
if ((size.value() % sizeof(uint32_t)) != 0) {
throw toml::parse_error(
fmt::format("Overlay size must be a multiple of {} in config file at overlay slot {} overlay {}", sizeof(uint32_t), slot_idx, overlay_idx).c_str(),
config_data.source());
}
}
else {
throw toml::parse_error(
fmt::format("Missing size in config file at overlay slot {} overlay {}", slot_idx, overlay_idx).c_str(),
config_data.source());
}
slot_config.overlays.push_back(overlay_config);
overlay_idx++;
});
}
else {
throw toml::parse_error(
fmt::format("Missing overlays in config file at overlay slot {}", slot_idx).c_str(),
config_data.source());
}
ret.overlay_slots.push_back(slot_config);
slot_idx++;
});
}
}
catch (const toml::parse_error& err) {
std::cerr << "Syntax error parsing toml: " << *err.source().path << " (" << err.source().begin << "):\n" << err.description() << std::endl;
@@ -676,6 +821,269 @@ bool read_config(const std::filesystem::path& config_path, RSPRecompilerConfig&
return true;
}
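For reference, a hypothetical config snippet matching the structure the parser above expects. The key names (overlay_slots, text_address, overlays, offset, size) come from the parser; the addresses and sizes here are made up:

# Hypothetical example; values are placeholders.
[[overlay_slots]]
text_address = 0x04001080   # IMEM address this slot's code is DMA'd into

[[overlay_slots.overlays]]
offset = 0x1000             # byte offset of the overlay within the ucode
size = 0x200                # must be a multiple of 4

[[overlay_slots.overlays]]
offset = 0x1200
size = 0x180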
struct FunctionPermutation {
std::vector<rabbitizer::InstructionRsp> instrs;
std::vector<uint32_t> permutation;
};
struct Permutation {
std::vector<uint32_t> instr_words;
std::vector<uint32_t> permutation;
};
struct Overlay {
std::vector<uint32_t> instr_words;
};
struct OverlaySlot {
uint32_t offset;
std::vector<Overlay> overlays;
};
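// Advances a mixed-radix counter: digit i ranges over [0, option_lengths[i]).
// Returns false once every combination has been visited.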
bool next_permutation(const std::vector<uint32_t>& option_lengths, std::vector<uint32_t>& current) {
current[current.size() - 1] += 1;
size_t i = current.size() - 1;
while (current[i] == option_lengths[i]) {
current[i] = 0;
if (i == 0) {
return false;
}
current[i - 1] += 1;
i--;
}
return true;
}
void permute(const std::vector<uint32_t>& base_words, const std::vector<OverlaySlot>& overlay_slots, std::vector<Permutation>& permutations) {
auto current = std::vector<uint32_t>(overlay_slots.size(), 0);
auto slot_options = std::vector<uint32_t>(overlay_slots.size(), 0);
for (size_t i = 0; i < overlay_slots.size(); i++) {
slot_options[i] = overlay_slots[i].overlays.size();
}
do {
Permutation permutation = {
.instr_words = std::vector<uint32_t>(base_words),
.permutation = std::vector<uint32_t>(current)
};
for (size_t i = 0; i < overlay_slots.size(); i++) {
const OverlaySlot &slot = overlay_slots[i];
const Overlay &overlay = slot.overlays[current[i]];
uint32_t word_offset = slot.offset / sizeof(uint32_t);
size_t size_needed = word_offset + overlay.instr_words.size();
if (permutation.instr_words.size() < size_needed) {
permutation.instr_words.resize(size_needed); // resize (not reserve) so the std::copy below writes into valid elements
}
std::copy(overlay.instr_words.begin(), overlay.instr_words.end(), permutation.instr_words.data() + word_offset);
}
permutations.push_back(permutation);
} while (next_permutation(slot_options, current));
}
std::string make_permutation_string(const std::vector<uint32_t> permutation) {
std::string str = "";
for (uint32_t opt : permutation) {
str += std::to_string(opt);
}
return str;
}
void create_overlay_swap_function(const std::string& function_name, std::ofstream& output_file, const std::vector<FunctionPermutation>& permutations, const RSPRecompilerConfig& config) {
// Includes and permutation protos
fmt::print(output_file,
"#include <map>\n"
"#include <vector>\n\n"
"using RspUcodePermutationFunc = RspExitReason(uint8_t* rdram, RspContext* ctx);\n\n"
"RspExitReason {}(uint8_t* rdram, RspContext* ctx);\n",
config.output_function_name + "_initial");
for (const auto& permutation : permutations) {
fmt::print(output_file, "RspExitReason {}(uint8_t* rdram, RspContext* ctx);\n",
config.output_function_name + make_permutation_string(permutation.permutation));
}
fmt::print(output_file, "\n");
// IMEM -> slot index mapping
fmt::print(output_file,
"static const std::map<uint32_t, uint32_t> imemToSlot = {{\n");
for (size_t i = 0; i < config.overlay_slots.size(); i++) {
const RSPRecompilerOverlaySlotConfig& slot = config.overlay_slots[i];
uint32_t imemAddress = slot.text_address & rsp_mem_mask;
fmt::print(output_file, " {{ 0x{:04X}, {} }},\n",
imemAddress, i);
}
fmt::print(output_file, "}};\n\n");
// ucode offset -> overlay index mapping (per slot)
fmt::print(output_file,
"static const std::vector<std::map<uint32_t, uint32_t>> offsetToOverlay = {{\n");
for (const auto& slot : config.overlay_slots) {
fmt::print(output_file, " {{\n");
for (size_t i = 0; i < slot.overlays.size(); i++) {
const RSPRecompilerOverlayConfig& overlay = slot.overlays[i];
fmt::print(output_file, " {{ 0x{:04X}, {} }},\n",
overlay.offset, i);
}
fmt::print(output_file, " }},\n");
}
fmt::print(output_file, "}};\n\n");
// Permutation function pointers
fmt::print(output_file,
"static RspUcodePermutationFunc* permutations[] = {{\n");
for (const auto& permutation : permutations) {
fmt::print(output_file, " {},\n",
config.output_function_name + make_permutation_string(permutation.permutation));
}
fmt::print(output_file, "}};\n\n");
// Main function
fmt::print(output_file,
"RspExitReason {}(uint8_t* rdram, uint32_t ucode_addr) {{\n"
" RspContext ctx{{}};\n",
config.output_function_name);
std::string slots_init_str = "";
for (size_t i = 0; i < config.overlay_slots.size(); i++) {
if (i > 0) {
slots_init_str += ", ";
}
slots_init_str += "0";
}
fmt::print(output_file, " uint32_t slots[] = {{{}}};\n\n",
slots_init_str);
fmt::print(output_file, " RspExitReason exitReason = {}(rdram, &ctx);\n\n",
config.output_function_name + "_initial");
fmt::print(output_file, "");
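// The permutation index is a mixed-radix number: each slot's current overlay
// choice is weighted by the product of the overlay counts of all later slots.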
std::string perm_index_str = "";
for (size_t i = 0; i < config.overlay_slots.size(); i++) {
if (i > 0) {
perm_index_str += " + ";
}
uint32_t multiplier = 1;
for (size_t k = i + 1; k < config.overlay_slots.size(); k++) {
multiplier *= config.overlay_slots[k].overlays.size();
}
perm_index_str += fmt::format("slots[{}] * {}", i, multiplier);
}
fmt::print(output_file,
" while (exitReason == RspExitReason::SwapOverlay) {{\n"
" uint32_t slot = imemToSlot.at(ctx.dma_mem_address);\n"
" uint32_t overlay = offsetToOverlay.at(slot).at(ctx.dma_dram_address - ucode_addr);\n"
" slots[slot] = overlay;\n"
"\n"
" RspUcodePermutationFunc* permutationFunc = permutations[{}];\n"
" exitReason = permutationFunc(rdram, &ctx);\n"
" }}\n\n"
" return exitReason;\n"
"}}\n\n",
perm_index_str);
}
void create_function(const std::string& function_name, std::ofstream& output_file, const std::vector<rabbitizer::InstructionRsp>& instrs, const RSPRecompilerConfig& config, const ResumeTargets& resume_targets, bool is_permutation, bool is_initial) {
// Collect indirect jump targets (return addresses for linked jumps)
BranchTargets branch_targets = get_branch_targets(instrs);
// Add any additional indirect branch targets that may not be found directly in the code (e.g. from a jump table)
for (uint32_t target : config.extra_indirect_branch_targets) {
branch_targets.indirect_targets.insert(target);
}
// Write function
if (is_permutation) {
fmt::print(output_file,
"RspExitReason {}(uint8_t* rdram, RspContext* ctx) {{\n"
" uint32_t r1 = ctx->r1, r2 = ctx->r2, r3 = ctx->r3, r4 = ctx->r4, r5 = ctx->r5, r6 = ctx->r6, r7 = ctx->r7;\n"
" uint32_t r8 = ctx->r8, r9 = ctx->r9, r10 = ctx->r10, r11 = ctx->r11, r12 = ctx->r12, r13 = ctx->r13, r14 = ctx->r14, r15 = ctx->r15;\n"
" uint32_t r16 = ctx->r16, r17 = ctx->r17, r18 = ctx->r18, r19 = ctx->r19, r20 = ctx->r20, r21 = ctx->r21, r22 = ctx->r22, r23 = ctx->r23;\n"
" uint32_t r24 = ctx->r24, r25 = ctx->r25, r26 = ctx->r26, r27 = ctx->r27, r28 = ctx->r28, r29 = ctx->r29, r30 = ctx->r30, r31 = ctx->r31;\n"
" uint32_t dma_mem_address = ctx->dma_mem_address, dma_dram_address = ctx->dma_dram_address, jump_target = ctx->jump_target;\n"
" const char * debug_file = NULL; int debug_line = 0;\n"
" RSP rsp = ctx->rsp;\n", function_name);
// Write jumps to resume targets
if (!is_initial) {
fmt::print(output_file,
" if (ctx->resume_delay) {{\n"
" switch (ctx->resume_address) {{\n");
for (uint32_t address : resume_targets.delay_targets) {
fmt::print(output_file, " case 0x{0:04X}: goto R_{0:04X}_delay;\n",
address);
}
fmt::print(output_file,
" }}\n"
" }} else {{\n"
" switch (ctx->resume_address) {{\n");
for (uint32_t address : resume_targets.non_delay_targets) {
fmt::print(output_file, " case 0x{0:04X}: goto R_{0:04X};\n",
address);
}
fmt::print(output_file,
" }}\n"
" }}\n"
" printf(\"Unhandled resume target 0x%04X (delay slot: %d) in microcode {}\\n\", ctx->resume_address, ctx->resume_delay);\n"
" return RspExitReason::UnhandledResumeTarget;\n",
config.output_function_name);
}
fmt::print(output_file, " r1 = 0xFC0;\n");
} else {
fmt::print(output_file,
"RspExitReason {}(uint8_t* rdram, [[maybe_unused]] uint32_t ucode_addr) {{\n"
" uint32_t r1 = 0, r2 = 0, r3 = 0, r4 = 0, r5 = 0, r6 = 0, r7 = 0;\n"
" uint32_t r8 = 0, r9 = 0, r10 = 0, r11 = 0, r12 = 0, r13 = 0, r14 = 0, r15 = 0;\n"
" uint32_t r16 = 0, r17 = 0, r18 = 0, r19 = 0, r20 = 0, r21 = 0, r22 = 0, r23 = 0;\n"
" uint32_t r24 = 0, r25 = 0, r26 = 0, r27 = 0, r28 = 0, r29 = 0, r30 = 0, r31 = 0;\n"
" uint32_t dma_mem_address = 0, dma_dram_address = 0, jump_target = 0;\n"
" const char * debug_file = NULL; int debug_line = 0;\n"
" RSP rsp{{}};\n"
" r1 = 0xFC0;\n", function_name);
}
// Write each instruction
for (size_t instr_index = 0; instr_index < instrs.size(); instr_index++) {
process_instruction(instr_index, instrs, output_file, branch_targets, config.unsupported_instructions, resume_targets, is_permutation, false, false);
}
// Terminate instruction code with a return to indicate that the microcode has run past its end
fmt::print(output_file, " return RspExitReason::ImemOverrun;\n");
// Write the section containing the indirect jump table
write_indirect_jumps(output_file, branch_targets, config.output_function_name);
// Write routine for returning for an overlay swap
if (is_permutation) {
write_overlay_swap_return(output_file);
}
// End the file
fmt::print(output_file, "}}\n");
}
int main(int argc, const char** argv) {
if (argc != 2) {
fmt::print("Usage: {} [config file]\n", argv[0]);
@@ -689,6 +1097,7 @@ int main(int argc, const char** argv) {
}
std::vector<uint32_t> instr_words{};
std::vector<OverlaySlot> overlay_slots{};
instr_words.resize(config.text_size / sizeof(uint32_t));
{
std::ifstream rom_file{ config.rom_file_path, std::ios_base::binary };
@@ -700,6 +1109,29 @@ int main(int argc, const char** argv) {
rom_file.seekg(config.text_offset);
rom_file.read(reinterpret_cast<char*>(instr_words.data()), config.text_size);
for (const RSPRecompilerOverlaySlotConfig &slot_config : config.overlay_slots) {
OverlaySlot slot{};
slot.offset = (slot_config.text_address - config.text_address) & rsp_mem_mask;
for (const RSPRecompilerOverlayConfig &overlay_config : slot_config.overlays) {
Overlay overlay{};
overlay.instr_words.resize(overlay_config.size / sizeof(uint32_t));
rom_file.seekg(config.text_offset + overlay_config.offset);
rom_file.read(reinterpret_cast<char*>(overlay.instr_words.data()), overlay_config.size);
slot.overlays.push_back(overlay);
}
overlay_slots.push_back(slot);
}
}
// Create overlay permutations
std::vector<Permutation> permutations{};
if (!overlay_slots.empty()) {
permute(instr_words, overlay_slots, permutations);
}
// Disable appropriate pseudo instructions
@@ -717,12 +1149,27 @@ int main(int argc, const char** argv) {
vram += instr_size;
}
// Collect indirect jump targets (return addresses for linked jumps)
BranchTargets branch_targets = get_branch_targets(instrs);
std::vector<FunctionPermutation> func_permutations{};
func_permutations.reserve(permutations.size());
for (const Permutation& permutation : permutations) {
FunctionPermutation func = {
.permutation = std::vector<uint32_t>(permutation.permutation)
};
// Add any additional indirect branch targets that may not be found directly in the code (e.g. from a jump table)
for (uint32_t target : config.extra_indirect_branch_targets) {
branch_targets.indirect_targets.insert(target);
func.instrs.reserve(permutation.instr_words.size());
uint32_t vram = config.text_address & rsp_mem_mask;
for (uint32_t instr_word : permutation.instr_words) {
const rabbitizer::InstructionRsp& instr = func.instrs.emplace_back(byteswap(instr_word), vram);
vram += instr_size;
}
func_permutations.emplace_back(func);
}
// Determine all possible overlay swap resume targets
ResumeTargets resume_targets{};
for (const FunctionPermutation& permutation : func_permutations) {
get_overlay_swap_resume_targets(permutation.instrs, resume_targets);
}
// Open output file and write beginning
@@ -730,28 +1177,20 @@ int main(int argc, const char** argv) {
std::ofstream output_file(config.output_file_path);
fmt::print(output_file,
"#include \"librecomp/rsp.hpp\"\n"
"#include \"librecomp/rsp_vu_impl.hpp\"\n"
"RspExitReason {}(uint8_t* rdram) {{\n"
" uint32_t r1 = 0, r2 = 0, r3 = 0, r4 = 0, r5 = 0, r6 = 0, r7 = 0;\n"
" uint32_t r8 = 0, r9 = 0, r10 = 0, r11 = 0, r12 = 0, r13 = 0, r14 = 0, r15 = 0;\n"
" uint32_t r16 = 0, r17 = 0, r18 = 0, r19 = 0, r20 = 0, r21 = 0, r22 = 0, r23 = 0;\n"
" uint32_t r24 = 0, r25 = 0, r26 = 0, r27 = 0, r28 = 0, r29 = 0, r30 = 0, r31 = 0;\n"
" uint32_t dma_dmem_address = 0, dma_dram_address = 0, jump_target = 0;\n"
" const char * debug_file = NULL; int debug_line = 0;\n"
" RSP rsp{{}};\n"
" r1 = 0xFC0;\n", config.output_function_name);
// Write each instruction
for (size_t instr_index = 0; instr_index < instrs.size(); instr_index++) {
process_instruction(instr_index, instrs, output_file, branch_targets, config.unsupported_instructions, false, false);
"#include \"librecomp/rsp_vu_impl.hpp\"\n");
// Write function(s)
if (overlay_slots.empty()) {
create_function(config.output_function_name, output_file, instrs, config, resume_targets, false, false);
} else {
create_overlay_swap_function(config.output_function_name, output_file, func_permutations, config);
create_function(config.output_function_name + "_initial", output_file, instrs, config, ResumeTargets{}, true, true);
for (const auto& permutation : func_permutations) {
create_function(config.output_function_name + make_permutation_string(permutation.permutation),
output_file, permutation.instrs, config, resume_targets, true, false);
}
}
// Terminate instruction code with a return to indicate that the microcode has run past its end
fmt::print(output_file, " return RspExitReason::ImemOverrun;\n");
// Write the section containing the indirect jump table
write_indirect_jumps(output_file, branch_targets, config.output_function_name);
// End the file
fmt::print(output_file, "}}\n");
return 0;
}

File diff suppressed because it is too large.


@@ -1,56 +0,0 @@
#ifndef __GENERATOR_H__
#define __GENERATOR_H__
#include "n64recomp.h"
#include "operations.h"
namespace N64Recomp {
struct InstructionContext {
int rd;
int rs;
int rt;
int sa;
int fd;
int fs;
int ft;
int cop1_cs;
uint16_t imm16;
bool reloc_tag_as_reference;
RelocType reloc_type;
uint32_t reloc_section_index;
uint32_t reloc_target_section_offset;
};
class Generator {
public:
virtual void process_binary_op(std::ostream& output_file, const BinaryOp& op, const InstructionContext& ctx) const = 0;
virtual void process_unary_op(std::ostream& output_file, const UnaryOp& op, const InstructionContext& ctx) const = 0;
virtual void process_store_op(std::ostream& output_file, const StoreOp& op, const InstructionContext& ctx) const = 0;
virtual void emit_branch_condition(std::ostream& output_file, const ConditionalBranchOp& op, const InstructionContext& ctx) const = 0;
virtual void emit_branch_close(std::ostream& output_file) const = 0;
virtual void emit_check_fr(std::ostream& output_file, int fpr) const = 0;
virtual void emit_check_nan(std::ostream& output_file, int fpr, bool is_double) const = 0;
};
class CGenerator final : Generator {
public:
CGenerator() = default;
void process_binary_op(std::ostream& output_file, const BinaryOp& op, const InstructionContext& ctx) const final;
void process_unary_op(std::ostream& output_file, const UnaryOp& op, const InstructionContext& ctx) const final;
void process_store_op(std::ostream& output_file, const StoreOp& op, const InstructionContext& ctx) const final;
void emit_branch_condition(std::ostream& output_file, const ConditionalBranchOp& op, const InstructionContext& ctx) const final;
void emit_branch_close(std::ostream& output_file) const final;
void emit_check_fr(std::ostream& output_file, int fpr) const final;
void emit_check_nan(std::ostream& output_file, int fpr, bool is_double) const final;
private:
void get_operand_string(Operand operand, UnaryOpType operation, const InstructionContext& context, std::string& operand_string) const;
void get_binary_expr_string(BinaryOpType type, const BinaryOperands& operands, const InstructionContext& ctx, const std::string& output, std::string& expr_string) const;
void get_notation(BinaryOpType op_type, std::string& func_string, std::string& infix_string) const;
};
}
#endif

include/recomp.h (new file)

@@ -0,0 +1,475 @@
#ifndef __RECOMP_H__
#define __RECOMP_H__
#include <stdlib.h>
#include <stdint.h>
#include <math.h>
#include <fenv.h>
#include <assert.h>
// Compiler definition to disable inter-procedural optimization, allowing multiple functions to be in a single file without breaking interposition.
#if defined(_MSC_VER) && !defined(__clang__) && !defined(__INTEL_COMPILER)
// MSVC's __declspec(noinline) seems to disable inter-procedural optimization entirely, so it's all that's needed.
#define RECOMP_FUNC __declspec(noinline)
// Use MSVC's fenv_access pragma.
#define SET_FENV_ACCESS() _Pragma("fenv_access(on)")
#elif defined(__clang__)
// Clang has no dedicated IPO attribute, so we use a combination of other attributes to give the desired behavior.
// The inline keyword allows multiple definitions during linking, and extern forces clang to emit an externally visible definition.
// Weak forces Clang to not perform any IPO as the symbol can be interposed, which prevents actual inlining due to the inline keyword.
// Add noinline as well for good measure, which doesn't conflict with the inline keyword as they have different meanings.
#define RECOMP_FUNC extern inline __attribute__((weak,noinline))
// Use the standard STDC FENV_ACCESS pragma.
#define SET_FENV_ACCESS() _Pragma("STDC FENV_ACCESS ON")
#elif defined(__GNUC__) && !defined(__INTEL_COMPILER)
// Use GCC's attribute for disabling inter-procedural optimizations. Also enable the rounding-math compiler flag to disable
// constant folding so that arithmetic respects the floating point environment. This is needed because gcc doesn't implement
// any FENV_ACCESS pragma.
#define RECOMP_FUNC __attribute__((noipa, optimize("rounding-math")))
// There's no FENV_ACCESS pragma in gcc, so this can be empty.
#define SET_FENV_ACCESS()
#else
#error "No RECOMP_FUNC definition for this compiler"
#endif
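// Illustrative sketch, not part of the original header: how the two macros above
// are meant to compose in emitted code ("example_func" is a made-up name).
// RECOMP_FUNC keeps the function interposable and blocks IPO across it, while
// SET_FENV_ACCESS() tells MSVC/Clang that the body depends on the FP environment.
//   RECOMP_FUNC void example_func(uint8_t* rdram, recomp_context* ctx) {
//       SET_FENV_ACCESS();
//       /* ...recompiled instruction body... */
//   }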
// Implementation of 64-bit multiply and divide instructions
#if defined(__SIZEOF_INT128__)
static inline void DMULT(int64_t a, int64_t b, int64_t* lo64, int64_t* hi64) {
__int128 full128 = ((__int128)a) * ((__int128)b);
*hi64 = (int64_t)(full128 >> 64);
*lo64 = (int64_t)(full128 >> 0);
}
static inline void DMULTU(uint64_t a, uint64_t b, uint64_t* lo64, uint64_t* hi64) {
unsigned __int128 full128 = ((unsigned __int128)a) * ((unsigned __int128)b);
*hi64 = (uint64_t)(full128 >> 64);
*lo64 = (uint64_t)(full128 >> 0);
}
#elif defined(_MSC_VER)
#include <intrin.h>
#pragma intrinsic(_mul128)
#pragma intrinsic(_umul128)
static inline void DMULT(int64_t a, int64_t b, int64_t* lo64, int64_t* hi64) {
*lo64 = _mul128(a, b, hi64);
}
static inline void DMULTU(uint64_t a, uint64_t b, uint64_t* lo64, uint64_t* hi64) {
*lo64 = _umul128(a, b, hi64);
}
#else
#error "128-bit integer type not found"
#endif
static inline void DDIV(int64_t a, int64_t b, int64_t* quot, int64_t* rem) {
int overflow = ((uint64_t)a == 0x8000000000000000ull) && (b == -1ll);
*quot = overflow ? a : (a / b);
*rem = overflow ? 0 : (a % b);
}
static inline void DDIVU(uint64_t a, uint64_t b, uint64_t* quot, uint64_t* rem) {
*quot = a / b;
*rem = a % b;
}
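// Illustrative sketch, not part of the original header ("example_muldiv_usage"
// is a made-up name): DMULT/DMULTU mirror the MIPS dmult/dmultu HI/LO split,
// and DDIV's overflow guard yields the hardware result for INT64_MIN / -1
// instead of invoking undefined behavior on the host.
static inline void example_muldiv_usage(void) {
    int64_t hi, lo, quot, rem;
    DMULT(INT64_C(1) << 32, INT64_C(1) << 32, &lo, &hi);
    assert(hi == 1 && lo == 0); // 2^32 * 2^32 == 2^64
    DDIV(INT64_MIN, -1, &quot, &rem);
    assert(quot == INT64_MIN && rem == 0); // matches MIPS, no host trap
}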
typedef uint64_t gpr;
#define SIGNED(val) \
((int64_t)(val))
#define ADD32(a, b) \
((gpr)(int32_t)((a) + (b)))
#define SUB32(a, b) \
((gpr)(int32_t)((a) - (b)))
#define MEM_W(offset, reg) \
(*(int32_t*)(rdram + ((((reg) + (offset))) - 0xFFFFFFFF80000000)))
#define MEM_H(offset, reg) \
(*(int16_t*)(rdram + ((((reg) + (offset)) ^ 2) - 0xFFFFFFFF80000000)))
#define MEM_B(offset, reg) \
(*(int8_t*)(rdram + ((((reg) + (offset)) ^ 3) - 0xFFFFFFFF80000000)))
#define MEM_HU(offset, reg) \
(*(uint16_t*)(rdram + ((((reg) + (offset)) ^ 2) - 0xFFFFFFFF80000000)))
#define MEM_BU(offset, reg) \
(*(uint8_t*)(rdram + ((((reg) + (offset)) ^ 3) - 0xFFFFFFFF80000000)))
#define SD(val, offset, reg) { \
*(uint32_t*)(rdram + ((((reg) + (offset) + 4)) - 0xFFFFFFFF80000000)) = (uint32_t)((gpr)(val) >> 0); \
*(uint32_t*)(rdram + ((((reg) + (offset) + 0)) - 0xFFFFFFFF80000000)) = (uint32_t)((gpr)(val) >> 32); \
}
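// Illustrative worked example, not part of the original header. These accessors
// assume RDRAM bytes have been pre-swapped so each 32-bit word sits in host
// (little-endian) order; XOR-ing the low address bits (^2 for halfwords, ^3 for
// bytes) then reproduces big-endian byte order. For the big-endian word
// 0xAABBCCDD at guest address 0x80000000 (reg = 0xFFFFFFFF80000000 after sign
// extension):
//   MEM_BU(0, reg) -> 0xAA   (address ^ 3 selects the host-side MSB)
//   MEM_BU(3, reg) -> 0xDD
//   MEM_HU(2, reg) -> 0xCCDD (address ^ 2 flips the halfword within the word)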
static inline uint64_t load_doubleword(uint8_t* rdram, gpr reg, gpr offset) {
uint64_t ret = 0;
uint64_t lo = (uint64_t)(uint32_t)MEM_W(reg, offset + 4);
uint64_t hi = (uint64_t)(uint32_t)MEM_W(reg, offset + 0);
ret = (lo << 0) | (hi << 32);
return ret;
}
#define LD(offset, reg) \
load_doubleword(rdram, offset, reg)
static inline gpr do_lwl(uint8_t* rdram, gpr initial_value, gpr offset, gpr reg) {
// Calculate the overall address
gpr address = (offset + reg);
// Load the aligned word
gpr word_address = address & ~0x3;
uint32_t loaded_value = MEM_W(0, word_address);
// Mask the existing value and shift the loaded value appropriately
gpr misalignment = address & 0x3;
gpr masked_value = initial_value & (gpr)(uint32_t)~(0xFFFFFFFFu << (misalignment * 8));
loaded_value <<= (misalignment * 8);
// Cast to int32_t to sign extend first
return (gpr)(int32_t)(masked_value | loaded_value);
}
static inline gpr do_lwr(uint8_t* rdram, gpr initial_value, gpr offset, gpr reg) {
// Calculate the overall address
gpr address = (offset + reg);
// Load the aligned word
gpr word_address = address & ~0x3;
uint32_t loaded_value = MEM_W(0, word_address);
// Mask the existing value and shift the loaded value appropriately
gpr misalignment = address & 0x3;
gpr masked_value = initial_value & (gpr)(uint32_t)~(0xFFFFFFFFu >> (24 - misalignment * 8));
loaded_value >>= (24 - misalignment * 8);
// Cast to int32_t to sign extend first
return (gpr)(int32_t)(masked_value | loaded_value);
}
static inline void do_swl(uint8_t* rdram, gpr offset, gpr reg, gpr val) {
// Calculate the overall address
gpr address = (offset + reg);
// Get the initial value of the aligned word
gpr word_address = address & ~0x3;
uint32_t initial_value = MEM_W(0, word_address);
// Mask the initial value and shift the input value appropriately
gpr misalignment = address & 0x3;
uint32_t masked_initial_value = initial_value & ~(0xFFFFFFFFu >> (misalignment * 8));
uint32_t shifted_input_value = ((uint32_t)val) >> (misalignment * 8);
MEM_W(0, word_address) = masked_initial_value | shifted_input_value;
}
static inline void do_swr(uint8_t* rdram, gpr offset, gpr reg, gpr val) {
// Calculate the overall address
gpr address = (offset + reg);
// Get the initial value of the aligned word
gpr word_address = address & ~0x3;
uint32_t initial_value = MEM_W(0, word_address);
// Mask the initial value and shift the input value appropriately
gpr misalignment = address & 0x3;
uint32_t masked_initial_value = initial_value & ~(0xFFFFFFFFu << (24 - misalignment * 8));
uint32_t shifted_input_value = ((uint32_t)val) << (24 - misalignment * 8);
MEM_W(0, word_address) = masked_initial_value | shifted_input_value;
}
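// Illustrative, not part of the original header: the canonical MIPS
// unaligned-load idiom the two helpers above emulate (register names arbitrary).
// For an address in $a0 with ($a0 & 3) == 1, the pair
//   lwl $t0, 0($a0)  ->  t0 = do_lwl(rdram, t0, 0, a0); // bytes a0..a0+2 into the top
//   lwr $t0, 3($a0)  ->  t0 = do_lwr(rdram, t0, 3, a0); // byte a0+3 into the bottom
// leaves $t0 holding the unaligned 32-bit word at a0, sign-extended to 64 bits.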
static inline gpr do_ldl(uint8_t* rdram, gpr initial_value, gpr offset, gpr reg) {
// Calculate the overall address
gpr address = (offset + reg);
// Load the aligned dword
gpr dword_address = address & ~0x7;
uint64_t loaded_value = load_doubleword(rdram, 0, dword_address);
// Mask the existing value and shift the loaded value appropriately
gpr misalignment = address & 0x7;
gpr masked_value = initial_value & ~(0xFFFFFFFFFFFFFFFFu << (misalignment * 8));
loaded_value <<= (misalignment * 8);
return masked_value | loaded_value;
}
static inline gpr do_ldr(uint8_t* rdram, gpr initial_value, gpr offset, gpr reg) {
// Calculate the overall address
gpr address = (offset + reg);
// Load the aligned dword
gpr dword_address = address & ~0x7;
uint64_t loaded_value = load_doubleword(rdram, 0, dword_address);
// Mask the existing value and shift the loaded value appropriately
gpr misalignment = address & 0x7;
gpr masked_value = initial_value & ~(0xFFFFFFFFFFFFFFFFu >> (56 - misalignment * 8));
loaded_value >>= (56 - misalignment * 8);
return masked_value | loaded_value;
}
static inline void do_sdl(uint8_t* rdram, gpr offset, gpr reg, gpr val) {
// Calculate the overall address
gpr address = (offset + reg);
// Get the initial value of the aligned dword
gpr dword_address = address & ~0x7;
uint64_t initial_value = load_doubleword(rdram, 0, dword_address);
// Mask the initial value and shift the input value appropriately
gpr misalignment = address & 0x7;
uint64_t masked_initial_value = initial_value & ~(0xFFFFFFFFFFFFFFFFu >> (misalignment * 8));
uint64_t shifted_input_value = val >> (misalignment * 8);
uint64_t ret = masked_initial_value | shifted_input_value;
uint32_t lo = (uint32_t)ret;
uint32_t hi = (uint32_t)(ret >> 32);
MEM_W(0, dword_address + 4) = lo;
MEM_W(0, dword_address + 0) = hi;
}
static inline void do_sdr(uint8_t* rdram, gpr offset, gpr reg, gpr val) {
// Calculate the overall address
gpr address = (offset + reg);
// Get the initial value of the aligned dword
gpr dword_address = address & ~0x7;
uint64_t initial_value = load_doubleword(rdram, 0, dword_address);
// Mask the initial value and shift the input value appropriately
gpr misalignment = address & 0x7;
uint64_t masked_initial_value = initial_value & ~(0xFFFFFFFFFFFFFFFFu << (56 - misalignment * 8));
uint64_t shifted_input_value = val << (56 - misalignment * 8);
uint64_t ret = masked_initial_value | shifted_input_value;
uint32_t lo = (uint32_t)ret;
uint32_t hi = (uint32_t)(ret >> 32);
MEM_W(0, dword_address + 4) = lo;
MEM_W(0, dword_address + 0) = hi;
}
static inline uint32_t get_cop1_cs() {
uint32_t rounding_mode = 0;
switch (fegetround()) {
// round to nearest value
case FE_TONEAREST:
default:
rounding_mode = 0;
break;
// round to zero (truncate)
case FE_TOWARDZERO:
rounding_mode = 1;
break;
// round to positive infinity (ceil)
case FE_UPWARD:
rounding_mode = 2;
break;
// round to negative infinity (floor)
case FE_DOWNWARD:
rounding_mode = 3;
break;
}
return rounding_mode;
}
static inline void set_cop1_cs(uint32_t val) {
uint32_t rounding_mode = val & 0x3;
int round = FE_TONEAREST;
switch (rounding_mode) {
case 0: // round to nearest value
round = FE_TONEAREST;
break;
case 1: // round to zero (truncate)
round = FE_TOWARDZERO;
break;
case 2: // round to positive infinity (ceil)
round = FE_UPWARD;
break;
case 3: // round to negative infinity (floor)
round = FE_DOWNWARD;
break;
}
fesetround(round);
}
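// Illustrative sketch, not part of the original header ("example_rounding_roundtrip"
// is a made-up name): round-tripping the COP1 control/status rounding bits
// through the host floating-point environment.
static inline void example_rounding_roundtrip(void) {
    set_cop1_cs(1); // guest rounding mode 1: round toward zero
    assert(fegetround() == FE_TOWARDZERO);
    assert(get_cop1_cs() == 1);
}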
#define S32(val) \
((int32_t)(val))
#define U32(val) \
((uint32_t)(val))
#define S64(val) \
((int64_t)(val))
#define U64(val) \
((uint64_t)(val))
#define MUL_S(val1, val2) \
((val1) * (val2))
#define MUL_D(val1, val2) \
((val1) * (val2))
#define DIV_S(val1, val2) \
((val1) / (val2))
#define DIV_D(val1, val2) \
((val1) / (val2))
#define CVT_S_W(val) \
((float)((int32_t)(val)))
#define CVT_D_W(val) \
((double)((int32_t)(val)))
#define CVT_D_L(val) \
((double)((int64_t)(val)))
#define CVT_S_L(val) \
((float)((int64_t)(val)))
#define CVT_D_S(val) \
((double)(val))
#define CVT_S_D(val) \
((float)(val))
#define TRUNC_W_S(val) \
((int32_t)(val))
#define TRUNC_W_D(val) \
((int32_t)(val))
#define TRUNC_L_S(val) \
((int64_t)(val))
#define TRUNC_L_D(val) \
((int64_t)(val))
#define DEFAULT_ROUNDING_MODE 0
static inline int32_t do_cvt_w_s(float val) {
// Rounding mode aware float to 32-bit int conversion.
return (int32_t)lrintf(val);
}
#define CVT_W_S(val) \
do_cvt_w_s(val)
static inline int64_t do_cvt_l_s(float val) {
// Rounding mode aware float to 64-bit int conversion.
return (int64_t)llrintf(val);
}
#define CVT_L_S(val) \
do_cvt_l_s(val)
static inline int32_t do_cvt_w_d(double val) {
// Rounding mode aware double to 32-bit int conversion.
return (int32_t)lrint(val);
}
#define CVT_W_D(val) \
do_cvt_w_d(val)
static inline int64_t do_cvt_l_d(double val) {
// Rounding mode aware double to 64-bit int conversion.
return (int64_t)llrint(val);
}
#define CVT_L_D(val) \
do_cvt_l_d(val)
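// Illustrative sketch, not part of the original header ("example_cvt_vs_trunc"
// is a made-up name): the CVT_* helpers go through lrint/llrint and therefore
// honor the mode installed by set_cop1_cs(), while the TRUNC_* casts always
// truncate toward zero.
static inline void example_cvt_vs_trunc(void) {
    set_cop1_cs(3); // guest rounding mode 3: round toward -infinity
    assert(CVT_W_S(-1.5f) == -2);   // lrintf respects FE_DOWNWARD
    assert(TRUNC_W_S(-1.5f) == -1); // a plain cast truncates regardless
}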
#define NAN_CHECK(val) \
assert(val == val)
//#define NAN_CHECK(val)
typedef union {
double d;
struct {
float fl;
float fh;
};
struct {
uint32_t u32l;
uint32_t u32h;
};
uint64_t u64;
} fpr;
typedef struct {
gpr r0, r1, r2, r3, r4, r5, r6, r7,
r8, r9, r10, r11, r12, r13, r14, r15,
r16, r17, r18, r19, r20, r21, r22, r23,
r24, r25, r26, r27, r28, r29, r30, r31;
fpr f0, f1, f2, f3, f4, f5, f6, f7,
f8, f9, f10, f11, f12, f13, f14, f15,
f16, f17, f18, f19, f20, f21, f22, f23,
f24, f25, f26, f27, f28, f29, f30, f31;
uint64_t hi, lo;
uint32_t* f_odd;
uint32_t status_reg;
uint8_t mips3_float_mode;
} recomp_context;
// Checks if the target is an even float register or that mips3 float mode is enabled
#define CHECK_FR(ctx, idx) \
assert(((idx) & 1) == 0 || (ctx)->mips3_float_mode)
#ifdef __cplusplus
extern "C" {
#endif
void cop0_status_write(recomp_context* ctx, gpr value);
gpr cop0_status_read(recomp_context* ctx);
void switch_error(const char* func, uint32_t vram, uint32_t jtbl);
void do_break(uint32_t vram);
// The function signature for all recompiler output functions.
typedef void (recomp_func_t)(uint8_t* rdram, recomp_context* ctx);
// The function signature for special functions that need a third argument.
// These get called via generated shims to allow providing some information about the caller, such as mod id.
typedef void (recomp_func_ext_t)(uint8_t* rdram, recomp_context* ctx, uintptr_t arg);
recomp_func_t* get_function(int32_t vram);
#define LOOKUP_FUNC(val) \
get_function((int32_t)(val))
extern int32_t* section_addresses;
#define LO16(x) \
((x) & 0xFFFF)
#define HI16(x) \
(((x) >> 16) + (((x) >> 15) & 1))
#define RELOC_HI16(section_index, offset) \
HI16(section_addresses[section_index] + (offset))
#define RELOC_LO16(section_index, offset) \
LO16(section_addresses[section_index] + (offset))
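// Illustrative worked example, not part of the original header. The
// + ((x >> 15) & 1) term in HI16 compensates for LO16 being sign-extended when
// the hardware applies it. For x = 0x80208000, LO16(x) = 0x8000, which
// sign-extends to -0x8000 at runtime, so HI16(x) = 0x8020 + 1 = 0x8021 and the
// pair recombines as (0x8021 << 16) + (int16_t)0x8000 == 0x80208000.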
void recomp_syscall_handler(uint8_t* rdram, recomp_context* ctx, int32_t instruction_vram);
void pause_self(uint8_t *rdram);
#ifdef __cplusplus
}
#endif
#endif


@@ -9,6 +9,7 @@
#include <unordered_map>
#include <unordered_set>
#include <filesystem>
#include <optional>
#ifdef _MSC_VER
inline uint32_t byteswap(uint32_t val) {
@@ -36,6 +37,21 @@ namespace N64Recomp {
: vram(vram), rom(rom), words(std::move(words)), name(std::move(name)), section_index(section_index), ignored(ignored), reimplemented(reimplemented), stubbed(stubbed) {}
Function() = default;
};
struct JumpTable {
uint32_t vram;
uint32_t addend_reg;
uint32_t rom;
uint32_t lw_vram;
uint32_t addu_vram;
uint32_t jr_vram;
uint16_t section_index;
std::optional<uint32_t> got_offset;
std::vector<uint32_t> entries;
JumpTable(uint32_t vram, uint32_t addend_reg, uint32_t rom, uint32_t lw_vram, uint32_t addu_vram, uint32_t jr_vram, uint16_t section_index, std::optional<uint32_t> got_offset, std::vector<uint32_t>&& entries)
: vram(vram), addend_reg(addend_reg), rom(rom), lw_vram(lw_vram), addu_vram(addu_vram), jr_vram(jr_vram), section_index(section_index), got_offset(got_offset), entries(std::move(entries)) {}
};
enum class RelocType : uint8_t {
R_MIPS_NONE = 0,
@@ -69,10 +85,12 @@ namespace N64Recomp {
constexpr std::string_view EventSectionName = ".recomp_event";
constexpr std::string_view ImportSectionPrefix = ".recomp_import.";
constexpr std::string_view CallbackSectionPrefix = ".recomp_callback.";
constexpr std::string_view HookSectionPrefix = ".recomp_hook.";
constexpr std::string_view HookReturnSectionPrefix = ".recomp_hook_return.";
// Special mod names.
constexpr std::string_view ModSelf = ".";
constexpr std::string_view ModBaseRecomp = "*";
// Special dependency names.
constexpr std::string_view DependencySelf = ".";
constexpr std::string_view DependencyBaseRecomp = "*";
struct Section {
uint32_t rom_addr = 0;
@@ -86,6 +104,7 @@ namespace N64Recomp {
bool executable = false;
bool relocatable = false; // TODO is this needed? relocs being non-empty should be an equivalent check.
bool has_mips32_relocs = false;
std::optional<uint32_t> got_ram_addr = std::nullopt;
};
struct ReferenceSection {
@@ -128,13 +147,6 @@ namespace N64Recomp {
extern const std::unordered_set<std::string> ignored_funcs;
extern const std::unordered_set<std::string> renamed_funcs;
struct Dependency {
uint8_t major_version;
uint8_t minor_version;
uint8_t patch_version;
std::string mod_id;
};
struct ImportSymbol {
ReferenceSymbol base;
size_t dependency_index;
@@ -173,6 +185,19 @@ namespace N64Recomp {
ReplacementFlags flags;
};
enum class HookFlags : uint32_t {
AtReturn = 1 << 0,
};
inline HookFlags operator&(HookFlags lhs, HookFlags rhs) { return HookFlags(uint32_t(lhs) & uint32_t(rhs)); }
inline HookFlags operator|(HookFlags lhs, HookFlags rhs) { return HookFlags(uint32_t(lhs) | uint32_t(rhs)); }
struct FunctionHook {
uint32_t func_index;
uint32_t original_section_vrom;
uint32_t original_vram;
HookFlags flags;
};
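// Illustrative, not part of the original header: testing a hook's placement
// with the flag operators above ("hook" is a hypothetical FunctionHook).
//   bool at_return = (hook.flags & HookFlags::AtReturn) == HookFlags::AtReturn;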
class Context {
private:
//// Reference symbols (used for populating relocations for patches)
@@ -182,6 +207,8 @@ namespace N64Recomp {
std::vector<ReferenceSymbol> reference_symbols;
// Mapping of symbol name to reference symbol index.
std::unordered_map<std::string, SymbolReference> reference_symbols_by_name;
// Whether all reference sections should be treated as relocatable (used in live recompilation).
bool all_reference_sections_relocatable = false;
public:
std::vector<Section> sections;
std::vector<Function> functions;
@@ -194,6 +221,10 @@ namespace N64Recomp {
// The target ROM being recompiled, TODO move this outside of the context to avoid making a copy for mod contexts.
// Used for reading relocations and for the output binary feature.
std::vector<uint8_t> rom;
// Whether reference symbols should be validated when emitting function calls during recompilation.
bool skip_validating_reference_symbols = true;
// Whether all function calls (excluding reference symbols) should go through lookup.
bool use_lookup_for_all_function_calls = false;
//// Only used by the CLI, TODO move this to a struct in the internal headers.
// A mapping of function name to index in the functions vector
@@ -202,8 +233,8 @@ namespace N64Recomp {
//// Mod dependencies and their symbols
//// Imported values
// List of dependencies.
std::vector<Dependency> dependencies;
// Dependency names.
std::vector<std::string> dependencies;
// Mapping of dependency name to dependency index.
std::unordered_map<std::string, size_t> dependencies_by_name;
// List of symbols imported from dependencies.
@@ -224,6 +255,11 @@ namespace N64Recomp {
std::vector<Callback> callbacks;
// List of symbols from events, which contains the names of events that this context provides.
std::vector<EventSymbol> event_symbols;
// List of hooks, which contains the original function to hook and the function index to call at the hook.
std::vector<FunctionHook> hooks;
// Causes functions to print their name to the console the first time they're called.
bool trace_mode;
// Imports sections and function symbols from a provided context into this context's reference sections and reference functions.
bool import_reference_context(const Context& reference_context);
@@ -235,54 +271,57 @@ namespace N64Recomp {
Context() = default;
bool add_dependency(const std::string& id, uint8_t major_version, uint8_t minor_version, uint8_t patch_version) {
bool add_dependency(const std::string& id) {
if (dependencies_by_name.contains(id)) {
return false;
}
size_t dependency_index = dependencies.size();
dependencies.emplace_back(N64Recomp::Dependency {
.major_version = major_version,
.minor_version = minor_version,
.patch_version = patch_version,
.mod_id = id
});
size_t dependency_index = dependencies_by_name.size();
dependencies.emplace_back(id);
dependencies_by_name.emplace(id, dependency_index);
dependency_events_by_name.resize(dependencies.size());
dependency_imports_by_name.resize(dependencies.size());
dependency_events_by_name.resize(dependencies_by_name.size());
dependency_imports_by_name.resize(dependencies_by_name.size());
return true;
}
bool add_dependencies(const std::vector<Dependency>& new_dependencies) {
dependencies.reserve(dependencies.size() + new_dependencies.size());
bool add_dependencies(const std::vector<std::string>& new_dependencies) {
dependencies_by_name.reserve(dependencies_by_name.size() + new_dependencies.size());
// Check if any of the dependencies already exist and fail if so.
for (const Dependency& dep : new_dependencies) {
if (dependencies_by_name.contains(dep.mod_id)) {
for (const std::string& dep : new_dependencies) {
if (dependencies_by_name.contains(dep)) {
return false;
}
}
for (const Dependency& dep : new_dependencies) {
size_t dependency_index = dependencies.size();
for (const std::string& dep : new_dependencies) {
size_t dependency_index = dependencies_by_name.size();
dependencies.emplace_back(dep);
dependencies_by_name.emplace(dep.mod_id, dependency_index);
dependencies_by_name.emplace(dep, dependency_index);
}
dependency_events_by_name.resize(dependencies.size());
dependency_imports_by_name.resize(dependencies.size());
dependency_events_by_name.resize(dependencies_by_name.size());
dependency_imports_by_name.resize(dependencies_by_name.size());
return true;
}
bool find_dependency(const std::string& mod_id, size_t& dependency_index) {
auto find_it = dependencies_by_name.find(mod_id);
if (find_it == dependencies_by_name.end()) {
return false;
if (find_it != dependencies_by_name.end()) {
dependency_index = find_it->second;
}
else {
// Handle special dependency names.
if (mod_id == DependencySelf || mod_id == DependencyBaseRecomp) {
add_dependency(mod_id);
dependency_index = dependencies_by_name[mod_id];
}
else {
return false;
}
}
dependency_index = find_it->second;
return true;
}
@@ -364,6 +403,9 @@ namespace N64Recomp {
}
bool is_reference_section_relocatable(uint16_t section_index) const {
if (all_reference_sections_relocatable) {
return true;
}
if (section_index == SectionAbsolute) {
return false;
}
@@ -419,7 +461,7 @@ namespace N64Recomp {
}
bool find_import_symbol(const std::string& symbol_name, size_t dependency_index, SymbolReference& ref_out) const {
if (dependency_index >= dependencies.size()) {
if (dependency_index >= dependencies_by_name.size()) {
return false;
}
@@ -467,7 +509,7 @@ namespace N64Recomp {
}
bool add_dependency_event(const std::string& event_name, size_t dependency_index, size_t& dependency_event_index) {
if (dependency_index >= dependencies.size()) {
if (dependency_index >= dependencies_by_name.size()) {
return false;
}
@@ -523,9 +565,16 @@ namespace N64Recomp {
void copy_reference_sections_from(const Context& rhs) {
reference_sections = rhs.reference_sections;
}
void set_all_reference_sections_relocatable() {
all_reference_sections_relocatable = true;
}
};
bool recompile_function(const Context& context, const Function& func, std::ofstream& output_file, std::span<std::vector<uint32_t>> static_funcs, bool tag_reference_relocs);
class Generator;
bool recompile_function(const Context& context, size_t function_index, std::ostream& output_file, std::span<std::vector<uint32_t>> static_funcs, bool tag_reference_relocs);
bool recompile_function_custom(Generator& generator, const Context& context, size_t function_index, std::ostream& output_file, std::span<std::vector<uint32_t>> static_funcs_out, bool tag_reference_relocs);
enum class ModSymbolsError {
Good,
@@ -535,21 +584,56 @@ namespace N64Recomp {
FunctionOutOfBounds,
};
ModSymbolsError parse_mod_symbols(std::span<const char> data, std::span<const uint8_t> binary, const std::unordered_map<uint32_t, uint16_t>& sections_by_vrom, const Context& reference_context, Context& context_out);
ModSymbolsError parse_mod_symbols(std::span<const char> data, std::span<const uint8_t> binary, const std::unordered_map<uint32_t, uint16_t>& sections_by_vrom, Context& context_out);
std::vector<uint8_t> symbols_to_bin_v1(const Context& mod_context);
inline bool is_manual_patch_symbol(uint32_t vram) {
// Zero-sized symbols between 0x8F000000 and 0x90000000 are manually specified symbols for use with patches.
// TODO make this configurable or come up with a more sensible solution for dealing with manual symbols for patches.
return vram >= 0x8F000000 && vram < 0x90000000;
}
inline bool validate_mod_name(std::string_view str) {
// Disallow mod names with a colon in them, since you can't specify that in a dependency string or in callbacks.
for (char c : str) {
if (c == ':') {
// Locale-independent ASCII-only version of isalpha.
inline bool isalpha_nolocale(char c) {
return (c >= 'a' && c <= 'z') || (c >= 'A' && c <= 'Z');
}
// Locale-independent ASCII-only version of isalnum.
inline bool isalnum_nolocale(char c) {
return isalpha_nolocale(c) || (c >= '0' && c <= '9');
}
inline bool validate_mod_id(std::string_view str) {
// Disallow empty ids.
if (str.size() == 0) {
return false;
}
// Allow special dependency ids.
if (str == N64Recomp::DependencySelf || str == N64Recomp::DependencyBaseRecomp) {
return true;
}
// These following rules basically describe C identifiers. There's no specific reason to enforce them besides colon (currently),
// so this is just to prevent "weird" mod ids.
// Check the first character, which must be alphabetical or an underscore.
if (!isalpha_nolocale(str[0]) && str[0] != '_') {
return false;
}
// Check the remaining characters, which can be alphanumeric or underscore.
for (char c : str.substr(1)) {
if (!isalnum_nolocale(c) && c != '_') {
return false;
}
}
return true;
}
inline bool validate_mod_name(const std::string& str) {
return validate_mod_name(std::string_view{str});
inline bool validate_mod_id(const std::string& str) {
return validate_mod_id(std::string_view{str});
}
}
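// Illustrative examples, not part of the original header, showing how
// validate_mod_id behaves under the rules above:
//   validate_mod_id("mm_rando") -> true  (C-identifier shaped)
//   validate_mod_id("2fast")    -> false (leading digit)
//   validate_mod_id("a:b")      -> false (colon is disallowed)
//   validate_mod_id(".")        -> true  (special DependencySelf id)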


@@ -0,0 +1,109 @@
#ifndef __GENERATOR_H__
#define __GENERATOR_H__
#include "recompiler/context.h"
#include "operations.h"
namespace N64Recomp {
struct InstructionContext {
int rd;
int rs;
int rt;
int sa;
int fd;
int fs;
int ft;
int cop1_cs;
uint16_t imm16;
bool reloc_tag_as_reference;
RelocType reloc_type;
uint32_t reloc_section_index;
uint32_t reloc_target_section_offset;
};
class Generator {
public:
virtual void process_binary_op(const BinaryOp& op, const InstructionContext& ctx) const = 0;
virtual void process_unary_op(const UnaryOp& op, const InstructionContext& ctx) const = 0;
virtual void process_store_op(const StoreOp& op, const InstructionContext& ctx) const = 0;
virtual void emit_function_start(const std::string& function_name, size_t func_index) const = 0;
virtual void emit_function_end() const = 0;
virtual void emit_function_call_lookup(uint32_t addr) const = 0;
virtual void emit_function_call_by_register(int reg) const = 0;
// target_section_offset can each be deduced from symbol_index if the full context is available,
// but for live recompilation the reference symbol list is unavailable so it's still provided.
virtual void emit_function_call_reference_symbol(const Context& context, uint16_t section_index, size_t symbol_index, uint32_t target_section_offset) const = 0;
virtual void emit_function_call(const Context& context, size_t function_index) const = 0;
virtual void emit_named_function_call(const std::string& function_name) const = 0;
virtual void emit_goto(const std::string& target) const = 0;
virtual void emit_label(const std::string& label_name) const = 0;
virtual void emit_jtbl_addend_declaration(const JumpTable& jtbl, int reg) const = 0;
virtual void emit_branch_condition(const ConditionalBranchOp& op, const InstructionContext& ctx) const = 0;
virtual void emit_branch_close() const = 0;
virtual void emit_switch(const Context& recompiler_context, const JumpTable& jtbl, int reg) const = 0;
virtual void emit_case(int case_index, const std::string& target_label) const = 0;
virtual void emit_switch_error(uint32_t instr_vram, uint32_t jtbl_vram) const = 0;
virtual void emit_switch_close() const = 0;
virtual void emit_return(const Context& context, size_t func_index) const = 0;
virtual void emit_check_fr(int fpr) const = 0;
virtual void emit_check_nan(int fpr, bool is_double) const = 0;
virtual void emit_cop0_status_read(int reg) const = 0;
virtual void emit_cop0_status_write(int reg) const = 0;
virtual void emit_cop1_cs_read(int reg) const = 0;
virtual void emit_cop1_cs_write(int reg) const = 0;
virtual void emit_muldiv(InstrId instr_id, int reg1, int reg2) const = 0;
virtual void emit_syscall(uint32_t instr_vram) const = 0;
virtual void emit_do_break(uint32_t instr_vram) const = 0;
virtual void emit_pause_self() const = 0;
virtual void emit_trigger_event(uint32_t event_index) const = 0;
virtual void emit_comment(const std::string& comment) const = 0;
};
class CGenerator final : Generator {
public:
CGenerator(std::ostream& output_file) : output_file(output_file) {};
void process_binary_op(const BinaryOp& op, const InstructionContext& ctx) const final;
void process_unary_op(const UnaryOp& op, const InstructionContext& ctx) const final;
void process_store_op(const StoreOp& op, const InstructionContext& ctx) const final;
void emit_function_start(const std::string& function_name, size_t func_index) const final;
void emit_function_end() const final;
void emit_function_call_lookup(uint32_t addr) const final;
void emit_function_call_by_register(int reg) const final;
void emit_function_call_reference_symbol(const Context& context, uint16_t section_index, size_t symbol_index, uint32_t target_section_offset) const final;
void emit_function_call(const Context& context, size_t function_index) const final;
void emit_named_function_call(const std::string& function_name) const final;
void emit_goto(const std::string& target) const final;
void emit_label(const std::string& label_name) const final;
void emit_jtbl_addend_declaration(const JumpTable& jtbl, int reg) const final;
void emit_branch_condition(const ConditionalBranchOp& op, const InstructionContext& ctx) const final;
void emit_branch_close() const final;
void emit_switch(const Context& recompiler_context, const JumpTable& jtbl, int reg) const final;
void emit_case(int case_index, const std::string& target_label) const final;
void emit_switch_error(uint32_t instr_vram, uint32_t jtbl_vram) const final;
void emit_switch_close() const final;
void emit_return(const Context& context, size_t func_index) const final;
void emit_check_fr(int fpr) const final;
void emit_check_nan(int fpr, bool is_double) const final;
void emit_cop0_status_read(int reg) const final;
void emit_cop0_status_write(int reg) const final;
void emit_cop1_cs_read(int reg) const final;
void emit_cop1_cs_write(int reg) const final;
void emit_muldiv(InstrId instr_id, int reg1, int reg2) const final;
void emit_syscall(uint32_t instr_vram) const final;
void emit_do_break(uint32_t instr_vram) const final;
void emit_pause_self() const final;
void emit_trigger_event(uint32_t event_index) const final;
void emit_comment(const std::string& comment) const final;
private:
void get_operand_string(Operand operand, UnaryOpType operation, const InstructionContext& context, std::string& operand_string) const;
void get_binary_expr_string(BinaryOpType type, const BinaryOperands& operands, const InstructionContext& ctx, const std::string& output, std::string& expr_string) const;
void get_notation(BinaryOpType op_type, std::string& func_string, std::string& infix_string) const;
std::ostream& output_file;
};
}
#endif


@@ -0,0 +1,159 @@
#ifndef __LIVE_RECOMPILER_H__
#define __LIVE_RECOMPILER_H__
#include <unordered_map>
#include "recompiler/generator.h"
#include "recomp.h"
struct sljit_compiler;
namespace N64Recomp {
struct LiveGeneratorContext;
struct ReferenceJumpDetails {
uint16_t section;
uint32_t section_offset;
};
struct LiveGeneratorOutput {
LiveGeneratorOutput() = default;
LiveGeneratorOutput(const LiveGeneratorOutput& rhs) = delete;
LiveGeneratorOutput(LiveGeneratorOutput&& rhs) { *this = std::move(rhs); }
LiveGeneratorOutput& operator=(const LiveGeneratorOutput& rhs) = delete;
LiveGeneratorOutput& operator=(LiveGeneratorOutput&& rhs) {
good = rhs.good;
string_literals = std::move(rhs.string_literals);
jump_tables = std::move(rhs.jump_tables);
code = rhs.code;
code_size = rhs.code_size;
functions = std::move(rhs.functions);
reference_symbol_jumps = std::move(rhs.reference_symbol_jumps);
import_jumps_by_index = std::move(rhs.import_jumps_by_index);
executable_offset = rhs.executable_offset;
rhs.good = false;
rhs.code = nullptr;
rhs.code_size = 0;
rhs.reference_symbol_jumps.clear();
rhs.executable_offset = 0;
return *this;
}
~LiveGeneratorOutput();
size_t num_reference_symbol_jumps() const;
void set_reference_symbol_jump(size_t jump_index, recomp_func_t* func);
ReferenceJumpDetails get_reference_symbol_jump_details(size_t jump_index);
void populate_import_symbol_jumps(size_t import_index, recomp_func_t* func);
bool good = false;
// Storage for string literals referenced by recompiled code. These are allocated as unique_ptr arrays
// to prevent them from moving, as the referenced address is baked into the recompiled code.
std::vector<std::unique_ptr<char[]>> string_literals;
// Storage for jump tables referenced by recompiled code (vector of arrays of pointers). These are also
// allocated as unique_ptr arrays for the same reason as strings.
std::vector<std::unique_ptr<void*[]>> jump_tables;
// Recompiled code.
void* code;
// Size of the recompiled code.
size_t code_size;
// Pointers to each individual function within the recompiled code.
std::vector<recomp_func_t*> functions;
private:
// List of jump details and the corresponding jump instruction address. These jumps get populated after recompilation is complete
// during dependency resolution.
std::vector<std::pair<ReferenceJumpDetails, void*>> reference_symbol_jumps;
// Mapping of import symbol index to any jumps to that import symbol.
std::unordered_multimap<size_t, void*> import_jumps_by_index;
// sljit executable offset.
int64_t executable_offset;
friend class LiveGenerator;
};
struct LiveGeneratorInputs {
uint32_t base_event_index;
void (*cop0_status_write)(recomp_context* ctx, gpr value);
gpr (*cop0_status_read)(recomp_context* ctx);
void (*switch_error)(const char* func, uint32_t vram, uint32_t jtbl);
void (*do_break)(uint32_t vram);
recomp_func_t* (*get_function)(int32_t vram);
void (*syscall_handler)(uint8_t* rdram, recomp_context* ctx, int32_t instruction_vram);
void (*pause_self)(uint8_t* rdram);
void (*trigger_event)(uint8_t* rdram, recomp_context* ctx, uint32_t event_index);
int32_t *reference_section_addresses;
int32_t *local_section_addresses;
void (*run_hook)(uint8_t* rdram, recomp_context* ctx, size_t hook_table_index);
// Maps function index in recompiler context to function's entry hook slot.
std::unordered_map<size_t, size_t> entry_func_hooks;
// Maps function index in recompiler context to function's return hook slot.
std::unordered_map<size_t, size_t> return_func_hooks;
// Maps section index in the generated code to original section index. Used by regenerated
// code to relocate using the corresponding original section's address.
std::vector<size_t> original_section_indices;
};
class LiveGenerator final : public Generator {
public:
LiveGenerator(size_t num_funcs, const LiveGeneratorInputs& inputs);
~LiveGenerator();
// Prevent moving or copying.
LiveGenerator(const LiveGenerator& rhs) = delete;
LiveGenerator(LiveGenerator&& rhs) = delete;
LiveGenerator& operator=(const LiveGenerator& rhs) = delete;
LiveGenerator& operator=(LiveGenerator&& rhs) = delete;
LiveGeneratorOutput finish();
void process_binary_op(const BinaryOp& op, const InstructionContext& ctx) const final;
void process_unary_op(const UnaryOp& op, const InstructionContext& ctx) const final;
void process_store_op(const StoreOp& op, const InstructionContext& ctx) const final;
void emit_function_start(const std::string& function_name, size_t func_index) const final;
void emit_function_end() const final;
void emit_function_call_lookup(uint32_t addr) const final;
void emit_function_call_by_register(int reg) const final;
void emit_function_call_reference_symbol(const Context& context, uint16_t section_index, size_t symbol_index, uint32_t target_section_offset) const final;
void emit_function_call(const Context& context, size_t function_index) const final;
void emit_named_function_call(const std::string& function_name) const final;
void emit_goto(const std::string& target) const final;
void emit_label(const std::string& label_name) const final;
void emit_jtbl_addend_declaration(const JumpTable& jtbl, int reg) const final;
void emit_branch_condition(const ConditionalBranchOp& op, const InstructionContext& ctx) const final;
void emit_branch_close() const final;
void emit_switch(const Context& recompiler_context, const JumpTable& jtbl, int reg) const final;
void emit_case(int case_index, const std::string& target_label) const final;
void emit_switch_error(uint32_t instr_vram, uint32_t jtbl_vram) const final;
void emit_switch_close() const final;
void emit_return(const Context& context, size_t func_index) const final;
void emit_check_fr(int fpr) const final;
void emit_check_nan(int fpr, bool is_double) const final;
void emit_cop0_status_read(int reg) const final;
void emit_cop0_status_write(int reg) const final;
void emit_cop1_cs_read(int reg) const final;
void emit_cop1_cs_write(int reg) const final;
void emit_muldiv(InstrId instr_id, int reg1, int reg2) const final;
void emit_syscall(uint32_t instr_vram) const final;
void emit_do_break(uint32_t instr_vram) const final;
void emit_pause_self() const final;
void emit_trigger_event(uint32_t event_index) const final;
void emit_comment(const std::string& comment) const final;
private:
void get_operand_string(Operand operand, UnaryOpType operation, const InstructionContext& context, std::string& operand_string) const;
void get_binary_expr_string(BinaryOpType type, const BinaryOperands& operands, const InstructionContext& ctx, const std::string& output, std::string& expr_string) const;
void get_notation(BinaryOpType op_type, std::string& func_string, std::string& infix_string) const;
// Loads the relocated address specified by the instruction context into the target register.
void load_relocated_address(const InstructionContext& ctx, int reg) const;
sljit_compiler* compiler;
LiveGeneratorInputs inputs;
mutable std::unique_ptr<LiveGeneratorContext> context;
mutable bool errored;
};
void live_recompiler_init();
bool recompile_function_live(LiveGenerator& generator, const Context& context, size_t function_index, std::ostream& output_file, std::span<std::vector<uint32_t>> static_funcs_out, bool tag_reference_relocs);
class ShimFunction {
private:
void* code;
recomp_func_t* func;
public:
ShimFunction(recomp_func_ext_t* to_shim, uintptr_t value);
~ShimFunction();
recomp_func_t* get_func() { return func; }
};
}
#endif


@@ -28,13 +28,12 @@ namespace N64Recomp {
ToU32,
ToS64,
ToU64,
NegateS32,
NegateS64,
Lui,
Mask5, // Mask to 5 bits
Mask6, // Mask to 6 bits
ToInt32, // Functionally equivalent to ToS32, only exists for parity with old codegen
Negate,
NegateFloat,
NegateDouble,
AbsFloat,
AbsDouble,
SqrtFloat,
@@ -51,12 +50,20 @@ namespace N64Recomp {
ConvertLFromS,
TruncateWFromS,
TruncateWFromD,
TruncateLFromS,
TruncateLFromD,
RoundWFromS,
RoundWFromD,
RoundLFromS,
RoundLFromD,
CeilWFromS,
CeilWFromD,
CeilLFromS,
CeilLFromD,
FloorWFromS,
FloorWFromD
FloorWFromD,
FloorLFromS,
FloorLFromD
};
enum class BinaryOpType {
@@ -92,6 +99,12 @@ namespace N64Recomp {
LessEq,
Greater,
GreaterEq,
EqualFloat,
LessFloat,
LessEqFloat,
EqualDouble,
LessDouble,
LessEqDouble,
// Loads
LD,
LW,


Submodule lib/sljit added at f6326087b3


@@ -4,7 +4,7 @@
#include "rabbitizer.hpp"
#include "fmt/format.h"
#include "n64recomp.h"
#include "recompiler/context.h"
#include "analysis.h"
extern "C" const char* RabbitizerRegister_getNameGpr(uint8_t regValue);
@@ -16,15 +16,18 @@ struct RegState {
uint32_t prev_addiu_vram;
uint32_t prev_addu_vram;
uint8_t prev_addend_reg;
uint32_t prev_got_offset; // offset of lw rt,offset(gp)
bool valid_lui;
bool valid_addiu;
bool valid_addend;
bool valid_got_offset;
// For tracking a register that has been loaded from RAM
uint32_t loaded_lw_vram;
uint32_t loaded_addu_vram;
uint32_t loaded_address;
uint8_t loaded_addend_reg;
bool valid_loaded;
bool valid_got_loaded; // valid load through the GOT
RegState() = default;
@@ -33,10 +36,12 @@ struct RegState {
prev_addiu_vram = 0;
prev_addu_vram = 0;
prev_addend_reg = 0;
prev_got_offset = 0;
valid_lui = false;
valid_addiu = false;
valid_addend = false;
valid_got_offset = false;
loaded_lw_vram = 0;
loaded_addu_vram = 0;
@@ -44,6 +49,7 @@ struct RegState {
loaded_addend_reg = 0;
valid_loaded = false;
valid_got_loaded = false;
}
};
@@ -51,7 +57,7 @@ using InstrId = rabbitizer::InstrId::UniqueId;
using RegId = rabbitizer::Registers::Cpu::GprO32;
bool analyze_instruction(const rabbitizer::InstructionCpu& instr, const N64Recomp::Function& func, N64Recomp::FunctionStats& stats,
RegState reg_states[32], std::vector<RegState>& stack_states) {
RegState reg_states[32], std::vector<RegState>& stack_states, bool is_got_addr_defined) {
// Temporary register state for tracking the register being operated on
RegState temp{};
@@ -98,8 +104,26 @@ bool analyze_instruction(const rabbitizer::InstructionCpu& instr, const N64Recom
case InstrId::cpu_addu:
// rd has been completely overwritten, so invalidate it
temp.invalidate();
if (reg_states[rs].valid_got_offset != reg_states[rt].valid_got_offset) {
// Track which of the two registers has the valid GOT offset state and which is the addend
int valid_got_offset_reg = reg_states[rs].valid_got_offset ? rs : rt;
int addend_reg = reg_states[rs].valid_got_offset ? rt : rs;
// Copy the got offset reg's state into the destination reg, then set the destination reg's addend to the other operand
temp = reg_states[valid_got_offset_reg];
temp.valid_addend = true;
temp.prev_addend_reg = addend_reg;
temp.prev_addu_vram = instr.getVram();
} else if (((rs == (int)RegId::GPR_O32_gp) || (rt == (int)RegId::GPR_O32_gp))
&& reg_states[rs].valid_got_loaded != reg_states[rt].valid_got_loaded) {
// `addu rd, rs, $gp` or `addu rd, $gp, rt` after valid GOT load, this is the last part of a position independent
// jump table call. Keep the register state intact.
int valid_got_loaded_reg = reg_states[rs].valid_got_loaded ? rs : rt;
temp = reg_states[valid_got_loaded_reg];
}
// Exactly one of the two addend register states should have a valid lui at this time
if (reg_states[rs].valid_lui != reg_states[rt].valid_lui) {
else if (reg_states[rs].valid_lui != reg_states[rt].valid_lui) {
// Track which of the two registers has the valid lui state and which is the addend
int valid_lui_reg = reg_states[rs].valid_lui ? rs : rt;
int addend_reg = reg_states[rs].valid_lui ? rt : rs;
@@ -158,9 +182,11 @@ bool analyze_instruction(const rabbitizer::InstructionCpu& instr, const N64Recom
}
// If the base register has a valid lui state and a valid addend before this, then this may be a load from a jump table
else if (reg_states[base].valid_lui && reg_states[base].valid_addend) {
// Exactly one of the lw and the base reg should have a valid lo16 value
// Exactly one of the lw and the base reg should have a valid lo16 value. However, the lo16 may end up just being zero by pure luck,
// so allow the case where the lo16 immediate is zero and the register state doesn't have a valid addiu immediate.
// This means the only invalid case is where they're both true.
bool nonzero_immediate = imm != 0;
if (nonzero_immediate != reg_states[base].valid_addiu) {
if (!(nonzero_immediate && reg_states[base].valid_addiu)) {
uint32_t lo16;
if (nonzero_immediate) {
lo16 = (int16_t)imm;
@@ -176,6 +202,21 @@ bool analyze_instruction(const rabbitizer::InstructionCpu& instr, const N64Recom
temp.loaded_addu_vram = reg_states[base].prev_addu_vram;
}
}
// If the base register has a valid GOT offset and a valid addend before this, then this may be a load from a position independent jump table
else if (reg_states[base].valid_got_offset && reg_states[base].valid_addend) {
// At this point, we will have the offset from the value of the previously read GOT entry to the address being
// loaded here as well as the GOT entry offset itself
temp.valid_got_loaded = true;
temp.loaded_lw_vram = instr.getVram();
temp.loaded_address = imm; // This address is relative for now, we'll calculate the absolute address later
temp.loaded_addend_reg = reg_states[base].prev_addend_reg;
temp.loaded_addu_vram = reg_states[base].prev_addu_vram;
temp.prev_got_offset = reg_states[base].prev_got_offset;
} else if (base == (int)RegId::GPR_O32_gp && is_got_addr_defined) {
// lw from the $gp register implies a read from the global offset table
temp.prev_got_offset = imm;
temp.valid_got_offset = true;
}
reg_states[rt] = temp;
break;
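// Illustrative, not part of the original diff: the $gp-relative sequence the
// register states above are tracking, as emitted for a position-independent
// jump table (registers hypothetical):
//   lw   $t9, got_off($gp)  // sets valid_got_offset (GOT entry read)
//   addu $t9, $t9, $t8      // folds in the scaled case index (valid_addend)
//   lw   $t9, 0($t9)        // sets valid_got_loaded (GOT-relative table entry)
//   addu $t9, $t9, $gp      // state kept: entry becomes an absolute address
//   jr   $t9                // recorded as a jump table with a got_offset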
case InstrId::cpu_jr:
@@ -192,21 +233,24 @@ bool analyze_instruction(const rabbitizer::InstructionCpu& instr, const N64Recom
reg_states[rs].loaded_lw_vram,
reg_states[rs].loaded_addu_vram,
instr.getVram(),
0, // section index gets filled in later
std::nullopt,
std::vector<uint32_t>{}
);
} else if (reg_states[rs].valid_lui && reg_states[rs].valid_addiu && !reg_states[rs].valid_addend && !reg_states[rs].valid_loaded) {
uint32_t address = reg_states[rs].prev_addiu_vram + reg_states[rs].prev_lui;
stats.absolute_jumps.emplace_back(
address,
instr.getVram()
} else if (reg_states[rs].valid_got_loaded) {
stats.jump_tables.emplace_back(
reg_states[rs].loaded_address,
reg_states[rs].loaded_addend_reg,
0,
reg_states[rs].loaded_lw_vram,
reg_states[rs].loaded_addu_vram,
instr.getVram(),
0, // section index gets filled in later
reg_states[rs].prev_got_offset,
std::vector<uint32_t>{}
);
}
// Allow tail calls (TODO account for trailing nops due to bad function splits)
else if (instr.getVram() != func.vram + (func.words.size() - 2) * sizeof(func.words[0])) {
// Inconclusive analysis
fmt::print(stderr, "Failed to find jump table for `jr {}` at 0x{:08X} in {}\n", RabbitizerRegister_getNameGpr(rs), instr.getVram(), func.name);
return false;
}
// TODO stricter validation on tail calls, since not all indirect jumps can be treated as one.
break;
default:
if (instr.modifiesRd()) {
@@ -222,6 +266,9 @@ bool analyze_instruction(const rabbitizer::InstructionCpu& instr, const N64Recom
bool N64Recomp::analyze_function(const N64Recomp::Context& context, const N64Recomp::Function& func,
const std::vector<rabbitizer::InstructionCpu>& instructions, N64Recomp::FunctionStats& stats) {
const Section* section = &context.sections[func.section_index];
std::optional<uint32_t> got_ram_addr = section->got_ram_addr;
// Create a state to track each register (r0 won't be used)
RegState reg_states[32] {};
std::vector<RegState> stack_states{};
@@ -229,11 +276,26 @@ bool N64Recomp::analyze_function(const N64Recomp::Context& context, const N64Rec
// Look for jump tables
// A linear search through the func won't be accurate due to not taking control flow into account, but it'll work for finding jtables
for (const auto& instr : instructions) {
if (!analyze_instruction(instr, func, stats, reg_states, stack_states)) {
if (!analyze_instruction(instr, func, stats, reg_states, stack_states, got_ram_addr.has_value())) {
return false;
}
}
// Calculate absolute addresses for position-independent jump tables
if (got_ram_addr.has_value()) {
uint32_t got_rom_addr = got_ram_addr.value() + func.rom - func.vram;
for (size_t i = 0; i < stats.jump_tables.size(); i++) {
JumpTable& cur_jtbl = stats.jump_tables[i];
if (cur_jtbl.got_offset.has_value()) {
uint32_t got_word = byteswap(*reinterpret_cast<const uint32_t*>(&context.rom[got_rom_addr + cur_jtbl.got_offset.value()]));
cur_jtbl.vram += (section->ram_addr + got_word);
}
}
}
// Sort jump tables by their address
std::sort(stats.jump_tables.begin(), stats.jump_tables.end(),
[](const JumpTable& a, const JumpTable& b)
@@ -254,14 +316,22 @@ bool N64Recomp::analyze_function(const N64Recomp::Context& context, const N64Rec
// TODO this assumes that the jump table is in the same section as the function itself
cur_jtbl.rom = cur_jtbl.vram + func.rom - func.vram;
cur_jtbl.section_index = func.section_index;
while (vram < end_address) {
// Retrieve the current entry of the jump table
// TODO same as above
uint32_t rom_addr = vram + func.rom - func.vram;
uint32_t jtbl_word = byteswap(*reinterpret_cast<const uint32_t*>(&context.rom[rom_addr]));
if (cur_jtbl.got_offset.has_value() && got_ram_addr.has_value()) {
// Position independent jump tables have values that are offsets from the GOT,
// convert those to absolute addresses
jtbl_word += got_ram_addr.value();
}
// Check if the entry is a valid address in the current function
if (jtbl_word < func.vram || jtbl_word > func.vram + func.words.size() * sizeof(func.words[0])) {
if (jtbl_word < func.vram || jtbl_word >= func.vram + func.words.size() * sizeof(func.words[0])) {
// If it's not then this is the end of the jump table
break;
}


@@ -4,22 +4,9 @@
#include <cstdint>
#include <vector>
#include "n64recomp.h"
#include "recompiler/context.h"
namespace N64Recomp {
struct JumpTable {
uint32_t vram;
uint32_t addend_reg;
uint32_t rom;
uint32_t lw_vram;
uint32_t addu_vram;
uint32_t jr_vram;
std::vector<uint32_t> entries;
JumpTable(uint32_t vram, uint32_t addend_reg, uint32_t rom, uint32_t lw_vram, uint32_t addu_vram, uint32_t jr_vram, std::vector<uint32_t>&& entries)
: vram(vram), addend_reg(addend_reg), rom(rom), lw_vram(lw_vram), addu_vram(addu_vram), jr_vram(jr_vram), entries(std::move(entries)) {}
};
struct AbsoluteJump {
uint32_t jump_target;
uint32_t instruction_vram;
@@ -29,7 +16,6 @@ namespace N64Recomp {
struct FunctionStats {
std::vector<JumpTable> jump_tables;
std::vector<AbsoluteJump> absolute_jumps;
};
bool analyze_function(const Context& context, const Function& function, const std::vector<rabbitizer::InstructionCpu>& instructions, FunctionStats& stats);


@@ -4,11 +4,11 @@
#include "fmt/format.h"
#include "fmt/ostream.h"
#include "generator.h"
#include "recompiler/generator.h"
struct BinaryOpFields { std::string func_string; std::string infix_string; };
std::vector<BinaryOpFields> c_op_fields = []() {
static std::vector<BinaryOpFields> c_op_fields = []() {
std::vector<BinaryOpFields> ret{};
ret.resize(static_cast<size_t>(N64Recomp::BinaryOpType::COUNT));
std::vector<char> ops_setup{};
@@ -45,9 +45,15 @@ std::vector<BinaryOpFields> c_op_fields = []() {
setup_op(N64Recomp::BinaryOpType::Sra32, "S32", ">>"); // Arithmetic aspect will be taken care of by unary op for first operand.
setup_op(N64Recomp::BinaryOpType::Sra64, "", ">>"); // Arithmetic aspect will be taken care of by unary op for first operand.
setup_op(N64Recomp::BinaryOpType::Equal, "", "==");
setup_op(N64Recomp::BinaryOpType::EqualFloat,"", "==");
setup_op(N64Recomp::BinaryOpType::EqualDouble,"", "==");
setup_op(N64Recomp::BinaryOpType::NotEqual, "", "!=");
setup_op(N64Recomp::BinaryOpType::Less, "", "<");
setup_op(N64Recomp::BinaryOpType::LessFloat, "", "<");
setup_op(N64Recomp::BinaryOpType::LessDouble,"", "<");
setup_op(N64Recomp::BinaryOpType::LessEq, "", "<=");
setup_op(N64Recomp::BinaryOpType::LessEqFloat,"", "<=");
setup_op(N64Recomp::BinaryOpType::LessEqDouble,"", "<=");
setup_op(N64Recomp::BinaryOpType::Greater, "", ">");
setup_op(N64Recomp::BinaryOpType::GreaterEq, "", ">=");
setup_op(N64Recomp::BinaryOpType::LD, "LD", "");
@@ -72,22 +78,22 @@ std::vector<BinaryOpFields> c_op_fields = []() {
return ret;
}();
std::string gpr_to_string(int gpr_index) {
static std::string gpr_to_string(int gpr_index) {
if (gpr_index == 0) {
return "0";
}
return fmt::format("ctx->r{}", gpr_index);
}
std::string fpr_to_string(int fpr_index) {
static std::string fpr_to_string(int fpr_index) {
return fmt::format("ctx->f{}.fl", fpr_index);
}
std::string fpr_double_to_string(int fpr_index) {
static std::string fpr_double_to_string(int fpr_index) {
return fmt::format("ctx->f{}.d", fpr_index);
}
std::string fpr_u32l_to_string(int fpr_index) {
static std::string fpr_u32l_to_string(int fpr_index) {
if (fpr_index & 1) {
return fmt::format("ctx->f_odd[({} - 1) * 2]", fpr_index);
}
@@ -96,11 +102,11 @@ std::string fpr_u32l_to_string(int fpr_index) {
}
}
std::string fpr_u64_to_string(int fpr_index) {
static std::string fpr_u64_to_string(int fpr_index) {
return fmt::format("ctx->f{}.u64", fpr_index);
}
std::string unsigned_reloc(const N64Recomp::InstructionContext& context) {
static std::string unsigned_reloc(const N64Recomp::InstructionContext& context) {
switch (context.reloc_type) {
case N64Recomp::RelocType::R_MIPS_HI16:
return fmt::format("{}RELOC_HI16({}, {:#X})",
@@ -113,7 +119,7 @@ std::string unsigned_reloc(const N64Recomp::InstructionContext& context) {
}
}
std::string signed_reloc(const N64Recomp::InstructionContext& context) {
static std::string signed_reloc(const N64Recomp::InstructionContext& context) {
return "(int16_t)" + unsigned_reloc(context);
}
@@ -223,12 +229,6 @@ void N64Recomp::CGenerator::get_operand_string(Operand operand, UnaryOpType oper
case UnaryOpType::ToU64:
// Nothing to do here, they're already U64
break;
case UnaryOpType::NegateS32:
assert(false);
break;
case UnaryOpType::NegateS64:
assert(false);
break;
case UnaryOpType::Lui:
operand_string = "S32(" + operand_string + " << 16)";
break;
@@ -241,7 +241,10 @@ void N64Recomp::CGenerator::get_operand_string(Operand operand, UnaryOpType oper
case UnaryOpType::ToInt32:
operand_string = "(int32_t)" + operand_string;
break;
case UnaryOpType::Negate:
case UnaryOpType::NegateFloat:
operand_string = "-" + operand_string;
break;
case UnaryOpType::NegateDouble:
operand_string = "-" + operand_string;
break;
case UnaryOpType::AbsFloat:
@@ -292,24 +295,49 @@ void N64Recomp::CGenerator::get_operand_string(Operand operand, UnaryOpType oper
case UnaryOpType::TruncateWFromD:
operand_string = "TRUNC_W_D(" + operand_string + ")";
break;
case UnaryOpType::TruncateLFromS:
operand_string = "TRUNC_L_S(" + operand_string + ")";
break;
case UnaryOpType::TruncateLFromD:
operand_string = "TRUNC_L_D(" + operand_string + ")";
break;
// TODO these four operations should use banker's rounding, but roundeven is C23 so it's unavailable here.
case UnaryOpType::RoundWFromS:
operand_string = "lroundf(" + operand_string + ")";
break;
case UnaryOpType::RoundWFromD:
operand_string = "lround(" + operand_string + ")";
break;
case UnaryOpType::RoundLFromS:
operand_string = "llroundf(" + operand_string + ")";
break;
case UnaryOpType::RoundLFromD:
operand_string = "llround(" + operand_string + ")";
break;
case UnaryOpType::CeilWFromS:
operand_string = "S32(ceilf(" + operand_string + "))";
break;
case UnaryOpType::CeilWFromD:
operand_string = "S32(ceil(" + operand_string + "))";
break;
case UnaryOpType::CeilLFromS:
operand_string = "S64(ceilf(" + operand_string + "))";
break;
case UnaryOpType::CeilLFromD:
operand_string = "S64(ceil(" + operand_string + "))";
break;
case UnaryOpType::FloorWFromS:
operand_string = "S32(floorf(" + operand_string + "))";
break;
case UnaryOpType::FloorWFromD:
operand_string = "S32(floor(" + operand_string + "))";
break;
case UnaryOpType::FloorLFromS:
operand_string = "S64(floorf(" + operand_string + "))";
break;
case UnaryOpType::FloorLFromD:
operand_string = "S64(floor(" + operand_string + "))";
break;
}
}
@@ -323,7 +351,6 @@ void N64Recomp::CGenerator::get_binary_expr_string(BinaryOpType type, const Bina
thread_local std::string input_b{};
thread_local std::string func_string{};
thread_local std::string infix_string{};
bool is_infix;
get_operand_string(operands.operands[0], operands.operand_operations[0], ctx, input_a);
get_operand_string(operands.operands[1], operands.operand_operations[1], ctx, input_b);
get_notation(type, func_string, infix_string);
@@ -333,10 +360,10 @@ void N64Recomp::CGenerator::get_binary_expr_string(BinaryOpType type, const Bina
expr_string = fmt::format("{} {} {} ? 1 : 0", input_a, infix_string, input_b);
}
else if (type == BinaryOpType::Equal && operands.operands[1] == Operand::Zero && operands.operand_operations[1] == UnaryOpType::None) {
expr_string = input_a;
expr_string = "!" + input_a;
}
else if (type == BinaryOpType::NotEqual && operands.operands[1] == Operand::Zero && operands.operand_operations[1] == UnaryOpType::None) {
expr_string = "!" + input_a;
expr_string = input_a;
}
// End unnecessary cases.
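// Illustrative effect of the swap above, not part of the original diff: with
// $t0 in ctx->r8, `beqz $t0` (Equal against Zero) now emits `if (!ctx->r8)`
// and `bnez $t0` (NotEqual against Zero) emits `if (ctx->r8)`, matching the
// MIPS semantics that the previous code inverted.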
@@ -365,7 +392,58 @@ void N64Recomp::CGenerator::get_binary_expr_string(BinaryOpType type, const Bina
}
}
void N64Recomp::CGenerator::emit_branch_condition(std::ostream& output_file, const ConditionalBranchOp& op, const InstructionContext& ctx) const {
void N64Recomp::CGenerator::emit_function_start(const std::string& function_name, size_t func_index) const {
(void)func_index;
fmt::print(output_file,
"RECOMP_FUNC void {}(uint8_t* rdram, recomp_context* ctx) {{\n"
// these variables shouldn't need to be preserved across function boundaries, so make them local for more efficient output
" uint64_t hi = 0, lo = 0, result = 0;\n"
" int c1cs = 0;\n", // cop1 conditional signal
function_name);
}
void N64Recomp::CGenerator::emit_function_end() const {
fmt::print(output_file, ";}}\n");
}
void N64Recomp::CGenerator::emit_function_call_lookup(uint32_t addr) const {
fmt::print(output_file, "LOOKUP_FUNC(0x{:08X})(rdram, ctx);\n", addr);
}
void N64Recomp::CGenerator::emit_function_call_by_register(int reg) const {
fmt::print(output_file, "LOOKUP_FUNC({})(rdram, ctx);\n", gpr_to_string(reg));
}
void N64Recomp::CGenerator::emit_function_call_reference_symbol(const Context& context, uint16_t section_index, size_t symbol_index, uint32_t target_section_offset) const {
(void)target_section_offset;
const N64Recomp::ReferenceSymbol& sym = context.get_reference_symbol(section_index, symbol_index);
fmt::print(output_file, "{}(rdram, ctx);\n", sym.name);
}
void N64Recomp::CGenerator::emit_function_call(const Context& context, size_t function_index) const {
fmt::print(output_file, "{}(rdram, ctx);\n", context.functions[function_index].name);
}
void N64Recomp::CGenerator::emit_named_function_call(const std::string& function_name) const {
fmt::print(output_file, "{}(rdram, ctx);\n", function_name);
}
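// Illustrative emitted calls for the emitters above (address and name are
// invented, and gpr_to_string is assumed to render a GPR as ctx->rN):
//
//   LOOKUP_FUNC(0x80234560)(rdram, ctx);  // emit_function_call_lookup: resolve by address
//   LOOKUP_FUNC(ctx->r25)(rdram, ctx);    // emit_function_call_by_register: jalr-style call
//   some_named_function(rdram, ctx);      // emit_named_function_call: direct call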
void N64Recomp::CGenerator::emit_goto(const std::string& target) const {
fmt::print(output_file,
" goto {};\n", target);
}
void N64Recomp::CGenerator::emit_label(const std::string& label_name) const {
fmt::print(output_file,
"{}:\n", label_name);
}
void N64Recomp::CGenerator::emit_jtbl_addend_declaration(const JumpTable& jtbl, int reg) const {
std::string jump_variable = fmt::format("jr_addend_{:08X}", jtbl.jr_vram);
fmt::print(output_file, "gpr {} = {};\n", jump_variable, gpr_to_string(reg));
}
void N64Recomp::CGenerator::emit_branch_condition(const ConditionalBranchOp& op, const InstructionContext& ctx) const {
// Thread local variables to prevent allocations when possible.
// TODO these thread locals probably don't actually help right now, so figure out a better way to prevent allocations.
thread_local std::string expr_string{};
@@ -373,19 +451,118 @@ void N64Recomp::CGenerator::emit_branch_condition(std::ostream& output_file, con
fmt::print(output_file, "if ({}) {{\n", expr_string);
}
void N64Recomp::CGenerator::emit_branch_close(std::ostream& output_file) const {
fmt::print(output_file, " }}\n");
void N64Recomp::CGenerator::emit_branch_close() const {
fmt::print(output_file, "}}\n");
}
void N64Recomp::CGenerator::emit_check_fr(std::ostream& output_file, int fpr) const {
void N64Recomp::CGenerator::emit_switch_close() const {
fmt::print(output_file, "}}\n");
}
void N64Recomp::CGenerator::emit_switch(const Context& recompiler_context, const JumpTable& jtbl, int reg) const {
(void)recompiler_context;
(void)reg;
// TODO generate code to subtract the jump table address from the register's value instead.
// Once that's done, the addend temp can be deleted to simplify the generator interface.
std::string jump_variable = fmt::format("jr_addend_{:08X}", jtbl.jr_vram);
fmt::print(output_file, "switch ({} >> 2) {{\n", jump_variable);
}
void N64Recomp::CGenerator::emit_case(int case_index, const std::string& target_label) const {
fmt::print(output_file, "case {}: goto {}; break;\n", case_index, target_label);
}
void N64Recomp::CGenerator::emit_switch_error(uint32_t instr_vram, uint32_t jtbl_vram) const {
fmt::print(output_file, "default: switch_error(__func__, 0x{:08X}, 0x{:08X});\n", instr_vram, jtbl_vram);
}
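// Putting the jump table emitters together: for a jr through a table at a
// hypothetical vram of 0x80001234 (register, case count, and label names are
// also illustrative), the generated dispatch looks roughly like:
//
//   gpr jr_addend_80001234 = ctx->r2;
//   switch (jr_addend_80001234 >> 2) {
//   case 0: goto L_80001250; break;
//   case 1: goto L_80001260; break;
//   default: switch_error(__func__, 0x80001240, 0x80001234);
//   }
//
// (the closing brace comes from emit_switch_close)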
void N64Recomp::CGenerator::emit_return(const Context& context, size_t func_index) const {
(void)func_index;
if (context.trace_mode) {
fmt::print(output_file, "TRACE_RETURN()\n ");
}
fmt::print(output_file, "return;\n");
}
void N64Recomp::CGenerator::emit_check_fr(int fpr) const {
fmt::print(output_file, "CHECK_FR(ctx, {});\n ", fpr);
}
void N64Recomp::CGenerator::emit_check_nan(std::ostream& output_file, int fpr, bool is_double) const {
void N64Recomp::CGenerator::emit_check_nan(int fpr, bool is_double) const {
fmt::print(output_file, "NAN_CHECK(ctx->f{}.{}); ", fpr, is_double ? "d" : "fl");
}
void N64Recomp::CGenerator::process_binary_op(std::ostream& output_file, const BinaryOp& op, const InstructionContext& ctx) const {
void N64Recomp::CGenerator::emit_cop0_status_read(int reg) const {
fmt::print(output_file, "{} = cop0_status_read(ctx);\n", gpr_to_string(reg));
}
void N64Recomp::CGenerator::emit_cop0_status_write(int reg) const {
fmt::print(output_file, "cop0_status_write(ctx, {});", gpr_to_string(reg));
}
void N64Recomp::CGenerator::emit_cop1_cs_read(int reg) const {
fmt::print(output_file, "{} = get_cop1_cs();\n", gpr_to_string(reg));
}
void N64Recomp::CGenerator::emit_cop1_cs_write(int reg) const {
fmt::print(output_file, "set_cop1_cs({});\n", gpr_to_string(reg));
}
void N64Recomp::CGenerator::emit_muldiv(InstrId instr_id, int reg1, int reg2) const {
switch (instr_id) {
case InstrId::cpu_mult:
fmt::print(output_file, "result = S64(S32({})) * S64(S32({})); lo = S32(result >> 0); hi = S32(result >> 32);\n", gpr_to_string(reg1), gpr_to_string(reg2));
break;
case InstrId::cpu_dmult:
fmt::print(output_file, "DMULT(S64({}), S64({}), &lo, &hi);\n", gpr_to_string(reg1), gpr_to_string(reg2));
break;
case InstrId::cpu_multu:
fmt::print(output_file, "result = U64(U32({})) * U64(U32({})); lo = S32(result >> 0); hi = S32(result >> 32);\n", gpr_to_string(reg1), gpr_to_string(reg2));
break;
case InstrId::cpu_dmultu:
fmt::print(output_file, "DMULTU(U64({}), U64({}), &lo, &hi);\n", gpr_to_string(reg1), gpr_to_string(reg2));
break;
case InstrId::cpu_div:
// Cast to 64 bits before division to prevent an arithmetic exception for s32(0x80000000) / -1
fmt::print(output_file, "lo = S32(S64(S32({0})) / S64(S32({1}))); hi = S32(S64(S32({0})) % S64(S32({1})));\n", gpr_to_string(reg1), gpr_to_string(reg2));
break;
case InstrId::cpu_ddiv:
fmt::print(output_file, "DDIV(S64({}), S64({}), &lo, &hi);\n", gpr_to_string(reg1), gpr_to_string(reg2));
break;
case InstrId::cpu_divu:
fmt::print(output_file, "lo = S32(U32({0}) / U32({1})); hi = S32(U32({0}) % U32({1}));\n", gpr_to_string(reg1), gpr_to_string(reg2));
break;
case InstrId::cpu_ddivu:
fmt::print(output_file, "DDIVU(U64({}), U64({}), &lo, &hi);\n", gpr_to_string(reg1), gpr_to_string(reg2));
break;
default:
assert(false);
break;
}
}
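// The 64-bit casts in the cpu_div case above matter on the host: dividing
// INT32_MIN by -1 as 32-bit integers overflows (and traps with SIGFPE on x86),
// while the same division done in 64 bits is well defined. A host-side
// fragment illustrating the difference:
//
//   #include <stdint.h>
//   int32_t n = INT32_MIN, d = -1;
//   // int32_t q = n / d;                  // overflow: undefined behavior, traps on x86
//   int64_t q64 = (int64_t)n / (int64_t)d; // well defined: 2147483648
//   int32_t lo  = (int32_t)q64;            // wraps back to INT32_MIN, matching S32(...)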
void N64Recomp::CGenerator::emit_syscall(uint32_t instr_vram) const {
fmt::print(output_file, "recomp_syscall_handler(rdram, ctx, 0x{:08X});\n", instr_vram);
}
void N64Recomp::CGenerator::emit_do_break(uint32_t instr_vram) const {
fmt::print(output_file, "do_break({});\n", instr_vram);
}
void N64Recomp::CGenerator::emit_pause_self() const {
fmt::print(output_file, "pause_self(rdram);\n");
}
void N64Recomp::CGenerator::emit_trigger_event(uint32_t event_index) const {
fmt::print(output_file, "recomp_trigger_event(rdram, ctx, base_event_index + {});\n", event_index);
}
void N64Recomp::CGenerator::emit_comment(const std::string& comment) const {
fmt::print(output_file, "// {}\n", comment);
}
void N64Recomp::CGenerator::process_binary_op(const BinaryOp& op, const InstructionContext& ctx) const {
// Thread local variables to prevent allocations when possible.
// TODO these thread locals probably don't actually help right now, so figure out a better way to prevent allocations.
thread_local std::string output{};
@@ -395,24 +572,22 @@ void N64Recomp::CGenerator::process_binary_op(std::ostream& output_file, const B
fmt::print(output_file, "{} = {};\n", output, expression);
}
void N64Recomp::CGenerator::process_unary_op(std::ostream& output_file, const UnaryOp& op, const InstructionContext& ctx) const {
void N64Recomp::CGenerator::process_unary_op(const UnaryOp& op, const InstructionContext& ctx) const {
// Thread local variables to prevent allocations when possible.
// TODO these thread locals probably don't actually help right now, so figure out a better way to prevent allocations.
thread_local std::string output{};
thread_local std::string input{};
bool is_infix;
get_operand_string(op.output, UnaryOpType::None, ctx, output);
get_operand_string(op.input, op.operation, ctx, input);
fmt::print(output_file, "{} = {};\n", output, input);
}
void N64Recomp::CGenerator::process_store_op(std::ostream& output_file, const StoreOp& op, const InstructionContext& ctx) const {
void N64Recomp::CGenerator::process_store_op(const StoreOp& op, const InstructionContext& ctx) const {
// Thread local variables to prevent allocations when possible.
// TODO these thread locals probably don't actually help right now, so figure out a better way to prevent allocations.
thread_local std::string base_str{};
thread_local std::string imm_str{};
thread_local std::string value_input{};
bool is_infix;
get_operand_string(Operand::Base, UnaryOpType::None, ctx, base_str);
get_operand_string(Operand::ImmS16, UnaryOpType::None, ctx, imm_str);
get_operand_string(op.value_input, UnaryOpType::None, ctx, value_input);


@@ -3,7 +3,7 @@
#include <toml++/toml.hpp>
#include "fmt/format.h"
#include "config.h"
#include "n64recomp.h"
#include "recompiler/context.h"
std::filesystem::path concat_if_not_empty(const std::filesystem::path& parent, const std::filesystem::path& child) {
if (!child.empty()) {
@@ -93,7 +93,7 @@ std::vector<std::string> get_ignored_funcs(const toml::table* patches_data) {
// Make room for all the ignored funcs in the array.
ignored_funcs.reserve(ignored_funcs_array->size());
// Gather the stubs and place them into the array.
// Gather the ignored funcs and place them into the array.
ignored_funcs_array->for_each([&ignored_funcs](auto&& el) {
if constexpr (toml::is_string<decltype(el)>) {
ignored_funcs.push_back(*el);
@@ -104,42 +104,56 @@ std::vector<std::string> get_ignored_funcs(const toml::table* patches_data) {
return ignored_funcs;
}
std::vector<N64Recomp::FunctionSize> get_func_sizes(const toml::table* patches_data) {
std::vector<N64Recomp::FunctionSize> func_sizes{};
std::vector<std::string> get_renamed_funcs(const toml::table* patches_data) {
std::vector<std::string> renamed_funcs{};
// Check if the func size array exists.
const toml::node_view funcs_data = (*patches_data)["function_sizes"];
if (funcs_data.is_array()) {
const toml::array* sizes_array = funcs_data.as_array();
// Check if the renamed funcs array exists.
const toml::node_view renamed_funcs_data = (*patches_data)["renamed"];
// Copy all the sizes into the output vector.
sizes_array->for_each([&func_sizes](auto&& el) {
if constexpr (toml::is_table<decltype(el)>) {
const toml::table& cur_size = *el.as_table();
if (renamed_funcs_data.is_array()) {
const toml::array* renamed_funcs_array = renamed_funcs_data.as_array();
// Get the function name and size.
std::optional<std::string> func_name = cur_size["name"].value<std::string>();
std::optional<uint32_t> func_size = cur_size["size"].value<uint32_t>();
// Make room for all the renamed funcs in the array.
renamed_funcs.reserve(renamed_funcs_array->size());
if (func_name.has_value() && func_size.has_value()) {
// Make sure the size is divisible by 4
if (func_size.value() & (4 - 1)) {
// It's not, so throw an error (and make it look like a normal toml one).
throw toml::parse_error("Function size is not divisible by 4", el.source());
}
}
else {
throw toml::parse_error("Manually size function is missing required value(s)", el.source());
}
func_sizes.emplace_back(func_name.value(), func_size.value());
}
else {
throw toml::parse_error("Invalid manually sized function entry", el.source());
// Gather the renamed funcs and place them into the array.
renamed_funcs_array->for_each([&renamed_funcs](auto&& el) {
if constexpr (toml::is_string<decltype(el)>) {
renamed_funcs.push_back(*el);
}
});
}
return renamed_funcs;
}
std::vector<N64Recomp::FunctionSize> get_func_sizes(const toml::array* func_sizes_array) {
std::vector<N64Recomp::FunctionSize> func_sizes{};
// Reserve room for all the funcs in the array.
func_sizes.reserve(func_sizes_array->size());
func_sizes_array->for_each([&func_sizes](auto&& el) {
if constexpr (toml::is_table<decltype(el)>) {
std::optional<std::string> func_name = el["name"].template value<std::string>();
std::optional<uint32_t> func_size = el["size"].template value<uint32_t>();
if (func_name.has_value() && func_size.has_value()) {
// Make sure the size is divisible by 4
if (func_size.value() & (4 - 1)) {
// It's not, so throw an error (and make it look like a normal toml one).
throw toml::parse_error("Function size is not divisible by 4", el.source());
}
func_sizes.emplace_back(func_name.value(), func_size.value());
}
else {
throw toml::parse_error("Manually sized function is missing required value(s)", el.source());
}
}
else {
throw toml::parse_error("Missing required value in function_sizes array", el.source());
}
});
return func_sizes;
}
@@ -187,8 +201,8 @@ std::vector<N64Recomp::InstructionPatch> get_instruction_patches(const toml::tab
return ret;
}
std::vector<N64Recomp::FunctionHook> get_function_hooks(const toml::table* patches_data) {
std::vector<N64Recomp::FunctionHook> ret;
std::vector<N64Recomp::FunctionTextHook> get_function_hooks(const toml::table* patches_data) {
std::vector<N64Recomp::FunctionTextHook> ret;
// Check if the function hook array exists.
const toml::node_view func_hook_data = (*patches_data)["hook"];
@@ -216,7 +230,7 @@ std::vector<N64Recomp::FunctionHook> get_function_hooks(const toml::table* patch
throw toml::parse_error("before_vram is not word-aligned", el.source());
}
ret.push_back(N64Recomp::FunctionHook{
ret.push_back(N64Recomp::FunctionTextHook{
.func_name = func_name.value(),
.before_vram = before_vram.has_value() ? (int32_t)before_vram.value() : 0,
.text = text.value(),
@@ -329,6 +343,13 @@ N64Recomp::Config::Config(const char* path) {
manual_functions = get_manual_funcs(array);
}
// Manual function sizes (optional)
toml::node_view function_sizes_data = input_data["function_sizes"];
if (function_sizes_data.is_array()) {
const toml::array* array = function_sizes_data.as_array();
manual_func_sizes = get_func_sizes(array);
}
// Output binary path when using an elf file input, includes patching reference symbol MIPS32 relocs (optional)
std::optional<std::string> output_binary_path_opt = input_data["output_binary_path"].value<std::string>();
if (output_binary_path_opt.has_value()) {
@@ -352,7 +373,7 @@ N64Recomp::Config::Config(const char* path) {
recomp_include = recomp_include_opt.value();
}
else {
recomp_include = "#include \"librecomp/recomp.h\"";
recomp_include = "#include \"recomp.h\"";
}
std::optional<int32_t> funcs_per_file_opt = input_data["functions_per_output_file"].value<int32_t>();
@@ -377,16 +398,28 @@ N64Recomp::Config::Config(const char* path) {
// Ignored funcs array (optional)
ignored_funcs = get_ignored_funcs(table);
// Renamed funcs array (optional)
renamed_funcs = get_renamed_funcs(table);
// Single-instruction patches (optional)
instruction_patches = get_instruction_patches(table);
// Manual function sizes (optional)
manual_func_sizes = get_func_sizes(table);
// Function hooks (optional)
function_hooks = get_function_hooks(table);
}
// Use trace mode if enabled (optional)
std::optional<bool> trace_mode_opt = input_data["trace_mode"].value<bool>();
if (trace_mode_opt.has_value()) {
trace_mode = trace_mode_opt.value();
if (trace_mode) {
recomp_include += "\n#include \"trace.h\"";
}
}
else {
trace_mode = false;
}
// Function reference symbols file (optional)
std::optional<std::string> func_reference_syms_file_opt = input_data["func_reference_syms_file"].value<std::string>();
if (func_reference_syms_file_opt.has_value()) {
@@ -476,6 +509,7 @@ bool N64Recomp::Context::from_symbol_file(const std::filesystem::path& symbol_fi
std::optional<uint32_t> vram_addr = el["vram"].template value<uint32_t>();
std::optional<uint32_t> size = el["size"].template value<uint32_t>();
std::optional<std::string> name = el["name"].template value<std::string>();
std::optional<uint32_t> got_ram_addr = el["got_address"].template value<uint32_t>();
if (!rom_addr.has_value() || !vram_addr.has_value() || !size.has_value() || !name.has_value()) {
throw toml::parse_error("Section entry missing required field(s)", el.source());
@@ -488,6 +522,7 @@ bool N64Recomp::Context::from_symbol_file(const std::filesystem::path& symbol_fi
section.ram_addr = vram_addr.value();
section.size = size.value();
section.name = name.value();
section.got_ram_addr = got_ram_addr;
section.executable = true;
// Read functions for the section.
@@ -574,7 +609,7 @@ bool N64Recomp::Context::from_symbol_file(const std::filesystem::path& symbol_fi
RelocType reloc_type = reloc_type_from_name(type_string.value());
if (reloc_type != RelocType::R_MIPS_HI16 && reloc_type != RelocType::R_MIPS_LO16 && reloc_type != RelocType::R_MIPS_32) {
if (reloc_type != RelocType::R_MIPS_HI16 && reloc_type != RelocType::R_MIPS_LO16 && reloc_type != RelocType::R_MIPS_26 && reloc_type != RelocType::R_MIPS_32) {
throw toml::parse_error("Invalid reloc entry type", reloc_el.source());
}


@@ -12,7 +12,7 @@ namespace N64Recomp {
uint32_t value;
};
struct FunctionHook {
struct FunctionTextHook {
std::string func_name;
int32_t before_vram;
std::string text;
@@ -42,6 +42,7 @@ namespace N64Recomp {
bool single_file_output;
bool use_absolute_symbols;
bool unpaired_lo16_warnings;
bool trace_mode;
bool allow_exports;
bool strict_patch_mode;
std::filesystem::path elf_path;
@@ -54,8 +55,9 @@ namespace N64Recomp {
std::filesystem::path output_binary_path;
std::vector<std::string> stubbed_funcs;
std::vector<std::string> ignored_funcs;
std::vector<std::string> renamed_funcs;
std::vector<InstructionPatch> instruction_patches;
std::vector<FunctionHook> function_hooks;
std::vector<FunctionTextHook> function_hooks;
std::vector<FunctionSize> manual_func_sizes;
std::vector<ManualFunction> manual_functions;
std::string bss_section_suffix;


@@ -3,7 +3,7 @@
#include "fmt/format.h"
// #include "fmt/ostream.h"
#include "n64recomp.h"
#include "recompiler/context.h"
#include "elfio/elfio.hpp"
bool read_symbols(N64Recomp::Context& context, const ELFIO::elfio& elf_file, ELFIO::section* symtab_section, const N64Recomp::ElfParsingConfig& elf_config, bool dumping_context, std::unordered_map<uint16_t, std::vector<N64Recomp::DataSymbol>>& data_syms) {
@@ -58,8 +58,9 @@ bool read_symbols(N64Recomp::Context& context, const ELFIO::elfio& elf_file, ELF
continue;
}
if (section_index < context.sections.size()) {
if (section_index < context.sections.size()) {
// Check if this symbol is the entrypoint
// TODO this never fires, the check is broken due to signedness
if (elf_config.has_entrypoint && value == elf_config.entrypoint_address && type == ELFIO::STT_FUNC) {
if (found_entrypoint_func) {
fmt::print(stderr, "Ambiguous entrypoint: {}\n", name);
@@ -103,10 +104,10 @@ bool read_symbols(N64Recomp::Context& context, const ELFIO::elfio& elf_file, ELF
if (section_index < context.sections.size()) {
auto section_offset = value - elf_file.sections[section_index]->get_address();
const uint32_t* words = reinterpret_cast<const uint32_t*>(elf_file.sections[section_index]->get_data() + section_offset);
uint32_t vram = static_cast<uint32_t>(value);
uint32_t num_instructions = type == ELFIO::STT_FUNC ? size / 4 : 0;
uint32_t rom_address = static_cast<uint32_t>(section_offset + section.rom_addr);
const uint32_t* words = reinterpret_cast<const uint32_t*>(context.rom.data() + rom_address);
section.function_addrs.push_back(vram);
context.functions_by_vram[vram].push_back(context.functions.size());
@@ -164,27 +165,30 @@ bool read_symbols(N64Recomp::Context& context, const ELFIO::elfio& elf_file, ELF
// The symbol wasn't detected as a function, so add it to the data symbols if the context is being dumped.
if (!recorded_symbol && dumping_context && !name.empty()) {
uint32_t vram = static_cast<uint32_t>(value);
// Skip internal symbols.
if (ELF_ST_VISIBILITY(other) != ELFIO::STV_INTERNAL) {
uint32_t vram = static_cast<uint32_t>(value);
// Place this symbol in the absolute symbol list if it's in the absolute section.
uint16_t target_section_index = section_index;
if (section_index == ELFIO::SHN_ABS) {
target_section_index = N64Recomp::SectionAbsolute;
}
else if (section_index >= context.sections.size()) {
fmt::print("Symbol \"{}\" not in a valid section ({})\n", name, section_index);
}
// Place this symbol in the absolute symbol list if it's in the absolute section.
uint16_t target_section_index = section_index;
if (section_index == ELFIO::SHN_ABS) {
target_section_index = N64Recomp::SectionAbsolute;
}
else if (section_index >= context.sections.size()) {
fmt::print("Symbol \"{}\" not in a valid section ({})\n", name, section_index);
}
// Move this symbol into the corresponding non-bss section if it's in a bss section.
auto find_bss_it = bss_section_to_target_section.find(target_section_index);
if (find_bss_it != bss_section_to_target_section.end()) {
target_section_index = find_bss_it->second;
}
// Move this symbol into the corresponding non-bss section if it's in a bss section.
auto find_bss_it = bss_section_to_target_section.find(target_section_index);
if (find_bss_it != bss_section_to_target_section.end()) {
target_section_index = find_bss_it->second;
}
data_syms[target_section_index].emplace_back(
vram,
std::move(name)
);
data_syms[target_section_index].emplace_back(
vram,
std::move(name)
);
}
}
}
@@ -364,8 +368,8 @@ ELFIO::section* read_sections(N64Recomp::Context& context, const N64Recomp::ElfP
ELFIO::relocation_section_accessor rel_accessor{ elf_file, reloc_find->second };
// Allocate space for the relocs in this section
section_out.relocs.resize(rel_accessor.get_entries_num());
// Track whether the previous reloc was a HI16 and its previous full_immediate
bool prev_hi = false;
// Track consecutive identical HI16 relocs to handle the GNU extension to the o32 ABI.
int prev_hi_count = 0;
// Track whether the previous reloc was a LO16
bool prev_lo = false;
uint32_t prev_hi_immediate = 0;
@@ -458,7 +462,7 @@ ELFIO::section* read_sections(N64Recomp::Context& context, const N64Recomp::ElfP
uint32_t rel_immediate = reloc_rom_word & 0xFFFF;
uint32_t full_immediate = (prev_hi_immediate << 16) + (int16_t)rel_immediate;
reloc_out.target_section_offset = full_immediate + rel_symbol_offset - rel_section_vram;
if (prev_hi) {
if (prev_hi_count != 0) {
if (prev_hi_symbol != rel_symbol) {
fmt::print(stderr, "Paired HI16 and LO16 relocations have different symbols\n"
" LO16 reloc index {} in section {} referencing symbol {} with offset 0x{:08X}\n",
@@ -466,8 +470,12 @@ ELFIO::section* read_sections(N64Recomp::Context& context, const N64Recomp::ElfP
return nullptr;
}
// Set the previous HI16 relocs' relocated address.
section_out.relocs[i - 1].target_section_offset = reloc_out.target_section_offset;
// Set the previous HI16 relocs' relocated addresses.
for (size_t paired_index = i - prev_hi_count; paired_index < i; paired_index++) {
uint32_t hi_immediate = section_out.relocs[paired_index].target_section_offset;
uint32_t paired_full_immediate = hi_immediate + (int16_t)rel_immediate;
section_out.relocs[paired_index].target_section_offset = paired_full_immediate + rel_symbol_offset - rel_section_vram;
}
}
else {
// Orphaned LO16 reloc warnings.
@@ -491,7 +499,8 @@ ELFIO::section* read_sections(N64Recomp::Context& context, const N64Recomp::ElfP
}
prev_lo = true;
} else {
if (prev_hi) {
// Allow a HI16 to follow another HI16 for the GNU ABI extension.
if (reloc_out.type != N64Recomp::RelocType::R_MIPS_HI16 && prev_hi_count != 0) {
// This is an invalid elf as the MIPS System V ABI documentation states:
// "Each relocation type of R_MIPS_HI16 must have an associated R_MIPS_LO16 entry
// immediately following it in the list of relocations."
@@ -504,11 +513,26 @@ ELFIO::section* read_sections(N64Recomp::Context& context, const N64Recomp::ElfP
if (reloc_out.type == N64Recomp::RelocType::R_MIPS_HI16) {
uint32_t rel_immediate = reloc_rom_word & 0xFFFF;
prev_hi = true;
prev_hi_immediate = rel_immediate;
prev_hi_symbol = rel_symbol;
// First HI16, store its immediate.
if (prev_hi_count == 0) {
prev_hi_immediate = rel_immediate;
prev_hi_symbol = rel_symbol;
}
// HI16 that follows another HI16, ensure they reference the same symbol.
else {
if (prev_hi_symbol != rel_symbol) {
fmt::print(stderr, "HI16 reloc (index {} symbol {} offset 0x{:08X}) follows another HI16 reloc with a different symbol (index {} symbol {} offset 0x{:08X}) in section {}\n",
i, rel_symbol, section_out.relocs[i].address,
i - 1, prev_hi_symbol, section_out.relocs[i - 1].address,
section_out.name);
return nullptr;
}
}
// Populate the reloc temporarily, the full offset will be calculated upon pairing.
reloc_out.target_section_offset = rel_immediate << 16;
prev_hi_count++;
} else {
prev_hi = false;
prev_hi_count = 0;
}
if (reloc_out.type == N64Recomp::RelocType::R_MIPS_32) {
@@ -547,6 +571,36 @@ ELFIO::section* read_sections(N64Recomp::Context& context, const N64Recomp::ElfP
return a.address < b.address;
}
);
// Patch the ROM word for HI16 and LO16 reference symbol relocs to non-relocatable sections.
for (size_t i = 0; i < section_out.relocs.size(); i++) {
auto& reloc = section_out.relocs[i];
if (reloc.reference_symbol && (reloc.type == N64Recomp::RelocType::R_MIPS_HI16 || reloc.type == N64Recomp::RelocType::R_MIPS_LO16)) {
bool target_section_relocatable = context.is_reference_section_relocatable(reloc.target_section);
if (!target_section_relocatable) {
uint32_t reloc_rom_addr = reloc.address - section_out.ram_addr + section_out.rom_addr;
uint32_t reloc_rom_word = byteswap(*reinterpret_cast<const uint32_t*>(context.rom.data() + reloc_rom_addr));
uint32_t ref_section_vram = context.get_reference_section_vram(reloc.target_section);
uint32_t full_immediate = reloc.target_section_offset + ref_section_vram;
uint32_t imm;
if (reloc.type == N64Recomp::RelocType::R_MIPS_HI16) {
imm = (full_immediate >> 16) + ((full_immediate >> 15) & 1);
}
else {
imm = full_immediate & 0xFFFF;
}
*reinterpret_cast<uint32_t*>(context.rom.data() + reloc_rom_addr) = byteswap(reloc_rom_word | imm);
// Remove the reloc by setting it to a type of NONE.
reloc.type = N64Recomp::RelocType::R_MIPS_NONE;
reloc.reference_symbol = false;
reloc.symbol_index = (uint32_t)-1;
}
}
}
}
}
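A worked example of the HI16/LO16 arithmetic used in both the pairing loop and the patching pass above (target addresses hypothetical): because the LO16 half is sign-extended when the pair is recombined, the HI16 half has to absorb a carry whenever bit 15 of the full value is set, which is exactly what the `(full_immediate >> 15) & 1` term provides.

#include <assert.h>
#include <stdint.h>

// Split a full 32-bit value into a HI16/LO16 pair, then recombine it the way
// the pairing code does: (hi << 16) + sign-extended lo.
static void split_and_recombine(uint32_t full) {
    uint16_t lo = (uint16_t)(full & 0xFFFF);
    uint16_t hi = (uint16_t)((full >> 16) + ((full >> 15) & 1)); // carry compensates sign extension
    uint32_t back = ((uint32_t)hi << 16) + (uint32_t)(int16_t)lo;
    assert(back == full);
}

// split_and_recombine(0x80201FFF); // bit 15 clear: hi = 0x8020, lo = 0x1FFF
// split_and_recombine(0x8020F000); // bit 15 set:   hi = 0x8021, lo = (int16_t)0xF000 = -4096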


@@ -9,7 +9,7 @@
#include "fmt/format.h"
#include "fmt/ostream.h"
#include "n64recomp.h"
#include "recompiler/context.h"
#include "config.h"
#include <set>
@@ -111,7 +111,7 @@ bool compare_files(const std::filesystem::path& file1_path, const std::filesyste
return std::equal(begin1, std::istreambuf_iterator<char>(), begin2); //Second argument is end-of-range iterator
}
bool recompile_single_function(const N64Recomp::Context& context, const N64Recomp::Function& func, const std::string& recomp_include, const std::filesystem::path& output_path, std::span<std::vector<uint32_t>> static_funcs_out) {
bool recompile_single_function(const N64Recomp::Context& context, size_t func_index, const std::string& recomp_include, const std::filesystem::path& output_path, std::span<std::vector<uint32_t>> static_funcs_out) {
// Open the temporary output file
std::filesystem::path temp_path = output_path;
temp_path.replace_extension(".tmp");
@@ -127,7 +127,7 @@ bool recompile_single_function(const N64Recomp::Context& context, const N64Recom
"\n",
recomp_include);
if (!N64Recomp::recompile_function(context, func, output_file, static_funcs_out, false)) {
if (!N64Recomp::recompile_function(context, func_index, output_file, static_funcs_out, false)) {
return false;
}
@@ -199,7 +199,7 @@ void dump_context(const N64Recomp::Context& context, const std::unordered_map<ui
for (const N64Recomp::Reloc& reloc : section.relocs) {
if (reloc.target_section == section_index || reloc.target_section == section.bss_section_index) {
// TODO allow emitting MIPS32 relocs for specific sections via a toml option for TLB mapping support.
if (reloc.type == N64Recomp::RelocType::R_MIPS_HI16 || reloc.type == N64Recomp::RelocType::R_MIPS_LO16) {
if (reloc.type == N64Recomp::RelocType::R_MIPS_HI16 || reloc.type == N64Recomp::RelocType::R_MIPS_LO16 || reloc.type == N64Recomp::RelocType::R_MIPS_26) {
fmt::print(func_context_file, " {{ type = \"{}\", vram = 0x{:08X}, target_vram = 0x{:08X} }},\n",
reloc_names[static_cast<int>(reloc.type)], reloc.address, reloc.target_section_offset + section.ram_addr);
}
@@ -272,12 +272,18 @@ int main(int argc, char** argv) {
std::exit(EXIT_FAILURE);
};
// TODO expose a way to dump the context from the command line.
bool dumping_context = false;
bool dumping_context;
if (argc != 2) {
fmt::print("Usage: {} [config file]\n", argv[0]);
std::exit(EXIT_SUCCESS);
if (argc >= 3) {
std::string arg2 = argv[2];
if (arg2 == "--dump-context") {
dumping_context = true;
} else {
fmt::print("Usage: {} <config file> [--dump-context]\n", argv[0]);
std::exit(EXIT_SUCCESS);
}
} else {
dumping_context = false;
}
const char* config_path = argv[1];
@@ -485,10 +491,27 @@ int main(int argc, char** argv) {
// This helps prevent typos in the config file or functions renamed between versions from causing issues.
exit_failure(fmt::format("Function {} is set as ignored in the config file but does not exist!", ignored_func));
}
// Mark the function as .
// Mark the function as ignored.
context.functions[func_find->second].ignored = true;
}
// Rename any functions specified in the config file.
for (const std::string& renamed_func : config.renamed_funcs) {
// Check if the specified function exists.
auto func_find = context.functions_by_name.find(renamed_func);
if (func_find == context.functions_by_name.end()) {
// Function doesn't exist, present an error to the user instead of silently failing to rename it.
// This helps prevent typos in the config file or functions renamed between versions from causing issues.
exit_failure(fmt::format("Function {} is set as renamed in the config file but does not exist!", renamed_func));
}
// Rename the function.
N64Recomp::Function* func = &context.functions[func_find->second];
func->name = func->name + "_recomp";
}
// Propagate the trace mode parameter.
context.trace_mode = config.trace_mode;
// Apply any single-instruction patches.
for (const N64Recomp::InstructionPatch& patch : config.instruction_patches) {
// Check if the specified function exists.
@@ -513,7 +536,7 @@ int main(int argc, char** argv) {
}
// Apply any function hooks.
for (const N64Recomp::FunctionHook& patch : config.function_hooks) {
for (const N64Recomp::FunctionTextHook& patch : config.function_hooks) {
// Check if the specified function exists.
auto func_find = context.functions_by_name.find(patch.func_name);
if (func_find == context.functions_by_name.end()) {
@@ -559,6 +582,16 @@ int main(int argc, char** argv) {
"#include \"funcs.h\"\n"
"\n",
config.recomp_include);
// Print the extern for the base event index and the define to rename it if exports are allowed.
if (config.allow_exports) {
fmt::print(current_output_file,
"extern uint32_t builtin_base_event_index;\n"
"#define base_event_index builtin_base_event_index\n"
"\n"
);
}
cur_file_function_count = 0;
output_file_count++;
};
@@ -571,11 +604,86 @@ int main(int argc, char** argv) {
"#include \"funcs.h\"\n"
"\n",
config.recomp_include);
// Print the extern for the base event index and the define to rename it if exports are allowed.
if (config.allow_exports) {
fmt::print(current_output_file,
"extern uint32_t builtin_base_event_index;\n"
"#define base_event_index builtin_base_event_index\n"
"\n"
);
}
}
else if (config.functions_per_output_file > 1) {
open_new_output_file();
}
std::unordered_map<size_t, size_t> function_index_to_event_index{};
// If exports are enabled, scan all the relocs and modify ones that point to an event function.
if (config.allow_exports) {
// First, find the event section by scanning for a section with the special name.
bool event_section_found = false;
size_t event_section_index = 0;
uint32_t event_section_vram = 0;
for (size_t section_index = 0; section_index < context.sections.size(); section_index++) {
const auto& section = context.sections[section_index];
if (section.name == N64Recomp::EventSectionName) {
event_section_found = true;
event_section_index = section_index;
event_section_vram = section.ram_addr;
break;
}
}
// If an event section was found, proceed with the reloc scanning.
if (event_section_found) {
for (auto& section : context.sections) {
for (auto& reloc : section.relocs) {
// Event symbols aren't reference symbols, since they come from the elf itself.
// Therefore, skip reference symbol relocs.
if (reloc.reference_symbol) {
continue;
}
// Check if the reloc points to the event section.
if (reloc.target_section == event_section_index) {
// It does, so find the function it's pointing at.
size_t func_index = context.find_function_by_vram_section(reloc.target_section_offset + event_section_vram, event_section_index);
if (func_index == (size_t)-1) {
exit_failure(fmt::format("Failed to find event function with vram {}.\n", reloc.target_section_offset + event_section_vram));
}
// Ensure the reloc is an R_MIPS_26 one before modifying it, since those are the only type allowed to reference an event function.
if (reloc.type != N64Recomp::RelocType::R_MIPS_26) {
const auto& function = context.functions[func_index];
exit_failure(fmt::format("Function {} is an import and cannot have its address taken.\n",
function.name));
}
// Check if this function has been assigned an event index already, and assign it if not.
size_t event_index;
auto find_event_it = function_index_to_event_index.find(func_index);
if (find_event_it != function_index_to_event_index.end()) {
event_index = find_event_it->second;
}
else {
event_index = function_index_to_event_index.size();
function_index_to_event_index.emplace(func_index, event_index);
}
// Modify the reloc's fields accordingly.
reloc.target_section_offset = 0;
reloc.symbol_index = event_index;
reloc.target_section = N64Recomp::SectionEvent;
reloc.reference_symbol = true;
}
}
}
}
}
std::vector<size_t> export_function_indices{};
bool failed_strict_mode = false;
@@ -617,7 +725,7 @@ int main(int argc, char** argv) {
// Recompile the function.
if (config.single_file_output || config.functions_per_output_file > 1) {
result = N64Recomp::recompile_function(context, func, current_output_file, static_funcs_by_section, false);
result = N64Recomp::recompile_function(context, i, current_output_file, static_funcs_by_section, false);
if (!config.single_file_output) {
cur_file_function_count++;
if (cur_file_function_count >= config.functions_per_output_file) {
@@ -626,7 +734,7 @@ int main(int argc, char** argv) {
}
}
else {
result = recompile_single_function(context, func, config.recomp_include, config.output_func_path / (func.name + ".c"), static_funcs_by_section);
result = recompile_single_function(context, i, config.recomp_include, config.output_func_path / (func.name + ".c"), static_funcs_by_section);
}
if (result == false) {
fmt::print(stderr, "Error recompiling {}\n", func.name);
@@ -689,22 +797,25 @@ int main(int argc, char** argv) {
std::vector<uint32_t> insn_words((cur_func_end - static_func_addr) / sizeof(uint32_t));
insn_words.assign(func_rom_start, func_rom_start + insn_words.size());
N64Recomp::Function func {
// Create the new function and add it to the context.
size_t new_func_index = context.functions.size();
context.functions.emplace_back(
static_func_addr,
rom_addr,
std::move(insn_words),
fmt::format("static_{}_{:08X}", section_index, static_func_addr),
static_cast<uint16_t>(section_index),
false
};
);
const N64Recomp::Function& new_func = context.functions[new_func_index];
fmt::print(func_header_file,
"void {}(uint8_t* rdram, recomp_context* ctx);\n", func.name);
"void {}(uint8_t* rdram, recomp_context* ctx);\n", new_func.name);
bool result;
size_t prev_num_statics = static_funcs_by_section[func.section_index].size();
size_t prev_num_statics = static_funcs_by_section[new_func.section_index].size();
if (config.single_file_output || config.functions_per_output_file > 1) {
result = N64Recomp::recompile_function(context, func, current_output_file, static_funcs_by_section, false);
result = N64Recomp::recompile_function(context, new_func_index, current_output_file, static_funcs_by_section, false);
if (!config.single_file_output) {
cur_file_function_count++;
if (cur_file_function_count >= config.functions_per_output_file) {
@@ -713,14 +824,14 @@ int main(int argc, char** argv) {
}
}
else {
result = recompile_single_function(context, func, config.recomp_include, config.output_func_path / (func.name + ".c"), static_funcs_by_section);
result = recompile_single_function(context, new_func_index, config.recomp_include, config.output_func_path / (new_func.name + ".c"), static_funcs_by_section);
}
// Add any new static functions that were found while recompiling this one.
size_t cur_num_statics = static_funcs_by_section[func.section_index].size();
size_t cur_num_statics = static_funcs_by_section[new_func.section_index].size();
if (cur_num_statics != prev_num_statics) {
for (size_t new_static_index = prev_num_statics; new_static_index < cur_num_statics; new_static_index++) {
uint32_t new_static_vram = static_funcs_by_section[func.section_index][new_static_index];
uint32_t new_static_vram = static_funcs_by_section[new_func.section_index][new_static_index];
if (!statics_set.contains(new_static_vram)) {
statics_set.emplace(new_static_vram);
@@ -730,7 +841,7 @@ int main(int argc, char** argv) {
}
if (result == false) {
fmt::print(stderr, "Error recompiling {}\n", func.name);
fmt::print(stderr, "Error recompiling {}\n", new_func.name);
std::exit(EXIT_FAILURE);
}
}
@@ -755,13 +866,6 @@ int main(int argc, char** argv) {
);
}
fmt::print(func_header_file,
"\n"
"#ifdef __cplusplus\n"
"}}\n"
"#endif\n"
);
{
std::ofstream overlay_file(config.output_func_path / "recomp_overlays.inl");
std::string section_load_table = "static SectionTableEntry section_table[] = {\n";
@@ -780,6 +884,7 @@ int main(int argc, char** argv) {
for (size_t section_index = 0; section_index < context.sections.size(); section_index++) {
const auto& section = context.sections[section_index];
const auto& section_funcs = context.section_functions[section_index];
const auto& section_relocs = section.relocs;
if (section.has_mips32_relocs || !section_funcs.empty()) {
std::string_view section_name_trimmed{ section.name };
@@ -793,21 +898,66 @@ int main(int argc, char** argv) {
}
std::string section_funcs_array_name = fmt::format("section_{}_{}_funcs", section_index, section_name_trimmed);
std::string section_relocs_array_name = section_relocs.empty() ? "nullptr" : fmt::format("section_{}_{}_relocs", section_index, section_name_trimmed);
std::string section_relocs_array_size = section_relocs.empty() ? "0" : fmt::format("ARRLEN({})", section_relocs_array_name);
section_load_table += fmt::format(" {{ .rom_addr = 0x{0:08X}, .ram_addr = 0x{1:08X}, .size = 0x{2:08X}, .funcs = {3}, .num_funcs = ARRLEN({3}), .index = {4} }},\n",
section.rom_addr, section.ram_addr, section.size, section_funcs_array_name, section_index);
// Write the section's table entry.
section_load_table += fmt::format(" {{ .rom_addr = 0x{0:08X}, .ram_addr = 0x{1:08X}, .size = 0x{2:08X}, .funcs = {3}, .num_funcs = ARRLEN({3}), .relocs = {4}, .num_relocs = {5}, .index = {6} }},\n",
section.rom_addr, section.ram_addr, section.size, section_funcs_array_name,
section_relocs_array_name, section_relocs_array_size, section_index);
// Write the section's functions.
fmt::print(overlay_file, "static FuncEntry {}[] = {{\n", section_funcs_array_name);
for (size_t func_index : section_funcs) {
const auto& func = context.functions[func_index];
size_t func_size = func.reimplemented ? 0 : func.words.size() * sizeof(func.words[0]);
if (func.reimplemented || (!func.name.empty() && !func.ignored && func.words.size() != 0)) {
fmt::print(overlay_file, " {{ .func = {}, .offset = 0x{:08x} }},\n", func.name, func.rom - section.rom_addr);
fmt::print(overlay_file, " {{ .func = {}, .offset = 0x{:08X}, .rom_size = 0x{:08X} }},\n",
func.name, func.rom - section.rom_addr, func_size);
}
}
fmt::print(overlay_file, "}};\n");
// Write the section's relocations.
if (!section_relocs.empty()) {
// Determine if reference symbols are being used.
bool reference_symbol_mode = !config.func_reference_syms_file_path.empty();
fmt::print(overlay_file, "static RelocEntry {}[] = {{\n", section_relocs_array_name);
for (const N64Recomp::Reloc& reloc : section_relocs) {
bool emit_reloc = false;
uint16_t target_section = reloc.target_section;
// In reference symbol mode, only emit relocations into the table that point to
// non-absolute reference symbols, events, or manual patch symbols.
if (reference_symbol_mode) {
bool manual_patch_symbol = N64Recomp::is_manual_patch_symbol(reloc.target_section_offset);
bool is_absolute = reloc.target_section == N64Recomp::SectionAbsolute;
emit_reloc = (reloc.reference_symbol && !is_absolute) || target_section == N64Recomp::SectionEvent || manual_patch_symbol;
}
// Otherwise, emit all relocs.
else {
emit_reloc = true;
}
if (emit_reloc) {
uint32_t target_section_offset;
if (reloc.target_section == N64Recomp::SectionEvent) {
target_section_offset = reloc.symbol_index;
}
else {
target_section_offset = reloc.target_section_offset;
}
fmt::print(overlay_file, " {{ .offset = 0x{:08X}, .target_section_offset = 0x{:08X}, .target_section = {}, .type = {} }}, \n",
reloc.address - section.ram_addr, target_section_offset, reloc.target_section, reloc_names[static_cast<size_t>(reloc.type)] );
}
}
fmt::print(overlay_file, "}};\n");
}
written_sections++;
}
}
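As a sketch of what this emission produces, with all names, addresses, and reloc values invented for illustration (and reloc_names assumed to yield enum-style identifiers like R_MIPS_HI16), a section's entry in recomp_overlays.inl would look something like:

static FuncEntry section_2_text_funcs[] = {
    { .func = func_80123456, .offset = 0x00000000, .rom_size = 0x00000120 },
};
static RelocEntry section_2_text_relocs[] = {
    { .offset = 0x00000018, .target_section_offset = 0x00000040, .target_section = 3, .type = R_MIPS_HI16 },
};
static SectionTableEntry section_table[] = {
    { .rom_addr = 0x00001000, .ram_addr = 0x80123450, .size = 0x00000120,
      .funcs = section_2_text_funcs, .num_funcs = ARRLEN(section_2_text_funcs),
      .relocs = section_2_text_relocs, .num_relocs = ARRLEN(section_2_text_relocs), .index = 2 },
};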
@@ -840,23 +990,76 @@ int main(int argc, char** argv) {
fmt::print(overlay_file, "}};\n");
if (config.allow_exports) {
// Emit the exported function table.
fmt::print(overlay_file,
"\n"
"static FunctionExport export_table[] = {{\n"
);
for (size_t func_index : export_function_indices) {
const auto& func = context.functions[func_index];
fmt::print(overlay_file, " {{ \"{}\", 0x{:08X} }},\n", func.name, func.vram);
}
// Add a dummy element at the end to ensure the array has a valid length because C doesn't allow zero-size arrays.
fmt::print(overlay_file, " {{ NULL, 0 }}\n");
fmt::print(overlay_file, "}};\n");
// Emit the event table.
std::vector<size_t> functions_by_event{};
functions_by_event.resize(function_index_to_event_index.size());
for (auto [func_index, event_index] : function_index_to_event_index) {
functions_by_event[event_index] = func_index;
}
fmt::print(overlay_file,
"\n"
"static const char* event_names[] = {{\n"
);
for (size_t func_index : functions_by_event) {
const auto& func = context.functions[func_index];
fmt::print(overlay_file, " \"{}\",\n", func.name);
}
// Add a dummy element at the end to ensure the array has a valid length because C doesn't allow zero-size arrays.
fmt::print(overlay_file, " NULL\n");
fmt::print(overlay_file, "}};\n");
// Collect manual patch symbols.
std::vector<std::pair<uint32_t, std::string>> manual_patch_syms{};
for (const auto& func : context.functions) {
if (func.words.empty() && N64Recomp::is_manual_patch_symbol(func.vram)) {
manual_patch_syms.emplace_back(func.vram, func.name);
}
}
// Sort the manual patch symbols by vram.
std::sort(manual_patch_syms.begin(), manual_patch_syms.end(), [](const auto& lhs, const auto& rhs) {
return lhs.first < rhs.first;
});
// Emit the manual patch symbols.
fmt::print(overlay_file,
"\n"
"static const ManualPatchSymbol manual_patch_symbols[] = {{\n"
);
for (const auto& manual_patch_sym_entry : manual_patch_syms) {
fmt::print(overlay_file, " {{ 0x{:08X}, {} }},\n", manual_patch_sym_entry.first, manual_patch_sym_entry.second);
fmt::print(func_header_file,
"void {}(uint8_t* rdram, recomp_context* ctx);\n", manual_patch_sym_entry.second);
}
// Add a dummy element at the end to ensure the array has a valid length because C doesn't allow zero-size arrays.
fmt::print(overlay_file, " {{ 0, NULL }}\n");
fmt::print(overlay_file, "}};\n");
}
}
fmt::print(func_header_file,
"\n"
"#ifdef __cplusplus\n"
"}}\n"
"#endif\n"
);
if (!config.output_binary_path.empty()) {
std::ofstream output_binary{config.output_binary_path, std::ios::binary};
output_binary.write(reinterpret_cast<const char*>(context.rom.data()), context.rom.size());


@@ -1,6 +1,6 @@
#include <cstring>
#include "n64recomp.h"
#include "recompiler/context.h"
struct FileHeader {
char magic[8]; // N64RSYMS
@@ -16,6 +16,7 @@ struct FileSubHeaderV1 {
uint32_t num_exports;
uint32_t num_callbacks;
uint32_t num_provided_events;
uint32_t num_hooks;
uint32_t string_data_size;
};
@@ -49,9 +50,6 @@ struct RelocV1 {
};
struct DependencyV1 {
uint8_t major_version;
uint8_t minor_version;
uint8_t patch_version;
uint8_t reserved;
uint32_t mod_id_start;
uint32_t mod_id_size;
@@ -92,6 +90,13 @@ struct EventV1 {
uint32_t name_size;
};
struct HookV1 {
uint32_t func_index;
uint32_t original_section_vrom;
uint32_t original_vram;
uint32_t flags; // bit 0 set means the hook runs at the function's return (HookFlags::AtReturn)
};
template <typename T>
const T* reinterpret_data(std::span<const char> data, size_t& offset, size_t count = 1) {
if (offset + (sizeof(T) * count) > data.size()) {
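The remainder of this helper falls outside the hunk; judging from how it's used below (results null-checked, offset advancing between reads), its likely shape is:

#include <cstddef>
#include <span>

template <typename T>
const T* reinterpret_data(std::span<const char> data, size_t& offset, size_t count = 1) {
    if (offset + (sizeof(T) * count) > data.size()) {
        return nullptr; // not enough bytes remain for `count` records
    }
    const T* ret = reinterpret_cast<const T*>(data.data() + offset);
    offset += sizeof(T) * count; // consume the records so the next read starts after them
    return ret;
}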
@@ -129,6 +134,7 @@ bool parse_v1(std::span<const char> data, const std::unordered_map<uint32_t, uin
size_t num_exports = subheader->num_exports;
size_t num_callbacks = subheader->num_callbacks;
size_t num_provided_events = subheader->num_provided_events;
size_t num_hooks = subheader->num_hooks;
size_t string_data_size = subheader->string_data_size;
if (string_data_size & 0b11) {
@@ -143,7 +149,6 @@ bool parse_v1(std::span<const char> data, const std::unordered_map<uint32_t, uin
// TODO add proper creation methods for the remaining vectors and change these to reserves instead.
mod_context.sections.resize(num_sections); // Add method
mod_context.dependencies.reserve(num_dependencies);
mod_context.dependencies_by_name.reserve(num_dependencies);
mod_context.import_symbols.reserve(num_imports);
mod_context.dependency_events.reserve(num_dependency_events);
@@ -151,6 +156,7 @@ bool parse_v1(std::span<const char> data, const std::unordered_map<uint32_t, uin
mod_context.exported_funcs.resize(num_exports); // Add method
mod_context.callbacks.reserve(num_callbacks);
mod_context.event_symbols.reserve(num_provided_events);
mod_context.hooks.reserve(num_hooks);
for (size_t section_index = 0; section_index < num_sections; section_index++) {
const SectionHeaderV1* section_header = reinterpret_data<SectionHeaderV1>(data, offset);
@@ -203,6 +209,8 @@ bool parse_v1(std::span<const char> data, const std::unordered_map<uint32_t, uin
cur_func.rom = cur_section.rom_addr + funcs[func_index].section_offset;
cur_func.words.resize(funcs[func_index].size / sizeof(uint32_t)); // Filled in later
cur_func.section_index = section_index;
mod_context.functions_by_vram[cur_func.vram].emplace_back(start_func_index + func_index);
}
for (size_t reloc_index = 0; reloc_index < num_relocs; reloc_index++) {
@@ -234,6 +242,7 @@ bool parse_v1(std::span<const char> data, const std::unordered_map<uint32_t, uin
if (reloc_target_section >= mod_context.sections.size()) {
printf("Reloc %zu in section %zu references local section %u, but only %zu exist\n",
reloc_index, section_index, reloc_target_section, mod_context.sections.size());
return false;
}
}
else {
@@ -269,10 +278,11 @@ bool parse_v1(std::span<const char> data, const std::unordered_map<uint32_t, uin
if (mod_id_start + mod_id_size > string_data_size) {
printf("Dependency %zu has a name start of %u and size of %u, which extend beyond the string data's total size of %zu\n",
dependency_index, mod_id_start, mod_id_size, string_data_size);
return false;
}
std::string_view mod_id{ string_data + mod_id_start, string_data + mod_id_start + mod_id_size };
mod_context.add_dependency(std::string{mod_id}, dependency_in.major_version, dependency_in.minor_version, dependency_in.patch_version);
mod_context.add_dependency(std::string{mod_id});
}
const ImportV1* imports = reinterpret_data<ImportV1>(data, offset, num_imports);
@@ -290,11 +300,13 @@ bool parse_v1(std::span<const char> data, const std::unordered_map<uint32_t, uin
if (name_start + name_size > string_data_size) {
printf("Import %zu has a name start of %u and size of %u, which extend beyond the string data's total size of %zu\n",
import_index, name_start, name_size, string_data_size);
return false;
}
if (dependency_index >= num_dependencies) {
printf("Import %zu belongs to dependency %u, but only %zu dependencies were specified\n",
import_index, dependency_index, num_dependencies);
return false;
}
std::string_view import_name{ string_data + name_start, string_data + name_start + name_size };
@@ -317,6 +329,7 @@ bool parse_v1(std::span<const char> data, const std::unordered_map<uint32_t, uin
if (name_start + name_size > string_data_size) {
printf("Dependency event %zu has a name start of %u and size of %u, which extend beyond the string data's total size of %zu\n",
dependency_event_index, name_start, name_size, string_data_size);
return false;
}
std::string_view dependency_event_name{ string_data + name_start, string_data + name_start + name_size };
@@ -355,15 +368,29 @@ bool parse_v1(std::span<const char> data, const std::unordered_map<uint32_t, uin
if (func_index >= mod_context.functions.size()) {
printf("Export %zu has a function index of %u, but the symbol file only has %zu functions\n",
export_index, func_index, mod_context.functions.size());
return false;
}
if (name_start + name_size > string_data_size) {
printf("Export %zu has a name start of %u and size of %u, which extend beyond the string data's total size of %zu\n",
export_index, name_start, name_size, string_data_size);
return false;
}
std::string_view export_name_view{ string_data + name_start, string_data + name_start + name_size };
std::string export_name{export_name_view};
if (!mod_context.functions[func_index].name.empty()) {
printf("Function %u is exported twice (%s and %s)\n",
func_index, mod_context.functions[func_index].name.c_str(), export_name.c_str());
return false;
}
// Add the function to the exported function list.
mod_context.exported_funcs[export_index] = func_index;
// Set the function's name to the export name.
mod_context.functions[func_index].name = std::move(export_name);
}
const CallbackV1* callbacks = reinterpret_data<CallbackV1>(data, offset, num_callbacks);
@@ -380,15 +407,18 @@ bool parse_v1(std::span<const char> data, const std::unordered_map<uint32_t, uin
if (dependency_event_index >= num_dependency_events) {
printf("Callback %zu is connected to dependency event %u, but only %zu dependency events were specified\n",
callback_index, dependency_event_index, num_dependency_events);
return false;
}
if (function_index >= mod_context.functions.size()) {
printf("Callback %zu uses function %u, but only %zu functions were specified\n",
callback_index, function_index, mod_context.functions.size());
return false;
}
if (!mod_context.add_callback(dependency_event_index, function_index)) {
printf("Failed to add callback %zu\n", callback_index);
return false;
}
}
@@ -406,6 +436,7 @@ bool parse_v1(std::span<const char> data, const std::unordered_map<uint32_t, uin
if (name_start + name_size > string_data_size) {
printf("Event %zu has a name start of %u and size of %u, which extend beyond the string data's total size of %zu\n",
event_index, name_start, name_size, string_data_size);
return false;
}
std::string_view import_name{ string_data + name_start, string_data + name_start + name_size };
@@ -413,16 +444,30 @@ bool parse_v1(std::span<const char> data, const std::unordered_map<uint32_t, uin
mod_context.add_event_symbol(std::string{import_name});
}
const HookV1* hooks = reinterpret_data<HookV1>(data, offset, num_hooks);
if (hooks == nullptr) {
printf("Failed to read hooks (count: %zu)\n", num_hooks);
return false;
}
for (size_t hook_index = 0; hook_index < num_hooks; hook_index++) {
const HookV1& hook_in = hooks[hook_index];
N64Recomp::FunctionHook& hook_out = mod_context.hooks.emplace_back();
hook_out.func_index = hook_in.func_index;
hook_out.original_section_vrom = hook_in.original_section_vrom;
hook_out.original_vram = hook_in.original_vram;
hook_out.flags = static_cast<N64Recomp::HookFlags>(hook_in.flags);
}
return offset == data.size();
}
N64Recomp::ModSymbolsError N64Recomp::parse_mod_symbols(std::span<const char> data, std::span<const uint8_t> binary, const std::unordered_map<uint32_t, uint16_t>& sections_by_vrom, const Context& reference_context, Context& mod_context_out) {
N64Recomp::ModSymbolsError N64Recomp::parse_mod_symbols(std::span<const char> data, std::span<const uint8_t> binary, const std::unordered_map<uint32_t, uint16_t>& sections_by_vrom, Context& mod_context_out) {
size_t offset = 0;
mod_context_out = {};
const FileHeader* header = reinterpret_data<FileHeader>(data, offset);
mod_context_out.import_reference_context(reference_context);
if (header == nullptr) {
return ModSymbolsError::NotASymbolFile;
}
@@ -485,7 +530,7 @@ std::vector<uint8_t> N64Recomp::symbols_to_bin_v1(const N64Recomp::Context& cont
vec_put(ret, &header);
size_t num_dependencies = context.dependencies.size();
size_t num_dependencies = context.dependencies_by_name.size();
size_t num_imported_funcs = context.import_symbols.size();
size_t num_dependency_events = context.dependency_events.size();
@@ -493,6 +538,7 @@ std::vector<uint8_t> N64Recomp::symbols_to_bin_v1(const N64Recomp::Context& cont
size_t num_events = context.event_symbols.size();
size_t num_callbacks = context.callbacks.size();
size_t num_provided_events = context.event_symbols.size();
size_t num_hooks = context.hooks.size();
FileSubHeaderV1 sub_header {
.num_sections = static_cast<uint32_t>(context.sections.size()),
@@ -503,6 +549,7 @@ std::vector<uint8_t> N64Recomp::symbols_to_bin_v1(const N64Recomp::Context& cont
.num_exports = static_cast<uint32_t>(num_exported_funcs),
.num_callbacks = static_cast<uint32_t>(num_callbacks),
.num_provided_events = static_cast<uint32_t>(num_provided_events),
.num_hooks = static_cast<uint32_t>(num_hooks),
.string_data_size = 0,
};
@@ -512,15 +559,24 @@ std::vector<uint8_t> N64Recomp::symbols_to_bin_v1(const N64Recomp::Context& cont
// Build the string data from the exports and imports.
size_t strings_start = ret.size();
// Order the dependencies by their index. This isn't necessary, but having the dependency name order
// in the symbol file match the indices of the dependencies makes debugging easier.
std::vector<std::string> dependencies_ordered{};
dependencies_ordered.resize(context.dependencies_by_name.size());
for (const auto& [dependency, dependency_index] : context.dependencies_by_name) {
dependencies_ordered[dependency_index] = dependency;
}
// Track the start of every dependency's name in the string data.
std::vector<uint32_t> dependency_name_positions{};
dependency_name_positions.resize(num_dependencies);
for (size_t dependency_index = 0; dependency_index < num_dependencies; dependency_index++) {
const Dependency& dependency = context.dependencies[dependency_index];
const std::string& dependency = dependencies_ordered[dependency_index];
dependency_name_positions[dependency_index] = static_cast<uint32_t>(ret.size() - strings_start);
vec_put(ret, dependency.mod_id);
vec_put(ret, dependency);
}
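// To illustrate the layout being built here (dependency names hypothetical):
// two dependencies "base_mod" and "ui_lib" produce the string data
// "base_modui_lib" with dependency_name_positions = { 0, 8 }, so the second
// dependency is later written as DependencyV1{ .mod_id_start = 8, .mod_id_size = 6 }.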
// Track the start of every imported function's name in the string data.
@@ -637,14 +693,11 @@ std::vector<uint8_t> N64Recomp::symbols_to_bin_v1(const N64Recomp::Context& cont
// Write the dependencies.
for (size_t dependency_index = 0; dependency_index < num_dependencies; dependency_index++) {
const Dependency& dependency = context.dependencies[dependency_index];
const std::string& dependency = dependencies_ordered[dependency_index];
DependencyV1 dependency_out {
.major_version = dependency.major_version,
.minor_version = dependency.minor_version,
.patch_version = dependency.patch_version,
.mod_id_start = dependency_name_positions[dependency_index],
.mod_id_size = static_cast<uint32_t>(dependency.mod_id.size())
.mod_id_size = static_cast<uint32_t>(dependency.size())
};
vec_put(ret, &dependency_out);
@@ -732,5 +785,22 @@ std::vector<uint8_t> N64Recomp::symbols_to_bin_v1(const N64Recomp::Context& cont
vec_put(ret, &event_out);
}
// Write the hooks.
for (const FunctionHook& cur_hook : context.hooks) {
uint32_t flags = 0;
if ((cur_hook.flags & HookFlags::AtReturn) == HookFlags::AtReturn) {
flags |= 0x1;
}
HookV1 hook_out {
.func_index = cur_hook.func_index,
.original_section_vrom = cur_hook.original_section_vrom,
.original_vram = cur_hook.original_vram,
.flags = flags
};
vec_put(ret, &hook_out);
}
return ret;
}


@@ -1,4 +1,4 @@
#include "operations.h"
#include "recompiler/operations.h"
namespace N64Recomp {
const std::unordered_map<InstrId, UnaryOp> unary_ops {
@@ -9,11 +9,13 @@ namespace N64Recomp {
{ InstrId::cpu_mflo, { UnaryOpType::None, Operand::Rd, Operand::Lo } },
{ InstrId::cpu_mtc1, { UnaryOpType::None, Operand::FsU32L, Operand::Rt } },
{ InstrId::cpu_mfc1, { UnaryOpType::ToInt32, Operand::Rt, Operand::FsU32L } },
{ InstrId::cpu_dmtc1, { UnaryOpType::None, Operand::FsU64, Operand::Rt } },
{ InstrId::cpu_dmfc1, { UnaryOpType::None, Operand::Rt, Operand::FsU64 } },
// Float operations
{ InstrId::cpu_mov_s, { UnaryOpType::None, Operand::Fd, Operand::Fs, true } },
{ InstrId::cpu_mov_d, { UnaryOpType::None, Operand::FdDouble, Operand::FsDouble, true } },
{ InstrId::cpu_neg_s, { UnaryOpType::Negate, Operand::Fd, Operand::Fs, true, true } },
{ InstrId::cpu_neg_d, { UnaryOpType::Negate, Operand::FdDouble, Operand::FsDouble, true, true } },
{ InstrId::cpu_neg_s, { UnaryOpType::NegateFloat, Operand::Fd, Operand::Fs, true, true } },
{ InstrId::cpu_neg_d, { UnaryOpType::NegateDouble, Operand::FdDouble, Operand::FsDouble, true, true } },
{ InstrId::cpu_abs_s, { UnaryOpType::AbsFloat, Operand::Fd, Operand::Fs, true, true } },
{ InstrId::cpu_abs_d, { UnaryOpType::AbsDouble, Operand::FdDouble, Operand::FsDouble, true, true } },
{ InstrId::cpu_sqrt_s, { UnaryOpType::SqrtFloat, Operand::Fd, Operand::Fs, true, true } },
@@ -65,24 +67,22 @@ namespace N64Recomp {
{ InstrId::cpu_ori, { BinaryOpType::Or64, Operand::Rt, {{ UnaryOpType::None, UnaryOpType::None }, { Operand::Rs, Operand::ImmU16 }}} },
{ InstrId::cpu_xori, { BinaryOpType::Xor64, Operand::Rt, {{ UnaryOpType::None, UnaryOpType::None }, { Operand::Rs, Operand::ImmU16 }}} },
// Shifts
/* BUG Should mask after (change op to Sll32 and input op to ToU32) */
{ InstrId::cpu_sllv, { BinaryOpType::Sll64, Operand::Rd, {{ UnaryOpType::ToS32, UnaryOpType::Mask5 }, { Operand::Rt, Operand::Rs }}} },
{ InstrId::cpu_sllv, { BinaryOpType::Sll32, Operand::Rd, {{ UnaryOpType::None, UnaryOpType::Mask5 }, { Operand::Rt, Operand::Rs }}} },
{ InstrId::cpu_dsllv, { BinaryOpType::Sll64, Operand::Rd, {{ UnaryOpType::None, UnaryOpType::Mask6 }, { Operand::Rt, Operand::Rs }}} },
{ InstrId::cpu_srlv, { BinaryOpType::Srl32, Operand::Rd, {{ UnaryOpType::ToU32, UnaryOpType::Mask5 }, { Operand::Rt, Operand::Rs }}} },
{ InstrId::cpu_dsrlv, { BinaryOpType::Srl64, Operand::Rd, {{ UnaryOpType::ToU64, UnaryOpType::Mask6 }, { Operand::Rt, Operand::Rs }}} },
/* BUG Should mask after (change op to Sra32 and input op to ToS64) */
{ InstrId::cpu_srav, { BinaryOpType::Sra64, Operand::Rd, {{ UnaryOpType::ToS32, UnaryOpType::Mask5 }, { Operand::Rt, Operand::Rs }}} },
// Hardware bug: The input is not masked to 32 bits before right shifting, so bits from the upper half of the register will bleed into the lower half (see the worked example after this table).
{ InstrId::cpu_srav, { BinaryOpType::Sra32, Operand::Rd, {{ UnaryOpType::ToS64, UnaryOpType::Mask5 }, { Operand::Rt, Operand::Rs }}} },
{ InstrId::cpu_dsrav, { BinaryOpType::Sra64, Operand::Rd, {{ UnaryOpType::ToS64, UnaryOpType::Mask6 }, { Operand::Rt, Operand::Rs }}} },
// Shifts (immediate)
/* BUG Should mask after (change op to Sll32 and input op to ToU32) */
{ InstrId::cpu_sll, { BinaryOpType::Sll64, Operand::Rd, {{ UnaryOpType::ToS32, UnaryOpType::None }, { Operand::Rt, Operand::Sa }}} },
{ InstrId::cpu_sll, { BinaryOpType::Sll32, Operand::Rd, {{ UnaryOpType::None, UnaryOpType::None }, { Operand::Rt, Operand::Sa }}} },
{ InstrId::cpu_dsll, { BinaryOpType::Sll64, Operand::Rd, {{ UnaryOpType::None, UnaryOpType::None }, { Operand::Rt, Operand::Sa }}} },
{ InstrId::cpu_dsll32, { BinaryOpType::Sll64, Operand::Rd, {{ UnaryOpType::None, UnaryOpType::None }, { Operand::Rt, Operand::Sa32 }}} },
{ InstrId::cpu_srl, { BinaryOpType::Srl32, Operand::Rd, {{ UnaryOpType::ToU32, UnaryOpType::None }, { Operand::Rt, Operand::Sa }}} },
{ InstrId::cpu_dsrl, { BinaryOpType::Srl64, Operand::Rd, {{ UnaryOpType::ToU64, UnaryOpType::None }, { Operand::Rt, Operand::Sa }}} },
{ InstrId::cpu_dsrl32, { BinaryOpType::Srl64, Operand::Rd, {{ UnaryOpType::ToU64, UnaryOpType::None }, { Operand::Rt, Operand::Sa32 }}} },
/* BUG should cast after (change op to Sra32 and input op to ToS64) */
{ InstrId::cpu_sra, { BinaryOpType::Sra64, Operand::Rd, {{ UnaryOpType::ToS32, UnaryOpType::None }, { Operand::Rt, Operand::Sa }}} },
// Hardware bug: The input is not masked to 32 bits before right shifting, so bits from the upper half of the register will bleed into the lower half.
{ InstrId::cpu_sra, { BinaryOpType::Sra32, Operand::Rd, {{ UnaryOpType::ToS64, UnaryOpType::None }, { Operand::Rt, Operand::Sa }}} },
{ InstrId::cpu_dsra, { BinaryOpType::Sra64, Operand::Rd, {{ UnaryOpType::ToS64, UnaryOpType::None }, { Operand::Rt, Operand::Sa }}} },
{ InstrId::cpu_dsra32, { BinaryOpType::Sra64, Operand::Rd, {{ UnaryOpType::ToS64, UnaryOpType::None }, { Operand::Rt, Operand::Sa32 }}} },
// Comparisons
@@ -101,47 +101,47 @@ namespace N64Recomp {
{ InstrId::cpu_div_s, { BinaryOpType::DivFloat, Operand::Fd, {{ UnaryOpType::None, UnaryOpType::None }, { Operand::Fs, Operand::Ft }}, true, true } },
{ InstrId::cpu_div_d, { BinaryOpType::DivDouble, Operand::FdDouble, {{ UnaryOpType::None, UnaryOpType::None }, { Operand::FsDouble, Operand::FtDouble }}, true, true } },
// Float comparisons. TODO: implement the remaining operations and investigate ordered/unordered behavior and default values.
{ InstrId::cpu_c_lt_s, { BinaryOpType::Less, Operand::Cop1cs, {{ UnaryOpType::None, UnaryOpType::None }, { Operand::Fs, Operand::Ft }}, true } },
{ InstrId::cpu_c_nge_s, { BinaryOpType::Less, Operand::Cop1cs, {{ UnaryOpType::None, UnaryOpType::None }, { Operand::Fs, Operand::Ft }}, true } },
{ InstrId::cpu_c_olt_s, { BinaryOpType::Less, Operand::Cop1cs, {{ UnaryOpType::None, UnaryOpType::None }, { Operand::Fs, Operand::Ft }}, true } },
{ InstrId::cpu_c_ult_s, { BinaryOpType::Less, Operand::Cop1cs, {{ UnaryOpType::None, UnaryOpType::None }, { Operand::Fs, Operand::Ft }}, true } },
{ InstrId::cpu_c_lt_d, { BinaryOpType::Less, Operand::Cop1cs, {{ UnaryOpType::None, UnaryOpType::None }, { Operand::FsDouble, Operand::FtDouble }}, true } },
{ InstrId::cpu_c_nge_d, { BinaryOpType::Less, Operand::Cop1cs, {{ UnaryOpType::None, UnaryOpType::None }, { Operand::FsDouble, Operand::FtDouble }}, true } },
{ InstrId::cpu_c_olt_d, { BinaryOpType::Less, Operand::Cop1cs, {{ UnaryOpType::None, UnaryOpType::None }, { Operand::FsDouble, Operand::FtDouble }}, true } },
{ InstrId::cpu_c_ult_d, { BinaryOpType::Less, Operand::Cop1cs, {{ UnaryOpType::None, UnaryOpType::None }, { Operand::FsDouble, Operand::FtDouble }}, true } },
{ InstrId::cpu_c_lt_s, { BinaryOpType::LessFloat, Operand::Cop1cs, {{ UnaryOpType::None, UnaryOpType::None }, { Operand::Fs, Operand::Ft }}, true } },
{ InstrId::cpu_c_nge_s, { BinaryOpType::LessFloat, Operand::Cop1cs, {{ UnaryOpType::None, UnaryOpType::None }, { Operand::Fs, Operand::Ft }}, true } },
{ InstrId::cpu_c_olt_s, { BinaryOpType::LessFloat, Operand::Cop1cs, {{ UnaryOpType::None, UnaryOpType::None }, { Operand::Fs, Operand::Ft }}, true } },
{ InstrId::cpu_c_ult_s, { BinaryOpType::LessFloat, Operand::Cop1cs, {{ UnaryOpType::None, UnaryOpType::None }, { Operand::Fs, Operand::Ft }}, true } },
{ InstrId::cpu_c_lt_d, { BinaryOpType::LessDouble, Operand::Cop1cs, {{ UnaryOpType::None, UnaryOpType::None }, { Operand::FsDouble, Operand::FtDouble }}, true } },
{ InstrId::cpu_c_nge_d, { BinaryOpType::LessDouble, Operand::Cop1cs, {{ UnaryOpType::None, UnaryOpType::None }, { Operand::FsDouble, Operand::FtDouble }}, true } },
{ InstrId::cpu_c_olt_d, { BinaryOpType::LessDouble, Operand::Cop1cs, {{ UnaryOpType::None, UnaryOpType::None }, { Operand::FsDouble, Operand::FtDouble }}, true } },
{ InstrId::cpu_c_ult_d, { BinaryOpType::LessDouble, Operand::Cop1cs, {{ UnaryOpType::None, UnaryOpType::None }, { Operand::FsDouble, Operand::FtDouble }}, true } },
{ InstrId::cpu_c_le_s, { BinaryOpType::LessEq, Operand::Cop1cs, {{ UnaryOpType::None, UnaryOpType::None }, { Operand::Fs, Operand::Ft }}, true } },
{ InstrId::cpu_c_ngt_s, { BinaryOpType::LessEq, Operand::Cop1cs, {{ UnaryOpType::None, UnaryOpType::None }, { Operand::Fs, Operand::Ft }}, true } },
{ InstrId::cpu_c_ole_s, { BinaryOpType::LessEq, Operand::Cop1cs, {{ UnaryOpType::None, UnaryOpType::None }, { Operand::Fs, Operand::Ft }}, true } },
{ InstrId::cpu_c_ule_s, { BinaryOpType::LessEq, Operand::Cop1cs, {{ UnaryOpType::None, UnaryOpType::None }, { Operand::Fs, Operand::Ft }}, true } },
{ InstrId::cpu_c_le_d, { BinaryOpType::LessEq, Operand::Cop1cs, {{ UnaryOpType::None, UnaryOpType::None }, { Operand::FsDouble, Operand::FtDouble }}, true } },
{ InstrId::cpu_c_ngt_d, { BinaryOpType::LessEq, Operand::Cop1cs, {{ UnaryOpType::None, UnaryOpType::None }, { Operand::FsDouble, Operand::FtDouble }}, true } },
{ InstrId::cpu_c_ole_d, { BinaryOpType::LessEq, Operand::Cop1cs, {{ UnaryOpType::None, UnaryOpType::None }, { Operand::FsDouble, Operand::FtDouble }}, true } },
{ InstrId::cpu_c_ule_d, { BinaryOpType::LessEq, Operand::Cop1cs, {{ UnaryOpType::None, UnaryOpType::None }, { Operand::FsDouble, Operand::FtDouble }}, true } },
{ InstrId::cpu_c_le_s, { BinaryOpType::LessEqFloat, Operand::Cop1cs, {{ UnaryOpType::None, UnaryOpType::None }, { Operand::Fs, Operand::Ft }}, true } },
{ InstrId::cpu_c_ngt_s, { BinaryOpType::LessEqFloat, Operand::Cop1cs, {{ UnaryOpType::None, UnaryOpType::None }, { Operand::Fs, Operand::Ft }}, true } },
{ InstrId::cpu_c_ole_s, { BinaryOpType::LessEqFloat, Operand::Cop1cs, {{ UnaryOpType::None, UnaryOpType::None }, { Operand::Fs, Operand::Ft }}, true } },
{ InstrId::cpu_c_ule_s, { BinaryOpType::LessEqFloat, Operand::Cop1cs, {{ UnaryOpType::None, UnaryOpType::None }, { Operand::Fs, Operand::Ft }}, true } },
{ InstrId::cpu_c_le_d, { BinaryOpType::LessEqDouble, Operand::Cop1cs, {{ UnaryOpType::None, UnaryOpType::None }, { Operand::FsDouble, Operand::FtDouble }}, true } },
{ InstrId::cpu_c_ngt_d, { BinaryOpType::LessEqDouble, Operand::Cop1cs, {{ UnaryOpType::None, UnaryOpType::None }, { Operand::FsDouble, Operand::FtDouble }}, true } },
{ InstrId::cpu_c_ole_d, { BinaryOpType::LessEqDouble, Operand::Cop1cs, {{ UnaryOpType::None, UnaryOpType::None }, { Operand::FsDouble, Operand::FtDouble }}, true } },
{ InstrId::cpu_c_ule_d, { BinaryOpType::LessEqDouble, Operand::Cop1cs, {{ UnaryOpType::None, UnaryOpType::None }, { Operand::FsDouble, Operand::FtDouble }}, true } },
{ InstrId::cpu_c_eq_s, { BinaryOpType::Equal, Operand::Cop1cs, {{ UnaryOpType::None, UnaryOpType::None }, { Operand::Fs, Operand::Ft }}, true } },
{ InstrId::cpu_c_ueq_s, { BinaryOpType::Equal, Operand::Cop1cs, {{ UnaryOpType::None, UnaryOpType::None }, { Operand::Fs, Operand::Ft }}, true } },
{ InstrId::cpu_c_ngl_s, { BinaryOpType::Equal, Operand::Cop1cs, {{ UnaryOpType::None, UnaryOpType::None }, { Operand::Fs, Operand::Ft }}, true } },
{ InstrId::cpu_c_seq_s, { BinaryOpType::Equal, Operand::Cop1cs, {{ UnaryOpType::None, UnaryOpType::None }, { Operand::Fs, Operand::Ft }}, true } },
{ InstrId::cpu_c_eq_d, { BinaryOpType::Equal, Operand::Cop1cs, {{ UnaryOpType::None, UnaryOpType::None }, { Operand::FsDouble, Operand::FtDouble }}, true } },
{ InstrId::cpu_c_ueq_d, { BinaryOpType::Equal, Operand::Cop1cs, {{ UnaryOpType::None, UnaryOpType::None }, { Operand::FsDouble, Operand::FtDouble }}, true } },
{ InstrId::cpu_c_ngl_d, { BinaryOpType::Equal, Operand::Cop1cs, {{ UnaryOpType::None, UnaryOpType::None }, { Operand::FsDouble, Operand::FtDouble }}, true } },
{ InstrId::cpu_c_eq_s, { BinaryOpType::EqualFloat, Operand::Cop1cs, {{ UnaryOpType::None, UnaryOpType::None }, { Operand::Fs, Operand::Ft }}, true } },
{ InstrId::cpu_c_ueq_s, { BinaryOpType::EqualFloat, Operand::Cop1cs, {{ UnaryOpType::None, UnaryOpType::None }, { Operand::Fs, Operand::Ft }}, true } },
{ InstrId::cpu_c_ngl_s, { BinaryOpType::EqualFloat, Operand::Cop1cs, {{ UnaryOpType::None, UnaryOpType::None }, { Operand::Fs, Operand::Ft }}, true } },
{ InstrId::cpu_c_seq_s, { BinaryOpType::EqualFloat, Operand::Cop1cs, {{ UnaryOpType::None, UnaryOpType::None }, { Operand::Fs, Operand::Ft }}, true } },
{ InstrId::cpu_c_eq_d, { BinaryOpType::EqualDouble, Operand::Cop1cs, {{ UnaryOpType::None, UnaryOpType::None }, { Operand::FsDouble, Operand::FtDouble }}, true } },
{ InstrId::cpu_c_ueq_d, { BinaryOpType::EqualDouble, Operand::Cop1cs, {{ UnaryOpType::None, UnaryOpType::None }, { Operand::FsDouble, Operand::FtDouble }}, true } },
{ InstrId::cpu_c_ngl_d, { BinaryOpType::EqualDouble, Operand::Cop1cs, {{ UnaryOpType::None, UnaryOpType::None }, { Operand::FsDouble, Operand::FtDouble }}, true } },
/* TODO rename to c_seq_d when fixed in rabbitizer */
{ InstrId::cpu_c_deq_d, { BinaryOpType::Equal, Operand::Cop1cs, {{ UnaryOpType::None, UnaryOpType::None }, { Operand::FsDouble, Operand::FtDouble }}, true } },
{ InstrId::cpu_c_deq_d, { BinaryOpType::EqualDouble, Operand::Cop1cs, {{ UnaryOpType::None, UnaryOpType::None }, { Operand::FsDouble, Operand::FtDouble }}, true } },
// Loads
{ InstrId::cpu_ld, { BinaryOpType::LD, Operand::Rt, {{ UnaryOpType::None, UnaryOpType::None }, { Operand::ImmS16, Operand::Base }}} },
{ InstrId::cpu_lw, { BinaryOpType::LW, Operand::Rt, {{ UnaryOpType::None, UnaryOpType::None }, { Operand::ImmS16, Operand::Base }}} },
{ InstrId::cpu_lwu, { BinaryOpType::LWU, Operand::Rt, {{ UnaryOpType::None, UnaryOpType::None }, { Operand::ImmS16, Operand::Base }}} },
{ InstrId::cpu_lh, { BinaryOpType::LH, Operand::Rt, {{ UnaryOpType::None, UnaryOpType::None }, { Operand::ImmS16, Operand::Base }}} },
{ InstrId::cpu_lhu, { BinaryOpType::LHU, Operand::Rt, {{ UnaryOpType::None, UnaryOpType::None }, { Operand::ImmS16, Operand::Base }}} },
{ InstrId::cpu_lb, { BinaryOpType::LB, Operand::Rt, {{ UnaryOpType::None, UnaryOpType::None }, { Operand::ImmS16, Operand::Base }}} },
{ InstrId::cpu_lbu, { BinaryOpType::LBU, Operand::Rt, {{ UnaryOpType::None, UnaryOpType::None }, { Operand::ImmS16, Operand::Base }}} },
{ InstrId::cpu_ldl, { BinaryOpType::LDL, Operand::Rt, {{ UnaryOpType::None, UnaryOpType::None }, { Operand::ImmS16, Operand::Base }}} },
{ InstrId::cpu_ldr, { BinaryOpType::LDR, Operand::Rt, {{ UnaryOpType::None, UnaryOpType::None }, { Operand::ImmS16, Operand::Base }}} },
{ InstrId::cpu_lwl, { BinaryOpType::LWL, Operand::Rt, {{ UnaryOpType::None, UnaryOpType::None }, { Operand::ImmS16, Operand::Base }}} },
{ InstrId::cpu_lwr, { BinaryOpType::LWR, Operand::Rt, {{ UnaryOpType::None, UnaryOpType::None }, { Operand::ImmS16, Operand::Base }}} },
{ InstrId::cpu_lwc1, { BinaryOpType::LW, Operand::FtU32L, {{ UnaryOpType::None, UnaryOpType::None }, { Operand::ImmS16, Operand::Base }}} },
{ InstrId::cpu_ldc1, { BinaryOpType::LD, Operand::FtU64, {{ UnaryOpType::None, UnaryOpType::None }, { Operand::ImmS16, Operand::Base }}, true } },
{ InstrId::cpu_ld, { BinaryOpType::LD, Operand::Rt, {{ UnaryOpType::None, UnaryOpType::None }, { Operand::Base, Operand::ImmS16 }}} },
{ InstrId::cpu_lw, { BinaryOpType::LW, Operand::Rt, {{ UnaryOpType::None, UnaryOpType::None }, { Operand::Base, Operand::ImmS16 }}} },
{ InstrId::cpu_lwu, { BinaryOpType::LWU, Operand::Rt, {{ UnaryOpType::None, UnaryOpType::None }, { Operand::Base, Operand::ImmS16 }}} },
{ InstrId::cpu_lh, { BinaryOpType::LH, Operand::Rt, {{ UnaryOpType::None, UnaryOpType::None }, { Operand::Base, Operand::ImmS16 }}} },
{ InstrId::cpu_lhu, { BinaryOpType::LHU, Operand::Rt, {{ UnaryOpType::None, UnaryOpType::None }, { Operand::Base, Operand::ImmS16 }}} },
{ InstrId::cpu_lb, { BinaryOpType::LB, Operand::Rt, {{ UnaryOpType::None, UnaryOpType::None }, { Operand::Base, Operand::ImmS16 }}} },
{ InstrId::cpu_lbu, { BinaryOpType::LBU, Operand::Rt, {{ UnaryOpType::None, UnaryOpType::None }, { Operand::Base, Operand::ImmS16 }}} },
{ InstrId::cpu_ldl, { BinaryOpType::LDL, Operand::Rt, {{ UnaryOpType::None, UnaryOpType::None }, { Operand::Base, Operand::ImmS16 }}} },
{ InstrId::cpu_ldr, { BinaryOpType::LDR, Operand::Rt, {{ UnaryOpType::None, UnaryOpType::None }, { Operand::Base, Operand::ImmS16 }}} },
{ InstrId::cpu_lwl, { BinaryOpType::LWL, Operand::Rt, {{ UnaryOpType::None, UnaryOpType::None }, { Operand::Base, Operand::ImmS16 }}} },
{ InstrId::cpu_lwr, { BinaryOpType::LWR, Operand::Rt, {{ UnaryOpType::None, UnaryOpType::None }, { Operand::Base, Operand::ImmS16 }}} },
{ InstrId::cpu_lwc1, { BinaryOpType::LW, Operand::FtU32L, {{ UnaryOpType::None, UnaryOpType::None }, { Operand::Base, Operand::ImmS16 }}} },
{ InstrId::cpu_ldc1, { BinaryOpType::LD, Operand::FtU64, {{ UnaryOpType::None, UnaryOpType::None }, { Operand::Base, Operand::ImmS16 }}, true } },
};
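To make the sra/srav hardware bug noted above concrete, here is a minimal standalone sketch of the ToS64 + Sra32 mapping (not project code):

    #include <cstdint>

    int32_t emulated_sra(uint64_t rt, unsigned sa) {
        int64_t shifted = static_cast<int64_t>(rt) >> sa; // shift the full 64-bit register first
        return static_cast<int32_t>(shifted);             // then truncate to the 32-bit result
    }

    // emulated_sra(0x0000000080000000ull, 1) == 0x40000000: bit 32 of the source (a 0)
    // bleeds into bit 31, where a pure 32-bit arithmetic shift would give 0xC0000000.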
const std::unordered_map<InstrId, ConditionalBranchOp> conditional_branch_ops {
@@ -159,10 +159,12 @@ namespace N64Recomp {
{ InstrId::cpu_bltzl, { BinaryOpType::Less, {{ UnaryOpType::ToS64, UnaryOpType::None }, { Operand::Rs, Operand::Zero }}, false, true }},
{ InstrId::cpu_bgezal, { BinaryOpType::GreaterEq, {{ UnaryOpType::ToS64, UnaryOpType::None }, { Operand::Rs, Operand::Zero }}, true, false }},
{ InstrId::cpu_bgezall, { BinaryOpType::GreaterEq, {{ UnaryOpType::ToS64, UnaryOpType::None }, { Operand::Rs, Operand::Zero }}, true, true }},
{ InstrId::cpu_bc1f, { BinaryOpType::NotEqual, {{ UnaryOpType::None, UnaryOpType::None }, { Operand::Cop1cs, Operand::Zero }}, false, false }},
{ InstrId::cpu_bc1fl, { BinaryOpType::NotEqual, {{ UnaryOpType::None, UnaryOpType::None }, { Operand::Cop1cs, Operand::Zero }}, false, true }},
{ InstrId::cpu_bc1t, { BinaryOpType::Equal, {{ UnaryOpType::None, UnaryOpType::None }, { Operand::Cop1cs, Operand::Zero }}, false, false }},
{ InstrId::cpu_bc1tl, { BinaryOpType::Equal, {{ UnaryOpType::None, UnaryOpType::None }, { Operand::Cop1cs, Operand::Zero }}, false, true }},
{ InstrId::cpu_bltzal, { BinaryOpType::Less, {{ UnaryOpType::ToS64, UnaryOpType::None }, { Operand::Rs, Operand::Zero }}, true, false }},
{ InstrId::cpu_bltzall, { BinaryOpType::Less, {{ UnaryOpType::ToS64, UnaryOpType::None }, { Operand::Rs, Operand::Zero }}, true, true }},
{ InstrId::cpu_bc1f, { BinaryOpType::Equal, {{ UnaryOpType::None, UnaryOpType::None }, { Operand::Cop1cs, Operand::Zero }}, false, false }},
{ InstrId::cpu_bc1fl, { BinaryOpType::Equal, {{ UnaryOpType::None, UnaryOpType::None }, { Operand::Cop1cs, Operand::Zero }}, false, true }},
{ InstrId::cpu_bc1t, { BinaryOpType::NotEqual, {{ UnaryOpType::None, UnaryOpType::None }, { Operand::Cop1cs, Operand::Zero }}, false, false }},
{ InstrId::cpu_bc1tl, { BinaryOpType::NotEqual, {{ UnaryOpType::None, UnaryOpType::None }, { Operand::Cop1cs, Operand::Zero }}, false, true }},
};
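Illustratively, with this fix a bc1t (now mapped to NotEqual against Cop1cs) compiles to a check that the COP1 condition signal is set, roughly as follows (c1cs is the local declared in the emitted function prologue; the label is made up):

    if (c1cs != 0) {
        // delay-slot instruction
        goto L_80001234;
    }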
const std::unordered_map<InstrId, StoreOp> store_ops {


@@ -8,10 +8,10 @@
#include "fmt/format.h"
#include "fmt/ostream.h"
#include "n64recomp.h"
#include "recompiler/context.h"
#include "analysis.h"
#include "operations.h"
#include "generator.h"
#include "recompiler/operations.h"
#include "recompiler/generator.h"
enum class JalResolutionResult {
NoMatch,
@@ -22,13 +22,17 @@ enum class JalResolutionResult {
};
JalResolutionResult resolve_jal(const N64Recomp::Context& context, size_t cur_section_index, uint32_t target_func_vram, size_t& matched_function_index) {
// Skip resolution if all function calls should use lookup and just return Ambiguous.
if (context.use_lookup_for_all_function_calls) {
return JalResolutionResult::Ambiguous;
}
// Look for symbols with the target vram address
const N64Recomp::Section& cur_section = context.sections[cur_section_index];
const auto matching_funcs_find = context.functions_by_vram.find(target_func_vram);
uint32_t section_vram_start = cur_section.ram_addr;
uint32_t section_vram_end = cur_section.ram_addr + cur_section.size;
bool in_current_section = target_func_vram >= section_vram_start && target_func_vram < section_vram_end;
bool needs_static = false;
bool exact_match_found = false;
// Use a thread local to prevent reallocation across runs and to allow multi-threading in the future.
@@ -42,9 +46,7 @@ JalResolutionResult resolve_jal(const N64Recomp::Context& context, size_t cur_se
// Zero-sized symbol handling: skip the symbol unless there's only one matching target.
if (target_func.words.empty()) {
// Allow zero-sized symbols between 0x8F000000 and 0x90000000 for use with patches.
// TODO make this configurable or come up with a more sensible solution for dealing with manual symbols for patches.
if (target_func.vram < 0x8F000000 || target_func.vram > 0x90000000) {
if (!N64Recomp::is_manual_patch_symbol(target_func.vram)) {
continue;
}
}
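Judging from the inlined range check it replaces, is_manual_patch_symbol is presumably equivalent to the following sketch (not the actual definition):

    bool is_manual_patch_symbol(uint32_t vram) {
        // Allow zero-sized symbols between 0x8F000000 and 0x90000000 for use with patches.
        return vram >= 0x8F000000 && vram <= 0x90000000;
    }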
@@ -109,8 +111,8 @@ std::string_view ctx_gpr_prefix(int reg) {
return "";
}
// Major TODO, this function grew very organically and needs to be cleaned up. Ideally, it'll get split up into some sort of lookup table grouped by similar instruction types.
bool process_instruction(const N64Recomp::Context& context, const N64Recomp::Function& func, const N64Recomp::FunctionStats& stats, const std::unordered_set<uint32_t>& skipped_insns, size_t instr_index, const std::vector<rabbitizer::InstructionCpu>& instructions, std::ofstream& output_file, bool indent, bool emit_link_branch, int link_branch_index, size_t reloc_index, bool& needs_link_branch, bool& is_branch_likely, bool tag_reference_relocs, std::span<std::vector<uint32_t>> static_funcs_out) {
template <typename GeneratorType>
bool process_instruction(GeneratorType& generator, const N64Recomp::Context& context, const N64Recomp::Function& func, size_t func_index, const N64Recomp::FunctionStats& stats, const std::unordered_set<uint32_t>& jtbl_lw_instructions, size_t instr_index, const std::vector<rabbitizer::InstructionCpu>& instructions, std::ostream& output_file, bool indent, bool emit_link_branch, int link_branch_index, size_t reloc_index, bool& needs_link_branch, bool& is_branch_likely, bool tag_reference_relocs, std::span<std::vector<uint32_t>> static_funcs_out) {
using namespace N64Recomp;
const auto& section = context.sections[func.section_index];
@@ -118,6 +120,7 @@ bool process_instruction(const N64Recomp::Context& context, const N64Recomp::Fun
needs_link_branch = false;
is_branch_likely = false;
uint32_t instr_vram = instr.getVram();
InstrId instr_id = instr.getUniqueId();
auto print_indent = [&]() {
fmt::print(output_file, " ");
@@ -132,16 +135,20 @@ bool process_instruction(const N64Recomp::Context& context, const N64Recomp::Fun
}
// Output a comment with the original instruction
if (instr.isBranch() || instr.getUniqueId() == InstrId::cpu_j) {
fmt::print(output_file, " // 0x{:08X}: {}\n", instr_vram, instr.disassemble(0, fmt::format("L_{:08X}", (uint32_t)instr.getBranchVramGeneric())));
} else if (instr.getUniqueId() == InstrId::cpu_jal) {
fmt::print(output_file, " // 0x{:08X}: {}\n", instr_vram, instr.disassemble(0, fmt::format("0x{:08X}", (uint32_t)instr.getBranchVramGeneric())));
print_indent();
if (instr.isBranch() || instr_id == InstrId::cpu_j) {
generator.emit_comment(fmt::format("0x{:08X}: {}", instr_vram, instr.disassemble(0, fmt::format("L_{:08X}", (uint32_t)instr.getBranchVramGeneric()))));
} else if (instr_id == InstrId::cpu_jal) {
generator.emit_comment(fmt::format("0x{:08X}: {}", instr_vram, instr.disassemble(0, fmt::format("0x{:08X}", (uint32_t)instr.getBranchVramGeneric()))));
} else {
fmt::print(output_file, " // 0x{:08X}: {}\n", instr_vram, instr.disassemble(0));
generator.emit_comment(fmt::format("0x{:08X}: {}", instr_vram, instr.disassemble(0)));
}
if (skipped_insns.contains(instr_vram)) {
return true;
// Replace loads of jump table entries with addiu. This leaves the jump table entry's address in the output register
// instead of the entry's value, which can then be used to determine the offset from the start of the jump table.
if (jtbl_lw_instructions.contains(instr_vram)) {
assert(instr_id == InstrId::cpu_lw);
instr_id = InstrId::cpu_addiu;
}
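An illustrative before/after for that rewrite (addresses and registers made up):

    lw    $v0, 0x1C($at)    // original: loads the jump table entry's value (a label address)
    addiu $v0, $at, 0x1C    // rewritten: leaves the entry's address in $v0, so subtracting the
                            // jump table's start address yields the switch case index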
N64Recomp::RelocType reloc_type = N64Recomp::RelocType::R_MIPS_NONE;
@@ -177,19 +184,10 @@ bool process_instruction(const N64Recomp::Context& context, const N64Recomp::Fun
reloc_reference_symbol = reloc.symbol_index;
// Don't try to relocate special section symbols.
if (context.is_regular_reference_section(reloc.target_section) || reloc_section == N64Recomp::SectionAbsolute) {
// TODO this may not be needed anymore, as HI16/LO16 relocs to non-relocatable sections are handled directly in ELF parsing.
bool ref_section_relocatable = context.is_reference_section_relocatable(reloc.target_section);
uint32_t ref_section_vram = context.get_reference_section_vram(reloc.target_section);
// Resolve HI16 and LO16 reference symbol relocs to non-relocatable sections by patching the instruction immediate.
if (!ref_section_relocatable && (reloc_type == N64Recomp::RelocType::R_MIPS_HI16 || reloc_type == N64Recomp::RelocType::R_MIPS_LO16)) {
uint32_t full_immediate = reloc.target_section_offset + ref_section_vram;
if (reloc_type == N64Recomp::RelocType::R_MIPS_HI16) {
imm = (full_immediate >> 16) + ((full_immediate >> 15) & 1);
}
else if (reloc_type == N64Recomp::RelocType::R_MIPS_LO16) {
imm = full_immediate & 0xFFFF;
}
// The reloc has been processed, so set it to none to prevent it getting processed a second time during instruction code generation.
reloc_type = N64Recomp::RelocType::R_MIPS_NONE;
reloc_reference_symbol = (size_t)-1;
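The (full_immediate >> 15) & 1 term rounds the high half up whenever the low half will sign-extend negative when the instruction executes; a worked example with an illustrative address:

    full_immediate = 0x80018000
    lo = full_immediate & 0xFFFF           -> 0x8000, which sign-extends to -0x8000
    hi = (full >> 16) + ((full >> 15) & 1) -> 0x8001 + 1 = 0x8002
    check: (0x8002 << 16) + (int16_t)0x8000 = 0x80020000 - 0x8000 = 0x80018000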
@@ -206,13 +204,7 @@ bool process_instruction(const N64Recomp::Context& context, const N64Recomp::Fun
}
}
auto print_line = [&]<typename... Ts>(fmt::format_string<Ts...> fmt_str, Ts ...args) {
print_indent();
fmt::vprint(output_file, fmt_str, fmt::make_format_args(args...));
fmt::print(output_file, ";\n");
};
auto print_unconditional_branch = [&]<typename... Ts>(fmt::format_string<Ts...> fmt_str, Ts ...args) {
auto process_delay_slot = [&](bool use_indent) {
if (instr_index < instructions.size() - 1) {
bool dummy_needs_link_branch;
bool dummy_is_branch_likely;
@@ -221,56 +213,87 @@ bool process_instruction(const N64Recomp::Context& context, const N64Recomp::Fun
if (reloc_index + 1 < section.relocs.size() && next_vram > section.relocs[reloc_index].address) {
next_reloc_index++;
}
if (!process_instruction(context, func, stats, skipped_insns, instr_index + 1, instructions, output_file, false, false, link_branch_index, next_reloc_index, dummy_needs_link_branch, dummy_is_branch_likely, tag_reference_relocs, static_funcs_out)) {
if (!process_instruction(generator, context, func, func_index, stats, jtbl_lw_instructions, instr_index + 1, instructions, output_file, use_indent, false, link_branch_index, next_reloc_index, dummy_needs_link_branch, dummy_is_branch_likely, tag_reference_relocs, static_funcs_out)) {
return false;
}
}
print_indent();
fmt::vprint(output_file, fmt_str, fmt::make_format_args(args...));
if (needs_link_branch) {
fmt::print(output_file, ";\n goto after_{};\n", link_branch_index);
} else {
fmt::print(output_file, ";\n");
}
return true;
};
auto print_func_call = [reloc_target_section_offset, reloc_section, reloc_reference_symbol, reloc_type, &context, &section, &func, &static_funcs_out, &needs_link_branch, &print_unconditional_branch]
(uint32_t target_func_vram, bool link_branch = true, bool indent = false)
auto print_link_branch = [&]() {
if (needs_link_branch) {
print_indent();
generator.emit_goto(fmt::format("after_{}", link_branch_index));
}
};
auto print_return_with_delay_slot = [&]() {
if (!process_delay_slot(false)) {
return false;
}
print_indent();
generator.emit_return(context, func_index);
print_link_branch();
return true;
};
auto print_goto_with_delay_slot = [&](const std::string& target) {
if (!process_delay_slot(false)) {
return false;
}
print_indent();
generator.emit_goto(target);
print_link_branch();
return true;
};
auto print_func_call_by_register = [&](int reg) {
if (!process_delay_slot(false)) {
return false;
}
print_indent();
generator.emit_function_call_by_register(reg);
print_link_branch();
return true;
};
auto print_func_call_by_address = [&generator, reloc_target_section_offset, reloc_section, reloc_reference_symbol, reloc_type, &context, &func, &static_funcs_out, &needs_link_branch, &print_indent, &process_delay_slot, &print_link_branch]
(uint32_t target_func_vram, bool tail_call = false, bool indent = false)
{
bool call_by_lookup = false;
bool call_by_name = false;
// Event symbol, emit a call to the runtime to trigger this event.
if (reloc_section == N64Recomp::SectionEvent) {
needs_link_branch = link_branch;
needs_link_branch = !tail_call;
if (indent) {
if (!print_unconditional_branch(" recomp_trigger_event(rdram, ctx, event_indices[{}])", reloc_reference_symbol)) {
return false;
}
} else {
if (!print_unconditional_branch("recomp_trigger_event(rdram, ctx, event_indices[{}])", reloc_reference_symbol)) {
return false;
}
print_indent();
}
if (!process_delay_slot(false)) {
return false;
}
print_indent();
generator.emit_trigger_event((uint32_t)reloc_reference_symbol);
print_link_branch();
}
// Normal symbol or reference symbol,
else {
std::string jal_target_name{};
size_t matched_func_index = (size_t)-1;
if (reloc_reference_symbol != (size_t)-1) {
const auto& ref_symbol = context.get_reference_symbol(reloc_section, reloc_reference_symbol);
if (reloc_type != N64Recomp::RelocType::R_MIPS_26) {
fmt::print(stderr, "Unsupported reloc type {} on jal instruction in {}\n", (int)reloc_type, func.name);
return false;
}
if (ref_symbol.section_offset != reloc_target_section_offset) {
fmt::print(stderr, "Function {} uses a MIPS_R_26 addend, which is not supported yet\n", func.name);
return false;
if (!context.skip_validating_reference_symbols) {
const auto& ref_symbol = context.get_reference_symbol(reloc_section, reloc_reference_symbol);
if (ref_symbol.section_offset != reloc_target_section_offset) {
fmt::print(stderr, "Function {} uses a MIPS_R_26 addend, which is not supported yet\n", func.name);
return false;
}
}
jal_target_name = ref_symbol.name;
}
else {
size_t matched_func_index = 0;
JalResolutionResult jal_result = resolve_jal(context, func.section_index, target_func_vram, matched_func_index);
switch (jal_result) {
@@ -284,65 +307,81 @@ bool process_instruction(const N64Recomp::Context& context, const N64Recomp::Fun
// Create a static function and add it to the static function list for this section.
jal_target_name = fmt::format("static_{}_{:08X}", func.section_index, target_func_vram);
static_funcs_out[func.section_index].push_back(target_func_vram);
call_by_name = true;
break;
case JalResolutionResult::Ambiguous:
fmt::print(stderr, "[Info] Ambiguous jal target 0x{:08X} in function {}, falling back to function lookup\n", target_func_vram, func.name);
// Print a warning if lookup isn't forced for all non-reloc function calls.
if (!context.use_lookup_for_all_function_calls) {
fmt::print(stderr, "[Info] Ambiguous jal target 0x{:08X} in function {}, falling back to function lookup\n", target_func_vram, func.name);
}
// Relocation isn't necessary for jumps inside a relocatable section, as this code path will never run if the target vram
// is in the current function's section (see the branch for `in_current_section` above).
// If a game ever needs to jump between multiple relocatable sections, relocation will be necessary here.
jal_target_name = fmt::format("LOOKUP_FUNC(0x{:08X})", target_func_vram);
call_by_lookup = true;
break;
case JalResolutionResult::Error:
fmt::print(stderr, "Internal error when resolving jal to address 0x{:08X} in function {}. Please report this issue.\n", target_func_vram, func.name);
return false;
}
}
needs_link_branch = link_branch;
needs_link_branch = !tail_call;
if (indent) {
if (!print_unconditional_branch(" {}(rdram, ctx)", jal_target_name)) {
return false;
}
} else {
if (!print_unconditional_branch("{}(rdram, ctx)", jal_target_name)) {
return false;
}
print_indent();
}
if (!process_delay_slot(false)) {
return false;
}
print_indent();
if (reloc_reference_symbol != (size_t)-1) {
generator.emit_function_call_reference_symbol(context, reloc_section, reloc_reference_symbol, reloc_target_section_offset);
}
else if (call_by_lookup) {
generator.emit_function_call_lookup(target_func_vram);
}
else if (call_by_name) {
generator.emit_named_function_call(jal_target_name);
}
else {
generator.emit_function_call(context, matched_func_index);
}
print_link_branch();
}
return true;
};
auto print_branch = [&](uint32_t branch_target) {
// If the branch target is outside the current function, check if it can be treated as a tail call.
if (branch_target < func.vram || branch_target >= func_vram_end) {
// If the branch target is the start of some known function, this can be handled as a tail call.
// FIXME: how to deal with static functions?
if (context.functions_by_vram.find(branch_target) != context.functions_by_vram.end()) {
fmt::print("Tail call in {} to 0x{:08X}\n", func.name, branch_target);
if (!print_func_call(branch_target, false, true)) {
if (!print_func_call_by_address(branch_target, true, true)) {
return false;
}
print_line(" return");
fmt::print(output_file, " }}\n");
print_indent();
generator.emit_return(context, func_index);
// TODO check if this branch close should exist.
// print_indent();
// generator.emit_branch_close();
return true;
}
fmt::print(stderr, "[Warn] Function {} is branching outside of the function (to 0x{:08X})\n", func.name, branch_target);
}
if (instr_index < instructions.size() - 1) {
bool dummy_needs_link_branch;
bool dummy_is_branch_likely;
size_t next_reloc_index = reloc_index;
uint32_t next_vram = instr_vram + 4;
if (reloc_index + 1 < section.relocs.size() && next_vram > section.relocs[reloc_index].address) {
next_reloc_index++;
}
if (!process_instruction(context, func, stats, skipped_insns, instr_index + 1, instructions, output_file, true, false, link_branch_index, next_reloc_index, dummy_needs_link_branch, dummy_is_branch_likely, tag_reference_relocs, static_funcs_out)) {
return false;
}
if (!process_delay_slot(true)) {
return false;
}
fmt::print(output_file, " goto L_{:08X};\n", branch_target);
print_indent();
generator.emit_goto(fmt::format("L_{:08X}", branch_target));
// TODO check if this link branch ever exists.
if (needs_link_branch) {
fmt::print(output_file, " goto after_{};\n", link_branch_index);
print_indent();
generator.emit_goto(fmt::format("after_{}", link_branch_index));
}
return true;
};
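For reference, the delay-slot and link-branch helpers above shape a jal into C along these lines (instruction, register, and label names are illustrative):

    // 0x80001234: jal some_func
    ctx->r4 = 0x1;                  // delay-slot instruction, duplicated ahead of the call
    some_func(rdram, ctx);
    goto after_0;                   // skip the delay slot's normal-stream copy below
    // 0x80001238: normal-stream copy of the delay-slot instruction
    ctx->r4 = 0x1;
    after_0:                        // execution resumes here after the call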
@@ -353,7 +392,6 @@ bool process_instruction(const N64Recomp::Context& context, const N64Recomp::Fun
int rd = (int)instr.GetO32_rd();
int rs = (int)instr.GetO32_rs();
int base = rs;
int rt = (int)instr.GetO32_rt();
int sa = (int)instr.Get_sa();
@@ -365,7 +403,7 @@ bool process_instruction(const N64Recomp::Context& context, const N64Recomp::Fun
bool handled = true;
switch (instr.getUniqueId()) {
switch (instr_id) {
case InstrId::cpu_nop:
fmt::print(output_file, "\n");
break;
@@ -375,7 +413,8 @@ bool process_instruction(const N64Recomp::Context& context, const N64Recomp::Fun
Cop0Reg reg = instr.Get_cop0d();
switch (reg) {
case Cop0Reg::COP0_Status:
print_line("{}{} = cop0_status_read(ctx)", ctx_gpr_prefix(rt), rt);
print_indent();
generator.emit_cop0_status_read(rt);
break;
default:
fmt::print(stderr, "Unhandled cop0 register in mfc0: {}\n", (int)reg);
@@ -388,7 +427,8 @@ bool process_instruction(const N64Recomp::Context& context, const N64Recomp::Fun
Cop0Reg reg = instr.Get_cop0d();
switch (reg) {
case Cop0Reg::COP0_Status:
print_line("cop0_status_write(ctx, {}{})", ctx_gpr_prefix(rt), rt);
print_indent();
generator.emit_cop0_status_write(rt);
break;
default:
fmt::print(stderr, "Unhandled cop0 register in mtc0: {}\n", (int)reg);
@@ -408,38 +448,25 @@ bool process_instruction(const N64Recomp::Context& context, const N64Recomp::Fun
// If so, create a temp to preserve the addend register's value
if (find_result != stats.jump_tables.end()) {
const N64Recomp::JumpTable& cur_jtbl = *find_result;
print_line("gpr jr_addend_{:08X} = {}{}", cur_jtbl.jr_vram, ctx_gpr_prefix(cur_jtbl.addend_reg), cur_jtbl.addend_reg);
print_indent();
generator.emit_jtbl_addend_declaration(cur_jtbl, cur_jtbl.addend_reg);
}
}
break;
case InstrId::cpu_mult:
print_line("result = S64(S32({}{})) * S64(S32({}{})); lo = S32(result >> 0); hi = S32(result >> 32)", ctx_gpr_prefix(rs), rs, ctx_gpr_prefix(rt), rt);
break;
case InstrId::cpu_dmult:
print_line("DMULT(S64({}{}), S64({}{}), &lo, &hi)", ctx_gpr_prefix(rs), rs, ctx_gpr_prefix(rt), rt);
break;
case InstrId::cpu_multu:
print_line("result = U64(U32({}{})) * U64(U32({}{})); lo = S32(result >> 0); hi = S32(result >> 32)", ctx_gpr_prefix(rs), rs, ctx_gpr_prefix(rt), rt);
break;
case InstrId::cpu_dmultu:
print_line("DMULTU(U64({}{}), U64({}{}), &lo, &hi)", ctx_gpr_prefix(rs), rs, ctx_gpr_prefix(rt), rt);
break;
case InstrId::cpu_div:
// Cast to 64 bits before division to prevent an arithmetic exception for s32(0x80000000) / -1 (worked example below).
print_line("lo = S32(S64(S32({}{})) / S64(S32({}{}))); hi = S32(S64(S32({}{})) % S64(S32({}{})))", ctx_gpr_prefix(rs), rs, ctx_gpr_prefix(rt), rt, ctx_gpr_prefix(rs), rs, ctx_gpr_prefix(rt), rt);
break;
case InstrId::cpu_ddiv:
print_line("DDIV(S64({}{}), S64({}{}), &lo, &hi)", ctx_gpr_prefix(rs), rs, ctx_gpr_prefix(rt), rt);
break;
case InstrId::cpu_divu:
print_line("lo = S32(U32({}{}) / U32({}{})); hi = S32(U32({}{}) % U32({}{}))", ctx_gpr_prefix(rs), rs, ctx_gpr_prefix(rt), rt, ctx_gpr_prefix(rs), rs, ctx_gpr_prefix(rt), rt);
break;
case InstrId::cpu_ddivu:
print_line("DDIVU(U64({}{}), U64({}{}), &lo, &hi)", ctx_gpr_prefix(rs), rs, ctx_gpr_prefix(rt), rt);
print_indent();
generator.emit_muldiv(instr_id, rs, rt);
break;
// Branches
case InstrId::cpu_jal:
if (!print_func_call(instr.getBranchVramGeneric())) {
if (!print_func_call_by_address(instr.getBranchVramGeneric())) {
return false;
}
break;
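A minimal standalone illustration of why that 64-bit cast in the div case matters (a sketch, not project code):

    #include <cstdint>

    int32_t mips_div_lo(int32_t rs, int32_t rt) {
        // In plain 32-bit math, INT32_MIN / -1 overflows and raises SIGFPE on x86 hosts.
        // Widening first makes the quotient representable; truncating back to 32 bits
        // then avoids the host-side trap.
        return static_cast<int32_t>(static_cast<int64_t>(rs) / static_cast<int64_t>(rt));
    }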
@@ -450,18 +477,19 @@ bool process_instruction(const N64Recomp::Context& context, const N64Recomp::Fun
return false;
}
needs_link_branch = true;
print_unconditional_branch("LOOKUP_FUNC({}{})(rdram, ctx)", ctx_gpr_prefix(rs), rs);
print_func_call_by_register(rs);
break;
case InstrId::cpu_j:
case InstrId::cpu_b:
{
uint32_t branch_target = instr.getBranchVramGeneric();
if (branch_target == instr_vram) {
print_line("pause_self(rdram)");
print_indent();
generator.emit_pause_self();
}
// Check if the branch is within this function
else if (branch_target >= func.vram && branch_target < func_vram_end) {
print_unconditional_branch("goto L_{:08X}", branch_target);
print_goto_with_delay_slot(fmt::format("L_{:08X}", branch_target));
}
// This may be a tail call in the middle of the control flow due to a previous check
// For example:
@@ -476,11 +504,12 @@ bool process_instruction(const N64Recomp::Context& context, const N64Recomp::Fun
// ```
// FIXME: how to deal with static functions?
else if (context.functions_by_vram.find(branch_target) != context.functions_by_vram.end()) {
fmt::print("Tail call in {} to 0x{:08X}\n", func.name, branch_target);
if (!print_func_call(branch_target, false)) {
fmt::print("[Info] Tail call in {} to 0x{:08X}\n", func.name, branch_target);
if (!print_func_call_by_address(branch_target, true)) {
return false;
}
print_line("return");
print_indent();
generator.emit_return(context, func_index);
}
else {
fmt::print(stderr, "Unhandled branch in {} at 0x{:08X} to 0x{:08X}\n", func.name, instr_vram, branch_target);
@@ -490,7 +519,7 @@ bool process_instruction(const N64Recomp::Context& context, const N64Recomp::Fun
break;
case InstrId::cpu_jr:
if (rs == (int)rabbitizer::Registers::Cpu::GprO32::GPR_O32_ra) {
print_unconditional_branch("return");
print_return_with_delay_slot();
} else {
auto jtbl_find_result = std::find_if(stats.jump_tables.begin(), stats.jump_tables.end(),
[instr_vram](const N64Recomp::JumpTable& jtbl) {
@@ -499,58 +528,41 @@ bool process_instruction(const N64Recomp::Context& context, const N64Recomp::Fun
if (jtbl_find_result != stats.jump_tables.end()) {
const N64Recomp::JumpTable& cur_jtbl = *jtbl_find_result;
bool dummy_needs_link_branch, dummy_is_branch_likely;
size_t next_reloc_index = reloc_index;
uint32_t next_vram = instr_vram + 4;
if (reloc_index + 1 < section.relocs.size() && next_vram > section.relocs[reloc_index].address) {
next_reloc_index++;
}
if (!process_instruction(context, func, stats, skipped_insns, instr_index + 1, instructions, output_file, false, false, link_branch_index, next_reloc_index, dummy_needs_link_branch, dummy_is_branch_likely, tag_reference_relocs, static_funcs_out)) {
if (!process_delay_slot(false)) {
return false;
}
print_indent();
fmt::print(output_file, "switch (jr_addend_{:08X} >> 2) {{\n", cur_jtbl.jr_vram);
generator.emit_switch(context, cur_jtbl, rs);
for (size_t entry_index = 0; entry_index < cur_jtbl.entries.size(); entry_index++) {
print_indent();
print_line("case {}: goto L_{:08X}; break", entry_index, cur_jtbl.entries[entry_index]);
print_indent();
generator.emit_case(entry_index, fmt::format("L_{:08X}", cur_jtbl.entries[entry_index]));
}
print_indent();
print_line("default: switch_error(__func__, 0x{:08X}, 0x{:08X})", instr_vram, cur_jtbl.vram);
print_indent();
fmt::print(output_file, "}}\n");
generator.emit_switch_error(instr_vram, cur_jtbl.vram);
print_indent();
generator.emit_switch_close();
break;
}
auto jump_find_result = std::find_if(stats.absolute_jumps.begin(), stats.absolute_jumps.end(),
[instr_vram](const N64Recomp::AbsoluteJump& jump) {
return jump.instruction_vram == instr_vram;
});
if (jump_find_result != stats.absolute_jumps.end()) {
print_unconditional_branch("LOOKUP_FUNC({})(rdram, ctx)", (uint64_t)(int32_t)jump_find_result->jump_target);
// jr doesn't link so it acts like a tail call, meaning we should return directly after the jump returns
print_line("return");
break;
}
bool is_tail_call = instr_vram == func_vram_end - 2 * sizeof(func.words[0]);
if (is_tail_call) {
fmt::print("Indirect tail call in {}\n", func.name);
print_unconditional_branch("LOOKUP_FUNC({}{})(rdram, ctx)", ctx_gpr_prefix(rs), rs);
print_line("return");
break;
}
fmt::print(stderr, "No jump table found for jr at 0x{:08X} and not tail call\n", instr_vram);
fmt::print("[Info] Indirect tail call in {}\n", func.name);
print_func_call_by_register(rs);
print_indent();
generator.emit_return(context, func_index);
break;
}
break;
case InstrId::cpu_syscall:
print_line("recomp_syscall_handler(rdram, ctx, 0x{:08X})", instr_vram);
print_indent();
generator.emit_syscall(instr_vram);
// syscalls don't link, so treat it like a tail call
print_line("return");
print_indent();
generator.emit_return(context, func_index);
break;
case InstrId::cpu_break:
print_line("do_break({})", instr_vram);
print_indent();
generator.emit_do_break(instr_vram);
break;
// Cop1 rounding mode
@@ -559,21 +571,22 @@ bool process_instruction(const N64Recomp::Context& context, const N64Recomp::Fun
fmt::print(stderr, "Invalid FP control register for ctc1: {}\n", cop1_cs);
return false;
}
print_line("rounding_mode = ({}{}) & 0x3", ctx_gpr_prefix(rt), rt);
print_indent();
generator.emit_cop1_cs_write(rt);
break;
case InstrId::cpu_cfc1:
if (cop1_cs != 31) {
fmt::print(stderr, "Invalid FP control register for cfc1: {}\n", cop1_cs);
return false;
}
print_line("{}{} = rounding_mode", ctx_gpr_prefix(rt), rt);
print_indent();
generator.emit_cop1_cs_read(rt);
break;
default:
handled = false;
break;
}
CGenerator generator{};
InstructionContext instruction_context{};
instruction_context.rd = rd;
instruction_context.rs = rs;
@@ -589,28 +602,28 @@ bool process_instruction(const N64Recomp::Context& context, const N64Recomp::Fun
instruction_context.reloc_section_index = reloc_section;
instruction_context.reloc_target_section_offset = reloc_target_section_offset;
auto do_check_fr = [](std::ostream& output_file, const CGenerator& generator, const InstructionContext& ctx, Operand operand) {
auto do_check_fr = [](const GeneratorType& generator, const InstructionContext& ctx, Operand operand) {
switch (operand) {
case Operand::Fd:
case Operand::FdDouble:
case Operand::FdU32L:
case Operand::FdU32H:
case Operand::FdU64:
generator.emit_check_fr(output_file, ctx.fd);
generator.emit_check_fr(ctx.fd);
break;
case Operand::Fs:
case Operand::FsDouble:
case Operand::FsU32L:
case Operand::FsU32H:
case Operand::FsU64:
generator.emit_check_fr(output_file, ctx.fs);
generator.emit_check_fr(ctx.fs);
break;
case Operand::Ft:
case Operand::FtDouble:
case Operand::FtU32L:
case Operand::FtU32H:
case Operand::FtU64:
generator.emit_check_fr(output_file, ctx.ft);
generator.emit_check_fr(ctx.ft);
break;
default:
// No MIPS3 float check needed for non-float operands.
@@ -618,25 +631,25 @@ bool process_instruction(const N64Recomp::Context& context, const N64Recomp::Fun
}
};
auto do_check_nan = [](std::ostream& output_file, const CGenerator& generator, const InstructionContext& ctx, Operand operand) {
auto do_check_nan = [](const GeneratorType& generator, const InstructionContext& ctx, Operand operand) {
switch (operand) {
case Operand::Fd:
generator.emit_check_nan(output_file, ctx.fd, false);
generator.emit_check_nan(ctx.fd, false);
break;
case Operand::Fs:
generator.emit_check_nan(output_file, ctx.fs, false);
generator.emit_check_nan(ctx.fs, false);
break;
case Operand::Ft:
generator.emit_check_nan(output_file, ctx.ft, false);
generator.emit_check_nan(ctx.ft, false);
break;
case Operand::FdDouble:
generator.emit_check_nan(output_file, ctx.fd, true);
generator.emit_check_nan(ctx.fd, true);
break;
case Operand::FsDouble:
generator.emit_check_nan(output_file, ctx.fs, true);
generator.emit_check_nan(ctx.fs, true);
break;
case Operand::FtDouble:
generator.emit_check_nan(output_file, ctx.ft, true);
generator.emit_check_nan(ctx.ft, true);
break;
default:
// No NaN checks needed for non-float operands.
@@ -644,54 +657,58 @@ bool process_instruction(const N64Recomp::Context& context, const N64Recomp::Fun
}
};
auto find_binary_it = binary_ops.find(instr.getUniqueId());
auto find_binary_it = binary_ops.find(instr_id);
if (find_binary_it != binary_ops.end()) {
print_indent();
const BinaryOp& op = find_binary_it->second;
if (op.check_fr) {
do_check_fr(output_file, generator, instruction_context, op.output);
do_check_fr(output_file, generator, instruction_context, op.operands.operands[0]);
do_check_fr(output_file, generator, instruction_context, op.operands.operands[1]);
do_check_fr(generator, instruction_context, op.output);
do_check_fr(generator, instruction_context, op.operands.operands[0]);
do_check_fr(generator, instruction_context, op.operands.operands[1]);
}
if (op.check_nan) {
do_check_nan(output_file, generator, instruction_context, op.operands.operands[0]);
do_check_nan(output_file, generator, instruction_context, op.operands.operands[1]);
fmt::print(output_file, "\n ");
do_check_nan(generator, instruction_context, op.operands.operands[0]);
do_check_nan(generator, instruction_context, op.operands.operands[1]);
fmt::print(output_file, "\n");
print_indent();
}
generator.process_binary_op(output_file, op, instruction_context);
generator.process_binary_op(op, instruction_context);
handled = true;
}
auto find_unary_it = unary_ops.find(instr.getUniqueId());
auto find_unary_it = unary_ops.find(instr_id);
if (find_unary_it != unary_ops.end()) {
print_indent();
const UnaryOp& op = find_unary_it->second;
if (op.check_fr) {
do_check_fr(output_file, generator, instruction_context, op.output);
do_check_fr(output_file, generator, instruction_context, op.input);
do_check_fr(generator, instruction_context, op.output);
do_check_fr(generator, instruction_context, op.input);
}
if (op.check_nan) {
do_check_nan(output_file, generator, instruction_context, op.input);
fmt::print(output_file, "\n ");
do_check_nan(generator, instruction_context, op.input);
fmt::print(output_file, "\n");
print_indent();
}
generator.process_unary_op(output_file, op, instruction_context);
generator.process_unary_op(op, instruction_context);
handled = true;
}
auto find_conditional_branch_it = conditional_branch_ops.find(instr.getUniqueId());
auto find_conditional_branch_it = conditional_branch_ops.find(instr_id);
if (find_conditional_branch_it != conditional_branch_ops.end()) {
print_indent();
generator.emit_branch_condition(output_file, find_conditional_branch_it->second, instruction_context);
// TODO combining the branch condition and branch target into one generator call would allow better optimization in the runtime's JIT generator.
// This would require splitting into a conditional jump method and conditional function call method.
generator.emit_branch_condition(find_conditional_branch_it->second, instruction_context);
print_indent();
if (find_conditional_branch_it->second.link) {
if (!print_func_call(instr.getBranchVramGeneric())) {
if (!print_func_call_by_address(instr.getBranchVramGeneric())) {
return false;
}
}
@@ -701,22 +718,23 @@ bool process_instruction(const N64Recomp::Context& context, const N64Recomp::Fun
}
}
generator.emit_branch_close(output_file);
print_indent();
generator.emit_branch_close();
is_branch_likely = find_conditional_branch_it->second.likely;
handled = true;
}
auto find_store_it = store_ops.find(instr.getUniqueId());
auto find_store_it = store_ops.find(instr_id);
if (find_store_it != store_ops.end()) {
print_indent();
const StoreOp& op = find_store_it->second;
if (op.type == StoreOpType::SDC1) {
do_check_fr(output_file, generator, instruction_context, op.value_input);
do_check_fr(generator, instruction_context, op.value_input);
}
generator.process_store_op(output_file, op, instruction_context);
generator.process_store_op(op, instruction_context);
handled = true;
}
@@ -727,23 +745,26 @@ bool process_instruction(const N64Recomp::Context& context, const N64Recomp::Fun
// TODO is this used?
if (emit_link_branch) {
fmt::print(output_file, " after_{}:\n", link_branch_index);
print_indent();
generator.emit_label(fmt::format("after_{}", link_branch_index));
}
return true;
}
bool N64Recomp::recompile_function(const N64Recomp::Context& context, const N64Recomp::Function& func, std::ofstream& output_file, std::span<std::vector<uint32_t>> static_funcs_out, bool tag_reference_relocs) {
template <typename GeneratorType>
bool recompile_function_impl(GeneratorType& generator, const N64Recomp::Context& context, size_t func_index, std::ostream& output_file, std::span<std::vector<uint32_t>> static_funcs_out, bool tag_reference_relocs) {
const N64Recomp::Function& func = context.functions[func_index];
//fmt::print("Recompiling {}\n", func.name);
std::vector<rabbitizer::InstructionCpu> instructions;
fmt::print(output_file,
"RECOMP_FUNC void {}(uint8_t* rdram, recomp_context* ctx) {{\n"
// these variables shouldn't need to be preserved across function boundaries, so make them local for more efficient output
" uint64_t hi = 0, lo = 0, result = 0;\n"
" unsigned int rounding_mode = DEFAULT_ROUNDING_MODE;\n"
" int c1cs = 0;\n", // cop1 conditional signal
func.name);
generator.emit_function_start(func.name, func_index);
if (context.trace_mode) {
fmt::print(output_file,
" TRACE_ENTRY()\n",
func.name);
}
// Skip analysis and recompilation if this function is stubbed.
if (!func.stubbed) {
@@ -778,11 +799,11 @@ bool N64Recomp::recompile_function(const N64Recomp::Context& context, const N64R
return false;
}
std::unordered_set<uint32_t> skipped_insns{};
std::unordered_set<uint32_t> jtbl_lw_instructions{};
// Add jump table labels to the function
for (const auto& jtbl : stats.jump_tables) {
skipped_insns.insert(jtbl.lw_vram);
jtbl_lw_instructions.insert(jtbl.lw_vram);
for (uint32_t jtbl_entry : jtbl.entries) {
branch_labels.insert(jtbl_entry);
}
@@ -802,11 +823,11 @@ bool N64Recomp::recompile_function(const N64Recomp::Context& context, const N64R
bool is_branch_likely = false;
// If we're in the delay slot of a likely instruction, emit a goto to skip the instruction before any labels
if (in_likely_delay_slot) {
fmt::print(output_file, " goto skip_{};\n", num_likely_branches);
generator.emit_goto(fmt::format("skip_{}", num_likely_branches));
}
// If there are any other branch labels to insert and we're at the next one, insert it
if (cur_label != branch_labels.end() && vram >= *cur_label) {
fmt::print(output_file, "L_{:08X}:\n", *cur_label);
generator.emit_label(fmt::format("L_{:08X}", *cur_label));
++cur_label;
}
@@ -816,7 +837,7 @@ bool N64Recomp::recompile_function(const N64Recomp::Context& context, const N64R
}
// Process the current instruction and check for errors
if (process_instruction(context, func, stats, skipped_insns, instr_index, instructions, output_file, false, needs_link_branch, num_link_branches, reloc_index, needs_link_branch, is_branch_likely, tag_reference_relocs, static_funcs_out) == false) {
if (process_instruction(generator, context, func, func_index, stats, jtbl_lw_instructions, instr_index, instructions, output_file, false, needs_link_branch, num_link_branches, reloc_index, needs_link_branch, is_branch_likely, tag_reference_relocs, static_funcs_out) == false) {
fmt::print(stderr, "Error in recompiling {}, clearing output file\n", func.name);
output_file.clear();
return false;
@@ -827,7 +848,8 @@ bool N64Recomp::recompile_function(const N64Recomp::Context& context, const N64R
}
// Now that the instruction has been processed, emit a skip label for the likely branch if needed
if (in_likely_delay_slot) {
fmt::print(output_file, " skip_{}:\n", num_likely_branches);
fmt::print(output_file, " ");
generator.emit_label(fmt::format("skip_{}", num_likely_branches));
num_likely_branches++;
}
// Mark the next instruction as being in a likely delay slot if the
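Put together, a likely branch such as beql comes out of this loop roughly as follows (condition, label, and numbering are illustrative):

    // 0x80002000: beql $a0, $zero, L_80002010
    if (ctx->r4 == 0) {
        // delay-slot instruction (runs only when the branch is taken)
        goto L_80002010;
    }
    goto skip_0;
    // delay-slot instruction's normal-stream copy, skipped on the not-taken path
    skip_0: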
@@ -838,7 +860,17 @@ bool N64Recomp::recompile_function(const N64Recomp::Context& context, const N64R
}
// Terminate the function
fmt::print(output_file, ";}}\n");
generator.emit_function_end();
return true;
}
// Wrap the templated function with CGenerator as the template parameter.
bool N64Recomp::recompile_function(const N64Recomp::Context& context, size_t function_index, std::ostream& output_file, std::span<std::vector<uint32_t>> static_funcs_out, bool tag_reference_relocs) {
CGenerator generator{output_file};
return recompile_function_impl(generator, context, function_index, output_file, static_funcs_out, tag_reference_relocs);
}
bool N64Recomp::recompile_function_custom(Generator& generator, const Context& context, size_t function_index, std::ostream& output_file, std::span<std::vector<uint32_t>> static_funcs_out, bool tag_reference_relocs) {
return recompile_function_impl(generator, context, function_index, output_file, static_funcs_out, tag_reference_relocs);
}
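A hypothetical caller of the two entry points (the stream name, container sizing, and the custom generator type are illustrative):

    std::ofstream output{"funcs.c"};
    std::vector<std::vector<uint32_t>> static_funcs(context.sections.size());

    // C output through the built-in CGenerator:
    N64Recomp::recompile_function(context, func_index, output, static_funcs, true);

    // Any other backend implements the Generator interface:
    // MyLiveGenerator live_gen{...};
    // N64Recomp::recompile_function_custom(live_gen, context, func_index, output, static_funcs, true);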


@@ -1,6 +1,6 @@
#include "n64recomp.h"
#include "recompiler/context.h"
const std::unordered_set<std::string> N64Recomp::reimplemented_funcs{
const std::unordered_set<std::string> N64Recomp::reimplemented_funcs {
// OS initialize functions
"__osInitialize_common",
"osInitialize",
@@ -58,6 +58,7 @@ const std::unordered_set<std::string> N64Recomp::reimplemented_funcs{
// Parallel interface (cartridge, DMA, etc.) functions
"osCartRomInit",
"osCreatePiManager",
"osPiReadIo",
"osPiStartDma",
"osEPiStartDma",
"osPiGetStatus",
@@ -268,7 +269,6 @@ const std::unordered_set<std::string> N64Recomp::ignored_funcs {
"__osDevMgrMain",
"osPiGetCmdQueue",
"osPiGetStatus",
"osPiReadIo",
"osPiStartDma",
"osPiWriteIo",
"osEPiGetDeviceType",
@@ -557,7 +557,7 @@ const std::unordered_set<std::string> N64Recomp::ignored_funcs {
"kdebugserver",
};
const std::unordered_set<std::string> N64Recomp::renamed_funcs{
const std::unordered_set<std::string> N64Recomp::renamed_funcs {
// Math
"sincosf",
"sinf",