path: root/src/core/memory.cpp
Commit message                                Author                Age          Files    Lines
* scope_exit: Make constexprFearlessTobi2024-02-191-4/+4
| | | | | Allows the use of the macro in constexpr-contexts. Also avoids some potential problems when nesting braces inside it.
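A minimal sketch of what a constexpr-usable scope-exit guard can look like (illustrative, not yuzu's actual macro; requires C++20 for the constexpr destructor):

```cpp
#include <utility>

// Sketch only: constructor and destructor are constexpr so the guard can run during
// constant evaluation. A real macro would also use __COUNTER__ for unique names.
template <typename F>
class ScopeExit {
public:
    constexpr explicit ScopeExit(F&& func) : func_(std::forward<F>(func)) {}
    constexpr ~ScopeExit() {
        if (active_) {
            func_();
        }
    }
    constexpr void Cancel() {
        active_ = false;
    }

private:
    F func_;
    bool active_ = true;
};

#define SCOPE_EXIT(code) ScopeExit scope_exit_guard_{[&]() code}

constexpr int Example() {
    int value = 0;
    {
        SCOPE_EXIT({ value += 1; });  // runs when the inner scope ends
    }
    return value;
}
static_assert(Example() == 1, "guard ran during constant evaluation");
```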
* atomic_ops: Remove volatile qualifierMerry2024-01-271-4/+2
|
* SMMU: Add Android compatibilityFernando Sahmkow2024-01-191-36/+26
|
* SMMU: Implement physical memory mirroringFernando Sahmkow2024-01-191-21/+32
|
* SMMU: Initial adaptation to video_core.Fernando Sahmkow2024-01-191-7/+18
|
* core: track separate heap allocation for linuxLiam2023-12-261-23/+63
|
* general: properly support multiple memory instancesLiam2023-12-231-14/+14
|
* core: refactor emulated cpu core activationLiam2023-12-041-7/+3
|
* Address more review commentsGPUCode2023-11-251-0/+50
|
* arm: Implement native code execution backendLiam2023-11-251-0/+13
|
* core: Respect memory permissions in MapGPUCode2023-11-251-4/+4
|
* Merge pull request #11995 from FernandoS27/you-dont-need-the-new-iphoneliamwhite2023-11-161-5/+20
|\ | | | | Revert PR #11806 and do a proper fix to the memory handling.
| * Memory: Fix invalidation handling from the CPU/ServicesFernando Sahmkow2023-11-121-5/+20
| |
* | kernel: add KPageTableBaseLiam2023-11-101-3/+3
|/ | | | Co-authored-by: Kelebek1 <eeeedddccc@hotmail.co.uk>
* Merge pull request #11155 from liamwhite/memory3liamwhite2023-07-281-3/+18
|\ | | | | memory: check page against address space size
| * memory: check page against address space sizeLiam2023-07-251-3/+18
| |
* | Merge pull request #10990 from comex/ubsanliamwhite2023-07-261-13/+17
|\ \ | |/ |/| Fixes and workarounds to make UBSan happier on macOS
| * Fixes and workarounds to make UBSan happier on macOScomex2023-07-151-13/+17
| |     There are still some other issues not addressed here, but it's a start.
| |
| |     Workarounds for false-positive reports:
| |
| |     - `RasterizerAccelerated`: Put a gigantic array behind a `unique_ptr`, because UBSan has a [hardcoded limit](https://stackoverflow.com/questions/64531383/c-runtime-error-using-fsanitize-undefined-object-has-a-possibly-invalid-vp) of how big it thinks objects can be, specifically when dealing with offset-to-top values used with multiple inheritance. Hopefully this doesn't have a performance impact.
| |
| |     - `QueryCacheBase::QueryCacheBase`: Avoid an operation that UBSan thinks is UB even though it at least arguably isn't. See the link in the comment for more information.
| |
| |     Fixes for correct reports:
| |
| |     - `PageTable`, `Memory`: Use `uintptr_t` values instead of pointers to avoid UB from pointer overflow (when pointer arithmetic wraps around the address space).
| |
| |     - `KScheduler::Reload`: `thread->GetOwnerProcess()` can be `nullptr`; avoid calling methods on it in this case. (The existing code returns a garbage reference to a field, which is then passed into `LoadWatchpointArray`, and apparently it's never used, so it's harmless in practice but still triggers UBSan.)
| |
| |     - `KAutoObject::Close`: This function calls `this->Destroy()`, which overwrites the beginning of the object with junk (specifically a free list pointer). Then it calls `this->UnregisterWithKernel()`. UBSan complains about a type mismatch because the vtable has been overwritten, and I believe this is indeed UB. `UnregisterWithKernel` also loads `m_kernel` from the 'freed' object, which seems to be technically safe (the overwriting doesn't extend as far as that field), but seems dubious. Switch to a `static` method and load `m_kernel` in advance.
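For the `PageTable`/`Memory` item, the essence of the fix is doing the arithmetic on `uintptr_t` instead of on a raw pointer; a hedged sketch with illustrative names:

```cpp
#include <cstdint>

// Sketch only: the page table stores the host base as an integer rather than a pointer,
// because unsigned integer overflow is well defined while pointer overflow is UB
// (which is what UBSan reported for the old pointer-based arithmetic).
struct PageEntry {
    std::uintptr_t raw_pointer = 0;  // host base, pre-biased by the page's guest address
};

inline std::uint8_t* GetHostPointer(const PageEntry& entry, std::uint64_t vaddr) {
    // Intermediate wrap-around is harmless on uintptr_t; only the final pointer must be valid.
    return reinterpret_cast<std::uint8_t*>(entry.raw_pointer + static_cast<std::uintptr_t>(vaddr));
}
```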
* | memory: minimize dependency on processLiam2023-07-221-58/+57
| |
* | k_process: PageTable -> GetPageTableLiam2023-07-151-4/+4
|/
* Use spans over guest memory where possible instead of copying data.Kelebek12023-07-031-5/+49
|
* Memory Tracking: Optimize tracking to only use atomic writes when contested with the host GPUFernando Sahmkow2023-06-281-6/+33
|
* MemoryTracking: Initial setup of atomic writes.Fernando Sahmkow2023-06-281-3/+4
|
* Address feedback, add CR notice, etcFernando Sahmkow2023-05-071-6/+5
|
* Settings: add option to enable / disable reactive flushingFernando Sahmkow2023-05-071-1/+2
|
* GPU: Add Reactive flushingFernando Sahmkow2023-05-071-6/+21
|
* Accuracy Normal: reduce accuracy further for perf improvements in Project LimeFernando Sahmkow2023-04-231-1/+1
|
* memory: rename global memory references to application memoryLiam2023-03-241-25/+11
|
* kernel: use KTypedAddress for addressesLiam2023-03-221-143/+176
|
* general: rename CurrentProcess to ApplicationProcessLiam2023-02-141-5/+5
|
* Revert "MemoryManager: use fastmem directly."Merry2023-01-251-1/+1
| | | | This reverts commit af5ecb0b15d4449f58434e70eed835cf71fc5527.
* memory: fix watchpoint use when fastmem is enabledLiam2023-01-151-0/+4
|
* MemoryManager: use fastmem directly.Fernando Sahmkow2023-01-051-1/+1
|
* Merge pull request #9415 from liamwhite/dcMai2022-12-111-14/+15
|\ | | | | memory: correct semantics of data cache management operations
| * memory: correct semantics of data cache management operationsLiam2022-12-111-14/+15
| |
* | memory: remove DEBUG_ASSERT pointer testLiam2022-12-101-2/+0
|/
* kernel: implement FlushProcessDataCacheLiam2022-11-121-0/+65
|
* general: Resolve -Wunused-lambda-capture and C5233Morph2022-10-221-21/+16
|
* core: device_memory: Templatize GetPointer(..).bunnei2022-10-191-3/+3
|
* MemoryManager: Fix errors popping out.Fernando Sahmkow2022-10-061-0/+9
|
* code: dodge PAGE_SIZE #defineKyle Kienapfel2022-08-201-39/+42
|       Some header files, specifically for OSX and Musl libc, define PAGE_SIZE to be a number.
|       This is great except in yuzu we're using PAGE_SIZE as a variable.
|
|       Specific example: `static constexpr u64 PAGE_SIZE = u64(1) << PAGE_BITS;`
|
|       PAGE_SIZE, PAGE_BITS and PAGE_MASK are all similar variables. Simply deleted the underscores, and then added a YUZU_ prefix.
|
|       Might be worth noting that there are multiple uses in different classes/namespaces. This list may not be exhaustive:
|
|       Core::Memory    12 bits (4096)
|       QueryCacheBase  12 bits
|       ShaderCache     14 bits (16384)
|       TextureCache    20 bits (1048576, or 1MB)
|
|       Fixes #8779
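The collision and the rename, sketched out (the YUZU_-prefixed spellings follow the commit's description and should be treated as illustrative):

```cpp
// Some libc headers (macOS, musl) do something along these lines:
#define PAGE_SIZE 4096

// With that macro active, a constant of the same name is mangled by the preprocessor:
// "static constexpr u64 PAGE_SIZE = u64(1) << PAGE_BITS;" expands to
// "static constexpr u64 4096 = ...", which does not compile.

// Dropping the underscore and adding a project prefix sidesteps the macro entirely:
using u64 = unsigned long long;
constexpr u64 YUZU_PAGEBITS = 12;
constexpr u64 YUZU_PAGESIZE = u64(1) << YUZU_PAGEBITS;  // 4096
constexpr u64 YUZU_PAGEMASK = YUZU_PAGESIZE - 1;
```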
* chore: make yuzu REUSE compliantAndrea Pappacoda2022-07-271-3/+2
|       [REUSE] is a specification that aims at making file copyright information consistent, so that it can be both human and machine readable. It basically requires that all files have a header containing copyright and licensing information. When this isn't possible, like when dealing with binary assets, generated files or embedded third-party dependencies, it is permitted to insert copyright information in the `.reuse/dep5` file.
|
|       Oh, and it also requires that all the licenses used in the project are present in the `LICENSES` folder, that's why the diff is so huge. This can be done automatically with `reuse download --all`.
|
|       The `reuse` tool also contains a handy subcommand that analyzes the project and tells whether or not the project is (still) compliant, `reuse lint`.
|
|       Following REUSE has a few advantages over the current approach:
|
|       - Copyright information is easy to access for users / downstream
|       - Files like `dist/license.md` do not need to exist anymore, as `.reuse/dep5` is used instead
|       - `reuse lint` makes it easy to ensure that copyright information of files like binary assets / images is always accurate and up to date
|
|       To add copyright information of files that didn't have it I looked up who committed what and when, for each file. As yuzu contributors do not have to sign a CLA or similar I couldn't assume that copyright ownership was of the "yuzu Emulator Project", so I used the name and/or email of the commit author instead.
|
|       [REUSE]: https://reuse.software
|
|       Follow-up to 01cf05bc75b1e47beb08937439f3ed9339e7b254
* Project AndioKelebek12022-07-221-1/+5
|
* core/debugger: memory breakpoint supportLiam2022-06-161-1/+78
|
* core/debugger: Implement new GDB stub debuggerLiam2022-06-011-0/+13
|
* Revert "Memory GPU <-> CPU: reduce infighting in the texture cache by adding CPU Cached memory."bunnei2022-03-261-1/+1
|
* Memory: Don't protect reads on Normal accuracy.Fernando Sahmkow2022-03-251-1/+1
|
* core: device_memory: Use memory size reported by KSystemControl.bunnei2022-02-211-2/+1
| | | | - That way, we can consolidate the memory layout to one place.
* prevent access violation from iob in Memory::IsValidVirtualAddressAndrew Strelsky2021-09-301-1/+5
|
* memory: Address lioncash's reviewyzct123452021-08-071-52/+6
|
* memory: Dedup Read and Write and fix logging bugsyzct123452021-08-071-129/+115
|
* memory: Clean up CopyBlock tooyzct123452021-08-051-36/+15
|
* memory: Address lioncash's reviewyzct123452021-08-051-6/+7
|
* memory: Clean up codeyzct123452021-08-051-229/+77
|
* General: Add settings for fastmem and disabling adress space check.FernandoS272021-06-111-4/+10
|
* core: Make use of fastmemMarkus Wick2021-06-111-0/+12
|
* core/memory: Check our memory fallbacks for out-of-bound behavior.Markus Wick2021-05-291-4/+39
|       This makes it far harder to crash yuzu. Also implements the 48-bit address masking of AArch64 while touching this code.
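A rough sketch of the two ideas above, with illustrative names and constants (not the actual yuzu helpers):

```cpp
#include <cstdint>

// AArch64 guest virtual addresses use at most 48 bits, so higher bits can be masked off
// before the page-table lookup instead of indexing out of bounds.
constexpr std::uint64_t ADDRESS_MASK = (std::uint64_t{1} << 48) - 1;

inline std::uint64_t MaskGuestAddress(std::uint64_t vaddr) {
    return vaddr & ADDRESS_MASK;
}

// The fallback read/write paths additionally reject accesses that would run past the end
// of the emulated address space instead of crashing the host process.
inline bool IsRangeInBounds(std::uint64_t vaddr, std::uint64_t size,
                            std::uint64_t address_space_size) {
    return vaddr < address_space_size && size <= address_space_size - vaddr;
}
```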
* hle: kernel: Use host memory allocations for KSlabMemory.bunnei2021-05-211-21/+0
| | | | - There are some issues with the current workaround, we will just use host memory until we have a complete kernel memory implementation.
* hle: kernel: Rename Process to KProcess.bunnei2021-05-061-17/+17
|
* core: memory: Add a work-around to allocate and access kernel memory regions by vaddr.bunnei2021-05-061-1/+29
|
* hle: kernel: Migrate PageHeap/PageTable to KPageHeap/KPageTable.bunnei2021-02-191-1/+1
|
* memory: Remove MemoryHookMerryMage2021-01-011-30/+0
|
* core/memory: Read and write page table atomicallyReinUsesLisp2020-12-301-123/+64
|       Squash attributes into the pointer's integer, making them an uintptr_t pair containing 2 bits at the bottom and then the pointer. These bits are currently unused thanks to alignment requirements.
|
|       Configure Dynarmic to mask out these bits on pointer reads.
|
|       While we are at it, remove some unused attributes carried over from Citra.
|
|       Read/Write and other hot functions use a two step unpacking process that is less readable to stop MSVC from emitting an extra AND instruction in the hot path:
|
|           mov   rdi,rcx
|           shr   rdx,0Ch
|           mov   r8,qword ptr [rax+8]
|           mov   rax,qword ptr [r8+rdx*8]
|           mov   rdx,rax
|           -and  al,3
|           and   rdx,0FFFFFFFFFFFFFFFCh
|           je    Core::Memory::Memory::Impl::Read<unsigned char>
|           mov   rax,qword ptr [vaddr]
|           movzx eax,byte ptr [rdx+rax]
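A minimal sketch of the packed page-table entry described above (simplified; the real PageTable has more page types and is wired into Dynarmic's fast path):

```cpp
#include <atomic>
#include <cstdint>

enum class PageType : std::uintptr_t {
    Unmapped = 0,
    Memory = 1,
    RasterizerCachedMemory = 2,
};

class PageEntry {
public:
    void Store(std::uintptr_t host_pointer, PageType type) {
        // Host pointers are at least 4-byte aligned, so the low 2 bits are free to hold
        // the attribute; one atomic store updates pointer and type together.
        raw.store(host_pointer | static_cast<std::uintptr_t>(type), std::memory_order_relaxed);
    }

    PageType Type() const {
        return static_cast<PageType>(raw.load(std::memory_order_relaxed) & 3);
    }

    std::uintptr_t Pointer() const {
        // The JIT is configured to apply the same mask on its own pointer reads.
        return raw.load(std::memory_order_relaxed) & ~std::uintptr_t{3};
    }

private:
    std::atomic<std::uintptr_t> raw{0};
};
```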
* core: memory: Ensure thread safe access when pages are rasterizer cached (#5206)bunnei2020-12-251-12/+40
| | | * core: memory: Ensure thread safe access when pages are rasterizer cached.
* memory: Resolve -Wdocumentation warning for Write()Lioncash2020-12-081-2/+0
| | | | | Write() doesn't return anything, so the @returns tag shouldn't be present.
* Revert "core: Fix clang build"bunnei2020-10-211-1/+1
|
* core: Fix clang buildLioncash2020-10-181-1/+1
| | | | | | | Recent changes to the build system that made more warnings be flagged as errors caused building via clang to break. Fixes #4795
* core/CMakeLists: Make some warnings errorsLioncash2020-10-131-6/+6
| | | | | | | | | Makes our error coverage a little more consistent across the board by applying it to Linux side of things as well. This also makes it more consistent with the warning settings in other libraries in the project. This also updates httplib to 0.7.9, as there are several warning cleanups made that allow us to enable several warnings as errors.
* memory: Resolve a -Wdocumentation warningLioncash2020-09-231-1/+1
| | | | memory doesn't exist as a parameter any more.
* common/atomic_ops: Don't cast away volatile from pointersLioncash2020-07-281-6/+4
| | | | Preserves the volatility of the pointers being casted.
* memory: Set page-table pointers before setting attribute = MemoryMerryMage2020-07-051-2/+5
|
* General: Initial Setup for Single Core.Fernando Sahmkow2020-06-271-4/+4
|
* ARM/Memory: Correct Exclusive Monitor and Implement Exclusive Memory Writes.Fernando Sahmkow2020-06-271-0/+98
|
* General: Recover Prometheus project from harddrive failure Fernando Sahmkow2020-06-271-7/+4
|       This commit:
|       - Implements CPU interrupts
|       - Replaces cycle timing with host timing
|       - Reworks the kernel's scheduler
|       - Introduces the Idle and Suspended states
|       - Recreates the bootmanager
|       - Initializes the multicore system
* core: memory: Fix memory access on page boundaries.bunnei2020-04-171-6/+39
| | | | - Fixes Super Smash Bros. Ultimate.
* core: memory: Updates for new VMM.bunnei2020-04-171-100/+52
|
* core: memory: Move to Core::Memory namespace.bunnei2020-04-171-2/+2
| | | | - helpful to disambiguate Kernel::Memory namespace.
* Buffer Cache: Use vAddr instead of physical memory.Fernando Sahmkow2020-04-061-0/+115
|
* GPU: Setup Flush/Invalidate to use VAddr instead of CacheAddrFernando Sahmkow2020-04-061-6/+6
|
* core/memory: Create a special MapMemoryRegion for physical memory.Markus Wick2020-01-181-0/+11
| | | | This allows us to create a fastmem arena within the memory.cpp helpers.
* core/memory + arm/dynarmic: Use a global offset within our arm page table.Markus Wick2020-01-011-9/+16
| | | | | | This saves us two x64 instructions per load/store instruction. TODO: Clean up our memory code. We can use this optimization here as well.
* core/memory; Migrate over SetCurrentPageTable() to the Memory classLioncash2019-11-271-15/+16
| | | | | | | Now that literally every other API function is converted over to the Memory class, we can just move the file-local page table into the Memory implementation class, finally getting rid of global state within the memory code.
* core/memory: Migrate over GetPointerFromVMA() to the Memory classLioncash2019-11-271-36/+36
| | | | | | | | Now that everything else is migrated over, this is essentially just code relocation and conversion of a global accessor to the class member variable. All that remains is to migrate over the page table.
* core/memory: Migrate over Write{8, 16, 32, 64, Block} to the Memory classLioncash2019-11-271-92/+128
| | | | | | | | | The Write functions are used slightly less than the Read functions, which make these a bit nicer to move over. The only adjustments we really need to make here are to Dynarmic's exclusive monitor instance. We need to keep a reference to the currently active memory instance to perform exclusive read/write operations.
* core/memory: Migrate over Read{8, 16, 32, 64, Block} to the Memory classLioncash2019-11-271-96/+132
| | | | | | | | | | | | | | With all of the trivial parts of the memory interface moved over, we can get right into moving over the bits that are used. Note that this does require the use of GetInstance from the global system instance to be used within hle_ipc.cpp and the gdbstub. This is fine for the time being, as they both already rely on the global system instance in other functions. These will be removed in a change directed at both of these respectively. For now, it's sufficient, as it still accomplishes the goal of de-globalizing the memory code.
* core/memory: Migrate over ZeroBlock() and CopyBlock() to the Memory classLioncash2019-11-271-89/+110
| | | | | These currently aren't used anywhere in the codebase, so these are very trivial to move over to the Memory class.
* core/memory: Migrate over RasterizerMarkRegionCached() to the Memory classLioncash2019-11-271-63/+67
| | | | | This is only used within the accelerated rasterizer in two places, so this is also a very trivial migration.
* core/memory: Migrate over ReadCString() to the Memory classLioncash2019-11-271-14/+19
| | | | | This only had one usage spot, so this is fairly straightforward to convert over.
* core/memory: Migrate over GetPointer()Lioncash2019-11-271-15/+23
| | | | | With all of the interfaces ready for migration, it's trivial to migrate over GetPointer().
* core/memory: Move memory read/write implementation functions into an anonymous namespaceLioncash2019-11-271-97/+98
| | | | | | These will eventually be migrated into the main Memory class, but for now, we put them in an anonymous namespace, so that the other functions that use them, can be migrated over separately.
* core/memory: Migrate over address checking functions to the new Memory classLioncash2019-11-271-20/+31
| | | | | | | | | A fairly straightforward migration. These member functions can just be mostly moved verbatim with minor changes. We already have the necessary plumbing in places that they're used. IsKernelVirtualAddress() can remain a non-member function, since it doesn't rely on class state in any form.
* core/memory: Migrate over memory mapping functions to the new Memory classLioncash2019-11-271-71/+106
| | | | | | Migrates all of the direct mapping facilities over to the new memory class. In the process, this also obsoletes the need for memory_setup.h, so we can remove it entirely from the project.
* core/memory: Introduce skeleton of Memory classLioncash2019-11-271-0/+12
| | | | | | | | | Currently, the main memory management code is one of the remaining places where we have global state. The next series of changes will aim to rectify this. This change simply introduces the main skeleton of the class that will contain all the necessary state.
* core: Remove Core::CurrentProcess()Lioncash2019-10-061-5/+5
| | | | | | This only encourages the use of the global system instance (which will be phased out long-term). Instead, we use the direct system function call directly to remove the appealing but discouraged short-hand.
* Core/Memory: Only FlushAndInvalidate GPU if the page is marked as RasterizerCachedMemoryFernando Sahmkow2019-09-191-2/+7
| | | | | This commit avoids Invalidating and Flushing the GPU if the page is not marked as a RasterizerCache Page.
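The check described above, sketched with stand-in types (not yuzu's actual interfaces): only pages marked RasterizerCachedMemory ever have a GPU-side copy, so every other page type can skip the flush entirely.

```cpp
#include <cstdint>
#include <vector>

enum class PageType : std::uint8_t { Unmapped, Memory, RasterizerCachedMemory };

// Stand-in for the real page table: one attribute per 4 KiB page.
struct PageAttributes {
    std::vector<PageType> attributes;
};

template <typename FlushFn>
void FlushVirtualRegion(const PageAttributes& table, std::uint64_t vaddr, std::uint64_t size,
                        FlushFn&& flush_page) {
    if (size == 0) {
        return;
    }
    constexpr std::uint64_t page_bits = 12;
    const std::uint64_t first = vaddr >> page_bits;
    const std::uint64_t last = (vaddr + size - 1) >> page_bits;
    for (std::uint64_t page = first; page <= last && page < table.attributes.size(); ++page) {
        if (table.attributes[page] != PageType::RasterizerCachedMemory) {
            continue;  // this page was never cached by the rasterizer: no GPU work needed
        }
        flush_page(page << page_bits);  // e.g. rasterizer.FlushAndInvalidateRegion(addr, 4096)
    }
}
```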
* memory: Remove unused includesLioncash2019-07-061-2/+0
| | | | | These aren't used within the central memory management code, so they can be removed.
* core/cpu_core_manager: Create threads separately from initialization.Lioncash2019-04-121-8/+8
|       Our initialization process is a little wonkier than one would expect when it comes to code flow. We initialize the CPU last, as opposed to hardware, where the CPU obviously needs to be first, otherwise nothing else would work, and we have code that adds checks to get around this.
|
|       For example, in the page table setting code, we check to see if the system is turned on before we even notify the CPU instances of a page table switch. This results in dead code (at the moment), because the only time a page table switch will occur is when the system is *not* running, preventing the emulated CPU instances from being notified of a page table switch in a convenient manner (technically the code path could be taken, but we don't emulate the process creation svc handlers yet).
|
|       This moves the thread creation into its own member function of the core manager and restores a little order (and predictability) to our initialization process. Previously, in the multi-threaded cases, we'd kick off several threads before even the main kernel process was created and ready to execute (gross!).
|
|       Now the initialization process is like so:
|
|       1. Timers
|       2. CPU
|       3. Kernel
|       4. Filesystem stuff (kind of gross, but can be amended trivially)
|       5. Applet stuff (ditto in terms of being kind of gross)
|       6. Main process (will be moved into the loading step in a following change)
|       7. Telemetry (this should be initialized last in the future)
|       8. Services (4 and 5 should ideally be alongside this)
|       9. GDB (gross; uses namespace-scope state; needs to be refactored into a class or booted altogether)
|       10. Renderer
|       11. GPU (will also have its threads created in a separate step in a following change)
|
|       Which... isn't *ideal* per se, however getting rid of the wonky intertwining of CPU state initialization out of this mix gets rid of most of the footguns when it comes to our initialization process.
* core/memory: Remove GetCurrentPageTable()Lioncash2019-04-071-4/+0
| | | | | Now that nothing actually touches the internal page table aside from the memory subsystem itself, we can remove the accessor to it.
* memory: Check that core is powered on before attempting to use GPU.bunnei2019-03-211-1/+1
| | | | | - GPU will be released on shutdown, before pages are unmapped. - On subsequent runs, current_page_table will be not nullptr, but GPU might not be valid yet.
* core: Move PageTable struct into Common.bunnei2019-03-171-74/+60
|
* memory: Simplify rasterizer cache operations.bunnei2019-03-161-60/+21
|
* gpu: Use host address for caching instead of guest address.bunnei2019-03-151-5/+8
|
* gpu: Move command processing to another thread.bunnei2019-03-071-4/+4
|
* Memory: don't lock hle mutex in memory read/writeWeiyi Wang2019-03-021-6/+0
|       The comment already invalidates itself: neither MMIO nor the rasterizer cache belongs to HLE kernel state. This mutex has too large a scope if MMIO or the cache is included, which is prone to deadlock when multiple threads acquire these resources at the same time. If necessary, each MMIO component or the rasterizer should have its own lock.
* Speed up memory page mapping (#2141)Annomatg2019-02-271-6/+11
| | | | - Memory::MapPages total samplecount was reduced from 4.6% to 1.06%. - From main menu into the game from 1.03% to 0.35%
* Fixed uninitialized memory due to missing returns in canaryDavid Marcec2018-12-191-0/+1
|       Functions which are supposed to crash on non-canary builds usually don't return anything, which led to uninitialized memory being used.
* memory: Convert ASSERT into a DEBUG_ASSERT within GetPointerFromVMA()Lioncash2018-12-061-1/+1
| | | | | | Given memory should always be expected to be valid during normal execution, this should be a debug assertion, rather than a check in regular builds.
* vm_manager: Make vma_map privateLioncash2018-12-061-6/+5
| | | | | | | | | | | This was only ever public so that code could check whether or not a handle was valid or not. Instead of exposing the object directly and allowing external code to potentially mess with the map contents, we just provide a member function that allows checking whether or not a handle is valid. This makes all member variables of the VMManager class private except for the page table.
* Call shrink_to_fit after page-table vector resizing to cause crt to actually lower vector capacity. For 36-bit titles saves 800MB of commit.heapo2018-12-051-0/+8
|
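A sketch of the capacity trick described in the commit above, using an illustrative page-table vector (not yuzu's actual types): `resize()` alone keeps the old capacity committed, so `shrink_to_fit()` is needed to actually hand the memory back.

```cpp
#include <cstdint>
#include <vector>

void ResizePageTable(std::vector<std::uintptr_t>& pointers, std::size_t new_page_count) {
    pointers.resize(new_page_count);
    // resize() to a smaller size does not reduce capacity(); without this call the allocation
    // sized for the larger address space stays committed by the CRT.
    pointers.shrink_to_fit();  // non-binding request, but in practice releases the excess
}
```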
* global: Use std::optional instead of boost::optional (#1578)Frederic L2018-10-301-1/+1
|       * get rid of boost::optional
|       * Remove optional references
|       * Use std::reference_wrapper for optional references
|       * Fix clang format
|       * Fix clang format part 2
|       * Addressed feedback
|       * Fix clang format and MacOS build
* kernel/process: Make data member variables privateLioncash2018-09-301-7/+7
| | | | | | | Makes the public interface consistent in terms of how accesses are done on a process object. It also makes it slightly nicer to reason about the logic of the process class, as we don't want to expose everything to external code.
* memory: Dehardcode the use of fixed memory range constantsLioncash2018-09-251-5/+7
| | | | | | | | The locations of these can actually vary depending on the address space layout, so we shouldn't be using these when determining where to map memory or be using them as offsets for calculations. This keeps all the memory ranges flexible and malleable based off of the virtual memory manager instance state.
* memory: Dehardcode the use of a 36-bit address spaceLioncash2018-09-251-2/+16
| | | | | Given games can also request a 32-bit or 39-bit address space, we shouldn't be hardcoding the address space range as 36-bit.
* Port #4182 from Citra: "Prefix all size_t with std::"fearlessTobi2018-09-151-27/+28
|
* gl_renderer: Cache textures, framebuffers, and shaders based on CPU address.bunnei2018-08-311-36/+15
|
* gpu: Make memory_manager privateLioncash2018-08-281-2/+2
| | | | | | | | | | Makes the class interface consistent and provides accessors for obtaining a reference to the memory manager instance. Given we also return references, this makes our more flimsy uses of const apparent, given const doesn't propagate through pointers in the way one would typically expect. This makes our mutable state more apparent in some places.
* renderer_base: Make Rasterizer() return the rasterizer by referenceLioncash2018-08-041-4/+4
| | | | | | | All calling code assumes that the rasterizer will be in a valid state, which is a totally fine assumption. The only way the rasterizer wouldn't be is if initialization is done incorrectly or fails, which is checked against in System::Init().
* video_core: Eliminate the g_renderer global variableLioncash2018-08-041-8/+10
| | | | | | | | | | | | | | We move the initialization of the renderer to the core class, while keeping the creation of it and any other specifics in video_core. This way we can ensure that the renderer is initialized and doesn't give unfettered access to the renderer. This also makes dependencies on types more explicit. For example, the GPU class doesn't need to depend on the existence of a renderer, it only needs to care about whether or not it has a rasterizer, but since it was accessing the global variable, it was also making the renderer a part of its dependency chain. By adjusting the interface, we can get rid of this dependency.
* memory: Remove unused GetSpecialHandlers() functionLioncash2018-08-031-16/+0
| | | | This is just unused code, so we may as well get rid of it.
* core/memory: Get rid of 3DS leftoversLioncash2018-08-031-106/+0
| | | | Removes leftover code from citra that isn't needed.
* Merge pull request #690 from lioncash/movebunnei2018-07-191-3/+5
|\ | | | | core/memory, core/hle/kernel: Use std::move where applicable
| * core/memory, core/hle/kernel: Use std::move where applicableLioncash2018-07-191-3/+5
| | | | | | | | Avoids pointless copies
* | core/memory: Remove unused function GetSpecialHandlers() and an unused variable in ZeroBlock()Lioncash2018-07-191-7/+0
|/
* Update clang formatJames Rowe2018-07-031-12/+12
|
* Rename logging macro back to LOG_*James Rowe2018-07-031-12/+12
|
* Kernel/Arbiters: Fix casts, cleanup comments/magic numbersMichael Scire2018-06-221-0/+4
|
* core: Implement multicore support.bunnei2018-05-111-2/+7
|
* general: Make formatting of logged hex values more straightforwardLioncash2018-05-021-11/+11
| | | | | | This makes the formatting expectations more obvious (e.g. any zero padding specified is padding that's entirely dedicated to the value being printed, not any pretty-printing that also gets tacked on).
* general: Convert assertion macros over to be fmt-compatibleLioncash2018-04-271-10/+9
|
* Merge pull request #387 from Subv/maxwell_2dbunnei2018-04-261-0/+4
|\ | | | | GPU: Partially implemented the 2D surface copy engine
| * Memory: Added a missing shortcut for Memory::CopyBlock for the current process.Subv2018-04-251-0/+4
| |
* | core/memory: Amend address widths in assertsLioncash2018-04-251-2/+2
| | | | | | | | Addresses are 64-bit, these formatting specifiers are simply holdovers from citra. Adjust them to be the correct width.
* | core/memory: Move logging macros over to new fmt-capable onesLioncash2018-04-251-22/+24
|/ | | | While we're at it, correct addresses to print all 64 bits where applicable, which were holdovers from citra.
* gl_rasterizer_cache: Update to be based on GPU addresses, not CPU addresses.bunnei2018-04-251-16/+48
|
* memory: Fix cast for ReadBlock/WriteBlock/ZeroBlock/CopyBlock.bunnei2018-03-271-4/+8
|
* memory: Add RasterizerMarkRegionCached code and cleanup.bunnei2018-03-271-200/+190
|
* Merge pull request #265 from bunnei/tegra-progress-2bunnei2018-03-241-0/+40
|\ | | | | Tegra progress 2
| * memory: Fix typo in RasterizerFlushVirtualRegion.bunnei2018-03-231-3/+3
| |
| * memory: RasterizerFlushVirtualRegion should also check process image region.bunnei2018-03-231-0/+1
| |
| * rasterizer: Flush and invalidate regions should be 64-bit.bunnei2018-03-231-2/+2
| |
| * memory: Port RasterizerFlushVirtualRegion from Citra.bunnei2018-03-231-0/+39
| |
* | Remove more N3DS ReferencesN00byKing2018-03-221-9/+0
|/
* core: Move process creation out of global state.bunnei2018-03-141-15/+15
|
* memory: LOG_ERROR when falling off end of page tableMerryMage2018-02-211-0/+11
|
* memory: Silence formatting sepecifier warningsLioncash2018-02-141-21/+30
|
* memory: Replace all memory hooking with Special regionsMerryMage2018-01-271-317/+163
|
* memory: Return false for large VAddr in IsValidVirtualAddressRozlette2018-01-201-0/+3
|
* Remove gpu debugger and get yuzu qt to compileJames Rowe2018-01-131-40/+1
|
* fix macos buildMerryMage2018-01-091-4/+4
|
* core/video_core: Fix a bunch of u64 -> u32 warnings.bunnei2018-01-011-8/+8
|
* memory: Print addresses as 64-bit.bunnei2017-10-191-2/+2
|
* Merge remote-tracking branch 'upstream/master' into nxbunnei2017-10-101-143/+211
|\      # Conflicts:
| |     #       src/core/CMakeLists.txt
| |     #       src/core/arm/dynarmic/arm_dynarmic.cpp
| |     #       src/core/arm/dyncom/arm_dyncom.cpp
| |     #       src/core/hle/kernel/process.cpp
| |     #       src/core/hle/kernel/thread.cpp
| |     #       src/core/hle/kernel/thread.h
| |     #       src/core/hle/kernel/vm_manager.cpp
| |     #       src/core/loader/3dsx.cpp
| |     #       src/core/loader/elf.cpp
| |     #       src/core/loader/ncch.cpp
| |     #       src/core/memory.cpp
| |     #       src/core/memory.h
| |     #       src/core/memory_setup.h
| * Memory: Make WriteBlock take a Process parameter on which to operateSubv2017-10-011-10/+17
| |
| * Memory: Make ReadBlock take a Process parameter on which to operateSubv2017-10-011-12/+28
| |
| * Fixed type conversion ambiguityHuw Pascoe2017-09-301-14/+22
| |
| * Merge pull request #2961 from Subv/load_titlesbunnei2017-09-291-7/+18
| |\ | | | | | | Loaders: Don't automatically set the current process every time we load an application.
| | * Memory: Allow IsValidVirtualAddress to be called with a specific process parameter.Subv2017-09-271-7/+18
| | | | | | | | | | | | There is still an overload of IsValidVirtualAddress that only takes the VAddr and will default to the current process.
| * | Merge pull request #2954 from Subv/cache_unmapped_memJames Rowe2017-09-271-1/+16
| |\ \ | | |/ | |/| Memory/RasterizerCache: Ignore unmapped memory regions when caching physical regions
| | * Memory/RasterizerCache: Ignore unmapped memory regions when caching physical regions.Subv2017-09-261-1/+16
| | | | | | | | | | | | | | | | | | Not all physical regions need to be mapped into the address space of every process, for example, system modules do not have a VRAM mapping. This fixes a crash when loading applets and system modules.
| * | ARM_Interface: Implement PageTableChangedMerryMage2017-09-251-0/+5
| | |
| * | memory: Remove GetCurrentPageTablePointersMerryMage2017-09-241-4/+0
| | |
| * | memory: Add GetCurrentPageTable/SetCurrentPageTableMerryMage2017-09-241-1/+9
| |/ | | | | | | Don't expose Memory::current_page_table as a global.
| * Merge pull request #2842 from Subv/switchable_page_tableB3n302017-09-151-79/+74
| |\ | | | | | | Kernel/Memory: Give each process its own page table and allow switching the current page table upon reschedule
| | * Kernel/Memory: Make IsValidPhysicalAddress not go through the current process' virtual memory mapping.Subv2017-09-151-2/+1
| | |
| | * Kernel/Memory: Changed GetPhysicalPointer so that it doesn't go through the current process' page table to obtain a pointer.Subv2017-09-151-3/+62
| | |
| | * Kernel/Memory: Give each Process its own page table.Subv2017-09-101-75/+12
| | | | | | | | | | | | The loader is in charge of setting the newly created process's page table as the main one during the loading process.
| * | Use recursive_mutex instead of mutex to fix #2902danzel2017-08-291-2/+2
| | |
| * | Merge pull request #2839 from Subv/global_kernel_lockJames Rowe2017-08-241-1/+8
| |\ \ | | |/ | |/| Kernel/HLE: Use a mutex to synchronize access to the HLE kernel state between the cpu thread and any other possible threads that might touch the kernel (network thread, etc).
| | * Kernel/Memory: Acquire the global HLE lock when a memory read/write operation falls outside of the fast path, for it might perform an MMIO operation.Subv2017-08-221-1/+8
| | |
* | | memory: Log with 64-bit values.bunnei2017-09-301-8/+8
| | |
* | | core: Various changes to support 64-bit addressing.bunnei2017-09-301-22/+22
|/ /
* | Merge pull request #2799 from yuriks/virtual-cached-range-flushWeiyi Wang2017-07-221-52/+76
|\ \ | |/ |/| Add address conversion functions returning optional, Add function to flush virtual region from rasterizer cache
| * Memory: Add function to flush a virtual range from the rasterizer cacheYuri Kunde Schlesner2017-06-221-39/+52
| | | | | | | | | | | | This is slightly more ergonomic to use, correctly handles virtual regions which are disjoint in physical addressing space, and checks only regions which can be cached by the rasterizer.
| * Memory: Add TryVirtualToPhysicalAddress, returning a boost::optionalYuri Kunde Schlesner2017-06-221-4/+12
| |
| * Memory: Make PhysicalToVirtualAddress return a boost::optionalYuri Kunde Schlesner2017-06-221-9/+12
| | | | | | | | And fix a few places in the code to take advantage of that.
* | Memory: Fix crash when unmapping a VMA covering cached surfacesYuri Kunde Schlesner2017-06-221-5/+20
|/ | | | | | | | | | Unmapping pages tries to flush any cached GPU surfaces touching that region. When a cached page is invalidated, GetPointerFromVMA() is used to restore the original pagetable pointer. However, since that VMA has already been deleted, this hits an UNREACHABLE case in that function. Now when this happens, just set the page type to Unmapped and continue, which arrives at the correct end result.
* Memory: Add constants for the n3DS additional RAMYuri Kunde Schlesner2017-05-101-2/+6
| | | | This is 4MB of extra, separate memory that was added on the New 3DS.
* Revert "Memory: Always flush whole pages from surface cache"bunnei2016-12-181-10/+0
|
* Memory: Always flush whole pages from surface cacheYuri Kunde Schlesner2016-12-151-0/+10
| | | | | This prevents individual writes touching a cached page, but which don't overlap the surface, from constantly hitting the surface cache lookup.
* Expose page table to dynarmic for optimized reads and writes to the JITJames Rowe2016-11-251-6/+8
|
* memory: fix IsValidVirtualAddress for RasterizerCachedMemorywwylele2016-09-291-0/+3
|       RasterizerCachedMemory doesn't have a pointer but should still be considered valid.
* Use negative priorities to avoid special-casing the self-includeYuri Kunde Schlesner2016-09-211-1/+1
|
* Remove empty newlines in #include blocks.Emmanuel Gil Peyrot2016-09-211-4/+1
| | | | | | | This makes clang-format useful on those. Also add a bunch of forgotten transitive includes, which otherwise prevented compilation.
* Sources: Run clang-format on everything.Emmanuel Gil Peyrot2016-09-181-35/+49
|
* Memory: add ReadCString functionwwylele2016-08-271-0/+14
|
* Memory: Handle RasterizerCachedMemory and RasterizerCachedSpecial page types in the memory block manipulation functions.Subv2016-05-281-1/+60
|
* Memory: Make ReadBlock and WriteBlock accept void pointers.Subv2016-05-281-4/+4
|
* Memory: CopyBlockMerryMage2016-05-281-0/+41
|
* Memory: ZeroBlockMerryMage2016-05-211-0/+38
|
* Memory: ReadBlock/WriteBlockMerryMage2016-05-211-3/+74
|
* Memory: IsValidVirtualAddress/IsValidPhysicalAddressMerryMage2016-05-211-0/+21
|
* HWRasterizer: Texture forwardingtfarley2016-04-211-0/+140
|
* Memory: Do correct Phys->Virt address translation for non-APP linheapYuri Kunde Schlesner2016-03-061-1/+1
|
* Memory: Implement MMIOMerryMage2016-01-301-6/+80
|
* Fixed spelling errorsGareth Poole2015-10-091-2/+2
|
* memory: Get rid of pointer castsLioncash2015-09-101-14/+7
|
* Kernel: Add more infrastructure to support different memory layoutsYuri Kunde Schlesner2015-08-161-1/+4
| | | | | | This adds some structures necessary to support multiple memory regions in the future. It also adds support for different system memory types and the new linear heap mapping at 0x30000000.
* Memory: Move address type conversion routines to memory.cpp/hYuri Kunde Schlesner2015-08-161-1/+36
| | | | | These helpers aren't really part of the kernel, and mem_map.cpp/h is going to be moved there next.
* Memory: Fix unmapping of pagesYuri Kunde Schlesner2015-07-121-4/+2
|
* Common: Cleanup memory and misc includes.Emmanuel Gil Peyrot2015-06-281-3/+0
|
* Kernel: Add VMManager to manage process address spacesYuri Kunde Schlesner2015-05-271-4/+8
| | | | | | | | This enables more dynamic management of the process address space, compared to just directly configuring the page table for major areas. This will serve as the foundation upon which the rest of the Kernel memory management functions will be built.
* Memory: Use a table based lookup scheme to read from memory regionsYuri Kunde Schlesner2015-05-151-120/+123
|
* Memory: Read SharedPage directly from Memory::ReadYuri Kunde Schlesner2015-05-151-1/+2
|
* Memory: Read ConfigMem directly from Memory::ReadYuri Kunde Schlesner2015-05-151-1/+2
|
* Memmap: Re-organize memory function in two filesYuri Kunde Schlesner2015-05-151-0/+197
memory.cpp/h contains definitions related to acessing memory and configuring the address space mem_map.cpp/h contains higher-level definitions related to configuring the address space accoording to the kernel and allocating memory.