Memory and bus

The bus subsystem is responsible for routing physical addresses to the right backing store, applying big-endian byte swaps at typed access points, and exposing the same fabric to peripherals via IBusMaster so DMA shares the CPU's memory map.

SystemBus

lince::bus::SystemBus owns:

  • Zero or more Ram blocks, each at a configured base and size.
  • Zero or more MMIO mappings, each pointing to a non-owning IPeripheral*.

It exposes:

  • Untyped access: read_physical(PhysAddr, span<byte>) / write_physical(PhysAddr, span<const byte>).
  • Typed access: read_physical_u32, write_physical_u32 (and friends for u8/u16).
  • IBusMaster for DMA from peripherals.

Routing rules

  1. The bus walks its routing table in registration order.
  2. The first range whose [base, base+size) contains the access wins.
  3. If no range matches, the access returns ErrorCode::InvalidAddress.
  4. If an access straddles two regions, it returns ErrorCode::BusError (Decision 3 in Design decisions). Real hardware latches one transaction against one target; the bus does not silently split.
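The four rules above can be sketched as a first-match walk with a straddle check. This is an illustrative stand-alone model, not the library's actual routing code; `Region`, `RouteResult`, and `target_id` are hypothetical names.

```cpp
#include <cstdint>
#include <vector>

struct Region {
    uint64_t base;
    uint64_t size;
    int      target_id;   // stand-in for the Ram block / IPeripheral* target
};

enum class RouteResult { Hit, InvalidAddress, BusError };

RouteResult route(const std::vector<Region>& table,
                  uint64_t addr, uint64_t len, int* out_target) {
    for (const Region& r : table) {                  // walked in registration order
        if (addr >= r.base && addr < r.base + r.size) {
            // First matching range wins; the whole access must fit inside it.
            if (addr + len > r.base + r.size)
                return RouteResult::BusError;        // straddles two regions
            *out_target = r.target_id;
            return RouteResult::Hit;
        }
    }
    return RouteResult::InvalidAddress;              // no range matched
}
```

Note that only the *start* address selects the region; an access that begins in one region and ends past it is rejected rather than split across targets.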

Bus is non-copyable, non-movable

SystemBus owns RAM via unique_ptr<Ram> and peripherals cache raw pointers into it. Moving the bus would invalidate those pointers, so the type is explicitly deleted for both copy and move (Decision 1). If you need a fresh bus, construct a new one.
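The ownership shape that forces Decision 1 can be sketched as follows (`SketchBus` and its member are hypothetical; only the deleted special members mirror the real type):

```cpp
#include <cstddef>
#include <memory>
#include <type_traits>
#include <vector>

class SketchBus {
public:
    SketchBus() = default;

    // Decision 1: peripherals cache raw pointers into bus-owned state, so
    // relocating the bus would dangle them. Delete copy and move outright.
    SketchBus(const SketchBus&)            = delete;
    SketchBus& operator=(const SketchBus&) = delete;
    SketchBus(SketchBus&&)                 = delete;
    SketchBus& operator=(SketchBus&&)      = delete;

private:
    std::unique_ptr<std::vector<std::byte>> ram_;  // stand-in for unique_ptr<Ram>
};

static_assert(!std::is_copy_constructible_v<SketchBus>);
static_assert(!std::is_move_constructible_v<SketchBus>);
```

The `static_assert`s document the contract at compile time: any accidental copy or move of the bus is a build error, not a runtime surprise.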

RAM

lince::bus::Ram is a thin wrapper around std::vector<std::byte>.

It deliberately knows nothing about endianness — the bytes are stored in the order they appear on the wire. The big-endian byte-swap work is done one layer up in SystemBus::encode_be / SystemBus::decode_be. This keeps Ram trivially snapshot-able and mirrors how a real memory controller behaves: the controller sees raw bus bytes, not architectural ints.

MMIO access constraints

Two stricter-than-hardware rules apply at the bus boundary:

  • Naturally aligned 1 / 2 / 4-byte accesses only. Anything else is rejected with BusError or AlignmentError. CPU alignment traps live in the handlers, not in the bus (Decision 4).
  • All peripheral MMIO is word-only. Byte and half-word reads/writes to APBUart/IRQMP/GPTimer/MemCtrl return AlignmentError. This is stricter than GR712RC (which allows byte writes to the UART data register) but matches the MVP approach: defer narrowing to when RTEMS actually demands it (Decision 20).
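The two rules reduce to small predicates. This is a hypothetical helper, not the bus's actual validation code, but it captures the checks described above:

```cpp
#include <cstddef>
#include <cstdint>

// RAM rule: only naturally aligned 1/2/4-byte accesses are accepted.
bool is_valid_ram_access(uint64_t addr, std::size_t len) {
    return (len == 1 || len == 2 || len == 4) && (addr % len) == 0;
}

// Peripheral MMIO rule (Decision 20): word-only, word-aligned.
bool is_valid_mmio_access(uint64_t addr, std::size_t len) {
    return len == 4 && (addr % 4) == 0;
}
```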

DMA via IBusMaster

A peripheral that wants to DMA-read from or DMA-write to RAM does so through the same SystemBus instance:

void DemoDmaDevice::do_dma() {
    std::array<std::byte, 16> buf{};
    ctx_.bus->dma_read(PhysAddr{0x40000100}, buf);
    /* …mutate buf… */
    ctx_.bus->dma_write(PhysAddr{0x40000200}, buf);
}

Because the bus is the same one the CPU uses, DMA shares the exact same memory map as CPU accesses, including any custom peripherals you added. There is no separate "DMA address space".

Endianness in DMA payloads

dma_read and dma_write move raw std::byte. Since SPARC is big-endian, recomposing a uint32_t from a DMA buffer requires the standard shift-and-or pattern:

uint32_t word =
      (uint32_t(std::to_integer<uint8_t>(buf[0])) << 24)
    | (uint32_t(std::to_integer<uint8_t>(buf[1])) << 16)
    | (uint32_t(std::to_integer<uint8_t>(buf[2])) <<  8)
    |  uint32_t(std::to_integer<uint8_t>(buf[3]));

…or use the bus's typed accessors (read_physical_u32) directly when you know your DMA target is aligned.

The reference implementation in examples/demo-dma/demo_dma_device.cpp shows both styles.

Public memory API

The Emulator republishes a curated set of bus operations on its public surface, gated by a mutex so that external callers (debuggers, SMP2 wrappers, test harnesses) can read and write physical memory safely even while the round-robin loop is running. The mutex is held for the duration of a quantum; external accesses block until the next scheduling boundary.

// from src/runtime/include/lince/runtime/emulator.hpp
Result<void>      read_physical (PhysAddr, std::span<std::byte>);
Result<void>      write_physical(PhysAddr, std::span<const std::byte>);
Result<uint32_t>  read_physical_u32 (PhysAddr);
Result<void>      write_physical_u32(PhysAddr, uint32_t);
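The quantum-granular gating can be modelled with a single mutex. The class below is a toy (names and internals are illustrative, not the Emulator's real implementation): the scheduling loop holds the lock for a whole quantum, and an external read blocks at the same mutex until the quantum ends.

```cpp
#include <cstddef>
#include <cstdint>
#include <mutex>
#include <vector>

class ToyEmulator {
public:
    explicit ToyEmulator(std::size_t ram_bytes) : ram_(ram_bytes, 0) {}

    // Called by the round-robin loop: the lock spans the entire quantum, so
    // external callers never observe a half-executed quantum.
    void run_quantum() {
        std::lock_guard<std::mutex> lock(mu_);
        ram_[0] += 1;                       // stand-in for executing instructions
    }

    // Called by debuggers / test harnesses / SMP2 wrappers: blocks while a
    // quantum is in flight, then reads a consistent snapshot.
    uint8_t read_byte(std::size_t addr) {
        std::lock_guard<std::mutex> lock(mu_);
        return ram_.at(addr);
    }

private:
    std::mutex           mu_;
    std::vector<uint8_t> ram_;
};
```

The trade-off shown here is latency for consistency: an external access may wait up to one full quantum, but it never races the emulation loop.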

When the SRMMU is delivered (Phase 7), read_virtual and write_virtual will be added with the same access pattern but routed through the selected core's MMU context.