
There's something in between, which you'll find on microcontrollers: SRAM. If you use a simple architecture like AVR, you also get completely deterministic timing for a load from SRAM (e.g. 2 cycles on AVR).
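To illustrate (a sketch, not a complete program — cycle counts are from the AVR instruction set manual for classic ATmega cores, and the address is arbitrary):

```asm
; Classic AVR (e.g. ATmega328P): every SRAM load takes a fixed cycle count.
lds  r24, 0x0100   ; load direct from SRAM address 0x0100 - always 2 cycles
ld   r24, X        ; load indirect via the X pointer      - always 2 cycles
```

No cache means no hits or misses, so the timing never varies — which is why cycle-counted delay loops and bit-banged protocols are viable on these parts.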

Edit: Chill, everyone. Yes, it's "implementation detail of the substrate", but it is a very important implementation detail given that it is directly exposed to the programmer as memory, not in some automagically managed cache.



SRAM is used in every CPU, not just microcontrollers. Registers and cache are usually implemented as SRAM. The false distinction this article makes between registers and RAM is misleading and indicative of the author's general ignorance of computer architecture.


It's not misleading in the least unless you're a pedantic smartass who wants something to complain about. TFA uses terminology which "Reader Daniel Hooper" will understand, and in which RAM is a synonym for "main memory". Which is the colloquial meaning of RAM outside of hardware design labs and pedantic smartassery.


That's an implementation detail of the substrate; TFA uses "RAM" in the sense of "main memory", which is the colloquial meaning of the acronym. Registers can be implemented in SRAM. So can CPU-level caches or various hardware buffers.


That was my thought when I was reading the article. On-chip SRAM on microcontrollers feels different because on general-purpose CPUs the generic programming model has registers and RAM, with the cache managed for us behind the scenes. On MCUs you almost always end up being aware of on-chip SRAM versus off-chip SRAM or DRAM. The lines are blurry for larger MCUs, but for lower-end stuff like Cortex-M, AVR or MSP430 it's definitely a good idea to look over the instruction timings for all the different flavors of storage.


Most ARM SoCs have a few hundred kilobytes of "internal RAM" (which is obviously SRAM) used mainly by the ROM and bootloader before the memory controller is initialized and can usually be accessed with the same latency as the L2 cache.

It's usually unused once the kernel has started but it can be mapped by the kernel later on if there's a use for it.


Modern x86 chips generally allow the onboard cache to be used as RAM during early boot for the same reason, too.


This is great stuff to know. Not relevant to my audience, I think, but it's something I wasn't quite aware of before, and I'm happy you pointed it out.


So, I'm a bit confused. Are registers SRAM? Or are they faster than SRAM?


Any of these computer architecture concepts: register file, L1/L2/L3 cache, main memory

Can be implemented with any of these components: DRAM, SRAM, D-FF (flip-flops)

It's common for main memory (in embedded systems) and register files to use SRAM. But you can also implement the registers as banks of flip-flops and get something bulkier but faster. I'm not sure what Intel/AMD do.


That's an awesome explanation. Thank you. [Making obvious reference to how relevant your username is]



