Post by Martin Brown
Most of the fancy chips these days have long and large manuals.
(some much better structured than others)
In days gone by, the CPU was *JUST* a CPU. You had a separate
data sheet for the UART, counter/timer, DMAC (which was often an
external device), FPU (if supported), MMU (ditto), PWM controller,
display controller, interrupt controller (once you move beyond a
couple of peripherals, managing interrupts with an "8 channel"
interrupt system is problematic), NIC/PHY, "coprocessors",
interprocessor communications, etc.
Nowadays, SoCs bundle all of that in a single package. And,
a designer should at least be aware of what hardware is contained
therein even if only to know how to ensure it is NOT accidentally
activated.
With early generation systems (even those where you pieced
together a bunch of peripheral devices), a big part of the
design process was sorting out how you could leverage the
"extra" bits of hardware available much like using any "extra"
gates you had on-hand.
"I only need one UART; what *could* I do with this OTHER one?"
"Hmmm, counter/timers come in sets of 4 -- how can I make use
of ALL of them instead of just the two that are strictly
necessary?"
The SoC that I'm using has a 1/2/4 core, 64-bit, GHz+ CPU (with
FPU & MMU & EDAC) along with a pair of "real-time" controllers.
Memory is supported by 32K L1 (per core) and 512K L2 caches
(shared). In addition to static interfaces to internal
memory, there's support for 16b LPDDR. EDAC on *all* memory.
(the other processors have additional I/D memory to support
their operations).
Its peripheral set includes 2 1588 NICs, 5 I2C channels,
3 SPI, 2 USB2, 2 CAN, 4 timers (per core), UARTs w/ support
for IrDA (slow/medium/fast) and 64 byte Rx&Tx FIFOs, FLASH
interface, SD interface, 2 PWM, 3 quadrature encoders, etc.
I.e., just the "glue logic" to support all of these devices would
exceed the complexity of most early systems.
Did I go *looking* for all of these hardware capabilities in my
selection process? No. They came along for the ride. But, if
you adopt the "computing as a service" idea (24/7/365 availability
instead of "a free-standing device /with a power switch/"),
there are some minimum requirements that can't responsibly be
avoided.
What I *needed* was EDAC support (essential for any device with
any "real" amount of memory, nowadays, especially if reliability
is a design issue; if you just want to write off hardware errors
as "software bugs", then feel free to irresponsibly ignore that!).
A paged MMU (segmented-over-paged would be ideal) is also
essential if you want to host "foreign" code; how else do you
ensure something ADDED to your system behaves well and can be
"contained" in the event it doesn't? Too many "applications"
fail to exploit the hardware available in their (e.g., Linux-hosted)
environments. Why cram your entire application into a single
process container that lets any part (thread) of it interfere
with any OTHER part? Compartmentalization/information-hiding
(Software 101). If you can't/haven't set good boundaries in
how your code is *designed*, then expect bugs to abound as
one aspect of your *implementation* stomps on other parts.
["Performance -- IPC has costs". Yeah, right. Hey, just wait
a few months and the hardware will be that much faster to make
your "performance" requirement a moot point! But, waiting
isn't going to make your code any more *robust*!! Why deprive
yourself of a mechanism that can help you do that? Ignorance?]
At least one NIC is essential to communicate with other nodes.
A second lets a node communicate with some *other* network-
based peripheral. 1588 support makes synchronizing distributed
clocks much easier (you can do it *without* that hardware support,
but that requires working in the weeds and tying your implementation
to specific hardware to get precise -- 10s of ns -- synchronization).
Support for encryption matters, as all of my IPC is actually RPC (RMI); do
you expose the pins of your bus to outsiders during use? Can
others design "add-in cards" for your device? How do you ensure
THOSE behave as intended? (it's YOUR product that will be blamed
for faults induced by THEIR hardware/software)
The DRAM i/f is essential as finding a gigabyte or more of "working
memory" in a SoC is a bit of a stretch, with today's technology.
And, the EDAC has to be external to it as, otherwise, there's no
easy way to KNOW when you are seeing (corrected) errors.
Timers are always "the more, the merrier"! Perhaps the most useful
and versatile I/O device available!
Other *legacy* cruft (I2C, CAN, SPI, UARTs, etc.) is largely excess
baggage, nowadays -- at least in *my* application domains. But,
there's often a way to repurpose those capabilities for some
other use.
Extra processors are a boon as much of the costly overhead of a
distributed/open system can be off-loaded into them. E.g., let
one handle scheduling, encrypting and decrypting RPC traffic
instead of moving that task into the application layer. Likewise,
resource accounting and enforcement (how do you prevent foreign
code from tying up resources and effectively compromising your
system's operation? Reliance on a "watchdog" is naive -- THEN
what do you do???)