The RAD750's successor looks like it's RISC-V


It's been PowerPC in space for decades, from the Opportunity rover (a 20MHz BAE RAD6000, based on POWER1) to the James Webb Space Telescope (a 118MHz RAD750 "G3"). These are battle-tested processors in extremely hostile conditions: aboard the Juno probe in orbit around Jupiter, the RAD750 (a 132MHz part with 128MB of DRAM) operates in radiation levels a million times the human lethal dose. As a testament to its durability, Juno was supposed to be deorbited for destruction in 2021, but the mission was extended to 2025 to examine the moons Ganymede, Europa and Io.

Still, PowerPC never had a monopoly: the European Space Agency uses LEON, which is actually a free and open SPARC V8 core, and NASA has also used MIPS processors such as the 12MHz Mongoose-V (based on the R3000) in New Horizons, which visited Pluto. A cluster of ARM-based "Rad-Tol Dependable Multiprocessors" (PDF) with OMAP 3503 cores will fly on the 6U CubeSat Lunar Flashlight nanosatellite, scheduled to launch later this year after it was dropped from SLS Artemis 1. For non-mission-critical components, even some off-the-shelf ARM cores have made it to space: the Perseverance rover is another RAD750 system at 133MHz, but the Ingenuity helicopter drone it deployed runs on a regular Qualcomm Snapdragon 801.

BAE does have later-generation Power parts available today: the RAD510 SOC, a system-on-a-chip with twice the performance of the RAD750, and the RAD5500 family with the RAD5545, derived from the ISA 2.06 NXP e5500. These are all Power ISA, all radiation-hardened, and all available from BAE's Manassas, Virginia facility, a U.S. Department of Defense Category 1A Microelectronics Trusted Source. (The RAD510 core is actually made by GlobalFoundries Fab 10 — one of IBM's former fabs.) With those on the shelf it's a bit puzzling that SiFive announced their X280 (U74-derived) core with vector extensions and AI/ML support will instead be the heart of NASA's next-generation High-Performance Spaceflight Computing (HPSC) processor — or more accurately eight of them, with an additional four unspecified general-purpose RISC-V cores rounding out the total to 12. The chips are being developed on a radiation- and fault-tolerant process by Microchip Technology over the next three years at a cost of US$50 million. Beyond the added processing capability, what probably gives it the edge is lower expected power usage on a smaller process node and the ability to shut down silicon blocks for even greater power savings.

It would indeed have to be a significant technical leap to justify a complete break from a well-understood architecture, and we'll see soon enough if it's worth it. That said, assuming it accumulates the same stellar track record as the BAE Power parts, the RISC-V HPSC will likely have its own decades-long run in space (assuming Jack Kang, SiFive's senior vice president of business development, can rein in the hype: there are still more PowerPC programmers than RISC-V programmers even now). For that matter, the ESA is interested in RISC-V too, and we approve of any free computing solution as long as it does the job. But don't cry for Power ISA in space yet; with three years of HPSC development to go, and several critical missions in progress, there's plenty of universe to explore no matter what CPU is doing the exploring. As HPSC will be just one choice of many even at NASA, Power ISA parts are likely to remain part of this very conservative industry for a while, especially in commercial and military applications.

Comments

  1. While IBM's POWER9 and onwards are a Power/PowerPC architecture expected to continue development, are there any low power Power/PowerPC chip lines that NASA could reasonably expect developers to be familiar with for years to come?

    NXP seems to be moving away from its lower-power/embedded-use PowerPC cores in favour of ARM, and I do not see anyone else with active Power/PowerPC chip lines around.

    If I were NASA, I would probably be choosing ARM; betting on the future experience of developers with RISC-V when it has not seen any widespread public adoption yet seems like a somewhat insane gamble, unless I am missing something. Gambling millions of dollars on this kind of rationale seems more like SoftBank than NASA.

    1. ARM/AARCH64 carry a lot of legacy, and part of that legacy is the pain of THUMB/THUMB-2 instructions. There are essentially multiple instruction sets: the original, then v2 through v7, all of them 32-bit (ish), with the ability to scatter condensed 16-bit THUMB/THUMB-2 instructions throughout the code. ARMv8/v9 are 64-bit with an entirely different ISA, plus a compatibility fallback to the earlier 32-bit ISAs in the v7-and-prior categories. That means most modern chips have to support three ISAs at once: ARM EABI (sometimes with software FP to sidestep compatibility problems, since Cortex-A9 and above have all kinds of different standards for which extensions need to be present, whereas Cortex-A8 before it was a pretty stable platform), AARCH64, and THUMB/2. The smallest of the Cortex-M series are THUMB-only, and only a couple of ARM64 chips drop back-compat support entirely. Aside from that, ARM never really standardized boot or memory mapping across modern versions, and after Android arrived (initially targeting ARMv4), the norm became a device vendor forking from a processor vendor, forking from Google's OS SDK, which forked from upstream — so each processor and vendor sits in its own enclave of forked code that is difficult to bring up to modern standards once they're trying to sell the next version. Add ARM suing Qualcomm over a license dispute, and there being no real way to audit any of it, and it's a mess.
      The ARM ISA has outgrown being friendly, and the ecosystem is atrocious. By going RISC-V, NASA is just kinda changing the vendor from BAE to SiFive — one silo to another.
      I think the move away from POWER is because OpenPOWER just hasn't moved fast enough and seems content to rest on its laurels, plus NXP prioritizing ARM over what in its back catalog gets to live.
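      For what it's worth, the interworking pain described above is visible even at the ABI level: on 32-bit ARM, a BX branch uses bit 0 of the target address to select the instruction set, which is why function pointers to THUMB code have odd addresses. A minimal sketch of that convention (the function name here is purely illustrative):

```python
def bx_target_state(addr):
    """Model the ARM BX interworking convention (illustrative only).

    Bit 0 of the branch target selects the instruction set:
    1 means THUMB, 0 means ARM. The CPU branches to the address
    with bit 0 cleared.
    """
    state = "thumb" if addr & 1 else "arm"
    return state, addr & ~1

# A pointer to THUMB code carries an odd address:
state, target = bx_target_state(0x8001)
# state == "thumb", target == 0x8000
```

      This is also why mixed ARM/THUMB binaries need compiler-generated interworking veneers at every cross-ISA call site — exactly the kind of per-chip complexity the comment is complaining about.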

    2. Pretty much all of those ecosystem concerns vanish when you consider that NASA is getting custom SoCs made and creates all of the hardware and flight software itself. They can make it boot in literally any way they please and support whatever instruction sets make sense, and none of that diminishes their access to the existing massive market for ARM talent and experience, or the very mature software and tooling support. At this point ARM is a well-understood quantity, even if it's grown messy like x86. RISC-V, on the other hand, could completely disappear before the time comes for any rad-hardened RISC-V chips to fly.

  2. Low-power RISC-V cores are great for cheap and light helicopters, but there's only so much muscle they can bring to the table. NASA has clearly been pushing for a higher degree of brains and autonomy in their latest robotic missions to Mars, and I can see this being expanded upon in the uncrewed descent/ascent operations that are supposed to happen regularly from the Gateway station. You might be able to coordinate robotic lunar surface activities from Earth or from the station itself, but it's very cumbersome, and there are lots of things that really benefit from the "pilot" being on board in one way or another. On my wishlist would be something like a rad-hardened POWER8 or POWER9 processor with enough punch to run decent-sized machine vision and robotic control workloads in real time.

    1. I think POWER8/POWER9 in their current incarnations are just too big; I wonder how much they could be shrunk or made to draw less power. The PPC970 that was the basis of the "G5" Macs was essentially one core from a multi-core POWER4 package, so IBM has split them out in the past. I'd love to see Microwatt-based designs used in those places, but shifting from what's there to an ASIC isn't something NASA is going to do themselves, and it seems none of their big contractors want to do it either.

    2. As a baseline, I would not trust an FPGA for anything mission- or vehicle-critical: an FPGA's configuration is itself held in memory that radiation can flip. With ASICs or other forms of fixed hardware, while memory or maybe even individual operations can be corrupted, at least you can count on the circuits staying in the same place in response to radiation.

      Has there ever been any use of FPGAs for non-critical tasks, e.g. image processing? I can't imagine NASA, ESA or JAXA entrusting the data from any science instrument to one, but maybe engineering cameras?

      If nothing thus far, Artemis I has GoPros strapped to the ends of its solar panels; those probably have FPGAs in use.

