A semi-review of the Raptor Blackbird: POWER9 on the cheap(er)


(*A big thanks to Tim Pearson at Raptor for his help with some of the technical questions, though I jealously guard my editorial integrity: this review was neither written nor approved by Raptor, and this machine was bought as a retail item without commercial consideration or discount.)

Much has been made (and occasionally mocked) of the Raptor Talos II's purchase price, which is hardly alone in the market, and while I think you get what you pay for, it's still admittedly eye-watering. (I'm saving pennies for a spare system and an upgrade to a dual-8 instead of this dual-4, but that probably won't happen until I get my tax refund next year, and I'm fortunate to make a decent living in a field largely unrelated to computing.) There are the usual folks who miss the point by stating that ARM and x86 machines are cheaper, and then there are the people who will miss the boat by waiting around for RISC-V, but while I think the people in the first category have their priorities mixed up, they're also not wrong. The reason is simply that x86 and ARM machines are bought, sold and manufactured in vastly greater numbers. No other architecture has the economy of scale of x86_64 and ARM, and therefore no other architecture will have their price advantages for the foreseeable future. Boutique systems cost boutique bucks; the classic example is the PowerPC Amiga systems, which make it even worse by running embedded CPUs. If price or (perceived) value for dollar is your biggest concern, stop reading right now, because nothing in this review will convince you otherwise. Just don't ever complain someday when you don't have choices in computing architectures, because you did have a choice, and you chose cheap.

The point of the Blackbird is for people who either (like me) don't feel like feeding the Chipzilla monoculture further, or (like many others) prefer an alternative computing architecture that's open and fully auditable, and would prefer a smaller capital outlay. Maybe you're curious but you're not ready to make POWER9 your daily driver. Maybe you'd like to have one around as an "option" for playing around with. Maybe you have a need or interest to port software. Maybe some of your tasks would benefit from its auditability, but you don't need a full T2 for them. Or, more simply, maybe you see the value in the platform but this is all you can afford.

Now, finally, there's a lower-cost option. Not low-cost, but lower-cost: the price just plain won't be competitive with commodity systems and you shouldn't buy it with that expectation. I call this article a "semi-review" because it's not (primarily) about performance testing, benchmarks or other forms of male body part measurement; you can get that at Phoronix, which has a Blackbird of its own under test. Instead, I'm here to ask two questions: how cheap can you get a decent POWER9 system? And how good will that low-cost system be? There's no spoiler alert in saying I think this system is solid and a good value for the right kind of buyer, because really, what are you reading this blog for? I'll tell you what I paid, I'll tell you what I got for it, I'll tell you the ups and downs, and then I'll let you decide.

As with the Talos II, the Blackbird is a fully open-source, auditable POWER9 system from the firmware up, but the biggest advantage of the Blackbird over the T2, even more so than its price, is its size. The T2 is a hulking EATX monster, and even the cut-down Lite is in the same form factor, but the Blackbird is a lithe micro-ATX system and will fit in pretty much any compliant case. Cutting it down to size has some substantial impacts, though: there's only a single socket, and because the CPU handles directly-attached RAM and PCIe, one CPU means fewer memory channels and PCIe lanes. That means a smaller RAM ceiling (just two DIMM slots for a 256GB maximum, versus 2TB in a loaded T2), fewer PCIe slots (one x16 and one x8, versus three x16 and two x8), and of course fewer hardware threads. The 3.2/3.8GHz Sforza POWER9 sold for the Blackbird is the same CPU as the T2's, so that means SMT-4 and a maximum of 88 threads with a 22-core part, but you'll only have one of them. Plus, this smaller board has weaker power regulation, so anything with more than eight cores, such as a 16-core part or that 22-core beast, can't run at full performance (if it runs at all).

That said, the Blackbird makes up for the limited expansion options with basic creature comforts on board, including USB 3.0, SATA, HDMI video out (using the ASpeed BMC as a 2D framebuffer), 5.1 audio over analogue, S/PDIF and HDMI, and Ethernet. (These devices are all blob-free except the NIC, by the way, and that last deficiency is already being worked on.) To keep the cost really low, I decided I'd just use the 2D BMC graphics and the on-board audio, a SATA SSD instead of NVMe like in my T2, and only 16GB of RAM.

I also thought about where to put it. We actually do have a need for a home theatre PC, and my wife is curious about Linux, so I decided to configure it that way and connect it into our existing DLP projection system. There's no Ethernet drop in the home theatre yet, so this machine will be wireless. Let's go shopping!

As of this writing the low-end 4-core (16-thread) Blackbird bundle starts at $1280 (all prices in US dollars). This includes the CPU and an I/O plate but not a heatsink or heatsink-fan (HSF) assembly, which is extra and required, and frankly Raptor should just build that into the price. I was fortunate to get in on the Thanksgiving Black Friday special and got one for $1000. With the 2U heatsink, installation tool and shipping it came to almost exactly $1280 out of my pocket, so let's add another $280 to the current price for what you're likely to pay and budget $1560. I don't claim the prices below are particular bargains; they're just what seemed decent on Amazon, and your money/mileage may vary. Prices are rounded up to the nearest whole dollar.

Blackbird Bundle (4-way CPU, heatsink, I/O plate and install tool, shipped to your door): $1560
Micron MTA18ASF2G72PDZ-2G6D1 16GB DDR4-2666 ECC RDIMM: $121
Seasonic Focus 650W 120mm Modular ATX PSU (admittedly overkill): $94
SilverStone ML03B mATX Case: $88
Samsung 860 EVO 500GB SATA III SSD: $80
LG 14x BD-RW: $60
Logitech K400 Wireless Combo Keyboard + Touchpad: $25
Vonets VAP11G-300 WiFi Bridge: $20
Arctic F8 80mm PWM case fans x 2: $15
SSD bracket: $8
Various random cables from my bag of crap: cheap as

That's a grand total of $2071 (I later spent an additional $22, which I'll explain in a minute) for what I would consider a basic system: not absolute barebones, but not generously configured either. Given that the major cost is the board itself, no matter what deals you find on similar items, you're pretty much guaranteed to be paying in the ballpark of two grand for a comparable loadout without a GPU.

One delivery exception and useless mailbox staff member later, I have a nice big box marked Fragile from the Raptor folks. If the NSA TAO got into this, it wasn't obvious from the outside.

Inside the big box was the 2U heatsink, the CPU (in the smaller white box), a 5/32" ballhead driver for cinching down the heatsink (ordered because I couldn't find where my other one had gone), and of course the Blackbird itself. I have serial #75, which seems high for an early order, and I'm sad I missed one of the serial number 1 certificates.

Let's open up the Blackbird's box:

The DVD includes schematics, the manual, firmware builds and source code. There's also a paper info sheet, the test readout, the I/O plate, a couple of cool stickers (hey, can we buy more of these OpenPOWER ones?) and, there it is, the motherboard. Other than the fact that it's brand-spanking new, plus some additional markings and a couple of now-populated pin headers, at a cursory glance I didn't see many changes in this Rev 1.01 board from the Rev 1.00 board I saw at SCaLE 17x, which shows how mature the design already was by that stage.

The other very important thing in the box is a piece of receipt paper with red and black dot-matrix printing containing the BMC factory root password. It is not 0penBmc as the manual states, and in fact nothing other than the slip of paper itself even references its existence. If you toss it out carelessly, as I almost did, you won't be able to get into the BMC with anything short of a serial cable to its headers on the board. I applaud Raptor for their level of security, but it would have been nice to have been warned to watch for it. Worse, the password on my unit contained a capital O, which the font doesn't differentiate from a zero! (By the way, duh, I've changed the password. Don't bother writing in that you can see it.)

One item to note in particular on the board is the bank of status LEDs, at the top left in this picture's orientation. If the case front panel doesn't have sufficient LEDs to display their state (the one I bought doesn't), you may want to position this LED bank so that you can see it easily. On my system the LEDs are visible through the side ventilation holes.

Let's pop the motherboard in the case, which I already prepared with the SSD, optical drive and PSU. We'll then remove (carefully) the plastic cap on the socket:

The CPU socket is a ZIF-type affair and has alignment notches so you can't put the CPU in wrong. It just drops in (though don't literally drop it, those delicate pins will bend).

The 2U heatsink is shown here. It just clears the top of the mATX case I was using, but it is a passive cooler only, with no fan of its own. The manual mentioned an indium pad, but that didn't sound necessary for the 4-core, and indeed neither the 4- nor the 8-core requires one. The copper base has a couple of small voids, but they don't seem to affect its ability to cool the chip. You don't need, and in fact should not use, thermal compound, though I did polish the heat spreader with a microfibre cloth before installing the heatsink to remove fingerprints.

The side clasps grip the heatsink and should be level. Then insert your 5/32" ballhead driver in the top and turn clockwise. It requires a little bit of torque, but it's absolutely obvious when it's cinched down all the way, because unless you're Ronda Rousey it won't turn any more.

At this point I also installed the two 80mm case fans and connected one to each of the two 4-pin PWM fan zones (if you have the heatsink-fan assembly, you would connect that instead to the zone closest to the CPU). For an HTPC we definitely want as silent a system as possible, but Raptor doesn't recommend passively cooling the rest of the components, even with the 4-core, because the fans will throttle down when they can and there may not be sufficient airflow. More about this in a moment.

One last gotcha: you would think that with one stick of RAM, it would go in slot A1. You would be wrong; the manual says it performs better in B1. (The single stick does work in A1, though. I won't tell you how I know; I just sense these things.)

Anyway, let's put the lid on the case and install it in the home theatre rack. I put it on the bottom since it clears nicely there.

Connected to wall power, it took about a minute to bring the BMC up (until the rear status lights stopped pulsing). Now would be a good time to get in and change the OpenBMC default password provided on the slip of paper in the box. OpenBMC is accessible from the network on Ethernet port 3, which is the one directly on top of the rear USB ports. Unfortunately, if you use a USB WiFi dongle, those ports are powered down when the machine is, so there would be no way to access the BMC without setting up some miniature Ethernet hardline network or plugging into the serial port on the board. I suspected this would be an issue, hence the Vonets WiFi bridge, which connects to the Blackbird over Ethernet and is USB-powered, so I can power it independently from a wall wart. Because the BMC gets its address over DHCP, there may be a lag before it requests a lease and it may appear at varying addresses (I eventually tired of this and hardcoded a rule for its MAC address in the WiFi router). If your system will be hard-wired to Ethernet, though, there should be no problem. Note that the Vonets devices are no panacea either, because they need separate configuration for the access point and password (I plugged mine into my iBook G4 and used TenFourFox to set it up).
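
If you've never poked at an OpenBMC machine before, the BMC is a little Linux system in its own right and answers SSH, so changing the factory password is just a login away. A minimal sketch, assuming a hypothetical BMC address of 192.168.1.50 (substitute whatever your router's DHCP table shows):

    # log into the BMC as root with the password from the paper slip
    ssh root@192.168.1.50
    # then, on the BMC itself, set a new root password
    passwd

And if your router happens to run dnsmasq, the hardcoded lease rule I mentioned is a one-liner; the MAC address here is purely illustrative:

    # /etc/dnsmasq.conf: always hand the BMC's MAC the same address
    dhcp-host=aa:bb:cc:dd:ee:ff,192.168.1.50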

Once the BMC is ready, a quick press of the power button, and we have liftoff! Much as with the T2, the fans whir at full RPM on startup and then kick down at IPL as the temperature permits. Connected directly to the BMC's HDMI port, we get a graphical startup which is really quite cool. It shows all the steps, even some spurious meaningless errors which you can safely ignore. This image was displayed on the wall by my DLP projector, but the blinds were up, so apologies for the extra light.

And, about a minute and change later, here's our friend Petitboot:

Let's install the operating system. I don't know if I'll leave it on there and I'll probably experiment with some other OS choices, but I decided to install Fedora 30 on the Blackbird for review purposes so that I can compare the overall feel of the machine with my daily driver T2. (My T2 is a dual-4 with the BTO WX 7100 workstation card, 32GB of RAM and 1.5TB of NVMe flash.) So let's do that. It will also make a nice little stress test to see how it manages its own cooling.

I left it downloading Fedora Workstation and went to dinner, and came back about two hours later with the install complete but the fans now running full blast. That was not an encouraging sign.

However, that was not the biggest problem. The biggest problem was that the machine was almost unusable: besides sluggish animations, the mouse pointer skipped and the keyboard even stuttered keys, making entering passwords in GNOME a tedious and deliberate affair. This caused some, let's say, consternation on my part until it dawned on me I had not set this system up exactly like my full T2. Yes, it didn't have a discrete GPU, but it also was a direct install of Fedora Workstation rather than an install of Fedora Server that was turned into Workstation. This install was graphical turtles all the way down. That meant ... Wayland.

I opened up another VTY to tweak settings, because typing into gnome-terminal was painful and error-prone, and a quick ps did indeed demonstrate the machine was in Wayland mode, which is the present default for Fedora Workstation on a graphical boot. That explained it: my T2 uses Xorg because it has a text boot and I run startx manually. I changed /etc/gdm/custom.conf to disable Wayland and restarted, this time into Xorg, and while animations were still not smooth, they were much better than before. Best of all, the keyboard and touchpad now worked properly. If you don't have a GPU, and possibly even if you do, don't run Wayland on this machine.
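
For reference, the change is a two-line stanza in /etc/gdm/custom.conf (this is standard GDM configuration, nothing Blackbird-specific):

    [daemon]
    # force GDM to start Xorg sessions instead of Wayland
    WaylandEnable=false

Restart GDM, or just reboot, for it to take effect.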

Unfortunately, even with that sorted, I couldn't increase the screen resolution to 1920x1080 (it was stuck at 1024x768), and audio wasn't playing through HDMI. Those could be addressed later; meanwhile, I didn't want my new funbox to cook itself. Reported temperatures at idle looked like this:

Admittedly the location of the system is somewhat thermally constrained. The AV receiver doesn't run particularly hot and doesn't sit flush on top of the Blackbird's case, but mATX cases are cramped when they're loaded, and the top vent is under the receiver (the side vents are where the fan mount points are). Coming from a system like the Quad G5, where temperatures over 70°C can be a sign of impending cooling system failure, this seemed worrisome. I asked Tim Pearson at Raptor about this, and it turns out the POWER9 has a very wide temperature tolerance: 84°C as shown here falls well within its operating range, and the thermal cutoff is, in his words, "well north of 100°C." This is reassuring, but I would have preferred not to have a home theatre system that can also pop the popcorn while playing the movie, so I ordered a couple more fans and some splitters to take up the other two mount points ($22).

While waiting for those to arrive, the next order of business was the video, which was still stuck at 1024x768. Firefox from the Fedora repos worked fine for YouTube but only in 4:3.

After attempting to connect it directly to the DLP projector instead of through the AV receiver and getting nowhere, I started looking into what the maximum resolution of the AST2500 BMC actually is, and bumbled into this Raptor wiki entry about getting Xorg to display 1920x1080. Apparently you need to specify the settings manually for the time being, because there isn't upstream support for the IT66121FN HDMI transceiver. Once this is done, though, it works.
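
I won't duplicate the wiki entry here, but the general shape of the fix is an Xorg configuration drop-in that pins a 1080p mode on the BMC's output. Here's a sketch using the standard CEA 1080p60 modeline; treat the file name, identifiers and driver choice as illustrative and defer to the wiki for the exact settings:

    # e.g. /etc/X11/xorg.conf.d/10-ast.conf (names illustrative; see the wiki)
    Section "Monitor"
        Identifier "HDMI"
        # standard 1080p60 timing (148.5MHz pixel clock)
        Modeline "1920x1080" 148.50 1920 2008 2052 2200 1080 1084 1089 1125 +hsync +vsync
        Option "PreferredMode" "1920x1080"
    EndSection

    Section "Device"
        Identifier "AST"
        Driver "modesetting"
    EndSection

    Section "Screen"
        Identifier "Screen0"
        Device "AST"
        Monitor "HDMI"
    EndSection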

HD playback from YouTube and Vimeo seems similar to the T2 in Firefox, maybe an occasional keyframe seek or dip in frame rate because of the smaller number of threads, but throughput is more than enough to be serviceable.

However, I couldn't say the same for VLC, which seemed strangely fill-rate limited trying to play commercial optical media. Playing both Atomic Blonde from DVD and Terminator 2 from Blu-ray in full screen 1080p generated roughly the same percentage of dropped frames in VLC's own statistics (around 6-8%). Disabling or changing interlacing settings didn't make a difference; turning down postprocessing, explicitly disabling hardware acceleration or trying different software framebuffer options for the video didn't help either. When reduced to a half-size window, however, no frames were dropped from either disc with VLC's default settings, suggesting that pushing pixels, not decoding, was hobbling playback.

This may have something to do with the fact that software LLVMpipe rendering at this resolution is a cruel, cruel joke. Xonotic, which runs at a ripping pace on my T2 with the WX 7100 card, is hobbled to a pathetic 5-10fps in software even with all the settings dialed down:

I spent a level or two getting whacked by the bots because running around and firing was a stuttery affair. Don't expect to game super hard on this without a GPU, either, unless you're talking about software-rendered classics. Unfortunately, 1080p movie playback, at least in VLC, seems to have similar limitations. Although mplayer doesn't seem to have any problems with full-screen scaling (I used mplayer -fs -x 1920 -y 1080 -zoom -ao alsa -afm hwac3), you have to know which title you want, and it isn't as convenient or polished. I don't know why Firefox seemed to work okay and VLC didn't, but I can live with that, because streaming media will be this machine's primary task anyway.

Meanwhile, we still have the audio problem. My AV receiver does not have analogue 5.1 inputs, and what good is a home theatre without surround sound? The Blackbird also offers S/PDIF, and my AV receiver has a TOSLINK input for it, but PCM over S/PDIF only comes through in stereo. Tim suggested modifying /usr/bin/lvds.sh on the BMC side to enable S/PDIF audio over HDMI, and provided a prototype script to do so. I'll post a partially working version as a gist, but my projector occasionally came up with a black screen from it, and resolutions under 1920x1080 had a weird two extraneous pixels added, so I ended up reverting it and going back to TOSLINK. Per Tim, a reclocking step is needed, which hopefully will arrive in a future firmware release.

It turns out surround sound over S/PDIF is a perennial problem on Linux. The solution for me was creating a 5.1 lossy profile for libasound based on this blog entry for Debian Wheezy, which more or less "just worked" on Fedora, except that I had to restart the machine to get it to stick. In pavucontrol I made sure the S/PDIF profile was configured to stream AC-3 (Dolby Digital), DTS and MPEG, and having done so, VLC was able to play 5.1 surround from both Terminator 2 and Atomic Blonde with default settings. Even this didn't work absolutely flawlessly: if the audio stream was interrupted for some reason, the AV receiver went haywire and just played an annoying buzzing noise. But at least if you don't have analogue inputs on your receiver, you can still use a TOSLINK cable to get lossy 5.1.
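
For the curious, the trick (as I understand it) boils down to teaching the sound stack a mapping that routes 5.1 PCM through the ALSA a52 plugin, which encodes it to AC-3 on the fly before it hits the S/PDIF output. This is only a sketch of the kind of mapping involved, assuming the a52 encoder from alsa-plugins is installed; the mapping name and priority are illustrative, and the linked blog entry has the real thing:

    # added to the PulseAudio ALSA profile set for the card
    [Mapping a52-surround-51]
    device-strings = a52:%f
    channel-map = front-left,front-right,rear-left,rear-right,front-center,lfe
    priority = 40
    direction = output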

A number of people have requested some idea of the system's power usage, so here are some rough, unprofessionally obtained numbers. Remember, this is a low-end 4-core system with one RAM stick, no GPU, no PCIe cards and an SSD, so these numbers reflect close to the Blackbird's minimum power usage (your loadout will not use less, and may use more). The Kill-A-Watt measured just over 3 watts with the Blackbird on standby plugged into the wall; powering it on, BMC bring-up and IPL topped out at around 60-80W, booting Fedora peaked at 127W, and idling in GNOME measured about 65W (I told you that 650W PSU I bought was overkill). Stressing the system vainly trying to play Xonotic only drew around 105W. The highest power draw I ever saw on the Kill-A-Watt was 131W.

I installed the two new fans when they arrived, placing two fans (again, all Arctic F8 PWMs) on each of the two zones with splitters, for the full four this case will accommodate. With two fans on the CPU zone in the same environment, the cores now visibly cooled down at idle, which wasn't happening before. Interestingly, the board didn't seem to be using the case fan zone much, even though all four fans did indeed spool up at IPL as expected. After a few minutes just sitting at the GNOME desktop, the cores dropped to 58°C and even the CPU fans gradually spun down to minimum. This wasn't silent, to be sure, though with a movie playing you'd never notice it. That said, working the machine hard I could still get some 85°C-ish peaks out of it, but nothing close to the thermal cutoff Tim mentioned.
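
(If you want to watch the temperatures yourself, the POWER9's on-chip sensors are exposed through the kernel's ibmpowernv hwmon driver, so the usual lm_sensors tooling works:)

    # refresh the core temperature readout every two seconds
    watch -n 2 sensors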

In the five days I've had this so far, a couple of other trivial annoyances popped up, though these are possibly unique to my specific circumstances. The first is that resolution switching periodically seemed to unsettle the projector, requiring some HDMI hot-replugging, or once or twice even a full power down, to get it to see anything. This could well be the projector and may have absolutely nothing to do with the Blackbird, but it was still obnoxious and never happened before with the Blu-ray 3D player or the AV receiver's on-screen display. So far I can't discern an obvious pattern, and this may disappear as kernel AST2500 support improves. Also, I completely cut power at the surge protector switch to protect the equipment when turning off the home theatre, which means every time I fire it up and want to use the Blackbird I have to wait for the BMC to come back up and then go through the startup sequence before I can even boot Fedora (about two to three minutes to a login prompt). Yes, this happens with a T2 as well, but the T2 I'm typing this on is my daily driver, so it's always running anyway.

It's time to sum up, so let's answer the first of our two main questions: how cheap can you get a decent POWER9 system? I think this is a decent POWER9 system and should meet many basic use cases as configured, but I think I've also demonstrated that it's at or near the functional minimum, and owing to the cost distribution in its particular bill of materials, I don't think you can get much cheaper. Given that the board and CPU are about three-quarters of the total damages, there's not a whole lot left to economize on: you could cut the RAM to 8GB or get a smaller PSU, but you'd probably barely save $100, and even the SATA SSD, though not luxuriously large, wasn't all that expensive compared to a spinning disk.

In that case, let's answer the second, thornier question: how good will that low-cost system be? I'll be painfully frank and say I probably had unrealistic expectations when I chose to try to make this into an HTPC. Linux itself isn't a great HTPC choice, especially on a non-x86 platform. DVD playback on Linux is pretty much solved, but much commercial Blu-ray media is right out for lack of decryption keys, and because this machine isn't x86 or ARM, so is most DRMed content, which depends on closed-source black-box binaries. While 5.1 mostly works over digital, the most trouble-free way to connect it is analogue, and I know I'm not the only person for whom that's not an option with their existing setup. The slow bring-up, if you don't leave it on standby power all the time, isn't a dealbreaker but is definitely annoying (I think shortening the startup time should be looked at for future lower-end designs), and at least for the time being the lack of a GPU in this loadout limits the HD media options I can play comfortably. This box suffices fine for playing YouTube and Vimeo, which fortuitously will be the majority of its current tasks, and it will be a nice training-wheels system for my wife to experiment with Linux, work in LibreOffice and do other family computer tasks. If you choose something less fatty than GNOME you can probably wring a little more UI juice out of it, though in basic usage I didn't notice much difference from my regular T2. More than that isn't really in the cards, and at least as-is, I felt I had to compromise a bit too much to make it into a credible media system.

As a deskside developer box, simple server or buildbot, though, even this relatively austere loadout is acceptable. The lack of a GPU isn't a major problem for such pursuits, and 16GB of RAM is sufficient for many intermediate tasks. With 16 threads, and extrapolating from my dual-4 T2, which builds an optimized Firefox at -j24 in a little over half an hour, I can reasonably expect build times to be about double that, which is good enough for a sidecar. It would be more than enough to serve data and media, even if (especially since) it's not playing it. In fact, because it's a secondary machine for me, I can run it full tilt, since desktop responsiveness is comparatively unimportant. Plus, it's small, and I don't care if I botch the installed OS, so this will likely be my test-bed machine for other distributions and hopefully the BSDs. If you want one purely as a secondary system or utility machine, you're probably in the best position to make the most of this low-end config.
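
(If Firefox builds are the workload you have in mind, the thread count is just a mozconfig knob; a minimal sketch:)

    # mozconfig: cap make parallelism at the Blackbird's 16 threads
    mk_add_options MOZ_MAKE_FLAGS="-j16"
    ac_add_options --enable-optimize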

But it seems to me, from my cursory observations of people waiting for their Blackbirds, that many of them want this as a lower-cost way into the POWER9 ecosystem and to see how they like it as a primary system. In that case, let's just say it: you need to spend more than I've spent here. Based on this testing, I think that to get a good sense of what POWER9 is and what it can do, you really need eight cores (currently, add about $330) and a GPU (your choice, though I stick with the BTO options since they're likely to be better tested; that AMD WX 7100 will set you back around $500 right now). I don't think the 4-core has enough threads for heavier usage, and gaming, media and desktop choices expand dramatically with a discrete GPU rather than overworking poor old LLVMpipe. For that matter, 32GB of RAM wouldn't be a bad idea either. Given the thermal demands of such a larger spec, though (and given how constrained I found this machine to be with just 4 cores and no GPU), I would be leery of trying to get all that into an mATX case. I think you'd find it running louder and hotter than most people would like, and after my experiences so far, I wouldn't put a single-8 plus GPU in anything smaller than a regular ATX mini-tower.

With that done and dusted, now you're looking at a slightly larger footprint and a budget of about $3000 for what I would consider a system I could live with day-to-day. It won't be the same as my big T2, which because of its dual sockets has much more expansion capability (everything is NVMe and I still have room for the GPU, a RAID or SATA card, and FireWire), but I think the experience is comparable, and it's still less than half of what a similarly equipped full T2 will cost. Just keep in mind that if you find you like the POWER, and you might want a dual-18 or even a dual-22 someday, you won't get that on a Blackbird; you won't even get the fullest benefit of one of those chips. If you want dual GPUs, or lots of NVMe storage, forget it. Heck, if you just want a dual-8, forget it. What, you expected Raptor to cannibalize their own pro market? For someone like me, the big T2 would always have been the preferred option because it gives me the biggest bang and the most room to grow, but I've clearly got more money than sense, and I already knew I wanted something with that level of power and flexibility. Even if they'd sold the Blackbird back then, I'd still have bought the T2. But if there's no way you can spend that much today, even if you know you might in the future, then the Blackbird is what you should buy.

Don't get me wrong: even though it didn't turn out as I planned, overall I like the Blackbird a lot, and while it's not a revolutionary design, I think it's a strong, gutsy choice by Raptor to put their efforts into this sort of lower-end machine. It's clearly captured a lot of people's interest, and I think Raptor will sell quite a few of them, which is great for the ecosystem and for the chances of having robust libre options on the desktop. A rock-bottom configuration like this one can certainly cover many people's uses, especially as a secondary computer, but you'll need to be a little more generous if you want this as your OpenPOWER workstation. Budget in the 8-core and a GPU and do it right, and you'll end up with a better-priced option good enough to be your daily driver, running a performant architecture that's more open, less creepy, comparably specced and actually exists.

PowerPC to the people.

Comments

  1. I think a lot of your difficulty with trying to use this as a preprocessed (unencrypted) video box stems from the rest of the industry "cheating". Most integrated/discrete video cards have video playback hardware decoders for the video standards they expect to encounter, and they use an overlay mode to directly shove out pixels without regard to normal rendering or compositing. I imagine since this is primarily a framebuffer meant to get basic video out for platforms that don't need modern desktop video, it probably has none of those functions, and if it does, it probably can't use them due to lacking software support.

    Wayland on Fedora 30 operates similarly terribly on Dell servers: the GPU just isn't supported, and it topples the whole experience.

    I do want to see Raptor go even lower-end and smaller, maybe a BGA chip with SO-DIMMs and an integrated Radeon mobile chip to see if they can bring the price and size down further. It was also mentioned in an interview that they may be looking away from PCIe to other busses that make more sense, so it'll be interesting to see where those designs can take them.

    Nice overview of assembly and current struggles, thanks for it!

    Replies
    1. Thank you for the detailed review and pictures! I also went back and looked at your Talos review, and I'm guessing that one has the '3U' heatsink? It looks significantly taller and would not fit a slim mATX case if the 2U heatsink just clears. How do the temperatures compare between the two machines? I saw in the Phoronix pictures that it looks like the PSU is ducted to the CPU heatsink such that the PSU fan helps to cool the CPU as well. Do you think this idea would work with your SilverStone ML03B, or is the layout different?

      Do you think it would be easy to attach one of your 80mm fans directly to the CPU heatsink?

      Also, is there an easy way to tell what the CPU clock is doing and whether it's throttling down for power or temperature limits?

    2. Replying to both of you at once:

      Yes, I probably felt the absence of the GPU more than the absence of additional threads. It's reassuring to hear Wayland has similar problems even on x86. The integrated GPU choice I think would be the way to go, though Sforza was selected *because* it has so many PCIe lanes, so I don't know if they can get away from that entirely at least for this CPU generation.

      My T2 does indeed have 3U HSFs, not 2U heatsinks. Directing the CPU heatsink exhaust out through the PSU vent wouldn't be feasible in this particular case without substantial modification, but could work in the right one. But I don't think the HSFs would go well in an mATX enclosure even if they could be made to fit. The heatsink gets pretty hot but it might be possible to bolt another 80mm fan to it and run another splitter end to that. I'll consider this if I start running into issues.

      `lscpu` shows you the min and max, but `/proc/cpuinfo` will show you the current speed for each logical CPU. I couldn't tell if it was thermally correlated, though for the record I didn't see any signs that heat was causing the CPU to be clocked back, even if I personally didn't like the temperature.
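
      Something like this, for example (`ppc64_cpu` comes with powerpc-utils, which I believe Fedora installs by default):

        # per-thread current clock
        grep -i clock /proc/cpuinfo
        # summary of current frequencies, if powerpc-utils is installed
        ppc64_cpu --frequency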

  2. "open and fully auditable"
    "fully open-source"

    As the Jon Masters guy from ARM on Twitter says, it should be noted that the hardware is still fully closed. Only the firmware is open. So being fully open/auditable is not really true. Of course, there is no actual open hardware, so relatively speaking this is the most you can achieve, but it's better to keep the wording unmuddied.

    -----------------
    About the experience and problems you describe:

    Your bad experience with video playback and the desktop GUI is likely just the fault of the ASpeed thing. That's not a display adapter in any practical sense; I don't think anybody even expects it to play video, much less send audio to an HTPC (I wonder if you are the only user trying that, or have others done this?). Playing video through unaccelerated X11 just sucks. I was surprised you even seriously attempted that.

    Try to get some cheap old used Radeon card (RX 550, HD 7750, HD 350, HD 6450 or maybe even older - just check if it works on Power9 with open drivers, I'm no expert on that). That will give you proper accelerated output and the ability to use OpenGL rendering and scaling (so that you hopefully don't get tearing when playing back video), and hopefully the HDMI audio should work better too. It should change everything (well, except the super-long booting) when trying to use this for multimedia, and that is for, dunno, $20-40.

    Of course, with a Power CPU, you will also lack SIMD optimizations when decoding many formats, but that is easy to spot if it happens (100% CPU usage). I believe the 4-core CPU shouldn't be THAT weak if the 18-cores are competitive with Threadrippers etc.

    Oh and another tip.
    Put a fan directly on the heatsink. Thinly-spaced fins like this don't work as passive coolers or with mild system cooling; they would only work in servers with the usual ear-damagingly loud high-pressure airflow. If you use a heatsink like this in a normal PC, the mild pressure of quiet fans will never really manage to push air through those thin between-fin spaces.
    However, these fins do work acceptably if the fan is sitting directly on them, because the air is forced in and can't decide to go around or just stop at the barrier. That should improve the cooling and you should probably be alright with just one system fan beside that - or perhaps two.

    The heatsinks Raptor sells look pretty much terrible for desktop use, I guess; it's probably a good idea not to order them and just build an improvised fitting kit for a quality PC cooler, which can be had for the same or less.
    BTW, check the temperature on the VRM modules when the CPU is under full load. The board lacks a heatsink over them, so they need direct airflow or they will overheat. Given the price of the motherboard, you don't want the VRMs to croak and KO the whole thing (and possibly other components). Make sure they get enough direct airflow or that your CPU cooler blows on them.

    Replies
    1. I don't think you really read the review, but thank you for your suggestions.

    2. Just want to point out as well that the reason there is no AMD GPU is that it would mean requiring a bunch of signed, binary-only firmware, foregoing FSF RYF certification, removing the owner-controlled video output, and leaving the user with a partly useless brick if AMD decided to play dirty and disable the GPU in the future for any reason ("piracy", planned obsolescence, etc.).

      There's nearly two megabytes of firmware for the AMD GPUs required now. Take a look:

      https://git.kernel.org/pub/scm/linux/kernel/git/firmware/linux-firmware.git/commit/amdgpu?id=8bf607ca9bb42bc26bd4c8d574d88b6b6935c47d

      1.6MB of firmware that is signed for the express purpose, stated by an AMD employee, of limiting what you can do with the GPU. Why would Raptor want that soldered down on one of their platforms?

    3. @ClassicHasClass
      I did read the whole thing. As somebody with experience in multimedia playback and that stuff, I can safely bet that what you have is a "not having a GPU" problem.

      @Anonymous
      That's all nice and ideologically pure, but if you want to roll with an HTPC or use GUI desktop programs and your attempts fail because the Aspeed is a BMC and not a graphics card, then that is completely tangential.
      You could try some GeForce card from before they started requiring signed blobs, if it works well with Nouveau (I still suspect a Radeon would be better). What would *you* suggest to run the display, anyway?

      Don't go shooting the messenger with your FSF guns, please; I am just stating the facts: a GPU is needed there. It's a mistake when people try to think of the Aspeed BMC as a cheap integrated GPU, so please don't encourage that; people will only be disappointed. After all, this article kinda-sorta says a quad-core Power9 machine might be too weak... when it's really that it needs a usable graphics card.
      The aim with the Blackbird should always be some (even really cheap) GPU in the PCIe slot; nothing else makes sense, unless you use it as a headless server.
      BTW, about the embedded GPU and SO-DIMM slots - that won't happen, it's not space- or cost-effective. Just put a card in the slot; that will do the same job, for less pain and money. And it's modular/replaceable.

      "if AMD decided to play dirty and disable the GPU in the future for any reason ("piracy", planned obsolescence, etc.)"
      Let's just stay on Earth, please.

  3. "The aim with Blackbird should always be some (even really cheap) GPU in the PCIe slot"

    I fully agree.

    For context, I've had integrated GPUs effectively disabled in software on systems I've owned in the past (hi HP/NVIDIA!) so I won't buy anything that has an integrated proprietary GPU that I don't intend on immediately supplanting with a discrete GPU. It doesn't mean I run without a GPU, it means I understand the risks of doing so and make sure that I can replace the problematic parts without throwing out the entire machine.

    "Let's just stay on Earth, please."

    OK, fine. What happens when some kind of content ID built into the GPU firmware mis-flags an important project you're working on and shuts the GPU down? You really think something like this isn't coming eventually? Even a basic BluRay player does this already.

    Replies
    1. "OK, fine. What happens when some kind of content ID built into the GPU firmware mis-flags an important project you're working on and shuts the GPU down?"
      "Even a basic BluRay player does this already."

      Links for anybody reporting that happening, then, if you please.

    2. On the BluRay side, it's been known since 2012 at least:

      Example: http://watershade.net/wmcclain/BDP-103-faq.html#does-the-player-implement-cinavia-watermark-detection

      The discussions around limiting consumer GPUs aren't exactly public outside of certain industries, but you can see the effects they have had, with more and more restrictions, signing key requirements, etc. 4K was a large push into remote authentication for decryption; you can see that reflected in the more invasive PAVP and the addition of a new PSP to the latest AMD GPUs.

      As far as watermarking / content ID getting things wrong, well:
      https://www.bbc.com/news/technology-42580523

      It's either your machine or it isn't. If you put a discrete GPU in and it limits what you can do, you can remove it and replace it with an old one while leaning on legislators. If your integrated GPU stops working, you get to lose functionality (or at best chew up a PCIe slot you didn't think you'd have to) or stop using the machine for the purpose you bought it for.

    3. But where is your case of somebody's GPU getting shut down?
      All I see is generic DRM schemes that you try to spin into literal FUD (unironically, this is exactly the literal definition of the acronym).
      I know of no instance of DRM provisions in GPUs (like HDCP, which was also painted as an apocalypse around the time Vista was released... it was almost supposed to mean the end of everything back then) ever shutting anything down or even interfering with playback (or encoding, heh) of pirated video content.

      Do you know of it happening, or do you not?

      P.S. I forgot to point that out earlier, but while I get your hardline-FOSS stance and you can obviously decide for yourself to stick to those principles, are you aware that the blog author does actually use a dedicated graphics card (Radeon Pro WX something iirc) in his main workstation?

    4. It hasn't happened on the GPU side yet. When (not if, the way things are going) it does happen, there will be nothing you can do. It already happened to the set top players, so anyone affected just stopped using them. It's a bit harder to abandon computing as a whole.

      I also use a workstation GPU. I just understand that at any new GPU release I could get stuck on that existing generation of GPU (or older GPUs) effectively forever. How can one expect Raptor to integrate a non-free, DRM-infested device when said device can be easily plugged in -- and replaced, if necessary -- by the end user?

      The fundamental reason this hasn't been enforced yet for consumer GPUs is both because the technology wasn't practical until recently and because until the past few years there were other, smaller players making (rather poor) GPUs that could still play the latest Hollywood content -- trying to get them all to sign up at once to a mandatory DRM scheme wouldn't work, people would simply go to the alternatives. Now that there are only three GPU manufacturers that can handle 4k (Intel, AMD, NVIDIA), and now that studios are cracking down hard on 4k content and DRM, you can bet given past history there will be a strong push to close the playback hole once and for all.

      In the end it's up to you of course. Just don't be surprised if we end up with GPU-level content blocking and censorship in 10 years.

    5. Yeah, no. Intel hasn't put out a discrete GPU yet, and to use an NVIDIA GPU you have to go back to the 700 series, because the encrypted proprietary firmware hasn't been opened to where it can be used in a FOSS driver. AMD has the only game in town for FOSS that doesn't come attached to an Intel CPU.

      If you want to do something about it, follow this project (https://www.phoronix.com/scan.php?page=news_item&px=Libre-RISC-V-Performance-Target) (https://www.crowdsupply.com/libre-risc-v/m-class) and maybe help out with it when you can.

      Otherwise you might as well be a person typing things about chemtrails into Facebook while watching old X-Files episodes, because you can't vote with your money when there's nothing on the market to buy.

    6. "Otherwise you might as well be a person typing things about chemtrails into Facebook while watching old X-Files episodes, because you can't vote with your money when there's nothing on the market to buy."

      That wasn't very nice. I am in fact watching the RISC-V GPU project with some interest, but only time will tell if they require signed or closed firmware. If they don't, then that would be a game changer and hopefully set a new trend for GPUs.

      I personally agree with Raptor's decision not to force a proprietary GPU on their boards. I should be able to decide when and where I am willing to accept a proprietary solution and under what conditions, and conversely the GPU manufacturers should be aware that I can remove their solution and replace it with something else if I desire. I can't do that if the board vendor makes that decision for me.

      What is so important about an integrated GPU anyway versus an add-on GPU? You do realize that with POWER9's limited number of endpoints an integrated GPU would mean removing a PCIe slot from the Blackbird, right? I'm very glad Raptor didn't do that; only one slot would have made Blackbird useless for a number of applications I have in mind.

    7. It only applies to you if you let it. OpenPOWER isn't RISC-V, and it is essentially IBM and NXP's domain.

      IBM's Sforza package, the one used on the Blackbird and Talos II, seems to be the smallest one they (IBM) feel like making. That means anything smaller/cheaper/less power-hungry will have to come from elsewhere.

      NXP is mostly about embedded PPC, and much of their PPC is legacy. If you want to go smaller/cheaper than Blackbird, it's probably necessary to use NXP's chips. The management controller is kinda dual-purposed to be a video system in the Blackbird, and that's great for simple access. If you want to make something like a laptop or a tablet though, where direct user access is going to happen, you need graphics.

      The entire point of the RISC-V GPU project is to make something fully open, that is fully audit-able, with all the firmware and drivers out in the open. If it succeeds, it will be to the GPU what Raptor is trying to be for the motherboard. I would just appreciate it if you would stop preemptively attacking projects making incremental moves that I consider to be in a positive direction.

    8. "I would just appreciate it if you would stop preemptively attacking projects making incremental moves that I consider to be in a positive direction."

      Where did I attack the RISC-V GPU? It's interesting, albeit not very useful since it can't even do 1080p, let alone 4k (not sure why those design decisions were taken, but water under the bridge and all that). The GPU side is a mess; none of the current vendors will allow owner-controlled GPUs to be made because of DRM, so right now the best anyone can do is hold their nose and make sure they can replace the GPU in the hardware they have if for any reason that becomes necessary (again, speaking as someone that had a laptop with an integrated GPU effectively rendered useless because of NVIDIA).

      I don't believe a powerful, owner-controlled phone or tablet is possible with current silicon technology. My guess is a laptop would be next, but even that's going to be interesting to see if it actually happens.

      Believe me when I say I'm unhappy with the situation right now. It's costly, limiting, and quite frustrating, and at least in the US it might actually be easier to try to fix the DMCA, then let the markets sort the results out. And that's saying something about just how hard the GPU problem really is, considering the DMCA has proven nearly impossible to repeal.

      Am I the only one looking back on the late '90s and early '00s with rose colored glasses?

    9. "speaking as someone that had a laptop with integrated GPU effectively rendered useless because of NVIDIA"
      Curious, can you explain that or give links to the story you are talking about here? I wonder if that has something to do with bumpgate, which was a design/hardware fault (wear over time) issue, not evil Soros lizards disabling graphics cards because they hate Linux etc.

    10. "something to do with bumpgate".

      Nope. But I'm a bit tired of dealing with some of the children here that don't understand why one might want to use open source instead of a closed driver stack. Some people here have their minds made up, and that's fine from an ideological perspective -- doesn't mean I have to keep fighting with them when they obviously don't want to consider another view. :)

      Go on then, run away before it turns out you made stuff up, like with that "Blu-ray player is shutting down people's GPUs" bit before.
      Also ironic that you talk about children and having minds made up, with all those conspiracy theories :)

    12. I think this is where I step in with the moderator stick before this goes too far, please (not directed to anyone in particular).

  4. If I may add my unsolicited opinions:

    * A cheap old Radeon would be good, and it keeps support for things like the BSDs and Adélie, which are only going to run in BE (anything that needs amdgpu won't work).

    * Using a musl distro such as Adélie or Void, you could get a lot more out of an 8 GB setup. As Adélie lead, I'm running KDE Plasma 5 with Pidgin, 9 Konsoles (with unlimited backlog), Firefox with too many tabs open (72), Thunderbird, two Kate sessions with ~15 files open each, Quassel with ~60 active buffers, Qt Creator, Audacious, and Calligra Sheets. I'm using 5.7 GB RAM total of the 32 in my Talos. I think if anyone was trying to make the Blackbird a "budget" system, they should skimp on RAM and run a musl distro. Unlike x86 you really won't miss the binary compatibility a glibc distro will get you.

    * Commercial Blu-rays can be played back on POWER; I've done it on the Talos, you just have to know where to look. :) Of course, that's only in VLC, which seems unhappy in your setup (likely due to the AST).

    Replies
    1. Although these are good suggestions, going to 8GB isn't going to save you a great deal of money when you can get an ECC 16GB stick for "only" $120, and the Terminator 2 disc I referenced above was a commercially pressed disc.

  5. Still wondering what the best GPU for around $100 is; the prerequisite is 0% fan speed when idle. The RX 550 looks like a good choice, but do we have the aticonfig utility on ppc64el? I am afraid of suffering from 100% fan speed.

  6. To those of you who already got your Blackbird: when did you order it? I ordered mine on the Monday of the sale, and it's been crickets on the shipment front. It's slowly bugging me that I have no idea when it'll show up.

    Replies
    1. I feel your pain. I don't have mine yet either, and I ordered it on 2018-12-07. I wrote to their sales@raptorcs.com on 2019-06-11 (where their docs say to inquire about orders and shipping), and some system made 5 support tickets for me, 4 of which have been closed and one of which hasn't been updated yet but is still open. I'm really just hoping for a timeline.

    2. I ordered my unit on the Thanksgiving Black Friday, so yours shouldn't be too far behind mine.

  7. A bit of an update for those stuck in the waiting line: I called Raptor, and here's what they said. The original Blackbird batch they got had a factory defect, so they had to send it back for corrections. My order, placed November 26, 2018 at 7:17 pm, is supposed to ship either tomorrow or Monday. (Just word of mouth, no confirmation.)

    They claim to have gotten way more orders than expected. I recommended they post regular updates on their twitter along the lines of "Today we're expecting to process orders for people who ordered between Date X and Date Y." I told them I've been madly refreshing their twitter for information. The person on the phone was shocked and said that we (people who preordered) shouldn't have to do that and this is the first he's hearing about this kind of mad refresh...

    That's all I've got.

    PS: Sorry for spamming your comments with shipment info. I don't have/want a twitter account to post there.

    Replies
    1. No objection from me, I think it's very relevant. I'm glad to hear they got more orders than they expected. This is a "good problem" (tm).

    2. Well, mine finally shipped at 8pm Chicago time. Apparently they ship from Rockford, IL 61125, USA. I didn't get an email, but I checked back on my account and sure enough a tracking number was there. I got nervous when I didn't get an email by 4pm Eastern Daylight Time, so I called them and got a little more information out of them.

      First: it looks like they alternate days putting together 8- and 4-core bundles. Mine is replacing a junkyard home server that has a UPS backup, so I opted for the 4-core model. Today is actually an 8-core day, but they missed the cutoff for mine on Friday, so they did it this morning. Shipping notices (if you do get an email) don't go out until the evening, when the post office comes to pick up the boxes. Yes, that means you've gotta spam that refresh button on your account way into the evening.

      If you want to know where your order is, I highly suggest calling them. ABSOLUTELY DO NOT take voicemail for an answer. If nobody answers, wait 10 minutes and try again. Assume Eastern Daylight Time. Don't open a support ticket; nobody answers them.

  8. Hmm, given that the board has just the CPU on it, and the ASpeed BMC, Broadcom NIC, Lattice FPGA and Marvell SATA all seem like not-too-power-hungry chips, the idle power consumption in GNOME seems quite high. Unless the BD-ROM drive has a high constant load in its electronics, the CPU might be taking up to 30-40W at the desktop, which is a lot (an x86 desktop quad-core would typically be like 3-10W, I think, with at-the-wall power being like 25-35W without dedicated graphics). Or perhaps the WiFi bridge adds significant load?

    I wonder if the power could actually be improved by having GPU acceleration, because even if a low-power GPU eats 8-15W at idle, it should enable the CPU cores to drop into power-saving modes. If not, the Power9 chips (or the firmware/Linux?) seem to have some homework to do on reducing idle power consumption. It is possible that IBM itself doesn't care about idle/low-load power, given how they only really target servers, and voltage/frequency scaling and all those techniques are added complexity that isn't very useful or really expected in big iron servers.

    Replies
    1. The low-power GPUs I have lying around are a Radeon X300 and an X800. Neither needs a PSU power connection. While these graphics cards will allow 1080p24 anime to play nice and smooth, they actually add ~10W of power usage at idle. I get ~65W idle; with the graphics cards it idles at ~75W.

      Without the graphics card the video stutters. With it, perfectly fine.

      The video was x265-encoded... so decoding 1080p24 x265 is absolutely no sweat for the Blackbird 4-core bundle. I also used mpv on Fedora 30 MATE.

    2. That makes me think the single biggest performance sink before was the lack of a hardware overlay for the CPU-driven video. That YUV->RGB conversion is murderous to handle without help from a GPU.

  9. Apologies upfront as this might sound personal, but I would like to congratulate you on finding such a patient wife who is not bothered by the delays and the video quality issues.

