The Pironman 5 NAS case is the latest addition to Sunfounder’s Pironman series, this time designed specifically for the Raspberry Pi 5. Unlike the previous models, this version is built to turn your Pi into a compact yet capable NAS (Network Attached Storage) system.
What makes it stand out is its ability to house two drives, either 2.5-inch SSDs or full-size 3.5-inch HDDs. On top of that, the case comes with some great additional features like a built-in 2.5G network adapter, a large 90mm cooling fan and a front-facing OLED stats display.
It’s important to note that this is still a beta product. Sunfounder are in the internal testing stage, and the case hasn’t yet gone into full production. So while I’ll cover the full build and performance here, some things may change before the official launch later this year.
Here’s my video build and testing of the NAS, read on for the write-up;
Inside the box, the first thing you’ll find is the assembly instruction sheet.
Beneath that is another well-padded white box containing the aluminium enclosure, along with multiple accessory packs that hold all of the panels, components, and PCBs.
Here’s everything included in the kit:
Aluminium housing
Acrylic panels and the 90mm cooling fan
Universal 12V 4A power supply
Several adapter/control PCBs and heatsinks for the Pi
OLED display module, cables, screws, and mounting hardware
A set of basic assembly tools
As with other Sunfounder kits, the screws are clearly labelled, and the assembly process is made easy by their detailed, picture-guided instruction sheet.
Assembling the Pironman 5 NAS Case
Putting the Pironman 5 NAS case together is fairly straightforward. The acrylic panels even feature countersunk screw holes, giving the finished case a much cleaner look.
Here is the basic build process:
Add standoffs to the acrylic base panel, which will support the Pi assembly.
Mount the Raspberry Pi and attach the adapter boards, securing the stack with additional standoffs and screws.
Install the five supplied heatsinks on the Pi’s key components. The CPU heatsink is on the smaller side, but we’ll see later how well it performs.
Attach the main SATA and control HAT to the Pi.
Plug the OLED display into its dedicated 4-pin socket.
Connect the 90mm fan.
Mount the completed Pi assembly into the aluminium housing and add rubber feet to the base.
Secure the acrylic panels onto three sides of the case, and mount the fan to the top acrylic cover.
The drive bay section of the enclosure is cleverly designed to fit either two 3.5-inch drives (secured on both sides) or two 2.5-inch drives (secured on one side).
I personally would have liked to see an option for four 2.5-inch drives, but given that 3.5-inch drives remain popular for high-capacity NAS builds due to their cost-effectiveness, this design fills a gap in the market.
For my testing, I installed a pair of 1TB Crucial BX500 SSDs before closing up the case.
The finished enclosure measures 109mm x 109mm x 216mm, which is really compact considering it can house two full-size 3.5-inch drives.
Ports and I/O on the NAS
The Pironman 5 NAS case provides plenty of connectivity, expanding on the Pi’s basic IO through the included hat:
Raspberry Pi’s standard Gigabit Ethernet, 2x USB 3.0, and 2x USB 2.0 ports
A ribbon cable slot for Pi camera or display connectors
Two full-size HDMI ports from the adapter board
12V barrel jack input
A full GPIO pin header passthrough
A 2.5G Ethernet port, a standout feature for a Pi-based NAS
Operating System and Software Setup
Sunfounder recommends using Open Media Vault (OMV), but being a Raspberry Pi there are other options for operating systems if you’d like to use an alternative. The setup process is as follows:
Flash Raspberry Pi OS Lite to a microSD card and insert it into the case’s front slot.
Update the Pi and install OMV.
Install the Pironman script to control the OLED display.
Like other Pironman cases, this one also includes a web dashboard to monitor stats and adjust settings. The OLED display itself has multiple options, allowing you to choose a fixed readout or cycle between different stats.
To manage storage, log into the OMV web dashboard via the Pi’s IP address. From there, create a file system, a shared folder, and set up an SMB share to access the NAS from a Windows PC.
Performance Testing the Pironman 5 NAS Case
With everything configured, I ran a series of transfer tests to check the performance of the hat and the 2.5G network connection.
1GB file test → Writes: ~240MB/s | Reads: ~200MB/s
16GB file test → Writes: ~230MB/s | Reads: ~170MB/s
Doing a real-world file transfer of a 60GB video file to and from the NAS in Windows 11, I got:
60GB video file → Writes: ~260MB/s (peaking near 280MB/s) | Reads: ~260MB/s (very consistent)
These results are right in line with what you’d expect from a 2.5G network connection (2.5Gb/s works out to roughly 312MB/s before protocol overheads, so sustained transfers of around 260MB/s are close to the practical ceiling) and show no issues with bottlenecking on the single available PCIe lane on the Pi.
Cooling and Thermal Testing
The large 90mm fan keeps drive temperatures very low during heavy file transfers.
The Pi’s CPU, however, runs hotter. At idle, the CPU started at 48°C, climbing quickly to 70°C under load during a 20-minute stress test using CPU Burn. The small CPU heatsink is adequate for basic NAS use but not ideal for workloads like RAID parity or media encoding. A larger aftermarket heatsink would be recommended for those cases.
Fortunately, since this is still a beta kit, SunFounder will likely address cooling improvements in the final version.
The aluminium housing also helps dissipate heat, and airflow inside remains decent even with 3.5-inch drives installed.
As for noise levels, the 90mm fan is PWM controlled. At full load, it reached about 41dB (which is not particularly loud), but when running below 800RPM it’s essentially silent. With mechanical drives installed, you’d likely never hear the fan over the drive noise anyway.
Power Consumption
I tested power consumption using a wall adaptor and took readings both at idle and with the CPU fully loaded during the thermal test, while also copying a large video file to the NAS. For these two tests, I got:
Idle: ~8W
Full load: ~14W
These are impressively low numbers for a dual-drive NAS, thanks to the Pi’s efficiency. With mechanical HDDs, consumption would be slightly higher, but OMV allows you to set drive spin-down times to save even more power when idle.
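OMV exposes the spin-down time per disk in its web dashboard (in each disk’s power options), and the same thing can be done manually from the terminal. Here’s a minimal sketch using hdparm, assuming the drive shows up as /dev/sda and supports a standby timer:

# Spin the drive down after 10 minutes of inactivity (value is in 5-second units, so 120 = 600s)
sudo hdparm -S 120 /dev/sda
# Check the drive's current power state
sudo hdparm -C /dev/sda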
Final Thoughts on the Pironman 5 NAS Case
The Pironman 5 NAS case is another great addition to SunFounder’s lineup. It’s well-built, easy to assemble, and packed with useful features like the OLED display and 2.5G networking.
Since it’s still in development, SunFounder haven’t released official pricing yet. But if it lands in the $100–$120 range, I think it would be excellent value for a Pi-based NAS kit.
I’d personally love to see a version that supports four 2.5-inch drives, but the flexibility of using 3.5-inch HDDs is a big selling point that very few other Pi enclosures offer.
Overall, this is shaping up to be a compact, efficient, and capable NAS solution for the Raspberry Pi 5. Let me know what you think of it in the comments section below.
If you’re into homelabs or setting up your own personal cloud server, I’ve got something really interesting to share with you today. This is the new LCMD Microserver, and its optional add-on, the AI Pod, a compact computing module designed to supercharge the system’s performance for AI-related tasks.
Together, these two devices form a powerful, accessible homelab solution. They’re designed to help even less experienced users set up an advanced personal server quickly and easily.
With the pair, you can do things like:
Run Docker containers
Host media servers
Set up your own Git server
Build an AI-searchable photo and video library
Backup your data
Let’s dive in, unbox them, and see how they perform.
Here’s my video review of the LCMD Microserver and AI Pod, read on for the written review;
Where To Buy The LCMD Microserver & AI Pod
The LCMD Microserver and AI Pod are planned to be crowdfunded through Kickstarter, with their campaign starting soon. Check out the LCMD Prelaunch Page in the meantime.
Some of the above parts are affiliate links. By purchasing products through the above links, you’ll be supporting my projects, at no additional cost to you.
Unboxing the LCMD Microserver
Right from the start, LCMD’s packaging makes a strong impression. This is one of the coolest product packages and unboxing experiences I’ve seen.
Opening the magnetic flaps reveals an orange acrylic cover, with the LCMD Microserver sitting underneath. Alongside it are two accessory boxes, plus a front sleeve that likely holds documentation. Everything is neatly protected in high-density foam.
Inside the accessory boxes:
Box 1: Ethernet cable and international adaptor for the power supply
Box 2: 19V 120W power supply (barrel jack connector) and mains lead
The sleeve contains a set of sequenced instruction cards, a unique and intuitive QuickStart guide.
First Look – LCMD Microserver
The front and sides are clean with no ports or buttons. On the top we’ve got some ventilation holes on each of the corners. On the bottom we’ve got a large ventilation grill with some orange rubber feet.
On the back we have all the ports, which are clearly labelled:
Power input
HDMI 2.1 port
Two USB 3.2 Type-C ports
Three USB 3.0 ports
Audio jack
2.5G Ethernet port
Mode and power buttons
Above the ports are vents for the CPU heatsink; below is an open grill likely for the drive bays. Speaking of drive bays, internally, the Microserver can house two 2.5″ drives and two M.2 NVMe drives.
The build quality is excellent, with grey aluminium panels, solid construction, and an orange-and-white futuristic style. It’s also compact at 115mm × 115mm × 120mm.
Hardware Specs
Inside, the Microserver runs on an Intel Core i5-13500H. This is a 12-core CPU (4 performance cores up to 4.7GHz, 8 efficiency cores up to 3.5GHz) with Intel Iris Xe graphics (80 execution units at 1.45GHz).
For storage, it offers:
Four additional M.2 2280 slots – up to 64TB (with 16TB drives)
Two SATA 3.0 bays – up to 8TB each
Realistically you could install about 20TB without spending a fortune. It also has WiFi 6E via an Intel AX210 chip.
The enclosure slides out the back after removing two sets of four screws from the bottom, giving access to the internals for upgrades. My only complaint with this process is that the bottom uses Torx screws, while the retaining bracket uses Phillips head screws. It would be nice if they were consistent.
Software and Setup
Booting the Microserver was where I was most impressed. Many products have great hardware but fall short in software, but that’s not the case here.
After plugging it in, you install LCMD’s mobile app. The Microserver runs LCOS (which is Debian-based). Setup takes under 5 minutes, and the system automatically configures encrypted remote access (NAT traversal, like Tailscale), so no port forwarding or network tweaking is needed.
There’s also a desktop app for PC and Mac, which lets you:
Manage storage pools and permissions
Install over 1,000 available pre-configured apps (including Jellyfin, Nextcloud, Git servers)
Adjust settings
View and manage media
My first test was with Jellyfin, and this installed instantly without advanced setup.
You can also share files over your network with SMB or AFP like a traditional NAS.
The Microserver can act as a smart TV when connected to HDMI, with your phone as the remote. This allows you to watch media content, look at files and photos, and even use some of the AI features we’ll get to with the AI Pod.
The AI Pod, Adding an AI Brain To The LCMD Microserver
The AI Pod is an add-on for the Microserver that adds serious machine learning power. Unlike the Microserver, it can’t run on its own; it must be paired with the Microserver.
Packaging is again premium, with a foam pad and the device beneath. The AI Pod’s design feels like something from a sci-fi movie. I think it looks a bit like a building from the old PC game StarCraft – I might be showing my age with that comment.
Included with the AI Pod: instruction manual, 12V/96W power supply (my unit was white, but the production one will be black).
Ports and Hardware
Taking a look around the AI Pod, the front is a ventilation grill and you can actually see through to the back of the enclosure. The two sides are solid. The top has a fine ventilation mesh and the bottom has a removable cover plate.
Like with the Microserver, the ports are all at the back:
Power & mode buttons
USB 3.2 Type-C
Two USB 3.2
HDMI 2.1
2.5G Ethernet
10G Ethernet
Power input
The 10G port is due to the Nvidia Jetson AGX Orin development board inside – not a custom board.
Styling is designed to match the Microserver, so it’s the same grey aluminium housing with black accents. I think the pair look really cool together.
The AI Pod has a slightly larger footprint than the Microserver, measuring 130mm x 144mm, but it only stands 61mm high, so it can look a bit smaller.
The Jetson Orin board has an Arm CPU featuring 12 Cortex A78 cores that can run up to 2.2GHz and an NVIDIA GPU with 2048 Cuda cores, 64 Tensor cores and 64GB of LPDDR5 video memory. This adds an impressive 275 TOPS of processing power to the Microserver while only drawing around 30-60W.
This allows it to decode a single 8K30 stream of H.265 video or up to 22 streams at 1080p30, or to encode two simultaneous 4K60 H.265 streams. But beyond its powerful encoding and decoding abilities, its main function is to let you run your own AI models locally.
Internally it’s also got a wifi 6 module and a 1TB WD Black NVMe ssd as the OS drive. It’s got a second drive bay but at this stage I don’t think you really need to use it on the AI Pod.
AI Features in Action
Pairing the AI Pod is easy via a separate desktop app.
With this added, you now have access to some AI-based apps that take advantage of the Pod’s processing power, so you can do things like image and video generation locally.
I tried out image generation first. My first test prompt was “a man taking a photo of his dog with a mountain in the background”.
The first test prompt worked quite well. The first and last images aren’t quite what I was going for but the other two are quite good.
You can also do different image styles like generate cartoon images. The next prompt I tried was “cartoon style image of a man reviewing the latest iphone in his home office”.
Lastly, I tried a prompt that I thought would be a bit more difficult: “a girl being pulled by four dogs on her bicycle riding across a frozen lake”.
This even worked reasonably well, although this model clearly has a problem with numbers; none of these images have four dogs, and this one has two heads.
Next I tried video generation. This works in a similar way to image generation, but takes a bit longer as it has to generate multiple frames.
I tried to be quite specific with this prompt “a red sports car driving through a mountain road at sunset”.
This came out much better than I expected. It’s not amazingly realistic, but it is really impressive for a video generated in a minute or two on a local piece of hardware drawing under 100W. You can see this video and the second example video in my YouTube video at the beginning of the post.
The second prompt I tried was “a cat looking out the window while it is raining”.
This one also came out quite well.
One of the features that I really like is the added search functionality for your photo albums. You can give it very specific queries, and with my small sample library it’s very accurate.
This is a really useful feature if you’ve got thousands of photos and you don’t recall when you took a particular photo but you remember a small detail in it.
There are also a whole lot of other AI-based features that I could make a separate video on, but some of the useful ones are the ability to translate text locally in the browser, and you can even run a local language model similar to ChatGPT.
Pricing of the LCMD Microserver and AI Pod
Since this product is going to be crowdfunded through Kickstarter before becoming a retail product, there is a special “launch price” for each product and then an eventual MSRP.
Bottom Model Microserver (16GB/2TB): $769 launch, RRP $1400
Top Model Microserver (64GB/4TB): $1159 launch
AI Pod: $2489 launch, RRP $3600
Combo deals: $3228–$3618 launch pricing
My feeling on the Microserver pricing is that the launch prices are quite fair, but the RRP prices are higher than I’d expect, and the AI Pod seems quite expensive even at the launch price. They do both use more premium WD Black drives and Crucial RAM, so they haven’t cut corners with no-name components, but the AI Pod can’t really be priced near what a used A4000 or RTX 3090 GPU would cost, as those would be a similarly priced but more powerful option. If they do price it there, then the AI Pod will have to seriously prove its worth in software.
Privacy and Transparency
One of the concerns people are probably going to have with a product like this is that it runs a lot of preconfigured software with very little disclosure on what is and isn’t being shared with the manufacturer, and there is some good and bad news here.
The system is running a custom OS with quite a few proprietary layers, and while that makes it capable and offers a lot out of the box, at this stage there is limited transparency on exactly what it’s doing in the background.
I compiled a long list of questions and spent quite a lot of time talking with LCMD’s developers, trying to determine what information they have access to and how this information is used, and I’ll try to present a summary in the best way that I can. The full list of questions and answers is provided in the next section below.
The LCMD Microserver is designed with decentralisation and privacy as a core selling point, which means that users naturally want to bring as much in-house as possible. But on the flip side, LCMD are also trying to make the system as user friendly and easy to use as possible, which they have really nailed. The platform’s custom client app enables powerful features that just work; you don’t need to be a whizz with Docker containers or have any advanced networking knowledge to get the Microserver up and running. Most people aren’t able to set up remote access to their home networks in a secure way, so there is a balancing act here which they’re trying to navigate.
From what I’ve found through testing and in talking to the developers, the server prioritises direct peer-to-peer communication using asymmetric encryption, and the private keys are stored locally on the device. If both devices can be reached via hole punching then traffic flows directly peer to peer, this falls back to relayed traffic if that process fails – quite similar to Tailscale.
For more advanced users that want a bit more transparency and control, you can also set up your own NAT traversal through Headscale – which is one of the preconfigured apps it offers.
Local storage is also encrypted and the encryption keys are again stored locally on the device, so don’t forget your password or your data will be lost.
The server only requires internet access for the initial setup or app installation. You obviously also need an internet connection for remote access after this, but if you don’t need remote access and the initial setup is all you require, you can isolate the system from the internet and it’ll continue to function on your local network.
At the very extreme end, the Microserver’s BIOS is also unlocked, so if the software isn’t for you, then you can install your own OS on it if you’d like to. Although for this particular product, the software is a large part of the user experience – so it probably doesn’t make much sense to do so.
With all of that being said, the Microserver still uses proprietary software like its own VPN protocol, which, unlike open standards like WireGuard, hasn’t been independently audited. Its dependency on a central server is limited but not zero, and although you could potentially configure it yourself to remove that dependency, I wouldn’t say that the system is 100% trustworthy. It’s about as close as you can get, though, without them making the whole software package open source too.
Questions Asked To The Development Team & The Answers Received
Here’s a list of questions that I compiled and asked their Developement team along with the answers I received back. Due to the language barrier, some questions were reworded or repeated in different ways and translations were required in both directions, so I have summarised the questions and answers here while trying to maintain their original intention.
Q1 – What is LCMD’s networking approach, it appears to be similar to Tailscale?
Answer:
LCMD’s direct connection mode works in a similar peer-to-peer way. In most cases, all traffic is sent directly between devices. In extreme situations (like multiple layers of NAT4), they provide a completely free relay server with 8 Mbps bandwidth and full end-to-end encryption. Users can also self-deploy their own relay server, which maximizes decentralisation.
Q2 – How can users verify that direct connection traffic isn’t passing through LCMD’s servers?
Answer:
Two ways:
Technical validation – Advanced users can use tools such as Wireshark to check if the inbound/outbound IP corresponds to their home broadband’s public IP.
Business reality – Routing all traffic through their servers would be financially unsustainable given bandwidth costs.
They emphasised that direct connection is built on a decentralisation philosophy to improve performance and privacy at no cost.
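As a rough illustration of that first approach (my own sketch rather than LCMD’s documented procedure): note your home connection’s public IP, then capture the remote client’s traffic during a large transfer and check whether packets are flowing to that address or to some unknown relay IP.

# On a machine at home, note the connection's current public IP
curl -s https://ifconfig.me
# On the remote client, watch where the transfer traffic is actually being sent while copying a file
sudo tcpdump -n -i any -c 100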
Q3 – How are encryption keys handled, and can LCMD access these keys?
Answer:
There are two types of keys:
Initialization / license verification – Bound to the motherboard; requires a connection to the official server for registration during initial setup.
Device communication keys – Public/private key pair generated locally on the Microserver. LCMD’s official servers do not have access to your private key, meaning only authorised users can connect.
Q4 – Is the relay/STUN service open source or self-hostable?
Answer:
LCMD currently allows self-deployment of the relay server, but STUN is not fully open yet. They plan to follow up on this. As a temporary solution, the LCMD Official App Store has Headscale available for one-click installation so users can self-deploy.
Q5 – Why is there a desktop app instead of just a browser dashboard?
Answer:
While browser access is possible via a domain name, the desktop app creates a TUN virtual network interface, enabling advanced networking features.
Benefits include:
Secure remote access without needing to set up public IPs, NAT traversal, or firewall rules.
Works out-of-the-box for non-technical users.
For advanced users, browser access remains available, but the desktop client delivers a smoother and more secure experience.
They added that a client app can do things browsers can’t, like:
Auto-mounting a cloud drive to the system file browser.
Uploading to the cloud drive app even after the window is closed (client must still be running).
A no-GUI client is also available for more technical users.
Q6 – Does the device need the internet to operate?
Answer:
Internet is only required for initial setup and when installing apps from the LCMD store. In all other cases, it can operate fully in local-only (LAN) mode without internet.
Q7 – Can I access the device via IP address directly?
Answer:
Yes — they offer a LAN port forwarding tool for this purpose and can provide further documentation.
Q8 – What is LCMD’s overall decentralization approach?
Answer:
LCMD was designed with decentralization in mind so users can build and access their own cloud services without relying on centralised infrastructure.
If the network has a public IP or favorable NAT, connections go directly between devices.
Benefits: better privacy, full use of home bandwidth, and stable decentralised performance.
If direct connection is impossible, free relay services (end-to-end encrypted) are provided.
Advanced users are encouraged to self-host network penetration services. A “Network Penetration Service Setup Guide” will be released within a year.
Q9 – Is the device locked to LCMD’s OS?
Answer:
No — the BIOS is open, and users can install another operating system if desired.
So that’s the LCMD Microserver and AI Pod – a seriously powerful private cloud and AI edge device combo that’s really easy to set up and use.
Performance is great, the user interface is very well polished, and its AI features are genuinely helpful, not just party tricks. I do think LCMD could benefit from being very open about what’s running under the hood, especially for a product designed around privacy and user control, but the same could be said for most cloud or data storage companies.
Let me know what you think of the LCMD Microserver and AI Pod in the comments section below and if this is something you can see yourself using for your homelab or personal cloud.
Here’s a link to their prelaunch page where you can join the waiting list to be notified of their official launch on Kickstarter. As with the other crowd funded products that I’ve reviewed, keep in mind that crowdfunded projects carry some level of risk and that there is no guarantee that the final product will live up to the promises made in the campaign.
Today we’re diving into a NAS that’s built for much more than simple home backups, the CyberData CF1000 by Orico. This is the flagship model from their newly released CyberData NAS series, which includes eight different configurations featuring a variety of CPU, memory, and storage options, all packed into a stylish, unified design.
Orico is a brand that’s long been recognized for its USB storage solutions, enclosures, and docking stations, but the CyberData line marks a bold and serious step into the NAS (Network Attached Storage) market.
Here’s my video of the CF1000, read on for my written review;
Where To Buy The Orico CyberData CF1000
Orico’s CyberData range is currently crowdfunding through Kickstarter – Campaign Link
Powering the CF1000 is a 12-core, 16-thread Intel i5-1240P processor. It also has dual 10G Ethernet ports and support for up to 256TB of storage. It’s a NAS built with power users, small businesses, and creative professionals in mind.
Let’s start with the unboxing experience.
Unboxing the CF1000
The CF1000 is very well protected in its packaging. Although the box is large for a 10-bay NAS, that’s mainly due to the thick foam padding keeping everything secure.
Inside the box, you’ll find:
The CF1000 NAS wrapped in an antistatic bag
A smaller accessory box containing the power cord, screws, screwdriver, two network cables, and a manual
The CF1000 itself looks and feels fantastic. It’s sleek, minimalistic, and well built with a premium solid cast aluminum chassis.
A Closer Look at the Hardware
Front Panel
Behind a magnetic front cover are 10 hot-swappable 3.5” drive bays, arranged in two vertical columns (1–5 and 6–10). Under the drive bays are LED indicators, although these are a bit strangely numbered backwards from 10 to 1, and then we’ve got status LEDs for network and system health.
Internal Expansion
Internally, we’ve also got space for:
2 x M.2 NVMe SSDs (for caching or high-speed access)
1 x 128GB NVMe SSD (preinstalled for the OS)
These M.2 drives are easily installable through a bottom access hatch, which is a great feature on the CF1000.
Rear I/O Ports
Around the back, we’ve got;
2 x Cooling fans
2 x USB 2.0 ports
1 x DisplayPort and 1 x HDMI
2 x USB 3.2 (Type-A and Type-C)
2 x USB4.0 (40Gbps capable)
2 x 10Gb Ethernet ports with link aggregation support (up to 20Gbps)
Integrated PSU, so there is no additional power brick required.
CPU and RAM
The Intel i5-1240P is a 12-core mobile CPU from 2022, with 4 performance cores running at up to 4.4GHz and 8 efficiency cores running at up to 3.3GHz. It supports DDR5 memory, Thunderbolt 4, and has 20 PCIe lanes. It’s both powerful and power-efficient, ideal for a NAS setup that’s going to be running 24/7.
The CF1000 comes with 16GB DDR5 RAM (4800MHz) installed via a single stick, so you’ve got an easy upgrade path to 32GB and it’s expandable up to 64GB in total.
Storage Capabilities
The main storage feature of the CF1000 is obviously the 10 drive bays, but in addition to these we’ve got the two M.2 slots for additional storage or caching. This gives the NAS a total storage capacity of 256TB (10 x 24TB HDDs and 2 x 8TB SSDs) and it supports a range of hardware RAID options including RAID 50 and RAID 60.
10 x 3.5” HDD bays
2 x M.2 NVMe SSD slots
Built-in 128GB SSD for the OS
Up to 256TB of total storage capacity
Hardware RAID support, including RAID 50 and RAID 60
Setting Up The ORICO CF1000
For testing the CF1000, I installed:
10 x 4TB WD Red HDDs
2 x Orico D10 NVMe SSDs
Once powered and connected to the network, the system boots into CyberData OS, Orico’s custom NAS software. Setup is handled through their CyberData client app for PC or mobile.
It’s worth noting that you can’t access the NAS via a standard web dashboard, a feature common to other NAS brands. Hopefully, this will be added in future updates as it’s a bit inconvenient to have to install software to change a feature rather than just going to a web dashboard.
Storage Pool Setup
Once an admin account is created, the software detects the installed drives and allows you to configure your storage pool and RAID level. I opted for RAID 6, which provides 80% usable capacity (32TB) and tolerance for up to two simultaneous drive failures. It does reduce write speeds due to dual-parity overhead, which also gives us a chance to test how the CPU handles this load.
Setup was really quick, taking only a few minutes.
CyberData OS Interface & Basic Features
CyberData OS feels intuitive and easy to use with its Windows desktop-style interface.
Key features include:
User permission management
Samba, FTP, WebDAV, DLNA, and Time Machine support
AI-based photo sorting
Movie/TV show metadata fetching
Offline video transcoding
Under the system monitor, you can view:
CPU stats (12 cores, 16 threads)
RAM usage
Drive and CPU temps
Fan controls (auto, silent, standard, turbo)
The storage panel shows:
System storage pool with usable space (~27.5TB for my RAID 6 configuration)
Drive SMART details
Storage Performance As A NAS
To test how the CF1000 performs as a NAS, I created a second storage pool on the NVMe drives to test raw speed differences.
Note: While NVMe can be used for caching, ZFS already handles asynchronous writes effectively using RAM, so there’s not much benefit in small office or home scenarios. CyberData OS warns you about this when you work through the drive pool setup process.
Automated Benchmarks
I started out by running some automated tests using AJA System Test over the 10G network connection.
HDD Storage Pool:
64GB file: Consistent writes, reads around 830–840MB/s
NVMe Storage Pool:
1GB file: ~950MB/s writes, ~900–950MB/s reads
16GB file: ~940MB/s writes, ~920MB/s reads
64GB file: ~920MB/s writes, reads just under 900MB/s
Real-World Transfers
I then also ran some real-world tests transferring two large video files totalling around 90GB to and from each volume, again over the 10G network connection.
HDD volume: ~540MB/s write, ~1GB/s read
NVMe volume: >1GB/s write, ~1.1GB/s read
Thermal Performance
I decided to take a closer look at thermal performance since we were getting lower read and write speeds when using the RAID 6 main storage volume. These workloads caused high CPU temps (~90°C package, cores >70°C), which suggested we may be running into thermal throttling. The CPU usage hovered around 25–30%, meaning performance is limited by cooling rather than raw processing power.
Switching the fan mode to Turbo didn’t help much; the thermal limitations remained. This indicates that the heatsink is just not capable of getting the heat away from the CPU.
The good news is that Orico has since upgraded the cooling system, replacing the aluminum heatsink with a larger copper one and improved ducting. This should significantly reduce thermal issues.
Noise & Power Consumption
Noise levels (measured at 15cm):
Silent mode: ~39dB (very quiet)
Auto/Standard: ~47dB
Turbo: ~55dB
Even in Turbo, the fans aren’t overly loud. The sound of the 10 mechanical drives is more noticeable than the fans.
Power draw:
Idle (drives on): ~70W
Full load: ~120W
Idle (no drives): ~25W
These figures are very reasonable for a system of this scale. It’s great to see that the power draw is relatively low, since a NAS like this is expected to run 24/7 and overall power consumption can become significant over time.
Privacy and Software Flexibility
I know some of you are probably wondering about privacy, and people often have valid concerns when using products with a preloaded OS from the same company, but there is some good news here: this NAS fully supports local-only use. It doesn’t require an internet connection, cloud linkage, or an online account to set it up. You can even turn off its internet connection or isolate it on your network if needed. There is no mandatory cloud syncing or forced telemetry.
No internet or cloud connection required
Fully local setup possible
No mandatory telemetry or account login
That said, the software still needs some refinement. At the time of writing this review, there is:
No web dashboard
Limited documentation and no official community support
Missing enterprise features like iSCSI
However, Orico is making steady progress. Since May, there have already been three version updates, which have added:
Bug fixes and translation improvements
Virtual machine support
One-click Docker Compose
Preconfigured AI models (e.g., DeepSeek)
Also, the NAS is not OS-locked—you can install alternatives like Unraid or TrueNAS. I found Unraid works better out of the box, as it includes drivers for the 10G NICs.
Final Thoughts On The ORICO CF1000
If you only need basic backups or 1080p streaming, the CF1000 may be overkill; it’s just one product in the new CyberData range, so you could consider one of Orico’s lower-end models instead.
The CF1000 is well suited and worth taking a look at if you:
Work with 4K video
Run multiple services or containers
Need lots of fast, redundant storage
It’s also well-designed, powerful, and looks fantastic too.
Currently, the CF1000 is only available through Kickstarter, with the campaign running until August 7th. Here’s a link to it if you would like to get your own CyberData NAS.
One Final Note on Crowdfunding
As always with crowdfunded products, there’s a degree of risk. Orico is a well-established company, and I’ve tested a fully working pre-production unit. But the final product may still undergo changes. They’ve clearly invested a lot into development already, and it’s usable as-is, but it’s important to approach any crowdfunded product with realistic expectations.
Let me know in the comments section below what you think of the CF1000 or Orico’s broader CyberData NAS range.
Over the past few years, I’ve built several Raspberry Pi-based NAS (Network Attached Storage) devices. These include a dual-drive setup using a Pi 4, a budget-friendly Pi Zero NAS for under $35, and more recently, an all-SSD NAS running on a Raspberry Pi 5. While each project had its advantages, today’s build takes things up a notch — we’re going for a more practical, fully-featured 4-bay NAS that resembles a traditional commercial unit.
Here’s my video of the build, read on for the write-up;
For this project, I’m again using the Raspberry Pi 5, making full use of its PCIe port by attaching the Radxa Penta SATA Hat, which provides four SATA ports. Technically, the hat includes a fifth port (hence the penta SATA name), but it uses a different connector and is inconveniently positioned, so I’m sticking with four.
For storage, I’m using four 4TB WD Red NAS drives, providing a good balance of capacity and reliability.
Because 3.5″ drives are too bulky to plug directly into the Radxa hat, I’m using SATA extension cables. The particular ones I’ve chosen have mounting holes, allowing me to design a custom bracket to align them properly with the drive trays.
To complete the setup, I’m using:
A Pi 5 active cooler for CPU thermal management
A microSD card to run the operating system
A 12V 40W power adapter to power the NAS
A slim 12V 80mm fan to cool the drives and internal components
Designing the NAS Enclosure
I designed the custom 3.5″ Pi NAS enclosure in Fusion 360.
The design features:
Individual drive trays with pull-out levers for easy access
A mounting bracket for the SATA extension connectors so that drives can slide in and plug directly.
The Pi and Radxa stack behind the drives, as we don’t need HDMI or USB-C access
A barrel jack extension for clean power supply routing to the Radxa hat
An 80mm fan mount above the Pi to draw air through front vents and exhaust it at the back
A vented fan guard to prevent cables from catching in the fan blades
All of the components are enclosed in a housing that closely resembles a traditional 4-bay NAS.
To bring the 3D model to life, I used the Bambu Lab P1S Combo, one of Bambu Lab’s mid-range CoreXY printers, with:
High-speed enclosed printing
Multi-material support through the AMS (Automatic Material System)
Reliable out-of-the-box performance which is perfect for a functional project like this
Handling the Large Print Size
One challenge with the design is its length. The full enclosure is 275.5mm long, while the P1S has a 256mm max build volume along each axis. To work around this, I split the enclosure along a diagonal. This hides the seam as part of a design accent and, as a bonus, eliminates the need for print supports.
The enclosure was then exported and sliced across six build plates using Bambu Studio, and printed in:
Black PLA Basic for the enclosure
White PLA Basic for the front panel and tray lever text
I also used all default settings/presets for the textured PEI build plate, 0.2mm standard print profile and PLA Basic.
The entire enclosure uses nearly 1kg of filament and takes just under 24 hours to print. Because the text is only a couple of layers deep, we only have 5g of filament waste for the two-color prints on the front panel.
Print Results & Design Adjustments
The Bambu P1S produced high-quality parts straight off the printer. There was no warping or stringing, and the parts had accurate tolerances. This was especially impressive since it was the printer’s first large print out of the box with just basic setup and auto-calibration.
I did need to reprint the enclosure halves to add clearance for the drive tray guides and mounting holes for the Pi, which I had forgotten about. Both were small fixes, but worth mentioning.
Assembling the 3.5″ Pi NAS Enclosure
Adding The Brass Inserts
The 3D printed components are held together with M3 screws into brass inserts, so we need to get those installed using a soldering iron;
Four go into the back half of the enclosure for joining the two parts together
Four go into the SATA cable holder to mount it to the case
Eight more go into the cable holder to secure the SATA connectors to the holder
One for each drive tray to mount the tray lever with a 3D printed washer
Installing the SATA Cable Connectors On The Bracket
Each SATA connector is fastened with two M3x8mm button head screws. I added the two brass inserts and installed the connector on the end first, then used a long hex key to secure the screws through the holes made for the subsequent inserts; this makes it easier to get to each set of screws, since the assembly is quite compact. This process was repeated for all four connectors.
Assembling the 3.5″ Drive Trays
Each 3.5″ HDD is mounted using four countersunk drive screws. The tray lever is then attached with an M3x8mm screw and a 3D printed washer, allowing it to pivot between the pulled out and stowed positions. This process is repeated for all four trays.
Installing the Fan & Power Jack
The 80x10mm 12V slim fan is mounted to the rear of the case using four M3x16mm screws. Make sure that it is aligned to push air out of the enclosure. I oriented the vent guard slots horizontally to minimize cable interference.
I also installed the barrel jack extension for the Radxa hat’s power input.
Pi Stack Assembly
Next we can assemble the Pi stack to install it into the enclosure. OMV (Open Media Vault) is the software package that we’re going to be running and that requires Raspberry Pi OS Lite to be flashed onto the microSD card – so get that done before installing it.
To assemble the Pi stack, we need to;
Plug the prepared microSD card into the Pi’s card reader.
Install the active cooler on the Pi. You’ll need to remove three heatsink fins to clear the Radxa hat’s barrel jack. This is a bit of a design flaw with the hat.
Install the included standoffs from the Radxa kit onto the Pi with the threads facing upwards
Connect the PCIe ribbon cable to the Pi.
Mount the Radxa hat onto the Pi’s GPIO header and secure it.
Plug in the Molex fan power cable before installing the stack, as there’s not enough clearance to plug it in once installed.
Secure the Pi stack in the case with four included M2.5 screws
Connect the fan and power cables
Next we need to install the SATA power cable assembly above the Pi. Start by plugging each of the connectors into the Radxa Penta SATA hat. Then align the bracket with the holes in the back enclosure half. Make sure the SATA cable holder is installed with the data connectors on the bottom — I initially installed it upside down by mistake. Secure it with another four M3x8mm button head screws.
Do one more check to make sure that there isn’t too much pressure on any of the SATA connectors going into the Radxa hat and also check that the fan is still able to rotate freely and doesn’t have any cables caught up in it.
Final 3.5″ Pi NAS Enclosure Assembly
Slide the two enclosure halves together and secure them with four M3x8mm button head screws. You can also use black screws for a cleaner finish if you’d prefer.
I finished it off with some small rubber feet on the bottom of the case for vibration isolation, and the NAS is ready to install the drives. These each slide into place until you feel them plug into the connectors at the back.
Booting Up the NAS & Installing OMV
To boot up the NAS, we first need to plug in an Ethernet cable and then the 12V power supply.
Give the Pi a few minutes to boot up and then find its IP address through your router’s DHCP table or using a utility like Angry IP Scanner.
Use SSH to access the Pi and then update the Pi and install OpenMediaVault (OMV) with the following commands:
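(The script URL below is the commonly used OMV-Plugin-Developers installer; check the OpenMediaVault documentation in case it has moved.)

ssh <user>@<pi-ip-address>
sudo apt update && sudo apt full-upgrade -y
wget -O - https://github.com/OpenMediaVault-Plugin-Developers/installScript/raw/master/install | sudo bash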
This script takes about 5 minutes and you’ll need to reboot your Pi when you’re done.
Enable The PCIe Port
Before drives will show up, we need to enable the PCIe port on the Pi. Add the following lines to the Pi’s /boot/firmware/config.txt:
# Enable the external PCIe connector used by the Radxa hat
dtparam=pciex1
# Run the link at PCIe Gen 3 speeds (it defaults to Gen 2)
dtparam=pciex1_gen=3
And again reboot the Pi.
Once the Pi has rebooted, the drives should show up and you can confirm this by entering:
lsblk
You should see something like (one for each drive):
sda
sdb
sdc
sdd
Configuring OMV
Once we’ve got OMV installed and the PCIe port enabled, we can acess the OMV web dashboard by entering the Pi’s IP address in our browser. The default login is:
Username: admin
Password: openmediavault
Be sure to change these credentials after logging in.
I’m not going to go into too much detail on setting up OMV since there are loads of guides available already. Essentially I’ve followed the following steps:
Set up the drives in a RAID 5 configuration, providing 12TB of usable space with redundancy.
OMV 7 on a Pi doesn’t allow you to create a RAID array, you’ll need to do this through the terminal.
Create a Storage Volume
Created a Shared Folder on the Storage Volume
Create a User Account with permissions to access the Shared Folder
Enable the SMB service
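For the RAID step, here’s a minimal sketch of creating the array from the terminal with mdadm, assuming the four drives show up as sda to sdd as above (double-check the device names with lsblk first, because this wipes the drives):

sudo apt install -y mdadm
sudo mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sda /dev/sdb /dev/sdc /dev/sdd
cat /proc/mdstat    # watch the array sync progress

Once /dev/md0 exists, it shows up in OMV and the Storage Volume, Shared Folder and SMB share can all be created from the dashboard as usual.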
Testing The 3.5″ Pi NAS
Transfer Speeds
To test the Pi NAS’ transfer speeds, I’ve mapped the network drive to my Windows 11 PC. I then tried copying a large 30GB video file across to the NAS. I got an average write speed of about 110MB/s with some short dips along the way. This is around 900Mb/s, so we’re likely saturating the gigabit Ethernet port on the Pi.
I then tried copying the same file from the NAS back to the PC. This is a bit faster and more consistent, I got an average speed of 113MB/s.
I then ran an automated test using a 1GB file size and got similar results again: writes were around 110MB/s and reads around 110MB/s.
Like with my SSD Pi NAS, because it looked like we were saturating the Ethernet port, I then tried using a 2.5G USB Ethernet adaptor plugged into one of the Pi’s USB 3 ports.
This improved writing to the NAS to an average of around 200MB/s, again with a few dips, and reading from the NAS I got a faster 250MB/s. So writing to the NAS is now likely being bottlenecked by the software RAID parity calculations being done on the Pi’s CPU.
This makes the 2.5G network adaptor an easy and worthwhile upgrade for less than $20. It makes a big difference to the NAS’ performance, especially when large amounts of data are being transferred.
Power Consumption
I used an AC power meter to measure the NAS’ power consumption under a full writing load and at idle.
Idle: ~18W
Under Load: ~30W
This is higher than my SSD Pi NAS (~9–12W), but is reasonable for a NAS with four large mechanical drives. For comparison, my Asustor NAS idles at around 18W with the drives spun down, so this NAS does great with them still running.
Thermals and Noise
Thermally, the ventilation ports on the front and the 80mm fan at the back do a great job at keeping the NAS cool, even under a full load.
The only real negative for this build is that it is quite noisy. With the 80mm fan running, we get a sound level of about 54dB at 20cm.
Final Thoughts on my 3.5″ Pi NAS Build
That wraps up the build of my 4-bay 3.5″ Raspberry Pi 5 NAS. It offers solid performance, a functional and aesthetic 3D printed design, and the flexibility to use OMV or another NAS OS for your home NAS needs.
I’ve uploaded the 3D print files to MakerWorld. If you’ve got a Bambu Lab A1, P1S, or X1C, you can use my preconfigured print profiles to start printing directly from the Bambu Handy app. If not, download the files and slice them in your own slicer.
If you’re considering getting a 3D printer, the Bambu Lab P1S is a fantastic option. It’s fast, supports multi-material printing, and its enclosed design handles a wide range of materials. It’s perfect for makers and you won’t outgrow its capabilities any time soon.
Let me know what you think of my Pi NAS build in the comments section below.
SunFounder have returned with the latest iteration of their Pironman case. This time, it’s called the Pironman 5 Max, built specifically for the Raspberry Pi 5. This case brings a host of upgrades, including dual NVMe support, a sleek black aluminium body, and tinted acrylic panels.
At $95 for the standard kit, it’s definitely on the higher end for Raspberry Pi enclosures, but it makes up for that with a range of inclusions and features. Most notably, the ability to run a Hailo-8L AI accelerator alongside an NVMe SSD. That makes it ideal for AI applications like onboard voice recognition, object detection, and real-time pose estimation.
Let’s walk through the case design, assembly, features, and performance testing to see how it holds up.
Here’s my video review of the case, read on for the write-up;
The Pironman 5 Max arrives in a clean, white branded box. Inside, you’ll find the aluminium shell, fans, cooler, expansion boards, and mounting hardware, everything you need to get up and running.
This is the third-generation Pironman case, and visually, it carries forward the design of its predecessor while swapping the silver and clear acrylic for a more refined black aluminium and tinted acrylic look. You’ll also notice upgraded features like dual M.2 NVMe ports, programmable RGB lighting, and a tap-to-wake OLED stats display.
Assembling The Pironman 5 Max Case
Like its predecessor, the Pironman 5 Max case is quite complex and requires some effort to assemble. It’s not difficult, thanks to the well-illustrated instruction sheet and clearly labelled screws, but it’s more involved than your average snap-together Pi case.
Here’s a quick overview and some photos of the assembly process;
Install Standoffs – Attach various lengths to one half of the enclosure.
Prepare the Pi – Plug in the carrier boards and mount the Pi 5 into the case.
Install the Cooler – Apply thermal pads to the CPU, WiFi module, and power circuitry. The included Ice Cube cooler uses the same mounting holes as the Pi 5 Active Cooler and similar “press into place” spring mounts.
Add the NVMe Adapter – The adaptor supports 2230 to 2280 drives. I installed the Hailo-8L AI accelerator in the top port and a Lexar 2280 NVMe SSD in the bottom (there is functionally no difference between ports)
Attach the Fans – These mount on the back panel.
Optional Camera Support – If you’re using a camera module, this is the time to install it (I decided to do this later as I wanted to test the case without it first).
Add the OLED & GPIO Expansion Board – This board includes the GPIO extension and RGB lighting. The display gets stuck onto the front panel.
Assemble the Shell – Screw the aluminium halves together.
Finish with Acrylic Panels – The dark tinted panels give it a clean, high-tech look. The power button gets added to the front panel before installing it.
Install the Rubber Feet to finish it off.
The total assembly time was around 25 minutes, and everything went together quite smoothly. There are also spare screws and cables included, which is a helpful touch.
First Boot & Software Setup
With Raspberry Pi OS pre-installed on the NVMe SSD, we can move straight on to booting it up. The OLED display and RGB lighting won’t function right away; they require some additional setup and software.
Fortunately, setup is easy. A quick config change and GitHub install later, and everything was up and running. I followed the instructions from SunFounder’s wiki and had no issues. They’ve also confirmed compatibility with other operating systems like Home Assistant, Ubuntu, and Kali Linux.
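For context, the process follows the same pattern as SunFounder’s other Pironman cases: enable the I2C interface for the OLED, clone their GitHub repository and run the installer. A rough sketch from memory (the repository and installer names below are my assumptions; use the exact commands from SunFounder’s wiki):

sudo raspi-config nonint do_i2c 0    # enable I2C (0 = enable)
git clone https://github.com/sunfounder/pironman5.git    # repository name assumed
cd pironman5
sudo python3 install.py    # installer name assumed; check the wiki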
After rebooting, the OLED display shows:
CPU temperature and usage
RAM and storage capacity
IP address
The display goes to sleep after 10 seconds by default, but you can wake it with a tap or adjust the timeout in the config files.
The Pironman 5 Max Web Dashboard
One of my favourite features from the previous Pironman case was the web-based dashboard, and I’m happy to report that they’ve retained it. You can access it via your Pi’s IP address and port 34001.
From here, you can:
View system stats and logs
Graph CPU usage and temps (as well as a wide range of other metrics)
Customise the RGB lighting, including style, colour and animation speed
Control fan behaviour with presets like Quiet, Balanced, and Performance
The new PWM fans are a big step up. They can now be set to come on at different temperatures, unlike the previous version’s fans, which were either always on or off.
Cooling Performance
To test the cooling performance of the case, I set the fans to Always On and ran a 30-minute CPU stress test using CPU Burn.
Idle temperature: 35°C
After 30 minutes under full load: 46°C
Temperature delta: 11°C
So thermal performance is pretty good, leaving a lot of headroom for overclocking.
In comparison, these are the temperatures recorded on the same setup (without the Hailo AI module) in the previous generation case;
Idle temperature: 36°C
After 30 minutes under full load: 53°C
Temperature delta: 17°C
These results aren’t bad but you’d expect better from a case with three 40mm fans cooling a single Pi 5. I previously attributed this to restricted airflow from overly fine dust filters and inadequate inlets.
The Pironman 5 Max fixes this with front air inlets cut into the “Pironman” logo and filter-free exhaust fans, significantly improving airflow.
Fan Noise on the Pironman 5 Max
During the thermal stress test, I also measured the sound levels produced by the fans:
Quiet mode with fans turned off (idle): 29–33dB – practically silent
Always On (full load): 47dB – noticeably louder, potentially distracting if it’s close to you
So the added PWM fan control makes a huge difference for balancing performance and noise.
NVMe & Performance
I then wanted to test the performance of the dual NVMe adaptor. I did this by running James Chambers’ Pi Benchmark script three times on the Lexar NVMe SSD. These were the results:
Scores: 36,973, 36,947, 38,078
Average: 37,333
This aligns with expectations for a PCIe Gen 2 single-lane interface. You can boost it by switching to Gen 3 in the Pi’s config file.
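If you want to reproduce this, here’s a quick sketch (the benchmark script URL is from James Chambers’ PiBenchmarks project as I recall it, so verify it on his site before piping anything to bash):

# Run the storage benchmark (repeat a few times and average the scores)
curl -sL https://raw.githubusercontent.com/TheRemote/PiBenchmarks/master/Storage.sh | sudo bash
# To switch the PCIe link to Gen 3, add this line to /boot/firmware/config.txt and reboot
dtparam=pciex1_gen=3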
Hailo AI Accelerator
Next, I tested the Hailo AI accelerator with an added Raspberry Pi camera and Hailo’s pretrained models from their Developer Zone. The performance was quite impressive:
Pose Estimation: 30fps, with CPU usage under 15%
Object Detection: Also ran at 30fps with low CPU usage
Person & Face Tracking: Handled multiple subjects in frame with ease
So using the Hailo AI module with a Raspberry Pi 5 significantly boosts performance for object recognition and pose estimation, enabling real-time inference with low CPU load and efficient power usage. It allows for smooth, high-speed AI processing directly on the device, ideal for edge applications without needing cloud resources.
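For anyone wanting to try the same demos, here’s a rough sketch of the steps I followed (package, repository and script names are from Hailo’s Raspberry Pi 5 examples as I recall them; follow the instructions in Hailo’s Developer Zone for the current process):

sudo apt install -y hailo-all    # Hailo driver, firmware and runtime packages
git clone https://github.com/hailo-ai/hailo-rpi5-examples.git
cd hailo-rpi5-examples
./install.sh && source setup_env.sh    # set up the Python environment (script names assumed)
python basic_pipelines/detection.py --input rpi    # object detection using the Pi camera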
Final Thoughts on the Pironman 5 Case
The Pironman 5 Max is a thoughtfully designed case with a premium look and a ton of functionality:
Dual NVMe slots for expanded storage or device support
Rear-only cable management for a cleaner setup
Full-size HDMI ports, avoiding the inconvenience of micro HDMI
Vast improvements to cooling and airflow
Great AI accelerator support
OLED display and RGB lighting with web-based customisation and controls
The case has the same footprint as its predecessor, 112mm x 117mm x 79mm, but packs in even more functionality.
The only real area for improvement would be more refined PWM control, allowing the fans to ramp smoothly with temperature rather than switching at fixed thresholds.
Despite the $95 price tag, the Pironman 5 Max offers great value considering it includes the enclosure, active cooling, dual NVMe support, OLED screen, RGB fans, and the necessary expansion boards. It’s one of the most complete cases currently available for the Raspberry Pi 5.
What do you think of the Pironman 5 Max? Let me know in the comments section below.
The GMKtec NucBox K10 is a mini PC that packs a serious punch. Featuring a 14-core Intel Core i9 processor, impressive connectivity options, and upgradeable RAM and storage, it’s a compelling option for those looking to add a capable system to their homelab, or even as a quiet and powerful workstation.
Here’s my video review of the NucBox K10, read on for the written review;
Some of the above parts are affiliate links. By purchasing products through the above links, you’ll be supporting my projects, at no additional cost to you.
What’s in the Box?
In the box, first up we’ve got the NucBox K10 and underneath it is a sleeve with a user manual. Two accessory boxes are included, one with an HDMI cable and a power cable, and another with a 120W power brick and two WiFi antennas.
Design and Dimensions
Physically, the NucBox K10 is on the larger side for a mini PC. It measures 178mm x 176mm x 40mm and weighs just over 1kg. The build feels solid, and it has a functional design that accommodates powerful hardware while maintaining a relatively small footprint.
Ports and Connectivity Features
The NucBox K10 is well-equipped when it comes to connectivity. On the front panel, you’ll find:
1x 3.5mm audio jack
2x USB 3.2 Type-A ports
1x USB Type-C port with DisplayPort support
2x USB 2.0 ports
Power button and power indicator LED
On the rear panel, the system offers even more:
2x USB 3.2 Type-A ports
2x USB 2.0 Type-A ports
2x HDMI 2.0 ports
1x DisplayPort 1.4 (full-size)
1x 2.5G Ethernet port
1x RS-232 serial port
1x DC power input
2x WiFi antenna connectors
While the range of ports is quite good, it would have been great to see an additional USB Type-C port, or even USB 4 support, to make it more future-proof.
For wireless connectivity, the system includes WiFi 6 and Bluetooth 5.2, ensuring strong performance with modern devices and networks.
Ease of Access & Upgradeability
One of the standout features of the NucBox K10 is its tool-less access panel on the bottom. This provides direct access to:
2x SODIMM RAM slots
3x M.2 2280 NVMe SSD slots
This design makes it incredibly easy to upgrade the RAM and storage without needing tools. The unit comes with 32GB of DDR5 RAM (two 16GB sticks in dual-channel configuration) running at 5200MHz. This is the CPU’s maximum supported speed, even though the supplied ADATA modules are rated for 5600MHz.
For storage, one of the three M.2 slots is populated with a 1TB Crucial P3 Plus NVMe SSD. It’s not common to see three M.2 slots in a mini PC. Two of these ports support PCIe Gen 4 x4 and one supports PCIe Gen 3 x3. The SSD also lacks a thermal pad, which could have helped with heat dissipation given its proximity to the metal access door.
In addition to being upgradeable, it’s also good to see that they’re using decent-quality components rather than a generic unbranded drive and RAM.
Taking A Look At The Internals
The main top cover is also easy to remove, secured with a single thumb screw. Inside, you’ll find the heart of the system, the Intel Core i9-13900HK, a 13th-gen Raptor Lake mobile CPU with 14 cores and 20 threads capable of boosting up to 5.4GHz.
It’s quite a power-hungry CPU, with a TDP of 45W, so hopefully the cooler is able to deal with this. It’s a full copper heat-pipe design, but being a mini PC it’s quite compact. It does, however, have quite a large fan, which should support better cooling and keep noise down.
The CPU includes Intel Iris Xe integrated graphics with 96 execution units, running at up to 1.5GHz. This will likely be limiting for GPU-heavy tasks or gaming. That said, for media playback, light editing, or basic 3D applications, it should hold up reasonably well.
First Boot & Performance Testing
The included power adapter outputs 19V at 6.32A, totaling 120W, which is higher than most mini PCs but necessary for this level of performance.
The K10 ships with Windows 11 preinstalled, and the installation appears to be clean with no bloatware.
Opening Task Manager confirms the specs:
Intel Core i9-13900HK with 20 threads
32GB DDR5 RAM @ 5200MHz
1TB Crucial NVMe SSD
Intel Iris Xe iGPU with 16GB shared memory
Geekbench CPU Benchmark
Running a Geekbench CPU Benchmark yields solid results:
Single-core score: 2,411
Multi-core score: 12,596
Averaged over three tests: 2,514 and 12,606 respectively
These scores are pretty good for a mini PC, and would beat some more modern Core Ultra 7 series PCs.
3DMark Night Raid GPU Benchmark
In 3DMark Night Raid, designed for integrated GPUs:
Overall score: 21,902
Graphics score: 25,166
CPU score: 12,624
The graphics score is near the top end of what is achievable with this iGPU and is far better than older UHD graphics. It’s likely good enough for low to medium settings in less demanding titles, but it won’t cope with demanding modern games. The CPU score is also quite good.
Gaming Test: Counter-Strike 2
I then tried running Counter-Strike 2 to see how the GPU performed. As expected, it’s not great for gaming. At 1080p with graphics on medium settings we get about 75fps, which is usable, but I expected a bit more; the GPU is very much the bottleneck for this PC. With graphics set to very high, this drops to 30fps.
Storage Performance
Testing the Crucial P3 Plus SSD with a 1GB test file:
Read speed: ~5190 MB/s
Write speed: ~4750 MB/s
These are expected figures for a PCIe Gen 4 x4 interface with a DRAM-less budget drive.
Power Consumption
Idle: 18W
Full load (CPU + GPU): 84W
While this is high for a mini PC, it’s still much more efficient than a desktop with comparable specs. The performance-per-watt is excellent.
Fan Noise & Sound Level
Throughout benchmarking and gaming I was pleasantly surprised by how quiet the fan was. It’s barely audible at idle and only slightly louder when fully loaded.
Idle: 30dB
Full load (CPU + GPU): 39dB
The fan also does a good job of keeping the CPU cool; even under full load the CPU didn’t go over 60°C.
Final Thoughts On The GMKtec NucBox K10
The GMKtec NucBox K10 delivers an impressive balance of power, upgradability, and quiet operation in a compact form factor. It’s not a gaming rig, but it excels as a homelab server, media center, or productivity workstation.
Pros:
Powerful 14-core i9 processor
Toolless design for easy upgrades
Quiet under load
Generous IO selection
Excellent storage and RAM options
Cons:
GPU performance limits gaming
Higher power consumption than most mini PCs
Lacks USB 4 support
In terms of pricing, on the GMKtec website the barebones PC with no RAM or SSD installed is $420; this goes up to $590 with 64GB of RAM and a 1TB SSD installed. You can also often find them on sale on Amazon, so have a look there first.
For those who need a quiet, powerful, and compact system, the K10 is a great value, especially if you’re building out a homelab. I’ve already added mine to my 3D printed Lab Rax setup.
Let me know what you think of the NucBox K10 in the comment section below and if there’s anything else you’d like to see me test on it.
Today we’re taking a look at the new Beelink ME Mini, a compact mini PC designed to function as a small home NAS. It supports up to six NVMe drives, features dual 2.5G networking with link aggregation, and offers a silent, compact form factor — all ideal traits for a flexible DIY NAS.
Here’s my video review of the Beelink ME Mini, read on for the written review;
What’s in the Box?
In the box you’ll find:
The Beelink ME Mini itself, wrapped for protection
An HDMI cable
A power cable
A user manual
Like most mini PCs, it does not include an Ethernet cable, so you’ll need to provide your own.
First Impressions and Design
The ME Mini is impressively compact, measuring just 99mm square. It has ventilation holes on the top, bottom, and two sides.
This version is white, but it’s also available in grey and a blue-green color called Peacock Blue.
Front I/O
USB 3.2 Type-C port
Power indicator LED
Power button
Sleep indicator LED
USB 3.2 Type-A port
Rear I/O
AC power input
USB 2.0 port (mouse/keyboard; can be set to always-on)
Dual 2.5G Ethernet ports
HDMI port
The two sides are reserved for ventilation only.
To access the internals, you’ll need to remove four screws on the bottom. These screws are initially covered with sticky plugs — a slightly odd choice for a device that’s meant to be user-accessible for drive upgrades.
Internal Layout and NVMe Support
With the cover removed, you’ll find a large central heatsink with drive slots on either side. The internal layout is thoughtfully designed:
Supports up to six M.2 2280 PCIe Gen 3 NVMe SSDs
Five slots use a single PCIe lane
One slot uses two PCIe lanes, intended for the OS
Thermal pads are preinstalled, making drive installation very straightforward
A 2TB Crucial P3 Plus drive was preinstalled in this model, and Beelink has partnered with Crucial for this lineup — a welcome change from generic SSDs often found in budget systems.
Why the 4TB Limit?
Official specs list a maximum supported capacity of 4TB per drive, likely due to:
Power or thermal limits
Heatsink contact only on one side, while larger 8TB drives are often double-sided
That said, 8TB may still work — just with reduced cooling and potential risk.
Power and Cooling
The rear of the heatsink houses a 45W built-in power supply — no external brick required. It takes a direct mains cable and is rated at 12V, 3.75A.
This heatsink cools:
The CPU
The power supply
All NVMe drives
A single fan blows downward across the heatsink, with air exiting through the bottom and side vents. The heatsink has a machined contact face to improve thermal transfer from the CPU.
CPU, RAM, and Connectivity
The ME Mini is powered by Intel’s new N150 CPU:
4 Efficiency cores
Up to 3.6GHz
6W TDP
Slightly faster than the popular N100
Memory and Storage
12GB LPDDR5 RAM at 4800MHz (soldered, not upgradeable)
64GB eMMC storage (also non-upgradeable)
Connectivity
Dual 2.5G Ethernet ports
WiFi 6
Bluetooth 5.2
Software and Use Cases
The ME Mini ships with Windows 11, but since it’s geared toward NAS use, I installed TrueNAS for testing.
It’s a flexible platform that could also be used for:
Unraid (home NAS)
Kodi, Plex, or Jellyfin (media center)
Proxmox (homelab/virtualization)
Note: The 12GB RAM might limit heavier virtualisation tasks.
OS Flexibility
You can install your OS on either:
The eMMC storage, ideal for lightweight systems like Unraid
The 2-lane NVMe slot, for faster OS performance under Windows/Linux
This gives you flexibility based on how you want to allocate your storage.
Storage Testing: NVMe Performance
I installed a second 2TB P3 Plus SSD to test both the single-lane and dual-lane NVMe ports.
Drive Setup
Two separate storage pools were created in TrueNAS:
One on the dual-lane slot
One on a single-lane slot
File Transfer Benchmarks
1GB file test: ~260MB/s write, ~245MB/s read
16GB & 64GB files: Similar results
Real-world test with 46GB video file:
~280MB/s both to and from NAS on both slots
The performance across both slots was identical, as expected, because the 2.5G Ethernet link is the bottleneck: 2.5Gbps works out to roughly 300MB/s, while even a single PCIe Gen 3 lane can manage around 985MB/s, so the internal lane allocation doesn’t become a factor here.
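For reference, here’s a minimal sketch of the kind of timed copy behind the real-world test, assuming the NAS share is mounted on the client at a hypothetical path; the exact tooling I used isn’t shown here.

```python
# Rough throughput check for a NAS share mounted at /mnt/nas (hypothetical path).
# Copies a large local file to the share and reports the average write speed.
import os
import shutil
import time

SRC = "/home/user/test_video.mkv"   # hypothetical large local test file
DST = "/mnt/nas/test_video.mkv"     # hypothetical SMB/NFS mount point of the NAS pool

start = time.time()
shutil.copyfile(SRC, DST)
os.sync()                           # flush buffers so caching doesn't inflate the result
elapsed = time.time() - start

size_mb = os.path.getsize(SRC) / 1_000_000
print(f"Average write throughput: {size_mb / elapsed:.0f} MB/s")
```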
Thermals and Noise
Even under stress testing:
CPU temps stayed below 55°C
Fan noise was negligible
Silent at idle (around 35dB)
Barely audible at full load, only noticeable within 20cm (around 36dB)
The case gets warm but not hot, which is impressive for such a compact, fan-cooled system.
Power Consumption
With two NVMe drives installed:
Idle: ~8W
Write load: ~12W
Max CPU load: ~15W
This is very power-efficient, especially for 24/7 operation.
Pricing and Value
Base version (no storage): $209
2TB version: $329
4TB version: $429
Note: The drive upgrades aren’t discounted — it costs about the same as adding your own Crucial P3 Plus. But the base $209 model offers fantastic value for its features.
Who Is the Beelink ME Mini For?
This is a great option if you’re looking for a:
Quiet, energy-efficient NAS
Flexible platform with OS choice
Media server or file backup system
Device to run Docker containers, light Proxmox VMs, or home services
You can start with a single SSD and scale up to 6 drives.
Limitations
It’s not designed for:
Hot-swappable drives
Hardware RAID
10G networking
PCIe expansion
So it’s not for high-demand enterprise environments. But at $209, it beats most DIY NAS options, including my own Raspberry Pi NAS build from last year, and with significantly more performance.
Final Thoughts
The Beelink ME Mini is a compact, silent, and power-efficient mini PC that delivers everything you need for a budget-friendly DIY NAS. With support for six NVMe drives, dual 2.5G networking, flexible OS options, and surprisingly solid performance, it’s a well-rounded package for home users looking to build their own storage solution without the noise, bulk, or high cost of traditional NAS systems.
The ME Mini punches well above its weight for home NAS use, it’s:
Tiny
Efficient
Silent
Affordable
And thoughtfully designed for DIY upgrades and flexibility
Let me know in the comment section below what you think of the ME Mini or if there’s anything else you’d like to see tested!
If you’re into testing and experimenting with Raspberry Pi accessories, then you’ll know the importance of a solid setup that’s both functional and accessible. In today’s post, I’ll walk you through the design and build process for a custom open-air Pi test bench tailored for the Raspberry Pi 5. It’s complete with a real-time stats display, RGB CPU load monitor, and push-button controls.
Here’s my video of the build, read on for the written guide;
Some of the above parts are affiliate links. By purchasing products through the above links, you’ll be supporting this channel, at no additional cost to you.
Why Build a Raspberry Pi Test Bench?
The idea was to create something better than a simple Pi stand. I wanted something that looks great on a desk but is also practical for testing different HATs, accessories, and custom configurations. I needed a setup that would allow easy access to the Pi’s components while offering flexibility for cooling and external add-ons.
The result is a two-level stand with the Pi 5 on the base and a mounted HAT above it, making everything clean and organized while remaining functional. I still have access to the Pi’s GPIO pins and have a clear area above the Pi to fit a range of coolers.
Designing the Stand in Fusion 360
I started out in Fusion 360, designing the stand to hold a Raspberry Pi 5 flat on the desk and a HAT mounted above at an angle. The angled top mount can accommodate various add-ons like NVMe adapters or AI accelerators. You can still add external coolers and peripherals, which would be difficult with a HAT sitting directly on the Pi.
The stand was designed to be milled from aluminium for durability and aesthetics, but it can also be 3D printed. Alongside the main frame, I designed a small custom PCB that adds an OLED screen, RGB LED, and three programmable buttons to the mix.
Here are the 3D Print Files if you’d like to try printing out your own stand.
Designing the Control PCB in EasyEDA
The control board was designed using EasyEDA, a free online PCB design tool.
Despite its small size, the PCB brings a lot of functionality:
OLED Display: Shows IP address, CPU temperature, and system resource usage.
RGB LED: Changes colour based on CPU load—green for idle, red for max load.
Three Pushbuttons: Mapped to custom actions like running scripts, toggling services, or rebooting.
All of this connects neatly to the Pi’s GPIO header via a short lead.
Here are the PCB gerber files if you’d like to make your own PCB;
To fabricate the components, I used the Carvera Air, a compact desktop CNC that Makera sent me to try out. I already had one of their machines from their Kickstarter campaign last year, so this expands my workshop’s capabilities.
The Carvera Air is a versatile machine that can:
Mill wood, plastic, and aluminium
Laser engrave
Fabricate PCBs
So it’s a great addition to a home workshop or Makerspace.
Milling the Aluminium Parts
I began by milling the three aluminium pieces for the stand, the two identical sides and the central joiner.
First up, we need to create the toolpaths for each operation required to make each part. I did this in Fusion 360’s manufacturing workspace, which also allows you to add virtual stock and simulate the generated paths.
To make the first leg, the Carvera Air starts by auto-levelling, probing the surface of the stock. I then used a 1/8″ endmill to face the part, drilled the holes for the joiner connection with a 2mm drill bit, and finally used the same endmill to contour the part.
Tabs hold the parts in place during milling and need to be removed and cleaned up afterwards, but overall I’m really impressed by how well it came out.
We then need to repeat the process for the second leg and make up the joiner too.
Making the PCB
The PCB was also fabricated using the Carvera Air using their PCB Fabrication Pack.
This is a simple PCB, so it only requires a single-sided PCB blank. The Carvera Air again starts out by probing the surface of the blank so that it’s able to accurately engrave the traces.
The traces are then engraved using a 0.2mm engraving bit.
UV-curing solder mask is then applied and cured for 10-15 minutes using a UV lamp.
In hindsight, I probably put a bit too much UV mask on in each layer, so the finish isn’t great and it took a long time to cure between layers. It was my first time using the solder mask, and the end product doesn’t look too bad; it’ll be on the back in any case.
The solder mask is then removed from the pads that we’re going to solder onto using a mask removal tool. Then holes were drilled for the through-hole components with a range of drill bits.
Finally, a 0.8mm corn bit was used to cut out the board. Tabs again hold it in place, which will need to be removed and cleaned up afterwards.
The PCB components, OLED screen, RGB LED, resistors, and tactile switches, were then soldered into place.
Assembling the Test Bench
With all the components fabricated, it was time to assemble the Pi test bench.
I started out by tapping M2 holes in the joiner to bolt the aluminium sides to.
Four M2x10mm button head screws hold the legs onto the sides of the frame.
Next we need to mount the Pi. Four M2.5x6mm standoffs are used to hold the Pi securely. These can be installed either way around – I prefer having the threads facing upwards so that the Pi can just be placed onto them.
Brass inserts served as thumb screws, making it easy to remove and reattach the Pi without tools.
For the HAT, I mounted it directly to the frame using some M2.5x12mm button head screws and M2.5 nuts, since the NVMe HAT I used had no bottom-side solder joints. These same screws hold the control PCB securely in place alongside the HAT.
Real-Time Monitoring and Controls
With the Pi test bench now complete, we can load the stats script onto the Pi and start using it.
The OLED display shows live system stats including the Pi’s IP address, CPU load, and temperature and other resource utilisation like RAM and storage.
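As a rough illustration of what the stats script does, here’s a minimal sketch that gathers the same readings with psutil and draws them to the screen, assuming an I²C SSD1306-style OLED driven with the luma.oled library (the actual display driver and wiring used by my script aren’t detailed here).

```python
# Minimal stats-display sketch (assumes an SSD1306 OLED on I2C plus the psutil and luma.oled packages).
import socket
import time

import psutil
from luma.core.interface.serial import i2c
from luma.core.render import canvas
from luma.oled.device import ssd1306

device = ssd1306(i2c(port=1, address=0x3C))  # assumed I2C address

def ip_address() -> str:
    # Open a dummy UDP socket to discover the outgoing interface's IP.
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.connect(("8.8.8.8", 80))
        return s.getsockname()[0]

def cpu_temp() -> float:
    # The Pi exposes the SoC temperature in millidegrees via sysfs.
    with open("/sys/class/thermal/thermal_zone0/temp") as f:
        return int(f.read()) / 1000.0

while True:
    with canvas(device) as draw:
        draw.text((0, 0), f"IP: {ip_address()}", fill="white")
        draw.text((0, 12), f"CPU: {psutil.cpu_percent():.0f}%  {cpu_temp():.0f}C", fill="white")
        draw.text((0, 24), f"RAM: {psutil.virtual_memory().percent:.0f}%", fill="white")
        draw.text((0, 36), f"Disk: {psutil.disk_usage('/').percent:.0f}%", fill="white")
    time.sleep(2)
```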
I’ve set the RGB LED up to change from green to red based on CPU load, giving you instant visual feedback at a glance. It’s green when the CPU load is under 5% and then moves through a range of yellow and through to solid red at 100% utilisation.
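The load-to-colour mapping is just a linear blend from green through yellow to red. Here’s a minimal sketch of that idea, assuming the RGB LED is driven with gpiozero on three placeholder GPIO pins rather than my actual PCB wiring:

```python
# CPU-load-to-colour sketch (pin numbers are placeholders, not the real PCB wiring).
import time

import psutil
from gpiozero import RGBLED

led = RGBLED(red=17, green=27, blue=22)  # hypothetical GPIO pins

def load_to_colour(load: float) -> tuple[float, float, float]:
    """Map 0-100% CPU load to green -> yellow -> red."""
    if load <= 5:                      # treat anything under 5% as idle
        return (0.0, 1.0, 0.0)
    frac = min((load - 5) / 95, 1.0)   # 0 at 5% load, 1 at 100% load
    return (frac, 1.0 - frac, 0.0)     # red ramps up as green ramps down

while True:
    led.color = load_to_colour(psutil.cpu_percent(interval=1))
    time.sleep(0.5)
```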
The three buttons underneath the LED are configurable through Python scripts to control services, run scripts, trigger a shutdown, or toggle the OLED display, even when the Pi runs headless.
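Button handling follows the same pattern. Here’s a hedged sketch using gpiozero, with placeholder pin numbers and example actions standing in for my actual mappings:

```python
# Button-mapping sketch (GPIO pins, script paths, and service names are illustrative placeholders).
import subprocess
from signal import pause

from gpiozero import Button

def run_backup():
    subprocess.run(["/home/pi/scripts/backup.sh"])            # hypothetical script

def restart_service():
    subprocess.run(["sudo", "systemctl", "restart", "smbd"])  # example service

def shutdown():
    subprocess.run(["sudo", "shutdown", "-h", "now"])

buttons = {
    Button(5): run_backup,        # placeholder pins: 5, 6, 13
    Button(6): restart_service,
    Button(13): shutdown,
}
for btn, action in buttons.items():
    btn.when_pressed = action

pause()  # keep the script alive, waiting for button presses
```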
Final Thoughts
The extra weight from the aluminium stand, PCB, and HAT adds stability—preventing the Pi from sliding around when plugging in cables. And let’s be honest, it looks fantastic on my desk.
If you’re into Pi projects, want to test new accessories, or just want a clean, professional bench setup, this project is a great starting point.
Let me know in the comments section below what features you’d like to see added. Maybe an integrated fan controller? More buttons? USB hub?
If this build has inspired you, check out the Carvera Air from Makera. It’s an awesome addition to any workshop, letting you prototype your own PCBs and aluminium components quickly and accurately.
Today’s project is a little ridiculous, but in the best way possible. I’ve built a custom waterblock for the Raspberry Pi 5, and I’ve gone all out. This block features a milled aluminium cold plate, an integrated clear acrylic distribution plate, a built-in pump, and hardline tubing leading to an 80mm radiator and fan.
It’s complete overkill… and that’s exactly the point.
Here’s my video of the build, read on for the write-up;
Some of the above parts are affiliate links. By purchasing products through the above links, you’ll be supporting this channel, at no additional cost to you.
Making With the Carvera Air Desktop CNC
The entire waterblock was machined using the new Carvera Air, a desktop CNC machine that’s genuinely expanded what I can do in my workshop. While Makera did send me this unit for the video, I was already a backer on Kickstarter last year and have been using mine to fabricate parts for my other projects. You might have even spotted it in the background of a few recent videos.
For this build, I pushed it to its limits by milling aluminium and acrylic with precision.
I’ve built a few water-cooled Raspberry Pi projects before, but they usually end up bulky. This time, I wanted to combine water cooling with a compact custom distribution plate, aiming to make something truly unique and much smaller.
The Waterblock & Cooling Loop Design
As always, I started designing the components in Fusion360.
At the heart of the system is a milled aluminium block that makes direct contact with the Pi’s heat-generating components. The CPU is the main target, but I’ve also sized thermal pads for the RAM, USB and Ethernet controllers, and the power circuitry—taking full advantage of the additional cooling capacity.
To complete the waterblock, stacked on top is a two-layer acrylic distribution plate. This plate not only channels coolant over the block but also houses the pump itself.
I used a low-profile pump with an acrylic top and reverse-engineered the cutout to fit it seamlessly. The pump mounts directly into the plate with M4 screws and some M3 countersunk screws secure the acrylic layers to the aluminium base. I also custom made gaskets to ensure a good seal between the layers.
To keep everything clean and compact, I designed a small 3D-printed stand for the radiator next to the Pi. I opted for hardline tubing for aesthetics, though I didn’t bother trying to model the bends in CAD; that was a challenge for later.
Machining the Components on the Carvera Air
I began making the waterblock with the aluminium cold plate, which would take the longest to mill. Setting up the toolpaths in Fusion 360 was a process in itself. A note for those using the free version: it doesn’t allow exporting multiple tool operations into a single CNC file. To get around this, you can either combine the G-code manually or use Makera’s new CAM software.
The 10mm aluminium stock was clamped onto the Carvera Air’s bed. While this machine doesn’t have an automatic tool changer, there are some excellent 3D printable tool holders that help keep everything organized.
Before cutting, the Carvera Air performed auto-leveling with its probe.
The machining process involved several steps:
Facing the stock with a 1/8″ flat endmill to the final thickness.
Drilling holes for the acrylic plate screws and Pi mounts.
Surfacing the heat pads and cleaning surrounding areas.
Contouring the outer shape, with tabs to keep it secured.
Flipping the plate to mill the internal cooling channels.
The final result came out great—especially for a desktop CNC. This was my first time milling aluminium, and while there are visible tool marks, the surface finish is smooth and clean.
Next up was the acrylic distribution plates, milled from 10mm clear cast acrylic. The first plate was machined with:
A 2mm flat endmill for the o-ring groove,
A 1/8″ endmill for the pump cutout and screw holes,
Pocket milling and outer contours.
The second acrylic layer followed a similar process, with the addition of thread milling using an M4 tool to tap the four pump mounting holes. I then countersank the screw holes using a chamfer bit.
The final step was threading the inlet and outlet ports by hand using a 1/4″ BSP tap. Makera currently offers thread mills in some metric sizes, but a BSP-compatible tool could be sourced elsewhere. I also tapped the M2.5 and M3 holes in the aluminium base at this stage.
Assembling the Waterblock
Assembly of the waterblock started with creating four custom o-rings using 1.5mm cord that I cut and joined with super glue. These will seal the aluminium base, distribution channel, and inlet/outlet ports.
Once the seals were in place, I clamped the acrylic plates together. One side is secured with M3 screws, and the other side is held by the pump itself. The pump is a compact 12V model whose geometry I had replicated in CAD. After inserting the base and impeller, I fixed it in place with four M4 screws.
With the block assembled, I moved on to completing the rest of the loop.
Hardline Tubing and Radiator Setup
To dissipate the heat from the waterblock, I added an 80mm aluminium radiator connected via 12mm hardline tubing. I’d never worked with hardline tubing before, but a bit of trial and error yielded some good results. A Milwaukee heat gun did the job, though it lacked a trigger lock, which made things trickier.
To add a fill port cleanly, I used a compact tee on one of the radiator ports. This provided a simple and tidy way to fill the loop without adding unnecessary bulk.
After tightening all the fittings on the waterblock and radiator, I mounted the entire assembly onto the 3D printed stand.
Filling the loop was surprisingly satisfying, especially with the fluorescent green coolant.
The pump and fan are powered via an adjustable 12V power supply, which lets me tweak their speeds for noise control. There’s no reservoir in the system, so working out the air bubbles took some patience. But the compact design made it worth the effort.
Installing the Pi and Testing the Waterblock
Once the loop was running smoothly and leak-free, I installed the Pi 5 onto the block. I used thermal paste for the CPU and 1mm thermal pads on the other components. I designed the heat sink pads to sit 0.8mm below the components to allow compression and ensure solid thermal contact.
The Pi mounts securely with four M2.5 screws.
Time to answer the big question, does it actually work?
With everything powered up, I ran CPU Burn to stress the Pi’s CPU. It was overclocked to 2.8GHz (up from the stock 2.4GHz) to push the cooling system to its limits.
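To capture the temperature curves during these runs, a simple poll of the Pi’s firmware temperature readout is enough. Here’s a minimal sketch of that approach (my actual logging script may differ); the stress tool itself runs separately in another terminal:

```python
# Log the Pi's CPU temperature once a second while a stress test runs in another terminal.
import csv
import subprocess
import time

def cpu_temp() -> float:
    # 'vcgencmd measure_temp' prints something like: temp=45.0'C
    out = subprocess.run(["vcgencmd", "measure_temp"], capture_output=True, text=True).stdout
    return float(out.split("=")[1].split("'")[0])

with open("temps.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["seconds", "temp_c"])
    start = time.time()
    while True:
        writer.writerow([round(time.time() - start, 1), cpu_temp()])
        f.flush()
        time.sleep(1)
```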
To start with, we need a baseline. I ran the same test on the same Pi 5 without any cooler, and then again with the official Active Cooler, and got the following results;
Stock Pi at 2.8GHz (no cooling): it started with a base temp of 44°C and started thermal throttling in under 30 seconds.
With the Active Cooler: it started at 37°C and peaked at 68°C after 5 minutes.
So the Active Cooler does a fair job at keeping the overclocked Pi cool but it still gets quite warm.
I then moved on to testing the Pi 5 in my new custom loop;
With this custom loop: base temp of 24°C (just 3°C above ambient), peaking at 32°C under full load. That’s a full 36°C drop compared to the stock unit and 5°C cooler than the active fan solution at idle.
The oversized aluminium block made a big difference by directly contacting the CPU heat spreader. With so much thermal headroom, I was also able to lower the pump and fan voltage for quieter operation without sacrificing cooling performance.
Final Thoughts
This was definitely an over-the-top build—but that’s what made it so much fun. It was my first time building a distribution plate and working with hardline tubing, and both exceeded expectations. The Carvera Air handled the aluminium and acrylic with ease and gave me confidence in taking on more CNC-based projects.
If you’re interested in trying something like this yourself, I highly recommend checking out the Carvera Air on Makera’s website.
If you enjoy projects that combine CNC machining, 3D printing, and pushing small single-board computers to the limit, subscribe to my Youtube channel or follow my blog. Feel free to leave a comment down below on what you’d like to see water cooled, or what I should build with the Carvera Air, next!
The LattePanda Mu is an ultra-compact x86 compute module designed to offer powerful performance in a tiny form factor. Based on Intel’s N100 processor, this board brings full Windows 11 compatibility and a wide range of connectivity options through its edge connector.
In this review, we’ll take a closer look at the LattePanda Mu Starter Kit, including the compute module, Lite Carrier Board, and bundled accessories. We’ll dive into its specifications, test performance under Windows, run benchmarks like Geekbench and 3DMark, and explore its power consumption and thermal performance to see how well it stacks up against other small form factor PCs and SBCs.
Some of the above parts are affiliate links. By purchasing products through the above links, you’ll be supporting this channel, at no additional cost to you.
Unboxing the LattePanda Mu Basic Kit – What’s Included?
The LattePanda Mu Basic Kit includes everything you need to get started using the LattePanda Mu module:
LattePanda Mu compute module
Active Cooler
Lite Carrier Board
Battery
Mounting screws for cover plates
Two acrylic base plates
The LattePanda Mu module itself is impressively small, measuring just 60mm by 70mm. While it can’t operate on its own and requires a carrier board, its compact size still makes it ideal for embedded or portable applications.
In terms of cost, the LattePanda Mu module costs $139 for the base N100 8GB model and goes up to $259 for the N305 16GB flagship. That makes it quite a lot more expensive than something like the Radxa X4 that I showed recently, especially considering that the X4 has all of its ports ready to go, while you’ll need to add a $39 carrier board to the cost of the Mu to use it. So you’ll likely need to be making use of the Mu’s additional IO and interfacing features to justify the cost.
LattePanda Mu Module Tech Specs
The LattePanda Mu is available in three different CPU and RAM configurations. The unit tested here is the most basic of the three and is equipped with:
Processor: Intel N100 (4 cores, up to 3.4GHz, 6W TDP)
Graphics: Integrated Intel UHD Graphics at 750MHz
Memory: 8GB LPDDR5 RAM (4800 MT/s, soldered)
Storage: 64GB onboard eMMC (soldered)
Display Support: Up to three simultaneous outputs (3x HDMI 2.0, or 2x HDMI 2.0 + 1x DisplayPort 1.4)
Expandable IO: 9x PCIe 3.0 lanes available via the 260-pin SO-DIMM edge connector
Higher-end models include up to 16GB of RAM and/or an upgrade to the more powerful Intel i3-N305 processor.
The Mu is designed to be flexible for custom integration. LattePanda offers design documentation and services for developers and OEMs looking to build bespoke carrier boards for specific use cases.
Lite Carrier Board Features
The Lite Carrier Board included in the starter kit exposes the essential features of the LattePanda Mu. While it doesn’t expose all of the IO capabilities of the module, it provides the essentials to get up and running.
Although the board also supports power input through the USB Type-C port, it’s a little disappointing that this doesn’t power the PCIe slot; for that you can only use the DC input. And although the DC port is stated as accepting 12-20V, it looks like you have to use a 12V adaptor if you plan on using the PCIe slot, so power is likely routed straight through to it. This isn’t all that clear on the product pages or on the carrier board itself; they just say the PCIe port is only available if you use a 12V DC power supply, not that you can’t use a higher voltage.
The DC barrel jack input is a nice addition, as its voltage range allows the direct connection of a 4-cell lithium battery pack without requiring additional voltage regulation, which is really useful for mobile devices and projects.
Cooling is handled by the included active cooler rated at 35W of heat dissipation. For quieter or passive setups, LattePanda offers optional 10W and 15W passive heatsinks.
First Boot and Testing
The Mu comes preloaded with Windows 11, and it boots straight to a clean desktop environment. From the system monitor, we can see our N100 CPU with 4 cores, then we’ve got 8GB of LPDDR5 RAM running at 4800MHz, 64GB of eMMC storage and integrated Intel UHD graphics.
Video Playback on YouTube
1080p and 4K YouTube video playback performed flawlessly, with no stutters in windowed or fullscreen modes. This makes the Mu well-suited for home media applications.
3DMark Night Raid Benchmark
Next I ran a 3DMark Night Raid benchmark, which is a good benchmark to run on integrated GPUs.
The LattePanda Mu scored quite well;
Total Score: 4,663 (average over 3 tests of: 4,656)
Graphics Score: 4,905
CPU Score: 3,646
Geekbench 6 Benchmark
I then ran a Geekbench 6 benchmark on the CPU, which also scored fairly well;
Single-Core: 1,116 (average over 3 tests of: 1,121)
Multi-Core: 2,976 (average over 3 tests of: 2,980)
Storage Speed Test
Lastly, I tested the onboard eMMC storage speed using AJA System Test. The eMMC storage is quite slow; it’s fine for the operating system, but you’d benefit from booting from an attached NVMe drive instead:
Write: Starts at ~240MB/s, drops to ~140MB/s on sustained 1GB file writes
Read: ~260–280MB/s consistently
Fan Noise and Thermal Performance
Fan noise depends a lot on what you’ve got running and what your power settings are. With a low load on the CPU, the fan is barely audible. It runs at under 34 decibels. Under full load, the fan spins up and is then quite noisy, getting up to about 46 decibels. If you aren’t putting a heavy load on it for long periods then one of the passive coolers is probably a better desktop option.
Thermally, the active cooler does well, keeping the CPU under 55°C at full load, and the surface of the cooler is about 8°C warmer than ambient.
Power Consumption
Power consumption is really good for an SBC running an Intel CPU.
Idle (Desktop): <6W
Full Load (CPU + GPU): <22W
Power Off (Shutdown): ~0.25W
Interestingly, it still uses about a quarter of a watt when shut down completely.
PCIe Expansion
I then tried plugging an NVMe adaptor into the PCIe port to try it out. Through that, I was able to add a 2280 size 2TB Crucial P3 Plus drive to the Mu. This drive gets significantly faster read and write speeds than the onboard eMMC storage, getting around 780MB/s.
You can also use this port to add faster networking adapters or even a GPU.
Final Thoughts on the LattePanda Mu
The LattePanda Mu is a compact, flexible x86 compute module with solid performance and a wide range of IO options via its edge connector. It’s ideal for developers, embedded applications, and projects that benefit from PCIe or multiple display outputs.
It is power-efficient and really compact for its capabilities. The included active cooler is great if you’re not running the board under full load somewhere fan noise would be an issue; if you are, then the large passive cooler would probably be a better option.
It’s priced higher than some SBCs that offer comparable performance. At $139 for the 8GB model (plus $39 for the carrier board), it competes with devices like the Radxa X4, which offers onboard ports at a lower price. If you’re just after a budget-friendly N100 system, an N100 mini PC may offer better value.
Where the Mu really shines is in custom or embedded applications, especially where you can take advantage of its edge connector, multiple PCIe lanes, and flexible power input options, like direct 4-cell battery support.
Overall, it’s a well-built, capable module with specific strengths for the right user.