Over the last couple of years, SSDs have taken the market by storm. Once we leave the lowest budget segments of the market, an SSD is almost always part of a modern PC build. SSDs noticeably boost a PC's performance mainly because their random access times are orders of magnitude lower than those of traditional rotating hard disk drives, which is what makes them so desirable. Transfer rates have also increased beyond what hard disk drives can deliver and quickly hit the limits of the Serial ATA interface. This led to the introduction of novel form factor SSDs, such as those that connect directly to the PCI-Express interconnect present on PC mainboards. We are looking at one such PCIe SSD today: the VisionTek GRX 240GB (formerly named the Data Fusion).
The VisionTek GRX 240GB SSD comes as a low profile PCIe x2 card. The retail packaging includes a low profile bracket so you can actually make use of that property. The card is based on a Marvell 88SE9230 RAID controller, which has a PCIe x2 host interface and offers up to four SATA 6Gb/s ports. Two of these are used to hook up two separate 120GB SSDs; the other two are unused. The RAID controller is covered by a heatsink to ensure safe operating temperatures. While technically it's a SATA device, the SSD modules use a proprietary form factor that's not compatible with the mSATA or M.2 form factors used for similarly sized SSDs. The SSDs can be removed from the PCIe card individually, but due to the proprietary form factor this is of limited use. The SSDs are based on the well-known LSI SandForce SF-2281 controller.
In the shipping configuration the controller was set up as a single RAID0 array with a stripe size of 64k. This is the intended way of using this product, as only in this configuration can it achieve transfer rates exceeding those of SATA SSDs, as my testing will show later. However, it is also possible to use the SSDs independently if that is what you want. Note that if you want to Secure Erase the SSDs to completely wipe the data and restore factory performance, the RAID array must first be disbanded so the SSDs become available individually; the operation is not possible on the RAID array as a whole. Each SSD can then be Secure Erased individually using a tool such as hdparm.
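Under Linux, the per-drive Secure Erase might look like the following sketch. The device name `/dev/sdb` is an assumption for one of the individual SSDs once the array has been disbanded; the password is arbitrary and is cleared again by the erase itself:

```shell
# Confirm the drive supports the ATA security feature set and is not "frozen"
# (a suspend/resume cycle often unfreezes a drive the BIOS has locked)
hdparm -I /dev/sdb | grep -A8 "Security:"

# Set a temporary user password, which the erase command requires
hdparm --user-master u --security-set-pass p /dev/sdb

# Issue the ATA Secure Erase; this irrevocably wipes all data on the drive
hdparm --user-master u --security-erase p /dev/sdb
```

Double-check the device node before running this; pointing it at the wrong drive will wipe that drive instead.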
Usage of the SSD is straightforward: plug it into a free PCIe slot and you are good to go. Since x2 slots aren't present on most motherboards, you will typically have to use an x4 or x16 slot, which are far more common. When using x16 slots you should pay attention to the lane configuration of your motherboard. Ideally you shouldn't take lanes away from your discrete graphics card, though on some boards this may be unavoidable. For this SSD it is fine to use a PEG slot that is only wired with four electrical lanes, since the card will only use two of them anyway.
When booting the computer, the option ROM briefly displays the configuration of the SSDs and the key combination (Ctrl + M) to enter the configuration menu. In the menu you can essentially only create and delete an array or set up the SSDs to work in single drive mode. Booting from the GRX PCIe SSD works without a hitch. Because it reports itself as a standard AHCI device, it can be used without additional drivers on both Windows and Linux. In my testing I even cloned an existing Windows 7 installation to the SSD and it worked without major issues. Likewise, it is possible to install a fresh OS directly to the SSD without having to prepare special drivers, which is quite convenient.
Monitoring S.M.A.R.T. data with standard tools is not possible when the card is configured as a RAID, but it works when the SSDs are set up to work independently. To monitor S.M.A.R.T. in a RAID setup, it is necessary to install the Marvell Storage Utilities, which let you check on the SSDs through a web interface.
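In the single-drive configuration, a standard tool such as smartmontools can read the attributes directly. A sketch, with the device node again being an assumption for one of the two SSDs:

```shell
# Show drive identity and the full S.M.A.R.T. attribute table
smartctl -i -A /dev/sdb

# Kick off a short self-test, then read its result from the self-test log
smartctl -t short /dev/sdb
smartctl -l selftest /dev/sdb
```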
Test System
APU: AMD A10-7850K (3.7 GHz)
Mainboard: ASRock FM2A88X Extreme6+ (BIOS 2.90)
RAM: 2x4GB AMD DDR3-2400 Kit
OS Disk: 750GB Seagate Barracuda 7200.11
PSU: Corsair TX650 650W
OS: Windows 7 x64 SP1
The GRX PCIe SSD was installed in the lowest PEG slot on the motherboard, labelled PCIE5, which is electrically an x4 slot. Due to AMD's system architecture the SSD is directly connected to the APU and doesn't have to go through the chipset. For the tests the SSD was only used as a secondary drive, while the OS was booted from the HDD in the test system. The tests were conducted with the Windows 7 default msahci driver. I also ran a few tests with Marvell's driver, but the results were within usual measurement tolerance of each other, so I won't compare the two.
AIDA64 Read / Write Linear / Random
The AIDA64 benchmarks show both the sequential and random access read and write throughput. It should be noted that AIDA64 uses incompressible data for the write tests. The performance drop towards the end of the random write test was consistent and could be replicated across a number of runs.
The I/O comparison scores don’t show anything particularly interesting.
Using overlapped I/O, the SSD comes closest to its theoretical performance limits. The read speeds exceeding 800MB/s should be close to the practical maximum of a PCIe 2.0 x2 link.
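As a back-of-the-envelope check (my own arithmetic, not a vendor figure): PCIe 2.0 signals at 5 GT/s per lane with 8b/10b encoding, so two lanes carry roughly 1 GB/s of payload before packet and protocol overhead, which makes ~800MB/s a plausible practical ceiling:

```python
# PCIe 2.0 raw signalling rate per lane, in transfers per second
raw_gt_per_s = 5e9
# 8b/10b line encoding: 8 payload bits per 10 transferred bits
payload_fraction = 8 / 10
lanes = 2

# Usable link bandwidth in bytes per second, before packet/protocol overhead
link_bytes_per_s = raw_gt_per_s * payload_fraction * lanes / 8
print(link_bytes_per_s / 1e6)  # 1000.0 MB/s; real transfers land below this
```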
Crystal Disk Mark
Crystal Disk Mark is a rather straightforward test that gives a rough overview of what a storage device is capable of. With random data we see read speeds well above 600MB/s, while write speeds are in a similar region as those of normal SATA SSDs.
When using a pattern consisting only of zeros, both read and write speeds go up due to the compression used by the SandForce controller. While this hardly represents a real world scenario, it shows we can indeed surpass speeds of SATA SSDs in both reads and writes.
The AS SSD benchmark gives a good overview of both sequential and random IO performance using incompressible data. The compression test clearly shows how write performance scales strongly with increased compressibility of the data. It's hard to translate this to real world usage, as the compressibility of the data you work with is strongly use case specific.
The Anvil SSD scores show the performance with incompressible data, i.e. not the most favorable conditions for SandForce-based SSDs. While the numbers are respectable, except for the sequential read speeds none of the scores are out of reach for SATA SSDs.
This shows the performance of one of the individual SSDs that make up the RAID0. I opted not to present the entire benchmark suite for the single SSD, since anyone using this PCIe SSD would be ill-advised to run it that way, but picked the Anvil results as they provide a good overview for reference. They also help explain the performance of the RAID0 setup: sequential write speeds scale almost linearly, while reads don't. Random access performance doesn't benefit from RAID0 as much, which is expected.
I measured an idle power consumption of 61.3W for the test system. It should be noted that this is a bit higher than it should be due to a bug in the BIOS used for this test. It doesn't affect the delta values with the SSD though, so I still feel comfortable presenting these numbers here. Idle power with the SSD installed came in at 66.4W (a delta of 5.1W versus without). Stressing a single SSD of the PCIe card raised power consumption to 74.7W (8.3W above idle with the SSD), while stressing both SSDs in the RAID setup resulted in a 79.1W power draw (12.7W above idle with the SSD). All measurements were taken at the wall outlet.
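The deltas above are simple subtractions of the wall-outlet readings; as a quick sanity check, with the values copied from my measurements:

```python
idle_baseline_w = 61.3  # test system without the PCIe SSD
idle_with_ssd_w = 66.4  # SSD installed, idle
load_single_w = 74.7    # one SSD of the card under load
load_raid_w = 79.1      # both SSDs stressed in the RAID0

# Deltas relative to the relevant baselines, rounded to one decimal place
ssd_idle_delta = round(idle_with_ssd_w - idle_baseline_w, 1)   # 5.1 W
single_load_delta = round(load_single_w - idle_with_ssd_w, 1)  # 8.3 W
raid_load_delta = round(load_raid_w - idle_with_ssd_w, 1)      # 12.7 W
print(ssd_idle_delta, single_load_delta, raid_load_delta)
```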
Before I can give a recommendation, let's look at the current prices of this product in popular online stores. At Newegg the VisionTek GRX PCIe SSD currently sells for $379, while on Amazon it's available at $384.99. Given this price point and the performance I observed in my testing, this SSD is a hard sell. The only metrics where it truly outperforms SATA SSDs are sequential read speeds, and sequential write speeds once compressible data is used. The reason is that the individual SSDs making up the RAID0 on the VisionTek GRX 240GB only offer average performance.
Compared with popular high-end SATA SSDs like the Samsung SSD 840 Pro, this PCIe SSD gets the short end of the stick. The 256GB version of the Samsung 840 Pro currently sells for $200 and provides better random IO performance at almost half the price. For the price of this PCIe SSD you could either get the 512GB version, or get two 256GB units and run them in a RAID setup on your mainboard for even better performance.
I couldn't come up with a use case where this PCIe SSD would excel enough to justify its price point. The advertised transfer speeds can only be reached in certain niche scenarios, and otherwise the performance can be characterized as average.