I recently tested out CIFS share performance on FreeNAS 9.3 (version FreeNAS-9.3-STABLE-201503270027, the latest as of April 5, 2015) on a PC built using the following parts:
Intel Core i3-2100T – dual core, 2.5 GHz clock speed
Asus P8H77-I motherboard with 6 SATA ports
1600 MHz DDR3 RAM
320 GB Hitachi 2.5" HDD (just for the test)
1 GbE port
The test topology is:

                         GbE (Cat 6)             GbE (Cat 6)
FreeNAS Server ----------------------> Router ----------------------> Windows 7 Client PC
(Samba server)                                                        (LANTest)
(iPerf server)                                                        (iPerf client)
(diskinfo -t /dev/ada0)
The results are described below:
(1) iPerf shows 76-85 MB/s with CPU usage between 10-15% (the maximum is 200% since it is a dual-core CPU). I would take this as the practical maximum that I could extract from my network with my setup. The theoretical maximum transfer rate on my test setup is 125 MB/s, since I have only one GbE port on the motherboard.
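For anyone who wants to reproduce the baseline, an iPerf run along these lines would do it (<freenas-ip> is a placeholder for the server's address; the -f M flag makes iPerf report in MBytes/s rather than its default Mbits/s):

    # On the FreeNAS box: start iPerf in server mode
    iperf -s
    # On the Windows 7 client: run a 30-second test, report in MBytes/s
    iperf -c <freenas-ip> -t 30 -f M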
(2) LANTest transfer performance is 74-76 MB/s write and 64-66 MB/s read, with CPU usage of around 25-28%.
(3) If I measure the transfer rate of a Windows copy with a large 5.3 GB file, I get a read rate of 57 MB/s and a write rate of 47 MB/s, with around 15-18% CPU when reading from the network server and 20-25% when writing to the network file server.
(4) The diskinfo benchmark reports transfer performance between 28 MB/s and 62 MB/s depending on where the data sits on the physical disk medium (outer/middle/inner areas).
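For reference, diskinfo is FreeBSD's built-in disk utility and -t runs its read transfer test; outer tracks come out faster because, at a constant rotational speed, more sectors pass under the head per revolution. The invocation from the topology above:

    # Raw-disk read benchmark; the output includes a "Transfer rates"
    # section covering the outside, middle and inside zones of the disk
    diskinfo -t /dev/ada0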
A few surprising things:
(a) Using tools like LANTest, CrystalDiskMark, NAS Performance Tester, etc., I am getting write performance higher than read performance, as opposed to real-world Windows file copy-paste, which throws up a more reasonable-sounding outcome. I have seen such reports elsewhere on the Internet too.
(b) In my case, LANTest reports higher performance than what is observed in real-world copy-paste.
I did try using some standard Samba optimizations (see the bottom of this post), set through auxiliary parameters in the FreeNAS CIFS GUI configuration, but none of them had any noticeable positive impact on throughput. If anything, there was a very marginal degradation (about 1 MB/s).
On a little investigation, I found that the Samba server (the smbd daemon) does not seem to be multi-threaded but follows a multi-process design, with one process per network client. In my test topology there is only one client, and the smbd transfer is using only 1/8th of the available CPU horsepower (200% max). The bottleneck looming on the horizon is the single GbE network interface.
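This is easy to verify from the FreeNAS shell during a transfer (smbstatus ships with Samba; ps and top are standard FreeBSD tools):

    # List connected CIFS clients and the smbd PID serving each one
    smbstatus -b
    # One smbd process per connected client shows up here
    ps ax | grep smbd
    # Watch per-CPU and per-process usage during a transfer
    top -P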
I do not expect the results of AFP and NFS shares to drastically alter these observations regarding the bottleneck.
Suggestion for Home Users:
So for most consumer-grade applications, which may require at most 2-3 parallel Samba transfers in the worst case (and typically just 1), this FreeNAS setup of mine seems overpowered. Stepping down to a single-core CPU or a multi-core Atom might be a workable option within the x86 architecture. It also opens a window of opportunity for multi-core ARM-based SBCs (especially those with SATA ports rather than only multiple USB ports), particularly where no RAID is required and one disk is sufficient.
It is also worth noting that many home users do not need very high transfer rates (100 MB/s or so). They are fine if a copy (read/write) just works at 10-20 MB/s, which is sufficient for downloads and 1080p video streaming (though 4K would be a challenge), and they do not move big files around or use a network drive as a replacement for local storage. I am one of them most (maybe almost all) of the time. Fast Ethernet (10/100 Mbps), however, should be avoided.
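As a rough sanity check on those numbers: a high-bitrate 1080p Blu-ray rip peaks at around 40 Mbit/s, i.e. about 5 MB/s, which fits comfortably within a 10-20 MB/s budget; 4K content at roughly 50-100 Mbit/s (about 6-12 MB/s) starts to crowd it, especially with more than one stream.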
Suggestions for SOHO or small enterprise use:
For office deployment you need more parallel transfers, and therefore it is worthwhile giving this at least a dual GbE PCI NIC, or better a quad GbE NIC upgrade, possibly used with link aggregation (which might require a better router/switch that supports this feature). You could also select a motherboard with 2 GbE ports to start things out.
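On FreeNAS link aggregation is set up through the GUI, but underneath it is FreeBSD's lagg(4) driver; a minimal /etc/rc.conf sketch on plain FreeBSD would look something like this (em0/em1 and the address are placeholders, and LACP mode needs a switch that supports 802.3ad):

    ifconfig_em0="up"
    ifconfig_em1="up"
    cloned_interfaces="lagg0"
    ifconfig_lagg0="laggproto lacp laggport em0 laggport em1 192.168.1.10/24"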
So while building your own FreeNAS file server, a little bit of research on the Internet regarding the speeds achieved with different CPUs is well worth the effort.
Reference: Samba (CIFS) tuning options that I tried but which didn't work any wonders for me:
aio read size = 16384
aio write size = 16384
read raw = yes
write raw = yes
socket options = TCP_NODELAY IPTOS_LOWDELAY SO_KEEPALIVE SO_RCVBUF=131072 SO_SNDBUF=131072 IPTOS_THROUGHPUT
use sendfile = true