Sunday, April 5, 2015

Is my FreeNAS server overpowered, underpowered, or just about right?

I recently tested CIFS share performance on a FreeNAS 9.3 PC (version FreeNAS-9.3-STABLE-201503270027, the latest as of April 5, 2015) built using the following parts:

Intel Core i3-2100T – dual core, 2.5 GHz clock speed
Asus P8H77-I motherboard with 6 SATA ports
1600 MHz DDR3 RAM
320 GB Hitachi 2.5” HDD (just for the test)
1 GbE port


The test topology is:


                         GbE (Cat 6)                GbE (Cat 6)
FreeNAS Server <--------------------> Router <--------------------> Windows 7 Client PC
(Samba server)                                                      (LANTest)
(iperf server)                                                      (iperf client)
(diskinfo -t /dev/ada0)


The results are described below:

(1) iPerf shows 76-85 MB/s with a CPU usage of 10-15% (the maximum is 200% since it is a dual-core CPU). I take this as the practical maximum I can extract from my network with this setup; the theoretical maximum is 125 MB/s, since the motherboard has only one GbE port. The commands used are sketched just after this list.
(2) LANTest transfer performance is 74-76 MB/s write and 64-66 MB/s read, with a CPU usage of around 25-28%.
(3) Measuring a Windows copy of a large 5.3 GB file, I get a read rate of 57 MB/s and a write rate of 47 MB/s, with around 15-18% CPU when reading from the network server and 20-25% when writing to it.
(4) The diskinfo benchmark reports transfer performance between 28 MB/s and 62 MB/s, depending on where the data sits on the physical disk (outer/middle/inner areas).
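For reference, the numbers above came from commands along these lines (flags reproduced from memory, so treat them as a sketch rather than the exact invocations):

On the FreeNAS box (server side):
iperf -s
diskinfo -t /dev/ada0

On the Windows 7 client (run for about 60 seconds):
iperf -c <freenas-ip> -t 60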



A few surprising things:
(a) Using tools like LANTest, CrystalDiskMark, NAS Performance Tester, etc., I get write performance higher than read performance, the opposite of real-world Windows file copy-paste, which produces the more reasonable-sounding outcome. I have seen similar reports elsewhere on the Internet too.
(b) In my case, LANTest reports higher performance than what is observed in real-world copy-paste.

I did try some standard Samba optimizations (see the bottom of this post), set through auxiliary parameters in the FreeNAS CIFS GUI configuration, but none of them had any noticeable positive impact on throughput. If anything, there was a very marginal degradation (about 1 MB/s).

On a little investigation, I found that the Samba server (the smbd daemon) is not multi-threaded but follows a multi-process design, with one process per network client. In my test topology there is only one client, so the smbd transfer uses only about 1/8th of the available CPU horsepower (200% max). The bottleneck looming on the horizon is the single GbE network interface.
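A rough way to confirm this from the FreeNAS shell (just what I would check, not an authoritative method) is to count smbd processes during a transfer and watch per-core CPU usage:

ps ax | grep [s]mbd     (one smbd per connected client, plus the parent process)
top -P                  (per-CPU view; a single busy smbd stays on one core)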

I do not expect results with AFP or NFS shares to drastically alter these observations about the bottleneck.


Suggestions for Home Users:
For most consumer-grade applications, which in the worst case may require 2-3 parallel Samba transfers (and typically just 1), this FreeNAS setup of mine seems overpowered. Stepping down to a single-core CPU or a multi-core Atom is a workable option within the x86 architecture. It also opens a window of opportunity for multi-core ARM-based SBCs (especially those with SATA ports rather than only multiple USB ports), particularly where no RAID is required and one disk is sufficient.

It's also worth noting that many home users do not need very high transfer rates (100 MB/s or so). They are fine if a copy (read/write) just works at 10-20 MB/s, which is sufficient for downloads and 1080p video streaming (though 4K would be a challenge), and they do not move big files around or use a network drive as a replacement for local storage. I am one of them most (maybe almost all) of the time. That said, Fast Ethernet (10/100 Mbps) should still be avoided.


Suggestions for SOHO or small enterprise use:
For office deployment you need more parallel transfers, so it is worthwhile giving this box at least a dual GbE PCI NIC, or better a quad GbE NIC, possibly used with link aggregation (which might require a better router/switch that supports this feature). You could also select a motherboard with 2 GbE ports to start things out. A configuration sketch follows below.
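FreeNAS lets you set up link aggregation from its network settings GUI, but underneath it is the FreeBSD lagg(4) driver. A minimal rc.conf sketch for an LACP bundle over two ports might look like the lines below (the igb0/igb1 interface names and the use of DHCP are assumptions for illustration; FreeNAS normally generates this for you, and the switch ports must be configured for LACP as well):

ifconfig_igb0="up"
ifconfig_igb1="up"
cloned_interfaces="lagg0"
ifconfig_lagg0="laggproto lacp laggport igb0 laggport igb1 DHCP"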

So while building your own FreeNAS file server, a little bit of research on the Internet regarding the speeds achieved with different CPUs is well worth the effort.


Reference: Samba (CIFS) tuning options that I tried, but which didn't work any wonders for me.
aio read size = 16384
aio write size = 16384
read raw = yes
write raw = yes
socket options = TCP_NODELAY IPTOS_LOWDELAY SO_KEEPALIVE SO_RCVBUF=131072 SO_SNDBUF=131072 IPTOS_THROUGHPUT
use sendfile = true

Sunday, March 15, 2015

Not all MicroSD cards are equal

I have a bunch of credit-card-sized single board computers to play around with, like the Raspberry Pi, BeagleBone Black, Odroid-C1 and Banana Pro. One item that I have to procure with every SBC is the flash storage for the OS and applications, i.e. the microSDHC card. It is critical to boot-up time and run-time operation, as Linux depends heavily on disk access.

But I just discovered that not all cards are made equal. Two reputed brands available in India are SanDisk (SanDisk Ultra Class 10 UHS-1) and Kingston (8 GB Class 10 UHS-1). I compared the performance of the SD cards on the BeagleBone Black and on a PC (with a BitFenix USB 3.0 internal card reader, Core i7-2600K, 8 GB RAM), and here are my observations:


  1. The SanDisk card gave a sustained performance of about 19.2 MB/s for sequential reads on the BeagleBone Black and 22 MB/s on the PC
  2. The Kingston card gave a sustained performance of about 12.5 MB/s for sequential reads on the BeagleBone Black
  3. A Samsung 840 EVO SSD can give a sequential read performance of 66 MB/s
  4. A Western Digital RE black enterprise drive can give a sequential read performance of about 95 MB/s


The test was done using a simple command:

sudo hdparm -t [flash-device-Name] 
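For example, on the BeagleBone Black the microSD card typically shows up as /dev/mmcblk0 and a SATA or USB disk as /dev/sda (these device names are assumptions; check with lsblk first):

sudo hdparm -t /dev/mmcblk0
sudo hdparm -t /dev/sda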

So my first conclusion is that not all SD cards are equal. Though the Kingston and SanDisk cards are both rated Class 10 and UHS-1, the SanDisk one is about 50% faster than the Kingston. Many buyers on Amazon have also complained about the relatively slow real-life performance of Kingston cards in their smartphones. So I would recommend that SBC buyers faced with a choice between these two cards opt for the SanDisk.

Secondly, expect a performance drop of about 15% when moving an SD card from a PC to an SBC. This is not so significant.

And thirdly, if you are moving an I/O-intensive (I/O-bound) application from a Linux PC to a Linux SBC, expect a 3-5x drop in I/O performance (assuming you are using the better SanDisk cards). Of course, the BeagleBone Black and Raspberry Pi can receive data at no more than a theoretical 12.5 MB/s (100 Mbps ÷ 8) on their Ethernet ports, but they can generate data much faster internally. The Odroid-C1 and Banana Pro, however, have GbE interfaces that can theoretically receive 125 MB/s, and for them a slow card will only restrict their capability.

Also, it's worth noting that when I connected an old 2.5" Hitachi drive (model 5K320-160) to a Banana Pro SBC using the board's SATA interface, the hdparm test gave a result of 52 MB/s. A USB 2.0-connected drive would possibly give around 30 MB/s, while an SSD connected to SATA may give 90-100 MB/s. So my final conclusion about storage medium speeds on SBCs is:

SATA SSD > SATA HDD > USB SSD/HDD > Flash Storage

Many boards do not feature SATA, so flash and USB SSD/HDD are the only available options. Most will not even boot from a USB drive easily.

Tuesday, February 17, 2015

Early signs of changing winds in the Personal Computing Industry

It's 2015, and I see winds of change for the desktop computing platform. ARM-based platforms continue their stranglehold on the mobile computing space, while Intel & AMD continue to operate in a relatively declining desktop consumer market. What is not immediately obvious is the quiet entry of ARM-based devices as entry-level PC hopefuls, just as Intel and others are trying to shrink the PC with their mini-PC and NUC efforts.

Today smartphones are coming out with 2 GHz+ quad-core or 8-core ARM chips, coupled with 4 GB of RAM. While by no stretch of the imagination can these beat Intel or AMD CPUs in raw performance, the fact of the matter is that many users DO NOT need all that x86 compute power, at least NOT ALL the time. You can see this in the prevalence of thin clients in office environments (large & small). Even at the consumer end, if you measure the average CPU usage of your PC over time, it might be less than 10%, with a typical peak between 30-50%. So let's do a rough comparison of an Intel x86 machine running an entry-level Core i3 and a Raspberry Pi 2 single board computer running an ARMv7 chip with 1 GB of slow RAM and flash storage.

  1. CAPEX - The Raspberry Pi machine (minus the display, keyboard and mouse, but loaded with WiFi, case, PSU and a fast microSDHC card) will be around $60. The PC (cabinet + PSU + motherboard + CPU) will be around $350 without peripherals. That's a 6x difference, and you get only 2 cores in the Intel PC setup. For the sake of simplification I assume we run a Linux distro (like Ubuntu) on both.
  2. OPEX - The PC's OPEX is power. A Raspberry Pi draws about 3-4 W on average (10 W is the theoretical max, based on PSU input power and 100% efficiency), while a PC with a bare-bones 250 W SMPS will draw around 120 W at idle. That's a differential of 30-40x. Even if the Raspberry Pi is kept on 24x7x365, it will consume about 35 kWh of energy per year (4 W x 24 h x 365 days ≈ 35 kWh, or probably Rs. 200 per year), with the PC coming in at 30-40 times this number.
  3. Software - A Raspberry Pi 2 with its 1 GHz quad-core CPU, 1 GB RAM, 32 GB flash and WiFi/Ethernet will most likely handle all content-consumption tasks [browsing, email, social networking, chat, audio/video streaming (including 1080p) and playback]. It will also do basic content-creation tasks (image editing, blogging, word processing, etc.) decently. The only drawback of the Raspberry Pi system would be heavyweight content creation (bulk image processing, video editing, heavy games, 3D graphics & visualization and so on), which MOST consumers anyway do not engage in on a routine basis, if at all. Today the Linux ecosystem has developed to an extent where it competes well with any Windows application for the basic work above; in other words, it can satisfy the most common needs. Even so, desktop application load times are a little too long for comfort, though once loaded many applications work fine as long as they stay in memory.
  4. Storage & Distribution Trends - Flash storage is now generally preferred over magnetic disk storage because of its performance benefits. It includes flash cards, flash drives and SSDs, which generally consume less power and occupy a smaller footprint than 3.5" HDDs. Optical media was earlier used to distribute content, but on-demand downloads, multimedia streaming and USB pen drives have replaced it in the age of net-enabled devices, rendering it almost obsolete. Both HDDs and optical drives are becoming archival storage media rather than working storage: add-on peripherals to be connected on demand, and therefore outside the core PC.
  5. Graphics capability - While there is a place for discrete graphics (GPUs), integrated graphics and on-motherboard graphics chips have evolved to a point where they handle common user activity with ease. Only for more intense graphics, 3D, gaming and massively parallel applications does discrete graphics provide any perceivable benefit. Most users anyway do not indulge in these.
  6. HDMI - The integration of sound with display by means of an HDMI port has eliminated the need for integrated or discrete sound cards, more so in an age when speakers are getting integrated into display monitors. A similar trend is the integration of webcams into monitors, connected to the PC over USB. All this again means fewer ports and less motherboard circuitry.

All of the above combine to create a disruptive shockwave in the PC space. We must change our thinking about what a PC is. With increasing volumes, core counts and performance of ARM-based SoCs in mobile devices, cost is going down and performance at a given price point is improving.

The PC (and with it Intel & Microsoft) is thus being presented with a big challenge in the form of the Raspberry Pis and Odroid-C1s, along with the Linux operating system. Increasing mobile device sales are driving down cost and increasing capabilities in the ARM ecosystem (the Pi 2 is claimed to be 6 times as powerful as the previous generation!). Again, it's easy to brush this off by saying that a PC necessarily needs a Windows OS, graphics card, sound card, etc., but for an entry-level system this assertion may not be universally true. This opens a market for ARM-based PCs.

I would not be surprised to see a 2-3 GHz Raspberry Pi with 8 cores and a powerful GPU in 3-5 years' time, which could make the PC completely redundant, just as mainframes were done in by PCs in the last century. Most applications are anyway being redesigned for multi-core rather than clock-speed scale-up. Currently, speed is the gap everywhere: the RAM is slower in SBCs, and most boot and operate from SD cards, which lag way behind HDDs and SSDs in performance (on Linux, if not the boot-up time, then the load time of heavy applications like browsers, LibreOffice, GIMP, etc. is just too much; but once they load and work in memory they are reasonably usable). Time may fix this sooner than we expect. And all this at $35 for the board and almost free for the software ;-)

It's also possible that mobiles and tablets will morph into the PC chassis with strong content creation and consumption capabilities. In either case it means turbulent times for the x86/x64 platform (and along with it Intel & Microsoft). The Intel x86 & Windows PC is getting pushed into a niche by ARM & Linux, just as the Wintel combine pushed out UNIX, and UNIX pushed out mainframes, at their dawn.

Prove me wrong Wintel !!!





Sunday, November 30, 2014

Three Things to do before you decide to get rid of or upgrade your existing Desktop PC or laptop

If you are planning to throw away, gift, or upgrade your desktop PC or laptop because it seems slow, better take that decision only after evaluating whether one of the following three changes:

(1) RAM
(2) HDD to SSD
(3) Move to Desktop Linux

can resolve your problem.

The slowdown in most computers is either due to the RAM requirements of newer OSes and newer versions of the applications themselves going up, or simply that applications and the OS have become so big that an existing HDD cannot read or write the data fast enough. Of course, for the SSD upgrade you need a PC/laptop that has SATA controllers built in.

For comparison's sake, most modern OSes work just great with 4 GB of RAM; 2 GB is just about enough and 1 GB is actually asking for trouble. More RAM means less paging. For applications, an additional 2-4 GB of RAM (making the total 6-8 GB) will likely ensure that both the apps and the OS work very smoothly. Applications can be loaded directly into RAM and run completely from there, and you can run many of them simultaneously. Similarly, an HDD can write at most around 80-100 MB/s sequentially, while a SATA III SSD can do 500 MB/s plus and a SATA II SSD 200 MB/s plus (that's like a 2-5x jump).
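If you are already on Linux, a quick (and admittedly rough) way to check whether the machine is starved of RAM is to watch paging activity while you work:

free -m     (how much RAM and swap is actually in use)
vmstat 1    (watch the si/so columns; sustained non-zero values mean the system is swapping)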

Most of us do not stress the CPU to 100% (or even 50% in most tasks), and it's likely that the CPU is underutilized because it cannot read from disk fast enough or the RAM is not enough. Upgrading the CPU/motherboard (which practically means getting a new computer) is not always the medicine your computer is looking for.
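Similarly, a rough way to see whether the disk rather than the CPU is the bottleneck is to watch disk utilization during the slow task (iostat -x comes from the sysstat package on most distros):

iostat -x 1    (a %util column pinned near 100% while the CPU sits mostly idle points at the disk)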

And finally, consider migrating to a stable desktop Linux distribution like Ubuntu 14.04 LTS. It is comparatively much, much lighter on computer resources. I have been using Ubuntu for about 2 months and am pretty satisfied with the results.

(1) Its GUI is stable and does not crash. My OpenSUSE/Fedora builds had this problem occasionally.
(2) Upgrades do not break things. I used OpenSUSE before and frequently encountered this problem.
(3) You can create & edit common MS Office documents (doc/docx, xls/xlsx and ppt/pptx) with the supplied LibreOffice.
(4) You get access to the latest Mozilla Firefox & Chrome browsers.
(5) While much commercial and free-to-use Windows software may not be available, many open-source equivalents are freely available for common tasks like reading/writing PDFs, zip/unzip, rar/unrar, image processing with GIMP (instead of Adobe Photoshop) and so on.
(6) Popular freeware like VLC, XBMC, etc. is readily available.
(7) It does not suffer from viruses and therefore does not need investment in antivirus software.

The overall idea is that a modern Linux desktop ecosystem such as the one above is fully able to handle all the common tasks usually performed on a PC or Mac, and do them well. So why spend money on expensive new hardware or commercial applications?


There are of course some limitations:

(1) Older laptop or PC motherboards may not support SATA. IDE SSDs are rare and very expensive, making an SSD upgrade impractical.
(2) A 120 GB SSD may cost you Rs. 4000/-, but for the same price you can buy a 1 TB HDD. So use an additional external/internal HDD for data and the SSD only for the OS & applications, to retain full function at lower cost. Buying a large-capacity SSD will inflate your upgrade cost to unreasonable levels.
(3) There is a limit to how much RAM your computer can support, and older RAM (DDR, DDR2) may not be easily available in the new retail or used market.

Chances are that if your computer is 5-6 years old, you will be able to carry out the above three changes. Beyond that it becomes a challenging weekend project with compromises, like not being able to run many applications simultaneously, running an older Linux version, or managing with less RAM or no SSD. I have an old 2002 Dell Inspiron 8200 laptop lying around, and I understand this pain.



Tuesday, April 23, 2013

Why we need a NAS (file server) at home

The modern home is digital and networked. More and more devices, or their supporting set-top boxes/peripherals, are getting connected to the Internet as well as to each other for sharing data such as music, pictures, videos, etc. What started with just the PC/laptop has spread to smartphones, tablets, TVs or TV set-top boxes, surveillance cameras, camcorders, digital cameras, etc. Maybe the future will involve white goods too (refrigerator, microwave, etc.).

The key point is that in this digital and connected world, devices have their own content (generated or consumed) in digital format, like:


  1. Video, which is growing to astronomical sizes: 1080p is easily in excess of 10 GB per movie. Our consumer digicams and camcorders generate this data at home, which we would like to back up, as does surveillance footage.
  2. Music, which used to be encoded lossily with the MP3 codec, now has a better option in FLAC, which does not compromise the fidelity of the source it is made from but takes up much more space than MP3.
  3. Data from gaming consoles, smartphones and other handheld devices.
  4. And of course our PCs and laptops have data which needs a backup (application installers, our generated data).
And we need storage to maintain this library. Tons (or rather terabytes) of it.

Whether you buy all this electronic content, download pirated variants, or just exchange with friends, there is a cost (in terms of pain) to re-downloading it in case we lose it. I have a $30/month broadband connection in India which can move about 4 GB per day from the cloud to my network, until the Fair Usage Policy (FUP) kicks in after 25 GB and throttles my broadband speed to just 128 Kbps. Besides, you may not be able to get the content again at all (like some old videos or photographs you recorded), or you may need to explain why you want to download it again. You need reliable, fail-safe storage.

So how do we handle this? Two ways, actually:

  1. The first is to tie storage to the access devices (smartphones, tablets, PCs, music streamers, HTPCs, etc.) and always keep a copy on centralized non-RAID storage. You have to manage downloading from the cloud, moving from the download device to the access device, and moving it to the backup device, and it may not always be possible to automate this. Also, the access device has storage constraints: what if it gets full and we can't expand its storage?
  2. The other option is to use *extensible* centralized storage (aka a NAS) which supports multiple access protocols (NFS, CIFS, AFP, DLNA, etc.) and concurrent clients, plus a high-speed wired+wireless router which lets access devices pick up this content on demand from the NAS. In essence, my own personal cloud.
I chose the second option. I like the idea of separating storage, compute and access devices simply because I can upgrade the parts as and when they reach obsolescence.

Let me list the requirements for this NAS:
(1) The storage should be cheap (it's mostly write-once-read-many data). HDDs fit the bill.
(2) The storage should be durable (we need RAID support).
(3) The storage should be able to grow with the need -- I need a big chassis which can support at least 4 (or more) HDDs, even though I may begin with only 2 or 4.
(4) It should support high transfer rates (read/write), preferably near the limits of Gigabit Ethernet, primarily because I will possibly access data concurrently (e.g., the TV playing videos while the PC copies some data onto the NAS).
(5) Whether it's big or beautiful is a nice-to-have, not a mandatory requirement.
(6) It should consume less power if possible, though being the most energy-efficient is not a mandatory requirement.
(7) Hot-swap is a nice-to-have feature, not mandatory.

In India, pre-built high-performance BYOD (bring-your-own-disk) NASes cost about $300 for a 2-bay, $600+ for a 4-bay, upwards of $2000 for an 8-bay, and upwards of $4000 for 12-16 bays. Since I am looking at a 2-bay immediately and a 4-bay within a year, I am staring at an investment of $600+ plus the struggle to get it, and I still have to pay at least $200 per TB of RAID storage. That's an immediate investment of $1000. Very high for a file server.

So I decided to make my own. Here's what I think I need:

(1) Software -- FreeNAS 9+ (BSD-based, but who cares as long as it works well and supports my hardware).
(2) Chassis -- a tower or NAS chassis with at least 6-8 bays (later I can chuck it and go for a tall tower case if I need more disks).
(3) Motherboard -- preferably no onboard sound card, no onboard graphics or just the simplest integrated graphics, and the maximum number of SATA ports supporting RAID, or a RAID add-on card with a good number of ports. The SATA/RAID port count is a very key requirement. Also, one PCI Express slot to add an expansion RAID card could be useful in the long term.
(4) RAM -- as much as FreeNAS needs. It's cheap these days, so 8, 16 or 32 GB isn't going to scare me off.
(5) HDDs -- low cost/GB and low speed (I do not think HDD speed is the bottleneck for data transfer; the GbE interface is), which also means low power, and safe to use in a RAID setup (I will use only software RAID, for ease of repair in case I lose the hardware and can't find a similar RAID controller card or on-board chipset).

With this in mind I go about the build. Stay tuned for the following posts on what build I did.





Saturday, September 15, 2012

Tuning Linux to play nice on SSDs

While assembling my PC, the first storage I bought was a 160 GB Caviar Green. Sounds silly. It is, except that the reasons were: first, Linux (as an OS with limited commercial apps) isn't as bloated as Windows (you can see me recommending a 30 GB Corsair Nova SSD for the Linux boot drive) and therefore does not require so much space; and secondly, HDD prices when I bought my computer parts were through the roof (I expected pricing to drop in 2~4 months based on whatever I read on the net). Unfortunately, prices didn't fall much (partly because of the depreciation of the rupee), and I was stuck with a slow Linux installation that I didn't ask for :-(

And hence I decided to move to an SSD and use part of my WD RE4 HDD. The immediate task on hand is how to tune Linux for an SSD + HDD install. We are going to use the same principles as we did for Windows 7 tuning, except apply them to Linux:
  1. Write as little as possible to the SSD
  2. Keep the core OS, application and configuration files on the SSD
  3. Keep dynamic OS data, dynamic application data and user storage on the HDD
Luckily, the Linux filesystem organization is pretty simple and feels more organized and uniform than Windows, where applications go about their business anywhere they want. To begin, we need to understand what the Linux root filesystem is composed of. I checked the Linux System Administration Guide as well as my root folder, and here's what I got (this is for an up-to-date OpenSUSE 12.1 system):

Filesystems suitable for SSD
In my case I need to move these onto the SSD. For people doing an install for the first time, these should be put on the SSD.
  • /bin - Looks like UNIX command-line shell commands & utilities. Read-only material.
  • /sbin - Linux-specific superuser commands & utilities are here. Read-only material.
  • /lib and /lib64 - 32/64-bit shared and static libraries. Always read, occasionally updated, but never modified in day-to-day use.
  • /mnt and /media - Non-removable and removable storage are mounted here. These are just mount points for the external filesystems on those devices and, in my opinion, should not cause any writes to the local filesystem.
  • /etc - Configuration files. Mostly read, occasionally modified. Being on the SSD will speed up startup.
  • /root - Root's home. It is written to as and when root logs in. I rarely use this (thanks to su and sudo). I do not want to put it on the HDD and risk root not being able to log in if I lose the HDD, so I decided to let this be on the SSD.
  • /usr - This is where all application/add-on programs are installed. This has to be the biggest folder on the SSD, and it is again mostly read-only data, with only updates generating writes.
  • /boot - Has the bootstrap loader, or simply the boot-loader (GRUB in my case), as well as the kernel images. Again this is mostly read-only data and rarely updated.
  • /proc - This is an illusory filesystem which isn't on disk but in memory. We "move" it to the SSD, therefore ;-)
  • /lost+found - This holds the output of fsck runs, e.g. after you lose power. Rarely read and rarely written. I think we will leave it on the SSD and also have one on the HDD.
  • /opt - This is used for additional applications and add-on packages not part of the distribution. Since it has just applications, this is mostly read-only, with writes driven by program updates.
  • /dev - Device nodes for the system's hardware. On current kernels this is populated in memory (devtmpfs/udev), so it should not generate disk writes.
  • /selinux - Security-Enhanced Linux. It's an empty folder on my system; most probably it is not installed or configured. I read this is like /proc, with a database of policies.
  • /srv - This contains site-specific server data for services such as the web server, FTP, etc. Again this is configuration-like data which is infrequently modified or updated, and therefore suitable for the SSD.
  • /sys - Files for PnP hardware and devices. Like /proc, it is a filesystem in memory, so I do not anticipate any disk writes.

Filesystems suitable for HDD
In my case, the task on hand is to move these to the bigger 1 TB WD RE4 HDD one by one.
  • /var - The system's run-time data. Preferably on the HDD, though I feel that if I lose the HDD I will also run into trouble booting the system. Logs, cache, etc. all go here. It can store app data; the difference between /var and /tmp is that /var is not cleaned across reboots and cleanup must be done manually.
  • /tmp - Temporary files stored by applications. This is cleaned up automatically on every reboot, and can be cleaned manually by the root user if it runs out of space. This is really dynamic, and we do not want it on the SSD generating additional writes.
  • /run - This holds run-time state data, similar to what lives under /var/run, used by both startup programs and applications. It was created by the Linux community (Fedora, OpenSUSE, Ubuntu and Debian support it) to take care of some data that was earlier being written to /dev. Good news for SSD users.
  • /home - User folders. I will generate writes here based on what I am doing. It's best to leave this on the HDD. My data is also here, and the only way I can guarantee reliability is to hardware-RAID this HDD.
  • swap (Linux swap) - I have 8 GB of RAM (had 16 but lost one bank, most likely due to ESD), so I mostly need little swap and preferably want it on the HDD. If I cannot do this, I will put it on the SSD, but make the swap no larger than 2 GB.
Counted: 21 folders in my ls -l output and 22 here (the extra one is the Linux swap filesystem, which does not show up in the "/" folder). Sorry if the above sounds like an abstract of the Linux SAG, but my intention was to show you what I want to do and why. Only 4~5 odd folders should be on the HDD.

The choice now was to either try to move the existing install from the 160 GB drive to the new SSD and partitions on the RE4 HDD, or just do a clean install. Linux is easy to set up and upgrade, and I have only free software, no licensed or pirated copies, so the second option looked simpler to me. My opportunity came with the OpenSUSE 12.2 release in early September 2012, and I took the plunge. I manually created 5 partitions (within an extended one) on the HDD for swap (32 GB), /var (20 GB), /tmp (10 GB), /run (10 GB) and /home (178 GB), and just one "/" (all of 27+ GB) on the SSD. The distro accepted my partitioning scheme, the install went smoothly, and now I have slightly faster booting and, more importantly, a very zippy install with very little lag in starting applications from KDE. And the text-console-based utilities are insanely fast.

Also, there are tons of articles on the web regarding filesystem optimizations, like adding noatime, nodiratime and discard to the /etc/fstab entries for SSD partitions, as well as tuning other things like the I/O scheduler. I did all of them too. Readers can search for "tuning linux for SSD" or something similar to find these to-dos.
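For reference, an SSD-tuned fstab line for the root partition looks roughly like the sketch below (the device name and filesystem type are placeholders for illustration; real entries are often written with UUIDs, and defaults vary per distro):

/dev/sda1   /   ext4   defaults,noatime,nodiratime,discard   1  1

The discard option enables online TRIM; an alternative many guides suggest is to leave it out and run fstrim from a cron job instead.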

Friday, July 13, 2012

One Disk or Many Disks

I am a sucker for speed, stability and reliability (maybe because I work professionally in the telecom domain). Then comes cost (which I am also conscious about, but not at the cost of the above three), and possibly adaptability. In reality, even assembling a PC involves these tradeoffs, provided you are aware, make the choices yourself and don't just go with the wind. My use case is a dual-boot Windows + Linux box (I'd love to do a Hackintosh, but I already have a MacBook and believe Apple's integration works much better). The area where these tradeoffs apply foremost is storage. Let's see how.

(1) Performance - I must use an SSD for the OS and applications, and HDDs for dynamic application and OS data, plus all other data. With an SSD I can get high performance, but not high storage capacity or heavy-write reliability at practical (even if expensive) prices. With HDDs I can get the latter two, but not best-in-class performance (even with RAID).

Since we want room for the apps plus a reasonable amount of free SSD space so that garbage collection (GC) is invoked infrequently, I did not think a 64 GB SSD would be enough. Next, I wanted the speed king for this. The OCZ Vertex 4 was one, but its 64 GB variant's I/O speed was lower.

For Linux, I had done an OpenSUSE 12.1 install on my old 160 GB Caviar Green, and it seemed to allow a maximum root partition of 20 GB, of which 33% was free after a full install of client & server packages and quite a few add-ons. So I guessed that since /home will have to be on the HDD, and most Linux apps are not as big as their Windows counterparts, 30 GB would be more than enough. I settled for a Corsair Nova; though it might be among the slowest SSDs around, it is still at least twice as fast as an HDD.

(2) Reliability -- I must achieve reasonable isolation between OSes and ensure that the impact of a failure is contained to the particular OS itself, so that rules out a single disk with multiple OSes. I definitely need multiple SSDs, and either multiple data disks or RAID for one shared data volume. One more thing: I consider SSDs more reliable than HDDs, as long as they are not frequently written to.

I wanted my disks to be big & fast (which ruled out the VelociRaptors; they weren't available in the local market anyway). So 7200 RPM, 64 MB cache, SATA 3 Gbps was on the menu. Caviar Blacks and the WD RE4 were the choices. However, the former is unsuitable for RAID, so the choice fell on the enterprise drives.

(3) Adaptability -- tuning of one OS must not impact the other in any way, and each should be tunable independently in terms of storage. So I can't do a multi-boot OS config on one disk. Definitely not for the OSes, but maybe for the data, as resizing and moving partitions would be simpler there.


Cost Comparison (in India)
(a) General approach -- one 2 TB disk = $250
(b) Reliable generalized approach -- 2 x 2 TB disks (2 x $300) = $600 in India
(c) Fast but not entirely reliable approach -- 1 x 128 GB SSD for Windows ($230) + 1 x 30 GB SSD for Linux ($60) + 1 x 1 TB data disk ($140) = $430
(d) Fast & reliable
         (i) 1 x 128 GB SSD for Windows ($230) + 1 x 30 GB SSD for Linux ($60) + 2 x 1 TB data disks in RAID mode ($280) = $570
         (ii) 1 x 128 GB SSD for Windows ($230) + 1 x 30 GB SSD for Linux ($60) + 4 x 500 GB data disks in RAID mode ($90 x 4 = $360) = $650


So if cost comes before everything else and is your only criterion, go for (a). If you want speed but not reliability, go for (c). If you want performance first, reliability second & adaptability third, go for one of the choices in (d). And of course, the more disks you use, the more you may run into constraints of PSU capacity and the number of SATA ports on your motherboard.

Personally, I started with (c) and plan to move to (d)(i) in due course to spread the cost. I can't put in too many disks right now because I have 8 SATA ports on my motherboard, which are used like this:

(1) One for the optical DVD drive
(2) One for the HAF-X chassis front eSATA port
(3) Two for the hot-swappable bays of the HAF-X

That takes away half of the ports, and my expansion needs include one Blu-ray writer (I am yet to see a combo Blu-ray/DVD/CD writer & reader). If I go for (d)(ii), I need to accommodate 2 SSDs and 4 HDDs, requiring 6 SATA ports, and I have just 4 left. I have a strong feeling that motherboard manufacturers should increase the number of SATA ports on high-end mobos to at least 12 and preferably 16 (it's a lot, and I am not stacking up Internet porn). Going forward, I see myself probably losing the front eSATA or one hot-swappable bay.

PC building is fun and challenging if you do it my insane way. I definitely say throw out your laptop and pick a tablet + PC: the tablet for laid-back content viewing & presentations, and the PC for all content creation needs.