Saturday, September 15, 2012

Tuning Linux to play nice on SSDs

While assembling my PC, the first storage device I bought was a 160 GB Caviar Green. Sounds silly. It is, except that I had two reasons. First, Linux (as an OS, with its limited commercial apps) isn't as bloated as Windows (you can see me recommending a 30 GB Corsair Nova SSD for the Linux boot) and therefore does not need so much space. Second, HDD prices when I bought my computer parts were through the roof (and I expected pricing to drop in 2~4 months based on whatever I read on the net). Unfortunately the prices didn't fall much (partly because of the depreciation of the rupee) and I was stuck with a slow Linux installation that I didn't ask for :-(

And hence I decided to move to an SSD and use part of my WD RE4 HDD. The immediate task at hand is how to tune Linux for an SSD + HDD install. We are going to use the same principles as we did for the Windows 7 tuning, except apply them to Linux:
  1. Write as little as possible to the SSD
  2. Move the core OS, application and configuration files to the SSD
  3. Keep the dynamic OS data, dynamic application data and user storage on the HDD
Luckily the Linux file-system organization is pretty simple and feels more organized and uniform than Windows, where applications go about their business anywhere they want. To begin we need to understand what the Linux root filesystem is composed of. I checked the Linux System Administration Guide as well as my root folder, and here's what we got (this is for an up-to-date openSUSE 12.1 system):

Filesystems suitable for SSD
In my case these are what I need to move onto the SSD. For people doing an install for the first time, these should be put on the SSD:
  • /bin - Looks like the UNIX command-line shell commands & utilities. Read-only material.
  • /sbin - Looks like the Linux-specific superuser commands & utilities are here. Read-only material.
  • /lib and /lib64 - 32/64-bit shared and static libraries. Constantly read, occasionally updated, but otherwise never modified.
  • /mnt and /media - Non-removable and removable storage are mounted here. These are just links to the external file-systems on those devices and in my opinion should not cause any writes to the local filesystem.
  • /etc - Configuration files. Mostly read, occasionally modified. Being on the SSD will speed up startup.
  • /root - Root's home. It is written to as & when root logs in. I rarely use this (thanks to su and sudo). I do not want to put it on the HDD and risk root not being able to log in if I lose the HDD, so I decided to let it stay on the SSD.
  • /usr - This is where all application/add-on programs are installed. This has to be the biggest folder on the SSD, and it is again mostly read-only data, with only updates generating writes.
  • /boot - Has the bootstrap loader or simply boot-loader (GRUB in my case) as well as kernel images. Again this is mostly read-only data and rarely updated.
  • /proc - This is a virtual filesystem which isn't on disk but in memory. We "move" it to the SSD therefore ;-))
  • /lost+found - This is where fsck puts recovered fragments when it runs after an unclean shutdown (a power cut, for example). It is rarely read and rarely written. I think we will leave it on the SSD, and one will also exist on the HDD filesystems.
  • /opt - This is used for additional applications and add-on packages not part of the distribution. Since it has just applications, this is mostly read only with updates driven by program updates.
  • /dev - Device nodes for the hardware. On modern systems this is populated in memory (devtmpfs), so it should not generate disk writes.
  • /selinux - Security-Enhanced Linux. It's an empty folder on my system; most probably it is not installed or configured. I read this is like /proc, with a database of policies.
  • /srv - This contains site-specific server data for services such as a web server, FTP, etc. Again this is configuration data which is infrequently modified or updated and therefore suitable for the SSD.
  • /sys - Files for PnP hardware and devices. Like /proc it is a file system in memory, and therefore I do not anticipate any disk writes.

Filesystems suitable for HDD
In my case the task at hand is to move these to the bigger 1 TB WD RE4 HDD one by one.
  • /var - The system's run-time data. Preferably on the HDD, though I feel that if I lose the HDD I will also run into trouble booting the system. Logs, cache, etc. all go here. It can store app data, and the difference between /var and /tmp is that /var is not cleaned across reboots and cleanup must be done manually.
  • /tmp - Temporary files stored by applications. This is cleaned up automatically on every reboot and can be cleaned manually by the root user if it runs out of space. This is really dynamic and we do not want it on the SSD generating additional writes.
  • /run - This holds run-time data similar to what used to live in /var/run, used by both startup programs and applications. It was created by the Linux community (Fedora, openSUSE, Ubuntu and Debian support it) to take care of data which was previously being written under /dev. Good for SSD users.
  • /home - User folders. I will generate writes here based on whatever I am doing, so it is best to leave this on the HDD. My data is also here, and the only way I can guarantee reliability is to hardware-RAID this HDD.
  • swap (Linux swap) - I have 8 GB of RAM (had 16 but lost one bank, most likely due to ESD), so I mostly need little swap and want it on the HDD preferably. If I cannot do that, I will put it on the SSD, but make the swap not exceed 2 GB.
That makes 21 folders in my ls -l output and 22 here (the extra one is the Linux swap file-system, which does not show up under "/"). Sorry if the above reads like an abstract of the Linux SAG, but my intention was to show you what I want to do & why I want to do it. Only 4~5 odd folders need to be on the HDD.

The choice now was to either try and move the existing install from the 160 GB drive to the new SSD plus partitions on the RE4 HDD, or just do a clean install. Linux is easy to set up and upgrade, I have only free software (no licensed or pirated stuff), and therefore the second option looked simpler to me. My opportunity came with the OpenSuse 12.2 release in early September 2012 and I took the plunge. I manually created 5 HDD partitions (within an extended one) for swap (32 GB), /var (20 GB), /tmp (10 GB), /run (10 GB) and /home (178 GB), and just one "/" (all of 27+ GB) on the SSD. The distro accepted my partitioning scheme, the install went smoothly, and now I have a slightly faster-booting and, more importantly, a very zippy install with very little lag in starting applications from KDE. And the text-console based utils are going insanely fast.
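For reference, the mount layout I ended up with looks roughly like this (the device names and the ext4 filesystem type are my assumptions of how such a scheme typically appears, not copied from the installer; the sizes are the ones above):

    /dev/sda1   /        ext4     ~27 GB   (Corsair Nova SSD)
    /dev/sdb5   swap     swap      32 GB   (WD RE4 HDD, logical partition)
    /dev/sdb6   /var     ext4      20 GB   (WD RE4 HDD)
    /dev/sdb7   /tmp     ext4      10 GB   (WD RE4 HDD)
    /dev/sdb8   /run     ext4      10 GB   (WD RE4 HDD)
    /dev/sdb9   /home    ext4     178 GB   (WD RE4 HDD)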

Also, there are tons of articles on the web regarding file-system optimizations, like adding noatime, nodiratime and discard to the /etc/fstab entries for SSD partitions, as well as tuning other things like the I/O scheduler. I did all of them too. Readers can search for "tuning linux for SSD" or something similar to get these ToDos.
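Purely as an illustration (the device name below is a placeholder and not my actual entry; openSUSE normally writes these lines with UUIDs), the fstab options and the scheduler switch look something like this:

    # /etc/fstab entry for the SSD root partition
    /dev/sda1   /   ext4   defaults,noatime,nodiratime,discard   0   1

    # switch the SSD to a simpler I/O scheduler, e.g. from a boot script
    echo noop > /sys/block/sda/queue/scheduler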

Friday, July 13, 2012

One Disk or Many Disks

I am a sucker for speed, stability and reliability (maybe because I work professionally in the telecom domain). Then comes cost (which I am also conscious about, but not at the cost of the above three). And possibly adaptability. In reality, even assembling a PC involves some of these tradeoffs, provided you are aware of them and make the choices yourself rather than going with the wind. My use-case is a dual-boot Windows + Linux box (I would love to do a hackintosh, but I already have a MacBook and believe Apple's integration works much better there). One area where this applies foremost is storage. Let's see how.

(1) Performance -- I must use an SSD for the OS and applications, and an HDD for dynamic application and OS data, plus all the other data. With an SSD I can get high performance, but not high storage capacity and heavy-write reliability at practical (even if expensive) prices. With an HDD I can get the latter two, but not best-in-class performance (even with RAID).

Since we want room for the apps plus a reasonable amount of free SSD space so that GC is invoked infrequently, I did not think a 64 GB SSD would be enough. Next, I wanted the speed king for this. The OCZ Vertex 4 was one, but the I/O speed of its 64 GB model was lower.

For Linux I had done an openSUSE 12.1 install on my old 160 GB Caviar Green, and it seemed to allow a maximum root partition of 20 GB, of which 33% was free after a full install of client & server packages and quite a few add-ons. So I guessed that since /home will have to be on the HDD, and most Linux apps are not as big as Windows ones, a 30 GB SSD would be more than enough. I settled for the Corsair Nova; though it might be among the slowest SSDs around, it is still at least twice as fast as an HDD.

(2) Reliability -- I must achieve reasonable isolation between the OSes and see that the impact of a failure is contained to that particular OS, so that rules out a single disk with multiple OSes. So I definitely need multiple SSDs, and either multiple disks for data or RAID for one shared data disk. One more thing: I consider SSDs to be more reliable than HDDs if they are not frequently written to.

I wanted my data disks to be big & fast (that ruled out the Velociraptors, and neither were they available in the local market). So 7200 RPM, 64 MB cache, SATA 3 Gbps was on the menu. The Caviar Blacks and the WD RE4 were the choices. However, the former is unsuitable for RAID, and so the choice fell on the enterprise drives.

(3) Adaptability -- Tuning of one OS must not impact the other in any way, and each should be tunable independently in terms of storage. So I can't do a multi-boot OS config on one disk. Definitely not for the OS, though maybe for the data, as resizing and moving partitions would be simpler there.


Cost Comparison (in India)
(a) General approach -- 1 x 2 TB disk = $250
(b) Reliable generalized approach -- 2 x 2 TB disks (2 x $300) = $600
(c) Fast but not entirely unreliable approach -- 1x128 GB SSD for Windows ($230) + 1x30 GB SSD for Linux ($60) + 1 TB data disk ($140) = $430
(d) Fast & Reliable
         (i) 1x128 GB SSD for Windows ($230) + 1x30 GB SSD for Linux ($60) + 2 X 1 TB data disk in RAID mode ($280) = $570
         (ii) 1x128 GB SSD for Windows ($230) + 1x30 GB SSD for Linux ($60) + 4 X 500 GB data disk in RAID mode ($90x4=$360) = $650


So if you put cost before everything else and it is your only criterion, go for (a). If you want speed but not reliability, go for (c). If you want performance first, reliability second & adaptability third, go for the choices in (d). And of course, the more disks you use, the more you may run into constraints on the PSU and the number of SATA ports on your motherboard.

Personally I started with (c) and plan to move to (d)(i) in due course of time to spread the cost. I can't put too many disks in right now because I have 8 SATA ports on my motherboard, which are used like this:

(1) One for Optical DVD Drive
(2) One for HAF-X chassis front eSATA
(3) Two for Hot-swappable bays of HAF-X

That takes away half of the ports, and my expansion needs include a Blu-ray writer (I am yet to see a combo Blu-ray/DVD/CD writer & reader). If I go for (d)(ii), I need to accommodate 2 SSDs and 4 HDDs, requiring 6 SATA ports, and I have just 4 left. I have a strong feeling that motherboard manufacturers should increase the number of SATA ports in high-end mobos to at least 12 and preferably 16 (it's a lot, and I am not stacking up Internet porn). Going forward I see myself probably losing the front eSATA or one hot-swappable bay.

The PC is fun and challenging if you do it my insane way. I definitely say throw out your laptop and pick a tab + PC, with the tab for laid-back content viewing & presentation and the PC for all content creation needs.


Monday, July 9, 2012

Customization of Windows 7 applications for SSD

In order to use SSDs effectively with Windows 7, we need to customize the applications as well, wherever possible. I use one basic principle of optimization, i.e. reduce the number of writes to the SSD. To do that we divide data into two categories:

(1) Static data -- Things like configuration files or data which is rarely modified and mostly read (accessed). I will leave this, or configure it, to sit on the SSD.
(2) Dynamic data -- Frequently modified data, e.g. a browser's cache and downloads folder. I will try to move such data away from the SSD onto the HDD.

Let me show you how I did it for some  applications I have:

Browsers
I have Chrome, Firefox and IE but don't ask me why. Here's what you can do for each (there is also a console sketch after this list):
(i) Chrome -- Follow the instructions in http://www.ghacks.net/2010/10/19/how-to-change-google-chromes-cache-location-and-size/ and restart the browser. Make sure that the absolute path of the Chrome cache does not have folder names with spaces (unlike IE and Firefox, Chrome is not happy about spaces and will not work).
(ii) Firefox -- Follow the instructions in http://support.mozilla.org/en-US/questions/768867, then click the Options menu item and, in the General tab, point the Downloads folder to a location on the HDD.
(iii) IE -- Navigate Settings --> Internet Options --> Browsing History --> Settings and change the location of temporary internet files to an HDD folder. You can also set the cache size here based on IE usage (mine is 100 MB). In addition, one needs to move the TMP and TEMP locations from the Windows folder on the SSD to the HDD (Start --> Right Click Computer --> Properties --> Advanced System Settings --> Environment Variables).
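For the command-line inclined, here is a rough sketch of the Chrome and TMP/TEMP relocations (the D:\ paths are only examples, and the Chrome switch is the same one the ghacks article relies on):

    REM point Chrome's cache at the HDD by editing the shortcut target (no spaces in the path)
    chrome.exe --disk-cache-dir="D:\ChromeCache" --disk-cache-size=104857600

    REM move the user and system temp folders to the HDD (run from an elevated prompt)
    mkdir D:\Temp
    setx TMP "D:\Temp"
    setx TEMP "D:\Temp"
    setx TMP "D:\Temp" /M
    setx TEMP "D:\Temp" /M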

Safari lovers be warned: most users say there is no way to move its cache directory. Perhaps there is little need for Safari on Windows with so many other good alternatives available.

Email Clients
Since email is dynamic data, we should avoid writing it to the SSD. So for Outlook/Outlook Express, the .pst files should be located on a path on the HDD, while for other email clients you can similarly move the local mail database to the HDD.

Microsoft Office 2010
Not much to do except go to File --> Options --> Save in each installed Office application and change the auto-recover, documents, server cache, templates and any other such writable data locations to the HDD.

Source Insight
Go to Options --> Preferences --> Folders tab and change the location of the Main User Data folder to the HDD; every other path is adjusted relative to it. Nice.

Avast Antivirus
This was the one that gave me the most trouble. The Avast installation has a "defs" folder which, going by the file creation/update timestamps I see, stores the antivirus definitions (updated almost daily). I wanted to move this to the HDD, but whatever I did, even an administrator account/rights was not allowed to move it. I tried many things, including fiddling with ACLs, and even managed to get my account converted to a guest, but to no avail. Finally I hit F8 at startup, booted into safe mode, and could use mklink to move it to the HDD. After that, boot normally and all definitions are now being downloaded to the HDD through the symlinked folder on the SSD.
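Roughly, the safe-mode steps looked like the sketch below (the D:\ target is just an example, and the Avast install path shown is the usual default, so adjust it to whatever your system shows):

    REM run from an elevated command prompt in safe mode
    robocopy "C:\Program Files\AVAST Software\Avast\defs" "D:\Avast\defs" /E
    rmdir /S /Q "C:\Program Files\AVAST Software\Avast\defs"
    mklink /D "C:\Program Files\AVAST Software\Avast\defs" "D:\Avast\defs"

mklink /D creates a directory symbolic link, so Avast keeps writing to the old path while the data actually lands on the HDD.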

It's all manual work, and we could do with a tool which does this for all applications. And though many such tunings might seem not really worth it, every drop adds to the ocean. If we are aware of this, we can do it right after we install any application.

Similarly, we can set the directories for every application as applicable: WinZip, WinRAR, Visual Studio, etc. You can check the write rate using the Resource Monitor of the Windows 7 Task Manager. There are only two tools:

(a) Change the settings from within the program, or edit the registry for the program if that offers some options
(b) If there is no configuration item, change the folder to a symbolic link pointing to a folder on the HDD (as done for Avast above)


The key point remains identifying where any application writes its static & dynamic data. I think readers will get the idea by now. 

Long Live SSDs !!!




Sunday, July 8, 2012

SSDs and what they bring to the table

These days if you go to a reputed computer hardware dealer on S.P. Road, Bangalore and ask either for an enterprise drive (64 MB cache, SATA III, 7200 RPM) or a 10000 RPM Velociraptor, they will almost frown at your request and immediately ask why you don't go for an SSD instead.

Just as there were confusing choices in the HDD area (I like 3.5" drives from Western Digital, as they haven't died on me, some even after 6 years), there are in SSDs. WD itself has Caviar Greens, Blues and Blacks, apart from the Velociraptor and the enterprise line. In SSDs there are tons of choices from OCZ, Corsair, Crucial, Intel, Samsung, etc. To begin with, let me make some recommendations on WD HDDs:

(1) Caviar Green -- Very good drive for a NAS, where your network connection and processor will be the bottleneck and not the transfer speeds of the drive. They run cooler and make less noise.
(2) Caviar Blue -- Your regular computer user.
(3) Caviar Black -- The power user who needs a fast drive for reads, not so much for writes, and does not care too much about reliability.
(4) RE4 enterprise -- RAID, most reliable, and for servers or users who do a lot of disk writes (uploading/downloading torrents, for example). I recommend it as a workstation drive too, though WD does not list it that way.
(5) Velociraptor -- At the low-capacity end (250 GB) I do not recommend it, as it is pricey and does not offer enough storage; you should seriously look at an SSD as your boot + applications drive and mate it with one of the above 4 for data. The SSD will be much, much faster as a boot drive.

Coming back to the core topic, let's look at the common criteria that will influence a buyer's decision:

(1) Cost --> One metric is cost per GB. Even with falling SSD prices and inflated HDD prices due to the flooding of Thailand's HDD manufacturing units, SSDs do not fare well on this metric compared to HDDs. Except, of course, against the 250 GB 10000 RPM Raptor drives. If someone has the idea of using a Raptor as a boot drive, drop that idea right away, look at an SSD for that and supplement it with an additional HDD for data. In simple words, SSDs make no sense for bulk storage.
(2) Performance --> SSDs are miles ahead of HDDs (even the WD Velociraptor) in read/write performance. They have bigger caches (I have 1 GB of cache in my OCZ Vertex 4 compared to 64 MB in the WD RE4). My Windows 7 Ultimate installation, with tons of apps and un-optimized services at start-up, boots in 15-20 seconds flat after the BIOS loads the MBR (I would give it a minute on an HDD). Apps start in a zap (Office, Photoshop, Corel, etc.).
(3) Noise --> SSDs are flash based and have no mechanical parts. Therefore they are silent, while even the best HDDs are fairly audible, the Blacks, enterprise drives and Raptors being much louder.
(4) Durability --> Not so important a concern for desktops, but the crown goes to SSDs. We don't throw desktops around, do we?
(5) Reliability --> This is where things get tricky. SSDs have a technology limitation: the number of writes to any sector (to use HDD terminology) is limited. In simple words, the same area on an SSD can be written only a finite number of times (it is practically unlimited in the case of HDDs). Manufacturers are very cagey about this and DO NOT mention this *finite* number in their product specifications, talking instead about performance only. So in general SSDs can be less reliable than HDDs and are not suitable for applications and features which generate a very high number of writes over their usage lifetime (see the rough arithmetic after this list). And that's why we do not recommend SSDs to be used for data. I will cover this point in detail in a subsequent post: how to manage this to extend SSD reliability while still getting the performance and reduced-noise benefits.
(6) Power Consumption --> SSDs win here too by a wide margin; however, their cost per GB limits their usage in media players and NAS boxes. I would have recommended them for car audio, but pen-drives are cheaper and have a smaller footprint for that application.
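To put a rough, purely illustrative number on that write limit: assume a 128 GB drive whose flash cells are good for about 3,000 program/erase cycles (an assumption on my part; as said above, manufacturers don't publish the figure) and ignore write amplification:

    128 GB x 3,000 cycles  ≈ 384 TB of total writes
    384 TB at 20 GB/day    ≈ 19,200 days ≈ 52 years
    384 TB at 500 GB/day   ≈ 768 days ≈ 2 years

A light desktop workload never gets near the limit, but scratch space, databases or heavy torrenting eats into it quickly, which is exactly why the dynamic data goes to the HDD.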

So for now, the best combination is 
(a) SSD for OS + Applications --> The OS and application have to be tuned to work best on SSDs
(b) HDD for data --> Your multimedia files, Development projects, Email databases, browser caches, torrent data and temporary files, etc ...

I will shortly post in detail on how to do this for Windows 7, and for Linux later.

Saturday, July 7, 2012

Cable Management for my PC

Along similar lines to my previous post on air-cooling is the subject of cable management. I work in the telecommunications area and we build networking and communications software. To do that we use networked computers and boards in our labs. And we do our cable management by grouping the cables, tying them up and then running them neatly behind the equipment, avoiding man-made catastrophes (I can remember the anguish of running 24- or 48-hour long-duration performance tests to check software stability, only to find out that they failed in the closing stages because a network cable, either on the equipment under test or in the network, got pulled out when someone accidentally tripped over or walked on some cables). The benefit of such an arrangement is that it is easy to debug when something goes wrong, or to carry out upgrades when they are needed, without getting lost or pulling 10 other things just to fix one. The same holds good for the PC as well.

However cable management for PC has another angle. If you do not have a mish-mash or noodle soup of cables in your chassis next to and over the motherboard, you:

(1)    Allow air to freely circulate and cool the hardware
(2)    Prevent micro dust from  collecting on the cables and spoiling their look

Most people, however, do not pay attention to this as it has no perceivable value in the short term. It's the same attitude summarized by Martin Fowler in his keynote address at Agile Conference 2011: "the software works for me as a user, so why do I care how the inside (design) looks !!! What matters is that it is cheap, and that's enough". Unfortunately, like software design, cable management affects the stamina of the hardware (how long it lasts).

Here is how the inside of my PC looks. Quite a few extension cards (graphics, sound, WiFi, case-lighting slot plates, etc.) as well as multiple fans, peripherals, case-mod lights, etc. (not exactly a simple setup, but not the most complex one either):




Not the best cable management that you may come across (and I have a story to hide at the back of the motherboard; more on this later), but decent enough to begin with. My chassis is a HAF-X and it provides very decent cable management options such as:

(a) Rubber grommets to allow cables to go out from the PSU side and come back in at the right place, leaving the least obstruction.
(b) A nice enclosure next to the PSU to hide the cables coming out of the PSU.
(c) A slightly elevated case panel at the back of the motherboard to accommodate the cable bundle.
(d) Enough placeholders for tying the cables and running them in an organized manner at the back with zip ties or even twist ties.




Besides, I have a Corsair AX850 *fully* modular PSU, which allows me to connect only the cables that are really needed, eliminating the un-needed cable bundle. I recommend others use modular units, or semi-modular ones with fixed motherboard power connectors (you always need those anyway). This Corsair PSU has very neat cables with a surrounding sleeving.

Assemblers need to pay attention to the cable management options on a chassis when they buy it, in case they are not buying the HAF-X ;-)). The only gripes I have about my installation are:

(a) Because of the case height, the optional power cable at the top only just reaches the header on the motherboard. Not good, as it will put some lateral force on the header. Don't know whom to blame: CM for the height of the recess or Corsair for the length of the cable.
(b) The placement of the PCIe power connectors on the graphics card (Asus GTX 560 Ti) is on the top longitudinal side, which requires bringing the cables out through the VGA fan bracket. It's an ugly arrangement. I would prefer such connectors on the shorter side facing the front of the case (see how it's done on my Asus Xonar Essence ST soundcard, with its 4-pin Molex power connector on the shorter side), making the cable length required inside the chassis very small and unobtrusive. This would have become even uglier if I had more than 1 graphics card in SLI mode. Why, Asus, why?
(c) And I also wonder sometimes: if cables are supposed to be run mostly on the side behind the motherboard, why don't manufacturers put the DVD and Blu-ray drives' power and data connectors on the side? It would be much cleaner (look at the way the HDDs and SSDs are mounted in the HAF-X). At least they should give the option in some models ...



Occasionally when I am able to steal some time on the weekend from my family, I do open the PC up and  tidy up the cable management even more. I still have the back of the motherboard side to address …

Thursday, July 5, 2012

Installing the right type and number of chassis fans

One consideration while buying a case is what type of fans come pre-installed and how many; and, in case the chassis has no fans pre-installed by default, or we want to replace the default ones or provision for expansion, what fans one should put in and what their orientation should be.

I own a HAF-X (HAF stands for High Air Flow or something like that), which is probably one of the best air-cooled cases that one can get on the market. It is big in itself, has provision for 5 fans, air vents, dust filters and even a VGA bracket with an optional fan (which I have added). Sounds like overkill. Maybe.

IMO, the minimum needed in a case is

(1) One fan which pulls cool air into the case
(2) One fan which exhausts hot air away from the case (the air comes out hot because the electronic components conduct heat to the flowing air, which regulates their temperature)

This is simple theory, and IMO most users need just these two fans. Nothing more, nothing less. And I say this even though I own the HAF-X (I am sure the others are not so effective).



The fans must exhibit some qualitative characteristics.

(a) They must be sized to fit the provision in the case
(b) They must be able to move lots of air
(c) They must do (b) while not making so much noise that it disturbs users. It's like a motorbike/car silencer, the absence of which is either hated or loved.
(d) They must not overload the current drain capacity of the motherboard

Unfortunately, what most vendors and retailers propagandize *prominently* in their feature lists is size (which is right), LED color, type of bearings, lifetime, blade structure, fan speed and wattage, while they bury airflow and pressure (CFM, H2O pressure level), noise level (dB) and current drain in the specification sheet.

The voltage on the motherboard fan rail is fixed at 12 V, and each fan connector on the motherboard has a current drain capacity which must not be exceeded. Before connecting any chassis (or CPU) fan to the motherboard, take care that the current drain does not exceed this limit, otherwise you can burn the motherboard.
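As a rough, illustrative calculation (the 1 A header rating is an assumption about a typical board; check your motherboard manual, and the fan's current draw is printed on its label):

    Fan draw:       0.25 A at 12 V  ≈ 3 W per fan
    Header rating:  1 A at 12 V     = 12 W
    So roughly 4 such fans per header at most, fewer if you leave headroom for the start-up surge.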

If the airflow capacity (expressed in cubic feet per minute, or CFM) is low, the cooling will be poor, irrespective of what RPM the fan spins at. And by the way, bigger fans can spin slower & make less noise while still moving more air, which is a strong case for their installation. Big is beautiful.

If the chassis fan (which has a 3-pin motherboard connector or a 4-pin Molex connector for the PSU/fan-controller, and spins at constant speed) makes a lot of noise, it will create some disturbance for users (the level varies with tolerance). If the total noise of all fans is above 50 dB it is quite audible to me, and anything above 70 dB bothers me, for instance. IMO fan controllers, which control speed by reducing the voltage, solve the noise issue but MAY impact the cooling ability of the case, which is the primary purpose of the chassis fans in the first place.

Usually high CFM/RPM and low noise/small size are opposing factors, and in my observation a tradeoff is made in each fan. You may not be able to get the best of everything (highest CFM + lowest noise + smallest size + lowest RPM) in one fan.

And finally, if you want to add a dash of color and lighting to your case, you can add LED fans instead of plain-jane ones. But only after you have got the other basics right.

Wednesday, July 4, 2012

The need and pitfalls of RAM Cooler installations

The concept of air cooling can be extended from the CPU to other components on the motherboard, like the RAM, PCI/PCI Express cards, etc. You need a heatsink and a fan to dissipate heat. You can see this on a graphics card as well.

The RAM modules are an important target. From my C programming background, I understand that the best way to speed up any routine or algorithm is to try and make it CPU bound (eliminate frequent memory lookups) and also achieve some locality in space and time for both data and instructions (in other words, make full use of the CPU's cache memory). This is a low-level programming craft, and most applications in my domain are written in high-level languages like C/C++, where developers are either ignorant of this or do it only for some key routines and not the whole critical-path code (forget the whole code). Worse, many believe that design optimizations give better results and that there is therefore no need to do low-level code tuning.

RESULT: a lot of memory accesses in any code (application). With multi-threaded applications and the gap between CPU speed and DRAM speed, this means very frequent DRAM accesses, translating into heat buildup in the DIMM modules.

My G.Skill RipjawsX DIMMs have heatsink fins. So I extended the CPU-cooler concept by looking for a RAM cooler fan, and I got one from G.Skill, the Turbulence II, with lovely blue LED fans. Corsair and Patriot also make such coolers. It is easy to mount and dismount, and takes power from a Molex cable which can be connected to the PSU. However, there are two things I want to warn others about:




1. One side of the Turbulence II touches the fan on the CPU air cooler, since the DIMM slots are next to the CPU socket on my motherboard (Asus P8Z68-V PRO/GEN3). I see most mobos have this arrangement. So it will block some airflow into the cooler. I had to shift the DIMM cooler towards one side to make sure it fit. That's where the second problem started.
2. The base of the RAM cooler's holders sits on the white DIMM-slot levers (which we pull down to eject DIMMs). When I shifted the RAM cooler, it covered the USB 3.0 header on my motherboard for the front USB 3.0 ports of the HAF-X. Since I do not have a USB 3.0 device as of today, I have disconnected the front USB 3.0 header for now. It is the only open issue, and I cannot find a way out except to take a plier and alter the RAM cooler's holder.

So my general opinion is that cooperation between motherboard manufacturers, air-cooler makers and DIMM makers needs to increase; they need to define some dimension, spacing and layout guidelines which will make CPU coolers, DIMMs and RAM coolers more compatible with each other. If I need a front USB 3.0 port any time in the future, I have no option but to remove this G.Skill RAM cooler and install one from Corsair (which I suspect may fit, based on its pictures), or just use my trusty plier.

As a general rule, I now feel that whatever we mount on the mobo (except normal-sized PCI/PCIe cards) must be test-fitted outside the case at the time of purchase to check compatibility, or be bought with a return policy.




CPU Cooler Installation Blues

The CPU cooler, a Coolermaster Hyper 212 EVO, which I use in my PC, posed in itself a fair number of challenges, one of which was down to some bad packaging and affected only me, while the other challenges can be faced by anyone. Let me list them under two broad categories:

(1) Installation of the Cooler (aluminum radiator part)
(2) Installation of fans

Issue 1
Before deciding to buy a cooler, I noticed that my HAF-X case has a backplate cutout which, as per the manuals and videos from Coolermaster, is used for installing the heatsink quickly even with the motherboard already mounted in the case. The cabinet is so good in so many areas that one hardly expects any issue. The Hyper 212 EVO, like most other heatsinks, requires a backplate to be installed. Unfortunately, the HAF-X's backplate cutout is small and partially covers the backplate screw holes, making it impossible to mount the cooler with the motherboard in the cabinet. What a let-down !!! This is a design defect in the HAF-X, but not a very high-severity one, as a general user hardly ever changes coolers.



Issue 2
I had to take out the motherboard (since I was doing it for the first time, it was a nuisance, a risk and trouble) and then mount the heatsink on top of it. And mounted it. Wait. Now I couldn't get the retention plate fixed, as the screws would not fit into the backplate holder. After going over the manual with a magnifying lens ;-)) and watching some installation videos, I figured out that something was wrong with my retention plate. I had to remove the motherboard again, remove the backplate, repackage the whole cooler and, together with the motherboard, take it to the vendor (Ankit Infotech) from whom I bought my components. The guys at the vendor's shop were a little clueless and ran around to call some technician, who was equally dumbfounded. The owner then decided to replace my unit and took out another Hyper 212 EVO piece. We opened it to check and whoa !!! This time the retention plate fit, because the non-removable screws were proper. We swapped the retention plates.

I was finally able to install this big heatsink with the motherboard mounted, using the dot method of thermal-paste application.

The next problem domain was the fan installation. Some questions/issues which I had:

(1) How many fans to install? One (as supplied) or two (one on each side)?
(2) Should I use 4-pin or 3-pin fans, in case I want an LED fan and not the plain-jane fan that comes with this heatsink?
(3) What should be the mounting orientation of the fans, i.e. push air into the heatsink or pull air out of it?
(4) Will the fan facing the DIMM slots obstruct them or get obstructed by the installed DIMMs?


And here is what I concluded from practical experimentation and the wisdom of others on the Internet:

(1) Two is better than 1. The heatsink picks up and spreads heat from CPU and we need to dissipate it quickly so that temperature can be controlled.
(2) I have an Asus P8Z68-V PRO/GEN3 motherboard. It has two 4-pin PWM fan connectors for the CPU heatsink, while all the other chassis-fan connectors are 3-pin. 3-pin fans cannot be speed-controlled by the motherboard and spin at their max speed. The idea is that the case airflow is fixed, so all chassis fans can be 3-pin, while the CPU-cooler fans can be controlled to match the load on the CPU (which lowers noise). I decided to go for 2 new matching XtraFlo LED fans (red) and not use the one supplied with the cooler. It cost me Rs. 1000/- extra.
(3) In the HAF-X there is a huge 230 mm fan at the front (lower side), grilled vents in the drive bays and a 200 mm fan on one side for cool-air intake. The fan at the back and the two fans on the top are for exhausting warm air. To match this orientation, the heatsink fan facing the intake fan was made a push fan (i.e. it pushes air into the heatsink) while the one on the other side, facing the exhaust, was made a pull fan (it pulls warm air out). This way there are no opposing air currents created by the fans.
(4) Yes, they do. I have 4 x 4 GB G.Skill RipjawsX DIMMs, and the fan facing the DIMMs does touch (but just marginally) the DIMM closest to it, which means the fan has to sit protruding half a mm from the heatsink. I was lucky that the fan only just touched the DIMM, on three counts. First, the thickness of the heatsink was just right (many top-end heatsinks are wider and would surely block one of the DIMM slots on my motherboard). Second, the spacing between the CPU socket and the DIMMs was just about right. And third, the height of the heatsink fins on my DIMM modules was also just right. I have heard that the fins on the Corsairs are taller, which would have made mounting this fan a big problem. This is Issue No. 3 and a potential deal-breaker.

So I think we can now judge what needs to be taken care of while buying & installing an after-market CPU cooler of the air-cooling type. Let me summarize it with some key guidelines:

(a) Buy the cooler, DIMMs and motherboard together and install them at the vendor's place; otherwise, have a return agreement with a suitable swap option. I have not seen many internet reviews pointing to the problems/questions I faced, especially with the installation, and I expect many users to encounter at least one of them.
(b) Buy fans with 3-pin connectors for the chassis and 4-pin (PWM) fans for the CPU cooler. If you do not like the sound of chassis fans whirring at full speed, go for a 5.25" fan controller (NZXT, Lian Li, etc. make them) and connect the chassis fans to it instead of the mobo. Just leave the CPU fan connected to the mobo and let it do its own power management. I prefer that you mount two fans.
(c) Mount the cooler on the CPU and motherboard before fixing the motherboard in the chassis. Don't depend on the cutouts. You will save time and trouble.
(d) If you buy RAM with tall heatsinks, either leave the last slot empty or make sure you buy a tall CPU cooler and a case to accommodate that ;-)). The HAF-X is a tall and wide one, but others are not, and you may just run into a situation where the case side panel cannot be closed because of the heatsink or fan.

There are so many failure points in this, but hell, it works well after all this hungama !!!



Tuesday, July 3, 2012

The importance of a CPU Cooler

Many folks who buy assembled PCs, or for that matter branded PCs, tend to overlook one key factor in the craft of PC building, which is taking care of heat dissipation. I read about this after going through some Newegg videos and could immediately relate to the ideas, as I have worked for a telecommunication equipment manufacturer (though on the software side), with CompactPCI and ATCA chassis packed tightly with computing boards and loud whirring fans. And we know how big a killer heat is. Of computer equipment, that is.

Most friends whom I approached for advice on assembling a new PC had the stock coolers supplied with their Intel/AMD CPUs. The shopkeeper who sold me the computer parts (and assembled them) in Bangalore's computer street (S.P. Road) probably did not think that a separate add-on cooler was a required or mandatory part, and made no recommendation. A stock cooler is the norm.

But I had other ideas based on my experience. That's why I bought one of the best air-cooled chassis on the market, the Coolermaster HAF-X, in the first place. A big guy with a lot of big & small fans to move air. And I came back after the initial build for a CPU air cooler and bought the Coolermaster Hyper 212 EVO (an entry-level budget air cooler by my reading & assessment) after failing to get the V6 GT locally or online in India. Did a DIY installation with the manual and web videos to help, and mounted two 120 mm PWM LED fans (more about this later):



I will sell you some numbers to highlight the value of this device to my computer.

Stock cooler: Idle Temps of 42-44 deg. centigrade, Full load Temps of 100 deg. (with Prime 95 torture test with 99% CPU usage on all 4 cores of my Core i7-2600K CPU)
CM Hyper 212 Evo: Idle Temps of 34-36 deg. centigrade, Full load Temps of 60-63 deg (sustained at this level even with a 8 hour run)

A good 8 degrees or so at idle and around 40 degrees under load.

The ambient temperature in the room was around 30 degrees centigrade. I sensed some throttling by the CPU in the former case at full load, and that's how it was holding on to a temperature of 100 degrees. And all this with a budget aftermarket air cooler. The high-end CPU coolers from Noctua (like the NH-D14), when combined with the famed Arctic Silver 5 thermal paste (I used the Coolermaster paste that came with the cooler), are reported to run 15-20 degrees cooler at full load, and possibly just a few degrees cooler than what the Hyper 212 gave me at idle.

If you think you are going to do even a moderate 50% loading of the CPU, you can guess how much a CPU cooler can help. That said, if you are going to use the computer just for browsing, downloading, watching streaming videos, blogging and uploading some content, you will do fine with the stock cooler that comes with the CPU. Just don't buy an i7 then; settle for a dual-core i3 with lower clock speeds.

In my judgement, therefore, an air cooler is a must for a workstation (that's my use case) or a server (where you can expect continuously higher loads). And if you are a gamer (who is potentially going to overclock as well), you should open your purse and go for a high-end cooler (the $100+ range with two fans). For a thinner wallet, I think the CM Hyper 212 (just a shade above Rs. 2000/- in India) is a VFM addition (read: insurance) which will help prolong the usable life of the CPU.

More on cooler installation in the subsequent posts