It is easy to get a powerful server if you are ready to spend a lot of money: leading vendors like Dell and HP, and many less known hardware companies, will be happy to sell you a powerful machine (usually with delivery in 4-6 weeks).
However, money is always a problem, especially for small businesses and start-ups, so there is a great demand for «good-enough» cheap solutions. This article describes a practical approach to building a «good-enough» database server with sufficient performance for a small business. It also contains calculations and links to the specific products we recommend.
Our company (IBSurgeon) provides database consulting and care services for Firebird databases, and we see many types of servers (or, better said, computers used as servers). Large customers use high-end configurations with SAN on top of SSDs, 350+Gb RAM and dual top Xeons on board, but small and medium business customers use less powerful machines, since they don't need much computing power and don't have a lot of money.
We have noticed that many servers are actually ordinary workstations, and, most interestingly, they work pretty stably for years and years. There are some really exotic setups, like Windows XP on a computer with 2Gb RAM (built in 2002!), but most are just «good-enough» hardware solutions.
Of course, this is a matter of statistics, and nobody would recommend using a workstation as a server, but the question is: where is the balance between stability and expense? Is a cheap self-built server a good approach in terms of value for money? What can we get for USD $1000? What are the risks related to inexpensive hardware?
Who can be interested in a database server below USD $1000?
Let's consider the following situation: an office with 20-25 workers, people work 7x5, and the database size is around 50Gb. It is a typical situation for a small business which uses an ERP, a CRM, or a specific application (for example, a dental office automation system) based on Firebird.
Let's imagine that the company is not flooded with money and wants to economize as much as possible, without losing reliability, to achieve good-enough performance. The responsible person in the company understands that all hardware fails sooner or later, and wants to decrease the related risks.
What kind of server do they need for a Firebird database of that size and load? According to the Firebird Hardware Guide, the most critical resource for the database is disk IO, then RAM, and only then CPU (interestingly, when SMB customers buy servers from the big vendors, they often choose servers with powerful CPUs and slow disks, probably due to marketing materials that overemphasize the importance of the CPU).
Fast IO means that we need solid state drives (SSDs), since the SSD is the king of random reads and writes. Of course, an SSD is not a panacea for databases, but it provides really high performance, and today it is the right choice for databases.
SSDs come in 2 flavours – enterprise-grade and consumer-grade disks. The key difference between them is the endurance, roughly calculated as the number of IO operations they can perform.
For example, the Samsung 850 Pro 256 Gb MZ-7KE256BW is a consumer-grade SSD (based on MLC technology); it has a TBW (Total Bytes Written) rating of 150 terabytes and DWPD (Drive Writes Per Day) = 0.16. More simply, the endurance can be explained as 40Gb of writes every day for 10 years.
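The relation between those two numbers is simple arithmetic, which can be checked with a few lines of Python (the capacity and DWPD figures come from the article; the formula is the standard TBW/DWPD conversion):

```python
# Back-of-the-envelope endurance check for a consumer SSD.
# TBW = capacity * DWPD * days of warranty.

CAPACITY_GB = 256          # drive capacity, from the article
DWPD = 0.16                # drive writes per day, from the spec
WARRANTY_YEARS = 10

daily_writes_gb = CAPACITY_GB * DWPD                      # ~41 Gb of writes per day
tbw_tb = daily_writes_gb * 365 * WARRANTY_YEARS / 1000    # total writes, in terabytes

print(f"Daily write budget: {daily_writes_gb:.0f} Gb/day")
print(f"Endurance over {WARRANTY_YEARS} years: {tbw_tb:.0f} TBW")
```

This reproduces the drive's rated figures: about 41 Gb of writes per day, or roughly 150 terabytes written over the 10-year warranty period.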
Is that much or not? We have a test server with a consumer-grade SSD where we run load tests simulating the intensive work of 100 connections on a 60 Gb database. Every day 3 such tests, 3 hours each, are started – so they simulate a 9-hour work day. The tests have been running for 2 years already, and the remaining resource (according to the firmware) is 54%.
Of course, ~50% remaining resource means that the drive should be replaced within the next year or two… Is that so bad? Every few years SSD speeds increase and prices drop: the SSD in the example has a 6Gb/s interface and costs $137; in 2-3 years there will be 12Gb/s drives for the same price.
Amazon's price from $124 http://amzn.to/2woodsL
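A quick sanity check on those numbers: a linear extrapolation of the firmware wear counter (a simplification – real wear is not perfectly linear, so replacing the drive earlier is the conservative choice) estimates how long the drive has left:

```python
# Linear extrapolation of SSD wear from the firmware counter.
# Numbers from the article: after 2 years of load tests,
# 54% of the resource remains, i.e. 46% was consumed.

years_in_service = 2.0
resource_remaining = 0.54

wear_per_year = (1.0 - resource_remaining) / years_in_service   # 23% per year
years_left = resource_remaining / wear_per_year                 # ~2.3 years

print(f"Wear rate: {wear_per_year:.0%}/year, estimated {years_left:.1f} years left")
```

Even under this heavy simulated load, the drive has more than two years of estimated life remaining, which is why scheduled replacement of cheap SSDs is a workable strategy.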
Enterprise-grade SSDs are built on a different technology (SLC) and can perform approximately 30x more write operations than consumer-grade SSDs of the same size. They also cost approximately 3x more than consumer-grade SSDs. The question is: do we really need 30x the endurance for a small business server at 3x the price?
If we are looking for a fast and affordable solution, it is very attractive to use cheap consumer-grade SSDs and replace them on a regular basis (while monitoring their resource through the firmware, of course).
Risks associated with SSD
As reasonable people, we understand that consumer-grade hardware has less endurance and a higher chance of failure, so we need to reduce that chance. How can we reduce it?
To decrease the chance of sudden hardware failure, it is necessary to use 2 SSDs for the database, bundled in RAID1. RAID1 does not mean we need a separate RAID controller; the RAID capabilities of modern motherboard chipsets are good enough – essentially, a simple RAID controller without a cache.
If there is no cache, what about performance? Modern SSDs, like the Samsung in the example, have a built-in cache and very smart controllers, so they are fast enough without dedicated RAID controllers.
Of course, it is necessary to check the disks' health; usually the firmware provides a good enough estimate of a disk's remaining lifespan.
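On Linux, such monitoring can be scripted around the smartmontools `smartctl` utility. The helper below is a hypothetical sketch: it parses the Wear_Leveling_Count attribute (which Samsung consumer SSDs expose as a normalized health value, 100 = new, 0 = worn out) from `smartctl -A` output; the sample text is hard-coded here for illustration, with invented values.

```python
# Hypothetical sketch: parse the wear attribute from `smartctl -A` output.
# In production you would feed it the real output, e.g. from
# subprocess.run(["smartctl", "-A", "/dev/sda"], capture_output=True, text=True).

def wear_remaining(smartctl_output):
    """Return the normalized Wear_Leveling_Count value, or None if absent."""
    for line in smartctl_output.splitlines():
        fields = line.split()
        if len(fields) >= 4 and fields[1] == "Wear_Leveling_Count":
            return int(fields[3])   # the normalized VALUE column
    return None

# Sample output fragment (shortened; values invented for illustration)
SAMPLE = """\
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
  5 Reallocated_Sector_Ct   0x0033   100   100   010    Pre-fail  Always       -       0
177 Wear_Leveling_Count     0x0013   054   054   000    Pre-fail  Always       -       812
"""

print(wear_remaining(SAMPLE))   # 54 -> time to plan a replacement
```

A cron job running such a check and mailing a warning when the value drops below a threshold is enough for a small office; no dedicated monitoring suite is required.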
Of course, we need drives for backups. The best choice is a pair of 2Tb SATA drives – for example, the Seagate SkyHawk 2TB Surveillance Hard Drive, at 5900 rpm.
Amazon's price is $74.99 http://amzn.to/2wwWZjV
We need 2 HDDs because the backup drives must also be put in RAID1, in order to protect against sudden hardware failure. These backup drives can be pretty slow; their main purpose is to store backups and provide adequate sequential read/write speed. 5900rpm drives are slower than 7200rpm ones, but they are known to be more reliable. This specific series (Seagate SkyHawk) is designed for linear writes: exactly what we need for backups.
Risks for backup drives
The risk is the same as for the SSDs, and we protect against it in the same way: RAID1 and monitoring. The chance that both drives in a RAID1 fail simultaneously is small.
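To put a number on «small», here is a rough estimate. The 5% annual failure rate per drive and the 3-day replacement window are illustrative assumptions, not measured values, and the model treats failures as independent (in reality drives from the same batch fail in a correlated way, so the true risk is somewhat higher):

```python
# Rough estimate of losing both RAID1 drives before the failed one is replaced.
# Assumptions (illustrative): 5% annual failure rate per drive,
# independent failures, 3 days to obtain and install a replacement.

annual_failure_rate = 0.05
replacement_window_days = 3

p_first = annual_failure_rate
p_second_in_window = annual_failure_rate * replacement_window_days / 365

p_data_loss_per_year = p_first * p_second_in_window
print(f"P(both drives fail before replacement) ~ {p_data_loss_per_year:.6f} per year")
```

Even with pessimistic inputs the result is on the order of 1 in 50,000 per year – acceptable for a small office, provided the monitoring actually triggers a prompt replacement.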
Do we need a separate drive for the operating system? In the cheapest configuration – no, it is good enough to keep the OS and the database on the same drive. It will slightly speed up the SSD's wear, but not by much – in our scenario, it should definitely last 3 years.
For a Firebird database server it is better to choose a CPU with many cores and a low frequency. Yes, you read that correctly – there is no need for high-frequency CPUs in a database server. Firebird (if it is configured properly) uses all cores for normal work, but does not load them at 100%.
The exceptions are some maintenance operations (backup and restore), which benefit from a fast CPU, and badly written SQL queries with wrong usage of indices – these consume 100% of a core, but that is a clear sign that database performance optimization is required. In general, it is not possible to compensate for slow queries with high-end CPUs: even if the CPU is 3x faster, a slow query will still be slow.
On the other hand, having several cores can compensate for the existence of such CPU-consuming queries: with fewer than 25 users and 8 cores, the chance that all cores are simultaneously occupied by slow queries is not that big.
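That intuition can be checked with a simple binomial model. The assumption that each user spends 5% of the time running a slow, core-hogging query is an illustrative number, not a measurement:

```python
from math import comb

# Probability that at least 8 of 25 users are running a slow,
# core-hogging query at the same moment, assuming each user
# independently runs one with probability p = 0.05 (illustrative).

users, cores, p = 25, 8, 0.05

p_all_cores_busy = sum(
    comb(users, k) * p**k * (1 - p)**(users - k)
    for k in range(cores, users + 1)
)
print(f"P(all {cores} cores busy with slow queries) ~ {p_all_cores_busy:.2e}")
```

The result is on the order of 2 in 100,000 – so with 8 cores, even a few badly optimized queries rarely starve the whole server, while a fast dual-core CPU would be saturated far more often.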
And, since we are looking for a cheap solution, we are limited to desktop CPUs – and, sorry to say, to AMD chips. Essentially, if Intel offered a desktop processor with 8 cores for the same price as the AMD FX-8320E, or slightly higher, we could use Intel too, but at the moment we don't see such an option.
This AMD chip is not the most modern or the fastest, but it has 8 cores and it is cheap, so it is the optimal choice for our fast and cheap server.
Risks associated with desktop CPU
What is the potential risk associated with the CPU? For low-frequency (i.e., not overclocked) chips with an adequate cooler there is almost no risk. When did you last hear about a failed desktop CPU – in 2003? Of course, server CPUs are much faster, but we are looking for a good-enough solution.