Capacity Planning and Hardware Provisioning for MongoDB In Ten Minutes

Most MongoDB deployments run on a cluster of multiple servers. This may introduce capacity planning and provisioning complexities beyond those of traditional databases. Solution Architect Chad Tindel’s Hardware Provisioning presentation from MongoDB World describes some best practices for operations teams sizing their MongoDB deployments.

There are two important concepts related to MongoDB’s architecture that will help you understand the presentation:

  • Sharding. MongoDB partitions data across servers using a technique called Sharding. Balancing of data across shards is automatic, and shards can be added and removed without taking the database offline.
  • Replication. MongoDB maintains multiple redundant copies of the data for high availability. Replication is built into MongoDB, and works across wide area networks without the need for specialized networks.
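
As a quick illustration of both concepts, the mongo shell provides helpers for inspecting replication and sharding state on a running deployment. This is a minimal sketch; it assumes you are connected to a replica set member and to a mongos, respectively:

    // Run against a replica set member: shows members, their roles, and replication health
    rs.status()

    // Run against a mongos: shows shards, sharded databases, and chunk distribution
    sh.status()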

Chad begins with some great tips to keep in mind, then moves on to some customer examples:

  • Document your performance requirements up front. Decide how much data you need to store. Determine the size of the working set by analyzing queries to estimate how much data you will need to access at one time. Calculate the number of requests you wish to deliver per second in the production environment. Set the percentage of uptime you require. Settle on what latency you will tolerate.
  • Stage a Proof of Concept (POC). MongoDB’s scalability allows you to try the application with 10%-20% of the data on 10%-20% of the hardware. With this approach, you can perform schema/index design and understand query patterns, then refine your estimate of the working set size. Check performance on one machine, then add replication and sharding as necessary. Keep this configuration as a sandbox for testing with successive revisions of the application.

    You can estimate your working set size by executing this command in MongoDB:

    db.runCommand( { serverStatus: 1, workingSet: 1 } )
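    // In the output, workingSet.pagesInMemory multiplied by the page size
    // (typically 4KB) gives a rough estimate of the in-memory working set;
    // the workingSet section applies to the MMAPv1 storage engine of that era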
    

  • Always test with a real workload. Scale up the Proof of Concept, but do not deploy until you have done substantial testing with real-world data against your performance requirements.
  • Constantly monitor and adjust based on changing requirements. An increase in users typically means more queries and a larger working set. New indexes cause a larger working set. Applications may change their percentage of writes versus reads over time. Tools like MMS and mongoperf help you detect changes in system performance parameters. As your needs evolve, you can alter your hardware configuration. Note that with MongoDB you can add and remove shards or replica set members without taking down the database (see the sketch after this list).
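
As a rough sketch of those online scaling operations, the mongo shell offers helpers for adding and removing shards and replica set members. The hostnames and shard name below are placeholders:

    // Run against a mongos: add a new shard (a replica set) to the cluster
    sh.addShard("shard0004/node13.example.net:27017")

    // Drain and remove a shard; the balancer migrates its chunks to the remaining shards
    db.adminCommand({ removeShard: "shard0004" })

    // Run against the replica set primary: add or remove a member
    rs.add("node14.example.net:27017")
    rs.remove("node14.example.net:27017")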

Chad went over two actual customer use cases in the presentation.

Case #1: A Spanish Bank

The bank wants to store six months’ worth of logs. Each month of data occupies 3TB of space, so the six months use 6 x 3 = 18TB of space. They know they want to analyze the last month’s worth of logs, so the working set amounts to 1 month of data, or 3TB, plus indexes (1TB), for a 4TB working set.

For the Proof of Concept (POC) environment, they choose to use about 10% of the data, or 2TB. The production requirements call for a 4TB working set, so scaling proportionally, 4TB/18TB x 2TB of POC data gives a POC working set of roughly 444GB. The customer could get servers with a maximum of only 128GB of RAM each, so for the POC environment they choose 4 shards with 128GB each: 4 x 128GB = 512GB, which accommodates the 444GB requirement. Three replica set members on each shard for read availability and redundancy give us 4 x 3 = 12 physical machines. Two application servers running mongos and three config servers in virtual machines round out the POC configuration.
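
The same arithmetic written out in shell JavaScript, as a minimal sketch using the figures above:

    // Bank figures, in TB
    var totalData  = 18;   // total log data on disk
    var workingSet = 4;    // 3TB of hot data + 1TB of indexes
    var pocData    = 2;    // POC loads roughly 10% of the data

    // Scale the working set proportionally: 4/18 * 2TB ~= 0.444TB ~= 444GB
    var pocWorkingSetGB = workingSet / totalData * pocData * 1000;

    // 128GB-RAM shards needed to keep the POC working set in memory
    var pocShards = Math.ceil(pocWorkingSetGB / 128);   // 4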

To accommodate their 4TB deployment working set and 18TB of data with their 128GB servers, they chose 36 shards, each with 128GB of RAM and 512GB of available storage: 36 x 128GB = 4.6TB of RAM, and 36 x 512GB = 18TB of available storage. As in the POC system above, they have two application servers running mongos and three config servers in virtual machines. As above, each shard is a three-node replica set, for 36 shards x 3 replica set nodes = 108 physical machines.
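
A rough sketch of how the production shard count falls out of the same building block (here storage, not RAM, is the binding constraint):

    // Production requirements, in GB
    var shardsForRAM     = Math.ceil(4000  / 128);   // 32 shards to hold the 4TB working set
    var shardsForStorage = Math.ceil(18000 / 512);   // 36 shards to hold the 18TB of data

    var shards   = Math.max(shardsForRAM, shardsForStorage);   // 36
    var machines = shards * 3;                                 // 108 physical machines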

Note that MongoDB allows both the POC and Production configurations to use the same commodity hardware building block system containing 128GB RAM and 512GB available storage. This allows them to test actual production nodes in the POC cluster before adding them to the production cluster.

Case #2: Big Online Retailer

The retailer wanted to move their product catalog from SQL Server to MongoDB. MongoDB excels at storing catalog information: its dynamic schema can store different characteristics for each product without forcing blank placeholder fields onto products that lack those attributes, as a fixed SQL schema would.
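
For illustration, two documents in the same collection can carry entirely different attributes. The collection and field names below are hypothetical, not taken from the retailer’s actual catalog:

    db.catalog.insertOne({
        _id: "SKU-1042",
        name: "Standing Desk",
        category: ["Desks"],
        surface_material: "bamboo",
        height_range_cm: [71, 121]
    })

    db.catalog.insertOne({
        _id: "SKU-2981",
        name: "2TB SATA Hard Drive",
        category: ["Hard Drives"],
        rpm: 7200,
        interface: "SATA III"
    })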

The retailer wanted to give East and West coast customers their own datacenters, so they elected to run an active/active configuration. They bulk write new or changed inventory only during the least busy times of night. Peak usage consists of reads only.

They have 4 million product SKUs, each with an average JSON document size of 30KB. They need to service search queries for a specific product by _id or by category, such as “Desks” or “Hard Drives.” Most category requests will retrieve an average of 72 documents. A search engine crawler that follows all links of the category tree will retrieve 200 documents.
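
Those read patterns reduce to a point lookup and an indexed category query. A minimal sketch, reusing the hypothetical collection above:

    // Lookup of a single product by its identifier
    db.catalog.findOne({ _id: "SKU-1042" })

    // Category browse, returning ~72 documents on average
    db.catalog.find({ category: "Desks" })

    // An index on category keeps these reads fast and memory-resident
    db.catalog.createIndex({ category: 1 })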

Calculations revealed that the average product appears in 2 categories. The retailer initially wanted to shard by category, with products existing in multiple categories duplicated. 4 million products x an average of 2 categories = 8 million documents; 8M x 30KB = 240GB of data, plus 30GB of indexes, for 270GB of total data.
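
The same arithmetic in shell JavaScript, along with the stats call one could use to verify actual sizes once data is loaded (again using the hypothetical catalog collection):

    // Back-of-the-envelope sizing, in GB
    var docs    = 4000000 * 2;              // 4M SKUs, each duplicated into ~2 categories
    var dataGB  = docs * 30 / 1000 / 1000;  // 30KB per document ~= 240GB
    var totalGB = dataGB + 30;              // plus ~30GB of indexes ~= 270GB

    // Verify actual sizes after loading
    db.catalog.stats().size             // data size in bytes
    db.catalog.stats().totalIndexSize   // total index size in bytes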

MongoDB consulting engineers recommended that the retailer use replica sets without sharding, since the application required high performance reads but only isolated batch writes during the least busy times. The entire working set of 270GB fits in the 384GB memory available in the large Cisco UCS servers the retailer said they wanted to use, eliminating the need to split the in-memory working set across shards, and thereby simplifying their deployment architecture.

MongoDB engineers suggested a four-node replica set spanning the two datacenters, with an arbiter in a third location to provide the tie-breaking vote so that a secondary can be elected primary if the primary server fails. This allowed either server in each data center to go down for maintenance or as the result of failure while the other server continued to function. MongoDB’s “nearest” read preference routes each query to the member with the lowest recent network latency.
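
A minimal sketch of such a configuration; the hostnames and replica set name are placeholders:

    rs.initiate({
        _id: "catalog",
        members: [
            { _id: 0, host: "east1.example.net:27017" },
            { _id: 1, host: "east2.example.net:27017" },
            { _id: 2, host: "west1.example.net:27017" },
            { _id: 3, host: "west2.example.net:27017" },
            { _id: 4, host: "arbiter.example.net:27017", arbiterOnly: true }
        ]
    })

    // Route reads to the lowest-latency member near each application server
    db.catalog.find({ category: "Desks" }).readPref("nearest")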

But, surprise! The retailer decided to deploy the system on their corporate VMware cloud instead of their big Cisco servers. Their IT department would not give them any VMware nodes with more than 64GB of RAM. To circumvent the 64GB limitation, they deployed 3 shards, each with 4 data nodes (2 East and 2 West) plus an arbiter. 64GB x 3 = 192GB. While this would not hold their 270GB working set, they decided to tolerate whatever swapping resulted. They reserved the possibility of adding a fourth shard to keep more of the working set in memory.
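
A sketch of that sharded layout; the shard key shown is an assumption for illustration, since the presentation does not specify the final key:

    // Run against a mongos; each shard is one of the cross-datacenter replica sets
    sh.addShard("shard01/east1a.example.net:27017")
    sh.addShard("shard02/east2a.example.net:27017")
    sh.addShard("shard03/east3a.example.net:27017")

    sh.enableSharding("retail")
    sh.shardCollection("retail.catalog", { category: 1, _id: 1 })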

Hardware Provisioning Lessons From These Cases

  1. For read intensive applications, size your servers to hold the entire working set in memory and replicate for greater availability.
  2. If your servers’ RAM will not accommodate your working set in memory, shard to aggregate RAM from multiple replica set clusters.
  3. Create a Proof of Concept (POC) system using the same server hardware as your deployment. That way you can configure and test a server in your POC system. Afterwards, you can drop it into your deployment system to either expand a replica set or add a shard to scale as needed.

For more information, consider downloading our MongoDB Operations Best Practices white paper below or watch Chad Tindel’s original presentation at MongoDB World.

DOWNLOAD WHITE PAPER

