Before creating a database, we recommend that you review the guidelines below.
SQL Server is a robust DBMS on any supported hardware, but because 64-bit hardware is affordable and readily available, we recommend 64-bit hardware and the 64-bit editions of SQL Server for all installations. This applies to SQL Server 2005 and 2008 as well, although we recommend that you use SQL Server 2008 R2 or SQL Server 2012.
The database file system configuration is far more important than the Web server file system in terms of design. A system administrator needs to protect the integrity of the disks, plan for easily expandable growth, balance disk reads and writes, and plan for system failure and recovery.
See Zimbra's Supported disk space, memory, and CPU specifications for optimal database I/O performance.
After the installation, consider adjusting the memory available to the SQL Server service. In SQL Server Management Studio, right-click the server, choose Properties, and select the Memory page. There you can set the maximum server memory in MB.
We recommend that you allocate 80-85% of server memory to SQL Server. The Supported disk space, memory, and CPU specifications assume that the database runs on dedicated hardware with 16GB+ of memory.
We recommend that your services be dedicated to their own hardware; however, if your environment dictates that you run IIS and SQL Server on the same machine, you will need to adjust these values accordingly. For a baseline, reserve 15-20% of system RAM for the operating system, then split the remainder so that roughly 60% goes to SQL Server and up to 40% to IIS.
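The same memory ceiling can be set with T-SQL instead of the GUI. The sketch below assumes a dedicated 16 GB server; the 13107 MB value is simply 80% of 16 GB and should be adjusted for your hardware:

```sql
-- Set SQL Server's maximum memory via sp_configure (example value only).
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)', 13107;  -- ~80% of 16 GB
RECONFIGURE;
```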
Server collation determines the sort order and case sensitivity of the data within the databases. The best time to set the default collation for your databases is at installation, so that all of the system databases (master, msdb, model, and tempdb) are created with the same sort order that the user databases will use. This allows for better interoperability.
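To verify what an existing installation is using, you can query the server and database collations directly:

```sql
-- Check the server-level collation and the collation of a given database.
SELECT SERVERPROPERTY('Collation') AS ServerCollation;
SELECT DATABASEPROPERTYEX('master', 'Collation') AS MasterCollation;
```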
The initial size of the database is very important. Planning the size of your community is not a one-time event: it should be evaluated at least once every six months.
We recommend that you plan for one year's growth when setting the initial database size - not only for the community's sake, but also for your file system. Even for smaller communities, it is better to use a much larger initial data file. For a lightweight community, you should start with a 1GB data file and a 64MB log file. Fewer expansions of the data files means fewer file fragments on the hard disk, which is important for consistency at the raw file level.
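A creation script for a lightweight community along these lines might look like the following. The database name and file paths are placeholders, not values prescribed by the product:

```sql
-- Sketch: create the community database with the suggested initial sizes.
CREATE DATABASE ZimbraCommunity
ON PRIMARY
(
    NAME = ZimbraCommunity_data,
    FILENAME = 'D:\SQLData\ZimbraCommunity.mdf',
    SIZE = 1GB            -- roughly one year's growth for a lightweight community
)
LOG ON
(
    NAME = ZimbraCommunity_log,
    FILENAME = 'E:\SQLLogs\ZimbraCommunity.ldf',
    SIZE = 64MB
);
```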
Use the Supported disk space, memory, and CPU specifications to approximate the total amount of storage required when sizing the disk space needed for your SQL Server database.
The formula assumes the default set of user profile properties and on average one revision of a wiki document.
The metadata factor % is a value that adds a percentage of the core data to the total size.
Sizing formula = (Number of Items * Item Storage Required) * 112%
For example, assuming 100,000 users at 3.04 KB per user, you would plan for 296.88 MB of core storage plus an additional 35.6 MB of metadata, for a total storage requirement of 332.5 MB.
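The arithmetic behind that example can be reproduced directly in T-SQL, which is convenient when plugging in your own user counts and per-item sizes:

```sql
-- Worked sizing example: 100,000 users at 3.04 KB each, 12% metadata factor.
DECLARE @items BIGINT = 100000;
DECLARE @kb_per_item DECIMAL(10,2) = 3.04;

SELECT
    CAST(@items * @kb_per_item / 1024.0        AS DECIMAL(10,2)) AS CoreDataMB,  -- ~296.88
    CAST(@items * @kb_per_item / 1024.0 * 0.12 AS DECIMAL(10,2)) AS MetadataMB,  -- ~35.63
    CAST(@items * @kb_per_item / 1024.0 * 1.12 AS DECIMAL(10,2)) AS TotalMB;     -- ~332.50
```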
Each configuration is unique. Some installations will have more user storage with custom user properties, whereas other installations will have multiple revisions of Wiki documents.
Use the sizing recommendations from the database sizing formula above when creating your data file. There are two directions to take regarding autogrowth on SQL Server data and log files:
- If your environment is diligent about monitoring both the size of the data within the SQL files and the physical size of the files themselves, then there should be no autogrowth at all. At a minimum, sizing should be checked once a quarter and adjusted at least once every six months. This is the preferred direction.
- If your environment is more lenient, you can set AUTOGROWTH in terms of megabytes. Start with 128 MB: a value small enough that files can grow with little contention, yet large enough that you shouldn't encounter "out of space" errors.
If you are space-constrained, set a maximum file size so that you do not exhaust all available space on the disk. We recommend that you keep a minimum of 40% free space on the disk; anything less could jeopardize substantial database growth in the future.
For the database log file, take the same considerations into account. Transactions in Zimbra Community are generally not very large, so you can size your log file much smaller - such as 64 MB or 128 MB. Your autogrowth can be set smaller as well, such as 16 MB or 32 MB. You will find discussions on tracking growth later in this guide.
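Both the growth increments and the maximum sizes can be applied with `ALTER DATABASE`. In this sketch the logical file names and the MAXSIZE values are placeholders - pick caps that leave at least 40% of the disk free, and look up your actual file names with `SELECT name FROM sys.database_files;`:

```sql
-- Sketch: set autogrowth increments and maximum sizes on the data and log files.
ALTER DATABASE ZimbraCommunity
    MODIFY FILE (NAME = ZimbraCommunity_data, FILEGROWTH = 128MB, MAXSIZE = 50GB);

ALTER DATABASE ZimbraCommunity
    MODIFY FILE (NAME = ZimbraCommunity_log, FILEGROWTH = 16MB, MAXSIZE = 8GB);
```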
Maintenance is not always considered when deploying a database; however, we have found that it is critical to a database's long-term health. As a general rule, all tables with clustered indexes should be reindexed on a schedule. We recommend doing this once a week, although, depending on your community's activity, once a month may suffice.
Zimbra Community includes a number of database maintenance scripts.
We include the cs_system_dbreindex stored procedure in the Zimbra Community codebase. cs_system_dbreindex can be scheduled (via a SQL Job) to reindex clustered indexes.
Table statistics should be updated on days when reindexing does not occur. Again, there is a stored procedure included in the Zimbra Community codebase called cs_system_updatestatistics that can be scheduled for this purpose.
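Run against the community database, the two maintenance procedures could be invoked as shown below; this sketch assumes they live in the dbo schema and take no required parameters, so check the procedure definitions in your installation before scheduling them as SQL Agent job steps:

```sql
-- Weekly job step (e.g. Sunday): rebuild clustered indexes.
EXEC dbo.cs_system_dbreindex;

-- Daily job step, on days the reindex job does not run: refresh statistics.
EXEC dbo.cs_system_updatestatistics;
```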
We have also found that people generally do not track physical database and log growth over time. Tracking table and index growth over time gives you an idea of how fast your community is growing and can give you an idea of what your disk space needs will be.
Physical database growth can be tracked with the SQL Server stored procedure sp_spaceused on a table-by-table basis. The number of rows per table is another good metric: it shows the average growth of your data within your community over time.
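One way to capture a per-table snapshot is to collect sp_spaceused output into a temporary table. Note that sp_MSforeachtable is an undocumented (though widely used) helper; iterating over sys.tables with a cursor is the fully supported alternative:

```sql
-- Snapshot size and row count for every user table, largest first.
CREATE TABLE #space (
    name       SYSNAME,
    rows       BIGINT,
    reserved   VARCHAR(32),
    data       VARCHAR(32),
    index_size VARCHAR(32),
    unused     VARCHAR(32)
);

EXEC sp_MSforeachtable 'INSERT INTO #space EXEC sp_spaceused ''?'';';

SELECT * FROM #space ORDER BY rows DESC;
DROP TABLE #space;
```

Storing these snapshots in a permanent table with a capture date lets you chart growth over time.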
We recommend that you schedule a single full-database backup once a week, a differential backup each day (except on full-backup days), and log backups at the interval required by the maximum recovery time allotted for your site.
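The schedule above maps onto three BACKUP statements; the database name and file paths here are placeholders, and in practice each would run as a separate SQL Agent job on its own schedule:

```sql
-- Weekly full backup:
BACKUP DATABASE ZimbraCommunity
    TO DISK = 'F:\Backups\ZimbraCommunity_full.bak';

-- Daily differential (skip on the full-backup day):
BACKUP DATABASE ZimbraCommunity
    TO DISK = 'F:\Backups\ZimbraCommunity_diff.bak'
    WITH DIFFERENTIAL;

-- Log backups at your recovery-time interval, e.g. every 15 minutes
-- (requires the FULL recovery model):
BACKUP LOG ZimbraCommunity
    TO DISK = 'F:\Backups\ZimbraCommunity_log.trn';
```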
For more information about backup options, refer to:
For additional data about backup schedules, refer to:
Even though older versions of SQL Server are specified, the processes are the same.
The most effective way to determine problems within the database (related to performance) is through SQL Profiler. An exhaustive review of SQL Profiler can be found here.