The following are general Microsoft® guidelines for Database Administrators on maintaining SQL Server databases and ensuring peak performance from the applications that use them. For additional information, please refer to the Microsoft website.
A good way to consistently keep SQL Server performing at peak levels is to make sure it is using as much of the available memory as is practical. A system with 32 GB of memory may be allocating only 2 GB to SQL Server while leaving the remaining 30 GB idle.
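As a sketch of how the memory ceiling can be adjusted, the "max server memory" option can be raised with sp_configure; the 28672 MB value below is only an example for a 32 GB machine and should be sized to your own workload, leaving headroom for the operating system:

```sql
-- Enable advanced options so "max server memory" is visible
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

-- Example only: allow SQL Server up to 28 GB on a 32 GB machine,
-- leaving roughly 4 GB for the OS and other processes
EXEC sp_configure 'max server memory (MB)', 28672;
RECONFIGURE;
```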
A useful tool for monitoring OS Available Memory is Perfmon (Performance Monitor). For SQL Server 2008 and earlier, this counter should not go below 200 MB. For 2008 R2, that number may be as high as 1 GB, particularly for a busy machine.
More details regarding Performance Monitor can be found in the link below.
Disk optimization can greatly increase SQL Server performance when managed correctly. Removing old backup files to a file server or network share can increase available disk space needed for transaction logs, database files, and indexes.
Transaction logs contain the operations performed within the database and can grow quickly, consuming all of the space allocated to them. Regular backups of the transaction logs help prevent this from happening. The backup process truncates old log records no longer needed for recovery and marks them as inactive so they can be overwritten. Implement a good backup strategy consisting of full database backups, transaction log backups, and system database backups.
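A minimal sketch of such a strategy is shown below; the database name and backup paths are placeholders, and the schedule (weekly full, frequent log backups) is an assumption to be adjusted per site:

```sql
-- Full database backup (e.g. weekly); MyDatabase and paths are placeholders
BACKUP DATABASE MyDatabase
TO DISK = N'D:\Backups\MyDatabase_Full.bak';

-- Transaction log backup (e.g. hourly); each log backup truncates
-- inactive log records so the log file can reuse that space
BACKUP LOG MyDatabase
TO DISK = N'D:\Backups\MyDatabase_Log.trn';
```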
Like an index in a book, an index in a database lets you quickly find specific information in a table or view. Well-designed indexes can significantly improve the performance of database queries and applications. Indexes can become fragmented, which can cause a decrease in SQL Server performance. Indexes should be evaluated to determine whether fragmentation exists. Indexes can be rebuilt or reorganized as needed, and this should be included as part of an ongoing maintenance process.
The first step in deciding which defragmentation method to use is to analyze the index to determine the degree of fragmentation. By using the system function sys.dm_db_index_physical_stats, you can detect fragmentation in a specific index, all indexes on a table or indexed view, all indexes in a database, or all indexes in all databases.
See below for more information regarding this function.
Note: It is recommended to rebuild an index that is more than 30% fragmented and to reorganize an index that is between 10% and 30% fragmented. Rebuilding is most advantageous when the index contains at least 100 pages; for smaller indexes, a rebuild may show no measurable improvement.
Suggested SELECT statement to check server indexes for fragmentation:
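One possible form of such a query, using the sys.dm_db_index_physical_stats function described above (the 10% and 100-page thresholds follow the note above; filter values can be adjusted):

```sql
-- Report fragmentation for all indexes in the current database
SELECT OBJECT_NAME(ips.object_id)       AS TableName,
       i.name                           AS IndexName,
       ips.avg_fragmentation_in_percent,
       ips.page_count
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
  ON ips.object_id = i.object_id
 AND ips.index_id  = i.index_id
WHERE ips.avg_fragmentation_in_percent > 10   -- only fragmented indexes
  AND ips.page_count >= 100                   -- ignore small indexes
ORDER BY ips.avg_fragmentation_in_percent DESC;
```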
Rebuild or reorganize the indexes:
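A sketch of both operations with ALTER INDEX; the index and table names (IX_Orders_CustomerID, dbo.Orders) are hypothetical examples:

```sql
-- Reorganize when fragmentation is between 10% and 30%
ALTER INDEX IX_Orders_CustomerID ON dbo.Orders REORGANIZE;

-- Rebuild when fragmentation exceeds 30%
ALTER INDEX IX_Orders_CustomerID ON dbo.Orders REBUILD;

-- Or rebuild every index on the table at once
ALTER INDEX ALL ON dbo.Orders REBUILD;
```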
More details regarding this process can be found in the link below.
Statistics contain information about the distribution of values in an index or column of a table. Statistics are automatically created for each index key. They track the selectivity of the data and determine how effective an index is for satisfying queries. By default, the query optimizer already updates statistics as necessary to improve the query plan. This behavior can be configured in the database properties from Microsoft SQL Server Management Studio.
You can improve query performance by using the UPDATE STATISTICS command or the sp_updatestats stored procedure to update statistics more frequently. sp_updatestats updates all statistics on all tables, whereas the UPDATE STATISTICS command allows you to update statistics on a specific table. To view current statistics, run the DBCC SHOW_STATISTICS command.
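The three commands can be sketched as follows; the table and index names (dbo.Orders, IX_Orders_CustomerID) are hypothetical examples:

```sql
-- Update statistics on a single table
UPDATE STATISTICS dbo.Orders;

-- Update all statistics on all tables in the current database
EXEC sp_updatestats;

-- Inspect current statistics for one index; the header of the output
-- includes the "Updated" date and row counts referenced in the note below
DBCC SHOW_STATISTICS ('dbo.Orders', 'IX_Orders_CustomerID');
```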
Note: It is recommended to update the statistics if the results show an outdated date in the "Updated" column and/or if there is a substantial difference between the "Actual Number of Rows" and the "Estimated Number of Rows" for that table. You can view the table properties in Microsoft SQL Server Management Studio to get the "Actual Number of Rows".
More details can be found in the links below.
Analyze the I/O of your system and decide on the best distribution for the I/O load. Perform object-level analysis and tune performance at the table level. The goal is to reduce I/O bottlenecks and arrive at optimal settings for database reads and writes.
The Tempdb is a system database that contains global resources available for all users connected to a SQL Server Instance. The Tempdb database gets recreated each time the SQL Server service is restarted. The size and physical placement of the Tempdb database can significantly affect the performance of a system. If your Tempdb database is heavily used by your application(s), consider locating it on an array of its own (such as RAID 1 or RAID 10). This will allow disk I/O to be more evenly distributed, reducing disk I/O contention issues, and speeding up SQL Server’s overall performance.
Size the Tempdb appropriately: if the database is sized too small with the Autogrow feature enabled, it will grow too frequently. These growth operations can generate significant I/O during busy workloads, so it is better to set the Tempdb size based on workload tests. Ensure that you have enough free disk space to allow it to grow as needed. As a general rule, create multiple data files for the Tempdb based on the number of CPUs in your system.
Below are some general guidelines for setting the file growth increment for the Tempdb files.
Tempdb file size    File growth increment
0 to 100 MB         10 MB
100 to 200 MB       20 MB
200 MB or more      10%
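The sizing and multiple-file guidelines above can be sketched as follows; the file sizes, growth increments, logical names (tempdev2), and drive path are placeholder assumptions to be replaced with values from your own workload tests:

```sql
-- Pre-size the primary Tempdb data file so Autogrow rarely fires
ALTER DATABASE tempdb
MODIFY FILE (NAME = tempdev, SIZE = 1024MB, FILEGROWTH = 10%);

-- Add a second data file (one file per CPU core is a common starting
-- point); path and size are examples only
ALTER DATABASE tempdb
ADD FILE (NAME = tempdev2, FILENAME = N'T:\tempdb2.ndf',
          SIZE = 1024MB, FILEGROWTH = 10%);
```

The changes take effect the next time the SQL Server service restarts and Tempdb is recreated.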