Bentley’s enterprise applications, such as ProjectWise, eB, and SELECTserver, are database centric. Although these applications use databases, database maintenance and implementation are not controlled by the Bentley applications themselves. Database servers are like any other machine: they require maintenance. It is especially important to develop a maintenance plan for your databases because this is where your mission-critical data resides. Failure to do so can result in data loss, production downtime, and performance issues.
For database servers you need to ensure that you perform regular backups, rebuild or reorganize the database indexes, and keep the statistics up to date. This is on top of regular system maintenance, which includes tasks such as keeping the operating system current with Windows Update, running virus scanners, defragmenting disks, and monitoring the performance of the system. This article is SQL Server focused, although the concepts also apply to Oracle and other databases.
Backing Up the Database
Backing up the database is an essential part of maintaining it. A data backup is an insurance plan you will want to have: important files can be accidentally deleted, mission-critical data can become corrupt, or a drive can fail. With a solid backup and recovery plan you can recover from any of these; without one, you are left with nothing to fall back on.
Regular backups of the transaction log also keep its size in check; left unmanaged, a transaction log can grow quickly and consume all of the space allocated to the database. The log backup process marks old log records that are no longer needed for recovery as inactive so they can be overwritten. (Note that transaction log backups require the full or bulk-logged recovery model; under the simple recovery model the log is truncated automatically at checkpoints and log backups are not available.) Implement a good backup strategy consisting of full database backups, transaction log backups, and system database backups.
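As a rough sketch, a full database backup followed by a transaction log backup can be scripted as below. The database name and backup paths are placeholders; substitute your own.

```sql
-- Full backup of the database (placeholder name and path).
BACKUP DATABASE [pw_projectwise]
TO DISK = N'D:\Backups\pw_projectwise_full.bak'
WITH INIT, CHECKSUM, STATS = 10;

-- Transaction log backup. This requires the full (or bulk-logged)
-- recovery model; it is not available under simple recovery.
BACKUP LOG [pw_projectwise]
TO DISK = N'D:\Backups\pw_projectwise_log.trn'
WITH CHECKSUM, STATS = 10;
```

In practice you would schedule these through SQL Server Agent or a maintenance plan rather than running them by hand.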
See Backup Overview (SQL Server) for additional information on this.
Monitoring the System
It is important to monitor the system. It is also good practice to take a baseline of the server so that, if you encounter a problem, you can compare the results against a known good point.
Things to monitor on the database server are:
Processor
An overwhelmed CPU can be due to the processor itself not offering enough power, or to an inefficient application. You should also check whether the system spends a lot of time paging as a result of insufficient physical memory, since heavy paging can masquerade as processor load. When investigating a potential processor bottleneck, the Microsoft Service Support engineers use the following counters.
Processor\% Processor Time: Measures the percentage of elapsed time the processor spends executing non-idle threads. If the percentage is consistently greater than 85 percent, the processor is overwhelmed and the server may require a faster processor.
Processor\% User Time: Measures the percentage of elapsed time the processor spends in user mode. If this value is high, the server is busy with the application; one possible solution is to optimize the application that is using up the processor resources.
Processor\% Interrupt Time: Measures the time the processor spends receiving and servicing hardware interrupts during the sample interval. A value greater than 15 percent indicates a possible hardware issue.
System\Processor Queue Length: Indicates the number of threads in the processor queue. The server does not have enough processor power if the value is more than two times the number of CPUs for an extended period of time.
Memory
A good way to keep SQL Server performing at peak levels is to make sure it is using as much memory as is available to it. A system with 32 GB of memory may be using only 2 GB for SQL Server while leaving the remaining 30 GB idle.
A useful tool for monitoring available OS memory is Performance Monitor (Perfmon), specifically the Memory\Available MBytes counter. For SQL Server 2008 and earlier, this counter should not go below 200 MB. For 2008 R2, that number may be as high as 1 GB, particularly on a busy machine.
The best practice is to ensure your database is using as much memory as possible. Sometimes a server will be set up with 16 GB of RAM but only 2 GB of it is being used by the database. Make sure that your DBA has allocated as much memory to the database as possible.
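For example, the memory ceiling can be inspected and raised with sp_configure; the 28 GB figure below is only illustrative and should be sized to your hardware, leaving headroom for the operating system.

```sql
-- Show the current setting (values are in MB).
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)';

-- Illustrative: on a 32 GB machine, leave roughly 4 GB for the OS
-- and allow SQL Server up to 28 GB.
EXEC sp_configure 'max server memory (MB)', 28672;
RECONFIGURE;
```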
More details regarding Performance Monitor can be found in the link below.
Disk
Disk optimization, managed correctly, can greatly increase SQL Server performance. Moving old backup files to a file server or network share frees disk space needed for transaction logs, database files, and indexes. Since the disk system stores and handles the programs and data on the server, a bottleneck in disk usage and speed has a big impact on the server's overall performance. You will want to monitor the following counters, which the Microsoft Service Support engineers rely on for disk monitoring.
LogicalDisk\% Free Space: Measures the percentage of free space on the selected logical disk drive. Take note if this falls below 15 percent, as you risk running out of free space for the OS to store critical files. One obvious solution is to add more disk space.
PhysicalDisk\% Idle Time: Measures the percentage of time the disk was idle during the sample interval. If this counter falls below 20 percent, the disk system is saturated; consider replacing it with a faster disk system.
PhysicalDisk\Avg. Disk Sec/Read: Measures the average time, in seconds, to read data from the disk. A number larger than 25 milliseconds (ms) means the disk system is experiencing latency on reads. For mission-critical servers hosting SQL Server and Exchange Server, the acceptable threshold is much lower, approximately 10 ms. The most direct solution is a faster disk system.
PhysicalDisk\Avg. Disk Sec/Write: Measures the average time, in seconds, it takes to write data to the disk. A number larger than 25 ms means the disk system is experiencing latency on writes; again, the threshold for mission-critical servers is approximately 10 ms, and the likely solution is a faster disk system.
PhysicalDisk\Avg. Disk Queue Length: Indicates how many I/O operations are waiting for the hard drive to become available. If the value is larger than two times the number of spindles, the disk itself may be the bottleneck.
Memory\Cache Bytes: Indicates the amount of memory being used for the file system cache. There may be a disk bottleneck if this value is greater than 300 MB.
Allen Brown from ProjectWise development points out in his blog “ProjectWise Performance Tuning: Part 1” that the key thing to keep in mind with SQL Server is that the more disk spindles you have, the faster your access to the data. Bigger isn't always better if that big database drive takes forever to retrieve the data. Since SQL Server may be writing to the log files and the database files at the same time, it is very important to keep these files on separate drives as well.
What about SAN or RAID drives? SAN and RAID drives can be very fast at accessing data, and they give you great flexibility for maintaining redundant data should one of the drives fail. Since SAN and RAID systems are made up of multiple drives, can all the database files reside on the same logical disk drive? No. Even though a SAN has multiple drives, SQL Server may still end up trying to read and write to the same disk, with the potential for disk I/O contention. The bottom line is to keep the database files on separate drives, separate SANs, or separate RAID arrays.
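As an illustrative sketch, an existing log file can be repointed at a separate drive with ALTER DATABASE. The database and logical file names below are placeholders; the database must be taken offline and the physical file moved to the new location before it is brought back online.

```sql
-- Repoint the log file of a database at a separate drive
-- (placeholder names and path).
ALTER DATABASE [pw_projectwise]
MODIFY FILE (NAME = pw_projectwise_log,
             FILENAME = N'L:\SQLLogs\pw_projectwise_log.ldf');
```

The change takes effect the next time the database is started, after the file has been physically moved.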
Network
A network bottleneck, of course, affects the server's ability to send and receive data across the network. It can be an issue with the network card on the server, or the network may be saturated and need to be segmented. You can use the following counters to diagnose potential network bottlenecks.
Network Interface\Bytes Total/Sec: Measures the rate at which bytes are sent and received over each network adapter, including framing characters. The network is saturated if more than 70 percent of the interface bandwidth is consumed. For a 100-Mbps NIC that threshold is about 8.75 MB/sec (100 Mbps = 12.5 MB/sec; 12.5 MB/sec × 70 percent ≈ 8.75 MB/sec). In a situation like this, you may want to add a faster network card or segment the network.
Network Interface\Output Queue Length: Measures the length of the output packet queue, in packets. A value consistently greater than 2 indicates network saturation; you can address this by adding a faster network card or segmenting the network.
Index Maintenance
Like an index in a book, an index in a database lets you quickly find specific information in a table or view. Well-designed indexes can significantly improve the performance of database queries and applications. Indexes can become fragmented, however, which decreases SQL Server performance. Evaluate indexes periodically to determine whether fragmentation exists; rebuild or reorganize them as needed, and include this as part of an ongoing maintenance process.
The first step in deciding which defragmentation method to use is to analyze the index to determine the degree of fragmentation. By using the system function sys.dm_db_index_physical_stats, you can detect fragmentation in a specific index, all indexes on a table or indexed view, all indexes in a database, or all indexes in all databases.
See below for more information regarding this function.
Note: It is recommended to rebuild an index that is more than 30% fragmented and to reorganize an index that is between 10% and 30% fragmented. Rebuilding is most beneficial when the index contains at least 100 pages; for smaller indexes you may not see any difference after a rebuild.
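The thresholds above can be checked with a query along these lines, which reports fragmentation for every index in the current database via sys.dm_db_index_physical_stats:

```sql
-- Report indexes in the current database that exceed the thresholds
-- above: at least 10% fragmented and at least 100 pages.
SELECT OBJECT_NAME(ips.object_id) AS table_name,
       i.name                     AS index_name,
       ips.avg_fragmentation_in_percent,
       ips.page_count
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
  ON i.object_id = ips.object_id
 AND i.index_id  = ips.index_id
WHERE ips.avg_fragmentation_in_percent > 10
  AND ips.page_count >= 100
ORDER BY ips.avg_fragmentation_in_percent DESC;
```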
Suggested SELECTServer Indexes for Fragmentation:
Rebuild\Reorganize the Indexes:
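The rebuild/reorganize step itself might look like the following sketch; the index and table names are placeholders.

```sql
-- Reorganize a lightly fragmented index (10%-30% fragmentation).
ALTER INDEX IX_example ON dbo.ExampleTable REORGANIZE;

-- Rebuild a heavily fragmented index (more than 30% fragmentation).
ALTER INDEX IX_example ON dbo.ExampleTable REBUILD;

-- Or rebuild every index on a table in one statement.
ALTER INDEX ALL ON dbo.ExampleTable REBUILD;
```

Rebuilds take locks on the table unless an online rebuild (an Enterprise Edition feature) is used, so schedule them in a maintenance window.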
More details regarding this process can be found in the link below.
Statistics
Statistics contain information about the distribution of values in an index or column of a table, and they are created automatically for each index. The query optimizer uses this information about selectivity to determine how effective an index will be for satisfying a query. By default, the query optimizer updates statistics as necessary to improve the query plan. This behavior can be configured in the database properties from Microsoft SQL Server Management Studio.
You can improve query performance by using the UPDATE STATISTICS command or the sp_updatestats stored procedure to update statistics more frequently. sp_updatestats updates statistics on all tables in the database, whereas UPDATE STATISTICS lets you update statistics on a specific table. To view current statistics, run the DBCC SHOW_STATISTICS command.
Note: It is recommended to update the statistics if the results show an outdated date in the “Updated” column and/or if there is a substantial difference between the “Actual Number of Rows” and the “Estimated Number of Rows” for a table. You can get the “Actual Number of Rows” from the table properties in Microsoft SQL Server Management Studio.
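Put together, a minimal statistics pass might look like the sketch below; the table and index names are placeholders.

```sql
-- Update statistics on every table in the current database.
EXEC sp_updatestats;

-- Or update statistics on a single table (placeholder name).
UPDATE STATISTICS dbo.ExampleTable;

-- Inspect a statistics object, including when it was last updated.
DBCC SHOW_STATISTICS ('dbo.ExampleTable', IX_example);
```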
More details can be found in the links below.
There are many utilities and resources that help monitor the health and performance of a SQL Server database; here are a few:
Database Maintenance for ProjectWise
Increase SQL Server Performance for the SELECTServer
The SQL transaction logs are full
ProjectWise Performance Tuning: Part 1
Create a Full Database Backup (SQL Server)
Understanding SQL Server Backups
SQL Writer Service
Taking Your Server's Pulse
Microsoft® SQL Server® 2008 R2 Best Practices Analyzer tool
10 Baselining Tips for SQL Server: Lessons From the Field
Data Backup and Recovery