SharePoint 2010 performance relies heavily on database performance. SharePoint issues many database queries to process each page request, yet design focus tends to fall on the SharePoint application server architecture. Database server and network performance are among the major factors dictating the overall performance of SharePoint applications. When a SharePoint database grows beyond 100 GB, it begins to cause performance issues on the database server.
SharePoint Content Database
Content databases need to be set up carefully to attain maximum performance. Take the following into consideration:
- Only create files in the primary filegroup for the database.
- Distribute the files across separate disks.
- The number of data files should be less than or equal to the number of CPU cores. Count dual-core processors as two CPUs for this purpose; count each processor that supports hyper-threading as a single CPU.
- Create data files of equal size.
- Limit content database size (200 GB).
- Limit site collection size (100 GB).
- Enable BLOB caching.
- Place transaction logs on separate physical hard disks.
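The data-file guidelines above can be sketched as a small helper. This is an illustrative calculation only (the function name and numbers are assumptions, not a Microsoft tool): one data file per physical core, all files equally sized.

```python
# Illustrative sketch: given the physical core count and a target content
# database size, suggest a file layout that follows the guidelines above --
# at most one data file per core, all files of equal size.

def suggest_data_files(physical_cores: int, db_size_gb: float):
    """Return (file_count, size_per_file_gb) for equally sized data files.

    Hyper-threaded logical processors are ignored: each physical
    processor counts as one CPU for this calculation.
    """
    file_count = max(1, physical_cores)
    size_per_file_gb = db_size_gb / file_count
    return file_count, size_per_file_gb

# Example: a 200 GB content database on a server with 8 physical cores
print(suggest_data_files(8, 200))  # (8, 25.0)
```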
TempDB
SharePoint Server 2010 farm performance can be significantly impeded by insufficient disk I/O for tempdb. In medium to large SharePoint farm installations, tempdb can account for 20-45% of total I/O traffic.
- Only create files in the primary filegroup for the database.
- Distribute the files across separate disks.
- Place tempdb on a RAID 10 array (RAID 10 stripes across all the drives in the array to achieve higher I/O rates, so more drives in the array means higher performance).
- Create data files of equal size.
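To see why more drives in a RAID 10 array mean higher performance, here is a rough sizing sketch. The model is a deliberate simplification (the per-drive IOPS figure and function name are assumptions): reads can be served by every drive, while each write must hit both halves of a mirror pair, so write IOPS are roughly half the aggregate.

```python
# Rough RAID 10 sizing sketch (simplified model, not vendor math):
# usable capacity is half the raw capacity because of mirroring,
# reads are served by all spindles, and writes carry a penalty of 2.

def raid10_estimate(drives: int, iops_per_drive: int, drive_gb: int):
    usable_gb = drives * drive_gb // 2          # mirrored pairs halve capacity
    read_iops = drives * iops_per_drive         # every spindle serves reads
    write_iops = drives * iops_per_drive // 2   # RAID 10 write penalty of 2
    return usable_gb, read_iops, write_iops

# Example: 8 x 600 GB drives at an assumed 180 IOPS each
print(raid10_estimate(8, 180, 600))  # (2400, 1440, 720)
```

Doubling the drive count doubles both read and write IOPS under this model, which is the property the tempdb guidance relies on.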
Disk I/O
A well-performing I/O subsystem is a critical component of any SQL Server application. We need to consider the disk I/O workload that SharePoint generates on the database tier (per Microsoft IT's standard). If average disk response time is 20-35 ms (it should be less than 20 ms), then revisit the file layout: distribute the data files across separate disks, and separate database data and transaction log files onto different disks.
- Gather storage and SQL Server space and I/O requirements
- RAID levels for SQL Server
- SQLIO Disk Subsystem Benchmark Tool
Memory
Memory needs to be managed wisely for good performance. When SQL Server starts, it dynamically determines how much memory to allocate based on how much memory the operating system and other applications are currently using. If there is substantial memory pressure, for example Page Life Expectancy (PLE) consistently below 350 (values of 50-200 are very low), then memory should be increased. Fine-tuning the following SQL Server configuration settings will also help improve performance.
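The PLE rule of thumb above can be sketched as a check over sampled counter values. This is an illustrative snippet (the function name and sample values are assumptions); PLE is reported in seconds by the SQL Server Buffer Manager performance counters.

```python
# Illustrative check of Page Life Expectancy samples (in seconds)
# against the ~350 s floor used above.

def memory_pressure(ple_samples, floor_seconds=350):
    """Return (count_below_floor, consistently_low) for the samples."""
    low = [s for s in ple_samples if s < floor_seconds]
    return len(low), len(low) == len(ple_samples)

print(memory_pressure([120, 90, 200, 150]))  # (4, True)  -- add memory
print(memory_pressure([400, 500, 320]))      # (1, False) -- occasional dip
```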
Hardware and software requirements
Processor
- 64-bit, four cores for small deployments
- 64-bit, eight cores for medium deployments
RAM