Autogrowth of SharePoint Content Databases – an Automated Solution

During a recent presentation at the London SharePoint User Group by Steve Smith of Combined Knowledge about SharePoint administration (and many other things), he discussed the issues surrounding the auto-growth of SharePoint content databases and the possible performance ramifications these can have when they are triggered during business hours.

As Steve pointed out, the default auto-growth setting for a newly created content database is to grow in 1MB increments:

Clearly, for a content database that is used off the bat with this configuration, a lot (and I mean a lot) of auto-growth operations will be performed on the database as users load content and even as they access the site collections that the content database contains. The recommendations from Microsoft are to pre-grow data and log files and to set the auto-growth to 10% – see Storage and SQL Server capacity planning and configuration (SharePoint Server 2010) for further details:
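As a hedged illustration of that recommendation (the database and logical file names below are placeholders – substitute your own), pre-growing the files and switching auto-growth to 10% might look like this in T-SQL:

```sql
-- Hypothetical example: database and logical file names are placeholders.
-- Pre-grow the data file and set auto-growth to 10%.
ALTER DATABASE [WSS_Content_Intranet]
MODIFY FILE (NAME = N'WSS_Content_Intranet', SIZE = 5120MB, FILEGROWTH = 10%);

-- Do the same for the log file.
ALTER DATABASE [WSS_Content_Intranet]
MODIFY FILE (NAME = N'WSS_Content_Intranet_log', SIZE = 1024MB, FILEGROWTH = 10%);
```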


These recommendations rightly point out that database growth should be proactively managed. So Steve’s presentation and that article got me thinking about a repairable SharePoint health analyzer rule that could warn when content databases are filling up and, if required, grow them automatically. What makes this a practical solution, I believe, is the ability to configure the rule so that database growths performed by the repair action of the health rule are only executed within a specified time window.

The health rule derives from SPRepairableHealthAnalysisRule so it can be configured to automatically repair (for repair, read grow) a database once it has exceeded a configurable capacity threshold. The rule supports four custom configurable parameters:

  <!-- Enter the database capacity percentage that is used to trigger -->
  <!-- a warning and potentially a scheduled database expansion. Values -->
  <!-- should be between 0.0 and 1.0. -->
  <Property Key="CapacityThreshold" Value="0.8" />
  <!-- Enter the BeginHour for the time period in which database -->
  <!-- expansions should occur. -->
  <Property Key="BeginHour" Value="1" />
  <!-- Enter the EndHour for the time period in which database -->
  <!-- expansions should occur. -->
  <Property Key="EndHour" Value="3" />
  <!-- Enter the percentage of growth the database should undertake -->
  <!-- during an expansion. Values should be between 0.0 and 1.0. -->
  <Property Key="GrowBy" Value="0.3" />

The CapacityThreshold property sets the level at which warnings about database capacity are raised. Once a database exceeds 80% capacity (the default threshold for the rule) a health analyzer warning is raised and is visible in Central Administration.

The BeginHour and EndHour properties define the time window in which, for databases that have exceeded their capacity threshold, growths should be executed by the rule. These growths will not occur if the ‘Repair Automatically’ button is pressed outside of this window. Ideally you should review the properties and behaviour of this rule and, if appropriate, set the rule to repair automatically. Please note that in order for the rule to repair automatically during the specified time window, the rule schedule should remain hourly:
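The decision logic described above can be sketched as follows (a hypothetical simplification of the rule, not its actual source – only the parameter names and defaults come from the configuration shown earlier; whether EndHour is inclusive is my assumption):

```python
def should_repair(fill_ratio, current_hour,
                  capacity_threshold=0.8, begin_hour=1, end_hour=3):
    """Return True when the database is over its capacity threshold
    and the current time falls inside the repair window."""
    over_threshold = fill_ratio > capacity_threshold
    in_window = begin_hour <= current_hour < end_hour
    return over_threshold and in_window

# A database 90% full at 02:00 qualifies for an automatic growth...
print(should_repair(0.9, 2))   # True
# ...but the same database at 09:00 does not.
print(should_repair(0.9, 9))   # False
```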


Lastly, the GrowBy property is used by the repair method to determine the amount of expansion a database should undertake. The default value is 30% – this means that if a database is 100MB in size and 90% full, the database will be grown to 130MB. The total database size is used to calculate the new database size, not the amount of space currently used.
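The growth arithmetic can be expressed in one line (a minimal sketch of the calculation described above, not the rule's actual code):

```python
def new_size_mb(current_size_mb, grow_by=0.3):
    """Grow relative to the total database size, not the space used."""
    return current_size_mb * (1 + grow_by)

# A 100MB database grows to 130MB, regardless of how full it is.
print(new_size_mb(100))  # 130.0
```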

The rule is packaged as part of the SharePoint Health Analyzer Rules project on

The source code for the rule can be reviewed here:

BTW: There is a quicker way to solve this entire auto-growth problem – make the content database read-only! ;-)

Get SharePoint to Automatically Read and Write XML With XMLParser

Have you ever wondered how the property promotion used by SharePoint to extract and write InfoPath form values to library columns works? XMLParser is the answer:

You can use the exact same technique in your own solutions to read and write XML content by updating list item values. This works due to the power of content types. All hail the content type!

The content type assigned to your library defines the columns (FieldRefs) that your XML will map to. By extending the definition of the FieldRef elements to include a Node attribute, we can map each field to an XML node via XPath.

Sample content type definition that maps MyLabel and MyValue fields to XML nodes:
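The definition itself appeared as an image in the original post; the sketch below is a hedged reconstruction built only from the names and IDs mentioned in this article – the FieldRef IDs for MyLabel and MyValue, the content type name and the XPath expressions are hypothetical placeholders:

```xml
<!-- Hypothetical reconstruction: the MyLabel/MyValue field IDs, the content
     type name and the Node XPath values are placeholders. -->
<ContentType ID="0x0101007438f6c6e5834860a94a8284a8c7106c"
             Name="MyXmlContentType"
             Group="Demo Content Types">
  <FieldRefs>
    <!-- The Node attribute maps each column to an XML node via XPath. -->
    <FieldRef ID="{11111111-1111-1111-1111-111111111111}"
              Name="MyLabel"
              Node="/MyData/MyLabel" />
    <FieldRef ID="{22222222-2222-2222-2222-222222222222}"
              Name="MyValue"
              Node="/MyData/MyValue"
              Aggregation="sum" />
  </FieldRefs>
</ContentType>
```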


Notice that the MyValue FieldRef also includes an Aggregation attribute. This attribute can be used to perform mathematical functions on values (see the link above for the list of available functions). The XPath expressions I’m using reach into the XML schema of the uploaded files and identify where in the XML files these fields should be mapped. Here’s the very basic XML I’m using in this demo:
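The demo XML was also shown as an image; a hedged sketch consistent with the field names above (the root element name and the values are assumptions – only the field names and the content type ID come from this article) might look like:

```xml
<?xml version="1.0" encoding="utf-8"?>
<?MicrosoftWindowsSharePointServices ContentTypeID="0x0101007438f6c6e5834860a94a8284a8c7106c"?>
<MyData>
  <MyLabel>Demo label</MyLabel>
  <MyValue>42</MyValue>
</MyData>
```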


Lastly, there is one more important FieldRef that needs to be included in your content type definition:
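A minimal sketch of that FieldRef, reconstructed from the field ID given below (additional attributes may have appeared in the original screenshot):

```xml
<!-- {4B1BF6C6-4F39-45ac-ACD5-16FE7A214E5E} is the internal field ID
     of the ContentType site column. -->
<FieldRef ID="{4B1BF6C6-4F39-45ac-ACD5-16FE7A214E5E}" Name="ContentType" />
```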


This FieldRef is used by the XMLParser to ensure that the appropriate content type ID is written to the XML file. In turn, the XML files uploaded to SharePoint must also contain the <?MicrosoftWindowsSharePointServices ContentTypeID="0x0101007438f6c6e5834860a94a8284a8c7106c"?> element which is then used to identify which content type the XML file should be mapped to. The content type ID specified in the FieldRef element and the XML must match for the XMLParser to work. Finally, this FieldRef must use the ID of {4B1BF6C6-4F39-45ac-ACD5-16FE7A214E5E} – this is the internal field ID for the content type site column.

Note: Replace the content type ID in my demo with your own content type ID. Do not replace the FieldRef ID.

Once your content type is fully defined, deploy it to SharePoint, create a library to host your content type and then upload an XML file that includes the <?MicrosoftWindowsSharePointServices processing instruction described above.

The results should be instantly visible: before I’ve even confirmed the document upload, the XMLParser has executed, found my processing instruction in the XML file, found the same content type assigned to my library and performed the mappings defined by my FieldRefs:
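The extraction side of what the XMLParser does can be mimicked in a few lines (a hypothetical illustration using Python's standard library, not SharePoint's implementation; the element names and XPath expressions are assumptions matching the hypothetical demo names used in this post):

```python
import xml.etree.ElementTree as ET

# Hypothetical demo XML. In SharePoint, a MicrosoftWindowsSharePointServices
# processing instruction would also identify the content type to map against.
DOC = """<?xml version="1.0"?>
<MyData>
  <MyLabel>Demo label</MyLabel>
  <MyValue>42</MyValue>
</MyData>"""

root = ET.fromstring(DOC)

# Map field names to XPath expressions, as the FieldRef Node attribute does.
field_map = {"MyLabel": "./MyLabel", "MyValue": "./MyValue"}
values = {name: root.find(xpath).text for name, xpath in field_map.items()}
print(values)  # {'MyLabel': 'Demo label', 'MyValue': '42'}
```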


Now that my file is uploaded, the values extracted from the XML are available to me in the list:


If I update the list item and then download the XML document, the XMLParser will ensure that any new values entered into my list item columns are written back to the underlying XML before it is downloaded, ensuring the list item values and XML values remain in sync.

The full content type definition used in this demo:


The full sample xml used in this demo:


Both these files can be downloaded from my SkyDrive: