DBAs have all sorts of scripts and tools for managing SQL Server databases. In day-to-day operations you are down in the weeds with query plans, performance statistics, and various monitors. Pop up a level and you check the database backups and the size of the log files daily. Every once in a while you lift up to the "instance" level and review the server logs or disk throughput.
Every other month or so I escape all the way out to the top and review what has turned out to be my best tool for managing the shop overall: an Excel spreadsheet. You see, new databases seem to materialize on various instances all the time, and developers have a habit of spinning up new (poorly configured) instances all by themselves as well. This spreadsheet helps me keep track of the whole mess and figure out where my DBA love needs to focus next.
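When it is time to refresh the spreadsheet, you do not have to hunt for those materializing databases by hand. A sketch of the kind of query I mean, run against each instance you know about (sorting by create_date just surfaces newcomers first):

```sql
-- Enumerate every database on the instance you are connected to;
-- sorting by create_date brings recently materialized databases to the top.
SELECT name,
       create_date,
       state_desc
FROM sys.databases
ORDER BY create_date DESC;
```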
Let me go into some detail about the layout, what each column signifies, and why they are important. Each instance has its own section; in the header, adjacent to the instance name, I show the SQL Server version, the processor count and speed, and the memory available on the server. Next to that is its backup schedule, with an indication of frequency and the level of manual intervention required. This row of information comes in handy both when the CIO wants to suggest a different configuration on the VM boxes and when normal backups run out of space and I need to review when they /should/ have happened.
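Most of that header information can be pulled straight from the instance rather than from memory. A rough sketch (the physical_memory_kb column assumes SQL Server 2012 or later; older versions expose physical_memory_in_bytes instead):

```sql
-- Version, CPU count, and memory for the instance header row.
SELECT SERVERPROPERTY('ProductVersion') AS sql_version,
       SERVERPROPERTY('Edition')        AS edition,
       cpu_count,
       physical_memory_kb / 1024        AS memory_mb   -- SQL Server 2012+
FROM sys.dm_os_sys_info;
```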
Next comes a subheader row with column labels, and then a row (or more) for each database on that instance. The column next to the database name holds the size of the data storage in KB. This is useful because every few months, when I return to update the spreadsheet, I can readily tell which databases are growing most actively. Next comes the recovery model, Simple or Full (more about this in a later post).
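The data size and the recovery model can be gathered in one pass when updating the spreadsheet. Something along these lines (sys.master_files reports size in 8 KB pages, hence the multiplication):

```sql
-- Data-file size in KB and recovery model for every database on the instance.
SELECT DB_NAME(mf.database_id) AS database_name,
       SUM(mf.size) * 8        AS data_size_kb,   -- size is in 8 KB pages
       d.recovery_model_desc
FROM sys.master_files AS mf
JOIN sys.databases    AS d ON d.database_id = mf.database_id
WHERE mf.type_desc = 'ROWS'                       -- data files only
GROUP BY mf.database_id, d.recovery_model_desc
ORDER BY data_size_kb DESC;
```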
Next comes the location of the mdf database file; if the database has secondary filegroups, I show the locations of the secondary files below this (with as many rows as necessary to cover all the secondaries). Then come the location of the log file and the location of the backups. Note that to make the document more legible, I abbreviate the often long path references with a double-letter pseudo-disk designation, which is described in a legend at the bottom of each section.
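These locations, too, can be read off the instance rather than chased through the file system. A sketch covering the data and log files, plus the device used by the most recent backup ('YourDb' is a placeholder database name):

```sql
-- Physical locations of data (ROWS) and log (LOG) files for every database.
SELECT DB_NAME(database_id) AS database_name,
       type_desc,                 -- ROWS = mdf/ndf files, LOG = ldf files
       name AS logical_name,
       physical_name
FROM sys.master_files
ORDER BY database_name, type_desc;

-- Where the most recent backup of a given database was written.
SELECT TOP (1) mf.physical_device_name
FROM msdb.dbo.backupset AS bs
JOIN msdb.dbo.backupmediafamily AS mf
  ON mf.media_set_id = bs.media_set_id
WHERE bs.database_name = 'YourDb'   -- placeholder
ORDER BY bs.backup_finish_date DESC;
```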
Listing the file locations serves several useful purposes. When you are first assembling this spreadsheet, the discovery and documentation process helps you avoid the high risk of having all the files on a single physical drive (a notoriously dangerous arrangement). Later, when reviewing disk latency, you may find the spreadsheet useful in avoiding disk bottlenecks. Finally, in the dreaded clutches of disaster recovery, you will know where your lifesaving logs and backups are located.
Next comes a column I have labeled Write Coupling. This and the next column, Software Volatility, relate to how you design your recovery models. For each database I provide a subjective rating, from very low to very high, of how tightly the database writes are coupled to intricate processes and how much the underlying update software is changing (more on this in the later post about recovery models).
Next follows a column labeled SME, for subject matter expert. Typically one of the lead developers at your company knows the most about a particular database, and if you need to migrate tables or take the database offline for maintenance, this is the person from whom you will get clearance. When you are first cobbling together this spreadsheet, it is smart to hold a few meetings with the developers to nail this down.
Finally, there is a column for Notes that mostly alerts me to tasks and technical debt that need to be worked down, or to any special circumstances that excuse a database from the normal maintenance processes.
You can find all sorts of costly tools to assist you in managing an array of databases, but far and away the most useful and cost effective is this simple spreadsheet.