There is a question running right now in the EE threads about a secondary log file that cannot be shrunk.
Backups are failing because of it, the log cannot grow, cannot be reset; basically everything to do with that log is a bit of a disaster...
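For what it's worth, a minimal sketch of the usual first steps (YourDb and YourDb_Log are placeholder names - substitute the real database name and logical log file name, and your own backup path): check why SQL Server is holding the log, take a log backup, then shrink it.

USE master;
GO
-- See what is preventing log truncation (e.g. LOG_BACKUP, ACTIVE_TRANSACTION, REPLICATION)
SELECT name, log_reuse_wait_desc
FROM sys.databases
WHERE name = 'YourDb';
GO
-- Assumes FULL recovery model; the path is a placeholder
BACKUP LOG YourDb TO DISK = 'D:\Backups\YourDb_log.trn';
GO
USE YourDb;
GO
-- Shrink the logical log file back to a sensible size (target in MB)
DBCC SHRINKFILE (YourDb_Log, 1024);
GO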
Then there is the dreaded page problem, where a DBCC check reports an unknown object with a page error (normally it shows you the object, but here the object name is missing).
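A minimal sketch of how you might chase that down (YourDb and the object ID are placeholders): run a full check, then try to resolve the object ID quoted in the error message against the metadata.

USE YourDb;
GO
-- Full integrity check, errors only
DBCC CHECKDB ('YourDb') WITH NO_INFOMSGS, ALL_ERRORMSGS;
GO
-- 123456789 is a placeholder for the object ID reported in the CHECKDB error
SELECT OBJECT_NAME(123456789) AS object_name, name, type_desc
FROM sys.objects
WHERE object_id = 123456789;
GO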
Or things like moving database files around, or using detach/attach to move a database to another location. Even not having sufficient disk space can create problems. Some of those issues come down to not fully understanding what is going on, or not using the right tools for the job.
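As a rough sketch, the supported way to relocate files without detaching looks something like this (YourDb, the logical file names and the paths are all placeholders):

ALTER DATABASE YourDb SET OFFLINE;
GO
-- Point SQL Server at the new locations
ALTER DATABASE YourDb MODIFY FILE (NAME = YourDb_Data, FILENAME = 'E:\Data\YourDb.mdf');
ALTER DATABASE YourDb MODIFY FILE (NAME = YourDb_Log,  FILENAME = 'F:\Logs\YourDb_log.ldf');
GO
-- Physically copy the MDF/LDF to the new locations while the database is offline, then
ALTER DATABASE YourDb SET ONLINE;
GO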
Then there are hardware issues: disk problems, server problems, and more ambiguous faults in the physical network. And environmental errors like power failures, unexpected or unplanned shutdowns, and other catastrophic failures.
Really, there are quite a few things that can go wrong. In fact, the more you think about it, given the size of some of these files, it is quite a tribute to the database vendors that they build in such robustness.
Right, so what to do...
Build a contingency plan - a disaster recovery plan. Decide what minimum granularity of loss (both data loss and time offline) your business can tolerate. That can depend on a lot of things - if you still have paperwork, for example, then part of the recovery process might be to re-key it; if you are completely electronic, then you need a more robust database recovery plan.
If you are totally dependent on your database, then you will probably want to investigate mirroring and clustering, whereby a combination of hardware and database software helps ensure that there is always a "healthy" database available (well, as much as possible). That can get quite expensive, so you need to ascertain whether it is an appropriate course of action for you.
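If mirroring is in place, a quick sketch for checking its state - this covers classic database mirroring only, not clustering:

-- Databases with mirroring configured, and their current role and state
SELECT DB_NAME(database_id) AS database_name,
       mirroring_role_desc,
       mirroring_state_desc,
       mirroring_partner_name
FROM sys.database_mirroring
WHERE mirroring_guid IS NOT NULL;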
Now the "easy" things to do (regardless of any other approach) are all about keeping your database "healthy" by planning ahead. Try to calculate your transaction volumes and preallocate space (ie size of those database files), make sure you run maintenance jobs for database backups, transaction log backups, and maintenance (like index rebuilds, reorganise pages etc), look at archiving old information into secondary databases or even secondary data files (partitioning).
Now sizing a database is all about setting a size and then filling it up. Obviously if you start too small and try to fill it up, it is going to have to grow - which means grabbing more physical disk. Every time that happens you run risks, from fragmentation (more a performance problem) through to running out of physical disk entirely. Getting the sizing right is really about matching your physical environment, so that backups have a place to go, your database files have sufficient room, tempdb has sufficient disk, and the transaction logs are being managed.
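A minimal sketch of pre-allocating space and setting sensible, fixed growth increments (the logical file names and the sizes are placeholders - size for your own volumes and volumes of data):

ALTER DATABASE YourDb
    MODIFY FILE (NAME = YourDb_Data, SIZE = 20480MB, FILEGROWTH = 1024MB);
ALTER DATABASE YourDb
    MODIFY FILE (NAME = YourDb_Log, SIZE = 4096MB, FILEGROWTH = 512MB);
GO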
You do need to monitor, or automate the monitoring of, your database. Set up the maintenance plans, have them email their results, and then check them - even just a daily glance. If you can intercept an error before your users do, you are likely to be in a much stronger position to recover.
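Two quick daily checks, as a sketch: when each database last had a full backup, and whether any suspect pages have been recorded.

-- Last full backup per database (NULL means never backed up)
SELECT d.name,
       MAX(b.backup_finish_date) AS last_full_backup
FROM sys.databases d
LEFT JOIN msdb.dbo.backupset b
       ON b.database_name = d.name AND b.type = 'D'
GROUP BY d.name
ORDER BY last_full_backup;
GO
-- Should normally return no rows at all
SELECT * FROM msdb.dbo.suspect_pages;
GO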
I said before you might like to partition, or add database files, but really, don't do that unless you truly need to. The more files you have, the greater the possibility that something breaks (or even gets forgotten about - yes, it happens - in an infrequent or fairly ad-hoc process). Having said that, it is possible to recover individual files or filegroups, as sketched below. Simply put, don't get into tricky areas unless you need to.
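If you do go down that road, a hedged sketch of a filegroup backup and restore (YourDb, ARCHIVE_FG and the path are placeholders):

BACKUP DATABASE YourDb
    FILEGROUP = 'ARCHIVE_FG'
    TO DISK = 'D:\Backups\YourDb_archive_fg.bak';
GO
-- Later, restore just that filegroup, then roll forward with log backups
RESTORE DATABASE YourDb
    FILEGROUP = 'ARCHIVE_FG'
    FROM DISK = 'D:\Backups\YourDb_archive_fg.bak'
    WITH NORECOVERY;
GO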
Most of all, make sure your MDF file is the one file you protect above all else (oh, and any NDF secondary data files). Log files can be regenerated through a few different methods.
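For example, if the database was shut down cleanly and the log has gone missing, one method is to attach the MDF and let SQL Server build a new log (the name and path are placeholders; this does not work for a database that stopped mid-transaction):

CREATE DATABASE YourDb
    ON (FILENAME = 'E:\Data\YourDb.mdf')
    FOR ATTACH_REBUILD_LOG;
GO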
Have a look at the MS whitepaper on high availability:
http://technet.microsoft.com/en-us/library/ee523927(SQL.100).aspx

Is that the type of thing you want to discuss?