When you're setting these numbers with management, you need to make them aware that certain forms of corruption are more serious than others, and may take longer to recover from. If system tables or clustered indexes become corrupt, you're potentially looking at a much more invasive procedure than if a nonclustered index gets a little wonky - something you can disable and rebuild pretty easily.

Either way, you're looking at an RTO of at least how long it takes you to restore your largest database, assuming the corruption isn't present in your most recent full backup.

That's why backup checksums are important. They're not a replacement for regular consistency checks by any means, but they can provide an early warning for some types of page corruption - if you have page verification turned on, and your pages have been assigned a checksum. If you use a third-party backup tool that doesn't allow you to use the backup checksum option, stop using it. And turn on Trace Flag 3023 until you find a replacement that does.

But there's a simple equation you can do: the shorter your RTO for corruption, the longer your RPO. It's real easy to run repair with allow data loss immediately. The amount of data you lose in doing so is ¯\_(ツ)_/¯

Which is why you need to carefully consider…

Backup retention

The shorter the period of time you keep backups, the more often you need to run DBCC CHECKDB. If you keep data for two weeks, weekly is a good starting point. If you take weekly fulls, you should consider running your DBCC checks before those happen. A corrupt backup doesn't help you worth a lick. If your data only goes back two weeks, and your corruption goes back a month, best of luck with your job search.

Of course, keeping backups around for a long time may be physically impossible, depending on…

How much data you have

The more you have, the harder it is to check it all. It's not like these checks are a lightweight process. They chew up CPU, memory, disk I/O, and tempdb. They don't cause blocking, the way a lot of people think they do, because they take the equivalent of a database snapshot to perform the checks on. It's transactionally consistent, meaning the check is as good as your database was when the check started.

You can make things a little easier by running with the PHYSICAL_ONLY option, but you lose out on some of the logical checks. The more complicated process is to break the DBCC checks into pieces and run them a little every night. This is harder, but you stand a better chance of getting everything checked. Especially if you have terabytes and terabytes of data, and really a short…

Maintenance window

Are you 24×7? Do you have nights or weekends to do this stuff? Are you juggling maintenance items alongside data loads, reports, or other internal tasks? Your server may have a different database for different customer locations, which means you have a revolving maintenance window for each zone (think North America, Europe, APAC, etc.), so at best you're just spreading the pain around.

One way out is to offload the work: restore your backups to another server and run the checks there. Sure, it can be a bear to script out yourself. Automating rotating backups and restores can be a nightmare - so many different servers with different drive letters. Dell LiteSpeed has been automating this process since at least version 7.4, and it's not like it costs a lot. For sure, it doesn't cost more than you losing a bunch of data to corruption. If you're the kind of shop that has trouble with in-place DBCC checks, it's totally worth the price of admission.

Tell me how you tackle DBCC checks in the comments.
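For the nonclustered index case, the fix can be as simple as a disable and rebuild. A minimal sketch, with made-up table and index names:

```sql
-- A corrupt nonclustered index can often be taken out of the picture
-- and rebuilt from the base table, with no restore required.
ALTER INDEX [IX_YourIndex] ON [dbo].[YourTable] DISABLE;
ALTER INDEX [IX_YourIndex] ON [dbo].[YourTable] REBUILD;
```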
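Backup checksums and Trace Flag 3023 look like this in T-SQL; the database name and backup path are placeholders:

```sql
-- Take a full backup with checksums, so existing page checksums are
-- verified as pages are read and the backup gets its own checksum.
BACKUP DATABASE [YourDatabase]
TO DISK = N'X:\Backups\YourDatabase.bak'
WITH CHECKSUM, INIT;

-- Re-verify the checksums without actually restoring the backup.
RESTORE VERIFYONLY
FROM DISK = N'X:\Backups\YourDatabase.bak'
WITH CHECKSUM;

-- If your backup tool won't request checksums itself, Trace Flag 3023
-- makes CHECKSUM the default for all backups on the instance.
DBCC TRACEON (3023, -1);
```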
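Those early warnings only exist if page verification is turned on. A quick way to audit and fix it, with a placeholder database name:

```sql
-- Find databases that aren't using CHECKSUM page verification.
SELECT name, page_verify_option_desc
FROM sys.databases
WHERE page_verify_option_desc <> N'CHECKSUM';

-- Switch one over. Note that existing pages only pick up a checksum
-- the next time they're modified and written back to disk.
ALTER DATABASE [YourDatabase] SET PAGE_VERIFY CHECKSUM;
```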
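For the record, the repair-with-data-loss route looks like this; it requires single-user mode, and the warning in the comment is the whole point:

```sql
-- LAST RESORT: this can deallocate whatever pages it needs to in
-- order to make the database structurally consistent again. What
-- data you lose is exactly the shrug described above.
ALTER DATABASE [YourDatabase] SET SINGLE_USER WITH ROLLBACK IMMEDIATE;

DBCC CHECKDB (N'YourDatabase', REPAIR_ALLOW_DATA_LOSS);

ALTER DATABASE [YourDatabase] SET MULTI_USER;
```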
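When you're deciding how often to run checks against your retention window, it helps to know when the last clean check actually finished. On SQL Server 2016 SP2 and later this is exposed directly; the database name is a placeholder:

```sql
-- Returns the datetime of the last successful DBCC CHECKDB
-- (1900-01-01 means no clean check is recorded).
SELECT DATABASEPROPERTYEX(N'YourDatabase', 'LastGoodCheckDbTime')
    AS last_good_checkdb;
```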
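The lighter-weight and piecemeal options can be sketched like so, with placeholder names; later nights would rotate through the remaining tables:

```sql
-- Lighter-weight: physical structure and page checksums only,
-- skipping most of the logical checks.
DBCC CHECKDB (N'YourDatabase') WITH PHYSICAL_ONLY, NO_INFOMSGS;

-- Piecemeal alternative, spread across the week.
-- Night 1: allocation and catalog checks.
DBCC CHECKALLOC (N'YourDatabase') WITH NO_INFOMSGS;
DBCC CHECKCATALOG (N'YourDatabase') WITH NO_INFOMSGS;

-- Nights 2 onward: a subset of tables per night.
DBCC CHECKTABLE (N'dbo.YourBigTable') WITH NO_INFOMSGS;
```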
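A skeletal version of the restore-elsewhere-and-check pattern; the logical file names and paths here are assumptions you'd replace with your own:

```sql
-- On the restore server: restore the latest full backup under a
-- different name, re-verifying backup checksums along the way...
RESTORE DATABASE [YourDatabase_Check]
FROM DISK = N'X:\Backups\YourDatabase.bak'
WITH MOVE N'YourDatabase'     TO N'D:\Data\YourDatabase_Check.mdf',
     MOVE N'YourDatabase_log' TO N'L:\Log\YourDatabase_Check.ldf',
     CHECKSUM, STATS = 10;

-- ...run the full check there, off the production box...
DBCC CHECKDB (N'YourDatabase_Check') WITH NO_INFOMSGS, ALL_ERRORMSGS;

-- ...and drop the copy when you're done.
DROP DATABASE [YourDatabase_Check];
```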