Sorry, LZ, I misread your message.
For a 9000 GB (i.e. 9 TB) database I would take a different approach: check whether you can partition some of the big tables, and for tables with history data, put the historical data into separate filegroups. If that data really is static, you can mark those filegroups read-only so you do not need to back them up every time.
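Something like this is the idea (just a sketch; the database name MyBigDB, the filegroup HistoryFG, and the file paths are all made up, so adjust them for your setup):

    -- put the history data on its own filegroup
    ALTER DATABASE MyBigDB ADD FILEGROUP HistoryFG;
    ALTER DATABASE MyBigDB
        ADD FILE (NAME = HistoryData1, FILENAME = 'H:\Data\MyBigDB_History1.ndf')
        TO FILEGROUP HistoryFG;
    -- ...then move/rebuild the history tables or partitions onto HistoryFG...

    -- once the history data stops changing, make the filegroup read-only
    ALTER DATABASE MyBigDB MODIFY FILEGROUP HistoryFG READ_ONLY;

    -- back up the read-only filegroup once, then only back up the
    -- read/write filegroups in your regular backups (a partial backup)
    BACKUP DATABASE MyBigDB FILEGROUP = 'HistoryFG'
        TO DISK = 'K:\Backup\MyBigDB_HistoryFG.bak';
    BACKUP DATABASE MyBigDB READ_WRITE_FILEGROUPS
        TO DISK = 'K:\Backup\MyBigDB_Partial.bak';

Just remember to keep that one-time filegroup backup somewhere safe; you need it plus the partial backups to restore the whole database.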
Also, I believe a 3rd party tool (like RedGate SQL Backup) will help too. In my case, as my DB grows, SQL Backup can run the backup in multiple threads, with each thread responsible for one piece of the backup. You end up with multiple backup files, of course, but it can cut the overall backup time significantly (around 50% in my experience).
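Even without a 3rd party tool you can get a similar effect with plain T-SQL by striping the backup across several files. SQL Server uses a separate writer thread per backup device, so spreading the stripes over different drives speeds things up (the paths are examples again):

    BACKUP DATABASE MyBigDB
        TO DISK = 'K:\Backup\MyBigDB_1.bak',
           DISK = 'L:\Backup\MyBigDB_2.bak',
           DISK = 'M:\Backup\MyBigDB_3.bak',
           DISK = 'N:\Backup\MyBigDB_4.bak'
        WITH COMPRESSION, STATS = 10;  -- COMPRESSION needs SQL Server 2008+ and a supported edition

Same caveat as with sqlbackup: you need all of the stripe files together to restore.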