Disk drive makers perform these tests at the factory; instead of maintaining a bad-block list, the drive firmware performs magic to swap in "good" blocks for the "bad" blocks, drawing on the over-provisioned (spare block) space. This testing continues as the drive is written to, so that a grown bad spot is covered by swapping as needed.
When the drive is unable to swap out bad space, the controller returns an error to the file system driver software. Sane file systems then try (and sometimes fail) to recover the data, mark the bad block as permanently allocated, and try to save the hopefully recovered data to another place, adjusting file extent lists as appropriate.
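On Linux you can trigger that kind of bad-block handling by hand. A minimal sketch, assuming an ext-family partition and using /dev/sdX1 as a placeholder for your actual partition (unmount it first):

```shell
# Scan the partition for unreadable blocks and record them in the file
# system's bad-block list so they are never allocated again.
# Read-only scan; the partition should be unmounted.
sudo e2fsck -c /dev/sdX1

# Or just run the scan directly and list what it finds, without
# changing anything on disk:
sudo badblocks -sv /dev/sdX1
```

On a modern drive any block this finds is one the firmware could no longer remap, so a non-empty result is itself a warning sign.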
As for the partition descriptor blocks, they are rarely written, so they rarely go bad. (Unless your drive fails so badly that it "spiral writes" through the on-disk partition table. If this happens, you have more problems than just a blown partition table.)
Information on the health of the drive is provided by the SMART (Self-Monitoring, Analysis and Reporting Technology) reporting system. Most professional operating systems regularly monitor the built-in health meters to warn you when a drive is approaching critical illness.
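You can also read those built-in health meters yourself with the smartmontools package. A minimal sketch, assuming smartctl is installed and using /dev/sda and /dev/nvme0 as placeholder device names:

```shell
# Overall pass/fail verdict from the drive's own SMART self-assessment
sudo smartctl -H /dev/sda

# Full attribute table: watch Reallocated_Sector_Ct (spares used),
# temperature, and the other wear indicators
sudo smartctl -A /dev/sda

# NVMe drives expose a different health log; smartctl reads it too
sudo smartctl -a /dev/nvme0
```

A slowly climbing reallocated-sector count is exactly the "approaching critical illness" signal the OS monitors are watching for.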
For Solid State Drive (SSD) devices, there is another technique for extending the life of the drive. It's called TRIM -- not an acronym -- and it is a cooperation between the operating system's file system driver and the SSD. The file system reports the coordinates of any unused space to the SSD hardware. This marking is so the SSD can use the open space for swapping good blocks for bad, similar to what is done with spinning rust. In Unix, the fstab(5) entry can have the "discard" option, which performs a TRIM call for every block released. This is a Bad Idea(TM) in that it can cause more wear than it saves. Instead, a sane OS will do a "TRIM cycle" periodically; Ubuntu 20.04 (both Desktop and Server editions) does this once a week.
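On a systemd-based Linux that weekly "TRIM cycle" is a stock timer, and you can run a cycle by hand. A sketch, assuming util-linux's fstrim is present:

```shell
# Ubuntu (and most systemd distros) ship a weekly trim timer:
systemctl status fstrim.timer

# Trim all mounted file systems that support it, once, by hand:
sudo fstrim -av

# For contrast, the per-release "discard" approach the text warns
# against is an /etc/fstab mount option (shown commented out on
# purpose -- prefer the timer):
#   /dev/sda2  /  ext4  defaults,discard  0  1
```

The timer approach batches the TRIM work into one pass, which is why it wears the drive less than issuing a discard on every freed block.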
SSD wear happens only on writes. So when the OS reports impending failure, stop writing to the drive and swap in a new device. Or make damn sure your backups are in pristine shape. Your choice.
Anything you know about how to reduce writes, especially on the Primary System Volume, would be greatly appreciated.
I do a lot of video processing these days and am writing to the system volume in a lot of ways I didn't expect. Even if I write directly to a USB stick, it still writes heavily to the system volume set first; the target file stays at 0 bytes until the processing reaches a certain point or is done.
And while this is happening, the USB stick and the SSD C: drive get HOT! I have another system where C: is an NVMe drive, and I worry about that one getting HOT AS WELL.
My Dell laptop had a nasty SSD in its M.2 slot. Even though it had a heat sink, the heat caused it to SLOW DOWN A LOT. I replaced it with a nice Samsung NVMe and WOW, what a difference. So you see why I want 21H2, where the SMART data -- TEMPERATURE and LIFECYCLE data -- can be seen. As long as the SPARES used is ZERO, I can keep from worrying.
Thanks, CO