This post is spoilerite because it waxes philosophical about Linux and other filesystem flavors that are kinder to SSDs. Many people don't like such topics and would prefer not to see them. This topic is, however, situationally appropriate, given the question.
Spoiler
A good part of the problem with the "Write all the time! HDD is abundant, cheap, and free!" mantra from software makers is that flash memory array designs evolve faster than the filesystems that live on top of them, which leads to either very inefficient use of disk space (as with exFAT) or very destructive write amplification (as with NTFS). However, it seems pretty much impossible to convince software developers to simply not write things to the disk all the time. They want persistence of data, and Windows gives them ample facilities to accomplish that. Being respectful of the end user's computer and its resources is never a priority, and the results are entirely expected, obvious, and harmful.
The reason NTFS is so harmful to SSDs is how its cluster size arrangement works. The largest cluster size you can traditionally use with NTFS is 64 KB, and well before you get there, features of the filesystem simply break spectacularly: compression, most notably, only works with clusters of 4 KB or smaller.
Understanding why this matters requires understanding why the flash memory consortia out there want all the disposable flash memory devices using exFAT: exFAT allows cluster sizes in the multi-megabyte range (up to 32 MB).
Why is this preferable for flash memory, and SSDs in general? you might ask. I'll do my best to explain.
Flash memory, be it an SD card, an eMMC chip glued to an embedded device, a USB stick, or an NVMe SSD, is arranged in discretely erasable units, and larger units are easier to manufacture because they can use less sophisticated controllers. Flash thus performs at its best when full erase blocks are written contiguously, all at once: this wears the flash memory array the least, data throughput is highest, and it performs better in every capacity. The *ISSUE* is that these blocks are *NOT* friendly sizes for the older, well-established filesystems.
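To put numbers on that mismatch, here is a quick back-of-the-envelope sketch (the 4 MiB erase block is an assumed, illustrative figure; real erase block sizes vary by device and are rarely published):

```python
# Illustrative arithmetic: how many filesystem clusters fit inside one
# flash erase block. The erase block size is an assumed example value.
ERASE_BLOCK = 4 * 1024 * 1024  # 4 MiB, a plausible modern erase block

for label, cluster in [("NTFS default cluster", 4 * 1024),
                       ("NTFS classic maximum", 64 * 1024),
                       ("exFAT large cluster", 4 * 1024 * 1024)]:
    print(f"{label:22} {cluster:>9} B -> "
          f"{ERASE_BLOCK // cluster:>5} clusters per erase block")
```

With multi-megabyte clusters, one cluster maps onto one erase block, so every cluster write is a whole-block write -- exactly what the medium wants.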
NTFS was designed around 512-BYTE sectors, the raw sector size of old CMR spinning disk, and even its default 4 KB cluster is tiny by flash standards -- and again, flash memory wants to be allocated in blocks SIGNIFICANTLY bigger than this.
Since Microsoft does not want to introduce batching mitigations in how their disk subsystem dispatches writes, so that large blocks get committed contiguously, and instead wants to pretend the world still uses 512-byte spinning rust (because that is what plays nicest with their filesystem of choice), every file written to an NTFS volume causes a 'partial write' within an erase block, and write amplification occurs. This is especially true when disk compression is turned on, because of how NTFS writes compressed files: NOT contiguously. They are written as small, allocation-unit-sized file fragments, with enough free space between them that uncompress-in-place is possible. Now consider that in conjunction with how Windows automatically compresses the Component-Based Servicing log (CBS.log), how VERY VERBOSE they make that log (even when compressed, it routinely grows to multiple megabytes), and how VERY VERY OFTEN they push out updates.
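Here is a toy model of why those scattered fragments hurt, under assumed numbers (it treats every dirtied erase block as one full read-modify-write cycle, which is a simplification -- real flash controllers buffer and remap -- but the trend is the point):

```python
# Toy model: erase blocks dirtied by scattered 4 KiB fragment writes
# versus one contiguous write of the same data. All sizes are assumed,
# illustrative values.
import random

ERASE_BLOCK = 4 * 1024 * 1024        # assumed 4 MiB erase block
DISK = 256 * 1024**3                 # 256 GiB drive
FRAGMENT = 4 * 1024                  # allocation-unit-sized fragment
FILE_SIZE = 8 * 1024 * 1024          # an 8 MiB log file, say

# Contiguous: the file spans ceil(FILE_SIZE / ERASE_BLOCK) blocks.
contiguous = -(-FILE_SIZE // ERASE_BLOCK)

# Scattered: each fragment lands wherever free space happened to be.
random.seed(1)
slots = DISK // ERASE_BLOCK
scattered = len({random.randrange(slots)
                 for _ in range(FILE_SIZE // FRAGMENT)})

print(f"contiguous write : {contiguous} erase blocks touched")
print(f"scattered write  : {scattered} erase blocks touched")
print(f"worst-case churn : {scattered * ERASE_BLOCK // 2**20} MiB "
      f"physically rewritten for {FILE_SIZE // 2**20} MiB of data")
```

Two erase blocks versus roughly two thousand for the same 8 MiB of data: that gap is the write amplification.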
Windows is thus horrifically destructive to SSDs all by itself, without any third-party software running.
It does not have to be that way -- the disk subsystem could very easily identify that the disk it is working with is an SSD, determine the ideal erase block size, and then batch write operations appropriately. It could likewise inform the filesystem to commit compressed file fragments in a non-fragmented manner, but MS has consistently resisted and argued against this for several decades now.
Then, on top of that bundle of bologna, you have browser makers wanting to write multiple redundant copies of small files from websites, constantly, without checking whether the locally cached version is actually still valid -- and retaining old copies basically forever until manually cleared--
AND--
System process makers, like antivirus vendors, that want to keep very aggressive statefulness logs that scattershot the drive, and that are continually, heavily, and relentlessly written to. (Both the Sentinel and Webroot security suites do this, VERY aggressively and egregiously.)
But, AGAIN, it does not HAVE to be this way. I'll get to this in a bit.
Now, the reason why this is all spoilerite.
Other operating systems, where you don't have an opaque and obstinate developer population and a tone-deaf management culture, allow more direct control over these aspects of the disk subsystem and how their filesystems operate.
Take, for instance, Linux.
Its default filesystem, ext4, does not at the outset look all that much better for SSDs than NTFS -- only very modestly so, with its default allocation unit size of 4 KB. Arguably, that is ideally suited to "slightly more modern" spinning rust of the Advanced Format variety, and on its own it would be nearly as harmful to SSDs as NTFS is.
However, unlike NTFS, ext4 can be told to batch writes according to user-defined sizes and parameters. Setting that up requires the user to know what they are doing, but even that is leaps and bounds above and beyond what Microsoft offers, which is "NO! WON'T DO!".
Specifically, you can enable features that were initially intended for use with a hardware RAID controller -- one that stripes data across the array in a way that is completely opaque to the host, but which benefits from having data structures committed to it in larger, more structured batches. Since the RAID controller's logical volume appears to be a single disk, the RAID features of ext4 let you set the "stride" and "stripe width" sizes even with only one physical disk defined. This exists explicitly and specifically for those black-box hardware RAID controllers.
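As a minimal sketch of the arithmetic involved, assuming a 4 MiB erase block (an illustrative figure -- consult your device's datasheet or vendor tooling for the real one) and ext4's default 4 KiB filesystem block, the stride and stripe-width for a single "disk" come out like this (`/dev/sdX` is a placeholder):

```python
# Compute ext4 stride/stripe-width values so that allocation and
# writeback align to the flash erase block. The erase block size is
# an assumed example; real devices vary.
FS_BLOCK = 4 * 1024                # ext4 default block size
ERASE_BLOCK = 4 * 1024 * 1024      # assumed erase block size

stride = ERASE_BLOCK // FS_BLOCK   # filesystem blocks per chunk
stripe_width = stride              # one "disk", so width == stride

print(f"mkfs.ext4 -b {FS_BLOCK} "
      f"-E stride={stride},stripe-width={stripe_width} /dev/sdX")
```

With stride and stripe-width spanning one erase block, the allocator tries to place and flush data in erase-block-sized, erase-block-aligned runs.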
It also happens to let flash memory devices that want very large, contiguous writes be fed exactly what they want, while retaining the smaller allocation unit sizes desirable for the filesystem.
The disk subsystem caches, batches, and dispatches writes in chunks that are friendly to the underlying flash memory device, and the user is completely unaware of this. Further, the filesystem driver itself is more intelligent than NTFS's: when these options are enabled, it chooses blocks of free space for writes more intelligently, minimizing issues with rewrites or incomplete writes. This means even abusive applications that want to be "OH SO VERY STATEFUL!" don't clobber the shit out of the medium.
Additionally, this filesystem allows you to mount other filesystems on folder mountpoints cleanly and efficiently, and the OS provides very easy-to-set-up facilities (tmpfs) to host such additional filesystems entirely out of RAM to begin with -- useful for stopping the deleterious effects of browser caches.
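For instance, a tmpfs line in /etc/fstab such as `tmpfs /home/alice/.cache tmpfs size=512m,mode=700 0 0` (the path and size are illustrative) keeps a browser's cache entirely in RAM. Below is a quick sketch to verify that a cache path really is RAM-backed, by walking /proc/mounts (Linux only):

```python
# Report which mount point (and filesystem type) backs a given path,
# via the longest matching mount point listed in /proc/mounts.
import os

def backing_fs(path: str) -> tuple[str, str]:
    path = os.path.realpath(path)
    best_mnt, best_type = "", "unknown"
    with open("/proc/mounts") as mounts:
        for line in mounts:
            _dev, mnt, fstype, *_rest = line.split()
            if path == mnt or path.startswith(mnt.rstrip("/") + "/"):
                if len(mnt) > len(best_mnt):
                    best_mnt, best_type = mnt, fstype
    return best_mnt, best_type

mnt, fstype = backing_fs(os.path.expanduser("~/.cache"))
print(f"~/.cache lives on {mnt} ({fstype})")  # 'tmpfs' means RAM-backed
```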
NTFS *DOES* allow you to mount additional NTFS filesystems at folders as mount points, but Windows does not offer an easy-to-define, ephemeral, RAM-backed filesystem provider to do this with -- and instead permits the abusive writes, and further, commits them in the most egregiously stupid ways possible.
NTFS *COULD* very much receive an update that permits it to understand user-defined parameters for batching, staging, and arbitrarily large atomic writes, similar to how the RAID functions of ext4 work, but Microsoft does not want to "confuse" its users, and is adamant that everything is fine. (Even while the SD Card Association goes out of its way to tell you that if you are not using exFAT with huge clusters on Windows, YOU ARE NOT IN SPEC -- again, BECAUSE the media NEEDS large atomic writes.)