NAS technology is 99% protocol.
There are 3 major flavors.
Microsoft SMB (usually provided by Samba on non-Windows systems)
Linux/Unix NFS
Apple Filing Protocol (AFP)
All of these are 'file level' protocols, and are thus NAS technology. Most consumer NAS boxes speak all 3 protocols, but Microsoft SMB gets the most mileage.
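A quick way to see this in practice: most boxes just leave all three services listening on their well-known ports, so a trivial port probe tells you what a given box speaks. A minimal Python sketch (the address is hypothetical; the port numbers are the standard ones: 445 for SMB, 2049 for NFS, 548 for AFP):

    import socket

    NAS_HOST = "192.168.1.50"   # hypothetical NAS address on the local network
    PORTS = {"SMB": 445, "NFS": 2049, "AFP": 548}

    for proto, port in PORTS.items():
        try:
            # A successful TCP connect means something is listening for that protocol.
            with socket.create_connection((NAS_HOST, port), timeout=2):
                print(f"{proto}: listening on port {port}")
        except OSError:
            print(f"{proto}: no answer on port {port}")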
Personally, I use a nice NFSv3 setup at home, with SMB as the fallback.
The actual hardware serving the storage can be literally anything, as long as it can speak the right protocols. In consumer devices, it's usually a single-board computer hosting one or more SATA disk drives. On higher-end corporate hardware, it's a service running on a storage controller with a disk shelf.
The cousin of NAS technology is SAN, or Storage Area Network.
This is a dedicated network for storage traffic only, usually isolated from things like the internet. It traditionally runs on Fibre Channel (whose arbitrated-loop mode has a passing resemblance to Token Ring), though modern devices can use Ethernet-based networks as well, and it makes very aggressive use of link bonding, multipath availability, and other fancy stuff.
SAN provides the infrastructure for a different class of 'remote storage' protocol: block protocols.
These are completely unaware of what individual files are, and instead expose raw block devices. The client treats one like an ordinary disk drive and accesses raw sectors (blocks). iSCSI is the usual protocol here, and it's used for things like virtual servers.
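To make 'raw sectors' concrete: once an iSCSI LUN is attached, the client OS just sees another block device, and anything on the client can read it at the block level with no notion of files at all. A rough sketch, assuming a Linux client where the LUN has shown up as /dev/sdb (hypothetical device path; needs root):

    import os

    DEVICE = "/dev/sdb"    # hypothetical: whatever block device the iSCSI LUN appeared as
    BLOCK_SIZE = 512       # logical block size; many modern devices use 4096
    BLOCK_NUMBER = 2048    # arbitrary block to read

    # Open the device read-only, seek to a block offset, and read one block.
    fd = os.open(DEVICE, os.O_RDONLY)
    try:
        os.lseek(fd, BLOCK_NUMBER * BLOCK_SIZE, os.SEEK_SET)
        raw = os.read(fd, BLOCK_SIZE)
    finally:
        os.close(fd)

    # The block protocol has no concept of files; this is just bytes at an offset.
    print(f"block {BLOCK_NUMBER}: {len(raw)} bytes, first 16 = {raw[:16].hex()}")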
In a large corporate environment, it's not at all unusual for an application server to be hosting a dozen or more instances of VMware (or a similar virtualizer), with all the virtual servers physically stored on a RAIDed storage controller serving up iSCSI LUNs (Logical Unit Numbers, the nomenclature for a SCSI device on a SCSI bus) over a SAN, with bonded connections. This is because data replication, fault tolerance, etc., can all be handled by the storage controller(s), and the application server can focus exclusively on the compute and memory I/O tasks of the virtualized OSes running on it.
When most people think of NAS, they are thinking of the consumer grade devices out there, not a storage controller. Still, modern NAS appliances are, loosely speaking, just very stripped down storage controllers. At the end of the day, it's still just a service running on the box, serving a filesystem local to the controller over a file level protocol. The major difference between a consumer NAS box and a corporate storage controller is scale, and the beefiness of the hardware.
A DIY NAS box, made from a computer and some NICs, is just a few PCIe cards and some elective purchases away from being a bona fide storage controller, able to service a local SAN if one felt so inclined.
Many enthusiasts and DIY types elect to build and administer their own NAS box, because the SBC-based consumer boxes are notoriously underpowered and lack robust data replication/protection (like advanced RAID). Most rely on weaksauce SoC-based SATA implementations that are restricted to only 2 SATA ports, at less than full SATA 3 speeds. This limits the number of drives that can be connected, and thus the level of RAID possible. Further, these SoC-based controllers are not fully bus mastering, meaning the whole device is brought to its knees by certain disk-I/O-heavy operations, like a RAID scrub or rebuild.
Once you sink the cost into a DIY box, you are basically building a storage controller. The difference between a NAS-only one and a fancy SAN-enabled one is just what cards you install, and whether you elect to get disk trays.
Concerning newer Windows clients vs. older clients, it's important to understand that this, too, is 99% about protocol; specifically, the version of the SMB protocol.
Really old clients (like the DOS client) use the 1.0 version of the SMB protocol. This version relies on a workgroup/domain 'browse master', NetBIOS name resolution (with WINS as the name server), and the NetBIOS-over-TCP/IP extension. The need for a browse master and NetBIOS name resolution to find hosts and resource paths (mechanisms replaced with DNS and pals in later versions) is the reason 'network name' resolution often fails with consumer devices configured to respond to SMB 1.0, and why raw IP address paths are often required.
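As a rough illustration (the name and address are hypothetical), this is the kind of fallback you end up doing when that old-style name resolution fails: the name lookup dies, but the share is still perfectly reachable by raw IP.

    import socket

    NAS_NAME = "mynas"                 # hypothetical NetBIOS/host name
    NAS_FALLBACK_IP = "192.168.1.50"   # hypothetical static address of the NAS

    try:
        # Only works if something (DNS, WINS, NetBIOS broadcast) can resolve the name.
        addr = socket.gethostbyname(NAS_NAME)
    except socket.gaierror:
        # Name resolution failed, which is exactly what happens on many SMB1-era setups,
        # so fall back to the raw IP, i.e. the \\192.168.1.50\share style of path.
        addr = NAS_FALLBACK_IP

    print("Use the share path: \\\\" + addr + "\\media")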
Other very important changes to the protocol over time include the LDAP and Kerberos security features (which SMB1 knows nothing about), along with their own sets of dependency requirements, like the Network Time Protocol. (Kerberos authentication will fail if the client's clock drifts too far out of sync with the NAS host's, five minutes by default, and needs NTP to keep that synchronization, etc.)
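If you want to sanity-check that last point yourself, comparing the local clock against an NTP server takes nothing but the standard library. A minimal sketch (the server is just the public pool; the 48-byte packet is the standard SNTP request format):

    import socket
    import struct
    import time

    NTP_SERVER = "pool.ntp.org"      # any reachable NTP server works
    NTP_EPOCH_OFFSET = 2208988800    # seconds between the 1900 NTP epoch and the 1970 Unix epoch

    def ntp_time(server: str = NTP_SERVER) -> float:
        # Minimal SNTP client request: LI=0, VN=3, Mode=3 packed into the first byte.
        packet = b"\x1b" + 47 * b"\x00"
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
            s.settimeout(5)
            s.sendto(packet, (server, 123))
            data, _ = s.recvfrom(48)
        # The transmit timestamp's seconds field sits at bytes 40-43 of the reply.
        seconds = struct.unpack("!I", data[40:44])[0]
        return seconds - NTP_EPOCH_OFFSET

    skew = abs(time.time() - ntp_time())
    print(f"clock skew vs NTP: {skew:.1f}s")
    print("within Kerberos' default tolerance" if skew < 300 else "Kerberos auth will likely fail")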
SMB3 and SMB1 are almost unrecognizable to each other. Very different animals.