To understand PC memory and architecture, it helps to know "where it came from", and why...
"In the beginning" computers were large rooms filled with racks of electronics, all connected to a "van sized" box full of electronics called the "Central Processor". Processors are classified by "bits" meanoing how many digital "bits" of data can be manipulated in one "cycle". These mainframe systems were often 32, 40, 64, 96 or more "bits".
IBM was a well-known maker of these "Big Iron" systems.
As semiconductor technology improved, the needed circuitry could be made smaller, until a company called Intel decided they could put a limited version of a processor onto a single chip - the "microprocessor".
The first Intel offering was the 4-bit 4004. Very limited - I don't recall any significant computer using the 4004 ... but it was a game changer in that it showed the ability to "program" functionality instead of hard-wiring it...
Their next offering was the 8-bit 8008. Much more capable/powerful - you can experience an 8008 computer via my MOD8 simulator.
Then Intel made the 8-bit 8080, which was more powerful and had a 16-bit memory "address bus" allowing it to access 64K of memory. Now enough to make a "real" computer, and Ed Roberts of MITS created the "Altair" - considered by some to be the first "Personal Computer". You can experience it with my Altair simulator.
Then Intel made the 16-bit 8086 - kept similar to the 8080 to ease converting software. To allow more memory, the address bus was expanded to 20 bits (1M).
IBM noticed and chose the 8086 family (specifically the 8088, an 8086 with an 8-bit external data bus) to make the "IBM Personal Computer" - the early PC.
To keep it more 8080-like, Intel introduced the concept of memory "segments": 16-bit registers which form the top 16 bits of the 20-bit memory address. This allowed the processor to still "logically" use 16-bit memory addressing, but the segment could be moved around to any 16-byte boundary in the 1M 8086 address space.
There were separate segment registers for Code, Data and Stack, as well as an "Extra" one for other cases where the code had to go outside its available memory.
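To make the arithmetic concrete, here's a tiny C sketch (purely my own illustration - nothing to do with actual hardware or BIOS code) showing how a segment:offset pair becomes a 20-bit physical address, and how two different pairs can land on the same byte:

  #include <stdio.h>
  #include <stdint.h>

  /* 8086 real-mode address calculation: the segment is shifted left 4 bits
     (multiplied by 16) and the 16-bit offset is added on. */
  uint32_t phys(uint16_t segment, uint16_t offset)
  {
      return ((uint32_t)segment << 4) + offset;
  }

  int main(void)
  {
      /* 1234:5678 -> 12340 + 5678 = 179B8 */
      printf("1234:5678 -> %05lX\n", (unsigned long)phys(0x1234, 0x5678));

      /* Different segment:offset pairs can reach the same byte, because
         segments can start on any 16-byte boundary. */
      printf("0040:0000 -> %05lX\n", (unsigned long)phys(0x0040, 0x0000));
      printf("0000:0400 -> %05lX\n", (unsigned long)phys(0x0000, 0x0400));
      return 0;
  }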
In the original 8086, memory addresses were computed as (segment*16)+offset. An address which happened to exceed 20 bits (it can, by as much as 64K-16 bytes) wrapped around to an offset from 0. As more advanced processors gained "protected mode" and supported more than 1M of RAM, this became "confusing" - some CPUs would still wrap real-mode addresses around to 0, some would flag an error-interrupt, and some would allow access to the (64K-16) bytes of memory ABOVE 1M ...
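And here is that wrap-around quirk in the same sketch form (again just an illustration of the arithmetic - which behaviour a real machine shows depends on the CPU and whether the address line above 1M is enabled):

  #include <stdio.h>
  #include <stdint.h>

  int main(void)
  {
      /* FFFF:0010 computes to FFFF0 + 0010 = 100000, which needs 21 bits. */
      uint32_t full = ((uint32_t)0xFFFF << 4) + 0x0010;

      /* The original 8086 has only 20 address lines, so the result wraps
         back around to 00000.  Later CPUs (with the 21st address line,
         "A20", enabled) really do reach the memory just above 1M. */
      uint32_t wrapped = full & 0xFFFFF;

      printf("FFFF:0010 computed = %06lX\n", (unsigned long)full);
      printf("FFFF:0010 wrapped  = %05lX\n", (unsigned long)wrapped);

      /* The highest address reachable this way is FFFF:FFFF = 10FFEF,
         the top of the (64K-16) bytes above the 1M mark. */
      printf("FFFF:FFFF computed = %06lX\n",
             (unsigned long)(((uint32_t)0xFFFF << 4) + 0xFFFF));
      return 0;
  }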
There's also Bill Gates' famous quote, "640K ought to be enough for anyone", about Microsoft's decision to limit DOS to 640K of contiguous memory, with I/O devices etc. mapped above.
Some systems could map actual memory into the area above 640K, and this, along with the (64K-16) bytes accessible above 1M, became known as various types of "High" memory. And given that there were no official standards on how to allocate or manage it, various "high memory" software drivers became common and required.
And... with more advanced processors and hardware which could be configured to access other memory above 1M, systems/drivers were defined to create XMS, EMS etc.
I hope this helps a little ... I've not gone into details, but knowing how it came to be might help you understand the specifics as you discover them.