In my emulator I just cheated the light pen detection a bit. When the CRT beam passes the light pen's position (when it's pointing at the screen; that position is essentially the location the user pointed at on the last rendered frame), it simply records said position. If the user isn't pointing at the screen, it simply won't latch. That basically lets it record the position even if the screen isn't bright enough at the point being pointed at.
The button, on the other hand, is just a live binary value sent directly to the card itself (1 for pressed or 1 for not pressed; I don't remember which way around it was).
So it's basically a 'better' light pen input without the weakness of the real light pen (which was that the position on the screen has to be bright enough for it to detect the CRT raster beam and latch). UniPCemu will simply latch whenever the beam coordinates pass the light pen's position (if it's pointed at the screen, of course; otherwise it sits at a negative screen position and thus never latches).
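To illustrate the idea, here's a minimal sketch of latching purely on the beam position, with -1,-1 meaning "not pointing". The variable and function names (lightpen_x/y, beam_tick, etc.) are just for illustration, not UniPCemu's actual code:

```c
#include <stdio.h>

static int lightpen_x = -1, lightpen_y = -1; /* -1,-1 = not pointing at the screen */
static int latch_x = 0, latch_y = 0;
static int latched = 0;

/* Called for every beam position as the emulated CRT scans out a frame. */
static void beam_tick(int beam_x, int beam_y)
{
    /* Unlike a real light pen, there is no brightness check: latch purely on position. */
    if (beam_x == lightpen_x && beam_y == lightpen_y) {
        latch_x = beam_x;
        latch_y = beam_y;
        latched = 1;
    }
}

int main(void)
{
    lightpen_x = 100; lightpen_y = 50; /* user points somewhere on the screen */
    for (int y = 0; y < 200; ++y)
        for (int x = 0; x < 320; ++x)
            beam_tick(x, y);
    printf("latched=%d at %d,%d\n", latched, latch_x, latch_y);
    return 0;
}
```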
As for how UniPCemu maps the light pen, it depends on the host. Hosts with a mouse can use the right and left mouse buttons when not in direct input mode: pressing the right button points the pen at a place on the screen, and pressing the left button while the right button is held presses the pen's button.
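A rough sketch of that mouse mapping, fed with the host mouse state each frame; the struct and function names are made up for illustration and aren't UniPCemu's actual input code:

```c
#include <stdio.h>

struct penstate {
    int x, y;   /* pointed screen location, -1,-1 when not pointing */
    int button; /* live light pen button state */
};

static struct penstate pen = { -1, -1, 0 };

/* Map the current host mouse state to the pen (when not in direct mouse mode). */
static void map_mouse_to_pen(int mouse_x, int mouse_y, int left_down, int right_down)
{
    if (right_down) {
        pen.x = mouse_x;        /* right button points the pen at the cursor */
        pen.y = mouse_y;
        pen.button = left_down; /* left while right is held = pen button */
    } else {
        pen.x = pen.y = -1;     /* not pointing: the pen never latches */
        pen.button = 0;
    }
}

int main(void)
{
    map_mouse_to_pen(160, 100, 1, 1); /* left+right held: point and press */
    printf("pen at %d,%d button=%d\n", pen.x, pen.y, pen.button);
    return 0;
}
```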
Touch devices have an alternative input as well: touching the top 1/3 of the screen, centered horizontally (originally the middle mouse button), enables the light pen mode (which can be observed by the input mode indicator changing). Touching a place on the screen then sets the location of the light pen (just like the right mouse button on mouse-based PCs), and while one finger is pointing at a location, touching with another finger presses the button.
The system is pretty simple: split the screen into thirds both horizontally and vertically, producing a 3x3 grid; the cell at (0,1) (0-based, i.e. the top-centre cell) becomes the toggle that enables the two-finger pointing/button input for the other fingers.
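A minimal sketch of that grid check, assuming an arbitrary screen size and illustrative names (in_toggle_area, lightpen_touch_mode are not UniPCemu's actual identifiers):

```c
#include <stdio.h>

static int lightpen_touch_mode = 0;

/* Returns 1 when the touch falls in the top-centre cell of a 3x3 grid. */
static int in_toggle_area(int x, int y, int screen_w, int screen_h)
{
    int col = (x * 3) / screen_w; /* 0..2 */
    int row = (y * 3) / screen_h; /* 0..2 */
    return row == 0 && col == 1;  /* top row, middle column: cell (0,1), 0-based */
}

static void on_touch_down(int x, int y, int screen_w, int screen_h)
{
    if (in_toggle_area(x, y, screen_w, screen_h))
        lightpen_touch_mode = !lightpen_touch_mode; /* toggle pen input for the other fingers */
}

int main(void)
{
    on_touch_down(512, 50, 1024, 768); /* a touch in the top-centre area */
    printf("lightpen_touch_mode=%d\n", lightpen_touch_mode);
    return 0;
}
```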
A nice thing about how UniPCemu handles this is that only the starting location of the touching finger is recorded (basically it reuses the text-based layer's button inputs for that, just like the buttons in the bottom right corner of the screen). So you can move that finger out of the way to reach other locations with your other fingers, since only the starting position of the touch matters.
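Here's a small sketch of that "only the touch-down position counts" behaviour, with a second finger acting as the pen button. The finger IDs and handler names are assumptions for illustration, not the actual UniPCemu touch code:

```c
#include <stdio.h>

static int pen_x = -1, pen_y = -1; /* pointed location, -1,-1 = not pointing */
static int pen_button = 0;
static int pointing_finger = -1;   /* ID of the finger that set the location */

static void touch_down(int finger, int x, int y)
{
    if (pointing_finger < 0) {
        pointing_finger = finger; /* first finger: record only its starting position */
        pen_x = x;
        pen_y = y;
    } else if (finger != pointing_finger) {
        pen_button = 1;           /* an additional finger presses the pen button */
    }
}

static void touch_move(int finger, int x, int y)
{
    (void)finger; (void)x; (void)y; /* movement is ignored: the start position stays */
}

static void touch_up(int finger)
{
    if (finger == pointing_finger) {
        pointing_finger = -1;
        pen_x = pen_y = -1;         /* stop pointing */
    } else {
        pen_button = 0;             /* release the pen button */
    }
}

int main(void)
{
    touch_down(0, 200, 120); /* point somewhere */
    touch_move(0, 300, 150); /* moving out of the way doesn't change the location */
    touch_down(1, 50, 50);   /* second finger presses the button */
    printf("pen %d,%d button=%d\n", pen_x, pen_y, pen_button);
    touch_up(1);
    touch_up(0);
    return 0;
}
```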
Note that you can't press the button without first setting a location using one of the supplied input methods (one requires the right mouse button, the other a touch combined with touching the top-centre 1/3 of the window/screen).