The thing to understand about the 386 (and earlier) processors is that the instruction set only gives you basic integer arithmetic (add, subtract, multiply, divide) at very limited precision. If you are a programmer and you want to do ANY real-number calculations, or any integer calculations with non-trivial precision, you have the following options:
1. Assume a numeric co-processor (internal or external to the CPU) is present, and use the FPU instructions accordingly. This option yields the fastest execution and smallest code size, but back in the day, there were a lot of people who didn't have an FPU or didn't have a clue what an FPU was.
2. Assume there is no FPU and that there is a library of software routines available to provide floating point and high precision integer operations. This option has the advantage of working in any environment, but it is guaranteed to yield the worst possible performance since you never take advantage of the FPU if one is present.
3. Assume that the library of software routines is available, but write the code to make use of the FPU if one is present. Like option 2, this works in any environment, but the code will execute faster on machines with an FPU (which may or may not be advantageous, depending on what you are trying to accomplish). The big disadvantage of this option is that the code size will be larger than the other two options, and we all know how precious every byte was back in those days.
So the answer to your question hinges on two things: (1) how floating-point (or high-precision integer) intensive is the application, and (2) which approach listed above did the software author adopt for that particular application?
For point (1), there were/are applications where virtually all of the processing time is spent on floating point or high precision calculations. One example would be the software used to calculate the position of a satellite above the earth, either in real time or in tabular format for some arbitrary time period. Amateur radio operators use this type of application all the time in order to communicate through non-geostationary satellites or with manned outposts like the International Space Station. Applications which generate maps or map projections are another example of a floating point intensive application. At the other end of the spectrum, you have programs like word processors that make minimal use of arithmetic operations as a percentage of total instruction execution.
Games will vary greatly in terms of floating point usage. Real-world simulators like the Falcon 3 combat simulation that leileilol mentioned will require extensive floating point usage for a variety of things like the physics of the vehicles, ballistics calculations for armament, environment rendering, etc. Other games such as text-based dungeon crawls may require few, if any, floating point calculations.
As far as point (2) goes, it isn't always obvious which option the application programmer chose, and the documentation for the application may not explicitly say (although the answer is obvious if a numeric co-processor is listed as a requirement).