First post, by Xanthoma
Hello. Is anyone aware of any microprocessors that implement the “double dabble” algorithm in hardware for converting binary unsigned integers to decimal?
I prefer to keep all my integers in binary and only convert to decimal when displaying them to the user, so I don't use the Z80's DAA opcode to repair binary-coded decimal sums and differences.
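For reference, here's a minimal C sketch of the shift-and-add-3 ("double dabble") conversion I mean, done in software for an 8-bit value; the function name and the 8-bit/3-digit sizing are just for illustration:

#include <stdio.h>
#include <stdint.h>

/* Convert an 8-bit unsigned value to three packed BCD digits
   using the shift-and-add-3 ("double dabble") method. */
static uint16_t double_dabble_u8(uint8_t bin)
{
    uint16_t bcd = 0;  /* three BCD nibbles are enough for 0..255 */

    for (int i = 0; i < 8; i++) {
        /* Before each shift, add 3 to any BCD digit that is 5 or more,
           so the doubling carries correctly into the next digit. */
        for (int d = 0; d < 3; d++) {
            if (((bcd >> (4 * d)) & 0xF) >= 5)
                bcd += (uint16_t)(3 << (4 * d));
        }
        /* Shift the whole BCD register left, bringing in the next binary MSB. */
        bcd = (uint16_t)((bcd << 1) | ((bin >> (7 - i)) & 1));
    }
    return bcd;
}

int main(void)
{
    uint8_t v = 243;
    printf("%u -> 0x%03X packed BCD\n", v, double_dabble_u8(v));  /* prints "243 -> 0x243 packed BCD" */
    return 0;
}

In hardware the same thing is just a shift register with add-3 correction logic on each BCD nibble per shift, which is why I'd hoped some CPU would have baked it in.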