Base conversion is the conversion between notations of numbers, NOT between the numbers themselves. In this entry I discuss a common misconception about the methods of base conversion, and another about ‘base conversion’ in computer programs.

When I was first introduced to positional notations of different bases, I was immediately taught how to convert decimal numbers to binary ones and vice versa.

Traditionally, at least traditionally in China, we say ‘when you convert a number from decimal to binary, you do repeated *division with remainder* by 2 until the quotient becomes 0, and write the remainders in reverse order’. Like this:$\ConvertDecToBinFirst$Similarly, you convert a number from decimal to hexadecimal by dividing by 16, writing remainders greater than 9 as letters.

Also traditionally, we say ‘when you convert a number from binary to decimal, you sum up the values represented by each digit of the binary number’. The same applies to conversion from hexadecimal to decimal, except each digit carries a different value. For example,$\ConvertHexToDecFirst$

With this ‘repeated division from decimal to others, series summation from others to decimal’, one can convert numbers between any two bases, going via decimal when necessary. But this practice tends to ossify one’s mind into thinking that decimal is intrinsically different, because it looks so special in base conversion.

**No, it’s not.** It struck me a long time ago (long before the writing of this entry) that the reason we use different methods from/to decimal is that we are only familiar with arithmetic *in decimal form*. Arithmetic itself does not depend on any specific form of numbers. (And if it had to pick one, the form would be unary, by von Neumann’s construction of the natural numbers.) In other words, **the speciality of decimal numbers is not intrinsic but habitual.**

If you can do arithmetic in binary, you can use binary as *the decimal*: when you convert from binary, you do repeated division, and when you convert to binary, you do series summation. If you can do arithmetic in both base-A and base-B, you can convert from base-A to base-B with either repeated division or series summation, as you choose. For example, you can convert from decimal to binary like this:$\ConvertDecToBinSecond$And you can convert from hexadecimal to decimal by:$\ConvertHexToDecSecond$

A better way to understand this is that series summation converts strings into numbers, while repeated division converts numbers into strings. For us usual human beings, the decimal representation is both string and number.

Speaking of strings and numbers, there is another topic in elementary computer education, namely implementing this ‘base conversion’ thing. A common homework assignment is to ‘write a program that converts decimal numbers to hexadecimal numbers’. Of course the following C program will NOT be accepted (for expository purposes, sanity checks are removed):

```
#include <stdio.h>

int main(void)
{
    unsigned v;
    scanf("%u", &v);
    printf("%x", v);
    return 0;
}
```

Because the intention of this assignment is to implement the conversion by oneself. The following solution is often accepted:

```
#include <stdio.h>

void print_hex(unsigned v)
{
    /* print the higher digits first, then the lowest */
    if (v / 16)
        print_hex(v / 16);
    putchar("0123456789abcdef"[v % 16]);
}

int main(void)
{
    unsigned v;
    scanf("%u", &v);
    print_hex(v);
    return 0;
}
```

**Yet it should not be.** The reason is clear here: the program never does the whole ‘conversion’ itself, because a number stored in memory does not take any specific human-readable form (on real machines it is usually binary). The program relies on `scanf` to do the conversion from decimal to memory form. Therefore, what the program implements itself is just the conversion from memory form to hexadecimal form. A thorough solution (again without sanity checks) could be:

```
#include <stdio.h>

unsigned get_dec(void)
{
    unsigned result = 0;
    int eat_space = 1, ch;
    while ((ch = getchar()) != EOF)
    {
        if (ch >= '0' && ch <= '9')
        {
            eat_space = 0;
            /* series summation, in Horner's form */
            result = result * 10 + (ch - '0');
        }
        else if (!eat_space)
        {
            /* a non-digit after some digits ends the number */
            ungetc(ch, stdin);
            break;
        }
        else if (ch != ' ' && ch != '\t'
                 && ch != '\v' && ch != '\r'
                 && ch != '\n')
        {
            /* not leading whitespace either: stop */
            ungetc(ch, stdin);
            break;
        }
    }
    return result;
}

void print_hex(unsigned v)
{
    if (v / 16)
        print_hex(v / 16);
    putchar("0123456789abcdef"[v % 16]);
}

int main(void)
{
    print_hex(get_dec());
    return 0;
}
```

This piece of code also supports the idea of habitual representation. Here the habitual representation is the memory form: you do series summation (in Horner's form) to convert decimal to memory form, and repeated division to convert memory form to hexadecimal.
