R. Richard
Literotica Guru
zeb1094 said: But the representation in binary will be different.
In the Hex world (i.e., IBM) the binary number will be constructed as a four-place word (0001), where in the Octal world it is a three-place word (001).
So the number 6 is 110 in Octal and 0110 in Hex.
So 666 is 1 010 011 010 in Octal where in Hex it is 0010 1001 1010.
They both equal the same thing, but the binary representation is completely different because the word length is different, although with Octal you need a parity bit at the beginning. And the Octal number 666 is not equal to the decimal (base 10) number 666, nor the Hex (base 16) number 666.
But as computers don't know shit from shinola as to what a base 8, base 10 or base 16 number is, it is all dependent on the construction of the buffers: are they three-bit words or four-bit words?
ETA: You may want to take a look at my sig.
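
To make the grouping concrete, here is a minimal Python sketch (my own, assuming the starting value is decimal 666, which is what the hex grouping above corresponds to); it regroups the same bits in threes and in fours, and shows that the numeral 666 names a different quantity in each base:

```python
# A rough sketch (not from either post): start from decimal 666 and regroup
# the same binary digits for octal (3-bit groups) and hex (4-bit groups).

n = 666                      # decimal 666
bits = format(n, "b")        # '1010011010'

def grouped(bitstring, size):
    """Pad on the left to a multiple of `size` bits and split into groups."""
    padded = bitstring.zfill(-(-len(bitstring) // size) * size)
    return " ".join(padded[i:i + size] for i in range(0, len(padded), size))

print(grouped(bits, 3), "->", format(n, "o"))  # 001 010 011 010 -> 1232
print(grouped(bits, 4), "->", format(n, "x"))  # 0010 1001 1010 -> 29a

# Same value, different groupings. The numeral "666" itself names three
# different quantities depending on the base it is read in:
print(int("666", 8), int("666", 10), int("666", 16))  # 438 666 1638
```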
Zeb:
In the beginning, there were six-bit bytes. A six-bit byte could assume 64 different values. It worked for a while. However, as computers started to print, they found a problem. If you have 26 lower-case letters, 26 upper-case letters, 10 digits, a radix point [most call it a period] and a comma, that is already 64 characters and, oops, we are out of possible states. The six-bit byte lent itself well to octal numbers, and that is what was used in the early days.

Suddenly, IBM shifted to eight-bit bytes and computers could print lower case, upper case, numbers, punctuation marks, various signs and such. Eight-bit bytes worked well with hexadecimal numbers, so programmers shifted from octal to hexadecimal. However, the shift from octal representation to hexadecimal representation had nothing to do with the internal binary storage of a computer.

There are still [at least as of a few years ago] 36-bit machines used by the military. They now use "hex" to represent the numbers, even if the top digit is never "F."
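
For what it's worth, here is a small Python sketch (mine, using only the standard library) of that arithmetic: the 64-character squeeze on a six-bit byte, and why two octal digits fit a six-bit byte exactly while two hex digits fit an eight-bit byte exactly:

```python
# A rough sketch of the arithmetic behind the six-bit byte problem and why
# octal pairs with 6-bit bytes while hex pairs with 8-bit bytes.

import string

# A six-bit byte has 2**6 = 64 possible values.
print(2 ** 6)  # 64

# Lower case + upper case + digits + period + comma already uses all 64.
needed = string.ascii_lowercase + string.ascii_uppercase + string.digits + ".,"
print(len(needed))  # 64 -- no states left for anything else

# An eight-bit byte has 2**8 = 256 values: room for punctuation and more.
print(2 ** 8)  # 256

# One octal digit covers 3 bits, so two octal digits exactly cover 6 bits;
# one hex digit covers 4 bits, so two hex digits exactly cover 8 bits.
print(format(0b111111, "o"))    # '77'
print(format(0b11111111, "x"))  # 'ff'
```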
Why yes, now that you ask, I used to be a programmer.