Number of The Beast

zeb1094 said:
But the representation in binary will be different.

In the Hex world (ie IBM) the binary number will be constructed as a four place word (0001).

Where in the Octal world it is a three place word (001).

So the number 6 is 110 in Octal and 0110 in Hex.

So 666 is 001 010 011 010 in Octal where in Hex it is 0010 1001 1010

They both equal the same thing, but the binary representation is completely different as the word length is different, although you need padding bits at the beginning to fill out the leading word. And the Octal (base 8) number 666 is not equal to the decimal (base 10) number 666, nor the Hex (base 16) number 666.

But as computers don't know shit from shinola as to what a base 8, base 10 or base 16 number is, it is all dependent on the construction of the buffers. Are they three bit words or four bit words?
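A minimal sketch of the grouping being described here, in Python; the helper name `group_bits` is just illustrative, not from any post above:

```python
# Sketch: the same decimal value grouped into 3-bit (octal) and 4-bit (hex) chunks.
# group_bits is an illustrative helper, not from the discussion above.

def group_bits(value, group):
    bits = bin(value)[2:]                       # binary digits without the '0b' prefix
    pad = (-len(bits)) % group                  # left-pad so the length divides evenly
    bits = "0" * pad + bits
    return " ".join(bits[i:i + group] for i in range(0, len(bits), group))

n = 666
print(oct(n), group_bits(n, 3))   # 0o1232  001 010 011 010
print(hex(n), group_bits(n, 4))   # 0x29a   0010 1001 1010
```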

ETA: You may want to take a look at my Sig. :)

Zeb:
In the beginning, there were six bit bytes. A six bit byte could assume 64 different values. It worked for a while. However, as computers started to print, they found a problem. If you have 26 lower case letters, 26 upper case letters, 10 digits, a radix point [most call it a period], a comma and oops, we are out of possible states. The six bit byte lent itself well to octal numbers and that is what was used in the early days. Suddenly, IBM shifted to eight bit bytes and computers could print lower case, upper case, numbers, punctuation marks, various signs and such. Eight bit bytes worked well with hexadecimal numbers and programmers shifted from octal to hexadecimal. However, the shift from octal representation to hexadecimal representation had nothing to do with the internal binary storage of a computer. There are still [at least as of a few years ago] 36 bit machines used by the military. They now use "hex" to represent the numbers, even if the top digit is never "F."

Why yes, now that you ask, I used to be a programmer.
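A back-of-the-envelope check of the squeeze R. Richard describes, counting only the characters he lists:

```python
# The six-bit byte squeeze: 64 possible values vs. the characters listed above.
six_bit_states = 2 ** 6                 # 64 values in a six-bit byte
needed = 26 + 26 + 10 + 1 + 1           # lower case, upper case, digits, period, comma
print(six_bit_states, needed)           # 64 64 -- already full, with no room left for
                                        # space, other punctuation, or control codes
```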
 
MistressJett said:
I have a friend who is attending a Satanic wedding today. I was supposed to maybe go... couldn't make it out there. Too bad, it would have been interesting.

I've heard that the bridesmaids are really nice and pale
 
MistressJett said:
I thought they'd be the chicks from Blood Orgy of the She-Devils.
Nah, that's a Jewish Wedding you're thinking of.
 
666 : Number of miles travelled before the Beast's Acura a) is no longer covered by warranty and b) will be due for its next oil change.

670 : Number of miles between Hell and Earth.

For matters of both convenience and caution, the Beast has called and politely rescheduled. The mark of the Beast is now 667.

Have a nice day.

Q_C
 
There was a story on the news here tonight, about a woman called June, who was 66 today. She was also born at 6 minutes past 6 in the morning, and weighed 6lb 6oz.

She said she's locked the car in the garage for the day, but might venture out for a walk with her dogs, and won't be having a celebratory drink until after midnight, once she's made it through the day.

Poor cow. :D
 
Tatelou said:
There was a story on the news here tonight, about a woman called June, who was 66 today. She was also born at 6 minutes past 6 in the morning, and weighed 6lb 6oz.

That's not the worst part, Tatelou. The woman has gained quite a bit of weight since then and she blames it on the Beast!
 
R. Richard said:
Zeb:
In the beginning, there were six bit bytes. A six bit byte could assume 64 different values. It worked for a while. However, as computers started to print, they found a problem. If you have 26 lower case letters, 26 upper case letters, 10 digits, a radix point [most call it a period], a comma and oops, we are out of possible states. The six bit byte lent itself well to octal numbers and that is what was used in the early days. Suddenly, IBM shifted to eight bit bytes and computers could print lower case, upper case, numbers, punctuation marks, various signs and such. Eight bit bytes worked well with hexadecimal numbers and programmers shifted from octal to hexadecimal. However, the shift from octal representation to hexadecimal representation had nothing to do with the internal binary storage of a computer. There are still [at least as of a few years ago] 36 bit machines used by the military. They now use "hex" to represent the numbers, even if the top digit is never "F."

Why yes, now that you ask, I used to be a programmer.

Yes, and that's why the first computer only needed 4 kilobytes of RAM. But a byte has always been eight bits, even in an octal machine. It's the word length that differs. And that depends on whether the machine is an 8, 16, 32, or 64 bit machine. Right now a standard PC is 32 bit, although there are 64 bit processors on the market.

I don't recall there ever being a 36 bit machine. 32, 48, 56, 64 maybe, as you always go in multiples of 8. And you always double what you had last; that's why there have only been 8, 16, 32, 64, 128 bit processors developed.

In the beginning there may have been some odd bit machines developed but those were quickly phased out by the standard.

Even back in the old days, when names like Prime, IBM, DEC, DataGeneral, Control Data ruled the world, there were only 8, 16 and 32 bit processors until Prime and DEC came out with their 64 bit processors, then the others followed suit.
 
Um... it's not the sixth month... at least not by biblical-era reckoning.

Until very recently (middle 1700s in America) the year began in March. That's why the "last" four months of the year begin with Sept Oct Nov and Dec (7, 8, 9 and 10).

For biblical purposes, the sixth month is August.
 
zeb1094 said:
Yes, and that's why the first computer only needed 4 kilobytes of RAM. But a byte has always been eight bits, even in an octal machine. It's the word length that differs. And that depends on whether the machine is an 8, 16, 32, or 64 bit machine. Right now a standard PC is 32 bit, although there are 64 bit processors on the market.
IBM 701 = 36 Bits and six bit bytes
IBM 704 = 36 bits and six bit bytes
IBM 7070 you gotta google this one up, it is weird!
IBM 7090 = 36 bits and six bit bytes
IBM 7094 = 36 bits and six bit bytes
DEC PDP 4 = 18 bits [half of a 7090 word!]
DEC PDP 6 = 36 bits timesharing machine.
Intel 4004 = 4 bit data word!
The IBM 704/7090/7094 were standard mainframe machines for their time, which was the mid 50s to mid/late 60s. In the mid 60s, the IBM 360 began to appear. The IBM 360 was a 32 bit machine and was intended to replace the IBM 704 [there was only one left by this time]/7090/7094 series. Incidentally, the switch from the 700/7000 series machines to the 360 machines was accompanied by a switch from BCD codes to EBCDIC codes.
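A rough cross-check of how those word sizes line up with six-bit characters; the word sizes are taken from the list above and the division is the only thing added:

```python
# Six-bit characters per word for the word sizes listed above.
word_sizes = {"IBM 704": 36, "IBM 7090": 36, "IBM 7094": 36,
              "DEC PDP-4": 18, "DEC PDP-6": 36}

for machine, bits in word_sizes.items():
    print(f"{machine}: {bits} bits = {bits // 6} six-bit characters per word")
```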

zeb1094 said:
I don't recall there ever being a 36 bit machine. 32, 48, 56, 64 maybe, as you always go in multiples of 8. And you always double what you had last; that's why there have only been 8, 16, 32, 64, 128 bit processors developed.
CDC 3300 = 24 bits
CDC 3600 = 48 bits
CDC 6500 = 60 bits

zeb1094 said:
In the beginning there may have been some odd bit machines developed but those were quickly phased out by the standard.
The IBM 700/7000 series machines were standard mainframes. The CDC 6000 series were the first SUPERCOMPUTERS!

zeb1094 said:
Even back in the old days, when names like Prime, IBM, DEC, DataGeneral, Control Data ruled the world, there were only 8, 16 and 32 bit processors until Prime and DEC came out with their 64 bit processors, then the others followed suit.
I didn't even get into some of the "off brands." Google up "IBM and the seven dwarfs."
 
zeb1094 said:
R. Richard said:
Zeb:
In the beginning, there were six bit bytes. A six bit byte could assume 64 different values. It worked for a while. However, as computers started to print, they found a problem. If you have 26 lower case letters, 26 upper case letters, 10 digits, a radix point [most call it a period], a comma and oops, we are out of possible states. The six bit byte lent itself well to octal numbers and that is what was used in the early days.

Yes, and that's why the first computer only needed 4 kilobytes of RAM. But a byte has always been eight bits, even in an octal machine. It's the word length that differs. And that depends on whether the machine is an 8, 16, 32, or 64 bit machine. Right now a standard PC is 32 bit, although there are 64 bit processors on the market.
I don't recall there ever being a 36 bit machine. 32, 48, 56, 64 maybe, as you always go in multiples of 8. And you always double what you had last; that's why there have only been 8, 16, 32, 64, 128 bit processors developed.

In the beginning there may have been some odd bit machines developed but those were quickly phased out by the standard.

Even back in the old days, when names like Prime, IBM, DEC, DataGeneral, Control Data ruled the world, there were only 8, 16 and 32 bit processors until Prime and DEC came out with their 64 bit processors, then the others followed suit.

Guys, you're both correct and both wrong.

An eight-bit byte was around long before computers were -- it's a legacy of teletype systems and the ASCII character set that includes seven data bits and a parity bit.

When computers started printing, teletype terminals were used to avoid re-inventing the wheel.

Early mainframes had various word-lengths and three-bit instruction sets -- because Octal is very easy to convert from binary bit patterns without resorting to a calculator, so it is easy to read a four column punch tape to debug it (three data bits and a parity bit).

Hex became useful when computers discarded the parity bit on punch-tapes and went to a Nybble (four-bit words) as the standard instruction set.

Keyboard input and ASCII output have always been the eight-bit (0-127 + parity) Teletype/ASCII definitions that predated computers.
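A minimal sketch of the "seven data bits plus a parity bit" framing, assuming even parity (whether a given teletype used even or odd parity varied):

```python
# Seven ASCII data bits plus an even-parity bit in the eighth (high) position.

def with_even_parity(ch):
    code = ord(ch) & 0x7F                    # seven ASCII data bits
    parity = bin(code).count("1") % 2        # 1 if the data bits have an odd number of 1s
    return (parity << 7) | code              # parity bit goes in the high position

print(format(with_even_parity("A"), "08b"))  # 01000001 -- 'A' already has an even count of 1s
print(format(with_even_parity("C"), "08b"))  # 11000011 -- 'C' needs the parity bit set
```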
 
True enough Weird Harold. The seven bit plus parity TTY code has been around for a very long time. However, I worked as a consultant in the IBM world. The early IBM computers used BCD, a six bit code. Then, with the advent of the IBM 360, there was a shift to EBCDIC and eight bit codes. The shift was for reasons I previously outlined.

From Wikipedia:
IBM and BCD
IBM used the terms binary-coded decimal and BCD for six-bit alphameric codes that represented numbers, upper-case letters and special characters. Some variation of BCD was used in most early IBM computers, including the IBM 1620, IBM 1400 series and non-Decimal Architecture members of the IBM 700/7000 series. With the introduction of System/360, IBM replaced BCD with 8-bit EBCDIC.

Bit positions in BCD were usually labelled B, A, 8, 4, 2 and 1. For encoding digits, B and A were zero. The letter A was encoded (B,A,1).
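A minimal sketch of the layout that excerpt describes, covering only the two cases it actually spells out (digits with B and A zero, and the letter A as B,A,1):

```python
# Six-bit BCD positions B, A, 8, 4, 2, 1 from high to low, as in the excerpt above.
B, A_ZONE, B8, B4, B2, B1 = 32, 16, 8, 4, 2, 1

def bcd_digit(d):
    return d & 0b1111                 # digits: zone bits B and A are zero

letter_A = B | A_ZONE | B1            # "The letter A was encoded (B,A,1)"
print(format(bcd_digit(6), "06b"))    # 000110
print(format(letter_A, "06b"))        # 110001
```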


Weird Harold said:
Guys, you're both correct and both wrong.

An eight-bit byte was around long before computers were -- it's a legacy of teletype systems and the ASCII character set that includes seven data bits and a parity bit.

When computers started printing, teletype terminals were used to avoid re-inventing the wheel.

Early mainframes had various word-lengths and three-bit instruction sets -- because Octal is very easy to convert from binary bit patterns without resorting to a calculator, so it is easy to read a four column punch tape to debug it (three data bits and a parity bit).

Hex became useful when computers discarded the parity bit on punch-tapes and went to a Nybble (four-bit words) as the standard instruction set.

Keyboard input and ASCII output have always been the eight-bit (0-127 + parity) Teletype/ASCII definitions that predated computers.
 
R. Richard said:
True enough Weird Harold. The seven bit plus parity TTY code has been around for a very long time. However, I worked as a consultant in the IBM world. The early IBM computers used BCD, a six bit code. Then, with the advent of the IBM 360, there was a shift to EBCDIC and eight bit codes. The shift was for reasons I previously outlined.

I just wanted to point out the distinction between IO requirements and internal architecture requirements.

TTY, BCD, and EBCDIC are all concessions to human frailty in reading binary numbers directly. As far as the internal workings of a computer, they all require some extra manipulations for the computer to deal with them.

BCD operations do have the advantage of minimizing rounding errors in number crunching operations because computers inherently only work with integer and modulo math.

Still, it is communication with peripherals such as TTY terminals for input and output that drove the standardization of computer architecture to eventually wind up using the old ASCII/TTY definitions of Bytes and Nybbles throughout the computer industry. It's just easier to store and manipulate data in the same format required to communicate with the outside world than to convert it back and forth.
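A small illustration of the rounding point above: Python's decimal module isn't BCD, but it makes the same point about decimal exactness that binary floating point can't match.

```python
# Binary floating point vs. decimal arithmetic, to illustrate the rounding point above.
from decimal import Decimal

print(0.1 + 0.2)                          # 0.30000000000000004 -- 0.1 has no exact binary form
print(Decimal("0.1") + Decimal("0.2"))    # 0.3 -- a decimal representation keeps it exact
```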
 
Weird Harold said:
BCD operations do have the advantage of minimizing rounding errors in number crunching operations because computers inherently only work with integer and modulo math.
I would agree, but would add "scaled" to "integer". Of course, computers use scaled integer numbers to simulate real numbers.
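A minimal sketch of the scaled-integer idea, with dollars-and-cents as the assumed example:

```python
# Scaled-integer (fixed point) arithmetic: store an integer count of cents, scale only for display.
SCALE = 100                              # two decimal places

price_cents = 1999                       # $19.99 held as an integer
tax_cents = price_cents * 8 // 100       # 8% tax, still pure integer math (truncates)
total_cents = price_cents + tax_cents

print(f"${total_cents // SCALE}.{total_cents % SCALE:02d}")   # $21.58
```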

Weird Harold said:
Still, it is communication with peripherals such as TTY terminals for input and output that drove the standardization of computer architecture to eventually wind up using the old ASCII/TTY definitions of Bytes and Nybbles throughout the computer industry. It's just easier to store and manipulate data in the same format required to communicate with the outside world than to convert it back and forth.
Of course, the whole world uses "8 bit logic" nowadays. However, I still claim that the driving force behind "8 bit logic" was the need to have 256 characters to print with. The old "6 bit logic" offered only 64 characters and, once computers went to upper/lower case printing, 64 characters was not enough.

You don't need to tell me about punched paper tape. I worked with a lot of military systems that still use punched paper tape. I hate the stuff, but it was a cheap way to go in the early days of computing.
 
R. Richard said:
Of course, the whole world uses "8 bit logic" nowadays. However, I still claim that the driving force behind "8 bit logic" was the need to have 256 characters to print with.

Actually, it was the 128 character ASCII set (plus parity bit) that was required to adapt TTY terminals to IO functions. The values 0-127 were already defined and even today the values 128-255 aren't completely and formally defined -- except by IBM.

However, the ever-increasing instruction sets for CPUs and the ever higher memory addresses needed drove the move to higher internal register sizes as much as anything else. The use of multiples of eight in the increases is simply to maintain backward compatibility with the ASCII/TTY IO requirements and the terabytes of existing data.
 