How many bits will it take to represent the decimal number 150
How many bits per digit in the decimal system [closed]
I am going to teach a small group of people about the numbering systems in computing and was wondering how many bits per digit there are in the decimal system.
10 Answers
What you are looking for is the base-2 logarithm of 10, which is an irrational number, approximately 3.32192809489.
The fact that you can’t use an integer number of bits for a decimal digit is the root cause of why many fractions that are easy to express in the decimal system (e.g. 1/5 or 0.2) are impossible (not hard: really impossible) to express exactly in binary with a finite number of digits. This is important when evaluating rounding errors in floating point arithmetic.
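You can see this directly in any binary floating point implementation. A minimal sketch in Python, whose floats are IEEE-754 binary64:

```python
from decimal import Decimal

# Python floats are IEEE-754 binary64, so 0.2 is stored as the
# nearest representable binary fraction, not as 1/5 exactly.
exact = Decimal(0.2)
print(exact)  # 0.200000000000000011102230246251565404236316680908203125

# The accumulated rounding error is visible in ordinary arithmetic:
print(0.1 + 0.2 == 0.3)  # False
```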
In other words, what amount of information is contained in a single digit in these systems.
For base 2, base 4, base 8, base 16 and other 2^N bases the answer is obvious, because in a base-2^N system each digit can be expressed with exactly N bits.
K-based logarithms of numbers that are not powers of K aren’t integers (they are in fact irrational). In particular: log2(10) ≈ 3.321928.
This number may look confusing, but it actually has some uses. For example, it’s the entropy of a single uniformly distributed decimal digit, measured in bits.
For your case, though, I don’t think this value is of any use. @Christian’s answer does a good job at explaining why.
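A quick sketch of the value in question, using Python’s math module:

```python
import math

# Entropy of one uniformly distributed decimal digit, in bits:
bits_per_digit = math.log2(10)
print(bits_per_digit)  # 3.321928094887362

# Equivalently: n decimal digits can hold 10**n distinct values,
# which is n * log2(10) bits of information.
```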
On the subject of bits:
I’m sorry to say the question is misguided. You wouldn’t use bits in that manner. A bit is a binary digit. You can convert the decimal number 10 to binary 1010 (8 + 2), so you’d need 4 bits to express the decimal value 10.
Powers of 2
You’ve fallen into a bit of a trap by using binary (2), octal (8) and hexadecimal (16) as examples: these are all powers of 2, so you can think of them in terms of bits, whereas 10 isn’t a power of 2, so it simply doesn’t work that way.
In base 1024, each symbol is exactly 10 bits. Three decimal digits carry the same amount of information as one digit in base 1000, which is slightly less than 1024. Therefore, a decimal digit carries slightly less than 10/3 bits. This approximation gives 3.333333…, while the exact number is 3.321928…
This might be an oversimplification but it depends on which question you are asking.
(and the answer is basically octal or hex)
I also don’t consider fractional bits as bits because in practical usage bits don’t have fractions.
Q1: How many bits can you represent in a decimal digit?
A1: You can represent 3 bits of information in a single decimal digit:
The most common scheme would be straight binary with wrapping, where 0 = 8 = 000 and 1 = 9 = 001. But you could use any scheme; there is nothing that says this is the only way to encode bits into decimal digits.
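A minimal sketch of that wrapping scheme (the function name is made up): each decimal digit keeps only its low three bits, so 8 wraps to 000 and 9 wraps to 001.

```python
# "Straight binary with wrapping": keep only the low 3 bits of each digit,
# so 8 collides with 0 and 9 collides with 1.
def digit_to_bits(d):
    return format(d % 8, "03b")

for d in range(10):
    print(d, "->", digit_to_bits(d))
```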
When we’re talking about values, we’ll use decimal if the application expects it (e.g., a digital banking application). When we’re talking about bits, we’ll usually use hex or binary (I almost never use octal since I work on systems that use 8-bit bytes and 32-bit words, which aren’t divisible by 3).
If you’re on a system where 9-bit bytes and 36-bit words are the norm, then octal makes more sense since bits group naturally into threes.
Number of bits it takes to represent a number
Is this accurate?
Answers and Replies
No, not quite. The fraction you have on the right is the same as ##log_2(x)##.
As to why your formula isn’t correct, consider x = 4, and that ##log_2(4) = 2##. It takes 3 bits (##100_2##) to represent 4.
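A quick check in Python; `int.bit_length()` gives exactly floor(log2(x)) + 1 for any positive integer:

```python
import math

# log2(4) is exactly 2, yet the binary representation 100 needs 3 bits.
x = 4
print(math.log2(x))    # 2.0
print(bin(x))          # 0b100
print(x.bit_length())  # 3 == floor(log2(x)) + 1
```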
I don’t understand what you’re saying. I was only considering positive integers. The binary representation of 4 as an unsigned number is ##100_2##. Are you interpreting the 1 digit to mean the number is negative?
The OP’s formula doesn’t give the right results for this and many other numbers.
The binary representation is 100. Who said we are limited to the binary representation? We can save bits with the following scheme, assuming we know where numbers end: every positive binary number starts with a 1, so drop that implicit leading 1 and store only the remaining digits, 00, i.e. two bits.
The trouble is, negative integers are stored with a 1 digit in the most significant bit (MSB).
The title of the thread is "Number of bits it takes to represent a number". With "bits" being short for "binary digits", it’s reasonable to assume that we’re talking about a binary representation. It takes three bits to represent the decimal number 4. If you take advantage of a scheme that stores only two of them, the implicit leading bit still counts, so it still takes three bits to represent the number.
The point most people participating in the thread should remember is that Shannon’s measure of information does not consider the value of the largest number you want to represent, but only the number of distinct symbols you want to represent.
There are many formats that compress all the needed data down to only a few bytes; I’ve run across them mostly when doing serial communications. They try to pack as much information as possible into those bits.
The computer will align everything in terms of 8-bit blocks, but there is no reason that you actually need to use all 8 bits in your code:
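A sketch of packing sub-byte fields into one byte, as is common in serial protocols. The field names and widths here are made up for illustration:

```python
# flag (1 bit) | mode (2 bits) | count (5 bits) = 8 bits total.
def pack(flag, mode, count):
    assert 0 <= flag < 2 and 0 <= mode < 4 and 0 <= count < 32
    return (flag << 7) | (mode << 5) | count

def unpack(byte):
    return (byte >> 7) & 0b1, (byte >> 5) & 0b11, byte & 0b11111

b = pack(1, 2, 19)
print(format(b, "08b"))  # 11010011
print(unpack(b))         # (1, 2, 19)
```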
That has never been true; the latest VS compilers from 2015 still support 80-bit floating point and often still default to it.
It’s pretty simple really:
The original formula is correct if you round up, and are speaking of the number of values, not some arbitrary specific value.
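In that reading, the count is ceil(log2(n)) for n distinct values. A quick Python sketch (the function name is made up):

```python
import math

# Bits needed to distinguish n distinct values: ceil(log2(n)).
def bits_for_values(n):
    return math.ceil(math.log2(n))

print(bits_for_values(10))   # 4: the 10 decimal digits need 4 bits
print(bits_for_values(256))  # 8: one byte distinguishes 256 values
print(bits_for_values(150))  # 8
```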
In the spirit of «you know what I mean», the answer is one of these:
[itex]x = some\_unsigned\_number[/itex]
[itex]bits(x)= \left\lfloor \frac{\log(x)}{\log(2)} \right\rfloor + 1[/itex]
[itex]x_p = some\_signed\_positive\_number[/itex]
[itex]bits(x_p)= 1+\left\lfloor \frac{\log(x_p)}{\log(2)} \right\rfloor + 1[/itex]
[itex]x_n = some\_signed\_negative\_number[/itex]
[itex]bits(x_n)= 1+\left\lceil \frac{\log(-x_n)}{\log(2)} \right\rceil[/itex]
However, taking your question literally, as glappkaeft has done, the required number of bits is determined by the number of symbols. So:
[itex]x_s = number\_of\_distinctive\_symbols[/itex]
[itex]bits(x_s)= \left\lceil \frac{\log(x_s)}{\log(2)} \right\rceil[/itex]
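A numeric sanity check of these counts in Python (the function names are made up, and two's-complement storage is assumed for the signed cases):

```python
import math

def bits_unsigned(x):           # x > 0
    return math.floor(math.log2(x)) + 1

def bits_signed_positive(x):    # sign bit plus magnitude
    return 1 + math.floor(math.log2(x)) + 1

def bits_signed_negative(x):    # x < 0, two's complement
    return 1 + math.ceil(math.log2(-x))

def bits_for_symbols(n):        # n distinct symbols
    return math.ceil(math.log2(n))

print(bits_unsigned(150))         # 8  (150 = 10010110)
print(bits_signed_positive(150))  # 9
print(bits_signed_negative(-4))   # 3  (100 in two's complement)
print(bits_for_symbols(10))       # 4
```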