An entropy encoder is a data encoding method that achieves lossless data compression by encoding a
message with “wasted” or “extra” information removed. In other
words, entropy encoding removes information that was not necessary
in the first place to accurately encode the message. A high degree
of entropy implies a message with a great deal of wasted
information; English text encoded in ASCII is an example of a
message type that has very high entropy. Already compressed
messages, such as JPEG graphics or ZIP archives, have very little
entropy and do not benefit from further attempts at entropy
encoding.
English text encoded in ASCII has a high degree of entropy because
all characters are encoded using the same number of bits, eight. It
is well known that the letters E, L, N, R, S, and T occur at a
considerably higher frequency than most other letters in English
text. If a way could be found to encode just these letters with
four bits, then the new encoding would be smaller, would contain
all the original information, and would have less entropy. ASCII
uses a fixed number of bits for a reason, however: decoding is
trivial when every possible glyph or character occupies the same
number of bits. How would an encoding scheme that used
four bits for the above letters be able to distinguish between the
four-bit codes and eight-bit codes? This seemingly difficult
problem is solved using what is known as a “prefix-free
variable-length” encoding.
In such an encoding, any number of bits can be used to represent
any glyph, and glyphs not present in the message are simply not
encoded. However, in order to be able to recover the information,
no bit pattern that encodes a glyph is allowed to be the prefix of
any other encoding bit pattern. This allows the encoded bitstream
to be read bit by bit, and whenever a set of bits is encountered
that represents a glyph, that glyph can be decoded immediately. If
the prefix-free constraint were not enforced, such a decoding would
be ambiguous.
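To make this concrete, here is a minimal decoding sketch (not part of the original problem; the code table and bitstream are taken from the example that follows):

#include <iostream>
#include <map>
#include <string>

int main()
{
    // Hypothetical prefix-free code table: no code is a prefix of another.
    std::map<std::string, char> code = {
        {"0", 'A'}, {"10", 'B'}, {"110", 'C'}, {"111", 'D'}};

    std::string bits = "0000010110111"; // encoded message from the example below
    std::string buf, decoded;
    for (char b : bits)
    {
        buf += b;                 // read the stream one bit at a time
        auto it = code.find(buf); // the prefix-free property guarantees that
        if (it != code.end())     // the first match is the only possible one
        {
            decoded += it->second;
            buf.clear();
        }
    }
    std::cout << decoded << "\n"; // prints AAAAABCD
}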
Consider the text “AAAAABCD”. Using ASCII, encoding this would
require 64 bits. If, instead, we encode “A” with the bit pattern
“00”, “B” with “01”, “C” with “10”, and “D” with “11” then we can
encode this text in only 16 bits; the resulting bit pattern would
be “0000000000011011”. This is still a fixed-length encoding,
however; we’re using two bits per glyph instead of eight. Since the
glyph “A” occurs with greater frequency, could we do better by
encoding it with fewer bits? In fact we can, but in order to
maintain a prefix-free encoding, some of the other bit patterns
will become longer than two bits. An optimal encoding is to encode
“A” with “0”, “B” with “10”, “C” with “110”, and “D” with “111”.
(This is clearly not the only optimal encoding: since “B”, “C”, and
“D” each occur only once, their codes can be interchanged freely
without changing the size of the final encoded message.) Using this
encoding, the message encodes in only 13 bits
to “0000010110111”, a compression ratio of 4.9 to 1 (that is, each
bit in the final encoded message represents as much information as
did 4.9 bits in the original encoding). Read through this bit
pattern from left to right and you’ll see that the prefix-free
encoding makes it simple to decode this into the original text even
though the codes have varying bit lengths.
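The arithmetic is easy to check mechanically. This sketch hard-codes the code lengths from the encoding above (1 bit for “A”, 2 for “B”, 3 each for “C” and “D”) and prints the figures in the output format the problem asks for:

#include <cstdio>
#include <cstring>

int main()
{
    const char *text = "AAAAABCD";
    int len[] = {1, 2, 3, 3}; // code lengths, indexed by letter - 'A'
    int bits = 0;
    for (int i = 0; text[i]; i++)
        bits += len[text[i] - 'A'];
    int ascii = 8 * (int)strlen(text);
    printf("%d %d %.1f\n", ascii, bits, (double)ascii / bits); // 64 13 4.9
}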
As a second example, consider the text “THE CAT IN THE HAT”. In
this text, the letter “T” and the space character both occur with
the highest frequency, so they will clearly have the shortest
encoding bit patterns in an optimal encoding. The letters “C”, “I”,
and “N” only occur once, however, so they will have the longest
codes.
There are many possible sets of prefix-free variable-length bit
patterns that yield an optimal encoding, that is, one that allows
the text to be encoded in the fewest bits. One such
optimal encoding is to encode spaces with “00”, “A” with “100”, “C”
with “1110”, “E” with “1111”, “H” with “110”, “I” with “1010”, “N”
with “1011” and “T” with “01”. The optimal encoding therefore
requires only 51 bits compared to the 144 that would be necessary
to encode the message with 8-bit ASCII encoding, a compression
ratio of 2.8 to 1.
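The same check works here; the frequencies and code lengths below are read directly off the encoding just given:

#include <cstdio>

int main()
{
    // character, frequency in "THE CAT IN THE HAT", code length in bits
    struct { char ch; int freq, len; } t[] = {
        {' ', 4, 2}, {'T', 4, 2}, {'H', 3, 3}, {'A', 2, 3},
        {'E', 2, 4}, {'C', 1, 4}, {'I', 1, 4}, {'N', 1, 4}};
    int total = 0, chars = 0;
    for (auto &e : t)
    {
        total += e.freq * e.len;
        chars += e.freq;
    }
    printf("%d %d %.1f\n", 8 * chars, total, 8.0 * chars / total); // 144 51 2.8
}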
Input

The input file will contain a list of text strings, one per line. The
text strings will consist only of uppercase alphanumeric characters
and underscores (which are used in place of spaces). The end of the
input will be signalled by a line containing only the word “END” as
the text string. This line should not be processed.

Output

For each text string in the input, output the length in bits of the
8-bit ASCII encoding, the length in bits of an optimal prefix-free
variable-length encoding, and the compression ratio accurate to one
decimal point.

Sample Input

AAAAABCD
THE_CAT_IN_THE_HAT
END

Sample Output

64 13 4.9
144 51 2.8
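The accepted solution below (C++) builds the Huffman tree directly: it counts character frequencies, repeatedly merges the two smallest unmerged weights using a simple O(n²) scan, and then takes each leaf's depth in the finished tree as its code length. Since the problem only asks for the total bit count, the codes themselves are never constructed.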
#include <cstdio>
#include <cstring>
#define maxn 20005
using namespace std;

struct tree
{
    int leaf; // occurrence count of the character (later reused to hold its code length)
    int pa;   // index of the parent node; -1 means no parent yet
};
tree tr[80]; // 80 nodes are plenty: 27 leaves plus at most 26 internal nodes

void huffman(int n)
{
    int m1, m2, x1, x2, t, i, j; // m1,m2: two smallest weights; x1,x2: their indices
    for (i = 0; i < n - 1; i++)
    {
        m1 = m2 = 0x3f3f3f3f; // initialize to "infinity"
        x1 = x2 = -1;
        for (j = 0; j < n + i; j++)
        {
            // only consider nodes that have no parent yet and actually occur
            if (m1 > tr[j].leaf && tr[j].pa == -1 && tr[j].leaf != 0)
            {
                m2 = m1;
                x2 = x1;
                m1 = tr[j].leaf;
                x1 = j;
            }
            else if (m2 > tr[j].leaf && tr[j].pa == -1 && tr[j].leaf != 0)
            {
                m2 = tr[j].leaf;
                x2 = j;
            }
        }
        if (m2 != 0x3f3f3f3f) // sentinel unchanged means fewer than two free nodes remain
        {
            t = n + i;
            tr[t].leaf = m1 + m2;      // new internal node whose weight is the sum
            tr[x1].pa = tr[x2].pa = t; // "remove" both nodes by giving them a parent
        }
    }
}

int main()
{
    //freopen("in.txt", "r", stdin);
    int i, l, c, p, ans;
    char s[maxn];
    while (scanf("%s", s) == 1)
    {
        if (strcmp(s, "END") == 0) // end-of-input sentinel
            break;
        for (i = 0; i < 80; i++) // reset the tree
        {
            tr[i].leaf = 0;
            tr[i].pa = -1;
        }
        l = strlen(s);
        for (i = 0; i < l; i++) // count frequencies; '_' maps to index 26
            if (s[i] == '_')
                tr[26].leaf++;
            else
                tr[s[i] - 'A'].leaf++;
        huffman(27); // build the Huffman tree over the 27 possible characters
        for (i = 0; i <= 26; i++)
        {
            c = 0; // c counts the code length in bits
            if (tr[i].leaf != 0)
            {
                p = i;
                while (tr[p].pa != -1) // a leaf's depth equals its code length
                {
                    c++;
                    p = tr[p].pa;
                }
                if (c == 0) // a lone root (one distinct character) still needs 1 bit
                    c = 1;
                tr[i].leaf = c; // reuse leaf to store the code length for the sum below
            }
        }
        ans = 0; // total number of bits in the optimal encoding
        for (i = 0; i < l; i++)
        {
            if (s[i] == '_')
                ans += tr[26].leaf;
            else
                ans += tr[s[i] - 'A'].leaf;
        }
        // using %.1lf here got a Wrong Answer; %.1f works (likely a judge/compiler quirk)
        printf("%d %d %.1f\n", 8 * l, ans, 8.0 * l / ans);
    }
    return 0;
}
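Two details are worth noting. A message made of a single repeated character yields a tree that is just a root, so the code-length loop patches the depth from 0 to 1: even a one-symbol message needs one bit per occurrence. And the solution only maps ‘A’ through ‘Z’ plus the underscore even though the statement mentions alphanumeric characters; this apparently suffices for the judge's test data, but a stricter version would also map digits.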