Thursday, March 08, 2012

Storing Text Digitally



An excerpt from "The Sydnie Emails", written Feb 4, 2008
Copyright (c) 2008, Kevin Farley


Ok, think about this: what you are calling "digits" and "letters" are nothing more than symbols used to represent concepts of numbers and language elements. The "numbers" or "digits" are the language-dependent symbols associated with quantity and counting, while the "letters" are the language-dependent symbols associated with language utterances.


So when you think about it globally, there is nothing intrinsic to the letter "k" that denotes a "k" sound. We, as English speakers, associate that letter with the sound made at the beginning of saying "k". And similarly we associate the letter "w" with a sound that has no relationship to its name but merely represents a known sound. So then the "letters" of the English alphabet are used to represent some language-dependent sound.

And when we want to store some information in a computer, we need to be able to associate the language-dependent symbols with some computer-usable patterns of 0s and 1s. And really, that is all that is done. Each letter of the alphabet, some punctuation, the numbers, and a few other "characters" are simply associated with binary values in the computer.

Think of it like a lookup table. You want to store the symbol "k", so you need to define a unique value for that symbol so that every time you see it, it will only ever mean "k". Do the same for each letter of the alphabet, including both upper case and lower case characters (think about it, capitalization of a letter may not change the sound it makes, but it is uniquely different in meaning and what it represents).

The result is a map where you can look up a symbol (character/letter) to get its representation, or using the value of the representation, you can look up the symbol.

Now the most widespread of such mappings is the ASCII code. This is the world's most recognized standardized character map, but it only maps English characters (gee, I wonder who invented the entire computing industry). This mapping code has been around a long time. Google it sometime if you are interested.

The basic ASCII chart assigns 128 letters, numbers, punctuation marks, and special characters to the values 0 through 127. There is an extended character set that uses the values 128 through 255, but that is another matter altogether. Also, because they wanted to keep the range of values for characters to something that can be stored in a single "byte" (8 binary digits/bits), all character mappings must be less than or equal to 255, which is the maximum value you can store in 8 binary digits (equivalent to 11111111).


Note: The Unicode character mapping set contains what are known as "wide characters", meaning they can be larger than a single byte. Most often they are two bytes wide, which allows up to 65536 unique values as opposed to the 256 unique values available in a single byte. Some Unicode character sets are 4 bytes wide.
The first 32 values (literally 0, 1, 2... 31) are assigned to "control characters". Do you know what happens every time you press "control-c" to copy something? The keyboard generates a key scan code that is translated into the numeric value 3 by the keyboard device driver. The software interprets this value 3 to mean "copy the highlighted text to the copy buffer/clipboard". There is nothing magic about "control-c", it's the mapping that makes the magic.
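If you are curious, you can see that mapping yourself. Here is a quick Python sketch (just my illustration in terms of the ASCII table, not anything specific to a keyboard driver) showing that each control character is simply its letter's value with 64 subtracted:

# 'C' has ASCII value 67; the matching control character is 64 lower
print(ord('C') - 64)      # prints 3, the value behind control-c
print(ord('C') & 0x1f)    # same result using a bit mask
print(repr(chr(3)))       # '\x03', the ETX control character itself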

So starting with numeric value 32 (0x20) through 127 (0x7f) you have your "printable" characters. They are called printable because they result in some character you can see (with the exception of space and delete which are technically not seen). The base-10 digits, starting from 0, are mapped to values 48 (0x30) through 57 (0x39).  Upper case letters, starting from 'A', are mapped to values 65 (0x41) through 90 (0x5a). The lower case letters from 'a' are mapped to 97 (0x61) through 122 (0x7a).
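You can poke at these ranges with a quick Python sketch (just my illustration); ord() gives the value behind a character and chr() goes the other way:

print(ord('0'), ord('9'))   # 48 57
print(ord('A'), ord('Z'))   # 65 90
print(ord('a'), ord('z'))   # 97 122
print(chr(0x41))            # A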

So then, when the name "Sydnie Pye" is stored in the computer, it is actually stored as a sequence of numeric values in binary digits, like the following:

01010011    <-- S
01111001    <-- y
01100100    <-- d
01101110    <-- n
01101001    <-- i
01100101    <-- e
00100000    <-- space
01010000    <-- P
01111001    <-- y
01100101    <-- e

Alternatively, I could have simply written:
0x53 0x79 0x64 0x6e 0x69 0x65 0x20 0x50 0x79 0x65
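If you want to see the computer do this mapping itself, here is a small Python sketch (my own quick illustration) that prints the same table for any name:

name = "Sydnie Pye"
for ch in name:
    value = ord(ch)                          # the ASCII value for this character
    print(format(value, '08b'), '<--', repr(ch))
print(' '.join(format(ord(ch), '#04x') for ch in name))   # the hex version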

By standardizing on the way the characters (letters) are represented in the computer, all the computers in the world can accurately store and recall that name correctly.

There are other mappings of alphabets and characters to numeric values. One of the older ones is EBCDIC, an old IBM standard still in use to some extent. The newer standard being adopted globally is called Unicode. In Unicode, characters are not necessarily a single byte; depending on the specific encoding (and there are several), each character can require from 1 to 4 bytes.

This was needed because some of the Asian writing systems (most notably Kanji) have no simple equivalents to our English letters. Also, because ASCII is tuned for English and closely related languages (many European languages, but not Russian and its relatives), it's not suitable for encoding all the intricacies of more complex alphabets.
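As a rough illustration (assuming Python 3 and the common UTF-8 encoding), you can watch the 1-to-4 byte range happen:

# k, an accented e, a kanji character, and an emoji
for ch in ['k', '\u00e9', '\u6f22', '\U0001F600']:
    data = ch.encode('utf-8')            # encode one character as UTF-8 bytes
    print(repr(ch), len(data), 'byte(s):', data.hex())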

So then the answer is "yes, binary numbers are used to store textual information in a computer."

I do not say "letters" because that is a language-dependent attribute. Asian alphabets like Kanji do not have any letters, they have glpyhs. And technically speaking, the English alphabet has glpyhs too, we just call them letters.

Counting With Letters? No Way!



An excerpt from "The Sydnie Emails", written Jan 31, 2008
Copyright (c) 2008, Kevin Farley


When you say "count with letters too", I assume you are talking about working with digits other than 0 through 9 and that means number bases beyond 10. Recall that binary has only 2 digits, 0 and 1.
Think about it. We have 10 "numbers" because we have a base 10 number system. In English, we have assigned the "symbols" 0, 1, 2, 3, 4, 5, 6, 7, 8, and 9 to the quantities zero through nine. Semantically we call the symbols we assign to represent numeric quantities "numbers". But that is more of a grammar thing and not a math thing. The math thing is to call them "digits".
In math, a symbol is used to represent a quantity, an operation on quantities, unknown quantities, and properties. But that is all these symbols are, representations of a concept. Digit symbols are used to represent powers of the base of the number system.
So then grammatically in English, using our base 10 number system we only have 10 symbols for the digits 0-9. But the symbols can be anything. If you look at the ancient Maya, their number system was based on 20 and their "numbers" were glyphs made of combinations of bars and dots. I suppose they counted on their toes too, hence the base 20 system ;)
So instead of being based on powers of 10 representing digit positions of 1, 10, 100, 1000..., the Mayans numbering system was based on powers of 20 which means that the digit positions (if they had them) would be 1, 20, 400, 8000, 160000...
Now we can still count in Mayan using their glyphs. But also we can count in Mayan using English "symbols" instead of the glyphs. We can start by using the "number symbols" 0 through 9, and then (borrowing from the computing world and hexadecimal) we can start with the "letter symbols" A, B, C, etc.
Thus our Mayan digits are: 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, A, B, C, D, E, F, G, H, I, J
Now we also know from Deep Thought that the ultimate answer to the universe, to life, to... everything... is 42 (in decimal).
But the ultimate answer in Mayan is 22. Why?
Because 2 * 20^0 + 2 * 20^1 = 2*1 + 2*20 = 2 + 40 = 42
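Here is a small Python sketch (my illustration, using the 0-9 then A-J digit set from above) that converts any decimal value into those Mayan-style base 20 digits:

DIGITS = "0123456789ABCDEFGHIJ"      # 20 digit symbols for base 20

def to_base20(n):
    if n == 0:
        return "0"
    out = ""
    while n > 0:
        out = DIGITS[n % 20] + out   # the remainder is the lowest digit
        n //= 20
    return out

print(to_base20(42))                 # prints 22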
Now since I brought it up, let's talk about hexadecimal, which is to say the base 16 number system.
Though binary is used (conveniently enough) for computing because of the nature of electrical switching and the on/off detection of electrons in circuits, programmers rely most heavily on "hex" math. The reason is simple, strings of binary digits are simply too cumbersome to keep track of (how many 1s in a row can you look at before you lose track?). So a shorthand representation of the binary numbers is needed.
Why not just use decimal? Well for starters, 10 is not a power of 2. What do I mean by that?
Binary is based on powers of 2 and decimal is based on powers of 10. To convert between base 2 and base 10 requires some mental agility (or calculator) as the digits don't "line up". I will explain that.
If I have the decimal number 117, that is the binary number 1110101. Now I can't look at any sequence of those binary digits (bits) and "mathematically see" any digit of 117. Meaning, I can't look at the string of bits and see a substring of bits that mean 100 and a substring of bits that mean 10 and a substring of bits that mean 7.
Well technically I can look at 1110101 and see 117 because I have done this for over 2 decades, but that is an entirely different matter.
So then you want to use a numbering system that shortens the representation of binary numbers but is readily convertible. As a programmer, I want to look at the number and be able to immediately see the bits underneath.
Now if I use a number system that is in itself a power of 2 at some higher order, I can achieve that because the basis of the number system is still 2, but each digit represents larger values of powers of 2.
Early in the days of computing, programmers started using "octal" representation, which is based on the base 8 number system. In octal you only have the digits 0 through 7, because remember, you do not have the base number in your digit set.
Using octal, that decimal number 117 becomes 0165 (and 01110101 in binary). I prepended the number with a 0 because that is standard practice in computing to distinguish a number as octal: it will have a 0 in front, which is not normally done for decimal numbers.
So if I look at the digits of 0165 I see 5, which is "101" in binary, 6, which is "110" in binary, and 1, which is "001" in binary. Thus we have:
 1   6   5
001 110 101

See how you can visualize the bits? Each octal digit represents a string of 3 bits. I can look at the octal digit and I only have to do the bit conversion for 8 values total, which is represented in 3 bits. When you use decimal, the base 10 digits don't allow such simple visualization. You can't simply write the decimal digits 117 and have the underlying binary pattern fall out sequentially.
To see the failure of decimal, just look at the lowest digit of 117, which is 7. In binary, the 7 is represented as 111 because that is 1*2^0 + 1*2^1 + 1*2^2 = 1*1 + 1*2 + 1*4 = 7. But clearly the bottom binary digits are 101 and not the expected 111. This is because 10, the base of decimal, is not a power of 2 (the basis of binary).
If we were to simply use the decimal digits like I did the octal digits we would have the following:
 1   1   7
001 001 111 <<-- WRONG!

And that would actually be the value 79 in decimal, not 117. So clearly, decimal does not lend itself readily to binary visualization.
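A short Python sketch (again just my illustration) makes the octal grouping concrete; note that Python writes octal with a "0o" prefix rather than the bare leading 0:

n = 117
print(oct(n))                       # 0o165
octal = format(n, 'o')              # '165'
print(' '.join(format(int(d, 8), '03b') for d in octal))   # 001 110 101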
While octal is all well and good and an improvement on handling binary numbers, we want it still more compact and yet allow us to visualize the bits as in octal. So if we look to the next power of 2, we have 16 (we went from 2, to 8 - skipping 4, and the next is 16). That leads us to hex numbers in base 16.
So to use hex, I need 16 digits. English only has 10 "numbers", so we proceed on to the letters like with the Mayan example. So my base digit set in hex is:
0 1 2 3 4 5 6 7 8 9 A B C D E F

Which yields decimal values 0 through 15 inclusive.
Now, to distinguish a number in hex from those in octal and decimal, programmers typically prefix the number with "0x". This is the magic sign to tell us that we are looking at a hex number.
Now back to the decimal value 117. When we convert that number to hex we get 0x75 because 7 * 16^1 + 5 * 16^0 = 7*16 + 5*1 = 112 + 5 = 117.
Now remember the visualization thing? The hex digit 7 is "0111" in binary and the hex digit 5 is "0101" in binary. Thus we have:
  7   5
0111 0101

Now see again how we can visualize the bits?
So back to the original question of using letters, let's look at a much larger number, say 0x7EA6CF82.
In binary that is: 1111110101001101100111110000010
In octal that is: 017651547602
In decimal that is: 2124861314
In hex that is: 0x7EA6CF82
In Mayan that is: 0m1D407D5E

Now for the hex visualization:
  7    E    A    6    C    F    8    2
0111 1110 1010 0110 1100 1111 1000 0010

With each hex digit, the programmer can "see" the underlying bit patterns. As a programmer, we instinctively know (now after doing it a while) that "F" is 15 and "C" is 12. We also know that 15 is "1111" and 12 is "1100".
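Here is a little Python sketch (my own illustration) that prints this same digit-over-bits view for any number:

def show_hex_bits(n):
    hex_digits = format(n, 'X')          # e.g. '7EA6CF82'
    print('    '.join(hex_digits))       # one hex digit per 4-bit group
    print(' '.join(format(int(d, 16), '04b') for d in hex_digits))

show_hex_bits(0x7EA6CF82)
show_hex_bits(117)                       # the earlier example, 0x75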
Now the question that may have popped into thought: but who uses numbers that big?
Programmers do all the time. It's not the "data" that is usually that large, it's memory addresses that are that large.
A regular PC has anywhere from 128 MB to 1 GB or more of RAM. A MB of RAM is actually 1048576 bytes. This is because 1 kilobyte (KB) is 1024 bytes, and a megabyte (MB) is 1024 KB. So 1024 * 1024 = 1048576. So then a gigabyte (GB) of RAM is 1024 MB or 1073741824 bytes.
Why 1024 and not 1000? Because 1024 is a power of 2 (it is 2^10 to be specific). Remember, computing uses a base 2 number system at its lowest level, and 1000 is a decimal concept. But since 1024 is almost 1000, we borrow the "kilo" prefix; likewise a "mega" of memory is a little over a million bytes, and a "giga" of RAM is actually more than 1 billion bytes.
So if you are talking about memory, the prefix kilo means 1024 and mega means 1024*1024. But when you are talking about CPU clock speed of a computer, that is a different matter. A 500 MHz CPU has a clock speed of 500 million cycles per second where the M for mega means 1,000,000. Also a 3 GHz processor is running at 3 billion cycles per second where G for giga means 1,000,000,000.

As a side note, disc drive manufacturers do not use 1024 as the order of magnitude, but they use the smaller 1000 instead. So that 60 GB hard drive is smaller than 60 GB of RAM because 60 * 1000 * 1000 * 1000 is less than 60 * 1024 * 1024 *1024.
Why do they do that? Marketing. Almost a bait and switch, and most people don't know the difference. But in reality, a 100 GB hard disc drive has roughly 7.4 billion bytes less than one would expect (100*1073741824 - 100*1000000000 = 7374182400). But I digress...
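The arithmetic is easy to check with a quick Python sketch (my illustration):

binary_gb  = 1024 * 1024 * 1024             # bytes in a "memory" gigabyte
decimal_gb = 1000 * 1000 * 1000             # bytes in a "disc drive" gigabyte
print(100 * binary_gb - 100 * decimal_gb)   # 7374182400 bytes of difference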



RAM is random access memory, and to use it, each byte must be individually accessible. To access memory, each byte has a unique address. That address is simply a one-up number. So the very first byte of RAM has memory address 0 and the last byte of a 1 GB RAM chip has memory address 1073741823.
The last address is 1 less than the total number of locations because, remember, despite how we all learned to count as children, the first of anything mathematically is really item 0, not item 1.
Another piece of this is that nearly all personal computers today use virtual memory, which is a really long discussion that is beyond what you need to get into at this time -- or ever ;)
Simply put, virtual memory means the computer can act like it has 4 GB of RAM even if it only has 64 MB; it just uses the hard disc to swap sections of RAM in and out.
To get addresses for 4 GB you have numbers in the range from 0 to 4294967295.
And because programmers are always looking at (virtual) memory addresses, we always, daily, perpetually, and in all other ways, have to deal with really really large numbers.
So that last virtual memory location, 4294967295, is 11111111111111111111111111111111 in binary, 037777777777 in octal, and 0xFFFFFFFF in hex.
And since each digit of the hex string is exactly represented by 4 binary digits (bits), the hex version is the optimal way of looking at really really large numbers in computing.
In summary, the point of having letters is just to get more digits than 0 through 9, which are needed for number systems beyond base 10.
Now I am sure that all of this is well beyond your basic question. But I am the computer guy and the math guy and since I like this stuff, I like to explain it. Thanks for putting up with this long-winded explanation.



The Binary Number System



An excerpt from "The Sydnie Emails", written Jan 29, 2008
Copyright (c) 2008, Kevin Farley"On The Binary Number System"



Who? What? When? How? What do you want to know? It can be short or long, simple or overly complex. Let's try "overkill".
The basis of using the binary numbering system in computing is simple. At the lowest element of a computer you have a switch that is either on or off. If it is off, we call that "0". If it is on, we call that "1". Those are the only values we can count with, 0 and 1. Since there are only 2 values, all computer operations at the lowest level use a number system based on 2, called a base 2 number system, or simply "binary". The term "binary" simply means "two values" in this case.
In the binary number system, there are only 2 values, 0 and 1. To get any number you have to add combinations of 0s and 1s. The key to achieving this miracle sum is that you need to multiply the 0 or 1 by the right multiplier, which in binary will be the value 2 raised to an exponent just as in decimal (base 10) where the multiplier is the value 10 raised to an exponent.
This is why you see strings of 0s and 1s all over the place in computer programming. However, it is more common to see hexadecimal (base 16) but that is another story entirely.
So for example the decimal number 12 in binary would be 1100.
1*2^3 + 1*2^2 + 0*2^1 + 0*2^0 = 8 + 4 + 0 + 0 = 12
The value 37 in decimal when translated to binary is 100101.
1*2^5 + 0*2^4 + 0*2^3 + 1*2^2 + 0*2^1 + 1*2^0 = 32 + 0 + 0 + 4 + 0 + 1 = 37
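You can check both without doing the arithmetic by hand; here is a quick Python sketch (my illustration; Python writes binary with a "0b" prefix):

print(bin(12))             # 0b1100
print(bin(37))             # 0b100101
print(int('1100', 2))      # 12
print(int('100101', 2))    # 37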
By using a binary number system, the computer can make zillions of simple "yes/no" decisions that translate into certain values. It is this basic premise that all computers operate on.
One of the reasons why this is so key to computing is that at the heart of every digital processor, you have what is effectively a transistor that can switch an electric current from on to off or from off to on. By manipulating these transistors in patterns using circuits that compare and sum them, you can create a resulting pattern of electric current that represents a value.
It's just like cavemen laying out clam shells to count with, it's just cooler now using electricity and lasers. And we have Geico.
Also note that most modern computers have standardized on using 8 binary digits (called bits) as the basic unit of storage/processing. What this means is that a single "byte" of memory is essentially built from 8 tiny transistor circuits created directly in silicon.
So the value 12 in decimal is normally represented in binary as 00001100 in computer bits, and that would produce an electric current pattern in a typical PC's CPU of:
0v | 0v | 0v | 0v | 3.3v | 3.3v | 0v | 0v
If I were to add a positive current (3.3v) into the left most transistor it would yield the pattern:
3.3v | 0v | 0v | 0v | 3.3v | 3.3v | 0v | 0v
In binary that is 10001100, which is equal to 140 in decimal. This is equivalent to saying "add 128 to 12" (because we are adding 10000000 to 00001100).
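In code, turning on that left-most bit is just a bitwise OR (a Python sketch, my illustration):

value = 0b00001100           # 12, the bit pattern from above
value = value | 0b10000000   # turn on the left-most bit of the byte (add 128)
print(value, bin(value))     # 140 0b10001100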
Note that in the CPU, these bits are stored and moved around as electric current. However because of the nature of binary systems (on/off values), you can use other ways to represent it. For example, on a typical hard disc drive, there is a metal platter (or non-metal platter coated in metal) that uses magnetic alignment (north/south) to represent 0 and 1. Also on a compact disc or DVD, each bit is represented as a "pit" or a "land", meaning "the laser light is lost in the pit" and "the laser light is reflected off the land".
It could also be "clam shell means 1" and "no clam shell means 0" to those Geico cavemen.
The importance of this all or nothing concept is that computers work because they have to very quickly measure these electric currents, magnetic directions, and laser reflection accurately (but not clam shells). It is much easier to determine if an electric circuit is at 3.3 volts versus 0 volts while it is much harder to determine if the circuit is at 3.3 volts versus 2.9 volts. The same goes for magnetics and light.
So because computers operate on the on/off electric circuit principle (simply put but it is enormously more complex in reality), it is extremely convenient to represent these electric patterns as sequences of 0s and 1s. It would be really verbose and annoying to have to say "on, off, off, off, on, on, off, off" when you really want to convey the meaning of "140".
Well that is a start let me know what else is needed.
And since I feel like it, and because I really do like math, let's have a little number theory to explain how those sequences of 0s and 1s work.
Here is a short number theory explanation (you can translate); it makes the most sense to describe it in decimal first, as most people get blown away when you jump right to an alternate base numbering system.
A starting point. Regardless of all human experience, in mathematics you do not start counting at "1", you start counting at "0". This needs to be understood: the "first" instance of anything is instance number 0, not 1, regardless of what you think. Imagine counting your fingers: start with 0 and proceed to 9; you have 10 total fingers, but the first one is #0 and the last one is #9. This is actually an important aspect to remember.
In all number systems, each digit represents a value multiplied by the base number raised to an exponent. That sounds complicated but it's not. When we count in decimal, which is what most normal humans do, the base is 10 (which is no coincidence - we have 10 fingers). The right-most digit is position "0", and then you go up by one as you move to the left.
So for the number "763" the 3 is in position "0", the 6 is in position "1", and the 7 is in position "2".
So to come up with the value of the number, you multiply each digit value by the base number raised to the exponent of its position, starting with position 0.
total_value = digit_0 * base^0 + digit_1 * base^1 + ... + digit_n * base^n
(where the ^ means "take the base to the exponent of")
So then the value 763 means "7 * 10^2 + 6 * 10^1 + 3 * 10^0"
Remember that a number raised to the power "0" is equal to 1, by definition. Always. That's just how it is.
So we have  7*10*10  + 6*10 + 3*1 = 700 + 60 + 3 = 763
This is the basis of base-N number theory and can be applied to any base value. If your base value exceeds 9, then you run out of digits and you substitute other letters or symbols. For example, in hexadecimal numbering, you use 0-9 for the first 10 digits, then a-f for the next 6 digits for values 10 through 15. Literally, in hexadecimal the digit "f" is equal to decimal value 15, always, forever, by definition.
Now when counting in decimal, when you get to the value "9", the next value is "10", but notice that it is no longer a single digit but two digits. What we often take for granted is that "10 comes after 9", but in mathematics, 10 follows 9 by the systematic rules of the positional number system.
So what you really do when you go from "9" to "10" is add a digit to the left that is multiplied by the base raised to the next power, which for base 10 is 10^1 = 10. This means you have the following:
10 = 0 * 10^0 + 1 * 10^1 = 0 * 1 + 1 * 10 = 0 + 10 = 10
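Written as a tiny Python sketch (my illustration), the positional rule for any base looks like this:

def digits_to_value(digits, base):
    # digits are listed most significant first, e.g. [7, 6, 3] for 763
    total = 0
    for position, digit in enumerate(reversed(digits)):
        total += digit * base ** position
    return total

print(digits_to_value([7, 6, 3], 10))   # 763
print(digits_to_value([1, 0], 10))      # 10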
Ok, now that we have discussed the basics of base-n numbering systems, lets apply that to binary numbering systems.
In binary numbering systems, the base number is 2, which means each digit is multiplied by the value 2 raised to the power of the digit position, starting with 0. Also, since it is by definition a base 2 number system, counting starts from 0, which leads to 1 and then to 10, not to 2.
The reason there is no 2 is because you never have the base number in your numbering system.  Count your fingers starting from 0 and you end at 9, not 10, but you have 10 fingers. So then in the decimal numbering system you have 10 values (0-9) and in binary you have 2 values (0-1).
So if we take a modest decimal value, say 37, and convert it to binary, let's first look at it in decimal.
37 = 7 * 10^0 + 3 * 10^1 = 7 * 1 + 3 * 10 = 7 + 30 = 37
Now let's figure out what it is in binary.
For starters, the digit positions and exponent values we need to consider are:
2^0 = 1
2^1 = 2
2^2 = 4
2^3 = 8
2^4 = 16
2^5 = 32
2^6 = 64
Now we can stop, because 64 is already greater than the number to convert. In other words, we don't need any multiple of 64 to come up with the value 37.
So starting with the largest power of 2 that does not exceed the number and working down, we can do some simple math. The largest value that is a power of 2 that is less than or equal to 37 is the value 32 which is 2^5. Remember, its exponent is 5 and that means it is digit "5" when counting from 0. So it is the 6th digit (numbering the digits right to left). Take the number to convert (37) and subtract the largest power of 2:
37 - 32 = 5
So we have a remainder of 5. Find the largest power of 2 that does not exceed 5. That would be the value 4, which is 2^2. Since the exponent is 2, there is a 1 put in digit "2". Again, take the difference and find the remainder:
5 - 4 = 1
Once you get a remainder of 0 or 1, you are done. There are no more powers of 2 to consider at that point. The remainder of 0 or 1 becomes the right most digit, which is digit 0. Just use that value for the digit.
So we used the exponents of 5 and 2 and had a remainder of 1. This means that digits 5, 2, and 0 each need to be value 1 while the other digits (1, 3, 4, and all digits beyond 5) need to be value 0. This gives us:
1 0 0 1 0 1
| | | | | |
| | | | | +-> digit 0, comes from remainder
| | | | +---> digit 1, uses no exponent hence 0
| | | +-----> digit 2, uses exponent of 2, hence 1
| | +-------> digit 3, uses no exponent hence 0
| +---------> digit 4, uses no exponent hence 0
+-----------> digit 5, uses exponent of 5, hence 1
Therefore the decimal number 37 is represented as 100101 in binary.
To convert it back, just multiply the digit by its position multiplier and sum up all the values.
1*2^5 + 0*2^4 + 0*2^3 + 1*2^2 + 0*2^1 + 1*2^0
which is
1*32 + 0 + 0 + 1*4 + 0 + 1*1
which is
32 + 0 + 0 + 4 + 0 + 1
which is
37
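That subtract-the-largest-power-of-2 procedure can also be written out as a short Python sketch (my illustration):

def to_binary(n):
    if n == 0:
        return "0"
    # find the largest exponent whose power of 2 does not exceed n
    exponent = 0
    while 2 ** (exponent + 1) <= n:
        exponent += 1
    bits = ""
    for position in range(exponent, -1, -1):   # work from the top digit down
        power = 2 ** position
        if n >= power:
            bits += "1"
            n -= power                         # subtract and keep the remainder
        else:
            bits += "0"
    return bits

print(to_binary(37))   # 100101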
And that is the basis of base-n number systems with examples in decimal (base 10) and binary (base 2).
Overkill is underrated. :-)

Friday, March 02, 2012

Wrapped in Wax Paper

I recently started taking my lunch to work instead of going out (as I had done for years). I had a couple goals in mind: conserve money and conserve time. I managed doing it several days straight and then I needed Taco Bell. Yes. It was a need.

On the first morning of this experiment I opted to fix sandwiches made with white bread, ham, and mustard. Yes, it was white bread - thank you very much! I can hear the health nu... conscious... among us groaning at me even now, telling me that my white bread is "unhealthy". Well to that I say "I counteracted the unhealthy white bread by using mustard, the world's perfect condiment". Enough said.

So I fixed two sandwiches - I was expecting to be really hungry - and looked for Ziploc bags to put them in. To my shock and horror, we had very few of the right size bags. Since my daughter has been taking her lunch to school - and she is only 10 and fixes it herself - I opted to leave the remaining bags for her. I was the adult. I would find another solution.

I looked around. I had plastic wrap, aluminum foil, and wax paper. I was about to grab the plastic wrap when an image from elementary school flashed through my head. I remembered sitting in the lunch room in fourth grade with my little U.F.O. lunchbox. Anyone remember that TV show? It ran from 1970 to 1973. And I had a lunchbox that proudly showed that I was a nerdling.

I distinctly remember my mother packing my lunch many times. She put just what I wanted in there. Usually it was a ham and mustard sandwich, celery (see? I liked celery as a kid, that's healthy!), a "mixed fruit" fruit cup (basically sugar water with bits of fruit), and a cookie or something. Oh yeah, and a thermos of grape Kool-Aid! 

I don't know why I remember these kinds of things. I stopped asking "why" a long time ago when I found that I could remember so many mundane things but forget important stuff. Like sometimes, instead of my fruit cup, mom would give me a chocolate pudding cup. And Freddie got me mad at him one day in fourth grade because he said it "looked like poop". See? Why do I remember that stuff?

And the mundane thing that came to mind was the look of the sandwiches my mother made for me, wrapped neatly in wax paper. She folded the ends together a few times until they were snug against the bread, and then tucked the ends underneath. I remember distinctly how they looked. I wanted my sandwiches to look like that.

Pulling out the wax paper, I went to wrapping. I wasted a piece of the wax paper because I did not get the length right, but I figured it out soon enough. And in under 2 minutes I had both sandwiches wrapped.

I looked at them intently. The bread was nearly square for some reason. With the sandwiches wrapped in wax paper, they looked like something you would get at an old-fashioned deli. I was proud of myself.

So far I have brought sandwiches every day, each wrapped in wax paper. Maybe that is why I needed Taco Bell. And even though I now have more Ziploc bags, I think I will continue to use the wax paper. Mostly because I like the way they look wrapped up.

And to think, only recently I thought there was almost no use for wax paper anymore. I was wrong. Wax paper made me happy for a little while. And that makes it a good thing.



Copyright 2012, Kevin Farley (a.k.a. sixdrift, a.k.a. neuronstatic)