Monday, October 08, 2012

12 Common Things That Have or Will Disappear From The Modern Office

With changes in technology, and simply in the way things are done these days, a number of common office items have already disappeared, or soon will disappear, from the modern American office environment. While the point may be argued that there will always be some holdout still clinging to their old ways and refusing to give up that ancient manual typewriter, in general, the items identified in this list were once common in American offices but are seldom seen now.

Granted, this list comes from a particular perspective formed in the technology industry, so it may be skewed. But I think the same items are disappearing from most other office environments.

12. Intercoms

By this I mean a device that is nothing but an intercom, not the intercom function of a desk phone. My father had an ancient set of intercoms that he once used in his office and retired them before I was a teenager. So these have been gone for a long while now. With the advent of the modern office phone systems, the ability to call desk to desk without dialing outside of the company rendered these truly extinct.

Replaced by: office phone system

11. Typewriter

Once the most required piece of equipment in any office. Some offices had fleets of typewriters the way modern offices have so many computers today. At one time electric typewriters were the epitome of technology in an office. Now, they are derelicts from the past that get hauled out on extremely rare occasions for some obscure requirement of some official documentation.

Replaced by: computers

10. Carbon Paper

This goes hand in hand with the typewriter. One of the things you were taught in typing class was the way to handle and use carbon paper so you could type out an original and a copy simultaneously, without getting carbon all over everything.

Replaced by: photocopy machines

9. Rolodex

There was a time when a Rolodex was the state of the art way of maintaining contacts. It is doubtful that there are many under the age of 30 that even know what they are.

Replaced by: software on computers and phones to track contacts

8. Cut and Paste Supplies

The computer paradigm of "cut and paste" originated from the practice of physically cutting out printed material and pictures and pasting them onto another piece of paper. Many secretaries and administrative assistants spent a great deal of time using scissors or a straight edge and hobby knife to cut out items for a presentation, then carefully pasting them onto another piece of paper, or sometimes a presentation board. Not too many years ago, the tools and supplies for this were crucial in the office, but now they are often hard to find.

Replaced by: software for document and presentation generation

7. 3-Ring Binders

As fewer people need printed documents for long-term use, fewer people are printing documents to store in a 3-ring binder. I used to have shelves of technical manuals and the like in 3-ring binders, all neatly organized with labels I made to slip into their spines to identify them easily. I, like so many others, have found such manual use of printed documents to be more of a hassle than simply viewing an electronic document with the ability to search it quickly and efficiently. So now, there seem to be a lot fewer 3-ring binders in offices.

Replaced by: computers and digital storage

6. 3-Hole Punch

This goes hand in hand with the 3-ring binders. If you don't have a need to store something in a 3-ring binder, you don't need to put 3 perfectly circular holes in the edge of a piece of paper.

Replaced by: nothing

5. Staplers

I know you still see them in offices. I know there are still uses for them. But once upon a time, everyone who started in an office was issued a stapler and usually a staple puller. In the past 10 years I have seen that change. No longer are employees issued their own stapler. Now, they find one available on someone else's desk, in the mail room, or some other place. The paperless office really is more reality now than ever before, and because of this, all those items to manipulate, store, and modify paper are becoming rare.

Replaced by: nothing

4. Floppy Disks

With the advent of the personal computer, boxes of floppy disks were common in all office supply cabinets. Everyone needed floppies and they were bought by the hundreds. However, as storage needs drove the development of CD-ROM and DVD-ROM, floppy disks were just not enough. You would be hard pressed to find them in today's offices.

Replaced by: CD-RW/CD-ROM

3. CD-RW/CD-ROM Media

Similar to what drove the extinction of the floppy disk, the sheer volume of today's storage requirements quickly exceeds that available in a CD-ROM. And with all laptops and desktop computers shipping with DVD writers in the past few years, it is more common to use DVD storage.

Replaced by: DVD-RW/DVD-ROM

2. Fax Machines

These have not gone away completely. Every office seemingly has the need for a paper fax. But this item specifically can be replaced by a number of things, including e-fax software that sends a scanned document image directly to someone else for display and storage on a computer. 

Replaced by: software and scanners

1. Desktop Computers

More often than not, employers expect their office employees to do some work off the clock, or at home, so they are issued laptop computers instead of desktop computers. This is so common in many offices that it is hard to find a modern desktop computer; most of the ones you do find are a few years old. You will still find desktops in offices that have no need or desire for their employees to take home any work or any technology. But in most modern technology companies, using laptops and mobile devices instead of desktop computers is becoming the norm.

Replaced by: laptops and tablets

There it is. My own personal list of a dozen items that were once common in the American office but have either mostly disappeared, or are on their way out. There are always exceptions, and an argument can be made for each of these items as to their utility and required presence. But I believe those are exceptions and not the norm.


Copyright 2012, Kevin Farley (a.k.a. sixdrift, a.k.a. neuronstatic)

I Remember 9/11/2001

I remember that day, 11 years ago, sitting at my desk getting ready for a design review for our project. I remember one of my buddies giving me the chilling news of the attacks. I remember how we huddled around a few computers to get the latest news and see horrific images.

I remember people talking about family and friends and loved ones that may or may not have been in the Twin Towers that day. I remember the anger, the sadness, all the emotions that ran amok at the thought of such a cowardly attack against innocent civilians on American soil.


I remember thinking how I would like to be the one to fire the missiles, drop the bombs, or drive the tank over those cowards and their leaders. I remember patriotism and nationalism rising up renewed inside me, and all around me.

I remember concern for the first responders, and praying for their safety. I remember praying for the demise of those that caused this trouble. I remember praying for my own family.

I remember wondering how life in America would change, how would others react, how would America take vengeance. I remember wanting vengeance because of the nature of what happened.

I remember this was not a military attack against the government of the United States of America. I remember this was a personal attack against the people of the United States of America. I remember there were so many that died in the attacks. And I remember that there could have been more. I remember thinking that someone, hidden in their own fears, declared war on the people of the United States of America without actually telling us.

Yes, I remember all these things. But there is one thing I do not remember.
I do not remember being afraid.

We are Americans. And despite a rather vocal minority of individuals who are willing to simply roll over and surrender all we hold dear, Americans do not so easily give up. Even when divided over politics, economics, or religion, I believe most Americans will agree that we cannot allow others to tell us how America should be structured, how we should live, and what freedoms we should or shouldn't have.

And despite those pacifistic idealists that have no real clue about the real world, and real hate from real evil, Americans are guaranteed fundamental rights that we will fight for through our military and individually. We are Americans. The world sees us as the most armed nation on the planet. I'm glad they do. They should. And they should never forget it.

I am an American. I am free to believe and think as I see fit. I am free to read my Christian Bible, and carry a gun. I am free to disagree with anyone, and free to ignore anyone that disagrees with me. Freedom is conformity. Freedom is independence. Freedom is our right.

I remember that 236 years ago, a bunch of armed colonists, tired of being over-taxed, under-represented, and over-governed, set forth in a document their declaration of liberty. Men who put their lives on the line just by signing it. Men who bled and died defending it, and the families that suffered through years of war to win liberty and freedom for all Americans. Even those that did not appreciate it.

We are an independent nation. We must remain an independent nation. The founding fathers of this nation understood that when words run empty, weapons fill the void. Something they knew then, something most of us know now. Something that some would deny as the last means to preserving freedom. Some would gladly exchange their freedom for security. And that is something I won't do.

Our soldiers and first responders put their lives on the line every day defending our nation, our people, and our rights to life, liberty, and the pursuit of happiness. Whether you like it or not, that is what this nation is founded on. Our freedoms and our rights were not handed to us because we demanded it, or occupied it. Our freedoms and our rights were won through battle and death. That is the painful reality of living in a corrupt world.

So yes, I remember 9/11/2001. And I remember on that day thinking that I am glad I am an American. I am glad I am armed. And I am glad that I do not live in fear. And in spite of the efforts of that vocal minority, we are one nation, under God, indivisible, with liberty and justice for all.

So after all the remembering, take it with you wherever you go. Remember you are an American. Remember you are free. And remember that freedom is just one tyrant away from being lost.


Copyright 2012, Kevin Farley (a.k.a. sixdrift, a.k.a. neuronstatic)

Wednesday, August 08, 2012

Zero

I recently read a post online where someone attributed the "invention" of mathematical 0 (zero) to Islamic Arabs some time around 500 AD. Well, they didn't, and it can be proven a number of ways.

For starters, the mathematician in me must comment. It was the Babylonians that first conceived of the idea of "zero" as a number somewhere around 1900 BC and introduced a symbol for the number zero somewhere around 250 BC - thus the symbol for "0" used in math predates Muhammad by around 800 years (he was born in 570 AD). And in 130 AD, Ptolemy was using a symbol for zero (a small circle with a long overbar) within a sexagesimal numeral system otherwise using alphabetic Greek numerals.

Secondly, the number concept of "zero" was independently conceived in India around the same time, and it was the Indian scholar Pingala who, around 300 BC, was actually counting in a binary system of 0 and 1. Also, the Mayans around 36 BC conceived of a "zero" number in their counting system, complete with a symbol to represent it, and influenced the Incas and other Mesoamericans in their counting systems as well.

The positional notation without the use of zero (using an empty space in tabular arrangements, or the word kha "emptiness") is known to have been in use in India from the 6th century. The earliest certain use of zero as a decimal positional digit dates to the 5th century mention in the Sanskrit text Lokavibhaga. The symbol for a zero digit was written in the shape of a dot, and consequently called bindu ("dot"). The dot had been used in Greece during earlier ciphered numeral periods.

Note that this all transpired while the Greeks were arguing about the philosophical concept of "nothing can't be something" and had to seek public (i.e. philosopher) opinion about whether or not zero was even a number - not unlike the modern West debating the esoterica of facts while others simply go with the facts, but I digress...

But it is the engineer in me that must point out that long before Muhammad was even born, and before there was a concise definition of zero and negative numbers, people in the west were doing math, building towers, sitting in concrete stadiums, getting water from concrete aqueducts, using paved roads with long bridge spans, damming rivers, and having multiple means of mechanical power conversion systems, while achieving a level of sophistication of city and population management that the nomads of the deserts did not have for centuries.

Of course this also started before Muhammad was even born and was concurrent with Roman soldiers spreading their engineering skills across Europe and into Africa.


Copyright 2012, Kevin Farley (a.k.a. sixdrift, a.k.a. neuronstatic)

Tuesday, July 17, 2012

What Have We Lost?

For years I have been concerned over this nation's frantic motions to bolster non-performance and false self-esteem in children. Time and time again there is some group here or some group there coming out with what some experts believe to be the "best thing for the children" and more often than not, it runs counter to traditional values and methods that have worked for ages.

But there it is, "for the children". The problem is that it is difficult to argue with someone who comes out first and says "but this is for the children". After all, who wants to be "against the children"? But in reality, I believe those very - possibly well-intentioned - people who malign traditional methods of raising children in favor of questionable experiments in "self-esteem" and "positive reinforcement" are themselves acting "against the children".

Why do I believe this?

Consider this. In days long past, when a child didn't make the team, or didn't get the grade, or didn't get included in some group, they were disappointed. They felt bad. And yet nearly every one of them got over such disappointments and went on to live a life where the real world required dealing with disappointment from time to time. So along comes someone who says "But why should the kids have to endure such hardship as disappointment? Surely we can do better." 

So they remove the early disappointment from the equation, slowly at first, and then like a rampaging bull. First it was "participation ribbons", then it was "participation trophies", then someone decided bad grades were a bad idea so they came up with "outcome based education", then people began suing the Boy Scouts, ball teams, and even churches because they were denied membership.

And all of this is done because somewhere, some kid may be disappointed and may have to have a crummy day/week/month. 

Personally, I think considering disappointment a travesty or even a hardship is an exaggeration.

"But wait!", say the proponents, "Those children who are disappointed may simply decide that it's not worth trying and simply give up. You don't want that now, do you?"

Well, yes, yes I do in some cases. If someone really can't do something, then they should focus their efforts on doing the things that play to their strengths. If they can't do one thing, but can do another well, then help them find satisfaction in doing the thing they do well. Not all kids have to be like everyone else. If they can't do it well, but enjoy doing it, they should continue doing what they like without expectation of being rewarded the same as those who do it well. The satisfaction is their reward.

But a lot of times it is parents and peers that attempt to push everyone into the same mold. Why do we try to mold all kids in the same way? Because in reality, people get nervous around others who are different. Most people are more comfortable in a flock. I would say herd, but really flock - as in sheep - fits much better.

Yes, some kids will be disappointed at some things, and others at different things. But let's look at the alternative, and by alternative, I mean exactly what is happening across America today. Success is being mocked and ridiculed.

Starting decades ago, there has been a media - and political - fad of reducing the idea of independent effort and independent thought that result in success to something evil that should not be allowed. With that thought, you end up crushing a lot of kids beneath the burden of "but you shouldn't be better than your neighbor, it may make them feel bad". This is not an over-reaction or an idle approximation; this is exactly the message being sent to kids in TV shows, movies, news reports, and political speeches today. They are inundated with it. I was inundated with it when I was younger, but my parents had not been, so it didn't stick. I am afraid that with each generation it gets stickier.

So when the children who excel at various activities - sports, academics, music, art, etc. - see the mediocre ones, and even the non-performers, rewarded with the same zeal and accolades they receive, they are frustrated. How do you think they feel? A lot of them have decided that it's not worth trying and they simply give up. Wait a minute. You really don't want that, do you?

Well I don't want that. When those that excel give up, we become a nation of lackluster losers with mediocre performance. 

Am I being overly dramatic? I don't think so.

Ask yourself this question: is it better for a child to suffer some disappointment and learn to cope with it, or be coddled through it and to come out into the world as an adult without the ability to handle larger disappointment?

You can't "instruct" a child in how to handle disappointment unless they experience it. And you can't completely avoid it. So attempts to eliminate it from their emotional diet are doing them a disservice. It is against the children.

I can hear the straw-man argument already forming. "But wait! Are you saying we should just leave our kids to fend for themselves and do nothing? Just let them rot in disappointment and depression?"

Absolutely not, and it is idiotic to even imply such a thing from what I have written. Nothing should be taken to extremes. And yet the argument from the possibility of extremes is exactly why some things are as screwed up as they are today. Spanking is considered a social taboo now. Why? Is it because spanking harms children? No. It is a social taboo because some people have taken it to extremes and been abusive. I would posit it is equally abusive to a child to not spank them in their early formative years but allow them to become selfish spoiled brats who grow up demanding they get their way.

It is my sincere hope that this trend of raising brats reverses and there is some semblance of order restored to the universe. Children do not rule the universe - though watching the federal government from the top down may lead you to believe so - and therefore the universe should not be molded around them. They are a part of it. They share a place in it. They must experience the good and the bad. It is our job in each generation to prepare the next generation for as many facets of it as we can, not just the ones we want.

I am in the 40-something group. I think we already screwed it up this time around. I just hope it's not too late to fix it.


Copyright 2012, Kevin Farley (a.k.a. sixdrift, a.k.a. neuronstatic)

Tuesday, July 03, 2012

What have the Republicans done for us in the last 40 years? Plenty.

This is just a cursory overview by the way.

Richard Nixon
  • Pursued reforms in welfare, health care, civil rights, energy and environmental policy, many of which were struck down by Congress.
  • Created the Office of Management and Budget and the Office of Energy Policy for advice on oil policy and supported the Clean Air Act of 1970.
  • Established the Environmental Protection Agency.
  • Insisted that Congress broaden the U.S. Civil Rights Commission mandate to include sex discrimination and signed all civil rights legislation passed by Congress, including Title IX, which banned sexual discrimination in educational benefits.
  • Expanded enforcement of affirmative action.
  • Supported the Constitutional amendment lowering the voting age to 18.
  • Used the "peace dividend" from reducing troops in Vietnam to finance social welfare services and enforce civil rights through the Equal Employment Opportunity Commission. 
  • Oh yes, and basically ended our involvement in the Vietnam War started by J. F. Kennedy.

Ronald Reagan
  • Started the Strategic Defense Initiative, which ultimately pushed the Soviet Union into unsustainable military spending and helped end the Cold War.
  • Stopped the rampant inflation that resulted from the Carter years.
  • The reduction of nuclear arms with the signing of the INF Treaty together with Mikhail Gorbachev on December 8, 1987. This treaty eliminated all ground-launched ballistic and cruise missiles with ranges of 500 to 5,500 kilometers.
  • Achieved an agreement in April of 1988 with the Soviet Union over their withdrawal from their occupation of Afghanistan. Not only was the war ended, but this was also the first time in 33 years that the Red Army withdrew from any conflict voluntarily.
  • The nomination and eventual appointment of Sandra Day O'Connor to the Supreme Court, who became the first female Supreme Court Justice.
  • The rescue mission to Grenada on October 25, 1983 that deposed the communist leaders who had gained control through a violent coup and rescued the 800 American medical students held captive there.
  • The Anti-Drug Abuse Act of 1986 which budgeted $1.7 billion to fund the war on drugs in America and increase the severity of punishments for drug related offenses.

George Bush Sr.
  • Desert Storm, the shortest war in our history, which went in to halt the Iraqi invasion of Kuwait, and did so.
  • Strategic Arms Reduction Treaty (START) with the Soviet Union.
  • Americans with Disabilities Act, which prevents discrimination against individuals with disabilities.
  • Revision of the Clean Air Act, deemed by many to be the most significant environmental legislation ever passed.

George W Bush
  • Took an economy in recession from the Clinton years and spurred new economic growth with the tax cuts he promised before the election, and delivered on.
  • Rejected the Kyoto global warming treaty so loved by Al Gore, the environmental lobby, elite opinion, and Europeans. The treaty was a disaster, with India and China exempted and economic decline a certain result. Everyone knew it, but only Bush said so and acted on it. Yes it was a good thing to reject that horrible treaty so that a better one could be produced.
  • Medicare Prescription Drug Modernization Act, bringing affordable prescription drug coverage to the seniors of America through expansion of Medicare.
  • Taking action against terrorists before escalation on American soil. And yes, all the intelligence agencies knew that was coming and all of Congress supported it until it seemed politically beneficial for them to distance themselves from it. Action taken against terrorist centers saved thousands of innocent American lives.
  • Taking out Hussein in Iraq, putting an end to decades of crimes against the Iraqi people and other people of the region, and in particular the Kurds, who were nearly exterminated in Iraq under Hussein's rule.
  • Support of Israel that keeps the tide of Islamic extremists at bay in the Middle East, regardless of what you think of the Israelis. 

Copyright 2012, Kevin Farley (a.k.a. sixdrift, a.k.a. neuronstatic)

Thursday, April 19, 2012

Seven Windows 8 Features Lifted From Linux and Others

Over the decades, there have been many cases of companies incorporating features they see in the competition in order to improve their product. Microsoft, Apple, and many others have all engaged in the practice. It's a tricky road to follow though, with so many bloated software patents and copyrights that you open yourself up to a lawsuit just for having something similar. And these lawsuits are typically without merit, but are used as corporate weapons, requiring vast amounts of cash from the defendants. It sucks.

And now that Microsoft has come out with their Windows 8 beta and people are using it, it is a good time to consider which "innovations" Microsoft copied from others - primarily the Linux world - and incorporated into Windows 8. Surely they copied from the Mac world also, but that is a different discussion.

Windows 8 and Linux are vastly different from each other in terms of their design and implementation. And they absolutely differ in their ideology of commercial vs free software. But they also have different audiences requiring different focus on some elements. And yet, despite their differences, there is a great deal of commonality among all desktop/laptop operating systems as they do a lot of the same things and humans use them in a lot of the same ways.

Now what is very interesting at this time is that the new user interface design has not generated the enthusiasm Microsoft expected. Sure, the technical details of the new OS have a large number of users very happy about the improvements. However, many of the beta users find themselves wishing to go back to the "old way" of doing things as they did in Windows 7, or even XP.

Microsoft, to their credit, is attempting to make a leap in user interface design for their venerable operating system. It is expected and should be welcomed. But this particular iteration may need a good deal of tweaking if they expect to lure users away from their Windows 7 machines, which people generally seem satisfied with as far as I can tell.

So without further delay, what are the features that are new to Windows but have been around in Linux desktop environments for years?

1) Mounting .iso files

Have you ever downloaded a .iso file that is intended to be burned to a CD/DVD but would like to just rummage through the file system without actually requiring a plastic disc? This feature allows users to download .iso files and simply mount them, with Windows assigning a drive letter to that file system as if it were an external drive.

Yes you could do this in Windows before using third-party tools, but now it will be native to the OS. Microsoft spins this feature saying that no Linux distro has the same kind of easy mounting, requiring command line tools or third-party tools. 

The truth? Linux has had mounting of .iso files for years. In the beginning, so long ago I don't remember when it came about, it did require a command line. All major Linux distros in recent years have included an applet or other GUI that does not require a command line. And as far as the "third-party" reference goes, a distro consists of components from numerous providers and contributors. So in a way, the majority of the software is "third-party" in that there is no one company that builds the entire Linux operating system.

2) Windows on a flash drive - Windows To Go

Windows To Go is a feature that allows users with a Windows Enterprise license to create a bootable Windows 8 environment on a USB flash drive. It supports hot unplug which allows the flash drive to be removed while running, suspending the OS until the drive is reinserted.

The ability to create and run "live" Linux distros has been around for years and every mainline distro provides a live version to be downloaded for users to give it a try. To Microsoft's credit, they did come up with an implementation that performs very well compared to typical Linux live distros. The reasons are fairly arcane but have to do with how they configured NTFS to manage just such a scenario.

3) A resilient file system - ReFS

Microsoft's next generation file system, called ReFS for Resilient File System, will be a part of Windows 8 server. When used in conjunction with Microsoft Storage Spaces, ReFS supports copy-on-write snapshots. It provides better resilience against data corruption than NTFS, using integrity checksums and B+ trees.

Codenamed Protogon, ReFS closely resembles ZFS (Z File System) and the Linux-derived Btrfs (B-tree file system), including improvements in file, volume, and directory sizes. The ReFS design and feature set are to be expected since all operating systems need to consider more resilient file systems, for server installations in particular. It's just that this is not so much an innovation as an inclusion of something smart already being done by others.

4) The primary desktop interface - Metro UI

Microsoft prototyped the Metro UI through its forays into Zune and Media Center some 5 years ago. It is a distinctive means of working with a device, though arguably not the right paradigm for a general purpose computer, but it was never a Microsoft original. Still, Microsoft followed these principles that were pioneered by others, particularly phone and tablet vendors that have to deal with touch screens and tablets.

A lot of this kind of human factors engineering in the user interface has been going on for years by the makers of point-of-sale terminals, which have a limited, functional user interface and rely on touch screens. So this is just an evolutionary step in that same kind of user interface model.

In the Linux world there have been efforts to reach the same target audience with changes in the primary desktop environment. Ubuntu has come up with the Unity UI, the GNOME team announced the end of GNOME 2 and produced the GNOME 3 shell and the extensions that followed. These Linux projects as well as Windows Metro UI are attempts to create a single UI for everything from desktops, laptops, and netbooks, to tablets and phones.

Users in both the Linux and Microsoft worlds have shown mixed responses to the new UI concept. Time will tell, but it doesn't seem that the world is quite ready to have a single UI for everything. I think this is because these devices are used differently, like the point-of-sale terminals, and therefore have different requirements for how users interact with them.

So this kind of user interface has been around a long time. It has been used for special purpose devices in the past. This is merely an attempt to make it the default user interface for all of Windows. I have my doubts as to its success.

5) An improved file copy dialogue

When copying, renaming, moving, or deleting files in Windows, all you had was a rather uninformative completion bar in its own small window that did not provide enough information. Furthermore, if you started six file copy operations you got six separate dialogue windows. Really ugly and useless.

The resulting file operation dialog is very close to the dialog that the Dolphin and Nautilus file managers in Linux have used for a while now. The Windows 8 file operation dialog sums separate file operations into a single dialog that includes pause/restart on individual files and completion graphs, just like in Linux.

6) Social network integration

Numerous Linux distros, and specifically Ubuntu, have for some years now included integrated social networking interfaces by default. These social network applets and extensions allow users to update multiple social networks with a single status update as well as consolidate profile information. Microsoft was late to the game and is now incorporating these social network features directly into the Windows desktop.

7) Cloud integration

Microsoft is now integrating their SkyDrive online storage service into Windows 8, allowing you to not only store documents, photos, and music, but you can also host your user account - personal settings, etc. - so that you can log in from anywhere and have the same user account settings.

Back in Ubuntu 11, the Ubuntu One service was integrated into the desktop to support a free 5 GB online backup solution. Additionally, services like Dropbox and others have provided free online storage with options to purchase more. Granted, the ability to host your login profile on the cloud would appeal to a certain niche of Windows users.

There are currently about 42 online backup/storage solutions including Microsoft's SkyDrive. Will SkyDrive supplant all those? It's hard to say. Because Microsoft is offering 25 GB of free storage - 5 times what most of the competition offers - they will pick up a fair number of users for sure. And with native integration into the Windows desktop, they have a compelling reason for users to stop using other services.

There you have it. Seven new features added in Windows 8 that originated somewhere else. My guess is there are more, especially under the covers. 

I point these things out, not to vilify Microsoft or be anyone's fan boy. I point these things out because of fan boys that constantly point out the opposite: features from Windows that are picked up by Linux or other software makers. And they make the point that someone is "copying Microsoft features and innovations".

My problem with that kind of criticism is it devalues competition. In a competitive environment, the successful competitors must provide similar features and "innovations" or they get left behind. And when competition gets ground out, you are left with a monopoly. The best thing for users (and any consumers) is to have healthy competition. Without it, the monopolist is under no compulsion to make anything better.

Does anyone think Microsoft would be as aggressive in development of Windows 8 features if there were no other choice of operating system? Would they provide these particular features if there were not someone already providing such a feature and it was successful? No and no. 

The main point of all this is that Windows improves because Linux and Mac improve. And those improve because the others improve. Whether or not Windows users know it, they absolutely need successful Linux and Mac offerings because that is the reason they get new features and performance improvements.


Copyright 2012, Kevin Farley (a.k.a. sixdrift, a.k.a. neuronstatic)



Friday, April 13, 2012

Linux is Linux... mostly

"What's in a name? That which we call a rose by any other name would smell as sweet."

When Shakespeare penned those words so long ago, he had no notion that they would end up being so often quoted on a massive global Internet through these automatons we call computers. Nor could he have foreseen that someone would eventually use the quote as the introduction to an operating system discussion. But I have just done that.


Once upon a time, there was one Linux kernel and one Linux operating system. Over time, parts were combined, recombined, replaced, and thoroughly hacked on until now we have over 600 variants of the operating system we call Linux (yes I know some insist on calling it GNU/Linux, but I think that is superfluous and silly). But we still have just the one kernel. And that one kernel has found itself on every kind of computing device that could bear an operating system.

From tiny computers no bigger than a matchbox, to monster mainframes with thousands of virtual Linux instances. From supercomputers used to solve the most advanced calculations on the planet, to power meters. From mobile phones to desktops, from tablets to toys. You encounter Linux everywhere. It is in your TVs, routers, set top boxes, cars, and Internet servers by the truckload. Linux is everywhere.

Interestingly enough, the place you most seldom find it in the wild is on users' desktops. The various flavors of Microsoft Windows hold a firm dominance on the desktop. Still, you can find it on a lot of desktops because many of us don't care about conforming to what others think is best. We find our own way. Some find their own way on Macs. But others, those others whose neurons buzz with excitement over having control over everything in their computer, they find their way to Linux. And they never leave.

And with all those millions of Linux users out there, each possessing a desire to do things their way, it was inevitable that variations of the Linux operating system would come about. These variants are called distributions, or simply "distros" for short. There are an estimated 600+ different distros of Linux spilling out of the Internet at this time. Fortunately for the newly converted, there are a handful that hold the majority of Linux users. Arguably - and really, it is argued a lot - the leading Linux distros at this time are Mint, Ubuntu, Fedora, Debian, openSUSE, and Arch.

These six distros have bubbled up to the top, serving the needs of the majority of Linux users. And at their heart, they share the same kernel. Additionally, each of them has the ability to substitute, extend, or replace nearly every piece of software that comprises the distro. This software, called packages by some distros, typically has counterparts that provide the same basic function, but in different ways with a different user interface (UI).

What really distinguishes the distros is the list of packages included by default when installed and the package management tool that is used to maintain the distro. Many distros are simply respins of other distros, differing by merely a few packages. Others are targeted for specific industries or cultures. Some are created to be language specific for the non-English speaking world.

What strikes me as unique about Linux, and open source in general, is the (sometimes staggering) number of choices you can make about the software you run. You don't like the look and feel of the main desktop environment? Change it, just download a different one and use it. You don't like the office suite? Change it, there are several alternatives for most office apps.

With all the choices out there, and the ability to configure nearly every single aspect of the OS, it is the perfect environment for exploring how you do basic things and trying out new methods. Sometimes even the non-programmers get infected with the zeal for creating their own applications and use one of the numerous high-level scripting languages to either extend existing software, create plugins, or write their own applications. It is this kind of environment that encourages people to experiment, to do more, to learn more.

And this is where distros come in. Instead of having to start from scratch every time you want to try a different build of Linux, you can find a distro that provides the combination that most closely matches your needs. You start with some distro, then change out the packages as needed so you can get what you want, the way you want it. Or at least try to.

Some think that the huge number of distros indicates that there is no "one Linux" and it represents a fragmentation of the Linux community. While there is no single Linux operating system out there, there doesn't need to be. And furthermore, there remains that one cohesive kernel, and that is why comments about fragmentation are typically overblown. There may be 600+ distros, but Linux is Linux, and because of that, a simple swap of libraries and applications can allow one distro to morph from one look and feel to another. And yet each still remains a Linux operating system.

So, what is the point in all this? When the Linux community finds things it doesn't agree with, its members are some of the most vocal critics you will ever encounter. To the non-Linux user community, it would appear that there is huge disagreement in the Linux fold at this time, specifically over desktop environments. When Canonical announced it was swapping out the tried and true (and old) user interface software (GNOME 2) in the Ubuntu distro for the new "Unity" interface, it had its share of very vocal critics, myself included. And the announced end-of-life of GNOME 2 by the GNOME developers just added more fuel to the flames. But I submit that there is no fragmentation.

Linux is still Linux. Anyone can download Ubuntu from Canonical, and once installed, they can then download and install any of the other dozen or so alternative desktop environments available to Linux. They are not required to run Unity. Those that felt "betrayed" by Canonical with changes to Ubuntu and moved to different distros did so because they felt their base distro no longer served their needs. To those that use different distros altogether, they never really thought there was fragmentation to begin with.

As long as the kernel and the core Linux library APIs are maintained, Linux is Linux. Fragmentation would really only ever occur if someone were to seriously fork the kernel and strike out on their own AND they garnered a sufficient user base to make it a viable competitor. It looked like that may have been happening with Android. But recently the Linux team has merged most of the Android kernel changes back into the mainline. Fragmentation avoided.

Linux is Linux and has been around since 1991. It has undergone uncounted changes since then and has matured into world-class functionality and stability. One day it may "hit the wall". But such an event is not even on the radar yet. Until then, it keeps improving and growing. And all the while it flexes its OS muscles allowing it to become large and powerful, controlling masses of resources and serving millions of transactions, it can tuck itself into a tiny little flash part and run non-stop for years inside some little device. All this from the same code base.

Linux is Linux... mostly.


Copyright 2012, Kevin Farley (a.k.a. sixdrift, a.k.a. neuronstatic)

Thursday, March 08, 2012

Storing Text Digitally

An excerpt from "The Sydnie Emails", written Feb 4, 2008
Copyright (c) 2008, Kevin Farley


Ok, think about this, what you are calling "digits" and "letters" are nothing more than symbols that are used to represent concepts of numbers and language elements. The "numbers" or "digits" are the language-dependent symbols that are associated with quantity and counting while the "letters" are the language-dependent symbols that are associated with language utterances.


So when you think about it globally, there is nothing intrinsic to the letter "k" that denotes a "k" sound. We, as English speakers, associate that letter with the beginning sound made when saying "k". And similarly we associate the letter "w" with a sound that has no relationship to its name, but merely represents a known sound. So then the "letters" of the English alphabet are used to represent some language-dependent sound.

And when we want to store some information in a computer, we need to be able to associate the language-dependent symbols with some computer-usable patterns of 0s and 1s. And really, that is all that is done. Each letter of the alphabet, some punctuation, the numbers, and a few other "characters" are simply associated with binary values in the computer.

Think of it like a look up table. You want to store the symbol "k" so you need to define a unique value for that symbol so that every time you see it, it will only ever mean "k". Do the same for each letter of the alphabet, including both upper case and lower case characters (think about it, capitalization of a letter may not change the sound it makes, but it is uniquely different in meaning and what it represents).

The result is a map where you can look up a symbol (character/letter) to get its representation, or using the value of the representation, you can look up the symbol.

Now the most widespread of such mappings is the ASCII code. This is the world's most recognized standardized character map, but it only maps English characters (gee, I wonder who invented the entire computing industry). This mapping code has been around a long time. Google it sometime if you are interested.

The basic ASCII chart assigns 128 letters, numbers, punctuation marks, and special characters to the values 0 through 127. There is an extended character set that uses the values 128 through 255, but that is another matter altogether. Also, because they wanted to keep the range of values for characters to something that can be stored in a single "byte" (8 binary digits/bits), all character mappings must be less than or equal to 255, which is the maximum value you can store in 8 binary digits (equivalent to 11111111).


Note: The Unicode character mapping set contains what is known as "wide characters", meaning they can be larger than a single byte. Most often they are two bytes wide, which allows up to 65536 unique values as opposed to the 256 unique values used by the single-byte-wide ASCII characters. Some Unicode character sets are 4 bytes wide.

The first 32 values (literally 0, 1, 2... 31) are assigned to "control characters". Do you know what happens every time you press "control-c" to copy something? The keyboard generates a key scan code that is translated into the numeric value 3 by the keyboard device driver. The software interprets this value 3 to mean "copy the highlighted text to the copy buffer/clipboard". There is nothing magic about "control-c", it's the mapping that makes the magic.

So starting with numeric value 32 (0x20) through 127 (0x7f) you have your "printable" characters. They are called printable because they result in some character you can see (with the exception of space and delete which are technically not seen). The base-10 digits, starting from 0, are mapped to values 48 (0x30) through 57 (0x39).  Upper case letters, starting from 'A', are mapped to values 65 (0x41) through 90 (0x5a). The lower case letters from 'a' are mapped to 97 (0x61) through 122 (0x7a).
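If you want to poke at that mapping yourself, a quick sketch in just about any scripting language will show it; here is one in Python, purely as an illustration:

  # ord() gives the numeric ASCII/Unicode value of a character;
  # hex() shows that same value in hexadecimal.
  for ch in ["0", "9", "A", "Z", "a", "z", " "]:
      print(repr(ch), ord(ch), hex(ord(ch)))

  # '0' 48 0x30
  # '9' 57 0x39
  # 'A' 65 0x41
  # 'Z' 90 0x5a
  # 'a' 97 0x61
  # 'z' 122 0x7a
  # ' ' 32 0x20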

So then, when the name "Sydnie Pye" is stored in the computer it is actually stored as a sequence of numeric values in binary digits. So it is actually stored like the following

01010011    <-- S
01111001    <-- y
01100100    <-- d
01101110    <-- n
01101001    <-- i
01100101    <-- e
00100000    <-- space
01010000    <-- P
01111001    <-- y
01100101    <-- e

Alternatively, I could have simply written:
0x53 0x79 0x64 0x6e 0x69 0x65 0x20 0x50 0x79 0x65
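If you want to verify that listing, here is a short Python sketch (again, just an illustration) that reproduces the same binary and hex sequences:

  name = "Sydnie Pye"
  for ch in name:
      # one byte per character, shown as 8 binary digits
      print(format(ord(ch), "08b"), "<--", repr(ch))

  # the same bytes written out in hex
  print(" ".join(format(ord(ch), "#04x") for ch in name))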

By standardizing on the way the characters (letters) are represented in the computer, all the computers in the world can accurately store and recall that name correctly.

There are other mappings of an alphabet and characters to numeric values. One of the older ones is EBCDIC, an old IBM standard still in use to some extent. The new modern standard starting to be adopted globally is called Unicode. In Unicode, characters are not a single byte, but instead each character requires from 1 to 4 bytes depending on the specific encoding, and there are several.

This was needed because some of the Asian alphabets (most notably Kanji) have no simple equivalents to our English letters. Also, because ASCII is tuned for English and related languages (most European languages, but not Russian and Russian derivatives), it's not suitable for encoding all the intricacies of more complex alphabets.
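To see that in practice, here is a quick Python sketch showing how the byte count per character changes with the encoding (the characters are just arbitrary examples):

  for s in ["k", "é", "漢"]:
      # encode() turns the character into its byte sequence for a given encoding
      print(s, len(s.encode("utf-8")), "byte(s) in UTF-8,",
            len(s.encode("utf-16-le")), "byte(s) in UTF-16")

  # k 1 byte(s) in UTF-8, 2 byte(s) in UTF-16
  # é 2 byte(s) in UTF-8, 2 byte(s) in UTF-16
  # 漢 3 byte(s) in UTF-8, 2 byte(s) in UTF-16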

So then the answer is "yes, binary numbers are used to store textual information in a computer."

I do not say "letters" because that is a language-dependent attribute. Asian alphabets like Kanji do not have any letters, they have glyphs. And technically speaking, the English alphabet has glyphs too, we just call them letters.

Counting With Letters? No Way!

An excerpt from "The Sydnie Emails", written Jan 31, 2008
Copyright (c) 2008, Kevin Farley


When you say "count with letters too", I assume you are talking about working with digits other than 0 through 9 and that means number bases beyond 10. Recall that binary has only 2 digits, 0 and 1.
Think about it. We have 10 "numbers" because we have a base 10 number system. In English, we have assigned the "symbols" 0, 1, 2, 3, 4, 5, 6, 7, 8, and 9 to the quantities zero through nine respectively. Semantically we call the symbols we assign to represent numeric quantities "numbers". But that is more of a grammar thing and not a math thing. The math thing is to call them "digits".
In math, a symbol is used to represent a quantity, an operation on quantities, unknown quantities, and properties. But that is all these symbols are, representations of a concept. Digit symbols are used to represent powers of the base of the number system.
So then grammatically in English, using our base 10 number system, we only have 10 symbols for the digits 0-9. But the symbols can be anything. If you look at the ancient Maya, their number system was based on 20 and their "numbers" were glyphs made of combinations of bars and dots. I suppose they counted on their toes too, hence the base 20 system ;)
So instead of being based on powers of 10 representing digit positions of 1, 10, 100, 1000..., the Mayan numbering system was based on powers of 20, which means that the digit positions (if they had them) would be 1, 20, 400, 8000, 160000...
Now we can still count in Mayan using their glyphs. But we can also count in Mayan using English "symbols" instead of the glyphs. We can start by using the "number symbols" 0 through 9, and then (borrowing from the computing world and hexadecimal) continue with the "letter symbols" A, B, C, etc.
Thus our Mayan digits are: 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, A, B, C, D, E, F, G, H, I, J
Now we also know from Deep Thought that the ultimate answer to the universe, to life, to... everything... is 42 (in decimal).
But the ultimate answer in Mayan is 22. Why?
Because 2 * 20^0 + 2 * 20^1 = 2*1 + 2*20 = 2 + 40 = 42
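You can check that base-20 arithmetic with a couple of lines of Python, since int() accepts any base from 2 to 36 and uses the letters A, B, C... as the extra digit symbols, exactly as described:

  print(int("22", 20))    # 42, because 2*20 + 2*1 = 42
  print(divmod(42, 20))   # (2, 2): a 2 in the 20s place and a 2 in the 1s place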
Now since I brought it up, let's talk about hexadecimal, which is to say the base 16 number system.
Though binary is used (conveniently enough) for computing because of the nature of electrical switching and the on/off detection of electrons in circuits, programmers rely most heavily on "hex" math. The reason is simple: strings of binary digits are simply too cumbersome to keep track of (how many 1s in a row can you look at before you lose track?). So a shorthand representation of the binary numbers is needed.
Why not just use decimal? Well for starters, 10 is not a power of 2. What do I mean by that?
Binary is based on powers of 2 and decimal is based on powers of 10. To convert between base 2 and base 10 requires some mental agility (or calculator) as the digits don't "line up". I will explain that.
If I have the decimal number 117, that is the binary number 1110101. Now I can't look at any sequence of those binary digits (bits) and "mathematically see" any digit of 117. Meaning, I can't look at the string of bits and see a substring of bits that mean 100 and a substring of bits that mean 10 and a substring of bits that mean 7.
Well technically I can look at 1110101 and see 117 because I have done this for over 2 decades, but that is an entirely different matter.
So then you want to use a numbering system that shortens the representation of binary numbers but is readily convertible. As a programmer, I want to look at the number and be able to immediately see the bits underneath.
Now if I use a number system that is in itself a power of 2 at some higher order, I can achieve that because the basis of the number system is still 2, but each digit represents larger values of powers of 2.
Early in the days of computing, programmers started using "octal" representation, which is based on the base 8 number system. In octal you only have the digits 0 through 7, because remember, you do not have the base number in your digit set.
Using octal, that decimal number 117 becomes 0165 (and 01110101 in binary). I prepended the number with a 0 because that is standard practice in computing to distinguish a number as being octal: it will have the 0 in front of the number, which is not normally done for decimal numbers.
So if I look at the digits of 0165 I see 5 which is "101" in binary, 6 which is "110" in binary, and 1 which is "001" in binary. Thus we have:
 1   6   5
001 110 101

See how you can visualize the bits? Each octal digit represents a string of 3 bits. I can look at the octal digit and I only have to do the bit conversion for 8 values total, which is represented in 3 bits. When you use decimal, the base 10 digits don't allow such simple visualization. You can't simply write the decimal digits 117 and have the underlying binary pattern fall out sequentially.
To see the failure of decimal, just look at the lowest digit of 117, which is 7. In binary, the 7 is represented as 111 because that is 1*2^0 + 1*2^1 + 1*2^2 = 1*1 + 1*2 + 1*4 = 7. But clearly the bottom three binary digits of 1110101 are 101 and not the expected 111. This is because 10, the base of decimal, is not a power of 2 (the basis of binary).
If we were to simply use the decimal digits like I did the octal digits we would have the following:
 1   1   7
001 001 111 <<-- WRONG!

And that would actually be the value 79 in decimal, not 117. So clearly, decimal does not lend itself readily to binary visualization.
While octal is all well and good and an improvement for handling binary numbers, we want something still more compact that still lets us visualize the bits the way octal does. So if we look to the next power of 2, we have 16 (we went from 2, to 8 - skipping 4 - and the next is 16). That leads us to hex numbers in base 16.
So to use hex, I need 16 digits. English only has 10 "numbers", so we proceed on to the letters like with the Mayan example. So my base digit set in hex is:
0 1 2 3 4 5 6 7 8 9 A B C D E F

Which yields decimal values 0 through 15 inclusive.
Now, to distinguish a number in hex from those in octal and decimal, programmers typically prefix the number with "0x". This is the magic sign to tell us that we are looking at a hex number.
Now back to the decimal value 117. When we convert that number to hex we get 0x75 because 7 * 16^1 + 5 * 16^0 = 7*16 + 5*1 = 112 + 5 = 117.
Now remember the visualization thing? The hex digit 7 is "0111" in binary and the hex digit 5 is "0101" in binary. Thus we have:
  7   5
0111 0101

Now see again how we can visualize the bits?
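A short Python sketch ties the notations for 117 together (note that modern Python spells the octal prefix "0o" rather than the bare leading 0 used above):

  n = 117
  print(format(n, "b"))   # 1110101
  print(format(n, "o"))   # 165
  print(format(n, "x"))   # 75
  print(0b1110101, 0o165, 0x75)   # all three literals are the same value: 117 117 117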
So back to the original question of using letters, let's look at a much larger number, say 0x7EA6CF82.
In binary that is: 1111110101001101100111110000010
In octal that is: 017651547602
In decimal that is: 2124861314
In hex that is: 0x7EA6CF82
In Mayan that is: 0m1D407D5E

Now for the hex visualization:
  7    E    A    6    C    F    8    2
0111 1110 1010 0110 1100 1111 1000 0010

With each hex digit, the programmer can "see" the underlying bit patterns. As a programmer, we instinctively know (now after doing it a while) that "F" is 15 and "C" is 12. We also know that 15 is "1111" and 12 is "1100".
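If you want to double-check those conversions, here is a small Python sketch; the to_base helper and the "0m" marker are just my own ad hoc notation for the base-20 version:

  def to_base(n, base, digits="0123456789ABCDEFGHIJ"):
      # repeatedly divide by the base, collecting remainders as digits
      out = ""
      while n:
          n, r = divmod(n, base)
          out = digits[r] + out
      return out or "0"

  n = 0x7EA6CF82
  print(format(n, "b"))           # 1111110101001101100111110000010
  print("0" + format(n, "o"))     # 017651547602
  print(n)                        # 2124861314
  print(hex(n))                   # 0x7ea6cf82
  print("0m" + to_base(n, 20))    # 0m1D407D5E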
Now the question that may have popped into your head: but who uses numbers that big?
Programmers do, all the time. It's not the "data" that is usually that large, it's the memory addresses that are that large.
A regular PC has anywhere from 128 MB to 1 GB or more of RAM. A MB of RAM is actually 1048576 bytes. This is because 1 kilobyte (KB) is 1024 bytes, and a megabyte (MB) is 1024 KB. So 1024 * 1024 = 1048576. So then a gigabyte (GB) of RAM is 1024 MB or 1073741824 bytes.
Why 1024 and not 1000? Because 1024 is a power of 2 (it is 2^10 to be specific). Remember, computing uses a base 2 number system at its lowest level, and 1000 is a decimal concept. But since 1024 is almost 1000, we use the "kilo" prefix; and since 1024*1024 is a little over 1 million, we use the "mega" prefix. The same for "giga", where a GB of RAM is actually more than 1 billion bytes.
So if you are talking about memory, the prefix kilo means 1024 and mega means 1024*1024. But when you are talking about CPU clock speed of a computer, that is a different matter. A 500 MHz CPU has a clock speed of 500 million cycles per second where the M for mega means 1,000,000. Also a 3 GHz processor is running at 3 billion cycles per second where G for giga means 1,000,000,000.

As a side note, disc drive manufacturers do not use 1024 as the order of magnitude, but use the smaller 1000 instead. So that 60 GB hard drive is smaller than 60 GB of RAM because 60 * 1000 * 1000 * 1000 is less than 60 * 1024 * 1024 * 1024.
Why do they do that? Marketing. Almost a bait and switch, and most people don't know the difference. But in reality, a 100 GB hard disc drive has about 7.3 GB less than one would think (100*1073741824 - 100*1000000000 = 7374182400). But I digress...
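Since I am digressing anyway, the arithmetic is easy to reproduce; a marketing gigabyte is 1000^3 bytes while a memory gigabyte is 1024^3 bytes:

  marketing_gb = 1000 ** 3    # 1000000000
  binary_gb = 1024 ** 3       # 1073741824
  print(100 * binary_gb - 100 * marketing_gb)   # 7374182400, roughly 7.3 GB "missing"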



RAM is random access memory, and to use it, each byte must be individually accessible. To access memory, each byte has a unique address. That address is simply a one-up number. So the very first byte of RAM has memory address 0 and the last byte of a 1 GB RAM chip has memory address 1073741823.
That is supposed to be 1 less than the total locations because remember, despite how we all learned to count as children, the first of anything mathematically is really item 0, not item 1.
Another piece of this is that nearly all personal computers today use virtual memory, which is a really long discussion that is beyond what you need to get into at this time -- or ever ;)
Simply put, virtual memory means the computer can act like it has 4 GB of RAM even if it only has 64 MB; it just uses a hard disc to swap sections of RAM in and out.
To get addresses for 4 GB you have numbers in the range from 0 to 4294967295.
And because programmers are always looking at (virtual) memory addresses, we always, daily, perpetually, and in all other ways, have to deal with really really large numbers.
So that last virtual memory location, 4294967295, is 11111111111111111111111111111111 in binary, 037777777777 in octal, and 0xFFFFFFFF in hex.
And since each digit of the hex string is exactly represented by 4 binary digits (bits), the hex version is the optimal way of looking at really really large numbers in computing.
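One last Python sketch shows that top address in each of the notations discussed:

  top = 2 ** 32 - 1
  print(top)                      # 4294967295
  print(format(top, "b"))         # thirty-two 1s in a row
  print("0" + format(top, "o"))   # 037777777777
  print(hex(top))                 # 0xffffffff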
In summary, the point of having letters is just to get more digits than 0 through 9, which are needed for number systems beyond base 10.
Now I am sure that all of this is well beyond your basic question. But I am the computer guy and the math guy and since I like this stuff, I like to explain it. Thanks for putting up with this long-winded explanation.