Friday, 9 March 2012

UNIX and GNU/Linux : a user’s point of view

Hello guys! The article on Apple took a long time to research and write, notably because the company did a little more than sell operating systems in its history. This article will be shorter, mainly focusing on the UNIX family of operating systems as it stands today, and especially on GNU/Linux. The reasons are simple: there are only small differences between most UNIXes (even though some, like MINIX, have a more original structure), and their past is not very interesting because they only entered the consumer business late. Unlike previous articles, this one will have a less encyclopedic structure and will be more opinion-oriented, because of the lack of easily accessible, complete documentation on the subject.
UNIX is an old operating system dating back to the era of minicomputers. Because of its well-designed architecture, it has been used in a lot of ways, from embedded systems (smart fridges and cellphones) to web servers. By well-designed, I mean that this architecture does not introduce a big overhead in terms of processor and memory usage, puts much power in the hands of the user, and is relatively simple to understand.
  • The UNIX architecture is extensively based on the “everything is a file” metaphor. Running programs are visible as folders in the filesystem, with files describing their state inside. Doing I/O is a matter of acquiring access to a file and writing data into it. Speaking of files, UNIX introduces the powerful symbolic link mechanism, which lets a link to a file be treated as the real file by most programs, in effect allowing the same data to appear in several places at once (a first short sketch after this list illustrates this).
  • The second core concept of UNIX is the client/server model. UNIX is optimized for a way of programming where servers provide data and clients connect to them, with both being relatively independent. Apart from sending and receiving data and using signals (which tell the client or server program to do a specific task, without much freedom), about the only thing clients and servers can do is wait for data or for a signal. When a client tries to read data from a server and there is none, it is put to sleep until data arrives. When a server has no client, it may be put to sleep until a client connects to it (the second sketch after this list shows this blocking behaviour).
  • The last special thing about UNIX is the “copy on write” principle. When data has to be duplicated for some reason, it's not useful to keep two copies of it in memory until one of the copies is actually modified. UNIX hence uses some operating system magic to create a “fake” copy of the data, in a way somewhat related to symbolic linking. Anyone reading that copy in fact reads the original data itself, but writing to it triggers the actual copy of the part being modified. Copy on write reduces memory usage at no processing power cost, but at the cost of some predictability, since a write happening any time later may briefly slow the computer down while the data is really copied.
    An application of copy on write is the “fork” system: on UNIX, applications cannot directly create new processes. Instead, they first create a clone of themselves through the “fork” function, and the clone then mutates into the program that has to be run through the “exec” function. This makes it somewhat easier to set up interprocess communication in some cases, and in other cases the ability to clone processes is useful in its own right (the third sketch after this list shows the pattern).
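To make the first point more concrete, here is a minimal C sketch of the “everything is a file” idea, assuming a Linux-style /proc filesystem and an existing /etc/hostname (the link name is made up): process state is read through ordinary file I/O, and a symbolic link is opened as if it were its target.

    /* Minimal sketch: process state read as a file, and a symbolic link
     * used transparently in place of its target. */
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        char line[256];

        /* Running programs are exposed as files: read our own state. */
        FILE *status = fopen("/proc/self/status", "r");
        if (status) {
            while (fgets(line, sizeof line, status))
                fputs(line, stdout);
            fclose(status);
        }

        /* A symbolic link is treated as the real file by most programs. */
        symlink("/etc/hostname", "hostname-link");    /* made-up link name */
        FILE *f = fopen("hostname-link", "r");        /* actually opens /etc/hostname */
        if (f) {
            if (fgets(line, sizeof line, f))
                printf("hostname read through the link: %s", line);
            fclose(f);
        }
        unlink("hostname-link");
        return 0;
    }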
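The blocking behaviour from the second point can be seen with a plain pipe: the reading side is simply put to sleep until the writing side produces data. A minimal sketch:

    /* Minimal sketch of blocking I/O: the parent's read() puts it to sleep
     * until the child, playing the server role, finally writes some data. */
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    int main(void)
    {
        int fds[2];
        char buf[64];

        if (pipe(fds) == -1)
            return 1;

        if (fork() == 0) {
            /* Child ("server"): take some time, then send data. */
            close(fds[0]);
            sleep(2);
            const char *msg = "hello from the server side\n";
            write(fds[1], msg, strlen(msg));
            close(fds[1]);
            _exit(0);
        }

        /* Parent ("client"): this read() blocks until data arrives. */
        close(fds[1]);
        ssize_t n = read(fds[0], buf, sizeof buf - 1);
        if (n > 0) {
            buf[n] = '\0';
            fputs(buf, stdout);
        }
        close(fds[0]);
        return 0;
    }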
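Finally, a minimal sketch of the fork/exec pattern itself: the process clones itself (cheaply, thanks to copy on write), and the clone then replaces its image with another program, here the familiar "ls" command, chosen arbitrarily.

    /* Minimal sketch of process creation on UNIX: fork() clones the caller,
     * then exec() mutates the clone into another program. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        pid_t pid = fork();                 /* clone the current process */
        if (pid == -1) {
            perror("fork");
            return EXIT_FAILURE;
        }

        if (pid == 0) {
            /* Child: memory pages are shared copy-on-write until written,
             * then the whole image is replaced by the new program. */
            execlp("ls", "ls", "-l", (char *)NULL);
            perror("execlp");               /* only reached if exec failed */
            _exit(EXIT_FAILURE);
        }

        /* Parent: wait for the clone to finish. */
        int status;
        waitpid(pid, &status, 0);
        printf("child exited with status %d\n", WEXITSTATUS(status));
        return 0;
    }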
This can be understood as the core of UNIX: copy-on-write memory management, optimization for the client/server model, and everything is a file. Another key characteristic of UNIX data manipulation functions is that they deal in plain streams of bytes. While this allows hardware independence, it may prove to be a drawback when transmitting structured data such as C arrays and structures, since the data has to be flattened into bytes on one side and rebuilt on the other, as the sketch below shows.
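As a small illustration of that point (the record type is made up), here is the choice an application faces when pushing structured data through a byte-oriented descriptor: raw bytes are compact but machine-dependent, text is portable but costs formatting and parsing.

    /* Minimal sketch: a byte stream only carries bytes, so structured data
     * has to be flattened by the application one way or another. */
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    struct sample { int id; double value; };    /* made-up record type */

    int main(void)
    {
        struct sample s = { 42, 3.14 };
        char buf[64];

        /* Option 1: dump the raw bytes. Compact, but padding and byte
         * order make it non-portable between machines. */
        write(STDOUT_FILENO, &s, sizeof s);

        /* Option 2: serialize to text. Portable, at the cost of formatting
         * here and parsing on the other end. */
        int len = snprintf(buf, sizeof buf, "%d %f\n", s.id, s.value);
        write(STDOUT_FILENO, buf, (size_t)len);
        return 0;
    }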
UNIX users generally communicate with the system through sophisticated command-line interfaces like bash, zsh, ash, etc. This is the way UNIX was initially made to work, and it's perfectly fine this way. These interfaces can work over a network, meaning that a central server may provide computing power for everyone in a company, with dumb and cheap computers on employees' desks whose only role is to communicate with the central server.
Everything described so far is fairly standard in the UNIX world, through the POSIX standard and the Single UNIX Specification. However, things get more complicated on the GUI side. UNIX was never designed with graphical interfaces in mind, and all the work related to them is hence provided by big user-space packages which depend on the flavor of UNIX you're using. From this point on, we'll focus on GNU/Linux, also known as Linux, a flavor of UNIX which is freely available on the Internet and uses a modular monolithic kernel design (meaning that the basic functions are provided by a huge and almighty program called the “kernel”, but that one may fairly easily add parts to or remove parts from that kernel, as sketched below).
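To give an idea of what adding a part to the kernel looks like, here is a minimal sketch of a loadable kernel module that just prints a message when inserted and removed. Unlike the other examples it cannot be built as an ordinary program: it needs the kernel headers and a small Kbuild makefile, and it is loaded and unloaded with the insmod and rmmod commands.

    /* Minimal sketch of a loadable kernel module: a piece of code that can
     * be added to and removed from the running kernel. */
    #include <linux/init.h>
    #include <linux/kernel.h>
    #include <linux/module.h>

    MODULE_LICENSE("GPL");

    static int __init hello_init(void)
    {
        pr_info("hello: module loaded\n");   /* message goes to the kernel log */
        return 0;
    }

    static void __exit hello_exit(void)
    {
        pr_info("hello: module unloaded\n");
    }

    module_init(hello_init);
    module_exit(hello_exit);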
Basic interaction with the graphics hardware is done by the kernel, but it essentially amounts to transmitting commands to it. Those commands are issued by the X Window System, a huge set of programs and libraries which provides window management (allocating part of the screen to a program and managing overlapping parts of the screen) and extremely basic hardware-accelerated drawing capabilities (like drawing a line between two points), as the sketch below illustrates.
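To show how low-level X is on its own, here is a minimal sketch of a client talking to the X server through Xlib (link with -lX11): it gets a window and draws one line between two points, and that's about all the help it gets.

    /* Minimal sketch of a raw X client: connect to the server, create a
     * window, and draw a line whenever the window needs repainting. */
    #include <X11/Xlib.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        Display *dpy = XOpenDisplay(NULL);          /* connect to the X server */
        if (!dpy) {
            fprintf(stderr, "cannot open display\n");
            return EXIT_FAILURE;
        }

        int screen = DefaultScreen(dpy);
        Window win = XCreateSimpleWindow(dpy, RootWindow(dpy, screen),
                                         0, 0, 300, 200, 1,
                                         BlackPixel(dpy, screen),
                                         WhitePixel(dpy, screen));
        XSelectInput(dpy, win, ExposureMask | KeyPressMask);
        XMapWindow(dpy, win);

        for (;;) {
            XEvent ev;
            XNextEvent(dpy, &ev);                   /* blocks until an event arrives */
            if (ev.type == Expose)
                XDrawLine(dpy, win, DefaultGC(dpy, screen), 10, 10, 290, 190);
            if (ev.type == KeyPress)
                break;                              /* any key closes the program */
        }

        XCloseDisplay(dpy);
        return 0;
    }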
It is fairly obvious that writing real GUI programs in a reasonable amount of time requires more serious facilities, like the ability to draw and manage buttons, listboxes, and checkboxes. There used to be a standard way of doing that, but it was still very primitive, and so developers from all over the world have written their own widget-drawing libraries, which are largely incompatible with each other. Examples of such libraries include GTK, Qt, WxWidgets, Tk, and the Enlightenment Foundation Libraries; a short sketch using one of them follows.
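By contrast, here is a minimal sketch of a program written against one of those libraries, GTK+ 2 in this case (built with pkg-config --cflags --libs gtk+-2.0): the button, its label and its click handling all come ready-made.

    /* Minimal sketch of a widget toolkit at work: a window containing a
     * ready-made button, something raw X does not provide. */
    #include <gtk/gtk.h>

    static void on_click(GtkWidget *widget, gpointer data)
    {
        g_print("button clicked\n");
    }

    int main(int argc, char *argv[])
    {
        gtk_init(&argc, &argv);

        GtkWidget *window = gtk_window_new(GTK_WINDOW_TOPLEVEL);
        GtkWidget *button = gtk_button_new_with_label("Hello");

        g_signal_connect(button, "clicked", G_CALLBACK(on_click), NULL);
        g_signal_connect(window, "destroy", G_CALLBACK(gtk_main_quit), NULL);

        gtk_container_add(GTK_CONTAINER(window), button);
        gtk_widget_show_all(window);

        gtk_main();                     /* event loop, runs until the window closes */
        return 0;
    }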
Then comes, finally, the desktop environment, the graphical program with which the user actually interacts. It displays a virtual desktop and lets the user explore files, manage settings, run applications, and so on. There isn't a standard way of doing things in this area either, so one may find several programs for that purpose, heavier or lighter and offering more or less functionality. Examples are Gnome, KDE, Fluxbox, Enlightenment, Xfce, IceWM, FvWM…
Modern computing also requires multimedia support. X provides a standard way of drawing graphics and a kernel component called ALSA takes care of the sound, but for actually opening anything common (a PNG image, a DVD video, an MP3-encoded song…), third-party software has to be used. For images, the widget-drawing libraries generally do fine because they already have to display icons, but for the rest, third-party software is again needed, and there isn't a standard way of doing things there either. Examples of multimedia libraries and programs are Phonon, PulseAudio, GStreamer, Xine, FFMpeg, and SDL.
As one may see, much work is duplicated in the world of Linux due to the lack of standardization, to the point that no single release of Linux could include everything and still fit on a tiny CD. Several groups hence pack various pieces of software together into a bundle which is guaranteed to work and to provide all the needed functionality. Such a bundle is called a Linux distribution. Since it's not the same software that gets bundled, there's no guarantee that a program running on one distribution will run on another, which led distribution vendors to distribute software for their own distribution themselves, through the “package management” system.
Packages are comparable to Windows installers, but more powerful. They are managed by a single program called the package manager (as usual, there are lots of these and they are incompatible with each other), which can automagically fetch the libraries required to run a program (which, as we've seen, is vital) and manage updates for every application on the system.
Here ends our quick panorama of Linux for desktop computers. To sum it up, Linux has the following characteristics:
  • UNIX basis: Linux hugely benefits from this, as most computer scientists are familiar with UNIX and hence can work on Linux and write software for it fairly easily. It also allows Linux to run on big networked systems and on systems with huge amounts of RAM and CPU power fairly easily, benefiting from the software written for UNIX servers.
  • Free software: Linux is both free of charge and freely redistributable. Its source code is also freely available, which allows developers from the community to improve it whenever they find a bug or feel that something is poorly designed. This allows Linux to be, in some cases, more powerful and to offer a better out-of-the-box experience than other operating systems.
  • X: Built the UNIX way, with a file-centric approach and a client/server model, X is very powerful in corporate environments, but as every application specifically relies on it to draw things, this power comes at a cost: when X crashes, every graphical application crashes with it, which is problematic since a buggy graphics driver may easily crash X.
  • RTM amateur, beta proprietary: Linux has a lot of skilled developers in its community, which explains how it could get this far without financial support. However, this also leads to an amateurish way of managing certain things, like pushing forward buggy, work-in-progress technologies just because they look cool and open new horizons. Even worse, proprietary software, like graphics card drivers, the Adobe Flash player, or MP3 decoding software, is often poorly designed or simply not available because of the relatively low presence of Linux on the computer market. And even when it is available, it may not be freely redistributed, which prevents people from making Linux distributions that offer a complete out-of-the-box experience like Windows or Mac OS X.
  • Lack of global organization: There is no standard way of doing certain things on Linux, but rather a hundred different ways. This lack of standardization results in effort duplication, and hence more bugs. It also results in the multitude of distributions, which means that there isn't a single “Linux” thing people may rely on. As one would guess from the bad reactions to Vista's multiple editions, Linux's 150 distributions aren't joyfully welcomed by users who just want to use it for everyday tasks.
    Happily, things are slowly getting better. Some Linux distributions have become fairly dominant, with the others keeping a relatively minor place. On the software side, the Freedesktop.org initiative has started to think about standard ways of doing multimedia things. When those standards are complete and everyone uses them, in a century or so, Linux might become a platform of choice for desktop computing.
As a conclusion, Linux is a good offspring of the UNIX family, well-appreciated in the server market, but it's not mature enough to fully replace other operating systems on desktop computers. This is due to poorly designed GUI management, a shortage of drivers and proprietary software, and a general lack of standardization on hot desktop computing subjects.
However, for some specific uses of a desktop computer, Linux may be a satisfying operating system. As an example, the author reads his mail, writes reports, browses the web, and does drawing and photo manipulation on his system. He also plays some games, listens to music, and watches Youtube videos. All of that is done using Ubuntu Linux 9.04. However, it took several tweaks that a normal computer user should not have to learn to make everything work properly, and the next release of Ubuntu Linux does not play sound at all, no matter how hard he tries.

