Sunday, March 29, 2009

Solaris History

Solaris, the Unix-based operating system developed by Sun Microsystems, displays that company's ability to be innovative and flexible. Solaris, one could argue, is perpetually ahead of the curve in the computer world. Sun continually adapts to the changing computing environment, trying to anticipate where the computer world is going and what will be needed next, and develops new versions of Solaris to take that into account.


Solaris was born in 1987 out of an alliance between AT&T and Sun Microsystems to combine the leading Unix versions (BSD, XENIX, and System V) into one operating system. Four years later, in 1991, Sun replaced its existing Unix operating system (SunOS 4) with one based on SVR4. This new OS, Solaris 2, contained many new advances, including the OpenWindows graphical user interface, NIS+, and Open Network Computing (ONC) functionality, and was specially tuned for symmetric multiprocessing.


This kicked off Solaris' history of constant innovation, with new versions released almost annually over the next fifteen years. Sun was constantly striving to stay ahead of the curve, while at the same time adapting Solaris to the existing, constantly evolving wider computing world. The catalogue of innovations in the Solaris OS is too long to list here, but a few milestones are worth mentioning. Solaris 2.5.1 in 1996 added CDE, NFSv3 and NFS/TCP, expanded user and group IDs to 32 bits, and included support for the Macintosh PowerPC platform. Solaris 2.6 in 1997 introduced the WebNFS file system, Kerberos 5 security encryption, and large file support to improve Solaris' internet performance.


Solaris 2.7 in 1998 (renamed simply Solaris 7) included many new advances, such as native support for file system metadata logging (UFS logging). It was also the first 64-bit release, which dramatically increased Solaris' performance, capacity, and scalability. Solaris 8 in 2000 took it a step further: it was the first OS to combine datacentre and dot-com requirements, offering support for IPv6 and IPsec, Multipath I/O, and IPMP. Solaris 9 in 2002 saw the writing on the wall in the server market and dropped OpenWindows in favour of Linux compatibility, adding a Resource Manager, the Solaris Volume Manager, extended file attributes, and the iPlanet Directory Server.


Solaris 10, the current version, was released to the public in 2005 free of charge and with a host of new developments. The latest advances in the computing world are constantly incorporated into new releases of Solaris 10 every few months. To mention just a few, Solaris features ever more compatibility with Linux and IBM systems, has introduced the Java Desktop System based on GNOME, and has added Dynamic Tracing (DTrace), NFSv4, and, in 2006, the ZFS file system.


Also in 2006, Sun set up the OpenSolaris Project. Within the first year, the OpenSolaris community had grown to 14,000 members with 29 user groups globally, working on 31 active projects. While the project displays a deep commitment to open-source ideals, it also provides Sun with thousands of developers essentially working for free.


The development of the Solaris OS demonstrates Sun Microsystems' ability to stay on the cutting edge of the computing world without losing touch with the current computing environment. Sun regularly releases new versions of Solaris incorporating the latest developments in computer technology, while also adding more cross-platform compatibility and incorporating the advances of other systems. The OpenSolaris project is the ultimate display of these twin strengths: Sun has tapped into the creative energy of developers across the world and receives instant feedback about what its audience wants and needs. If all software companies took a lesson from Sun, imagine how exciting and responsive the industry could be.

Monday, March 23, 2009

Linux History

To understand the popularity of Linux, we need to travel back in time. In those early days, computers were as big as a house, some even the size of a stadium, so size and portability were serious problems. Worse still, every computer had a different operating system. Software was always customized to serve a specific purpose, and software written for one system didn't run on another; being able to work with one system didn't automatically mean you could work with another. This was difficult for users and system administrators alike. Those computers were also quite expensive. Technologically, the world was not yet advanced enough to shrink them, so people had to live with the size for another decade. In 1969, a team of developers at Bell Labs started working on a solution to the software problem, to address these compatibility issues. They developed a new operating system that was simple and elegant, was written in the C programming language instead of assembly language, and, most importantly, was able to recycle code. The Bell Labs developers named this project "UNIX".

Unix was built around a small piece of code named the kernel. The kernel is the only piece of code that needs to be adapted for each specific system, and it forms the base of the UNIX system. The operating system and all other functions were built around this kernel and written in a higher-level programming language, C, which was developed especially for creating the UNIX system. Using this new technique, it was much easier to develop an operating system that could run on many different types of hardware. This naturally affected the cost of the Unix operating system: vendors sold the software at many times its original cost. The source code of Unix, once taught in universities courtesy of Bell Labs, was no longer published publicly, so developers tried to find an efficient solution to this problem.

A solution seemed to appear in the form of MINIX. It was written from scratch by Andrew S. Tanenbaum, a US-born Dutch professor who wanted to teach his students the inner workings of a real operating system. It was designed to run on the Intel 8086 microprocessors that had flooded the world market.

As an operating system, MINIX was not a superb one. But it had the advantage that the source code was available. Anyone who happened to get the book 'Operating Systems: Design and Implementation' by Tanenbaum could get hold of its 12,000 lines of code, written in C and assembly language. For the first time, an aspiring programmer or hacker could read the source code of an operating system, which until then software vendors had guarded vigorously. A superb author, Tanenbaum captivated the brightest minds of computer science with his elaborate and lively discussion of the art of creating a working operating system. Computer science students all over the world pored over the book, reading through the code to understand the very system that ran their computers.

One of them was Linus Torvalds, then a second-year computer science student at the University of Helsinki and a self-taught hacker. MINIX was good, but it was still simply an operating system for students, designed as a teaching tool rather than an industrial-strength one. At that time, programmers worldwide were greatly inspired by the GNU project of Richard Stallman, a software movement to provide free, quality software. Stallman had begun his storied career in the famous Artificial Intelligence Laboratory at MIT, where, during the mid and late seventies, he created the Emacs editor.

In the early eighties, commercial software companies lured away many of the AI lab's brilliant programmers and negotiated stringent nondisclosure agreements to protect their secrets. But Stallman had a different vision. His idea was that, unlike other products, software should be free from restrictions against copying or modification, in order to make better and more efficient computer programs. With his famous 1983 manifesto declaring the beginnings of the GNU project, he started a movement to create and distribute software that embodied his philosophy. (Incidentally, the name GNU is a recursive acronym that stands for 'GNU's Not Unix'.) But to achieve his dream of ultimately creating a free operating system, he needed to create the tools first. So, beginning in 1984, Stallman started writing the GNU C Compiler (GCC), an amazing feat for an individual programmer. With his formidable technical skills, he alone outclassed entire groups of programmers from commercial software vendors in creating GCC, considered one of the most efficient and robust compilers ever created.

Linus himself didn't believe that his creation was going to be big enough to change computing forever. Linux version 0.01 was released in mid-September 1991 and put on the net. Enthusiasm gathered around this new kid on the block: code was downloaded, tested, tweaked, and returned to Linus. Version 0.02 came on October 5th.

Further Development

During Linux's development, Linus faced some difficulties, such as clashes of opinion with other people, notably Tanenbaum, the great teacher who wrote MINIX. Tanenbaum wrote to Linus:

“I still maintain the point that designing a monolithic kernel in 1991 is a fundamental error. Be thankful you are not my student. You would not get a high grade for such a design.” Linus later admitted that this was the worst point in his development of Linux. Tanenbaum was certainly a famous professor, and anything he said certainly mattered. But he was wrong about Linux, for Linus was one stubborn guy who never liked defeat. Tanenbaum also remarked that “Linux is obsolete.” Yet very soon thousands of people formed a community and joined Linus' camp. Powered by programs from the GNU project, Linux was ready for the actual showdown. It was licensed under the GNU General Public License, ensuring that the source code would be free for all to copy, study, and change. Students and computer programmers grabbed it.

Everyone tried their hand at editing the source code, and this opened the way for commercial vendors to start their own markets. They compiled various software packages and distributed them with the operating system in forms people were familiar with. Red Hat and Debian gained the most response from the outside world. With new graphical interface systems like KDE and GNOME, Linux became popular. The best thing about Linux today is its powerful commands.

Rise of the Desktop Linux

What is the biggest complaint about Linux? Its text mode. Many people are scared off by the command-based interface, which they cannot understand at first. But once you start learning the commands, it becomes an interesting way to learn new things about the operating system. Even so, very friendly GUIs are now available, thanks to Linux's flexibility. Anyone can install Linux without prior experience; everything is well explained during installation. Most distributions are also available in Live CD format, which users can simply put in their CD drives and boot without installing to the hard drive, making Linux accessible to newbies. The most important point about Linux is that it is open source, so computer users on a low budget can get Linux and learn it for free.
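For readers wondering what that text mode actually looks like, here is a small, hypothetical session using a few everyday commands. The directory and file names are invented purely for illustration; any Unix-like shell should behave similarly.

```shell
# Create a working directory and a small file, then inspect them
# with a handful of basic commands.
mkdir -p demo
echo "hello linux" > demo/notes.txt

ls demo                       # list the directory's contents
cat demo/notes.txt            # print the file's contents
wc -w demo/notes.txt          # count the words in the file
grep -c hello demo/notes.txt  # count lines matching "hello"
```

Each command does one small job; learning a dozen of them is usually enough to start exploring the system comfortably.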

Linux's Logo - Penguin

The logo of Linux is a penguin, known as Tux in the technology world. Tux, as the penguin is lovingly called, symbolizes the carefree attitude of the whole movement. This cute logo has a very interesting history. As Linus tells it, no logo was initially selected for Linux. Then, on a vacation to the southern hemisphere, Linus encountered a penguin, not unlike the current logo of Linux. As he tried to pat it, the penguin bit his hand. This amusing incident led to the selection of a penguin as the logo of Linux sometime later.

Tuesday, March 17, 2009

Unix History

Since it began to escape from AT&T's Bell Laboratories in the early 1970's, the success of the UNIX operating system has led to many different versions: recipients of the (at that time free) UNIX system code all began developing their own different versions in their own, different, ways for use and sale. Universities, research institutes, government bodies and computer companies all began using the powerful UNIX system to develop many of the technologies which today are part of a UNIX system.


Computer aided design, manufacturing control systems, laboratory simulations, even the Internet itself, all began life with and because of UNIX systems. Today, without UNIX systems, the Internet would come to a screeching halt. Most telephone calls could not be made, electronic commerce would grind to a halt and there would have never been "Jurassic Park"!


By the late 1970's, a ripple effect had come into play. By now the under- and post-graduate students whose lab work had pioneered these new applications of technology were attaining management and decision-making positions inside the computer system suppliers and among their customers. And they wanted to continue using UNIX systems.


Soon all the large vendors, and many smaller ones, were marketing their own, diverging, versions of the UNIX system optimized for their own computer architectures and boasting many different strengths and features. Customers found that, although UNIX systems were available everywhere, they seldom were able to interwork or co-exist without significant investment of time and effort to make them work effectively. The trade mark UNIX was ubiquitous, but it was applied to a multitude of different, incompatible products.


In the early 1980's, the market for UNIX systems had grown enough to be noticed by industry analysts and researchers. Now the question was no longer "What is a UNIX system?" but "Is a UNIX system suitable for business and commerce?"


Throughout the early and mid-1980's, the debate about the strengths and weaknesses of UNIX systems raged, often fuelled by the utterances of the vendors themselves who sought to protect their profitable proprietary system sales by talking UNIX systems down. And, in an effort to further differentiate their competing UNIX system products, they kept developing and adding features of their own.


In 1984, another factor brought added attention to UNIX systems. A group of vendors concerned about the continuing encroachment into their markets and control of system interfaces by the larger companies, developed the concept of "open systems."


Open systems were those that would meet agreed specifications or standards. This resulted in the formation of X/Open Company Ltd whose remit was, and today in the guise of The Open Group remains, to define a comprehensive open systems environment. Open systems, they declared, would save on costs, attract a wider portfolio of applications and competition on equal terms. X/Open chose the UNIX system as the platform for the basis of open systems.


Although UNIX was still owned by AT&T, the company did little commercially with it until the mid-1980's. Then the spotlight of X/Open showed clearly that a single, standard version of the UNIX system would be in the wider interests of the industry and its customers. The question now was, "which version?".


In a move intended to unify the market in 1987, AT&T announced a pact with Sun Microsystems, the leading proponent of the Berkeley derived strain of UNIX. However, the rest of the industry viewed the development with considerable concern. Believing that their own markets were under threat they clubbed together to develop their own "new" open systems operating system. Their new organization was called the Open Software Foundation (OSF). In response to this, the AT&T/Sun faction formed UNIX International.


The ensuing "UNIX wars" divided the system vendors between these two camps clustered around the two dominant UNIX system technologies: AT&T's System V and the OSF system called OSF/1. In the meantime, X/Open Company held the center ground. It continued the process of standardizing the APIs necessary for an open operating system specification.


In addition, it looked at areas of the system beyond the operating system level where a standard approach would add value for supplier and customer alike, developing or adopting specifications for languages, database connectivity, networking and mainframe interworking. The results of this work were published in successive X/Open Portability Guides.


XPG4 was released in October 1992. During this time, X/Open had put in place a brand program based on vendor guarantees and supported by testing. Since the publication of XPG4, X/Open has continued to broaden the scope of open systems specifications in line with market requirements. As the benefits of the X/Open brand became known and understood, many large organizations began using X/Open as the basis for system design and procurement. By 1993, over $7 billion had been spent on X/Open branded systems. By the start of 1997, that figure had risen to over $23 billion. To date, procurements referencing the Single UNIX Specification amount to over $5.2 billion.


In early 1993, AT&T sold its UNIX System Laboratories to Novell, which was looking for a heavyweight operating system to link to its NetWare product range. At the same time, the company recognized that vesting control of the definition (specification) and trademark in a vendor-neutral organization would further enhance the value of UNIX as a foundation of open systems. So the constituent parts of the UNIX system, previously owned by a single entity, are now quite separate.


In 1995 SCO bought the UNIX Systems business from Novell, and UNIX system source code and technology continues to be developed by SCO.


In 1995 X/Open introduced the UNIX 95 brand for computer systems guaranteed to meet the Single UNIX Specification. The Single UNIX Specification brand program has now achieved critical mass: vendors whose products have met the demanding criteria now account for the majority of UNIX systems by value.


For over ten years, since the inception of X/Open, UNIX has been closely linked with open systems. X/Open, now part of The Open Group, continues to develop and evolve the Single UNIX Specification and associated brand program on behalf of the IT community. The freeing of the specification of the interfaces from the technology is allowing many systems to support the UNIX philosophy of small, often simple tools that can be combined in many ways to perform often complex tasks. The stability of the core interfaces preserves existing investment and is allowing development of a rich set of software tools. The Open Source movement is building on this stable foundation and is creating a resurgence of enthusiasm for the UNIX philosophy. In many ways, Open Source can be seen as the true delivery of open systems, ensuring that UNIX continues to go from strength to strength.
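That philosophy of small, simple tools combined to perform complex tasks can be sketched with a classic pipeline. The sample data below is invented for illustration; the point is that `sort`, `uniq`, and `head` each do one small job, and the pipe operator chains them into a frequency count no single tool provides on its own.

```shell
# Build a tiny sample event log, then find the most frequent entry:
# sort groups identical lines, uniq -c counts each group,
# sort -rn ranks the counts, and head keeps the top result.
printf 'login\nlogout\nlogin\nlogin\n' > events.txt

sort events.txt | uniq -c | sort -rn | head -n 1
```

The same composition idea scales from four-line toy files to gigabytes of real logs, which is precisely the strength the paragraph above describes.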

Monday, March 9, 2009

OS/2

A family of multitasking operating systems for x86 machines from IBM. OS/2 Warp is the client version, and Warp Server is the server version. With add-ons, DOS and Windows applications can also be run under OS/2 (see Odin). The server version includes advanced features such as the journaling file system (JFS) used in IBM's AIX operating system. Like Windows, OS/2 provides a graphical user interface and a command line interface. See OS/2 Warp, Warp Server and eComStation.

Although highly regarded as a robust operating system, OS/2 never became widely used. However, it has survived in the banking industry, especially in Europe, and many ATMs in the U.S. have continued to run OS/2 due to its stability.

Features

OS/2 includes Adobe Type Manager for rendering Type 1 fonts on screen and providing PostScript output on non-PostScript printers. OS/2's dual boot feature allows booting up into OS/2 or DOS.
The OS/2 Workplace Shell graphical user interface is similar to those of Windows and the Macintosh. It was originally known as Presentation Manager (PM), but after Version 2.0, PM referred to the programming interface (API), not the GUI itself.

Evolution

The first versions of OS/2 were single-user operating systems written for 286s and jointly developed by IBM and Microsoft. Starting with Version 2.0, versions were written for 32-bit 386s and up and were solely the product of IBM. Following is some of the evolution:

OS/2 16-bit Version 1.x

The first versions (1.0, 1.1, etc.) were written for the 16-bit 286. DOS compatibility was limited to about 500K. Version 1.3 (OS/2 Lite) required 2MB RAM instead of 4MB and included Adobe Type Manager. IBM's Extended Edition version included Communications Manager and Database Manager.

OS/2 32-bit Version 2.x - IBM

Introduced in April 1992, this 32-bit version for 386s from IBM multitasked DOS, Windows and OS/2 applications. Data could be shared between applications using the clipboard and between Windows and PM apps using the DDE protocol. Version 2.x provided each application with a 512MB virtual address space that allowed large tasks to be easily managed.
Version 2.1 supported Windows' Enhanced Mode and applications could take full advantage of Windows 3.1. It also provided support for more video standards and CD-ROM drives.
Communications and database management for OS/2 were provided by Communications Manager/2 (CM/2) and Database Manager/2 (DB2/2). CM/2 replaced Communications Manager, which was part of OS/2 2.0's Extended Services option.

OS/2 32-bit Version 3 - IBM

In late 1994, IBM introduced Version 3 of OS/2, renaming it OS/2 Warp. The first version ran in only 4MB of memory and included a variety of applications, including Internet access.

Windows NT - Microsoft

Originally to be named OS/2 Version 3.0, this 32-bit version from Microsoft was renamed "Windows NT" and introduced in 1993. See Windows NT.