Tuesday, July 28, 2009

Intel Core 2 Duo Processor

The Intel Core 2 Duo processor was developed to meet the insatiable demand for increased performance from PC users running multiple intense software applications simultaneously.

In the office, PC usage has changed from data entry and word processing to e-Commerce, online collaboration, and an ever-increasing need for continual security and virus protection.

In the home, interests have shifted from low-bandwidth photos and Internet surfing to downloading and viewing high-definition videos, as well as advanced photo and video editing.

Intel’s new 45nm manufacturing technology, with hafnium-infused Hi-k transistors, enables even more processor performance by doubling the transistor density, improving efficiency and speed relative to the previous generation, and increasing cache size by up to 50 percent.

These new Intel Core 2 Duo processors deliver more performance without using more energy.

Built on the innovative Intel® Core™ microarchitecture, the Intel Core 2 Duo desktop processor delivers revolutionary dual-core performance and breakthrough processor energy efficiency.

With Intel® Wide Dynamic Execution, Intel® Smart Memory Access, Intel® Advanced Smart Cache, and Intel® Digital Media Boost, this new processor is designed to do more in less time.

Additional features, which support enhanced security, virtualization, and 64-bit computing, make the Intel Core 2 Duo the most impressive processor developed for an increasingly multimedia-centered, high-definition world.

Energy Efficiency

Design changes in the Intel Core 2 Duo processors that improve performance also increase energy efficiency by allowing the processor to operate at lower frequencies that require less power.

Intel® Intelligent Power Capability, a feature that optimizes energy usage of the processor cores, turns on computing functions only when needed. These more energy-efficient processors support smaller, more capable, and quieter desktop PCs to conserve critical power resources.

Better Acoustics

Intel® Core™2 Duo processors are equipped with a Digital Thermal Sensor (DTS) that enables efficient processor and platform thermal control.

Thermal sensors located within the processor measure the maximum temperature on the die at any given time. Intel® Quiet System Technology, included in the Intel® Express Chipset families, uses the DTS to regulate the system and processor fan speeds.

The acoustic benefit of temperature monitoring is that system fans spin only as fast as needed to cool the system, and slower spinning fans generate less noise.
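
To make the fan-speed idea concrete, here is a minimal, hypothetical Python sketch (not Intel's actual Quiet System Technology algorithm; the temperature thresholds and duty-cycle values are made up) of how a controller might map a die-temperature reading to a fan speed so the fan spins only as fast as the current temperature requires.

    # Hypothetical illustration only -- not Intel's actual fan-control algorithm.
    # Map a die-temperature reading (as a digital thermal sensor might report it)
    # to a fan duty cycle, so the fan spins only as fast as needed.
    def fan_duty_cycle(die_temp_c, idle_temp_c=40.0, max_temp_c=85.0,
                       min_duty=0.25, max_duty=1.0):
        if die_temp_c <= idle_temp_c:
            return min_duty          # cool system: keep the fan slow and quiet
        if die_temp_c >= max_temp_c:
            return max_duty          # near the thermal limit: full speed
        # Otherwise scale linearly between the idle and maximum temperatures.
        span = (die_temp_c - idle_temp_c) / (max_temp_c - idle_temp_c)
        return min_duty + span * (max_duty - min_duty)

    print(fan_duty_cycle(45.0))   # lightly loaded: ~0.33 duty cycle (quiet)
    print(fan_duty_cycle(80.0))   # heavily loaded: ~0.92 duty cycle (louder)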

Core 2 Duo

  • Core microarchitecture
  • Desktop CPU
  • Dual core
  • Up to 3.33 GHz
  • Up to 6 MB L2 cache
  • Up to 1333 MHz FSB
  • 64-bit
  • Execute Disable bit
  • SSE3, SSSE3, SSE4.1
  • Virtualization
  • Trusted Execution
  • Socket 775

Thursday, July 9, 2009

Intel History

Intel is the largest manufacturer of microprocessors in the world, and they got their start by being the brains behind the world's most advanced consumer calculator. In 1972, the Busicom high-powered business calculator was released, and it was powered by an Intel 4004 chip, Intel's first microprocessor. Shortly after, in 1974, Intel broke into the personal computer market when they put their 8080 microprocessor in the Altair 8800, the first successful personal computer ever released. In 1978, Intel struck a deal with IBM to produce the 8088 microprocessor chip to power the brand-new IBM PC for home and small business use. With an ad campaign that featured a re-creation of Charlie Chaplin's "Little Tramp" character, the IBM PC went on to be a huge success and established Intel as a premier microchip manufacturer.

Significance

In 1982, Intel released the 80286 microprocessor, which it eventually shortened to just the 286. This was the first attempt by Intel to create a microchip that could run any of the software written for previous Intel processors. Prior to the release of the 286, none of the Intel processors were backwards compatible--able to run programs written for previous generations of processors. The ability to be backwards compatible with all previous generations is now standard with Intel products. The expanded compatibility of the 286 resulted in the sale of over 15 million personal computers throughout the world.

Time Frame

The 386 generation of microprocessors was released in 1985, and it was the first processor to allow a computer to multi-task, which is the ability to run more than one program simultaneously. The programs were simple, and they were limited to only two or three at one time, but this was a huge jump in technology for home computing. The next-generation 486 was released in 1989, and this processor had a built-in math co-processor that allowed it to do complicated computations in a fraction of the time of previous generations. The 486 also allowed for a wider array of colors, and it allowed for the introduction of true point-and-click technology. Prior to the 486, it was necessary to purchase a math co-processor separately to get the maximum speed out of an Intel microprocessor.

Effects

The Pentium processor was first introduced in 1993 at speeds of 60 MHz and 66 MHz. It contained over 3 million transistors that greatly expanded the processor's computing capability and increased its speed. In 2000, Intel introduced the Pentium 4 family of processors, which featured an initial speed of 1.5 GHz. Intel continued to make design changes to the Pentium line, which included introducing dual-core and quad-core processors that were the equivalent of two processors in one and four processors in one. In 2009, Intel finally retired the Pentium name and introduced a new core technology called Merom.

Considerations

The Pentium line of processors was actually going to be called the 586 line, but Intel found it difficult to put patents on a product that was referred to only by a number, so they decided to use the Pentium name instead. The name "Pentium" was created by a marketing firm named Lexicon Branding in 1992 and then used by Intel in its 1993 release. The very first line of Pentium processors was not very successful. A floating point error in the processor caused it to miscalculate on a regular basis, and this prompted one of the largest recalls in the history of the computer industry. It wound up costing Intel over $450 million to recall the defective chips. To avoid the problem ever happening again, Intel created a quality control division that checks each microprocessor before it leaves the factory.

Wednesday, July 8, 2009

AMD Phenom II Processors

Visual Experience

Live your life in HD. AMD Phenom™ II is for high definition entertainment, gaming, creativity, and beyond. With AMD Phenom™ II processors as the foundation, you'll enjoy a new level of responsiveness and visual intensity. AMD puts high definition computing within everyone’s reach.

Superior technologies for HD video. Enjoy a superior high-definition experience for HD videos on your PC. AMD Phenom™ II processors are the powerful engine behind your high-fidelity, high-definition video entertainment experience. Only AMD puts the Ultimate Visual Experience™ for HD video within your reach.

Enjoy entertainment beyond your media library. Get HD content online, offline, wherever you want it, however you want it. Your system can handle whatever you dish out - and serve it up on screen in full, high definition glory.*

Perfect chemistry. Combine AMD Phenom™ II processors and ATI Radeon™ HD graphics to really see the difference. Enjoy smooth, brilliant video and immersive games. AMD unleashes visual clarity and responsiveness for what you want to do.

Performance

Do it all. AMD Phenom™ II processors have the power to do it all. Featuring next-generation quad-core design, they crush even the most demanding tasks. So design it, render it, play it, create it, stream it, HD it.* With AMD Phenom™ II processors, if you can imagine it, you can do it.

Energy Efficient

Make a choice you can feel good about. AMD Phenom™ II processors were designed with energy efficiency in mind. Capitalizing on AMD's leadership in energy efficiency, they incorporate all of the latest technology that gives you performance when you need it and save power when you don’t.

Tuesday, July 7, 2009

AMD Athlon X2

Take multi-tasking to a whole new level with the AMD Athlon™ X2 dual-core processor

AMD Athlon™ X2 dual-core processors put the power of dual-core technology on the desktop. Dual-core processors contain two processing cores, residing on one chip, that perform calculations on two streams of data to increase efficiency and speed while running multiple programs and the new generation of multi-threaded software. For end-users this means a significant increase in response and performance when running multiple applications simultaneously.


Better Multi-Tasking Means Increased Office Productivity

Productivity in today’s workplace requires smooth, efficient and seamless multi-tasking. AMD Athlon™ X2 dual-core processors deliver TRUE multi-tasking, allowing users to switch from one program to another without waiting for the computer to catch up, reducing annoying processing pauses.


Setting the Pace in Digital Media

Digital media software demands simultaneous processing of data streams, the perfect use for the incredible multi-tasking power of AMD64 dual-core technology. Dual-core technology is like having two processors working together, each one taking care of different applications, so power-users can actually experience greater performance when multiple applications are running. Digital media enthusiasts can usher in the next generation of digital media software for amazing high-definition video and photo editing, content creation, and audio mixing. With an AMD Athlon™ X2 dual-core processor, your PC can perform up to 80% faster than an AMD Athlon™ 4000+ processor on the latest power-hungry digital media software applications.


Get more Power using less Power

Energy-efficient AMD processors with AMD PowerNow!™ Technology (Cool’n’Quiet™ Technology) enable smaller, sleeker, more energy-efficient PCs. In March 2005, the U.S. Environmental Protection Agency (EPA) awarded AMD PowerNow!™ Technology (Cool’n’Quiet™ Technology) special recognition for the advancement of energy-efficient computer technologies. AMD expects that systems built using energy-efficient AMD desktop processors can meet, and in many instances exceed, the new system requirements from the EPA’s ENERGY STAR Version 4 computer specification, effective July 20, 2007.


All the Proven Benefits of AMD64 Technology

Enhanced Virus Protection with Windows® XP Service Pack 2 and Vista™. Enhanced Virus Protection is a feature enabled by AMD64 technology. In conjunction with modern operating systems, it can help prevent the spread of certain viruses, such as MSBlaster and Slammer, significantly reducing the cost and downtime associated with such attacks and improving the protection of computers and personal information against certain PC viruses.


AMD Athlon Processor Architecture Performance

HyperTransport™ Technology can increase overall system performance by reducing I/O bottlenecks, increasing system bandwidth, and reducing system latency. A fully integrated memory controller helps speed access to memory by giving the processor a direct connection to the main memory. As a result, end users can enjoy quicker application loading and extraordinary application performance.


Ready for the 64-bit future

Like all the processors in the AMD64 family, AMD Athlon™ X2 dual-core processors are designed for people who want to stay at the forefront of technology and for those who depend on their PCs to keep them connected, informed, and entertained. Systems based on AMD64 processors can deliver leading-edge performance for demanding productivity and entertainment software today and in the future.


With AMD64 technology, AMD Athlon™ X2 dual-core processors are fully compatible with existing software, while enabling a seamless transition to 64-bit applications. Both 32- and 64-bit applications can run simultaneously and transparently on the same platform. AMD64 technology enables new, cinematic computing experiences and capabilities, in addition to increased performance. AMD64 technology allows end users to take advantage of new innovations such as real-time encryption, more life-like games, accurate speech interfaces, cinema-quality graphic effects, and easy-to-use video and audio editing.

Monday, July 6, 2009

AMD Turion

Enhanced Memory and Processing

With true multi-core technology, notebook PCs based on AMD Turion™ X2 Ultra Dual-Core Mobile Processors and AMD Turion™ X2 Dual-Core Mobile Processors can deliver significantly greater bandwidth with support for power-optimized HyperTransport™ 3.0 Technology and PCI Express® 2.0, increasing data throughput and improving system performance while helping to extend battery life.


Next-Generation Power Management

Get long battery life with performance on demand. Enhanced AMD PowerNow!™ Technology dynamically switches performance states (processor core voltage and operating frequency) based on processor performance requirements, enabling today's more mobile and demanding PC user to extend battery life. With AMD Dynamic Power Management, each processor core, the integrated memory controller, and the HyperTransport™ Technology controller are powered by a dedicated voltage plane to give you the performance you need while multitasking on the go. Independent Dynamic Core Technology extends battery life by dynamically optimizing the operating frequency for each core in the processor based on end-user application needs. And AMD CoolCore™ Technology extends notebook usage while on battery by turning off processor features that are not being used.


Range Of Products Provide Smarter Choices For The Money

Choose the right performance for your mobile lifestyle. Mobile solution options include AMD Turion™ X2 Ultra Dual-Core Mobile Processors, which are designed to enable extreme performance and mobility, and AMD Turion™ X2 Dual-Core Mobile Processors, which deliver exceptional performance and long battery life. All to give more choices to today's more mobile PC users.


The Security And Reliability You Want With The Value You Expect For The Long-Term

Run the 32-bit applications of today and the 64-bit applications of tomorrow. Get the advantage of today's powerful applications in a power-efficient notebook PC with the forward-thinking technology to support the advanced applications of the future. Take comfort in security innovations such as Enhanced Virus Protection.

Saturday, June 6, 2009

AMD Sempron Processor

AMD Sempron™ Processor Overview

The AMD Sempron™ processor performs at the top of its class when running the home and business applications you use most. The AMD Sempron™ processor’s full-featured capabilities can include AMD64 Technology, HyperTransport™ technology, up to 256KB of total high-performance cache, a 16-bit/16-bit link at up to 1600MHz full-duplex system bus technology, and an integrated DDR2 memory controller.

The AMD Sempron™ processor provides the productivity enhancing performance you need for your everyday applications. It runs over 60,000 of the world’s most popular applications, so you can enjoy solid performance. With 35 years of design and manufacturing experience and shipments of more than 240 million PC processors, you can count on AMD to provide reliable solutions for your home or business.

Affordable Performance

The AMD Sempron processor performs at the top of its class on the home and business applications that you need and use most.

The AMD Sempron processor is designed for day-to-day computing and beyond.

Full-Featured to Improve your Computing Experience

The AMD Sempron processor lets you enjoy a dynamic Internet experience with smooth streaming video and audio.

The AMD Sempron processor saves you time and effort; enabling your system to boot and load your applications quickly.

Applications that allow you to communicate with family, friends and colleagues will run smoothly with the AMD Sempron processor.

The AMD Sempron™ processor’s advanced architectural features help ensure affordable performance and full-featured capability. These features include:

AMD64 Technology

HyperTransport technology

Up to 256KB total high-performance, full-speed cache

A 16-bit/16-bit link at up to 1600MHz full-duplex system bus technology

Integrated DDR2 memory controller on certain models

Built-in security with Enhanced Virus Protection* that works with Microsoft® Windows® XP SP2 to help protect against viruses, worms, and other malicious attacks. When combined with protective software, Enhanced Virus Protection is part of an overall security solution that helps keep your information safer.

Enjoy full compatibility with the tools you use daily.
The AMD Sempron processor is designed to run more than 60,000 of the most popular software applications, so you can enjoy reliable performance for a wide variety of computing needs. And since the AMD Sempron processor is compatible with leading PC peripherals, it helps keep everything running smoothly.

Get more value from your PC.
The AMD Sempron processor is ideal for families, students and other budget-conscious or entry-level computer buyers. It includes the right set of features you need for day-to-day computing, and gives you more power for your money than other similar processors. This means you get a PC configured with better components such as CD drives, graphics capabilities, and more.

Reliability from an Industry Leader

AMD is an industry leader that is dedicated to enabling you to get the job done at work or at play.

AMD is constantly striving to find the right solutions for you and your home or business needs.

AMD’s superior quality and track record have long been recognized by a number of the industry’s top publications, organizations and high-tech experts. AMD products, technology, manufacturing, facilities, executives and corporate and community programs have earned a multitude of awards and recognition over the years.

For the latest performance benchmarks and detailed technical documentation of the AMD Sempron processor, please visit Benchmarks and Technical Documentation. For more product comparison information, please visit the AMD Sempron Product Comparison.

Friday, June 5, 2009

AMD Athlon™ 64 Processor

AMD Athlon™ 64 Processor Overview

The AMD Athlon 64 processor is the first Windows®-compatible 64-bit PC processor. The AMD Athlon 64 processor runs on AMD64 technology, a revolutionary technology that allows the processor to run 32-bit applications at full speed while enabling a new generation of powerful 64-bit software applications. Advanced 64-bit operating systems designed for the AMD64 platform from Microsoft, Red Hat, SuSE, and TurboLinux have already been announced.

With the introduction of the AMD Athlon 64 processor, AMD provides customers a solution that can address their current and future computing needs. As the first desktop PC processor to run on the AMD64 platform, the AMD Athlon 64 processor helps ensure superior performance on today’s software with readiness for the coming wave of 64-bit computing. With AMD64 technology, customers can embrace the new capabilities of 64-bit computing on their own terms and achieve compatibility with existing software and operating systems.

Enhanced Virus Protection with Windows® XP Service Pack 2. With a unique combination of hardware and software technologies that offer you an added layer of protection, certain types of viruses don't stand a chance. The AMD Athlon 64 processor features Enhanced Virus Protection, when supported by the OS*, and can help protect against viruses, worms, and other malicious attacks. When combined with protective software, Enhanced Virus Protection is part of an overall security solution that helps keep your information safer.

Industry-leading performance for today’s software

It's not just about email, Web browsing and word processing anymore. The AMD Athlon 64 processor gives you full-throttle performance to go wherever your digital world takes you. Whether you're watching videos, ripping and playing music, or playing games, AMD64 performance helps you to fully enjoy any multimedia experience with a “you are there” reality. The revolutionary architecture of the AMD Athlon 64 processor enables industry-leading performance to help maximize productivity and deliver a true-to-life digital entertainment experience. HyperTransport™ technology can increase overall system performance by removing I/O bottlenecks, increasing system bandwidth, and reducing system latency. A fully integrated DDR memory controller helps speed access to memory by offering the processor a direct connection to the main memory. As a result, end users can enjoy quicker application loading and extraordinary application performance.

With 3DNow!™ Professional technology and support for SSE3, the AMD Athlon 64 processor has more ways to accelerate multimedia applications, enabling stellar performance when working with audio, video, and photography software. For a superior experience with high-speed Internet, the AMD Athlon 64 processor combines high-speed memory access and I/O connectivity to help ensure that end users can fully take advantage of a broadband connection to streaming video and audio, and a riveting online gaming experience.

Ready for the 64-bit future

The AMD Athlon 64 processor is designed for people who want to stay at the forefront of technology and for those who depend on their PCs to keep them connected, informed, and entertained. Systems based on AMD Athlon 64 processors are able to deliver leading-edge performance for demanding productivity and entertainment software today and in the future.

With AMD64 technology, the AMD Athlon 64 processor is fully compatible with existing software, while enabling a seamless transition to 64-bit applications. Both 32- and 64-bit applications can run virtually simultaneously and transparently on the same platform. AMD64 technology enables new, cinematic computing experiences and capabilities, in addition to increased performance. AMD64 technology allows end users to take advantage of new innovations such as real-time encryption, more life-like games, accurate speech interfaces, cinema-quality graphic effects, and easy-to-use video and audio editing.

Protect investments with a technically superior PC processor

The AMD Athlon 64 processor is the world’s most technically advanced PC processor and the first Windows-compatible 64-bit PC processor. Advanced technologies in the AMD Athlon 64 processor include:

-AMD64 technology which doubles the number of processor registers and dramatically increases the system memory addressability

-Enhanced multimedia instructions support including 3DNow! Professional technology and SSE2/3

-Up to a 2000 MHz system bus using HyperTransport technology, with up to 14.4 GB/sec total processor-to-system bandwidth

-An integrated memory controller with peak memory bandwidth of up to 6.4 GB/sec, supporting PC3200, PC2700, PC2100, or PC1600 DDR SDRAM

-Native execution of 32-bit software, allowing today’s PC software to provide leading-edge performance while enabling a seamless migration to 64-bit software

The combination of these innovations and features provides customers with performance they need along with tremendous flexibility. Customers can experience outstanding performance running today’s applications and prepare for the next generation of software without having to upgrade or change hardware. For business customers, this extends system life cycles, simplifies technology transition and reduces total cost of ownership.

Purchase with confidence

The AMD Athlon 64 processor is the only industry standard x86 processor with the ability to move beyond the limits of 32-bit computing. The AMD Athlon 64 processor is compatible with Microsoft Windows XP and tens of thousands of PC applications that people around the world use every day. The award-winning AMD Athlon XP processor won over 100 industry accolades and was the first 1GHz PC processor. Now, the AMD Athlon 64 processor reaches a new milestone by building a path to 64-bit computing for millions of PC users.

Founded in 1969, AMD has shipped more than 300 million PC processors worldwide. Customers can depend on the AMD Athlon 64 processor and AMD for compatibility and reliability. AMD processors undergo extensive testing to help ensure compatibility with Microsoft Windows XP, Windows 98, Windows ME, Windows NT®, Windows 2000, as well as Linux and other PC operating systems. AMD works collaboratively with Microsoft and other partners to achieve compatibility of AMD processors and to expand the capability of software and hardware products leveraging AMD64 technology. AMD conducts rigorous research, development, and validation to help ensure the continued integrity and performance of its products.

Thursday, June 4, 2009

AMD Processor

Advanced Micro Devices, Inc. (AMD) (NYSE: AMD) is an American multinational semiconductor company based in Sunnyvale, California, that develops computer processors and related technologies for commercial and consumer markets. Its main products include microprocessors, motherboard chipsets, embedded processors and graphics processors for servers, workstations and personal computers, and processor technologies for handheld devices, digital television, automobiles, game consoles, and other embedded systems applications.


AMD is the second-largest global supplier of microprocessors based on the x86 architecture after Intel Corporation, and the third-largest supplier of graphics processing units, behind Intel and Nvidia. It also owns 21 percent of Spansion, a supplier of non-volatile flash memory. In 2007, AMD ranked eleventh among semiconductor manufacturers in terms of revenue.


Advanced Micro Devices was founded on May 1, 1969, by a group of former executives from Fairchild Semiconductor, including Jerry Sanders III, Ed Turney, John Carey, Sven Simonsen, Jack Gifford and three members from Gifford's team, Frank Botte, Jim Giles, and Larry Stenger. The company began as a producer of logic chips, then entered the RAM chip business in 1975. That same year, it introduced a reverse-engineered clone of the Intel 8080 microprocessor. During this period, AMD also designed and produced a series of bit-slice processor elements (Am2900, Am29116, Am293xx) which were used in various minicomputer designs.


During this time, AMD attempted to embrace the perceived shift towards RISC with its own AMD 29K processor, and it also attempted to diversify into graphics and audio devices as well as EPROM memory. It had some success in the mid-80s with the AMD7910 and AMD7911 "World Chip" FSK modem, one of the first multistandard devices that covered both Bell and CCITT tones at up to 1200 baud half duplex or 300/300 full duplex. While the AMD 29K survived as an embedded processor and AMD spinoff Spansion continues to make industry-leading flash memory, AMD was not as successful with its other endeavors. AMD decided to switch gears and concentrate solely on Intel-compatible microprocessors and flash memory. This put them in direct competition with Intel for x86-compatible processors and their flash memory secondary markets.


AMD announced a merger with ATI Technologies on July 24, 2006. AMD paid $4.3 billion in cash and 58 million shares of its stock, for a total of US$5.4 billion. The merger was completed on October 25, 2006, and ATI is now part of AMD.

Tuesday, May 12, 2009

Problem Solving and Programming Logic

Computer Programs: The Power of Logic

A single program addresses a particular problem. When you write a program, you are solving a problem.

To solve a problem you must use your power of logic and develop an algorithm, or procedure, for solving the problem.

The algorithm is the finite set of step-by-step instructions that convert the input into the desired output, that is, solve the problem.
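
As a minimal, hypothetical illustration in Python (the scores and function name are made up), the algorithm below is a finite set of step-by-step instructions that converts the input, a list of test scores, into the desired output, their average.

    # A tiny algorithm: finite, step-by-step instructions that turn input into output.
    def average_score(scores):
        total = 0
        for score in scores:        # step 1: add up every input value
            total += score
        return total / len(scores)  # step 2: divide the sum by the number of values

    print(average_score([70, 85, 90, 95]))  # output: 85.0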

Structured Program Design: Divide and Conquer

Before structured programming, programmers given a task were left on their own to create a solution any way they could.

Three major problems arose from this free-form method:
1. Long development time,
2. High maintenance cost, and
3. Low-quality software.

Structured programming stresses the systematic design and management of the program development process.

A common programming problem illustrates this approach: the printing of weekly payroll checks for hourly and commission employees. A structure chart can be used to break this programming problem into a hierarchy of tasks. The most effective programs are designed to be written in modules, or independent tasks.

By using the principles of structured programming, it is much easier to address a complex programming problem as small, more manageable modules than as one big task.

In structured programming, the logic of the program is addressed hierarchically in logical modules.

By dividing the program into modules, the structured approach to programming reduces the complexity of the programming task.
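
A minimal, hypothetical Python sketch of the payroll problem above shows the idea; the module names and pay rules are invented for illustration, but each module performs one function and a main logic module controls the others.

    # Hypothetical, simplified payroll sketch: a main logic module controls smaller
    # modules, each performing a single function and returning control to its caller.
    def hourly_pay(hours, rate):
        return hours * rate                     # compute pay for an hourly employee

    def commission_pay(sales, commission_rate):
        return sales * commission_rate          # compute pay for a commission employee

    def print_check(name, amount):
        print(f"Pay to {name}: ${amount:.2f}")  # print one payroll check

    def main():                                 # main logic module
        employees = [
            {"name": "A. Hourly", "type": "hourly", "hours": 40, "rate": 15.00},
            {"name": "B. Seller", "type": "commission", "sales": 20000, "commission_rate": 0.05},
        ]
        for emp in employees:
            if emp["type"] == "hourly":
                amount = hourly_pay(emp["hours"], emp["rate"])
            else:
                amount = commission_pay(emp["sales"], emp["commission_rate"])
            print_check(emp["name"], amount)

    main()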

Some programs are so complex that they would be unmanageable if taken on as a single task.

Goals of structured programming:

Decrease program development time by increasing programmer productivity and reducing the time needed to test and debug a program.

Decrease program maintenance costs by reducing errors and making program code easier to understand.

Improve the quality of software by providing programs with fewer errors.

Structured programming accomplishes these goals by incorporating these concepts:

  1. Top-down design and use of modules.
  2. Use of limited control structures (sequence, selection, and repetition), as sketched below.
  3. Management control.
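
Here is a minimal, hypothetical Python sketch of the three limited control structures (the numbers are made up): statements executed in sequence, a selection that chooses between paths, and a repetition that loops over the data.

    # Sequence, selection, and repetition -- the three limited control structures.
    numbers = [3, 8, 1, 9]          # sequence: statements execute one after another
    largest = numbers[0]

    for n in numbers:               # repetition: the loop body repeats for each item
        if n > largest:             # selection: choose between two paths
            largest = n

    print("Largest:", largest)      # prints: Largest: 9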

Top-down design starts with the major functions involved in a problem and divides them into subfunctions until the problem has been divided as much as possible.

Each unit is small enough to be programmed by an individual programmer in the required time frame.

This forces an examination of all aspects of a problem on one level before considering the next level.

A programmer is left with small groups, or modules, of processing instructions, which are easy to understand and code.

A program consists of a main logic module that controls the execution of the other modules in the program.

Working from the top down avoids solutions that deal with only part of a problem.

A program that uses a main logic module to control smaller modules is easier to read, test, and maintain.

In structured programming, modules ensure these qualities by:

-having only one entrance and one exit

-performing only one program function

-returning control to the module from which it was received

Monday, May 11, 2009

Programming In Perspective

What Is Computer Programming?

Computer programming involves writing instructions and giving them to the computer so it can complete a task.

A computer program, or software, is a set of instructions written in a computer language, intended to be executed by a computer to perform a useful task.

Application packages such as word processors, spreadsheets, and database management systems are computer programs.

A programmer is an individual who translates the tasks that you want a computer to accomplish into a form the computer understands.
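
As a minimal, hypothetical example (the task and values are invented for illustration), the short Python program below is such a set of instructions: the programmer writes them so the computer can complete a useful task, in this case converting a temperature.

    # A complete, tiny computer program: instructions the computer executes
    # to perform a useful task.
    def fahrenheit_to_celsius(f):
        return (f - 32) * 5 / 9

    temperature_f = 98.6
    print(f"{temperature_f} F is {fahrenheit_to_celsius(temperature_f):.1f} C")  # 98.6 F is 37.0 C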

What Are The Qualities of a Well-Designed Program?

· Correct and accurate
· Easy to understand
· Easy to maintain and update
· Efficient
· Reliable
· Flexible

Monday, April 20, 2009

Brief of Mac OS


Mac OS is Apple Computer's operating system for Apple Macintosh computers. Mac OS was the first commercially successful operating system which used a graphical user interface. The Macintosh team included Bill Atkinson and Jef Raskin. There are a variety of views on how the Macintosh was developed, and where the underlying ideas originated. While the connection between the Macintosh and the Alto project at Xerox PARC has been established in the historical record, the earlier contributions of Ivan Sutherland's Sketchpad and Doug Engelbart's On-Line System are no less significant. See History of the GUI, and Apple v. Microsoft.

The Mac OS can be divided into two families of operating systems:
An older and now unsupported "classic" Mac OS (the system that shipped with the first Mac in 1984 and its descendants, culminating with Mac OS 9).
The newer Mac OS X (pronounced "oh-es-ten"). Mac OS X incorporates elements of BSD Unix, OPENSTEP, and Mac OS 9. Its low-level UNIX-based foundation, Darwin, is open source.

Classic Mac OS

The "classic" Mac OS is characterized by its total lack of a command line; it is a 100% graphical operating system. Heralded for its ease of use, it is also criticized for its almost total lack of memory management, cooperative multitasking, and susceptibility to extension conflicts. "Extensions" are program modules that extend the operating system, providing additional functionality (such as a networking) or support for a particular device. Some extensions are prone not to work properly together, or only when loaded in a particular order. Troubleshooting Mac OS extensions can be a time-consuming process. The MacOS also introduced a new type of filesystem, which contained two different "forks" for a file. It was innovative at the time for separating out parameters into the resource fork, and raw data in the "data fork". However, it became quite a challenge to interoperate with other operating systems which did not recognize such a system.

The term "Mac OS" was not officially used until 1996 with the release of Mac OS 7.6 - prior to that the Macintosh operating system software was simply known as "The System", or by its version number, e.g. System 6 or System 7. Another common term was "the Toolbox". Apple deliberately played down the existence of the operating system in the early years of the Mac to help make the machine appear more user-friendly and to distance it from other systems such as MS-DOS, which were portrayed as arcane and technically challenging. With Mac, you turned it on, it just worked.

By the late 1990s, it was clear the useful life of this 1980s-era technology was coming to an end, with other more stable multitasking operating systems being developed.

Mac OS X

Mac OS X remedied this situation, bringing Unix-style memory management and preemptive multitasking. Improved memory management allowed more programs to run at once and virtually eliminated the possibility of one program crashing another. It is also the first Mac OS to include a command line, although it is never seen unless a separate "terminal" program is launched. However, since these new features put higher demands on system resources, Mac OS X is only officially supported on G3 and newer processors. (It runs poorly on many early G3 machines). Mac OS X has a compatibility layer for running older Mac applications, but compatibility is not 100%.

Mac OS Technologies

QuickDraw: the imaging model which first provided mass-market WYSIWYG.

Finder: the interface for browsing the filesystem and launching applications.

MultiFinder: the first version to support simultaneously running multiple apps.

Chooser: tool for accessing network resources (e.g., enabling AppleTalk).

ColorSync: technology for ensuring appropriate color matching.

Mac OS memory management: how the Mac managed RAM and virtual memory before the switch to UNIX.

PowerPC emulation of Motorola 68000: how the Mac handled the architectural transition from CISC to RISC (see Mac 68K emulator).

Desk Accessories: small "helper" apps that could be run concurrently with any other app, prior to the advent of MultiFinder or System 7.


Wednesday, April 1, 2009

New Virus in April

An extraordinary behind-the-scenes struggle is taking place between computer security groups around the world and the brazen author of a malicious software program called Conficker.

The program grabbed global attention when it began spreading late last year and quickly infected millions of computers with software code that is intended to lash together the infected machines it controls into a powerful computer known as a botnet.

Since then, the program’s author has repeatedly updated its software in a cat-and-mouse game being fought with an informal international alliance of computer security firms and a network governance group known as the Internet Corporation for Assigned Names and Numbers. Members refer to the alliance as the Conficker Cabal.

The existence of the botnet has brought together some of the world’s best computer security experts to prevent potential damage. The spread of the malicious software is on a scale that matches the worst of past viruses and worms, like the I Love You virus. Last month, Microsoft announced a $250,000 reward for information leading to the capture of the Conficker author.

Botnets are used to send the vast majority of e-mail spam messages. Spam in turn is the basis for shady commercial promotions including schemes that frequently involve directing unwary users to Web sites that can plant malicious software, or malware, on computers.

Botnets can also be used to distribute other kinds of malware and generate attacks that can take commercial or government Web sites off-line.

One of the largest botnets tracked last year consisted of 1.5 million infected computers that were being used to automate the breaking of “captchas,” the squiggly letter tests that are used to force applicants for Web services to prove they are human.

The inability of the world’s best computer security technologists to gain the upper hand against anonymous but determined cybercriminals is viewed by a growing number of those involved in the fight as evidence of a fundamental security weakness in the global network.

“I walked up to a three-star general on Wednesday and asked him if he could help me deal with a million-node botnet,” said Rick Wesson, a computer security researcher involved in combating Conficker. “I didn’t get an answer.”

An examination of the program reveals that the zombie computers are programmed to try to contact a control system for instructions on April 1. There has been a range of speculation about the nature of the threat posed by the botnet, from a wake-up call to a devastating attack.

Researchers who have been painstakingly disassembling the Conficker code have not been able to determine where the author, or authors, is located, or whether the program is being maintained by one person or a group of hackers. The growing suspicion is that Conficker will ultimately be a computing-for-hire scheme. Researchers expect it will imitate the hottest fad in the computer industry, called cloud computing, in which companies like Amazon, Microsoft and Sun Microsystems sell computing as a service over the Internet.

Earlier botnets were devised so they could be split up and rented via black market schemes that are common in the Internet underground, according to security researchers.

The Conficker program is built so that after it takes up residence on infected computers, it can be programmed remotely by software to serve as a vast system for distributing spam or other malware.

Several people who have analyzed various versions of the program said Conficker’s authors were obviously monitoring the efforts to restrict the malicious program and had repeatedly demonstrated that their skills were at the leading edge of computer technology.

For example, the Conficker worm already had been through several versions when the alliance of computer security experts seized control of 250 Internet domain names the system was planning to use to forward instructions to millions of infected computers.

Shortly thereafter, in the first week of March, the fourth known version of the program, Conficker C, expanded the number of the sites it could use to 50,000. That step made it virtually impossible to stop the Conficker authors from communicating with their botnet.

“It’s worth noting that these are folks who are taking this seriously and not making many mistakes,” said Jose Nazario, a member of the international security group and a researcher at Arbor Networks, a company in Lexington, Mass., that provides tools for monitoring the performance of networks. “They’re going for broke.”

Several members of the Conficker Cabal said that law enforcement officials had been slow to respond to the group’s efforts, but that a number of law enforcement agencies were now in “listen” mode.

“We’re aware of it,” said Paul Bresson, an F.B.I. spokesman, “and we’re working with security companies to address the problem.”

A report scheduled to be released Thursday by SRI International, a nonprofit research institute in Menlo Park, Calif., says that Conficker C constitutes a major rewrite of the software. Not only does it make it far more difficult to block communication with the program, but it gives the program added powers to disable many commercial antivirus programs as well as Microsoft’s security update features.

“Perhaps the most obvious frightening aspect of Conficker C is its clear potential to do harm,” said Phillip Porras, a research director at SRI International and one of the authors of the report. “Perhaps in the best case, Conficker may be used as a sustained and profitable platform for massive Internet fraud and theft.”

“In the worst case,” Mr. Porras said, “Conficker could be turned into a powerful offensive weapon for performing concerted information warfare attacks that could disrupt not just countries, but the Internet itself.”

The researchers, noting that the Conficker authors were using the most advanced computer security techniques, said the original version of the program contained a recent security feature developed by an M.I.T. computer scientist, Ron Rivest, that had been made public only weeks before. And when a revision was issued by Dr. Rivest’s group to correct a flaw, the Conficker authors revised their program to add the correction.

Although there have been clues that the Conficker authors may be located in Eastern Europe, evidence has not been conclusive. Security researchers, however, said this week that they were impressed by the authors’ productivity.

Source: New York Times

Sunday, March 29, 2009

Solaris History

Solaris, the Unix-based operating system developed by Sun Microsystems, displays that company's ability to be innovative and flexible. Solaris, one could argue, is perpetually ahead of the curve in the computer world. Sun continually adapts to the changing computer environment, trying to anticipate where the computer world is going and what will be needed next, and develops new versions of Solaris to take that into account.


Solaris was born in 1987 out of an alliance between AT&T and Sun Microsystems to combine the leading Unix versions (BSD, XENIX, and System V) into one operating system. Four years later, in 1991, Sun replaced its existing Unix operating system (SunOS 4) with one based on SVR4. This new OS, Solaris 2, contained many new advances, including use of the OpenWindows graphical user interface, NIS+, and Open Network Computing (ONC) functionality, and was specially tuned for symmetric multiprocessing.


This kicked off Solaris' history of constant innovation, with new versions of Solaris being released almost annually over the next fifteen years. Sun was constantly striving to stay ahead of the curve, while at the same time adapting Solaris to the existing, constantly evolving wider computing world. The innovations in the Solaris OS are too numerous to list here, but a few milestones are worth mentioning. Solaris 2.5.1 in 1996 added CDE, the NFSv3 file system and NFS/TCP, expanded user and group IDs to 32 bits, and included support for the Macintosh PowerPC platform. Solaris 2.6 in 1997 introduced the WebNFS file system, Kerberos 5 security encryption, and large file support to increase Solaris' internet performance.


Solaris 2.7 in 1998 (renamed just Solaris 7) included many new advances, such as native support for file system meta-data logging (UFS logging). It was also the first 64-bit release, which dramatically increased its performance, capacity, and scalability. Solaris 8 in 2000 took it a step further: it was the first OS to combine datacentre and dot-com requirements, offering support for IPv6 and IPSEC, Multipath I/O, and IPMP. Solaris 9 in 2002 saw the writing on the wall of the server market, dropped OpenWindows in favour of Linux compatibility, and added a Resource Manager, the Solaris Volume Manager, extended file attributes, and the iPlanet Directory Server.


Solaris 10, the current version, was released to the public in 2005 free of charge and with a host of new developments. The latest advances in the computing world are constantly being incorporated in new versions of Solaris 10 released every few months. To mention just a few: Solaris offers more and more compatibility with Linux and IBM systems, has introduced the Java Desktop System based on GNOME, and has added Dynamic Tracing (DTrace), NFSv4, and, in 2006, the ZFS file system.


Also in 2006, Sun set up the OpenSolaris Project. Within the first year, the OpenSolaris community had grown to 14,000 members with 29 user groups globally, working on 31 active projects. Although displaying a deep commitment to open-source ideals, it also provides Sun with thousands of developers essentially working for free.


The development of the Solaris OS demonstrates Sun Microsystems' ability to be on the cutting edge of the computing world without losing touch with the current computing environment. Sun regularly releases new versions of Solaris incorporating the latest developments in computer technology, while also including more cross-platform compatibility and incorporating the advances of other systems. The OpenSolaris project is the ultimate display of these twin strengths: Sun has tapped into the creative energy of developers across the world and receives instant feedback about what its audience wants and needs. If all software companies took a lesson from Sun, imagine how exciting and responsive the industry could be.

Monday, March 23, 2009

Linux History

In order to understand the popularity of Linux, we need to travel back in time. In earlier days, computers were as big as a house, sometimes even a stadium, so size and portability were major problems. Worse still, every computer had a different operating system. Software was always customized to serve a specific purpose, and software for one given system didn't run on another system. Being able to work with one system didn't automatically mean that you could work with another. It was difficult, both for the users and the system administrators. Those computers were also quite expensive. Technologically the world was not quite that advanced, so people had to live with the size for another decade. In the late 1960s, a team of developers at Bell Labs started working on a solution to the software problem, to address these compatibility issues. They developed a new operating system that was simple and elegant, was written in the C programming language instead of assembly language, and, most importantly, could reuse its code. The Bell Labs developers named this project "UNIX".

Unix was developed around a small piece of code named the kernel. The kernel is the only piece of code that needs to be adapted for every specific system, and it forms the base of the UNIX system. The operating system and all other functions were built around this kernel and written in a higher-level programming language, C. This language was especially developed for creating the UNIX system. Using this new technique, it was much easier to develop an operating system that could run on many different types of hardware. This naturally affected the cost of the Unix operating system: vendors would sell the software at ten times its original cost. The source code of Unix, once taught in universities courtesy of Bell Labs, was not published publicly. So developers tried to find an efficient solution to this problem.

A solution seemed to appear in the form of MINIX. It was written from scratch by Andrew S. Tanenbaum, a US-born Dutch professor who wanted to teach his students the inner workings of a real operating system. It was designed to run on the Intel 8086 microprocessors that had flooded the world market.

As an operating system, MINIX was not a superb one. But it had the advantage that the source code was available. Anyone who happened to get the book 'Operating Systems: Design and Implementation' by Tanenbaum could get hold of the 12,000 lines of code, written in C and assembly language. For the first time, an aspiring programmer or hacker could read the source code of an operating system, something the software vendors had until then guarded vigorously. A superb author, Tanenbaum captivated the brightest minds of computer science with his elaborate and lively discussion of the art of creating a working operating system. Students of computer science all over the world pored over the book, reading through the code to understand the very system that ran their computers.

And one of them was Linus Torvalds. Linus Torvalds was a second-year student of computer science at the University of Helsinki and a self-taught hacker. MINIX was good, but it was still simply an operating system for students, designed as a teaching tool rather than an industrial-strength one. At that time, programmers worldwide were greatly inspired by the GNU project by Richard Stallman, a software movement to provide free, quality software. In the world of computers, Stallman started his career in the famous Artificial Intelligence Laboratory at MIT, and during the mid and late seventies, created the Emacs editor.

In the early eighties, commercial software companies lured away much of the brilliant programming talent of the AI lab and negotiated stringent nondisclosure agreements to protect their secrets. But Stallman had a different vision. His idea was that, unlike other products, software should be free from restrictions against copying or modification, in order to make better and more efficient computer programs. With his famous 1983 manifesto that declared the beginnings of the GNU project, he started a movement to create and distribute software that conveyed his philosophy (incidentally, the name GNU is a recursive acronym which actually stands for 'GNU's Not Unix'). But to achieve this dream of ultimately creating a free operating system, he needed to create the tools first. So, beginning in 1984, Stallman started writing the GNU C Compiler (GCC), an amazing feat for an individual programmer. With his formidable technical skills, he alone outclassed entire groups of programmers from commercial software vendors in creating GCC, considered one of the most efficient and robust compilers ever created.

Linus himself didn't believe that his creation was going to be big enough to change computing forever. Linux version 0.01 was released by mid September 1991, and was put on the net. Enthusiasm gathered around this new kid on the block, and codes were downloaded, tested, tweaked, and returned to Linus. 0.02 came on October 5th.

Further Development

During Linux's development, Linus faced difficulties such as differences of opinion with other people, for example Tanenbaum, the great teacher who wrote MINIX. Tanenbaum wrote to Linus:

“I still maintain the point that designing a monolithic kernel in 1991 is a fundamental error. Be thankful you are not my student. You would not get a high grade for such a design.” Linus later admitted that it was the worst point in his development of Linux. Tanenbaum was certainly a famous professor, and anything he said certainly mattered. But he was wrong about Linux, for Linus was one stubborn guy who never liked defeat. Tanenbaum also remarked that “Linux is obsolete.” Very soon, thousands of people formed a community and joined the camp. Powered by programs from the GNU project, Linux was ready for the actual showdown. It was licensed under the GNU General Public License, ensuring that the source code would be free for all to copy, study, and change. Students and computer programmers grabbed it.

People tried out and edited the source code, and this opened the way for commercial vendors to build a market. They compiled various software packages and distributed them with the operating system in forms people were familiar with. Distributions such as Red Hat and Debian gained a strong response from the outside world. With new graphical interface systems like KDE and GNOME, Linux became popular. One of the best things about Linux today is its powerful commands.

Rise of the Desktop Linux

What is the biggest complaint about Linux? Its text mode. Many people are scared off by a command-based interface they don't understand. But once you start learning the commands, it becomes an interesting way to learn more about the operating system. Today, very friendly GUIs are also available, adding to Linux's flexibility. Anyone can install Linux without prior experience; everything is well explained at installation time. Most distributions are also available in Live CD format, which users can simply put in their CD drives and boot without installing anything to the hard drive, making Linux accessible to newcomers. The most important point about Linux is that it is open source, so computer users on a low budget can get Linux and learn it for free.

Linux's Logo - Penguin

The logo of Linux is a penguin, known in the technology world as Tux. Tux, as the penguin is lovingly called, symbolizes the carefree attitude of the whole movement. This cute logo has a very interesting history. As Linus tells it, initially no logo was selected for Linux. Then Linus went to the southern hemisphere on a vacation. There he encountered a penguin, not unlike the current logo of Linux. As he tried to pat it, the penguin bit his hand. This amusing incident led to the selection of a penguin as the logo of Linux sometime later.

Tuesday, March 17, 2009

Unix History

Since it began to escape from AT&T's Bell Laboratories in the early 1970's, the success of the UNIX operating system has led to many different versions: recipients of the (at that time free) UNIX system code all began developing their own different versions in their own, different, ways for use and sale. Universities, research institutes, government bodies and computer companies all began using the powerful UNIX system to develop many of the technologies which today are part of a UNIX system.


Computer aided design, manufacturing control systems, laboratory simulations, even the Internet itself, all began life with and because of UNIX systems. Today, without UNIX systems, the Internet would come to a screeching halt. Most telephone calls could not be made, electronic commerce would grind to a halt and there would have never been "Jurassic Park"!


By the late 1970's, a ripple effect had come into play. By now the under- and post-graduate students whose lab work had pioneered these new applications of technology were attaining management and decision-making positions inside the computer system suppliers and among its customers. And they wanted to continue using UNIX systems.


Soon all the large vendors, and many smaller ones, were marketing their own, diverging, versions of the UNIX system optimized for their own computer architectures and boasting many different strengths and features. Customers found that, although UNIX systems were available everywhere, they seldom were able to interwork or co-exist without significant investment of time and effort to make them work effectively. The trade mark UNIX was ubiquitous, but it was applied to a multitude of different, incompatible products.


In the early 1980's, the market for UNIX systems had grown enough to be noticed by industry analysts and researchers. Now the question was no longer "What is a UNIX system?" but "Is a UNIX system suitable for business and commerce?"


Throughout the early and mid-1980's, the debate about the strengths and weaknesses of UNIX systems raged, often fuelled by the utterances of the vendors themselves who sought to protect their profitable proprietary system sales by talking UNIX systems down. And, in an effort to further differentiate their competing UNIX system products, they kept developing and adding features of their own.


In 1984, another factor brought added attention to UNIX systems. A group of vendors, concerned about the continuing encroachment into their markets and control of system interfaces by the larger companies, developed the concept of "open systems."


Open systems were those that would meet agreed specifications or standards. This resulted in the formation of X/Open Company Ltd whose remit was, and today in the guise of The Open Group remains, to define a comprehensive open systems environment. Open systems, they declared, would save on costs, attract a wider portfolio of applications and competition on equal terms. X/Open chose the UNIX system as the platform for the basis of open systems.


Although UNIX was still owned by AT&T, the company did little commercially with it until the mid-1980's. Then the spotlight of X/Open showed clearly that a single, standard version of the UNIX system would be in the wider interests of the industry and its customers. The question now was, "which version?".


In a move intended to unify the market in 1987, AT&T announced a pact with Sun Microsystems, the leading proponent of the Berkeley derived strain of UNIX. However, the rest of the industry viewed the development with considerable concern. Believing that their own markets were under threat they clubbed together to develop their own "new" open systems operating system. Their new organization was called the Open Software Foundation (OSF). In response to this, the AT&T/Sun faction formed UNIX International.


The ensuing "UNIX wars" divided the system vendors between these two camps clustered around the two dominant UNIX system technologies: AT&T's System V and the OSF system called OSF/1. In the meantime, X/Open Company held the center ground. It continued the process of standardizing the APIs necessary for an open operating system specification.


In addition, it looked at areas of the system beyond the operating system level where a standard approach would add value for supplier and customer alike, developing or adopting specifications for languages, database connectivity, networking and mainframe interworking. The results of this work were published in successive X/Open Portability Guides.


XPG 4 was released in October 1992. During this time, X/Open had put in place a brand program based on vendor guarantees and supported by testing. Since the publication of XPG4, X/Open has continued to broaden the scope of open systems specifications in line with market requirements. As the benefits of the X/Open brand became known and understood, many large organizations began using X/Open as the basis for system design and procurement. By 1993, over $7 billion had been spent on X/Open branded systems. By the start of 1997 that figure had risen to over $23 billion. To date, procurements referencing the Single UNIX Specification amount to over $5.2 billion.


In early 1993, AT&T sold its UNIX System Laboratories to Novell, which was looking for a heavyweight operating system to link to its NetWare product range. At the same time, the company recognized that vesting control of the definition (specification) and trademark with a vendor-neutral organization would further facilitate the value of UNIX as a foundation of open systems. So the constituent parts of the UNIX System, previously owned by a single entity, are now quite separate.


In 1995 SCO bought the UNIX Systems business from Novell, and UNIX system source code and technology continues to be developed by SCO.


In 1995 X/Open introduced the UNIX 95 brand for computer systems guaranteed to meet the Single UNIX Specification. The Single UNIX Specification brand program has now achieved critical mass: vendors whose products have met the demanding criteria now account for the majority of UNIX systems by value.


For over ten years, since the inception of X/Open, UNIX had been closely linked with open systems. X/Open, now part of The Open Group, continues to develop and evolve the Single UNIX Specification and associated brand program on behalf of the IT community. The freeing of the specification of the interfaces from the technology is allowing many systems to support the UNIX philosophy of small, often simple tools that can be combined in many ways to perform often complex tasks. The stability of the core interfaces preserves existing investment and is allowing the development of a rich set of software tools. The Open Source movement is building on this stable foundation and is creating a resurgence of enthusiasm for the UNIX philosophy. In many ways Open Source can be seen as the true delivery of Open Systems, ensuring that they continue to go from strength to strength.

Monday, March 9, 2009

OS/2

A family of multitasking operating systems for x86 machines from IBM. OS/2 Warp is the client version, and Warp Server is the server version. With add-ons, DOS and Windows applications can also be run under OS/2 (see Odin). The server version includes advanced features such as the journaling file system (JFS) used in IBM's AIX operating system. Like Windows, OS/2 provides a graphical user interface and a command line interface. See OS/2 Warp, Warp Server and eComStation.

Although highly regarded as a robust operating system, OS/2 never became widely used. However, it has survived in the banking industry, especially in Europe, and many ATM machines in the U.S. have continued to run OS/2 due to its stability.

Features

OS/2 includes Adobe Type Manager for rendering Type 1 fonts on screen and providing PostScript output on non-PostScript printers. OS/2's dual boot feature allows booting up into OS/2 or DOS.
The OS/2 Workplace Shell graphical user interface is similar to Windows and the Macintosh. It was originally known as Presentation Manager (PM); after Version 2.0, PM referred to the programming interface (API), not the GUI itself.

Evolution

The first versions of OS/2 were single-user operating systems written for 286s and jointly developed by IBM and Microsoft. Starting with Version 2.0, versions were written for 32-bit 386s and up and were solely the product of IBM. Following is some of the evolution:

OS/2 16-bit Version 1.x

The first versions (1.0, 1.1, etc.) were written for the 16-bit 286. DOS compatibility was limited to about 500K. Version 1.3 (OS/2 Lite) required 2MB RAM instead of 4MB and included Adobe Type Manager. IBM's Extended Edition version included Communications Manager and Database Manager.

OS/2 32-bit Version 2.x - IBM

Introduced in April 1992, this 32-bit version for 386s from IBM multitasked DOS, Windows and OS/2 applications. Data could be shared between applications using the clipboard and between Windows and PM apps using the DDE protocol. Version 2.x provided each application with a 512MB virtual address space that allowed large tasks to be easily managed.
Version 2.1 supported Windows' Enhanced Mode and applications could take full advantage of Windows 3.1. It also provided support for more video standards and CD-ROM drives.
Communications and database management for OS/2 were provided by Communications Manager/2 (CM/2) and Database Manager/2 (DB2/2). CM/2 replaced Communications Manager, which was part of OS/2 2.0's Extended Services option.

OS/2 32-bit Version 3 - IBM

In late 1994, IBM introduced Version 3 of OS/2, renaming it OS/2 Warp. The first version ran in only 4MB of memory and included a variety of applications, including Internet access.

Windows NT - Microsoft

Originally to be named OS/2 Version 3.0, this 32-bit version from Microsoft was renamed "Windows NT" and introduced in 1993. See Windows NT.

Thursday, February 26, 2009

How Will I Use an Operating System?

The user interface of an operating system is the portion of the program with which users interact.
The user interface can be

1. Command-line,
2. Menu-driven, and
3. Graphics-based.

A command-line interface requires a user to type the desired response at a prompt using a special command language.

To be an effective user of any command-line software, you must memorize its commands and their exact syntax, which is no easy task.

A menu-driven interface allows the user to select commands from a list (menu) using the keyboard or a pointing device such as a mouse.
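
As a rough, hypothetical illustration (my own sketch, not from the original text), the Python code below contrasts the two styles: a command-line loop where the user must already know the command names and syntax, and a menu-driven loop where the valid choices are displayed on screen. The command names and file names are made up for the example.

# Hypothetical sketch: command-line vs. menu-driven interfaces.
# The commands ("copy", "list", "quit") and file names are illustrative only.

def command_line():
    """The user must already know the command language and its exact syntax."""
    while True:
        line = input("A:\\> ").strip().lower()
        if line == "quit":
            break
        elif line.startswith("copy "):
            print("Copying", line[5:])
        elif line == "list":
            print("file1.txt  file2.txt")
        else:
            print("Bad command or file name")

def menu_driven():
    """The user simply picks from options displayed on the screen."""
    options = {"1": "Copy a file", "2": "List files", "3": "Quit"}
    while True:
        for key, label in options.items():
            print(key + ". " + label)
        choice = input("Select an option: ").strip()
        if choice == "3":
            break
        print("You chose:", options.get(choice, "an invalid option"))

if __name__ == "__main__":
    menu_driven()   # try command_line() for the other style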

A graphical user interface (GUI):

The trend is away from text-based, command-line interfaces toward a user-friendly, graphics-oriented environment called a graphical user interface (GUI).

Graphical user interfaces rely on graphics-based software.

Graphics-based software permits the integration of text with high-resolution graphic images, called icons.

GUI users interact with the operating system and other software packages by using a pointing device and a keyboard to issue commands.

Rather than enter a command directly, the user chooses from options displayed on the screen.

The equivalent of a syntax-sensitive operating system command is entered by pointing to and choosing one or more options from a menu or by pointing to and choosing a graphics image, called an icon.

Typically, a GUI includes some or all of the following parts:

-Icons, which are graphical images that represent items, such as files and directories.
-A graphical pointer, which is controlled by a pointing device (mouse), to select icons and commands and move on-screen items.
-On-screen pull-down menus that appear or disappear, controlled by the pointing device.
-Windows that enclose applications or objects on the screen.

GUIs have effectively eliminated the need for users to memorize and enter cumbersome commands.

Type of Processing

A multiprocessing operating system allows the simultaneous execution of programs by a computer that has two or more CPUs. Each CPU can be either dedicated to one program, or dedicated to specific functions and then used by all programs.

Interprocessing, also called dynamic linking, is a type of processing that allows any change made in one application to be automatically reflected in any related, linked application.

Real-time processing allows a computer to control or monitor the performance of other machines and people by responding to input data in a specified amount of time.

Virtual-machine (VM) processing creates the illusion that there is more than one physical machine. VM capabilities permit a computer to run numerous operating systems at one time. VM capabilities are typically used on supercomputers and mainframes.

Virtual memory, also called virtual storage, allows you to use a secondary-storage device as an extension of main memory. Virtual memory resolves the problem of insufficient main memory to contain an entire program and its data.

Major Functions of Operating Systems

The major functions of an OS are:

-resource management,
-data management,
-job (task) management, and
-standard means of communication between user and computer.

The resource management function of an OS allocates computer resources such as CPU time, main memory, secondary storage, and input and output devices for use.

The data management functions of an OS govern the input and output of the data and their location, storage, and retrieval.

The job management function of an OS prepares, schedules, controls, and monitors jobs submitted for execution to ensure the most efficient processing. A job is a collection of one or more related programs and their data.


The OS establishes a standard means of communication between users and their computer systems. It does this by providing a user interface and a standard set of commands that control the hardware.

Typical Day-to-Day Uses of an Operating System

-Executing application programs.
-Formatting floppy diskettes.
-Setting up directories to organize your files.
-Displaying a list of files stored on a particular disk.
-Verifying that there is enough room on a disk to save a file.
-Protecting and backing up your files by copying them to other disks for safekeeping.

How Do Operating Systems Differ?

Operating systems for large computers are more complex and sophisticated than those for microcomputers because they must address the needs of a very large number of users, application programs, and hardware devices, as well as supply a host of administrative and security features.

Operating system capabilities can be described in terms of

-the number of users they can accommodate at one time,
-how many tasks can be run at one time, and
-how they process those tasks.

Number of Users:

A single-user operating system allows only one user at a time to access a computer.

Most operating systems on microcomputers, such as DOS and Windows 95, are single-user access systems.

A multiuser operating system allows two or more users to access a computer at the same time (UNIX).

The actual number of users depends on the hardware and the OS design.
Time sharing allows many users to access a single computer.
This capability is typically found on large computer operating systems where many users need access at the same time.

Number of Tasks

An operating system can be designed for single tasking or multitasking.

A single tasking operating system allows only one program to execute at a time, and the program must finish executing completely before the next program can begin.

A multitasking operating system allows a single CPU to execute what appears to be more than one program at a time.

Context switching allows several programs to reside in memory but only one to be active at a time. The active program is said to be in the foreground. The other programs in memory are not active and are said to be in the background. Instead of having to quit a program and load another, you can simply switch the active program in the foreground to the background and bring a program from the background into the foreground with a few keystrokes.

In cooperative multitasking, a background program uses the CPU during the idle time of the foreground program. For example, the background program might sort data while the foreground program waits for a keystroke.

Time-slice multitasking enables a CPU to switch its attention between the requested tasks of two or more programs. Each task receives the attention of the CPU for a fraction of a second before the CPU moves on to the next. Depending on the application, the order in which tasks receive CPU attention may be determined sequentially (first come first served) or by previously defined priority levels.
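
As a loose illustration of this idea (an assumption on my part, not an example from the text), the Python sketch below simulates time-slice multitasking with a simple round-robin scheduler: each program receives one slice of CPU attention per pass, first come first served, until all of them finish. The task names and work amounts are invented for the example.

# Rough simulation of time-slice (round-robin) multitasking.
# Each "task" is just an amount of remaining work; the scheduler gives
# every task one slice per pass until all tasks are finished.
from collections import deque

def round_robin(tasks, slice_units=2):
    queue = deque(tasks.items())              # (name, remaining work)
    while queue:
        name, remaining = queue.popleft()
        done = min(slice_units, remaining)
        remaining -= done
        print("CPU runs", name, "for", done, "unit(s);", remaining, "left")
        if remaining > 0:
            queue.append((name, remaining))   # back of the line
        else:
            print(name, "finished")

round_robin({"spreadsheet recalculation": 5, "print job": 3, "mail check": 1})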

Multithreading supports several simultaneous tasks within the same application. For example, with only one copy of a database management system in memory, one database file can be sorted while data is simultaneously entered into another database file.
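
The hypothetical Python sketch below mirrors that idea on a small scale: one thread sorts an existing list of records while a second thread enters new records, both inside the same program. The data and timing are made up for illustration; a real program would protect shared data with a lock.

# Two threads working inside one application at the same time:
# one sorts the existing records while the other appends new ones.
import threading
import time

records = [42, 7, 19, 88, 3]
new_entries = [55, 11]

def sort_records():
    records.sort()
    print("Sorted records:", records)

def enter_data():
    for value in new_entries:
        time.sleep(0.01)              # pretend the user is typing
        records.append(value)
        print("Entered new record:", value)

t1 = threading.Thread(target=sort_records)
t2 = threading.Thread(target=enter_data)
t1.start(); t2.start()
t1.join(); t2.join()
print("Final records:", records)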

Friday, February 20, 2009

Operating System

What Is an Operating System?

An operating system (OS) is a core set of programs that control and supervise the hardware resources of a computer and provide services to other system software, application software, programmers, and users of a computer.

The OS gives the computer the instructions it needs to operate, telling it how to interact with hardware, other software, and the user.

The OS establishes a standard interface, or means of communication, between users and their computer systems.
When you power up a computer, you boot the system.
The booting procedure is so named because the computer "pulls itself up by its own bootstraps" (without the assistance of humans).
When booting the system,

First, a program in read-only memory (ROM) initializes the system, runs a system check to verify that the electronic components are operational, and readies the computer for processing.
Next, the operating system is loaded into RAM, takes control of the system, and presents the user with a system prompt or a GUI screen full of options.

Operating System Parts

Operating systems are composed of two major parts:

control programs, and
service programs.

Control programs manage computer hardware and resources.
The main program in most operating systems is the supervisor program.

A supervisor program is a control program that is known in some operating systems as the monitor, executive, or kernel.
The supervisor program is responsible for controlling all other OS programs as well as other system and application programs.
The supervisor program controls the activities of all of the hardware components of a computer.


Service programs are external OS programs that provide a service to the user or programmer of a computer.
They must be loaded separately because they are not automatically loaded when the operating system is loaded.
They perform routine but essential functions, such as formatting a disk for use and copying files from one location to another.

Sunday, February 15, 2009

Computer language-oriented software includes

language translators such as assemblers, interpreters, and compilers.

It also includes program generators (programs that automatically generate program code) and debugging and testing programs.
Utilities are programs that are purchased as separate products; they perform a wide range of functions. This type of software includes products such as

-data conversion programs that convert data from one format to another,
-data recovery programs that restore damaged or accidentally erased data,
-librarians that log and track the locations of disk or tape program files,
-security and auditing programs, and
-merge and sort programs.

Application software refers to programs that allow you to accomplish specific tasks, like creating a document, organizing data, or drawing graphs.

Software acts as a connection, or interface, between you and the hardware.

-Interface is a term that describes how two parts are joined so that they can work together.
-System software and application software provide an interface to the hardware.

[Figure: The functional relationship among system software, application software, hardware, and the user.]




Categories of Software

Knowing the Rooms in the House

Computer hardware cannot perform alone.

Software refers to the instructions that direct the operations of a computer.

There are two basic types of software:

-system software (controls hardware), and
-application software (performs specific tasks).

System software refers to programs designed to perform tasks associated with directly controlling and utilizing computer hardware.

-It does not accomplish specific tasks for a user, such as creating documents or analyzing data.

-System software includes:

-Operating systems (the most important type of system software),
-Data management software,
-Computer language-oriented software, and
-Utilities that help users perform various functions.

-Data management software includes:

-database and file management programs that manage data for an operating system.
-data center management programs used on large system computers that control program execution, monitor system usage, track system resources and utilization, and bill users accordingly.

Interacting With The System

To interact effectively with a computer, a user needs to be knowledgeable in four areas.

1. General software concepts (for example, windows, menus, uploading, and so on).

2. The operation and use of the hardware over which you have control (such as the PC, magnetic disk, and printer).

3. The function and use of the computer's operating system and/or its graphical user interface (GUI), both of which provide a link between the user, the computer system, and the various applications.

4. The specific application programs you are using.

The first three areas are prerequisites to the fourth, because you will need a working knowledge of:


  • software concepts,
  • the hardware, and
  • the operating system and/or a GUI

before you can make effective use of

  • Quicken (accounting),
  • Harvard Graphics (presentation graphics),
  • Paradox (database).

Tuesday, February 10, 2009

COMMUNICATION AND NETWORK CONCEPTS

Evolution of Networking: ARPANET, Internet, Interspace.

Different ways of sending data across the network with reference to switching techniques.

Data Communication terminologies: Concept of Channel, Baud, Bandwidth (Hz, KHz, MHz) and Data transfer rate (bps, kbps, Mbps, Gbps, Tbps).

Transmission media: Twisted pair cable, coaxial cable, optical fiber, infrared, radio link, microwave link and satellite link.

Network devices: Modem, RJ45 connector, Ethernet Card, Hub, Switch, Gateway.

Different Topologies- Bus, Star, Tree; Concepts of LAN, WAN, MAN.

Protocol: TCP/IP, File Transfer Protocol (FTP), PPP, Remote Login (Telnet), Internet, Wireless/Mobile Communication, GSM, CDMA, WLL, 3G, SMS, Voice mail, Application Electronic Mail, Chat, Video Conferencing.

Network Security Concepts: Cyber Law, Firewall, Cookies, Hackers and Crackers.

WebPages; Hyper Text Markup Language (HTML), eXtensible Markup Language (XML); Hyper Text Transfer Protocol (HTTP); Domain Names; URL; Protocol Address; Website, Web browser, Web Servers; Web Hosting.

COMPUTER SYSTEM ORGANISATION

Number System: Binary, Octal, Decimal, Hexadecimal and conversion between two different number systems. Integer, Floating Point, 2’s complement of a base-2 number;

Internal Storage encoding of Characters: ASCII, ISCII (Indian Script Code for Information Interchange), UNICODE;

Microprocessor: Basic concepts, Clock speed (MHz, GHz), 16 bit, 32 bit, 64 bit processors; Types – CISC, RISC; Concept of System Buses, Address bus, Data bus.

Concepts of Accumulator, Instruction Register, and Program Counter;

Commonly used CPUs and CPU related terminologies: Intel Pentium Series, Intel Celeron, Cyrix, AMD Series, Xeon, Intel Mobile, Mac Series; CPU Cache;
Concept of heat sink and CPU fan, Motherboard; Single, Dual and Multiple
processors;

Types of Memory: Cache (L1,L2), Buffer, RAM (DRAM, SDRAM, RDRAM, DDRAM), ROM (PROM, EPROM), Access Time;

Input Output Ports/Connections: Power connector, Monitor Socket, Serial (COM) and Parallel (LPT) port, Universal Serial Bus port, PS-2 port, SCSI port, PCI/MCI socket, Keyboard socket, Infrared port (IR), audio/speaker socket, Mic socket; Data Bus; external storage devices connected using I/O ports;

Power Supply: Switched Mode Power Supply (SMPS): Elementary Concept of Power Supply: Voltage, Current, Power (Volt, Ampere, Watt), SMPS supplies – Mother Board, Hard Disk Drive, Floppy Disk Drive, CD/DVD Drive;

Power Conditioning Devices: Voltage Stabilizer, Constant Voltage Transformer (CVT), Uninterruptible Power Supply (UPS) - Online and offline.

Network “Architectures”

A host refers to any device that is connected to your network. Some define a host as any device that has been assigned a network address.

A host can serve one or more functions:

• A host can request data (often referred to as a client)
• A host can provide data (often referred to as a server)
• A host can both request and provide data (often referred to as a peer)

Because of these varying functions, multiple network “architectures” have been developed, including:

• Peer-to-Peer networks
• Client/Server networks
• Mainframe/Terminal networks

When using a peer-to-peer architecture, all hosts on the network can both request and provide data and services. For example, configuring two Windows XP workstations to share files would be considered a peer-to-peer network.

Though peer-to-peer networks are simple to configure, there are several key disadvantages to this type of architecture. First, data is spread across multiple devices, making it difficult to manage and back up that data.
Second, security becomes problematic, as you must configure individual permissions and user accounts on each host.

When using a client/server architecture, hosts are assigned specific roles. Clients request data and services stored on servers. Connecting Windows XP workstations to a Windows 2003 domain controller would be considered a client/server network.

While client/server environments tend to be more complex than peer-to-peer networks, there are several advantages. With data now centrally located on a server or servers, there is only one place to manage, back up, and secure that data. This simplified management allows client/server networks to scale much larger than peer-to-peer networks. The key disadvantage of client/server architecture is that it introduces a single point of failure.
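
As a minimal sketch of the client/server idea (my own illustration using Python's standard socket module, not something from the original text), the server below centrally holds a piece of data and a client connects to request it. The address, port, and request string are arbitrary.

# Minimal client/server sketch: the server holds the data,
# the client connects and requests it.
import socket
import threading
import time

HOST, PORT = "127.0.0.1", 5050        # illustrative address and port only

def server():
    with socket.socket() as srv:
        srv.bind((HOST, PORT))
        srv.listen()
        conn, _ = srv.accept()
        with conn:
            request = conn.recv(1024).decode()
            if request == "GET report":
                conn.sendall(b"quarterly sales figures")

def client():
    with socket.socket() as cli:
        cli.connect((HOST, PORT))
        cli.sendall(b"GET report")
        print("Server replied:", cli.recv(1024).decode())

t = threading.Thread(target=server)
t.start()
time.sleep(0.2)                        # give the server a moment to start listening
client()
t.join()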

When using a mainframe/terminal architecture, often referred to as a thin-client environment, a single device (the mainframe) stores all data and services for the network. This provides the same advantage as a client/server environment – centralized management and security of data.

Additionally, the mainframe performs all processing functions for the dumb terminals (or thin-clients) that connect to the mainframe. The thin clients perform no processing whatsoever, but serve only as input and output devices into the mainframe. Put more simply, the mainframe handles all the “thinking” for the thin-clients.

A typical hardware thin-client consists of a keyboard/mouse, a display, and an interface card into the network. Software thin-clients are also prevalent, and run on top of a client operating system (such as Windows XP or Linux).

Windows XP’s remote desktop is an example of a thin-client application.

- Introduction to Networks -

What is a Network?

A network is defined as devices connected together to share information and services. The types of data and services that can be shared on a network are endless - documents, music, email, websites, databases, printers, faxes, telephony, videoconferencing, etc.

Protocols are “rules” that govern the method by which devices share data and services. Protocols are covered in great detail in subsequent sections.

Basic Network Types

Networks are generally broken down into two types:

LANs (Local Area Networks) - high-speed networks that cover a relatively small geographic area, usually contained within a single building or campus. A LAN is usually under the administrative control of a single entity/organization.

WANs (Wide Area Networks) – The book definition of a WAN is a network that spans large geographical locations, usually to interconnect multiple LANs.

A more practical definition describes a WAN as a network that traverses a public network or commercial carrier, using one of several WAN technologies. Thus, a WAN can be under the administrative control of several entities or organizations, and does not need to “span large geographical distances.”

MAN (Metropolitan Area Network). A MAN is defined as a network that spans several LANs across a city-wide geographic area. The term “MAN” is less prevalent than either LAN or WAN.

Number Systems and Binary Arithmetic

Number Systems

This section focuses on the way communication takes place inside and among different computer devices.

Types of number systems:

1. Decimal (Denary): In primary school we used to write numbers in terms of Units, Tens, Hundreds and Thousands. Our number system, the DENARY system, is based on TEN states: 0, 1, 2, 3, 4, 5, 6, 7, 8, 9.

2. Binary: A numbering system using only two digits, "0" and "1" (corresponding to 0 and 1 in the decimal system). We know that computers are machines built from microscopic switches with only TWO states: ON or OFF (0 or 1). All computer programs are executed in binary form only. When a user enters data into a computer (such as typing letters), a translator has to convert that input into its binary equivalent.

3. Hexadecimal: This is a numbering system involving 16 states; it is used because it makes binary data easier to represent.

Weights

A number is made up of digits, and every digit has a certain value of importance. When we were in primary school we were taught to place numbers under Units, Tens, Hundreds, Thousands and so on. What we were being taught were in fact the so-called DENARY WEIGHTS. Let us analyse the real value of a DECIMAL NUMBER.

Suppose we have the decimal number 2139 (base 10). Each digit has a position: the digit 3 has a value of 3 tens (30), and the digit 2 has a value of 2 thousands (2000). Written out in full, 2139 = 2×1000 + 1×100 + 3×10 + 9×1.

Weights in the Binary System:

Weights can also be called Place Values. Similar to the denary weights, there are binary weights, which differ only in the range of digits. Suppose we have the binary number 1010101 (base 2). Its weights are powers of two (64, 32, 16, 8, 4, 2, 1), so 1010101 in binary equals 64 + 16 + 4 + 1 = 85 in decimal.

Conversions:

At Matsec Level one needs to remember the following number conversions:

1. From binary to decimal
2. From decimal to binary
3. From binary to hexadecimal
4. From hexadecimal to binary
5. From decimal to hex
6. From hex to decimal
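
As a small illustration of these conversions (my own sketch using Python's built-in conversion functions, not part of the syllabus text):

# Conversions between binary, decimal and hexadecimal using Python built-ins.
n = 0b1010101            # binary literal: 1010101 (base 2)

print(n)                 # binary -> decimal:      85
print(bin(85))           # decimal -> binary:      0b1010101
print(hex(0b1010101))    # binary -> hexadecimal:  0x55
print(bin(0x55))         # hexadecimal -> binary:  0b1010101
print(hex(85))           # decimal -> hexadecimal: 0x55
print(int("55", 16))     # hexadecimal -> decimal: 85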

Wednesday, February 4, 2009

Logic circuits

Electronic circuits which process information encoded as one of a limited set of voltage or current levels. Logic circuits are the basic building blocks used to realize consumer and industrial products that incorporate digital electronics. Such products include digital computers, video games, voice synthesizers, pocket calculators, and robot controls.

All logic circuits may be described in terms of three fundamental elements, shown graphically in the illustration. The NOT element has one input and one output; as the name suggests, the output generated is the opposite of the input in binary. In other words, a 0 input value causes a 1 to appear at the output; a 1 input results in a 0 output. (All signals are interpreted to be one of only two values, denoted as 0 and 1.)




[Figure: Logic elements - the NOT, AND, and OR symbols.]

The AND element has an arbitrary number of inputs and a single output. As the name suggests, the output becomes 1 if, and only if, all of the inputs are 1; otherwise the output is 0. The AND together with the NOT circuit therefore enables searching for a particular combination of binary signals.

The third element is the OR function. As with the AND, an arbitrary number of inputs may exist and one output is generated. The OR output is 1 if one or more inputs are 1.
The operations of AND and OR have some analogies to the arithmetic operations of multiplication and addition, respectively. The collection of mathematical rules and properties of these operations is called boolean algebra.

While the NOT, AND, and OR functions have been designed as individual circuits in many circuit families, by far the most common functions realized as individual circuits are the NAND and NOR circuits of the illustration. A NAND may be described as equivalent to an AND element driving a NOT element. Similarly, a NOR is equivalent to an OR element driving a NOT element.
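
To make these elements concrete, here is a short Python sketch (an illustration of the logic only, not anything from the original article) that defines the five functions just described and prints their truth tables.

# Truth tables for the basic logic elements described above.
def NOT(a):     return 1 - a
def AND(a, b):  return a & b
def OR(a, b):   return a | b
def NAND(a, b): return NOT(AND(a, b))
def NOR(a, b):  return NOT(OR(a, b))

print("a b | AND OR NAND NOR")
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "|", AND(a, b), " ", OR(a, b), "  ", NAND(a, b), "  ", NOR(a, b))
print("NOT 0 =", NOT(0), "  NOT 1 =", NOT(1))
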
As the names of the logic elements described suggest, logic circuits respond to combinations of input signals. Logic networks which are interconnected so that the current set of output signals is responsive only to the current set of input signals are appropriately termed combinational logic.

An important further capability for processing information is memory, or the ability to store information. The logic circuits themselves must provide a memory function if information is to be manipulated at the speeds the logic is capable of. Logic circuit networks that include feedback paths to retain information are termed sequential logic networks, since outputs are in part dependent on the prior input signals applied and in particular on the sequence in which the signals were applied.
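
As a rough sketch of sequential logic (my own example, not taken from the article), the following models a set-reset (SR) latch built from two cross-coupled NOR gates. Because of the feedback path, the output depends on the sequence of inputs that has been applied, not just on the current inputs.

# A set-reset (SR) latch built from two cross-coupled NOR gates.
# The feedback path gives the circuit memory: with both inputs at 0,
# the latch holds whichever value was last set or reset.
def NOR(a, b):
    return 0 if (a or b) else 1

class SRLatch:
    def __init__(self):
        self.q, self.q_bar = 0, 1
    def step(self, s, r):
        # Iterate the feedback loop a few times until the outputs settle.
        for _ in range(4):
            self.q = NOR(r, self.q_bar)
            self.q_bar = NOR(s, self.q)
        return self.q

latch = SRLatch()
print(latch.step(s=1, r=0))   # set   -> Q = 1
print(latch.step(s=0, r=0))   # hold  -> Q stays 1 (memory)
print(latch.step(s=0, r=1))   # reset -> Q = 0
print(latch.step(s=0, r=0))   # hold  -> Q stays 0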

Several alternatives exist for the digital designer to create a digital system. Two common realizations are ready-made catalog-order devices, which can be combined as building blocks, and custom-designed devices. Gate-array devices comprise a two-dimensional array of logic cells, each equivalent to one or a few logic gates. Programmable logic arrays have the potential for realizing any of a large number of different sets of logic functions. In table look-up, the collection of input signals is grouped arbitrarily as address digits to a memory device. The last form of logic network embodiment is the microcomputer.

A logic circuit is an electric circuit whose output depends upon the input in a way that can be expressed as a function in symbolic logic; it has one or more binary inputs (capable of assuming either of two states, e.g., “on” or “off”) and a single binary output. Logic circuits that perform particular functions are called gates. Basic logic circuits include the AND gate, the OR gate, and the NOT gate, which perform the logical functions AND, OR, and NOT. Logic circuits can be built from any binary electric or electronic devices, including switches, relays, electron tubes, solid-state diodes, and transistors; the choice depends upon the application and design requirements. Modern technology has produced integrated logic circuits, modules that perform complex logical functions. A major use of logic circuits is in electronic digital computers. Fluid logic circuits have been developed whose function depends on the flow of a liquid or gas rather than on an electric current.