The Desktop

The large area that is the upper part of the screen is called the Desktop. Sometimes (to keep you confused!) people may use the term Desktop to refer to everything you see on the monitor after Windows has gotten started.

The main purpose of the Desktop is to hold shortcut icons that will help you work efficiently.

The Desktop is really just a folder inside the Windows folder, so it can hold anything that any other folder can hold. It can be decorated with interesting textures or pictures. We'll discuss how later.

The Taskbar

Across the bottom of the screen we see the Taskbar. Normally it is in view all the time. The Taskbar's main job is to show what applications are currently running.

The middle section of the bar shows a button for each open application. Each button shows an icon with a label naming the program and the current document, when there is room to see it! The icons and labels adjust in size to fit the space on the Taskbar, so if you have several programs running, you may not see much of each one's taskbar button, as in the illustration, which is sized for a small window.

The Taskbar also holds the Start menu button at the far left and the Notification Area at the far right. Other toolbars, such as Quick Launch, Address, Links, Windows Media Player, may also display on the Taskbar.

In WinXP and Windows Vista you may see double chevrons or an arrow on the Taskbar when there are too many items to show in the allowed space. When you click the chevrons, a menu list appears or the space expands to show the hidden items. In WinXP and Windows Vista you can also choose to group buttons from the same program. For example, if you have 7 Word windows open, you will see only one button on the Taskbar, which shows that there are 7 Word windows. Clicking the button's arrow opens a list of the grouped windows.
The Start Menu
Clicking on the Start Menu brings up a list of shortcuts to start your programs. An item with an arrow at the right, such as Programs or All Programs, will open another list. There can be several levels of such lists.

In WinXP and Windows Vista, above the All Programs link is a list of often used programs. Above that is a list of programs that you want to remain in view, no matter how often you actually use them.

In Windows Vista the All Programs link works a bit differently. Clicking on All Programs changes the area directly above to show folders and shortcuts in a folder tree display instead of as cascading menus. Double click a folder to see the shortcuts and other folders inside it. At the bottom of the list, the Back arrow will return you to the original list.

Turning on the computer


Turn on the computer. There is a push button or a toggle switch, probably on the front. There may be both: a toggle switch on the back and a button on the front! The button will only work when the toggle is On. (Odds are, your computer is already on if you are reading this on a computer!)

You should see some text on the screen as this happens. First the BIOS checks the memory and looks for hardware parts. Programs that run at startup may print messages to the screen. Hopefully they just say that all is OK.

Then you will see the Windows startup screen. This picture is different for each version of Windows. There may be different words below the Windows text, depending on what other Microsoft products you have installed.

In any version of Windows, you may see a login dialog or a Welcome screen that asks for your user name and password. If so, log yourself in again.

The final screen you see shows the desktop and taskbar (and probably a prettier background than the illustration here!). Ready to work!

Options for quitting what you are doing

Recent versions of Windows have choices besides just turning off the computer. Which choices you see will depend on the version of Windows and sometimes on whether the computer is a laptop or not.

Switch user - Lets you switch to a different user account without closing down the open programs. The computer is, of course, left turned on. When you switch back to the original user, all of the programs appear just the way they were before you switched users.

Log off - Leaves the computer on but closes any open programs and returns you to the logon screen. This is useful when the computer is part of a network or there are several user accounts.

Lock - Lets you keep others from using your computer while it is still on. You must re-enter your password to unlock the computer. This feature is primarily available on network computers but can be enabled for stand-alone computers.

Restart - Closes documents and applications and shuts down the computer, but immediately restarts it. This is useful when installing software that requires a restart to finish the installation. Also, a restart can often fix a computer that is behaving oddly. Sometimes you will need to shut down to give the computer a longer rest and cool down than Restart provides.

Shut down - Closes all your programs and turns off the computer. Most programs will prompt you to save any unsaved documents as part of the shut down process.

Install updates and shut down - When one or more updates have been downloaded but not installed yet, the next time you shut down or restart the computer, the update(s) will be installed before the computer is shut down. How long the installation will take depends on the size and complexity of the updates. It can add several minutes to the shut down process AND to the next start up. You may or may not see a message about the installation during shut down. You should see a screen tip about a successful installation when you start up again.
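Several of the options above can also be invoked from the command line with the built-in Windows `shutdown` utility (its `/l`, `/r`, `/s`, `/h` and `/t` flags are listed by `shutdown /?`). A minimal sketch; this wrapper only builds the command line unless you explicitly ask it to run:

```python
# Sketch: mapping the power options above to the Windows built-in
# `shutdown` command. The flag names are real, but by default this
# wrapper only *builds* the command line; pass dry_run=False on a
# Windows machine to actually perform the action.
import subprocess

ACTIONS = {
    "log off":   "/l",
    "restart":   "/r",
    "shut down": "/s",
    "hibernate": "/h",
}

def power_command(action, delay_seconds=0, dry_run=True):
    """Return (and optionally execute) the shutdown command for an action."""
    flag = ACTIONS[action]
    cmd = ["shutdown", flag]
    # /t sets a delay in seconds; it applies to restart and shut down
    if flag in ("/r", "/s"):
        cmd += ["/t", str(delay_seconds)]
    if not dry_run:
        subprocess.run(cmd, check=True)  # would really act on the machine!
    return " ".join(cmd)

print(power_command("restart", delay_seconds=30))  # shutdown /r /t 30
print(power_command("log off"))                    # shutdown /l
```

A delay (`/t 30`) gives open programs time to prompt you about unsaved documents before the machine goes down.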

Low-Power Modes

When you are not going to work for a while, you can save electricity and battery charge by putting the computer into a low-power mode. This is especially useful for laptops.

Which choices you have will depend on your version of Windows and on what modes your computer can use. The modes differ in how fast the computer returns to normal and in what happens if power is lost while in the low power state.

Sleep/Stand by - Saves your work to memory and puts the computer into a low-power state. If the computer loses power while asleep, the changes saved only to memory are lost. Waking up from Sleep or Stand By mode is quick.

Hibernate - Saves your work (the contents of memory) to the hard disk and then puts the computer into a lower-power state than Stand by does. Because everything is on disk, if the computer loses power while in hibernate mode, your recent work is not lost. Waking up from Hibernation is slower than from Sleep/Stand By.

Hybrid Sleep - In Windows Vista on some computers, sleep combines features of stand by and hibernate. Open documents are saved to memory and to the hard disk.
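The trade-offs among the three modes can be summed up in a small table-like sketch (the mode names and fields below are illustrative only; real behavior depends on your hardware and Windows version):

```python
# A minimal model of the low-power modes described above: where work
# is saved, whether it survives a power loss, and how fast wake-up is.
MODES = {
    "sleep":        {"saved_to": {"memory"},         "wake": "fast"},
    "hibernate":    {"saved_to": {"disk"},           "wake": "slower"},
    "hybrid sleep": {"saved_to": {"memory", "disk"}, "wake": "fast"},
}

def work_survives(mode, power_lost):
    """Does your unsaved-to-file work survive, given the mode and
    whether power was lost while in that mode?"""
    if not power_lost:
        return True           # nothing lost if power never went out
    # only a copy on disk survives a power loss
    return "disk" in MODES[mode]["saved_to"]
```

This is why hybrid sleep is attractive on desktops: it wakes as fast as sleep but, like hibernate, keeps a disk copy in case the power goes out.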


Windows uses a GUI (Graphical User Interface) so almost everything can be done using mouse clicks on icons and buttons. You don't have to memorize commands or remember keystroke combinations. Instead of typing commands, you can SEE what you are doing. Later, once you know your way around, you will find it useful and often faster to type in commands or use those odd combinations of keystrokes.

Versions

Windows now comes in many different versions and often several flavors for a particular version. Confusing? Probably! Your workplace, friends, and school may all be using a different version. So it is worth a bit of trouble to get familiar with what you might see out there, as well as what you see on your own computer.

Antivirus programs differ in performance and operation, but they generally share a common scanning procedure. Consider the steps below. (An antivirus program must be installed before following these steps; the operating system used here is Microsoft Windows.)

For scanning physical drives (C:, D:, E:, and so on):

1. Open My Computer, or press Win+E.
2. With your mouse, point to the drive you want to scan (A:, C:, D:, etc.).
3. Right-click the drive icon and choose the scan option (an antivirus logo usually appears in the context menu next to it).
4. A dialog box with a scan progress bar appears. Wait for the scan to complete; if a virus is detected, choose the best action from those the dialog box offers.
5. Done. If necessary, repeat the scan or proceed to the other drives.
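The same routine can be automated. A hedged sketch: the scanner path and flags below are hypothetical placeholders, since every antivirus product ships its own command-line interface (check your product's manual for the real one):

```python
# Sketch of the drive-scanning steps above, automated. SCANNER and
# its "/scan" flag are HYPOTHETICAL -- substitute your own product's
# documented command line.
import subprocess

SCANNER = r"C:\Program Files\ExampleAV\avscan.exe"  # hypothetical path

def scan_drives(drives, run=subprocess.run):
    """Scan each drive letter in turn; return a per-drive verdict."""
    results = {}
    for letter in drives:
        root = f"{letter}:\\"
        # steps 3-4: invoke the scanner on the drive and wait for it
        proc = run([SCANNER, "/scan", root])
        # convention assumed here: exit code 0 means the drive is clean
        results[root] = "clean" if proc.returncode == 0 else "infected"
    return results
```

The `run` parameter is injected so the routine can be exercised without a real scanner installed.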


For scanning a Flash drive or other external drives:

1. Plug your flash drive into a USB port.
2. Once the flash drive is recognized by the system, follow steps 1 to 5 above.



Good luck!

Top ten anti-virus of 2008

Posted by iknow | 4:45 AM

Here is the list, in descending order (according to the annual anti-virus review):

10. Norman antivirus and anti spyware
9. CA Antivirus
8. Norton antivirus
7. McAfee virus scan
6. Trend Micro
5. F-secure 2008
4. AVG Antivirus
3. ESET Nod32
2. Kaspersky
1. Bit Defender

So try out trial versions of these programs and evaluate which anti-virus is best for you.

Computer Virus




Computer VIRUSES and ANTI-VIRUS


Virus - is the generic term that people are using these days to describe a group of wilfully destructive computer programs: any program that replicates and destroys another program.

V - Vital
I - Information
R - Resources
U - Under
S - Siege


A program or piece of code that is loaded onto your computer without your knowledge and runs against your wishes. Viruses can also replicate themselves. All computer viruses are manmade. A simple virus that can make a copy of itself over and over again is relatively easy to produce. Even such a simple virus is dangerous because it will quickly use all available memory and bring the system to a halt. An even more dangerous type of virus is one capable of transmitting itself across networks and bypassing security systems.

Since 1987, when a virus infected ARPANET, a large network used by the Defense Department and many universities, many antivirus programs have become available. These programs periodically check your computer system for the best-known types of viruses.


Types of Viruses:


LOGIC BOMBS - just like a real bomb, a logic bomb will lie dormant until triggered by some event. The trigger can be a specific date, the number of times executed, a random number, or even a specific event such as deletion of an employee’s payroll record. When the logic bomb is triggered it will usually do something unpleasant. This can range from changing a random byte of data somewhere on your disk to making the entire disk unreadable. The changing of random data on disk may be the most insidious attack since it would do a lot of damage before it would be detected.
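The "trigger" idea can be sketched in a few lines of deliberately harmless code; the date and run-count conditions below are illustrative, and the function merely reports whether the trigger would fire:

```python
# Benign illustration of a logic-bomb trigger: ordinary code guarded
# by a condition. This checks the conditions and does nothing harmful.
import datetime

def bomb_triggered(today, run_count, trigger_date, trigger_runs):
    """True once a specific date has been reached OR the program has
    been executed a certain number of times."""
    return today >= trigger_date or run_count >= trigger_runs
```

The point is that nothing about a logic bomb is exotic: it is normal code sitting behind an `if`, which is exactly why it can lie dormant and undetected for so long.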


TROJANS - these are named after the Trojan horse, which delivered soldiers into the city of Troy. Likewise, a Trojan program is a delivery vehicle for some destructive code (such as a logic bomb or a virus) onto a computer. The Trojan program appears to be a useful program, but when a certain event occurs, it will attack your computer in some way.


WORMS - is a self-reproducing program that does not infect other programs as a virus does, but instead creates copies of itself, which in turn create even more copies. These are usually seen on networks and on multi-processing operating systems, where the worm creates copies of itself which are also executed. Each new copy creates more copies, quickly clogging the system.


The three most destructive computer programs are:


Worm - is a program that replicates itself. It creates an image of itself either in a file or at a particular location on the disk.


Trojan Horse - is a program that acts like the Trojan Horse of Greek mythology. A malevolent program is hidden inside another, apparently useful program. While the “useful” program is running, the malevolent part does something nasty, like erase your FAT and directory.


Bomb - is a piece of code embedded in a program or the operating system itself that waits for a particular event to occur. When that event occurs, the logic bomb “goes off”, doing some kind of damage.



Classification of Viruses by their preferred habitat:

Parasitic viruses - attach themselves to other programs; a parasitic virus starts when the executable file to which it is attached is run.

Boot Sector Viruses - prefer lodging in the boot sector of your floppy or hard disk (generally the hard drive). Once the computer reads the boot sector, the virus wakes up and springs into action.


CATEGORIES OF VIRUSES:

Computer viruses can be roughly classified into the following categories:


Macro Viruses

Macro viruses are perhaps the newest type of virus. The first macro virus, written in Microsoft’s Word macro language, was discovered in August, 1995. Currently, thousands of macro viruses are known to exist and include viruses written in the macro language of Microsoft’s Excel, Word and AmiPro applications.

Since a macro virus is written in the language of an application, not the operating system (OS), it is platform independent and can spread between DOS, Windows, Mac, and even OS/2 systems. That is, macro viruses can be spread to any machine that runs the application the virus was written in. Any machine running Word, for example, whether it is a PC, Mac or something else, is vulnerable to Word documents that contain a macro virus.

This in itself is revolutionary. Now add the ability to travel by email, plus the tremendous interconnections of networks, the World Wide Web and the increasing power of the Macro language (Word, Excel, etc.), and you’ve got yourself a real threat.


File Viruses (Parasitic Viruses)

File viruses attach themselves to executable files and are at least partially activated whenever the host file is run. File viruses are typically TSR (terminate-and-stay-resident), direct action or companion programs.

TSR viruses, which are among the most common of viruses, reside in memory and attach themselves to executable programs when they are run. It is in this way that TSR viruses spread to other programs on the hard drive, floppies or network.

A direct action virus loads itself into memory to infect other files and then unloads itself, while a companion virus exploits the fact that DOS runs a .COM file before an .EXE file of the same name. For example, a companion virus might create a hidden PGM.COM file so that when the PGM command is typed, the fake PGM.COM runs first. The .COM file invokes its virus code before going on to start the real PGM.EXE file.
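The companion trick works because of DOS's extension precedence when a bare command name is typed: .COM is tried before .EXE, then .BAT. A tiny illustration of that lookup rule:

```python
# DOS resolves a bare command name by extension precedence.
# A companion virus relies on exactly this ordering.
PRECEDENCE = [".COM", ".EXE", ".BAT"]

def resolve(command, files_present):
    """Return which file DOS would execute for a bare command name."""
    for ext in PRECEDENCE:
        candidate = command.upper() + ext
        if candidate in files_present:
            return candidate
    return None

# With both PGM.COM and PGM.EXE on disk, typing `PGM` runs PGM.COM:
print(resolve("pgm", {"PGM.COM", "PGM.EXE"}))  # PGM.COM
```

Typing the full name, `PGM.EXE`, would bypass the companion file, which is one reason such viruses went unnoticed.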


Boot Viruses

A boot virus infects the boot sector and related areas on a hard or floppy disk. Every disk has a boot sector and so is potentially vulnerable to infection. Once the hard disk of a machine has been contaminated, the virus will be activated every time the machine is powered on. It will install itself in memory and turn control over to the normal boot code. The virus subsequently infects any floppy that is inserted into the machine.

Boot-sector viruses, the most common type of virus, move or overwrite a disk’s original boot sector data and replace it with an infected boot code of their own design. Floppies and hard drives are the most susceptible to being overwritten by a boot sector virus. Then, whenever the infected system is powered on (boots up), the virus loads into memory where it can gain control over basic hardware operations. From its place in memory, a boot virus can quickly spread to any of the other drives on the system (floppy, network, etc.).


Multi-partite Viruses

Multi-partite viruses share some of the characteristics of boot sector viruses and file viruses: they can infect .COM and .EXE files, and the boot sector of the computer’s hard drive.

On a computer booted up with an infected floppy, a typical multi-partite virus will first make itself resident in memory and then infect the boot sector of the hard drive. From there the virus may infect a PC’s entire environment.

Not many forms of this virus class actually exist. They do, however, account for a disproportionately large number of infections.


Polymorphic or Mutation Viruses

Polymorphic (mutation) viruses are unique in that they are designed to elude detection by changing their structure after each execution--with some polymorphic viruses, millions of permutations are possible. Of course, this makes it harder for normal antivirus programs to detect or intercept them. It should be noted that polymorphic viruses do not, strictly speaking, constitute a separate category of virus; they usually belong to one of the categories described above.
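The core idea can be illustrated harmlessly: store the same content under a fresh random key each generation, so the stored bytes differ from copy to copy while the decoded content stays identical. A sketch, with an ordinary text string standing in for the payload:

```python
# Harmless demo of the polymorphism idea: the same payload is stored
# under a different random XOR key each "generation", so its stored
# bytes almost never look the same twice, yet decoding always
# recovers identical content.
import os

def encode(payload: bytes):
    key = os.urandom(1)[0] or 1          # fresh key each generation
    body = bytes(b ^ key for b in payload)
    return key, body

def decode(key, body):
    return bytes(b ^ key for b in body)

payload = b"the same content every time"
k1, b1 = encode(payload)
k2, b2 = encode(payload)
# stored forms usually differ, decoded forms are always identical
assert decode(k1, b1) == decode(k2, b2) == payload
```

This is also why signature-based scanners struggle with polymorphic code: there is no fixed byte pattern to match, so they must look for the small decoding routine or emulate the code instead.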


Stealth Viruses

Stealth viruses, or Interrupt Interceptors, as they are sometimes called, take control of key DOS-level instructions by intercepting the interrupt table, which is located at the beginning of memory. This gives the virus the ability to do two important things: 1) take control of the system by redirecting the interrupt calls, and 2) hide itself to prevent detection.


Network viruses

Network viruses use the protocols and commands of a computer network or email to spread themselves.

By DESTRUCTIVE CAPABILITY, viruses can be divided as follows:

  • harmless - having no effect on computing (except for some lowering of free disk space as a result of propagation);

  • not dangerous - limiting their effect to lowering of free disk space and a few graphical, sound or other effects;

  • dangerous - viruses which may seriously disrupt the computer's work;

  • very dangerous - viruses whose operating algorithms intentionally contain routines which may lead to loss of data, data destruction, or erasure of vital information in system areas; according to one unconfirmed computer legend, some could even damage moving mechanical parts by causing resonance in certain kinds of HDDs.


How Viruses Spread

There are many ways for a virus to enter your system:


  • Email attachments

  • Database replications

  • Shared network files and network traffic in general

  • World Wide Web (WWW) sites

  • FTP traffic from the Internet (file downloads)

  • Floppy disks brought in from outside the organization

  • Electronic bulletin boards (BBS)

  • Pirated software

  • Demonstration software

  • Computer labs


The most likely virus entry points are email, Internet and network connections, floppies, and modems or other serial or parallel port connections. In today’s increasingly interconnected workplace (the Internet, intranet, shared drives, removable drives and email), virus outbreaks now spread faster and wider than ever before.



VIRUS PREVENTION GUIDELINES:


  • You can only get a virus by executing an infected program or booting from an infected diskette. Any diskette can be infected by a boot sector virus, even non-bootable diskettes.


  • You cannot get a virus simply by being on a Bulletin Board Service (BBS), the Internet, or an online service. You will only become infected if you download an infected file and execute that file.


  • Most viruses are transferred by booting from an infected diskette (e.g. Stoned, Form, Stealth-B, AntiExe, Monkey). Always boot your computer with a virus-free DOS disk. Remove diskettes from your A: drive as soon as you are through with them. If your CMOS setup permits it, change your boot order to boot from the hard disk first. If you don’t know what CMOS is, check the manual for your computer; there is normally an option when you boot to press a specific key to enter CMOS setup, which lets you change many options on your computer.


  • Make sure you have at least two backups for all of your files. Backups are essential not only to safely recover from virus infections, but also to recover from the other threats to your data.


  • Be sure to check all new software for viruses. Even shrink-wrapped software from a major publisher may contain a virus.


  • Purchase and use an ANTI-VIRUS program.
    Do not loan your original program disks to other users.
    Do not loan your computer system to other users.
    Write-protect your diskettes.


Classifications of Digital Computers:


According to size:


  1. Microcomputer - is the smallest of the digital computers. A MICROCOMPUTER or PERSONAL COMPUTER (PC for short) is the most widely used, especially at home, because of its affordable price and manageability.

- it consists of: CPU, keyboard, monitor, printer, and the disk drives.

- can only be used by one person at a time.

Examples: Personal Computers, Workstations, Portable Computers


  2. Minicomputer - is the smallest computer designed specifically for the multi-user environment.

- allows several people to use the machine at the same time.

- serves as a stand-alone computer.

- supports 40 to 100 employees or remote terminals.

- performs multi-tasking and allows many terminals to be connected to its services.

The ability to connect minicomputers to each other and to mainframes has popularized them among larger businesses. This use is being challenged by developments in networked microcomputers. Minicomputers are still recognized as being able to process large amounts of data.


  3. Mainframe Computer - is another system that can be used in the multi-user environment.

- can serve more than 100 remote terminals. Mainframe computers are large general-purpose computers. They generally require special attention and are kept in a controlled atmosphere. They are multi-tasking and generally used in areas where large databases are maintained, e.g. government departments and the airline industry.


Other types:

  • Supercomputers – are the fastest calculating devices ever invented. They operate at speeds measured in nanoseconds and even in picoseconds.

  • Network Computers – are computers with minimal memory, disk storage and processor power designed to connect to a network, especially the Internet. A network is a coordinated system of linked computer terminals, or minicomputers and mainframes, that may operate independently but also share data and other resources.



Computer People (Peopleware)


  1. System Developers/ Programmers – are those who analyze, design, develop, maintain, and upgrade all aspects of the computer system.

  2. Application Users – are those who benefit from the work done by the developers by means of availing of the system’s features and usage.


COMPUTER PROFESSIONALS:


Data Entry Operator / Data Encoder - prepare data for processing, usually by keying it in a machine-readable format.


Computer Operators - monitor the computer, review procedures, and keep peripheral equipment running.


Librarians - catalog the processed disks and tapes and keep them secure.


Computer Programmers - design, write, test, and implement the programs that process data on the computer system; they also maintain and update the system.


System Analysts - are knowledgeable in the programming area but have broader responsibilities. They plan and design not just individual programs but the entire computer system.


Chief Information Officers (CIO) - also called department managers, must understand more than just computer technology. These persons must understand the goals and operations of the entire organization.


Network Manager - implements and maintains the network.

Computer Software


COMPUTER SOFTWARE

Types of Computer Software :

1.Application Software
2.System Software
3.Compilers/Interpreters/Translators


Application Software - refers to programs which are written for the purpose of solving a specific data processing job.

2 Sub- types :

1.Custom Software – software created by programmers, tailored to the needs of an office or company.
Example: video rental software, payroll system software, flight reservation software, etc.

2.Packaged Software or Commercial Software – software in a form of packages and are commercially available.

Example:

Word Processors – Microsoft Word, Wordstar, Wordpad
Electronic Spreadsheets – Excel, Lotus 123, Quattro Pro
Presentation Software – Powerpoint
Desktop Publisher – Microsoft Publisher, Adobe Pagemaker
Multimedia – Encarta
Web Development Tools – Dreamweaver, FrontPage Express
Gaming Software – CounterStrike, NBA Live, Ragnarok
Photo and Graphic Enhancement – Corel Draw, Adobe Photoshop
Antivirus – Norton, McAfee, AVG


System Software - refers to programs supplied by computer manufacturers or specialist software companies that contribute to the efficiency and ease of operating the computer.

Sub-type:

1.Operating System Software
- programs that are used to direct and manage the execution of jobs by the computer.

Example: Microsoft Windows, Microsoft DOS, Linux


Programming Languages – (Compilers / Interpreters / Translators)
- programs used to communicate with the computer.
- software that translates a source program into machine language.

Example: Visual Basic, Foxpro, C, Visual C++, Clipper, Cobol, Fortran, Java

Computer Hardware


I. COMPUTER HARDWARE


* 4 functional parts :

1. Input Device
2. Central Processing Unit
3. Memory
4. Output Device


A.) Input Device

What is Input?
- everything we tell the computer is input.

Types of Input:

Data
- the raw facts given to the computer.

Programs
- are the sets of instructions that direct the computer.

Commands
- are special codes or key words that the user inputs to perform a task, like RUN "ACCOUNTS". These can be selected from a menu of commands like "Open" on the File menu. They may also be chosen by clicking on a command button.

User response
- is the user's answer to the computer's question, such as choosing OK, YES, or NO, or typing in text, for example the name of a file.

Input Devices

1.) Computer Keyboard - is a typewriter-like device that allows you to type in letters, numbers and other symbols for the Computer.

*** 3 Parts of the Keyboard ***

a) Alphanumeric Keypad
b) Numeric Keypad
c) Function Keys

2.)Touch Tablet
- is an electronic blackboard that can sense the tip of a pencil on its surface.

3.) Light Pen
- is an input device that reads light from the display screen, thus allowing you to point to a spot on the screen.

4.) Mouse
- is a hand operated pointing device.

5.) Puck
- is a pointing device, used much like a mouse, but it has a small magnifying glass with cross hairs.

6.) Joysticks
- are used in computer games.

7.) Bar Code Reader
- is a device for translating the bar codes into data for the computer.
* Bar Code - is a series of vertical black and white bars.

8.) Magnetic Disk Drives
- it is a device for converting magnetic spots on the surface of magnetic disk into electrical signals understandable by the computer.

9.) Magnetic Tape Drive
- is like a tape or a cassette recorder.

10) Glidepad
- uses a touch-sensitive pad for controlling the cursor. The user slides a finger across the pad and the cursor follows the finger movement. For clicking there are buttons, or you can tap on the pad with a finger. The glidepad is a popular alternative pointing device for laptops.

11) Pen Input
- used especially in Personal Digital Assistants (PDAs). Pen input is used for:

*Data Input
- by writing. The PDA recognizes your handwriting. (If only your friends could, too!)

*Pointing Device
- functions like a mouse in moving a cursor around the screen and clicking by tapping the screen.

*Command Gestures
- you can issue commands by moving the pen in patterns. So a certain kind of swirl would mean to save the file and a different kind of swirl could mean to open a new file.

12) Touchscreen
- make selections by just touching the screen.

13) Digitizers and Graphics Tablets
- convert drawings, photos, etc. to a digital signal. The tablets have special commands.

14) Terminal
- consists of a keyboard and a screen, so it can be considered an input device, especially some of the specialized types. Some come as single units. Terminals are also called display terminals or video display terminals (VDTs).

A dumb terminal has no ability to process or store data. It is linked to a minicomputer, mainframe, or supercomputer. The keyboard and viewing screen may be a single piece of equipment.

An intelligent, smart, or programmable terminal can process or store on its own, at least to a limited extent. PCs can be used as smart terminals. A point-of-sale terminal (POS) is an example of a special purpose terminal. These have replaced the old cash registers in nearly all retail stores. They can update inventory while calculating the sale. They often have special purpose keys.

*For example, McDonalds has separate touchpads for each food item available.

15) Sound Input
- recording sounds for your computer requires special equipment. Microphones can capture sounds from the air which is good for sound effects or voices. For music the best results come from using a musical instrument that is connected directly to the computer. Software can combine music recorded at different times. You could be a music group all by yourself -singing and playing all the parts!

16) Voice Input
- used for several purposes:

*data entry
- talking data into the computer when your hands and eyes are busy should certainly be more efficient. You'd have to be very careful about your pronunciation!

*command and control
- telling the computer what to do instead of typing commands, like saying "Save file". Be careful here, too. The dictionary of understood words does not include some of the more "forceful" ones.

*speaker recognition
- security measures can require you to speak a special phrase. The computer must recognize your voice to let you in.

*speech to text
- translating spoken words directly to type would suit some authors just fine. You'd have to watch out for those "difficult to translate" phrases like "hmmm" and "ah, well, ... uhmmm."

17) Video Input
- a digital camera takes still photos but records the pictures on computer disks or memory chips. The information contained can be uploaded to a computer for viewing. A video camera or recorder (VCR) can record data that can be uploaded to the computer with the right hardware. Though it is not digital data, you can still get good results with the right software.

Both of these take huge amounts of storage. Photos make for very large files. A web cam is a tiny video camera designed especially to sit on your computer. It feeds pictures directly to the computer - no tape or film to develop. Of course you are limited by the length of the cable that connects the camera to the computer. But like any camera, it will take a picture of what you point it at!

18) Page scanner
- the scanner works like a copy machine. It captures a whole page and converts it to a digital image. The scanned text cannot be edited at this point.

19) Hand scanner
- you move the device across the document or picture. It will capture only a section of a page or a large image. So the pieces of anything wider than the scanner would have to be recombined with some nifty software.

Special types of characters read with special devices:

Bar Codes - retail shops now use printed bar codes on products to track inventory and calculate the sale at the checkout counter. The US Post Office uses bar codes to sort mail, but the bars are different from those used for pricing products.

Optical Marks - a special machine "reads" the marks. Woe to the student who takes a test with this kind of score sheet and doesn't get those bubbles colored in correctly!

Magnetic Ink - a bank account # is printed in special ink with magnetic qualities which can be read by the right machine.

Optical Characters - there are coding systems that use letters or special characters that are especially shaped to be easy for machines to read.



Eye Strain

A computer screen is usually only an arm's length from your eyes, and prolonged use will cause eyestrain. This can lead to headaches, blurred vision and sometimes nausea. You can put it on the same level as watching a TV from two feet away. These injuries don't just appear; they tend to creep in over a period of time. To prevent this, consider the following:

  • Ensure the screen resolution is correct; if not, correct it. (A 15" monitor usually has a resolution of 800 x 600 pixels.)
  • Make sure your monitor screen is not blurred; if it is, get it checked or even replaced.
  • Keep your working time on the computer to a safe level before taking a break: 20 minutes at most, followed by a 15-minute break.
  • Make sure you have sufficient lighting. Working in the dark will cause no end of problems.
  • Keep your monitor screen clean and away from direct sunlight.

Wrist Strain

The two devices you are most likely to use with a computer are the mouse and the keyboard. If used incorrectly, you may suffer from wrist strain. This can be anything from an aching wrist to a painful one. Again, with a few simple rules you can avoid this.

  • Invest in a keyboard wrist rest; they are not expensive and your typing will benefit greatly.
  • If your keyboard feels like an old typewriter when you press the keys, consider buying a soft-touch keyboard; again, they are not that expensive.
  • When using a mouse, keep your wrist parallel to it, ensuring you have enough room to navigate the whole screen without picking the mouse up and putting it down.
  • Ensure you are sitting correctly in your seat.

Back Pain

This is one of the most common complaints. It is usually due to bad posture while sitting at a computer.
- Ensure you are seated at the correct height for your desk and are parallel to the keyboard and monitor if possible. Constantly looking down or up at the screen can cause neck problems.

Posted by iknow | 5:23 AM


What are computers used for?


Computers are used for a wide variety of purposes. Data processing is commercial and financial work. It includes such things as billing, shipping and receiving, inventory control, and similar business-related functions, as well as the “electronic office”.

a.) For personal computing
- A small computer such as a microcomputer can be operated by a single person. It can be used for word processing, desktop publishing, electronic spreadsheets and maintaining databases.

b) Information systems
- Computers are used to support the administrative aspects of an organization, for example: payroll systems, airline reservation systems, and others.

c) Offices / Banks
- Computers are used in offices to enhance the working process. They are used to gather data and information.

Scientific processing is using a computer to support science. This can be as simple as gathering and analyzing raw data and as complex as modelling natural phenomenon (weather and climate models, thermodynamics, nuclear engineering, etc.).
a.) Science and Research
- Engineers and scientists use computers as tools in experimentation and design. Aerospace engineers use computers to simulate the effects of a wind tunnel in analyzing the aerodynamics of an airplane prototype.

b.) Medicine
- Tiny "computers on a chip" are being embedded in artificial hearts and other organs. Once the organs are implanted in the body, the computer monitors critical inputs, such as blood pressure and flow, then takes corrective action to ensure stable operation in a continuous feedback loop.

c.) Artificial Intelligence
- Today's computers can imitate many human activities, such as grasping, calculating, speaking, remembering, and comparing numbers and drawings. Researchers are working to expand these capabilities by developing computers and programs that can imitate human intelligence.

d.) Multimedia
-Includes content creation (composing music, performing music, recording music, editing film and video, special effects, animation, illustration, laying out print materials, etc.) and multimedia playback (games, DVDs, instructional materials, etc.).

e.) Education
-Computers can interact with students to enhance the learning process.

IV. Capabilities of a Computer:

  • The computer has the ability to perform arithmetic operations.
  • The computer has the ability to perform logical operations.
  • The computer has the ability to store and retrieve information.
  • The computer has the ability to process information at very high speed.
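The four capabilities listed above can be demonstrated in a few lines of code. A minimal sketch in Python (the numbers are arbitrary, chosen only for illustration):

```python
# 1. Arithmetic operations
total = 1500 + 275 * 4          # addition and multiplication

# 2. Logical operations
in_stock = (total > 0) and (total < 10000)   # comparisons combined with "and"

# 3. Storing and retrieving information
inventory = {"widgets": total}   # store a value under a name
retrieved = inventory["widgets"] # retrieve it again

# 4. High-speed processing: summing a million numbers finishes
#    in a fraction of a second on any modern machine
big_sum = sum(range(1_000_000))

print(total, in_stock, retrieved, big_sum)
```

Everything a computer does, from payroll to weather modeling, is built out of these same basic operations applied very quickly.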



V. Limitations of a Computer:

  • Computers do not think for you.
  • The computer cannot correct inaccurate data.
  • The computer is subject to breakdown.

Introduction to Computers

Posted by iknow | 4:59 AM


A Brief History of Computer Technology

A complete history of computing would include a multitude of diverse devices such as the ancient Chinese abacus, the Jacquard loom (1805) and Charles Babbage's ``analytical engine'' (1834). It would also include discussion of mechanical, analog and digital computing architectures. As late as the 1960s, mechanical devices, such as the Marchant calculator, still found widespread application in science and engineering. During the early days of electronic computing devices, there was much discussion about the relative merits of analog vs. digital computers. In fact, as late as the 1960s, analog computers were routinely used to solve systems of finite difference equations arising in oil reservoir modeling. In the end, digital computing devices proved to have the power, economics and scalability necessary to deal with large scale computations. Digital computers now dominate the computing world in all areas ranging from the hand calculator to the supercomputer and are pervasive throughout society. Therefore, this brief sketch of the development of scientific computing is limited to the area of digital, electronic computers.

The evolution of digital computing is often divided into generations. Each generation is characterized by dramatic improvements over the previous generation in the technology used to build computers, the internal organization of computer systems, and programming languages. The following history has been organized using these widely recognized generations as mileposts.
The Mechanical Era (1623-1945)

The idea of using machines to solve mathematical problems can be traced at least as far back as the early 17th century, when mathematicians designed and implemented calculators that were capable of addition, subtraction, multiplication, and division.

The ABACUS emerged about 5,000 years ago as the earliest form of the computer. This math tool allows the user to make computations using a system of sliding beads arranged on a rack. It evolved from a simple need to keep account of numbers: merchants trading goods not only needed a way to count goods, but also to quickly calculate the prices of those goods.

In 1642, Blaise Pascal (1623-1662), the 18-year-old son of a French tax collector, invented what he called a numerical wheel calculator to help his father with his duties.

This brass rectangular box, also called a Pascaline, used eight movable dials to add sums up to eight figures long.

In 1694, a German mathematician and philosopher, Gottfried Wilhelm von Leibniz (1646-1716), improved the Pascaline by creating a machine that could also multiply.
Charles Xavier Thomas de Colmar, a Frenchman, later invented a machine that could perform the four basic arithmetic functions.

Colmar’s mechanical calculator, the ARITHMOMETER, presented a more practical approach to computing because it could add, subtract, multiply, and divide.

Computers as we know them today began with an English mathematics professor, Charles Babbage (1791-1871), who has been called the father of the computer.

Babbage’s first attempt came in 1822, when he proposed a machine to compute mathematical tables, called the Difference Engine. After working on the Difference Engine for 10 years, Babbage was suddenly inspired to begin work on the first general-purpose computer, which he called the Analytical Engine.

Augusta Ada King, Countess of Lovelace, daughter of the English poet Lord Byron and Babbage’s assistant, was instrumental in the machine’s design. She helped revise the plans and secure funding from the British government, and she is noted as the first computer programmer. The Analytical Engine was never completed, but Babbage’s plans embodied the basic elements of modern computers: input, output, storage, processing, and control.

In 1889, an American inventor, Herman Hollerith (1860-1929), also applied the Jacquard loom concept to computing. His first task was to find a faster way to compute the U.S. census. Hollerith brought his PUNCH CARD READER into the business world, founding the Tabulating Machine Company in 1896, which became International Business Machines (IBM) in 1924.

Electromechanical Accounting Machine
The EAM (Electromechanical Accounting Machine) ushered in a new age of inventions through the use of electromechanical devices or components such as electric relays and switches. Along with this, punched-card technology improved.

ABC (Atanasoff-Berry Computer)
John V. Atanasoff, credited as the inventor of the electronic digital computer, together with Clifford Berry built the prototype of an electronic digital computer in 1939, featuring the first known use of electronic vacuum tubes for computation. It was known as the ABC (Atanasoff-Berry Computer).
When the United States entered World War II on 7 December 1941, work on the computer came to a halt.

ASCC or Harvard Mark I
Howard Aiken developed the ASCC (Automatic Sequence Controlled Calculator), which could carry out the operations of addition, subtraction, multiplication, and division, and could reference previous results. Aiken was much influenced by Babbage’s writing, and he saw the project to build the ASCC as completing the task which Babbage had set out on but failed to finish. Construction of the ASCC was completed in 1943, and it was decided to move the computer to Harvard University, where it began to be used in May 1944. Grace Hopper worked with Aiken from 1944 on the ASCC, which had been renamed the Harvard Mark I and given by IBM to Harvard University.

The First “Computer Bug”
A moth was found trapped between the points at Relay #70, Panel F, of the Mark II Aiken Relay Calculator while it was being tested at Harvard University on 9 September 1945. The operators affixed the moth to the computer log with the entry: “First actual case of bug being found”. They put out the word that they had “debugged” the machine, thus introducing the term “debugging” a computer program.
Alan Turing - A second early electronic machine was Colossus, designed for the British military in 1943. This machine played an important role in breaking codes used by the German army in World War II. Turing's main contribution to the field of computer science was the idea of the Turing machine, a mathematical formalism widely used in the study of computable functions.



First Generation Electronic Computers (1937-1953) – Vacuum Tubes

Three machines have been promoted at various times as the first electronic computer. These machines used electronic switches, in the form of vacuum tubes, instead of electromechanical relays. Electronic components had one major benefit: they could ``open'' and ``close'' about 1,000 times faster than mechanical switches.

ENIAC (Electronic Numerical Integrator and Calculator) was the first all-purpose, all-electronic digital computer. It was built by John Presper Eckert, Jr. and John W. Mauchly. ENIAC introduced what is commonly referred to as the First Generation of Computers. The ENIAC, being over 1,000 times faster than its electromechanical predecessors, could execute up to 5,000 basic arithmetic operations per second.

Its successor, the Electronic Discrete Variable Automatic Computer, or EDVAC, treated instructions as numerical data and stored them electronically in the computer’s memory. This led to the development of self-modifying programs.

UNIVAC (Universal Automatic Computer)
The UNIVAC was the first digital computer to handle both numerical and alphabetical information with equal ease, and the first commercially successful computer.

Second Generation (1954-1962) - Transistors

The second generation saw several important developments at all levels of computer system design, from the technology used to build the basic circuits to the programming languages used to write scientific applications. Electronic switches in this era were based on discrete diode and transistor technology with a switching time of approximately 0.3 microseconds. The first machines to be built with this technology include TRADIC at Bell Laboratories in 1954 and TX-0 at MIT's Lincoln Laboratory. Memory technology was based on magnetic cores which could be accessed in random order, as opposed to mercury delay lines, in which data was stored as an acoustic wave that passed sequentially through the medium and could be accessed only when the data moved by the I/O interface.

Important innovations in computer architecture included index registers for controlling loops and floating point units for calculations based on real numbers. Prior to this, accessing successive elements in an array was quite tedious and often involved writing self-modifying code (programs which modified themselves as they ran; at the time this was viewed as a powerful application of the principle that programs and data were fundamentally the same, but the practice is now frowned upon as extremely hard to debug and is impossible in most high level languages). Floating point operations were performed by libraries of software routines in early computers, but were done in hardware in second generation machines.

During this second generation many high level programming languages were introduced, including FORTRAN (1956), ALGOL (1958), and COBOL (1959). Important commercial machines of this era include the IBM 704 and its successors, the 709 and 7094. The latter introduced I/O processors for better throughput between I/O devices and main memory.

The second generation also saw the first two supercomputers designed specifically for numeric processing in scientific applications. The term ``supercomputer'' is generally reserved for a machine that is an order of magnitude more powerful than other machines of its era. Two machines of the 1950s deserve this title. The Livermore Atomic Research Computer (LARC) and the IBM 7030 (aka Stretch) were early examples of machines that overlapped memory operations with processor operations and had primitive forms of parallel processing.

Third Generation (1963-1972) – Integrated Circuits

The third generation brought huge gains in computational power. Innovations in this era include the use of integrated circuits, or ICs (semiconductor devices with several transistors built into one physical component), semiconductor memories starting to be used instead of magnetic cores, microprogramming as a technique for efficiently designing complex processors, the coming of age of pipelining and other forms of parallel processing (described in detail in Chapter CA), and the introduction of operating systems and time-sharing.

The first ICs were based on small-scale integration (SSI) circuits, which had around 10 devices per circuit (or ``chip''), and evolved to the use of medium-scale integrated (MSI) circuits, which had up to 100 devices per chip. Multilayered printed circuits were developed and core memory was replaced by faster, solid state memories. Computer designers began to take advantage of parallelism by using multiple functional units, overlapping CPU and I/O operations, and pipelining (internal parallelism) in both the instruction stream and the data stream.

In 1964, Seymour Cray developed the CDC 6600, which was the first architecture to use functional parallelism. By using 10 separate functional units that could operate simultaneously and 32 independent memory banks, the CDC 6600 was able to attain a computation rate of 1 million floating point operations per second (1 Mflops). Five years later CDC released the 7600, also developed by Seymour Cray. The CDC 7600, with its pipelined functional units, is considered to be the first vector processor and was capable of executing at 10 Mflops. The IBM 360/91, released during the same period, was roughly twice as fast as the CDC 6600. It employed instruction look-ahead, separate floating point and integer functional units, and a pipelined instruction stream. The IBM 360-195 was comparable to the CDC 7600, deriving much of its performance from a very fast cache memory.

The SOLOMON computer, developed by Westinghouse Corporation, and the ILLIAC IV, jointly developed by Burroughs, the Department of Defense and the University of Illinois, were representative of the first parallel computers. The Texas Instruments Advanced Scientific Computer (TI-ASC) and the STAR-100 of CDC were pipelined vector processors that demonstrated the viability of that design and set the standards for subsequent vector processors.

Fourth Generation (1972-1984) - Microprocessor

The next generation of computer systems saw the use of large scale integration (LSI - 1000 devices per chip) and very large scale integration (VLSI - 100,000 devices per chip) in the construction of computing elements. At this scale entire processors could fit onto a single chip, and for simple systems the entire computer (processor, main memory, and I/O controllers) could fit on one chip. Gate delays dropped to about 1 ns per gate.

Semiconductor memories replaced core memories as the main memory in most systems; until this time the use of semiconductor memory in most systems was limited to registers and cache. During this period, high speed vector processors, such as the CRAY 1, CRAY X-MP and CYBER 205 dominated the high performance computing scene. Computers with large main memory, such as the CRAY 2, began to emerge. A variety of parallel architectures began to appear; however, during this period the parallel computing efforts were of a mostly experimental nature and most computational science was carried out on vector processors. Microcomputers and workstations were introduced and saw wide use as alternatives to time-shared mainframe computers.

Developments in software include very high level languages such as FP (functional programming) and Prolog (programming in logic). These languages tend to use a declarative programming style as opposed to the imperative style of Pascal, C, FORTRAN, et al. In a declarative style, a programmer gives a mathematical specification of what should be computed, leaving many details of how it should be computed to the compiler and/or runtime system. These languages are not yet in wide use, but are very promising as notations for programs that will run on massively parallel computers (systems with over 1,000 processors). Compilers for established languages started to use sophisticated optimization techniques to improve code, and compilers for vector processors were able to vectorize simple loops (turn loops into single instructions that would initiate an operation over an entire vector).
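The contrast between the imperative and declarative styles described above can be sketched in a few lines. Python is used here purely for illustration (FP and Prolog have their own, quite different, syntax):

```python
# Imperative style: step-by-step instructions for HOW to compute
# the sum of the squares of the even numbers in a list.
def sum_even_squares_imperative(numbers):
    total = 0
    for n in numbers:
        if n % 2 == 0:
            total += n * n
    return total

# Declarative style: a specification of WHAT to compute, leaving the
# details of iteration to the language runtime.
def sum_even_squares_declarative(numbers):
    return sum(n * n for n in numbers if n % 2 == 0)

data = [1, 2, 3, 4, 5, 6]
print(sum_even_squares_imperative(data))   # 56  (4 + 16 + 36)
print(sum_even_squares_declarative(data))  # 56
```

Because the declarative version does not dictate the order of the individual operations, a compiler or runtime system is, in principle, free to evaluate them in parallel, which is why such languages looked promising for massively parallel machines.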

Two important events marked the early part of the fourth generation: the development of the C programming language and the UNIX operating system, both at Bell Labs. In 1972, Dennis Ritchie, seeking to meet the design goals of CPL and to generalize Thompson's B, developed the C language. Thompson and Ritchie then used C to write a version of UNIX for the DEC PDP-11. This C-based UNIX was soon ported to many different computers, relieving users of having to learn a new operating system each time they changed computer hardware. UNIX or a derivative of UNIX is now a de facto standard on virtually every computer system.

Fifth Generation (1984-1990) – Parallel Processing

The development of the next generation of computer systems is characterized mainly by the acceptance of parallel processing. Until this time parallelism was limited to pipelining and vector processing, or at most to a few processors sharing jobs. The fifth generation saw the introduction of machines with hundreds of processors that could all be working on different parts of a single program. The scale of integration in semiconductors continued at an incredible pace - by 1990 it was possible to build chips with a million components - and semiconductor memories became standard on all computers.

Other new developments were the widespread use of computer networks and the increasing use of single-user workstations. Prior to 1985 large scale parallel processing was viewed as a research goal, but two systems introduced around this time are typical of the first commercial products to be based on parallel processing. The Sequent Balance 8000 connected up to 20 processors to a single shared memory module (but each processor had its own local cache). The machine was designed to compete with the DEC VAX-780 as a general purpose Unix system, with each processor working on a different user's job. However Sequent provided a library of subroutines that would allow programmers to write programs that would use more than one processor, and the machine was widely used to explore parallel algorithms and programming techniques.

The Intel iPSC-1, nicknamed ``the hypercube'', took a different approach. Instead of using one memory module, Intel connected each processor to its own memory and used a network interface to connect processors. This distributed memory architecture meant memory was no longer a bottleneck and large systems (using more processors) could be built. The largest iPSC-1 had 128 processors. Toward the end of this period a third type of parallel processor was introduced to the market. In this style of machine, known as a data-parallel or SIMD, there are several thousand very simple processors. All processors work under the direction of a single control unit; i.e. if the control unit says ``add a to b'' then all processors find their local copy of a and add it to their local copy of b. Machines in this class include the Connection Machine from Thinking Machines, Inc., and the MP-1 from MasPar, Inc.
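The ``add a to b'' behavior described above can be sketched as a small simulation. This is an illustration only: real SIMD hardware executes the broadcast instruction on all processors simultaneously, whereas the loop below models that lockstep behavior sequentially.

```python
# A simplified simulation of a data-parallel (SIMD) machine: one control
# unit broadcasts a single instruction, and every processor applies it
# to its own local copies of the variables a and b.

class Processor:
    def __init__(self, a, b):
        self.a = a  # this processor's local copy of a
        self.b = b  # this processor's local copy of b

processors = [Processor(1, 10), Processor(2, 20),
              Processor(3, 30), Processor(4, 40)]

def broadcast(instruction):
    """The control unit issues one instruction; all processors execute it
    in lockstep (modeled with a loop, but conceptually simultaneous)."""
    for p in processors:
        instruction(p)

def add_a_to_b(p):
    # The broadcast instruction: "add a to b"
    p.b = p.a + p.b

broadcast(add_a_to_b)
print([p.b for p in processors])  # [11, 22, 33, 44]
```

With several thousand such simple processors, one broadcast instruction performs thousands of additions at once, which is the whole appeal of the data-parallel design.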

Scientific computing in this period was still dominated by vector processing. Most manufacturers of vector processors introduced parallel models, but there were very few (two to eight) processors in these parallel machines. In the area of computer networking, both wide area network (WAN) and local area network (LAN) technology developed at a rapid pace, stimulating a transition from the traditional mainframe computing environment toward a distributed computing environment in which each user has their own workstation for relatively simple tasks (editing and compiling programs, reading mail) but shares large, expensive resources such as file servers and supercomputers. RISC technology (a style of internal organization of the CPU) and plummeting costs for RAM brought tremendous gains in the computational power of relatively low cost workstations and servers. This period also saw a marked increase in both the quality and quantity of scientific visualization.

Sixth Generation (1990 - )

Transitions between generations in computer technology are hard to define, especially as they are taking place. Some changes, such as the switch from vacuum tubes to transistors, are immediately apparent as fundamental changes, but others are clear only in retrospect. Many of the developments in computer systems since 1990 reflect gradual improvements over established systems, and thus it is hard to claim they represent a transition to a new ``generation'', but other developments will prove to be significant changes.

This generation is beginning with many gains in parallel computing, both in the hardware area and in improved understanding of how to develop algorithms to exploit diverse, massively parallel architectures. Parallel systems now compete with vector processors in terms of total computing power and most expect parallel systems to dominate the future. Combinations of parallel/vector architectures are well established, and one corporation (Fujitsu) has announced plans to build a system with over 200 of its high end vector processors. Manufacturers have set themselves the goal of achieving teraflops (10^12 arithmetic operations per second) performance by the middle of the decade, and it is clear this will be obtained only by a system with a thousand processors or more. Workstation technology has continued to improve, with processor designs now using a combination of RISC, pipelining, and parallel processing. As a result it is now possible to purchase a desktop workstation for about $30,000 that has the same overall computing power (100 megaflops) as fourth generation supercomputers. This development has sparked an interest in heterogeneous computing: a program started on one workstation can find idle workstations elsewhere in the local network to run parallel subtasks.
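The arithmetic behind these performance figures can be checked in a few lines. The 100-megaflops workstation figure comes from the text; the 1-gigaflops per-processor speed is an illustrative assumption for a fast processor of the era:

```python
# Rough arithmetic behind the performance figures quoted above.

teraflops = 10**12               # 1 teraflops = 10^12 operations per second
gigaflops = 10**9                # assumed speed of one fast processor
workstation_flops = 100 * 10**6  # a ~100-megaflops desktop workstation

# A teraflops system built from 1-gigaflops processors needs about:
processors_needed = teraflops // gigaflops
print(processors_needed)   # 1000 -- hence "a thousand processors or more"

# Expressed instead as 100-megaflops workstations:
workstations_needed = teraflops // workstation_flops
print(workstations_needed) # 10000
```

This is exactly why heterogeneous computing is attractive: the aggregate power of many idle workstations on a network is substantial.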

One of the most dramatic changes in the sixth generation will be the explosive growth of wide area networking. Network bandwidth has expanded tremendously in the last few years and will continue to improve for the next several years. T1 transmission rates are now standard for regional networks, and the national ``backbone'' that interconnects regional networks uses T3. Networking technology is becoming more widespread than its original strong base in universities and government laboratories as it is rapidly finding application in K-12 education, community networks and private industry. A little over a decade after the warning voiced in the Lax report, the future of a strong computational science infrastructure is bright. The federal commitment to high performance computing has been further strengthened with the passage of two particularly significant pieces of legislation: the High Performance Computing Act of 1991, which established the High Performance Computing and Communication Program (HPCCP) and Sen. Gore's Information Infrastructure and Technology Act of 1992, which addresses a broad spectrum of issues ranging from high performance computing to expanded network access and the necessity to make leading edge technologies available to educators from kindergarten through graduate school.

In bringing this encapsulated survey of the development of a computational science infrastructure up to date, we observe that the President's FY 1993 budget contains $2.1 billion for mathematics, science, technology and science literacy educational programs, a 43% increase over FY 90 figures.