Monday 17 December 2012

Parallel Processing


MEANING
Parallel processing is the use of more than one CPU to execute a program simultaneously.
Ideally, parallel processing makes a program run faster as more CPUs are used.

GOAL
The main goal of parallel processing is to increase computing performance: the more things that can be done simultaneously (in the same time), the more work that can be accomplished.

PARALLEL PROCESSING
Parallel Computing
Parallel computing is a technique for carrying out computation simultaneously, using several computers at once.
It is typically needed when the required capacity is very large, either because large amounts of data must be processed or because the computation itself demands a great deal of work.
Performing parallel computation requires infrastructure: a parallel machine consisting of many computers connected by a network and able to work in parallel on a problem. Supporting software, commonly referred to as middleware, is also needed; it regulates the distribution of jobs among the nodes of a parallel machine. Finally, users must write parallel programs to realize the computation.
Parallel programming itself is a computer programming technique that allows commands/operations to be executed concurrently. When the computers used are separate machines connected in a computer network, this is usually called a distributed system. Popular programming interfaces used in parallel programming are MPI (Message Passing Interface) and PVM (Parallel Virtual Machine).
The thing to remember is that parallel computing is different from multitasking. Multitasking means a computer with a single processor executing multiple tasks seemingly at once: a single computer cannot truly do multiple jobs simultaneously, but the operating system's scheduler switches between processes so quickly that the tasks appear simultaneous. Parallel computing, as described above, uses multiple processors or computers, and it does not use the Von Neumann architecture.
To clarify the difference between single computing (using one processor) and parallel computing (using multiple processors), we first need to understand the models of computation. There are four models of computation:
1. SISD
2. SIMD
3. MISD
4. MIMD

1. SISD
SISD stands for Single Instruction, Single Data. It is the only model that uses the Von Neumann architecture, because it uses only one processor; it can therefore be regarded as the model for single computing, while the other three models are parallel computing using multiple processors. Some examples of computers using the SISD model are the UNIVAC 1, IBM 360, CDC 7600, Cray 1 and PDP 1.

2. SIMD
SIMD stands for Single Instruction, Multiple Data. SIMD uses many processors running the same instructions, but each processor processes different data. For example, suppose we want to find the number 27 in a series of 100 numbers using 5 processors. On each processor we use the same algorithm, but the data processed differ: processor 1 processes the data from position 1 to position 20, processor 2 from position 21 to position 40, and so on for the other processors. Some examples of computers using the SIMD model are the ILLIAC IV, MasPar, Cray X-MP, Cray Y-MP, Thinking Machines CM-2 and the Cell processor (GPU).
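As a minimal sketch of this data-parallel search (my own illustration, with OpenMP threads standing in for the five processors; the array contents and size are invented):

/* Data-parallel search sketch: every thread runs the same loop body
 * (same instructions) on a different slice of the data, mirroring the
 * SIMD example above. Compile with e.g. gcc -fopenmp. */
#include <stdio.h>

#define N 100

int main(void) {
    int data[N];
    for (int i = 0; i < N; i++)
        data[i] = i;                 /* illustrative data: 0..99 */

    int found_at = -1;
    #pragma omp parallel for         /* splits 0..N-1 across threads */
    for (int i = 0; i < N; i++) {
        if (data[i] == 27)           /* same test, different data */
            found_at = i;            /* only one element matches, so this
                                        unsynchronized write is safe here */
    }

    printf("27 found at index %d\n", found_at);
    return 0;
}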

3. MISD
MISD stands for Multiple Instruction, Single Data. MISD uses many processors, each executing different instructions, but processing the same data; it is the opposite of the SIMD model. Taking the same case as in the SIMD example but solved differently: in MISD, the first through fifth processors all process the data from positions 1 to 100, but each processor searches with a different technique. To date, no computer uses the MISD model.

4. MIMD
MIMD stands for Multiple Instruction, Multiple Data. MIMD uses many processors, each with different instructions and processing different data. Many computers using the MIMD model also include SIMD components. Some computers using the MIMD model are the IBM POWER5, HP/Compaq AlphaServer, Intel IA32, AMD Opteron, Cray XT3 and IBM BG/L.

In short, the difference between single computing and parallel computing can be illustrated by the figures below:

[Figure: problem solving in single computing]

[Figure: problem solving in parallel computing]


From the difference between the two figures above, we can conclude that parallel computing performs more effectively and saves time in data processing compared to single computing.
From the explanations above, we can also answer why and when to use parallel computing: it saves a great deal of time and is highly effective when we have to process large amounts of data. However, that effectiveness is lost when we only process small amounts of data, where single computing is the more effective choice.

Parallel computing requires:
· algorithms
· a programming language
· a compiler

Parallel programming is a computer programming technique that allows commands/operations to be executed simultaneously, whether on a machine with one CPU (single processor) or with multiple CPUs (a parallel machine with several processors).
The main goal of parallel programming is to increase computing performance.

* Message Passing Interface (MPI)
MPI is a programming standard that allows programmers to create applications that can run in parallel. MPI provides functions for exchanging messages between processes. Other uses of MPI are to:
1. write parallel code that is portable,
2. obtain high performance in parallel programming, and
3. handle problems involving irregular or dynamic data relationships that do not fit the data-parallel model.
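As an illustration of the message-exchange functions mentioned above, here is a minimal MPI sketch in C (the payload value and ranks are arbitrary):

/* Minimal MPI message-passing sketch: process 0 sends an integer to
 * process 1. Compile with mpicc, run with e.g. mpirun -np 2. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    int rank, value;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        value = 42;                              /* illustrative payload */
        MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
        printf("rank 1 received %d from rank 0\n", value);
    }

    MPI_Finalize();
    return 0;
}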





The Relationship between Modern Computing and Parallel Processing
Modern computing and parallel processing are closely related, because solving a problem by computer is now considered far faster than solving it manually. As the demand for computing performance keeps growing, one way to meet it is to increase the speed of the hardware, and the main hardware component of a computer is the processor. Parallel processing takes this further by using multiple processors (a multiprocessor architecture) for faster computer performance.

Computation with parallel processing uses several computers or CPUs to find the solution to a problem, so the problem can be solved faster than by using one computer. The CPUs are combined and the work is divided among them, so each CPU completes part of the problem. This is worthwhile only for big problems, however; for smaller computations, a single CPU is cheaper and sufficient.

Memory Organization


Memory: internal storage areas in the computer. The term memory identifies data storage that comes in the form of chips, and the word storage is used for memory that exists on tapes or disks. Moreover, the term memory is usually used as shorthand for physical memory, which refers to the actual chips capable of holding data. Some computers also use virtual memory, which expands physical memory onto a hard disk.
Every computer comes with a certain amount of physical memory, usually referred to as main memory or RAM. You can think of main memory as an array of boxes, each of which can hold a single byte of information. A computer that has 1 megabyte of memory, therefore, can hold about 1 million bytes (or characters) of information.


1.0 Types of Memory

•  RAM (random-access memory): This is the same as main memory. When used by itself, the term RAM refers to read-and-write memory; that is, you can both write data into RAM and read data from RAM. This is in contrast to ROM, which permits you only to read data. Most RAM is volatile, which means that it requires a steady flow of electricity to maintain its contents. As soon as the power is turned off, whatever data was in RAM is lost.




•  ROM (read-only memory): Computers almost always contain a small amount of read-only memory that holds instructions for starting up the computer. Unlike RAM, ROM cannot be written to.



•  PROM (programmable read-only memory): A PROM is a memory chip on which you can store a program. But once the PROM has been used, you cannot wipe it clean and use it to store something else. Like ROMs, PROMs are non-volatile.



•  EPROM (erasable programmable read-only memory): An EPROM is a special type of PROM that can be erased by exposing it to ultraviolet light.


•  EEPROM (electrically erasable programmable read-only memory): An EEPROM is a special type of PROM that can be erased by exposing it to an electrical charge.



2.0 Cache Memories
• Cache memories are small, fast SRAM-based memories managed automatically in hardware.
• They hold frequently accessed blocks of main memory.
• The CPU looks first for data in the caches.
• Typical system structure: [figure]


General Cache Organization (S, E, B)

A cache is organized as S sets, with E lines per set and B bytes per block.


Cache Read



Example: Direct-Mapped Cache (E = 1)

Direct mapped: one line per set
Assume: cache block size of 8 bytes
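A minimal sketch of how an address is split up for such a cache; only the 8-byte block size comes from the example above, the number of sets S is an assumed value:

/* Splitting a memory address into tag / set index / block offset for a
 * direct-mapped cache (E = 1). */
#include <stdio.h>

#define B 8    /* block size in bytes -> 3 offset bits (from the example) */
#define S 64   /* number of sets -> 6 index bits (assumed for illustration) */

int main(void) {
    unsigned addr   = 0x1A2B3C4Du;        /* example address              */
    unsigned offset = addr % B;           /* low bits select the byte     */
    unsigned set    = (addr / B) % S;     /* middle bits select the set   */
    unsigned tag    = addr / (B * S);     /* remaining high bits: the tag */
    printf("addr 0x%08X -> tag 0x%X, set %u, offset %u\n",
           addr, tag, set, offset);
    return 0;
}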



3.0 Types of External Memory

• Magnetic Disk
— Floppy
— Winchester
— RAID
— Removable

• Optical
— CD-ROM
— CD-Writable (WORM)
— CD-R/W
— DVD
• Magnetic Tape

Magnetic Disk
• Disk substrate coated with magnetizable material (iron oxide…rust)
• Substrate used to be aluminium
• Now glass
— Improved surface uniformity
– Increases reliability
— Reduction in surface defects
– Reduced read/write errors
— Lower flight heights (See later)
— Better stiffness
— Better shock/damage resistance
• Range of packaging
— Floppy (PC’s A: drive, B: drive)
— Winchester hard disk (PC’s C: drive)
— Removable hard disk

Floppy Disk
• 8”, 5.25”, 3.5”
• Small capacity
— Up to 1.44Mbyte (2.88M never popular)
• Slow
• Universal
• Cheap

Optical Storage CD-ROM

• Originally for audio
• 650Mbytes giving over 70 minutes audio
• Polycarbonate coated with highly reflective coat, usually aluminium
• Data stored as pits
• Read by reflecting laser
• Constant packing density
• Constant linear velocity



Magnetic Tape

• First kind of secondary memory
• Serial access
• Slow
• Very cheap
• Backup and archive

4.0 Data Organization and Formatting

Disk Geometry



Disk Geometry (Multiple-Platter View)



Disk Operation (Single-Platter View)



Disk Operation (Multi-Platter View)






Sunday 16 December 2012

Input/Output

INPUT and OUTPUT


Definition:

•A bus is a shared communication link, which uses one set of wires to connect multiple subsystems.
• I/O is very much architecture/system dependent
• I/O requires cooperation between:
  – the processor that issues I/O commands (read, write, etc.)
  – buses that provide the interconnection between processor, memory and I/O devices
  – I/O controllers that handle the specifics of control of each device and its interfacing
  – devices that store data or signal events


TYPE AND CHARACTERISTIC OF INPUT/OUTPUT:


Input devices
- Keyboard: data and instructions are input by typing on the keyboard; the message typed reaches the memory unit of the computer.
- Mouse: a pointing device. The mouse is rolled over the mouse pad, which in turn controls the movement of the cursor on the screen.
Output devices
- Speaker: high definition, or low distortion, to create a dynamic impact.
- Printer: produces a representation of an electronic document on physical media.


Examples of Input/Output:


[Images: keyboard and mouse as input devices; speaker and printer as output devices]

HANDLING I/O INTERRUPT:


Interrupts: an alternative to polling
  - the I/O device generates an interrupt when its status changes or data is ready
  - the OS handles interrupts just like exceptions (e.g., page faults)
  - the identity of the interrupting I/O device is recorded in the ECR
• I/O interrupts are asynchronous
  - not associated with any one instruction
  - don't need to be handled immediately
• I/O interrupts are prioritized
  - synchronous interrupts (e.g., page faults) have the highest priority
  - high-bandwidth I/O devices have higher priority than low-bandwidth ones

INTERRUPT OVERHEAD:


Parameters
  - 500 MHz CPU
  - interrupt handler takes 400 cycles
  - data transfer takes 100 cycles
  - 4 MB/s, 16 B interface; disk transfers data only 5% of the time
Data transfer time
  0.05 * (4M B/s)/(16 B/xfer) * [(100 c/xfer)/(500M c/s)] = 0.25%
Overhead for polling?
  (4M B/s)/(16 B/poll) * [(400 c/poll)/(500M c/s)] = 20%
Overhead for interrupts?
  0.05 * (4M B/s)/(16 B/xfer) * [(400 c/xfer)/(500M c/s)] = 1%
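The same arithmetic, redone as a small C program (all numbers taken from the parameters above):

/* Overhead arithmetic for polling vs. interrupts, as computed above. */
#include <stdio.h>

int main(void) {
    double cpu_hz     = 500e6;   /* 500 MHz CPU                 */
    double handler_c  = 400;     /* cycles per interrupt / poll */
    double transfer_c = 100;     /* cycles per data transfer    */
    double rate_bps   = 4e6;     /* 4 MB/s disk                 */
    double bytes_xfer = 16;      /* 16 B per transfer           */
    double busy_frac  = 0.05;    /* disk transfers 5% of time   */

    double xfers_per_s = rate_bps / bytes_xfer;   /* 250K xfers/s */
    printf("data transfer: %.2f%%\n",
           100 * busy_frac * xfers_per_s * transfer_c / cpu_hz);
    printf("polling:       %.2f%%\n",
           100 * xfers_per_s * handler_c / cpu_hz);
    printf("interrupts:    %.2f%%\n",
           100 * busy_frac * xfers_per_s * handler_c / cpu_hz);
    return 0;                    /* prints 0.25%, 20.00%, 1.00% */
}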

Monday 22 October 2012

Digital Logic


1.0 DIGITAL LOGIC

1.1 Basic Revision of Logic Gates
Principles of logic: any Boolean algebra operation can be associated with an electronic circuit in which the inputs and outputs represent the statements of Boolean algebra. Although these circuits may be complex, they may all be constructed from six devices: the AND gate, the OR gate, the NOT gate, the NAND gate, the XOR gate and the NOR gate.





1.2 Boolean Equation Forms

      1.2.1 Sum of Products and Product of Sums

      A Boolean equation can be represented in two forms:
A) Sum-of-products: a sum of product terms
When two or more product terms are summed by Boolean addition,
the resulting expression is a sum-of-products (SOP). Some examples are:
AB + ABC
ABC + CDE + BCD
AB + BCD + AC
Also, an SOP expression can contain a single-variable term, as in
A + ABC + BCD.
In an SOP expression, a single overbar cannot extend over more than one variable; however, more than one variable in a term can have an overbar.


    B) Product-of-sums: a product of sum terms
A sum term was defined before as a term consisting of the sum
(Boolean addition) of literals (variables or their complements). When two or
more sum terms are multiplied, the resulting expression is a product-of-sums
(POS). Some examples are:
(A + B)(A + B + C)
(A + B + C)(C + D + E)(B + C + D)
(A + B)(A + B + C)(A + C)
A POS expression can contain a single-variable term, as in
A(A + B + C)(B + C + D).
In a POS expression, a single overbar cannot extend over more than one
variable; however, more than one variable in a term can have an overbar. For
example, writing X′ for the complement of X, a POS expression can have the term A′ + B′ + C but not (A + B + C)′.
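As an illustration (the particular function is my own example, not one from the text above), a brute-force check that an SOP form and a POS form can describe the same function:

/* Checking that an SOP form and a POS form agree on all inputs.
 * F_sop = AB + A'C and F_pos = (A + C)(A' + B) are illustrative. */
#include <stdio.h>

int main(void) {
    for (int a = 0; a <= 1; a++)
        for (int b = 0; b <= 1; b++)
            for (int c = 0; c <= 1; c++) {
                int sop = (a & b) | (!a & c);   /* AB + A'C     */
                int pos = (a | c) & (!a | b);   /* (A+C)(A'+B)  */
                printf("A=%d B=%d C=%d  SOP=%d POS=%d %s\n",
                       a, b, c, sop, pos,
                       sop == pos ? "match" : "MISMATCH");
            }
    return 0;
}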



1.3 Simplification of Boolean Equations
1.3.1 Laws of Boolean Algebra
The basic laws of Boolean algebra (the commutative laws for addition and
multiplication, the associative laws for addition and multiplication, and the
distributive law) are the same as in ordinary algebra.
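These laws can be verified exhaustively for 1-bit values; a minimal sketch for the distributive law:

/* Exhaustively verifying the distributive law A(B + C) = AB + AC
 * over all 1-bit values. */
#include <stdio.h>

int main(void) {
    int ok = 1;
    for (int a = 0; a <= 1; a++)
        for (int b = 0; b <= 1; b++)
            for (int c = 0; c <= 1; c++)
                if ((a & (b | c)) != ((a & b) | (a & c)))
                    ok = 0;
    printf("distributive law %s\n", ok ? "holds" : "fails");
    return 0;
}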







                                                                                           


Sunday 21 October 2012

Arithmetic


2.0 ARITHMETIC FOR COMPUTER

Binary number operations here will focus only on addition, subtraction, multiplication and division. Exploring how a computer operates on binary numbers is more rewarding with a basic understanding of how number calculation differs between humans and computers, so working through the basic types of number operations in the following parts is good practice.


2.1 Binary addition

Binary rule     Sum   Carry
0 + 0 = 0        0      0
0 + 1 = 1        1      0
1 + 0 = 1        1      0
1 + 1 = 10       0      1


How to add Binary numbers
Step 1 :
Align the numbers you wish to add as you would if you were adding decimal numbers.



Step 2 :
Start with the two numbers in the far right column



Step 3 :
Add the numbers following the rules of decimal addition (1+0 = 1, 0+0 = 0) unless both numbers are a 1.



Step 4 :
Add 1 + 1 as "10" if present (it is not "ten" but "one-zero"): write the "0" below and carry the "1" to the next column.



Step 5 :
Start on the next column to the left.





Step 6 :      
Repeat the steps above, but add any carry. Remember that 1+1 = 10 and 1+1+1 = 11. Remember to carry the "1".



TIPS:
·         Don't forget to carry.
·         You can only use the digits 0 and 1. If you find yourself using 2 or any other digit, you did something wrong.
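The column-by-column procedure above, sketched as a C function (the helper name add_binary and the example operands are my own):

/* Adds two binary strings right to left, applying the sum/carry rules. */
#include <stdio.h>
#include <string.h>

/* a and b are strings of '0'/'1'; result must hold max(len)+2 chars */
void add_binary(const char *a, const char *b, char *result) {
    int i = strlen(a) - 1, j = strlen(b) - 1, k = 0, carry = 0;
    char tmp[65];
    while (i >= 0 || j >= 0 || carry) {
        int bit_a = (i >= 0) ? a[i--] - '0' : 0;
        int bit_b = (j >= 0) ? b[j--] - '0' : 0;
        int sum = bit_a + bit_b + carry;   /* 0..3                   */
        tmp[k++] = (sum % 2) + '0';        /* digit written below    */
        carry = sum / 2;                   /* 1 + 1 = 10: carry the 1 */
    }
    for (int m = 0; m < k; m++)            /* reverse into result    */
        result[m] = tmp[k - 1 - m];
    result[k] = '\0';
}

int main(void) {
    char result[65];
    add_binary("1011", "110", result);     /* 11 + 6                 */
    printf("1011 + 110 = %s\n", result);   /* expected 10001 (17)    */
    return 0;
}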








2.2 Binary Subtraction

Here are some examples of binary subtraction. These are computed without regard to word size, hence there is no sense of "overflow" or "underflow". Work the columns right to left, subtracting in each column. If you must subtract a one from a zero, you need to "borrow" from the left, just as in decimal subtraction.
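A small worked example (values chosen purely for illustration):

  1010   (decimal 10)
-  011   (decimal  3)
------
  0111   (decimal  7)

Working right to left: 0 - 1 needs a borrow, giving 10 - 1 = 1; the next column then becomes 0 - 1, which borrows through the neighbouring zero and again gives 1; the remaining columns give 1 and 0, so the result is 0111 (decimal 7).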







2.3  Multiplication

Multiplication in binary is similar to its decimal counterpart. Two numbers A and B can be multiplied by partial products: for each digit in B, the product of that digit with A is calculated and written on a new line, shifted leftward so that its rightmost digit lines up with the digit in B that was used. The sum of all these partial products gives the final result.
Since there are only two digits in binary, there are only two possible outcomes for each partial multiplication:
· If the digit in B is 0, the partial product is also 0.
· If the digit in B is 1, the partial product is equal to A.
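A minimal C sketch of this shift-and-add scheme (the function name and operands are illustrative):

/* Binary multiplication by partial products: for each 1 bit in b,
 * add a copy of a shifted left to that bit's position. */
#include <stdio.h>

unsigned multiply(unsigned a, unsigned b) {
    unsigned product = 0;
    int shift = 0;
    while (b) {
        if (b & 1)                     /* digit of B is 1: partial    */
            product += a << shift;     /* product is A, shifted left  */
        b >>= 1;                       /* next digit of B             */
        shift++;
    }
    return product;
}

int main(void) {
    printf("1011 x 101 = %u (decimal 11 x 5)\n",
           multiply(11, 5));           /* expected 55, i.e. 110111    */
    return 0;
}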