
CS301: Computer Architecture Certification Exam Answers


CS301: Computer Architecture Exam Quiz Answers

  • More than one program in memory
  • More than one memory in the system
  • More than one processor in the system
  • More than two processors in the system
  • The ALU
  • Back to memory
  • The program counters
  • The instruction registers
  • CPU chip
  • Floppy disk
  • Hard disk
  • Memory chip
  • Apple’s iMacs
  • IBM’s Watson
  • Mobile devices
  • Supercomputers
  • Instruction Register
  • Memory Data Register
  • Memory Address Register
  • Program Counter Register
  • 00
  • 01
  • 10
  • 11
  • 00111110
  • 11000001
  • 11000010
  • 11100010
  • 01001010
  • 01001011
  • 01101010
  • 11001111
  • 00010000011001010000000000000101
  • 00010000011001010000000000001010
  • 00100000011001010000000000000101
  • 00100000011001010000000000001010
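
These 32-bit patterns look like MIPS I-type encodings (6-bit opcode, 5-bit rs, 5-bit rt, 16-bit immediate; opcode 001000 is addi, 000100 is beq). As a study aid only, a minimal Python sketch that splits such a word into its fields:

    def decode_itype(bits):
        """Split a 32-bit MIPS I-type instruction string into its fields."""
        assert len(bits) == 32
        opcode = bits[0:6]            # 6-bit opcode
        rs = int(bits[6:11], 2)       # source register number
        rt = int(bits[11:16], 2)      # target register number
        imm = int(bits[16:32], 2)     # 16-bit immediate
        return opcode, rs, rt, imm

    print(decode_itype("00100000011001010000000000000101"))
    # ('001000', 3, 5, 5)  ->  addi $5, $3, 5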
  • ST and LD
  • JR and BEQ
  • ADD and SUB
  • PUSH and POP
  • add
  • jr
  • ld
  • or
      ab
cd    00  01  11  10
00     X   1   1
01     X
11     1
10     1   1   X
  • b’d’ + a’b
  • ab’ + a’d’
  • d’ + ab’
  • ac + a’bd’
  • There are three stages
  • There is a clock line going to each full adder
  • The adder is slower than a carry-lookahead adder
  • Extra gates are needed besides the full adder gates
  • [ab + a’b’] S’ + [a’b + ab’] S
  • [ab + a’b] S’ + [a’b’ + ab’] S
  • [a’b + a’b’] S’ + [ab + ab’] S
  • [ab’ + a’b] S’ + [a’b’ + ab] S
  • Loop a times {
        b = b + b
    }
    answer = b

  • c = 0
    Loop a times {
        c = c + b
    }
    answer = b

  • Assume b > a
    Loop n times {
        b = b - a
        if (b = 0) answer = n
        if (b < 0) answer = n - 1
    }

  • Assume b > a
    Loop n times {
        b = b - a
        if (b = 0) answer = 0
        if (b < 0) answer = b + a
    }
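
The four options above contrast multiplication by repeated addition with division by repeated subtraction. A runnable Python sketch of both patterns (variable names follow the options; an illustration, not the graded answer):

    def multiply(a, b):
        """Multiply a * b by adding b into an accumulator a times."""
        c = 0
        for _ in range(a):
            c = c + b
        return c

    def divide(b, a):
        """Integer-divide b by a (assumes b > a > 0) by repeated subtraction."""
        n = 0
        while b > 0:
            b = b - a
            n = n + 1
        # If the last subtraction overshot below zero, it did not count.
        return n if b == 0 else n - 1

    print(multiply(3, 4))  # 12
    print(divide(10, 3))   # 3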

  • 1 XOR, 1 AND, 2 OR
  • 1 XOR, 2 AND, 1 OR
  • 2 XOR, 2 AND, 1 OR
  • 2 XOR, 1 AND, 2 OR
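
Assuming the gate count refers to a standard full adder built from two half adders: sum = a XOR b XOR cin and carry-out = (a AND b) OR ((a XOR b) AND cin), i.e. 2 XOR, 2 AND, and 1 OR. A quick Python check of that construction against ordinary addition:

    def full_adder(a, b, cin):
        """Two-half-adder full adder: 2 XOR, 2 AND, 1 OR."""
        p = a ^ b                     # XOR #1
        s = p ^ cin                   # XOR #2
        cout = (a & b) | (p & cin)    # 2 AND, 1 OR
        return s, cout

    for a in (0, 1):
        for b in (0, 1):
            for cin in (0, 1):
                total = a + b + cin
                assert full_adder(a, b, cin) == (total % 2, total // 2)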
  • PC
  • PC+4
  • 2*PC
  • 2*PC-1
  • One stage must wait for data from another stage in the pipeline
  • The pipeline is not able to provide any speedup to execution time
  • The next instruction is determined based on the results of the currently-executing instruction
  • Hardware is unable to support the combination of instructions that should execute in the same clock cycle
  • One stage must wait for data from another stage in the pipeline
  • The pipeline is not able to provide any speedup to execution time
  • The next instruction is determined based on the results of the currently-executing instruction
  • Hardware is unable to support the combination of instructions that should execute in the same clock cycle
  • Control hazard
  • Static parallelism
  • Dynamic parallelism
  • Speculative execution
  • It is more expensive than other types of cache organizations
  • Its access time is greater than that of other cache organizations
  • Its cache hit ratio is typically worse than with other organizations
  • It does not allow simultaneous access to the intended data and its tag
  • 0
  • 1
  • 3
  • 5
  • A disk
  • A cache
  • The register files
  • The main memory
  • Disk
  • Cache
  • Page table
  • Virtual Memory
  • Asynchronous
  • External
  • Internal
  • Synchronous
  • There are no redundant check disks
  • The number of redundant check disks is equal to the number of data disks
  • The number of redundant check disks is less than the number of data disks
  • The number of redundant check disks is more than the number of data disks
  • C/C++
  • MPI
  • OpenMP
  • Python
  • There is no improvement in performance as the number of processors increases
  • There is a diminishing improvement in performance as the number of processors increases
  • There is an increasing improvement in performance as the number of processors increases
  • There can be no more than a 5 times improvement in performance as the number of processors increases
  • 1.5 times faster
  • 1.67 times faster
  • 2 times faster
  • 3 times faster
  • Uniform memory access
  • A single physical address space
  • One physical address space per processor
  • Multiple memories shared by multiprocessors
  • SIMD Machines
  • MIMD machines
  • Shared Memory Multiprocessors
  • Distributed Shared Memory Multiprocessors
  • Most programs are too long
  • The use of cache memory for data
  • The use of cache memory for instructions
  • Because of compiler limitations
  • It is a processor that has multiple levels of cache
  • It is a processor that is efficient for all types of computing
  • It is a special purpose processor only useful for graphics processing
  • It is a processor used in all types of applications that involve data parallelism
  • 00101001.11
  • 00110100.11
  • 00110110.10
  • 00111011.01
  • In the stack
  • In the memory
  • In the CPU register
  • After OP code in the instruction
  • F = x + y’z
  • F = xy’ + yz + xz
  • F = xy + y’z + xz
  • F = xy’z + xy’z’ + x’yz + x’yz’
  • AND, OR
  • OR, NOT
  • XOR, OR
  • XOR, AND
  • AND gates and MUXes
  • NOT gates and MUXes
  • OR gates and DEMUXes
  • XNOR gates and DECODERs
  • 2
  • 3
  • 4
  • 5
  • A data hazard
  • A memory fault
  • A control hazard
  • A structural hazard
  • Value prediction
  • Branch prediction
  • Memory unit forwarding
  • Execution unit forwarding
  • Carry lookahead
  • Branch prediction
  • Register renaming
  • Out of order execution
  • “Hit under miss”
  • High associativity
  • Multiported caches
  • Segregated caches
  • Cache, Main Memory, Disk, Register
  • Cache, Main Memory, Register, Disk
  • Cache, Register, Main Memory, Disk
  • Register, Cache, Main Memory, Disk
  • Cache memory
  • Volatile memory
  • Non-cache memory
  • Non-volatile memory
  • 2
  • 4
  • 16
  • 32
  • Threads may use local variables
  • Threads may use private variables
  • Threads may use shared variables
  • Using a semaphore is not effective
  • Increase in speed of processor chips
  • Increase in power density of the chip
  • Increase in video and graphics processing
  • Increase in cost of semiconductor manufacturing
  • Load balancing
  • Grid computing
  • Web search engine
  • Scientific computing
  • A Monte Carlo integration
  • Any highly sequential program
  • A C++ program with lots of for loops
  • A program with fine-grained parallelism
  • Clock frequency
  • Transistors on a chip
  • Processors on a chip
  • Chip power consumption
  • Controlled transfer
  • Conditional transfer
  • Uncontrolled transfer
  • Unconditional transfer
  • 6E
  • 7D
  • 8A
  • B5
  • 1.0 × 10^-9
  • 10.0 × 10^-9
  • 100.00 × 10^-9
  • 1000.00 × 10^-9
  • Commander
  • Compiler
  • Interpreter
  • Simulator
  • add
  • beq
  • jr
  • ld
  • Data memory and Register File take part
  • Instruction memory and data memory take part
  • Instruction memory, ALU, and register take part
  • Instruction memory, Register File, ALU, and data memory take part
  • Cache
  • Register
  • Hard disk
  • Main memory
  • The synchronous bus is better: 20.1 vs. 15.3 MB/s
  • The synchronous bus is better: 30 vs. 18.2 MB/s
  • The asynchronous bus is better: 13.3 vs. 11.1 MB/s
  • The asynchronous bus is better: 20.1 vs. 15.3 MB/s
  • RAID 4 does not use parity
  • RAID 4 uses bit-interleaved parity
  • RAID 4 uses block-interleaved parity
  • RAID 4 uses distributed block-interleaved parity
  • Multiple threads are used in multiple cores
  • Multiple threads are used in multiple processors
  • Multiple threads share a single processor, but do not overlap
  • Multiple threads share a single processor in an overlapping fashion
  • It stays the same
  • It decreases to zero
  • It approaches the execution time of the sequential part of the code
  • It approaches the execution time of the non-sequential part of the code
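
These options describe the limit predicted by Amdahl's law: if a fraction p of a program parallelizes over n processors, time(n) = t × (1 - p) + t × p / n, which is bounded below by the sequential part t × (1 - p). A numeric Python sketch with assumed, illustrative numbers:

    def amdahl_time(t_total, p, n):
        """Execution time under Amdahl's law: sequential part plus parallel part / n."""
        return t_total * (1 - p) + t_total * p / n

    # Assumed example: 100 s total, 90% of it parallelizable
    for n in (1, 10, 100, 10000):
        print(n, amdahl_time(100, 0.9, n))
    # ~100, ~19, ~10.9, ~10.01 -> approaches the 10 s sequential portion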
  • 1 state, 2 inputs, 2 outputs
  • 2 states, 2 inputs, 1 output
  • 3 states, 1 input, 2 outputs
  • 3 states, 2 inputs, 1 output
  • A computer that is used by one person only
  • A computer that runs only one kind of software
  • A computer that is assigned to one and only one task
  • A computer that is meant for application software only
  • DTL
  • PMOS
  • RTL
  • TTL
      ab
cd    00  01  11  10
00     1   X
01     1   1   X   1
11
10     1   1
  • cd’ + bd
  • c’ + ab’
  • c’d + b’d’
  • ad + b’d’
a  b  c  z
0  0  0  0
0  0  1  1
0  1  0  1
0  1  1  1
1  0  0  0
1  0  1  1
1  1  0  1
1  1  1  1


  • a + b
  • b + c
  • ac + b
  • a’b + c
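
Because the truth table specifies z for every input combination, each candidate expression can be checked row by row; b + c (the Boolean OR of b and c) is the one that matches all eight rows. A short Python verification:

    rows = [  # (a, b, c, z) from the truth table above
        (0, 0, 0, 0), (0, 0, 1, 1), (0, 1, 0, 1), (0, 1, 1, 1),
        (1, 0, 0, 0), (1, 0, 1, 1), (1, 1, 0, 1), (1, 1, 1, 1),
    ]
    assert all(z == (b | c) for a, b, c, z in rows)  # z = b + c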
  • Loop a times {
        b = b + b
    }
    answer = b

  • c = 0
    Loop a times {
        c = c + b
    }
    answer = b

  • Assume b > a
    Loop n times {
        b = b - a
        if (b = 0) answer = n
        if (b < 0) answer = n - 1
    }

  • Assume b > a
    Loop n times {
        b = b - a
        if (b = 0) answer = 0
        if (b < 0) answer = b + a
    }

  • The decoding of the instruction
  • The reading of the program counter value
  • The execution of operation using the ALU
  • The fetching of the instruction from the instruction memory
  • Decode the instruction; execute the instruction; transfer the data
  • Decode the instruction; transfer the data; execute the instruction
  • Execute the instruction; decode the instruction; transfer the data
  • Transfer the data; execute the instruction; decode the instruction
  • One stage must wait for data from another stage in the pipeline
  • The pipeline is not able to provide any speedup to execution time
  • The next instruction is determined based on the results of the currently-executing instruction
  • Hardware is unable to support the combination of instructions that should execute in the same clock cycle
  • Caching
  • Pipelining
  • Carry lookahead
  • Branch prediction
  • Pipelining
  • Data hazard
  • Concurrency
  • Instruction level parallelism
  • The cache block number
  • Whether there is a write-through or not
  • Whether the requested word is in the cache or not
  • Whether the cache entry contains a valid address or not
  • A disk
  • A cache
  • The register files
  • The main memory
  • Tape drive; PT
  • PT; victim cache
  • Dcache; Write buffer
  • Dcache; Main memory
  • The synchronous bus is better: 25 vs. 18.2 MB/s
  • The synchronous bus is better: 30 vs. 25.2 MB/s
  • The asynchronous bus is better: 13.3 vs. 11.1 MB/s
  • The asynchronous bus is better: 30 vs. 25.2 MB/s
  • 100.2 MB/s
  • 130.6 MB/s
  • 150.8 MB/s
  • 170.0 MB/s
  • Asynchronous
  • External
  • Internal
  • Synchronous
  • There are no redundant check disks
  • The number of redundant check disks is equal to the number of data disks
  • The number of redundant check disks is less than the number of data disks
  • The number of redundant check disks is more than the number of data disks
  • 1.3333
  • 2
  • 2.6666
  • 8
  • Weak scaling
  • Timing issues
  • Strong scaling
  • Communication overhead
  • DTL RTL CMOS TTL
  • DTL RTL TTL CMOS
  • RTL DTL TTL CMOS
  • RTL TTL DTL CMOS
  • 1
  • n
  • log n
  • 2n
  • Decoding the instruction
  • Reading the program counter value
  • Executing the operation using the ALU
  • Fetching the instruction from the instruction memory
  • The program counters
  • The output of the ALU
  • Data from data memory
  • Decoding instructions from instruction memory
  • The number of pipe stages
  • 5 times that of a non-pipelined machine
  • The ratio of the fetch cycle period to the clock period
  • The ratio of time between instructions and clock cycle time
  • Value prediction
  • Branch prediction
  • Memory unit forwarding
  • Execution unit forwarding
  • 131.0 MB/s
  • 229.4 MB/s
  • 327.9 MB/s
  • 350.1 MB/s
  • Ranking a linked list
  • A matrix multiplication
  • Any highly sequential program
  • A program with fine-grained parallelism

Introduction to Computer Architecture

Computer architecture is a fascinating and crucial field in computer science and engineering that deals with the design and organization of computer systems. It encompasses various aspects, including the following:

  1. Instruction Set Architecture (ISA): This defines the set of instructions that a computer’s CPU can execute. It includes the CPU’s registers, data types, instructions, and addressing modes. Examples include x86, ARM, and MIPS.
  2. Microarchitecture: This refers to the implementation of the ISA. It involves the design of the CPU’s internal structure, including the arithmetic logic unit (ALU), registers, cache, and pipelines. It’s about how the ISA is realized in hardware.
  3. Memory Hierarchy: This involves the design of various types of memory used in a computer system, such as registers, cache (L1, L2, L3), RAM (main memory), and storage (hard drives, SSDs). The goal is to balance speed, cost, and capacity (see the sketch after this list).
  4. Processor Design: This includes various techniques to improve CPU performance, such as pipelining, superscalar execution, out-of-order execution, and branch prediction. It also involves the design of multi-core processors and the coordination between multiple cores.
  5. Input/Output Systems: This covers how a computer interacts with external devices like keyboards, mice, printers, and network interfaces. It involves understanding bus architectures, I/O ports, and data transfer protocols.
  6. System Design: This encompasses the overall organization of a computer system, including the integration of the CPU, memory, and I/O components. It also includes considerations for system performance, reliability, and scalability.
  7. Parallel and Distributed Computing: This deals with architectures designed to execute multiple processes simultaneously, either within a single machine (multi-core processors) or across a network of machines (clusters, grids, or cloud computing).
  8. Performance Metrics: Understanding and improving performance involves metrics like clock speed, instructions per cycle (IPC), and throughput. Techniques such as benchmarking and profiling are used to evaluate and optimize system performance (a worked example follows this list).
  9. Power and Thermal Management: As computers become more powerful, managing power consumption and heat generation becomes crucial. Techniques include dynamic voltage and frequency scaling (DVFS) and advanced cooling solutions.
  10. Security: Modern computer architecture must consider security at various levels, including secure boot mechanisms, trusted execution environments (TEEs), and protection against side-channel attacks.
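
To make points 3 and 8 concrete, here is a small Python sketch with assumed, illustrative numbers: the average memory access time (AMAT) of a single cache level, and the classic CPU-time equation built from instruction count, CPI, and clock rate.

    def amat(hit_time, miss_rate, miss_penalty):
        """Average memory access time: hit time + miss rate * miss penalty."""
        return hit_time + miss_rate * miss_penalty

    def cpu_time(instruction_count, cpi, clock_hz):
        """CPU time = instruction count x cycles per instruction x clock period."""
        return instruction_count * cpi / clock_hz

    # Assumed numbers for illustration only
    print(amat(1, 0.05, 100))       # 6.0 cycles: 1-cycle hit, 5% miss rate, 100-cycle penalty
    print(cpu_time(1e9, 1.5, 2e9))  # 0.75 s: 1e9 instructions, CPI 1.5, 2 GHz clock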

Computer architects strive to design systems that balance performance, power efficiency, cost, and other factors to meet the requirements of specific applications and use cases. They often use simulation, modeling, and performance analysis techniques to evaluate design choices and optimize system performance.
