@xtpor
Last active December 31, 2017 03:53
266 Revision Checklist

Note: The page numbers are for the PDF version.

Data and Number Systems

  • Positive Binary Representation (P.19)
  • Binary Coded Decimal (P.20)
  • Sign-Magnitude Binary Representation (P.20)
  • 1's Complement Representation (P.21)
  • 2's Complement Representation (P.23)
  • IEEE 754 Floating Point Representation (P.38-39)
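The signed representations above can be sketched in a few lines of Python. This is a minimal illustration of 2's complement encoding and decoding (the bit width of 8 is an arbitrary choice for the example, not from the notes):

```python
def to_twos_complement(value, bits=8):
    """Encode a signed integer as an unsigned 2's-complement bit pattern."""
    assert -(1 << (bits - 1)) <= value < (1 << (bits - 1))
    return value & ((1 << bits) - 1)

def from_twos_complement(pattern, bits=8):
    """Decode an unsigned bit pattern back to a signed integer."""
    if pattern & (1 << (bits - 1)):      # sign bit set -> negative
        return pattern - (1 << bits)
    return pattern

print(format(to_twos_complement(-5), "08b"))  # 11111011
print(from_twos_complement(0b11111011))       # -5
```

Note the asymmetric range this gives: an 8-bit word holds -128..127, since there is only one representation of zero (unlike sign-magnitude or 1's complement).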

Von Neumann Architecture and Bottleneck

  • Von Neumann Architecture (P.46,54)
  • Benefits and Hazards of Von Neumann Architecture (P.71)
  • Von Neumann bottleneck (P.72)

Little Man Computer (LMC)

  • Instruction Set (P.69)

Fetch And Execution Cycle

  • Fetch and execution cycle (P.74)
  • Register transfer language (RTL) (P.75)
  • LMC performance in RTL (P.79)
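The fetch-and-execute cycle can be sketched as a simulator loop in Python. This is a hedged illustration covering a subset of the standard LMC opcodes (HLT, ADD, SUB, STA, LDA, BRA, BRZ, INP, OUT); the fetch steps are marked with their RTL-style meaning in comments, and the sample program is hypothetical:

```python
def run_lmc(mem, inputs=()):
    """Minimal LMC fetch-execute loop. mem is a 100-cell list of
    3-digit words; the hundreds digit is the opcode, the rest the address."""
    pc, acc, inp, out = 0, 0, list(inputs), []
    while True:
        ir = mem[pc]                              # fetch: IR <- mem[PC]
        pc += 1                                   # PC <- PC + 1
        op, addr = divmod(ir, 100)                # decode
        if op == 0:                               # HLT
            break
        elif op == 1: acc += mem[addr]            # ADD: ACC <- ACC + mem[addr]
        elif op == 2: acc -= mem[addr]            # SUB: ACC <- ACC - mem[addr]
        elif op == 3: mem[addr] = acc             # STA: mem[addr] <- ACC
        elif op == 5: acc = mem[addr]             # LDA: ACC <- mem[addr]
        elif op == 6: pc = addr                   # BRA: PC <- addr
        elif op == 7: pc = addr if acc == 0 else pc   # BRZ
        elif op == 9 and addr == 1: acc = inp.pop(0)  # INP
        elif op == 9 and addr == 2: out.append(acc)   # OUT
    return out

# Example program: input two numbers, output their sum.
prog = [901, 399, 901, 199, 902, 0] + [0] * 94
print(run_lmc(prog, inputs=[20, 22]))  # [42]
```

Tracing the loop instruction by instruction is a good way to rehearse the RTL notation: each `elif` branch corresponds to one RTL transfer.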

Buses, Memory and IO

  • Bus
    • Types of bus (P.84)
      • Data, Address, Control, Power
    • Throughput (P.84)
    • Connectivity (P.86)
      • Point-to-point, Multi-point
    • Serial vs. Parallel (P.86)
  • Memory
    • Classes and Hierarchy (P.90)
    • SRAM vs. DRAM (P.95)
    • Other types of RAM (P.96-98)
      • SRAM, DRAM, EDO RAM (overlaps the next request with the current read), VRAM (simultaneous read and write ports), DDR RAM (two transfers per clock cycle), EEPROM (basis of flash memory)
  • IO
    • Synchronous and Asynchronous (P.101)
    • Port-mapped IO vs. Memory-mapped IO (P.103)
    • Interrupt (P.104)
      • Interrupt vector
      • Interrupt Service Routine (ISR)
    • Hard Disk Read/Write Performance (P.106)
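Hard disk read/write performance is usually estimated as seek time + average rotational latency (half a revolution) + transfer time. A small sketch, with hypothetical drive parameters not taken from the notes:

```python
def disk_access_time_ms(seek_ms, rpm, transfer_mb_s, request_kb):
    """Average time to service one request:
    seek + half-revolution rotational latency + transfer time."""
    rotational_ms = 0.5 * (60_000 / rpm)                  # half a revolution, in ms
    transfer_ms = (request_kb / 1024) / transfer_mb_s * 1000
    return seek_ms + rotational_ms + transfer_ms

# Hypothetical drive: 9 ms average seek, 7200 rpm, 100 MB/s, 4 KB request.
print(round(disk_access_time_ms(9, 7200, 100, 4), 3))  # 13.206
```

Note that for small requests the mechanical terms (seek and rotation) dominate; the transfer term only matters for large sequential reads.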

Methods of Improving Performance of Computers

  • Increasing clock rate (P.110)
    • Limitations: physical operating limits, heat generation, reduced lifespan
  • Adding CPU cache
    • CPU cache (P.113)
    • Hit Rate: the percentage of requests that can be satisfied by the cache (P.113)
    • Locality of Reference
    • Cache coherency
    • Limitation of a large cache: higher latency; solution: multi-level caches
  • Adding general purpose registers (P.114)
  • Parallel Execution and Adding another System Bus (P.118)
  • Direct Memory Access (P.121)
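The effect of the cache hit rate can be quantified as an average memory access time: hits cost the cache latency; misses cost the cache check plus a main-memory access. A sketch with hypothetical latencies (the 95% / 2 ns / 60 ns figures are illustrative, not from the notes):

```python
def avg_access_time(hit_rate, cache_ns, mem_ns):
    """Average access time with a single-level cache: a hit costs cache_ns,
    a miss costs cache_ns (the failed lookup) plus mem_ns."""
    return hit_rate * cache_ns + (1 - hit_rate) * (cache_ns + mem_ns)

# Hypothetical: 95% hit rate, 2 ns cache, 60 ns main memory.
print(avg_access_time(0.95, 2, 60))  # 5.0 ns on average
```

This also shows why locality of reference matters: the formula is dominated by the miss term, so even a few percent of extra hits cuts the average time sharply.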

Instruction Set Design and E-LMC

  • Implicit operand vs. explicit operand (P.127)
  • E-LMC Instruction Set (P.128-130)
  • E-LMC Addressing mode (P.135)
  • E-LMC RTL (P.138-140)
  • RISC vs CISC (P.143-144)

Instruction Pipelines, Scalar and Superscalar Processing

  • Performance Metrics
    • Millions of instructions per second (MIPS) (P.145)
    • Complex and Simple Instructions (P.146)
    • Benchmarking (P.147)
  • Instruction Pipelines (P.148-150)
  • Performance of scalar and superscalar processing (P.151-152)
    • Scalar: can execute at most 1 instruction per clock cycle
    • Superscalar: can execute multiple instructions per clock cycle due to having multiple execution units
  • Pipeline hazard and its solution (P.153-156)
    • Data hazard: an instruction depends on the result of a previous instruction still in the pipeline
      • Solution: insert stall state
    • Control hazard: execution of branch instructions (BR/BRP/BRZ)
      • Solution: additional pipeline, speculative execution, insert stall state
    • Structural hazard: two instructions need the same hardware resource in the same cycle
      • Solution: instruction reordering, insert stall state
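The pipeline and hazard points above can be tied together with a cycle count: on an ideal k-stage pipeline, the first instruction takes k cycles and each later one completes one cycle after the previous, plus any stall cycles inserted for hazards. A sketch (the 100-instruction / 5-stage / 20-stall numbers are illustrative, not from the notes):

```python
def pipeline_cycles(n_instructions, stages, stalls=0):
    """Cycles to run n instructions on an ideal k-stage scalar pipeline:
    k cycles to fill, then one completion per cycle, plus stall cycles."""
    return stages + (n_instructions - 1) + stalls

n, k = 100, 5
unpipelined = n * k                             # 500 cycles, one instruction at a time
pipelined = pipeline_cycles(n, k)               # 104 cycles
with_stalls = pipeline_cycles(n, k, stalls=20)  # 124 cycles with hazard stalls
print(unpipelined / pipelined)                  # speedup just under 5x
```

The speedup approaches the stage count k only for long hazard-free runs, which is exactly why the data, control, and structural hazards above (and their stall cycles) limit real pipeline performance.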