CITS2002 Systems Programming – Lecture 1 Notes

Unit Overview

  • Topics to be covered in CITS2002 Systems Programming (Lecture 1):
    • An introduction to the ISO-C programming language
    • The structure of a C program, basic datatypes and variables, compiling and linking
    • Focus on the C11 language standard
    • An introduction to Operating Systems (history, role, relationship to languages and programs)
    • An overview of computer hardware components (CPU, registers, memory hierarchy, I/O, storage)
    • C programs in greater detail: arrays, strings, user-defined types/structures, data representation, functions, parameter passing, return values
    • Executing and controlling processes: creating/terminating processes, runtime environment, command-line arguments, OS services from C
    • Managing memory: allocating physical memory, sharing memory between processes, memory management in C programs
    • Files and file I/O: file systems, allocation methods, file/directory operations and attributes, raw vs formatted I/O, buffered vs unbuffered I/O
  • By the end of the unit you’ll have covered all of these topics, though not strictly in this order.
  • Unit schedule reference: CITS2002 Systems Programming, Lecture 1, p2, 22nd July 2024.

Formal Standards in Systems Programming

  • Formal standards define data representations, file formats, programming languages, networking protocols, web interfaces, and cryptographic/authentication methods.
  • Formal standards in computing are often very large (e.g., the C11 standard is 660 pages).
  • You’re not expected to understand standards in depth for exams, but skimming helps appreciate their role in computing.
  • Standards discussed in this unit:
    • C11 — the ISO/IEC 9899:2011 programming language standard; standardizes features supported by common compilers (e.g., gcc, clang).
    • POSIX — Portable Operating System Interface; a family of IEEE Computer Society standards for OS interfaces, command shells and utilities.
  • Rationale for focusing on C11: C11 is widely supported in typical environments; newer C17 (ISO/IEC 9899:2018) is not yet as broadly adopted.
  • Notable historical note: the first POSIX-certified system was Microsoft Windows-NT v3.5 in 1999.
  • References: CITS2002, Lecture 1, p3, 22nd July 2024.

Why Teach C?

  • History of C:
    • Emerged in the early 1970s as a language to rewrite Unix, originally written in assembly.
    • Ken Thompson derived B from BCPL; Dennis Ritchie then developed C from B; the Unix kernel was about 85% C by the end of 1973, enabling portability.
    • Unix ported to Interdata 8/32 in 1976; since then Unix/Linux/macOS/Windows kernels and libraries are largely C or C-derived.
  • C’s status:
    • C is widely used for systems-level, engineering, and scientific programming: OS kernels, networking, libraries, device drivers, embedded systems, etc.
    • Other languages (Go, Nim, Rust, Swift, Zig) influence C and address some of its shortcomings.
    • Rust joined Linux kernel in December 2022, illustrating continued evolution in systems programming.
  • Popularity vs. relevance: popularity is not a quality measure; focus on relevance and appropriate use.
  • Surveys and references cited (2023–2024): Tiobe index (popularity), Stack Overflow Developer Survey, JetBrains State of Developer Ecosystem, HackerRank Developer Skills Report.
  • Key takeaway: C remains foundational for systems programming due to history, influence, standardization, portability, and performance.
  • Quote and perspective: C and C++ are foundational given chance historical factors and ongoing evolution; not the sole measure of language quality. — Anonymous, as cited in Lecture 1.

C: What It Is and How It Relates to Systems Programming

  • Characterization of C:
    • General-purpose language, excellent for systems programming, and often described as a well-designed, high-level representation of assembly language (a tongue-in-cheek view).
    • Procedural language, not inherently object-oriented (unlike Java, Python, Objective-C, C++/C#).
    • Supports: strongly typed variables, base datatypes, enums, user-defined types, aggregates (structures), standard control flow, recursion, and modularization.
    • Notable features: separate compilation, conditional compilation, bitwise operators, pointer arithmetic, and language-independent I/O.
  • Important caveat: C and C++ are different languages; promoters of “C/C++” as a single language often misrepresent their differences.
  • Design philosophy: a language isn’t valuable unless it informs and shapes the way you think about programming.
  • Educational takeaway: C is well-suited for teaching how software interacts with hardware (OS interfaces, memory management, and performance considerations).
  • Historical context: C began in 1972 for Unix re-implementation; K&R C (the original book) described the early form; ANSI-C and subsequent standards refined and extended the language.

C Standardization: A Timeline

  • K&R C (early C): described in The C Programming Language by Kernighan & Ritchie; informal baseline for C.
  • ANSI-C (K&R-2): ANSI X3J11 standardization efforts in the late 1980s produced ANSI-C (C89/C90); introduced new base datatypes, revised function prototypes, preprocessor enhancements, constants and enums; 272 pages.
  • C99 (ANSI/ISO-C99): completed in 1999; added Boolean and complex datatypes, single-line comments, variable-length arrays; removed several unsafe or ill-defined constructs; 753 pages.
  • C11 (ISO/IEC 9899:2011): completed in December 2011; focus of this unit; optional C11 features exist alongside C99 features; many environments still run C99, but upgrading compilers is encouraged.
  • C18 (ISO/IEC 9899:2018): a revision introducing mainly technical corrections and defect fixes to C11; no new language features.
  • In this unit, emphasis is on C11 (ISO/IEC 9899:2011) due to broad support in instructional environments and industry usage.
  • Reference: CITS2002, Lecture 1, p13–p14, 22nd July 2024.

What C Standardization Provides (and Does Not Provide)

  • What standards provide:
    • Formal specification of C program form and interpretation
    • Representation of C programs (syntax and constraints), semantic rules, and data representations
    • Input/output data representations and the I/O interface
    • Output data representations and limitations of conforming implementations
    • Restrictions and limits that conforming implementations must honor
  • What standards do not provide:
    • Mechanisms for transforming C programs for data-processing systems
    • Mechanisms for invoking C programs within systems
    • Mechanisms for transforming input data to programs or outputs after processing
    • Any minimal or absolute requirements of a data-processing system beyond the standard
  • Emphasis: standards underpin portability, robustness, and maintainability of systems programming.
  • Reference: CITS2002, Lecture 1, p15, 22nd July 2024.

What’s (Deliberately) Missing from the C Language?

  • C omits certain facilities commonly provided by modern languages (graphics, networking, cryptography, multimedia).
  • Rationale: C rigorously defines what it does provide and how it interacts with external libraries; omissions encourage third-party libraries to supply domain-specific capabilities.
  • Example libraries and domains mentioned:
    • Operating system services (files, directories, processes, IPC)
    • OS-specific libraries (glibc, System32, Cocoa)
    • Data structures and algorithms libraries (GDSL, etc.)
    • GUI/graphics (OpenGL, GTK, Qt, wxWidgets, UIKit, Win32, Tcl/Tk)
    • Image processing (libjpeg, libpng, GIF handling)
    • Networking (Berkeley sockets, AT&T's TLI)
    • Security/cryptography (OpenSSL, libmcrypt, etc.)
    • Scientific computing libraries (NAG, BLAS, GNU Scientific Library)
    • Parallel and GPU programming (OpenMP, CUDA, OpenCL, OpenLinda)
  • This section notes how C’s minimal core is intentionally extended via libraries for practical application domains.
  • Reference: CITS2002, Lecture 1, p16, 22nd July 2024.

A Summary View: C, Standards, and Relevance

  • The standardization of C (K&R, ANSI, C99, C11) is foundational for portable, robust system software.
  • This unit consistently references C11 and POSIX due to their broad adoption and relevance to systems programming.
  • POSIX provides a compatible interface to OS services across UNIX-like systems and Windows, highlighting portability concerns across platforms.
  • Overall message: C remains a deliberate, practical vehicle for teaching systems programming, OS interfaces, and hardware interaction, anchored by standardized behavior and portable libraries.
  • References: CITS2002, Lecture 1, p17, 22nd July 2024.

An Introduction to Operating Systems (OS) – Core Concepts

  • What is an operating system (OS)?
    • System software that provides a convenient, efficient environment for executing user programs; arguably the largest and most complex program you’ll run.
  • Why do we need an OS?
    • User perspective: UI, command interpreter, directory structure; execution of application programs (word processors, browsers, etc.).
    • Programming environment perspective: enhances the bare machine with utilities (compilers, editors, filters), provides high-level I/O, structures information into files, improves memory access, etc.
    • Efficiency perspective: resource scheduling, data storage/retrieval, and sharing hardware peripherals; maximize throughput and fairness.
    • Economic perspective: enables concurrent use of resources and multi-user environments.
  • OS goals: make the system convenient to use, schedule resources efficiently and fairly, and support hardware/software not yet developed.
  • OS as resource manager: an OS is a program that controls hardware and software resources; core services may be requested by user processes.
  • OS must be extensible: modular design to accommodate new hardware, devices, applications, patches, and services.
  • OS services (traditional view): CPU scheduling, memory management, swapping, I/O device support, file system management, utility programs (accounting, policy enforcement), command interface, system calls (constrained access to OS internals), protection (isolation and safe sharing), and inter-process/network communication.
  • References: CITS2002, Lecture 3, p1–p6, 29th July 2024.

An Overview of Computer Hardware

  • Core components (the basic computer model):
    • CPU (Central Processing Unit): performs arithmetic/logic and directs I/O; may be multiple processors.
    • Main memory (RAM): stores instructions and data for active programs.
    • Secondary storage and peripheral devices: long-term storage (disks, tapes) and I/O controllers; persistence.
    • System bus: connects CPU, memory, and I/O; bus arbitration decides who accesses the bus.
  • Textbook references: Patterson & Hennessy, Gil de Lamadrid, Bryant & O’Hallaron; Lecture 6, p1.
  • Role of OS in managing data flow between CPU and I/O, maximizing throughput and keeping the processor busy via interleaving computation and I/O.

The Memory Hierarchy

  • Memory hierarchy concept: faster storage is smaller and more expensive; slower storage is larger and cheaper.
  • Typical hierarchy (fastest and smallest to slowest and largest):
    • Registers: 0.5–3 ns; 1–4 KB; on-chip, hardware-managed; accessed by the compiler/runtime.
    • Level-1 (L1) Cache: 0.4–4 ns; 8 KB–256 KB; on-chip, hardware.
    • Level-2 (L2) Cache: 4–8 ns; 256 KB–8 MB; on-chip or close-by.
    • Level-3 (L3) Cache: 6–16 ns; 4 MB–64 MB; typically shared.
    • Main memory (RAM): 10–60 ns; 64 MB–128 GB; DRAM; managed by the OS.
    • Magnetic hard disk: 3–10 ms (3 × 10^6 to 10 × 10^6 ns); 128 MB–24,000 GB.
    • Solid-state disk (SSD): 0.5–1 ms; 16 GB–18,000 GB; non-volatile.
  • Realistic example: a modern laptop might have small CPU caches, several GB of RAM, and SSD storage with TB-scale capacities.
  • Key idea: memory hierarchy balances speed, cost, and capacity to keep the CPU fed with data.
  • Source: CITS2002, Lecture 6, p5–p6, 7th August 2024.

Units of Data and Data Interpretation

  • Data units: bits, bytes, and words
    • Bit: the basic binary digit (0 or 1).
    • Byte: typically 8 bits; the smallest normally addressable unit in RAM today, though historical variations existed (7-, 8-, 9-, 12-, 16-bit bytes).
    • Word: processor’s natural data size (commonly 32 or 64 bits); sometimes confused with the address space size.
    • Alignment: some architectures require data to start at addresses that are multiples of the data size (e.g., 2-byte alignment for 16-bit, 4-byte for 32-bit); unaligned access can be slower or fault.
  • Interpretation of data depends on context:
    • A sequence of bits could mean text, integers, floating-point numbers, memory addresses, or machine instructions.
    • Incorrect interpretation leads to critical errors (bus errors, illegal memory access, etc.).
  • Examples of interpretation (illustrative):
    • The first bytes of /bin/ls could be interpreted as 32-bit integers, 4 ASCII characters, two 16-bit integers, or a 32-bit floating-point value, depending on context.
    • Demonstrations of od (octal dump) and od -c (ASCII dump) show how the same bytes yield different representations.
  • Reference: CITS2002, Lecture 6, p9–p10, 7th August 2024.

Introducing Processes in Operating Systems

  • What is a process? A program in execution, including its executable image, its data, and its execution context.
  • OS perspective: a process is the executable instance and its associated data and context; the OS must manage many processes efficiently, support inter-process communication, and ensure responsive behavior.
  • Process concepts (as discussed in this unit):
    • A process may be in different states and transitions between states as it runs, waits for I/O, or terminates.
  • Initial unit focus: relationship between processes, memory, files, and OS services.
  • Reference: CITS2002, Lecture 8, p1, 14th August 2024.

Process States and Transitions

  • Two-state view (simplified):
    • Running: the process is currently executing on the CPU.
    • Ready: the process is prepared to run and awaiting CPU time.
    • Question posed: can a process determine its own state? (Generally not; the OS tracks states.)
  • The five-state model (more complete): New, Ready, Running, Blocked, Exit
    • New: newly created processes not yet admitted to Ready.
    • Ready: processes waiting to be assigned CPU time.
    • Running: currently executing on the CPU.
    • Blocked: waiting for I/O or some event; not ready to run.
    • Exit: terminated processes; resources may be released or accounted for.
  • State transitions (typical lifecycle):
    • Null → New: a new process is requested.
    • New → Ready: resources allocated and process admitted to Ready.
    • Ready → Running: process is given CPU time (time quantum).
    • Running → Ready: time quantum expires; process preempted.
    • Running → Blocked: process requests slow I/O.
    • Blocked → Ready: I/O completes; event triggers readiness.
    • Running/Ready/Blocked → Exit: external termination or process exit.
  • The dispatcher role: the OS moves processes between states according to a scheduling policy (fairness, interactivity, priority).
  • Note: modern OSs also support an idle process that is always ready to run.
  • Reference: CITS2002, Lecture 8, p2–p3, p9–p12, 14th August 2024.

Process Creation and Termination

  • Creating a new process requires allocating resources for both the process and the OS's management structures.
  • Memory requirements for a process include: instructions, initial data (read-only), execution stack, heap, and potentially shared resources.
  • Sources of new processes vary by OS: batch queues, user logins, existing processes requesting new processes, or the OS creating a process for services (e.g., networking, printing).
  • Creation methods (illustrative across systems):
    • fork() in Linux/macOS (duplicate an existing process).
    • spawn() on older DEC VMS systems.
    • CreateProcess() in Windows.
  • Process termination (why it ends):
    • Normal termination, execution time limit exceeded, requested resources unavailable, arithmetic errors (e.g., division by zero), memory access violations, invalid instructions, OS/parent requests, or parent process termination.
  • The OS defines a default action; termination can be requested or occur due to various reasons; the process may be running or ready at termination time.
  • Reference: CITS2002, Lecture 8, p5, p6, 14th August 2024.

Scheduling, Timers, and Time Quanta

  • Why timer interrupts? To achieve fairness and high resource utilization (CPU).
  • A hardware timer triggers an interrupt at a fixed interval (time quantum, e.g., 10 ms).
  • On a timer interrupt: the interrupt handler updates the running process’s time and may move it to the Ready queue if its time quantum expired.
  • Time quantum: the maximum CPU time a process may run before context switch.
  • Reference: CITS2002, Lecture 8, p7, 14th August 2024.

Blocking, I/O, and Event-Driven Scheduling

  • Blocking occurs when a process makes a slow I/O request (disk, keyboard, network, timer, etc.).
  • While waiting for I/O, the process moves out of Running; it is not Ready until the I/O completes.
  • Blocking and I/O mechanics:
    • A process requests I/O from a device.
    • The OS moves the process from Running to Blocked and arranges to receive an interrupt when I/O completes.
    • On completion, the process is moved from Blocked to Ready.
  • A common special case: sleep for a period; treated as blocking until a timer interrupt.
  • Reference: CITS2002, Lecture 8, p8, 14th August 2024.

Suspension, Swapping, and Memory Management Under Pressure

  • When memory is insufficient, the OS may suspend a process (swap memory to disk) to reclaim RAM.
  • Suspend state: processes moved out of main memory to free memory; they can later be resumed or admitted from New to Ready when memory becomes available.
  • Swapping strategy: if no Ready processes can run and memory is needed, the OS may reclaim memory from Blocked processes first, then from Ready processes.
  • This reduces memory pressure and allows more processes to be active, at the cost of potential I/O and disk access overhead.
  • Reference: CITS2002, Lecture 8, p13–p14, 14th August 2024.

A Whirlwind History of Operating Systems (Overview)

  • Purpose: understand how OS design evolved with hardware advances.
  • Timeline highlights:
    • 1945–1955: Vacuum tubes and plugboards; early computers (e.g., ideas about interactive vs batch processing).
    • 1955–1965: Transistors and batch systems; job control languages (JCL); batch processing to increase utilization.
    • Example: IBM systems used batch processing; programming was done in FORTRAN and assembly.
    • 1965–1980: Integrated circuits and multiprogramming; System/360 family; separate I/O processors; CTSS (timesharing) introduced; multiprogramming, memory protection, scheduling, and inter-process communication concepts.
    • Late 1960s–1970s: Timesharing scales up; PDP-6 and other timesharing systems; early memory protection introduced.
    • 1980s–1990s: Personal computers and networking; workstation era; fast networks; network security and encryption capabilities develop; GUI interfaces become mainstream.
    • 2000s and beyond: Speeds, mobility, and ubiquity; trends toward performance per watt and distributed/virtualized systems; the web and cloud ecosystems.
  • Notable quotes and reflections appear in the slides (e.g., Bill Gates’ 1981 quip about 640K of RAM).
  • Reference: CITS2002, Lecture 3, p7–p16, 29th July 2024.

An Overview of Computer Hardware (Structure and Roles)

  • Core components and their roles:
    • CPU: performs arithmetic/logic operations; directs I/O; may be multi-processor.
    • Main memory: stores instructions and data for active programs.
    • Secondary storage and peripherals: long-term storage; supports I/O devices.
    • System bus: interconnects CPU, memory, and I/O; arbitration decides bus access.
  • Recommended hardware texts for deeper study: Patterson & Hennessy; Gil de Lamadrid; Bryant & O’Hallaron’s CS:APP (as a starting point).
  • OS role in data flow: manage data movement between CPU and I/O devices; aim for maximum throughput; interleave computation with I/O; ensure memory/IO scheduling aligns with system goals.
  • Reference: CITS2002, Lecture 6, p1–p2, 7th August 2024.

The Basics of Data, Addressing, and Context

  • Data representation and addressing:
    • The CPU fetches data from uniquely addressed memory locations; there are separate registers for memory addressing (e.g., Memory Address Register, MAR) and I/O addressing (I/O Address Register).
    • Data fetched may be an instruction or data; interpretation depends on context.
  • The role of the OS in data flow is to maximize throughput while managing a diverse set of device types and interfaces.
  • Contextual interpretation of bytes can yield multiple valid meanings; correctness depends on the program’s expectation.
  • Reference: CITS2002, Lecture 6, p2, 7th August 2024.

The Role of Processor Registers and Memory (Foundations for Performance)

  • Processor registers: high-speed storage locations integral to instruction execution; access times are in nanoseconds.
  • Register types:
    • User-accessible registers: accessible to programs; used for data and addresses.
    • Control and status registers: hold the processor’s internal state (e.g., IR, PC, PSW).
  • Register usage: data must be loaded into registers before arithmetic operations; results are stored back to RAM. The compiler typically translates high-level operations into register operations.
  • Common register types and roles:
    • Data registers, address registers (MAR, MBR), index registers, stack pointer (SP).
    • Control/status: instruction register (IR), program counter (PC), processor status word (PSW).
  • The PSW stores flags such as interrupt enable, privilege mode, and condition bits (overflow, carry, divide-by-zero, comparison outcomes).
  • Reference: CITS2002, Lecture 6, p3–p4, 7th August 2024.

The Memory Hierarchy in Depth

  • Memory hierarchy trade-off: faster memory is smaller and more expensive; slower memory is larger and cheaper.
  • A practical hierarchy includes registers, caches (L1/L2/L3), RAM, disk storage, and SSDs.
  • Numerical ranges and speeds (illustrative):
    • Registers: 0.5–3 ns; 1–4 KB
    • L1 cache: 0.4–4 ns; 8 KB–256 KB
    • L2 cache: 4–8 ns; 256 KB–8 MB
    • L3 cache: 6–16 ns; 4 MB–64 MB
    • Main memory (RAM): 10–60 ns; 64 MB–128 GB
    • Magnetic disks: 3–10 ms (3 × 10^6 to 10 × 10^6 ns); 128 MB–24,000 GB
    • SSDs: 0.5–1 ms (0.5 × 10^6 to 1 × 10^6 ns); 16 GB–18,000 GB
  • Concrete example: a contemporary computer might include small caches, several GB of RAM, and a large-capacity SSD for long-term storage.
  • Source: CITS2002, Lecture 6, p5–p6, 7th August 2024.

About Data: Context, Encoding, and Interpretation (Practical Examples)

  • Data interpretation depends on context.
  • A 32-bit pattern can mean:
    • 4 ASCII characters
    • A 32-bit integer
    • 2 × 16-bit integers
    • 1 floating-point value
    • A memory address
    • An instruction to be executed
  • The same sequence can be interpreted differently by different programs; this is why context is critical.
  • The presentation includes example outputs using od (octal dump) and od -c (ASCII dump) for /bin/ls illustrating different interpretations of the same bytes.
  • The risk: incorrect interpretation can cause processor faults (bus errors, misaligned accesses).
  • Reference: CITS2002, Lecture 6, p9–p10, 7th August 2024.

The Introduction to OS: Processes, Scheduling, and Service Roles

  • The fundamental activity of an OS is creating, managing, and terminating processes.
  • What is a process? A running instance of a program, with program state and associated resources.
  • OS perspective emphasizes a process as: executable instance, data space, external storage, and runtime context.
  • Goals: enable efficient multi-processing, support IPC, maintain acceptable response times.
  • Key idea: OS is a dispatcher and resource manager that schedules CPU time and coordinates access to memory, I/O, and system services.
  • Reference: CITS2002, Lecture 8, p1, 14th August 2024.

Traditional OS Services: Core Components

  • Traditional OS services include:
    • CPU scheduling: distributing CPU time among processes.
    • Memory management: sharing physical memory among processes.
    • Swapping: moving processes between main memory and disk to appear larger.
    • I/O device support: optimally supporting devices.
    • File system management: organizing mass storage into files and directories.
    • Utility programs: accounting, resource access controls, file-system manipulation.
    • Command interface: textual or graphical to interact with OS features.
    • System calls: constrained access to OS internals by user programs.
    • Protection: isolation of processes and data; controlled sharing.
    • Inter-process communication and networking: communication within machine and across networks.
  • Reference: CITS2002, Lecture 3, p5–p6, 29th July 2024.

A Brief History of Operating Systems (Whirlwind History)

  • Early days: one-user-at-a-time systems with minimal protection; memory clearing could occur between jobs.
  • Growth of accountability: multi-tasking and process monitoring became crucial as resource sharing increased.
  • Core security goals include: protecting the OS from programs, protecting programs from each other, and constraining data access to the correct program(s).
  • Security considerations were initially more about reliability and correctness than malicious attacks.
  • Reference: CITS2002, Lecture 3, p13, 29th July 2024.

1945–1955: Vacuum Tubes and Plugboards; 1955–1965: Transistors and Batch Systems

  • Early groups built the first electronic computers: Bletchley Park, Harvard, Princeton, Manchester, IBM, Zuse, etc.
  • Early programming and I/O involved paper tapes, punched cards, and batch processing; early memory protection and resourcing ideas were limited.
  • Innovations: job control languages (JCL), resident monitors, and automated program loading.
  • Famous quotes illustrate era sentiment about computing capabilities (e.g., IBM founder quotes).
  • Reference: CITS2002, Lecture 3, p8–p10, 29th July 2024.

1965–1980: Integrated Circuits, Multiprogramming, and Timesharing

  • System/360 family introduced software compatibility across machines with varying capacity.
  • Separation of I/O processing allowed better interleaving of CPU and I/O, enabling multiprogramming.
  • Timesharing: pre-emptive scheduling allowed multiple users to interact with the system; memory protection and scheduling theories matured.
  • The CTSS system (MIT) and early timesharing contributed to modern OS design.
  • The PDP-6, DEC’s role in commercial timesharing history is referenced in notes.
  • Reference: CITS2002, Lecture 3, p11–p12, 29th July 2024.

1980–1990s: PCs, Networking, and GUI Interfaces

  • Decentralization of computing resources: data grows more important than hardware; networks become essential (Ethernet, Token Ring).
  • Workstations emerge; OS facilities expand for inter-process communication, networking, data encryption, security, reliability, distributed data consistency.
  • GUI interfaces and desktop metaphors become mainstream.
  • Industry quotes emphasize the culture of the era.
  • Reference: CITS2002, Lecture 3, p14–p15, 29th July 2024.

2000s and Beyond: Speed, Mobility, and Ubiquity

  • Network speeds and long-distance communication enable graphical and audio communication, not just text.
  • CPU clock-speed growth plateaus despite Moore’s Law; focus shifts to performance per watt and mobile/battery considerations.
  • The era emphasizes ubiquity of computing and cloud/virtualization trends.
  • Reference: CITS2002, Lecture 3, p16, 29th July 2024.

An Overview of Computer Hardware (Revisited)

  • Key takeaways about hardware organization and OS interaction were summarized earlier; the historical perspective helps understand why OS services and memory management evolved as they did.

A Brief Note on Ethics, Practical Implications, and Real-World Relevance

  • The standards-centered approach (C11 and POSIX) fosters portable, robust, and secure software across platforms.
  • The OS design principles (extensibility, modularity, efficiency, and fairness) guide real-world systems like Linux, Windows, and macOS.
  • The historical evolution of OSs informs current practices in virtualization, cloud computing, and multi-core/multi-processor scheduling.
  • Practical implications include: choosing the right language and standards for portability and performance; understanding process lifecycle in order to design robust systems software; and recognizing the trade-offs between memory, I/O, and CPU resources in real systems.

Quick Reference: Selected Key Numbers and Standards

  • C11 standard: ISO/IEC 9899:2011
  • ANSI-C (C89/C90): discussed as a predecessor; pages in standard description noted as part of historical context.
  • C99: 1999 completion; added Boolean and complex datatypes, single-line comments, and VLAs.
  • C18: 2018 revision (defect fixes, no new features).
  • C standardization literature cited in notes: ANSI/ISO standards, K&R reference, and 660-page C11 standard example.
  • The standardization pages cited: CITS2002, Lecture 1, p12–p15, 22nd July 2024.

Lecture 1 Summary

  • The relationship between a programming language and operating systems is central to systems programming.
  • This unit uses C11 and POSIX standards as anchors for robust, portable systems software development.
  • Debates about which language is best are less important than understanding the task at hand and selecting the most appropriate language.
  • C remains a practical vehicle to explain systems programming due to its history, influence, portability, standardization, and broad ecosystem.
  • The module emphasizes practical computing concepts: OS services, memory management, process management, file systems, and hardware interaction, all framed within the C11/POSIX context.
  • References: CITS2002, Lecture 1, p17, 22nd July 2024.

Appendix: Quick Glossary and Concepts

  • C11: ISO/IEC standard for the C programming language; the 2011 revision.
  • POSIX: A family of standards for OS interfaces; widely used for portability across UNIX-like systems.
  • JCL: Job Control Language; historic batch-job description mechanism.
  • Timesharing: early multi-user interactive computing approach enabling multiple users to interact with the computer seemingly simultaneously.
  • CPU time quantum: maximum time slice a process may run before the OS pre-empts it.
  • Swap/Suspend: memory management technique to move inactive processes out of main RAM to disk to free memory for active processes.
  • New/Ready/Running/Blocked/Exit: five-state process model used to describe process life cycle and scheduling decisions.
  • fork()/spawn()/CreateProcess(): mechanisms by which new processes are created across different OSes.
  • Memory hierarchy: structure of storage from fast, small memory (registers, caches) to slower, larger memory (RAM, disks).
  • O/S as dispatcher: OS role includes scheduling, resource management, and handling I/O, with a focus on fairness and throughput.
  • Context: Effective OS design requires understanding both hardware capabilities and software abstractions to build robust, portable systems.