Posts

Showing posts from July, 2024

CST 334 - Operating Systems - Week 7

  This week, we learned about file systems, which are critical for managing persistent data storage in operating systems. Most of this week’s readings focused on the concept of persistence, which ensures that data remains available and consistent even after power failures or system crashes. We learned about various types of file systems, their structures, and how they manage data. A key point to remember was the role of inodes in storing metadata and how directories map human-readable file names to these inodes. This structure not only organizes data efficiently but also facilitates easy retrieval and management. Additionally, we discussed advanced file operations, such as handling file offsets with `lseek()` and ensuring data integrity with `fsync()`, which are vital for maintaining consistency and reliability in a file system. We also covered the implementation aspects of file systems, using the Very Simple File System (vsfs) as a model.
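As a quick sketch of those two calls, here is a small Python analogue on a throwaway temp file. Python's `os.lseek` and `os.fsync` wrap the same syscalls as the C functions named above; the file contents are just made up for illustration.

```python
import os
import tempfile

# Create a scratch file; mkstemp returns a raw file descriptor, like open(2).
fd, path = tempfile.mkstemp()
try:
    os.write(fd, b"hello, file systems")
    # Move the file offset back to the start (like lseek(fd, 0, SEEK_SET) in C).
    os.lseek(fd, 0, os.SEEK_SET)
    first5 = os.read(fd, 5)                  # reads b"hello" from offset 0
    # Seeking to the end reports the file size without reading any data.
    size = os.lseek(fd, 0, os.SEEK_END)
    # Force dirty data down to persistent storage (like fsync(fd) in C),
    # so the write survives a crash once the call returns.
    os.fsync(fd)
finally:
    os.close(fd)
    os.unlink(path)

print(first5, size)
```

The point of the `fsync` at the end is that `write` alone only puts data in OS buffers; durability is only promised after `fsync` returns.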

CST 334 - Operating Systems - Week 6

  This week in CST 334, we learned about semaphores, a fundamental synchronization primitive in operating systems. Semaphores manage access to shared resources using an integer value, which is incremented or decremented by operations like `sem_post()` and `sem_wait()`. These operations let semaphores control resource access, ensuring that threads wait when a resource is unavailable and proceed when it becomes available. A key form is the binary semaphore, which functions like a lock by permitting only one thread access at a time, ensuring mutual exclusion. In addition to basic semaphore usage, we explored common concurrency problems that semaphores help solve. For example, in the Producer-Consumer problem, semaphores signal the state of a buffer, preventing data corruption or loss. Similarly, the Reader-Writer problem uses semaphores to manage access, allowing multiple readers or a single writer to use the resource, but not both at the same time.
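A minimal sketch of the Producer-Consumer pattern, using Python's `threading.Semaphore`, whose `acquire()`/`release()` play the roles of `sem_wait()`/`sem_post()`. The buffer capacity and item count here are arbitrary choices for the demo, not anything from the course.

```python
import threading
from collections import deque

# Bounded buffer guarded by two counting semaphores plus a mutex:
# `empty` counts free slots, `full` counts filled slots.
CAPACITY = 4
buffer = deque()
mutex = threading.Lock()
empty = threading.Semaphore(CAPACITY)  # producers block here when the buffer is full
full = threading.Semaphore(0)          # consumers block here when the buffer is empty

consumed = []

def producer(n):
    for i in range(n):
        empty.acquire()        # like sem_wait(&empty): wait for a free slot
        with mutex:
            buffer.append(i)
        full.release()         # like sem_post(&full): signal one filled slot

def consumer(n):
    for _ in range(n):
        full.acquire()         # like sem_wait(&full): wait for an item
        with mutex:
            consumed.append(buffer.popleft())
        empty.release()        # like sem_post(&empty): free the slot

p = threading.Thread(target=producer, args=(10,))
c = threading.Thread(target=consumer, args=(10,))
p.start(); c.start(); p.join(); c.join()
print(consumed)  # with one producer and one consumer, items arrive in order
```

The two semaphores prevent the buffer from over- or under-flowing, while the mutex keeps the deque operations themselves atomic.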

CST 334 - Operating Systems - Week 5

  This week in CST 334, we learned about concurrency, focusing on threads and their management. Concurrency allows multiple computations to execute simultaneously, which is essential for maximizing CPU utilization and improving performance on multi-processor systems. I learned that threads are lightweight units of execution that share the same address space but have their own program counter, registers, and stack. This sharing makes data exchange between threads easier than between separate processes. We also learned about the structure and function of Thread Control Blocks (TCBs), which store a thread’s state during context switching, and the significance of thread stacks in holding local variables and function parameters. Part of our learning was understanding the challenges of concurrency, such as race conditions, where multiple threads access and modify shared data unpredictably, which can lead to inconsistent or incorrect outcomes.
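To make the race-condition idea concrete, here is a small Python sketch of the standard fix: a lock around the shared counter's read-modify-write, so the two threads' increments can't interleave. The thread and iteration counts are arbitrary.

```python
import threading

# Two threads increment a shared counter. `counter += 1` is really a
# read-modify-write sequence; without the lock, the threads' sequences can
# interleave and lose updates (a race condition). The lock makes each
# increment a critical section executed by one thread at a time.
counter = 0
lock = threading.Lock()

def worker(iterations):
    global counter
    for _ in range(iterations):
        with lock:             # enter critical section
            counter += 1       # safe: no other thread can interleave here

threads = [threading.Thread(target=worker, args=(100_000,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # with the lock this is deterministic; without it, it may be less
```

Removing the `with lock:` line turns this back into the classic lost-update demo, though whether updates are actually lost on a given run depends on scheduling.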

CST 334 - Operating Systems - Week 4

  During the fourth week, I learned more about memory virtualization, focusing on paging. Paging is a crucial method that divides virtual memory into fixed-size pages, which are then mapped to physical memory frames. This approach not only addresses fragmentation but also simplifies memory allocation, enhancing the overall flexibility of the system. The class’s readings highlighted the significant advantages of paging, such as supporting an abstract address space without assumptions about how it is used, and managing free space efficiently through a free list. Understanding the address translation process, which converts a virtual address to a physical address by determining the virtual page number (VPN) and the offset within the page, was a particularly valuable concept. This knowledge is fundamental to understanding how operating systems manage memory and enforce efficient access and protection mechanisms.
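The VPN-plus-offset split is just bit arithmetic, so it can be sketched in a few lines of Python. The 4 KiB page size is a common choice, and the page-table contents below are invented purely for illustration.

```python
# Split a virtual address into VPN and offset for 4 KiB (2**12 byte) pages,
# then translate through a toy page table: a dict mapping VPN -> physical
# frame number (PFN). A real page table also holds valid/protection bits.
PAGE_SIZE = 4096
OFFSET_BITS = 12                         # log2(PAGE_SIZE)

def translate(vaddr, page_table):
    vpn = vaddr >> OFFSET_BITS           # high bits select the virtual page
    offset = vaddr & (PAGE_SIZE - 1)     # low bits locate the byte in the page
    pfn = page_table[vpn]                # a missing VPN would be a "page fault"
    return (pfn << OFFSET_BITS) | offset # same offset, new frame

page_table = {0: 3, 1: 7}                # e.g. VPN 1 maps to physical frame 7
vaddr = 0x1234                           # VPN = 1, offset = 0x234
paddr = translate(vaddr, page_table)
print(hex(paddr))                        # frame 7 + offset 0x234 -> 0x7234
```

Note the offset passes through translation unchanged; only the page number is remapped.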

CST 334 - Operating Systems - Week 3

  This week, we learned about memory virtualization, which is an important concept in operating systems because it provides an abstraction of physical memory, enabling efficient and flexible memory management. The key idea behind memory virtualization is to give each process the illusion of having a large, private address space, even though physical memory may be shared among multiple processes. This is achieved through address translation, where the operating system and hardware work together to map virtual addresses to physical addresses. I learned about the different types of memory, such as stack and heap memory, and the various system calls and functions involved in memory management, including `malloc` and `free`. I also learned more about the history of memory management, from early systems with simple memory layouts to multiprogramming and time sharing, which demanded more sophisticated techniques. The introduction of the address space abstraction by the OS has been central to this evolution.
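To connect `malloc`/`free` to the free-list idea, here is a toy first-fit allocator over a simulated heap, written in Python. Everything here (`my_malloc`, `my_free`, the 100-byte heap) is a made-up simplification: a real allocator stores a header per block so `free` doesn't need a size argument, and it coalesces adjacent holes, neither of which this sketch does.

```python
# A toy first-fit allocator: the "heap" is just a range of addresses, and the
# free list is a sorted list of (start, size) holes.
HEAP_SIZE = 100
free_list = [(0, HEAP_SIZE)]                     # one big initial hole

def my_malloc(size):
    for i, (start, hole) in enumerate(free_list):
        if hole >= size:                         # first fit: first big-enough hole
            if hole == size:
                free_list.pop(i)                 # hole consumed exactly
            else:
                free_list[i] = (start + size, hole - size)  # shrink the hole
            return start                         # "address" handed to the caller
    return None                                  # out of memory

def my_free(start, size):
    # Real free() finds the size in a block header; here the caller passes it.
    free_list.append((start, size))              # no coalescing in this sketch
    free_list.sort()                             # keep holes ordered by address

a = my_malloc(30)      # carves [0, 30) from the big hole
b = my_malloc(20)      # carves [30, 50)
my_free(a, 30)         # [0, 30) goes back on the free list
c = my_malloc(10)      # first fit reuses the freed hole at address 0
print(a, b, c, free_list)
```

Even this toy version shows why freed-but-unmerged holes cause external fragmentation: after the calls above, the free list holds two separate holes rather than one contiguous region.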