
Talk:Operating system

From Wikipedia, the free encyclopedia

Edit request 1


Please replace the current contents of the "security" subsection with the following:

Extended content

Security means protecting users from other users of the same computer, as well as from those seeking remote access to it over a network.[1] Operating system security rests on achieving the CIA triad: confidentiality (unauthorized users cannot access data), integrity (unauthorized users cannot modify data), and availability (ensuring that the system remains available to authorized users, even in the event of a denial of service attack).[2] As with other computer systems, isolating security domains—in the case of operating systems, the kernel, processes, and virtual machines—is key to achieving security.[3] Other ways to increase security include simplicity to minimize the attack surface, locking access to resources by default, checking all requests for authorization, the principle of least authority (granting the minimum privilege essential for performing a task), privilege separation, and reducing shared data.[4]

Some operating system designs are more secure than others. Those with no isolation between the kernel and applications are least secure, while those with a monolithic kernel like most general-purpose operating systems are still vulnerable if any part of the kernel is compromised. A more secure design features microkernels that separate the kernel's privileges into many separate security domains and reduce the consequences of a single kernel breach.[5] Unikernels are another approach that improves security by minimizing the kernel and separating out other operating system functionality by application.[5]

Most operating systems are written in C or C++, which can cause vulnerabilities. Despite various attempts to protect against them, a substantial number of vulnerabilities are caused by buffer overflow attacks, which are enabled by the lack of bounds checking.[6] Hardware vulnerabilities, some of them caused by CPU optimizations, can also be used to compromise the operating system.[7] Programmers coding the operating system may have deliberately implanted vulnerabilities, such as back doors.[8]

Operating system security is hampered by increasing complexity and the resulting inevitability of bugs.[9] Because formal verification of operating systems may not be feasible, operating system developers use hardening to reduce vulnerabilities,[10] such as address space layout randomization, control-flow integrity,[11] access restrictions,[12] and other techniques.[13] Anyone can contribute code to open source operating systems, which have transparent code histories and distributed governance structures.[14] Their developers work together to find and eliminate security vulnerabilities, using techniques such as code review and type checking to avoid malicious code.[15][16] Andrew S. Tanenbaum advises releasing the source code of all operating systems, arguing that it prevents the developer from falsely believing it is secret and relying on security by obscurity.[17]

References

  1. ^ Tanenbaum & Bos 2023, pp. 605–606.
  2. ^ Tanenbaum & Bos 2023, p. 608.
  3. ^ Tanenbaum & Bos 2023, p. 609.
  4. ^ Tanenbaum & Bos 2023, pp. 609–610.
  5. ^ a b Tanenbaum & Bos 2023, p. 612.
  6. ^ Tanenbaum & Bos 2023, pp. 648, 657.
  7. ^ Tanenbaum & Bos 2023, pp. 668–669, 674.
  8. ^ Tanenbaum & Bos 2023, pp. 679–680.
  9. ^ Tanenbaum & Bos 2023, pp. 605, 617–618.
  10. ^ Tanenbaum & Bos 2023, pp. 681–682.
  11. ^ Tanenbaum & Bos 2023, p. 683.
  12. ^ Tanenbaum & Bos 2023, p. 685.
  13. ^ Tanenbaum & Bos 2023, p. 689.
  14. ^ Richet & Bouaynaya 2023, p. 92.
  15. ^ Richet & Bouaynaya 2023, pp. 92–93.
  16. ^ Berntsson, Strandén & Warg 2017, pp. 130–131.
  17. ^ Tanenbaum & Bos 2023, p. 611.

Please add the following references to the "further reading" section:

  • Berntsson, Petter Sainio; Strandén, Lars; Warg, Fredrik (2017). Evaluation of Open Source Operating Systems for Safety-Critical Applications. Springer International Publishing. pp. 117–132. ISBN 978-3-319-65948-0.
  • Richet, Jean-Loup; Bouaynaya, Wafa (2023). "Understanding and Managing Complex Software Vulnerabilities: An Empirical Analysis of Open-Source Operating Systems". Systèmes d'information & management. 28 (1): 87–114. doi:10.54695/sim.28.1.0087.

Reasons: add refs, rewrite according to due weight in reliable sources, cover security concerns of open source operating systems

Thank you Buidhe paid (talk) 03:58, 2 February 2024 (UTC)[reply]

@Buidhe paid, these are a lot of changes. I admit I haven't reviewed it in detail, but at a glance it looks fine. Why exactly are you submitting a COI edit request? I understand you've declared a COI with regards to Anderson & Dahlin but you are not adding or removing any references to that book. I feel like you should just make this change directly to the article. Mokadoshi (talk) 08:59, 9 April 2024 (UTC)[reply]
ok, I will do that. Technically I have a COI because I received pay for these edits, but I am only asked to improve articles according to existing policy /guidelines. Buidhe paid (talk) 21:46, 9 April 2024 (UTC)[reply]

Edit request 2


Please remove the top-level section on "real-time operating systems" (they will be covered under "types of operating systems") and change the content of the "Types of operating systems" section to the following:

Extended content

Multicomputer operating systems


With multiprocessors, multiple CPUs share memory. A multicomputer or cluster computer has multiple CPUs, each of which has its own memory. Multicomputers were developed because large multiprocessors are difficult to engineer and prohibitively expensive;[1] they are universal in cloud computing because of the size of the machine needed.[2] The different CPUs often need to send and receive messages to each other;[3] to ensure good performance, the operating systems for these machines need to minimize the copying of packets.[4] Newer systems are often multiqueue—separating groups of users into separate queues—to reduce the need for packet copying and support more concurrent users.[5] Another technique is remote direct memory access, which enables each CPU to access memory belonging to other CPUs.[3] Multicomputer operating systems often support remote procedure calls where a CPU can call a procedure on another CPU,[6] or distributed shared memory, in which the operating system uses virtualization to generate shared memory that does not actually exist.[7]

Distributed systems


A distributed system is a group of distinct, networked computers—each of which might have its own operating system and file system. Unlike multicomputers, they may be dispersed anywhere in the world.[8] Middleware, an additional software layer between the operating system and applications, is often used to improve consistency. Although it functions similarly to an operating system, it is not a true operating system.[9]

Embedded


Embedded operating systems are designed to be used in embedded computer systems, whether they are internet of things objects or devices not connected to a network. Embedded systems include many household appliances. The distinguishing factor is that they do not load user-installed software. Consequently, they do not need protection between different applications, enabling simpler designs. Very small operating systems might run in less than 10 kilobytes,[10] and the smallest are for smart cards.[11] Examples include Embedded Linux, QNX, VxWorks, and the extra-small systems RIOT and TinyOS.[12]

Real-time


A real-time operating system is an operating system that guarantees to process events or data by or at a specific moment in time. Hard real-time systems require exact timing and are common in manufacturing, avionics, military, and other similar uses.[12] With soft real-time systems, the occasional missed event is acceptable; this category often includes audio or multimedia systems, as well as smartphones.[12] In order for hard real-time systems to be sufficiently exact in their timing, they are often just a library with no protection between applications, such as eCos.[12]

Virtual machine


A virtual machine is an operating system that runs as an application on top of another operating system.[13] The virtual machine is unaware that it is an application and operates as if it had its own hardware.[13][14] Virtual machines can be paused, saved, and resumed, making them useful for operating systems research, development,[15] and debugging.[16] They also enhance portability by enabling applications to be run on a computer even if they are not compatible with the base operating system.[13]

References

  1. ^ Tanenbaum & Bos 2023, p. 557.
  2. ^ Tanenbaum & Bos 2023, p. 558.
  3. ^ a b Tanenbaum & Bos 2023, p. 565.
  4. ^ Tanenbaum & Bos 2023, p. 562.
  5. ^ Tanenbaum & Bos 2023, p. 563.
  6. ^ Tanenbaum & Bos 2023, p. 569.
  7. ^ Tanenbaum & Bos 2023, p. 571.
  8. ^ Tanenbaum & Bos 2023, p. 579.
  9. ^ Tanenbaum & Bos 2023, p. 581.
  10. ^ Tanenbaum & Bos 2023, pp. 37–38.
  11. ^ Tanenbaum & Bos 2023, p. 39.
  12. ^ a b c d Tanenbaum & Bos 2023, p. 38.
  13. ^ a b c Anderson & Dahlin 2014, p. 11.
  14. ^ Silberschatz et al. 2018, pp. 701.
  15. ^ Silberschatz et al. 2018, pp. 705.
  16. ^ Anderson & Dahlin 2014, p. 12.

Also, please add the following source to further reading:

Reason: fix unsourced text, add a concise summary of virtual machines. I removed the "Single- and multi-user" section because it duplicates information that is in the new "Concurrency" section. Buidhe paid (talk) 02:57, 10 February 2024 (UTC)[reply]

"Virtual machine" doesn't usually refer to an operating system. A system virtual machine is implemented by a hypervisor and whatever assistance is provided by the underlying platform on which the hypervisor runs; the hypervisor could be viewed as an operating system, and the "applications" that run on top of it are themselves operating systems. (There are also process virtual machines, such as a Java virtual machine, but the software providing that virtual machine is generally not thought of as being like an operating system.) Guy Harris (talk) 07:47, 26 May 2024 (UTC)[reply]

Edit request 3


Please add the following content directly after the lead:

Extended content

Definition and purpose


An operating system is difficult to define,[1] but has been called "the layer of software that manages a computer's resources for its users and their applications".[2] Operating systems include the software that is always running, called a kernel—but can include other software as well.[1][3] The two other types of programs that can run on a computer are system programs—which are associated with the operating system, but may not be part of the kernel—and applications—all other software.[3]

There are three main purposes that an operating system fulfills:[4]

  • Operating systems allocate resources between different applications, deciding when they will receive central processing unit (CPU) time or space in memory.[4] On modern personal computers, users often want to run several applications at once. In order to ensure that one program cannot monopolize the computer's limited hardware resources, the operating system gives each application a share of the resource, either in time (CPU) or space (memory).[5][6] The operating system also must isolate applications from each other to protect them from errors and security vulnerabilities in another application's code, but enable communications between different applications.[7]
  • Operating systems provide an interface that abstracts the details of accessing hardware (like physical memory) to make things easier for programmers.[4][8] Virtualization also enables the operating system to mask limited hardware resources; for example, virtual memory can provide a program with the illusion of nearly unlimited memory that exceeds the computer's actual memory.[9]
  • Operating systems provide common services, such as an interface for accessing network and disk devices. This enables an application to be run on different hardware without needing to be rewritten.[10] Which services to include in an operating system varies greatly, and this functionality makes up the great majority of code for most operating systems.[11]

References

  1. ^ a b Tanenbaum & Bos 2023, p. 4.
  2. ^ Anderson & Dahlin 2014, p. 6.
  3. ^ a b Silberschatz et al. 2018, p. 6.
  4. ^ a b c Anderson & Dahlin 2014, p. 7.
  5. ^ Anderson & Dahlin 2014, pp. 9–10.
  6. ^ Tanenbaum & Bos 2023, pp. 6–7.
  7. ^ Anderson & Dahlin 2014, p. 10.
  8. ^ Tanenbaum & Bos 2023, p. 5.
  9. ^ Anderson & Dahlin 2014, p. 11.
  10. ^ Anderson & Dahlin 2014, pp. 7, 9, 13.
  11. ^ Anderson & Dahlin 2014, pp. 12–13.
  • Tanenbaum, Andrew S.; Bos, Herbert (2023). Modern Operating Systems, Global Edition. Pearson Higher Ed. ISBN 978-1-292-72789-9.

Reason: all OS textbooks that I was able to access started out by defining what an OS is and explaining its purpose. This section is missing in the current article. Buidhe paid (talk) 20:52, 2 February 2024 (UTC)[reply]

@Buidhe paid: Historically the term operating system has included a lot more than kernels, typically including unprivileged programs such as commands, compilers and binders.
Also, the unpaired {{collapse bottom}} generates an extraneous }} line after the references. My reading of the documentation suggests that it wouldn't be appropriate even if properly paired with a {{collapse top}}. -- Shmuel (Seymour J.) Metz Username:Chatul (talk) 22:30, 2 February 2024 (UTC)[reply]
If you can improve on my text, please feel free to do so. I am just reporting what it says in the cited source. I cannot see any stray curly braces and if there were any, I assume the person implementing the request would disregard them. Yes, you need a collapse bottom to mark the end of the collapsed section, otherwise your comment would be collapsed too. Buidhe paid (talk) 22:46, 2 February 2024 (UTC)[reply]
Note: I checked a few more sources and tweaked the definition accordingly. Buidhe paid (talk) 00:05, 6 February 2024 (UTC)[reply]
 Done Subwayfares (talk) 13:50, 23 May 2024 (UTC)[reply]

Edit request 4


Please change the section header titled "multitasking" to "concurrency". Reason: this is a more commonly used term in OS textbooks (see here for evidence).

Also, please change the content of the section (including the hatnote) to:

Extended content

Concurrency


Concurrency refers to the operating system's ability to carry out multiple tasks simultaneously.[1] Virtually all modern operating systems support concurrency.[2]

Threads enable splitting a process' work into multiple parts that can run simultaneously.[3] The number of threads is not limited by the number of processors available. If there are more threads than processors, the operating system kernel schedules, suspends, and resumes threads, controlling when each thread runs and how much CPU time it receives.[4] During a context switch a running thread is suspended, its state is saved into the thread control block and stack, and the state of the new thread is loaded in.[5] Historically, on many systems a thread could run until it relinquished control (cooperative multitasking). Because this model can allow a single thread to monopolize the processor, most operating systems now can interrupt a thread (preemptive multitasking).[6]

Threads have their own thread ID, program counter (PC), register set, and stack, but share code, heap data, and other resources with other threads of the same process.[7][8] Thus, there is less overhead to create a thread than a new process.[9] On single-CPU systems, concurrency is switching between processes. Many computers have multiple CPUs.[10] Parallelism with multiple threads running on different CPUs can speed up a program, depending on how much of it can be executed concurrently.[11]
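The shared-versus-private distinction can be made concrete with a minimal Python sketch (Python's `threading` module is chosen purely for illustration; the worker function and variable names are invented). Local variables live on each thread's own stack, while data allocated by the process is visible to every thread:

```python
import threading

# Data allocated by the process is shared by all of its threads.
results = {}

def worker(thread_id, n):
    total = sum(range(n))       # 'total' and 'n' live on this thread's own stack
    results[thread_id] = total  # writing into the process's shared data

threads = [threading.Thread(target=worker, args=(i, 10)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()  # wait for every thread to finish

print(results)  # each thread's result, reached through the shared dictionary
```

Each thread writes to a distinct key, so this particular sharing pattern needs no explicit synchronization.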

Synchronization


Given that threads usually share data with other threads from the same process, the order in which threads are executed could impact the result.[12] There are no guarantees about the order of execution between different threads.[13] This makes debugging multithreaded processes much more difficult.[14][15] If a program produces a different result depending on the order in which threads are executed, this is called a race condition.[16]

There are different ways of avoiding race conditions. A simple option is atomic operations that cannot be interrupted or interleaved with other processes.[17] Shared objects (sometimes called monitors)[18] encapsulate heap-allocated memory into an object. The synchronization status is built into the object and hidden from the programmer, but enables the object to be locked while in use by another thread.[19] Condition variables enable a thread to wait until a lock has been released.[20] Locks can only be used by one thread at a time, often reducing performance.[21] In an attempt to increase parallelism and improve performance, programmers can split a shared object into multiple shared objects. However, this approach can cause unexpected results from interactions across objects.[21]
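As an illustration of a shared object in the monitor style described above, this Python sketch (class and method names are invented) hides a lock and two condition variables inside a bounded buffer, so callers never touch the synchronization state directly:

```python
import threading
from collections import deque

class BoundedBuffer:
    """Monitor-style shared object: the lock and condition variables
    are internal, hidden from the programmer using the object."""
    def __init__(self, capacity):
        self._items = deque()
        self._capacity = capacity
        self._lock = threading.Lock()
        self._not_full = threading.Condition(self._lock)
        self._not_empty = threading.Condition(self._lock)

    def put(self, item):
        with self._lock:
            while len(self._items) >= self._capacity:
                self._not_full.wait()      # release the lock and wait
            self._items.append(item)
            self._not_empty.notify()

    def get(self):
        with self._lock:
            while not self._items:
                self._not_empty.wait()
            item = self._items.popleft()
            self._not_full.notify()
            return item

buf = BoundedBuffer(2)
consumed = []
consumer = threading.Thread(
    target=lambda: [consumed.append(buf.get()) for _ in range(5)])
consumer.start()
for i in range(5):
    buf.put(i)
consumer.join()
print(consumed)
```

Because the locking is encapsulated, the producer and consumer call `put` and `get` with no synchronization code of their own.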

The use of multiple locks can cause a deadlock where multiple threads are waiting for each other to finish and release their lock on a resource, thus halting execution.[22] Many operating systems include deadlock detection and recovery features.[23] These include killing processes,[24] interrupting processes,[25] and taking advantage of checkpoints to move back in the execution of a program.[26] Although the operating system can almost never prevent deadlocks, some use heuristics similar to the banker's algorithm to avoid some cases.[27] Communication deadlocks occur when two processes are waiting for a reply from each other. Timeouts are often employed to break these deadlocks.[28]
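One common programming discipline for ruling out the circular wait behind such deadlocks is to acquire locks in a fixed global order, sketched here in Python (the thread and lock names are invented for illustration):

```python
import threading

lock_a = threading.Lock()
lock_b = threading.Lock()
finished = []

def task(name):
    # Every thread takes the locks in the same global order (A before B),
    # so no circular wait -- and therefore no deadlock -- can form.
    with lock_a:
        with lock_b:
            finished.append(name)

threads = [threading.Thread(target=task, args=(n,)) for n in ("t1", "t2")]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(sorted(finished))  # both threads ran to completion
```

If the two threads instead took the locks in opposite orders, each could end up holding one lock while waiting forever for the other.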

References

  1. ^ Anderson & Dahlin 2014, p. 129.
  2. ^ Silberschatz et al. 2018, p. 159.
  3. ^ Anderson & Dahlin 2014, p. 130.
  4. ^ Anderson & Dahlin 2014, p. 131.
  5. ^ Anderson & Dahlin 2014, pp. 157, 159.
  6. ^ Anderson & Dahlin 2014, p. 139.
  7. ^ Silberschatz et al. 2018, p. 160.
  8. ^ Anderson & Dahlin 2014, p. 183.
  9. ^ Silberschatz et al. 2018, p. 162.
  10. ^ Silberschatz et al. 2018, pp. 162–163.
  11. ^ Silberschatz et al. 2018, p. 164.
  12. ^ Anderson & Dahlin 2014, pp. 183–184.
  13. ^ Anderson & Dahlin 2014, p. 140.
  14. ^ Silberschatz et al. 2018, p. 165.
  15. ^ Anderson & Dahlin 2014, p. 184.
  16. ^ Anderson & Dahlin 2014, p. 187.
  17. ^ Anderson & Dahlin 2014, p. 189.
  18. ^ Anderson & Dahlin 2014, p. 197.
  19. ^ Anderson & Dahlin 2014, pp. 195–196.
  20. ^ Anderson & Dahlin 2014, p. 206.
  21. ^ a b Anderson & Dahlin 2014, p. 261.
  22. ^ Anderson & Dahlin 2014, p. 262.
  23. ^ Tanenbaum & Bos 2023, pp. 449–450.
  24. ^ Tanenbaum & Bos 2023, p. 455.
  25. ^ Tanenbaum & Bos 2023, p. 454.
  26. ^ Tanenbaum & Bos 2023, pp. 454–455.
  27. ^ Tanenbaum & Bos 2023, pp. 459, 461.
  28. ^ Tanenbaum & Bos 2023, pp. 465–466.

Reason: fix unsourced text and ensure that topics are covered according to their prominence in reliable sources (please see the summary of several OS textbooks on my sandbox talk page).

Thank you Buidhe paid (talk) 19:30, 3 February 2024 (UTC)[reply]

I have a few copy-editing comments, suggestions, and questions before I implement this one. I'm not a subject matter expert, so I want to be very sure that I'm not unintentionally making the information less accurate with these suggestions.
Extended content

Original:

Threads have their own thread ID, program counter (PC), a register set, and a stack, but share code, heap data, and other resources with other threads of the same process.


Suggested:

Threads have their own thread ID, program counter (PC), register set, and stack, but share code, heap data, and other resources with other threads of the same process.


Reasoning: maintain grammatical structure throughout list


Original:

Shared objects (sometimes called monitors)[18] encapsulating heap-allocated memory into an object.


Comment: Is there a word missing or something in this sentence? I can't parse it.


Original:

Many operating systems include deadlock detection and recovery features, for example, killing processes, interrupting a process, taking advantage of checkpoints to move back in the execution of a program.


Suggested:

Many operating systems include deadlock detection and recovery features. These include killing processes, interrupting processes, and taking advantage of checkpoints to move back in the execution of a program.


Reasoning: I think this is a little bit clearer if you break it up into two sentences; I also changed the wording a little to maintain parallel grammatical structure throughout the list

Let me know what you think. I'm happy to make changes as I implement the edit, or (maybe easier haha) you can adjust the original text for me to copy and paste. Subwayfares (talk) 14:24, 23 May 2024 (UTC)[reply]
Thank you for helping with these edits.
  1. This is an improvement, thanks
  2. Change encapsulating to encapsulate good catch
  3. Good fix
Buidhe paid (talk) 14:49, 23 May 2024 (UTC)[reply]
One last question as I start to put this together - In the deadlocks section, the sentence "The most common kind is a resource deadlock where multiple processes request the same resource that only one can have at a time." is commented out. Is there a reason not to include it in the article? Subwayfares (talk) 15:11, 23 May 2024 (UTC)[reply]
The obvious reason is that it's not true. A deadlock involves more than a single resource. -- Shmuel (Seymour J.) Metz Username:Chatul (talk) 15:54, 23 May 2024 (UTC)[reply]
Correct, it is one process holding resource 1 while requesting resource 2 while another has 2 and needs 1 (for example) Buidhe paid (talk) 19:10, 23 May 2024 (UTC)[reply]
 Partly done @Buidhe paid: I added your concurrency content and section, but kept and indented the multitasking section, since it's a form of concurrency. See [[1]]. STEMinfo (talk) 23:12, 24 May 2024 (UTC)[reply]
Yes, "multitasking" is a word used for concurrency as it relates to multiple processes run by the OS on a single cpu, but I don't see why we need a new section about that when it is already most of the content covered under concurrency. Regardless of that, the older "multitasking" content is entirely unsourced, partly duplicates the content I added and has some undue details. Buidhe paid (talk) 00:10, 26 May 2024 (UTC)[reply]
Note that the nomenclature is not consistent; depending on the text, multitasking may refer to concurrent threads within a single application or to multiple applications running concurrently. Some of the same synchronization issues apply to both. — Preceding unsigned comment added by Chatul (talkcontribs) 01:53, 26 May 2024 (UTC)[reply]

Edit request 5


Please replace the current "Disk access and file systems" and "Disk drivers" sections with the following text under the heading "File system":

Extended content
File systems allow users and programs to organize and sort files on a computer, often through the use of directories (or folders).

Permanent storage devices used in twenty-first century computers, unlike volatile dynamic random-access memory (DRAM), are still accessible after a crash or power failure. Permanent (non-volatile) storage is much cheaper per byte, but takes several orders of magnitude longer to access, read, and write.[1][2] The two main technologies are hard drives, consisting of magnetic disks, and flash memory (solid-state drives that store data in electrical circuits). The latter is more expensive but faster and more durable.[3][4]

File systems are an abstraction used by the operating system to simplify access to permanent storage. They provide human-readable filenames and other metadata, increase performance via amortization of accesses, prevent multiple threads from accessing the same section of memory, and include checksums to identify corruption.[5] File systems are composed of files (named collections of data, of an arbitrary size) and directories (also called folders) that list human-readable filenames and other directories.[6] An absolute file path begins at the root directory and lists subdirectories divided by punctuation, while a relative path defines the location of a file from a directory.[7][8]
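The absolute-versus-relative distinction can be seen with Python's `os.path` helpers (the example paths are invented, POSIX-style):

```python
import os.path

absolute = "/home/alice/docs/report.txt"  # begins at the root directory
relative = "docs/report.txt"              # interpreted from a working directory

print(os.path.isabs(absolute))  # True
print(os.path.isabs(relative))  # False

# Joining a working directory with a relative path yields an absolute path:
print(os.path.join("/home/alice", relative))
```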

System calls (which are sometimes wrapped by libraries) enable applications to create, delete, open, and close files, as well as link, read, and write to them. All these operations are carried out by the operating system on behalf of the application.[9] The operating system's efforts to reduce latency include storing recently requested blocks of memory in a cache and prefetching data that the application has not asked for, but might need next.[10] Device drivers are software specific to each input/output (I/O) device that enables the operating system to work without modification over different hardware.[11][12]
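A small Python sketch of the open/write/read/close sequence, using the `os` module's thin wrappers around the corresponding system calls (the file name is invented; a temporary directory keeps the example self-contained):

```python
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "example.txt")

# os.open/os.write/os.close are thin wrappers around the kernel's
# open/write/close system calls; the OS performs the actual I/O.
fd = os.open(path, os.O_CREAT | os.O_WRONLY)
os.write(fd, b"hello")
os.close(fd)

fd = os.open(path, os.O_RDONLY)
data = os.read(fd, 100)  # read up to 100 bytes
os.close(fd)
print(data)  # b'hello'
```

In practice most applications go through higher-level library wrappers (here, Python's built-in `open`) rather than issuing these calls directly.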

Another component of file systems is a dictionary that maps a file's name and metadata to the data block where its contents are stored.[13] Most file systems use directories to convert file names to file numbers. To find the block number, the operating system uses an index (often implemented as a tree).[14] Separately, there is a free space map to track free blocks, commonly implemented as a bitmap.[14] Although any free block can be used to store a new file, many operating systems try to group together files in the same directory to maximize performance, or periodically reorganize files to reduce fragmentation.[15]
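A toy free-space map in Python illustrates the bitmap idea (the class is invented for illustration; real file systems pack the bits tightly and allocate with locality in mind):

```python
class FreeSpaceBitmap:
    """Toy free-space map: entry i is True when block i is in use."""
    def __init__(self, num_blocks):
        self.bits = [False] * num_blocks

    def allocate(self):
        # Find the first free block and mark it used.
        for i, used in enumerate(self.bits):
            if not used:
                self.bits[i] = True
                return i
        raise RuntimeError("no free blocks")

    def free(self, block):
        self.bits[block] = False

fsm = FreeSpaceBitmap(4)
a = fsm.allocate()  # block 0
b = fsm.allocate()  # block 1
fsm.free(a)         # block 0 becomes free again
c = fsm.allocate()  # reuses block 0
print(a, b, c)
```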

Maintaining data reliability in the face of a computer crash or hardware failure is another concern.[16] File writing protocols are designed with atomic operations so as not to leave permanent storage in a partially written, inconsistent state in the event of a crash at any point during writing.[17] Data corruption is addressed by redundant storage (for example, RAID—redundant array of inexpensive disks)[18][19] and checksums to detect when data has been corrupted. With multiple layers of checksums and backups of a file, a system can recover from multiple hardware failures. Background processes are often used to detect and recover from data corruption.[19]
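A minimal sketch of checksum-based corruption detection, using Python's `zlib.crc32` (the block contents are invented; a real file system stores the checksum alongside the data and verifies it on every read):

```python
import zlib

block = b"important file contents"
checksum = zlib.crc32(block)  # stored with the block when it is written

# On a later read, recompute the checksum and compare.
assert zlib.crc32(block) == checksum  # intact data verifies

corrupted = b"important file c0ntents"  # one byte flipped
print(zlib.crc32(corrupted) == checksum)  # False: corruption detected
```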

References

  1. ^ Anderson & Dahlin 2014, pp. 492, 517.
  2. ^ Tanenbaum & Bos 2023, pp. 259–260.
  3. ^ Anderson & Dahlin 2014, pp. 517, 530.
  4. ^ Tanenbaum & Bos 2023, p. 260.
  5. ^ Anderson & Dahlin 2014, pp. 492–493.
  6. ^ Anderson & Dahlin 2014, p. 496.
  7. ^ Anderson & Dahlin 2014, pp. 496–497.
  8. ^ Tanenbaum & Bos 2023, pp. 274–275.
  9. ^ Anderson & Dahlin 2014, pp. 502–504.
  10. ^ Anderson & Dahlin 2014, p. 507.
  11. ^ Anderson & Dahlin 2014, p. 508.
  12. ^ Tanenbaum & Bos 2023, p. 359.
  13. ^ Anderson & Dahlin 2014, p. 545.
  14. ^ a b Anderson & Dahlin 2014, p. 546.
  15. ^ Anderson & Dahlin 2014, p. 547.
  16. ^ Anderson & Dahlin 2014, pp. 589, 591.
  17. ^ Anderson & Dahlin 2014, pp. 591–592.
  18. ^ Tanenbaum & Bos 2023, pp. 385–386.
  19. ^ a b Anderson & Dahlin 2014, p. 592.

Reason: add sources, improve high-level explanation, harmonize coverage proportionate to reliable sources. Giving disk drivers a separate section seems UNDUE given that they are only a couple paragraphs in some of the textbooks (A&D, Silberschatz et al.) and given no more than two pages (in Tanenbaum). Buidhe paid (talk) 00:41, 4 February 2024 (UTC)[reply]

 Done Subwayfares (talk) 15:58, 23 May 2024 (UTC)[reply]
The mention of system calls sometimes being wrapped by libraries is not specific to those system calls that perform file system operations. System calls are mentioned in several paragraphs, and all of those may be wrapped by libraries. (In operating systems where the APIs are defined as higher-level-language procedure calls, they're typically wrapped by some library call, even if the wrapper is a small assembly-language wrapper around the instruction that performs the system call.) Guy Harris (talk) 08:15, 26 May 2024 (UTC)[reply]

Edit request 6


Please replace the content of the "Memory management" and "Virtual memory" sections, after the hatnote under "Memory management", with: (note that the virtual memory section in my version is a subheading of "memory management")

Extended content

Memory hierarchy is the principle that a computer has multiple stocks of memory, from expensive, volatile (not retaining information in case of power shutoff), and fast cache memory, to less expensive, volatile, and slower main memory, and finally most of the computer's storage in the form of nonvolatile (persistent) and inexpensive, but less quickly accessed solid-state drive or magnetic disk.[1] The memory manager is the part of the operating system that manages volatile memory.[1] Cache memory is typically managed by hardware, while main memory is typically managed by software.[2]

Early computers had no virtual addresses. Multiple programs could not be loaded in memory at the same time, so during a context switch the entire contents of memory would be saved to nonvolatile storage, then the next program loaded in.[2] Virtual address space provided increased security by preventing applications from overwriting memory needed by the operating system or other processes[3][4] and enabled multiple processes to run simultaneously.[5] Virtual address space creates the illusion of nearly unlimited memory available to each process, even exceeding the hardware memory.[6]

Address translation is the process by which virtual addresses are converted into physical ones by the memory management unit (MMU).[7][8] To cope with the increasing amounts of memory and storage in modern computers, the MMU often contains a multi-level page table that can resolve any address along with a translation lookaside buffer (TLB) that caches the latest memory lookups for increased speed.[9] As part of address translation, the MMU prevents a process from accessing memory in use by another process (memory protection).[10]
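The translation path above can be modeled in a few lines of Python (a toy model: the page size, page-table contents, and function name are invented, and a real MMU does all of this in hardware):

```python
PAGE_SIZE = 4096

page_table = {0: 7, 1: 3, 2: 9}  # virtual page number -> physical frame number
tlb = {}                         # cache of recent translations

def translate(vaddr):
    page, offset = divmod(vaddr, PAGE_SIZE)
    if page in tlb:              # TLB hit: skip the page-table walk
        frame = tlb[page]
    elif page in page_table:     # TLB miss: walk the page table
        frame = page_table[page]
        tlb[page] = frame        # cache the translation for next time
    else:                        # unmapped page: the MMU raises a fault
        raise MemoryError("page fault: page %d not mapped" % page)
    return frame * PAGE_SIZE + offset

print(translate(4100))  # page 1, offset 4 -> 3 * 4096 + 4 = 12292
```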

Virtual memory

Illustration of one process using memory segmentation

Often the amount of memory requested by processes will exceed the computer's total memory.[11] One strategy is that after a process runs for a while, it is suspended and its memory swapped to permanent storage. Then, the memory can be reused for another process.[12] The downside of this approach is that over time the physical memory becomes fragmented because not all processes use the same amount of physical address space.[13] Also, the user may want to run a process too large to fit in memory.[14] Free blocks are tracked either with bitmaps or free lists.[15]

The most common option for managing overflow from memory is dividing each process' memory usage into segments called pages.[14] All of the memory is backed up in disk storage,[16] and not all of the process' pages need to be in memory for execution to go ahead.[14] If the process requests an address that is not currently in physical memory (page fault), the operating system will fetch the page and resume operation.[8]
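A toy demand pager in Python illustrates fetching pages on a fault and evicting the least recently used resident page (the class name, frame count, and eviction policy are invented for illustration; real kernels use more sophisticated replacement algorithms):

```python
from collections import OrderedDict

class DemandPager:
    """Toy demand pager: at most `frames` pages are resident; on a
    page fault the least-recently-used page is evicted to 'disk'."""
    def __init__(self, frames):
        self.frames = frames
        self.resident = OrderedDict()  # page -> contents, LRU order
        self.disk = {}                 # backing store for evicted pages
        self.page_faults = 0

    def access(self, page):
        if page in self.resident:
            self.resident.move_to_end(page)  # mark as recently used
            return self.resident[page]
        self.page_faults += 1                # page fault
        if len(self.resident) >= self.frames:
            evicted, contents = self.resident.popitem(last=False)
            self.disk[evicted] = contents    # write back to disk
        contents = self.disk.get(page, "data-%d" % page)
        self.resident[page] = contents       # fetch the page and resume
        return contents

pager = DemandPager(frames=2)
for p in [0, 1, 0, 2, 0, 1]:
    pager.access(p)
print(pager.page_faults)  # 4
```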

References

  1. ^ a b Tanenbaum & Bos 2023, p. 179.
  2. ^ a b Tanenbaum & Bos 2023, p. 180.
  3. ^ Tanenbaum & Bos 2023, p. 183.
  4. ^ Anderson & Dahlin 2014, pp. 371–372, 414.
  5. ^ Tanenbaum & Bos 2023, pp. 183–184.
  6. ^ Anderson & Dahlin 2014, pp. 425, 454.
  7. ^ Anderson & Dahlin 2014, p. 371.
  8. ^ a b Tanenbaum & Bos 2023, p. 193.
  9. ^ Anderson & Dahlin 2014, p. 414.
  10. ^ Silberschatz et al. 2018, p. 357.
  11. ^ Tanenbaum & Bos 2023, p. 185.
  12. ^ Tanenbaum & Bos 2023, p. 186.
  13. ^ Tanenbaum & Bos 2023, p. 187.
  14. ^ a b c Tanenbaum & Bos 2023, p. 192.
  15. ^ Tanenbaum & Bos 2023, p. 188.
  16. ^ Anderson & Dahlin 2014, p. 454.

Reason: Add sources, more closely harmonize the amount of detail for each subtopic with the amount of coverage in reliable sources Buidhe paid (talk) 06:26, 4 February 2024 (UTC)[reply]

The cache is largely managed by hardware, not by the OS's virtual memory code. The part of the memory hierarchy that's involved with virtual memory is the part that's of interest in this article.
In addition, whilst the main memory is volatile on the vast majority of current systems, on the first systems that supported demand-paged virtual memory, the main memory was magnetic core memory, which is non-volatile. The volatility of memory is relevant to the OS only if the OS provides hibernate/reawaken capabilities: the OS saves the contents of memory to some non-volatile storage, the hardware shuts down to a power-saving mode in which it doesn't refresh main memory, and, on reawakening, the hardware resumes refreshing main memory while the OS (or firmware) reloads memory from the non-volatile storage. So this section shouldn't mention volatility. Guy Harris (talk) 08:22, 26 May 2024 (UTC)[reply]
 Not done for now: An editor has expressed a concern about this requested edit. ABG (Talk/Report any mistakes here) 11:30, 1 June 2024 (UTC)[reply]
We can't expect every reader to understand how computer hardware works. I think it is beneficial to give some basic background on this subject, even if it is not technically part of the OS. The volatility of memory is extensively discussed in OS textbooks so should not be omitted just because one of us thinks it is irrelevant. The content is supported by the cited sources and the article cannot reasonably cover every single possible OS or hardware ever in existence. Buidhe paid (talk) 05:13, 3 June 2024 (UTC)[reply]

Edit request 7

[edit]

Please replace the current content of the "User interface" section, after the hatnote and the image, with the following text:

Extended content

On personal computers, user input typically comes from a keyboard, mouse, trackpad, and/or touchscreen, which are connected to the operating system with specialized software.[1] Programmers often prefer output in the form of plain text, which is simple to support.[2] In contrast, other users often prefer graphical user interfaces (GUIs), which are supported by most PCs.[3] GUIs may be implemented with user-level code or by the operating system itself, and the software to support them is much more complex. They are also supported by specialized hardware in the form of a graphics card that usually contains a graphics processing unit (GPU).[4]

References

  1. ^ Tanenbaum & Bos 2023, pp. 396, 402.
  2. ^ Tanenbaum & Bos 2023, p. 402.
  3. ^ Tanenbaum & Bos 2023, pp. 395, 408.
  4. ^ Tanenbaum & Bos 2023, p. 409.

Reason: the current section is UNDUE, as major operating systems textbooks lack a top-level chapter about user interface, and cover the topic briefly if at all. My version exploits summary style to improve conciseness, and also resolves the issue of uncited text. Buidhe paid (talk) 06:16, 5 February 2024 (UTC)[reply]

Retained mention of shell, and the distinction between computers in general and PCs in particular. Keeping both images seems unnecessary. Unclear which (if either) Buidhe paid wants to retain. Am inserting references provided above. Will update request to indicate completion upon addition of sources.--FeralOink (talk) 17:09, 24 May 2024 (UTC)[reply]
Added sources, removed KDE visual as new version is available and image isn't needed. Will wait to close out COI template until Buidhe paid confirms satisfaction or proposes corrections/further changes.--FeralOink (talk) 17:49, 24 May 2024 (UTC)[reply]
My understanding is that shell is just another word for an interface to an OS. I maintain that the rest of the content in that section is UNDUE based on coverage in overview sources of OS. If a picture is included in that section, it should be a GUI—the vast majority of coverage that does exist is about GUIs. Buidhe paid (talk) 02:05, 26 May 2024 (UTC)[reply]
The term "shell" originally referred to command-line shells; it dates back to at least this 1965 paper "The SHELL: A Global Tool for Calling and Chaining Procedures in the System" by Louis Pouzin, which is about a shell for Multics.
Microsoft speaks of the "Windows Shell" as part of the overall GUI; it doesn't appear to refer to the entire GUI.
In Unix-like systems - including even macOS - however, "shell" usually seems to refer to command-line shell, probably because of Unix's history, including its historical connection to Multics. However, "GNOME shell" refers to the GUI shell for the GNOME desktop environment, and "KDE shell" is also used for a GUI shell for the KDE desktop environment.
I think that neither solely providing an image of a command-line shell nor providing an image of a GUI desktop environment would fully represent the notion of a "shell"; perhaps no image should be provided, with the task of providing screenshot examples being left to shell (computing).
And I'm not sure what "GUIs may be implemented with user-level code or by the operating system itself." means. Most of the code for a GUI runs in user mode on most operating systems, but is provided as part of the "operating system" in the larger sense of "a platform atop which applications run" rather than "the kernel code that performs privileged tasks and manages low-level resources such as the CPU and memory". Graphical device drivers may run in kernel mode, as may some code about the driver layer, but, as far as I know, graphical widgets such as text boxes, scrollbars, buttons, and spinboxes, and window decorations, are implemented by code running in user mode.
"GUIs evolve over time, e.g. Microsoft modified the GUI for almost every new version of its Windows operating system" doesn't strike me as relevant here. It might be relevant on graphical user interface, but it's not obvious to me that it's really notable; anybody who's updated the OS on their personal computer, tablet, or smartphone is likely to have seen at least one update that changes the look or feel of the user interface.
"GUIs may also require a graphics card, including a graphics processing unit (GPU)." Does that refer to an add-on graphics card? It may have been true of early PCs, but wasn't true of the Macintosh or of workstation computers, as they had, at minimum, a frame buffer built in. The early ones didn't have a full-blown GPU - the Mac used the CPU to do all the rendering. This might be another detail best left to graphical user interface or some such page.
So, yes, removal of at least some stuff from that section might be a good idea. Guy Harris (talk) 08:59, 26 May 2024 (UTC)[reply]
Thank you for your review, Guy Harris! I had removed all the images except one. I will remove the one remaining bash screenshot per your comments about the history of shells and the fact that a shell might (UNIX command line) or might not (e.g. Windows; GUI shells for GNOME and KDE desktop environments) be the entire GUI. I was a UNIX user long ago. I DO believe it is important to make a distinction between personal computer and non-personal computer interactions with an operating system. (I think that is accomplished.)
I will gladly remove the sentence about Microsoft changing GUIs with each new version of Windows as it isn't well-sourced!
I will remove Buidhe paid's sentence about how GUIs are implemented per your comment.
You suggest removing Buidhe paid's sentence, "They are also supported by specialized hardware in the form of a graphics card that usually contains a graphics processing unit (GPU). (with reference)" because historically, this wasn't always the case. As of about 2010, dedicated GPUs were often on a graphics card. I will remove that sentence if you think it is too ambiguous thus best avoided (and also, because the GUI article should cover it in depth).
Guy, does this capture your comments and seem satisfactory for the User Interface subsection of this Operating System article?

"A user interface (UI) is required to operate a computer, i.e. for human interaction to be supported. The two most common user interfaces are

  • command-line interface, in which computer commands are typed, line-by-line,
  • graphical user interface (GUI) using a visual environment, most commonly a combination of the window, icon, menu, and pointer elements, also known as WIMP.

For personal computers (PCs), user input is typically from a combination of keyboard, mouse, and trackpad or touchscreen, all of which are connected to the operating system with specialized software.[139] PC users who are not software developers or coders often prefer GUIs for both input and output; GUIs are supported by most PCs. The software to support GUIs is more complex than a command line for input and plain text output.[141] Plain text output is often preferred by programmers, and is easy to support.[140]"

I apologize for not indenting the above passage, as I couldn't get the Wiki syntax to cooperate.--FeralOink (talk) 22:26, 1 June 2024 (UTC)[reply]
Thank you for further refinements, Guy Harris. I am now closing the request as accepted.--FeralOink (talk) 07:23, 4 June 2024 (UTC)[reply]
 Done--FeralOink (talk) 07:42, 4 June 2024 (UTC)[reply]

Edit request 8

[edit]

Please remove the unsourced section "Networking". Reason: only one of the operating systems textbooks has a section on networking. I checked the source and it is a brief overview of networking in general, and does not cover how operating systems support networking (which makes up the current content of the section). Thus, I believe the section should be removed both for verifiability reasons as well as for being UNDUE. Buidhe paid (talk) 06:29, 5 February 2024 (UTC)[reply]

 Done TechnoSquirrel69 (sigh) 23:50, 29 May 2024 (UTC)[reply]

Edit request 9

[edit]

Please replace the current content of the "History" section, including the hatnote, with:

Extended content
An IBM System 360/65 Operator's Panel. OS/360 was used on most IBM mainframe computers beginning in 1966.

The first computers in the late 1940s and 1950s were directly programmed in machine code entered via plugboards or punched cards, without programming languages or operating systems.[1] After the introduction of the transistor in the mid-1950s, mainframes began to be built. These still needed professional operators[1] but had rudimentary operating systems such as Fortran Monitor System (FMS) and IBSYS.[2] In the 1960s, IBM introduced the first series of intercompatible computers (System/360). All of them ran the same operating system—OS/360—which consisted of millions of lines of assembly language that had thousands of bugs. OS/360 was also the first popular operating system to support multiprogramming, such that the CPU could be put to use on one job while another was waiting on input/output (I/O). Holding multiple jobs in memory necessitated memory partitioning and safeguards against one job accessing the memory allocated to a different one.[3]

Around the same time, terminals were invented so multiple users could access the computer simultaneously. The operating system MULTICS was intended to allow hundreds of users to access a large computer. Despite its limited adoption, it can be considered the precursor to cloud computing. The UNIX operating system originated as a development of MULTICS for a single user.[4] Because UNIX's source code was available, it became the basis of other, incompatible operating systems, of which the most successful were AT&T's System V and the University of California's Berkeley Software Distribution (BSD).[5] To increase compatibility, the IEEE released the POSIX standard for system calls, which is supported by most UNIX systems. MINIX was a stripped-down version of UNIX, developed in 1987 for educational uses, that inspired the commercially available, free software Linux. Since 2008, MINIX is used in controllers of most Intel microchips, while Linux is widespread in data centers and Android smartphones.[6]

Microcomputers

[edit]
Command-line interface of the MS-DOS operating system
Graphical user interface of a Macintosh

The invention of large scale integration enabled the production of personal computers (initially called microcomputers) from around 1980.[7] For around five years, CP/M (Control Program for Microcomputers) was the most popular operating system for microcomputers.[8] Later, IBM bought DOS (Disk Operating System) from Bill Gates. After modifications requested by IBM, the resulting system was called MS-DOS (MicroSoft Disk Operating System) and was widely used on IBM microcomputers. Later versions increased their sophistication, in part by borrowing features from UNIX.[8]

Steve Jobs' Macintosh, which after 1999 used the UNIX-based (via FreeBSD)[9] macOS, was the first popular computer to use a graphical user interface (GUI). The GUI proved much more user friendly than the text-only command-line interface earlier operating systems had used. Following the success of Macintosh, MS-DOS was updated with a GUI overlay called Windows. Windows later was rewritten as a stand-alone operating system, borrowing so many features from another (VAX VMS) that a large legal settlement was paid.[10] In the twenty-first century, Windows continues to be popular on personal computers but has less market share of servers. UNIX operating systems, especially Linux, are the most popular on enterprise systems and servers but are also used on mobile devices and many other computer systems.[11]

On mobile devices, Symbian OS was dominant at first, being usurped by BlackBerry OS (introduced 2002) and iOS for iPhones (from 2007). Later on, the open-source, UNIX-based Android (introduced 2008) became most popular.[12]

References

  1. ^ a b Tanenbaum & Bos 2023, p. 8.
  2. ^ Tanenbaum & Bos 2023, p. 10.
  3. ^ Tanenbaum & Bos 2023, pp. 11–12.
  4. ^ Tanenbaum & Bos 2023, pp. 13–14.
  5. ^ Tanenbaum & Bos 2023, pp. 14–15.
  6. ^ Tanenbaum & Bos 2023, p. 15.
  7. ^ Tanenbaum & Bos 2023, pp. 15–16.
  8. ^ a b Tanenbaum & Bos 2023, p. 16.
  9. ^ Tanenbaum & Bos 2023, pp. 17–18.
  10. ^ Tanenbaum & Bos 2023, p. 17.
  11. ^ Tanenbaum & Bos 2023, p. 18.
  12. ^ Tanenbaum & Bos 2023, pp. 19–20.

Reasons: make it more concise, use summary style, fix uncited text Buidhe paid (talk) 06:58, 6 February 2024 (UTC)[reply]

Plugboards weren't really "code" in the sense of machine code, and punched cards weren't the only way machine code could be entered; punched paper tape was also an input medium.
Programming languages came along relatively early; assembly language dates back to some of the earliest computers, and even FORTRAN dates back to the IBM 704.
The IBM 704 and IBM 709, both vacuum-tube rather than transistor computers, are both referred to as "mainframes" on their Wikipedia pages, so I don't think it's clear that the introduction of transistors was a requirement for building mainframes. The FORTRAN Monitor System ran on the 709, so operating systems date back before transistorized computers.
All S/360s (other than the incompatible IBM System/360 Model 20 and IBM System/360 Model 44) may have been able to run OS/360, but not all did; many ran, for example, DOS/360, as OS/360 may not have run well on smaller machines.
I'm not sure OS/360 was the first OS to support multiprogramming. The PDP-6 Monitor may have been available before OS/360 and perhaps even before DOS/360 (at least some configurations of which supported multiprogramming, as far as I know - and those may have come out before OS/360 MFT or MVT), and was a time-sharing OS that not only supported multiprogramming but supported time-slicing. The Burroughs MCP came out even earlier than either of those and, as far as I know, supported multiprogramming as well.
The first computer terminals weren't really invented at that point. They were just teleprinters, such as the Flexowriter and various teleprinters from Teletype Corporation (Model 28, Model 33, Model 35, etc.), which were invented earlier and put to use as computer terminals at that later time.
The Compatible Time Sharing System (CTSS) preceded Multics as a time-sharing OS. It may be more correct to speak of time-sharing systems in general as predecessors to both client-server and cloud computing, rather than just mentioning Multics in particular (other time-sharing OSes may not have had the term "information utility" used when discussing them, but I don't think that makes Multics special in that regard).
UNIX wasn't a direct derivative of Multics. Some aspects of UNIX were inspired by Multics, such as the hierarchical directory structure and the notion of a command-line interpreter in which command names were file names for programs that implemented the command (although Multics ran commands within the same process, rather than creating a new process, as UNIX did).
System V and BSD weren't completely incompatible with, for example, Seventh Edition (V7) UNIX or UNIX/32V. There were some incompatibilities introduced, but most of the system library APIs and commands were V7-compatible. POSIX provided an interface that both SV and later BSDs were changed to support; it's a standard for more than just system calls, in the sense of "APIs implemented as simple traps to the OS kernel" - it also includes APIs such as getpwnam() and getpwuid(), which are mostly implemented in a user-mode library, although they do perform system calls to read from a file or send requests to or receive replies from a directory server.
What are the Intel chips in which MINIX is used? MINIX may have inspired Linus Torvalds to write the original Linux kernel, but Linux wasn't, as far as I know, based on MINIX.
Were the first microprocessors based on LSI or VLSI?
The first Macintosh computers did not run Mac OS X/OS X/macOS; they ran the classic Mac OS, which was not UNIX-based. Mac OS X only showed up in the early 2000s; it was developed from the BSD-based NeXTSTEP. Guy Harris (talk) 10:31, 26 May 2024 (UTC)[reply]
I have checked some of the alleged inaccuracies and tweaked some to be accurate both to the source text and to what you are saying. I maintain that my version is much better than the current version because at least it is more concise and better sourced, which makes it easier to improve in the future.
As for some specific points:
  • My text The UNIX operating system originated as a development of MULTICS for a single user, is based on the source: Ken Thompson... found a small PDP-7 minicomputer that no one was using and set out to write a stripped-down, one-user version of MULTICS. This work later developed into the UNIX operating system. Perhaps there is a more informative concise phrasing, but I do not see how that is contradicted by what you are saying.
  • As for Minix use in intel chips, Tanenbaum et al says: "MINIX was adapted by Intel for a separate and somewhat secret ‘‘management’’ processor embedded in virtually all its chipsets since 2008." He also says that Linux "was directly inspired by and developed on MINIX". I'm not entirely sure what relationship "developed on" entails (he mentions file systems) but we can go with inspired if you prefer.
  • Were the first microprocessors based on LSI or VLSI? He does not mention VLSI
  • The source does not go into pre-MacOSX operating systems, and I have edited to clarify.
Buidhe paid (talk) 05:08, 3 June 2024 (UTC)[reply]
The first-generation computer history part of Tanenbaum and Bos just describes very early first-generation computers. Several things, including assembler languages, some early higher-level languages, and business data processing were provided by later first-generation computers.
Tanenbaum and Bos does not say that OS/360 was the first popular OS to support multiprogramming. What it says is

Despite its enormous size and problems, OS/360 and the similar third-generation operating systems produced by other computer manufacturers actually satisfied most of their customers reasonably well. They also popularized several key techniques absent in second-generation operating systems. Probably the most important of these was multiprogramming.

(emphasis mine).
They did, at least, mention CTSS when talking about time-sharing (although their claim about protection hardware not showing up until the third generation is better stated as that hardware becoming common in the third generation - the modified 7090s and 7094s used for CTSS had special custom hardware from IBM providing relocation and protection). That section should probably mention the term "time-sharing".
I'd ask for a citation from Tanenbaum and Bos on their claim that Thompson was trying to write a "stripped-down, one-user version of MULTICS" - that sounds like folklore rather than fact. Dennis Ritchie says, in The Evolution of the Unix Time-sharing System, that Unix came from their desire to "find an alternative to Multics" and, in [https://www.bell-labs.com/usr/dmr/www/retro.pdf The UNIX Time-sharing System - A Retrospective], that "a good case can be made that it is in essence a modern implementation of MIT's CTSS system" - note the "in essence", so he's not saying it's a version of CTSS (which it isn't). Thompson himself, in an article in the May 1999 issue of IEEE Computer, said that "The early versions [of Unix] were essentially me experimenting with some Multics concepts on a PDP-7", which isn't as strong as "a stripped-down, one-user version of MULTICS".
"MINIX is used in controllers of most Intel microchips" is a too-vague version of what Tanenbaum and Bos said, which is that it's the OS for a management processor in Intel chipsets, separate from the CPU. See, for example, "The Truth About the Intel's Hidden Minix OS and Security Concerns" and "MINIX: ​Intel's hidden in-chip operating system". Guy Harris (talk) 08:40, 3 June 2024 (UTC)[reply]
A few historical notes
  • Honeywell had multiprogramming on the Honeywell 800, announced in 1958 and installed in 1960. And, yes, so did the MCP in 1961 on the B5000
  • CDC, GE and UNIVAC all had block relocation prior to the S/360. The B5000 had segmentation. Atlas had paging.
  • Yes, the PDP-6 supported multiprogramming
I'm not sure whether a reference to Stretch is needed; it announced multiprogramming earlier than some of the others but delivery was delayed -- Shmuel (Seymour J.) Metz Username:Chatul (talk) 10:16, 3 June 2024 (UTC)[reply]
Yes, the PDP-6 supported multiprogramming And memory protection/address relocation. Guy Harris (talk) 17:04, 3 June 2024 (UTC)[reply]
Clearly you and I disagree on what the source intends to say. In my opinion, if something is already a feature of a popular product it cannot be popularized because it is already popular. I do think that more than a sentence or so on this issue is probably Undue Weight—the details on this belong in a different article. Would you be happy if I just took out the mention of multiprogramming? (t · c) buidhe 12:44, 4 June 2024 (UTC)[reply]
It makes more sense to move it earlier. Something like After the introduction of the transistor in the mid-1950s, mainframes began to be built. These still needed professional operators[1] but had rudimentary operating systems such as Fortran Monitor System (FMS) and IBSYS.[2] In the 1960s, vendors began to offer multiprogramming operating systems. In 1964, IBM introduced the first series of intercompatible computers (System/360)..
I'm not sure what to do about the Atlas and B5000; they were designed in the late 1950s but installed in the 1960s. -- Shmuel (Seymour J.) Metz Username:Chatul (talk) 17:10, 4 June 2024 (UTC)[reply]

Edit request 10

[edit]

Please replace the current sections "Examples" and "Market share" with one section called "Popular operating systems", with the following content:

Extended content

In the personal computer market, as of September 2023, Microsoft Windows has the highest market share, around 68%. macOS by Apple Inc. is in second place (20%), and the varieties of Linux, including ChromeOS, are collectively in third place (7%).[3] In the mobile sector (including smartphones and tablets), as of September 2023, Android's share is 68.92%, followed by Apple's iOS and iPadOS with 30.42%, and other operating systems with 0.66%.[4]

Linux

[edit]
Layers of a Linux system

Linux is free software distributed under the GNU General Public License (GPL), which means that all of its derivatives are legally required to release their source code.[5] Linux was designed by programmers for their own use, thus emphasizing simplicity and consistency, with a small number of basic elements that can be combined in nearly unlimited ways, and avoiding redundancy.[6]

Its design is similar to that of other UNIX systems not using a microkernel.[7] It is written in C[8] and uses UNIX System V syntax, but also supports BSD syntax. Linux supports standard UNIX networking features, as well as the full suite of UNIX tools, while supporting multiple users and employing preemptive multitasking. Initially of a minimalist design, Linux is a flexible system that can work in under 16 MB of RAM, but is still used on large multiprocessor systems.[7] Similar to other UNIX systems, Linux distributions are composed of a kernel, system libraries, and system utilities.[9] Linux has a graphical user interface (GUI) with a desktop, folder and file icons, as well as the option to access the operating system via a command line.[10]

Android is a partially open-source operating system closely based on Linux and has become the most widely used operating system by users, due to its popularity on smartphones and, to a lesser extent, embedded systems needing a GUI, such as "smart watches, automotive dashboards, airplane seatbacks, medical devices, and home appliances".[11] Unlike Linux, much of Android is written in Java and uses object-oriented design.[12]

Microsoft Windows

[edit]
Security descriptor for a file that is read-only by default, specifying no access for Elvis, read/write access for Cathy, and full access for Ida, the owner of the file[13]

Windows is a proprietary operating system that is widely used on desktop computers, laptops, tablets, phones, workstations, enterprise servers, and Xbox consoles.[14] The operating system was designed for "security, reliability, compatibility, high performance, extensibility, portability, and international support"—later on, energy efficiency and support for dynamic devices also became priorities.[15]

Windows Executive works via kernel-mode objects for important data structures like processes, threads, and sections (memory objects, for example files).[16] The operating system supports demand paging of virtual memory, which speeds up I/O for many applications. I/O device drivers use the Windows Driver Model.[16] The NTFS file system has a master file table, and each file is represented as a record with metadata.[17] Scheduling includes preemptive multitasking.[18] Windows has many security features;[19] especially important are the use of access-control lists and integrity levels. Every process has an authentication token and each object is given a security descriptor. Later releases have added even more security features.[17]
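The access-control-list check can be illustrated with a toy model mirroring the example descriptor in the image caption (no access for Elvis, read/write for Cathy, full access for Ida). The tuple layout and rule names are illustrative, not the real SECURITY_DESCRIPTOR format; the sketch does follow the convention that deny entries are evaluated before allow entries and the first matching entry wins.

```python
# Toy access-control-list check. Each entry is (type, user, rights);
# deny entries are listed first, and the first matching entry decides.

acl = [
    ("deny",  "Elvis", {"read", "write", "full"}),
    ("allow", "Cathy", {"read", "write"}),
    ("allow", "Ida",   {"read", "write", "full"}),
]

def check_access(user, right):
    for kind, who, rights in acl:
        if who == user and right in rights:
            return kind == "allow"   # first matching entry wins
    return False                     # no matching entry: access denied

print(check_access("Cathy", "write"))  # True
print(check_access("Elvis", "read"))   # False: deny entry matches first
```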

References

  1. ^ Tanenbaum & Bos 2023, p. 8.
  2. ^ Tanenbaum & Bos 2023, p. 10.
  3. ^ "Desktop Operating System Market Share Worldwide". StatCounter Global Stats. Archived from the original on 2 October 2023. Retrieved 2023-10-03.
  4. ^ "Mobile & Tablet Operating System Market Share Worldwide". StatCounter Global Stats. Retrieved 2023-10-02.
  5. ^ Silberschatz et al. 2018, pp. 779–780.
  6. ^ Tanenbaum & Bos 2023, pp. 713–714.
  7. ^ a b Silberschatz et al. 2018, p. 780.
  8. ^ Vaughan-Nichols, Steven (2022). "Linus Torvalds prepares to move the Linux kernel to modern C". ZDNET. Retrieved 7 February 2024.
  9. ^ Silberschatz et al. 2018, p. 781.
  10. ^ Tanenbaum & Bos 2023, pp. 715–716.
  11. ^ Tanenbaum & Bos 2023, pp. 793–794.
  12. ^ Tanenbaum & Bos 2023, p. 793.
  13. ^ Tanenbaum & Bos 2023, pp. 1021–1022.
  14. ^ Tanenbaum & Bos 2023, p. 871.
  15. ^ Silberschatz et al. 2018, p. 826.
  16. ^ a b Tanenbaum & Bos 2023, p. 1035.
  17. ^ a b Tanenbaum & Bos 2023, p. 1036.
  18. ^ Silberschatz et al. 2018, p. 821.
  19. ^ Silberschatz et al. 2018, p. 827.

Reason: The only two OS textbooks with case studies have Linux and Windows, so if we're going to include a section with specific examples, that's probably the ones that should be there. This is a top level article and information about specific operating systems likely belongs in their own article or other subarticles such as comparison of operating systems. My version improves summary style and also fixes unsourced content issues. Buidhe paid (talk) 04:00, 7 February 2024 (UTC)[reply]

Or usage share of operating systems, which is a page that directly covers market share. Guy Harris (talk) 17:05, 26 May 2024 (UTC)[reply]

Edit request 11

[edit]

Please remove the unsourced "modes" section and replace the first paragraph of the "Kernel" section with the following text, which also covers modes:

Extended content

The kernel is the part of the operating system that provides protection between different applications and users. This protection is key to improving reliability by keeping errors isolated to one program, as well as security by limiting the power of malicious software and protecting private data, and ensuring that one program cannot monopolize the computer's resources.[1] Most operating systems have two modes of operation:[2] in user mode, the hardware checks that the software is only executing legal instructions, whereas the kernel has unrestricted powers and is not subject to these checks.[3] The kernel also manages memory for other processes and controls access to input/output devices.[4]
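The two-mode split can be sketched as follows: "hardware" refuses privileged operations unless a mode flag is set, and the only way to set it is through a system-call gate. The function names, mode flag, and dispatch table are all illustrative, not any real OS's interface.

```python
# Toy sketch of user mode vs. kernel mode. A privileged operation checks
# the mode flag (standing in for the hardware check), and the syscall
# gate is the only path that raises privileges, dropping them on return.

USER, KERNEL = "user", "kernel"
mode = USER

def privileged_io(device):
    if mode != KERNEL:                    # the hardware check in user mode
        raise PermissionError("privileged instruction in user mode")
    return f"read from {device}"

def syscall(request, *args):
    global mode
    mode = KERNEL                         # trap: enter the kernel
    try:
        return {"read": privileged_io}[request](*args)
    finally:
        mode = USER                       # drop privileges before returning
```

Calling privileged_io directly from user mode raises PermissionError, while going through syscall("read", "disk0") succeeds and leaves the process back in user mode.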

References

  1. ^ Anderson & Dahlin 2014, pp. 39–40.
  2. ^ Tanenbaum & Bos 2023, p. 2.
  3. ^ Anderson & Dahlin 2014, pp. 41, 45.
  4. ^ Anderson & Dahlin 2014, pp. 52–53.

Thanks Buidhe paid (talk) 07:09, 14 February 2024 (UTC)[reply]

I believe that it would be better to add sources to the existing text: some processors have more than two modes. -- Shmuel (Seymour J.) Metz Username:Chatul (talk) 14:00, 14 February 2024 (UTC)[reply]
I updated the text to mention that some OS have a different number of modes. However, according to Tanenbaum, most OS have two modes, so going into detail on other mode systems is likely UNDUE. The current text of the modes section is excessive detail for this article compared to the coverage of user/kernel/other modes in OS textbooks (just a couple paragraphs out of hundreds or 1000+ pages in those I consulted). Buidhe paid (talk) 04:51, 15 February 2024 (UTC)[reply]

Need wordsmithing for virtual memory

[edit]

The text If a program tries to access memory that is not in its current range of accessible memory, but nonetheless has been allocated to it, the kernel is interrupted in the same way as it would if the program were to exceed its allocated memory. (See section on memory management.) This kind of interrupt is referred to as a page fault. has multiple issues.

  1. There may be holes in the accessible memory
  2. The interrupt might not be a page fault
  3. A page fault might not be an error.

At first I was planning to just throw in a reference to segmentation, but that would not address the other issues. Can someone come up with an accurate and clean rewording that takes into account such issues:

  1. Demand paging
  2. Discontinuous storage allocation
  3. Guard pages for expandable structures
  4. Protection rings
  5. Read only pages and segment
  6. Segmentation

without going into too much detail? -- Shmuel (Seymour J.) Metz Username:Chatul (talk) 15:16, 30 May 2024 (UTC)[reply]

This passage is completely unsourced and therefore the first priority is to rewrite based on reliable sources (as I did above). Wordsmithing is the last step, after sourcing & content. Buidhe paid (talk) 06:32, 5 June 2024 (UTC)[reply]
My main concern is accuracy, but I don't want to make the text awkward in the process of correcting it. -- Shmuel (Seymour J.) Metz Username:Chatul (talk) 17:56, 5 June 2024 (UTC)[reply]

Wiki99 summary

[edit]

Summary of changes as a result of the Wiki99 project (before, after, diff):

  • Large-scale rewrite from reliable sources, fixing many unsourced content issues
  • Added over 100 citations to the latest editions of various OS textbooks
  • Brought coverage of OS topics more in line with the due weight in reliable sources

Further possibilities for improvement:

  • Finish rewrite of article, updating the sections I didn't get to with new content based on reliable sources and summary style
  • Get the article to good article status

Buidhe paid (talk) 07:24, 5 August 2024 (UTC)[reply]

UNIX vs Unix-like, Darwin vs FreeBSD, VM vs OS

[edit]

There are multiple parts where info is just wrong. Darwin uses modified utilities from FreeBSD for compatibility, but the kernel and core of the OS are completely different. Android is not UNIX based; it's based on Linux, which makes it Unix-like, not actual UNIX. A VM is a virtualized or emulated computer, not an OS; it is typically used to run a separate OS from the host machine's OS, but it isn't an OS. Please fix these and other errors. Squid4572 (talk) 02:53, 21 September 2024 (UTC)[reply]

Darwin is a combination of Mach code, FreeBSD (and, at least at one point, also NetBSD) code, and Apple-developed code. Whether BSD code from 4.4-Lite is "UNIX" or "Unix-like" is a matter of debate; the trademark "UNIX" can be used for any operating system that passes the test suite for the Single UNIX Specification, regardless of how much AT&T code, if any, is in the operating system, and most versions of macOS, starting with Leopard, pass that test suite, making them UNIXes. (Lion, for some unknown reason, was never certified as passing it; Sequoia has not - yet - been announced as having passed it.) I just removed that bit about FreeBSD, which, over and above it being incomplete and possibly misleading, was out of place in a sentence talking about the Mac being the first popular computer with a GUI, as that's referring to the situation in 1984, long before the Mac had "OPENSTEP for Mach TNG" as its operating system.
Android has, as far as I know, never passed the Single UNIX Specification test suite; the UNIX-like code in it is the Linux kernel and the Bionic C library, the latter being based on the FreeBSD C library. I've changed it to say "Later on, the open-source Android operating system (introduced 2008), with a Linux kernel and a C library (Bionic) partially based on BSD code, became most popular."
A virtual machine isn't an OS. A hypervisor, which provides a virtual machine, could be considered a type of OS; I renamed the "Virtual machine" section to "Hypervisor" and modified it to say that "A hypervisor is an operating system that runs a virtual machine."
(I think the problems there are a combination of citing OS texts in which some statements were made without sufficient research - Nth-hand sources, for N > 2, so too far removed, a bit like the telephone game - and some misreading of what those sources say.) Guy Harris (talk) 08:44, 21 September 2024 (UTC)[reply]