
Java - Call Stack Mechanism

Call Stack:-

Most modern implementations use a call stack, a special case of the stack data structure, to implement subroutine calls and returns. Each procedure call creates a new entry, called a stack frame, at the top of the stack; when the procedure returns, its stack frame is deleted from the stack, and its space may be used for other procedure calls. Each stack frame contains the private data of the corresponding call, which typically includes the procedure's parameters and internal variables, and the return address.
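In Java, one way to observe these frames is to print the current thread's stack trace from inside a nested call. The sketch below (class and method names are invented for illustration) lists one StackTraceElement per live frame, innermost first:

```java
// Hypothetical demo: each nested call pushes a new frame onto the call stack.
public class StackFrameDemo {

    public static void main(String[] args) {
        outer(42);
    }

    static void outer(int n) {      // this frame holds parameter n
        inner(n + 1);
    }

    static void inner(int m) {      // this frame holds parameter m
        // Walk the live frames of the current thread, innermost first.
        for (StackTraceElement frame : Thread.currentThread().getStackTrace()) {
            System.out.println(frame);
        }
    }
}
```

Running it prints frames for inner, outer, and main in that order, each one corresponding to a call that has started but not yet returned.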

The call sequence can be implemented by a sequence of ordinary instructions (an approach still used in reduced instruction set computing (RISC) and very long instruction word (VLIW) architectures), but many traditional machines designed since the late 1960s have included special instructions for that purpose.

The call stack is usually implemented as a contiguous area of memory. It is an arbitrary design choice whether the bottom of the stack is the lowest or highest address within this area, so that the stack may grow forwards or backwards in memory; however, many architectures chose the latter.

Some designs, notably some Forth implementations, used two separate stacks, one mainly for control information (like return addresses and loop counters) and the other for data. The former was, or worked like, a call stack and was only indirectly accessible to the programmer through other language constructs while the latter was more directly accessible.

When stack-based procedure calls were first introduced, an important motivation was to save precious memory. With this scheme, the compiler does not have to reserve separate space in memory for the private data (parameters, return address, and local variables) of each procedure. At any moment, the stack contains only the private data of the calls that are currently active (namely, those that have been called but have not yet returned). Because of the ways in which programs were usually assembled from libraries, it was (and still is) not uncommon to find programs that include thousands of subroutines, of which only a handful are active at any given moment. For such programs, the call stack mechanism could save significant amounts of memory. Indeed, the call stack mechanism can be viewed as the earliest and simplest method for automatic memory management.

A further advantage of the call stack method is that it allows recursive subroutine calls, since each nested call to the same procedure gets a separate instance of its private data.
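For instance, a recursive factorial works only because every pending call keeps its own copy of n in its own frame. A minimal sketch (class name invented):

```java
// Minimal recursion sketch: each pending call keeps its own n in its own stack frame.
public class RecursionDemo {

    static long factorial(long n) {
        if (n <= 1) {
            return 1;                 // innermost call returns and its frame is popped
        }
        return n * factorial(n - 1);  // this frame stays live until the nested call returns
    }

    public static void main(String[] args) {
        System.out.println(factorial(5)); // prints 120; at the deepest point 5 factorial frames coexist
    }
}
```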

Call Stack Mechanism:-

If a statement throws an exception, and that exception is not handled in the immediately enclosing method, it is thrown to the calling method. If the exception is not handled in the calling method, it is thrown to the caller of that method, and so on. If the exception is still not handled by the time it reaches the main() method and main() does not handle it, the exception terminates the program abnormally.
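A small Java sketch of this unwinding, with invented method names: the first call chain is handled two frames above the throw point, while the second is never caught and ends the program with a stack trace.

```java
// Sketch of exception propagation up the call stack.
public class PropagationDemo {

    static void level3() {
        throw new IllegalStateException("thrown in level3"); // no handler here
    }

    static void level2() {
        level3(); // no try/catch: the exception passes through this frame
    }

    static void level1() {
        try {
            level2();
        } catch (IllegalStateException e) {
            // Caught two frames above the throw point; the program continues normally.
            System.out.println("handled in level1: " + e.getMessage());
        }
    }

    public static void main(String[] args) {
        level1(); // handled case
        level2(); // unhandled case: propagates through main and terminates the program
    }
}
```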

Single-Threaded Stack Mechanism Extensions:-

This section discusses call stack mechanisms that are relatively trivial modifications of the traditional single-threaded call stack mechanism. All maintain each thread's call stack as a contiguous region of memory. Solaris [22] uses a multi-threaded call stack mechanism that is practically the standard for modern operating systems. Each thread has its own stack space reserved near the top of virtual address space. The size of the call stack can be set to a custom value during thread creation. If no stack space size is specified, a large value (typically 2MB) is used instead. Stack overflow is detected via the use of a "red zone": a page of memory without read or write permissions appended to the end of a thread's stack space. This page causes a memory fault if accessed.
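Java exposes a similar knob: one Thread constructor takes a stackSize argument, which the JVM and operating system are free to round up or ignore entirely, and exhausting a thread's stack surfaces as a StackOverflowError. A minimal sketch (class name and counter are made up):

```java
// Sketch: requesting a per-thread stack size and observing overflow.
// The stackSize argument is only a hint; some JVMs/platforms ignore it.
public class ThreadStackDemo {

    static long depth = 0;

    static void recurse() {
        depth++;
        recurse(); // recurse until the thread's stack is exhausted
    }

    public static void main(String[] args) throws InterruptedException {
        Runnable task = () -> {
            try {
                recurse();
            } catch (StackOverflowError e) {
                System.out.println("overflowed after ~" + depth + " frames");
            }
        };
        // Request a 64 KiB stack instead of the platform default (often in the MB range).
        Thread small = new Thread(null, task, "small-stack", 64 * 1024);
        small.start();
        small.join();
    }
}
```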

Oberon with active objects [12] can be viewed as a subset of the above call stack mechanism specifically tailored to support a large number of small call stacks. It does this by reserving the upper 2GB of virtual address space for small call stacks that are each a maximum of 128KBytes, thereby supporting up to 16,384 call stacks simultaneously. Concurrent Oberon [18] uses a call stack that is a fixed size determined at thread creation, but allocated on the heap. Overflow is detected before it occurs via a check at the start of every procedure, and results in termination of the offending thread. While this method increases runtime overhead, it has the advantage of working on systems that do not have an MMU. The call stack is garbage collected once the thread terminates.
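A loose Java analogue of that procedure-entry check is sketched below, using a hand-rolled depth counter in place of Oberon's real stack-limit test; the class, method names, and the MAX_DEPTH budget are all invented for illustration:

```java
// Sketch of a Concurrent-Oberon-style check: every "procedure" verifies on entry
// that it still fits within a fixed, pre-allocated budget and fails fast otherwise.
public class SoftStackCheckDemo {

    static final int MAX_DEPTH = 1_000; // stand-in for the fixed stack size chosen at thread creation
    static int depth = 0;

    static void enter() {
        if (++depth > MAX_DEPTH) {
            // Concurrent Oberon terminates the offending thread at this point.
            throw new IllegalStateException("soft stack overflow detected on entry");
        }
    }

    static void leave() {
        depth--;
    }

    static void work(int n) {
        enter();
        try {
            if (n > 0) {
                work(n - 1);
            }
        } finally {
            leave();
        }
    }

    public static void main(String[] args) {
        work(500);   // fine: stays within the budget
        work(2_000); // throws: the overflow is caught by the software check, not the hardware
    }
}
```

The runtime cost of the extra check on every call is the overhead the text refers to; the payoff is that no MMU support is needed.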

US patent 7,477,829 [27] attempts to address both heap contention and stack space in its proposed memory layout, depicted in Figure 2.4. Each stack/heap block is created from an initial base address, from which the thread's stack and heap grow in opposite directions. Unfortunately, the patent does not specify how the initial base addresses are computed, but from Figure 2.4 it can be inferred that the base addresses are intended to be spaced apart evenly. Doing so would require knowledge of the maximum number of threads that the program would execute at one time. Stack and heap overflow are detected via the use of "dead zones" that are "... impossible to read from or to write to... In so doing there is no chance of memory corruption between any of these thread heap/thread stack combinations" [27]. While the patent does not go into further details, it can be inferred that these dead zones operate similarly to Solaris's red zones [22] by generating a page fault or similar hardware interrupt upon access.

All of the above methods suffer from the limitation that stack space for one thread cannot be shared with another, and each thread's stack space must always be large enough to handle the worst case stack usage, or the program will terminate with an error. This can lead to the situation where a program can prematurely "run out" of stack space due to a single thread exceeding its allotted stack space, even if there is plenty of unused stack space preallocated to other threads. These conditions also force a trade-off between the maximum allowable stack space per thread and the number of threads that can exist in a system at one time, which seems to run counter to the spirit of resource-sharing mechanics that govern system memory and hard disk space.
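To make the trade-off concrete with rough numbers: on a 32-bit system with about 2 GB of user-mode virtual address space, reserving the common 2 MB default stack per thread caps the process at roughly 1,000 threads, even if every thread only ever touches a few kilobytes of its stack; halving the reservation to 1 MB doubles the thread ceiling but also halves the worst-case call depth each thread can survive.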

It is my opinion that this trade-off is a vestige of the success of the MMU (which gives the most assistance to processes that do not share address space) combined with the fact that a single-threaded program only requires a single call stack.

Multi-Threading's Call Stack Problems:-

Before examining the problems that multi-threading introduces, it is a good idea to first examine why the traditional call stack mechanism works so well for single-threaded processes. Two important facts form the basis of its success:

• The MMU allows each process to use the entire address space as if it were the only process running on the system. Physical memory is not reserved for the process until it actually uses the space.

• For any single-threaded program, there is only one call stack required. As such, the operating system can, by default, reserve a stack so large that it is usually safe to assume that a properly-functioning program will not exhaust it. Reserving such a large portion of memory does not cause any negative effects because:

• Virtual address space will not map to physical memory until the program actually uses the virtual address space.

• The operating system automatically maps the used virtual address regions to physical memory that will not conflict with other processes.

Multi-threading significantly changes the rules. A multi-threaded program requires one call stack per thread, all of which must exist within the same address space. This means that the MMU cannot help with multiple threads as it does with multiple processes. Most modern operating systems simply create one "large" call stack for each thread at the top of virtual address space. However, when there are a large number of threads, this causes the process to run out of virtual address space before it is actually out of memory. Shrinking each thread's stack space until every thread's stack fits may lead to one thread running out of stack space while there is plenty of unused stack space elsewhere. This is especially likely to happen if thread stack usage patterns differ (e.g. one thread makes heavy use of recursion). It is possible to manually set stack space on a thread-by-thread basis (e.g. giving a thread that uses a lot of stack space a larger stack).

However, this both increases the burden on the programmer and decreases the flexibility of the program (threads are locked into roles, and not all threads have the ability to temporarily use a large amount of stack space). This harkens back to the days before the MMU, when programmers had to manually give each process a certain amount of memory.

With the number of cores on chips increasing, parallelism being touted as the way to increase performance in the future [23, 6, 10], and current operating system stack mechanics being a bottleneck for the number of threads a process can run, it is clear that the lowly call stack is in need of investigation.
