TechAE Blogs - Explore now for new leading-edge technologies



Computer Systems Performance Modeling and Evaluation - Part 2

Introduction:

In Part 1, we discussed what computer systems performance evaluation is and the goals to be achieved by tuning parameters such as processor speed, response time, and latency. Now we move on to the techniques of performance evaluation. First, let's discuss what a model is and how to construct one.

Models and their Construction

A model is an abstraction of a real-world system used to gain insight into some critical aspect(s) of the system. The real-world system may be a bank teller machine, a CPU in a computer, a database system, a web server, etc.
Models are typically developed from theoretical laws and principles. They may take various forms:
  • physical models (scaled replicas),
  • mathematical equations and relations (abstractions),
  • graphical representations.

Importance of system modeling

    1. Performance analysis requires parameter variation, which may be very difficult in an actual system; evaluating the performance impact of varying the speed of the main memory, for instance, is simply not possible in most real systems.
    2. Measuring some aspects of performance on an actual system can be very time-consuming and difficult.


Modeling Tools

Major classes of modeling tools in use today:
  • Analytical
  • Simulation
  • Testbed
  • Operational Analysis

Analytical modeling tools

Analytical modeling involves constructing a mathematical model of the system's behavior (at the desired level of detail) and solving it. Examples include:
  • queuing models,
  • Petri nets.
Let tc be the time delay observed by a memory reference that hits in the cache, let tm be the corresponding delay when the referenced location is not in the cache, and let h be the cache hit ratio. A simple analytical model of the overall average memory-access time observed by an executing program is then
tavg = h · tc + (1 - h) · tm
To apply this simple model to a specific application program, we need to know the hit ratio h for the program and the values of tc and tm for the system.
The memory-access-time parameters tc and tm can often be found in the manufacturer's specifications of the system. The hit ratio h for an application program, however, is usually found through a simulation of the application. This model can provide insight into the relative effects of increasing the hit ratio or changing the memory timing parameters, for instance.
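As a quick numeric check of this model, the sketch below evaluates tavg for made-up parameter values; the hit ratio and delays are illustrative only, not measurements of any real system:

```python
# Hypothetical parameter values chosen for illustration; in practice,
# t_c and t_m come from the manufacturer's specifications and h from a
# simulation of the application.
def avg_access_time(h, t_c, t_m):
    """Average memory-access time: t_avg = h * t_c + (1 - h) * t_m."""
    return h * t_c + (1 - h) * t_m

# Example: 95% hit ratio, 2 ns cache delay, 100 ns memory delay.
t_avg = avg_access_time(h=0.95, t_c=2.0, t_m=100.0)
print(t_avg)  # 0.95*2 + 0.05*100 = 6.9 ns
```

Note how the miss term dominates: even a 5% miss rate more than triples the average access time relative to a pure cache hit.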
Major advantages of analytic modeling:
  • Highly flexible
  • Low cost
  • Captures the salient features of a system
  • Generates good insight into the workings of the system
The major disadvantage is that the results of an analytical model tend to be much less believable and much less accurate than those obtained with other techniques.

Simulation modeling tools

Simulation is a dynamic tool that uses a computer to imitate the operation of an entire process or system. It involves developing a model of the system and putting it into action to conduct experiments, with an appropriate abstraction of the workload. Simulation is a powerful technique for studying memory-system behavior because of its high degree of flexibility, e.g., to study how the sizes of the cache and memory and their relative delays affect performance, or to study the effectiveness of pipelining in the CPU.
  • Flexibility: High
  • Believability: Low
  • Cost: Medium
  • Accuracy: Medium
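To make the idea concrete, here is a minimal sketch of a simulation model: a direct-mapped cache driven by a synthetic address trace. The cache geometry and the trace are assumptions chosen for illustration; a real study would replay an address trace captured from the application under test.

```python
# A minimal sketch of simulation modeling: a direct-mapped cache fed a
# synthetic address trace. Cache size, block size, and the trace itself
# are illustrative assumptions.
def simulate_hit_ratio(trace, num_lines=64, block_size=16):
    cache = [None] * num_lines          # one stored tag per cache line
    hits = 0
    for addr in trace:
        block = addr // block_size      # which memory block this address is in
        line = block % num_lines        # direct-mapped placement
        tag = block // num_lines
        if cache[line] == tag:
            hits += 1
        else:
            cache[line] = tag           # miss: fill the line
    return hits / len(trace)

# A looping access pattern with good locality yields a high hit ratio.
trace = [i % 512 for i in range(10_000)]
print(round(simulate_hit_ratio(trace), 3))  # 0.997
```

Changing `num_lines` or `block_size` and rerunning is exactly the kind of parameter variation that is easy in a simulation but hard on a real machine.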

Testbeds

A testbed is a realistic hardware-software environment for testing components without having the ultimate system, e.g., new aircraft engines are fitted to a testbed aircraft for flight testing. An important feature is that a testbed focuses on only a subset of the total system; all other aspects are represented by stubs (simulated pieces) that provide their stimulus. In software development, test-bedding is a method of testing a particular module (a function or class) in isolation, i.e., apart from the system it will later be added to. A skeleton framework is implemented around the module so that it behaves as part of the larger program.
In the context of computer networks, testbeds are used to analyze a wide range of components.
Problem: testing newly developed protocols and applications on simulators gives inaccurate results.
Solution: develop testbeds for executing new protocols and applications.
Advantage: improves the understanding of the functional requirements and operational behavior of elements of a system.
Limitation: testbeds cost more to develop and are therefore limited in application. For example, we should not model a complex distributed computing system entirely in a testbed; we would instead use analytical or simulation models as a first pass and a testbed between the initial concept and the final design.
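The stub-and-skeleton idea can be sketched in a few lines. Every name here (the rate service, the billing function) is a hypothetical example invented for illustration, not part of any real system:

```python
# Sketch of test-bedding a module in isolation: the module under test
# (billing logic) talks to its dependency through a stub instead of the
# real system it will later be added to.
class StubRateService:
    """Stand-in for the real rate-lookup service; returns canned stimulus."""
    def rate_for(self, customer_id):
        return 0.10                      # flat rate for every customer

def compute_bill(customer_id, units, rate_service):
    """Module under test: charges the given units at the customer's rate."""
    return units * rate_service.rate_for(customer_id)

# Skeleton framework around the module: drive it and observe its behavior.
bill = compute_bill("c-42", units=250, rate_service=StubRateService())
print(bill)  # 250 * 0.10 = 25.0
```

Because the stub's responses are fixed, any change in the module's output can be attributed to the module itself rather than to its environment.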

Three components of Testbeds

  1. The experimental subsystem: a collection of real-world system components and/or prototypes that we wish to model.
  2. The monitoring subsystem consists of interfaces to the experimental system to extract raw data and a support component to collate and analyze the collected information.
  3. The simulation-stimulation subsystem allows the experimenter to submit inputs and get outputs for experimentation.
The testbed approach provides a method of investigating system aspects that is complementary to simulation and analytical methods.

Operational Analysis (or Direct Measurement)

Operational analysis is the measurement and evaluation of an actual system in operation. It is also used to develop projections about the system's future behavior. It involves instrumenting the system, with hardware and/or software monitors, to extract information. For example, to estimate the time required to access the first-level cache, a simple program that repeatedly references the same variable can be used; to measure the main-memory access time, a program that always forces a cache miss can be used.
  • Accuracy: High
  • Believability: High
  • Flexibility: Low
  • In an actual system, it may be very difficult (or impossible) to change the parameters; evaluating the performance impact of varying the speed of the main memory, for instance, is simply not possible in most real systems.
  • Measurement can be very time-consuming, difficult, and costly.
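The two measurement programs described above can be sketched as follows. Since this is Python, the absolute times are dominated by interpreter overhead, so treat this only as an illustration of instrumenting memory references with a timer, not as a reliable cache microbenchmark:

```python
# Sketch of direct measurement: time repeated accesses to the same
# element (cache-friendly) versus large strides through a big array
# (cache-hostile). In Python the interpreter overhead swamps the cache
# effects; a real microbenchmark would be written in C or assembly.
import time
from array import array

data = array("q", range(1 << 20))        # ~8 MB of 64-bit integers

def timed_accesses(indices):
    start = time.perf_counter()
    total = 0
    for i in indices:
        total += data[i]                 # the memory reference being measured
    return time.perf_counter() - start

same = timed_accesses([0] * 100_000)                              # repeated hits on one element
strided = timed_accesses(list(range(0, 1 << 20, 1 << 12)) * 400)  # 4 KB strides force misses

print(f"same element: {same:.4f}s, strided: {strided:.4f}s")
```

The same instrumentation pattern, a high-resolution timer around the operation of interest, is what software monitors automate at larger scale.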

Thank you for reading!

This is Part 2 of the series, in which I have explained models and the major modeling tools used today to evaluate and improve system performance.

If you found this article useful, feel free to go for PART 3 of this awesome blog series.

Cheers!
