
Integration of Performance Management into the Application Lifecycle



Eduard Tudenhöfner
Integration of Performance Management into the Application Lifecycle
ISBN: 978-3-8428-2047-0
Production: Diplomica® Verlag GmbH, Hamburg, 2011
Simultaneously submitted as a master's thesis at the Hochschule für Technik (HFT Stuttgart), Stuttgart, Germany, 2011
This work is protected by copyright. The rights thereby established, in particular those of translation, reprinting, public recitation, extraction of illustrations and tables, broadcasting, microfilming or reproduction by other means, and storage in data processing systems, remain reserved, even where only excerpts are used. Any reproduction of this work or of parts of it, even in individual cases, is permitted only within the limits of the statutory provisions of the copyright law of the Federal Republic of Germany in its currently applicable version, and is in principle subject to remuneration. Violations are subject to the penal provisions of copyright law.
The use of common names, trade names, product designations, etc. in this work does not justify the assumption, even in the absence of a special marking, that such names are to be regarded as free in the sense of trademark legislation and may therefore be used by anyone.
The information in this work has been compiled with care. Nevertheless, errors cannot be ruled out completely, and neither the publisher nor the authors or translators assume any legal responsibility or liability for any remaining incorrect statements and their consequences.
© Diplomica Verlag GmbH
http://www.diplomica.de, Hamburg 2011

Abstract
As applications grow in size and complexity, it becomes increasingly important to continually manage and improve application performance in order to decrease the risk of performance failures and to maximize revenue. Application performance and business revenue are closely related: the longer an application needs to respond to a user request, the fewer revenue-generating users can be served in a given amount of time. In addition, optimized performance can decrease the time and money required for application development and maintenance. Because of this impact on business results and project budgets, application performance is a major factor in overall success. However, as many development models consider performance only to a limited extent, it is difficult to provide acceptable application performance without performance management in place.
The focus of this master thesis is to establish a model that handles performance management and is generally applicable to different types of projects. In order to define reasonable and essential content for this model, different areas of performance management are analyzed and evaluated in depth. This content is then extended to form a model that has tailoring capabilities and is organization- and project-independent. The model has a modular design with a high degree of abstraction, where the overall flow of performance management is defined by multiple building blocks. It is highly flexible, allowing integration not only at the beginning of a project but also at later project stages. Such a late integration is especially useful for projects that decide to introduce performance management after severe performance issues have been discovered.
Multiple currently available development models are analyzed for their support of performance considerations and their ability to be enhanced for performance management. At the end of the thesis, the applicability of the performance management model is evaluated by integrating it into one development model.
Keywords: Application Performance Management, Application Performance Engineering, Performance Management, Performance Engineering, Software Performance Engineering, Application Lifecycle Management.

Acknowledgements
This thesis would not have been possible without the help, ideas, and insights of
the following people, to whom I would like to express my appreciation for their
efforts and support:
· Prof. Dr. Gerhard Wanner for academic supervision and great lectures
· M.Sc. Patrice Bouillet for supervision and valuable criticism
· M.Sc. Stefan Siegl for valuable criticism
· Konrad Pfeilsticker, Andreas Zobel, Stefan Hartmann, Marc Bauer, Steffen Apprich, and Dirk Maucher for sharing valuable ideas and experiences
· Ivan Senic, Matthias Huber, Mario Schwarz, Heiko Friedrich, and Jens Müller for reading my thesis and providing valuable criticism
· My wife for her never-ending support during the last six months

Contents
1 Introduction . . . . . 1
  1.1 Motivation . . . . . 1
  1.2 Current Situation . . . . . 2
  1.3 Objectives . . . . . 2
  1.4 Definition of Terms . . . . . 3
  1.5 Outline . . . . . 3
2 Software Performance Engineering . . . . . 5
  2.1 Overview . . . . . 5
  2.2 Quantifying Performance . . . . . 6
    2.2.1 Interrelationships of Performance Characteristics . . . . . 7
    2.2.2 The Cost to Fix a Performance Problem . . . . . 9
  2.3 Problems of Performance Engineering . . . . . 9
  2.4 Reactive vs. Proactive Performance Management . . . . . 11
  2.5 Conclusion . . . . . 12
3 Analysis . . . . . 13
  3.1 Software Performance Engineering Process . . . . . 13
    3.1.1 Performance Activities . . . . . 14
    3.1.2 Summary . . . . . 22
  3.2 Performance Activities . . . . . 24
    3.2.1 Planning of Performance Management . . . . . 24
    3.2.2 Acquiring Performance Tools . . . . . 24
    3.2.3 Performance Education and Trainings . . . . . 25
    3.2.4 Identification of Performance Risks and Definition of Performance Objectives . . . . . 25
    3.2.5 Architecture Assessment . . . . . 26
    3.2.6 Performance Testing . . . . . 27
    3.2.7 Performance Tuning . . . . . 31
    3.2.8 Capacity Management . . . . . 33
    3.2.9 Application Monitoring . . . . . 34
    3.2.10 Summary . . . . . 35
  3.3 Performance Roles and Artifacts . . . . . 36
    3.3.1 Roles . . . . . 36
    3.3.2 Artifacts . . . . . 39
    3.3.3 Summary . . . . . 42
  3.4 Development Models . . . . . 42
    3.4.1 Rational Unified Process . . . . . 42
    3.4.2 Scrum . . . . . 48
    3.4.3 Summary . . . . . 51
  3.5 Solution Approach . . . . . 52
  3.6 Conclusion . . . . . 52
4 Application Performance Management Model . . . . . 54
  4.1 Overview . . . . . 54
  4.2 Building Blocks . . . . . 55
    4.2.1 Planning Performance Management . . . . . 57
    4.2.2 Predicting Performance . . . . . 58
    4.2.3 Performance Testing and Analyses . . . . . 61
    4.2.4 Monitoring and Trend Identification . . . . . 63
  4.3 Proceeding and Dependencies . . . . . 65
    4.3.1 Plan-Do-Verify Cycle . . . . . 65
    4.3.2 Dependencies among Performance Activities . . . . . 66
  4.4 Performance Roles and Artifacts . . . . . 67
    4.4.1 Roles . . . . . 68
    4.4.2 Artifacts . . . . . 69
  4.5 Example for the Definition of Performance Objectives . . . . . 72
  4.6 Tailoring Capabilities of the APM Model . . . . . 74
  4.7 Conclusion . . . . . 76
5 Integration of Performance Management into the Rational Unified Process . . . . . 77
  5.1 Overview . . . . . 77
  5.2 Application Lifecycle . . . . . 81
  5.3 Integration Scenarios . . . . . 82
    5.3.1 Early Integration . . . . . 82
    5.3.2 Late Integration . . . . . 89
  5.4 Rational Method Composer . . . . . 92
  5.5 Conclusion . . . . . 95
6 Review . . . . . 96
  6.1 Summary . . . . . 96
  6.2 Lessons Learned . . . . . 97
  6.3 Outlook . . . . . 97
Bibliography . . . . . 99
Appendix . . . . . 105

List of Figures
2.1 Interrelationships between resource utilization, throughput, and response time (adapted from [Hai06]) . . . . . 8
2.2 The cost to fix a performance problem, measured in dollars and in time spent [Hai06] . . . . . 9
3.1 SPE process as defined by Smith [SW01] . . . . . 14
3.2 Software execution model with resource requirements and system execution model with calculated resource consumptions under different workload characteristics (adapted from [Smi96]) . . . . . 19
3.3 Performance tuning cycle [MVBM] . . . . . 32
3.4 Rational Unified Process (adapted from [RUP]) . . . . . 43
3.5 Rational Unified Process is extended with the SPE process [dBPH08] . . . . . 47
3.6 A performance activity from the SPE process is broken down to show its tasks and relationships [dBPH08] . . . . . 47
3.7 Scrum methodology (adapted from [Sch]) . . . . . 49
3.8 Performance engineering in Scrum (adapted from [Bal]) . . . . . 51
4.1 Building blocks of the APM model . . . . . 55
4.2 APM model mapped to phases and iterations . . . . . 55
4.3 APM model consisting of building blocks and performance activities . . . . . 56
4.4 The reduced scope of a building block . . . . . 57
4.5 Plan-Do-Verify cycle . . . . . 65
4.6 Performance testing and its dependencies to other activities . . . . . 67
4.7 Needed relationships when defining performance objectives . . . . . 72
5.1 Rational Unified Process is extended for performance management . . . . . 78
5.2 Lifecycle of the Rational Unified Process after two product releases . . . . . 81
5.3 Performance activities within the inception phase . . . . . 83
5.4 Performance activities within the elaboration phase . . . . . 84
5.5 Performance activities within the construction phase . . . . . 86
5.6 Performance activities within the transition phase . . . . . 87
5.7 Performance activities within the production phase . . . . . 88
5.8 Late performance integration phase and its performance activities . . . . . 89
5.9 Late performance integration after performance problems are found . . . . . 90
5.10 Elaboration phase is extended with activities of the APM model . . . . . 92
5.11 The activity Define Performance Objectives with related roles and artifacts from the APM model and from RUP . . . . . 93
5.12 Performance engineer supports design review . . . . . 93
5.13 Elaboration phase workflow is extended with application monitoring . . . . . 94
5.14 Testing workflow of the elaboration phase is extended with performance testing . . . . . 94
5.15 Performance testing workflow of the elaboration phase . . . . . 95
6.1 Confluence landing page showing the APM model . . . . . 109
6.2 Relationships of a performance engineer . . . . . 109
6.3 General overview for the activity Define Performance Objectives . . . . . 110
6.4 Relationships to roles, activities, and artifacts of the activity Define Performance Objectives . . . . . 110
6.5 Overview matrix showing the relationships between artifacts, activities, and executive roles . . . . . 111
6.6 Adjustments in the Rational Unified Process . . . . . 112
6.7 Activities that are executed by a performance manager . . . . . 112
6.8 Production phase workflow . . . . . 113
6.9 Performance management workflow in the production phase . . . . . 114

List of Tables
4.1 Use case example enriched with performance characteristics . . . . . 73
4.2 Use case example with defined performance objectives . . . . . 74
6.1 Documentation platform evaluation . . . . . 108

1 Introduction
1.1 Motivation
Despite the widespread recognition that performance is important to the success of a project, many software products fail to respond fast enough to user requests or to handle a certain amount of parallel business transactions (a business transaction is usually initiated by a customer and can be, for example, an ordering process in a shopping system). This is because today's projects are result-oriented and focus on the functionality to be implemented. Such projects pay little attention to application performance because it still does not have the importance that, for example, unit testing has. Moreover, today's development models usually consider performance management only in a limited way within their lifecycle and often follow what is known as the fix it later approach (cf. [SDD02]). The fix it later approach concentrates on software correctness and defers performance considerations to the integration testing phase, where additional hardware is added or a system is tuned when performance issues are detected (cf. [Smi86]).
The problem with neglecting performance management is that performance issues often do not emerge until an application is put into production, where it is likely to suffer the consequences of a performance failure. These consequences can be increased operational costs, increased development and hardware costs, and damaged customer relations. If severe performance issues are discovered during production, it may be too expensive to re-design a system, or even impossible to add additional hardware in order to meet performance objectives. Such projects are likely to be canceled and their costs will be unrecoverable (cf. [WS03]).
To avoid such situations, performance management should be integrated into an application's lifecycle from the beginning. This means that performance objectives have to be defined early within a project and continually verified as the application evolves. Integrating performance management from the beginning reduces overall project risk and costs, because performance issues can be spotted and corrected early in the lifecycle, even before end users are affected. Furthermore, the application is extensively tested for its ability to meet performance objectives before it is deployed to a production environment and exposed to real users.
1.2 Current Situation
NovaTec GmbH is a company providing IT services in the areas of consulting, project management, software engineering, application architectures, provisioning, performance management, and process engineering. The competences of NovaTec are logically grouped into so-called competence areas. This thesis is written in the competence area Application Performance Management, whose purpose is to support customers in performance engineering and analysis tasks in order to detect performance bottlenecks and stability issues. Additionally, this competence area helps customers introduce performance management into their projects.
Introducing performance management to customers is currently based solely on the experience of NovaTec experts. This experience is only partially documented and difficult to reuse. The problem is that no model is used to guide the introduction of performance management into different types of projects. For that reason, a well-documented performance management model is required that captures the experience of the experts and is generally applicable to different development models (development models is used here as a general term comprising both processes and frameworks for software development).
1.3 Objectives
As there is currently no model available for guiding performance management, the objectives of this thesis are as follows:
Evaluation of Existing Approaches
Existing approaches to performance management have to be analyzed and evaluated to determine how suitable they are for today's project circumstances.
Performance Activities and their Responsibilities
It must be analyzed which performance activities are to be considered in an application's lifecycle and by whom they have to be performed. Furthermore, it has to be evaluated which performance artifacts are necessary and how they relate to particular performance activities.
Analysis of Development Models
As today's development models do not consider performance management, two development models have to be analyzed in order to determine how far they already support performance considerations and how they can be enhanced for performance management.
Creation
A performance management approach should be generally applicable to different development models. Thus, if the outcome of the analysis tasks indicates that an available approach is suitable for today's projects, it should be selected and adapted to the needs of NovaTec. Otherwise, a new approach that consolidates the analysis results has to be created and documented.
Application Example
As a final step, the performance management approach should be combined with one development model in order to show an example of how the approach can be used.
The outcome of this master thesis serves as a starting point for a documented performance management approach that is used by NovaTec experts for introducing performance management to customer projects.
1.4 Definition of Terms
This thesis is based on research done in the area of Software Performance Engineering (SPE), a term often used in the literature interchangeably with Application Performance Management (APM). Based on the literature, both disciplines encompass management and engineering activities, roles, and practices that are performed throughout an application's lifecycle (cf. [SW01]). Nevertheless, the competence area Application Performance Management differentiates between performance engineering and performance management. Performance engineering concerns the technical aspects of executing performance activities, whereas performance management covers performance engineering activities and additionally comprises management activities. Because most of the analyzed literature makes no distinction between SPE and APM, the second and third chapters use both terms interchangeably so as not to misrepresent an author's intent. Afterwards, the terms APM and performance management are used consistently until the end of the thesis.
1.5 Outline
Chapter 2
describes Software Performance Engineering in general and shows how performance can be quantified. The problems of performance engineering are demonstrated and the differences between reactive and proactive performance management are explained.
Chapter 3
analyzes a process for performance engineering and evaluates which performance activities can be used in today's projects. Additionally, performance roles and artifacts are depicted and discussed in depth. Moreover, two different development models are analyzed for their suitability for performance integration.
Chapter 4
depicts the selected solution approach and describes its structure and performance activities. Furthermore, the defined performance roles and artifacts with their interdependencies are specified.
Chapter 5
applies the selected approach and integrates it into a development model in two different ways. The first handles a best-case scenario, where performance management is integrated from the beginning of a project. The second deals with an integration after performance issues have been found.
Chapter 6
reviews the results of this thesis and describes lessons learned. Additionally, an outlook on future work is provided.

2 Software Performance Engineering
This chapter gives an overview of Software Performance Engineering and describes the negative consequences of performance failures. Additionally, it shows how performance can be quantified and what the problems of performance engineering are. Moreover, the differences between reactive and proactive performance management are demonstrated.
2.1 Overview
Software Performance Engineering (SPE) comprises management and engineering activities, roles, and practices at every phase of the application lifecycle in order to satisfy performance requirements. SPE begins early in the lifecycle, using quantitative methods to identify adequate designs and to eliminate those unable to meet performance objectives (cf. [SW06]). Connie U. Smith coined the term SPE as early as 1981 and noted that software development was performed with the fix it later approach, meaning that performance was never considered during design but was an afterthought (cf. [Smi81]). Almost 30 years after the concepts of SPE were presented, it is still not incorporated into the practices of software engineering, although the consequences are obvious. Negative consequences of performance failures can be:
Damaged Customer Relations
The reputation of the organization suffers because people will continue to associate poor performance with the product, even if the problem is fixed later (cf. [SW01]).
Lost Income
Revenue is lost or penalties have to be paid due to late delivery of the product (cf. [SW01]).
Increased Development Costs
Delivering application features requires more time and effort if performance issues hinder the acceptance of these features, resulting in additional development costs.
Increased Maintenance Costs
Additional time and resources are required if performance issues are found and must be fixed while an application is in production, increasing overall maintenance costs.
Increased Hardware Costs
Tuning a system by adding hardware, such as more CPUs or hard disks, increases hardware costs.
Delayed Project Schedules
Project schedules slip if unpredictable issues occur that must be corrected.
Project Failure
Projects will be canceled when it is impossible to meet performance objectives by tuning or when it is too expensive to re-design a system (cf. [SW01]).
Despite the fact that performance is an essential quality attribute, many projects face these consequences because they are not able to meet overall performance objectives. Reasons for this can be insufficiently defined or undefined performance goals and the fact that performance is addressed late in an application's lifecycle, usually only after performance issues are discovered. These reasons and the problems of performance engineering are further depicted in section 2.3.
2.2 Quantifying Performance
Performance belongs, among others, to the non-functional requirements of a software system, which include constraints and quality attributes. Performance can be described by different characteristics. According to Haines [Hai06] and Molyneaux [Mol09], the most common and relevant ones are the response time as seen by the user, the throughput of requests, resource utilization, and the availability of the application. The following listing depicts each characteristic in detail.
Response Time
This characteristic specifies the time taken by the system to respond to a user request. It defines the core of the end-user experience and is the most important indicator of perceived performance (cf. [Hai06]). For an enterprise application it is defined as the time taken to complete a single business transaction (cf. [MS08]). Microsoft found that a two-second delay in response time within their search engine Bing reduced user satisfaction by 3.8% and resulted in 4.3% less revenue per user (cf. [Inc]). Therefore the goal is to minimize the end-user response time.
Throughput
Throughput refers to the number of events that can be performed within a period of time. For an enterprise application it is defined as the number of requests the application can serve in a time period (cf. [MS08]). A high request throughput means that requests are served quickly and efficiently, and it underlines the overall efficiency of the application itself; therefore the goal is to maximize throughput (cf. [Hai06]).
Resource Utilization
This characteristic describes the amount of resources consumed during request processing (cf. [MS08]). Resources include CPU, memory, disk I/O, and network I/O. The goal is to minimize resource utilization.
Availability
Availability is the amount of time a system is in a functioning condition for the end user (cf. [Sie]). A lack of availability can lead to substantial business cost, even for a small outage (cf. [Mol09]). High availability means that an application is available a high percentage of the time when it is supposed to be, so that effective use can be made of the application (cf. [Hai06]). According to studies by the Aberdeen Research Group [Sim08], the industry average is 97.8% availability. This roughly two percent lack of availability means that a site is out of business 8 days a year. For an e-commerce site generating $50,000 a day, this translates into a loss of $400,000 in yearly revenue. For that reason the goal is to increase the availability of a system.
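As a quick cross-check, the cited figures follow directly from the availability percentage:

    (1 - 0.978) x 365 days ≈ 8 days of downtime per year
    8 days x $50,000 per day = $400,000 in lost yearly revenue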
Haines and Molyneaux describe performance using these four characteristics. However, availability is not a subpart of performance, because availability and performance are both non-functional requirements that can be specified independently of each other. For example, an application can be available a high percentage of the time but may perform poorly because it does not have enough resources for processing a certain amount of user requests; thus performance and availability are autonomous. For that reason, availability is not used when quantifying performance. Response time, throughput, and resource utilization, in contrast, are not autonomous and belong together because they influence each other; therefore these three characteristics are used in this thesis for quantifying performance.
According to Barber [Bar04a], performance characteristics can further be classified into three main categories. Speed indicates whether an application can respond quickly enough for the intended users. Scalability describes whether an application can handle the expected user load and beyond. Stability signifies whether an application is stable under expected and unexpected user loads.
2.2.1 Interrelationships of Performance Characteristics
Figure 2.1 shows the interrelationships between resource utilization, application server throughput, and end-user response time as the user load increases. Resource utilization increases with user load because more memory and CPU power is needed to handle user requests. At a certain point the resources become saturated because the system is trying to process more requests than it is capable of. Throughput increases similarly with resource utilization. The number of concurrent users at the throughput saturation point represents the maximum concurrency of the application. As resources become saturated, the throughput starts to degrade and the increase in response time becomes perceptible to the user (cf. [JWH02]). At this point an application enters the buckle zone, a state in which system components have become exhausted and the response time increases exponentially (cf. [Hai06]), resulting in a poorly performing application.
"The effect of the buckle zone is a severe drop in application performance, due to
the system spending most of its time managing resource contention, rather than
servicing requests." [MV05]
Figure 2.1: Interrelationships between resource utilization, throughput, and response time (adapted from [Hai06])
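The saturation behavior sketched in figure 2.1 can be reproduced with the classic asymptotic bounds of operational analysis. The following minimal Java sketch (all parameter values are assumed for illustration and are not taken from the thesis) prints how throughput flattens and response time climbs once the bottleneck resource saturates:

    // Asymptotic bounds for a closed system with N concurrent users:
    // throughput X(N) <= min(N / (D + Z), 1 / Dmax) and response time
    // R(N) >= max(D, N * Dmax - Z), where D is the total service demand per
    // request, Dmax the demand at the bottleneck resource, and Z the think time.
    public class SaturationSketch {
        public static void main(String[] args) {
            double dTotal = 0.5; // total service demand per request in seconds (assumed)
            double dMax   = 0.2; // demand at the bottleneck resource in seconds (assumed)
            double think  = 2.0; // user think time Z in seconds (assumed)
            // Below roughly N* = (D + Z) / Dmax users the system scales; beyond
            // that point the bottleneck saturates and response time grows with
            // every additional user.
            System.out.printf("saturation at about %.1f users%n", (dTotal + think) / dMax);
            for (int n = 1; n <= 31; n += 5) {
                double x = Math.min(n / (dTotal + think), 1.0 / dMax); // requests per second
                double r = Math.max(dTotal, n * dMax - think);         // seconds
                System.out.printf("N=%2d  X=%5.2f req/s  R=%5.2f s%n", n, x, r);
            }
        }
    }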
A study conducted by Gomez with 1,500 consumers found that poorly performing web applications result in less revenue per visitor and a higher abandonment rate per user (cf. [Gom]). The following listing shows the impact of poorly performing applications in detail.
· More than 75% of online consumers left for a competitor's site rather than suffer delays at peak traffic times.
· 88% of online consumers are less likely to return to a site after a bad experience.
· Almost 50% expressed a less positive perception of a company after a single bad experience.
· More than a third told others about their bad experience with a site.
2.2.2 The Cost to Fix a Performance Problem
Performance problems that occur in an application's lifecycle should be found and fixed as early as possible, because the earlier an issue is found, the cheaper it is to fix. Figure 2.2 illustrates that fixing a performance problem late increases development costs, because costs grow exponentially, and the time to deliver the product is affected. For example, an inappropriate framework for the communication between components (such communication can be realized, for example, with web service frameworks) can be changed within a short time during design, but if this wrong choice goes undiscovered into development, a re-design and re-implementation is inevitable. If the choice of an inappropriate communication framework is found in Quality Assurance (QA), several parts of the application not only have to be re-developed, but also re-tested. If such a wrong choice reaches the production phase, it affects the end users and costs in terms of productivity, reputation, and revenue (cf. [Hai06]).
Figure 2.2: The cost to fix a performance problem, measured in dollars and in time spent [Hai06]
2.3 Problems of Performance Engineering
Performance engineering is time-consuming and requires additional resources to carry out activities and tasks in order to meet performance objectives. In today's projects, management has a tight project plan and budget, and performance activities lead to delays in reaching milestones or delivering the final product if these activities are not planned from the beginning. From management's point of view, performance engineering seems to cause more costs than it brings value to the project. As higher management is interested in financial information, it is difficult to present the costs and benefits of performance engineering to them in an appropriate way. The problem of performance engineering is that it is hard to demonstrate success, whereas poorly performing applications are clearly observable as failures (cf. [SB82]). Applying SPE is invisible because successful management of performance leads to not having performance issues (cf. [Smi03]). As stated by Smith, managers often ask:
"Why do we have performance engineers if we don't have performance problems?" [Smi03]
But even if performance activities are planned in advance, many projects suffer the consequences of performance failures due to insufficiently defined performance objectives. These objectives are not verified throughout an application's lifecycle because it is unclear what exactly has to be achieved. The outcome is that performance activities are omitted due to ambiguities.
According to Menascé [Men02], a possible cause for the missing performance part in software engineering is the lack of scientific principles and models. Whereas software engineers can write code without being obligated to rely on formal and quantitative models, conventional engineers must use principles and models based on mathematics, computational science, and physics for their design process. Menascé also states that the vast majority of computer science and related engineering programs at universities and colleges do not provide any curricula related to software performance, leading to a lack of performance education among graduates.
According to Williams [WS03], management needs a financial justification before committing funds to SPE. Developers and performance engineers are aware of the consequences of performance failures from a technical perspective and are convinced of the value of adopting SPE. Yet management frequently remains unconvinced, because a technical justification does not provide the financial information that management needs in order to make a decision.
The conclusion is that performance activities should be planned in advance in order to reduce the risk of omitting them. Additionally, performance objectives should be clearly defined and continually verified. Moreover, not only developers and engineers must be convinced of the need for performance engineering, but also the higher management providing the funds. It is important to continually justify the efforts of SPE in order to track the costs and benefits of applying SPE for management.
2.4 Reactive vs. Proactive Performance Management
There are two different ways performance can be managed within a project: either reactively or proactively. Reactive performance management waits for performance problems to appear and then deals with them in an ad hoc way (cf. [SW01]). It follows a make it run, make it right, make it fast, make it small approach and therefore has the same drawbacks as the fix it later approach, which was described in section 1.1 and in section 2.1. According to Smith [SW01], the following statements are common when managing performance in a reactive way.
· "Let's just build and see what it can do."
· "We'll tune it later; we don't have time to worry about performance now."
· "We can't do anything about performance until we have something running to measure."
· "Don't worry. Our software vendor is sure their product will meet our performance goals."
· "Performance? That's what version 2 is for."
· "We'll just buy a bigger processor."
Proactive performance management includes techniques to identify and respond to performance issues early in the process and to avoid the implications of the fix it later approach. By defining concrete performance requirements, by planning and forecasting performance, and by analyzing results, proactive performance management reduces the probability that performance issues will occur. A proactive approach can be described by several characteristics, which are depicted in the following listing.
· The project has a performance manager who is responsible for tracking, identifying, and communicating performance issues (cf. [SW01]).
· The performance manager is known to everybody on the project (cf. [SW06]).
· A process is available for guiding the project team in defining measurable performance objectives, verifying them, and ensuring that a system remains in performance compliance.
· Such a process makes it possible to act on performance issues before they reach the production environment and affect end users.
· Project members are educated and trained in the performance process and know when to apply which performance techniques.
· An appropriate performance risk management plan is available in the project (cf. [SB82]).
· Performance issues are handled in the same way as functional defects.
· An application is tested for its ability to meet performance objectives before it is deployed to a production environment.
The conclusion is that today's projects should strive for proactive performance management, because it avoids the implications of the fix it later approach by providing different techniques for identifying and responding to performance issues. Such performance management should be introduced to a project as early as possible in order to reduce the risk of not meeting performance objectives.
2.5 Conclusion
Different cohesive characteristics are needed to quantify the performance of an application: response time, throughput, and resource utilization. Availability is not used for quantifying performance because the two are autonomous and independent of each other. Reaching the buckle zone must be avoided because the application becomes unusable to the end user, which negatively influences revenue per user. Because the cost to fix a performance issue grows exponentially over the course of the project phases, an issue must be spotted and fixed as early as possible in order not to affect end users and to reduce overall costs.
Introducing performance management early requires additional effort and resources, resulting in postponed project deadlines if performance management is not planned from the beginning. For that reason, SPE seems, from management's viewpoint, to burden project budgets without bringing value. Thus the conclusion is that management is one of the main causes for the absence of SPE in today's projects, because technical justifications are not sufficient for them. It is therefore essential to continually demonstrate the efforts of applying SPE in order to present costs and benefits to higher management.
In addition, a proactive performance management approach should be followed in order to reduce performance risks and to avoid the implications of the fix it later approach.

3 Analysis
This chapter analyzes and evaluates a process for performance engineering with respect to its applicability in today's projects. Afterwards, different performance activities, roles, and artifacts are assessed and evaluated. Moreover, the Rational Unified Process and Scrum are analyzed for their suitability for performance integration.
3.1 Software Performance Engineering Process
In the area of software development, several approaches to performance engineering exist, such as those proposed by Aziz et al. [ACDL07], Digital Innovations [Inn], or Schmietendorf et al. [SDD02]. Most of these approaches were made for a specific development model or a specific project, so their reusability and adaptability suffer. The approach that is most formalized and has been used in the past as a basis for integration into several development models is the Software Performance Engineering process. For example, Paes and Hirata [dBPH08] used the SPE process for an integration into the Rational Unified Process, whereas Balasubramanian [Bal] used it for Scrum. For that reason, the SPE process is chosen to be further analyzed and evaluated in this section in order to determine whether it is applicable to today's projects.
The SPE process was established by Connie U. Smith [SW01] and depicts important performance activities that are executed for one project phase and repeated throughout the development lifecycle. The SPE process is especially suitable for iterative and incremental development, because such development allows the performance to be refined and improved in multiple passes. The SPE process is organization- and project-independent; therefore it can be adapted and integrated into different development models through tailoring capabilities (it is only stated that the SPE process is organization- and project-independent and can therefore be adapted to the needs of a project; further details about which parts of the SPE process can be adapted and modified are not described). The SPE process is illustrated in figure 3.1. Its activities are described in section 3.1.1.
Figure 3.1: SPE process as defined by Smith [SW01]
3.1.1 Performance Activities
This section describes the SPE process and its performance activities in depth. The first part of each performance activity depicts what is done within the activity, whereas the second part evaluates the activity in order to decide whether it is reasonable and can be used in today's projects.
3.1.1.1 Assess Performance Risk
It is important to understand the level of performance risk: anything that can endanger the success of a project with regard to performance must be identified. A project that supports critical business functions or is important to the revenue of a company may result in a business failure if performance objectives are not met; thus such a project has a high risk of performance failure. The risk of performance failure can be increased by different factors, such as inexperienced developers, lack of familiarity with a new technology, or a tight project plan. Assessing performance risks at the beginning of a project makes it possible to determine how much SPE effort is needed. Depending on the level of risk, the effort can be either small or more significant. For a low-risk project the effort might be 1% of the total project budget, whereas high-risk projects might need 10% of the overall budget for SPE (cf. [Smi03]). The level of performance risk can be assessed by identifying, determining, and estimating the impact of all potential risks. The impact of a risk consists of the probability of its occurrence and the degree of damage severity (cf. [WWD97]).
Because performance risks can affect the success of a project, they should be identified and estimated as early as possible in order to decrease the chance of a performance failure (cf. section 2.1). The effort for performance engineering should be appropriate to the risk of a project, because otherwise too much time and money is spent on producing a highly performant application that is, for example, only used for uncritical tasks. Risks should be prioritized according to their impact so that the most important ones, those with high severity that are most likely to occur, are taken care of first. Furthermore, it should be defined how risks are to be treated so that they can be avoided or mitigated in order to reduce the overall performance risk. For example, a risk associated with the usage of a new framework could be mitigated by consulting experts or through training. Risks should be periodically re-assessed and tracked throughout a project, because their impact can change or new risks might emerge. For these reasons, performance risk assessment should be addressed in today's projects, because it is important and can avoid the implications of performance failures.
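The impact estimation described above can be made concrete with a small prioritization sketch. The following Java example (risk descriptions and weights are invented for illustration) scores each risk as probability times severity and sorts the list so that the highest-impact risks are treated first:

    import java.util.Comparator;
    import java.util.List;

    // Illustrative sketch: prioritizing performance risks by impact,
    // where impact = probability of occurrence x damage severity.
    public class RiskPrioritization {
        record Risk(String description, double probability, double severity) {
            double impact() { return probability * severity; }
        }

        public static void main(String[] args) {
            List<Risk> risks = List.of(
                new Risk("Team unfamiliar with the new framework", 0.7, 8),
                new Risk("Tight project schedule",                 0.9, 5),
                new Risk("Inexperienced developers",               0.4, 6));
            risks.stream()
                 .sorted(Comparator.comparingDouble(Risk::impact).reversed())
                 .forEach(r -> System.out.printf("%.1f  %s%n", r.impact(), r.description));
        }
    }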
3.1.1.2 Identify Critical Use Cases
All use cases must be identified that are important to the operation of the software or that carry a performance risk. Such use cases are usually those that are frequently executed by many users and generate revenue. The 80-20 rule can be applied here, where a small subset (20%) of use cases accounts for most of the uses (80%) of the system (cf. [Den05]). Thus it is important to start by focusing on the 20% of use cases that create measurable workload and are executed by end users 80% of the time.
As today's development models are usually driven by use cases (use cases define the functionality to be implemented in such development models), these use cases should be analyzed for their performance criticality in order to determine which ones to focus on. Because most use cases only have functional requirements and non-functional attributes are neglected, applying the 80-20 rule requires examining the entire set of use cases: it first must be determined which use cases are essential from a performance viewpoint and which are not. It would be more efficient to directly enrich each use case with performance characteristics when it is defined by the responsible use case department. Each use case would then already carry an indication of whether it might be critical from a performance viewpoint, so the entire set of use cases would not have to be analyzed when identifying critical ones. Describing all use cases with detailed performance characteristics would take a lot of time. For that reason, only little effort should be spent on enriching a use case; it is sufficient to make lightweight assumptions about its performance. An example could be to specify that a use case should finish in approximately one minute and that it is uncritical from a performance viewpoint. The conclusion is that this performance activity should be performed in today's projects, because it allows the focus to be placed on the most important use cases, those with the highest impact on overall performance.
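The lightweight enrichment suggested above could look like the following Java sketch, in which each use case carries a rough load estimate and a criticality flag (names and figures are invented) so that the critical subset can be filtered without re-reading every description:

    import java.util.List;

    // Illustrative sketch: use cases enriched with a lightweight
    // performance indication, following the 80-20 rule.
    public class CriticalUseCases {
        enum Criticality { UNCRITICAL, CRITICAL }
        record UseCase(String name, int executionsPerHour, Criticality criticality) {}

        public static void main(String[] args) {
            List<UseCase> useCases = List.of(
                new UseCase("Browse product catalog", 5000, Criticality.CRITICAL),
                new UseCase("Place order",            1200, Criticality.CRITICAL),
                new UseCase("Edit user profile",        20, Criticality.UNCRITICAL));
            // Concentrate on the small subset that creates most of the load.
            useCases.stream()
                    .filter(uc -> uc.criticality() == Criticality.CRITICAL)
                    .forEach(uc -> System.out.println(uc.name()));
        }
    }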
3.1.1.3 Select Key Performance Scenarios
Each use case consists of a set of scenarios that describe the sequences of actions required to execute it (cf. [Smi03]). Scenarios might include browsing a product catalog, adding items to a shopping cart, or placing an order. According to Smith [Smi03], key performance scenarios are those that are executed frequently or that are critical to the perceived performance of the software. These performance scenarios have to be selected so that performance objectives can be defined for them.
The conclusion is that analyzing performance scenarios should be considered in today's projects, because it prevents the development from focusing on the wrong scenarios for improvement. Moreover, it is not sufficient to identify only critical use cases, because these use cases can consist of scenarios that are both relevant and irrelevant from a performance viewpoint. In addition, the focus should not only be on frequently executed scenarios, but also on those that are rarely executed yet must finish within a defined period of time. An example could be a batch job that is executed once a week but must finish in less than two hours.
3.1.1.4 Establish Performance Objectives
For each key performance scenario, the performance requirements and workload goals must be identified and defined. Performance requirements and workload goals together are stated as performance objectives. The following listing describes requirements and workload goals in detail.
Performance Requirements
specify quantitative criteria for evaluating the performance characteristics of a software system (cf. [WS02]). Such characteristics can be expressed by response time, throughput, or constraints on resource usage (cf. section 2.2). Quantitative requirements lead to better control of performance by explicitly stating what is implicitly expected, namely a required performance that is detailed enough to be used for quantitatively determining whether a system meets it and is fast enough (cf. [MVBM]). According to Meier et al. [MFB+], performance requirements consist of business needs and service level agreements (SLAs). A service level agreement is a contract that explicitly defines the terms of service provided to an end user. SLAs are usually defined by stakeholders such as the application business owner (product manager) and the application technical owner (software architect). By analyzing use cases, the application business owner brings customer requirements into the SLA and thereby ensures that the customer is satisfied. The application technical owner ensures the feasibility of the SLA by analyzing technical requirements (cf. [Hai06]). SLAs are not always defined within a project itself, but can also be handed in from outside.
Effective SLAs must be specific, flexible, and realistic. Specific in the sense that the value to be achieved is exactly defined: stating that a use case must complete in about 5 seconds is not specific and therefore difficult to verify, because 5.25 seconds is about 5 seconds. SLAs must be flexible, allowing a certain deviation from the specified value under unexpected conditions. For example, it can be stated that a use case must adhere to the specified value for a predefined percentage of the time, allowing a measurable degree of flexibility. Furthermore, SLAs must be realistic in terms of the attainability of the specified value. Unrealistic performance requirements are ignored by technical teams, because an unrealistic SLA is worse than not having one in the first place, as Haines [Hai06] also notes. For example, an effective SLA can state that a search must be completed within two seconds 95 percent of the time under an expected load of 500 concurrent users (a sketch of how such a percentile SLA can be checked follows after this listing).
Workload Goals
describe the expected number of users, data volumes, types of transactions, throughput rates, projected growth in coming years, and the intended use of an application (cf. [MS08]). Workload goals must be identified in order to know what a system must support. It is crucial to identify how the workload applies to individual performance scenarios (cf. [MVBM]). For example, it must be identified how many users will use a system and which types of transactions they will perform.
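A percentile-based SLA like the search example above can be verified mechanically against measured response times. The following is a minimal Java sketch with invented sample data:

    import java.util.Arrays;

    // Minimal sketch: checks whether the given share of measured response
    // times stays at or below the SLA limit.
    public class SlaCheck {
        static boolean meetsSla(double[] responseTimesSec, double limitSec, double quantile) {
            double[] sorted = responseTimesSec.clone();
            Arrays.sort(sorted);
            // Index of the sample that `quantile` of all requests do not exceed.
            int idx = Math.max((int) Math.ceil(quantile * sorted.length) - 1, 0);
            return sorted[idx] <= limitSec;
        }

        public static void main(String[] args) {
            double[] measured = {0.8, 1.2, 1.9, 2.4, 1.1, 1.7, 0.9, 1.4, 2.1, 1.8};
            // SLA from the text: completed within two seconds 95 percent of the time.
            System.out.println(meetsSla(measured, 2.0, 0.95)); // false: 2 of 10 samples exceed 2 s
        }
    }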
The conclusion is that performance objectives should be specified within a project in order to have concrete figures to test against. Performance objectives should be quantitative and detailed so that it can be verified whether they are met. Vague statements that a system must be efficient or fast are not useful as performance objectives, because they cannot be measured and verified. Many projects fail to meet overall performance because performance objectives are insufficiently specified, so it is often unclear what exactly has to be achieved (cf. section 2.3). Because performance objectives are usually defined when only little about the future application is known, there is a chance that these objectives are unrealistic and therefore ignored. For that reason, performance objectives should be constantly clarified and refined as they evolve. Additionally, performance objectives should be agreed upon and defined by the application business owner and the application technical owner, because only in that way can it be assured that objectives are realistic and achievable. Performance objectives should be subject to verification and validation procedures in order to ensure that they are correct, valid, consistent, complete, and understood. For example, performance objectives should be verified during performance testing and validated when a system is monitored in production under real conditions. The activity of defining performance objectives is one of the most important ones, because it is the foundation for future performance activities that check against performance objectives in order to verify whether an application is still in performance compliance. Thus it can be concluded that this activity must be performed in today's projects. Section 3.2 describes at which project stages performance objectives can be verified and validated.
3.1.1.5 Construct Performance Models
Models of architectures and designs must be built in order to evaluate their suitability for meeting performance objectives. It is important to construct a model for each combination of design alternative and execution environment. Smith [Smi90] and Williams [SW93] developed a methodological performance modeling approach in which two models are derived from performance scenarios and represent a system. The software execution model is specified using execution graphs, where nodes represent functional components of the software and arcs represent their transitions. According to Smith [Smi96], a software model should specify the processing steps for each scenario with mean, best-case, and worst-case response times. Processing steps are a collection of invocations and statements that perform a function in an application. A software execution model characterizes the performance requirements of the software, independent of other workloads or multiple users. The left side of figure 3.2 shows an example of a software [...]
18

Details
Pages: 124
Edition: Original edition
Year: 2011
ISBN (eBook): 9783842820470
DOI: 10.3239/9783842820470
File size: 5.6 MB
Language: English
Institution / University: Hochschule für Technik Stuttgart – Informatik, Studiengang Software Technology
Publication date: September 2011
Grade: 1,0
Keywords: application performance management process model engineering software lifecycle