
Nonparametric Inference of Utilities

Entropy Analysis with Applications to Consumer Theory

©2006 Doctoral dissertation, 203 pages



Matthias Herfert
Nonparametric Inference of Utilities
Entropy Analysis with Applications to Consumer Theory
ISBN-13: 978-3-8366-0010-1
Printed by Diplomica® GmbH, Hamburg, 2007
Also: Handelshochschule Leipzig (HHL), Leipzig, Germany, doctoral dissertation, 2007
This work is protected by copyright. The rights thereby established, in particular those of translation, reprinting, public recitation, extraction of illustrations and tables, broadcasting, microfilming or reproduction by other means, and storage in data processing systems, remain reserved, even where only excerpts are used. Any reproduction of this work or of parts of it is, even in individual cases, permitted only within the limits of the statutory provisions of the Copyright Act of the Federal Republic of Germany in its currently applicable version, and is in principle subject to remuneration. Violations are subject to the penal provisions of copyright law.
The use of common names, trade names, product designations, etc. in this work, even without special marking, does not justify the assumption that such names are to be regarded as free within the meaning of trademark legislation and may therefore be used by anyone.
The information in this work has been compiled with care. Nevertheless, errors cannot be ruled out completely, and the Diplomarbeiten Agentur, the authors and the translators assume no legal responsibility or any liability for any remaining incorrect statements and their consequences.
© Diplomica GmbH
http://www.diplom.de, Hamburg 2006
Printed in Germany

Acknowledgements
Thank you Jesus Christ, my lord and savior, for Your grace and love and
patience. You wanted me to write this dissertation, and so I did - with Your
help. Thank You for forgiving me for all my sins and for Your unconditional
love. Please, continue to bless me and enlarge my territory, always have Your
hands on me and protect me from evil and keep me from harming others. This
I pray in Your glorious name, Amen!
Without Arnis Vilks and Pierfrancesco La Mura this dissertation would
not have been possible. It is almost indescribable how much I thank both of
you for your faith in me and your highly dedicated scientific and at times also
motivational support, for the endless times we spent in discussions, and for the
interest you had in my research. Your contributions are invaluable and I will
never in my life-time forget what service you did for me. Thank you very much!
I also would like to thank my closest colleagues and friends for their emotional
support and the uncountable discussions in the office, hallway and cafeteria.
In particular, I want to thank Lothar Troeger, Burkhart Eymer, Remigiusz
Smolinski, André Casajus, Michael W. Brown, Orla Palacio, Michael Berger as
well as Windle and Shirley Shelton.
Also, I am indebted to many colleagues at HHL and the University of
Leipzig for their administrative help throughout the whole process from March
2001 until today, in June 2006. Thank you all very much!

I thank my parents Erika and Dieter Herfert, my sister Yvonne and my
brother-in-law Ingram as well as Rosemary and Carroll Hatcher for helping me
during this part of my life. You have been the backbone of my efforts, you have let
me concentrate on my research, and you have always been there when I needed
you.
Finally, I would like to thank anyone who will read this book. Hopefully,
you will find it interesting and perhaps, you wish to let me know what you
think. Please, write to herfert@k7.hhl.de or call HHL to find out how I can be
reached via phone. Thank you in advance for your interest.
May God bless you all.

Contents
1 Introduction... 1
1.1 Overview... 1
1.2 Context, Motivation and Objective... 2
1.3 Outline...8
2 Foundations... 12
2.1 Introduction...12
2.2 Preferences and Utility Functions...13
2.3 Methods of Inference... 19
2.4 Conjoint Analysis...24
2.5 Probabilistic Entropy... 33
2.5.1 Measure... 33
2.5.2 The Principle of Entropy Maximization... 38
2.5.3 The Principle of Cross-Entropy Minimization... 43
2.5.4 General Inference Technique...44
2.6 Decision-Theoretic Entropy...52
2.7 Conclusion...58
3 Entropy Analysis...60
3.1 Introduction...60
3.2 Setup... 61
3.3 The Method...62
3.4 Conclusion...79
4 Irrational Behavior... 80
4.1 Introduction...80
4.2 The General Principle...82

4.3 A Heuristic...86
4.4 Conclusion...89
5 Consumer Choice Models...91
5.1 Introduction...91
5.2 The Basic Utility Setup... 92
5.3 Quasi-Linear Utilities... 94
5.4 Expected Utilities...96
5.5 Conclusion...98
6 Applications...100
6.1 Introduction... 100
6.2 Additively Separable Utilities...104
6.2.1 Setup...104
6.2.2 Results with Small Amounts of Data...105
6.2.3 Implementation of Entropy Analysis... 106
6.2.4 Implementation of Conjoint Analysis...112
6.2.5 Results with Larger Amounts of Data...114
6.3 Inferior Goods...116
6.3.1 Setup...121
6.3.2 Data and Result... 123
6.4 Perfect Complements... 125
6.4.1 Setup...127
6.4.2 Data and Result... 127
6.4.3 Implementation of Entropy Analysis... 130
6.4.4 Implementation of Conjoint Analysis...132
6.5 Cobb-Douglas Preferences...134
6.5.1 Setup...134
6.5.2 Small Amounts of Data...135
6.5.3 Implementation of Entropy Analysis... 137

6.5.4 Larger Amounts of and Partially Lost Data...138
6.6 Wealth-Independent Willingness to Pay... 139
6.6.1 Definitions...140
6.6.2 Setup...142
6.6.3 Data and Results...142
6.7 Wealth-Dependent Willingness to Pay... 145
6.7.1 Setup...146
6.7.2 Data and Results...148
6.8 Price-Demand Curves...152
6.8.1 Setup...153
6.8.2 Data and Result... 154
6.9 Money Lotteries...156
6.9.1 Setup...157
6.9.2 Small and Larger Amounts of Data... 157
6.9.3 Implementation of Entropy Analysis... 161
6.10 Multiattributive Lotteries... 162
6.10.1 Setup...163
6.10.2 Small and Larger Amounts of Data... 163
6.10.3 Implementation... 166
6.11 Irrational Survey Data...167
6.11.1 Setup...167
6.11.2 Small Amounts of Irrationality... 169
6.11.3 Implementation of Entropy Analysis... 174
6.11.4 Larger Amounts of Irrationality... 175
6.12 Irrational Transaction Data... 175
6.12.1 Setup...176
6.12.2 Small Amounts of Irrationality... 176
6.12.3 Larger Amounts of Irrationality... 178

6.13 Conclusion...179
7 Summary and Outlook... 180
8 List of References... 182

List of Symbols
ℝ, ℝ₊   Set of all real numbers, set of all non-negative real numbers
||   Relative to, e.g. p || q means p relative to q
Δ   Set of all probability distributions or normalized utility functions on X
{ }   Set, e.g. {a, b, c} is the set that has a, b and c as its (only) elements
[ ]   Interval, e.g. [0,1] is the (closed) interval from 0 to 1
Σ   Sum, e.g. Σ_{i=1}^{K} p_i = 1 means the sum of all p_i with i = 1, ..., K is equal to 1
∫   Integral, e.g. ∫_a^b f(x)dx = 1 means the integral of function f with respect to variable x over the interval [a, b] is equal to 1
≿   Weak preference relation, e.g. a ≿ b means a is weakly preferred to b
≥   Greater or equal, e.g. a ≥ b means a is greater or equal to b
∈   Element of, e.g. x ∈ X means x is an element of the set X
∂   Delta, symbol for derivative, e.g. ∂x²/∂x = 2x
:=   Defined as, e.g. function mup(x) is defined as ∂w(x)/∂x; therefore we write mup(x) := ∂w(x)/∂x
⊂   Strict subset of, e.g. A ⊂ B means A is a subset of B and different from B
⊆   Weak subset of, e.g. A ⊆ B means A is a subset of or identical to B
∪   Union, e.g. A ∪ B means sets A and B together, i.e. their union
→   Mapping, e.g. f : X → ℝ means f is a function that assigns each element of X one element of the real line
⇔   If and only if
∘   Inference (or concatenation) operator
Γ   Coordinate transformation
¬   Not, e.g. ¬C means not C
−   Complement of, e.g. −C describes the set of all elements not in C
| or *   Conditional on, e.g. q(x | x ∈ S_i) means probability q of x conditional on x ∈ S_i, or shorter q_{S_i}
p*, u*   True probability distribution p* and true utility function u*
∧   Conjunction (or logical "and"), e.g. I_1 ∧ I_2 means new information from systems 1 and 2 combined
∅   Empty set, e.g. A = ∅ means set A is empty

1 Introduction
1.1
Overview
Suppose you are a business executive and you know that each of your customers has a set of possible purchasing alternatives x_i. Furthermore, suppose you know that each customer has preferences among those alternatives that could be represented by a utility function u(x_i) which you do not know. Obtaining knowledge of those preferences would be very rewarding for you, because from them you could conclude valuable information on how to optimize product designs, marketing mix strategies, etc. in order to better serve the customer and thereby influence the odds of reaching your organization's financial and other objectives. Therefore, you decide to learn about the nature of u: bounds, curvature or other characteristics. Eventually, you need to choose a function that is in some sense the best estimate of u given what you know. Often there remains an infinite set of functions that are not ruled out by the constraints. Which one should you choose?

Now, consider a similar setup with unknown probabilities p(x_i) instead of unknown utilities u(x_i). Again, you can learn some details about the nature of the function; in this case you learn some constraints on the probability distribution, either values of certain expectations Σ_i p(x_i) f_k(x_i) or bounds on these values. As before, there often remain infinitely many distributions that are not ruled out by the constraints. Which one should you choose?
In the context of probabilities, according to the principle of maximum entropy, you should choose the distribution p with the largest entropy, defined by -Σ_i p(x_i) log(p(x_i)). There is another principle, that of minimum cross-entropy. It is a generalization of maximum entropy and can be used when, in addition to the constraints, there is a prior distribution q that is an estimate for p. According to that principle, you should choose the distribution with the least cross-entropy, defined by Σ_i p(x_i) log(p(x_i)/q(x_i)). Both principles are equivalent when the prior is a uniform distribution (Shore and Johnson 1980).
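The two principles can be made concrete with a small, self-contained sketch. This is our own illustration, not code from the thesis: entropy, cross-entropy, and the classic maximum-entropy problem of a die with a constrained mean, whose solution has the exponential (Gibbs) form. The helper name `maxent_given_mean` is ours.

```python
import math

def entropy(p):
    """Shannon entropy -sum_i p_i * log(p_i), natural logarithm."""
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

def cross_entropy(p, q):
    """Kullback-Leibler cross-entropy sum_i p_i * log(p_i / q_i)."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def maxent_given_mean(xs, target_mean, tol=1e-12):
    """Maximum-entropy distribution on the points xs subject to the single
    expectation constraint sum_i p_i * x_i = target_mean.  The solution has
    the exponential form p_i proportional to exp(lam * x_i); the multiplier
    lam is found by bisection, since the implied mean increases in lam."""
    def mean_for(lam):
        w = [math.exp(lam * x) for x in xs]
        z = sum(w)
        return sum(x * wi for x, wi in zip(xs, w)) / z

    lo, hi = -50.0, 50.0
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if mean_for(mid) < target_mean:
            lo = mid
        else:
            hi = mid
    w = [math.exp(lo * x) for x in xs]
    z = sum(w)
    return [wi / z for wi in w]
```

For a die whose reported average is 4.5 this reproduces Jaynes' classic example; with average 3.5 it returns (numerically) the uniform distribution, illustrating that maximum entropy and minimum cross-entropy against a uniform prior coincide.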
This dissertation suggests and provides a new and promising entropy-based method of utility inference. It was inspired by the structural similarity, noted above, between the problems of choosing the in some sense best estimate for an unknown utility function and for an unknown probability distribution, as well as by the ingenious solution that the principles of maximum entropy and minimum cross-entropy offer for probabilities.
1.2
Context, Motivation and Objective
As introduced in Section 1.1, this work is about inference of individuals'
utilities, given some observed evidence. It introduces the new method Entropy Analysis, which attempts to overcome some of the limitations of other methods that also infer individuals' utilities given some observed evidence. The method itself is in some sense not limited to any specific context, but the applications in this work have been developed for its specific use in marketing and economics.
Let us consider the context of consumers. We commonly assume that
most consumers prefer some product bundles or combinations of product
characteristics to others and that their purchasing choices are usually attuned
to their preferences. Therefore, we can believe that the better our knowledge of an individual's preferences is, the better we can predict future or explain past behavior. For reasons of convenience of analysis, economists have long used a representation of preferences given by a mathematical function that assigns to each alternative a number such that it is higher when an alternative is more preferred. We call such a preference representation a utility function (utilities for short). Often we have only a little information but wish to know more about someone's future behavior or to explain behavior of the past; hence, inference of utilities is an issue of practical and theoretical interest, also in marketing and economics. Moreover, the increasing availability of large volumes of consumer data, especially in the context of electronic transactions, has spurred renewed interest in the subject.
Today, a number of approaches exist to infer utilities. But one, namely
Conjoint Analysis, has become the most popular over the last three decades
(Cattin and Wittink 1982; Wittink et al. 1994; Green and Wind 2001). Other methods that engage in inference about preferences in marketing are clustering analysis, multidimensional scaling (Carroll and Green 1997) and the Analytic Hierarchy Process (AHP, Saaty 1980). In contrast to Conjoint Analysis, these methods do not attempt to structure preferences in the form of utilities. In addition to marketing, inferences about preferences have also been
studied intensively in decision analysis (Luce and Raiffa 1957; Keeney and
Raiffa 1976; Howard 1984a; Howard 1984b), rather recently also in medical
informatics (Heckerman et al. 1992; Farr and Shachter 1992; Hornberger et
al. 1995) and in artificial intelligence (Ha and Haddawy 1997; Linden and
Lesh 1997; Boutilier et al. 1997).
Introduced by Green and Rao (1971), Conjoint Analysis has been successfully applied to issues of new product development, pricing, advertising, and other areas across many different industrial sectors around the world (Cattin and Wittink 1982; Gustafsson et al. 2003). By far most applications take survey-based preference data on product alternatives and use them to generate independent utility components for every pre-specified combination of product characteristics along which the alternatives are defined (Green and Wind 1975). The utility components, which capture the importance, or value, of each characteristic, are then used to learn about the consumer's trade-offs between them, as well as to make predictions about the consumer's future behavior. Conjoint Analysis is primarily based on the framework of additive Conjoint Measurement (Luce and Tukey 1964), which postulates that the preferences admit an additively separable utility representation.
The motivation for this work comes primarily from the following three limitations of Conjoint Analysis. Even though different extensions of Conjoint Analysis have addressed such limitations, to the best of our knowledge no extension has been able to address them jointly.
First, the application of Conjoint Analysis requires the analyst to make a
priori assumptions on the structure of consumer preferences, e.g. linear or
ideal-point models.

Second, its specific functional form restricts the inference to additively independent utility contributions for each good or product characteristic (Green and Wind 1975; Green and Srinivasan 1987). In particular, Conjoint Analysis cannot infer utilities of inferior goods or perfect complements (which is shown in Sections 6.3 and 6.4). Some of the limitations that are implied by this functional form can in principle be overcome by extending Conjoint Analysis to the more general framework of polynomial Conjoint Measurement (Tversky 1967), which allows not only for additive, but also multiplicative combinations of utility components. Yet, even though polynomial Conjoint Measurement is a substantial extension of its additive counterpart, it still restricts the type of preferences that can be represented. Furthermore, it still presupposes that the analyst knows the general functional form of the utilities.
Third, today Conjoint Analysis offers a remarkable plenitude of parameter estimation techniques, including metric estimation like ordinary least squares regression analysis (Johnston 1972), non-metric estimation like monotonic analysis of variance (Monanova, Kruskal 1965), and choice probability estimation like Probit (Goldberger 1964) or Logit (McFadden 1976). Therefore, the same evidence can lead to the inference of significantly different utility functions depending on the specific parameter estimation technique adopted (Green and Srinivasan 1978; Jain and Mahajan 1979; Arun et al. 1979). Each of these parameter estimation techniques can also lead to significantly different inferences depending on the specific data collection, or elicitation, method (Jain and Mahajan 1979; Kalish and Nelson 1991; Sattler and Hensel-Boerner 2000; Elrod et al. 1992), including ranking, rating, or selection of different product alternatives.
To this day, empirical research has strongly suggested that none of the parameter estimation techniques or data collection methods is clearly better than the others. Therefore, the application of Conjoint Analysis requires an analyst to make a multi-attribute decision, whereby its final result depends not only on the given preference data, but also on the judgements of the analyst. Please see Sections 6.2, 6.3 and 6.4 for selected examples of Conjoint Analysis applications.
Conjoint Analysis has proven to be of great relevance in research and commercial applications. Yet, the above limitations also suggest great potential for improvement. For instance, the fact that preferences on inferior goods cannot be modelled seems to be a significant limitation, considering the many inferior products that we would like to replace by a better-quality alternative as soon as we can afford it. The literature on Conjoint Analysis describes the development of the method as not finished, even though there has already been enormous research effort (Green and Wind 2001; Gustafsson and Huber 2003). Opportunities for further research are investigations into several technical areas, such as the predictive reliability and validity of different versions, the effects of different levels of the respondent's involvement in the respective decision situation, the number of attributes of the decision model, the applicability of so-called non-compensatory methods, etc. (Carroll and Green 1995; Green and Srinivasan 1978). Within the Conjoint Analysis literature, there are also new approaches that have not yet been researched extensively, e.g. Continuous Conjoint Measurement (Wittink and Keil 2003), Limit Conjoint Analysis (Hahn 1997) and Repeated Stack Sorting (Krapp and Sattler 2001). Also, the scope of applications has been with firms and physical products in the private sector; there seems to be potential for analyzing service products and various areas in the public sector (Green and Srinivasan 1978). The most recent focus of research on inferences about preferences in marketing is on what is done with the utilities once they have been inferred. Special emphasis is given to the development of so-called simulators and, most recently, quite comprehensive optimizers that work out market shares or optimal product feature combinations, for example.
Our objective is not to develop the approaches that deal with using utilities once they have been inferred, but to offer a different approach to inferring utilities, one which would jointly overcome (at least some of) the problems of Conjoint Analysis. To the best of our knowledge, we introduce the first nonparametric approach to the problem of inferring a utility function given some observed data.
With our new approach the evidence can take a variety of different forms, e.g. survey or transaction data. We use an axiomatic derivation of a method that infers a unique utility function that complies with the given data and that is, in the sense specified by the axioms, maximally non-committal with regard to missing information. Moreover, our method does not need to rely on any a priori assumptions about the structural form of the utility functions. We use three different consumer models to demonstrate the applicability of our method to a variety of inference problems. In our application examples we use synthetically generated data, i.e. fictitious instead of real market data, in order to test the new method for its applicability to the inference of specific given types of utility functions. We infer utility functions over bundles of goods, combinations of product characteristics, and probability distributions. We show that our method is not only applicable to the inference of utility functions where the analyst is interested in the preferential order of all considered alternatives; we also exploit its applicability to infer a consumer's willingness to pay function and, in particular, price-demand curves. Within our application examples we show that the new method can compete favorably with Conjoint Analysis, and we motivate our work by demonstrating that our method is also applicable in important marketing domains in which the most frequently used version of Conjoint Analysis, by virtue of its core assumption of additively separable utilities, could never return the true preference structure. We also show that our method can infer utility functions from irrational evidence (see Definition 4.1). We would like to emphasize that Entropy Analysis is in principle not at all restricted to applications in economics or marketing. In whatever context it may be interesting and useful to infer utility functions from some available data, our method can, in principle, be applied.

In the outlook we emphasize that this research has built the foundation for extensive empirical research, not only to compare our method with Conjoint Analysis in practice, but also to apply it to fields in which the limitations of Conjoint Analysis have constrained research activities in the past, such as the analysis of transaction data (or passively collected, or post-hoc data).
1.3
Outline
In Chapter 2, "Foundations", we provide a description of selected parts of theories which we believe are helpful to better understand the contribution of this thesis. We start with the presentation of several behavioral hypotheses in preference and utility theory. Next, we describe the basics of inferential statistics and Conjoint Analysis. Then, we describe probabilistic entropy, a later established version of it, and its axiomatization as a general inference principle. We conclude Chapter 2 by presenting La Mura's decision-theoretic entropy, a version of entropy as an inference technique for expected utilities. La Mura had developed this connection between probabilistic entropy and expected utilities in his Ph.D. thesis (La Mura 1999). Based on his work, the initial research objective for this dissertation had been to make his approach applicable to the inference of unique consumer utilities given some observed evidence, having in mind the vast amounts of data that nowadays are available to analysts but are still not used very effectively, in order to jointly overcome the limitations of Conjoint Analysis as mentioned above. In the following five chapters you will see that our research has instead resulted in a new method, namely Entropy Analysis, which is not based on expected utility functions but on ordinary utility functions. We close Chapter 2 with a conclusion for the following chapters.
In Chapter 3, "Entropy Analysis", we derive the new method combining probabilistic cross-entropy and ordinary utility functions. We start by imposing a set of conditions on the inference method. Then, we suggest a normalization of utility functions such that they formally become a probability measure. Finally, we present and prove our main result.
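To make the normalization idea concrete, here is a minimal sketch of one simple scheme (shift by the minimum, divide by the total). The thesis's actual normalization may differ, and `normalize_utilities` is our own name:

```python
def normalize_utilities(u):
    """Rescale utility values over a finite set of alternatives so that they
    are non-negative and sum to one, i.e. so that the utility function is
    formally a probability measure.  This particular scheme shifts by the
    minimum and divides by the total; it preserves the preference ranking."""
    m = min(u)
    shifted = [ui - m for ui in u]
    total = sum(shifted)
    if total == 0:
        # all alternatives equally preferred: normalized utility is uniform
        return [1.0 / len(u)] * len(u)
    return [si / total for si in shifted]
```

For example, `normalize_utilities([2.0, 5.0, 3.0])` returns `[0.0, 0.75, 0.25]`, which sums to one and ranks the alternatives exactly as the original utilities do.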
In Chapter 4, "Irrational Behavior", we present a solution to the problem of how to treat observed "irrational" behavior (see Definition 4.1) with Entropy Analysis. This is motivated by two reasons. First, we are hardly able to observe "perfectly" rational data in any survey or for any given set of transaction data. Therefore, any utility inference method that cannot deal with irrational data will not be meaningful for research or commercial applications. Second, our method is at first sight formally structured in a way in which its application to irrational data would return an inferred utility function that is trivial, i.e. uniform (to be further explained at the beginning of the chapter). Our solution to this problem involves the principled use of a specific version of our method which we call relative Entropy Analysis, the cross-entropy version of Entropy Analysis. We start the chapter by presenting our general technique. Next, we substantiate our technique by suggesting one widely applicable heuristic.
In Chapter 5, "Consumer Choice Models", we develop three consumer choice models to apply our method to marketing problems. We start by developing a basic model for consumer choices in which we consider preferences that relate product characteristics or bundles of goods with money. Next, we constrain this basic model by imposing conditions on preference relations which imply utilities that are quasi-linear in money. We do this because such utilities reduce technical complexity for utility inference problems and because we believe that quasi-linear utilities (which imply the absence of income effects) are sufficiently representative for all items that have relatively low prices. Our third choice model uses von Neumann-Morgenstern expected utilities to apply our method to the inference of utilities over risky alternatives.
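The quasi-linear case can be illustrated with a small sketch (our own toy example, not the thesis's model): with u(x, m) = v(x) + m, the willingness to pay for a bundle is independent of wealth. All function names and the valuation `v` below are hypothetical.

```python
def quasi_linear_utility(v, bundle, money):
    """Quasi-linear utility u(x, m) = v(x) + m: valuation of the bundle of
    goods plus money held."""
    return v(bundle) + money

def willingness_to_pay(v, bundle, wealth):
    """Largest price t at which buying is weakly preferred to not buying:
    v(bundle) + wealth - t >= v(empty bundle) + wealth.  Solving for t,
    wealth cancels: quasi-linearity implies the absence of income effects."""
    return v(bundle) - v(frozenset())

def v(bundle):
    """A hypothetical valuation over bundles of goods."""
    values = {frozenset(): 0.0,
              frozenset({"apple"}): 1.5,
              frozenset({"apple", "banana"}): 2.2}
    return values[bundle]
```

Here `willingness_to_pay(v, frozenset({"apple"}), 10.0)` and the same call with wealth 1000.0 both return 1.5, which is exactly the "no income effect" property the text appeals to.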
In Chapter 6, "Applications", we apply our method to synthetic, i.e. fictitious, data. We imagine hypothetical consumers whom we use to show the variety of possible applications of our method. We start with our quasi-linear choice model and infer estimates for additively separable utilities, inferior goods, perfect complements, and Cobb-Douglas preferences. Within these examples, we show that our method can compete favorably with Conjoint Analysis and that it is applicable to structural forms of utilities other than the additively separable one most often implied by Conjoint Analysis. Then, we again use our quasi-linear consumer choice model and also our basic choice model to apply Entropy Analysis to inferences of willingness to pay functions, both for cases in which they are independent of the wealth of the consumer and for cases in which they depend on it. One area of application of the inference of willingness to pay functions is the inference of price-demand curves. To illustrate this, we present an example with our quasi-linear consumer choice model, but in principle we are not constrained to that choice model. Next, we use our expected utility consumer choice model to infer estimates of utilities of lotteries over both money values and multiattributive alternatives. Finally, we show by the example of our quasi-linear consumer choice model how our method can return estimates from observed irrational behavior in the cases of both survey and transaction data.
In Chapter 7, "Summary and Outlook", we provide a summary of our
results and a research outlook for future studies.

2 Foundations
2.1
Introduction
This chapter will provide the theoretical foundations that are believed to be necessary to understand the contributions of this thesis. It is divided into the following parts:

In Section 2.2 we describe selected aspects of preference and utility theory. We will start with the hypothesis of rational preference relations and describe its numerical representation via utility functions. Next, we will extend this framework to decisions under uncertainty and describe von Neumann-Morgenstern expected utility theory (von Neumann and Morgenstern 1947). In this section, we will follow the lines of Mas-Colell et al. (1995).

In Section 2.3 we describe some fundamental aspects of inferential statistics, and in Section 2.4 we give an overview of Conjoint Analysis.

In Section 2.5 we present the theory around probabilistic entropy. We start with its basic idea following Shannon (1948), then describe it as an optimization technique (Jaynes 1957; Kullback 1959), and finally present an axiomatization under which probabilistic entropy and its more general version, cross-entropy, become general inference techniques for unknown probability distributions, given some observed information about them (Shore and Johnson 1980; Johnson and Shore 1983).

Finally, in Section 2.6, we present La Mura's combination of expected utility theory and probabilistic cross-entropy (La Mura 1999; La Mura 2003).
2.2
Preferences and Utility Functions
Suppose there is an individual who can choose from a set of possible
and mutually exclusive alternatives. Let this set be finite and denoted by
X
.
Suppose
X
K
+
for some positive integer
K
, with the interpretation that
K
is the number of products or product characteristics. Suppose we can sum-
marize individuals' tastes over
X
by so-called weak preference relations, de-
noted by
, which can be used to analyze someone's decision behavior. Let
x
and
y
be elements of
X
, to denote vectors of quantities or product char-
acteristics. We read
y
x
as "
x
is at least as good as
y
." Next, let us consider
rational preference relations:
Definition 2.1 (rationality) The preference relation $\succeq$ is rational if it possesses the following two properties:
Completeness: For any $x, y \in X$, $x \succeq y$ or $y \succeq x$, or both.
Transitivity: For any $x, y, z \in X$, if $x \succeq y$ and $y \succeq z$, then $x \succeq z$.
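On a finite set of alternatives, the two properties of Definition 2.1 can be verified by brute force. The following sketch is purely illustrative; the function name and the example relations are hypothetical, not part of the text:

```python
from itertools import product

def is_rational(alternatives, prefers):
    """Brute-force check of Definition 2.1 on a finite set.
    prefers(x, y) encodes the weak preference 'x is at least as good as y'."""
    complete = all(prefers(x, y) or prefers(y, x)
                   for x, y in product(alternatives, repeat=2))
    transitive = all(prefers(x, z)
                     for x, y, z in product(alternatives, repeat=3)
                     if prefers(x, y) and prefers(y, z))
    return complete and transitive

# A relation induced by a numeric score is rational ...
score = {"a": 3, "b": 2, "c": 1}
print(is_rational(score, lambda x, y: score[x] >= score[y]))       # True

# ... whereas a cyclic relation violates transitivity.
cycle = {("a", "b"), ("b", "c"), ("c", "a")}
print(is_rational("abc", lambda x, y: x == y or (x, y) in cycle))  # False
```

The cyclic example is exactly the kind of "irrational" pattern that later motivates Chapter 4.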
Besides rational preference relations, economists often assume that more is preferred to less. This property of preferences can be captured by assuming monotonicity. Let $i = 1, \ldots, K$, and read $x \geq y$ as $x_i \geq y_i$ for all $i$ and $x_i > y_i$ for at least one $i$.
Definition 2.2 (monotonicity) The preference relation $\succeq$ is monotone if for any $x, y \in X$ such that $x \geq y$, $x \succeq y$.
Often, we describe rational preference relations by means of a utility function which assigns a numerical value to each element of $X$ and ranks them in accordance with the individual's preferences. Such a representation of preferences in the form of a mathematical function allows for much more convenient analysis of preferences. Let us formally define a utility function.
Definition 2.3 (utility function) A function $u: X \to \mathbb{R}$ is a utility function representing the preference relation $\succeq$ if, for all $x, y \in X$, $x \succeq y \Leftrightarrow u(x) \geq u(y)$.
For any strictly increasing function $f: \mathbb{R} \to \mathbb{R}$, $f(u(x))$ defines a new utility function, say $u'(x)$, representing the same preferences as $u(x)$. For finite sets $X$, a rational preference relation always implies the existence of a utility function (Kreps 1988).
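This invariance under strictly increasing transformations is easy to check numerically. A minimal sketch (the utility numbers below are hypothetical):

```python
import math

# Hypothetical utility values over a finite set X = {x1, x2, x3}.
u = {"x1": 0.2, "x2": 1.5, "x3": 3.0}

# Any strictly increasing f: R -> R yields an equivalent representation u'.
f = math.exp
u_prime = {x: f(v) for x, v in u.items()}

# Both functions induce the same ranking of the alternatives:
ranking = lambda util: sorted(util, key=util.get)
print(ranking(u) == ranking(u_prime))   # True
```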
Proposition 2.1 (Kreps 1988) (existence of a utility function) A rational pref-
erence relation implies the existence of a utility function.
Next, let us define a property of utility functions that corresponds to the monotonicity of preference relations: a utility function is non-decreasing if and only if the preferences it represents are monotone.
Definition 2.4 (non-decreasing utility function) A utility function $u: X \to \mathbb{R}$ is non-decreasing if for any $x, y \in X$ such that $x \geq y$, $u(x) \geq u(y)$.
Proposition 2.2 (Kreps 1990) If $u$ represents preferences $\succeq$, these preferences are monotone if and only if $u$ is non-decreasing.
Until now, we have mentioned Conjoint Analysis a few times. The basic framework which is used in most cases when Conjoint Analysis is applied rests on additively separable utility functions. They are defined as follows.
Definition 2.5 (additively separable utility function) Let $x \in X$ and $x = (x_1, \ldots, x_K)$. Then, an additively separable utility function has the form
$$u(x) = \sum_{i=1}^{K} u_i(x_i).$$
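Definition 2.5 can be sketched directly in code; the sub-utility functions below are hypothetical choices for illustration only:

```python
# Hypothetical sub-utility functions u_i for K = 3 product characteristics.
sub_utilities = [
    lambda q: 2.0 * q,        # u_1: linear
    lambda q: q ** 0.5,       # u_2: concave (diminishing marginal utility)
    lambda q: -0.5 * q,       # u_3: decreasing (e.g. a disliked attribute)
]

def additive_utility(x):
    """u(x) = sum_{i=1}^{K} u_i(x_i), as in Definition 2.5."""
    return sum(u_i(x_i) for u_i, x_i in zip(sub_utilities, x))

print(additive_utility((1.0, 4.0, 2.0)))   # 2.0 + 2.0 - 1.0 = 3.0
```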
So far, we have stated the main elements of basic preference and utility theory and added a few selected properties. Now, let us consider its usual application to consumer theory. Imagine there is a consumer who has a rational preference relation, and we take $u(x)$ to be a utility function representing these preferences. The consumer's problem is then to choose $x$, the consumption bundle, that is best according to the preferences, subject to the constraint that the total expenditure on $x$ is not greater than the consumer's budget $B$ with $B \geq 0$. Given our numerical representation, we can express the consumer's problem as choosing $x$ such that $u(x)$ is maximized subject to the budget constraint.
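On a finite grid of bundles, this constrained maximization can be solved by simple enumeration. A minimal sketch with hypothetical prices and a hypothetical linear utility function:

```python
from itertools import product

def best_affordable_bundle(utility, prices, budget, levels):
    """Maximize utility over all bundles on a finite grid whose total
    expenditure does not exceed the budget."""
    affordable = (x for x in product(levels, repeat=len(prices))
                  if sum(p * q for p, q in zip(prices, x)) <= budget)
    return max(affordable, key=utility)

# Hypothetical two-good example.
u = lambda x: 3.0 * x[0] + 2.0 * x[1]
print(best_affordable_bundle(u, prices=(2.0, 1.0), budget=6.0,
                             levels=range(7)))   # (0, 6): good 2 yields the
                                                 # higher utility per unit of money
```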
Until now we have built a framework for the analysis of the consequences of rational preferences over certain outcomes. But many situations involve some element of risk. We consider the theory of von Neumann-Morgenstern expected utilities in which we assume rational preferences over risky alternatives. We will follow the lines of Mas-Colell et al. (1995). In this theory, uncertainty is modeled with probability distributions over a set of objects, i.e. besides the finite set of objects $X$, we now consider all probability distributions $P$ over these objects. We want to describe rational preferences over the set $P$, denoted as before by the rational preference relation $\succeq$. Note that above we have considered preferences over $X$; now we consider preferences over $P$. In addition to our rationality assumption we impose the following two properties to exploit the fact that our objects are probability distributions:
Condition 2.1 (continuity) The preference relation $\succeq$ on $P$ is continuous if for any $p, q, r \in P$, the sets $\{a \in [0,1] \mid ap + (1-a)r \succeq q\}$ and $\{a \in [0,1] \mid q \succeq ap + (1-a)r\}$ are closed.¹
Condition 2.2 (independence) Suppose $p, q, r \in P$ and $p \succeq q$. Then, for any $a \in (0,1)$, $ap + (1-a)r \succeq aq + (1-a)r$.
Given these two additional properties, the following result holds:
Proposition 2.3 (Mas-Colell et al. 1995) (expected utility form) Suppose that the rational preference relation $\succeq$ on $P$ satisfies the continuity and independence properties. Then $\succeq$ admits a utility representation of the expected utility form. That is, there is a function $u: X \to \mathbb{R}$ such that $p \succeq q$ if and only if
$$\sum_{x \in X} p(x)\,u(x) \;\geq\; \sum_{x \in X} q(x)\,u(x).$$
¹ A set is closed if every point outside the set has an (epsilon-)neighborhood disjoint from the set.
Now, we can define an expected utility function.
Definition 2.6 (von Neumann-Morgenstern expected utility function) A utility function is called a von Neumann-Morgenstern expected utility function if it has the form
$$u(p) = \sum_{x \in X} p(x)\,u(x).$$
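Definition 2.6 and the uniqueness claim of Proposition 2.4 can be illustrated with a small sketch; the Bernoulli utilities and the two lotteries below are hypothetical:

```python
# Hypothetical utilities over X = {x1, x2, x3} and two lotteries p, q over X.
u = {"x1": 0.0, "x2": 0.6, "x3": 1.0}
p = {"x1": 0.5, "x2": 0.0, "x3": 0.5}   # even odds on worst and best outcome
q = {"x1": 0.0, "x2": 1.0, "x3": 0.0}   # x2 for certain

def expected_utility(lottery):
    """u(p) = sum_{x in X} p(x) u(x), as in Definition 2.6."""
    return sum(prob * u[x] for x, prob in lottery.items())

print(expected_utility(p), expected_utility(q))   # 0.5 0.6 -> q is preferred

# Proposition 2.4: an increasing linear transform preserves the ranking.
u = {x: 2.0 * v + 3.0 for x, v in u.items()}      # u' = 2u + 3
print(expected_utility(p) < expected_utility(q))  # True, ranking unchanged
```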
Such an expected utility function is not unique; it is only unique up to increasing linear transformations.
Proposition 2.4 (Mas-Colell et al. 1995) (increasing linear transformation) Let $u: P \to \mathbb{R}$ be a von Neumann-Morgenstern expected utility function representing $\succeq$. Then $u': P \to \mathbb{R}$ is also a von Neumann-Morgenstern expected utility function representing $\succeq$ if and only if there are scalars $\beta > 0$ and $\gamma$ such that $u'(p) = \beta\,u(p) + \gamma$ for every $p \in P$.
The proofs of Propositions 2.1-2.4 are given by Kreps (1988, 1990) and Mas-Colell et al. (1995).
2.3
Methods of Inference
In this section we describe the two main classes of inferential methods in statistics and relate them to preferences. We refer to the following literature: Downing and Clark (2004), Sahai and Khurshid (2002), Freedman et al. (1998), Kennedy (1998), Pagan and Ullah (1997), Davidson and MacKinnon (1993), Hamburg (1985).
Inference is the process of drawing conclusions. Statistical inference is the drawing of conclusions about population data from sample data, with or without the help of specific assumptions about certain population parameters; hence one distinguishes between the two classes of parametric and nonparametric methods. For most inference problems there are methods available from both classes. A few examples of parametric methods are the t-test, F-test, Pearson correlation coefficient, regression analysis and analysis of variance (ANOVA). Nonparametric methods include the sign test, Mann-Whitney test, Spearman rank-order correlation coefficient, Kruskal-Wallis test and Friedman test.
The advantage of nonparametric methods is that they can be used when little is known about the underlying population; they can therefore be applied in many situations in which parametric methods cannot or should not be used. However, when nonparametric methods are applied in studies in which parametric methods are also feasible, the former can have the disadvantage of ignoring a certain amount of information and can therefore be expected to return slightly less favorable results. Despite the reduced efficiency, it can be argued that, because of the at times unrealistic assumptions of parametric methods, the analyst can have more confidence in the results of nonparametric methods.
We consider the following numeric example. Suppose there are two population variables, one denoted by $X$ and the other by $Y$. Imagine we are interested in the strength of their association. If there was a strong association between both variables, knowing something about one would help us a lot in predicting the other; if the association was weak, it would not help us much. As the measure of association we choose the correlation coefficient, which is a pure number between -1 and 1, without units. It measures linear association, or clustering around a line, not association in general. If one variable increases and the other increases as well, we speak of a positive correlation. If one decreases and the other increases, we have a negative correlation. We consider the following pairwise collected random sample data from both populations, see Table 2.1.
Table 2.1
Random Sample Data from Populations $X$ and $Y$
____________________________________________________________
Item    Values from X    Values from Y
1       65               49.1
2       81               52.2
3       29               42.6
4       35               41.2
5       14               33.4
6       51               40.3
7       58               41.2
8       20               41.5
9       69               49.8
10      43               34.3
____________________________________________________________
Now we have a choice. We can use a parametric measure, Pearson's correlation coefficient $r_P$, or a nonparametric measure, Spearman's correlation coefficient $r_S$. Pearson's correlation is based on the assumption that the values of both variables are sampled from populations that follow a normal (Gaussian) distribution, at least approximately. Spearman's correlation, instead, is based on a rank ordering of both variables and therefore makes no assumption about the distribution of the values. Pearson's correlation coefficient is defined as
$$r_P = \frac{n \sum_{i=1}^{n} x_i y_i - \left(\sum_{i=1}^{n} x_i\right)\left(\sum_{i=1}^{n} y_i\right)}{\sqrt{\left(n \sum_{i=1}^{n} x_i^2 - \left(\sum_{i=1}^{n} x_i\right)^2\right)\left(n \sum_{i=1}^{n} y_i^2 - \left(\sum_{i=1}^{n} y_i\right)^2\right)}}$$
where $n$ represents the number of values in the sample (in our example, $n = 10$), and $x_i$ as well as $y_i$ are the values of item $i$ of the sample from populations $X$ and $Y$. Spearman's rank-order correlation coefficient is defined as
$$r_S = 1 - \frac{6 \sum_{i=1}^{n} d_i^2}{(n-1)\,n\,(n+1)}$$
where $n$ again represents the number of values in the sample and $d_i$ the rank-order difference of item $i$. Table 2.2 shows the rank-order lists for both of our variables.
Table 2.2
Sample Data from Populations $X$ and $Y$ with Ranking
____________________________________________________________
        Variable X        Variable Y
Item    Value   Rank      Value   Rank
1       65      8         49.1    8
2       81      10        52.2    10
3       29      3         42.6    7
4       35      4         41.2    4.5
5       14      1         33.4    1
6       51      6         40.3    3
7       58      7         41.2    4.5
8       20      2         41.5    6
9       69      9         49.8    9
10      43      5         34.3    2
____________________________________________________________
Now, we use our sample data with both methods. Our calculations return a substantially higher correlation for Pearson's coefficient: $r_P = 0.774$ versus $r_S = 0.658$. In both cases, we find a high, but not very high, positive correlation. Suppose we are now interested in inference from the sample data to the population data. We engage in a significance test, which means that based on our above analysis of the sample data we want to state whether or not there is truly a correlation between $X$ and $Y$ in the population. This statement will depend on how willing we are to accept a potentially false claim.
For instance, how willing are we to be wrong when we say that, based on $r_P = 0.774$ or $r_S = 0.658$ in our sample, there is a relationship between $X$ and $Y$ in the population? By convention, we quantify this "propensity" to be wrong with the so-called alpha level ($\alpha$): the higher the risk taken, the higher is alpha. If we want to say that there is a relationship between both variables in the population, the value of alpha will state how many times in one hundred we would allow that the correlation coefficient in the sample does not represent a relationship between both variables (imagine we would hypothetically repeat the sampling and the calculation of the correlations many times), i.e. how many times we would allow that $r = 0$. In order to reject or not reject our null hypothesis ("There is no relationship between $X$ and $Y$ in the population"), we need three pieces of information: the alpha level, the sample size ($n = 10$) and the correlation coefficient in the sample ($r_P = 0.774$ or $r_S = 0.658$, respectively). Imagine $\alpha = 0.01$. We use the table of critical values for correlations, apply $n - 2$ degrees of freedom for our two-tailed study and find that the minimum sample correlation coefficient needed to confidently reject our null hypothesis is $0.765$. Therefore, for one and the same original sample data, we would draw two greatly different conclusions. In one case, we would believe that there is a statistically significant correlation between both population variables, but in the other case we would not be able to believe that. And the reason for these greatly contrary conclusions is that in one case we assumed an important parameter for the populations of $X$ and $Y$, i.e. that their values are normally distributed, and therefore used the parametric method of Pearson's correlation. In the other case, we did not make such a parametric assumption and thus used Spearman's correlation coefficient. This example shows how critical it is to make correct assumptions about the existence of certain parameters and to subsequently be able to apply situationally suitable parametric or nonparametric inference methods.
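The whole numeric example can be reproduced in a few lines. The sketch below implements the two formulas from the text, using average ranks for ties as in Table 2.2, and compares both coefficients against the critical value 0.765:

```python
from math import sqrt

x = [65, 81, 29, 35, 14, 51, 58, 20, 69, 43]                     # Table 2.1
y = [49.1, 52.2, 42.6, 41.2, 33.4, 40.3, 41.2, 41.5, 49.8, 34.3]
n = len(x)

# Pearson's r_P, computational formula from the text.
sx, sy = sum(x), sum(y)
num = n * sum(a * b for a, b in zip(x, y)) - sx * sy
den = sqrt((n * sum(a * a for a in x) - sx ** 2)
           * (n * sum(b * b for b in y) - sy ** 2))
r_p = num / den

def ranks(values):
    """Ranks starting at 1; ties receive the average of their positions."""
    s = sorted(values)
    return [s.index(v) + (s.count(v) + 1) / 2 for v in values]

# Spearman's r_S from the rank-order differences d_i (Table 2.2).
d2 = sum((a - b) ** 2 for a, b in zip(ranks(x), ranks(y)))
r_s = 1 - 6 * d2 / ((n - 1) * n * (n + 1))

print(round(r_p, 3), round(r_s, 3))   # 0.774 0.658
print(r_p >= 0.765, r_s >= 0.765)     # True False: only r_P clears the
                                      # two-tailed critical value at alpha = 0.01
```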
With inference of utilities, we are interested in an estimate of the true utility function of an individual. Therefore, we could postulate that an individual's complete and transitive preference relation is the population under investigation. For convenience of analysis we represent such preference relations by utility functions. Any partial information obtained about the true utility function can be considered sample data. In contrast to our previous example, we are now not interested in estimating some numerical value that summarizes the true utility function; rather, in many cases we are interested in inferring an estimate of the entire true utility function itself. With any parametric method, we would assume something about the true utility function, e.g. its shape or structural form, when we use the sample data to infer about the population data. With a nonparametric method we would not make such an assumption.
In the following Section 2.4 we will describe an example of parametric
inference of utilities via Conjoint Analysis and in Chapter 3 we will introduce
our new method, an example of nonparametric utility inference via Entropy
Analysis.
2.4
Conjoint Analysis
As mentioned in Chapter 1, Conjoint Analysis is the most popular utility inference method available today. The name Conjoint Analysis does not represent one single, in some sense well-defined, formula or technique to infer a utility function in a given context. Instead, it is a collection of approaches that has been extended by many researchers with a plenitude of refinements and
improvements (Green and Srinivasan 1978). A common core that all approaches under the umbrella of Conjoint Analysis share is a link to the initial and seminal contribution that introduced conjoint measurement (Luce and Tukey 1964) and the usage of conjoint measurement in marketing for utility inference (Green and Rao 1971). The differences between most approaches can be found in how data are collected and how parameters for the inferred utility function are estimated.
In contrast to the so-called expectancy-value models (e.g. Fishbein 1967 and Rosenberg 1956), a compositional approach in which the utility for some object is determined by the weighted sum of the object's perceived attribute levels and associated value ratings, separately judged by the respondent, Conjoint Analysis is a decompositional approach: respondents judge a set of product descriptions, and then the analyst finds so-called part-worths for the individual attributes that are most consistent with the respondents' overall preferences.
Since its start in the early 1970s, a plethora of new Conjoint Analysis models has been introduced to improve various aspects of the method. Nevertheless, the basic framework has not changed. Therefore, we would like to follow the lines of an overview and procedural description of the Conjoint Analysis methodology given by Green and Srinivasan (1978) and Green and Srinivasan (1990).
Performing a Conjoint Analysis study involves analyst choices in six areas: form of utility function, data collection method, stimulus set construction, stimulus presentation, measurement scale for the dependent variable, and estimation method. Let us consider each of the choices in detail:
Form of Utility Function
There are three basic forms of utility functions, as well as combinations and generalizations thereof, that are frequently assumed when Conjoint Analysis is used.
The so-called vector-form utility function describes an individual's utility for all $x \in X$ as
$$u(x) = \sum_{i=1}^{K} w_i x_i$$
where $x_i$ denotes the level of attribute $i$ of $x$, and $w_i$ denotes the weight of attribute $i$.
The ideal point-form utility function is given by
$$u(x) = -\sum_{i=1}^{K} w_i (x_i - z_i)^2$$
where $z_i$ is the individual's ideal point of attribute $i$.
The part-worth-form utility function is given by
$$u(x) = \sum_{i=1}^{K} f_i(x_i)$$
where $f_i(x_i)$ is the individual's part-worth for attribute $i$.
Figure 1: 3 Basic Forms of Utility Function (Source: See Green and Srinivasan 1978, p. 106) [figure not reproduced]
It can easily be seen that the part-worth-form is the most general of the three forms of utility function, and it is most frequently applied in research applications of Conjoint Analysis. But this generality comes at the expense of having to estimate many additional parameters. The flexibility of the shape of the utility functions increases in the order of vector, ideal-point and part-worth forms, and, according to Green and Srinivasan, the reliability of inferred utility functions seems to improve in the reverse order. By combining all three types of utility forms, mixed models can be generated, and by introducing so-called pseudo-attributes, interaction effects may be taken into account. See Figure 1.
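The three forms can be sketched side by side; all parameter values below are hypothetical and chosen only for illustration:

```python
x = (2.0, 1.0, 3.0)            # attribute levels x_i of one product profile
w = (0.5, 1.0, 0.25)           # attribute weights w_i
z = (3.0, 1.0, 2.0)            # ideal points z_i

def vector_form(x, w):
    return sum(wi * xi for wi, xi in zip(w, x))

def ideal_point_form(x, w, z):
    # Utility falls with the weighted squared distance from the ideal point.
    return -sum(wi * (xi - zi) ** 2 for wi, xi, zi in zip(w, x, z))

def part_worth_form(x, f):
    # f[i] maps the level of attribute i to its part-worth.
    return sum(fi(xi) for fi, xi in zip(f, x))

# The part-worth form nests the vector form as a special case:
f = [lambda q: 0.5 * q, lambda q: 1.0 * q, lambda q: 0.25 * q]
print(vector_form(x, w))                           # 2.75
print(ideal_point_form(x, w, z))                   # -0.75
print(part_worth_form(x, f) == vector_form(x, w))  # True
```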
Data Collection Method
There are mainly two different ways to confront a respondent with a product alternative or product bundle (stimulus): the so-called full-profile approach or the 'two-characteristics-at-a-time' procedure.
The full-profile approach uses complete descriptions of products and lets the consumer rank several profiles or use rating scales. The limitation of this approach is that it may cause information overload for the respondents, especially when many characteristics and values are considered.
With the 'two-characteristics-at-a-time' procedure, the respondent is asked to fill in the rank-order of all combinations of two product characteristics. Naturally, if there are many different characteristics describing a product and many values per characteristic, the consumer will have to evaluate many large matrices. This may take a long time, and the larger the problems, the more artificial the tasks will become.
There are some studies that find that the full-profile approach yields a
higher predictive validity, and others find opposite results (Green and
Srinivasan 1978; Green and Srinivasan 1990).
Stimulus Set Construction
When using the full-profile approach, the analyst has to decide how many stimuli will be used and how the stimuli themselves should be constructed.
The optimal number of stimuli, i.e. the number that corresponds to the minimal expected mean squared error of prediction, naturally depends on the number of estimated parameters. From multiple regression theory (Darlington 1968) we know that the expected mean squared error of prediction is given by
$$E(\mathrm{MSEP}) = \left(1 + \frac{T}{n}\right)\sigma^2$$
where $T$ is the number of estimated parameters, $n$ is the number of stimuli to be evaluated, and $\sigma^2$ is the unexplained variance (error) in the model (Green and Srinivasan 1978; Green and Srinivasan 1990). So, in trying to increase $n$ as much as possible, analysts have to consider that respondents can only dedicate a certain amount of time to such an interview. Often, it is assumed difficult to increase $n$ much above 30 (Green and Srinivasan 1978).
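The formula makes this design trade-off concrete. A minimal sketch with a hypothetical parameter count $T = 8$ and $\sigma^2 = 1$:

```python
def expected_msep(T, n, sigma2=1.0):
    """E(MSEP) = (1 + T/n) * sigma^2 (Darlington 1968)."""
    return (1 + T / n) * sigma2

# Adding stimuli shrinks the estimation penalty toward the irreducible
# error sigma^2, with diminishing returns as n grows:
for n in (10, 20, 30):
    print(n, round(expected_msep(T=8, n=n), 3))   # 1.8, 1.4, 1.267
```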
Besides some type of arbitrary selection of stimuli, there are two other approaches that seem very feasible for a structured selection of the stimuli that are presented during the survey. The first is to use so-called fractional factorial designs. This field in itself is very comprehensive and shall not be described here (see, for instance, Addelmann 1962), but it essentially allows the analyst to select a set of stimuli that are optimal for the respective study, and the set itself can have varying characteristics like orthogonality or non-orthogonality. The second is simply to use a random sample of the full factorial design.
All mentioned approaches have been advanced greatly over the last three decades, and a rich toolbox of suggestions for when to use which approach exists (Green and Srinivasan 1978; Green and Srinivasan 1990).
Stimulus Presentation
There are three basic approaches to stimulus presentation: verbal description, paragraph description, and pictorial representation.
The verbal description is basically a multiple cue stimulus card, where each cue contains the level or value of a certain characteristic. It was found that the measured importance of an attribute is affected by the order or position of the card. To reduce this bias, the order of attributes on the cards is usually randomized per respondent.
A more realistic and complete description of a product alternative is given by the paragraph description approach. This method also allows the analyst to simultaneously test advertising claims. A disadvantage is that it significantly limits the total number of descriptions, so that the inference quality is not very high.
Finally, the pictorial approach is also more realistic than the verbal approach, and it is more interesting and less fatiguing for the respondent. Besides that, it reduces the information overload and allows for a more homogeneous perception of attributes.
It is argued that given these considerations, the verbal and pictorial approaches are likely to be the best methods of presenting a product alternative.
Measurement Scale
There are two main classes of measurement scales - metric and nonmetric. The former is mostly used when the respondent is confronted with rating scales assuming approximately interval scale properties. The latter is mostly used when alternative product descriptions are rank ordered. The main advantage of metric methods seems to be the higher information content inherent in these scales. On the other hand, one of the advantages of nonmetric methods is that ranked data are likely to be more reliable, because ranking seems to be easier for the respondent than rating. As the name suggests, ranking alternatives is done by creating a rank order between all given product alternatives. Rating requires a rating scale, for instance a 3-, 5- or 7-item scale, which allows the interviewee to express preferences for each product alternative separately on a standardized scale. For example, a 3-item scale could have the three levels "very desired", "indifferent" and "not desired at all."
Much research has been conducted using rating and ranking of alternatives, but concerning the advantages and disadvantages of both methods there exist conflicting results. Additional studies seem to be necessary to compare these alternative approaches (Green and Srinivasan 1978; Green and Srinivasan 1990).
Estimation Method
There is a huge number of parameter estimation methods in Conjoint Analysis. They can roughly be classified into three categories:
1) Methods which assume an ordinally scaled dependent variable. The best known approaches in this class are MONANOVA (Kruskal 1965), PREFMAP (Carroll 1972) and LINMAP (Srinivasan and Shocker 1973a and 1973b).
2) Methods which assume an interval scaled dependent variable. The two most relevant approaches in this category are OLS (Johnston 1972) and MSAE (Srinivasan and Shocker 1973a).
3) Methods which use paired-comparison data and relate them to choice probability models. The two best known in this class are LOGIT (McFadden 1976) and PROBIT (Goldberger 1964).
Whereas MONANOVA is used for part-worth function models and does not allow the outcome to be restricted to vector models, the other approaches in the ordinally scaled class can be used for vector or part-worth function models. The best suited approach for ideal point models is LINMAP.
The MSAE approach is more robust than the OLS method and allows the analyst to impose a priori constraints on the estimated parameters. Nevertheless, OLS provides standard errors for the estimated parameters, which none of the other metric methods does.
The LOGIT procedure leads to a deductive development of the choice model and has the advantage that it produces a global maximum likelihood estimate. However, it involves the "independence of irrelevant alternatives" assumption, which may not be a realistic assumption in many real consumer