
was less strict (Directive 2000/35/CE, OJEC, August 8, 2000); the failed
project for a Code of Good Commercial Practice prepared by the
Spanish Ministry of Finance in 1998; the extension of the concept of
unfair competition to include the exploitation of economic dependence,
the termination of a commercial relationship without a six-month notice
period and the attainment of discounts under threat of termination
introduced in the Spanish Unfair Competition Act by Ley 52/1999; and
the initiative taken by the French Government in January 2000 to modify
the Galland Act (Les Echos, January 14-15, 2000; p. 24).
2. For an empirical test of this theory in the car distribution sector see
Arruñada, Garicano and Vázquez (2001).
3. Unless stated otherwise (mainly with respect to the econometric tests in
sections 2 and 4, which are run over aggregate European data), the
evidence on the structure and functioning of contractual relations comes
from case studies and interviews conducted with a sample of
representatives from all the parties in the sector in Spain. This sample
contained large and small, multinational and Spanish retailers and
manufacturers. While special care was taken to cover a variety of
operators, it was not possible to assess the statistical significance of the
sample.
4. Suppliers should be expected to sell at a lower price and accept
worse conditions from retailers that provide them with more additional
services of this nature. For this reason, the comparisons of selling prices
often employed in discussions of competitive conditions may lose much
of their relevance, because only the net price is observable (the nominal
price less the implicit discount that the supplier accepts in exchange for
services that are not explicitly paid for). This net price is no longer
comparable across retailers of different reputation and size, because the
value of the reputation services they provide to suppliers is not the same.
5. Payment periods have been discussed in more detail in Arruñada (1999a,
1999b).
6. For example, Expansión (June 1, 1998, p. 8).
7. See two examples in Expansión (February 3 and June 1, 1998).
8. This brings the relationship between suppliers and retailers closer to the
kind of relationship observed more and more frequently in industries
where intensified competition induces the use of decreasing-price
clauses (see an example from the automobile sector in Aláez et al. 1997,
p. 100, n. 14). These clauses do not prevent car manufacturers from
asking for, and occasionally receiving, additional discounts from their
component suppliers. Several varieties of asymmetric contracting have
been studied in different industries; the conclusion is that this kind of
contracting is typical of services provided under a franchising regime,
both under a strict franchise arrangement (Rubin 1978) and in allied
activities (for example, in Arruñada, Garicano and Vázquez 2001, we
analyze its use in automobile distribution).
9. See Ormaza (1992); Schwartz (1999).
10. See, for example, Padilla (1996).
11. There is more on this in Arruñada (2001 and 2002, chapter 3, generally;
see also 1999c, 2000, for an application to financial auditing).
12. For information about the situation in different European countries, see
CCE (1997, p. 7).
13. In some cases the impact of these discounts is substantial. For example,
in the relationship between one of the biggest retailers and one of the
biggest consumer good suppliers, both multinational firms, these
discounts were evaluated in 1998 at 1.67 percent of the turnover,
according to the supplier. In the same year and with the same retailer the
supplier recovered 13 percent of the total value of the discounts (0.2171
percent of his turnover with the retailer).
14. The fact that retailers have a greater capacity for control does not mean
that they have either perfect or homogeneous control. This issue is
highlighted by the policy of some retailers who contracted specialists to
detect irregularities in the contracting and accounting of their purchases.
Operations over the previous five years were investigated and the
specialist received half of the amount recovered. The mere existence of
this practice highlights the high degree of error that exists in the
administrative processing of transactions.
15. Expansión (June 1, 1998, p. 8).
16. Obviously there are more factors that influence the efficiency of
contracting with or without right of return. (See Kandel 1996.)
17. See Klein and Leffler (1981) and Shapiro (1983) for the basic formulation
of the role of reputation in contracting.
18. In fact, the studies that currently guide European legislative proposals in
this field do not seem more reliable. See, in particular, the study that
provided the basis for the Directive on late payments (CCE 1997) and,
for a criticism, Arruñada (1999b).
Chapter 20: Interconnection Agreements in Telecommunications Networks: From Strategic Behaviors to Property Rights

Godefroy Dang-Nguyen, Thierry Pénard
1 Introduction
Interconnection regulation is a major issue in the liberalization of energy and
telecommunications networks, and a focal point for tensions and conflicts of interest
among operators. In France, for example, the Telecommunications Regulatory Authority
(ART) has to settle disputes concerning interconnection. These conflicts, most often
pitting the incumbent operator France Telecom against its new competitors, deal with the
unbundling of the local loop or the termination charges for calls from fixed to mobile
networks.
Generally, interconnection has a precise objective: to give the subscribers of the
interconnected networks access to more subscribers (telecommunications) or more
suppliers (energy or water). Interconnection requires technical harmonization, as well as
a contractual, often bilateral, arrangement among network operators. More precisely, in
an interconnection contract operators define the conditions of access to their networks
and the corresponding usage rights. In this chapter we consider mainly
telecommunications networks (voice and data), but many of the economic issues raised
are equally relevant, mutatis mutandis, to other interconnected networks.

In telephone networks, interconnection agreements have been strongly influenced by the
institutional framework in which they emerged. For a long time, telephone service relied
on the principle of "network integrity." This notion appeared in the United States at the
beginning of the twentieth century, under the influence of Theodore Vail, AT&T's CEO at
the time, to justify the monopoly granted to his private company. As long as voice
services were provided only by national monopolies, interconnection was merely a
matter of international, diplomatic bargaining. For data networks, relationships
concerning rights of usage have been strongly shaped by the academic origin of the
Internet: interconnection took the form of peering agreements, in which networks agreed
to exchange traffic without monetary compensation.

With the development of competition in telephone services and the rise of a "business"
Internet, one can expect a reassessment of interconnection strategies and governance
structures. What is the most relevant framework for analyzing this evolution?

Network economics is one such framework. The first theoretical papers on network
economics were stimulated by the AT&T breakup in 1984. In this literature,
interconnection agreements are treated as compatibility strategies (Economides 1996).
Interconnection or compatibility choices depend on the initial structure of the market and
on the modalities of competition. In telecommunications, interconnection obviously
differs between a market with a former monopoly incumbent (fixed telephony) and a
market without a historical leader (mobile telephony). Broadly, two interconnection
situations arise. The first concerns access to an essential facility: an asymmetric
relationship of vertical compatibility. Without interconnection, some operators cannot
deliver their services to the final customer and have to exit the market. In the second
case, interconnection leads to horizontal compatibility among competing services. This is
a symmetric relationship, since each operator has direct access to its own customers;
without interconnection the operators can still provide services, but they do not fully
exploit network externalities. Papers dealing with these two situations stress the
anti-competitive effects of interconnection agreements when operators bargain freely:
some agreements may deter entry or facilitate price collusion. Public intervention thus
seems necessary to control operators' behavior and possibly to establish interconnection
and usage rules, particularly in the asymmetric case.
While this first approach highlights the impact of interconnection strategies on the
competition game, it says nothing about the institutional setting in which these
agreements are concluded. Interconnection is a meta-game in which not only operators
but also their suppliers, their customers, and the public authorities (government,
parliament, regulatory agencies) intervene. The stake is the definition of property rights
and their assignment to network operators. To study this issue, neo-institutional theory is
better suited. This approach enables us to better understand the links between technical
and institutional change in networks. In particular, the possibility that technical progress
now offers of splitting networks very finely is accompanied by institutional innovations,
with which the interconnection strategies of the operators have to cope.

In section 2, we analyze interconnection agreements as a strategic dimension of the
competition game, using network economics. In section 3 we analyze interconnection
agreements through the lens of neo-institutional theory.
2 Interconnection agreements as a strategic game
For an operator, interconnection choices have an overwhelming influence on the
diversity, quality, and price of its services. They also condition its profitability, and hence
its survival in the market. Moreover, although operators always have the possibility of
revising or renegotiating their agreements, interconnection decisions appear less flexible
and more complicated than tariff decisions. For all these reasons, interconnection
strategies are best analyzed as a sequential game in which operators set their
interconnection policy in a first step and define their provision of services in a second
step. This game-theoretic framework enables us to deal with two issues: on the one
hand, the decision whether or not to interconnect with another operator; on the other, the
contractual conditions of that interconnection. These questions arise differently
depending on whether operators are in an asymmetric relationship (sub-section 2.1) or a
symmetric one (sub-section 2.2).

2.1 Asymmetric interconnection and essential facilities

An asymmetric interconnection agreement reflects a vertical relationship, whereby one
operator needs the other for its own service provision. The latter, who owns a facility
essential to the former, enjoys a strategic advantage that he can abuse: if he can freely
determine the conditions of access to his network solely in his own interest, he
effectively becomes the regulator of the competition game. Public regulation is then
deemed necessary to limit this power and to give control of the competition game back
to the public authorities.
2.1.1 The access to an essential facility
A carrier owns an essential facility if the other carriers cannot duplicate the infrastructure
at reasonable cost. In telecommunications and energy networks, access to subscribers
is such a facility: the local loop in telecommunications and the distribution network for
gas and electricity are bottlenecks through which competitors have to pass. The joint
use of an infrastructure by competing operators thus raises the issue of access
conditions, since limited access at high prices may curb the development of competition
and of new services.

However, the problem differs according to whether or not the owner of the essential
facility himself provides competitive services. If he provides interconnection only, his
objective will be to extract the profits of the operators using his network: he will set an
access charge high enough to capture all their profits. It is, however, in the owner's
interest to provide end-user services himself (Economides and Woroch 1992). This
vertical integration strategy may even be favorable to consumers, since the price of the
end-user service may decline. This result follows from the elimination of the double
margin when the supply of the service is vertically integrated, a classical result due to
Cournot (Economides and Salop 1992; Economides 1999).

Vertical integration, however, reinforces the risk of monopolization. The integrated
operator may well deny access to his essential facility in order to obtain or maintain a
monopoly on the final service. This strategy is called "foreclosure." In
telecommunications, for example, the operator that controls the local loop (the local
network)[1] can easily monopolize the market for long-distance calls. To do so, he can
use either the price or the quality of access to the local loop, in order to raise rivals'
costs or to downgrade their quality of service (Economides 1998; Economides and Lehr
1995; Beard, Kaserman and Mayo 1996).
If competition involves strongly substitutable services, or if operators face no capacity
constraints, a vertically integrated operator clearly has a strong incentive to exclude his
competitors, to prevent them from stealing his business. Conversely, if operators face
capacity constraints or provide services differentiated in quality or variety, the incumbent
can in principle gain by letting them enter: he earns new revenues from access to his
essential facility, which offset the profit losses on competitive services (Economides and
Woroch 1992; Pénard 1999). But these results hold in a static framework. In the long
run, operators can always overcome capacity or variety constraints, and foreclosure
seems the most robust strategy. Beyond the concern about anti-competitive behavior,
fairness issues also arise: it seems questionable to leave to an operator who has, for
historical reasons, inherited an essential facility the right to establish the rules of
competition, namely the right to decide who may use his infrastructure. Efficiency and
equity arguments thus call for regulation of the conditions of access to essential
facilities.
2.1.2 Access regulation
The objective of regulation is to open access to the essential facility and to promote
competition in the complementary services. In such a perspective, the regulator must
intervene in the rights and obligations of each operator. He can make interconnection
compulsory or specify some modalities of the contractual agreements among operators
(points of access, tariffs, ...). However, the regulator must be aware of the dilemma faced
by the owner of the essential facility. When the latter invests in the capacity or quality of
his facility, he can provide better services to end-users, but the competitors that have
access to his infrastructure can improve their services as well. In other words, he
appropriates only part of the return on his investments. This situation may lead the
integrated operator to under-invest in his essential facility, as shown by Armstrong,
Cowan and Vickers (1994) in the case of the United Kingdom. The trend is even stronger
if the incumbent expects an unfavorable evolution of the regulation (an increase in the
points of access to his network or a decrease in access tariffs). The regulator thus has to
make sure that the former monopoly is paid enough for the use of his network and for his
investment. Conversely, access charges that are too high may lead competitors to invest
in inefficient bypass infrastructures (Curien, Jullien and Rey 1998). What, then, should
the appropriate tariff for access to the essential facility be?

The regulation of access tariffs to an essential facility is complex, given that most
network costs are fixed and shared with other activities of the incumbent. Several rules
of efficient regulation have been proposed to recover the costs of using the facility.
Baumol and Sidak (1994) recommend pricing access at its opportunity cost, which
corresponds to the marginal cost of giving access to the infrastructure plus the owner's
loss of revenue owing to competition in the complementary services. This rule is called
the ECPR (Efficient Component Pricing Rule). Its advantage is to prevent entry by
operators less efficient than the incumbent, though its efficiency has been questioned
theoretically (Armstrong, Doyle and Vickers 1994; Economides and White 1995, 1998).
Spulber and Sidak (1997) claim that regulated access tariffs should comply with at least
one condition of voluntary interconnection: interconnection must be profitable for the
owner of the essential facility. Finally, Laffont and Tirole (1994) suggest applying a
Ramsey–Boiteux tariff for access to a local facility. This rule sets access charges
inversely proportional to the price elasticity faced by each of the alternative operators:
the interconnection charges paid by an operator will be lower the higher the price
elasticity of his services. Laffont and Tirole generalize these results to the situation
where the regulator knows the costs and behavior of the incumbent only imperfectly. An
efficient regulation then consists of a menu of interconnection contracts proposed to the
incumbent, to induce him to reveal his true cost and to exert sufficient productivity effort.
But these incentive schemes leave an informational rent to the incumbent.
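The ECPR logic lends itself to a short numerical sketch. The function and all figures below are hypothetical, purely illustrative (not the authors' notation): the access charge equals the marginal cost of providing access plus the retail margin the owner forgoes on each unit diverted to a rival.

```python
# Hedged numerical sketch of the ECPR (Efficient Component Pricing Rule).
# All cost and price figures are hypothetical.

def ecpr_access_charge(marginal_access_cost, retail_price, retail_marginal_cost):
    """Access charge = marginal cost of access + owner's forgone retail margin."""
    opportunity_cost = retail_price - retail_marginal_cost
    return marginal_access_cost + opportunity_cost

charge = ecpr_access_charge(marginal_access_cost=2.0,
                            retail_price=10.0,
                            retail_marginal_cost=6.0)
print(charge)  # 6.0

# Screening property: paying an access charge of 6.0, an entrant breaks even
# at the retail price of 10.0 only if its own downstream cost is at most 4.0,
# which is exactly the incumbent's downstream cost (6.0 - 2.0).
```

Under these numbers the rule admits only entrants at least as efficient downstream as the incumbent, which is precisely the advantage (and the target of the criticism) noted above.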

These efficient rules, although analytically convincing, are difficult to implement. Most
telecommunications regulatory agencies have chosen a method grounded first on
accounting (or historical) costs and then, as their knowledge of costs improves, on
long-run incremental costs, to which profit margins on invested capital are added. Only
New Zealand tried to apply the ECPR rule, and abandoned it after several lawsuits
brought by new operators (Blanchard 1995). In France, as in most European countries,
the public authorities have proclaimed the principles of transparency, fairness, and
efficiency in interconnection regulation. Moreover, they have required the incumbent to
respond positively to interconnection demands and to offer cost-oriented prices. The
incumbent is also subject to accounting separation, as a first step towards better
knowledge of its costs, in line with the regulator's concern to set interconnection prices
as close as possible to the usage costs of essential facilities. Regulation thus reduces
the contractual freedom of the incumbent, both in the choice of counterparts and in the
choice of tariffs. Recently, public authorities have sought to go further in this direction by
imposing on the incumbent an interconnection obligation as close as possible to the
subscriber: the unbundling of the local loop.

Interconnection to essential facilities is not the only type of agreement examined in
network economics. Agreements among symmetric networks have also been the subject
of numerous, though more recent, papers.

2.2 Symmetric interconnection and compatibility

An interconnection is symmetric when two operators each have direct access to their
own customers, on the one hand, and are in competition, on the other: for example, an
agreement between a fixed and a mobile network operator, or an agreement between
Internet Service Providers (ISPs). Symmetric interconnection raises issues of both the
quality and the prices of the services that operators provide to each other.
2.2.1 Competition and compatibility
Symmetric interconnection is first of all a compatibility issue. Operators have to decide
whether to allow their customers to access the networks and services of competing
operators, while providing their own services to the latter's customers in return. This
choice depends on two opposite effects. On the one hand, interconnection enables
customers to benefit from network externalities: as network size and the number of
services increase, customers' willingness to pay increases too, so operators can raise
the price of their services without reducing customer utility. On the other hand,
interconnection brings networks closer together in terms of quality of service; by
reducing differentiation, it increases substitutability and price competition among
operators. The network effect thus favors interconnection, while the substitution effect
(or business-stealing effect) restrains it. The net effect depends on several conditions
concerning the size of externalities and the characteristics of demand and of operators
(Encaoua, Michel and Moreaux 1992; Katz and Shapiro 1985). For Automatic Teller
Machines (ATMs), for example, the compatibility decision will depend, among other
things, on the initial size of the networks and on interbanking fees (Matutes and
Padilla 1994).
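The tension between the two effects can be made concrete with a deliberately minimal sketch. The function, parameter names, and numbers below are all hypothetical, chosen only to show how the comparison works: interconnection adds network value proportional to the rival's subscriber base, but costs a margin loss from keener price competition.

```python
# Minimal illustration of the interconnection trade-off (hypothetical model):
# network effect: each subscriber values the rival's n_rival subscribers at beta each;
# substitution effect: closer substitutability shaves margin_loss off the margin.

def gain_from_interconnection(beta, n_rival, margin_loss):
    """Net per-subscriber gain from interconnecting; > 0 means interconnect."""
    return beta * n_rival - margin_loss

# Strong externalities relative to business stealing: interconnection pays.
print(gain_from_interconnection(beta=0.01, n_rival=1000, margin_loss=5.0) > 0)  # True
# Weak externalities: the business-stealing effect dominates.
print(gain_from_interconnection(beta=0.01, n_rival=100, margin_loss=5.0) > 0)   # False
```

The sketch also hints at the Baake and Wichmann result cited below: a small rival (low `n_rival`) offers the large ISP too little network value to offset the competitive loss.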

This theoretical framework fits interconnection among Internet networks well. Many ISPs
refuse to interconnect even though they could benefit from network externalities. Thus
Baake and Wichmann (1999) show that a large ISP (with many subscribers) may deny
interconnection to a small ISP in order to keep a quality-of-service advantage.
Dang-Nguyen and Pénard (1999) also show that ISPs have more incentive to
interconnect when they have similar qualities. But compatibility in the case of the Internet
is less a discrete choice (whether or not to interconnect) than a continuous one (which
interconnection quality to adopt): each operator chooses the quality of interconnection
through the maximum and guaranteed bandwidth of the connections set up with other
operators.

Finally, few theoretical works are devoted to interconnection choices among telephone
networks. First, compatibility is compulsory: in Europe as in the United States, any open
voice network has to accept interconnection demands, in conformity with quality
standards imposed by the regulator. Moreover, network effects are considered so large
that compatibility issues seem irrelevant.
2.2.2 Financial transfers and collusion
Whether voluntary or mandatory, interconnection requires an agreement between
operators on tariffs and financial transfers. The financial counterpart to the service may
take the form of fixed or usage-sensitive charges, and these may be set either
independently or cooperatively. In most cases the operator receives revenues directly
from its subscribers and transfers part of them to the operators terminating the service or
the call.

Compared to this principle, the Internet exhibits a specificity: most interconnection
agreements, called peering agreements, contain no financial counterpart to traffic
exchanges. Each operator keeps all revenues stemming from its own network
customers.

When operators agree to set positive access charges, they face the following dilemma: a
high access charge increases the revenue on each incoming call, but limits the number
of calls. The net effect depends on the price elasticity of the interconnected users.
Laffont, Rey and Tirole (1998a) show that when operators choose interconnection tariffs
non-cooperatively, the result is sub-optimal, with tariffs that are too high, since operators
do not internalize the adverse effects on the utility and demand of users. The authors
then study the case of a collective determination of interconnection charges. The
cooperative choice may reduce competition and increase the retail prices of calls,
reflecting possible collusion among operators. These effects are strengthened when
the regulator imposes a non-discrimination principle between incoming and outgoing
calls, as well as reciprocity of access charges (the same charge for the incoming calls of
all networks).[2] If these regulatory constraints are lifted, a cooperative determination of
access charges leads to less collusion (Laffont, Rey and Tirole 1997, 1998b).
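The elasticity dilemma can be illustrated with a toy constant-elasticity demand for incoming calls (the functional form and all symbols are hypothetical, for illustration only): with volume Q(a) = k·a^(−ε), termination revenue R(a) = a·Q(a) = k·a^(1−ε) rises in the access charge a when ε < 1 and falls when ε > 1.

```python
# Toy model of the access-charge dilemma (hypothetical demand function):
# incoming-call volume Q(a) = k * a**(-eps); termination revenue R(a) = a * Q(a).

def termination_revenue(a, k=100.0, eps=0.5):
    """Revenue from incoming calls at access charge a."""
    return a * k * a ** (-eps)

# Inelastic incoming demand (eps = 0.5): a higher charge raises total revenue.
assert termination_revenue(2.0, eps=0.5) > termination_revenue(1.0, eps=0.5)
# Elastic incoming demand (eps = 2.0): a higher charge lowers total revenue.
assert termination_revenue(2.0, eps=2.0) < termination_revenue(1.0, eps=2.0)
```

This is just the "net effect depends on price elasticity" point in miniature; the Laffont–Rey–Tirole results add strategic interaction between the two networks on top of it.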

One limit of these papers is that they describe competition in a static framework. Only
the impact of interconnection on current profits is considered, without taking into account
its consequences for the dynamics of competition. In particular, the link between access
charges and collusion is never analyzed in an intertemporal context. Without going into
details, one can say that the transition from a static to a dynamic framework often
reverses the results: this is the topsy-turvy principle underlined by Shapiro (1989). For
example, if operators want to sustain tacit collusion on prices, they can punish those
who cheat or breach the agreement by reverting to the competitive static equilibrium.
Since a cooperative determination of access charges increases static profits, it reduces
the severity of punishments and makes collusion less likely. A practice that increases
current profits may thus appear collusive in a static framework while being
pro-competitive in a dynamic one. The results of the previous models, most of them
static, should therefore be treated with caution when evaluating the efficiency of a
regulation that is essentially dynamic.
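The topsy-turvy logic can be sketched with the standard grim-trigger sustainability condition from repeated games (the condition is textbook; the profit numbers are hypothetical): collusion is sustainable only if the discount factor exceeds δ* = (π_dev − π_col)/(π_dev − π_pun), so anything that raises the punishment-phase profit π_pun raises δ* and makes collusion harder to sustain.

```python
# Grim-trigger sustainability threshold for tacit collusion
# (textbook repeated-game condition; profit figures are hypothetical).

def critical_discount(pi_col, pi_dev, pi_pun):
    """Collusion is sustainable iff the discount factor >= this threshold.
    pi_col: per-period collusive profit; pi_dev: one-shot deviation profit;
    pi_pun: per-period profit in the static punishment equilibrium."""
    return (pi_dev - pi_col) / (pi_dev - pi_pun)

# Harsh punishment (low static profit): collusion is easy to sustain.
baseline = critical_discount(pi_col=10.0, pi_dev=15.0, pi_pun=5.0)
# Cooperative access charges raise static profits, softening the punishment ...
softened = critical_discount(pi_col=10.0, pi_dev=15.0, pi_pun=8.0)
# ... so the threshold rises and collusion becomes harder to sustain.
assert softened > baseline
print(baseline)  # 0.5
```

This is exactly why a practice that raises static profits can be pro-competitive in a dynamic setting: it weakens the threat that sustains collusion.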
To sum up, network economics has the virtue of exhibiting the strategic motivations of
operators in interconnection agreements. This approach focuses on the way operators,
and also regulators, interact in the competition game. However, it underestimates the
coordination problems faced by operators in a context of strong uncertainty and
imperfect information. Moreover, it takes the institutional framework as given, whereas it
is well known that operators constantly try to influence the rules of the game and to
modify the regulation. The competition game is thus embedded in an institutional game
on which the rules and rights for interconnection ultimately depend. In section 3 we
complete the strategic approach with an institutional one.

[1]
This was AT&T's attitude when MCI entered this market in 1963.

[2]
Economides, Lopomo and Woroch (1996) show, however, that if one network dominates (has
a strategic advantage), the reciprocity rule prevents that network from monopolizing all the
subscribers, whereas non-discrimination rules (access price equal to internal price) and an
unbundled supply of service are less efficient.
3 Contract theory and interconnection
Through an interconnection agreement, each operator obtains a usage right over
network capacity belonging to another operator. This right is normally reciprocal, but, as
seen above, the parties are not in symmetric positions. Contractual difficulties may result
from asset specificity and opportunistic behavior. The usage right defined in an
interconnection agreement may conflict with the operator's property right over its own
network. The parties thus have to set up "governance structures" which reconcile the
right of usage and the property right. But the choice of a governance structure is not
independent of the institutional framework designed by the public authorities, who can
limit or transform any property right over the network. We show that this actually
happened in telecommunications, thus illustrating a theoretical issue raised by
Williamson (1993), in continuity with Commons (1934) and North (1990): how institutions
affect governance structures.

3.1 Network evolution, property rights, and rights of usage
In a transport network, flows are commanded through an overlay network called a
"command network." In a railway network, for example, the command network is the set
of signaling and switching devices. Previously, the command network and the
infrastructure network were combined into a single system, intended to optimize the
performance of the network globally. In that context, interconnection meant the
interoperability of two systems, for both the command and the infrastructure networks.
International agreements for telephone networks followed this principle.

In the 1980s, it became possible to split the command and infrastructure networks. This
unbundling was established in the United States in 1987 by the Open Network
Architecture (ONA) doctrine of the Federal Communications Commission (FCC). It gave
new operators the opportunity to freely combine "modules" leased from incumbent
operators. Telecommunications networks, like computer networks, were then functionally
broken down. This led to a redefinition of usage as well as property rights over each
module. The question of the efficiency and fairness of this redefinition is thus raised: the
analysis of interconnection agreements cannot neglect the breaking down of networks
into modules.
Transaction-cost theory (TCT) analyzes the possibility of separating or integrating
several modules depending on their asset specificity: modules can be either assembled
or leased. But TCT says nothing about the initial definition of modules or about the
assignment of property and usage rights, which seems to fall to an institutional power.
TCT assumes that modules exist before their possible integration and does not explain
how they emerge. One hypothesis is that the definition of these modules is an
institutional matter, which evolves and has to be explained theoretically. Some examples
show how this process occurs. The splitting of a system into modules is often functional,
and may be linked to a horizontal (AT&T's geographical breakup in 1984) or vertical (the
breakup of IBM's complementary activities, such as software and hardware, in the
1960s) institutional separation. A similar splitting occurred in electricity networks (Joskow
1996). Modularity, moreover, appears to be a major feature of information technology
goods and services. The question of the effectiveness of this institutional intervention is
thus raised.

3.2 Institutions, transaction costs, and assignment of property rights
Functional splitting can be explained by technical progress. Splitting should provide firms
with the possibility of creating more efficient modules, later recombined into more
complex services or goods. Williamson (1993) claims that technology exerts a
semi-weak determination on organizational choices, inasmuch as known technological
options limit the feasible governance structures. Institutional intervention would thus
merely ratify technological opportunities. Given the state of the art in technology,
separable and well-identified modules could be individually appropriated and their
ownership rights transacted. In the network industries, those modules could be
interconnected through contractual arrangements in order to provide a complete service
to the customer.
But module appropriation is not so clear-cut. Technology places practically no limit on
separability into modules: it is now technologically possible to split elementary particles
or genetic codes. The technical frontier thus lies beyond what society considers an
economically or socially sustainable property right. An intuitive answer to this issue is
that a first-mover principle should apply: those who elaborate or create the module
should own it. But everything that has happened in information technology (IT) suggests
that a subsequent reconsideration of the initial assignment of property rights is always
possible, precisely through a finer definition of modules. Only institutions have the
coercive power to assign those rights and to call them into question. It is thus essential
to know the motives and criteria of institutional action.

Williamson does not explicitly treat this question. He is more interested in the evolution of
"governance structures" than in the initial assignment or the subsequent institutional
reallocation of property and usage rights (Williamson 1981). Nonetheless, he puts
forward a principle of "remediableness" to guide institutional actions (Williamson 1996):
one should implement only institutional reforms which, once achieved, would provide a
"net gain." Similarly, Commons (1934) suggests that institutions comply with the principle
of "artificial selection": public institutions choose to favor institutional rules which favor
more efficient but also "fairer" transactions (Ramstad 1994). Following the categories of
Commons, the "institutional environment" (institutions) should be conceived to facilitate
better "institutional arrangements" (governance structures), in terms of equity and
efficiency.

Advocates of the property-rights school emphasize the role of property rights structures
upon economic performance (De Alessi 1983). For Demsetz (1998), non-institutional
conditions, such as the initial endowment of factors or the evolution of exogenous
parameters like transportation costs, are essential for the definition of the rules and may
lead to the revision of the ownership-rights structures. Technology and its evolution could
be interpreted as a form of non-institutional determination.

Very differently, evolutionary theorists suggest that institutional rules follow a principle of
path dependency (Magnusson and Ottosson 1996). Once such rules are established on an
institutional trajectory, public authorities cannot easily call them into question.

This brief discussion of the motives for institutional intervention raises the following
question: Why do public authorities periodically decide to modify the "institutional rules"
that are applied on markets? Who initiates these changes? We will try to answer these
questions through the example of the modularization process that has been occurring in
telecommunications networks. However, it is necessary to distinguish between voice
networks and data networks, where "modularization" has taken a different form.

3.3 Modularization and assignment of property rights in voice networks
The existence of "integrated systems" in telephony is historically explained by
interconnection difficulties among early operators (Muller 1993). Indeed, networks were
initially developed by local governments because intercity networks were too expensive.
Competition did exist in the United States, but also in France, where state intervention is
a tradition. But in the United States, the dominant Bell System network denied
interconnection to its competitors, arguing that interconnection would
undermine its competitive advantage and expropriate the shareholders of the Bell
System. Theodore Vail, the president of the Bell System, pleaded for the uniqueness of the telephone system
with the motto: "one policy, one system, UNIVERSAL SERVICE." Institutionally, this
point of view has become dominant and telecommunications developed through a
monopolistic organization, theoretically grounded on the "theory of the natural monopoly"
(Sharkey 1982): duplication of infrastructures seemed costly and inefficient. Moreover,
complementarity between the local and the long-distance networks justified integration
into one system, to economize transaction costs. In some countries (the United States,
Canada, Finland, Denmark) local and long-distance networks were run separately, but
coordination structures did exist among the local and long-distance monopolies.
In the 1980s, a liberalization process took place which can be interpreted as the calling
into question of monopolies' property rights on their networks. However, with the
noticeable exception of the United States, there has been no vertical or horizontal
separation of ownership, namely no breakup, but a functional separation which limited
the property rights of the operators on their networks. The unbundling of the local loop is
an example of this process. Modules are defined by regulation; incumbent operators
continue to own the modules, but competitors are granted a usage right to lease
them. The new operators now choose between "make" and "buy" (or, rather, "lease") not
on the basis of "market" versus "integration," as in the TCT paradigm, but of
"institutionally guaranteed lease" versus "integration." The guarantee goes as far as to
define prices, technical interfaces, availability conditions, etc. It thus remains to
examine why an exclusive (and monopolistic) ownership right was institutionally
recognized for the monopolies up to the 1980s, and why this exclusive right
has been called into question since then.

Clearly, technology is part of the explanation, as suggested by Williamson and Demsetz:
the decrease of transmission costs has enabled the duplication of long-distance
capacities. Vertical separation between long-distance and local networks was a kind of
"institutional remediableness" in accordance with this evolution. But interestingly, this
separation occurred in one country only, the United States. In other countries, it was
limited only to an accounting separation, without breakup.

On the other hand, unbundling was more the consequence of lobbying than of the search
for optimal governance structures. Indeed, the separation of the command network from
the infrastructure network, through the definition of Open Network Architectures (ONAs),
was set up at the monopolistic operators' initiative. The latter were eager to buy
equipment in separable modules, in order not to be locked in by any one equipment
manufacturer. Later, when competition was institutionally admitted in telecommunications
services, the competitors of the incumbent operators took advantage of these modules to
gain access and, with the help of regulatory agencies, lease only a subset of modules
rather than entire systems.
The method that regulatory authorities, such as the FCC in the United States, Oftel in the
United Kingdom, or the ART in France, have used to modify the institutional rules on
telecommunication markets is rather original. Instead of proceeding bluntly (breakup,
obligation to sell and thus to abandon property rights on specific activities), they have
adopted a more flexible and "fine-tuned" approach, based on property rights and usage
rights. They have been searching for a fine balance between property and usage rights.
To some extent, regulation has appeared as an institutional innovation, which has
created the conditions for challenging the property rights of telecommunications
operators for the sake of promoting competition. However, from a "governance structure" point of
view, the new operators do not take a "make-or-buy" decision, but a "make-or-lobby"
decision.
We can thus say that former monopolies initiated modularization to improve their
governance structures with their equipment suppliers, but this subsequently led to an
institutional innovation: the enforcement and regulation of leasing contracts
between competitors and the former monopoly. This had the consequence of limiting the
latter's property rights on its network, and promoting competition. This institutional
innovation had several advantages. First, through the guarantee of interconnection, it
reconciled the beneficial effects of competition with the network externality: everybody
could reach anybody, regardless of his or her telephone supplier. Second, as the former
monopolists avoided breakup, they more easily accepted the transition to competition.
It is not clear, however, whether the governance structure emerging from this decision
process is really optimal. In other words, do the regulatory agencies provide the right
signals and incentives to both the incumbent and the new entrants to stimulate
innovation, given that the benefits of owning some assets may have to be shared with
competitors? Moreover, lobbying is not an efficient business per se, and may lead to
opportunistic behavior: it might be cheaper, but socially more detrimental, to obtain an
extended usage right over somebody else's asset than to negotiate a lease contract
directly, or to establish one's own asset.

3.4 Ownership-rights assignment for data networks
In data networks, a modularization process has also occurred with the emergence
of the Internet. The Transmission Control Protocol/Internet Protocol (TCP/IP) clearly
defines a boundary between what is "below the IP layer," namely the infrastructure
transporting data flows, and what is "above the IP layer," namely the command network
and all Internet services and applications. Hence, an Internet service uses at least two
components: a "transport module" and a "service module." The latter may itself be cut
into smaller modules provided by different suppliers: access, browsers, applets, and so on.

The infrastructure layer consists of thousands of heterogeneous interconnected networks.
Most of the time, interconnection among ISPs is tacitly agreed upon. ISPs often
cooperate on data flow transport without financial compensation (peering agreements).
For applications "above the IP layer," many innovations such as the Web and its tools
(html, http), browsers, search engines, operating systems (Apache, Linux), and
programming languages (Perl) have been the outcome of a collective development by
the Internet community, and disseminated freely (Stallman 1999). The Internet has thus
provided an original solution to the coordination issue: users themselves have set up the
"governance structures," thereby reducing the opportunism and transaction costs that
may arise from the sharing of tangible or intangible assets. Under such conditions, public
institutions have not had to intervene, and modularization has been achieved on a strictly
technical basis. The question of assignment of property rights has been avoided. It is
clear that the specific origin (military and academic) of the Internet has been of
paramount importance in explaining such an evolution. There has been a "path-dependent"
institutional evolution in the development of the Internet, which explains why so many
services are "free" and why it is so difficult to earn money on the Net. But the diffusion of
the Internet in the market economy may lead to a profound evolution.

For example, peering agreements are beginning to be replaced by formal interconnection
contracts. At the applications layer, companies like Microsoft and, to a lesser extent, Sun
have attempted to individually appropriate the collective benefit of the Net. This shows
that the public-good nature of the Internet may disappear in the future. Conflicts
between the advocates of individual intellectual property rights and the proponents of
collective appropriation (exemplified by the Free Software Foundation) are becoming
more and more relevant. From a public policy point of view, it remains to be seen
whether the refusal to assign individual property rights over the modules is efficient and
fair in Commons' sense.

The collective organization of the Internet at the applications level may be justified by the
following features: first, software is never finished and benefits from subsequent
improvements by users adapting it to their own needs. Second, by abandoning
Intellectual Property Rights (IPR), innovators may trigger a network effect more
quickly (Katz and Shapiro 1986; Church and Gandal 1992). Innovators then benefit by
increasing their reputation and by providing ancillary (and paid-for) services.
4 Conclusion
The analysis of interconnection contracts in network industries has underlined the strong
interdependency between the competition game and the institutional game, where the
latter defines the property rights as well as the rules of the competition game. In
particular, it establishes the role and scope of intervention of the regulatory authority.
We have shown in this context that the regulator has to take into account the
asymmetries among operators and the dynamics of competition.
In telecommunications networks (voice and data), modularity has been, without a doubt,
not only a technological but also an institutional innovation, which has allowed the
stimulation of competition. Its implementation has depended on the nature of the network:
in voice transport, it has been accompanied by a partial expropriation of the former
monopoly, a sine qua non condition for the effective functioning of competition. In the
case of data networks, there has instead been, at least initially, a "collectivization" of
some modules. These differences in institutional choices may be explained by path
dependency: because it was born in the academic community, the Internet has favored
and still favors communitarianism. Because telephone operators initially held a monopoly
over a full-fledged network, institutions have redistributed some property rights.

Convergence between voice and data networks may lead to a partial elimination of these
institutional differences. Indeed, some practices of data networks (peering, free software)
do not rely on a stable institutional framework. This framework has to be designed and
the property rights explicitly assigned. A convergence of the institutional frameworks is
likely to occur for both the Internet and telephone networks.
Notes
Chapter 20 was originally published as "Les accords d'interconnexion dans les réseaux
de télécommunications: des comportements stratégiques aux droits de propriété," in
Revue d'Economie Industrielle (92, 2000).
1. This was AT&T's attitude when MCI entered this market in 1963.
2. Economides, Lopomo and Woroch (1996) show, however, that if a
network dominates (with a strategic advantage), the reciprocity rule
prevents this network from monopolizing all the subscribers. But
non-discrimination rules (access price equal to internal price) and an
unbundled supply of service are less efficient.
Chapter 21: Licensing in the Chemical Industry
Ashish Arora, Andrea Fosfuri
1 Introduction
A firm wishing to protect its intellectual property from imitation has different options,
notably patents, first-mover advantage, lead time, and secrecy. Although patents are
often thought to be less effective at enabling the inventor to benefit from the innovation
than other alternatives (Levin et al. 1987; Cohen et al. 2000), they have an important
socially valuable feature that the alternatives lack. Specifically, patents can be used to
sell technology, typically through licensing contracts.

This is our point of departure from the traditional approach to patents, which has mainly
focused on patents as a means to exclude others. By reducing transaction costs, patents
can play a key role in facilitating the purchase and sale of technology, or in other words,
the development and functioning of a market for technology. A market for technology
helps diffuse existing technology more efficiently; it also enables firms to specialize in the
generation of new technology. In turn, such specialization is likely to hasten the pace of
technological change itself. The reason for focusing on the development and functioning
of a market for technology is that it greatly reduces the transaction costs involved in
buying and selling technology, implying that innovators have the option of appropriating
the rents from their innovation by means of simple contracts, instead of having to exploit
the technology in-house.

However, the development of a market for technology is not an automatic outcome. It
depends not only on the efficacy of technology licensing contracts (and on the strength
of patents that underpin these contracts), but also on the industry structure itself. This is
an important issue - whether firms contract for technology depends not only on the
transaction costs, as commonly understood, but also on historical factors. Thus, in
chemicals, the presence of specialized engineering firms that licensed technology, and in
other cases provided complementary know-how for technologies developed by chemical
firms, played a key role. Increasing competition has also fostered the willingness of
even the largest chemical firms to license their technology, while globalization and entry
since the Second World War have meant that there exists a substantial number of
chemical producers that are potential buyers of technology.

The chemical industry provides a natural framework within which to explore these
themes. It is a technology-based industry with a long history of patenting and licensing.
Further, as we show, transactions in technology have become widespread, with
substantial variations across products.
Section 2 reviews the contribution of the economic literature on licensing contract design,
whereas section 3 underlines the role of patents in facilitating the diffusion of technology.
In section 4 we show how, in the past, chemical firms have used patents as one of the
ways of excluding competitors and creating monopolies. However, after the Second
World War firms started to use licensing contracts (underpinned by patents) as a means to
profit from innovation, leading to the development of a market for chemical process
technology. As section 5 argues, patents have also facilitated the entry of specialized
engineering firms and a progressive division of labor. Furthermore, as discussed in
section 6, this has profoundly influenced how even large chemical producers appropriate
rents from their innovations. Section 7 discusses the specific features of the chemical
industry that have favored the creation of a market for technology. Section 8 summarizes
and concludes the chapter.
2 Review of the economic literature on licensing contract
design
Most of the early work in the literature on licensing contract design (see Kamien 1992
for a survey) has analyzed the optimal licensing contract for a non-producer innovator
in a framework with perfect information and homogeneous goods. The two main findings
are that an auction is the mechanism that maximizes profit extraction from the licensees,
and that licensing by means of a royalty is inferior to a fixed-fee payment both for the
non-producer innovator and for consumers. However, Muto (1993) finds that a royalty
might be superior to a fixed fee in a differentiated goods duopoly with Bertrand
competition, and Rockett (1990) shows that output royalties can be optimal when the
licensor and the licensee compete in the same product market.
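The fixed-fee versus royalty comparison can be made concrete with a small numerical sketch (our illustration, with assumed parameter values and function names, not drawn from the works surveyed): an outside innovator licenses a non-drastic cost-reducing innovation to a Cournot duopoly facing linear demand.

```python
# Illustrative sketch: outside innovator licensing a cost reduction eps to a
# Cournot duopoly with linear inverse demand P = a - Q and marginal cost c.
# All parameter values are assumptions chosen for the example.

def cournot_profit(c_own, c_rival, a):
    """Equilibrium profit of a Cournot duopolist with linear demand P = a - Q."""
    q = (a - 2 * c_own + c_rival) / 3.0
    return q * q

def licensing_revenues(a=10.0, c=4.0, eps=1.0):
    # Fixed fee: each firm pays up to its gain from accepting the license
    # (given that the rival also accepts) over refusing while the rival accepts.
    fee_per_firm = (cournot_profit(c - eps, c - eps, a)
                    - cournot_profit(c, c - eps, a))
    fee_revenue = 2 * fee_per_firm
    q_fee = 2 * (a - (c - eps)) / 3.0      # total output when both license

    # Per-unit royalty r = eps: the licensees' effective marginal cost is back
    # to c, so total output and price are unchanged from the pre-innovation
    # equilibrium, and consumers gain nothing.
    q_roy = 2 * (a - c) / 3.0
    royalty_revenue = eps * q_roy
    return fee_revenue, royalty_revenue, a - q_fee, a - q_roy

fees, roys, p_fee, p_roy = licensing_revenues()
print(fees, roys)    # fee revenue exceeds royalty revenue
print(p_fee, p_roy)  # and the market price is lower under the fixed fee
```

For these values the fee yields 16/3 ≈ 5.33 to the innovator against 4 under the royalty, with a lower price (16/3 versus 6), matching the early literature's finding that royalties are inferior to fixed fees for both the non-producer innovator and consumers.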

Since Arrow (1962) it has been well known that licensing contracts are plagued with
information problems which may result in imperfect appropriability. (See also Caves,
Crookel and Killing 1983.) Indeed, in a framework with asymmetric information in which
one party might not know the other party's type, a licensor endowed with a technology
with a low commercial value can pretend to have a much more profitable technology. An
uninformed licensee would be willing to pay no more than the expected value of the
technology. As a result, higher-type licensors want to offer a contract that a lower type
would never find it in her best interest to offer. Gallini and Wright (1990) show that
performance-based royalties may allow separation because higher-type contracts can
base a large fraction of the total payments on output when it is commonly known that a
higher-value innovation will result in greater output than a lower-value innovation. (See
also Macho-Stadler and Perez-Castrillo 1991.) Beggs (1992) obtains a similar result in a
model in which it is the licensor who lacks information about the "type" of the licensee.

The design of the licensing contract is further complicated by the tacit nature of the
technology. Tacit technology is typically transferred as know-how, but contracts for
know-how are marked by double-sided moral-hazard problems (Arrow 1962; Teece
1986). For instance, once the licensor has received the payments, she may not send her
best engineers or managers over to the licensee to provide the technical service, or she
may provide the licensee's engineers with only limited exposure to her own operations.
Some important trade secrets may not be revealed to the licensee. Given this possibility
of moral hazard on the part of the licensor, the licensee would like to make the bulk of
the payments after being satisfied that the full technology, including the tacit part, has
been transferred. However, once the licensee has learned the know-how, she cannot be
forced to "unlearn" it. Hence, a licensee may refuse to pay the agreed-upon amount in
full after the know-how is transferred.

There are ways through which the efficiency of contracts for know-how can be enhanced.
These include reputation-building in the context of repeated contracting, and the use of
output-based royalties. However, output-based royalties may not solve the moral-hazard
problem. Indeed, the amount of output produced by the licensee is often private
information and hard to assess by the licensor or a third party. In addition, output-based
royalties can handicap a licensee in the product market, especially in oligopolistic
markets (e.g. Katz and Shapiro 1985); possibly for this reason, the use of output-based
royalties to compensate the licensor for technical assistance has been found to be
uncommon (Contractor 1981). Reputation-building through repeated contracts, while a
potential solution, requires a greater degree of integration among the partners.

Arora (1995) shows that efficient contracts for the exchange of technology can be written
by exploiting the complementarity between know-how and any other technology input,
most notably patents, that the licensor can use as a "hostage." With complementarity,
the use of the know-how, which cannot be taken back from the buyer once transferred, is
more valuable when used in conjunction with the complementary patents. This allows the
licensor to use her patents to protect herself against opportunistic behavior by the
licensee. On the other hand, the licensee protects herself by postponing a part of the
payment till the know-how has been transferred. If the licensee does not make the
second payment, the licensor can withdraw from the contract and revoke the patent license.
As long as the additional benefit of having the know-how and the complementary patents
is greater than the second-period payment, the licensee will make the payment. As long
as the second payment is greater than the cost to the licensor of supplying know-how,
the licensor will honor the contract as well. Thus, the problem of opportunism can be
mitigated through simple and self-enforcing contracts.
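The self-enforcement logic above can be sketched as a pair of participation conditions (a stylization of ours, with illustrative names and numbers, not Arora's own formalism): the contract holds whenever the second payment lies strictly between the licensor's cost of supplying know-how and the licensee's extra benefit from keeping the complementary patent license.

```python
# Minimal sketch of the self-enforcing two-part licensing contract.
# Names and values are illustrative assumptions.

def contract_is_self_enforcing(second_payment, knowhow_cost,
                               patent_complement_value):
    """The licensee pays the second installment only if losing the patent
    license would cost her more than the payment; the licensor transfers the
    know-how only if the installment covers her cost of supplying it."""
    licensee_pays = patent_complement_value > second_payment
    licensor_delivers = second_payment > knowhow_cost
    return licensee_pays and licensor_delivers

# Any second payment strictly between the know-how supply cost and the
# complementarity benefit sustains the exchange without court enforcement.
print(contract_is_self_enforcing(5.0, knowhow_cost=2.0,
                                 patent_complement_value=8.0))  # True
print(contract_is_self_enforcing(9.0, knowhow_cost=2.0,
                                 patent_complement_value=8.0))  # False
```

Such a contract is feasible only when the complementarity benefit exceeds the know-how supply cost, which is precisely why the patent's role as a "hostage" matters.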

Empirical research has shown that the vast majority of licensing contracts involve
performance-based royalties, often used in combination with fixed fees. For example,
Macho-Stadler, Martinez-Giralt and Perez-Castrillo (1996) found royalty provisions in 72
percent of 241 Spanish technology transfer contracts while Bessy and Brousseau (1998)
found such provisions in nearly 83 percent of French contracts. However, the available
evidence seems to suggest that royalty rates tend to vary very little across licensing
contracts for any given industry, and are typically established by "rule of thumb"
(Contractor 1981). This suggests that factors other than royalties may be important in
reducing transaction costs.
3 The role of patents as a transaction-cost reducing
mechanism
Patents can play an important role in determining the efficiency of knowledge flows,
which are critical to any knowledge-based division of labor. First, the direct costs of
knowledge transfer are lowered when the knowledge is codified and organized in a
systematic way. Since the innovator always has some discretion in how she codifies,
stores, and organizes knowledge, strong patent protection provides incentives to codify
new knowledge in ways that are meaningful and useful to others. This is particularly
important when innovation systematically originates in firms that will not develop and
utilize the knowledge themselves.

Second, patents might help to make licensing contracts more efficient by reducing the
transaction costs of transferring know-how in licensing contracts. As noted earlier,
patents can function well as a complementary input provided by the licensor. Thus, a
prototypical case would be one in which the technology to be transferred is composed of
both a patented (possibly codified) component and complementary know-how (e.g.
experience with using the technology). Arora (1996) uses a sample of 144 technology
licensing agreements signed by Indian firms to test the empirical relevance of patents.
He uses the provision of three technical services - training, quality control, and help with
setting up an R&D unit - as empirical proxies for the transfer of know-how. Arora (1996)
finds that the probability of technical services being provided was higher when the
contract also included a patent license or a turnkey construction contract. Interestingly
enough, machines and equipment merely increased the probability of training being
provided, whereas patents and turnkey contracts were more strongly associated with the
provision of services relating to quality control and R&D.

In the technology licensing agreements discussed below, the vast majority are contracts
that involve the transfer of know-how and unpatented technology. However, for the most
part, these contracts are underpinned by patents. Industry executives we interviewed
strongly believe that strong patent protection is vital for technology licensing and that
absent such protection, firms would drastically reduce the extent of technology licensing.

Arora and Gambardella (1994) and Merges (1998) argue that patents are likely to have a
greater value for small firms and independent technology suppliers as compared to large
established corporations. Whereas the latter have several means to protect their
innovations, including their manufacturing and commercialization assets, the former can
appropriate the rents to their innovations only by leveraging the protection that patents
provide. At the margin, an increase in the strength of patents and intellectual property
rights increases the returns from investments in technology development more
substantially for smaller technology specialists and start-ups than for the larger
integrated companies.[1]

In a more recent paper Arora and Merges (2001) use the incomplete contracting
approach (Grossman and Hart 1986; Hart and Moore 1990) incorporating not only
opportunism but also information spillovers. Information spillovers arise owing to the
supplier's effort to customize its generalized technology to the specific needs of the
buyer. Arora and Merges (2001) argue that stronger intellectual property rights enhance
the viability of specialized firms by reducing buyer opportunism. As the examples of
technology-sharing agreements in the chemical industry, discussed below, show, patents
play an important role in structuring complex contracts involving the exchange of
technology between large firms.[2]

Arora and Merges (2001) provide several examples of the role that patents play in the
specialty chemical industry, specifically from firms like Lonza or SepraChem specializing
in the design and production of optically pure or "chiral" compounds used as inputs by
the pharmaceutical industry.[3] These firms must typically expend considerable effort in
developing new molecules (or processes for developing new molecules) for their
customers, the large pharmaceutical firms. They hope to recoup this cost by supplying
the molecules over a period of time. This up-front cost, analogous to the cost of
transferring tacit knowledge, makes the firm vulnerable to hold-up by the customer.
Ownership of patents covering the design of their input products provides these firms
with some security if future trades with the customer firms do not come through, a
possibility that the financial disclosure documents of chiral suppliers explicitly note.
Indeed, a case study of the contractual relationship between Alkermes and Genentech in
Arora and Merges (2001) shows how patents are used to structure technology licensing
agreements. Alkermes has a proprietary microencapsulation drug-delivery technology,
and it routinely patents microencapsulated versions of highly successful drugs. This it
does in close collaboration with the large drug firms that own the rights to the drugs:
Alkermes has deals with Schering-Plough, Johnson and Johnson, and Genentech,
among others. Drug firms enter into these deals to access Alkermes' proprietary delivery
technology, which makes the drugs easier to take, and in some cases opens up new
sub-markets not available using conventional delivery techniques.

The basic structure of the Genentech-Alkermes deal illustrates the role of patents in such
transactions. There are two stages to the transaction: (1) Alkermes adapts its
microencapsulation drug delivery technology to Genentech's successful therapeutic
product, a genetically engineered form of the naturally occurring protein called Human
Growth Hormone (HGH); and (2) Alkermes manufactures the product for Genentech and
sells it at a pre-agreed price, with Genentech then marketing and distributing it. Note that
Alkermes is required to make substantial investments in adapting its technology to
Genentech's product, in creating the production process needed to manufacture it, and in
licensing this know-how to Genentech.

Arora and Merges (2001) argue that Genentech could technically duplicate the
production process if it wanted to. So what protection does Alkermes have? The major
source of protection for Alkermes is its patents. Alkermes currently has 43 patents
covering (1) its microencapsulation process; (2) novel polymers and preparations that
make up the coatings; and (3) microencapsulated formulations of the drugs it delivers
under its collaboration agreements. These patents provide a fallback in the event that
Genentech does not continue with the agreement. They would prevent Genentech from
using the Alkermes technology after the agreement is terminated.

[1]
In some cases, policies designed in the naïve hope of encouraging small inventors have
encouraged the abuse of the patent system. In the United States, for instance, there have
been well-known cases where patents filed in the 1950s were ultimately issued more than
twenty years later. In the meantime, the patentee could legally amend the application so that
it covered inventions made well after the filing date. Since patents in the United States are
published only upon issue, such patents (sometimes referred to as "submarine" patents
because they are not visible for long periods after they are filed) have surprised many
established firms. The move towards patent harmonization, which will require publication of all
patent applications after a certain period, will be helpful in this respect.

[2]
This is not specific to the chemical industry. Grindley and Teece (1997) report that, in cross-
licensing agreements in electronics and semiconductors, the quality and the market coverage
of the patent portfolios of each party is used in the calculation of balancing royalty payments.
(See also Hall and Ham 1999.)

[3]
Briefly, many molecules can exist in two mirror-image forms; they are said to be "chiral."
The majority of biomolecules occurring in the human body exist in only one of the two
possible forms. Because the wrong chiral form can be ineffective or harmful (as in the case of
the drug thalidomide), sophisticated catalysts are required to ensure that the manufacturing
process for a pharmaceutical product yields only the desired form of the molecule. (See Ball
1994, pp. 77-8.)
4 The market for chemical process technology
The way in which patents have been used in the chemical industry has evolved over time.
Patents played an important role in the development of organic dyestuffs, the first major
product area of the modern organic chemical industry, in the 1850s and 1860s. Chemical
technologies, strongly based in science, were easier to codify and patent than
mechanical technologies. The properties of synthetic dyes depended heavily on
the structure of the molecules. Thus, understanding the structure of the dyestuff
molecules and how to produce them implied that the innovator could protect the
innovation through patents. German companies skillfully combined secrecy and patents
to exclude competitors, both at home and abroad (see Arora 1997 for a full discussion).

Domestic licensing was uncommon during this period because the dominant producers
themselves controlled the technology, not because of problems in licensing contracts. Instead, the
dominant producers in each market tended to form licensing and market-sharing
agreements with each other to keep out entrants. Indeed, the pre-Second World War
international chemical market has been characterized by many as a sort of a
"gentlemen's club" (e.g. Spitz 1988; Smith 1992). These cartels used a number of
instruments, including patent licensing agreements, to maintain market shares and deter
entry.

Some cartels were organized around a common technology, and were often initiated by
the patent holder. The patent would be licensed, often in return for an equity stake, with
technology flow-back agreements. For instance, the Solvay process licensees were
required to share all improvements with the Solvay company, and the latter would share
them with other licensees. To the extent that there were benefits to all licensees from having
the Solvay process become the standard process for the production of alkali, such
technology-sharing cartels were mutually beneficial. In other cases, particularly during
the 1920s and 1930s, there were some prominent technology- and market-sharing
agreements, with the agreement between Standard and IG Farben that involved
technology-sharing in butyl rubber, TEL, and arc acetylene (from Standard), and Buna S
(from IG Farben) being one of the best-known examples.

Though anti-competitive in intent, these arrangements did economize on scarce assets.
For instance, although ICI obtained the basic patent on polyester, Du Pont had
developed significant expertise in the production process based on its experience in
nylon, and controlled the melt-spinning process that was crucial for successful
commercialization. ICI and Du Pont had a long-standing agreement that involved
technology licensing as well as the extensive sharing of information and know-how. As a
result, the two companies quickly settled on a suitable cross-licensing agreement.
However, it was only after the Second World War that firms started to use licensing as a
means to profit from innovation and that a market for chemical technology began to emerge.
Indeed, starting from the 1950s an increasing number of chemical processes became
available for license. Landau (1966, p. 4), writing two decades after the end of the war,
noted that "the partial breakdown of secrecy barriers in the chemical industry is
increasing the trend toward more licensing of processes." Importantly, these were not
exclusive licenses. As Spitz (1988, p. 318) put it:

some brand new technologies, developed by operating [chemical] companies, were
made available for license to any and all comers. A good example is the Hercules-
Distillers phenol/acetone process, which was commercialized in 1953 and forever
changed the way that phenol would be produced.
Our data analysis confirms the presence of a well-established market for chemical
technology during the 1980s.[4] Indeed, figure 21.1 shows that during the period under
study only a fifth of the technology used in new chemical investments world-wide was
developed in-house by the investors, while the rest was licensed in from unaffiliated
sources. However, there are important differences across geographic areas, chemical
sub-sectors, and investors' sizes and nationalities in the propensity of chemical
producers to rely on the market for technology. Firms investing in North America
(Canada and the United States) have the highest share of plants developed in-house
(more than 40 percent), closely followed by firms investing in Western Europe. This
share is the smallest for firms in Eastern Europe, Africa, the Middle East, and South
America (less than 5 percent). Multinational firms tend to rely more on in-house
technology although this share is still sensitive to the final location of the investment.
Size and nationality of the investors, which might proxy for the degree of technological
capability, seem to play an important role. Large chemical corporations from advanced
countries acquire less than 50 percent of their technology from unaffiliated sources. By
contrast, third-world firms rely almost completely on the market for technology (see figure
21.2).




Figure 21.1: Who was licensing chemical technologies, 1980s (percent)




Figure 21.2: Market for chemical technology as a function of investor's type (percent)

Differences across chemical sub-sectors are remarkable as well. In the aggregate,
technology licensing is most common in sectors with large-scale production facilities,
with relatively homogeneous products, and with a large number of new plants. It is less
common in sectors marked by product differentiation, custom tailoring of products for
customers, and small scales of production. Indeed, in pulp and paper, gas handling,
fertilizers, industrial gases, and organic refining more than 90 percent of the plants
involve the sale of technology between firms that are not linked through ownership ties,
whereas in pharmaceuticals, organic chemicals, and plastics the share is close to 50
percent.
Finally, the market for chemical technology is more prominent in large product markets.
As shown in figure 21.3, the extent of the market for chemical technology moves from
close to 90 percent in "large" product markets (those accounting for more than thirty
plants world-wide during the period under study) to 50 percent in "niche" product markets
(one-two plants).
Figure 21.3: Share of SEFs licensing, by size of product markets (percent)

Contracts typically involve a lump-sum payment that is paid in installments, starting when
the contract is signed and ending when the plant is commissioned. In addition, there may
be royalties on output for a specified period of time (royalties are more or less set by
industry norms, typically between 2 percent and 5 percent). These carry with them the
right to audit, a right which is occasionally exercised. Specialized engineering firms tend
to favor lump-sum payments, being unwilling or unable to track how the project does
after commissioning.

[4]
All figures reported in this chapter refer to our calculations of the Chemical Age Project File
(CAPF), a comprehensive data set on world-wide investments in chemical plants during the
1980s, compiled by Pergamon Press (London). The data set covers about 14,000 plants
constructed or under construction during the period 1980-90. CAPF discloses the information
about the licensor only in half of the plants. Most of the figures provided in this chapter are
based on the assumption that the missing information about licensors is selected randomly.
5 Specialized engineering firms and division of labor
An important reason for the dramatic surge of licensing transactions after the Second
World War has to do with the rise of specialized process design, engineering and
construction firms (hereafter, SEFs). SEFs originated as an American phenomenon.
From very early in the twentieth century, the oil firms used specialized sub-contractors in
various capacities: to procure or manufacture equipment such as pumps and
compressors, valves, and heat exchangers, and to provide specialized sub-systems
such as piping and the electrical systems. As these specialized engineering-construction
firms grew in their ability to handle more sophisticated tasks, process design became a
part of their activities as well. By the 1960s, SEFs dominated the design and construction
of new plants and were important sources of process innovation (Freeman 1968, p. 30).
SEFs reaped the advantages of specialization. By working for many clients, they
benefitted from learning by doing, and by repeatedly selling their expertise (through
licenses or engineering services) they could spread the cost of accumulating that
expertise over a larger output.
The importance of the SEFs lies not only in the fact that they were sources of
innovations but also in how they appropriated the rents from innovation. Lacking the
downstream assets required to commercialize their innovations themselves, SEFs used
licensing as the principal way of profiting from their innovations. Freeman (1968) showed
that for the period 1960-6, SEFs as a group accounted for about 30 percent of all
licenses. During the 1980s the importance of SEFs as a source of technology has
increased somewhat. Figure 21.1 shows that in the 1980s SEFs supplied the technology
for more than one-third of plant investments in the world as a whole, which implies that
about 45 percent of all technologies coming from unaffiliated sources were licensed by
SEFs.[5]

With some prominent exceptions such as UOP and Halcon/Scientific Design, SEFs did
not focus on breakthrough innovation. However, they did improve and modify processes
developed by chemical firms and offer those for license. SEFs encouraged technology
licensing in two other ways. First, as discussed below, they induced chemical firms to
license their own technology. Second, they often acted as licensing agents for chemical
firms. Chemical producers often lack licensing experience and are unwilling to provide
the various engineering and design services that licensees need in addition to the
technology, and therefore use SEFs as licensing agents. A chemical firm will license its
technology to an SEF. The latter offers a complete technology package, consisting of the
core technology licensed from a chemical producer, along with know-how and installation
and engineering services. This arrangement enables the licensor to benefit from the
superior ability of SEFs to manage technology transfer. It also provides a buffer between
the chemical firm and its licensees, limiting accidental leakage of information. From the
point of view of the customer, dealing with a single source for technology, construction,
and engineering reduces transaction costs. The SEF can also provide better operational
guarantees than if the contract were a pure technology licensing contract. (See Grindley
and Nickerson 1996 for further discussion of this topic.)

Interviews with industry executives have confirmed the important role of SEFs as
integrators, bundling technology licensed from a technology supplier like UCC or BP with
engineering and procurement services. It appears that whereas established firms in the
United States or Europe are more likely to negotiate directly with the technology supplier,
and then ask SEFs to bid for the engineering and construction contract, chemical firms in
developing countries rely very heavily on SEFs. For them, SEFs act like one-stop shops,
procuring technology and equipment, and providing engineering and construction
services.
Our data confirm this. In the 1980s, SEFs were more important sources of technology for
small chemical companies and third-world firms. For instance, large chemical companies
from advanced countries (those with a turnover of more than $1 billion in 1988)
purchased around a fifth of their technologies from SEFs. For smaller first-world
companies (with less than $1 billion of turnover in 1988) this percentage was 37 percent,
and close to 50 percent for third-world chemical firms. (See figure 21.2.)
Finally, figure 21.3 shows that SEFs accounted for a larger share of total licensing in
larger product markets. Furthermore, although not evident from figure 21.3, larger
markets also tend to have a larger fraction of the total investment from small firms and
third-world companies.[6] In other words, the evidence is consistent with the notion that
SEFs encourage investment, particularly by small firms and third-world companies.

[5]
The role of SEFs varies across different sub-sectors. For instance, in pharmaceuticals,
plastics, and agricultural products, SEFs account for less than 10 percent of all technologies
from unaffiliated firms, compared to 60 percent in sub-sectors like fertilizers, and textile and
fibers.

[6]
The market share of big chemical companies (i.e. all firms with a turnover of more than $1
billion in 1988) is 28 percent in "large" product markets (more than thirty plants), whereas it is
about 45 percent in "niche" product markets (one-two plants).
6 Licensing by chemical firms
6.1 Empirical evidence
The licensing activities of the SEFs have had a major effect on the rent appropriation
strategies of the other players in the market as well. In a marked departure from their
pre-Second World War strategy of closely holding onto their technology, a number of
chemical and oil companies began to use licensing as an important (although not the
only) means of profiting from innovation. Licensing by chemical producers is now a
significant share of all licensing in the industry. As figure 21.1 shows, although SEFs play
a major role as licensors, at least half of the licenses sold to unaffiliated firms are by
other chemical producers themselves.
Table 21.1 shows the licensing strategies by a number of selected chemical corporations
from advanced countries, which were especially active as technology suppliers during
the 1980s. In particular, columns (E)-(G) of the table report the share of licenses directed
to the national market, to the rest of the first world, and to the third world, respectively. All
companies are more likely to use licensing in dealing with overseas investments,
although some firms (e.g. Union Carbide, Monsanto, Exxon) also license in their home
markets. On average, slightly more than one in ten licenses goes to the national market.
To put this in perspective, the weight of the national market vis-à-vis the world market is
also one-tenth, implying that the bias towards international licensing is moderate.


Table 21.1: Licensing strategies by some selected chemical producers

Company name    Country  Turnover  A    B    C     D      C/D   E     F     G
                         (1988)

Air Liquide     FRA      3,539     129  45   233   120    1.94  0.12  0.36  0.52
Monsanto        USA      7,453     113  31   204   590    0.35  0.26  0.22  0.52
Union Carbide   USA      8,324     106  37   192   59     3.25  0.22  0.42  0.36
Shell           UK       11,848    101  71   183   773    0.24  0.02  0.43  0.55
ICI             UK       21,125    93   55   168   1,020  0.16  0.00  0.31  0.69
Air Products    USA      2,237     59   29   107   72     1.48  0.19  0.24  0.57
Amoco           USA      4,300     55   23   99.5  NA     NA    0.18  0.40  0.42
Phillips        USA      2,500     55   22   99.5  NA     NA    0.16  0.40  0.44
Rhône-Poulenc   FRA      10,802    44   28   79.6  632    0.13  0.00  0.23  0.77
Texaco          USA      1,500     44   9    79.6  NA     NA    0.18  0.32  0.50
BASF            GER      21,543    37   45   66.9  1,010  0.07  0.03  0.49  0.48
Exxon           USA      9,892     35   49   63.3  551    0.11  0.23  0.37  0.40
Mitsui Toatsu   JAP      2,991     35   15   63.3  NA     NA    0.09  0.11  0.80
Hoechst         GER      21,948    34   44   61.5  1,363  0.05  0.03  0.30  0.94
Du Pont         USA      19,608    33   66   59.7  1,319  0.05  0.03  0.12  0.85

Total                              973  569  1,760 7,509  0.23  0.12  0.32  0.56

Note: A = total number of licenses, 1980-90; B = total number of self-licenses,
1980-90; C = estimated annual average licensing revenues; D = R&D expenditures
in 1988; E = share of licenses at home; F = share of licenses in the rest of the
first world; G = share of licenses in the third world.

All figures (except shares) in million US dollars.

NA = Not available.



Not only do firms license extensively, but many of them now explicitly consider licensing
revenues as a part of the overall return from investing in technology. For instance, Union
Carbide is reported to have earned $300 million from its polyolefin licensing in 1992
(Grindley and Nickerson 1996). Both Du Pont and Dow, two chemical firms with a long
tradition of exploiting technology in-house, have indicated that they intend to license
technology very actively. In 1994 Du Pont created a division with the specific task of
overseeing all technology transfer activities. Reversing its tradition of treating in-house
technology as the jewel in the crown, Du Pont has started to exploit it through an
aggressive outlicensing program. Starting in 1999, this was expected to be a $100
million per year business. On its own web page, Du Pont advertises the technologies
available for licensing in several areas: fibers-related, composites, chemical science and
catalysis, analytical, environmental, electronics, biological. The words of Jack Krol, Du
Pont's president and CEO, at the 1997 Corporate Technology Transfer Meeting,
emphasize this new trend:
For a long time, the belief about intellectual property at Du Pont was that patents were
for defensive purposes only. Patents and related know-how should not be sold, and
licensing was a drain on internal resources ... Our businesses are gradually becoming
more comfortable with the idea that all intellectual property is licensable for the right
price in the right situation.

Dow has also long had a reputation for "never licensing breakthrough technology, and
there was an emotional bias against licensing" (Ed Gambrell, Vice President, Dow). In
1995, it formed a licensing group whose purpose was to "create more value" from its
technology. Before the group was formed, Dow had licensing revenues of roughly $10-
20 million per year. It expected licensing to grow into a $100 million/year business by 2000.
Finally, we have estimated the average annual licensing revenues (during the period
1980-90) for a sample of large chemical producers. These revenues amount to $26
million, or about 10 percent of the mean R&D expenditure in 1988 (for our sample).
Some firms are performing well above this average. For instance, Union Carbide has
licensing revenues as large as its total R&D expenditure. Other firms like Monsanto,
Shell and ICI cover, respectively, about 35 percent, 24 percent, and 16 percent of their
R&D expenditures through licensing revenues. In table 21.1 we report for a selected
number of firms the annual average licensing revenues (column (C)) and their total R&D
expenditures in 1988 (column (D)).
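The coverage ratios cited above follow directly from columns (C) and (D) of table 21.1. A minimal sketch of the arithmetic, using the table's figures in million US dollars (the firm selection here is only the four examples named in the text):

```python
# Estimated annual licensing revenues (C) and 1988 R&D expenditures (D)
# for four firms, in million US dollars, as reported in table 21.1.
firms = {
    "Union Carbide": (192, 59),
    "Monsanto": (204, 590),
    "Shell": (183, 773),
    "ICI": (168, 1020),
}

for name, (licensing, rd) in firms.items():
    # Share of 1988 R&D spending covered by average licensing revenues
    print(f"{name}: {licensing / rd:.0%}")
```

For Monsanto, Shell, and ICI this reproduces the 35, 24, and 16 percent figures in the text; for Union Carbide the ratio exceeds 100 percent, consistent with licensing revenues as large as its R&D budget.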

6.2 Why is there so much licensing by chemical producers?

This behavior of the chemical firms runs contrary to the orthodox management
prescriptions (e.g. Teece 1988). Traditional wisdom holds that licensing is undesirable
because the innovator has to share the rents with the licensee, and because licensing
implies increased competition and rent dissipation.

There are two, related, reasons for the change in strategy: increased competition, and
technology licensing by SEFs. The presence of competing technologies drastically
changes the payoff to the strategy of trying to keep one's technology in-house. For
instance, suppose there are two viable processes for the production of a particular
product, each owned by a different firm. If one of the firms is going to license out (sell) its
technology, the best response of the other innovator may well be to license out (sell) as
well.
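This best-reply logic can be made concrete with a stylized two-firm game. The payoff numbers below are entirely hypothetical, chosen only so that holding preserves joint rents while, once a rival licenses and entry erodes product-market rents anyway, capturing licensing fees becomes the better reply:

```python
# Stylized two-firm licensing game with purely hypothetical payoffs.
# Each innovator chooses "hold" (keep its process in-house) or "license".
# If a rival licenses, entry erodes product-market rents for both firms,
# so capturing licensing fees becomes the better reply.
payoffs = {  # (own_action, rival_action): own_payoff
    ("hold", "hold"): 10,       # duopoly rents preserved
    ("hold", "license"): 4,     # rents eroded, no licensing income
    ("license", "hold"): 9,     # licensing fees, but some extra competition
    ("license", "license"): 6,  # fees partly offset the rent erosion
}

def best_response(rival_action):
    """Best reply to the rival's action (the game is symmetric)."""
    return max(["hold", "license"], key=lambda a: payoffs[(a, rival_action)])

print(best_response("hold"))     # "hold": holding preserves joint rents
print(best_response("license"))  # "license": 6 > 4 once the rival licenses
```

Under these assumed payoffs both (hold, hold) and (license, license) are equilibria, which is why the text says the best response "may well be" to license: once one innovator is committed to licensing, the other's best reply flips.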

A search of the trade publications in 2000 turned up further evidence that shows that, at
least in some markets, chemical and oil companies are aggressively competing to sell
technology, often in collaboration with an SEF which undertakes the provision of the
engineering and other knowhow. Sometimes, competitors in the market for licenses are
other chemical producers. In other cases, the major competition is provided by SEFs.
In Arora and Fosfuri (1999) we develop a model of oligopolistic competition with
potentially more than one technology supplier. We consider the case where at least one
of the competing innovations is patented by an SEF. Lacking production facilities, an
SEF has little option but to license its technology to others.[7] Therefore, when one of the
innovators is an SEF, the other innovator's dominant strategy is to license its innovation
as well. Put differently, in product markets where SEFs are widespread, chemical
producers have little strategic choice but to license aggressively themselves. Figure
21.4 shows that in all chemical sub-sectors in which SEFs had more than 42 percent of
market share during the 1980s, the average number of licenses sold out by chemical
producers was 2.8, whereas in the sub-sectors in which SEFs had less than 18 percent
of the market, it was as little as 1.3.[8]
Figure 21.4: Market share of SEFs and licensing, by chemical producers (percent)

Even without SEFs, a technology holder may license if the net licensing revenues are
greater than the loss in profits owing to increased competition in the product market.
However, whereas the licensing revenues go only to the licensor, all incumbent
producers potentially lose from the increased competition. In other words, licensing
imposes a negative pecuniary externality upon other incumbents, which is not taken into
account by the licensor. As a result, licensing can be privately profitable even if it
reduces the joint profits of all incumbents.

This is exemplified by the different ways in which BP Chemicals has approached acetic
acid and polyethylene. In acetic acid, BP Chemicals has strong proprietary technology,
but it licenses very selectively, typically licensing only to get access to markets it would
otherwise be unable to enter. By contrast, in polyethylene, BP has less than 2 percent of
the market share. Although it has good proprietary technology as well, there are a dozen
other sources of technology for making polyethylene. Thus, BP has licensed its
polyethylene technology very aggressively, competing with Union Carbide which was the
market leader in licensing polyethylene technology. Even here, BP initially tried not to
license in Western Europe, where BP had a substantial share of polyethylene capacity.
However, other licensors continued to supply technology to firms that wished to produce
polyethylene in Western Europe, with the result that BP found that it was losing potential
licensing revenue without any benefits in the form of restraining entry.
In Arora and Fosfuri (1999), we formally show that the more homogeneous the product,
the greater the negative externality to other incumbents, and the greater the incentives to
license. We find that technology licensing is most common in sectors with large-scale
production facilities, with relatively homogeneous products, and with a large number of
new plants. It is less common in sectors marked by product differentiation, custom
tailoring of products for customers, and small scales of production. Figure 21.5 confirms
this finding. It classifies all chemical sub-sectors reported in CAPF in three broad
categories of product differentiation: homogeneous, intermediate and differentiated.
Figure 21.5 shows that the average number of licenses per patent holder increases as
the product market becomes more homogeneous.[9]




Figure 21.5: Product differentiation and licensing

Finally, most of the licensing takes place for processes. New products are far less likely
to be licensed, at least in the initial stage of their life cycles. In this case, the profit loss
due to competition would be felt almost entirely by the licensor since by definition there
would not be any other incumbent producers of the product. These incentives are
reinforced by the unimportance of SEFs in product innovation.

[7]
Our data confirm that the average number of licenses sold out by SEFs is larger than the
average number of licenses sold out by producers in basically all chemical sub-sectors.

[8]
Figure 21.4 classifies all chemical sub-sectors (twenty-three) reported in CAPF in three
broad categories characterized, respectively, by small, medium, and important presence of
SEFs. It reports the average number of licenses per chemical producer.

[9]
Our measure of product differentiation was computed as follows. CAPF classifies the
chemical plants within each sub-sector in more disaggregated process technology classes.
We use the counts at this disaggregated level to compute an equidistribution index at the sub-
sector level. Our index of product differentiation takes the value of 0 if the products are
homogeneous and the value of 100 if they are totally differentiated. We have also tried
alternative measures of product differentiation, such as the entropy index and the Herfindahl
index, with substantially similar results.
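The note above does not spell out the exact equidistribution formula, so the sketch below uses the normalized-entropy variant it mentions as an alternative; the plant counts per process class are hypothetical:

```python
import math

def differentiation_index(class_counts):
    """Normalized entropy over plant counts per process-technology class.

    Returns 0 when all plants fall in a single class (a homogeneous
    product) and 100 when plants are spread evenly across classes
    (fully differentiated), mirroring the 0-100 scale described above.
    """
    counts = [c for c in class_counts if c > 0]
    total = sum(counts)
    if total == 0 or len(counts) == 1:
        return 0.0
    shares = [c / total for c in counts]
    entropy = -sum(s * math.log(s) for s in shares)
    return 100 * entropy / math.log(len(counts))

# All plants in one process class: homogeneous sub-sector
print(differentiation_index([30, 0, 0]))    # 0.0
# Plants spread evenly across classes: differentiated sub-sector
print(differentiation_index([10, 10, 10]))  # approximately 100.0
```

The entropy and Herfindahl variants mentioned in the note differ only in how concentration across process classes is aggregated, which is why they yield substantially similar rankings of sub-sectors.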
7 Why in chemicals?
Licensing and the presence of a market for technology are not limited to the chemical
industry. In Arora, Fosfuri and Gambardella (2001) we provide evidence of extensive
licensing in sectors such as semiconductors, electronics, industrial machinery,
equipment and business services, biotechnology, and several examples of licensing
strategies by large established producers such as IBM, Texas Instruments, Boeing,
Philips, Procter & Gamble, and General Electric. Nevertheless, it is true that the use of
licensing as a strategy of rent appropriation is less developed outside of chemicals,
particularly for processes (see also Anand and Khanna 2000).

As discussed earlier, technology licensing may be hindered either because licensing
contracts are very inefficient or because it is not in the strategic interest of the technology
holder to license the technology. Licensing contracts can be inefficient owing to the need
to transfer knowhow and owing to information asymmetries. Both are closely related to
the strength of patent protection.
In the chemical industry, unlike most others, chemical processes can be effectively
protected through patents. As a result, even the valuable unpatented know-how, needed
to use the technology, can be licensed. Patents pertain to that part of the discovery that
is codified. Therefore the effectiveness of patents depends on how cheaply and
effectively new ideas and knowledge can be articulated in terms of universal categories.
When innovations cannot be described in terms of universal and general categories,
sensible patent law can provide only narrow patent protection. During the 1860s, when
synthetic dyestuffs first appeared, their structure was poorly understood, as were the
reaction pathways and processes. Thus broad patents led to extensive litigation and
retarded the development of technology. In France, an excessively broad patent on
aniline red was construed to include all processes for making the red aniline-based dye,
even though it was quite clear that the structure of aniline dyes was as yet unknown.
There were long and bitter disputes in England about the validity of the Medlock patent
for magenta (another aniline dye) that turned on whether the appropriate definition of
"dry" arsenic acid included the water of hydration (Travis 1993, pp. 104-37).

Arora and Gambardella (1994) point out that technological knowledge that is closely
related to broad engineering principles and physical and chemical "laws" is more readily
codifiable. Chemical engineering developed more general and abstract ways of
conceptualizing chemical processes, initially in the form of unit operations, and later in
terms of concepts such as mass and energy transfer. A number of different processes
could be conceived of in terms of these more elementary units. A chemical engineer
could therefore see common elements across a number of processes that might appear
very different and diverse to a chemist from an earlier generation. Chemical engineering
(and the concomitant developments in polymer science and surface chemistry) thus
provided the language for describing more precisely the innovations to be protected.
In other words, patents work well in the chemical industry because the object of
discovery can be described clearly in terms of formulae, reaction pathways, operating
conditions, and the like (e.g. Levin et al., 1987). But it is not merely that the object of
discovery is more discrete in the sense of being a particular compound. Rather, it is the
ability to relate the "essential" structure of the compound to its function. This allows a
patent to include within its ambit inessential variations in structure, as in minor
modifications in side chains of a pesticide.[10] In fact, chemical patents frequently use
Markush structures to define the scope of the claim.[11] The use of Markush structures
permits a succinct and compact description of the claims and allows the inventor to
protect the invention for sets of related compounds without the expense (and tedium) of
testing and listing the entire set. The ability to explicate the underlying scientific basis of
the innovation allows the scope of the patent to be delimited more clearly. The obvious
extensions can be foreseen more easily and described more compactly.

[10]
In some instances, seemingly minor variations in side chains can have significant biological
effects. Therefore, what is a "minor" variation is itself determined by the state of the current
understanding of the relation between structure and function.
[11]
A Markush structure is best understood as a language for specifying chemical structures of
compounds, which allows generic representation for an entire set of related compounds. See
Maynard and Peters (1991, p. 71) for details.
8 Conclusions
We have argued that there exists a functioning market in chemicals where process
technologies are sold through arm's length license contracts. We have documented the
substantial extent of technology licensing in the chemical industry, involving both
specialized engineering firms and chemical producers themselves. The existence of this
market for technology has contributed to a faster world-wide diffusion of the chemical
technology and to making the chemical industry a truly global industry. This process has
progressed to the point where licensing is an integral part of the technology strategies of
even the largest chemical firms.

Such widespread licensing would be unlikely without a well-functioning patent system:
transaction costs involved in contracting for technology would be larger and contracts for
know-how less efficient. Although further research is needed, we believe that patents
have worked well in the chemical industry because the underlying knowledge base -
chemistry and chemical engineering - has been very successful in clarifying the
relationship between structure and function. A chemical invention can be described
clearly in terms of structure, reaction pathways, or operating conditions, with a
reasonably clear sense of the limits of the invention.

While patents are necessary for a market for technology, they are by no means sufficient.
Firms that specialize in the design, engineering, and construction of chemical plants
emerged and some developed proprietary technologies that they offered for license, at a
time when many firms, all over the world, were looking to acquire chemical technologies.
SEFs induced chemical firms to license their technology as well. In addition, SEFs
reduced transaction costs by acting as licensing agents for chemical firms and by
bundling technology with complementary engineering, design, and construction
capabilities valuable to potential buyers of technology. The presence of SEFs induced
entry by a number of firms, increasing the number of potential technology buyers. The
net result was a "thicker" and a more efficient market for technology.
Notes
Chapter 21 was originally published as "The Market for Technology in the Chemical
Industry: Causes and Consequences," in Revue d'Economie Industrielle (92, 2000).

Financial support from the TSER project, "A Green Paper on the Chemical Industry:
From Science to Product," is gratefully acknowledged. We are indebted to Alfonso
Gambardella and Robert Merges for ongoing suggestions and discussions. We thank
Eric Brousseau, Jean-Michel Glanchant, and participants in seminars at Stanford
University, University of Stuttgart, Universitat Pompeu Fabra and University Carlos III
(Madrid) for helpful comments on a previous draft. Ralph Landau and Martin Howard
have shared their extensive knowledge of licensing practices in the chemical industry, for
which we are very grateful. All errors, of course, remain our own.
1. In some cases, policies designed in the naïve hope of encouraging small
inventors have encouraged the abuse of the patent system. In the United
States, for instance, there have been well-known cases where patents
filed in the 1950s were ultimately issued more than twenty years later. In
the meantime, the patentee could legally amend the application so that it
covered inventions made well after the filing date. Since patents in the
United States are published only upon issue, such patents (sometimes
referred to as "submarine" patents because they are not visible for long
periods after they are filed) have surprised many established firms. The
move towards patent harmonization, which will require publication of all
patent applications after a certain period, will be helpful in this respect.
2. This is not specific to the chemical industry. Grindley and Teece (1997)
report that, in cross-licensing agreements in electronics and
semiconductors, the quality and the market coverage of the patent
portfolios of each party are used in the calculation of balancing royalty
payments. (See also Hall and Ham 1999.)
3. Briefly, many molecules can exist in two mirror-image forms; they are
said to be "chiral." The majority of biomolecules occurring in the human
body exist in only one of the two possible forms. Because the wrong
chiral form can be ineffective or harmful (as in the case of the drug
thalidomide), sophisticated catalysts are required to ensure that the
manufacturing process for a pharmaceutical product yields only the
desired form of the molecule. (See Ball 1994, pp. 77–8.)
4. All figures reported in this chapter refer to our calculations of the
Chemical Age Project File (CAPF), a comprehensive data set on world-
wide investments in chemical plants during the 1980s, compiled by
Pergamon Press (London). The data set covers about 14,000 plants
constructed or under construction during the period 1980–90. CAPF
discloses licensor information for only half of the plants. Most of the
figures provided in this chapter are based on the assumption that the
licensor information is missing at random.
5. The role of SEFs varies across different sub-sectors. For instance, in
pharmaceuticals, plastics, and agricultural products, SEFs account for
less than 10 percent of all technologies from unaffiliated firms, compared
to 60 percent in sub-sectors like fertilizers, and textiles and fibers.
6. The market share of big chemical companies (i.e. all firms with a turnover
of more than $1 billion in 1988) is 28 percent in "large" product markets
(more than thirty plants), whereas it is about 45 percent in "niche"
product markets (one to two plants).
7. Our data confirm that the average number of licenses sold by SEFs is
larger than the average number of licenses sold by producers in
essentially all chemical sub-sectors.
8. Figure 21.4 classifies all twenty-three chemical sub-sectors reported in
CAPF into three broad categories characterized, respectively, by small,
