# Directional derivatives and subdifferentials of set-valued convex functions

###### Abstract

A new directional derivative and a new subdifferential for set-valued convex functions are constructed, and a set-valued version of the so-called 'max-formula' is proven. The new concepts are used to characterize solutions of convex optimization problems with a set-valued objective. As a major tool, a residuation operation is used which acts in a space of closed convex, but not necessarily bounded, subsets of a topological linear space. The residuation serves as a substitute for the inverse addition and is intimately related to the Minkowski or geometric difference of convex sets. The results, when specialized, even extend those for extended real-valued convex functions since the improper case is included.

## 1 Introduction

In this note, we introduce new notions of directional derivatives and subdifferentials for set-valued convex functions, we prove the so-called max-formula, a result of "exceptional importance" in the scalar case [30, p. 90], and we characterize solutions of set-valued optimization problems in terms of the new derivatives. The latter topic sheds some new light on what should actually be understood by a solution of a convex optimization problem with a set-valued objective. In particular, we supplement the solution concept given in [18] with a new one and show that a solution set can be reduced to a singleton via a generalized translation.

There exist basically three different (but partially overlapping) approaches to defining derivatives for set-valued functions. One approach starts by picking a point in the graph of the set-valued function and assigns to it another set-valued function whose graph is some kind of tangent cone to the graph of the original function at the point in question. The book [1] gives an authoritative account of such concepts, and Mordukhovich's coderivative [23] is of the same nature. The second approach selects a class of 'simple' set-valued functions whose elements serve as approximations of a general set-valued function and then defines what is actually meant by "approximation". A representative of this approach is [21]. The third approach embeds the class of set-valued functions under consideration into a linear space and operates with classical derivative concepts. The reader may consult [10] for more references and a more complete account of the three basic approaches described above. Note, however, that the last two approaches are very often restricted to set-valued functions with compact convex values, and to finite-dimensional [10] or even one-dimensional pre-image spaces [21].

On the other hand, it has turned out to be a hard task to generalize basic results of convex analysis from extended real-valued to vector- or even set-valued functions. The 'max-formula' may serve as an example which is relevant for the present paper: under suitable qualification conditions, the directional derivative of a convex function at a given point is the support function of the subdifferential at the same point. Since this implies the non-emptiness of the subdifferential, this result is counted among the 'core results of convex analysis' [5, p. 122]. The difficulties which arise when passing from one-dimensional to more general image spaces are brought out, for example, in [3, Theorem 6.1]: the pre-image space must be a "Minkowski differentiability space", and the image space must be ordered by a closed normal cone and enjoy the so-called monotone sequence (= greatest lower bound) property.
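In the scalar case, the max-formula can be checked by hand for f(x) = |x|: the directional derivative at 0 is f'(0; d) = |d|, which is exactly the support function of the subdifferential [-1, 1]. A minimal numerical sketch (the helper names are our own, not from the literature):

```python
# Scalar max-formula for f(x) = |x| at x = 0: the directional derivative
# f'(0; d) = lim_{t -> 0+} (f(t*d) - f(0)) / t equals the support function
# of the subdifferential [-1, 1], i.e. max_{x* in [-1, 1]} x* * d.

def f(x):
    return abs(x)

def directional_derivative(f, x, d, t=1e-8):
    # one-sided difference quotient with a small positive step; for convex f
    # the quotient decreases to f'(x; d) as t -> 0+
    return (f(x + t * d) - f(x)) / t

def support_of_subdifferential(d, grid_size=2001):
    # support function of [-1, 1] evaluated at d, via a grid maximum
    grid = [-1 + 2 * k / (grid_size - 1) for k in range(grid_size)]
    return max(xstar * d for xstar in grid)

for d in (1.0, -2.5, 0.75):
    assert abs(directional_derivative(f, 0.0, d) - abs(d)) < 1e-6
    assert abs(support_of_subdifferential(d) - abs(d)) < 1e-6
```

The grid maximum is a crude stand-in for the exact support function; for the interval [-1, 1] the maximum is attained at an endpoint, so the grid value is exact here.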

Our approach is more in the spirit of traditional derivative concepts which rely on increments of a function at a point in some direction. Using a residuation instead of a difference (an inverse group operation, which is not available in the relevant subsets of the power set of a linear space), we are able to define difference quotients and their limits even for set-valued functions. In fact, it seems natural to "skip" the vector-valued case by embedding it into the set-valued one. The residuation is defined on carefully selected subsets of the power set of the (linear) image space; these subsets carry the order structure of a complete lattice (= every subset has an infimum and a supremum) and the algebraic structure of a semi-module over the semi-ring . It turns out that the old concept of the Minkowski (or geometric) difference of convex sets [11] can be identified with the residuation in these spaces of sets; even this seems to be a new contribution (see also [15]), although residuations have been used before in (convex) analysis, see for example [9], [6] and the references therein.

The dual variables in our theory are simple set-valued functions generated by pairs of continuous linear functionals instead of continuous linear operators as, for instance, in [3], [4]. Moreover, no restrictive assumptions on the ordering cone in the underlying (linear) image space are imposed, such as normality, pointedness, non-empty interior, or generating a lattice order. These features make the theory presented in this note much more suitable for applications. The interested reader is referred to [14] for a financial application where the ordering cone is, in general, not pointed and has 'many' generating vectors.

In the next section, the basics of set-valued functions and their image spaces are introduced. Section 3 contains the definitions of directional derivatives and subdifferentials for set-valued convex functions and the main results. Section 4 presents the link between 'adjoint process duality' (Borwein, 1983), Mordukhovich's coderivative and our derivative concepts. In the final section, set-valued optimization problems are discussed.

## 2 Preliminaries

### 2.1 Image spaces

Let be a locally convex, topological linear space and a convex cone with . We write for with which defines a reflexive and transitive relation (a preorder). The topological dual space of is denoted by , the (positive) dual cone of by . Note that if, and only if, which is assumed throughout the paper. The negative dual cone is .

The relation on can be extended to the power set of , the set of all subsets of including the empty set, in two canonical ways (see [13] and the references therein). This motivates the consideration of the following subsets of :

Elements of are sometimes called upper closed ([22, Definition 1.50]) with respect to . We shall abbreviate and to and , respectively.

The Minkowski (elementwise) addition for non-empty subsets of is extended to by

for . Using this, we define an associative and commutative binary operation by

(2.1) |

for . The elementwise multiplication of a set with a (non-negative) real number is extended by

for all and . In particular, by definition, and we will drop the in most cases.

The triple is a conlinear space with neutral element , and, obviously, is a conlinear subspace of it. The concept of a 'conlinear space' was introduced in [12], see also [13], [15]. It basically means that is a commutative monoid, and a multiplication of elements of with those of is defined and satisfies some obvious requirements, but not, in general, the second distributivity law for , . The elements of which do satisfy this law are precisely those of ; thus is a semi-module over the semi-ring .

On and , is a partial order which is compatible with the algebraic operations just introduced. Thus, and are partially ordered conlinear spaces in the sense of [12], [13]. Note that this is true without any further assumptions on . In particular, is not required to generate a partial order, a fact which will be used later on.
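The failure of the second distributivity law (s + t)A = sA + tA for nonconvex sets can already be seen in the power set of the real line. A tiny self-contained sketch (set representations and helper names are our own):

```python
# The second distributivity law (s + t)A = sA + tA (s, t >= 0) can fail for
# nonconvex sets under elementwise scaling and Minkowski addition.

def scale(s, A):
    # elementwise multiplication of a finite set by a scalar
    return {s * a for a in A}

def minkowski_sum(A, B):
    # elementwise (Minkowski) addition of two finite sets
    return {a + b for a in A for b in B}

A = {0.0, 1.0}   # nonconvex: the midpoint 1/2 is missing
s, t = 1.0, 1.0

lhs = scale(s + t, A)                            # (s + t)A = {0, 2}
rhs = minkowski_sum(scale(s, A), scale(t, A))    # 1A + 1A = {0, 1, 2}

assert lhs == {0.0, 2.0}
assert rhs == {0.0, 1.0, 2.0}
assert lhs != rhs    # the second distributivity law fails for this A
```

For a convex set, say a closed interval [a, b], the law does hold: (s + t)[a, b] and s[a, b] + t[a, b] both equal [(s + t)a, (s + t)b] for s, t >= 0, matching the statement that the second distributivity law characterizes the convex-valued elements.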

We will abbreviate and , and we will write and in order to denote an element and a subset , respectively.

Moreover, and are complete lattices with greatest (top) element and least (bottom) element . For a subset , the infimum and the supremum of are given by

(2.2) |

where we agree upon and whenever . Finally, for all and ,

(2.3) |

where . It follows that is an -residuated space (see [15] for more details). The inf-residuation will serve as a substitute for the inverse addition and is defined as follows: For , set

(2.4) |

Note that, for , the set on the right hand side of (2.4) is indeed closed since

which is an intersection of closed sets whenever is closed.

Sometimes, the right-hand side of (2.4) is called the geometric difference [24] or the Minkowski difference [11] of the two sets and ; H. Hadwiger should probably be credited with its introduction. The relationship with residuation theory (see, for instance, [2], [8]) was established in [15]; at least, we are not aware of an earlier reference.
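For closed intervals on the real line, the geometric difference A ⊖ B = {z : B + z ⊆ A} can be written down explicitly. A minimal sketch, with intervals represented by endpoint pairs (our own encoding):

```python
# Geometric (Minkowski) difference of closed intervals A = [a1, a2] and
# B = [b1, b2]: A - B = {z : B + z is a subset of A} = [a1 - b1, a2 - b2]
# whenever a1 - b1 <= a2 - b2, and the empty set otherwise.

def geometric_difference(A, B):
    a1, a2 = A
    b1, b2 = B
    lo, hi = a1 - b1, a2 - b2
    return (lo, hi) if lo <= hi else None   # None encodes the empty set

A, B = (0.0, 10.0), (1.0, 3.0)
D = geometric_difference(A, B)
assert D == (-1.0, 7.0)

# Translating B by any z in the result keeps it inside A:
z = D[0]
assert A[0] <= B[0] + z and B[1] + z <= A[1]

# If B is "wider" than A, no translate of B fits inside A:
assert geometric_difference((0.0, 1.0), (0.0, 5.0)) is None
```

The empty-difference case illustrates why, in the non-compact setting of this paper, the residuation rather than a naive set difference is the right tool: the operation is total on the chosen image spaces even when the classical geometric difference degenerates.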

###### Example 2.1

Let us consider , . Then , and can be identified (with respect to the algebraic and order structures, which turn into an ordered conlinear space and a complete lattice admitting an inf-residuation) with using the 'inf-addition' (see [25], [15]). The inf-residuation on is given by

for all , compare [15] for further details.
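Example 2.1 can be made concrete on the extended real line: with the inf-addition convention (+∞) + (−∞) = +∞, the inf-residuation a ⊖ b = inf{t : a ≤ b + t} reduces to the usual difference for finite arguments. A hedged sketch of the case distinctions (our own encoding; conventions as in the inf-addition setting of [15], [25]):

```python
import math

INF = math.inf

def inf_add(a, b):
    # inf-addition on the extended reals: (+inf) + (-inf) = +inf
    if (a == INF and b == -INF) or (a == -INF and b == INF):
        return INF
    return a + b

def inf_residuation(a, b):
    # a (-) b = inf{ t : a <= inf_add(b, t) }, computed case by case
    if a == -INF or b == INF:
        return -INF     # every t works, so the infimum is -inf
    if a == INF or b == -INF:
        return INF      # only t = +inf works
    return a - b        # both finite: the ordinary difference

assert inf_residuation(5.0, 2.0) == 3.0
assert inf_residuation(INF, 0.0) == INF
assert inf_residuation(0.0, INF) == -INF

# characterizing property of the residuation: a <= b + t  iff  t >= a (-) b
a, b = 5.0, 2.0
for t in (2.9, 3.0, 3.1):
    assert (a <= inf_add(b, t)) == (t >= inf_residuation(a, b))
```

The final loop checks the defining adjointness of the residuation on a few finite values; the infinite cases follow the inf-addition convention stated in the lead-in.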

Historically, it is interesting to note that R. Dedekind [7] introduced the residuation concept and used it to construct the real numbers as 'Dedekind sections' of rational numbers. The construction above is in this line of ideas, but in a rather abstract setting.

###### Remark 2.2

The inf-residuation can be defined on and even other subspaces of , but we only need the construction in in this paper. Likewise, in , a sup-residuation can be defined such that the whole theory becomes symmetric. The interested reader is referred to [15].

In many cases, the set is "too small", even empty: consider , , . Then for each . Therefore, we modify the inf-residuation in as follows. Take and let be the homogeneous half-space with normal . We set

(2.5) |

The operation can be expressed using the inf-residuation in and support functions, see [15, Proposition 5.20]; it would therefore be interesting to study the relationships to the Demyanov and Rubinov differences [27, p. 180 and p. 182, respectively]. However, our construction is particularly tailored to non-compact convex sets.

By definition, if or , and if , and if , . In all other cases, is a non-empty closed half-space parallel to . The relationships

and

(2.6) |

for all are immediate from the definition of . The next proposition makes it clear that the expression replaces .

###### Proposition 2.3

Let and . Then (a)

and (b)

Proof. (a) ””: We have

since .

””: This implication is certainly true if . If , then , hence by assumption which in turn implies . Finally, assume for some . Then

hence .

(b) is a straightforward consequence of the definition of .

The following calculus rules for apply and will be used frequently.

###### Proposition 2.4

Let and . Then

(a) and .

(b) if , and if .

(c) ,

(d) . The strict inclusion applies if, and only if, , or and .

(e) . The strict inclusion applies if, and only if, , or and .

(f) . The strict inclusion applies if, and only if, , or and .

(g) .

Proof. (a) - (c) are elementary using the definition of .

(d) The inclusion immediately follows from the definition of . If , we can find such that

Then

hence and . This implies

In view of proposition 2.3 (b), this leaves two possibilities for the strict inclusion: The first is and , the second and . In the first, we obtain . The set is non-empty if, and only if, in which case , so strict inclusion holds. In the second case, precisely when . Together, we obtain the conditions in (d).

(e) This claim can be proven by similar arguments as used for (d).

(f) If then the inclusion is trivially true. Otherwise, for each (there is one!) which implies which in turn gives . The inclusion follows.

If both sides are neither nor , then equality holds true. Indeed, in this case (see proposition 2.4 (b)) , hence there are such that and . This gives

This leaves two cases for strict inclusion: the first is , the second and . The second case cannot occur, as a short straightforward analysis shows. In the first case, we can have , in which case and if, and only if, . Or we can have , which produces the strict inclusion precisely when and .

(g) First, note that the assumption implies . If , then , the assumption is trivially satisfied, and, also trivially, .

Now, assume , and that is not true. Then . Therefore, we can find such that and contradicting the assumption.

###### Lemma 2.5

Let and . Then

(2.7) |

The strict inclusion applies if, and only if, and , or and .

### 2.2 -valued functions and their scalarizations

Let be another locally convex, topological linear space with dual . A function is called convex if

(2.8) |

It is an exercise (see, for instance, [13]) to show that is convex if, and only if, the set

is convex. A -valued function is called positively homogeneous if

and it is called sublinear if it is positively homogeneous and convex. Another exercise shows that is sublinear if, and only if, is a convex cone.

A function is called lower semi-continuous (l.s.c. for short) at if where

(2.9) |

where is a neighborhood base of . The function is called closed if it is l.s.c. at every . Again, one can show that is closed if, and only if, is a closed set with respect to the product topology, see [22, Proposition 2.34].

The greatest closed convex minorant of a function is denoted by . We have

(2.10) |

###### Remark 2.6

A more common convexity concept for functions is the following (compare, for instance, [20, Definition 14.6]): is called -convex if

It is easily seen that -convexity of implies that the function maps into

and has a convex graph which coincides with

(see [20, Definition 14.7]). Moreover, if is additionally closed, then automatically maps into .

Finally, note that it does not make sense to distinguish between the graph and the epigraph of a -valued function since the two sets coincide.

A function is called proper if its domain

is nonempty and does not attain the value . A -valued function is called -proper if for all . A function is called -proper for if the function is proper. Of course, if is -proper for at least one , then it is proper. Conversely, if is a closed convex proper function, then there is at least one such that is -proper. The latter fact follows, for example, from [13, Theorem 1].

###### Example 2.7

Let and be given. The function defined through

maps into if, and only if, . Moreover, it is positively homogeneous and additive. Therefore, if the function is -valued and convex. It is -proper for all if, and only if, . Finally, is a homogeneous closed half space with normal if and . In particular, for all .

For , the useful relation

immediately follows from the definition of . If and , then for while .

Let a function be given. The family of extended real-valued functions defined by

is called the family of (linear) scalarizations for . The function is convex if, and only if, the scalarizing function is convex for each . A closed convex function is proper if, and only if, there is such that is proper (in the usual sense of classical convex analysis), and this is the case if, and only if, the function is proper. A standard separation argument shows

With some effort, one can show that for a closed convex proper function it suffices to run the intersection in the above formula over the set of which generate a closed proper (and convex) scalarization , see [28] and [29, Corollary 3.34].
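For a set-valued function with polyhedral values, say F(x) = conv(V) + C with C the nonnegative orthant in the plane, the linear scalarization reduces to a minimum of inner products over the vertex set: adding cone directions can only increase the inner product when the scalarizing functional lies in the dual cone. Sign conventions for scalarizations differ across the literature; the following sketch uses one common choice, with made-up vertex data:

```python
# Linear scalarization of a polyhedral value F = conv(V) + R^2_+ :
# for z* in the dual cone (componentwise nonnegative),
#   inf{ <z*, z> : z in F } = min over the vertices of <z*, v>,
# since moving along cone directions cannot decrease <z*, z>.

def scalarize(vertices, zstar):
    assert all(c >= 0 for c in zstar), "z* must lie in the dual cone of R^2_+"
    return min(sum(zs * v for zs, v in zip(zstar, vert)) for vert in vertices)

# hypothetical vertex data for the value F(x) at some fixed x
V = [(0.0, 3.0), (1.0, 1.0), (4.0, 0.0)]

assert scalarize(V, (1.0, 0.0)) == 0.0   # attained at the vertex (0, 3)
assert scalarize(V, (0.0, 1.0)) == 0.0   # attained at the vertex (4, 0)
assert scalarize(V, (1.0, 1.0)) == 2.0   # attained at the vertex (1, 1)
```

Varying z* over the dual cone recovers the family of scalarizations discussed above; the intersection formula then reassembles the closed convex value from these scalar functions.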

## 3 Directional derivatives and subdifferentials of -valued functions

###### Definition 3.1

Let be a convex function. The directional derivative of with respect to at in direction is given by

(3.1) |

If then for all . Therefore, we can restrict the analysis to the case . The main tool will be the directional difference quotient of at which is defined to be the function given by

The next lemma demonstrates the monotonicity of the difference quotient.

###### Lemma 3.2

Let be convex and . If then

(3.2) |

If, additionally, , then

(3.3) |

Proof. Since , is well-defined. The convexity of produces

The rules (a), (c) and (b) of proposition 2.4 produce

Hence . Replacing by we obtain . Proposition 2.4, (a) produces

It remains to demonstrate the inequality (3.3). Since , the convexity of implies

and from proposition 2.4, (a), (c) and we obtain

Since is a cone, the above relation can be divided by . Proposition 2.4, (g) yields

This completes the proof of the lemma.
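The monotonicity asserted in Lemma 3.2 is familiar from the scalar case: for a convex f, the difference quotient t ↦ (f(x + t v) − f(x))/t is nondecreasing in t > 0, and its infimum over t > 0 is the directional derivative. A numerical check for f(x) = x² (our own illustration, not part of the set-valued argument):

```python
# For convex f, q(t) = (f(x + t*v) - f(x)) / t is nondecreasing in t > 0,
# and inf_{t > 0} q(t) = f'(x; v). For f(x) = x**2 at x = 1, v = 1:
#   q(t) = ((1 + t)**2 - 1) / t = 2 + t,
# visibly nondecreasing, with infimum 2 = f'(1) = 2x.

def q(f, x, v, t):
    return (f(x + t * v) - f(x)) / t

f = lambda x: x * x
ts = [0.01, 0.1, 0.5, 1.0, 2.0]
values = [q(f, 1.0, 1.0, t) for t in ts]

assert all(a <= b for a, b in zip(values, values[1:]))   # monotone in t
assert abs(values[0] - 2.01) < 1e-9                      # q(t) = 2 + t
```

The set-valued difference quotient of Definition 3.1 plays the same role, with the residuation replacing the subtraction and the lattice infimum replacing the scalar infimum.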

###### Lemma 3.3

Let be convex, and . Then

(3.4) |

and the function

is sublinear as a function from into . If , then . Moreover,