The Submodular Welfare Problem with Demand Queries

We consider the Submodular Welfare Problem, where we have m items and n players with given utility functions w_i : 2^[m] → R_+. The utility functions are assumed to be monotone and submodular. We want to find an allocation of disjoint sets S_1, S_2, …, S_n of items maximizing ∑_i w_i(S_i). A (1 − 1/e)-approximation for this problem in the demand oracle model has been given by Dobzinski and Schapira [5]. We improve this algorithm by presenting a (1 − 1/e + ε)-approximation for some small fixed ε > 0. We also show that the Submodular Welfare Problem is NP-hard to approximate within a ratio better than some ρ < 1. Moreover, this holds even when for each player there are only a constant number of items that have nonzero utility. The constant size restriction on utility functions makes it easy for players to efficiently answer any "reasonable" query about their utility functions. In contrast, for classes of instances that were used for previous hardness of approximation results, we present an incentive compatible (in expectation) mechanism based on fair division queries that achieves an optimal solution.


Introduction
The following problem has appeared in the context of combinatorial auctions.
Problem: Given m items and n players with utility functions w_i : 2^[m] → R_+, find a partition [m] = S_1 ∪ S_2 ∪ … ∪ S_n into disjoint sets in order to maximize ∑_{i=1}^n w_i(S_i).
This is what we call a combinatorial allocation problem, where items are to be allocated to players with different interests in different combinations of items, in order to maximize their total utility.
Oracle models. Since an explicit description of a utility function requires exponential space, we have to clarify the issue of accessing the input. Unless the utility functions have a special form which allows us to encode them efficiently, we have to rely on oracle access. This means we have a "black box" for each player, which answers queries about her utility function. Two types of oracles have been considered in the literature.
• Value oracle. The most basic query is: What is the value of f(S)? An oracle answering such queries is called a value oracle.
• Demand oracle. Sometimes, a more powerful oracle is considered, which can answer queries of the following type: Given an assignment of prices to items p : [m] → R, what is max_S (f(S) − ∑_{j∈S} p_j)? Such an oracle is called a demand oracle.
Whether we want to allow the use of a demand oracle depends on the particular setting. From an economic standpoint, it seems natural to assume that given an assignment of prices, a player can decide which set of items is the most valuable for her. On the other hand, from a computational point of view, this decision problem is NP-hard for some very natural submodular utility functions (e.g. coverage-type functions). Thus we can either assume that players have sufficient knowledge of their utility functions (or that the utility functions are simple enough) so that they are able to answer demand queries, or we can restrict ourselves to the value oracle model. In this paper, we follow the first path and assume that a demand oracle is available for each player.
Submodularity. By the Submodular Welfare Problem, we mean the combinatorial allocation problem where utility functions are monotone and submodular. A function is monotone if f(S) ≤ f(T) whenever S ⊆ T. Submodularity is a discrete analogue of concavity and can be defined in three equivalent ways.
• Union and intersection: For any S, T: f(S ∪ T) + f(S ∩ T) ≤ f(S) + f(T).
• Decreasing marginal values: For any S ⊆ T and j ∉ T: f(S ∪ {j}) − f(S) ≥ f(T ∪ {j}) − f(T).
• Subadditive marginal values: For any S, T, T′: f(S ∪ T ∪ T′) − f(S) ≤ (f(S ∪ T) − f(S)) + (f(S ∪ T′) − f(S)).
The second definition has the natural interpretation that the marginal value of an item cannot increase by including some other additional items. This property has been known in economics as the property of diminishing returns. It is natural to assume this in certain settings, where items do not complement each other in any way.
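As a quick sanity check, the decreasing-marginal-values condition can be verified exhaustively for a small coverage function, a standard example of a monotone submodular function. The items and covers below are illustrative, not taken from the paper.

```python
from itertools import combinations

# Each item covers a subset of a small universe; w(S) = |union of covers|.
COVERS = {1: {"a", "b"}, 2: {"b", "c"}, 3: {"c", "d"}, 4: {"a", "d"}}

def w(S):
    """Coverage utility: number of universe elements covered by S."""
    covered = set()
    for j in S:
        covered |= COVERS[j]
    return len(covered)

def subsets(items):
    items = sorted(items)
    return [set(c) for r in range(len(items) + 1) for c in combinations(items, r)]

def has_decreasing_marginal_values(w, items):
    """Check: for all S ⊆ T and j ∉ T, w(S+j) - w(S) >= w(T+j) - w(T)."""
    for T in subsets(items):
        for S in subsets(T):
            for j in items:
                if j not in T and w(S | {j}) - w(S) < w(T | {j}) - w(T):
                    return False
    return True

print(has_decreasing_marginal_values(w, set(COVERS)))  # True
```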
Truthfulness. We remark that in the context of combinatorial auctions, the utility functions are actually unknown, and players are not necessarily assumed to be willing to reveal their true valuations. This leads to the design of incentive-compatible mechanisms, where players are not only queried about their valuations but also motivated to answer truthfully. We do not deal with this issue here and assume instead that players are willing to cooperate and give true answers to our queries.
Previous work. Without any assumptions on the utility functions, the problem is at least as hard as Set Packing (which corresponds to "single-minded bidders" who are interested in exactly one set each). Hence, no reasonable approximation (significantly better than m^{1/2}) can be expected in general [13,20]. Research has focused on classes of utility functions that allow better positive results, in particular submodular utility functions. Lehmann, Lehmann and Nisan [15] provide an approximation ratio of 1/2 for the Submodular Welfare Problem, using a simple greedy algorithm with only value queries. (This also follows from the work of Fisher, Nemhauser and Wolsey on submodular maximization subject to a matroid constraint [19].) A randomized version of this algorithm is shown in [5] to give a somewhat improved approximation ratio of n/(2n − 1). It is shown in [14] that if only value queries are allowed, then it is NP-hard to approximate Submodular Welfare within a ratio strictly better than 1 − 1/e.
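The greedy algorithm mentioned above can be sketched as follows. It uses value queries only; the input format (a list of set functions, one per player) is an assumption of this illustration.

```python
def greedy_allocation(utilities, items):
    """Process items in a fixed order; give each item to the player with the
    largest marginal value for it, given her current bundle. Uses only value
    queries; a 1/2-approximation for monotone submodular utilities [15]."""
    bundles = [set() for _ in utilities]
    for j in items:
        gains = [u(b | {j}) - u(b) for u, b in zip(utilities, bundles)]
        bundles[gains.index(max(gains))].add(j)  # ties broken by player index
    return bundles

# Toy run with two coverage-style utilities (illustrative data).
u1 = lambda S: len(S & {1, 2})          # player 1 only values items 1, 2
u2 = lambda S: len(S & {2, 3})          # player 2 only values items 2, 3
alloc = greedy_allocation([u1, u2], [1, 2, 3])  # -> [{1, 2}, {3}]
```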
Several subsequent works [4,5,8] considered the following linear programming relaxation of the problem, usually referred to as the Configuration LP. Here, x_{i,S} is intended to be an indicator variable that specifies whether player i gets set S.
Configuration LP: Maximize ∑_{i,S} x_{i,S} w_i(S) subject to:
• Item constraints: ∑_{i,S: j∈S} x_{i,S} ≤ 1 for every item j.
• Player constraints: ∑_S x_{i,S} ≤ 1 for every player i.
• Nonnegativity: x_{i,S} ≥ 0.
This linear program has an exponential number of variables but only a polynomial number of constraints. Using the fact that the separation oracle for the dual is exactly the demand oracle, this LP can be solved optimally in the demand oracle model. We refer the reader to [17,4] for more details.
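For completeness, here is one way to see this (a standard derivation, not spelled out in the text above). Associating dual variables p_j with the item constraints and u_i with the player constraints, the dual of the Configuration LP reads

```latex
\begin{array}{ll}
\text{minimize}   & \sum_j p_j + \sum_i u_i \\[2pt]
\text{subject to} & u_i \ge w_i(S) - \sum_{j \in S} p_j
                    \quad \text{for every player } i \text{ and every set } S, \\[2pt]
                  & p_j \ge 0, \quad u_i \ge 0.
\end{array}
```

The dual has polynomially many variables, and a violated constraint for player i is found by checking whether u_i < max_S (w_i(S) − ∑_{j∈S} p_j), i.e., by a single demand query; the ellipsoid method then solves both programs.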
The integrality gap of this LP is known (up to lower-order terms) for classes of utility functions that are more general than submodular. For subadditive utility functions, it is 1/2 [8], and for fractionally subadditive utility functions it is 1 − 1/e [5]. A 1/2-approximation for subadditive utilities and a (1 − 1/e)-approximation for fractionally subadditive functions were given in [8]. Both algorithms run in the demand oracle model. For submodular utility functions, which are strictly contained in the classes we just mentioned, the positive results still apply, and hence there is a (1 − 1/e)-approximation in the demand oracle model [5]. Recently, it has been shown that a (1 − 1/e)-approximation for Submodular Welfare is possible even in the value oracle model, which is optimal [21]. Prior to this work, however, it was not known whether this approximation factor can be improved in the demand oracle model. Also, it was not known whether the Configuration LP actually has an integrality gap arbitrarily close to 1 − 1/e for submodular utility functions. An example with integrality gap 7/8 was given in [5].
Our results. Our main result is the following.

Theorem 1.1. There is some universal constant ε > 0 and a randomized rounding procedure for the Configuration LP, such that given any feasible fractional solution, the rounding procedure produces a feasible allocation whose expected value is at least a (1 − 1/e + ε)-fraction of the value of the fractional solution.
Our rounding procedure is oblivious in the sense of [8]: its only input is a fractional LP solution, and it need not know anything about the actual utility functions of the players. We obtain the following corollary by combining Theorem 1.1 with the fact that demand queries suffice in order to find an optimal fractional solution to the Configuration LP.
Corollary 1.2. The Submodular Welfare Problem can be approximated in the demand oracle model within a ratio of 1 − 1/e + ε (in expectation), for some absolute constant ε > 0.
The value of ε that we obtain is small, roughly 10^{−5}. The significance of this result is that 1 − 1/e is not the optimal answer, and hence it is likely that further improvements are possible. Another way to look at this result is that the integrality gap of the Configuration LP with submodular functions cannot be arbitrarily close to 1 − 1/e ≈ 0.632. We do not determine the worst case integrality gap; on the negative side, we improve the example of 7/8 from [5] to an example with a gap of roughly 0.782 (see Appendix C).
Hardness results. This paper also contains some APX-hardness results. (A maximization problem is APX-hard if there is some constant ρ < 1 such that it is NP-hard to achieve approximation ratios better than ρ.) Contrasting Theorem 1.1 with the hardness results of [14] clearly shows that the best approximation ratio for Submodular Welfare depends on the query model that is allowed. Is there a limit to the best approximation ratio one can achieve, as we consider progressively stronger query models? Prior to our work, APX-hardness was not known to hold even for the demand query model.
Concurrently and independently of our work, Dobzinski and Schapira [private communication] proved APX-hardness in the demand oracle model. More recently, Chakrabarty and Goel [3] proved that in the same model it is NP-hard to approximate the Submodular Welfare Problem within a ratio better than 15/16.
In contrast to these results, we consider utility functions of a very restricted type. Hardness results using such utility functions are valid not only in the demand oracle model, but in a much wider class of oracle models.

Constant size utility functions. Every player is interested only in a constant number (say, t) of the items. All other items have value 0 for the player. Hence the utility function of a player can be explicitly given by a table with a constant number of entries, 2^t. This allows every player to answer demand queries (and essentially any other type of query) efficiently, in constant time. In particular, the Configuration LP then has only a polynomial number of "relevant" variables, and hence can be solved efficiently.
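A constant-size utility function can be stored as an explicit table, and a demand query can then be answered by brute force over the 2^t relevant subsets. The class below is a hypothetical sketch of this representation, not an artifact of the paper.

```python
class ConstantSizeUtility:
    """Utility supported on t items, given by a table with 2^t entries."""

    def __init__(self, table):
        # table maps every subset (as a tuple) of the t relevant items to a value
        self.table = {frozenset(S): v for S, v in table.items()}
        self.support = frozenset().union(*self.table)

    def value(self, S):
        # items outside the support contribute nothing
        return self.table[frozenset(S) & self.support]

    def demand(self, prices):
        """Answer a demand query by enumerating all 2^t relevant subsets:
        argmax_S value(S) - sum_{j in S} p_j (constant time for constant t)."""
        def profit(S):
            return self.table[S] - sum(prices.get(j, 0.0) for j in S)
        best = max(self.table, key=profit)
        return set(best), profit(best)

# Example with t = 2 relevant items (illustrative, submodular values).
w = ConstantSizeUtility({(): 0.0, (1,): 1.0, (2,): 1.0, (1, 2): 1.5})
bundle, profit = w.demand({1: 0.2, 2: 0.8, 3: 5.0})  # item 3 is irrelevant
```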
Theorem 1.3. The Submodular Welfare Problem is APX-hard. Moreover, this holds even if every player has a constant size utility function.
We remark that in previous APX-hardness results (say, for fractionally subadditive utility functions [4]), all players typically have the same utility function, and hence this utility function cannot be of constant size (as then the optimal allocation could be found by enumerating all possible allocations). To illustrate how certain query models might circumvent previous NP-hardness results, we consider a class of queries that we call fair division queries. We give a polynomial time incentive compatible mechanism based on fair division queries that extracts the maximum welfare whenever all players have the same utility function, regardless of whether it is submodular or not. It is our opinion that no "reasonable" query model will be able to circumvent NP-hardness results when utility functions are of constant size. For this reason, in this paper we also indicate how previous NP-hardness results can be modified to hold also when utility functions are of constant size. In addition, this paper contains NP-hardness of approximation results in some cases where utility functions cannot be of constant size, for example, when there are only two players.
Remark. This is a detailed version of our extended abstract [9], where we presented a similar result also for the Generalized Assignment Problem. We treat the Generalized Assignment Problem in a separate paper [10]. Here, we focus on the Submodular Welfare Problem.

Overview of techniques
First, let us recall how one achieves an approximation ratio of 1 − 1/e (or in fact, 1 − (1 − 1/n)^n) for the Submodular Welfare Problem [5,8]. One uses a given feasible solution to the Configuration LP in the following way. Every player independently selects a tentative set, where the probability that player i selects set S is precisely x_{i,S}. (If ∑_S x_{i,S} < 1 for some player, then it may happen that the player selects no tentative set at all.) Per player, the expected utility after this step is equal to her contribution to the value of the fractional solution of the Configuration LP. However, the tentative allocation may not be feasible, because some items may be in more than one tentative set. To resolve this issue, one uses a "contention resolution procedure". In [5], contention resolution is based on further queries to the players in order to determine which player will benefit the most from getting the contended item. In [8], contention resolution is done in an oblivious way, without further interaction with the players. In both cases it is shown that contention resolution can be done while preserving (in expectation) at least a (1 − 1/e)-fraction of the total utility.
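The two-step procedure recalled above can be sketched as follows. The input format (one dict of set-to-weight per player) is assumed, and contention is resolved by a uniformly random choice merely as a placeholder for the actual schemes of [5, 8].

```python
import random

def two_step_rounding(x, rng=random.Random(0)):
    """x[i] maps frozensets of items to weights x_{i,S}, summing to <= 1."""
    # Step 1: each player independently samples a tentative set.
    tentative = []
    for dist in x:
        r, chosen = rng.random(), frozenset()   # no set if weights sum to < 1
        for S, weight in dist.items():
            if r < weight:
                chosen = S
                break
            r -= weight
        tentative.append(set(chosen))
    # Step 2: resolve contention (placeholder: uniform among claimants).
    allocation = [set() for _ in x]
    for j in set().union(*tentative):
        claimants = [i for i, S in enumerate(tentative) if j in S]
        allocation[rng.choice(claimants)].add(j)
    return allocation

x = [{frozenset({1, 2}): 0.5, frozenset({3}): 0.5},
     {frozenset({2, 3}): 1.0}]
alloc = two_step_rounding(x)   # a feasible (disjoint) allocation
```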
The above two-step rounding procedure seems to have a lot of slackness. Namely, there are many items that are not allocated at all because they are not in any tentative set. In fact, on what appear to be worst case instances of the two-step rounding procedure, every item has probability roughly 1/e of not being allocated at all. It appears that adding a third step to this rounding procedure, in which the remaining items are allocated, could potentially allow one to improve the 1 − 1/e ratio.
Somewhat surprisingly, one can design instances (see Section 1.4) where the utility functions are submodular, and regardless of how contention resolution is performed, and how items outside the tentative sets are allocated, one cannot obtain an approximation ratio better than 1 − 1/e.
We show in Section 1.3 that there is also a simpler one-step randomized rounding procedure (namely, item j is given to player i with probability ∑_{S: j∈S} x_{i,S}, without a need to first choose tentative sets), which achieves an approximation ratio of 1 − 1/e (though not 1 − (1 − 1/n)^n). One may hope that on every instance, the best of the two rounding procedures (the three-step procedure and the one-step procedure) would give an approximation ratio better than 1 − 1/e. Nevertheless, again, this is not true.
Our new algorithm (that does improve upon 1 − 1/e) uses a combination of two rounding techniques.The analysis of our algorithm uses algebraic manipulations that might not be easy to follow.To help the reader see the reasoning behind our final algorithm, we break its derivation into stages.
Our first rounding technique is the Fair Rounding Procedure (Section 1.3), which achieves at least a (1 − 1/e)-approximation, and actually beats it if the fractional solution is "unbalanced". This stage is based on a Fair Contention Resolution technique which might be interesting on its own. It is a variation and simplification of the contention resolution technique used in [8]. Apart from other interesting features (that we discuss in Section 1.2), the Fair Rounding Procedure has the following property. Call a fractional solution unbalanced if for many items j, there is some player i (that may depend on j) for which ∑_{S: j∈S} x_{i,S} > δ, where δ > 0 is some fixed constant. Then the approximation ratio provided by our procedure is 1 − 1/e + Ω_δ(1). Thus, the heart of the problem is to deal with fractional solutions which are balanced.
Section 2.3 is perhaps the most instructive one, as it addresses a simple case which is relatively easy to understand. There are only two players, and the fractional solution is half-integral. The goal is to design a rounding procedure that improves the ratio of 1 − (1 − 1/2)² = 3/4 guaranteed by previous rounding procedures. Neither the three-step rounding procedure (as described above, based on each player choosing one tentative set) nor the one-step rounding procedure achieves such an improvement. We present a new rounding procedure that achieves an approximation ratio of 5/6. Moreover, our analysis of the approximation ratio serves as an introduction to the kind of algebraic manipulations that will be used in the more complicated proofs. We also show that the 5/6 ratio for this case is best possible, by showing a matching integrality gap.
Section 2.4 deals with the case of two players and a balanced fractional solution, in the sense that for every item j, ∑_{S: j∈S} x_{1,S} = ∑_{S: j∈S} x_{2,S} = 1/2. We design an algorithm using two tentative sets S, S′ for player 1 and T, T′ for player 2, sampled independently according to the fractional solution. Using two sets instead of one allows us to allocate more items overall, and also gives us more freedom in designing an allocation scheme. Using a submodularity argument inspired by the half-integral case, we show that either player 1 gains by taking the entire complement of the other player's tentative set, or she can combine items from S, S′ to obtain what we call a "diagonal set" Y. Similarly, player 2 obtains a diagonal set Z. The sets Y and Z are designed so that their average overlap is less than that of S and T. Thus by resolving contention between Y and Z, we get a factor better than 3/4.

Section 2.5 combines the balanced and unbalanced cases for two players. We gain for different reasons in the balanced and unbalanced cases, but we always beat the factor of 3/4. For two players, we obtain an approximation factor of 13/17. This result convinced us that an improvement over 1 − 1/e in the general case of n players should be possible.
Considering the aforementioned Fair Rounding Procedure, it remains to handle the case of balanced fractional solutions. In Section 3.2, we develop the "Butterfly Rounding Procedure", which achieves an improvement over 1 − 1/e for any balanced fractional solution for n players. This is the key part of the proof of Theorem 1.1. It is based on ideas used for the two-player case, but again, with added complications. The main structural difference between this rounding procedure and earlier ones is that we let every player choose two tentative sets rather than one. Thereafter, we perform contention resolution for every item that is in tentative sets of more than one player. The exact way in which we perform contention resolution is rather complicated.
Finally, the Fair and Butterfly Rounding Procedures are combined to obtain an approximation ratio strictly above 1 − 1/e for any fractional solution (Section 3.3).

Fair Contention Resolution
A key component in previous (1 − 1/e)-approximation algorithms for Submodular Welfare (and for more general classes of utility functions as well) is a method for resolving contention among several tentative sets that contain the same item. In our current work, we generalize and improve upon the method used for this purpose in [8], so that it can be combined more easily with other parts of our new rounding procedure. Our method gives a solution to a problem that we call Fair Contention Resolution. We now describe this problem.
There are n players and one item. Every player i is associated with a probability 0 ≤ p_i ≤ 1 of requesting the item. Our goal is to allocate the item to at most one player, and have the probability that a player i receives the item be proportional to its respective p_i. We call this balanced contention resolution. Among all such contention resolution schemes, we wish to find one that maximizes for the players the probability that they get the item (the balancing requirement implies that maximizing this probability for one player maximizes it for all players). Given complete coordination among the players, we may assign the item to player i with probability p_i / ∑_j p_j, and this would be optimal. But we will be dealing with a two-step situation in which there is only partial coordination.
1.In step 1, there is no coordination among the players.Every player i independently requests the item with probability p i .
2. In step 2, those players who requested the item in step 1 may coordinate a (randomized) strategy for allocating the item to one of them.
The probability that the item is allocated at all is at most the probability that the set of players reaching step 2 is nonempty, namely, 1 − ∏_j (1 − p_j). Hence in balanced contention resolution, player i can get the item with probability at most (p_i / ∑_j p_j) · (1 − ∏_j (1 − p_j)). What we call Fair Contention Resolution is a method which indeed attains this maximum. In previous work [8], such contention resolution methods were designed for some special cases. Here, we design a general technique for an arbitrary number of players and any choice of probabilities p_j.
Fair Contention Resolution. Suppose n players compete for an item independently with probabilities p_1, p_2, …, p_n. Denote by A the random set of players who request the item, i.e. Pr[i ∈ A] = p_i independently for each i.
• If A = ∅, do not allocate the item.
• If A = {k}, allocate the item to player k.
• If |A| ≥ 2, allocate the item to player k ∈ A with probability
r_{A,k} = (1 / ∑_i p_i) · ( ∑_{i∈A, i≠k} p_i / (|A| − 1) + ∑_{i∉A} p_i / |A| ).
It can be seen that r_{A,k} ≥ 0 and ∑_{k∈A} r_{A,k} = 1, so this is a valid probability distribution. The following lemma shows that this is indeed the best possible balanced contention resolution technique; we defer the proof to Section 3.1.
Lemma 1.4. Conditioned on player k requesting the item, she obtains it with probability exactly (1 / ∑_j p_j) · (1 − ∏_j (1 − p_j)).

Prior to this work, there have been two known ways of achieving a (1 − 1/e)-approximation for the Submodular Welfare Problem [5,8]. They are both based on solving the Configuration LP using demand queries and then rounding the fractional solution to an integral one. To put the question of improving 1 − 1/e in perspective, and to pave the way for our subsequent considerations, we present two new algorithms here. We assume in the following that x_{i,S} is an optimal solution to the Configuration LP.
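With the allocation probabilities r_{A,k} = (1/∑_i p_i)(∑_{i∈A, i≠k} p_i/(|A|−1) + ∑_{i∉A} p_i/|A|) for |A| ≥ 2 (our reading of the scheme; an assumption of this sketch), one can verify the claim of Lemma 1.4 exactly by enumerating all request sets A: the unconditional probability that player k receives the item should come out to (p_k/∑_j p_j)(1 − ∏_j(1 − p_j)), which is the conditional guarantee multiplied by p_k.

```python
from itertools import combinations
from math import prod

def r_Ak(A, k, p):
    """Allocation probability of player k when the request set is A, |A| >= 2
    (the formula above; assumed here, not quoted from the paper)."""
    inside = sum(p[i] for i in A if i != k) / (len(A) - 1)
    outside = sum(p[i] for i in range(len(p)) if i not in A) / len(A)
    return (inside + outside) / sum(p)

def prob_gets_item(k, p):
    """Exact Pr[player k receives the item], summing over all request sets."""
    n, total = len(p), 0.0
    for size in range(1, n + 1):
        for A in map(set, combinations(range(n), size)):
            if k not in A:
                continue
            pr_A = prod(p[i] if i in A else 1 - p[i] for i in range(n))
            total += pr_A * (1.0 if size == 1 else r_Ak(A, k, p))
    return total

p = [0.5, 0.3, 0.2]
claimed = [pk / sum(p) * (1 - prod(1 - pi for pi in p)) for pk in p]
assert all(abs(prob_gets_item(k, p) - claimed[k]) < 1e-9 for k in range(3))
```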
The Fair Rounding Procedure.
• Let each player i sample a set S i from her probability distribution.
• For each item j, use the Fair Contention Resolution technique to decide which of the players i such that j ∈ S i receives the item.
Before analyzing this algorithm, let us introduce a class of utility functions that is more general than submodular functions.

Definition 1.5. A function w is fractionally subadditive if w(S) ≤ ∑_i α_i w(T_i) whenever 0 ≤ α_i ≤ 1 for all i and ∑_{i: j∈T_i} α_i ≥ 1 for all j ∈ S.
I.e., if the sets T_i form a "fractional cover" of S, then the sum of their utilities weighted by the corresponding coefficients is at least as large as that of S. It is known [8] that this is equivalent to the "XOS" property, where f is XOS if f(S) = max_j g_j(S) with each g_j linear. In this paper, we use the terms XOS and fractionally subadditive interchangeably. The key to the analysis of this algorithm is the following lemma stated in [8].
Lemma 1.6. Let p ∈ [0, 1] and let w : 2^[m] → R_+ be fractionally subadditive. For a set S, consider a probability distribution over subsets S′ ⊆ S such that each element of S is included in S′ with probability at least p (not necessarily independently). Then

E[w(S′)] ≥ p · w(S).
Using this lemma, we can show the following.
Lemma 1.7. For n players with fractionally subadditive utility functions, the Fair Rounding Procedure delivers expected value at least (1 − (1 − 1/n)^n) times the value of the fractional solution.

Proof. Each player requests a set of expected value E[w_i(S_i)] = ∑_S x_{i,S} w_i(S). Define y_{ij} = ∑_{S: j∈S} x_{i,S}, i.e., the probability that player i competes for item j. By Lemma 1.4, conditioned on player i competing for item j, the item is allocated to her with probability (1 / ∑_i y_{ij}) · (1 − ∏_i (1 − y_{ij})) ≥ 1 − (1 − 1/n)^n, since ∑_{i=1}^n y_{ij} ≤ 1 and, by the arithmetic-geometric mean inequality, the worst case occurs when y_{ij} = 1/n for all i. This is done independently of the particular set S_i containing j that the player has chosen. Therefore, conditioned on any particular chosen set S_i, the player obtains each of its items with probability at least 1 − (1 − 1/n)^n. The result follows by Lemma 1.6.

This already improves on the approximation factor of 1 − 1/e for any fixed number of players. However, our goal is to obtain an absolute constant larger than 1 − 1/e, independent of n. (Also, we know that 1 − 1/e cannot be improved for fractionally subadditive utility functions in general.) An even simpler (1 − 1/e)-approximation (only for submodular utilities) is the following.
The Simple Rounding Procedure.
• Let y_{ij} = ∑_{S: j∈S} x_{i,S}. For each j, we have ∑_{i=1}^n y_{ij} ≤ 1.
• Assign each item j independently: give it to player i with probability y_{ij}.
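The two bullets above can be sketched as follows, using the same assumed input format as before (one dict {frozenset: weight} per player); with the leftover probability 1 − ∑_i y_{ij}, item j stays unallocated.

```python
import random

def simple_rounding(x, items, rng=random.Random(1)):
    """One-step rounding: item j goes to player i with probability
    y_ij = sum_{S: j in S} x[i][S]."""
    allocation = [set() for _ in x]
    for j in items:
        r = rng.random()
        for i, dist in enumerate(x):
            y_ij = sum(wt for S, wt in dist.items() if j in S)
            if r < y_ij:          # feasibility: sum_i y_ij <= 1
                allocation[i].add(j)
                break
            r -= y_ij
    return allocation

x = [{frozenset({1, 2}): 0.5, frozenset({3}): 0.5},
     {frozenset({2, 3}): 0.5}]
alloc = simple_rounding(x, [1, 2, 3])
```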
The fact that this gives a (1 − 1/e)-approximation can be seen in several ways. Given the recent work on submodular maximization subject to a matroid constraint [2], we can argue as follows. For a monotone submodular function w_i, define the concave closure f_i^+(y_i) as the maximum of ∑_S α_S w_i(S) over all α_S ≥ 0 with ∑_S α_S ≤ 1 and ∑_{S: j∈S} α_S ≤ y_{ij} for every item j. Here y_i is the vector with coordinates y_{ij} for j = 1, …, m. So the Configuration LP can be written equivalently as maximizing ∑_i f_i^+(y_i) subject to ∑_i y_{ij} ≤ 1 for every j, and f_i^+(y_i) corresponds to the share of player i in the fractional solution. Our Simple Rounding Procedure allocates to player i a random set S_i which is obtained by rounding the coordinates of the vector y_i independently to 0 and 1, based on probabilities y_{ij}. The extension F(y) = E[f(ŷ)] defined in [2] corresponds exactly to the expected value of such a set, E[w_i(S_i)] = F_i(y_i). By [2] (Lemmas 4 and 5), it follows that F_i(y_i) ≥ (1 − 1/e) f_i^+(y_i). In other words, each player receives in expectation at least a (1 − 1/e)-fraction of her LP share.

Obstacles to improving 1 − 1/e
We show here that several natural approaches to improve the factor of 1 − 1/e cannot succeed.
Example 1.8. Consider n^n items arranged in an n-dimensional cube, Q_n = {1, 2, …, n}^n. For a vector y ∈ Q_n, define the "i-fiber" of y as the set of elements coinciding with y in all coordinates j ≠ i: F_i(y) = {x ∈ Q_n : x_j = y_j for all j ≠ i}. The goal of player i is to obtain at least one item from each i-fiber. We define her utility function as w_i(S) = Pr[S ∩ F_i(y) ≠ ∅], where y ∈ Q_n is a uniformly random element, i.e. F_i(y) is a uniformly random i-fiber. This is a monotone submodular function, being the probability measure of a union of events [j ∈ F_i(y)] over j ∈ S.
One optimal fractional solution can be defined as follows. Each player i selects a convex combination of the layers orthogonal to dimension i, H_{i,j} = {x ∈ Q_n : x_i = j} for j = 1, …, n. Each of these sets has value w_i(H_{i,j}) = 1, because it intersects every i-fiber. In our optimal fractional solution, player i receives each set H_{i,j} with fractional weight x_{i,H_{i,j}} = 1/n, for a value of 1.
Consider the Simple Rounding Procedure. It allocates each item independently and uniformly to a random player. For player i, the probability that she receives some item in a fixed i-fiber is 1 − (1 − 1/n)^n. Averaging over all fibers, the utility of each player is 1 − (1 − 1/n)^n. However, the actual optimum gives value 1 to each player. This can be achieved by a "chessboard pattern" where item x ∈ Q_n is allocated to player p(x) = (∑_{i=1}^n x_i) mod n. So our one-step Simple Rounding Procedure gets only a 1 − (1 − 1/n)^n fraction of the optimum.
As an alternative approach, consider a two-step rounding procedure (e.g. the Fair Rounding Procedure). Here, each player chooses a random set S with probability x_{i,S}. Then, conflicts between players are resolved somehow and finally, the remaining items are allocated. For our fractional solution, this would mean that each player chooses H_{i,j} for a random j; we can assume WLOG that she chooses H_{i,1}. Regardless of how we resolve conflicts, only the items in ⋃_i H_{i,1} are allocated at this point, so we cannot get more value than 1 − (1 − 1/n)^n per player. But the situation is even worse than this. We still have (n − 1)^n unallocated items, but regardless of how we assign them to players, we do not gain any additional value at all. This is because for any of these (n − 1)^n remaining items y and any player i, the fiber F_i(y) already has an item in the first layer which was allocated to player i, and hence player i is not interested in any more items from this fiber. Thus again, this approach cannot achieve more than 1 − (1 − 1/n)^n of the optimum. Instead, we have to return to the first step and redesign the rounding procedure in a different way.
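The cube instance can be checked numerically for small n. The script below (0-indexed for convenience, n = 3) verifies that the chessboard allocation gives every player utility 1, i.e., each player's bundle hits every one of her fibers.

```python
from itertools import product

n = 3
cube = list(product(range(n), repeat=n))        # the n^n items of Q_n

def fiber(i, y):
    """i-fiber of y: items agreeing with y in all coordinates j != i."""
    return [x for x in cube if all(x[j] == y[j] for j in range(n) if j != i)]

def utility(i, S):
    """w_i(S): fraction of i-fibers that contain an item of S."""
    reps = [y for y in cube if y[i] == 0]       # one representative per fiber
    return sum(any(x in S for x in fiber(i, y)) for y in reps) / len(reps)

# Chessboard: item x goes to player (sum of coordinates) mod n.
chessboard = [{x for x in cube if sum(x) % n == i} for i in range(n)]
print([utility(i, chessboard[i]) for i in range(n)])  # [1.0, 1.0, 1.0]
```

Within any i-fiber the n items differ only in coordinate i, so their coordinate sums hit every residue mod n exactly once, which is why every fiber contains exactly one item of each player.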

The allocation problem for two players
In this section, we start working towards the goal of proving Theorem 1.1. First we consider a special case of the welfare maximization problem where only two players are interested in a given set of m items. The analysis of this case is not formally needed for the proof of Theorem 1.1, but we consider it instructive for the reader to understand this simpler case before proceeding to the proof of Theorem 1.1. Also, the improvement we achieve here (from 3/4 to 13/17) is relatively significant, as opposed to the purely theoretical improvement in Theorem 1.1.
As we have shown in the previous section, in the setting with two players we can achieve an approximation factor of 3/4 (even assuming only fractional subadditivity).Since our improvements are built on top of the 3/4-approximation algorithm, let's present this special case first.

3/4-approximation for two players
The basic idea is to use the fractional LP solution to generate random sets suitable for each player. When we say that "player 1 samples a random set from his distribution", it means that he chooses set S with probability x_{1,S}. (Since ∑_S x_{1,S} ≤ 1, this is a valid probability distribution.) Similarly, player 2 samples a random set T from her distribution defined by x_{2,T}. We define p_j = ∑_{S: j∈S} x_{1,S} and q_j = ∑_{T: j∈T} x_{2,T}. Ideally, we would like to assign S to player 1 and T to player 2, which would yield an expected value equal to the LP optimum. The only issue is that the sets S and T can overlap, so we cannot satisfy the players' requests exactly. One way to allocate the disputed items is by making a random decision. It turns out that the best way to do this is to allocate disputed items to one of the two players with reversed probabilities compared to the fractional solution (see [8]). For this purpose, we use a random "splitting set" X which contains each item j with probability p_j.

Algorithm 1.
• Let player 1 sample a random set S and let player 2 sample a random set T from their respective distributions.
• Independently, generate a random set X which contains item j with probability p j .
• Assign S \ T to player 1.
• Assign T \ S to player 2.
• Divide S ∩ T into S ∩ T \ X for player 1 and S ∩ T ∩ X for player 2.
Now consider Algorithm 1 from the point of view of player 1, conditioned on a specific choice of S. He receives each element of S, unless it also appears in T ∩ X. This set is sampled independently of S, and Pr[j ∈ T ∩ X] = q_j p_j ≤ 1/4 because p_j + q_j ≤ 1. Therefore, conditioned on S, each element is taken away with probability at most 1/4. By Lemma 1.6, E[w_1(S \ (T ∩ X)) | S] ≥ (3/4) · w_1(S). Taking the expectation over S, we get that player 1 receives expected value at least (3/4) ∑_S x_{1,S} w_1(S). Similarly, player 2 gets at least (3/4) ∑_T x_{2,T} w_2(T), since any element appears in S \ X with probability p_j (1 − p_j) ≤ 1/4. This shows that in expectation, we not only recover at least 3/4 of the optimum, but each player individually obtains at least 3/4 of his share in the LP.
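One run of the splitting step of Algorithm 1 can be written out as below; S, T are the two sampled sets and p maps each item to player 1's marginal p_j (the input format is assumed for this sketch).

```python
import random

def split_allocation(S, T, p, rng=random.Random(2)):
    """Allocate S ∪ T: undisputed items go to their owner, and each disputed
    item j ∈ S ∩ T goes to player 2 with the reversed probability p_j."""
    X = {j for j in S & T if rng.random() < p.get(j, 0.0)}  # splitting set
    to_player1 = (S - T) | ((S & T) - X)
    to_player2 = (T - S) | ((S & T) & X)
    return to_player1, to_player2

S, T = {1, 2, 3}, {2, 3, 4}
to1, to2 = split_allocation(S, T, {1: 0.5, 2: 0.5, 3: 0.5, 4: 0.5})
```

By construction the two bundles are disjoint, cover all of S ∪ T, and satisfy to1 ⊆ S and to2 ⊆ T, matching the analysis above.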

Examples and integrality gaps
The 3/4-approximation algorithm for two players is optimal for fractionally subadditive utility functions, in the sense that our LP can have an integrality gap equal to 3/4. The proof is a simple example with 4 items, which we present here. As shown in [8], the class of fractionally subadditive functions is equal to "XOS", the class of functions obtained as a maximum of several linear functions. We use this property here to define our example.
For a set of items A ⊆ {a, b, c, d}, let S_1 = {a, b}, S_2 = {c, d}, T_1 = {a, c}, T_2 = {b, d}, and define two utility functions by w_1(A) = max{|A ∩ S_1|, |A ∩ S_2|} and w_2(A) = max{|A ∩ T_1|, |A ∩ T_2|}. In other words, S_1, S_2 are the sets desired by player 1 and T_1, T_2 are the sets desired by player 2. The optimal LP solution is x_{1,S_1} = x_{1,S_2} = x_{2,T_1} = x_{2,T_2} = 1/2, which makes each player maximally happy and yields a total value of LP = 4. On the other hand, there is no integral solution of value 4. Such a solution would require two disjoint sets S_i and T_j of value 2, but no such pair of disjoint sets exists. The optimum integral solution has value 3 = (3/4) · LP.
The question arises whether 3/4 is also optimal for submodular functions. It can be seen easily that the utility functions above are not submodular; for example, w_1({a, c}) + w_1({b, c}) = 2 but w_1({c}) + w_1({a, b, c}) = 3. The easiest way to make the functions submodular is to increase the value of the diagonal sets {b, c} and {a, d} to 2.
Example 2.2 (two players with submodular functions). Consider the items as above, where we also define Y = {a, d} and Z = {b, c}. Each singleton has value 1 and any set of at least 3 elements has value 2. For pairs of items, player 1 assigns value 2 to every pair containing at least one of {a, c} and at least one of {b, d}, and value 1 to the remaining pairs; in other words, player 1 wants at least one of items {a, c} and at least one of {b, d}. Symmetrically, player 2 wants at least one of items {a, b} and at least one of {c, d}. The functions w_1 and w_2 can be verified to be submodular (being the rank functions of partition matroids). As before, a fractional solution assigning each of the sets S_1, S_2, T_1, T_2 weight 1/2 has value LP = 4. However, here the integrality gap is equal to 1, since there is an integral solution (Y, Z) of value 4 as well.
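A quick way to verify these claims is to implement the two rank functions directly and check submodularity exhaustively. This sketch assumes the partition-matroid description given above (parts {a, c}, {b, d} for player 1 and {a, b}, {c, d} for player 2).

```python
from itertools import combinations

ITEMS = "abcd"

def rank(parts):
    # Rank function of a partition matroid: each part contributes
    # at most one element (each part has capacity 1 here).
    def w(A):
        return sum(min(1, len(set(A) & p)) for p in parts)
    return w

w1 = rank([{"a", "c"}, {"b", "d"}])  # player 1
w2 = rank([{"a", "b"}, {"c", "d"}])  # player 2

def subsets(items):
    for r in range(len(items) + 1):
        yield from (set(c) for c in combinations(items, r))

def is_submodular(w):
    # w(A) + w(B) >= w(A | B) + w(A & B) for all A, B.
    return all(w(A) + w(B) >= w(A | B) + w(A & B)
               for A in subsets(ITEMS) for B in subsets(ITEMS))

print(is_submodular(w1), is_submodular(w2))   # True True
Y, Z = {"a", "d"}, {"b", "c"}
print(w1(Y) + w2(Z))                          # 4: the diagonal allocation is optimal
```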
This example illustrates a different phenomenon: any optimal solution must combine items from the two sets desired by each player. If we allocate one set from the fractional solution to each player and resolve conflicts arbitrarily, we get only a value of (3/4) LP. Moreover, even allocating the remaining item does not help: regardless of who gets the last item, the objective value is still only (3/4) LP. Therefore, we must combine different sets, and the only optimal solution uses the two diagonals.
Observe that instead of increasing the value of the diagonals, we could have increased the value of the orthogonal sets (T_1, T_2 for player 1; S_1, S_2 for player 2). This produces submodular functions as well, and again the integrality gap is 1. Here, it is enough to allocate, for example, S_1 to player 1 and the complement S_2 to player 2.
These examples might suggest that there is a chance to recover the full LP value by either taking sets from the fractional solution or "diagonals" as above. However, this is not the case: a linear combination of these two cases gives the following example.

Example 2.3 (5/6 integrality gap for two players with submodular functions). Each singleton has value 1 and any set of at least 3 elements has value 2. For pairs of items, the utility function of player 1 satisfies w_1(S_1) = w_1(S_2) = 2 and w_1(T_1) = w_1(T_2) = 4/3.
Symmetrically, the utility function of player 2 is the same except that w_2(S_1) = w_2(S_2) = 4/3 and w_2(T_1) = w_2(T_2) = 2. This can be verified to be a submodular function.

Two players with a half-integral solution
We have seen that the integrality gap for two players with submodular functions can be 5/6. We show that an approximation factor of 5/6 can be achieved in a special case as above, where the optimum fractional solution is half-integral: x_{i,S} ∈ {0, 1/2, 1}. If there is a variable x_{i,S} = 1, we can assign S to player i and all the remaining items to the other player, in which case we recover the LP value without any loss. Therefore, we can assume that there are sets S_1, S_2 with x_{1,S_1} = x_{1,S_2} = 1/2 and sets T_1, T_2 with x_{2,T_1} = x_{2,T_2} = 1/2. Items that appear only in sets requested by one player can simply be assigned to the respective player, which can only improve the approximation factor. So let us assume that each item appears in two sets assigned to different players; i.e., (S_1, S_2) and (T_1, T_2) are two (typically different) partitions of the set of all items.
It is necessary to allow the option to combine items from the two sets desired by each player, as described in Example 2.2. Any solution which starts by allocating one set to each player and resolving conflicts is limited by the factor of 3/4. On the other hand, it is thanks to submodularity that we can extract improved profit by combining two different sets. In analogy with Example 2.2, we define two "diagonal sets" Y = (S_1 ∩ T_1) ∪ (S_2 ∩ T_2) and Z = (S_1 ∩ T_2) ∪ (S_2 ∩ T_1). Our algorithm then chooses a random allocation scheme according to a suitable probability table. We use submodularity to analyze the expected profit of this allocation procedure. Intuitively, T_1 and T_2 are not the sets desired by player 1, and their values could be as low as w_1(S_1 ∩ T_1) and w_1(S_1 ∩ T_2). However, if that is the case, submodularity implies that the diagonal sets Y, Z are very desirable for player 1.

Two players with a balanced fractional solution
Our goal now is to generalize the ideas of the previous section in order to improve the approximation factor of 3/4 for two players in general. First, we relax the condition that the fractional solution is half-integral. Instead, we assume that for any item j, Σ_{S: j∈S} x_{1,S} = Σ_{T: j∈T} x_{2,T} = 1/2. We call such a fractional solution balanced. In particular, we can make this assumption in case both players have the same utility function, since then any solution x_{i,S} can be replaced by x̃_{1,S} = x̃_{2,S} = (x_{1,S} + x_{2,S})/2 without change of value.
Intuition. Suppose that each player gets value 1 in the fractional solution. Let S denote a random set sampled from the distribution of player 1 and T a random set sampled from the distribution of player 2. Due to our assumption of balance, Pr[j ∈ S] = Pr[j ∈ T] = 1/2 for any item j. We can try to allocate S to player 1, and the entire complement of S to player 2. Then player 1 gets expected value 1, while player 2 obtains at least E[w_2(T \ S)] ≥ 1/2; but this might be tight, and then we still don't achieve more than 3/4 of the fractional value. However, this is essentially the only case in which we do not gain compared to 3/4. Assuming that the complement of S is not very valuable for player 2, let's consider another set: Z = (T ∩ S) ∪ (T′ \ S), where T, T′ are sets sampled independently from the distribution of player 2. (This is analogous to one of the diagonal sets in the previous section.) Linearity of expectation allows us to use submodularity for expected values of random sets just like for values of deterministic sets: in other words, if the complement of S presents no improvement on the average over T \ S, then Z is a set as good as T, which is exactly what player 2 would desire. Similarly, let player 1 generate two independent sets S, S′. If he does not gain by receiving the complement of T rather than just S \ T, then the set Y = (S \ T) ∪ (S′ ∩ T) is by submodularity just as good as S or S′. Note that each player uses the other player's set to "combine" two of her sets. Note also that unlike the diagonal sets in the previous section, Y and Z are not disjoint here. They are random sets, typically intersecting; however, the punch line is that the events j ∈ Y and j ∈ Z are negatively correlated. Observe that Z intersects Y only inside S′ ∩ T, and more precisely Y ∩ Z = (S′ ∩ T) ∩ (S ∪ T′). The sets S, S′, T, T′ are sampled independently and contain each item with probability 1/2. Thus Pr[j ∈ Y] = Pr[j ∈ Z] = 1/2 for any item j, while Pr[j ∈ Y ∩ Z] = 3/16 rather than 1/4, which is the probability of appearance in S ∩ T. Thus the interests of the two players are closer to being disjoint, and we are able to allocate more items to each of them, using Y and Z. Our next algorithm takes advantage of this fact, combining the two allocation schemes outlined above.
• Let player 1 sample independently random sets S, S′.
• Let player 2 sample independently random sets T, T′.
• Let X contain each item independently with probability 1/2.
We assign items randomly based on a randomly chosen allocation scheme, given by a probability table. Theorem 2.4. For any balanced fractional solution x_{i,S}, Algorithm 2 gives expected profit at least (37/48) Σ_S x_{i,S} w_i(S) to each player i. Proof. Consider player 1. The sets S and S′ are sampled from the same distribution, and E[w_1(S)] = E[w_1(S′)] = Σ_S x_{1,S} w_1(S) is the share of player 1 in the fractional solution, ideally what we would like player 1 to receive. For the sets allocated to him in the second and third schemes, the complement of T and Y \ (Z ∩ X), we use submodularity. Next we use the linearity of expectation and Lemma 1.6: each item appears in (S ∪ T′) ∩ X ∩ T with probability 3/16, and in T with probability 1/2. Therefore the expected profit of player 1 is at least (37/48) Σ_S x_{1,S} w_1(S). For player 2, the analysis is similar, although not exactly symmetric.

Submodularity yields the analogous bounds for player 2, and finally, applying Lemma 1.6, the rest follows as for player 1.
We remark that with a slightly more involved rounding scheme, it is possible to achieve a 7/9-approximation in the case of two balanced players. Details can be found in [22].

Two players with an arbitrary fractional solution
Given our algorithm for balanced fractional solutions, it seems plausible that we should be able to obtain an improvement in the general case as well. This is because Algorithm 1 gives a better approximation than 3/4 if the fractional solution is unbalanced. However, a fractional solution can be balanced on some items and unbalanced on others, so we have to analyze our profit more carefully, item by item. For this purpose, we prove the following generalization of Lemma 1.6.
This implies Lemma 1.6 as a special case, since we can have X contain each item with the same probability. Proof. Using the marginal value definition of submodularity, and observing that, conditioned on S, item j appears in X with probability at least p_j, taking the expectation over X yields the claimed bound. In the following, we assume that S is a set sampled by player 1 and T is a set sampled by player 2, and we let σ_j and τ_j denote the expected contributions of item j to w_1(S) and w_2(T), respectively. We seek to estimate our profit in terms of E[w_1(S)] = Σ_j σ_j and E[w_2(T)] = Σ_j τ_j, which are the shares of the two players in the fractional solution.
Let us give a sketch of an argument that a strict improvement over 3/4 is possible in the general case. Let's choose a very small constant ε > 0 and define "unbalanced items" by U = {j : |p_j − 1/2| > ε}. We distinguish two cases:
• If a non-negligible value comes from unbalanced items, e.g. Σ_{j∈U}(σ_j + τ_j) ≥ ε · LP, then we use Algorithm 1. A refined analysis using Lemma 2.5 implies that the algorithm yields profit strictly better than (3/4) LP.
• If the contribution of unbalanced items is negligible, Σ_{j∈U}(σ_j + τ_j) < ε · LP, then let's remove the unbalanced items. Also, we scale the remaining fractional solution by 1/(1 + 2ε) and possibly extend some sets so that we get a balanced solution with p_j = q_j = 1/2. We incur at most a factor of (1 − 3ε) compared to the original fractional solution. Then we run Algorithm 2 on this balanced fractional solution, which yields expected value at least (37/48)(1 − 3ε) · LP.
Choosing for example ε = 1/150 gives an approximation factor slightly better than 3/4. A more careful combination of the balanced and unbalanced cases yields a 13/17-approximation for two players in the general case. We present this algorithm in Appendix A.

The allocation problem for n players
As we discussed, our final algorithm will use a combination of two rounding techniques. The first technique is the Fair Rounding Procedure that we presented in Section 1.3.
• Let each player i sample a set S_i from her probability distribution.

The Fair Rounding Procedure
• Using the Fair Contention Resolution technique (below), resolve contention independently for each item, and allocate each item to the respective winner.
Fair Contention Resolution. Suppose n players compete for an item independently with probabilities p_1, p_2, . . ., p_n. Denote by A the random set of players who request the item, i.e., Pr[i ∈ A] = p_i independently for each i.
• If A = ∅, do not allocate the item.
• If A = {k}, allocate the item to player k.
• If |A| > 1, allocate the item to each k ∈ A with probability r_{A,k} = (1/Σ_i p_i) · ( Σ_{i∈A\{k}} p_i/(|A| − 1) + Σ_{i∉A} p_i/|A| ). It can be seen that r_{A,k} ≥ 0 and Σ_{k∈A} r_{A,k} = 1, so this is a valid probability distribution. Here, we analyze the Fair Contention Resolution technique and prove Lemma 1.4. Let us restate it here: Lemma. Conditioned on player k requesting the item, she obtains it with probability exactly (1/Σ_i p_i)(1 − Π_i (1 − p_i)). Proof. First, suppose that we allocate the item to player k with probability r_{A,k} for any A containing k, even A = {k}. (For the sake of the proof, we interpret the sum over A \ {k} for A = {k} as zero, although the summand is undefined.) However, when A = {k}, our technique actually allocates the item to player k with probability 1, rather than r_{{k},k}. So player k gains an additional probability Pr[A = {k}](1 − r_{{k},k}) = Pr[A = {k}] · p_k/Σ_i p_i. Summing the contributions of all sets A containing k, and redistributing terms over the subsets A′ = A \ {k} of the remaining players, one obtains that the total probability that player k obtains the item, conditioned on requesting it, is exactly (1/Σ_i p_i)(1 − Π_i (1 − p_i)). As we showed in Lemma 1.7, this already proves that the Fair Rounding Procedure achieves a (1 − 1/e)-approximation. In fact, the Fair Rounding Procedure achieves a factor better than 1 − 1/e whenever the fractional solution is unbalanced, similarly to the case of two players. Therefore, the most difficult case to deal with is when the fractional solution is balanced.
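The resolution rule and the lemma can be sanity-checked by exact enumeration over all 2^n request patterns. The sketch below implements the rule as stated (the explicit formula for r_{A,k} is reconstructed from the proof, so treat it as an assumption) and verifies the closed form of the conditional winning probability on a small instance.

```python
from itertools import combinations

def r(A, k, p):
    """Probability that player k in A (with |A| > 1) receives the item."""
    n, a = len(p), len(A)
    total = sum(p)
    inside = sum(p[i] for i in A if i != k) / (a - 1)
    outside = sum(p[i] for i in range(n) if i not in A) / a
    return (inside + outside) / total

def conditional_win_prob(k, p):
    """Exact Pr[k gets the item | k requests it], by enumeration over A."""
    n = len(p)
    others = [i for i in range(n) if i != k]
    win = 0.0
    for m in range(len(others) + 1):
        for rest in combinations(others, m):
            A = set(rest) | {k}
            prob_A = 1.0
            for i in range(n):
                prob_A *= p[i] if i in A else (1 - p[i])
            alloc = 1.0 if len(A) == 1 else r(A, k, p)
            win += prob_A * alloc
    return win / p[k]

p = [0.3, 0.6, 0.2]
none = 1.0
for q in p:
    none *= 1 - q
claim = (1 - none) / sum(p)   # the lemma's closed form
for k in range(len(p)):
    assert abs(conditional_win_prob(k, p) - claim) < 1e-12
print(claim)
```

Note that the conditional probability is the same for every player, which is exactly the fairness property the procedure is named for.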

The Butterfly Rounding Technique
Let's assume for now that the fractional solution is balanced in the sense that for each player i and item j, we have y_ij = Σ_{S: j∈S} x_{i,S} = 1/n, which is the worst case for the Fair Rounding Procedure. We also assume that the number of players n is very large; otherwise, we have an improvement over 1 − 1/e already. In other words, we consider the variables y_ij infinitesimal, and we write (1 − 1/n)^{n/2} ≈ e^{−1/2}, etc. In this case, we use ideas from the two-player case in order to obtain a small improvement over 1 − 1/e. Roughly speaking, we divide the players into two groups and treat them as two super-players, using our algorithm for two players. The items obtained by the super-players are then allocated within each group. We would have liked to employ Algorithm 3 as a black box for the two super-players, but the additional complication of conflicts inside each group forces us to modify the previous algorithms slightly.
Let us assume that we can split the players evenly into two groups A, B such that for each item j, Σ_{i∈A} y_ij = Σ_{i∈B} y_ij = 1/2. For a collection of sets {S_i : i ∈ A} sampled by players in one group, we will use Fair Contention Resolution to make the sets disjoint. Recall that players in a group A such that Σ_{i∈A} y_ij = 1/2 can resolve contention in such a way that each requested item is allocated with probability 2(1 − e^{−1/2}) ≈ 0.787. This is significantly better than 1 − e^{−1} ≈ 0.632; however, 0.787 is not the approximation factor we can achieve. First we have to distribute items between the two groups, and for this purpose we use ideas inspired by the two-player case. In the end, we recover roughly 0.645 of the LP value for each player.
Algorithm 5 (The Butterfly Rounding Procedure).
• Let each player in group A sample two independent random sets S_i, S′_i.
• Let each player in group B sample two independent random sets T_i, T′_i.
• Let the players in A apply Fair Contention Resolution to the sets S_i to obtain disjoint sets S*_i ⊆ S_i. Similarly, let them resolve contention among the sets S′_i to obtain disjoint sets S′*_i ⊆ S′_i.
• Let the players in B use Fair Contention Resolution among the sets T_i to obtain disjoint sets T*_i ⊆ T_i. Similarly, let them resolve contention among the sets T′_i to obtain disjoint sets T′*_i ⊆ T′_i.
We assign the items using one of four allocation schemes, chosen at random with suitable probabilities. See the figures depicting the four allocation schemes (without considering contention inside each group of players). In the following, we also use Y_i, Z_i, etc., to denote the "diagonal sets" before resolving contention; taking the union over each group, we get the same set regardless of whether contention is resolved or not. Figure 4: Allocation schemes 1 and 2. Intuition. If we use only the first two schemes, we get an approximation factor at least 1 − 1/e. In fact, we get 1 − 1/e even without using the sets S′_i, T′_i: if each player chooses one set and we resolve contention first in one preferred group and then in the second group, we get exactly 1 − 1/e (see the proof below). Adding some elements of S′_i to a player i ∈ A and some elements of T′_i to a player i ∈ B might or might not help; but if it doesn't help, we prove that we can construct other good sets Y_i, Y′_i for player i ∈ A and Z_i, Z′_i for i ∈ B, which have the property of negative correlation (we repeat the trick of Algorithm 2). Then we can extract more than 1 − 1/e of their value for each player.
Theorem 3.1. For n players with a balanced fractional solution and n → ∞, Algorithm 5 yields expected profit at least 0.645 Σ_S x_{i,S} w_i(S) for each player i.
Proof. We use the following notation: for every i, j, let y_ij = Σ_{S: j∈S} x_{i,S}; by the balance assumption, Σ_{i∈A} y_ij = Σ_{i∈B} y_ij = 1/2. We also write U = ∪_{k∈A} S_k and V = ∪_{k∈B} T_k for the unions of the sets sampled in each group.
First, recall that in each group of sets like {S_i : i ∈ A}, Lemma 1.4 allows us to resolve contention in such a way that each item in S_i is retained in S*_i with conditional probability at least 2(1 − e^{−1/2}). We will postpone this step until the end, which will incur a factor of 2(1 − e^{−1/2}) on the expected value of all allocated sets. Instead, we analyze the sets "requested" by each player, which are formally obtained by removing the stars from the sets appearing in each allocation scheme. Note that some requested sets are formed by combining S_i and S′_i, such as Y_i = (S_i ∩ V) ∪ (S′_i \ V); however, contention resolution for each fixed item requested by player i involves only one of the sets S_i, S′_i.
Consider a player i ∈ A. In the first allocation scheme, he requests the set S_i, of expected value E[w_i(S_i)] = α_i. In the second allocation scheme, he requests a set containing S_i \ V. Observe that at this point, we already have an approximation factor of 1 − 1/e: by averaging the two cases, we get (1/2) E[w_i(S_i) + w_i(S_i \ V)] ≥ (1/2)(1 + e^{−1/2}) α_i. Player i actually receives each requested item with probability at least 2(1 − e^{−1/2}), so his expected profit is at least 2(1 − e^{−1/2}) · (1/2)(1 + e^{−1/2}) α_i = (1 − e^{−1}) α_i. However, rather than S_i \ V alone, player i requests (S_i \ V) ∪ (S′_i \ V \ U). This might yield some gain or not; we would like to express this gain in terms of a quantity γ_i. Let's write Ũ = ∪_{k∈A\{i}} S_k; we can use this instead of U = ∪_{k∈A} S_k here. The way we analyze the contribution of S′_i \ V \ Ũ is that we look at the marginal value of a set added to S_i \ V; this is also a submodular function. Since Ũ is sampled independently of S_i, S′_i and V, Lemma 2.5 applies, and taking the expectation over the remaining random sets bounds the gain γ_i. Now, let's turn to the third allocation scheme. Player i requests Y_i = (S_i ∩ V) ∪ (S′_i \ V), which by submodularity and monotonicity has expected value close to α_i whenever the gain γ_i is small. Note that either player i gains in the second allocation scheme (when γ_i is large), or otherwise Y_i has very good expected value, close to α_i. In the fourth allocation scheme, player i requests the analogous diagonal set with the roles of V and its complement exchanged.

By submodularity and monotonicity, a similar bound holds for this set as well. This makes the profit of player i somewhat smaller compared to the third allocation scheme; nonetheless, he does not lose as much as if we removed from him a union of independently sampled sets for players in B (which would contain each element with probability 1 − e^{−1/2}, rather than (1 − e^{−1/2})(1 − e^{−1})). Here, we benefit from the negative correlation between the sets Y_i and Z_k.
Finally, contention is resolved within each group, which incurs an additional factor of 2(1 − e^{−1/2}). The overall expected profit of player i ∈ A is at least 0.645 α_i. The analysis for a player i ∈ B would be exactly the same, yielding expected profit at least 0.645 β_i.

A small improvement in the general case
Here we finally prove that a tiny (but constant) improvement over 1 − 1/e in the general case is possible. We have the Butterfly Rounding Procedure, which requires that players be divided into groups A, B with balanced interest in each item, namely Σ_{i∈A} y_ij = Σ_{i∈B} y_ij = 1/2, and we regard the values y_ij as infinitesimal. In fact, the analysis of the Butterfly Rounding Procedure is quite exact, provided that the values y_ij are not too large. Also, in this case we can argue that a random partition is likely to be approximately balanced. So, let's propose a variant of the Butterfly Rounding Procedure for the general case.
• Partition the players into groups A, B by assigning each player uniformly and independently to A or B. Define z_j = max(1/2, Σ_{i∈A} y_ij, Σ_{i∈B} y_ij).
• Consider the fractional solution as a probability distribution over subsets S for each player. Let X be an independently sampled random set, where item j is present with probability 1/(2z_j). We modify the probability distribution of each player by taking S ∩ X instead of S. This defines a probability distribution corresponding to a new fractional solution x̃_{i,S̃}, where x̃_{i,S̃} = Σ_{S,X: S∩X=S̃} x_{i,S} Pr[X]. Then we get ỹ_ij = Σ_{S̃: j∈S̃} x̃_{i,S̃} = Σ_{S,X: j∈S∩X} x_{i,S} Pr[X] = y_ij/(2z_j), so that Σ_{i∈A} ỹ_ij ≤ 1/2 and Σ_{i∈B} ỹ_ij ≤ 1/2.
• Then run Algorithm 5 on the fractional solution x̃_{i,S}.
Let's fix a very small ε > 0 and call an item "unbalanced" if some player requests it with probability y_ij > ε. We claim that Algorithm 5′ works well for fractional solutions where no item is unbalanced. This is true because then Σ_{i∈A} y_ij and Σ_{i∈B} y_ij are random variables well concentrated around (1/2) Σ_{i=1}^n y_ij; more precisely, their variance is Σ_i y_ij²/4 ≤ ε/4. Therefore, the expected amount by which either sum exceeds 1/2 is O(√ε). The way we obtain the new fractional solution x̃_{i,S} corresponds to a sampling procedure where each item remains with probability 1/(2z_j); therefore, the expected factor we lose here is 1 − O(√ε). Moreover, the analysis of Algorithm 5 (which assumed infinitesimal values of y_ij) is quite precise for such a fractional solution: for 0 < y_ij ≤ ε, all the estimates in the analysis of Algorithm 5 are precise up to a relative error of O(ε). The fact that the solution may be "sub-balanced" (Σ_{i∈A} ỹ_ij, Σ_{i∈B} ỹ_ij < 1/2) can only help. Accounting for the balancing step, we get a solution of expected value (0.645 − O(√ε)) LP.
If some items are unbalanced, then running Algorithm 5′ might present a problem. However, then we gain by running Algorithm 4. As in Section 2.5, we decide which algorithm to use based on the importance of unbalanced items in the fractional solution. Let U denote the set of unbalanced items, and let σ_ij denote the expected contribution of item j to player i. Then we distinguish two cases:
• If a non-negligible value comes from unbalanced items, e.g. Σ_i Σ_{j∈U} σ_ij ≥ ε · LP, then we use Algorithm 4. For each unbalanced item j ∈ U, since y_ij > ε for some i, Lemma 1.4 allocates the item to each player with conditional probability strictly better than 1 − 1/e. By Lemma 2.5, the expected value of our solution then exceeds (1 − 1/e) · LP by an amount depending only on ε.
• If the contribution of unbalanced items is negligible, Σ_i Σ_{j∈U} σ_ij < ε · LP, then let's remove the unbalanced items. This incurs a factor of (1 − ε) on the value of the fractional solution.
Then we run Algorithm 5′, which yields expected value at least (0.645 − O(√ε))(1 − ε) LP. For a very small ε > 0, one of the two algorithms beats 1 − 1/e by a positive constant amount. A rough estimate shows that we should choose ε ≈ 10^{−4} in order to keep 0.645 − O(√ε) above 1 − 1/e, and then the first case gives an improvement on the order of 10^{−12}. In Appendix B, we present a tighter analysis which yields an approximation factor of 1 − 1/e + 0.00007.

Hardness results
In this section we show that there is some constant ρ < 1 such that it is NP-hard to approximate the Submodular Welfare Problem within a ratio better than ρ. We shall present several proofs of this fact that differ from each other in the number of players involved and in the type of submodular functions involved.
Previously it was shown in [14] that the maximum submodular welfare problem in the value oracle model is hard to approximate within a ratio better than 1 − 1/e. In [14] the source of the hardness result is the complexity of individual utility functions: given k, it is already NP-hard to approximate within a ratio better than 1 − 1/e the maximum utility that a single player can derive by choosing at most k items (even if no other player exists). In particular, it is NP-hard for players to answer demand queries (in the construction of [14]). In contrast, we are interested in cases where the utility functions of individual players are simple and every player can easily answer any reasonable query about her utility function, including demand queries. Hence our hardness of approximation results highlight the difficulty of coordinating the wishes of different players, rather than the difficulty of a single player figuring out what she actually wants.
We shall be considering utility functions that come from one of the following three families.
1. Constant size functions. Every player is interested in only a constant number (say, t) of the items. All other items have 0 value for the player. Hence the utility function of a player can be explicitly given by a table with a constant number 2^t of entries. This allows every player to answer demand queries (and essentially any other type of query) efficiently, in constant time.
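To illustrate why constant size makes queries trivial, here is a hedged sketch of a player whose utility is a table over the 2^t subsets of her t relevant items. A demand query (given item prices, find a set maximizing utility minus total price) is answered by brute force in O(2^t) time. All names here are illustrative, not from the text.

```python
from itertools import combinations

class ConstantSizePlayer:
    def __init__(self, relevant, table):
        # relevant: the t items with nonzero value; table: utility of every
        # frozenset of relevant items (all other items add nothing).
        self.relevant = list(relevant)
        self.table = table

    def value(self, bundle):            # value query
        return self.table[frozenset(bundle) & frozenset(self.relevant)]

    def demand(self, prices):           # demand query: argmax w(S) - p(S)
        best, best_set = 0.0, frozenset()
        for r in range(len(self.relevant) + 1):
            for S in combinations(self.relevant, r):
                S = frozenset(S)
                u = self.table[S] - sum(prices.get(j, 0.0) for j in S)
                if u > best:
                    best, best_set = u, S
        return best_set

# Tiny example: t = 2 relevant items with a submodular table.
table = {frozenset(): 0, frozenset("a"): 2, frozenset("b"): 2,
         frozenset("ab"): 3}
player = ConstantSizePlayer("ab", table)
print(player.demand({"a": 0.5, "b": 1.5}))   # item b is too expensive: frozenset({'a'})
```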
2. Bounded. For some constant t, the utility function of a player can be represented in full by a table specifying the utility of those sets of items that have size at most t. The value of any other set T is w(T) = max_{S⊆T, |S|≤t} w(S), that is, the value of the most valuable bounded set that is fully contained in T. Hence, for every set T, its utility is determined by at most t items within the set, and all other items have zero marginal utility. However, unlike the case of constant size functions, the set of items that has zero marginal utility depends on the set T and is not universal for all choices of T. The number of entries in the table describing a bounded utility function is at most m^t, which is polynomial when t is constant. Clearly, both value queries and demand queries can be answered in polynomial time. However, as we shall see in Section 4.1, there are other types of queries that cannot be answered efficiently.
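A bounded utility function can likewise be sketched as a table over sets of size at most t, with value queries answered by scanning subsets of the queried set; the helper below is illustrative only.

```python
from itertools import combinations

def bounded_value(T, table, t):
    # w(T) = maximum utility of a subset S of T with |S| <= t;
    # sets missing from the table are treated as value 0.
    T = set(T)
    return max(table.get(frozenset(S), 0)
               for r in range(min(t, len(T)) + 1)
               for S in combinations(T, r))

# t = 2: only sets of size at most 2 are listed in the table.
table = {frozenset("a"): 2, frozenset("b"): 2, frozenset("ab"): 3}
print(bounded_value("ab", table, 2))    # 3
print(bounded_value("abcz", table, 2))  # still 3: extra items have zero marginal value
```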

3. Separable. The items can be partitioned into disjoint classes C_i of constant size. The utility of a set S is computed as w(S) = Σ_i w(S ∩ C_i). Namely, it is the sum of constant size functions over disjoint sets of items. Again, both value queries and demand queries can be answered in polynomial time.
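For separable functions, a demand query decomposes across the classes: since w(S) = Σ_i w(S ∩ C_i) and prices are additive over items, we can optimize within each class independently. A minimal sketch with illustrative names:

```python
from itertools import combinations

def separable_demand(classes, prices):
    """classes: list of (items, table) pairs, one constant-size function
    per class. Returns a utility-maximizing demand set, solved class by class."""
    demand = set()
    for items, table in classes:
        best, best_set = 0.0, ()
        for r in range(len(items) + 1):
            for S in combinations(items, r):
                u = table[frozenset(S)] - sum(prices[j] for j in S)
                if u > best:
                    best, best_set = u, S
        demand |= set(best_set)
    return demand

classes = [
    ("ab", {frozenset(): 0, frozenset("a"): 2, frozenset("b"): 2,
            frozenset("ab"): 3}),
    ("cd", {frozenset(): 0, frozenset("c"): 1, frozenset("d"): 1,
            frozenset("cd"): 2}),
]
prices = {"a": 0.5, "b": 1.5, "c": 0.2, "d": 0.3}
print(separable_demand(classes, prices))   # {'a', 'c', 'd'}
```

The running time is polynomial because each class has constant size, so each inner enumeration is O(2^t).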
In the following theorem we state our main results for the three families of utility functions that are described above.
Theorem 4.1.There is some constant ρ < 1 such that it is NP-hard to approximate the maximum welfare problem within a ratio better than ρ in the following cases.
1. When all players have constant size utility functions. Specifically, we will show a proof with t = 15.

2. When all players have the same utility function and it is bounded. In this case we also present explicit values for ρ; specifically, we shall show a proof with t = 7 and ρ ≈ 0.9964.

3. When there are only two players and their utility functions are separable.
We remark that in case 2 of Theorem 4.1 we could not have used constant size utility functions as in case 1, because when all players have the same utility function and it is of constant size, the Submodular Welfare Problem can be solved in polynomial time. Likewise, in case 3 we could not have used bounded utility functions as in case 2, because when there are only a constant number of players and the utility functions are bounded, the Submodular Welfare Problem can be solved in polynomial time.
An interesting feature of our constructions (as well as of the results of [14]) is that they have perfect completeness. On positive instances, there is no conflict between the players, in the sense that there is an allocation that gives every player the maximum utility she could have obtained had there been no other players. (Remark: perfect completeness does not necessarily require all items to be allocated, as some items that have zero marginal value may remain unallocated.) Our proofs show that it is NP-hard to distinguish such instances from instances in which only a ρ < 1 fraction of the welfare can be recovered.
The rest of the section is organized as follows. Section 4.1 is somewhat of a digression. It presents an incentive compatible mechanism that achieves maximum welfare when all players have the same utility function. This mechanism highlights the significance of constant size utility functions. Section 4.2 shows our methodology for achieving the submodularity property for utility functions. Section 4.3 proves part 1 of Theorem 4.1. Section 4.4 proves part 2 of Theorem 4.1. We defer part 3 to the appendix. Appendix D.1 reviews the hardness of approximation result of [7] for max k-cover and the hardness of approximation result of [4] for maximum welfare with XOS utility functions. Appendix D.2 presents a new hardness of approximation result for maximum XOS welfare, but this time with constant size utility functions. The motivation for such a result is explained in Section 4.1. Appendix D.3 proves part 3 of Theorem 4.1, based on special properties of known hardness results for max k-cover.

Fair division queries
Inspired by methods for the fair division of goods (sometimes referred to as cake cutting theorems, see [1] for example), we propose the following allocation algorithm for the maximum welfare problem. The version of the algorithm presented here is most useful when all players have exactly the same utility function. This is an important special case, and may arise naturally when players have the ability to resell items that are allocated to them. We make no assumptions about the nature of the utility function, and in particular, it need not be submodular.
1. Initialization. Let P denote the set of players, and let S denote the set of all items.
2. If S is empty, allocate no items to the players and end.
3. Pick an arbitrary player p ∈ P.
4. Fair division query: ask player p to partition S into |P| disjoint parts S_1, . . ., S_|P|.
5. Pick one part (say, S_i) at random and allocate the items in it to player p. Remove p from P, remove S_i from S, and return to step 2.
The above algorithm is incentive compatible in the following sense. If a player wishes to maximize her expected welfare, then her answer to a fair division query must maximize the sum of utilities of the parts in the partition. We say that a player is truthful in expectation if indeed she follows such a strategy. Now it easily follows that if all players have the same utility function, and all players are truthful in expectation, then the above algorithm actually produces the maximum welfare (regardless of whether the utility function is submodular or not). This should be contrasted with the fact that the published hardness of approximation results (e.g., within ratio of 1 − 1/e for XOS utility functions [4], within ratio of 1/2 for subadditive utility functions, see for example [8], or within ratio of roughly 1/√m for general utility functions) are all proved for cases in which all players have the same utility functions. In all these proofs, the utility functions are simple in the sense that they allow the players to efficiently answer demand queries, but clearly, as our algorithm above implies, the utility functions are still too complicated to allow players to answer fair division queries.
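The mechanism and the truthfulness claim can be simulated on a toy instance: a truthful player answers the fair division query with a sum-maximizing partition (found here by brute force), and when all players share the same utility function the realized welfare always equals the optimum. The coverage utility below is an illustrative choice, not from the text.

```python
from itertools import product
import random

UNIVERSE = {"a": {1, 2}, "b": {2, 3}, "c": {3, 4}, "d": {1, 4}}

def w(S):
    # A shared (submodular) coverage utility: number of covered elements.
    return len(set().union(*(UNIVERSE[j] for j in S))) if S else 0

def best_partition(items, parts):
    # Truthful answer to a fair division query: maximize the sum of part values.
    items = sorted(items)
    best, best_assign = -1, None
    for assign in product(range(parts), repeat=len(items)):
        groups = [frozenset(j for j, g in zip(items, assign) if g == i)
                  for i in range(parts)]
        total = sum(w(g) for g in groups)
        if total > best:
            best, best_assign = total, groups
    return best_assign

def run_mechanism(n_players, items, rng):
    welfare, remaining = 0, set(items)
    for players_left in range(n_players, 0, -1):
        if not remaining:
            break
        parts = best_partition(remaining, players_left)  # fair division query
        part = rng.choice(parts)                         # random part for this player
        welfare += w(part)
        remaining -= part
    return welfare

rng = random.Random(0)
opt = sum(w(g) for g in best_partition(UNIVERSE, 2))
assert all(run_mechanism(2, UNIVERSE, rng) == opt for _ in range(20))
print(opt)   # 8
```

The assertion illustrates the claim in the text: with identical utilities and truthful answers, every random outcome of the mechanism attains the maximum welfare.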
Based on the above, one may argue that existing hardness of approximation results involve players who do not fully understand their own utility function, in the sense that there are queries that the player has incentive to answer truthfully, but finds it NP-hard to answer them. On the other hand, one may argue that it is easy to modify the existing hardness of approximation results so that not all players have the same utility functions (and indeed this is what we shall do...), and then the algorithm presented above would not necessarily provide a good approximation ratio. Taking these arguments into account, the points that we wish to make here are the following.
1. We showed an explicit example where the query model allows one to solve NP-hard allocation problems by transferring the computational difficulties to the players.
2. In some existing NP-hardness results, it is hard to distinguish whether the hardness is a consequence of the nature of individual utility functions, or whether it is the consequence of interplay between several simple utility functions.
It is our belief that the use of constant size utility functions is the most natural way to address the two points above. It is hard to imagine any reasonable query that would be difficult to answer regarding such a utility function, and hence queries will not transfer the computational burden to the players. Likewise, hardness results proved when utility functions have constant size are more obviously a consequence of there being multiple players, rather than a consequence of the difficulty of reasoning about a single utility function.

Simple submodular extensions
A recurring theme in our proofs is that for each player, there will be certain sets of items that are special. The special sets will typically all be of the same size. If a player gets one of her special sets, her utility is maximized. On positive instances, there will be an allocation in which every player gets a special set. On negative instances, for every allocation some players do not get a respective special set, due to conflicts with other players. To prove a hardness of approximation result, we need to define the utility that a player gets out of sets that are not special. Hence one needs to extend the utility function, originally defined only on the special sets, to all sets. This needs to be done while maintaining the following properties:
1. The resulting utility function is submodular.
2. The resulting utility function is "simple" in the sense that one can efficiently answer demand queries.
3. The utility per item of nonspecial sets is "significantly" lower than that of special sets.
If one replaces the requirement that utility functions are submodular by the weaker requirement that utility functions are fractionally subadditive (or equivalently, XOS, see [8]), then one can use the following simple XOS extension. The value of a set T is max[|T ∩ S|], where S ranges over all special sets. That is, special sets have maximum value and every item in them contributes 1 towards this value. Other sets have a value that depends on the maximum size of their intersection with a special set. Those items in the intersection may be thought of as having value 1, and the other items may be thought of as having 0 marginal value. It is not hard to see that the simple XOS extension does not result in a submodular utility function.
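This extension is easy to state in code. The following sketch (our own code and naming, not from the paper; for brevity the example uses special sets of unequal sizes, while the paper's special sets share a common size) also exhibits the failure of submodularity just mentioned:

```python
# Simple XOS extension: the value of a set T is max over special sets S of |T ∩ S|.

def xos_value(T, special_sets):
    """Value of T under the simple XOS extension of special_sets."""
    T = frozenset(T)
    return max(len(T & frozenset(S)) for S in special_sets)

specials = [{0, 1, 2}, {3}]
# A special set has full value; other sets are valued by their best overlap.
assert xos_value({0, 1, 2}, specials) == 3
assert xos_value({0, 1, 4}, specials) == 2

# Submodularity fails: the marginal value of item 0 is larger with respect
# to the superset {1, 3} than with respect to its subset {3}.
A, B, j = {3}, {1, 3}, 0
marg_A = xos_value(A | {j}, specials) - xos_value(A, specials)  # 1 - 1 = 0
marg_B = xos_value(B | {j}, specials) - xos_value(B, specials)  # 2 - 1 = 1
assert marg_B > marg_A
```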
Achieving the above three properties when utility functions are submodular has been the stumbling block in extending known hardness of approximation results from more general classes of utility functions (e.g., the 1 − 1/e hardness result for XOS utility functions [4]) to the case of submodular utility functions. The key observation that allows us to achieve this in our work is the fact that we need to consider only sets of items of size bounded by t. Specifically, we use the following approach.
Let {S_i} be a collection of special sets, all of size b. Let w be a partially defined utility function, giving utility b to every special set. We define the simple submodular extension of w as follows:
1. For every set S with |S| < b, w(S) = |S|.
2. For every set S with |S| > b, w(S) = b.
3. For every set S with |S| = b, w(S) = b if S is a special set, and w(S) = b − 1/2 otherwise.
The three properties above are satisfied. It is not hard to see that w as above is indeed submodular. When b is constant, then w is also "simple". And nonspecial sets of size b have value a factor of (1 − 1/(2b)) lower than that of the special sets, which is significant when b is constant.
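Submodularity of this extension can be sanity checked by brute force on a small ground set. The sketch below (our own code and naming, not from the paper) implements the extension and exhaustively verifies monotone decreasing marginal values:

```python
from itertools import combinations

def submod_ext(T, special_sets, b):
    """Simple submodular extension: |T| if |T| < b; b if |T| > b;
    for |T| == b: b if T is special, b - 1/2 otherwise."""
    T = frozenset(T)
    if len(T) < b:
        return len(T)
    if len(T) > b:
        return b
    return b if T in special_sets else b - 0.5

def is_submodular(w, ground):
    """Brute-force check: marginal values never increase under supersets."""
    subsets = [frozenset(c) for r in range(len(ground) + 1)
               for c in combinations(ground, r)]
    for A in subsets:
        for B in subsets:
            if A <= B:
                for j in ground:
                    if j not in B:
                        if w(A | {j}) - w(A) < w(B | {j}) - w(B) - 1e-9:
                            return False
    return True

b = 3
specials = {frozenset({0, 1, 2}), frozenset({2, 3, 4})}
w = lambda T: submod_ext(T, specials, b)
assert w({0, 1, 2}) == 3      # special set gets full value b
assert w({0, 1, 3}) == 2.5    # nonspecial b-set loses 1/2
assert is_submodular(w, set(range(5)))
```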
We now explain our convention of how to apply the simple submodular extension to each one of the three families of utility functions that we consider.
1. Constant size. We will be interested in cases where special sets have size b, and all but t items have 0 value. In this case only t items participate in the extension. The remaining items have 0 value also in the extension.

2. Bounded. We will be interested in the case in which all special sets have size b, and then we set t = b + 1. Only the first t items in a set have marginal value, because by then the value of a set (after the extension) is b, which is the maximum value a set may have.

3. Separable. We will take the simple submodular extension on each class separately (as done for constant size functions), and then the submodular utility function will be the sum of the utility functions over all classes.

Constant size utility functions
Here we prove part 1 of Theorem 4.1. (An alternative proof will also appear in Section D.2.) Our proof will be via a reduction from the problem max 3-coloring-5. Specifically, we shall use the following NP-hardness result, whose proof appears in [6].

Lemma 4.2. There is some ε > 0 such that given a 5-regular graph, it is NP-hard to distinguish between the case that it can be legally 3-colored, and the case in which every 3-coloring of its vertices leaves an ε fraction of its edges illegally colored (both endpoints have the same color).
Given a 5-regular graph with n vertices and m edges (hence 2m = 5n) we reduce it to the following submodular welfare maximization problem. With every edge e we associate three items, e_1, e_2 and e_3, corresponding to the three "colors" {1, 2, 3}. Hence there are 3m items. There will be m edge players, one for every edge, and n vertex players, one for every vertex. The utility function of the player p_e who is associated with edge e gives the player utility 1 if she receives at least one of the three items associated with the edge, and utility 0 otherwise. Hence it is a constant size submodular utility function. The utility function of the player p_v who is associated with vertex v will be nonzero on 15 items, and will have three special sets of size 5: the set of all 5 items of color 1 associated with edges incident with v, and likewise for colors 2 and 3. The utility function of p_v is the simple submodular extension (as described in Section 4.2) of this function.
On positive instances, we can legally 3-color the graph. Then each vertex player gets the five items associated with her chosen color, giving her utility 5. Each edge player can get the item not allocated to the two players at the endpoints of the edge, giving her utility 1. Altogether, the total welfare is 3m (all items are allocated and give utility 1 per item), and all players are maximally happy.
On negative instances, we use the following analysis. Without loss of generality, we may assume that every edge player gets one item (because then the item contributes marginal utility 1, and there is no way by which it can contribute larger marginal utility). Likewise, it is not hard to see that we may assume that every vertex player gets exactly 5 items, one from every incident edge (otherwise a shifting argument would yield an allocation with at least as much welfare). We distinguish between "legally colored vertices", for which all allocated items have the same color, and "illegally colored vertices", for which not all allocated items have the same color. The number of illegally colored vertices is at least εm/3 (in a coloring that maximizes the number of legally colored edges, every vertex is incident with at most 3 illegally colored edges, by giving it the majority color of its neighbors), and every illegally colored vertex has utility 9/2 rather than 5 (by the simple submodular extension). Hence the maximum welfare is at most 3m − εm/6, showing that Submodular Welfare cannot be approximated within a factor better than ρ = 1 − ε/18.
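The utilities in this accounting can be checked concretely. The following sketch (our own code and naming, not from the paper; b = 5, items encoded as (edge, color) pairs) evaluates a vertex player's simple submodular extension on a monochromatic allocation and on a mixed one:

```python
def w_vertex(T, specials, b=5):
    """Simple submodular extension restricted to a vertex player's 15
    relevant items (items outside the special sets' union have 0 value)."""
    ground = frozenset().union(*specials)
    T = frozenset(T) & ground
    if len(T) < b:
        return len(T)
    if len(T) > b:
        return b
    return b if T in specials else b - 0.5

# Five incident edges e = 0..4; items are pairs (e, c) for colors c in {1,2,3}.
edges = range(5)
specials = {frozenset((e, c) for e in edges) for c in (1, 2, 3)}

monochromatic = {(e, 1) for e in edges}                # legally colored vertex
mixed = {(0, 2)} | {(e, 1) for e in edges if e != 0}   # one off-color item
assert w_vertex(monochromatic, specials) == 5
assert w_vertex(mixed, specials) == 4.5                # the 9/2 in the text
```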
We remark that rather than using the simple submodular extension, one may use a different submodular extension that gives a value somewhat better than 1 − ε/18 for ρ. Details are omitted.

Bounded utility functions
Here we prove part 2 of Theorem 4.1. Our proof is by reduction from the problem of finding a maximum matching in k-uniform hypergraphs. Specifically, we choose the value k = 6 because then we can use as a black box the following result of [12]. Recall that a matching in a hypergraph is a collection of hyperedges that do not intersect.
Lemma 4.3. For every ε > 0, given a 6-uniform hypergraph with n vertices, it is NP-hard to distinguish between the case that it has a matching covering at least (1 − ε)n vertices, and the case in which every matching covers at most 22n/23 vertices.
Given an instance of 6-uniform hypergraph matching, we reduce it to an instance of Submodular Welfare as follows. The vertices of the graph are the items. There are (1 − ε)n/6 players. All players have exactly the same utility function, and it is bounded. The hyperedges of the graph are the special sets of the utility function. Hence in its simple submodular extension (as in Section 4.2), sets corresponding to hyperedges have value 6, other sets of size 6 have value 5.5, and sets of size 7 or more have value 6.
On positive instances, every player gets a hyperedge of the maximum matching, giving utility (1 − ε)n, and every player is maximally happy. On negative instances, it is not hard to see that the best allocation will give 22n/(23 · 6) players a hyperedge (giving utility 22n/23), εn/6 players will get 7 vertices each (giving utility εn), and (1/23 − 2ε)n/6 players will get 6 vertices that do not form a hyperedge (giving utility 5.5 per player). When ε tends to 0, the ratio between the negative and positive instances tends to 1 − 1/(12 · 23) ≈ 0.9964.
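The limiting ratio can be verified with exact rational arithmetic (a sketch under our reading of the allocation counts above, taking ε → 0, utilities measured per item):

```python
from fractions import Fraction as F

n_players = F(1, 6)        # players per item: (1 - eps)n/6 with eps -> 0
matched = F(22, 23 * 6)    # players receiving a hyperedge, worth 6 each
unmatched = n_players - matched   # players with a nonspecial 6-set, worth 5.5

neg_welfare = 6 * matched + F(11, 2) * unmatched   # negative instances
ratio = neg_welfare / 1                            # positive welfare -> n
assert unmatched == F(1, 138)
assert ratio == F(275, 276) == 1 - F(1, 12 * 23)   # ~0.9964, as in the text
```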

A 13/17-approximation for 2 players
Here we present our best algorithm for 2 players, achieving an approximation ratio of 13/17. We refer to Sections 2.4 and 2.5, where we developed rounding techniques for balanced and unbalanced fractional solutions. For balanced items (p_j = 1/2) Algorithm 1 gives factor 3/4, while Algorithm 2 gives factor 37/48. On the other hand, observe that items which are extremely unbalanced (p_j → 0) are recovered by Algorithm 1 with probability almost 1, whereas Algorithm 2 recovers them with probability only 2/3. The best we can hope for (by combining these two algorithms) is to take a convex combination which optimizes the minimum of these two cases. This convex combination takes Algorithm 1 with probability 5/17 and Algorithm 2 with probability 12/17, which yields a factor of 13/17 in the two extreme cases. We show that this is indeed possible in the entire range of probabilities p_j. Since Algorithm 2 turns out to favor the player whose share (p_j or q_j) is higher than 1/2, we offset this advantage by generating a splitting set X which gives more advantage to the player with a smaller share.
Algorithm 3. (13/17-approximation for 2 players)
• Let player 1 sample independently random sets S, S′.
• Let player 2 sample independently random sets T, T′.
• Generate independently a random set X containing item j with probability φ(p_j) = f(p_j) for p_j ≤ 1/2, or with probability φ(p_j) = 1 − f(1 − p_j) for p_j > 1/2.
We assign items randomly based on X, as analyzed in the following theorem.
Theorem A.1. For 2 players with an arbitrary fractional solution, Algorithm 3 yields expected profit at least (13/17) Σ_S x_{i,S} w_i(S) for player i.
Proof. By definition, we have E[w_1(S)] = E[w_1(S′)] = Σ_j σ_j and E[w_2(T)] = E[w_2(T′)] = Σ_j τ_j. Now consider the first allocation scheme. The definition of the set X is symmetric in the sense that φ(1 − p_j) = 1 − φ(p_j), i.e., if p_j + q_j = 1, the set is generated equivalently from the point of view of either player. Since item j appears in T ∩ X with probability q_j φ(p_j) ≤ (1 − p_j)φ(p_j), and in S ∩ X with probability p_j(1 − φ(p_j)) = p_j φ(1 − p_j), Lemma 2.5 implies the claimed bound for this scheme. To estimate the combined profit of the remaining allocation schemes, we use submodularity and Lemma 2.5. We show that the last sum in the resulting expression for the total expected profit of player 1 is nonnegative. It can be verified that the function f is increasing and convex on the interval (0, 1). Also, f(1/2) = 1/2 and by convexity f(p_j) + f(1 − p_j) ≥ 1. We have φ(p_j) = f(p_j) for p_j ∈ [0, 1/2] and φ(p_j) = 1 − f(1 − p_j) for p_j ∈ [1/2, 1]; i.e., φ(p_j) ≤ f(p_j) for any p_j ∈ (0, 1). Consequently, φ(p_j)(1 − p_j)(9 − 4p_j(1 − p_j)) ≤ 4p_j, and so player 1 gets expected profit at least (13/17) Σ_j σ_j. For player 2, the corresponding expression is just like the expression for player 1 after substituting p_j → 1 − p_j. The same analysis gives that the total expected profit of player 2 is at least (13/17) Σ_j τ_j.
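The choice of mixing probabilities can be checked with exact arithmetic: equalizing the two extreme cases of Algorithms 1 and 2 forces the weights 5/17 and 12/17 and the value 13/17 (a sketch using only the four guarantees quoted above):

```python
from fractions import Fraction as F

# Guarantees of the two rounding algorithms at the two extreme cases
# (balanced items p_j = 1/2, and extremely unbalanced items p_j -> 0):
alg1 = {"balanced": F(3, 4), "unbalanced": F(1, 1)}
alg2 = {"balanced": F(37, 48), "unbalanced": F(2, 3)}

a = F(5, 17)   # probability of running Algorithm 1
mix = {case: a * alg1[case] + (1 - a) * alg2[case] for case in alg1}

# The mixture equalizes the two extremes at exactly 13/17.
assert mix["balanced"] == mix["unbalanced"] == F(13, 17)
```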

B Our best algorithm for n players
Here we return to Section 3.3, where we showed a (1 − 1/e + ε)-approximation for n players and some small fixed ε > 0. Let us analyze Algorithm 5' more precisely. For each item j, we measure the "granularity" of the fractional solution by Σ_{i=1}^n y_{ij}^2, which could range from 1/n (for a perfectly balanced solution) to 1 (when only one player requests item j). We show later that the analysis of Algorithm 5 can be carried through for any balanced fractional solution, with error terms depending on the granularity of the solution. But first, let us look at the balancing procedure.
Lemma B.1.Let player i have a distribution over sets S i with expected value E[w i (S i )] = j σ ij where Then after the balancing procedure executed for a given partitioning of players, the new marginal values for player i are such that averaging over random partitions, Proof.Conditioned on a specific partition (A, B), the balancing procedure gives to player i a random set S i ∩ X where Pr[j ∈ X] = 1/(2z j ).The proof of Lemma 2.5 implies that the new marginal values will be at least σij ≥ σ ij /(2z j ).Averaging over random partitions (A, B), and using .
To estimate E[z j ], we use the second moment.Let Y ij be independent random variables that take values 0 or y ij with probability 1/2, corresponding to player i being assigned to group A or B. We have Thus, we are likely to get a good balanced solution for fractional solutions with low granularity.Now we apply Algorithm 5 to this modified fractional solution, and estimate the expected profit.In the analysis, we need the following bounds.Lemma B.2.For any y 1j , . . ., y nj ≥ 0 such that n i=1 y ij ≤ 1, Proof.By taking logs and Taylor's series, To compare the first two series, we simply note that and so the first inequality in the lemma follows.
To compare the first and the third series, observe that the first two terms match, while starting from the third we have to compare i y k ij with ( i y 2 ij ) k−1 .We use Hölder's inequality ( Using n i=1 y ij ≤ 1, we get i y k ij ≥ ( i y 2 ij ) k−1 , and therefore the second inequality holds as well.
Lemma B.3.For n players with an arbitrary fractional solution, the balancing procedure followed by Algorithm 5 gives player i expected profit at least 0.645 Proof.Let player i have a fractional solution of value E[w i (S i )] = j σ ij .We proved that after the balancing procedure, this solution is modified to a new (random) one with marginal values σij where ).We denote the new fractional solution by xi,S and ỹij = S:j∈S xi,S .Now we apply Algorithm 5. We have to go through the analysis of Algorithm 5 more carefully, keeping in mind that the solution has finite granularity.We need to be especially cautious about estimates like ( here, the inequality goes the right way.In the second allocation scheme, player i ∈ A receives 2 ) for i∈A ỹij ≤ 1/2, regardless of granularity.
In the third allocation scheme, player i ∈ A receives which is valid without any change, since the inequality goes the right way.
In the fourth allocation scheme, we have Observe that compared to the proof of Theorem 3.1, some terms get the error factor (1− i ỹ2 ij ).By taking the appropriate linear combination of the four allocation schemes, player i obtains expected profit at least This is for a fixed balanced solution with marginal values σij .For a random partition (A, B) and the associated balanced solution, we have . Also, observe that by the balancing procedure, granularity can only decrease: n i=1 ỹ2 ij ≤ n i=1 y 2 ij , so player i gets at least 0.645 This procedure works well for fractional solutions of low granularity.On the other hand, if the granularity is not low, then Algorithm 4 performs better than 1 − 1/e.A combination of the two is our final algorithm.Proof.Consider a fractional solution with σ ij and y ij defined as before.By Lemma 1.4 and B.2, the first allocation scheme gives each player a set S * i of expected value By Lemma B.3, the balancing procedure followed by Algorithm 4 yields an allocation where player i has expected profit at least 0.645 A numerical analysis shows that for any i y 2 ij ∈ [0, 1] and therefore the overall expected profit of player i is at least We can assume that e − P i=1 k i /n ≥ , otherwise (2) is trivial.Also, we assumed k > 4 C , so the RHS increases by at least Meanwhile, the LHS increases by which proves (2).Now we estimate OP T 2 .The constraints imply that when we order the y ij 's in a decreasing sequence, the sum of the m largest ones is bounded by n(1 − e −m/n ).We claim that the optimum is attained when this bound is tight for any m, i.e. 
the m-th largest y_ij is equal to n(e^{−(m−1)/n} − e^{−m/n}) ≐ e^{−m/n}. This can be seen by considering the first y_ij which, being the m-th largest, violates this condition and is smaller than e^{−m/n}. Denote by P_m = Π_{j′≠j} (1 − y_{ij′}) the product for the remaining entries in the i-th row, apart from y_ij. Similarly, let y_kl be the (m+1)-th largest entry and denote by P_{m+1} = Π_{l′≠l} (1 − y_{kl′}) the product for the remaining entries in its row. If P_m > P_{m+1} then we can switch y_ij and y_kl and decrease the sum of row products Π_j (1 − y_ij) + Π_l (1 − y_kl). If P_m ≤ P_{m+1} then we can increase y_ij slightly and decrease y_kl by the same amount, which again decreases the average of the two row products. In both cases, we increase the objective function.
Thus we know exactly the optimal sequence of entries y_ij: the m-th largest one is roughly e^{−m/n}. We only have to find their optimal placement in the n × n matrix (y_ij). Since the product Π_{i,j} (1 − y_ij) is fixed, and we minimize the sum of the row products Σ_i Π_j (1 − y_ij), we try to make these products as uniform as possible. We claim that Σ_{i=1}^n Π_{j=1}^n (1 − y_ij) is minimized under these conditions when
• There is m such that the first m rows contain y_{i1} ≐ e^{−i/n}, and y_ij ≐ 0 for j > 1 (i.e., the smallest possible entries are placed here).
• The remaining entries are distributed in the remaining n − m rows so that their products are as uniform as possible.
To see this, denote by M the rows containing the m largest entries (it is easy to see that these should be in m distinct rows). Any of these rows has a product Π_{j=1}^n (1 − y_ij) ≤ 1 − e^{−m/n} = ρ. Outside of M, consider the row with the largest product Π_{j=1}^n (1 − y_ij). By averaging, this must be larger than ρ. If there is any entry in it smaller than some entry in M, switch these two entries and gain in the objective function. Therefore, all the smallest entries must be next to the m largest entries in M.
Outside of M, the optimal way is to make the products Π_{j=1}^n (1 − y_ij) as uniform as possible, since we are minimizing their sum. So we make them all approximately equal to ρ. The value of m is chosen so that the last row in M has a product approximately equal to ρ as well.
We find the approximate solution by taking logs and replacing the sum by an integral. We substitute m = µn and r = xn. The numerical solution of the resulting equation is µ ≐ 0.292. Substituting the solution of (3) into the value of our optimal solution yields OPT_2 ≐ 0.782n.

D.1 Review of hardness for Max k-cover
Recall that in the max k-cover problem, we are given a collection S of sets and a parameter k, and the objective is to choose k sets whose union covers the maximum number of items. A straightforward greedy algorithm (or alternatively, use of a linear programming relaxation) achieves an approximation ratio of 1 − 1/e ≈ 0.632 for this problem. In [7] it is shown that for every ε > 0 it is NP-hard to distinguish between the case that there are k sets that disjointly cover all items, and the case in which every collection of k sets covers only a 1 − 1/e + ε fraction of the items. Hence it is NP-hard to approximate the max k-cover problem within a ratio better than 1 − 1/e + ε.
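The greedy algorithm mentioned above is only a few lines; the following sketch (our own code, not from the paper) picks, k times, the set covering the most still-uncovered items:

```python
def greedy_k_cover(sets, k):
    """Greedy (1 - 1/e)-approximation for max k-cover: repeatedly pick the
    set that covers the most still-uncovered items."""
    chosen, covered = [], set()
    for _ in range(k):
        best = max(sets, key=lambda S: len(set(S) - covered))
        chosen.append(best)
        covered |= set(best)
    return chosen, covered

sets = [{1, 2, 3}, {3, 4}, {4, 5, 6}, {1, 6}]
chosen, covered = greedy_k_cover(sets, 2)
assert len(covered) == 6   # here greedy happens to find the optimal cover
```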
Let us now recall how the hardness of max k-cover is used in [4] to show hardness for the maximum welfare problem with XOS utility functions. There are k players. They all have the same utility function. In this utility function, the special sets are exactly those in the collection S of sets of the max k-cover problem, and the utility function is the simple XOS extension of this set system. On positive instances, allocating to the k players the k sets of the optimal solution gives utility 1 per item. On negative instances, the k players may get k arbitrary sets (not necessarily from the collection S). For each such set consider its maximum intersection with a set S ∈ S. The union of all these maximum intersections covers at most a 1 − 1/e + ε fraction of the items, which for the simple XOS extension means that the utility per item is only 1 − 1/e + ε on average. The utility function based on the simple XOS extension is simple in the sense that a player can answer demand queries efficiently. However, it is NP-hard to answer fair division queries. This can be seen either directly, as these queries amount to solving the max k-cover problem, or indirectly, as an implication of the algorithm of Section 4.1.
We now show how one may obtain a 1 − 1/e + ε hardness of approximation result for the maximum welfare problem when utility functions are XOS and of constant size. We shall use certain properties of the set systems constructed in [7]. Unfortunately, there is no explicit theorem in [7] that lists all these properties. We do not wish to reproduce the rather lengthy proof of [7] in detail here. Hence we shall only list the properties that we use, together with some clearly marked "hints" regarding why the construction of [7] has these properties.
Fix an arbitrarily small value of ε > 0. The polynomial time reduction from max 3SAT-5 to max k-cover described in [7] has the following properties.
1. It constructs a set system. The set system can be viewed as being composed of groups of sets, where each set belongs to exactly one group. (Hint: recall that each set in [7] corresponds to a possible answer of a prover to a possible query. Those sets that correspond to the possible answers of the same prover to the same query will form a group.) The number of groups will be denoted here by k (though in [7] it is denoted by k′, and k denotes there the number of provers).
2. The number of sets in a group is bounded above by some constant that depends only on ε. (Hint: this is a consequence of the fact that to prove hardness of approximation for max k-cover it suffices to have a constant number of parallel repetitions.)

3. All sets are exactly of the same size. (Hint: this uses the strong regularity properties of the construction, such as the fact that we use max 3SAT-5 rather than just max 3SAT, the use of Hadamard codes in the multi-prover system described in Section 2.3 in [7], and the partition system described in the proof of Theorem 5.3 in [7].) Likewise, all groups contain exactly the same number of sets (though we do not need to use this fact).

4. The size of every set can be bounded from above by some constant that depends only on ε. (Hint: since the number of parallel repetitions is constant, each query to a prover can be completed in only a constant number of ways to queries to the other provers. Moreover, one can choose the number of points in every partition system to be a large constant.)

5. For "yes" instances of max 3SAT-5, all points can be covered by k mutually disjoint sets, one from every group.

6. For "no" instances of max 3SAT-5, every collection of k sets covers at most a fraction of (1 − 1/e + ε) of the points.

D.2 From max k-cover to maximum welfare
We now interpret instances of max k-cover as described in Section D.1 as instances of the maximum welfare problem.
The points in the set system will be the items. There will be k players, one for every group of sets. Consider a player p that is associated with a group g_p of sets. Let s be the size of a single set (recall that all sets have the same size and that this size is a constant that depends only on ε). Let ℓ be the number of sets in a group (recall that all groups have the same number of sets and that this number is a constant that depends only on ε). Let us define t = sℓ, and observe that t is a constant that depends only on ε. Let G_p = ∪_{S∈g_p} S be the union of all sets in g_p, and observe that |G_p| ≤ t. Using the above notation, we describe the utility function w_p of player p. This function will be bounded above by 1 (and below by 0).
Only points in G_p will have nonzero utility for player p. Hence to completely specify w_p it suffices to specify w_p(S) only for those sets S ⊂ G_p. As there are at most 2^t such sets S, the utility function w_p can be described in full by an explicit table with a constant number of entries.
We call the sets in g_p the special sets; they each have s items and utility s. We first address the case of XOS utility functions. In this case, the utility function w_p of player p is the simple XOS extension of the special sets. Equivalently, for every set T we have that w_p(T) = max_{S∈g_p} [|T ∩ S|]. As in the case of [4], this reduction gives hardness of approximation within a ratio of 1 − 1/e + ε, but the advantage of our reduction is that each of the utility functions is of constant size.
For the case of submodular utility functions, we can instead take the simple submodular extension. Specifically, this gives: w_p(T) = |T ∩ G_p| if |T ∩ G_p| < s; w_p(T) = s if |T ∩ G_p| > s; and for |T ∩ G_p| = s, w_p(T) = s if T ∩ G_p ∈ g_p and w_p(T) = s − 1/2 otherwise. On positive instances, it is again true that every player can get utility s. To analyze negative instances, it suffices to observe the following property: the fraction of players p for which the allocated set T_p fully contains some set S ∈ g_p is at most (1 − 1/e + ε). (In fact, it is much smaller, but this is not needed here.) It is not hard to see that at best the other players get average utility s − 1/2. Hence the maximum welfare in this case gives average utility at most s − (1/e − ε)/2 per player, and hence it is NP-hard to approximate Submodular Welfare within a ratio better than 1 − (1/e − ε)/2s. Observe that this ratio is bounded away from 1 (but not by much) because s is some constant.
We remark that using additional properties of the reduction of [7] (on negative instances, the fraction of players that get a special set is in fact arbitrarily small), one can show hardness of approximation within a ratio of 1/2 + ε when utility functions are subadditive. The following subadditive utility function can be used: special sets have value 2 to the respective player, and other sets (that do not contain a special set) have value 1.

D.3 Two players
In this section we prove the third part of Theorem 4.1. Our proof is again based on a reduction from max k-cover, and again we will use the special properties of the reduction as described in Section D.1. However, we shall use a simpler version of the proof of [7]. Namely, rather than use a k-prover proof system (which was perhaps the central aspect in which [7] improves over the previous reduction of [16]), we shall limit the number of provers to two (as in the proof in [16]). This has the effect that the hardness of approximation ratio for max k-cover becomes 3/4 + ε rather than 1 − 1/e + ε, which is not of significant importance in our context. More importantly, this allows the reduction to have the following property (in addition to the properties listed in Section D.1).
• The groups can be partitioned into two collections. Within a collection, all groups are disjoint in the sense that two sets in different groups within the same collection cannot share an item.
(Hint: one collection corresponds to answers of one prover, the other collection corresponds to answers of the other prover.) In the reduction to Submodular Welfare with two players, each player will be associated with one collection of groups. Its utility function will be the separable function defined on the collection, where each group serves as a constant size utility function as in Section D.2, and the total utility is the sum of utilities over groups in the collection. We shall use a simple submodular extension as explained in Section 4.2. In a sense, a player is simply the union of k/2 players from Section D.2. The analysis of the hardness of approximation result is essentially as in Section D.2, except that 1/e changes to 1/4.

Figure 3: Diagonal sets Y and Z.

Lemma 2.5. Fix an ordering of items [m] = {1, 2, . . ., m}; we denote by [j] = {1, 2, . . ., j} the first j items in this ordering. Let S and X be random subsets of [m] such that, conditioned on any S, Pr[j ∈ X | S] ≥ p_j. Let w be a monotone submodular function and define

3. If |P| = 1, allocate all of S to the player and end.
4. Pick an arbitrary player p ∈ P and ask her to answer the following fair division query: partition S into |P| parts (some of which may be empty). Denote the reply of p by S_1, . . ., S_{|P|}.

2. For every set S with |S| > b, w(S) = b.
3. For every set S with |S| = b, w(S) = b if S is a special set, and w(S) = b − 1/2 otherwise.

1. For every set T for which |T ∩ G_p| < s, we have w_p(T) = |T ∩ G_p|.
2. For every set T for which |T ∩ G_p| > s, we have w_p(T) = s.
3. For every set T for which |T ∩ G_p| = s, we have w_p(T) = s if T ∩ G_p ∈ g_p and w_p(T) = s − 1/2 if T ∩ G_p ∉ g_p.