A privacy mechanism design problem is studied through the lens of information theory. In this work, an agent observes useful data Y = (Y_1, ..., Y_N) that is correlated with private data X = (X_1, ..., X_N), which is also assumed to be accessible to the agent. Here, we consider K users, where user i demands a sub-vector of Y denoted by C_i, and the agent wishes to disclose C_i to user i. A privacy mechanism is designed to generate disclosed data U that maximizes a linear combination of the users' utilities while satisfying a bounded privacy constraint expressed in terms of mutual information. A similar work assumed that X_i is a deterministic function of Y_i; in this work, however, we let X_i and Y_i be arbitrarily correlated. First, an upper bound on the privacy-utility trade-off is obtained using a specific transformation together with the Functional Representation Lemma and the Strong Functional Representation Lemma, and we then show that this upper bound can be decomposed into N parallel problems. Next, lower bounds on the privacy-utility trade-off are derived using the Functional Representation Lemma and the Strong Functional Representation Lemma. The upper bound is tight within a constant, and the lower bounds assert that the disclosed data is independent of all {X_j}_{j=1}^{N} except one, to which the maximum allowed leakage is allocated. Finally, the obtained bounds are studied in special cases.
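The design problem summarized above can be sketched as an optimization; the notation here (utility weights w_i and leakage budget \epsilon) is assumed for illustration, as the abstract does not fix a formulation:

```latex
% Sketch of the privacy mechanism design problem (assumed notation):
% the mechanism P_{U|Y,X} produces disclosed data U; each user i values
% I(U;C_i), weighted by an assumed coefficient w_i >= 0, while total
% leakage about the private data X is bounded by an assumed budget eps.
\sup_{P_{U \mid Y, X}} \; \sum_{i=1}^{K} w_i \, I(U; C_i)
\quad \text{subject to} \quad I(U; X) \le \epsilon .
```

Under this reading, the upper and lower bounds described in the abstract bracket the optimal value of this objective as a function of \epsilon.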