Reinforcement Learning: Concave Utilities and Constraints:


Many real-world problems require optimizing an objective that is non-linear in the cumulative rewards, e.g., fairness objectives in scheduling. Further, most engineering problems have constraints, e.g., reaching a goal safely or respecting average power limits. In this work, we aim to develop efficient algorithms that account for such non-linear objectives and constraints. Reinforcement learning algorithms such as DQN owe their success to the Markov Decision Process framework and to the fact that maximizing a sum of rewards permits backward induction and reduces to the Bellman optimality equation. These properties fail to hold in the presence of non-linear objectives and/or constraints, which makes the analysis challenging. We have proposed multiple forms of algorithms with provable guarantees for these problems. For a network traffic optimization problem, the proposed Actor-Critic (IFRL AC) and Policy Gradient (IFRL PG) approaches significantly outperform standard approaches that do not explicitly account for the non-linearity.

In the table below, we consider two forms of algorithms and indicate the available results with concave utility functions, with constraints, or with both. The model-based approaches use posterior or optimistic sampling, while the model-free approaches are based on policy gradients or primal-dual methods. An X indicates that we have results for that case (the corresponding papers are detailed below); the remaining cells represent scenarios where we are working towards efficient results.

                                                  | Concave Utility | Constraints | Concave Utility and Constraints
Model-Based (Average Cost)                        |        X        |      X      |               X
Model-Free Finite State Space (Discounted Cost)   |        X        |      X      |               X
Model-Free Infinite State Space (Discounted Cost) |        X        |      X      |
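To see why a non-linear utility breaks the usual sum-of-rewards structure, consider a toy two-user scheduling example. This is a minimal illustration of ours (the rates, horizon, and fairness utility are made up for this sketch) and is not taken from the papers below:

# Toy two-user scheduler: at each step we serve one user.
# Serving user 0 yields rate 2.0; serving user 1 yields rate 1.0.
# A proportional-fairness utility sum_i log(throughput_i) is concave and
# non-linear in the cumulative rewards, so it is not a sum of per-step
# rewards and cannot be optimized by greedy per-step Bellman backups.

import math

T = 1000                      # horizon
rates = [2.0, 1.0]            # per-step rate of each user when served

def run(policy):
    """Return the cumulative throughput of each user under a scheduling policy."""
    thr = [0.0, 0.0]
    for t in range(T):
        u = policy(t)
        thr[u] += rates[u]
    return thr

def utility(thr):
    """Proportional-fairness (concave) utility of the cumulative throughputs."""
    return sum(math.log(x) if x > 0 else float("-inf") for x in thr)

greedy = lambda t: 0          # maximizes the plain sum of rewards
round_robin = lambda t: t % 2 # a fairer alternating schedule

for name, pol in [("greedy", greedy), ("round-robin", round_robin)]:
    thr = run(pol)
    print(name, "sum =", sum(thr), "fairness =", utility(thr))

The greedy schedule wins on the raw reward sum but drives the fairness utility to minus infinity, while round-robin is far better for the fairness objective; no per-step backup on the raw rewards recovers this.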


Model-Based Approach with Average Costs:

We consider the problem of tabular infinite-horizon concave utility reinforcement learning (CURL) with convex constraints. For this, we propose a model-based learning algorithm that also achieves zero constraint violations. Assuming that the concave objective and the convex constraints have a solution in the interior of the set of feasible occupation measures, we solve a tighter optimization problem to ensure that the constraints are never violated despite imprecise model knowledge and model stochasticity. We use a Bellman error-based analysis for tabular infinite-horizon setups, which allows analyzing stochastic policies. Combining the Bellman error-based analysis with the tightened optimization problem, we obtain an order-optimal regret guarantee for the objective. The proposed method can be applied to optimistic algorithms to obtain high-probability regret bounds, and also to posterior sampling algorithms to obtain looser Bayesian regret bounds but with a significant improvement in computational complexity.
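A minimal sketch of the tightened planning step is below, written with cvxpy over an estimated transition model. The names (P_hat, R, C, budget, eps) and the specific concave utility (a sum of logs) are illustrative placeholders, not the exact formulation in the paper:

import numpy as np
import cvxpy as cp

S, A, K = 5, 3, 2                                # states, actions, reward signals (toy sizes)
rng = np.random.default_rng(0)
P_hat = rng.dirichlet(np.ones(S), size=(S, A))   # estimated model, P_hat[s, a, s']
R = rng.uniform(size=(K, S, A))                  # reward signals entering the concave utility
C = rng.uniform(size=(S, A))                     # cost whose long-run average is constrained
budget, eps = 0.6, 0.05                          # constraint level and tightening margin

lam = cp.Variable((S, A), nonneg=True)           # occupation measure over (s, a)

constraints = [cp.sum(lam) == 1]
for s in range(S):                               # stationarity / flow conservation
    inflow = cp.sum(cp.multiply(lam, P_hat[:, :, s]))
    constraints.append(cp.sum(lam[s, :]) == inflow)

# Tightened convex constraint: the average cost must stay eps below the
# budget, which absorbs model-estimation error and yields zero violations.
constraints.append(cp.sum(cp.multiply(lam, C)) <= budget - eps)

# Concave utility of the long-run reward rates (here: proportional fairness).
rates = cp.hstack([cp.sum(cp.multiply(lam, R[k])) for k in range(K)])
problem = cp.Problem(cp.Maximize(cp.sum(cp.log(rates))), constraints)
problem.solve()

lam_val = np.maximum(lam.value, 1e-12)
policy = lam_val / lam_val.sum(axis=1, keepdims=True)   # pi(a|s) from lambda(s, a)

In the actual algorithm the tightening margin would shrink as the model estimate becomes more confident; here it is a fixed constant for illustration.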


Model-Free Finite State Space (Discounted Cost):

We use a randomized primal-dual approach to solve reinforcement learning with concave objective functions and constraints. We achieve zero constraint violations with order-optimal objective regret.
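The following is a generic primal-dual template for a discounted constrained MDP, included only to show how the primal (Q-learning) and dual (multiplier) updates interact; the env interface and the slack parameter are assumptions, and the randomized algorithm in the paper differs in its details:

import numpy as np

def primal_dual_q(env, S, A, budget, slack=0.05, gamma=0.99,
                  episodes=2000, alpha=0.1, eta=0.01, eps=0.1):
    """Generic primal-dual template for a discounted constrained MDP.

    env is assumed to expose reset() -> s and step(a) -> (s2, r, c, done),
    where r is the reward and c the per-step constraint cost.
    """
    Q_r = np.zeros((S, A))   # action values for the reward
    Q_c = np.zeros((S, A))   # action values for the constraint cost
    y = 0.0                  # dual variable (Lagrange multiplier)

    for _ in range(episodes):
        s, done, t, disc_cost = env.reset(), False, 0, 0.0
        while not done:
            # Act greedily on the Lagrangian Q_r - y * Q_c, with exploration.
            L = Q_r[s] - y * Q_c[s]
            a = np.random.randint(A) if np.random.rand() < eps else int(np.argmax(L))
            s2, r, c, done = env.step(a)
            disc_cost += (gamma ** t) * c

            # Standard Q-learning backups for both the reward and the cost.
            a2 = int(np.argmax(Q_r[s2] - y * Q_c[s2]))
            Q_r[s, a] += alpha * (r + gamma * Q_r[s2, a2] - Q_r[s, a])
            Q_c[s, a] += alpha * (c + gamma * Q_c[s2, a2] - Q_c[s, a])
            s, t = s2, t + 1

        # Dual ascent against a tightened budget (budget - slack); keeping a
        # margin is one way to push the realized constraint violations to zero.
        y = max(0.0, y + eta * (disc_cost - (budget - slack)))

    return Q_r, Q_c, y

The tightened budget trades a small amount of objective for the safety margin; the randomization over policies used in the actual analysis is omitted here.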


Model-Free Infinite State Space (Discounted Cost):

We use a policy gradient approach with concave objective functions, which allows us to scale to large state spaces. Because the gradient of a general non-linear utility depends on the state-action occupancy measure, which must itself be estimated from samples, the resulting gradient estimate is biased, making the problem challenging.
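A sketch of where the bias enters is below: a REINFORCE-style update where the per-step "shadow rewards" are the gradient of a general utility F evaluated at an occupancy estimate built from the sampled trajectories. The softmax table, the env/grad_F interfaces, and the step sizes are our illustrative choices (a function approximator would replace the table for large state spaces); this is not the algorithm from the paper:

import numpy as np

def softmax(x):
    z = x - x.max()
    e = np.exp(z)
    return e / e.sum()

def pg_general_utility(env, S, A, grad_F, gamma=0.99,
                       iters=500, batch=10, horizon=100, lr=0.05):
    """REINFORCE-style updates for an objective F(occupancy measure).

    grad_F(d) is assumed to return dF/dd, an (S, A) array of shadow rewards
    evaluated at the empirical occupancy estimate d. env is assumed to
    expose reset() -> s and step(a) -> (s2, r, done); r is unused because
    the shadow rewards drive the update.
    """
    theta = np.zeros((S, A))
    for _ in range(iters):
        trajs, d_hat = [], np.zeros((S, A))
        for _ in range(batch):
            s, traj = env.reset(), []
            for t in range(horizon):
                a = np.random.choice(A, p=softmax(theta[s]))
                s2, _, done = env.step(a)
                traj.append((s, a))
                d_hat[s, a] += (1 - gamma) * gamma ** t / batch  # discounted visit estimate
                s = s2
                if done:
                    break
            trajs.append(traj)

        # Shadow rewards: gradient of F at the *estimated* occupancy measure.
        # The estimate comes from finite samples, which is where the bias enters.
        shadow = grad_F(d_hat)

        grad = np.zeros_like(theta)
        for traj in trajs:
            for t, (s, a) in enumerate(traj):
                ret = sum(gamma ** (k - t) * shadow[traj[k][0], traj[k][1]]
                          for k in range(t, len(traj)))
                grad_log = -softmax(theta[s])
                grad_log[a] += 1.0
                grad[s] += (gamma ** t) * ret * grad_log / len(trajs)
        theta += lr * grad
    return theta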


Model-Free Algorithms with Constraints (Average Cost):

Modifying Q-learning based algorithms to handle constraints is an important problem. Average constraints make the optimal policy stochastic, so Q-learning is not directly applicable, while peak constraints need to be learned efficiently. In the following works, convergence/regret guarantees are shown for Q-learning based algorithms with peak/average constraints.
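As a schematic of how the two constraint types enter a Q-learning update: peak constraints restrict the feasible action set at each state, while an average constraint can be folded into the reward through a multiplier, and the resulting policy must randomize. The feasible-action oracle and update below are illustrative assumptions, not the algorithms from the papers:

import numpy as np

def constrained_q_step(Q, s, a, r, s2, feasible, y, c, alpha=0.1, gamma=0.99):
    """One Q-learning backup under peak and average constraints (schematic).

    feasible(s) is assumed to return the actions satisfying the peak
    constraint at s (in practice this set must itself be learned), and the
    average constraint enters through the Lagrangian reward r - y * c.
    """
    acts = feasible(s2)
    target = (r - y * c) + gamma * Q[s2, acts].max()
    Q[s, a] += alpha * (target - Q[s, a])
    return Q

def act(Q, s, feasible, temperature=0.1):
    """Average constraints generally require a stochastic optimal policy,
    so we sample from a softmax over the feasible actions rather than argmax."""
    acts = np.array(feasible(s))
    prefs = Q[s, acts] / temperature
    p = np.exp(prefs - prefs.max())
    p /= p.sum()
    return int(np.random.choice(acts, p=p))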
