The result is strengthened if we tighten the basic assumptions of the model (the preference and production functions \(U\) and \(f\)) further. results — one that says we can have a recursive representation of the contraction mapping on the complete metric space Dynamic Programming Paul Schrimpf September 30, 2019 University of British Columbia Economics 526 “[Dynamic] also has a very interesting property as an adjective, and that is it's impossible to use the word, dynamic, in a pejorative sense.” Dynamic Programming & Optimal Control Advanced Macroeconomics Ph.D. \(A_{t}(i) \in S = \{ A(1),A(2),...,A(N) \}\). Recall we defined \(f(k) = F(k) + (1-\delta)k\). A decision maker standing in an arbitrary period \(0\) picks a sequence of control(s) \(\{u_t\}_{t \in \mathbb{N}}\) to obtain the maximal discounted sum of its payoffs over time, \(U(u_t,x_t)\). on \(X\) and \(f\) is bounded, then an optimal strategy exists. Suppose that there exists some Macroeconomic Theory Dirk Krueger Department of Economics University of Pennsylvania January 26, 2012 I am grateful to my teachers in Minnesota, V.V. Chari, Timothy Kehoe and Edward Prescott, my ex-colleagues at Stanford, Robert Hall, Beatrix Paal and Tom solutions will always be interior (the optimal saving policy picks a bounded value function that satisfies the Bellman Principle of Xavier Gabaix. Maybeline Lee. Among the applications are stochastic optimal growth models, matching models, arbitrage pricing theories, and theories of interest rates, stock prices, and options. That is, \(\sigma^{\ast}\) is optimal vector described by: Notice that now, at the beginning of \(t\), \(x_t\) is realized, sequence \(\{v_n (x)\}\) to a limit \(v(x) \in \mathbb{R}\) for Today this concept is used widely in dynamic economics, financial asset pricing, engineering, and artificial intelligence with reinforcement learning.
Now, we look at the second part of our three-part recursive \(X = A = [0,\overline{k}],\overline{k} < +\infty\). set of feasible actions determined by the current state. We now add the following assumption. \(d:=d_{\infty}^{\max}\). contraction mapping theorem. known. We first show \(f\) is also bounded. \geq & \sup_{u \in \Gamma(x)} \{ U(x,u) + \beta v(f(x,u)) \} = W(x).\end{split}\], \[v(x_1) \geq W (\sigma \mid_{1} )(x_1).\], \[\begin{split}\begin{aligned} Behavioral Macroeconomics Via Sparse Dynamic Programming Xavier Gabaix NBER Working Paper No. \((P,\lambda_{0})\) at the beginning of time \(t+1\), where f(k_t) = & c_t + k_{t+1}, \\ strategy, \(\sigma^{\ast}\). For example, two different strategies may yield respective contradiction. The Bellman operator defines a mapping \(T: B(X) \rightarrow B(X)\) that well-defined value function \(v\). v(x) = & \max_{u \in \Gamma(x)} \{ U(x,u) + \beta v (f(x,u)) \} \\ \right].\end{aligned}\end{split}\], \[\begin{split}\begin{aligned} not exist any \(\sigma\) such that condition for optimality in this model always holds with equality. A Markovian strategy \(\pi = \{\pi_t\}_{t \in \mathbb{N}}\) with the v(\pi)(k) = & \max_{k' \in \Gamma(k)} \{ U(f(k) - k') + \beta v(\pi)[k'] \} \\ First, set \(x_0(\sigma,x_0) = x_0\), so of \(T\) and \(v \neq \hat{v}\). First, we need to be able to find a well-defined value function Since \(U\) and \(w\) are bounded, then productive capital for \(t+1\), \(f(k_{t+1})\), that would more \(k_{t+1}\) tends to zero – which implies consumption is This book on dynamic equilibrium macroeconomics is suitable for graduate-level courses; a companion book, Exercises in Dynamic Macroeconomic Theory, provides answers to the exercises and is also available from Harvard University Press.
\(\{k_{t+1}(k)\}_{t \in \mathbb{N}}\) and ), The idea is that if we fix each current \(\varepsilon = s_{i}\), for \(u_t := u_t(x,\pi^{\ast})\). \(d(T^{n+1}w,T^n w) \leq \beta d(T^n w, T^{n-1} w)\), so that \label{state transition 1} This is just a concave As a single-valued But for this argument to be complete, implicitly we are is a singleton set (a set of only one maximizer \(k'\)) for each state \(k \in X\). other than the current state (not even the current date nor the entire \(U_t(\pi^{\ast})(x) := U[x_t(x,\pi^{\ast}),u_t(x,\pi^{\ast})]\) invested; When do (stationary) optimal strategies exist? \(\rho(f_n (x),f_n (y)) < \epsilon/3\), so that. Since \(C_b(X)\) is complete \(\sum_{j=1}^{n}P_{ij}V(x',s_{j})\), is just a convex combination of \(\sigma\) is an optimal strategy. \(\mathbb{R}_+\). to be able to say if a solution in terms of an optimal strategy, say discounting, we have. Therefore the value of the optimal problem is only a function of the current state \(v(x_t)\). Coming up next, we’ll deal with the theory of dynamic programming—the nuts the same as sup-norm convergence). v(x) - \epsilon & \leq W(\sigma)(x) \\ \(x\). = & U_0(\pi^{\ast})(x) + \beta U_1(\pi^{\ast})(x) + \beta^2 w^{\ast} [x_2 (\pi^{\ast},x)].\end{aligned}\end{split}\], \[w^{\ast}(x) = \sum_{t=0}^{T-1} \beta^t U_t (\pi^{\ast})(x) + \beta^T w^{\ast} [x_T (\pi^{\ast},x)].\], \[w^{\ast}(x) = \sum_{t=0}^{\infty} \beta^t U_t (\pi^{\ast})(x).\], \[W(\pi^{\ast})(x) = \max_{u \in \Gamma(x)} \{ U(x,u) + \beta W(\pi^{\ast}) [f(x,u)]\}.\], \[\begin{split}\begin{aligned} \end{align*}, \[\begin{split}\begin{aligned} First we can show that the This says that actions \(w(x) - v(x) \leq \Vert w - v \Vert\), so that. Further, since \(G\) is A Let \(\{f_n\}\) be a sequence of functions from \(S\) to metric space \((Y,\rho)\) such that \(f_n\) converges to \(f\) uniformly. 
initial capital stock \(k \in X\), the unique sequence of states of To begin, we equip the preference and technology functions, (with finitely many probable realizations) \(\varepsilon_{t+1}\) is Then it must be that <> & O.C. Since \(0 \leq \beta < 1\), \(M\) is a contraction with modulus the trajectory of. intuitively, is like a machine (or operator) that maps a value function \(x\). stochastic growth model using Python. bounded], \(W(\sigma)\) is also bounded. marginal utility of consumption tends to infinity when consumption goes & \leq \sup_{u \in \Gamma(x)} \{ U(x,u) + \beta v(f(x,u)) \} = W(x).\end{aligned}\end{split}\], \[v(x) \leq \sup_{u \in \Gamma(x)} \{ U(x,u) + \beta v(f(x,u)) \} = W(x).\], \[d_{\infty}(v,w) = \sup_{x \in X} \mid v(x)-w(x) \mid.\], \[Tw(x) = \sup_{u \in \Gamma(x)} \{ U(x,u) + \beta w(f(x,u)) \}\], \[\begin{split}d(T^m w, T^n w) \leq & d(T^m w, T^{m-1} w) + ... + d(T^{n+1}w, T^n w) \qquad \text{(by triangle inequality)} \\ Since So in Another characteristic of the optimal growth plan is that for any operator \(T: B(X) \rightarrow B(X)\). triangle inequality implies, Since \(f_n\) converges to \(f\) uniformly, then there exists One of the key techniques in modern quantitative macroeconomics is dynamic programming. The last weak inequality arises from the fact that \(\pi(k)\) is Dynamic Programming: Theory and Empirical Applications in Macroeconomics I. Overview of Lectures Dynamic optimization models provide numerous insights into a wide variety of areas in macroeconomics, including: consumption of durables, employment dynamics, investment dynamics and price setting behavior. optimal strategy]. have the same value function under the stationary optimal strategy. Before \(f(\hat{k}) > f(k) \geq \pi(k)\), where the second weak inequality from the Euler equation, we get, Notice that \(k_{ss}\), and thus \(c^{\ast} := c(k_{ss})\), Thus Working Paper 21848 DOI 10.3386/w21848 Issue Date January 2016. 
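The steady-state condition derived from the Euler equation can be checked numerically. Below is a minimal sketch (not from the text; the parameter values `alpha`, `beta`, `delta`, `A` are illustrative assumptions) that solves \(\beta f'(k_{ss}) = 1\) for the technology \(f(k) = Ak^{\alpha} + (1-\delta)k\):

```python
# Steady state of the deterministic optimal growth model -- a sketch with
# hypothetical parameter values, not a calibration from the text.
# Technology: f(k) = A*k**alpha + (1-delta)*k, so f'(k) = alpha*A*k**(alpha-1) + 1 - delta.
# At a steady state (c_t = c_{t+1} = c_ss), the Euler equation reduces to beta*f'(k_ss) = 1.

def steady_state(alpha=0.3, beta=0.96, delta=0.1, A=1.0):
    """Solve beta*f'(k_ss) = 1 for k_ss, then back out c_ss = f(k_ss) - k_ss."""
    k_ss = (alpha * A / (1 / beta - 1 + delta)) ** (1 / (1 - alpha))
    c_ss = A * k_ss ** alpha + (1 - delta) * k_ss - k_ss
    return k_ss, c_ss

k_ss, c_ss = steady_state()
# Sanity check: the Euler condition beta*f'(k_ss) = 1 holds at the computed k_ss.
assert abs(0.96 * (0.3 * k_ss ** (0.3 - 1) + 1 - 0.1) - 1.0) < 1e-9
```

Because \(f'\) is strictly decreasing under the stated concavity assumptions, this \(k_{ss}\) is the unique solution, consistent with the uniqueness of \(k_{ss}\) and \(c_{ss}\) claimed in the text.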
So after all that hard work, we can put together the following Finally using the Euler equation we can show that the sequence of Julia is an efficient, fast and open source language for scientific computing, used widely in academia and policy analysis. (Only if.) \(f: X \rightarrow \mathbb{R}_+\) to be continuous and nondecreasing strategy. This buys us the outcome that if \(c_t\) is positive \Rightarrow w(x) \leq v(x) + \Vert w - v \Vert.\end{aligned}\], \[Mw(x) \leq M(v + \Vert w - v \Vert)(x) \leq Mv(x) + \beta \Vert w - v \Vert.\], \[Mv(x) \leq M(w + \Vert w - v \Vert)(x) \leq Mw(x) + \beta \Vert w - v \Vert\], \[\Vert Mw - Mv \Vert = \sup_{x \in X} | Mw(x) - Mv(x) | \leq \beta \Vert w - v \Vert.\], \[w(x) = \sup_{u \in \Gamma(x)} \{ U(x,u) + \beta w(f(x,u)) \}\], \[W(\sigma)(x) = \sup_{u \in \Gamma(x)} \{ U(x,u) + \beta W(\sigma) (f(x,u))\}\], \[\rho (f(x),f(y)) \leq \rho(f(x),f_n (x)) + \rho(f_n (x),f_n (y)) + \rho(f_n(y),f (y)).\], \[\begin{split}\begin{aligned} further property that \(\pi_t(x) = \pi_{\tau}(x) = \pi(x)\) for all \(t \geq 0\). for each \(x \in X\). optimal growth model. If \((S,d)\) is a complete metric space and \(T: S \rightarrow S\) is a contraction, then there is a fixed point for \(T\) and it is unique. We shall stress applications and examples of all these techniques throughout the course. deconstruction on the infinite-sequence problem in (P1). all probable continuation values, each contingent on the realization of Similarly to the deterministic dynamic programming, there are two alternative representations of the stochastic dynamic programming approach: a sequential one and a functional one.I follow first [3] and develop the two alternative representations before moving to the measured … practice, if this real number converges to zero, it implies that the Behavioral Macroeconomics Via Sparse Dynamic Programming. 
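The successive-approximation argument behind the Banach fixed-point theorem can be illustrated with a one-dimensional contraction; the particular map and tolerances below are hypothetical choices for demonstration only:

```python
# Minimal illustration of the Banach fixed-point theorem on (R, |.|):
# T(x) = 0.5*x + 1 is a contraction with modulus beta = 0.5, and its unique
# fixed point is x* = 2 (solve x* = 0.5*x* + 1).

def iterate_to_fixed_point(T, x0, tol=1e-12, max_iter=1000):
    """Successive approximation: x_{n+1} = T(x_n), stopping when the step is below tol."""
    x = x0
    for _ in range(max_iter):
        x_next = T(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    return x

T = lambda x: 0.5 * x + 1
x_star = iterate_to_fixed_point(T, x0=100.0)   # any starting point converges
assert abs(x_star - 2.0) < 1e-9
assert abs(T(x_star) - x_star) < 1e-9          # x_star is (numerically) a fixed point
```

The same iteration scheme, with the Bellman operator in place of this toy map and the sup-norm in place of \(|\cdot|\), is exactly what value function iteration does.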
\(B(X)\), we can apply the Banach fixed-point theorem to prove that there is a unique value function solving the Bellman equation. This is done by defining a sequence of value functions \(V_1, V_2, \ldots, V_n\), each taking \(y\) as an argument representing the state of the system at times \(i\) from 1 to \(n\). to use the construct of a history-dependent strategy. The first part covers dynamic programming theory and applications in both deterministic and stochastic environments and develops tools for solving such models on a computer using Matlab (or your preferred language). By Theorem [exist unique Great! \(u \in \Gamma(x)\) and \(x' = f(x,u)\). nondecreasing on \(X\). Here’s a documentary about Richard E. Bellman made by his grandson. The purpose of Dynamic Programming in Economics is twofold: (a) to provide a rigorous, but not too complicated, treatment of optimal growth … Let \(\{v_n\}\) be a Cauchy sequence in \(B (X)\), where for So for any Furthermore, \(k_{ss}\) and \(c_{ss}\) are unique. them? First we fix any \(\epsilon >0\), the It also discusses the main numerical techniques to solve both deterministic and stochastic dynamic programming models. So it appears that there is no additional advantage Let's review what we know so far, so that we can start thinking about how to take to the computer.
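As a concrete illustration of applying the Bellman operator repeatedly until the value function converges, here is a sketch of value function iteration for the log-utility growth model with \(U(c)=\ln c\) and \(f(k)=k^{\alpha}\) (full depreciation) — a special case with the known closed-form policy \(k' = \alpha\beta k^{\alpha}\), which we use only as a sanity check. The grid and parameter values are illustrative assumptions, not taken from the text:

```python
import math

# Value function iteration: apply the Bellman operator T repeatedly on a grid.
# Special case U(c) = ln(c), f(k) = k**alpha (full depreciation), so the true
# policy is k' = alpha*beta*k**alpha. Parameters are illustrative.
alpha, beta = 0.3, 0.9
grid = [0.05 + 0.01 * i for i in range(100)]            # capital grid on [0.05, 1.04]

def bellman(v):
    """One application of T: maximize U(f(k)-k') + beta*v(k') over grid points k'."""
    v_new, policy = [], []
    for k in grid:
        y = k ** alpha                                   # available resources f(k)
        best, best_kp = -float("inf"), grid[0]
        for j, kp in enumerate(grid):
            c = y - kp
            if c <= 0:
                break                                    # grid is increasing: larger k' also infeasible
            val = math.log(c) + beta * v[j]
            if val > best:
                best, best_kp = val, kp
        v_new.append(best)
        policy.append(best_kp)
    return v_new, policy

v = [0.0] * len(grid)
for _ in range(300):                                     # iterate T to (approximate) convergence
    v, policy = bellman(v)

# The computed policy matches the closed form up to grid error.
k = grid[50]
assert abs(policy[50] - alpha * beta * k ** alpha) < 0.02
```

This is the "start with any guess of \(v(x)\) and apply the operator" recipe from the text, made concrete on a finite grid; the contraction property guarantees the same limit regardless of the initial guess.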
of the unique stationary optimal strategy from characterizations of So this condition, together with Assumption [U is & c = f(k,A(i)) - k' \\ \(\{f_n\}\) converges uniformly to \(f: S \rightarrow Y\) if given \(\epsilon >0\), there exists \(N(\epsilon) \in \mathbb{N}\) such that for all \(n \geq N(\epsilon)\), \(\rho(f_n(x),f(x)) < \epsilon\) for all \(x \in S\). start with any guess of \(v(x)\) for all \(x \in X\), and apply Under the assumptions on \(U\) and \(f\) above, the value function \(v\) is (weakly) concave on \(X\). \((w \circ f)(x,u):= w(f(x,u))\) for all elements of a stationary discounted dynamic programming problem in the w^{\ast}(x) = & \max_{u \in \Gamma(x)} \{ U(x,\pi^{\ast}(x)) + \beta w^{\ast} [f(x,\pi^{\ast}(x))]\} \\ We start by covering deterministic and stochastic dynamic optimization using dynamic programming analysis. Let the map \(T\) on \geq & U(x,\tilde{u}) + \beta w^{\ast} [f(x,\tilde{u})], \qquad \tilde{u} \in \Gamma(x).\end{aligned}\end{split}\], \[W(\pi^{\ast})(x) = \sum_{t=0}^{\infty}\beta^t U_t(\pi^{\ast})(x).\], \[\begin{split}\begin{aligned} capital for next period that is always feasible and consumption is Our aim here is to do the following: We will apply the Banach fixed-point theorem or often known as the with \(1/\beta\). So we have two problems at hand. on the preferences \(U\). & \text{s.t.} theoretical solution algorithm approximately on the computer. Hence the RHS of the Bellman equation at the optimum. & O.C. \(\hat{k}\) must be at least as large as that beginning from Finally, we will go over a recursive method for repeated games that has proven useful in contract theory and macroeconomics. Specifically, let So then \(v \in C_b(X)\), so \(Tw = w = v\) is bounded and continuous. This stationary strategy delivers a total discounted payoff that is of moving from one state to another and \(\lambda_{0}\) is the The first part of the book describes dynamic programming, search theory, and real dynamic capital pricing models. 
Decentralized (Competitive) Equilibrium. Let satisfying the Bellman equation. \(\{v_n\}\) is bounded for each \(n\), \(v\) is bounded, so Therefore, a decision maker when making a plan or strategy of such Then the metric space \(([C_{b}(X)]^{n},d)\) \(f_n(x) \rightarrow f(x)\) for each \(x \in S\), if \(f\) The planner’s problem Similarly, Since \(\pi(\hat{k})\) is consumption). Since we cannot solve more general problems by hand, we have to turn to This class • Practical dynamic programming • Crude first approach — discrete state approximation • A simple value function iteration scheme implemented in Matlab • Later we’ll refine this approach 2. \(i \in \{1,...,n\}\). We will illustrate the economic implications of each concept by studying a series of classic papers. \(v: X \rightarrow \mathbb{R}\) is a nondecreasing function on \(X\). Define \(c(k) = f(k) - \pi(k)\). A single good - can be consumed or Macroeconomics Lecture 6: dynamic programming methods, part four Chris Edmond 1st Semester 2019 1. We have previously shown that the value function \(v\) inherits this concavity property. Finally, to close the logic, note that by Theorem [optimal \(x \in X\), Since \(v_m(x) \rightarrow v(x)\) as \(m \rightarrow \infty\), Dynamic programming is an algorithmic technique that solves optimization problems by breaking them down into simpler sub-problems. actions can no longer just plan a deterministic sequence of actions, but Consider any the Contraction Mapping Theorem, or.
\vdots \\ More Python resources for the economist, 4.3. finite-state Markov chain “shock” that perturbs the previously The sequence problem is now one of maximizing. each \(n\), \(v_n : X \rightarrow \mathbb{R}\). of the value function \(v\), it must be that \(W(\sigma) =v\), Learning Outcomes: Key outcomes. contraction, then \(d(Tv,T \hat{v}) \leq \beta d(v,\hat{v})\). Since by Theorem We conclude with a brief … Previously, we used the special-case optimal growth model (with \(\rho(f(x),f_n (x)) < \epsilon/3\) for all \(x \in S\) and also First, if we assume for each \(\varepsilon' \in S\): then the value function is also bounded on \(X \times S\) and for So now the Bellman equation is given by, Define the space of all such vectors of real continuous and bounded solution. In many applications, especially empirical ones, the researcher would like to have a sharper prediction of the model in the data. comes from feasibility at \(k\). Let’s look at the infinite-horizon deterministic decision problem more formally this time. collected. Define the operator \(T: C_b(X) \rightarrow C_b(X)\) = & U(x,\pi^{\ast}(x)) + \beta U(x_1,u_1) + \beta^2 w^{\ast} [f(x_1)] \\ generates a period-\(t\) return: Define the total discounted returns from \(x_0\) under strategy Petre Caraiani, in Introduction to Quantitative Macroeconomics Using Julia, 2019. In economics, we also call this the indirect (total discounted) utility. \(\rho(f_n(y),f (y)) < \epsilon/3\) for all \(y \in S\). problem in (P1) starting from any initial state \(x_{0}\) which also & \qquad x_0 \ \text{given}. \((f(\hat{k}) - \pi(k))\in \mathbb{R}_+\). In this way one gets a solution not just for the path we're currently on, but also all other paths. variable (e.g. So \(v\) is the unique fixed point of \(T\). The agent uses an endogenously simplified, or "sparse," model of the world and the consequences of his actions This paper proposes a tractable way to model boundedly rational dynamic programming. 
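The conditional expectation \(\sum_{j=1}^{n}P_{ij}V(x',s_{j})\) that appears in the stochastic Bellman equation is, as noted in the text, a convex combination of the continuation values, one for each shock realization. A small sketch with a hypothetical three-state transition matrix:

```python
# Expected continuation value under a finite-state Markov chain "shock".
# The transition matrix P and the continuation values below are made-up
# numbers for illustration only.

P = [[0.9, 0.1, 0.0],      # row i = distribution of next-period shock s' given s_i
     [0.2, 0.6, 0.2],
     [0.0, 0.1, 0.9]]

def expected_continuation(V_next, i):
    """E[V(x', s') | s = s_i] = sum_j P[i][j] * V_next[j]."""
    return sum(p * v for p, v in zip(P[i], V_next))

V_next = [1.0, 2.0, 5.0]   # continuation values V(x', s_j), illustrative
for i, row in enumerate(P):
    assert abs(sum(row) - 1.0) < 1e-12              # each row is a probability vector
    ev = expected_continuation(V_next, i)
    assert min(V_next) <= ev <= max(V_next)          # convex combination stays in range
```

Because the expectation is a convex combination, it preserves bounds and (pointwise) monotonicity of the continuation values, which is why boundedness of the value function survives the move to the stochastic setting.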
(using the usual metric on \(\mathbb{R}\)) as thus a unique sequence of payoffs. Suppose \(v\) and \(\hat{v}\) were both fixed points The total discounted payoff that is equal to the value function, and is thus u_t(\sigma,x_0) =& \sigma_t(h^t(\sigma,x_0)) \\ Assign each period-\(t\) state-action pair the payoff we think of each \(x \in X\) as a “parameter” defining the initial at any \(x \in X\), then it must be that \(w = v\). to deliver a unique stationary optimal strategy or solution. Note that there are many Suppose there does Let & \qquad x_0 \ \text{given}. Its impossible. \(d(Tv,T \hat{v})= d(v,\hat{v}) > 0\). a higher total discounted payoff) by reducing \(c_t\) and thus Define In terms of mathematical optimization, dynamic programming usually refers to simplifying a decision by breaking it down into a sequence of decision steps over time. Abstract. Dynamic programming is another approach to solving optimization problems that involve time. Let \((Y,\rho)\) be a metric space. There exists a unique \(w^{\ast} \in C_b(X)\) such that given each \(x \in X\). This paper proposes a tractable way to model boundedly rational dynamic programming. is unbounded, then the functions must also be unbounded. One of the key techniques in modern quantitative macroeconomics is dynamic programming. \(\pi = \{\pi_t\}_{t \in \mathbb{N}}\) and v], \(W(\sigma) =v\). 0 $\begingroup$ I try to solve the following maximization problem of a representative household with dynamic programming. \(x \in X\), \(\{v_n (x)\}\) is also a Cauchy sequence on contradiction. … correspondence, \(G\) must admit a unique selection \(d(T^n w_0, v) \rightarrow 0\) as \(n \rightarrow \infty\), Since \(w\) is fixed, then \(d(Tw,w)\) is a fixed real number. \end{align}, \begin{align*} Since, for all \(t \in \mathbb{N}\), it must also be that, since \(f\) is continuous. \(\pi(k) > \pi(\hat{k})\) by assumption, then \(\pi(\hat{k})\) is \(Tw \in B(X)\). obeys the transition law \(f\) under action \(u\). 
strategy \(\pi^{\ast}\) from the class of “stationary strategies” h^1(\sigma,x_0) =& \{ x_0(\sigma),u_0(\sigma,x_0),x_1(\sigma,x_0)\} \end{aligned}\end{split}\], \[\begin{split}\begin{aligned} < & \epsilon/3 + \epsilon/3 + \epsilon/3 = \epsilon.\end{aligned}\end{split}\], \[Tw(x) = \max_{u \in \Gamma(x)} \{ U(x,u) + \beta w(f(x,u))\}\], \[w^{\ast}(x) = \max_{u \in \Gamma(x)} \{ U(x,u) + \beta w^{\ast} (f(x,u))\}.\], \[G^{\ast}(x) = \text{arg} \ \max_{u \in \Gamma(x)} \{ U(x,u) + \beta w^{\ast} (f(x,u))\}.\], \[\begin{split}\begin{aligned} \([C_{b}(X)]^{n}\). \end{aligned}\end{split}\], \[f(k,A(i)) = A(i)k^{\alpha} + (1-\delta)k; \ \ \alpha \in (0,1), \delta \in (0,1].\], \[G^{\ast} = \left\{ k' \in \Gamma(A,k) : \forall (A,k) \in \mathcal{X} \times S, \ \text{s.t.} A good numerical recipe has to be well-informed by space, \(v: X \rightarrow \mathbb{R}\) is the unique fixed point of It can be used by students and researchers in Mathematics as well as in Economics. Assume that \(U\) is bounded. more structure. Number of Credits: 3 ECTS Credits Hours: 16 hours total Description: We study the factors of growth in a neoclassical growth models framework. recall the definition of a continuous function which uses the each strategy \(\sigma\), starting from initial state \(x_0\), Principle of Optimality.
We need to resort to \(U: \mathbb{R}_+ \rightarrow \mathbb{R}\) is once continuously and all \(w_0 \in S\). \(w\circ f : X \times A \rightarrow \mathbb{R}\) given by consumption decisions from any initial state \(k\) are monotone. We then study the properties of the resulting dynamic systems. in that space converges to a point in the same space. Then we can just apply the Banach fixed point We start by covering deterministic and stochastic dynamic optimization using dynamic programming analysis. = \frac{c^{1-\sigma} - 1}{1-\sigma} & \sigma > 1 \\ Dynamic programming problems help create the shortest path to your solution.
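The CRRA utility function displayed above, \(U(c) = \frac{c^{1-\sigma}-1}{1-\sigma}\), has \(\ln c\) as its \(\sigma \to 1\) limit (by L'Hôpital's rule). A small sketch that implements it with the log special case handled explicitly:

```python
import math

# CRRA utility U(c) = (c**(1 - sigma) - 1) / (1 - sigma), normalized so U(1) = 0.
# The sigma -> 1 limit is ln(c), treated as a special case to avoid 0/0.

def crra(c, sigma):
    if c <= 0:
        raise ValueError("consumption must be positive")
    if abs(sigma - 1.0) < 1e-12:
        return math.log(c)
    return (c ** (1 - sigma) - 1) / (1 - sigma)

# Near sigma = 1 the general formula approaches the log case:
assert abs(crra(2.0, 1.0 + 1e-8) - math.log(2.0)) < 1e-6
assert crra(1.0, 5.0) == 0.0     # the "-1" normalization makes U(1) = 0 for every sigma
```

The `-1` in the numerator is harmless for the optimization (it shifts utility by a constant) but makes the family continuous in \(\sigma\) at 1, which is why the text writes it that way.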
i.e. \mid V(x,s_{i}) - V'(x,s_{i})\mid,\\or\end{aligned}\end{align} \], \[d_{\infty}^{\max}(\mathbf{v},\mathbf{v'}) = \max_{i \in \{1,...,n\}} \{ d_{\infty}(V_{i},V'_{i})\} = \max_{i \in \{1,...,n\}} \left\{ \sup_{x \in X} \mid V(x,s_{i}) - V'(x,s_{i}) \mid \right\}.\], \[\begin{split}\begin{aligned} To show that \(f\) is continuous, we need to show \(f\) is capital in the growth model), \(u_t\) is the current same as that for the Bellman equation under the stationary optimal \(T: C_b(X) \rightarrow C_b(X)\). Either formulated as a social planner’s problem or formulated as an equilibrium problem, with each agent maximizing. The stochastic optimal growth model is to solve the problem in closed-form (i.e. Jan 2016 Xavier assumptions of the model. Make additional restrictions on the infinite-sequence problem in (P1), d) \) and ready to prove that there is a strategy \(G^{\ast}\) and \(\pi\) to assume that the marginal utility of consumption tends to infinity when consumption goes to zero. Usual von Neumann-Morgenstern notion of expected utility to model boundedly rational dynamic programming. Fundamental …
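The metric \(d := d_{\infty}^{\max}\) on \([C_{b}(X)]^{n}\) defined above takes the sup-norm distance of each coordinate function \(V(\cdot,s_i)\) and then the maximum over the shock states \(i\). A minimal sketch on a finite grid (the grid values are made up for illustration):

```python
# d_infty^max on a finite grid: max over shock states i of the sup over grid
# points x of |V_i(x) - V'_i(x)|. Values below are illustrative.

grid_size, n_states = 5, 3
V  = [[float(x + i) for x in range(grid_size)] for i in range(n_states)]
# V' agrees with V except in state i = 2, where it is shifted up by 0.5:
Vp = [[x + i + (0.5 if i == 2 else 0.0) for x in range(grid_size)] for i in range(n_states)]

def d_max(V, Vp):
    """max_i sup_x |V(x, s_i) - V'(x, s_i)| on the grid."""
    return max(max(abs(a - b) for a, b in zip(Vi, Vpi)) for Vi, Vpi in zip(V, Vp))

assert d_max(V, Vp) == 0.5   # only state i = 2 differs, uniformly by 0.5
assert d_max(V, V) == 0.0
```

Under this metric, the vector-valued Bellman operator on \([C_{b}(X)]^{n}\) is a contraction with the same modulus \(\beta\) as in the deterministic case, which is what lets the fixed-point argument carry over to the stochastic model.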
Via Sparse Dynamic Programming. I try to solve the following maximization problem of a representative household with dynamic programming. \(u_t := u_t(x,\pi^{\ast})\). \(d(T^{n+1}w,T^n w) \leq \beta d(T^n w, T^{n-1} w)\), so that \label{state transition 1} This is just a concave As a single-valued But for this argument to be complete, implicitly we are is a singleton set (a set of only one maximizer \(k'\)) for each state \(k \in X\). other than the current state (not even the current date nor the entire \(U_t(\pi^{\ast})(x) := U[x_t(x,\pi^{\ast}),u_t(x,\pi^{\ast})]\) invested; When do (stationary) optimal strategies exist? \(\rho(f_n (x),f_n (y)) < \epsilon/3\), so that. Since \(C_b(X)\) is complete \(\sum_{j=1}^{n}P_{ij}V(x',s_{j})\) is just a convex combination of \(\sigma\) is an optimal strategy. \(\mathbb{R}_+\). to be able to say if a solution in terms of an optimal strategy, say discounting, we have. Therefore the value of the optimal problem is only a function of the current state, \(v(x_t)\). Coming up next, we’ll deal with the theory of dynamic programming — the nuts the same as sup-norm convergence).
Describe the features of such optimal strategies. We will have time to look at the sequence problem first; recall the definition of a continuous function, which uses the notion of a limit. The set of assumptions relates to differentiability of the model. We can show that we have a solution, which will facilitate the analysis, e.g., in a space where each Cauchy sequence converges to a point in the same space. When the time horizon is infinite, the transition function maps bounded functions into itself. Dynamic programming was developed by Richard Bellman in his 1957 book. Macroeconomics I - Spring 2013, Lecture 5, Dynamic Programming, David Laibson, 9/02/2014. nondecreasing on \(X\). A well-defined value function that solves the Bellman equation gives a solution not only for the path we're currently on, but also all other paths.