Overview

Our research project aims to advance the state of the art in Reinforcement Learning by investigating and addressing the shortcomings of fixed future discounting. Specifically, we propose a methodology for a dynamic discount factor that adapts to changing environments and supports optimal sequential decision-making. To achieve this goal, we will investigate machine learning techniques for modeling dynamic, state-dependent discounting of future rewards. By doing so, an agent can make more rational decisions that maximize cumulative reward, even in uncertain or changing environments.

Our research has significant implications for a wide range of fields that rely on sequential decision-making, including robotics, computer systems, natural language processing, financial analysis, and healthcare. By improving our understanding of how humans discount future rewards when making decisions, this research also has the potential to provide insights into cognitive processes and neuroscience. Overall, the project will advance the field of Reinforcement Learning by improving the efficiency and effectiveness of decision-making processes. Our results will provide a foundation for developing more robust and adaptive intelligent systems that can learn from experience and make informed decisions in complex environments.

Team Advisor: Vahid Behzadan
GitHub: N/A
Publications:
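To make the idea concrete, the following is a minimal sketch of how a state-dependent discount factor could plug into standard tabular Q-learning. This is an illustration only, not the project's actual method: the chain environment, the `dynamic_gamma` schedule, and all hyperparameters are hypothetical assumptions chosen for clarity.

```python
import numpy as np

N_STATES, N_ACTIONS = 5, 2

def dynamic_gamma(state):
    # Hypothetical schedule: discount more aggressively (lower gamma)
    # near the terminal state, i.e. a shorter effective planning horizon.
    return 0.99 - 0.1 * (state / (N_STATES - 1))

def step(state, action):
    # Toy chain MDP: action 1 moves right, action 0 moves left;
    # reward 1.0 only on reaching the rightmost (terminal) state.
    next_state = min(state + 1, N_STATES - 1) if action == 1 else max(state - 1, 0)
    reward = 1.0 if next_state == N_STATES - 1 else 0.0
    done = next_state == N_STATES - 1
    return next_state, reward, done

def train(episodes=500, alpha=0.5, eps=0.1, seed=0):
    rng = np.random.default_rng(seed)
    Q = np.zeros((N_STATES, N_ACTIONS))
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            # Epsilon-greedy action selection.
            a = int(rng.integers(N_ACTIONS)) if rng.random() < eps else int(Q[s].argmax())
            s2, r, done = step(s, a)
            # The state-dependent discount replaces the usual constant gamma
            # in the Q-learning target.
            target = r + (0.0 if done else dynamic_gamma(s) * Q[s2].max())
            Q[s, a] += alpha * (target - Q[s, a])
            s = s2
    return Q

Q = train()
print(Q[0].argmax())  # greedy action in the start state
```

In this sketch, only the target computation changes relative to ordinary Q-learning; a learned (rather than hand-coded) `dynamic_gamma` is where the proposed machine learning techniques would enter.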