How Ethical Should AI Be? How AI Alignment Shapes the Risk Preferences of LLMs
September 10, 2024 Shumiao Ouyang

Shumiao Ouyang is an Associate Professor of Finance (without tenure) at Saïd Business School, University of Oxford, and a Tutorial Fellow in Management at Wadham College, Oxford.


Website: https://www.shumiaoouyang.com/

Paper: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4851711


Abstract: This study examines the risk preferences of Large Language Models (LLMs) and how aligning them with human ethical standards affects their economic decision-making. Analyzing 30 LLMs reveals a range of inherent risk profiles, from risk-averse to risk-seeking. We find that aligning LLMs with human values, focusing on harmlessness, helpfulness, and honesty, shifts them towards risk aversion. While some alignment improves investment forecast accuracy, excessive alignment leads to overly cautious predictions, potentially resulting in severe underinvestment. Our findings highlight the need for a nuanced approach that balances ethical alignment with the specific requirements of economic domains when using LLMs in finance.
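The abstract refers to measuring the "inherent risk profiles" of LLMs. The paper's exact elicitation protocol is not described here, but a common approach in the experimental-economics literature is to infer a risk-aversion coefficient from a model's certainty equivalent for a simple gamble. The sketch below is purely illustrative (the function name and the CRRA parameterization are assumptions, not the paper's method): it recovers the CRRA coefficient rho implied by a stated certainty equivalent for a 50/50 gamble paying 100 or 0, where rho = 0 is risk-neutral, rho > 0 risk-averse, and rho < 0 risk-seeking.

```python
def crra_rho_from_ce(ce, high=100.0, p=0.5, lo=-5.0, hi=0.999999):
    """Illustrative sketch (not the paper's protocol): recover the CRRA
    coefficient rho implied by a certainty equivalent `ce` for a gamble
    paying `high` with probability `p` and 0 otherwise.

    Under CRRA utility u(x) = x**(1-rho) / (1-rho), the certainty
    equivalent satisfies CE = high * p**(1 / (1-rho)), which is strictly
    decreasing in rho on (-inf, 1), so bisection applies.
    """
    ce_of = lambda rho: high * p ** (1.0 / (1.0 - rho))
    for _ in range(200):  # bisection on the monotone map rho -> CE
        mid = (lo + hi) / 2.0
        if ce_of(mid) > ce:
            lo = mid  # implied CE too high -> true rho is larger
        else:
            hi = mid
    return (lo + hi) / 2.0

# A certainty equivalent of 50 (the gamble's expected value) implies
# risk neutrality; below 50 implies risk aversion, above 50 risk seeking.
print(round(crra_rho_from_ce(50.0), 4))   # risk-neutral: rho ~ 0
print(crra_rho_from_ce(40.0) > 0)         # risk-averse
print(crra_rho_from_ce(60.0) < 0)         # risk-seeking
```

In this framing, "alignment shifts models toward risk aversion" would show up as aligned models reporting lower certainty equivalents, and hence higher implied rho, than their unaligned counterparts.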


If you would like to give a presentation in a future webinar, please contact our Senior Economist Dr. Wen Chen (wen.chen@luohanacademy.com).

For other inquiries, contact events@luohanacademy.com.



    Alibaba Digital Ecosystem Innovation Park, No. 1 Ai Cheng Street, Yuhang District, Hangzhou, China.
