Abstract:
Robo-advisory services are plagued by the "algorithmic black box" problem, which may trigger investor trust crises and cross-format risk contagion, necessitating the development of a trustworthy service system that is both transparent and compliant. Drawing on explainable artificial intelligence (XAI) techniques, this study constructs a three-dimensional framework encompassing algorithmic transparency, regulatory embeddedness, and format heterogeneity, and outlines an overall approach to building a trustworthy service system for full-format robo-advisory. First, a hierarchical system (L1-L4) for the explainability of risk pricing models is established. As product risk increases, the system employs, respectively, generalized linear models or shallow decision trees, random forest models combined with the LIME method, dynamic Bayesian networks, and structural causal models with counterfactual reasoning, achieving dynamic alignment between model explainability and financial product risk levels. Second, a regulatory visibility index is designed as a nonlinear combination of algorithmic transparency, client cognition, and a product risk penalty, objectively measuring the degree of information asymmetry caused by the "black box". Depending on which threshold the index crosses, differentiated regulatory measures are applied: business suspension, supplementary explanation, continuous risk monitoring, or inclusion in a compliance whitelist. Third, differentiated explanation strategies are embedded across financial formats through a "core explanation module + format plugin" architecture. 
Explanations for low-risk products emphasize risk exposure and fund sources; products with a broad investor base require layered explanation, providing concise and intuitive factor descriptions for retail investors and detailed scenario simulations and stress test results for institutional investors; products with long investment cycles call for explanations of long-term risk transmission chains; and explanations for highly complex products should feature timeliness and high-dimensional visualization. This study provides theoretical support and policy implications for preventing algorithmic risks and promoting the trustworthy development of robo-advisory services.
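As an illustrative sketch of the L1-L4 hierarchy, the tiering can be expressed as a dispatch from product risk level to model family and explanation technique. The model choices follow the abstract; the 1-4 integer risk scale and the function name are assumptions made here for illustration:

```python
# Map product risk levels to the L1-L4 explainability tiers described
# in the abstract. The 1-4 integer risk scale is a hypothetical encoding.
TIERS = {
    "L1": ("generalized linear model or shallow decision tree",
           "intrinsically interpretable"),
    "L2": ("random forest", "post-hoc local explanations via LIME"),
    "L3": ("dynamic Bayesian network", "temporal probabilistic reasoning"),
    "L4": ("structural causal model", "counterfactual reasoning"),
}

def explainability_tier(risk_level: int) -> tuple:
    """Return (tier, model family, explanation technique) for a product."""
    if not 1 <= risk_level <= 4:
        raise ValueError("risk_level must be between 1 and 4")
    tier = "L%d" % risk_level
    model, technique = TIERS[tier]
    return tier, model, technique
```

The dispatch makes the abstract's alignment principle operational: explainability requirements rise monotonically with product risk rather than being fixed per model.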
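A minimal sketch of the regulatory visibility index and its threshold-based measures, assuming a multiplicative nonlinear form with an exponential risk penalty; the functional form, exponents, and cut-off values below are illustrative assumptions, not the paper's specification:

```python
import math

def regulatory_visibility_index(transparency: float,
                                cognition: float,
                                risk_level: float) -> float:
    """Nonlinear combination of algorithmic transparency, client
    cognition, and a product risk penalty; all inputs in [0, 1].
    Exponents and the penalty coefficient are assumed for illustration."""
    penalty = math.exp(-1.5 * risk_level)  # higher product risk lowers visibility
    return (transparency ** 0.6) * (cognition ** 0.4) * penalty

def regulatory_measure(rvi: float) -> str:
    """Map index thresholds to the four differentiated measures
    (threshold values are hypothetical)."""
    if rvi < 0.2:
        return "business suspension"
    if rvi < 0.4:
        return "supplementary explanation"
    if rvi < 0.7:
        return "continuous risk monitoring"
    return "inclusion in compliance whitelist"
```

Under this form, an opaque high-risk product scores low enough to trigger suspension, while a transparent low-risk product with well-informed clients approaches the whitelist band, matching the graduated response described above.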
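The "core explanation module + format plugin" architecture and the format-specific strategies above can be sketched as a registry in which each format plugin augments a shared base explanation; the class, function, and format names are hypothetical:

```python
class ExplanationCore:
    """Shared core explanation module; format plugins register against it."""
    def __init__(self):
        self._plugins = {}

    def register(self, product_format: str):
        def decorator(plugin):
            self._plugins[product_format] = plugin
            return plugin
        return decorator

    def explain(self, product_format: str, **context) -> dict:
        base = {"format": product_format}  # shared explanation payload
        plugin = self._plugins.get(product_format)
        return plugin(base, **context) if plugin else base

core = ExplanationCore()

@core.register("low_risk")
def low_risk_plugin(base, **context):
    base["focus"] = ["risk exposure", "fund sources"]
    return base

@core.register("broad_investor_base")
def layered_plugin(base, audience="retail", **context):
    # Layered explanation: concise factor descriptions for retail clients,
    # scenario simulations and stress tests for institutional clients.
    base["focus"] = (["concise factor descriptions"] if audience == "retail"
                     else ["scenario simulations", "stress test results"])
    return base

@core.register("long_cycle")
def long_cycle_plugin(base, **context):
    base["focus"] = ["long-term risk transmission chain"]
    return base
```

Keeping the core module format-agnostic while plugins carry format heterogeneity mirrors the abstract's design: new financial formats add a plugin rather than modifying the shared explanation pipeline.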