This report provides a commentary on current and potential uses of artificial intelligence (AI) in gaming, including an assessment of the quality and strength of evidence for detecting risk of harm from various aspects of player behavior, and an exploration of the use of financial data to detect player risk. AI is already embedded in core business functions, with advanced personalization offering both potential benefits for customer experience and potential increased risk of harm to vulnerable populations. Many indicators recommended for detecting player risk lack supporting evidence; payment-related indicators have the strongest evidence for risk detection. While advances in technology show promise for financial risk identification, barriers remain, including the lack of cross-operator data sharing, privacy concerns, consent issues, and regulatory constraints. Recommendations for regulators are offered in each of these areas.