AI Financial Analysis

Lately, I've been working on a project to integrate Artificial Intelligence into an operational data warehouse for analysis. It has two prongs - one for financial analysis and one for operational business data. The client is a real estate firm with business data on properties in which they are involved with sales or leases, representing either the owner or the buyer or lessee. So there are two AI agents. The first performs financial analysis on the firm's financial data, which includes the transactions recorded in their accounting system. The second lets the user ask questions about, or perform analysis on, their operational real estate data - questions like how many leases closed in a certain year or under a certain broker, or more complex queries that aggregate data over time.

The project so far has been a success, I would say. Our initial deployment lets users make financial inquiries about income statement or balance sheet items - questions about revenue, net income, operating income, and the like, or about accounts receivable or cash balances. The output comes in two forms: a table, e.g. revenue over time by month, and a summary from yet another AI agent that describes in natural language what the data shows - not unlike what might appear in a financial report.
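The post doesn't show the underlying schema, but a "revenue over time by month" table like the one described could come from a query along these lines. This is a minimal sketch against a hypothetical `transactions` table - the table and column names are my assumptions, not the client's actual accounting schema:

```python
import sqlite3

# Hypothetical schema standing in for the real accounting data.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE transactions (
        posted_on TEXT,   -- ISO date, e.g. '2024-03-15'
        account   TEXT,   -- e.g. 'revenue', 'accounts_receivable'
        amount    REAL
    )
""")
conn.executemany(
    "INSERT INTO transactions VALUES (?, ?, ?)",
    [
        ("2024-01-10", "revenue", 1200.0),
        ("2024-01-22", "revenue", 800.0),
        ("2024-02-05", "revenue", 1500.0),
        ("2024-02-18", "accounts_receivable", 400.0),
    ],
)

# Month-over-month revenue: bucket by calendar month and sum.
rows = conn.execute("""
    SELECT strftime('%Y-%m', posted_on) AS month,
           SUM(amount)                  AS revenue
    FROM transactions
    WHERE account = 'revenue'
    GROUP BY month
    ORDER BY month
""").fetchall()

for month, revenue in rows:
    print(month, revenue)
```

The tabular result (month, revenue) is what the summarizing agent would then describe in natural language.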

The challenges have been varied, but the main one right now is giving the AI agent enough "training" to correlate a natural language question with the appropriate SQL query and output. When is the user asking for month-over-month results - like month-over-month revenue - and when are they asking a summary question, like a total for the prior year? How people phrase questions, and what they mean, varies widely - two people may want the same output but word their questions in wildly different ways.
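To make the routing problem concrete, here is a deliberately toy sketch that distinguishes just the two query shapes mentioned above - a time series versus a summary total. The keyword rules and the template SQL (against a hypothetical `transactions` table) are my own illustration; the actual agent would use an AI model trained on example question/SQL pairs, which is exactly why phrasing variety is the hard part:

```python
import re

# Phrases that suggest the user wants a series over time rather than
# a single total. A rule list like this is far too brittle for real
# users - hence the need to train a model on example pairs.
TIME_SERIES_HINTS = re.compile(
    r"month[- ]over[- ]month|by month|trend|over time", re.IGNORECASE
)

def classify_question(question: str) -> str:
    """Return 'time_series' or 'summary' for a revenue question."""
    if TIME_SERIES_HINTS.search(question):
        return "time_series"
    return "summary"

def sql_for(question: str) -> str:
    """Map the detected intent to a template query (hypothetical schema)."""
    if classify_question(question) == "time_series":
        return ("SELECT strftime('%Y-%m', posted_on) AS month, SUM(amount) "
                "FROM transactions WHERE account = 'revenue' GROUP BY month")
    return "SELECT SUM(amount) FROM transactions WHERE account = 'revenue'"

print(classify_question("Show me month-over-month revenue"))   # time_series
print(classify_question("What was total revenue last year?"))  # summary
```

The same intent can be phrased in ways no keyword list anticipates ("how did revenue move through the year?"), which is what the training examples have to cover.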

So far, all the training has come from questions and queries I defined myself, based on how I think users will behave. Soon we are releasing the product to executives and other users, and we will gather feedback on performance - is the AI agent responding in a way that meets the user's intent, or was the user looking for something different? That feedback will drive the ongoing tweaking and tuning of the AI model that chooses the appropriate SQL for a given user input.
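The post doesn't say what form that feedback will take, so the fields below are an assumption - but a small record like this is one way to capture it, so that mismatches become candidate question/SQL pairs for the next round of tuning:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical feedback record - the exact fields are my assumption,
# not the project's actual design.
@dataclass
class QueryFeedback:
    question: str        # what the user actually asked
    generated_sql: str   # what the agent produced
    met_need: bool       # did the answer match the user's intent?
    note: str = ""       # optional free-form comment from the user
    logged_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

feedback_log: list[QueryFeedback] = []

def record_feedback(question, sql, met_need, note=""):
    fb = QueryFeedback(question, sql, met_need, note)
    feedback_log.append(fb)
    return fb

record_feedback("revenue by broker for 2024",
                "SELECT ...",  # placeholder for whatever the agent generated
                met_need=False,
                note="wanted leases, not sales")

# Entries where met_need is False are the interesting ones: each is a
# real user phrasing paired with the SQL the model got wrong.
mismatches = [f for f in feedback_log if not f.met_need]
print(len(mismatches))
```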

Personally, I can't wait to see what happens next. Once we gather more training data from real user experience, I can add it to the AI model and, hopefully, improve how well it answers users' questions.

Author: Marcus

Post Date: 2025-05-26
