Hi, Adam here again with the FIM Reporting series. In the first post I covered the basic concepts of the reporting feature included in FIM 2010 R2. After a short break I'm getting back to this topic, this time with a brief description of the reporting architecture and data flow.
With the FIM 2010 R2 release, Microsoft set out to supply a feature responsible for tracking and reporting changes made to FIM resources. The solution wasn't built from scratch; it is based on Microsoft System Center. More precisely, the data flow and data storage are handled by System Center.
FIM Reporting can be deployed in two kinds of topologies. Depending on the size of the production environment, Microsoft recommends two solutions: one for small/medium deployments and one for large ones. More detailed information can be found here.
Briefly, in a small or medium-size deployment FIM and System Center Service Manager (SCSM) are hosted on one machine, while the System Center Data Warehouse (SCDW) is hosted on a second machine (see figure 1).
For large deployments the whole environment is hosted on three servers: the difference is that the FIM Service and SCSM are split onto separate machines (see the figure below).
Schema Update and Data Flow
As I mentioned before, System Center Service Manager is used to handle the data flow from FIM to the System Center Data Warehouse. In general, the whole process consists of the following steps:
Data Warehouse schema update
In order to extend the Data Warehouse schema to log new FIM resources, it is required to supply a Management Pack (MP) definition. Management Packs are not a new feature built into FIM Reporting; they are part of the System Center solution. Management Pack files will be described in the third post. For a more precise description I refer you to the TechNet documentation.
- The ready Management Pack files describing the Data Warehouse extension are imported (using PowerShell) into System Center Service Manager.
- Based on the MP files, appropriate schema changes are made in the System Center Data Warehouse databases.
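To make the two steps above concrete, here is a minimal sketch of an MP import using the community SMLets PowerShell module; the file path, pack name and server name are placeholders for your own environment, and FIM 2010 R2 ships its own deployment scripts for this step, so treat this purely as an illustration of the mechanism:

```powershell
# Sketch only: assumes the SMLets module is installed and the account
# has Service Manager administrator rights. All names are placeholders.
Import-Module SMLets

# Import a Management Pack file that describes the schema extension.
Import-SCSMManagementPack -Fullname 'C:\MPs\FIM.Reporting.Extension.mp' `
                          -ComputerName 'scsm01.contoso.com'

# Verify the pack is now registered in Service Manager.
Get-SCSMManagementPack -ComputerName 'scsm01.contoso.com' |
    Where-Object { $_.Name -like '*FIM*' }
```

Once the pack is imported, Service Manager's own MP synchronization process propagates the schema change to the Data Warehouse databases.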
The Data Flow process is responsible for transferring FIM data from the FIM Service database and, in the end, into the DWDataMart database. It all takes place in the following manner.
- Data is transferred from the FIM Service database to System Center Service Manager. The appropriate bindings (connections between FIM resources and DW fields) are described in the MP definitions.
- The transferred data sequentially passes through the DWStagingAndConfig, DWRepository and DWDataMart databases, where it is subjected to complex transform and load operations, which are out of the scope of this blog series. The second reason is that I have never worked with System Center and didn't try to deep-dive into this topic. For more details about the System Center Data Warehouse ETL process I refer you to the TechNet documentation.
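Although the internals of the ETL are out of scope here, Service Manager does expose its Extract/Transform/Load jobs through the Data Warehouse cmdlets, which is handy for checking whether FIM data is actually moving between those databases. A minimal sketch, assuming the Service Manager DW PowerShell module is available; the module path, server name and job name below are placeholders taken from a typical deployment:

```powershell
# Sketch only: run on (or against) the Data Warehouse management server.
# The install path below is a placeholder for your own environment.
Import-Module 'C:\Program Files\Microsoft System Center\Service Manager 2010\Microsoft.EnterpriseManagement.Warehouse.Cmdlets.psd1'

# List the ETL jobs (extract, transform, load, MP sync) and their status.
Get-SCDWJob -ComputerName 'scdw01.contoso.com'

# Kick off a specific job instead of waiting for its schedule
# (take the exact job name from the Get-SCDWJob output).
Start-SCDWJob -JobName 'Transform.Common' -ComputerName 'scdw01.contoso.com'
```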
- Based on the data transferred to DWDataMart, we can build custom reports, which we can store on a Report Server. It is also possible to upload reports to the System Center reports repository by using an MP.
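As an illustration of that last step: a custom report usually boils down to a SQL query against the DWDataMart database. The sketch below uses Invoke-Sqlcmd from the SQL Server PowerShell tools; the server name is a placeholder and the view name is purely hypothetical, since the actual dimension and fact views depend on the Management Packs you imported:

```powershell
# Sketch only: requires the SQLPS / SqlServer module; the server name
# and the view name are placeholders for your own environment.
Invoke-Sqlcmd -ServerInstance 'scdw01.contoso.com' `
              -Database 'DWDataMart' `
              -Query 'SELECT TOP 10 * FROM dbo.FIMUserDim'
              # dbo.FIMUserDim is a hypothetical view name; inspect your
              # own DWDataMart to find the views your MPs created.
```

A query like this can then be embedded in an .rdl report published to the Report Server, or packaged into an MP and uploaded to the System Center reports repository.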
The whole process is presented on the diagram below (click to enlarge):
This short description is intended to give a simple overview of the FIM 2010 R2 Reporting architecture. For a more detailed and exhaustive description, I suggest referring to the TechNet articles:
- About FIM Reporting deployment solutions: Planning for Forefront Identity Manager 2010 R2 Reporting
- About Reporting Architecture and Data Flow: FIM 2010 R2 Reporting Architecture and DataFlow
- About System Center products.
If you are interested in this topic, stay tuned … we will get back to it soon with more details on how to get this service running and doing what we want with our data.