Hi, my name is Maciej and here on the Predica team I act as a dev lead and .NET developer. One of the key focuses of Predica is the implementation of Identity and Access Management solutions based on the Microsoft software stack, so I had to learn my way into this world from a developer’s perspective. Here on the blog I will try to share part of this knowledge for the benefit of the FIM crowd… or just for fun (hopefully both).
If you are a developer with an urgent need to communicate with FIM programmatically via its web services, you are most probably already familiar with the FIM 2010 Resource Management Client library. It is a reference FIM client implementation provided by Microsoft, open sourced and available on CodePlex.
Regardless of whether it is the best solution, it is definitely the recommended way of dealing with FIM objects, as opposed to creating your own (if anyone chooses to do that, I hope it will be shared on CodePlex or GitHub as well). However, while working with this reference solution we had several big BUT moments when we came across some implementation or usage details. Because of these moments, we at Predica decided to create an intermediate layer between fim2010client and our application code.
This is an introductory post about our custom library, which overcomes the standard fim2010client’s shortcomings. While it is aimed mainly at developers working with the FIM web service, it is also intended to give architects and other people working with FIM the information they need to make the right implementation choices.
4 main reasons not to use FIM 2010 Resource Management Client directly
#1: The DefaultResourceTypeFactory class

One of the biggest pains in building solutions with the client in question is its DefaultResourceTypeFactory class. It is responsible for creating instances of concrete FIM resource types by their names. This could be done in a very elegant way that finds the correct types at runtime, but instead the default implementation forces the developer to change the factory’s code each time a new resource type is added to the solution. Take a moment to see for yourself: DefaultResourceTypeFactory.cs. This is not the most developer-friendly way to provide such functionality, to say the least.
This could be easily automated with PowerShell scripting, but again… is it really necessary?
Luckily, the fim2010client team was sane enough to extract this contract into a separate IResourceTypeFactory interface. We solved the issue by implementing our own factory that handles everything internally, with no modifications required, by scanning loaded assemblies and finding the requested types using well-defined, easy-to-follow conventions. Various performance tricks can be applied here so that the reflection logic is executed only once.
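To illustrate the idea, here is a minimal, hypothetical sketch of such a convention-based factory. The class and member names are made up for this example and are not the actual Predica.FimCommunication API:

```csharp
using System;
using System.Collections.Concurrent;
using System.Linq;

// Hypothetical convention-based factory: resolves resource types by name
// at runtime instead of requiring hand-written per-type code.
public class ConventionResourceTypeFactory
{
    // Cache resolved types so the assembly scan runs at most once per name.
    private static readonly ConcurrentDictionary<string, Type> Cache =
        new ConcurrentDictionary<string, Type>();

    public object CreateResource(string typeName)
    {
        var type = Cache.GetOrAdd(typeName, name =>
            AppDomain.CurrentDomain.GetAssemblies()
                .SelectMany(GetTypesSafe)
                .FirstOrDefault(t =>
                    string.Equals(t.Name, name, StringComparison.OrdinalIgnoreCase)));

        if (type == null)
            throw new InvalidOperationException("No resource type found for: " + typeName);

        return Activator.CreateInstance(type);
    }

    private static Type[] GetTypesSafe(System.Reflection.Assembly assembly)
    {
        // Some assemblies cannot be fully reflected over; skip them.
        try { return assembly.GetTypes(); }
        catch (System.Reflection.ReflectionTypeLoadException) { return Type.EmptyTypes; }
    }
}
```

With a factory like this, a new resource type is picked up automatically as long as it follows the naming convention – no factory changes needed.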
#2: Unmerged bug fixes

Bugs are a natural consequence of writing code – no argument here. One of the advantages of open sourcing a project is allowing outsiders to find bugs, fix them and publish the fixes for everyone to use. We found two bugs, fixed them and sent a unit-tested pull request on CodePlex, but unfortunately it was not merged into the project. To be honest, the project does not look very alive after all.
In this case we are just using our own copy of the library, with our modifications applied.
#3: Initialization performance
The project wiki contains a very simple code sample that shows how a developer should use the default implementation to call FIM web services. It recommends calling the RefreshSchema() method each time a client instance is created. That would be fine if this call did not result in downloading at least 1 MB of XML! Getting rid of this requirement alone gave our applications a serious boost.
You can imagine that the FIM schema is not something that changes so often that we should download and parse it a million times during the client application’s lifetime. Why not just fetch it once, when the client is first initialized, and then reuse it for all subsequently created instances? We did not find any argument against this approach, and it paid off in a massive performance gain.
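A simple way to sketch this caching is a shared Lazy&lt;T&gt; so the expensive work happens once per process. The names here are illustrative, not the actual library code:

```csharp
using System;

// Hypothetical schema cache: the expensive download/parse happens once
// per process, no matter how many client instances are created afterwards.
public class CachedSchemaProvider
{
    private static readonly Lazy<string> SchemaCache =
        new Lazy<string>(DownloadAndParseSchema, isThreadSafe: true);

    public string Schema
    {
        get { return SchemaCache.Value; }
    }

    private static string DownloadAndParseSchema()
    {
        // In the real client this is where RefreshSchema() would download
        // and parse the ~1 MB schema XML; a placeholder stands in here.
        return "<schema/>";
    }
}
```

Every client instance then shares the same parsed schema instead of re-downloading it.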
#4: Paging performance
There is not much that can be done about web-service, XML-oriented communication from a developer’s point of view. However, we found one place that can greatly enhance the overall FIM-communication experience: the size of the batch that the library uses when fetching large portions of data from FIM.
This setting is crucial for achieving the best results, and it varies from project to project, so we were quite surprised to see that the default value is hardcoded in the Constants.WsEnumeration.DefaultMaxElements field and cannot be fine-tuned without recompiling the whole solution.
We moved this value to the application configuration, and experimenting with different values produced very satisfying results. To achieve this, we had to modify the EnumerationRequest.cs constructor so that it first looks into the application configuration for an override of the default.
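Conceptually, the override logic can look like this. The setting key and the default value below are assumptions for illustration, not the library's actual ones:

```csharp
using System;

// Hypothetical resolution of the enumeration page size: prefer a value
// from application configuration, fall back to the hardcoded default.
public static class EnumerationSettings
{
    public const int DefaultMaxElements = 100; // assumed library default

    // 'configuredValue' would typically come from something like
    // ConfigurationManager.AppSettings["Fim.MaxElements"] (assumed key).
    public static int ResolveMaxElements(string configuredValue)
    {
        int parsed;
        if (!string.IsNullOrEmpty(configuredValue)
            && int.TryParse(configuredValue, out parsed)
            && parsed > 0)
        {
            return parsed;
        }
        return DefaultMaxElements;
    }
}
```

The constructor can then call this resolver instead of reading the constant directly, which makes the batch size tunable per deployment.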
3 additional reasons to use Predica.FimCommunication
#1: Diagnostic logging

We all know the weird feeling in our guts when we get a performance-related bug report from a client that says: “hello dear developer, your application does not perform well because it is so damn slow on my machine!”. What can we do with such a poor description? Not much.
This is why we built very extensive diagnostic logging into our library. Each FIM operation is sent to the logging framework and can be processed in various ways. We also added the concept of “long queries”, which are logged at WARN instead of DEBUG level. The default threshold for a “long query” is 2500 ms, but it can be changed in the application configuration file.
Such logs contain lots of useful information: the XPath query sent to FIM, paging/sorting data, the list of requested attribute names and the exact query duration. When used in a web application, we also add URL and User fields to the log messages. This allowed us to fine-tune FIM queries and squeeze the best performance out of the most resource-greedy scenarios based only on log files sent by our clients.
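The core of such logging can be sketched as a thin timing wrapper around every query. The class and method names are invented for this example; the real library hooks into its own query pipeline and logging framework:

```csharp
using System;
using System.Diagnostics;

// Hypothetical timing wrapper: measures each FIM query and escalates
// the log level when it crosses the "long query" threshold.
public class QueryLogger
{
    // Matches the 2500 ms default mentioned above; configurable in the real library.
    public static TimeSpan LongQueryThreshold = TimeSpan.FromMilliseconds(2500);

    public T Measure<T>(string xpath, Func<T> query)
    {
        var watch = Stopwatch.StartNew();
        try
        {
            return query();
        }
        finally
        {
            watch.Stop();
            // "Long queries" are escalated to WARN so they stand out in log files.
            var level = watch.Elapsed > LongQueryThreshold ? "WARN" : "DEBUG";
            Console.WriteLine("[{0}] FIM query '{1}' took {2} ms",
                level, xpath, watch.ElapsedMilliseconds);
        }
    }
}
```

Because the wrapper sits around every query, slow scenarios show up in client logs without any extra instrumentation in application code.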
A short side note: we use NLog because… well, it is the best logging framework available for .NET :). You can easily send NLog’s output to the standard DEBUG stream, console, file, database, e-mail, HTTP endpoint… wherever you like. We find it particularly useful to write every single log entry to DEBUG and filter it with DebugView from SysInternals on dev machines, and to configure a comfortable log file structure on client servers.
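For reference, an NLog configuration along those lines might look like this. The target names and file paths are just an example, not our exact setup:

```xml
<nlog xmlns="http://www.nlog-project.org/schemas/NLog.xsd"
      xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <targets>
    <!-- OutputDebugString output is what DebugView captures on dev machines -->
    <target name="debug" xsi:type="OutputDebugString"
            layout="${longdate} ${level} ${logger} ${message}" />
    <!-- one file per day for client servers -->
    <target name="file" xsi:type="File"
            fileName="logs/fim-${shortdate}.log"
            layout="${longdate} ${level} ${message}" />
  </targets>
  <rules>
    <logger name="*" minlevel="Debug" writeTo="debug" />
    <logger name="*" minlevel="Warn" writeTo="file" />
  </rules>
</nlog>
```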
#2: Unit tests
We strongly believe in the power of unit tests and test-driven development. Some of us may even be perceived as TDD zealots (count me as one).
One way or another: while the original fim2010client implementation has some unit tests, the suite is very far from comprehensive (start here to browse it). Our goal was to test every single aspect of our library, literally bombarding a given FIM instance with various queries and commands. This led to some interesting findings about FIM’s behavior.
A complete unit test suite can obviously serve as perfect demo code for developers who want to use our FimClient APIs.
#3: Runtime configuration
The original fim2010client requires its configuration to be defined in the application configuration file. There is nothing wrong with this approach – it is WCF, after all. But some of our solutions required gathering FIM connection information (URL, username, password) from the user at runtime. We extended the DefaultClient implementation to use values given in code rather than requiring static, config-file-only information.
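In code, passing the connection data could look roughly like this. The FimConnectionInfo class is a hypothetical name for illustration, not the actual extension we made to DefaultClient:

```csharp
using System;

// Hypothetical container for connection data gathered at runtime,
// e.g. from a login form, instead of from the app.config file.
public class FimConnectionInfo
{
    public FimConnectionInfo(string address, string username, string password)
    {
        if (string.IsNullOrEmpty(address))
            throw new ArgumentException("FIM service address is required", "address");

        Address = address;
        Username = username;
        Password = password;
    }

    public string Address { get; private set; }
    public string Username { get; private set; }
    public string Password { get; private set; }
}
```

An extended client can then accept such an object in its constructor and configure the underlying WCF channel from it, instead of reading everything from the static configuration file.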
I hope you enjoyed this post and are eager to get your hands on this library. Soon I will follow up with examples of how to use it. As part of this series we will also publish the implementation on GitHub for the FIM community to use. It has been very beneficial for us in many projects, so we think it might benefit the community as well, and perhaps encourage others to work on FIM client libraries too.
Hopefully this post has made you hungry for more. So stay tuned on our blog and until then: may the FIM be with you!