FimExplorer

In my previous post I explained how you can execute any query in FIM and see the results. Remember how cumbersome that was?

Today I introduce our next internal project that has just been open sourced: FimExplorer. Here is what it looks like (bear in mind that it’s not supposed to be pretty ;) ):

And here is what it can do:

  • run any XPath query against FIM
    • you can choose which attributes will be fetched; all are fetched by default
  • find objects by ID
  • display results in a grid
  • display single object information in a dialog (double-click the grid row)
  • navigate through references (click on the ID link)
  • export displayed results to XML (this produces the same output as the FIM migration cmdlets Export-FIMConfig / ConvertTo-FIMResource / ConvertFrom-FIMResource; see MSDN for more info)
  • import objects from XML (generated by FimExplorer or the FIM cmdlets) and show them in the grid; this can be useful for “offline” analysis
  • it does not have to run on a machine with FIM installed

You can find the code on GitHub and CodePlex. A compiled, ready-to-run version is also available (on CodePlex). Of course, you are more than welcome to send contributions via pull requests.

Before running the application you need to modify its configuration file (Predica.FimExplorer.exe.config). Just replace the initial settings with your own FIM URL, username and password.

Important (at least for those who will explore this project’s code):

It uses our FimCommunication library under the hood, referenced as a Git submodule. So after you clone the FimExplorer repository, make sure to also run the “git submodule init” and “git submodule update” commands to download it!

Executing any XPath queries in FIM, the hard way

Being able to execute any query in FIM and get the results can be crucial when developing FIM-based solutions. Analysis based on real data can be very helpful during debugging as well as designing FIM objects to complete a certain task.

<sarcasm mode=”on”>
Fortunately Microsoft knows this is the case and provided an easy way to send XPath to FIM and get the result set.
</sarcasm mode=”off”>

Of course the above statement is not really true. In order to get XPath query results in FIM you need to:

  • Go to the Search Scopes configuration

  • Start creating a new Search Scope and fill the first screen with any data:

  • Enter the query on the 2nd screen and “TAB away” – move focus out of the field. The query will now be executed and you will be presented with the results in a standard FIM grid:

And this works. It’s not the most comfortable solution, but it gets the job done. However, it adds massive overhead when all you need is a quick query or data check.

Quick Editor’s Note (from Tomek)

#1

Of course … there is still PowerShell which can be used here.

#2

Probably the most annoying thing here is that when you make a mistake in the query or its syntax, or the query is simply not something FIM can process, all you get is this error message:

Now it is up to you to figure out what’s wrong, which in the case of complex queries can sometimes be problematic.

For now this is all I have to share with you. An alternative solution to this problem will be included in my next post. And yes, this will be another Predica open source project!

Happy querying!

ADFS SharePoint attribute store

Hi guys, it’s Paweł again – your SharePoint architect trying to make his own little name in the identity world. In one of my previous posts from the SharePoint meets identity series, I wrote about ADFS and its features. One of them was the ability to perform various operations on claims using attribute stores. During the course of the project which inspired this series, I had the challenging opportunity to create a custom attribute store that would provide data from SharePoint lists. I thought it would be great to share this invention with the outside world (which should, of course, appreciate it as well), so this post will be all about that.

Continue reading

Predica.FimCommunication code samples

Hi, it’s Maciek again here with some FIM .NET developer goodies.

As I promised in my previous post, our FimCommunication library is now available on GitHub!

It’s time to see how it works. Bear in mind that we follow the KISS principle whenever we can. Therefore it was not our intention to create, for example, a full-blown LINQ provider for FIM. It could be fun, but its usefulness is questionable. You’re welcome to implement it and send us a pull request if you wish :).

Basic usage

These are some short code snippets for performing the basic operations with our FimClient:

[sourcecode language=”csharp”]
// XPath query
var client = new FimClient();
var allPeople = client.EnumerateAll("/Person");

// Find by ID
var person = (RmPerson)client.FindById(personId);
[/sourcecode]

If no object with this ID is found, null will be returned.
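
So a lookup by ID can be guarded with a simple null check, for example:

[sourcecode language=”csharp”]
var maybePerson = client.FindById(personId) as RmPerson;
if (maybePerson == null)
{
    // no object with this ID exists in FIM
}
[/sourcecode]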

Create

[sourcecode language=”csharp”]
var newPerson = new RmPerson()
{
    FirstName = "person first name"
};
_client.Create(newPerson);
[/sourcecode]

Update

[sourcecode language=”csharp”]
var person = (RmPerson)_client.FindById(personId);
var personChanges = new RmResourceChanges(person);
personChanges.BeginChanges();
person.LastName = "new last name";
_client.Update(personChanges);
[/sourcecode]

Delete

[sourcecode language=”csharp”]
var personToDelete = new RmPerson {
    ObjectID = new RmReference(personId)
};
_client.Delete(personToDelete);
[/sourcecode]

Paging

The original fim2010client does not provide easy access to executing paged queries, so we introduced it in our library. You can use it like this:

[sourcecode language=”csharp”]
var first10People = _client.EnumeratePage(
    "/Person", Pagination.FirstPageOfSize(10), SortingInstructions.None
);

var second3People = _client.EnumeratePage(
    "/Person", new Pagination(1, 3), SortingInstructions.None
);

var allPeople = _client.EnumeratePage(
    "/Person", Pagination.All, SortingInstructions.None
);
[/sourcecode]

Filtering

We created a little infrastructure code around the filtering logic. For example, you can compose an XPath query like this:

Simple filtering

[sourcecode language=”csharp”]
var firstNameFilter = new Filter(
    RmPerson.AttributeNames.FirstName.Name, "John", FilterOperation.Equals
);

var lastNameFilter = new Filter(
    RmPerson.AttributeNames.LastName.Name, "mit", FilterOperation.Contains
);

var createdFilter = new Filter(
    RmResource.AttributeNames.CreatedTime.Name, "2012-04-18",
    FilterOperation.Equals, AttributeTypes.DateTime
);

string xpath = "/Person[" + firstNameFilter.ComposeXPath()
    + " and "
    + lastNameFilter.ComposeXPath()
    + " and "
    + createdFilter.ComposeXPath()
    + "]";

var enumerateAll = _client.EnumerateAll(xpath).ToList();
[/sourcecode]

This requires joining the conditions manually with the correct and/or operators, but we did not need anything more complex, such as grouping conditions in aggregate filters.

“Contains” operation

The “Contains” filter operation is particularly interesting, as it uses a workaround to “cheat” FIM, which by default does not handle the “contains” XPath function properly. The above filter produces the following result:

[sourcecode language=”csharp”]
/Person[FirstName = 'John' and starts-with(LastName, '%mit') and CreatedTime >= '2012-04-18T00:00:00' and CreatedTime <= '2012-04-18T23:59:59']
[/sourcecode]

Filtering by references

You can also use a special “reference” syntax to filter objects by their references. This sample filter looks for people whose manager’s first name is “Mary”:

[sourcecode language=”csharp”]
var managerFirstNameFilter = new Filter(
    "[ref]Manager,Person,FirstName", "Mary", FilterOperation.Equals
);

var query = "/Person[" + managerFirstNameFilter.ComposeXPath() + "]";
[/sourcecode]

The resulting XPath query:

[sourcecode language=”csharp”]
/Person[Manager=/Person[FirstName = 'Mary']]
[/sourcecode]

You can browse more Filter usage samples in the FilterTests class, available here.

Sorting

Paging and sorting complement each other, so sorting had to be just as simple. We introduced our own sorting structures, used like this:

[sourcecode language=”csharp”]
var peopleSortedDescByFirstName = _client.EnumeratePage<RmPerson>("/Person"
    , Pagination.All
    , new SortingInstructions(RmPerson.AttributeNames.FirstName.Name, SortOrder.Descending)
);
[/sourcecode]

It can be combined with paging:

[sourcecode language=”csharp”]
var thirdPageOfPeopleSortedAscByLastName = _client.EnumeratePage<RmPerson>("/Person"
    , new Pagination(Pagination.FirstPageIndex + 2, 4)
    , new SortingInstructions(RmPerson.AttributeNames.LastName.Name, SortOrder.Ascending)
);
[/sourcecode]

Sorting unit tests can be found here.

Selecting attributes to fetch

One thing we’ve learned along the way is that loading “full” objects from FIM can have a negative impact on performance. It is often necessary to fine-tune a query so that it fetches only a few selected attributes. The diagnostic logs described in the previous post can help pinpoint such performance-critical locations.

One attribute is particularly important: object type. It is required because the ResourceTypeFactory would not know which type to construct if this value was missing. But you don’t have to worry about that: it is taken care of by the FimClient internals, and this attribute is always fetched whether you requested it or not.

[sourcecode language=”csharp”]
var peopleWithFirstName = _client.EnumerateAll(
    "/Person"
    , new AttributesToFetch(RmPerson.AttributeNames.FirstName.Name)
);

var attributes = new AttributesToFetch(RmPerson.AttributeNames.LastName.Name)
    .AppendAttribute(RmPerson.AttributeNames.AccountName.Name);

var peopleWithSelectedAttributes = _client.EnumeratePage(
    "/Person"
    , Pagination.All
    , SortingInstructions.None
    , attributes
);
[/sourcecode]

The full behavior of the AttributesToFetch class can be explored by browsing its tests.

This concludes the usage scenarios of our FIM communication library. Let us know what you think about it. Any suggestions, ideas?

Blog picture: (cc) eisenrah

Introducing Predica.FimCommunication library

Hi, my name is Maciej, and here on the Predica team I act as a dev lead and .NET developer. One of Predica’s key focus areas is the implementation of Identity and Access Management solutions based on the Microsoft software stack, so I had to learn my way into this world from a developer’s perspective. Here on the blog I will try to share part of this knowledge for the benefit of the FIM crowd … or just for fun (hopefully both).

If you are a developer with an urgent need to communicate with FIM programmatically via its web services, you are most probably already familiar with the FIM 2010 Resource Management Client library. It is the reference FIM client implementation provided by Microsoft, open sourced and available on CodePlex.

Regardless of whether it is the best solution, it is definitely the recommended way of dealing with FIM objects, as opposed to creating your own (and if you do choose to create your own, I hope you will share it on CodePlex or GitHub as well). However, while working with this reference solution we had several big BUT moments when we came across certain implementation and usage details. Because of these moments, we at Predica decided to create an intermediate layer between fim2010client and our application code.

This is an introductory post about our custom library, which overcomes the standard fim2010client’s shortcomings. While it is aimed mainly at developers working with the FIM web service, it is also intended to give architects and other people working with FIM the information they need to make the right implementation decisions.

4 main reasons not to use FIM 2010 Resource Management Client directly

#1: ResourceTypeFactory

One of the biggest pains in creating solutions with the client in question is its DefaultResourceTypeFactory class. It is responsible for creating instances of concrete FIM resource types by their names. This could be done in a very elegant way that finds the correct types at runtime, but instead the default implementation forces the developer to modify its code each time a new resource type is added to the solution. Take a moment to see for yourself: DefaultResourceTypeFactory.cs. This is not the most developer-friendly way to provide such functionality, to say the least.

This could easily be automated with a PowerShell script, but again … is it really necessary?

Luckily, the fim2010client team was sane enough to extract this contract into a separate IResourceTypeFactory interface. We solved the issue by implementing our own factory that handles everything internally, with no modifications required, by scanning the loaded assemblies and finding the requested types using well-defined, easy-to-follow conventions. Various performance tricks can be applied here so that the reflection logic is executed only once.
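
To make the idea more concrete, here is a minimal sketch of such a convention-based factory. This is not the actual Predica implementation, and the interface below is a simplified stand-in for the real fim2010client contract; it only illustrates the “scan assemblies once, then look types up by name” approach:

[sourcecode language=”csharp”]
using System;
using System.Collections.Generic;
using System.Linq;

// Simplified stand-in for the fim2010client contract (illustration only;
// the real IResourceTypeFactory signature may differ).
public interface IResourceTypeFactory
{
    object CreateResource(string resourceTypeName);
}

// Convention-based factory: scan loaded assemblies once and map FIM resource
// type names ("Person", "Group", ...) to CLR types named "Rm" + type name.
public class ConventionResourceTypeFactory : IResourceTypeFactory
{
    private static readonly Dictionary<string, Type> _typesByName =
        AppDomain.CurrentDomain.GetAssemblies()
            .SelectMany(assembly => assembly.GetTypes())
            .Where(type => type.IsClass && !type.IsAbstract && type.Name.StartsWith("Rm"))
            .GroupBy(type => type.Name.Substring(2))
            .ToDictionary(group => group.Key, group => group.First(), StringComparer.OrdinalIgnoreCase);

    public object CreateResource(string resourceTypeName)
    {
        Type resourceType;
        if (_typesByName.TryGetValue(resourceTypeName, out resourceType))
        {
            return Activator.CreateInstance(resourceType);
        }

        throw new InvalidOperationException("Unknown resource type: " + resourceTypeName);
    }
}
[/sourcecode]

The dictionary is built only once per process, so the reflection cost is paid a single time and adding a new RmSomething class requires no factory changes at all.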

#2: Bugs

Bugs are a natural consequence of writing code – no argument here. One of the advantages of open sourcing a project is allowing outsiders to find them, fix them and publish the fixes for everyone to use. We found two bugs, fixed them and sent a unit-tested pull request on CodePlex, but unfortunately it was not merged into the project. To be honest, the project does not look very alive any more.

In this case we are just using our own copy of the library, with our modifications applied.

#3: Initialization performance

The project wiki contains a very simple code sample that shows how a developer should use the default implementation to call the FIM web services. It recommends calling the RefreshSchema() method each time a client instance is created. This would be OK if the call did not result in downloading at least 1 MB of XML! Getting rid of this requirement alone gave our applications a big performance boost.

You can imagine that the FIM schema is not something that changes so often that we should download and parse it a million times during a client application’s lifetime. Why not fetch it once, when the client is first initialized, and then reuse it for all subsequently created instances? We did not find any argument against this approach, and it paid off in a massive performance gain.
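
The pattern itself is nothing fancy. A rough sketch of what we mean (the class and member names below are hypothetical, not the actual library code):

[sourcecode language=”csharp”]
using System;

// Hypothetical sketch: cache the schema once per process instead of
// refreshing it for every client instance.
public static class FimSchemaCache
{
    private static readonly Lazy<string> _schemaXml =
        new Lazy<string>(DownloadSchemaFromFim, isThreadSafe: true);

    public static string SchemaXml
    {
        get { return _schemaXml.Value; } // downloaded at most once, then reused
    }

    private static string DownloadSchemaFromFim()
    {
        // In the real code this is where the expensive RefreshSchema() call
        // (at least 1 MB of XML) would happen.
        throw new NotImplementedException();
    }
}
[/sourcecode]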

#4: Paging performance

There is not much that can be done about web-service, XML-oriented communication from a developer’s point of view. However, we found one place that can greatly improve the overall FIM-communication experience: the size of the batch that the library uses when fetching large portions of data from FIM.

This setting is crucial for achieving the best results, and it varies from project to project, so we were quite surprised to see that the default value is hardcoded in the Constants.WsEnumeration.DefaultMaxElements constant and cannot be fine-tuned without recompiling the whole solution.

We moved this value to the application configuration, and experimenting with different values produced very satisfying results. To achieve this, we had to modify the EnumerationRequest.cs constructor so that it first looks into the application configuration for an override of the default.
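
A minimal sketch of that lookup (the appSettings key name and the fallback value below are our illustration, not necessarily what the library uses):

[sourcecode language=”csharp”]
using System.Configuration;

// Hypothetical helper: read the enumeration page size from app.config,
// falling back to a default when the setting is missing or invalid.
public static class EnumerationSettings
{
    private const int DefaultMaxElements = 20; // stand-in for Constants.WsEnumeration.DefaultMaxElements

    public static int GetMaxElements()
    {
        // Assumed key name, e.g. <add key="Fim.Enumeration.MaxElements" value="100" />
        string configured = ConfigurationManager.AppSettings["Fim.Enumeration.MaxElements"];

        int value;
        if (int.TryParse(configured, out value) && value > 0)
        {
            return value;
        }
        return DefaultMaxElements;
    }
}
[/sourcecode]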

 

3 additional reasons to use Predica.FimCommunication

#1: Diagnostics

We all know the weird feeling that appears in our guts when we get a performance-related bug report from a client that says “hello dear developer, your application does not perform well because it is so damn slow on my machine!”. What can we do with such a poor description? Not much.

This is why we built very extensive diagnostic logging into our library. Each FIM operation is sent to the logging framework and can be processed in various ways. We also added the concept of “long queries”, which are logged at WARN instead of DEBUG level. The default threshold for a “long query” is 2500 ms, but it can be changed in the application configuration file.

Such logs contain lots of useful information: the XPath query that is sent to FIM, paging/sorting data, the list of requested attribute names and the exact query duration. When the library is used in a web application, we also add URL and User fields to the log messages. This allowed us to fine-tune FIM queries and squeeze the best performance out of the most resource-greedy scenarios based only on the log files sent by our clients.
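
The underlying pattern is simple. Here is a rough sketch of the idea, with hypothetical names and using NLog (more on that below), rather than the library’s actual internals:

[sourcecode language=”csharp”]
using System;
using System.Diagnostics;
using NLog;

// Illustration of the "long query" logging idea, not the real FimClient internals.
public class QueryDiagnostics
{
    private static readonly Logger _logger = LogManager.GetCurrentClassLogger();

    // In the library this threshold comes from the application configuration file.
    private static readonly TimeSpan LongQueryThreshold = TimeSpan.FromMilliseconds(2500);

    public T Measure<T>(string xpath, Func<T> executeQuery)
    {
        var stopwatch = Stopwatch.StartNew();
        T result = executeQuery();
        stopwatch.Stop();

        string message = string.Format("FIM query [{0}] took {1} ms", xpath, stopwatch.ElapsedMilliseconds);

        if (stopwatch.Elapsed > LongQueryThreshold)
        {
            _logger.Warn(message);  // "long query" - worth a closer look
        }
        else
        {
            _logger.Debug(message);
        }
        return result;
    }
}
[/sourcecode]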

Short side note: we use NLog because… well, it is the best logging framework for .NET available :). You can easily send NLog’s output to the standard DEBUG stream, console, file, database, e-mail, HTTP endpoint… wherever you like. We find it particularly useful to send every single log entry to DEBUG and filter it with DebugView from SysInternals on dev machines, and to configure a comfortable log file structure on client servers.

#2: Unit tests

We strongly believe in the power of unit tests and test-driven development. Some of us may even be perceived as TDD zealots (count me as one).

One way or another, while the original fim2010client implementation has some unit tests, the suite is very far from comprehensive (start here to browse it). Our goal was to test every single aspect of our library, literally bombarding a given FIM instance with various queries and commands. This led to some interesting findings about FIM’s behavior.

A complete unit test suite can obviously also serve as perfect demo code for developers who want to use our FimClient APIs.
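
To give you a flavor of what such a test looks like, here is a sketch (assuming xUnit; the real suite may use a different framework and fixture setup):

[sourcecode language=”csharp”]
using System.Linq;
using Xunit;

// Illustrative integration-style test hitting a live FIM instance (hypothetical setup).
public class FimClientQueryTests
{
    [Fact]
    public void EnumeratePage_returns_no_more_items_than_the_requested_page_size()
    {
        var client = new FimClient(); // connection settings taken from configuration

        var firstPage = client.EnumeratePage(
            "/Person", Pagination.FirstPageOfSize(5), SortingInstructions.None
        );

        Assert.True(firstPage.Count() <= 5);
    }
}
[/sourcecode]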

#3: Runtime configuration

The original fim2010client requires its configuration to be defined in the application configuration file. There is nothing wrong with this approach – it is WCF, after all. But some of our solutions required gathering the FIM connection information (URL, username, password) from the user at runtime. We extended the DefaultClient implementation to use values supplied in code rather than requiring static, config-file-only settings.
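
In practice this means the connection details can be passed in code. A sketch of the idea (the constructor arguments shown here are an assumption – check the library source for the exact signature):

[sourcecode language=”csharp”]
// Hypothetical usage: connection settings gathered at runtime (e.g. from a login dialog)
// instead of being read from the application configuration file.
var fimUrl = "http://fim-server:5725/ResourceManagementService"; // example address
var client = new FimClient(fimUrl, @"CONTOSO\fim.service", "secret-password");

var allPeople = client.EnumerateAll("/Person");
[/sourcecode]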

In closing…

I hope you enjoyed this post and are eager to get your hands on this lib. Soon I will follow up on this library with examples of how to use it. As part of this series we will also publish the implementation on GitHub for the FIM community to use. It has been very beneficial for us in many projects, so we think it might be beneficial for the community as well, and perhaps it will encourage others to work on FIM client libraries too.

Hopefully this post has made you hungry for more. So stay tuned on our blog and until then: may the FIM be with you!

Blog picture: (cc) knejonbro