ADFS SharePoint attribute store

Hi guys, it’s Paweł again – your SharePoint architect trying to make his own little name in the identity world. In one of my previous posts from the SharePoint meets identity series, I wrote about ADFS and its features. One of them was the ability to perform various operations on claims using attribute stores. During the course of the project which inspired this series, I had a challenging opportunity to create a custom attribute store which would provide data from SharePoint lists. So I thought it would be great to share my great invention with the outside world (and the outside world should appreciate it as well), so this post will be all about that.

Continue reading

FIM 2010 R2 Reporting – First Strike!

Hi, my name is Adam and here at Predica I deal with Business Intelligence solutions. As you can see on our company site, we specialize in two Microsoft technologies: SharePoint and Forefront Identity Management (FIM). So in this series of posts I would like to share with you my experience and the problems I ran into while working with FIM 2010 R2 Reporting.


Over the whole series I will explain from the beginning how it works, how to use it and what capabilities it offers, and I will point out the knowledge resources I based it on.

In my first post I will try to introduce you to the FIM Reporting architecture. I will tell you how the data flows and where it is stored. In the second post I will present the Data Warehouse database in more detail. The next post will describe System Center Management Packs; I will briefly show how Management Packs are built.
In the last two posts (the most interesting part) I will explain how to extend reporting with new attributes (part one) and new FIM resources (part two). For both scenarios I will show how to build your own Management Pack. We will also look inside the warehouse to see how logged data is stored.


Before you begin, make sure that you have a fully configured environment. You can download a detailed guide from here, or you can rely on the website version.

See you soon!

Images: (cc) wizetux

SharePoint meets identity – part 2 – technical basics (ADFS)

Hi, here’s Paweł again with SharePoint and identity goodies.

After my first post on the topic you should be pretty convinced (or at least you should think you are) that claims present some interesting possibilities in the area of SharePoint security. Let’s jump directly to the details and validate this assumption on something real. If you are already experienced with SharePoint, you should probably know by now that you can never assume something in SharePoint works the way you expect it to. That’s why whenever I design something, I always validate it first, because sometimes in the end it turns out… you know what.

To have something “real” to work on, you need a proper setup first. Again, if you are a somewhat experienced SharePoint guy, or girl (on the side: then please mail me, cause I’ve never met any ;)), then you should have some base SharePoint VM in place with SQL, Active Directory and SharePoint itself. If not, I suggest you stop reading here, because it might not make much sense if you do. Having that, there is one additional thing you need – Active Directory Federation Services (ADFS).

Note from our infrastructure team: wherever you read ADFS here, if you are running something lower than Windows Server 2012, we are not speaking about the service which is available as part of the system. It refers to ADFS 2.0, which you can download from the Internet (and remember about the fixes).

The “proper” setup which I meant before is simply having Active Directory as an identity provider, ADFS as a relying party and SharePoint as a target application (which is not entirely true, since SharePoint is also sort of a relying party, but let’s simplify for now), so the “identity flow” would be AD->ADFS->SharePoint. There are some nice guides on the web on how to prepare this setup (like Steve Peschka’s blog, which you have to subscribe to if you are seriously thinking about doing some claims in SharePoint), so I won’t waste our precious time here on this – I want to focus more on what results we get from it and how we can utilize them in real-world solutions.

OK, but why do you need ADFS in the first place? ADFS acts as a proxy through which identity information (claims / user attributes) flows from identity providers (like AD) to the target applications (like SharePoint) – click on the picture to enlarge it:

But as I mentioned in my previous post, AD is not the best place to find identity information if you are considering an Internet-facing scenario, so in this case ADFS will still be a proxy, but this time between an external identity provider (ACS in our case) and the target application (again – click to see the bigger version):

Of course it is not just a 1-to-1 proxy passing through all information – it also allows you to transform it and make some authorization decisions based on it. So once you have gone through the ADFS-with-SharePoint-and-AD setup, you should already have some basic understanding of claims, what kind of information they hold and how to control it. The AD identity provider can be used as a good starting point to play around with claims, but later it can be changed to something else.

If you look into the ADFS management console, you’ll find something like this under the SharePoint relying party configuration (you know what to do to see it bigger 😉 ):

This is where you define which attributes from Active Directory are transformed into which claims, available later for defining access in SharePoint. The interesting thing about these settings is that you don’t have to rely entirely on AD’s (or another provider’s) data – you can, for example, based on the user’s account or e-mail, get some extra information from an external data source:

The external data sources are called attribute stores in ADFS terms and by default they can be:

  • Active Directory
  • SQL Server
  • LDAP

But you can also create your own attribute store or find some ready-to-use ones on the web:

The concept of attribute stores and their usage in the claim flow from identity providers to target applications is important, because it allows you to normalize claims coming from different sources and add some additional ones which target applications might utilize in their authorization logic.
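To make this a bit more concrete, here is a sketch of what an attribute store lookup can look like in ADFS claim rule language. The store name, the outgoing claim type and the query below are made up for illustration – the query syntax depends entirely on the attribute store type (SQL for a SQL store, an LDAP-style query for AD, whatever you define for a custom store):

[sourcecode]
c:[Type == "http://schemas.microsoft.com/ws/2008/06/identity/claims/windowsaccountname"]
 => issue(store = "Customer SQL Store",
          types = ("http://schemas.example.com/claims/department"),
          query = "SELECT Department FROM Customers WHERE Login = {0}",
          param = c.Value);
[/sourcecode]

The rule takes the incoming Windows account name claim, passes its value as the query parameter to the store, and issues the result as a new “department” claim that SharePoint can later use for authorization.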

Once the claims are processed in ADFS, with or without transformations assisted by attribute stores, they are passed to the target applications – SharePoint in our case – and this is where their primary role begins. The setup you might have done already according to Steve’s blog entry also includes SharePoint configuration, but what exactly the implications of such a setup are for working with SharePoint data and security, I will continue writing about in the next post.

If you are looking for a good source of step-by-step configuration for ADFS and SharePoint, you can also visit Jorge’s blog, where he provides a good description of the ADFS side of the infrastructure (start with Part 1 – there are 8 more).

Predica.FimCommunication code samples

Hi, it’s Maciek again here with some FIM .NET developer goodies.

As I promised in my previous post, our FimCommunication library is now available on GitHub!

It’s time to see how it works. Bear in mind that we follow the KISS principle when we can. Therefore it was not our intention to create, for example, a full-blown LINQ provider for FIM. It could be fun, but its usefulness is questionable. You’re welcome to implement it and send us a pull request if you wish :).

Basic usage

These are some short code snippets for performing the basic operations with our FimClient:

[sourcecode language="csharp"]
// XPath query
var client = new FimClient();
var allPeople = client.EnumerateAll("/Person");

// Find by id
var person = (RmPerson)client.FindById(personId);
[/sourcecode]

If no object with this ID is found, null will be returned.


Creating a new object:

[sourcecode language="csharp"]
var newPerson = new RmPerson
{
    FirstName = "person first name"
};
[/sourcecode]


Updating an existing object – changes are tracked through RmResourceChanges:

[sourcecode language="csharp"]
var person = (RmPerson)_client.FindById(personId);
var personChanges = new RmResourceChanges(person);
person.LastName = "new last name";
[/sourcecode]


Deleting an object:

[sourcecode language="csharp"]
var personToDelete = new RmPerson
{
    ObjectID = new RmReference(personId)
};
[/sourcecode]


The original fim2010client does not provide easy access to executing paged queries, so we introduced it in our library. You can use it like this:

[sourcecode language="csharp"]
var first10People = _client.EnumeratePage(
    "/Person", Pagination.FirstPageOfSize(10), SortingInstructions.None);

var second3People = _client.EnumeratePage(
    "/Person", new Pagination(1, 3), SortingInstructions.None);

var allPeople = _client.EnumeratePage(
    "/Person", Pagination.All, SortingInstructions.None);
[/sourcecode]


We created a little infrastructure code around the filtering logic. For example, you could create an XPath query like this:

Simple filtering

[sourcecode language="csharp"]
var firstNameFilter = new Filter(
    RmPerson.AttributeNames.FirstName.Name, "John", FilterOperation.Equals);

var lastNameFilter = new Filter(
    RmPerson.AttributeNames.LastName.Name, "mit", FilterOperation.Contains);

var createdFilter = new Filter(
    RmResource.AttributeNames.CreatedTime.Name, "2012-04-18",
    FilterOperation.Equals, AttributeTypes.DateTime);

string xpath = "/Person[" + firstNameFilter.ComposeXPath()
    + " and "
    + lastNameFilter.ComposeXPath()
    + " and "
    + createdFilter.ComposeXPath()
    + "]";

var enumerateAll = _client.EnumerateAll(xpath).ToList();
[/sourcecode]

It requires manually joining conditions with the correct and/or operators, but we did not need anything more complex, like grouping conditions in aggregate filters.

“Contains” operation

The “Contains” filter operation is particularly interesting, as it uses a workaround to “cheat” FIM, which by default does not handle the “contains” XPath function properly. The above filter produces the following result:

[sourcecode language="csharp"]
/Person[FirstName = 'John' and starts-with(LastName, '%mit') and CreatedTime >= '2012-04-18T00:00:00' and CreatedTime <= '2012-04-18T23:59:59']
[/sourcecode]

Filtering by references

You can also use a special “reference” syntax to filter objects by their references. This sample filter will look for people whose manager’s first name equals “Mary”:

[sourcecode language="csharp"]
var managerFirstNameFilter = new Filter(
    "[ref]Manager,Person,FirstName", "Mary", FilterOperation.Equals);

var query = "/Person[" + managerFirstNameFilter.ComposeXPath() + "]";
[/sourcecode]

The resulting XPath query:

[sourcecode language="csharp"]
/Person[Manager=/Person[FirstName = 'Mary']]
[/sourcecode]

You can browse more Filter usage samples in the FilterTests class available here.


Paging and sorting kind of complement each other, so this had to be simple as well. We introduced our own sorting structures, to be used like this:

[sourcecode language="csharp"]
var peopleSortedDescByFirstName = _client.EnumeratePage<RmPerson>("/Person"
    , Pagination.All
    , new SortingInstructions(RmPerson.AttributeNames.FirstName.Name, SortOrder.Descending));
[/sourcecode]

It can be combined with paging:

[sourcecode language="csharp"]
var thirdPageOfPeopleSortedAscByLastName = _client.EnumeratePage<RmPerson>("/Person"
    , new Pagination(Pagination.FirstPageIndex + 2, 4)
    , new SortingInstructions(RmPerson.AttributeNames.LastName.Name, SortOrder.Ascending));
[/sourcecode]

Sorting unit tests can be found here.

Selecting attributes to fetch

One thing we’ve learned along the way is that loading “full” objects from FIM can have a negative impact on performance. It is often required to fine-tune a query so that it fetches only a few selected attributes. The diagnostic logs described in the previous post can help pinpoint such performance-critical locations.

One attribute is particularly important: object type. It is required because ResourceTypeFactory would not know what type to construct if this value was missing. But you don’t have to worry about that – it’s taken care of by the FimClient internals, and this attribute is always fetched whether you requested it or not.

[sourcecode language="csharp"]
var peopleWithFirstName = _client.EnumerateAll(
    "/Person"
    , new AttributesToFetch(RmPerson.AttributeNames.FirstName.Name));

var attributes = new AttributesToFetch(RmPerson.AttributeNames.LastName.Name);

var peopleWithSelectedAttributes = _client.EnumeratePage(
    "/Person"
    , Pagination.All
    , SortingInstructions.None
    , attributes);
[/sourcecode]

The full behavior of the AttributesToFetch class can be viewed by browsing its tests.

This concludes the usage scenarios of our FIM communication library. Let us know what you think about it. Any suggestions or ideas?

Blog picture: (cc) eisenrah

SharePoint meets identity – part 1 – claims business concept

Hi, my name is Paweł and here at Predica I hold the honored position of SharePoint architect. Like it or not, love it or hate it, SharePoint has earned its place in many companies as the primary intranet solution, often integrating other systems within the company, and sometimes also exposed to the Internet. Therefore I would like to share some from-the-field experience in designing SharePoint solutions, based on the real-life projects we either did or are doing at Predica.

Note: This is the first in a series of posts where I want to present the approach we have taken and the implementation details for authentication and access management in an Internet-facing SharePoint solution.

Recently we were approached by a customer with an interesting requirement for a new solution, preferably based on SharePoint. The story in general concerned an Internet-facing portal which should serve the companies and individuals who are the customers of our customer’s company. The primary goal for the portal was to allow the users to access various documents and participate in the processes which our customer provided for them, which meant that each of them should have their own personal secured space (site) where the data could be stored.

Although it does not seem like a big challenge, doing it properly, according to what you may understand as the “standard Internet way of doing things”, was not as easy as it might look.
The first assessment of the fundamental assumptions for the new system raised a question: how will we handle authentication and authorization? These were the requirements:

  • Our “users” were not necessarily internal users. In most cases these were external users, who might be individuals or employees of partner companies
  • Our “solution” could be hosted outside the internal network, in a data center of our customer’s choice, so it should not be very dependent on the infrastructure and should be easy to re-deploy

When you think SharePoint, the obvious authentication and authorization choice is Active Directory. At least it was so far. In this context, though, the first thought is “nah, this doesn’t seem right”. OK, but why?

The first reason (putting aside all kinds of licensing considerations) is that if the users of our solution will mostly be coming from the outside world and will not be related to the company hosting the application, why should the “service provider” be bothered with maintaining these user identities and accounts at all (which would be the case if we chose Active Directory as our identity store)? Registration, information changes, password resets… just think about these aspects of the decision.

Another fundamental problem with Active Directory based approach for authentication and authorization in SharePoint context is that managing security based on AD groups simply means managing user accounts and groups.

The concept of groups is great (or rather WAS great some time ago), if only it didn’t mean you have to manage them…

A simple example: you have an employee in your company who belongs to the HR department, and you have a group “HR Department” used to secure some file shares, SharePoint sites and God knows what more. So what happens when this person moves to another department? Well, the person must be removed from the group and possibly added to another one. Lucky you if you have some automated solution which does that for you (like Forefront Identity Manager, which we also deliver, by the way :)). And unlucky you if you don’t and you have more than 10,000 employees… Of course the same problem applies to SharePoint groups, which additionally cannot be nested within each other.

So, taking all that, and inspired by Tomek (our identity architect) and his passion for the claims-based authentication approach, I decided to dig into the subject in the context of SharePoint security to see if it can indeed bring some resolution to these problems. The whole idea of claims has been in the air for some time now, but surprisingly I had never seen any implementation of it in a real system, which made me a little suspicious. If you are not familiar with the concepts of claims, Boogle or Ging (whatever you prefer) more about it or use your favorite source – MSDN (this is a good start, BTW); there’s plenty of information out there, so I will only focus on the essence of what it is.

The best analogy that comes to my mind when thinking about claims in the SharePoint context is that they give you the possibility of attribute-based security management. Now, what does that mean? Looking back at the previous example, what IS/SHOULD really happen when, for example, a person changes department within the organization?

(IS) You update his/her Active Directory account or HR management system to reflect the new department, and (SHOULD) all security related to the previously held position should be revoked and the access required for the new position should be granted. For some organizations it might seem like wishful thinking, but in fact it’s not – it is possible and achievable. This is where claims can successfully come into play. If you base your security on person attributes (department in this example) rather than on group membership, all security-related management should happen by itself.

Of course, relating claims only to Active Directory attributes is narrow-minded thinking, because AD is just one of the possible identity providers. You can think of Facebook, Google, Live and others which can serve the same function, as well as totally custom providers or ones based on already existing HR or customer databases. In our customer’s story AD was out of the question, but external Internet providers should serve the purpose very well (and also wouldn’t give you this tedious Windows credentials login window in the browser, like Windows authentication does). So it seems we have a winner in the design concept competition. But of course this is just the beginning of a long way to making this really work.

This article should only give you a glimpse of what claims-based authentication is. My intention is to follow up on this subject in the posts after this one, to share more information about the real design and the technical concepts implemented, which allowed the project to successfully meet its assumptions, providing both proper security and a low management overhead. But because the concept for this particular project might seem quite specific, I will also keep mentioning how these concepts could apply in enterprises to increase information security and ease manageability.

So stay tuned!

Image courtesy of tungphoto /

Introducing Predica.FimCommunication library

Hi, my name is Maciej and here at Predica I act as a dev lead and .NET developer. One of the key focus areas of Predica is the implementation of Identity and Access Management solutions based on the Microsoft software stack, so I had to learn my way into this world from a developer’s perspective. Here on the blog I will try to share part of this knowledge for the benefit of the FIM crowd… or just for fun (hopefully both).

If you are a developer with an urgent need to communicate with FIM programmatically via its web services, you are most probably already familiar with the FIM 2010 Resource Management Client library. It is a reference FIM client implementation provided by Microsoft, open-sourced and available on CodePlex.

Regardless of whether it is the best solution, this is definitely the recommended way of dealing with FIM objects, as opposed to creating your own (if anyone chooses to do that, I hope it will be shared on CodePlex or GitHub as well). However, when working with this reference solution we had several big BUT moments as we came across some implementation and usage details. Because of these moments, we at Predica decided to create an intermediate layer between fim2010client and our application code.

This is an introductory post about our custom library that overcomes the standard fim2010client shortcomings. While it is focused mainly on developers working with the FIM web service, it is also intended to give architects and other people working with FIM the information necessary to make the right decisions regarding the implementation approach.

4 main reasons not to use FIM 2010 Resource Management Client directly

#1: ResourceTypeFactory

One of the biggest pains in creating solutions using the client in question is its DefaultResourceTypeFactory class. It is responsible for creating instances of concrete FIM resource types by their names. This could be done in a very elegant way that finds the correct types at runtime, but instead the default implementation forces the developer to change its code each time a new resource type is added to the solution. Take a moment to see for yourself: DefaultResourceTypeFactory.cs. This is not the most developer-friendly way to provide such functionality, to say the least.

This could be easily automated with a PowerShell script, but again… is it really necessary?

Luckily, the fim2010client team was sane enough to extract this contract into a separate IResourceTypeFactory interface. We solved this issue by implementing our own factory that just handles everything internally, with no modifications, by scanning the loaded assemblies and finding the requested types using well-defined, easy-to-follow conventions. Various performance tricks can be applied here, so that the reflection logic is executed only once.
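A minimal sketch of such a convention-based factory could look like the snippet below. Treat it as an illustration, not our actual code: the exact IResourceTypeFactory contract lives in the fim2010client sources, and here we simply assume it boils down to creating an RmResource from a type name, with every resource class following an “Rm” + FIM type name convention (so “Person” maps to RmPerson):

[sourcecode language="csharp"]
// requires: using System; using System.Collections.Generic; using System.Linq;
public class ConventionResourceTypeFactory // stand-in for fim2010client's IResourceTypeFactory
{
    // Built once per process via reflection and cached:
    // maps FIM type names to CLR types, e.g. "Person" -> RmPerson.
    private static readonly Dictionary<string, Type> TypeMap =
        AppDomain.CurrentDomain.GetAssemblies()
            .SelectMany(a => a.GetTypes())
            .Where(t => typeof(RmResource).IsAssignableFrom(t)
                        && !t.IsAbstract
                        && t.Name.StartsWith("Rm"))
            .ToDictionary(t => t.Name.Substring(2), t => t); // strip the "Rm" prefix

    public RmResource CreateResource(string typeName)
    {
        Type match;
        return TypeMap.TryGetValue(typeName, out match)
            ? (RmResource)Activator.CreateInstance(match)
            : new RmResource(); // unknown type: fall back to the generic resource
    }
}
[/sourcecode]

With this in place, adding a new resource type to the solution is just a matter of adding the class – no factory code to touch.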

#2: Bugs

Bugs are a natural consequence of writing code – no argument here. One of the advantages of open-sourcing a project is allowing outsiders to find them, fix them and publish the fixes for everyone to use. We found two bugs, fixed them and sent a unit-tested pull request on CodePlex, but unfortunately it was not merged into the project. To be honest, this project does not look very alive after all.

In this case we are just using our own copy of the library, with our modifications applied.

#3: Initialization performance

The project wiki contains a very simple code sample that shows how a developer should use the default implementation to call the FIM web services. It recommends calling the RefreshSchema() method each time a client instance is created. That would be OK if this call did not result in downloading at least 1MB of XML! Getting rid of this requirement alone boosted our applications significantly.

You can imagine that the FIM schema is not something that changes so often that we should download and parse it a million times during a client application’s lifetime. Why not just fetch it once, when the client is first initialized, and then reuse it for all subsequently created instances? We did not find any argument against this approach, and it paid off in a massive performance gain.
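The fix can be as simple as wrapping the schema in a lazily initialized static, so the expensive call happens once per process instead of once per instance. The type and method names below are illustrative, not the actual library API:

[sourcecode language="csharp"]
public class CachedSchemaFimClient
{
    // Downloaded and parsed only on first access, then shared
    // by every client instance created in this process.
    private static readonly Lazy<string> SchemaXml =
        new Lazy<string>(DownloadSchemaXml, isThreadSafe: true);

    private static string DownloadSchemaXml()
    {
        // This is the expensive part (at least 1MB of XML from the FIM service).
        // Placeholder for the actual RefreshSchema()-style web service call.
        throw new NotImplementedException();
    }
}
[/sourcecode]

Lazy&lt;T&gt; with the thread-safe option also takes care of concurrent first access, so two threads creating clients at the same time won’t both trigger the download.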

#4: Paging performance

There is not much that can be done about web-service, XML-oriented communication from a developer’s point of view. However, we found one place that can greatly enhance the overall FIM-communication experience: the size of the batch that the library uses when fetching large portions of data from FIM.

This setting is crucial for achieving the best results and it varies from project to project, so we were quite surprised when we saw that the default value is hardcoded in the Constants.WsEnumeration.DefaultMaxElements constant and cannot be fine-tuned without recompiling the whole solution.

We moved this value to the application configuration, and playing with different values proved to produce very satisfying results. To achieve this, we had to modify the EnumerationRequest.cs constructor so that it first looks into the application configuration, searching for an override of the default.
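The lookup itself is trivial; a hypothetical version (the appSettings key name here is our own invention, not something the library defines) could be:

[sourcecode language="csharp"]
// requires: using System.Configuration; (reference System.Configuration.dll)
private static int GetMaxEnumerationElements()
{
    // Prefer an override from app/web.config...
    var configured = ConfigurationManager.AppSettings["Fim.Enumeration.MaxElements"];

    int maxElements;
    return int.TryParse(configured, out maxElements)
        ? maxElements
        // ...and fall back to the library's hardcoded default when absent or invalid.
        : Constants.WsEnumeration.DefaultMaxElements;
}
[/sourcecode]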


3 additional reasons to use Predica.FimCommunication

#1: Diagnostics

We all know the weird feeling that appears in our guts when we get a performance-related bug report from a client that says “hello dear developer, your application sucks does not perform well because it is so damn slow on my machine!”. What can we do with such a poor description? Not much.

This is why we built very extensive diagnostic logging into our library. Each FIM operation is sent to the logging framework and can be processed in various ways. We also added the concept of “long queries”, which are logged at the WARN instead of the DEBUG level. The default duration of a “long query” is 2500ms, but it can be changed in the application configuration file.
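The “long query” idea boils down to measuring the call and picking the log level based on the elapsed time. A simplified sketch (the real code logs more context, and ExecuteEnumeration here stands in for the actual web service call):

[sourcecode language="csharp"]
// requires: using System.Diagnostics; using NLog;
var stopwatch = Stopwatch.StartNew();
var results = ExecuteEnumeration(xpath); // placeholder for the FIM web service call
stopwatch.Stop();

// "Long" queries are promoted from DEBUG to WARN so they stand out in the logs.
var level = stopwatch.ElapsedMilliseconds > longQueryThresholdMs // 2500 by default
    ? LogLevel.Warn
    : LogLevel.Debug;

logger.Log(level, "FIM query '{0}' took {1} ms", xpath, stopwatch.ElapsedMilliseconds);
[/sourcecode]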

Such logs contain lots of useful information: the XPath query that is sent to FIM, paging/sorting data, the list of requested attribute names and the exact query duration. When used in a web application, we also add URL and User fields to the log messages. This allowed us to fine-tune FIM queries and squeeze the best performance out of the most resource-greedy scenarios, based only on log files sent by our clients.

A short side note: we use NLog because… well, it is the best logging framework for .NET available :). You can easily send NLog’s output to the standard DEBUG stream, console, file, database, e-mail, HTTP endpoint… wherever you like. We find it particularly useful to spit every single log entry to DEBUG and filter it with DebugView from SysInternals on dev machines, and to configure a comfortable log file structure on client servers.

#2: Unit tests

We strongly believe in the power of unit tests and test-driven development. Some of us may even be perceived as TDD zealots (count me as one).

One way or another, while the original fim2010client implementation has some unit tests, the suite is very far from comprehensive (start here to browse it). We had a goal of testing every single aspect of our library, literally bombarding a given FIM instance with various queries and commands. This led to some interesting findings about FIM behavior.

A complete unit test suite can obviously serve as perfect demo code for developers who want to use our FimClient APIs.

#3: Runtime configuration

The original fim2010client requires its configuration to be defined in the application configuration file. There is nothing wrong with this approach – it is WCF, after all. But some of our solutions required gathering FIM connection information (url, username, password) from the user at runtime. We extended the DefaultClient implementation to use values given in code rather than requiring static, config-file-only information.

In closing…

I hope you enjoyed this post and are eager to get your hands on this lib. Soon I will follow up on this library with examples of how to use it. As part of this series we will also publish the implementation on GitHub for the FIM community to use. It was very beneficial for us in many projects, so we think it might be beneficial for the community as well, and might encourage others to work on FIM client libraries too.

Hopefully this post has made you hungry for more. So stay tuned on our blog and until then: may the FIM be with you!

Blog picture: (cc) knejonbro