FIM 2010 authorization workflow fails with EventID 3

If there is software, there are bound to be bugs. FIM 2010, as nice a platform for identity management projects as it is, is of course not free of them; we have to live with them, wait for fixes to come, and sometimes just learn how to handle them. This is one of the latter cases.

Story …

One of the nice features of FIM is the ability to use the approval activity to build approval processes for user actions in the FIM Service. Like every consultant working with FIM, I can easily come up with a few things I would improve in the approval activity, but that doesn't change the fact that it is an easy and fast way to build approval workflows. If you combine it with a few custom activities, it can even handle cases such as dynamically calculated approvers or conditional approval based on the result of some condition check (just two examples of how we use it).

While working on a rather simple approval case for user actions, I came across a problem: my approval activity stopped working, and on every request instance I got "Access Denied" because my workflow failed with the exception "Object reference not set to an instance of an object".

In the Event Log on the FIM Service server I found Event ID 3, caused by this exception, but without much more detail.

(…)

Error message from Event ID 3:

Microsoft.ResourceManagement.Service: System.NullReferenceException: Object reference not set to an instance of an object.

at Microsoft.ResourceManagement.Workflow.Hosting.HostActivator.ActivateHost(ResourceManagementWorkflowDefinition workflowDefinition, Boolean suspendWorkflowStartupAndTimerOperations)

at Microsoft.ResourceManagement.Workflow.Hosting.WorkflowManager.StartWorkflowInstance(Guid workflowInstanceIdentifier, KeyValuePair`2[] additionalParameters)

(…)

Getting to the source …

Troubleshooting built-in activities is a bit troublesome, not to say that there is no real option for it. Custom activities are fairly easy to troubleshoot with a debugger, but here our options are limited. The only option was to try to narrow down the possible causes of this error.

This particular workflow used a mix of custom and built-in activities, so as a first troubleshooting step I started to examine, and eventually removed, the custom activities to rule out a bug in my own code.

Troubleshooting Tip Of the Day

It is generally a good idea to start by narrowing down the area you are troubleshooting when you experience a problem in your solution, code, or network. Removing elements of a solution and then adding them back greatly improves your ability to find the element that is causing the problem, whether that is a custom activity, a network load balancer, or load balancing in general.

I quickly found out that this workflow was failing even with only a single, built-in activity. So my code was OK. What was wrong then?

At Predica we use scripts to deploy our FIM solutions, and this was no different: the solution, including this workflow definition, was deployed from a script. However, since this was a development environment, I later tested a change introduced manually to this workflow. And that was it. After re-deploying the workflow from the script and then editing the workflow definition in the FIM Portal, the problem occurred again.

A quick XOML comparison revealed a single difference from the original version:

(…)

xmlns:ns1="clr-namespace:System.Workflow.Activities;Assembly=System.WorkflowServices, Version=3.5.0.0

(…)

And version altered in FIM portal:

(…)

xmlns:ns1="clr-namespace:System.Workflow.Activities;Assembly=System.WorkflowServices, Version=4.0.0.0,

(…)

It looks like this behaviour happens only when the FIM Portal is deployed together with SharePoint 2013.

And now … the workaround.

The workaround for this behaviour is simple, but not very convenient if you want to apply it through the portal. In order to fix the situation you need to edit the XOML definition of the workflow (XOML is simply the description of your workflow) and find the following piece of the definition:

(…)

xmlns:ns1="clr-namespace:System.Workflow.Activities;Assembly=System.WorkflowServices, Version=4.0.0.0,

(…)

Now you have to change the assembly version in this reference back to 3.5.0.0:

(…)

xmlns:ns1="clr-namespace:System.Workflow.Activities;Assembly=System.WorkflowServices, Version=3.5.0.0

(…)

To get to the XOML, open your workflow definition and click the "Advanced view" button. The XOML workflow definition can be found on the "Extended attributes" page. You can safely copy it out of there into your text editor of choice (try Sublime Text, my editor of choice for such things).

What is less fortunate is that you will have to do this every time the authorization workflow is edited through the FIM Portal. One more reason to script all configuration tasks in FIM; PowerShell really helps with this task.
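
Since we script FIM configuration anyway, the fix itself can be scripted too. A hedged sketch using the FIMAutomation cmdlets (the workflow display name is a placeholder, and the Operation/State enum values follow the commonly published FIM scripting samples):

[sourcecode language="powershell"]
# Downgrade the WorkflowServices assembly reference in a workflow's XOML
Add-PSSnapin FIMAutomation -ErrorAction SilentlyContinue

$workflow = Export-FIMConfig -OnlyBaseResources `
    -CustomConfig "/WorkflowDefinition[DisplayName='My Approval Workflow']"

$xoml = ($workflow.ResourceManagementObject.ResourceManagementAttributes |
    Where-Object { $_.AttributeName -eq 'XOML' }).Value
$fixedXoml = $xoml -replace 'Version=4\.0\.0\.0', 'Version=3.5.0.0'

$change = New-Object Microsoft.ResourceManagement.Automation.ObjectModel.ImportChange
$change.Operation = 1                 # 1 = Replace (per the common FIM samples)
$change.AttributeName = 'XOML'
$change.AttributeValue = $fixedXoml
$change.FullyResolved = $true
$change.Locale = 'Invariant'

$import = New-Object Microsoft.ResourceManagement.Automation.ObjectModel.ImportObject
$import.ObjectType = 'WorkflowDefinition'
$import.SourceObjectIdentifier = $workflow.ResourceManagementObject.ObjectIdentifier
$import.TargetObjectIdentifier = $workflow.ResourceManagementObject.ObjectIdentifier
$import.State = 1                     # 1 = Put (modify an existing object)
$import.Changes = @($change)

$import | Import-FIMConfig
[/sourcecode]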

FIM and FIPS or FIPS and FIM

Hi, Tomek here, finally with a post ;). The end of the world didn't happen, so there is no excuse to stay silent here, and it's time to finally write an initial entry.

This time it will be a quick troubleshooting entry for an issue we have come across a few times, so it might affect others as well. The topic is FIM and FIPS (Federal Information Processing Standard), and the issues these settings might cause in security-locked-down environments. As usual, this knowledge comes from some real-world learning experience, so I hope this post will save others some time on the learning curve.

When we were deploying some basic FIM elements onto production servers, we found out during UAT that our setup was not working and was throwing Access Denied errors in the authorization phase of some workflows. A quick look at the details of a denied request showed us the cause:

This implementation is not part of the Windows Platform FIPS validated cryptographic algorithms.

We had taken this solution through UATs on the test and pre-production environments and the issue had not occurred, which pointed to some difference in configuration. A quick search showed that this issue can happen on systems configured with the following GPO setting:

“System cryptography: Use FIPS compliant algorithms for encryption, hashing, and signing”

This translates into the following registry entries:

  • On Windows 2003: HKLM\System\CurrentControlSet\Control\Lsa\FIPSAlgorithmPolicy
  • On Windows 2008 and later: HKLM\System\CurrentControlSet\Control\Lsa\FIPSAlgorithmPolicy\Enabled

Quick tests in a lab environment showed that enabling either of the entries above causes FIM workflows to fail with this error message, and disabling it resolves the problem.
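
A quick way to inspect (and, for a lab test, flip) the effective setting on Windows Server 2008 and later is a registry one-liner; a sketch:

[sourcecode language="powershell"]
# Inspect the effective FIPS enforcement flag
Get-ItemProperty 'HKLM:\System\CurrentControlSet\Control\Lsa\FIPSAlgorithmPolicy' |
    Select-Object Enabled

# Disable enforcement for a lab test (requires admin rights; note that group
# policy will re-apply the value on the next refresh if the GPO is still in place)
Set-ItemProperty 'HKLM:\System\CurrentControlSet\Control\Lsa\FIPSAlgorithmPolicy' `
    -Name Enabled -Value 0
[/sourcecode]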

Recently we were updating the same environment to FIM 2010 R2 and adding reporting support to it. When we were deploying the SCSM components (you can read about the FIM 2010 R2 reporting infrastructure here) on new servers, we found out that SCSM setup was failing at the finalization stage:

This wasn't obvious from the setup log file at first glance, but in the end it turned out to be caused by exactly the same setting affecting the new servers deployed for the System Center components of the FIM reporting platform.

This isn't actually FIM-specific; it is a known issue related to FIPS-compliant systems and the .NET environment. There is a bunch of articles on this issue in the .NET context:

The solution for us was to disable this setting in the GPO affecting the FIM servers, which resolved the issue. If that is not possible in your environment, you can use the links above to make the necessary configuration changes without disabling these policies; however, I have personally not tested those solutions with FIM (if you do, please use the comments or e-mail me with the results).
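
One commonly cited mitigation from those articles is the per-application opt-out in the .NET runtime configuration (again, untested with FIM on our side); the element is part of the standard .NET configuration schema (.NET 2.0 SP1 and later):

[sourcecode language="xml"]
<!-- Goes into the affected application's .exe.config / web.config -->
<configuration>
  <runtime>
    <enforceFIPSPolicy enabled="false"/>
  </runtime>
</configuration>
[/sourcecode]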

Edit

Actually, while writing this article I found KB article 935434, which describes a fix for the .NET 3.0 environment that might be a solution here; if you have access to Microsoft support, it might be worth giving it a try.

The conclusions from this are:

  • Consult your Security / AD / GPO team about whether the environment in which you are about to deploy your FIM installation is configured with FIPS-compliant settings, and work out your solution together with those teams.
  • Always make sure that your dev / staging environments are as close to production as possible. It will make your life easier, and if problems occur, troubleshooting will be quicker.

FIM Reporting – architecture overview

Hi, Adam here again with the FIM reporting series. In the first post I covered the basic concepts of FIM reporting included in FIM 2010 R2. After a short break I'm getting back to this topic, this time with a brief description of the reporting architecture and data flow.

With the FIM 2010 R2 release, Microsoft set out to supply a feature responsible for tracking and reporting changes made to FIM resources. The solution wasn't built from scratch; it is based on Microsoft System Center. More precisely, the data flow and data storage are handled by System Center.

Topology

FIM Reporting can be deployed in two kinds of topology. Depending on the size of the production environment, Microsoft recommends two solutions: one for small/medium and one for large deployments. More detailed information can be found here.

Briefly: in small or medium deployments, FIM and System Center Service Manager (SCSM) are hosted on one machine, while the System Center Data Warehouse (SCDW) is hosted on a second machine (see figure 1).

For large deployments the whole environment is hosted on three servers; the difference is that the FIM Service and SCSM are separated onto their own servers (see the figure below).

Schema update and Data Flow

As I mentioned before, System Center Service Manager is used to handle the data flow from FIM to the System Center Data Warehouse. In general, the whole process consists of the following steps:

Data Warehouse schema update

In order to extend the Data Warehouse schema to log new FIM resources, it is required to supply an MP (Management Pack) definition. Management Packs are not a new feature built into FIM Reporting; they are part of the System Center solution. Management Pack files will be described in the third post; for a more precise description I refer you to the TechNet documentation.

  • Prepared Management Pack files describing the Data Warehouse extension are imported (using PowerShell) into the System Center Management service.
  • Based on the MP files, the appropriate schema changes are made in the System Center Data Warehouse databases.
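
As a sketch of the import step, assuming the community SMLets module is installed (the MP file path is a placeholder; the Service Manager console can be used instead):

[sourcecode language="powershell"]
# Import a Management Pack into Service Manager using SMLets
Import-Module SMLets
Import-SCSMManagementPack -FullName 'C:\MPs\FIM.Reporting.Extension.xml'
[/sourcecode]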

Data Flow

The Data Flow process is responsible for transferring FIM data from the FIM Service database all the way to the DWDataMart database. It takes place in the following manner:

  • Data is transferred from the FIM Service database to System Center Management. The appropriate bindings (connections between FIM resources and DW fields) are described in the MP definitions.
  • The transferred data sequentially passes through the DWStagingAndConfig, DWRepository and DWDataMart databases, where it is subjected to complex transform and load operations that are out of the scope of this blog series (another reason being that I have no hands-on experience with System Center and haven't tried to dive deep into this topic). For more details about the System Center Data Warehouse ETL process I refer you to the TechNet documentation.
  • Based on the data transferred to the Data Warehouse mart, we can build custom reports and store them on the Report Server. It is also possible to upload reports to the System Center reports repository by using an MP.

The whole process is presented on the diagram below (click to enlarge):

This short description is intended to give a simple overview of the FIM 2010 R2 Reporting architecture. For a more detailed and exhaustive description I suggest referring to these TechNet articles:

If you are interested in this topic, stay tuned … we will get back to it soon with more details on how to get this service running and doing what we want with our data.

Cover image (cc) http://www.flickr.com/photos/malavoda/4195215934/

FimExplorer

In my previous post I explained how you can execute any query in FIM and see the results. Remember how cumbersome that was?

Today I would like to introduce our next internal project that has just been open sourced: FimExplorer. Here is what it looks like (bear in mind that it's not supposed to be pretty ;) ):

And here is what it can do:

  • run any XPath query against FIM
    • you can choose which attributes will be fetched; all are fetched by default
  • find objects by ID
  • display results in grid
  • display single object information in a dialog (double-click the grid row)
  • navigate through references (click on the ID link)
  • export displayed results to XML (this produces the same format as the FIM migration cmdlets: Export-FIMConfig / ConvertTo-FIMResource / ConvertFrom-FIMResource, read MSDN for more info; a quick round-trip sketch follows this list)
  • import objects from XML (generated by FimExplorer or FIM cmdlets) and show them in grid; this can be useful for “offline” analysis
  • it does not need to run on a machine with FIM installed
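
A hedged round-trip sketch with those migration cmdlets (the file path and XPath are placeholders):

[sourcecode language="powershell"]
# Export objects from FIM, serialize them to the same XML format FimExplorer
# produces, then load them back for offline analysis.
Add-PSSnapin FIMAutomation -ErrorAction SilentlyContinue

$objects = Export-FIMConfig -CustomConfig "/Person" -OnlyBaseResources
$objects | ConvertFrom-FIMResource -File C:\temp\people.xml
$reloaded = ConvertTo-FIMResource -File C:\temp\people.xml
[/sourcecode]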

You can find the code on GitHub and CodePlex. A compiled, ready-to-run version is also available (on CodePlex). Of course, you are more than welcome to send contributions via pull requests.

Before running the application you need to modify its configuration file (Predica.FimExplorer.exe.config). Just replace the initial settings with your own FIM URL, username and password.

Important (at least for those who want to explore the code of this project):

It uses our FimCommunication library under the hood, referenced as a Git submodule. So after you clone the FimExplorer repository, make sure to also run the "git submodule init" and "git submodule update" commands to download it!

Executing any XPath queries in FIM, the hard way

Being able to execute any query in FIM and get the results can be crucial when developing FIM-based solutions. Analysis based on real data can be very helpful during debugging as well as designing FIM objects to complete a certain task.

<sarcasm mode="on">
Fortunately Microsoft knows this is the case and provided an easy way to send XPath to FIM and get the result set.
</sarcasm mode="off">

Of course the above statement is not really true. In order to get XPath query results in FIM you need to:

  • Go to Search Scopes configuration

  • Start creating a new Search Scope and fill the first screen with any data:

  • Enter the query on the 2nd screen and "TAB away" to move focus out of the field. The query will then be executed and you will be presented with the results in a standard FIM grid:

And this works. It's not the most comfortable solution, but it gets the job done. However, it adds massive overhead if all you need is a quick query or data check.

Quick Editor's Note (from Tomek)

#1

Of course … there is still PowerShell, which can be used here.
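
For example, a minimal sketch with the FIMAutomation snap-in (the XPath is illustrative):

[sourcecode language="powershell"]
# Run an arbitrary XPath query against the FIM Service
Add-PSSnapin FIMAutomation -ErrorAction SilentlyContinue
$results = Export-FIMConfig -OnlyBaseResources `
    -CustomConfig "/Person[starts-with(DisplayName,'A')]"
$results | ForEach-Object { $_.ResourceManagementObject.ResourceManagementAttributes }
[/sourcecode]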

#2

Probably the most annoying thing here is that when you make a mistake in the query or its syntax, or the query is simply not something FIM can crunch, you will only get this error message:

Now it is up to you to figure out what's wrong, which in the case of complex queries can sometimes be problematic.

For now this is all I have to share with you. An alternative solution to this problem will be included in my next post. And yes, this will be another Predica open source project!

Happy querying!

FIM 2010 R2 Reporting – First Strike!

Hi, my name is Adam and here at Predica I deal with Business Intelligence solutions. As you can see on our company site, we specialize in two Microsoft technology areas: SharePoint and Forefront Identity Management (FIM). In this series of posts I would like to share my experience and the problems I ran into while working with FIM 2010 R2 Reporting.

Scenario

In this series I will explain from the beginning how it works, how to use it and what capabilities it offers, and I will point out the knowledge resources I based it on.

In my first post I will try to introduce you to the FIM Reporting architecture: how the data flows and where it is stored. In the second post I will present the Data Warehouse database in more detail. The next post will describe System Center Management Packs and briefly show how they are built.
In the last two posts (the most interesting part) I will explain how to extend reporting with new attributes (part one) and new FIM resources (part two). For both scenarios I will show how to build your own Management Pack. We will also look inside the warehouse to see how the logged data is stored.

Prerequisites

Before you begin, make sure that you have a fully configured environment. A detailed guide can be downloaded from here, or you can rely on the website version.

See you soon!

Images: (cc) wizetux

Predica.FimCommunication code samples

Hi, it’s Maciek again here with some FIM .NET developer goodies.

As I promised in my previous post, our FimCommunication library is now available on GitHub!

It's time to see how it works. Bear in mind that we follow the KISS principle whenever we can; it was therefore not our intention to create, for example, a full-blown LINQ provider for FIM. It could be fun, but its usefulness is questionable. You're welcome to implement one and send us a pull request if you wish :).

Basic usage

Here are some short code snippets for performing basic operations with our FimClient:

[sourcecode language="csharp"]
// XPath query
var client = new FimClient();
var allPeople = client.EnumerateAll("/Person");

// Find by id
var person = (RmPerson)client.FindById(personId);
[/sourcecode]

If no object with this ID is found, null will be returned.

Create

[sourcecode language="csharp"]
var newPerson = new RmPerson()
{
    FirstName = "person first name"
};
_client.Create(newPerson);
[/sourcecode]

Update

[sourcecode language="csharp"]
var person = (RmPerson)_client.FindById(personId);

// RmResourceChanges tracks modifications made between BeginChanges() and Update()
var personChanges = new RmResourceChanges(person);
personChanges.BeginChanges();
person.LastName = "new last name";
_client.Update(personChanges);
[/sourcecode]

Delete

[sourcecode language="csharp"]
var personToDelete = new RmPerson
{
    ObjectID = new RmReference(personId)
};
_client.Delete(personToDelete);
[/sourcecode]

Paging

The original fim2010client does not provide easy access to executing paged queries, so we introduced it in our library. You can use it like this:

[sourcecode language="csharp"]
var first10People = _client.EnumeratePage(
    "/Person", Pagination.FirstPageOfSize(10), SortingInstructions.None
);

var second3People = _client.EnumeratePage(
    "/Person", new Pagination(1, 3), SortingInstructions.None
);

var allPeople = _client.EnumeratePage(
    "/Person", Pagination.All, SortingInstructions.None
);
[/sourcecode]

Filtering

We created a little infrastructure code around the filtering logic. For example, you can compose an XPath query like this:

Simple filtering

[sourcecode language="csharp"]
var firstNameFilter = new Filter(
    RmPerson.AttributeNames.FirstName.Name, "John", FilterOperation.Equals
);

var lastNameFilter = new Filter(
    RmPerson.AttributeNames.LastName.Name, "mit", FilterOperation.Contains
);

var createdFilter = new Filter(
    RmResource.AttributeNames.CreatedTime.Name, "2012-04-18",
    FilterOperation.Equals, AttributeTypes.DateTime
);

string xpath = "/Person[" + firstNameFilter.ComposeXPath()
    + " and "
    + lastNameFilter.ComposeXPath()
    + " and "
    + createdFilter.ComposeXPath()
    + "]";

var enumerateAll = _client.EnumerateAll(xpath).ToList();
[/sourcecode]

It requires manually joining conditions with the correct and/or operators, but we have not needed anything more complex, such as grouping conditions in aggregate filters.

"Contains" operation

The "Contains" filter operation is particularly interesting, as it uses a workaround to "cheat" FIM, which by default does not handle the "contains" XPath function properly. The above filter produces the following result:

[sourcecode language="csharp"]
/Person[FirstName = 'John' and starts-with(LastName, '%mit') and CreatedTime >= '2012-04-18T00:00:00' and CreatedTime <= '2012-04-18T23:59:59']
[/sourcecode]

Filtering by references

You can also use a special "reference" syntax to filter objects by their references. This sample filter will look for people whose manager's first name equals "Mary":

[sourcecode language="csharp"]
var managerFirstNameFilter = new Filter(
    "[ref]Manager,Person,FirstName", "Mary", FilterOperation.Equals
);

var query = "/Person[" + managerFirstNameFilter.ComposeXPath() + "]";
[/sourcecode]

The resulting XPath query:

[sourcecode language="csharp"]
/Person[Manager=/Person[FirstName = 'Mary']]
[/sourcecode]

You can browse more Filter usage samples in the FilterTests class, available here.

Sorting

Paging and sorting complement each other, so sorting has to be simple too. We introduced our own sorting structures, used like this:

[sourcecode language="csharp"]
var peopleSortedDescByFirstName = _client.EnumeratePage<RmPerson>(
    "/Person"
    , Pagination.All
    , new SortingInstructions(RmPerson.AttributeNames.FirstName.Name, SortOrder.Descending)
);
[/sourcecode]

It can be joined with paging:

[sourcecode language="csharp"]
var thirdPageOfPeopleSortedAscByLastName = _client.EnumeratePage<RmPerson>(
    "/Person"
    , new Pagination(Pagination.FirstPageIndex + 2, 4)
    , new SortingInstructions(RmPerson.AttributeNames.LastName.Name, SortOrder.Ascending)
);
[/sourcecode]

Sorting unit tests can be found here.

Selecting attributes to fetch

One thing we've learned along the way is that loading "full" objects from FIM can have a negative impact on performance. It is often necessary to fine-tune a query so that it fetches only a few selected attributes. The diagnostic logs described in the previous post can help pinpoint such performance-critical locations.

One attribute is particularly important: object type. It is required because ResourceTypeFactory would not know what type to construct if this value were missing. But you don't have to worry about that; it is taken care of by the FimClient internals, and this attribute is always fetched whether you request it or not.

[sourcecode language="csharp"]
var peopleWithFirstName = _client.EnumerateAll(
    "/Person"
    , new AttributesToFetch(RmPerson.AttributeNames.FirstName.Name)
);

var attributes = new AttributesToFetch(RmPerson.AttributeNames.LastName.Name)
    .AppendAttribute(RmPerson.AttributeNames.AccountName.Name);

var peopleWithSelectedAttributes = _client.EnumeratePage(
    "/Person"
    , Pagination.All
    , SortingInstructions.None
    , attributes
);
[/sourcecode]

The full behavior of the AttributesToFetch class can be seen by browsing its tests.

This concludes the usage scenarios of our FIM communication library. Let us know what you think about it. Any suggestions, ideas?

Blog picture: (cc) eisenrah

Introducing Predica.FimCommunication library

Hi, my name is Maciej and here at Predica I act as a dev lead and .NET developer. One of Predica's key focus areas is the implementation of Identity and Access Management solutions based on the Microsoft software stack, so I had to learn my way into this world from a developer's perspective. Here on the blog I will try to share part of this knowledge for the benefit of the FIM crowd … or just for fun (hopefully both).

If you are a developer with an urgent need to communicate with FIM programmatically via its web services, you are most probably already familiar with the FIM 2010 Resource Management Client library. It is a reference FIM client implementation provided by Microsoft, open sourced and available on CodePlex.

Regardless of whether it is the best solution, it is definitely the recommended way of dealing with FIM objects, as opposed to creating your own (if you choose to do that, I hope it gets shared on CodePlex or GitHub as well). However, when working with this reference solution we had several big BUT moments as we came across certain implementation and usage details. Because of those moments, we at Predica decided to create an intermediate layer between fim2010client and our application code.

This is an introductory post about our custom library, which overcomes the standard fim2010client shortcomings. While it is aimed mainly at developers working with the FIM web service, it is also intended to give architects and other people working with FIM the information they need to make the right implementation choices.

4 main reasons not to use FIM 2010 Resource Management Client directly

#1: ResourceTypeFactory

One of the biggest pains in creating solutions using the client in question is its DefaultResourceTypeFactory class. It is responsible for creating instances of concrete FIM resource types by their names. This could be done in a very elegant way that finds the correct types at runtime, but instead the default implementation forces the developer to change its code each time a new resource type is added to the solution. Take a moment to see for yourself: DefaultResourceTypeFactory.cs. This is not the most developer-friendly way to provide such functionality, to say the least.

This could easily be automated with some PowerShell scripting, but again … is it really necessary?

Luckily, the fim2010client team was sane enough to extract this contract into a separate IResourceTypeFactory interface. We solved the issue by implementing our own factory that handles everything internally, with no modifications, by scanning loaded assemblies and finding the requested types using well-defined, easy-to-follow conventions. Various performance tricks can be applied here so that the reflection logic is executed only once.
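
A simplified sketch of the idea (illustrative, not the actual Predica implementation; the CreateResource signature is my assumption about the IResourceTypeFactory contract):

[sourcecode language="csharp"]
// Hedged sketch: convention-based resource type factory that scans loaded
// assemblies once, assuming the fim2010client "Rm" + type name convention
// (e.g. "Person" -> RmPerson).
public class ConventionResourceTypeFactory : IResourceTypeFactory
{
    // reflection runs only once; results are cached for the process lifetime
    private static readonly Dictionary<string, Type> TypesByName =
        AppDomain.CurrentDomain.GetAssemblies()
            .SelectMany(assembly => assembly.GetTypes())
            .Where(type => typeof(RmResource).IsAssignableFrom(type) && !type.IsAbstract)
            .ToDictionary(type => type.Name, type => type);

    public RmResource CreateResource(string resourceTypeName)
    {
        Type resourceType;
        return TypesByName.TryGetValue("Rm" + resourceTypeName, out resourceType)
            ? (RmResource)Activator.CreateInstance(resourceType)
            : new RmResource(); // fall back to the generic resource type
    }
}
[/sourcecode]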

#2: Bugs

Bugs are a natural consequence of writing code; no argument here. One of the advantages of open sourcing a project is allowing outsiders to find them, fix them and publish the fixes for everyone to use. We found two bugs, fixed them and sent a unit-tested pull request on CodePlex, but unfortunately it was not merged into the project. To be honest, this project does not look very alive after all.

In this case we simply use our own copy of the library, with our modifications applied.

#3: Initialization performance

The project wiki contains a very simple code sample that shows how a developer should use the default implementation to call FIM web services. It recommends calling the RefreshSchema() method each time a client instance is created. This would be OK if the call did not result in downloading at least 1MB of XML! Getting rid of this requirement alone gave our applications a significant boost.

You can imagine that the FIM schema is not something that changes so often that we should download and parse it a million times during a client application's lifetime. Why not just fetch it once, when the client is first initialized, and then reuse it for all subsequently created instances? We did not find any argument against this approach, and it paid off with a massive performance gain.
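
A rough sketch of this idea (illustrative only, not the library source; whether RefreshSchema() is accessible this way depends on the DefaultClient implementation):

[sourcecode language="csharp"]
// Hedged sketch: run the expensive RefreshSchema() call once per process
// instead of once per client instance.
public class SchemaCachingClient : DefaultClient
{
    private static readonly object SchemaSync = new object();
    private static bool _schemaLoaded;

    public void EnsureSchemaLoaded()
    {
        if (_schemaLoaded) return;
        lock (SchemaSync)
        {
            if (_schemaLoaded) return;
            RefreshSchema();   // downloads ~1MB of schema XML, so do it only once
            _schemaLoaded = true;
        }
    }
}
[/sourcecode]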

#4: Paging performance

There is not much that can be done about web-service, XML-oriented communication from a developer's point of view. However, we found one place that can greatly enhance the overall FIM communication experience: the size of the batch the library uses when fetching large portions of data from FIM.

This setting is crucial for achieving the best results, and it varies from project to project, so we were quite surprised to see that the default value is hardcoded in the constant Constants.WsEnumeration.DefaultMaxElements field and cannot be fine-tuned without recompiling the whole solution.

We moved this value to the application configuration, and experimenting with different values produced very satisfying results. To achieve this, we had to modify the EnumerationRequest.cs constructor so that it first looks into the application configuration for an override of the default.
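
In sketch form, the constructor change boils down to something like this (the appSettings key name is purely illustrative):

[sourcecode language="csharp"]
// Hedged sketch: prefer a configured batch size, fall back to the library default
int maxElements;
string configuredValue = System.Configuration.ConfigurationManager
    .AppSettings["Fim.Enumeration.MaxElements"];   // hypothetical key name
if (!int.TryParse(configuredValue, out maxElements))
{
    maxElements = Constants.WsEnumeration.DefaultMaxElements;
}
[/sourcecode]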

 

3 additional reasons to use Predica.FimCommunication

#1: Diagnostics

We all know the weird feeling that appears in our guts when we get a performance-related bug report from a client that says: "hello dear developer, your application does not perform well because it is so damn slow on my machine!". What can we do with such a poor description? Not much.

This is why we built very extensive diagnostic logging into our library. Each FIM operation is sent to the logging framework and can be processed in various ways. We also added the concept of "long queries", which are logged at WARN instead of DEBUG level. The default duration of a "long query" is 2500ms, but it can be changed in the application configuration file.

Such logs contain lots of useful information: the XPath query sent to FIM, paging/sorting data, the list of requested attribute names and the exact query duration. When used in a web application, we also add URL and User fields to the log messages. This has allowed us to fine-tune FIM queries and squeeze the best performance out of the most resource-greedy scenarios based only on log files sent by our clients.
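
Reduced to a sketch, the mechanism looks roughly like this (illustrative, not the library source; the threshold would come from configuration):

[sourcecode language="csharp"]
// Hedged sketch: log slow FIM queries at WARN, everything else at DEBUG
var stopwatch = System.Diagnostics.Stopwatch.StartNew();
var results = _client.EnumerateAll("/Person").ToList();
stopwatch.Stop();

var longQueryThreshold = TimeSpan.FromMilliseconds(2500); // default, configurable
if (stopwatch.Elapsed > longQueryThreshold)
    _logger.Warn("Long FIM query ({0} ms): {1}", stopwatch.ElapsedMilliseconds, "/Person");
else
    _logger.Debug("FIM query ({0} ms): {1}", stopwatch.ElapsedMilliseconds, "/Person");
[/sourcecode]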

A short side note: we use NLog because… well, it is the best logging framework for .NET available :). You can easily send NLog's output to the standard DEBUG stream, console, file, database, e-mail, HTTP endpoint… wherever you like. We find it particularly useful to send every single log entry to DEBUG and filter it with DebugView from SysInternals on dev machines, and to configure a comfortable log file structure on client servers.

#2: Unit tests

We strongly believe in the power of unit tests and test-driven development. Some of us may even be perceived as TDD zealots (count me as one).

One way or another, while the original fim2010client implementation has some unit tests, the suite is very far from comprehensive (start here to browse it). Our goal was to test every single aspect of our library, literally bombarding a given FIM instance with various queries and commands. This led to some interesting findings about FIM behavior.

A complete unit test suite can obviously serve as perfect demo code for developers who want to use our FimClient APIs.

#3: Runtime configuration

The original fim2010client requires configuration to be defined in the application configuration file. There is nothing wrong with this approach, it is WCF after all. But some of our solutions required gathering FIM connection information (URL, username, password) from the user at runtime. We extended the DefaultClient implementation to accept values supplied in code rather than requiring static, config-file-only settings.

In closing…

I hope you enjoyed this post and are eager to get your hands on this lib. Soon I will follow up with examples of how to use it. As part of this series we will also publish the implementation on GitHub for the FIM community to use. It has been very beneficial for us in many projects, so we think it might benefit the community as well, and perhaps encourage others to work on FIM client libraries too.

Hopefully this post has made you hungry for more. So stay tuned to our blog and until then: may the FIM be with you!

Blog picture: (cc) knejonbro