Direct Access + Network Access Protection – part 4 – Potential issue with multiple CAs. Lessons learned.

Hi, Andrzej Kaźmierczak (KAZM) again with the last article on DA + NAP integration. In my previous articles I successfully configured and tested Direct Access with NAP integration using a single, enterprise Issuing CA (DA + NAP part 3: Single CA work flows explanation).

There is a Microsoft MSDN article (http://msdn.microsoft.com/en-us/library/cc731916.aspx) that recommends using a dedicated standalone subordinate CA for NAP health certificates. And so I did in my LAB, to show that it isn't that simple with multiple CAs around. This is how my LAB changed compared to its original setup (DA + NAP part 2: LAB configuration and overview):

  1. I added a Standalone CA role on SRVNLS server that will be used only for issuing health certificates (CN=Standalone CA,C=PL):
    image1
  2. I reconfigured the Health Registration Authority to use the Standalone CA and not to use enterprise certificate templates (which, by the way, can be used even with a standalone CA):
    image2
  3. In the properties of the Standalone CA, on the Security tab, I gave the SRVNPS computer account full permissions. Also, on the Policy Module tab I set Request Handling to “Follow the settings in the certificate template, if applicable. Otherwise, automatically issue the certificate” so that HRA requests do not end up in the “Pending” state but are issued automatically (a command-line equivalent is sketched after this list):
    image3
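
If you prefer doing the request-handling part from the command line, the same policy module behaviour can be configured with certutil on the CA itself. This is only a minimal sketch (the RequestDisposition value 1 corresponds to automatic issuance, and CertSvc is the default AD CS service name):

    # Run in an elevated PowerShell prompt on the Standalone CA
    certutil -setreg Policy\RequestDisposition 1
    Restart-Service CertSvc   # the policy module change takes effect after AD CS is restarted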

Everything is set up, so I start the client connected to the Internet (External) network and… hmm. From the client machine I am able to connect to SRVDC (Domain Controller) and to SRVNPS (NPS + HRA), which means the infrastructure tunnel is working. Unfortunately, I could only ping my internal server SRVFS, as seen in the figure below:

image4

Let’s start troubleshooting:

  1. A quick look at the Direct Access server – everything looks green and good:
    image5
  2. What about the Remote Access Clients Status console? OK, so the user does not have the User Kerberos authentication method, which is used for accessing intranet resources. That confirms why the user cannot connect to SRVFS, but I have to look deeper to find the cause:
    image6
  3. On SRVNPS I can see that HRA has approved the request and enrolled a health certificate for the user, which means the user was able to send a statement of health (the infrastructure and management tunnels are working fine) and that HRA can “talk” to the new Standalone CA as well:
    image7
  4. I confirm on the Standalone CA that health certificates have been issued to the SRVNPS (on behalf of the user):
    image8
  5. But did the user receive the health certificate? Yes, he did:
    image9
  6. Let’s have a look at the client’s Windows Firewall – there is no User Kerberos, so no corp/intranet tunnel is created:
    image10
  7. In Event Viewer, on both the SERVER and the CLIENT, there is NOTHING that points to the cause of the problem. On the client, IKE negotiation logs are not shown by default, but with a little trick you can view the success or failure of IKE negotiations in the Event Viewer Security log. To see these events, enable success or failure auditing for the Audit logon events audit policy for your domain or local computer (http://technet.microsoft.com/en-us/library/cc737812(v=ws.10).aspx); a command-line equivalent is sketched after this list. After doing this I could see more useful data:
    1. EVENT ID 4653 “An IPsec main mode negotiation failed” with Failure reason “No policy configured”
    2. EVENT ID 4984 “An IPsec extended mode negotiation failed” with Failure reason “SA establishment is not authorized”

    image11

  8. So everything seems to be OK: the client machine receives the health certificate, but still no corp/intranet tunnel is set up. This has to be something with the health certificate! The new Standalone CA certificate is trusted by the client computer (it is added to Trusted Root Certification Authorities). I verify the certificates used for the Main Mode SA with “netsh adv mon show mmsa” and everything should be clear now:
    image12
  9. Although the certificates issued by the Standalone CA are trusted by the client computer and the operating system, they are not trusted by the service when setting up the IPsec Security Association (http://technet.microsoft.com/en-us/library/cc759130(v=ws.10).aspx). Why? Well, let me remind you that the Standalone CA was not configured as trusted for Direct Access. Only TST Root CA (my issuing CA, which issued the workstation certificate used for the infrastructure and management tunnels) is set up as trusted in the Direct Access configuration wizard:
    image13
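
For reference, here is a command-line equivalent of the auditing trick from step 7. It is only a sketch and assumes the default audit subcategory names on Windows 7 / Windows Server 2008 R2 and later; run it in an elevated PowerShell prompt on the DA client:

    # Enable auditing of IPsec main mode and extended mode negotiations
    auditpol /set /subcategory:"IPsec Main Mode" /success:enable /failure:enable
    auditpol /set /subcategory:"IPsec Extended Mode" /success:enable /failure:enable

    # Pull the most recent failures (event IDs 4653 and 4984) from the Security log
    Get-WinEvent -FilterHashtable @{ LogName = 'Security'; Id = 4653, 4984 } -MaxEvents 10 |
        Format-List TimeCreated, Id, Message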

But wait… what? So I’m using an enterprise CA to issue the computer certificate and a standalone CA to issue health certificates, and there is only one place in the DA configuration to set up a CA trust for that? How is that possible? The answer is very simple: in the DA configuration, use a Root CA that is common to both the Enterprise and the Standalone CA!

Lessons learned:

  1. If you want to have only one Issuing CA for both certificates – the computer (workstation) certificate and the health certificate – that’s fine! You can even use a Standalone CA for that, but then you have to issue everything MANUALLY: the SSL certificates (including the Direct Access IPHTTPS certificate) and the computer certificates, as autoenrollment is not available with a standalone CA.
  2. If you want to use two Issuing CAs – one for computer/workstation certificates and the other for health certificates – they should share a common (offline) Root CA, which is the one you point to on the Remote Access Server Setup / Authentication page of the Direct Access configuration wizard. This is how my LAB environment should look:
    image14
  3. If you don’t have a common root and have two separate CAs, you should consider changing your PKI architecture to meet PKI best-practice design (http://kazmierczak.eu/itblog/2012/08/22/the-dos-and-donts-of-pki-microsoft-adcs/). Alternatively, you can treat your current issuing CA as a root – the standalone CA then becomes a subordinate of your issuing CA, and DA is configured with your issuing CA certificate in the Remote Access Server setup. This is not recommended, but it will do the trick.
  4. When troubleshooting, ALWAYS read the Root/Issuing CA names carefully and confirm with the certificate thumbprint which certificate is used for what (the snippet after this list shows one way to do it). Sometimes the Root and Issuing CA names are very similar and it is very easy to confuse which one is which.
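
A quick way to do the thumbprint check from lesson 4 is to dump the computer’s personal certificate store on the client with PowerShell and compare the Issuer of the workstation certificate and of the health certificate. This is just a sketch:

    # List certificates in the Local Computer\Personal store with their issuers and thumbprints
    Get-ChildItem Cert:\LocalMachine\My |
        Format-List Subject, Issuer, Thumbprint, NotAfter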

Direct Access + Network Access Protection – part 3 – Single CA work flows explanation

This is a follow-up to the previous article by Andrzej Kaźmierczak (KAZM), DA + NAP part 2: LAB configuration and overview, where I described my LAB setup. It explains how things work when Microsoft Direct Access and Network Access Protection are integrated on Windows Server 2012 R2. You can use the steps below as part of your troubleshooting of DA with NAP in your environment.

First of all, read this great TechNet article by The Cable Guy on NAP and DA integration: http://technet.microsoft.com/en-us/magazine/ff758668.aspx

Let’s start! The user got the DA GPOs from the Domain Controller and has now switched to the Internet (External) network. This is how things should work:

  1. When the user was inside the corporate network, he received GPOs telling the machine to connect to the Direct Access server whenever it is not connected to the corporate network. To confirm it is outside, the client should not be able to resolve https://nls.tst.lab. The client machine is then aware that it is on the External (Internet) network. To confirm that, run “netsh dns show state”:
    DANAPart03img01
  2. The client machine tries different transition technologies to set up the Direct Access connection.
    I have disabled IPv6 on the client’s NIC, so 6to4 is not used. Also, in my LAB the client is connected directly to the same network as the DA server, so there is no NAT and therefore no Teredo. The client will use the IPHTTPS transition technology which, simply speaking, packages the data in an HTTPS tunnel. To achieve that, the client has been configured via GPO with the Direct Access IPHTTPS URL to connect to (https://da.tst.com/iphttps). Its SSL certificate had to be issued by a Root/Issuing CA trusted by the client computer (in my case it was issued by SRVPKI, CN=TST Root CA, which is a trusted Root CA on the client).
  3. Run “ipconfig /all” to see whether the IPHTTPS interface has configured itself with an IPv6 address:
    DANAPart03img02
  4. The client should now have set up the infrastructure tunnel using the “TST Workstation Authentication” certificate from the “TST Root CA, C=PL” Issuing CA, which had been issued to the workstation through autoenrollment, and the user should be able to access SRVDC (Domain Controller).
  5. In the NAP GPOs I have configured the Health Registration Authority Trusted Server Group to point to https://srvnps.tst.lab/domainhra/hcsrvext.dll, so once the NAP client is started, the DA client connects to this URL and creates the management tunnel. Through this tunnel SRVNPS (NPS + HRA) is accessible, because it has been added to Management Servers in the Direct Access configuration wizard. The management tunnel is created using the “TST Workstation Authentication” certificate from the “TST Root CA, C=PL” Issuing CA. At this point I am able to access both the SRVDC and SRVNPS servers. If you try this on your own, do not use the “ping” command, because every server (with ICMP enabled) will respond even without the corp/intranet tunnel. Use a share, or a website if the server has the IIS role installed.
  6. The user is connected to SRVNPS and sends the SoH (Statement of Health) to the HRA. The HRA passes it internally to the NPS health policy server (in my case it is one and the same server). NPS evaluates whether the client computer matches the System Health Validators. I have set up policies letting every computer in, so every computer is compliant. NPS sends the results back to the HRA service. When the client computer is compliant, the HRA, on behalf of the user, enrolls a health certificate using my “TST Root CA, C=PL” Issuing CA and the “DA HRA Certificate” health certificate template. The certificate is sent back to the user through the management tunnel. You can confirm this by going to the CA server, opening the CA console and investigating the Issued Certificates container. You can also verify it on the HRA server in the System event log: event 22 “The Health Registration Authority has approved the request with the correlation-id-…. The Network Policy Server has indicated that the client should be given full network access” should be there. See the figure below:
    DANAPart03img03
  7. Now, on the client computer, I run the Certificates snap-in for the Local Computer store and verify that the client possesses two certificates, both issued by “TST Root CA” – THIS IS IMPORTANT!:
    DANAPart03img04

    1. A workstation/computer certificate that was issued once through autoenrollment and stays in the certificate store. This one gives the client the infrastructure and management tunnels.
    2. A System Health Authentication certificate, which HRA enrolled after confirming that the client computer is compliant. This one gives us the corp/intranet tunnel (to other internal resources, such as SRVFS).
  8. From the client machine I initiate a connection to one of my internal resources: \\srvfs.tst.lab. Access to such resources is possible because the third tunnel – the corp/intranet tunnel – is set up using the health certificate issued through the HRA. There is a very good TechNet article on how to check the tunnels on the client: http://technet.microsoft.com/en-us/library/ee844114(v=ws.10).aspx. To verify the tunnels I run the “netsh adv mon show mmsa” command, and in the output I can see the computer certificate and the health certificate used successfully for UserKerb authentication (the connection to \\srvfs.tst.lab). What is important is that these certificates are confirmed to be health certificates; see the figure below (a consolidated list of the client-side commands used in this walk-through follows step 9):
    DANAPart03img05
    The tunnels above can also be confirmed under Windows Firewall Monitoring / Security Associations / Main Mode:
    DANAPart03img06
  9. From the Direct Access Remote Client Status console I finally confirm that the client is connected and is using User Kerberos as the authentication method:
    DANAPart03img07
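
To recap, these are the client-side commands used in the walk-through above, plus an extra check of the IPHTTPS interface state. All of them can be run from an elevated PowerShell prompt on the DA client (the httpstunnel context is available on Windows 7 / Windows Server 2008 R2 and later):

    netsh dns show state                          # inside/outside detection and NRPT state
    netsh interface httpstunnel show interfaces   # IPHTTPS interface status
    ipconfig /all                                 # confirm the IPHTTPS adapter received an IPv6 address
    netsh adv mon show mmsa                       # main mode SAs and the certificates used for each tunnel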

Great! Everything’s working – so what’s the problem? Well, in this MSDN article http://msdn.microsoft.com/en-us/library/cc731916.aspx Microsoft recommends that “(…) For optimal performance, a dedicated standalone subordinate CA should be used to issue health certificates.” The article also guides you through configuring the standalone CA, the wait time and the validity period, but there is not a single word on how the new CA should fit into Direct Access with NAP integration.

I added a standalone CA to my environment, and the very interesting results of what happened next are described in my last article: DA + NAP: Potential issue with multiple CAs. Lessons learned.

Direct Access + Network Access Protection – part 2 – LAB configuration and overview

This is Andrzej Kaźmierczak’s (KAZM) second part of the DA + NAP series. You can read the overview in the first part: DA + NAP part 1: Introduction.

To get a better overview and learn how to configure Direct Access with NAP, follow these TechNet articles (even though some of them apply to Windows Server 2008 R2):

This is how my LAB is configured (only the main parts of the configuration are described):

DANAP01

To configure my LAB, I first installed Direct Access and confirmed that it works fine without NAP. After that, I added the SRVNPS server and switched DA to integrate with the NAP server.

Network

  • Internal network: 12.12.12.0/24
  • External (Internet) network: 137.0.0.0/24

Servers and Roles

  • SRVDC – Windows Server 2012 R2 – Domain Controller
    • FFL, DFL: 2008 R2; domain: tst.lab
  • SRVPKI – Windows Server 2012 R2 – Enterprise Root CA used for issuing certificates for client machines and health certificates; also used for web enrollment and CDP/AIA path publishing
    • CN=TST Root CA, C=PL; 2 NICs (Internal, External)
    • CDP/AIA: http://pki.tst.com/CertEnroll/*.crt/crl
    • TST Workstation Authentication certificate template for DA with EKU: Client Authentication, Server Authentication
    • DA HRA Certificate template for NPS statement of health with EKU: Client Authentication, System Health Authentication
  • SRVNLS – Windows Server 2012 R2 – Simple HTTPS website acting as NLS: https://nls.tst.lab
  • SRVFS – Windows Server 2012 R2 – File share and HTTPS site used for testing the DA connection: \\srvfs.tst.lab and https://srvfs.tst.lab
  • SRVNPS – Windows Server 2012 R2 – NPS and HRA roles for Direct Access
    • System Health Validator: the default one, configured to allow any client computer (no firewall, no updates required, etc.)
    • HRA detailed configuration: see below
  • SRVDA – Windows Server 2012 R2 – Direct Access server
    • https://da.tst.com/IPHTTPS; 2 NICs (Internal, External)
    • Detailed configuration: see below
  • Client – Windows 7 Enterprise – Client computer
    • GPOs forced before switching to the external network; the client machine belongs to the DA_Clients domain group

The Direct Access server has been configured in the following way (if a setting is not mentioned, it has its default value):

  1. Remote Clients
    1. Deployment Scenario
      1. Deploy full Direct Access for client access and remote management – checked
    2. Select Groups
      1. Group: DA_Clients
      2. Enable Direct Access for mobile computers only – disabled (with this setting enabled I could not test on my client VM)
      3. Use force tunneling – enabled (my own requirement, could be disabled)
    3. Network Connectivity Assistant
      1. Allow Direct Access clients to use local name resolution – enabled
  2. Remote Access Setup
    1. Network Topology
      1. Network topology: Edge
      2. DA address: da.tst.com
    2. Network adapters
      1. The IPHTTPS certificate is not self-signed (it is issued by my SRVPKI), CN=da.tst.com
    3. Authentication
      1. As you can see, I have chosen to use TST Root CA and enabled the “Enforce corporate compliance for Direct Access clients with NAP” option, which simply enables NAP integration with DA.
      2. I didn’t choose “Use an intermediate certificate” because in this particular scenario my Root CA is the one issuing certificates (try not to get confused). In any other well-designed PKI environment, one would use a Subordinate Certification Authority as the Issuing CA, NOT the Root CA itself (it was done this way here only for LAB purposes, and it is crucial for understanding the issue described in the last article). If you have an offline Root CA and a separate online Issuing CA, you would need to enable the “Use an intermediate certificate” option. Remember, if you do, the Browse button will only show certificates stored in the “Intermediate Certification Authorities” Windows certificate store, not in the “Trusted Root Certification Authorities” store. I have also enabled Windows 7 computers, because that is the OS of my client machine:
        DANAP02
  3. Infrastructure Servers
    1. Network Location Server
      1. The network location server is deployed on a remote web server: https://nls.tst.lab
    2. DNS
      1. Default suffixes
      2. Use local name resolution if the name does not exist in DNS or DNS servers are unreachable when the client computer is on a private network (recommended) – enabled
    3. DNS Suffix Search List
      1. Default settings
    4. Management
      1. Management servers: srvnps.tst.lab (it has to be reachable through the management tunnel so that a health certificate can be issued to the client, which in turn allows the corp/intranet tunnel to be established).
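
Before moving on, a couple of quick sanity checks for the names configured above can save time. This is only a sketch using my LAB names, run from a PowerShell prompt; the exported certificate file name is just an example:

    nslookup nls.tst.lab        # should resolve only from the internal network
    nslookup da.tst.com         # the public name used for the IPHTTPS listener
    certutil -verify -urlfetch .\da_tst_com.cer   # chain/CDP check of the exported IPHTTPS certificate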

Health Registration Authority configuration:

  • Added TST Root CA,
  • Enabled the use of the DA HRA Certificate template (duplicated and configured manually on SRVPKI):
    DANAP03
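
To double-check that the CA actually offers the duplicated health template, you can list the templates configured on the CA from the command line (a sketch, run on SRVPKI). When inspecting the template itself, the System Health Authentication EKU has the OID 1.3.6.1.4.1.311.47.1.1:

    certutil -CATemplates   # lists the certificate templates this CA is configured to issue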

The setup is done (only the major parts are described above). You can now go to the next article: DA + NAP part 3: Single CA work flows explanation.

Direct Access + Network Access Protection – part 1 – Introduction

Hi, Andrzej Kaźmierczak (KAZM) here. Recently I’ve been doing some deep-dive troubleshooting of two amazing technologies working together: Microsoft Direct Access and Network Access Protection. There is one thing I want to share about the design of Certification Authorities for such an implementation, and a little bit about how to troubleshoot the Direct Access client connection.

A few important words on those two technologies:

A really, really good overview of Direct Access can be found in Tim Warner’s CBT Nuggets video on YouTube: https://www.youtube.com/watch?v=saYk4d3h6sY

Let me share what this series of articles is all about. It is divided into four sections:

  1. DA + NAP part 1: Introduction
    This introduction.
  2. DA + NAP part 2: LAB configuration and overview
    A quick overview of my Direct Access and NAP settings and the general LAB configuration, which is the core of the further certificate issue investigation.
  3. DA + NAP part 3: Single CA work flows explanation
    In this section I walk through, step by step, what happens under the hood when a user gets access to internal resources using Direct Access with NAP policies. This scenario works fine as long as you use a single CA.
  4. DA + NAP part 4: Potential issue with multiple CAs. Lessons learned.
    The last section describes what the problem was when using different CAs and what the right design is for such scenarios.

Enjoy!

P.S.

TL;DR version: the client didn’t show any error at all and had all the required certificates (computer and health) issued, but could not set up the corp/intranet tunnel to internal resources with the Direct Access client. There was no indication of any error on the PKI, NPS/HRA, DA or client machine side. At the end of the day it turned out that, as always, it was about certificates. You can’t have two separate CAs – one issuing machine certificates (an enterprise CA with autoenrollment) and a separate CA for health certificates (a standalone CA) – UNLESS THEY SHARE A COMMON ROOT CA. If they are both subordinate CAs under the same Root CA, that’s fine; but if they are separate machines with nothing in common, it is impossible to configure DA with NAP to use the two of them.

Security Issue – Yammer Account Details Unauthorized Change

Hi, Andrzej (KAZM) here.

While testing Yammer with DSync, I found a security issue that let anyone change another user’s Yammer account details in an unauthorized way, knowing only that user’s e-mail address. In a few words: you could change the details of anyone’s Yammer account, including Name and GivenName, without authorization.

NOTE: The issue has already been fixed/patched; below I demonstrate a Proof of Concept of the way someone could have exploited this security gap.

Products that were affected: Yammer Enterprise DSync.

Problem description

The scenario begins when I create a user in Active Directory with Name=New_Name, GivenName=New_GN, Description=My_new_description, Work phone=123456 and (IMPORTANT) with the E-mail attribute set to the e-mail of a real user who already has and uses a Yammer account. This e-mail can be in any domain (e.g. @predica.pl, @microsoft.com). Let’s say I use my Predica e-mail: akazmierczak [at] predica.pl.
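
For illustration only, this is roughly how such an account could be created with PowerShell. The cmdlet comes from the ActiveDirectory module, and the e-mail address below is a deliberately hypothetical placeholder, not the real address from the scenario:

    # Create an AD account whose mail attribute points at an existing Yammer user's address (placeholder shown)
    Import-Module ActiveDirectory
    New-ADUser -Name "New_Name" -GivenName "New_GN" -Description "My_new_description" `
        -OfficePhone "123456" -EmailAddress "existing.user@example.com"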

During DSync synchronization, details such as Name, GivenName, Description and Work number for that real user (my account) are OVERWRITTEN in Yammer to reflect the details provided in the account created in Active Directory. I must mention here that I AM NOT an administrator of the predica.pl Yammer network and I did not need to know the user’s password.

After synchronization is complete, the user akazmierczak [at] predica.pl receives a welcome e-mail inviting him to join the adatum.com.pl Yammer network, but his Yammer account details have already been changed to match the Active Directory user’s details. So instead of the “Andrzej Kaźmierczak” account, one sees “New_Name New_GN” on all my Yammer networks! Moreover, the details of my account now include the values configured in Active Directory (My_new_description, and “123456” as the work phone).

It also means that the user’s account and all of his history of posts, comments, likes, etc. will suddenly appear on all of his Yammer networks as “New_Name New_GN”.

Environment details

Office365

  • Yammer Enterprise is enabled in Office365 tenant
  • Domains
    • yammerlab.onmicrosoft.com
    • Adatum.com.pl (domain added and verified)
  • Accounts
    • Yammer_service@adatum.com.pl is a global admin in O365 and also a Verified admin in Yammer. This account is also used by DSync

Domain Controller VM

  • OS: Windows Server 2012 R2
  • FFL, DFL: Windows 2008 R2

YammerSync VM

  • OS: Windows Server 2008 R2
  • Dsync Configuration
    • version: Yammer.DirSync_v3-0-8
    • Yammer Settings
      • Logged in as yammer_service@adatum.com.pl account in adatum.com.pl domain
    • Directory Settings
      • Only 1 Directory Connection to DC
      • Using dedicated yammerservice service account created in Active Directory
    • globalsettings.config.json File Settings
      • Queries Section left unchanged with attribute (“Filter”: “mail=*”, )
      • DirectoryAttributeMap Section left unchanged with default mapping settings
      • SyncSettings Section left unchanged with attributes (EnableAdds, EnableUpdates, EnableSuspends) set to “true”
      • AttributePreferences Section changed, so that all attributes (Prefer*) are set to “true”
      • Other settings left default/unchanged

All servers are patched up to 01.03.2014.

Video with PoC

The issue has been reported to Microsoft:

  • 02.03.2014 Ticket #114030211227295 (Microsoft USA support)
  • 02.03.2014 Microsoft held a call with me with a long discussion explaining the problem and the steps to reproduce. They advised me to send screenshots, a video or links so Microsoft could investigate the issue further
  • 03.03.2014 The video above was uploaded to Microsoft DTM and Microsoft Support was informed
  • 23.04.2014 Microsoft informed me that issue was fixed
  • 24.04.2014 I tried to reproduce the bug, but it is indeed fixed now:

YammerDirSync

Approach to security hardening the Microsoft Server Stack

So you just deployed your brand new Microsoft infrastructure hosting your critical application, be it in the public cloud, on leased infrastructure or in your own datacenter. You have configured all your servers and applications and are ready to publish them for external access (either authenticated or anonymous).

Microsoft Server products are well established in corporate intranet networks, but are still relatively rare in the internet/extranet space.

How do you approach hardening of the Microsoft Server stack? What process do you follow? What tools do you use? How do you test and validate your setup?

This blog post aims to give you a few general hints and guidelines on how you can, with a few simple steps, increase your Windows Server security.

General approach

The approach we have used successfully with many of our customers boils down to three main things:

  • A holistic (‘360’) approach to security. Example: even if you have a top-notch security configuration on your servers, if the servers are not physically secure, anyone can crack the password/encryption given enough time with the machine. Security must be implemented at all layers
  • Technology is one thing, but you also need to take care of the people (their knowledge, skills, team work) and process (procedures in place that ‘shape’ the proper people behaviour)
  • Verify and test! Even if you plan and execute the most thorough security plan, you need to verify it by running extensive tests (e.g. penetration tests / attack surface analysis) – ideally both before and after applying your security plan

Technical details

Below I provide several points which are worth taking into account when building your security plan.

For hardware security ensure:

For Windows Server stack security hardening do not forget:

  • Up-to-date kernel (ensure responsive patching procedure)
  • Enable ONLY required roles and services
  • Once this is done, disable unused services – e.g. for a standalone (or domain-joined) web server we were able to disable 42 base Windows services without any impact on the functionality of the web server
  • Unbind unnecessary protocols from the network interfaces – most often you will only need TCP/IP (v4 or v6) and, if you need it, file and printer sharing
  • Disable NetBIOS in the TCP/IP properties of your network connections
  • Change the default RDP port (http://support.microsoft.com/kb/306759) and enforce Network Level Authentication (NLA)
  • Enable and configure Windows Firewall – you can disable most of the out-of-the-box enabled rules, leaving only RDP (if you use it rather than some other remote connectivity tool) and your application traffic
  • Optimize your security via local or domain group policy (follow links below for guidance on recommended settings, esp. CIS) – definitely focus on enforcing NTLMv2, password/account policies, user rights assignment and audit policies
  • Enable UAC
  • Enable IE Enhanced Security Configuration, or even disable IE and other programs via Software Restriction Policies in GPO
  • Ensure only 2-3 people (with dedicated accounts) are members of local administrators
  • Rename the default administrator account and create a decoy administrator account (http://technet.microsoft.com/en-us/library/cc700835.aspx)
  • Ensure you are able to get the most out of your auditing, with tools like Dell/Quest ChangeAuditor (http://www.quest.com/changeauditor/)
  • Harden your TCP/IP stack – disable automatic admin shares (e.g. C$), disable SSLv2, and you might want to decrease the default TTL and disable ICMP redirects (see the registry sketch after this list). The TCP/IP stack since WS2008 is much more secure by default than in 2003, but you can still make a few tweaks
  • And much more… Contact us if you are interested in securing your Microsoft-based business applications :)
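
A few of the registry-based tweaks from the list above can be scripted. The sketch below assumes PowerShell, uses an example RDP port of 3390, and should of course be tested before being rolled out (the RDP values follow the KB article linked above; the SChannel and LanmanServer values use the standard registry locations):

    # Change the default RDP listening port (example value) and require Network Level Authentication
    $rdp = 'HKLM:\SYSTEM\CurrentControlSet\Control\Terminal Server\WinStations\RDP-Tcp'
    Set-ItemProperty -Path $rdp -Name PortNumber -Value 3390
    Set-ItemProperty -Path $rdp -Name UserAuthentication -Value 1

    # Turn off automatic administrative shares (C$, ADMIN$) on a server
    Set-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters' `
        -Name AutoShareServer -Value 0

    # Disable SSLv2 on the server side
    $ssl2 = 'HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\SSL 2.0\Server'
    New-Item -Path $ssl2 -Force | Out-Null
    New-ItemProperty -Path $ssl2 -Name Enabled -Value 0 -PropertyType DWord -Force | Out-Null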

In the area of network, take into account:

Tools and links

These are the tools I found quite useful:

FIM and FIPS or FIPS and FIM

Hi, Tomek here, finally with a post ;). The end of the world didn’t happen, so there is no excuse to stay silent here, and it is time to write an initial entry at last.

This time it is a quick troubleshooting entry for an issue we have come across a few times, so it might affect others as well. The topic is FIM and FIPS (Federal Information Processing Standard) and what problems these settings might cause in security-locked-down environments. As usual, this knowledge comes from some real-world learning experience, so I hope this post will save others some time on the learning curve.

When we were deploying some basic FIM elements on servers in a production environment, we found out during post-deployment UAT that our setup was not working and was throwing Access Denied errors in the authorization phase of some workflows. A quick look at the details of a denied request showed us the cause:

This implementation is not part of the Windows Platform FIPS validated cryptographic algorithms.

We had taken this solution through UATs on the test and pre-production environments and the problem had not occurred, so it pointed to some difference in configuration. A quick search showed that this issue can happen on systems configured with the following GPO setting:

“System cryptography: Use FIPS compliant algorithms for encryption, hashing, and signing”

This translates into the following registry entries:

  • On Windows 2003: HKLM\System\CurrentControlSet\Control\Lsa\FIPSAlgorithmPolicy
  • On Windows 2008 and later: HKLM\System\CurrentControlSet\Control\Lsa\FIPSAlgorithmPolicy\Enabled
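
If you want to quickly check (or temporarily flip) this setting on a given server, the registry value can be read and written from PowerShell. This is only a sketch, and remember that if the value comes from a GPO it will be re-applied on the next policy refresh, so the policy itself is the right place to fix it:

    $fips = 'HKLM:\SYSTEM\CurrentControlSet\Control\Lsa\FIPSAlgorithmPolicy'
    Get-ItemProperty -Path $fips -Name Enabled            # 1 = FIPS-only algorithms enforced
    Set-ItemProperty -Path $fips -Name Enabled -Value 0   # local change only; an enforcing GPO will win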

From quick tests made in a lab environment while troubleshooting this issue, we found that enabling either of the entries above causes FIM workflows to fail with this error message. Disabling the setting resolved the problem.

Recently we were updating the same environment to FIM 2010 R2 and adding reporting support to it. When we were deploying the SCSM components (you can read about the FIM 2010 R2 reporting infrastructure here) on new servers, we found that the SCSM setup was failing at the finalization stage:

This wasn’t obvious from the setup log file at first glance, but in the end it turned out to be caused by exactly the same setting, this time affecting the new servers deployed for the System Center components of the FIM reporting platform.

This isn’t actually FIM-specific, as it is a known issue related to FIPS-compliant systems and the .NET environment. There are a number of articles on this issue in the .NET context:

The solution for us was to disable this setting in the GPO affecting the FIM servers, which resolved the problem. If that is not possible in your environment, you can use the links above to make the necessary configuration changes without disabling these policies; however, I have personally not tested those solutions with FIM (if you do, please use the comments or e-mail me with the results).

Edit

Actually, while writing this article I found KB article 935434, which describes a fix for the .NET 3.0 environment that might also be a solution – if you have access to Microsoft support, it might be worth giving it a try.

Conclusions from this are:

  • Consult with your Security / AD / GPO team to find out whether the environment in which you are about to deploy FIM is configured with FIPS-compliant settings, and work out a solution with those teams.
  • Always make sure that your dev/staging environments are as close to production as possible. It will make your life easier, and in case of problems, troubleshooting will be quicker.