Approach to security hardening the Microsoft Server Stack

So you have just deployed your brand new Microsoft infrastructure hosting your critical application, be it in the public cloud, on leased infrastructure or in your own datacenter. You have configured all your servers and applications and are ready to publish them for external access (either authenticated or anonymous).

Microsoft Server products are well established in corporate intranet networks, but are still relatively uncommon in the internet/extranet space.

How do you approach hardening of the Microsoft Server stack? What process do you follow? What tools do you use? How do you test and validate your setup?

This blog post aims to give you a few general hints and guidelines on how you can increase your Windows Server security with a few simple steps.

General approach

The approach we have used successfully with many of our customers boils down to three main things:

  • Holistic (‘360’) approach to security. Example: even if you have a top-notch security configuration on your servers but the servers are not physically secure, anyone can crack the password/encryption given enough time with the machine. Security must be implemented at all layers
  • Technology is one thing, but you also need to take care of the people (their knowledge, skills, teamwork) and the process (procedures in place that ‘shape’ proper behaviour)
  • Verify/test! Even if you plan and execute the most thorough security plan, you need to verify it by running extensive tests (e.g. penetration testing/attack surface analysis) – ideally before and after applying your security plan

Technical details

Below I provide several points which are worth taking into account when building your security plan.

For hardware security ensure:

For Windows Server stack security hardening do not forget:

  • An up-to-date, fully patched OS (ensure a responsive patching procedure)
  • Enable ONLY required roles and services
  • Once this is done, disable unused services – e.g. for a standalone (or domain-joined) web server we were able to disable 42 base Windows services without any impact on the functionality of the web server (a combined sketch for several of the steps in this list follows below)
  • Unbind unnecessary protocols from the network interfaces – most often you will only need TCP/IP (v4 or v6) and (if you need it) File and Printer Sharing
  • Disable NetBIOS in the TCP/IP properties of your network connections
  • Change the default RDP port (http://support.microsoft.com/kb/306759) and enforce Network Level Authentication (NLA)
  • Enable and configure Windows Firewall – you can disable most of the rules enabled out of the box, keeping only RDP (if you use it rather than some other remote connectivity tool) and your application traffic
  • Optimize your security via local or domain group policy (follow the links below for guidance on recommended settings, esp. CIS) – definitely focus on enforcing NTLMv2, password/account policies, user rights assignment and audit policies
  • Enable UAC
  • Enable IE Enhanced Security Configuration, or even disable IE and other programs via Software Restriction Policies in GPO
  • Ensure only 2-3 people (with dedicated accounts) are members of the local Administrators group
  • Rename the default administrator account and create a decoy administrator account (http://technet.microsoft.com/en-us/library/cc700835.aspx)
  • Ensure you are able to get the most out of your auditing, with tools like Dell/Quest ChangeAuditor (http://www.quest.com/changeauditor/)
  • Harden your TCP/IP stack – disable automatic admin shares (e.g. C$), disable SSLv2, and consider decreasing the default TTL and disabling ICMP redirects. The TCP/IP stack since WS2008 is much more secure by default than in 2003, but you can still make a few tweaks
  • And much more… Contact us if you are interested in securing your Microsoft-based business applications :)
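To make a few of these steps concrete, here is a minimal PowerShell sketch covering some of the items above (disabling an unused service, moving RDP off its default port and enforcing NLA, removing automatic admin shares, disabling SSLv2 and allowing only the required firewall traffic). Treat it as an illustration only – the service name, port number and rule names are examples, not recommendations, and every change should be validated against your own security plan first.

```powershell
# Minimal hardening sketch - run elevated on a test machine first.
# Service names, ports and rule names below are examples only.

# Disable a service you have confirmed is not needed (example: Print Spooler)
Stop-Service -Name Spooler -ErrorAction SilentlyContinue
Set-Service  -Name Spooler -StartupType Disabled

# Move RDP off its default port (example: 3390) and enforce NLA
# (takes effect after the Remote Desktop Services service is restarted)
$rdp = 'HKLM:\SYSTEM\CurrentControlSet\Control\Terminal Server\WinStations\RDP-Tcp'
Set-ItemProperty -Path $rdp -Name PortNumber         -Value 3390
Set-ItemProperty -Path $rdp -Name UserAuthentication -Value 1

# Disable automatic administrative shares (C$, ADMIN$)
New-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters' `
    -Name AutoShareServer -Value 0 -PropertyType DWord -Force | Out-Null

# Disable SSLv2 on the server side
$ssl2 = 'HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\SSL 2.0\Server'
New-Item -Path $ssl2 -Force | Out-Null
New-ItemProperty -Path $ssl2 -Name Enabled -Value 0 -PropertyType DWord -Force | Out-Null

# Allow only the traffic you actually need through Windows Firewall
# (example rules: application HTTPS plus RDP on the non-default port)
netsh advfirewall firewall add rule name="App HTTPS" dir=in action=allow protocol=TCP localport=443
netsh advfirewall firewall add rule name="Admin RDP" dir=in action=allow protocol=TCP localport=3390
```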

In the area of network, take into account:

Tools and links

These are the tools I found quite useful:

FIM and FIPS or FIPS and FIM

Hi, Tomek here, finally with a post ;). The end of the world didn’t happen, so there is no excuse to stay silent here any longer and it’s finally time to write an initial entry.

This time it will be a quick troubleshooting entry for an issue we came across a few times, so it might be an issue for others as well. The topic is FIM and FIPS (Federal Information Processing Standard), and what issues these settings might cause in security locked-down environments. As usual, this knowledge comes from some real-world learning experience, so I hope this post will save others some time on that learning curve.

When we were deploying some basic FIM elements on servers in the production environment, we found out during UAT after the deployment that our setup was not working and was throwing Access Denied errors in the authorization phase of some workflows. A quick look at the details of a denied request showed us the cause:

This implementation is not part of the Windows Platform FIPS validated cryptographic algorithms.

We had taken this solution through UATs in the test and pre-production environments and it hadn’t happened there, which pointed to some difference in configuration. A quick search showed that this issue can occur on systems configured with the following GPO setting:

“System cryptography: Use FIPS compliant algorithms for encryption, hashing, and signing”

This translates into the following registry entries:

  • On Windows 2003: HKLM\System\CurrentControlSet\Control\Lsa\FIPSAlgorithmPolicy
  • On Windows 2008 and later: HKLM\System\CurrentControlSet\Control\Lsa\FIPSAlgorithmPolicy\Enabled
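If you want to check quickly whether a given server enforces this policy before digging into FIM logs, a simple PowerShell sketch like the one below can help (it reads the Windows Server 2008-and-later value described above):

```powershell
# Sketch: check whether the FIPS-only cryptography policy is enforced
# (Windows Server 2008 and later registry location)
$fips = Get-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Control\Lsa\FIPSAlgorithmPolicy' `
                         -Name Enabled -ErrorAction SilentlyContinue
if ($fips -and $fips.Enabled -eq 1) {
    Write-Warning 'FIPS-only algorithms are enforced on this machine - expect the FIM/.NET issue described here.'
} else {
    'FIPS-only algorithms are not enforced on this machine.'
}
```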

From quick tests made in a lab environment in order to troubleshoot this issue, we quickly found out that enabling either of the entries above causes FIM workflows to fail with this error message. Disabling the setting resolved the problem.

Recently we were updating the same environment to FIM 2010 R2 and adding reporting support to it. When we were deploying the SCSM components (you can read about the FIM 2010 R2 reporting infrastructure here) on new servers, we found out that the SCSM setup was failing at the finalization stage:

This wasn’t obvious from the setup log file at first glance, but in the end it turned out to be caused by exactly the same setting, this time affecting the new servers deployed for the System Center components of the FIM reporting platform.

This isn’t actually FIM-specific, as it is a known issue related to FIPS-compliant systems and the .NET environment. There are a bunch of articles on this issue in the .NET context:

The solution for us was to disable this setting in the GPO which affects the FIM servers, and this resolved it for us. If that is not possible in your environment, you can use the links above to make the necessary configuration changes without disabling these policies; however, I have personally not tested those solutions with FIM (if you do, please use the comments or e-mail me with this information).

Edit

Actually, while writing this article I found KB article 935434, which describes a fix for the .NET 3.0 environment that might be a solution for this – if you have access to Microsoft support, it might be worth giving it a try.

Conclusions from this are:

  • Consult with your Security/AD/GPO team whether the environment in which you are about to deploy your FIM installation is configured with FIPS-compliant settings, and work out a solution together with these teams.
  • Always make sure that your dev/staging environments are as close to production as possible. It will make your life easier, and in case of problems troubleshooting will also be quicker.

FIM Reporting – architecture overview

Hi, Adam here again with the FIM reporting series. In the first post I covered the basic concepts of FIM reporting included in FIM 2010 R2. After a short break I am getting back to this topic, this time with a brief description of the reporting architecture and data flow.

With the FIM 2010 R2 release, Microsoft set out to supply a feature responsible for tracking and reporting changes made to FIM resources. The whole solution wasn’t built from scratch; it is based on Microsoft System Center. More precisely, the data flow and data storage are handled by System Center.

Topology

FIM Reporting can be deployed in two kinds of topologies. Depending on the size of the production environment, Microsoft recommends two solutions: one for small/medium and one for large deployments. More detailed information can be found here.

Briefly, in a small or medium-size deployment FIM and System Center Service Manager (SCSM) are hosted on one machine, while the System Center Data Warehouse (SCDW) is hosted on a second machine (see figure 1).

For large deployments the whole environment is hosted on three servers; the difference is that the FIM Service and SCSM are split onto separate servers (see the figure below).

Schema update and Data Flow

As I mentioned before, System Center Service Manager is used to handle the data flow from FIM to the System Center Data Warehouse. In general, the whole process consists of the following steps:

Data Warehouse schema update

In order to extend the Data Warehouse schema to log new FIM resources, it is required to supply an MP definition. Management Packs are not a new feature built into FIM Reporting; they are part of the System Center solution. Management Pack files will be described in the third post. For a more precise description I refer you to the TechNet documentation.

  • Ready Management Pack files describing the Data Warehouse extension are imported (using PowerShell) into the System Center management service (a rough sketch of this step follows below).
  • Based on the MP files, the appropriate schema changes are made in the System Center Data Warehouse databases.
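As a rough illustration of the import step mentioned above, the sketch below assumes the community SMLets module (and its Import-SCSMManagementPack cmdlet) is installed on the SCSM management server – verify the exact cmdlet and parameter names for your version, and treat the folder path as a placeholder.

```powershell
# Sketch only - assumes the SMLets module is available on the SCSM
# management server; the MP folder path is a placeholder.
Import-Module SMLets

Get-ChildItem 'C:\FIMReporting\ManagementPacks' -Filter *.xml |
    ForEach-Object { Import-SCSMManagementPack -Fullname $_.FullName }
```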

Data Flow

The Data Flow process is responsible for transferring FIM data from the FIM Service database, ending up in the DWDataMart database. It all takes place in the following manner:

  • Data is transferred from the FIM Service database to System Center Service Manager. The appropriate bindings (connections between FIM resources and DW fields) are described in the MP definitions.
  • The transferred data sequentially passes through the DWStagingAndConfig, DWRepository and DWDataMart databases, where it is subjected to complex transform and load operations that are out of the scope of this blog series (also because I have no hands-on experience with System Center and did not try to deep-dive into this topic). For more details about the System Center Data Warehouse ETL process I refer you to the TechNet documentation.
  • Based on the data transferred to the DWDataMart database we can build custom reports, which we can store on a Report Server. It is also possible to upload reports to the System Center report repository by using an MP.

The whole process is presented in the diagram below (click to enlarge):

This short description is intended to give a simple overview of the FIM 2010 R2 Reporting architecture. For a more detailed and exhaustive description, I suggest referring to the TechNet articles:

If you are interested in this topic, stay tuned… we will get back to it soon with more details on how to get this service running and doing what we want with our data.

Cover image (cc) http://www.flickr.com/photos/malavoda/4195215934/

SharePoint DR strategies

Hi Folks,
Andrzej here. My role in Predica is Managing Partner and Infrastructure Architect. Today I’d like to share with you some insights on SharePoint Server 2010/2013 Disaster Recovery (DR) design.

Truth is, designing SharePoint DR is not a trivial task. SharePoint is a distributed application with a 3-tier architecture (web, application, database), and on each of those tiers we have multiple web applications, service applications and databases that communicate with each other. Additionally, there are a number of underlying technologies that can be used in a DR design: Hyper-V Replicas, SQL log shipping, database mirroring, SQL 2012 AlwaysOn and SharePoint-level backup (PowerShell, stsadm). On top of that we can have SQL Remote Blob Storage, so part of our content sits on a file store, plus custom applications and code that integrate with SharePoint and need to be included in the DR design.

There are a number of 3rd party products that can simplify the DR design (or at least add new options to it), but of course none of them is free of charge. Products like Neverfail for SharePoint, AvePoint DocAve High Availability, Idera SharePoint Backup, Metalogix Replicator, or Microsoft’s Data Protection Manager might be a good choice for some companies, but I do not recommend choosing any of them before a careful technology validation/proof-of-concept and price/performance comparison. Let’s call it “experience”. In this article I want to focus only on what is available out of the box as a Microsoft-native DR strategy.

In this article I will present two architectural patterns for SharePoint DR design:

  1. Single farm for high bandwidth, low latency DR sites
  2. Separate farm for low bandwidth, high latency DR sites

Before starting with your DR design you should collect the following information:

  • Business requirements:
    • Recovery Time Objective (RTO)
    • Recovery Point Objective (RPO)
    • Note that they may be different for different content or different applications
  • Financial budget: DRC will require
    • Hardware
    • Licenses (different Microsoft editions, potential 3rd party products)
  • IT operations manual labor cost and availability
  • DRC site network parameters:
    • Bandwidth (Mbps)
    • Bandwidth available (%)
    • Latency (ms)
    • QoS capabilities
    • Load-balancing switch/router capabilities

Let’s look at how these two scenarios differ in terms of the DR approach from the SharePoint perspective.

Scenario 1 – Single Farm

Key points:

  • 1 farm spanning 2 sites: a more hands-off approach
  • Requires synchronous database replication (e.g. with a SQL AlwaysOn Availability Group), as SharePoint does not support asynchronous replication of administration/configuration databases: link

  • You will need a sound network connection to the DRC; a minimum of 1 Gbps and <10 ms latency is recommended
  • Most SharePoint SQL IO will be reads (probably 60-80% for content management scenarios), so mirroring with AlwaysOn is a good solution: writes, which need to travel across sites, are less frequent than reads, which do not
  • Failure of the DRC SQL replica does not impact the MAIN SQL instance
  • If you want to minimize the amount of traffic on the wire between sites, consider removing some databases from the availability group and using a backup/restore or re-create method for them. Good candidates are: web analytics, search, user profile (see the sketch after this list)
  • If required this approach can facilitate automatic site switchover (with a witness in a 3rd location): I would not recommend this for various reasons outside the scope of this discussion – automatic DR switchover is only applicable in specific cases
  • Use network load balancing to divert traffic to DRC site, or keep an active/active setup, where DRC front-end/application servers connect to the currently active SQL Replica
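For the point about trimming the availability group, here is a hedged sketch using the SQL Server 2012 SQLPS cmdlets; the instance, availability group and database names are placeholders, not values from a real deployment.

```powershell
# Sketch, assuming SQL Server 2012 AlwaysOn is already configured between
# MAIN and DRC and the SQLPS module is available; names are placeholders.
Import-Module SQLPS -DisableNameChecking

# Add a SharePoint content database to the existing availability group
# (the database must already be restored WITH NORECOVERY on the DRC replica)
Add-SqlAvailabilityDatabase `
    -Path 'SQLSERVER:\SQL\SQL-MAIN\DEFAULT\AvailabilityGroups\SP-AG' `
    -Database 'WSS_Content_Intranet'

# Remove a database you prefer to re-create in DRC instead (e.g. search)
Remove-SqlAvailabilityDatabase `
    -Path 'SQLSERVER:\SQL\SQL-MAIN\DEFAULT\AvailabilityGroups\SP-AG\AvailabilityDatabases\Search_Service_DB'
```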

Here’s a simple diagram to illustrate this scenario (click to enlarge):

Scenario 2 – Separate Farm

Key points:

  • Separate farm in DRC
  • Only content databases are replicated asynchronously across the site link
  • Does not have stringent requirements on the site link
  • Requires manual overhead to manage and update the DRC farm
  • The switchover procedure requires more time and steps: DNS or load balancer changes, bringing the content databases online and attaching them to the DRC farm (see the sketch after this list)
  • When restoring service applications, ensure you restore through the SharePoint API (PowerShell/stsadm/Central Administration), as Microsoft does not support restoring some service applications or configuration/administration databases from SQL backups
  • If you use SQL Remote Blob Storage (RBS) on some of your content DBs, the good news is that with SQL 2012 AlwaysOn you can still replicate those content databases – just remember to configure RBS on the replica as well
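To illustrate the content-database part of that switchover, here is a hedged SharePoint PowerShell sketch; the URLs, server and database names are placeholders, and the DNS/load-balancer changes are still up to you.

```powershell
# Switchover sketch for the separate-farm scenario - placeholders only.
Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

# Attach the replicated (now read-write) content database to the DRC farm
Mount-SPContentDatabase -Name 'WSS_Content_Intranet' `
                        -DatabaseServer 'SQL-DRC' `
                        -WebApplication 'https://intranet.contoso.com'

# Verify that the database and its site collections came up
Get-SPContentDatabase -WebApplication 'https://intranet.contoso.com' |
    Select-Object Name, CurrentSiteCount
```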

As previously, a simple diagram below illustrates this scenario (click to enlarge):

What to choose … where to go …???

These three aspects will have the most impact on the DR design decisions:

  • your Main-DRC network capabilities
  • RPO/RTO requirements
  • manual labor cost of managing a second, separate farm.

This was a very general overview, and there are a number of hybrid approaches.

My goal here was only to make you aware of some caveats. If you are still hesitant about how to approach SharePoint DR or any other SharePoint subject for that matter, do not hesitate to contact us!

This was your DR-host for today, Andrzej … stay tuned for upcoming posts.

Cover picture: (cc) Innisfree Hotels

FIM 2010 R2 Reporting – First Strike!

Hi, my name is Adam and here at Predica I deal with Business Intelligence solutions. As you can see on our company site, we specialize in two Microsoft technologies, i.e. SharePoint and Forefront Identity Manager (FIM). In this series of posts I would like to share with you my experience and the problems I ran into while working with FIM 2010 R2 Reporting.

Scenario

In this series I will explain from the beginning how it works, how to use it and what capabilities it offers, and I will point out the knowledge resources I based it on.

In my first post I will try to introduce you to the FIM Reporting architecture: I will tell you how the data flows and where it is stored. In the second post I will present the Data Warehouse database in more detail. The next post will describe System Center Management Packs and briefly show how Management Packs are built.
In the last two posts (the most interesting part) I will explain how to extend reporting with new attributes (part one) and new FIM resources (part two). For both scenarios I will show how to build your own Management Pack. We will also look inside the warehouse to show you how the logged data is stored.

Prerequisites

Before you begin, make sure that you have a fully configured environment. A detailed guide can be downloaded from here, or you can rely on the website version.

See you soon!

Images: (cc) wizetux

SharePoint meets identity – part 2 – technical basics (ADFS)

Hi, here’s Paweł again with SharePoint and identity goodies.

After my first post on the topic you should be pretty convinced (or at least you should think you are) that claims present some interesting possibilities in the area of SharePoint security. Let’s jump directly to the details and validate this assumption against something real. If you are already experienced with SharePoint, you probably know by now that you can never assume something in SharePoint works the way you expect it to. That’s why whenever I design something, I always validate it first, because sometimes in the end it turns out… you know what.

To have something “real” to work on, you first need a proper setup. Again, if you are a somewhat experienced SharePoint guy or girl (on the side: if the latter, please mail me, because I’ve never met any ;)), then you should have some base SharePoint VM in place with SQL, Active Directory and SharePoint itself. If not, I suggest you stop reading here, because the rest might not make much sense. Having that, there is one additional thing you need – Active Directory Federation Services (ADFS).

Note from our infrastructure team: whenever you read ADFS here and you are running something older than Windows Server 2012, we are not speaking about the role which is available as part of the system. It refers to ADFS 2.0, which you can download from the Internet (and remember about the fixes).

The “proper” setup I meant before is simply having Active Directory as an identity provider, ADFS as a relying party and SharePoint as a target application (which is not entirely true, since SharePoint is also a sort of relying party, but let’s simplify for now), so the “identity flow” would be AD -> ADFS -> SharePoint. There are some nice guides on the web on how to prepare this setup (like Steve Peschka’s blog, which you have to subscribe to if you are seriously thinking about doing claims in SharePoint), so I won’t waste our precious time on it here – I want to focus more on what results we get from it and how we can use them in real-world solutions.

OK, but why do you need ADFS in the first place? ADFS acts as a proxy through which identity information (claims/user attributes) flows from identity providers (like AD) to target applications (like SharePoint) – click on the picture to enlarge it:

But as I mentioned in my previous post, AD is not the best place to keep identity information if you are considering an Internet-facing scenario, so in this case ADFS will still be a proxy, but this time between an external identity provider (ACS in our case) and the target application (again – click to see a bigger version):

Of course it is not just a 1-to-1 proxy passing all information through; it also allows you to transform that information and make some authorization decisions based on it. So once you have gone through the ADFS-with-SharePoint-and-AD setup, you should already have some basic understanding of claims, what kind of information they hold and how to control it. The AD identity provider can be used as a good starting point to play around with claims, and later it can be swapped for something else.

If you look into the ADFS management console, you’ll find something like this under the SharePoint relying party configuration (you know what to do to see it bigger 😉 ):

This is where you define which attributes from Active Directory are transformed into which claims, available later for defining access in SharePoint. The interesting thing about these settings is that you don’t have to rely entirely on AD’s (or another provider’s) data – you can, for example, based on the user’s account or e-mail, get some extra information from an external data source:

The external data sources are called attribute stores in ADFS terms and by default they can be:

  • Active Directory
  • SQL Server
  • LDAP

But you can also create your own attribute store or find a ready-to-use one on the web:

The concept of attribute stores and their usage in the claim flow from identity providers to target applications is important, because it allows you to normalize claims coming from different sources and add some additional ones which target applications may use in their authorization logic.
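To make this a bit more tangible, here is a hedged sketch of what an issuance transform rule set through the ADFS 2.0 PowerShell snap-in can look like. The relying party name, the department claim URI and the queried LDAP attributes are examples made up for illustration, not values from the project described here.

```powershell
# Sketch - assumes the ADFS 2.0 PowerShell snap-in; names and claim types
# below are examples only.
Add-PSSnapin Microsoft.Adfs.PowerShell -ErrorAction SilentlyContinue

$rules = @'
@RuleName = "AD attributes to claims"
c:[Type == "http://schemas.microsoft.com/ws/2008/06/identity/claims/windowsaccountname", Issuer == "AD AUTHORITY"]
 => issue(store = "Active Directory",
          types = ("http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress",
                   "http://schemas.xmlsoap.org/claims/department"),
          query = ";mail,department;{0}", param = c.Value);
'@

Set-ADFSRelyingPartyTrust -TargetName 'SharePoint Portal' -IssuanceTransformRules $rules
```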

Once the claims are processed in ADFS, with or without transformations assisted by attribute stores, they are passed to the target applications – SharePoint in our case – and this is where their primary role begins. The setup you might have done already according to Steve’s blog entry also includes the SharePoint configuration; what exactly the implications of such a setup are for working with SharePoint data and security is what I will continue writing about in the next post.
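For completeness, the SharePoint side of such a setup (the part Steve’s guide walks you through) essentially boils down to registering ADFS as a trusted identity token issuer. Below is a hedged sketch; the certificate path, realm, sign-in URL and the choice of e-mail as the identifier claim are assumptions for illustration only.

```powershell
# Sketch - registers ADFS as a trusted identity provider in SharePoint;
# certificate path, realm, URLs and claim mappings are placeholders.
Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

$cert = New-Object System.Security.Cryptography.X509Certificates.X509Certificate2('C:\certs\adfs-token-signing.cer')
New-SPTrustedRootAuthority -Name 'ADFS Token Signing' -Certificate $cert | Out-Null

$emailMap = New-SPClaimTypeMapping `
    -IncomingClaimType 'http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress' `
    -IncomingClaimTypeDisplayName 'EmailAddress' -SameAsIncoming

New-SPTrustedIdentityTokenIssuer -Name 'ADFS Provider' `
    -Description 'ADFS 2.0 as the identity proxy' `
    -Realm 'urn:sharepoint:portal' `
    -ImportTrustCertificate $cert `
    -ClaimsMappings $emailMap `
    -SignInUrl 'https://adfs.contoso.com/adfs/ls' `
    -IdentifierClaim $emailMap.InputClaimType
```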

If you are looking for a good source of step-by-step configuration for ADFS and SharePoint, you can also visit Jorge’s blog, where he provides a good description of the ADFS side of the infrastructure (start with Part 1 – there are 8 more).

SharePoint meets identity – part 1 – claims business concept

Hi, my name is Paweł and on the Predica team I hold the honored position of SharePoint architect. Like it or not, love it or hate it, SharePoint has earned its place in many companies as a primary intranet solution, often integrating other systems within the company and sometimes also exposed to the Internet. Therefore I would like to share some from-the-field experience in designing SharePoint solutions, based on the real-life projects we either did or are doing at Predica.

Note: This is the first in a series of posts where I want to present the approach we have taken and the implementation details of authentication and access management for an Internet-facing SharePoint solution.

Recently, we were approached by a customer with an interesting requirement for a new solution, preferably based on SharePoint. The story in general concerned an Internet-facing portal which should serve companies and individuals who are customers of our customer’s company. The primary goal of the portal was to allow users to access various documents and participate in the processes which our customer provided for them, which meant that each of them should have their own personal secured space (site) where the data could be stored.

Although it does not seem like a big challenge, doing it properly, according to what you may understand as the “standard Internet way of doing things”, was not as easy as it might look.
The first assessment of the fundamental assumptions for the new system raised a question: how will we handle authentication and authorization? These were the requirements:

  • Our “users” were not necessarily internal users. In most cases these were external users, who might be individuals or employees of partner companies
  • Our “solution” could be hosted outside the internal network, in a data center of our customer’s choice, so it should not be very dependent on the infrastructure and should be easy to re-deploy

When you think SharePoint, the obvious authentication and authorization choice is Active Directory. At least it has been so far. In this context though, the first thought is “nah, this doesn’t seem right”. OK, but why?

The first reason (putting aside all kinds of licensing considerations) is that if the users of our solution will mostly be coming from the outside world and will not be related to the company hosting the application, why should the “service provider” be bothered with maintaining these user identities and accounts at all (which would be the case if we chose Active Directory as our identity store)? Registration, information changes, password resets… just think about these aspects of the decision.

Another fundamental problem with an Active Directory-based approach to authentication and authorization in a SharePoint context is that managing security based on AD groups simply means managing user accounts and groups.

The concept of groups is great (or rather WAS great some time ago), if only it didn’t mean you have to manage them…

A simple example: you have an employee in your company who belongs to the HR department, and you have a group “HR Department” to secure some file shares, SharePoint sites and God knows what else. So what happens when this person moves to another department? Well, the person must be removed from the group and possibly added to another one. Lucky you if you have some automated solution which does that for you (like Forefront Identity Manager, which we also deliver, by the way :)). And unlucky you if you don’t and you have more than 10,000 employees… Of course the same problem applies to SharePoint groups, which additionally cannot be nested within each other.

So, taking all that in, and inspired by Tomek (our identity architect) and his passion for the claims-based authentication approach, I decided to dig into the subject in the context of SharePoint security to see if it can indeed bring some resolution to these problems. The whole idea of claims has been in the air for some time now, but surprisingly I had never seen an implementation of it in a real system, which made me a little suspicious. If you are not familiar with the concept of claims, Boogle or Ging (whichever you prefer) more about it, or use your favorite source – MSDN (this is a good start BTW); there’s plenty of information out there, so I will only focus on the essence of what it is.

The best analogy that comes to my mind when thinking about claims in a SharePoint context is that they give you the possibility of attribute-based security management. Now, what does that mean? Looking back at the previous example, what IS/SHOULD really happen when, for example, a person changes department within the organization?

(IS) You update his/her Active Directory account or HR management system to reflect the new department, and (SHOULD) all security related to the previously held position should be revoked and the access required for the new position should be granted. For some organizations it might seem like wishful thinking, but in fact it’s not – it is possible and achievable. This is where claims can successfully come into play. If you base your security on a person’s attributes (department in this example) rather than on his/her group membership, all security-related management should happen by itself.

Of course, relating claims only to Active Directory attributes is narrow-minded thinking, because AD is just one of the possible identity providers. You can think about Facebook, Google, Live and others which can serve the same function, as well as totally custom providers or ones based on already existing HR or customer databases. In our customer’s story AD was out of the question, but external Internet providers should serve the purpose very well (and also wouldn’t give you that tedious Windows credentials login window in the browser, like Windows authentication does). So it seems we have a winner in the design concept competition. But of course this is just the beginning of a long road to making this really work.

This article should only give you a glimpse of what claims-based authentication is. My intention is to follow up on this subject in the posts after this one, to share more information about the real design and the technical concepts implemented, which allowed the project to successfully meet its assumptions, both providing proper security and keeping the management overhead low. But because the concept for this particular project might seem quite specific, I will also keep mentioning how these concepts could apply in the enterprise to increase information security and ease manageability.

So stay tuned!

Image courtesy of tungphoto / FreeDigitalPhotos.net