Managing Identities in a Hybrid World

Last Tuesday I had the pleasure of addressing a combined audience of fellow local MVP Pete Calvert's Adelaide Windows User Group and the Adelaide System Center User Community, so I thought I'd post the Identity Governance for O365 deck from that meeting here, mainly for the benefit of those in the two groups.

Coincidentally, this same week my colleague Shane Day, from UNIFY's leadership group, posted this article on the Top 3 Identity Management tips for Frontier Software chris21 users, which nicely reinforces the ideas discussed on Tuesday – a session which (happily) ran way over time with a very engaged group of IT folks.

My presentation was aimed squarely at those presiding over a hybrid AD/AAD identity base (hopefully using Microsoft AAD Connect to do so) so that their workforce can access Office 365 licensed applications like SharePoint, Exchange Online and Yammer. They may also be using federation to provide SSO to other cloud applications like Salesforce.

For any such organisation, improving governance around not only who can log on to your company network, but who has access to what at any given time, should be an ever-increasing priority. In my experience your HR platform is not only the best source of accurate employee identity information, but also the best source of the key events which can be harnessed to apply access-related changes to your user identity records – think "joiners, movers and leavers".

This is not a new concept by any means, but when you look at it in a hybrid identity context you can start to see there really is no limit to the potential of harnessing the HR source to drive downstream application access. Combine this with the workflow capabilities of an identity and access management (IAM) platform such as MIM 2016 (the sync engine of which now comes free with your Enterprise Windows Server licenses) and you can realize additional automation benefits by combining request-based with role-based access – such as assigning Office 365 licenses based on group membership (see my previous post on how to use AzMan on premises for this).

For those who happen to use the chris21 (or perhaps Aurion) HR platform but are not yet ready to take the plunge and build a full-scale IAM solution, there is good news for you too!  UNIFY has both on-premises and cloud (SaaS) offerings that allow you to harness your HR system to drive your on-premises AD, which in turn drives your AAD via AAD Connect. If this is you, I strongly encourage you to take a look to find out more.


#AADConnect sync: The Inbound sync rules in scope have different join criteria.

I've finally had the opportunity to work with AAD Connect over these past weeks, and it's been one of those "everything old is new again" experiences. It's one thing to hear the architectural objectives that Andreas talked about for the Azure "identity bridge" product that was to replace AADSync, and DirSync before it – but it's only when you get your hands dirty for the first time that you really start to understand exactly what that means. In my case I am replacing a custom FIM 2010 multi-forest sync implementation with a custom AADConnect solution. FIM had previously replaced DirSync, and while it hasn't quite been the plain sailing I was hoping for, I remain confident I will get there based on alignment to one of the listed topology models.

In my target environment I have something close to the "Multiple Forests – Account-Resource Forest" scenario, with FIM 2010 currently in place providing the identity bridge. However, there are some key differences too, the main one being that the FIM instance I am replacing has only a single management agent to the resource forest, with no connectors to any of the regional forests. This is made possible by a secondary FIM sync instance populating a bunch of extension attributes in the resource forest with feeds from each of the 5 AD regional forests, including the all-important immutableId.

Long story short … in the process of adjusting the OOTB sync rule solution for a multi-forest configuration, I have found myself

  1. disabling a bunch of sync rules (all up there were 195!);
  2. provisioning from the resource forest only (or “projecting” as we would say in MIM-speak) for each object type by changing the link type “provision” to “join” for all the regional connectors;
  3. changing the join rules for contacts and groups to only join on the immutableId extension attribute; and
  4. using AD groups to filter my pilot environment to reduce the sync analysis to a small manageable set of users, groups and contacts.

I am still to decide whether or not point #3 is the way to go, but in the process of running the initial load sync step after changing all the contact and group rules with names ending in "join", I ran into this rather confusing error:

The Inbound sync rules in scope have different join criteria. Join Criteria:[objectGUID=sourceAnchorBinary CS=False], [extensionAttributeXX=extensionAttributeXX CS=False] Sync Rules: In from AD - Group Filtering[<guid>], In from AD - Group Join[<guid>]
 at Microsoft.MetadirectoryServices.SyncRulesEngine.JoinModule.ValidateAllApplicableSyncRules(IEnumerable`1 applicableSyncRules)
 at Microsoft.MetadirectoryServices.SyncRulesEngine.JoinModule.Execute(PipelineArguments argsToProcess)
 at Microsoft.MetadirectoryServices.SyncRulesEngine.Server.SyncEngine.RunSyncPipeline(SyncRulePipelineArguments pipelineData, List`1 pipelineChain)
 at Microsoft.MetadirectoryServices.SyncRulesEngine.Server.SyncEngine.Synchronize(SynchronizationOperation operation, IObjectLinkGraph inputGraph)
 at ManagedSyncRulesEngine.Synchronize(ManagedSyncRulesEngine* , CCsObject* sourceCsObject, CMvObject* mvObject, SynchronizationOperation operation, Char** error)

Not only did I get this error, but it occurred 5000 times and then stopped the sync run profile, having reached the built-in error limit. This threw me, because what I had read suggested that I had changed all of the OOTB join rules for contacts and groups by observing the naming standards. Yet there was obviously at least one rule I had missed – maybe more. What concerned me at the time was that I might have broken something unrecoverable by making the change. Thankfully, what I needed to identify the problem was in the details of the message … specifically, that there was another sync rule which included a join but whose name did not end in "Join".

It turned out that because of my use of group filters (item #4 in my list above), there were additional sync rules with a name ending in “Filtering” which also included a join statement – and these were still configured NOT to use my extension attribute.


So the remedy was simple – changing all the "Filtering" sync rules in addition to the "Join" rules, followed by a rerun of the initial load, was all that was required to restore a working state.
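
Incidentally, if you want to catch this proactively next time, the ADSync PowerShell module can list the join criteria of every inbound rule before a sync run does it for you 5000 errors at a time. A minimal sketch (assuming, as I believe is the case, that Get-ADSyncRule output exposes Direction, SourceObjectType and JoinFilter):

Import-Module ADSync

# List every inbound rule for groups and contacts together with its join
# criteria, so any rule still joining on the default anchor (rather than
# the extension attribute) stands out.
Get-ADSyncRule |
    Where-Object { $_.Direction -eq 'Inbound' -and
                   $_.SourceObjectType -in @('group', 'contact') } |
    Select-Object Name, SourceObjectType, JoinFilter |
    Format-List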


Managing Office 365 Licenses with #MIM2016 and #AzMan – Part 2

In my last post I introduced the concept of using Windows Authorization Manager (AzMan) to manage the automation of Office 365 licenses. In this post I will go into detail on how the solution hangs together.

Complementing AAD Connect with MIM 2016

The main components of my solution are as follows:

  • AADConnect (OOTB identity bridge between AD and AAD for user/group/contact sync)
  • MIM 2016 (to complement AADConnect)
  • AzMan (SQL database storage option)
  • 2 MIM 2016 management agents (e.g. PowerShell)
    • User License Entitlements (sync source for user.assignedLicenses)
    • User License Assignment (sync target for user.assignedLicenses)

As many of you will know, it is technically possible to extend AADConnect with additional management agents and negate the need for MIM 2016; however, this was not my recommended approach, for many reasons including the following:

  • AADConnect extensions will invariably require extended Microsoft support over and above the standard AADConnect support options;
  • MIM2016 synchronisation remains a cost-effective option for any enterprise IAM architecture on the Microsoft platform; and
  • The MIM synchronisation configuration for this solution requires no rules extensions (only direct attribute flows).

Aside from the AzMan approach itself, the benefits of using the MIM 2016 sync engine (or FIM 2010 R2) in lieu of the traditional scripted approach should not be surprising to any of us, and include the following:

  • State-based approach – provides ongoing reconciliation between expected (entitlements) and actual (assignments), avoiding the need for report-based options such as this one;
  • Timeliness of changes – delta sync cycles can be used to process only the changes since the last sync cycle;
  • Maintainability – maintaining a centralised authoritative source of rules/roles governing license entitlements ensures ongoing ease of maintenance; and
  • Auditability – enforcing a centralised authoritative source of rules/roles governing license entitlements ensures license accountability and compliance.


Creating the AzMan Store

Setting up an AzMan instance is straightforward, given it is a standard feature maintainable via a simple Windows MMC snap-in.

While there are 3 options for hosting my AzMan store, I chose SQL over XML and AD – a decision made easier by the fact I could reuse a SQL alias I already had in place for my MIM Sync service.  For more information on the choice of store, see Managing Authorization Stores.

  • Click on the Authorization Manager node in the LHS tree view
  • From the Action menu, select Options…
  • Click on Developer mode
  • From the Action menu, select New Authorization Store…
  • Select the Microsoft SQL store type and Schema version 2.0, then paste the following string into the Store name:
    mssql://Driver={SQL Server};Server=MIM01;/AzManStore/O365.Licensing

    … and in the Description enter:

    Authorization store defining QBE O365 licensing entitlements.
    * role definitions = SKUs and disabled plans
    * task definitions = plans
    * role assignments = AD group memberships required to fulfill user license entitlements
    For more details, including a full list of indexed service plans (tasks) see https://technet.microsoft.com/en-us/library/dn771771.aspx


  • Click OK
  • Check that the (initialized) Licensing store is correctly displayed
  • Select the O365.Licensing node (store) and select New Application… from the Action menu, then enter the following details:
    • Name = O365
    • Description = Office 365 Licensing
    • Version = 2
  • Click OK

Note: The AzMan store must remain in Developer mode from this point so that configuration can continue via the MMC snap-in UI.
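
As an aside, the same store and application can be created from PowerShell via the AzMan COM API – a minimal sketch only, assuming the connection string above and the AZ_AZSTORE_FLAG_CREATE (1) initialize flag:

# Create (rather than open) the authorization store, then persist it.
$AzStore = New-Object -ComObject AzRoles.AzAuthorizationStore
$AzStore.Initialize(1, "mssql://Driver={SQL Server};Server=MIM01;/AzManStore/O365.Licensing") # 1 = AZ_AZSTORE_FLAG_CREATE
$AzStore.Submit()

# Create the O365 application within the store.
$app = $AzStore.CreateApplication("O365")
$app.Description = "Office 365 Licensing"
$app.Submit()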

Configuring the AzMan Store for Office 365 Licensing

Initially I set up my AzMan store data manually via the MMC snap-in. You may choose to do this too, but I ended up scripting it entirely in PowerShell to assist in the deployment phase, as well as to allow my customer to maintain the master data in the way they were (initially at least) most comfortable – yes, in a Microsoft Excel spreadsheet.

Set up Task Definitions

In my AzMan model a "task" represents a service plan, or more specifically a "disabled plan". When an SKU is assigned to a user, service plans that are to be excluded for that user are listed as disabled plans. To get a list of all possible disabled plans I used the latest Service Plan table from this TechNet article.

When entitlement details are extracted for FIM processing, both the Name and the Description (the "Seq" index) are read, but only when the task is associated with a role definition that has at least one disabled plan. Plans must always be extracted/listed in index order.
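
If you script this (as I did for deployment), each task is just a named object carrying its index in the Description. A sketch, reusing the $app handle from the store-creation snippet above, with an illustrative plan name and index:

# One "task" per service plan; the Description carries the "Seq" index
# used to keep disabled plans in a deterministic order.
$task = $app.CreateTask("EXCHANGE_S_ENTERPRISE")
$task.Description = "9" # illustrative index from the Service Plan table
$task.Submit()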

Set up Role Definitions

For my model a "role" represents an SKU and zero or more disabled plans. How these are set up will most likely be specific to your enterprise's unique requirements, but note that the SKU name appears in the Description property – this is an important part of the data model, because the SKU name must be extractable for each (arbitrary but unique) role name.

The role definition for "Office 365 Enterprise E3 – no Exchange nor Skype" is highlighted because it best illustrates the concept of optional disabled plans: opening the properties dialog shows the disabled plans linked to the role definition as tasks.

When entitlement details are extracted for FIM processing, the Description is all that is read from any group-assigned role, along with the Name and Description (Seq) of any associated disabled plans.
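
Scripted, a role definition is simply a task flagged with IsRoleDefinition, carrying the SKU in its Description and linking any disabled plans as tasks. A sketch with illustrative names:

# A role definition = SKU (in the Description) + zero or more disabled plans (linked tasks).
$role = $app.CreateTask("Office 365 Enterprise E3 - no Exchange nor Skype")
$role.IsRoleDefinition = 1
$role.Description = "ENTERPRISEPACK"   # the SKU name the extract script reads
$role.AddTask("EXCHANGE_S_ENTERPRISE") # disabled plan
$role.AddTask("MCOSTANDARD")           # disabled plan
$role.Submit()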

Set up Role Assignments

This is where the real power of the model comes into play. By assigning one or more AD groups to the roles constructed above, a collection of groups can be derived, each with a string constructed in the following form:

Format
SubscribedSku.skuPartNumber : ServicePlanInfo.servicePlanName ; ServicePlanInfo.servicePlanName ; ...
Plan name : disabled plan name ; disabled plan name ; ...
Example
ENTERPRISEPACK:OFFICESUBSCRIPTION;SHAREPOINTENTERPRISE;EXCHANGE_S_STANDARD

The following is a dummy role assignment to illustrate the concept – but this screen would normally have at least one entry per defined role, and multiple AD groups assigned to each.
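
In script form, a role assignment links a role definition to the AD groups whose members are entitled to it – again a sketch, with a hypothetical group name:

# A role assignment = role definition + one or more AD group members.
$ra = $app.CreateRole("EMS")
$ra.AddTask("Office 365 Enterprise E3 - no Exchange nor Skype") # the role definition
$ra.AddMemberName("CONTOSO\lic-O365-E3-noEXO-noSfB")            # an entitled AD group
$ra.Submit()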

Bringing it all together

The following PowerShell script takes whatever data you have in the above store and renders it in the following table format:

  • groupDN
    Single-valued string property, retrieved from AD using the group ID read from the AzMan role assignment; this can then be joined on the user.memberOf DN
  • assignedLicenses
    Multi-valued string property, constructed from the role and task data linked to the role assignment – this is the data which can be synchronised directly to Azure via the Graph REST API.

Note that the script makes use of the “MIM” SQL alias I already have set up for my MIM SQL connection – allowing me to host the script on the MIM synchronisation host server.

# -- BEGIN PowerShell Connector Script

# Create the AzAuthorizationStore object.
#Add-PSSnapin Interop.AZROLESLib -ErrorAction SilentlyContinue
$AzStore = New-Object -ComObject AzRoles.AzAuthorizationStore

# Initialize the authorization store.
# Refer to https://msdn.microsoft.com/en-us/library/windows/desktop/aa376359(v=vs.85).aspx for initialize parameters
$AzStore.Initialize(0, "mssql://Driver={SQL Server};Server=MIM;/AzManStore/O365.Licensing")
$AzStore.UpdateCache($null)

# Open the application object in the store.
$qbeApp = $AzStore.OpenApplication("O365")

# Create a hashtable of all group/assignedLicenses associations.
$licenseAssociations = @{}
$licenseAssociations.Add("Groups", @{})

# The following demonstrates what the PowerShell connector would look like, acting as a
# lookup table to join on each user's "memberOf" property - thereby allowing for multiple licenses.

# Loop through all role assignments to return the role for each member in MembersName.
foreach ($assignment in $qbeApp.RoleAssignments) {
    foreach ($member in $assignment.MembersName) {
        # $member is in "Domain\Group" form; resolve both the real AD DN and the
        # connector-space DN used for the join.
        $groupDNAD = (Get-ADGroup -Identity $($member.Split('\')[1])).DistinguishedName
        $groupDN = "CN=$((Get-ADGroup -Identity $($member.Split('\')[1])).name),OU=LicensedUsers,DC=IdentityBroker"

        # $assignment.Name is the role name (e.g. EMS).
        $role = $qbeApp.OpenRole($assignment.Name)
        foreach ($def in $role.RoleDefinitions) {
            # $def.Description is the SKU; its linked tasks are the disabled plans.
            $tasksAsString = ""
            $tasks = @{}
            foreach ($task in $def.Tasks) {
                $obj = [PSCustomObject]@{
                    Name = $task
                    Seq  = [int]($qbeApp.Tasks | Where-Object { $_.IsRoleDefinition -ne 1 -and $_.Name -eq $task }).Description
                }
                $tasks.Add($task, $obj)
            }
            # Concatenate the disabled plans in "Seq" (index) order.
            foreach ($task in $tasks.Values | Sort-Object Seq) {
                if ($tasksAsString.Length -gt 0) {
                    $tasksAsString += ";"
                }
                $tasksAsString += "$($task.Name)"
            }
        }

        # Construct the assignedLicenses value: SKU[:disabledPlan;disabledPlan;...]
        $assignedLicense = $def.Description
        if ($tasksAsString.Length -gt 0) {
            $assignedLicense += ":$tasksAsString"
        }
        $licenseAssociations.Groups.Add($groupDN, @($groupDNAD, $assignedLicense))
    }
}

# Repeat/loop through each item in a target system.
foreach ($group in $licenseAssociations.Groups.Keys) {
    $group
    $licenseAssociations.Groups.$group[0]
    $licenseAssociations.Groups.$group[1]
}

$AzStore.CloseApplication($qbeApp.Name, 0)
# -- END Connector Script


Note that the script above outputs the groupDN and assignedLicenses properties to the console – for your own MIM connector implementation you will need to take this to the next step yourself, i.e. merging the group-keyed data with user-keyed data, joining on user.memberOf, and presenting a consolidated data set to MIM in the form of a User License Entitlements MA.
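
To give a feel for the export side, here is a minimal sketch of pushing one such assignedLicenses value to AAD via the Azure AD Graph assignLicense action. The tenant name, token acquisition and the skuPartNumber-to-skuId lookup are all assumptions you would replace with your own plumbing:

# Assumed inputs: $token (OAuth bearer token), $upn (target user), and an
# $assignedLicense string such as "ENTERPRISEPACK:OFFICESUBSCRIPTION;SHAREPOINTENTERPRISE".
$skuPart, $disabledCsv = $assignedLicense.Split(':')

# Hypothetical lookup from skuPartNumber to the tenant's skuId GUID.
$skuIds = @{ 'ENTERPRISEPACK' = '6fd2c87f-b296-42f0-b197-1e91e994b900' }

$body = @{
    addLicenses    = @(@{
        skuId         = $skuIds[$skuPart]
        # NB: the real API expects servicePlanId GUIDs here, not plan names,
        # so a plan-name-to-GUID lookup is needed as well.
        disabledPlans = @(if ($disabledCsv) { $disabledCsv.Split(';') })
    })
    removeLicenses = @()
} | ConvertTo-Json -Depth 4

Invoke-RestMethod -Method Post `
    -Uri "https://graph.windows.net/contoso.onmicrosoft.com/users/$upn/assignLicense?api-version=1.6" `
    -Headers @{ Authorization = "Bearer $token" } `
    -ContentType 'application/json' `
    -Body $body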

Important: The script now takes care to order tasks as "disabled plans" according to the (integer) index stored in the task description. This ensures that the assignedLicenses are reflected back in the order in which they were sent (vital for a sync configuration with MIM).

Conclusion

While I won’t go into the specifics of the MIM side of the implementation – mainly because the preferences for this will vary from place to place – I trust that I have presented a repeatable approach that can get us away from the ongoing pitfalls of either a scripted approach, or any other sync approach which does not provide the same levels of extensibility and manageability.

In my case user volume levels meant that a MIM PowerShell MA was not going to suffice, and I needed to use UNIFY Identity Broker to provide the scalability and throughput to meet target SLAs; others may find a PowerShell MA approach adequate – at least for now. If anyone is interested in exactly how I took this idea to the "next level" in this way, I would be more than happy to elaborate – but for now I felt it was important to share the above approach as a superior way forward to Automatically Assign Licenses to Your Office 365 Users.

Footnote

Some of you will be aware that AzMan has been marked as deprecated as of Server 2012 R2, which means this will be the last version of the OS to incorporate the feature. If you want more info on this, read this MS blog post.

However, rest assured I took this into account when investing energy into the above solution – noting that it will be “… well into 2023 before we see the last of AzMan” and by then I am expecting there to be an even better way of providing role/rule-based license allocation for Office 365.  Regardless, I needed something which:

  • Supported the O365 licensing data structure I was modelling;
  • Had a “free” UI which allowed me to link to AD groups; and
  • Had an API which I could use both to read the data the way I needed to, and to load the data from a master source.

… and this definitely ticked all the boxes for me – noting that I wasn’t actually using the main feature AzMan was designed for, namely its access checking API for buttons/panels on forms.  If I can get 6 years of use out of this between now and 2023 then great :).


Managing Office 365 Licenses with #MIM2016 and #AzMan – Part 1

One of the things we Microsoft FIM/MIM folks find ourselves doing of late is finding ways to automate Office 365 license assignment for our "hybrid" (AD+AAD) customers, initially as part of provisioning the first Exchange Online mailbox, which requires that initial Enterprise license.

Microsoft's AADConnect "identity bridge" solution superseded DirSync, and more recently AADSync, as the preferred solution for hybrid enterprises, bringing with it support for the many multi-forest/single-tenant scenarios which were once the domain of FIM and the Windows Azure Active Directory (WAAD) connector. There was hope that with this would come an automation option for license management, but to date no answer has been forthcoming. Consequently most have adopted a PowerShell-scripted approach utilising the Office 365 PowerShell API, assigning licenses in a post-AAD-sync step, and perhaps even automating this based on on-premises AD group membership.

Until now a certain client's FIM 2010 solution has performed just this function. While it performs its role admirably, it presently only goes as far as assigning the initial E3 license applicable to an active user and revoking all licenses on termination; it does not extend to managing licenses between those 2 events. While there is some extensibility in the model, configuration has been a little messy – requiring ongoing maintenance of the relationship between groups and their associated O365 SKU (and optional "disabled plans"). The FIM 2010 configuration consisted of:

  • The standard WAAD and AD connectors that formed the original DirSync configuration, but with a custom Metaverse design;
  • An LDIF MA to import user.memberOf (not an available attribute for the OOTB AD MA) for each O365 license group for which an entitled user is a member; and
  • A PowerShell MA to set the “isLicensed” property to true/false for active/inactive users respectively, and at the same time assign the correct SKU/disabled plan combination applicable for newly provisioned AAD user accounts.

While the automated management of the group membership in this particular instance was already taken care of, I needed to find an alternative approach that was not only more extensible, but was also

  • more flexible
  • more granular in its entitlement management, and
  • more administrator-friendly.

While an Office Premium feature (still in preview?) goes some of the way to achieving this through the use of role groups, it was not able to tick all the boxes the customer was looking for. Furthermore, the FIM 2010 configuration which originally replaced DirSync now needed to be swapped out with AADConnect, and in order to keep the AADConnect design as close to OOTB as possible, I needed to relocate the license management to another MIM instance which was already performing an identity synchronisation role for the same AD user base. This is where I thought of that rather unloved component of the Windows Server OS – Microsoft Authorization Manager, or AzMan.

What is AzMan and how can it help?

Microsoft Authorization Manager (AzMan) is a component of the Windows operating system which allows a consistent model to be used for driving application access.

“AzMan is a role-based access control (RBAC) framework that provides an administrative tool to manage authorization policy and a runtime that allows applications to perform access checks against that policy. The AzMan administration tool (AzMan.msc) is supplied as a Microsoft Management Console (MMC) snap-in.”

As a result of recent enhancements to the framework and its API, this lends itself quite nicely to modelling roles and groups for O365 licensing and license assignment.

The AzMan model and UI (MMC snap-in) allow groups of users (either LDAP or built-in groups) to be mapped to roles, which in turn are made up of sub-roles, tasks, sub-tasks and operations. With the AD license groups already taken care of, this provides an easy way of matching existing groups to roles. Now all that had to be done was to

  1. design a suitable AzMan data structure to allow unambiguous O365 enterprise user license assignment; and
  2. design a way to dynamically translate changes in the AD group memberships to corresponding O365 license assignments in AAD using MIM synchronisation.

What will the new solution look like?

The following is a conceptual overview of the architecture to get you thinking:

[AzMan.Licensing – conceptual architecture diagram]

In my next post I will describe my AzMan data model, share a simple script to extract the data in a way that MIM can consume it, as well as the MIM extensions (custom MAs) which deliver the desired outcomes.


Building in #MIM2016 Solution Resilience

My company UNIFY is well into its 12th year of existence (as is my tenure), and our "application-driven identity management" mantra has been a core principle of our solution approach from the beginning. With the advent of cloud and so-called "hybrid" (off/on-premises) identity management I have not wavered from the belief that nothing changes in this regard – despite the popular misconception from certain quarters that identity starts and ends with your on-premises directory. Let's just say your AD makes a lousy source of truth (SoT) for identity!

The benefits of aligning enterprise identity lifecycle to its HR platform are many, and at UNIFY we like to focus not just on the various sources of truth (e.g. students vs. staff in an education context), but on the events that should trigger change to an identity profile.  When it comes to harnessing these events, which may be in HR, the on-premise or cloud directory, or even in an LOB application such as a CRM, we are confident our Broker approach is second to none in driving timely identity synchronisation on platforms such as Microsoft Identity Manager (MIM) or any of its predecessors.  Our investment over more than a decade in a common application directory platform for driving IAM solutions is now paying dividends in the form of application sources such as HR systems being presented to an IAM platform such as MIM as an LDAP directory in its own right.

Yet behind the LDAP layer, not all HR systems are created equal … or rather, "some are more equal than others" … but more importantly, not all HR processes, be they BAU (business-as-usual) or EOM (end-of-month), are going to enable the HR platform to lend itself to becoming a proxy identity source from the outset. Put another way … it's unlikely you will find any HR manager's KPIs related to driving an IAM solution. Here are some questions that might come to mind for the HR system owner:

  • Did anyone ask me if I should allow my HR system to become the primary authoritative ‘source of truth’ for all enterprise directory employee user profiles?
  • Where are all my extra staff going to come from to allow me to meet these new SLAs?
  • Why didn’t anyone think to tell me that entering the backlog of new staff records the day before the monthly pay cycle isn’t going to cut it any more?
  • I wonder if anyone has thought about what should happen when a contractor (or a bunch of them) take on a permanent role?

What the HR manager is NOT likely to ask, however, is the following:

  • How might the IAM platform perform during nightly batch processes?
  • I wonder when would be the best time to take the HR system offline for scheduled maintenance and backups?
  • What if I forget to tell anyone when I upgrade the HR platform or extend the schema?
  • What happens if I want to set up some new test processes in my production environment?
  • Do you think it would be OK for me to delete and reload the entire employee table each night (don’t laugh – I’ve seen this one!)?

Lately I’ve been revisiting the fundamental application-driven principles I’ve taken for granted as being the basis for all good IAM solutions, and asking this question:

“What if (unexpected) sh*t happens?”

The question kind of answered itself, and quickly became this:

“Given that it is inevitable that (unexpected) sh*t will happen, who’s responsible for dealing with it?”

Just quietly, I may have been guilty in the past of taking the high ground (subconsciously at least) by thinking to myself (perhaps even out loud) "That's not a matter for the IAM solution to deal with – all problems must surely be addressed at the root!". Yet most enterprise environments are always in a state of flux, and it doesn't really help anyone to duck the problem when it might turn out that you are best equipped to make a difference in this regard – with just a bit of planning and lateral thinking. This is not to say that the source systems are absolved of responsibility – far from it – but rather that it is better to be pro-active and take preventative action to avert a problem that has possibly yet to be seriously considered.

At this month's (April 2016) MIMTeam User Group Skype Meeting I am presenting the topic "Watch out for that Iceberg!". In this session (which includes a demo of a repeatable MIM approach I wish to share with you), I will be asking what we (as IAM consultants, solution architects and implementers) can do to protect our customers or our own companies from unwanted SoT changes. In particular, how can we be prepared for when unwanted changes happen in large volumes, and wreak havoc on the unsuspecting systems and processes you've painstakingly aligned with that SoT? What does the term "resilience" mean when used in the context of your IAM solution?

Please join me on the call (see when this is in my timezone) – looking forward to sharing some thoughts and ideas on this topic.


Using .Where instead of | Where-Object

I've been fighting a problem today whereby the PowerShell Where-Object cmdlet was returning results of varying object types from the same XML document – specifically, when trying to check the numbers of adds/deletes/updates from a CSExport XML file, where I had isolated the deltas to the following:

$deltas = $csdata.SelectNodes("//cs-objects/cs-object[@object-type='$addObjectClass']/pending-import/delta")

If I then used the Where-Object construct as follows:

$deletes = $deltas | Where-Object {$_."operation" -eq "delete"}

… I would get back an object of type either System.Xml.XmlLinkedNode or System.Xml.XmlNodeList. Because I simply wanted to access the Count property of the node list, this was failing (returning nothing) when the result was of type System.Xml.XmlLinkedNode. Looking at the available methods of my $deltas XmlNodeList variable, I noticed the Where method … another piece of pure gold! The syntax is slightly different if I use this:

$deletes = $deltas.Where({$_."operation" -eq "delete"})

… BUT, the result is always System.Object[].

This means that regardless of what XML I am confronted with, the most reliable means of counting the number of changes is to use the PowerShell "Where" method on the node collection.
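
Putting that together, a minimal sketch of the counting logic (file name and object type are placeholders):

# Load the CSExport file and isolate the pending-import deltas.
[xml]$csdata = Get-Content -Path 'C:\Drop\CSExport.xml' -ReadCount 0
$deltas = $csdata.SelectNodes("//cs-objects/cs-object[@object-type='user']/pending-import/delta")

# .Where() returns a consistent collection type, so Count is always reliable.
$adds    = $deltas.Where({$_."operation" -eq "add"})
$updates = $deltas.Where({$_."operation" -eq "update"})
$deletes = $deltas.Where({$_."operation" -eq "delete"})

"{0} adds, {1} updates, {2} deletes" -f $adds.Count, $updates.Count, $deletes.Count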

One step closer to a reliable means of consistently counting Pending Import and Pending Export changes from either my CSExport or Run Profile audit drop (xml) files :).


Using -ReadCount 0 with Get-Content

Not that I’m an expert in PowerShell by any means, but here’s my tip of the day … use the -ReadCount parameter with Get-Content!

From the Get-Content page on TechNet …

-ReadCount<Int64>
Specifies how many lines of content are sent through the pipeline at a time. The default value is 1. A value of 0 (zero) sends all of the content at one time.
This parameter does not change the content displayed, but it does affect the time it takes to display the content. As the value of ReadCount increases, the time it takes to return the first line increases, but the total time for the operation decreases. This can make a perceptible difference in very large items.

The line in bold above for me was pure gold! Right now I am parsing very large #FIM2010/#MIM2016 audit drop files to check change thresholds, and PowerShell is my tool of choice, not least because it integrates natively with MIM Event Broker (this idea will be the subject of another post a bit later).

Example for me just now with a 100 MB XML file:

  • 5 mins 17 seconds to load WITHOUT specifying any -ReadCount, with memory footprint growth to 3.5 GB
  • 39 seconds with -ReadCount 0, and memory footprint growth of only 750 MB.

Importantly, be aware that this setting is not appropriate while your script is still under construction, when the value should remain at 1 (the default). In my case I found that stepping through my script in debug with this set to 0 resulted in painful delays reloading XML variables. I am now thinking of using a $debug variable to toggle between 0 and 1 as appropriate – it should definitely be 0 once the script has been deployed.
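
That toggle might look something like this (the path is a placeholder):

# -ReadCount 1 while debugging (fast first line, painless stepping);
# -ReadCount 0 once deployed (fastest overall load).
$debug = $false
$readCount = if ($debug) { 1 } else { 0 }
[xml]$audit = Get-Content -Path 'C:\Drop\audit.xml' -ReadCount $readCount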

So when loading large XML files in future I will certainly be using -ReadCount 0 unless I have a good reason not to – one worth taking 8 times longer and using 5 times the resources :).


#FIM2010 MIISActivate – FIM Sync service terminated with service-specific error %%-2146234334

Just posted by Peter Geelen – thought this worthy of a reblog for the #FIM2010 and #MIM2016 community.

Identity Underground

This article has been posted on TNWiki at: FIM2010 Troubleshooting: MIISActivate – FIM Sync service terminated with service-specific error %%-2146234334.


Situation

Failing over a FIM Sync Server to the standby FIM sync server using MIISActivate.

After using MIISActivate successfully, the FIM Sync service fails to start and logs an error in the event viewer.


Symptoms

You'll see 2 error messages in the event viewer: error 7024 and error 6324.

Error 7024

Reference

This error is pretty similar or exactly like the error described in the following Wiki article:

FIM2010 Troubleshooting: FIM Sync service terminated with service-specific error %%-2146234334.


Error message Text

Log Name: System
Source: Service Control Manager
Date: 3/02/2016 15:08:59
Event ID: 7024
Task Category: None
Level: Error
Keywords: Classic
User: N/A
Computer: servername.domain.customer
Description:
The Forefront Identity Manager Synchronization Service service terminated with service-specific error %%-2146234334.
Event Xml:
<Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event">
<System>
<Provider Name="Service Control Manager" Guid="{555908d1-a6d7-4695-8e1e-26931d2012f4}" EventSourceName="Service Control Manager"…



The (#FIM2010) service account cannot access SQL Server …

Ran into this old chestnut just now and thought that it was worth re-visiting the outcome of an old forum post on the subject.

Before I get to the point, by way of background I always start out the installation process with a quick sanity check:

  1. Create a UDL file on the FIM Sync server desktop
  2. Configure the UDL file to connect to the SQL instance you are targeting
  3. Test for connectivity success

The above will ensure you can at least get to “first base” with SQL connectivity, negotiating firewall and network issues.
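
If you prefer, the same sanity check can be done straight from PowerShell (server and instance names are placeholders):

# Quick SQL connectivity test, equivalent to the UDL file check.
$conn = New-Object System.Data.SqlClient.SqlConnection 'Server=SQL01\FIM;Integrated Security=True'
try { $conn.Open(); 'Connection succeeded' }
catch { "Connection failed: $($_.Exception.Message)" }
finally { $conn.Dispose() }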

When installing the FIM Sync service, any number of connectivity issues can prevent you progressing through the installer wizard. For instance, if you've got a remote SQL database and you've forgotten to install the appropriate SQL Native Client, you will be stuck on the page configuring the SQL connection.

Once you get past this problem it’s generally onto the next … the configuration of the FIM Sync service account.  The full text of the error you might run into is this:

The service account cannot access SQL server. Ensure that the server is accessible, the service account is not a local account being used with a remote SQL server, and that the account doesn't already have a SQL login.

The error text can be quite misleading – because (as was the case with the linked thread) the problem can be with the installer access itself. The installer account (not the service account itself) MUST be a member of the SQL sysadmin role to have any hope of progressing beyond this point. Generally you will want to (or be asked to!) remove this access after a successful install.

Thanks to those who bother contributing answers to the TechNet forums – they are incredible time savers, often long after the threads are closed.


#FIM2010 R2 Scoped Sync Rules – Part 2 (The Experience)

So I decided to take up the challenge on a recent FIM2010 R2 project – outlined in the first part of this post.

Let's just say there are plenty of FIM folk who would simply ask 'why?' …

  • Why would I want to even try working with declarative rules at all?
  • If something isn't broken (rules extensions), why fix it?
  • Why do you think it will give a better outcome?
  • Why do you think scoped rules will work when the alternative type promised so much but failed so spectacularly?
  • Why would you want to put yourself through the wringer when you could fail and bring your project down with it?

Well, for a variety of reasons, let's just imagine for a moment I had convincing answers for each of these – answers that struck such a chord with you that you just want to read on and find out how I did it. Maybe we can come back to the above at the end. Rest assured, however, that I was not completely convinced myself, and at the outset I still had a bet each way on me failing. So here goes …

No De-provisioning

Firstly, I knew that for this approach to work I couldn't de-provision – that is to say, disconnect objects from the Metaverse and cause deletions or something similar to happen in any of my connected systems.

If you expect your SRs to do this for you then you will need the traditional ERE model. However, when I looked closely at requirements that might on face value appear to require this capability, I found that in each case the need wasn't really there at all. For starters, for systems which are not authoritative sources of identity, it is usually a bad idea to leave a CS entry as a disconnector: doing so can leave you with "reverse join" problems if you subsequently need to re-connect. Equally, deleting the target object never seemed to be an option, generally because of the risk of compromising the downstream target system (e.g. orphaned ACLs in AD, or SharePoint documents or sites without owners).

I reasoned that choosing not to disconnect at all was the better option. Yes, this could lead to "bloat" issues if left unchecked over a long time; however, the alternative of trying to control the deletion/archive process from FIM is often impractical. I adopted the standard alternative to deprovisioning AD accounts – disabling them and moving them to a 'disabled users' container – leaving it to the AD system admins to handle the deletion and archive process, usually after a delay of a number of months. I also figured that if at some stage I needed to handle the archiving as part of the FIM design, this could be comfortably achieved by an out-of-band PowerShell script, e.g. initiated as a post-processing step after an export run profile is executed – something like the sketch below.
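
As an illustration only – the OU paths and the termination flag are assumptions – such an out-of-band step might look like this:

# Disable and move accounts flagged for deprovisioning, leaving deletion
# and archival to the AD admins some months later.
Import-Module ActiveDirectory

Get-ADUser -SearchBase 'OU=Staff,DC=contoso,DC=com' `
           -Filter { enabled -eq $true -and employeeType -eq 'terminated' } |
    ForEach-Object {
        Disable-ADAccount -Identity $_
        Move-ADObject -Identity $_ -TargetPath 'OU=Disabled Users,DC=contoso,DC=com'
    }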

So … No de-provisioning? No problem.

Avoiding Rules Extensions

As soon as you know you've got to handle anything other than the most basic of transformations, you find yourself drifting inexorably towards writing these things. So my strategy was to keep things as simple as possible by maximising direct flow rules.

If you want to sync to an LDAP-style directory target, then the best choice of authoritative source is also a directory structure – ideally at least vaguely close to the target schema. But how do you achieve this when your source system(s) are invariably relational systems rather than directory structures? The answer is to re-imagine your relational data as if it were an LDAP directory.

In order to explain the approach, consider a simple relational database with the following entities in an imaginary student management (SMS) system:

  • Student – 1000s of individuals, each belonging to one or more classes
  • Class – 100s of classes, each belonging to a single year
  • Year – 10s of years
  • Teacher – 10s of teachers, each assigned to one or more classes

Each entity is related to one or more of the other entities via a database foreign key constraint. The SMS relational structure for these entities would therefore look something like this:

  • Student <=> Class (generally physically stored as Student <= StudentClass => Class)
    • Class => Year
    • Class => Teacher

Our target Metaverse might have corresponding resource types as follows:

  • Student
    • Classes (multiple)
  • Class
    • Teacher (single)
    • Year (single)
    • Students (multiple)
  • Teacher
    • Classes (multiple)
  • Year
    • Classes (multiple)

In order to generate as many direct attribute flows as possible, the connector space schema for the SMS management agent must align as closely as possible with the Metaverse, if not mirror it exactly. The trick is to use an LDAP schema for your CS, which means one thing – converting foreign key relationships into distinguished name collections. In the above structure we could achieve this as follows:

  • UID=<StudentID>,OU=Students
    • UID=<ClassCode>,OU=Classes,OU=<Year>,OU=Year
  • UID=<ClassCode>,OU=Classes,OU=<Year>,OU=Year
    • UID=<TeacherID>,OU=Teachers
    • UID=<StudentID>,OU=Students
  • UID=<TeacherID>,OU=Teachers
    • UID=<ClassCode>,OU=Classes,OU=<Year>,OU=Year
  • OU=<Year>,OU=Year
    • UID=<ClassCode>,OU=Classes,OU=<Year>,OU=Year

There is no right or wrong here when it comes to inventing a DN structure – just that it should allow the CS to mirror the Metaverse such that attribute flows in/out of it are direct, or at worst simple transformations. Most importantly, the reference attribute flows must almost always be direct. Furthermore, if you found yourself having to transform multi-value attributes then not only would scoped sync rules not be for you, but more than likely the traditional ERE style would be no good to you either!

So as you can see, by imagining your source system as an LDAP structure such as the above, the sync design becomes quite straightforward – and lends itself nicely to scoped sync rules.
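
To make that concrete, here is a rough sketch of rendering the relational rows as LDAP-style DNs (server, table and column names are all illustrative):

# Render each Student-Class relationship as a pair of LDAP-style DNs.
$rows = Invoke-Sqlcmd -ServerInstance 'SMS01' -Database 'SMS' -Query @"
SELECT s.StudentID, c.ClassCode, c.Year
FROM   StudentClass sc
JOIN   Student s ON s.StudentID = sc.StudentID
JOIN   Class   c ON c.ClassCode = sc.ClassCode
"@

foreach ($row in $rows) {
    [PSCustomObject]@{
        StudentDN = "UID=$($row.StudentID),OU=Students"
        ClassDN   = "UID=$($row.ClassCode),OU=Classes,OU=$($row.Year),OU=Year"
    }
}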

Of course if you have a tool that allows you to easily

  • Build consistent LDAP schema for your FIM connectors
  • Replicate changes from your source systems through this structure and into FIM
  • Allow for bi-directional flow
  • Combine multiple data sources (e.g. Text file and/or SQL and/or PowerShell and/or Web Service) in a single connector space

… then that tool (let's call it UNIFY Identity Broker, because that is its name) drives a consistent, highly performant, and highly maintainable set of FIM connectors.

In my latest solution ALL of my FIM management agents besides the AD and FIM connectors were instances of Identity Broker connectors. Of these, most accessed the connected system via a PowerShell layer.

Using Out-of-Band Processes

When there is simply no FIM function available to perform a transformation, then the problem with scoped sync rules is that you can’t employ workflow parameters to pass in data constructed by custom workflow activities. This means you either have to resort to rules extensions (which I was determined NOT to do), or think outside the square a little. Three scenarios come to mind.

  1. Generating a unique account name and email alias (e.g. John.Smith1).
    In the days before the declarative model, this process was always achieved with provisioning rules extensions. With ERE-style declarative came the ability to use custom workflow activities, but these tended to become problematic under a number of well-documented use cases. Now with scoped sync rules I had to come up with another way of doing this. We tried a couple of ideas, but ended up settling on a PowerShell management agent working in harmony with the standard AD management agent, and this worked a treat:

    1. Initial flow rules removed from the AD sync rules completely, leaving it to join and perform persistent flow rules only;
    2. Account (and optional mail alias) creation was performed entirely by a PowerShell MA, which used LDAP lookups on the target AD forest(s) to arrive at a unique value and insert what was effectively a "stub account" immediately (no initial password) – the name-probing idea is sketched after this list;
  2. Setting the initial password and notifying the manager in an email.
    1. An extension to the above was to set the initial password in a PowerShell workflow activity, and pass the value back to a WorkflowData variable to allow this to be included in an email notification.
    2. Once the password was set, a "PasswordIsSet" flag on the account was set to TRUE; this was tied to the EAF for userAccountControl in the AD sync rule, allowing the AD account to be activated only once a password had been assigned.
      This allowed us an alternative to the workflow parameter approach used with the ERE style sync rules.
  3. Setting an AD extension attribute value to the Base64-encoded value of the AD GUID.
    Performing this task is easy in a rules extension, but impossible with scoped sync rules given the available function set. However, it could be performed either as a secondary step in the "set password" workflow, or as a post-processing PowerShell task which searched the target FIM OU for accounts with a missing extensionAttributeXX value and set it (also sketched below). Either way, this did the trick.

There were a number of other variations on the above ideas used at various times in the design, but the above 3 are the main ones that spring to mind. They are enough to make the point – if you're willing to work within the limitations of scoped sync rules by employing methods such as the above, your FIM sync design can end up with no rules extensions – and no EREs either!
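
For illustration, the heart of scenarios 1 and 3 might be sketched as follows (names and formats are assumptions – the real MA also handled the mail alias and multi-forest lookups):

Import-Module ActiveDirectory

# Scenario 1: probe AD for the first free John.Smith / John.Smith1 / John.Smith2 ...
function New-UniqueAccountName {
    param([string]$First, [string]$Last)
    $base = "$First.$Last"
    $candidate = $base
    $i = 0
    while (Get-ADUser -LDAPFilter "(sAMAccountName=$candidate)") {
        $i++
        $candidate = "$base$i"
    }
    $candidate
}
New-UniqueAccountName -First 'John' -Last 'Smith'

# Scenario 3: Base64-encode the account's objectGUID for extensionAttributeXX.
$user = Get-ADUser -Identity 'John.Smith'
[System.Convert]::ToBase64String($user.ObjectGUID.ToByteArray())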

Summary

No doubt there will be some times when you have requirements which will prevent you from using scoped declarative rules. As mentioned in Part 1, there are a couple of check-points you need to cover off before you should be confident of proceeding any further, and these I have attempted to cover. In my case I was able to design and (with the help of my able colleagues) implement a reasonably complex FIM sync solution based entirely on scoped sync rules.

In my last post on this topic I plan to reflect on the overall result and all those 'why?' questions. I'll also share a utility I used to troubleshoot objects that hadn't had the expected sync rules applied. With the ERE model you can see the sync rule physically attached to the target – but scoped sync rules have no such indicator, making troubleshooting much more difficult without the aid of a new tool. I'll also share a couple of FIM sync rule bugs I uncovered but was happily able to work around while the problems are fixed by Microsoft in the fullness of time.
