So I decided to take up the challenge on a recent FIM2010 R2 project – outlined in the first part of this post.
Let's just say there are plenty of FIM folk who would simply ask ‘why?’ …
- Why would I want to even try working with declarative rules at all?
- If something isn't broken (rules extensions), why fix it?
- Why do you think it will give a better outcome?
- Why do you think scoped rules will work when the alternative type promised so much but failed so spectacularly?
- Why would you want to put yourself through the wringer when you could fail and bring your project down with you?
Well, for a variety of reasons, let's just imagine for a moment that I had convincing answers for each of these – answers that struck such a chord with you that you just want to read on and find out how I did it. Maybe we'll come back to the above at the end. Rest assured, however, that I was not completely convinced myself, and at the outset I still had a bet each way on me failing. So here goes …
No De-provisioning
Firstly, I knew that for this approach to work I couldn't de-provision – that is to say, disconnect objects from the Metaverse and cause deletions or something similar to happen in any of my connected systems.
If you expect your SRs to do this for you then you will need the traditional ERE model. However, when I looked closely at requirements that might on face value appear to require this capability, I found that in each case the need wasn't really there at all. For starters, for systems which are not authoritative sources of identity, it is usually a bad idea to leave a CS entry as a disconnector. Doing so can leave you with “reverse join” problems if you subsequently need to re-connect. Equally, deleting the target object was generally never an option because of the risk of compromising the downstream system (e.g. orphaned ACLs in AD, or SharePoint documents or sites without owners).
I reasoned that choosing not to disconnect at all was the better option. Yes, this could lead to “bloat” issues if left unchecked over a long time. However, the alternative of trying to control the deletion/archive process from FIM is often impractical. I adopted the standard alternative to deprovisioning AD accounts: disabling them, moving them to a ‘disabled users’ container, and leaving it to the AD system admins to handle the deletion and archive process – usually after a delay of a number of months. I also figured that if at some stage I needed to handle the archiving as part of the FIM design, this could be comfortably achieved by an out-of-band PowerShell script, e.g. initiated as a post-processing step after an export run profile is executed.
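For illustration only, here's a minimal sketch of what such an out-of-band archiving step might look like. The OU paths and the six-month window are my assumptions for this example, not the actual project script, and it assumes the ActiveDirectory PowerShell module is available:

```powershell
# Illustrative sketch: archive long-disabled accounts after an export run.
# OU paths and the six-month retention window are assumptions.
Import-Module ActiveDirectory

$disabledOu = 'OU=Disabled Users,DC=example,DC=com'   # hypothetical source OU
$archiveOu  = 'OU=Archive,DC=example,DC=com'          # hypothetical target OU
$cutoff     = (Get-Date).AddMonths(-6)

Get-ADUser -SearchBase $disabledOu -Filter 'Enabled -eq $false' -Properties whenChanged |
    Where-Object { $_.whenChanged -lt $cutoff } |
    ForEach-Object {
        # Move stale accounts out of the disabled container for later archive/deletion
        Move-ADObject -Identity $_.DistinguishedName -TargetPath $archiveOu
    }
```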
So … No de-provisioning? No problem.
Avoiding Rules Extensions
As soon as you know you've got to handle anything other than the most basic of transformations, you find yourself drifting inexorably towards writing these things. So my strategy was to keep things as simple as possible by maximising the use of direct flow rules.
If you want to sync to an LDAP-style directory target, then the best choice of an authoritative source is also a directory structure – ideally at least vaguely close to the target schema. But how do you achieve this when your source system(s) are invariably relational systems rather than directory structures? The answer is to re-imagine your relational data as if it were an LDAP directory.
In order to explain the approach, consider a simple relational database with the following entities in an imaginary student management system (SMS):
- Student – 1000s of individuals, each belonging to one or more classes
- Class – 100s of classes, each belonging to a single year
- Year – 10s of years
- Teacher – 10s of teachers, each assigned to one or more classes
Each entity is related to one or more of the other entities via a database foreign key constraint. The SMS relational structure for these entities would therefore look something like this:
- Student <=> Class (generally physically stored as Student <= StudentClass => Class)
- Class => Year
- Class => Teacher
Our target Metaverse might have corresponding resource types as follows:
- Student
  - Classes (multiple)
- Class
  - Teacher (single)
  - Year (single)
  - Students (multiple)
- Teacher
  - Classes (multiple)
- Year
  - Classes (multiple)
In order to generate as many direct attribute flows as possible, the connector space schema for the SMS management agent must align as closely as possible with the Metaverse, if not mirror it exactly. The trick to doing this is to use an LDAP schema for your CS, which means one thing – converting foreign key relationships into distinguished name (DN) collections. In the above structure we could achieve this as follows:
- Student: UID=<StudentID>,OU=Students
  - Classes: UID=<ClassCode>,OU=Classes,OU=<Year>,OU=Year
- Class: UID=<ClassCode>,OU=Classes,OU=<Year>,OU=Year
  - Teacher: UID=<TeacherID>,OU=Teachers
  - Students: UID=<StudentID>,OU=Students
- Teacher: UID=<TeacherID>,OU=Teachers
  - Classes: UID=<ClassCode>,OU=Classes,OU=<Year>,OU=Year
- Year: OU=<Year>,OU=Year
  - Classes: UID=<ClassCode>,OU=Classes,OU=<Year>,OU=Year
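To make the conversion concrete, here's a minimal sketch of turning join-table rows into DN collections. The StudentClass rows with StudentID, ClassCode and Year columns are invented for illustration:

```powershell
# Illustrative sketch: convert foreign keys into DN-valued reference attributes.
# $enrolments stands in for rows queried from a hypothetical StudentClass table.
$enrolments = @(
    [pscustomobject]@{ StudentID = 'S1001'; ClassCode = 'MATH7A'; Year = '2014' }
    [pscustomobject]@{ StudentID = 'S1001'; ClassCode = 'SCI7B';  Year = '2014' }
)

# One CS entry per student; the class foreign keys become a multi-valued
# collection of class DNs (the "classes" reference attribute).
$students = $enrolments | Group-Object StudentID | ForEach-Object {
    [pscustomobject]@{
        dn      = "UID=$($_.Name),OU=Students"
        classes = @($_.Group | ForEach-Object {
                      "UID=$($_.ClassCode),OU=Classes,OU=$($_.Year),OU=Year"
                  })
    }
}

$students | Format-List    # dn: UID=S1001,OU=Students; classes: two class DNs
```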
There is no right or wrong here when it comes to inventing a DN structure – it just needs to allow the CS to mirror the Metaverse such that attribute flows in/out of it are direct, or at worst simple transformations. Most importantly, the reference attribute flows must almost always be direct. Furthermore, if you find yourself having to transform multi-valued attributes, then not only are scoped sync rules not for you, but more than likely the traditional ERE style will be no good to you either!
So as you can see, imagining your source system as an LDAP structure such as the above makes the sync design quite straightforward. This lends itself nicely to scoped sync rules.
Of course, if you have a tool that allows you to easily:
- Build consistent LDAP schema for your FIM connectors
- Replicate changes from your source systems through this structure and into FIM
- Allow for bi-directional flow
- Combine multiple data sources (e.g. Text file and/or SQL and/or PowerShell and/or Web Service) in a single connector space
… then that tool (let's call it UNIFY Identity Broker, because that is its name) drives a consistent, high-performing, and highly maintainable set of FIM connectors.
In my latest solution ALL of my FIM management agents besides the AD and FIM connectors were instances of Identity Broker connectors. Of these, most accessed the connected system via a PowerShell layer.
Using Out-of-Band Processes
When there is simply no FIM function available to perform a transformation, then the problem with scoped sync rules is that you can’t employ workflow parameters to pass in data constructed by custom workflow activities. This means you either have to resort to rules extensions (which I was determined NOT to do), or think outside the square a little. Three scenarios come to mind.
- Generating a unique account name and email alias (e.g. John.Smith1).
  In the days before the declarative model, this process was always achieved with provisioning rules extensions. With ERE-style declarative rules came the ability to use custom workflow activities, but these tended to become problematic under a number of well-documented use cases. Now, with scoped sync rules, I had to come up with another way of doing this. We tried a couple of ideas, but ended up settling on a PowerShell management agent working in harmony with the standard AD management agent, and this worked a treat (see the sketches after this list):
  - Initial flow rules were removed from the AD sync rules completely, leaving the AD MA to join and perform persistent flow rules only;
  - Account (and optional mail alias) creation was performed entirely by a PowerShell MA, which used LDAP lookups on the target AD forest(s) to arrive at a unique value and insert what was effectively a “stub account” immediately (no initial password).
- Setting the initial password and notifying the manager in an email.
  - An extension to the above was to set the initial password in a PowerShell workflow activity and pass the value back to a WorkflowData variable, allowing it to be included in an email notification.
  - Once the password was set, a “PasswordIsSet” flag on the account was set to TRUE. This flag was tied to the EAF for userAccountControl in the AD sync rule, so that the AD account was activated only once a password had been assigned.
  This gave us an alternative to the workflow parameter approach used with ERE-style sync rules.
- Setting an AD extension attribute value to the Base64 encoded value of the AD GUID.
  Performing this task is easy in a rules extension, but impossible with scoped sync rules given the available function set. However, it could be performed as either a secondary step in the “set password” workflow, or as a post-processing PowerShell task which searched the target FIM OU for accounts with a missing extensionAttributeXX value and set it. Either way, this did the trick.

There were a number of other variations on the above ideas used at various times in the design, but these three are the main ones that spring to mind. They are enough to make the point: if you're willing to work within the limitations of scoped sync rules by employing methods such as the above, your FIM sync design ends up with no rules extensions – and no EREs either!
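To give a feel for the first scenario, here's a minimal sketch of the kind of uniqueness probe the PowerShell MA idea relies on. The GivenName.Surname naming convention and the use of the ActiveDirectory module are my assumptions, not the project's actual code:

```powershell
# Illustrative sketch: derive a unique account name by probing AD for collisions.
Import-Module ActiveDirectory

function New-UniqueAccountName {
    param([string]$GivenName, [string]$Surname)

    $base      = "$GivenName.$Surname"
    $candidate = $base
    $suffix    = 0

    # Append an incrementing suffix until no existing account matches.
    while (Get-ADUser -Filter "sAMAccountName -eq '$candidate'") {
        $suffix++
        $candidate = "$base$suffix"
    }
    $candidate
}

New-UniqueAccountName -GivenName 'John' -Surname 'Smith'   # e.g. John.Smith1
```

And for the third scenario, a sketch of the post-processing task that back-fills the Base64-encoded objectGUID. The OU path and the choice of extensionAttribute10 are placeholders for whatever the real design used:

```powershell
# Illustrative sketch: set extensionAttribute10 to the Base64 of objectGUID
# for any account in the target OU that does not yet have a value.
Import-Module ActiveDirectory

$targetOu = 'OU=FIM Managed,DC=example,DC=com'   # hypothetical OU

Get-ADUser -SearchBase $targetOu -Filter 'extensionAttribute10 -notlike "*"' -Properties extensionAttribute10 |
    ForEach-Object {
        $encoded = [System.Convert]::ToBase64String($_.ObjectGUID.ToByteArray())
        Set-ADUser -Identity $_ -Replace @{ extensionAttribute10 = $encoded }
    }
```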
Summary
No doubt there will be times when you have requirements which will prevent you from using scoped declarative rules. As mentioned in Part 1, there are a couple of checkpoints you need to cover off before you can be confident of proceeding any further, and I have attempted to address these here. In my case I was able to design and (with the help of my able colleagues) implement a reasonably complex FIM sync solution based entirely on scoped sync rules.
In my last post on this topic I plan to reflect on the overall result and all those ‘why?’ questions. I'll also share a utility I used to troubleshoot objects that hadn't had the expected sync rules applied. With the ERE model you can see that a sync rule has been physically attached to the target, but scoped sync rules have no such indicator, making troubleshooting much more difficult without the aid of a new tool. I'll also share with you a couple of FIM sync rule bugs I uncovered but was happily able to work around while the problems are fixed by Microsoft in the fullness of time.