There have been numerous attempts, ever since the concept of ‘declarative sync rules’ was first introduced with FIM 2010, to eliminate the need for rules extensions altogether, but these have rarely been successful. In all but the most trivial scenarios we find ourselves resorting to writing .NET code when we invariably run into the now well-documented limitations of this approach to custom sync rules. On top of this, when using the original MPR-based rules, the extra sync overhead of expected rule entry (ERE) processing drove most of us to distraction, especially in the build phase where sync rules are always evolving, and not least when large numbers of sync objects were involved.
At TEC in San Diego in 2012, some years after the inception of the declarative model, David Lundell presented his FIM R2 Showdown — Classic vs. Declarative presentation. Despite his protestations to the contrary, many present left his entertaining session firmly of the view that the traditional model won hands down. The argument in favour of the traditional model went something like this:
- Declarative will work at best 8 times out of 10 (see the links below for the main scenarios where it falls short);
- In cases where it does not work the options are custom rules extensions and/or custom workflow activities;
- Even if there is only one case where declarative doesn’t cut it, we are left with business rules to maintain in more than one place;
- Given that most would consider a single place to maintain sync rules to be highly desirable, why bother at all with declarative when you can always build 100% of your sync rules with rules extensions?
The Microsoft FIM product group had invested a lot of energy in bringing the whole declarative concept to fruition, and was not about to give up any time soon. There has been a steely resolve to make a success of this approach, due mainly to feedback from MIIS/ILM customers and prospects before FIM that the product’s biggest weakness was that you couldn’t actually provision anything without writing at least some .NET code, however small. To their credit they took this and other feedback like it on board, and responded with the ‘scoped sync rule’ alternative in FIM 2010 R2.
Those of us who were not so jaded by our own forays into the declarative world as to ‘throw in the towel’ by this point took some interest in this development. Of all my experiences with declarative rules, it was the ERE which frustrated me (and my customers) the most. At one particular site, the slightest rule change always meant many hours (even days) of sync activity to re-baseline the sync service. When this time exceeded the available change windows, I couldn’t help but feel at least partially responsible for the administrators’ pain. Given I had done my share of MCS FIM projects where the declarative model was actually mandated (a case where the sales pitch had often set unrealistic expectations with the customer), it was clear to me that Microsoft wasn’t going to give up on the idea, so I might as well try to ‘get with the programme’. Consequently, on the next major MCS project I embarked on, I was determined to revisit David’s TEC presentation to see if it might be possible to finally achieve what had become something of a FIM ‘holy grail’ – 100% declarative sync.
I had previously read about others’ experiences with this, including the following posts:
- FIM Case Study: Trying to achieve a 100% Declarative (or “Codeless”) Architecture
- Declarative or Bust!
- Best-Practices for declarative and “less-code” FIM solutions
However, to my mind at least, all of these shared a common underlying sentiment … “nice try, but no cigar”. What is more, none of them seemed to discuss in any depth (if at all) the ‘scoped’ alternative to the standard ERE-driven model.
Staring me in the face now was what initially appeared to be a typical FIM sync scenario – with some complexities only emerging well after the initial design was settled:
- Approximately 10-20K user objects under management
- Authoritative HR source (SAP), with extended ‘foundation’ object classes (position, department, cost centre, job class, etc.)
- Provisioning and sync to Active Directory (2 legacy AD forests in a trust relationship, with a new forest to come online at some point in the future)
- AD group membership provisioning based on foundation data references
- A hybrid user mailbox provisioning requirement (users split between Office 365 and on-premises Exchange 2010)
- Provisioning to a legacy in-house access management system (via a SharePoint 2007 list)
- Sync with an externally hosted call management system (provisioning will eventually follow in a subsequent phase)
- Office 365 license assignment
- Notification workflows
- Write-backs to HR (email, network ID)
With the voices of many nay-sayers ringing in my ears, I remained quietly confident I could pull this off, based on the following line of thought:
- So long as I didn’t need to disconnect (de-provision) any objects under sync, I could work with scoped SRs and avoid any use of EREs;
- If I developed a consistent object (resource) model in the FIM service, modelled heavily on the inherent HR structures and relationships, I would be able to engineer the same consistency in the FIM Metaverse and each connector space;
- By investing in each extensible connector design (I had 5 of these) I would ensure that I presented data in the same consistent structure, maximising the chances of ‘direct’ attribute flows both inbound (IAF) and outbound (EAF);
- By handling any complexities known to be beyond the capabilities of scoped SRs (due mostly to the limited declarative function set) outside of the FIM sync process itself, either within the connector import/export process or in a pre/post-sync ‘out of band’ process; and
- By making heavy use of PowerShell (all 5 extensible management agents being instances of a PowerShell connector, as well as all pre/post-sync processing).
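To illustrate the last two principles, here is a minimal, hypothetical sketch of the kind of import script a PowerShell-based extensible connector might use. The file path, attribute names, and the convention of emitting one hashtable per connector space object are assumptions for illustration only, not the actual project code; the point is that anything beyond the declarative function set is pre-computed here, before the sync engine ever sees the data.

```powershell
# Hypothetical import script sketch for a PowerShell-based extensible connector.
# Each hashtable written to the pipeline represents one object presented to the
# connector space, shaped to match the metaverse model so that 'direct'
# attribute flows (IAF) can be used wherever possible.

# Source data: in reality this would come from SAP, SharePoint, etc.
# (path is an assumption for this sketch)
$hrRecords = Import-Csv -Path 'C:\Sync\Extracts\hr-users.csv'

foreach ($record in $hrRecords) {
    $obj = @{}
    $obj['objectClass'] = 'person'
    $obj['employeeID']  = $record.EmployeeID      # anchor attribute (assumed)
    $obj['firstName']   = $record.FirstName
    $obj['lastName']    = $record.LastName

    # Pre-compute anything beyond the declarative function set here, 'out of
    # band', so the scoped sync rules themselves can remain direct flows:
    $obj['displayName'] = '{0} {1}' -f $record.FirstName, $record.LastName

    $obj                                          # emit to the pipeline
}
```

The design choice this sketch reflects is the one argued above: by pushing transformation logic to the connector edge, each connector space presents data in the same consistent structure, and the sync rules in the middle stay simple enough for the scoped declarative model to handle.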
In the next post I will cover how I went about building to the above principles, and some of the challenges I encountered along the way. Without giving the game away entirely, all I will say at this point is that for every challenge there was always a work-around – the question was always going to be whether any one of these would force me to write any .NET code.