A new angle on an old #FIM2010 problem

Anyone who has worked with the FIM Sync engine, in its current or previous guises, for any length of time will be familiar with the age-old dilemma – how best to enforce uniqueness constraints on a newly provisioned AD account.  Or on a mailbox alias, for that matter.

In a standard FIM Sync design, in particular one where there is an authoritative source of identity such as an HR system, there are standard considerations that must be observed, the basics of which are explained here.  What makes this particularly challenging where FIM is responsible for assigning unique values in AD (based on a supplied algorithm, e.g. surname+initial+number) is a number of significant constraints such as the following:

  1. Organisations continue to insist on user-friendly account names.
    In student or staff management systems there is generally already an enforced unique numeric ID that would be perfect for sAMAccountName, userPrincipalName, mailNickname, CN, or all four.  Yet people insist on employing age-old algorithms based on functions of surname, given name and initials, adding unwanted complexity and ongoing management overhead to your IdM solution.  Such values invariably change at some point during the identity lifecycle (i.e. they are not immutable), making manual intervention at some stage practically unavoidable.
  2. FIM needs to have knowledge of all existing user accounts before it can provision a new one.
    It is rare that anyone would really want to include the full OU tree in the scope of the FIM ADDS MA.  However, that is effectively what you have to do in order to implement your uniqueness algorithm in FIM logic.
  3. FIM may need to enforce uniqueness across multiple AD domains and forests.
    Particularly where a hybrid cloud/on-premise user synchronisation scenario exists with a single tenant, it is no longer satisfactory to enforce uniqueness in a single AD forest alone.
  4. Scoped declarative sync rules cannot be used to generate a unique value.
    These don’t support parameters, so you can’t write your clever workflow activity and pass a value in like you might have done in the days when EREs were the only declarative approach available.  In FIM2010 we’re still limited to a very small set of functions, and while calculating a random number might still be possible, implementing the type of complex rules most organisations insist on is effectively impossible today.
  5. ERE-style declarative sync rules cannot reliably be used to generate a unique value.
    If you insist on going down this path then yes, there is a way to use a custom workflow activity to pass in a sync rule parameter value.  But don’t expect this to work when FIM is under even slight duress (see #6 below), and certainly don’t expect the calculated value to be guaranteed unique – especially if you are dealing with latency in writing the value out to AD.
  6. When used with the FIM sync engine, the FIM service is a poor option for calculating a unique value in general.
    When processing even a moderate volume of concurrent requests invoking a FIM workflow activity designed to calculate a unique value, the FIM service will invariably cause requests to fail with the dreaded Denied error.  The uniqueness enforced in the RCDC for AccountName might work OK for new user records created in the FIM portal, but not so for new ones created either by the sync service or imported in bulk via the FIM API.  You may also think that bypassing the sync engine altogether to generate your unique account name might be an option – if so, you are at the same point I got to – but you will then need to all but reinvent the very good, robust wheel that is the sync engine when it comes to making your value ‘stick’ in AD.
  7. Enforcing consistency between sAMAccountName and the userPrincipalName (UPN) prefix.
    It may not even be important to many people, but I always figured it would at least be desirable for these two values to be the same.  Of course they can’t be if you can’t keep your values under the 20-character limit of the sAMAccountName property, meaning that the other 44 characters available to you in the UPN prefix will invariably go to waste.  I figure that the ADUC console encourages consistency, so why shouldn’t FIM?
  8. Enforcing consistency between UPN and email.
    In a lot of Office365 implementations that pre-dated the “alternate login attribute” concept, it was a requirement for SSO that these values be identical.  This is made additionally challenging when multiple UPN suffixes exist to choose from.
  9. Enforcing uniqueness beyond sAMAccountName and userPrincipalName.
    You can rarely get away without considering some or all of the other standard AD attributes too – namely CN, displayName and mailNickname.  Even if you do get lucky and achieve agreement to use the student ID or employee ID as the login name, you’ll generally need to come up with a friendlier email alias.  A unique CN becomes important if you are required to locate your account in different OUs depending on identity state information (e.g. employeeStatus).  Then there’s displayName – FIM hates it when you can’t enforce uniqueness here – and I thoroughly agree with Brad Turner’s 2007 argument here.

I think it is fair to say that in over a decade of working with this toolset, nothing has emerged that stands out as a more consistent way of addressing all of the above concerns than the original MIIS developers’ guide examples from the CHMs that used to ship with the product.  I have used a similar approach many times with great success – only to find that customers do not want to have to edit your C# or VB.Net code years later (assuming they can even put their hands on it) just to change the number of digits appended to surname+initial from 2 to 3, because they never expected volumes to rise the way they did.  As someone with a .Net developer background, like some of you reading this article, I would happily keep things this way if the customer were happy with the approach.  Invariably, they are not.
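To make the configurability point concrete, here is a minimal Python sketch of the surname+initial+number style of algorithm discussed above.  It is an illustration only – the function name and the `existing` lookup are hypothetical stand-ins for however you query current account names – but note that the suffix width is a simple parameter, the very thing that is painful to change when it is baked into compiled code:

```python
def unique_account_name(surname, given_name, existing, digits=2, max_len=20):
    """Build surname+initial, truncate to fit within the sAMAccountName limit,
    then append a zero-padded counter until an unused value is found.
    'existing' is a hypothetical stand-in for your directory-wide name lookup."""
    base = (surname + given_name[:1]).lower()
    base = base[:max_len - digits]            # leave room for the numeric suffix
    for n in range(1, 10 ** digits):
        candidate = f"{base}{n:0{digits}d}"
        if candidate not in existing:
            return candidate
    raise ValueError("suffix space exhausted - widen 'digits'")
```

For example, with no clashes `unique_account_name("Smith", "John", set())` yields `smithj01`, and changing `digits=2` to `digits=3` is a one-parameter change rather than a code edit and recompile.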

So – how do we go about designing a FIM solution that covers off points 1–9 above (as well as others I simply can’t think of right now) – and NOT write any .Net code, while still using the FIM sync engine for all AD sync and provisioning?  And no 3rd party tools or CodePlex libraries either.

Well, yes, you’re partially right if you’re thinking PowerShell, and I alluded to it above in point #6.  But rather than running a PowerShell activity from within a FIM workflow, you can instead use your favourite PowerShell connector to do this.  And all with scoped declarative sync rules and not a rules extension in sight – in fact this is probably only the 3rd time I’ve managed to create an entire sync solution without a rules extension.  The other 2 were with my company’s own xml-based ‘codeless’ extensions, so I guess this makes it the first in the purest sense.

The idea is simple enough – use a scoped SR with initial flow to provide the attributes that are the inputs to your uniqueness algorithms – but applied to a PowerShell MA/connector instead of the ADDS MA.  This gives you the freedom to run your uniqueness checks against the whole directory tree, without having to bring every OU into the scope of the ADDS MA(s) you will still need for all the standard attribute flows best handled that way.  You can tie the two together with an inbound flow rule from AD which can unambiguously join on an attribute value that was written to the AD account created by your PowerShell MA.  At this point you can also move the account out of the default OU in which the ‘stub’ account was originally placed.

So that’s the point of this post really.  By combining 2 relatively new tools (well, let’s face it, nothing FIM is ever bleeding edge, is it?) – your favourite PowerShell MA and FIM2010 R2 scoped declarative rules – you have all you need to architect a maintainable FIM sync solution that ticks all the boxes.  Not only that, but you’ll also find your customer is happy there is no .Net code to maintain, and you will be happy in the knowledge that whoever comes along after you to extend the solution will thank you for making their job a whole lot easier.  Sure – you are making yourself less indispensable, but then again, with the future of MIM2015 and AADSync pointing to a complete absence of custom rules extensions, you’re also creating a more future-proof solution on which to base repeatable business.

Posted in FIM (ForeFront Identity Manager) 2010

Key #FIM2010 Principles for the New Year and the #MSMIM2015 Timeframe

It’s been an eventful couple of months leading up to Christmas for me, starting with the MVP conference in Redmond and followed closely by my company UNIFY’s 10th anniversary, at which I was taken by surprise to be honoured as the first UNIFY 10-year employee. Although I can’t remember a word of my acceptance speech, I won’t forget the evening in a hurry, and how proud I felt to be part of an exceptional selection of IAM professionals carving a niche in the Aus/NZ identity and access market, not only in FIM/MIM, but also in other complementary IAM technologies such as Azure, Ping and Optimal. It was a nice touch by one of my Novell-inspired colleagues who presented a different take on our theFIMTeam.com brand.

Since then I have been heavily involved in a couple of large-scale FIM deployments and this will continue in the new year with a major project to use FIM2010 to replace an ailing access provisioning system. I will be drawing on all 10 years of my ILM/FIM experience with this one as the project seems to take on something new by the day, but I’m looking forward to sharing the challenge with a more than capable team assembled for the task. This project really brings together so many key concepts as to how to approach identity and access life-cycle provisioning that I thought I’d share the main ones here, as they will remain as relevant as ever while we roll into a new year and the pending MIM2015 timeframe.

  1. Achieving stakeholder accord across multiple platforms and programmes.
    Especially in an enterprise environment, believing that you can plough ahead and eventually win the naysayers over is foolhardy and disrespectful.  Everyone is entitled to their opinion, and engaging with them all early is vital to share ideas and draw on experiences that avoid pitfalls of the past.  Knowing where to respectfully hold your ground is just as important as acknowledging and embracing a superior alternative approach.
  2. Understanding the target environment and culture
    Sure, there are systems to integrate, but always keep in mind the people that have to deal with them day by day, and understand the impact of the changes you will invariably introduce.  While you may see yourself as the harbinger of change, others may measure the success of your project by the exact opposite!
  3. Maintaining clarity of vision
    Don’t take on any more than you can handle in the timeframe allowed.  There is always more to do, and pressure to try to accommodate everyone’s needs and ideas at once.  Identify what is paramount for an initial successful deployment, and build your strategy from that.  Don’t eliminate anything, but clearly lay out a roadmap and identify a timeframe for each targeted requirement.
  4. Integrating processes not just data
    Think about on-boarding, moves, and off-boarding.  Extend this thinking to edge case scenarios such as rehires and elevated duties.  Think about the events that drive changes, and work to see how you can best leverage these; maybe not just the ones that are happening now but perhaps those also falling in the near future.
  5. Provisioning relationships not just identities
    Especially when working with FIM, or MIM later this year, resist pressure to leave key relationships between data entities unmodelled – you will need them to drive policy.  Rather than caving in to working with ‘flat’ data structures where every piece of information is a string attribute of a user, point out the benefits of modelling a simplified uniform data structure in FIM.  Demonstrate that by maintaining and honouring these relationships when synchronising entities between multiple systems, not only do you ensure referential integrity, minimise sync times and avoid error, but you also provide the mechanisms you need to add value in FIM in terms of policy.
  6. Responding to changes in a timely manner
    I will come back to this below …
  7. Honouring multiple authoritative sources
    Rarely is one platform or system 100% authoritative for all entities and attributes in a synchronisation/replication model.  Acknowledge this up front by identifying the processes in connected systems, rather than just the data, that might come into conflict when automated sync comes into play.  Build flexibility in your model to adapt to changes as they invariably evolve, along with collective understanding.
  8. Planning for the future
    Further to point #3, we are doing our job well if we are building a strong foundation for future identity and access management initiatives and requirements.  Don’t lock your customer into something that will not allow them to adapt as their business evolves, any more than is absolutely necessary.

I know there are even more, but the above stand out to me as critical to success as I face the busy months ahead.  I have posted on some of these before, and find myself coming back to them over and over.

I am presenting the January 2015 FIMTeam UG session in a couple of weeks (yes, even though many of you are still on holidays).  In this session I will be addressing point #6 above.  Those of you that know me will understand that this is a passion of mine, and for good reason.  I really need you all to understand how FIM sync can be “uplifted” in a way you may never have thought possible, in order to deliver not only to SLAs, but also to people’s true expectations of a modern identity life-cycle management solution.  Looking forward to your company – but if you miss it you will be able to view the recording at your leisure from the above link.

Happy 2015 everyone – may it be the best ever.

Posted in Event Broker for FIM 2010, FIM (ForeFront Identity Manager) 2010

#FIM2010 Run on Policy Update saves the day

As this old 2009 post on the Bobby and Nima blog attests to, there is often value in turning on the ROPU setting on a FIM2010 workflow – even if it’s only temporarily.

My use case is a workflow which adds a sync rule to a target user object to write back an email address to an HR system … in this case that actually means creating a contact record with the new email.  During testing I had found that bulk emails initiated from the HR platform to live users when their email was set had the potential to be career-limiting – so I needed to introduce an override concept.  This I did by implementing a sync rule parameter and testing for the presence of a value in the supplied parameter in the EAF for email.  If a value was set I would use it instead of the email bound to my user object.  Simple enough idea, and it did the trick nicely.  That is, until the default email value for existing users needed changing …

Changing the parameter on the workflow is the obvious first step – but this of course didn’t affect the existing EREs.  Enter ROPU.  Simply disabling and re-enabling my MPR re-triggered my workflow (set transition IN) for every user in scope of my ResourceFinalSet – sure this was a lot of activity in a short period of time, but it did the trick.

Note that I find it is ALWAYS good practice to remove any existing SR as the first step of my workflow before adding any new SRs … otherwise you can get the same SR added many times over.

Posted in FIM (ForeFront Identity Manager) 2010

Replay your #FIM2010 ADDS MA

An interesting take on the Replay MA idea came to me that I want to share today.

So far the published use cases for this idea have been restricted to the ‘replaying’ of the FIM Service MA alone – such as dealing with ‘skipped-not-precedent’ issues and the like.  This post is about a different, more topical scenario – specifically the need to manage Office 365 licence allocations based on AD group membership.  In this case, the customer wants to manage allocations based on group membership managed through a (non-FIM) 3rd party tool … and FIM Synchronisation (via the AAD connector) is being tasked with translating membership changes into licence allocation changes for Office 365.

The problem with this is two-fold:

  1. The API for assigning licenses works on the basis of which licenses you do NOT want a user to get (a topic for another day); and
  2. The delta is on the GROUP object when you actually need a delta on the USER (member) objects.

Solution?  Simple … replay the delta import of your source ADDS MA, and map the member user objects to your FIM Metaverse to ‘touch’ these MV objects and trigger your export attribute flow to AAD/Office 365.  There you have it … a kind of freebie version of the traditional ‘Auxiliary MA’ idea from MIIS/ILM days.
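The group-to-user translation at the heart of this trick can be sketched as follows.  This is an illustration only, not the actual MA drop-file format – the record shapes, attribute names and DNs are all hypothetical – but it shows the essential move: a single delta on the GROUP object fans out into one ‘touch’ per member USER, so that export attribute flow fires on the users themselves:

```python
def expand_group_delta(group_delta):
    """Yield one 'touch' record per member added to or removed from the group,
    so downstream (user-level) flows fire even though the delta arrived on the
    group.  Record shapes here are hypothetical, for illustration only."""
    for dn in group_delta.get("member_added", []):
        yield {"dn": dn, "changeType": "replace"}
    for dn in group_delta.get("member_removed", []):
        yield {"dn": dn, "changeType": "replace"}

# A hypothetical delta on a licensing group: Alice joined, Bob left.
delta = {
    "dn": "CN=O365-E3,OU=Licensing,DC=corp,DC=local",
    "member_added": ["CN=Alice,OU=Staff,DC=corp,DC=local"],
    "member_removed": ["CN=Bob,OU=Staff,DC=corp,DC=local"],
}
touches = list(expand_group_delta(delta))   # two user-level change records
```

Both the join and the leave become user-level changes, which is exactly what you need when the licence flow hangs off the user object rather than the group.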


Posted in Active Directory, FIM (ForeFront Identity Manager) 2010, ILM (Identity Lifecycle Manager) 2007

A midsummer LITE dream

With our own personal identity details being proliferated on the web at an unprecedented rate, many of us are finally taking steps of our own to protect ourselves. But it is a daunting proposition to rein in what has already become a runaway train in many ways.

Lying in bed trying to get to sleep, I wonder what would happen if I woke up to find my iPhone being held to ransom by unscrupulous identity thieves. At least I’ll be aware almost as soon as it happens, and stand a fighting chance of doing something about it I hope.  But what happens in a corporate context when the identity theft goes unnoticed for some time? I’m talking about things like acts of fraud or malice carried out by former employees against a former employer through access that wasn’t identified and revoked in time.

This scenario is very real now and has been for many years. One of my first major IdM projects on the old MIIS platform was commissioned for exactly that reason. That organisation was compelled by its own shareholders and stakeholders to take action in response to an attack, and fortuitously for all parties the funds were always going to be set aside to make this happen. However, almost a decade later, most organisations are still waiting for an ‘opportune moment’ to take the plunge in their own IdM initiative, and crossing their fingers that nothing sinister will catch them napping in the meantime.

I strongly suspect many organisations are foolishly waiting for someone else to magically solve their problem for them. Surely one day soon a Cloud Identity Service provider will reach out and pull them onto the rescue helicopter and save their organisation the pain of building their own solution?

I drift off to a restless sleep …

<cue spooky music>

I find myself the CEO of a small-to-medium sized retail firm of some 300 employees that has been in operation for just on 10 years. I have long recognised the value in relocating my rapidly expanding IT operations to the cloud, and have just taken what I considered to be the obvious step of moving to Office 365.

Like 98% of all organisations these days I already had my own ‘on prem(ise)’ Active Directory forest, and having just made the O365 move I am beaming with self-satisfaction at having refused to approve the in situ upgrade of our old Exchange mail system my CIO recommended 3 years ago. However, I canned an IdM initiative at the same time, with funds being redirected to a (much sexier!) company intranet replacement with Office SharePoint and a CRM.

At the time my CIO reported that our company’s AD was in need of a redesign, and that some of the more attractive features of SharePoint (audience targeting, built-in manager org structure, approval delegations) and Exchange (dynamic address lists) were not going to be usable without this. Furthermore, AD groups and address lists had proliferated to the point where there were more groups than users. But worse still, nobody seemed to be on top of which user accounts should be active, or which now had inappropriate permissions. Despite all this, I accepted advice from a trusted vendor that a quick fix was all that was needed to ‘rationalise’ the number of groups and disable all accounts which had not been accessed in 3 months. I also recall that a quick cross-check with a dump from our chris21 HR system had identified several accounts of former employees which were also disabled.

That was 6 months ago, and since then I have been marveling at how much more reliable our email service has been, with no obvious additional management overhead now that Microsoft’s ‘DirSync’ (or ‘identity bridge’ as we refer to it now) was quietly humming away automatically provisioning mailboxes and syncing Azure identities and now passwords with our 10-year old on-prem AD. Life is sweet, and I am now asking my CIO to look at commissioning other cloud services such as SalesForce now that Azure federated access with single-sign-on (SSO) is readily available.

The tranquility is rudely interrupted by a call from my CFO, who has just been advised that the O365 license limit has been exceeded and that I need to double our number of CALs. In addition, the mailbox limit allocations initially selected were now woefully inadequate, despite only minimal growth in our organisation’s size. Worse was to come.

I now find myself reading an email from a large client thanking me for my advice to cancel their regular large purchase order, saving them any potential embarrassment of being caught short.  They tell me they were at first disappointed and confused, but are now happily signed on with a major competitor for the next 3 years. This is a huge surprise to me, as I was at lunch with their MD only the previous month and had agreed on a new deal for additional product lines. Instead the email finishes by thanking me for our many years as their trusted supplier and wishing me all the best.

I sit there scratching my head as to how this could happen … then I remember a conversation I had with a former employee before he left the company to join this same major competitor, where he had asked explicitly about the account and how proud he had been to introduce them to us ten years ago.  The penny dropped and I realised what must have happened.

I immediately reach for the phone but before I can speak to my AD administrator to investigate a possible security breach, I wake up in a cold sweat …

I decide that my life must be far too dull to be dreaming about this sort of stuff (supposing for a minute that I actually did!).  But I do wonder what it will take for people to get serious about sorting out the integrity of their on-premise AD before they go publishing it willy-nilly to the cloud.  Why doesn’t everyone listen to us wise Identity folk and “bite the IAM bullet” sooner rather than later?  I suspect it just comes down to people riding their luck as long as they can … and as long as they can argue that a “proper” IAM solution is out of their reach for now.

Well the good people at UNIFY have come up with our new “LITE” approach to IdM (for chris21 or Aurion HR only at this stage), so that companies like the one in my dream can have a foundation level, enterprise standard IdM solution deployed to production within a couple of days (my first site took only 3 days).  They can now happily DirSync to their hearts’ content safe in the knowledge that only current staff have access to O365 and their federated SalesForce customer directory, with the added bonus of an always accurate GAL complete with manager-based organisational structure.  And by the time they want to extend this to the next connected system, we can seamlessly extend to FIM2010 now or look to leverage the new MIM platform as a logical progression.

Posted in Active Directory, FIM (ForeFront Identity Manager) 2010, Identity Broker LITE

#FIM2010 MPR Integrity Checks

I recently had reason to suspect that there were a number of MPRs which had become corrupted in a lab environment due to the deletion of set objects.

FIM 2010 doesn’t complain when you delete a set, but it will leave any associated MPRs in an invalid state.  Obviously this is not desirable, and you wouldn’t intentionally be doing this.  However, it is possible that someone who doesn’t know any better could make this mistake, and if they have, how would you know?

I decided I’d write a couple of xpath queries which could be useful as MPR search scopes – they identified a number of faulty MPRs, and they may be worth running in your own environments now for that extra peace of mind!

  • Rights-granting policy where the PRINCIPAL set reference is missing
/ManagementPolicyRule[not(PrincipalSet=/*) and GrantRight=true and not(starts-with(PrincipalRelativeToResource,'%'))]
  • Non-rights-granting policy where the FINAL set reference is missing
/ManagementPolicyRule[not(ResourceFinalSet=/*) and not(GrantRight=true) and not(starts-with(PrincipalRelativeToResource,'%')) and not(starts-with(ActionType,'Transition'))]
  • Transition IN policy where the FINAL set reference is missing
/ManagementPolicyRule[not(ResourceFinalSet=/*) and not(GrantRight=true) and not(starts-with(PrincipalRelativeToResource,'%')) and (starts-with(ActionType,'TransitionIn'))]
  • Transition OUT policy where the CURRENT set reference is missing
/ManagementPolicyRule[not(ResourceCurrentSet=/*) and not(GrantRight=true) and not(starts-with(PrincipalRelativeToResource,'%')) and (starts-with(ActionType,'TransitionOut'))]

Note that the above queries are unlikely to be a definitive set, and I’d be keen to add to them over time.  They are also written on the premise that rights-granting MPRs do not invoke any action workflows (a “best practice” I stick to religiously).

Hope this sparks some other ideas on FIM policy integrity checks.  Let me know if you come up with any others, or variations on the above.


Posted in FIM (ForeFront Identity Manager) 2010

Identifying #FIM2010 Database Index Fragmentation

I want to share the following SQL script which I have adapted for FIM from the original here.

If you read the blog post you will understand that both FIM databases meet the criteria the author describes (GUID cluster keys) as a cause of high index fragmentation, leading to poor FIM performance in a variety of ways (even SQL timeouts in extreme cases).  If only the GUID keys could be sequential this problem could be avoided – but alas they are not.  Hence the need to do something about them – and regularly!

When troubleshooting poor application performance where SQL is involved, my approach (with both FIM services) is to open the following script in SQL Server Management Studio, selecting the FIMService database in the toolbar drop-down list:

SELECT
 OBJECT_NAME (ips.[object_id]) AS [Object Name],
 si.name AS [Index Name],
 ROUND (ips.avg_fragmentation_in_percent, 2) AS [Fragmentation],
 ips.page_count AS [Pages],
 ROUND (ips.avg_page_space_used_in_percent, 2) AS [Page Density],
 CASE
  WHEN ips.avg_page_space_used_in_percent = 0
  THEN ips.page_count * ROUND (ips.avg_fragmentation_in_percent, 2) / 100
  ELSE ips.page_count * ROUND (ips.avg_fragmentation_in_percent, 2) / ROUND (ips.avg_page_space_used_in_percent, 2)
 END AS [Weighting]
FROM sys.dm_db_index_physical_stats (DB_ID ('FIMService'), NULL, NULL, NULL, 'DETAILED') ips
--FROM sys.dm_db_index_physical_stats (DB_ID ('FIMSynchronizationService'), NULL, NULL, NULL, 'DETAILED') ips
CROSS APPLY sys.indexes si
WHERE si.object_id = ips.object_id
 AND si.index_id = ips.index_id
 AND ips.index_level = 0
 AND si.name IS NOT NULL
ORDER BY
 CASE
  WHEN ips.avg_page_space_used_in_percent = 0
  THEN ips.page_count * ROUND (ips.avg_fragmentation_in_percent, 2) / 100
  ELSE ips.page_count * ROUND (ips.avg_fragmentation_in_percent, 2) / ROUND (ips.avg_page_space_used_in_percent, 2)
 END DESC

The results that come back (after about a minute on the first run) show the tables I have rated as most in need of defragmentation (highest weighting) at the top.  I then proceed down the list, locating each offending table (e.g. ObjectValueReference is a prime candidate), expanding it in the Object Explorer treeview, and selecting the Rebuild All option from the right-click menu (at this point I am a tad heavy-handed, and prefer to rebuild ALL of the indexes, not just the one that shows the highest weighting).  For particularly large FIM databases it is often best to stop the FIMService first before doing this, but I find I don’t always have to.  Once I have completed this exercise I repeat it once or twice until I am satisfied that the remaining fragmentation is acceptable and not likely to cause further problems for now.

I then select the FIMSynchronizationService database in the drop-down list, comment the line referencing FIMService, uncomment the corresponding FIMSynchronizationService line, and repeat the above process for the FIM Sync database (with far fewer tables here, you will find this much quicker).

At some stage I plan to implement something along the lines of the SQL Server Maintenance Solution, but in the meantime I am using this spot-fix approach – particularly after the initial system data load on deployment, or after periods of high data volatility such as the beginning of a school year at an education site :).  I like the weighting idea because the generic “&gt; 30% fragmentation” rule (or similar) doesn’t necessarily highlight the fragmented indexes that are having the greatest performance impact yet fall below your threshold.
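As I read the script, the weighting boils down to a simple calculation – fragmentation scaled up by table size and by low page density.  Restated as a small Python sketch for clarity (the function name is mine, not part of the script):

```python
def weighting(page_count, frag_pct, density_pct):
    """Mirror the CASE expression in the SQL above: scale fragmentation by
    page count and (inverse) page density, so that large, sparse, badly
    fragmented indexes float to the top of the results."""
    frag = round(frag_pct, 2)
    if density_pct == 0:
        # no density figure available: fall back to pages * fragmentation%
        return page_count * frag / 100
    return page_count * frag / round(density_pct, 2)
```

For example, a 1,000-page index at 50% fragmentation and 80% page density scores 625, whereas the same fragmentation on a 10-page index barely registers – which is exactly why this ranks real-world impact better than a flat “&gt; 30%” threshold.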

Posted in FIM (ForeFront Identity Manager) 2010, ILM (Identity Lifecycle Manager) 2007

#FIM2010 R2 SP1 (W2012) Oracle Management Agent requires additional Oracle driver

My good friend Henry from infoWAN in Germany asked me to post the following for him on my blog (in lieu of setting up one of his own at least for now).  Here is the tip that Henry discovered and was so keen to share …

My task was to replace an ILM 2007 server with FIM 2010 R2 SP1 running on Server 2012.  The newly configured FIM server was being stood up alongside other systems connected to an Oracle database version 11g.  Here was my approach:

  1. Checked the Server compatibility with the release notes of FIM 2010 R2 SP1.
  2. Checked Oracle compatibility with the Management Agent list.
  3. Installed FIM and the Oracle Client Software 11.2.0.x, and tried to create the Oracle Management Agent …

This was as far as I got before the install failed, reporting that the Client Software could not be found, as displayed below:


Error connecting to Oracle database

At this point I checked the Oracle database connection using SQL*Plus and was able to open the View in question. Given this proved the client was installed correctly and the settings in the tnsnames file were also fine, I then checked file permissions on the Oracle Client directory for the FIM Sync service account, as others had suggested in the same situation.

I then dug a bit deeper, using Process Monitor to look at what was happening behind the scenes. The most important clue that led me in the right direction was the highlighted access to a CLSID registry key which could not be found by miisserver.exe:


Using the process monitor to identify a missing registry key

This registry key was not present on the server, so I searched for the ID {3F63C36E-51A3-11D2-BB7D-00C04FA30080} on the Internet and found references to an OLEDB DLL provided by Oracle:


In the end I discovered that this OLE DB driver was NOT included in the Oracle Client software package.  Instead it is included in the 64-bit Oracle Data Access Components (ODAC), which can be downloaded separately from the Oracle web site.  This is the 64-bit ODAC 11.2 Release 5 for Windows x64 – it contains the 64-bit Oracle Provider for OLE DB.

Having installed the software, the registry key is now present on the server and references the Oracle OLE DB DLL – thereby enabling me to create the Oracle Management Agent on my FIM box:

Oracle DLL registry settings
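If you want to script the same check that Process Monitor pointed me to, something along these lines would do it – a Python sketch with helper names of my own invention; the registry probe only runs on Windows, and the CLSID is the one from the error above:

```python
import sys

# CLSID that miisserver.exe was probing for - the Oracle OLE DB provider.
ORAOLEDB_CLSID = "{3F63C36E-51A3-11D2-BB7D-00C04FA30080}"

def clsid_key_path(clsid):
    """Full registry path the sync engine looks up for a COM provider."""
    return r"HKEY_CLASSES_ROOT\CLSID" + "\\" + clsid

def clsid_registered(clsid):
    """True/False on Windows; None elsewhere (check not applicable)."""
    if sys.platform != "win32":
        return None
    import winreg
    try:
        with winreg.OpenKey(winreg.HKEY_CLASSES_ROOT, "CLSID\\" + clsid):
            return True
    except OSError:
        return False

print(clsid_key_path(ORAOLEDB_CLSID))
```

If this returns False after a client-only install, installing the 64-bit ODAC package (as above) is what creates the key.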

As it happens, I have a feeling I’ll be needing just this piece of vital info myself in the coming weeks …

Posted in FIM (ForeFront Identity Manager) 2010 | 1 Comment

Uncovering #FIM2010 Service Set Correction Requests

When responding to this FIM forum post tonight it occurred to me that monitoring for and troubleshooting these events is something I’ve probably not rated highly enough on the priority list.

Digging a bit further I stumbled upon this TechNet WIKI article from Markus – and it reinforced the thought that behind every recurring set correction you are likely to uncover a policy design flaw that’s probably going to be a pain in the !@&*#! to track down.  This is right up there with the failed FIM request that occurs when two workflow instances are spawned attempting to concurrently apply the same action to a FIM object – where one succeeds and the other fails with a “denied” exception.  These types of errors are really the hardest to pin down, and it’s why I’m bothering to post about them.

Markus explains a scenario which can cause the set correction condition to occur – I had to read it a couple of times before I understood this.  Maybe you will too – in which case the following variation may help:

The end result of multiple updates to the same FIM resource may well be that the resource satisfies a set’s criteria, even though it does not do so after each request individually.  If requests are processed sequentially on a single thread, then the last request would be expected to trigger the criteria being satisfied.  However, in periods of high volatility and multi-threading, if the individual requests are processed concurrently it is possible for all requests to be fully processed without the set condition ever having been evaluated as satisfied.  When this happens set correction is required.
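A toy Python model of the variation above may help – this is emphatically not the FIM Service internals, just the timing issue: two requests each change one of the attributes the set criteria needs, but each evaluation sees only its own change against a stale snapshot, so neither fires, yet the resource ends up satisfying the criteria:

```python
# Toy model: a criteria-based set whose membership is evaluated per-request.
def in_set(resource):
    return resource.get("department") == "Sales" and resource.get("status") == "Active"

resource = {"department": "HR", "status": "Inactive"}
requests = [{"department": "Sales"}, {"status": "Active"}]

# Concurrent-style processing: each request evaluates the criteria against
# the pre-update snapshot plus only its own change.
snapshot = dict(resource)
fired = []
for req in requests:
    view = {**snapshot, **req}   # stale view: misses the sibling update
    resource.update(req)         # the write itself still lands
    fired.append(in_set(view))

print(fired)             # [False, False] - no evaluation saw both changes
print(in_set(resource))  # True - membership is now wrong; set correction needed
```

Processed sequentially, the second request's evaluation would have seen both changes and fired; processed concurrently, the membership has to be fixed up after the fact – which is exactly the set correction.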

Of course there are going to be other reasons for set corrections being required too – such as exceptions occurring while evaluating complex set criteria (particularly when the FIMService database indexes become overly fragmented, or when your criteria are just too complex for FIM to handle).  There is always a trade-off here:

  • reducing the number of set definitions you need, at the risk of increased criteria complexity (and a reliance on a defrag/index rebuild regime)
  • using additional nested sets to simplify individual set criteria, but (arguably) reducing solution maintainability and risking running foul of stated best practices in this regard.

Note to self – whenever monitoring the health of the FIM Service, look not only at the exceptions and the failures, but also for the presence of set corrections.  Of course if you have SCOM and the FIM 2010 Management Pack (my customer chooses not to) you will no doubt already have the following in hand:

The key monitoring scenarios covered by this management pack are listed below:

  • End-User Availability
  • Synchronization Service Availability
  • FIM Service and Portal Availability
  • FIM Portal Errors Shown to End Users
  • FIM Portal Configuration Errors
  • FIM Service Internal State
  • FIM Service Set Corrections
  • FIM Service Connectivity with Exchange
  • FIM Synchronization Service Configuration Errors
Posted in FIM (ForeFront Identity Manager) 2010 | 1 Comment

Optional Synchronization Rule Parameters

Recently I needed to extend a simple outbound sync rule (FIM 2010 R1) to provision a business email address to an HR system.  In the target HR system, multiple contact records can be recorded for a user, and under normal conditions a “business” contact was to be set with the Exchange email address from AD.  However, in a test environment where “new starter” emails are to be sent from the HR system, I didn’t want to use “real” email addresses but a test mailbox instead.

I figured I simply needed a means of overriding an EAF in a sync rule with a constant email address – purely to support my testing needs.  Under normal circumstances there should be no override, so I figured I could use a workflow parameter and only set a value in the test scenario.  The override idea seemed to work well – I could have identical sync policy in each of my DEV/TEST/PROD environments, but this way I could support this testing requirement without having to actually change the sync rule itself.  Test emails were indeed sent to the test mailbox as required.

I set up my EAF in my sync rule like this (CS and MV prefixes for explanatory purposes only):

CS.email = IIF(Eq(Trim($EmailOverride),""),MV.email,$EmailOverride)

It seemed like a perfectly reasonable thing to expect to work – I assumed that if I simply didn’t supply a parameter value when I added the sync rule to the target user object, the above logic would result in Eq(Trim($EmailOverride),"") returning TRUE.  I was wrong …

I only noticed there was a problem when I removed the override value and saw that the pending exports subsequently produced had no email address value at all!  This broke my HR exports and indicated that I had a lingering problem with the above EAF.  This was confirmed when I compared the corresponding EREs for two different users – one created when the constant email value was present (which worked), and one when the value was removed (which failed).  What I noticed was that there was an XML value in the Synchronization Parameter binding on the ERE only when a value was specified on the workflow which attached my sync rule.  When I specified an override email I ended up with this in the SR parameter:


… but when there was no value specified, rather than getting this:


… I actually got no SR parameter at all (i.e. no XML whatsoever).  This was not what I expected, and explained why my EAF wasn’t working.

I then tried each of the following without success:

  • Eq($EmailOverride,Null())
  • IsPresent($EmailOverride)

I finally had to settle for this:

CS.email = IIF(Eq(Trim($EmailOverride),"NONE"),MV.email,$EmailOverride)

and resort to specifying “NONE” as my default workflow parameter value rather than an empty string.
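To make the behaviour concrete, here is a toy Python model of the two EAF variants.  It models the unsupplied parameter as a null value rather than an empty string – which appears to be what the missing SR parameter binding amounts to – and the function names are just my approximations of the FIM function grammar (the email addresses are made up):

```python
def trim(value):
    # Trim applied to a missing/null parameter has nothing to trim.
    return value.strip() if isinstance(value, str) else value

def iif(condition, then_value, else_value):
    return then_value if condition else else_value

def email_eaf_broken(mv_email, override):
    # Original attempt:
    # CS.email = IIF(Eq(Trim($EmailOverride),""),MV.email,$EmailOverride)
    return iif(trim(override) == "", mv_email, override)

def email_eaf_sentinel(mv_email, override):
    # Workaround:
    # CS.email = IIF(Eq(Trim($EmailOverride),"NONE"),MV.email,$EmailOverride)
    return iif(trim(override) == "NONE", mv_email, override)

print(email_eaf_broken("jo@corp.example", None))        # None - the empty export
print(email_eaf_sentinel("jo@corp.example", "NONE"))    # falls back to MV.email
print(email_eaf_sentinel("jo@corp.example", "test@lab.example"))  # override wins
```

In the broken variant the null override never compares equal to "", so the else branch flows the null through – exactly the empty pending export I saw.  The "NONE" sentinel sidesteps the null comparison entirely.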

So the upshot of this post is to make the point that (for FIM2010 R1 at least) there is effectively no such concept as an “optional sync rule parameter”.  Why?  Because there doesn’t appear to be a way to successfully test for the (lack of) presence of a value in a parameter.

I would be interested to find out whether anyone has observed this same behaviour in R2.

Posted in FIM (ForeFront Identity Manager) 2010 | Leave a comment