The case for reusable CRUD workflow activities

In mid-2010 I authored the first version of a pair of reusable workflow activities that have since become the cornerstone of essentially every FIM Portal implementation I’ve done.  Now I’m the first to admit I’m not a bona fide developer, but having said that, in the last 18 months I have done little more than tinker around the edges of the UI components of the pair – a “look up stuff” activity and a “create/update/delete stuff” activity.  Happily so, because my up-front goal was to all but eliminate the workflow development aspect of a standard FIM engagement, and I have to say it has been a resounding success (for the projects involved) right from the start.

So much of a success, in fact, that I think the value of these 2 workflow activities has been lost on almost everyone involved in the respective FIM projects, because the focus has been on the main solution design goals – as it should always be.  It is only now and then (like this evening) that I am reminded of just how successful they have been … and it staggers me to think that they have not been incorporated into the core product itself.  Kind of like event-driven sync really 🙂 … but I digress.

I have often heard people coming to grips with FIM say words to the effect that FIM is only really good for FIM consultants to make money, and ultimately this is NOT a good look for this community.  The real value we can deliver to any FIM client is a high quality outcome on time and within budget.  Nobody will thank us in the end for clever but ultimately costly coding (think “total cost of ownership”, or TCO), as much as we’d all like that not to be the case!

What we collectively now need to do is make FIM more mainstream, and continuing to reinforce the idea that clients are going to be forever reliant on the .Net developer fraternity to achieve their IAM vision is NOT the way to achieve this.

What I think needs to happen may not be every FIM guy/girl’s opinion, but no client should really have to pay anyone to write workflows that can be achieved entirely by performing basic CRUD operations on the FIM database.  A former boss of mine said over 8 years ago that he’d be surprised if anyone still expected to be paid for doing this sort of mundane thing with a SQL database – so why is it OK now in 2012 with FIM?  Just because we can and others can’t yet?  Surely the art is not in being able to perform CRUD itself, but rather in the creative ways you can apply these building blocks to achieve unlimited functionality by building on just 2 very basic atomic workflow activities.

So exactly how can we do this when every business requirement is different and often complex?

Here’s an example that came up only tonight, where a design oversight meant that objects of a specific custom object class in the FIM Portal were not being deleted when they should have been.  Furthermore, it wasn’t going to be easy to identify ALL the possible scenarios in which these redundant objects might occur, so a broad-based approach was required to (a) perform a custom lookup when a target object fell into a set, (b) delete all objects returned by the query, and (c) delete the target itself.  I am almost certain that the average FIM implementer today would look at this task as an unavoidable last minute development exercise.

Consider the impact of this realization when your solution has spent the last couple of months stabilizing in UAT, and the prospect of new binary components being introduced and the associated regression testing required sends chills down your spine.  What is more, you have all but exhausted your project budget!  I can almost feel the pit forming in the bottom of my stomach now …

Thankfully the solution turned out to be a rather trivial workflow implementation thanks to the CRUD activities, with the only complexity in the lookup query itself.  The workflow looked something like this:

  1. Use the LOOKUP activity (or the function evaluator in this simple case) to write [//Target/ObjectID] to [//WorkflowData/SaveID];
  2. Use the CREATE/UPDATE/DELETE activity (in DELETE mode) to perform a query returning all FIM objects meeting a certain (somewhat complex) set of criteria, and delete them (see the sketch below); and
  3. Use the CREATE/UPDATE/DELETE activity (again in DELETE mode) to delete the original target object (in case step #2 didn’t happen to delete it first), i.e. /*[ObjectID='[//WorkflowData/SaveID]']
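To give a feel for step #2, the query is just an XPath filter of the kind you would use in any FIM set definition.  A minimal sketch only – the “OrphanedRecord” object type and its “LinkedUser” reference attribute are hypothetical placeholders, not the actual filter from this project:

    /OrphanedRecord[LinkedUser = '[//Target/ObjectID]']

The DELETE-mode activity simply deletes every object the filter returns, which is why the only real complexity in the whole workflow lives in the query itself.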

If I look back through ALL of my workflows in this project – and there are probably close to 100 of them – it would be safe to say I could count on one hand those NOT implemented with exactly the same principles as the scenario above.

As an aside, the above workflow will need to be run periodically rather than simply on set transition, and as such is being implemented using what I call “FIM Housekeeping”, which I presented at TEC2012 and will blog about here in due course.

Meanwhile, my message in this post is that everyone can achieve the same thing, no doubt more elegantly than I have, and save a heap of development time, without any of the risk associated with introducing new (untried) DLLs.  Looking back at this particular project, the only other custom activity I think I’ve used is the one which converted a GMT datetime property into local time for a notification … yet another activity that should be baked into FIM by now, along with the 2 CRUD activities themselves.

Why don’t I post these activities?  Lots of mainly commercial reasons right now, but maybe one day that will change.  However, I hope that before I get the chance they will happily crop up in the product itself, and thereby quickly empower 10-100 times the number of people who are capable of implementing the above kinds of workflows themselves today.

Posted in FIM (ForeFront Identity Manager) 2010

Crouching SharePoint, Hidden FIM Work

MSN Messenger is a wonderful thing … not only have I just managed to track down my new European friends from TEC2012, but I also seem to have scored a significant new gig for UNIFY on the back of a “quick question” from a former colleague of mine from my pre-MIIS days – a SharePoint wiz with some 10+ years’ experience.  Here’s an abridged version (with all names withheld of course) of how that panned out … see if this could be you next, and whether you’d recognise the opportunity!

SharePoint guy: Hey Bob, got a sec?
Me: Sure – fire away!
SharePoint guy: How do I add a combo box and some string validation to this FIM page … it’s built on SharePoint right?  I should be able to work this stuff out easily, no???
Me: (after getting up off the floor) Maaaaate … where do I start?
SharePoint guy: What do you mean?
Me: Well first, to give you a basic idea of what’s involved, here’s the RCDC reference (I send him an appropriate link to TechNet) … then when you’ve got your head around that, you need to work out where to store your list of a couple of hundred combo list items, and then where to sync them from, and then …
SharePoint guy: Stop there for a minute … so what you’re saying is I’ve got to learn a completely new technology?
Me: Basically – yes.  Happy to give you a few pointers of course, but it probably isn’t your bag …
SharePoint guy: (after a few minutes MSN silence) No kidding … I’m bailing on this one :).  Basically I just have far too much other stuff to be learning this too right now.
Me: OK – not surprised – let me know if you want some help with the work so you can keep your customer!
SharePoint guy: Will do.

No more than 15 minutes later I was on the phone to the gentleman who had quoted my friend half a day to do the work ;).  Suffice to say, 2 weeks later we’re working together to agree on terms for what turns out to be a full-blown FIM Portal swap-out of an in-house white pages self-service web application with direct updates to Active Directory.  Not only that, but in replacing an app which does real-time AD updates with one which … well, OOTB just doesn’t … it was not hard to reach a common understanding that Event Broker was going to have to be part of the solution from the start.

Of course I’m pretty sure my SharePoint friend was relieved he had averted a potential major misunderstanding and a waste of everyone’s precious time.  He keeps his customer happy and “sticks to his knitting” (and I to mine), and all things going to plan, we get a happy customer at the end of the exercise.  Sure, we have a much longer development cycle, but all parties are now the wiser for it, with the customer thankful they had been saved from a potentially very expensive mistake.  Moreover, assuming we succeed with this, there will be more FIM work lined up behind it.

So the moral of the story is two-fold:

  1. keep active on your IM network – be willing to help others and we’ll all end up helping each other in the long term (just expect the unexpected); and
  2. don’t expect people to always recognize FIM work opportunities by themselves – they often appear disguised as something completely different!
Posted in Event Broker for FIM 2010, FIM (ForeFront Identity Manager) 2010

The FIM Team Community site – now live

I’m very pleased to announce that we have gone live with The FIM Team Community site – where our FIM consultants can make things available for the FIM community worldwide – and we’ve started by making the scripts presented at TEC2012 by both Carol Wapshere and myself available under the FreeBSD license.

Please check out both postings on the site.

Posted in FIM (ForeFront Identity Manager) 2010

Failed to retrieve schema … old FIM MA error rears its head

Friday came and went without our having solved the all-too-familiar “Failed to retrieve schema” error when creating the FIM MA for the first time.  Having exhausted all the usual remedies, even resorting to a complete uninstall/reinstall, my colleague Ryan and I admitted defeat and hoped we’d figure something out over the weekend.

In a quiet moment on a Sunday afternoon I came across this TechNet article, and decided to follow this up first thing Monday morning. When I did, I found that Ryan had found the same article and beaten me to it :).

The reason I’m posting about this is that you will invariably run into this exact problem when you do a vanilla install and run all your Windows updates (including .NET 4 together with FIM hotfix rollup #2).  I think that if we had created the FIM MA before we applied the hotfix, we would have put 2 and 2 together!  When you do the same thing, I hope you find this post and realize the cause quicker than we did!  Just when you start to feel confident that you’ve seen everything, don’t be lulled into a false sense of security … you can trust a hotfix to pull the rug out from under you 😦

Posted in FIM (ForeFront Identity Manager) 2010

FIM is losing sales to competitors because it is not change event-driven

This was the staggering insight I learned at TEC2012 from a fellow attendee on the IAM stream.  He confided that of all the new IdM business his (European) company wrote – split between FIM and another mainstream IdM platform – about 50% of customers were selecting the other technology over FIM essentially because FIM was not “event driven”.  That is to say, the FIM solution model is to process changes in connected systems on a schedule, and not selectively in response to the change event itself!

This is outrageous because FIM, and ILM/MIIS before it, has been able to operate in this manner since UNIFY invented and patented UNIFYNow back in 2005!!!

Now known as Event Broker for FIM, this mature web application and Windows service complements FIM like the proverbial “peaches and cream”.  It is now well into its 3rd major release, and is readily available for download by ANY FIM customer, or potential FIM customer.

I strongly believe that FIM SHOULD ALWAYS be configured to operate in this manner – if not for ALL management agent operations, then definitely for the majority of them.  It SHOULD be baked into FIM now … it’s not like this is a new idea or anything :).

So in the meantime PLEASE … when someone tries to tell you that you can’t run FIM in near real-time, or that FIM can’t be change-driven, direct them to this blog post, and let them know that FIM should never be excluded from an IAM selection process purely on the basis of this missing OOTB feature.

Posted in Event Broker for FIM 2010, FIM (ForeFront Identity Manager) 2010

Is FIM Best Practice a Pipe Dream?

A recent response by Craig Martin (MVP) to a forum post of mine stopped me in my tracks in the last days of 2011.  The comment was in response to my request to establish a “Best Practice” (which I will refer to from here on simply as “BP”) for one particular FIM scenario involving notifications – although the context is not particularly relevant to what I want to say.  His point was that while it might be considered noble to pursue BP for FIM, ultimately we’re all consultants who should be willing to sacrifice “elegance” in order to get the quickest possible outcome that just works.

At first this seemed a plausible argument – after all, nobody is going to thank us for “polishing apples” when at the end of the day it may not alter the outcome.  However it got me thinking over the next couple of days, and early this morning, as I lay awake on a houseboat holiday with the family (of all places), I decided it was worth an article in its own right … mainly because I once thought the same way myself.

My thoughts turned to my own experiences on a project I worked on in my SQL/Web Business Intelligence (BI) days, prior to turning my hand to IdM.  I was working in a small team of 3 as part of a (now defunct) MS ISV in the months shortly after the launch of the .NET era, and I recall brimming with pride at the outstanding result we produced after 3 months or so of solid teamwork, burning plenty of “midnight oil”, and ultimately the glowing response received from a happy client.  I remember the satisfaction of “thinking outside the square” and pushing beyond the limits of the standard BI toolkit to achieve results that we wouldn’t have dreamed of at the outset.

But the sense of pride and satisfaction quickly turned overwhelmingly to dejection and bewilderment when we were roundly criticized (and even ridiculed) for not sticking to the company’s own solution blue-print, purely on the basis that we had produced something that nobody else in the company could hope to support (with its extensive use of XSL, JS libraries and Flash).  The scathing (internal) peer review appeared to be at complete odds with the glowing praise from the customer, and the enthusiastic testimonial that appeared on our company website shortly afterwards.  Disillusioned, my two colleagues left the company not long afterwards and took their ideas with them, and although I persisted myself, I was forced to “toe the line” in subsequent projects.  While I still believe in the approach to this day, it taught me a very valuable lesson … the best technical solutions are not always the most profitable, nor the most palatable, and in the long run do not necessarily deliver the best outcome for the customer either.

Essentially, what I learned first-hand was that the further removed your solution becomes from the mainstream, the greater the Total Cost of Ownership (TCO) for the customer.  This means that when you have something that you know works, there are only two options if you wish to remain in business … either change the thinking of others to make it mainstream, or abandon the solution in favour of something considered more acceptable (think Beta vs. VHS, Novell vs. Microsoft, …).  This is not to say there isn’t value in pushing boundaries – there definitely is!  However in our case there were forces at work (in particular .Net and SQL Reporting Services) which were far greater than we could tackle at the time.

Ultimately, the thought that keeps coming back to me is that if I’m not striving for BP in anything, then I might as well stop now!  Just by its very name, shouldn’t BP be worth something?

So now … what do we REALLY hear when someone says BP?  What connotations come to mind?  What is the first thing that comes into your head?

Perhaps (like Craig, I suspect) you think of that anally retentive guy who wants every “i” dotted and “t” crossed.  This is really the point of this article – what is REALLY meant by FIM Best Practice?

Maybe it would be easiest to first list some of the things it is most certainly NOT:

  • a cookbook we can all blindly follow and expect a consistent result
  • something that is beyond dispute
  • an insurance policy that vindicates our approach if we do not succeed
  • something that eliminates the need to think for yourself

So if it is none of these things, then what is it instead, and what makes it particularly valuable?

To answer this, consider what motivates people in their quest for BP …

  • discovery of “potholes in the road” (bad experiences), and how to recognise and avoid them
  • a “eureka moment” when something previously untried happens to work
  • a pattern which when repeated was found to return superior results
  • hope of kudos in lighting the way for others to follow
  • the quest for benchmarks to qualify certification
  • the desire for solution maintainability

This last point is the one of most interest to me personally, because in the past it has proven my greatest challenge in building a business based largely on repeat consultancy work with a steadily growing client portfolio, where major project work is typically separated by many months (or even years) of ad-hoc solution support (or “holding the fort” if you like).  In this kind of environment it is rarely enough to just get the job done quickly.  Very clever solutions can often be “thrown out the window” in favour of something far less satisfactory purely because the support consultant either

  • didn’t understand the original approach
  • couldn’t support the original approach, or
  • had a personal preference for a different approach.

Such a scenario plays out far too often and invariably it is the client who ends up paying – that is if you don’t go out of business first!

So for me at least, BP should be something very much worth not only investing in, but loudly proclaiming to your current and future clients.  It can be something that sets you apart from the average consultancy as a company – not necessarily because you never make mistakes, but because your culture of “continuous improvement” makes you a more reliable service provider in the long term.

Ironically it is the blind replication of others’ solutions that is possibly the worst advertisement for BP.  I have sometimes worked with consultants who have a list of current certifications a mile long, but who simply cannot stand on their own feet in the real consultancy world.  But don’t blame BP for this – rather blame those who fail to adequately factor the inherent unpredictability of the world of Systems Integration into project scoping and resourcing.  Soldier types who are lost without a drill-sergeant continuously barking instructions are a very poor choice for an IAM consultant, and businesses that are not flexible enough to recognise and allow for the inevitable environmental variations from one client to the next will invariably fail.

So what is my Utopia?

My FIM Utopia is a world of community-conscious consultants collaborating to establish a peer-moderated knowledge base which is continually revisited, questioned and steadily improved over time.  It is contributions from left-field that prompt others to sit up and take notice.  It is the willingness to be proven either right or wrong without taking it personally.

That to me is FIM Best Practice.  That is definitely NOT a pipe dream.

Posted in FIM (ForeFront Identity Manager) 2010, ILM (Identity Lifecycle Manager) 2007

Requirements of an ECMA API to support Event Broker too

Today the subject came up of what is generally required of an API being developed to support an ILM ECMA with delta import requirements, in order for it also to support a corresponding Event Broker "changes" plug-in (built using the Event Broker SDK).  This is a requirement I have had to meet many times myself with my own ECMAs (or xMAs), and it turns out to be quite simple.
 
The SDK provides an Event Broker developer with the interface definition required to build a new changes plug-in for any given MA source.  In the case where an API is being created specifically to support the writing of an ECMA (using the ILM SDK), it makes sense to take the extra step of ensuring that the API also supports the Event Broker SDK.  Generally this comes down to a requirement for a single API method, namely "Get Latest Change Token".  If the API supports delta imports, generally it does so because the concept of a change token has been employed in some way (e.g. a datetime, timestamp or unique counter).  In fact the ILM ECMA developer has the harder task – the generation of a delta image for ILM – whereas the Event Broker developer just needs to know whether there is a delta image at all.
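For a SQL-backed source, a minimal sketch of what such a method boils down to follows – the table and column names here are hypothetical placeholders only:

    -- hypothetical "Get Latest Change Token" query, assuming the source
    -- table carries a last-modified datetime (or similar counter) column
    select max(lastModified) as latestChangeToken
    from <mySourceTable>

Event Broker simply compares the token returned with the one it stored after the previous import – if the two differ, there is a delta image worth fetching.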
 
This obviously presumes such a concept exists in the source – where it doesn’t, you generally can’t use Event Broker to enable event-driven delta imports without another piece of UNIFY technology, namely Identity Broker.  But that’s another story …
Posted in Event Broker for FIM 2010, FIM (ForeFront Identity Manager) 2010, ILM (Identity Lifecycle Manager) 2007

Custom workflow activities for ILM 2007

I’ve been writing a few new plugins for Event Broker lately on the back of some client requirements for what in the FIM world would be regarded as "custom activities".  The latest of these allows an ILM 2007 implementer to provision and ACL file shares based on nominated user attributes in AD (e.g. profilePath and homeFolder).  This is a pretty common activity for a new AD account – along with mailbox provisioning and notification emails for initial passwords.  However, ILM implementers will know only too well that you can’t accomplish this sort of thing with the Active Directory Management Agent alone.
 
Some might think it reasonable to do this sort of thing in your provisioning logic, but think again!  Sure, you can put in an external call to a .Net ADSI wrapper to go and do extra work when you provision a new user, but there are several reasons why this is a bad idea.  It is one thing to read an external directory (e.g. to check if a sAMAccountName has been used), but it’s another thing altogether to perform updates.  The most significant reasons NOT to do this are probably the following:
  • exports are the only place for ILM updates to connected directories – the ILM sync process is supposed to be an internal ILM activity which can be performed safely (e.g. via preview) without affecting any connected directories
  • if someone performs a preview or a sync they could be unaware that updates are actually taking effect before the export
  • these sorts of tasks are often the activities most likely to fail – troubleshooting failures in provisioning is a lot more difficult than running an "out of bounds" process which can be re-run at will as many times as necessary
  • … I could go on.
In the ILM 2007 (and MIIS 2003) world there are several accepted approaches to doing these custom activities:
  • employ third party workflow (+RBAC) tools such as Omada IM which complement the sync/provisioning performed by ILM
  • write an Extensible Management Agent (xMA) to perform the work in the export step
  • write an Extensible Management Agent (xMA) to interface to a workflow engine (e.g. SharePoint (WSS) list) to initiate a workflow task (e.g. Nintex, or K2) which incorporates the desired activities
  • write an "out of bounds" process to "run along behind" a run profile execution and perform custom activities on freshly updated directory data.

Although all of the above are appealing, the last option was the one I adopted, because a workflow engine wasn’t at my disposal for this particular client (hopefully it will be one day).  What I did have, however, was Event Broker and its SDK to play with.  Having written a few of these plugins before, I was able to implement some MSDN library code that could be configured in the plugin XML and executed (by the Event Broker service) as a "post processing" step in the "Outgoing" Event Broker OpList.

In this particular case I used an Event Broker "watermark" (much like those used in an ILM xMA) to store the timestamp of the most recently created AD user (the whenCreated attribute).  Whenever an export to AD succeeds, this plugin checks AD for any new users since that timestamp, checks whether file share(s) are required for each new user based on their attribute data, and if so creates and ACLs the file shares.  It then updates the watermark for next time … meaning that the cost of executing the plugin is whatever the cost of an LDAP search happens to be on the (indexable) whenCreated attribute.
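For illustration only, the LDAP search driving the plugin amounts to a filter along the following lines, where the timestamp is the stored watermark value (the exact filter will of course vary from solution to solution):

    (&(objectCategory=person)(objectClass=user)(whenCreated>=20120401000000.0Z))

Every user the search returns is a candidate for the file share provisioning step, and the largest whenCreated value returned becomes the new watermark.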

Works like a charm … I wonder if anyone else might want to use this?  I expect to be implementing a few other activities in the same way – including some post-deprovisioning tasks (mailbox archiving, file share archiving, group membership removal, etc.).

Posted in Event Broker for FIM 2010, ILM (Identity Lifecycle Manager) 2007

Operating ILM in (near) Real Time for SQL Data Sources

One of the MMSUG members tonight posted a question on the topic of managing deltas from a SQL source, and I thought it worth blogging about, since it is a topic close to my heart and part of solving the puzzle of how to make your ILM solution "event aware" and operate it in (near) real time …
 
Steve asked the following question:
 

I have a big project coming where I’m connecting, amongst other things, an authoritative user and group store on SQL into AD.  Now I have implemented and understand the multivalued attribute SQL tables before etc., but have not done much with the SQL delta tables.  If I implement SQL delta tables, then clear the tables after a successful import, there is a short time frame between the running of the agent and the clearing of the delta table.  If data gets put in there between them it will get erased before it can be processed.  How do people normally deal with this?  Can you do anything with processed flags or timestamps perhaps?

In the process of responding, I realized that Steve’s question was exactly the way to introduce the idea of how to use Event Broker with a SQL delta source.  Before I do, though, let me explain my approach to handling delta views/tables with a SQL MA, which is an extension of the "Generating Delta Views Using Triggers" approach documented on TechNet here:
  1. I run the following (pre-processing) SQL on a cycle, marking any new delta rows as in-flight only when no previous batch is still being processed:
    if not exists (select 1 from <myDeltaTable> where processingStatus = 'processing')
    begin
      update <myDeltaTable>
      set processingStatus = 'processing'
      where processingStatus is null
    end
  2. I run my ILM delta import using a delta view which is effectively "select * from <myDeltaTable> where processingStatus = 'processing'".  Note that when I am using our Event Broker service for ILM, I can run this on say a 10 second interval, and only continue to this step if my existence test above returns true.  If you are not using this service, you have to run your delta import on whatever schedule you’ve decided to employ, and you will need something like the MIIS Toolkit’s Scheduler tool to run the pre-processing step as a DOS batch file, VB script, or suchlike.
  3. I then run the following 'post processing' step, but only if the ILM run profile returns a success (or equivalent) status:
      update <myDeltaTable>
      set processingStatus = 'processed'
      where processingStatus = 'processing'
  4. To prevent performance degradation over time, I often archive into a <myHistoryTable> as an extra step, e.g.
    insert into <myHistoryTable>
    select * from <myDeltaTable>
    where processingStatus = 'processed'
    go
    delete from <myDeltaTable>
    where processingStatus = 'processed'
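For completeness, here is a sketch of how such a delta table might be populated and consumed – the angle-bracketed names follow the same placeholder style as above, and the column names are hypothetical.  A trigger on the source table (one each for insert/update/delete – see the TechNet article referenced earlier for the full pattern) writes rows with a null processingStatus and a processIndicator flag:

    create trigger <myInsertTrigger> on <mySourceTable> after insert as
    insert into <myDeltaTable> (objectKey, processIndicator, processingStatus)
    select objectKey, 'I', null from inserted

The delta view referred to in step #2 is then nothing more than:

    create view <myDeltaView> as
    select * from <myDeltaTable>
    where processingStatus = 'processing'

It is this view – not the delta table itself – that the SQL MA points at for its delta imports.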

With the above, my delta table generally has a "processIndicator" attribute with a value of D/I/U for delete/insert/update … note that the above instructions are not about how to construct a delta table, but how to use one to construct a delta view.  In other words I am adopting a "tri-state" approach with my "processingStatus" attribute, and it is this same attribute which is the key to running your ILM SQL MA delta imports on demand rather than on a schedule … here’s how:

  1. I use the Event Broker "SQL Changes" plugin to detect the presence of records with a null processingStatus, and at that point update the processingStatus to 'processing' (using a pre-processing SQL script/stored proc).  As with all Event Broker change detection plugins, the plugin returns TRUE if there is something to do, and FALSE otherwise (see the sketch below).
  2. When the plugin returns TRUE – and this will happen immediately after some delta records have been updated with a status of 'processing', and continue to happen until the 'processing' records are updated to 'processed' – only then do I initiate what is termed an Event Broker "outgoing OpList".  The first of the standard two steps run for this OpList is my ILM delta import/delta sync.  This in turn can set up pending exports in any number of other MAs, and Event Broker is designed to fire the corresponding export run profiles as and when these pending exports are set up.  This leads to more pending (confirming) imports … you get the idea.
  3. The second of the outgoing OpList steps is my "post processing" step to update the 'processing' records to a processingStatus of 'processed', generally followed up in the same SQL script with the archiving of the processed records to prevent performance degradation over time.
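To give a feel for step #1, the change detection itself boils down to an existence test along the following lines (a sketch only – the plugin’s SQL configuration wraps something like this for you):

    -- returns 1 (TRUE) while there is anything left to process, 0 (FALSE) otherwise
    select case when exists
      (select 1 from <myDeltaTable>
       where processingStatus is null or processingStatus = 'processing')
    then 1 else 0 end as changesPending

The pre-processing script from step #1 then flips the null rows to 'processing' before the delta import runs.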

There you have it.  I hope to follow this up soon with an installment on how to manage other ILM MAs using corresponding Event Broker plugins, such as the AD/ADLDS Changes plugin and the File Changes plugin.

 
 
Posted in Event Broker for FIM 2010, FIM (ForeFront Identity Manager) 2010

Upgrading to FIM 2010 from MIIS 2003/ILM 2007 – Pre-Upgrade Check?

Just thought I’d let you know about a little "gotcha" lurking around the corner for anyone trying to upgrade their existing ILM solution to FIM – the potential for a clash with the new FIM (RC0) metaverse schema.
 
Like many others, I have used ILM/MIIS in the past to provision userProxyFull objects to a connected ADAM instance, which involves syncing the AD objectSid attribute.  To do this you typically set up a new objectSid attribute in your metaverse and everything works like a charm … until you upgrade to FIM and happen to have named your "objectSid" attribute with a different case (e.g. ObjectSid, or objectSID) …
 
I ran into a problem on a client site where there was a metaverse attribute already in use called objectSID.  All was fine until I created the FIM MA for the first time, causing the FIM metaverse schema update to be invoked.  What I found was that it wanted to add a new "objectSid" attribute, but threw an "Unable to create the management agent. The XML format of the join rules is invalid" error because this couldn’t co-exist with "objectSID".  The error wasn’t particularly friendly either, and apart from taking ages to nail down, in a production upgrade scenario it could have caused real dramas, because the only resolution I could come up with was to (a) remove the existing attribute flows (thereby losing the data), (b) delete and recreate the metaverse attribute as "objectSid", and (c) recreate the attribute flows.
 
I would argue that this is actually an oversight in the upgrade process … maybe there needs to be a "FIM Upgrade Compatibility Test" or something???  I would hope that RC1 won’t be so unforgiving :|.
Posted in FIM (ForeFront Identity Manager) 2010, ILM (Identity Lifecycle Manager) 2007