Sunday, January 04, 2026

Some thoughts on AI

We keep moving forward, opening new doors, and doing new things, because we're curious and curiosity keeps leading us down new paths. -Walt Disney

AI is the next big thing. Is it a wave, a bubble, or here to stay? I wish I knew. I will tell you what I have learned about and from AI over the last few months.

At the end of the day, it’s a tool, and as we all know, the result of using a tool depends on the user. Hammers can drive nails or smash your finger. Read a tape measure correctly and your project can look good; read it wrong and nothing fits. I won’t even talk about knives and saws.

On the other hand, we hear all the time about “Vibe Coding” and how easy it is to just have AI build an app for you. I can tell you I’ve probably tried this five or six times with limited results. When it’s worked, it’s been amazing; when it hasn’t, it’s no fun to work your way through code you didn’t write yourself and debug it.

Even if you ask AI to document the code, it’s not always easy to figure out what is going on. One of the things that separates professional developers from amateurs (and I am most certainly not a professional developer) is the ability to easily discern what a given piece of code is doing. (I’ll talk more about this in a second.) It’s even more fun when you’re not proficient in the particular language in use. For whatever reason, the AIs I have worked with prefer Python, which I’m learning (just not quickly enough).

So what makes a good AI coding experience? Well, for me, it’s something I have in abundance from my career: the ability to define requirements and specifications. Understanding how to code is not as important, but knowing how to define what needs to be coded is very important. As the old joke goes, it’s all too easy to miss on the requirements.

This reminds me of an argument I had with my father as I was preparing to enter college. Being from the first generation of computer scientists and engineers, he was dead set against me being a Computer Science major. He felt it was more important to major in some aspect of business (accounting, finance, even marketing) and minor in CS. This way I would understand why things needed to be coded the way they were, not just how to code them. His experience from the early days of computing had taught him that. However, I was not really interested in business concepts, and countered with a new course of study called Management Information Systems, which combined aspects of business, computer science, and the actual business applications one might encounter in the business world. He didn’t think it was a good idea, so in youthful protest, I majored in Political Science.

OK, so enough of the biographical tidbits. How does this all relate to AI and coding? When defining a task with AI, it’s the requirements and their details that matter more than anything else.

Want to design a system? OK, what does it need to do from start to finish? Anything left undefined, or that you assume is generally known, is a potential spot for issues. One of the nicer things about AI is that the process can be iterative: as results are displayed and tested, it’s easy to add a requirement like “assume all documents are located in the user’s Documents folder” rather than leaving it unspecified and hoping the generated code guesses correctly. By the way, I’d also declare whether this is a Windows, Mac, or Linux application, since that will definitely affect where the documents are stored; and if it’s supposed to work in any environment, you might want to make the location a configurable parameter, as in the sketch below. If anything, it’s too easy for a novice AI developer to work themselves into a corner by getting deeper into interesting features rather than just getting the thing working. When coding under this paradigm, it’s a good idea to establish basic functionality before adding extra features and functionality (something I’ve learned from hard experience).
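
To make that last point concrete, here’s a minimal sketch of the configurable document folder, assuming a Python command-line tool; the DOCS_DIR override name is my own invention for illustration.

    # Minimal sketch: resolve the documents folder as a configurable
    # parameter. DOCS_DIR is a hypothetical override; the ~/Documents
    # fallback happens to hold on Windows, macOS, and most Linux
    # desktops, but a real tool should let the user confirm it.
    import os
    from pathlib import Path

    def get_documents_dir() -> Path:
        override = os.environ.get("DOCS_DIR")  # explicit user override
        if override:
            return Path(override).expanduser()
        return Path.home() / "Documents"       # conventional default

    if __name__ == "__main__":
        print(f"Scanning documents in: {get_documents_dir()}")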

As a best practice, I have been consistently adding a clause to my AI prompts along the lines of “Identify and address any issues or conflicts with best practices in this specification.” I find that the tool will typically surface things I haven’t thought of and dwell on things that are potentially important but not essential. For example, I was using AI to build some demonstration code that had security values hard-coded in plain text. That’s definitely a no-no in the professional world, but good enough for a simple one-off demonstration, so I added a note to the specification that this was for a demonstration and not for production use.
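
For what it’s worth, the change that clause usually prompts is simple. Here’s a hedged sketch, assuming Python and a made-up DEMO_API_KEY variable, of reading the secret from the environment instead of hard-coding it:

    import os

    # Demo-only fallback value; a production version would fail fast
    # rather than ship a default secret. DEMO_API_KEY is hypothetical.
    API_KEY = os.environ.get("DEMO_API_KEY", "demo-not-a-real-secret")

    if API_KEY == "demo-not-a-real-secret":
        print("Warning: demo key in use; set DEMO_API_KEY for real use.")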

At the end of the day, what does this all mean? I’m going to make a few guesses:

    • Writing code manually will become less important, if not outright deprecated. As people get more familiar with AI prompting, manual coding will become irrelevant for creating basic applications and tools. I don’t think major applications or operating systems will be built this way any time soon, so there’s no immediate worry for professional developers.

        ◦ This doesn’t mean that basic application development will be easy or seamless. Indeed, those who develop in this fashion will need to be disciplined when defining specifications and scope. I foresee new teaching methods that will develop and enhance these skills.

        ◦ There’s still going to be a need for professional programmers. Currently, AI works by drawing on large libraries of existing information. Since AI doesn’t truly create at this point, we still need developers who can find new ways of approaching problems and develop new algorithms.

    • We need to carefully examine the security models that govern how our AI tools interact with each other, whether they are acting on our behalf with other agents and systems to buy things, do research, achieve a goal, or create applications.

My thinking is that the overall acceptance of AI as something truly useful, and not a bubble, depends on how the tools develop and are embraced, not only by professionals but by the average user. Part of this will be the design of the tools: is the head of the hammer too big, making it easy to hit one’s thumb? Can we easily read the tape measure to get correct measurements? And of course, can we make AI tools easy to understand and use? I’m pretty sure we will, but the road ahead could be somewhat rocky.


Monday, November 03, 2025

Schemas and Some Elements of LDAP History

Note: all trademarks mentioned in this blog are the property of respective owners.

I've been meaning to write this entry in one form or another for over 20 years. Glad I finally got around to it.

LDAP has been around since 1993, while Microsoft's Active Directory was introduced in 2000. In that time, Active Directory has become a near-universal constant in organizations worldwide; approximately 90% of the Fortune 1000 use it. It's hard to escape from it. But there is a definite appeal to setting up additional Directory Service instances from Microsoft or other providers. These additional instances help properly segregate different user types (employees, customers, vendors, etc.) and, particularly in the Active Directory case, help manage licenses and keep the OS and application infrastructure that Active Directory controls out of prying hands.

Herein lies the issue at hand. For its own reasons, Microsoft does not use the same object classes as standard LDAP. For those unfamiliar with LDAP, an object class is a grouping of attributes. Object classes facilitate the definition of users, groups, and other components of the LDAP structure, thereby introducing some organization to the overall schema.

Standard LDAP uses the inetOrgPerson object class as the basic definition of a user, while Active Directory uses its own user class. Most of this grew out of the basic organization of Active Directory, along with the additional information required to integrate Microsoft Exchange back when it was an on-premises application. Of course, as the two concepts evolved, differences cropped up that I need to reference from time to time. To make this easier, I'm listing the most important differences here, with the standard LDAP attribute first, followed by the Active Directory attribute.

  • jpegPhoto -- thumbnailPhoto
  • secretary -- assistant
  • street    -- streetAddress
  • uid       -- typically not used
I'm sure there are a few others, and I can see updating this list as things change in the future. Note that the uid attribute typically appears in Active Directory only when it is being synchronized with a more standards-based LDAP, since those directories usually use uid rather than cn as the primary identifier when building an entry's distinguishedName. It's also important to remember that the value stored in userPassword is not encrypted; it is a hashed representation of the password. (This article provides a nice description of the process.) This means there is no way to decode the value, and setting it typically requires an SSL connection.
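
To show how these mappings get used, here's a minimal sketch in Python (my own illustration, not any product's sync code) that renames standard LDAP attributes to their Active Directory equivalents:

    # Attribute map taken from the list above; uid is omitted since it
    # typically has no direct AD counterpart, so it passes through and
    # a real sync job would decide what to do with it.
    LDAP_TO_AD = {
        "jpegPhoto": "thumbnailPhoto",
        "secretary": "assistant",
        "street": "streetAddress",
    }

    def to_ad_entry(ldap_entry: dict) -> dict:
        # Rename attributes that differ; pass the rest through unchanged.
        return {LDAP_TO_AD.get(attr, attr): value
                for attr, value in ldap_entry.items()}

    print(to_ad_entry({"cn": "Jane Doe", "street": "1 Main St", "uid": "jdoe"}))
    # {'cn': 'Jane Doe', 'streetAddress': '1 Main St', 'uid': 'jdoe'}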

One final note: I've pointed out some differences between Active Directory and LDAP, but this is not necessarily a criticism of Active Directory. Active Directory is a proprietary evolution of the original LDAP standard, for some of the reasons I mentioned above and more. Standard LDAP is just that: LDAP-based applications that adhere more closely to RFC 2798 and are typically "descendants" of the original AOL-Netscape-Sun Directory Server code.

Let me know if you have other attributes that should be added to this list.

Sunday, August 24, 2025

Identity Management just isn’t what it used to be

 

It feels like Identity Management, whether we are discussing employee or customer identities, is no longer just about managing user names and passwords. Of course, this has been true for several years. But those of us in the industry are realizing there is a new paradigm in which suspicion is the new normal, and everyone involved needs to find new ways to manage and operate within this model.

The recent Scattered Spider and related attacks are hitting us where we are most vulnerable: the intersection of customers and support. They use social engineering to get organizations to reset passwords, along with old standbys such as adversary-in-the-middle tactics.[i] Microsoft has also done a great job of describing the group’s tactics and overall strategy.

As a result, the Scattered Spider attacks have made anyone involved in IT Support and Security very concerned, and have left customers feeling overwhelmed with doubt and insecurity. “Is this user who they say they are?” is a bigger concern than ever before.

One thing has become abundantly clear for both organizations and customers: suspicion must be the new normal. It’s a sad but true reflection of the state of things right now. For customers, it means manually verifying information requests: do not submit or provide data over the phone without independently verifying that you are communicating with the right people. Of course, this means separate phone calls or other interactions, which slows everything down, and in complicated scam scenarios even that might not be enough. It’s really nothing less than a new form of digital terrorism.

For large organizations, it means that ensuring employee and customer security is more critical than ever. Ensuring that employee and customer data is properly segmented, and that proper entitlement governance policies are in place, is essential. Additionally, new patterns should be considered for verifying incoming user requests. The most promising method is stronger identity verification, such as liveness checks and, when the strongest measures are called for, submission of government documents such as ID cards, driver’s licenses, and passports. This can, unfortunately, increase authentication and authorization friction in areas where there had been motivation to reduce or eliminate it.

There is hope that self-sovereign identity concepts will make this verification easier by encapsulating identity-related data in an identity wallet secured by technologies such as blockchain. However, this is still an evolving, niche technology.

While we wait for this technology to mature, it is crucial for organizations and the individuals who digitally interact with them to exercise caution and remember that when it comes to protecting one’s digital identity, “Paranoid people live longer.”[ii]



[i] https://www.cybersecuritydive.com/news/scattered-spider-expands-tactics-recent-hacks/753220/

[ii] I’ve used this expression for over thirty years of my IT career. I can’t believe it has taken me this long to use it in a blog entry.

Saturday, July 12, 2025

From Toll Roads to Tokens: The Road Rules of Identity



Recently, I found myself comparing Identity Management to the New Jersey Turnpike—a stretch of infrastructure that demands continuous maintenance, monitoring, and support. The more I thought about it, the more the analogy seemed to hold up on multiple levels.

Consider this: when you enter the Turnpike, you're authenticated, thanks to your EZ-Pass RFID transponder. You authorize yourself to use the service by paying the toll.1 Your presence on the road is uniquely identified through a combination of your EZ-Pass ID and your vehicle’s license plate. Similarly, in Identity Management, we combine multiple identifiers to authenticate users and authorize access.

There's even a form of fine-grained authorization at play. Your driver's license determines which type of vehicle you’re allowed to operate—semi-trucks, motorcycles, passenger cars—all of which come with their own set of permissions. Identity systems do the same by assigning entitlements and roles based on user attributes and context.

We can stretch the analogy further. Think about drivers from other states or countries using the Turnpike. They bring their own credentials, but the system recognizes and allows them to operate—a real-world version of Single Sign-On (SSO). Once authenticated, drivers manage their journey: choosing routes, switching lanes, adjusting speed—just like identities that evolve, shift roles, or gain new permissions over time.

But perhaps the most vital component in this infrastructure? The on-ramps and off-ramps.

In our analogy, these represent connectors to other roads—other systems. On-ramps lead drivers onto the Turnpike (onboarding), and off-ramps take them to their destinations (offboarding). In identity terms, they’re links to enterprise applications. Some lead to robust, high-speed interstates (modern apps), while others connect to older, narrower routes (legacy systems). Despite their differences, all are part of the same interconnected digital landscape.

If these ramps are blocked or broken, people can’t get where they need to go. The same is true in Identity Management. Disrupted connectors—whether due to outages, outdated protocols, or rigid infrastructure—can prevent users from accessing critical resources. That’s why flexibility is key.

Just as highways need multiple lanes, alternate routes, and regular maintenance, identity infrastructure must be resilient. It needs to support remote access, cloud redundancy, and failover mechanisms. Whether through replicated data centers, leveraging SaaS services, or just having a well-designed backup plan, your identity architecture must ensure users can always reach their destinations.

In short: smooth identity operations are just like smooth traffic flow. It's all about seamless access, clear pathways, and ensuring the road is always open.







1 In the pre-EZ-Pass era, one paid the toll on the Garden State Parkway, another important piece of infrastructure, with a token, but we won’t get into yet another roadway and its analogies here ☺.

Saturday, May 24, 2025

The Goldilocks Syndrome

 

“Then Goldenlocks sat down in the chair of the Great, Huge Bear, and that was too hard for her. And then she sat down in the chair of the Middle Bear, and that was too soft for her. And then she sat down in the chair of the Little, Small, Wee Bear, and that was neither too hard nor too soft, but just right. So she seated herself in it, and there she sat till the bottom of the chair came out, and down she came plump upon the ground.”[i]

I’ve been making this observation formally ever since I started in the software field at a company called Magic Solutions back in the late 90s, and probably informally before then. You see, it’s been my experience that when organizations roll out new enterprise concepts, particularly in IT, and more specifically in IT Security and Governance, they go through at least three revisions. I’ve seen this happen wherever there is some sort of organizational hierarchy: in my Help Desk days it was ticket subject organization; in Identity it’s usually the organization of the Directory Service (Security Group and Organizational Unit structures) or role/entitlement hierarchies.

For the record, I’ve been involved in all of the scenarios listed below, and I was confident I had nailed it nearly every time. Now that I’m more experienced, I mention up front that these structures will most likely change over time and that the first time is seldom the charm.

The first revision is usually pretty much what the organization thinks it needs, often defined in consultation with experts during the sales process or while working with the implementation specialists. It frequently suffers from a lack of flexibility, in that not all use cases have been properly considered and weighted. It’s good enough for now, and the project to review the configuration is pushed to the next version of the application or the next architecture review.

The second time around, the organization tries to be flexible so that any potential scenario can be handled. Now we have the opposite problem: different parts of the organization have too much control, the solution becomes cumbersome, and there is little to no structure. It’s complete anarchy, audit logs become so incomprehensible that they border on meaningless, and nobody is happy.

By the third time through the process, we start to see a proper solution: one that has structure yet is flexible enough to handle new scenarios. In terms of our opening quote, it’s not too rigid and not too open, but just right.

Sometimes this is because the structure is more open, or because a stronger change control process is in place. Sometimes it’s because the organization itself has changed: in size, complexity, governance needs, or just plain old culture. Change will still occur, but with the lessons learned, the process should be more manageable.



[i] https://en.wikisource.org/wiki/The_Story_of_the_Three_Bears_(Brooke) That’s how this version spelled it. Emphasis is mine.

Thursday, May 15, 2025

Identity Management as Kitchens and driving on the New Jersey Turnpike

Those of you who have been following me for years are aware of my preference for Identity Management Programs over one-off Projects. The fact is, one might consider that a proper program goes something like this:

  1. Set up the Directory/IDP
  2. Define Roles
  3. Set up Access Management (SSO/MFA)
  4. Set up LCM processes
  5. Implement Fine-grained authorization
  6. Implement Self-Sovereign Identity and digital wallets

Of course, this list and its order depend on the needs and culture of the organization being served. In the long term, it is virtually impossible to do only some of this. It’s like upgrading or updating your kitchen: now the dining room looks off, which makes the den look dated, and then the carpeting, and then, of course, the bedrooms, all because one part of the house was improved.

My thinking has always been that you can’t really grant access until you have some sort of Identity store in place, which is usually the Directory Service for the Workforce and an IDP when it comes to CIAM.

Furthermore, steps two and three are somewhat interchangeable, but if you need to organize your identities, it’s likely due to an Access Management requirement, so you may want to complete this task sooner rather than later.

LCM processes are required regardless of use case, but they take different forms. For the Workforce, it’s more about how an employee progresses through their corporate career. On the CIAM side, it might involve subscriptions, optional services, and the ability to unsubscribe and be forgotten.

Refining all these processes and connecting them to additional applications will likely require some form of fine-grained authorization to ensure that all users can access only what they are intended to.

Once all of this is in place and working, we can begin to think about utilizing this information for digital wallets and establishing the foundations of Self-Sovereign Identity. This ensures that, in any given identity-based transaction, only the minimum required attributes are shared.

As far as the Identity Program goes, it’s like driving on the New Jersey Turnpike; the construction and work never seem to end. As soon as we finish one round of repairs and upgrades, it’s probably time to start over again.

Monday, April 28, 2025

Must it always be Virtual?

 

The only constant in life is change

-Heraclitus. 



One of the things that most people in the Identity field know about me is that I am a huge fan of Virtual Directory Services (VDS). But it’s possible this is starting to change. It’s also entirely possible that working with the technologies at Ping Identity every day has something to do with this. 1


What I have always loved about a true Virtual Directory is its immediacy. Access the VDS, have it do the lookup, and then do something with the value. It doesn’t matter what the back end is—an LDAP directory, a database view, or even a CSV file. (Not that I ever wanted to go there.) Do the search, get the result, and move on with your life.


But do we really need this when other, less complicated tools exist? I’m starting to think that this is exactly what is happening. Let’s face it: a Virtual Directory is a real pain in the posterior to set up (although once it’s running, you tend to forget it’s there). Setting up the DIT, configuring joins of back-end sources, and properly translating non-directory data into something resembling the DIT you configured back in step one is tedious and about as error-prone a process as exists in the Identity field.
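
To make the “join” part concrete, here’s a toy illustration with entirely invented data; a real VDS does this per query, at the LDAP protocol layer, against live back ends:

    # Merge a directory entry with a database row that shares a key,
    # then present the result under one virtual schema.
    ldap_entry = {"uid": "jdoe", "cn": "Jane Doe", "mail": "jdoe@example.com"}
    db_row = {"employee_id": "jdoe", "cost_center": "CC-42"}  # joined on uid

    virtual_entry = {**ldap_entry, "costCenter": db_row["cost_center"]}
    print(virtual_entry)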


What if I told you that there were solutions that just work better?


I mean, if you just need basic representations of an existing Directory and some simple transformations to handle things like mergers and acquisitions, a basic LDAP Proxy handles this exceptionally well, with no need for anything else going on. A proxy also handles essential use cases such as Pass-Through Authentication, which can be helpful during “lazy migration” scenarios (sketched below).
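
As a rough illustration of the pass-through idea (a real proxy does this transparently inside the LDAP bind operation; this is just the logic), here’s a sketch using the ldap3 library with placeholder hosts and DNs:

    # Sketch only: try the new directory first, then fall back to the
    # legacy one, binding with the user's own credentials each time.
    from typing import Optional
    from ldap3 import Server, Connection

    DIRECTORIES = [
        ("new", "ldaps://new-dir.example.com"),
        ("legacy", "ldaps://old-dir.example.com"),
    ]

    def authenticate(user_dn: str, password: str) -> Optional[str]:
        for label, host in DIRECTORIES:
            conn = Connection(Server(host), user=user_dn, password=password)
            if conn.bind():
                conn.unbind()
                return label  # which directory accepted the credentials
        return None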


If you need to access different types of data, we need to think about what we are doing with it. Does it really need to be referenced in some sort of LDAP schema? Does inetOrgPerson (or another LDAP object class) give any true advantage? Most of the time, when we need this information, it’s to choose a course of action during an identity-related process.



So, instead of the virtual attribute, why not consider fine-grained authorization tools? The whole point here is that we are looking at specific identity attributes to determine access, or to drive an orchestration flow, where both data and policies are subject to change at a moment’s notice. Being able to look up and evaluate that data with the same tool makes the most sense to me.
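
As a concrete, deliberately toy example of what I mean, here’s a sketch of an attribute-based check; the attribute names and the policy are invented, and a real deployment would delegate this to a policy engine working against live identity data:

    from dataclasses import dataclass

    @dataclass
    class User:
        uid: str
        department: str
        employee_type: str

    def can_view_payroll(user: User) -> bool:
        # Hypothetical policy: HR employees only.
        return user.department == "HR" and user.employee_type == "employee"

    print(can_view_payroll(User("alice", "HR", "employee")))     # True
    print(can_view_payroll(User("bob", "Sales", "contractor")))  # False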


To me, the biggest value here is more efficient access to data and a clearer understanding of how that data will be used. In an age where we are increasingly concerned with governance, compliance, and regulation, maybe this is how we need to think about identity data and how it is represented for use in identity-related operations.





1 My opinions remain my own, and nothing said here represents any official positions or statements from Ping Identity or any organization I might be associated with unless otherwise specified.