Archive for the ‘XACML’ category

Talking about authorization w/ Gunnar Peterson

June 10, 2013

It’s always great to catch up with Gunnar Peterson and discuss the latest in externalized authorization. There was quite a bit of ground to cover since our last blog post series and here is the transcript:

Gunnar Peterson:
The thing that strikes me about XACML and ABAC is that it's really different from other security standards. Usually when we talk about an authentication or crypto protocol, we talk about threat models and the strength of the security service itself. It's inward focused. It seems to me that the value of XACML and ABAC is really in the use cases that they enable. It's outward focused, and unlocks value through new kinds of services. What kinds of use cases have you seen recently where XACML and ABAC are enabling companies to build things better?

Gerry Gebel:
You are correct to point out that XACML feels different from other identity and security standards. XACML is inwardly focused on the application resources it is assigned to protect through the use of its policy language – there isn’t just a schema, token format or DIT to work with.

There are a couple of recent customer use cases that I'd like to briefly describe, as they are typical of the kind of requirements we see. In the first case, the organization holds a lot of data for customers in different industries and wishes to provide access to different slices of that data via a combination of APIs and web services. In this case, it's API access primarily for mobile devices and web services for other client applications. Specific business rules dictate what data customers can view or what APIs/web services they can call. Integrating an XACML service at the API/web services gateway layer is a non-intrusive way to implement the right level of data sharing and enable new business models for the organization.

The other case study example is for an organization that is building a new data hub service, where certain users can publish data to the hub and others will subscribe to the feeds. Due to the sensitive nature of the information, granular access control was important for the new service. In this case, the designers wanted a flexible policy-based model to control access, rather than hardcoding it into the application.

GP:
Interesting use cases, let's drill down on these. First, as to the gateway – I am a fan of web services gateways; they are a no-brainer for implementing identity, access control, dealing with malicious code and so on. Authorization (beyond coarse grained) requires a little bit more thought. How have you seen companies approach getting the right level of granularity to take advantage of XACML policies at a gateway level? In other words, given that a gateway has less context than the application layer, what is the hook for the policy to be able to intelligently make authorization decisions outside the app, as it were?

GG:
You are correct to point out that you can only make as granular an access decision as the context that is provided to the policy decision point (PDP). In this case, the call from the gateway to the PDP may just contain something like: subject: Alice, action: view, client_record: AD345. The PDP can enhance the situation by looking up more information about Alice before processing the access request – her department, role, location, etc. In addition, the PDP can look up information about the client record – is it assigned to the same location or department as Alice? With this approach, you can still make pretty granular access control decisions, even though you don't have a lot of context coming in with the original access request from the gateway.
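To make that flow concrete, here is a minimal Java sketch of the PEP side of such an exchange. The AuthorizationClient interface and the attribute names are invented for illustration – they are not any particular vendor's API – but the shape of the call mirrors the gateway scenario above: the PEP forwards only the three attributes it knows and leaves enrichment to the PDP.

```java
import java.util.Map;

/** Illustrative PDP client contract; real products expose richer APIs. */
interface AuthorizationClient {
    boolean isPermitted(Map<String, String> attributes);
}

public class GatewayPep {
    private final AuthorizationClient pdp;

    public GatewayPep(AuthorizationClient pdp) {
        this.pdp = pdp;
    }

    public boolean canView(String user, String recordId) {
        // The gateway knows only who is calling, what they want to do, and
        // which record they target; the PDP resolves the rest (department,
        // role, location, record ownership) through its PIP connectors.
        return pdp.isPermitted(Map.of(
                "subject-id", user,        // e.g. "Alice"
                "action-id", "view",
                "resource-id", recordId)); // e.g. client record "AD345"
    }
}
```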

GP:
Right, so it's a case of roles being necessary but not sufficient?

GG:
Roles are usually only part of the equation and certainly not adequate on their own for granular authorization scenarios.

GP:
Here is one I wrestle with – naming. On the user/subject side it's pretty simple: we have LDAP, we have AD, and everyone knows their user names and login processes. But what about the resource side? It seems less clear, less consistent, and less well managed once you get beyond URL, URI, ARN and the like. What trends are you seeing in resource naming and management, and how does this affect XACML projects?

GG:
Indeed, naming conventions and namespaces for subject attributes are prevalent but are lacking for other attribute types, in particular for resources. One approach to address naming for resources is to publish an XACML profile, whereby you can establish standard names for at least a subset of attributes. We see this being done today in the Export Control and Intellectual Property Protection profiles. Some firms in the financial services industry are also examining whether XACML profiles can be defined to support certain cross-firm interactions, such as trade settlement.

Otherwise, ABAC implementers should approach this task with a consistent naming convention and process to ensure they end up with a resource namespace that is manageable to implement and operate.
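As a small illustration of that advice, the sketch below pins a URN-style resource namespace down in one shared class. The identifiers are invented for the example; a real project would agree on its own scheme (ideally matching a published profile) before writing policies.

```java
/**
 * Illustrative resource-attribute naming convention, agreed up front and
 * kept in one place. The URN values are invented for this example.
 */
public final class ResourceAttributes {
    public static final String RECORD_ID      = "urn:example:resource:record-id";
    public static final String RECORD_TYPE    = "urn:example:resource:record-type";
    public static final String OWNER_DEPT     = "urn:example:resource:owning-department";
    public static final String CLASSIFICATION = "urn:example:resource:classification";

    private ResourceAttributes() { } // namespace holder, never instantiated
}
```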

GP:
I had always looked at XACML as something that helps developers, but it appears to have a role to play in areas like DevOps too. I have seen a few examples where XACML services delegate some administrative functions, such as spinning up cloud server instances and lower-level configuration. For decentralized environments, where admin tasks (which are very sensitive and need to be audited) can be handled by different teams and even different organizations, this kind of granular policy control seems like a very good fit. It gave me a new perspective on where and how XACML and ABAC might fit. Have you seen these types of use cases?

GG:
Normally we are dealing with application resources, but we have had cases where IT uses XACML to control access to DevOps kinds of functions. As you have pointed out, the XACML policy language can be quite useful in a number of areas where granular access control is important.

GP:
Developers and security people fundamentally lack good (read: any) testing tools for authorization bugs. Static analysis and black box scanning tools are all the rage (and serve a useful purpose in security bug identification): when you scan your app they can find all manner of SQL injection, XSS and other pernicious problems. But at the same time you can cut those same tools loose on an app that's riven with thousands of authZ vulnerabilities and they will often come back green! I am pretty sure this is a major factor contributing to the numerous authorization vulnerabilities we see.

I think even just a first-cut, 1.0 implementation with XACML and ABAC is a huge leg up towards formalizing some of the authZ structure so that real test cases can be developed and run. This makes it simpler for the developer to avoid authZ mistakes, since they can continually test against a defined policy instead of blindly scanning something where the tools cannot differentiate between authorized and unauthorized states. What are your thoughts on authZ testing?

GG:
We get a lot of questions about testing the policies in an ABAC system and there are many ways to address this requirement.

1. At the policy authoring stage, there is the requirement to perform initial unit testing – does this policy I am writing operate the way I expect it to? We provide this simulation capability so you don't have to run the application to see the outcome of a policy, and it includes a trace facility so you can explore exactly how the policy was evaluated (this is a big help in debugging policies as well). Unit tests can be captured in scripts for future use, such as when the application or access policies change.

2. Positive and negative test cases: You are correct to point out that developers can test against a defined policy, such as: cardiologists can view and update records of heart patients. We refer to this as a positive test; that is, does the policy allow doctors who are labeled cardiologists to view heart patients' medical records? But there are other conditions to test for that may be characterized as negative tests. For example, given a set of ABAC policies, is there any way a non-cardiologist can update a heart patient's record? For these kinds of scenarios, you can build additional test scripts or use an advanced policy analysis tool (see the test sketch after this list).

3. Gap analysis testing: Another advanced function is to test for any possible gaps in the policy structure. But again, as you pointed out, having a specific set of access policies to test against makes the process easier. In this manner, you could test for separation of duty scenarios that violate policy: is there any combination of attributes that permits a user to create and approve a purchase order?
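Here is what a first cut at such positive and negative tests might look like as ordinary unit tests. This is a hedged sketch: the PolicyDecisionPoint interface, the attribute names, and the inline stub are invented so the example is self-contained; real tests would call the policy engine's own test or simulation API instead.

```java
import static org.junit.Assert.assertFalse;
import static org.junit.Assert.assertTrue;

import java.util.Map;
import org.junit.Test;

public class CardiologyPolicyTest {

    interface PolicyDecisionPoint {
        boolean isPermit(Map<String, String> request);
    }

    // Trivial stand-in so the sketch compiles and runs on its own;
    // replace with a client for the real PDP under test.
    private final PolicyDecisionPoint pdp = request ->
            "cardiologist".equals(request.get("subject.role"))
                    && "heart-patient".equals(request.get("resource.patient-type"));

    @Test
    public void cardiologistCanUpdateHeartPatientRecord() { // positive test
        assertTrue(pdp.isPermit(Map.of(
                "subject.role", "cardiologist",
                "action", "update",
                "resource.patient-type", "heart-patient")));
    }

    @Test
    public void nonCardiologistCannotUpdateHeartPatientRecord() { // negative test
        assertFalse(pdp.isPermit(Map.of(
                "subject.role", "radiologist",
                "action", "update",
                "resource.patient-type", "heart-patient")));
    }
}
```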

GP:
In my opinion, there are concrete benefits from being able to make more granular authZ decisions, audit policies and configure rather than code authZ, but as a security guy the testing piece all by itself is a game changer. This is just such a big gap in so many systems today and a large source of “known unknown” kind of bugs, ones that can be but often aren’t found and closed.

Ok last question – is XACML dead? This is your cue to tee off.

GG:
Far from it. I've witnessed a significant increase in demand for XACML solutions over the last few years, the OASIS technical committee <https://www.oasis-open.org/committees/tc_home.php?wg_abbrev=xacml> is actively working on new profiles (after version 3.0 of the core spec was formally ratified earlier this year), and new vendors have entered the market. There is a big emphasis on further improving the standard for consumption by the developer community, a key constituency if the industry is going to escape the cycle of hard-coding authorization inside applications. Some of the standardization efforts worth noting are profiles to define a REST interface for the authorization service as well as JSON encoding of the XACML request and response formats. These two enhancements should greatly broaden the appeal of the XACML authorization standard. Further, Axiomatics recently joined the OpenAz <http://www.openliberty.org/wiki/index.php/OpenAz_Main_Page> project to help update and improve this developers' API.
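For a flavor of what those REST and JSON profiles could enable, here is an illustrative sketch of a PEP posting a JSON-encoded request over HTTP. Since the profiles were still in progress at the time of writing, the endpoint URL, media type, and exact JSON shape shown here are assumptions, not the final specification.

```java
// Indicative only: endpoint, media type, and JSON shape are placeholders
// for whatever the OASIS REST/JSON profiles finally specify.
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class JsonPepSketch {
    public static void main(String[] args) throws Exception {
        String request = """
            {"Request": {
               "AccessSubject": {"Attribute": [{"AttributeId": "subject-id",  "Value": "Alice"}]},
               "Action":        {"Attribute": [{"AttributeId": "action-id",   "Value": "view"}]},
               "Resource":      {"Attribute": [{"AttributeId": "resource-id", "Value": "AD345"}]}
            }}""";

        HttpResponse<String> response = HttpClient.newHttpClient().send(
                HttpRequest.newBuilder(URI.create("https://pdp.example.com/authorize")) // hypothetical endpoint
                        .header("Content-Type", "application/json")
                        .POST(HttpRequest.BodyPublishers.ofString(request))
                        .build(),
                HttpResponse.BodyHandlers.ofString());

        // A JSON-encoded response would carry Permit/Deny plus any
        // Obligations or Advice, mirroring the XML response format.
        System.out.println(response.body());
    }
}
```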


XACML: Alive and Well

May 8, 2013

The latest hyperbolic headline from our friends in the analyst community is brought to you by Andras Cser of Forrester, who proclaims that XACML is dead. Naturally, we at Axiomatics disagree since we have invested many years of effort at OASIS to develop and support the standard. The timing of this post is also interesting in that XACML version 3.0 was just formally ratified earlier this year and the Technical Committee is actively working on new profiles to support a REST interface as well as JSON encoding of the request/response formats – two features that will significantly expand the appeal to a wider developer audience. Let’s walk through this and address some of the statements that Andras makes:

Conversations with vendors and IT end users at Forrester’s Security lead us to predict that XACML (the lingua franca for centralized entitlement management and authorization policy evaluation and enforcement) is largely dead or will be transformed into access control

I am not sure what you mean here, Andras, as XACML already does access control.

Here are the reasons why we predict XACML is dead:

Lack of broad adoption. The standard is still not widely adopted with large enterprises who have written their authorization engines.

While XACML has not hit the mass market, we continue to see increased adoption across many industries. Organizations that have written their own authorization engines are investigating commercial alternatives, due to the cost of maintaining home grown systems and keeping up with growing requirements.

Inability to serve the federated, extended enterprise. XACML was designed to meet the authorization needs of the monolithic enterprise where all users are managed centrally in AD. This is clearly not the case today: companies increasingly have to deal with users whose identities they do not manage.

This is not correct on multiple levels. First, XACML was designed to meet the needs of service oriented architectures – which are, by definition, not monolithic in architecture or deployment patterns.

Second, the XACML standard never mandated that all users be managed centrally in AD or any other repository. Some products may have this limitation, but it is a vendor choice to do so. In fact, the policy information point is specifically defined to retrieve attributes or metadata from heterogeneous, distributed sources.

Finally, the XACML architecture naturally supports federated environments because access decision making and policy enforcement can be deployed centrally or in a distributed approach to cater for performance and other operational preferences. In fact, one of the simplest ways to achieve a hybrid IAM strategy for the cloud is to leave AD in the corporate enterprise and use authorization to communicate access control decisions.

PDP does a lot of complex things that it does not inform the PEP about. If you get a ‘no, you can’t do that’ decision in the application from the PEP, you’d want to know why. Our customers tell us that this can prove to be very difficult. The PEP may not be able to find out from the complex PDP evaluation process why an authorization was denied.

Actually, you can optionally communicate context about the decision using Advice or Obligation statements – part of the XACML standard. In version 3, these statements can contain variables and are very useful for communicating additional information to the PEP. Some examples are to redirect the user to a stronger authentication page, tell the user they have an insufficient approval limit, or tell the user they are not assigned to the patient so they can’t see the health record.

Keep in mind, many situations specifically require that the PEP not know why the access failed, because it could leak information for an attacker. Firewalls and network access control solutions are examples of this.
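To show how a PEP might act on such Advice or Obligation statements, here is a hedged sketch. The Obligation and AuthzResponse types, the obligation URNs, and the redirect paths are all invented for illustration; the one rule taken from the standard is that a PEP that cannot fulfill an obligation must not grant access.

```java
import java.util.List;
import java.util.Map;

// Illustrative types only; real PDP client libraries define their own.
record Obligation(String id, Map<String, String> attributes) { }
record AuthzResponse(boolean permit, List<Obligation> obligations) { }

public class ObligationAwarePep {

    /** Returns a redirect target for the caller, or null if the request may proceed. */
    public String enforce(AuthzResponse response) {
        // Obligations must be honoured: per the standard, a PEP that
        // cannot fulfill an obligation must not grant access.
        for (Obligation o : response.obligations()) {
            switch (o.id()) {
                case "urn:example:obligation:step-up-auth":
                    return "/login/strong"; // route the user to stronger authentication
                case "urn:example:obligation:display-reason":
                    // e.g. "insufficient approval limit" or "not assigned to this patient"
                    displayToUser(o.attributes().get("message"));
                    break;
                default:
                    return "/error/unauthorized"; // unknown obligation: fail closed
            }
        }
        return response.permit() ? null : "/error/unauthorized";
    }

    private void displayToUser(String message) {
        System.out.println(message); // placeholder for real UI handling
    }
}
```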

Not suitable for cloud and distributed deployment. While some PEPs can bundle the PDP for faster performance, using a PEPs in a cloud environment where you only have a WAN link between a PDP and a PEP is not an option.

The modular architecture of XACML is absolutely suitable for cloud and other kinds of distributed deployment scenarios. The fact that major components such as the PEP, PDP and policy authoring are decoupled means you can deploy them in many configurations. Embedding the PDP with the PEP and application is one option, but you can also co-locate the PDP with the app for better performance. As with on-premise deployments, implementers have to consider the latency between PEP to PDP and attribute retrieval. Cloud scenarios may present some challenges in reference data synchronization or retrieval, but many options are available to address them. 

Commercial support is non-existent. There is no software library with PEP support. Major ISVs have not implemented externalized authorization or plugin frameworks for externalized authorization. Replacing native SharePoint authorization with an Entitlement Management PEP is a nightmare requiring a one-off, non-standard, non-repeatable development and operations process.

I acknowledge that, as an industry, we have not adequately addressed the ISV industry with sufficient tooling to externalize authorization. As a result, we continue to see the creation of ‘new legacy’ applications that are difficult to manage and operate from an IAM perspective. Axiomatics has recently joined and contributed to the OpenAz project in an effort to meet these requirements.

Regarding SharePoint, we agree that a PEP-to-PDP model is difficult to implement for this platform, which is why we have taken a different approach.

Refactoring and rebuilding existing in-house applications is not an option. Entitlement Management deployment requires a refactoring of the application to use the PEP hooks for centralized, externalized authorization. This is not a reality at most companies. They cannot just refactor applications because of a different authorization model (sometimes, especially with mainframe applications the authorization model is not even understood well enough to do this…)

Another point of agreement: Most existing applications will not be rewritten to implement an externalized authorization approach. However, there are ways to integrate with existing applications without changing the application’s code by using filters or proxies, for example.

Additionally, many organizations are exposing existing applications by building API or web services layers – this is the perfect integration point for incorporating externalized access control.
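As an example of the filter approach, the sketch below shows a servlet filter acting as a PEP in front of an unmodified application. The AuthorizationClient interface and the attribute names are stand-ins for whatever PDP client library and naming convention a deployment actually uses.

```java
import java.io.IOException;
import java.util.Map;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class XacmlEnforcementFilter implements Filter {

    interface AuthorizationClient { // stand-in for a real PDP client library
        boolean isPermitted(Map<String, String> attributes);
    }

    private final AuthorizationClient pdp;

    public XacmlEnforcementFilter(AuthorizationClient pdp) {
        this.pdp = pdp;
    }

    @Override
    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        HttpServletRequest http = (HttpServletRequest) req;
        String user = http.getRemoteUser() == null ? "anonymous" : http.getRemoteUser();

        // Map the HTTP request onto an authorization request: who is
        // calling, with which method, against which URL.
        boolean permitted = pdp.isPermitted(Map.of(
                "subject-id", user,
                "action-id", http.getMethod(),
                "resource-id", http.getRequestURI()));

        if (permitted) {
            chain.doFilter(req, res); // the application itself stays untouched
        } else {
            ((HttpServletResponse) res).sendError(HttpServletResponse.SC_FORBIDDEN);
        }
    }

    @Override public void init(FilterConfig config) throws ServletException { }
    @Override public void destroy() { }
}
```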

OAuth supports the mobile application endpoint in a lightweight manner. XACML today largely supports web based applications. While OAuth’s current profiles are not a full-blown replacement for XACML functionality, we see that OAuth’s simplicity made it the de-facto choice for mobile and also non-mobile applications.

OAuth and XACML are not mutually exclusive, but they certainly have their respective strengths and weaknesses. Again, I will point to the REST and JSON profiles for XACML that are currently under development at OASIS – these profiles will make XACML-based systems more easily integrated with mobile and other lightweight platforms.

Part Two: Software Development Lifecycle (Development)

May 31, 2012

This is a continuing conversation with James McGovern, lead Enterprise Architect for HP Enterprise Services, whose focus is providing bespoke enterprise applications to the insurance vertical. The conversation to date has been about how entitlements should be conceptualized along the SDLC (part 1). The topic we cover in this dialog centers on concerns that arise after IT architects have performed high-level architecture and need to hand off to development teams. My colleague Felix Gaehtgens also provided valuable input to the discussion.

JM: Generally speaking, the need for entitlements management tends to be on the radar of savvy information security professionals who realize that they need to invest more time in protecting enterprise applications and the data they hold, over simply twiddling with firewalls, SSL and audit policies that check whether a third party has a clean desk policy and whether their number two pencils are sharpened. When security people know nothing about software development and software development people don't know anything about security, bad things can happen. Today's conversation will be a small attempt at connecting these two concerns. Are you game?

GG: Definitely. I also see a disproportionate amount of time and budget dedicated to security apparatus that does not address the specific security, business or compliance rules that an enterprise must enforce. To do that, you need to address security and access control concerns within the business application directly.

JM: A developer has received the mockups for a user interface from the graphics team and now has to turn them into code using JSPs and servlets. In this particular tier, how should they incorporate entitlements into the pages, and how should they do it en masse if they have hundreds of pages to develop?

FG: That's an excellent question. Access control can and should happen on multiple layers. Since you mention a user interface, that is a good place to control access to individual user interface components. For example: a button might start a particular transaction. Is this user authorized to carry out that transaction? If not, then the button should perhaps not be displayed. We can even think of fine-grained access control here. Suppose you are displaying a list of customer accounts to a user. What details should be visible? Should you perhaps hide some columns?

When we do access control in a holistic manner, we can obviously not stop at the presentation layer. You mentioned servlets here. A servlet operation is another type of action that can be authorized. May this function be executed on this servlet by this user in this particular context? That again is a good question. Let's assume the user is authorized. What happens then? The servlet probably does some things, perhaps retrieving some data, perhaps kicking off a call to some back-end service. As the servlet does its thing, there are other steps that would need to be authorized within the execution code of the running servlet. None of this is actually new. If we look at existing code, we see a lot of "if thens" that check whether something is allowed to happen. What architects should be vigilant about is the fact that having all these "if thens" causes problems down the line. What if the business policies change? What if new regulations come into force? How can you actually audit what is happening? Because of this, it is important to consider moving access control to a separate layer and externalizing authorization.
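A brief sketch of what that externalization looks like in code: each scattered "if then" becomes one call against a single authorization interface, so a policy change no longer requires a code change. The AuthorizationService interface, action names, and banking examples below are invented for illustration.

```java
import java.util.Map;

// Illustrative stand-in for an externalized authorization layer.
interface AuthorizationService {
    boolean isPermitted(String user, String action, Map<String, String> context);
}

public class AccountView {
    private final AuthorizationService authz;

    public AccountView(AuthorizationService authz) {
        this.authz = authz;
    }

    // Before: if (user.hasRole("teller") && account.getBranch().equals(...)) { ... }
    // After: one call per decision point; the rule itself lives in policy.
    public boolean showTransferButton(String user, String accountId) {
        return authz.isPermitted(user, "initiate-transfer",
                Map.of("account-id", accountId));
    }

    public boolean showBalanceColumn(String user, String accountId) {
        return authz.isPermitted(user, "view-balance",
                Map.of("account-id", accountId));
    }
}
```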

JM: Developers will also build reusable web services whenever possible, which can be leveraged not only by their enterprise application but by others as well. How should they think about incorporating entitlements into a service-oriented architecture?

FG: Hooking entitlements into a service-oriented architecture is actually quite painless. The easiest way – without modifying code – would be to use interceptors that check whether a particular transaction is authorized. This also makes the services simpler because authorization is moved into its own layer.

JM: There are a variety of ways to develop web-based applications, using frameworks such as Spring, Struts, or Django, and each of them comes with some sort of security hook functionality. How do I configure this to work with entitlements?

FG: These frameworks support authorization, to a certain degree. Unfortunately, though, that authorization is typically quite coarse-grained. In Spring, for example, you can authorize access to a class. But if this class implements a lot of logic by itself, Spring doesn't help you do these "micro-authorizations", or fine-grained authorization. So it's likely going to be a lot of "if thens" within those classes. The best approach would be to externalize both the coarse-grained and the fine-grained authorizations. But if for any reason that is not practical, then the coarse-grained authorization can already be done through the framework by talking to an externalized authorization layer, such as a XACML policy decision point (PDP).
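One hedged way to wire Spring's hooks to an external PDP is through Spring Security's PermissionEvaluator extension point, so that hasPermission(...) expressions delegate to the authorization service. PermissionEvaluator is the real Spring Security interface; the PdpClient interface and attribute names below are illustrative stand-ins.

```java
import java.io.Serializable;
import java.util.Map;
import org.springframework.security.access.PermissionEvaluator;
import org.springframework.security.core.Authentication;

public class XacmlPermissionEvaluator implements PermissionEvaluator {

    interface PdpClient { // stand-in for a real PDP client library
        boolean isPermitted(Map<String, String> attributes);
    }

    private final PdpClient pdp;

    public XacmlPermissionEvaluator(PdpClient pdp) {
        this.pdp = pdp;
    }

    @Override
    public boolean hasPermission(Authentication auth, Object target, Object permission) {
        // Delegate the coarse-grained decision to the external PDP.
        return pdp.isPermitted(Map.of(
                "subject-id", auth.getName(),
                "action-id", String.valueOf(permission),
                "resource-id", String.valueOf(target)));
    }

    @Override
    public boolean hasPermission(Authentication auth, Serializable targetId,
                                 String targetType, Object permission) {
        return pdp.isPermitted(Map.of(
                "subject-id", auth.getName(),
                "action-id", String.valueOf(permission),
                "resource-type", targetType,
                "resource-id", String.valueOf(targetId)));
    }
}
// Usage (illustrative): @PreAuthorize("hasPermission(#record, 'update')")
```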

JM: Being an Enterprise Architect who codes and knows security, I have observed throughout my career that many enterprise applications from a code perspective tend to centralize authentication but spread authorization in almost every module. What guidance do you have for both new and old applications in this regard?

FG: For new code, you have the option of externalizing authorization from the start. There are several ways to do this. Aspect-oriented programming can help automate some of it. You can also implement your own permissions-checker interface and then hook that into either a local implementation or an externalized XACML authorization service at run-time, which gives you full flexibility. There is no perfect answer for all cases, as it really depends on how you are writing your code. Wherever in your code you would otherwise do the hard-coded "if thens" to check whether something should be authorized or not, you should be calling an authorization function. If you can create certain "control points", then you make your life easier. If you have some other points where you need to authorize, use simple APIs to make a call-out to an authorization service.

For old applications, you will need to check where you can “hook in” the authorization. Perhaps there are some control points where you can install interceptors, inject dependencies, or wrap existing classes. If this is not possible, you might be able to intercept data flows coming in or out of a module, and do your authorization there.

JM: Within my enterprise application, I may have built up a “profile” of the user that contains information I would have retrieved post authentication from a directory service. What is the best practice in using this information to make authorization decisions?

GG: The design issue you are raising is whether the PEP should do attribute lookups or if we should rely on the PDP to perform this function. Generally speaking, it is more efficient for the PDP to look up attributes. Mostly this is because the PDP determines what policies will be evaluated and is able to fetch only the additional attributes it needs for policy evaluation. The PEP is not aware of what policies are going to be evaluated, and therefore may waste processing cycles retrieving attributes that will not be used. That extra processing time could be substantial when considering network time for the retrieval, parsing the response, and converting data to XACML attributes.

However, in your case it appears that the application is collecting attribute data for the profile in its normal course of operation. It seems these attributes can be forwarded to the PDP in the access request without compromising response-time performance. There may be other cases where the attributes are in close proximity to the application and it is better for the PEP to do the lookup.

Each scenario and use case should be analyzed, but our starting position would be to have the PEP include attributes it has already collected and to let the PDP look up the rest through its PIP interface. Attribute retrieval is really an externality for the application and should be left to the authorization service. It is also important to consider what happens when policies change. If too much attribute handling is done by the application, it may require additional code changes to accommodate policy changes. If the developer relies on the authorization service to deal with attribute management, then he/she gets the additional benefit of fewer (if any) code changes when the access policies must be adjusted.

JM: Another form of reuse that many enterprise applications should consider, but are not currently implementing, is the notion of supporting multiple tenants. Today, an enterprise may take an application and deploy it redundantly instead of keeping a single instance and allowing multiple tenants to live within it. If I wanted to show development leadership in this regard, how can entitlements help?

GG: Applications have multiple layers or integration points where you must consider authorization for a multi-tenant configuration – this also applies to single-tenant applications. As you described earlier, access policies need to be applied at the presentation and web services or API layers. Beyond this, you have the data layer, typically a database, to consider. It is likely that enterprises deploy multiple instances of an application and its database because they cannot adequately filter data per tenant with current technologies or approaches. With an XACML entitlements system, you can enforce row-, column- and field-level access controls – providing consistent enforcement of entitlements from presentation to web service to the database. Axiomatics builds specific database integrations (such as Oracle, Microsoft and others), but customers can also use the API to integrate with their preferred SQL coding mechanisms. We think this is a less costly AND more secure solution than what can be purchased from Oracle, for example.

With the approach just described, enterprises can get some economies of scale by deploying fewer application instances – I know there are reports out there about idle CPU time in data centers. Hopefully this also reduces the operational burden by managing fewer instances, but the operations center has to know more detail about which user communities or customer groups each application is supporting.

JM: Our corporation has been breached on more than a few occasions by wily hackers. Every time this happens, the security forensic jamboree blows their trumpets really loudly, asking for assistance in determining what happened. They attempt to reactively walk through log files. To me, this feels like a ceremonial failure. Can entitlements management make those information security people disappear so that I can focus on developing code that provides business value without listening to their forensic whining?

GG: Audit logs of what HAS happened will always be important when attempting to analyze a breach, incident or even for extreme troubleshooting. I think it can be helpful to investigators if there are fewer access logs to examine – here a central authorization service can provide a lot of benefit. A central authorization system that serves multiple applications gives you a single audit stream and single audit file format. It also relieves developers from at least some of the burdens of security logging – although there may be requirements to log additional context that the authorization system is not aware of.

There is also a proactive side of this coin: what CAN users access in an application. It seems that, as an industry, we’ve been trying to definitively answer auditor questions such as, “Who can update accounting data in the general ledger system?” or “Who can approve internal equity trades when the firm’s accumulated risk position reaches a certain threshold?” First, there is a fundamental failure in application design when business owners, auditors and security officers alike cannot easily answer these questions. Why is it still acceptable to build and buy applications that actually increase the operational risk for an organization? Second, many identity management technologies have only served to mask the problem and, ultimately, enable the problem to continue. For example, user provisioning systems were initially thought to be capable of managing access and entitlements for business applications. It turns out that they are relatively good at creating user accounts, but have limited visibility into application entitlements – those are managed by local admin teams. Access governance tools have a better view of entitlements, but it remains difficult to get a complete view when authorization logic is embedded in the application code.

With XACML policies implemented, auditors can test specific access scenarios to confirm enterprise objectives are being met. A policy language is an infinitely richer model for expressing access control policies than ACLs, group lists, or roles. Finally, you can specifically answer those auditor questions of who can access or update applications, transactions, or data.

XACML and Dynamic Access Control in Windows Server 2012

May 25, 2012

Microsoft has introduced a significant feature enhancement to Windows Server 2012: Dynamic Access Control (DAC). This is a big upgrade from the access control lists (ACLs) used in previous generations of Windows Server, giving enterprises a richer and more flexible authorization model at their disposal. The new functionality gives enterprises tools to more effectively control access to the vast amounts of data in Windows file shares, while complying with business, security and compliance policies. You can find an excellent introduction to Dynamic Access Control here, and I expect Microsoft to publish much more information as we get closer to the GA date for Windows Server 2012.

At Axiomatics, we have added a new feature to our core XACML engine – Axiomatics Policy Server – so that XACML authorization policies can be converted into a format recognized by the DAC function in Windows Server 2012. To implement DAC, Microsoft uses Security Descriptor Definition Language, or SDDL. The Axiomatics feature automatically translates XACML policies into SDDL format and loads the policies into your Windows Server 2012 Active Directory.

There are several benefits to the Axiomatics integration that will enhance Windows Server 2012 deployments, including:

  • Leverage a central authoritative source of access policies: XACML access policies that are implemented across other applications in the enterprise can now be applied to Windows Server environments.
  • Manage and control access to file server resources more easily: Policy languages such as XACML provide a more direct and flexible model for managing access to vast amounts of data spread across hundreds or thousands of servers.
  • Meet audit and compliance requirements more easily: An externalized and authoritative source for access policies means you have fewer places to audit and certify the access controls for critical applications and data.
  • Report on who has access: Axiomatics provides advanced reporting tools to fully explore and validate your access control policies.
  • Consistently enforce access across applications and platforms: Enable your Windows Server 2012 to participate in a broader, central authorization service. In this mode, enterprises can ensure a consistent level of policy enforcement across the environment – based on the single, authoritative source of access policies.
  • Best runtime performance: Windows Server 2012 performance is not impacted, since its normal internal access control mechanism is being utilized – there is no callout to an external authorization engine. This gives enterprises the best performance possible, but also provides the assurance that access control is being implemented according to centrally managed policies.
  • Increase value of your XACML investment: Integration with platforms such as Windows Server 2012 or Microsoft SharePoint 2010 extends the reach of your XACML authorization system.

If you are planning to visit Microsoft TechEd 2012, please stop by our booth in the partner pavilion for a demonstration.

Part One: Software Development Lifecycle (Architecture and Design)

April 5, 2012

Last year, James McGovern – previously Chief Security Architect for The Hartford and now lead Enterprise Architect for HP, focused on insurance – and I held several discussions (Part 1, Part 2, Part 3) on using entitlements management within the insurance vertical. Now that we are in a new year, we have decided to revisit entitlements management from the perspective of the software development lifecycle.

JM: Historically speaking, a majority of enterprise applications were built without regard to modern approaches to either identity or entitlements management. At the same time, there is no published guidance by either the information security community or industry analysts in terms of how not to repeat past sins. So, let’s dive into some of the challenges a security architecture team should consider when providing guidance to developers on building applications securely. Are you game?

GG: Definitely! I think it remains an issue that applications are still being built without a modern approach to identity or entitlements – we see many cases where developers make their own determinations on how to best handle these tasks. Security architects and enterprise architects have long professed the desire to externalize security and identity from applications, but this guidance has an uneven track record of success.

JM: The average enterprise is not short of places to store identity. One common place where identity is stored is within Active Directory. However, infrastructure teams generally don’t allow for extending Active Directory for application purposes. So, should architects champion having a separate identity store for enterprise applications or somehow find a way to at least centralize application identity?

GG: Attribute management and governance is a key element to an ABAC (attribute based access control) approach. You might expect that one source of identity data is ideal, but that is not the reality of most deployments. Identity and other attribute data is distributed between AD, enterprise directories, HR, databases, CRM systems, supply chain systems, etc. The important thing is to have a process for policy modeling that is aware of and accommodates the source of attributes that are used in decision making.

For example, some attributes are derived from the session and application context, captured by the policy enforcement point (PEP) code and sent to the policy decision point (PDP) with the access request. The PDP can look up additional attributes through a policy information point (PIP) interface. The PIP is configured to connect with authoritative sources of information, which could be additional information about the user, resource or environment.

JM: While I haven't run across an enterprise that has gotten a handle on identity, I can also say that many security architecture professionals haven't figured out ways to stitch together identity on the fly either. If we are going to leave identity distributed, what should we consider?

GG: I am a proponent of a distributed model as the starting point for this issue. That is, identity data should be stored and managed in close proximity to its authoritative source. In a distributed approach such as this, data accuracy should be better than if it is synchronized into a central source. Others will argue for data synchronization, and it is important when performance requirements call for a local copy of data. Therefore, performance, latency and data volatility are all issues to consider.

JM: What if an enterprise application currently assumes that authentication occurs by taking a user-provided token and comparing it to something stored within the application's database? Many shops deploy web access management (WAM) technologies such as Yale CAS, CA SiteMinder, etc., where they centralize authentication and pass around session cookies, but may not know, from an identity perspective, why this may not be a complete solution.

GG: A few things come to mind here. First, a WAM session token is proprietary and therefore has a number of limitations in the areas of interoperability, support in multiple platforms, etc.

Second, there is the issue of separation of concerns. From an architectural perspective, I strongly believe in having an approach that treats authentication separate from authorization concerns. One of the main benefits is the ability to adjust your authentication scheme to meet the rapidly changing threats that we see emerging on a daily or weekly basis. If authentication is tightly coupled with another identity component, then an organization is severely limiting its ability to cope with security threats.

Finally, authentication should be performed at the identity domain that is most familiar with the user. Said another way, each application does not and should not store a credential for users. Federation standards permit the user to authenticate at their home domain and present a standardized token to applications they may subsequently access.

JM: Have you ever been to a website where they ask you to enter your credentials without providing any cues as to what form the credential takes? For example, is it a user ID or an email address? A person may have multiple unique identifiers. Is it possible to use entitlements management as a centralized authenticator for an enterprise application in this scenario?

GG: My initial thought is “no” based on my comments regarding separating authN and authZ above. There are also security reasons for not giving the user a hint about the credential – to reduce the attack surface for someone trying to compromise the site.

However, there may be cases where a web site wishes to permit the use of multiple unique identifiers for authentication. Once you get to the authorization step, will you still have all the necessary user attributes available? Do you need to map all the identifiers to the attribute stores? You can end up making the authorization more complex than it needs to be.

JM: If you have ever witnessed how enterprise applications are developed, they usually start out with the notion of two roles, where the first role is a user and the second is the administrator. The user can do a few things and the administrator can do anything. Surely we need something finer-grained than this if we want to improve the security of enterprise applications. What guidance could you provide in terms of modeling roles?

GG: There are different levels of roles that should be defined for any given application:

  • Security Administrator: Their only purpose is to manage and potentially assign entitlements.
  • System Administrator: They manage the application or platform but don't deal with entitlements.
  • User Role: These are the regular users that will interact with the system.

I definitely would start with the security administrator role – this role deals with managing entitlements and access policies and assigning them to users – and it should not have access to the data, transactions or resources within the application. Second, the system administrator role's functionality should be constrained to managing the application, such as configuring the system, starting/stopping the application, defining additional access roles (see below) or other operational functions that are not associated with the business application. This is a vast departure from the super user model, where there is a root account with complete access to everything on a system, which ends up being a security and audit nightmare.

Third, you can define a user role that permits an individual to login to the application but with very limited capabilities. Here is where ABAC/XACML comes in to give you the granularity required. Access rules can define what functions a user role can perform as well as what data they can perform functions on. With this kind of dynamic capability, you can enforce rules such as, Managers can view payroll data for employees in their department.
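For illustration only, here is the attribute comparison that a rule like "Managers can view payroll data for employees in their department" boils down to. In a real deployment this logic lives in the XACML policy, not in application code; the attribute names are invented for the sketch.

```java
import java.util.Map;

public class PayrollRuleSketch {
    // The rule as an attribute comparison: the subject must hold the
    // manager role, and the subject's department must match the
    // department that owns the payroll record.
    static boolean permits(Map<String, String> subject, Map<String, String> resource) {
        return "manager".equals(subject.get("role"))
                && subject.get("department") != null
                && subject.get("department").equals(resource.get("owning-department"));
    }

    public static void main(String[] args) {
        System.out.println(permits(
                Map.of("role", "manager", "department", "claims"),
                Map.of("owning-department", "claims")));       // true
        System.out.println(permits(
                Map.of("role", "manager", "department", "claims"),
                Map.of("owning-department", "underwriting"))); // false
    }
}
```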

JM: I had the opportunity in my career to be the lead architect for many once popular and now defunct Internet startups during the dot-com era. At no time do I remember anyone ever inquiring about a standard for what a resource naming convention should look like. Even today, many enterprise applications have no discernible standards as to what a URL should look like. Now that we have portals and web services, this challenge is even more elusive. I know that web access management technologies use introspection techniques and therefore are suboptimal in this regard. Does entitlements management provide a potential solution, and if so, what constructs should we consider in designing new enterprise applications?

GG: The XACML policy language includes a namespace and naming convention for attributes, policies, etc. This helps to organize the system and also to avoid conflicts in the use of metadata. It is also possible to incorporate semantic web approaches or ontologies to manage large and complex environments – we are seeing some customers interested in exploring these capabilities.

JM: I have heard Gunnar Peterson use an analogy in a testing context that makes me smile. He once stated that testing through the UI is like attempting to inspect the plumbing in your basement by peering through your showerhead. This seems to hint that many applications think of security only through the user interface. Does entitlements management provide the ability to define a security model that is cohesive and deals with all layers of an enterprise application?

GG: Absolutely, this is one of the strengths of the XACML architecture. You can define all the access rules that an XACML policy server will enforce – and install policy enforcement points (PEP) at the necessary layers of an application. These are typically installed at the presentation, application and data tiers or layers. Such an approach is important because you have a different session context at each layer and may have different security concerns to address, but the organization needs to ensure that a set of access rules are consistently enforced throughout the layers of the application. Further, individual services or APIs can be secured as they are used on their own or in mash-up scenarios.

You get the additional benefit of a consolidated access log for all layers of the application. All access successes and failures are available for reporting, investigations or forensic purposes.

JM: Some enterprises are moving away from thinking in terms of objects towards thinking in terms of business processes. How should a security architect think about applying an entitlements-based approach to BPM?

GG: I recall writing some years ago that BPM tools could facilitate the creation of application roles – it’s very interesting that you now ask me about BPM and entitlements! But it’s a logical question. BPM tools help you map out and visualize the application, have the notion of a namespace, resources, and so on. At least a couple of places where entitlements and authorization rules can be derived are within BPM activities as well as when you have an interaction with an activity in another swim lane.

JM: Enterprises are also developing mobile applications that allow their consumers to access services, pay bills and conduct business transactions. It goes without saying that a mobile application should have the same security model or at least adhere to the same security principles as an internally hosted web application. What are some of the entitlements considerations an architect should think about?

GG: There are several considerations that come to mind, but let’s address just a few of them here.

  • Do you need to limit functionality or data download for mobile devices? This can be enforced in your access policies.
  • Do you need to control what functions/buttons/content is displayed on the screen? This is commonly done for access via non-mobile browsers.
  • Do you need to support offline mode or deal with low-bandwidth connections (insert your least favorite carrier here)? In this case, you may need to support long-lived entitlements or access decisions, as opposed to the normal transactional model for XACML systems.
  • Where is the data? How much data is stored on the mobile device? Is the data stored in the cloud? The answers to these questions help to determine how the authorization solution is architected.

XACML – so much more than some people can see!

June 29, 2011

This is Felix Gaehtgens and I am one of the latest additions to the Axiomatics team. My colleagues Gerry Gebel and David Brossard were kind enough to offer me space on their blogs. As a former industry analyst, I'm going to start here, and post my future, more technical musings on David's blog.

Speaking of musings, I just stumbled upon an article on heise.de by my former colleague Martin Kuppinger. The article talks about access management with the XACML standard and is entitled "Only good if you can't see any of it". It is published in German, and if you can understand that, you can read it here. Martin is a distinguished and very smart analyst, and I am honoured to have worked alongside him for more than three years at Kuppinger Cole, where I covered the authorization field and in particular the XACML technology. His past and current musings – apart from this one article – provide excellent insight into the current state of technology within the Identity and Access Management field.

In his article, Martin wrote that XACML is a complex language. He expects that companies implementing XACML will have to confront a large set of many complex rules, and therefore the management of these rules is complex. He makes the point that XACML has to be “hidden” by tools because it is just too complex to use.

In my experience, nothing is further from the truth. This merely reflects some common misconceptions about XACML, as well as some FUD (fear, uncertainty and doubt) spread by people without a sufficient understanding of the XACML technology. What is important here is that customers need fully-fledged XACML tools to solve their complex authorization problems. Cutting corners and then calling those limited features "simplification" to make the product "more user-friendly" means that you don't get all of the powerful benefits, and shouldn't be an option for you. You need the potential of this technology to tackle the hard authorization problems in the real world. You will need to use the concepts of XACML to create compact and adequate policies. No short-cuts are recommended here.

XACML is a language used for access control, and it is very powerful and flexible. When I compare it to simple rules used in traditional SSO and RBAC-like access management products, I like saying that RBAC is like a simple text editor and XACML is like a word processor. If your needs are really simple then perhaps a text editor is enough. But if you want to create real documents, I’d highly recommend using a word processor.

The reality in XACML implementations is that you create a small set of policies that quite accurately reflects the overall business requirements. In many cases, you can almost "translate" a business access control requirement into one policy. This makes things quite simple, because you can easily edit the small number of policies to make sure that they accurately reflect what the business is demanding.

I've seen this already "at work" with several of our customers. They are really enthusiastic about the fact that they can create a fairly simple policy model in XACML to link their business policies together with data that already exists in the enterprise, and deploy truly powerful and dynamic access control for their applications. It's very cool to see this in action. That's why I really disagree with the "good practice" of "generating" XACML from "simple" rules. Sure, you can do that, and there may be some cases when this could be useful, but then you are really missing out on those benefits of XACML that can truly transform your deployment.

Consider the other, more “traditionalist” approach. You have primitive and “simplified” access control rules, often in the form of “if user X has role Y”. You then need to model your business requirements in these types of low-level rules. Not an easy task, and it also means that you will need to maintain a whole new data repository to support your simplistic policy model. That is complex. And that is much more difficult to maintain and audit.

So XACML is "expressive" and "complete" rather than "primitive" because it allows you to do many more things. It allows you to define your access policy in just a few well-defined rules that are not difficult to understand, rather than in a myriad patchwork of primitive access control statements that become difficult and expensive to maintain and audit. This, in my opinion, is one of the greatest benefits of XACML. Maintaining your access control system in XACML makes things much easier, not more complex.

When I started at Axiomatics, I already expected that our customers would find it easier to model their access control requirements with XACML than with a traditional role-based approach. Once I started talking to them, I was somewhat surprised to find out that many of them found it even easier than I thought! Sure, you have to learn a little (not too much), but this usually saves you so much more time and money because it elegantly side-steps a lot of the mess that you would otherwise have to confront.

Where Martin does have a point is that XACML is based on XML, and nobody (except maybe some hard-core geeks) would like to maintain XACML policies by editing raw XML files. But that's obvious. At Axiomatics, we ship a GUI (graphical user interface) that allows you to define and test your XACML policies. It offers the full power and features of XACML in an easy way, rather than a limited "simplified" subset of possibilities that doesn't go far. The learning curve with XACML is not steep, because it is actually very natural in the way it allows you to express your requirements; and as with any powerful tool, you'd be advised to take a bit of time to learn about the features at your disposal and the real-life good practices that we've learned from our customers' experiences. I'd be happy to discuss them with you – just drop me an email.

Part Three: Enterprise Authorization Scenarios with James McGovern

April 6, 2011

Here is the third installment in a series of conversations I have had with James McGovern, enterprise architect extraordinaire. In this post, we expand the scope from insurance scenarios to include some broader enterprise contexts for externalized authorization.

JM: Over the last couple of years, I have had lots of fascinating conversations with Architects in Fortune enterprises regarding their Identity Management deployment and several common themes have emerged including that while they could do basic provisioning of an account in Active Directory, they couldn’t manage to successfully provision many enterprise applications due to challenges that go beyond simplistic identity. Can XACML help get them to the next stage of maturity?

GG: Your question reminds me of the latest round of commentary regarding the futility of a provisioning approach to identity management. Particularly from the latest Gartner IAM summit, speakers were lamenting the state of the provisioning market and how little progress has been made over the last 10 years. At the heart of the problem is the fact that provisioning tools just don’t have visibility into application privileges and entitlements, in the vast majority of deployments. Instead, provisioning deployments tend to “skim the surface” by managing userIDs/passwords, but defer deep entitlement settings to the target application or platform. Of course, the most difficult applications to manage are an issue because they don’t properly externalize identity management functions – making provisioning deployments more expensive as well as less than optimal.

Enter the “pull” model espoused by my former colleague, Bob Blakley. The basic premise of the pull model is that identity data is resolved at runtime by calling the appropriate service. If a user accesses an application before authenticating, redirect them to an authentication service. If a user accesses the application with a known token, redirect to the token service for proper handling. When the user attempts to perform a protected function, an authorization service should be called for evaluation.

As the reader may have surmised, the more an application externalizes identity – the less provisioning is required. Instead of provisioning accounts and entitlements to every application, a smaller number of authoritative identity service points are provisioned that can be leveraged by many applications. COTS applications would come preconfigured with policies for entitlements and authorization, instead of using a proprietary, embedded approach. To extend this further, access controls for COTS applications from different vendors can be implemented consistently – without excess access “leaks” – if they share a centralized access control model.

Therefore, the ability to centrally describe the authorization model of an enterprise application would help. The challenge of identity management would significantly change in a number of ways. For example, enterprises would need to establish which identity services they would provision for the purposes of authentication – and which established, external identity providers they would consume. Authoritative attribute sources would fall into the same category. Finally, authorization policy modeling and management skills would become more prominent so that a normalized view could be attained across the enterprise.

JM: I remember conversations with my former boss at The Hartford where he asked me to explain the value proposition of identity management. He didn’t understand the value of spending millions of dollars for a system to tell him that James McGovern is still an employee. After all, he would know whether he fired me or not. What he wanted to know is what James McGovern could access if he decided to fire me. More importantly, even being in the role of Chief Security Architect, I couldn’t always figure out what I had access to.

GG: Sure, your boss would know whether he fired you or not – but what about all those independent insurance agents we’ve discussed in previous scenarios? Dealing with hundreds, thousands or millions of users and managing what they have access to is what drives organizations to spend significant sums on identity management. That said, there is often a budget imbalance because internal systems are more complex and expensive to operate than the applications serving external constituencies.

Determining what resources a particular user has access to, or who has access rights to a given resource, are questions that auditors, as well as system administrators, want answered. Administrators need this detail so they can properly set up access for a new employee, contractor, customer, etc. Of course, they also need this information so de-provisioning can occur when the relationship is terminated. Auditors and regulators are responsible for ensuring that the organization is following internal business and security policies, as well as any regulations or laws it may be subject to.

Current practices, where identity and policy are embedded in each business application, have proven to be very inefficient when attempting to audit the environment. It is not unusual for large organizations to have several hundred or a few thousand applications – imagine trying to audit such an environment on a regular basis when identity information and policy are not externalized. The situation can be utterly insane if each application has its own store of user data, because then you also have a synchronization and reconciliation challenge. Herein lies one of the main value propositions of externalizing authorization: auditability and accountability are much easier to accomplish because you have a central place where policies are defined and maintained (although the enforcement of those policies can certainly be distributed). Further, when you combine externalized authorization with identity governance, you can achieve even more visibility and transparency into access controls.

JM: Many architects have enterprise initiatives to reduce the number of signons a user has to perform against enterprise applications in any given day. Once they have implemented a solution that carries a token providing seamless access between systems, they discover that they now have an authorization problem. What role should XACML play in an identity strategy?

GG: Sounds like that famous Vegas game, whack-a-mole. As soon as you think you have solved one problem, a new one appears… The scenario you describe can occur if the architects have not fully mapped out their strategy or understood the full consequences (intended or otherwise) of the architecture. If you move to a tokenized authentication approach (like SAML), then you have accomplished two worthy goals: reduced signon for users and fewer systems to provision a credential to.

However, as you point out, the application still needs to do authorization in some way. This could be accomplished if the application retains some kind of user store and keeps entitlement or personalization attributes about the user – at least the application is not storing or managing a credential. Thinking back to the issue of hundreds or thousands of applications, this doesn’t sound like a good solution for a number of reasons.

The preferred approach, if you have externalized authentication, is to also externalize authorization and utilize an XACML system. When the user presents their SSO token of choice to the application, it can call out to an XACML policy engine (the engine could also be embedded in the application for lowest latency) for the authorization decision. This is the approach we see more and more organizations taking.
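As a minimal sketch of that call-out (all subject, action and resource values are invented for illustration), the XACML 3.0 request context a PEP might build from a validated SSO token could look like this:

<Request xmlns="urn:oasis:names:tc:xacml:3.0:core:schema:wd-17"
         CombinedDecision="false" ReturnPolicyIdList="false">
  <!-- Subject identity taken from the validated SSO token -->
  <Attributes Category="urn:oasis:names:tc:xacml:1.0:subject-category:access-subject">
    <Attribute AttributeId="urn:oasis:names:tc:xacml:1.0:subject:subject-id" IncludeInResult="false">
      <AttributeValue DataType="http://www.w3.org/2001/XMLSchema#string">alice</AttributeValue>
    </Attribute>
  </Attributes>
  <!-- What the user is trying to do -->
  <Attributes Category="urn:oasis:names:tc:xacml:3.0:attribute-category:action">
    <Attribute AttributeId="urn:oasis:names:tc:xacml:1.0:action:action-id" IncludeInResult="false">
      <AttributeValue DataType="http://www.w3.org/2001/XMLSchema#string">view</AttributeValue>
    </Attribute>
  </Attributes>
  <!-- The resource being accessed -->
  <Attributes Category="urn:oasis:names:tc:xacml:3.0:attribute-category:resource">
    <Attribute AttributeId="urn:oasis:names:tc:xacml:1.0:resource:resource-id" IncludeInResult="false">
      <AttributeValue DataType="http://www.w3.org/2001/XMLSchema#string">claim-AD345</AttributeValue>
    </Attribute>
  </Attributes>
</Request>

The PDP returns a response carrying the Permit/Deny decision (and optionally obligations), so the application never needs to store entitlement data itself.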

JM: The average Fortune insurance enterprise may have hundreds of enterprise applications, where maybe only 20% are commercial off-the-shelf (COTS) products. Vendors such as Oracle, IBM and SAP plan to provide out-of-the-box XACML integration in future releases of their ERP, CRM, BPM, ECM and Portal products. Other large enterprise vendors, however, seem to be missing in action. How do I fill in the gaps?

GG: This is where you need to rely on your authorization vendor to provide PEPs for COTS applications that don’t directly support XACML. In some cases, you can use a proxy, custom PEP code or an XML gateway (such as from Layer 7, Intel or Vordel) to intercept calls to the application. In other cases, a hybrid approach is necessary because the application cannot operate unless users are provisioned into certain roles or groups.

Ultimately application customers have a lot of influence with their vendors on what standards should be supported. Enterprises should use what leverage they have to encourage XACML adoption where appropriate – that leverage could come in the form of willingness to buy the application if standards are supported vs. building it internally if the required standards are not included.

JM: Many corporations have SOX controls not only around IT systems but also around physical security. Does XACML have value here?

GG: There is definitely a use case where XACML authorization is the policy engine for converged physical/logical access systems. We are seeing some interest in this capability in certain defense scenarios and are working with a physical access vendor on some prototypes. The idea is that access decisions are determined not only by typical logical access rules, but also by where you are physically located. For example, the batch job for printing insurance claim checks will only be released once I have badged into the print room.
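As a minimal sketch of what such a converged rule could look like – the urn:example:subject:current-zone attribute, fed by the badging system through a PIP, is hypothetical:

<Rule RuleId="urn:example:rule:release-print-job" Effect="Permit">
  <Description>Release the check print job only if the requester has badged into the print room</Description>
  <Target>
    <AnyOf><AllOf>
      <Match MatchId="urn:oasis:names:tc:xacml:1.0:function:string-equal">
        <AttributeValue DataType="http://www.w3.org/2001/XMLSchema#string">release-print-job</AttributeValue>
        <AttributeDesignator Category="urn:oasis:names:tc:xacml:3.0:attribute-category:action"
                             AttributeId="urn:oasis:names:tc:xacml:1.0:action:action-id"
                             DataType="http://www.w3.org/2001/XMLSchema#string" MustBePresent="false"/>
      </Match>
    </AllOf></AnyOf>
  </Target>
  <Condition>
    <!-- current-zone is fed by the physical access (badging) system via a PIP -->
    <Apply FunctionId="urn:oasis:names:tc:xacml:1.0:function:string-is-in">
      <AttributeValue DataType="http://www.w3.org/2001/XMLSchema#string">print-room</AttributeValue>
      <AttributeDesignator Category="urn:oasis:names:tc:xacml:1.0:subject-category:access-subject"
                           AttributeId="urn:example:subject:current-zone"
                           DataType="http://www.w3.org/2001/XMLSchema#string" MustBePresent="false"/>
    </Apply>
  </Condition>
</Rule>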

JM: So far, the conversation around identity has dominated many of the industry conferences and analyst research. The marketplace at large is blissfully ignorant of the challenges of managing entitlements within an enterprise context. What do you think will be the catalyst for this getting more airtime in front of IT executives?

GG: I think there are significant challenges that are forcing the issue of entitlements to the surface:

  1. The need to share: an organization’s most valuable and sensitive data is precisely the data that partners, customers, and suppliers want access to. The business imperative is to share this data securely.
  2. Overexposure of data: The counterpoint to the first item is that too much data is exposed – that is, sharing of sensitive data must be very granular so that the proper access is granted, but no more.
  3. Sensitivity of data: We are in an era of seemingly continuous incidents of personal data release – either accidentally or due to poor security controls. Insurance companies collect lots of personal data such as what car you drive, where you work, what valuables you have in your home, medical information, insurance policy data, workers compensation data, etc. All this data needs to be protected from improper disclosure.
  4. Moving workloads to the cloud: Regardless of all the hype around cloud computing, there is a strong drive to utilize the capabilities of this latest computing movement. What’s particular to entitlements surfaces in at least two areas. First, it is almost impossible to move workloads out of their traditional data center if entitlements and other IdM functions are “hard wired” into the application, because the application will cease to function. Second, once applications are moved to the cloud, you need to have a consistent way to enforce access – regardless of where the applications and data are hosted. This cries out for a common entitlement and authorization model that can be applied to all resources.

JM: I have some really wild scenarios in the back of my head on how XACML could enable better protections in relational databases, how it could be used for implementing user privacy rights in enterprise applications, and even how it could provide digital rights management. What are some of the more novel uses of XACML on your radar that you think the information security community should be thinking of?

GG: The XACML policy language and architectural model are incredibly flexible and applicable to many business scenarios. Databases pose a particular challenge, but there are certainly creative ways to address this and it would be great to explore some of your ideas. Privacy scenarios have their own challenges because you can have legal restrictions on PII as well as user preferences to accommodate. At Axiomatics, we always welcome input from potential customers on their most challenging authorization scenarios to see how we can meet their requirements.

Biographies for James and Gerry:

James McGovern

James McGovern is currently employed by a leading consultancy and is responsible for defining next-generation solutions for Fortune enterprises. Most recently he was employed as an Enterprise Architect for The Hartford. Throughout his career, James has been responsible for leading key innovation initiatives. He is known as an author of several books focused on Enterprise Architecture, Service Oriented Architectures and Java Software Development. He is deeply passionate about topics including web application security, social media and agile software development.

James is a fanatic champion of work/life balance, corporate social responsibility and helping make poverty history. James heads the Hartford Chapter of the Open Web Application Security Project (OWASP) and contributes his information security expertise to many underserved non-profits. When not blogging or twittering, he spends time with his two kids (six and nine) who are currently training to be world champions in BMX and Jiu-Jitsu.

Gerry Gebel, President, Axiomatics Americas

As president, Gerry Gebel is responsible for sales, customer support, marketing, and business development for the Americas region. In addition, he will contribute to product strategy and manage partner relationships for Axiomatics.

Prior to joining Axiomatics, Gebel was vice president and service director for Burton Group’s identity management practice. Gebel authored or contributed to more than 70 reports and articles on topics such as authorization, federation, identity and access governance, user provisioning and other IdM topics. Gebel has also been instrumental in advancing the state of identity-based interoperability by leading demonstration projects for federation, entitlement management, and user-centric standards and specifications. In 2007, Gebel facilitated the first ever XACML interoperability demonstration at the Burton Group Catalyst conference.

In addition, Gebel has nearly 15 years’ experience in the financial services industry, including architecture development, engineering, integration, and support of Internet, distributed, and mainframe systems.

Take 3, talking authZ and TOCTOU with Gunnar

March 18, 2011

Here is part 3 of a conversation with Gunnar Peterson where we continue talking about externalized authorization and who in the organization is involved in an XACML system deployment – it even includes a discussion of TOCTOU concerns as they relate to a XACML system. Thanks also to my colleagues, David Brossard and Pablo Giambiagi, for their input. You can also find part 1 and part 2 of the conversation on this blog.

GP: In our last conversation you mentioned “As with other access policies, administrators will work with application and security experts to construct XACML policies. Administrators learn this skill fairly quickly, but they do need some training and guidance at first.”

In your experience, who authors these policies today? And where should this responsibility sit in the organization going forward? It seems there is a mix of application domain and security experience required, plus there may be some need to understand business processes and architecture. Is a new organizational role, Security Policy Manager, emerging?

GG: I like the sound of Security Policy Manager, and it is a role that could appear over time. For now, we see security administrators working with a business analyst and/or application developer to construct the policies. This amounts to determining what resources/actions need to be protected, translating the business policy (doctors can only view records of patients they have a care relation with), determining what attributes are needed, sourcing the necessary attributes, etc.
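To illustrate, here is a minimal sketch of how the doctor/patient business policy above could translate into a XACML 3.0 rule – the urn:example attribute IDs are invented, and the attending-physicians list would be sourced from the care-relationship system via a PIP:

<Rule RuleId="urn:example:rule:doctor-care-relation" Effect="Permit">
  <Description>Doctors can view records only of patients they have a care relation with</Description>
  <Condition>
    <Apply FunctionId="urn:oasis:names:tc:xacml:1.0:function:and">
      <!-- The requester holds the doctor role -->
      <Apply FunctionId="urn:oasis:names:tc:xacml:1.0:function:string-is-in">
        <AttributeValue DataType="http://www.w3.org/2001/XMLSchema#string">doctor</AttributeValue>
        <AttributeDesignator Category="urn:oasis:names:tc:xacml:1.0:subject-category:access-subject"
                             AttributeId="urn:example:subject:role"
                             DataType="http://www.w3.org/2001/XMLSchema#string" MustBePresent="false"/>
      </Apply>
      <!-- The requester appears in the record's attending-physicians list (sourced via a PIP) -->
      <Apply FunctionId="urn:oasis:names:tc:xacml:1.0:function:string-is-in">
        <Apply FunctionId="urn:oasis:names:tc:xacml:1.0:function:string-one-and-only">
          <AttributeDesignator Category="urn:oasis:names:tc:xacml:1.0:subject-category:access-subject"
                               AttributeId="urn:oasis:names:tc:xacml:1.0:subject:subject-id"
                               DataType="http://www.w3.org/2001/XMLSchema#string" MustBePresent="true"/>
        </Apply>
        <AttributeDesignator Category="urn:oasis:names:tc:xacml:3.0:attribute-category:resource"
                             AttributeId="urn:example:resource:attending-physicians"
                             DataType="http://www.w3.org/2001/XMLSchema#string" MustBePresent="false"/>
      </Apply>
    </Apply>
  </Condition>
</Rule>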

GP: In looking at Infosec as a whole, it seems to me that we have two working mechanisms – Access Control and Crypto. Everything else is integration. What guideposts should people use to identify and locate the integration points for their authorization logic? Do you focus on the resource object, the user, or the use case context? Or is it a mix?

GG: Policies can focus on the resource, subject, action or environmental attributes – that is the beauty of an attribute based access control language like XACML. That also means there are many ways to model policies; here are some general guidelines:

– Make sure you start with the plain old English-language business rules and policies

– Take a “divide and conquer” approach to building the policy structure. This could mean writing policies for different components of the application that can be viewed as logically separate. What you’re also doing here is taking into account the natural activity flows when people use an application or the types of requests that will come into the PDP. Upon analysis of the scenarios, you can place infrequently used policies in branches where they won’t impact performance of more frequently used policies.

– Evaluate frequently used policies first – this seems intuitive but may not be apparent at first. You need to continually evaluate how the system is being used to see if modifications to the policy structure are needed. As mentioned in the previous point, this will allow you to identify policies that are infrequently evaluated so you can ensure they are not in the path of frequently evaluated policies (see the sketch at the end of this answer).

– Consider the sources of the attributes that you will utilize in policies. Are they readily available, or spread across multiple repositories? If the latter, this is a great place to use a virtual directory.

It is only after analyzing the scenario that you can say whether policies will be targeted to users, resources or some mix of the two. This is an area where we do provide guidance to customers at the start of a deployment. However, we do find that customers are able to manage this aspect of the deployment pretty quickly. Another point to keep in mind is that access policies don’t change that frequently once an application is on-boarded into the system. What happens on a daily basis is the administration of attributes on the individual users that will be accessing the protected applications.
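To make the ordering point above concrete, here is a minimal sketch (the policy IDs are invented) of a policy set using the first-applicable combining algorithm, which lets you place hot-path policies at the top of the evaluation order:

<PolicySet xmlns="urn:oasis:names:tc:xacml:3.0:core:schema:wd-17"
           PolicySetId="urn:example:policyset:claims-app" Version="1.0"
           PolicyCombiningAlgId="urn:oasis:names:tc:xacml:1.0:policy-combining-algorithm:first-applicable">
  <Target/>
  <!-- Hot path: the vast majority of requests are claim views, so this branch is evaluated first -->
  <PolicyIdReference>urn:example:policy:view-claims</PolicyIdReference>
  <!-- Infrequent administrative actions sit lower in the evaluation order -->
  <PolicyIdReference>urn:example:policy:admin-actions</PolicyIdReference>
</PolicySet>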

GP: The Time of Check to Time of Use (TOCTOU) problem has been around for as long as distributed computing. The time between when tickets, tokens, assertions, claims, and/or cookies are created and the time(s) when they are used by the resource’s access control layer gives an attacker a number of places to hide. Choosing the place to deal with this problem and crafting policies is an important engineering decision. What new options and patterns does XACML bring to the table to help architects and developers deal with this problem?

You talked previously about the XACML Anywhere Architecture, where a callback from a Cloud Provider queries the PDP; this would enable the Cloud Provider to get the freshest attributes at runtime while still allowing the Cloud Consumer to benefit from the Cloud Provider’s deployment. The callback idea has appeal for a wide variety of use cases, but do people need to update their attribute schemas with any additional data to mark the freshness of the attributes so the PEP/PDP can decide when or if to fire off these requests? Does the backend PDP have any responsibility to mark attributes? What is the emerging consensus on core patterns here, if any?

GG: You are right to point out that having the ability to call an externalized authorization service provides a mechanism for the application to have the most current attributes and access decisions. However, there are a couple of places in the XACML architecture to consider as they relate to TOCTOU – freshness of attributes and caching of decisions.

Attributes: Fresh or Stale

First we’ll look at the freshness of attributes that you describe, because it is the attribute values that will be processed in the policy evaluation engine (PDP). The XACML 3.0 specification is clear about how attributes are referenced in policies. In terms of XML schema, the attribute designator is defined as follows:

<xs:element name="AttributeDesignator" type="xacml:AttributeDesignatorType" substitutionGroup="xacml:Expression"/>

<xs:complexType name="AttributeDesignatorType">
  <xs:complexContent>
    <xs:extension base="xacml:ExpressionType">
      <xs:attribute name="Category" type="xs:anyURI" use="required"/>
      <xs:attribute name="AttributeId" type="xs:anyURI" use="required"/>
      <xs:attribute name="DataType" type="xs:anyURI" use="required"/>
      <xs:attribute name="Issuer" type="xs:string" use="optional"/>
      <xs:attribute name="MustBePresent" type="xs:boolean" use="required"/>
    </xs:extension>
  </xs:complexContent>
</xs:complexType>

Notice there is no room for a timestamp or freshness attribute in an attribute designator used in a XACML policy (or policy set) that would indicate the temporal terms under which a policy is relevant, based on the freshness of incoming attributes.

Accordingly, the incoming XACML request is made up of attributes and attribute values, defined in the schema as follows:

<xs:element name="Attribute" type="xacml:AttributeType"/>

<xs:complexType name="AttributeType">
  <xs:sequence>
    <xs:element ref="xacml:AttributeValue" maxOccurs="unbounded"/>
  </xs:sequence>
  <xs:attribute name="AttributeId" type="xs:anyURI" use="required"/>
  <xs:attribute name="Issuer" type="xs:string" use="optional"/>
  <xs:attribute name="IncludeInResult" type="xs:boolean" use="required"/>
</xs:complexType>

Again, there is no use or mention of a freshness or time attribute. This is consistent with the fact that the PDP is a stateless component in the architecture: unless it is explicitly told, it cannot know the freshness, time of creation, or expiry date of an attribute value. The PDP does know the time at which it receives a request from the PEP and the time at which it invokes the PIP(s), and it can assume a date of retrieval, but even then the value could be skewed by caching mechanisms along the way. And it will not use those timestamps for access control unless, of course, the policy author explicitly asks it to.

Let’s consider the case of a role attribute, and imagine we want to write a policy which says: grant access to data if

• the user is a manager and

• the role attribute is no more than 5 minutes old (5 minutes being the maximum acceptable age)

This uses 2 attributes: role and freshness. The incoming request says “Joe wants to access data”, which is not enough for the PDP to reach a conclusion. The PDP queries an LDAP directory via a PIP connector to retrieve the roles for Joe: “give me Joe’s roles”. Joe may have multiple roles, some of which may have been cached in the PIP attribute cache. In addition, each role has a freshness value (which could be the time at which the LDAP was queried for the value, or the number of minutes since it was last queried for the given attribute). In that case, we may have different freshness values. Which one should be used? This shows there is a ternary relationship between the user, the role, and the freshness of the role. The simplest way for XACML to handle this today is to encode the freshness and the role value into a single attribute value, e.g. manager::0h04mn37s. In that case the policy must match the attribute value using regular expressions (or XPath) to extract the role value on one hand and the freshness value on the other.
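For example, assuming the combined role::freshness encoding above is carried in a hypothetical urn:example:subject:role-with-freshness attribute, a condition could use a regular expression to accept a manager role whose encoded age is under five minutes – a rough sketch, not a complete treatment of the time format:

<Condition>
  <!-- Permit if any role value matches manager with an encoded age of 0 to 4 minutes -->
  <Apply FunctionId="urn:oasis:names:tc:xacml:3.0:function:any-of">
    <Function FunctionId="urn:oasis:names:tc:xacml:1.0:function:string-regexp-match"/>
    <AttributeValue DataType="http://www.w3.org/2001/XMLSchema#string">^manager::0h0[0-4]mn</AttributeValue>
    <AttributeDesignator Category="urn:oasis:names:tc:xacml:1.0:subject-category:access-subject"
                         AttributeId="urn:example:subject:role-with-freshness"
                         DataType="http://www.w3.org/2001/XMLSchema#string" MustBePresent="false"/>
  </Apply>
</Condition>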

Is TOCTOU masquerading as a governance issue?

Another aspect of attribute freshness is the administrative or governance processes behind them. If attributes are not fresh, isn’t this a governance failure? For example, how frequently does your provisioning process make updates? Once a day, once a week, every hour, or is it event driven with continuous updates? Your answer will provide additional input on how to manage the consumption of attributes by authorization systems. So this problem goes beyond the Time of Check to the Time of Administration.

Decision Cache
Next up, PEP decision caching is another area to examine in order to steer clear of TOCTOU issues. For performance reasons, PEPs can cache decisions for reuse so they don’t have to call the PDP for every access request. Here, your TTL setting defines the window of opportunity during which the PEP could use an invalid decision after a privilege-granting attribute has changed.

In short, XACML is an attribute-based language and helps you express restrictions and conditions, some of which can concern the freshness of attributes. It is easier in that sense than previous access control models, which did not allow for that. However, there has yet to be a standardization effort around the overall freshness challenge. And it is no longer a problem constrained to policy modelling: attribute retrieval strategies, PEP and PIP implementations, caching, and performance all impact freshness (or the impression of freshness).

GP: Thanks very much to Gerry Gebel, David Brossard and Pablo Giambiagi for the conversation on these important topics.

Part Two: Insurance Authorization Scenarios with James McGovern

March 2, 2011

The conversation with James McGovern continues… here is the next installment in a series of posts on the applicability of XACML-based authorization for the insurance industry:

JM: We had a great discussion covering basic entitlement scenarios and how they can be applied to the insurance vertical. Are you ready for some scenarios that are more challenging?

GG: Absolutely…

JM: Let’s dive into two additional insurance-oriented use cases. First, let’s talk about the concept of relationships and how they challenge the traditional notions of authorization and role-based access controls. Imagine you are vacationing in sunny Trinidad and have left your nine-year-old child home alone. Your son, having been raised by responsible parents, decides to renew your automobile registration in order to avoid paying a late penalty, but realizes he needs to also get an automobile insurance card first. How does the insurance carrier determine that your son is authorized to request an insurance card for your policy? The answer is via relationships.

Relationships in an insurance context may be as simple as confirming whether the subject is listed as a named insured on a policy, or more complicated in scenarios where a power of attorney is in place and someone with a totally different name and address, otherwise unrelated to you, may be authorized to conduct business on your behalf.

GG: This is an excellent case where the PIP interface of the policy server can call out to a directory, customer database, or web service to determine if the requestor has a relationship with the policy holder. Having the policy server, the PDP in XACML parlance, make the query simplifies things for the PEP and application. Instead, the PDP figures out what additional attributes are necessary to satisfy a particular policy.

JM: Relationships can be modeled in a variety of manners, but generally speaking can be expressed in either a uni-directional or bi-directional manner. For example, a husband and wife have a bi-directional relationship to each other that can be named as spouse, while an elderly person may have a uni-directional relationship where the person holding the power of attorney can take actions on behalf of the individual but not vice versa.

GG: Again, XACML policies and the PDP can evaluate relationships between entities to resolve access requests. In this example, a person with power of attorney for a parent’s account can make changes to that account because a condition in the XACML rule can dynamically validate access. Spouses can have common access to update insurance policies that they co-own because each is named on the insurance policy – again the XACML condition easily evaluates the relationship: user_attempting_access == named_insured. In this example, named_insured could be a multi-valued attribute that lists parents and children on the insurance policy. The PDP must be able to parse through the multiple values when evaluating access policies. To add another layer of context, each of the persons in the named_insured list could have different privileges where children are allowed to view the insurance policy, but not able to update or cancel it.
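A minimal sketch of that condition in XACML 3.0 – the urn:example:resource:named-insured attribute ID is invented, and it carries the multi-valued list of named insureds:

<Condition>
  <!-- Permit when the requesting user appears in the multi-valued named-insured list -->
  <Apply FunctionId="urn:oasis:names:tc:xacml:1.0:function:string-is-in">
    <Apply FunctionId="urn:oasis:names:tc:xacml:1.0:function:string-one-and-only">
      <AttributeDesignator Category="urn:oasis:names:tc:xacml:1.0:subject-category:access-subject"
                           AttributeId="urn:oasis:names:tc:xacml:1.0:subject:subject-id"
                           DataType="http://www.w3.org/2001/XMLSchema#string" MustBePresent="true"/>
    </Apply>
    <AttributeDesignator Category="urn:oasis:names:tc:xacml:3.0:attribute-category:resource"
                         AttributeId="urn:example:resource:named-insured"
                         DataType="http://www.w3.org/2001/XMLSchema#string" MustBePresent="true"/>
  </Apply>
</Condition>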

JM: In the model of delegation, the power of attorney may have a specified scope whereby the person holding it can perform actions such as making bill payments or endorsement changes, but may not have the right to cancel the policy.

GG: The flexibility of XACML policy is evident for this case as well. For example, policies can have a “target” so that particular effects can be implemented in each scenario. In the above example, a policy with a target of “action=cancel” can have a rule that denies the action, while other actions are permitted. Alternatively, policies could be created for each action, and combining algorithms resolve any conflicting effects. Combining algorithms are defined for deny overrides, permit overrides, first applicable, and several other results.
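Here is a minimal sketch of the first variant – a policy targeted at the cancel action that denies the request when the requester acts under a power of attorney (the urn:example identifiers are invented):

<Policy xmlns="urn:oasis:names:tc:xacml:3.0:core:schema:wd-17"
        PolicyId="urn:example:policy:poa-cancel-restriction" Version="1.0"
        RuleCombiningAlgId="urn:oasis:names:tc:xacml:3.0:rule-combining-algorithm:deny-overrides">
  <!-- This policy only applies to cancel requests -->
  <Target>
    <AnyOf><AllOf>
      <Match MatchId="urn:oasis:names:tc:xacml:1.0:function:string-equal">
        <AttributeValue DataType="http://www.w3.org/2001/XMLSchema#string">cancel</AttributeValue>
        <AttributeDesignator Category="urn:oasis:names:tc:xacml:3.0:attribute-category:action"
                             AttributeId="urn:oasis:names:tc:xacml:1.0:action:action-id"
                             DataType="http://www.w3.org/2001/XMLSchema#string" MustBePresent="false"/>
      </Match>
    </AllOf></AnyOf>
  </Target>
  <!-- Deny cancellation when the requester acts under power of attorney -->
  <Rule RuleId="urn:example:rule:deny-cancel-under-poa" Effect="Deny">
    <Condition>
      <Apply FunctionId="urn:oasis:names:tc:xacml:1.0:function:string-is-in">
        <AttributeValue DataType="http://www.w3.org/2001/XMLSchema#string">power-of-attorney</AttributeValue>
        <AttributeDesignator Category="urn:oasis:names:tc:xacml:1.0:subject-category:access-subject"
                             AttributeId="urn:example:subject:relationship"
                             DataType="http://www.w3.org/2001/XMLSchema#string" MustBePresent="false"/>
      </Apply>
    </Condition>
  </Rule>
</Policy>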

JM: Let’s look at another insurance scenario. Within the claims administration process, you can imagine that a workflow application (BPM) along with a content management application (ECM) would be used frequently. From a business perspective, you may have a process known as First Notice Of Loss (FNOL) whereby a claimant can get the claims process started. The BPM process would handle tasks such as assigning a claims handler to adjudicate the claim, while the ECM system would capture all the relevant documentation, such as the police reports, medical records if there were injuries, and photos of the car you just totaled.

Now, let’s imagine that a famous person such as Steve Jobs or Warren Buffett is out driving their Lamborghini and gets into an accident. For high-profile people, you may want to handle claims a little differently than for the general public, and so you may define a special business process for this purpose. The big question then becomes: how do you keep the security models of the BPM and ECM systems in sync? More importantly, what types of integration would be required between these two platforms?

GG: First, access policies should be designed to restrict claims processors to only handle claims that are assigned to them or their team. This can be accomplished dynamically through the use of conditions, independent of which users get assigned to which teams or groups. As noted earlier, the PIP interface is able to look up group or team membership at runtime. In addition, the insurance company may choose to implement an extra policy to further restrict access to celebrity or VIP clients. An example of where this would have been useful is the “Octo-mom” case, where employees were found to have inappropriately accessed her records. The “celebrity” policy can be targeted to resources associated with an individual, or the resources can be tagged with metadata indicating that a special handling policy applies. In the PDP, results from multiple policies are resolved with the combining algorithms defined in XACML – first applicable, deny overrides, permit overrides, etc.
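As a sketch of the tagging variant, a “celebrity” policy could target records carrying a hypothetical handling-class metadata attribute and, using deny-unless-permit, refuse everyone outside the VIP claims team (all urn:example identifiers are invented):

<Policy xmlns="urn:oasis:names:tc:xacml:3.0:core:schema:wd-17"
        PolicyId="urn:example:policy:vip-special-handling" Version="1.0"
        RuleCombiningAlgId="urn:oasis:names:tc:xacml:3.0:rule-combining-algorithm:deny-unless-permit">
  <!-- Applies only to records tagged with handling-class = vip -->
  <Target>
    <AnyOf><AllOf>
      <Match MatchId="urn:oasis:names:tc:xacml:1.0:function:string-equal">
        <AttributeValue DataType="http://www.w3.org/2001/XMLSchema#string">vip</AttributeValue>
        <AttributeDesignator Category="urn:oasis:names:tc:xacml:3.0:attribute-category:resource"
                             AttributeId="urn:example:resource:handling-class"
                             DataType="http://www.w3.org/2001/XMLSchema#string" MustBePresent="false"/>
      </Match>
    </AllOf></AnyOf>
  </Target>
  <!-- With deny-unless-permit, anyone outside the VIP team is denied -->
  <Rule RuleId="urn:example:rule:vip-team-only" Effect="Permit">
    <Condition>
      <Apply FunctionId="urn:oasis:names:tc:xacml:1.0:function:string-is-in">
        <AttributeValue DataType="http://www.w3.org/2001/XMLSchema#string">vip-claims-team</AttributeValue>
        <AttributeDesignator Category="urn:oasis:names:tc:xacml:1.0:subject-category:access-subject"
                             AttributeId="urn:example:subject:team"
                             DataType="http://www.w3.org/2001/XMLSchema#string" MustBePresent="false"/>
      </Apply>
    </Condition>
  </Rule>
</Policy>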

Regarding integration between BPM and ECM systems, it appears there are multiple options here. In one example, the ECM system can defer access decisions to the BPM layer, which can be effective if the only access to records is through the BPM layer. If access to ECM records flows through different applications, then both ECM and BPM should use the same authorization policies/system. If they use the same authorization system, BPM and ECM are using the same policies by definition and can therefore implement access controls consistently.

JM: It is a good practice to not only assign the claim to a team but for people outside of that team to not have access (in order to respect privacy). The challenge is that teams aren’t static entities and may not be statically provisioned. This model doesn’t just occur within business applications but is a general challenge in many enterprise systems. As you are aware, the vast majority of enterprise directory services tend to have a view of the organization and its people through the lens of reporting relationships and not team composition and how work actually gets done. The notion of the matrixed organization can further blur authorization models.

GG: I agree that directories are not always able to easily represent matrixed relationships within an organization. Ad hoc groups can be created for projects or teams, but can be difficult to manage and keep current. In some cases, virtual directories can provide a more flexible way to surface different views of directory data. The bottom line is that you can’t implement dynamic policies if the necessary relationship data is not available.

JM: Are there practices you recommend that enterprises should consider while modeling directory services to support authorization scenarios described so far?

GG: Yes, there are a number of things to consider regarding directory services when dealing with attribute based access control systems. In general, here are some key points:

  • We tend to prefer using existing, authoritative attribute sources – rather than force any kind of directory service re-design. In typical organizations, this means that privilege-granting attributes could be stored in several repositories that include directories, as well as databases or web services. At some point, the organization may choose to implement a virtual directory product, which gives them a lot of flexibility in aggregating attributes and providing custom schemas for the various consuming applications – including ABAC systems.
  • When constructing XACML policies, the policy author does need to think about where attributes are stored because of performance implications. Attributes may be local to the application or possibly remotely stored in another security domain. Even local attribute lookups can be an expensive operation if the repository does not operate efficiently. There are many techniques to deal with performance, but they must be dealt with in order to achieve adequate response times for interactive users.
  • A corollary to the previous point is the question of what component does the attribute lookup, the PEP or PDP? The PEP will naturally have access to several attributes, such as userID, action, target resource, and some environmental variables. The PEP could look up additional attributes, but it does not necessarily know which policies will be evaluated. Therefore, it is normally better for the PDP to do attribute lookup after it determines what policy(ies) to evaluate.
  • Data quality is always an issue in directory services. As a former colleague, Larry Gauthier, was fond of saying, “Even if you admit your directory data is dirty, it is most likely filthy.” Once an organization starts writing access policies that utilize dirty data, it’s possible that incorrect decisions could be the result. The solution isn’t necessarily technical, but could impact processes that are responsible for updating and maintaining user data – whether that’s in the HR system, enterprise directory, CRM database, or other repositories.

JM: Are you aware of any BPM or ECM vendors that are currently supporting the XACML specification? If not, what do you think enterprise customers can do to help vendors who remain blissfully ignorant of the power of XACML to see the light?

GG: I am not aware of any BPM or ECM vendors that support XACML today. Documentum has published how to add an XACML PEP to their XDB, but I don’t know of their broader plans, if any, to support XACML.

I think customers need to continue pressing vendors to externalize authorization and other identity management functionality from their applications. Customers can do this directly via their product selection process and by proxy through their industry analyst resources. ISVs should not expect to operate in a silo any more because applications have to interact with each other. It is extremely difficult to implement consistent access policy across multiple policy domains and you would think that application vendors have gotten this message by now. Further, XACML is a very mature standard that can be easily integrated into new application development and also feasible for retrofitting many existing applications. Again, the key is for customers and analysts to force the issue with application and infrastructure vendors.

Stay tuned for Part Three…

 

Part One: Insurance Authorization Scenarios with James McGovern

February 16, 2011

In my past role of Industry Analyst at Burton Group, I used to have frequent conversations with James McGovern, who at the time was Chief Security Architect for The Hartford and is now a Director with Virtusa, where he focuses on Enterprise Architecture and Information Security. Recently, we had a dialog on applying XACML in an industry vertical context. This exchange was inspired by similar conversations I had with Gunnar Peterson, where we discussed the applicability of XACML-based solutions to some more general security scenarios. For readers new to XACML, you can find some additional information elsewhere on this blog as well as at http://www.axiomatics.com. Below is a transcript of our conversation…

JM: Let’s dive into three different scenarios using examples from insurance where making proper authorization decisions is vital, and understand how XACML can provide value.

GG: That sounds great James, thanks for bringing up these industry specific examples so we can have a discussion of XACML based systems in that context.

JM: Let’s jump into the first scenario. An independent insurance agent will do business with an insurance carrier through a variety of channels. One method is to visit the carrier’s website that is dedicated to independent insurance agents. The carrier may use web access management (WAM) products to provide security for the website. Another method may be to conduct transactions from their agency management system, which is either installed in their data center (large agencies) or hosted in a SaaS manner (small agencies). The agency management system may create XML-based transactions that are sent to the carrier’s XML gateway for processing. Another method still would be for the agent to conduct a transaction via telephone using interactive voice response (IVR) systems.

In all three scenarios, the independent insurance agent may execute transactions such as requesting a quote, where it is vital not only that each individual channel remain secure, but that all the channels, viewed through the lens of business security, have the same security semantics.

GG: First, I will not address the authentication challenge across these multiple channels and will focus on authorization only. With an XACML-based system, you can indeed implement and enforce the same policies across multiple channels. In the example you cite above, here is where the policy enforcement points (PEPs) would be inserted:

  1. Web access management tier: At this level, let the WAM system do what it does best – manage authentication and the user session. For authorization, WAM integration with an XACML PDP can be implemented in multiple ways. For example, the WAM policy server can call out to the PDP (act like a PEP) or an XACML specific PEP can be installed at the application (website) to handle authorizations.
  2. Agency management system: If the on premises AMS and SaaS AMS are both accessed via an XML gateway, then the gateway acts as the PEP and enforces policies that are evaluated by the PDP. XML gateways are a great way to secure web services because most (all?) of them support the SAML profile for XACML or can integrate with an XACML vendor’s API.
  3. IVR system: This one could be a bit trickier, but the idea is that a PEP can be built for most any environment. If the IVR vendor permits it, then a Java or .NET PEP can be developed pretty quickly to connect with an XACML PDP.

There are many deployment options for where PDPs are installed or policies are managed, but the bottom line is that resources accessed through multiple channels can be protected by a common set of policies and authorization infrastructure.

JM: The IVR scenario is just one example of authorization issues that occur in a telephony environment. In the investment community, there is the notion of a “Chinese Wall”, where an investment firm, for regulatory reasons, may need to prevent phone conversations between individuals in different departments – for example, to keep an employee working on mergers and acquisitions from sharing non-public information with those in the trading department.

GG: XACML integration across a variety of channels is also used at banks – employee accounts are marked as such to enforce access policies, provide employee discounts, etc. Integrating XACML isn’t just valuable for web sites, web services and IVRs; it can also work with instant messaging applications, trading turrets and email to support the concept of Chinese Walls or other regulatory considerations.

JM: Let’s look at another scenario. A large insurance broker may employ hundreds of insurance agents that interact with multiple insurance carriers on a daily basis. From a financial perspective, the broker would like the insurance carriers to provide up-to-the-minute details on commissions from selling insurance products. The challenge is that the insurance carrier may need to understand the organizational structure of the insurance broker so as to not provide information to the wrong person. For example, one insurance broker may organize by regions (e.g. north, south, east, west), while another may organize around size of customer (e.g. large, medium, small), while another still may organize around the types of products sold (e.g. personal, commercial, wealth management, etc). In this scenario, the broker may want the managers of each region to see only their own information, and not that of their peers in other regions.

The ability of an insurance broker to dynamically describe its authorization model to a foreign system at runtime becomes vital to conducting business.

GG: The flexibility of an attribute based access control (ABAC) model, such as the XACML policy language, is very useful in this scenario. From the insurance carrier perspective, it is quite easy to represent the various policies that need to be implemented for each broker. In XACML, attributes are defined in four categories (you can also define additional categories): subject, action, resource, and environment. For the broker organized by region, information such as north, south, etc are passed as subject attributes. Data such as <large customer> or <commercial> are passed as resource attributes to the PDP (either via the PEP or through the PIP interface). The carrier’s PDP will evaluate requests based on its defined policies to determine whether access is permitted or denied. Further, the PDP can also send an obligation back to the PEP with the decision – read access to commission report is granted, but redact sections 2, 5 and 8.
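A minimal sketch of such a rule with a redaction obligation – the urn:example attribute and obligation identifiers are invented, and the PEP must understand the obligation in order to enforce it:

<Rule RuleId="urn:example:rule:region-commission-report" Effect="Permit">
  <Condition>
    <!-- The manager's region must match the report's region -->
    <Apply FunctionId="urn:oasis:names:tc:xacml:1.0:function:string-equal">
      <Apply FunctionId="urn:oasis:names:tc:xacml:1.0:function:string-one-and-only">
        <AttributeDesignator Category="urn:oasis:names:tc:xacml:1.0:subject-category:access-subject"
                             AttributeId="urn:example:subject:region"
                             DataType="http://www.w3.org/2001/XMLSchema#string" MustBePresent="true"/>
      </Apply>
      <Apply FunctionId="urn:oasis:names:tc:xacml:1.0:function:string-one-and-only">
        <AttributeDesignator Category="urn:oasis:names:tc:xacml:3.0:attribute-category:resource"
                             AttributeId="urn:example:resource:region"
                             DataType="http://www.w3.org/2001/XMLSchema#string" MustBePresent="true"/>
      </Apply>
    </Apply>
  </Condition>
  <ObligationExpressions>
    <!-- The PEP must redact the listed sections before returning the report -->
    <ObligationExpression ObligationId="urn:example:obligation:redact-sections" FulfillOn="Permit">
      <AttributeAssignmentExpression AttributeId="urn:example:obligation:sections">
        <AttributeValue DataType="http://www.w3.org/2001/XMLSchema#string">2,5,8</AttributeValue>
      </AttributeAssignmentExpression>
    </ObligationExpression>
  </ObligationExpressions>
</Rule>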

JM: The ability to make authorization decisions in the above scenario requires the ability to describe an organizational structure. This scenario not only applies to the carrier-to-agency relationship but could be equally applicable for internal applications such as procurement, where you may have a rule that someone two job grades above you must approve all expenses. Could you describe in more detail how XACML can support hierarchical constructs?

GG: To answer the question, it’s important to use the right resource model (from the XACML hierarchical resource profile). If the hierarchy is represented using “ancestor attributes” (§2.3), then there won’t be enough information to identify the manager two levels up. What is needed is a richer hierarchical model, e.g. using XML documents (§2.1), URIs (§2.2), or a slight modification of §2.3 that adds an attribute explicitly identifying a “grandparent” resource (or manager).

If the hierarchy is represented using an XML document, then the policy would use an AttributeSelector with an XPath expression that can easily pick a node two levels above any other. The same goes for an ‘n’ degree relation where ‘n’ is a constant known at policy-authoring time. If the degree ‘n’ is dynamically provided in the form of some XACML attribute, then this might be harder to achieve, and the individual case would have to be analyzed before coming up with a recommendation.
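As a sketch, assuming the reporting chain is supplied as XML content in the resource category (the document structure and attribute IDs below are invented), a condition could compare the approver’s subject-id against the node two levels up via an AttributeSelector:

<!-- Hypothetical org document carried as XML <Content> in the resource category:
     <org><employee id="bob"><manager id="ann"><manager id="sue"/></manager></employee></org> -->
<Apply FunctionId="urn:oasis:names:tc:xacml:1.0:function:string-is-in">
  <!-- The approver's identity... -->
  <Apply FunctionId="urn:oasis:names:tc:xacml:1.0:function:string-one-and-only">
    <AttributeDesignator Category="urn:oasis:names:tc:xacml:1.0:subject-category:access-subject"
                         AttributeId="urn:oasis:names:tc:xacml:1.0:subject:subject-id"
                         DataType="http://www.w3.org/2001/XMLSchema#string" MustBePresent="true"/>
  </Apply>
  <!-- ...must be the manager two levels above the requesting employee -->
  <AttributeSelector Category="urn:oasis:names:tc:xacml:3.0:attribute-category:resource"
                     Path="//employee/manager/manager/@id"
                     DataType="http://www.w3.org/2001/XMLSchema#string" MustBePresent="false"/>
</Apply>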

In practice, it may not suffice to simply use the base hierarchical resource profile. Other solutions may be needed – for example, using richer PIPs that massage the information into a format that facilitates policy authoring. [1]

JM: Let’s look at the scenario of an independent insurance agent and how they may access a given insurance carrier’s claims administration systems. The carrier may have an authorization rule that states any agent can access information for all policyholders for which they are the agent of record.

Taking this one step further, when an insurance agent purchases workers’ compensation insurance for their own business, the agent is in the role of both agent and policyholder – and without the right authorization model, they may end up with conflicting access rights. When an otherwise authorized employee of the agency needs to file a workers’ compensation claim for themselves, other employees of the agency should not be able to view the claims of their coworker.

GG: This scenario can also be modeled in XACML policy provided that all the necessary attributes are available. To turn around your example 180 degrees, when an agency employee views the status of their own worker’s compensation claim, they should only be able to see their own records and not the records of fellow employees. Of course in performing normal work tasks, agency employees should also see any client records that they would otherwise have access to. Ideally, worker’s compensation claim records should be tagged with an additional attribute to indicate the claim is for an agency employee as opposed to a claim from a customer.

JM: A big challenge in getting this right is to make sure that you modeled identity correctly. Historically, many systems would have modeled an agent, an employee policyholder and a claimant as distinct entities. Today, we have to think about them more as personas or roles that are more dynamic in their usage. The party model would be a better modeling approach in this regard.

GG: Ideally, if your system has a proper identity model, then implementing sound authorization models becomes easy. On the chance that your identity model is less normalized, you can use the PIP interface to accomplish the same goal of first detecting whether two distinct entities are the same. For example, a request may come into the PDP containing only the employee ID attribute, but the PDP recognizes that it must look up additional attributes before evaluating the policy. The employee ID can be used as the index to look up additional attributes on the user – possibly the SSN, department number, cost center, etc. – in a directory or HR database.

Stay tuned for part two…


[1] Thanks to my colleague Pablo Giambiagi for providing input to this question