Archive for the ‘Authorization’ category

A Closer Reading of the NIST report on ABAC

September 24, 2014

On October 1st, I will host a webinar that focuses on the NIST Special Publication 800-162, Guide to Attribute Based Access Control (ABAC) Definition and Considerations, published in January 2014. I highly recommend the report for anyone who has responsibility for, or an interest in, authorization technologies and approaches.

The NIST report is a seminal event for the industry, as it is NIST's first report on this topic. Many organizations, public and private, look to NIST for guidance on a wide range of IT topics. Having a NIST document on ABAC is a strong signal that this is a technology worthy of further examination and exploration.

In this webinar, I’ll walk through key parts of the report and add comments based on our experiences at Axiomatics. I hope to see you there and look forward to your comments and questions. Please register for the webinar here.

Talking about authorization w/ Gunnar Peterson

June 10, 2013

It’s always great to catch up with Gunnar Peterson and discuss the latest in externalized authorization. There was quite a bit of ground to cover since our last blog post series and here is the transcript:

Gunnar Peterson:
The thing that strikes me about XACML and ABAC is that it's really different from other security standards. Usually when we talk about an authentication or crypto protocol, we talk about strength, threat models, and the strength of the security service itself. It's inward focused. It seems to me that the value of XACML and ABAC is really in the use cases that they enable. It's outward focused, and unlocks value through new kinds of services. What kinds of use cases have you seen recently where XACML and ABAC are enabling companies to build things better?

Gerry Gebel:
You are correct to point out that XACML feels different from other identity and security standards. XACML is inwardly focused on the application resources it is assigned to protect through the use of its policy language – there is more than just a schema, token format or DIT to work with.

There are a couple of recent customer use cases that I'd like to briefly describe, as they are typical of the kind of requirements we see. In the first case, the organization holds a lot of data for customers in different industries and wishes to provide access to different slices of that data via a combination of APIs and web services. In this case, it's API access primarily for mobile devices and web services for other client applications. Specific business rules dictate what data customers can view or what APIs/web services they can call. Integrating an XACML service at the API/web services gateway layer is a non-intrusive way to implement the right level of data sharing and enable new business models for the organization.

The other case study example is for an organization that is building a new data hub service, where certain users can publish data to the hub and others will subscribe to the feeds. Due to the sensitive nature of the information, granular access control was important for the new service. In this case, the designers wanted a flexible policy-based model to control access, rather than hardcoding it into the application.

GP:
Interesting use cases, let's drill down on these. First, as to the gateway – I am a fan of web services gateways; they are a no-brainer for implementing identity, access control, dealing with malicious code and so on. Authorization (beyond coarse grained) requires a little bit more thought. How have you seen companies approach getting the right level of granularity to take advantage of XACML policies at the gateway level? In other words, given that a gateway has less context than the application layer, what is the hook for the policy to be able to intelligently make authorization decisions outside the app, as it were?

GG:
You are correct to point out that you can only make as granular an access decision as the context that is provided to the policy decision point (PDP). In this case, the call from the gateway to the PDP may just contain something like: subject: Alice, action: view, client_record: AD345. The PDP can enhance the situation by looking up more information about Alice before processing the access request – her department, role, location, etc. In addition, the PDP can look up information about the client record – is it assigned to the same location or department as Alice? With this approach, you can still make pretty granular access control decisions, even though you don't have a lot of context coming in with the original access request from the gateway.
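To make that concrete, here is a minimal, self-contained sketch of the enrichment step. The PIP lookups are simulated with in-memory maps, and all class, attribute and rule names are hypothetical rather than any vendor's API:

```java
import java.util.Map;

// Hypothetical sketch of PDP-side attribute enrichment -- not a vendor API.
// The gateway's PEP sends only subject, action and resource; the PDP pulls
// everything else from its policy information points (PIPs), simulated
// here with in-memory maps.
public class EnrichmentSketch {

    // Stand-in for a PIP backed by an HR directory.
    static final Map<String, String> SUBJECT_PIP =
            Map.of("department", "Cardiology", "location", "Boston");

    // Stand-in for a PIP backed by the client-record system.
    static final Map<String, String> RESOURCE_PIP =
            Map.of("assignedDepartment", "Cardiology");

    public static void main(String[] args) {
        // What actually arrives from the gateway:
        String subject = "Alice", action = "view", record = "AD345";

        // Enrichment step: the PDP fetches the attributes it needs.
        String subjectDept = SUBJECT_PIP.get("department");
        String recordDept = RESOURCE_PIP.get("assignedDepartment");

        // Example rule: permit "view" when the record belongs to the
        // subject's own department.
        boolean permit = "view".equals(action) && subjectDept.equals(recordDept);
        System.out.println(subject + " -> " + record + ": "
                + (permit ? "Permit" : "Deny"));
    }
}
```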

GP:
Right, so it's a case of roles being necessary but not sufficient?

GG:
Roles are usually only part of the equation and certainly not adequate on their own for granular authorization scenarios.

GP:
Here is one I wrestle with – naming. On the user/subject side it's pretty simple: we have LDAP, we have AD, and everyone knows their user names and login processes. But what about the resource side? It seems less clear, less consistent, and less well managed once you get beyond URL, URI, ARN and the like. What trends are you seeing in resource naming and management, and how does this affect XACML projects?

GG:
Indeed, naming conventions and namespaces for subject attributes are prevalent, but they are lacking for other attribute types, in particular for resources. One approach to address naming for resources is to publish an XACML profile, whereby you can establish standard names for at least a subset of attributes. We see this being done today in the Export Control and Intellectual Property Protection profiles. Some firms in the financial services industry are also examining whether XACML profiles can be defined to support certain cross-firm interactions, such as trade settlement.

Otherwise, ABAC implementers should approach this task with a consistent naming convention and process to ensure they end up with a resource namespace that is manageable to implement and operate.
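For illustration only, a resource namespace under such a convention might look like the following. These URNs are invented for the example; they are not identifiers from any published profile:

```java
// Invented resource-attribute identifiers that follow one consistent URN
// convention (illustrative only; not from any published XACML profile).
public final class ResourceAttributeIds {
    public static final String RESOURCE_TYPE =
            "urn:example:xacml:resource:resource-type";
    public static final String OWNING_DEPARTMENT =
            "urn:example:xacml:resource:owning-department";
    public static final String CLASSIFICATION =
            "urn:example:xacml:resource:classification";

    private ResourceAttributeIds() { }
}
```

Whatever the exact scheme, the point is to fix it early and apply it everywhere, so resource attributes stay as predictable as subject attributes.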

GP:
I had always looked at XACML as something that helps developers, but it appears to have a role to play in areas like DevOps too. I have seen a few examples where XACML services delegate some administrative functions, such as spinning up Cloud server instances and lower-level configuration. For decentralized environments, where admin tasks (which are very sensitive and need to be audited) can be handled by different teams and even different organizations, this kind of granular policy control seems like a very good fit. It gave me a new perspective on where and how XACML and ABAC might fit. Have you seen these types of use cases?

GG:
Normally we are dealing with application resources, but we have had cases where IT uses XACML to control access to DevOps kinds of functions. As you have pointed out, the XACML policy language can be quite useful in a number of areas where granular access control is important.

GP:
Developers and security people fundamentally lack good (read: any) testing tools for authorization bugs. Static analysis and black box scanning tools are all the rage (and serve a useful purpose in security bug identification): when you scan your app they can find all manner of SQL injection, XSS and other pernicious problems. But at the same time, you can cut those same tools loose on an app that's riven with thousands of authZ vulnerabilities and they will often come back green! I am pretty sure this is a major factor contributing to the numerous authorization vulnerabilities we see.

I think even just a first-cut, 1.0 implementation with XACML and ABAC is a huge leg up towards formalizing some of the authZ structure so that real test cases can be developed and run. This makes it simpler for the developer to avoid authZ mistakes, since they can continually test against a defined policy instead of dumb-scanning against something where the tools cannot differentiate between authorized and unauthorized states. What are your thoughts on authZ testing?

GG:
We get a lot of questions about testing the policies in an ABAC system and there are many ways to address this requirement.

1. At the policy authoring stage, there is the requirement to perform initial unit testing – does this policy I am writing operate the way I expect it to? We provide this simulation capability so you don't have to run the application to see the outcome of a policy, and it includes a trace facility so you can explore exactly how the policy was evaluated (this is a big help in debugging policies as well). Unit tests can be captured in scripts for future use, such as when the application or access policies change.

2. Positive and negative test cases: You are correct to point out that developers can test against a defined policy, such as: cardiologists can view and update records of heart patients. We refer to this as a positive test; that is, does the policy allow doctors that are labeled cardiologists to view heart patients' medical records? But there are other conditions to test for that may be characterized as negative tests. For example, given a set of ABAC policies, is there any way a non-cardiologist can update a heart patient's record? For these kinds of scenarios, you can build additional test scripts or use an advanced policy analysis tool.

3. Gap analysis testing: Another advanced function is to test for any possible gaps in the policy structure. But again, as you pointed out, having a specific set of access policies to test against makes the process easier. In this manner, you could test for separation of duty scenarios that violate policy: is there any combination of attributes that permits a user to create and approve a purchase order?
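As a rough illustration of the second point, here is a self-contained sketch of positive and negative tests. The evaluate() helper is a hypothetical stand-in for a real PDP call, and the toy cardiologist policy is hard-wired so the example runs on its own:

```java
// Sketch of positive and negative policy tests against a hypothetical
// evaluate() helper that wraps a PDP call; the method name and the
// hard-wired cardiologist policy are illustrative, not a vendor API.
public class PolicyTests {

    // Stand-in for a call to the PDP with (role, action, resourceType).
    static boolean evaluate(String role, String action, String resource) {
        // Policy under test: cardiologists can view and update
        // heart-patient records; nobody else can update them.
        boolean cardiologist = "cardiologist".equals(role);
        boolean heartRecord = "heart-patient-record".equals(resource);
        return cardiologist && heartRecord
                && ("view".equals(action) || "update".equals(action));
    }

    public static void main(String[] args) {
        // Positive test: the policy allows what it should allow.
        assert evaluate("cardiologist", "update", "heart-patient-record");

        // Negative tests: no non-cardiologist path to an update.
        assert !evaluate("radiologist", "update", "heart-patient-record");
        assert !evaluate("nurse", "update", "heart-patient-record");

        System.out.println("All policy tests passed (run with java -ea).");
    }
}
```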

GP:
In my opinion, there are concrete benefits from being able to make more granular authZ decisions, audit policies and configure rather than code authZ, but as a security guy the testing piece all by itself is a game changer. This is just such a big gap in so many systems today and a large source of “known unknown” kind of bugs, ones that can be but often aren’t found and closed.

Ok last question – is XACML dead? This is your cue to tee off.

GG:
Far from it. I've witnessed a significant increase in demand for XACML solutions over the last few years; the OASIS technical committee <https://www.oasis-open.org/committees/tc_home.php?wg_abbrev=xacml> is actively working on new profiles (after version 3.0 of the core spec was formally ratified earlier this year); and new vendors have entered the market. There is a big emphasis on further improving the standard for consumption by the developer community, a key constituency if the industry is going to escape the cycle of hard-coding authorization inside applications. Some of the standardization efforts worth noting are profiles to define a REST interface for the authorization service as well as JSON encoding of the XACML request and response formats. These two enhancements should greatly broaden the appeal of the XACML authorization standard. Further, Axiomatics recently joined the OpenAz <http://www.openliberty.org/wiki/index.php/OpenAz_Main_Page> project to help update and improve this developers' API.

XACML: Alive and Well

May 8, 2013

The latest hyperbolic headline from our friends in the analyst community is brought to you by Andras Cser of Forrester, who proclaims that XACML is dead. Naturally, we at Axiomatics disagree since we have invested many years of effort at OASIS to develop and support the standard. The timing of this post is also interesting in that XACML version 3.0 was just formally ratified earlier this year and the Technical Committee is actively working on new profiles to support a REST interface as well as JSON encoding of the request/response formats – two features that will significantly expand the appeal to a wider developer audience. Let’s walk through this and address some of the statements that Andras makes:

Conversations with vendors and IT end users at Forrester’s Security lead us to predict that XACML (the lingua franca for centralized entitlement management and authorization policy evaluation and enforcement) is largely dead or will be transformed into access control

I am not sure what you mean here, Andras, as XACML already does access control.

Here are the reasons why we predict XACML is dead:

Lack of broad adoption. The standard is still not widely adopted with large enterprises who have written their authorization engines.

While XACML has not hit the mass market, we continue to see increased adoption across many industries. Organizations that have written their own authorization engines are investigating commercial alternatives, due to the cost of maintaining home-grown systems and keeping up with growing requirements.

Inability to serve the federated, extended enterprise. XACML was designed to meet the authorization needs of the monolithic enterprise where all users are managed centrally in AD. This is clearly not the case today: companies increasingly have to deal with users whose identities they do not manage.

This is not correct on multiple levels. First, XACML was designed to meet the needs of service oriented architectures – which are, by definition, not monolithic in architecture or deployment patterns.

Second, the XACML standard never mandated that all users be managed centrally in AD or any other repository. Some products may have this limitation, but it is a vendor choice to do so. In fact, the policy information point is specifically defined to retrieve attributes or metadata from heterogeneous, distributed sources.

Finally, the XACML architecture naturally supports federated environments because access decision making and policy enforcement can be deployed centrally or in a distributed approach to cater for performance and other operational preferences. In fact, one of the simplest ways to achieve a hybrid IAM strategy for the cloud is to leave AD in the corporate enterprise and use authorization to communicate access control decisions.

PDP does a lot of complex things that it does not inform the PEP about. If you get a ‘no, you can’t do that’ decision in the application from the PEP, you’d want to know why. Our customers tell us that this can prove to be very difficult. The PEP may not be able to find out from the complex PDP evaluation process why an authorization was denied.

Actually, you can optionally communicate context about the decision using Advice or Obligation statements – part of the XACML standard. In version 3, these statements can contain variables and are very useful for communicating additional information to the PEP. Some examples are to redirect the user to a stronger authentication page, tell the user they have an insufficient approval limit, or tell the user they are not assigned to the patient so they can’t see the health record.

Keep in mind, many situations specifically require that the PEP not know why the access failed, because it could leak information for an attacker. Firewalls and network access control solutions are examples of this.
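Where returning context is appropriate, a PEP might act on advice along these lines. This is a minimal sketch with illustrative, invented advice identifiers; the XACML specification defines the Advice mechanism but not these particular URNs or classes:

```java
// Sketch of a PEP acting on Advice returned with a Deny. The Decision
// record and the advice identifiers are invented for this example.
public class AdviceHandlingSketch {

    record Decision(String result, String adviceId) { }

    static void handle(Decision d) {
        if ("Permit".equals(d.result())) {
            System.out.println("Proceed with the request.");
            return;
        }
        // The PDP attached machine-readable advice explaining the Deny.
        switch (d.adviceId()) {
            case "urn:example:advice:step-up-auth" ->
                System.out.println("Redirect to stronger authentication.");
            case "urn:example:advice:approval-limit-exceeded" ->
                System.out.println("Tell the user their approval limit is too low.");
            case "urn:example:advice:not-assigned-to-patient" ->
                System.out.println("Tell the user they are not assigned to this patient.");
            default ->
                System.out.println("Deny without further detail.");
        }
    }

    public static void main(String[] args) {
        handle(new Decision("Deny", "urn:example:advice:step-up-auth"));
    }
}
```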

Not suitable for cloud and distributed deployment. While some PEPs can bundle the PDP for faster performance, using a PEPs in a cloud environment where you only have a WAN link between a PDP and a PEP is not an option.

The modular architecture of XACML is absolutely suitable for cloud and other kinds of distributed deployment scenarios. The fact that major components such as the PEP, PDP and policy authoring are decoupled means you can deploy them in many configurations. Embedding the PDP with the PEP and application is one option, but you can also co-locate a standalone PDP with the app for better performance. As with on-premise deployments, implementers have to consider the latency between PEP and PDP and attribute retrieval. Cloud scenarios may present some challenges in reference data synchronization or retrieval, but many options are available to address them.

Commercial support is non-existent. There is no software library with PEP support. Major ISVs have not implemented externalized authorization or plugin frameworks for externalized authorization. Replacing native SharePoint authorization with an Entitlement Management PEP is a nightmare requiring a one-off, non-standard, non-repeatable development and operations process.

I acknowledge that, as an industry, we have not adequately addressed the ISV industry with sufficient tooling to externalize authorization. As a result, we continue to see the creation of ‘new legacy’ applications that are difficult to manage and operate from an IAM perspective. Axiomatics has recently joined and contributed to the OpenAz project in an effort to meet these requirements.

Regarding SharePoint, we agree that a PEP-to-PDP model is difficult to implement for this platform, which is why we have taken a different approach.

Refactoring and rebuilding existing in-house applications is not an option. Entitlement Management deployment requires a refactoring of the application to use the PEP hooks for centralized, externalized authorization. This is not a reality at most companies. They cannot just refactor applications because of a different authorization model (sometimes, especially with mainframe applications the authorization model is not even understood well enough to do this…)

Another point of agreement: Most existing applications will not be rewritten to implement an externalized authorization approach. However, there are ways to integrate with existing applications without changing the application’s code by using filters or proxies, for example.

Additionally, many organizations are exposing existing applications by building API or web services layers – this is the perfect integration point for incorporating externalized access control.

OAuth supports the mobile application endpoint in a lightweight manner. XACML today largely supports web based applications. While OAuth’s current profiles are not a full-blown replacement for XACML functionality, we see that OAuth’s simplicity made it the de-facto choice for mobile and also non-mobile applications.

OAuth and XACML are not mutually exclusive, but they certainly have their respective strengths and weaknesses. Again, I will point to the REST and JSON profiles for XACML that are currently under development at OASIS – these profiles will make XACML-based systems more easily integrated with mobile and other lightweight platforms.

Part Two: Software Development Lifecycle (Development)

May 31, 2012

This is a continuing conversation with James McGovern, who is lead Enterprise Architect for HP Enterprise Services and whose focus is providing bespoke enterprise applications to the insurance vertical. The conversation to date has been about how entitlements should be conceptualized along the SDLC (part 1). The topic we will cover in this dialog centers on concerns that arise after IT architects have performed high-level architecture and need to hand off to development teams. My colleague Felix Gaehtgens also provided valuable input to the discussion.

JM: Generally speaking, the need for entitlements management tends to be on the radar of savvy information security professionals who realize that they need to invest more time in protecting enterprise applications and the data they hold, over simply twiddling with firewalls, SSL and audit policies that look for whether a third party has a clean desk policy and whether their number two pencils are sharpened. When security people know nothing about software development and software development people don't know anything about security, then bad things can happen. Today's conversation will be a small attempt at connecting these two concerns. Are you game?

GG: Definitely. I also see a disproportionate amount of time and budget dedicated to security apparatus that does not address the specific security, business or compliance rules that an enterprise must enforce. To do that, you need to address security and access control concerns within the business application directly.

JM: A developer has received the mockups for a user interface from the graphics team and now has to turn them into code using JSPs and Servlets. In this particular tier, how should they incorporate entitlements into the pages, and how can they do it en masse if they have hundreds of pages to develop?

FG: That’s an excellent question. Access control can and should happen on multiple layers. As you mention a user interface, that is a good point to control access to individual user interface components. For example: a button might start a particular transaction. Is this user authorized to carry out that transaction? If not, then the button should perhaps not be displayed. We can even think of fine-grained access control here. Suppose you are displaying a list of customer accounts to a user. What details should be visible? Should you perhaps hide some columns?

When we do access control in a holistic manner, we can obviously not stop at the presentation layer. You mentioned servlets here. A servlet operation is another type of action that can be authorized. May this function be executed on this servlet by this user in this particular context? This again is a good question. Let's assume the user is authorized. What happens then? The servlet probably does some things, perhaps retrieving some data, perhaps kicking off a call to some back-end service. As the servlet does its thing, there are other steps that would need to be authorized within the execution code of the running servlet. None of this is actually new. If we look at existing code, we see a lot of "if thens" that check whether something is allowed to happen. What architects should be vigilant about is the fact that having all these "if thens" causes problems down the line. What if the business policies change? What if new regulations come into force? How can you actually audit what is happening? Because of this, it is important to consider moving access control to a separate layer and externalizing authorization.
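Here is a small before/after sketch of that idea. The authorize() helper is hypothetical and hard-wired so the example runs standalone; a real implementation would build an XACML request and send it to a PDP:

```java
// Before/after sketch: a hard-coded "if then" versus a call to an
// externalized authorization layer (authorize() is a hypothetical
// wrapper around a PDP request, not a real API).
public class ExternalizedAuthzSketch {

    // Hard-coded check: every policy change means a code change.
    static boolean canStartTransactionHardcoded(String role, String dept) {
        return role.equals("manager") && dept.equals("payments");
    }

    // Externalized check: the decision logic lives in the policy server,
    // so the application only asks the question.
    static boolean authorize(String subject, String action, String resource) {
        // In a real deployment this would marshal an XACML request to a
        // PDP; it is hard-wired here so the sketch runs standalone.
        return "alice".equals(subject) && "start".equals(action)
                && "payment-transaction".equals(resource);
    }

    public static void main(String[] args) {
        // UI layer: only render the button if the user may use it.
        boolean showButton = authorize("alice", "start", "payment-transaction");
        System.out.println(showButton ? "render button" : "hide button");
    }
}
```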

JM: Developers will also develop reusable web services whenever possible that can be leveraged not only by their enterprise application but others as well. How should they think about incorporating entitlements into a service-oriented architecture?

FG: Hooking entitlements into a service-oriented architecture is actually quite painless. The easiest way – without modifying code – would be to use interceptors that check whether a particular transaction is authorized. This also makes the services simpler because authorization is moved into its own layer.
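A minimal sketch of such an interceptor, written as a servlet filter acting as a PEP. Here askPdp() is a hypothetical placeholder for a real PDP client, and the filter relies on the Servlet 4.0 default no-op init()/destroy() methods:

```java
import java.io.IOException;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Sketch of an interceptor-style PEP for a servlet-hosted web service:
// the filter authorizes the call before it reaches the service code.
// init()/destroy() use the Servlet 4.0 default no-op implementations.
public class AuthorizationFilter implements Filter {

    @Override
    public void doFilter(ServletRequest req, ServletResponse res,
                         FilterChain chain) throws IOException, ServletException {
        HttpServletRequest http = (HttpServletRequest) req;
        String user = http.getRemoteUser();      // subject
        String action = http.getMethod();        // e.g. GET, POST
        String resource = http.getRequestURI();  // service endpoint

        if (askPdp(user, action, resource)) {
            chain.doFilter(req, res);            // authorized: continue
        } else {
            ((HttpServletResponse) res)
                    .sendError(HttpServletResponse.SC_FORBIDDEN);
        }
    }

    // Hypothetical placeholder for a callout to an authorization service.
    private boolean askPdp(String user, String action, String resource) {
        return user != null;                     // illustrative only
    }
}
```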

JM: There are a variety of ways to develop web-based applications ranging from Spring, Struts, Django, etc and each of them come with some sort of security hook functionality. How do I configure this to work with entitlements?

FG: These frameworks support authorization, to a certain degree. Unfortunately, though, that authorization is typically quite coarse-grained. In Spring, for example, you can authorize access to a class. But if this class implements a lot of logic by itself, Spring doesn't help you do these "micro-authorizations", or fine-grained authorization. So it's likely going to be a lot of "if thens" within those classes. The best approach would be to externalize both the coarse-grained and the fine-grained authorizations. But if for any reason that is not practical, then the coarse-grained authorization can already be done through the framework by talking to an externalized authorization layer, such as a XACML policy decision point (PDP).
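As one hedged example of such a hook, Spring Security's PermissionEvaluator interface can delegate its checks to an external PDP. Only the interface itself is Spring API; the PdpClient class and its decide() method are invented for this sketch:

```java
import java.io.Serializable;
import org.springframework.security.access.PermissionEvaluator;
import org.springframework.security.core.Authentication;

// Sketch of hooking Spring Security's PermissionEvaluator to an external
// PDP so hasPermission() checks are no longer hard-coded per class.
public class PdpPermissionEvaluator implements PermissionEvaluator {

    @Override
    public boolean hasPermission(Authentication auth,
                                 Object targetDomainObject, Object permission) {
        // Delegate the decision to the externalized authorization layer.
        return PdpClient.decide(auth.getName(),
                String.valueOf(permission), String.valueOf(targetDomainObject));
    }

    @Override
    public boolean hasPermission(Authentication auth, Serializable targetId,
                                 String targetType, Object permission) {
        return PdpClient.decide(auth.getName(),
                String.valueOf(permission), targetType + ":" + targetId);
    }

    // Hypothetical PDP client; a real one would marshal an XACML request.
    static class PdpClient {
        static boolean decide(String subject, String action, String resource) {
            return false; // deny by default in this sketch
        }
    }
}
```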

JM: Being an Enterprise Architect who codes and knows security, I have observed throughout my career that many enterprise applications from a code perspective tend to centralize authentication but spread authorization in almost every module. What guidance do you have for both new and old applications in this regard?

FG: For new code, you have the option of externalizing authorization from the start. There are several ways to do this. Aspect-oriented programming can help automate some of this. You can also implement your own permissions checker interface and then hook that into either a local implementation or an externalized XACML authorization service at run-time, so that it gives you all of the flexibility. There is no perfect answer for all cases, as it really depends on how you are writing your code. Wherever in your code you would otherwise do the hard-coded “If thens” to check whether something should be authorized or not, you should be calling an authorization function. If you can create certain “control points”, then you make your life easier. If you have some other points where you need to authorize, use simple APIs to make a call-out to an authorization service.
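A sketch of that permissions-checker idea, with a local and a remote implementation swappable at run-time. All of the names are invented for the example:

```java
// Sketch of the "own permissions checker interface" approach: code calls
// one interface everywhere, and either a local or a remote XACML-backed
// implementation is plugged in at run-time (all names hypothetical).
public class PermissionCheckerSketch {

    interface PermissionChecker {
        boolean isPermitted(String subject, String action, String resource);
    }

    // Simple local implementation, useful for tests or standalone runs.
    static class LocalChecker implements PermissionChecker {
        public boolean isPermitted(String s, String a, String r) {
            return "read".equals(a); // toy rule: everyone may read
        }
    }

    // Remote implementation would call out to an authorization service.
    static class RemoteXacmlChecker implements PermissionChecker {
        public boolean isPermitted(String s, String a, String r) {
            // Placeholder: marshal an XACML request and query the PDP here.
            throw new UnsupportedOperationException("wire to PDP");
        }
    }

    public static void main(String[] args) {
        PermissionChecker checker = new LocalChecker(); // swap at run-time
        System.out.println(checker.isPermitted("alice", "read", "acct:42"));
    }
}
```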

For old applications, you will need to check where you can “hook in” the authorization. Perhaps there are some control points where you can install interceptors, inject dependencies, or wrap existing classes. If this is not possible, you might be able to intercept data flows coming in or out of a module, and do your authorization there.

JM: Within my enterprise application, I may have built up a “profile” of the user that contains information I would have retrieved post authentication from a directory service. What is the best practice in using this information to make authorization decisions?

GG: The design issue you are raising is whether the PEP should do attribute lookups or if we should rely on the PDP to perform this function. Generally speaking, it is more efficient for the PDP to look up attributes. Mostly this is because the PDP determines what policies will be evaluated and is able to fetch only the additional attributes it needs for policy evaluation. The PEP is not aware of what policies are going to be evaluated, and therefore may waste processing cycles retrieving attributes that will not be used. That extra processing time could be substantial when considering network time for the retrieval, parsing the response, and converting data to XACML attributes.

However, in your case it appears that the application is collecting attribute data for the profile in its normal course of operation. It seems these attributes can be forwarded to the PDP in the access request without compromising response time performance. There may be other cases where the attributes are in close proximity to the application and it is better for the PEP to do the lookup.

Each scenario and use case should be analyzed, but our starting position would be to have the PEP include attributes it has already collected and to let the PDP look up the rest through its PIP interface. Attribute retrieval is really an externality for the application and should be left to the authorization service. It is also important to consider what happens when policies change. If too much attribute handling is done by the application, it may require additional code changes to accommodate policy changes. If the developer relies on the authorization service to deal with attribute management, then he/she gets the additional benefit of fewer (if any) code changes when the access policies must be adjusted.
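A tiny sketch of that starting position: the PEP forwards what it already holds, and anything missing is left for the PDP to resolve through its PIPs. The attribute names are illustrative only:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch: the PEP forwards profile attributes it already holds and lets
// the PDP resolve the rest through its PIPs (names are illustrative).
public class ForwardProfileAttributes {
    public static void main(String[] args) {
        Map<String, String> request = new HashMap<>();
        request.put("subject-id", "alice");
        request.put("action-id", "view");
        request.put("resource-id", "claim:991");

        // Already collected at login -- no need for the PDP to re-fetch.
        request.put("subject.department", "claims");
        request.put("subject.role", "adjuster");

        System.out.println("Access request sent to PDP: " + request);
        // Attributes not present (e.g. the resource owner) are looked up
        // by the PDP via its policy information point interface.
    }
}
```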

JM: Another form of reuse that many enterprise applications should consider, but are not currently implementing, is the notion of supporting multiple tenants. Today, an enterprise may take an application and deploy it redundantly instead of keeping a single instance and allowing multiple tenants to live within it. If I wanted to show development leadership in this regard, how can entitlements help?

GG: Applications have multiple layers or integration points where you must consider authorization for a multi-tenant configuration – this also applies to single-tenant applications. As you described earlier, access policies need to be applied at the presentation and web services or API layers. Beyond this, you have the data layer, typically a database, to consider. It is likely that enterprises deploy multiple instances of an application and its database because they cannot adequately filter data per tenant with current technologies or approaches. With an XACML entitlements system, you can enforce row-, column- and field-level access controls – providing consistent enforcement of entitlements from presentation to web service to the database. Axiomatics builds specific database integrations (such as Oracle, Microsoft and others), but customers can also use the API to integrate with their preferred SQL coding mechanisms. We think this is a less costly AND more secure solution than what can be purchased from Oracle, for example.
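To illustrate the data-layer idea in the simplest possible terms, here is a hedged sketch in which the authorization layer returns a filter predicate rather than a plain permit/deny. Real products rewrite SQL far more carefully (including column masking), and filterFor() is entirely hypothetical:

```java
// Sketch of policy-driven data filtering for a multi-tenant table: a
// WHERE predicate derived from the access policy is appended to the
// query (illustrative only; filterFor() is a hypothetical callout).
public class RowFilterSketch {

    // Hypothetical: the authorization layer returns a filter condition
    // for this subject instead of a plain permit/deny.
    static String filterFor(String subject) {
        return "tenant_id = 'acme' AND region = 'northeast'";
    }

    public static void main(String[] args) {
        String base = "SELECT account_id, balance FROM accounts";
        String query = base + " WHERE " + filterFor("alice");
        System.out.println(query);
    }
}
```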

With the approach just described, enterprises can get some economies of scale by deploying fewer application instances – I know there are reports out there about idle CPU time in data centers. Hopefully this also reduces the operational burden by managing fewer instances, but the operations center has to know more detail about which user communities or customer groups each application is supporting.

JM: Our corporation has been breached on more than a few occasions by wily hackers. Every time this happens, the security forensic jamboree blows their trumpets really loudly, asking for assistance in determining what happened. They attempt to reactively walk through log files. To me, this feels like a ceremonial failure. Can entitlements management make those information security people disappear so that I can focus on developing code that provides business value without listening to their forensic whining?

GG: Audit logs of what HAS happened will always be important when attempting to analyze a breach, incident or even for extreme troubleshooting. I think it can be helpful to investigators if there are fewer access logs to examine – here a central authorization service can provide a lot of benefit. A central authorization system that serves multiple applications gives you a single audit stream and single audit file format. It also relieves developers from at least some of the burdens of security logging – although there may be requirements to log additional context that the authorization system is not aware of.

There is also a proactive side of this coin: what CAN users access in an application? It seems that, as an industry, we have long struggled to definitively answer auditor questions such as, "Who can update accounting data in the general ledger system?" or "Who can approve internal equity trades when the firm's accumulated risk position reaches a certain threshold?" First, there is a fundamental failure in application design when business owners, auditors and security officers alike cannot easily answer these questions. Why is it still acceptable to build and buy applications that actually increase the operational risk for an organization? Second, many identity management technologies have only served to mask the problem and, ultimately, enable it to continue. For example, user provisioning systems were initially thought to be capable of managing access and entitlements for business applications. It turns out that they are relatively good at creating user accounts, but have limited visibility into application entitlements – those are managed by local admin teams. Access governance tools have a better view of entitlements, but it remains difficult to get a complete view when authorization logic is embedded in the application code.

With XACML policies implemented, auditors can test specific access scenarios to confirm enterprise objectives are being met. A policy language is an infinitely richer model for expressing access control policies than ACLs, group lists, or roles. Finally, you can specifically answer those auditor questions of who can access or update applications, transactions, or data.

XACML and Dynamic Access Control in Windows Server 2012

May 25, 2012

Microsoft has introduced a significant feature enhancement to Windows Server 2012: Dynamic Access Control (DAC). This is a big upgrade from the access control lists (ACLs) used in previous generations of Windows Server, giving enterprises a richer and more flexible authorization model at their disposal. The new functionality gives enterprises tools to more effectively control access to the vast amounts of data in Windows file shares, while complying with business, security and compliance policies. You can find an excellent introduction to Dynamic Access Control here, and I expect Microsoft to publish much more information as we get closer to the GA date for Windows Server 2012.

At Axiomatics, we have added a new feature to our core XACML engine – Axiomatics Policy Server – so that XACML authorization policies can be converted into a format recognized by the DAC function in Windows Server 2012. To implement DAC, Microsoft uses Security Descriptor Definition Language, or SDDL. The Axiomatics feature automatically translates XACML policies into SDDL format and loads the policies into your Windows Server 2012 Active Directory.

There are several benefits to the Axiomatics integration that will enhance Windows Server 2012 deployments, including:

  • Leverage a central authoritative source of access policies: XACML access policies that are implemented across other applications in the enterprise can now be applied to Windows Server environments.
  • Manage and control access to file server resources more easily: Policy languages such as XACML provide a more direct and flexible model for managing access to vast amounts of data spread across hundreds or thousands of servers.
  • Meet audit and compliance requirements more easily: An externalized and authoritative source for access policies means you have fewer places to audit and certify the access controls for critical applications and data.
  • Report on who has access: Axiomatics provides advanced reporting tools to fully explore and validate your access control policies.
  • Consistently enforce access across applications and platforms: Enable your Windows Server 2012 to participate in a broader, central authorization service. In this mode, enterprises can ensure a consistent level of policy enforcement across the environment – based on the single, authoritative source of access policies.
  • Best runtime performance: Windows Server 2012 performance is not impacted, since its normal internal access control mechanism is being utilized – there is no callout to an external authorization engine. This gives enterprises the best performance possible, but also provides the assurance that access control is being implemented according to centrally managed policies.
  • Increase value of your XACML investment: Integration with platforms such as Windows Server 2012 or Microsoft SharePoint 2010 extends the reach of your XACML authorization system.

If you are planning to visit Microsoft TechEd 2012, please stop by our booth in the partner pavilion for a demonstration.

Part One: Software Development Lifecycle (Architecture and Design)

April 5, 2012

Last year, James McGovern – who previously was Chief Security Architect for The Hartford and is now the lead Enterprise Architect for HP focused on insurance – and I held several discussions (Part 1, Part 2, Part 3) on using entitlements management within the insurance vertical. Now that we are in a new year, we have decided to revisit entitlements management from the perspective of the software development lifecycle.

JM: Historically speaking, a majority of enterprise applications were built without regard to modern approaches to either identity or entitlements management. At the same time, there is no published guidance by either the information security community or industry analysts in terms of how not to repeat past sins. So, let’s dive into some of the challenges a security architecture team should consider when providing guidance to developers on building applications securely. Are you game?

GG: Definitely! I think it remains an issue that applications are still being built without a modern approach to identity or entitlements – we see many cases where developers make their own determinations on how to best handle these tasks. Security architects and enterprise architects have long professed the desire to externalize security and identity from applications, but this guidance has an uneven track record of success.

JM: The average enterprise is not short of places to store identity. One common place where identity is stored is within Active Directory. However, infrastructure teams generally don’t allow for extending Active Directory for application purposes. So, should architects champion having a separate identity store for enterprise applications or somehow find a way to at least centralize application identity?

GG: Attribute management and governance is a key element to an ABAC (attribute based access control) approach. You might expect that one source of identity data is ideal, but that is not the reality of most deployments. Identity and other attribute data is distributed between AD, enterprise directories, HR, databases, CRM systems, supply chain systems, etc. The important thing is to have a process for policy modeling that is aware of and accommodates the source of attributes that are used in decision making.

For example, some attributes are derived from the session and application context, captured by the policy enforcement point (PEP) code and sent to the policy decision point (PDP) with the access request. The PDP can look up additional attributes through a policy information point (PIP) interface. The PIP is configured to connect with authoritative sources of information, which could be additional information about the user, resource or environment.

JM: While I haven’t ran across an enterprise that has gotten a handle on identity, I can also say that many security architecture professionals haven’t figured out ways to stitch together identity on the fly either. If we are going to leave identity distributed, what should we consider?

GG: I am a proponent of a distributed model as the starting point for this issue. That is, identity data should be stored and managed in close proximity to its authoritative source. In a distributed approach such as this, data accuracy should be better than if it is synchronized into a central source. Others will argue for data synchronization, and it is important when performance requirements call for a local copy of data. Therefore, performance, latency and data volatility are all issues to consider.

JM: What if an enterprise application currently assumes that authentication occurs by taking a user-provided token and comparing it to something stored within the application's database? Many shops deploy web access management (WAM) technologies such as Yale CAS, CA SiteMinder, etc., where they centralize authentication and pass around session cookies, but may not know, from an identity perspective, why this is not a complete solution.

GG: A few things come to mind here. First, a WAM session token is proprietary and therefore has a number of limitations in the areas of interoperability, support in multiple platforms, etc.

Second, there is the issue of separation of concerns. From an architectural perspective, I strongly believe in having an approach that treats authentication separate from authorization concerns. One of the main benefits is the ability to adjust your authentication scheme to meet the rapidly changing threats that we see emerging on a daily or weekly basis. If authentication is tightly coupled with another identity component, then an organization is severely limiting its ability to cope with security threats.

Finally, authentication should be performed at the identity domain that is most familiar with the user. Said another way, each application does not and should not store a credential for users. Federation standards permit the user to authenticate at their home domain and present a standardized token to applications they may subsequently access.

JM: Have you ever been to a website where they ask you to enter your credentials and they don't provide you with any cues as to what form the credential comes in? For example, is it a user ID or an email address? A person may have multiple unique identifiers. Is it possible to use entitlements management as a centralized authenticator for an enterprise application in this scenario?

GG: My initial thought is “no” based on my comments regarding separating authN and authZ above. There are also security reasons for not giving the user a hint about the credential – to reduce the attack surface for someone trying to compromise the site.

However, there may be cases where a web site wishes to permit the use of multiple unique identifiers for authentication. Once you get to the authorization step, will you still have all the necessary user attributes available? Do you need to map all the identifiers to the attribute stores? You can end up making the authorization more complex than it needs to be.

JM: If you have ever witnessed how enterprise applications are developed, they usually start out with the notion of two roles, where the first role is a user and the second is the administrator. The user can do a few things and the administrator can do anything. Surely we need something more fine-grained than this if we want to improve the security of enterprise applications. What guidance could you provide in terms of modeling roles?

GG: There are different levels of roles that should be defined for any given application:

  • Security Administrator: Their only purpose is to manage and potentially assign entitlements.
  • System Administrator: They just manage the application or platform but don't deal with entitlements.
  • User Role: These are the regular users that will interact with the system.

I definitely would start with the security administrator role – this role deals with managing entitlements and access policies and assigning these to users – they should not have access to the data, transactions or resources within the application. The system administrator role's functionality should be constrained to managing the application, such as configuring the system, starting/stopping the application, defining additional access roles (see below) or other operational functions that are not associated with the business application. This is a vast departure from the super-user model, where there is a root account with complete access to everything on a system, which ends up being a security and audit nightmare.

Third, you can define a user role that permits an individual to log in to the application but with very limited capabilities. Here is where ABAC/XACML comes in to give you the granularity required. Access rules can define what functions a user role can perform as well as what data they can perform functions on. With this kind of dynamic capability, you can enforce rules such as: managers can view payroll data for employees in their department.
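That payroll rule reduces to a simple attribute comparison. The sketch below shows the logic a PDP would evaluate – it is not XACML syntax, and the attribute names are illustrative:

```java
// The "managers can view payroll data for employees in their department"
// rule reduced to an attribute comparison (a sketch of the logic a PDP
// evaluates, not XACML syntax; attribute names are illustrative).
public class PayrollRuleSketch {

    static boolean permit(String role, String userDept,
                          String action, String recordDept) {
        return "manager".equals(role)
                && "view".equals(action)
                && userDept.equals(recordDept); // same-department constraint
    }

    public static void main(String[] args) {
        System.out.println(permit("manager", "finance", "view", "finance")); // true
        System.out.println(permit("manager", "finance", "view", "sales"));   // false
    }
}
```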

JM: I had the opportunity in my career to be the lead architect for many once popular and now defunct Internet startups during the dot-com era. At no time do I remember anyone ever inquiring about a standard for what a resource naming convention should look like. Even today, many enterprise applications have no discernible standards as to what a URL should look like. Now that we have portals and web services, this challenge is even more elusive. I know that web access management technologies use introspection techniques and are therefore suboptimal in this regard. Does entitlements management provide a potential solution, and if so, what constructs should we consider in designing new enterprise applications?

GG: The XACML policy language includes a namespace and naming convention for attributes, policies, etc. This helps to organize the system and also to avoid conflicts in the use of metadata. It is also possible to incorporate semantic web approaches or ontologies to manage large and complex environments – we are seeing some customers interested in exploring these capabilities.

JM: I have heard Gunnar Peterson use an analogy in a testing context that makes me smile. He once stated, testing through the UI is like attempting to inspect the plumbing in your basement by peering through your showerhead. This seems to hint that many applications think of security only through the user interface. Does entitlements management provide the ability to define a security model that is cohesive and deals with all layers of an enterprise application?

GG: Absolutely, this is one of the strengths of the XACML architecture. You can define all the access rules that an XACML policy server will enforce – and install policy enforcement points (PEP) at the necessary layers of an application. These are typically installed at the presentation, application and data tiers or layers. Such an approach is important because you have a different session context at each layer and may have different security concerns to address, but the organization needs to ensure that a set of access rules are consistently enforced throughout the layers of the application. Further, individual services or APIs can be secured as they are used on their own or in mash-up scenarios.

You get the additional benefit of a consolidated access log for all layers of the application. All access successes and failures are available for reporting, investigations or forensic purposes.

JM: Some enterprises are moving away from thinking in terms of objects towards thinking in terms of business processes. How should a security architect think about applying an entitlements-based approach to BPM?

GG: I recall writing some years ago that BPM tools could facilitate the creation of application roles – it’s very interesting that you now ask me about BPM and entitlements! But it’s a logical question. BPM tools help you map out and visualize the application, have the notion of a namespace, resources, and so on. At least a couple of places where entitlements and authorization rules can be derived are within BPM activities as well as when you have an interaction with an activity in another swim lane.

JM: Enterprises are also developing mobile applications that allow their consumers to access services, pay bills and conduct business transactions. It goes without saying that a mobile application should have the same security model or at least adhere to the same security principles as an internally hosted web application. What are some of the entitlements considerations an architect should think about?

GG: There are several considerations that come to mind, but let’s address just a few of them here.

  • Do you need to limit functionality or data download for mobile devices? This can be enforced in your access policies.
  • Do you need to control what functions/buttons/content is displayed on the screen? This is commonly done for access via non-mobile browsers.
  • Do you need to support offline mode or deal with low-bandwidth connections (insert your least favorite carrier here)? In this case, you may need to support long-lived entitlements or access decisions, as opposed to the normal transactional model for XACML systems – see the caching sketch after this list.
  • Where is the data? How much data is stored on the mobile device? Is the data stored in the cloud? The answers to these questions help to determine how the authorization solution is architected.
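On the offline/low-bandwidth point, one common pattern is a time-limited decision cache. Below is a minimal sketch with hypothetical names; choosing a safe TTL and invalidating cached decisions on policy change are the hard parts in practice:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Sketch of a time-limited decision cache for low-bandwidth or offline
// use: a decision is reused until it expires instead of calling the PDP
// on every request (all names hypothetical).
public class DecisionCacheSketch {

    record CachedDecision(boolean permit, long expiresAtMillis) { }

    private final Map<String, CachedDecision> cache = new ConcurrentHashMap<>();
    private static final long TTL_MILLIS = 15 * 60 * 1000; // 15 minutes

    boolean isPermitted(String subject, String action, String resource) {
        String key = subject + "|" + action + "|" + resource;
        CachedDecision hit = cache.get(key);
        if (hit != null && hit.expiresAtMillis() > System.currentTimeMillis()) {
            return hit.permit();                 // reuse the cached decision
        }
        boolean decision = askPdp(subject, action, resource);
        cache.put(key, new CachedDecision(decision,
                System.currentTimeMillis() + TTL_MILLIS));
        return decision;
    }

    // Placeholder for the real callout to the authorization service.
    private boolean askPdp(String s, String a, String r) {
        return "view".equals(a);                 // illustrative only
    }
}
```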

Part Three: Enterprise Authorization Scenarios with James McGovern

April 6, 2011

Here is the third installment in a series of conversations I have had with James McGovern, enterprise architect extraordinaire. In this post, we expand the scope from insurance scenarios to include some broader enterprise contexts for externalized authorization.

JM: Over the last couple of years, I have had lots of fascinating conversations with Architects in Fortune enterprises regarding their Identity Management deployment and several common themes have emerged including that while they could do basic provisioning of an account in Active Directory, they couldn’t manage to successfully provision many enterprise applications due to challenges that go beyond simplistic identity. Can XACML help get them to the next stage of maturity?

GG: Your question reminds me of the latest round of commentary regarding the futility of a provisioning approach to identity management. At the latest Gartner IAM summit in particular, speakers were lamenting the state of the provisioning market and how little progress has been made over the last 10 years. At the heart of the problem is the fact that, in the vast majority of deployments, provisioning tools just don't have visibility into application privileges and entitlements. Instead, provisioning deployments tend to "skim the surface" by managing userIDs/passwords, but defer deep entitlement settings to the target application or platform. Of course, the most difficult applications to manage are an issue because they don't properly externalize identity management functions – making provisioning deployments more expensive as well as less than optimal.

Enter the “pull” model espoused by my former colleague, Bob Blakley. The basic premise of the pull model is that identity data is resolved at runtime by calling the appropriate service. If a user accesses an application before authenticating, redirect them to an authentication service. If a user accesses the application with a known token, redirect to the token service for proper handling. When the user attempts to perform a protected function, an authorization service should be called for evaluation.

As the reader may have surmised, the more an application externalizes identity – the less provisioning is required. Instead of provisioning accounts and entitlements to every application, a smaller number of authoritative identity service points are provisioned that can be leveraged by many applications. COTS applications would come preconfigured with policies for entitlements and authorization, instead of using a proprietary, embedded approach. To extend this further, access controls for COTS applications from different vendors can be implemented consistently – without excess access “leaks” – if they share a centralized access control model.

Therefore, the ability to centrally describe the authorization model of an enterprise application would help. The challenge of identity management would significantly change in a number of ways. For example, enterprises would need to establish which identity services they would provision for the purposes of authentication – and which established, external identity providers they would consume. Authoritative attribute sources would fall into the same category. Finally, authorization policy modeling and management skills would become more prominent so that a normalized view could be attained across the enterprise.

JM: I remember conversations with my former boss at The Hartford where he asked me to explain the value proposition of identity management. He didn’t understand the value of spending millions of dollars for a system to tell him that James McGovern is still an employee. After all, he would know whether he fired me or not. What he wanted to know is what James McGovern could access if he decided to fire me. More importantly, even being in the role of Chief Security Architect, I couldn’t always figure out what I had access to.

GG: Sure, your boss would know whether he fired you or not – but what about all those independent insurance agents we’ve discussed in previous scenarios? Dealing with hundreds, thousands or millions of users and managing what they have access to is what drives organizations to spend significant sums on identity management. That said, there is often a budget imbalance because internal systems are more complex and expensive to operate than the applications serving external constituencies.

Determining what resources a particular user has access to, or who has access rights to a resource are questions that auditors, as well as system administrators, want answers to. Administrators need to know this detail so they can properly set up access for a new employee, contractor, customer, etc. Of course they also need to know this information so de-provisioning can also occur when the relationship is terminated. Auditors and regulators are responsible for ensuring that the organization is following internal business and security policies, as well as regulations or laws they may be subject to.

Current practices, where identity and policy are embedded in each business application, have proven to be very inefficient when attempting to audit the environment. It is not unusual for large organizations to have several hundred or a few thousand applications – imagine trying to audit such an environment on a regular basis if the identity information and policy are not externalized. The situation can be utterly insane if each application has its own store of user data, because then you have a synchronization and reconciliation challenge. Herein lies one of the main value propositions of externalizing authorization: audit-ability and accountability are much easier to accomplish because you have a central place where policies are defined and maintained (although the enforcement of those policies can certainly be distributed). Further, when you combine externalized authorization with identity governance, you can achieve even more visibility and transparency into access controls.

JM: Many Architects have enterprise initiatives to reduce the number of signons to enterprise applications a user has to provide in any given day. Once they have implemented the solution to carry a token that provides seamless access between systems, they now discover that they have an authorization problem. What role should XACML play in an identity strategy?

GG: Sounds like that famous Vegas game, whack-a-mole. As soon as you think you have solved one problem, a new one appears… The scenario you describe can occur if the architects have not fully mapped out their strategy or understood the full consequences (intended or otherwise) of the architecture. If you move to a tokenized authentication approach (like SAML), then you have accomplished two worthy goals: reduced signon for users and fewer systems to provision a credential to.

However, as you point out, the application still needs to do authorization in some way. This could be accomplished if the application retains some kind of user store and keeps entitlement or personalization attributes about the user – at least the application is not storing or managing a credential. Thinking back to the issue of hundreds or thousands of applications, this doesn’t sound like a good solution for a number of reasons.

The preferred approach, if you have externalized authentication, is to also externalize authorization and utilize an XACML system. When the user presents their SSO token of choice to the application, it can call out to an XACML policy engine (such an engine could also be embedded in the application for lowest latency) for the authorization decision. This is the approach we see more and more organizations taking.

JM: The average Fortune insurance enterprise may have hundreds of enterprise applications, where maybe only 20% are commercial off-the-shelf (COTS) products. Vendors such as Oracle, IBM and SAP will be providing out-of-the-box integration with XACML in future releases of their ERP, CRM, BPM, ECM and Portal products. Other large enterprise vendors, however, seem to be missing in action. How do I fill in the gaps?

GG: This is where you need to rely on your authorization vendor to provide PEPs for COTS applications that don’t directly support XACML. In some cases, you can use a proxy, custom PEP code or an XML gateway (such as from Layer 7, Intel or Vordel) to intercept calls to the application. In other cases, a hybrid approach is necessary because the application cannot operate unless users are provisioned into certain roles or groups.

Ultimately application customers have a lot of influence with their vendors on what standards should be supported. Enterprises should use what leverage they have to encourage XACML adoption where appropriate – that leverage could come in the form of willingness to buy the application if standards are supported vs. building it internally if the required standards are not included.

JM: Many corporations have SoX controls not only around IT systems but also physical security. Does XACML have a value here?

GG: There is definitely a use case where XACML authorization is the policy engine for converged physical/logical access systems. We are seeing some interest in this capability in certain defense scenarios and are working with a physical access vendor on some prototypes. The idea is that access decisions are determined not only based on typical logical access rules, but also based on where you are located. For example, the batch job for printing insurance claim checks will only be released once I have badged into the print room.

JM: So far, the conversation around identity has dominated many of the industry conferences and analyst research. The marketplace at large is blissfully ignorant to the challenges of managing entitlements within an enterprise context. What do you think will be the catalyst for this getting more airtime in front of IT executives?

GG: I think there are significant challenges that are forcing the issue of entitlements to the surface.

  1. The need to share: an organization’s most valuable and sensitive data is precisely the data that partners, customers, and suppliers want access to. The business imperative is to share this data securely.
  2. Overexposure of data: The counterpoint to the first item is that too much data is exposed – that is, sharing of sensitive data must be very granular so that the proper access is granted, but no more.
  3. Sensitivity of data: We are in an era of seemingly continuous incidents of personal data release – either accidentally or due to poor security controls. Insurance companies collect lots of personal data such as what car you drive, where you work, what valuables you have in your home, medical information, insurance policy data, workers compensation data, etc. All this data needs to be protected from improper disclosure.
  4. Moving workloads to the cloud: Regardless of all the hype around cloud computing, there is a strong drive to utilize the capabilities of this latest computing movement. What’s particular to entitlements surfaces in at least two areas. First, it is almost impossible to move workloads out of their traditional data center if entitlements and other IdM functions are “hard wired” into the application, because the application will cease to function. Second, once applications are moved to the cloud, you need to have a consistent way to enforce access – regardless of where the applications and data are hosted. This cries out for a common entitlement and authorization model that can be applied to all resources.

JM: I have some really wild scenarios in the back of my head on how XACML could enable better protections in relational databases, be used for implementing user privacy rights in enterprise applications, and even serve as a way to provide digital rights management. What are some of the more novel uses of XACML that are on your radar that you think the information security community should be thinking of?

GG: The XACML policy language and architectural model are incredibly flexible and applicable to many business scenarios. Databases pose a particular challenge, but there are certainly creative ways to address this, and it would be great to explore some of your ideas. Privacy scenarios have their own challenges because you can have legal restrictions on PII as well as user preferences to accommodate. At Axiomatics, we always welcome input from potential customers on their most challenging authorization scenarios to see how we can meet their requirements.

Biographies for James and Gerry:

James McGovern

James McGovern is currently employed by a leading consultancy and is responsible for defining next-generation solutions for Fortune enterprises. Most recently he was employed as an Enterprise Architect for The Hartford. Throughout his career, James has been responsible for leading key innovation initiatives. He is known as an author of several books focused on Enterprise Architecture, Service Oriented Architectures and Java software development. He is deeply passionate about topics including web application security, social media and agile software development.

James is a fanatic champion of work/life balance, corporate social responsibility and helping make poverty history. James heads the Hartford Chapter of the Open Web Application Security Project (OWASP) and contributes his information security expertise to many underserved non-profits. When not blogging or twittering, he spends time with his two kids (six and nine) who are currently training to be world champions in BMX and Jiu-Jitsu.

Gerry Gebel, President Axiomatics Americas

As president, Gerry Gebel is responsible for sales, customer support, marketing, and business development for the Americas region. In addition, he will contribute to product strategy and manage partner relationships for Axiomatics.

Prior to joining Axiomatics, Gebel was vice president and service director for Burton Group’s identity management practice. Gebel authored or contributed to more than 70 reports and articles on topics such as authorization, federation, identity and access governance, user provisioning and other IdM topics. Gebel has also been instrumental in advancing the state of identity-based interoperability by leading demonstration projects for federation, entitlement management, and user-centric standards and specifications. In 2007, Gebel facilitated the first ever XACML interoperability demonstration at the Burton Group Catalyst conference.

In addition, Gebel has nearly 15 years' experience in the financial services industry, including architecture development, engineering, integration, and support of Internet, distributed, and mainframe systems.

