Archive for the ‘Authorization’ category

Have it your way

March 1, 2011

Recent conversations with prospective customers have made me think of the longtime Burger King slogan, “have it your way”. For Burger King, it was a way to offer an alternative to the one-size-fits-all menu of its competitors – chiefly McDonald’s. In most fast food restaurants, it is difficult to make modifications to your order – you have to take it the way the restaurant makes it. Don’t like pickles on your burger? Too bad, take them off yourself.

Does the same situation apply when buying enterprise software and middleware? I am afraid so. By now you are all familiar with the paternal tone used by large vendors when they describe “their” vision, strategy and architecture. “They” know best and customers should just follow obediently. Of course, doing so also potentially locks the customer in for the long term, with the associated high license and maintenance costs.

However, customers are starting to push back. One architect said it best this way, “I appreciate that you (Axiomatics) adjust your solution to our preferred architecture, unlike other vendors that attempt to force our business model into their architecture.” That’s right, there is a choice. Standards such as XACML, SAML, and web services along with new delivery models, such as cloud platforms, are giving enterprises more ways to deploy applications and connect with their partners. Enterprises are finding that they can’t be beholden to a particular vendor’s architecture, but must work with vendors that can accommodate and adjust to the rapidly changing needs of their business operations.


Part One: Insurance Authorization Scenarios with James McGovern

February 16, 2011

In my past role as an Industry Analyst at Burton Group, I used to have frequent conversations with James McGovern, who at the time was Chief Security Architect for The Hartford and is now a Director with Virtusa, where he focuses on Enterprise Architecture and Information Security. Recently, we had a dialog on applying XACML in an industry vertical context. This exchange was inspired by similar conversations I had with Gunnar Peterson, where we discussed the applicability of XACML-based solutions to some more general security scenarios. For readers new to XACML, you can find some additional information elsewhere on this blog as well as at http://www.axiomatics.com. Below is a transcript of our conversation…

JM: Let’s dive into three different scenarios using examples from insurance where making proper authorization decisions is vital, and understand how XACML can provide value.

GG: That sounds great James, thanks for bringing up these industry specific examples so we can have a discussion of XACML based systems in that context.

JM: Let’s jump into the first scenario. An independent insurance agent will do business with an insurance carrier through a variety of channels. One method is to visit the carrier’s website that is dedicated to independent insurance agents. The carrier may use web access management (WAM) products to secure the website. Another method may be to conduct transactions from their agency management system, which is either installed in their data center (large agencies) or hosted in a SaaS manner (small agencies). The agency management system may create XML-based transactions that are sent to the carrier’s XML gateway for processing. Yet another method would be for the agent to conduct a transaction via telephone using interactive voice response (IVR) systems.

In all three scenarios, the independent insurance agent may execute transactions such as requesting a quote. It is vital not only that each individual channel remain secure, but also that, viewed through the lens of business security, all of the channels have the same security semantics.

GG: First, I will not address the authentication challenge across these multiple channels and will focus on authorization only. With an XACML-based system, you can indeed implement and enforce the same policies across multiple channels. In the example you cite above, here is where the policy enforcement points (PEPs) would be inserted:

  1. Web access management tier: At this level, let the WAM system do what it does best – manage authentication and the user session. For authorization, WAM integration with an XACML PDP can be implemented in multiple ways. For example, the WAM policy server can call out to the PDP (acting like a PEP), or an XACML-specific PEP can be installed at the application (website) to handle authorizations.
  2. Agency management system: If the on-premises AMS and SaaS AMS are both accessed via an XML gateway, then the gateway acts as the PEP and enforces policies that are evaluated by the PDP. XML gateways are a great way to secure web services because most (all?) of them support the SAML profile for XACML or can integrate with an XACML vendor’s API.
  3. IVR system: This one could be a bit trickier, but the idea is that a PEP can be built for most any environment. If the IVR vendor permits it, then a Java or .NET PEP can be developed pretty quickly to connect with an XACML PDP.

There are many deployment options for where PDPs are installed or policies are managed, but the bottom line is that resources accessed through multiple channels can be protected by a common set of policies and authorization infrastructure.
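
To make this concrete, below is a minimal sketch of the kind of call each channel’s PEP could make. It assumes a hypothetical PDP endpoint that accepts a raw XACML 2.0 request context over HTTP; the URL, subject, resource and action values are illustrative, and a real deployment would more likely use the SAML profile for XACML or a vendor API.

```python
# Minimal PEP sketch: build an XACML 2.0 request context and ask a PDP for a decision.
# The PDP endpoint is hypothetical; values are illustrative only.
import urllib.request
import xml.etree.ElementTree as ET

PDP_URL = "https://pdp.example.com/authorize"  # assumption: carrier-hosted PDP service

REQUEST_TEMPLATE = """<?xml version="1.0" encoding="UTF-8"?>
<Request xmlns="urn:oasis:names:tc:xacml:2.0:context:schema:os">
  <Subject>
    <Attribute AttributeId="urn:oasis:names:tc:xacml:1.0:subject:subject-id"
               DataType="http://www.w3.org/2001/XMLSchema#string">
      <AttributeValue>{subject}</AttributeValue>
    </Attribute>
  </Subject>
  <Resource>
    <Attribute AttributeId="urn:oasis:names:tc:xacml:1.0:resource:resource-id"
               DataType="http://www.w3.org/2001/XMLSchema#string">
      <AttributeValue>{resource}</AttributeValue>
    </Attribute>
  </Resource>
  <Action>
    <Attribute AttributeId="urn:oasis:names:tc:xacml:1.0:action:action-id"
               DataType="http://www.w3.org/2001/XMLSchema#string">
      <AttributeValue>{action}</AttributeValue>
    </Attribute>
  </Action>
  <Environment/>
</Request>"""

def is_permitted(subject: str, resource: str, action: str) -> bool:
    """Send the same request regardless of channel (WAM, XML gateway, IVR PEP)."""
    body = REQUEST_TEMPLATE.format(subject=subject, resource=resource, action=action)
    req = urllib.request.Request(PDP_URL, data=body.encode("utf-8"),
                                 headers={"Content-Type": "application/xml"})
    with urllib.request.urlopen(req) as resp:
        result = ET.fromstring(resp.read())
    ns = {"x": "urn:oasis:names:tc:xacml:2.0:context:schema:os"}
    return result.findtext(".//x:Decision", namespaces=ns) == "Permit"

# Example: an agent requests a quote for a policyholder, via any channel.
# print(is_permitted("agent-1234", "policyholder/98765", "request-quote"))
```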

JM: The IVR scenario is just one example of authorization issues that occur in a telephony environment. In the investment community, there is the notion of a “Chinese Wall,” where an investment firm, for regulatory reasons, may need to prevent phone conversations between individuals in different departments – for example, to keep an employee working on mergers and acquisitions from sharing non-public information with those in the trading department.

GG: XACML is also integrated across a variety of channels at banks – employee accounts are marked as such to enforce access policies, provide employee discounts, etc. XACML isn’t just valuable for web sites, web services and IVRs; it can also work with instant messaging applications, trading turrets and email to support the concept of Chinese Walls or other regulatory requirements.

JM: Let’s look at another scenario. A large insurance broker may employ hundreds of insurance agents that interact with multiple insurance carriers on a daily basis. From a financial perspective, the broker would like the insurance carriers to provide up-to-the-minute details on commissions from selling insurance products. The challenge is that the insurance carrier may need to understand the organizational structure of the insurance broker so as not to provide information to the wrong person. For example, one insurance broker may organize by region (e.g. north, south, east, west), another may organize around size of customer (e.g. large, medium, small), while another still may organize around the types of products sold (e.g. personal, commercial, wealth management, etc). In this scenario, the broker may want the managers of each region to see only their own information, and not that of their peers in other regions.

The ability of an insurance broker to dynamically describe its authorization model to a foreign system at runtime becomes vital to conducting business.

GG: The flexibility of an attribute based access control (ABAC) model, such as the XACML policy language, is very useful in this scenario. From the insurance carrier perspective, it is quite easy to represent the various policies that need to be implemented for each broker. In XACML, attributes are defined in four categories (you can also define additional categories): subject, action, resource, and environment. For the broker organized by region, information such as north, south, etc. is passed as subject attributes. Data such as “large customer” or “commercial” is passed as resource attributes to the PDP (either via the PEP or through the PIP interface). The carrier’s PDP will evaluate requests based on its defined policies to determine whether access is permitted or denied. Further, the PDP can also send an obligation back to the PEP with the decision – for example, read access to the commission report is granted, but redact sections 2, 5 and 8.
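
As a rough illustration of how a PEP might honor that kind of obligation, here is a small sketch. The obligation identifier and its “sections” argument are hypothetical – they would be defined by the carrier’s policies, not by the XACML standard.

```python
# PEP-side sketch: honor a hypothetical "redact sections" obligation returned
# alongside a Permit decision. Identifiers below are illustrative, not standard.

def enforce(decision: dict, report: dict) -> dict:
    """decision: parsed PDP response, e.g.
         {"decision": "Permit",
          "obligations": [{"id": "urn:example:obligation:redact-sections",
                           "sections": [2, 5, 8]}]}
       report: {section_number: section_text}
    """
    if decision["decision"] != "Permit":
        raise PermissionError("access denied by PDP")

    redacted = dict(report)
    for obligation in decision.get("obligations", []):
        if obligation["id"] == "urn:example:obligation:redact-sections":
            for section in obligation["sections"]:
                redacted[section] = "*** REDACTED ***"
        else:
            # XACML semantics: a PEP that cannot fulfil an obligation
            # attached to a Permit must not grant access.
            raise PermissionError(f"cannot fulfil obligation {obligation['id']}")
    return redacted

# Example: a regional manager reads a commission report; sections 2, 5 and 8
# are redacted as instructed by the PDP.
report = {1: "Summary", 2: "Other regions", 5: "Peer commissions", 8: "National totals"}
decision = {"decision": "Permit",
            "obligations": [{"id": "urn:example:obligation:redact-sections",
                             "sections": [2, 5, 8]}]}
print(enforce(decision, report))
```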

JM: The ability to make authorization decisions in the above scenario requires the ability to describe an organizational structure. This scenario not only applies to the carrier-to-agency relationship but could be equally applicable to internal applications such as procurement, where you may have a rule that the manager two job grades above you must approve all expenses. Could you describe in more detail how XACML can support hierarchical constructs?

GG: To answer the question it’s important to use the right resource model (from the hierarchical resource profile). If the hierarchy is represented using “ancestor attributes” (§2.3), then there won’t be enough information to identify the manager two levels up. What is needed is a richer hierarchical model, e.g. using XML documents (§2.1), URIs (§2.2) or a slight modification of §2.3 to add an attribute that explicitly identifies a “grandparent” resource (or manager).

If the hierarchy is represented using an XML document, then the policy would use an AttributeSelector with an XPath expression that can easily pick a node two levels above any other. The same goes for an ‘n’-degree relation where ‘n’ is a constant known at policy-authoring time. If the degree ‘n’ is dynamically provided in the form of some XACML attribute, then this might be harder to achieve and the individual case would have to be analyzed before coming up with a recommendation.
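
As a small illustration of the “two levels up” XPath idea, here is a sketch against a toy org-chart document. The document structure is made up for the example; lxml is used because it supports the ancestor axis that such an AttributeSelector expression would rely on.

```python
# Sketch: picking the node two levels above a given resource when the
# hierarchy is represented as an XML document (hierarchical resource
# profile, section 2.1). Org chart and IDs are illustrative.
from lxml import etree

ORG_CHART = """<organization>
  <employee id="vp-operations">
    <employee id="regional-manager-east">
      <employee id="agent-1234"/>
    </employee>
  </employee>
</organization>"""

doc = etree.fromstring(ORG_CHART)
requester = doc.xpath('//employee[@id="agent-1234"]')[0]

# "Two job grades above you": the second-nearest ancestor on the employee axis.
approver = requester.xpath('ancestor::employee[2]')[0]
print(approver.get("id"))  # -> vp-operations
```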

In practice, it may not suffice to simply use the base hierarchical resource profile. Other solutions may be needed – for example, using richer PIPs that massage the information into a format that facilitates policy authoring. [1]

JM: Let’s look at the scenario of an independent insurance agent and how they may access a given insurance carrier’s claims administration system. The carrier may have an authorization rule that states any agent can access information for all policyholders for whom they are the agent of record.

Taking this one step further, when an insurance agency purchases workers’ compensation insurance for its own business, an agent can end up with conflicting access rights without the right authorization model, because the agent is now in the role of both agent and policyholder. When an otherwise authorized employee of the agency needs to file a workers’ compensation claim for themselves, other employees of the agency should not be able to view their coworker’s claims.

GG: This scenario can also be modeled in XACML policy provided that all the necessary attributes are available. To turn around your example 180 degrees, when an agency employee views the status of their own worker’s compensation claim, they should only be able to see their own records and not the records of fellow employees. Of course in performing normal work tasks, agency employees should also see any client records that they would otherwise have access to. Ideally, worker’s compensation claim records should be tagged with an additional attribute to indicate the claim is for an agency employee as opposed to a claim from a customer.

JM: A big challenge in getting this right is to make sure that you modeled identity correctly. Historically, many systems would have modeled an agent, an employee policyholder and a claimant as distinct entities. Today, we have to think about them more as personas or roles that are more dynamic in their usage. The party model would be a better modeling approach in this regard.

GG: Ideally, if your system has a proper identity model, then implementing sound authorization models becomes easy. On the chance that your identity model is less normalized, you can use the PIP interface to accomplish the same goal of first detecting whether two apparently distinct entities are in fact the same. For example, a request may come into the PDP containing only the employee ID attribute, but the PDP recognizes that it must look up additional attributes before evaluating the policy. The employee ID can be used as the index to look up additional attributes on the user – possibly the SSN, department number, cost center, etc. – in a directory or HR database.
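
Here is a rough sketch of that PIP-style lookup. The attribute names and the in-memory “HR database” are stand-ins for a real directory or database reached through the product’s PIP interface.

```python
# Sketch of a PIP-style attribute resolver: the request arrives with only an
# employee ID, and the PDP (through its PIP interface) looks up the remaining
# privilege-granting attributes before evaluating policy. The dictionary below
# stands in for an LDAP directory or HR database.

HR_DATABASE = {
    "emp-1001": {"department": "claims", "cost_center": "CC-42",
                 "personas": ["agency-employee", "policyholder"]},
    "emp-2002": {"department": "underwriting", "cost_center": "CC-17",
                 "personas": ["agency-employee"]},
}

def resolve_subject_attributes(request_attributes: dict) -> dict:
    """Return the request's subject attributes enriched from the HR source."""
    enriched = dict(request_attributes)
    employee_id = enriched.get("employee-id")
    if employee_id and "department" not in enriched:
        # The employee ID is the index used to fetch the remaining attributes.
        enriched.update(HR_DATABASE.get(employee_id, {}))
    return enriched

# Example: the PEP only knew the employee ID; the PIP fills in the rest.
print(resolve_subject_attributes({"employee-id": "emp-1001"}))
```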

Stay tuned for part two…


[1] Thanks to my colleague Pablo Giambiagi for providing input to this question

 

Talking authorization with Gunnar Peterson

December 15, 2010

Gunnar Peterson and I had a discussion about why authorization should start to receive more attention in the infosec industry. He feels that most infosec pros are overemphasizing authentication and that it’s time to look more toward authorization. Since I now work for Axiomatics, I couldn’t agree more :-). Here is a transcript of the conversation:

GP: Authentication gets so much attention in security – for example, there are dozens of authentication types supported by SAML. This is due to the many, many attempts that people have made at improving authentication over the years, but authentication is really a guess. There are ways to make better or worse guesses, but once that guess is bound to a principal, the game changes. Authorization is mainly about solving puzzles (not guessing), so it seems to me that infosec as a whole should spend more time getting their “puzzle logic” implemented right, ensuring that authorization rules have the coverage, depth and portability they need. Why is it that people have been so easily seduced by authentication, and what can we do to get people to focus less on this quixotic pursuit and more on solvable problems like authorization?

GG: The focus on authentication, in many ways, is justifiable because much authorization was embedded within the authentication process. If you can authenticate to the application, then you have access to all the information – in a general sense. This approach is manifested in many legacy applications, early security perimeter strategies, and first-generation portals. In today’s environment, authorization approaches must be much more discriminating due to regulatory, privacy, or business requirements. Further, the number of applications or resources that are not shared with an outside party has essentially been reduced to zero – the most valuable assets are the ones of most interest to your partners and customers. This transition from “need to know” to “need to share” is shifting the focus to authorization, and I believe we are seeing the early signs of enterprises placing more focus on it. Ultimately, authentication remains important because we still need to know who is accessing the information – not necessarily their full identity, but enough privilege-granting attributes about them.

 

GP: What is driving this shifting focus towards authorization? How much is driven by the management-of-sharing problem present in today’s applications – Web applications, Web services, mobile and cloud – where architectures are so distributed and occasionally connected that they are forced to authenticate in one place and authorize in another? And how much is driven by the applications themselves becoming more sophisticated, with more functionality, data and layers that require more fine-grained authorization? Can general purpose frameworks like XACML help in both the technology architecture and the language expressiveness, or are different patterns required?

GG: There are many forces at work that are changing the perspectives on how identity management functionality and services should be implemented. First, it is quite logical to have authentication taking place completely disconnected from the application or data resource – think of the federated model. Second, applications are much more sophisticated than what was being developed even a few years ago. The level of complexity and granularity must be met by an equally sophisticated and comprehensive authorization scheme. Finally, XACML is well suited to meet the complex requirements we are referring to. XACML is a mature standard (work on it began in Feb 2000) that is comprised of a reference architecture model, policy language and request/response protocol. The XACML architecture is well suited to protect application resources whether they are centralized in your data center or distributed across the data center, private clouds, public clouds, partners, etc. And the core XACML policy language, plus profile extensions such as the hierarchical resource profile, is capable of modeling very complex business rules that will address the vast majority of use cases.

 

GP: Bob Blakley’s paper on “The Emerging Architecture of Identity Management” from earlier this year described three architecture models – first a traditional Push model, then a more dynamic future state based on a Pull model, and a third hybrid model to help enterprises make incremental progress from Push to Pull. The pure Pull model looks to solve what I regard as the single biggest security issue we face today – poor integration. Can you discuss how XACML-based architectures play a role in these Push and Pull models? Is XACML applicable in all models, or are some more in its sweet spot than others?

GG: In fact, Bob includes XACML in his emerging architecture – referring to products of this class as ‘XACMLoids’. The primary value XACML brings is in externalizing authorization from business applications – “Externalized Authorization Managers” is another excellent report that Bob has recently written on this topic.

In the Push model, XACML systems have a smaller role since identity data is synchronized with or pushed to the application specific identity repositories. In the Pull model, applications call out to the XACML service for authorization decisions using the XACML request/response protocol. If additional attributes are needed to make an authorization decision, the PDP engine can retrieve attributes through a virtualization service. XACML works equally well in the hybrid model – the difference here is that the application does need to persist some identity information in a local repository. That said, the sweet spot for XACML-based systems is likely the pure Pull model or the hybrid scenario.

 

GP: So, beyond the XACML language framework, can you briefly describe the components an authorization system needs in order to implement the XACML architecture in the pure Pull and Hybrid scenarios?

GG: The components identified in the XACML reference architecture are:

Policy Decision Point (PDP): this component receives access requests, evaluates them against known policies, and returns a decision to the caller

Policy Enforcement Point (PEP): this component is located at or near the resource and can be integrated as a gateway, co-located with the application, or embedded within the same process as the application. PEPs basically construct the request in XACML format, forward it to the PDP over a particular protocol, and enforce the decision returned from the PDP

Policy Information Point (PIP): XACML is an attribute based access control (ABAC) model and therefore thrives on attributes. If the PEP does not send all the necessary attributes in the request, the PDP retrieves the additional attributes via the PIP interface.

Policy Administration Point (PAP): Here the policies are written, tested, and deployed – all the expected policy lifecycle management functions.

Policy Retrieval Point (PRP): This is where the policies are stored by the PAP and retrieved by the PDP.
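
To show how these components relate at runtime, here is a deliberately simplified, in-process sketch. The policy, attributes and matching logic are toys – real PDPs evaluate full XACML policies – but the division of labor between PEP, PDP, PIP and PRP is the point.

```python
# Toy end-to-end sketch of the XACML reference architecture roles.
# Everything here is drastically simplified and illustrative only.

# PRP: where policies are stored (written and deployed there by the PAP).
POLICY_STORE = [
    {"id": "commission-reports",
     "target": {"resource-type": "commission-report", "action": "read"},
     # Rule: a manager may read reports only for their own region.
     "condition": lambda subj, res: subj.get("role") == "manager"
                                    and subj.get("region") == res.get("region"),
     "effect": "Permit"},
]

# PIP: fills in attributes the PEP did not supply (stand-in directory).
DIRECTORY = {"alice": {"role": "manager", "region": "east"}}

def pip_lookup(subject: dict) -> dict:
    enriched = dict(subject)
    enriched.update(DIRECTORY.get(subject.get("subject-id"), {}))
    return enriched

# PDP: evaluates the request against policies retrieved from the PRP.
def pdp_evaluate(request: dict) -> str:
    subject = pip_lookup(request["subject"])
    resource, action = request["resource"], request["action"]
    for policy in POLICY_STORE:
        target = policy["target"]
        if (target["resource-type"] == resource.get("type")
                and target["action"] == action):
            return policy["effect"] if policy["condition"](subject, resource) else "Deny"
    return "NotApplicable"

# PEP: sits with the application, builds the request and enforces the answer.
def pep_check(user: str, resource: dict, action: str) -> bool:
    request = {"subject": {"subject-id": user}, "resource": resource, "action": action}
    return pdp_evaluate(request) == "Permit"

print(pep_check("alice", {"type": "commission-report", "region": "east"}, "read"))  # True
print(pep_check("alice", {"type": "commission-report", "region": "west"}, "read"))  # False
```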

 

GP: Given that logical architecture, what do you typically see or recommend in terms of physical deployment? It seems like a minimum would include separate PEP, PDP, and PAP instances. The PIP and PRP would be combined with the PEP/PDP and PAP respectively. Is this a way to get started on building the foundation?

GG: Some of these deployment configurations will be product dependent, but in general here are some typical topologies for an XACML system:

1. Shared PDP service: You should have at least 2 PDP instances for availability and business continuity. Additional PDP instances can be deployed for scalability as each server instance is stateless.

2. Embedded PDP: For low latency scenarios, the PDP can be embedded directly in the application container

3. Attribute sources: The PDP, via the PIP interface, can connect to several attribute sources directly as one option. A second option is to use a virtual directory as the attribute source manager. Finally, when a persistent, consolidated attribute store is required then privilege granting attributes can be synchronized into a directory.

4. PAP service: Typically this function will be run in an offline mode. Most of the work happens when a new application is onboarded to the environment and we find that XACML policies are quite stable and don’t require daily or even weekly adjustments.

5. PRP repository: Obviously the PAP will store XACML policies in the repository, but the enterprise may have a preferred option for putting policies into production. Operational procedures must also take into account how many PDPs are installed locally or distributed throughout the network. For example, you could utilize database or directory replication to promote new policies into production.

6. PEP integration with applications: Integrating the XACML system with an XML gateway is a great way to get started, as it has a low impact on the existing environment. For more advanced scenarios, you can integrate application-environment-specific PEPs into your business applications.

 

GP: I look at the domain of information security like a triangle – there are AAA services for Identity and Access Management, Defensive Services like monitoring and logging, and finally Enablement services that help to integrate AAA and Defensive services into the organization. XACML was designed to handle parts of the AAA and Enablement challenges, but how can we use XACML and authorization services to improve our Defensive posture? What ways have you seen to implement more robust logging and monitoring through the authorization layers?

GG: The XACML PDP engine should be instrumented so it can provide important information to the logging and monitoring apparatus of the enterprise. At the basic level, the monitoring system can track the number of permit and deny decisions to watch for anomalies. Further, alerts can be triggered if certain thresholds are exceeded – maybe the same user getting repeatedly denied access or a particular IP address making excessive requests.
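
As a small illustration, assuming the PDP emits one record per decision, a monitor could watch for excessive denies along these lines (the log format, threshold and window are assumptions for the example):

```python
# Sketch of decision-log monitoring: count Deny decisions per subject over a
# sliding window and raise an alert past a threshold. Threshold, window and
# record format are illustrative assumptions.
from collections import defaultdict, deque
import time

DENY_THRESHOLD = 5          # alert after 5 denies...
WINDOW_SECONDS = 60         # ...within one minute

deny_times = defaultdict(deque)

def record_decision(subject: str, decision: str, now: float = None) -> None:
    if decision != "Deny":
        return
    now = now if now is not None else time.time()
    window = deny_times[subject]
    window.append(now)
    # Drop entries that fell out of the sliding window.
    while window and now - window[0] > WINDOW_SECONDS:
        window.popleft()
    if len(window) >= DENY_THRESHOLD:
        alert(subject, len(window))

def alert(subject: str, count: int) -> None:
    # In practice this would feed a SIEM or monitoring system.
    print(f"ALERT: {count} Deny decisions for {subject} in the last {WINDOW_SECONDS}s")

# Example: the same user is repeatedly denied access.
for _ in range(6):
    record_decision("agent-1234", "Deny")
```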

 

Discussing XACML with Travis

October 6, 2010

Travis Spencer (@travisspencer) raised a few issues with XACML and proposed some solutions in a recent blog post. I’d like to take this opportunity to respond in the interest of continuing the conversation. Thanks to my colleagues, Erik, David (@davidjbrossard), and Ludwig for their input.

Point 1 – Lack of wire protocol definitions: The industry is limited to a single wire protocol spec at the moment, the SAML profile for XACML. It is by no means universally applicable but is useful when integrating with other vendors’ policy enforcement points (PEP). Such is the case for integration with XML security gateways. We agree that other wire protocols are needed and expect that they will emerge over time, as the market demands them. This will require a combined effort between XACML vendors and experts in the particular protocol domains of interest. Once the industry reaches a point where multiple protocol profiles are created, then formal certification and interoperability testing may also be required – similar to the SAML profile testing that occurs today. Finally, I invite you to join the TC and provide your use cases as input!

Point 2 – Cryptographically binding attributes to a trusted IDP: There are cases where a cryptographic chain can be established between the IDP and PDP – as Craig Forster described in a comment to the original post. That is, a SAML token can be passed from the PEP to the PDP and the PDP can perform signature validation. However, this doesn’t address all possible scenarios as there are many ways that attributes can reach the PDP. In federated scenarios, a token of some sort may contain attributes, but this represents only a portion of use cases. The PEP may derive attributes from a local repository and it, of course, may send environmental attributes in the access request to the PDP. The PDP may also query additional sources for necessary attributes before making an access decision. These sources could include a local LDAP directory, web service, or customer database. The PDP could also query a remote source, as defined in the Backend Attribute Exchange (BAE) profile. Therefore it may not be practical, or possible, to implement cryptographic bindings all the way to the attribute source.

It is true that the PEP and PDP operate in a trusted ecosystem – that includes the application itself as well as other infrastructure components. XACML was intentionally designed in a modular fashion to cleanly separate authorization from other IdM functions, such as authentication. Security mechanisms are implemented to secure the communication between PEP and PDP components, but there also is a certain amount of trust between the components. For example, the PDP must “trust” that the PEP will actually enforce decisions properly and carry out all obligations. The PDP must “trust” the contents of access requests from the PEP, including the attributes about the subject. In such cases where additional context is needed, the PEP can send subject attributes plus the source (issuer) of the subject attributes – it’s just another XML string.
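
For example, the standard (optional) Issuer field on an attribute is one place to carry that source information; the attribute value and issuer below are purely illustrative.

```python
# Sketch: the XACML Attribute element has an optional Issuer field, so the PEP
# can pass along where a subject attribute came from. Values are illustrative.
import xml.etree.ElementTree as ET

def subject_attribute(attribute_id: str, value: str, issuer: str) -> ET.Element:
    attr = ET.Element("Attribute", {
        "AttributeId": attribute_id,
        "DataType": "http://www.w3.org/2001/XMLSchema#string",
        "Issuer": issuer,  # e.g. the partner IdP that asserted this attribute
    })
    ET.SubElement(attr, "AttributeValue").text = value
    return attr

attr = subject_attribute("urn:oasis:names:tc:xacml:1.0:subject:subject-id",
                         "agent-1234", "https://idp.partner.example.com")
print(ET.tostring(attr, encoding="unicode"))
```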

Point 3 – Policy authoring and administration: The XACML policy language was developed to address a broad range of application scenarios and to satisfy the complex requirements of sophisticated applications. As such, XACML is a rich language and it must be, otherwise we would be debating whether it is comprehensive enough. Based on our experience at Axiomatics (@axiomatics), if you simplify the policy authoring tool – you lose some of the XACML functionality. Some clients choose to create domain-specific and simpler admin tools but these are only used after the initial set of policies has been created. A couple other observations may be helpful. First, XACML policy development takes some effort up front, but the policies are typically quite stable and do not need a lot of manipulation. Second, the more frequent activity is management of user attributes during onboarding or when the user’s status changes. Finally, we expect more domain specific policy administration tools to emerge in the future as the standard is adopted more broadly.

In summary, I thank Travis for raising points that are issues from his perspective. XACML is not perfect but then no technology is. However, the standard and products that implement it will continue to improve over time based on experiences from production deployments across many industries. We think that XACML is a very comprehensive and capable specification – and we are seeing many leading organizations choosing to deploy it already today. They recognize the value of standards-based, externalized authorization as a competitive advantage and a vast improvement from previous models.

Lastly, I invite everyone to attend our webinar on XACML and the ‘200M’ user deployment this month (October). More information on the event can be found here.

Weighing in on Pull vs. Push

August 20, 2010

Bob Blakley certainly hit a nerve with his keynote presentation at Catalyst this year. He had been working on the concepts for his “Pull” identity architecture for some time and it was well received by the audience, sparking a lot of discussion and debate. Since the conference, we’ve witnessed a terrific continuation of the debate through the excellent posts by Nishant Kaushik and Ben Goodman. Nishant has argued in favor of “Pull” here and here, while Ben has taken an opposing view here and here.

This type of discussion often takes place when we speak to enterprises about adopting externalized authorization managers instead of relying on historical approaches – you rarely open up legacy applications unless there is a specific business reason to do so. However, as Nishant points out, enterprises realize the value and opportunity of moving forward with a “Pull” based approach. Existing models, while workable in many situations, may not be flexible enough for modern business organizations that need to operate in a more dynamic fashion while maintaining security and regulatory compliance.

A final point to make is this: for every application that adopts the “Pull” model, you have one less application that requires provisioning or data synchronization. I refer to this type of application as stateless, from an identity perspective. In this case, users don’t authenticate to the application; they authenticate via a service that may be hosted by the enterprise or an external entity – no extra accounts or credentials needed. For access control, the application calls out to an externalized authorization manager (EAM), where policies define what the user can do within the application. If additional attributes are needed, they can be loaded from existing authoritative sources by the EAM – no extra data synchronization or user provisioning is needed. Now this model will not work for every application or every scenario, but it is a model that is implementable today and many in the industry are enthusiastically adopting it. For applications that still require a monolithic approach, I agree with Ben that your IdM tools must indeed be very intelligent.

Diagramming XACML Performance

July 14, 2010

In a previous post discussing XACML performance myth-busting, I described several areas in an XACML authorization system where performance issues can be addressed. Since then, my colleague David Brossard created the diagram below to illustrate potential performance bottlenecks.

To refresh your memory, here is the issue for each numbered item in the diagram (see the previous post for explanations):

  1. Policy Retrieval
  2. Policy Matching
  3. Attribute Retrieval
  4. Decision Caching
  5. Multiple Requests
  6. PDP – PEP Interaction

Concordia hosts Authorization Standards Workshop

July 9, 2010

The Concordia Discussion Group is planning another workshop at Burton Catalyst North America, continuing a trend of providing timely and informative events. I have had the pleasure of participating in the past and will provide an update on what is new in XACML 3.0 this time around. XACML 3.0 is drawing ever closer to formal standardization – and contains several useful enhancements that are important for leading-edge as well as legacy application environments.

Information on the workshop can be found here. Admission is free – you just need to register with Dervla O’Reilly to attend. Hope to see you there!

The Anywhere Application Architecture

June 8, 2010

With this post, the long tail of commentary from the European Identity Conference continues. I came up with the term Anywhere Application Architecture while preparing my EIC keynote as it captures a number of principles that architects must consider when deploying applications. Today, applications and the supporting infrastructure must be able to run anywhere – and be mobile enough to transit between on premises data centers, private clouds, and public clouds. Below is a graphic that represents this approach, from an authorization perspective.

Here’s how it works: A typical authorization service is depicted in the lower part of the graphic; it is comprised of a policy administration point (PAP), a policy decision point (PDP) and policy enforcement points (PEPs). In addition, authorization policies represent the business and security rules to be enforced, and necessary attributes are retrieved through a policy information point (PIP) interface.

The traditional deployment of an authorization infrastructure is to install it on premises, alongside applications in your enterprise’s data center. A PEP would typically be integrated with your application (App A in this case). If your enterprise utilizes a private cloud hosting a web services application, then XML gateways can serve as a super PEP to secure access to web services (in this case Service A).

If you are running workloads in the public cloud, the same authorization infrastructure can be extended. In our example, the XML gateway can protect publicly hosted web services (Service B) or you can choose to implement a PEP in the cloud (Service C). Finally, you may also choose to run part or all of your authorization infrastructure in the cloud – depending on the usage scenarios or requirements of your applications and users.

To reiterate, security architects and application planners must prepare for workloads that can run in the data center, in private clouds or in public cloud scenarios – and they must be able to accommodate moving workloads between these environments. Therefore, your IdM infrastructure must have the same flexibility characteristics. In this example, we’ve shown you how an XACML-based authorization system fits the bill. By the way, this approach integrates extremely well with a federated model as the authentication approach. Then you can also accommodate users that are located anywhere!