Spring and Summer conference schedule

Posted May 4, 2011 by ggebel
Categories: Conferences, Workshop

It seems that this time of year the conference circuit begins to intensify before taking a break for summer vacations. Unfortunately I am not able to join the identerati at IIW this week, but here is the schedule for the rest of spring and summer:

European Identity Conference: Located in beautiful Munich, this promises to be another excellent event. Of particular intrigue is the fact that Craig Burton will be joining KuppingerCole as an analyst – very exciting news! In addition to Axiomatics sharing an exhibit floor booth with Ping Identity, I will be participating in several sessions.

Glue Conference: This is one of Eric Norlin’s creations and promises to be an interesting and informative event – particularly for you developers out there. A new program was introduced this year where Alcatel-Lucent has funded “demo pods” in the exhibitor space for interesting startup vendors. So, a big thank you to Alcatel-Lucent and the selection committee for picking Axiomatics as a demo pod participant.

Cloud Identity Summit: 2011 is the second year for the Cloud Identity Summit, hosted by Ping Identity. Just take a quick look at the agenda and speaker lineup and I expect you will be registering immediately. Plus, Andre has lined up 15 workshops on various cloud and identity topics… 15 workshops, are you kidding me? What other conference gives you that kind of learning opportunity? I am pleased to be giving The Essential XACML Primer workshop again this year – please come out to Keystone, CO for this amazing event!

Gartner Catalyst Conference: Axiomatics will have a hospitality suite for the first time at Catalyst this year. For those of you familiar with the Catalyst format, hospitality suites are unlike exhibit spaces at any other conference. They are fun, themed settings where you can enjoy yourself while mingling with other attendees – and learn a little about vendor offerings. Please join us at Catalyst and look for a very unusual and fun giveaway in the Axiomatics suite.


Part Three: Enterprise Authorization Scenarios with James McGovern

Posted April 6, 2011 by ggebel
Categories: Architecture, Authorization, Standards, XACML

Here is the third installment in a series of conversations I have had with James McGovern, enterprise architect extraordinaire. In this post, we expand the scope from insurance scenarios to include some broader enterprise contexts for externalized authorization.

JM: Over the last couple of years, I have had lots of fascinating conversations with Architects in Fortune enterprises regarding their identity management deployments, and several common themes have emerged. One is that while they could do basic provisioning of an account in Active Directory, they couldn’t successfully provision many enterprise applications due to challenges that go beyond basic identity. Can XACML help get them to the next stage of maturity?

GG: Your question reminds me of the latest round of commentary regarding the futility of a provisioning approach to identity management. Particularly from the latest Gartner IAM summit, speakers were lamenting the state of the provisioning market and how little progress has been made over the last 10 years. At the heart of the problem is the fact that provisioning tools just don’t have visibility into application privileges and entitlements, in the vast majority of deployments. Instead, provisioning deployments tend to “skim the surface” by managing userIDs/passwords, but defer deep entitlement settings to the target application or platform. Of course, the most difficult applications to manage are an issue because they don’t properly externalize identity management functions – making provisioning deployments more expensive as well as less than optimal.

Enter the “pull” model espoused by my former colleague, Bob Blakley. The basic premise of the pull model is that identity data is resolved at runtime by calling the appropriate service. If a user accesses an application before authenticating, redirect them to an authentication service. If a user accesses the application with a known token, redirect to the token service for proper handling. When the user attempts to perform a protected function, an authorization service should be called for evaluation.
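The pull model above can be sketched as a simple request router. This is a rough Python illustration, not a real API – the service names and request shape are hypothetical placeholders:

```python
# A sketch of the "pull" model: identity is resolved at runtime by
# delegating to the appropriate service. Service names and the request
# dictionary shape are hypothetical.

def handle_request(request, authn_service, token_service, authz_service):
    """Route an incoming application request according to the pull model."""
    if "token" not in request:
        # Unauthenticated user: redirect to the authentication service.
        return {"redirect": authn_service}
    if not request.get("token_validated"):
        # Known token, not yet validated: redirect to the token service.
        return {"redirect": token_service}
    # Authenticated user performing a protected function: ask the
    # authorization service for a decision.
    return {"decision": authz_service(request["user"], request["action"])}
```

The structural point is that the application holds no identity data of its own; each concern is resolved by a service call at runtime.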

As the reader may have surmised, the more an application externalizes identity, the less provisioning is required. Instead of provisioning accounts and entitlements to every application, a smaller number of authoritative identity service points are provisioned that can be leveraged by many applications. COTS applications would come preconfigured with policies for entitlements and authorization, instead of using a proprietary, embedded approach. To extend this further, access controls for COTS applications from different vendors can be implemented consistently – without excess access “leaks” – if they share a centralized access control model.

Therefore, the ability to centrally describe the authorization model of an enterprise application would help. The challenge of identity management would significantly change in a number of ways. For example, enterprises would need to establish which identity services they would provision for the purposes of authentication – and which established, external identity providers they would consume. Authoritative attribute sources would fall into the same category. Finally, authorization policy modeling and management skills would become more prominent so that a normalized view could be attained across the enterprise.

JM: I remember conversations with my former boss at The Hartford where he asked me to explain the value proposition of identity management. He didn’t understand the value of spending millions of dollars for a system to tell him that James McGovern is still an employee. After all, he would know whether he fired me or not. What he wanted to know is what James McGovern could access if he decided to fire me. More importantly, even being in the role of Chief Security Architect, I couldn’t always figure out what I had access to.

GG: Sure, your boss would know whether he fired you or not – but what about all those independent insurance agents we’ve discussed in previous scenarios? Dealing with hundreds, thousands or millions of users and managing what they have access to is what drives organizations to spend significant sums on identity management. That said, there is often a budget imbalance because internal systems are more complex and expensive to operate than the applications serving external constituencies.

Determining what resources a particular user has access to, or who has access rights to a given resource, are questions that auditors, as well as system administrators, want answered. Administrators need this detail so they can properly set up access for a new employee, contractor, customer, etc. Of course they also need this information so de-provisioning can occur when the relationship is terminated. Auditors and regulators are responsible for ensuring that the organization is following internal business and security policies, as well as any regulations or laws it may be subject to.

Current practices, where identity and policy are embedded in each business application, have proven to be very inefficient when attempting to audit the environment. It is not unusual for large organizations to have several hundred or a few thousand applications – imagine trying to audit such an environment on a regular basis if the identity information and policy are not externalized. The situation can be utterly insane if each application has its own store of user data, because then you also have a synchronization and reconciliation challenge. Herein lies one of the main value propositions of externalizing authorization: auditability and accountability are much easier to accomplish because you have a central place where policies are defined and maintained (although the enforcement of those policies can certainly be distributed). Further, when you combine externalized authorization with identity governance, you can achieve even more visibility and transparency into access controls.

JM: Many Architects have enterprise initiatives to reduce the number of signons to enterprise applications a user has to provide in any given day. Once they have implemented the solution to carry a token that provides seamless access between systems, they now discover that they have an authorization problem. What role should XACML play in an identity strategy?

GG: Sounds like that famous arcade game, whack-a-mole. As soon as you think you have solved one problem, a new one appears… The scenario you describe can occur if the architects have not fully mapped out their strategy or understood the full consequences (intended or otherwise) of the architecture. If you move to a tokenized authentication approach (like SAML), then you have accomplished two worthy goals: reduced signon for users and fewer systems to provision credentials to.

However, as you point out, the application still needs to do authorization in some way. This could be accomplished if the application retains some kind of user store and keeps entitlement or personalization attributes about the user – at least the application is not storing or managing a credential. Thinking back to the issue of hundreds or thousands of applications, this doesn’t sound like a good solution for a number of reasons.

The preferred approach, if you have externalized authentication, is to also externalize authorization and utilize an XACML system. When the user presents their SSO token of choice to the application, it can call out to an XACML policy engine (such an engine could also be embedded in the application for the lowest latency) for the authorization decision. This is the approach we see more and more organizations taking.
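As a rough sketch of that call-out, here is the kind of request a PEP might construct for an external policy engine. The subject/action/resource categories mirror the XACML model, but the attribute IDs, values, and serialization are illustrative, not a real product API:

```python
import json

# A sketch of a PEP building an authorization request for an external PDP.
# The category names mirror XACML's subject/action/resource model; the
# attribute IDs and values here are illustrative placeholders.

def build_authz_request(subject_id, action, resource_id):
    return {
        "Request": {
            "AccessSubject": {"Attribute": [
                {"AttributeId": "subject-id", "Value": subject_id}]},
            "Action": {"Attribute": [
                {"AttributeId": "action-id", "Value": action}]},
            "Resource": {"Attribute": [
                {"AttributeId": "resource-id", "Value": resource_id}]},
        }
    }

# The PEP would serialize this, send it to the PDP's endpoint, and then
# enforce the Permit/Deny decision that comes back.
payload = json.dumps(build_authz_request("alice", "view", "claim-42"))
```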

JM: The average Fortune insurance enterprise may have hundreds of enterprise applications, where maybe only 20% are commercial off-the-shelf (COTS) products. Vendors such as Oracle, IBM and SAP plan to provide out-of-the-box integration with XACML in future releases of their ERP, CRM, BPM, ECM and Portal products. Other large enterprise vendors, however, seem to be missing in action. How do I fill in the gaps?

GG: This is where you need to rely on your authorization vendor to provide PEPs for COTS applications that don’t directly support XACML. In some cases, you can use a proxy, custom PEP code or an XML gateway (such as from Layer 7, Intel or Vordel) to intercept calls to the application. In other cases, a hybrid approach is necessary because the application cannot operate unless users are provisioned into certain roles or groups.

Ultimately application customers have a lot of influence with their vendors on what standards should be supported. Enterprises should use what leverage they have to encourage XACML adoption where appropriate – that leverage could come in the form of willingness to buy the application if standards are supported vs. building it internally if the required standards are not included.

JM: Many corporations have SOX controls not only around IT systems but also physical security. Does XACML have value here?

GG: There is definitely a use case where XACML authorization is the policy engine for converged physical/logical access systems. We are seeing some interest in this capability in certain defense scenarios and are working with a physical access vendor on some prototypes. The idea is that access decisions are determined not only by typical logical access rules, but also by where you are physically located. For example, the batch job for printing insurance claim checks will only be released once I have badged into the print room.

JM: So far, the conversation around identity has dominated many of the industry conferences and analyst research. The marketplace at large is blissfully ignorant of the challenges of managing entitlements within an enterprise context. What do you think will be the catalyst for this getting more airtime in front of IT executives?

GG: I think there are significant challenges that are forcing the issue of entitlements to the surface.

  1. The need to share: an organization’s most valuable and sensitive data is precisely the data that partners, customers, and suppliers want access to. The business imperative is to share this data securely.
  2. Overexposure of data: The counterpoint to the first item is that too much data is exposed – that is, sharing of sensitive data must be very granular so that the proper access is granted, but no more.
  3. Sensitivity of data: We are in an era of seemingly continuous incidents of personal data release – either accidentally or due to poor security controls. Insurance companies collect lots of personal data such as what car you drive, where you work, what valuables you have in your home, medical information, insurance policy data, workers compensation data, etc. All this data needs to be protected from improper disclosure.
  4. Moving workloads to the cloud: Regardless of all the hype around cloud computing, there is a strong drive to utilize the capabilities of this latest computing movement. What’s particular to entitlements surfaces in at least two areas. First, it is almost impossible to move workloads out of their traditional data center if entitlements and other IdM functions are “hard wired” into the application, because the application will cease to function. Second, once applications are moved to the cloud, you need to have a consistent way to enforce access – regardless of where the applications and data are hosted. This cries out for a common entitlement and authorization model that can be applied to all resources.

JM: I have some really wild scenarios in the back of my head on how XACML could enable better protections in relational databases, be used for implementing user privacy rights in enterprise applications, and even serve as a way to provide digital rights management. What are some of the more novel uses of XACML on your radar that you think the information security community should be thinking of?

GG: The XACML policy language and architectural model are incredibly flexible and applicable to many business scenarios. Databases pose a particular challenge, but there are certainly creative ways to address this, and it would be great to explore some of your ideas. Privacy scenarios have their own challenges because you can have legal restrictions on PII as well as user preferences to accommodate. At Axiomatics, we always welcome input from potential customers on their most challenging authorization scenarios to see how we can meet their requirements.

Biographies for James and Gerry:

James McGovern

James McGovern is currently employed by a leading consultancy and is responsible for defining next-generation solutions for Fortune enterprises. Most recently he was employed as an Enterprise Architect for The Hartford. Throughout his career, James has been responsible for leading key innovation initiatives. He is known as an author of several books focused on Enterprise Architecture, Service Oriented Architectures and Java software development. He is deeply passionate about topics including web application security, social media and agile software development.

James is a fanatic champion of work/life balance, corporate social responsibility and helping make poverty history. James heads the Hartford Chapter of the Open Web Application Security Project (OWASP) and contributes his information security expertise to many underserved non-profits. When not blogging or twittering, he spends time with his two kids (six and nine) who are currently training to be world champions in BMX and Jiu-Jitsu.

Gerry Gebel, President, Axiomatics Americas

As president, Gerry Gebel is responsible for sales, customer support, marketing, and business development for the Americas region. In addition, he will contribute to product strategy and manage partner relationships for Axiomatics.

Prior to joining Axiomatics, Gebel was vice president and service director for Burton Group’s identity management practice. Gebel authored or contributed to more than 70 reports and articles on topics such as authorization, federation, identity and access governance, user provisioning and other IdM topics. Gebel has also been instrumental in advancing the state of identity-based interoperability by leading demonstration projects for federation, entitlement management, and user-centric standards and specifications. In 2007, Gebel facilitated the first ever XACML interoperability demonstration at the Burton Group Catalyst conference.

In addition, Gebel has nearly 15 years’ experience in the financial services industry, including architecture development, engineering, integration, and support of Internet, distributed, and mainframe systems.

Take 3, talking authZ and TOCTOU with Gunnar

Posted March 18, 2011 by ggebel
Categories: Architecture, Authorization, Standards, XACML

Here is part 3 of a conversation with Gunnar Peterson, where we continue talking about externalized authorization and who in the organization is involved in an XACML system deployment – it even includes a discussion of TOCTOU concerns as they relate to a XACML system. Thanks also to my colleagues, David Brossard and Pablo Giambiagi, for their input. You can also find part 1 and part 2 of the conversation on this blog.

GP: In our last conversation you mentioned “As with other access policies, administrators will work with application and security experts to construct XACML policies. Administrators learn this skill fairly quickly, but they do need some training and guidance at first.”

In your experience, who authors these policies today? And where should this responsibility sit in the organization going forward? It seems there is a mix of application domain and security experience required, plus there may be some need to understand business processes and architecture. Is there a new organizational role, Security Policy Manager, emerging?

GG: I like the sound of Security Policy Manager, and it is a role that could appear over time. For now, we see security administrators working with a business analyst and/or application developer to construct the policies. This amounts to determining what resources/actions need to be protected, translating the business policy (doctors can only view records of patients they have a care relation with), determining what attributes are needed, sourcing the necessary attributes, etc.
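The translation step above can be made concrete with a sketch. Here the example business rule (“doctors can only view records of patients they have a care relation with”) is expressed as an attribute-based check in Python; in a real deployment this logic would live in a XACML policy, and the attribute names and the care-relation lookup are illustrative:

```python
# A sketch of an attribute-based rule: doctors can only view records of
# patients they have a care relation with. The attribute names and the
# care_relations data source are illustrative placeholders.

def evaluate_care_access(subject, action, resource, care_relations):
    """care_relations maps a doctor id to the set of patient ids in their care."""
    if subject.get("role") != "doctor" or action != "view":
        return "Deny"
    if resource["patient_id"] in care_relations.get(subject["id"], set()):
        return "Permit"
    return "Deny"
```

Note how the decision depends only on attributes of the subject, action, and resource, plus a sourced relationship attribute – exactly the pieces the administrators and analysts must identify.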

GP: In looking at Infosec as a whole, it seems to me that we have two working mechanisms – Access Control and Crypto. Everything else is integration. What guideposts should people use to identify and locate the integration points for their authorization logic? Do you focus on the resource object, the user, or the Use Case context? Or is it a mix?

GG: Policies can focus on resource, subject, action or environmental attributes – that is the beauty of an attribute-based access control language like XACML. That also means there are many ways to model policies; here are some general guidelines:

– Make sure you start with the plain old English language business rules and policies

– Take a “divide and conquer” approach to building the policy structure. This could mean writing policies for different components of the application that can be viewed as logically separate. What you’re also doing here is taking into account the natural activity flows when people use an application or the types of requests that will come into the PDP. Upon analysis of the scenarios, you can place infrequently used policies in branches where they won’t impact performance of more frequently used policies.

– Evaluate frequently used policies first – seems intuitive but may not be apparent at first. Therefore, you need to continually evaluate how the system is being used to see if modifications to the policy structure are needed. As mentioned in the previous point, this will allow you to identify policies that are infrequently evaluated so you can ensure they are not in the path of frequently evaluated policies.

– Consider the sources of the attributes you will utilize in policies. Are they readily available, or spread across multiple repositories? The latter is a great place to use a virtual directory.
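That last guideline can be sketched as a small attribute resolver that consults several repositories in order – the job a virtual directory (or a PIP) performs on behalf of the PDP. The repository contents and attribute names here are illustrative:

```python
# A sketch of attribute sourcing across multiple repositories, similar to
# what a virtual directory or PIP provides. Repositories are modeled as
# plain dicts here; contents and names are illustrative.

def resolve_attribute(user, attribute_id, repositories):
    """Return the first value found for attribute_id across the repositories."""
    for repo in repositories:
        value = repo.get(user, {}).get(attribute_id)
        if value is not None:
            return value
    # An attribute a policy marks MustBePresent that cannot be resolved
    # would lead the PDP to an Indeterminate result, not a silent Deny.
    return None
```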

It is only after analyzing the scenario that you can say whether policies will be targeted to users, resources or some mix of the two. This is an area where we do provide guidance to customers at the start of a deployment. However, we do find that customers are able to manage this aspect of the deployment pretty quickly. Another point to keep in mind is that access policies don’t change that frequently once an application is on-boarded into the system. What happens on a daily basis is the administration of attributes on the individual users that will be accessing the protected applications.

GP: The time-of-check to time-of-use (TOCTOU) problem has been around for as long as distributed computing. The time between when tickets, tokens, assertions, claims, and/or cookies are created and the time(s) when they are used by the resource’s access control layer gives an attacker a number of places to hide. Choosing the place to deal with this problem and crafting policies is an important engineering decision. What new options and patterns does XACML bring to the table to help architects and developers deal with this problem?

You talked previously about the XACML Anywhere Architecture, where a callback from a Cloud Provider queries the PDP; this would enable the Cloud Provider to get the freshest attributes at runtime while still allowing the Cloud Consumer to benefit from the Cloud Provider’s deployment. The callback idea has appeal for a wide variety of use cases, but do people need to update their attribute schemas with any additional data to mark the freshness of the attributes so the PEP/PDP can decide when or if to fire off these requests? Does the backend PDP have any responsibility to mark attributes? What is the emerging consensus on core patterns here, if any?

GG: You are right to point out that having the ability to call an externalized authorization service provides a mechanism for the application to have the most current attributes and access decisions. However, there are a couple of places in the XACML architecture to consider as it relates to TOCTOU – freshness of attributes and caching of decisions.

Attributes: Fresh or Stale

First we’ll look at the freshness of attributes that you describe, because it is the attribute values that will be processed in the policy evaluation engine (PDP). The XACML 3.0 specification is clear about what an attribute is. In terms of XML schema, it is defined as follows:

<xs:element name="AttributeDesignator" type="xacml:AttributeDesignatorType" substitutionGroup="xacml:Expression"/>

<xs:complexType name="AttributeDesignatorType">
  <xs:complexContent>
    <xs:extension base="xacml:ExpressionType">
      <xs:attribute name="Category" type="xs:anyURI" use="required"/>
      <xs:attribute name="AttributeId" type="xs:anyURI" use="required"/>
      <xs:attribute name="DataType" type="xs:anyURI" use="required"/>
      <xs:attribute name="Issuer" type="xs:string" use="optional"/>
      <xs:attribute name="MustBePresent" type="xs:boolean" use="required"/>
    </xs:extension>
  </xs:complexContent>
</xs:complexType>

There is no room for a timestamp or a freshness attribute in an attribute designator used in a XACML policy (set) which would indicate under which temporal terms a policy is relevant based on the freshness of incoming attributes.

Accordingly, the incoming XACML request is made up of attributes and attribute values, defined in the schema as follows:

<xs:element name="Attribute" type="xacml:AttributeType"/>

<xs:complexType name="AttributeType">
  <xs:sequence>
    <xs:element ref="xacml:AttributeValue" maxOccurs="unbounded"/>
  </xs:sequence>
  <xs:attribute name="AttributeId" type="xs:anyURI" use="required"/>
  <xs:attribute name="Issuer" type="xs:string" use="optional"/>
  <xs:attribute name="IncludeInResult" type="xs:boolean" use="required"/>
</xs:complexType>

Again, there is no use or mention of a freshness or time attribute. This is coherent with the fact that the PDP is a stateless component in the architecture: unless it is told, it cannot know the freshness, the time of creation, or the expiry date of an attribute value. The PDP does know the time at which it receives a request from the PEP and the time at which it invokes the PIP(s), and could assume a date of retrieval from those, but even then the value could be skewed by caching mechanisms – and the PDP will not use those timestamps for access control unless, of course, the policy author explicitly asks it to.

Let’s imagine the case of a role attribute. Let’s imagine we want to write a policy which says: grant access to data if

• the user is a manager and

• if the role attribute is no more than 5 minutes old (5 minutes being the maximum freshness time)

This uses two attributes: role and freshness. The incoming request says “Joe accesses data”, which is not enough for the PDP to reach a conclusion. The PDP queries an LDAP directory via a PIP connector to retrieve the roles for Joe: “give me Joe’s roles”. Joe may have multiple roles, some of which may have been cached in the PIP attribute cache. In addition, each role has a freshness value (which could be the time at which the LDAP was queried for the value, or the number of minutes since it was last queried for the given attribute). In that case, we may have different freshness values. Which one should be used? This shows there is a ternary relationship between the user, the role, and the freshness of the role. The simplest way for XACML to handle this today is to encode the freshness and the role value into a single attribute value, e.g. manager::0h04mn37s. In that case the policy must match the attribute value using regular expressions (or XPath) to extract the role value on one hand and the freshness value on the other.
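To make the encoded-value workaround concrete, here is a sketch that unpacks a manager::0h04mn37s style value with a regular expression and checks it against the five-minute freshness window. The encoding format is illustrative, not part of the XACML standard:

```python
import re

# A sketch of the workaround described above: role and freshness are packed
# into one attribute value (e.g. "manager::0h04mn37s") and unpacked with a
# regular expression. The encoding format itself is illustrative.

PATTERN = re.compile(r"^(?P<role>\w+)::(?P<h>\d+)h(?P<m>\d+)mn(?P<s>\d+)s$")

def is_fresh_manager(value, max_age_seconds=300):
    """True if value encodes the manager role, no more than 5 minutes old."""
    match = PATTERN.match(value)
    if not match:
        return False
    age = (int(match.group("h")) * 3600
           + int(match.group("m")) * 60
           + int(match.group("s")))
    return match.group("role") == "manager" and age <= max_age_seconds
```

So manager::0h04mn37s (4 minutes 37 seconds old) passes, while the same role an hour old does not – which is exactly the rule the XACML condition would have to express via string matching.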

Is TOCTOU masquerading as a governance issue?

Another aspect of attribute freshness is the administrative or governance processes behind the attributes. If attributes are not fresh, isn’t this a governance failure? For example, how frequently does your provisioning process make updates? Once a day, once a week, every hour – or is it event driven, updating continuously? Your answer will provide additional input on how to manage the consumption of attributes by authorization systems. So this problem extends beyond the time of check to the time of administration.

Decision Cache
Next up, PEP decision caching is another area to examine to steer clear of TOCTOU issues. For performance reasons, PEPs can cache decisions for reuse so they don’t have to call the PDP for every access request. Here, your TTL setting defines the window of opportunity during which the PEP could use an invalid decision if a privilege-granting attribute has changed.
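A minimal sketch of such a PEP-side cache makes the TTL window visible. The clock is injectable so the expiry behavior is easy to reason about; the cache key and decision values are illustrative:

```python
# A sketch of a PEP-side decision cache with a TTL. The TTL bounds the
# TOCTOU window: a cached decision may be stale for at most `ttl` seconds
# after a privilege-granting attribute changes.

class DecisionCache:
    def __init__(self, ttl, clock):
        self.ttl = ttl
        self.clock = clock  # injectable time source, e.g. time.monotonic
        self._cache = {}

    def get(self, key):
        entry = self._cache.get(key)
        if entry is None:
            return None
        decision, stored_at = entry
        if self.clock() - stored_at > self.ttl:
            del self._cache[key]  # expired: force a fresh PDP call
            return None
        return decision

    def put(self, key, decision):
        self._cache[key] = (decision, self.clock())
```

Tuning `ttl` is the trade-off: a longer TTL means fewer PDP round trips but a wider window in which a revoked privilege is still honored.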

In short, XACML is an attribute-based language that helps you express restrictions and conditions, some of which can concern the freshness of attributes. It is easier in that sense than earlier access control models, which did not allow for that. However, there has yet to be a standardization effort around the overall freshness challenge. And it is no longer a problem constrained to policy modelling: attribute retrieval strategies, PEP and PIP implementations, caching, and performance all impact freshness (or the impression of freshness).

GP: Thanks very much to Gerry Gebel, David Brossard and Pablo Giambiagi for the conversation on these important topics.

Part Two: Insurance Authorization Scenarios with James McGovern

Posted March 2, 2011 by ggebel
Categories: Architecture, Authorization, Standards, XACML

The conversation with James McGovern continues… here is the next installment in a series of posts on the applicability of XACML-based authorization for the insurance industry:

JM: We had a great discussion covering basic entitlement scenarios and how they can be applied to the insurance vertical. Are you ready for some scenarios that are more challenging?

GG: Absolutely…

JM: Let’s dive into two additional insurance-oriented use cases. First, let’s talk about the concept of relationships and how they challenge the traditional notions of authorization and role-based access controls. Imagine you are vacationing in sunny Trinidad and have left your nine-year-old child home alone. Your son, having been raised by responsible parents, decides to renew your automobile registration in order to avoid paying a late penalty, but realizes he needs to also get an automobile insurance card first. How does the insurance carrier determine that your son is authorized to request an insurance card for your policy? The answer is via relationships.

Relationships in an insurance context may be as simple as confirming whether the subject is listed as a named insured on a policy, or more complicated in scenarios where a power of attorney is in place and someone with a totally different name and address, otherwise unrelated to you, may be authorized to conduct business on your behalf.

GG: This is an excellent case where the PIP interface of the policy server can call out to a directory, customer database, or web service to determine if the requestor has a relationship with the policy holder. Having the policy server, the PDP in XACML parlance, make the query simplifies things for the PEP and application. Instead, the PDP figures out what additional attributes are necessary to satisfy a particular policy.

JM: Relationships can be modeled in a variety of manners but, generally speaking, can be expressed in either a uni-directional or bi-directional manner. For example, a husband and wife have a bi-directional relationship to each other that can be named as spouse, while an elderly person may have a uni-directional relationship where the person holding the power of attorney can take actions on behalf of the individual, but not vice versa.

GG: Again, XACML policies and the PDP can evaluate relationships between entities to resolve access requests. In this example, a person with power of attorney for a parent’s account can make changes to that account because a condition in the XACML rule can dynamically validate access. Spouses can have common access to update insurance policies that they co-own because each is named on the insurance policy – again the XACML condition easily evaluates the relationship: user_attempting_access == named_insured. In this example, named_insured could be a multi-valued attribute that lists parents and children on the insurance policy. The PDP must be able to parse through the multiple values when evaluating access policies. To add another layer of context, each of the persons in the named_insured list could have different privileges where children are allowed to view the insurance policy, but not able to update or cancel it.
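The multi-valued named_insured evaluation described above can be sketched in a few lines. Here each person named on the insurance policy carries their own set of permitted actions; the data shapes and names are illustrative, standing in for what a XACML condition would express:

```python
# A sketch of the named_insured check: named_insured is a multi-valued
# attribute, and each person named on the insurance policy can carry
# different privileges. Data shapes and names are illustrative.

def evaluate_insured_action(user, action, insurance_policy):
    """insurance_policy["named_insured"] maps a person to permitted actions."""
    permitted = insurance_policy["named_insured"].get(user)
    if permitted is None:
        return "Deny"  # not named on the insurance policy at all
    return "Permit" if action in permitted else "Deny"
```

This captures both layers of the example: membership in the named_insured list, and per-person privileges such as children being able to view but not update or cancel.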

JM: In the model of delegation, the power-of-attorney may have a specified scope whereby the person holding the power-of-attorney can do actions such as make bill payments or make endorsement changes but may not have the right to cancel.

GG: The flexibility of XACML policy is evident in this case as well. For example, policies can have a “target” so that particular effects can be implemented in each scenario. In the above example, a policy with a target of “action=cancel” can have a rule that denies the action, while other actions are permitted. Alternatively, policies could be created for each action, with combining algorithms resolving any conflicting effects. Combining algorithms are defined for deny overrides, permit overrides, first applicable, and several other results.
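Three of the combining algorithms just mentioned can be sketched directly. These are simplified illustrations of the XACML semantics: each combines a list of per-policy decisions into one result, and error handling (the Indeterminate cases the standard also defines) is omitted for brevity:

```python
# Simplified sketches of three XACML combining algorithms. Each combines
# per-policy decisions ("Permit", "Deny", "NotApplicable") into a single
# result; Indeterminate handling is omitted for brevity.

def deny_overrides(decisions):
    if "Deny" in decisions:
        return "Deny"
    return "Permit" if "Permit" in decisions else "NotApplicable"

def permit_overrides(decisions):
    if "Permit" in decisions:
        return "Permit"
    return "Deny" if "Deny" in decisions else "NotApplicable"

def first_applicable(decisions):
    for decision in decisions:
        if decision != "NotApplicable":
            return decision
    return "NotApplicable"
```

In the power-of-attorney example, a per-action “cancel” policy that denies would win under deny-overrides even if a broader policy permits.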

JM: Let’s look at another insurance scenario. Within the claims administration process, you can imagine that a workflow application (BPM) along with a content management application (ECM) would frequently be used. From a business perspective, you may have a process known as First Notice Of Loss (FNOL) whereby a claimant can get the claims process started. The BPM process would handle tasks such as assigning a claims handler to adjudicate the claim, while the ECM system would capture all the relevant documentation such as the police reports, medical records if there were injuries, and photos of the car you just totaled.

Now, let’s imagine that a famous person such as Steve Jobs or Warren Buffett is out driving their Lamborghini and gets into an accident. For high-profile people, you may want to handle claims a little differently than for the general public, and so you may define a special business process for this purpose. The big question then becomes: how do you keep the security models of the BPM and ECM systems in sync? More importantly, what types of integration would be required between these two platforms?

GG: First, access policies should be designed to restrict claims processors to only handle claims that are assigned to them, or their team. This can be accomplished dynamically through the use of conditions, independent of which users get assigned to teams or groups. As noted earlier, the PIP interface is able to look up group or team membership at runtime. In addition, the insurance company may choose to implement an extra policy to further restrict access to celebrity or VIP clients. An example of where this would have been useful is the “Octo-mom” case, where employees were found to have inappropriately accessed her records. The “celebrity” policy can be targeted to resources associated with an individual, or resources can be tagged with metadata indicating that a special handling policy applies. In the PDP, results from multiple policies are resolved with the combining algorithms defined in XACML – first applicable, deny overrides, permit overrides, etc.
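A simple sketch of how a targeted “celebrity” policy could override an ordinary team-based permit under deny-overrides; all policy names and attribute keys here are invented for illustration:

```python
# Each policy has a target (attribute values it applies to) and an effect.
POLICIES = [
    {"name": "celebrity-restriction",
     "target": {"resource.handling": "vip"},
     "effect": "Deny"},
    {"name": "assigned-team-access",
     "target": {"subject.team": "claims-team-1"},
     "effect": "Permit"},
]

def decide(request):
    # Collect effects from every policy whose target matches the request.
    effects = [p["effect"] for p in POLICIES
               if all(request.get(k) == v for k, v in p["target"].items())]
    if "Deny" in effects:          # deny-overrides combining algorithm
        return "Deny"
    return "Permit" if "Permit" in effects else "NotApplicable"

ordinary = {"subject.team": "claims-team-1", "resource.handling": "standard"}
vip = {"subject.team": "claims-team-1", "resource.handling": "vip"}
print(decide(ordinary))  # Permit
print(decide(vip))       # Deny
```

The same processor who can open an ordinary claim is blocked on a VIP-tagged one, without touching the team-access policy.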

Regarding integration between BPM and ECM systems, it appears there are multiple options here. In one example, the ECM system can defer access decisions to the BPM layer, which can be effective if the only access to records is through the BPM layer. If access to ECM records flows through different applications, then both ECM and BPM should use the same authorization policies/system. If they use the same authorization system, BPM and ECM are using the same policies by definition and can therefore implement access controls consistently.

JM: It is a good practice not only to assign the claim to a team but also to ensure that people outside that team do not have access (in order to respect privacy). The challenge is that teams aren’t static entities and may not be statically provisioned. This model doesn’t just occur within business applications but is a general challenge in many enterprise systems. As you are aware, the vast majority of enterprise directory services tend to have a view of the organization and its people through the lens of reporting relationships, not team composition and how work actually gets done. The notion of the matrixed organization can further blur authorization models.

GG: I agree that directories are not always able to easily represent matrixed relationships within an organization. Ad hoc groups can be created for projects or teams, but can be difficult to manage and keep current. In some cases, virtual directories can provide a more flexible way to surface different views of directory data. The bottom line is that you can’t implement dynamic policies if the necessary relationship data is not available.

JM: Are there practices you recommend that enterprises should consider when modeling directory services to support the authorization scenarios described so far?

GG: Yes, there are a number of things to consider regarding directory services when dealing with attribute based access control systems. In general, here are some key points:

  • We tend to prefer using existing, authoritative attribute sources – rather than force any kind of directory service re-design. In typical organizations, this means that privilege-granting attributes could be stored in several repositories that include directories, as well as databases or web services. At some point, the organization may choose to implement a virtual directory product, which gives them a lot of flexibility in aggregating attributes and providing custom schemas for the various consuming applications – including ABAC systems.
  • When constructing XACML policies, the policy author does need to think about where attributes are stored because of performance implications. Attributes may be local to the application or possibly remotely stored in another security domain. Even local attribute lookups can be an expensive operation if the repository does not operate efficiently. There are many techniques to deal with performance, but they must be dealt with in order to achieve adequate response times for interactive users.
  • A corollary to the previous point is the question of which component performs the attribute lookup: the PEP or the PDP? The PEP will naturally have access to several attributes, such as userID, action, target resource, and some environmental variables. The PEP could look up additional attributes, but it does not necessarily know which policies will be evaluated. Therefore, it is normally better for the PDP to do attribute lookup after it determines which policy(ies) to evaluate.
  • Data quality is always an issue in directory services. As a former colleague, Larry Gauthier, was fond of saying, “Even if you admit your directory data is dirty, it is most likely filthy.” Once an organization starts writing access policies that utilize dirty data, it’s possible that incorrect decisions could be the result. The solution isn’t necessarily technical, but could impact processes that are responsible for updating and maintaining user data – whether that’s in the HR system, enterprise directory, CRM database, or other repositories.
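To illustrate the PDP-side lookup point above, here is a small Python sketch where the PDP asks the PIP only for attributes a selected policy needs that the request did not supply. The names and interfaces are assumptions, not a real XACML API.

```python
# Hypothetical attribute store the PIP fronts (directory, database, etc.).
PIP_STORE = {("alice", "team"): "claims-team-1"}
pip_calls = []

def pip_lookup(subject, attribute):
    pip_calls.append((subject, attribute))   # record each runtime lookup
    return PIP_STORE.get((subject, attribute))

# The policy declares which attributes its rule needs.
POLICY = {
    "required": ["team"],
    "rule": lambda attrs: "Permit" if attrs["team"] == "claims-team-1" else "Deny",
}

def evaluate(policy, request):
    attrs = dict(request)
    for name in policy["required"]:
        if name not in attrs:                # fetch lazily, only when missing
            attrs[name] = pip_lookup(attrs["subject"], name)
    return policy["rule"](attrs)

print(evaluate(POLICY, {"subject": "alice", "action": "view"}))  # Permit
```

Because the PDP fetches only after selecting the policy, attributes irrelevant to the decision are never looked up, which matters for the performance point above.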

JM: Are you aware of any BPM or ECM vendors that are currently supporting the XACML specification? If not, what do you think enterprise customers can do to help vendors who remain blissfully ignorant to the power of XACML to see the light?

GG: I am not aware of any BPM or ECM vendors that support XACML today. Documentum has published how to add an XACML PEP to their XDB, but I don’t know of their broader plans, if any, to support XACML.

I think customers need to continue pressing vendors to externalize authorization and other identity management functionality from their applications. Customers can do this directly via their product selection process and by proxy through their industry analyst resources. ISVs should not expect to operate in a silo anymore because applications have to interact with each other. It is extremely difficult to implement consistent access policy across multiple policy domains, and you would think that application vendors would have gotten this message by now. Further, XACML is a very mature standard that can be easily integrated into new application development and is also feasible for retrofitting many existing applications. Again, the key is for customers and analysts to force the issue with application and infrastructure vendors.

Stay tuned for Part Three…

 

Have it your way

Posted March 1, 2011 by ggebel
Categories: Architecture, Authorization

Recent conversations with prospective customers have made me think of the longtime Burger King slogan, “have it your way”. For Burger King, it was a way to offer an alternative approach to the one-size-fits-all menu of its competitors – chiefly McDonald’s. In most fast food restaurants, it is difficult to make modifications to your order – you have to take it the way the restaurant makes it. Don’t like pickles on your burger? Too bad, take them off yourself.

Does the same situation apply when buying enterprise software and middleware? I am afraid so. By now you are all familiar with the paternal tone used by large vendors when they are describing “their” vision, strategy and architecture. “They” know best and customers should just follow obediently. Of course, doing so also potentially locks the customer in for the long term, with the associated high license and maintenance costs.

However, customers are starting to push back. One architect said it best this way, “I appreciate that you (Axiomatics) adjust your solution to our preferred architecture, unlike other vendors that attempt to force our business model into their architecture.” That’s right, there is a choice. Standards such as XACML, SAML, and web services along with new delivery models, such as cloud platforms, are giving enterprises more ways to deploy applications and connect with their partners. Enterprises are finding that they can’t be beholden to a particular vendor’s architecture, but must work with vendors that can accommodate and adjust to the rapidly changing needs of their business operations.

Part One: Insurance Authorization Scenarios with James McGovern

Posted February 16, 2011 by ggebel
Categories: Authorization, Standards, XACML

In my past role of Industry Analyst at Burton Group, I used to have frequent conversations with James McGovern, who at the time was Chief Security Architect for The Hartford and is now a Director with Virtusa, where he focuses on Enterprise Architecture and Information Security. Recently, we had a dialog on applying XACML in an industry vertical context. This exchange was inspired by similar conversations I had with Gunnar Peterson where we discussed the applicability of XACML-based solutions to some more general security scenarios. For readers new to XACML, you can find some additional information elsewhere on this blog as well as at http://www.axiomatics.com. Below is a transcript of our conversation…

JM: Let’s dive into three different scenarios using examples from insurance where making proper authorization decisions is vital, and understand how XACML can provide value.

GG: That sounds great James, thanks for bringing up these industry specific examples so we can have a discussion of XACML based systems in that context.

JM: Let’s jump into the first scenario. An independent insurance agent will do business with an insurance carrier through a variety of channels. One method is to visit the carrier’s website that is dedicated to independent insurance agents. The carrier may use web access management (WAM) products for providing security to the website. Another method may be to conduct transactions from their agency management system, which is either installed in their data center (large agencies) or hosted in a SaaS manner (small agencies). The agency management system may create XML-based transactions that are sent to the carrier’s XML gateway for processing. Another method still would be for the agent to conduct a transaction via telephone using interactive voice response (IVR) systems.

In all three scenarios, the independent insurance agent may execute transactions such as requesting a quote, where it is vital not only that each individual channel remain secure, but that, from a business security perspective, all the channels have the same security semantics.

GG: First, I will not address the authentication challenge across these multiple channels and will focus on authorization only. With an XACML-based system, you can indeed implement and enforce the same policies across multiple channels. In the example you cite above, here is where the policy enforcement points (PEPs) would be inserted:

  1. Web access management tier: At this level, let the WAM system do what it does best – manage authentication and the user session. For authorization, WAM integration with an XACML PDP can be implemented in multiple ways. For example, the WAM policy server can call out to the PDP (act like a PEP) or an XACML specific PEP can be installed at the application (website) to handle authorizations.
  2. Agency management system: If the on premises AMS and SaaS AMS are both accessed via an XML gateway, then the gateway acts as the PEP and enforces policies that are evaluated by the PDP. XML gateways are a great way to secure web services because most (all?) of them support the SAML profile for XACML or can integrate with an XACML vendor’s API.
  3. IVR system: This one could be a bit trickier, but the idea is that a PEP can be built for most any environment. If the IVR vendor permits it, then a Java or .NET PEP can be developed pretty quickly to connect with an XACML PDP.

There are many deployment options for where PDPs are installed or policies are managed, but the bottom line is that resources accessed through multiple channels can be protected by a common set of policies and authorization infrastructure.
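As a toy illustration of the multi-channel point, the following Python sketch shows three channel-specific PEP adapters delegating to one PDP function. The channels, message shapes, and policy are invented for the example; a real gateway would parse actual XML and a real PDP would evaluate XACML policies.

```python
def pdp_decide(subject, action, resource):
    """One common policy: agents may request quotes."""
    if action == "request-quote" and subject.startswith("agent-"):
        return "Permit"
    return "Deny"

def wam_pep(session):
    # Website channel: attributes come from the WAM-managed session.
    return pdp_decide(session["user"], session["action"], session["resource"])

def gateway_pep(message):
    # XML gateway channel: attributes extracted from the transaction.
    return pdp_decide(message["from"], message["op"], message["target"])

def ivr_pep(caller_id, menu_choice):
    # IVR channel: the DTMF menu choice maps to an action.
    action = {"1": "request-quote", "2": "cancel-policy"}[menu_choice]
    return pdp_decide(caller_id, action, "quote-service")

print(wam_pep({"user": "agent-007", "action": "request-quote",
               "resource": "quote-service"}))                          # Permit
print(gateway_pep({"from": "agent-007", "op": "request-quote",
                   "target": "quote-service"}))                        # Permit
print(ivr_pep("agent-007", "1"))                                       # Permit
```

Because every channel funnels into the same decision function, the same request yields the same answer regardless of how it arrived.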

JM: The IVR scenario is just one example of authorization issues that occur in a telephony environment. In the investment community, there is the notion of a “Chinese Wall,” where an investment firm, for regulatory reasons, may need to prevent phone conversations between individuals in different departments – for example, to keep an employee working on mergers and acquisitions from sharing non-public information with those in the trading department.

GG: XACML integrations across a variety of channels are also used at banks – employee accounts are marked as such to enforce access policies, provide employee discounts, etc. Integrating XACML isn’t just valuable for websites, web services and IVRs; it can also work with instant messaging applications, turrets and email to support the concept of Chinese Walls or other regulatory considerations.

JM: Let’s look at another scenario. A large insurance broker may employ hundreds of insurance agents that interact with multiple insurance carriers on a daily basis. From a financial perspective, the broker would like the insurance carriers to provide up-to-the-minute details on commissions from selling insurance products. The challenge is that the insurance carrier may need to understand the organizational structure of the insurance broker so as to not provide information to the wrong person. For example, one insurance broker may organize by regions (e.g. north, south, east, west), another may organize around size of customer (e.g. large, medium, small), while another still may organize around the types of products sold (e.g. personal, commercial, wealth management, etc.). In this scenario, the broker may want the managers of each region to see only their own information, not that of their peers in other regions.

The ability of an insurance broker to dynamically describe its authorization model to a foreign system at runtime becomes vital to conducting business.

GG: The flexibility of an attribute based access control (ABAC) model, such as the XACML policy language, is very useful in this scenario. From the insurance carrier perspective, it is quite easy to represent the various policies that need to be implemented for each broker. In XACML, attributes are defined in four categories (you can also define additional categories): subject, action, resource, and environment. For the broker organized by region, values such as north, south, etc. are passed as subject attributes. Data such as <large customer> or <commercial> are passed as resource attributes to the PDP (either via the PEP or through the PIP interface). The carrier’s PDP will evaluate requests based on its defined policies to determine whether access is permitted or denied. Further, the PDP can also send an obligation back to the PEP with the decision – read access to the commission report is granted, but redact sections 2, 5 and 8.

JM: The ability to make authorization decisions in the above scenario requires the ability to describe an organizational structure. This scenario not only applies to the carrier-to-agency relationship but could be equally applicable for internal applications such as procurement, where you may have a rule that someone two job grades above you must approve all expenses. Could you describe in more detail how XACML can support hierarchical constructs?

GG: To answer the question it’s important to use the right resource model (from the hierarchical resource profile). If the hierarchy is represented using “ancestor attributes” (§2.3), then there won’t be enough information to identify the manager two levels up. What is needed is a richer hierarchical model, e.g. using XML documents (§2.1), URIs (§2.2) or a slight modification of §2.3 to add an attribute that explicitly identifies a “grandparent” resource (or manager).

If the hierarchy is represented using an XML document, then the policy would use an AttributeSelector with an XPath expression that can easily pick a node two levels above any other. The same goes for an ‘n’ degree relation where ‘n’ is a constant known at policy-authoring time. If the degree ‘n’ is dynamically provided in the form of some XACML attribute, then this might be harder to achieve and the individual case would have to be analyzed before coming up with a recommendation.
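To show the “two levels up” idea on an XML-modeled hierarchy, here is a Python sketch using the standard library’s ElementTree. A real XACML policy would use an AttributeSelector with an XPath ancestor expression; ElementTree’s limited XPath support lacks an ancestor axis, so the sketch walks a child-to-parent map instead. The org chart is invented.

```python
import xml.etree.ElementTree as ET

# Invented org chart: alice reports to director, who reports to vp.
ORG = """<org>
  <employee id="vp">
    <employee id="director">
      <employee id="alice"/>
    </employee>
  </employee>
</org>"""

def approver(xml_text, emp_id, levels=2):
    """Return the id of the manager `levels` above emp_id, or None."""
    root = ET.fromstring(xml_text)
    # ElementTree has no ancestor axis, so build a child -> parent map.
    parent_of = {child: parent for parent in root.iter() for child in parent}
    node = next(e for e in root.iter("employee") if e.get("id") == emp_id)
    for _ in range(levels):
        node = parent_of.get(node)
        if node is None or node.tag != "employee":
            return None  # ran out of managers before reaching `levels`
    return node.get("id")

print(approver(ORG, "alice"))     # vp
print(approver(ORG, "director"))  # None (no manager two levels up)
```

With a constant degree ‘n’ this is a one-liner in XPath; a dynamically supplied ‘n’ is what pushes the problem toward richer PIP logic, as noted above.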

In practice, it may not suffice to simply use the base hierarchical resource profile. Other solutions may be needed – for example, using richer PIPs that massage the information into a format that facilitates policy authoring. [1]

JM: Let’s look at the scenario of an independent insurance agent and how they may access a given insurance carrier’s claims administration systems. The carrier may have an authorization rule that states any agent can access information for all policyholders for whom they are the agent of record.

Taking this one step further, when an insurance agent purchases workers’ compensation insurance for their own business, the agent may end up with conflicting access rights without the right authorization model, since the agent is in the role of both agent and policyholder. When an otherwise authorized employee of the agency needs to file a workers’ compensation claim for themselves, other employees of the agency should not be able to view the claims of their coworker.

GG: This scenario can also be modeled in XACML policy provided that all the necessary attributes are available. To turn your example around 180 degrees, when an agency employee views the status of their own workers’ compensation claim, they should only be able to see their own records and not the records of fellow employees. Of course, in performing normal work tasks, agency employees should also see any client records that they would otherwise have access to. Ideally, workers’ compensation claim records should be tagged with an additional attribute to indicate the claim is for an agency employee as opposed to a claim from a customer.

JM: A big challenge in getting this right is to make sure that you modeled identity correctly. Historically, many systems would have modeled an agent, an employee policyholder and a claimant as distinct entities. Today, we have to think about them more as personas or roles that are more dynamic in their usage. The party model would be a better modeling approach in this regard.

GG: Ideally, if your system has a proper identity model, then implementing sound authorization models becomes easy. On the chance that your identity model is less normalized, you can use the PIP interface to accomplish the same goal by first detecting whether two distinct entities are the same. For example, a request may come into the PDP containing only the employee ID attribute, but the PDP recognizes that it must look up additional attributes before evaluating the policy. The employee ID can be used as the index to look up additional attributes on the user – possibly the SSN, department number, cost center, etc. – in a directory or HR database.
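Here is a hedged sketch of that enrichment step: the request arrives with only an employee ID, and the PDP resolves it against a directory before evaluating a “view your own claim only” rule. All identifiers and field names are illustrative.

```python
# Hypothetical directory mapping employee IDs to the underlying person.
DIRECTORY = {"emp-42": {"person_id": "p-7", "department": "claims"}}
# Hypothetical claims store keyed by claim ID.
CLAIMS = {"claim-1": {"claimant": "p-7"},
          "claim-2": {"claimant": "p-9"}}

def decide_claim_view(request):
    subject = dict(request["subject"])
    if "person_id" not in subject:
        # PIP-style lookup: use the employee ID as the index to enrich
        # the subject with the attributes the policy actually needs.
        subject.update(DIRECTORY[subject["employee_id"]])
    claim = CLAIMS[request["claim_id"]]
    if request["action"] == "view" and claim["claimant"] == subject["person_id"]:
        return "Permit"
    return "Deny"

own = {"subject": {"employee_id": "emp-42"}, "action": "view",
       "claim_id": "claim-1"}
coworker = {"subject": {"employee_id": "emp-42"}, "action": "view",
            "claim_id": "claim-2"}
print(decide_claim_view(own))       # Permit
print(decide_claim_view(coworker))  # Deny
```

The agent-as-policyholder conflict dissolves once both personas resolve to the same person_id before the rule is evaluated.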

Stay tuned for part two…


[1] Thanks to my colleague Pablo Giambiagi for providing input to this question

 

Take 2, talking authZ with Gunnar

Posted January 28, 2011 by ggebel
Categories: Uncategorized

Gunnar Peterson and I continued our discussion about authorization recently, this time talking about use cases, opportunities for architects, and how to get people to understand XACML.

GP: Gerry, last time we chatted we talked about the overall architecture of XACML and some of the needs it fills for security architects in 2011. For this thread I wanted to talk about concrete use cases. The one use case that seems to come up a lot when showing the value of XACML is healthcare; these are quite fascinating use cases that include fine-grained authorization and privacy concerns, but this is not something that applies to all systems.

I wanted to get your thoughts on deployments where XACML use cases are for core access control and integrity. In my mind, XACML fills a need in both Mobile and Cloud use cases.

There are so many different Cloud Use Cases and deployments, but one area where XACML seems to solve a clear and pressing need is for fine grained authorization in PaaS and SaaS, where the following conditions apply.

* The Cloud Consumer would like to minimize use and control sharing of certain highly sensitive attributes, but would still like to shift the major parts of the software platform to PaaS/SaaS
* The Cloud Consumer (company) needs to store some sensitive data and functionality in PaaS/SaaS
* The user accounts are managed by the Cloud Consumer and the PaaS/SaaS relying party consumes SAML tokens (or other) at runtime

To me this looks like a core access control use case for XACML in the Cloud, where the run time access control resolves the policy dynamically based on the contents of the SAML token. Are there examples of this being done today?

GG: The scenario you are describing is covered by what I call the “anywhere architecture” that XACML addresses – see previous post on this topic. Access to cloud hosted data and resources can be controlled in a number of deployment configuration options. For example, you can have application specific PEPs at or near the resource, or utilize an XML gateway for the PEP. Policy servers (PDPs) can be installed on premises or hosted at the cloud provider. Similarly, policies can be managed where most appropriate and distributed to PDPs for run time enforcement.

When combined with a federation approach to authentication, you get the best of both worlds. That is, users can authenticate to their home domain regardless of where they are and access data based on policy – wherever that data is hosted.

We have been in discussions with some organizations that are investigating cloud hosting options for applications and data, but this is mostly in exploratory phases. What is reassuring to them is the fact that the XACML architecture does in fact cover such scenarios and provides the flexibility they are looking for.

While not a cloud-hosted system, the Swedish National Health Care system does use a combination of SAML and XACML to control access to patient data. So your premise that healthcare is an excellent use case for XACML controls is certainly true. However the requirement to restrict access to specific data elements holds true across many other industry segments.

GP: In the Mobile case, we know that the devices are occasionally connected. So some session data will be cached locally on the device – for when you go into the elevator, drive through a tunnel, or the stewardess tells you to turn off your phone before takeoff.
We also know that mobile device usability is pretty good but from a usability standpoint we want to minimize re-authentication for the user and extended authentication dances.

So both of these Mobile trends argue for delivering some content to the device that requires authorization out of band of the initial data flow – the ability to store an attribute cache and attach the policy to use if needed. This strikes me as right in the wheelhouse of XACML – have you seen these types of deployments?

GG: We are starting to see some interesting mobile use cases emerge, but it’s still a bit early to make many concrete statements on how an XACML model could accommodate them. That said, here are some preliminary thoughts:

1. If a policy decision is rendered while the mobile device is connected, that decision could be cached locally for some period of time and re-used for a subsequent access request. This technique is useful for performance and efficiency when online, but also quite effective when you are offline – provided that the TTL limit is sufficient.
2. For full offline processing of authorization, you have to consider how much of the authZ apparatus can be tolerated on the mobile device. This includes the attributes and policies you mentioned, but also a decision engine of some sort. What might be more efficient is to preload decisions in some way. That is, if you possess a token or claim that indicates you are the owner of a health record – then you can view it.
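Point 1 can be sketched as a small TTL-based decision cache. The interface here is an assumption for illustration; a real deployment would also need cache invalidation, secure storage on the device, and careful key design.

```python
class DecisionCache:
    """Cache PDP decisions on the device so a recent decision can be
    reused while offline, until its TTL expires."""

    def __init__(self, pdp, ttl_seconds):
        self.pdp = pdp
        self.ttl = ttl_seconds
        self.cache = {}      # (subject, action, resource) -> (decision, time)
        self.pdp_calls = 0

    def decide(self, subject, action, resource, now):
        key = (subject, action, resource)
        cached = self.cache.get(key)
        if cached is not None and now - cached[1] < self.ttl:
            return cached[0]              # still fresh: no PDP round trip
        self.pdp_calls += 1               # cache miss: ask the PDP (online only)
        decision = self.pdp(subject, action, resource)
        self.cache[key] = (decision, now)
        return decision

cache = DecisionCache(lambda s, a, r: "Permit", ttl_seconds=300)
print(cache.decide("alice", "view", "record-1", now=0))    # Permit (PDP call)
print(cache.decide("alice", "view", "record-1", now=200))  # Permit (cached)
print(cache.decide("alice", "view", "record-1", now=400))  # Permit (TTL expired)
print(cache.pdp_calls)                                     # 2
```

The TTL is the policy lever: short enough that a revoked privilege doesn’t linger, long enough to survive a tunnel or a flight.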

GP: I used to struggle with how to convey XACML’s value proposition to people, so I would just show them the mechanics and hope they could see it. But now I say XACML is useful in two scenarios: one, where your data is on the move, and two, when you have more fine-grained access decisions to make. I am sure there are other scenarios, but those seem to be two quite important issues that many companies have to deal with today and XACML can play a role in addressing.

As to the data on the move, business has already decided that this is happening, whether it’s Cloud, Mobile or just good old fashioned Web apps. It’s our job in security to catch up to where the business is going. The idea that the data can be in motion, and crucially that it can carry its security policy with it, is a very useful property for a security architect to have at their disposal. You mentioned in response to the Cloud question that when using XACML plus a “federation approach to authentication, you get the best of both worlds”; this approach seems not only to be the best of both worlds, but one of the only ways to deliver on the flexibility of use case scenarios that the business is driving and the distributed technical architecture that runs those deployments.

Is there an end to end model (or models) emerging here?

GG: When you say end-to-end model, it makes me think of an architecture that is sort of tightly coupled. What has emerged over the last few years is the ability to piece together application and infrastructure components based on a number of important standards, such as SAML, XACML, OAuth, WS-Security, WS-Trust, etc. The existence of standards makes it more feasible to build implementations that are flexible enough to support the kind of mobile models you are describing.

Data on the move is one issue, but “access on the move” is another perspective that I think should be treated slightly differently. Certainly when a data element moves from protected storage to some other location, you may want to attach policy that controls its usage downstream. Access on the move is at least as compelling because in this case you have power users demanding the ability to perform their jobs from any computing device regardless of where they are located. Consequently, it can be difficult to build an access model that applies the same policy rigor across all these channels and devices.

The challenge for security architects is to develop solutions that accommodate mobile data and mobile workforces, instead of just saying ‘no’.

GP: As a security architect, I take it as a given that authentication happens in one place and authorization happens in another. As a general rule of thumb I want my authentication process to happen as “close” to the user/subject as possible, where they know the most about the user; and I want my authorization processes to fire as “close” to the resource as possible, where the authorization process has the most specific knowledge as to the authorization request. XACML helps me resolve the back half of that problem.

It strikes me that the combination of SAML to ferry the authentication/attribute assertions to the Anywhere architecture (XACML) on the resource side closes some gaps that most security architectures have today: lots of systems have strong authentication, some systems have good fine-grained authorization, but almost all systems have systemic breakdowns in the middleware plumbing between those two – and those two domains, authentication and authorization, have to work together.

Are you seeing the drivers for XACML being around the resource domain or enabling an end to end model?

GG: There are multiple drivers for externalized authorization based on XACML, but let’s focus on a couple. First there is the audit and compliance angle (I recognize that the industry has beaten this issue to a pulp in the last couple of years…). As someone told me recently, “if we can implement externalized authorization, then we will actually have a chance to answer auditors’ questions when they ask about access to these applications.” This is still a big issue for organizations, particularly those that are deploying more sophisticated applications – they must have more granular access control over the application’s functions and they prefer to do it in a standards-based way.

Second, an organization’s most precious data is exactly what outsiders like partners and customers want access to. A policy language like XACML allows you to share exactly the right data and nothing more to the right users. You could say this is resource domain focused. Standards like SAML play a role because an organization doesn’t want to, and shouldn’t, manage credentials for most outside users.

GP: Role based access control has done well in terms of adoption for many reasons, but I think one important reason has little to do with technology: a Role is a real thing in an organization. It’s easy for people to grasp that a DBA or a nurse or a stock trader should have a different set of privileges and that we can aggregate them into Roles. In ABAC, we have a far more powerful, generic framework that can build much more robust authorization frameworks than RBAC. The generic part of ABAC is what gives it its architectural power, but it also makes ABAC much harder to communicate because you don’t anchor the description on a fixed concept like a Role. It seems that patterns could fill this gap, but many of the patterns we have seen so far are things like delegation, which is important but refers to a domain-specific authorization concern. Are there patterns that you are seeing emerge that help map some ABAC capabilities to organizational Parties, Places and Things?

GG: There has been a similar discussion on LinkedIn (http://linkd.in/ekLLqo) that has more than 180 comments so far. The RBAC model has a huge head start in terms of mindshare – it has been promoted for something like 30 years. As you point out, a role is something that people recognize, and it encapsulates at least a high-level notion of what privileges should be granted to a person assigned to that role. One of the challenges with roles is that they are often not granular enough: doctors can view patient information, but it should only be for the patients assigned to them. Nurses can update patient status, but only while they are on duty, from the nurses’ station, and if they badged into the clinic. These kinds of rules can be difficult to represent in a role. With an ABAC language like XACML, it is very easy to represent such organizational or security policies. While it is not as “human readable” as a role, XACML policy is a much more comprehensive way to represent access policy, where roles are just one of the attributes that can be utilized.
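Those doctor and nurse rules can be sketched as ABAC conditions in a few lines of Python, with role as just one attribute among several. All attribute names are illustrative, not real XACML identifiers.

```python
def permit(subject, action, resource, environment):
    if subject["role"] == "doctor" and action == "view":
        # Doctors see only the patients assigned to them.
        return resource["patient_id"] in subject["assigned_patients"]
    if subject["role"] == "nurse" and action == "update-status":
        # Nurses update status only while on duty, from the nurses'
        # station, and only if they badged into the clinic.
        return (environment["on_duty"]
                and environment["location"] == "nurses-station"
                and environment["badged_in"])
    return False

doc = {"role": "doctor", "assigned_patients": ["pt-1"]}
print(permit(doc, "view", {"patient_id": "pt-1"}, {}))  # True
print(permit(doc, "view", {"patient_id": "pt-2"}, {}))  # False

nurse = {"role": "nurse"}
on_shift = {"on_duty": True, "location": "nurses-station", "badged_in": True}
print(permit(nurse, "update-status", {"patient_id": "pt-1"}, on_shift))  # True
```

The role attribute alone decides nothing; it is the assignment, time, location, and badge attributes that carry the real policy, which is exactly the granularity RBAC struggles with.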

GP: As much as people think about computing technology, tools and protocols, real world deployment is all about people and process. How do you get organizations to grok XACML? What kinds of things do they need to do in the development and operational processes to get benefits from XACML?

GG: First they need to buy into the notion that externalized authorization is the right architectural approach for building and deploying applications. Fortunately, the number of architects thinking this way is growing and they are causing software makers to change course. Once you embrace this model, here are three important things to consider:

1. How will applications interface with the XACML service? In essence, this means: how will the application construct an XACML request and process the XACML response for access decisions? Each application style is a little different in where the necessary attributes (subject, action, resource, environment) are stored, so the PEP may need to be adjusted. In some scenarios, an XML gateway can be a low-footprint way of dealing with web services applications.

This also implies that developers and application owners need to buy into the externalized authorization model. Security pros will recognize this point as critical – think of how many times you have attempted to sell developers and app owners on a new idea; your mileage may vary.

2. Who will author and manage XACML policies? As with other access policies, administrators will work with application and security experts to construct XACML policies. Administrators learn this skill fairly quickly, but they do need some training and guidance at first.

3. Where are the attributes? Since XACML is an attribute-based language, organizations may need to do some work to improve the care and maintenance of privilege-granting attributes. For example, if a request comes into the PDP that Alice, a doctor, wants to update a patient record, the PDP may need to find out more information about Alice. Is Alice affiliated with the hospital where the patient is located? Is Alice the primary physician assigned to the patient's case? This information could be stored in a directory, database, patient record system, etc. Further, if these attributes are used to grant access, they must be properly maintained to ensure accuracy.
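As a rough illustration of point 1, here is how a PEP might assemble the four XACML attribute categories before encoding them on the wire; the attribute names and values are hypothetical, and a real PEP would serialize this structure into the XACML request format (or use a vendor SDK):

```python
def build_request(subject, action, resource, environment):
    """Assemble the four XACML attribute categories into a simple
    request structure (a stand-in for the real XML encoding)."""
    return {
        "subject": subject,
        "action": action,
        "resource": resource,
        "environment": environment,
    }

# Hypothetical attribute values for the doctor/patient-record scenario.
request = build_request(
    subject={"id": "alice", "role": "doctor"},
    action={"id": "update"},
    resource={"type": "patient-record", "id": "p-1001"},
    environment={"time": "2011-05-04T10:00:00Z"},
)
```

The point of the sketch is that the PEP's job is mostly gathering: everything it already knows goes into the request, and anything missing is left for the PDP to resolve via the PIP.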

Talking authorization with Gunnar Peterson

Posted December 15, 2010 by ggebel
Categories: Authorization, Standards, XACML

Gunnar Peterson and I had a discussion about why authorization should start to receive more attention in the infosec industry. He feels that most infosec pros are over emphasizing authentication and it’s time to look more toward authorization. Since I now work for Axiomatics, I couldn’t agree more :-). Here is a transcript of the conversation:

GP: Authentication gets so much attention in security; for example, there are dozens of authentication types supported by SAML. This is due to the many, many attempts that people have made at improving authentication over the years, but authentication is really a guess. There are ways to make better or worse guesses, but once that guess is bound to a principal, the game changes. Authorization is mainly about solving puzzles (not guessing), so it seems to me that infosec as a whole should spend more time getting its “puzzle logic” implemented right, ensuring that authorization rules have the necessary coverage, depth and portability. Why is it that people have been so easily seduced by authentication, and what can we do to get people to focus less on this quixotic pursuit and more on solvable problems like authorization?

GG: The focus on authentication, in many ways, is justifiable because much authorization was embedded within the authentication process. If you could authenticate to the application, then you had access to all the information – in a general sense. This approach is manifested in many legacy applications, early security perimeter strategies, and first-generation portals. In today’s environment, authorization approaches must be much more discriminating due to regulatory, privacy, or business requirements. Further, the number of applications or resources that are not shared with an outside party has essentially been reduced to zero – the most valuable assets are the ones of most interest to your partners and customers. This transition from “need to know” to “need to share” is shifting the focus to authorization, and I believe we are seeing the early signs of enterprises placing more focus on it. Ultimately, authentication remains important because we still need to know who is accessing the information: not necessarily their full identity, but enough privilege-granting attributes about them.


GP: What is driving this shifting focus towards authorization? How much is driven by the management-of-sharing problem present in today’s applications – Web applications, Web services, Mobile and Cloud – where architectures are so distributed and occasionally connected that they are forced to authenticate in one place and authorize in another? And how much is driven by the applications themselves becoming more sophisticated, with more functionality, data and layers that require more fine-grained authorization? Can general purpose frameworks like XACML help in both the technology architecture and the language expressiveness, or are there different patterns required?

GG: There are many forces at work that are changing perspectives on how identity management functionality and services should be implemented. First, it is quite logical to have authentication take place completely disconnected from the application or data resource – think of the federated model. Second, applications are much more sophisticated than what was being developed even a few years ago. That level of complexity and granularity must be met by an equally sophisticated and comprehensive authorization scheme. Finally, XACML is well suited to meet the complex requirements we are referring to. XACML is a mature standard (work on it began in Feb 2000) that comprises a reference architecture model, a policy language, and a request/response protocol. The XACML architecture is well suited to protect application resources whether they are centralized in your data center or distributed across data centers, private clouds, public clouds, partners, etc. And the core XACML policy language, plus profile extensions such as the hierarchical resource profile, can model very complex business rules that will address the vast majority of use cases.


GP: Bob Blakley’s paper on “The Emerging Architecture of Identity Management” from earlier this year described three architecture models – first a traditional Push model, then a more dynamic future state based on a Pull model, and a third hybrid model for moving from Push to Pull that helps enterprises make incremental progress. The pure Pull model looks to solve what I regard as the single biggest security issue we face today – poor integration. Can you discuss how XACML-based architectures play a role in these Push and Pull models? Is XACML applicable in all models, or are some more in its sweet spot than others?

GG: In fact, Bob includes XACML in his emerging architecture – referring to products of this class as “XACMLoids.” The primary value XACML brings is in externalizing authorization from business applications – “Externalized Authorization Managers” is another excellent report that Bob has recently written on this topic.

In the Push model, XACML systems have a smaller role since identity data is synchronized with or pushed to the application specific identity repositories. In the Pull model, applications call out to the XACML service for authorization decisions using the XACML request/response protocol. If additional attributes are needed to make an authorization decision, the PDP engine can retrieve attributes through a virtualization service. XACML works equally well in the hybrid model – the difference here is that the application does need to persist some identity information in a local repository. That said, the sweet spot for XACML-based systems is likely the pure Pull model or the hybrid scenario.


GP: So beyond the XACML language framework, can you briefly describe the components that an authorization system needs to implement the XACML architecture in the pure Pull and hybrid scenarios?

GG: The components identified in the XACML reference architecture are:

Policy Decision Point (PDP): this component receives access requests, evaluates them against known policies, and returns a decision to the caller.

Policy Enforcement Point (PEP): this component is located at or near the resource and can be integrated as a gateway, co-located with the application, or embedded within the same process as the application. PEPs basically construct the request in XACML format, forward it to the PDP over a particular protocol, and enforce the decision returned from the PDP.

Policy Information Point (PIP): XACML is an attribute based access control (ABAC) model and therefore thrives on attributes. If the PEP does not send all the necessary attributes in the request, the PDP retrieves the additional attributes via the PIP interface.

Policy Administration Point (PAP): Here the policies are written, tested, and deployed – all the expected policy lifecycle management functions.

Policy Retrieval Point (PRP): This is where the policies are stored by the PAP and retrieved by the PDP.
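These five components can be wired together in a toy in-process flow (all class and attribute names here are illustrative, not a vendor API); the PAP’s role is played by the lines that author a policy and store it in the PRP:

```python
class PRP:
    """Policy Retrieval Point: stores policies deployed by the PAP."""
    def __init__(self):
        self.policies = []

class PIP:
    """Policy Information Point: resolves attributes missing from the request."""
    def __init__(self, store):
        self.store = store
    def lookup(self, subject_id, attr):
        return self.store.get(subject_id, {}).get(attr)

class PDP:
    """Policy Decision Point: evaluates a request against stored policies,
    calling out to the PIP for extra attributes as needed."""
    def __init__(self, prp, pip):
        self.prp, self.pip = prp, pip
    def decide(self, request):
        for policy in self.prp.policies:
            if policy(request, self.pip):
                return "Permit"
        return "Deny"

class PEP:
    """Policy Enforcement Point: forwards the request and enforces the decision."""
    def __init__(self, pdp):
        self.pdp = pdp
    def enforce(self, request):
        return self.pdp.decide(request) == "Permit"

# PAP role: author a policy ("the subject must be affiliated with the
# hospital named in the request") and deploy it to the PRP.
prp = PRP()
pip = PIP({"alice": {"affiliation": "general-hospital"}})
prp.policies.append(
    lambda req, pip: pip.lookup(req["subject"], "affiliation") == req["hospital"]
)
pep = PEP(PDP(prp, pip))
```

Note how the request itself never carries Alice’s affiliation; the PDP pulls it through the PIP, which is exactly the division of labor the reference architecture describes.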


GP: Given that logical architecture, what do you typically see or recommend in terms of physical deployment? It seems like a minimum would include separate PEP, PDP, and PAP instances, with the PIP and PRP combined with the PEP/PDP and PAP respectively. Is this a way to get started on building the foundation?

GG: Some of these deployment configurations will be product dependent, but in general here are some typical topologies for an XACML system:

1. Shared PDP service: You should have at least 2 PDP instances for availability and business continuity. Additional PDP instances can be deployed for scalability as each server instance is stateless.

2. Embedded PDP: For low latency scenarios, the PDP can be embedded directly in the application container.

3. Attribute sources: The PDP, via the PIP interface, can connect to several attribute sources directly as one option. A second option is to use a virtual directory as the attribute source manager. Finally, when a persistent, consolidated attribute store is required then privilege granting attributes can be synchronized into a directory.

4. PAP service: Typically this function will be run in an offline mode. Most of the work happens when a new application is onboarded to the environment and we find that XACML policies are quite stable and don’t require daily or even weekly adjustments.

5. PRP repository: Obviously the PAP will store XACML policies in the repository, but the enterprise may have a preferred option for putting policies into production. Operational procedures must also take into account how many PDPs are installed locally or distributed throughout the network. For example, you could utilize database or directory replication to promote new policies into production.

6. PEP integration with applications: A great way to get started is to integrate the XACML system with an XML gateway, which has a low impact on the existing environment. For more advanced scenarios, you can integrate application-environment-specific PEPs into your business applications.
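As a sketch of how topologies 1 and 2 look from the application’s side (the endpoint URL and class names are hypothetical), the PEP can code against a single PDP interface whether the engine is embedded in-process or a shared remote service:

```python
from abc import ABC, abstractmethod

class Pdp(ABC):
    @abstractmethod
    def decide(self, request: dict) -> str: ...

class EmbeddedPdp(Pdp):
    """Topology 2: runs inside the application container for low latency."""
    def __init__(self, policies):
        self.policies = policies
    def decide(self, request):
        return "Permit" if any(p(request) for p in self.policies) else "Deny"

class RemotePdp(Pdp):
    """Topology 1: a shared, stateless PDP service. The network call is
    injected here so the sketch stays self-contained; a real client would
    send an XACML request to the service endpoint."""
    def __init__(self, endpoint, transport):
        self.endpoint = endpoint
        self.transport = transport
    def decide(self, request):
        return self.transport(self.endpoint, request)

def enforce(pdp: Pdp, request: dict) -> bool:
    """The PEP logic is unchanged whichever topology is deployed."""
    return pdp.decide(request) == "Permit"
```

Keeping the PEP behind one interface like this is what makes it practical to start with an embedded engine and later move to a shared PDP tier for scalability.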


GP: I look at the domain of information security as a triangle – there are AAA services for Identity and Access Management, Defensive services like monitoring and logging, and finally Enablement services that help to integrate the AAA and Defensive services into the organization. XACML was designed to handle parts of the AAA and Enablement challenges, but how can we use XACML and authorization services to improve our Defensive posture? What ways have you seen to implement more robust logging and monitoring through the authorization layers?

GG: The XACML PDP engine should be instrumented so it can provide important information to the logging and monitoring apparatus of the enterprise. At the basic level, the monitoring system can track the number of permit and deny decisions to watch for anomalies. Further, alerts can be triggered if certain thresholds are exceeded – maybe the same user getting repeatedly denied access or a particular IP address making excessive requests.
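A minimal sketch of the kind of monitoring described above, assuming a simple per-user denial threshold (the class name, threshold, and alert text are all illustrative):

```python
from collections import Counter, defaultdict

class DecisionMonitor:
    """Toy monitor over a PDP's decision stream: tracks Permit/Deny totals
    for anomaly watching and flags users who cross a denial threshold."""
    def __init__(self, deny_threshold=3):
        self.totals = Counter()
        self.denies_per_user = defaultdict(int)
        self.deny_threshold = deny_threshold
        self.alerts = []

    def record(self, user, decision):
        self.totals[decision] += 1
        if decision == "Deny":
            self.denies_per_user[user] += 1
            if self.denies_per_user[user] == self.deny_threshold:
                self.alerts.append(f"excessive denials for {user}")

# Feed in some decisions as an instrumented PDP might emit them.
monitor = DecisionMonitor(deny_threshold=3)
for _ in range(3):
    monitor.record("mallory", "Deny")
monitor.record("alice", "Permit")
```

In practice the PDP would emit these events to the enterprise logging and SIEM apparatus rather than an in-memory counter, but the thresholding logic is the same shape.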


PayPal Selects Axiomatics Access Management Solution

Posted November 18, 2010 by ggebel
Categories: Uncategorized

This week we announced that PayPal has chosen Axiomatics XACML-based authorization – you can see the press release here.

At Axiomatics we are thrilled to have won a contract with an innovative company like PayPal and look forward to making the project very successful.

Discussing XACML with Travis

Posted October 6, 2010 by ggebel
Categories: Authorization, Standards, XACML

Travis Spencer (@travisspencer) raised a few issues with XACML and proposed some solutions in a recent blog post. I’d like to take this opportunity to respond in the interest of continuing the conversation. Thanks to my colleagues, Erik, David (@davidjbrossard), and Ludwig for their input.

Point 1 – Lack of wire protocol definitions: The industry is limited to a single wire protocol spec at the moment, the SAML profile for XACML. It is by no means universally applicable but is useful when integrating with other vendors’ policy enforcement points (PEP). Such is the case for integration with XML security gateways. We agree that other wire protocols are needed and expect that they will emerge over time, as the market demands them. This will require a combined effort between XACML vendors and experts in the particular protocol domains of interest. Once the industry reaches a point where multiple protocol profiles are created, then formal certification and interoperability testing may also be required – similar to the SAML profile testing that occurs today. Finally, I invite you to join the TC and provide your use cases as input!

Point 2 – Cryptographically binding attributes to a trusted IDP: There are cases where a cryptographic chain can be established between the IDP and PDP – as Craig Forster described in a comment to the original post. That is, a SAML token can be passed from the PEP to the PDP and the PDP can perform signature validation. However, this doesn’t address all possible scenarios as there are many ways that attributes can reach the PDP. In federated scenarios, a token of some sort may contain attributes, but this represents only a portion of use cases. The PEP may derive attributes from a local repository and it, of course, may send environmental attributes in the access request to the PDP. The PDP may also query additional sources for necessary attributes before making an access decision. These sources could include a local LDAP directory, web service, or customer database. The PDP could also query a remote source, as defined in the Backend Attribute Exchange (BAE) profile. Therefore it may not be practical, or possible, to implement cryptographic bindings all the way to the attribute source.

It is true that the PEP and PDP operate in a trusted ecosystem – that includes the application itself as well as other infrastructure components. XACML was intentionally designed in a modular fashion to cleanly separate authorization from other IdM functions, such as authentication. Security mechanisms are implemented to secure the communication between PEP and PDP components, but there also is a certain amount of trust between the components. For example, the PDP must “trust” that the PEP will actually enforce decisions properly and carry out all obligations. The PDP must “trust” the contents of access requests from the PEP, including the attributes about the subject. In such cases where additional context is needed, the PEP can send subject attributes plus the source (issuer) of the subject attributes – it’s just another XML string.

Point 3 – Policy authoring and administration: The XACML policy language was developed to address a broad range of application scenarios and to satisfy the complex requirements of sophisticated applications. As such, XACML is a rich language and it must be, otherwise we would be debating whether it is comprehensive enough. Based on our experience at Axiomatics (@axiomatics), if you simplify the policy authoring tool – you lose some of the XACML functionality. Some clients choose to create domain-specific and simpler admin tools but these are only used after the initial set of policies has been created. A couple other observations may be helpful. First, XACML policy development takes some effort up front, but the policies are typically quite stable and do not need a lot of manipulation. Second, the more frequent activity is management of user attributes during onboarding or when the user’s status changes. Finally, we expect more domain specific policy administration tools to emerge in the future as the standard is adopted more broadly.

In summary, I thank Travis for raising points that are issues from his perspective. XACML is not perfect, but then no technology is. However, the standard and the products that implement it will continue to improve over time, based on experiences from production deployments across many industries. We think that XACML is a very comprehensive and capable specification – and we are seeing many leading organizations choosing to deploy it today. They recognize the value of standards-based, externalized authorization as a competitive advantage and a vast improvement over previous models.

Lastly, I invite everyone to attend our webinar on XACML and the ‘200M’ user deployment this month (October). More information on the event can be found here.