
RSA SecurID Access

Posted by Karim Elatov

I was playing around with Azure AD and SecurID Access. Following the recommendations in Configure Secure LDAP (LDAPS) for an Azure AD Domain Services managed domain, I was able to connect to Azure AD from a SecurID Access IDR. Here are the steps I took to use Azure AD as an identity source for SecurID Access.

Create an SSL Certificate for the LDAPS Connection

I ended up using the PowerShell approach described on that page. Here are its requirements for the certificate:

Acquire a valid certificate per the following guidelines, before you enable secure LDAP. You encounter failures if you try to enable secure LDAP for your managed domain with an invalid/incorrect certificate.

  1. Trusted issuer - The certificate must be issued by an authority trusted by computers that need to connect to the domain using secure LDAP. This authority may be your organization's enterprise certification authority or a public certification authority trusted by these computers.
  2. Lifetime - The certificate must be valid for at least the next 3-6 months. Secure LDAP access to your managed domain is disrupted when the certificate expires.
  3. Subject name - The subject name on the certificate must be a wildcard for your managed domain. For instance, if your domain is named '', the certificate's subject name must be '*'. Set the DNS name (subject alternate name) to this wildcard name.
  4. Key usage - The certificate must be configured for the following uses - Digital signatures and key encipherment.
  5. Certificate purpose - The certificate must be valid for SSL server authentication.
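The requirements above can be double-checked with openssl before uploading anything; here is a minimal sketch using a throwaway self-signed cert (the wildcard name `*.ssotes.onmicrosoft.com` and file names are assumptions, and `-addext` needs OpenSSL 1.1.1 or newer):

```shell
# Generate a throwaway cert that meets the stated requirements:
# wildcard subject, wildcard SAN, digitalSignature+keyEncipherment, serverAuth EKU
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout aadds.key -out aadds.pem \
  -subj '/CN=*.ssotes.onmicrosoft.com' \
  -addext 'subjectAltName=DNS:*.ssotes.onmicrosoft.com' \
  -addext 'keyUsage=digitalSignature,keyEncipherment' \
  -addext 'extendedKeyUsage=serverAuth'

# Inspect the bits Azure checks before you build and upload the PFX
openssl x509 -in aadds.pem -noout -subject
openssl x509 -in aadds.pem -noout -text | grep -A1 -E 'Key Usage|Subject Alternative Name'
```

The same inspection commands work against the cert exported from the PowerShell approach, so you can verify it before touching the Azure portal.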


Use Powershell to generate the Self Signed Cert

I followed the instructions and used PowerShell. On a Windows 10 machine, I launched PowerShell as an administrator and ran the following:

PS C:\WINDOWS\system32> $lifetime=Get-Date
PS C:\WINDOWS\system32> New-SelfSignedCertificate -Subject * -NotAfter $lifetime.AddDays(365) -KeyUsage DigitalSignature, KeyEncipherment -Type SSLServerAuthentication -DnsName *
   PSParentPath: Microsoft.PowerShell.Security\Certificate::LocalMachine\MY
Thumbprint                                Subject
----------                                -------
DBFA9C7175D47188565D1F145F29505B202D7855  CN=*

Then I followed the instructions laid out in Configure Secure LDAP (LDAPS) for an Azure AD Domain Services managed domain to export the cert. Here are the contents of the cert as seen with the openssl utility:

<> openssl x509 -text -noout -in wild-ssotes-win.pem
        Version: 3 (0x2)
        Serial Number:
    Signature Algorithm: sha256WithRSAEncryption
        Issuer: CN=*
            Not Before: Dec 14 23:41:58 2016 GMT
            Not After : Dec 14 23:51:55 2017 GMT
        Subject: CN=*
        Subject Public Key Info:
            Public Key Algorithm: rsaEncryption
                Public-Key: (2048 bit)
                Exponent: 65537 (0x10001)
        X509v3 extensions:
            X509v3 Key Usage: critical
                Digital Signature, Key Encipherment
            X509v3 Extended Key Usage:
                TLS Web Client Authentication, TLS Web Server Authentication
            X509v3 Subject Alternative Name:
            X509v3 Subject Key Identifier:
    Signature Algorithm: sha256WithRSAEncryption

Save the pfx file and keep the export password, since we will need it when we upload the file to the Azure Portal. Just for reference, here is the openssl command I used to extract the PEM file from the PFX file:

$ openssl pkcs12 -in wild-ssotes-win.pfx -clcerts -nokeys -out wild-ssotes-win.pem
Enter Import Password:
MAC verified OK
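The PFX-to-PEM round trip can be exercised end to end on throwaway files as a sanity check (the file names and the `pass:secret` password here are made up):

```shell
# Build a PFX from a throwaway key+cert, then pull the PEM back out,
# mirroring the pkcs12 extraction used above
openssl req -x509 -newkey rsa:2048 -nodes -days 30 \
  -keyout t.key -out t.pem -subj '/CN=*.example.test'
openssl pkcs12 -export -out t.pfx -inkey t.key -in t.pem -passout pass:secret

# Extract just the certificate (no key) and confirm the subject survived
openssl pkcs12 -in t.pfx -clcerts -nokeys -passin pass:secret 2>/dev/null \
  | openssl x509 -noout -subject
```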

Now let's move on to the Azure network configuration.

Create a Virtual Network and Subnet

Most of the networking recommendations are covered in Networking considerations for Azure AD Domain Services; here are the relevant ones:

Type of Azure virtual network

  • You can enable Azure AD Domain Services in a classic Azure virtual network.
  • Azure AD Domain Services cannot be enabled in virtual networks created using Azure Resource Manager.

Azure region for the virtual network

  • Your Azure AD Domain Services managed domain is deployed in the same Azure region as the virtual network you choose to enable the service in.
  • Select a virtual network in an Azure region supported by Azure AD Domain Services.
  • See the Azure services by region page to know the Azure regions in which Azure AD Domain Services is available.

Best practices for choosing a subnet

  • Deploy Azure AD Domain Services to a separate dedicated subnet within your Azure virtual network.
  • Do not apply NSGs to the dedicated subnet for your managed domain. If you must apply NSGs to the dedicated subnet, ensure you do not block the ports required to service and manage your domain.
  • Do not overly restrict the number of IP addresses available within the dedicated subnet for your managed domain. This restriction prevents the service from making two domain controllers available for your managed domain.
  • Do not enable Azure AD Domain Services in the gateway subnet of your virtual network.

Network connection options

  • VNet-to-VNet connections using site-to-site VPN connections: Connecting a virtual network to another virtual network (VNet-to-VNet) is similar to connecting a virtual network to an on-premises site location. Both connectivity types use a VPN gateway to provide a secure tunnel using IPsec/IKE.

In summary I created a virtual network with the following characteristics:

  1. In the US East region, where Azure AD Domain Services is supported
  2. A brand new dedicated network (I didn't reuse any of the existing networks)
  3. No VPN to the network, though that's supported (and might be a good approach if the IDRs are on premises)

Here is a step by step page that describes the process: Create or select a virtual network for Azure AD Domain Services. Using an account with a valid Azure Subscription, log in to the Classic Azure Portal ( and go to the Networks Application, then click the + sign and choose Quick Create to create a virtual network:

Then give it a name and assign a local address range:

After it's done creating the network you will see the following message:

Create a Network Security Group (NSG) for AD Access

As per Networking considerations for Azure AD Domain Services, it's not recommended to apply NSGs to the subnet, but I couldn't get it to work without them. Here is the list of ports they require for the subnet used by Azure AD Domain Services:

Ports required for Azure AD Domain Services

The following ports are required for Azure AD Domain Services to service and maintain your managed domain. Ensure that these ports are not blocked for the subnet in which you have enabled your managed domain. 

Port number   Purpose
443           Synchronization with your Azure AD tenant
3389          Management of your domain
5986          Management of your domain
636           Secure LDAP (LDAPS) access to your managed domain
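From a test box, these ports can be probed with bash's built-in `/dev/tcp` redirection, so no nmap or nc is required; the `AADDS_IP` value below is a placeholder for your managed domain's IP:

```shell
#!/bin/bash
# Probe one TCP port; prints "open" if the connect succeeds within 3 seconds
check_port() {
  local host=$1 port=$2
  if timeout 3 bash -c "exec 3<>/dev/tcp/${host}/${port}" 2>/dev/null; then
    echo "${host}:${port} open"
  else
    echo "${host}:${port} closed/filtered"
  fi
}

AADDS_IP=10.0.0.4   # placeholder: replace with your managed domain's IP
for p in 443 636 3389 5986; do
  check_port "$AADDS_IP" "$p"
done
```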


The new Azure Portal ( allows you to create Network Security Groups through the UI, so let's do that. After you've logged in, launch the Network Security Groups (Classic) application:

Then click add and name the Network Security Group:

I didn't create a new Resource Group; I just used the Default-Networking Resource Group, which is where my subnet ended up (if you go to Virtual Networks (Classic) -> AzureAD-VirtualNetwork you can see which Resource Group a Virtual Network belongs to):

After the NSG is created, you will see the following:

If you select the NSG and go to its Inbound Rules, you can see the default rules:

You can then click Add and another pane will show up where you can add the port that you want to allow:

And after it's added you will see it on the list:

Repeat the process for the other ports: 443, 3389, 5986. Next, we need to assign this NSG to our Subnet. So launch the Virtual Networks (classic) application:

Then select the Virtual Network that you created and then select the Subnets Section:


Then change the Network Security Group from None to the NSG that we created:

Then click Save and after it's finished you will see the following in the notification section:


That should be it for the networking configuration.

Configure the Active Directory Service in Azure

In the classic Azure portal, if you navigate to Active Directory you will see your directories (I created a brand new one for testing purposes, but you will probably already have one). Select your directory and go to the Configure section:


Then scroll down and enable Domain Services:

After you enable it, you can assign it to the dedicated network that you created above:

After it's done enabling Domain Services, you will now see the button to enable Secure LDAP (LDAPS):

Click Configure certificate and you will see the upload dialog:

Then point to the pfx file that you created in the first section and enter the password as well:

After the upload is finished you should see the following under the Certificate Section:


And the following under the notifications:

Lastly after the certificate is uploaded you will have an option to enable LDAPS access from a public IP:

So switch the Enable Secure LDAP Access Over the Internet option to YES and click Save:

After it's enabled you should see the Public IP that is assigned to the Azure AD instance:

For good measure I also added the IPs from which I knew the IDRs would be connecting to this Azure AD instance. So under the "your organization's public IP address ranges" section I clicked Add Known IP Address Ranges and specified the IPs:

Create a test Admin User

The Get started with Azure AD Domain Services page has a section about creating an admin group:

The first task is to create an administrative group in your Azure Active Directory tenant. This special administrative group is called AAD DC Administrators. Members of this group are granted administrative privileges on machines that are domain-joined to the Azure AD Domain Services managed domain. On domain-joined machines, this group is added to the ‘Administrators’ group. Additionally, members of this group can use Remote Desktop to connect remotely to domain-joined machines.

So let's first create an admin user we can bind with. In the Azure Management Console go to Directory Service -> Your AD -> Users -> Add User:

Then fill out all the information and assign it the Service Admin Role:


Next let's create the suggested group: Directory Service -> Your AD -> Groups -> Add Group:

Then you will see the Group listed:

Then select the Group and click on Add Members and add the admin user to the group:


Create a test User

Since I was not using Azure AD Connect (to synchronize users to Azure AD), I also created a regular user. The process is the same as for the admin, except for the role I selected User:

I also created a test group called ssousers and I selected O365 Preview as the Group Type:

And I made my test user part of that group:

Add a User to Azure AD with the AzureAD Powershell

You can also add a user using Azure AD PowerShell. I logged in with the admin user, and first I was able to query the existing users:

PS C:\Users\elatov\Desktop> $cred=Get-Credential
cmdlet Get-Credential at command pipeline position 1
Supply values for the following parameters:
PS C:\Users\elatov\Desktop> Connect-MsolService -Credential $cred
PS C:\Users\elatov\Desktop> Get-MsolUser
UserPrincipalName                                                    DisplayName  isLicensed
-----------------                                                    -----------  ----------
                                                                     Admin User   False
                                                                     Test User    False
PS C:\Users\elatov\Desktop> Get-Msoluser -UserPrincipalName | fl
ExtensionData                          : System.Runtime.Serialization.ExtensionDataObject
AlternateEmailAddresses                : {}
AlternateMobilePhones                  : {}
AlternativeSecurityIds                 : {}
BlockCredential                        : False
City                                   :
CloudExchangeRecipientDisplayType      :
Country                                :
Department                             :
DirSyncProvisioningErrors              : {}
DisplayName                            : Test User
Errors                                 :
Fax                                    :
FirstName                              : Test
ImmutableId                            :
IndirectLicenseErrors                  : {}
IsBlackberryUser                       : False
IsLicensed                             : False
LastDirSyncTime                        :
LastName                               : User
LastPasswordChangeTimestamp            : 12/15/2016 12:13:23 AM
LicenseReconciliationNeeded            : False
Licenses                               : {}
LiveId                                 : 10037FFE9CEDDFA4
MSExchRecipientTypeDetails             :
MobilePhone                            :
ObjectId                               : c56f4960-1f5a-4cc6-8c08-d6f01b20fd59
Office                                 :
OverallProvisioningStatus              : None
PasswordNeverExpires                   : False
PasswordResetNotRequiredDuringActivate :
PhoneNumber                            :
PortalSettings                         :
PostalCode                             :
PreferredLanguage                      :
ProxyAddresses                         : {}
ServiceInformation                     : {}
SignInName                             :
SoftDeletionTimestamp                  :
State                                  :
StreetAddress                          :
StrongAuthenticationMethods            : {}
StrongAuthenticationPhoneAppDetails    : {}
StrongAuthenticationProofupTime        :
StrongAuthenticationRequirements       : {}
StrongAuthenticationUserDetails        : Microsoft.Online.Administration.StrongAuthenticationUserDetails
StrongPasswordRequired                 : True
StsRefreshTokensValidFrom              : 12/15/2016 12:13:23 AM
Title                                  :
UsageLocation                          :
UserLandingPageIdentifierForO365Shell  :
UserPrincipalName                      :
UserThemeIdentifierForO365Shell        :
UserType                               : Member
ValidationStatus                       : Healthy
WhenCreated                            : 12/15/2016 12:13:24 AM

To add a new user we can run the following command:

PS C:\Users\elatov\Desktop> New-MsolUser -userprincipalname "" -lastname user -firstname azuretest -Displayname "azuretest user" -immutableID 49acbc85-68bb-4c16-a867-f837b7694sdfs
Password UserPrincipalName            DisplayName    isLicensed
-------- -----------------            -----------    ----------
Wudu1766 azuretest user False

Confirm Connectivity to Azure AD

After the above was set up, I used a test Linux machine to query the Azure AD service with the ldapsearch command, and it worked:

<> LDAPTLS_REQCERT=never ldapsearch -D "" -w Suto7302 -H ldaps:// -b "dc=ssotes,dc=onmicrosoft,dc=com" -s sub -x "(" givenName -LLL
dn: CN=Test User,OU=AADDC Users,DC=ssotes,DC=onmicrosoft,DC=com
givenName: Test

Notice that I'm using the Public IP address. If that doesn't work check out the Random Troubleshooting section of the post for some additional troubleshooting steps.

Add AzureAD as an Identity Source in SecurID Access

After I confirmed that connectivity was okay with ldapsearch, I logged into the SecurID Access Administration Console and added a new Identity Source with the following information:

Notice I made the Base DN the following: OU=AADDC Users,DC=ssotes,DC=onmicrosoft,DC=com. When adding a server, I added the following:

I ended up using a connection timeout of 30 seconds, since that was recommended by Azure. From LDAP Authentication and Azure Multi-Factor Authentication Server:

Configure the LDAP timeout to 30-60 seconds so that there is time to validate the user’s credentials with the LDAP directory, perform the second-factor authentication, receive their response and then respond to the LDAP access request.

Since we are going over the internet to connect to this Identity Source, that's probably not a bad idea. Connecting to Azure AD using a tunnel (from the on-premises network to the Azure virtual network) is probably an even better approach. Then I enabled SSL for the connection and uploaded the x509 base64-encoded PEM of the certificate that I created with PowerShell:

Then when I clicked Test Connection, I saw all the attributes:


Then going to the attribute section, I refreshed the attributes and it pulled all the attributes:

Then I saved the Identity Source and published. After that I went to the portal and was able to log in with the test user I created:

Set Up Identity Source Sync

Initially when I ran the sync, it worked, but I received a warning about missing emails:

I also logged into the O365 portal ( which is another way of managing Azure AD users, and I saw that since the user is unlicensed we are unable to set the mail attribute:

When a user has a valid O365 license (this is in another tenant) their email is allowed to be configured:

So to use step-up authentication we need:

  1. A Valid Azure Subscription (to login to the Azure Portal to configure the Azure AD Services)
  2. Valid O365 Licenses (so the mail attribute can be set and/or modified for the users in Azure AD)

From How Azure subscriptions are associated with Azure Active Directory:

Manage the directory for your Office 365 subscription in Azure

Let's say you signed up for Office 365 before you sign up for Azure. Now you want to manage the directory for the Office 365 subscription in the Azure classic portal. There are two ways to do this, depending on whether you have signed up for Azure or you have not. 

I do not have a subscription for Azure

In this case, just sign up for Azure using the same work or school account that you use to sign in to Office 365. Relevant information from the Office 365 account will be prepopulated in the Azure sign-up form. Your account will be assigned to the Service Administrator role of the subscription.

And from Manage the directory for your Office 365 subscription in Azure:

After you complete the Azure subscription, you can sign in to the Azure classic portal and access Azure services. Click the Active Directory extension to manage the same directory that authenticates your Office 365 users.


After getting a valid license and setting the email attribute for the users, the sync worked without issues: 

And under User Management you will find the synced users:

The web portal authentications still worked and step-up worked as well. 

Random Troubleshooting

Initially the public IP wasn't working for me, so I deployed a Linux VM in the same subnet as Azure AD and ran some tests. First, I ran nmap just to make sure the server was listening on port 636:

azureuser@azure-ad-osuse:~> nmap -P0
Starting Nmap 6.47 ( ) at 2016-12-15 19:35 UTC
Nmap scan report for (
Host is up (0.0012s latency).
Not shown: 980 filtered ports
53/tcp    open  domain
88/tcp    open  kerberos-sec
135/tcp   open  msrpc
139/tcp   open  netbios-ssn
443/tcp   open  https
445/tcp   open  microsoft-ds
464/tcp   open  kpasswd5
593/tcp   open  http-rpc-epmap
636/tcp   open  ldapssl
1801/tcp  open  msmq
2103/tcp  open  zephyr-clt
2105/tcp  open  eklogin
2107/tcp  open  msmq-mgmt
3268/tcp  open  globalcatLDAP
3269/tcp  open  globalcatLDAPssl
3389/tcp  open  ms-wbt-server
49154/tcp open  unknown
49155/tcp open  unknown
49157/tcp open  unknown
49158/tcp open  unknown
Nmap done: 1 IP address (1 host up) scanned in 4.07 seconds

Next, make sure the cert is presented by the Azure AD service:

azureuser@azure-ad-osuse:~> openssl s_client -connect -servername
depth=0 CN = *
verify error:num=20:unable to get local issuer certificate
verify return:1
depth=0 CN = *
verify error:num=21:unable to verify the first certificate
verify return:1
Certificate chain
 0 s:/CN=*
Server certificate
No client certificate CA names sent
SSL handshake has read 1411 bytes and written 502 bytes
New, TLSv1/SSLv3, Cipher is ECDHE-RSA-AES256-SHA384
Server public key is 2048 bit
Secure Renegotiation IS supported
Compression: NONE
Expansion: NONE
    Protocol  : TLSv1.2
    Cipher    : ECDHE-RSA-AES256-SHA384
    Session-ID: BF3300009FEB139B8FF5BA4DC5752D554CD819B414D74818EF21AE7D81F0B4E4
    Master-Key: 66810E1EC2C6C4394F1A9E42BFD7497F1FC9CA977B2EACA9A143A20F7DB8263B
    Key-Arg   : None
    PSK identity: None
    PSK identity hint: None
    SRP username: None
    Start Time: 1481760017
    Timeout   : 300 (sec)
    Verify return code: 21 (unable to verify the first certificate)

I was also able to run ldapsearch queries from the test machine against the AzureAD instance:

$ LDAPTLS_REQCERT=never ldapsearch -D " -w Suto7302 -H ldaps:// -b "dc=ssotes,dc=onmicrosoft,dc=com" -s sub -x "(" -LLL
dn: CN=Test User,OU=AADDC Users,DC=ssotes,DC=onmicrosoft,DC=com
objectClass: top
objectClass: person
objectClass: organizationalPerson
objectClass: user
cn: Test User
sn: User
givenName: Test
distinguishedName: CN=Test User,OU=AADDC Users,DC=ssotes,DC=onmicrosoft,DC=com
instanceType: 4
whenCreated: 20161215003608.0Z
whenChanged: 20161215003609.0Z
displayName: Test User
uSNCreated: 30881
memberOf: CN=ssousers,OU=AADDC Users,DC=ssotes,DC=onmicrosoft,DC=com
uSNChanged: 30884
name: Test User
objectGUID:: cBz+hsXVqE6pARpz4Ol0JA==
userAccountControl: 544
badPwdCount: 0
codePage: 0
countryCode: 0
badPasswordTime: 0
lastLogoff: 0
lastLogon: 0
pwdLastSet: 131262357699310735
primaryGroupID: 513
accountExpires: 9223372036854775807
logonCount: 0
sAMAccountName: test
sAMAccountType: 805306368
objectCategory: CN=Person,CN=Schema,CN=Configuration,DC=ssotes,DC=onmicrosoft,DC=com
dSCorePropagationData: 20161215024600.0Z
dSCorePropagationData: 16010101000001.0Z
msDS-AzureADObjectId:: YElvxVofxkyMCNbwGyD9WQ==
msDS-AzureADMailNickname: test

Actually, since I was on the same subnet, I could also do a query over port 389 (if you add it to your NSG):

azureuser@azure-ad-osuse:~> ldapsearch -H ldap:// -b "DC=ssotes,DC=onmicrosoft,DC=com" -D "" -w Suto7302 -s sub "(sAMAccountName=admin)" givenName
# extended LDIF
# LDAPv3
# base <DC=ssotes,DC=onmicrosoft,DC=com> with scope subtree
# filter: (sAMAccountName=admin)
# requesting: givenName
# Admin User, AADDC Users,
dn: CN=Admin User,OU=AADDC Users,DC=ssotes,DC=onmicrosoft,DC=com
givenName: Admin

We should never expose that port to the internet, but it's a good test to make sure AD services are at least responding. As another test, I also made sure I could authenticate as the test user:

azureuser@azure-ad-osuse:~> LDAPTLS_REQCERT=never ldapsearch -D "" -w Jatu9458 -H ldaps:// -b "dc=ssotes,dc=onmicrosoft,dc=com" -s sub -x "(" givenName -LLL
dn: CN=Test User,OU=AADDC Users,DC=ssotes,DC=onmicrosoft,DC=com
givenName: Test

The above commands use the internal IP and worked, and they're a good way to confirm AD is up, SSL is enabled, and we are able to bind.

It looks like you can't RDP directly to the Azure AD server even though the port is open; you will get a permission error. What you can do instead is deploy a Windows machine in Azure on the same subnet as the Azure AD server, join it to the domain, and then use Remote Server Administration Tools to administer the services on the Azure AD box. The process is described in detail in Administer an Azure Active Directory Domain Services managed domain. I didn't end up doing that, but I think it could be very helpful.

IWA Application With IIS and Windows Authentication

Since our IWA connector runs as an ASP application on IIS, we can utilize the IIS SSL Client Authentication method to log users in. By default the IWA application uses Windows Authentication, so if you visit it from a machine that is not joined to the domain you will see a prompt asking for your AD password:

Let's see if we can enable SSL Client Authentication on the IWA ASP application in IIS to see if we can skip that password prompt.

Generate the Necessary SSL Certificates

There are a couple of approaches to this: on Windows we can use makecert, and on Linux we can use openssl.

Creating SSL certificates for SSL Client Authentication with makecert

There are actually a couple of sites that go over the setup:

First, download the necessary tools; the above sites provide the links to download makecert and pvk2pfx. Next, create a self-signed CA with makecert and create a pfx from the newly generated files with pvk2pfx:

C:\Users\Admin\Downloads\makecert_pvk2pfx>makecert.exe -n "CN=root-ca-sp66" -r -pe -a sha512 -len 4096 -cy authority -sv root-ca-sp66.pvk root-ca-sp66.cer
C:\Users\Admin\Downloads\makecert_pvk2pfx>pvk2pfx.exe -pvk root-ca-sp66.pvk -spc root-ca-sp66.cer -pfx root-ca-sp66.pfx
 Volume in drive C has no label.
 Volume Serial Number is 26E4-53F1
 Directory of C:\Users\Admin\Downloads\makecert_pvk2pfx
11/04/2016  12:52 PM    <DIR>          .
11/04/2016  12:52 PM    <DIR>          ..
11/04/2016  12:46 PM            69,824 makecert.exe
11/04/2016  12:46 PM            36,544 pvk2pfx.exe
11/04/2016  12:51 PM             1,305 root-ca-sp66.cer
11/04/2016  12:52 PM             4,254 root-ca-sp66.pfx
11/04/2016  12:51 PM             2,348 root-ca-sp66.pvk
               5 File(s)        114,275 bytes
               2 Dir(s)  19,559,297,024 bytes free

Next we can create the Server SSL certificate and sign it with the newly created CA (and combine the files into one pfx file):

C:\Users\Admin\Downloads\makecert_pvk2pfx>makecert.exe -n "" -iv root-ca-sp66.pvk -ic root-ca-sp66.cer -pe -a sha512 -len 4096 -b 11/01/2016 -e 11/01/2026 -sky exchange -eku -sv iwa-sp66.pvk iwa-sp66.cer
C:\Users\Admin\Downloads\makecert_pvk2pfx>pvk2pfx.exe -pvk iwa-sp66.pvk -spc iwa-sp66.cer -pfx iwa-sp66.pfx
 Volume in drive C has no label.
 Volume Serial Number is 26E4-53F1
 Directory of C:\Users\Admin\Downloads\makecert_pvk2pfx
11/04/2016  01:09 PM    <DIR>          .
11/04/2016  01:09 PM    <DIR>          ..
11/04/2016  01:08 PM             1,318 iwa-sp66.cer
11/04/2016  01:09 PM             4,262 iwa-sp66.pfx
11/04/2016  01:08 PM             2,348 iwa-sp66.pvk
11/04/2016  12:46 PM            69,824 makecert.exe
11/04/2016  12:46 PM            36,544 pvk2pfx.exe
11/04/2016  12:51 PM             1,305 root-ca-sp66.cer
11/04/2016  12:52 PM             4,254 root-ca-sp66.pfx
11/04/2016  12:51 PM             2,348 root-ca-sp66.pvk
               8 File(s)        122,203 bytes
               2 Dir(s)  19,559,276,544 bytes free

Lastly let's create the client SSL certificate and sign it with the same CA (and combine the files into a single pfx file):

C:\Users\Administrator.SINGLEPOINT66\Downloads\makecert_pvk2pfx>makecert.exe -n"CN=Karim-cert" -iv root-ca-sp66.pvk -ic root-ca-sp66.cer -pe -a sha512 -len 4096 -b 11/01/2016 -e 11/01/2026 -sky exchange -eku -sv karim-cert.pvk karim-cert.cer
C:\Users\Admin\Downloads\makecert_pvk2pfx>pvk2pfx.exe -pvk karim-cert.pvk -spc karim-cert.cer -pfx karim-cert.pfx
 Volume in drive C has no label.
 Volume Serial Number is 26E4-53F1
 Directory of C:\Users\Admin\Downloads\makecert_pvk2pfx
11/04/2016  01:15 PM    <DIR>          .
11/04/2016  01:15 PM    <DIR>          ..
11/04/2016  01:08 PM             1,318 iwa-sp66.cer
11/04/2016  01:09 PM             4,262 iwa-sp66.pfx
11/04/2016  01:08 PM             2,348 iwa-sp66.pvk
11/04/2016  01:14 PM             1,307 karim-cert.cer
11/04/2016  01:15 PM             4,254 karim-cert.pfx
11/04/2016  01:14 PM             2,348 karim-cert.pvk
11/04/2016  12:46 PM            69,824 makecert.exe
11/04/2016  12:46 PM            36,544 pvk2pfx.exe
11/04/2016  12:51 PM             1,305 root-ca-sp66.cer
11/04/2016  12:52 PM             4,254 root-ca-sp66.pfx
11/04/2016  12:51 PM             2,348 root-ca-sp66.pvk
              11 File(s)        130,112 bytes
               2 Dir(s)  19,559,260,160 bytes free

That should be it for the cert generation.

Creating SSL certificates for SSL Client Authentication with openssl

We can follow a similar process as with makecert. First let's create a self signed CA:

# generate private key for CA
<> openssl genrsa -out root-ca-sp66.key 2048
Generating RSA private key, 2048 bit long modulus
e is 65537 (0x10001)
# create the CA cert and sign it with the CA private key
<> openssl req -x509 -new -nodes -key root-ca-sp66.key -days 1024 -out root-ca-sp66.pem
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
Country Name (2 letter code) [AU]:US
State or Province Name (full name) [Some-State]:Colorado
Locality Name (eg, city) []:Boulder
Organization Name (eg, company) [Internet Widgits Pty Ltd]:RSA
Organizational Unit Name (eg, section) []:PM
Common Name (e.g. server FQDN or YOUR name) []:root-ca-sp66
Email Address []:
# combine the files into a pfx
<> openssl pkcs12 -export -out root-ca-sp66.pfx -inkey root-ca-sp66.key -in root-ca-sp66.pem
Enter Export Password:
Verifying - Enter Export Password:
# here are the resulting files
<> ls
root-ca-sp66.key  root-ca-sp66.pem  root-ca-sp66.pfx

Now let's create the Server Certificate and sign it with our CA:

## Create Request for Server
<> openssl req -nodes -newkey rsa:2048 -keyout iwa-sp66.key -out iwa-sp66-req.csr
Generating a 2048 bit RSA private key
writing new private key to 'iwa-sp66.key'
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
Country Name (2 letter code) [AU]:US
State or Province Name (full name) [Some-State]:Colorado
Locality Name (eg, city) []:Boulder
Organization Name (eg, company) [Internet Widgits Pty Ltd]:RSA
Organizational Unit Name (eg, section) []:PM
Common Name (e.g. server FQDN or YOUR name) []
Email Address []:
Please enter the following 'extra' attributes
to be sent with your certificate request
A challenge password []:
An optional company name []:
## Sign Request with CA
<> openssl x509 -req -in iwa-sp66-req.csr -CA root-ca-sp66.pem -CAkey root-ca-sp66.key -CAcreateserial -out iwa-sp66-cert.pem -days 500
Signature ok
Getting CA Private Key
## Combine files into PFX
<> openssl pkcs12 -export -out iwa-sp66.pfx -inkey iwa-sp66.key -in iwa-sp66-cert.pem
Enter Export Password:
Verifying - Enter Export Password:
# All the Files
<> ls
iwa-sp66-cert.pem  iwa-sp66.key  root-ca-sp66.key  root-ca-sp66.pfx
iwa-sp66-req.csr   iwa-sp66.pfx  root-ca-sp66.pem

And lastly let's create the client cert:

## Let's create the request for the Client Cert
<> openssl req -nodes -newkey rsa:2048 -keyout karim-cert.key -out karim-req.csr
Generating a 2048 bit RSA private key
writing new private key to 'karim-cert.key'
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
Country Name (2 letter code) [AU]:US
State or Province Name (full name) [Some-State]:Colorado
Locality Name (eg, city) []:Boulder
Organization Name (eg, company) [Internet Widgits Pty Ltd]:RSA
Organizational Unit Name (eg, section) []:PM
Common Name (e.g. server FQDN or YOUR name) []:Karim-Cert
Email Address []
Please enter the following 'extra' attributes
to be sent with your certificate request
A challenge password []:
An optional company name []:
## Create an openssl config for clients
<> cat openssl-client.cnf
[ client ]
nsCertType = client, email
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
extendedKeyUsage = clientAuth
### Now let's Sign it with the CA and use our custom config
<> openssl x509 -req -in karim-req.csr -CA root-ca-sp66.pem -CAkey root-ca-sp66.key -CAserial root-ca-sp66.srl -out karim-cert.pem -days 365 -extfile openssl-client.cnf -extensions client
Signature ok
Getting CA Private Key
# and let's combine the cert and key files into a pfx file
<> openssl pkcs12 -export -out karim-cert.pfx -inkey karim-cert.key -in karim-cert.pem
Enter Export Password:
Verifying - Enter Export Password:
## All the files in the end
<> ls
iwa-sp66-cert.pem  karim-cert.key  openssl-client.cnf
iwa-sp66-req.csr   karim-cert.pem  root-ca-sp66.key
iwa-sp66.key       karim-cert.pfx  root-ca-sp66.pem
iwa-sp66.pfx       karim-req.csr   root-ca-sp66.pfx

That should be it for the certificate creation on a Linux OS with openssl. A couple of notes: the .srl file will contain the serial value of the last signed cert:

## Serial File Contents
<> cat
## Serial value of last cert
<> openssl x509 -text -noout -in karim-cert.pem | grep -i serial -A 1
        Serial Number:
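The whole CA/server-cert flow above can be reproduced end-to-end with a short script. Here is a minimal self-contained sketch — the names (Demo-Root-CA, iwa.example.com) are hypothetical stand-ins for the real ones:

```shell
#!/bin/sh
# Minimal sketch of the CA + server-cert flow (hypothetical names)
set -e
tmp=$(mktemp -d)
# 1. Self-signed root CA
openssl req -x509 -nodes -newkey rsa:2048 -days 365 \
  -keyout "$tmp/root-ca.key" -out "$tmp/root-ca.pem" \
  -subj "/CN=Demo-Root-CA" 2>/dev/null
# 2. Server key and CSR (the "req" step)
openssl req -nodes -newkey rsa:2048 \
  -keyout "$tmp/server.key" -out "$tmp/server.csr" \
  -subj "/CN=iwa.example.com" 2>/dev/null
# 3. Sign the CSR with the CA; -CAcreateserial writes root-ca.srl
openssl x509 -req -in "$tmp/server.csr" -CA "$tmp/root-ca.pem" \
  -CAkey "$tmp/root-ca.key" -CAcreateserial \
  -out "$tmp/server.pem" -days 365 2>/dev/null
# 4. Verify the server cert chains to the CA (should report ": OK")
openssl verify -CAfile "$tmp/root-ca.pem" "$tmp/server.pem"
```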

Also, for comparison, you can check out the purposes of the server cert and the client cert:

<> openssl x509 -purpose  -noout -in iwa-sp66-cert.pem
Certificate purposes:
SSL client : Yes
SSL client CA : No
SSL server : Yes
SSL server CA : No
Netscape SSL server : Yes
Netscape SSL server CA : No
S/MIME signing : Yes
S/MIME signing CA : No
S/MIME encryption : Yes
S/MIME encryption CA : No
CRL signing : Yes
CRL signing CA : No
Any Purpose : Yes
Any Purpose CA : Yes
OCSP helper : Yes
OCSP helper CA : No
Time Stamp signing : No
Time Stamp signing CA : No

Here is the client one:

<> openssl x509 -purpose  -noout -in karim-cert.pem
Certificate purposes:
SSL client : Yes
SSL client CA : No
SSL server : No
SSL server CA : No
Netscape SSL server : No
Netscape SSL server CA : No
S/MIME signing : No
S/MIME signing CA : No
S/MIME encryption : No
S/MIME encryption CA : No
CRL signing : No
CRL signing CA : No
Any Purpose : Yes
Any Purpose CA : Yes
OCSP helper : Yes
OCSP helper CA : No
Time Stamp signing : No
Time Stamp signing CA : No
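The Yes/No answers above come straight from the certificate's extensions. A throwaway self-signed cert carrying only a clientAuth extendedKeyUsage (hypothetical CN; `-addext` needs OpenSSL 1.1.1+) reproduces the client-cert pattern of SSL client : Yes / SSL server : No:

```shell
# clientAuth-only EKU drives the "SSL client : Yes / SSL server : No"
# purposes above (throwaway cert, hypothetical CN)
ekutmp=$(mktemp -d)
openssl req -x509 -nodes -newkey rsa:2048 -days 1 \
  -keyout "$ekutmp/c.key" -out "$ekutmp/c.pem" \
  -subj "/CN=demo-client" -addext "extendedKeyUsage=clientAuth" 2>/dev/null
# Show just the SSL client/server purpose lines
openssl x509 -purpose -noout -in "$ekutmp/c.pem" | grep -E '^SSL (client|server) :'
```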

For fun here are the ones created by makecert (in DER format):

<> openssl x509 -purpose -inform DER -noout -in iwa-sp66.cer
Certificate purposes:
SSL client : No
SSL client CA : No
SSL server : Yes
SSL server CA : No
Netscape SSL server : Yes
Netscape SSL server CA : No
S/MIME signing : No
S/MIME signing CA : No
S/MIME encryption : No
S/MIME encryption CA : No
CRL signing : Yes
CRL signing CA : No
Any Purpose : Yes
Any Purpose CA : Yes
OCSP helper : Yes
OCSP helper CA : No
Time Stamp signing : No
Time Stamp signing CA : No

And here is the client one:

<> openssl x509 -purpose -inform DER -noout -in karim-cert.cer
Certificate purposes:
SSL client : Yes
SSL client CA : No
SSL server : No
SSL server CA : No
Netscape SSL server : No
Netscape SSL server CA : No
S/MIME signing : No
S/MIME signing CA : No
S/MIME encryption : No
S/MIME encryption CA : No
CRL signing : Yes
CRL signing CA : No
Any Purpose : Yes
Any Purpose CA : Yes
OCSP helper : Yes
OCSP helper CA : No
Time Stamp signing : No
Time Stamp signing CA : No

At this point just copy all the PFX files to the windows server running IIS.

Importing the SSL Certificates

I just ended up using the certs generated with the makecert and pvk2pfx utilities. For the Certificate imports we can break it down into four parts:

  • Import CA Certificate into the Web Server's Computer Account Trusted Root Certification Authorities store
  • Import the Server Certificate into the Web Server's Computer Account Personal Certificate store
  • Import the CA Certificate into the Client Machine's Current User Trusted Root Certification Authorities store
  • Import the Client Certificate into the Client Machine's Current User Personal Certificate store

Import CA Certificate on the Web Server into the Trusted Root Certification Authorities Store

To import the certificate we need to launch the Certificates snap-in. Go to Start -> Run -> mmc and then click on the
Add/Remove Snap-in option:

Then choose the Certificates Snap-In and pick the Computer Account:

And then choose Local Computer:

Then under the Trusted Root Certification Authorities store import the CA Cert:

After it's imported you can confirm the cert is there:

Import the Server Certificate on the Web Server into the Computer Account Personal Certificate Store

From the same certificate snap-in import the Server Certificate into the Personal Certificate store. After it's imported confirm it's able to see the CA that signed it:

Import CA Certificate on the Client Machine into the Current Users Trusted Root Certification Authorities Store

I was using a windows 10 machine, so I launched the cert manager (Start -> run -> certmgr.msc):

And then import the CA certificate into the Trusted Root Certification Authorities Store and after it's done you will be able to see it:

Import Client Certificate on the Client Machine into the Current User's Personal Certificate Store

In the same Certificate Manager import the Client Cert under the Personal Store and confirm it's signed by the correct CA:

You can also launch chrome and confirm the cert is present:

Now let's go to the next section.

Configure IWA ASP application to use SSL Client Authentication

This can be broken down into a couple of parts:

  1. Enable SSL Client Auth method on the IIS server
  2. Import The Server Certificate into IIS and Enable https binding with the Imported Server Certificate
  3. Enable SSL Client Authentication on the IWA Application
  4. Add One to One Mapping for the Certificate and the User

Let's go into each step

Enable SSL Client Authentication method on the IIS server

Launch the Server Manager and under Roles scroll down to the IIS role and click on Add Role Services

Then scroll down and enable the IIS Client Certificate Mapping Authentication option:

After it's installed you will see a success message at the end:

Import The Server Certificate into IIS and Enable HTTPS binding with the Imported Server Certificate

Start the IIS Manager (Start -> Run -> inetmgr) and under the Server Section click on the Server Certificates Module:

Then click Import (under the Actions section) and import the Server Certificate:

Then at the Default Site Level click Bindings and enable HTTPS with the uploaded certificate:

Enable SSL Client Authentication on the IWA ASP Application

Now at the Application level launch the SSL Settings module:

and set the Client Certificates to Required and click Apply:

You can also set it to Accept; that way the certificate is only used if the browser presents one, otherwise it will keep using Windows Authentication:


Add One to One Mapping for the Certificate and the User

Now let's map our certificate to a username and password. This can be done at the Application level or the Site level; I only had one application, so I did it at the Site level. Launch the Configuration Editor module:

In the Section field enter system.webServer/security/authentication/iisClientCertificateMappingAuthentication (that will list the options available for that feature) and enable it:

Then click on oneToOneMappings and click the ... to add a new one:

Then add a new user (for the certificate, paste in an X509 Base64-encoded version of the client SSL certificate):

Also make sure the certificate is one big line; if there are newline characters in the cert, only the first line will be pasted. Then close this dialog and you will see the Count value increase:

Then click Apply and it will save the settings:

That should be it for the IIS settings. For good measure go ahead and restart the IIS Server (Start -> Run -> iisreset)
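On the note above about the certificate needing to be one big line, a quick way to produce the flattened Base64 string is to strip the BEGIN/END lines and the newlines from the PEM (the demo cert here is a hypothetical self-signed one, just so the pipeline has input):

```shell
# Flatten a PEM cert into the single Base64 line the
# oneToOneMappings dialog expects (demo cert is hypothetical)
pemtmp=$(mktemp -d)
openssl req -x509 -nodes -newkey rsa:2048 -days 1 \
  -keyout "$pemtmp/client.key" -out "$pemtmp/client.pem" \
  -subj "/CN=demo" 2>/dev/null
# Drop the -----BEGIN/END CERTIFICATE----- lines, then every newline
oneline=$(grep -v 'CERTIFICATE-----' "$pemtmp/client.pem" | tr -d '\n')
printf '%s\n' "$oneline" | cut -c1-40   # first 40 chars, one line
```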

Testing out SSL Client Authentication

Now just go to the Portal and click on the IWA icon; after you are forwarded to the IWA server, it will prompt you for your certificate. Here is how it looked in Chrome:

After choosing the certificate, I was forwarded to the portal and successfully logged in. Here is how the prompt looked in Microsoft Edge:

For IE 11 and lower there is a setting to only prompt if you have more than one certificate. Launch Internet Options (Start -> Run -> inetcpl.cpl), go to Security -> Internet -> Custom Level, and under the Miscellaneous section enable the Don't prompt for client certificate selection when only one certificate exists option:

That should be it.

Troubleshooting SSL Authentication with IIS and ASP

Here are some troubleshooting tips for this setup. First, enable detailed errors for ASP applications. This is a two-step process: in IIS Manager, under the Application section, launch the ASP module and under the Debugging Properties set Send Errors To Browser to True:

Second, under the Error Pages module click on Edit Features Settings and in the Error Responses Section set it to Detailed Errors:

Then when you don't have the client SSL Certificate in the browser you will see this:

Without enabling that you will just see the following:

Another thing to check out is the Security Section in the Event Viewer (Start -> Run -> eventvwr):

I entered the wrong password under the mapping section and I saw the following event:

- System
  - Provider
   [ Name]  Microsoft-Windows-Security-Auditing
   [ Guid]  {54849625-5478-4994-A5BA-3E3B0328C30D} 
   EventID 4625
   Version 0
   Level 0
   Task 12544
   Opcode 0
   Keywords 0x8010000000000000
  - TimeCreated
   [ SystemTime]  2016-11-05T17:01:04.790771500Z
   EventRecordID 4208
  - Execution
   [ ProcessID]  464
   [ ThreadID]  3020
   Channel Security
- EventData
  SubjectUserSid S-1-5-82-3006700770-424185619-1745488364-794895919-4004696415
  SubjectUserName DefaultAppPool
  SubjectDomainName IIS APPPOOL
  SubjectLogonId 0x4f7fa
  TargetUserSid S-1-0-0
  TargetUserName karim
  TargetDomainName IWA
  Status 0xc000006d
  FailureReason %%2313
  SubStatus 0xc0000064
  LogonType 8
  LogonProcessName Advapi 
  AuthenticationPackageName Negotiate
  WorkstationName IWA
  TransmittedServices -
  LmPackageName -
  KeyLength 0
  ProcessId 0xbd8
  ProcessName C:\Windows\System32\inetsrv\w3wp.exe
  IpPort 59372

The failure reason can be looked up; in my case FailureReason %%2313 means an unknown user name or bad password (I found a forum post confirming that).
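The Status and SubStatus fields in that event are plain NTSTATUS codes; the two seen above decode to standard Windows values, which a small lookup helper illustrates:

```shell
# Decode the NTSTATUS Status/SubStatus values from the 4625 event above
decode() {
  case "$1" in
    0xc000006d) echo "STATUS_LOGON_FAILURE: unknown user name or bad password";;
    0xc0000064) echo "STATUS_NO_SUCH_USER: user name does not exist";;
    *)          echo "unknown status $1";;
  esac
}
decode 0xc000006d   # the Status value
decode 0xc0000064   # the SubStatus value
```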

User Attributes

In SecurID Access Change Attribute Mapping Name in Identity Sources we talked about how we can change attribute names. We also mentioned that we can change the attribute type; here is the use case:

  1. Change the Target Attribute Type of a Discovered attribute
    1. Let's say you wanted to treat a date as a string to use other policy operations

Let's go into this scenario.

Changing The Attribute Type

Depending on the type of the attribute we have certain policy operations available. Here are the available types:

  • datetime (accountExpires)
  • string (mail)
  • long (badPwdCount)
  • boolean (isDeleted)
  • double ()

If an attribute is of type datetime we can use the following policy operations on it:

  • Equals
  • Does not equal
  • Greater than
  • Greater than or equal
  • Less than
  • Less than or equal
  • Is null
  • Is not null



If an attribute is of type string, we can use the following policy operations on it:

  • Contains
  • Does not contain
  • Matches
  • Does not match
  • Starts with
  • Ends with
  • Equals
  • Does not equal
  • Is empty
  • Is not empty
  • Is null
  • Is not null
  • Set contains any
  • Set does not contain any
  • Set contains all
  • Set does not contain all


If an attribute is of type long or double, then we have the following policy operations on it (same as datetime):

  • Equal
  • Does not equal
  • Greater than
  • Greater than or equal
  • Less than
  • Less than or equal
  • Is null
  • Is not null

If an attribute is of type boolean, then we have the following policy operations on it: 

  • Equal
  • Does not equal
  • Is null
  • Is not null


So let's say I wanted to do a string match operation on a datetime attribute like accountExpires; you saw above which operations are available for it by default. So let's change the type mapping to string:

Now after making that change, if I choose that attribute I can have more policy operations:
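For context on why the raw value is awkward to work with: accountExpires is stored as a Windows FILETIME, i.e. 100-nanosecond intervals since January 1, 1601 UTC. A quick conversion (with a hypothetical attribute value) shows what the big integer actually represents:

```shell
# accountExpires is a FILETIME: 100-ns ticks since 1601-01-01 UTC
val=131300000000000000                     # hypothetical raw value
# ticks -> seconds, then shift the 1601->1970 epoch gap (11644473600 s)
secs=$(( val / 10000000 - 11644473600 ))
echo "$secs"                               # → 1485526400 (2017-01-27 UTC)
```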

Different Attribute Type for the same Attribute

Let's say we have two attributes with the same name but the types are different. Let's use the same example as before and utilize the mail attribute. I went ahead and changed the type on one of the identity sources to be boolean while I left the other one to be string:

and here is the other one:

Since both attributes are seen as one, when I check out the policy operations available for that attribute it actually only lists operations that apply to both:

User Attributes

After you add an Identity Source, the IDR connects to the AD or LDAP Server and parses all the available attributes. On the User Attributes page of the Identity Source, you will see a list of the discovered attributes:

You will also notice that next to each attribute there is a Mapping column with a pencil next to it. Upon clicking on that pencil you are able to change the Attribute Mapping:

There are a couple of use cases for changing the attribute mapping:

  1. With Multiple Identity Sources you can combine attribute names by setting their Target Attribute Name to be the same
    1. For example in AD you have sAMAccountName and in an LDAP server you might have uid
  2. Change the Target Attribute Type of a Discovered attribute (this is actually covered in SecurID Access: Change Attribute Mapping Type in Identity Sources)
    1. Let's say you wanted to treat a date as a string to use other policy operations

Let's check out the first scenario.

Combine Attributes Names

I have two Identity Sources, an LDAP server and an Active Directory server. In AD I use the sAMAccountName attribute for the User IDs, and in my LDAP server I use uid for User IDs. I have the same User ID in both Identity Sources:

~> ldapsearch -LLL -h -D "" -W -v -b "CN=Users,DC=singlepoint67,DC=com" -s sub "(sAMAccountName=karim)"
ldap_initialize( ldap:// )
Enter LDAP Password:
filter: (sAMAccountName=karim)
requesting: All userApplication attributes
dn: CN=karim demo,CN=Users,DC=singlepoint67,DC=com
objectClass: top
objectClass: person
objectClass: organizationalPerson
objectClass: user
cn: karim demo
sn: demo
givenName: karim
distinguishedName: CN=karim demo,CN=Users,DC=singlepoint67,DC=com
instanceType: 4
whenCreated: 20150406235849.0Z
whenChanged: 20160930162317.0Z
displayName: karim demo
name: karim demo
logonCount: 0
sAMAccountName: karim
sAMAccountType: 805306368
objectCategory: CN=Person,CN=Schema,CN=Configuration,DC=singlepoint67,DC=com

And here are the regular and operational attributes for the same user in the LDAP server:

 ~> ldapsearch -LLL -h -b "dc=singlepoint66,dc=com" -D "CN=directory manager" -W "(uid=karim)"
Enter LDAP Password:
dn: cn=Karim OpenDJ,dc=singlepoint66,dc=com
userPassword:: xx
givenName: Karim
objectClass: person
objectClass: inetOrgPerson
objectClass: organizationalPerson
objectClass: top
uid: karim
cn: Karim OpenDJ
sn: OpenDJ
~> ldapsearch -LLL -h -b "dc=singlepoint66,dc=com" -D "CN=directory manager" -W "(uid=karim)" "+"
Enter LDAP Password:
dn: cn=Karim OpenDJ,dc=singlepoint66,dc=com
numSubordinates: 0
structuralObjectClass: inetOrgPerson
etag: xx
pwdPolicySubentry: cn=Default Password Policy,cn=Password Policies,cn=config
subschemaSubentry: cn=schema
hasSubordinates: false
entryDN: cn=karim opendj,dc=singlepoint66,dc=com
entryUUID: xx
pwdChangedTime: 20160930163722.214Z
creatorsName: cn=Directory Manager,cn=Root DNs,cn=config
createTimestamp: 20160930163722Z

You will notice that the uid and sAMAccountName attributes are the same. So I went ahead and added both as Identity Sources in my SecurID Access environment:

Then on the LDAP Identity Source I searched for the uid attribute in the page:

Then I clicked on the Pencil under the Mapping column and set the Target Attribute Name to user_id:

Then if you go back to the User Attribute page, you can now search for the new name (user_id):

Then I went to the AD Identity Source and did the same thing for the sAMAccountName attribute:

Now I can use that name under policies, and it will show up as a valid attribute that you can select. I created a simple policy saying that if user_id equals karim, allow access:

Don't forget to choose both of the Identity Sources when adding the policy:

Now I can assign that policy to an application, and if I authenticate with either of the Identity Sources, I will see the application.

Separate Attributes Names (No Attribute Mapping)

Let's use a similar scenario as above: I have an LDAP server and an AD server, and a common attribute between those two servers is mail. It turns out you can't give the same attribute a unique target name per identity source. For example, I modified the mail attribute for both identity sources:

But if I try to use those new names only the second one shows up in the list:

If you are in a situation where you have multiple Identity Sources with same attributes and you want to create a policy that applies to only one, then just select the appropriate Identity Source when creating the policy:


O365 Client Access Policies

Without enabling Modern Authentication (ADAL) on O365, ADFS is able to provide flexible access policies. The configuration and details of these policies can be seen in these pages:

From the above pages we can see there are 4 main scenarios the policies cover:

  • Scenario 1 - Block all external access to Office 365
  • Scenario 2 - Block all external access to Office 365 except Exchange ActiveSync
  • Scenario 3 - Block all external access to Office 365 except browser-based applications
  • Scenario 4 - Block all external access to Office 365 except for designated Active Directory groups

For each of those scenarios custom claim types are generated and used to create the policy. Here is a note from the above pages:

The policies described in this article make use of two kinds of claims
  1. Claims AD FS creates based on information the AD FS and Web Application proxy can inspect and verify, such as the IP address of the client connecting directly to AD FS or the WAP.
  2. Claims AD FS creates based on information forwarded to AD FS by the client as HTTP headers

WS-Federation VS WS-Trust

Before we get into the scenarios it's important to understand WS-Federation (Passive Profile) VS WS-Trust (Active Profile). The Understanding WS-Federation page covers the topic in great detail. To summarize here are some excerpts from the page:

WS-Trust provides an additional piece of the foundation for federation by defining a service model, the Security Token Service (STS), and a protocol for requesting/issuing these security tokens which are used by WS-Security and described by WS-SecurityPolicy.


WS-Trust introduces protocol mechanisms independent of any particular application for requesting, issuing, renewing, cancelling and validating security tokens which can be exchanged to authenticate principals and protect resources. The core of this protocol is the request-response message pair, Request Security Token (RST) and Request Security Token Response (RSTR).

And here are some excerpts about WS-Federation:

A fundamental goal of WS-Federation is to simplify the development of federated services through cross-realm communication and management of Federation Services by re-using the WS-Trust Security Token Service model and protocol.
WS-Federation defines syntax for expressing the WS-Trust protocol and WS-Federation extensions in a web browser only environment using widely supported HTTP 1.1 mechanisms (GET, POST, redirects, and cookies). WS-Federation defines encoding rules that enable many of the WS-Trust protocol extensions to be accessible via HTTP 1.1 mechanisms by standard browser clients and web applications.

And from Windows Identity Foundation 101’s : WS-Federation Passive Requestor Profile (part 2 of 2) here is a pretty good diagram of WS-Federation Passive Profile:



  1. End-user attempts to access RP website
  2. Client is redirected to IdP website
  3. End-user logs in
  4. IdP sends a claim-rich token back to the client
  5. Client presents token to RP

In summary, WS-Trust is usually used for rich clients and requires an STS (Security Token Service) to function, while WS-Federation can be used for browser-based clients when the Passive Profile is utilized. People sometimes link WS-Trust with active endpoints and WS-Federation with passive endpoints.
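Step 2 of the passive-profile flow above is just a browser redirect to the IdP carrying WS-Federation query parameters (wa, wtrealm, and optionally wctx); a sketch with hypothetical IdP and RP hosts:

```shell
# Construct the step-2 WS-Federation sign-in redirect URL
# (hypothetical hosts; wa/wtrealm are the standard parameter names)
idp="https://idp.example.com/adfs/ls/"
wa="wsignin1.0"                            # sign-in action
wtrealm="https%3A%2F%2Frp.example.com%2F"  # URL-encoded RP realm
echo "${idp}?wa=${wa}&wtrealm=${wtrealm}"
```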

Scenario 1 without ADAL with ADFS

Here is the rule for the first scenario:

c1:[Type == "", Value == "false"] && c2:[Type == "", Value =~ "^(?!192\.168\.1\.77|10\.83\.118\.23)"] => issue(Type = "", Value = " DenyUsersWithClaim"); c:[] => issue(Type = "", Value = "true");

The most important new claim type is x-ms-forwarded-client-ip. The policy is basically checking whether the value of that claim type is an internal IP, and if it's not, ADFS denies access. From the Configuring Client Access Policies page, here is a note about that claim type:

This AD FS claim represents a “best attempt” at ascertaining the IP address of the user (for example, the Outlook client) making the request. This claim can contain multiple IP addresses, including the address of every proxy that forwarded the request.  This claim is populated from an HTTP header. The value of the claim can be one of the following:

  • A single IP address - The IP address of the client that is directly connected to Exchange Online
  • One or more IP addresses
    • If Exchange Online cannot determine the IP address of the connecting client, it will set the value based on the value of the x-forwarded-for header, a non-standard header that can be included in HTTP based requests and is supported by many clients, load balancers, and proxies on the market.

So Exchange Online creates this claim (either from an HTTP header set by a web proxy, or directly from the client itself).
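The claim-rule logic can be sketched outside of ADFS. Assuming the claim arrives as a comma-separated list of addresses, and reusing the two internal IPs from the rule above, deny whenever any address in the list falls outside the internal set:

```shell
# Sketch of the scenario-1 check: the x-ms-forwarded-client-ip claim
# may hold several IPs (client plus proxies); deny if any is external
claim="203.0.113.7,10.83.118.23"
internal='^(192\.168\.1\.77|10\.83\.118\.23)$'
deny=false
for ip in $(printf '%s' "$claim" | tr ',' ' '); do
  printf '%s' "$ip" | grep -Eq "$internal" || deny=true
done
echo "deny=$deny"   # 203.0.113.7 is external, so deny=true
```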

Scenario 1 with ADAL with ADFS

If ADAL is enabled then the active endpoint is no longer used; everything happens over the passive endpoint. There are actually a couple of good sites that talk about Modern Authentication and the Access Policies:

By using the insidecorporatenetwork claim type ADFS is able to check if the client is internal or external. From the above links:

reminder: this claim is added by the WAP server or any other AD FS proxy replacement


Similar to the above x-ms-proxy claim type, this claim type indicates whether the request has passed through the web application proxy. Unlike x-ms-proxy, insidecorporatenetwork is a boolean value with True indicating a request directly to the federation service from inside the corporate network.

So using external tools we can figure out if the request is internal or external. Here is a note about active vs passive:

With modern authentication, all clients will use Passive Flows (WS-Federation), and will appear to be browser traffic to AD FS.


Note that we didn’t include a check for which endpoint the request came from. The reason being that with Modern authentication, every request from ADAL-enabled clients will be hitting the passive endpoint.

Scenario 1 with ADAL with RSA Via Access

With the current policy capabilities we can create a rule that checks the IP from which the request came and makes a decision on it. For example:

If not internal deny
else allow all

I ended up using the following regular expression to figure out if the IP is not internal:
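The exact expression was captured as a screenshot; as a rough stand-in sketch (using the RFC 1918 private ranges as hypothetical "internal" addresses), the same check can be prototyped by matching internal ranges positively and negating the exit status, rather than relying on a negative lookahead:

```shell
# Stand-in for the screenshotted "not internal" regex: match internal
# ranges (RFC 1918 here, hypothetical) and invert the result
internal='^(10\.|192\.168\.|172\.(1[6-9]|2[0-9]|3[01])\.)'
check() {
  if printf '%s' "$1" | grep -Eq "$internal"; then
    echo "internal"
  else
    echo "external"
  fi
}
check 10.83.118.23   # → internal
check 203.0.113.7    # → external
```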


Here is how the policy looked in the Access Console:


That will be the important rule.

Scenario 2 without ADAL with ADFS

Here is the rule for the second scenario:

c1:[Type == "", Value == "false"] && c2:[Type == "", Value =~ "^(?!192\.168\.1\.77|10\.83\.118\.23)"] => issue(Type = "http://custom/ipoutsiderange", Value = "true"); c1:[Type == "http://custom/ipoutsiderange", Value == "true"] && c2:[Type == "", Value != "Microsoft.Exchange.ActiveSync"] => issue(Type = "", Value = "DenyUsersWithClaim"); NOT EXISTS([Type == ""]) => add(Type = "http://custom/xmsapplication", Value = "fail"); c1:[Type == "http://custom/ipoutsiderange", Value == "true"] && c2:[Type == "http://custom/xmsapplication", Value == "fail"] => issue(Type = "", Value = "DenyUsersWithClaim"); c:[] => issue(Type = "", Value = "true");

The rules get pretty creative with figuring out if the access is internal or external, but the most important rule is the one using the x-ms-client-application claim type. Here is a note about this claim type:


This AD FS claim represents the protocol used by the end client, which corresponds loosely to the application being used.  This claim is populated from an HTTP header that is currently only set by Exchange Online, which populates the header when passing the authentication request to AD FS. Depending on the application, the value of this claim will be one of the following:

  • In the case of devices that use Exchange Active Sync, the value is Microsoft.Exchange.ActiveSync.
  • Use of the Microsoft Outlook client may result in any of the following values:
    • Microsoft.Exchange.Autodiscover
    • Microsoft.Exchange.OfflineAddressBook
    • Microsoft.Exchange.RPC
    • Microsoft.Exchange.WebServices
    • Microsoft.Exchange.Mapi
  • Other possible values for this header include the following:
    • Microsoft.Exchange.Powershell
    • Microsoft.Exchange.SMTP
    • Microsoft.Exchange.PopImap

So Exchange Online generates that header, includes it in the request to ADFS, and ADFS passes that header/claim through and uses it for its policy. Looking over Limit Access to Office 365 Based on the Location of Client, the page covers some of the HTTP headers that are passed to an STS from a client:

Office 365 sends information about application name, client IP, useragent, proxy information to STS as part of HTTP request. Solution to restrict user access to STS can be implemented via ServletFilter. Filter will look for following header names:
  • x-ms-client-application
  • x-ms-forwarded-client-ip
  • x-ms-client-user-agent

Scenario 2 with ADAL with ADFS

Here is a note from one of the above sites (talking about the x-ms-client-application claim type):

As some of you might recall, this claim is only available for Exchange Online related requests and is inserted by the Exchange server during the process of proxying the authentication request to the AD FS on behalf of the client. With modern authentication enabled, this claim will simply not be present in the request, as the client now gets the token directly from the AD FS server and the Exchange server plays no role in the process.

So now we can use the x-ms-client-user-agent claim type instead. Restrict iOS apps which can access Office 365 services (ADFS required) has examples of what some user agents look like:

Here’s the user agent of Microsoft Outlook/Word/Excel/PowerPoint/OneDrive/Intune Managed Browser on an iOS 8.4.1 device.

Mozilla/5.0 (iPhone; CPU iPhone OS 8_4_1 like Mac OS X) AppleWebKit/600.1.4 (KHTML, like Gecko) Mobile/12H321

And this is an example of user agent of Safari browser on iOS 8.4.1:

Mozilla/5.0 (iPhone; CPU iPhone OS 8_4_1 like Mac OS X) AppleWebKit/600.1.4 (KHTML, like Gecko) Version/8.0 Mobile/12H321 Safari/600.1.4

Last example is the user agent of Chrome browser on iOS 8.4.1:

Mozilla/5.0 (iPhone; CPU iPhone OS 8_4_1 like Mac OS X) AppleWebKit/600.1.4 (KHTML, like Gecko) CriOS/44.0.2403.67 Mobile/12H321 Safari/600.1.4

Microsoft Word for Android user agent:

Mozilla/5.0 (Linux; Android 5.0.1; GT-I9505 Build/LRX22C; wv) AppleWebKit/537.36 (KHTML, like Gecko) Version/4.0 Chrome/46.0.2490.76 Mobile Safari/537.36 PKeyAuth/1.0

Intune Managed Browser for Android user agent:

Mozilla/5.0 (Linux; Android 5.0.1; GT-I9505 Build/LRX22C; wv) AppleWebKit/537.36 (KHTML, like Gecko) Version/4.0 Chrome/46.0.2490.76 Mobile Safari/537.36

So then, using that claim type, the following rule was created on that page:

exists([Type == "", Value =~ "^Mozilla\/5.0 \((iPhone|iPad|iPod); CPU iPhone (?:OS\s*\d+_\d+(?:_\d+)?\s*)? like Mac OS X\) (?:AppleWebKit\/\d+(?:\.\d+(?:\.\d+)?|\s*\+)?\s*)? \(KHTML, like Gecko\) (?:Mobile\/\w+\s*)?$"]) => issue(Type = "", Value = "true");
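That rule's regular expression can be sanity-checked against the sample user agents above (straight quotes are substituted here for readability, the `\/` escapes are dropped, and GNU grep with PCRE support is assumed for the `(?:...)` groups):

```shell
# Test the iOS-app claim-rule regex against the UA samples above
re='^Mozilla/5.0 \((iPhone|iPad|iPod); CPU iPhone (?:OS\s*\d+_\d+(?:_\d+)?\s*)? like Mac OS X\) (?:AppleWebKit/\d+(?:\.\d+(?:\.\d+)?|\s*\+)?\s*)? \(KHTML, like Gecko\) (?:Mobile/\w+\s*)?$'
app='Mozilla/5.0 (iPhone; CPU iPhone OS 8_4_1 like Mac OS X) AppleWebKit/600.1.4 (KHTML, like Gecko) Mobile/12H321'
safari='Mozilla/5.0 (iPhone; CPU iPhone OS 8_4_1 like Mac OS X) AppleWebKit/600.1.4 (KHTML, like Gecko) Version/8.0 Mobile/12H321 Safari/600.1.4'
printf '%s' "$app"    | grep -Pq "$re" && echo "app UA matches"
printf '%s' "$safari" | grep -Pq "$re" || echo "Safari UA does not match"
```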

BTW here is the description of that claim type:


This AD FS claim provides a string to represent the device type that the client is using to access the service. This can be used when customers would like to prevent access for certain devices (such as particular types of smart phones). Example values for this claim include (but are not limited to) the values below.

The below are examples of what the x-ms-user-agent value might contain for a client whose x-ms-client-application is “Microsoft.Exchange.ActiveSync”


  • Vortex/1.0
  • Apple-iPad1C1/812.1
  • Apple-iPhone3C1/811.2
  • Apple-iPhone/704.11
  • Moto-DROID2/4.5.1
  • SAMSUNGSPHD700/100.202
  • Android/0.3


It is also possible that this value is empty.

Since this claim is created not by Exchange Online but by ADFS itself, we can still use it with ADAL clients.

Scenario 2 with ADAL with RSA Via Access

With the current policy set we can also utilize the User-Agent HTTP header. Looking over Browser detection using the user agent, I ended up creating the following regex to determine if the connection is coming from a mobile device/browser:


And then I used that to allow user agents that are mobile-based. Here is how the policy looks in the Access Console:



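The exact expression was shown as a screenshot; as a stand-in, the MDN page referenced above boils mobile detection down to looking for "Mobi" anywhere in the user agent (plus "Android" for older Android browsers), which can be prototyped with grep against the samples quoted earlier:

```shell
# Rough mobile-UA check per MDN's "look for Mobi" advice
mobile='Mobi|Android'
ios='Mozilla/5.0 (iPhone; CPU iPhone OS 8_4_1 like Mac OS X) AppleWebKit/600.1.4 (KHTML, like Gecko) Version/8.0 Mobile/12H321 Safari/600.1.4'
desktop='Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/49.0.2623.112 Safari/537.36'
printf '%s' "$ios"     | grep -Eq "$mobile" && echo "mobile"
printf '%s' "$desktop" | grep -Eq "$mobile" || echo "desktop"
```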
Scenario 3 without ADAL with ADFS

Here is what the rule looks like:

c1:[Type == "", Value == "false"] && c2:[Type == "", Value =~ "^(?!192\.168\.1\.77|10\.83\.118\.23)"] => issue(Type = "http://custom/ipoutsiderange", Value = "true"); c1:[Type == "http://custom/ipoutsiderange", Value == "true"] && c2:[Type == "", Value != "/adfs/ls/"] => issue(Type = "", Value = " DenyUsersWithClaim"); c:[] => issue(Type = "", Value = "true");

A similar process is used for the internal IP testing; the important claim type here is x-ms-endpoint-absolute-path, and here is the information about that claim type from the same page:


This claim type can be used for determining requests originating from “active” (rich) clients versus “passive” (web-browser-based) clients. This enables external requests from browser-based applications such as the Outlook Web Access, SharePoint Online, or the Office 365 portal to be allowed while requests originating from rich clients such as Microsoft Outlook are blocked.

The value of the claim is the name of the AD FS service that received the request.


We can see from the rule that we check whether the request is coming into the passive endpoint (/adfs/ls/); if it's not, we know it's going to the active endpoint (which I believe is /adfs/services/trust/2005/usernamemixed.. or any other endpoint, really) and it will therefore be denied.

Scenario 3 with ADAL with ADFS

From Office 2013 and Office 365 ProPlus modern authentication and client access filtering policies : Things to know before onboarding, it's mentioned that this scenario is no longer supported:


This scenario is not yet supported for public preview and we recommend organizations that rely on this scenario to not onboard their tenants for modern authentication.


If scenario # 3 applies to you, and you enable modern authentication on your tenant, rich clients (Outlook and other Office apps) will be able to bypass your client access filtering policies in AD FS and access resources like Exchange Online and SharePoint Online.


This kind of makes sense, since all ADAL-enabled clients will only use the passive endpoint, just like browser-based clients.

Scenario 3 with ADAL with RSA Via Access

I was testing out some of the User Agent stuff and here is what I saw:


Office 2016 Outlook
Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.1; WOW64; Trident/7.0; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; Microsoft Outlook 16.0.6769)

Android Nexus 5.0
Mozilla/5.0 (Linux; Android 6.0.1; Nexus 5 Build/MMB29X; wv) AppleWebKit/537.36 (KHTML, like Gecko) Version/4.0 Chrome/44.0.2403.119 Mobile Safari/537.36 PKeyAuth/1.0

Mac OS X Chrome
Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/49.0.2623.112 Safari/537.36

Windows Chrome
Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML,like Gecko) Chrome/49.0.2623.112 Safari/537.36

Mozilla/5.0 (iPhone; CPU iPhone OS 9_2_1 like Mac OS X) AppleWebKit/601.1.46 (KHTML, like Gecko) Version/9.0 Mobile/13D15 Safari/601.1

Mac OS Firefox
Mozilla/5.0 (Macintosh; Intel Mac OS X 10.11; rv:45.0) Gecko/20100101 Firefox/45.0

Office 2016 Word/PowerPoint/Excel
Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.1; WOW64; Trident/7.0; SLCC2; .NET CLR 2.0.50727; .NET CLR 3. 5.30729; .NET CLR 3.0.30729; .NET4.0C; .NET4.0E)

IE 11 Windows
Mozilla/5.0 (Windows NT 6.1; WOW64; Trident/7.0; rv:11.0) like Gecko

So we can see that non-Outlook rich clients show up as Internet Explorer 7 (MSIE 7.0; here is a full list of IE User Agents: Internet Explorer User Agent Strings). So we can basically say that if the User Agent is not Outlook or IE 7 (hopefully no one is using that old browser) then we are dealing with a regular browser (you can get creative and block mobile browsers as well if you'd like). Here is the regex I ended up using:

(Outlook|MSIE\ 7\.0\;)

And the rule looked like this in the Access Console:


Then when I tried to login from a rich client it blocked the access:


Another thing we can do is use Windows Intune to block ActiveSync or mailbox access at the Exchange Online level (a post for another time). Here are a couple of sites that talk about that setup:

Restrict access to email and O365 services with Microsoft Intune
Conditional Access in Configuration Manager
The evolution of access control: from VPN to identity-based anywhere access

Scenario 4 without ADAL with ADFS

This one doesn't use any special headers; it just uses the groupsid claim. Here is the rule:

c1:[Type == "", Value =~ "^(?!192\.168\.1\.77|10\.83\.118\.23)"] && c2:[Type == "", Value == "false"] => issue(Type = "http://custom/ipoutsiderange", Value = "true"); NOT EXISTS([Type == "", Value == "S-1-5-32-100"]) => add(Type = "http://custom/groupsid", Value = "fail"); c1:[Type == "http://custom/ipoutsiderange", Value == "true"] && c2:[Type == "http://custom/groupsid", Value == "fail"] => issue(Type = "", Value = "DenyUsersWithClaim"); c:[] => issue(Type = "", Value = "true");

The rule just checks whether the groupsid claim is there (this claim is created by ADFS, not by any proxy), and if it's not, the user is denied.

Scenario 4 with ADAL with ADFS

This will keep working without any changes, since the policy mainly depends on the groupsid claim type.

Scenario 4 with ADAL with RSA Via Access

The same goes for RSA Via here: we can create a policy based on group membership and deny or allow access accordingly. The configuration is covered in a previous post: Combining Multiple Rules Into A Single Rule to Create Complex Rule Sets.

This guide assumes you have done the following steps:

  • Install ADFS
  • Add SSL certificate to ADFS
  • Configure ADFS for your domain

Confirm Active Directory is Under the Attribute Stores

Launch the "ADFS 2.0 Management" Console from the "Start Menu" -> "All Programs" -> "Administrative Tools":
Once launched expand "Trust Relationships" and click on "Attribute Stores" and you should see "Active Directory" under the list:


Confirm Active Directory is Added to the Claims Provider Trusts

In the ADFS 2.0 Management Console, click on "Claims Provider Trusts" and make sure AD is in the list:


Export the Token Signing ADFS Certificate

We will upload this cert when setting up ADFS as an IdP, and it will be used to sign SAML responses/requests. Launch the ADFS 2.0 Management Console, expand "Service", and then click on "Certificates":


Right click on the "Token-Signing" certificate and select "View Certificate":

Then click on the "Details" tab and click on "Copy to File":


At this point a wizard will start:

Click next and select the format to be "Base-64 encoded X.509 (.CER)":

Follow the rest of the prompts to place the exported certificate on the Desktop. If for some reason someone exports the cert in DER encoded format we will have to convert the certificate to PEM format. Copy the file to a *nix system and run the following to convert it to regular PEM format:


$ openssl x509 -in adfs_pub_token_sign_cert.cer -inform DER -out adfs_pub_token_sign_cert.pem -outform PEM

Add IDR to the ADFS Relying Party Trusts

From the ADFS 2.0 Management Console, right click on "Relying Party Trusts" and select "Add Relying Party Trust":


At this point you will see the Add Relying Party Trust Wizard:


Click Start and select "Enter data about relying party manually":


Click Next and enter a desired and meaningful name (I chose viasso):


Click Next and select "AD FS 2.0 Profile":


Click Next. Since we are going to use the Token-Signing Certificate from ADFS, we won't need to upload a token encryption certificate.

So on this page, just click "Next". Then select "Enable support for the SAML 2.0 WebSSO protocol" and enter the "Relying party SAML 2.0 SSO service URL". This URL will be in the following format:



The ISSUER_ID has to match the Identifier Name that we create later. Here is mine filled out:




Click Next and for the "Relying party trust identifier", make sure this matches the IssuerID you specified in the SSO URL from the previous screen(I called it viasso) and then click Add:


Click Next and select "Permit all users to access this relying party":


Click Next and you will see the Summary page:


Click Next and then leave the "Open the Edit Claim Rules dialog for this relying party trust when the wizard closes" check box selected:


Click Close and the "Edit claim Rules" dialog will show up:


Click Add Rule and make sure the Claim rule template has Send LDAP Attributes as Claims selected:


Click Next, give the Claim Rule a name, and select the Attribute store to be our Active Directory. Then for the LDAP Attribute select "SAM-Account-Name" and for the Outgoing Claim Type select "Name ID":




Click Finish and you should see the claim rule added:




Lastly, make sure a Permit Access to All Users rule is present under the "Issuance Authorization Rules" tab:


The last thing to do is make sure we use SHA-1 as our hashing algorithm. Right click on our Relying Party and select Properties:


Then go to the Advanced tab and change the hashing algorithm to SHA-1:


Confirm Relying Trust Party is Configured and Try an IdP initiated Login from ADFS

From a client machine go to https://ADFS_SERVER/adfs/ls/IdpInitiatedSignOn.aspx and you should see the following:


Upon selecting "Sign in to one of the following site" and then clicking to "Continue to Sign In" you should get logged into the portal (if you are using IE and you are on the domain. If using firefox, you will have to enter your domain credentials).

Configure the ADFS IdP in RSA Via Access Console

In the Access Console go to Users -> Identity Providers -> Add an Identity Provider and select SAML 2.0 IDP:


Give it a useful name:


Then in the configuration I set the following parameters:

Here is how it looked like in the UI:

and here is the imported cert:

Click Next and Save the configuration. Note: you can figure out the IssuerID that ADFS uses by clicking on "Edit Federation Service Properties" from the main screen:


and then you will see the Federation Service Identifier:


Test ADFS IdP From Portal (SP Initiated Login)

Go to the portal and click on the ADFS IdP (on the right side of the ribbon):


You will be logged into the portal. On the ADFS server you can check the audit logs, and you should see a Special Logon for your user:



This guide is intended to provide instructions on how to configure vCloud Director as an SP (Service Provider) and RSA Via Access as an IdP (Identity Provider). Before we get started I will use these URLs throughout the guide:

Export vCloud Director Metadata

If you would like, you can also export the metadata from vCloud Director. The URL for the metadata is the following:

If you export the metadata you will get something like this:

<?xml version="1.0" encoding="UTF-8"?>
<md:EntityDescriptor entityID="" xmlns:md="urn:oasis:names:tc:SAML:2.0:metadata">
    <md:SPSSODescriptor AuthnRequestsSigned="true" WantAssertionsSigned="true" protocolSupportEnumeration="urn:oasis:names:tc:SAML:2.0:protocol">
        <md:KeyDescriptor use="signing">
            <ds:KeyInfo xmlns:ds="">
        <md:KeyDescriptor use="encryption">
            <ds:KeyInfo xmlns:ds="">
        <md:SingleLogoutService Binding="urn:oasis:names:tc:SAML:2.0:bindings:HTTP-POST" Location=""/>      
        <md:SingleLogoutService Binding="urn:oasis:names:tc:SAML:2.0:bindings:HTTP-Redirect" Location=""/>        
        <md:SingleLogoutService Binding="urn:oasis:names:tc:SAML:2.0:bindings:SOAP" Location=""/>       
       <md:AssertionConsumerService Binding="urn:oasis:names:tc:SAML:2.0:bindings:HTTP-POST" Location="" index="0" isDefault="true"/>   
      <md:AssertionConsumerService Binding="urn:oasis:names:tc:SAML:2.0:profiles:holder-of-key:SSO:browser" Location="" hoksso:ProtocolBinding="urn:oasis:names:tc:SAML:2.0:bindings:HTTP-POST" index="1" xmlns:hoksso="urn:oasis:names:tc:SAML:2.0:profiles:holder-of-key:SSO:browser"/>


If you import the metadata, it will configure the connector to encrypt the assertion and to validate the signed AuthN request, but you can actually get by without those. Here is how the import will look:



Configure RSA Via Manually For vCloud Director

The VMware site Enable Your Organization to Use an SAML Identity Provider has most of the requirements:

Create an XML file with the following metadata from your SAML identity provider.
  • The location of the single sign-on service
  • The location of the single logout service
  • The location of the service's X.509 certificate
Configure your SAML provider to provide tokens with the following attribute mappings.
  • email address = "EmailAddress"
  • user name = "UserName"
  • full name = "FullName"
  • user's groups = "Groups"

Let's start on the RSA SecurID side and create the connector. For the configuration we can use the following:

Prepare SAML Metadata XML for vCloud Director


After the RSA SecurID application is created, we can export the SAML metadata and modify it so it can be successfully imported into vCloud Director. Go back to Application -> My Applications, click on the drop-down menu for the application, and click Export Metadata:




By default the XML will look like this:


<?xml version="1.0" encoding="UTF-8"?>
<md:EntityDescriptor xmlns:md="urn:oasis:names:tc:SAML:2.0:metadata" entityID="vdirector_via">
        <md:KeyDescriptor use="signing">
            <ds:KeyInfo xmlns:ds="">
    <md:SingleSignOnService Binding="urn:oasis:names:tc:SAML:2.0:bindings:HTTP-POST" Location=""/>
For the XML to be valid we need to add two sections: the Logout URL and the Attributes. For the Logout URL we just need to add the following into the XML:

<SingleLogoutService Binding="urn:oasis:names:tc:SAML:2.0:bindings:HTTP-Redirect" Location="https://RSA_VIA_PORTAL_URL/LogoutServlet"/>

In my case it was this:

<SingleLogoutService Binding="urn:oasis:names:tc:SAML:2.0:bindings:HTTP-Redirect" Location=""/>

Adding the Attributes Entries into the SAML XML

We have the list from above of what needs to be included in the XML. There is also a page from vCloud Air that talks about these: Enabling and Managing Federation. From that page:

Download the appropriate SAML metadata in XML format from your identity provider. The SAML metadata must provide mappings for the user attributes shown in this XML fragment:

    FriendlyName="Subject Type"
So I ended up creating the following attributes in the XML:
<Attribute Name="" NameFormat="urn:oasis:names:tc:SAML:2.0:attrname-format:uri" FriendlyName="EmailAddress" xmlns="urn:oasis:names:tc:SAML:2.0:assertion"/>
<Attribute Name="" NameFormat="urn:oasis:names:tc:SAML:2.0:attrname-format:uri" FriendlyName="FullName" xmlns="urn:oasis:names:tc:SAML:2.0:assertion"/>
<Attribute Name="" NameFormat="urn:oasis:names:tc:SAML:2.0:attrname-format:uri" FriendlyName="Groups" xmlns="urn:oasis:names:tc:SAML:2.0:assertion"/>
<Attribute Name="" NameFormat="urn:oasis:names:tc:SAML:2.0:attrname-format:uri" FriendlyName="UserName" xmlns="urn:oasis:names:tc:SAML:2.0:assertion"/>
In the end here is what I ended up with:
<?xml version="1.0" encoding="UTF-8"?>
<EntityDescriptor xmlns="urn:oasis:names:tc:SAML:2.0:metadata" entityID="">
    <IDPSSODescriptor protocolSupportEnumeration="urn:oasis:names:tc:SAML:2.0:protocol">
        <KeyDescriptor use="signing">
            <KeyInfo xmlns="">
        <SingleLogoutService Binding="urn:oasis:names:tc:SAML:2.0:bindings:HTTP-Redirect"
        <SingleSignOnService Binding="urn:oasis:names:tc:SAML:2.0:bindings:HTTP-POST"
        <Attribute Name="" NameFormat="urn:oasis:names:tc:SAML:2.0:attrname-format:uri" FriendlyName="EmailAddress" xmlns="urn:oasis:names:tc:SAML:2.0:assertion"/>
        <Attribute Name="" NameFormat="urn:oasis:names:tc:SAML:2.0:attrname-format:uri" FriendlyName="FullName" xmlns="urn:oasis:names:tc:SAML:2.0:assertion"/>
        <Attribute Name="" NameFormat="urn:oasis:names:tc:SAML:2.0:attrname-format:uri" FriendlyName="Groups" xmlns="urn:oasis:names:tc:SAML:2.0:assertion"/>
        <Attribute Name="" NameFormat="urn:oasis:names:tc:SAML:2.0:attrname-format:uri" FriendlyName="UserName" xmlns="urn:oasis:names:tc:SAML:2.0:assertion"/>
NOTE: I also noticed that if the SSL certificate contains any newline characters, vCloud Director doesn't like that. So make sure you remove any carriage returns from the SSL certificate in the XML. I also had to remove the md and ds prefixes on the XML entries.
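Editing the exported XML by hand works, but the same transformation can be scripted. Here is a rough Python sketch using only the standard library; the metadata snippet and the logout URL are placeholders, and the Name values on the attributes are left for you to fill in:

```python
import xml.etree.ElementTree as ET

MD = "urn:oasis:names:tc:SAML:2.0:metadata"
SAML = "urn:oasis:names:tc:SAML:2.0:assertion"
ET.register_namespace("", MD)

# Minimal stand-in for the metadata exported from the Access console.
exported = ('<EntityDescriptor xmlns="%s" entityID="vdirector_via">'
            '<IDPSSODescriptor/></EntityDescriptor>' % MD)
root = ET.fromstring(exported)
idp = root.find("{%s}IDPSSODescriptor" % MD)

# Add the SingleLogoutService entry that vCloud Director expects.
ET.SubElement(idp, "{%s}SingleLogoutService" % MD, {
    "Binding": "urn:oasis:names:tc:SAML:2.0:bindings:HTTP-Redirect",
    "Location": "https://RSA_VIA_PORTAL_URL/LogoutServlet",  # placeholder URL
})

# Add the four attribute mappings vCloud Director requires.
for friendly in ("EmailAddress", "FullName", "Groups", "UserName"):
    ET.SubElement(idp, "{%s}Attribute" % SAML, {
        "NameFormat": "urn:oasis:names:tc:SAML:2.0:attrname-format:uri",
        "FriendlyName": friendly,
    })

print(ET.tostring(root, encoding="unicode"))
```

The printed document carries the logout entry and the four required FriendlyName attributes, mirroring the hand-edited metadata above.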
Enabling SAML SSO in vCloud Director


Now for the fun stuff. Log in as an Organization Administrator to vCloud Director and you will see the Administration tab and the Federation option within it:



Then check the Use SAML Identity Provider check box and either upload the file or just paste the XML (I just pasted it):




Upon hitting Apply it should accept the configuration. If the XML is malformed or missing any fields, you will just get a generic message saying:

The provided metadata is not a valid SAML 2.0 metadata document



Add SAML User to vCloud Director

After the federation is enabled there will be a new option under Administration -> Users to import users:


After you click Import Users you can then enter a list of SAML users you want to add:



As you can see I just added one user called devuser. After it's imported you will see the user under the users section:




You can see the type for my test user is SAML. If you check out the properties of the user you will see they are empty:


This is expected since the user hasn't logged into vCloud Director with RSA Via as the IdP. Since we configured the connector to send extended attributes those will be sent in the assertion when the user tries to login.

Logging Directly to vCloud Director After Federation is Enabled

You can still log in as a local user. By default, if you visit the vCloud_ORG_URL you will be forwarded to the IdP. If you go to vCloud_ORG_URL/login.jsp you can still log in with local users.

This is a continuation of the Setting up an access policy for users on or off the corporate network with RSA Via Access blog post. In that post we saw how we can create a policy to match on IP ranges. Now let's get a little creative with our rule sets.

Creating a Rule Set to Match Multiple Users

Let's say we were trying to create a policy to allow access to an application for multiple users. In Active Directory (AD) their usernames (Or sAMAccountName Attributes) were as follows:

  • jason
  • karim
  • lenore

With this information we can create a rule per username to be:

  • sAMAccountName EQUAL jason
  • sAMAccountName EQUAL karim
  • sAMAccountName EQUAL lenore

In the Access Console our matching criteria would look like this:



Notice our Match directive is set to Any; this is the same thing as a logical OR, meaning the above Rule Set translates to this:


Rather than creating policies with multiple users in them, I would recommend creating a Group in AD and then creating a policy based on that AD Group. But just to illustrate a point let's pretend that we can't create groups within AD.

Combining Multiple Rules into a Single Rule with Usernames


Now instead of creating three different rules we can create one. There is an operator called Set Contains Any:




and it actually works on single-valued attributes (like sAMAccountName). So when creating our rule it would look like this:


And in the end our Matching Criteria looks like this:


I have the Match directive set to Any, but since there is only one rule it doesn't really matter. The Set Contains Any operator is itself another logical OR operator. So the Rule Set would translate to:

IF [ SAMACCOUNTNAME SET_CONTAINS_ANY (  jason OR karim OR lenore ) ]

Combining Multiple Rules into a Single Rule with IP Addresses


In the first blog post we saw multiple rules created to match multiple IP ranges. Currently in the Access Console the IP Address attribute is treated as a string, therefore operators like Starts_With and Ends_With are applicable. Another operator which can be used with strings is the Matches operator, which allows you to match strings using regular expressions. I am not going to dig deep into what regular expressions are, but they basically allow you to do really advanced pattern matching with strings. So let's say our network engineer told us that the following CIDR (Classless Inter-Domain Routing) blocks represent our internal network:




If we convert the CIDR Blocks to IP ranges we get the following:


$ ipcalc
Address:           00001010.00001010.0000101 0.00000000
Netmask: = 23   11111111.11111111.1111111 0.00000000
Wildcard:            00000000.00000000.0000000 1.11111111
Network:        00001010.00001010.0000101 0.00000000
HostMin:           00001010.00001010.0000101 0.00000001
HostMax:         00001010.00001010.0000101 1.11111110
Broadcast:         00001010.00001010.0000101 1.11111111
Hosts/Net: 510                   Class A, Private Internet


We just need to check out the HostMin and HostMax values, so our range is - We can see that with basic string matching we could create two rules:

  • IP_Address Starts_With 10.10.10
  • IP_Address Starts_With 10.10.11

Doing the same thing for the other CIDR Block, we get the following:


$ ipcalc
Address:         11000000.10101000.00001010. 00000000
Netmask: = 24   11111111.11111111.11111111. 00000000
Wildcard:            00000000.00000000.00000000. 11111111
Network:      11000000.10101000.00001010. 00000000
HostMin:         11000000.10101000.00001010. 00000001
HostMax:       11000000.10101000.00001010. 11111110
Broadcast:       11000000.10101000.00001010. 11111111
Hosts/Net: 254                   Class C, Private Internet


Again looking over the HostMin and HostMax, our IP range is - It looks like the third octet doesn't change, so we can cover this in one rule:


  • IP_Address Starts_With 192.168.10


With three rules we can cover the specified network ranges. Now let's get creative and combine our three rules into one. Using the matches operator we can create a rule like this:


  • IP_Address Matches (^10\.10\.10\.|^10\.10\.11\.|^192\.168\.10\.)


The caret (^) translates to Starts_With, the pipe (|) translates to an OR, and the backslash (\) escapes the special character period (.), which would otherwise match any character. (This is a very simple regular expression and other more detailed examples exist, e.g. RegEx Magic IPv4 Patterns.) Here is how it looks when entering the rule in the Access Console:


And in the end here is the Matching Criteria:


The above rule would translate to:

IF [ IP_ADDRESS MATCHES ( (STARTS_WITH 10.10.10) OR (STARTS_WITH 10.10.11) OR (STARTS_WITH 192.168.10) ) ]

If the CIDR mask is less than /24, then the regular expressions will get more creative.

Creating Complex Rule Sets with Combined Rules

Now why did we do all this hard work to combine our rules? Here is where it will pay off. Let's say we wanted to create a rule that says:


  • Match For Specific User And if on Internal Network


So we could just create this in the Access Console:


  • sAMAccountName EQUALS jason OR karim OR lenore (AND)
  • IP_Address STARTS_WITH 10.10.10 OR 10.10.11 OR 192.168.10

Here is how the Matching Criteria will look in the Access Console:


This time around I set the Match directive to All; this is the same thing as a logical AND, meaning the above Rule Set translates to this:

IF [ SAMACCOUNTNAME SET_CONTAINS_ANY ( jason OR karim OR lenore ) ] AND [ IP_ADDRESS MATCHES ( (STARTS_WITH 10.10.10) OR (STARTS_WITH 10.10.11) OR (STARTS_WITH 192.168.10) ) ]

And that should cover it.

Whenever working with HTTP-Federation (H-Fed) applications, Key Chains always come into play. For every H-Fed application a user has a corresponding Key Chain with the credentials for that application. There are rare times when populating the Key Chains automatically is helpful/necessary, although this is not the only use case for the Key Chain CLI. Here are some other use cases that come to mind for the Key Chain CLI:


  • Populating Users' Keychains for H-Fed Applications automatically
  • Creating a "Snapshot" of Users' Keychains (This is not a backup, passwords are not exported)
  • Migrate Users' Key Chains (If the user's Username has changed for some reason)
  • Creating Usage Reports (Listing and counting configured H-Fed applications and Key Chains)


So let's try out the Key Chain CLI on a Windows Machine and a Linux Machine.

Key Chain CLI Prerequisites

There are a couple of prerequisites that are worth mentioning in order to successfully use the Key Chain CLI. Here is a small, and possibly not comprehensive, list:


  • The CLI needs to run against the IDR's management interface, not the portal interface
    • This means the machine that is running the CLI needs to have access to the Management Interface of the IDR
  • The CLI uses HTTPS to connect to the IDR
  • The CLI requires Java version 1.7 or higher

Get the SDK Files

The Key Chain CLI is part of the SDK zip archive; you can download the SDK zip from RSA Via Access - Application Portal Integration API. After downloading the zip file, extract the contents to an easily accessible directory. I will refer to this as the SWS_SDK_HOME directory.

Windows - Configure Environment Variables

In my testing I had Java 1.8:


C:\>java -version
java version "1.8.0_66"
Java(TM) SE Runtime Environment (build 1.8.0_66-b18)
Java HotSpot(TM) Client VM (build 25.66-b18, mixed mode, sharing)


To ease the sourcing of the environment variables, I like to create a script. On my Windows machine here is the script:


C:\>type c:\SWSSDK\key_cli.bat
@echo off
set JAVA_HOME=C:\Program Files (x86)\Java\jre1.8.0_66
::The SWS_SERVER is the management IP Address of the IDR

Now we can just run the following to set and confirm all the environment variables:


C:\>set SWS

Lastly, we can make sure the idr.cmd command is in our path (the commands are appended with .cmd by default, so you don't have to type out the full command):


C:\>idr.cmd --version

Linux - Configure Environment Variables

In the example below I will put the SWS_SDK_HOME directory under /tmp, but I would recommend installing it into a directory dedicated to software installs (ie, /usr/local or /opt). So let's create a dedicated folder for our testing:


me@admin:~>cd /tmp
me@admin:/tmp>mkdir SWSSDK
me@admin:/tmp>mv ~/ SWSSDK/.
me@admin:/tmp>cd SWSSDK
me@admin:/tmp/SWSSDK> unzip

Now let's create a file to ease the setup of the environment variables; let's call it /tmp/SWSSDK/ Here are the contents of the script:


export JAVA_HOME=/opt/jre
export SWS_KEYFILE=/tmp/SWSSDK/key.txt
export SWS_SERVER=  #This should be the management IP Address for the IDR
export PATH=/tmp/SWSSDK/bin:$PATH

After that's ready, you can source your script and your variables will be configured. Here is what you can run to source the setup script:


Note that there is a space between the "." and ""

After it's finished you can confirm that the environment variables are configured:


me@admin:~> env | grep -E "JAVA|SWS"

Create a Dedicated API user for the Key Chain CLI

Log into the RSA Via Access Console as a Customer Super Administrator and create another user with API functionality enabled (My Account > Administrators > Add an Administrator):



The Allowed Networks setting specifies the remote network that will be making the remote CLI calls (in my example above I allow the internal network). Don't forget to publish after creating the user (this will push the settings to the IDR). Next we need to create the key.txt file and populate it with the Access ID and Access Key values. We can run the following to create the key file:


# for Windows
C:\> echo key=0da0fe339b202fb96b7f5317153f402c78a561bc/fabe1b20d2d47813d4131fe98c2a8b1ef064221f > c:\SWSSDK\key.txt
# for Linux
me@admin:~> echo "key=0da0fe339b202fb96b7f5317153f402c78a561bc/fabe1b20d2d47813d4131fe98c2a8b1ef064221f" > /tmp/SWSSDK/key.txt

The format of the file is the following:


#AccessID and AccessKey values can be found from RSA Via Access Console > My Account > Administrators > "Your_Designated_API_USER" -> Edit

Test KeyChain CLI Commands

As a quick test, make sure the following command works (here on Windows):

C:\>idr-describe-keychains

Username    Application Name   Credentials
karim       Evernote           username, password
karim       Sharepoint 2013    username, password
karim       Concur             userid, password
dave        Evernote           username, password
dave        Sharepoint 2013    username, password
dave        Concur             userid, password
dpeterson   <none>             <none>
nancy       Sharepoint 2013    username, password
nancy       Concur             userid, password
nancy       Twitter            username, password
jason       Evernote           username, password
jason       Sharepoint 2013    username, password
jason       Concur             userid, password


On Linux you can run the same:


me@admin:~> idr-describe-keychains
Username        Application Name   Credentials
Administrator   <none>             <none>
devuser         <none>             <none>


The above just shows the current Key Chains that are configured.

Import Key Chain Data Using a CSV File

If we need to batch-import a bunch of H-Fed credentials, we can create a CSV file in the following format:

Username,Application Name,CredentialName1,CredentialValue1,CredentialName2,CredentialValue2
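For more than a handful of users, it's easier to generate the file with a script. Here is a Python sketch; the usernames, credential values, and file name are made up for illustration, while "Sharepoint 2013" and the column layout come from the CLI output:

```python
import csv

# Hypothetical users to import; replace with your real account data.
users = [("user%d" % i, "password%d" % i) for i in range(1, 9)]

with open("users.csv", "w", newline="") as fh:
    writer = csv.writer(fh)
    writer.writerow(["Username", "Application Name", "CredentialName1",
                     "CredentialValue1", "CredentialName2", "CredentialValue2"])
    for name, password in users:
        # Credential names (username/password) match the Sharepoint 2013 app.
        writer.writerow([name, "Sharepoint 2013", "username", name,
                         "password", password])
```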

To figure out the Application Name and Credential Names we can use the idr-describe-applications command. For example, here is output from that command:


Application Name   Portal URL   Portal Text       Enable Keychain Edit   Credentials
Concur                          Concur            true                   userid, password
Evernote                        Evernote          true                   username, password
Sharepoint 2013                 Sharepoint 2013   true                   username, password
Twitter                         Twitter           true                   username, password

So let's say I wanted to populate the Key Chain credentials for some users for the Sharepoint 2013 Application. I would then create a CSV file with the following contents:


me@admin:~> cat users.csv
Username,Application Name,CredentialName1,CredentialValue1,CredentialName2,CredentialValue2
user1,Sharepoint 2013,username,,password,password1
user2,Sharepoint 2013,username,,password,password2
user3,Sharepoint 2013,username,,password,password3
user4,Sharepoint 2013,username,,password,password4
user5,Sharepoint 2013,username,,password,password5
user6,Sharepoint 2013,username,,password,password6
user7,Sharepoint 2013,username,,password,password7
user8,Sharepoint 2013,username,,password,password8

Then you can use the idr-update-keychains command to import that CSV file:


me@admin:~> idr-update-keychains -f users.csv
A total of 8 keychains were updated.

And on Windows:

C:\>idr-update-keychains -f c:\SWSSDK\users.csv
A total of 8 keychains were updated.

And you can confirm the Key Chain Data with the idr-describe-keychains command just like we did above.

Export User Key Chains as CSV

Another cool side note is that you can also export Key Chain data as CSV, but the passwords will not be exported (they are masked in the output):


C:\>idr-describe-keychains -f csv
Username,Application Name,CredentialName1,CredentialValue1,CredentialName2,CredentialValue2
user1,Sharepoint 2013,username,,password,********
user2,Sharepoint 2013,username,,password,********
user3,Sharepoint 2013,username,,password,********
user4,Sharepoint 2013,username,,password,********
user5,Sharepoint 2013,username,,password,********
user6,Sharepoint 2013,username,,password,********
user7,Sharepoint 2013,username,,password,********
user8,Sharepoint 2013,username,,password,********


A full list of all the keychain CLI commands will be available soon and I will definitely link to it as soon as it's available.

Integrated Windows Authentication

A lot of the time we configure IWA to allow seamless login to the RSA SecurID Access Web Portal. Instructions on how to install and configure IWA are located in Install the Integrated Windows Authentication Connector, and the installer itself is located at RSA Via Access IWA Connector Installer. IWA uses Kerberos capabilities (SPNEGO) for authentication, so browser configuration is necessary to trust the IWA server and ease the login process.


Internet Explorer

If you are on a Windows machine, the first thing you can do is confirm that Kerberos tickets have been assigned to the logged-in user. This can be done with the klist command:



Current LogonId is 0:0x31ae6
Cached Tickets: (2)

#0>     Client: devuser @ ELATOV.NET
        Server: krbtgt/ELATOV.NET @ ELATOV.NET
        KerbTicket Encryption Type: AES-256-CTS-HMAC-SHA1-96
        Ticket Flags 0x40e10000 -> forwardable renewable initial pre_authent name_canonicalize
        Start Time: 9/24/2015 12:58:21 (local)
        End Time:   9/24/2015 22:58:21 (local)
        Renew Time: 10/1/2015 12:58:21 (local)
        Session Key Type: AES-256-CTS-HMAC-SHA1-96

#1>     Client: devuser @ ELATOV.NET
        Server: LDAP/ @ ELATOV.NET
        KerbTicket Encryption Type: AES-256-CTS-HMAC-SHA1-96
        Ticket Flags 0x40a50000 -> forwardable renewable pre_authent ok_as_delegate name_canonicalize
        Start Time: 9/24/2015 12:58:51 (local)
        End Time:   9/24/2015 22:58:21 (local)
        Renew Time: 10/1/2015 12:58:21 (local)
        Session Key Type: AES-256-CTS-HMAC-SHA1-96


We can see that we have a valid Kerberos ticket for the domain. As long as my IWA server's hostname is within that domain, we can proceed with configuring the browser to trust the domain. If you look under the Internet Explorer settings (Tools -> Internet Options -> Security -> Local Intranet -> Custom Level), you will notice that automatic login is allowed for sites in the Intranet Zone by default:


Now all we have to do is add our domain to the Local Intranet zone in Internet Explorer and we will be all set. This is accomplished by going to Tools -> Internet Options -> Security -> Local intranet -> Sites -> Advanced and adding the following for the site: https://* :



After that, if you visit your portal and you have configured it as per the instructions laid out in Enable Automatic Integrated Windows Authentication, you will be automatically logged in to the portal with Internet Explorer.




Firefox

A similar mechanism exists for Firefox. From Mozilla's Integrated Authentication page:


Mozilla currently supports a whitelist of sites that are permitted to engage in SPNEGO authentication with the browser. This list is intended to be configured by an IT department prior to distributing Mozilla to end-users.

The preferences are:

pref("network.negotiate-auth.trusted-uris", site-list); 
pref("network.negotiate-auth.delegation-uris", site-list);
pref("network.automatic-ntlm-auth.trusted-uris", site-list);

where, site-list is a comma-separated list of URL prefixes or domains of the form:

site-list = ","

network.negotiate-auth.trusted-uris lists the sites that are permitted to engage in SPNEGO authentication with the browser, and
network.negotiate-auth.delegation-uris lists the sites for which the browser may delegate user authorization to the server.
network.automatic-ntlm-auth.trusted-uris lists the trusted sites to use NTLM authentication.

To modify these settings, start Firefox, enter about:config in the address bar, and modify just the top two options to include our domain:
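If you'd rather script the change than click through about:config, Firefox also reads these preferences from a user.js file in the profile directory. Here is a minimal sketch; the profile path and the .elatov.net domain are placeholder assumptions you would replace with your own:

```shell
# Hypothetical profile path -- substitute your real Firefox profile directory.
PROFILE=/tmp/example-firefox-profile
mkdir -p "$PROFILE"

# user_pref() lines in user.js override the defaults on the next Firefox start.
cat > "$PROFILE/user.js" <<'EOF'
user_pref("network.negotiate-auth.trusted-uris", ".elatov.net");
user_pref("network.negotiate-auth.delegation-uris", ".elatov.net");
EOF

# Sanity check: both negotiate-auth prefs should be present.
grep -c 'negotiate-auth' "$PROFILE/user.js"
```

The grep at the end should report 2, one line per preference.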




Chrome

Chrome on Windows is actually pretty easy. From Chromium's HTTP authentication page:


In Windows only, if the AuthServerWhitelist setting is not specified, the permitted list consists of those servers in the Local Machine or Local Intranet security zone (for example, when the host in the URL includes a "." character it is outside the Local Intranet security zone), which is the behavior present in IE. Treating servers that bypass proxies as being in the intranet zone is not currently supported.

So if we configure the Local Intranet Security Zone appropriately in Internet Explorer then Chrome will use those settings as well.


Mac OS X

As long as the Mac OS X system is joined to the domain (I will talk more about that below) and has a valid Kerberos ticket, you can launch Chrome with the following command:


$ open -a 'Google Chrome' --args --auth-server-whitelist="*" --auth-negotiate-delegate-whitelist="*"


Or you can set the following setting permanently:


$ defaults write AuthServerWhitelist "*"
$ defaults write AuthNegotiateDelegateWhitelist "*"



Linux

As long as the Linux machine is able to get a Kerberos ticket (I will talk more about that later), you can launch Chrome with the following parameters:


$ google-chrome-stable --auth-server-whitelist="*" --auth-negotiate-delegate-whitelist="*"


If you want to make it permanent, create a policy file with the following settings:


$ cat /etc/opt/chrome/policies/managed/policies.json
{
    "AuthServerWhitelist": "*",
    "AuthNegotiateDelegateWhitelist": "*"
}


And then just launch Google Chrome as you would normally do. You can also confirm the policy settings by going to chrome://policy in the address bar of the Chrome Browser:
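A malformed policy file is silently ignored by Chrome, so it's worth validating the JSON before deploying it. A quick sketch (staged in /tmp so no root is needed; the real location is /etc/opt/chrome/policies/managed/, and the wildcard values are illustrative):

```shell
# Stage the policy file somewhere writable first.
mkdir -p /tmp/chrome-policy-staging
cat > /tmp/chrome-policy-staging/policies.json <<'EOF'
{
    "AuthServerWhitelist": "*.elatov.net",
    "AuthNegotiateDelegateWhitelist": "*.elatov.net"
}
EOF

# json.tool exits non-zero on a parse error, so this catches typos
# before the file ever reaches Chrome's policy directory.
python3 -m json.tool /tmp/chrome-policy-staging/policies.json > /dev/null \
  && echo "policies.json is valid JSON"
```

Once it validates, copy it into place and confirm it shows up under chrome://policy.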



Safari and Mac OS X


Safari by default supports IWA, from Best Practices for Integrating OS X with Active Directory :


Apple and Microsoft both support Kerberos to provide a secure single sign-on environment. When integrated into an Active Directory environment, OS X uses Kerberos exclusively for all authentication activities. The use of Microsoft’s NT LAN Manager (NTLM) suite of protocols, including both NTLMv1 and NTLMv2, can be prohibited on the network as needed, without affecting Mac computers or services provided by OS X Server within the Active Directory environment.

When a user logs in to a Mac using an Active Directory account, the Active Directory domain controller automatically issues a Kerberos Ticket Granting Ticket (TGT). When the user attempts to use any service on the domain that supports Kerberos authentication, the TGT generates a ticket for that service without requiring the user to authenticate again.

You can use the Kerberos administration tools on a Mac to view currently issued tickets both from the command line, where klist displays the current tickets, or by using the graphical Ticket Viewer utility located at /System/Library/CoreServices/Ticket


As long as klist shows something similar to this, Safari will work by default:


Ticket cache: KCM:01C347AC-5C1C-46E4-B9BC-6260049FCB2B
Default principal:

Valid starting                 Expires                        Service principal
09/24/2015 12:27:37  09/24/2015 22:27:37  krbtgt/ELATOV.NET@ELATOV.NET
09/24/2015 12:29:37  09/24/2015 22:27:37  HTTP/


Authenticating with Kerberos on Linux


I was testing this out with an Ubuntu machine and it worked out for me. In order to get a Kerberos ticket, you first have to install the Kerberos tools:


$ sudo apt-get install krb5-user


Then we need to define the realm/domain information. I ended up creating this simple file (capitalization is important):


$ cat /etc/krb5.conf
[libdefaults]
  default_realm =
  krb4_config = /etc/krb.conf
  krb4_realms = /etc/krb.realms
  kdc_timesync = 1
  ccache_type = 4
  forwardable = true
  proxiable = true

[realms]
  kdc =
  admin_server =

[domain_realm]
  = ELATOV.NET
  = ELATOV.NET

[login]
  krb4_convert = true
  krb4_get_tickets = false


Then authenticate yourself to get a ticket:


$ kinit devuser@ELATOV.NET
Password for devuser@ELATOV.NET: 


If that's successful, you will see your ticket:


$ klist
Ticket cache: FILE:/tmp/krb5cc_1000
Default principal: devuser@ELATOV.NET

Valid starting       Expires              Service principal
09/24/2015 16:03:58  09/25/2015 02:03:58  krbtgt/ELATOV.NET@ELATOV.NET
  renew until 09/25/2015 16:03:53


Then after configuring Firefox (or Google Chrome) as per the above instructions, I visited the portal and was automatically logged in. Checking the kerberos tickets I saw a new one:


$ klist
Ticket cache: FILE:/tmp/krb5cc_1000
Default principal: devuser@ELATOV.NET

Valid starting       Expires              Service principal
09/24/2015 16:03:58  09/25/2015 02:03:58  krbtgt/ELATOV.NET@ELATOV.NET
  renew until 09/25/2015 16:03:53
09/24/2015 16:08:43  09/25/2015 02:03:58  HTTP/
  renew until 09/25/2015 16:03:53
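If you want to avoid the failed SPNEGO prompt entirely, you can script a quick check for a usable ticket before launching the browser. This is just a sketch; klist -s is the silent check shipped with the MIT Kerberos tools installed above, and it exits non-zero when there is no valid credential cache:

```shell
# Check for a valid (non-expired) TGT before starting the browser,
# so SPNEGO has a ticket to work with.
if klist -s 2>/dev/null; then
  echo "valid Kerberos ticket found"
else
  echo "no valid ticket; run kinit first"
fi
```

You could drop this into a wrapper script that runs kinit (or just warns) before exec'ing google-chrome-stable or firefox.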


That should be it!

I wanted to try out the Salesforce Mobile App with SAML enabled and here is what I discovered.

Enable SP-Initiated Mode in Salesforce

Reading over Single Sign-On for Desktop and Mobile Applications using SAML and OAuth, I saw the following:


The 'My Domain' feature allows you to select a custom domain name for your application. A 'My Domain' URL looks like (for a production org) or (for a Developer Edition). A benefit of configuring 'My Domain' is that it enables support for SP-initiated single sign-on, improving the user experience, and allowing users to access 'deep links' into their environment via SSO.

You may configure 'My Domain' in Setup | Company Profile | My Domain. As users may be un-authenticated when they arrive at, a unique domain is the mechanism by which a specific organization's SAML configuration can be discovered. In order to take advantage of SAML for desktop and mobile apps you must deploy My Domain. In addition, this will greatly improve the user-experience for web browser based single sign-on. This is considered a best practice if you deploy SAML with



So we need to enable "My Domain" and then configure SAML to be SP-Initiated.

Configure My Domain in Salesforce

After you log in to Salesforce as the administrator, navigate to Administer -> Domain Management -> My Domain and pick a name for your domain to make sure it's available:


Then click Register Domain. It will take some time to enable all the DNS settings, and you will see the following:


After the registration is complete you will receive an email similar to this:


After that if you go back to the domain settings you will see that the domain is ready for use:


Configure an SP-Initiated SAML Setup

Now log in to your Access console, configure a SAML application, and gather all the necessary information:

  1. Issuer (Issuer Entity ID)
  2. Entity Id ( Audience (Service Provider Entity ID) )
  3. Identity Provider Certificate (cert.pem from the Certificate Bundle)
  4. Identity Provider Login URL (Identity Provider URL)

Then from Salesforce as an administrator go to Administer -> Security Controls -> Single Sign-On Settings and add a new SAML config. Here is one I ended up with:


Make sure you enter an Identity Provider Login URL or the configuration won't be seen as SP-Initiated.

Modify Authentication Configuration for the Custom Domain

The last thing we need to do is enable the SAML configuration as an authentication service. So, again from the Salesforce portal, go to Administer -> Domain Management -> My Domain -> Authentication Configuration -> Edit, and enable your SAML configuration:


You can have both enabled (the login page and the SAML configuration). With both enabled, people see the login page and can choose either the SAML configuration or their passwords. If you leave just the SAML configuration, it will automatically start the login process and forward you to the IdP as soon as you visit the login page for your custom domain.

Salesforce Mobile Client Testing

At this point, I installed the Salesforce1 Application on my Android phone:


After installing it, I launched the application and I saw the login page:


From here click on the option menu and choose change server:


Then click Add Connection and enter the custom domain that you created in Salesforce:


and then choose that connection and click Apply:


Then it will take you to the login page and you will see a button to use the IDP for the login:


Upon clicking that it took me to the IDP and I was able to enter my AD credentials:


After you are authenticated, it will ask you to confirm that you want to provide the necessary access to this application:


And then you will gain access to the application:


And here are the options for my chatter user:


Just-In-Time Provisioning

There is actually a pretty good description in Salesforce's About Just-in-Time Provisioning for SAML:

With Just-in-Time provisioning, you can use a SAML assertion to create regular and portal users on the fly the first time they try to log in. This eliminates the need to create user accounts in advance. For example, if you recently added an employee to your organization, you don't need to manually create the user in Salesforce. When they log in with single sign-on, their account is automatically created for them, eliminating the time and effort with on-boarding the account. Just-in-Time provisioning works with your SAML identity provider to pass the correct user information to Salesforce in a SAML 2.0 assertion. You can both create and modify accounts this way. Because Just-in-Time provisioning uses SAML to communicate, your organization must have SAML-based single sign-on enabled.

It's a pretty awesome feature so let's see how to utilize it with RSA SecurID Access.

Just-In-Time Attributes For Salesforce

Looking over the Salesforce Just-in-Time Provisioning Requirements, it looks like we can use the following SAML attributes in the assertion for Just-In-Time provisioning:






  • Alias: If not present, a default is derived from FirstName and LastName.
  • CommunityNickname: If not present, a default is derived from the UserName.
  • DefaultCurrencyIsoCode: Derived from organization settings.
  • Email (required): For example,
  • EmailEncodingKey: If not present, a default is derived from the organization settings.
  • FederationIdentifier (insert only): If present, it must match the SAML subject, or the SAML subject is taken instead. Can't be updated with SAML.
  • LocaleSidKey: If not present, a default is derived from the organization settings.
  • ProfileId (required): For example, User.ProfileId=Standard User
  • TimeZoneSidKey: If not present, a default is derived from the organization settings.
  • Username (insert only, required): For example, Can't update using SAML.
  • UserRoleId: Defaults to “no role” if blank.

From the same page it looks like we need to prepend each SAML attribute with User.:


To correctly identify which object to create in Salesforce, you must use the User. prefix for all fields passed in the SAML assertion. In this example, the User. prefix has been added to the Username field name.



      <saml:AttributeValue xsi:type="xs:anyType"></saml:AttributeValue>
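To make the shape concrete, here is a hedged sketch of one such JIT attribute with the User. prefix applied, followed by a quick well-formedness check. The username value velma@example.com is purely illustrative:

```shell
# Write out an example JIT attribute as it would appear in the assertion.
# The value is a placeholder, not a real account.
cat > /tmp/jit-attribute.xml <<'XML'
<saml:Attribute Name="User.Username"
    xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion"
    NameFormat="urn:oasis:names:tc:SAML:2.0:attrname-format:basic">
  <saml:AttributeValue>velma@example.com</saml:AttributeValue>
</saml:Attribute>
XML

# minidom.parse raises on malformed XML, so reaching the print means
# the fragment is at least well-formed.
python3 - <<'PY'
import xml.dom.minidom
xml.dom.minidom.parse('/tmp/jit-attribute.xml')
print('attribute XML is well-formed')
PY
```

This only checks well-formedness, not whether Salesforce will accept the attribute; the required-field rules above still apply.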





Enable JIT for the SAML Salesforce Configuration

Log in as a Salesforce administrator and navigate to Administer -> Security Controls -> Single Sign-On Settings, and you will see your SAML configurations:


Then click Edit on your desired configuration and ensure the User Provisioning Enabled checkbox is checked and the standard option is used:


Also make sure you select Assertion contains the Federation ID from the User object:


Configure Extended Attributes for the Salesforce Connector in Via Admin Console

The basic configuration for Salesforce is found in the RSA Via Access Salesforce SAML Implementation Guide. After you have that configured, let's modify the Extended Attributes section to have the following attributes (the first two will already be there; I am just including them for completeness):


Attribute Source


Attribute Name

User Store





Chatter Free User
NOTE: If you have an attribute within AD that contains this information, then you can use that


Here is how it looks:


Just-In-Time Provisioning Testing

I created a new user in AD:


I also double-checked that no such user existed in Salesforce; you can confirm by going to Administer -> Manage Users -> Users:


Then I logged into the portal as the velma user and clicked on the Salesforce Application and I logged in successfully:


If you look at the SAML assertion that RSA Via Access sent, it will look similar to this:



    <saml2:Attribute Name="User.Email" NameFormat="urn:oasis:names:tc:SAML:2.0:attrname-format:basic">
        <saml2:AttributeValue xsi:type="xs:string"></saml2:AttributeValue>
    </saml2:Attribute>

    <saml2:Attribute Name="User.UserName" NameFormat="urn:oasis:names:tc:SAML:2.0:attrname-format:basic">
        <saml2:AttributeValue xsi:type="xs:string"></saml2:AttributeValue>
    </saml2:Attribute>

    <saml2:Attribute Name="logoutURL" NameFormat="urn:oasis:names:tc:SAML:2.0:attrname-format:basic">
        <saml2:AttributeValue xsi:type="xs:string"></saml2:AttributeValue>
    </saml2:Attribute>

    <saml2:Attribute Name="User.LastName" NameFormat="urn:oasis:names:tc:SAML:2.0:attrname-format:basic">
        <saml2:AttributeValue xsi:type="xs:string">Tech</saml2:AttributeValue>
    </saml2:Attribute>

    <saml2:Attribute Name="ssoStartPage" NameFormat="urn:oasis:names:tc:SAML:2.0:attrname-format:basic">
        <saml2:AttributeValue xsi:type="xs:string"></saml2:AttributeValue>
    </saml2:Attribute>

    <saml2:Attribute Name="User.ProfileId" NameFormat="urn:oasis:names:tc:SAML:2.0:attrname-format:basic">
        <saml2:AttributeValue xsi:type="xs:string">Chatter Free User</saml2:AttributeValue>
    </saml2:Attribute>



Confirming Just-In-Time Provisioning

I then logged in to salesforce as the administrator and saw the user created:


Also if you check out the Audit Logs (Under Administer -> Security Controls -> View Setup Audit Trail), you will see something like this:


We were testing out deploying an IDR in vCloud Air to ensure the setup and configuration worked without any issues. Here is the process we ran through to successfully deploy an IDR in vCloud Air.


Import IDR OVA into vCloud Air Catalog


After you log in to your vCloud Air Private Cloud, you can create a VM within your VDC (Virtual Data Center). Click on the Virtual Machines tab and you will see the following:


Click on Create your first virtual machine and you will see the new vm wizard:


If you click on My Catalog it might be empty:


At this point you can click on Create My Virtual Machine from Scratch and it will take you to your vCloud instance:


From here you can upload the OVA into the default-catalog (this is inside vCloud Director under Catalogs -> default_catalog -> Upload) so you can use it as a template for multiple deployments:


Create a SNAT in vCloud Air

By default, most traffic is blocked and no NAT is configured, so you can't reach the external network. To fix this, first let's get a public IP. In vCloud Air go to Gateways -> GATEWAY ON VDC1 -> Public IPs; initially it will look like this:


Then click on Add IP Address; it will warn you about getting charged, and after that it will allocate the IP. Now let's create the SNAT so any machine can reach the internet. Go to Gateways -> GATEWAY ON VDC1 -> NAT Rules -> Add a NAT Rule and fill out the following (make sure it matches your environment):


The firewall is pretty restrictive as well, so go to Gateways -> GATEWAY ON VDC1 -> Firewall Rules and add the following basic rules:


Adding a second Organization Network

vCloud Director offers many different networking options; most of them are covered in vApp Design Considerations. By default, a Direct – External Organization Virtual Datacenter Network (Routed) network is created. From the same page:
If the same example vApp with three virtual machines is connected to an organization virtual datacenter network that has a routed connection to an external network, the vApp is connected to an organization virtual datacenter network and is deployed there with the organization virtual datacenter network’s IP addressing. The Edge Gateway device then provides a routed connection between the organization virtual datacenter network and the external network. This scenario is shown in the following figure.

Figure 11. Direct Connection to a Routed External Organization Virtual Datacenter Network


And it looks like this in vCloud Director (Navigate to Administration -> Virtual Datacenters -> VCD1 -> Org VDC Networks):


So let's add another routed network, which will be our mgmt network (the IDR comes with two network interfaces, portal and mgmt). Click the green + and it will start the wizard. Choose the routed option:


I then added the following network details on the next page:


And after that you will have two networks:


I will use the default-routed-network as the portal/DMZ network and routed-network-2 as the mgmt/internal network.

DNAT for the Portal Interface

Since we want the portal to be reachable externally, let's add a DNAT (or port forward) from the public IP, port 443, to the internal portal IP, port 443. In vCloud Air navigate to Gateways -> GATEWAY ON VDC1 -> NAT Rules -> Add a NAT Rule and add the following DNAT:


On the next page, if you want to be thorough, you can also add a similar rule for port 80:


Also don't forget to allow the firewall to pass traffic to ports 80 and 443 on the public IP and the internal network. I ended up creating the following rules to allow that traffic:



Firewall and NAT Rules prior to Registration

As a sanity check here is a table of my NAT Rules:




Original IP

Original Port

Translated IP

Translated Port




The bottom one (DNAT from 8443 to MGMT_IP 443) is to allow access to the setup.jsp page and can be removed after registration. And the Firewall rules look like this:








The bottom ALLOW_INTERNAL_HTTPS_IN rule can be changed after registration to allow 443 only to the portal interface network and not to any internal IP. The bottom rule can also be removed once setup.jsp access is no longer required, since it allows the port forward from 8443 to the MGMT_IP on 443.

Configure IPs on the IDR

First let's assign the vNICs of the VM to the appropriate networks (portal to default-routed-network and management to routed-network-2). Click on the VM and then go to Networks:


Then click Add a Network and assign the NICs accordingly:


Then go back to the main VM screen and power on the IDR VM :


Then click on the VM and it will take you the properties page of the VM and you can click on Open Virtual Machine Console:



After applying the network settings, I was able to reach port 8443 without problems:


Establishing the IPSec Tunnel To Local Environment

There is a pretty good KB on the process from VMware, Configuring IPsec VPN within VMware vCloud Air to a remote network, which includes a helpful diagram that represents all the networks:


My local network is the vCloud mgmt network (10.10.10.0/24), and my peer network is my internal network (10.210.0.0/16), which allows access to the AD in my internal network. To start this configuration, from vCloud Air go to Gateways -> GATEWAY ON VDC1, then click on Manage in vCloud Director:


Once in vCloud Director, go to Administration -> Virtual Datacenters -> VCD1 -> Edge Gateways, right-click on the default gateway, and choose Edge Gateway Services:


Then go to the VPN tab and click Enable VPN and then Add:


After clicking Add, the wizard will start and you can configure your VPN settings. The settings depend on your environment, but here are the options broken down:






  • Local Network: 10.10.10.0/24. This is the vCloud network (mgmt network) we want the remote site to have access to.
  • Peer Networks: 10.210.0.0/16. This is the network we want to access at the remote site.
  • Local Endpoint: 107.189.120.76 (drop down). This is the public IP of the local endpoint.
  • Local ID: 107.189.120.76. This can be anything, but it helps to set it to the IP to keep track of the configuration.
  • Peer ID: 10.210.0.248. This can be anything as well, but they recommend setting it to either the public IP or the private IP of the remote endpoint.
  • Peer IP: X.X.X.X. This is the public IP of the remote endpoint.
  • Encryption Protocol: AES. You can use AES 256, 3DES, or AES.
  • Shared Key: left as the generated one. We will have to use this key on the remote VPN side so the endpoints can authenticate each other and establish the VPN tunnel.
  • MTU: 1500. Left the default.

Firewall Configuration in vCloud for VPN Connections

I ended up adding the following rules to ensure the VPN connection is established and to allow traffic from and to the internal networks across the VPN tunnel:








  • ALLOW_IP_SEC_ESP_AH_UDP: source X.X.X.X/32:Any, destination 107.189.120.76/32:Any, protocol ANY. This is so we can establish the IPSec tunnel between the two endpoints. The following is necessary: IP protocol ID 50 (ESP), IP protocol ID 51 (AH), UDP port 500 (IKE), and UDP port 4500. Since the only IP protocol allowed in the vCloud UI is ICMP, I decided to use Any to make sure I cover all of the above.
  • ALLOW_VPN_TRAFFIC_L_TO_R: source 10.210.0.0/16:Any, destination 10.10.10.0/24:Any, protocol ANY. This might be overkill, but I am allowing anything from the internal network to the vCloud mgmt network.
  • ALLOW_VPN_TRAFFIC_R_TO_L: source 10.10.10.0/24:Any, destination 10.210.0.0/16:80, protocol ANY. This might be overkill, but I am allowing anything from the vCloud mgmt network to the remote internal network. For my test, I could've just allowed 389 for the AD connection, but if you are planning to connect to internal web apps then 80 and 443 should be added here.


After all the above is done, if you go back to vCloud Director you will see the VPN connection is good:


After all the above settings, we were able to connect to the AD server and log in to the web portal without issues.

Dropbox Setup

Looking over the Dropbox documentation, it mentions that after setting Required mode for the SSO configuration, rich clients (like desktop and mobile clients) should still work (What happens when I add a new user to the Business account?):

If you've turned on SSO in required mode, you'll need to make sure that the new user's email address is registered with your identity provider. Otherwise, the user will not be able to sign in and access Dropbox. In optional mode, the user will be asked to create a Dropbox password and can sign in with it as usual.

To make sure we are only using SSO and not the standard Dropbox password, let's make sure Dropbox is set to Required mode.

Confirm Required Mode is enabled

I logged in to Dropbox as the administrator, navigated to Authentication, and confirmed that Required mode is enabled:


Initial Registration

After an administrator invites you to dropbox, you will receive an email:


Upon clicking on the link you can enter your email address and it will take you to the IdP:


And then you will be forwarded back to dropbox:


Desktop Client

Log in to Dropbox and click on YOUR_NAME -> Install:


And it will allow you to download the client:


Download it and start the installer:


After the installer was finished, the application launched, and I saw the following:


I just entered my email for the username, left the password blank, and clicked "Sign In". It figured out that I have SSO enabled and I saw the following:


Then upon clicking Get your link code a web browser opened up to the IdP:


We also had step-up enabled so I had to go through that:


After I was authenticated and authorized on the IdP side, it forwarded me to Dropbox, which showed me the link code:


I then copied that and pasted it back at the Dropbox Rich Client and it congratulated me on a successful setup:


Then I clicked "Open my Dropbox folder" and it showed me the contents:


So it worked out quite well. Once the link is established, step-up won't be used again; it's a one-time setup, and afterwards the Dropbox client doesn't have to re-log in to the IdP. From the same page (What's the difference between optional mode and required mode?):

Users' existing desktop and mobile clients will remain linked to their accounts. This includes any desktop or mobile client that was connected to their account before they joined Dropbox for Business. All new desktop and mobile clients must use single sign-on.

Mobile Client

Now let's try the same thing on a mobile device. First let's install the app: