Sunday, February 14, 2021

Azure Files Identity based authentication - Active Directory on-premises

 


Azure Files supports identity-based authentication over Server Message Block (SMB) through on-premises Active Directory Domain Services (AD DS) and Azure Active Directory Domain Services (Azure AD DS).

This article focuses on how to enable identity-based authentication on Azure Files over SMB through on-premises Active Directory Domain Services.

Why is identity-based authentication important?

The other authentication methods available in Azure Storage are storage account keys and SAS signatures.

Storage Key - Provides root-level access to the storage account once authentication succeeds, i.e., users will be able to access all the storage services: Blobs, Files, Tables and Queues.

SAS Signature - Even though there are options to control access to the storage account, it isn't really suitable for a file share.

Ex: How would you set up authentication for 200 users who need different levels of access to the same file share in Azure Files? Would the solution be to create 200 SAS signatures, one per user?

It is also important to consider how users are going to access the Azure file share. As of today, in order to access the file share from a workstation you either use the storage key to mount the file share over SMB or use Azure Storage Explorer.

The above concerns can be addressed by introducing identity-based authentication into the Azure storage account and Azure Files.

What you need to know before getting started

You may think: what if I try mounting the file share using the storage key and configure NTFS permissions on the mounted file share?

It will not work that way, and you will end up with the below error 😊.





Yes, this is how it is!

  • On-premises Active Directory must be synced to Azure AD 
  • Supports Kerberos authentication with AD using RC4-HMAC and AES-256 encryption. AES-256 encryption support is currently limited to storage accounts with names <= 15 characters in length. AES-128 Kerberos encryption is not yet supported.
Note: If you want to use AES-256 encryption, the storage account name should be 15 characters or fewer, so be careful with your storage account naming convention.
  • Supports only Windows 7 / Windows Server 2008 R2 and above 
  • Supports authentication only against the AD forest the storage account is registered to
  • Does not support authentication against computer accounts created in AD DS.
  • Does not support authentication against Network File System (NFS) file shares

Steps to be followed

Part one: enable AD DS authentication on your storage account

Part two: assign access permissions for a share to the Azure AD identity (a user, group, or service principal) that is in sync with the target AD identity

Part three: configure Windows ACLs over SMB for directories and files

Part four: mount an Azure file share to a VM joined to your AD DS

Part five: update the password of your storage account identity in AD DS




Part one: enable AD DS authentication on your storage account

This step is straightforward and well explained in the below article.


It is good practice to create a separate OU for Azure Storage, and the below command helps to get the OU distinguished name.


PS C:\Users\ladmin> Get-ADOrganizationalUnit -Filter 'Name -like "AzureStorage"'
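As an alternative to filling in the AD details by hand, the AzFilesHybrid module can create the AD computer account and enable the feature in one step. A minimal sketch, assuming the AzFilesHybrid module has been downloaded and the "AzureStorage" OU from the command above exists (the distinguished name below is an example for this lab domain):

```powershell
# Sketch only: AzFilesHybrid must be downloaded and unblocked first.
Import-Module .\AzFilesHybrid.psd1
Connect-AzAccount

# Creates the AD computer account representing the storage account
# and enables AD DS authentication on it in one step.
Join-AzStorageAccount `
    -ResourceGroupName $ResourceGroupName `
    -StorageAccountName $StorageAccountName `
    -DomainAccountType "ComputerAccount" `
    -OrganizationalUnitDistinguishedName "OU=AzureStorage,DC=fsstoreadint,DC=test"
```

If you use Join-AzStorageAccount, the manual Set-AzStorageAccount call below is not needed.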



Set-AzStorageAccount `
        -ResourceGroupName $ResourceGroupName `
        -Name $StorageAccountName `
        -EnableActiveDirectoryDomainServicesForFile $true `
        -ActiveDirectoryDomainName "fsstoreadint.test" `
        -ActiveDirectoryNetBiosDomainName "fsstoreadint" `
        -ActiveDirectoryForestName "fsstoreadint.test" `
        -ActiveDirectoryDomainGuid "84eda2fa-3279-45be-b5c3-63fa060b0291" `
        -ActiveDirectoryDomainSid "S-1-5-21-4076236781-4176507843-1324823733" `
        -ActiveDirectoryAzureStorageSid "S-1-5-21-4076236781-4176507843-1324823733-1112"

Once Part one is completed, log in to the Azure portal and navigate to the storage account --> Configuration --> you should see that Active Directory Domain Services (AD DS) is enabled.



Also navigate to your file share and check whether Active Directory is configured as the authentication method.



Part two: assign access permissions for a share to the Azure AD identity (a user, group, or service principal) that is in sync with the target AD identity

Follow the documentation


Navigate to Storage Account --> File Shares --> <share name> --> IAM

Assign one of the users (a user synced from on-premises AD to Azure AD) the "Storage File Data Share Contributor" role.
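The same role assignment can be scripted with the Az module; a sketch, where every name in the scope string is a placeholder for your environment:

```powershell
# Sketch: assign the share-level SMB role at the file share scope.
# <subscription-id>, <rg-name>, <storage-account> and <share-name>
# are placeholders, not real values.
$scope = "/subscriptions/<subscription-id>/resourceGroups/<rg-name>" +
         "/providers/Microsoft.Storage/storageAccounts/<storage-account>" +
         "/fileServices/default/fileshares/<share-name>"

New-AzRoleAssignment `
    -SignInName "user@fsstoreadint.test" `
    -RoleDefinitionName "Storage File Data Share Contributor" `
    -Scope $scope
```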


Part three: configure Windows ACLs over SMB for directories and files

Steps to be followed


An administrator with full access to the share can mount the share on a domain-joined computer and set the necessary NTFS permissions as follows.

Configure Windows ACLs with Windows File Explorer or icacls

Ex: icacls <mounted-drive-letter>: /grant <user-email>:(F)

The following permissions are included on the root directory of a file share:

  • BUILTIN\Administrators:(OI)(CI)(F)

  • BUILTIN\Users:(RX)

  • BUILTIN\Users:(OI)(CI)(IO)(GR,GE)

  • NT AUTHORITY\Authenticated Users:(OI)(CI)(M)

  • NT AUTHORITY\SYSTEM:(OI)(CI)(F)

  • NT AUTHORITY\SYSTEM:(F)

  • CREATOR OWNER:(OI)(CI)(IO)(F)

Note: Port 445 should be allowed in NSG/firewall rules.
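For reference, the one-time admin flow might look like this. This is a sketch only: the account, share and group names are placeholders, and the storage key mount is needed only for this initial ACL setup.

```powershell
# One-time setup as an administrator: mount with the storage key,
# grant modify rights to a synced AD group, then unmount.
# <storage-account>, <share-name> and the group are placeholders.
net use Z: \\<storage-account>.file.core.windows.net\<share-name> `
    /user:Azure\<storage-account> <storage-account-key>

icacls Z: /grant "FSSTOREADINT\FileShareUsers:(OI)(CI)(M)"

net use Z: /delete
```

After this, day-to-day users authenticate with their AD identities and never see the storage key.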

Similarly, if you want to give read access to a group/users on an Azure file share, assign them the below role:

  • Storage File Data SMB Share Reader - allows read access to Azure Storage file shares over SMB.

The administrator can then configure NTFS permissions on the mounted share.

Part four: mount an Azure file share to a VM joined to your AD DS


Log in as a user with at least read permission to the share.







The user can access the share by calling the share UNC path from the Run command box without entering a password.







The user can also access the share by calling the share UNC path from File Explorer without entering a password.





The most convenient way is to map the share as a drive in File Explorer as shown below.
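The same mapping can be done from the command line. Because the user is signed in with a domain identity, Kerberos handles authentication and no credentials are prompted (names below are placeholders):

```powershell
# Map the share persistently; no storage key or password is needed
# once AD DS authentication is enabled on the account.
net use S: \\<storage-account>.file.core.windows.net\<share-name> /persistent:yes
```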


















Note: Ultimate access to the share is based on the permissions applied through Azure file share IAM, with more granular control applied through Windows ACLs/NTFS permissions on subfolders and files.

Part five: update the password of your storage account identity in AD DS

https://docs.microsoft.com/en-gb/azure/storage/files/storage-files-identity-ad-ds-update-password

If you registered the Active Directory Domain Services (AD DS) identity/account that represents your storage account in an organizational unit or domain that enforces a password expiration time, you must change the password before the maximum password age is reached. Your organization may run automated clean-up scripts that delete accounts once their password expires. Because of this, if you do not change your password before it expires, your account could be deleted, which will cause you to lose access to your Azure file share.
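The rotation itself is done with the AzFilesHybrid module; a sketch per the linked documentation, assuming the module is imported and the variables from Part one are still set:

```powershell
# Sketch: rotate the AD account password to the kerb2 Kerberos key
# of the storage account (AzFilesHybrid module assumed imported).
Update-AzStorageAccountADObjectPassword `
    -RotateToKerbKey kerb2 `
    -ResourceGroupName $ResourceGroupName `
    -StorageAccountName $StorageAccountName
```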

Summary

With Active Directory Domain Services integration of Azure Storage, customers can leverage Kerberos-based authentication and SSO for their Azure file shares.

Share permissions - using Azure AD RBAC roles
NTFS permissions - can be configured on the mounted file share by users who have contributor-level access on the share.

Thanks for reading, and sorry for not including every step of the setup. If you would like to know more, here are some references.




Wednesday, September 16, 2020

Mount Azure Blob Storage into Windows Machine

 

Azure Blob Storage is an object storage solution for the cloud. Blob Storage allows you to store massive amounts of unstructured data; however, initially there wasn't an option to mount a blob container directly into Windows/Linux operating systems.

Storage Explorer, AzCopy and the SDKs are used in order to manage files and folders in blob containers from Windows/Linux workstations.

Now Microsoft has launched, in public preview, NFS 3.0 support for Azure Blob Storage, which lets us mount blob containers into Windows/Linux machines.

Today we'll briefly go through the steps to be followed in order to mount an Azure blob container into a Windows machine.

Before following this article to mount a blob container, ensure that you have a valid subscription and a storage account with a blob container.

To mount a storage account container, you'll have to do these things.

  1. Register the NFS 3.0 protocol feature with your subscription.

  2. Verify that the feature is registered.

  3. Create an Azure Virtual Network (VNet).

  4. Configure network security.

  5. Create and configure a storage account that accepts traffic only from the VNet.

  6. Create a container in the storage account.

  7. Mount the container.


1. Register NFS 3.0 protocol feature with your subscription.


Since NFS 3.0 is in public preview, in order to use this feature you will need to manually register it in your subscription.

Below are the PowerShell commands to register and activate NFS 3.0 in your subscription. I'm using Azure Cloud Shell as it has all the necessary PowerShell modules required for Azure.














Connect-AzAccount (not required if you are using Cloud Shell)

If your account is associated with more than one subscription, identify and set the active subscription:

$context = Get-AzSubscription -SubscriptionId <subscription-id>
Set-AzContext $context

Register the AllowNFSV3 feature by using the following command.

Register-AzProviderFeature -FeatureName AllowNFSV3 -ProviderNamespace Microsoft.Storage











Register the PremiumHns feature by using the following command as well.

Register-AzProviderFeature -FeatureName PremiumHns -ProviderNamespace Microsoft.Storage

HNS - The hierarchical namespace allows you to define ACLs and POSIX permissions on directories, subdirectories or individual files. You can also use role-based access control and Azure Active Directory (Azure AD) to support resource management and data operations.









Register the resource provider by using the following command.

Register-AzResourceProvider -ProviderNamespace Microsoft.Storage













The registration process takes around one hour to complete; you can run the below commands to verify the status of the registration.

Get-AzProviderFeature -ProviderNamespace Microsoft.Storage -FeatureName AllowNFSV3

Get-AzProviderFeature -ProviderNamespace Microsoft.Storage -FeatureName PremiumHns
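Since the registration can take up to an hour, instead of rerunning the checks by hand you could let a small loop poll for you. A sketch, using only the two verification commands above:

```powershell
# Sketch: poll every 5 minutes until both preview features
# report a RegistrationState of "Registered".
do {
    Start-Sleep -Seconds 300
    $nfs = (Get-AzProviderFeature -ProviderNamespace Microsoft.Storage -FeatureName AllowNFSV3).RegistrationState
    $hns = (Get-AzProviderFeature -ProviderNamespace Microsoft.Storage -FeatureName PremiumHns).RegistrationState
} until ($nfs -eq "Registered" -and $hns -eq "Registered")
```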
















2. Create a storage account


Supported regions: US East, US Central, US West Central, Australia Southeast, North Europe, UK West, Korea Central, Korea South, and Canada Central

NFS 3.0 is supported only with the below account kind; general-purpose v1/v2 and BlobStorage accounts are not supported at the moment.

Performance tier: Premium
Account kind: Block blob storage (BlockBlobStorage)
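The same account can be created from PowerShell instead of the portal. A hedged sketch: the parameter names follow the Az.Storage preview at the time of writing and may change before general availability, and every resource name below is a placeholder.

```powershell
# Sketch: premium block blob account with HNS and NFS 3.0 enabled,
# locked down to a single subnet. All names are placeholders.
$subnet = Get-AzVirtualNetwork -ResourceGroupName "<rg-name>" -Name "<vnet-name>" |
    Get-AzVirtualNetworkSubnetConfig -Name "<subnet-name>"

New-AzStorageAccount `
    -ResourceGroupName "<rg-name>" `
    -Name "<storage-account>" `
    -Location "northeurope" `
    -SkuName Premium_LRS `
    -Kind BlockBlobStorage `
    -EnableHierarchicalNamespace $true `
    -EnableNfsV3 $true `
    -NetworkRuleSet (@{defaultAction = "Deny";
        virtualNetworkRules = (@{VirtualNetworkResourceId = $subnet.Id})})
```

The network rule set is required here because, as noted below, NFS 3.0 cannot be enabled without placing the account into a VNet/subnet.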

























Virtual Network and Security requirements


At the moment, the usual ways of securing data - account key authorization, Azure Active Directory (AD) security, and access control lists (ACLs) - are not yet supported on accounts that have NFS 3.0 protocol support enabled.

So the only way to secure this kind of storage account is by placing it into a subnet in a VNet and enabling restrictions with the help of an NSG; virtual machines in the same VNet/subnet can then access the blob containers created in the storage account.

You can also enable VNet peering with other VNets to get the blob containers accessed from other networks in Azure. Blob containers can be mounted to machines in on-premises infrastructure if you have a VPN/ExpressRoute connection via a virtual network gateway into the VNet that contains the storage account.


Note: The NFS 3.0 feature cannot be enabled unless the storage account is placed into a VNet and subnet.



























On the Advanced tab you must enable HNS and NFS 3.0. If the option is greyed out, check the NFS 3.0 registration status and ensure that the storage account is placed into a VNet/subnet.

























Verify the below before submitting the settings for storage account creation; once it is deployed, the account kind and related configurations cannot be changed.
















































































3. Create a blob container


Once the storage account is successfully provisioned, navigate to Data Lake Storage --> select Containers and click on + Container to create a blob container.






























Ensure that the access level is set to Private (no anonymous access).




















4. Set up the NFS 3.0 client on Windows machines


In order to mount a blob container into a Windows machine using NFS 3.0, you must install the NFS client on the Windows machine.

Windows Servers

Open Server Manager --> Add Roles and Features --> select "Client for NFS" from the Features tab, complete the installation, and reboot the machine.
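The same feature can be added from an elevated PowerShell instead of clicking through Server Manager:

```powershell
# Install the NFS client feature on Windows Server, then reboot.
Install-WindowsFeature -Name NFS-Client
Restart-Computer
```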





























Windows 10 workstations

Open Control Panel --> Add or Remove Programs --> Turn Windows features on or off --> select the "Client for NFS" check box under "Services for NFS" --> complete the installation and reboot the machine.
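The equivalent from an elevated PowerShell on Windows 10 (the optional feature names can vary slightly by build; the ones below are the usual ones):

```powershell
# Enable the NFS client optional features on Windows 10, then reboot.
Enable-WindowsOptionalFeature -Online `
    -FeatureName ServicesForNFS-ClientOnly, ClientForNFS-Infrastructure
Restart-Computer
```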









































5. Mount the blob container 


The blob container can be mounted on the server (W2K16)/workstation (Windows 10) by entering the below command from an administrative command prompt.








mount -o nolock <storage-account-name>.blob.core.windows.net:/<storage-account-name>/<container-name> *

Storage-account-name - replace with your storage account name
Container-name - replace with your container name























6. Write permission issue with mounted container


At this stage the mounted container will not have write permission; let's check by creating a new file in it.















































If you need write permissions, you may need to change the default UID and GID that Windows uses to connect to the share. To do this, run the following PowerShell commands as an administrator from the windows machine:


New-ItemProperty -Path HKLM:\SOFTWARE\Microsoft\ClientForNFS\CurrentVersion\Default -Name AnonymousUid -PropertyType DWord -Value 0

New-ItemProperty -Path HKLM:\SOFTWARE\Microsoft\ClientForNFS\CurrentVersion\Default -Name AnonymousGid -PropertyType DWord -Value 0
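In my experience the registry change does not take effect until the NFS client is restarted (or the machine is rebooted); the built-in nfsadmin tool can do this without a reboot:

```powershell
# Restart the NFS client so the new AnonymousUid/AnonymousGid
# values are picked up, then remount the container.
nfsadmin client stop
nfsadmin client start
```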





















Let's now check the write access to the blob container mounted on the Windows machine.

















































Check whether the file you created is showing up in the blob container in the Azure portal.












Note: We can expect new features such as Azure AD RBAC, ACLs and account key support in order to secure the mounted blob containers. Also, many of the configurations may change once the feature is released to general availability, so always refer to the Microsoft documentation for the latest updates.

Reference : 





Kindly check it out and get familiar with the concepts of blob container mounting; the above link to the Microsoft documentation will also help you understand mounting blob containers into Linux operating systems.



Thanks for reading 👍


Thursday, August 13, 2020

HP Pro-curve/Aruba Switch Firmware Upgrade Automation

 



If you haven't gone through the best practices to be followed when upgrading network switch firmware, please refer to the below article.


A firmware upgrade on one or two switches is fine, but what about upgrading 100 switches on a short deadline?

When there is a large number of network switches, paid software is usually used for managing/administering them; Aruba Central is one of HP/Aruba's cloud console solutions for their switches.

In case you don't have paid software for HP/Aruba, don't worry 😊, because today we're going to discuss automating the HP ProCurve/Aruba switch firmware upgrade.

Here we are going to use the Posh-SSH module, introduced for Windows PowerShell (WMF 5.0 or later), which provides SSH cmdlets to access and execute commands on network devices.

The script is developed on the assumption that the primary flash is the active flash in your switch.

If your switch is running on the secondary flash (i.e. the secondary flash is active), then the script commands need to be amended to use secondary instead of primary, and vice versa.

Pre-requisites  

  • Windows PowerShell version 5.0 or above


Open an administrative PowerShell and execute the below command to check your PowerShell version.
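The version check shown in the screenshot is presumably the standard one:

```powershell
# Print the PowerShell version; Major should be 5 or above.
$PSVersionTable.PSVersion
```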


If it is below 5.0, update your PowerShell Version first.

Reference : https://docs.microsoft.com/en-us/powershell/wmf/5.1/install-configure

  • Posh-SSH module should be installed in Windows PowerShell


Open an administrative PowerShell and execute the below command
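The install command shown in the screenshot is presumably the standard PowerShell Gallery install:

```powershell
# Install the Posh-SSH module from the PowerShell Gallery.
Install-Module -Name Posh-SSH -Force
```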








  • Install and configure a TFTP server (refer to Step 2)

  • SSH should be enabled on all network devices

  • All devices should be configured with the same login credentials (read-only)

  • After logging in, the devices should be in "Enable" (privileged #) mode

  • Network device firmware should be in line with industry standards

  • Add the IP addresses of the devices to hp.txt in the "content" folder

  • Not recommended to run on any server installed with SCCM, WDS or any other TFTP service

  • The login credential needs to be encrypted and saved in a text file, pass.txt. Copy the pass.txt file into the script "content" folder.

How to Convert

Open an administrative PowerShell window and execute the command below.

"Temp123*" | ConvertTo-SecureString -AsPlainText -Force | ConvertFrom-SecureString | Out-File "C:\pass.txt"

Password - Temp123*
Output file - pass.txt in the C drive

Note: In case your password contains special characters like "$", make sure you input the password in the below format; otherwise the password gets altered while being encrypted.

Example:

Temp123$upp0rt

"Temp123" + "$" + "upp0rt"
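For context, the script presumably reads the encrypted file back and rebuilds the credential along these lines (a sketch; the path follows the "content" folder convention above):

```powershell
# Sketch: load the encrypted password from pass.txt and rebuild
# the credential object used for the SSH sessions.
$securePassword = Get-Content ".\content\pass.txt" | ConvertTo-SecureString
$cred = New-Object System.Management.Automation.PSCredential ('manager', $securePassword)
```

Note that a password encrypted this way can only be decrypted by the same user on the same machine, so generate pass.txt on the machine that will run the script.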

How to use the script

1)  Download the script “Network_switch_auto_backup.ps1” from the GitHub repository and extract it into any drive.



 2)      Open the tftpd64 folder under the script root folder and run tftpd64.exe, note down the IP address and edit the following settings. It is a one-time job.

      






       2a)  Open the Tftpd64 program and click on the Settings button.

2b) The settings window will open as shown below. Put a check mark only against the TFTP Server option. Remove the check mark from all other options.

2c) Next, select the TFTP tab and click on the Browse button to specify the base directory of the TFTP server. Set your script root folder as the base directory.

Ex: H:\Network_switch_auto_backup

Where H = the disk drive where the script folder is extracted to, and Network_switch_auto_backup is the script root folder.

2d) Under TFTP Security, select the option None.

2e) A very important step, Bind TFTP to this address: to set the IP address for the TFTP server, select the option Bind TFTP to this address, then select the IP address available to you. You may get a different IP address; please use the IP address available in the drop-down window.

You have to note down the bound IP address and write it into the script line as mentioned in Step 4.

2f) Once you have performed all the above steps, click on OK. You will now receive a window asking to restart Tftpd64 to apply the new settings. Click on OK.

2g) Reopen the Tftpd64 program. Just ensure that you selected the same IP address for Server Interface.



3)      Download the new firmware flash image and copy it into the script root folder



Note: If it does not work, copy it to the tftpd64 folder



4)      Edit the following portion of the script


 If the user name to log in to your device is not "manager", change it to your user name.

$cred = New-Object System.Management.Automation.PSCredential ('manager', $securePassword)

Enter your TFTP server IP address (the bound TFTP server IP address from Step 2e)

$tftp_server = "Enter your TFTP server ip address here"

Enter the name of the firmware flash image

$FlashVersion = "Example : WB_16_10_0003.swi"
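For a rough idea of what the script does with these variables, the core Posh-SSH loop might look like the sketch below. This is a sketch only: the exact switch CLI syntax for copying from TFTP depends on your model and firmware, so verify it against your switch manual before relying on it.

```powershell
# Sketch: for each switch listed in hp.txt, open an SSH session and
# copy the new image from the TFTP server into the primary flash.
Import-Module Posh-SSH

foreach ($switch in (Get-Content ".\content\hp.txt")) {
    $session = New-SSHSession -ComputerName $switch -Credential $cred -AcceptKey
    # ProCurve/ArubaOS-Switch style command; syntax varies by model.
    Invoke-SSHCommand -SSHSession $session `
        -Command "copy tftp flash $tftp_server $FlashVersion primary" -TimeOut 600
    Remove-SSHSession -SSHSession $session | Out-Null
}
```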


5)      Open the script root folder and navigate to the "content" folder


  Replace pass.txt with your encrypted device password key file















 Enter the IP addresses of the HP devices into hp.txt

















6)      Open a PowerShell (administrative PS recommended) 


7)      Navigate and set the path to the script root folder


8)      If you want to back up the HP device configurations, execute the below command



PS>.\Network_switch_auto_backup.ps1 HP



9) The SSH session will be disconnected once the firmware is updated on the primary flash

10) Once the switch is online, log in and verify the firmware.

11) Ensure that there are no errors in the logs after the firmware upgrade.

Warning


Use at your own risk, as there are many other dependencies based on your switch model and configuration that can break the network switch during a firmware upgrade.


Go through the best practices mentioned in the below article and additionally take a backup of the flash as well, even though we're keeping a secondary flash with a duplicated configuration (config2).




Always test the script on a test/non-critical network switch before going for a wide range of upgrades.

After a sanity test, the primary flash can optionally be copied to the secondary flash.

Troubleshooting



1)  Logging is enabled in the script with run time, date and year; check the "logs" folder


Future Enhancements



1)  Expand functionality for larger pool of network devices.

2)  Include functionality for staged firmware upgrades (Firmware1 --> Firmware2 --> Firmware3)


Devices Tested


1) HP Switches (Procurve and Aruba)

2920-48G-POE+
2910al-48G-POE+
2920-24G-POE
2530-48G-PoEP

Tested Firmware versions

WB.16.01.0004 --> WB.16.05.0003 --> WB.16.10.0003


Hope this information is valuable to you, and thanks for reading.

Happy Automation ✌