Getting started with Azure Update Management to handle Windows updates

For most companies, patch management is a challenge. Not all customers have SCCM, and WSUS is aging and not agile (you have to create several GPOs to handle different patch windows). This is why Azure Update Management is a welcome replacement for this tool. If you run only Update Management in your Automation account, the solution is nearly free (as long as you don't exceed 500 minutes of usage per month).

For most use cases, Azure Update Management helps to improve your patch management. However, clusters are not handled for the moment (a shame for my S2D clusters), some features are missing, such as running an update process "now", and the information is not reassessed immediately after an update. Despite these gaps, I use only Azure Update Management to handle Windows Update in my lab, and I try to convince my customers to use this product instead of WSUS. In this topic, I'll show you how to deploy and use Azure Update Management.

Azure resources creation

The following Azure resources are required to deploy Azure Update Management:

  • Log Analytics workspace
  • Azure Automation Account

So I create these resources from the Azure Marketplace.

Then, once you have created the Azure Automation account and the Log Analytics workspace, open the Azure Automation account blade and navigate to Update Management. Select the Log Analytics workspace and click on Enable.

Connect on-prem machines to Azure Update Management

Open the Log Analytics workspace blade. In the overview pane, locate Connect a data source, then click on Windows, Linux and other sources.

Then download the Windows agent. Copy the workspace ID and the primary key: you need this information to complete the agent installation.

Once you have downloaded the agent binaries, run the installation. Check the box saying Connect the agent to Azure Log Analytics (OMS).

Next specify the workspace ID and key. Select Azure Commercial.

N.B: You can also install the agent by using a command line:

setup.exe /qn NOAPM=1 ADD_OPINSIGHTS_WORKSPACE=1 OPINSIGHTS_WORKSPACE_AZURE_CLOUD_TYPE=0 OPINSIGHTS_WORKSPACE_ID= OPINSIGHTS_WORKSPACE_KEY= AcceptEndUserLicenseAgreement=1

It can take a while before information shows up in Azure. Once the agent is detected by Azure Update Management, you should get a message saying that a machine does not have Update Management enabled. Click on the link beside it.

Choose the option you want and click on OK.

Once you have enabled Update Management for the machines, you should get information about the update state of your On-Prem computers.

Create an update deployment

Now that machines are reporting properly in the Update Management portal, we can create an update deployment to install the updates. Click on Schedule update deployment and provide a name for this update deployment. Then click on Machines and select the machines you want to update.

Then configure the schedule. For this rule, I choose to run it only once. As you can also see in the screenshot below, you can specify a pre- and post-script.

Finally, specify the maintenance window and the reboot options as specified in the following screenshot.

Once the scheduled update deployment is created, you can retrieve it in the Scheduled update deployments tab.

Create a recurring update deployment

You can also create a recurring update deployment to install updates automatically each month. Create a new update deployment and, this time, choose Recurring in the schedule settings.
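
If you prefer scripting, a similar recurring deployment can be created with the Az.Automation PowerShell module. The following is only a minimal sketch: the Automation account TechCoffee, the resource group RG-Automation and the computer SRV01 are hypothetical names.

# Recurring schedule: every 4 weeks on Saturday at 22:00
# (schedules used by Update Management must be created with -ForUpdateConfiguration)
$schedule = New-AzAutomationSchedule -ResourceGroupName "RG-Automation" `
                                     -AutomationAccountName "TechCoffee" `
                                     -Name "Patch-Saturday" `
                                     -StartTime (Get-Date "22:00").AddDays(1) `
                                     -WeekInterval 4 `
                                     -DaysOfWeek Saturday `
                                     -ForUpdateConfiguration

# Update deployment targeting a non-Azure machine with a 2-hour maintenance window
New-AzAutomationSoftwareUpdateConfiguration -ResourceGroupName "RG-Automation" `
                                            -AutomationAccountName "TechCoffee" `
                                            -Schedule $schedule `
                                            -Windows `
                                            -NonAzureComputer "SRV01" `
                                            -IncludedUpdateClassification Critical,Security `
                                            -Duration (New-TimeSpan -Hours 2)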

Several scheduled update deployments can be created as you can see in the following screenshot.

When an update deployment is running, you can see its progression in the Update deployments tab.

Finally, when the update process is finished, you have to wait about 30 minutes to get the new assessment from the On-Prem machines. After the updates are installed, all your machines should be compliant.

Create a Hub-and-Spoke topology with Azure Virtual Network Peering

Currently I'm working on the AZ-102 certification and I wanted to share with you a small lab I created to try Azure virtual networks, and especially remote gateways. In a Hub-and-Spoke topology, each spoke virtual network communicates through the hub virtual network. To implement this kind of solution, you need several virtual networks and peerings. I would like to implement the following solution:

All VMs must be able to communicate through NE01-VMProject1, which is the hub. A peering will be established between NE01-NET and NE02-NET, and another between NE01-NET and NE03-NET. To prepare this topic, I've already created the following resources:

  • Resource groups
  • Virtual machines
  • Virtual networks

As you can see below, the VM NE01VM1 is connected to NE01-NET virtual network with the IP 10.11.0.4.

The VM NE02VM1 is connected to NE02-NET virtual network with the IP 10.12.0.4.

Because no peering exists yet, the VMs cannot ping each other:

Create the peering

First, I edit Peerings from NE02-NET.

I call it NE02-NET-NE01-NET and I select the virtual network NE01-NET. For the moment, I leave the default configuration.

From the NE01-NET virtual network, I do the same thing to peer it with NE02-NET. I also leave the default configuration for the moment.

When both peers are created, the peering status should be Connected.
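
The same peering can be created with PowerShell instead of the portal. Here is a minimal sketch with the Az.Network module, assuming both virtual networks live in a resource group called RG-Lab (a hypothetical name):

# Get both virtual networks
$hub   = Get-AzVirtualNetwork -ResourceGroupName "RG-Lab" -Name "NE01-NET"
$spoke = Get-AzVirtualNetwork -ResourceGroupName "RG-Lab" -Name "NE02-NET"

# Peering must be created in both directions
Add-AzVirtualNetworkPeering -Name "NE02-NET-NE01-NET" -VirtualNetwork $spoke -RemoteVirtualNetworkId $hub.Id
Add-AzVirtualNetworkPeering -Name "NE01-NET-NE02-NET" -VirtualNetwork $hub -RemoteVirtualNetworkId $spoke.Id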

Now, VM from NE01-VMProject1 and NE02-VMProject2 are able to communicate:

So, I create the peers between NE03-VMProject3 and NE01-VMProject1, repeating the same steps as previously. I create a peer from NE01-NET to connect to NE03-NET.

Then I create a peer from NE03-NET to connect to NE01-NET.

From this point, VMs from NE03-VMProject3 are able to communicate with NE01-VMProject1 VMs, and VMs from NE02-VMProject2 can ping VMs from NE01-VMProject1. However, VMs from NE03-VMProject3 can't communicate with NE02-VMProject2, because the gateway and routes are missing:

Create virtual gateway and route tables

First, create a virtual network gateway in your hub network (NE01-NET) with the following settings. The gateway takes the fourth IP address in the gateway subnet; you will need this information later. So, in this example, the internal IP address of this virtual network gateway is 10.11.1.4.

Then in NE02-VMProject2 and NE03-VMProject3, create a route table resource with the following settings:

Now, navigate to each route table resource and click on Routes, then click on Add.

Configure the routes as follows:

Route table      Route name   Address prefix  Next hop type      Next hop address
NE02-NET-ROUTE   To-NE03-NET  10.13.0.0/16    Virtual appliance  10.11.1.4
NE03-NET-ROUTE   To-NE02-NET  10.12.0.0/16    Virtual appliance  10.11.1.4

Now, click on Subnets and then on Associate.

Associate NE02-NET-ROUTE to the NE02-NET virtual network and NE03-NET-ROUTE to NE03-NET.
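
The route tables can be scripted as well. A sketch under assumptions: the resource group RG-Lab, the location northeurope (guessed from the NE naming) and a subnet named default with the prefix 10.12.0.0/24 in NE02-NET are all hypothetical:

# Route from NE02-NET to NE03-NET through the hub gateway (10.11.1.4)
$route = New-AzRouteConfig -Name "To-NE03-NET" `
                           -AddressPrefix "10.13.0.0/16" `
                           -NextHopType VirtualAppliance `
                           -NextHopIpAddress "10.11.1.4"

$rt = New-AzRouteTable -Name "NE02-NET-ROUTE" -ResourceGroupName "RG-Lab" -Location "northeurope" -Route $route

# Associate the route table with the subnet
$vnet = Get-AzVirtualNetwork -ResourceGroupName "RG-Lab" -Name "NE02-NET"
Set-AzVirtualNetworkSubnetConfig -VirtualNetwork $vnet -Name "default" -AddressPrefix "10.12.0.0/24" -RouteTable $rt
$vnet | Set-AzVirtualNetwork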

Configure hub peers

Now we need to allow gateway transit on each hub peering. Open each peering configuration in NE01-NET and check Allow gateway transit, as below.

Configure spoke peers

On each spoke peering (in NE02-NET and NE03-NET), enable the Use remote gateways option.
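
Both checkboxes can also be set with PowerShell; a sketch under the same RG-Lab assumption:

# Hub side: allow the spoke to use the hub gateway
$peering = Get-AzVirtualNetworkPeering -ResourceGroupName "RG-Lab" -VirtualNetworkName "NE01-NET" -Name "NE01-NET-NE02-NET"
$peering.AllowGatewayTransit = $true
Set-AzVirtualNetworkPeering -VirtualNetworkPeering $peering

# Spoke side: use the remote (hub) gateway
$peering = Get-AzVirtualNetworkPeering -ResourceGroupName "RG-Lab" -VirtualNetworkName "NE02-NET" -Name "NE02-NET-NE01-NET"
$peering.UseRemoteGateways = $true
Set-AzVirtualNetworkPeering -VirtualNetworkPeering $peering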

Wait a few minutes and then all VMs should be able to communicate.

Getting started with Azure File Sync

Azure File Sync is a Microsoft feature released in July 2018. It enables you to synchronize multiple On-Premises file servers with Azure. In other words, we can replace DFS-R for branch offices. Azure File Sync also brings a tiering feature that caches the most used files on the On-Prem servers (based on access data) and keeps the others in Azure. The data can be protected with Azure Backup, which avoids managing backups on each On-Prem file server, and in case of disaster the data remains in Azure. In this topic, I'll show you how to implement Azure File Sync.

Requirement

To follow this topic, you need:

  • An On-Prem file server (physical or virtual) running Windows Server 2012 R2, 2016 or 2019.
  • An Azure account

Azure side configuration

First, create a storage account. I don't need performance, so I choose a standard performance account with the cool access tier. Choose the replication option based on the SLA you require.

Once the storage account is created, open its properties and create a file share. I called mine branch1.

Open the Azure marketplace and look for Azure File Sync.

Create the resource in the same location as the storage account. Usually I add Azure File Sync to the same resource group as the storage account.

Once Azure File Sync is created, you can browse to Registered servers and click on the Azure File Sync agent link.

Download the agent for your Windows Server version and copy it to the On-Prem server. I downloaded the version for Windows Server 2019 because my On-Prem server is running Windows Server 2019.

Implement agent in On-Prem Server

Connect to the On-Prem server and run the following command to install the AzureRM PowerShell module.
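
# Install the AzureRM module from the PowerShell Gallery (elevated session required)
Install-Module -Name AzureRM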

Then run the Azure File Sync agent setup.

Once the agent is installed, the following window is raised. Specify your tenant ID and click on Sign in. Another pop-up asks for your credentials.

Next choose the Azure Subscription, the resource group and the Storage Sync Service.

Once the server is registered, it should be present in Azure File Sync (Registered servers tab). My server is running Windows Server 2019, but the operating system reported in Azure File Sync is Windows Server 2016 :).

To finish, I create a folder on P:\ called AFS. This folder will be synchronized with Azure File Sync. I copy some files into this folder.

Manage Azure File Sync

Now that Azure File Sync is installed, the agent is ready and files are present somewhere on the On-Prem server, we can sync data between On-Prem and Azure. To create the synchronization job, navigate to Sync groups in Azure File Sync.

Provide a name for this Sync Group and select the storage account and the Azure File Share that you created at the beginning.

Now that the cloud endpoint is created, we can add servers to the sync group. So, click on Add server endpoint.

Select the On-Prem server, the path to synchronize (P:\AFS) and enable cloud tiering if you wish.
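
For reference, the server endpoint can also be created with the Az.StorageSync PowerShell module (released after this post was written); a sketch under assumptions, where the Storage Sync Service AFS-TechCoffee, the resource group RG-AFS and the sync group Branch1 are hypothetical names:

# The registered server resource identifies which server gets the endpoint
$server = Get-AzStorageSyncServer -ResourceGroupName "RG-AFS" -StorageSyncServiceName "AFS-TechCoffee"

New-AzStorageSyncServerEndpoint -ResourceGroupName "RG-AFS" `
                                -StorageSyncServiceName "AFS-TechCoffee" `
                                -SyncGroupName "Branch1" `
                                -Name "Branch1-Server" `
                                -ServerResourceId $server.ResourceId `
                                -ServerLocalPath "P:\AFS" `
                                -CloudTiering `
                                -VolumeFreeSpacePercent 20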

Once the synchronization has run, you should retrieve files in the storage account.

Conclusion

In large companies with branch offices, DFS-R is often implemented to replicate branch office data to the main datacenter (in a single direction). Now Microsoft provides a new solution to replace DFS-R: Azure File Sync. Thanks to cloud tiering, your On-Prem file servers don't require plenty of storage, and data can be accessed from everywhere because it is stored in Azure. It's a nice hybrid cloud scenario.

Deploy Veeam Cloud Connect for large environments in Microsoft Azure

Veeam Cloud Connect is a solution to store backups and archives in a second datacenter such as Microsoft Azure. Thanks to this technology, we can easily follow the 3-2-1 backup rule (3 copies of the data; 2 different media; 1 offsite). The last time I talked about Veeam Cloud Connect, I deployed all Veeam roles within a single VM. This time, I'm going to deploy Veeam Cloud Connect in Microsoft Azure with the roles spread across different Azure VMs. Moreover, some roles, such as the Veeam Cloud Gateway, will be deployed in a high availability setup.

Before I begin, I'd like to thank Pierre-Francois Guglielmi, Veeam Alliances System Engineer (@pfguglielmi), for his time. Thank you for your review, your English corrections and your help.

What is Veeam Cloud Connect

Veeam Cloud Connect provides an easy way to copy your backups to an offsite location based on a public cloud (such as Microsoft Azure), including for archival purposes. Instead of investing money in another datacenter to store backup copies, you can leverage Veeam Cloud Connect (VCC) to send these backup copies to Microsoft Azure. VCC exists in the form of two templates that you can find in the Microsoft Azure Marketplace:

  • Veeam Cloud Connect for Service Providers
  • Veeam Cloud Connect for the Enterprise

The first one is for service providers with several customers who want to deliver Backup-as-a-Service offerings using the Veeam Cloud Connect technology: the provider can deploy the solution in a public cloud and deliver the service to its clients. The second version is dedicated to companies willing to build similar Backup-as-a-Service offerings internally, leveraging the public cloud to send backup copies offsite. For this topic, I'll work with Veeam Cloud Connect for the Enterprise, but the technology is the same.

Veeam Cloud Connect is a Veeam Backup & Replication server with Cloud Connect features unlocked by a specific license file. When deploying this kind of solution, you have the following roles:

  • Microsoft Active Directory Domain Controller (optional)
  • Veeam Cloud Connect server
  • Veeam Cloud Gateway
  • Veeam backup repositories
  • Veeam WAN Accelerator (optional)

Microsoft Active Directory Domain Controller

A domain controller is not a mandatory role for the Veeam Cloud Connect infrastructure, but it can make server and credential management easier. If you plan to establish a site-to-site VPN from your on-premises environment to Microsoft Azure, you can deploy domain controllers within Azure, in the same forest as the existing domain controllers, and add all Azure VMs to the domain. This way, you can use your existing credentials to manage servers, apply existing GPOs and create specific service accounts for Veeam managed by Active Directory. It is up to you: if you don't deploy a domain controller within Azure, you can still deploy the VCC infrastructure, but then you'll have to manage the servers one by one.

Veeam Cloud Connect server

Veeam Cloud Connect server is a Veeam Backup & Replication server with Cloud Connect features. This is the central point to manage and deploy Veeam Cloud Connect infrastructure components. From this component, you can deploy Veeam Cloud Gateway, WAN accelerator, backup repositories and manage backup copies.

Veeam Cloud Gateway

The Veeam Cloud Gateway component is the entry point of your Veeam Cloud Connect infrastructure. When you choose to send a backup copy to this infrastructure, you specify the public IP or DNS name of the Veeam Cloud Gateway server(s). This service is based on Azure VM(s) running Windows Server, with a public IP address to allow secure inbound and outbound connections to the on-premises environment. If you choose to deploy several Veeam Cloud Gateway servers for high availability, you have two ways to provide a single entry point:

  • A round-robin record at your public DNS registrar: one DNS name for several A records bound to the Veeam Cloud Gateways' public IP addresses.
  • A Traffic Manager in front of all Veeam Cloud Gateway servers

Because Veeam Cloud Gateway has its own load balancing mechanism, you can't deploy an Azure Load Balancer, an F5 appliance or any other kind of load balancer in front of the Veeam Cloud Gateways.

Veeam Backup repositories

This is the storage system that stores the backups. It can be a single Windows Server with a single disk or a storage space. Don't forget that in Azure, the maximum size of a single data disk is 4TB (as of June 2017). You can also leverage the Scale-Out Backup Repository functionality, where several backup repositories are managed by Veeam as a single logical repository. Finally, and this is the scenario I'm going to present later in this topic, you can store backups on a Scale-Out File Server based on a Storage Spaces Direct cluster. This solution provides SMB 3.1.1 access to the storage.

Veeam WAN Accelerator

The Veeam WAN Accelerator is the same component already available in Veeam Backup & Replication. This service optimizes the traffic between source and destination by sending only new unique blocks not already known at the destination. To leverage this feature, you need a pair of WAN Accelerator servers. The source WAN Accelerator creates a digest for data blocks, and the target synchronizes these digests and populates a global cache. During the next transfer, the source WAN Accelerator compares the digests of the blocks in the new incremental backup file with the already known digests. If nothing has changed, the block is not copied over the network and the data is taken from the global cache on the target, or from the target backup repositories, which in that case act as an infinite cache.

Architecture Overview

For this topic, I decided to separate roles on different Azure VMs. I’ll have 5 kinds of Azure VMs:

  • Domain Controllers
  • Veeam Cloud Gateways
  • Veeam Cloud Connect
  • Veeam WAN Accelerator
  • File Servers (Storage Spaces Direct)

First, I deploy two Domain Controllers to ease management. This is completely optional. All domain controllers are members of an Azure Availability Set.

The Veeam Cloud Gateway servers are located behind a Traffic Manager profile. Each Veeam Cloud Gateway has its own public IP address, and the Traffic Manager profile distributes the traffic across the public IP addresses of the Veeam Cloud Gateway servers. The JSON template provided below allows deploying from 1 to 9 Cloud Gateway servers, depending on your needs. All Veeam Cloud Gateways are added to an Availability Set to get a 99.95% SLA.

Then I deploy two Veeam Cloud Connect VMs: one active and one passive. I add both of these Azure VMs to an Availability Set. If the first VM crashes, the backup configuration is restored to the second VM.

The WAN Accelerator is not in an Availability Set because only one WAN Accelerator can be bound to a tenant. You can deploy as many WAN Accelerators as required.

Finally, the backup repository is based on Storage Spaces Direct. I deploy 4 Azure VMs to leverage parity. I choose parity because my S2D managed disks are based on SSDs (premium disks). If you want more performance, or if you choose standard disks, I recommend mirroring instead of parity. You can use a single VM to store backups to save money, but for this demonstration I'd like to use Storage Spaces Direct, just to show that it is possible. However, there is one limitation with S2D in Azure: for better performance, managed disks are recommended, and an Availability Set with Azure VMs using managed disks supports only three fault domains. That means that in a four-node S2D cluster, two nodes will be in the same fault domain, so there is a chance that two nodes fail simultaneously. But dual parity (or 3-way mirroring) supports two fault domain failures.

Azure resources: Github

I have published a JSON template in my Github repository to deploy the infrastructure described above. You can use this template to deploy the infrastructure for your lab or production environment. In this example, I won't explain how to deploy the Azure resources because the template does it automatically.

Active Directory

Active Directory is not mandatory for this kind of solution. I have deployed domain controllers to make management of servers and credentials easier. To configure domain controllers, I started the Azure VMs where domain controller roles will be deployed. In the first VM, I run the following PowerShell cmdlets to deploy the forest:

# Initialize the Data disk
Initialize-Disk -Number 2

#Create a volume on disk
New-Volume -DiskNumber 2 -FriendlyName Data -FileSystem NTFS -DriveLetter E

#Install DNS and ADDS features
Install-windowsfeature -name AD-Domain-Services, DNS -IncludeManagementTools

# Forest deployment
Import-Module ADDSDeployment
Install-ADDSForest -CreateDnsDelegation:$false `
                   -DatabasePath "E:\NTDS" `
                   -DomainMode "WinThreshold" `
                   -DomainName "VeeamCloudConnect.net" `
                   -DomainNetbiosName "HOMECLOUD" `
                   -ForestMode "WinThreshold" `
                   -InstallDns:$true `
                   -LogPath "E:\NTDS" `
                   -NoRebootOnCompletion:$false `
                   -SysvolPath "E:\SYSVOL" `
                   -Force:$true

Then I run these cmdlets for additional domain controllers:

# Initialize data disk
Initialize-Disk -Number 2

# Create a volume on disk
New-Volume -DiskNumber 2 -FriendlyName Data -FileSystem NTFS -DriveLetter E

# Install DNS and ADDS features
Install-windowsfeature -name AD-Domain-Services, DNS -IncludeManagementTools

# Add domain controller to forest
Import-Module ADDSDeployment
Install-ADDSDomainController -NoGlobalCatalog:$false `
                             -CreateDnsDelegation:$false `
                             -Credential (Get-Credential) `
                             -CriticalReplicationOnly:$false `
                             -DatabasePath "E:\NTDS" `
                             -DomainName "VeeamCloudConnect.net" `
                             -InstallDns:$true `
                             -LogPath "E:\NTDS" `
                             -NoRebootOnCompletion:$false `
                             -SiteName "Default-First-Site-Name" `
                             -SysvolPath "E:\SYSVOL" `
                             -Force:$true

Once the Active Directory is ready, I add each Azure VM to the domain by using the following cmdlet:

Add-Computer -Credential homecloud\administrator -DomainName VeeamCloudConnect.net -Restart

Configure Storage Spaces Direct

I have written several topics on Tech-Coffee about Storage Spaces Direct; you can find for example this topic or this one. These topics cover Storage Spaces Direct in more detail if you need more information.

To configure Storage Spaces Direct in Azure, I started all the file server VMs. Then, on each VM, I ran the following cmdlets:

# Rename vNIC connected in Internal subnet by Management
rename-netadapter -Name "Ethernet 3" -NewName Management

# Rename vNIC connected in cluster subnet by cluster
rename-netadapter -Name "Ethernet 2" -NewName Cluster

# Disable DNS registration for cluster vNIC
Set-DNSClient -InterfaceAlias *Cluster* -RegisterThisConnectionsAddress $False

# Install required features
Install-WindowsFeature FS-FileServer, Failover-Clustering -IncludeManagementTools -Restart

Once you have run these commands on each server, you can deploy the cluster:

# Validate cluster prerequisites
Test-Cluster -Node AZFLS00, AZFLS01, AZFLS02, AZFLS03 -Include "Storage Spaces Direct",Inventory,Network,"System Configuration"

#Create the cluster
New-Cluster -Node AZFLS00, AZFLS01, AZFLS02, AZFLS03 -Name Cluster-BCK01 -StaticAddress 10.11.0.160

# Set the cluster quorum to Cloud Witness (choose another Azure location)
Set-ClusterQuorum -CloudWitness -AccountName StorageAccount -AccessKey "AccessKey"

# Change the CSV cache to 1024MB per CSV
(Get-Cluster).BlockCacheSize=1024

# Rename network in the cluster
(Get-ClusterNetwork "Cluster Network 1").Name="Management"
(Get-ClusterNetwork "Cluster Network 2").Name="Cluster"

# Enable Storage Spaces Direct
Enable-ClusterS2D -Confirm:$False

# Create a volume and rename the folder Volume1 to Backup
New-Volume -StoragePoolFriendlyName "*Cluster-BCK01*" -FriendlyName Backup -FileSystem CSVFS_ReFS -ResiliencySettingName parity -PhysicalDiskRedundancy 2 -Size 100GB
Rename-Item C:\ClusterStorage\Volume1 Backup
new-item -type directory C:\ClusterStorage\Backup\HomeCloud

Then open the Active Directory console (dsa.msc) and edit the permissions of the OU where the Cluster Name Object is located. Grant the CNO (in this example, Cluster-BCK01) the permission to create computer objects on the OU.

Next, run the following cmdlets to complete the file server’s configuration:

# Add Scale-Out File Server to cluster
Add-ClusterScaleOutFileServerRole -Name BackupEndpoint

# Create a share
New-SmbShare -Name 'HomeCloud' -Path C:\ClusterStorage\Backup\HomeCloud -FullAccess everyone

First start of the Veeam Cloud Connect VM

First time you connect to the Veeam Cloud Connect VM, you should see the following screen. Just specify the license file for Veeam Cloud Connect and click Next. The next screen shows the requirements to run a Veeam Cloud Connect infrastructure.

Deploy Veeam Cloud Gateway

The first component I deploy is the Veeam Cloud Gateway. In the Veeam Backup & Replication console (in the Veeam Cloud Connect VM), navigate to Cloud Connect, then select Add Gateway.

In the first screen, just click on Add New…

Then specify the name of the first gateway and provide a description.

In the next screen, enter credentials that have administrative permissions in the Veeam Cloud Gateway VM. For that, I created an account in Active Directory and I added it to local administrators of the VM.

Then Veeam tells you that it has to deploy a component on the target host. Just click Apply.

The following screen shows a successful deployment:

Next you have a summary of the operations applied to the target server and what has been installed.

Now you are back to the first screen. This time select the host you just added. You can change the external port. For this test I kept the default value.

Then choose “This server is located behind NAT” and specify the public IP address of the machine. You can find this information in the Azure Portal on the Azure VM blade. Here again I left the default internal port.

This time, Veeam tells you that it has to install Cloud Gateway components.

The following screenshot shows a successful deployment:

Repeat these steps for each Cloud Gateway. In this example, I have two Cloud Gateways:

To complete the Cloud Gateway configuration, open up the Azure portal and edit the Traffic Manager profile. Add an endpoint for each Cloud Gateway you deployed and select the right public IP address. (Sorry, I didn't find how to loop the creation of endpoints in the JSON template.)

Because I have two Cloud Gateways, I end up with two Traffic Manager endpoints with the same weight.
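
With many gateways, a small PowerShell loop can create the endpoints instead of clicking through the portal. A sketch with the Az.TrafficManager and Az.Network modules (at the time of writing, the AzureRM equivalents with the AzureRm prefix); the profile vcc-tm, the resource group rg-vcc and the public IP names AZVCG0-pip, AZVCG1-pip are hypothetical, and each public IP needs a DNS name label to be usable as an Azure endpoint:

# Create one Azure endpoint per Cloud Gateway public IP
0..1 | ForEach-Object {
    $pip = Get-AzPublicIpAddress -ResourceGroupName "rg-vcc" -Name "AZVCG$_-pip"
    New-AzTrafficManagerEndpoint -Name "GW$_" `
                                 -ProfileName "vcc-tm" `
                                 -ResourceGroupName "rg-vcc" `
                                 -Type AzureEndpoints `
                                 -TargetResourceId $pip.Id `
                                 -EndpointStatus Enabled `
                                 -Weight 1
}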

Add the backup repository

In this step, we add the backup repository. Open the Veeam Backup & Replication console (in Veeam Cloud Connect VM) and navigate to Backup Infrastructure. Then select Add Repository.

Enter a name and a description for your backup repository.

Next select Shared folder because Storage Spaces Direct with SOFS is based on … shared folder.

Then specify the UNC path to the share that you previously created (see the Storage Spaces Direct section) and provide credentials with the required privileges.

In the next screen you can limit the maximum number of concurrent tasks, the data rates and set some advanced parameters.

Then I choose not to enable vPower NFS, because it's only used in VMware vSphere environments.

The following steps are not mandatory. I just clean up the default configuration. First I remove the default tenant.

Then I change the Configuration Backup task’s repository to the one created previously. For that I navigate to Configuration Backup:

Then I specify that I want to store the configuration backups on my S2D cluster. It is highly recommended to encrypt the configuration backup, because it saves credentials.

Finally, I remove the default backup repository.

Deploy Veeam WAN Accelerator (Optional)

To add a Veeam WAN Accelerator, navigate to Backup Infrastructure and select Add WAN Accelerator.

In the next screen, click Add New…

Specify the FQDN of the target host and type in a description.

Then select credentials with administrative permissions on the target host.

In the next screen, Veeam tells you that a component has to be installed.

This screen shows a successful deployment.

Next you have a summary screen which provides a summary of the configuration of the target host.

Now you are back to the first screen. Just select the server that you just added and provide a description. I choose to leave the default traffic port and the number of streams.

Select a cache device with enough capacity for your needs.

Finally you can review your settings. If all is ok, just click Apply.

You can add as many WAN Accelerators as needed. One WAN Accelerator can be used by several tenants, but only one WAN Accelerator can be bound to a given tenant.

Prepare the tenant

Now you can add a tenant. Navigate to Cloud Connect tab and select Add tenant.

Provide a user name, a password and a description to your tenant. Then choose Backup storage (cloud backup repository).

In the next screen you can define the maximum number of concurrent tasks and a bandwidth limit.

Then click Add to bind a backup repository to the tenant.

Specify the cloud repository name, the backup repository, the capacity of the cloud repository and the WAN Accelerator.

Once the cloud repository is configured, you can review the settings in the last screen.

Now the Veeam Cloud Connect infrastructure is ready. The enterprise can now connect to Veeam Cloud Connect in Azure.

Connect On-Premises to Veeam Cloud Connect

To connect to the Veeam Cloud Connect infrastructure from On-Premises, open your Veeam Backup & Replication console. Then in Backup infrastructure, navigate to Service Providers. Click Add Service Provider.

Type in the FQDN of your Traffic Manager profile and provide a description. Select the external port you chose during the Veeam Cloud Gateways configuration (I left mine at the default 6180).

In the next screen, enter the credentials to connect to your tenant.

If the credentials are correct, you should see the available cloud repositories.

Now you can create a backup copy job to Microsoft Azure.

Enter a job name and description and configure the copy interval.

Add virtual machine backups to copy to Microsoft Azure and click Next.

In the next screen you can set archival settings and how many restore points you want to keep. You can also configure some advanced settings.

If you have a WAN Accelerator on-premises, you can select the source WAN Accelerator.

Then you can configure scheduling options for the backup copy job.

When the backup copy job configuration is complete, the job starts and you should see backup copies being created in the Veeam Cloud Connect infrastructure.

Conclusion

This topic introduced a "large" Veeam Cloud Connect infrastructure within Azure. All components can be deployed in a single VM (or two) for small environments, or as described in this post for large infrastructures. If you have several branch offices and want to send backup data to an offsite location, it can be the right solution instead of a tape library.

RDS 2016 farm: RDS Final configuration

This article is the final topic about how to deploy Remote Desktop Services in Microsoft Azure with Windows Server 2016. In this topic, we will apply the final RDS configuration, such as the certificates, the collection and some custom settings. Then we will try to open a remote application from the portal.

Certificates

Before creating the collection, we can configure the certificates for RD Web Access, RD Gateway and the brokers. You can request a public certificate for this, or you can use your own PKI. If you use your own PKI, you have to distribute the certificate authority certificates to all clients, and you also have to provide the CRL/OCSP responder. If you use a public certificate, there is almost no client-side configuration. You can get more information about the required certificates here.

Once you have your certificate(s), you can open the properties of the RDS farm from Server Manager and navigate to Certificates. In this interface, you can add the certificate(s) for each role.
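
The certificates can also be applied with the RemoteDesktop PowerShell module. A minimal sketch, assuming the certificate was exported as a PFX to C:\Certs\rds.pfx (a hypothetical path):

$password = Read-Host -AsSecureString -Prompt "PFX password"
# Apply the same certificate to the four RDS role services
"RDGateway","RDWebAccess","RDRedirector","RDPublishing" | ForEach-Object {
    Set-RDCertificate -Role $_ `
                      -ImportPath "C:\Certs\rds.pfx" `
                      -Password $password `
                      -ConnectionBroker azrdb0.homecloud.net `
                      -Force
}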

On the client side, you should add a setting by GPO or with the local policy editor. Get the RD Connection Broker – Publishing certificate thumbprint and copy it. Then edit the setting Specify SHA1 thumbprints of certificates representing trusted .rdp publishers and add the certificate thumbprint without spaces. This setting removes a pop-up warning for the clients.

Create and configure the collection

To create the collection, I use the following PowerShell cmdlet:

New-RDSessionCollection -CollectionName RemoteApps `
                        -SessionHost azrdh0.homecloud.net, azrdh1.homecloud.net `
                        -CollectionDescription "Remote application collection" `
                        -ConnectionBroker azrdb0.homecloud.net

Once you have created the collection, the RDS farm should show the new collection:

Now we can configure the User Profile Disks location:

Set-RDSessionCollectionConfiguration -CollectionName RemoteApps `
                                     -ConnectionBroker azrdb0.homecloud.net `
                                     -EnableUserProfileDisk `
                                     -MaxUserProfileDiskSizeGB 10 `
                                     -DiskPath \\SOFS\UPD$

If you edit the properties of the collection, you should have this User Profile Disk configuration:

In the \\sofs\upd$ folder, you can check that you have new VHDX files, as below:

From the Server Manager, you can configure the collection properties as below:

Add applications to the collection

The collection that we have created is used to publish applications. So, install each application you need on all RD Host servers. Once the applications are installed, you can publish them. Open the collection properties and click on Add applications in the RemoteApp Programs part.

Then select the applications you want to publish. If the application you want to publish is not available in the list, you can click on Add.

Then the wizard confirms the applications that will be published.
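
Publishing can also be scripted. A sketch that publishes the calculator used in the test below:

New-RDRemoteApp -CollectionName RemoteApps `
                -DisplayName "Calculator" `
                -FilePath "C:\Windows\System32\calc.exe" `
                -ConnectionBroker azrdb0.homecloud.net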

Test

Now that applications are published, you can browse to the RD Web Access portal. In my configuration, I have added a DNS record bound to the Azure Load Balancer public IP. Specify your credentials and click on Sign In.

Click on the application of your choice.

I have chosen the calculator. As you can see in the task manager, the calculator runs through a Remote Desktop connection. Great, it is working.

Conclusion

This series of topics about Remote Desktop Services has shown you how to deploy the farm in Azure. We saw that Windows Server 2016 brings a lot of new features that ease the deployment in Azure. However, you can also deploy the RDS farm On-Prem if you wish.

RDS 2016 Farm: Configure File Servers for User Profile Disks

In the previous topics of this series, we deployed the RDS farm in Azure. Now we need a highly available file service to manage the user profile disks (UPD). To provide the high availability, I leverage Storage Spaces Direct (S2D) and Scale-Out File Server (SOFS). For more information about the deployment of S2D, you can read this topic (based on the hyperconverged model). For Remote Desktop usage, I'll deploy a disaggregated S2D model. In this topic, I'll configure the file servers for User Profile Disks.

I'll deploy this file service by using only PowerShell. Before following this topic, be sure that your Azure VMs have joined the Active Directory and that they have two network adapters in two different subnets (one for the cluster and the other for management). I have also set static IP addresses from the Azure portal.

Deploy the cluster

First of all, I install these features on both file server nodes:

install-WindowsFeature FS-FileServer, Failover-Clustering -IncludeManagementTools

Then I install the Failover Clustering RSAT on the management VM.

Install-WindowsFeature RSAT-Clustering

Next, I test whether the cluster nodes can run Storage Spaces Direct:

Test-Cluster -Node "AZFLS0","AZFLS1" -Include "Storage Spaces Direct", Inventory,Network,"System Configuration"

If the test passes successfully, you can run the following cmdlet to deploy the cluster with the name UPD-Sto and the IP 10.11.0.29.

New-Cluster -Node "AZFLS0","AZFLS1" -Name UPD-Sto -StaticAddress 10.11.0.29 -NoStorage

Once the cluster is created, grant the Cluster Name Object (UPD-Sto) the right to create computer objects on the OU where it is located. This permission is required to create the CNO for the SOFS.

Enable and configure S2D and SOFS

Now that the cluster is created, you can enable S2D (I run the following PowerShell on a file server node by using remote PowerShell).

Enable-ClusterS2D

Then I create a new volume formatted with ReFS and with a capacity of 100GB. This volume uses 2-way mirroring resiliency.

New-Volume -StoragePoolFriendlyName S2D* -FriendlyName UPD01 -FileSystem CSVFS_REFS -Size 100GB

Now I rename the folder Volume1 in ClusterStorage to UPD-01:

rename-item C:\ClusterStorage\Volume1 UPD-01

Then I add the Scale-Out File Server role to the cluster and call it SOFS.

Add-ClusterScaleOutFileServerRole -Name SOFS

To finish, I create a folder called Profiles in the volume and I share it with full access for everyone (not recommended in production). I call the share UPD$.

New-Item -Path C:\ClusterStorage\UPD-01\Profiles -ItemType Directory
New-SmbShare -Name 'UPD$' -Path C:\ClusterStorage\UPD-01\Profiles -FullAccess everyone

Now my storage is ready and I am able to reach \\SOFS.homecloud.net\UPD$

Next topic

In the next topic, I will deploy a session collection and configure it. Then I will add the certificates for the Remote Desktop components.

RDS 2016 Farm: Deploy RDS 2016 farm in Azure

This topic is part of a series about how to deploy a Windows Server 2016 RDS farm in Microsoft Azure. In the previous topics, we saw how to deploy networks, storage and virtual machines in Azure, and we added the domain controllers to the On-Prem forest across the site-to-site VPN. In this topic, we will deploy the RDS 2016 farm in Azure, running on Windows Server 2016.

Deploy the Azure SQL database

In the previous topics, we did not deploy the Azure SQL database, so I will deploy this component now. In Microsoft Azure, open the marketplace and look for SQL Database. Create a blank database and a new SQL server. I have called the SQL server sql-rds and the database DBA-Broker.

Deploy RDS 2016 Farm

Once all your VMs have joined the Active Directory, you can create a new session-based Remote Desktop deployment. The first broker server is AZRDB0, the first RD host server is AZRAH0 and the first RD Web Access server is AZRDA0. From AZRDB0, I run the following cmdlet:

New-RDSessionDeployment -ConnectionBroker AZRDB0.homecloud.net `
                        -SessionHost AZRAH0.homecloud.net `
                        -WebAccessServer AZRDA0.homecloud.net

Next, in the Server Manager of AZRDB0, add all servers of the RDS farm.

Then I add additional servers to the RDS farm. First, I add two license servers. Each server will have some licenses, so even if one server is down, a license server remains available.

Add-RDServer -ConnectionBroker AZRDB0.homecloud.net -Server AZRDB0.homecloud.net -Role RDS-LICENSING
Add-RDServer -ConnectionBroker AZRDB0.homecloud.net -Server AZRDB1.homecloud.net -Role RDS-LICENSING

Then I add an additional RD host server:

Add-RDServer -ConnectionBroker AZRDB0.homecloud.net -Server AZRAH1.homecloud.net -Role RDS-RD-SERVER

And I add an additional RD Web Access server:

Add-RDServer -ConnectionBroker AZRDB0.homecloud.net -Server AZRDA1.homecloud.net -Role RDS-WEB-ACCESS

In Server Manager, if you browse the Remote Desktop Deployment, you should have the following diagram.

Configure the RD Broker in High Availability

Before configuring the RD Broker in High Availability mode, go back to the Azure Portal and open the SQL database settings. Click on the link connection strings.

Then create two DNS records, where each DNS record is associated with one RD Broker.

N.B: you can use an Azure Load Balancer instead of DNS round-robin for the RD Broker. For more information, you can read this topic.
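
A sketch of these two records with the DnsServer module, run from a domain controller; the broker IP addresses are assumptions, as they are not listed in this series:

# Two A records with the same name provide a simple DNS round-robin for the brokers
Add-DnsServerResourceRecordA -ZoneName "homecloud.net" -Name "broker" -IPv4Address "10.11.0.30"
Add-DnsServerResourceRecordA -ZoneName "homecloud.net" -Name "broker" -IPv4Address "10.11.0.31"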

Then install the SQL Server Native Client on each RD Broker server. Next, run the following cmdlet, replacing the SQL server and database names in the connection string.

Set-RDConnectionBrokerHighAvailability -ConnectionBroker 'azrdb0.homecloud.net' `
                                       -DatabaseConnectionString 'Driver={SQL Server Native Client 11.0};Server=tcp:sql-rds.database.windows.net,1433;Database=DBA-Broker;Uid=master@sql-rds;Pwd={DATABASE PASSWORD};Encrypt=yes;TrustServerCertificate=no;Connection Timeout=30;' `
                                       -ClientAccessName 'broker.homecloud.net'

To finish, run the following cmdlet to add an additional RD Broker server:

Add-RDServer -ConnectionBroker AZRDB0.homecloud.net -Server AZRDB1.homecloud.net -Role RDS-CONNECTION-BROKER

If you come back to the deployment overview in Server Manager, the RD Connection Broker should be marked as being in High Availability Mode.

Configure RD Gateway

To add RD Gateways, click on the + symbol in the deployment overview. Then select both RD Gateway servers and add them to selected box.

Provide an SSL certificate name which should be the FQDN of the RD Gateway servers.

Then click on add to start the RD Gateway deployment.

Now the deployment overview should look like this:

In each RD Gateway server, open the RD Gateway console and edit the server properties. Then navigate to Transport Settings and disable UDP.

In Server Farm tab, add both servers and click on Apply.

Repeat these steps for each RD Gateway server.

Deploy the Load Balancer

A load balancer is required for the RD Web Access and RD Gateway servers. You can also use an Azure Load Balancer for the RD Broker, but in this example I deploy an Azure Load Balancer only for RD Web Access and Gateway. Open the marketplace and type load balancer in the search box.

Provide a name for the Load Balancer and select Public. Select the public IP address previously created from the JSON template.

Once the Azure Load Balancer is created, open the Backend Pools settings. Then click on Add.

Specify a name for the backend pool and select associated to Availability Set. Select the RD Access availability set and add both virtual machines.

Next add a Health probe based on TCP 443 (HTTP / 443 is currently not supported).

Also add a load balancing rule based on TCP. Specify the public TCP port and the backend port, then select the health probe.

Now you can try the public IP (https://<IP>/rdweb). You should get the Remote Web Access authentication page.

What is missing?

For the moment, no certificate has been deployed, so you will get some security alerts in the web browser and the RD Gateway does not work yet. We will configure these certificates in another topic.

Next topic

In the next topic, I’ll deploy a SOFS cluster based on Storage Spaces Direct to store User Profile Disk.

RDS 2016 Farm: Configure Domain Controllers

This topic is part of a series about how to deploy a Windows Server 2016 RDS farm in Azure. In the previous topics, we deployed Microsoft Azure resources such as networks, storage and virtual machines. In this topic, we will configure the domain controllers to extend the On-Premises Active Directory to Microsoft Azure. The previous articles of this series must be followed before this one.

Prepare the On-Prem Active Directory

In the following screenshot, you can find the current sites and services configuration. I have two sites with a replication link.

Now I’m going to create a new site, subnets, and a new replication link with PowerShell:

$OnPremSite = "Lyon-HyperV"
$AzureSite  = "Azure"
$AzureDesc  = "Azure AD Site"

Try {
    New-ADReplicationSite -Name $AzureSite `
                          -Description $AzureDesc `
                          -ErrorAction Stop

    New-ADReplicationSubnet -Name 10.11.0.0/24 `
                            -Site $AzureSite `
                            -ErrorAction Stop

    New-ADReplicationSubnet -Name 10.11.1.0/24 `
                            -Site $AzureSite `
                            -ErrorAction Stop

    New-ADReplicationSiteLink -Name $($OnPremSite + "-" + $AzureSite) `
                              -ReplicationFrequencyInMinutes 15 `
                              -InterSiteTransportProtocol IP `
                              -SitesIncluded $OnPremSite, $AzureSite `
                              -Cost 200 `
                              -ErrorAction Stop
}
Catch {
    Write-Output $Error[0].Exception.Message
}

The following screenshot presents the Sites and Services configuration after running the script.

Below you can find the subnets configuration.

Azure VM configuration

First of all, I set the IP addresses of my domain controllers to static:

  • AZADS0: 10.11.0.20
  • AZADS1: 10.11.0.21

Then I change the DNS configuration. AZADS0 is bound to On-Prem domain controllers.

AZADS1 is bound to AZADS0 and an On-Prem domain controller.

Thanks to this configuration, both domain controllers are able to resolve the On-Prem domain DNS name (called homecloud.net).
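
If you prefer to set the DNS servers inside the guest OS rather than on the vNIC in the Azure portal, here is a sketch; the On-Prem domain controller addresses (10.10.0.20 and 10.10.0.21) and the interface alias Ethernet are assumptions:

# On AZADS0: point to the On-Prem domain controllers
Set-DnsClientServerAddress -InterfaceAlias "Ethernet" -ServerAddresses 10.10.0.20, 10.10.0.21

# On AZADS1: point to AZADS0 first, then an On-Prem domain controller
Set-DnsClientServerAddress -InterfaceAlias "Ethernet" -ServerAddresses 10.11.0.20, 10.10.0.20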

Operating system configuration

Now I connect to each domain controller (over the private IP, because the VPN is established) and I create a new volume on the data disk. I run the following PowerShell cmdlets:

Initialize-Disk -Number 2
New-Volume -DiskNumber 2 -FriendlyName Data -FileSystem NTFS -DriveLetter E

Then I install the domain service and DNS role:

Install-WindowsFeature AD-Domain-Services, DNS -IncludeManagementTools

Next, I promote the server as a domain controller:

Import-Module ADDSDeployment
Install-ADDSDomainController `
-NoGlobalCatalog:$false `
-CreateDnsDelegation:$false `
-Credential (Get-Credential) `
-CriticalReplicationOnly:$false `
-DatabasePath "E:\NTDS" `
-DomainName "homecloud.net" `
-InstallDns:$true `
-LogPath "E:\NTDS" `
-NoRebootOnCompletion:$false `
-SiteName "Azure" `
-SysvolPath "E:\SYSVOL" `
-Force:$true

Once both Azure domain controllers are promoted, I open Active Directory Sites and Services again. You can see that both Azure domain controllers are now located in the Azure AD site.

Next topic

In the next topic, I will deploy the RDS farm with all roles in high availability. I'll use PowerShell as much as possible.

RDS 2016 Farm: Deploy the Microsoft Azure VM

This topic is part of a series about how to deploy a Windows Server 2016 RDS farm in Microsoft Azure. Previously, we created the network resources, the storage account for diagnostics and the Windows image. In this topic, we will create all the Azure VMs required for the solution. The deployment will be processed from a JSON template.

Github

The templates for this series are located in my Github repository, in a folder called RDSFarm that contains the JSON templates. For this topic, I have used RDS-VMs.json.

JSON template explanation

In this template, I create an availability set for each kind of service, so I have 5 availability sets (Domain Controllers, File Servers, RD Host, RD Broker and RD Gateway). I have the following block of code for each availability set:

{
      "type": "Microsoft.Compute/availabilitySets",
      "sku": {
        "name": "Aligned"
      },
      "name": "[parameters('ASDomainControllersName')]",
      "apiVersion": "[variables('computeResouresApiVersion')]",
      "location": "[variables('ResourcesLocation')]",
      "tags": {
        "displayName": "AS_DomainControllers"
      },
      "properties": {
        "platformUpdateDomainCount": 5,
        "platformFaultDomainCount": 2
      }
    }

Then I create the virtual network adapters. Each VM has one network adapter, except the File Servers, which have two (cluster and management). Each vNIC is connected to the right subnet. You can also see that I have created a loop (the copy section): because each kind of service has at least two VMs, the loop saves me from duplicating the same block of code several times.

{
      "type": "Microsoft.Network/networkInterfaces",
      "name": "[concat(parameters('PrefixNameDC'), copyindex())]",
      "apiVersion": "[variables('NetworkResouresApiVersion')]",
      "location": "[variables('ResourcesLocation')]",
      "tags": {
        "displayName": "vNIC_DomainControllers"
      },
      "copy": {
        "name": "DCnicLoop",
        "count": "[parameters('numberOfDC')]"
      },
      "properties": {
        "ipConfigurations": [
          {
            "name": "ipconfig1",
            "properties": {
              "privateIPAllocationMethod": "Dynamic",
              "subnet": {
                "id": "[Variables('vNetSubIntRef')]"
              }
            }
          }
        ],
        "dnsSettings": {
          "dnsServers": []
        },
        "enableIPForwarding": false
      }
    }

Next, I create the data disks. The File Servers have four data disks each (for Storage Spaces Direct), each Domain Controller has one data disk to host the AD database, and the RD Hosts have a data disk for applications. All these disks are managed disks. I have also made a loop for each kind of data disk:

{
      "type": "Microsoft.Compute/disks",
      "name": "[concat(parameters('PrefixNameDC'), copyindex(),'-Data01')]",
      "apiVersion": "[variables('computeResouresApiVersion')]",
      "location": "[variables('ResourcesLocation')]",
      "tags": {
        "displayName": "Disks_DomainControllers"
      },
      "copy": {
        "name": "DCDskLoop",
        "count": "[parameters('numberOfDC')]"
      },
      "properties": {
        "creationData": {
          "createOption": "Empty"
        },
        "accountType": "Standard_LRS",
        "diskSizeGB": 10
      }
    }

I have also created a public IP for the RD Access load balancer:

{
      "type": "Microsoft.Network/publicIPAddresses",
      "name": "[parameters('PublicIPName')]",
      "apiVersion": "[variables('NetworkResouresApiVersion')]",
      "location": "[variables('ResourcesLocation')]",
      "tags": {
        "displayName": "Public IP Address"
      },
      "properties": {
        "publicIPAllocationMethod": "Static",
        "idleTimeoutInMinutes": 4
      },
      "dependsOn": []
    }

To finish, the following JSON code block creates the VMs. I have a code block for each kind of VM, and a loop deploys the same VM several times with a different name. The VMs are deployed from the Windows image, and credentials are provided from parameters. Boot diagnostics are enabled and the logs are stored in the storage account. Each vNIC is bound to the right VM, and the VMs are added to their availability set and connected to the right data disks.

{
      "name": "[concat(parameters('PrefixNameDC'), copyindex())]",
      "type": "Microsoft.Compute/virtualMachines",
      "apiVersion": "[variables('computeResouresApiVersion')]",
      "location": "[variables('ResourcesLocation')]",
      "tags": {
        "displayName": "VM_DomainControllers"
      },
      "copy": {
        "name": "DCVMLoop",
        "count": "[parameters('NumberOfDC')]"
      },
      "dependsOn": [
        "[resourceId('Microsoft.Compute/availabilitySets', parameters('ASDomainControllersName'))]",
        "[resourceId('Microsoft.Network/networkInterfaces', concat(parameters('PrefixNameDC'), copyindex()))]"
      ],
      "properties": {
        "osProfile": {
          "computerName": "[concat(parameters('PrefixNameDC'), copyindex())]",
          "adminUsername": "[parameters('adminUser')]",
          "adminPassword": "[parameters('adminPassword')]",
          "windowsConfiguration": {
            "provisionVmAgent": "true"
          }
        },
        "hardwareProfile": {
          "vmSize": "Standard_DS1_v2"
        },
        "storageProfile": {
          "imageReference": {
            "id": "[parameters('OSDiskMasterPath')]"
          },
          "osDisk": {
            "name": "[concat(parameters('PrefixNameDC'), copyindex(),'-OS')]",
            "createOption": "FromImage",
            "managedDisk": {
              "storageAccountType": "Standard_LRS"
            }
          },
          "dataDisks": [
            {
              "lun": 2,
              "name": "[concat(parameters('PrefixNameDC'), copyindex(),'-Data01')]",
              "createOption": "Attach",
              "managedDisk": {
                "id": "[resourceId('Microsoft.Compute/disks', concat(parameters('PrefixNameDC'), copyindex(),'-Data01'))]"
              }
            }
          ]
        },
        "networkProfile": {
          "networkInterfaces": [
            {
              "id": "[resourceId('Microsoft.Network/networkInterfaces', concat(parameters('PrefixNameDC'), copyindex()))]"
            }
          ]
        },
        "diagnosticsProfile": {
          "bootDiagnostics": {
            "enabled": true,
            "storageUri": "[reference(resourceId('rdsfarm', 'Microsoft.Storage/storageAccounts', parameters('Sto_LogsAccount')), '2015-06-15').primaryEndpoints['blob']]"
          }
        },
        "availabilitySet": {
          "id": "[resourceId('Microsoft.Compute/availabilitySets', parameters('ASDomainControllersName'))]"
        }
      }
    }

Template deployment

To run the deployment with the JSON template, go to the marketplace and search for Template Deployment.

Then, copy and paste the template. You should have something like this:

Next, change the parameters as you wish and click on Purchase.
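If you prefer scripting, you can launch the same deployment with Azure PowerShell. Below is a minimal sketch using the AzureRM module; the resource group name and file paths are assumptions for illustration:

Login-AzureRmAccount

# Deploy the template and its parameter file into an existing resource group
# (resource group name and file paths are examples)
New-AzureRmResourceGroupDeployment -Name "RDSFarm-VMs" `
    -ResourceGroupName "rdsfarm" `
    -TemplateFile "C:\RDSFarm\RDS-VMs.json" `
    -TemplateParameterFile "C:\RDSFarm\RDS-VMs.parameters.json"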

After the deployment, I stopped all the VMs so as not to immediately spend money on VMs that are not yet used.
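A deallocated VM no longer incurs compute charges. Here is a quick sketch to stop every VM of the resource group from PowerShell (the resource group name is an assumption):

# Stop-AzureRmVM deallocates the VM by default, so compute billing stops
Get-AzureRmVM -ResourceGroupName "rdsfarm" |
    ForEach-Object { Stop-AzureRmVM -ResourceGroupName $_.ResourceGroupName -Name $_.Name -Force }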

Result

Once the deployment is finished, you should have several Azure VMs, depending on the loop settings. On my side, I have 10 Azure VMs.

If I select a VM such as a file server, you can see that the managed disks are bound to the Azure VM.

Network interfaces are also connected to the server and associated with the right subnet.

The Azure VMs are also placed inside availability sets.

To finish, boot diagnostics are enabled and the logs are stored in the storage account.

Next topic

In the next topic, I will configure the domain controllers: I'll set up the AD site and promote the Azure domain controllers.

RDS 2016 Farm: Create Microsoft Azure networks, storage and Windows image //www.tech-coffee.net/rds-2016-farm-create-microsoft-azure-networks-storage-and-windows-image/ Mon, 10 Apr 2017 10:20:20 +0000
This topic is part of a series about how to deploy a Windows Server 2016 RDS farm in Microsoft Azure. In this topic, we will see how to deploy the Microsoft Azure network resources and the storage account, and how to prepare a Windows image.

Github

I have published the complete JSON template on my GitHub. You can copy it and modify it as you wish.

JSON template explanation

The JSON template consists of parameters, variables and resources. Parameters and variables are easy to understand: a parameter is a value you provide at deployment time, while a variable is computed inside the template and referenced from the resource definitions. For reference, a parameter and a variable declaration might look like this (a minimal sketch; the default value is illustrative):
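"parameters": {
  "vNETName": {
    "type": "string",
    "defaultValue": "vNET-RDSFarm",
    "metadata": { "description": "Name of the virtual network" }
  }
},
"variables": {
  "ResourcesLocation": "[resourceGroup().location]"
}

Resources are a little more complicated. The resource below is a virtual network which takes its settings from parameters and variables. This JSON code creates a virtual network with four subnets (Internal, DMZ, Cluster and Gateway).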

{
      "apiVersion": "[variables('API-Version')]",
      "location": "[variables('ResourcesLocation')]",
      "name": "[parameters('vNETName')]",
      "properties": {
        "addressSpace": {
          "addressPrefixes": [
            "[parameters('vNETPrefix')]"
          ]
        },
        "subnets": [
          {
            "name": "[parameters('vNETSubIntName')]",
            "properties": {
              "addressPrefix": "[parameters('vNETSubIntPrefix')]"
            }
          },
          {
            "name": "[parameters('vNETSubExtName')]",
            "properties": {
              "addressPrefix": "[parameters('vNETSubExtPrefix')]"
            }
          },
          {
            "name": "[parameters('vNETSubCluName')]",
            "properties": {
              "addressPrefix": "[parameters('vNETSubCluPrefix')]"
            }
          },
          {
            "name": "[Parameters('vNETSubGtwName')]",
            "properties": {
              "addressPrefix": "[Parameters('vNETSubGtwPrefix')]"
            }
          }
        ]
      },
      "tags": {
        "displayName": "Virtual Network"
      },
      "type": "Microsoft.Network/virtualNetworks"
    },

The following code block creates a public IP address for the Azure gateway.

{
      "apiVersion": "[variables('API-Version')]",
      "location": "[variables('ResourcesLocation')]",
      "name": "[parameters('S2SPIPName')]",
      "properties": {
        "publicIPAllocationMethod": "Dynamic"
      },
      "tags": {
        "displayName": "Public IP Address"
      },
      "type": "Microsoft.Network/publicIPAddresses"
    }

The following JSON code deploys the local network gateway. The S2SGtwOnPremPIP parameter specifies the public IP address of the on-premises gateway, and S2SLocalIPSubnet specifies the on-premises routed IP subnets.

{
      "apiVersion": "[variables('API-version')]",
      "location": "[variables('ResourcesLocation')]",
      "name": "[parameters('S2SGtwOnPremName')]",
      "properties": {
        "localNetworkAddressSpace": {
          "addressPrefixes": [
            "[parameters('S2SLocalIPSubnet')]"
          ]
        },
        "gatewayIpAddress": "[parameters('S2SGtwOnPremPIP')]"
      },
      "tags": {
        "displayName": "Local Gateway"
      },
      "type": "Microsoft.Network/localNetworkGateways"
    }

The following JSON code deploys the Microsoft Azure gateway, using the previously created public IP address. The Azure gateway is located in the gateway subnet.

{
      "apiVersion": "[variables('API-version')]",
      "dependsOn": [
        "[concat('Microsoft.Network/publicIPAddresses/', parameters('S2SPIPName'))]",
        "[concat('Microsoft.Network/virtualNetworks/', parameters('vNETName'))]"
      ],
      "location": "[Variables('Resourceslocation')]",
      "name": "[parameters('S2SGtwAzureName')]",
      "properties": {
        "enableBgp": false,
        "gatewayType": "Vpn",
        "ipConfigurations": [
          {
            "properties": {
              "privateIPAllocationMethod": "Dynamic",
              "publicIPAddress": {
                "id": "[resourceId('Microsoft.Network/publicIPAddresses',parameters('S2SPIPName'))]"
              },
              "subnet": {
                "id": "[variables('vNETSubGtwRef')]"
              }
            },
            "name": "vnetGatewayConfig"
          }
        ],
        "vpnType": "[parameters('S2SGtwVPNType')]"
      },
      "tags": {
        "displayName": "Azure Gateway"
      },
      "type": "Microsoft.Network/virtualNetworkGateways"
    }

To finish, the following code block creates a storage account. This storage account will be used for VM diagnostics logs.

{
      "name": "[parameters('StoAcctLogName')]",
      "type": "Microsoft.Storage/storageAccounts",
      "apiVersion": "2016-05-01",
      "tags": {
        "displayName": "Log Storage Account"
      },
      "sku": {
        "name": "[parameters('StoAcctLogType')]"
      },
      "kind": "Storage",
      "location": "[variables('ResourcesLocation')]"
    }

Import the template

To import the template, connect to Microsoft Azure and search for Template Deployment. Copy and paste the template. You should have something like this:

Then change the parameters as you wish and click on Purchase (don’t worry, it’s free :p).
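Before purchasing, you can also validate the template from PowerShell. A minimal sketch with the AzureRM module; the resource group name and file name are assumptions:

# Returns nothing when the template is valid, otherwise lists the validation errors
Test-AzureRmResourceGroupDeployment -ResourceGroupName "rdsfarm" `
    -TemplateFile "C:\RDSFarm\RDS-Networks.json"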

Once the template is deployed, you should have five resources as below: the virtual network, the public IP address, both gateways and the storage account are created.

You can review the virtual network configuration as the following screenshot:

The public IP is also created:

Create the VPN connection

Now I create the VPN connection between on-premises and Microsoft Azure. Select the on-prem (local network) gateway and click on Configuration. Check that the local gateway IP address is correct.

Then select Connections and create a new connection. Provide a name, select Site-to-site (IPsec), and specify the virtual network gateway and the local network gateway. To finish, provide a shared key.
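The same connection can also be created from PowerShell. A quick sketch with the AzureRM module; the names, location and shared key are placeholders:

$vng = Get-AzureRmVirtualNetworkGateway -Name "GTW-Azure" -ResourceGroupName "rdsfarm"
$lng = Get-AzureRmLocalNetworkGateway -Name "GTW-OnPrem" -ResourceGroupName "rdsfarm"

# Site-to-Site IPsec connection between the Azure gateway and the on-prem gateway
New-AzureRmVirtualNetworkGatewayConnection -Name "S2S-OnPrem" `
    -ResourceGroupName "rdsfarm" -Location "West Europe" `
    -VirtualNetworkGateway1 $vng -LocalNetworkGateway2 $lng `
    -ConnectionType IPsec -SharedKey "<Shared Key>"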

Now, you have to configure your local gateway. I have an Ubiquiti gateway and I configured it with the following command lines:

set vpn ipsec auto-firewall-nat-exclude disable
set vpn ipsec disable-uniqreqids
set vpn ipsec esp-group esp-azure compression disable
set vpn ipsec esp-group esp-azure lifetime 3600
set vpn ipsec esp-group esp-azure mode tunnel
set vpn ipsec esp-group esp-azure pfs disable
set vpn ipsec esp-group esp-azure proposal 1 encryption aes256
set vpn ipsec esp-group esp-azure proposal 1 hash sha1
set vpn ipsec ike-group ike-azure ikev2-reauth no
set vpn ipsec ike-group ike-azure key-exchange ikev2
set vpn ipsec ike-group ike-azure lifetime 28800
set vpn ipsec ike-group ike-azure proposal 1 dh-group 2
set vpn ipsec ike-group ike-azure proposal 1 encryption aes256
set vpn ipsec ike-group ike-azure proposal 1 hash sha1
set vpn ipsec ipsec-interfaces interface pppoe0
set vpn ipsec nat-traversal enable
set vpn ipsec site-to-site peer <Azure Gateway Public IP> authentication mode pre-shared-secret
set vpn ipsec site-to-site peer <Azure Gateway Public IP> authentication pre-shared-secret <Shared Key>
set vpn ipsec site-to-site peer <Azure Gateway Public IP> connection-type initiate
set vpn ipsec site-to-site peer <Azure Gateway Public IP> default-esp-group esp-azure
set vpn ipsec site-to-site peer <Azure Gateway Public IP> ike-group ike-azure
set vpn ipsec site-to-site peer <Azure Gateway Public IP> ikev2-reauth inherit
set vpn ipsec site-to-site peer <Azure Gateway Public IP> local-address any
set vpn ipsec site-to-site peer <Azure Gateway Public IP> tunnel 100 allow-nat-networks disable
set vpn ipsec site-to-site peer <Azure Gateway Public IP> tunnel 100 allow-public-networks disable
set vpn ipsec site-to-site peer <Azure Gateway Public IP> tunnel 100 esp-group esp-azure
set vpn ipsec site-to-site peer <Azure Gateway Public IP> tunnel 100 local prefix 10.10.0.0/16
set vpn ipsec site-to-site peer <Azure Gateway Public IP> tunnel 100 protocol all
set vpn ipsec site-to-site peer <Azure Gateway Public IP> tunnel 100 remote prefix 10.11.0.0/16

Once the VPN is connected, you should have a Succeeded status as below:
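You can also check the tunnel from PowerShell: the ConnectionStatus property returns Connected once the tunnel is up (the connection name is an assumption):

(Get-AzureRmVirtualNetworkGatewayConnection -Name "S2S-OnPrem" -ResourceGroupName "rdsfarm").ConnectionStatus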

Create the Windows Server 2016 image

To create the Windows Server 2016 image, first I deploy a new Azure VM. I call it zTemplate.

Then I choose a VM size.

I choose to use managed disks and I connect the VM to the Internal subnet. I don't need a Network Security Group for this VM. I enable boot diagnostics and I choose the previously created storage account to store the logs.

Once the Azure VM is started, I customize the operating system and apply updates. Then I run Sysprep as below:
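For reference, the usual Sysprep command line to generalize the image and shut down the VM is:

& "$env:WINDIR\System32\Sysprep\sysprep.exe" /generalize /oobe /shutdown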

Once the VM is stopped, I click on Capture:

Then I specify an image name and the resource group. I also choose to automatically delete the VM after creating the image.
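The capture can also be done from PowerShell. A minimal sketch with the AzureRM module; the image name is an assumption, and note that, unlike the portal, this does not delete the source VM:

# Mark the stopped, sysprepped VM as generalized
Stop-AzureRmVM -ResourceGroupName "rdsfarm" -Name "zTemplate" -Force
Set-AzureRmVm -ResourceGroupName "rdsfarm" -Name "zTemplate" -Generalized

# Create a managed image from the generalized VM
$vm = Get-AzureRmVM -ResourceGroupName "rdsfarm" -Name "zTemplate"
$image = New-AzureRmImageConfig -Location $vm.Location -SourceVirtualMachineId $vm.Id
New-AzureRmImage -Image $image -ImageName "Img-W2016" -ResourceGroupName "rdsfarm"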

At the end of this topic, I have the following resources in the resource group:

Next topic

In the next topic, we will deploy all the Azure VMs for the Remote Desktop farm. The VMs will be deployed from the Windows image by using a JSON template.
