Tuesday 12 December 2017

Error invoking Powershell script on Azure web apps via Kudu API

My team is running a bunch of Powershell scripts (like this) on web apps as part of our deployment process, for example to enable Solr (Sitecore configs). This was all working fine until this morning, when we started getting the following error:

Invoking Kudu Api: https://mysite.scm.azurewebsites.net/api/command
@{Output=; Error=File D:\home\site\wwwroot\App_Data\Scripts\myscript.ps1 cannot be
loaded because running scripts is disabled on this system. For more
information, see about_Execution_Policies at
http://go.microsoft.com/fwlink/?LinkID=135170.
    + CategoryInfo          : SecurityError: (:) [], ParentContainsErrorRecord
   Exception
    + FullyQualifiedErrorId : UnauthorizedAccess
; ExitCode=0}
Kudu Api Successfully invoked.

Following the link tells you all about Powershell execution policies, and running Get-ExecutionPolicy -List via the Kudu Powershell console showed that on our particular apps (and slots) these were set to:

                                  Scope                         ExecutionPolicy
                                  -----                         ---------------
                          MachinePolicy                               Undefined
                             UserPolicy                               Undefined
                                Process                            RemoteSigned
                            CurrentUser                               Undefined
                           LocalMachine                               Undefined


whereas on the rest of our servers the LocalMachine value was set to RemoteSigned. There doesn't appear to be any way to change this via the command line, or in the app properties. What's more, I could run the desired Powershell script through the Kudu command line just fine, but the Kudu API wasn't working (so they must be different under the covers?).

After a lot of hunting around and bashing my head against the wall, I realised that somehow the web apps' platform setting had been switched to 32-bit mode, rather than their usual 64-bit. I have no idea why this affects the LocalMachine execution policy (or any of them for that matter), but switching back to 64-bit seems to be the fix for this particular issue.
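
If you hit the same thing, you can check and flip the platform setting back from Powershell rather than the portal. This is a minimal sketch, assuming the AzureRM module and placeholder resource/app names:

# Check the current platform setting (placeholder resource group / app names)
$app = Get-AzureRmWebApp -ResourceGroupName "my-rg" -Name "mysite"
$app.SiteConfig.Use32BitWorkerProcess

# Switch the app (and any slots) back to 64-bit
Set-AzureRmWebApp -ResourceGroupName "my-rg" -Name "mysite" -Use32BitWorkerProcess $false
Set-AzureRmWebAppSlot -ResourceGroupName "my-rg" -Name "mysite" -Slot "staging" -Use32BitWorkerProcess $false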

Hopefully this helps someone out there!

Tuesday 5 December 2017

Enhancing Sitecore ARM templates to be production-ready

The Sitecore ARM templates are great for most environments, and better yet they are extensible, however in a production environment you're probably going to need a few more things in your environment; things like staging slots, your own hostname(s), and maybe some storage / networking / traffic manager.

As I mentioned, Sitecore has built their templates in a fairly modular fashion: you have your base deployment, which takes all parameters and runs a sub-deployment for '-infrastructure' (your Azure resources), '-application' (Sitecore package installation), and a sub-deployment for each Sitecore module.
This gives us the option of just running a separate template after running the base Sitecore templates, or we can integrate our additional deployment as another sub-deployment in the base template in the same way.  Obviously we want to keep our enhancements as separate as possible, so that an update to Sitecore or their templates doesn't mean a massive and painful re-write.

I've uploaded an ARM template as a gist, and included the extra Powershell code below. They do the following:
  • Change your CM + CD hosting plan to Standard
  • Add staging slots to CM + CD
  • Copy files from your CM + CD to the staging slot
  • Add custom hostnames
  • Add an SSL certificate and enable HTTPS for all your hostnames
  • Add connection strings to your Rep / Prc web apps so you can swap them
  • Add custom firewall rules
See the gist of the ARM template, which you can run on its own or as a sub-deployment. For the latter, place it in the "nested" folder alongside the infrastructure.json file, then include it in your primary ARM template by duplicating the "-infrastructure" (deployment) resource, renaming it to "-infrastructure-prod", and pointing the reference at this new json file.

You can upload your SSL cert through the ARM template by providing the binary as in the provided gist; alternatively you can upload the certificate to a key vault (which also creates a secret in the vault). If you choose to use the vault, you must give the ARM deployment service access to your key vault by running the following Powershell command (the service principal ID is a fixed Azure Guid, so it's the same for everyone):
Set-AzureRmKeyVaultAccessPolicy -VaultName your-keyvault-name -ServicePrincipalName abfa0a7c-a6b6-4736-8310-5855508787cd -PermissionsToSecrets get
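
For reference, getting the PFX into the vault in the first place can also be done from Powershell. A rough sketch with placeholder vault/cert names and paths:

# Import the PFX into Key Vault (this creates the certificate and its backing secret)
$pfxPassword = Read-Host -Prompt "PFX password" -AsSecureString
Import-AzureRmKeyVaultCertificate -VaultName "your-keyvault-name" -Name "your-ssl-cert" `
    -FilePath "C:\certs\yoursite.pfx" -Password $pfxPassword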

You can then substitute the certificate section in the gist with the following, which uses the key vault ID and secret name:
{
  "apiVersion": "[variables('certificateApiVersion')]",
  "location": "[parameters('location')]",
  "name": "[variables('sslCertificateNameTidy')]",
  "type": "Microsoft.Web/certificates",
  "properties": {
    "keyVaultId": "[parameters('keyVaultId')]",
    "keyVaultSecretName": "[parameters('sslKeyVaultCertificateName')]"
  } 
}

In your Powershell script, after the part which does the deployment, add the following to copy the app settings and files to your staging slots:

function copyAppSettings($rg, $webApp, $slot) {
    # Read the app settings from the web app (production slot)
    $props = (Invoke-AzureRmResourceAction -ResourceGroupName $rg `
        -ResourceType Microsoft.Web/sites/Config -Name $webApp/appsettings `
        -Action list -ApiVersion 2015-08-01 -Force).Properties

    # Convert the returned object's properties into a hashtable
    $hash = @{}
    $props | Get-Member -MemberType NoteProperty | % { $hash[$_.Name] = $props.($_.Name) }

    # Apply the settings to the target slot
    Set-AzureRMWebAppSlot -ResourceGroupName $rg -Name $webApp -Slot $slot -AppSettings $hash
}

# Copy app settings to staging slot
Write-Host "Copying app settings to staging slots";
copyAppSettings $ResourceGroupName "$($DeploymentId)-cd" "cd-staging"
copyAppSettings $ResourceGroupName "$($DeploymentId)-cm" "cm-staging"
Write-Host "Done copying app settings";
  
# Copy files to staging slot
Write-Host "Copying files to staging CD";
..\sync_slots -SubscriptionId $SubscriptionId -ResourceGroupName $ResourceGroupName -WebAppName "$($DeploymentId)-cd" -SlotName "cd-staging"
Write-Host "Copying files to staging CM";
..\sync_slots -SubscriptionId $SubscriptionId -ResourceGroupName $ResourceGroupName -WebAppName "$($DeploymentId)-cm" -SlotName "cm-staging"
Write-Host "Done copying files to staging";

Wednesday 22 November 2017

Copy Azure web app files to slot

The majority of the out-of-the-box Sitecore ARM template is great for anything from a development to testing environment, but in production you're very likely to be using slots to test in staging and have zero-downtime releases (if you're not using slots, I'd highly recommend it).  I'll be doing a later post on some updates we can make to the Sitecore ARM templates to actually add a staging slot (amongst other enhancements), but once you have your slot you still need the base Sitecore files in there when you kick off your deployment (using CI/CD of course).

Sitecore by default is only installed to the production slot (ie. the web app itself), and installing it again in the slot will mean either pointing the installation at a second DB (which you could do), restoring the dacpacs to the live DB a second time (which you don't want to do), or creating a custom Sitecore package without the dacpac files (painful when Sitecore upgrades or changes need to be made).

I was tempted to create some DB-less Sitecore packages, but I knew there had to be a better way.  There appear to be some options if you upgrade to Premium, but for those of us on Standard I figured there should be a way to copy the files from slot to slot without downloading them locally and uploading them again via FTP.  After a lot of hunting and a promising upcoming solution from Microsoft, I stumbled across this azure-clone-webapps repo in Github.  This was almost exactly what I was after (massive thanks to the author); I just needed to convert it to Powershell so that I could run it as part of my ARM template deployment script.

I've included my final Powershell script here and uploaded it as a Gist; feel free to use it as-is or tweak it to suit your needs.  Since our client's Sitecore host is Rackspace, they've got NewRelic installed, and I've included a skip statement to ignore the newrelic folder inside the website.  Other than that it will copy all the site files from your given web app to the given slot.  Enjoy!

Gist of SyncFilesToSlot.ps1
<#
 .SYNOPSIS
    Copies all of a web app's files to a given slot

 .DESCRIPTION
    Copies all of a web app's files to a given slot. Skips "newrelic" folder as the files are in use.
    
 .PARAMETER SubscriptionId
    The subscription id where the resources reside.

 .PARAMETER ResourceGroupName
    The resource group where the resources reside.

 .PARAMETER WebAppName
    Name of the web app containing files for the slot.
    
 .PARAMETER SlotName
    Name of the slot to fill with files from web app.
#>

param(
    [string]
    $SubscriptionId,

    [Parameter(Mandatory = $True)]
    [string]
    $ResourceGroupName,

    [Parameter(Mandatory = $True)]
    [string]
    $WebAppName,

    [Parameter(Mandatory = $True)]
    [string]
    $SlotName
)

[System.Reflection.Assembly]::LoadWithPartialName("Microsoft.Web.Deployment")

function Get-AzureRmWebAppPublishingCredentials($ResourceGroupName, $WebAppName, $SlotName = $null){
  if ([string]::IsNullOrWhiteSpace($SlotName)) {
    $resourceType = "Microsoft.Web/sites/config";
    $resourceName = "$WebAppName/publishingcredentials";
  }
  else {
    $resourceType = "Microsoft.Web/sites/slots/config";
    $resourceName = "$WebAppName/$SlotName/publishingcredentials";
  }
  $publishingCredentials = Invoke-AzureRmResourceAction -ResourceGroupName $ResourceGroupName -ResourceType $resourceType -ResourceName $resourceName -Action list -ApiVersion 2016-08-01 -Force;
  return $publishingCredentials;
}

function GetScmUrl($ResourceGroupName, $WebAppName, $SlotName) {
    # revert to this when MS fixes https://social.msdn.microsoft.com/Forums/expression/en-US/938e59f6-6a83-4640-a423-26fe91d66cf3/scm-uri-for-web-app-deployment-slots
    #$scmUrl = $publishingCredentials.properties.scmUri
    #$scmUrlNoCreds = $scmUrl.Replace($scmUrl.Substring($scmUrl.IndexOf('$'), ($scmUrl.IndexOf('@')-$scmUrl.IndexOf('$')+1)), '') # ugh this version of substring sucks sooo much :'(
    #$apiUrl = "$scmUrl/api/command"
    # revert below
    if($SlotName) {
        $slot = Get-AzureRmWebAppSlot -ResourceGroupname $ResourceGroupName -Name $WebAppName -Slot $SlotName;
        $scmUrl = $slot.EnabledHostNames | where { $_.Contains('.scm.') };
    } else {
        $scmUrl = "$WebAppName.scm.azurewebsites.net";
    }
    # revert above
    return "https://$scmUrl";
}

function SyncWebApps($srcUrl, $srcCredentials, $destUrl, $destCredentials) {
    $syncOptions = New-Object Microsoft.Web.Deployment.DeploymentSyncOptions;
    #$syncOptions.DoNotDelete = $true;
    $appOfflineRule = $null;
    $availableRules = [Microsoft.Web.Deployment.DeploymentSyncOptions]::GetAvailableRules();
    if (!$availableRules.TryGetValue('AppOffline', [ref]$appOfflineRule)) {
        Write-Host "Failed to find AppOffline Rule";
    } else {
        $syncOptions.Rules.Add($appOfflineRule);
        Write-Host "Enabled AppOffline Rule";
    }
    
    $skipNewRelic = New-Object Microsoft.Web.Deployment.DeploymentSkipDirective -ArgumentList @("skipNewRelic", 'objectName=dirPath,absolutePath=.*\\newrelic', $true);

    $sourceBaseOptions = New-Object Microsoft.Web.Deployment.DeploymentBaseOptions;
    $sourceBaseOptions.ComputerName = $srcUrl + "/msdeploy.axd";
    $sourceBaseOptions.UserName = $srcCredentials.properties.PublishingUserName;
    $sourceBaseOptions.Password = $srcCredentials.properties.PublishingPassword;
    $sourceBaseOptions.AuthenticationType = "basic";
    $sourceBaseOptions.SkipDirectives.Add($skipNewRelic);

    $destBaseOptions = New-Object Microsoft.Web.Deployment.DeploymentBaseOptions;
    $destBaseOptions.ComputerName = $destUrl + "/msdeploy.axd";
    $destBaseOptions.UserName = $destCredentials.properties.PublishingUserName;
    $destBaseOptions.Password = $destCredentials.properties.PublishingPassword;
    $destBaseOptions.AuthenticationType = "basic";
    $destBaseOptions.SkipDirectives.Add($skipNewRelic);

    $destProviderOptions = New-Object Microsoft.Web.Deployment.DeploymentProviderOptions -ArgumentList @("contentPath");
    $destProviderOptions.Path = "/site";
    $sourceObj = [Microsoft.Web.Deployment.DeploymentManager]::CreateObject("contentPath", "/site", $sourceBaseOptions);
    $sourceObj.SyncTo($destProviderOptions, $destBaseOptions, $syncOptions); 
}

if($SubscriptionId) {
    try {
        Set-AzureRmContext -SubscriptionID $SubscriptionId;
    } catch {
        Login-AzureRmAccount;
        Set-AzureRmContext -SubscriptionID $SubscriptionId;
    }
}

$srcCreds = Get-AzureRmWebAppPublishingCredentials $ResourceGroupName $WebAppName;
$srcUrl = GetScmUrl $ResourceGroupName $WebAppName;
$destCreds = Get-AzureRmWebAppPublishingCredentials $ResourceGroupName $WebAppName $SlotName;
$destUrl = GetScmUrl $ResourceGroupName $WebAppName $SlotName;
SyncWebApps $srcUrl $srcCreds $destUrl $destCreds;
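
For reference, a typical invocation from a deployment script looks something like this (placeholder values, assuming you've saved the gist as SyncFilesToSlot.ps1):

.\SyncFilesToSlot.ps1 -SubscriptionId "00000000-0000-0000-0000-000000000000" `
    -ResourceGroupName "my-rg" -WebAppName "mysite-cd" -SlotName "cd-staging"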

Friday 17 November 2017

Microsoft Tech Summit Sydney (Day 2)


Day 2 was much the same as day 1, with similar quality speakers and content (but a shorter day).  Overall there was a great amount of material, and I'm very glad I made it along.  I'm looking forward to checking out and making use of some of the preview tooling and features that'll be released in the next few months.

DevOps best practices for Azure and VSTS

Simon Lamb started off by stating that we shouldn't need dev-ops teams, that everyone should be dev-ops, and that Microsoft believes in services in any language, any platform, for all developers.
Microsoft defines dev-ops as the "union of people, process and products to enable continuous delivery of value to our end users", highlighting that it's measurable and deliverable.  Great reasons to implement dev-ops: your competition is already doing it, it increases velocity, reduces downtime, and reduces human error through automation.  The options provided by MS are TFS (on-prem, upgrade yourself) or VSTS (cloud, automatically upgraded).  VSTS also has release documentation covering the features which have been added and updated, and you can switch 'preview' features on or off in your profile.

The demo was a new MVC project (with tests) pushed to a git repo in VSTS. The 'continuous delivery tools for Visual Studio' extension creates a web app, build definition and release definition.  Simon went through the Release templates and tasks (as well as failing the performance test if response time was 5sec+), and swapping slots.  He ran the build, viewed build logs, highlighted that you can link the build/release back to code changes, view test results, and see the release that was executed; you can also send a release summary email from the release.

You can run a hosted build/release agent, which is handled by MS, runs in a clean environment every time, and lets you see the state (variables etc.) after each build.  You can also run a private agent, which is on your own build machine.  Deployment groups, which are a new addition, provide you with registration scripts (available for Windows or Linux) which will auto-download and install on your machine, and run phases in parallel on all machines in the group.

Simon then demonstrated how easy it was to also work with Java and VSTS, through either Eclipse or IntelliJ.  He ran through the usual process of viewing a work item and committing to a feature branch (a policy was implemented which meant no committing to master); he went into VSTS to create a PR which he then approved to run the build and release (which was set to auto-run when master successfully built). He highlighted the policy which meant that a commit must correspond to a work item, must have a comment, and can't break the build.

The final demo was a Node.js project using VS Code with the Azure/VSTS extension.  You can create a web app straight from VS Code. The continuous deployment Azure blade on the web app creates your build and release definitions in VSTS.

Drive Azure governance with Policy and Cost Management

This was another session by Alistair Speirs, who began by acknowledging that in the past costs were very complicated and painful to manage (bills, 3rd party tooling, APIs).  These days Azure has integrated Cloudyn, which can be used over more than 1 subscription and makes life easier for those who have to manage lots of projects (and heaps of resources).  It's not just for Azure, but "all 3 cloud providers" (the other 2 being AWS and Google), and you can have a different policy per platform if you want. Cloudyn also uses the RBAC model that Azure employs.  Typically it's been extremely difficult to split costs for shared resources or costs like ExpressRoute, security, ingress / egress, but with Cloudyn it's very easy and customisable. Different teams can also have different dashboards.  On-prem we typically over-provision to leave room for growth, but in the cloud it's all about cost-saving by not over-provisioning. Cloudyn has tools to help you optimise your resources, as well as reports, which you can schedule to run ahead of your budget cycle.

He demonstrated the Cloudyn interface, which did look very simple to use: you open the dashboard from Cost Management in the portal. The cost by service, cost by region, and ability to create monthly reports were all demo'd.

Cost management is typically "someone else's problem", but Azure wants to prevent this and facilitate things by bringing visibility, which brings accountability (and allows you to set budgets and forecasts). You can make changes in the Portal and see resulting cost changes practically straight away. You can also set alerts, and now get notified of anomalies. Tagging is also a great way to aggregate your costs, so don't forget to tag.

Azure also now has Reserved Instances where you can pre-purchase 1 or 3 years worth of compute for a massive discount.  

Alistair concluded with the promise that the policy and cost tooling will come together, and Azure Advisor will improve.

Azure Infrastructure and application monitoring

John Pritchard and Rebecca Lyons both presented this session, which I think was one of my favourites.

John kicked things off by outlining that monitoring means different things to different people; that monitoring in the cloud means getting an overall picture of all your resources, which can be spun up/down at will, and that some resources might not even exist by the time you're looking at the logs. You need: visibility (activity / metrics), insight (alerting / mapping) , optimisation (App Insights).
 
John demonstrated: service health, activity log (across subscription, summary, save queries), metrics (network in / out for scale sets), metrics preview (storage account API calls which had failed), edit alerts, alerts with actions (eg. deployment failed), log alert support request (a new feature), create real time metric + compound metric (CPU+network) into activity group (eg. SMS/email/service now/ITSM/custom).  He also went into Log Analytics to demo custom searches, show multiple result sets with a line graph (comparing individual computers against logical group of computers, on-prem vs Azure).

Rebecca took the App Insights portion of the session, highlighting that bugs are hard to find and that's where Azure can help.  She took us through an example site (Fabrikam Fibre), and demonstrated the Application Map in App Insights: this shows what's talking to what, dependencies, overall health, availability, and recommendations, out of the box.  There's also a new and improved performance dashboard, and Azure shows you stats 5 mins before and after an error to help you reproduce any issue; you can then create a new work item with pre-populated details directly from the error in the Portal.  She demo'd the Users usage feature, which lets you group your users by page, geography, etc. and provides a very nice chart.  User Flows is another fantastic feature which shows the path of your users through the site, so you can see what's being used the most and focus your attention where it's needed.

New Azure platform capabilities

Katy Olmstead ran us through some of the features of the Azure portal, but I mainly focused on the newer functionality.  Don't forget to go to https://preview.portal.azure.com/ to have all the preview functionality enabled!  

One of the first things to note is that you can use Azure CLI or Powershell directly in the browser by clicking the icon in the top right (between alerts and settings).  Unfortunately for us here in Australia it looks like this requires setting up a storage account in Southeast Asia, which is more than a little annoying.
There are also a bunch of keyboard shortcuts for power-users, which you can view by clicking the help icon (question mark in the top right), then the 'keyboard shortcuts' link.  In the same section you can show your new users a guided tour.
Don't forget the search bar and the 'All services' list.

Up next was a demo of creating an app, and ARM template deployment.  Azure Monitor lets you view all your monitoring, set up real time alerting, and diagnose issues.

Use resource groups as your primary method of grouping resources (based on the lifecycle of the resources), but don't forget tags (key-value pairs), which can also be used for searching and billing.  The columns in every view are also all customisable.

Again, as with most other sessions, it was highlighted that Azure has the most comprehensive resiliency and the best SLAs, including 99.9% for single-instance VMs.

Advisor is a great tool (which apparently just went public) for optimisation, offering personalised recommendations using machine learning, for cost or performance. It learns as you use Azure.  You can see recommendations and act on things right now, or snooze (eg. for a dev environment). You can also download it all as CSV or PDF to go over as a team and prioritise for a later date.

Service Health is a more reactive tool for diagnosing symptoms ("is it a Microsoft issue, or is it me?"), understanding the impact, and getting notified.  It can be integrated with web hooks, and provides a tracking id (to let you know that MS is aware of the issue).  You can also always tweet @azuresupport who are apparently a very large and active team.  
Planned maintenance lets you know about anything within 30 days that might impact you, and will shortly give you the ability to control the time window and pick the time of the maintenance of your resources (so you can schedule any down-time to be out of hours).

The Resource Health blade on most resources shows status, last changes, solutions to common problems (recommended steps), and allows you to troubleshoot issues; you can also report if you think the status listed is incorrect.

There are 4 levels of support: developer (low severity only), Azure Standard (for production, 24/7), Azure ProDirect (shorter responses), Microsoft Premier (enterprise-wide proactive support).

Thursday 16 November 2017

Microsoft Tech Summit Sydney (Day 1)

This week I was fortunate enough to make it to the Sydney Microsoft Tech summit, over Thursday 16th and Friday 17th of November.  There were some great speakers and plenty of excellent material, as well as some fun partner booths, not to mention the Vive, Hololens, and Xbox stands which were always in use.  I thought I'd share some notes and pictures from some of the sessions I attended (it might be a bit disjointed, it's all from a bunch of bullet points).


Implement a Secure and Well-Managed Azure Infrastructure

This session was a high-level introduction into the world of Azure security.  Scott Woodgate started off by saying that as soon as you put even 1 VM onto the public internet you should be thinking about security (it'll be hit 100k times in the first month), and that security is a joint responsibility between Microsoft and the customer.  Microsoft manages things like the physical assets, data-center operations, and cloud infrastructure; the customer should focus on their actual VMs, applications, and data.  Azure has security built into it, but there are plenty of 3rd party options; obviously Microsoft is pushing their option as the better solution.

He then ran through Security Center, which focuses on visibility ("what have I got?"), identification & mitigation ("what do I fix?") and detect & respond.  Security Center gives you ranked issues in order of severity so you can see what you need to fix right now more clearly.  Microsoft has a giant list of known bad actors, their region, and known attack paths. The Investigation Path was an amazing feature of Security Center which shows you, if you've been attacked and breached, the way the hackers managed to access and traverse your network, so you can secure and fix every part.  If you upgrade to the paid Security Center you can also enable JIT management ports, which allow you to only open your management ports (eg. RDP, SSH) on-demand, and only with administrator approval.

The next topic was backup, and as you may know Azure backs everything up to 3 places in the same site - Microsoft expects hardware to fail and this is built in.  Even deleted backups are retained for 14 days in case you end up deleting something by accident.  As the first thing a hacker might do when they pwn your network is delete your backups, Scott demonstrated how Microsoft prevents backups from being deleted from a pwned machine: you need a PIN (and can also set up MFA) to run the delete backup command.  Backing up is easy, can be scheduled, and is 'hot' so it can be restored quickly (using backup vaults) as opposed to some other services.

Scott was adamant that security is a CEO-level issue, even though it's often overlooked.  The challenge with any network is understanding what went wrong, especially when the knowledge about the initial architecture and setup may be gone (when the employee(s) who built it left the company).

Azure Log Analytics was the next topic: this covers everything from one VM, to entire systems, to a code line item (ie. application performance monitoring); it's all stored in the same place, and under-pins everything in Azure.  It's highly-scalable, low latency, has text search and relational queries, and you can query it in a T-SQL-like syntax (easy to learn) as well as build charts, and use machine learning across it.

The Service Map looked like an amazing feature, showing everything in one place: connections / services / ports; you can see incidents (plus related/affected services) in real time. It apparently uses a kernel-level driver to analyse packets. You can also view failed requests on app or VM, can dig into (for example, 500) error codes, and for each issue you can create a work item in VSTS. You can also drill down into which areas of your site your users frequent more often.

Keynote: Microsoft Azure: Cloud for All

Next up was the keynote, where speaker Julia White put a big emphasis on productivity, hybrid, intelligence, and security.  She mentioned that the cloud brought challenges, but that Microsoft was there to be both shield and partner; they believe in open source, and that the cloud must be available for all.  There's lots going on, and this can be overwhelming; Microsoft/Azure wants to help everyone with this challenge, help in staying secure, and help everyone be as productive as possible.

Productivity
There are lots of interconnected tools to help with this: Azure itself, Visual Studio (/Code), VSTS; everything from tooling to management to security (and dev ops).  Azure has 100+ services, including some of the newer ones like functions, logic apps, and Kubernetes.  She compared developers to artists and their IDE to their paintbrush, so Visual Studio (and Code) are top-shelf offerings with lots of integration to 3rd party apps and dev ops; they want to make life as easy as possible for us.
This section's business example was UPS: they use Xamarin to be cross-platform with a single codebase, and bot as service in Azure (plus app insights which can scale).
Julia re-iterated Microsoft's commitment to open source, last year being Github's biggest open source contributor.  She also demo'd a Powershell browser module which you can use while navigating the Portal, which actually looks very handy.
The demo for productivity was the biggest M-series VM - an absolute beast - with 128 virtual cores, and allowing for nested virtualisation.
With Azure you can be productive by managing multiple computers, in cloud and on-prem; you can also use log analytics to create scripts across multiple machines (in this example, correlate CPU spikes).

Hybrid
Microsoft pushed that migrating to Azure is a lot more cost-effective than alternate cloud providers, but also that hybrid isn't about migrating to the cloud, it's about one consistent experience. Today, it's all about the intelligent cloud and intelligent edge, bringing machine learning etc. from the cloud to on-prem (or close enough). It was also highlighted that SQL migration back and forth from on-prem to the cloud is easy, and reusing existing licences from your on-prem environment can save you 50%, which is a good incentive.
With Azure Stack you can run the cloud experience (same look because it's the same code!) in your data centre, keeping emphasis on cloud-first in a disconnected environment (eg. an oil rig, cruise ship fleet management). It could also be used due to certain industry regulations, or for a modern front end to a mainframe. EY uses it in Russia for legal regulations. The existing tools make it easy to deploy to Azure or Azure Stack.
DocuSign was the business example: "trust is something you earn in a lifetime and lose in an instant" was a quote that resonated with me. They needed the ability to lift and move to the cloud with no / minimal change, and the ability to scale, and Azure provided this.

Intelligence 
AI should be available for - and usable by - everyone, developers and organisations alike.  We need access to good data, and good APIs; the business needs to collect the data and Azure provides the APIs.
ASOS was the example business in this case: a digital-only, always-on company. They have 85k products, with 4k added per week. They use microservices and machine learning, for example to show relevant products and create a better experience. They use CosmosDB for low latency and better elasticity.
The (fantastic) demo for intelligence was an insurance bot: it showcased language detection, suggestions to the customer, voice & camera recognition used for verification, looking up your account history to know about family and make suggestions, car recognition (to show that it identified the car picture you uploaded was not the model you stated it was), and sentiment analysis (it knows you aren't happy and connects you to a live person to continue the sale). On the backend you can see everything in Dynamics 365, including the user flow and recommended actions for a live customer to take to make the sale (in this case offer a discount).

Trust
Microsoft highlights that Azure has more certifications than any other cloud vendor, and is also working with many governments (including here in Australia). It has datacenters in 42 regions, which can be good for controlling where your data is being handled, and to keep things close to where your employees are located. Australia now has 4 regions (2 new ones coming online in Canberra)!
They do provide data centre tours, and a quick video showed that the locations are carbon neutral and have tonnes of security.
Julia pointed out that these days you're not just defending against hackers but nation-state attacks. All Azure's cloud services are built for security; they invested $1 billion into security in the last year, and that's only going up.  Security Center gives you recommendations, because it's hard to keep up with the latest attacks, and Microsoft is there to be first responder.  Security Center shows you how secure you are, and how to respond when you're attacked; it has the investigation graph (covered above) and provides playbooks for recommended actions.
Azure has great cost management for visibility and accountability. You can split on resource group and tag. You can also get "reserved" VM instances where you pre-purchase 1 or 3 years of compute to save overall.
The final business example was Cabcharge: this is obviously a very disruptive area. They evaluated 16 vendors and ended up with Azure. They wanted PaaS, to keep their .NET skill set, and to have something future-proof. They brought all development in-house with (now) 7 agile teams who work to an MVP and improve each sprint; they are language-agnostic and their only requirement is TDD with code coverage. Their struggle is to digitise non-digital tasks like hailing a cab, and not needing a bank account to purchase a ride using the app. The main point they said to take away was to think about what makes you different and focus on your strength.
Julia's final point about trust was that 90% of Fortune 500 companies are on Azure!

Migrating Infrastructure to Azure - VMs, Network + AD

This session was presented by John Pritchard, who started by showing the IaaS to SaaS chart that hopefully you've seen before, stating that generally an organisation first migrates to IaaS because it's easier, but it's not necessary.  He re-iterated what I'd heard in a previous session: that Australia Central 1+2 (in Canberra) will be coming in the first half of next year; this is connected to the ICON high speed government network, and offers secure services at SCEC zone 4 protection ('protected' to 'secret' level). This will only be for Azure, not Office or Dynamics yet. Though it's located in Canberra, it's not a government data centre, but will be used (mainly to start with) by federal and state government, partners and suppliers.

Identity, management and security, platform, and development each have a corresponding cloud-based alternative, and Microsoft is trying to facilitate the transition by utilising your existing knowledge base.  Azure has everything: compute, storage, networking; now security and management / monitoring.  Virtual machines are similar to what you're used to on-prem, networking is the same, but scale sets allow you to grow easily.  There are a tonne of different VM types: from general purpose, to burst, to nested virtualisation etc.  There are 4 levels of availability: single (99.9% SLA), availability set (99.95%), availability zone (new, 99.99%) and region pairs.  The different storage options were then outlined, and file sync (a new service) was mentioned - this makes keeping files in sync with the cloud even easier.  The different connectivity options were then outlined (I won't go into detail here).

The first demo showcased how easy it was to spin up a VM: create network & subnet to put it in, create VM (reuse license for discount on Windows server; you can auto shut down with notification), then add various storage disks.  Storage is locally redundant 3x behind the scenes, but can be made up to geo-redundant; premium disks are SSD for high IOPS; standard are HDD for general-purpose; managed are how Azure makes life easier, and can be premium or standard. 
The second demo was VNet to VNet communication, both via VPN gateway and peering. Peering is much quicker to set up and lower latency, and will soon be cross region. Azure can show you a VNet diagram, and you can use the network watcher for topology, flow control, packet capture (formerly netmon), and a connectivity check.

Finally John quickly went over site recovery, which usually fails over to the paired region. It can be run on a live production environment without interruption, and doesn't have anything running in the secondary region until failover (saving you money). You can manually fail over and test failover. Behind the scenes Azure sets up a recovery plan.

Simplify hybrid cloud protection with Azure Security Center

This was the second session from Scott Woodgate, and a deeper dive into the security aspects of Azure and walkthrough of Security Center. He reiterated that Security Center is a SaaS offering for VM, on premises (using an agent), and PaaS.

The first time you set up Security Center, you will see a welcome screen where the first step is to turn on data collection. Within SC, Azure uses machine learning based on logs sent from agents, and you can use an existing workspace or create a new one. You select how much data you want to collect; the default is minimal. Don't forget to turn it on for all subscriptions! Policies determine what info is relevant (eg. dev is less important, doesn't matter if some errors slip through), but policies extend beyond Security Center. For example, in a prod subscription disks must be encrypted. You can also save a policy and apply it to multiple subscriptions. Microsoft is investing lots of time and money into governance.  You can set up Security Center to give you emails and alerts, and there are 2 tiers: free (the basics) or standard (including threat protection and lots of other advanced options).

Within the compute section you see prioritised recommendations, and can select to fix one, some, or all (including on-prem VMs, which are represented with a purple icon). SC tracks OS vulnerabilities, system updates, and loads more, and provides heaps of info and suggestions, with more Linux info coming in the next few months.  You can use Qualys (& other 3rd party) integration to check and ensure that certain software is installed on your VMs.

Regarding networking, Scott emphasised the necessity to have NSGs on all subnets.  Security Center shows you which VMs are public facing, and again provides lots of actionable info.  For storage it's the same deal, and covers things like SQL and storage encryption.  SC also covers applications, and Microsoft recommends putting a WAF in front of your apps whenever possible.

In adaptive threat protection you can enable just-in-time access, to require approval for, and time-cap, your SSH or RDP access, and/or whitelist IPs. This is important as there are roughly 100k attacks in the first month that you enable a public facing VM in the cloud.  The activity log also shows access attempts so you have an audit log of who requested access and (tried to) access your machines. One of the more advanced tools is app whitelisting (formerly AppLocker), which is apparently under-utilised because it used to be difficult. Now Azure learns what apps you usually have running in 'audit' mode, then you can turn on 'enforce' mode to ensure no other apps are installed or processes are run. Azure will also recognise similar machines (eg. VMs in a scale set) and recommend you use the same settings for them.

Microsoft has a list of known bad actors updated in real time (SC is a cloud service so it's always up to date), and the demo walked through a few examples of attacks and how Azure links these to known botnets and hacker networks in the Threat Intelligence Map, along with providing a full PDF report on some of the botnets and how to deal with them. SC has built-in anomaly detection, and Wannacry was detected in somewhere around 1hr so that it could be acted upon by Azure customers. SC Fusion merges incidents into one attack profile, and lets you view the 'kill chain' so you can fix every aspect the hackers messed with. We then got to see an example of a real attack and analyse how the attacker got in (RDP brute force) and see the chain of destruction they left (further ingress into the network, querying user data from AD).

Regarding dealing with issues in SC, the suggested fixes (playbooks) are logic apps, so you can work off the ones provided or create your own.  You could, for example, update Service Now, or post to Slack when an attack happens.

Migrating your applications, data, and workloads to Microsoft Azure

Allistair Speirs started by outlining that managing migration has always been about managing people, processes, and tech, with generally around 80% of a company's budget going into maintenance.  As has been mentioned in many of the sessions, Azure has a tonne of VM options, and lots of 9s in their various SLAs.  Obviously the more you move to Azure the lower the operational costs; it's a scale.

For on-prem you can: leave it alone, or implement Azure Stack;  for cloud you can: lift-and-shift (IaaS), lift-and-modernise (containers/web apps), or just go straight to a SaaS option.
What's getting in the way? Costing, the fact that it's complicated, and any necessary downtime.  There are 3 main steps: discover (which things to move first, which later, which need upgrades / patches), migrate, and optimise/modernise, and Azure has a few migration tools to help with all 3.

For migration: Azure Migrate (free for all Azure customers, mentioned further below), Azure Database Migration Service (free for all customers), Azure Cost Management (free for Azure customers), Azure Hybrid Benefit (for Windows Server, SQL Server, save up to 40% BYOL), Azure Databox (large storage data migration).

There's now an Azure Migrate tool in preview which maps dependencies in your on-prem environment, recommends VM sizes, provides a compatibility report, cost analysis, and recommends migration services. There is no agent required!  Azure then lets you have a free POC for 30 days to ensure it will all work.  

Allistair gave a brief demo of migrating vSphere, and mentioned the process is basically the same for Hyper-V.  Migrating using Azure Site Recovery (ASR) is the easiest option for VMs. Migration is just failing over and not failing back, and as mentioned Azure provides the ability to run the failover environment for 30 days for free, so you can test and ensure it works.

Azure has integrated Cloudyn which you can use to monitor Azure / AWS / Google costs; it's free for Azure. This is great for isolating costs, but also splitting costs between departments (for example ExpressRoute which might be shared between all departments).

For lift-and-modernise, we're talking containers, CI/CD, and microservices / serverless, all of which Azure caters for. You can get 10x savings this way, but obviously it's more effort, and better suited to projects still under development.

For storage, you've got your blob options: hot, cold (more for reads), archive (hrs to retrieve), as well as Azure file share SMB 10c/GB (and now file sync).  Azure data box also provides a 100TB bulk migration option which is encrypted, and provides chain of evidence that you've moved data.

For database you've got your PaaS or IaaS, SQL or no-SQL.  For assessing the migration you can use the MS Data Migration Assistant (discovers and provides migration recommendations).  For migrating you've got the Database Migration Service.  Migrating to an Azure SQL instance is the best option if it's possible, as it's totally managed, scalable, and more economical.

Information Protection with AIP

This one was nice and different for me, as I didn't have any background on AIP or its capabilities.  Lou Mercuri covered information protection both from an Office standpoint and in Azure.

In Word, you can "classify" a document with a classification level (set up in Azure, below), using a dropdown in the ribbon.  Once this has been applied you can set a custom header, footer, or watermark depending on the classification level.  Classifying a document above your currently assigned level won't lock you out if you're the owner of the document.  Word will also pop up with a suggestion to classify the document based on key words as they are typed (without sending any info to the cloud). In Outlook, once you attach a classified document it suggests that you also classify the email.

From an admin perspective, you manage it all in Azure, creating classification levels, custom user groups and assigning classification levels to them (or all users).  There is one super admin, and you can create one admin per classification level who can decrypt documents of that level.

Lou then ran through a few scenarios in Sharepoint and Salesforce through Microsoft Cloud App Security, outlining how a person who hadn't been distrusted should be able to access their document.  He gave examples of how the user could be blocked from viewing or downloading a document depending on whether they were on a managed device, working from home, or accessing a document of a certain classification.  The user will also be notified that their "access to Salesforce is being monitored".  You can also notify an admin via email or text if a user has tried to access a blocked document, and you can monitor all user activity including these attempts.

Tuesday 14 November 2017

WFFM 8.2.x PaaS prc package missing parameters

I've been working on an Azure PaaS environment setup for a client, upgrading them to using Sitecore 8.2.5.  I'm definitely loving how easy it is to work with Sitecore's ARM templates, which we've both customised and extended for different environments.  Unfortunately we've also encountered a few setbacks along the way...

One of the big ones (fortunately easy to fix) was that the Sitecore package for WFFM for the processing server was invalid and giving us lots of errors when spinning up a new environment.  It only has the following:
<parameters>
  <parameter name="Application Path" tags="iisapp">
    <parameterEntry type="ProviderPath" scope="iisapp" match="WebSite" />
  </parameter>
  <parameter name="Reporting Admin Connection String" tags="Hidden, SQLConnectionString, NoStore">
    <parameterEntry type="ProviderPath" scope="dbfullsql" match="Content\\Website\\Data\\WFFM_Analytics\.sql$" />
  </parameter>
</parameters>

However this does not include the parameters necessary to restore the 2 included .dacpac files inside the package.  It should contain:
<parameters>
  <parameter name="Application Path" tags="iisapp">
    <parameterEntry type="ProviderPath" scope="iisapp" match="WebSite" />
  </parameter>
  <parameter name="Core Admin Connection String" tags="Hidden, SQLConnectionString, NoStore">
    <parameterEntry type="ProviderPath" scope="dbDacFx" match="core.dacpac" />
  </parameter>
  <parameter name="Master Admin Connection String" tags="Hidden, SQLConnectionString, NoStore">
    <parameterEntry type="ProviderPath" scope="dbDacFx" match="master.dacpac" />
  </parameter>
  <parameter name="Reporting Admin Connection String" tags="Hidden, SQLConnectionString, NoStore">
    <parameterEntry type="ProviderPath" scope="dbfullsql" match="Content\\Website\\Data\\WFFM_Analytics\.sql$" />
  </parameter>
</parameters>

You can simply unzip -> edit -> re-zip the files, and you should be good to go (see the sketch below if you'd like to script it).
I have confirmed this with Sitecore Support, who have provided the issue reference number 193329.
I have also raised a pull request in the quickstart templates for when the fixed packages are released by Sitecore.
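
If you'd rather script the unzip -> edit -> re-zip step than do it by hand, something like this works. It's a rough sketch with placeholder paths, and it assumes the parameters live in parameters.xml at the root of the .scwdp.zip:

$package = "C:\packages\wffm_prc.scwdp.zip"   # placeholder path to the WFFM prc WDP
$work = "C:\packages\wffm_prc_fix"

# Unzip, fix the parameters file, then re-zip over the original package
Expand-Archive -Path $package -DestinationPath $work
# ... edit $work\parameters.xml to add the Core / Master Admin Connection String parameters ...
Compress-Archive -Path "$work\*" -DestinationPath $package -Force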

Monday 13 November 2017

Azure breaking change for Sitecore ARM templates

Over the weekend it seems Microsoft changed one or more functions in the way ARM templates are processed, and we're now getting the following error when trying to spin up our Sitecore environments:

New-AzureRmResourceGroupDeployment : 5:45:07 PM - Error: Code=InvalidTemplate; Message=Deployment template validation failed: 'The template resource 'templateLinkBase' at line '27' and column '23' is not valid: The template language function 'replace' expects its first parameter to be of type 'String'. The provided value is of type 'Uri'. Please see https://aka.ms/arm-template-expressions for usage details.. Please see https://aka.ms/arm-template-expressions for usage details.'.

The simplest workaround is to temporarily include the templateLinkBase parameter in your parameters file, containing the URL to your ARM template file:

{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "templateLinkBase": {
      "value": "https://<yoursite>/sitecore/templates/azuredeploy.json"
    },
    ... other parameters
  }
}
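
For context, this is roughly the deployment call that trips over the change (a hedged example with placeholder names and paths):

New-AzureRmResourceGroupDeployment -ResourceGroupName "my-sitecore-rg" `
    -TemplateUri "https://<yoursite>/sitecore/templates/azuredeploy.json" `
    -TemplateParameterFile "C:\deploy\azuredeploy.parameters.json"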

It seems a few others are having the same issue and have raised it with Microsoft.  Is anyone else out there experiencing this? I thought it would have been all over the forums but I haven't seen it there yet.

Update: Microsoft has now responded and confirmed the issue (with a bit of an ugly workaround).  Hopefully they'll fix the issue soon!

Thursday 9 November 2017

Azure bug in SCM Uri for slots

I thought I'd raise some more attention (mostly out of pure frustration) to a bug in the Azure API that I came across today.

Rackspace, as one of our client's hosting providers, uses a guid as the Sitecore deployment ID, and so it is prepended to all the resource names.  This creates quite long app names, and combining that with a slot name (eg. 'cm-staging') means that Azure truncates the app name to 40 characters before appending the slot name, dropping some characters from the middle of the combined hostname.
eg.
app name: 3dd1a584-abc6-4019-a917-633cb1dbfb40-prod-cm
slot name: cm-staging
becomes: 3dd1a584-abc6-4019-a917-633cb1dbfb40-pro-cm-staging.scm.azurewebsites.net

This unfortunately means that one can't simply use 'webappname-slotname'.azurewebsites.net as the URL, and it requires an API call to find out what Azure has generated.

We are using the Kudu API to run a script on our web apps (and slots) which means making REST calls to <webapp/slot>.scm.azurewebsites.net and this Kudu/scm URL is nicely provided in the publishing profile which can be retrieved with a simple Azure REST call.
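
In case it helps to picture it, the call itself is just a POST to the Kudu command endpoint using the publishing credentials for basic auth. A minimal sketch with placeholder credentials:

$creds = "publishingUser:publishingPassword"   # placeholders: use the values from the publishing profile
$auth = "Basic " + [Convert]::ToBase64String([Text.Encoding]::UTF8.GetBytes($creds))
$body = @{ command = "powershell -File D:\home\site\wwwroot\App_Data\Scripts\myscript.ps1"; dir = "site\wwwroot" } | ConvertTo-Json

Invoke-RestMethod -Uri "https://mysite.scm.azurewebsites.net/api/command" -Method Post `
    -Headers @{ Authorization = $auth } -ContentType "application/json" -Body $body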

Unfortunately the Azure API appears to do a simple 'webappname-slotname' concatenation, and doesn't generate the truncated URL which Azure actually uses.
ie. the URL that is returned: 3dd1a584-abc6-4019-a917-633cb1dbfb40-prod-cm-cm-staging.scm.azurewebsites.net

This has apparently been raised with Microsoft, who are working on the issue.  In the meantime, rather than using the 'hostnames' property, you need to filter on the 'EnabledHostNames' property.
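
In Powershell that's the same trick the sync script above uses: grab the slot and pick the .scm. entry out of EnabledHostNames (placeholder names here):

$slot = Get-AzureRmWebAppSlot -ResourceGroupName "my-rg" -Name "mywebapp" -Slot "cm-staging"
$scmHost = $slot.EnabledHostNames | Where-Object { $_.Contains('.scm.') }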

If I could make a suggestion: do your best to avoid using guids in your app names, or really long app names for that matter, it causes a world of pain.

Thursday 26 October 2017

Sitecore PaaS conditional deployments

For our current project our client has a production Sitecore environment and a DR environment which are almost identical.  Production is always up and running, however DR is spun up on demand, in a secondary region, when the production region goes down for any reason.  The only difference between the two environments is the data: the SQL servers are using Azure's active geo-replication and failover groups so that the secondary region's data is always up and ready to go, and fails over automatically.  This is more costly, but enables us to meet the client's RPO and RTO (as opposed to the backup and restore method), and to specify the grace period with data loss if the RTO isn't going to be met.
As a side-note, the secondary SQL servers, DBs, and failover groups are located in the primary resource group, not the DR resource group. This is so that the DR resource group can simply be deleted when the primary region comes back on-line.  Don't forget point 1 of resource groups: the resources inside should share the same lifecycle.

But moving on to the topic of the title: the DR environment is identical to prod minus the SQL servers and databases.  There's no point scripting up two ARM templates when things are this similar, and fortunately Azure has us covered with ARM template conditions.  This is the best use for them that I've found so far, but you could also use them for any other environments/deployments which are quite similar.

First, in the Powershell script (we're using a modified version of the Sitecore Azure toolkit's Sitecore.Cloud.Cmdlets.psm1 Start-SitecoreAzureDeployment function), we dynamically add the Environment parameter based on a flag:
if($IsDR) {
  $paramJson | Add-Member -NotePropertyName "Environment" -NotePropertyValue @{ "value" = "DR" } -Force
}
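
For completeness, the $IsDR flag is just a switch added to the deployment script's param block; a small sketch (adjust to your own script's parameters):

param(
    # ... existing parameters ...
    [switch]
    $IsDR
)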

Then in the template we add our Environment parameter:
"Environment": {
    "type": "string",
    "allowedValues": [
        "Prod",
        "DR"
    ],
    "defaultValue": "Prod",
    "metadata": {
        "description": "Select whether this environment is prod (requires SQL) or DR (no SQL required)."
    }
}

Finally, in the resource itself we add the condition to only create SQL servers if it's not DR:
{
  "condition": "[not(equals(parameters('Environment'), 'DR'))]",
  "type": "Microsoft.Sql/servers",
  "name": "[variables('dbServerNameTidy')]",
  ... etc.
}

You can also use the conditions in your properties, which is required in the DR deployment to get the old FQDN of the original prod SQL servers.
"sqlServerFqdn": "[reference(if(equals(parameters('Environment'), 'DR'), resourceId(parameters('prodResourceGroup'), 'Microsoft.Sql/servers', variables('oldDbServerNameTidy')), resourceId('Microsoft.Sql/servers', variables('dbServerNameTidy'))), variables('dbApiVersion')).fullyQualifiedDomainName]",

Easy as that! Now you have one ARM template which can be used for both environments, just by passing the -IsDR parameter to your Powershell deployment script.
You could easily modify the Environment parameter to contain more values for different environments if you have more which are similar.  However if they're more than a little different it's probably worth having a different nested template, or entire set of templates, for each environment.

Tuesday 10 October 2017

Making a Sitecore PaaS package (WDP) for your module

For one of my first posts on Sitecore PaaS I'll run through one of the more basic things you may need to know for your deployments: adding a module to your Sitecore PaaS installation.  I'll use the Sitecore Powershell Extensions module (which as far as I know doesn't have a package already) as an example.  This is an easy one to start with, as it will only really be running on your CM server so it cuts down on the need for transforms and multiple packages.

Firstly, we need to create what Sitecore calls a "web deploy package" (WDP).  This is just a zip file containing our module files, and ending in .scwdp.zip.  As you can find from a dig around in the Sitecore Azure Module Documentation, in the 'Generating an initial WDP' section, you can create this WDP from a normal Sitecore module installation .zip or .update package.

The command is:
ConvertTo-SCModuleWebDeployPackage [-Path] <string> [[-Destination] <string>]

Which in our case will be:
ConvertTo-SCModuleWebDeployPackage -Path "Sitecore PowerShell Extensions-4.5 for Sitecore 8.zip" -Destination "SPE.scwdp.zip"
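
If the cmdlet isn't recognised, import the Sitecore Azure Toolkit module first. A quick sketch, assuming you've extracted the toolkit to C:\SAT:

Import-Module "C:\SAT\tools\Sitecore.Cloud.Cmdlets.psm1"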

This generates our .scwdp.zip with the same files that were in the installer zip, and also creates .dacpac files to amend our DBs with any items from the installer zip.  The deploy package will take an "application path" parameter no matter what, and if you have any .dacpacs (new items) it will also automatically add parameters for whichever database you are updating (in our case, core and master).

Next up, we need to create an ARM template which will install our WDP.  A good starting point is taking a look at the WFFM templates, which are relatively straightforward.  I've created a gist for the SPE deployment template, and you can see the end of the post for how we pass in our parameters.

You'll note we're only adding one new resource in the ARM template, which is an MSDeploy extension.  This is how we use ARM to deploy things to Azure.


"resources": [
    {
      "name": "[concat(variables('cmWebAppNameTidy'), '/', 'MSDeploy')]",
      "type": "Microsoft.Web/sites/extensions",
      "location": "[parameters('location')]",
      "apiVersion": "[variables('webApiVersion')]",
      "properties": {
        "addOnPackages": [
          {
            "dbType": "SQL",
            "connectionString": "[concat('Data Source=tcp:', variables('sqlServerFqdnTidy'), ',1433;Initial Catalog=master;User Id=', parameters('sqlServerLogin'), '@', variables('sqlServerNameTidy'), ';Password=', parameters('sqlServerPassword'), ';')]",
            "packageUri": "[parameters('cmMsDeployPackageUrl')]",
            "setParameters": {
              "Application Path": "[variables('cmWebAppNameTidy')]",
              "Core Admin Connection String": "[concat('Encrypt=True;TrustServerCertificate=False;Data Source=', variables('sqlServerFqdnTidy'), ',1433;Initial Catalog=',variables('coreSqlDatabaseNameTidy'),';User Id=', parameters('sqlServerLogin'), ';Password=', parameters('sqlServerPassword'), ';')]",
              "Master Admin Connection String": "[concat('Encrypt=True;TrustServerCertificate=False;Data Source=', variables('sqlServerFqdnTidy'), ',1433;Initial Catalog=',variables('masterSqlDatabaseNameTidy'),';User Id=', parameters('sqlServerLogin'), ';Password=', parameters('sqlServerPassword'), ';')]"
            }
          }
        ]
      }
    }
  ]

You'll notice we're passing the application path and core/master connection string parameters in to the deployment, as mentioned above.

Finally, we need to add the extension to our Sitecore deployment.  We do this in our parameters file for our main deployment, in a "modules" parameter:

"modules": {
  "value": {
    "items": [
      {
        "name": "spe",
        "templateLink": "https://yoursite/sitecore/templates/azuredeploy.spe.json",
        "parameters": {
          "cmMsDeployPackageUrl": "https://yoursite/sitecore/modules/SPE.scwdp.zip"
        }
      }
    ]
  }
}

You'll see there is only one parameter we actually need to pass in: the WDP we created above.  This is because Sitecore automatically populates the other parameters with the values we pass in to the main ARM deployment.  This includes the CM app name (where the package is installed), SQL username and password (for installing the .dacpac which has the new items), etc.

And that's all there is to it!  Enjoy adding your additional modules and install packages to Sitecore PaaS in a nice modular way.

Thursday 22 June 2017

Microsoft Azure certification (70-534)

After getting my developing (532) and implementing (533) Azure certifications I felt I needed a bit of a break to focus on my life (and work) again, but after a few months I received an email from Opsgility about a 534 course so I decided to get back into the study.  Overall it was quite well structured, though some of it was a bit all over the place.

There's a little overlap between 532/533 and 534, but not that much.  I'd recommend a lot of study and being very comfortable with all topics before attempting the exam.  There are lots of Opsgility, Udemy, and Pluralsight courses, and between them all you should be able to cover all your bases.  Read and watch everything you can, and definitely play around in Azure as much as possible.  The 534 course had recently been restructured (update: it has been again now) to include the latest topics like IoT, Event Hubs, and Media Services, so make sure you have all the latest materials.

Either way after all the study was done I managed to pass the exam and am now a certified Azure Architect and MCSE: Cloud Platform & Infrastructure!  Hopefully this leads to some fun projects, and more posts more around architecting solutions and (Sitecore in) Azure.