Deploying pre-built functions to Azure Static Web Apps

Posted 30 November 2022, 10:50

In my last post I talked about hosting a Next.js app on Azure Web Apps, and I was asked why I didn't use Static Web Apps:

Out of curiosity, is there a reason that you chose to go AppService over Static Web Apps (which has native Next.js support)?
Aaron Powell

Which is a good question. The main answer is that we like to ensure that what we deploy has been fully tested, barring any configuration changes - we build the application once and then deploy that single package to every environment. Static Web Apps supports this for a number of application styles, but sadly not for Next.js apps at this time - from the linked article on the feature:

During the preview, the following features of Static Web Apps are unsupported for Next.js with server-side rendering: […] skip_app_build and skip_api_build can't be used.

However, I do like the Static Web Apps offering, so we've been using it for a few landing pages, and recently we had a need to front a secured API call, which is where this post comes in.

The Problem: We have a Static Web App that needs to call an API with a key, and we don't want that key (or indeed the API endpoint) exposed to the end users. We also want to follow our usual build and deploy process, with a single build that is deployed to each environment, and as an additional benefit if we can enable Password Protection on both the site and the API, that would be great.

Managed Functions to the Rescue!

The primary way to hide the key and endpoint of the API is by using an intermediary API - to hide the key, it must be stored server side, and added to the request. I considered using Front Door to add the header to requests, but that would still expose an additional URL I didn't really want to use, so using a function app seemed like a better way to go. Having this as a Managed Function answered two of my asks:

  1. It keeps everything on the same site and domain (all functions are under the /api/ route).
  2. Password Protection is enabled for the /api/ route as well as the site pages.
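
For context, the intermediary function itself can be very small. The sketch below is an illustration only, assuming an HTTP-triggered .NET 6 in-process function; the "Upstream__BaseUrl" and "Upstream__ApiKey" settings and the "x-api-key" header name are made-up placeholders for whatever your real API expects:

using System;
using System.Net.Http;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;

public class ProxyFunction
{
    private static readonly HttpClient Client = new HttpClient();

    [FunctionName("lookup")]
    public async Task<IActionResult> Run(
        [HttpTrigger(AuthorizationLevel.Anonymous, "get", Route = "lookup")] HttpRequest req)
    {
        // The endpoint and key live in the function's app settings,
        // so they never reach the browser (setting names are placeholders).
        var baseUrl = Environment.GetEnvironmentVariable("Upstream__BaseUrl");
        var apiKey = Environment.GetEnvironmentVariable("Upstream__ApiKey");

        var upstream = new HttpRequestMessage(HttpMethod.Get, $"{baseUrl}/lookup{req.QueryString}");
        upstream.Headers.Add("x-api-key", apiKey);

        var response = await Client.SendAsync(upstream);

        return new ContentResult
        {
            Content = await response.Content.ReadAsStringAsync(),
            ContentType = response.Content.Headers.ContentType?.ToString(),
            StatusCode = (int)response.StatusCode
        };
    }
}

A single static HttpClient is reused rather than created per request, which avoids socket exhaustion under load.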

However, when I initially tried to pre-build and deploy the function application, the builds failed with:

Cannot deploy to the function app because Function language info isn't provided.

Which was a little annoying.

The answer to this is in the Build Configuration documentation, under Skip building the API:

If you want to skip building the API, you can bypass the automatic build and deploy the API built in a previous step.

Steps to skip building the API:

  • In the staticwebapp.config.json file, set apiRuntime to the correct runtime and version. Refer to Configure Azure Static Web Apps for the list of supported runtimes and versions.
  • Set skip_api_build to true.
  • Set api_location to the folder containing the built API app to deploy. This path is relative to the repository root in GitHub Actions and cwd in Azure Pipelines.

So, adding the following into our staticwebapp.config.json was the final piece in the puzzle:

{
  "platform": {
    "apiRuntime": "dotnet:6.0"
  }
}
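
For completeness, the matching deployment step then just needs skip_api_build switched on and api_location pointed at the pre-built output. A rough sketch using the AzureStaticWebApp@0 task, where the folder names are placeholders for wherever your build stage drops its output:

- task: AzureStaticWebApp@0
  inputs:
    app_location: "site-output"   # pre-built static site (placeholder path)
    api_location: "api-output"    # pre-built function app (placeholder path)
    skip_app_build: true
    skip_api_build: true
    azure_static_web_apps_api_token: $(deployment-token)
  displayName: "Deploy pre-built site and API"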

So now I can create a single build package, apply variable replacements to the appSettings.Production.json file for the function app to provide sandbox/production endpoints and keys, and have the API protected in the same way as the static site.

Filed under: Azure, DevOps

Hosting Next.js apps on Azure Web Apps

Posted 24 November 2022, 02:52

We've started using Storybook for a few of the sites we're pulling together at work, while continuing to use Azure DevOps and deploying into our Azure estates. Running a simple static app produced from a Next.js site is pretty straightforward, but it turns out that running a full Next.js application on Azure App Service has a few less than obvious hoops you need to jump through to get everything lined up and working.

The key to solving this was the "With Docker" example, which provided a decent reference point to compare what we had with what we needed.

Configure Next.js Output: Standalone

The main requirement is to ensure that your Next.js application is configured to output as a Standalone application. In next.config.js, ensure you've set:

module.exports = {
  output: 'standalone',
}

This will create two core folders in the .next output folder:

  • standalone - This is the self contained application, including the required node_modules packages, along with a minimal server.js file which can be used instead of next start.
  • static - This is the standard static site which needs to be deployed as well.

Finally, you also have your existing public folder.

Set up your App Service

We created a Linux based App Service Plan, in our case a B1 instance.

The App Service is configured with:

  • Stack: Node
  • Major version: Node 16
  • Minor version: Node 16 LTS
  • Startup Command: Provided by deployment

Everything else as you would normally configure it (e.g. FTP disabled, HTTP 2.0, HTTPS Only On, Minimum TLS Version 1.2, etc.).
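
If you prefer to script that setup rather than click through the portal, something along these lines should get you close - a sketch only, with the resource group, plan and app names being placeholders (and note the runtime string format varies between Azure CLI versions):

az appservice plan create \
  --resource-group my-rg \
  --name my-next-plan \
  --is-linux \
  --sku B1

# Older CLI versions use "NODE|16-lts"; newer ones expect "NODE:16-lts".
az webapp create \
  --resource-group my-rg \
  --plan my-next-plan \
  --name my-next-app \
  --runtime "NODE|16-lts"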

Build and Deployment

Using Azure DevOps pipelines the following tasks will create a deployable package:

- stage: Build
  jobs:
  - job: NextBuild
    pool:
      vmImage: ubuntu-latest

    steps:
    - script: |
        yarn install --frozen-lockfile
      workingDirectory: $(Build.SourcesDirectory)
      env:
        CI: true
      displayName: "Installing packages"

    # Assumes output: 'standalone' is configured in next.config.js.
    - script: |
        yarn build
      workingDirectory: $(Build.SourcesDirectory)
      env:
        CI: true
        NODE_ENV: "production"
      displayName: "Building Next application"

    - task: CopyFiles@2
      inputs:
        sourceFolder: $(Build.SourcesDirectory)/.next/standalone
        contents: |
          **/*
        targetFolder: $(Build.ArtifactStagingDirectory)/site-deploy
      displayName: 'Copy standalone into the root'

    - task: CopyFiles@2
      inputs:
        sourceFolder: $(Build.SourcesDirectory)/.next/static
        contents: |
          **/*
        targetFolder: $(Build.ArtifactStagingDirectory)/site-deploy/.next/static
      displayName: 'Copy static into the .next folder'

    - task: CopyFiles@2
      inputs:
        sourceFolder: $(Build.SourcesDirectory)/public
        contents: |
          **/*
        targetFolder: $(Build.ArtifactStagingDirectory)/site-deploy/public
        flattenFolders: false
      displayName: 'Copy Public folder'

    - task: ArchiveFiles@2
      inputs:
        rootFolderOrFile: $(Build.ArtifactStagingDirectory)/site-deploy
        includeRootFolder: false
        archiveType: "zip"
        archiveFile: $(Build.ArtifactStagingDirectory)/Nextjs-site.zip
        replaceExistingArchive: true
      displayName: "Package Next application"

    - task: PublishPipelineArtifact@1
      inputs:
        artifactName: Nextjs-site
        targetPath: $(Build.ArtifactStagingDirectory)/Nextjs-site.zip
      displayName: "Publish Next Application artifact"

Obviously you'll need to update the workingDirectory and sourceFolder paths as appropriate to your repo.

Basically, the steps are:

  1. Restore your packages
  2. Build your application
  3. Copy the contents of the .next/standalone folder into the root of a staging folder
  4. Copy the contents of the .next/static folder into the same folder within the staging folder
  5. Copy the content of the public folder into the same folder within the staging folder
  6. Create a zip archive of the staging folder, without including the root folder
  7. Publish the archive as a Pipeline Artifact

The published artifact should now contain the following structure:

+ /.next
│ + /server
│ │ + /chunks
│ │ + /pages
│ + /static
│   + /chunks
│   + /[hash]
+ /node_modules
+ /public
+ .env
+ package.json
+ server.js
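
If you want to sanity-check the package before deploying it, you can unzip it locally and run the standalone server yourself (assuming Node 16 is installed locally; by default the standalone server listens on port 3000):

unzip -o Nextjs-site.zip -d nextjs-site-test
cd nextjs-site-test
node server.js
# then browse to http://localhost:3000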

Deployment is simply a case of deploying this package with the AzureWebApp task:

- stage: Deploy
  dependsOn: Build
  condition: succeeded()
  jobs:
  - deployment: NextJs
    environment: "QA"
    strategy:
      runOnce:
        deploy:
          steps:
          - task: AzureWebApp@1
            inputs:
              appType: webAppLinux
              azureSubscription: azureRMServiceConnection
              appName: NextWebAppName
              package: $(Pipeline.Workspace)/Nextjs-site.zip
              deploymentMethod: "zipDeploy"
              runtimeStack: 'Node|16-lts'
              startupCommand: 'node server.js'
            displayName: "Deploy site to Azure"

The important settings to call out are:

  • appType - Needs to be set to webAppLinux so that we can use the startupCommand.
  • runtimeStack - Allows the development team to adjust the major and minor versions if necessary for their specific build.
  • startupCommand - Needs to be set to node server.js. This tells the App Service container to call the server.js file from node, starting your application ready to receive requests.

And with that, you should now have a Next.js application running on a Linux based Azure App Service.

Filed under: Azure, DevOps, Next.js

Using Animated Lower Thirds in OBS Studio

Posted 14 November 2021, 16:28

I've been playing around with OBS recently, and having fun pushing various tools to their limits.

One that I've found great, but which suffers a little from a lack of documentation, is Animated Lower Thirds with Dockable Control Panel, so I thought I'd post a few helpful notes.

Setting up

As of OBS 27.2 and above, the browser source needs to be loaded as a URL, rather than a local file. The simplest way to do this is to open the lower thirds\browser-source.html file in a browser, and copy the path from the address bar. You should end up with a URL like:
file:///C:/Users/.../OBS-Scripts/Animated-Lower-Thirds/lower%20thirds/browser-source.html

General

Often, the tool won't do anything if certain fields are empty. Usually adding a space to a field you want to leave blank is sufficient.

To clear a memory slot, press and hold the slot you wish to clear; it will flash red and then clear.

Fonts

The video tutorials show how to import Google Fonts into the tool, but sometimes you want to use a font you've purchased and installed on your machine. Referencing these is actually fairly easy, as the browser control is running the HTML page locally on your machine:

  1. Expand the Main Settings section, and "Show more" so you can see the three buttons.
  2. Select "Customs"
  3. For the custom font, fill it in as:
    Font Family: font-family: 'Ale and Wenches BB', serif;
    @import url: (single space)
    It's important to include the , serif or similar in the font family field, otherwise the font name displays as just ' in the other editors.
    You should then see your local font appear as an option to select in the lower third controls - remember that they don't appear in alphabetical order.

Filed under: obs-studio, Streaming

Connecting Live.com Accounts to Outlook when they use a 3rd party email address

Posted 30 October 2021, 16:37

I've just had to recreate a number of accounts on a windows PC, and came up against a fairly niche problem:

My family have Live "Personal" accounts that are tied to their primary email addresses, rather than to a hotmail.com or outlook.com account. My webhost provides a nice email service that supports secure IMAP connections for their accounts, so there was no need to set up full mail accounts, but we do use the calendar and contacts options, which are also managed on our phones, and shared between us. However, it's been getting slightly harder to add the live accounts to the desktop version of Outlook, hence this post.

Previously, I'd succeeded by adding the Live account first, but that started failing after my host upgraded their systems and started supporting the autodiscover protocols. I had been able to work around that by creating the account as an Exchange Enabled account and pointing it to a hotmail.com server, but trying both that and outlook.live.com seemed to connect successfully without actually bringing down any contacts or calendar information.

I did manage to get it to work once I'd found the underlying account that outlook.com had generated for the user. If you log in to outlook.com and hover over the Outlook icon in the top left, you're shown the details of the account - this was an email like outlook_[...]@outlook.com:

Outlook.com underlying account

Using that email address, I was able to get the New Account dialog to recognise that I was adding an outlook.com account, but it complained that there wasn't a user with that email address. Entering the user's actual email address and credentials added the account, with the Calendars and Contacts, to Outlook, and then I could add the primary email account from our servers after that.

Filed under: Office365, outlook

When No Cache means Cache - Fun with Azure Front Door

Posted 21 June 2021, 17:30

Azure Front Door is a great product that has only improved since its initial release. At a high level it wraps three core services that most websites can benefit from: Caching (CDN), Routing (both simple Traffic Manager style and more complex rules based) and Firewalls (WAF). It also works really well behind the bigger full featured CDN offerings when you need more complex caching rules. It's easy to lock down an App Service to the Front Door infrastructure, providing you with the benefits of a Web Application Firewall and failover if that's what you need.

However, we recently had the following issue with Azure Front Door, caching and cookies: for sites where Front Door has Caching enabled, Front Door was stripping the set-cookie header from responses. This was causing form validation to fail when the Request Verification Token cookie wasn't returned in the POST.

Based on the documentation around cache expiration:

Cache-Control response headers that indicate that the response won't be cached such as Cache-Control: private, Cache-Control: no-cache, and Cache-Control: no-store are honoured.

We had ensured that our pages were sending Cache-Control: no-cache and were seeing x-cache: TCP_MISS on the responses so we thought we were good, but the cookies weren't being set. Checking the origin, they were being set fine, and disabling caching in Front Door resulted in them being set as expected as well, but none of the site was then cached.

Here's where the limitations of Front Door compared to Azure Premium CDN show: the Rules engine in Front Door allows you to modify responses, routing and caching behaviour, but only based on the incoming request (the Azure Premium from Verizon CDN rules engine also lets you modify those things based on the response from the server as well as the incoming request). So as an initial workaround we disabled caching, and then enabled it with a rule for requests that included a file extension:

Front Door Rules Engine

In pseudo-code:

IF Condition: "Request Path"
   Operator: "Contains"
   Value: "." Transform: "To Lowercase"
THEN Action: "Routing Configuration"
     Route Type: "Forward" "Backend Pool"
     Backend Pool: // Update as needed
     Forwarding Protocol: // Update as needed
     URL Rewrite: // Update as needed
     Caching: "Enabled"
     Cache behaviour: "Cache every unique URL" // We want cache busting query strings to work
     Dynamic compression: "Enabled"
     Use default cache duration: "Yes"

This gave us a level of caching for static content (CSS, JS, images, etc.) but still meant that cacheable pages were not being cached.

After a bit of to and fro with the very helpful support team, it was pointed out that the HTTP specification has this to say about Cache Headers:

The "no-cache" response directive indicates that the response MUST NOT be used to satisfy a subsequent request without successful validation on the origin server

And the MDN documentation spells it out even plainer:

no-cache The response may be stored by any cache, even if the response is normally non-cacheable. However, the stored response MUST always go through validation with the origin server first before using it.

Because pages with a response of "no-cache" may actually be cached, Front Door automatically strips the set-cookie header from the response, ensuring that the page can be cached and other users don't share the set-cookie header.

What we needed to do was use Cache-Control: no-store on those pages, which results in a truly non-cacheable page, and then Front Door lets the cookies through.

This basically meant changing our code from:

Response.Cache.SetCacheability(HttpCacheability.NoCache);

To:

Response.Cache.SetNoStore();

Your page will then emit a Cache-Control header with private, no-store and an Expires header set to -1. While this does help you fall into the pit of success, it's a little tedious that No Store doesn't exist on the HttpCacheability enum, and that attempting to set the cacheability manually to no-store results in an exception.
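
A quick way to confirm the behaviour end to end is to inspect the headers coming back through Front Door - the URL here is obviously a placeholder:

curl -sI https://www.example.com/some-form-page | grep -iE 'cache-control|x-cache|set-cookie'

On the no-store pages the set-cookie header should now come through, while the static assets should still report cache hits in x-cache.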

Filed under: Azure

Setting up Umbraco Azure AD Authentication

Posted 21 June 2019, 08:48

I recently had a requirement to add Azure Active Directory authentication to a client's Umbraco Back Office, with the added request of managing the CMS group membership via Azure AD as well.

To do this, we're going to use an Azure AD Application Registration with some custom roles. By using roles we remove the need to share more information about the user and their memberships with our application, and we make the application more portable: we can deploy the application manifest into any directory and the same role names are returned to the calling web site, rather than a unique group ID. Because they are textual names we can also easily wire up our Umbraco group aliases to the role names, without working with unwieldy GUIDs (Umbraco also requires that a group alias doesn't start with a number).

When following through Shazwazza's post Configuring Azure Active Directory login with Umbraco, I ran into a couple of problems with the basic set-up: mainly that the ID token wasn't included in the responses, and then that the user's email address wasn't populated in the generated claim correctly.

Start by installing the UmbracoCms.IdentityExtensions.AzureActiveDirectory package into a suitable project in your solution. This will add some dependencies and a few sample classes in either an App_Start or App_Code folder, depending on your project type; however, these consist of two classes with OWIN startup attributes and a couple of helper methods, so you could move them if needed. These classes are very well documented, so it should be fairly easy to see what's happening in there - we'll come back to them in a short while.

Then jump over to Azure, where you'll need permissions to create Applications and Enterprise Applications at least.

Open the Azure Active Directory blade for the subscription you are connecting to, and select "App Registrations" and then "New registration":
Azure AD App Registrations

As we're granting users access to the back office we'll stick with the default of "Accounts in this organisational directory only", and as the name of the application will be shown to users if they have to grant access to their data, make it meaningful (you can change it later). Finally, add your first "Redirect URI" (if you have more environments that you want to control with a single application, we can add these later) - this should be the full path to your Umbraco instance, including the trailing slash:
New App Registration
Note: This area has been improved recently so these screenshots may be slightly out of date

Press "Register" and your application will be created.

Switch to the "Authentication" pane for your new App - you need to enable "ID tokens" in the Advanced Settings - "Implicit grant" section. If you have more than one environment you're protecting with same users, you can also add the additional Redirect URIs here.
Add more redirects and enable ID Tokens

Next, under "API permissions", you ideally want to "Grant admin consent for [Your Directory]" for the User.Read permission that is added by default to the App:
Grant Admin Consent

Which should result in the consent being given for all users:
Grant Admin Consent Success

Next you need to set up the Roles that your application is going to grant to users - these are what we're going to map to Umbraco's back office groups. If you don't want to use the new preview UI to create these, you can edit the manifest directly. Open the Manifest pane and find the "appRoles" array:
App roles in the manifest

Put your cursor between the braces, and then add at least the following three roles - you'll need to generate a unique GUID for each role, and enter it as 00000000-0000-0000-0000-000000000000 (i.e. hyphens but no curly braces):

{
    "allowedMemberTypes": [
        "User"
    ],
    "description": "Members of the Umbraco Administrators group.",
    "displayName": "Umbraco Admin",
    "id": "[UniqueGuid]",
    "isEnabled": true,
    "lang": null,
    "origin": "Application",
    "value": "admin"
},
{
    "allowedMemberTypes": [
        "User"
    ],
    "description": "Members of the Umbraco Editors group.",
    "displayName": "Umbraco Editor",
    "id": "[UniqueGuid]",
    "isEnabled": true,
    "lang": null,
    "origin": "Application",
    "value": "editor"
},
{
    "allowedMemberTypes": [
        "User"
    ],
    "description": "Members of the Umbraco Writers group.",
    "displayName": "Umbraco Writer",
    "id": "[UniqueGuid]",
    "isEnabled": true,
    "lang": null,
    "origin": "Application",
    "value": "writer"
}

Note that the display name can contain spaces and the value parameter will be used to map to the Group Alias in Umbraco.

From the Overview blade of your application, make a note of the Application (client) ID and Directory (tenant) ID, as you'll need them later. You can also update the Branding for your application that may appear on your users' Applications page - the logo also appears in the Application listings in Azure, so it can be useful to help you spot it amongst all the others.

Having done all that, you can then configure some users - to do this switch to the "Enterprise Applications" blade in your Azure Active Directory and locate your new Application Registration - depending on your configuration you may have many or very few, and so may have to search for it either by name or the Application (client) ID.

Select your application, and open the "Users and groups" blade and select "Add user". Depending on your Azure Active Directory plan, your experience may be better or worse - on the Free plan you can only assign users to roles, but with any of the paid plans (Basic and above) you can add groups to roles (when selecting users, groups and roles, make sure that you do press the "Select" button at the bottom of the blade each time!):
Assign Roles to Users or groups

To add a user or group to more than one role, you need to add them multiple times:
Add user to multiple roles

Back in your code, first add the Application (client) Id, Directory (tenant) Id and redirect URI to your appSettings in the web.config - these aren't secrets, so should be safe in source control, but you'll want to ensure that your redirect URI is updated for each environment so that your users are returned to the correct instance.

As I planned on using App_Start\UmbracoStandardOwinStartup.cs as the basis of my application, I updated the owin:appStartup setting to reference UmbracoStandardOwinStartup.

In the UmbracoStandardOwinStartup class, just update the call to ConfigureBackOfficeAzureActiveDirectoryAuth with the values added to the appSettings - note that you can use the Directory (tenant) Id for both the tenant and issuerId properties, which is useful if you're not sure which domain is associated with the directory.
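
For reference, the wired-up call ends up looking something like the sketch below - the appSettings keys are hypothetical, and the parameter order and types should be checked against the sample UmbracoADAuthExtensions.cs that the package added to your project:

// Sketch only - the key names are made up, and the signature comes from the
// package's sample extension class, so verify it against your copy.
var tenantId = ConfigurationManager.AppSettings["AzureAD:TenantId"];
var clientId = ConfigurationManager.AppSettings["AzureAD:ClientId"];
var redirectUri = ConfigurationManager.AppSettings["AzureAD:RedirectUri"];

app.ConfigureBackOfficeAzureActiveDirectoryAuth(
    tenantId,              // tenant
    clientId,              // Application (client) Id
    redirectUri,           // must match a Redirect URI on the App Registration
    new Guid(tenantId));   // issuerId - the Directory (tenant) Id works here too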

The main changes are then in the UmbracoADAuthExtensions.cs class. This class contains the single extension method ConfigureBackOfficeAzureActiveDirectoryAuth called by the start-up class and this is where I got to work.

To resolve the issue with the user's email address not being found, I wrote a custom handler for the SecurityTokenValidated notification. To wire that up, add the following to the OpenIdConnectAuthenticationOptions constructor call:

Notifications = new OpenIdConnectAuthenticationNotifications 
{ 
 SecurityTokenValidated = async notification => { AzureActiveDirectory.HandleNotification(notification); }, 
} 

This calls the following custom method, which uses the user's full name (rather than just their first name) and then finds either the Email or UPN claims, which should contain the user's email address.

// Need to handle the case when the email address is returned on the UPN claim. 
internal static void HandleNotification(SecurityTokenValidatedNotification<OpenIdConnectMessage, 
                                                                    OpenIdConnectAuthenticationOptions> notification) 
{ 
   var id = notification.AuthenticationTicket.Identity; 

   // we want to keep name (as a whole name) and roles 
   var name = id.FindFirst(ClaimTypes.Name); 

   var email = id.FindFirst(ClaimTypes.Email) ?? id.FindFirst(ClaimTypes.Upn); 

   var roles = id.FindAll(ClaimTypes.Role); 

   // create new identity and set name and role claim type 
   var nid = new ClaimsIdentity( 
       id.AuthenticationType, 
       ClaimTypes.Name, 
       ClaimTypes.Role); 

   nid.AddClaim(name); 
   nid.AddClaims(roles); 
   nid.AddClaim(id.FindFirst(ClaimTypes.NameIdentifier)); 
   var emailClaim = new Claim(ClaimTypes.Email, email.Value); 
   nid.AddClaim(emailClaim); 

   notification.AuthenticationTicket = new AuthenticationTicket( 
       nid, 
       notification.AuthenticationTicket.Properties); 
} 

That should get basic authentication up and running, but it requires that users already exist in Umbraco, so we need to enable Auto Linking, and then add and remove groups based on the roles included in the claim from Azure AD.

Back in ConfigureBackOfficeAzureActiveDirectoryAuth, we need to create a new ExternalSignInAutoLinkOptions object and add it to our OpenIdConnectAuthenticationOptions object. Again, this is going to use a couple of custom handlers to configure the values:

// Don't add the user to any groups by default, these should be added by the claims from Azure. 
var autoLinkOptions = new ExternalSignInAutoLinkOptions(true, new string[] { }, defaultCulture: "en-GB"); 
 
// Handle the Roles from the Azure AD Application 
autoLinkOptions.OnAutoLinking = AzureActiveDirectory.OnAutoLinking; 
 
// Check the Roles from the Azure AD Application on subsequent login 
autoLinkOptions.OnExternalLogin = AzureActiveDirectory.OnExternalLogin; 
 
adOptions.SetExternalSignInAutoLinkOptions(autoLinkOptions); 

The OnAutoLinking method is fairly simple as it just calls out to the OnExternalLogin method:

internal static void OnAutoLinking(BackOfficeIdentityUser user, ExternalLoginInfo info) 
{ 
    // Let login handle sorting out the roles. 
    OnExternalLogin(user, info); 
} 

The OnExternalLogin method does all the heavy lifting:

public static bool OnExternalLogin(BackOfficeIdentityUser user, ExternalLoginInfo info) 
{ 
    // check user is still in an editing group 
    var applicationRoles = info.ExternalIdentity 
                               .FindAll(c => c.Type == info.ExternalIdentity.RoleClaimType) 
                               .Select(c => c.Value) 
                               .ToList(); 
 
    if (applicationRoles.Any()) 
    { 
        var groups = user.Groups.ToList(); 
 
        var groupsToRemove = groups.Where(g => !applicationRoles.Contains(g.Alias)) 
                                   .ToArray(); 
        var groupsToAdd = applicationRoles.Where(r => !groups.Any(g => g.Alias.Equals(r))); 
 
        // Remove old groups and reset the group array, then sort out the roles. 
        // Has to be done this way to ensure correct change tracking on the underlying user. 
        foreach (var group in groupsToRemove) 
        { 
            groups.Remove(group); 
        } 
 
        user.Groups = groups.ToArray(); 
 
        foreach (var group in groupsToRemove) 
        { 
            var userRole = user.Roles.FirstOrDefault(r => r.RoleId.Equals(group.Alias)); 
 
            if (userRole != null) 
            { 
                user.Roles.Remove(userRole); 
            } 
        } 
 
        foreach (string group in groupsToAdd) 
        { 
            user.AddRole(group); 
        } 
 
        return true; 
    } 
 
    return false; 
} 

And with that in place, you should then be able to log in to your Umbraco instance using the credentials attached to the account linked to the Azure AD (for example a Windows Live, Office 365 or Federated AD account). As you add or remove roles from the user, these are reflected each time they authenticate through the application.

Filed under: ASP.NET MVC, Azure, Umbraco

Restricting access to Sitecore Media Items

Posted 03 March 2015, 11:00

I recently had a requirement to lock down some media items (PDFs in this case) within Sitecore so that only certain logged in users could access them. In principle this is trivially easy - ensure the users are in the right roles, remove read access from the extranet\anonymous user and grant read access to the specific roles. However, as always, the devil is in the details.

Whilst the above steps did work, and users were correctly sent to the login page, there was a problem - once the user logged in, they were just sent to the home page of the site rather than being returned to the item they'd requested.

Checking the web.config I found the following setting, which defaults to false:

<setting name="Authentication.SaveRawUrl" value="true" />

But setting it to true here didn't actually make any difference, because the out of the box MediaRequestHandler ignores this value. I'm not really sure whether that makes sense at all - if I lock down some images, for example, but then include them on a publicly accessible page, the user isn't going to be prompted to log in; they'd just get broken images as the browser requests an image but gets HTML in response. In the context of a PDF or other document, however, surely you'd want to log in and be returned to the correct place.

Anyway, the solution was fairly straightforward. I created a new RestrictedMediaRequestHandler that inherits from MediaRequestHandler and overrode only the DoProcessRequest method:

/// <summary>
/// Extends the Sitecore MediaRequestHandler to include the requested
/// URL in the redirect to the login page.
/// </summary>
public class RestrictedMediaRequestHandler : MediaRequestHandler
{
  protected override bool DoProcessRequest(HttpContext context)
  {
    Assert.ArgumentNotNull(context, "context");
    MediaRequest request = MediaManager.ParseMediaRequest(context.Request);
    if (request == null) {
      return false;
    }

    Media media = MediaManager.GetMedia(request.MediaUri);
    if (media != null) {
      // We've found the media item, so send it to the user
      return DoProcessRequest(context, request, media);
    }

    using (new SecurityDisabler()) {
      // See if the media item exists but the user doesn't have access
      media = MediaManager.GetMedia(request.MediaUri);
    }

    string str;
    if (media == null) {
      // The media item doesn't exist, send the user to a 404
      str = Settings.ItemNotFoundUrl;
    } else {
      Assert.IsNotNull(Context.Site, "site");
      str = Context.Site.LoginPage != string.Empty ?
          Context.Site.LoginPage : Settings.NoAccessUrl;

      if (Settings.Authentication.SaveRawUrl) {
        var list = new List<string>(new[]
                                    {
                                        "item",
                                        Context.RawUrl
                                    });

        str = WebUtil.AddQueryString(str, list.ToArray());
      }
    }

    HttpContext.Current.Response.Redirect(str);
            
    return true;
  }
}

Then I updated the web.config to tell Sitecore to use this new handler for media requests instead of the default one, and all was well in the world:

<add verb="*" path="sitecore_media.ashx"
     type="Custom.Infrastructure.Sitecore.RestrictedMediaRequestHandler, Custom.Infrastructure"
     name="Custom.RestrictedMediaRequestHandler" />

And now when a user requests a PDF they don't have access to they are sent to a login page that can return them to the PDF afterwards.

Filed under: ASP.NET, Sitecore

Setting up OSMC on a Raspberry Pi

Posted 11 February 2015, 21:50

I've had a Raspberry Pi (original model B) sitting around at home for about a year, and I've been wondering what to do with it for most of that time. I've finally decided that as we're decluttering the house we need a better way to access all the music that currently sits on the media shares of an original Windows Home Server that we use for backups and media storage.

So I've got together the following hardware:

  • A Windows Home Server, with the Guest user enabled as follows:
    • No password
    • Read Access to the Music, Videos and Photos shares
  • A Raspberry Pi Model B
  • A tiny USB wifi dongle (or one very much like that)
  • An 8GB SD card (they recommend a class 10, mine's currently a class 4)
  • A USB mouse
  • A TV with an HDMI port and its remote (for initial configuration)
  • A stereo amplifier with an AUX port and some speakers (we're running the Pi headless)

The software needed was:

  • The OSMC Installer (I went for the Windows one) - currently Alpha 4
  • An SSH client - what confused me here was the reference to PuTTY in the docs. I thought I had that as part of GitExtensions, but Plink (described as "a command-line interface to the PuTTY back ends") is not really the same thing - so I used the Bash prompt that was installed by Git, and that worked a treat - but I'm sure the proper PuTTY client would be fine as well

The process I followed was then (I'm mostly documenting this so that if I have to do it again, I'll have one location to refer to):

  1. Install OSMC on the SD card, setting up the wireless connection during the installer - this is currently the only way to configure this
  2. Insert the SD in the Pi, insert the wifi dongle and mouse and connect it to the TV before powering on
  3. Revel in the wonder that is OSMC running on your telly box.
  4. Change the skin - there are a few known issues with the default skin - I'm currently using Metropolis, but might switch to Conq as it seems even lighter (and I'm not really going to be seeing it).
  5. Choose a web server addin of your choice - I'm currently really liking Chorus, which works nicely on Chrome and is responsive enough to have a "remote" view on mobile devices.
  6. Set the Audio output to at least "audio jack" (or possibly both if you want to test it on the TV first)
  7. I wanted to change the webserver port to 80, but that's only been fixed post Alpha 4.
  8. I also turned on AirPlay "just in case" - although most of the devices don't have anything suitable to stream.
  9. On my computer, fire up bash and connect to the Pi via SSH:
    ssh osmc@[your Pi's hostname or IP address]
  10. Enter the password when prompted and you should be in
  11. Set up the mount points - you need to create local folders to hold the mounted network drives first, so I went for the following steps, which create them under the osmc user's home directory:
    sudo mkdir -p media/music
    sudo mkdir -p media/photos
    sudo mkdir -p media/videos
  12. You can then try mounting your network drive - and I suggest you do so that you can iron out any issues - as a point of note, it appears that you can either use an absolute path or one relative to your current location:
    sudo mount -t cifs -o guest //[servername]/music music
    This basically says:
    1. Mount a device using the CIFS module: mount -t cifs
    2. Pass the option "guest" to use the guest user: -o guest
    3. The network path to mount: //[servername]/music
    4. The mount location: music (in this case the folder below the current location)
  13. If that works, you should be able to change into your music directory and see the folder structure that exists on your server.
  14. You now need to get these to mount every time the Pi boots (hopefully not all that often!). OSMC comes with the nano editor pre-loaded, so open your File System Table file as follows:
    sudo nano /etc/fstab
  15. Then I added the following lines:
    //[servername]/music  /home/osmc/media/music  cifs guest,x-systemd.automount,noauto 0 0
    //[servername]/photos /home/osmc/media/photos cifs guest,x-systemd.automount,noauto 0 0
    //[servername]/videos /home/osmc/media/videos cifs guest,x-systemd.automount,noauto 0 0

    These are similar to the mount command above:
    1. The network path to mount: //[servername]/music
    2. The full path to the mount location: /home/osmc/media/music
    3. The mount type: cifs
    4. The comma separated mount options: guest,x-systemd.automount,noauto
      A bit more information about these options: we need to wait until the network is up and running before we can mount the drives, and luckily the people behind OSMC have thought of that:
      1. Use "guest" credentials (standard option for cifs): guest
      2. Mount via systemd's automount support, so the mount waits until the network is available: x-systemd.automount
      3. Do not use the standard auto-mount: noauto
    5. The final two fields are the standard fstab dump and fsck pass settings: 0 0
  16. Saved the changes (Ctrl+o), exited nano (Ctrl+x) and exited the console (exit)
  17. Back in OSMC, I then rebooted the Pi. If all's gone well, it should restart without any errors.
  18. I then went and added the new mounted folders to their respective libraries within OSMC, not forgetting to tell it to build the music library from the added folder.

Phew - quite a lot of steps, but I'm now sitting here listening to my music collection on the stereo with a "permanent" solution.

Now that I know it works, what would I change? I'd probably spend more than a fiver on the wifi dongle, and I might get a better SD card too - the playback can be a little stuttery... seeing as we're about to move into our loft room, grabbing a new Raspberry Pi 2 and popping it up there looks like a no-brainer!

Filed under: OSMC, Raspberry Pi

Long Running Sitecore Workflows

Posted 31 March 2014, 15:42

Note: This has been sitting in my queue for nearly a year, mainly because I didn't find a nice solution that worked with workflows - but I thought I'd finish it up and move on - 10/02/2015

I've been looking into some options for informing editors about the state of long running processes when carrying out a Sitecore workflow action. Typically, the UI will freeze while the workflow action is happening - which can cause issues with browsers (I'm looking at you Chrome) that decide that the page has timed out and just kill it.

In our particular case, we are generating a static copy of our site (as XML, html and a packaged .zip container) for use within a Magazine App container - the content is all hosted via a CDN, and only gets updated when a new issue is published. However, processing a number of issues and languages can take a little while.

I'm currently favouring a fairly simple Sitecore Job running in the context of a ProgressBox, which is working, but has a few rough edges.

The key advantages this method has are:

  • It keeps the connection between the browser and the server active, which keeps Chrome happy.
  • There's a visual indication that "something is happening", which keeps editors happy.

The issues I'm currently looking into however include:

  1. Because the task is running asynchronously, the workflow action "completes" (at least from a code point of view) before the Job finishes.
  2. Because of 1, there's no way to stop the workflow and mark it as "failed" if there are issues with the process.

Not long after I started writing this, the client requested that we remove the various status checks from the workflow conditions (so they could run the process for staging without having to complete the entire magazine), and I came to the conclusion that having this as a Sitecore Workflow didn't really work for us, because the editors' workflow was: work on a few pages, package for staging, work on a few more pages, package to staging, and so on, until it was ready to package to production - with the Workflow in place they had to keep rejecting the build to staging so they could re-run that step.

We therefore needed to replace the workflow with some custom ribbon buttons allowing the editors to package the content for staging or production as needed.

Filed under: Sitecore, Sitecore Jobs, Sitecore Workflow

Deploying Assemblies to the GAC

Posted 15 May 2013, 14:50

Pulling together a couple of helpful posts into one place so that I can find it later...

I wanted to deploy an assembly to the Global Assembly Cache (GAC) on a Windows 2008 server - GacUtil isn't necessarily installed, and I didn't really want to install the entire SDK on to the server just to get it.

PowerShell to the rescue!

The first thing I needed to do was enable PowerShell to load .Net 4 assemblies. This was done by adding a powershell.exe.config file to the PowerShell home directory (run $pshome from a PowerShell console to find it), with the following contents:

<configuration>
  <startup useLegacyV2RuntimeActivationPolicy="true">
    <supportedRuntime version="v4.0.30319" />
    <supportedRuntime version="v2.0.50727" />
  </startup>
</configuration>

You can then run the following commands to load the 4.0 instance of EnterpriseServices, which will install your libraries into the correct GAC depending on its target runtime:

[System.Reflection.Assembly]::LoadWithPartialName("System.EnterpriseServices")
$publish = New-Object System.EnterpriseServices.Internal.Publish
$publish.GacInstall("C:\PathTo\Your.Assembly.dll")
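
Should you need to take the assembly back out of the GAC later, the same Publish class has a matching GacRemove method:

# Remove the assembly from the GAC again if needed
$publish.GacRemove("C:\PathTo\Your.Assembly.dll")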

Filed under: .Net, Fixes