Restricting access to Sitecore Media Items

Posted 03 March 2015, 11:00 | by | | Perma-link

I recently had a requirement to lock down some media items (PDFs in this case) within Sitecore so that only certain logged in users could access them. In principle this is trivially easy - ensure the users are in the right roles, remove read access from the extranet\anonymous user and grant read access to the specific roles. However, as always, the devil is in the details.

Whilst the above steps did work, and users were correctly sent to the login page, there was a problem - once the user logged in, they were just sent to the home page of the site rather than being returned to the item they'd requested.

Checking the web.config I found the following setting, which defaults to false:

<setting name="Authentication.SaveRawUrl" value="false" />

But setting it to true here didn't actually make any difference, because the out-of-the-box MediaRequestHandler ignores this value. I'm not really sure whether that makes sense: if I lock down some images, for example, but then include them on a publicly accessible page, the user isn't going to be prompted to log in - they'd just get broken images, as the browser requests an image but gets HTML in response. In the context of a PDF or other document, though, surely you'd want to log in and be returned to the correct place.

Anyway, the solution was fairly straightforward. I created a new RestrictedMediaRequestHandler that inherits from MediaRequestHandler and overrides only the DoProcessRequest method:

// using System.Collections.Generic; using System.Web;
// using Sitecore; using Sitecore.Configuration; using Sitecore.Diagnostics;
// using Sitecore.Resources.Media; using Sitecore.SecurityModel; using Sitecore.Web;

/// <summary>
/// Extends the Sitecore MediaRequestHandler to include the requested
/// URL in the redirect to the login page.
/// </summary>
public class RestrictedMediaRequestHandler : MediaRequestHandler
{
  protected override bool DoProcessRequest(HttpContext context)
  {
    Assert.ArgumentNotNull(context, "context");

    MediaRequest request = MediaManager.ParseMediaRequest(context.Request);
    if (request == null) {
      return false;
    }

    Media media = MediaManager.GetMedia(request.MediaUri);
    if (media != null) {
      // We've found the media item, so send it to the user
      return DoProcessRequest(context, request, media);
    }

    using (new SecurityDisabler()) {
      // See if the media item exists but the user doesn't have access
      media = MediaManager.GetMedia(request.MediaUri);
    }

    string str;
    if (media == null) {
      // The media item doesn't exist, send the user to a 404
      str = Settings.ItemNotFoundUrl;
    } else {
      Assert.IsNotNull(Context.Site, "site");
      str = Context.Site.LoginPage != string.Empty ?
          Context.Site.LoginPage : Settings.NoAccessUrl;

      if (Settings.Authentication.SaveRawUrl) {
        // Pass the requested URL along to the login page; the query string
        // key ("returnUrl" here) is an assumption - use whatever your login
        // page expects.
        var list = new List<string>(new[] {
            "returnUrl", HttpUtility.UrlEncode(context.Request.Url.PathAndQuery) });
        str = WebUtil.AddQueryString(str, list.ToArray());
      }
    }

    context.Response.Redirect(str);
    return true;
  }
}

Then I updated the web.config to register this new handler for media requests instead of the default one, and all was well in the world:

<add verb="*" path="sitecore_media.ashx"
     type="Custom.Infrastructure.Sitecore.RestrictedMediaRequestHandler, Custom.Infrastructure"
     name="Custom.RestrictedMediaRequestHandler" />

And now when a user requests a PDF they don't have access to they are sent to a login page that can return them to the PDF afterwards.

Filed under: ASP.NET, Sitecore

Setting up OSMC on a Raspberry Pi

Posted 11 February 2015, 21:50 | by | | Perma-link

I've had a Raspberry Pi (original model B) sitting around at home for about a year, and I've been wondering what to do with it for most of that time. I've finally decided that as we're decluttering the house, we need a better way to access all the music that currently sits on the media shares of an original Windows Home Server that we use for backups and media storage.

So I've got together the following hardware:

  • A Windows Home Server, with the Guest user enabled as follows:
    • No password
    • Read Access to the Music, Videos and Photos shares
  • A Raspberry Pi Model B
  • A tiny USB wifi dongle (or one very much like it)
  • An 8GB SD card (they recommend a class 10, mine's currently a class 4)
  • A USB mouse
  • A TV with an HDMI port and its remote (for initial configuration)
  • A stereo amplifier with an AUX port and some speakers (we're running the Pi headless)

The software needed was:

  • The OSMC Installer (I went for the Windows one) - currently Alpha 4
  • An SSH client - what confused me here was the reference to PuTTY in the docs: I thought I had that as part of GitExtensions, but Plink (described as "a command-line interface to the PuTTY back ends") isn't really the same thing - so I used the Bash prompt installed by Git, which worked a treat, though I'm sure the proper PuTTY client would be fine as well

The process I followed was then (I'm mostly documenting this so that if I have to do it again, I'll have one location to refer to):

  1. Install OSMC on the SD card, setting up the wireless connection during the installer - this is currently the only way to configure this
  2. Insert the SD in the Pi, insert the wifi dongle and mouse and connect it to the TV before powering on
  3. Revel in the wonder that is OSMC running on your telly box.
  4. Change the skin - there are a few known issues with the default skin - I'm currently using Metropolis, but might switch to Conq as it seems even lighter (and I'm not really going to be seeing it).
  5. Choose a web server addin of your choice - I'm currently really liking Chorus, which works nicely on Chrome and is responsive enough to have a "remote" view on mobile devices.
  6. Set the Audio output to at least "audio jack" (or possibly both if you want to test it on the TV first)
  7. I wanted to change the webserver port to 80, but that's only been fixed post Alpha 4.
  8. I also turned on AirPlay "just in case" - although most of the devices don't have anything suitable to stream.
  9. On my computer, fire up bash and connect to the Pi via SSH (the default credentials are osmc/osmc):
    ssh osmc@[pi-address]
  10. Enter the password when prompted and you should be in
  11. Set up the mount points - you need to create local folders to hold the mounted network drives first, so I went for the following steps, which create them under the osmc user's home directory:
    sudo mkdir -p media/music
    sudo mkdir -p media/photos
    sudo mkdir -p media/videos
  12. You can then try mounting your network drive - and I suggest you do, so that you can iron out any issues - as a point of note, it appears that you can either use an absolute path or one relative to your current location:
    sudo mount -t cifs -o guest //[servername]/music music
    This basically says:
    1. Mount a device using the CIFS module: mount -t cifs
    2. Pass the option "guest" to use the guest user: -o guest
    3. The network path to mount: //[servername]/music
    4. The mount location: music (in this case the folder below the current location)
  13. If that works, you should be able to change into your music directory and see the folder structure that exists on your server.
  14. You now need to get these to mount every time the Pi boots (hopefully not all that often ;)). OSMC comes with the nano editor pre-loaded, so open your File System Table file as follows:
    sudo nano /etc/fstab
  15. Then I added the following lines:
    //[servername]/music  /home/osmc/media/music  cifs guest,x-systemd.automount,noauto 0 0
    //[servername]/photos /home/osmc/media/photos cifs guest,x-systemd.automount,noauto 0 0
    //[servername]/videos /home/osmc/media/videos cifs guest,x-systemd.automount,noauto 0 0

    These are similar to the mount command above:
    1. The network path to mount: //[servername]/music
    2. The full path to the mount location: /home/osmc/media/music
    3. The mount type: cifs
    4. The comma separated mount options: guest,x-systemd.automount,noauto
      A bit more information about these options: we need to wait until the network is up and running before the shares can be mounted, and the x-systemd.automount option handles exactly that:
      1. Use "guest" credentials (a standard option for cifs): guest
      2. Use systemd's automount, which mounts the share on first access - by which point the network is available: x-systemd.automount
      3. Do not mount it at boot time with the rest of fstab: noauto
    5. The final two fields are the standard dump and fsck pass-order flags - 0 0 disables both: 0 0
  16. Saved the changes (Ctrl+o), exited nano (Ctrl+x) and exited the console (exit)
  17. Back in OSMC, I then rebooted the Pi. If all's gone well, it should restart without any errors.
  18. I then went and added the new mounted folders to their respective libraries within OSMC, not forgetting to tell it to build the music library from the added folder.

Phew - quite a lot of steps, but I'm now sitting here listening to my music collection on the stereo with a "permanent" solution.

Now that I know it works, what would I change? I'd probably spend more than a fiver on the wifi dongle, and I might get a better SD card too - the playback can be a little stuttery... Seeing as we're about to move into our loft room, grabbing a new Raspberry Pi 2 and popping it up there looks like a no-brainer :D

Filed under: OSMC, Raspberry Pi

Long Running Sitecore Workflows

Posted 31 March 2014, 15:42 | by | | Perma-link

Note: This has been sitting in my queue for nearly a year, mainly because I didn't find a nice solution that worked with workflows - but I thought I'd finish it up and move on - 10/02/2015

I've been looking into some options for informing editors about the state of long running processes when carrying out a Sitecore workflow action. Typically, the UI will freeze while the workflow action is happening - which can cause issues with browsers (I'm looking at you Chrome) that decide that the page has timed out and just kill it.

In our particular case, we are generating a static copy of our site (as XML, html and a packaged .zip container) for use within a Magazine App container - the content is all hosted via a CDN, and only gets updated when a new issue is published. However, processing a number of issues and languages can take a little while.

I'm currently favouring a fairly simple Sitecore Job running in the context of a ProgressBox, which is working, but has a few rough edges.
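
For reference, here's a minimal sketch of that approach - PackageBuilder, BuildPackage and item are assumed stand-ins for your own worker class, method and arguments (ProgressBox.Execute wraps a similar call and adds the progress dialog):

// using Sitecore.Jobs;
// Start the long-running packaging work as a background Sitecore job.
var options = new JobOptions(
    "PackageIssue",                // job name, visible in the Jobs admin page
    "Publishing",                  // category
    Sitecore.Context.Site.Name,    // site the job runs under
    new PackageBuilder(),          // instance holding the worker method
    "BuildPackage",                // method invoked (by name) on a background thread
    new object[] { item })         // arguments passed to that method
{
    ContextUser = Sitecore.Context.User
};
JobManager.Start(options);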

The key advantages this method has are:

  • It keeps the connection between the browser and the server active, which keeps Chrome happy.
  • There's a visual indication that "something is happening", which keeps editors happy.

The issues I'm currently looking into however include:

  1. Because the task is running asynchronously, the workflow action "completes" (at least from a code point of view) before the Job finishes.
  2. Because of point 1, there's no way to stop the workflow and mark it as "failed" if there are issues with the process (see the sketch below).
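
To illustrate the problem (same assumed names as above) - JobManager.Start returns as soon as the job is queued, so the only way for the action to learn the outcome is to poll the job handle, which would block the UI all over again:

// using Sitecore.Jobs; using System.Threading;
Job job = JobManager.Start(options);  // returns immediately

// The workflow action "completes" here, whatever the job goes on to do.
// To mark the workflow step as failed you'd have to block and poll:
while (!job.IsDone)
{
    Thread.Sleep(500);                // ...which re-freezes the UI
}
bool failed = job.Status.Failed;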

Not long after I started writing this, the client requested that we remove the various status checks from the workflow conditions (so they could run the process for staging without having to complete the entire magazine), and I came to the conclusion that having this as a Sitecore Workflow didn't really work for us, because the editors' workflow was: work on a few pages, package for staging, work on a few more pages, package to staging, etc. until it was ready to package to production - with the Workflow in place they had to keep rejecting the build to staging so they could re-run that step.

We therefore needed to replace the workflow with some custom ribbon buttons allowing the editors to package the content for staging or production as needed.

Filed under: Sitecore, Sitecore Jobs, Sitecore Workflow

Deploying Assemblies to the GAC

Posted 15 May 2013, 14:50 | by | | Perma-link

Pulling together a couple of helpful posts into one place so that I can find it later...

I wanted to deploy an assembly to the Global Assembly Cache (GAC) on a Windows 2008 server - GacUtil isn't necessarily installed, and I don't really want to install the entire SDK onto the server just to get it.

PowerShell to the rescue!

The first thing I needed to do was enable PowerShell to load .Net 4 assemblies. This was done by adding a config file (powershell.exe.config) to the PowerShell home directory (run $pshome from a PowerShell console to find it), with the following contents:

<?xml version="1.0"?>
<configuration>
  <startup useLegacyV2RuntimeActivationPolicy="true">
    <supportedRuntime version="v4.0.30319" />
    <supportedRuntime version="v2.0.50727" />
  </startup>
</configuration>

You can then run the following commands to load the 4.0 instance of EnterpriseServices, which will install your libraries into the correct GAC depending on their target runtime:

$publish = New-Object System.EnterpriseServices.Internal.Publish
$publish.GacInstall("C:\Path\To\Your.Assembly.dll")  # GacInstall needs the full path to the assembly

Filed under: .Net, Fixes

Microsoft Surface and Windows 8

Posted 19 February 2013, 16:58 | by | | Perma-link

I pre-ordered the Windows Surface back in October and have been running it fairly happily ever since, but thought I ought to write up what I've found to be good and bad with it, especially in comparison to other tablets, notably the Nexus 7.

Family Safety

This is the key win for me: Windows 8 comes with Family Safety built in, which means I can set up separate accounts for each of my kids and enable curfews and time limits that just work. On top of that, I can restrict their web access appropriately, along with the game ratings from the store. They then get prompted that something's been blocked and can request permission. This is something that I've not seen on the Apple devices at all, and while the Nexus supports multiple accounts, there are no restrictions or sharing. Which leads me to:

App Sharing

Once you have accounts, the issue of purchasing apps comes up, and Microsoft's answer is that apps you've purchased can be installed on up to 5 machines, including on other people's accounts. So I purchase the app on my account, my son logs in, I go to his Store app, go into the settings and switch to my account. I can then see all the apps I've purchased but not installed on here, and install it for him after re-entering my password (I'm not that foolish). In general this works fairly well, and a lot better than it does on Android, where I have to link my account completely with his, including mail, etc.

Content Consumption

There are good and bad elements to the Surface form factor - clearly, being a 10" widescreen device, watching HD content on the iPlayer or similar works really well - especially with the HDMI output. Couple that with a USB input and we can easily pull the latest round of pictures off the camera, sort through them and share a few out there (just about - see App ecosystem below). For reading it can depend on the app: in portrait mode pages in Kindle Reader are a little too long, and it's a little top-heavy that way, but I've got a nice RSS reader that has a great three-column layout with feed categories, feeds and then the actual posts spreading across the landscape device.

Content Creation

Well, words at any rate - the touch-screen finger-painting style of creation works well across all platforms. For long-form note taking, the Type cover owns this space: there is no competition from on-screen options here. The Touch cover is good for incidental notes, but as others have noticed, it is a bit picky about the angle it works at, which is a shame. The on-screen keyboard is, however, very adaptive, allowing you to choose between a full keyboard for when the Surface is flat on a table or lap, and the split thumb keyboard, which is good for when you're carrying it - the keyboard is split in two, allowing your thumbs to reach all the keys even with the 10" screen. On top of that, the on-screen keyboard also has the left and right arrows for easy navigation and corrections.

App ecosystem

This is the biggest issue really: My wife was hoping to use the tablet occasionally for her work in Speech and Language Therapy, however all the apps out there are obviously for the iPad (even the Play Store seems a bit light in that respect, although there are some).

Filed under: Tablets, Windows 8

Working with Symplified

Posted 31 August 2012, 19:00 | by | | Perma-link

I've been working on a couple of Proof of Concept demos for a client that's looking to implement a Single Sign On solution on their new site, and one of the offerings was from Symplified. Seeing as there doesn't appear to be much out there on this, especially within an ASP.Net context I thought I'd write up my experience.

Symplified Network Overview

The first thing to realise is that Symplified works as a reverse proxy, sitting between your server and your users (reverse in that it's a proxy put in place by the site owner rather than by the user or their ISP). So all requests hit the Symplified app server first before they are forwarded on to your servers. All authorisation is handled by the Symplified app, so you shouldn't be locking things down with the authorization elements in web.config files.

However, you can still use some of the features that the framework provides you with a bit of care.

Membership Provider

I started off with the idea of implementing a custom Membership Provider to handle the authentication/authorisation aspects (as this had worked well in the previous PoC based on PingFederate).

CreateUser

You can still implement the CreateUser method in a custom membership provider, as you will need to provision users within Symplified, especially if you want to allow direct registration.

In Symplified's world, you will need to make three calls to a rest service:

  1. Create a session token
  2. Create a user
  3. Reset the user's password

You need to reset the password as by default the users appear to be created with an expired password, and resetting it to the same value fixes this - note that Symplified will also send an email to the user informing them that they've reset the password - you may want to suppress this.

Not too bad, however handling errors from the create user service is a little tedious:

  • If any of the parameters don't match the patterns expected, you'll get a 500 Internal Server Error returned, with plain text error messages in the XML response.
  • If the user already exists you'll get a 400 Bad Request, again with the error description in the XML.

These plain text error messages will need to be parsed and mapped to MembershipCreateStatus values to get sensible errors back to your controls.
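
As a rough sketch of that mapping (the error strings here are assumptions - match them against whatever your Symplified instance actually returns):

// using System.Net; using System.Web.Security;
private static MembershipCreateStatus MapCreateUserError(
    HttpStatusCode statusCode, string responseBody)
{
  // 400 Bad Request with a duplicate-user message in the body
  if (statusCode == HttpStatusCode.BadRequest &&
      responseBody.Contains("already exists")) {
    return MembershipCreateStatus.DuplicateUserName;
  }

  // 500 Internal Server Error when a parameter fails validation
  if (statusCode == HttpStatusCode.InternalServerError &&
      responseBody.Contains("password")) {
    return MembershipCreateStatus.InvalidPassword;
  }

  return MembershipCreateStatus.ProviderError;
}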

ValidateUser

You can't really implement the ValidateUser method, however, as there's nothing in the API you can call to do this: the user's login credentials need to be sent directly to Symplified's SSO application so that it can set its cookies appropriately and then pass some headers through to your "secure" areas.

So, how do you handle an authenticated user?

When the user is viewing a "secure" area of your site, Symplified will send a number of additional headers along with the request, which will include things like the username; these can then be used to generate a Forms Authentication ticket and a membership principal that you can fill in for the app to use later.

For the PoC I implemented that logic in a custom Module that hooks into the application's AuthenticateRequest event.
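
As a rough sketch of that module (the "X-Username" header name is an assumption - use whatever headers your Symplified deployment is configured to send):

// using System; using System.Security.Principal;
// using System.Web; using System.Web.Security;
public class SymplifiedAuthenticationModule : IHttpModule
{
  public void Init(HttpApplication application)
  {
    application.AuthenticateRequest += OnAuthenticateRequest;
  }

  private static void OnAuthenticateRequest(object sender, EventArgs e)
  {
    HttpContext context = ((HttpApplication)sender).Context;

    // Symplified adds headers to requests it has authenticated;
    // "X-Username" is a placeholder for the real header name.
    string username = context.Request.Headers["X-Username"];
    if (string.IsNullOrEmpty(username)) {
      return; // not an authenticated request - nothing to do
    }

    // Build a forms ticket and principal from the proxy-supplied identity
    // so the rest of the app sees a normal logged-in user.
    var ticket = new FormsAuthenticationTicket(username, false, 30);
    context.User = new GenericPrincipal(new FormsIdentity(ticket), new string[0]);
  }

  public void Dispose() { }
}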

OpenId Users

The one big issue so far has been around users authenticating via OpenId providers. These users are authenticated without a user being provisioned in Symplified, which could well be an issue for you. The solution we put in place within the PoC was to check for the headers stating this was a login from the OpenId provider, and then attempt to create a user within Symplified, ignoring the duplicate username error message - the Symplified engineers were looking at adapting the solution so that if the OpenId user matched a known user it would send additional headers, which would allow me to skip the creation step.

Next Steps

If we decide to go forward with Symplified there are a number of changes I'd like to make:

  • Only create the user context if the request comes from the Symplified app.
  • Implement the GetUser methods using the Symplified API.
  • Redirect requests to the Symplified appliance if they don't come from there.
  • Don't try and create the user on every single request!

Filed under: ASP.NET, SSO

Launching Soon: Crispin Marine

Posted 26 June 2012, 09:25 | by | | Perma-link

Just to keep things ticking over while job hunting, I'm working on a site for a friend of a friend:

Crispin Marine London

Pete Crispin's a London based Marine Surveyor, who consults on both commercial and private vessels, and operates worldwide.

We're working on the Information Architecture at the moment, and will be putting together a small CMS for him in the coming weeks.

Filed under: Websites

Setting up Office 365 Directory Synchronization and filtering out users after the effect

Posted 21 May 2012, 17:33 | by | | Perma-link

If you're considering moving to Office 365, you've probably been looking into the options for Directory Synchronisation. If your AD is anything like ours, you'll have a number of accounts for services, ex-staff, contractors, etc. that you don't really want synchronised up to Office 365.

By default there's no control over the filtering of accounts within the Directory Sync Configuration tool, and the content on setting this up has been "coming soon" for over 6 months, which is a shame. However, when you run the Office 365 Deployment Readiness Tool you'll see the following line in the reports:

Filters were applied to obtain the above object counts for an Office 365 deployment.

So clearly these filters are configured somewhere - but where?

Hunting around on the web, I found a very useful post covering the initial setup scenario from Credera: Filtering Users in the Office 365 Directory Synchronization Tool, where they talk about using the UI based on Forefront Identity Manager (FIM 2010) - and suggest you run this before DirSync runs. I failed to realise the implication of one aspect of this: even if you clear the "Synchronize directories now" checkbox so DirSync doesn't force the update, you've started the service and the 3 hour timer - so don't start this just before you go home for the weekend, otherwise you'll end up with all your accounts online like I did.

So how do you implement filtering after you've done the initial import? Well, it's not too painful once you've had a look around the systems.

Start by firing up the Synchronization Service Manager by running the miisclient.exe executable found deep in the Sync Service's UIShell folder, and switch to the "Management Agents" pane. Click on the "SourceAD" Management Agent line, and select Properties.
View Management Agent Properties

Then select the "Configure Connector Filter" (apparently you can also do things with the Directory Partitions - your milage may vary), select "user", and create a "New..." rule:
Configure Connector Filter

As I'm only interested in importing users from our "Staff" organizational unit, and groups from our "Company_UserGroups" OU, I set up one filter with the following rules:
Add new filter

"<dn>" "Does not contain" "OU=Staff"
"<dn>" "Does not contain" "OU=Company_UserGroups"

OK out of the dialogs and return to the "Operations" page. You now need to perform a full synchronisation to remove the filtered-out users:

From the Actions menu, select "Run..."
Actions | Run...

Ensure that "SourceAD" Management Agent is selected in the top dropdown, then select "Full Import Full Sync" from the list of Run profiles, and press "OK".
Full Import Full Sync report

Once that operation shows a status of "success", select "Run..." again, and this time switch to the "TargetWebService" management agent, choose the "Full Confirming Import" run profile, and press "OK".
Full Confirming Import report

Once that operation also shows a status of "success", you'll want to run the "Export" profile for the "TargetWebService" management agent.
Export report

As you can see, after each run you should see confirmation of deleted accounts in the reports.

To confirm that they've really gone away, you can then fire up a forced run of the standard sync using the PowerShell command Start-OnlineCoexistenceSync (available if you run the DirSyncConfigShell PowerShell script from the root of the DirSync install folder), or by re-running the Directory Sync Configuration Tool.

Just deleting the users from Office 365 using the Remove-MsolUser command obviously didn't work, as they were just recreated again with the next differential sync.

I also found that the event log will contain warnings that your configuration has changed and you need to perform a full sync for the changes to take effect:

The management agent "SourceAD" completed run profile "Delta Import Delta Sync" with a delta import or delta synchronization step type. The rules configuration has changed since the last full synchronization.
User Action
To ensure the updated rules are applied to all objects, a run with step type of full synchronization should be completed.

Filed under: Office365, PowerShell, Tools

Setting up SharePoint Information Worker Demo

Posted 06 May 2012, 19:30 | by | | Perma-link

There are a few blog posts out there already about setting up the SharePoint Information Worker Demo machines in a "dual boot from VHD" mode, but I found that I needed to pull a number of them together to get it all up and running to my satisfaction.

Firstly, download the latest 2010 IW Demo VM from Microsoft (at the time of writing, this was 2010-10, released 26 March 2012).

For basic SharePoint demos, you can get away with just Server A, which gives you everything except Exchange (Server B) and Lync (Server C).

This walkthrough assumes you don't have access to Hyper-V - which will have implications later on unfortunately.

Ensure you've also got yourself:

  • Optional: VirtualBox - a great little app for creating and running Virtual Machines - we're going to use this to get the basic configuration up and running before we set up the boot-to-VHD.
  • Drivers - You'll want to ensure that you have suitable drivers for a few bits of hardware - especially Graphics Cards and Wireless adaptors if you're running this on a laptop.

Configure your Virtual Machine

If you want to save yourself a little bit of pain later on, once you've downloaded and unpacked the Virtual Machine, fire up VirtualBox and create a new Windows 2008 Server. When prompted for a hard disk, attach the one you've just unpacked.

Before firing up the VM, you need to change the hard disk from SATA to IDE otherwise you'll be bluescreening during boot:

  1. Select the VM, and click on the Settings button.
  2. Switch to the Storage page, and remove the VHD from under the SATA controller - make sure you KEEP the actual files when prompted.
  3. Then, add a new Hard Disk under the IDE Controller (the stacked disk icon), and point it at the VHD.

You're now good to go. Fire up the VM and log in using the admin password "pass@word1" - remember that the VM is currently using the US keyboard layout, so the @ symbol is on Shift+2 (where UK keyboards have the quote mark).

Let the warm-up scripts run, but don't try to view SharePoint yet - we haven't configured the networking correctly; you should, however, be able to open the Central Admin site to confirm things are working.

You can either configure standalone networking now, or move on to the Dual Boot options.

Setup Networking

Open up the device manager, select the root computer node (demo2010a), and then from the Actions menu select "Add Legacy Device".

In the wizard, you want to add a "Network adapter", then select "Microsoft" from the list of manufacturers, and "Microsoft Loopback Adapter" from the list of network adapters.

Then open up the Network and Sharing Center, and go through to Configure Network Connections. Find the new adaptor (it'll have a description of "Microsoft Loopback Adapter"), right click on it, and select "Properties". Then select the "Internet Protocol Version 4 (TCP/IPv4)" item, and press the Properties button.

Set up the connection as follows:

  • Use the following IP address:
    • IP address:
    • Subnet Mask:
    • Default Gateway: Leave Blank.
  • Use the following DNS server addresses:
    • Preferred DNS server:

And click OK. You'll then get a warning that the IP address you've entered is already assigned to another adapter (Microsoft Virtual Machine Bus Network Adaptor) which is no longer present in the computer. Select "Yes" you do want to remove the static IP configuration for the absent adapter.

You should now be able to view the empty Intranet Team site on http://intranet/ (the default homepage may not work at this point; however, everything else should now be in place, including the profiles and search - try searching for "Tad" or "Erika").

Dual Boot

Next it's time to set up our Dual Boot setup - I've been happily following Scott Hanselman's advice on this for some time, and you should too: Less Virtual More Machine - The magic of Boot to VHD.

If you've left the VHD in the default unpacked location, there's a space in the folder path - when using bcdedit you need to ensure that the path is quoted, for example:

bcdedit /set {guid} device vhd="[C:]\VHDs\2010-10a\Virtual Hard Disks\2010-10a.vhd"

Note also that these VHDs are expandable to 125GB, and when you boot to VHD they need to be able to fully expand, even if they're not using it all - so make sure you've cleared off the .rar files, etc. before continuing.

Enabling the Wireless Network

Once you're up and running, if you're on a Laptop, you'll probably want to enable the Wireless Networking Service:

The key thing to do here is to install the Feature from the Server Management view, called "Wireless LAN Service" - once that's configured, you can install your WiFi card drivers and connect to a wireless network (assuming you've configured some AV software, etc.).

Fixing the Locale

If you're like me and not based in the US, you may find typing things like the @ symbol tediously complex, in which case you will probably want to change the system locale from English (United States) to something more in line with your keyboard layout.

This is fairly easily accomplished through the "Region and Language" control panel, but don't forget to switch to the Administrative tab and copy the keyboard settings over to the "Welcome screen and new user accounts" - you'll need to restart for these changes to take effect.

Once you've done this however, you will probably notice that nickname and phonetic searching no longer work, so take a look at my short summary of helpful links for enabling SharePoint People Search.


Limitations

The core limitation that I've found with doing it this way is that the content packs aren't installed by default, and unless the VM is attached to a Hyper-V system you can't install them using the downloadable content packs, as these work by attaching a (very) virtual DVD to the VM - which won't work with VirtualBox or dual-booted systems.

I've currently managed to get around this by ripping the content database out of an old instance of the VM (hunt around for 2010-07a) which had the content pre-configured, however I'm still missing some of the walkthrough content - most notably the KPI and Business Connectivity configuration.

Filed under: SharePoint

Custom Master Pages in SharePoint without Publishing Features

Posted 23 March 2012, 17:30 | by | | Perma-link

There are a number of useful blog posts out there that talk about applying Custom Master Pages to SharePoint Team Sites, Blogs, etc., and some, like Bugra Postaci's post, even use stapling to do this automatically.

However, at the end of the day, they all have one thing in common: They claim you need to activate the Publishing Features to do this, even when you're doing it programmatically.

Assigning custom Master Pages to Out-of-the-Box SharePoint sites without activating Publishing Features

Obviously this all needs to happen within a SharePoint project, and assumes that you've created a customised Master Page in your Master Page Gallery.

Starting from a Blank SharePoint project, that is set to "Deploy as a Farm Solution" I added a new feature, and scoped it to the "Web - Activates a Feature for a specific web site.":

A new feature with the scope set to Web

Right-click on the feature in the solution explorer, and add an event receiver:

Add Event Receiver

Implement the FeatureActivated method (note that we're also setting the Site Master as well as the System Master, that way if the Publishing Features are enabled later they will be set correctly):

public override void FeatureActivated(SPFeatureReceiverProperties properties)
{
  // Get the SPWeb we're being activated in.
  var web = (SPWeb)properties.Feature.Parent;

  if (null != web) {
    // Get the Site Collection root path to get the master page gallery.
    string siteCollectionRoot = web.Site.RootWeb.Url;

    // Set the Site Master to Custom.master
    var siteMaster = new Uri(siteCollectionRoot +
                             "/_catalogs/masterpage/Custom.master");
    web.CustomMasterUrl = siteMaster.AbsolutePath;

    // Set the System Master to Custom.master
    var systemMaster = new Uri(siteCollectionRoot +
                               "/_catalogs/masterpage/Custom.master");
    web.MasterUrl = systemMaster.AbsolutePath;

    // Clear the Alternate CSS
    web.AlternateCssUrl = string.Empty;

    // Save the changes back to the web
    web.Update();
  }
}
And FeatureDeactivating to set things back to the default master page:

public override void FeatureDeactivating(SPFeatureReceiverProperties properties)
{
  // Get the SPWeb we're being deactivated in.
  var web = (SPWeb)properties.Feature.Parent;

  if (null != web) {
    // Get the Site Collection root path to get the master page gallery.
    string siteCollectionRoot = web.Site.RootWeb.Url;

    // Set both Master Pages back to v4.master
    var defaultMaster = new Uri(siteCollectionRoot +
                                "/_catalogs/masterpage/v4.master");
    web.CustomMasterUrl = defaultMaster.AbsolutePath;
    web.MasterUrl = defaultMaster.AbsolutePath;

    // Clear the Alternate CSS
    web.AlternateCssUrl = string.Empty;

    // Save the changes back to the web
    web.Update();
  }
}
All fairly simple so far, and once you've deployed this feature, you can activate it and it will apply your custom master pages to your team sites. As you've hopefully already guessed, these are based on the "System Master Page" - so you really need to ensure that your template is left aligned and fluid - there's a whole other post hidden in that statement I'm sure.

Automatically assigning Custom Master pages on new sites as they are created

Create another feature in your project, and this time scope it to at least "Site - Activates a Feature for all web sites in a site collection". Your mileage may vary though, and will depend on the setup of your Farm - if you have multiple Web Applications that don't all have this branding applied to them, then you don't want to be scoping this at the Farm level. If the majority of your Web Applications share the same branding, scoping it to WebApplication and deactivating it on those Web Applications that don't use the branding might suffice.

Next, add a new "Empty Element" to the project:

New Empty Element

First, open the CustomMasterPages feature, and switch to the "Manifest" view to find the id of the feature:

Switch to Feature Manifest view

Then in the Elements.xml add the details of the features you want to activate:

<Elements xmlns="">
  <FeatureSiteTemplateAssociation Id="e5f2c515-d45f-4096-a0b1-7d486a4d011a"
                        TemplateName="GLOBAL" />

As you can see, no Publishing Features were harmed in the making of this feature. I've set this to apply my custom Master Pages to every site template so that the users have a nice consistent branding throughout their Intranet - however you can pick and choose which ones you want here, or you could do it in code during the feature activation if (for example) you want to apply a different Master Page to the Blog Site template.
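
If you did want to branch in code, a minimal sketch inside the FeatureActivated method above might look like this (Blog.master is a hypothetical blog-specific master page):

// SPWeb.WebTemplate returns the template name, e.g. "BLOG" for the Blog Site.
string masterPage = web.WebTemplate == "BLOG"
    ? "Blog.master"     // hypothetical blog-specific master page
    : "Custom.master";

var master = new Uri(siteCollectionRoot + "/_catalogs/masterpage/" + masterPage);
web.CustomMasterUrl = master.AbsolutePath;
web.MasterUrl = master.AbsolutePath;
web.Update();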

Then in the stapling feature, add the Element:

Add the Elements to the Feature

Save, deploy and test. You should now have your custom master pages in action every time you create a new site within SharePoint.

Filed under: SharePoint