Basic Logging within Web Jobs


As a developer it's easy to debug code locally: you can step through it and understand exactly what is happening at each point. Once the code is shipped to a production environment, however, you are virtually blind to what is happening unless you have put sufficient logging in place. (Or you are lucky enough to be able to attach Visual Studio to the production environment, something you probably shouldn't do.)

The basic way to log within WebJobs is using Console.WriteLine or Trace from the System.Diagnostics namespace.

Console.WriteLine() logs

Console.WriteLine writes only to the Web Job logging window. These entries are not categorised as errors, warnings or information; they are just plain text printed to the screen.
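
As a minimal sketch (the structure and messages here are my own, not taken from the original post), a triggered WebJob console app writing to this window could be as simple as:

using System;

class Program
{
    static void Main()
    {
        // Anything written to the console appears in the WebJob run log window,
        // but it is plain text - no error/warning/information categorisation.
        Console.WriteLine("WebJob started at " + DateTime.UtcNow);

        // ... do the actual work here ...

        Console.WriteLine("WebJob finished.");
    }
}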

To get to the Web Job logging screen, head to https://portal.azure.com and select your web app. On the All settings blade, select Web Jobs; the Web Jobs blade lists all the web jobs currently assigned to this web app. The logs URL is shown on the right hand side of the screen. Click this link and it will take you through to your logs. The URL is normally similar to the following: https://<WebAppName>.scm.azurewebsites.net/azurejobs/#/jobs/<ContinuousOrTriggered>/<WebJobName>

Trace logs

The trace logs are stored within Azure Storage blobs or tables, which need to be set up and configured (shown later in this blog post). They do not show up in the Web Job window described above for Console.WriteLine(). Please note that trace logs are not just for Web Jobs; you can use them in your web application, and most of this information also applies to Web Apps.

Within your code, all you need to do is add a using statement for System.Diagnostics. You then have a choice of logging an information, warning, error or verbose message.

System.Diagnostics.Trace.TraceInformation("This creates an information log");
System.Diagnostics.Trace.TraceWarning("This creates a warning log");
System.Diagnostics.Trace.TraceError("This creates an error log");
System.Diagnostics.Trace.WriteLine("This creates a verbose log");

Configuring Blob and Table logging.

Unfortunately I'm unable to find any way of doing this in the new Azure Portal. Therefore, for this set of instructions I'm using the old Azure Portal: https://manage.windowsazure.com

First you will need to create a Storage Account if you don't already have one to use. In the old portal, you can do this by clicking the New button at the bottom of the screen and then selecting Data Services > Storage > Quick Create.

Now head to your Web App, click on the CONFIGURE tab, and scroll down until you reach the Application Diagnostics section.

In the Application Diagnostics section there are 3 different places you can set up logging.

  1. File System – These logs are written to the Web App file system; you can access them from the FTP share for this Web App. Please note: application logging to the file system is only enabled for 12 hours. (Not part of this post.)
  2. Table Storage – The logs are collected in the table storage that is specified. You can access the traces from the specified table in the storage account. Please note: these logs remain in the storage table, growing in size, until someone clears them.
  3. Blob Storage – The logs are collected in the blob container that is specified. You can access the traces from the specified container in the storage account. Please note: by default, application diagnostic logs are never deleted, however you have the option to set a retention period between 1 and 99999 days.

Configuring Blob Storage

Following on from the last section, to configure the Blob Storage click the ON button of the Application Logging (Blob Storage). Once you have clicked ON you will be presented with some other values.

The logging level options are Verbose, Error, Warning and Information. The level you set determines which traces appear in the logs.

  • Verbose – Shows all logs.
  • Information – Shows Information, Warnings and Errors.
  • Warnings – Shows only Warnings and Errors.
  • Errors – Shows just Errors.

Next, click on manage blob storage and you will be presented with a dialog. Select the Storage Account you created previously, then choose Create a new blob container. Give your container a name and click the tick button.

Next you can set the retention in days to store the logs. Click Save on the page and the Blob Storage has been set up.

Configuring Table Storage

Configuring the Table Storage isn't much different from configuring the Blob Storage; the only real difference is that you create a new table instead of a new blob container. Click the ON button of Application Logging (Table Storage). Once you have clicked ON you will be presented with some other values.

The logging level options are the same as for Blob Storage – Verbose, Information, Warning and Error – and they filter the traces in the same way as described above.

Next, click on manage table storage and you will be presented with a dialog. Select the Storage Account you created previously, then choose Create a new table. Give your table a name and click the tick button.

How to view the application diagnostic logs.

The biggest issue with Blob and Table storage in Azure is that there isn't a simple way within Azure itself to view the information stored within them. There are plenty of free 3rd party tools that allow you to view blob and table storage. Azure Storage Explorer 6 is a free Windows application you can use; it's available on CodePlex at https://azurestorageexplorer.codeplex.com/. As useful as it is, it can be a bit painful: you need to download a blob file to view it once found, or filter the table to find the logs you are looking for. It is also a Windows application looking at your Azure storage, meaning that if you use multiple PCs you need to ensure it is installed on every PC you use.

I try not to suggest 3rd party apps/extensions in my blog posts, however I do like the Azure Web Site Logs Browser written by Amit Apple. Amit Apple's blog (http://blog.amitapple.com/) seems to live and breathe Web Jobs, and I have learnt many things about Web Jobs from it. You can install his extension directly from the new portal or from Site Extensions within your website's SCM site. I will show both ways below.

Installing Azure Web Site Log Browser via Azure Portal

  • Head to your Web App in the Azure new portal. https://portal.azure.com
  • In the Web-App blade, along the menu buttons, select Tools.
  • On the Tools blade, select Extensions from the bottom of the Develop section.
  • Then in the Installed web app extensions blade, click Add.

  • On the Add Web Extension blade, you can choose an extension from the Choose web app extension blade. At the time of writing, Azure Web Site Logs Browser is the 5th extension in the list. Click Azure Web Site Logs Browser.

  • Then click the OK button to accept terms. Then lastly, click the OK button to add the extension to your site. It takes a few moments to install into your Web App. It will only be installed on this Web Application.

  • Lastly, on the Web App blade, click the Restart button. This will ensure the Azure Web Site Logs Browser has fully installed.
  • To check it has worked, click on the extension within the Installed web app extensions blade, then click the Browse button on the Azure Web Site Logs Browser blade.
  • This will take you to https://<YourWebApp>.scm.azurewebsites.net/websitelogs/#

Installing Azure Web Site Logs Browser via SCM website.

  • Click on the Gallery tab, this will display all available site extensions.

  • As you can see from the screen shot, Azure Web Site Logs Browser is the first item in the second row at the time of writing this. You could search for it, by typing Azure Web Site Logs in search box.
  • Click the + button and this will install Azure Web Site Logs Browser to your Web App.
  • After it has successfully installed, click the Restart Site button in the top right hand corner of the screen. This will ensure everything has been loaded up correctly.
  • Click on the Installed tab of Site extensions and then click the play button.
  • This will take you to https://<YourWebApp>.scm.azurewebsites.net/websitelogs/#

Viewing Application Logs – Blob Storage.

In my web job, which I have set up to log to both Blob and Table storage, I have a simple loop that writes some trace information, warnings and errors, with some thread sleeps in between to make sure the web job lasts more than 10 seconds.

Trace.WriteLine("We are about to loop."); 

for (int i = 0; i < 20; i++)
{
   Trace.TraceInformation(String.Format("This is an information message brought to you by BasicLogging Web Job, looping for the {0} time", i)); // Write an information message
   Thread.Sleep(2000);

   Trace.TraceWarning("I'm warning you, don't get me angry.");
   Thread.Sleep(2000);

   Trace.TraceError("That's it, I'm annoyed now!");
   Thread.Sleep(2000);

   Trace.TraceInformation("Ok, I'm sorry, I got a little mad I'm OK now.");
   Thread.Sleep(4000);
}
Trace.WriteLine("End of the loop.");

I have then run this web job once. Now I wish to view the output in Blob storage within my site extension. You would think this would be easy to find; unfortunately it's not, and this is down to Azure, not the site extension. The location of the blob file depends on the name of the application, the name of the web job, the date/time, the website instance and the website process ID of the web job. Please note: if the web job runs from one hour into the next, it will create the same named file in two different hour folders.

Typical path would look like:

/blobapplication/<WebSiteName>-<TriggeredOrContinous>-<WebJobName>/<CurrentYear>/<CurrentMonth>/<CurrentDay>/<CurrentHour>/<InstanceIdFirst6Char>-<ProcessId>.applicationLog.csv

If you have a web job that runs every hour on the hour, then it's not that difficult to find, but a web job that runs on demand can be. Therefore I write the location of the log to the Web Job log using Console.WriteLine, with code that calculates the location of the file. This way, if I ever need to look into the trace logs of a given web job instance, I just view the web job to get the link to the file.

var instanceID = Environment.GetEnvironmentVariable("WEBSITE_INSTANCE_ID").Substring(0, 6);
var pid = Process.GetCurrentProcess().Id;
var currentTime = DateTime.UtcNow;
var filename = instanceID + "-" + pid + ".applicationLog.csv";

//Location of the blob file path within the Azure Web Site logs extension
var filePath = "https://" + Environment.GetEnvironmentVariable("WEBSITE_SITE_NAME") + 
                             ".scm.azurewebsites.net/WebSiteLogs/#/blobapplication/" + 
                             Environment.GetEnvironmentVariable("WEBSITE_SITE_NAME") + 
                             "-triggered-" + Environment.GetEnvironmentVariable("WEBJOBS_NAME") + "/" +
                            currentTime.Year + "/" + currentTime.Month.ToString().PadLeft(2,'0') + "/"  + 
                           currentTime.Day.ToString().PadLeft(2, '0') + "/" +
                           currentTime.Hour.ToString().PadLeft(2, '0') + "/" ;

//Location of the blob file to download within the Azure Web Site log Extension
var linkToDownload = "https://" + Environment.GetEnvironmentVariable("WEBSITE_SITE_NAME") + 
                     ".scm.azurewebsites.net/WebSiteLogs/api/log?path=/blobapplication/" +
                     Environment.GetEnvironmentVariable("WEBSITE_SITE_NAME") + 
                     "-triggered-" + Environment.GetEnvironmentVariable("WEBJOBS_NAME") + "/" +
                     currentTime.Year + "/" + currentTime.Month.ToString().PadLeft(2,'0') + "/"  +
                     currentTime.Day.ToString().PadLeft(2, '0') + "/" +
                     currentTime.Hour.ToString().PadLeft(2, '0') + "/" + filename + "&download=true";

Console.WriteLine(String.Format("FilePath Link: {0}", filepath));
Console.WriteLine(String.Format("Download csv file: {0}", linkToDownload));

Using the Azure Web Site Logs Browser, you can view the CSV log file directly in the browser. The filePath link that I display in the Web Job log window takes you straight to the right folder in the Azure Web Site Logs Browser.

By clicking on the file it opens the CSV in the browser.

Using this browser, you can view the logs. You can use Search to find key words within your application.

The columns available within blob storage are defined by Azure; you cannot add or remove them. These columns give you more granular information about the event. The following properties are used for each row in the CSV:

  • Date – The date and time that the event occurred
  • Level – The event Level (Verbose, Error, Warning, Information)
  • Application Name – The web app name, in our case the WebApp – TypeOfWebJob – WebJobName.
  • InstanceId – Instance of the Web app that the event occurred on.
  • EventTickCount – The date and time that the event occurred, in Tick format
  • EventId – The event ID of this event, defaults to 0 if none specified.
  • Pid – Process ID of the web job
  • Tid – The thread ID of the thread that produced the event.
  • Message – The event detail message
  • ActivityID – This is only included if you set up the Activity ID in the code. This can be useful especially for continuous webjobs as all the jobs processed will be entered into the same csv file.

Setting the Activity ID in code

//Add to the start of the code
var correlationGuid = Guid.NewGuid();
Trace.CorrelationManager.ActivityId = correlationGuid;
Console.WriteLine(String.Format("ActivityID for this job: {0}, correlationGuid"));

Viewing Application Logs – Table Storage.

The URL for table logs within the Azure Web Site Log Browser is https://<WebSiteName>.scm.azurewebsites.net/websitelogs/viewtable.cshtml

On landing on the page, it shows the last 20 items found within the table. However, you can change the date, the number of items, or what to search for using the search box. There is also sorting on the columns. I have put the ActivityID in the search box (see the end of the last section for how to assign the Activity ID in code), and this has brought back all log items for that web job.

The columns available within table storage are defined by Azure; you cannot add or remove them. These columns give you more granular information about the event. The following properties are used for each row in the table:

  • Date – The date and time that the event occurred
  • Level – The event Level (Verbose, Error, Warning, Information)
  • Instance – Instance of the Web app that the event occurred on.
  • Activity – This is only included if you set up the Activity ID in the code.
  • Message – The event detail message

If you open a row of the table using Visual Studio, Azure Storage Explorer 6 or another tool, there are additional columns containing information that isn't shown within the Azure Web Site Logs Browser (a short query sketch follows the list below).

  • PartitionKey – The Date Time of the event in yyyyMMddHH format
  • RowKey – A GUID value that uniquely identifies this entity
  • Timestamp – The date and time that the event occurred
  • EventTickCount – The date and time that the event occurred, in Tick format
  • Application Name – The Web App Name
  • EventId – The event ID of this event, defaults to 0 if none specified.
  • InstanceId – Instance of the Web app that the event occurred on.
  • Pid – Process ID of the web job
  • Tid – The thread ID of the thread that produced the event.
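
If you want to pull these rows out in code rather than via the browser extension, the following is a hedged sketch using the WindowsAzure.Storage SDK (the same NuGet package used later in this post). The table name is a placeholder; use whichever table you configured for application logging.

using System;
using Microsoft.Azure;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Table;

class LogTableReader
{
    static void Main()
    {
        CloudStorageAccount storageAccount = CloudStorageAccount.Parse(
            CloudConfigurationManager.GetSetting("StorageConnectionString"));
        CloudTableClient tableClient = storageAccount.CreateCloudTableClient();

        // Placeholder name - use the table you selected when enabling Table Storage logging.
        CloudTable logTable = tableClient.GetTableReference("applicationlogs");

        // PartitionKey is the event hour in yyyyMMddHH format (see the column list above),
        // so filtering on it returns one hour's worth of log entries.
        string hourKey = DateTime.UtcNow.ToString("yyyyMMddHH");
        var query = new TableQuery<DynamicTableEntity>().Where(
            TableQuery.GenerateFilterCondition("PartitionKey", QueryComparisons.Equal, hourKey));

        foreach (DynamicTableEntity row in logTable.ExecuteQuery(query))
        {
            Console.WriteLine("{0} | {1} | {2}",
                row.Timestamp,
                row.Properties["Level"].PropertyAsObject,
                row.Properties["Message"].PropertyAsObject);
        }
    }
}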

References:

Amit Apple Blog – http://blog.amitapple.com/ (Creator of the Azure Web Site Log Browser)

Microsoft Azure – Enable diagnostics logging for web apps in Azure App Service – https://azure.microsoft.com/en-us/documentation/articles/web-sites-enable-diagnostic-log/

Azure Table Storage and pessimistic concurrency using an Azure blob


In my previous two blog posts, I have spoken about concurrency using Azure Blobs and Azure Table Storage.

Azure Blob Storage and managing concurrency

Azure Table Storage and managing concurrency

In Azure Table Storage and managing concurrency, I state that the only option you have for concurrency is optimistic concurrency: when performing an update, the service verifies whether the data has changed since the application last read it. However, there is a way to perform pessimistic locking on an Azure table entity: assign a designated blob for each table, and take a lease on the blob before operating on the table.

This blog post will walk you through creating a solution that allows you to perform pessimistic locking for an Azure table entity. My solution shows two methods. The first is a single-threaded application that tries to update a number in the Azure table. If we run the program twice at the same time, one of the programs will be blocked and receive an HttpStatusCode of 409 (Conflict).

You will need the following NuGet packages installed for the code to work:

  • Microsoft.WindowsAzure.ConfigurationManager
  • WindowsAzure.Storage

The SingleThreadTableStorageUpdate() method will first obtain the values from the app.config. These values are:

  • BlobStorageFileName – The filename of the blob that will be assigned a lease.
  • BlobStorageContainerReference – The Blob container, that will hold the blob file.
  • TableStorageReference – The name of the Table within Azure Storage.
  • StorageConnectionString – The connection string to Azure Storage.
//Obtain the BlobStorage information
String filename = System.Configuration.ConfigurationManager.AppSettings["BlobStorageFileName"];
String blobStorageContainerRef = System.Configuration.ConfigurationManager.AppSettings["BlobStorageContainerReference"];
String blobStorageTableRef = System.Configuration.ConfigurationManager.AppSettings["TableStorageReference"];
String connectionString = CloudConfigurationManager.GetSetting("StorageConnectionString");

//Instantiate the NextNumber class
NextNumber nextNumber = new NextNumber(filename, blobStorageContainerRef, blobStorageTableRef, connectionString);

When the NextNumber class is instantiated, it checks for or creates the blob file and the table entity.

class NextNumber
{
 private readonly CloudBlobContainer _leaseContainer;
 private readonly CloudTable _table;
 private readonly String _filename;

 public NextNumber(string filename, string blobContainerReference, string tableStorageReference, string storageConnectionString)
 {
   //Get Connection to Storage.
   CloudStorageAccount storageAccount = CloudStorageAccount.Parse(storageConnectionString);
   _filename = filename;

   //This creates a Blob
   CloudBlobClient blobClient = storageAccount.CreateCloudBlobClient();
   _leaseContainer = blobClient.GetContainerReference(blobContainerReference);
   _leaseContainer.CreateIfNotExists();

   //This creates a table.
   CloudTableClient tableClient = storageAccount.CreateCloudTableClient();
   _table = tableClient.GetTableReference(tableStorageReference);
   _table.CreateIfNotExists();

   try
   {
       //Get a reference to the blob.
       CloudBlockBlob blob = _leaseContainer.GetBlockBlobReference(String.Format("{0}.lck", _filename));
       if (!blob.Exists())
       {
           //Blob doesn't exist therefore create table entity.
           NextNumberEntity entity = new NextNumberEntity
          {
             PartitionKey = _filename,
             RowKey = "",
             NumberValue = 0
          };

          //Upload blob with some information in file
          blob.UploadText("Created on " + DateTime.UtcNow);
          //Insert entity into table.
         _table.Execute(TableOperation.Insert(entity));
       }
   }
   catch (Exception ex)
   {
       Console.WriteLine("Error happened " + ex.Message);
   }
 }
}

Once I know that the blob file and table actually exist, I'm then able to get the next number and update the entity in the table.

nextNumber.GetNextNumber()

Below is the code for the GetNextNumber() method. Within the code I grab the blob's lease properties and display them in the console. This is useful to see the state of the current blob object and whether there is a lease on it.

internal int GetNextNumber()
{
   //Get blob reference and display current lease information
   CloudBlockBlob blob = _leaseContainer.GetBlockBlobReference(String.Format("{0}.lck", _filename));
   blob.FetchAttributes();

   Console.WriteLine(String.Format("LeaseDuration = {0}", blob.Properties.LeaseDuration));
   Console.WriteLine(String.Format("LeaseState = {0}", blob.Properties.LeaseState));
   Console.WriteLine(String.Format("LeaseStatus = {0}", blob.Properties.LeaseStatus));

   //Acquire the lease for 30 seconds.
   string leaseId = blob.AcquireLease(TimeSpan.FromSeconds(30), Guid.NewGuid().ToString());
   var nextNumber = 0;
   Console.WriteLine();

   Console.WriteLine(String.Format("Aquired lease on blob ID: {0}", leaseId));
   Console.WriteLine();

   try
   {
        //Get and display current lease information
        blob.FetchAttributes();
        Console.WriteLine(String.Format("LeaseDuration = {0}", blob.Properties.LeaseDuration));
        Console.WriteLine(String.Format("LeaseState = {0}", blob.Properties.LeaseState));
        Console.WriteLine(String.Format("LeaseStatus = {0}", blob.Properties.LeaseStatus));

        //Retrieve the entity out of the Azure table.
        TableResult tableResult = _table.Execute(TableOperation.Retrieve<NextNumberEntity>(_filename, ""));
        NextNumberEntity entity = (NextNumberEntity)tableResult.Result;
        //Update the number
        entity.NumberValue++;
        //Add back into Azure table.
        _table.Execute(TableOperation.Replace(entity));
        nextNumber = entity.NumberValue;
        //Wait to extend the time this calling code hold the lease for (demo purposes)
        Thread.Sleep(TimeSpan.FromSeconds(10));
    }
    catch (Exception ex)
    {
        Console.Write("An error: " + ex.Message);
    }
    finally
    {
       //Release the blob.
       blob.ReleaseLease(AccessCondition.GenerateLeaseCondition(leaseId));
    }
    return nextNumber;
  }

Lastly I have a class that is my NextNumberEntity. This inherits Microsoft.WindowsAzure.Storage.Table.TableEntity.

class NextNumberEntity : TableEntity
{
     public int NumberValue { get; set; }
}
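
For completeness, below is a sketch of how these pieces might be wired together in a console Main. The original post doesn't show the full Program class, so this is an assumption based on the snippets above.

static void Main(string[] args)
{
    //Read the settings described earlier from app.config
    String filename = System.Configuration.ConfigurationManager.AppSettings["BlobStorageFileName"];
    String blobStorageContainerRef = System.Configuration.ConfigurationManager.AppSettings["BlobStorageContainerReference"];
    String tableStorageRef = System.Configuration.ConfigurationManager.AppSettings["TableStorageReference"];
    String connectionString = CloudConfigurationManager.GetSetting("StorageConnectionString");

    //Instantiate the class (this creates the blob and table entity if needed), then get the next number
    NextNumber nextNumber = new NextNumber(filename, blobStorageContainerRef, tableStorageRef, connectionString);
    Console.WriteLine("Next number: {0}", nextNumber.GetNextNumber());

    Console.WriteLine("Press enter to exit");
    Console.ReadLine();
}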

If I run the above code, it creates the blob file, the table, and updates the number from 0 to 1 in the table.

Above shows the leaseobject blob container, and the nextnumbers table.

Above shows the Cann0nF0dderDemo.lck file created within the leaseobject blob container.

Above shows the nextnumber table with the number value set to one.

Above shows the console window. You can see that before the lease was acquired, the LeaseState for the blob is Available with the LeaseStatus set to Unlocked. As soon as a lease was acquired, the LeaseState changed to Leased and the LeaseStatus to Locked. Lastly the console displays the next number, which is one.

If I run two instances of the program and get them trying to acquire the lease at the same time, one errors with a conflict.

So how can I ensure that both instances run and eventually both get a number? I've written another method in the NextNumber class, similar to GetNextNumber, called GetNextNumberWithDelay(). If a conflict is detected, it retries until it has obtained the next number.

internal int GetNextNumberWithDelay()
{
   //Get the blob reference
   CloudBlockBlob blob = _leaseContainer.GetBlockBlobReference(String.Format("{0}.lck", _filename));
   var nextNumber = 0;
   bool gotNumber = false;

   while (!gotNumber)
   {
       try
       {
         //Acquire the lease for 60 seconds.
         string leaseId = blob.AcquireLease(TimeSpan.FromSeconds(60), Guid.NewGuid().ToString());
         Console.WriteLine("Acquired Lease to update number");

         try
         {
              //Retrieve the entity out of the Azure table.
              TableResult tableResult = _table.Execute(TableOperation.Retrieve<NextNumberEntity>(_filename, ""));
              //Wait to extend the time this calling code hold the lease for (demo purposes)
              Thread.Sleep(TimeSpan.FromSeconds(10));

              NextNumberEntity entity = (NextNumberEntity)tableResult.Result;
              //Update the number
              entity.NumberValue++;

              //Add back into Azure table.
              _table.Execute(TableOperation.Replace(entity));
              nextNumber = entity.NumberValue;
              Console.WriteLine();

              Console.WriteLine(String.Format("The next number is: {0}", nextNumber));
          }
          catch (Exception inner)
          {
              Console.WriteLine("Another Error: " + inner.Message);
          }
          finally
          {
              //Release the blob
              blob.ReleaseLease(AccessCondition.GenerateLeaseCondition(leaseId));
              gotNumber = true;
           }
       }
       catch (Microsoft.WindowsAzure.Storage.StorageException se)
       {
           var response = se.RequestInformation.HttpStatusCode;
           if (response != null && (response == (int)HttpStatusCode.Conflict))
           {
               //A Conflict has been found, lease is being used by another process, wait and try again.
               Console.Write(".");
               Thread.Sleep(TimeSpan.FromSeconds(2));
           }
           else
           {
               throw;
           }
        }
    }

   Thread.Sleep(TimeSpan.FromSeconds(3));
   return nextNumber;
}

Just to really test my code, instead of calling GetNextNumberWithDelay() just once, I'm going to use Task threading and call it 5 times at once. This way, if I run two instances of the program, I'm requesting 10 different numbers across 2 instances and 10 different threads. The lease on the blob will only allow one thread at a time to request a number, making all the other threads wait. Before I ran this, I reset the NumberValue in the storage table back to 0.

//Run the GetNextNumber code 5 times at once. 
Task.WaitAll(new[]{
             Task.Run(() => nextNumber.GetNextNumberWithDelay()),
             Task.Run(() => nextNumber.GetNextNumberWithDelay()),
             Task.Run(() => nextNumber.GetNextNumberWithDelay()),
             Task.Run(() => nextNumber.GetNextNumberWithDelay()),
             Task.Run(() => nextNumber.GetNextNumberWithDelay())
});

When the code runs, one thread on one instance will obtain the lease, every other thread (on both instances) will have to wait, and will display a dot “.”

As you can see from above, both instances obtained 5 different numbers, however each instance didn’t receive 5 sequential numbers, and together, they never grabbed the same number twice.

This concludes my blog posts on Azure Blob/Table concurrency.

Code: http://1drv.ms/1JBoIek

Azure Table Storage and managing concurrency


In my previous post, Azure Blob Storage and managing concurrency, I wrote about storing a blob and then using either:

  • Optimistic concurrency – When performing an update, it will verify if the data has changed since the application last read that data.
  • Pessimistic concurrency – When performing an update, it will acquire a lock, preventing other processes from trying to update it.

When using Azure Table storage you only have the option of using optimistic concurrency. If pessimistic locking is needed, one approach is to assign a designated blob for each table, and try to take a lease on the blob before operating the table. I discuss this in my next post, Azure Table Storage and pessimistic concurrency using an Azure blob.

Optimistic concurrency

Every entity that is added to the storage is assigned an ETag. Every time the entity changes, the ETag also changes. It is this ETag that is used to determine whether the entity has changed since the process last read it. The steps are:

  • Read the entity from the table storage, and grab the ETag header value.
  • Update the entity, passing in the ETag value from reading the entity from previous step.
  • If the ETag passed in matches the ETag of the current entity in the storage table, the entity is updated and a new ETag is assigned.
  • If the ETag passed in doesn't match the ETag of the current entity in the storage table, because another process has changed it, an HttpStatusCode of 412 (PreconditionFailed) is returned and the entity isn't updated.

The below code (taken partly from Managing Concurrency using Azure Storage – Sample Application) shows an example of optimistic concurrency. It also shows how to ignore concurrency completely, to simulate a different process updating the entity. You will need the following NuGet packages installed for the code to work:

  • Microsoft.WindowsAzure.ConfigurationManager
  • WindowsAzure.Storage
internal void DemonstrateOptimisticConcurrencyUsingEntity()
{
    Console.WriteLine("Demo - Demonstrate optimistic concurrency using a table entity");
    CloudStorageAccount storageAccount = CloudStorageAccount.Parse(CloudConfigurationManager.GetSetting("StorageConnectionString"));
    string tableStorageReference = System.Configuration.ConfigurationManager.AppSettings["TableStorageReference"];

   CloudTableClient tableClient = storageAccount.CreateCloudTableClient();
   CloudTable nextNumberTable = tableClient.GetTableReference(tableStorageReference);
   nextNumberTable.CreateIfNotExists();

   //Add new entity to table (requires PartitionKey and RowKey to work)
   NextNumberEntity originalNumber = new NextNumberEntity()
   {
        PartitionKey = "Numbers",
        RowKey = "Next",
        NumberValue = 0
   };

   TableOperation insert = TableOperation.InsertOrReplace(originalNumber);
   nextNumberTable.Execute(insert);
   Console.WriteLine("Entity added. Original ETag = {0}", originalNumber.ETag);

   //Simulate an update by different process
   NextNumberEntity updatedNumber = new NextNumberEntity()
   {
       PartitionKey = "Numbers",
       RowKey = "Next",
       NumberValue = 1
   };

   insert = TableOperation.InsertOrReplace(updatedNumber);
   nextNumberTable.Execute(insert);
   Console.WriteLine("Entity updated. Updated ETag = {0}", updatedNumber.ETag);

   //Try updating originalNumber. Etag is cached within originalNumber and passed by default.
   originalNumber.NumberValue = 2;
   insert = TableOperation.Merge(originalNumber);

   try
   {
       Console.WriteLine("Trying to update original entity");
       nextNumberTable.Execute(insert);
    }
    catch(StorageException ex)
    {
        if (ex.RequestInformation.HttpStatusCode == (int)HttpStatusCode.PreconditionFailed)
        {
            Console.WriteLine("Precondition failure as expected. Entities orignal etag does not match");
        }
        else
        {
            throw;
        }
    }

    Console.WriteLine("Press enter to exit");
    Console.ReadLine();
}

My app.config file for my console application has the following in it for the above to work.

<?xml version="1.0" encoding="utf-8" ?>
<configuration>
    <startup> 
        <supportedRuntime version="v4.0" sku=".NETFramework,Version=v4.5" />
    </startup>
  <appSettings>
    <add key=" TableStorageReference" value="nextnumbers"/>
    <add key="StorageConnectionString" value="[Insert your own DefaultEndpointsProtocol to your Azure Storage] "/>
  </appSettings>
</configuration>

The results from running DemonstrateOptimisticConcurrencyUsingEntity()

If you want to see the table, you can use Visual Studio. In Server Explorer, if you sign into your Azure account, under Storage you should see the table you created. Mine is called NextNumbers, which I defined in my app.config "TableStorageReference".

Within our table, we have the data we placed in from the update.

References: https://azure.microsoft.com/en-gb/blog/managing-concurrency-in-microsoft-azure-storage-2/

You can download my code from here.

Code: http://1drv.ms/1FlbDUw

Azure Blob Storage and managing concurrency


There are different ways you can manage concurrency within a blob storage.

  1. Blob leases – a process obtains a lease/lock on a file, and only that process can update the file. Any other process cannot update it until the lease has been released and that process can acquire its own lease. This is known as pessimistic concurrency.
  2. If no lease is used, when a process updates a file it checks whether the data has changed since it last read the file; if it has, the process is prevented from updating the file. This is known as optimistic concurrency.
  3. Ignoring concurrency completely – a process just updates the file, and basically the last writer wins. This is best used when there is no chance that multiple users will access the same data at the same time.

Optimistic concurrency

Every blob that is added to the storage is assigned an ETag. Every time the blob changes, the ETag also changes. It is this ETag that is used to determine whether the blob has changed since the process last read it. The steps are:

  • Read the Blob file, and grab the ETag header value.
  • Update the Blob file, passing in the ETag value from reading the blob file.
  • If the ETag passed in matches the ETag of the blob currently in storage, the file is updated and a new ETag is assigned.
  • If the ETag passed in doesn't match the ETag of the blob currently in storage, because another process has changed it, an HttpStatusCode of 412 (PreconditionFailed) is returned and the file isn't updated.

The below code (taken partly from Managing Concurrency using Azure Storage – Sample Application) shows an example of optimistic concurrency. It also shows how to ignore concurrency completely, to simulate a different process updating the blob. You will need the following NuGet packages installed for the code to work:

  • Microsoft.WindowsAzure.ConfigurationManager
  • WindowsAzure.Storage
public static void DemonstrateOptimisticConcurrencyUsingBlob()
{
    Console.WriteLine("Demo - Demonstrate optimistic concurrency using a blob");

    CloudStorageAccount storageAccount = CloudStorageAccount.Parse(CloudConfigurationManager.GetSetting("StorageConnectionString"));
    String blobStorageContainerRef = System.Configuration.ConfigurationManager.AppSettings["BlobStorageContainerReference"];
    String filename = "Optimistic.txt";

    CloudBlobClient blobClient = storageAccount.CreateCloudBlobClient();
    var container = blobClient.GetContainerReference(blobStorageContainerRef);
    container.CreateIfNotExists();

    // Create test blob - default strategy is last writer wins - so UploadText will overwrite existing blob if present
    CloudBlockBlob blockBlob = container.GetBlockBlobReference(filename);
    blockBlob.UploadText("Hello this is my text");

    // Retrieve the ETag from the newly created blob
    // Etag is already populated as UploadText should cause a PUT Blob call to storage blob service which returns the etag in response.
   string originalETag = blockBlob.Properties.ETag;
   Console.WriteLine("Blob added. Original ETag = {0}", originalETag);

   // This code simulates an update by different process.
   string helloText = "Blob updated simulates a different process.";

   // No etag provided, so the original blob is overwritten (generating a new etag)
   blockBlob.UploadText(helloText);

   Console.WriteLine("Blob updated. Updated ETag = {0}", blockBlob.Properties.ETag);

   // Update the blob using the original ETag provided when the blob was created
   try
   {
      Console.WriteLine("Trying to update blob using orignal etag to generate if-match access condition");
      blockBlob.UploadText(helloText, accessCondition: AccessCondition.GenerateIfMatchCondition(orignalETag));
    }
    catch (StorageException ex)
    {
       if (ex.RequestInformation.HttpStatusCode == (int)System.Net.HttpStatusCode.PreconditionFailed)
       {
          Console.WriteLine("Precondition failure as expected. Blob's orignal etag no longer matches");
       }
       else
       {
           throw;
       }
    }

   Console.WriteLine("Press enter to exit");
   Console.ReadLine();
}

My app.config file for my console application has the following in it for the above to work.

<?xml version="1.0" encoding="utf-8" ?>
<configuration>
   <startup> 
       <supportedRuntime version="v4.0" sku=".NETFramework,Version=v4.5" />
   </startup>
   <appSettings>
     <add key="BlobStorageContainerReference" value="leaseobjects"/>
     <add key="StorageConnectionString" value="[Insert your own DefaultEndpointsProtocol to your Azure Storage] "/>
   </appSettings>
</configuration>

The results from running the DemonstrateOptimisticConcurrencyUsingBlob() method.

Pessimistic concurrency

If you assign a lease on a blob for exclusive use, you can say how long you need the lease for. The lease can be from 15 to 60 seconds, or infinite, which locks the blob until the code releases the lease. A lease can also be renewed or broken (breaking is generally used by administrators to reset the lease state); there is a short sketch of those calls after the steps below. The steps to acquire a lock are:

  • Get the blob
  • Acquire a lease
  • Update the blob
  • Release the lease.
  • If a different process try to acquire a lease when the blob is already leased, then an HttpStatusCode of 409 (Conflict) is returned.
  • If you try to update the blob that has a lease on it, without the lease id, then an HttpStatusCode of 412 (PreconditionFailed) is returned.
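
The samples in this post acquire and release leases but never renew or break one. As a hedged sketch (not part of the original sample), those calls look roughly like this with the same WindowsAzure.Storage client, assuming blockBlob and leaseId come from an AcquireLease call as in the demo code below:

// Assumes: CloudBlockBlob blockBlob and string leaseId from a prior AcquireLease call.
var leaseCondition = AccessCondition.GenerateLeaseCondition(leaseId);

// Renew the lease before it expires to keep exclusive access a little longer.
blockBlob.RenewLease(leaseCondition);

// Break the lease (typically an admin/recovery action). The returned TimeSpan is how long
// until the blob becomes available again; TimeSpan.Zero requests an immediate break.
TimeSpan timeUntilAvailable = blockBlob.BreakLease(TimeSpan.Zero);
Console.WriteLine("Lease broken, blob available again in {0} seconds", timeUntilAvailable.TotalSeconds);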

The below code (taken partly from Managing Concurrency using Azure Storage – Sample Application) shows an example of pessimistic concurrency.

private static void DemonstratePessimisticConcurrencyUsingBlob()
{
   Console.WriteLine("Demo - Demonstrate pessimistic concurrency using a blob");
   CloudStorageAccount storageAccount = CloudStorageAccount.Parse(CloudConfigurationManager.GetSetting("StorageConnectionString"));
   String blobStorageContainerRef = System.Configuration.ConfigurationManager.AppSettings["BlobStorageContainerReference"];
   String filename = "Optimistic.txt";

   CloudBlobClient blobClient = storageAccount.CreateCloudBlobClient();
   var container = blobClient.GetContainerReference(blobStorageContainerRef);
   container.CreateIfNotExists();

   //Create test blob - default strategy is last writer wins - so UploadText will overwrite existing blob if present
   CloudBlockBlob blockBlob = container.GetBlockBlobReference(filename);
   blockBlob.UploadText("Hello this is my text");

   // Acquire lease for 15 seconds
   string lease = blockBlob.AcquireLease(TimeSpan.FromSeconds(15), null);
   Console.WriteLine("Blob lease acquired. Lease = {0}", lease);

   // Update blob using lease. This operation will succeed
   string helloText = "Blob updated";
   var accessCondition = AccessCondition.GenerateLeaseCondition(lease);
   blockBlob.UploadText(helloText, accessCondition: accessCondition);

   Console.WriteLine("Blob updated using an exclusive lease");
   //Simulate third party update to blob without lease

   try
   {
       // Below operation will fail as no valid lease provided
       Console.WriteLine("Trying to update blob without valid lease");
       blockBlob.UploadText("Update without lease, will fail");
    }
    catch (StorageException ex)
    {
        if (ex.RequestInformation.HttpStatusCode == (int)System.Net.HttpStatusCode.PreconditionFailed)
       {
          Console.WriteLine("Precondition failure as expected. Blob's lease does not match");
       }
       else
       {
          throw;
       }
    }

    //Simulate another process trying to obtain a new lease
    try
    {
        Console.WriteLine("Trying to obtain a new lease when previous lease still exists.");
        string newLease = blockBlob.AcquireLease(TimeSpan.FromSeconds(15), null);
    }
    catch(StorageException ex)
    {
        if (ex.RequestInformation.HttpStatusCode == (int)System.Net.HttpStatusCode.Conflict)
        {
            Console.WriteLine("Conflict. Another process already has this blob lease");
        }
        else
        {
            throw;
        }
     }

      // Release the lease. It expires anyway after 15 seconds
     blockBlob.ReleaseLease(accessCondition);
     Console.WriteLine("Press enter to exit");
     Console.ReadLine();
}

The results from running DemonstratePessimisticConcurrencyUsingBlob()

If you want to see the blob files, you can use Visual Studio. In Server Explorer, if you sign into your Azure account, under Storage you should see the container you created. Mine is called leaseobjects, which I defined in my app.config "BlobStorageContainerReference".

Within our blob container, we have the two files we created in our application.

References: https://azure.microsoft.com/en-gb/blog/managing-concurrency-in-microsoft-azure-storage-2/

http://justazure.com/azure-blob-storage-part-8-blob-leases/

You can download my code from here.

Code: http://1drv.ms/1LNvvET

Creating lookup and dependency columns in CSOM


What is a dependency column?

When you create a list in SharePoint, that list can be used as a lookup. For example (a very simple example, but it gives you the idea), you have a Customer list and an Orders list. On the Orders list you want a lookup to your Customer list. You pick the column you want to point to for the lookup, in this case probably the Customer column. However, there might be other columns from the Customer list that you want to pull through to your Orders list (Customer ID, Date Joined etc.). These extra columns are the dependency columns.

When you pick the customer to associate with the order, these extra columns are pulled through. If any of the data on the lookup list changes for that customer, it is automatically updated on the Orders list.

Not all columns can be used as a lookup/dependency column; only the following column types can:

  • Single Line of text
  • Number
  • Date
  • Calculated

The Demo

I have put together a demo using a SharePoint-hosted Add-In (SharePoint App). This SharePoint Add-In acts as a provisioning page to create my columns and lists (similar to how OfficeDevPnP samples work). I will not be explaining in this post how to create a SharePoint Add-In.

The main part of the code that adds the dependent lookups uses the AddDependentLookup method, which is part of the Microsoft.SharePoint.Client namespace.

public Field AddDependentLookup(string displayName, Field primaryLookupField, string lookupField ) 

I will be explaining the relevant sections of the code. All the methods have been written so that if anything already exists, it will not be recreated. My code also uses the OfficeDevPnP DLLs, which I obtained through NuGet. It is not until near the end, after creating the Orders list, that I add the dependency columns.


Creating the Customer List

This is standard SharePoint CSOM code: I check whether the list already exists, and if it doesn't I create it.

 
List customerList = null; 
if(!ctx.Web.ListExists("Customers")) 
{ 
  customerList = ctx.Web.CreateList(ListTemplateType.GenericList, "Customers", false, false); 
}
else { 
  customerList = ctx.Web.Lists.GetByTitle("Customers"); 
} 
customerList.Update(); 
ctx.Load(customerList); 
ctx.ExecuteQueryRetry(); 

I then change the Title display value from Title to Customer Name.

var title = customerList.Fields.GetByInternalNameOrTitle("Title"); 
title.Title = "Customer Name"; 
title.Update(); 

Lastly I create 3 fields if they don't exist: Account ID, Address and Date Joined. Once the Date Joined column is created, I ensure the column uses Date Only rather than Date & Time values.

 if(!customerList.FieldExistsById("E99BF256-BC01-4A37-B35A-B39BCC5FB82E")) 
 { 
   var accountId = new FieldCreationInformation(FieldType.Text){ 
                           AddToDefaultView = true, 
                           DisplayName = "Account ID", 
                           Id = new Guid("E99BF256-BC01-4A37-B35A-B39BCC5FB82E"), 
                           Group="Lookups", 
                           Required=true, 
                           InternalName = "AccountID" 
                      }; 
    customerList.CreateField(accountId, true); 
 }

 if(!customerList.FieldExistsById("2A8ABC2E-B7F0-4187-ADCA-7831648AFAD3")) 
 {
   var address = new FieldCreationInformation(FieldType.Note){
                           AddToDefaultView = true, 
                           DisplayName = "Address", 
                           Id = new Guid("2A8ABC2E-B7F0-4187-ADCA-7831648AFAD3"), 
                           Group = "Lookups", 
                           Required =true, 
                           InternalName = "CustomerAddress" 
                       }; 
   customerList.CreateField(address, true); 
 } 

 if(!customerList.FieldExistsById("C73123E9-85C8-4156-B280-E7783EEB119C")) 
 { 
   var dateJoined = new FieldCreationInformation(FieldType.DateTime){
                             AddToDefaultView = true, 
                             DisplayName = "Date Joined", 
                             Id= new Guid("C73123E9-85C8-4156-B280-E7783EEB119C"), 
                             Group = "Lookups", 
                             Required = true, 
                             InternalName = "DateJoined" 
                         }; 
   customerList.CreateField(dateJoined, true); 

   var dateJoinedField = ctx.CastTo<FieldDateTime>(customerList.Fields.GetById(new Guid("C73123E9-85C8-4156-B280-E7783EEB119C"))); 
   ctx.Load(dateJoinedField); 
   ctx.ExecuteQueryRetry(); 

 if(dateJoinedField.DisplayFormat == DateTimeFieldFormatType.DateTime) 
 { 
  dateJoinedField.DisplayFormat = DateTimeFieldFormatType.DateOnly; 
  dateJoinedField.UpdateAndPushChanges(true); 
  ctx.ExecuteQueryRetry(); 
 } 
}

 

As this is just a demo, I want to ensure that my Customer list already has some data in it ready to use. Therefore I’ve added a method that just adds some data to the Customer list.

private void CreateCustomerData(ClientContext ctx) 
{ 
  List customerList = ctx.Web.Lists.GetByTitle("Customers"); 
  ctx.Load(customerList); 
  ctx.ExecuteQueryRetry(); 
  if(customerList.ItemCount == 0) { 
      //Add Data. 
      Microsoft.SharePoint.Client.ListItem cust1 = customerList.AddItem(new ListItemCreationInformation()); 
      cust1["Title"] = "Customer A"; 
      cust1["AccountID"] = "A12345"; 
      cust1["CustomerAddress"] = "85 Abbott Close, \r\nLondon"; 
      cust1["DateJoined"] = new DateTime(2015, 6, 4).ToString("o"); 
      cust1.Update(); 

      Microsoft.SharePoint.Client.ListItem cust2 = customerList.AddItem(new ListItemCreationInformation()); 
      cust2["Title"] = "Customer B"; 
      cust2["AccountID"] = "B26554"; 
      cust2["CustomerAddress"] = "745 Rose Drive, \r\nLondon"; 
      cust2["DateJoined"] = new DateTime(2014, 8, 14).ToString("o"); 
      cust2.Update(); 
      
      Microsoft.SharePoint.Client.ListItem cust3 = customerList.AddItem(new ListItemCreationInformation()); 
      cust3["Title"] = "Customer C"; 
      cust3["AccountID"] = "C44575"; 
      cust3["CustomerAddress"] = "547 Cooper Way, \r\nLondon"; 
      cust3["DateJoined"] = new DateTime(2011, 1, 24).ToString("o"); 
      cust3.Update(); 

      ctx.ExecuteQueryRetry(); 
 } 
} 

 

Now we can move onto creating the Orders list. I have written this code very similarly to the Customer list: I first check whether it exists before creating it. I then change the Title display value from Title to Order Item. So that I have access to the Customer list columns, I load that list and its fields.

 List orderList = null; 
 if (!ctx.Web.ListExists("Orders")) 
 { 
   orderList = ctx.Web.CreateList(ListTemplateType.GenericList, "Orders", false, false); 
 } 
 else 
 { 
   orderList = ctx.Web.Lists.GetByTitle("Orders"); 
 } 

 //Change Title 
 var title = orderList.Fields.GetByInternalNameOrTitle("Title"); 
 title.Title = "Order Item"; 
 title.Update(); 

 //Get the customer list 
 List customerList = ctx.Web.Lists.GetByTitle("Customers");
 //Load Lists, fields and views ready to add more fields and lookup fields. 
 ctx.Load(orderList); 
 ctx.Load(customerList); 
 ctx.Load(customerList.Fields); 
 ctx.ExecuteQueryRetry();

I now need to add the columns to the Order list. I’m going to start with the Cost column, as this is a standard currency column. I’m checking if the column exists first before adding it to the list.

 Field cost = null; 
 if (!orderList.FieldExistsById(new Guid("70420D11-3D40-4B53-AAF4-21B57D51C033"))) 
 { 
  FieldCreationInformation orderCost = new FieldCreationInformation(FieldType.Currency) 
  { 
    DisplayName = "Cost", 
    Id = new Guid("70420D11-3D40-4B53-AAF4-21B57D51C033"), 
    AddToDefaultView = true, 
    Group = "Lookups", 
    Required = true, 
    InternalName = "Cost" 
  }; 

 cost = orderList.CreateField(orderCost, true); 
 ctx.Load(cost); 
 } 

This is the section where the AddDependentLookup method is used. First I need to create the lookup column from the Customers list to the Orders list. If it already exists, I load it instead, as I need the Field when I call AddDependentLookup. Once this is created, I check whether the dependency column has already been added. Unfortunately, when it gets added you don't have control over what the GUID of the column will be, so you need to check by internal name. This name will be the title you give the column, encoded the way SharePoint encodes spaces, punctuation etc. If the column doesn't exist, it is added to the Orders list using the AddDependentLookup method, passing in the column display name, the lookup field, and the internal name of the dependent column within the Customer list.

FieldLookup customerLookupField = null; 
if (!orderList.FieldExistsById("FE9ED460-02E7-4124-A4F0-BFE5A3DDA4D0")) 
{ 
  FieldCreationInformation customerLookup = new FieldCreationInformation(FieldType.Lookup) { 
       DisplayName = "Customer", 
       Id = new Guid("FE9ED460-02E7-4124-A4F0-BFE5A3DDA4D0"), 
       Group = "Lookups",
       Required = true, 
       AddToDefaultView = true, 
       InternalName = "CustomerLookup" 
 }; 
  customerLookupField = ctx.CastTo<FieldLookup>(orderList.CreateField(customerLookup, false)); 
  customerLookupField.LookupList = customerList.Id.ToString(); 
  customerLookupField.LookupField = "Title"; 
  customerLookupField.Update(); 
  ctx.ExecuteQueryRetry(); 
} 
else 
{ 
  customerLookupField = ctx.CastTo<FieldLookup>(orderList.Fields.GetById(new Guid("FE9ED460-02E7-4124-A4F0-BFE5A3DDA4D0"))); 
  ctx.Load(customerLookupField); 
  ctx.ExecuteQueryRetry(); 
 } 
 
 //Add Dependency fields. AccountID, DateJoined 
 Field accountDependency = null; 
 if (!orderList.FieldExistsByName("Cust_x002e__x0020_Account")) 
 {
   accountDependency = orderList.Fields.AddDependentLookup("Cust. Account", customerLookupField, "AccountID"); 
   ctx.Load(accountDependency); 
 } 
  Field dateJoinedDependency = null; 
  if (!orderList.FieldExistsByName("Cust_x002e__x0020_Joined")) 
  { 
    dateJoinedDependency = orderList.Fields.AddDependentLookup("Cust. Joined", customerLookupField, "DateJoined"); 
  ctx.Load(dateJoinedDependency); 
  } 
  ctx.ExecuteQueryRetry();

After deploying my app, and loading it up, I am able to create the lookup lists.

Customer List


Creating a new Order


Orders List


Reference

https://msdn.microsoft.com/en-us/library/microsoft.sharepoint.client.fieldcollection.adddependentlookup.aspx

Link to Visual Studio Project

http://1drv.ms/1WWzyoi

Using a Document Library to store global Document Templates


With the use of Content Types, you can assign a Document Template to the content type. That way, when a user goes to create a new document of that content type, they are given the Word/PowerPoint/Excel document that you want them to use; for example a company expense form, or a company report document.

Typically you create a content type and upload the document template you wish to use. You then add the content type to your library, and every time you create a new document the document template opens. However, if you look under the covers, although you have uploaded just one document template, that template is copied into each document library the content type is added to.

With the use of SharePoint Designer I can prove this point.

I have created a Content Type called CandC Document, and added a few extra site columns to it.

Then in the advanced settings I have uploaded a Document Template that I wish my Content Type to use.

Opening my site in SharePoint Designer and clicking on All Files, in the _cts folder there is now a folder named after my Content Type. (This is the first copy of my Document Template within SharePoint.)

After creating a document library called Team Documents and adding my Content Type I can then use my document template.

However if I look at this document library in SharePoint Designer, I see that SharePoint has actually copied my Document Template to be local to the list. (This is the second copy of my Document Template within SharePoint)

Every document library I create and add the content type to will also get its own copy. (This is the Nth copy of my Document Template within SharePoint.)

Now, you are right if you are thinking: when I have a new Document Template I just need to go to the Content Type, upload the new template, and make sure I select Yes for 'Update all content types inheriting from this type' before clicking OK. This works fine within one site collection, so having multiple copies isn't really an issue there.

What if the Content Type is being used in multiple site collections?

If the Content Type is being used in multiple site collections and you wish to update the Document Template, then the choices are to:

  1. Manually go to each site collection and update the Content Type.
  2. Use the Content Type Hub.
  3. Define a document library in a different site collection as the global location.

Manually going to each site collection and updating the Content Type.

This is a valid option; you might only have 4 site collections. Hopefully you have used code to create your content types in each site collection (otherwise the GUIDs won't match). If you have hundreds of site collections this isn't suitable, and you will still have multiple copies of your Document Template.

Use the Content Type Hub.

The full explanation and setup of the Content Type Hub is out of scope for this post, but basically it is a built-in content type publishing mechanism. You create the Content Type in the Content Type Hub with the Document Template and then publish the content type, which pushes it out to every site collection. Any updates/changes to the Content Type (including changing the Document Template) can be republished and pushed out to all site collections. Again this is a valid option, however there can be caveats using the Content Type Hub, and I fully recommend you do your research online before making this decision. Not all site columns play nicely with the Content Type Hub. Also, using this approach you are still creating copies of your Document Template everywhere.

http://blogs.msdn.com/b/chaks/archive/2011/02/09/content-type-hub-limitations.aspx

Define a document library in a Site Collection as the Global Location.

This is the solution I've used recently, and it ensures there is just one copy of the Document Template. The update is instant, unlike the Content Type Hub which relies on a timer job to push the changes everywhere, and there aren't multiple copies of the template. You also have the added bonus that, because you are storing the Document Template in a document library, you can apply versioning and workflow approval to it too if you wish.

You will need to ensure your Content Types are created in code, as you need the GUIDs to match.

First create a document library in the site collection where you are going to store your document templates, then upload all your document templates into this library. Please note that if you wish to use the same document template with multiple content types, you will still need a copy for each content type. (I know I said one copy, but there are limitations when using the same document template for multiple content types, and you will understand why later; also, having one document template per content type, even if they are identical, is not a bad idea, as in the future the templates could differ from each other.)

So I have used my root Site Collection (it could be any other site collection) and created my Document Library called Document Template. Inside my document library I have created 2 folders, one called CandC and one called Cann0nF0dder. Inside each folder are the same two templates I want to assign to my Content Types. Ensure you have also given everyone access to this library.
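If you would rather script the upload of the templates as well, something along these lines works with CSOM. Treat it as a sketch only; the root site URL, folder name and local file path are assumptions from my demo setup, and it reuses the same assemblies and credential prompts as the script further down.

$ctx = New-Object Microsoft.SharePoint.Client.ClientContext("https://tenant.sharepoint.com")
$ctx.Credentials = New-Object Microsoft.SharePoint.Client.SharePointOnlineCredentials($UserName, $Password)

#Folder inside the global Document Templates library (created beforehand)
$folder = $ctx.Web.GetFolderByServerRelativeUrl("/DocumentTemplates/CandC")

#Upload (or overwrite) the Word template from a local path
$fileInfo = New-Object Microsoft.SharePoint.Client.FileCreationInformation
$fileInfo.Content = [System.IO.File]::ReadAllBytes("C:\Templates\CCDocument Template.docx")
$fileInfo.Url = "CCDocument Template.docx"
$fileInfo.Overwrite = $true

$file = $folder.Files.Add($fileInfo)
$ctx.Load($file)
$ctx.ExecuteQuery()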

In your other site collections, using code (however you wish to do it: PowerShell, C# CSOM or JavaScript), create your Content Types in your sites and ensure that the Document Template URL is a relative path to your Document Template stored in the global location. Below is sample PowerShell code I used for this demo.

$UserName = Read-Host -Prompt "UserName"
$Password = Read-Host -Prompt "Password" -AsSecureString
$Url = "https://tenant.sharepoint.com/sites/Teams"

#location is https://tenant.sharepoint.com/DocumentTemplates
$location = "/DocumentTemplates"
Add-Type -Path "C:\Program Files\Common Files\microsoft shared\Web Server Extensions\16\ISAPI\Microsoft.SharePoint.Client.dll"
Add-Type -Path "C:\Program Files\Common Files\microsoft shared\Web Server Extensions\16\ISAPI\Microsoft.SharePoint.Client.Runtime.dll"

$ctx = New-Object Microsoft.SharePoint.Client.ClientContext($Url)
$ctx.Credentials = New-Object Microsoft.SharePoint.Client.SharePointOnlineCredentials($UserName, $Password)
$ctx.ExecuteQuery();

#Create Content Type
$lci = New-Object Microsoft.SharePoint.Client.ContentTypeCreationInformation
$lci.Description = "CandC Document"
$lci.Name = "CandC Documents"
$lci.ID = "0x0101005F05888B19FD40C798741D9F1B5401E4"
$lci.Group = "CandC"

#Add Content Type
$contentType = $ctx.Web.ContentTypes.Add($lci)
$ctx.Load($contentType)
$ctx.ExecuteQuery()

#Update Document Template
$contentType.DocumentTemplate = $location + "/CandC/CCDocument Template.docx"
$contentType.Update($true)
$ctx.ExecuteQuery()

#Create Content Type
$lci = New-Object Microsoft.SharePoint.Client.ContentTypeCreationInformation
$lci.Description = "CandC Presentation"
$lci.Name = "CandC Presentation"
$lci.ID = "0x0101005F05888B19FD40C798741D9F1B5401E5"
$lci.Group = "CandC"

#Add Content Type
$contentType = $ctx.Web.ContentTypes.Add($lci)
$ctx.Load($contentType)
$ctx.ExecuteQuery();

#Update Document Template
$contentType.DocumentTemplate = $location + "/CandC/CCPresentation Template.pptx"
$contentType.Update($true)
$ctx.ExecuteQuery()

The above code only creates 2 content types (CandC Document and CandC Presentation), and it also doesn’t add the additional columns I wanted; you’ll have to work that out on your own for now. However, I did create 4 content types, one for each document template.

By going into the Content Type, and viewing the Advanced Settings, you will see that it is pointing to the correct location in my Global Document Template library.

I’ve added these content types to my library (this can also be done in code, as shown in the sketch below) and can now select whatever content type/template I wish to use.
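Roughly, the code route looks like this with CSOM, carrying on from the $ctx in the script above; the list title (Team Documents) is my demo library and yours will differ.

#Allow content type management on the library, then attach the site content type
$list = $ctx.Web.Lists.GetByTitle("Team Documents")
$list.ContentTypesEnabled = $true
$list.Update()

$contentType = $ctx.Web.ContentTypes.GetById("0x0101005F05888B19FD40C798741D9F1B5401E4")
$ctx.Load($contentType)
$ctx.ExecuteQuery()

$list.ContentTypes.AddExistingContentType($contentType)
$ctx.ExecuteQuery()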

However, we are not finished yet. I discovered that when I select a Content Type, it opens up the relevant template, but when I save it back to the list, the Content Type for the document is the default content type for the library (in my case Document), not the Content Type I picked. Also, any extra columns in my Content Type were not showing up in the Document Information Panel at the top of the Microsoft Office application.

To ensure that it saves with the correct content type, you need to use your code to deploy the content types to the site collection that is holding the Global Templates. Then you need to add these Content Types to the document library that is holding the Document Templates.

Edit the properties of each document and assign them to the correct content type (this step can also be scripted, as sketched below).
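Something along these lines should work, assuming a $ctx already connected to the site collection holding the global templates; the library title and item ID are just placeholders from my demo.

#Stamp an uploaded template with the content type it belongs to
$list = $ctx.Web.Lists.GetByTitle("Document Templates")   #title of the global templates library - adjust to yours
$item = $list.GetItemById(1)                              #list item ID of the template document
$item["ContentTypeId"] = "0x0101005F05888B19FD40C798741D9F1B5401E4"
$item.Update()
$ctx.ExecuteQuery()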

Now when you create a new document from one of the content types, first of all you will notice that the Document Information panel displays with your columns.

After you have saved your document back to your library, you will also notice that it has been saved with the correct content type.

By using SharePoint Designer I can also check that there is no template stored within the other Site Collections; the only location you will find your template is in the Site Collection holding your Global Document Templates.

Office applications crashing with SharePoint Content Types and Document Templates


The scenario I had set up was a few content types with different site columns assigned to them (Single Line of Text, Ratings and Managed Metadata). Each content type also had a document template assigned to it.

The content types were then added to a document library. When I clicked the New button on the list menu bar, I was presented with a choice of documents I could create.

In my environment I was using Office 2013, and opening each one didn’t cause me a problem. The Document Information Panel (DIP) was there, and my Presentation document template loaded correctly.

However, when my clients tried to open any of the document templates, it crashed. It appeared to crash just as the Document Information Panel attempted to load. They were using MS Office 2010; perhaps that was the problem?

It wasn’t. I tried to open the same document on numerous other PCs that had either MS Office 2010 or MS Office 2013, and none of them had a problem. The client had a few different people attempt to open the document templates, and they all had MS Office crashing on them.

Going back to basics, I decided to create a basic content type with just Single Line of Text columns and use the same document template. This loaded with no problem in the client’s MS Office 2010: the Document Information Panel was there and the document saved back to the SharePoint library. Therefore the only difference between the two content types was the columns.

Working with the client, I removed a column from the content type one at a time and got them to open the Document Template. It turned out that once all the Managed Metadata columns were removed, the document opened OK. It was great that I had worked out the problem, but this wasn’t a solution; I needed the Managed Metadata columns to be there. I did a bit of searching on the internet, and many forums were pointing people towards there being two versions of Microsoft Office installed on the PCs that were crashing.

The guys I was dealing with were working with SharePoint 2013 a lot, and it turned out that they had SharePoint Designer 2013 installed. When SharePoint Designer 2013 is installed, so is a bunch of Microsoft Office 2013 DLLs. These shared DLLs get registered and override some of the Office 2010 DLLs.

Solution

The solution is a simple fix (maybe not if you have to do it for everyone in your company): you either need to upgrade everyone to Office 2013 or get the PCs with Office 2010 to repair the install. This will re-register the 2010 DLLs correctly and the Document Templates will open correctly. (SharePoint Designer 2013 will continue to work.)

To repair Office 2010, on the PC go to Control Panel > Programs and Features. Click on MS Office 2010, then on the menu bar click Change and select Repair. Once the repair is complete you will need to restart the PC, and afterwards you will be able to open Document Templates with Managed Metadata columns.

Final thought

Even though MS Office 2010 is very old now, and fewer people will probably encounter this issue, there is a chance that it could resurface when MS Office 2016 is released.