Branding your Office 365 sign-in pages


Are you fed up with seeing the default picture Microsoft provides when you sign in to your SharePoint Online sites? Wouldn’t it be nice if the sign-in page matched your branding and gave your users a consistent look and feel? This article is going to show you how to do that.

Note: A branded sign-in page only appears when you visit a service with a tenant-specific URL such as https://outlook.com/contoso.com. When you visit a service with a non-tenant-specific URL (e.g. https://myapps.microsoft.com), a non-branded sign-in page appears until you have entered your user ID.

The following screenshot shows an example of the Office 365 sign-in page on a desktop after customisation:

The following screenshot shows an example of the Office 365 sign-in page on a mobile device after customisation:

What can you customise?

In the screenshot below I have highlighted the areas that you can customise.

  1. Large Image / Background colour – You can change the image, or show a background colour which will be used in place of the image on low bandwidth or narrow screens.
  2. Logo – Your logo can be shown at the top right of the screen instead of the Office 365 logo.
  3. Sign-in Page Text – Although not shown in the picture above, you can supply sign-in page text. This text could be used to display a legal statement, simple instructions, or even contact information for your help desk.

SharePoint Online Tenant

To customise your sign-in page, you need to do this through Azure AD. If you have just a SharePoint Online tenant on the *.onmicrosoft.com domain and have never used Azure, you will find that you cannot get into Azure. Luckily this isn’t too much of a problem if you have a credit card. (Don’t worry, it doesn’t cost any money.)

If you go to https://manage.windowsazure.com or https://portal.azure.com and attempt to sign in with the account you use for SharePoint Online, you will see the screen below.

Chris O’Brien’s blog explains it further here (http://www.sharepointnutsandbolts.com/2014/04/using-azure-instance-behind-your-office-365-tenant.html), but basically you just need to click on “Sign up for Windows Azure”, then follow the instructions and enter your credit card details. (Again, it doesn’t cost you anything.) It gives you a pay-as-you-go Azure instance.

Configuring your directory with company branding

  • Sign into your Azure classic portal (https://manage.windowsazure.com) as an administrator of the directory you want to customise, and select your directory.
  • Along the menu/toolbar list, click Configure.
  • Under Directory properties click Customize Branding.
  • Modify the elements listed below. All fields are optional. See below for screenshots and details of all customisable elements.
    • Banner Logo
    • Sign-in Page Text
    • Sign-in Page Illustration
    • Sign-in Page Background Colour
  • Click Save.

Note: If you have applied changes to your sign-in page, it can take up to an hour for the changes to appear. Mine happened within a few minutes.

Any time you wish to change your customisation, just go back and click the Customize Branding button again. This is also where you can add different branding settings for a specific language.

Different Branding for different languages

After you have configured your default branding settings, go back and click the Customize Branding button. The first screen you are presented with lets you change existing settings (if you have just followed this blog, you will only see Default here) or add branding settings for a specific language. Select the language, click the arrow button, and upload pictures/text as you did before. Once set, this branding will only show for the given browser language.

Customisable elements details

Below you will find a screenshot of the Customize Branding wizard, with the descriptions you would find if you clicked the help tooltip icon.

  • Banner Logo (60 x 280 pixels) – The banner logo is displayed on the Azure AD sign-in page when users sign in to cloud applications that use this directory. It’s also used in the Access Panel service.
    • Max pixel size: 60px by 280px
    • Recommended to keep under 30 pixels high to avoid introducing scrollbars on mobile devices.
    • Recommended file size: 5-10kb
    • Use a PNG image with a transparent background if possible.
    • Avoid using a logo with small text on it, as the image may be resized to fit smaller screens.
  • Square Logo (240 x 240 pixels) – The square logo (previously referred to as “Title Logo”) is used to represent user accounts in your organization, in the Azure AD web UI and in Windows 10.
    • Max pixel size: 240px by 240px
    • Recommended file size: 5-10kb
    • Use a PNG image with a transparent background if possible.
    • Avoid using a logo with small text on it, as the image may be resized to fit smaller screens.
  • Square Logo, Dark Theme (240 x 240 pixels) – If configured, this image will be used instead of the “Square Logo” image in combination with dark backgrounds, such as Windows 10 Azure AD Joined screens in the out-of-box experience.
    • If your logo already looks good on white and on dark blue/black backgrounds, there’s no need to configure a separate Dark Theme logo.
  • User ID Placeholder – This will replace “someone@example.com” that’s shown as a hint in the user ID input field on the Azure AD login page.
    • Important: you should only configure this if you only support internal users. If you expect external users to sign in to your app(s), we recommend you leave this blank (Azure AD will show “someone@example.com”).
  • Sign-In Page Text Heading – Add a heading above your customized sign-in page text. If not configured, this space is left blank on Azure AD web login pages, and replaced by “Need help” on Azure AD Join experience on Windows 10.
    • Plain text only.
    • Don’t exceed 30 characters.
  • Sign-In Page Text – This text appears at the bottom of the Azure AD sign in page, on the web, in apps and in the Azure AD Join experience on Windows 10. Use this space to convey instructions, terms of use and help tips to your users.
    • Plain text only.
    • Can’t be longer than 500 characters (250-300 characters recommended).
    • Remember, anyone can see your login page so you shouldn’t use this space to convey sensitive info!

  • Sign-In Page Illustration – This large image is displayed on the side of the Azure AD sign in page. By design, this image is scaled and cropped to fill in the available space in the browser window.
    • PNG, JPEG or GIF
    • 1420×1200 resolution recommended.
    • Recommended file size: 300 kb (max file size 500 kb).
    • Use an abstract illustration or picture. Since the image gets resized and cropped, avoid using rasterized text and keep the “interesting” part of the illustration in the top-left corner.
  • Sign-In Page Background Colour – On high latency connections, the sign-in page illustration may not load, in which case the login page will fill in the space with a solid colour.
    • Enter an RGB colour code in hex format (e.g. #FFFFFF).
  • Hide KMSI (Keep Me Signed In) – Choose whether your users can see the “Keep me signed in” check box on the Azure AD sign-in page. This option has no impact on session lifetime, and only allows users to remain signed in when they close and reopen their browser.
    • Important: some features of SharePoint Online and Office 2010 have a dependency on users being able to check this box. If you hide this option, users may get additional and unexpected sign in prompts.
  • Post Logout Link Label – If this is configured, Azure AD will show a link to a web site of your choice, after users sign out of Azure AD web applications.
    • Make sure to configure both the label and URL properties!
    • Link can be plain text only.
    • URL can be HTTP or HTTPS.
  • Post Logout Link URL – If this is configured, Azure AD will show a link to a web site of your choice, after users sign out of Azure AD web applications.
    • Make sure to configure both the label and URL properties!
    • Link can be plain text only.
    • URL can be HTTP or HTTPS.

References: https://azure.microsoft.com/en-gb/documentation/articles/active-directory-add-company-branding/

Temporal Tables in SQL 2016 and SQL Azure


Have you ever been asked to create a History/audit table for your database? Do you need to? If so, then read this blog post on the awesome feature now built into SQL 2016 and SQL Azure.

What is a temporal table?

A temporal table is a new type of user table in SQL Server 2016 and SQL Azure. These tables allow point-in-time analysis by keeping a full history of data changes, without the need for custom coding using triggers etc. You can create any new user table as a temporal table, or convert an existing table into a temporal table. If you convert an existing table to a temporal table, you will not need to change any stored procedures or T-SQL statements; your application will just continue working, while the history data of any changes is stored. These tables are also known as system-versioned temporal tables because each row is managed by the system.

Every temporal table has two explicitly defined datetime2 columns. These columns are referred to as period columns and are used by the system to record the period of validity for each row whenever a row is modified. A temporal table also has a reference to another table with the same schema as itself. This is the history table, and it automatically stores the previous version of the row each time a row in the temporal table gets updated or deleted. This allows the temporal table to remain the current table, and the history table to hold… well, the history data. When creating a temporal table, users can specify an existing history table (which must match the schema of the temporal table) or let the system create a default history table.

How do temporal tables work?

All current entries are stored within the temporal table with a start time and a non-ending end time. Any change causes the original row to be stored in the history table with the start time and end time for the period for which it was valid.

Let me show you an example.

When a row is first inserted, the value is entered only into the temporal table.

Temporal

ID  Value            StartTime               EndTime
1   My First Value   2016-05-01 10:26:45.15  9999-12-31 23:59:59.99

History

ID  Value            StartTime               EndTime

When ID 1 is updated, the originally inserted value is entered into the history table with its EndTime set to when the update took place, and the temporal table is updated with the new value and a new start time.

Temporal

ID  Value            StartTime               EndTime
1   My Second Value  2016-05-14 14:54:44.54  9999-12-31 23:59:59.99

History

ID  Value            StartTime               EndTime
1   My First Value   2016-05-01 10:26:45.15  2016-05-14 14:54:44.54

On the second update to ID 1, the current value is again entered into the history table with its EndTime set to when the update took place, and the temporal table is updated with the new version and a new start time.

Temporal

ID  Value            StartTime               EndTime
1   My Third Value   2016-05-24 01:59:41.82  9999-12-31 23:59:59.99

History

ID  Value            StartTime               EndTime
1   My First Value   2016-05-01 10:26:45.15  2016-05-14 14:54:44.54
1   My Second Value  2016-05-14 14:54:44.54  2016-05-24 01:59:41.82

On deletion of ID 1, the current value is entered into the history table, with its EndTime set to when the row was deleted. The row is then removed from the temporal table.

Temporal

ID  Value            StartTime               EndTime

History

ID  Value            StartTime               EndTime
1   My First Value   2016-05-01 10:26:45.15  2016-05-14 14:54:44.54
1   My Second Value  2016-05-14 14:54:44.54  2016-05-24 01:59:41.82
1   My Third Value   2016-05-24 01:59:41.82  2016-06-01 13:12:17.72

Creating or converting an existing table to a temporal table

You can create a temporal table by specifying the Transact-SQL statements directly, as shown below. I recommend using SQL Management Studio 2016, which can be obtained and downloaded from here. You do not need a SQL Server license to install and use this, and it can be used with SQL Azure.

By using SQL Management Studio 2016, you can obtain the correct T-SQL by right-clicking Tables > New > Temporal Table > System-Versioned Table…


I’m going to create an Employee Table.

CREATE TABLE dbo.Employee
(
  [EmployeeID] int NOT NULL PRIMARY KEY CLUSTERED
  , [Name] nvarchar(100) NOT NULL
  , [Position] varchar(100) NOT NULL
  , [Department] varchar(100) NOT NULL
  , [Address] nvarchar(1024) NOT NULL
  , [AnnualSalary] decimal (10,2) NOT NULL
-- This point below is the Period/Temporal set up on the table.
  , [ValidFrom] datetime2 (2) GENERATED ALWAYS AS ROW START
  , [ValidTo] datetime2 (2) GENERATED ALWAYS AS ROW END
  , PERIOD FOR SYSTEM_TIME (ValidFrom, ValidTo)
 )
 WITH (SYSTEM_VERSIONING = ON (HISTORY_TABLE = dbo.EmployeeHistory));
GO

If I were going to convert my existing Employee table into a temporal table, I would use the following T-SQL statement:

ALTER TABLE Employee
ADD
    ValidFrom datetime2 (0) GENERATED ALWAYS AS ROW START HIDDEN
        CONSTRAINT DF_ValidFrom DEFAULT DATEADD(SECOND, -1, SYSUTCDATETIME())
    , ValidTo datetime2 (0) GENERATED ALWAYS AS ROW END HIDDEN
        -- The row-end default must be the maximum value the column can hold;
        -- '9999-12-31 23:59:59.99' would round up and overflow datetime2(0).
        CONSTRAINT DF_ValidTo DEFAULT '9999-12-31 23:59:59'
    , PERIOD FOR SYSTEM_TIME (ValidFrom, ValidTo);
ALTER TABLE Employee
SET (SYSTEM_VERSIONING = ON (HISTORY_TABLE = dbo.EmployeeHistory));
GO

As you can see from above, SQL Management Studio indicates the system-versioned and history tables.
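
One side note from my own experience (treat it as a hedged tip): while system versioning is on, some schema changes are restricted in SQL Server 2016. You can temporarily decouple the two tables, make your change, and link them back up:

ALTER TABLE dbo.Employee SET (SYSTEM_VERSIONING = OFF);
-- Make schema changes here. The history table is kept, but no longer updated.
ALTER TABLE dbo.Employee SET (SYSTEM_VERSIONING = ON (HISTORY_TABLE = dbo.EmployeeHistory));
GO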

Inserting, updating and deleting data

When you come to doing your inserts, updates and deletes, there are no changes to your T-SQL code; you perform them all against the temporal table (the Employee table in my case). The T-SQL code below is demo code that inserts 3 people a minute apart, and then every 5 minutes something else happens to the data: either an update, an insert of a new record, or a delete.

--Create Lisa Fane
INSERT INTO [dbo].[Employee] ([EmployeeID],[Name],[Position],[Department],[Address],[AnnualSalary])
VALUES    (1234,'Lisa Fane','Sales Rep','Sales','Hertfordshire', 25000)
GO

WAITFOR DELAY '00:01'
--Create Dan Wilson
INSERT INTO [dbo].[Employee] ([EmployeeID],[Name],[Position],[Department],[Address],[AnnualSalary])
VALUES    (2435,'Dan Wilson','Developer','Development','Kent', 35500)
GO

WAITFOR DELAY '00:01'
--Create David Hamilton
INSERT INTO [dbo].[Employee] ([EmployeeID],[Name],[Position],[Department],[Address],[AnnualSalary])
VALUES    (3445,'David Hamilton','Developer','Development','Croydon', 20000)
GO

WAITFOR DELAY '00:05'
--Update Lisa Fane with new job title and payrise.
UPDATE [dbo].[Employee]
SET  [Position] = 'Lead Sales Rep',[AnnualSalary] = 32000
WHERE EmployeeID = 1234
GO

WAITFOR DELAY '00:05'
-- Give Lisa Fane a Pay Rise.
UPDATE [dbo].[Employee]
SET  [AnnualSalary] = 33000
WHERE EmployeeID = 1234
GO

WAITFOR DELAY '00:05'
-- Give Dan Wilson a new job title and payrise
UPDATE [dbo].[Employee]
SET  [Position] = 'Development Manager',
[AnnualSalary] = 45500
WHERE EmployeeID = 2435
GO

WAITFOR DELAY '00:05'
--Employ Lucy Williamson
INSERT INTO [dbo].[Employee] ([EmployeeID],[Name],[Position],[Department],[Address],[AnnualSalary])
VALUES    (8875,'Lucy Williamson','Project Management','PMO','Sutton', 20000)
GO

WAITFOR DELAY '00:05'
--Lisa Fane change address
UPDATE [dbo].[Employee]
SET  [Address] = 'Barnet'
WHERE EmployeeID = 1234
GO

WAITFOR DELAY '00:05'
--Adam Crane joins the team
INSERT INTO [dbo].[Employee] ([EmployeeID],[Name],[Position],[Department],[Address],[AnnualSalary])
VALUES    (4454,'Adam Crane','Sales Rep','Sales','Islington', 26000)
GO
WAITFOR DELAY '00:05'

--David Hamilton moves to Managed Services with a payrise
UPDATE [dbo].[Employee]
SET  [Position] = 'Managed Services',[AnnualSalary] = 20500
WHERE EmployeeID = 3445
GO

WAITFOR DELAY '00:05'
--Lucy Williamson left the company.
DELETE FROM Employee
WHERE EmployeeID = 8875

Because of all the WAITFOR delays, running the above script takes around 40 minutes.

Querying Temporal data

To obtain the current information in the temporal table, there are no changes to your typical SQL SELECT statements.

SELECT * FROM Employee

As you can see from the above image, the results are just as if it were not a temporal table.

To view history data, there is a new clause you can use within the SELECT FROM statement. This is the FOR SYSTEM_TIME clause, with 5 temporal-specific sub-clauses to query data across the current and history tables. This new SELECT statement syntax is supported directly on a single table, propagated through multiple joins, and through views on top of multiple temporal tables.

View All Data

Use the following command to see current and past records; the IsActual flag indicates whether the row is current. This query is also useful as a view, so that BI tools such as Power BI can display a graph over time (see the sketch after the query).

SELECT Name, Position, Department, [Address], AnnualSalary, ValidFrom, ValidTo, IIF (YEAR(ValidTo) = 9999, 1, 0) AS IsActual FROM Employee
FOR SYSTEM_TIME ALL
ORDER BY ValidFrom
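
As mentioned above, this query works well wrapped in a view for BI tools. A minimal sketch (the view name is my own invention; note that ORDER BY isn’t allowed inside a view, so sort when selecting from it):

CREATE VIEW dbo.EmployeeOverTime
AS
SELECT Name, Position, Department, [Address], AnnualSalary, ValidFrom, ValidTo, IIF (YEAR(ValidTo) = 9999, 1, 0) AS IsActual
FROM dbo.Employee
FOR SYSTEM_TIME ALL;
GO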

Between two dates

Using BETWEEN <startDateTime> AND <endDateTime> will return rows that were active for at least a portion of the period between the two times.

SELECT * FROM Employee
FOR SYSTEM_TIME
BETWEEN '2016-06-18 10:27:00' AND '2016-06-18 10:47:00'
ORDER BY ValidFrom
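
From and To

The remaining sub-clause of the five is FROM <startDateTime> TO <endDateTime>. It behaves much like BETWEEN, except that rows which became active exactly at the upper boundary are excluded. A quick sketch reusing the same demo timestamps:

SELECT * FROM Employee
FOR SYSTEM_TIME
FROM '2016-06-18 10:27:00' TO '2016-06-18 10:47:00'
ORDER BY ValidFrom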

Contains two dates

Using CONTAINED IN (<startDateTime>, <endDateTime>) will return rows that were only active within the period (and not outside it). This only queries the history table. As you can see below, Lisa Fane was updated 3 times within the time period.

SELECT * FROM Employee
FOR SYSTEM_TIME CONTAINED IN ('2016-06-18 10:25:00', '2016-06-18 11:50:00')
ORDER BY ValidFrom

Point in time search

Using AS OF <dateTime> will return how the database looked at that given moment in time. Below are multiple statements which return results from those points in time. I’ve highlighted within each result set what has changed since the previous one. This type of query is perfect for BI tools such as Power BI to query the data as it was 24 hours, 7 days, or 30 days ago, etc.

SELECT * FROM Employee
FOR SYSTEM_TIME
AS OF '2016-06-18 10:28:00'
ORDER BY EmployeeID

SELECT * FROM Employee
FOR SYSTEM_TIME
AS OF '2016-06-18 10:31:00'
ORDER BY EmployeeID

SELECT * FROM Employee
FOR SYSTEM_TIME
AS OF '2016-06-18 10:36:00'
ORDER BY EmployeeID

SELECT * FROM Employee
FOR SYSTEM_TIME
AS OF '2016-06-18 10:46:00'
ORDER BY EmployeeID

SELECT * FROM Employee
FOR SYSTEM_TIME
AS OF '2016-06-18 10:56:00'
ORDER BY EmployeeID

SELECT * FROM Employee
FOR SYSTEM_TIME
AS OF '2016-06-18 11:01:00'
ORDER BY EmployeeID

SELECT * FROM Employee
FOR SYSTEM_TIME
AS OF '2016-06-18 11:06:00'
ORDER BY EmployeeID

References

I found quite a bit of good information available to help me understand temporal tables; I have listed the main sites below.

Temporal Tables – https://msdn.microsoft.com/en-IN/library/dn935015.aspx 

Getting Started with Temporal Tables in Azure SQL Database – https://azure.microsoft.com/en-us/documentation/articles/sql-database-temporal-tables/

Getting Started with System- Versioned Temporal Tables – https://msdn.microsoft.com/en-us/library/mt604462.aspx

Temporal in SQL Server 2016 (Video) – https://channel9.msdn.com/Shows/Data-Exposed/Temporal-in-SQL-Server-2016 


Exporting and Importing SQL Azure database in different Tenants


This was easier than I thought it was going to be, using purely point and click and the Microsoft Azure Storage Explorer.

To be able to back up/export a database you need an Azure blob storage account. If you don’t have one, the steps below will show you how to create one.

Creating an Azure Blob Storage

  • Open up https://portal.azure.com and log in with your credentials, on the tenant where your SQL server source is.
  • Click New and select Data + Storage and then select Storage account
  • On the Create storage account blade you will be asked the following information:
    • Name: <Give a unique name>
    • Deployment model: Resource manager
    • Account Kind: Blob Storage
    • Performance: Standard
    • Replication: Locally-redundant storage (LRS) <- This may be different for you. I’m just doing a simple export and restore, not planning on keeping this storage.
    • Subscription: <Your subscription>
    • Resource group: Create New <- You might wish to use an existing resource group.
    • Resource Group Name: <Resource Group Name>
    • Location: <Your closest location>
  • Click Create
  • This will take a small amount of time while Azure creates this storage

Exporting Source Database

  • If not continuing from last step, open up https://portal.azure.com and log in with your credentials, on the tenant where your SQL server source is.
  • Go into SQL Database, and select the database you wish to export.
  • At the top of the blade there is a menu button item called ‘Export’. Click this button.

  • On the Export database blade, you will be asked the following information:
    • File name: Give a meaningful export name.
    • Subscription: Select the subscription that you can find your storage account in that you created earlier.
    • Storage: Select the storage account you created earlier.
      • Containers: Add a container name, and give it private access type, then select it.
    • Server admin login: Provide the Server Admin Username.
    • Password: Provide the password for the Server Admin.
  • Click OK.
  • Once you have clicked OK, your request to export the database is sent to Azure, and is put into some sort of Microsoft Azure queue behind the scenes. Here you have to wait until the job has run; this can take some time. (Mine took 15 minutes to complete.) Please see Viewing the Import/Export history later in this blog post for job status.

Downloading the Blob file.

After the SQL export has completed, you will need to download the blob file so that you can then upload it to your destination tenant. To do this follow these steps:

  • In https://portal.azure.com select your Storage account where you exported SQL to.
  • Under the container selector you should find the container you created when exporting SQL data. Select this.
  • This container blade shows you all the files within this container. As I’ve just created it for this purpose the only file within here is my export file. Select this file.
  • Then click the download button.

Uploading export file to new tenant storage.

Before you can upload your export file to your new tenant, you first need to ensure you have a storage account to upload to. If there isn’t one, follow my previous steps in this blog post about creating an Azure blob storage account.

Once you have blob storage on your destination tenant, download and open Microsoft Azure Storage Explorer.

  • Sign in with your destination tenant credentials.
  • Select the Storage account and then blob container.
  • Click Upload.
  • Upload your export file.

Importing to Destination Database

  • Open up https://portal.azure.com and log in with your credentials, on the tenant where your destination SQL server is.
  • Go into SQL Servers, and add/select the SQL server you wish to import the database to.
  • At the top of the blade there is a menu button item called ‘Import database’. Click this button.

  • On the Import database blade, you will be asked the following information:
    • Subscription: Select the subscription that you can find your storage account in that you created earlier.
    • Storage: Select the storage account you created earlier.
      • Containers: Select the Container
        • File: Select the export file.
    • Select Pricing Tier: <Select a pricing tier>
    • Database name: <Name the database>
    • Collation: Leave as is, or change if you require.
    • Server admin login: Provide the Server Admin Username for this tenant.
    • Password: Provide the password for the Server Admin.
  • Click OK.
  • Once you have clicked OK, your request to import the database is sent to Azure, and is put into some sort of Microsoft Azure queue behind the scenes. Here you have to wait until the job has run; this can take some time. (Mine took less than two minutes to import.) Please see Viewing the Import/Export history below for job status.

Viewing the Import/Export history.

After you have imported/exported a database, you can view the progress of the request by following these steps:

  • In https://portal.azure.com select SQL servers
  • Select your server where the import/export is taking place.
  • If you scroll down to Operations on the Server blade, you will see a tile called Import/Export history. Click this.
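
If you prefer T-SQL to the portal, the same information is exposed through the sys.dm_operation_status DMV in SQL Azure. A quick sketch (connect to the master database of your logical server; I have trimmed the column list for readability):

SELECT operation, state_desc, percent_complete, start_time, last_modify_time
FROM sys.dm_operation_status
ORDER BY start_time DESC;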



Simple SQL statement to see membership within a database


In SQL Azure, if you connect with Microsoft SQL Server Management Studio, you have to do everything using SQL statements; there is no ability to point and click your way through creating accounts, memberships, new tables, etc. I’m sure a good DBA would tell me that this is the correct way of building any database. Unfortunately (or fortunately) I’m not a DBA, and I like point and click tools.

So the other day I was having a problem seeing what accounts had what access to a given database. I found that running this SQL statement on a given database gave me the information I needed. I have written this blog post today mainly so I have a reference to this in the future.

SELECT DP1.name AS DatabaseRoleName,
isnull (DP2.name, 'No members') AS DatabaseUserName
FROM sys.database_role_members AS DRM
RIGHT OUTER JOIN sys.database_principals AS DP1
ON DRM.role_principal_id = DP1.principal_id
LEFT OUTER JOIN sys.database_principals AS DP2
ON DRM.member_principal_id = DP2.principal_id
WHERE DP1.type = 'R'
ORDER BY DP1.name;

Fig 1. Example results.
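
Since everything is done with SQL statements, granting access works the same way. As a companion sketch (the user name and password here are made up for illustration), creating a contained database user and adding it to a built-in role looks like this:

-- Run against the database itself, not master.
CREATE USER [ReportReader] WITH PASSWORD = 'Str0ng!Passw0rd';
-- Grant read access by adding the user to a built-in database role.
ALTER ROLE db_datareader ADD MEMBER [ReportReader];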

Basic Logging within Web Jobs


When you are a developer it’s very easy to debug code, step through it, and understand exactly what is happening at different parts of the code. When the code is finally shipped to a production environment, you are virtually blind to what is happening within your code unless you put sufficient logging in place. (Or you are lucky enough to be able to use Visual Studio to debug the production environment, something that you probably shouldn’t do.)

The basic way to log within WebJobs is using Console.WriteLine or Trace from System.Diagnostics.

Console.WriteLine() logs

By using Console.WriteLine you only write to the WebJob logging window. These do not display as errors, warnings or information; they are just plain printouts to the screen.
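
For example, here is a minimal (hypothetical) console entry point for a triggered web job, just to show where the output ends up:

using System;

class Program
{
    static void Main()
    {
        //Anything written to the console appears in the WebJob logging window.
        Console.WriteLine("Web job started at " + DateTime.UtcNow);
        //...do the actual work here...
        Console.WriteLine("Web job finished.");
    }
}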

To get to the web job logging screen, head to https://portal.azure.com and select your web app. On the All settings blade, select Web Jobs; the Web Jobs blade lists all the web jobs currently assigned to this web app. The logs URL is on the right-hand side of the screen. Click this link and it will take you through to your logs. Normally the URL is similar to the following: https://<WebAppName>.scm.azurewebsites.net/azurejobs/#/jobs/<ContinuousOrTriggered>/<WebJobName>

Trace logs

The trace logs are stored within Azure Storage blobs or tables, which will need to be set up and configured. (This is shown later in this blog post.) They do not show up in the WebJob window described above for Console.WriteLine(). Please note that trace logs are not just for WebJobs; you can use them in your web application, and most of this information will work for Web Apps.

Within your code, all you need to do is add a using statement for System.Diagnostics. Then you have a choice of logging an information, warning, error, or verbose message.

System.Diagnostics.Trace.TraceInformation("This creates an information log");
System.Diagnostics.Trace.TraceWarning("This creates a warning log");
System.Diagnostics.Trace.TraceError("This creates an error log");
System.Diagnostics.Trace.WriteLine("This creates a verbose log");

Configuring Blob and Table logging.

Unfortunately I’m unable to find any way of doing this in the new Azure portal. Therefore, for this set of instructions I’m using the old Azure portal https://manage.windowsazure.com

First you will need to create a storage account if you don’t already have one to use. In the old portal, you can do this just by clicking the New button at the bottom of the screen, and then selecting Data Services > Storage > Quick Create

Now head to your Web App, click on the CONFIGURE tab, and scroll down until you reach the application diagnostics section.

In the Application Diagnostics section there are 3 different places you can set up logging.

  1. File System – This is written to the Web App file system; you can access these logs from the FTP share for this Web App. Please note: application logging to the file system is only enabled for 12 hours. (Not part of this post)
  2. Table Storage – The logs are collected in the table storage that is specified. You can access the traces from the specified table in the storage account. Please note: these logs will remain in the storage table, increasing in size, until someone clears them.
  3. Blob Storage – The logs are collected in the blob container that is specified. You can access the traces from the specified container in the storage account. Please note: by default, application diagnostic logs are never deleted; however, you have the option to set a retention period between 1 and 99999 days.

Configuring Blob Storage

Following on from the last section, to configure blob storage click the ON button under Application Logging (Blob Storage). Once you have clicked ON, you will be presented with some other values.

The logging level options are Verbose, Error, Warning and Information. The level you set determines which traces appear in the logs.

  • Verbose – Shows all logs.
  • Information – Shows Information, Warnings and Errors.
  • Warnings – Shows only Warnings and Errors.
  • Errors – Shows just Errors.

Next, click on manage blob storage and you will be presented with a dialog. Select the storage account you created previously, and then create a new blob container. Give your container a name, and then click the tick button.

Next you can set the retention in days to store the logs. Click Save on the page, and blob storage logging is set up.

Configuring Table Storage

Configuring table storage isn’t much different from configuring blob storage; the only real difference is that you create a new table instead of a new blob container. Click the ON button under Application Logging (Table Storage). Once you have clicked ON, you will be presented with some other values.

The logging level options are Verbose, Error, Warning and Information. The level you set determines which traces appear in the logs.

  • Verbose – Shows all logs.
  • Information – Shows Information, Warnings and Errors.
  • Warnings – Shows only Warnings and Errors.
  • Errors – Shows just Errors.

Next, click on manage table storage and you will be presented with a dialog. Select the storage account you created previously, and then create a new table. Give your table a name, and then click the tick button.

How to view the application diagnostic logs.

The biggest issue with blob and table storage in Azure is that there isn’t a simple way within Azure to view the information stored within them. There are plenty of third-party tools out there that allow you to view blob and table storage for free. Azure Storage Explorer 6 is a free Windows application that you can use; it’s available on CodePlex (https://azurestorageexplorer.codeplex.com/). However, as useful as it can be, it is a bit painful: you need to download a blob file to view it once found, or filter the table to find the logs you are looking for. Also, it is a Windows application looking at your Azure storage, meaning that if you use multiple PCs, you need to ensure it is installed on every PC you use.

I try not to suggest third-party apps/extensions in my blog posts; however, I do like the Azure Web Site Log Browser written by Amit Apple. Amit Apple’s blog (http://blog.amitapple.com/) seems to just live and breathe WebJobs, and I have learnt many things about WebJobs from it. You can install his extension directly from the new portal or by going to Site Extensions within your website’s SCM site. I will show both ways below.

Installing Azure Web Site Log Browser via Azure Portal

  • Head to your Web App in the Azure new portal. https://portal.azure.com
  • In the Web-App blade, along the menu buttons, select Tools.
  • On the Tools blade, select Extensions from the bottom of the Develop section.
  • Then in the Installed web app extensions blade, click Add.

  • On the Add Web Extension blade, you can choose an extension from the Choose web app extension blade. At the time of writing, Azure Web Site Logs Browser is the 5th extension in the list. Click Azure Web Site Logs Browser.

  • Then click the OK button to accept the terms, and lastly click the OK button to add the extension to your site. It takes a few moments to install into your Web App, and it will only be installed on this web application.

  • Lastly, on the Web App blade, click the restart button. This will ensure the Azure Web Site Logs Browser has fully installed.
  • To ensure it has worked, click on the extension within the Installed web app extensions blade, and then click the Browse button on the Azure Web Site Logs Browser blade.
  • This will take you to https://<YourWebApp>.scm.azurewebsites.net/websitelogs/#

Installing Azure Web Site Logs Browser via SCM website.

  • Click on the Gallery tab; this will display all available site extensions.

  • As you can see from the screenshot, Azure Web Site Logs Browser is the first item in the second row at the time of writing. You can search for it by typing Azure Web Site Logs in the search box.
  • Click the + button and this will install Azure Web Site Logs Browser to your Web App.
  • After it has successfully installed, click the Restart Site button in the top right-hand corner of the screen. This will ensure everything has been loaded up correctly.
  • Then click on the Installed tab of Site extensions and click the play button.
  • This will take you to https://<YourWebApp>.scm.azurewebsites.net/websitelogs/#

Viewing Application Logs – Blob Storage.

In my web job, which I have set up to log to both blob and table storage, I have a simple loop that writes some trace information, warnings and errors, with some thread sleeps in between to make sure the web job lasts more than 10 seconds.

Trace.WriteLine("We are about to loop."); 

for (int i = 0; i < 20; i++)
{
   Trace.TraceInformation(String.Format("This is an information message brought to you by BasicLogging Web Job, looping for the {0} time", i)); // Write an information message
   Thread.Sleep(2000);

   Trace.TraceWarning("I'm warning you, don't get me angry.");
   Thread.Sleep(2000);

   Trace.TraceError("That's it, I'm annoyed now!");
   Thread.Sleep(2000);

   Trace.TraceInformation("Ok, I'm sorry, I got a little mad I'm OK now.");
   Thread.Sleep(4000);
}
Trace.WriteLine("End of the loop.");

I have then run this web job once. Now I wish to view the output in blob storage within my site extension. You would think this would be easy to find; unfortunately it’s not, and this is down to Azure, not the site extension. The location of the blob file differs depending on the name of the application, the name of the web job, the date/time, the website instance, and the process ID of the web job. Please note: if the web job runs from one hour into the next, it will create the same named file in two different hour folders.

Typical path would look like:

/blobapplication/<WebSiteName>-<TriggeredOrContinuous>-<WebJobName>/<CurrentYear>/<CurrentMonth>/<CurrentDay>/<CurrentHour>/<InstanceIdFirst6Char>-<ProcessId>.applicationLog.csv

If you have a web job that runs every hour on the hour, then it’s not that difficult to find, but a web job that runs on demand can be. Therefore I write the location of the log to the WebJob log using Console.WriteLine, with code that calculates the location of the file. This way, if there is a reason I need to look into the trace logs of a given web job instance, I just view the web job to get the link to the file.

var instanceID = Environment.GetEnvironmentVariable("WEBSITE_INSTANCE_ID").Substring(0, 6);
var pid = Process.GetCurrentProcess().Id;
var currentTime = DateTime.UtcNow;
var filename = instanceID + "-" + pid + ".applicationLog.csv";

//Location of the blob file path within the Azure Web Site logs extension
var filePath = "https://" + Environment.GetEnvironmentVariable("WEBSITE_SITE_NAME") + 
                             ".scm.azurewebsites.net/WebSiteLogs/#/blobapplication/" + 
                             Environment.GetEnvironmentVariable("WEBSITE_SITE_NAME") + 
                             "-triggered-" + Environment.GetEnvironmentVariable("WEBJOBS_NAME") + "/" +
                            currentTime.Year + "/" + currentTime.Month.ToString().PadLeft(2,'0') + "/"  + 
                           currentTime.Day.ToString().PadLeft(2, '0') + "/" +
                           currentTime.Hour.ToString().PadLeft(2, '0') + "/" ;

//Location of the blob file to download within the Azure Web Site log Extension
var linkToDownload = "https://" + Environment.GetEnvironmentVariable("WEBSITE_SITE_NAME") + 
                     ".scm.azurewebsites.net/WebSiteLogs/api/log?path=/blobapplication/" +
                     Environment.GetEnvironmentVariable("WEBSITE_SITE_NAME") + 
                     "-triggered-" + Environment.GetEnvironmentVariable("WEBJOBS_NAME") + "/" +
                     currentTime.Year + "/" + currentTime.Month.ToString().PadLeft(2,'0') + "/"  +
                     currentTime.Day.ToString().PadLeft(2, '0') + "/" +
                     currentTime.Hour.ToString().PadLeft(2, '0') + "/" + filename + "&download=true";

Console.WriteLine(String.Format("FilePath Link: {0}", filepath));
Console.WriteLine(String.Format("Download csv file: {0}", linkToDownload));

With the Azure Web Site Log Browser, you can view the CSV log file directly in the browser. The filePath link that I’m displaying in the WebJob log window takes you straight there.

Clicking on the file opens the CSV in the browser.

Using this browser, you can view the logs. You can use Search to find key words within your application.

The columns available within blob storage are defined by Azure; you cannot add or remove these columns. These columns provide you with more granular information about the event. The following properties are used for each row in the CSV:

  • Date – The date and time that the event occurred
  • Level – The event Level (Verbose, Error, Warning, Information)
  • Application Name – The web app name, in our case the WebApp – TypeOfWebJob – WebJobName.
  • InstanceId – Instance of the Web app that the event occurred on.
  • EventTickCount – The date and time that the event occurred, in Tick format
  • EventId – The event ID of this event, defaults to 0 if none specified.
  • Pid – Process ID of the web job
  • Tid – The thread ID of the thread that produced the event.
  • Message – The event detail message
  • ActivityID – This is only included if you set up the Activity ID in the code. This can be useful, especially for continuous web jobs, as all the jobs processed will be entered into the same CSV file.

Setting the Activity ID in code

//Add to the start of the code
var correlationGuid = Guid.NewGuid();
Trace.CorrelationManager.ActivityId = correlationGuid;
Console.WriteLine(String.Format("ActivityID for this job: {0}, correlationGuid"));

Viewing Application Logs – Table Storage.

The URL for table logs within the Azure Web Site Log Browser is https://<WebSiteName>.scm.azurewebsites.net/websitelogs/viewtable.cshtml

On landing on the page, it shows the last 20 items found within the table. However, you can change the date, the number of items, or what to look for using search. There is also sorting within the columns. I have put the ActivityID in search (see the end of the last section for how to assign the Activity ID in code), and this has brought back all log items for that web job.

The columns available within table storage are defined by Azure; you cannot add or remove these columns. These columns provide you with more granular information about the event. The following properties are used for each row in the table:

  • Date – The date and time that the event occurred
  • Level – The event Level (Verbose, Error, Warning, Information)
  • Instance – Instance of the Web app that the event occurred on.
  • Activity – This is only included if you set up the Activity ID in the code.
  • Message – The event detail message

If you open the row of information in the table using Visual Studio, Azure Storage Explorer 6, or another tool, there are additional columns that contain information that isn’t shown within the Azure Web Site Log Browser.

  • PartitionKey – The Date Time of the event in yyyyMMddHH format
  • RowKey – A GUID value that uniquely identifies this entity
  • Timestamp – The date and time that the event occurred
  • EventTickCount – The date and time that the event occurred, in Tick format
  • Application Name – The Web App Name
  • EventId – The event ID of this event, defaults to 0 if none specified.
  • InstanceId – Instance of the Web app that the event occurred on.
  • Pid – Process ID of the web job
  • Tid – The thread ID of the thread that produced the event.
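
Because the PartitionKey is the event hour in yyyyMMddHH format, you can also pull an hour of logs out programmatically with the WindowsAzure.Storage library. A hedged sketch (the table name and connection string placeholder are assumptions; use whatever you configured earlier):

using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Table;

CloudStorageAccount account = CloudStorageAccount.Parse("[your storage connection string]");
CloudTableClient tableClient = account.CreateCloudTableClient();
CloudTable logTable = tableClient.GetTableReference("WebJobLogs"); //Assumed diagnostics table name

//PartitionKey is the event hour (yyyyMMddHH), so one filter returns an hour of logs.
string hourKey = DateTime.UtcNow.ToString("yyyyMMddHH");
TableQuery<DynamicTableEntity> query = new TableQuery<DynamicTableEntity>()
    .Where(TableQuery.GenerateFilterCondition("PartitionKey", QueryComparisons.Equal, hourKey));

foreach (DynamicTableEntity row in logTable.ExecuteQuery(query))
{
    Console.WriteLine(row.Properties["Message"].StringValue);
}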

References:

Amit Apple Blog – http://blog.amitapple.com/ (Creator of the Azure Web Site Log Browser)

Microsoft Azure – Enable diagnostics logging for web apps in Azure App Service – https://azure.microsoft.com/en-us/documentation/articles/web-sites-enable-diagnostic-log/

Azure Table Storage and pessimistic concurrency using an Azure blob


In my previous two blog posts, I have spoken about concurrency using Azure Blobs and Azure Table Storage.

Azure Blob Storage and managing concurrency

Azure Table Storage and managing concurrency

In Azure Table Storage and managing concurrency, I state that the only option you have for concurrency is optimistic concurrency: when performing an update, the system verifies whether the data has changed since the application last read it. However, there is a way to perform pessimistic locking on an Azure table entity: assign a designated blob for each table, and take a lease on the blob before operating on the table.

This blog post will walk you through creating a solution that allows you to perform pessimistic locking for an Azure table entity. My solution shows two methods. The first is a single-threaded application that tries to update a number in the Azure table. If we run the program twice at the same time, one of the programs will be blocked and receive an HttpStatusCode of 409 (Conflict).

You will need the following NuGet packages installed for the code to work:

  • Microsoft.WindowsAzure.ConfigurationManager
  • WindowsAzure.Storage

The SingleThreadTableStorageUpdate() method will first obtain the values from the app.config. These values are:

  • BlobStorageFileName – The filename of the blob that will be assigned a lease.
  • BlobStorageContainerReference – The Blob container, that will hold the blob file.
  • TableStorageReference – The name of the Table within Azure Storage.
  • StorageConnectionString – The connection string to Azure Storage.
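
For reference, here is a sketch of the appSettings section; the values are taken from the demo later in this post (the container is called leaseobject, the table nextnumbers, and the blob Cann0nF0dderDemo.lck):

<appSettings>
  <add key="BlobStorageFileName" value="Cann0nF0dderDemo"/>
  <add key="BlobStorageContainerReference" value="leaseobject"/>
  <add key="TableStorageReference" value="nextnumbers"/>
  <add key="StorageConnectionString" value="[Insert your own DefaultEndpointsProtocol to your Azure Storage]"/>
</appSettings>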
//Obtain the BlobStorage information
String filename = System.Configuration.ConfigurationManager.AppSettings["BlobStorageFileName"];
String blobStorageContainerRef = System.Configuration.ConfigurationManager.AppSettings["BlobStorageContainerReference"];
String blobStorageTableRef = System.Configuration.ConfigurationManager.AppSettings["TableStorageReference"];
String connectionString = CloudConfigurationManager.GetSetting("StorageConnectionString");

//Instantiate the NextNumber class
NextNumber nextNumber = new NextNumber(filename, blobStorageContainerRef, blobStorageTableRef, connectionString);

When the NextNumber class is instantiated, it will check for, or create, the blob file and table entity.

class NextNumber
{
 private readonly CloudBlobContainer _leaseContainer;
 private readonly CloudTable _table;
 private readonly String _filename;

 public NextNumber(string filename, string blobContainerReference, string tableStorageReference, string storageConnectionString)
 {
   //Get Connection to Storage.
   CloudStorageAccount storageAccount = CloudStorageAccount.Parse(storageConnectionString);
   _filename = filename;

   //This creates a Blob
   CloudBlobClient blobClient = storageAccount.CreateCloudBlobClient();
   _leaseContainer = blobClient.GetContainerReference(blobContainerReference);
   _leaseContainer.CreateIfNotExists();

   //This creates a table.
   CloudTableClient tableClient = storageAccount.CreateCloudTableClient();
   _table = tableClient.GetTableReference(tableStorageReference);
   _table.CreateIfNotExists();

   try
   {
       //Get a reference to the blob.
       CloudBlockBlob blob = _leaseContainer.GetBlockBlobReference(String.Format("{0}.lck", _filename));
       if (!blob.Exists())
       {
           //Blob doesn't exist therefore create table entity.
           NextNumberEntity entity = new NextNumberEntity
          {
             PartitionKey = _filename,
             RowKey = "",
             NumberValue = 0
          };

          //Upload blob with some information in file
          blob.UploadText("Created on " + DateTime.UtcNow);
          //Insert entity into table.
         _table.Execute(TableOperation.Insert(entity));
       }
   }
   catch (Exception ex)
   {
       Console.WriteLine("Error happened " + ex.Message);
   }
 }
}

Once I know that the blob file and table actually exist, I’m then able to get the next number and update the entity in the table.

nextNumber.GetNextNumber()

Below is the code for the GetNextNumber() method. Within the code I grab the blob’s lease properties and display them in the console. This is useful to see the state of the current blob object and whether there is a lease on it.

internal int GetNextNumber()
{
   //Get blob reference and display current lease information
   CloudBlockBlob blob = _leaseContainer.GetBlockBlobReference(String.Format("{0}.lck", _filename));
   blob.FetchAttributes();

   Console.WriteLine(String.Format("LeaseDuration = {0}", blob.Properties.LeaseDuration));
   Console.WriteLine(String.Format("LeaseState = {0}", blob.Properties.LeaseState));
   Console.WriteLine(String.Format("LeaseStatus = {0}", blob.Properties.LeaseStatus));

   //Acquire the lease for 30 seconds.
   string leaseId = blob.AcquireLease(TimeSpan.FromSeconds(30), Guid.NewGuid().ToString());
   var nextNumber = 0;
   Console.WriteLine();

   Console.WriteLine(String.Format("Aquired lease on blob ID: {0}", leaseId));
   Console.WriteLine();

   try
   {
        //Get and display current lease information
        blob.FetchAttributes();
        Console.WriteLine(String.Format("LeaseDuration = {0}", blob.Properties.LeaseDuration));
        Console.WriteLine(String.Format("LeaseState = {0}", blob.Properties.LeaseState));
        Console.WriteLine(String.Format("LeaseStatus = {0}", blob.Properties.LeaseStatus));

        //Retrieve the entity out of the Azure table.
        TableResult tableResult = _table.Execute(TableOperation.Retrieve<NextNumberEntity>(_filename, ""));
        NextNumberEntity entity = (NextNumberEntity)tableResult.Result;
        //Update the number
        entity.NumberValue++;
        //Add back into Azure table.
        _table.Execute(TableOperation.Replace(entity));
        nextNumber = entity.NumberValue;
        //Wait to extend the time this calling code holds the lease for (demo purposes)
        Thread.Sleep(TimeSpan.FromSeconds(10));
    }
    catch (Exception ex)
    {
        Console.Write("An error: " + ex.Message);
    }
    finally
    {
       //Release the blob.
       blob.ReleaseLease(AccessCondition.GenerateLeaseCondition(leaseId));
    }
    return nextNumber;
  }

Lastly I have a class that is my NextNumberEntity. This inherits Microsoft.WindowsAzure.Storage.Table.TableEntity.

class NextNumberEntity : TableEntity
{
     public int NumberValue { get; set; }
}

If I run the above code, it creates the blob file, the table, and updates the number from 0 to 1 in the table.

Above shows the leaseobject blob container, and the nextnumbers table.

Above shows the Cann0nF0dderDemo.lck file created within the leaseobject blob container.

Above shows the nextnumber table with the number value set to one.

Above shows the console window. You can see that before the lease was acquired, the LeaseState for the blob was Available with the LeaseStatus set to Unlocked. As soon as a lease was acquired, the LeaseState changed to Leased, with the LeaseStatus set to Locked. Lastly, the console displays the next number, which is one.

If I run two instances of the program and get them trying to acquire the lease at the same time, one errors with a conflict.

So how can I ensure that both instances run and eventually both get a number? I’ve written another method within the NextNumber class, similar to GetNextNumber, called GetNextNumberWithDelay(). If a conflict is discovered, it retries until the next number has been obtained.

internal int GetNextNumberWithDelay()
{
   //Get the blob reference
   CloudBlockBlob blob = _leaseContainer.GetBlockBlobReference(String.Format("{0}.lck", _filename));
   var nextNumber = 0;
   bool gotNumber = false;

   while (!gotNumber)
   {
       try
       {
         //Acquire the lease for 60 seconds.
         string leaseId = blob.AcquireLease(TimeSpan.FromSeconds(60), Guid.NewGuid().ToString());
         Console.WriteLine("Acquired Lease to update number");

         try
         {
              //Retrieve the entity out of the Azure table.
              TableResult tableResult = _table.Execute(TableOperation.Retrieve<NextNumberEntity>(_filename, ""));
              //Wait to extend the time this calling code holds the lease for (demo purposes)
              Thread.Sleep(TimeSpan.FromSeconds(10));

              NextNumberEntity entity = (NextNumberEntity)tableResult.Result;
              //Update the number
              entity.NumberValue++;

              //Add back into Azure table.
              _table.Execute(TableOperation.Replace(entity));
              nextNumber = entity.NumberValue;
              Console.WriteLine();

              Console.WriteLine(String.Format("The next number is: {0}", nextNumber));
          }
          catch (Exception inner)
          {
              Console.WriteLine("Another Error: " + inner.Message);
          }
          finally
          {
              //Release the blob
              blob.ReleaseLease(AccessCondition.GenerateLeaseCondition(leaseId));
              gotNumber = true;
           }
       }
       catch (Microsoft.WindowsAzure.Storage.StorageException se)
       {
           var response = se.RequestInformation.HttpStatusCode;
           if (response == (int)HttpStatusCode.Conflict)
           {
               //A conflict has been found; the lease is held by another process. Wait and try again.
               Console.Write(".");
               Thread.Sleep(TimeSpan.FromSeconds(2));
           }
           else
           {
              //Rethrow, preserving the original stack trace (throw se would reset it).
              throw;
           }
        }
    }

   Thread.Sleep(TimeSpan.FromSeconds(3));
   return nextNumber;
}

Just to really test my code, instead of calling GetNextNumberWithDelay() just once, I’m going to use Tasks and call it 5 times at once. This way, if I run two instances of the program, I’m requesting 10 different numbers across 2 instances and 10 different threads. The lease on the blob will only allow one thread at a time to request a number, making all the other threads wait. Before I ran this, I reset the NumberValue in the storage table back to 0.

//Run the GetNextNumber code 5 times at once. 
Task.WaitAll(new[]{
             Task.Run(() => nextNumber.GetNextNumberWithDelay()),
             Task.Run(() => nextNumber.GetNextNumberWithDelay()),
             Task.Run(() => nextNumber.GetNextNumberWithDelay()),
             Task.Run(() => nextNumber.GetNextNumberWithDelay()),
             Task.Run(() => nextNumber.GetNextNumberWithDelay())
});

When the code runs, one thread on one instance will obtain the lease; every other thread (on both instances) will have to wait, displaying a dot “.”

As you can see from above, both instances obtained 5 different numbers each; neither instance received 5 sequential numbers, and together they never grabbed the same number twice.

This concludes my blog posts on Azure Blob/Table concurrency.

Code: http://1drv.ms/1JBoIek

Azure Table Storage and managing concurrency


In my previous post, Azure Blob Storage and managing concurrency, I wrote about storing a blob and then using either:

  • Optimistic concurrency – When performing an update, it will verify if the data has changed since the application last read that data.
  • Pessimistic concurrency – When performing an update, it will acquire a lock, preventing other processes from trying to update it.

When using Azure Table storage you only have the option of using optimistic concurrency. If pessimistic locking is needed, one approach is to assign a designated blob for each table, and try to take a lease on the blob before operating the table. I discuss this in my next post, Azure Table Storage and pessimistic concurrency using an Azure blob.

Optimistic concurrency

Every entity that is added to the storage is assigned an ETag. Every time the entity changes, the ETag will also change. It is this ETag that is used to determine whether the entity has changed since the process last read it. The steps are:

  • Read the entity from the table storage, and grab the ETag header value.
  • Update the entity, passing in the ETag value from reading the entity from previous step.
  • If the ETag passed in matches the ETag of the current entity in the storage table, the entity is updated and a new ETag is assigned.
  • If the ETag passed in doesn’t match the ETag of the current entity in the storage table, because another process has changed it, then an HttpStatusCode of 412 (PreconditionFailed) is returned and the entity isn’t updated.

The below code (taken partly from Managing Concurrency using Azure Storage – Sample Application) shows an example of optimistic concurrency. It also shows how to ignore concurrency completely to simulate a different process updating the entity. You will need the following NuGet packages installed for the code to work:

  • Microsoft.WindowsAzure.ConfigurationManager
  • WindowsAzure.Storage
internal void DemonstrateOptimisticConcurrencyUsingEntity()
{
    Console.WriteLine("Demo - Demonstrate optimistic concurrency using a table entity");
    CloudStorageAccount storageAccount = CloudStorageAccount.Parse(CloudConfigurationManager.GetSetting("StorageConnectionString"));
    string tableStorageReference = System.Configuration.ConfigurationManager.AppSettings["TableStorageReference"];

   CloudTableClient tableClient = storageAccount.CreateCloudTableClient();
   CloudTable nextNumberTable = tableClient.GetTableReference(tableStorageReference);
   nextNumberTable.CreateIfNotExists();

   //Add new entity to table (Requires PartitionKey and RowsKey to work)
   NextNumberEntity originalNumber = new NextNumberEntity()
   {
        PartitionKey = "Numbers",
        RowKey = "Next",
        NumberValue = 0
   };

   TableOperation insert = TableOperation.InsertOrReplace(originalNumber);
   nextNumberTable.Execute(insert);
   Console.WriteLine("Entity added. Original ETag = {0}", originalNumber.ETag);

   //Simulate an update by different process
   NextNumberEntity updatedNumber = new NextNumberEntity()
   {
       PartitionKey = "Numbers",
       RowKey = "Next",
       NumberValue = 1
   };

   insert = TableOperation.InsertOrReplace(updatedNumber);
   nextNumberTable.Execute(insert);
   Console.WriteLine("Entity updated. Updated ETag = {0}", updatedNumber.ETag);

   //Try updating originalNumber. Etag is cached within originalNumber and passed by default.
   originalNumber.NumberValue = 2;
   insert = TableOperation.Merge(originalNumber);

   try
   {
       Console.WriteLine("Trying to update original entity");
       nextNumberTable.Execute(insert);
    }
    catch(StorageException ex)
    {
        if (ex.RequestInformation.HttpStatusCode == (int)HttpStatusCode.PreconditionFailed)
        {
            Console.WriteLine("Precondition failure as expected. Entities orignal etag does not match");
        }
        else
        {
            throw;
        }
    }

    Console.WriteLine("Press enter to exit");
    Console.ReadLine();
}
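
A related trick: if you deliberately want to overwrite an entity regardless of what has changed since you read it, you can pass the wildcard ETag. A short sketch continuing from the code above:

//Setting the ETag to "*" tells the storage service to match any ETag,
//so the Replace succeeds unconditionally and no 412 is raised.
originalNumber.NumberValue = 2;
originalNumber.ETag = "*";
nextNumberTable.Execute(TableOperation.Replace(originalNumber));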

My app.config file for my console application has the following in it for the above to work.

<?xml version="1.0" encoding="utf-8" ?>
<configuration>
    <startup> 
        <supportedRuntime version="v4.0" sku=".NETFramework,Version=v4.5" />
    </startup>
  <appSettings>
    <add key=" TableStorageReference" value="nextnumbers"/>
    <add key="StorageConnectionString" value="[Insert your own DefaultEndpointsProtocol to your Azure Storage] "/>
  </appSettings>
</configuration>

The results from running DemonstrateOptimisticConcurrencyUsingEntity()

If you want to see the table, you can use Visual Studio. In Server Explorer, if you sign into your Azure account, under Storage you should see the table you created. Mine is called NextNumbers, which I defined in my app.config “TableStorageReference”.

Within our table, we have the data we placed in from the update.

References: https://azure.microsoft.com/en-gb/blog/managing-concurrency-in-microsoft-azure-storage-2/

You can download my code from here.

Code: http://1drv.ms/1FlbDUw