to go faster

Oct 07


This post is a collection of ideas and options to pick from to help your Sitefinity projects run faster. Some are actually Sitefinity specific, some apply when developing, some are for production, and most can be applied to any website. This post focuses on resource loading; a second post will look at a variety of other things.

HTTP2 and 3

If you are not serving your site over HTTP2 then that should be your major focus. The HTTP2 protocol allows parallel downloading of resources over one TCP connection, unlike HTTP1.1, which opens a connection for each resource.

But with this new protocol, the way we approach resources has changed. At a high level: over HTTP1.1 the browser opens only a handful of connections per host, so with ten files most of them sit queued behind the others. Over HTTP2 all ten stream down the one connection at the same time, so on a 1000Kb per second connection, ten 100Kb files arrive together in about a second. Bundling everything into one or two large files no longer buys you much, because the per-request overhead and queuing those bundles were designed to avoid are gone.

So breaking larger files into smaller ones is now the goal, though there is a tipping point and a balance to it.

To implement HTTP2 you need a supporting browser (all major browsers support it) and a server that supports it, such as Windows Server 2016 or later. Azure Web Apps also supports it. If you have support at both ends, client and server, HTTP2 will be used automatically.

HTTP3 is now also on the horizon. Most browsers support it behind opt-in flags, and server support is coming. Google has been using its underlying protocol, QUIC, since 2015 and Facebook since 2017. The main enhancements are around better/faster connection and transport latency. It uses UDP instead of TCP and is considered more secure. A downside (at present) is higher CPU usage at both the client and server ends, but that is expected to improve.


Preconnect

In your headers, you can get some DNS and TCP work done ahead of time by adding a preconnect hint. A good example is the Snipcart shopping cart on this site. When the page loads, the user's cart information is hidden, but I do make a call at the end of the page load to populate the current items in the cart. By preconnecting, the shopping cart functions that little bit faster when it fetches that cart item count, because the DNS lookup and TCP connection are already established.
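A preconnect hint is a single link element in the head. The Snipcart host name below is illustrative; use whichever origin your page will call later:

```html
<!-- Resolve DNS and open the TCP/TLS connection early (host name illustrative) -->
<link rel="preconnect" href="https://cdn.snipcart.com" crossorigin />
```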

Read about Preconnect details.


Preload

Preload allows you to load resources now, knowing you will need them later.

Preload in detail.

A good example is any URL to an image in your CSS file. When the CSS file is being processed and the parser comes across a background image, processing stops while the file is downloaded before continuing. If you preload the image, it is already local and processing goes faster.

<link rel="preload" href="/ResourcePackages/Stuff/Content/images/Background-Home.jpg" as="image" type="image/jpeg" />

Another big blocking resource is font files; these are good candidates to preload as they can be quite large. But you must add the crossorigin attribute or the font will be downloaded twice.

<link rel="preload" href="" as="font" type="font/woff2" crossorigin />

Avoid @import

While I am talking about CSS, avoid @import, as it defeats this parallelism. The imported references cannot be downloaded in parallel: the style sheet is downloaded and parsed, and only then is each import downloaded and processed. If you have more than one import, the first downloads and processes, then the second, then the third and so on. @import has its place, but do consider whether it would be better to break out those resources and download them in parallel.
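To make the difference concrete, here is the serial pattern next to the parallel one (file names illustrative):

```html
<!-- Serial: main.css must download and parse before vendor.css is even requested -->
<!-- main.css contains: @import url("vendor.css"); -->
<link rel="stylesheet" href="/css/main.css" />

<!-- Parallel over HTTP2: both sheets are requested together -->
<link rel="stylesheet" href="/css/main.css" />
<link rel="stylesheet" href="/css/vendor.css" />
```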

Defer Scripts

Traditionally we put scripts at the end of the HTML document because they are render-blocking resources. But let's consider the new world with HTTP2. First, we have a defer attribute we can apply. This will download the file now but not execute it until the HTML has been parsed and the 'domInteractive' event fires.
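In markup it is just one attribute on a script in the head (path illustrative):

```html
<!-- Downloads in parallel with the page, executes only after parsing finishes -->
<script src="/scripts/site.js" defer></script>
```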

For argument's sake, let's say our script resources take two seconds to download and two seconds to process. If we put our scripts at the end of the document that means everything above it gets processed first, (HTML and CSS) and then our scripts add 4 seconds on at the end.

If we place our scripts in the head and add the defer attribute, this will mean the first two seconds are (potentially) saved because they are parallel downloaded with everything at the start. Then, when the document is parsed, the scripts are already present and we only have to add the two seconds of script parsing time to the end.

Most importantly, the browser gets to the end of its processing faster, so the user sees the page appear sooner and the domInteractive event fires earlier. This is important because this event is a primary measurement of perceived load speed in page speed tests.

The script element also has the async attribute. This downloads and executes the script in its own time. If you add a script in the head with this attribute it will download the resource like defer, but as soon as the download completes it will stop the parsing of the page to execute itself before letting page parsing continue. If you specify both async and defer, async takes precedence.

These attributes are relevant when the script is set in the head element. They become a little pointless if your scripts are placed at the bottom of the body element.

So, async will still block the parsing of the page while defer will not.

One (potentially) desirable thing about defer is that deferred scripts still execute in order. So if you need one script to be loaded before another (jQuery, say), then you should perhaps avoid async.
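For example, both of these download in parallel, but jQuery is guaranteed to execute before the plugin that depends on it (paths illustrative):

```html
<script src="/vendors/jquery.min.js" defer></script>
<script src="/vendors/jquery.fancybox.min.js" defer></script>
```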

Deferring Styles

The link element does not have the defer attribute but we can do the same thing.

Deferring style sheets.

Our use case would be a CSS file that styles all of our modals or our Ajax HTML calls: something we need on every page but that is not visible at page load, or that sits below the fold.

We can use the preload option and add an 'onload' attribute to load it when it is ready.

<link rel="preload" href="/vendors/jquery.fancybox.min.css" as="style" onload="this.onload = null;this.rel='stylesheet'" />

But there is a second way to do this. Mark your style sheet as being for 'print' media. This will load it but not parse it. Then, on the 'onload' event, you change the media to 'all'.

<link rel="stylesheet" href="/my.css" media="print" onload="this.media='all'">

I do not know if one is better than the other.

Where does this help? If you look at Google's Page Speed test there is a 'Reduce render-blocking resources' audit. CSS files are render-blocking and affect the First Paint moment when the screen is displayed to the user. So any styles that are not needed straight away should be deferred.

Render blocking detail.

What is happening here is that the style is being loaded async. When the file has loaded the processing of the page is not halted to process the file contents but applied later when it is done. A little like running jQuery and adding new styles to your page after it has loaded.

Say we have two resources, our HTML doc and a CSS file. If we just link it like normal the process is:
  1. Get HTML
  2. Parse HTML
  3. Get CSS
  4. Parse CSS
  5. Create the Render Tree and start showing the user something.
Let's say each step is 1 second, so that is 5 seconds in total until the white screen shows something.

By asyncing it, the process runs more like this:
  1. Get HTML
  2. Parse HTML
  3. Get CSS
  4. Create the render tree and show something
  5. Parse CSS
Now we see something on the screen a second faster.

I applied the above to this site, targeting my Google font CSS references and a jQuery Fancybox style sheet. This gave me a 20 point increase in my Google Page Speed score for mobile. The biggest single thing to affect a score that I have ever seen.

The downside is that the user may see the page display shift as styles are applied after the fact. That is a reason not to defer your main style sheet, but to defer all the auxiliary ones. You should consider splitting your style sheets into these two types: ones you want to load right away, and ones that can load a little late.

Another technique is to inline, within a <style> element, the styles for everything displayed above the fold, while everything else is processed async. This means you have no CSS blocking, as the critical styles come with the HTML doc. It may also mean you end up with duplicate styles, as you have to maintain them both inline and in the CSS file.
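A minimal sketch of that split, with an illustrative above-the-fold rule inlined and the rest of the CSS deferred via the preload pattern from earlier:

```html
<!-- Inline the above-the-fold rules so first paint needs no extra request -->
<style>
  .hero { background: #003; color: #fff; min-height: 60vh; }
</style>
<!-- Everything else loads without blocking render (path illustrative) -->
<link rel="preload" href="/css/site.css" as="style" onload="this.onload=null;this.rel='stylesheet'" />
```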


jQuery and Sitefinity

With Sitefinity 13.0 the version of jQuery has moved to 3.4.1. Before that, jQuery was at version 1.12.1.

If you are good with using the embedded version of jQuery from Sitefinity then you just need to add:

@Html.Script(ScriptRef.JQuery, "head", false)

If you wish to use a different version of jQuery here are a couple of things to consider. (You should also consider script loading that I talked about above but I am keeping it simple here.)

It is easy enough to add a script reference to your desired version in your 'layout.cshtml' file. But you must also consider that if you are using a Sitefinity widget that references jQuery, you will be loading the inbuilt version as well as your own. Downloading and processing two versions of jQuery is not ideal.

This knowledge base article has a list of the widgets that you should override to change the version of jQuery to match your desired version. You do have to consider and test that your version won't break the widgets' JavaScript.

But don't be surprised to find that this breaks the backend page editing experience. (This is the reason it took a while for them to upgrade I think.) A way around that is to conditionally load the version.

@if (!SitefinityContext.IsBackend)
{
    <script src="//my-jquery-x.y.z"></script>
}
else
{
    @Html.Script(ScriptRef.JQuery, "head", false)
}

Thus, when in the backend, the Sitefinity version is loaded and nothing should break. It may break your JS code, but most of the time you don't need or want all your front-end functionality running or available when page editing. So if errors are thrown in the console you may be happy to ignore them. You can also improve on that by using the same conditional check to disable events, animations and/or functionality in your widgets.
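As a sketch of that guard on the client side, the layout could emit the backend flag into a global (the `siteConfig` object and widget names here are hypothetical, not Sitefinity APIs):

```javascript
// The Razor layout can emit something like:
//   <script>var siteConfig = { isBackend: /* lower-cased SitefinityContext.IsBackend */ false };</script>
var siteConfig = { isBackend: false };

function initFrontEndWidgets(config) {
  // While page editing, skip event wiring and animations so nothing fights the editor.
  if (config.isBackend) {
    return "init skipped (backend)";
  }
  // ...wire up sliders, scroll handlers, trackers here...
  return "widgets initialised";
}

initFrontEndWidgets(siteConfig);
```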

You can also avoid this by using the recommended jQuery noConflict method. But then you are back to loading two copies of the file.

Delay loading

Another Page Speed measurement is 'First Input Delay'. This is the delay between the first time a user can interact with your page and the time the browser is able to respond. Often it is inflated by scripts doing their 'init' stuff in the background. Reducing JavaScript and slimming down 'onload' events can help here.

I had a scenario where a Facebook feed being loaded on the page was causing a lot of Page Speed result issues. Same issue with a Google map. Both of these sections were below the fold, further down the page.

So Google Page Speed is complaining that Google maps needs improving and that I should do something about it, typical!

To speed up the page I loaded the Facebook iFrame with an empty src attribute and added an empty div element to hold the map. I also did not reference the Google Maps script.

I added an on scroll event to the page. When the user scrolled down the page I populated the iFrame src attribute, dynamically loaded the Google Maps script and ran the init method to add the map to the page.

The result was that my Page Speed scores greatly improved. Sometimes you would notice the Facebook feed popping in and filling its space but the map was in the footer and was always there by the time you scrolled down. And for this site, it was all completely acceptable for the performance improvements.

$(document).on('scroll.fb', function () {
    if ($("#JuniorFacebookFeed").length && $("#JuniorFacebookFeed").attr("src") === "") {
        $("#JuniorFacebookFeed").attr("src", feedSrc); // feedSrc holds the real feed URL
        $(document).off('scroll.fb');
    }
});

I first assign the event and give it a namespace to avoid a clash with the Google map doing the same thing in another script. Adding the off method at the end ensures that the event fires just once; I don't want it endlessly firing as the user scrolls around the page.

I check for the existence of the feed first and that the src is empty. In my widget view, I don't load this at all when in page editing mode and just display a note to the user that this is where the feed will be.

And if that is all good I update the src attribute.

For the Google map, I do the same, but I am loading a script.

$(document).on('scroll.gmap', function () {
    let myScript = document.createElement("script");
    myScript.src = mapScriptSrc; // the Google Maps script URL, including its callback parameter
    document.body.appendChild(myScript);
    $(document).off('scroll.gmap');
});

Note that the script has a callback parameter which is the name of a function to run once the script is loaded and this builds my map.

Another idea: if you have any off-screen forms, consider loading their resources only when the form is requested. A good candidate is the Google Recaptcha script. Rather than your page suffering the performance hit of loading this script when it may not even be required, load it when you are pretty sure it will be.
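A small load-once helper makes this pattern easy; all names here are hypothetical, and the callback you pass in would be whatever actually appends the Recaptcha script tag:

```javascript
// Returns a function that invokes loadFn only on its first call.
function makeLazyLoader(loadFn) {
  let loaded = false;
  return function () {
    if (loaded) return false; // already requested, do nothing
    loaded = true;
    loadFn();
    return true;
  };
}

// In the browser you might wire it to the first focus on the form:
//   document.getElementById("contact-form")
//     .addEventListener("focusin", makeLazyLoader(appendRecaptchaScript));
let calls = 0;
const loadOnce = makeLazyLoader(function () { calls += 1; });
loadOnce(); // loads
loadOnce(); // ignored
```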

JavaScript Init

The more processing your JavaScript does at the start, the longer the page will take to be declared 'fully loaded' by the Page Speed testers.

Consider whether you could embed any of those Ajax JSON calls. An example may be dropdown or select inputs on the screen where the options change depending on the first selection, say a country and city select. On load you make an Ajax call to get the country list so it is ready right away for the user.

Rather than adding that 'GET' request to your load time, consider whether you could include the data as part of your HTML doc. It will be faster for the user if all the country and city data is placed inline, allowing your script to read it locally rather than go off to the server. If your data is related to a module then you will automatically inherit cache dependency for any data changes as well.
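A sketch of reading embedded data instead of fetching it; the element id and data shape are hypothetical:

```javascript
// In the layout, the server renders something like:
//   <script id="country-data" type="application/json">{ ... }</script>
// The widget script then parses it locally instead of making a GET request.
function readInlineData(jsonText) {
  // In the browser: jsonText = document.getElementById("country-data").textContent
  return JSON.parse(jsonText);
}

const sample = '{"NZ": ["Auckland", "Wellington"], "AU": ["Sydney", "Melbourne"]}';
const countries = readInlineData(sample);
countries["NZ"][0]; // read locally, no round trip to the server
```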

To CDN or Not

In the past, we almost always chose external CDN hosts for third-party resources like jQuery. But that comes with extra DNS lookups, and these add time. Page Speed tools will often hit you up about too many external resources now and suggest you reduce the number of them.

With HTTP2 now all but standard, hosting these resources yourself may be the better option.

CDNs still have their place, especially for having resources closer to the requester. Today I tend to host as much as I can locally and use big caching/CDN services like Imperva or CloudFront to front my site, rather than using a raft of one-off CDN hosts.

Conditionally Loading

Two common pieces of script I have are the analytics and monitoring JavaScript code.

In development and most of my environments, I don't want or need these scripts running. I also don't want any QA environment page analytics heading to my production analytics data. To avoid this I will often wrap them in a conditional statement checking what the environment is. This is often achieved by a value in the web.config appSettings called 'Environment'.

I also do this with certain third-party scripts that add a lot of load time to my page and which I don't need in local development. Examples are 'AddThis' and 'Optimizely'. I found these two add a lot of processing, so I don't load them when my environment is 'Local'.

@if (System.Configuration.ConfigurationManager.AppSettings["Environment"] == "Live"
    && !SitefinityContext.IsBackend)
{
    <!-- analytics and monitoring scripts go here -->
}

If I do wish to test and develop against them I have to, of course, move them out and ensure they get loaded.

Darrin Robertson - Sitefinity Developer

Thanks for reading and feel free to comment - Darrin Robertson

If I was really helpful and you would buy me a coffee if you could, yay! You can.
