Abstract

So you’re trying to create new DOM nodes in JavaScript and add CSS rules that set them up for later manipulation, but things aren’t displaying as expected, if at all.  If you’ve found yourself in this situation, like me, you have probably spent a ton of time debugging and searching through your JavaScript and CSS for errors.  Sometimes it works, sometimes it doesn’t.  What’s going on here?  This post shares some thoughts and a fairly simple workaround for this confusing issue.

Background

Here at Jellyvision, we are experimenting with the cool new features outlined in the HTML5 specification (well, the latest draft proposal, that is).  During the course of our development we have noticed a few strange things along the way.  This is one of them, and while it isn’t really confined to HTML5 development, the problem manifested itself while working with CSS3 transitions on dynamic DOM nodes.

What you should already know

This post assumes that you are familiar with JavaScript and the DOM as well as basic CSS syntax.  The example will show “-webkit” vendor prefixes in the CSS, but it could be adapted easily for other current browsers.  Also, other DOM manipulation libraries like jQuery don’t appear to be immune to this issue either.  It’s easy to work around without adding complex libraries, though, or to integrate into code that already uses such libraries.

The setup

Let’s say you’ve got an HTML page that defines a few CSS rules:

.park {
  position: absolute;
  top: 5%;
  left: 5%;
  opacity: 0;
  -webkit-transition: opacity 0.8s ease-out;
}

.position1 {
  top: 15%;
  left: 25%;
}

.show {
  opacity: 1;
}
  • The “.park” rule sets up an initial visual state that is invisible and also adds a CSS3 transition for the “opacity” property.
  • The “.position1” rule is one of potentially many rules that designate a predetermined position on the screen for elements that specify this class.
  • The “.show” rule is a simple change to opacity that makes an element fully opaque/visible.

So far so good, but what are we actually trying to do here?  (Bear with me; this isn’t the best example, but hopefully you’ll understand what the issue is and be able to recognize it when designing your own code.)

Our hypothetical page will have a button that does the following:

  • creates a new DOM node, let’s say a “DIV”
  • adds some text inside of it
  • adds the above CSS classes to the node

The expected result is that when the button is pressed, the new DIV will fade in to the correct position over 0.8s.

Let’s say that the button has a click handler function that is defined as follows:

aButton.onclick = function()
{
  //create our new DIV node
  var d = document.createElement("div");

  //set the text
  d.innerHTML = "some text";

  //set initial state
  d.classList.add("park");

  //assume that some logic, not shown, picked position 1
  d.classList.add("position1");

  //make visible
  d.classList.add("show");

  //attach the new DIV to the screen (note: it doesn't have to be body)
  document.body.appendChild(d);

};

NOTE: I am using the “classList” property to add CSS.  This property is not fully supported by all browsers yet.  Check out Mozilla’s DOM reference page about it here. It really makes life easier, but you could always use a space-delimited string in the “element.className” property as a fallback.
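If you need to support a browser without “classList”, a minimal fallback that works on the “className” string might look like the following.  This is a sketch; the helper name is my own invention, not a standard API:

```javascript
//Minimal stand-in for element.classList.add: works on anything with a
//string "className" property.  Uses a plain loop instead of
//Array.indexOf, since very old browsers may lack that too.
function addClass(el, name) {
  var current = el.className ? el.className.split(/\s+/) : [];
  for (var i = 0; i < current.length; i++) {
    if (current[i] === name) {
      return; //already present, nothing to do
    }
  }
  current.push(name);
  el.className = current.join(" ");
}
```

With this in place, ‘addClass(d, “park”)’ behaves like ‘d.classList.add(“park”)’ for our purposes.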

Um, so what’s the problem?

Well, we expected the new chunk of text to fade in as specified in the transition declaration of the “park” rule.  But what we actually see is that the text either just appears immediately with no fade time, or in some rare cases, it doesn’t appear at all.  That’s not good.  The code is pretty straightforward, so what could the problem be?  Like I said earlier, this isn’t the best example for a number of reasons, and there are actually two things going on that could cause our problem.

Problem 1

The first is that even though our code is executed serially, that is, one line after the next, the underlying operations to the DOM that the browser is going to perform on our behalf are actually asynchronous.  The addition of the new DIV to the existing DOM tree takes a teensy bit of time, but the “appendChild” method may return before the process is complete.  What does that mean for us?  Well, it means that the new DIV node may not have all of its initial properties set as we attempt to modify them or reference them.

The browser performs quite a few calculations and steps before finally rendering the final result of the added node.  The above example really doesn’t highlight this issue that well, but it’s something to consider and leads to the solution described below.

Aren’t there DOM events we can use to get around this?

Well yes, but… no.  The DOM Level 2 spec does include a special class of events known as Mutation Events; however, they should not be used, for two reasons.

  • They don’t remedy the issue in the example, believe me, I tried it.
  • These events have been deprecated in the latest revision of the spec: DOM level 3.  For the curious, the next revision of the spec will replace the ‘MutationEvent’ interface with a new ‘MutationObserver’ interface, but it’s not available for use at this time.

These events should allow you to listen for changes to DOM nodes and trees, for example: the ability to know when the node is fully inserted and ready to be styled.  But alas, this does not work as expected either.

The key to this failure is the second reason for the problem.  A wildcard.

Problem 2 (the real issue)

The wildcard in our example scenario is CSS.  More specifically, how the browser handles the computations for applied CSS rules on elements while adding them programmatically through JavaScript DOM interfaces.  That was a mouthful.  Let’s review what we were trying to do in our example code:

  • create a new DOM node, let’s say a “DIV”
  • add some text inside of it
  • add some CSS classes to the node to make it appear with a fade-in effect

I highlighted the last bullet, as that is where our problem lies.  Here’s the part of the original code where the styles are applied:

//d is an unattached DIV node instance
d.classList.add("park");
d.classList.add("position1");
d.classList.add("show");
document.body.appendChild(d);

Now you may notice that we are appending the new node after the CSS is applied, and you also may be thinking something like “of course it doesn’t work, the CSS can’t be calculated before the node has been attached to the visible DOM tree!” or maybe you were thinking “get to the point already.”  In any case, you’re on the right track.  Simply moving the “appendChild” call above the CSS class additions will have no effect.  Remember Problem 1 above? The addition of the node is not guaranteed to be complete when the CSS classes are added, but that’s not the whole story here.  Please read on…

Notice that the class selectors for “park” and “show” BOTH contain a declaration for the “opacity” property.  Here they are again:

.park {
  ...
  opacity: 0;
  -webkit-transition: opacity 0.8s ease-out;
}

.show {
  opacity: 1;
}

Also notice that both classes are added to the new DIV node in the same function, our button click handler.  Now, with most JavaScript code, we expect that it will be executed in the order it is written, that is, sequentially.  The cases where we don’t expect this usually involve events and other asynchronous things like loading, button clicks, etc.  Our example code doesn’t fit either of those cases.  Sure, it looks like each class is added individually, but we’re dealing with the DOM interface here, not plain-old JavaScript execution.  The browser itself must perform some additional steps that aren’t exposed to the programmer.

The explanation

In this case, what looks like separate additions of CSS to a node is actually not handled that way by the browser. In effect, it is as if all of the CSS declarations defined in our 3 rules are being applied at once; as a single rule.  Here’s what that looks like if it were actually written that way:

.allInOne {
  position: absolute;
  top: 5%;
  left: 5%;
  opacity: 0;
  -webkit-transition: opacity 0.8s ease-out;
  top: 15%;
  left: 25%;
  opacity: 1;
}

It’s a bit ugly, but it is perfectly legal/valid CSS.  You see, the browser is trying to be efficient here, as computing and rendering a web page potentially takes a huge amount of processing power.  If each CSS rule were applied individually as coded in our function, the browser would need to re-compute and re-render the page layout after each one.  This could cause pages to become unresponsive and annoy the user.  So the browser tries to reduce the need for re-computation and re-rendering as much as possible.  It notices that the CSS changes in our function are all happening within the same code block, and they are queued instead of executed*

*Note: actual browser implementations may vary, but we are concerned with the conceptual aspect here, not the actual browser implementation.  If you are a browser expert and have some info on how the browser handles this, please let me know.  We can add it to another blog post!

It probably checks how often changes are being enqueued and determines whether the current queue of changes should be applied immediately or held a little longer, whether for some time threshold or some internal event.  And so we can see that our changes are rolled into one big set of changes that will be rendered after the browser computes and condenses the rules specified.

Notice the repeated properties in the above example.  The question we need to ask is “how does the browser know which value of ‘opacity’ to use?”  Or, more generally, how does it handle repeated properties in a single rule-set?  The answer is simple.

The last declaration of a repeated property wins.  (Note: we’re not talking about selector specificity here, since we’re dealing with a single rule.  We’ll save specificity for another post.  It’s kinda’ awesome.)
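As a rough mental model of this queue-and-merge behavior, consider the toy sketch below.  This is emphatically NOT how any real browser is implemented; the names are mine, and it exists only to illustrate why the last declaration of a repeated property wins:

```javascript
//Toy model of "queue style changes, then merge them into one rule-set".
function StyleQueue() {
  this.pending = []; //each entry is an object of property -> value
}

//Queue a block of declarations without applying it.
StyleQueue.prototype.enqueue = function (declarations) {
  this.pending.push(declarations);
};

//Merge every queued block into a single rule-set.  Later declarations
//overwrite earlier ones, so the last one wins.
StyleQueue.prototype.flush = function () {
  var merged = {};
  for (var i = 0; i < this.pending.length; i++) {
    var block = this.pending[i];
    for (var prop in block) {
      if (block.hasOwnProperty(prop)) {
        merged[prop] = block[prop];
      }
    }
  }
  this.pending = [];
  return merged;
};
```

Queuing the “park”, “position1”, and “show” declarations and flushing once yields a single rule with opacity “1” and no intermediate “0” — which is exactly why there is no change for the transition to react to.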

So if the last one wins, and the browser has combined all of our separate declarations into one rule, what is the final result?  Take a look at this:

.allInOne {
  position: absolute;
  -webkit-transition: opacity 0.8s ease-out;
  top: 15%;
  left: 25%;
  opacity: 1;
}

The repeated property declarations have been replaced by the “winning” ones, and we have a single, simple rule ready to be applied to our node.  But wait, there’s more!

This looks fine, right?  We have our opacity set to ‘1’, we have a transition on the opacity property, so why doesn’t it transition? Why doesn’t it fade-in like we expected?

The devil is in the details.  In the browser’s reduced set of property declarations there is no actual change in opacity to trigger the specified transition.  Yes, opacity is specified, but it is the only place it has ever been specified, and it is in the same rule as the transition declaration.  As it turns out, while the browser is busy reducing and interpreting the CSS as a single set of property declarations, it doesn’t treat this single opacity declaration as a change event.  It simply uses it as the opacity value to render the node at.  It also attaches the transition, but at this point the node’s properties have been calculated (or are being calculated), so it doesn’t know to also tell the transition engine that something changed.  To put it another way, the browser is just setting up the properties of the node, which happen to include a transition property, but during this time it is not yet checking for changes to properties; just adding them.

At this point, the node is styled and added to the DOM tree, but the transition never fires, and the fade-in never occurs.

A solution

It’s probably a good thing that the browser doesn’t try to send property change notifications on new nodes, or queued CSS changes while it’s still computing the final styles. It likely prevents all sorts of infinite loops within the browser’s own code.  Unfortunately for us, this is a bit of a grey area that is not addressed by the specification or the DOM interface.  Some type of event would be a nice addition here.  Then we could listen for a “nodeready” or something and apply new styles after that to trigger our transition, or whatever. (remember, MutationEvents don’t seem to work here)

So what do we do instead?  We use a little delay, that’s what!  After trying a bunch of different ideas, I finally settled on the simple, built-in “setTimeout” function as a reliable workaround to this issue (in the specific scenario required by the code I was working on at Jellyvision).  Now, there are probably set-ups that don’t encounter this issue at all, but I couldn’t avoid it in my project.  This workaround seems to work pretty well and has very little overhead.

Let’s take a look at the previous button handler function, re-written with the workaround:

aButton.onclick = function()
{
  var d = document.createElement("div");
  d.innerHTML = "some text";
  d.classList.add("park");
  d.classList.add("position1");
  document.body.appendChild(d);

  //WORKAROUND ADDED HERE ('setTimeout' is built-in 'window.setTimeout' function)
  setTimeout(
    function()
    {
      //make visible
      d.classList.add("show");
    },
    50
  );
};

Here are the changes:

  • ‘d.classList.add(“show”)’ has been deferred by moving it into a callback passed to the ‘setTimeout’ function
  • the ‘onclick’ function now returns before adding the “show” class, hinting to the DOM engine that it should do its calculations and rendering of the new DIV node
  • after 50ms, the new class is added to the DIV node, which is all ready and set up.  This triggers the opacity property change and the transition fires.

Pretty simple huh?  By adding a slight delay and allowing the other CSS related code to complete before adding the final CSS class, we get the transition to fire as expected.
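The pattern generalizes into a small helper.  Again, this is just a sketch with names of my own invention, not part of any library:

```javascript
//Append a node in its initial ("parked") state, then apply the final
//class on a short delay, so the browser sees an actual property change
//for the transition to animate.
function appendWithTransition(parent, node, finalClass, delay) {
  parent.appendChild(node);
  setTimeout(function () {
    node.classList.add(finalClass);
  }, delay || 50);
}
```

With this, the click handler body reduces to setting up ‘d’ with its “park” and “position1” classes and calling ‘appendWithTransition(document.body, d, “show”)’.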

Some caveats

There are a few things to keep in mind when employing this workaround:

  • Be aware of closure and scope issues: the anonymous callback passed to the ‘setTimeout’ function will be executed with “window” as its context.  If you need to reference other variables or are dynamically constructing your class name, make sure you have closed over the right vars, or explicitly specify the context using “call” or “apply”, etc.
  • The timeout delay that works best for my application at Jellyvision is only 50ms (0.050 seconds), since I need a nearly imperceptible delay in our project.  That’s pretty short, though.  Any shorter and there may not be enough time for the browser to do its computation and rendering of the new node.  Be sure to fine-tune this value to something acceptable for your application.
  • Design your CSS rules and declarations properly.  There may be cases where you don’t need this trick, or where a different CSS architecture is more appropriate.
  • Beware of CSS declarations that trigger asynchronous work.  An example is using a ‘url()’ value in a ‘background’ property declaration: it takes extra, unknown time for the image to load when the CSS rule is applied.  Watch out for this extra delay.  You can write additional handlers to deal with it, but it may not be obvious at first.
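For that last caveat, one defensive approach is to preload the image yourself and only apply the class once it has arrived.  A hedged sketch; the function and class names are illustrative, and you may also want an ‘onerror’ handler in real code:

```javascript
//Preload an image, then apply a class whose 'background' declaration
//references it, so the visual change isn't delayed by a network fetch.
function addClassWhenImageReady(node, className, imageUrl) {
  var img = new Image();
  img.onload = function () {
    node.classList.add(className);
  };
  img.src = imageUrl; //starts the load; onload fires when it's cached
}
```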

Conclusion

I hope that this helps some of you out with your development.  It was a tricky issue to debug in my specific application.  Also, be sure to always review the spec docs; they can provide insight into the inner workings of CSS and the DOM.  Don’t forget to read the documentation for the specific browser you are developing against.  (Mozilla has great docs that usually point out gotchas and compatibility issues with other browsers.)

Until next time, happy coding!



