The WebApp Wizard: Web development made magical

4 Mar 2012

Web development process, from end to end

Just a little post I have wanted to write for a long time. Even if it is mainly aimed at beginner web developers, I think it could be interesting for more experienced ones to read too. I don't pretend to be one of those kick-ass developers we see out there, just an average one who loves his job and does it as well as he can, yet not perfectly. So here is a quick, honest overview of an average web development process and the tools that come with it, with their advantages and drawbacks.

The IDE

Maybe one of the most important tools, as we spend a great part of our time in it. I have used a few of them, from the simplest to the most advanced. After having used notepad, a few fancy visual editors, and a few awesome editors like TextMate and Zend Studio, I think I have pretty much made up my mind with Aptana.

First, the default theme looks great and has been a real relief for my eyes. At first, I wondered why a dark background was used; it felt like a regression: we used to type light characters on dark backgrounds a long time ago, and then we started writing black characters on a white background. Just like we do with pen and paper, which seemed kind of logical to me. But when we think about it, this is just stupid. A dark background emits far less light than a light one, so it puts much less strain on the eyes. So I'm definitely convinced by this default theme.

Then, even if they are not always perfect, the available bundles for auto-completion and documentation work quite well and suit my needs. Plus, there are some little bonuses like Capistrano integration that made me adopt it.

The only thing I regret from Zend Studio is the PHPUnit / Code coverage / Debug integration. That was really great. But Aptana will reintegrate debug support someday, I hope.

And last but not least, it is freely available.

There are also really great editors like TextMate, but honestly, I haven't had the courage yet to learn how to use them properly. Just know that they embed tons of shortcuts and features that save you time and effort.

Minification

A key aspect of web development, as many of us know, consists in reducing the number of HTTP requests our pages make, and in reducing the weight we transfer over the wire. I became aware of this relatively recently, and since then I have spent quite a lot of time trying to address it. Reducing the number and weight of our requests helps a lot in producing fast-responding pages, which is crucial for users on a slow Internet connection, but also matters for users with a good broadband connection.

Combining and minifying our files is good, but we don't want it to slow down or clutter up our development process. Ideally, this should be taken care of automatically. After trying a few ways to do that minification process more or less manually, I recently stumbled upon the ideal solution for me, which I talked about recently. Assetic is so good not only because it takes care of combining and minifying your files, but also because it lets you apply all sorts of filters like Less, Sass, or virtually anything else you can think of. Moreover, it facilitates caching AND leaves no room for an out-of-date cache. But I'll talk about this in the next paragraph.

Caching

Caching is also very important. I can't make up my mind on this question: which matters most, minifying or caching? After all, if our caching is right, the lack of minification will only hurt us the first time the user hits the page. Once all the files are in the cache, minification doesn't matter anymore.

So I consider caching to be at the same level of importance as minification. Once again, Assetic helped me a lot with this. But whatever tool you use, the key is the key. Nope, that sentence has no mistake in it.

The most basic cache associates a key with a piece of content, and updates the content related to a key when necessary. The problem with this is that the client can't know directly whether its cache is up to date or not. It has to ask the server: "hey, is this cache still good or do I need to update it?" If it doesn't, it can end up with out-of-date cache entries, which can be more or less of an issue depending on the context. So we can't have both the best performance and the best reliability with this kind of cache.
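
To make that round trip concrete, here is a minimal Node sketch of this kind of revalidation, assuming an ETag-based check (the file content, hash choice and port are made up for the example): even when nothing changed, the client still has to ask.

var http = require('http');
var crypto = require('crypto');

http.createServer(function(req, res) {
  var body = 'body { color: #333; }'; // pretend this is our CSS file
  var etag = '"' + crypto.createHash('md5').update(body).digest('hex') + '"';

  if (req.headers['if-none-match'] === etag) {
    // The client's copy is still good: answer "not modified", send no body.
    res.writeHead(304, { 'ETag': etag });
    res.end();
  } else {
    res.writeHead(200, { 'Content-Type': 'text/css', 'ETag': etag });
    res.end(body);
  }
}).listen(8080);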

Another approach is to update not the value of a cache entry, but the key itself. When the client finds a key (filename) it doesn't know about, it is forced to ask the server for the content. As long as the content doesn't change, the key won't change. But as soon as the content is modified, a new key/value pair is created, whether the old one is kept or deleted. On one hand, we have a single key with multiple values over time; on the other hand, we have multiple keys over time, with only one value each. So the evolution of the cache is not a problem anymore. In other words, you have to version your filenames. You can do it manually, or you can let a tool like Assetic take care of it for you. It allows you to always serve fresh content, with maximum caching capability, as the client never has to ask whether a cached entry is still OK. Be careful though: adding a version number in the query string isn't always a good idea, as some proxies rely only on the filename to decide whether to download the file or serve it from their cache. So the best option is to change the filename itself.
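
Here is a minimal sketch of that filename versioning idea; Assetic does this for you, and the helper below is only an illustration with a made-up naming scheme:

var fs = require('fs');
var crypto = require('crypto');

// Derive a versioned filename from the file's contents: if the content changes,
// the name changes, so browsers and proxies are forced to fetch the new file;
// unchanged files keep their old name and can be cached "forever".
function versionedName(path) {
  var content = fs.readFileSync(path);
  var hash = crypto.createHash('md5').update(content).digest('hex').slice(0, 8);
  return path.replace(/(\.\w+)$/, '-' + hash + '$1'); // e.g. common.css -> common-3f2a9c1d.css
}

console.log(versionedName('style/common.css'));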

Deploying

The last important step is how you deploy your app. Like many people, I started with basic FTP uploads, but as soon as you start working on more serious applications, you need something more reliable and more automated. That's why I began writing deployment scripts to help me out. The main problem when we do this manually is that we are humans, so we are prone to errors and to forgetting things. How many times did I have to put the server configuration file back in place, after it was overwritten by my development one, which I had just committed by mistake? I don't know, but what I know for sure is "too many times". A script is more reliable for this, but when that script is written by one human, it is also prone to error. The difference is that once the error is spotted and corrected, it is corrected for good.

But we can do better: a deployment script that is written and used by many humans, therefore reducing the risk of error. Not to mention this script will also probably have more features, which can be good too. That's the case with Capistrano, a really great deployment tool I started using last year. It not only takes care of deploying an app from a repository, it also versions the releases and offers a nice rollback feature in case something goes wrong. Another really nice thing is that it lets you store your users' files outside of your code, and creates symlinks automatically to ensure everything works as if the files were right under your code's tree.

To sum up

This was a quick post despite its length, and it only scratches the surface of a few topics, but the aim was not to dive too deeply into them. It is meant to give a few leads to follow, so you can form your own opinion about which tools to use or not. I could also have talked about testing, but sadly I am still not using it as a real part of my development process. I tend to write tests more and more, but I don't think I am ready to really talk about this right now. A lot of other people out there will do it way better than me.

I just wanted to share the principles I work with, hoping it will help somebody, as I know I searched for a long time before finding my way of doing these things.

Happy coding!

7 Dec 2011

Web performance: further optimization

If you use tools like YSlow, PageSpeed and WebPageTest, you have probably already come a long way on web performance.

The problem

Working on a website that already had good YSlow / PageSpeed ratings, I just wanted to push a little further: can I get up to 100/100, or very, very close to it? That may seem a bit pointless, I mean: who is going to be able to tell the difference? Will it make any difference for the server, either? Well, I don't know, but I wanted to try it for fun. Yeah, a strange kind of fun.

So I looked at the metrics of my favorite tools, and PageSpeed told me something: maybe you should try to inline these scripts. What? Am I not supposed to make my scripts external (rule 8 of my bible)? In fact, not always. Making an extra HTTP request goes against rule 1, after all. A small file is often not worth a request. So we'd better inline it, right into the page, to avoid unnecessary HTTP overhead.

But hey, I don't want to sacrifice my cleanly organized JS folder just for the sake of performance. So I had to come up with something that would inline my scripts/CSS when necessary, without me having to copy and paste the contents of said resources. More importantly, I want it to be dynamic: maybe my files will grow large enough to be worth an extra HTTP request again. So there is no way I manage this by hand.

The solution

Working with Smarty on this project, I decided to make a little Smarty plugin to help me do this. The idea is, based on a file size limit, to include scripts the "normal" way or to inline them.

I came up with two little plugins, one for JS files, the other for CSS files.
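
The plugins themselves are Smarty/PHP, but the decision they make can be sketched in a few lines of JavaScript; the INLINE_LIMIT value and the scriptTag() helper below are made up for illustration, not the plugins' actual code:

var fs = require('fs');

// Below a given size, an extra HTTP request costs more than the bytes it saves.
var INLINE_LIMIT = 1024; // bytes (arbitrary threshold)

function scriptTag(path) {
  if (fs.statSync(path).size <= INLINE_LIMIT) {
    // Small file: inline its contents right into the page.
    return '<script>' + fs.readFileSync(path, 'utf8') + '</script>';
  }
  // Large file: keep it external so it can be cached separately.
  return '<script src="' + path + '"></script>';
}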

The results

Using these plugins resulted in one tiny script (a few hundred bytes) and one CSS file being inlined on some pages. To be perfectly honest, I didn't measure whether there was any "real" performance improvement, and I don't know if it had a big impact from the user's point of view. But it is obvious that this really tiny JavaScript file generated more HTTP overhead than its actual content, which is ridiculous. So inlining it can't hurt performance either.

I was quite surprised by the results from the metrics point of view, though. My YSlow score jumped from 92-93 up to 99! Now, that's what I'm talking about: a pretty solid A-grade score :-). I didn't expect much gain on the YSlow side, as it doesn't mention anything about inlining your scripts. I was even expecting a slightly lower score, as YSlow tells you to make your scripts external. But it seems that it doesn't rely only on simple rules of thumb, but also on actual performance.

[Screenshot: 99 YSlow A-grade score]

The PageSpeed score also jumped from something around 91 to 98, which is less of a surprise, as I just applied its recommendations.

What about the server?

That's nice, but I still have a doubt about overall performance or, more accurately, server load. That's not really a problem in my case, as I don't have thousands of simultaneous users, so my server can take a little extra load, but looking at my plugins' implementation, I wonder if it couldn't be optimized. Each time a plugin is used, it checks the file size to decide whether to inline the file or not. I don't know whether that is a heavy operation, or whether there is some kind of cache somewhere in the system that avoids hitting the disk each time, etc. And when it decides to inline the file, it reads its contents and writes them into the page. I don't know precisely how heavy that is either.

Anyway, as I said, that's not much of an issue for me, so the overall performance isn't affected. And that's easy to understand: it's quite easy to shave 100 ms off the front end (any HTTP request easily takes that much), but what is 100 ms on the back end? A whole lot of time. 100 ms of PHP execution (or Ruby, or Python, or Java, or C) is huge: most operations won't take more than a few milliseconds. So I think it's pretty safe to say that avoiding unnecessary HTTP traffic is worth a little extra work on the server. And that's the whole point! I see people working hard to optimize their server code, just to save 3 ms here and there. On the server side, that may be important if you have tons of simultaneous connections, but the user won't even notice. When you start working even a little on front-end optimization, you save milliseconds in packs of 500!

More!

So, how could I get up to 100/100 on YSlow (and maybe PageSpeed)? Well, if I look at YSlow output, I see this:

[Screenshot: Google Analytics preventing a 100 YSlow A-grade score]

The Google Analytics script is not allowing me to reach the holy 100, just because it doesn't set a far-future expiration date, thus making it hard for the browser to cache. I don't know if there is any way to fix this, and I would be glad to hear there is one. I'm pretty sure that would be a nice improvement for the user, as this script doesn't download that fast.

6 Dec 2011

Asset management with Assetic (and Smarty)

Being quite a web performance geek, I have come across various solutions for JS and CSS resource management. But this time, I think I found the right one.

At the very beginning, like everyone else, I didn't care much about having 10 JavaScript and 10 CSS files included in my web pages. But as my users faced performance and strong bandwidth constraints, I started digging into how to improve loading speeds. And then I fell into web perf, reading this book. I started applying its advice by hand, and as I saw and felt a real improvement, even on my high-speed Internet connection, I wanted more. I began building my own automation solutions, and they did the job quite well, even if they were still a bit raw and not so easy to use. But at that time, I was working alone on my project. As another developer now joins me from time to time, I needed to make minification, compression, caching, and all the rest as painless as possible. My main concern was, at first, to provide an easy way to switch between production and development mode (minified or not), without having to launch any particular command or generate any file. A simple boolean in my configuration had to do the trick. That was an absolute requirement for me.

Then I discovered Assetic thanks to @tijuan_. Assetic is a Symfony2 component, which I tried to use on my non-Symfony project.

And there it is. The thing that I always dreamed about. It allows you to combine and minify the files you want in one single file, and it has several filters you can apply to your assets, like YUI compressor, LESS/SASS parser, Google Closure compiler, among others.

The only problem was: how to integrate this thing neatly? As I use Smarty, some custom block plugin seemed the way to go. So I made it. The idea is that you say which files you want to include, which filters you want to apply to them, and whether you are in debug configuration or not. The plugin then takes care of combining and minifying your files if not in debug mode, generating a single file which is regenerated with a different name whenever the source files change. That means you can choose a very aggressive caching strategy without any trouble. The first modification you make in your source code results in a new file being generated, so the browser has no choice but to download it. You already use some fake parameter in your URLs to avoid caching issues? Not a bad idea, but still not perfect: some proxies ignore URL query strings. The only 100% reliable option is to generate a totally new file name, which Assetic does.

Just a little example of usage:

{assetic 
    assets="style/reset.css,style/common.css,style/other.css" 
    output="css" 
    build_path="style/build" 
    debug=false 
    filters="yui_css,less" 
    asset_url=asset_url}
    <link rel="stylesheet" href="{$asset_url}">
{/assetic}

Cherry on the cake: it allows you to keep track of dependencies between your files. You don't have to explicitly include the libs you need, assetic-smarty takes care of it, keeping your "files to include" list clean and uncluttered. Don't worry, it won't include the same file twice.

Enough chat, I strongly encourage you to discover Assetic and the associated Smarty plugin.

4 Oct 2011

Time input mask, RegExp powered

Hi.

I have dreamed about this for a long time: an input mask which would not allow any invalid time such as 25:63. I know we can achieve this by checking the input via a function bound on keydown, but the goal here is to do it without functions, with just one regular expression. We could also check the input on blur, for instance. But that could be misleading for the user, who could enter an invalid time and only be warned at the end.

You think that's no big deal? Maybe you think it is as simple as /^[0-2][0-9]:[0-5][0-9]$/

That's where the fun begins. At the time you type the first digit, this expression will never match: "1" doesn't match /^[0-2][0-9]:[0-5][0-9]$/. Remember, we want a match on each keystroke.
So here is one that does match:

/^(([0-1][0-9]|2[0-3]|[0-9])|([0-1][0-9]|2[0-3]|[0-9])(:|h)[0-5]?[0-9]?)$/

Boom. That bulky horror matches nothing more than a simple time input. Let's try to rebuild it. First, we can guess we will have a problem with the ":" or "h" separator. It cannot simply be made optional, but if it is mandatory, we cannot type the first digits, as the regexp won't match... right? Wrong. We have to use alternatives: we can have one or two digits, or one or two digits followed by the separator, followed by up to two more digits.

/^((\d{1,2})|(\d{1,2})(:|h)(\d{0,2}))$/
  • On first keystroke (if we type a number), the first part of the alternative will match
  • On second keystroke, the first part will still match
  • On third keystroke, the first part won't match anymore (max 2 numbers). The second part will, though, if and only if this third character is a ":" or "h" separator
  • On fourth keystroke, the second part of the alternative will match (if we type a number)
  • On fifth keystroke, the second part will also match, if we type a number

That seems to be a good start, and it is not that complex. We also note that it allows typing something like "9:54", which can be a good thing (we do not need to pad with an initial "0"). However, it is far from perfect, as we can also type something like "89:76", which is not at all a legit time input. But we're gonna fix this, and that's when this (almost) clean expression starts looking like a pile of garbage.

Let's focus on the hours part. We want to be able to type 8:25 as well as 02:31 and 18:58, but not 24:14 nor 16:85. This is quite simple: we can, once again, use alternatives. Either we want one digit, between 0 and 9, or we want two digits, with the first one between 0 and 2, and the second one between 0 and 9, unless the first one is 2 (in which case the second one must be between 0 and 3). Wow, slow down. So, we get:

  • Any single-digit number between 0 and 9: [0-9] (or \d)
  • Any two-digit number between 00 and 19: [0-1][0-9] (or [0-1]\d)
  • Any two-digit number between 20 and 23: 2[0-3]

So we've got our hour-matching regexp, with range checking:

[0-9]|[0-1][0-9]|2[0-3]

It's way simpler for the minutes, as we just want to check a two-digit number between 00 and 59. Remember we want to match while the user types, so we add "?" to make the minute digits "optional":

[0-5]?[0-9]?

Just pop that in place of our previous naive \d{1,2} and \d{0,2}, and we've got our super-regexp:

/^(([0-1][0-9]|2[0-3]|[0-9])|([0-1][0-9]|2[0-3]|[0-9])(:|h)[0-5]?[0-9]?)$/
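
If you want to convince yourself, here is a quick check of the expression against what the mask sees after each keystroke (run it in a browser console or Node):

// Every prefix of a valid time matches while typing "18:58":
var timeMask = /^(([0-1][0-9]|2[0-3]|[0-9])|([0-1][0-9]|2[0-3]|[0-9])(:|h)[0-5]?[0-9]?)$/;

['1', '18', '18:', '18:5', '18:58'].forEach(function(input) {
  console.log(input, timeMask.test(input)); // true every time
});

// Invalid inputs are rejected as soon as they are typed:
['24', '25', '89:76'].forEach(function(input) {
  console.log(input, timeMask.test(input)); // false every time
});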

You can use this with Ruoso's compact and efficient jQuery regexMask if you want to give it a try.

I still don't know what to think about this: the regexp is not that complex, but it is still complex for such a simple case. What about more complex cases, like e-mail, for instance? An e-mail address can be checked quite easily with a regexp (and still, the real e-mail regexp is not as simple as many believe), but if we add on-the-fly checking, it must start to get quite unreadable.

But hey, a single regexp...

6 Jul 2011

Distributed, transparent NodeJS architecture in hostile environment

Hi everyone.

As we're trying to redesign an application at ORU-MiP, we're wondering if something already exists.

Let's settle the context first: it is an application designed for disaster and emergency situations. It allows users (health professionals) to simply input victims' basic data. There are three main concerns about this application:

  1. Anybody must be able to use it under difficult, extreme circumstances. Just imagine you have a ton of victims coming at you, and you must ask their names, ages, etc. and type this as fast as possible into an unknown piece of software...
  2. It must just work. You have absolutely no time to configure anything: just plug your tablet PC / iPad / whatever you want into the local network, type a URL in your browser, and you're ready to go.
  3. Maybe the most difficult part: we have absolutely no idea what we're going to encounter. We cannot rely on one medium for the network part. 3G might not be available (imagine a subway crash). The pieces of hardware to connect could be too far apart for RJ45. There might be radio interference, making any wireless attempt fail. A disaster can happen anywhere, anytime, so we must assume we are going to be in a really unfriendly environment. We're not talking about users in an office, sitting on a chair in front of a 24-inch screen here.

The last 2 points are very important in this post.

So, what do we want to achieve? It's quite simple:

  1. Any client must be able to connect to the system and use it just by typing a URL in a browser.
  2. The clients will have everything stored locally after the first step (hello localStorage, hello cache), and must be able to work independently of the network state.
  3. The server actually just broadcasts the messages it receives to the other connected clients (synchronization messages: create, update and delete).
  4. The server must be hosted on one of the clients (pre-installed). Less hardware to deploy is better.
  5. There might be more than one server on the network, just in case. In fact, every client will also be a server.
  6. If one server goes down, another must be able to replace it instantly, with no action from the user.
  7. Everything must be absolutely transparent for the user.
  8. The user must have nothing to do (did I already say that?)

It seems like an HTML5 web app is a perfect fit for our needs. It allows us to combine ease of deployment (just type in an URL) and client-side storage and processing.
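
As a rough idea of what "everything stored locally" could look like, here is a minimal localStorage sketch; the key name and record shape are illustrative, not the application's actual schema:

// Append a victim record to a collection kept in localStorage (illustrative schema).
function saveVictim(victim) {
  var victims = JSON.parse(localStorage.getItem('victims') || '[]');
  victims.push(victim);
  localStorage.setItem('victims', JSON.stringify(victims));
}

saveVictim({ name: 'Doe', age: 42, status: 'stable' });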

It could be really simple: one of the clients also hosts the server, and everything is synchronized through this server. But what if this server fails? What if the tablet PC it is installed on runs out of battery? What if the system crashes? What if the tablet PC is destroyed (unlikely, but still possible)? And believe me, these considerations are not here just for fun. It happens, and when it does, the system must carry on with no action from the user.

So the idea is to duplicate the server on each client. We can pre-install the machines, as long as it's not some complicated, time-consuming process. The application is currently coded in C#.NET, and it is a real pain to install/update the framework, install system updates, etc. We want to get rid of this.

OK, so we have a few machines with the same lightweight server on them. NodeJS and Socket.IO are good choices, as they allow us to build fast and responsive web apps. The clients will be single-page web apps, so this is fine.

Here is how I plan to deal with the complicated stuff: each client, once connected to the server, would establish a socket connection with every server on the network (we would use some network discovery protocol to achieve this). It would promote one of these sockets to the state of "pub" socket. This socket would be used to publish (pub) messages (add, update, delete of victims, for instance). The other sockets would only be used to subscribe (sub) to other clients' modifications.

So what happens when a client publishes a message on its "pub" socket? The server broadcasts the message to every connected client, which means every client, as every client is connected to every server via its "sub" sockets. Nice!

What happens if the server is down or unreachable? The client has a list of every other server on the network (remember the "sub" sockets). So it is possible to promote a "sub" socket to the state of "pub" socket, and use this one instead. The client will then publish its messages to another server, chosen at random. That server will be able to broadcast the message to every other client, as every client is connected to every server.

In the end, after a few network connection issues, or servers unresponsive for whatever reason, we might end up with every client connected to a different server, but that's no big deal, as the servers are stateless and only used to broadcast client messages.
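
To make the idea more concrete, here is a rough client-side sketch using the Socket.IO client; the server list, the 'sync' event name and the applyLocally() helper are assumptions for illustration, not actual code from the project:

// Connect a "sub" socket to every known server; one of them is also our "pub" socket.
var servers = ['http://192.168.0.10:8080', 'http://192.168.0.11:8080']; // made-up addresses
var sockets = servers.map(function(url) { return io.connect(url); });
var pubIndex = 0; // index of the socket currently used to publish

sockets.forEach(function(socket, i) {
  // Every socket subscribes to synchronization messages broadcast by its server.
  socket.on('sync', function(message) { applyLocally(message); }); // applyLocally: whatever updates the local data
  // If our "pub" server becomes unreachable, promote another "sub" socket.
  socket.on('disconnect', function() {
    if (i === pubIndex) { pubIndex = (pubIndex + 1) % sockets.length; }
  });
});

function publish(message) {
  // Create / update / delete messages all go through the current "pub" socket.
  sockets[pubIndex].emit('sync', message);
}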

Moreover, we could imagine that any client that was not intended to be part of the system could come in and help. Just connect to one of the servers via your browser, and you're ready. It just won't be usable as a server, but that's not that important. If somebody is walking by, we could ask them for help, and in a few seconds, they would be able to welcome victims and type in their names, ages, etc.

A little simplified diagram to sum things up (only two clients' sets of socket connections are represented, for the sake of readability):

[Diagram: Distributed NodeJS architecture with automatic failover]

In fact, I dream about the client-to-client communication protocol the W3C is working on. But in the meantime, we need a workaround, and for the moment, this is the best one I've found. Every other solution I imagined would fail someday under certain circumstances (and remember, we're not talking about the traditional web here; we have to consider that everything out there is trying to kill your system, and if it succeeds, it stacks another disaster on top of the first one), and if so, would require some user action to keep working.

Technically, it might be pretty simple to implement. But I'm wondering:

  • Is there already some NodeJS framework or module that does the same kind of thing?
  • If not, would you be interested in one?

Thanks for reading this long post.

4 May 2011

jQuery 1.6 and backwards compatibility

jQuery 1.6 just came out, announcing big performance improvements, enhanced animations, and so on. At first, my brain cells were having a big wild party, thinking about all the good stuff.

And then comes the hangover. Not backwards compatible? Maybe just some little obsolete things won't work anymore; after all, that may not be so important.

But no, it is definitely important. And inconsistent, in some (weird) way.

OK, so .prop() is born, taking some duty off .attr(). .attr() is for attributes (they don't change over time), and .prop() is for properties (they evolve over time). For example, if you want to retrieve the actual, current value of an input, you will use .prop() (OK, .val() exists, but this is just for the example), while .attr() will return the original value from the markup. Ouch, here is the first problematic change. Think about all the code that will break now...

But it's not only about that. They say that if you want to check a checkbox, you now have to use .attr('checked', true) instead of .attr('checked', 'checked'). Nice to see this more convenient way to do it, but... shouldn't I use .prop('checked', true) instead? Attributes are not supposed to change, right?
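
For the record, here is what the split looks like in practice with jQuery 1.6 (the checkbox id is made up):

// Current state of the checkbox: a property, so use .prop()
$('#newsletter').prop('checked', true);

// What was written in the markup: an attribute, so .attr() returns the original value
$('#newsletter').attr('checked'); // "checked" or undefined, as it appears in the HTML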

So I'm very confused. As a plugin user AND author, what should I do? Should I upgrade my plugins to support jQuery 1.6? As I use my own plugins (no? really?), that would mean upgrading my applications to jQuery 1.6 (no, I won't use a deprecated version of my own plugins; I use their latest version). But if I do so, I'll probably break a good part of the other plugins I'm using. If I don't, people who switched to jQuery 1.6, and newcomers, probably won't be able to use my plugins. I don't like this.

Let's say I decide to upgrade my plugins. What should I do about the other plugins I use? Wait for their respective authors to upgrade too? What if they don't? I could patch them myself, but I don't want to spend hours understanding how several plugins work internally. And that's without even thinking about testing...

So, as an author or as a user, what do you plan to do regarding this good but non-backwards-compatible version?

EDIT: Well, it seems that they changed their minds. jQuery 1.6.1 is out, with a fixed .attr() method which should be backwards compatible. Follow the story.

15 Feb 2011

Form submit confirmation, fast and easy

Following some requests, I recently released a new version of Fast Confirm, to make it easier to use when dealing, among other things, with form submission.

But in the hurry, I forgot to include some features that I had developed earlier, thinking "I'll put that online soon". And I didn't. So here they are, the missing features that some of you wanted. With version 2.1.0, you can now tell your confirm boxes that you want them to be unique, just by setting a boolean on invocation.

There is another little thing that I wanted to introduce alongside the better event handling. Since version 2.0.0, you can simply bind FC to a form's submit event, with the "eventToBind" parameter. There is a little problem with that.

Typically, a form will look like:

<form action="action.php" method="post">
   Username: <input type="text" name="username" />
   Email: <input type="text" name="email" />
   <input type="submit" value="Create user" />
</form>

So what happens when the confirm box opens? It opens pointing at the form. That makes no sense. A form is usually not a visual element. You will more likely want the confirm box to open on the submit button, or maybe on a fieldset, or even on an input field. But directly on the form tag? No way. Or at least, that would not be very common.

So I added the "targetElement" parameter. It is simply a selector that allows you to specify on which element, within the form, you want the confirm box to open. For example, a ":submit" selector will make the confirm box open on the submit button of the bound form.
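
Put together with eventToBind, a typical invocation could look like this (the selectors are just examples):

$('.confirmable_form').fastConfirm({
   eventToBind: 'submit',     // confirm before the form is actually submitted
   targetElement: ':submit'   // open the confirm box on the form's submit button
});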

Have a look at the demos!

23 Jan 2011

Fast Confirm, Universal Paginate, reloaded

Hi everyone.

I finally had some time to work on my web projects again. Since Fast Confirm has been quite appreciated, I decided to take a few minutes to rewrite it a bit. The main concerns were about event management and manipulating the plugin programmatically.

So I decided to take it directly to version 2.0, breaking some backward compatibility. Nothing bad, the usage remains almost the same, except for the "close" method.

So now, you can call:

$('.click_me').fastConfirm('close');

To close the previously opened confirm box.

OK, so this was for the method calling stuff.

You can also now completely delegate event handling to FastConfirm. This feature has been asked for several times, and I must admit I was not really satisfied with the previous plugin usage (well, in fact, you can still use the plugin like you did before). This is mainly useful for form submit confirmation. So now, you can simply do:

$('.confirmable_form').fastConfirm({
   eventToBind: 'submit'
});

And Fast Confirm will take care of binding the confirm box to the submit event, allowing it if the user says "yes", canceling it if the user says "no". The same works with any event. The new eventToBind parameter defaults to false, which results in the same behavior as before.

As a reminder, you previously had to write something like this to achieve the same behavior:

$('.button').click(function() {
   $(this).fastConfirm({
      onProceed: function($trigger) {
         $trigger.closest('form').submit();
      }
   });
   return false;
});

That's amazing. But there's one more thing ©. ;-)

The user is now able to close the confirm box by hitting the Escape key. This is the same as clicking "No" and provides a quicker way for the power user. A little detail can sometimes greatly improve the user experience.

I hope you will like this new, rewritten, cleaner and easier version of Fast Confirm. As usual, you can find it here.

Universal Paginate has also been rewritten in order to provide better control over the plugin's internals. Methods can now be called the standard way:

$(list).universalPaginate('changeItemTemplate', newItemTemplate); 

for example.

Also, Universal Paginate has a new parameter which overrides the Ajax request defaults used when querying the server for data on page changes.

As this follows the official jQuery plugin authoring recommendations, this rewrite will happen for the other plugins, too. It will provide better consistency and probably fewer bugs / less shitty code.

Oh, and, by the way, these two plugins now pass JSLint. :-)

Last thing: I am very thankful to all of those who contributed to making me want to improve my work, whether with bits of code or simply with comments and requests.

14 Dec 2010

JavaScript variable declaration and hoisting

Do you know what JS hoisting is? If you do, I guess you already declare your variables properly. But if you don't, I am pretty sure you can still improve your code a little bit, and make it much more reliable.

If you already experienced some strange variable behavior, like a variable's value being unexpectedly changed or not changed, this post is for you.

Let's see how JS handles variables through a few examples.

JavaScript variable scoping

First, let's have a look at how JS variables are scoped. One may think that JS is a C-like language. And yes, the syntax is quite C-ish. But the language works quite differently, and variable scoping in particular is really different from what you may see in C-like languages.

Most languages use block-level scoping. JS doesn't. It uses function-level scoping. You should keep this in mind at all times, as it is probably one of the greatest sources of confusion for newcomers.

So what happens if we execute this piece of code?

(function() {
  var a = 10;

  if (true) {
    var a = 35;
  }

  alert(a);
})();

Yes, it alerts 35. The same kind of program written in C would output 10 (provided you really re-declare your variable inside your block).

Well that's the first surprise. And not the last, nor the least.

JavaScript variable declaration

Well, let's see what happens when we declare a variable.

Let's take the following code:

var a = 10;
alert(a);

(function() {
  alert(a);
  var a = 200;
  alert('a now contains: ' + a);
})();

What do we have here? The first alert says "10", as we could expect. But the second one alerts "undefined". Why on earth would a be undefined? Try removing the "var a = 200;" line. The second alert says "10" again? Right. So this declaration / initialization line has something to do with this strange behavior. No matter where you declare a variable inside a function, the declaration applies to the whole function. What about the initialization? It stays right where you wrote it.

Finally, JavaScript hoisting

Wow wow wow... What's that? Do you mean JS doesn't care about how I write my code? Not really. In fact, the previous piece of code will be interpreted like so:

var a = 10;
alert(a);

(function() {
  var a;
  alert(a);
  a = 200;
  alert('a now contains: ' + a);
})();

In fact, all the variable declarations, but not the initializations, are put at the top of your function. This is commonly called hoisting.

Note that this works with all declarations, even function declarations. Be careful though: functions defined as variables (function expressions) will only have their name hoisted (as we saw in the previous example with variable a), but not their body. "Traditionally" declared functions will be entirely hoisted, name and body. Here is a little example you can run to understand this phenomenon:

(function() {
  f1(); // Will run OK: function declarations are hoisted with their body
  f2(); // Will throw a TypeError: only the name f2 is hoisted, not its value

  function f1() {
    alert("I'm in function f1");
  }

  var f2 = function() {
    alert("I'm in function f2, but I will never run before the f2 initialization...");
  };
})();

That may seem quite surprising, but that's how JS works. I can't say whether it's a good or a bad thing; honestly, I don't see any major advantage or drawback. It's just another way of working. As it pushes us to keep our code clean, with all vars declared at the top, it may be a good thing.

Speaking of code quality, your code won't pass JSLint (when you select "the good parts") if you don't declare all your variables at once in each function.

But what should I do then?

This is why you should always declare your variables at the top of your functions. You'll have no bad surprises as long as you respect this rule.

I hope this post was useful, and that it will help you achieve a better code quality.

4 Nov 2010

Website creation, the easy way

Ever wanted a website creation service? Already tried some? And got tired of all the customization work you've got to do? Well, my mates at 4wonders and I just released a new service: unoome. Basically, it takes care of all the technical stuff. You focus on your content and nothing else.

We thought about 90% of the use cases, so that you don't have to worry about page layout and design. The web is not your domain? Well, it is ours, so let's work together. You want to show your products on a website? We let you create "products" pages in the simplest way you could imagine. Maybe you are selling services: you can also create price list pages, again the easy way. You own a restaurant? There are specific page types for you. All of this is already designed; just put your content in, and you're ready to go. And yes, your website is available online as soon as you decide to make it public. Before making it public, maybe you will want to check out our other predefined templates? One click, and your website looks totally different.

And guess what? All of this still looks good. Notice how that was fast and easy. Some themes are customizable, but you won't have to choose how every single detail will look. You choose some general aspects, unoome takes care of the details and tries to make your site look consistent and professional.

The service is available right now, and you can try it for free with no time limit. You can even publish your website for free, but it will contain some ads and you won't be able to choose one of our "premium" themes. The interface is in French only at the moment, but the service supports any language: your content can be in English, French, Spanish, Chinese, whatever.

You can take a tour on www.unoome.com