
Deeper In the Brackets Editor


Brackets Turns 30 (Ditches the Minivan and Goes for the Jet Pack!)

Nearly one year ago, Jeffrey Way reviewed the open source Brackets project. In the time since that review, Brackets has come quite far, recently celebrating its 33rd Sprint release. In this article I'll talk about many of the updates as well as demonstrate why Brackets is my favorite editor.


Ok, What Is Brackets Again?

Brackets' primary focus is on web development.

Just in case you aren't aware, Brackets is an open-source code editor focused on web development and built with web standards. Yes – an editor built with HTML, JavaScript, and CSS. It was originally released in July 2012 on GitHub (http://github.com/adobe/brackets). While launched by Adobe, the committers behind Brackets include folks from numerous sources. (As an aside, the Brackets team makes it a priority to focus on non-Adobe pull requests.)

Brackets' primary focus is on web development. You get the expected editing and code hinting for HTML, CSS, and JavaScript of course, but you also get some powerful features on top of this. The "Live Preview" feature connects your Brackets editor to your browser. As you edit CSS, updates happen in real time, providing instant feedback. Just selecting CSS items will provide highlights within the browser so you know exactly what you are working with. Another feature, quick editing, lets you select an HTML tag and instantly get to the CSS code that applies to that part of the DOM. What isn't directly supported in Brackets can be achieved via a rich extension API (again using web standards) to let developers add whatever feature they want. Extensions have been created for CSS linting, HTML validation, GitHub integration, and more. (I'm writing this article in Markdown within my Brackets editor using a Markdown extension that gives me a live update of the display.)
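
To give a feel for how approachable that extension API is, here is a minimal sketch of a Brackets extension, following the common register-a-command-and-add-it-to-a-menu pattern. The command name and ID here are made up for illustration; this is not a complete, production extension.

// main.js – a tiny, illustrative Brackets extension sketch
define(function (require, exports, module) {
    'use strict';

    // Pull in core Brackets modules
    var CommandManager = brackets.getModule("command/CommandManager"),
        Menus          = brackets.getModule("command/Menus");

    // Handler that runs when our command is invoked
    function handleSayHello() {
        window.alert("Hello from a Brackets extension!");
    }

    // Register the command (hypothetical ID) and expose it in the File menu
    var MY_COMMAND_ID = "example.sayHello";
    CommandManager.register("Say Hello", MY_COMMAND_ID, handleSayHello);
    Menus.getMenu(Menus.AppMenuBar.FILE_MENU).addMenuItem(MY_COMMAND_ID);
});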

That’s where Brackets began. Now let’s talk about where it has come and what we can expect in the future.


The Basics – Covered

Improvements have been made in all aspects (HTML, CSS, and JavaScript).

When Brackets first launched, it was something of an experiment. Could you use web standards to build an editor for web developers? Even more importantly, could you build something that would perform? Because this was something of an experiment and there were many low level architectural concerns, some things that you would expect in any decent editor, like renaming files for example, did not ship for a long time. Brackets was not marketed as being ready for prime time. Instead, the idea was to try something new and see what worked.

It is now fair to say that Brackets has all of the basics covered. Things like creating new files, deleting, opening from the file system, etc. are now baked in. While not necessarily something to crow about, if the lack of these basic features were the only thing keeping you from using Brackets, now is definitely the time to check it out. (And for those of you waiting for a Linux version – one is ready for you!)

Along with basic file operations, code hinting has been dramatically improved over time. Improvements have been made in all aspects (HTML, CSS, and JavaScript). Recently, Brackets added support for parsing and hinting of your own functions. Imagine that you’ve written two JavaScript functions. As you type your calls to these functions, Brackets tries to understand both the arguments and the types of arguments required and provide code support as you type. Here is a simple example:

/**
* @param {number} x First number
* @param {number} y Second number
*/
function ringTheBell(x, y) {
    'use strict';
    var total = x + y;
    return total;
}

function sayHello(name) {
    'use strict';
    return "Hello, " + name;
}

My code has two functions, one called ringTheBell and one called sayHello. I provided some additional metadata for ringTheBell, but that isn’t required. Providing it though will make code hinting a bit nicer. Now I’m going to type a call to ringTheBell:

[Screenshot: code hinting showing ringTheBell's arguments and their types]

Notice how it detected the argument names and types. If I enter a value for the first argument, notice how the code hinting bolds the second argument:

[Screenshot: code hinting with the second argument bolded]

Even in cases where Brackets can't determine the type of an argument used in a function, it will still provide you with the argument's name, which can be useful:

[Screenshot: code hinting showing just the argument name]

Live Connect for HTML

Recently Brackets added real support for HTML live connect.

Live Connect is probably one of the cooler aspects of Brackets. As I mentioned above, it lets you edit CSS and see updates in real time. Need to tweak padding or margins? You can use your editor and see the impact immediately. Browsers typically allow for this (Chrome Dev Tools), but don’t normally provide an easy way to get those changes back out into source. Chrome has made strides in this area recently, but as much as I love Chrome, I’d rather write my code in an editor.

While that worked great for CSS, it did not support HTML. Brackets would automatically reload your browser on saving an HTML file, but if you wanted to preview your changes without a save, you were out of luck. Recently Brackets added real support for HTML live connect. As you modify your HTML code the browser will update in real time. You will also see highlights in the DOM for the area you’re modifying. This doesn’t really translate well to screenshots, but imagine the following HTML.

<!doctype html>
<html>
<head>
	<title>Test</title>
</head>
<body>
	<h2>This is a Test</h2>
	<p>fooioikkkllklkkopkk</p>
</body>
</html>

If I click in the h2 above, Chrome will render a highlight of that item:

[Screenshot: Chrome highlighting the h2 element]

If I modify text inside the h2, Chrome will reflect those changes immediately.


Working With Extensions

Another important update to Brackets involves extension support. Behind the scenes, what extensions can do and how they can do it have been progressively improving with each sprint. While not necessarily that important to an end user, for people writing extensions these improvements have made it much easier to add new features to Brackets. If you can spend less time on boilerplate code and more time on features, that's an all-around win for extending Brackets. Brackets also exposes the ability to use Node.js itself for extensions. This feature gives your extensions the ability to make use of anything Node can – which by itself pretty much opens the entire world to you. This is a rather complex topic but if you want to learn more, read this guide: Brackets Node Process.

That's behind the scenes, but for the end user, Brackets has come a long way in making it easier to actually use extensions. Brackets now ships with a full-fledged Extension Manager, available via the File menu or an icon in the right gutter; clicking either will launch the manager:

[Screenshot: the Extension Manager dialog]

Notice that for each extension you have installed, you can see details about the version, links for additional information, and even better, a quick way to remove the extension if it is causing problems. At the bottom of this manager is a button that lets you install extensions from a URL. That's handy if you know what extension you want (as well as the GitHub URL), but what if you don't? Simply click on the Available tab:

[Screenshot: the Available tab of the Extension Manager]

You can now browse (and even filter) through a long list of available extensions. Even better, installation is as simple as clicking a button. Note that the Brackets Extension Manager is even smart enough to recognize when an extension may not be compatible with your version of Brackets:

[Screenshot: an extension flagged as incompatible with the current Brackets version]

Theseus Integration

Probably the most exciting update to Brackets (at least for me) is the integration of Theseus. Theseus is an open source project created by folks from both Adobe and MIT. It is focused on providing debugging support for both Chrome and Node.js applications. Imagine being able to debug a Node.js application made up of server-side JavaScript as well as client-side code. Theseus provides just that. While still early in development, Theseus is now integrated into Brackets and can be used within the editor itself.

Theseus currently provides three main features:

  • Code coverage in real-time
  • Retroactive inspection
  • Asynchronous call tree

Let's look at a few examples of these. Theseus's code coverage support will show how often a function is called. It sounds simple, but can be powerful. I recently tried Theseus on a simple demo that made use of AJAX to call a server-side program. I noticed that my demo wasn't working, and the Theseus integration in Brackets confirmed this. Notice the "0 calls" reported next to my callback:

[Screenshot: Theseus showing "0 calls" next to the callback]

Turns out my server-side code wasn’t set up right and I didn’t write my JavaScript code to support an error callback for the AJAX call. This was literally the first time I played with Theseus and it immediately helped point out a problem in my code. After modifying my front-end code, I could see the difference right away:

[Screenshot: Theseus showing the callback now being called]
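
For context, the fix the author describes amounts to adding an error handler to the AJAX call. Here is a hypothetical reconstruction of that kind of front-end change, assuming jQuery; the endpoint and handler names are made up for illustration:

// Handle both success and failure for an AJAX request
$.ajax({
    url: "/api/demo",   // assumed endpoint, for illustration only
    dataType: "json"
}).done(function (result) {
    // success callback – Theseus would now report calls here
    console.log("Got result:", result);
}).fail(function (xhr, status, err) {
    // error callback – without this, a failing request dies silently
    console.error("Request failed:", status, err);
});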

To be clear, this is all done in real-time. With Brackets open and Chrome open, I can click around in my application and see the updates in Brackets in sync with my actions in the browser.

On top of just seeing the call count, I can also click on an item and see what was passed to it. This is the retroactive inspection feature I mentioned above. Note that you can click into complex properties and really dig into the data.

[Screenshot: inspecting the values passed to a call in Theseus]

Finally, for asynchronous calls that may occur at an undefined time after their initial call, Theseus has no problem handling and correctly organizing these calls under their initiator.

[Screenshot: asynchronous calls organized under their initiator]
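
As a simple illustration of the kind of code this applies to, consider a hypothetical callback that fires long after its initiating function has returned:

// The callback runs asynchronously, one second after loadData() returns,
// yet it is still grouped under the loadData call that initiated it.
function loadData() {
    setTimeout(function onTimeout() {
        console.log("This runs asynchronously, one second later");
    }, 1000);
}

loadData();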

Adding New CSS Rules

One of the earliest features in Brackets was inline editing for CSS items. You could put your cursor in any HTML element, hit CMD/CTRL+E, and Brackets would scan your project to find relevant CSS files as well as the appropriate matching rule. This made it incredibly easy to quickly update the style sheets applicable for your content.

This worked well – as long as your content actually had a matching CSS rule. In the latest update to Brackets, the editor will now recognize when an element doesn’t have a matching CSS rule.

[Screenshot: the inline editor reporting no existing CSS rules]

You can now directly add a new CSS rule right from the inline editor.

[Screenshot: adding a new CSS rule from the inline editor]

New Theme

Finally, a new "shell" look is being added to Brackets. Currently available on Windows only (it will be in the OS X build soon), the "Dark" look is the future of the Brackets look and feel.

[Screenshot: the new "Dark" shell theme]

What Next?

Your primary editor is a very personal decision for a developer.

Your primary editor is a very personal decision for a developer. I found myself using Sublime Text a few months ago and noticed that something wasn’t working right. Turns out, I was trying to use a Brackets feature. That day I switched from Sublime as my primary editor to Brackets. I still use Sublime (and to be clear, it is a pretty darn awesome editor!) but now my day to day work is done almost entirely in Brackets.

Obviously I’d love for you to go – right now – and download Brackets. But if you want to dig a bit more before you commit (hey, I understand, building a relationship with an editor is a serious undertaking), check out these resources:

  • First and foremost, the Brackets home page is your core location for everything about Brackets.
  • Even if you have no plans to contribute to Brackets, browsing the source code on GitHub is a great way to study a seriously cool application built with web standards.
  • Have a question or a problem with Brackets? Head over to the Google Group to post your question. That’s where I go when I have problems and I typically get help pretty quickly.
  • Finally, if you want to know what’s coming next with Brackets, you can find everything at the Trello board.

Authentication With Laravel 4


Authentication is required for virtually any type of web application. In this tutorial, I’d like to show you how you can go about creating a small authentication application using Laravel 4. We’ll start from the very beginning by creating our Laravel app using composer, creating the database, loading in the Twitter Bootstrap, creating a main layout, registering users, logging in and out, and protecting routes using filters. We’ve got a lot of code to cover, so let’s get started!


Installation

Let’s start off this tutorial by setting up everything that we’ll need in order to build our authentication application. We’ll first need to download and install Laravel plus all of its dependencies. We’ll also utilize the popular Twitter Bootstrap to make our app look pretty. Then we’ll do a tad bit of configuration, connect to our database and create the required table and finally, start up our server to make sure everything is working as expected.

Download

Let’s use composer to create a new Laravel application. I’ll first change directories into my Sites folder as that’s where I prefer to store all of my apps:

cd Sites

Then run the following command to download and install Laravel (I named my app laravel-auth) and all of its dependencies:

composer create-project laravel/laravel laravel-auth

Add In Twitter Bootstrap

Now to keep our app from suffering a horrible and ugly fate of being styled by yours truly, we’ll include the Twitter bootstrap within our composer.json file:

{"name": "laravel/laravel","description": "The Laravel Framework.","keywords": ["framework", "laravel"],"require": {"laravel/framework": "4.0.*","twitter/bootstrap": "*"
	},

	// The rest of your composer.json file below ....

… and then we can install it:

composer update

Now if you open up your app in your text editor (I'm using Sublime) and look in the vendor folder, you'll see that we have the Twitter Bootstrap there.

[Screenshot: the Twitter Bootstrap package inside the vendor folder]

Now, by default our Twitter Bootstrap is composed of .less files, and before we can compile them into .css files, we need to install all of the bootstrap dependencies. This will also allow us to use the Makefile that is included with the Twitter Bootstrap for working with the framework (such as compiling files and running tests).

Note: You will need npm in order to install these dependencies.

In your terminal, let’s change directories into vendor/twitter/bootstrap and run npm install:

cd ~/Sites/laravel-auth/vendor/twitter/bootstrap
npm install

With everything ready to go, we can now use the Makefile to compile the .less files into CSS. Let’s run the following command:

make bootstrap-css

You should now notice two new nested folders, bootstrap/css, inside our vendor/twitter/bootstrap directory, which contain our compiled bootstrap CSS files.

[Screenshot: the compiled bootstrap CSS files in vendor/twitter/bootstrap]

Now we can use the bootstrap CSS files later on, in our layout, to style our app.

But we have a problem! We need these CSS files to be publicly accessible, yet currently they are located in our vendor folder. This is an easy fix, though: we can use artisan to publish (move) them to our public/packages folder. That way we can link the required CSS files into our main layout template, which we'll create later on.

First, we’ll change back into the root of our Laravel application and then run artisan to move the files:

cd ~/Sites/laravel-auth
php artisan asset:publish --path="vendor/twitter/bootstrap/bootstrap/css" bootstrap/css

The artisan command asset:publish allows us to provide a --path option for which files we want to move into our public/packages directory. In this case, we tell it to publish all of the CSS files that we compiled earlier and place them inside of two new folders named bootstrap/css. Your public directory should now look like the screenshot below, with our Twitter Bootstrap CSS files now publicly accessible:

[Screenshot: the bootstrap CSS files published to public/packages]

Set Permissions

Next we need to ensure our web server has the appropriate permissions to write to our application's app/storage directory. From within your app, run the following command:

chmod -R 755 app/storage

Connect To Our Database

Next, we need a database that our authentication app can use to store our users in. So fire up whichever database you are most comfortable using; personally, I prefer MySQL along with phpMyAdmin. I've created a new, empty database named laravel-auth.

[Screenshot: the empty laravel-auth database]

Now let’s connect this database to our application. Under app/config open up database.php. Enter in your appropriate database credentials, mine are as follows:

// Default Database Connection Name

'default' => 'mysql',

// Database Connections

	'connections' => array(

		'mysql' => array(
			'driver'    => 'mysql',
			'host'      => '127.0.0.1',
			'database'  => 'laravel-auth',
			'username'  => 'root',
			'password'  => '',
			'charset'   => 'utf8',
			'collation' => 'utf8_unicode_ci',
			'prefix'    => '',
		),

		// the rest of your database.php file's code ...

Create the Users Table

With our database created, it won’t be very useful unless we have a table to store our users in. Let’s use artisan to create a new migration file named: create-users-table:

php artisan migrate:make create-users-table

Let's now edit our newly created migration file to create our users table using the Schema Builder. We'll start with the up() method:

public function up()
{
	Schema::create('users', function($table)
	{
		$table->increments('id');
		$table->string('firstname', 20);
		$table->string('lastname', 20);
		$table->string('email', 100)->unique();
		$table->string('password', 64);
		$table->timestamps();
	});
}

This will create a table named users with an id field as the primary key, firstname and lastname fields, an email field which requires the email to be unique, and finally a 64-character password field (plenty of room for a hashed password), as well as a few timestamps.

Now we need to fill in the down() method in case we need to revert our migration, to drop the users table:

public function down()
{
	Schema::drop('users');
}

And now we can run the migration to create our users table:

php artisan migrate

Start Server & Test It Out

Alright, our authentication application is coming along nicely. We’ve done quite a bit of preparation, let’s start up our server and preview our app in the browser:

php artisan serve

Great, the server starts up and we can see our home page:

[Screenshot: the default Laravel home page]

Making the App Look Pretty

Before we go any further, it’s time to create a main layout file, which will use the Twitter Bootstrap to give our authentication application a little style!

Creating a Main Layout File

Under app/views/ create a new folder named layouts and inside it, create a new file named main.blade.php and let’s place in the following basic HTML structure:

<!DOCTYPE html>
<html lang="en">
<head>
	<meta charset="utf-8">
	<meta name="viewport" content="width=device-width, initial-scale=1.0">
	<title>Authentication App With Laravel 4</title>
</head>
<body>
</body>
</html>

Linking In the CSS Files

Next, we need to link in our bootstrap CSS file as well as our own main CSS file, in our head tag, right below our title:

<head>
	<meta charset="utf-8">
	<meta name="viewport" content="width=device-width, initial-scale=1.0">
	<title>Authentication App With Laravel 4</title>
	{{ HTML::style('packages/bootstrap/css/bootstrap.min.css') }}
	{{ HTML::style('css/main.css') }}
</head>

Now we just need to create this main.css file where we can add our own customized styling for our app. Under the public directory create a new folder named css and within it create a new file named main.css.

[Screenshot: the new css/main.css file in the public directory]

Finishing the Main Layout

Inside of our body tag, let’s create a small navigation menu with a few links for registering and logging in to our application:

<body>
	<div class="navbar navbar-fixed-top">
		<div class="navbar-inner">
			<div class="container">
				<ul class="nav">
					<li>{{ HTML::link('users/register', 'Register') }}</li>
					<li>{{ HTML::link('users/login', 'Login') }}</li>
				</ul>
			</div>
		</div>
	</div>
</body>

Notice the use of several Bootstrap classes in order to style the navbar appropriately. Here we’re just using a couple of DIVs to wrap an unordered list of navigation links, pretty simple.

For our application, we’re going to want to give our users simple flash messages, like a success message when the user registers. We’ll set this flash message from within our controller, but we’ll echo out the message’s value here in our layout. So let’s create another div with a class of .container and display any available flash messages right after our navbar:

<body>
	<div class="navbar navbar-fixed-top">
		<div class="navbar-inner">
			<div class="container">
				<ul class="nav">
					<li>{{ HTML::link('users/register', 'Register') }}</li>
					<li>{{ HTML::link('users/login', 'Login') }}</li>
				</ul>
			</div>
		</div>
	</div>
	<div class="container">
		@if(Session::has('message'))
			<p class="alert">{{ Session::get('message') }}</p>
		@endif
	</div>
</body>

To display the flash message, I’ve first used a Blade if statement to check if we have a flash message to display. Our flash message will be available in the Session under message. So we can use the Session::has() method to check for that message. If that evaluates to true, we create a paragraph with the Twitter bootstrap class of alert and we call the Session::get() method to display the message’s value.

Now lastly, at least for our layout file, let’s echo out a $content variable, right after our flash message. This will allow us to tell our controller to use this layout file, and our views will be displayed in place of this $content variable, right here in the layout:

<body>
	<div class="navbar navbar-fixed-top">
		<div class="navbar-inner">
			<div class="container">
				<ul class="nav">
					<li>{{ HTML::link('users/register', 'Register') }}</li>
					<li>{{ HTML::link('users/login', 'Login') }}</li>
				</ul>
			</div>
		</div>
	</div>
	<div class="container">
		@if(Session::has('message'))
			<p class="alert">{{ Session::get('message') }}</p>
		@endif

		{{ $content }}
	</div>
</body>

Custom Styling

Now that we have our layout complete, we just need to add a few small custom CSS rules to our main.css file to customize our layout a little bit more. Go ahead and add in the following bit of CSS; it's pretty self-explanatory:

body {
	padding-top: 40px;
}

.form-signup, .form-signin {
	width: 400px;
	margin: 0 auto;
}

I added just a small amount of padding to the top of the body tag in order to prevent our navbar from overlapping our main content. Then I target Bootstrap's .form-signup and .form-signin classes, which we'll apply to our register and login forms in order to set their width and center them on the page.


Creating the Register Page

It’s now time to start building the first part of our authentication application and that is our Register page.

The Users Controller

We'll start by creating a new UsersController.php file within our app/controllers folder, and in it we define our UsersController class:

<?php

class UsersController extends BaseController {

}
?>

Next, let’s tell this controller to use our main.blade.php layout. At the top of our controller set the $layout property:

<?php

class UsersController extends BaseController {
	protected $layout = "layouts.main";
}
?>

Now within our UsersController, we need an action for our register page. I named my action getRegister:

public function getRegister() {
	$this->layout->content = View::make('users.register');
}

Here we just set the content layout property (this is the $content variable we echoed out in our layout file) to display a users.register view file.

The Users Controller Routes

With our controller created, we next need to set up the routes for all of the actions we might create within it. Inside of our app/routes.php file, let's first remove the default / route and then add in the following code to create our UsersController routes:

Route::controller('users', 'UsersController');

Now anytime that we create a new action, it will be available via a URI in the following format: /users/actionName. For example, since we have a getRegister action, we can access it using the following URI: /users/register.

Note that we don't include the "get" part of the action name in the URI; "get" is just the HTTP verb that the action responds to.

Creating the Register View

Inside of app/views create a new folder named users. This will hold all of our UsersController‘s view files. Inside the users folder create a new file named register.blade.php and place the following code inside of it:

{{ Form::open(array('url'=>'users/create', 'class'=>'form-signup')) }}
	<h2 class="form-signup-heading">Please Register</h2>

	<ul>
		@foreach($errors->all() as $error)
			<li>{{ $error }}</li>
		@endforeach
	</ul>

	{{ Form::text('firstname', null, array('class'=>'input-block-level', 'placeholder'=>'First Name')) }}
	{{ Form::text('lastname', null, array('class'=>'input-block-level', 'placeholder'=>'Last Name')) }}
	{{ Form::text('email', null, array('class'=>'input-block-level', 'placeholder'=>'Email Address')) }}
	{{ Form::password('password', array('class'=>'input-block-level', 'placeholder'=>'Password')) }}
	{{ Form::password('password_confirmation', array('class'=>'input-block-level', 'placeholder'=>'Confirm Password')) }}

	{{ Form::submit('Register', array('class'=>'btn btn-large btn-primary btn-block'))}}
{{ Form::close() }}

Here we use the Form class to create our register form. First we call the open() method, passing in an array of options. We tell the form to submit to a URI of users/create by setting the url key. This URI will be used to process the registration of the user. We’ll handle this next. After setting the url we then give the form a class of form-signup.

After opening the form, we just have an h2 heading with the .form-signup-heading class.

Next, we use a @foreach loop, looping over all of the form validation error messages and displaying each $error in the unordered list.

After the form validation error messages, we then create several form input fields, each with a class of input-block-level and a placeholder value. We have inputs for the firstname, lastname, email, password, and password confirmation fields. The second argument to the text() method is set to null: since we're using a placeholder, we don't need to set the input field's value attribute, so I just set it to null in this case.

After the input fields, we then create our submit button and apply several different classes to it so the Twitter bootstrap handles the styling for us.

Lastly, we just close the form using the close() method.

Make sure to start up your server and switch to your favorite browser; if you browse to http://localhost:8000/users/register you should see your register page:

[Screenshot: the register page]

Processing the Register Form Submission

Now if you tried filling out the register form's fields and hitting the Register button, you would have been greeted with a NotFoundHttpException. This is because no route matches the users/create URI yet: we don't have an action to process the form submission. So that's our next step!

Creating a postCreate Action

Inside of your UsersController let’s create another action named postCreate:

public function postCreate() {
}

This action needs to process the form submission by validating the data and either displaying validation error messages or creating the new user, hashing the user's password, and saving the user into the database.

Form Validation

Let’s start with validating the form submission’s data. We first need to create our validation rules that we’ll validate the form data against. I prefer storing my validation rules in my model as that’s the convention I’m used to, from working with other frameworks. By default, Laravel ships with a User.php model already created for you.

Make sure you don't delete this User model or remove any of the preexisting code, as it contains new code that is required for Laravel 4's authentication to work correctly. Your User model must implement UserInterface and RemindableInterface as well as implement the getAuthIdentifier() and getAuthPassword() methods.

Under app/models open up that User.php file and at the top, add in the following code:

public static $rules = array(
	'firstname'=>'required|alpha|min:2',
	'lastname'=>'required|alpha|min:2',
	'email'=>'required|email|unique:users',
	'password'=>'required|alpha_num|between:6,12|confirmed',
	'password_confirmation'=>'required|alpha_num|between:6,12'
	);

Here I’m validating the firstname and lastname fields to ensure they are present, only contain alpha characters, and that they are at least two characters in length. Next, I validate the email field to ensure that it’s present, that it is a valid email address, and that it is unique to the users table, as we don’t want to have duplicate email addresses for our users. Lastly, I validate the password and password_confirmation fields. I ensure they are both present, contain only alpha-numeric characters and that they are between six and twelve characters in length. Additionally, notice the confirmed validation rule, this makes sure that the password field is exactly the same as the matching password_confirmation field, to ensure users have entered in the correct password.

Now that we have our validation rules, we can use these in our UsersController to validate the form submission. In your UsersController‘s postCreate action, let’s start by checking if the data passes validation, add in the following code:

public function postCreate() {
	$validator = Validator::make(Input::all(), User::$rules);

	if ($validator->passes()) {
		// validation has passed, save user in DB
	} else {
		// validation has failed, display error messages
	}
}

We start by creating a validator object named $validator by calling the Validator::make() method. This accepts two arguments: the submitted form input that should be validated and the validation rules that the data should be validated against. We can grab the submitted form data by calling the Input::all() method and we pass that in as the first argument. We can get our validation rules that we created in our User model by accessing the static User::$rules property and passing that in as the second argument.

Once we’ve created our validator object, we call its passes() method. This will return either true or false and we use this within an if statement to check whether our data has passed validation.

Within our if statement, if the validation has passed, add in the following code:

if ($validator->passes()) {
	$user = new User;
	$user->firstname = Input::get('firstname');
	$user->lastname = Input::get('lastname');
	$user->email = Input::get('email');
	$user->password = Hash::make(Input::get('password'));
	$user->save();

	return Redirect::to('users/login')->with('message', 'Thanks for registering!');
} else {
	// validation has failed, display error messages	
}

As long as the data that the user submits passes validation, we create a new instance of our User model (new User) and store it in a $user variable. We can then use the $user object to set each of the user's properties using the submitted form data. We can grab the submitted data individually using the Input::get('fieldName') method, where fieldName is the name of the field whose value you want to retrieve. Here we've grabbed the firstname, lastname, and email fields to use for our new user. We also grabbed the password field's value, but we don't want to store the password in the database as plain text, so we use the Hash::make() method to hash the submitted password before saving it. Lastly, we save the user into the database by calling the $user object's save() method.

After creating the new user, we then redirect the user to the login page (we’ll create the login page in a few moments) using the Redirect::to() method. This just takes in the URI of where you’d like to redirect to. We also chain on the with() method call in order to give the user a flash message letting them know that their registration was successful.

Now if the validation does not pass, we need to redisplay the register page, along with some validation error messages, with the old input, so the user can correct their mistakes. Within your else statement, add in the following code:

if ($validator->passes()) {
	$user = new User;
	$user->firstname = Input::get('firstname');
	$user->lastname = Input::get('lastname');
	$user->email = Input::get('email');
	$user->password = Hash::make(Input::get('password'));
	$user->save();

	return Redirect::to('users/login')->with('message', 'Thanks for registering!');
} else {
	return Redirect::to('users/register')->with('message', 'The following errors occurred')->withErrors($validator)->withInput();
}

Here we just redirect the user back to the register page with a flash message letting them know some errors have occurred. We make sure to display the validation error messages by calling the withErrors($validator) method and passing in our $validator object to it. Finally, we call the withInput() method so the form remembers what the user originally typed in and that will make it nice and easy for the user to correct the errors.

Adding In the CSRF Before Filter

Now we need to make sure to protect our POST actions from CSRF attacks by setting the CSRF before filter within our UsersController‘s constructor method. At the top of your UsersController add in the following code:

public function __construct() {
	$this->beforeFilter('csrf', array('on'=>'post'));
}

Within our constructor, we call the beforeFilter() method and pass in the string csrf, as the first argument. csrf is the filter that we want to apply to our actions. Then we pass in an array as the second argument and tell it to only apply this filter on POST requests. By doing this, our forms will pass along a CSRF token whenever they are submitted. This CSRF before filter will ensure that all POST requests to our app contain this token, giving us confidence that POST requests are not being issued to our application from other external sources.


Creating the Login Page

Before you run off and try out your register page, we first need to create the Login page so that when our register form submission is successful, we don’t get an error. Remember, if the form validation passes, we save the user and redirect them to the login page. We currently don’t have this login page though, so let’s create it!

Still inside of your UsersController, create a new action named getLogin and place in the following code:

public function getLogin() {
	$this->layout->content = View::make('users.login');
}

This will display a users.login view file. We now need to create that view file. Under app/views/users create a new file named login.blade.php and add in the following code:

{{ Form::open(array('url'=>'users/signin', 'class'=>'form-signin')) }}
	<h2 class="form-signin-heading">Please Login</h2>

	{{ Form::text('email', null, array('class'=>'input-block-level', 'placeholder'=>'Email Address')) }}
	{{ Form::password('password', array('class'=>'input-block-level', 'placeholder'=>'Password')) }}

	{{ Form::submit('Login', array('class'=>'btn btn-large btn-primary btn-block'))}}
{{ Form::close() }}

This code is very similar to the code we used in our register view, so I’ll simplify the explanation this time to only what is different. For this form, we have it submit to a users/signin URI and we changed the form’s class to .form-signin. The h2 has been changed to say “Please Login” and its class was also changed to .form-signin-heading. Next, we have two form fields so the user can enter in their email and password, and then finally our submit button which just says “Login”.

Let’s Register a New User!

We’re finally at a point to where we can try out our registration form. Of course, the login functionality doesn’t work just yet, but we’ll get to that soon enough. We only needed the login page to exist so that our register page would work properly. Make sure your server is still running, switch into your browser, and visit http://localhost:8000/users/register. Try entering in some invalid user data to test out the form validation error messages. Here’s what my page looks like with an invalid user:

[Screenshot: the register page displaying validation error messages]

Now try registering with valid user data. This time we get redirected to our login page along with our success message, excellent!

[Screenshot: the login page with the registration success message]

Logging In

So we’ve successfully registered a new user and we have a login page, but we still can’t login. We now need to create the postSignin action for our users/signin URI, that our login form submits to. Let’s go back into our UsersController and create a new action named postSignin:

public function postSignin() {
}

Now let’s log the user in, using the submitted data from the login form. Add the following code into your postSignin() action:

if (Auth::attempt(array('email'=>Input::get('email'), 'password'=>Input::get('password')))) {
	return Redirect::to('users/dashboard')->with('message', 'You are now logged in!');
} else {
	return Redirect::to('users/login')
		->with('message', 'Your username/password combination was incorrect')
		->withInput();
}

Here we attempt to log the user in, using the Auth::attempt() method. We simply pass in an array containing the user's email and password that they submitted from the login form. This method returns true if the user's credentials are valid and false otherwise, so we can use it within an if statement. If the user was logged in, we just redirect them to a dashboard view page and give them a success message. Otherwise, the user's credentials did not validate, and in that case we redirect them back to the login page with an error message and display the old input so the user can try again.

Creating the Dashboard

Now before you attempt to log in with your newly registered user, we need to create that dashboard page and protect it from unauthorized, non-logged-in users. The dashboard page should only be accessible to those users who have registered and logged in to our application. If an unauthorized user attempts to visit the dashboard, we'll redirect them and ask them to log in first.

While still inside of your UsersController let’s create a new action named getDashboard:

public function getDashboard() {
}

And inside of this action we’ll just display a users.dashboard view file:

public function getDashboard() {
	$this->layout->content = View::make('users.dashboard');
}

Next, we need to protect it from unauthorized users by using the auth before filter. In our UsersController‘s constructor, add in the following code:

public function __construct() {
	$this->beforeFilter('csrf', array('on'=>'post'));
	$this->beforeFilter('auth', array('only'=>array('getDashboard')));
}

This will use the auth filter, which checks if the current user is logged in. If the user is not logged in, they get redirected to the login page, essentially denying the user access. Notice that I’m also passing in an array as a second argument, by setting the only key, I can tell this before filter to only apply it to the provided actions. In this case, I’m saying to protect only the getDashboard action.

Customizing Filters

By default, the auth filter redirects users to a /login URI, which doesn't work for our application. We need to modify this filter so that it redirects to the users/login URI instead; otherwise we'll get an error. Open up app/filters.php and in the Authentication Filters section, change the auth filter to redirect to users/login, like this:

/*
|--------------------------------------------------------------------------
| Authentication Filters
|--------------------------------------------------------------------------
|
| The following filters are used to verify that the user of the current
| session is logged into this application. The "basic" filter easily
| integrates HTTP Basic authentication for quick, simple checking.
|
*/

Route::filter('auth', function()
{
	if (Auth::guest()) return Redirect::guest('users/login');
});

Creating the Dashboard View

Before we can log users into our application we need to create that dashboard view file. Under app/views/users create a new file named dashboard.blade.php and insert the following snippet of code:

<h1>Dashboard</h1>

<p>Welcome to your Dashboard. You rock!</p>

Here I’m displaying a very simple paragraph to let the user know they are now in their Dashboard.

Let’s Login!

We should now be able to login. Browse to http://localhost:8000/users/login, enter in your user’s credentials, and give it a try.

[Screenshot: the dashboard page after logging in]

Success!


Displaying the Appropriate Navigation Links

OK, we can now register and log in to our application, very cool! But we have a little quirk: if you look at our navigation menu, even though we're logged in, the register and login links are still visible. Ideally, we want these to only display when the user is not logged in. Once the user does log in, though, we want to display a logout link. To make this change, let's open up our main.blade.php file again. Here's what our navbar code looks like at the moment:

<div class="navbar navbar-fixed-top">
	<div class="navbar-inner">
		<div class="container">
			<ul class="nav">
				<li>{{ HTML::link('users/register', 'Register') }}</li>
				<li>{{ HTML::link('users/login', 'Login') }}</li>
			</ul>
		</div>
	</div>
</div>

Let’s modify this slightly, replacing our original navbar code, with the following:

<div class="navbar navbar-fixed-top">
	<div class="navbar-inner">
		<div class="container">
			<ul class="nav">
				@if(!Auth::check())
					<li>{{ HTML::link('users/register', 'Register') }}</li>
					<li>{{ HTML::link('users/login', 'Login') }}</li>
				@else
					<li>{{ HTML::link('users/logout', 'logout') }}</li>
				@endif
			</ul>
		</div>
	</div>
</div>

All I've done is wrap the li tags in our navbar in an if statement that checks whether the user is not logged in, using a negated call to Auth::check(). Auth::check() returns true if the user is logged in and false otherwise. So if the user is not logged in, we display the register and login links; otherwise, the user is logged in and we display a logout link instead.

[Screenshot: the navbar showing the logout link]

Logging Out

Now that our navbar displays the appropriate links, based on the user’s logged in status, let’s wrap up this application by creating the getLogout action, to actually log the user out. Within your UsersController create a new action named getLogout:

public function getLogout() {
}

Now add in the following snippet of code to log the user out:

public function getLogout() {
	Auth::logout();
	return Redirect::to('users/login')->with('message', 'You are now logged out!');
}

Here we call the Auth::logout() method, which handles logging the user out for us. Afterwards, we redirect the user back to the login page and give them a flash message letting them know that they have been logged out.

[Screenshot: the login page with the logged-out message]

Conclusion

And that concludes this Laravel 4 Authentication tutorial. I hope you've found this helpful in setting up auth for your Laravel apps. If you have any problems or questions, feel free to ask in the comments and I'll try my best to help you out. You can check out the complete source code for the small demo app that we built throughout this tutorial on GitHub. Thanks for reading.

Interview With Eric Bowman of Gilt.com


While most of us have built really cool websites, realistically speaking, few developers have had to worry about the complexities of managing and scaling incredibly large websites. It's one thing to put up a site for a small company to ensure they have a great presence; it's another to figure out how to scale your site so it won't buckle under the load of thousands of users.

I was fortunate enough to chat with the folks at flash-sale site Gilt.com, which has received quite a bit of press over the years and seen tremendous growth. It's opportunities like these that allow us to probe the team that manages these sites and learn how they handle their day-to-day business and technology.

In this interview, Eric Bowman, VP Architecture at Gilt Groupe, takes us through some of the background behind the site and the technology decisions behind keeping the service running smoothly.


Q Could you give us a quick intro about yourself?

I’m incredibly proud of what the team has accomplished.

I’ve been with Gilt since August 2011, and became VP/head of architecture and platform engineering in December 2011. During my time here, we’ve transitioned from Java to Scala, adopted Gerrit for code review, implemented a continuous delivery system we call Ion Cannon, introduced a typesafe client and microservice architecture, rolled out Play 2 for our frontend, created a public API, and rolled out platform engineering as an organizational architecture. I’m incredibly proud of what the team has accomplished. Before Gilt I was an architect at TomTom in Amsterdam and was responsible for their online map, search and traffic APIs, and products. Prior to that I was an architect working on service delivery for 3′s global 3G launch offering, and a long time ago my first “real” job was building The Sims 1.0.


Q Could you set an expectation for our readers of the scale/size of Gilt.com so they get a better feel for the breadth of effort needed to build a large-scale site?

The flash sales model presents a unique technical challenge because so much of the traffic comes in these incredible pulses as new sales go live. Over the course of a few seconds, our traffic can increase by as much as 100x, which really puts stress on every part of the system, all at once. Essentially, we need to have the eCommerce infrastructure almost at Amazon scale for at least 15 minutes every day. Most days this happens exactly at noon EST, and until a couple of years ago, noon was a stressful time every day. Nowadays it's usually a non-event, in part because our software is great, and in part due to better visibility into system performance and behavior.

In order to accommodate the pulse, we tend to over-provision on the hardware side. Our customer-facing production environment at the moment consists of about 40 physical servers running a couple hundred micro-services and a few dozen user-facing applications. On top of that we have about another 100 servers for development, testing, data warehousing, analytics and development infrastructure.


Q When it comes to large web properties, most developers are curious about the technology that runs under the hood. Could you share what you’re using and what prompted some of the technology choices you’ve made?

On the database side, we depend heavily on PostgreSQL, MongoDB and Voldemort.

Gilt was originally a Ruby on Rails application with a PostgreSQL backend. We had serious problems scaling Rails to handle the noon pulse, and the core customer-facing systems were ported very quickly to Java and a coarse-grained, services-oriented architecture starting in 2009. We kept the Java extremely low-tech: JDBC, hashmaps and JSP.

The performance and scalability of low-tech tools on the JVM is astonishing. As we grew the tech organization in Gilt, though, it became increasingly hard for teams to contribute code. A downside of the low-tech approach was that the contracts between systems were not well defined, and the most critical code bases grew monolithic over time. We gradually transitioned away from a pure servlet-based service implementation towards JAX-RS, and in parallel increasingly toward Scala. Our service stack is similar to Dropwizard, which came a few years after we built our internal platform, built on Jersey and Jackson 2. We like Dropwizard, but found that the way apps are configured–at runtime, in particular–wasn’t very compatible with our infrastructure, which uses ZooKeeper for discovery and per-environment configuration.

We've also moved from Ant to sbt over the last year. At TomTom I grew fond of Maven, and spent some time trying to introduce it at Gilt. Just at the point when everything was falling into place, I had a change of heart and did some experiments with sbt. We found that sbt provides a fantastic developer experience and has an incredibly powerful extension model. Switching to sbt has enabled a degree of tooling customization that previously seemed impossible, and a lot of great features have fallen out of the sbt adoption, such as deep integration with our continuous delivery system and automatic dependency upgrading – things we couldn't even imagine with tools like Ant or Maven. It was an interesting case where the limitations of the tools limited our imagination, and an important lesson for me personally in how to recognize and avoid that antipattern.

On the database side, we depend heavily on PostgreSQL, MongoDB and Voldemort. PostgreSQL has been part of the Gilt stack from the beginning, and has been amazing. We were one of the sponsors supporting the development of hot standby, a key feature in PostgreSQL 9.0 that enables true replication. We run it on Fusion-io cards, and the performance has been phenomenal. Our CTO, Michael Bryzek (also a Gilt cofounder), recently released a really nice open source schema upgrade mechanism for PostgreSQL. In general we’ve been moving away from a monolithic database application towards individual private databases per service. PostgreSQL is really nice for this, and its stable, predictable performance makes it straightforward to predict system behavior and to provision smartly.

Both MongoDB and Voldemort have become increasingly important in the last year or so. Voldemort has been part of Gilt’s stack for some time, though usage of Voldemort didn’t grow at all until this year. Despite less momentum than some other NoSQL solutions, we find Voldemort to be incredibly reliable and straightforward to reason about, and gradually we’ve introduced it in a few more places. We’ve wrapped it and incorporated it into our core platform, which makes it straightforward to use in new services; it’s easily embeddable, leading to almost no additional infrastructure needed to run a reliable Dynamo-style key-value store. We’ve also looked at a number of other solutions in the space, including Riak, and we’re pretty excited by all the activity in the field–particularly around multi master databases with strong semantics on conflict resolution.

MongoDB has also become increasingly important at Gilt over the past couple years. Today we run our core user and authentication services on MongoDB–absolutely critical data with very high throughput and low latency requirements–and it has been running now for a long time and very smoothly. MongoDB gets a hard time in the community sometimes, but we’ve found it to be astonishingly fast when run on high-spec hardware, and we’ve been reluctant to consider other options because of its raw performance in our use case.


Q Focusing on scalability specifically, what were your expectations when Gilt was launched and how did you prepare for launch traffic and post-launch growth?

Gilt grew faster than anyone could have imagined, and everyone was caught off guard by how difficult it was to scale the Rails stack. Like any successful startup, Gilt was assembled just-in-time, and the bulk of the effort was spent trying to react to what the market fed back in the face of so much public interest in what Gilt was trying to do. Startups in this situation have to maneuver a knife-edge of “just enough” architecture. If you overthink or over-engineer too much or too soon, it’s impossible to move fast enough to capture the market you are going after. But if you don’t architect enough, you can’t actually adapt once you’ve got something running. Gilt’s core tech stack has always embraced simplicity as a key feature, and maintaining that over time has been a worthy challenge.


Q Post-launch and obviously using hindsight, which decisions do you feel were spot on and which do you wish you could have a do-over on?

The decision to use PostgreSQL was spot-on. Despite the scaling issues with Rails, it’s an amazing framework for moving fast–until you need to scale. So that wasn’t necessarily a wrong decision, and a lot of Gilt’s internal systems today are written in Rails. If we were starting over today, we’d probably build on top of Play 2.


Q Of the technologies you’ve leveraged, which specifically helped in terms of scalability?

In terms of technology, we’ve found the JVM to be very scalable. Over time a number of tools have come into play to make scaling easier. Scala, ZooKeeper, RabbitMQ, Apache Camel and Kafka come to mind as important for scalability.

However, scalability at Gilt has had less to do with specific technologies, and more to do with architecture and approach. We’ve never been afraid to rethink things almost from the ground up, and scaling is a multidimensional problem that covers technology, infrastructure, architecture, and organizational structure. We’ve iterated along all four of those axes.


Q Being a commerce-oriented company, safeguarding customer data I’m sure is priority #1. From a security perspective, how have you had to adapt your infrastructure to adjust to the constantly changing security landscape?

We take security and our customers' privacy very seriously, obviously. I don't want to go into too much detail here, but a few things stand out. We take PCI compliance extremely seriously, and everyone participates on some level in the PCI review process. We've architected our systems using a bulkhead approach to PCI compliance, which physically limits what needs to be PCI-compliant, and also reduces risk in the event of a number of possible breach scenarios we model. We've found that a micro-services architecture and continuous delivery make it relatively inexpensive for us to stay cutting-edge in terms of security-related best practices, and so we try hard to do so.


Q Along those lines, what has been the most challenging aspect of security to manage?

The biggest challenge by far is coming up with a realistic model of what the risks really are, and then making the right decisions to mitigate those risks. Despite lip service about how security is everyone’s problem, in practice it’s hard for developers to keep security in mind all the time. We’ve focused more on an architecture that is forgiving and partitioned so that we don’t compromise security, and we reduce the scope of any particular potential mistake.


Q How has open-source software played a role at Gilt, both from a technology and financial perspective?

From a financial perspective, open source has helped us keep our costs down, and also helped us move faster.

Gilt is built almost entirely using open-source software. We actively encourage our engineering teams to use and contribute back to open source, and we have really low-friction guidelines for how to contribute back to open source. We have a number of open source projects we host on our GitHub repo, and we constantly feed pull requests upstream to some of the most important open source projects we use. We also actively support open source efforts, from funding feature development in PostgreSQL, to sponsoring Scala conferences like Scala Days.

From a financial perspective, open source has helped us keep our costs down, and also helped us move faster. Besides the obvious benefit of not paying licensing costs, open source provides a more subtle advantage, in that when you run into an issue, whether a trivial one or a catastrophic one, you can both look at the source code and potentially fix the problem. I started developing in a closed-source environment, and I do not miss those days where everything was a black box, and licenses were enormously restrictive in terms of what you could do with–or, in some cases, even say about–commercial software.


Q When you looked at a specific OSS-based technology, what was your decision-making process for determining its viability, applicability to your needs and the longer-term management of the technology?

We try to actively follow the latest developments across a number of open source projects, and read blogs, and generally we all love technology and tend to get excited. So sometimes it’s hard to avoid the irrational exuberance that can follow if you read too many blog posts and believe all the hype. We have an internal peer-review system that encourages a lightweight planning and architecture discipline, which works pretty well. We also use things like the ThoughtWorks Tech Radar to help temper over-exuberance, and also to gain insight via another lens of what’s working well across the industry.

Our approach also depends on how critical the software is. At Gilt we talk a lot about “Voluntary Adoption,” which actively encourages our developers to adopt the best tools for the job. In practice this means that individual teams have a lot of leeway in terms of leveraging whatever open source libraries they want, and when this goes well, it helps to keep things simple–and also helps us move faster. Usually the benefits of these libraries are clear; we tend to leave it up to individual teams to do the right level of analysis around the tradeoffs and costs of a particular solution. It is a struggle to avoid too much fragmentation across the organization, and we actively work to understand when teams have needed to use fairly exotic libraries, and incorporate them into the core platform in a way that tries to minimize upgrade pain and “dependency hell.”

For more critical shared components and systems we tend to use a combination of consensus, peer review, and stress testing to make decisions. Sometimes we look at a system and it’s so obviously superior to the other options that consensus is easy and we move quickly to adopt. ZooKeeper is an example of this. In other cases when the choice is less clear, we tend to just spin up the various alternatives and run them until they break, and try to understand why they failed, if they failed. For example, when evaluating messaging systems, we found that pumping a billion messages as fast as possible through several contenders was a pretty good way to eliminate poor choices via a “last man standing” criterion.

For now our approach is fairly lightweight and agile, and we hope to keep it that way. Microservices and a unique approach to deployment make it straightforward for us to try things out in production to see how they work, without much risk. Ultimately how a system works in production is the most important criterion, and not one you can divine through documents and meetings. We try stuff and use what works.


Q OSS relies heavily on community contributions. How does Gilt give back to the OSS community?

On the JavaScript and Ruby side of things, we’ve open sourced a number of libraries.

On the Java and Scala side, we’ve not contributed as much or as quickly as we’d like, due to some specifics of how our build works that make it hard to build some of our most core software outside Gilt’s infrastructure. We are actively working on improving this, and we have a backlog of Java and Scala software we look forward to open sourcing in the next half year or so.

On the JavaScript and Ruby side of things, we’ve open sourced a number of libraries, most of which are visible on our GitHub page.

We’ve also funded specific features in PostgreSQL, for example, and we regularly sponsor conferences around open source topics–primarily Scala in the recent past. We are also large supporters of the technology groups where we have our main engineering centers (New York and Dublin)–opening up our offices to host local meetups, gatherings of technology groups, and even free all-day courses.

We also include open source in our hiring process, in that we tend to prefer developers who are or have been involved in open source. In general we see it as a great sign that a developer is used to code that gets read, and also has priorities aligned with how we try to develop at Gilt.


In Closing

Eric, I'd like to thank you for taking the time to talk with us about Gilt. We appreciate the transparency and comfort you have in sharing many of the architectural underpinnings of such a highly-trafficked property. The complexity and diversity of the technologies you use show that scaling a site requires more than just choosing a stack and running with it. It also demonstrates that it's important to objectively look at all of the options available and choose (and sometimes adapt) other products that can help your business be successful.

Intro to the React Framework

In today’s world of JavaScript application frameworks, design philosophy is the key differentiating factor. If you compare the popular JS frameworks, such as EmberJS, AngularJS, Backbone, Knockout, etc., you are sure to find differences in their abstractions, thinking models, and of course the terminology. This is a direct consequence of the underlying design philosophy. But, in principle, they all do one thing, which is to abstract out the DOM in such a way that you don’t deal directly with HTML elements.

I personally think that a framework becomes interesting when it provides a set of abstractions that enable a different mode of thinking. In this respect, React, the new JS framework from the folks at Facebook, will force you to rethink (to some extent) how you decompose the UI and interactions of your application. Having reached version 0.4.1 (as of this writing), React provides a surprisingly simple, yet effective model for building JS apps.

In this article, we’ll explore the building blocks of React and embrace a style of thinking that may seem counter-intuitive at first. But, as the React docs say, “Give it Five Minutes,” and you will see how this approach becomes more natural.


Motivations

The story of React started within the confines of Facebook, where it brewed for a while. Having reached a stable-enough state, the developers decided to open-source it a few months back. Interestingly, the Instagram website is also powered by React.

React approaches the DOM-abstraction problem with a slightly different take. To understand how this is different, let’s quickly review the techniques adopted by the frameworks I mentioned earlier.

A High Level Overview of JS Application Frameworks

The MVC (Model-View-Controller) design pattern is fundamental to UI development, not just in web apps, but in front-end applications on any platform. In the case of web apps, the DOM is the physical representation of a View. The DOM itself is generated from a textual HTML template that is pulled from a separate file, script block or precompiled template function. The View is an entity that brings the textual template to life as a DOM fragment. It also sets up event-handlers and takes care of manipulating the DOM tree as part of its lifecycle.

For the View to be useful, it needs to show some data, and possibly allow user interaction. The data is the Model, which comes from some data-source (a database, web-service, local-storage, etc.). Frameworks provide a way of “binding” the data to the view, such that changes in data are automatically reflected with changes on the view. This automatic process is called data-binding and there are APIs/techniques to make this as seamless as possible.

The MVC triad is completed by the Controller, which engages the View and the Model and orchestrates the flow of data (Model) into the View and user-events out from the View, possibly leading to changes in the Model.

mvc-flow

Frameworks that automatically handle the flow of data back and forth between the View and Model maintain an internal event-loop. This event-loop is needed to listen to certain user events, data-change events, external triggers, etc and then determine if there is any change from the previous run of the loop. If there are changes, at either end (View or Model), the framework ensures that both are brought back in sync.

What Makes React Different?

With React, the View-part of the MVC triad takes prominence and is rolled into an entity called the Component. The Component maintains an immutable property bag called props, and a state that represents the user-driven state of the UI. The view-generation part of the Component is rather interesting and possibly the reason that makes React stand out compared to other frameworks. Instead of constructing a physical DOM directly from a template file/script/function, the Component generates an intermediate DOM that is a stand-in for the real HTML DOM. An additional step is then taken to translate this intermediate DOM into the real HTML DOM.

As part of the intermediate DOM generation, the Component also attaches event-handlers and binds the data contained in props and state.

If the idea of an intermediate-DOM sounds a little alien, don’t be too alarmed. You have already seen this strategy adopted by language runtimes (aka Virtual Machines) for interpreted languages. Our very own JavaScript runtime first generates an intermediate representation before spitting out the native code. This is also true for other VM-based languages such as Java, C#, Ruby, and Python.

React cleverly adopts this strategy to create an intermediate DOM before generating the final HTML DOM. The intermediate-DOM is just a JavaScript object graph and is not rendered directly. There is a translation step that creates the real DOM. This is the underlying technique that makes React do fast DOM manipulations.


React In Depth

To get a better picture of how React makes it all work, let’s dive a little deeper; starting with the Component. The Component is the primary building block in React. You can compose the UI of your application by assembling a tree of Components. Each Component provides an implementation for the render() method, where it creates the intermediate-DOM. Calling React.renderComponent() on the root Component results in recursively going down the Component-tree and building up the intermediate-DOM. The intermediate-DOM is then converted into the real HTML DOM.

component-dom-tree

Since the intermediate-DOM creation is an integral part of the Component, React provides a convenient XML-based extension to JavaScript, called JSX, to build the component tree as a set of XML nodes. This makes it easier to visualize and reason about the DOM. JSX also simplifies the association of event-handlers and properties as XML attributes. Since JSX is an extension language, there is a tool (command-line and in-browser) to generate the final JavaScript. A JSX XML node maps directly to a Component. It is worth pointing out that React works independently of JSX, and the JSX language only makes it easy to create the intermediate DOM.

Tooling

The core React framework can be downloaded from their website. Additionally, for the JSX → JS transformation, you can either use the in-browser JSXTransformer or use the command line tool, called react-tools (installed via NPM). You will need an installation of Node.js to download it. The command-line tool allows you to precompile the JSX files and avoid the translation within the browser. This is definitely recommended if your JSX files are large or many in number.

A Simple Component

Alright, we have seen a lot of theory so far, and I am sure you are itching to see some real code. Let’s dive into our first example:

/** @jsx React.DOM */

var Simple = React.createClass({

  getInitialState: function(){
    return { count: 0 };
  },

  handleMouseDown: function(){
    alert('I was told: ' + this.props.message);
    this.setState({ count: this.state.count + 1});
  },

  render: function(){

    return <div><div class="clicker" onMouseDown={this.handleMouseDown}>
        Give me the message!</div><div class="message">Message conveyed<span class="count">{this.state.count}</span> time(s)</div></div>
    ;
  }
});

React.renderComponent(<Simple message="Keep it Simple"/>,
                  document.body);

Although simple, the code above does cover a good amount of the React surface area:

  • We create the Simple component by using React.createClass and passing in an object that implements some core functions. The most important one is the render(), which generates the intermediate-DOM.
  • Here we are using JSX to define the DOM and also attach the mousedown event-handler. The {} syntax is useful for incorporating JavaScript expressions for attributes (onMouseDown={this.handleMouseDown}) and child-nodes (<span class="count">{this.state.count}</span>). Event handlers associated using the {} syntax are automatically bound to the instance of the component. Thus this inside the event-handler function refers to the component instance. The comment on the first line /** @jsx React.DOM */ is a cue for the JSX transformer to do the translation to JS. Without this comment line, no translation will take place.

We can run the command-line tool (jsx) in watch mode and auto-compile changes from JSX → JS. The source files are in /src folder and the output is generated in /build.

jsx --watch src/ build/

Here is the generated JS file:

/** @jsx React.DOM */

var Simple = React.createClass({displayName: 'Simple',

  getInitialState: function(){
    return { count: 0 };
  },

  handleMouseDown: function(){
    alert('I was told: ' + this.props.message);
    this.setState({ count: this.state.count + 1});
  },

  render: function(){

    return React.DOM.div(null, 
      React.DOM.div( {className:"clicker", onMouseDown:this.handleMouseDown}, " Give me the message! "      ),
      React.DOM.div( {className:"message"}, "Message conveyed ",        React.DOM.span( {className:"count"}, this.state.count), " time(s)")
    )
    ;
  }
});

React.renderComponent(Simple( {message:"Keep it Simple"}),
                  document.body);

Notice how the <div/> and <span/> tags map to instances of React.DOM.div and React.DOM.span.

  • Now let’s get back to our code example. Inside handleMouseDown, we make use of this.props to read the message property that was passed in. We set the message on the last line of the snippet, in the call to React.renderComponent() where we create the <Simple/> component. The purpose of this.props is to store the data that was passed in to the component. It is considered immutable and only a higher-level component is allowed to make changes and pass it down the component tree.
  • Inside handleMouseDown we also set some user state with this.setState() to track the number of times the message was displayed. You will notice that we use this.state in the render() method. Anytime you call setState(), React also triggers the render() method to keep the DOM in sync. Besides React.renderComponent(), setState() is another way to force a visual refresh.

Synthetic Events

The events exposed on the intermediate-DOM, such as onMouseDown, also act as a layer of indirection before they get set on the real-DOM. These events are thus referred to as Synthetic Events. React adopts event-delegation, which is a well-known technique, and attaches events only at the root-level of the real-DOM. Thus there is only one true event-handler on the real-DOM. Additionally, these synthetic events also provide a level of consistency by hiding browser and element differences.

The combination of the intermediate-DOM and synthetic events gives you a standard and consistent way of defining UIs across different browsers and even devices.
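
To make the idea of event-delegation concrete, here is a plain-DOM sketch of the general technique (this illustrates delegation itself, not React’s actual internals; the clicker class name is borrowed from the earlier example):

// One listener at the root observes events bubbling up from any descendant.
document.body.addEventListener('mousedown', function (event) {
  // Inspect the real target and dispatch to whatever handler
  // the application registered for that element.
  if (event.target.className === 'clicker') {
    console.log('dispatch to the handler registered for .clicker');
  }
});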

Component Lifecycle

Components in the React framework have a specific lifecycle and embody a state-machine that has three distinct states.

component-lifecycle

The Component comes to life after being Mounted. Mounting results in going through a render-pass that generates the component-tree (intermediate-DOM). This tree is converted and placed into a container-node of the real DOM. This is a direct outcome of the call to React.renderComponent().

Once mounted, the component stays in the Update state. A component gets updated when you change state using setState() or change props using setProps(). This in turn results in calling render(), which brings the DOM in sync with the data (props + state). Between subsequent updates, React will calculate the delta between the previous component-tree and the newly generated tree. This is a highly optimized step (and a flagship feature) that minimizes the manipulation on the real DOM.

The final state is Unmounted. This happens when you explicitly call React.unmountAndReleaseReactRootNode() or automatically if a component was a child that was no longer generated in a render() call. Most often you don’t have to deal with this and just let React do the proper thing.

Now, it would be a big omission if React didn’t tell you when it moved between the Mounted-Update-Unmounted states. Thankfully that is not the case, and there are hooks you can override to get notified of lifecycle changes. The names speak for themselves:

  • getInitialState(): prepare initial state of the Component
  • componentWillMount()
  • componentDidMount()
  • componentWillReceiveProps()
  • shouldComponentUpdate(): useful if you want to control when a render should be skipped.
  • componentWillUpdate()
  • render()
  • componentDidUpdate()
  • componentWillUnmount()

The componentWill* methods are called before the state change and the componentDid* methods are called after.

Some of the method names do seem to have taken a cue from the Cocoa frameworks on Mac and iOS.
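
As a minimal sketch of how these hooks are supplied (the Ticker component and its timer are my own illustration, not from the article):

/** @jsx React.DOM */

var Ticker = React.createClass({

  getInitialState: function(){
    return { ticks: 0 };
  },

  componentDidMount: function(){
    // The component is now in the real DOM: safe to start timers, measure nodes, etc.
    this._timer = setInterval(this.tick, 1000);
  },

  shouldComponentUpdate: function(nextProps, nextState){
    // Skip the render-pass entirely when nothing we care about has changed
    return nextState.ticks !== this.state.ticks;
  },

  componentWillUnmount: function(){
    // Clean up whatever was created in componentDidMount
    clearInterval(this._timer);
  },

  tick: function(){
    this.setState({ ticks: this.state.ticks + 1 });
  },

  render: function(){
    return <span class="ticks">{this.state.ticks}</span>;
  }
});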

Miscellaneous Features

Within a component-tree, data should always flow down. A parent-component should set the props of a child-component to pass any data from the parent to the child. This is termed the Owner-Owned pair. On the other hand, user-events (mouse, keyboard, touches) will always bubble up from the child all the way to the root component, unless handled in between.

data-event-flow

When you create the intermediate-DOM in render(), you can also assign a ref property to a child component. You can then refer to it from the parent using the refs property. This is depicted in the snippet below.

  render: function(){
    // Set a ref 
    return <div><span ref="counter" class="count">{this.state.count}</span></div>;
  }

  handleMouseDown: function(){
    // Use the ref
    console.log(this.refs.counter.innerHTML);
  },

As part of the component metadata, you can set the initial-state (getInitialState()), which we saw earlier within the lifecycle methods. You can also set the default values of the props with getDefaultProps() and also establish some validation rules on these props using propTypes. The docs give a nice overview of the different kinds of validations (type checks, required, etc.) you can perform.
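
As a brief sketch of both (the Greeting component and its props are hypothetical, and the validators assume the React.PropTypes helpers described in the docs):

/** @jsx React.DOM */

var Greeting = React.createClass({

  propTypes: {
    // Validation rules: name must be a string and must be supplied
    name: React.PropTypes.string.isRequired,
    greeting: React.PropTypes.string
  },

  getDefaultProps: function(){
    // Used whenever the owner does not pass the prop in
    return { greeting: 'Hello' };
  },

  render: function(){
    return <div class="greeting">{this.props.greeting}, {this.props.name}!</div>;
  }
});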

React also supports the concept of a Mixin to extract reusable pieces of behavior that can be injected into disparate Components. You can pass the mixins using the mixins property of a Component.
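
For example, a hypothetical mixin that logs a message when a component mounts could be shared like this:

/** @jsx React.DOM */

// Reusable behavior, kept outside any single component
var LogOnMountMixin = {
  componentDidMount: function(){
    console.log('component mounted');
  }
};

var Badge = React.createClass({
  mixins: [LogOnMountMixin],

  render: function(){
    return <span class="badge">{this.props.label}</span>;
  }
});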

Now, let’s get real and build a more comprehensive Component that uses these features.


A Shape Editor Built Using React

In this example, we will build an editor that accepts a simple DSL (Domain Specific Language) for creating shapes. As you type, you will see the corresponding output alongside, giving you live feedback.

The DSL allows you to create three kinds of shapes: Ellipse, Rectangle and Text. Each shape is specified on a separate line along with a bunch of styling properties. The syntax is straightforward and borrows a bit from CSS. To parse a line, we use a Regex that looks like:

  var shapeRegex = /(rect|ellipse|text)(\s[a-z]+:\s[a-z0-9]+;)*/i;

As an example, the following set of lines describes two rectangles and a text label…

// React label
text value:React; color: #00D8FF; font-size: 48px; text-shadow: 1px 1px 3px #555; padding: 10px; left: 100px; top: 100px;

// left logo
rect background:url(react.png) no-repeat; border: none; width: 38; height: 38; left: 60px; top: 120px;

// right logo
rect background:url(react.png) no-repeat; border: none; width: 38; height: 38; left: 250px; top: 120px;

…generating the output shown below:

react-shapes

Setting Up

Alright, let’s go ahead and build this editor. We will start out with the HTML file (index.html), where we put in the top-level markup and include the libraries and application scripts. I am only showing the relevant parts here:

<body>
  <select class="shapes-picker">
    <option value="--">-- Select a sample --</option>
    <option value="react">React</option>
    <option value="robot">Robot</option>
  </select>
  <div class="container"></div>

  <!-- Libraries -->
  <script src="../../lib/jquery-2.0.3.min.js"></script>
  <script src="../../lib/react.js"></script>

  <!-- Application Scripts -->
  <script src="../../build/shape-editor/ShapePropertyMixin.js"></script>
  <script src="../../build/shape-editor/shapes/Ellipse.js"></script>
  <script src="../../build/shape-editor/shapes/Rectangle.js"></script>
  <script src="../../build/shape-editor/shapes/Text.js"></script>
  <script src="../../build/shape-editor/ShapeParser.js"></script>
  <script src="../../build/shape-editor/ShapeCanvas.js"></script>
  <script src="../../build/shape-editor/ShapeEditor.js"></script>
  <script src="../../build/shape-editor/shapes.js"></script>
  <script src="../../build/shape-editor/app.js"></script>
</body>

In the above snippet, the container div holds our React generated DOM. Our application scripts are included from the /build directory. We are using JSX within our components and the command line watcher (jsx), puts the converted JS files into /build. Note that this watcher command is part of the react-tools NPM module.

jsx --watch src/ build/

The editor is broken down into a set of components, which are listed below:

  • ShapeEditor: the root Component in the component tree
  • ShapeCanvas: responsible for generating the shape-Components (Ellipse, Rectangle, Text). It is contained within the ShapeEditor.
  • ShapeParser: responsible for parsing text and extracting the list of shape definitions. It parses line by line with the Regex we saw earlier. Invalid lines are ignored. This is not really a component, but a helper JS object, used by the ShapeEditor.
  • Ellipse, Rectangle, Text: the shape Components. These become children of the ShapeCanvas.
  • ShapePropertyMixin: provides helper functions for extracting styles found in the shape definitions. This is mixed-into the three shape-Components using the mixins property.
  • app: the entry-point for the editor. It generates the root component (ShapeEditor) and allows you to pick a shape sample from the drop-down.

The relationship of these entities is shown in the annotated component-tree:

component-tree

The ShapeEditor Component

Lets look at the implementation of some of these components, starting with the ShapeEditor.

/** @jsx React.DOM */
var ShapeEditor = React.createClass({

  componentWillMount: function () {
    this._parser = new ShapeParser();
  },

  getInitialState: function () {
    return { text: '' };
  },

  render: function () {
    var shapes = this._parser.parse(this.state.text);

    var tree = (
      <div>
        <textarea class="editor" onChange={this.handleTextChange} />
        <ShapeCanvas shapes={shapes} />
      </div>
    );

    return tree;
  },

  handleTextChange: function (event) {
    this.setState({ text: event.target.value })
  }

});

As the name suggests, the ShapeEditor provides the editing experience by generating the <textarea/> and the live feedback on the <ShapeCanvas/>. It listens to the onChange event (events in React are always named with camel case) on the <textarea/> and, on every change, sets the text property of the component’s state. As mentioned earlier, whenever you set the state using setState(), render is called automatically. In this case, the render() of the ShapeEditor gets called, where we parse the text from the state and rebuild the shapes. Note that we are starting with an initial state of empty text, which is set in the getInitialState() hook.

For parsing the text into a set of shapes, we use an instance of the ShapeParser. I’ve left out the details of the parser to keep the discussion focused on React. The parser instance is created in the componentWillMount() hook. This is called just before the component mounts and is a good place to do any initializations before the first render happens.

It is generally recommended that you funnel all your complex processing through the render() method. Event handlers just set the state while render() is the hub for all your core logic.

The ShapeEditor uses this idea to do the parsing inside of its render() and forwards the detected shapes by setting the shapes property of the ShapeCanvas. This is how data flows down into the component tree, from the owner (ShapeEditor) to the owned (ShapeCanvas).

One last thing to note in here is that we have the first line comment to indicate JSX → JS translation.

ShapeCanvas to Generate the Shapes

Next, we will move on to the ShapeCanvas and the Ellipse, Rectangle and Text components.

The ShapeCanvas is rather straightforward, with its core responsibility of generating the respective <Ellipse/>, <Rectangle/> and <Text/> components from the passed in shape definitions (this.props.shapes). For each shape, we pass in the parsed properties with the attribute expression: properties={shape.properties}.

/** @jsx React.DOM */
var ShapeCanvas = React.createClass({

  getDefaultProps: function(){
    return {
      shapes: []
    };
  },

  render: function () {
    var self = this;
    var shapeTree = <div class="shape-canvas">
    {
      this.props.shapes.map(function(s) {
        return self._createShape(s);
      })
    }</div>;

    var noTree = <div class="shape-canvas no-shapes">No Shapes Found</div>;

    return this.props.shapes.length > 0 ? shapeTree : noTree;
  },

  _createShape: function(shape) {
    return this._shapeMap[shape.type](shape);
  },

  _shapeMap: {
    ellipse: function (shape) {
      return <Ellipse properties={shape.properties} />;
    },

    rect: function (shape) {
      return <Rectangle properties={shape.properties} />;
    },

    text: function (shape) {
      return <Text properties={shape.properties} />;
    }
  }

});

One difference here is that our component tree is not static, like the one in ShapeEditor. Instead, it’s dynamically generated by looping over the passed-in shapes. We also show the "No Shapes Found" message if there is nothing to show.

The Shapes: Ellipse, Rectangle, Text

All of the shapes have a similar structure and differ only in the styling. They also make use of the ShapePropertyMixin to handle the style generation.

Here’s Ellipse:

/** @jsx React.DOM */

var Ellipse = React.createClass({
  mixins: [ShapePropertyMixin],

  render:function(){
    var style = this.extractStyle(true);
    style['border-radius'] = '50% 50%';

    return <div style={style} class="shape" />;
  }
});

The implementation for extractStyle() is provided by the ShapePropertyMixin.

The Rectangle component follows suit, of course without the border-radius style. The Text component has an extra property called value which sets the inner text for the <div/>.

Here’s Text, to make this clear:

/** @jsx React.DOM */

var Text = React.createClass({

  mixins: [ShapePropertyMixin],

  render:function(){
    var style = this.extractStyle(false);
    return <div style={style} class="shape">{this.props.properties.value}</div>;
  }

});

Tying It All Together With App.js

app.js is where we bring it all together. Here we render the root component, the ShapeEditor, and also provide support to switch between a few sample shapes. When you pick a different sample from the drop-down, we load some predefined text into the ShapeEditor and cause the ShapeCanvas to update. This happens in the readShapes() method.

/** @jsx React.DOM */

var shapeEditor = <ShapeEditor />;
React.renderComponent(
  shapeEditor,
  document.getElementsByClassName('container')[0]
);
function readShapes() {
  var file = $('.shapes-picker').val(),
    text = SHAPES[file] || '';

  $('.editor').val(text);
  shapeEditor.setState({ text: text }); // force a render
}

$('.shapes-picker').change(readShapes);
readShapes(); // load time

To exercise the creative side, here is a robot built using the Shape Editor:

robot

And That’s React for you!

Phew! This has been a rather long article, and having reached this point, you should have a sense of achievement!

We have explored a lot of concepts here: the integral role of Components in the framework, the use of JSX to easily describe a component tree (aka intermediate-DOM), the various hooks to plug into the component lifecycle, the use of state and props to drive the render process, the use of Mixins to factor out reusable behavior, and finally pulling all of this together with the Shape Editor example.

Hope this article gives you enough of a boost to build a few React apps for yourself. To continue your exploration, here are a few handy links:

Client-Side Security Best Practices

Thanks to HTML5, more and more of an application’s logic is transferred from the server-side to the client-side. This requires front-end developers to focus more on security. In this article I will show you how to make your apps more secure. I will focus on techniques that you may not have heard about, instead of just telling you that you have to escape HTML data entered in by users.


Don’t Even Think About HTTP

Of course I don’t want you to serve your content with FTP or plain TCP. What I mean is that if you want your users to be safe when using your website, you need to use SSL (HTTPS). And not only for login pages or valuable information, but for all of your content. Otherwise, when someone accesses your app from a public network, what they see may be tampered with by an attacker inside that network. This is called a man-in-the-middle attack:

main-in-the-middle

When you use SSL, all of the data is encrypted before it’s sent, so even if attackers get hold of it, they will not be able to read or modify it. This is by far the most important step in securing your app.

Strict Transport Security

This HTTP header can come in handy if you want to serve your content using only SSL. When it’s issued by the server (or via a <meta> tag, though that will allow at least one request to be made over HTTP), the browser will send no insecure traffic to your server. It is used like this:

Strict-Transport-Security: max-age=3600; includeSubDomains

The includeSubDomains part is optional; it declares that you also want all of the sub-domains to be accessed using HTTPS. The max-age option sets how long (in seconds) the browser should remember to use SSL for your site. Sadly, only Firefox, Chrome and Opera support Strict Transport Security.
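
If your application happens to be served by Node.js, for instance, sending the header on every response might look roughly like this (the certificate paths are placeholders):

var https = require('https');
var fs = require('fs');

var options = {
	key: fs.readFileSync('server-key.pem'),   // placeholder paths
	cert: fs.readFileSync('server-cert.pem')
};

https.createServer(options, function (req, res) {
	// Tell the browser to use HTTPS for the next hour, sub-domains included
	res.setHeader('Strict-Transport-Security', 'max-age=3600; includeSubDomains');
	res.end('Hello over HTTPS');
}).listen(443);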

Secure and HttpOnly

Another way to further improve security, on both HTTP and HTTPS, is to use these two cookie attributes: Secure and HttpOnly. The first one allows a cookie to be sent only over an SSL connection. The second one may sound like the exact opposite, but it’s not. It tells the browser that the cookie can only be accessed using the HTTP(S) protocol, so it cannot be stolen using, for example, JavaScript’s document.cookie.
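
A response header that sets a cookie with both attributes might look like this (the cookie name and value are placeholders):

Set-Cookie: sessionId=a3fWa9; Secure; HttpOnly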


Make XSS Less Harmful With Content Security Policy

Content Security Policy allows you to define the origin of all scripts, images etc. on your site.

If you think your XSS filter will stop all possible XSS attacks, check how many ways there are to perform these attacks and think again. Of course, securing your app to stop all of them may be a problem and may slow it down, but there is a solution.

It’s called Content Security Policy. It allows you to define the origin of all scripts, images etc. on your site. It also blocks all inline scripts and styles, so even if someone can inject a script tag into a comment or post, the code would not be executed. CSP is an HTTP header (which can also be set using an HTML <meta> tag), which looks like this:

Content-Security-Policy: policy

Where policy is a set of CSP directives. Here are the possible options:

  • script-src – sets acceptable sources of JavaScript code
  • style-src – defines acceptable origins of CSS styles
  • connect-src – specifies the servers the browser can connect to using XHR, WebSockets and EventSource
  • font-src – lists allowed sources of fonts
  • frame-src – defines what origins should be allowed in iframes
  • img-src – sets allowed image sources
  • media-src – lists origins that can serve video and audio files
  • object-src – same as above but for Flash and other plugins

If a directive is not set, the browser assumes that all origins are allowed. This can be changed by setting the default-src option. What you set there will be applied to all unset directives. There is also a sandbox option, which makes the webpage load as an iframe with the sandbox attribute. An example usage of the CSP header would look like this:

Content-Security-Policy: default-src 'self'; script-src https://apis.google.com

This allows all of the assets to be loaded only from the application’s origin (the 'self' keyword) and also allows you to load scripts from the Google APIs server. There is a lot of flexibility when defining CSP, and when used properly it will greatly improve the security of your webpage.

Drawbacks

The thing to remember when using CSP is that, by default, all inline JavaScript will not be executed. This also includes:

  • inline event listeners: like <body onload="main();">
  • all javascript URLs: like <a href="javascript:doTheClick()">

This is because the browser cannot distinguish your inline code from the hacker’s inline code. You will have to replace them by adding event listeners with addEventListener or some framework’s equivalent. This is not a bad thing ultimately, as it forces you to separate your application’s logic from its graphical representation, which you should be doing anyway. CSP also (by default) blocks all eval()-ish code, including strings in setInterval/setTimeout and code like new Function('return false').
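
Replacing the two inline examples above could look roughly like this (the anchor selector is an assumption for illustration):

// Instead of <body onload="main();">:
document.addEventListener('DOMContentLoaded', main);

// Instead of <a href="javascript:doTheClick()">:
document.querySelector('a.do-the-click').addEventListener('click', doTheClick);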

Availability

CSP is available in most of the modern browsers. Firefox, Chrome and Opera (mobile too) use the standard Content-Security-Policy header. Safari (iOS too) and Chrome for Android use the X-WebKit-CSP header. IE10 (with support limited to the sandbox directive only) uses X-Content-Security-Policy. So, thanks to Internet Explorer, you can’t rely on CSP alone (unless you use something like Google Chrome Frame), but you can still use it to improve security on the other browsers and to prepare your app for the future.


Use Cross Origin Resource Sharing Instead of JSONP

JSONP is currently the most widely used technique for getting resources from other servers despite the same-origin policy. Usually, you just create a callback function in your code and then pass the name of that function to the URL from which you want to get the data, like this:

function parseData(data) {
	...
}
<script src="http://someserver.com/data?format=jsonp&callback=parseData"></script>

But by doing this, you are creating a big security risk. If the server that you are getting data from is compromised, a hacker can add malicious code and, for example, steal your users’ private data, because you are actually requesting JavaScript with this technique, and the browser will run all of the returned code just like a normal script file.

The solution here is Cross Origin Resource Sharing. It allows your data provider to add a special header in responses so that you can use XHR to retrieve that data, then parse and verify it. This removes the risk of getting malicious code executed on your site.

The implementation only requires the provider to add the following special header to responses:

Access-Control-Allow-Origin: allowed origins

This can be just a few allowed origins separated by spaces, or the wildcard character * to let every origin request the data.
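
For comparison with the JSONP snippet above, the same data could be fetched over CORS with a plain XHR and parsed, never executed, on arrival (the URL is the placeholder one used earlier):

var xhr = new XMLHttpRequest();
xhr.open('GET', 'http://someserver.com/data?format=json', true);
xhr.onload = function () {
	// The response is treated as data, not run as a script
	var data = JSON.parse(xhr.responseText);
	parseData(data);
};
xhr.send();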

Availability

All current versions of modern browsers support CORS, with the exception of Opera Mini.

Of course, the bigger problem here is that service providers would have to add CORS support, so it’s not completely dependent on the developer.


Sandbox Potentially Harmful Iframes

An iframe with the sandbox attribute will not be able to navigate the window, execute scripts, lock the pointer, show pop-ups or submit forms.

If you are using iframes to load content from external sites, you may want to secure them too. This can be done using the sandbox iframe attribute. An iframe whose sandbox attribute is present but empty will not be allowed to navigate the window, execute scripts, lock the pointer, show pop-ups or submit forms. The frame will also have a unique origin, so it can’t use localStorage or anything related to the same-origin policy. You can, of course, re-enable some of these capabilities if you want, by adding one or more of these values to the attribute:

  • allow-same-origin – the frame will have the same origin as the site, instead of the unique one
  • allow-scripts – the frame will be allowed to execute JavaScript
  • allow-forms – the frame will be able to submit forms
  • allow-pointer-lock – the frame will have access to the Pointer Lock API
  • allow-popups – the frame will be allowed to show pop-ups
  • allow-top-navigation – the frame will be able to navigate the window
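
Putting it together, a frame that may run scripts and submit forms but keeps all the other restrictions would look like this (the URL is a placeholder):

<iframe src="http://example.com/widget.html" sandbox="allow-scripts allow-forms"></iframe>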

Availability

The sandbox iframe attribute is supported in all modern browsers, with the exception of Opera Mini.


Conclusion

So that’s it. I hope you’ve learned some new techniques that you can use in your future projects to protect your applications. Thanks to HTML5, we can now do amazing things with our websites, but we have to think about security from the first line of code if we want them to be resistant to attacks.

WebGL With Three.js: Basics

3D graphics in the browser have been a hot topic ever since they were first introduced. But if you were to create your apps using plain WebGL, it would take ages. This is exactly why some really useful libraries have recently come about. Three.js is one of the most popular, and in this series I will show you how best to use it in order to create stunning 3D experiences for your users.

I do expect you to have a basic understanding of 3D space before you start reading this tutorial, as I won’t be explaining things like coordinates and vectors.


Step 1: Preparation

First, create three files: index.html, main.js and style.css. Next, download Three.js (the whole zip file with examples and source, or the JavaScript file alone; your choice). Then open index.html and insert this code:

<!DOCTYPE html>
<html>
<head>
	<link rel="stylesheet" href="./style.css">
	<script src="./three.js"></script>
</head>
<body>
	<script src="./main.js"></script>
</body>
</html>

That’s all you need in this file. Just a declaration of scripts and stylesheet. All the magic will happen in main.js, but before we get to that we need one more trick to make the app look good. Open style.css and insert this code:

canvas {
	position: fixed;
	top: 0;
	left: 0;
}

This will position the canvas in the top-left corner, because by default the body has 8px of margin. Now we can proceed with the JavaScript code.


Step 2: The Scene and the Renderer

Three.js uses the concept of a display list. It means that all objects are stored in the list and then drawn to the screen.

Three.js uses the concept of a display list. This means that all objects are stored in the list and then drawn to the screen. Here, this is a THREE.Scene object. You need to add any object you want to be drawn on the screen to the scene. You can have as many scenes as you want, but one renderer can draw only one scene at a time (of course, you can switch the scene that is displayed).

The renderer simply draws everything from the scene to the WebGL canvas. Three.js also supports drawing on SVG or 2D Canvas, but we will focus on WebGL.

To get started, let’s store the window’s width and height in variables; we will use them later:

var width = window.innerWidth;
var height = window.innerHeight;

Now define the renderer and the scene:

var renderer = new THREE.WebGLRenderer({ antialias: true });
renderer.setSize(width, height);
document.body.appendChild(renderer.domElement);

var scene = new THREE.Scene;

The first line defines the WebGL renderer. You can pass the renderer’s options in the first argument as a map. Here, we set antialias to true, because we want the edges of objects to be smooth, not jagged.

The second line sets the renderer size to the size of the window, and in the third one we add the renderer’s canvas element to the document (you can also do this using a library, like jQuery: $('body').append(renderer.domElement)).

The last one defines the scene, no arguments needed.


Step 3: The Cube

Now let’s add something to be drawn. Let it be a cube, since it’s the simplest 3D object. In Three.js the objects that are drawn on the screen are called meshes. Each mesh has to have its own geometry and material. Geometry is a set of points that need to be connected in order to create the object. Material is simply the paint (or painting, but that is not the topic of this tutorial) that will cover the object. So, let’s create our cube. Luckily for us there are some helper functions in Three.js for creating primitives (simple shapes):

var cubeGeometry = new THREE.CubeGeometry(100, 100, 100);
var cubeMaterial = new THREE.MeshLambertMaterial({ color: 0x1ec876 });
var cube = new THREE.Mesh(cubeGeometry, cubeMaterial);

cube.rotation.y = Math.PI * 45 / 180;

scene.add(cube);

As you can see, first we create the geometry. The arguments define the size of the cube: its width, height and depth.

Next, we define the cube’s material. There are a few material types in Three.js, but this time we will use the THREE.MeshLambertMaterial, since we want to have some lighting later (this material uses Lambert’s algorithm for light calculations). You can pass the options in the first argument as a map, the same as with the renderer – this is pretty much a rule for more complex objects in Three.js. Here, we only use color, which is passed as a hexadecimal number.

On the third line, we create a mesh using the geometry and material created earlier. Next, we rotate the cube by 45 degrees on the Y axis, to make it look better. We have to change degrees to radians, which is handled by the equation you probably remember from your high school physics class: Math.PI * 45 / 180. Finally, the cube is added to the scene.

Now you could open index.html in your browser to see the results, but you will see nothing because the scene is not rendered yet.


Step 4: Camera!

To render something, first we need to add the camera to the scene, so the renderer knows from which point of view it should render. There are a few types of cameras in Three.js, but you’ll probably only use THREE.PerspectiveCamera. This type of camera presents the scene the way we see our world. Let’s create one:

var camera = new THREE.PerspectiveCamera(45, width / height, 0.1, 10000);

“To render something, first we need to add the camera to the scene, so the renderer knows from which point of view it should render stuff.”

Creating the camera is a bit more complicated than the rest of the things we’ve done so far. The first argument defines the FOV (field of view), the angle that can be seen from where the camera is. A FOV of 45 degrees looks natural. Next, we define the camera’s aspect ratio. This is always the width of the renderer divided by its height, unless you want to achieve some special effects. The last two numbers define how near and how far from the camera an object can be and still be drawn.

Now we have to move the camera back and up a little, as all of the objects created in Three.js have their position set in the middle of the scene (x: 0, y: 0, z: 0) by default:

camera.position.y = 160;
camera.position.z = 400;

The z coordinate is positive in the direction of the viewer, so objects with a higher z position will appear closer to you (in this case, since we moved the camera, all of the objects will appear further away from you).

Now, lets add the camera to the scene and render it:

scene.add(camera);

renderer.render(scene, camera);

You add the camera just like you added the cube. The next line renders the scene using this camera. Now you can open the browser and you should see the following:

first_rendering

You should only be able to see the top of the cube. This is because we moved the camera up and it’s still looking directly in front of it. This can be fixed by letting the camera know what position it should look at. Add this line after the lines setting the position of the camera:

camera.lookAt(cube.position);

The only argument passed in is the position the camera will look at. Now the scene looks better, but the cube is still black, no matter what color you set when creating it:

fixed_camera_lookat

Step 5: Lights!

The cube is black because there are no lights in the scene, so it’s like a completely black room. You see a white background because there is nothing to draw apart from the cube. To avoid that, we will use a technique called a skybox. Basically, we will add a big cube that will display the background of the scene (usually some far terrain if it’s open space). So, let’s create the box. This code should go before the renderer.render call:

var skyboxGeometry = new THREE.CubeGeometry(10000, 10000, 10000);
var skyboxMaterial = new THREE.MeshBasicMaterial({ color: 0x000000, side: THREE.BackSide });
var skybox = new THREE.Mesh(skyboxGeometry, skyboxMaterial);

scene.add(skybox);

This code is similar to the one that creates the cube. But this time the geometry is much bigger. We’ve also used THREE.MeshBasicMaterial since we don’t need to light the skybox. Also, notice the additional argument passed to the material: side: THREE.BackSide. Since the cube will be displayed from the inside, we have to change the side that gets drawn (normally, Three.js draws only outside walls).

Now the rendered scene is completely black. To fix that we have to add light to the scene. We will use THREE.PointLight, which emits the light like a bulb. Add these lines after the skybox:

var pointLight = new THREE.PointLight(0xffffff);
pointLight.position.set(0, 300, 200);

scene.add(pointLight);

As you can see, we’ve created a point light with a white color, then we set its position up and back a little, to light the front and the top of the cube. Finally, the light is added to the scene like any other object. Open up the browser and you should see a colored, shaded cube:

colored_shaded_cube

But the cube is still pretty boring. Let’s add some movement to it.


Step 6: Action!

Now we will add some movement to the scene. Let’s make the cube rotate around the Y axis. But first, we have to change the way that we render the scene. One renderer.render call renders the current state of the scene once. So even if we animate the cube somehow, we will not see it move. To change that, we have to add a render loop to our app. This can be achieved using the requestAnimationFrame function, which was created specially for that purpose. It’s supported in most of the major browsers, and for those which don’t support it, Three.js comes with its own polyfill. So, let’s change this:

renderer.render(scene, camera);

to this:

function render() {
	renderer.render(scene, camera);
	requestAnimationFrame(render);
}

render();

Actually, there is no loop in there, because it would freeze the browser. The requestAnimationFrame function behaves a bit like setTimeout, but it calls the function passed to it as soon as the browser is ready to render the next frame. So, nothing has really changed in the displayed scene and the cube is still not moving. Let’s fix that. Three.js comes with THREE.Clock, which can be used to achieve smooth animation of objects. First, initialize it before the render function definition:

var clock = new THREE.Clock;

Now, each time you call clock.getDelta it will return the time since the last call, in seconds. This can be used to rotate the cube like this:

cube.rotation.y -= clock.getDelta();

Add this line between the renderer.render and the requestAnimationFrame calls in the render function. It’s simply subtracting the time passed from the cube’s rotation on the Y axis (remember that it’s in radians) to rotate the cube clockwise. Now open the browser and you should see your cube rotating clockwise smoothly.
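
For reference, the fully assembled render loop from the pieces above ends up looking like this:

var clock = new THREE.Clock;

function render() {
	renderer.render(scene, camera);
	cube.rotation.y -= clock.getDelta();
	requestAnimationFrame(render);
}

render();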


Conclusion

In this part of the series you learned how to prepare the scene, add objects and lights, and how to animate things. You can experiment with the app: add more or different objects and lights. It’s up to you. Next time I will show you how to use textures and how to create some nice effects with particles. Don’t forget to take a look at the documentation if you are having any problems.

Deploying a Laravel Application Using Capistrano

So you’ve just built a fancy web application and you’re planning to put it online. This can be done in many ways. In this article, we’ll cover one approach to deploying your backend system to your production server. We’ll walk through the process using a Laravel application as an example, but this can be applied to any other language or technology.


The Past

Perhaps you have already put some websites online in the past. You’ve probably used an FTP client and uploaded the bits and bytes by hand. Or perhaps you always logged into your server via SSH and pulled the changes manually.


The Idea

Our goal is to simplify this process as much as possible. The idea is to use your code repository as the source for every deployment. The deployment tool, in our case Capistrano, will automatically log into your server and build your system right out of your repository.

Software deployment is all of the activities that make a software system available for use. – Wikipedia


What You’ll Need

On Your Remote Server

Your remote server needs to provide SSH access. It should also have all the necessary dependencies for your project installed, such as Git, PHP, MySQL, Composer, etc. Aside from this, you don’t need any extra software on your production server.

On Your Local Machine

In order to install and use Capistrano, you need at least Ruby 1.8.7 (if you don’t have Ruby installed, I recommend installing it using rbenv). To install Capistrano, you simply have to run:

gem install capistrano

So why Capistrano, you may ask? As always, there are many ways to accomplish a task, but in my case Capistrano has always seemed to be the easiest and most flexible approach. You can configure it to suit all your needs, and there are a lot of plugins out there that simplify your work even further.

Capistrano is a utility framework for executing commands in parallel on multiple remote machines, via SSH. It uses a simple DSL (borrowed in part from Rake) that allows you to define tasks, which may be applied to machines in certain roles. It also supports tunneling connections via some gateway machine to allow operations to be performed behind VPNs and firewalls.


Prepare

Now that we have everything we need, let’s set up our deployment settings. But first we have to create a folder on the remote server where all the files should be deployed to. Log into your server with SSH and create a folder. A common place is /var/www/. So let’s do this:

$ sudo mkdir /var/www/my-app
$ sudo chown -R username:group /var/www/my-app

That’s it. There is nothing more to do on the remote server, so you can close the SSH connection and move on. Go into your project (or any other folder, that doesn’t matter right now) and run:

$ cd my-project
$ capify .

This command will create the basic files we need. The Capfile is like the mount point for Capistrano, but for now we’ll just need to edit the config/deploy.rb file, which, as the name suggests, is responsible for all the configuration. Open this file in your favorite text editor and replace the content with the following snippet. We’ll go through the code afterwards.

set :application, "Your app name"  # EDIT your app name

set :scm, :git
set :deploy_via, :remote_cache
set :repository,  "https://github.com/laravel/laravel.git" # EDIT your git repository

role :app, "12.456.789.123" # EDIT server ip address 
set :deploy_to, "/var/www/my-app" # EDIT folder where files should be deployed

set :user, "" # EDIT your ssh user name
set :password, "" # EDIT your ssh password
set :use_sudo, false
set :ssh_options, {
	:forward_agent => true
}

default_run_options[:pty] = true # needed for the password prompt from git to work

namespace :deploy do

	task :update do
		transaction do
			update_code # built-in function
			composer_install
			prepare_artisan
			symlink # built-in function
		end
	end

	task :composer_install do
		transaction do
			run "cd #{current_release} && composer install --no-dev --quiet"
		end
	end

	task :prepare_artisan do
		transaction do
			run "chmod u+x #{current_release}/artisan"
		end
	end

	task :restart do
		transaction do
			run "chmod -R g+w #{releases_path}/#{release_name}"
			run "chmod -R 777 #{current_release}/app/storage/cache"
			run "chmod -R 777 #{current_release}/app/storage/logs"
			run "chmod -R 777 #{current_release}/app/storage/meta"
			run "chmod -R 777 #{current_release}/app/storage/sessions"
			run "chmod -R 777 #{current_release}/app/storage/views"
		end
	end

end

You now have to put your data in every line with an #EDIT comment (IP address, git repo, SSH user, password, …). The :deploy_to variable should be the folder we just created. Your webserver (Apache, nginx, …) should point to /var/www/my-app/current/public. In the first part of the deploy.rb file, you set up all your data. In the namespace :deploy block, you specify what should actually happen on each deployment.

So let’s have a quick walk-through of what those tasks mean:

  • update_code is a built-in method of Capistrano and pulls in your latest version from your git repository.
  • composer_install fetches all your PHP dependencies, just like you’re used to during development.
  • prepare_artisan makes the artisan file executable in order to use it for migrations.
  • Every deployment is stored in /var/www/my-app/releases/. The built-in task symlink creates a symbolic link from the most recent deployment to the current folder. This way you can keep older releases and switch versions without going offline for a second. Once this task has run, your newest version is online.

Here you can easily add your own tasks if your build process requires some extra steps. For more detailed information, I recommend reading the Wiki on Github.

Now it’s time to initiate the server and test if everything works. To do this, run the following commands:

$ cap deploy:setup
$ cap deploy:check

You should see the message You appear to have all necessary dependencies installed. That means we are prepared for our first deploy.


Fire!

This is the moment you were waiting for. The hardest part is done. From now on, every time you want to update your application, you only have to run the following magical command. Capistrano will read your config/deploy.rb file and run each task. If a task fails, the deploy will stop and the old version will still be online.

$ cap deploy

You will see a bunch of text output and after a little time (depending on your server) everything should be complete. That was easy, wasn’t it?


Further Thoughts

Security

Perhaps you might be a little worried about having to put your plain-text password in the configuration file. I only chose that way to make the demonstration as straightforward as possible, but in the real world, you might want to use an SSH key. You can import one like this:

set :user, "" # EDIT your ssh user name
set :use_sudo, false
set :ssh_options, {
	:forward_agent => true,
	:auth_methods => ["publickey"],
	:keys => ["/path/to/your/key.pem"] # EDIT your ssh key
}

Database

So far, we have focused on deploying the actual files to their new home, but in many scenarios you might also do something with your database. Laravel has a perfect tool for that: migrations. To run your migrations, you can just define an extra task. Let’s do so:

	task :laravel_migrate do
		transaction do
			run "#{current_release}/artisan migrate"
		end
	end

You also have to add this task to the transaction block of the update task. Now every time you deploy, the database will be updated to your latest migrations.

Rollback

Sometimes you deploy a non-working version of your application and you need to undo these changes. Capistrano has a built-in feature for that called “rollback”. Just run:

cap deploy:rollback

Conclusion

You’ve just learned a very simple way of deploying your application to your production server(s) with Capistrano. Once the configuration work is done, it takes just one command to deploy your latest version in seconds. But as mentioned earlier, this is not the only way to do it.

You should also check out the task runner Grunt, which is perfectly suited to building and deploying JavaScript applications. A completely different approach is taken by Docker, which acts like a lightweight VM. The idea there is to deploy your whole environment as a virtual machine. Check them out!

The Repository Design Pattern

The Repository Design Pattern, defined by Eric Evans in his Domain-Driven Design book, is one of the most useful and most widely applicable design patterns ever invented. Any application has to work with persistence and with some kind of list of items. These can be users, products, networks, disks, or whatever your application is about. If you have a blog, for example, you have to deal with lists of blog posts and lists of comments. The problem that all of this list-management logic has in common is how to connect the business logic, factories and persistence.


The Factory Design Pattern

As we mentioned in the introductory paragraph, a Repository will connect Factories with Gateways (persistence). These are also design patterns and if you are not familiar with them, this paragraph will shed some light on the subject.

A factory is a simple design pattern that defines a convenient way to create objects. It is a class or set of classes responsible for creating the objects our business logic needs. A factory traditionally has a method called make() that knows how to take all the information needed to build an object, do the object construction itself, and return a ready-to-use object to the business logic.

Here is a little bit more on the Factory Pattern in an older Nettuts+ tutorial: A Beginner’s Guide to Design Patterns. If you prefer a deeper view on the Factory Pattern, check out the first design pattern in the Agile Design Patterns course we have on Tuts+.


The Gateway Pattern

Also known as the “Table Data Gateway”, this is a simple pattern that offers a connection between the business logic and the database itself. Its main responsibility is to run the queries against the database and provide the retrieved data in a data structure typical for the programming language (like an array in PHP). This data is then usually filtered and modified in the PHP code so that we can obtain the information and variables needed to create our objects. This information must then be passed to the Factories.

The Gateway Design Pattern is explained and exemplified in great detail in the Nettuts+ tutorial Evolving Toward a Persistence Layer. Also, in the same Agile Design Patterns course, the second design pattern lesson is about this subject.


The Problems We Need to Solve

Duplication by Data Handling

It may not be obvious at first sight, but connecting Gateways to Factories can lead to a lot of duplication. Any software of considerable size needs to create the same objects from different places. In each place you will need to use the Gateway to retrieve a set of raw data, then filter and shape that data so it is ready to be sent to the Factories. From all these places you will call the same factories with the same data structures, but obviously with different data. Your objects will be created and provided to you by the Factories. This will inevitably lead to a lot of duplication over time. And the duplication will be spread throughout distant classes or modules and will be difficult to notice and to fix.

Duplication by Data Retrieval Logic Reimplementation

Another problem is how to express the queries we need to run with the help of the Gateways. Each time we need some information from the Gateway, we have to think about what exactly we need. Do we need all the data about a single subject? Do we need only some specific information? Do we want to retrieve a specific group from the database and do the sorting or refined filtering in our programming language? All of these questions need to be addressed each time we retrieve information from the persistence layer through our Gateway. Each time we do this, we will have to come up with a solution. In time, as our application grows, we will be confronted with the same dilemmas in different places of our application. Inadvertently, we will come up with slightly different solutions to the same problems. This not only takes extra time and effort, but also leads to a subtle, mostly very difficult to recognize, duplication. This is the most dangerous type of duplication.

Duplication by Data Persistence Logic Reimplementation

In the previous two paragraphs we talked only about data retrieval. But the Gateway is bidirectional. Our business logic is bidirectional. We have to somehow persist our objects. This again leads to a lot of repetition if we want to implement this logic as needed throughout different modules and classes of our application.


The Main Concepts

Repository for Data Retrieval

A Repository can function in two ways: data retrieval and data persistence.

UMLRepoQuery

When used to retrieve objects from persistence, a Repository will be called with a custom query. This query can be a specific method by name or a more generic method with parameters. The Repository is responsible for providing and implementing these query methods. When such a method is called, the Repository will contact the Gateway to retrieve the raw data from the persistence. The Gateway will provide raw object data (like an array with values). Then the Repository will take this data, do the necessary transformations and call the appropriate Factory methods. The Factories will provide the objects constructed with the data provided by the Repository. The Repository will collect these objects and return them as a set of objects (like an array of objects or a collection object as defined in the Composite Pattern lesson in the Agile Design Patterns course).

Repository for Data Persistence

The second way a Repository can work is to provide the logic needed to be done in order to extract the information from an object and persist it. This can be as simple as serializing the object and sending the serialized data to the Gateway to persist it or as sophisticated as creating arrays of information with all the fields and state of an object.

UMLRepoPersist

When used to persist information, the client class is the one directly communicating with the Factory. Imagine a scenario where a new comment is posted to a blog post. A Comment object is created by our business logic (the Client class) and then sent to the Repository to be persisted. The Repository will persist the objects using the Gateway and optionally cache them in a local in-memory list. Data needs to be transformed, because there are only rare cases when real objects can be directly saved to a persistence system.


Connecting the Dots

The image below is a higher level view on how to integrate the Repository between the Factories, the Gateway and the Client.

UMLRepository

In the center of the schema is our Repository. On the left, is an Interface for the Gateway, an implementation and the persistence itself. On the right, there is an Interface for the Factories and a Factory implementation. Finally, on the top there is the client class.

As can be observed from the direction of the arrows, the dependencies are inverted. The Repository depends only on the abstract interfaces for Factories and Gateways. The Gateway depends on its interface and the persistence it offers. The Factory depends only on its interface. The client depends on the Repository, which is acceptable because the Repository tends to be less concrete than the Client.

HighLevelDesign

Put in perspective, the design described above respects our high-level architecture and the direction of dependencies we want to achieve.


Managing Comments to Blog Posts With a Repository

Now that we’ve seen the theory, it is time for a practical example. Imagine we have a blog where we have Post objects and Comment objects. Comments belong to Posts and we have to find a way to persist them and to retrieve them.

The Comment

We will start with a test that will force us to think about what our Comment object should contain.

class RepositoryTest extends PHPUnit_Framework_TestCase {

	function testACommentHasAllItsComposingParts() {
		$postId = 1;
		$commentAuthor = "Joe";
		$commentAuthorEmail = "joe@gmail.com";
		$commentSubject = "Joe Has an Opinion about the Repository Pattern";
		$commentBody = "I think it is a good idea to use the Repository Pattern to persist and retrieve objects.";

		$comment = new Comment($postId, $commentAuthor, $commentAuthorEmail, $commentSubject, $commentBody);
	}

}

At first glance, a Comment will just be a data object. It may not have any functionality, but that is up to the context of our application to decide. For this example, just assume it is a simple data object, constructed from a set of variables.

class Comment {

}

Just creating an empty class and requiring it in the test makes it pass.

require_once '../Comment.php';

class RepositoryTest extends PHPUnit_Framework_TestCase {

[ ... ]

}

But that’s far from perfect. Our test does not test anything yet. Let’s force ourselves to write all the getters on the Comment class.

function testACommentsHasAllItsComposingParts() {
	$postId = 1;
	$commentAuthor = "Joe";
	$commentAuthorEmail = "joe@gmail.com";
	$commentSubject = "Joe Has an Opinion about the Repository Pattern";
	$commentBody = "I think it is a good idea to use the Repository Pattern to persist and retrieve objects.";

	$comment = new Comment($postId, $commentAuthor, $commentAuthorEmail, $commentSubject, $commentBody);

	$this->assertEquals($postId, $comment->getPostId());
	$this->assertEquals($commentAuthor, $comment->getAuthor());
	$this->assertEquals($commentAuthorEmail, $comment->getAuthorEmail());
	$this->assertEquals($commentSubject, $comment->getSubject());
	$this->assertEquals($commentBody, $comment->getBody());
}

To control the length of the tutorial, I wrote all the assertions at once and we will implement them at once as well. In real life, take them one by one.

class Comment {

	private $postId;
	private $author;
	private $authorEmail;
	private $subject;
	private $body;

	function __construct($postId, $author, $authorEmail, $subject, $body) {
		$this->postId = $postId;
		$this->author = $author;
		$this->authorEmail = $authorEmail;
		$this->subject = $subject;
		$this->body = $body;
	}

	public function getPostId() {
		return $this->postId;
	}

	public function getAuthor() {
		return $this->author;
	}

	public function getAuthorEmail() {
		return $this->authorEmail;
	}

	public function getSubject() {
		return $this->subject;
	}

	public function getBody() {
		return $this->body;
	}

}

Except for the list of private variables, the rest of the code was generated by my IDE, NetBeans, so testing auto-generated code may sometimes be a bit of an overhead. If you are not writing these lines by yourself, feel free to write them directly and don’t bother with tests for setters and constructors. Nevertheless, the test helped us better expose our ideas and better document what our Comment class will contain.

We can also consider these test methods and test classes as our “Client” classes from the schemas.


Our Gateway to Persistence

To keep this example as simple as possible, we will implement only an InMemoryPersistence so that we do not complicate our existence with file systems or databases.

require_once '../InMemoryPersistence.php';

class InMemoryPersistenceTest extends PHPUnit_Framework_TestCase {

	function testItCanPersistAndRetrieveASingleDataArray() {
		$data = array('data');

		$persistence = new InMemoryPersistence();
		$persistence->persist($data);

		$this->assertEquals($data, $persistence->retrieve(0));
	}

}

As usual, we start with the simplest test that could possibly fail and also force us to write some code. This test creates a new InMemoryPersistence object and tries to persist and retrieve an array called data.

require_once __DIR__ . '/Persistence.php';

class InMemoryPersistence implements Persistence {

	private $data = array();

	function persist($data) {
		$this->data = $data;
	}

	function retrieve($id) {
		return $this->data;
	}

}

The simplest code to make it pass is to just keep the incoming $data in a private variable and return it in the retrieve method. The code as it is right now does not care about the $id variable that is passed in. It is the simplest thing that could possibly make the test pass. We also took the liberty of introducing and implementing an interface called Persistence.

interface Persistence {

	function persist($data);
	function retrieve($ids);

}

This interface defines the two methods any Gateway needs to implement: persist() and retrieve(). As you probably already guessed, our Gateway is our InMemoryPersistence class and our physical persistence is the private variable holding our data in memory. But let’s get back to the implementation of this in-memory persistence.

function testItCanPersistSeveralElementsAndRetrieveAnyOfThem() {
	$data1 = array('data1');
	$data2 = array('data2');

	$persistence = new InMemoryPersistence();
	$persistence->persist($data1);
	$persistence->persist($data2);

	$this->assertEquals($data1, $persistence->retrieve(0));
	$this->assertEquals($data2, $persistence->retrieve(1));
}

We added another test. In this one we persist two different data arrays. We expect to be able to retrieve each of them individually.

require_once __DIR__ . '/Persistence.php';

class InMemoryPersistence implements Persistence {
	private $data = array();
	function persist($data) {
		$this->data[] = $data;
	}

	function retrieve($id) {
		return $this->data[$id];
	}
}

The test forced us to slightly alter our code. We now need to add data to our array, not just replace it with the one sent in to persist(). We also need to consider the $id parameter and return the element at that index.

This is enough for our InMemoryPersistence. If needed, we can modify it later.


Our Factory

We have a Client (our tests), a persistence with a Gateway, and Comment objects to persist. The next missing piece is our Factory.

We started our coding with a RepositoryTest file. This test, however, actually created a Comment object. Now we need to create tests to verify if our Factory will be able to create Comment objects. It seems like we had an error in judgment and our test is more likely a test for our upcoming Factory than for our Repository. We can move it into another test file, CommentFactoryTest.

require_once '../Comment.php';

class CommentFactoryTest extends PHPUnit_Framework_TestCase {

	function testACommentsHasAllItsComposingParts() {
		$postId = 1;
		$commentAuthor = "Joe";
		$commentAuthorEmail = "joe@gmail.com";
		$commentSubject = "Joe Has an Opinion about the Repository Pattern";
		$commentBody = "I think it is a good idea to use the Repository Pattern to persist and retrieve objects.";

		$comment = new Comment($postId, $commentAuthor, $commentAuthorEmail, $commentSubject, $commentBody);

		$this->assertEquals($postId, $comment->getPostId());
		$this->assertEquals($commentAuthor, $comment->getAuthor());
		$this->assertEquals($commentAuthorEmail, $comment->getAuthorEmail());
		$this->assertEquals($commentSubject, $comment->getSubject());
		$this->assertEquals($commentBody, $comment->getBody());
	}
}

Now, this test obviously passes. And while it is a correct test, we should consider modifying it. We want to create a Factory object, pass in an array and ask it to create a Comment for us.

require_once '../CommentFactory.php';

class CommentFactoryTest extends PHPUnit_Framework_TestCase {

	function testACommentsHasAllItsComposingParts() {
		$postId = 1;
		$commentAuthor = "Joe";
		$commentAuthorEmail = "joe@gmail.com";
		$commentSubject = "Joe Has an Opinion about the Repository Pattern";
		$commentBody = "I think it is a good idea to use the Repository Pattern to persist and retrieve objects.";

		$commentData = array($postId, $commentAuthor, $commentAuthorEmail, $commentSubject, $commentBody);

		$comment = (new CommentFactory())->make($commentData);

		$this->assertEquals($postId, $comment->getPostId());
		$this->assertEquals($commentAuthor, $comment->getAuthor());
		$this->assertEquals($commentAuthorEmail, $comment->getAuthorEmail());
		$this->assertEquals($commentSubject, $comment->getSubject());
		$this->assertEquals($commentBody, $comment->getBody());
	}
}

We should never name our classes after the design pattern they implement, but Factory and Repository represent more than just the design patterns themselves. I personally have nothing against including these two words in our class names. For the rest of the patterns, however, I still strongly recommend and respect the convention of not naming our classes after the design patterns we use.

This test is just slightly different from the previous one, but it fails. It tries to create a CommentFactory object, and that class does not exist yet. We also try to call a make() method on it with an array containing all the information of a comment. This method is defined in the Factory interface.

interface Factory {
	function make($data);
}

This is a very common Factory interface. It defines the only required method for a factory, the method that actually creates the objects we want.

require_once __DIR__ . '/Factory.php';
require_once __DIR__ . '/Comment.php';

class CommentFactory implements Factory {

	function make($components) {
		return new Comment($components[0], $components[1], $components[2], $components[3], $components[4]);
	}

}

And CommentFactory implements the Factory interface successfully: its make() method takes the $components parameter and creates and returns a new Comment object built from that information.

We will keep our persistence and object creation logic as simple as possible. We can, for this tutorial, safely ignore any error handling, validation and exception throwing. We will stop here with the persistence and object creation implementation.


Using a Repository to Persist Comments

As we’ve seen above, we can use a Repository in two ways. To retrieve information from persistence and also to persist information on the persistence layer. Using TDD it is, most of the time, easier to start with the saving (persisting) part of the logic and then use that existing implementation to test data retrieval.

require_once '../../../vendor/autoload.php';
require_once '../CommentRepository.php';
require_once '../CommentFactory.php';

class RepositoryTest extends PHPUnit_Framework_TestCase {

	protected function tearDown() {
		\Mockery::close();
	}

	function testItCallsThePersistenceWhenAddingAComment() {

		$persistanceGateway = \Mockery::mock('Persistence');
		$commentRepository = new CommentRepository($persistanceGateway);

		$commentData = array(1, 'x', 'x', 'x', 'x');
		$comment = (new CommentFactory())->make($commentData);

		$persistanceGateway->shouldReceive('persist')->once()->with($commentData);

		$commentRepository->add($comment);
	}

}

We use Mockery to mock our persistence and inject that mocked object to the Repository. Then we call add() on the repository. This method has a parameter of type Comment. We expect the persistence to be called with an array of data similar to $commentData.

require_once __DIR__ . '/InMemoryPersistence.php';

class CommentRepository {

	private $persistence;

	function __construct(Persistence $persistence = null) {
		$this->persistence = $persistence ? : new InMemoryPersistence();
	}

	function add(Comment $comment) {
		$this->persistence->persist(array(
			$comment->getPostId(),
			$comment->getAuthor(),
			$comment->getAuthorEmail(),
			$comment->getSubject(),
			$comment->getBody()
		));
	}

}

As you can see, the add() method is quite smart. It encapsulates the knowledge of how to transform a PHP object into a plain array usable by the persistence. Remember, our persistence gateway is usually a general object for all of our data. It can and will persist all the data of our application, so sending objects to it would make it do too much: both conversion and effective persistence.

When you have an InMemoryPersistence class like we do, it is very fast. We can use it as an alternative to mocking the gateway.

function testAPersistedCommentCanBeRetrievedFromTheGateway() {

	$persistanceGateway = new InMemoryPersistence();
	$commentRepository = new CommentRepository($persistanceGateway);

	$commentData = array(1, 'x', 'x', 'x', 'x');
	$comment = (new CommentFactory())->make($commentData);

	$commentRepository->add($comment);

	$this->assertEquals($commentData, $persistanceGateway->retrieve(0));
}

Of course if you do not have an in-memory implementation of your persistence, mocking is the only reasonable way to go. Otherwise your test will be just too slow to be practical.

function testItCanAddMultipleCommentsAtOnce() {

	$persistanceGateway = \Mockery::mock('Persistence');
	$commentRepository = new CommentRepository($persistanceGateway);

	$commentData1 = array(1, 'x', 'x', 'x', 'x');
	$comment1 = (new CommentFactory())->make($commentData1);
	$commentData2 = array(2, 'y', 'y', 'y', 'y');
	$comment2 = (new CommentFactory())->make($commentData2);

	$persistanceGateway->shouldReceive('persist')->once()->with($commentData1);
	$persistanceGateway->shouldReceive('persist')->once()->with($commentData2);

	$commentRepository->add(array($comment1, $comment2));
}

Our next logical step is to implement a way to add several comments at once. Your project may not require this functionality and it is not something required by the pattern. In fact, the Repository Pattern only says that it will provide a custom query and persistence language for our business logic. So if our business logic feels the need to add several comments at once, the Repository is the place where that logic should reside.

function add($commentData) {
	if (is_array($commentData))
		foreach ($commentData as $comment)
			$this->persistence->persist(array(
				$comment->getPostId(),
				$comment->getAuthor(),
				$comment->getAuthorEmail(),
				$comment->getSubject(),
				$comment->getBody()
			));
	else
		$this->persistence->persist(array(
			$commentData->getPostId(),
			$commentData->getAuthor(),
			$commentData->getAuthorEmail(),
			$commentData->getSubject(),
			$commentData->getBody()
		));
}

And the simplest way to make the test pass is to just verify if the parameter we are getting is an array or not. If it is an array, we will cycle through each element and call the persistence with the array we generate from one single Comment object. And while this code is syntactically correct and makes the test pass, it introduces a slight duplication that we can get rid of quite easily.

function add($commentData) {
	if (is_array($commentData))
		foreach ($commentData as $comment)
			$this->addOne($comment);
	else
		$this->addOne($commentData);
}

private function addOne(Comment $comment) {
	$this->persistence->persist(array(
		$comment->getPostId(),
		$comment->getAuthor(),
		$comment->getAuthorEmail(),
		$comment->getSubject(),
		$comment->getBody()
	));
}

When all the tests are green, it is always time to refactor before we continue with the next failing test. And we did just that with the add() method. We extracted the addition of a single comment into a private method and called it from two different places in our public add() method. This not only reduced duplication, but also opened up the possibility of making the addOne() method public and letting the business logic decide whether it wants to add one or several comments at a time. That would lead to a different implementation of our Repository, with separate addOne() and addMany() methods, as sketched below. It would be a perfectly legitimate implementation of the Repository Pattern.
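To make that alternative concrete, here is a minimal sketch of such a Repository. The class name and the addOne() and addMany() method names are hypothetical; the persistence logic is the same as above:

class CommentRepositoryAlternative {

	private $persistence;

	function __construct(Persistence $persistence = null) {
		$this->persistence = $persistence ? : new InMemoryPersistence();
	}

	// Persist a single Comment object.
	public function addOne(Comment $comment) {
		$this->persistence->persist(array(
			$comment->getPostId(),
			$comment->getAuthor(),
			$comment->getAuthorEmail(),
			$comment->getSubject(),
			$comment->getBody()
		));
	}

	// Persist an array of Comment objects, one by one.
	public function addMany(array $comments) {
		foreach ($comments as $comment)
			$this->addOne($comment);
	}

}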


Retrieving Comments With Our Repository

A Repository provides a custom query language for the business logic, so the names and functionality of a Repository’s query methods are largely up to the requirements of that business logic. You build your Repository alongside your business logic, adding custom query methods as you need them. However, there are at least one or two methods that you will find on almost any Repository.

function testItCanFindAllComments() {
	$repository = new CommentRepository();

	$commentData1 = array(1, 'x', 'x', 'x', 'x');
	$comment1 = (new CommentFactory())->make($commentData1);
	$commentData2 = array(2, 'y', 'y', 'y', 'y');
	$comment2 = (new CommentFactory())->make($commentData2);

	$repository->add($comment1);
	$repository->add($comment2);

	$this->assertEquals(array($comment1, $comment2), $repository->findAll());
}

The first such method is called findAll(). This should return all the objects the repository is responsible for, in our case Comments. The test is simple: we add one comment, then another, and finally we call findAll() and expect a list containing both comments. This is, however, not achievable with our InMemoryPersistence as it stands. A small update is required.

function retrieveAll() {
	return $this->data;
}

That’s it. We added a retrieveAll() method which just returns the whole $data array from the class. Simple and effective. It’s time to implement findAll() on the CommentRepository now.

function findAll() {
	$allCommentsData = $this->persistence->retrieveAll();
	$comments = array();
	foreach ($allCommentsData as $commentData)
		$comments[] = $this->commentFactory->make($commentData);
	return $comments;
}

findAll() will call the retrieveAll() method on our persistence. That method provides a raw array of data. findAll() will cycle through each element and pass the data on to the Factory as needed. The Factory will provide one Comment at a time. An array of these comments is built up and returned at the end of findAll(). Simple and effective.
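Note that findAll() relies on a $this->commentFactory property, while the constructor we wrote earlier only sets the persistence. A minimal sketch of the extended constructor, written in the same optional-parameter style as before (this is my assumption, not code from the original series), could look like this:

	private $commentFactory;

	function __construct(Persistence $persistence = null, Factory $commentFactory = null) {
		$this->persistence = $persistence ? : new InMemoryPersistence();
		$this->commentFactory = $commentFactory ? : new CommentFactory(); // factory used by findAll()
	}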

Another common method you will find on repositories is to search for a specific object or group of objects based on their characteristic key. For example, all of our comments are connected to a blog post by a $postId internal variable. I can imagine that in our blog’s business logic we would almost always want to find all the comments related to a blog post when that blog post is displayed. So a method called findByPostId($id) sounds reasonable to me.

function testItCanFindCommentsByBlogPostId() {
	$repository = new CommentRepository();

	$commentData1 = array(1, 'x', 'x', 'x', 'x');
	$comment1 = (new CommentFactory())->make($commentData1);
	$commentData2 = array(1, 'y', 'y', 'y', 'y');
	$comment2 = (new CommentFactory())->make($commentData2);
	$commentData3 = array(3, 'y', 'y', 'y', 'y');
	$comment3 = (new CommentFactory())->make($commentData3);

	$repository->add(array($comment1, $comment2));
	$repository->add($comment3);

	$this->assertEquals(array($comment1, $comment2), $repository->findByPostId(1));
}

We just create three simple comments. The first two have the same $postId = 1, and the third one has $postId = 3. We add all of them to the repository and then expect an array with the first two when we call findByPostId() for $postId = 1.

function findByPostId($postId) {
	return array_filter($this->findAll(), function ($comment) use ($postId){
		return $comment->getPostId() == $postId;
	});
}

The implementation couldn’t be simpler. We find all the comments using our already implemented findAll() method and we filter the array. We have no way to ask the persistence to do the filtering for us, so we will do it here. The code will query each Comment object and compare its $postId with the one we sent in as parameter. Great. The test passes. But I feel we missed something.

function testItCanFindCommentsByBlogPostId() {
	$repository = new CommentRepository();

	$commentData1 = array(1, 'x', 'x', 'x', 'x');
	$comment1 = (new CommentFactory())->make($commentData1);
	$commentData2 = array(1, 'y', 'y', 'y', 'y');
	$comment2 = (new CommentFactory())->make($commentData2);
	$commentData3 = array(3, 'y', 'y', 'y', 'y');
	$comment3 = (new CommentFactory())->make($commentData3);

	$repository->add(array($comment1, $comment2));
	$repository->add($comment3);

	$this->assertEquals(array($comment1, $comment2), $repository->findByPostId(1));
	$this->assertEquals(array($comment3), $repository->findByPostId(3));
}

Adding a second assertion to obtain the third comment with the findByPostId() method reveals our mistake. Whenever you can easily test extra paths or cases, like in our case with a simple extra assertion, you should. These simple extra assertions or test methods can reveal hidden problems. Like in our case: array_filter() does not reindex the resulting array. And while we have an array with the correct elements, the indexes are messed up.

1) RepositoryTest::testItCanFindCommentsByBlogPostId
Failed asserting that two arrays are equal.
--- Expected
+++ Actual
@@ @@
 Array (
-    0 => Comment Object (...)
+    2 => Comment Object (...)
 )

Now, you may consider this a shortcoming of PHPUnit or a shortcoming of your business logic. I tend to be rigorous with array indexes because I burned my hands with them a few times. So we should consider the error a problem with our logic in the CommentRepository.

function findByPostId($postId) {
	return array_values(
		array_filter($this->findAll(), function ($comment) use ($postId) {
			return $comment->getPostId() == $postId;
		})
	);
}

Yep. That simple. We just run the result through array_values() before returning it. It will nicely reindex our array. Mission accomplished.


Final Thoughts

And that’s mission accomplished for our Repository also. We have a class usable by any other business logic class which offers an easy way to persist and retrieve objects. It also decouples the business logic from the factories and data persistence gateways. It reduced logic duplication and significantly simplifies the persistence and retrieval operations for our comments.

Remember, this design pattern can be used for all types of lists, and as you start using it you will see its usefulness. Basically, whenever you have to work with several objects of the same type, you should consider introducing a Repository for them. Repositories are specialized by object type, not general-purpose. So for a blog application, you may have distinct repositories for blog posts, for comments, for users, for user configurations, for themes, for designs, or for anything else you may have multiple instances of.

And before concluding, note that a Repository may keep its own list of objects and do local caching of those objects. If an object cannot be found in the local list, we retrieve it from the persistence; otherwise, we serve it from our list. If used with caching, a Repository can be successfully combined with the Singleton Design Pattern.
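As a rough illustration of that caching idea, here is a sketch with a hypothetical findById() method, assuming the Gateway can retrieve a single record by its id the way our InMemoryPersistence does by index:

class CachedCommentRepository {

	private $persistence;
	private $commentFactory;
	private $cache = array();

	function __construct(Persistence $persistence, Factory $commentFactory) {
		$this->persistence = $persistence;
		$this->commentFactory = $commentFactory;
	}

	// Serve the Comment from the local list when possible, hit the Gateway otherwise.
	public function findById($id) {
		if (!isset($this->cache[$id])) {
			$rawData = $this->persistence->retrieve($id);
			$this->cache[$id] = $this->commentFactory->make($rawData);
		}
		return $this->cache[$id];
	}

}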

As usual, thank you for your time and I sincerely hope I taught you something new today.


Getting Into Ember.js: Part 5


In part 3 of my Ember series, I showed you how you can interact with data using Ember.Object, Ember's main base class, to create objects that define the methods and properties that act as a wrapper for your data. Here's an example:

App.Item = Ember.Object.extend();

App.Item.reopenClass({
  all: function() {
    return $.getJSON('http://api.ihackernews.com/page?format=jsonp&callback=?').then(function(response) {
      var items = [];

      response.items.forEach(function(item) {
        items.push(App.Item.create(item));
      });

      return items;
    });
  }
});

In this code, we subclass Ember.Object using extend() and create a user-defined method called all() that makes a request to Hacker News for JSON-formatted results of its news feed.

While this method definitely works and is even promoted by Ember-based Discourse as their way of doing it, it does require that you flesh out and expose the API that you'd like to reference the data with. Most MVC frameworks tend to include ORM-like capabilities so if you're used to Rails, for example, you'd be very familiar with the benefits of ActiveRecord which helps to manage and do the heavy lifting of interacting with data.

The Ember team has wanted to do the same thing, but their main focus has been to get a stable v1 release of the core framework out first, to ensure that complementary components could be built on a stable foundation. I applaud this, and I previously mentioned that you should hold off on using Ember Data for exactly this reason.

Now that Ember RC8 is out and v1 seems to be coming around the corner, I felt it was a good time to start exploring Ember Data and see what it offers.

Ember Data

The first thing I want to stress is that Ember Data is a work in progress and in much the same way as Ember started, will probably see a number of breaking API changes over the next several months. While that's not ideal, it's important to begin looking at how you would structure your apps using the library. To give you a good description of what Ember Data provides, I've copied in the well-written description from the GitHub page:

Ember Data is a library for loading data from a persistence layer (such as a JSON API), mapping this data to a set of models within your client application, updating those models, then saving the changes back to a persistence layer. It provides many of the facilities you'd find in server-side ORMs like ActiveRecord, but is designed specifically for the unique environment of JavaScript in the browser.

So as I mentioned, it's meant to abstract out a lot of the complexities of working with data.

Using Ember Data

If you've read my previous tutorials, you should be very familiar with how to set up a page to leverage Ember. If you haven't done so, you should go to the Ember.js home page and grab the Starter Kit. You can find it right in the middle of the page as it's displayed via a big button. This will give you the most up-to-date version of Ember which you'll need in order to work with Ember Data. The easiest way to get a downloadable version of Ember Data is to go to the API docs for models, scroll to the bottom and download the library. Additionally, you can go to the builds page to pull down the latest builds of any Ember-related library.

Adding Ember Data is as simple as adding another JavaScript file to the mix like this:

<script src="js/libs/jquery-1.9.1.js"></script>
<script src="js/libs/handlebars-1.0.0.js"></script>
<script src="js/libs/ember-1.0.0-rc.8.js"></script>
<script src="js/libs/ember-data.js"></script>
<script src="js/app.js"></script>

This now gives you access to Ember Data's objects, methods and properties.

Without any configuration, Ember Data can load and save records and relationships served via a RESTful JSON API, provided it follows certain conventions.

Defining a Store

Ember uses a special object called a store to load models and retrieve data; the store is based on Ember Data's DS.Store class. This is how you'd define a new store:

App.Store = DS.Store.extend({
...
});

If you remember from my previous articles, "App" is just a namespace created for the application level objects, methods and properties for the application. While it's not a reserved word in Ember, I would urge you to use the same name as almost every tutorial and demo I've seen uses it for consistency.

The store you create will hold the models you create and will serve as the interface with the server you define in your adapter. By default, Ember Data creates and associates to your store a REST adapter based off the DS.RestAdapter class. If you simply defined the code above, you would have an adapter associated to it by default. Ember magic at its finest. You can also use a Fixture adapter as well if you're working with in-memory-based data (for example, JSON you're loading from code) but since this is about making API calls, the REST adapter is more appropriate.
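For completeness, here is a minimal sketch of what a Fixture adapter setup might look like. This is an assumption on my part: it relies on DS.FixtureAdapter and per-model FIXTURES arrays, which the Ember Data revisions of this era exposed, so double-check the docs for the build you downloaded (it also uses the App.Post model we'll define in a moment):

App.Store = DS.Store.extend({
  adapter: DS.FixtureAdapter
});

App.Post = DS.Model.extend({
  title: DS.attr('string')
});

// In-memory records served by the fixture adapter instead of an Ajax call.
App.Post.FIXTURES = [
  { id: 1, title: 'A fixture-backed post' },
  { id: 2, title: 'Another fixture-backed post' }
];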

You can also define your own adapter for those situations where you need more custom control over interfacing with a server by using the adapter property within your store declaration:

App.Store = DS.Store.extend({
  adapter: 'App.MyCustomAdapter'
});

Defining Models

The code I listed at the top of this tutorial was an example of how to use Ember.Object to create the models for your application. Things change a bit when you define models via Ember Data. Ember Data provides another object called DS.Model which you subclass for every model you want to create. For example, taking the code from above:

App.Item = Ember.Object.extend();

It would now look like this:

App.Item = DS.Model.extend();

Not much of a difference in terms of appearance but a big difference in terms of functionality since you now have access to the capabilities of the REST adapter as well as Ember Data's built-in relationships like one-to-one, one-to-many and more. The main benefit, though, is that Ember Data provides the hooks for interacting with your data via your models as opposed to you having to roll your own. Referencing the code from above again:

App.Item.reopenClass({
  all: function() {
    return $.getJSON('http://api.ihackernews.com/page?format=jsonp&callback=?').then(function(response) {
      var items = [];

      response.items.forEach(function(item) {
        items.push(App.Item.create(item));
      });

      return items;
    });
  }
});

While I had to create my own method to return all of the results from my JSON call, Ember Data provides a find() method which does exactly this and also serves to filter down the results. So in essence, all I have to do is make the following call to return all of my records:

App.Item.find();

The find() method will send an Ajax request to the URL.

This is exactly what attracts so many developers to Ember; the forethought given to making things easier.

One thing to keep in mind is that it's important to define within the model the attributes you plan on using later on (e.g. in your templates). This is easy to do:

App.Post = DS.Model.extend({
     title: DS.attr('string')
});

In my demo app, I want to use the title property returned via JSON, so, using the attr() method, I specify which attributes my model has at its disposal.

One thing I want to mention is that Ember Data is incredibly picky about the structure of the JSON returned. Because Ember leverages directory structures for identifying specific parts of your applications (remember the naming conventions we discussed in my first Ember article?), it makes certain assumptions about the way that the JSON data is structured. It requires that there be a named root which will be used to identify the data to be returned. Here's what I mean:

{
  'posts': [{
    'id': 1,
    'title': 'A friend of mine just posted this.',
    'url': 'http://i.imgur.com/9pw20NY.jpg'
  }]
}

If you had defined it like this:

{
  {
    'id': '1',
    'title': 'A friend of mine just posted this.',
    'url': 'http://i.imgur.com/9pw20NY.jpg'
  },
  {
    'id': '2',
    'title': 'A friend of mine just posted this.',
    'url': 'http://i.imgur.com/9pw20NY.jpg'
  }
}

Ember Data would've totally balked and thrown the following error:

Your server returned a hash with the key id but you have no mapping for it.

The reason is that since the model is called "App.Post", Ember Data is expecting to find a URL called "posts" from which it will pull the data from. So if I defined my store as such:

App.Store = DS.Store.extend({
  url: 'http://emberdata.local' 
});

and my model like this:

App.Post = DS.Model.extend({
     title: DS.attr('string')
});

Ember Data would assume that the Ajax request made by the find() method would look like this:

http://emberdata.local/posts

And if you were making a request for a specific ID (like find(12)), it would look like this:

http://emberdata.local/posts/12

This issue drove me batty, but a quick search turned up plenty of discussions about it. If you can't structure your JSON results this way, then you'll have to create a custom adapter to massage the results and properly serialize them before you can use them. I'm not covering that here, but I plan to explore more of that soon.

The Demo App

I purposely wanted to keep this tutorial simple because I know Ember Data is changing and I wanted to give a brief overview of what it provided. So I whipped up a quick demo app that uses Ember Data to pull JSON data from my own local server. Let's look at the code.

First I create my application namespace (which you would do for any Ember app):

// Create our Application
App = Ember.Application.create({});

Next, I define my data store and I declare the url from where the model will pull the data from:

App.Store = DS.Store.extend({
  url: 'http://emberdata.local'
});

In the model, I specify the attribute: title, which I'll use in my template later on:

// Our model
App.Post = DS.Model.extend({
     title: DS.attr('string')
});

Lastly, I associate the model to the route via the model hook. Notice that I'm using the predefined Ember Data method find() to immediately pull back my JSON data as soon as the app is started:

// Our default route. 
App.IndexRoute = Ember.Route.extend({
  model: function() {
    return App.Post.find();
  }
});

In the template for the root page (index), I use the #each Handlebars directive to look through the results of my JSON data and render the title of each of my posts:

<script type="text/x-handlebars" data-template-name="index">
  <h2>My Posts</h2>
  <ul>
    {{#each post in model}}
      <li>{{post.title}}</li>
    {{/each}}
  </ul>
</script>

That's it! No Ajax call to make or special methods to work with my data. Ember Data took care of making the XHR call and storing the data.

Fin

Now, this is incredibly simplistic and I don't want to lead you to believe it's all unicorns and puppy dogs. As I went through the process of working with Ember Data, I found myself wanting to go back to using Ember.Object where I had more control. But I also realize that a lot of work is going on to improve Ember Data, especially in the way it manages diverse data results. So it's important to at least kickstart the process of understanding how this thing works and even offering constructive feedback to the team.

So I urge you to jump in and begin tinkering with it, especially those that have a very strong ORM background and could help shape the direction of Ember Data. Now is the best time to do that.

WebGL With Three.js: Textures & Particles


Since its introduction, 3D graphics in the browser has been a popular topic. But if you were to create your apps using plain old WebGL, it would take a very long time. Now, though, we have some pretty useful libraries that we can take advantage of, like Three.js. So in this series I will show you how to create stunning 3D experiences for the browser.

I do expect you to have a basic understanding of 3D space before you start reading this tutorial, as I won’t be explaining things like coordinates, vectors etc.


Preparation

We will start with the code from the previous part of this series. Also, grab the assets I provided and put them in the same folder as your app. Now, since we will be using images here, you will have to put your app on a static server (it may be a local one), because unless you start the browser with file access enabled (for example, using the --allow-file-access-from-files flag in Chrome), CORS will not let you load the images from the file system. That’s all you need to do before proceeding.


Step 1: Loading the Texture

If you’ve ever been so bored that you tried creating something using pure OpenGL, you probably remember how much of a pain it is to load a texture. Luckily, Three.js comes with a nice function that will load and set up the texture for us. Add this line before the definition of our cube’s material:

var cubeTexture = THREE.ImageUtils.loadTexture('./box.png');

It’s really all you have to do in order to have your texture loaded.

In a real-world app you would have to preload the texture like any normal image and show the users some fancy loading bar to let them know that you are loading (Three.js will use the cached image then).
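As a minimal sketch of that preloading idea (plain JavaScript, nothing Three.js-specific, and assuming the same box.png asset used in this tutorial):

// Preload the image so the browser caches it before we build the material.
var img = new Image();
img.onload = function () {
	// The file is cached now, so loadTexture() can reuse it immediately.
	var cubeTexture = THREE.ImageUtils.loadTexture('./box.png');
	// ...hide your loading bar and create the cube's material here
};
img.src = './box.png';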


Step 2: Painting the Cube

Now we will apply the texture to our cube. This is also easy, you just need to replace the color definition in the cube’s material to look like this:

var cubeMaterial = new THREE.MeshLambertMaterial({ map: cubeTexture });

The map attribute sets the texture. Now you can open the browser and you should see a rotating, textured cube:

textured_cube

You can also colorize the texture; just add the color definition to the material’s options, like this:

var cubeMaterial = new THREE.MeshLambertMaterial({ map: cubeTexture, color: 0x28c0ec });

And now the cube turns blue:

textured_colorized_cube

This way you can have multiple different objects with the same texture if only the color changes.


Step 3: Multiple Materials

You can set different materials for every face of the cube. To achieve that, you have to change the whole material’s definition. First, define the materials array. Each element in the array will correspond to the material of one face. They go in this order: right, left, top, bottom, front and back:

var materials = [];
materials.push(new THREE.MeshLambertMaterial({ map: cubeTexture, color: 0xff0000 })); // right face
materials.push(new THREE.MeshLambertMaterial({ map: cubeTexture, color: 0xffff00 })); // left face
materials.push(new THREE.MeshLambertMaterial({ map: cubeTexture, color: 0xffffff })); // top face
materials.push(new THREE.MeshLambertMaterial({ map: cubeTexture, color: 0x00ffff })); // bottom face
materials.push(new THREE.MeshLambertMaterial({ map: cubeTexture, color: 0x0000ff })); // front face
materials.push(new THREE.MeshLambertMaterial({ map: cubeTexture, color: 0xff00ff })); // back face

As you can see, each face has its own material, so you can set different textures, colors and other attributes for each one. Next, change the type of the cube’s material to THREE.MeshFaceMaterial:

var cubeMaterial = new THREE.MeshFaceMaterial(materials);

You only need to pass the materials array as the parameter. In the browser you should see that each side of the cube has a different color:

each_side_different

Step 4: Particles!

Let’s say you want to create an effect of spinning snowflakes in your app. If you were to render each snowflake as a mesh, you would get very low fps. That’s where particles come into play. They are way less complicated, and drawing them as a whole particle system makes them really efficient.

Start with creating a geometry for our particles:

var particles = new THREE.Geometry;

THREE.Geometry is a base geometry object, without any shape. Now we have to define the position of each particle in the system. Let it be completely random:

for (var p = 0; p < 2000; p++) {
	var particle = new THREE.Vector3(Math.random() * 500 - 250, Math.random() * 500 - 250, Math.random() * 500 - 250);
	particles.vertices.push(particle);
}

This loop will create 2000 randomly placed particles and put them all in the geometry. Next, you have to define particles’ material:

var particleMaterial = new THREE.ParticleBasicMaterial({ color: 0xeeeeee, size: 2 });

Notice that we are using THREE.ParticleBasicMaterial, which is only for particles. In options we only define the color and the size of each particle. Finally, you can create the particle system and add it to the scene:

var particleSystem = new THREE.ParticleSystem(particles, particleMaterial);

scene.add(particleSystem);

Now, to make the scene look better let’s rotate the particles in the direction opposite to the one that the cube is rotating in (change the render function to look like this):

function render() {
	requestAnimationFrame(render);
	var delta = clock.getDelta();
	cube.rotation.y -= delta;
	particleSystem.rotation.y += delta;
	renderer.render(scene, camera);
}

We moved clock.getDelta() into a variable, because if you used it like this:

cube.rotation.y -= clock.getDelta();
particleSystem.rotation.y += clock.getDelta();

The particle system would not rotate, because the second call would return a number close to zero (remember that it returns the time elapsed since the last call).

Now open up the browser and you should see a cube and particles rotating:

cube_with_particles

Let’s combine both things you’ve learned in this tutorial and turn those ugly white squares into real snowflakes. First, load the snowflake texture:

var particleTexture = THREE.ImageUtils.loadTexture('./snowflake.png');

Now, change the particles’ material to use the texture. Also, enable transparency and make the particles bigger so we can see the shape:

var particleMaterial = new THREE.ParticleBasicMaterial({ map: particleTexture, transparent: true, size: 5 });

If you open the browser you should see some nice snowflakes flowing around the cube:

particles_snowflakes

Step 5: Smoke

The smoke effect is pretty easy to achieve and it looks nice. Start by creating the geometry, just like with the snowflakes:

var smokeParticles = new THREE.Geometry;
for (var i = 0; i < 300; i++) {
	var particle = new THREE.Vector3(Math.random() * 32 - 16, Math.random() * 230, Math.random() * 32 - 16);
	smokeParticles.vertices.push(particle);
}

The only difference here is that we are choosing the position from a rectangular prism with dimensions 32x32x230. Now, let’s load the texture and define the material:

var smokeTexture = THREE.ImageUtils.loadTexture('./smoke.png');
var smokeMaterial = new THREE.ParticleBasicMaterial({ map: smokeTexture, transparent: true, blending: THREE.AdditiveBlending, size: 50, color: 0x111111 });

In the material definition, there is a blending option. It tells the renderer how it should render one object on top of another. With THREE.AdditiveBlending, overlapping color values will be added together, which results in brighter smoke in areas with higher particle density. We also set the color to almost black, so the smoke looks more natural.

Finally, create the particle system, move it a bit to the left and add it to the scene:

var smoke = new THREE.ParticleSystem(smokeParticles, smokeMaterial);
smoke.sortParticles = true;
smoke.position.x = -150;

scene.add(smoke);

You also have to set smoke.sortParticles to true. When it’s false the background of the sprite may be drawn as black. If you open the browser you should see a still pillar of smoke next to the cube:

smoke_still

To animate the smoke we have to loop through all of the particles and move them up a bit. Add this code to the render function:

var particleCount = smokeParticles.vertices.length;
while (particleCount--) {
	var particle = smokeParticles.vertices[particleCount];
	particle.y += delta * 50;
	if (particle.y >= 230) {
		particle.y = Math.random() * 16;
		particle.x = Math.random() * 32 - 16;
		particle.z = Math.random() * 32 - 16;
	}
}
smokeParticles.__dirtyVertices = true;

In the loop we add delta * 50 to the y position of each particle. Next we check if the particle is higher than 230; if so, we randomly choose its new position somewhere at the bottom of the smoke pillar. Finally, the most important thing: setting the geometry’s __dirtyVertices flag to true.

To improve performance, Three.js caches objects to avoid rebuilding all of the WebGL calls every frame, so if we change something in the geometry of an object we have to let the renderer know that it has changed. Basically, the __dirtyVertices flag marks the geometry’s vertices as changed so that the renderer re-sends them to the GPU.

If you open the browser now you should see a smoothly animated smoke next to the cube.


Conclusion

In this tutorial you’ve learned how to use textures and particles. As before, don’t be afraid to experiment a bit with your app. If you have problems take a look at the documentation. In the next article I will teach you how to load models and animate them.

Interview With Jonathan Snook


I've met many web developers over the years and the common theme is that they tend to specialize in a specific aspect of web development. They're either designers, JavaScript coders, server-side experts or perhaps a tiny bit of all of them. Rarely do I meet someone who is incredibly well-versed in the full-stack having an amazing design acumen and being able to take a vision and bring it to life, front to back.

Jonathan Snook is one of those rare breeds and also an influencer in the web development world. His skills have made him a sought after speaker and writer and afforded great opportunities at companies like Yahoo! and Shopify. He's now venturing into product management and we catch up with him to see how that's going and his advice for anyone looking to jump into that role.


Q Let's start with the usual. Could you give us a quick intro about yourself?

Sure thing. My name is Jonathan Snook and I'm a web developer based in Ottawa, Canada. I've been developing on the web since before Netscape hit 1.0. I've had the pleasure of working on hundreds of projects both professionally and personally. I also speak at conferences and put on workshops and have authored or co-authored three books to date, the most recent of which is Scalable and Modular Architecture for CSS (or SMACSS for short). These days, I'm a product manager at Shopify based here in Ottawa.


Q You've transitioned to a product management role recently. What prompted the switch?

Opportunity and misunderstanding! Until earlier this year, Shopify never had a product team. I was working on the design team focused on our core product. As a company, we have traditionally had a very egalitarian approach that allowed anybody to work on an idea. Shopify was growing quickly and really needed product ownership to keep the team and product focused. A product team was being assembled. If you're picturing it being like the Justice League, it's just like that.

The role, as it was described to me, sounded much like the work I was doing as a designer. Talk to customers, the support team, and other stakeholders to evaluate which problems we should be solving and testing our work to ensure that we were solving our problems well. This sounded fantastic and I jumped at the opportunity. I misunderstood just how much work it really was to define a solid product direction and be able to communicate that effectively both within the company and out. As a result, I've not done nearly as much hands-on development as I expected I'd be able to continue to do.


Q How has the new role affected your skillset? Are you still coding or are you losing your edge?

Shifting into product management has meant learning new skills. I've been doing a lot of reading. I've been researching what makes a good product manager. I've been researching what needs to happen for a team (or multiple teams, really) to build a good product. It's been a great opportunity for me to grow and has been very exciting.

I'm still coding when I can but not nearly as much as I used to. However, I still read the blogs and twitter posts. I try to stay on top of the industry, and I still get to speak and attend great conferences where I can expand my knowledge. I still participate on pull requests and technical discussions. I still feel like I've "got it", at least from a technical conceptual level. Actually spending a day writing code, on the other hand, might prove a little harder. Just don't tell anybody that!


Q Has the new role changed the way you work with your development team now that you're on the other side of the equation? If so, what are the good and bad parts of the interactions?

I've been fortunate to have a well-rounded career having done design, front-end development, and back-end development. One of the advantages to having this breadth of skills is the ability to deeply understand the requirements at all levels. I'd be lost without the ability to understand the design and technical hurdles. I wouldn't be able to engage my co-workers on the same level. They're very smart people that can code circles around me but at least I can understand what they're doing and why they're doing it the way they are. I think this is very helpful.

On the bad side, it's not anything you don't see come out of any team. People have different ideas of what we should be focusing on. Sometimes I don't communicate the vision and direction well enough and that can create confusion and conflict. Those are skills I'm working to improve upon.


Q How do you balance out the desire of developers to implement the new shiny features or technologies versus managing the realistic goals the product?

At Shopify, we're dealing with a 7 year old codebase. The team often wants to work on refactoring things instead of the shiny new features. I'm the one who wants to implement the new shiny features.

Of course, with all things, balance is key. One of the things that I liked when I worked on the design team at Yahoo! was the regular maintenance cycle that was built into their process. Clean up files, fix naming, get rid of cruft. As Shopify grows, we know we need to continue to keep this balance. I think we're on the right track, even if sometimes we disagree when we should be doing feature development or should be doing refactoring and maintenance.


Q I know a lot of devs that would eventually like to shift to product management. Could you tell us about your transition and the things you feel would help others transition successfully?

Even though we had product managers at Yahoo!, I rarely interacted with them. The role was largely foreign to me until I decided to jump in head first.

If you want to be a good product manager for a technology company then I think a well-rounded background is good to have. Then again, I think that for nearly any role you might take. (And I believe that working for a web agency is a great way to gain that experience. Although, I'm probably biased since that is the path I took.) A good product manager needs to be able to communicate well. A good product manager needs to have empathy. At Shopify, for example, I run a store. In fact, I sell my book, SMACSS, on Shopify. This helps me understand some of the problems that our customers face every day. I don't think I could manage a product I didn't believe in.

For those looking to transition from development to product management, having that passion for the product is going to be key. For me, it was an opportunity to have a bigger picture of the entire ecosystem and to have more sway on the direction across the entire system. I wanted this because I want Shopify to be amazing. I want it to be something that people enjoy using every day.

If you want to become a product manager, don't let the word "manager" scare you. Don't worry about losing your skills. The industry changes fast but not that fast. It just feels like it does. If I decide that I want to leave product management and get back into coding full-time, I feel confident that it would be a reasonably easy transition back in. (Then again, it's only been 6 months. We'll see if I think the same thing in 2 years.)


Q SMACSS is your baby. What is it trying to solve that CSS frameworks don't already do?

Frameworks don't code your entire site. Take any framework and you still have to add your own code on top of it. This was the problem I was trying to solve: what problems are the frameworks trying to solve, and how is the code you're writing going to fit in with everything else?

That's why SMACSS is written the way it is: it's not a framework. It's meant to describe a process. It describes a way of architecting a site. It's a documentation of the learning process I went through in building large projects with large teams.


Q In terms of real-world development, how does SMACSS adapt to the dynamic needs of UI & UX development?

SMACSS came out of real-world development. It's not pie-in-the-sky thinking and it's not a lone wolf approach. It's an amalgamation of many ideas that were already floating around.

As an industry, we've been seeing more and more designers and developers approach site design as a modular system instead of as a series of brochure-style pages that don't change. The modular approach means that parts can be moved around. The more autonomous the parts, the easier it is to move them and add new ones or remove others.

SMACSS, being a scalable and modular architecture for CSS, is catered to the dynamic needs of UI and UX development.


Q Since SMACSS outlines a guideline for structuring your CSS, how viable is it for projects that are already in progress? At what point in the development process is SMACSS viable?

Of course, it's always easier to be able to write everything from scratch but that doesn't always happen. I had that privilege at Yahoo!. But coming into Shopify, I was contributing to an existing project that had already been under development for some time. "If it ain't broke, don't fix it" is a familiar mantra but refactoring should be something that projects make time for. Refactoring removes the technical debt that has built up allowing for faster development for new features. As they say, "there's no time like the present" to begin implementing a modular approach to a project. Just do it one piece at a time. That's the approach we took at Shopify and one we continue to take.


Q Other CSS frameworks tend to specify their own way of doing things. Is SMACSS workable in a scenario where existing frameworks are dictating process?

It depends on the process! When I wrote SMACSS, I wanted to present a number of concepts that could be taken in part or as a whole since that's the way I am when it comes to development. I'm unlikely to take someone else's project wholesale. I'm going to take the pieces that work for me and leave the rest.


Q Last question, what the heck happened to the Snitter Twitter Client?!?

Have you seen the Twitter client ecosystem?! Yeah, I think I'm okay having let it die. I'm happy with the small success it had (which wasn't really that much to begin with) and enjoyed the process of building the app. Alas, my time was better focused on other projects.


Thank You Jonathan

Jonathan, thank you for taking the time to chat with us and for your great advice on transitioning to product management. If you'd like to learn more about Jonathan be sure to visit his blog and follow him on Twitter. You can also find out more about SMACSS at the project site.

Recently in Web Development Nov 2013


We used to have an awesome series called "Recently in Web Development" which listed out cool happenings around the web development industry. It touched on interesting frameworks, tools, articles and tutorials, helping to organize information in a quick and easy-to-read format.

Based on feedback, we've decided to bring it back and hope that it helps you, our faithful readers, stay on top of the news and announcements of this fast-changing industry.

So without further ado…


News and Releases

Let's get caught up with relevant news and releases from around the web development community.

Rails 4.0.1 Is Out

The Rails framework continues chugging away with its newest update to version four. The big update in this version is a change to the way Active Record handles order calls and the subsequent prepending of an SQL ORDER BY clause. It was primarily done for backward compatibility but worth checking out to ensure your apps behave as expected.

Read more


Google Octane v2.0

Google updated their Octane JavaScript benchmark to include some new tests to simulate complex data structures and memory-intensive applications by measuring (interestingly enough) how fast Microsoft's TypeScript compiler can compile itself. There are also a number of fixes to existing tests to help improve the reliability of measurements. If you're a JS developer, it's definitely worth checking this out.

Read more


AWS SDK for JavaScript

Looks like Amazon wants to get into the "no backend" world for client-side developers. We've seen services like Firebase offer managed back-end services like this, making it easier for client-side developers with no backend experience to get up and running quickly. It seems Amazon likes this approach too, since they've launched a developer preview of the AWS SDK for JavaScript.

Read more


Missed the Chrome Developer Summit?

If you missed the Chrome Developer Summit, be sure to at least grab the speaker decks which are chock full of great information on performance, Chrome packaged apps, and the Chrome Developer Tools.

Get the decks


And While You're At It…

Check out Addy Osmani's deck, "Rendering Performance Case Studies", which he used for a talk at VelocityConf. It should go nicely with the decks from the Chrome Summit.

Addy's slides


The Node.js Knockout Contest Announces Its Winners

Node Knockout is a yearly event which always produces some impressive results. This year's winners are no less impressive, with everything from multi-player games to embeddable JavaScript for mining bitcoins.

Read more


Firefox 29 Alpha

This is the nightly version of Firefox which will have the new Australis UI, the first big interface update to Firefox in years.

Read more


New and Notable

Here's a glance at some new resources that might pique your interest. We'll run the gamut from frameworks to books trying to call out new things that look pretty cool and might make your development easier.


RedditKit.rb

RedditKit.rb is a reddit API wrapper, written in Ruby. It's structured closely after the wonderful Octokit.rb and Twitter gems, so if you're familiar with either of those, you'll feel right at home here.

Read more


Ember-Autosuggest

This component will auto-complete or auto-suggest completed search queries for you as you type. There is very basic keyboard navigation: the up and down keys scroll through the results, enter adds the selection, and escape hides the autocomplete menu. This is still a work in progress.

Read more


Book: Node.js In Action

Node.js in Action is an example-driven tutorial that starts at square one and guides you through all the features, techniques, and concepts you'll need to build production-quality Node applications.

Read more


INIT Front-End Framework

INIT aims to provide you with a decent workflow and structure within your sophisticated web project. INIT is based upon HTML5 Boilerplate and adds more structure for SCSS files and JavaScript, includes build tasks, and a whole lot more.

Read more


Grunt-Ec2

grunt-ec2 abstracts away aws-cli allowing you to easily launch, terminate, and deploy to AWS EC2 instances.

Read more


jQuery Evergreen

jQuery Evergreen works with modern browsers. It has the same familiar API as jQuery, and is lean and mean with the following optional modules: selector, class, DOM, event, attr and html.

Read more


Suggested Reading and Off-Topic

Tutorials are cool but sometimes we just want to read the esoteric and unique. That's what this section is for. Somewhat technical but not necessarily tutorial-based content.

Converting Print Books Into eBooks Using Radically Smart Templates

If you've ever been curious about the technical process of bringing books to a readable form on the web, then this post by Brad Neuberg of Inkling is a must-read.

Read more


4 Golden Rules for Asking Questions on Forums

It may seem obvious to many but knowing how to post for help properly will go a long way to getting you the answer you need.

Read more


How Ready Are the 5 Top US Online Retailers for Black Friday?

The team at Zoompf tested out Amazon, Walmart, Target, eBay and Best Buy to see if they're ready for Black Friday. Check out what they found in their survey.

Read more


Civic Information API: Now Connecting US Users With Their Representatives

Google released their new Civic Information API "that lets developers connect constituents to their federal, state, county and municipal elected officials—right down to the city council district".

Read more


EtchMark Under the Hood: Building a Website That Handles Touch, Mouse, and Pen – and Device Shakes

If you haven't yet seen Microsoft's EtchMark Etch-A-Sketch drawing toy demo, go check it out because it's pretty cool. Then read up on how they built it.

Read more


So Much More…

There's so much going on in the industry and hopefully this will be enough to satisfy your information appetite for now. We'll continue to search out cool, interesting and unique tidbits and bubble them up to you. We'll be back soon with more great information to share.

Using Node’s Event Module


When I first heard about Node.js, I thought it was just a JavaScript implementation for the server. But it’s actually much more: it comes with a host of built-in functionality that you don’t get in the browser. One of those bits of functionality is the Event module, which has the EventEmitter class. We’ll be looking at that in this tutorial.


EventEmitter: What and Why

One last benefit of events: they are a very loose way of coupling parts of your code together.

So, what exactly does the EventEmitter class do? Put simply, it allows you to listen for “events” and assign actions to run when those events occur. If you’re familiar with front-end JavaScript, you’ll know about the mouse and keyboard events that occur on certain user interactions. These are very similar, except that we can emit events on our own, whenever we want to, and not necessarily based on user interaction. The principles EventEmitter is based on have been called the publish/subscribe model, because we can subscribe to events and then publish them. There are many front-end libraries built with pub/sub support, but Node has it built in.

The other important question is this: why would you use the event model? In Node, it’s an alternative to deeply nested callbacks. A lot of Node methods run asynchronously, which means that to run code after a method has finished, you need to pass a callback function to it. Eventually, your code will look like a giant funnel. To prevent this, many Node classes emit events that you can listen for. This allows you to organize your code the way you’d like to, and not use callbacks.
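
To make the contrast concrete, here’s a minimal sketch using the built-in fs module; the "data.txt" file name is just a placeholder:

    var fs = require("fs");

    // Callback style: the follow-up logic nests inside the callback
    fs.readFile("data.txt", function (err, contents) {
        if (err) throw err;
        console.log("read " + contents.length + " bytes");
    });

    // Event style: a readable stream emits events we can listen for
    var stream = fs.createReadStream("data.txt");
    stream.on("data", function (chunk) {
        console.log("got a chunk of " + chunk.length + " bytes");
    });
    stream.on("end", function () {
        console.log("finished reading");
    });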

One last benefit of events: they are a very loose way of coupling parts of your code together. An event can be emitted, but if no code is listening for it, that’s okay: it will just pass unnoticed. This means removing listeners (or event emissions) never results in JavaScript errors.


Using EventEmitter

We’ll begin with the EventEmitter class on its own. It’s pretty simple to get at: we just require the events module:

    var events = require("events");

This events object has a single property, which is the EventEmitter class itself. So, let’s make a simple example for starters:

    var EventEmitter = require("events").EventEmitter;

    var ee = new EventEmitter();
    ee.on("someEvent", function () {
        console.log("event has occured");
    });

    ee.emit("someEvent");

We begin by creating a new EventEmitter object. This object has two main methods that we use for events: on and emit.

We begin with on. This method takes two parameters: we start with the name of the event we’re listening for: in this case, that’s "someEvent". But of course, it could be anything, and you’ll usually choose something better. The second parameter is the function that will be called when the event occurs. That’s all that is required for setting up an event.

Now, to fire the event, you pass the event name to the EventEmitter instance’s emit method. That’s the last line of the code above. If you run that code, you’ll see that we get the text printed out to the console.

That’s the most basic use of an EventEmitter. You can also include data when firing events:

    ee.emit("new-user", userObj);

That’s only one data parameter, but you can include as many as you want. To use them in your event handler function, just take them as parameters:

    ee.on("new-user", function (data) {
        // use data here
    });
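
For instance, nothing stops us from passing several values at once; each extra argument to emit simply becomes another parameter of the listener (the event name and values below are purely illustrative):

    ee.on("user-login", function (user, timestamp, ip) {
        console.log(user.name + " logged in at " + timestamp + " from " + ip);
    });

    ee.emit("user-login", { name: "Jane" }, Date.now(), "127.0.0.1");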

Before continuing, let me clarify part of the EventEmitter functionality. We can have more than one listener for each event; multiple event listeners can be assigned (all with on), and all functions will be called when the event is fired. By default, Node allows up to ten listeners on one event at once; if more are created, node will issue a warning. However, we can change this amount by using setMaxListeners. For example, if you run this, you should see a warning printed out above the output:

    ee.on("someEvent", function () { console.log("event 1"); });
    ee.on("someEvent", function () { console.log("event 2"); });
    ee.on("someEvent", function () { console.log("event 3"); });
    ee.on("someEvent", function () { console.log("event 4"); });
    ee.on("someEvent", function () { console.log("event 5"); });
    ee.on("someEvent", function () { console.log("event 6"); });
    ee.on("someEvent", function () { console.log("event 7"); });
    ee.on("someEvent", function () { console.log("event 8"); });
    ee.on("someEvent", function () { console.log("event 9"); });
    ee.on("someEvent", function () { console.log("event 10"); });
    ee.on("someEvent", function () { console.log("event 11"); });

    ee.emit("someEvent");

To set the maximum number of listeners, add this line above the listeners:

    ee.setMaxListeners(20);

Now when you run it, you won’t get a warning.


Other EventEmitter Methods

There are a few other EventEmitter methods you’ll find useful.

Here’s a neat one: once. It’s just like the on method, except that it only works once. After being called for the first time, the listener is removed.

    ee.once("firstConnection", function () { console.log("You'll never see this again"); });
    ee.emit("firstConnection");
    ee.emit("firstConnection");

If you run this, you’ll only see the message once. The second emission of the event isn’t picked up by any listeners (and that’s okay, by the way), because the once listener was removed after being used once.

Speaking of removing listeners, we can do this ourselves, manually, in a few ways. First, we can remove a single listener with the removeListener method. It takes two parameters: the event name and the listener function. So far, we’ve been using anonymous functions as our listeners. If we want to be able to remove a listener later, it will need to be a function with a name we can reference. We can use this removeListener method to duplicate the effects of the once method:

    function onlyOnce () {
        console.log("You'll never see this again");
        ee.removeListener("firstConnection", onlyOnce);
    }

    ee.on("firstConnection", onlyOnce) 
    ee.emit("firstConnection");
    ee.emit("firstConnection");

If you run this, you’ll see that it has the very same effect as once.

If you want to remove all the listeners bound to a given event, you can use removeAllListeners; just pass it the name of the event:

    ee.removeAllListeners("firstConnection");

To remove all listeners for all events, call the function without any parameters.

    ee.removeAllListeners();

There’s one last method: listeners. This method takes an event name as a parameter and returns an array of all the functions that are listening for that event. Here’s an example of that, based on our onlyOnce example:

    function onlyOnce () {
        console.log(ee.listeners("firstConnection"));
        ee.removeListener("firstConnection", onlyOnce);
        console.log(ee.listeners("firstConnection"));
    }

    ee.on("firstConnection", onlyOnce) 
    ee.emit("firstConnection");
    ee.emit("firstConnection");

We’ll end this section with one bit of meta-ness. Our EventEmitter instance itself actually fires two events of its own, which we can listen for: one when we create new listeners, and one when we remove them. See here:

    ee.on("newListener", function (evtName, fn) {
        console.log("New Listener: " + evtName);
    });

    ee.on("removeListener", function (evtName) {
        console.log("Removed Listener: " + evtName);
    });

    function foo () {}

    ee.on("save-user", foo);
    ee.removeListener("save-user", foo);

Running this, you’ll see our listeners for both new listeners and removed listeners have been run, and we get the messages we expected.

So, now that we’ve seen all the methods that an EventEmitter instance has, let’s see how it works in conjunction with other modules.

EventEmitter Inside Modules

Since the EventEmitter class is just regular JavaScript, it makes perfect sense that it can be used within other modules. Inside your own JavaScript modules, you can create EventEmitter instances and use them to handle internal events. That’s simple, though. More interesting is creating a module that inherits from EventEmitter, so we can use its functionality as part of our public API.

Actually, there are built-in Node modules that do exactly this. For example, you may be familiar with the http module; this is the module that you’ll use to create a web server. This basic example shows how the on method of the EventEmitter class has become part of the http.Server class:

    var http = require("http");
    var server = http.createServer();

    server.on("request", function (req, res) {
        res.end("this is the response");
    });

    server.listen(3000);

If you run this snippet, the process will wait for a request; you can go to http://localhost:3000 and you’ll get the response. When the server instance gets the request from your browser, it emits a "request" event, an event that our listener will receive and can act upon.

So, how can we go about creating a class that will inherit from EventEmitter? It’s actually not that difficult. We’ll create a simple UserList class, which handles user objects. So, in a userlist.js file, we’ll start with this:

    var util         = require("util");
    var EventEmitter = require("events").EventEmitter;

We need the util module to help with the inheriting. Next, we need a database: instead of using an actual database, though, we’ll just use an object:

    var id = 1;
    var database = {
        users: [
            { id: id++, name: "Joe Smith",  occupation: "developer"    },
            { id: id++, name: "Jane Doe",   occupation: "data analyst" },
            { id: id++, name: "John Henry", occupation: "designer"     }
        ]
    };

Now, we can actually create our module. If you aren’t familiar with Node modules, here’s how they work: any JavaScript we write inside this file is only readable from inside the file, by default. If we want to make it part of the module’s public API, we make it a property of module.exports, or assign a whole new object or function to module.exports. Let’s do this:

    function UserList () {
        EventEmitter.call(this);
    }

This is the constructor function, but it isn’t your usual JavaScript constructor function. What we’re doing here is using the call method on the EventEmitter constructor to run that method on the new UserList object (which is this). If we need to do any other initialization to our object, we could do it inside this function, but that’s all we’ll do for now.

Inheriting the constructor isn’t enough though; we also need to inherit the prototype. This is where the util module comes in.

    util.inherits(UserList, EventEmitter);

This will add everything that’s on EventEmitter.prototype to UserList.prototype; now, our UserList instances will have all the methods of an EventEmitter instance. But we want to add some more, of course. We’ll add a save method, to allow us to add new users.

    UserList.prototype.save = function (obj) {
        obj.id = id++;
        database.users.push(obj);
        this.emit("saved-user", obj);  
    };

This method takes an object to save to our "database": it adds an id and pushes it into the users array. Then, it emits the "saved-user" event, and passes the object as data. If this were a real database, saving would probably be an asynchronous task, meaning that to work with the saved record we would need to accept a callback. The alternative to this is to emit an event, as we’re doing. Now, if we want to do something with the saved record, we can just listen for the event. We’ll do this in a second. Let’s just close up the UserList class:

    UserList.prototype.all = function () {
        return database.users;
    };

    module.exports = UserList;

I’ve added one more method: a simple one that returns all the users. Then, we assign UserList to module.exports.
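
As an aside, if save were backed by a real asynchronous datastore, the same pattern would still hold: we would simply emit the event from inside the write callback. Here’s a rough sketch, assuming a hypothetical db.insert(obj, callback) helper rather than our in-memory array:

    UserList.prototype.save = function (obj) {
        var self = this;
        // db.insert is hypothetical: its callback fires once the write completes
        db.insert(obj, function (err, savedObj) {
            if (err) { return console.error(err); }
            self.emit("saved-user", savedObj);
        });
    };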

Now, let’s see this in use; in another file, say test.js. Add the following:

    var UserList = require("./userlist");
    var users = new UserList();

    users.on("saved-user", function (user) {
        console.log("saved: " + user.name + " (" + user.id + ")");
    });

    users.save({ name: "Jane Doe", occupation: "manager" });
    users.save({ name: "John Jacob", occupation: "developer" });

After requiring our new module and creating an instance of it, we listen for the "saved-user" event. Then, we can go ahead and save a few users. When we run this, we’ll see that we get two messages, printing out the names and ids of the records we saved.

    saved: Jane Doe (4)
    saved: John Jacob (5)

Of course, this could work the other way around: we could be using the on method from inside our class and the emit method outside, or both inside or out. But this is a good example of how it could be done.


Conclusion

So that’s how Node’s EventEmitter class works. Below you’ll find links to the Node documentation for some of the things we’ve been talking about.

New Development Courses Available on Tuts+ Premium


Tuts+ Premium courses teach you a single skill from top to bottom, inside out.

Currently, more than 15,000 members are sharpening their skills in web design, web development, Photoshop, vector design, video effects and much more. Our dedicated team adds new content weekly, so there’s always something fresh to sink your teeth into. Today, we’re highlighting a few of the latest and greatest course additions to Tuts+ Premium.


New Development Courses

Getting Good with Grunt

In both front-end and back-end projects, there are so many things to do besides the actual coding: all the maintenance that a project requires, such as compilation, unit testing, linting, and so on. You could do this manually, but you want to focus on the code, right? That’s why you’ll want to check out Grunt: it’s an automation tool that will do all that “boring” work for you!

SOLID Design Patterns

If you want to improve your development skills, then take this course. Through testing and examples, you’ll learn how to create beautiful, flexible, maintainable code that lasts longer.

PostgreSQL Essentials

This course will get you up to speed on using PostgreSQL databases. We will cover the command line tools, the PGAdmin GUI tool, the unique hstore extension, and much more. By the conclusion of this course, hopefully, you’ll feel confident to kick-start your own Postgres-based projects!

Acceptance Testing in Ruby with Cucumber

Developing software is unthinkable without tests, and Ruby makes it easy since it was built to be tested. Enter Cucumber. It is the de facto standard for acceptance testing in Ruby.

Tools of the Modern Web Developer

Are you just getting into web development, or would you like to hone your skills? In Tools of the Modern Web Developer, we discuss everything you need to polish and perfect your abilities.

Join Tuts+ Premium for Courses, eBooks, and More

Tuts+ Premium has a huge collection of courses, eBooks, source files and guides on hundreds of creative and technical topics. And we’re adding new content every week. Sign up now and develop your skills today!

Kickstarting Your Rails Education


It's been a long time since I last coded on the server-side. In fact, if you've read some of my tutorials, you may have noticed that I use ColdFusion as my application server. While ColdFusion still works great, it definitely doesn't have the panache and coolness of newer server-side technologies like Ruby on Rails. Wanting to be a bit more modern, I've decided to jump on the Ruby on Rails train. Both Ruby and the Rails framework are proven technologies that are stable and widely embraced so I think it's a great direction to head to in my server-side renaissance.

Picking it is the easy part. The hard part is actually learning how to properly use RoR and finding good resources to learn from, the latter being the hardest part. With so many sites coming and going or not being maintained, it can be difficult to find information that's relevant and useful.

Luckily for you, I've done a lot of homework recently and started to collect a list of current, up-to-date resources that have been recommended to me and look really promising.

Let me share these with you.


The Ruby Language

You've got to walk before you can run and learning the ins-and-outs of the Ruby language will help you get a leg up. I'm a firm believer that having a good understanding of a programming language will make leveraging complementary technologies (e.g.: Rails) much easier and allow you to build maintainable code from the get-go. I know it may seem obvious but I've seen plenty of cowboys out there that learn something half-assed in a weekend and throw up production code the following Monday.


TryRuby.org

The great thing about the web is the abundance of interactive tools available for learning. The slogan for Try Ruby is:

Got 15 minutes? Give Ruby a shot right now!

And they hit the mark by providing an interactive editor that takes you step by step through the learning process. You follow some simple exercises, enter your answers in the editor and get immediate feedback.


RubyMonk

Like Try Ruby, RubyMonk takes an interactive approach but they've also broken down the learning into skill levels. Each tutorial is listed by which level the content applies to allowing you to scale your learning appropriately. The site even offers an in-progress tutorial on using Rails.


Why's Poignant Guide to Ruby

When you first hit this site, you may actually think you've landed in the wrong place or at a hipster book club. Don't be fooled. Go ahead and click on the book, then follow the pages. Initially, the imagery and cartoons may be confusing, but as you get further along you'll see it's just the author's eccentric style of writing, meant to make his presentation of Ruby topics more inviting. The book is actually very good from what I've seen and a good resource to have.


Ruby-Doc.org

As you learn Ruby, you'll see how rich the language can be. Being "rich" also means there's a lot to learn and plenty of language APIs to get comfortable with. This is where the Ruby documentation project comes in. It is absolutely invaluable and you will live in it as you start to ramp up with Ruby. Seriously, bookmark it now.


Programming Ruby 1.9 & 2.0 (4th edition): The Pragmatic Programmers' Guide

Affectionately called the "pick axe" book, this is the must-have reference guide for Ruby. It's like the holy grail of the language and the one I found recommended all over the place. The key thing to keep in mind is that it's a "reference" and meant to complement your learning efforts as opposed to actually walking you through the learning process.


The Rails Framework

Once you feel you have a good grasp of the Ruby language, next it's time to jump into the Rails framework. Currently at version 4.0.x, it's become a mainstay for most startups that want a robust framework to get them up and running quickly. From what I've seen, it's very opinionated about how it does things, focusing on a lot of abstractions to make common tasks (e.g.: database access and interaction) easier.


Ruby on Rails Tutorial by Michael Hartl

In terms of learning Rails, this tutorial by Michael Hartl is one of the most complete I've seen and amazingly, he offers it up for free. He does offer some other niceties like screencasts and ebook versions for a cost but unless you want to place the book on your Kindle, reading it online should suffice.

What I love about this is that it covers every major aspect of the Rails framework and is updated with each major Rails version including v4.0.x. It's the reason that I listed it as the first Rails tutorial to check out.


Rails Guides

The tutorials in the Rails Guides will give you a solid foundation to work from. Looking through the Getting Started tutorial, it looks to cover the basics well but it does feel like Michael Hartl's stuff is a bit more comprehensive. Nonetheless, it's still a great option to learn by.


The Rails 3 Way

Obie Fernandez is a Rails guru and this book is recommended by everyone as the must-have Rails reading material. So I bowed to peer pressure and got it. Can't say yet if it's awesome but enough people I know who are good Rails developers said it's good so I'll go with that.


Online Courses

Sometimes having someone walk you step-by-step through the learning process works better. Thankfully, there are some free courses available that provide a nice walk-through of Ruby on Rails and help make piecing things together a bit easier.


Tuts+ Premium Courses

I'd be remiss if I didn't mention Tuts+ as a great place to crank up my Ruby and Rails education. I also think Jeffrey Way would totally disown me as well!

Jose Mota's course, The Fundamentals of Ruby is a great example of the high-quality courses available for aspiring Rails developers like me.


RailsCasts

RailsCasts was created by Ryan Bates and currently lists over 400 instructional videos. Most of them are short and cover very specific topics allowing you to zero in on what you'd like to learn about.


Lots of Goodness to Learn From

Well that's my list. I think it's a pretty solid one at that. I know there are a ton of other blog posts, newsletters, sites and resources that aren't listed but that's okay. This is a list to get things kickstarted and as with any new thing, it's easy to get overwhelmed with too much information. I actually wrote about how hard it can be to stay on top of emerging technologies and finding time to learn new things in my op-ed, The Learning Conundrum.

I'm trying to keep things nice and tidy so I can focus and set realistic learning goals. I find this list to be short and sweet providing a good balance of reading material and interactive learning. But if you feel like I'm absolutely missing out on a good learning resource, mention it in the comments.


WebGL With Three.js: Models and Animation


3D graphics in the browser have been a hot topic since they were first introduced. But if you were to create your apps using plain old WebGL, it would take ages. That’s why some really useful libraries have come about. Three.js is one of the most popular of them, and in this series I will show you how to make the best use of it to create stunning 3D experiences for your users.

I expect you to have a basic understanding of 3D space before you start reading this tutorial, as I won’t be explaining topics like coordinates, vectors, etc.


Preparation

As usual, we will start from the code that you created earlier. Download and unpack the assets I provided and you’ll be ready to go.


Step 1: A Word About Exporting Models In Blender

Before we start the programming part, I will explain something that many people have problems with. When you have a model created in Blender, and you want to export it to Three.js format, you should keep the following in mind:

  • First, remove the parenting – the Three.js exporter won’t export any animations if you leave it in (this also applies to the Armature Modifier)
  • Second, group vertices – if you want a bone to move any vertices, you have to group them, and name the group with the name of the bone
  • Third, you can have only one animation – this may sound like a big problem, but I will explain the workaround later

Also, when exporting you have to make sure that these options are selected in the exporter: Skinning, Bones and Skeletal Animation.


Step 2: Importing the Model

As with pretty much everything in Three.js, importing models is very simple. There is a special class, THREE.JSONLoader that will do everything for us. Of course it only loads JSON models, but it’s recommended to use them so I will only cover this loader (others work pretty much the same way). Let’s initialize it first:

var loader = new THREE.JSONLoader;
var animation;

No arguments needed. We also need to define a variable for animation, so we can access it later. Now we can load the model:

loader.load('./model.js', function (geometry, materials) {
	var skinnedMesh = new THREE.SkinnedMesh(geometry, new THREE.MeshFaceMaterial(materials));
	skinnedMesh.position.y = 50;
	skinnedMesh.scale.set(15, 15, 15);
	scene.add(skinnedMesh);
	animate(skinnedMesh);
});

The load method accepts two parameters: a path to the model and a callback function. This function will be called when the model is loaded (so in the meantime you can display a loading bar to the user). A callback function will be called with two parameters: the geometry of the model and its materials (these are exported with it). In the callback, we are creating the mesh – but this time it’s THREE.SkinnedMesh, which supports animations.

Next, we move the model 50 units up to put it on the top of our cube, scale it 15 times (because I tend to create small models in Blender) and add it to the scene. Next we call the animate function that will set up and play the animation.


Step 3: Animation

Now we set up the animation. This is the source for the animate function:

function animate(skinnedMesh) {
	var materials = skinnedMesh.material.materials;

	for (var k in materials) {
		materials[k].skinning = true;
	}

	THREE.AnimationHandler.add(skinnedMesh.geometry.animation);
	animation = new THREE.Animation(skinnedMesh, "ArmatureAction", THREE.AnimationHandler.CATMULLROM);
	animation.play();
}

First we have to enable skinning (animations) in all materials of the model. Next, we have to add the animation from model to THREE.AnimationHandler and create the THREE.Animation object. The parameters are in the following order: the mesh to animate, the animation name in the model and interpolation type (useful when you have a complicated model like a human body, where you want the mesh to bend smoothly). Finally, we play the animation.

But if you open the browser now, you would see that the model is not moving:

[Image: the model before the animation plays]

To fix this, we have to add one line to our render function, just below the particleSystem rotation:

if (animation) animation.update(delta);

This will update the time on the animation, so THREE.AnimationHandler knows which frame to render. Now open the browser and you should see the top cube bend to the left and to the right:

[Image: the top cube bending left and right]

Step 4: Multiple Animations

Yes, there is a workaround for having only one animation sequence in a model, but it requires you to edit the model. The idea is that you add each animation to one sequence; then, when one ends, the next begins. Next, after you’ve exported your model, you need to change the animation code. Let’s say we have a standing animation from the beginning to the fourth second, and a walking animation from the fourth second to the end. Then in our render function we have to check which second the animation is at, and if it has reached the end time of the current sequence, stop it and play it again from that sequence’s start time:

var currentSequence = 'standing';

function (render) {
...
	if (animation) animation.update(delta);
	if (currentSequence == 'standing') {
		if (animation.currentTime > 4) {
			animation.stop();
			animation.play(false, 0); // play the animation not looped, from 0s
		}
	} else if (currentSequence == 'walking') {
		if (animation.currentTime <= 4 || animation.currentTime > 8) {
			animation.stop();
			animation.play(false, 4); // play the animation not looped, from 4s
		}
	}
...
}

You have to remember to start the animations not looped and from the correct time. This will of course be buggy if the user’s frame-rate is really low, because the delta will be higher and animation.currentTime may be much higher than the limit for any particular sequence, resulting in playing some part of the next sequence. But it will be noticeable only if deltas are about 300-500ms.
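
One simple way to reduce that risk is to cap the delta before passing it to the animation. Here’s a small sketch, assuming the render loop already gets delta from a THREE.Clock via getDelta():

var delta = clock.getDelta();
// cap unusually long frames so currentTime can't jump far past a sequence boundary
delta = Math.min(delta, 0.1); // at most 100ms per step
if (animation) animation.update(delta);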

Now let’s change our animate function to play the walking animation. Just add these arguments to the animation.play call:

animation.play(false, 0);

Also, let’s allow the user to switch between animations using the a key. Add this code at the end of the file, just before the render() call:

document.addEventListener('keyup', function (e) {
	if (e.keyCode == 'A'.charCodeAt(0)) {
		currentSequence = (currentSequence == 'standing' ? 'walking': 'standing');
	}
});

Step 5: Attach to Bone

This technique is particularly useful in RPGs, but it can apply to other genres as well. It involves attaching another object to the bone of the animated object: clothes, weaponry, etc.

Let’s start by modifying our loader.load callback. Add this code under the scene.add(skinnedMesh) line:

item = new THREE.Mesh(new THREE.CubeGeometry(100, 10, 10), new THREE.MeshBasicMaterial({ color: 0xff0000 }));
item.position.x = 50;
pivot = new THREE.Object3D();
pivot.scale.set(0.15, 0.15, 0.15);
pivot.add(item);
pivot.useQuaternion = true;
skinnedMesh.add(pivot);

The item mesh simulates something you may want to attach to an animated object. To make it rotate around a specific point, and not around the center, we will add it to a pivot object and move it 50 units (half of the width) to the right. We have to scale it to 0.15, because it will be added to the skinnedMesh that has a scale of 15. Finally, before it’s added to our animated object we tell it to use quaternions.

Basically, quaternions are a number system, but since Three.js handles everything for us, you don’t have to delve into this topic if you don’t want to (but if you do, take a look at its Wiki). They are used to rotate objects without the risk of gimbal lock.

Now, in the render function we have to update the object’s position and rotation:

pivot.position = new THREE.Vector3().getPositionFromMatrix(skinnedMesh.bones[2].skinMatrix);
pivot.quaternion.setFromRotationMatrix(skinnedMesh.bones[2].skinMatrix);

Let me explain what is happening here. First, we set the position to be the same as on the last bone in the model. We are using the skinMatrix property to calculate it. Then we use the same property to calculate the quaternion for the pivot‘s rotation. After that, you can open the browser and you should see the red beam attached to our model:

[Image: the red beam attached to the model]

Conclusion

So I hope you’ve learned a few new interesting techniques from this tutorial. Like always, feel free to experiment with the app that we’ve created. In the next (and the last) tutorial in this series, I’ll show you the true power of OpenGL/WebGL – Shaders.

AbsurdJS or Why I Wrote My Own CSS Preprocessor


As a front-end developer, I’m writing a lot of CSS, and using pure CSS is not the most efficient way to work nowadays. CSS preprocessors are something which have helped me a lot. My first impression was that I had finally found the perfect tools. They have a bunch of features, great support, free resources and so on. This is all true and it still applies, but after several projects I realized that the world is not so perfect. There are two main CSS preprocessors – LESS and SASS. There are some others, but I have experience with only these two. In the first part of this article I’ll share with you what I don’t like about preprocessors, and then in the second part I’ll show you how I managed to solve most of the problems that I had.


The Problems


Setup

No matter which CSS preprocessor is involved, there is always setup required; you can’t just start typing .less or .sass files and expect to get the .css file. LESS requires NodeJS, and SASS requires Ruby. At the moment, I’m working mostly on HTML/CSS/JavaScript/NodeJS applications. So, LESS seems like a better option, because I don’t need to install additional software. You know, adding one more thing to your ecosystem means more time for maintenance. Also, not only do you need the required tool, but all of your colleagues now have to integrate the new instrument as well.

Firstly, I chose LESS because I already had NodeJS installed. It played well with Grunt and I successfully finished two projects with that setup. After that, I started reading about SASS. I was interested in OOCSS, Atomic design and I wanted to build a solid CSS architecture. Very soon I switched to SASS, because it gave me better possibilities. Of course I (and my colleagues too) had to install Ruby.

Output

A lot of developers don’t check the produced CSS. I mean, you may have really good looking SASS files, but what’s used in the end is the compiled .css file. If it is not optimized and its file size is high, then you have a problem. There are a few things which I don’t like in both preprocessors.

Let’s say that we have the following code:

// LESS or SASS
p {
    font-size: 20px;
}
p {
    padding: 20px;
}

Don’t you think that this should be compiled to:

p {
    font-size: 20px;
    padding: 20px;
}

Neither LESS nor SASS works like that. They just leave your styles as you type them. This could lead to code duplication. What if I have a complex architecture with several layers and every layer adds something to the paragraph? There will be several definitions which are not exactly needed. You may even have the following situation:

p {
    font-size: 20px;
}
p {
    font-size: 30px;
}

The correct code at the end should be only the following:

p {
    font-size: 30px;
}

I know that the browser will take care of this and figure out the right font size. But isn’t it better to save those operations? I’m not sure that this will affect the performance of your page, but it affects the readability for sure.

Combining selectors which share the same styles is a good thing. As far as I know, LESS doesn’t do this. Let’s say that we have a mixin and we want to apply it to two classes.

.reset() {
    padding: 0;
    margin: 0;
}
.header {
    .reset();
}
.footer {
    .reset();
}

And the result is:

.header {
    padding: 0;
    margin: 0;
}
.footer {
    padding: 0;
    margin: 0;
}

So, these two classes have the same styles and they could be combined into one definition.

.header, .footer {
    padding: 0;
    margin: 0;
}

I was wondering if this is an actual performance optimization, and I didn’t find an accurate answer, but it looks like a good thing. SASS has something called placeholders, which are used exactly for such situations. For example:

%reset {
    padding: 0;
    margin: 0;
}
.header {
    @extend %reset;
}
.footer {
    @extend %reset;
}

The code above produces exactly what I wanted. The problem is that if I use too many place holders I may end up with a lot of style definitions, because the preprocessor thinks that I have something to combine.

%reset {
    padding: 0;
    margin: 0;
}
%bordered {
    border: solid 1px #000;
}
%box {
    display: block;
    padding: 10px;
}
.header {
    @extend %reset;
    @extend %bordered;
    @extend %box;
}

There are three place holders. The .header class extends them all and the final compiled CSS looks like this:

.header {
    padding: 0;
    margin: 0;
}
.header {
    border: solid 1px #000;
}
.header {
    display: block;
    padding: 10px;
}

It looks wrong, doesn’t it? There should be only one style definition and only one padding property.

.header {
    padding: 10px;
    margin: 0;
    border: solid 1px #000;
    display: block;
}

Of course, there are tools which may solve this, given the compiled CSS. But as I said, I prefer to use as few libraries as possible.

Syntax Limitation

While I was working on OrganicCSS, I ran into a lot of limitations. In general, I wanted to write CSS the way I write JavaScript. I mean, I had some ideas about complex architecture, but I wasn’t able to achieve them, because the language I was working with was kind of primitive. For example, let’s say that I need a mixin which styles my elements. I want to pass a theme and a border type. Here is how this should look in LESS:

.theme-dark() {
   color: #FFF;
   background: #000;
}
.theme-light() {
   color: #000;
   background: #FFF;
}
.component(@theme, @border) {
   border: "@{border} 1px #F00";
   .theme-@{theme}();
}
.header {
   .component("dark", "dotted");
}

Of course I’ll have a lot of themes and they should also be mixins. So, the variable interpolation works for the border property, but not for the mixin names. That’s a simple case, but it is currently not possible, or at least I don’t know if it has been fixed. If you try to compile the above code you will get Syntax Error on line 11.

SASS is one step further. The interpolation works with placeholders, which makes things a little bit better. The same idea looks like this:

@mixin theme-dark() {
   color: #FFF;
   background: #000;
}
@mixin theme-light() {
   color: #000;
   background: #FFF;
}
%border-dotted {
   border: dotted 1px #000;
}
@mixin component($theme, $border) {
   @extend %border-#{$border};
   @include theme-#{$theme};
}
.header {
   @include component("dark", "dotted");
}

So, the border styling works, but the theme produces:

Sass Error: Invalid CSS after "   @include theme-": expected "}", was "#{$theme};"

That’s because interpolation in the names of mixins and extends is not allowed. There is a long discussion about that, and it will probably be fixed soon.

Both LESS and SASS are great if you want to improve your writing speed, but they are far from perfect for building modular and flexible CSS. Mainly, they are missing things like encapsulation, polymorphism and abstraction. Or at least, they are not in the form which I needed.


A New Approach

I fought with those limitations for several days. I invested a good amount of time reading documentation. In the end, I just gave up and started looking for other options. What I had wasn’t flexible enough, so I started thinking about writing my own preprocessor. Of course, that’s a really complex task and there are a lot of things to think about, such as:

  • the input – normally, preprocessors take code which looks like CSS. I guess the idea is to complete the language, that is, to add missing yet necessary features. It is also easy to port pure CSS, and developers can start using it immediately, because in practice it is almost the same language. However, from my point of view, this approach brings a few difficulties, because I would have to parse and analyze everything.
  • the syntax – even if I wrote the parsing part, I would have to invent my own syntax, which is a complex job in itself.
  • competitors – there are already two really popular preprocessors. They have good support and an active community. You know, most of the coolest things in our sphere are so useful because of the contributors. If I write my own CSS preprocessor and don’t get enough feedback and support from people, I may be the only one actually using it.

So, I thought about it a bit and found a solution. There is no need to invent a new language with a new syntax; it’s already there. I could use pure JavaScript. There is already a big community, and a lot of people could start using my library immediately. Instead of reading external files, parsing them and compiling them, I decided to use the NodeJS ecosystem. And of course, the most important thing – I completely removed the CSS part. Writing everything in JavaScript made my web application a lot cleaner, because I didn’t have to deal with the input format and all those processes which produce the actual CSS.

(The name of the library is AbsurdJS. You may find this name funny, and it is indeed. When I shared my idea with some friends, they all said that writing your CSS in JavaScript was absurd. So, that was the perfect title.)


AbsurdJS


Installation

To use AbsurdJS you need NodeJS installed. If you still don’t have this little gem on your system go to nodejs.org and click the Install button. Once everything finishes you could open a new console and type:

npm install -g absurd

This will set up AbsurdJS globally. This means that wherever you are, you can run the absurd command.

Writing Your CSS

In the JavaScript world, the closest thing to CSS is JSON format. So, that’s what I decided to use. Let’s take a simple example:

.content {
    padding: 0;
    margin: 0;
    font-size: 20px;
}
.content p {
    line-height: 30px;
}

This is pure CSS. Here is how it looks like in LESS and SASS:

.content {
    padding: 0;
    margin: 0;
    font-size: 20px;
    p {
        line-height: 30px;
    }
}

In the context of AbsurdJS the snippet should be written like this:

module.exports = function(api) {
    api.add({
        '.content': {
            padding: 0,
            margin: 0,
            'font-size': '20px',
            p: {
                'line-height': '30px'
            }
        }
    });
}

You may save this to a file called styles.js and run:

absurd -s .\styles.js

It will compile the JavaScript to the same CSS. The idea is simple. You write a NodeJS package which exports a function. The function is called with only one parameter – the AbsurdJS API. It has several methods, and I’ll go through them later, but the most common one is add. It accepts valid JSON. Every object defines a selector. Every property of that object could be a CSS property and its value, or another object.

Importing

Placing different parts of your CSS in different files is really important. This approach improves the readability of your styles. AbsurdJS has an import method, which acts as the @import directive in the CSS preprocessors.

var cwd = __dirname;
module.exports = function(api) {
    api.import(cwd + '/config/main.js');
    api.import(cwd + '/config/theme-a.js');
    api.import([
        cwd + '/layout/grid.js',
        cwd + '/forms/login-form.js',
        cwd + '/forms/feedback-form.js'
    ]);
}

What you have to do is write a main.js file which imports the rest of the styles. You should know that there is overwriting: if you define a style for the body tag inside /config/main.js and later use the same property in /config/theme-a.js, the final value will be the one used in the last imported file. For example:

module.exports = function(api) {
    api.add({
        body: { margin: '20px' }
    });
    api.add({
        body: { margin: '30px' }
    });
}

This is compiled to:

body {
    margin: 30px;
}

Notice that there is only one selector. If you do the same thing in LESS or SASS, you will get:

body {
    margin: 20px;
}
body {
    margin: 30px;
}

Variables and Mixins

One of the most valuable features of preprocessors is their variables. They give you the ability to configure your CSS, such as defining a setting somewhere at the beginning of the stylesheet and using it later on. In JavaScript, variables are something normal. However, because you have modules placed in different files, you need something that acts as a bridge between them. You may want to define your main brand color in one file, but use it later in another. AbsurdJS offers an API method for that, called storage. If you execute the function with two parameters, you create a key-value pair. If you pass only a key, you get the stored value.

// config.js
module.exports = function(api) {
    api.storage("brandColor", "#00F");
}

// header.js
module.exports = function(api) {
    api.add({
        header: {
            color: api.storage("brandColor")
        }
    })
}

Every selector may accept not only an object, but also an array. So this is also valid:

module.exports = function(api) {
    api.add({
        header: [
            { color: '#FF0' },
            { 'font-size': '20px' }
        ]
    })
}

This makes it possible to send multiple objects to specific selectors. It plays very well with the idea of mixins. By definition, a mixin is a small piece of code which could be used multiple times. That’s the second feature of LESS and SASS which makes them attractive to developers. In AbsurdJS, mixins are actually normal JavaScript functions. The ability to put things inside storage gives you the power to share mixins between files. For example:

// A.js
module.exports = function(api) {
    api.storage("button", function(color, thickness) {
        return {
            color: color,
            display: "inline-block",
            padding: "10px 20px",
            border: "solid " + thickness + "px " + color,
            'font-size': "10px"
        }
    });
}

// B.js
module.exports = function(api) {
    api.add({
        '.header-button': [
            api.storage("button")("#AAA", 10),
            {
                color: '#F00',
                'font-size': '13px'
            }
        ]
    });
}

The result is:

.header-button {
    color: #F00;
    display: inline-block;
    padding: 10px 20px;
    border: solid 10px #AAA;
    font-size: 13px;
}

Notice that there is only one selector defined, and the font-size property has the value from the second object in the array (the mixin defines some basic styles, but later they are changed).

Plugins

Ok, mixins are cool, but I always wanted to define my own CSS properties. I mean using properties that don’t normally exist, but encapsulate valid CSS styles. For example:

.header {
    text: medium;
}

Let’s say that we have three types of text: small, medium and big. Each of them has a different font-size and different line-height. It’s obvious that I can achieve the same thing with mixins, but AbsurdJS offers something better – plugins. The creation of the plugin is again via the API:

api.plugin("text", function(api, type) {
    switch(type) {
        case "small":
            return {
                'font-size': '12px',
                'line-height': '16px'
            }
        break;
        case "medium":
            return {
                'font-size': '20px',
                'line-height': '22px'
            }
        break;
        case "big":
            return {
                'font-size': '30px',
                'line-height': '32px'
            }
        break;
    }
});

This allows you to apply text: medium to your selectors. The above styling is compiled to:

.header {
    font-size: 20px;
    line-height: 22px;
}

Media Queries

Of course, the library supports media queries. I also copied the idea of the bubbling feature (you are able to define breakpoints directly inside the elements and AbsurdJS will take care of the rest).

api.add({
    '.footer': {
        'font-size': '14px',
        '@media all and (min-width: 320px) and (max-width: 550px)': {
            'font-size': '24px'
        }
    },
    '.content': {
        '@media all and (min-width: 320px) and (max-width: 550px)': {
            margin: '24px'
        }
    }
})

The result is:

.footer {
    font-size: 14px;
}
@media all and (min-width: 320px) and (max-width: 550px) {
    .footer {
        font-size: 24px;
    }
    .content {
        margin: 24px;
    }
}

Keep in mind that if you have the same media query used multiple times, the compiled file will contain only one definition. This actually saves a lot of bytes. Unfortunately, LESS and SASS don’t do this.

Pseudo Classes

For these, you just need to pass in valid JSON. The following example demonstrates how to use pseudo CSS classes:

module.exports = function(api) {
    api.add({
        a: {
            'text-decoration': 'none',
            ':hover': {
                'text-decoration': 'underline'
            }
        }
    });
}

And it is compiled to:

a {
    text-decoration: none;
}
a:hover {
    text-decoration: underline;
}

Integration

AbsurdJS works as a command line tool, but it can be used inside a Node.js application as well. For example:

var Absurd = require("absurd"),
    absurd = Absurd(),
    api = absurd.api,
    output = "./css/styles.css";

api.add({ ... }).import("...");
absurd.compileFile(output, function(err, css) {
    // do something with the css
});

Or if you have a file which acts as an entry point:

var Absurd = require("absurd");
Absurd("./css/styles.js").compileFile("./css/styles.css", function(err, css) {
    // do something with the css
});

The library also supports integration with Grunt. You can read more about that on the project's GitHub page.

Command Line Interface Options

There are three parameters available:

  • [-s] – main source file
  • [-o] – output file
  • [-w] – directory to watch

For example, the following line starts a watcher on the ./css directory, grabs ./css/main.js as an entry point, and outputs the result to ./styles.css:

absurd -s ./css/main.js -o ./styles.css -w ./css

Conclusion

Don't get me wrong. The available CSS preprocessors are awesome, and I'm still using them. However, they come with their own set of problems, and I managed to solve them by writing AbsurdJS. The truth is that I just replaced one tool with another. Using this library eliminates the usual way of writing CSS and makes things really flexible, because everything is JavaScript. It can be used as a command line tool or integrated directly into an application's code. If you are interested in AbsurdJS, feel free to check out the full documentation at github.com/krasimir/absurd or fork the repo.

Statamic 101


Statamic is a modern PHP CMS which really makes an effort to be easy and intuitive to use. From its flat-file design to its use of technologies like Markdown and YAML, you can accomplish an outstanding amount of work without writing any code at all.

In this article we will take a look at the process from installation to setting up a basic portfolio.

Thanks to its flat-file design, setup is as simple as extracting the zip file you download from the Statamic site. There is no database involved; all the content and settings are stored locally in a host of different files. This also means you get automatic backups and versioning on all of your content if you use something like Git.

With the contents extracted, let's take a look at Statamic’s structure.


The File Structure

There are more or less five different folders you will most likely be interacting with, and they are:

  • _config is where all your settings are held
  • _content is where you will put your Markdown files
  • _add-ons is for Statamic add-ons
  • _themes is where you build your theme
  • assets is where you can stick resources for your site

Besides these, you have the following four folders that you probably won't touch directly:

  • _app – Statamic's own source code
  • _cache – Where Statamic caches all your content
  • _logs – Where Statamic will store your logs
  • admin – The Statamic admin panel

But the first step in every Statamic site is to configure its options.


Configuration

All the configuration files are inside the _config folder, as we just saw; the main file you should take a look at is settings.yaml.

If you are new to YAML, all you really need to know is that it's a data format similar to JSON, except that it's meant to be more human-readable. It accomplishes this by not requiring separating characters like semi-colons or quotation marks; instead, it gets its structure from placement and indentation.

The settings.yaml file is really well documented, so you shouldn't have a problem filling it out. Some of the options you will probably want to look at are the following:

_license_key: Enter your License Key
_site_name: My Portfolio
_site_url: http://localhost:7000
_theme: portfolio
_taxonomy: 
   - language
_log_enabled: true
_cookies.secret_key: Some Random Key

Most of these are pretty straightforward, like setting the license key, your site's name and URL. The theme option sets which theme folder to load. We will get into this in a moment, but a theme is essentially the place where you specify how the different pages on your site will work. We will be creating our own theme, so you can name it whatever you want; I chose 'portfolio'.

The next option is an array called taxonomy. If you have ever used something like WordPress, then you should know what this is for: it allows you to add a setting or 'type' to each post, and you can then use these taxonomies to filter your content and create custom pages for those groupings.

I am just adding one taxonomy, the language taxonomy, because in our example portfolio we are going to specify each work's programming languages. You don't need to create a taxonomy for each custom property; we are going to want other things in our portfolio, like links and descriptions. A taxonomy is for fields that multiple entries have in common, and for fields that you may want to create a custom page for.

The log_enabled setting turns on logging, so you can view problems which come up from the admin panel; they will be stored in the _logs folder we saw earlier. Finally, the last option I mentioned is the secret key used to encrypt the cookie.

This file can now be saved out, but before we move on to content, let's take a moment to set up the portfolio template so we can see what we are doing.


Theme Basics

The template is a specific view for a single page.

Like most modern frameworks, when you load a page, you can build it up from multiple reusable components. A page in Statamic is made up of a layout, a template, and a content file. Both the layout files and templates can optionally also be made up of even more pieces, called partials.

The layout is the outer shell in which your template will be placed; this is usually used to hold the boilerplate HTML code like the head section, as well as the basic body elements that all the pages using this layout will need, like common libraries added at the bottom of your file.

The template is a specific view for a single page. For example, you can have a home page template, a contact page template, etc. You don't need to create one per page, but I would say one per type of page.

In these templates you have the ability to use variables stored in the actual content files. Say you need a page which displays an index of books you are reading, and then another page to display a list of shows you are watching; instead of replicating most of the code for each one, you can create one template for displaying a list of objects, and then pull in the specifics of which list to retrieve from the content file itself.

The content file, like its name suggests, is the actual resource being displayed; this can range from an actual unique web page to a single blog entry. We will get to these in more detail in a moment.

Now instead of manually creating all these different components, Statamic provides a sort of starter template, giving you a basic structure to get started. You can download the theme folder from here.

Just place the entire folder into the _themes directory, and rename it to portfolio (as that's the theme name we declared in the settings file). You also need to rename the kindling.js file from the js folder and the kindling.css file from the css directory to portfolio.js and portfolio.css respectively, as there is a special tag to pull these in automatically.

That's all the setup we need for now, but to get a better idea of what I was talking about regarding layouts and templates, let's take a look at those files. To begin with, open up the file named default.html from the layouts folder; this corresponds to the default layout, as you may have guessed.

<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="utf-8" />
    <title>{{ _site_name }}</title>
    <link rel="stylesheet" href="{{ theme:css }}">
</head>
<body>

    {{ layout_content }}

    <script src="http://ajax.googleapis.com/ajax/libs/jquery/1/jquery.min.js"></script>
    <script src="{{ theme:js }}"></script>
</body>
</html>

Like I mentioned earlier, the layout is a good place to put code that is required on multiple pages (or templates rather), which is why the default layout in this file just contains the basic outline of an HTML file. Now, we haven't really talked about it yet, but Statamic comes with its own templating engine, which is fairly easy to pick up: you basically just place a tag where you want something to be inserted, similar to Handlebars, if you are familiar with it.

This layout contains a couple of tags which I thought I would go through, the first of which is _site_name. This tag refers to the property we set up inside of the settings.yaml file. You will find this convention all throughout Statamic: you can set YAML options either globally like this, or even on a per-file basis, and you can then get at these options just by placing a tag with their names into your templates.

Helpers in Statamic are more like independent modules.

The next tag, which actually comes up twice, is the theme tag. Helpers in Statamic are more like independent modules, so they can have multiple different functions attached to the same name; you access the individual functions with a colon and then the name of the command you want.

The theme tag is all about loading in resources specific to this theme. It can be used to pull in things like scripts and stylesheets, but also things like images and partials. It's a helper function that basically allows you to just provide the name of the resource you want, and it will fill in the path to the current theme's directory. So for instance, if you were to write:

{{ theme:js src="underscore.js" }}

It would replace that with a link to a file named underscore.js inside of the current theme’s js folder. By default if no src parameter is set for the js or css command, it will assume you are referring to a js or css file with the name of the current theme, which is why we renamed those files to match; it's just a matter of convenience so we don't need to specify them and it cleans up the source a little bit.

The next tag that comes up is {{ layout_content }}; this is similar to yield in other templating engines, and it basically signifies where the inner template should be inserted.

The last thing I want to do in this file is remove the link to jQuery, just because I won't be using it (if you will be, then you can of course leave it).

Next, let's move on to the default template file (templates/default.html); it should be blank. For illustration's sake, let's just add a tag called {{content}}, which inserts the contents of the current page being loaded.

So to recap, when you go to a page it will first load the layout file, and then wherever the {{layout_content}} tag is placed, this template will get inserted. The template will then just output whatever the current page has inside.

With that done, save out these files and let's move on to content.


The Content

Content in Statamic is specified in Markdown files by default, and the default file which is loaded is called page.md. What I mean by this is that, the same way a standard web server will load index.html if no file is specified, Statamic will load page.md.

It's also worth noting that routes, or URL links in your site, are defined by the _content directory. So for example, if you create a folder named 'demo' inside of the _content directory and place a file named 'link.md' in it, this will correspond to the URL /demo/link; and if you place a page.md file inside, it will be loaded when you navigate to /demo/, since it is the default file name.

Statamic comes with some demo content, but you can just delete everything inside the _content directory for this example (or move it aside for now).

Let's start with a basic home page, so at the root of the _content directory, create a file named page.md with the following:

---
title: Home
---

# Welcome to the {{title}} Page

All content files are basically split into two sections: the YAML front-matter and the contents. The top part (between the dashed lines) is where you can put standard YAML specific to this file, and is a good way of setting options to adjust your template files. The second part is the Markdown area, where you put the contents of the actual page. You can use standard Markdown as well as Statamic helper tags.

This page will load with the default layout and template files we just set up, but if you want it to use different ones, you can specify these as options in the YAML section at the top using _layout and _template respectively.

If you create a server at the root of your Statamic directory:

php -S localhost:7000

and then navigate to http://localhost:7000 in your browser you should see the H1 tag with our welcome message.

This is all you need to know to create pages in Statamic, and if you are building a fairly static site, this would be enough. But in a lot of sites we need to be able to add dynamic data, which can take the form of blog posts, shop items, or in our case portfolio works.


Entries

If you recall, there is no database in Statamic, so these kinds of entries are stored in Markdown files just like the page we just built, although a couple of conventions are used to subtly introduce extra features, optimize things, and make it all work in the admin.

First off, you can name the files with a special date format so they can be sorted and filtered by date. You do this by prepending the title with a year-month-day pattern, so say you want to create a post named 'foobar', you would name it something like:

2013-09-15-foobar.md

Inside, you would put all the post's settings in the front-matter section for use in the template, and then any content underneath, just like in a page.

Now this is pretty cool, but it's the equivalent of manually entering posts into the database of a traditional system; there is another option.

Statamic comes bundled with a really nice admin, which can do all this for you, but in order to get it set up we need to tell it which fields this type of entry is supposed to have. This is done in a file appropriately named fields.yaml.

So for our example let's create a folder inside the _content directory named works, and inside the works folder let's create a file named fields.yaml. Inside the fields.yaml file we need to specify which properties our 'entries' will contain and the individual types for each of these settings.

You can either specify your fieldset (the list of fields) in the _config/fieldsets/ directory and pull in a definition, or you can just enter the definition here (or you can do both to extend an existing definition). For our simple example I am just going to put the definition here since we won't be reusing it anywhere:

type: date
fields:
  language:
    type: tags
    display: Programming Language
    required: true

  description:
    type: text
    display: Description
    required: false

  link:
    type: text
    display: Link
    required: true

  content:
    type: hidden

The first property just tells Statamic that we want these entry files to have a date property and to be named appropriately. Next we open up a YAML object named fields containing all our work entry's properties.

The first is the language field, which, if you remember, is the taxonomy we created in settings.yaml. Inside each property we need to specify its type (or it defaults to a text box), its display text (which defaults to the property's name), and whether it is required. There are other options you can set as well, like instructions or a default value, which you can view more information about here; besides these settings, different field types can have their own custom options.

For the language input, I set it to use the tags type, which basically allows you to set multiple tags for this option; it's just a different type of input for entering its value in the admin. You can view all the different fieldtypes here or in the official cheat sheet under 'Fieldtypes'.

The description and link are pretty much the same: they will both be text boxes, except one will be required and one won't be. Besides the fields you specify, all entries come with a title and content field. We don't really want a content field in our works, since they will be more like links, so I just set it to hidden.

The last step before we go to the admin is to create a page.md file inside the works directory. This isn't really required, but the admin will attempt to get the title of this entry type from here, so it's a good idea to just place it. So create a page.md file inside the works folder with just the title set to 'Works':

---
title: Works
---  

The Admin

To get into the admin we need to first create a user; again, this is a simple YAML file inside of the _config folder. The name of the file is the username you will use to log in, and inside you configure the user's details and password.

So let's create a new user with a username of editor. Again, we do this by creating a file called 'editor.yaml' inside of the _config/users/ folder. Insert the following data (except with your info):

--- 
first_name: Gabriel
last_name: Manricks
roles: [admin]
email: gmanricks@gmail.com
password: password
---

The Editor of this Portfolio

Most of these fields are pretty straightforward and I don't think they require any explanation. The only one worth mentioning is the roles setting. Currently, admin is the only option available, but in the future this will be where you can adjust who is able to edit what.

It's also worth mentioning that the password will not stay in plain text. After the first login, Statamic will hash the password along with a salt and store those here instead.

Everything after the dashed lines will just be stored as the content for this user, and can be used as a sort of bio for them.

Now save this file, and if your web server is still running, navigate to /admin in your browser. This will open up the login console where you can enter these credentials. Like I mentioned, the first time you log in you will need to do this twice: once to hash the password, and a second time to actually log in.

Admin Login
The Statamic Login

Once in, you will see a list of our pages, including our home page as well as the 'Works' entry type. Let's take a look at what our fields declaration did for us; click on the Create link inside of the 'Works' bar.

Pages Admin

You should see a nice form which includes all the fields we specified along with the title. For this demo's sake, just add a few posts.

Demo Post

With some posts stored, we have completed round one. You now know how to create pages, themes, users, and entries; it's a great first step, but there is a lot more Statamic has to offer.


The Templating Engine

Having some posts created is nice, but what would be better is if we could actually display them on a page. For this we will need to edit the default template.

This will be our first real interaction with the included templating engine, but not to worry, Statamic's intuitive design makes it almost 'obvious' to pick up.

To view a full list of the available tags you can take a look at the official doc page on the matter, but all we really need in our example is the entries tag, which is used to pull in entries from a specific folder in the _content directory. There are a lot of optional properties, allowing you to filter by date, or by conditions like taxonomies or even standard properties. We are going to keep it simple and just list the entries by date (which is the default).

Here is the complete new default template (templates/default.html):

<h1>Portfolio</h1>
<table>
    {{ entries:listing folder="works" }}
    <tr>
        <td class="lang"><p>{{ language }}</p></td>
        <td class="work">
            <a href="{{ link }}">
                <span class="title">{{ title }} - </span>
                <span class="desc">{{ description }}</span>
            </a>
        </td>
    </tr>
    {{ /entries:listing }}
</table>

In this code we are creating a table and just looping through all the posts in the 'works' directory. These kinds of block tags, where you place more HTML inside, basically make new placeholders available. Besides providing access to all the post's attributes, you also get special helper variables which can tell you things like the current index, or whether this is the first or last post. We won't be using any of those variables; all we need is to display the title, language, description and link. However, if you load up the page in your browser, you will probably realize that instead of showing the language it just says "Array".

This is because we set this property to be of the tags type, which means it could hold more than one language, so even if you only entered one language, it is still being stored in an array. Luckily, besides these tag helpers, Statamic comes with modifiers.


Modifiers

To finish off this guide, let's take a look at a few modifiers which will allow us to make this page even better.

The first and biggest problem is making the language show up. If you take a look at the following cheat sheet, all the way at the bottom left, you will see a section named List Shortcuts. While not technically modifiers, Statamic allows you to append these words to the end of a list variable, and it will return a string representation instead. The one I want to use in this situation is the standard _list helper. What this will do is separate multiple values in the array with a comma and a space, which is exactly what we want here.

To try it out, replace the {{ language }} tag with {{ language_list }}. Refreshing your browser, you should see the languages displayed correctly now.

Next, let's add a modifier to the title to make it all uppercase. If you have ever used something like the Smarty templating engine, then it works the same way: you add a pipe to the end of the variable name and then add a modifier. In our example you just need to replace the call to {{ title }} with {{ title|upper }}. These are chainable, so you can keep adding pipes indefinitely.

Now let's just add some CSS to style everything up (remember, this goes in the css/portfolio.css file):

body { background: #FAFAF5; }

h1 {
 font: 800 64px 'Raleway', sans-serif;
 margin-bottom: 28px;
}

table { font: 15px 'Coustard', serif; }

td { padding: 10px 10px 0 10px; }
p { margin-bottom: 15px; }

.lang p {
 background: #CA9F53;
 color: #FFF;
 padding: 3px 5px;
 text-align: right;
}

.work { text-align:left; }
.work a{
 border-bottom: 1px solid #000;
 text-decoration: none;
}

.title {
 font-weight: 600;
 color: #000;
}

.desc { color: #666; }

These two fonts are from Google Fonts, so you will need to add the following link to the head of your default layout file:

<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="utf-8" />
    <title>{{ _site_name }}</title>
    <link href='http://fonts.googleapis.com/css?family=Coustard|Raleway:800' rel='stylesheet' type='text/css'>
    <link rel="stylesheet" href="{{ theme:css }}">
</head>
<body>

    {{ layout_content }}

    <script src="{{ theme:js }}"></script>
</body>
</html>

If everything worked out, you should see the following page (except with the works you added):

The Demo

Conclusion

Where I think this will thrive is as a blogging platform or CMS.

In this article we have gone through the entire process, from installing the framework to setting everything up, creating a new entry type, and building a custom theme. It's a lot to do, and it's only possible because of how easy Statamic makes things.

We have seen a lot of functionality already, and we haven't even touched on creating your own modules and extending Statamic with PHP. But I think the most amazing thing is that we haven't written a single line of PHP in this entire article! And that is something to brag about.

So I think the main question people might have is: should I use it, and what should it replace in my current repertoire? It's important to gauge what Statamic is for. If you are building a new startup and need the full flexibility of a full-fledged framework, I am sure you would be able to get it running in Statamic, but it would require a lot of custom code, which may defeat the purpose. Where I think this will thrive is as a blogging platform or CMS.

Coming from a background in WordPress, Statamic feels like a direct successor, in that it follows a lot of the same conventions in theory, but implements them all in a much smarter way; comparing the amount of code required in each becomes a joke. Moving forward, Statamic has an incredible API for building custom tags, hooks, new fieldtypes and more, and you can imagine Statamic makes it as lean and simple to do as you might have come to expect.

I hope you enjoyed this article. If you have any questions, feel free to ask me below, on Twitter @gabrielmanricks, or on the Nettuts+ IRC channel on freenode (#nettuts).

Working With IndexedDB – Part 3


Welcome to the final part of my IndexedDB series. When I began this series my intent was to explain a technology that is not always the most… friendly one to work with. In fact, when I first tried working with IndexedDB, last year, my initial reaction was somewhat negative (“Somewhat negative” much like the Universe is “somewhat old.”). It’s been a long journey, but I finally feel somewhat comfortable working with IndexedDB and I respect what it allows. It is still a technology that can’t be used everywhere (it sadly missed being added to iOS7), but I truly believe it is a technology folks can learn and make use of today.

In this final article, we’re going to demonstrate some additional concepts that build upon the “full” demo we built in the last article. To be clear, you must be caught up on the series or this entry will be difficult to follow, so you may also want to check out part one.


Counting Data

Let’s start with something simple. Imagine you want to add paging to your data. How would you get a count of your data so you can properly handle that feature? I’ve already shown you how you can get all your data and certainly you could use that as a way to count data, but that requires fetching everything. If your local database is huge, that could be slow. Luckily the IndexedDB spec provides a much simpler way of doing it.

The count() method, run on an objectStore, will return a count of data. Like everything else we’ve done this will be asynchronous, but you can simplify the code down to one call. For our note database, I’ve written a function called doCount() that does just this:

function doCount() {

    db.transaction(["note"],"readonly").objectStore("note").count().onsuccess = function(event) {
        $("#sizeSpan").text("("+event.target.result+" Notes Total)");
    };

}

Remember, if the code above is a bit hard to follow, you can break it up into multiple blocks; see the earlier articles where I demonstrated this. The result handler is passed a result value representing the total number of objects available in the store.
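For reference, here is a sketch of the same logic split into intermediate steps; it is functionally equivalent to the chained doCount() above, just easier to step through:

function doCount() {

    // Open a read-only transaction against the note store.
    var transaction = db.transaction(["note"], "readonly");
    var store = transaction.objectStore("note");

    // Ask the store how many objects it contains.
    var countRequest = store.count();
    countRequest.onsuccess = function(event) {
        var total = event.target.result;
        $("#sizeSpan").text("(" + total + " Notes Total)");
    };

}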

<span class="navbar-brand" >Note Database <span id="sizeSpan"></span></span>
Count Example

The final thing I need to do is simply add a call to doCount when the application starts up and after any add or delete operation. Here is one example from the success handler for opening the database.

openRequest.onsuccess = function(e) {
    db = e.target.result;

    db.onerror = function(event) {
      // Generic error handler for all errors targeted at this database's
      // requests!
      alert("Database error: " + event.target.errorCode);
    };

    displayNotes();
    doCount();
};
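The add and delete code paths get the same treatment. As a rough sketch (assuming the readwrite transaction is stored in a variable named t, as in the save handler shown later in this article), you can refresh the list and the count once the write completes:

// Sketch: after a successful add or delete, refresh the list and the count.
// Assumes "t" is the readwrite transaction used for the write operation.
t.oncomplete = function() {
    displayNotes();
    doCount();
};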

You can find the full example in the zip you downloaded as fulldemo2. (As an FYI, fulldemo1 is the application as it was at the end of the previous article.)


Filter As You Type

For our next feature, we’re going to add a basic filter to the note list. In the earlier articles in this series I covered how IndexedDB does not allow for free form search. You can’t (well, not easily) search content that contains a keyword. But with the power of ranges, it is easy to at least support matching at the beginning of a string.

If you remember, a range allows us to grab data from a store that either begins with a certain value, ends with a value, or lies in between. We can use this to implement a basic filter against the title field of our notes. First, we need to add an index for this property. Remember, this can only be done in the onupgradeneeded event.

    if(!thisDb.objectStoreNames.contains("note")) {
        console.log("I need to make the note objectstore");
        objectStore = thisDb.createObjectStore("note", { keyPath: "id", autoIncrement:true });
        objectStore.createIndex("title", "title", { unique: false });
    }

Next, I added a simple form field to the UI:

Filter UI

Then I added a “keyup” handler to the field so I’d see immediate updates while I type.

$("#filterField").on("keyup", function(e) {
    var filter = $(this).val();
    displayNotes(filter);
});

Notice how I’m calling displayNotes. This is the same function I used before to display everything. I’m going to update it to support both a “get everything” action as well as a “get filtered” type action. Let’s take a look at it.

function displayNotes(filter) {

    var transaction = db.transaction(["note"], "readonly");  
    var content="<table class='table table-bordered table-striped'><thead><tr><th>Title</th><th>Updated</th><th>& </td></thead><tbody>";

    transaction.oncomplete = function(event) {
        $("#noteList").html(content);
    };

    var handleResult = function(event) {  
      var cursor = event.target.result;  
      if (cursor) {  
        content += "<tr data-key=\""+cursor.key+"\"><td class=\"notetitle\">"+cursor.value.title+"</td>";
        content += "<td>"+dtFormat(cursor.value.updated)+"</td>";

        content += "<td><a class=\"btn btn-primary edit\">Edit</a> <a class=\"btn btn-danger delete\">Delete</a></td>";
        content +="</tr>";
        cursor.continue();  
      }  
      else {  
        content += "</tbody></table>";
      }  
    };

    var objectStore = transaction.objectStore("note");

    if(filter) {
        //Credit: http://stackoverflow.com/a/8961462/52160
        var range = IDBKeyRange.bound(filter, filter + "\uffff");
        var index = objectStore.index("title");
        index.openCursor(range).onsuccess = handleResult;
    } else {
        objectStore.openCursor().onsuccess = handleResult;
    }

}

To be clear, the only change here is at the bottom. Opening a cursor with or without a range gives us the same type of event handler result. That's handy, as it makes this update trivial. The only complex aspect is in actually building the range. Notice what I've done here. The input, filter, is what the user typed. So imagine this is "The". We want to find notes with a title that begins with "The" and ends in any character. This can be done by simply setting the far end of the range to a very high Unicode character ("\uffff"). I can't take credit for this idea. See the StackOverflow link in the code for attribution.

You can find this demo in the fulldemo3 folder. Note that this is using a new database so if you’ve run the previous examples, this one will be empty when you first run it.

While this works, it has one small problem. Imagine a note titled, “Saints Rule.” (Because they do. Just saying.) Most likely you will try to search for this by typing “saints”. If you do this, the filter won’t work because it is case sensitive. How do we get around it?

One way is to simply store a copy of our title in lowercase. This is relatively easy to do. First, I modified the index to use a new property called titlelc.

        objectStore.createIndex("titlelc", "titlelc", { unique: false });

Then I modified the code that stores notes to create a copy of the field:

$("#saveNoteButton").on("click",function() {

    var title = $("#title").val();
    var body = $("#body").val();
    var key = $("#key").val();
    var titlelc = title.toLowerCase();

    var t = db.transaction(["note"], "readwrite");

    if(key === "") {
        t.objectStore("note")
                        .add({title:title,body:body,updated:new Date(),titlelc:titlelc});
    } else {
        t.objectStore("note")
                        .put({title:title,body:body,updated:new Date(),id:Number(key),titlelc:titlelc});
    }

Finally, I modified the search to simply lowercase user input. That way if you enter “Saints” it will work just as well as entering “saints.”

        filter = filter.toLowerCase();
        var range = IDBKeyRange.bound(filter, filter + "\uffff");
        var index = objectStore.index("titlelc");

That’s it. You can find this version as fulldemo4.


Working With Array Properties

For our final improvement, I'm going to add a new feature to our Note application: tagging. This will let you add any number of tags (think keywords that describe the note) so that you can later find other notes with the same tag. Tags will be stored as an array. That by itself isn't such a big deal; I mentioned in the beginning of this series that you could easily store arrays as properties. What is a bit more complex is handling the search. Let's begin by making it so you can add tags to a note.

First, I modified my note form to have a new input field. This will allow the user to enter tags separated by a comma:

Tag UI

I can save this by simply updating my code that handles Note creation/updating.

    var tags = [];
    var tagString = $("#tags").val();
    if(tagString.length) tags = tagString.split(",");

Notice that I’m defaulting the value to an empty array. I only populate it if you typed something in. Saving this is as simple as appending it to the object we pass to IndexedDB:

    if(key === "") {
        t.objectStore("note")
                        .add({title:title,body:body,updated:new Date(),titlelc:titlelc,tags:tags});
    } else {
        t.objectStore("note")
                        .put({title:title,body:body,updated:new Date(),id:Number(key),titlelc:titlelc,tags:tags});
    }

That’s it. If you write a few notes and open up Chrome’s Resources tab, you can actually see the data being stored.

Chrome DevTools and the Resource View

Now let’s add tags to the view when you display a note. For my application, I decided on a simple use case for this. When a note is displayed, if there are tags I’ll list them out. Each tag will be a link. If you click that link, I’ll show you a list of related notes using the same tag. Let’s look at that logic first.

function displayNote(id) {
    var transaction = db.transaction(["note"]);  
    var objectStore = transaction.objectStore("note");  
    var request = objectStore.get(id);

    request.onsuccess = function(event) {  
        var note = request.result;
        var content = "<h2>" + note.title + "</h2>"; 
        if(note.tags.length > 0) {
            content += "<strong>Tags:</strong> ";
            note.tags.forEach(function(elm,idx,arr) {
                content += "<a class='tagLookup' title='Click for Related Notes' data-noteid='"+note.id+"'> " + elm + "</a> ";  
            });
            content += "<br/><div id='relatedNotesDisplay'></div>";
        }
        content += "<p>" + note.body + "</p>";
        $noteDetail.html(content).show();
        $noteForm.hide();           
    };  
}

This function (a new addition to our application) handles the note display code formerly bound to the table cell click event. I needed a more abstract version of the code, so this fulfills that purpose. For the most part it's the same, but note the logic to check the length of the tags property. If the array is not empty, the content is updated to include a simple list of tags. Each one is wrapped in a link with a particular class I'll use for lookup later. I've also added a div specifically to handle that search.

A note with tags

At this point, I’ve got the ability to add tags to a note as well as display them later. I’ve also planned to allow the user to click those tags so they can find other notes using the same tag. Now here comes the complex part.

You’ve seen how you can fetch content based on an index. But how does that work with array properties? Turns out – the spec has a specific flag for dealing with this: multiEntry. When creating an array-based index, you must set this value to true. Here is how my application handles it:

objectStore.createIndex("tags","tags", {unique:false,multiEntry:true});

That handles the storage aspect well. Now let’s talk about search. Here is the click handler for the tag link class:

$(document).on("click", ".tagLookup", function(e) {
    var tag = e.target.text;
    var parentNote = $(this).data("noteid");
    var doneOne = false;
    var content = "<strong>Related Notes:</strong><br/>";

    var transaction = db.transaction(["note"], "readonly");
    var objectStore = transaction.objectStore("note");
    var tagIndex = objectStore.index("tags");
    var range = IDBKeyRange.only(tag);

    transaction.oncomplete = function(event) {
        if(!doneOne) {
            content += "No other notes used this tag."; 
        }
        content += "<p/>";
        $("#relatedNotesDisplay").html(content);
    };

    var handleResult = function(event) {
        var cursor = event.target.result;
        if(cursor) {
            if(cursor.value.id != parentNote) {
                doneOne = true;
                content += "<a class='loadNote' data-noteid='"+cursor.value.id+"'>" + cursor.value.title + "</a><br/> ";
            }
            cursor.continue();
        }           
    };

    tagIndex.openCursor(range).onsuccess = handleResult;

});

There's quite a bit here, but honestly, it is very similar to what we've discussed before. When you click a tag, my code begins by grabbing the text of the link for the tag value. I create my transaction, object store, and index objects as you've seen before. The range is new this time. Instead of creating a range from something and to something, we can use the only() API to specify that we want a range of only one value. And yes, that seemed weird to me as well. But it works great. You can see then that we open the cursor and iterate over the results as before. There is a bit of additional code to handle cases where there may be no matches. I also take note of the original note, i.e. the one you are viewing now, so that I don't display it as well. And that's really it. I've got one last bit of code that handles click events on those related notes so you can view them easily:

$(document).on("click", ".loadNote", function(e) {
    var noteId = $(this).data("noteid");
    displayNote(noteId);
});

You can find this demo in the folder fulldemo5.


Conclusion

I sincerely hope that this series has been helpful to you. As I said in the beginning, IndexedDB was not a technology I enjoyed using. The more I worked with it, and the more I began to wrap my head around how it does things, the more I began to appreciate how much this technology could help us as web developers. It definitely has room to grow, and I can see people preferring to use wrapper libraries to simplify things, but I think the future for this feature is great!

Coding With Koding


Cloud IDEs have been around for a little while now, and they have been pretty good for things like pair programming, or cases where you want to code consistently no matter where you are. Koding just came out of private beta, and they would like to take this notion a couple steps further, with their “cloud ecosystem”.

In this article we will take a look at what Koding is, as well as some of the benefits you can get from using it.

Koding is kind of hard to explain, because there isn’t really a product similar to it on the market. So to better illustrate all of its moving parts, let’s split the service up and begin with the development environment.


The Development Environment

When you sign up to Koding, out of the box you get your own sub-domain (.kd.io), your own VPS, and some built-in web apps to manage your new resources.

Through the admin, you have the ability to create other sub-domains on top of your current URL and spin up new VPSs, all through an easy-to-use UI.

The Dev Dashboard

VMs

Now, these VMs are not your average micro instances that a lot of services offer; these are full-fledged VMs with access to eight processors and a full GB of RAM, so you can easily run just about any app. And if you want to play around with things like cluster setups or networks, you can easily spin up multiple instances for just $5 a month.

So in terms of processing power, these instances can potentially be as powerful as your own computer, and they are definitely better than loading a local virtual machine.

What the people over at Koding are trying to do is empower developers to learn through experimentation and to try things that they wouldn't necessarily want to try locally, or simply don't have the resources to try.

These instances initialize in a matter of seconds, and if you make mistakes and break some system files, you can easily just re-initialize the server and it will restore everything under the home folder. Essentially, you’ll have a new instance but all the files you created in the home folder are preserved.

Another thing they provide, which is actually a pretty big deal in some situations, is root access to all your servers. Koding is a very transparent service: you get a VM and you can literally do whatever you want with it. Anything you can do with a standard VPS, you can do with their VMs.

OS & Languages

As for the instances themselves, they come with Ubuntu installed, and pretty much every language I can think of, including:

  • PHP
  • Go
  • Node.js
  • Ruby
  • Perl
  • Haskell

Among others, so you are pretty much good to go out of the box.

Apps

With Koding, you sort of have two layers of applications. You have the VM, which, like I mentioned, you can run anything you want on. But besides that, you have 'Koding Apps', which are web apps that run on Koding itself, and through them you can manage all of your Koding resources.

Some of the default apps you have available to you are things like admin panels for databases or frameworks and editors for code and images. The default code editor that comes pre-installed is the Ace code editor for regular development, or Firepad if you want to work collaboratively via the teamwork app.

Apps

Besides all these really cool apps, you have the ability to create your own. They are written using JavaScript (CoffeeScript) and the KD framework (from Koding). Now, because they have just come out of beta, there isn't really a start-to-finish documentation site up yet, but there are two Koding apps available (kodepad and app maker) which are built to give you a sort of structure, with examples. Besides those, I'd advise searching GitHub for ".kdapp" and looking at how other apps were built to get an idea of what sort of things are possible and how to accomplish them.

Altogether, it sort of has the feeling of a cloud "operating system", where you have the VMs as resources and the Koding apps allow you to manage those resources and set them up just the way you like. This means if your company has a sort of boilerplate setup, you can create a kdapp which will configure a new VM with the files and software you need, and then whenever you spin up a new instance, your app can configure it for you.

Additionally, kdapps can be standalone tools which just modify files, like the Ace editor or the image editors that are available. This means that if you put in the time, you can essentially build your own dev environment, with all the custom tools which make you more efficient at building apps.

Everything I have mentioned up to now really only covers half of what Koding is, and that is the development environment part. Koding also has a social/organizational side, which complements the development features and boosts the platform's value.


Developer Community

By default, when you sign up to Koding, you are added to the Koding "group"; all the features, like the activity notifications, topics, code snippets, etc., come from this default group. It's kind of cool to get all the updates from users around the world, and you can filter by topic by going to the topics page and selecting something you are interested in. But where these features really show potential is when you create your own group.

Koding Topics Page

If you use Koding as a group, then you can take advantage of all these features to easily see what your colleagues have done, get updates and snippets from them, and filter all the posts by project using the topics as tags.

In a group, you can create shared VMs which multiple users can have access to, or credit users in the group with money so they can create their own VMs and work privately.

It's one of those situations where they probably could've just released the cloud development environment, the social network, or the project management features, and each would have fit a market; but having them all work together, for free, is something to really think about.

I have been saying a lot of positive things about cloud environments, but there are some drawbacks when comparing them to developing locally which are worth at least mentioning.


Cloud vs. Local Development

Drawbacks

One of the main things is that you aren't really getting what I would call an IDE. For example, if you take a look at the Ace editor, it's a great editor, but when you stack it up against a full-fledged IDE like PhpStorm, they don't compare. Ace is merely a code editor, while PhpStorm contains all the tools you would need, from testing to refactoring, all in one app.

The other drawback is simply latency. Compared to other web IDEs I haven't had too much of an issue with this on Koding, but still, it doesn't compare to a local setup. When you perform an action like opening a document, it can sometimes take a second to open.

So to summarize, developing online may not have all the tools you are used to working with, and it may not be as fast as doing it locally. But when you develop locally, you lose out on the powerful VMs and all the project management / social features.

Luckily, you don't have to make a choice. Editing code online is always possible, so you don't have to sacrifice on that front, but if you prefer coding locally with your own tools, you have full SSH access to your machines. So whether you want to use FTP, SCP, Git, or any other kind of tool to transfer your changes to the server, you are given those options just like with a standard VPS.


Setting Up SSH & Rsync

Now, I have already covered how to set up a bare Git repo to deploy to your server, so it's redundant to cover that process again. Instead, let's take a look at setting up your Koding account with an SSH key and using rsync to transfer your project to and from Koding.

For the unfamiliar, rsync is a utility for transferring large projects to and from your computer. Where it differs from something like SCP, and the reason it's good at working with large projects, is that it will scan the files both locally and remotely and only transfer the ones that have changed. If you are working on any kind of project, you are going to have some framework system files, some boilerplate code, images, etc., and you don't really want to send them on every request, so rsync is a really good choice for this kind of thing.

It's not as good as Git, since you don't get any form of version control, but if you are using Koding as a test environment and you just want to throw files up, or pull them down, rsync is the tool for the job.

The first step is pretty simple, and it's to get SSH set up. You just need to grab your public key (on a Mac you can run cat .ssh/id_rsa.pub | pbcopy from a terminal window to copy the key) and then add it to your account page on Koding. The next thing you need to do is configure your computer to connect. Koding requires you to use their proxy as a tunnel to your server, so on a Unix-based system, you can just create a file named 'config' inside your ~/.ssh directory with the following inside (replacing <username> with your Koding username):

Host *.kd.io
    User <username>
    ProxyCommand ssh %r@ssh.koding.com nc %h %p

If you are on a Windows system, refer to their guide to see how to set up the proxy using PuTTY.

With that in place, you can run:

ssh vm-<vm number>.<username>.koding.kd.io

So for example, using my username, on the first default VM (which is number 0) you would run the following:

ssh vm-0.gabrielmanricks.koding.kd.io

If all went well, you should connect and see the Koding terminal message. If it doesn’t want to connect, make sure you added the public key and make sure the VM is on in Koding (your VMs turn off when you haven’t used them for about 20 minutes).

With that set up, we can now create a local project. We don't really need anything fancy here, so for this example I am just going to create a simple hello world HTML file inside a blank directory:

<!DOCTYPE HTML>
<html>
<head>
    <title>Koding Demo</title>
</head>
<body>
    <h1>Hello rsync</h1>
</body>
</html>

Save this file inside your projects folder and then run:

rsync -rvza --delete ./ vm-<vm number>.<username>.koding.kd.io:~/Web/

This will copy the entire contents of the current local folder to the remote directory, deleting any remote files that are not in the current folder. If you ever make changes remotely, you can easily pull them down by reversing the paths, like so:

rsync -rvza vm-<vm number>.<username>.koding.kd.io:~/Web/ ./

Now, these commands are a bit long, and if you plan on developing in this manner, you are going to want to create some shortcuts. One simple way is to just create bash aliases, but you may have multiple servers, and for each one you would need an alias for each direction. So let's create a simple bash script which accepts the VM number, the username, and the direction you want the files to go, and performs the transfer.


Bash Primer

I’m not going to cover all of Bash’s syntax, just the parts we need for this script.

First we need variables. Inside a bash script you define a variable by typing name=value. For example, if we wanted to set a variable that contains a message, we would type:

message="Hello"

There shouldn’t be any spaces around the equals sign for it to work. Once set, you can then retrieve the value of a variable by typing its name with a dollar sign before it. So to print the above variable’s value, we would type:

echo $message

Besides the variables that you define and set yourself, you can use a couple of global variables that are set by your environment. These may be different according to your setup, but the ones we will be using are $USER for the currently logged in user and $PWD for the current folder. You can see which variables are available in your environment by running printenv, which will print out all the environment's current variables.

The next thing our script will need is to be able to accept command line arguments. This is actually really easy to do, as they become numbered variables, so $1 represents the first parameter, $2 the second, and so on.

The last thing we will need to use in our script is if statements. These are similar to how you would write an if statement in most programming languages, with a few notable quirks:

if [ expression ]
then
    do something here
else
    do something else here
fi

In bash scripts you place the expression between a pair of square brackets, and you have to leave a space between the brackets and the expression. You should also note that the then line is a requirement. The last difference, which is also found in other bash structures, is the fi keyword. Basically, you just type the if backwards; it's the same for a switch statement, for example, where you start the switch block with case and end it with esac (case reversed).

So with this information, let’s construct a simple script to help us upload and download our code to Koding:


Building Our Script

To begin, we need the shebang line to tell the computer to run it as a shell script, and then I will create a simple helper function which will tell the user how to use this command:

#!/bin/sh

function koding_usage
{
    echo "Usage: koding [push|pull] <vm_number> <username>"
    exit 1
}

If you are new to exit codes, 0 means it exited successfully and is the default returned when a script finishes, whereas anything else is an exit code for when an error has occurred. So if this function gets called, it means that the script wasn’t used correctly and we will exit with an error code.

Next, we need to make sure the arguments were passed in correctly and in the process, collect them and store them in some helper variables:

if [ "$1" = "" ]; then
    echo "Command Required"
    koding_usage
fi

if [ "$1" != "push" ] && [ "$1" != "pull" ]; then
    echo "You Can Only push or pull"
    koding_usage
else
    command=$1
fi

if [ "$2" = "" ]; then
    echo "VM Number Required"
    koding_usage
else
    vmnumber=$2
fi

if [ "$3" = "" ]; then
    username=$USER
else
    username=$3
fi

In this code, we are making four different checks:

  1. we check if there is a first parameter
  2. we check to make sure the first parameter is either 'push' or 'pull'
  3. we make sure there is a second parameter
  4. we check whether the third parameter was set

In the first three if statements, if there was an issue we echo out a message and then call our helper method from above. For the last one though, if no username was supplied we will just use the currently logged in user’s username. So if your computer’s username is the same as your Koding username, you can leave the last parameter off.

The last thing we need to do, is actually run the rsync commands based on the command requested (push or pull):

if [ "$command" = "push" ]; then
    rsync -rvza --delete $PWD/ vm-$vmnumber.$username.koding.kd.io:~/Web
else
    rsync -rvza vm-$vmnumber.$username.koding.kd.io:~/Web/ $PWD
fi

You can see we are just placing the variables we collected (along with the current folder, $PWD) right into the command. Since this is a shell script, you can just place shell commands straight in, like I did above.

Now save the file, name it koding, and make it executable (you can do this by running chmod +x koding). Last but not least, move the file to your bin folder:

mv koding /usr/local/bin/

If you did everything correctly, you should be able to run koding and see our usage message come up. So now you can make a quick change to the example project above and simply run:

koding push 0

Assuming you don't need the username parameter, this will transfer your current folder to the Web directory on your server named vm-0. The same goes for when you make changes online: you can cd into the local project folder and run:

koding pull 0

And you will receive all the updates.


Conclusion

Koding is a really powerful tool for prototyping and learning through experimentation. It has really cool social and project management capabilities, and being able to code with someone else, live, can make a huge difference when you are trying to debug some code. Not to mention that this is all free, which means there really isn't a reason not to give it a try.

I really like the idea of having kdapps which run outside the VMs, and I think it will be cool to see where people take that and what kind of tools they build.

You can sign up to Koding by visiting koding.com.

Thank you for reading. I hope you enjoyed it. If you have any questions, feel free to leave me a comment down below, on Twitter, or via the Nettuts+ IRC channel (#nettuts on freenode).
