We’re excited to let you know that we’re currently planning our first Tuts+ and Envato meetup in the UK! We’ll have a great venue, some exciting goodies to give away, and you’ll have the opportunity to meet various Envato staff (as well as fellow Tuts+ readers!). If you’re interested in joining us, read on to find out how you can RSVP, and help us choose the meetup location.
What’s Happening?
Although Envato has organized plenty of meetups to date (in places such as Melbourne, Kuala Lumpur, Chicago, and New York!), this will be our first official UK event. First and foremost, it’s a brilliant chance to meet lots of like-minded creative professionals and developers, as well as various members of the Tuts+ team. A few of the editors likely to be in attendance are Michael James Williams, Johnny Winter, Sharon Milne, Joel Bankhead, David Appleyard, Ian Yates, Neil Pearce, and one of our top authors — Martin Perhiniak.
We’ll have a fantastic, funky venue, free food and drink, and lots of exciting goodies to give away to attendees. You’ll have the chance to talk to our team about what we’re doing at Tuts+ and Envato, as well as picking their brains on anything from web design and game development, to illustration and electronics!
Just to give you a feel for what to expect, here’s a selection of photos from last year’s meetup in New York:
When and Where?
We’ve fixed a date of Saturday 9th November. It’s far enough away to give you plenty of time to plan, and close enough to the holidays for you to also do a little Christmas shopping before the meetup in the evening! The meetup will be happening late afternoon/evening (although we’ll let you know all the final times closer to the date).
We’re going to be holding it in either London or Manchester — it’s up to you to decide! If you’re interested in attending, we’d be thrilled to meet you. Submit your RSVP below, and be sure to let us know where you’d like the meetup to be held. We’ll keep you updated as we get closer to the date.
RSVP to Join the Guest List!
We’re incredibly excited to have the opportunity to meet lots of our wonderful readers, and we really hope you’ll be able to make it. Let’s bring Tuts+ and Envato to the UK!
In this series we’re going to build a web billboard application from scratch. We’ll use CodeIgniter to handle the back-end service and BackboneJS for the web client. We’ll create the back-end service in the first two parts of the series, and the client application in the last two.
Application Description
The application we are creating will be a simple billboard where users can register, post tasks, and offer a reward for their completion. Other users can see the existing tasks, assign a task to themselves, and earn the offered reward.
The tasks will have basic data like a title, description and reward (as required parameters) and an optional due date and notes. The user profile will simply consist of the user’s name, email and website. So let’s get started.
Database Setup
First off, for the app data we’re going to use MongoDB as the database server. MongoDB is a document-oriented database and the leading NoSQL database out there. It’s really scalable and fast, which makes it great for managing huge amounts of data.
In order to use MongoDB in this application, I’m going to use a MongoDB CodeIgniter driver that I wrote some time ago. It’s just a wrapper around the MongoDB PHP driver that mimics the framework’s SQL ActiveRecord. You can find the source files for this driver in my public repository. For this driver to work properly, make sure that you have PHP’s MongoDB driver installed; if you don’t, follow these steps to get it working.
Please note that explaining CodeIgniter drivers and the like is beyond the scope of this tutorial; refer to the documentation if you have any doubts. You just need to move the "mongo_db.php" file from the driver's "config" folder to the "config" folder of your application, and the "Mongo_db" folder from its "libraries" folder to the "libraries" folder of your application.
Database Configuration
The only file we need to edit at this point is the "mongo_db.php" file under the "config" folder. Since my MongoDB installation uses all the default parameters, I’m just going to edit line 40 and give it the name of the database that I want to use:
$config['mongo_db'] = 'billboard';
That’s it for the database. One of the many advantages of MongoDB is that documents have no predefined structure, so it works without us needing to set anything up before using it. Our database doesn’t even have to exist yet; MongoDB will create it on the fly when we first need it.
Global Configuration
Other than your regular configuration options, which should include the base_url and the index_page (if any), we need to set the string and date helpers to autoload. I’m not going to walk you through this since we have much more to cover; when in doubt, refer to the documentation.
Other than the helpers, we need to set up the encryption class since we’re going to use it for our app.
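For reference, here’s a minimal sketch of what those entries might look like in "application/config/autoload.php" — this assumes CodeIgniter 2.x, where the encryption library is named "encrypt" and expects an encryption_key in "config.php":

$autoload['helper'] = array( 'string', 'date' );
$autoload['libraries'] = array( 'encrypt' );

// In application/config/config.php, set your own secret key:
$config['encryption_key'] = 'some-long-random-string';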
URL Handling
This is going to be a RESTful service and we need a way to take the requests coming to the server and handle them accordingly. We could use an existing library (which is great by the way) but for the purposes of this demonstration, I’m going to create the functionality I need using CodeIgniter’s core features.
Handling RESTful Requests
In particular, we’re going to use the ability to extend the core classes. We will start with the Controller: the main part of this extension is a "_remap" method in a base controller, so that all the controllers of our app can use it. Start by creating a MY_Controller.php file inside the "core" folder in the "application" folder. We create this just like any other CodeIgniter controller, as follows:
<?php
if( !defined( 'BASEPATH' ) ) exit( 'No direct script access allowed' );
class MY_Controller extends CI_Controller {
}
Now in this controller we’re going to use the CodeIgniter _remap method to preprocess every request made to the server. Inside the class we just created, add the following method:
public function _remap( $param ) {
    $request = $_SERVER['REQUEST_METHOD'];
    switch( strtoupper( $request ) ) {
        case 'GET':
            $method = 'read';
            break;
        case 'POST':
            $method = 'save';
            break;
        case 'PUT':
            $method = 'update';
            break;
        case 'DELETE':
            $method = 'remove';
            break;
        case 'OPTIONS':
            $method = '_options';
            break;
    }
    $this->$method( $id );
}
A couple of things to note here. First off, there are some REST verbs that we are ignoring (like PATCH); since I’m demonstrating building a REST app, I don’t want to add things that would make this more complex than it needs to be. Secondly, we’re not taking into account the case where a controller doesn’t implement a particular method, which could very well happen. We could add a default method to handle such requests, but to keep the complexity down, let’s leave it like this. Third, we’re receiving a $param variable in the method declaration; let’s address that, and then I’ll explain the OPTIONS request. Above the switch statement, add the following code:
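The snippet itself isn’t reproduced here, but based on the description that follows, it amounts to something like this (the exact pattern is an assumption):

$id = null;
// A MongoDB _id string is alphanumeric; anything else is ignored
if ( preg_match( '/^[a-zA-Z0-9]+$/', $param ) ) {
    $id = $param;
}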
This regular expression matches any string consisting of uppercase and lowercase letters and any numbers. This is used to check if a MongoDB _id string is being given as a parameter, again, this is not the safest way nor the most thorough check, but for the sake of simplicity, we’ll keep it as is.
OPTIONS Request
Since we’re building the web service and the client application as separate parts, it makes sense that they’ll be hosted on different domains, so we will enable CORS in the back-end. This means, among other things, that our app must respond properly to OPTIONS requests.
When a web app created with BackboneJS (and some other frameworks) tries to make an asynchronous request to a remote server, it sends an OPTIONS request before sending the actual request it’s supposed to send. Among other things, the client tells the server from where it is sending the request, what type of request it is about to send, and the content that it’s expecting. After that, it is up to the server to send the client a response where it acknowledges the request or rejects it.
Since our back-end service, no matter which controller is called, is going to receive this OPTIONS request, it makes sense to implement the method to respond to it in our base controller. Add the following method below (or above) the _remap method in our controller.
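The method isn’t included here; a minimal sketch that matches the _remap mapping could look like the following (the exact headers and allowed values are assumptions, not the article’s definitive implementation):

protected function _options( $id = null ) {
    // Acknowledge the CORS preflight request
    header( 'Access-Control-Allow-Origin: *' );
    header( 'Access-Control-Allow-Methods: GET, POST, PUT, DELETE, OPTIONS' );
    header( 'Access-Control-Allow-Headers: Content-Type, X-Requested-With' );
}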
Ideally, we would only allow certain domains to make requests to us, we would check the request headers to decide whether we accept the request, and we would check the content type expected by the client to see if we support it. But again, this is a not-so-complex app and we are skipping these edge cases.
Managing Output
To finish our base controller, let’s create a method that every controller will use to send its results back to the client. In the base controller class, add the following method:
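That method is missing from the text above; based on the explanation that follows, a sketch could look like this (the name _format_output comes from the controllers further down, while the error check and status code are assumptions):

protected function _format_output( $result ) {
    // Let the client application (hosted on another domain) read the response
    header( 'Access-Control-Allow-Origin: *' );
    // A faulty result carries an error set by the model
    if ( isset( $result['error'] ) ) {
        $this->output->set_status_header( 400 );
    }
    $result = $this->parse_data( $result );
    $this->output
         ->set_content_type( 'application/json' )
         ->set_output( json_encode( $result ) );
}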
Again, in order for BackboneJS to process the server response, it has to know that its host is accepted by the server, hence the Allow-Origin header. Then, if the result is a faulty one, we set a status header indicating this; the status will become clearer when we create the back-end models. Next we run the result through the parse_data helper, a private method that we will write in a moment. Finally we set the content type to JSON and encode the response as a JSON object. Here too we could (and should) support other output formats (like XML).
Now let’s create the parse_data helper method (I’ll explain it afterwards); add the following code to the base controller:
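The helper isn’t shown above; judging from the explanation below, it would look roughly like this (a sketch assuming the legacy PHP Mongo driver’s MongoId and MongoDate classes):

private function parse_data( $data ) {
    foreach ( $data as $key => $value ) {
        if ( is_array( $value ) || is_object( $value ) ) {
            if ( $value instanceof MongoId ) {
                // The client only needs the id's string value
                $value = $value->__toString();
            } else if ( $value instanceof MongoDate ) {
                // Send dates as day.month.year for the client's convenience
                $value = date( 'd.m.Y', $value->sec );
            } else {
                // Recurse into nested arrays and objects
                $value = $this->parse_data( $value );
            }
            if ( is_object( $data ) ) {
                $data->$key = $value;
            } else {
                $data[$key] = $value;
            }
        }
    }
    return $data;
}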
First off, note that we only parse arrays and objects, and we do it recursively. This pre-parsing has to do with the fact that MongoDB represents dates and IDs as objects, but our clients don’t need that structure. In the case of IDs, we just need the string value (the object’s $id), hence the toString call. Dates are converted to a day.month.year format for convenience in the design of the client application. Again, not the most flexible approach, but it works for this example.
Handling Input
Since we’re sending JSON back to the client application, it is only logical that we accept data in JSON format as well. CodeIgniter doesn’t support this out of the box the way Laravel does; as a matter of fact, CodeIgniter doesn’t even support PUT and DELETE params. This is mainly because the framework is not intended for building RESTful services, but the effort it takes to adapt it is minimal compared to the benefits, at least from my point of view.
So we will start by supporting the JSON data that BackboneJS sends. Create a new file inside the "core" folder, this time it is going to be named "MY_Input.php" and it will have the following basic structure:
<?php
if( !defined( 'BASEPATH' ) ) exit( 'No direct script access allowed' );
class MY_Input extends CI_Input {
}
Now every time we use $this->input in our application we’ll be referring to this class. We will create some new methods and override a few existing ones. First off, we are going to add support for JSON data; add the following method to the new class.
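The method is not reproduced above; here’s a sketch matching the description that follows (the method name json and the payload sniffing details are assumptions):

public function json() {
    if ( is_null( self::$request_params ) ) {
        // php://input also carries PUT and DELETE payloads, unlike $_POST
        $payload = trim( file_get_contents( 'php://input' ) );
        $decoded = json_decode( $payload );
        if ( !is_null( $decoded ) ) {
            self::$request_params = $decoded;
        } else {
            // Fall back to treating the payload as a query string
            parse_str( $payload, $params );
            self::$request_params = (object) $params;
        }
    }
    return self::$request_params;
}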
$request_params is a static variable used to store the request string/data sent by the client. It is static in order to make it object independent so that we can access it from any controller at any given time. The data is obtained from the php://input stream rather than the $_POST global. This is done in order to obtain the data sent in via PUT and DELETE requests as well. Finally, the obtained payload is inspected to check if it’s an array, a JSON encoded object, or a query string, and it’s processed accordingly. The result is then returned as an object.
For this method to work, we need to create the static $request_params variable, add its declaration to the top of the class.
private static $request_params = null;
Handling Regular Requests
Next, we need to override the post method of the regular input class to use the new JSON payload instead of the $_POST global, add the following method to the new Input class.
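That override isn’t shown above; a sketch along the lines of CI_Input’s own post signature might be (the xss_clean handling is an assumption):

public function post( $index = null, $xss_clean = false ) {
    $params = (array) $this->json();
    if ( is_null( $index ) ) {
        return $params;
    }
    if ( !isset( $params[$index] ) ) {
        return false;
    }
    // Optionally run the value through CodeIgniter's XSS filter
    return $xss_clean ? $this->security->xss_clean( $params[$index] ) : $params[$index];
}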
This is almost the same as the post method from the original CI_Input class, the difference being that it uses our new JSON method instead of the $_POST global to retrieve the post data. Now let’s do the same for the PUT method.
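In line with the next paragraph’s point that these methods simply delegate to post, a sketch of the put (and delete) methods could be:

public function put( $index = null, $xss_clean = false ) {
    // Semantically separate, but it reads the same JSON payload
    return $this->post( $index, $xss_clean );
}

public function delete( $index = null, $xss_clean = false ) {
    return $this->post( $index, $xss_clean );
}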
Now technically, there’s really no need for these additional methods, since the post method can handle the params in the PUT and DELETE requests, but semantically it’s better (in my opinion).
This is all we need for our custom Input class. Again, we’re ignoring edge cases here, like multipart requests. It wouldn’t be very hard to handle those while keeping the functionality we’ve built, but for the sake of simplicity we’ll keep it just the way it is.
Base Model
To end the extension of the core classes, let’s create a base model that every model in the app will build upon; this is just to avoid repeating common tasks in every model. Like any other core class extension, here’s our barebones base model:
<?php
if( !defined( 'BASEPATH' ) ) exit( 'No direct script access allowed' );
class MY_Model extends CI_Model {
}
This base model will only serve the purpose of setting and retrieving errors. Add the following method to set a model error:
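The setter isn’t included above; a minimal sketch could be the following (returning false so the models can bail out in one line is an assumption):

protected function set_error( $error ) {
    $this->_error = array( 'error' => $error );
    // Returning false lets a model do: return $this->set_error( '...' );
    return false;
}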
As you can see, this method uses the instance variable $_error, so let’s add its declaration to the top of our base model class.
protected $_error;
Finally, to keep it structured, let’s create the getter method for this property.
public function get_error() {
    return $this->_error;
}
Handling Sessions
Session Controller
For the last part of this tutorial, we will create the controller and model to handle user sessions.
The controller for our session is going to respond to requests made to our Session resource. Since a session can’t be retrieved after creation, nor updated directly, this controller will only respond to POST and DELETE requests. Please note that sending any other request to the resource will result in a server error; we’re not dealing with edge cases here, but this could easily be avoided by checking in our MY_Controller file whether the method being called exists, and setting a default method name when the resource doesn’t support the request.
Below you’ll find the structure for our Session controller:
<?php
if ( !defined( 'BASEPATH' ) ) exit( 'No direct script access allowed' );
class Session extends MY_Controller {
public function __construct() {}
public function save() {}
public function remove( $id = null ) {}
}
Note that this controller extends the MY_Controller class instead of the regular CI_Controller class; we do this in order to use the _remap method and the other functionality that we created earlier. OK, so now let’s start with the constructor.
public function __construct() {
    parent::__construct();
    $this->load->model( 'session_model', 'model' );
}
This simple constructor just calls its parent constructor (as every controller in CodeIgniter must do) and then loads the controller’s model. The code for the save method is as follows.
public function save() {
    $result = $this->model->create();
    if ( !$result ) {
        $result = $this->model->get_error();
    }
    $this->_format_output( $result );
}
And then the code for the remove method:
public function remove( $id = null ) {
    $result = $this->model->destroy( $id );
    if ( !$result ) {
        $result = $this->model->get_error();
    }
    $this->_format_output( $result );
}
Both methods simply delegate the task at hand to the model, which handles the actual data manipulation. In a real world application, the necessary data validation and session checking would be done in the controller, and the common tasks such as session checking should be implemented in the base controller.
Session Model
Now let’s move on to the session model. Here is its basic structure:
<?php
if ( !defined( 'BASEPATH' ) ) exit( 'No direct script access allowed' );
class Session_Model extends MY_Model {
public function __construct() {}
public function create() {}
public function destroy( $id ) {}
}
Like the controller, this model extends the MY_Model class instead of the regular CI_Model class; again, this is done so we can use the common methods that we created earlier. Let’s start with the constructor.
public function __construct() {
    parent::__construct();
    $this->load->driver( 'mongo_db' );
}
In this case, we call the parent constructor and load the Mongo_db driver that we discussed earlier. Now we’ll continue with the method in charge of destroying the session.
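The method body isn’t shown above. Given that the driver mimics CodeIgniter’s ActiveRecord, a rough sketch might look like this (the collection name and the where/get/delete methods are assumptions; gen_id comes from the explanation below):

public function destroy( $id ) {
    // The id string has to be turned back into a MongoDB id object
    $mongo_id = $this->mongo_db->gen_id( $id );
    $session = $this->mongo_db->where( array( '_id' => $mongo_id ) )->get( 'sessions' );
    if ( empty( $session ) ) {
        return $this->set_error( 'Session not found' );
    }
    if ( !$this->mongo_db->where( array( '_id' => $mongo_id ) )->delete( 'sessions' ) ) {
        return $this->set_error( 'The session could not be removed' );
    }
    return array( 'success' => 'Session removed' );
}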
In this method we check if there’s a session for the given session_id, and if so we attempt to remove it, sending a success message if everything goes OK, or setting an error and returning false if something goes wrong. Note that when using the session_id we call the special method $this->mongo_db->gen_id; as I mentioned earlier, IDs in MongoDB are objects, so we use the id string to create one.
Finally, let’s write the create method which will wrap up part one of this tutorial series.
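Again the listing itself is missing; based on the two paragraphs below, a rough sketch could be the following (the hashing scheme, field names, and collection names are all assumptions):

public function create() {
    $email    = $this->input->post( 'email' );
    $password = $this->input->post( 'password' );

    // Is there a user associated with the given email?
    $user = $this->mongo_db->where( array( 'email' => $email ) )->get( 'users' );
    if ( empty( $user ) ) {
        return $this->set_error( 'Wrong email or password' );
    }
    $user = $user[0];

    // The salt is stored encrypted; registration is covered in part two
    $salt = $this->encrypt->decode( $user['salt'] );
    if ( $user['password'] !== sha1( $salt . $password ) ) {
        return $this->set_error( 'Wrong email or password' );
    }

    // Remove any previous session, then create a fresh one
    $this->mongo_db->where( array( 'user_id' => $user['_id'] ) )->delete( 'sessions' );
    $session_id = $this->mongo_db->insert( 'sessions', array( 'user_id' => $user['_id'] ) );

    return array( 'session_id' => $session_id, 'user_id' => $user['_id'] );
}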
First of all, we check that there’s a user associated with the given email. Then we decode the user’s associated salt (which I’ll explain in the second part of this series when we cover user registration) and check that the given password matches the user’s stored password.
We then remove any previous session associated with the user and create a new session object. If we were checking the session thoroughly, we would add things like the user_agent, ip_address, last_activity field and so on to this object. Finally, we send back to the client the session and user IDs for the new session.
Conclusion
This has been a rather long tutorial and we covered a lot of topics, with even more still to come. Hopefully by now you have a better understanding of RESTful, stateless services and how to create one with CodeIgniter, and perhaps you’ve also picked up some new ideas for extending the framework’s core functionality.
In the next part we will finish the back-end service, and in parts three and four we’ll cover the BackboneJS client application. If you have any doubts or suggestions, please leave them in the comments section below.
In this tutorial, we will cover:
Downloading and installing Xdebug on your local machine (Mac OS X 10.6.6+, MAMP 2.1.1).
Integrating it with PhpStorm.
Practicing debugging.
What You Will Need
A Mac running Mac OS X 10.6.6+.
If you are on 10.8.X you may need to install XQuartz as Apple removed X11.
If you are on Windows, the whole process is somewhat easier, just hit Google for more details.
Apple Xcode 4.6 (free on the Mac App Store).
Command Line Tools.
Homebrew.
A terminal app of your choice.
PhpStorm 5+ (many other IDEs will work as well).
What Is Xdebug?
Well, technically, Xdebug is an extension for PHP to make your life easier while debugging your code. Right now, you may be used to debugging your code with various other simple solutions. These include using echo statements at different states within your program to find out if your application passes a condition or to get the value of a certain variable. Furthermore, you might often use functions like var_dump, print_r or others to inspect objects and arrays.
What I often come across are little helper functions, like this one for instance:
function dump($value) {
    echo '<pre>';
    var_dump($value);
    echo '</pre>';
}
The truth is, I used to do this too, for a very long time actually. So what’s wrong with it? Technically, there is nothing wrong with it. It works and does what it should do.
But just imagine for a moment, as your applications evolve, you might get into the habit of sprinkling your code all over with little echos, var_dumps and custom debuggers. Now granted, this isn’t obstructive during your testing workflow, but what if you forget to clean out some of that debug code before it goes to production? This can cause some pretty scary issues, as those tiny debuggers may even find their way into version control and stay there for a long time.
The next question is: how do you debug in production? Again, imagine you’re surfing one of your favorite web-services and suddenly you get a big array dump of debug information presented to you on screen. Now of course it may disappear after the next browser refresh, but it’s not a very good experience for the user of the website.
Now lastly, have you ever wished to be able to step through your code, line by line, watch expressions, and even step into a function call to see why it’s producing the wrong return value?
Well, you should definitely dig into the world of professional debugging with Xdebug, as it can solve all of the problems above.
Configuring MAMP
I don’t want to go too deep into the downloading and installation process of MAMP on a Mac. Instead, I’ll just share with you that I’m using PHP 5.4.4 and the standard Apache Port (80) throughout this read.
Your First Decision
A quick note before we start with building our own Xdebug via Homebrew: If you want to take the easiest route, MAMP already comes with Xdebug 2.2.0. To enable it, open:
/Applications/MAMP/bin/php/php5.4.4/conf/php.ini
with a text editor of your choice, go to the very bottom and uncomment the very last line by removing the ;.
The last two lines of the file should read like this:
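On a standard MAMP 2.1.1 install, they should be along these lines (the exact extension path can differ between versions):

[xdebug]
zend_extension="/Applications/MAMP/bin/php/php5.4.4/lib/php/extensions/no-debug-non-zts-20100525/xdebug.so"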
“Why would I want to choose a harder way than this one?”
And my answer to that is: it is never a mistake to look beyond your own horizon and learn something new. Especially as a developer these days, keeping an eye on server-related matters will always come in handy at some point. Promised.
Install Xcode and Command Line Tools
You can get Apple Xcode for free off of the Mac App Store. Once you’ve downloaded it, please go to the application preferences, hit the “Downloads” tab and install the “Command Line Tools” from the list.
Install Homebrew
Homebrew is a neat little package manager for Mac OS X which gets you all the stuff Apple left out. To install Homebrew, just paste the following command into your terminal.
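At the time of writing, the install command published on the Homebrew homepage was the Ruby one-liner below; check brew.sh for the current version before running it:

ruby -e "$(curl -fsSL https://raw.github.com/mxcl/homebrew/go)"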
On a Mac, Homebrew is the most convenient way to install Xdebug. On Linux you would typically compile it yourself, but that’s not so easy on a Mac.
Tip: Windows users just need to download the *.dll file from Xdebug.org, put it into the XAMPP folder and add the path to their php.ini file.
As a PHP developer, you should from now on be aware of Jose Gonzalez’s “homebrew-php” GitHub repo, which holds a lot of useful “brews” for you. If you’ve ever wondered how to install PHP 5.4 manually, you’re in the right place.
Now if you get into any trouble while installing Homebrew, check out Jose’s Readme.
To complete our Homebrew excursion, we want to “tap” into Jose’s brewing formulae by executing the following commands within your terminal application:
brew tap homebrew/dupes
This will get us some dependencies we need for Jose’s formulae.
brew tap josegonzalez/homebrew-php
Done! Now we should be ready to install Xdebug the comfy way, on a Mac.
Install Xdebug
Back in your terminal application, please execute:
brew install php54-xdebug
If you are on PHP 5.3, just replace the “4” with a “3” ;)
The installation will take some time. After it’s done, you’ll see a little beer icon and some further instructions which you can ignore.
So what just happened? Homebrew downloaded all the files including their dependencies and built them for you. As I’ve already told you, compiling yourself on a Mac can be a hassle. At the end, we got a freshly compiled xdebug.so located at /usr/local/Cellar/php54-xdebug/2.2.1/.
Attention: Please note that Homebrew will install PHP 5.4 to your system during the process. This should not influence anything as it is not enabled on your system.
To finally install Xdebug, we just need to follow a few more steps.
Change directory (cd) to MAMP’s extensions folder:
cd /Applications/MAMP/bin/php/php5.4.4/lib/php/extensions/no-debug-non-zts-20100525
You can re-check the path by looking at the last line of /Applications/MAMP/bin/php/php5.4.4/conf/php.ini, as this is where we are going.
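The copy command itself isn’t shown in the text; it would be something along these lines, with the version number adjusted to whatever Homebrew built for you:

cp /usr/local/Cellar/php54-xdebug/2.2.1/xdebug.so ./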
If you want to force the copy (cp) command to overwrite existing files, just do cp -f source target.
Last, but not least, we need to modify the php.ini file to load the Xdebug extension file. Open /Applications/MAMP/bin/php/php5.4.4/conf/php.ini with a text editor of your choice, go to the very bottom and uncomment the last line by removing the semicolon at the front. Don’t close the file just yet.
Now relaunch MAMP, go to http://localhost/MAMP/phpinfo.php. If everything went well, you should find this within the output:
If it did not work, please make sure that you really copied over the xdebug.so and have the right path in your php.ini file.
Start Debugging
Before we can actually start debugging, we need to enable Xdebug. Therefore, I hope you didn’t close out your php.ini, as we need to add this line to the very end, after the zend_extension option:
xdebug.remote_enable = On
Save and close your php.ini file and restart MAMP. Go to http://localhost/MAMP/phpinfo.php again and search for xdebug.remote on the site. Your values should look exactly like mine:
If they do not, add the missing xdebug.remote_* settings to the end of your php.ini file, the same way you added xdebug.remote_enable = On.
Now, open your IDE of choice. You can use Xdebug with a number of popular software solutions like Eclipse, Netbeans, PhpStorm and also Sublime Text. Like I said before, I am going to use PhpStorm EAP 6 for this demo.
Inside of PhpStorm, open the application preferences and find your way to “PHP \ Debug \ DBGp Proxy” on the left hand side, like in the screenshot below:
Now choose your personal IDE key. This can be any alphanumeric string you want. I prefer to just call it PHPSTORM, but XDEBUG_IDE or myname would be perfectly fine too. It is important to set the “Port” value to 9000 as our standard Xdebug configuration uses this port to connect to the IDE.
Tip: If you need to adjust this, add xdebug.remote_port = portnumber to your php.ini file.
Attention: Other components may change this value inside of PhpStorm, so watch out for it if something fails.
Next, click that red little phone button with a tiny bug next to it on the top toolbar. It should turn green. This makes PhpStorm listen for any incoming Xdebug connections.
Now we need to create something to debug. Create a new PHP file, call it whatever you’d like and paste in the following code:
<?php
// Declare data file name
$dataFile = 'data.json';

// Load our data
$data = loadData($dataFile);

// Could we load the data?
if (!$data) {
    die('Could not load data');
}

if (!isset($data['hitCount'])) {
    $data['hitCount'] = 1;
}
else {
    $data['hitCount'] += 1;
}

$result = saveData($data, $dataFile);
echo ($result) ? 'Success' : 'Error';

function loadData($file)
{
    // Does the file exist?
    if (!file_exists($file)) {
        // Well, just create it now
        // Save an empty array encoded to JSON in it
        file_put_contents($file, json_encode(array()));
    }

    // Get JSON data
    $jsonData = file_get_contents($file);
    $phpData = json_decode($jsonData);

    return ($phpData) ? $phpData : false;
}

function saveData($array, $file)
{
    $jsonData = json_encode($array);
    $bytes = file_put_contents($file, $jsonData);

    return ($bytes != 0) ? true : false;
}
Now this code is faulty by default, but we will fix it in a moment, in the next section.
Make sure everything is saved and open up your browser to the script we just created. I will use Google Chrome for this demo, but any browser will do.
Now let’s take a moment to understand how the debugging process is initialized. Our current status is: Xdebug is enabled as a Zend extension, watching each request for a debug cookie to appear. This cookie carries an IDE key, which should be the same as the one we set up inside our IDE. When Xdebug sees a request carrying the cookie, it will try to connect to a proxy (our IDE) on port 9000.
So how do we get that cookie in place? PHP’s setcookie? No. Although there are multiple ways, even some to get this working without a cookie, we will use a little browser extension as a helper.
Install the “Xdebug helper” extension in your Google Chrome browser, or search for an equivalent extension for whichever browser you are using.
Once you’ve installed the extension, right click the little bug appearing in your address bar and go to the options. Configure the value for the IDE key to match the key you chose in your IDE, like so:
After configuring it, click the bug and select “Debug” from the list. The bug should turn green:
Now, go back to PhpStorm or your IDE of choice and set a “breakpoint”. Breakpoints are like markers on a line which tell the debugger to halt the execution of the script at that breakpoint.
In PhpStorm, you can simply add breakpoints by clicking the space next to the line numbers on the left hand side:
Just try to click where the red dot appears on the screenshot. You will then have a breakpoint set at which your script should pause.
Note: You can have multiple breakpoints in as many files as you’d like.
Now we are all set. Go back to your browser, make sure the bug is green and just reload the page to submit the cookie with the next request.
Tip: if you set a cookie, it will be available to the next request.
If everything goes according to plan, this window should pop up inside of PhpStorm to inform you of an incoming debug connection:
Did the window not popup for you? Let’s do some troubleshooting and repeat what needs to be set in order for this to succeed:
You should find Xdebug info inside of phpinfo()’s output. If not, get the xdebug.so file in the right place and set up your php.ini file.
Set PhpStorm DBGp settings to your IDE key e.g. “PHPSTORM” and port “9000”.
Make PhpStorm listen for incoming debug connections using the red phone icon which will then turn green.
Set a breakpoint in your code, or select “Run \ Break at first line in PHP scripts” to be independent from any breakpoints. Note that this is not suited for practical use.
Get a browser extension to set the Xdebug cookie.
Make sure the browser extension has the same IDE key in it that you chose inside of your IDE.
Reload the page and PhpStorm should get the connection.
If you get the dialog seen on the previous image, please accept it. This will take you into debug mode, like so:
You can see that the debugger stopped the script’s execution at your breakpoint, highlighting the line in blue. PHP is now waiting and controlled by Xdebug, which is being steered by your very own hands from now on.
Our main workspace will be the lower section of the IDE which is already showing some information about the running script (the superglobals).
And would you look at that? There’s the cookie we just set to start the debugging session. You can now click through the superglobals and inspect their values at this very moment. PHP is waiting; not even the default 30-second time limit applies.
On the left side, you’ll see a few buttons. For now, only “Play” and “Stop” are of interest to us. The green play button will resume the script. If there is another breakpoint in the code, the script will continue until it reaches the breakpoint and halt again.
The red stop button aborts the script. Just like PHP’s exit or die would do.
Now the really interesting ones come in the upper section of the debug window:
Let’s quickly check them out:
Step Over: This means step one line ahead.
Step Into: If the blue line highlights, for example, a function call, this button lets you step through the inner workings of the function.
Step Out: If you stepped into a function and want to get out before the end is reached, just step out.
Run to cursor: Let’s say, for example, your file is 100 lines long and your breakpoint was set at line two in order to inspect something. Now you want to quickly run to the point where you just placed your cursor: this button is for you. You could also click “Step over” n times ;)
Now don’t worry, as you use Xdebug you will rapidly adapt to the shortcuts on the keyboard.
Actually Debugging Some Example Code
I already told you that the code you copy/pasted is faulty, so you’ll need to debug it. Start stepping over the code, statement by statement.
Note that the blue line only halts on lines which actually contain a command. Whitespace and comments will be skipped.
Once you reach the function call to loadData, please do not step into it, just step over and halt on the if statement.
You can see two new variables in the “Variables” panel on the bottom of the screen. Now, why did the $data variable return false? It seems like the script should have done its job. Let’s take a look. Go back to line seven to step into the function call -> bam! We get a message informing us that we can not “step back”. In order to get your debugger to line seven again, you need to stop this session and reload the page in the browser. Do so and step into the function call this time.
Stop on the return statement inside of the loadData function and see what happened:
The $phpData array is empty. The return statement uses a ternary operator to detect what to return. And it will return false for an empty array.
Fix the line to say:
return $phpData;
json_decode will either return the decoded data, or null on failure, so returning $phpData directly is enough. Now stop the debug session, reload your browser, and step over the function call this time.
Now it seems like we still have a problem as we step into the condition. Please fix the condition to use is_null() to detect what’s going on:
if (is_null($data)) {
die('Could not load data');
}
Now it’s up to you to try and step around a bit. I would suggest reverting the script to the original faulty version, debugging it with echos, and then comparing how that feels to using Xdebug.
Conclusion
Throughout this article you should have gained a lot of new knowledge. Don’t hesitate to read it again and to help a friend set up Xdebug – nothing better than that!
You may want to try replacing your usual debugging habits with Xdebug, especially on larger, object-oriented projects. They become much easier to debug, and stepping through the code helps you catch up on the flow when you don’t understand something right away.
Note that this is just the tip of the iceberg. Xdebug offers much more power which needs to be explored as well.
Please feel free to ask any questions in the comments and let me know what you think.
There's so much goodness happening on the web and as it continues to evolve, it's important that talented individuals step up into leadership roles to help shape the future of web development. And doing this isn't an easy task. Not only do you need to have the technical chops to help define new techniques and paradigms or create the next great technology, it's equally important to be able to effectively convey your message in a passionate and credible fashion so that your peers respect your direction.
Lea Verou is one of this new breed of leaders, helping to push the web forward through her technical savvy and profound love for web standards. She’s developed quite a following, and her live coding sessions at major conferences are a thing of legend.
We had an opportunity to find out more about her in this Q&A.
Q Let’s start with the usual. Could you give us a quick intro about yourself?
I’m Lea Verou and I’m a web designer/developer and web standards geek (sounds like an AA introduction, doesn’t it?). I’ve created several open source projects such as Prism, a syntax highlighter used in A List Apart, Smashing Magazine, WebPlatform.org, MDN and other big websites, Dabblet, an interactive code playground, or -prefix-free, a JavaScript library that lets authors forget about vendor prefixes and code to the future standards. I’ve also come up with and published several CSS techniques, such as using CSS gradients to create patterns. I’m currently employed by W3C, although I’ve announced that I’m leaving at the end of July to pursue other challenges, such as writing and designing my first book.
Q You’ve risen quickly to be one of the most recognized and respected web developers around. Has that changed the way that you view yourself within the community and the responsibilities you may (or may not) have in promoting best practices and specific technologies?
Not really, to be honest. I still do my thing, make stuff and put it out there in the hopes they will be useful for someone. I still speak my mind about the technologies and best practices I like and those that I don’t. Whoever wants to listen to me, it’s their call. I’m not going to censor myself because of the number of people who are following me. That would be counter-intuitive, since being myself made these people follow me in the first place.
Q You’ve been very vocal about the problem with vendor prefixes. Do you think that’s been solved?
I think both browser makers and the WG (working group) have realized that vendor prefixes, although good in theory, do not work in practice. So, the way to go right now seems to be browsers implementing experimental features under a setting instead of behind a prefix. That way, developers will not start using it in production, forcing the WG to get stuck in early iterations, as was the case with vendor prefixes.
Q Along those same lines, how much responsibility did the W3C, the CSS WG and WebKit teams have in perpetuating what became an incredible hindrance to cross-browser development (especially mobile)?
There’s no single cause, but I believe a big part of the blame lies with developers. Although we’ve endured the pains of a browser mono-culture before, we did not learn much. IE6 used to be really cool stuff 12 years ago you know, just like WebKit is today. I can see the CSS WG being at fault too, for not realizing the issues with vendor prefixes early on, which turned web development into a popularity contest. Last but not least, the WebKit team shares some part of the blame, as they shouldn’t have implemented non-standard CSS features to get ahead in the browser game.
Q Developers want more modern features and they want them now. Is the pace of the standards bodies keeping up with the needs and wants of the developer community? If not, what needs to happen to change that?
I’m sure you are aware of the old project management triangle paradigm: Something cannot be fast, high quality and cheap, you need to pick two. I believe this applies to designing APIs as well. Budget is limited, as there are very few people paid to work on standards. So, basically, designing new features can either be fast or high quality, but not both. We can see the former when browsers decide to innovate: Usually, even when the original ideas are good, they are poorly thought out, since they were designed in the vacuum of a single company (examples: Drag and Drop API, -webkit-gradient()).
When the standardization process is followed through, features can be very high quality in the end, at the cost of taking a long time to be finished. Several parties with different interests need to reach consensus, a full test-suite needs to be written, it needs experimental implementations and several iterations based on implementor feedback. All of this takes time, but keep in mind that once a feature enters the open web platform, we’re stuck with it for years, if not decades. Therefore, it pays off to invest that kind of resources in to it, and to be patient. Short-term pain for long-term gain ;)
Q You recently announced your departure from the W3C. How will that affect your involvement in the standards process?
I will still be involved in the CSS WG as an Invited Expert. The WG voted on it in a recent telcon and I was happy to see several people in support and nobody against it. :) In fact, I will be able to devote more time to it now, since I expect to have more free time in general, and having worked at W3C gives me a unique perspective and insight into the standards process.
Q You’re renowned for your live coding demos, flooring conference attendees during your presentations. Aren’t you concerned about messing up and affecting your flow? How do you even prep for something like that?
I have several safeguards preventing me from messing up. I keep my code examples concise, showing only what’s needed. This also helps the audience understand, as the code is small enough for them to process. I believe that as the number of lines in a code sample grows linearly, understanding decreases exponentially. Most importantly, I practice a lot. I might not practice the delivery of the talk, but I always practice the live coding several times, even when I’ve given the talk before. Also, even if I mess up, which has happened a couple times, the audience is so glad to see the result gradually build up in front of them instead of being presented with a screenshot, that they can be incredibly forgiving of missteps. If something does not work, I will spend a few seconds trying to fix it and if I can’t, I will explain what was supposed to happen and move on.
I often see people ruining live coding presentations by showing too much code, with too many distractions (e.g. a full IDE around it and having to switch windows to see the result) and long delays trying to debug their code when something goes awry. All three are very effective in getting the audience to lose focus. However, done right, live coding can be a great teaching aid and make a talk more engaging and fun.
Q A lot of devs long to be as multi-faceted as you. Most seem to be good in either JS or CSS but usually not both. What are the techniques or resources that have allowed you to become as proficient in both to where you can do live coding demos near flawlessly?
My two biggest interests since my preteens were design and programming. So, when I started making websites, I was studying the languages involved, as much as studying graphic design principles. I fell in love with CSS because it felt like a bridge between the two, which is why I specialized in it.
Regarding resources, I was always the type of person that learned through reading and practicing (in that order). I would read an entire book and then build something that helps me put what I learned into practice to create something that I wasn’t able to before. I don’t learn easily from lectures, and I even glided through university (Computer Science) studying on my own, rarely attending any lectures. However, this greatly depends on the person, I know some amazingly skilled folks who absolutely hate studying on their own without anyone teaching them. I think that’s why I tried to take a different approach in my own talks, because I find conventional lectures so damn boring and hard to follow, except for the rare case where the speaker is as funny as a stand-up comedian (and while I like to think I have a good sense of humor, I’m by no means on that level).
More About Lea
Thank you, Lea, for taking the time to chat with us. To learn more about Lea and her work on standards and web development, be sure to visit her website and follow her on Twitter. Also, if you have an opportunity to see her speak in person, definitely jump on it. Her list of past and future events can be found on Lanyrd, which also links to videos of her previous presentations.
Node.js and WebSockets are the perfect combination for writing very fast, lag-free applications that can send data to a huge number of clients. So let’s start learning about these two topics by building a chat service! We will see how to install Node.js packages, serve a static page to the client with a basic web server, and configure Socket.io to communicate with the client.
Why Choose Node.js and Socket.io?
So why use this combo?
There are a lot of platforms that can run a chat application, but by choosing Node.js we don’t have to learn a completely different language; it’s just JavaScript, but server-side.
Node.js is a platform built on Chrome’s JavaScript runtime that makes it easy to build JavaScript applications that run on the server. Node.js uses an event-driven, non-blocking I/O model, which makes it perfect for building real-time apps.
More and more Node.js applications are being written with real-time communication in mind. A famous example is BrowserQuest from Mozilla, an MMORPG written entirely in Node.js whose source code has been released on Github.
Node.js comes with a built-in package manager: npm. We will use it to install packages that will help speed up our application development process.
We’ll be using three packages for this tutorial: Jade, Express, and Socket.io.
Socket.io: the Node.js Websockets Plugin
The main feature of our application is the real-time communication between the client and the server.
HTML5 introduces WebSockets, but they are far from being supported by all users, so we need a fallback solution.
Socket.io is that fallback solution: it tests WebSocket compatibility and, if WebSockets are not supported, falls back to Adobe Flash, AJAX, or an iframe.
Finally, it supports a very large set of browsers:
Internet Explorer 5.5+
Safari 3+
Google Chrome 4+
Firefox 3+
Opera 10.61+
iPhone Safari
iPad Safari
Android WebKit
WebOs WebKit
It also offers very easy functions to communicate between the server and the client, on both sides.
Let’s start by installing the three packages we will need.
Installing Our Dependencies
npm lets us install packages very quickly with a single command, so first go to your project directory and have npm download the needed packages:
npm install express jade socket.io
Now we can start building our server-side controller to serve the main page.
We are going to save all the server-side code into a "server.js" file which will be executed by Node.js.
Serving a Single Static Page
To serve our static page we will use Express, a package which simplifies the whole server-side page send process.
So let’s include this package into our project and start the server:
var express = require('express'), app = express.createServer();
Next, we need to configure Express to serve pages from the "views" directory using the Jade templating engine that we installed earlier.
Express uses a layout file by default, but we don’t need it because we will only serve one page, so instead, we will disable it.
Express can also serve a static directory to the client like a classic web server, so we will expose a "public" folder which will contain all of our JavaScript, CSS, and image files.
Next, let’s create two folders inside our project folder named "public" and "views".
Now we just need to configure Express to serve a "home.jade" file, which we will create in a moment, and then set Express to listen on a specific port. I will use port 3000, but you can use whatever you’d prefer.
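The configuration block itself isn’t shown above; with the Express 2.x API that createServer implies, it would look roughly like this (a sketch; the folder names match the ones we just created):

// Serve views from the "views" folder with Jade, without a layout file
app.set('views', __dirname + '/views');
app.set('view engine', 'jade');
app.set('view options', { layout: false });

// Expose the "public" folder like a classic web server would
app.use(express.static(__dirname + '/public'));

// Send our single page for the root URL
app.get('/', function (req, res) {
    res.render('home.jade');
});

app.listen(3000);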
Node.js applications use templating engines to serve web pages. They’re useful for sending dynamic pages and for building them faster.
In this tutorial, we will use Jade. Its syntax is very clear and it supports everything we need.
“Jade is a high performance templating engine heavily influenced by Haml and implemented with JavaScript for Node.”
Now, I’m not going to go over Jade in detail; if you need more help, you can find very well-written documentation on its GitHub repo.
Jade Configuration
We installed Jade earlier, but we need to include it in our server.js file like we did for Express.
By convention, we include our libraries at the top of the file so we can use them later without having to check whether they’ve already been included. So place the following code at the top of your "server.js" file:
var jade = require('jade');
And that completes our Jade configuration. Express is already set up to use Jade with our view files; to send an HTML response, we just need to create that file.
Creating Our Home Page
If we start our server now it will crash because we’re requesting our app to send a page which doesn’t exist yet.
We’re not going to create a full featured page, just something basic that has a title, a container for the messages, a text area, a send button, and a user counter.
Go ahead and create a "home.jade" page inside the "views" folder with the following code:
doctype 5
html
  head
    title Chat
    script(src='https://ajax.googleapis.com/ajax/libs/jquery/1.7.2/jquery.min.js')
    script(src="/socket.io/socket.io.js")
    script(src="script.js")
  body
    div.container
      header
        h1 A Chat application with Node.js and Socket.io
      input#pseudoInput(type='text')
      button#pseudoSet Set Pseudo
      div#chatEntries
      div#chatControls
        input#messageInput(type='text')
        button#submit Send
The Jade language is all about indentation. As you can see, we don’t need to close our containers, just indenting the children of the parent container is enough.
We also use a period "." and a pound sign "#" to indicate the class or ID of the element, just like in a CSS file.
We also link in three scripts at the top of the file. The first is jQuery from Google CDN, next we have the Socket.io script which is served automatically by the package, and finally a "script.js" file which will keep all of our custom JS functions.
The Socket.io Server-Side Configuration
Socket.io is event based, just like Node. It aims to make real-time apps possible in every browser and mobile device, blurring the lines between these different transport mechanisms. It’s care-free, real-time, and 100% JavaScript.
Like the other modules, we need to include it in our server.js file. We will also chain on our express server to listen for connections from the same address and port.
var io = require('socket.io').listen(app);
The first event we will use is the connection event. It is fired when a client tries to connect to the server; Socket.io creates a new socket that we will use to receive or send messages to the client.
Let’s start by initializing the connection:
io.sockets.on('connection', function (socket) {
    // our other events...
});
This function takes two arguments, the first one is the event and the second is the callback function, with the socket object.
Using code like this, we can create new events on the client and on the server with Socket.io. We will set the "pseudo" event and the "message" event next.
Doing this is really simple: we use the same syntax, but this time with our socket object rather than the "io.sockets" (with an “s”) object. This allows us to communicate specifically with one client.
So inside our connection function, let’s add in the "pseudo" event code.
socket.on('setPseudo', function (data) {
    socket.set('pseudo', data);
});
The callback function takes one argument, this is the data from the client and in our case it contains the pseudo. With the "set" function, we assign a variable to the socket. The first argument is the name of this variable and the second is the value.
Next, we need to add in the code for the "message" event. It will get the user’s pseudo, broadcast an array to all clients which contains the message we received as well as the user’s pseudo and log it into our console.
socket.on('message', function (message) {
    socket.get('pseudo', function (error, name) {
        var data = { 'message': message, pseudo: name };
        socket.broadcast.emit('message', data);
        console.log("user " + name + " sent this: " + message);
    });
});
This completes our server-side configuration. If you’d like, you can go ahead and use other events to add new features to the chat.
The nice thing about Socket.io is that we don’t have to worry about handling client disconnections. When it disconnects, Socket.io will no longer receive responses to “heartbeat” messages and will deactivate the session associated with the client. If it was just a temporary disconnection, the client will reconnect and continue with the session.
The Socket.io Client-Side Configuration
Now that our server is configured to manage messages, we need a client to send them.
The client-side of Socket.io is almost the same as the server-side. It also works with custom events and we will create the same ones as on the server.
So first, create a "script.js" file inside the public folder. We will store all of our functions inside of it.
We first need to start the Socket.io connection between the client and the server. It will be stored in a variable, which we will use later to send or receive data. When the connect function is not passed any arguments, it automatically connects to the server that served the page.
var socket = io.connect();
Next, let’s create some helper functions that we will need later. The first is a simple function to add a message to the screen with the user’s pseudo.
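The helper itself isn’t listed above; a minimal sketch could be the following (the markup it appends is an assumption, and the extra arguments passed to it later on are simply ignored by JavaScript):

function addMessage(message, pseudo) {
    // Append the new entry at the end of the #chatEntries div
    $('#chatEntries').append('<div class="message"><p><b>' + pseudo + ':</b> ' + message + '</p></div>');
}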
This helper uses the append function from jQuery to add a div at the end of the #chatEntries div.
Now we are going to write a function that we can call when we want to send a new message.
function sentMessage() {
if ($('#messageInput').val() != "")
{
socket.emit('message', $('#messageInput').val());
addMessage($('#messageInput').val(), "Me", new Date().toISOString(), true);
$('#messageInput').val('');
}
}
First, we verify that the text field is not empty. Then we send a packet named "message" to the server containing the message text, print it on the screen with our "addMessage" function, and finally clear the text field.
Now, when the client opens the page, we need to set the user’s pseudo first. This function will send the pseudo to the server and show the textarea and the submit button.
function setPseudo() {
if ($("#pseudoInput").val() != "")
{
socket.emit('setPseudo', $("#pseudoInput").val());
$('#chatControls').show();
$('#pseudoInput').hide();
$('#pseudoSet').hide();
}
}
Additionally, we hide the pseudo setting controls when it’s sent to the server.
Now just like we did on the server-side, we need to make sure we can receive incoming messages and this time we’ll print them on the screen. We’ll use the same syntax but this time we call the "addMessage" function.
Just like with our server configuration, the packet that is sent to the client is an array containing the message and the pseudo. So we just call our "addMessage" function passing in the message and the pseudo, which we extract from the received data packet.
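The handler isn’t shown above; following that description, it would be something like this:

socket.on('message', function (data) {
    // The packet contains the message and the sender's pseudo
    addMessage(data['message'], data['pseudo']);
});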
Now we just need to add the initialization function which is fired once the page is fully loaded.
First, we hide the chat controls before the pseudo is set and then we set two click listeners which listen for clicks on our two submit buttons. The first is for the pseudo and the second is for the messages.
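That initialization code isn’t included above; a sketch matching the description (using jQuery’s ready handler) might be:

$(function () {
    // Hide the message controls until a pseudo has been set
    $('#chatControls').hide();

    // Wire up the two submit buttons
    $('#pseudoSet').click(function () { setPseudo(); });
    $('#submit').click(function () { sentMessage(); });
});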
And that wraps up our client-side script.
Conclusion
We now have a working chat service. To start it, just run the following command:
node server.js
In your terminal you should get a message from Socket.io saying that the server is started. To see your page go to 127.0.0.1:3000 (or whichever port you chose previously).
The design is very basic, but you could easily add in a stylesheet with CSS3 transitions for incoming messages, HTML5 sounds, or Bootstrap from Twitter.
As you can see, the server and client scripts are fairly similar: this is the power of Node.js. You can build an application without having to write the code twice.
Finally, you may have noticed that it only took 25 lines of code inside our server.js file to create a functional chat app, with amazing performance. It is very short, but it also works very well.
Now if you’re interested, I have created a better chat service application, with a good looking design, along with some additional features. It is hosted on Nodester and the source code is on Github.
Recently, Dropbox announced its new Datastore API and Drop-ins, a couple of great new features aimed at leveraging the power of accessing files and (now with Datastores) other general information from any device, keeping that data synced across all platforms, painlessly.
Datastores
Let’s begin by discussing what datastores are. You can think of them as small databases that keep key/value pairs of information. Now, you may say that your application could use a web service with a database and your data would be the same across all devices, and while this is true, by using the Datastore API we take away the overhead of handling a back-end service.
With this in mind, applications that don’t need to store large amounts of user data and don’t require heavy processing can delegate database management to Dropbox and forget about handling it manually. Take, for instance, a multi-platform game. You would want to allow the user to play it on their iPad in the morning, head to work, and, while stuck in traffic, continue playing on their iPhone. In this scenario you’d normally need the user to log into the system, play, and then save their progress.
Now, with the Datastore API, you can forget about the whole login process and the overhead of handling the progress data: you just use the provided SDK and store the information you want. Later that day, when your user opens your application from another Dropbox-connected device, you can easily retrieve their information. In this case, Dropbox handles the storage, security, and privacy of the information.
As of right now, though, the Datastore API only supports single-user use cases. According to Dropboxer Steve M., multi-user scenarios are in Dropbox's future plans.
Persistent TODOs App
If you have ever used a JavaScript framework/library and come across its example applications, chances are that one of those apps was a TODO app of some kind. So, in the spirit of keeping things consistent and to demonstrate some of the most common features, let's build a TODO app using the Dropbox Datastore API.
Since this is a tutorial on the Dropbox functionality offered to developers, I'm not going to be explaining the HTML or the CSS in the application; you can find those in the files accompanying this tutorial.
Step 1 – Setting Up the Application
First of all, like with most public APIs, we need to create a new application within the system. To do this, log into your Dropbox account and head to the App Console. Click on “Create app”, select “Dropbox API app” and “Datastores only” and finally give your app a name.
You may be tempted to select "Files and datastores"; however, if your application is not actually using this permission, it will be denied when you request production status. Adhere to the Dropbox policies for every application you create.
Now you have a new application in Dropbox, and you can start making use of the Datastore API (and other resources). In order to do this, you'll need your App Key. Since we'll be using the JavaScript SDK in this tutorial, you won't need your App Secret (keep that string secret).
Step 2 – Adding the SDK
We’re going to be using the JavaScript SDK provided by Dropbox to interact with the Datastore API. To install it, simply add the following script declaration to your HTML document above the script for your application.
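At the time of writing, Dropbox served the Datastore SDK from a URL along these lines (verify it against the current Dropbox documentation):

<script type="text/javascript" src="https://www.dropbox.com/static/api/1/dropbox-datastores-1.0-latest.js"></script>

Below that script, we bootstrap our own application. The APP_KEY placeholder and the exact shape of the TodosApp object are assumptions, sketched from the methods this tutorial fills in step by step:

// create the client with the App Key from the App Console
var client = new Dropbox.Client({ key: 'APP_KEY' });

var TodosApp = {
    init: function () {
        // start-up code is added in the next steps
    }
};

// run the app once the document is ready
$(document).ready(TodosApp.init);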
What this does is create a new Dropbox Client object using the App key obtained from the app console. It then defines our application object and when everything is ready, we call the init method.
Checking the User’s Session
The first thing our application should do is check if we have an access token for the user of our application. Add the following code to the init method:
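Something like the following will do; the error handling here is deliberately minimal:

// try to authenticate against Dropbox without redirecting
client.authenticate({ interactive: false }, function (error) {
    if (error) {
        console.log('Authentication error: ' + error);
    }

    // continue the app's flow whether or not the user is authenticated
    TodosApp.checkClient();
});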
Here we are trying to authenticate the app's user with the Dropbox API server. By setting the interactive option to false, we are asking the method not to redirect the user to the Dropbox site for authentication; this way, our application can continue its normal flow. We are going to manually send the user there later on.
Now we need to check if the user is authenticated and if so, proceed to load in their data. Add the following code to your checkClient method:
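A minimal sketch, assuming hypothetical #link-button and #main elements for the connect button and the main interface:

// a method of TodosApp
checkClient: function () {
    if (client.isAuthenticated()) {
        // authenticated: hide the connect button, show the app
        $('#link-button').hide();
        $('#main').show();
        // loading the user's data is covered in the next steps
    } else {
        // not authenticated: offer the connect button
        $('#main').hide();
        $('#link-button').show();
    }
}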
Here we are updating our interface accordingly, based on whether the user has been authenticated or not.
Authenticating the User
So far we have our application interface updating according to whether or not the user is authenticated. Now we are going to handle the process of authenticating the user with the system. Add the following code to the else statement of the checkClient method:
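A sketch using the same assumed #link-button element:

// inside the else branch of checkClient
$('#link-button').click(function (e) {
    e.preventDefault();
    // no interactive option this time, so the redirect to Dropbox is allowed
    client.authenticate();
});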
This is merely a callback which is called when the user clicks the “Connect Dropbox” button in the interface. Note that we are not setting the interactive option this time, thus allowing the redirection. If the user is not logged into Dropbox, a login form will be shown and the system will ask the user to allow the application.
Retrieving User Data
Once the user has granted access to the application, Dropbox will redirect back to us. In this case, the call to the isAuthenticated method will return true, and at this point we need to retrieve the user's Dropbox-stored data. Add the following code to the if statement of the checkClient method:
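A hedged sketch; openDefaultDatastore and getTable are the Datastore SDK calls involved, and todosList is the table variable the rendering code below relies on:

// inside the if branch of checkClient
client.getDatastoreManager().openDefaultDatastore(function (error, datastore) {
    if (error) {
        console.log('Error opening the default datastore: ' + error);
        return;
    }

    // grab the todos table and render it
    todosList = datastore.getTable('todos');
    TodosApp.updateTodos();

    // re-render whenever records change; this is the recordsChanged
    // callback referenced later in this tutorial
    datastore.recordsChanged.addListener(TodosApp.updateTodos);
});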
This method retrieves the authenticated user's Datastore and accesses the todos table. Contrary to an SQL table, the table structure doesn't have to be defined prior to usage; as a matter of fact, the table doesn't even exist until we add data to it.
What this also means is that the table can contain any data, and one record doesn't have to have the same fields as the others. However, it is good practice to preserve a similar, if not identical, structure amongst records.
Rendering Records
At this point we have the user’s todos information, however it is not displayed to the user. In order to do this, just add the following code to the updateTodos method:
var list = $( '#todos' ),
    records = todosList.query();

list.empty();

for ( var i = 0; i < records.length; i++ ) {
    var record = records[i],
        // build the list item first, so the checkbox lookup below
        // is scoped to this item rather than the whole list
        item = $( '<li>' ).attr( 'data-record-id', record.getId() )
            .append( $( '<button>' ).html( '&times;' ) )
            .append( $( '<input type="checkbox" name="completed" class="task_completed">' ) )
            .append( $( '<span>' ).html( record.get( 'todo' ) ) )
            .addClass( record.get( 'completed' ) ? 'completed' : '' );

    if ( record.get( 'completed' ) ) {
        $( 'input', item ).attr( 'checked', 'checked' );
    }

    list.append( item );
}
This method grabs the element that will contain our list of todos, retrieves the records in our todos table by calling the query method from the Datastore API, clears the list of items, and finally renders every record to the screen.
Deleting a Record
Now that we have the ability to retrieve the user's stored TODOs on application startup, let's work on deleting those records. Our render code already creates an X button for each item; now let's wire it up. Add the following code to the bottom of the updateTodos method:
$( 'li button' ).click( function( e ) {
    e.preventDefault();
    var id = $( this ).parents( 'li' ).attr( 'data-record-id' );
    todosList.get( id ).deleteRecord();
});
In this code we just get the id of the record to delete, retrieve the actual record by calling the get method, and delete it by calling deleteRecord on the obtained object. Since we previously set the recordsChanged callback, our interface will update like magic.
Updating a Record
So far so good, we can retrieve a list of tasks from the user’s Datastore and we can delete a record from it. Now how about updating a record? For this new feature we are going to add in the ability to mark a record as completed or not. Add the following code to the bottom of the updateTodos method:
$( 'li input' ).click( function( e ) {
    var el = $( e.target ),
        id = el.parents( 'li' ).attr( 'data-record-id' );
    todosList.get( id ).set( 'completed', el.is( ':checked' ) );
});
Like with the delete method, we retrieve the id of the object to update, retrieve the record object itself, and set its completed property according to its current state.
Creating a Record
Finally, we need to be able to create new records in the user’s Datastore. In order to do this, we need to react to the form submission event that the add-todo form will trigger. Add the following code to the bottom of the if statement in our checkClient method:
$( '#add-todo' ).submit( TodosApp.createTodo );
This is simply a listener for the submit event on the add-todo form. Now for the actual record creation. Add the following code to the createTodo method:
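A minimal sketch of createTodo, assuming a text input with the id todo inside the add-todo form; insert is the Datastore table method for creating records:

// a method of TodosApp
createTodo: function (e) {
    e.preventDefault();

    var input = $('#todo'); // assumed input id

    // create the record; the key is generated for us
    todosList.insert({
        todo: input.val(),
        completed: false,
        created: new Date()
    });

    input.val('');
}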
With this, we have completed our example application. As you can see, the CRUD operations for our data have become really simple, and we can access that data across multiple devices. While using this service, we don't need to create a full back-end service to store the user's information.
Datastore Extras
As an extra service to developers, Dropbox lets you explore the data inside your Datastores. To check this, go to the App Console and select Browse datastores from the submenu, select the application whose Datastores you want to inspect, and you'll be presented with a list of the existing tables and each record in the table.
Space Limits
When creating Datastores, you have to take into account the amount of information you plan on storing. Every application gets up to five MB per user, to use across all its datastores. As long as your data doesn't hit this limit, the Datastore won't contribute to the user's Dropbox quota. Keep in mind that any data over this limit will count towards the user's Dropbox storage quota, and write operations may be limited.
Field Types
Datastore records can be seen as JSON objects; however, there are certain constraints on the data a field can contain. For instance, even though you can think of a record as a JSON document, you can't have embedded documents. The types of data you can store are as follows:
String
Boolean
Integer – 64-bit signed
Floating Point
Date – POSIX-like timestamp
Bytes – Arbitrary binary data up to 100 KB
List
A list is a special kind of value that can contain an ordered list of other values, but not lists themselves.
Drop-Ins
Another great feature added to Dropbox for developers is Drop-ins. There are two types of Drop-ins: the Chooser and the Saver. With these new features you can add support to your application to either select files directly from Dropbox (for sharing or some other purpose) with the Chooser, or store files directly to Dropbox with the Saver.
So, continuing with our example, let's add Drop-ins to our TODOs application.
Step 1 – Setup
As with the Datastore API, we need to create an application for Drop-ins. Head to the App Console, select Create new, choose Drop-in app, and give it a name.
Now we have a new application. Contrary to applications for the other Dropbox APIs, this one doesn't need production access, so once you're ready, you can offer it to your users with no hassle.
Now the only thing we need to do to add Drop-ins support to our application is add the SDK. Add the following code to the script declarations in the HTML file, above the script for your application:
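A hedged reconstruction of that declaration; replace YOUR_APP_KEY with the key from your App Console, and verify the src URL against the current Drop-ins documentation:

<script type="text/javascript" src="https://www.dropbox.com/static/api/2/dropins.js" id="dropboxjs" data-app-key="YOUR_APP_KEY"></script>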
Note the id with a value of dropboxjs; if you remove or change this, Dropbox won't be able to get your application key, breaking the Drop-in functionality.
Step 2 – Chooser
OK, so now we have the Drop-ins API in place; let's begin with the Chooser Drop-in. To add the Choose from Dropbox button, use the following code.
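A hedged reconstruction based on the attributes discussed below; check the Chooser documentation for the exact markup Dropbox expects:

<input type="dropbox-chooser" name="selected-file" data-link-type="direct" />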
This will generate the button for you, and when you click it, it'll open a window allowing you to select files from the user's Dropbox. To style this element, use the class dropbox_choose; in my case, I'll simply center it on screen. The data-link-type attribute specifies whether the obtained link will be a direct link to the file (useful for download or display) or a preview link, in which case following the link will take you to the Dropbox interface.
Both link types have disadvantages: a direct link will expire within four hours of its creation, and a preview link may stop working if the user owning the file removes or changes it. The preview link type is the chooser's default.
Working With the Result
Adding the code above will generate a “Choose from Dropbox” button, which when clicked will show us a window to select the desired file. To retrieve the selected file(s), yes it supports multiple selection, your application needs to respond to the DbxChooseSuccess event in the input element. Add the following method to your JavaScript application:
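A sketch of that method; the listenChooser name matches the call below, the #chooser and #image element ids are assumptions, and the event name is spelled as this tutorial gives it (double-check it against the Chooser docs):

// a method of TodosApp
listenChooser: function () {
    document.getElementById('chooser').addEventListener('DbxChooseSuccess', function (e) {
        // e.files holds the selected files; show the first one's link
        $('#image').attr('src', e.files[0].link);
    }, false);
}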
After creating the event listener, you need to register it with the application. At the bottom of your init method, add the following line of code:
TodosApp.listenChooser();
This event will give us a payload containing, among other things, an array of files selected by the user. In this case, we are selecting a single file and appending its link property to an image tag already in the DOM. Each file in the array contains some other fields, like the file size, its name, etc. For a full list of the properties in each file go to the Chooser Drop-in documentation.
Step 3 – Saver
Last but not least, we have the Saver Drop-in. This Drop-in lets you save files directly to the user's Dropbox folder. Just like with the Chooser, you need a Drop-in application to use the API. Fortunately, we already created one, and working with this Drop-in is even easier than everything else so far; simply create a link as follows:
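A hedged example of such a link; the URL and file name are placeholders:

<a href="http://example.com/files/report.pdf" class="dropbox-saver" data-filename="My Report.pdf"></a>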
The href and class parameters are required for the Drop-in to work. The href is the file that will be saved to the user’s Dropbox folder, and the class tells the application that it is a Saver Drop-in. Additionally, you may add a third parameter: data-filename which will be used as a user-friendly name for the file to save. If you don’t specify one, the name will be taken from the href parameter.
As with the Chooser, there are a couple of more advanced things you can do with the Saver Drop-in, but for a reference on those check the official documentation.
Conclusion
As you can see with these new features in the Dropbox Developer Services, we can add powerful data storage capabilities to our web and mobile applications easily. This frees us from the overhead of creating a back-end service when little data processing is needed.
Hopefully by now you feel comfortable adding CRUD support to your application using the Datastore API, and adding the ability to read and write from your users' Dropbox accounts using the Drop-ins API. Please note that both APIs are really new (the Datastore API is still in beta), but you can already see the potential they represent. Be sure to check the official Dropbox documentation for further information on these and other great services the platform has to offer.
I previously asked several top developers the following four simple questions:
What’s your primary development focus?
What hardware are you using for development?
Which editor or IDE do you use?
What software can you not live without on a daily basis?
The article generated a lot of interest and discussion about the tools the community is using, which was really great! We love to motivate discussions with our topics. Well, this also motivated us to ask ourselves, "Why don't we post about what we, the Nettuts+ authors, use every day?"
So we did just that. We chose ten Nettuts+ authors and asked them the same four questions. And like before, you’ll find the answers they gave below and hopefully discover some tools that could make your development much easier.
Csaba Patkos
Bio: I had my first contact with computers in the mid-80s when I visited my father at work. That was an important moment for what I am doing now. I am a proud member of an agile team working for a company called Syneto. Throughout my career, I have programmed in several languages and had the chance to learn and use all the major Agile techniques daily, from Scrum to Lean and from TDD to DDD. Since August 2012, I have been sharing my knowledge with Nettuts+ readers through articles, tutorials, and premium courses, all about programming.
I am mainly a back-end programmer and mostly program in PHP but I continuously try new languages. I am most focused on general software design and architecture. The programming language I use is just a tool to achieve that.
Q What hardware are you using for development?
Well, at work we have Mac Minis, but I am not a fan of Apple. So, at home I have an HP desktop with a 27" Samsung monitor, running my favorite Linux distribution, Sabayon.
Q Which editor or IDE do you use?
NetBeans, definitely. Even though it has its limits, I find it the best IDE when it comes to multiple languages. Its Java part is just superb, and of all the free IDEs it has the best PHP support.
Q What software can you not live without on a daily basis?
I spend a lot of my time in my web browser and email client, Opera. It would be hard to live without it. And of course NetBeans.
Krasimir Tsonev
Bio: Krasimir Tsonev is a coder with over ten years of experience in web development. With a strong focus on quality and usability, he is interested in delivering cutting edge applications. Currently, with the rise of the mobile development, Krasimir is enthusiastic to work on responsive applications targeted to various devices. Living and working in Bulgaria, he graduated at the Technical University of Varna with a bachelor’s and master’s degree in computer science.
I'm usually working with PHP, JavaScript (NodeJS), HTML/CSS, and sometimes Flex/AS3. In some projects I'm the front-end developer; in others I'm the back-end guy. Generally I'm interested in doing things the right way. I love the KIS (keep it simple) and DRY (don't repeat yourself) principles, and I try to follow them all the time. Very often I develop tools which help other programmers work faster and more efficiently. When I don't code, I normally blog, which is kind of a passion of mine.
Q What hardware are you using for development?
I have a Dell Vostro 3560 connected to an external 23" Dell monitor. I'm a Windows user, but I also have Ubuntu running in a VirtualBox VM.
Q Which editor or IDE do you use?
I'm a big fan of Sublime Text 2. Most of the time I'm switching between three windows – Sublime Text 2, Chrome, and PowerShell (with posh-git installed). A couple of years ago, when I worked mainly on Flash-based projects, I used FlashDevelop. Even for PHP or JavaScript it was a good choice.
Q What software can you not live without on a daily basis?
That's my favorite browser – Google Chrome. It's not just a program for visiting the web. It's actually a great tool for development and even for design.
Pavan Podila
Bio: I am a Financial technologist specializing in front-ends, mostly for Trading and Analytics applications. I have worked on a wide variety of UI technologies in the past, ranging from Java Swing, Eclipse SWT, Nokia Qt to Cocoa on OSX/iOS, .Net WPF, and HTML5. I am also a published author for “WPF Control Development Unleashed” with Addison/Wesley-SAMS. When I am not programming, I like to play Table Tennis, Badminton or paint using my Wacom Tablet with Photoshop or SketchBook Pro.
I am a front-end consultant in the financial services sector of New York. Most of the apps I develop/maintain are trading apps, visualizations, portfolio management tools, etc. These apps run on a mixture of desktop, mobile, and web platforms. For desktop I've mostly used .NET/C#/WPF. On the web it's been a combination of the standard JS technologies/frameworks with Node.js, Java, or Rails backends. On the mobile side, it's primarily iOS. I like to learn new things all the time and am always looking out for exciting ways to bend the mind! The part that I like most about being a consultant is the opportunity to explore new platforms, technologies, and languages which I would never venture into voluntarily.
Q What software can you not live without on a daily basis?
Git, Sublime Text, Zsh, RubyMine, Final Cut Pro (for all my video editing), Dash, Google Chrome, Keynote (for all my diagrams)
Aurelio De Rosa
Bio: I’m a web and app developer with more than 5 years’ experience programming for the web using HTML5, CSS3, JavaScript and PHP. I mainly use the LAMP stack and frameworks like jQuery, jQuery Mobile, and Cordova (PhoneGap). My interests also include web security, web accessibility, SEO and WordPress.
Currently I’m self-employed working with the cited technologies. I’m also a regular blogger for several networks (SitePoint, Tuts+ and FlippinAwesome) where I write articles about the topics I usually work with and more.
I'm a full stack web developer working with the LAMP stack. Apart from PHP for the server side, I use JavaScript with jQuery for the client side, and a lot of HTML5 and CSS. Besides that, I reuse my web knowledge to build mobile apps with the help of frameworks like jQuery Mobile and Cordova (PhoneGap).
Q What hardware are you using for development?
A PC with an i3 processor and 4GB of RAM, plus a 24'' monitor. While I deploy on Linux, both my PC and my 13'' notebook run Windows 7.
Q Which editor or IDE do you use?
It depends on the project I'm working on, or the code I have to write the moment I sit down at my desk. For small changes I usually just open the file in Notepad++. As an IDE, I used to develop with NetBeans, but some months ago I tried PhpStorm, and from that moment, I fell in love. It's a really complete, stable, and useful IDE.
Q What software can you not live without on a daily basis?
Based on what I've said so far, it should be clear that I cannot live without browsers. My favorite one is Chrome, but for work reasons that you may easily guess, my PCs have all the major browsers installed. In addition, I must mention Composer, Git, FireFTP, Poedit, Google, StackOverflow, and Twitter. Oh…and YouTube and Spotify! Who the hell can code without music?
Jeremy McPeak
Bio: Hi! I’m Jeremy McPeak, and I’m an author and a software developer. I’ve written a few books, articles, and courses at Tuts+. I specialize in my two favorite languages: JavaScript and C#, but I’ve been known to delve into other languages like PHP and Java when needed. When I’m not working, I’m spending time with my family, playing guitar or piano, gaming, or reading.
Connect with Jeremy on Twitter: @jwmcpeak and on his blog.
Q What’s your primary development focus?
These days, I spend the majority of my time with C# and .NET for both desktop and web applications. I got into this industry as a client-side developer, and I’m continually trying to fit more client-side work into my daily work flow. JavaScript is my first love, after all.
Q What hardware are you using for development?
There are three computers I use for development, all of which run Windows 8 Pro. For development on the go, I use a Dell XPS 14 Ultrabook with 8GB of RAM, and it will soon sport an SSD. My workstation at the office is an Ivy Bridge-based Xeon with 32GB RAM and dual NVIDIA Quadro cards powering four displays. For development (and other things) at home, I built a Haswell-based computer: an i7-4770 CPU, 32GB RAM, two Samsung 840 Pro 256GB SSDs, a ton of conventional storage, an NVIDIA GTX 660, and three Dell U2410 displays.
Q Which editor or IDE do you use?
I primarily use Visual Studio Professional 2008 and 2012 with Resharper and NCrunch for web and desktop development. I also use WebMatrix if I need to quickly prototype something, and Sublime Text and Notepad2 get notable usage when I don’t need Visual Studio.
Q What software can you not live without on a daily basis?
I must have Resharper and NCrunch. Visual Studio is a top-notch development environment, but the Resharper and NCrunch plug-ins make it the absolute best environment on the planet. I also need VMWare Workstation. I do a lot with virtual machines, and VMWare’s Workstation is currently the best client-based VM software available.
Nikko Bautista
Bio: I’m Nikko Bautista. By day, I work as a Software Engineer at Bright.com, where we make hiring smarter, faster, and cheaper. By night, I develop web applications and write tutorials for Nettuts+. I specialize in PHP and PHP frameworks. I have experience with Symfony, Zend Framework, CodeIgniter, FuelPHP, and Laravel. I like creating and maintaining developer-friendly APIs. I also have expertise in third-party APIs from Facebook, Twitter, Google, and other platforms. I often explore new technologies, frameworks, and web services by building web applications that use them. Nettuts+ allows me to share what I’ve learned with the world.
I'm a web application developer, using PHP as my main language. I also dabble with other languages like Ruby and Python, but not as much as I'd like. Together with this, I use jQuery and Ember for the client-side. I currently build applications for Bright.com, where we help people score their next job.
Q What hardware are you using for development?
At work, I use a 15" MBP with a 23" secondary screen. Before I started working at my current job, I was a Windows fanatic; I'd always hated how OSX had different conventions than Windows. I decided to give it a fighting chance when I started at Bright, and I couldn't be happier that I did. At home, I have a triple 27" monitor setup connected to a small mATX PC. The PC has a quad-core i5, 8GB of RAM, and 7TB of hard disk space, all packaged in a Lian Li V350B. For work on the go, I have an 11" MBA that I bring with me almost all the time, since it's so light you barely even notice it's there. Additionally, I use my trusty Logitech K350 keyboard and Logitech M705 Marathon mouse (for both my work and home setups).
Q Which editor or IDE do you use?
Like many, I mainly use Sublime Text for my everyday coding. It's fast, reliable and extensible, although I sometimes miss the features only full IDEs can provide. When mucking around in servers though, I use Vim. In the future, I'd love to be able to work more efficiently using Vim, and use it as my main editor, but for now, I can't live without my cmd+p to open files in Sublime.
Q What software can you not live without on a daily basis?
Google Chrome is definitely on the top of my list, working is just so much faster if I use it. Fantastical on OSX (and just plain Google Calendar on Windows) is a great way to keep track of stuff on my calendar and add new tasks/events.
Stephen Radford
Bio: I’m Stephen Radford, a web designer and developer from Leicester, UK. Working with stuff like Laravel, Backbone and AngularJS.
I'm primarily a PHP developer working on web applications, with my go-to framework being Laravel 4. On the front-end side of things, I'm working with AngularJS for the most part, as well as maintaining some applications built with Backbone.
Q What hardware are you using for development?
During my day job I'm using a 21" iMac as well as a cheap, secondary display which usually is littered with terminal windows. When working on my side-projects, I'm using my 13" MacBook Air which is perfect to be able to chuck in my bag and work somewhere else should I need to. Though most of my work is done from the sofa.
Q Which editor or IDE do you use?
Unsurprisingly, I'm a big Sublime Text 2 fan. The huge repository of plugins (mainly accessible thanks to the fantastic Package Control) and unique features like multiple cursors and distraction-free mode just make it a joy to use.
Q What software can you not live without on a daily basis?
I probably wouldn't be as productive without CodeKit, iTerm, ColorSnapper or Base. Kickoff allows me to manage a collaborative to-do list, FileShuttle lets me easily share screenshots or files, and I certainly couldn't work without the constant stream of music delivered by Spotify.
Adam Conrad
Bio: I'm Adam Conrad, the VP of Product for fantasy sports startup @starstreet, a DJ as @deejayacon, and a front-end developer. I lift things up and put them down, too.
I work on the front-end – HTML/CSS/JS, but we're a Rails shop so I do that too. Straight JS/jQuery for most of our work, but we're investigating AngularJS at the moment as a way to wrap a framework around the front-end.
Q What hardware are you using for development?
MacBook Air 13'' from 2011 – 4GB RAM, 1.7 GHz Intel i5…I could use a bit more RAM especially if I wanted to do some work with VMs but it gets the job done. I used to use an additional external monitor (24'' Asus HDMI screen) but the color profile discrepancies between the two screens were annoying enough as a front-end guy that I abandoned it altogether in favor of one single screen. For our responsive work, I'm constantly cycling between an iPhone 5, Nexus 4, iPad 3, iPad Mini and Nexus 7. And of course, no hardware setup can be complete without some gnarly headphones. I rock the Audio Technica ATH-M50s because they had the highest ratings on Amazon for pretty much any product and man do they deliver.
Q Which editor or IDE do you use?
Back in my .NET days I was a Visual Studio guy, then I moved to Vim when I switched to Ruby on Rails, but then I saw the light that was Sublime Text 2 and life is golden. I have a host of packages installed for pretty much anything you could possibly need for Ruby, Rails, jQuery, JavaScript, HTML and CSS. Can't say I've used them all, but they're slowly creeping into my development workflow.
Q What software can you not live without on a daily basis?
My IDE (obviously), Chrome DevTools and my feed reader to provide me an endless stream of great new music.
Hendrik Maus
Bio: Hendrik is a Web Application Developer based in Cologne. He is working with SAE Global/European IT and Navitas Ltd., mostly on large scale database driven PHP applications using Zend Framework, MS SQL and some pretty exciting cutting edge stuff. Always happy to branch out and experience related fields.
“Trying to become a renaissance developer seems to be the ultimate goal for me. Being able to pick any right technology for the job, adapt and use it quickly.”
My current daily business is developing database-driven web applications based on object-oriented PHP for educational businesses. I most frequently use custom PHP, Zend Framework (Delivery and DB manipulation), MSSQL, MySQL, and Javascript (mostly native + jQuery for DOM & Ajax stuff). Besides work, I am digging into Sencha Touch, Node and Angular JS.
Q What hardware are you using for development?
I use a 13" MacBook Air as a portable server (with both Mac OS and Windows), as I constantly change workspaces and cannot rely on the cloud for a major part of my work. I usually connect the server to the local network at home or in my office. At home, the desk is powered by a Mac Pro connected to a 30" display, which is quite a pleasure to work with. At the office I use a 2012 Mac Mini i7 with two displays – 27" and 19". Both of them are SSD powered; you must admit that once you've tried an SSD, you never want to go back. ;) I fly over to our headquarters in Berlin on quite a regular schedule, where I work directly on the MacBook. This setup has proven to be very flexible and fits my needs in any situation.
Q Which editor or IDE do you use?
I have been using PhpStorm as my IDE from the minute it came out. Seriously, this is one of the most incredible pieces of software ever made for really powerful web development. For quick editing I am a fan of Sublime Text 2, as it is incredibly lightweight and even comes with powerful features you'd much more likely expect from a full-blown IDE. On the command line I tend to stick with nano, or vim if I'm forced to. I must admit that Microsoft did a pretty good job on SQL Server Management Studio; it's fun to write SQL with it.
Q What software can you not live without on a daily basis?
Bio: I'm a web developer focusing mostly on JavaScript, ASP.NET MVC, jQuery, and C#. I believe that you cannot ever stop learning, which is why I stay active in the development world attending user groups like NashJS, ID of Nashville, and NashDotNet, blogging for FreshBrewedCode.com, JCreamerLive, Nettuts+, and Tech.pro, and scouring Twitter and the interwebs for as much knowledge as I can squeeze into my brain. I work as a JavaScript Engineer at appendTo and am having a great time developing front-end applications in JavaScript and jQuery. I am also an IE userAgent. Please feel free to contact me; I love meeting other devs who are passionate about what they do.
My primary focus is front-end development using JavaScript and jQuery. I love using Backbone.js or Knockout.js to build applications, and I frequently use postal.js, machina.js, mockjax, and several others. Typically I build my apps using AMD with Require.js, as I feel it gives me the best development experience. I also write ASP.NET MVC, and actually got my start writing ColdFusion, primarily focused around the ColdBox MVC framework.
Q What hardware are you using for development?
Currently I'm on a custom-built AMD Athlon X4 Phenom II with 16GB of RAM, 2TB of HDD, and a GTX 250. I also have an ASUS U56E laptop with an i5 and 8GB of RAM. Windows is my primary OS, with a Linux VM as needed.
Q Which editor or IDE do you use?
I use Sublime Text 2 for the most part. I love its speed and extensibility. I've also been beta testing version 3, which is blazingly fast; however, plugin support is still a work in progress for that version.
Q What software can you not live without on a daily basis?
I'm a big Evernote fan; it has a nice screen-capture experience, and I use its Chrome extension to clip pages or URLs. Most of my needs are met in the browser with things like TweetDeck, Bit.ly, and Simple Time Track. I also use a lot of Node.js tools, such as Grunt and simple-http-server by Andrew Thorp, plus Notepad++ for super-fast code edits. Spotify makes my day go by faster. Fiddler2 is a great tool for watching HTTP traffic. One of my favorite Git tools is TortoiseGit, along with Posh-Git for PowerShell.
Great Stuff!
It's great to be able to peek behind the curtains of other developers and see how they do the magic they do. As you can see, the tools and technologies they use are all easily available, and in many cases free. I'd like to thank the Nettuts+ authors for sharing this information.
One of the more interesting developments in web standards lately is the Indexed Database (IndexedDB for short) specification. For a fun time you can read the spec yourself. In this tutorial I’ll be explaining this feature and hopefully giving you some inspiration to use this powerful feature yourself.
Overview
As a specification, IndexedDB is currently a Candidate Recommendation.
In a nutshell, IndexedDB provides a way for you to store large amounts of data on your user’s browser. Any application that needs to send a lot of data over the wire could greatly benefit from being able to store that data on the client instead. Of course storage is only part of the equation. IndexedDB also provides a powerful indexed based searching API to retrieve the data you need.
You may wonder how IndexedDB differs from other storage mechanisms.
Cookies are extremely well supported, but have legal implications and limited storage space. Also – they are sent back and forth to the server with every request, completely negating the benefits of client-side storage.
Local Storage is also very well supported, but limited in terms of the total amount of storage you can use. Local Storage doesn't provide a true "search" API, as data is only retrieved via key values. Local Storage is great for "specific" things you may want to store, for example preferences, whereas IndexedDB is better suited for ad hoc data (much like a database).
Before we go any further though, let’s have an honest talk about the state of IndexedDB in terms of browser support. As a specification, IndexedDB is currently a Candidate Recommendation. At this point the folks behind the specification are happy with it but are now looking for feedback from the developer community. The specification may change between now and the final stage, W3C Recommendation. In general, the browsers that support IndexedDB now all do in a fairly consistent manner, but developers should be prepared to deal with prefixes and take note of updates in the future.
As for those browsers supporting IndexedDB, you’ve got a bit of a dilemma. Support is pretty darn good for the desktop, but virtually non-existent for mobile. Let’s see what the excellent site CanIUse.com says:
Chrome for Android does support the feature, but very few people are currently using that browser on Android devices. Does the lack of mobile support imply you shouldn’t use it? Of course not! Hopefully all our readers are familiar with the concept of progressive enhancement. Features like IndexedDB can be added to your application in a manner that won’t break in non-supported browsers. You could use wrapper libraries to switch to WebSQL on mobile, or simply skip storing data locally on your mobile clients. Personally I believe the ability to cache large blocks of data on the client is important enough to use now even without mobile support.
Let’s Get Started
We’ve covered the specification and support, now let’s look at using the feature. The very first thing we should do is check for IndexedDB support. While there are tools out there that provide generic ways to check for browser features, we can make this much simpler since we’re just checking for one particular thing.
document.addEventListener("DOMContentLoaded", function(){
if("indexedDB" in window) {
console.log("YES!!! I CAN DO IT!!! WOOT!!!");
} else {
console.log("I has a sad.");
}
},false);
The code snippet above (available in test1.html if you download the zip file attached to this article) uses the DOMContentLoaded event to wait for the page to load. (Ok, that’s kind of obvious, but I recognize this may not be familiar to folks who have only used jQuery.) I then simply see if indexedDB exists in the window object and if so, we’re good to go. That’s the simplest example, but typically we would probably want to store this so we know later on if we can use the feature. Here’s a slightly more advanced example (test2.html).
var idbSupported = false;

document.addEventListener("DOMContentLoaded", function(){
    if("indexedDB" in window) {
        idbSupported = true;
    }
}, false);
All I've done is create a global variable, idbSupported, that can be used as a flag to see if the current browser can use IndexedDB.
Opening a Database
IndexedDB, as you can imagine, makes use of databases. To be clear, this isn't a SQL Server implementation. This database is local to the browser and only available to the user. IndexedDB databases follow the same rules as cookies and local storage: a database is unique to the domain it was loaded from. So, for example, a database called "Foo" created at foo.com will not conflict with a database of the same name at goo.com. Not only will it not conflict, it won't be available to other domains either. You can store data for your web site knowing that another web site will not be able to access it.
Opening a database is done via the open command. In basic usage you provide a name and a version. The version is very important for reasons I’ll cover more later. Here’s a simple example:
var openRequest = indexedDB.open("test",1);
Opening a database is an asynchronous operation. In order to handle the result of this operation you'll need to add some event listeners. There are four different types of events that can be fired:
success
error
upgradeneeded
blocked
You can probably guess what success and error imply. The upgradeneeded event is used both when the user first opens the database and when you change the version. Blocked isn't something that will usually happen, but it can fire if a previous connection was never closed.
Typically what should happen is that on the first hit to your site the upgradeneeded event will fire. After that – just the success handler. Let’s look at a simple example (test3.html).
var idbSupported = false;
var db;

document.addEventListener("DOMContentLoaded", function(){
    if("indexedDB" in window) {
        idbSupported = true;
    }

    if(idbSupported) {
        var openRequest = indexedDB.open("test",1);

        openRequest.onupgradeneeded = function(e) {
            console.log("Upgrading...");
        }

        openRequest.onsuccess = function(e) {
            console.log("Success!");
            db = e.target.result;
        }

        openRequest.onerror = function(e) {
            console.log("Error");
            console.dir(e);
        }
    }
}, false);
Once again we check to see if IndexedDB is actually supported, and if it is, we open a database. We've covered three events here – the upgradeneeded event, the success event, and the error event. For now, focus on the success event. The handler receives an event object, and the database itself is available via e.target.result. We've copied that to a global variable called db. This is something we'll use later to actually add data. If you run this in your browser (in one that supports IndexedDB, of course!), you should see the upgrade and success messages in your console the first time you run the script. The second, and subsequent, times you run the script you should only see the success message.
Object Stores
So far we've checked for IndexedDB support, confirmed it, and opened a connection to a database. Now we need a place to store data. IndexedDB has a concept of "Object Stores." You can think of this as a typical database table. (It is much looser than a typical database table, but don't worry about that now.) Object stores have data (obviously) but also a keypath and an optional set of indexes. Keypaths are basically unique identifiers for your data and come in a few different formats. Indexes will be covered later when we start talking about retrieving data.
Now for something crucial. Remember the upgradeneeded event mentioned before? You can only create object stores during an upgradeneeded event. By default, this will run automatically the first time a user hits your site, and you can use it to create your object stores. The crucial thing to remember is that if you ever need to modify your object stores, you're going to need to upgrade the version (back in that open call) and write code to handle your changes. Let's take a look at a simple example of this in action.
var idbSupported = false;
var db;

document.addEventListener("DOMContentLoaded", function(){
    if("indexedDB" in window) {
        idbSupported = true;
    }

    if(idbSupported) {
        var openRequest = indexedDB.open("test_v2",1);

        openRequest.onupgradeneeded = function(e) {
            console.log("running onupgradeneeded");
            var thisDB = e.target.result;

            if(!thisDB.objectStoreNames.contains("firstOS")) {
                thisDB.createObjectStore("firstOS");
            }
        }

        openRequest.onsuccess = function(e) {
            console.log("Success!");
            db = e.target.result;
        }

        openRequest.onerror = function(e) {
            console.log("Error");
            console.dir(e);
        }
    }
}, false);
This example (test4.html) builds upon the previous entries so I’ll just focus on what’s new. Within the upgradeneeded event, I’ve made use of the database variable passed to it (thisDB). One of the properties of this variable is a list of existing object stores called objectStoreNames. For folks curious, this is not a simple array but a “DOMStringList.” Don’t ask me – but there ya go. We can use the contains method to see if our object store exists, and if not, create it. This is one of the few synchronous functions in IndexedDB so we don’t have to listen for the result.
To summarize then – this is what would happen when a user visits your site. The first time they are here, the upgradeneeded event fires. The code checks to see if an object store, "firstOS", exists. It will not. Therefore – it is created. Then the success handler runs. The second time they visit the site, the version number will be the same, so the upgradeneeded event is not fired.
Now imagine you wanted to add a second object store. All you need to do is increment the version number and basically duplicate the contains/createObjectStore code block you see above. The cool thing is that your upgradeneeded code will support both people who are brand new to the site as well as those who already had the first object store. Here is an example of this (test5.html):
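Here's a hedged sketch of the upgraded handler, with secondOS as a placeholder name for the new store:

var openRequest = indexedDB.open("test_v2", 2);

openRequest.onupgradeneeded = function(e) {
    var thisDB = e.target.result;

    //existing users already have this one; brand-new users get it now
    if(!thisDB.objectStoreNames.contains("firstOS")) {
        thisDB.createObjectStore("firstOS");
    }

    //everyone gets the new store
    if(!thisDB.objectStoreNames.contains("secondOS")) {
        thisDB.createObjectStore("secondOS");
    }
}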
Once you've got your object stores ready, you can begin adding data. This is – perhaps – one of the coolest aspects of IndexedDB. Unlike traditional table-based databases, IndexedDB lets you store an object as is. What that means is that you can take a generic JavaScript object and just store it. Done. Obviously there are some caveats here, but for the most part, that's it.
Working with data requires you to use a transaction. Transactions take two arguments. The first is an array of the object store names you'll be working with; most of the time this will be one store. The second argument is the type of transaction. There are two types of transactions: readonly and readwrite. Adding data is a readwrite operation. Let's start by creating the transaction:
//Assume db is a database variable opened earlier
var transaction = db.transaction(["people"],"readwrite");
Note the object store, “people”, is just one we’ve made up in the example above. Our next full demo will make use of it. After getting the transaction, you then ask it for the object store you said you would be working with:
var store = transaction.objectStore("people");
Now that you’ve got the store you can add data. This is done via the – wait for it – add method.
//Define a person
var person = {
    name: name,
    email: email,
    created: new Date()
}

//Perform the add
var request = store.add(person, 1);
Remember earlier we said that you can store any data you want (for the most part). So my person object above is completely arbitrary. I could have used firstName and lastName instead of just name. I could have added a gender property. You get the idea. The second argument is the key used to uniquely identify the data. In this case we've hard coded it to 1, which is going to cause a problem pretty quickly. That's ok – we'll learn how to correct it.
The add operation is asynchronous, so let's add two event handlers for the result.
request.onerror = function(e) {
    console.log("Error", e.target.error.name);
    //some type of error handler
}

request.onsuccess = function(e) {
    console.log("Woot! Did it");
}
We’ve got an onerror handler for errors and onsuccess for good changes. Fairly obvious, but let’s see a complete example. You can find this in the file test6.html.
<!doctype html>
<html>
<head></head>
<body>

<script>
var db;

function indexedDBOk() {
    return "indexedDB" in window;
}

document.addEventListener("DOMContentLoaded", function() {
    //No support? Go in the corner and pout.
    if(!indexedDBOk()) return;

    var openRequest = indexedDB.open("idarticle_people", 1);

    openRequest.onupgradeneeded = function(e) {
        var thisDB = e.target.result;

        if(!thisDB.objectStoreNames.contains("people")) {
            thisDB.createObjectStore("people");
        }
    }

    openRequest.onsuccess = function(e) {
        console.log("running onsuccess");
        db = e.target.result;

        //Listen for add clicks
        document.querySelector("#addButton").addEventListener("click", addPerson, false);
    }

    openRequest.onerror = function(e) {
        //Do something for the error
    }
}, false);

function addPerson(e) {
    var name = document.querySelector("#name").value;
    var email = document.querySelector("#email").value;

    console.log("About to add "+name+"/"+email);

    var transaction = db.transaction(["people"], "readwrite");
    var store = transaction.objectStore("people");

    //Define a person
    var person = {
        name: name,
        email: email,
        created: new Date()
    }

    //Perform the add
    var request = store.add(person, 1);

    request.onerror = function(e) {
        console.log("Error", e.target.error.name);
        //some type of error handler
    }

    request.onsuccess = function(e) {
        console.log("Woot! Did it");
    }
}
</script>

<input type="text" id="name" placeholder="Name"><br/>
<input type="email" id="email" placeholder="Email"><br/>
<button id="addButton">Add Data</button>

</body>
</html>
The example above contains a small form with a button to fire off an event to store the data in IndexedDB. Run this in your browser, add something to the form fields, and click add. If you’ve got your browser dev tools open, you should see something like this.
This is a great time to point out that Chrome has an excellent viewer for IndexedDB data. If you click on the Resources tab and expand the IndexedDB section, you can see the database created by this demo, as well as the object we just entered.
For the heck of it, go ahead and hit that Add Data button again. You should see an error in the console:
The error message should be a clue. ConstraintError means we just tried to add data with the same key as one that already existed. If you remember, we hard coded that key and we knew that was going to be a problem. It’s time to talk keys.
Keys
Keys are IndexedDB's version of primary keys. Traditional databases can have tables without keys, but every object store must have a key. IndexedDB allows for a couple of different types of keys.
The first option is to simply specify it yourself, like we did above. We could use logic to generate unique keys.
Your second option is a keypath, where the key is based on a property of the data itself. Consider our people example – we could use an email address as a key.
Your third option, and in my opinion the simplest, is to use a key generator. This works much like an autonumber primary key.
Keys are defined when object stores are created. Here are two examples – one using a key path and one a generator.
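Hedged sketches of both, as they would appear inside an upgradeneeded handler:

//keypath: the email property of each stored object acts as its key
thisDB.createObjectStore("people", { keyPath: "email" });

//key generator: IndexedDB assigns an auto-incrementing key for you
thisDB.createObjectStore("people", { autoIncrement: true });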
Finally, we can take the Add call we used before and remove the hard coded key:
var request = store.add(person);
That’s it! Now you can add data all day long. You can find this version in test7.html.
Reading Data
Now let’s switch to reading individual pieces of data (we’ll cover reading larger sets of data later). Once again, this will be done in a transaction and will be asynchronous. Here’s a simple example:
var transaction = db.transaction(["test"], "readonly");
var objectStore = transaction.objectStore("test");
//x is some value
var ob = objectStore.get(x);
ob.onsuccess = function(e) {
}
Note that the transaction is read only. The API call is just a simple get call with the key passed in. As a quick aside, if you think using IndexedDB is a bit verbose, note you can chain many of those calls as well. Here’s the exact same code written much tighter:
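The chained form looks something like this:

//same behavior as above, in a single statement
db.transaction(["test"], "readonly").objectStore("test").get(x).onsuccess = function(e) {

}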
Personally I still find IndexedDB a bit complex so I prefer the ‘broken out’ approach to help me keep track of what’s going on.
The result of the get call's onsuccess handler is the object you stored before. Once you have that object you can do whatever you want with it. In our next demo (test8.html) we've added a simple form field that lets you enter a key and prints the result.
The handler for the Get Data button is below:
function getPerson(e) {
    var key = document.querySelector("#key").value;
    if(key === "" || isNaN(key)) return;

    var transaction = db.transaction(["people"], "readonly");
    var store = transaction.objectStore("people");
    var request = store.get(Number(key));

    request.onsuccess = function(e) {
        var result = e.target.result;
        console.dir(result);

        if(result) {
            var s = "<h2>Key "+key+"</h2><p>";
            for(var field in result) {
                s += field+"="+result[field]+"<br/>";
            }
            document.querySelector("#status").innerHTML = s;
        } else {
            document.querySelector("#status").innerHTML = "<h2>No match</h2>";
        }
    }
}
For the most part, this should be self-explanatory. Get the value from the field and run a get call on the object store obtained from a transaction. Notice that the display code simply loops over all the fields and dumps them out. In a real application you would (hopefully) know what your data contains and work with specific fields.
Reading More Data
So that’s how you would get one piece of data. How about a lot of data? IndexedDB has support for what’s called a cursor. A cursor lets you iterate over data. You can create cursors with an optional range (a basic filter) and a direction.
As an example, the following code block opens a cursor to fetch all the data from an object store. Like everything else we’ve done with data this is asynchronous and in a transaction.
var transaction = db.transaction(["test"], "readonly");
var objectStore = transaction.objectStore("test");

var cursor = objectStore.openCursor();
cursor.onsuccess = function(e) {
    var res = e.target.result;
    if(res) {
        console.log("Key", res.key);
        console.dir("Data", res.value);
        res.continue();
    }
}
The success handler is passed a result object (the variable res above). It contains the key, the data object (in its value property), and a continue method that is used to iterate to the next piece of data.
In the following function, we've used a cursor to iterate over all of the object store's data. Since we're working with "person" data, we've called this getPeople:
function getPeople(e) {
    var s = "";

    db.transaction(["people"], "readonly").objectStore("people").openCursor().onsuccess = function(e) {
        var cursor = e.target.result;
        if(cursor) {
            s += "<h2>Key "+cursor.key+"</h2><p>";
            for(var field in cursor.value) {
                s += field+"="+cursor.value[field]+"<br/>";
            }
            s += "</p>";
            cursor.continue();
        }
        document.querySelector("#status2").innerHTML = s;
    }
}
You can see a full demo of this in your download as file test9.html. It has the same Add Person logic as the earlier examples, so simply create a few people and then hit the button to display all the data.
So now you know how to get one piece of data as well as how to get all the data. Let’s now hit our final topic – working with indexes.
They Call This IndexedDB, Right?
We've been talking about IndexedDB for the entire article but haven't yet actually done any – well – indexes. Indexes are a crucial part of IndexedDB object stores. They provide a way to fetch data based on a property's value, as well as a way to specify whether a value must be unique within a store. Later we'll demonstrate how to use indexes to get a range of data.
First – how do you create an index? Like everything else structural, indexes must be created in an upgrade event, basically at the same time you create your object store. Here is an example:
var objectStore = thisDB.createObjectStore("people", { autoIncrement:true });

//first arg is the name of the index, second is the path (column)
objectStore.createIndex("name", "name", { unique:false });
objectStore.createIndex("email", "email", { unique:true });
In the first line we create the store. We take that result (an objectStore object) and run the createIndex method. The first argument is the name for the index and the second is the property that will be indexed. In most cases I think you will use the same name for both. The final argument is a set of options. For now, we’re just using one, unique. The first index for name is not unique. The second one for email is. When we store data, IndexedDB will check these indexes and ensure that the email property is unique. It will also do some data handling on the back end to ensure we can fetch data by these indexes.
How does that work? Once you fetch an object store via a transaction, you can then ask for an index from that store. Using the code above, here is an example of that:
var transaction = db.transaction(["people"],"readonly");
var store = transaction.objectStore("people");
var index = store.index("name");
//name is some value
var request = index.get(name);
First we get the transaction, followed by the store, and then index. As we’ve said before, you could chain those first three lines to make it a bit more compact if you want.
Once you’ve got an index you can then perform a get call on it to fetch data by name. We could do something similar for email as well. The result of that call is yet another asynchronous object you can bind an onsuccess handler to. Here is an example of that handler found in the file test10.html:
request.onsuccess = function(e) {
    var result = e.target.result;

    if(result) {
        var s = "<h2>Name "+name+"</h2><p>";
        for(var field in result) {
            s += field+"="+result[field]+"<br/>";
        }
        document.querySelector("#status").innerHTML = s;
    } else {
        document.querySelector("#status").innerHTML = "<h2>No match</h2>";
    }
}
Note that an index get call may return multiple objects. Since our name index is not unique, we should probably modify the code to handle that, but it isn't required.
Now let's kick it up a notch. You've seen how to use the get API on an index to fetch a value based on that property. What if you want to get a broader set of data? The final term we're going to learn today is ranges. Ranges are a way to select a subset of an index. For example, given an index on a name property, we can use a range to find names that begin with A up to names that begin with C. Ranges come in a few different varieties: "everything below some marker", "everything above some marker", and "something between a lower marker and a higher marker". To make things interesting, ranges can also be inclusive or exclusive. Basically, that means that for a range going from A to C, we can specify whether we want to include A and C themselves or just the values between them. On top of that, you can also request both ascending and descending ranges.
Ranges are created using a top-level object called IDBKeyRange. It has three methods of interest: lowerBound, upperBound, and bound. lowerBound is used to create a range that starts at a lower value and returns all data "above" it. upperBound is the opposite. And – finally – bound is used to support a set of data with both a lower and an upper bound. Let's look at some examples:
//39 and over (the lower bound is inclusive by default)
var oldRange = IDBKeyRange.lowerBound(39);

//over 40 (passing true makes the bound exclusive)
var oldRange2 = IDBKeyRange.lowerBound(40, true);

//40 and under (the upper bound is inclusive by default)
var youngRange = IDBKeyRange.upperBound(40);

//under 39 (passing true makes the bound exclusive)
var youngRange2 = IDBKeyRange.upperBound(39, true);

//not young or old... you can specify inclusive/exclusive here too
var okRange = IDBKeyRange.bound(20, 40);
Once you have a range, you can pass it to an index's openCursor method. This gives you an iterator to loop over the values that match that range. As a practical matter, this isn't really a search per se. You can use it to search content based on the beginning of a string, but not the middle or end. Let's look at a full example. First we'll create a simple form to search people:
Starting with: <input type="text" id="nameSearch" placeholder="Name"><br/>
Ending with: <input type="text" id="nameSearchEnd" placeholder="Name"><br/><button id="getButton">Get By Name Range</button>
We're going to allow searches using any of the three range types (a value and higher, a value and lower, or the values between two inputs). Now let's look at the event handler for this form.
function getPeople(e) {
    var name = document.querySelector("#nameSearch").value;
    var endname = document.querySelector("#nameSearchEnd").value;
    if(name == "" && endname == "") return;

    var transaction = db.transaction(["people"],"readonly");
    var store = transaction.objectStore("people");
    var index = store.index("name");

    //Make the range depending on what type we are doing
    var range;
    if(name != "" && endname != "") {
        range = IDBKeyRange.bound(name, endname);
    } else if(name == "") {
        range = IDBKeyRange.upperBound(endname);
    } else {
        range = IDBKeyRange.lowerBound(name);
    }

    var s = "";
    index.openCursor(range).onsuccess = function(e) {
        var cursor = e.target.result;
        if(cursor) {
            s += "<h2>Key "+cursor.key+"</h2><p>";
            for(var field in cursor.value) {
                s += field+"="+cursor.value[field]+"<br/>";
            }
            s += "</p>";
            cursor.continue();
        }
        document.querySelector("#status").innerHTML = s;
    }
}
From top to bottom: we begin by grabbing the two form fields. Next we create a transaction, and from that get the store and index. Now for the semi-complex part. Since there are three different types of ranges to support, we need a bit of conditional logic to figure out which one to build; which range we create depends on which fields you fill in. What's nice is that once we have the range, we simply pass it to the index and open the cursor. That's it! You can find this full example in test11.html. Be sure to enter some values first so you have data to search.
What’s Next?
Believe it or not – we’ve only begun our discussion on IndexedDB. In the next article, we’ll cover additional topics, including updates and deletes, array based values, and some general tips for working with IndexedDB.
Envato, the people behind Nettuts+, have created a new avenue for web developers to earn an income doing what they love.
It’s called Microlancer, and we’ve recently started allowing freelancers to offer web dev services like PSD to HTML, WordPress plug-in development, and website customization.
You can set your own prices, turnaround time, and the terms of your services. We connect you with buyers who pay upfront to work with you. It’s an awesome new way to earn an income doing simple web dev jobs without having to worry about quotes, bidding, chasing up payment, invoicing, or self-promotion. We bring buyers, you do the work, Microlancer takes care of the rest, and you get paid.
Over $130,000 worth of jobs have been sold to date on Microlancer, and twenty of our design service providers are into the thousands of dollars in gross earnings.
If you love to code but struggle with design, Microlancer’s designers can help with things like logo design, web design, app icon design and much more.
If you currently have more work than you can handle, Microlancer is an excellent place to outsource jobs.
Do work you love and we’ll take care of the rest. That’s the concept behind Microlancer’s official desktop wallpaper, available in several desktop sizes, as well as for Android, iPhone and iPad.
Travis CI makes working in a team for a software project easier with automated builds. These builds are triggered automatically when each developer checks in their code to the repository. In this article, we will go through how we can integrate Travis CI easily with our project, which is hosted on Github. With automation, notification and testing in place, we can focus on our coding and creating, while Travis CI does the hard work of continuous integration!
Hello Travis & CI!
Travis CI is a hosted continuous integration platform that is free for all open source projects hosted on Github. With just a file called .travis.yml containing some information about our project, we can trigger automated builds with every change to our code base in the master branch, other branches or even a pull request.
Before we get started with how we can integrate Travis with our project, the following prerequisites will be helpful:
At the heart of using Travis, is the concept of continuous integration (CI). Let’s say we are working on one feature and after we are done coding, we will typically build the project so as to create the executable as well as other files necessary to run the application. After the build is completed, good practices include running all the tests to ensure they are all passing and everything is working as expected.
The last step is ensuring that whatever we coded is indeed working even after we integrate it into the mainline code. At this point we build and test again. If the integrated build succeeds we can consider that the feature has been fully implemented. Travis CI automates this exact step of triggering a build and test upon each integration to the master branch, other branches or even a pull request, accelerating the time to detection of a potential integration bug.
In the following sections, we will take a simple project and trigger a failing build, correct it and then pass it. We will also see how Travis CI easily works with Github pull requests.
Travis Interface
When we land on the main homepage, we can also see the “busyness” of many open source projects going through automated build. Let’s deconstruct the interface and understand the various parts:
Sidebar: This shows the list of public open source projects on Github currently going through automated builds. Each item has the hyperlinked project name, the duration of the build so far and the sequential build number.
Build in progress [yellow]: A little yellow colored circle beside the project name indicates that the build is in progress.
Build failed [red]: A little red colored circle beside the project name indicates that the build is complete and it has failed.
Build passed [green]: A little green colored circle beside the project name indicates that the build is complete and it has passed.
Project name and links: The title is in the format username/repository and it is linked to the Travis CI build page. The little Octocat symbol beside it links to the Github page of the repository containing its source code.
Types of build: The automated builds can be triggered by committing the code to the master branch, other branches or even a pull request. By visiting the individual tab, we can get more information about the builds.
Build activity: This section will include information about each of the tasks that the build is running.
Step 1: Hello World!
Before we integrate Travis CI, we will create a simple “hello world” project and create some build tasks. Travis supports various programming languages including Python, Ruby, PHP and JavaScript with NodeJS. For the purpose of our demo, we will use NodeJS. Let’s create a very simple file hello.js as defined on the main website of NodeJS:
var http = require('http');
http.createServer(function (req, res) {
res.writeHead(200, {'Content-Type': 'text/plain'});
res.end('Hello World\n') // missing semi-colon will fail the build
}).listen(1337, '127.0.0.1');
console.log('Server running at http://127.0.0.1:1337/');
Do notice that there is a missing semi-colon, so that later on JSHint, a JavaScript linter, will be able to detect it and raise an error. We will build the project using a task runner called GruntJS that will include JSHint. This is of course just an illustration, but in real projects we can go on to include various testing, publishing, linting and hinting tasks.
To indicate the various packages required for GruntJS, JSHint and others, we will create a second file called package.json. This file will first contain the name and version number of our simple application. Next, we will define the dependencies needed with devDependencies, which will include the GruntJS-related packages as well as JSHint. With scripts, we will tell Travis CI to run the test suite using the command grunt --verbose.
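The full listing didn't survive the formatting here, but a minimal reconstruction looks like this (the package names are the standard Grunt ones; the version numbers are assumptions):
{
  "name": "hello-travis",
  "version": "0.1.0",
  "devDependencies": {
    "grunt": "~0.4.1",
    "grunt-cli": "~0.1.9",
    "grunt-contrib-jshint": "~0.6.4"
  },
  "scripts": {
    "test": "grunt --verbose"
  }
}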
Next, let’s prepare the Gruntfile.js that will include all the tasks required to run our build. For simplicity, we can include just one task – JavaScript linting with JSHint.
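The original Gruntfile was also trimmed; a minimal sketch that matches the description (lint hello.js with JSHint as the default task) might look like this:
module.exports = function(grunt) {
  grunt.initConfig({
    jshint: {
      // Lint our single application file
      all: ['hello.js']
    }
  });

  grunt.loadNpmTasks('grunt-contrib-jshint');

  // Running "grunt" with no arguments will run the jshint task
  grunt.registerTask('default', ['jshint']);
};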
Finally, we will run the build that contains only one task after we download all the related packages with npm install:
$ npm install
$ grunt
As expected, the build will not pass because the JSHint will detect a missing semi-colon. But if we place the semi-colon back into the hello.js file and run the grunt command once again, we will see that the build will pass.
Now that we have created a simple project locally, we will push this project to our Github account and integrate Travis CI to trigger the build automatically.
Step 2: Hello World With Travis CI
The very first step in integrating Travis CI is to create a file called .travis.yml which will contain the essential information about the environment and configurations for the build to run. For simplicity, we will just include the programming environment and the version. In our simple project, it is NodeJS version 0.10. The final contents of the file .travis.yml will be as follows:
language: node_js
node_js:
- "0.10"
Now our project consists of hello.js, package.json, Gruntfile.js and .travis.yml, along with README.md and .gitignore as required.
Next, log in to Travis CI and authorize Travis CI to access your Github account. Afterwards, visit your profile page to turn on the hook for the Github repository to trigger automated builds with Travis CI.
As a final step to trigger our very first build, we will need to push to Github. Let’s remove the semi-colon in the file hello.js to make a failing build and then push to Github. This will trigger the automated build in Travis CI. Let’s visit the URL: https://travis-ci.org/[username]/[repo] to see the first build in progress!
This failing build in the above example is really a simple illustration. But this situation is reflective of something that might happen in our real projects – we try to integrate our code and the automated build fails. By default, after each build is completed, Travis CI will send emails to the commit author and repository owner. In this way, the developer that pushed the code is immediately alerted and can then fix the integration errors. In our case, let’s just insert the missing semi-colon and push to Github one more time.
git add hello.js
git commit -m "added semi-colon to pass the build"
git push
Hurray! The automated build has passed this time. Our code is integrated passing all the required tests. Now each time we try to integrate our changes whether it is to the master branch or even other branches, Travis CI will trigger an automated build.
Pull Requests
Once we have integrated Travis CI into our project, a pull request will also trigger an automated build. This is immensely useful for the repository owner or the developer who is in charge of merging the code base. Let’s see how Travis CI will advise whether the pull request is good to merge or not.
First, using another Github account, let’s fork the original repository and pull request with the following steps:
Fork the original repository
Create a new branch in the forked repository
Make the new changes and commit it
Ensure the feature branch is chosen
Compare and pull request
Merge With Caution
To simulate a failing build in the pull request, we will once again remove the semi-colon in the file hello.js, commit and push the changes and finally pull request.
Upon each pull request, Travis CI will automatically trigger the build. This time, we can also visit the “Pull Requests” tab to see the history of current or past builds triggered due to a pull request.
After Travis CI completes the build, if we visit the pull request page from the original repository, we will see that Travis CI has appended some user-interface changes to alert us that the build has failed.
Good to Merge
This failing build status will be immediately notified to the repository owner as well as the developer who did the pull request. And now, depending on the reason for the failing build, it can be rectified with another commit in the same branch. Hence, let’s add on the semi-colon and pull request one last time. Github will automatically update the pull request page as well.
And finally, when we come back to the original repository’s pull request page, this time we will see a “green” signal to go ahead and do a merge as our build is passing!
Build Configurations
The file .travis.yml defines the build configurations. Our example included just the language type and version, but we can add more useful options, such as notification settings.
Notifications in terms of emails or chat alerts are sent as declared by the build configurations. This is an example of turning off emails and sending it to IRC:
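The snippet itself was lost in formatting; per the Travis CI documentation, such a configuration looks like this (the channel name is a placeholder):
notifications:
  email: false
  irc: "chat.freenode.net#my-channel"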
As you can see, the file .travis.yml becomes very important in triggering automated builds. If this file is not valid, Travis CI will not trigger the build upon each push to Github. Hence, it's important to ensure that we have a valid file that Travis CI will interpret correctly. For this, we will install a gem called travis-lint and run it against the file .travis.yml:
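The two commands are:
$ gem install travis-lint
$ travis-lint .travis.yml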
It’s really helpful to include a little image to indicate the current status of the build. The image itself can be accessed from the URL pattern http://travis-ci.org/[username]/[repository-name].png. Another way to quickly access the images embedded in various formats is on the Travis CI project page itself. For example, we can copy the Markdown format and embed in the project’s README.md file.
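For instance, the Markdown embed follows this pattern (username and repository-name are placeholders):
[![Build Status](https://travis-ci.org/username/repository-name.png)](https://travis-ci.org/username/repository-name)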
Another cool way to track the build statuses of various open source projects while surfing around Github is to install one of the browser extensions. This will put the build status images prominently right next to each of the project names.
Resources on Travis CI
Here are some resources on the concept of continuous integration as well as learning and integrating Travis CI into our Github projects:
A fantastic way to learn what and how to include the various build configurations in the .travis.yml file is to actually browse through many of the popular open source repositories that already integrate Travis CI. Here are a few:
I hope this gave you a brief introduction to how we can easily integrate Travis CI in our Github projects. It’s really easy to use, so give it a try and make continuous integration a breeze for your team!
Recently I had the opportunity to look into Chrome Extension development. The scenario was pretty simple: I had to notify a group of users when someone from the group was using a website. A Chrome Extension was an obvious choice, and after a bit of research I came across Simperium, a service that I could use to send and receive data in real-time in my extension.
In this article we will see how simple it is to integrate real-time messaging into your Chrome Extension. To illustrate this, our final goal is a Chrome Extension that will send out real time updates about opened tabs to a separate monitoring page.
What Is Simperium
Simperium is a hosted service that will simply update the connected clients in real-time with any data that is written to it or changed. It does so in an efficient way, by only sending out data that has been changed. It can handle any JSON data and even provides an online interface to track any changes to it.
Getting Started
First off, you will have to create an account. There are various plans available at your disposal, however you can also choose the basic plan, which is free. After you are logged in, you will find yourself on the Dashboard.
To use Simperium, we will have to create an app, so go ahead and hit Add an App in the sidebar and name it whatever you wish.
On the App Summary screen you will find a unique APP ID and a Default API Key.
You can use the API key to generate an access token on the fly, however for the purposes of this tutorial we will generate this token from the Simperium interface. Look for the Browse Data tab in the Dashboard and click Generate Token.
This will generate an Access Token that we can use together with the APP ID to connect to our Simperium app.
Let’s See How This Works!
If you are like me and you can’t wait to see how this works, you will want to create a simple test page.
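That page isn't reproduced here; a minimal sketch might look like the following. The Simperium script URL follows the convention from their documentation, and jQuery is included because the script below relies on it; treat both as assumptions to verify:
<!DOCTYPE html>
<html>
<head>
    <script src="https://js.simperium.com/v0.1/"></script>
    <script src="//ajax.googleapis.com/ajax/libs/jquery/1.10.2/jquery.min.js"></script>
</head>
<body>
    <textarea></textarea>
    <div class="data"></div>
    <script src="js/script.js"></script>
</body>
</html>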
Now, as you can see, we already included the Simperium Javascript library in our HTML; we just have to initialize it in our script. We can do this by creating a new file named script.js in the js subfolder, and pasting in the following code:
var simperium = new Simperium('SIMPERIUM_APP_ID', { token : 'SIMPERIUM_ACCESS_TOKEN'}); // Our credentials
var bucket = simperium.bucket('mybucket'); // Create a new bucket
bucket.start(); // Start our bucket

bucket.on('notify', function(id, data) { // This event fires when data in the bucket is changed
    $('.data').html("<p>"+data.text+"</p>");
});

$(document).ready(function() {
    $("textarea").on('input', function() {
        var value = $(this).val();
        bucket.update("yourdata", {"text": value}); // We update our Simperium bucket with the value of the textarea
        $('.data').html("<p>"+value+"</p>"); // Our notify event doesn't fire locally so we update manually
    });
});
You will have to replace SIMPERIUM_APP_ID and SIMPERIUM_ACCESS_TOKEN with the credentials you previously generated for your app.
To test this, you have to open at least two instances of our test HTML file in the browser and you should see them update each other as you type.
The functionality is really simple: we initialize Simperium and create a new bucket. A bucket is basically a place to store our objects. Once our bucket is started, Simperium will keep it in sync; we just have to use the notify event. If we want to update the bucket, we use the update function. That's it!
This is the basic usage of Simperium, now we will combine this with a Chrome Extension to create something useful!
Our Chrome Extension
In this tutorial we will not cover the very basics of creating a Chrome Extension; if you need to catch up on that, you can do so by reading Developing Google Chrome Extensions, written by Krasimir Tsonev.
The Basic Idea
Our steps will consist of the following:
Initialize Simperium in our Extension.
Use Chrome Extension Events to get notified when a tab is opened, closed or changed.
Update our Simperium bucket with a list of the opened tabs.
Create a separate HTML file to track opened tabs using Simperium events.
Let’s jump right in by creating the basic structure of our extension which consists of:
manifest.json – Manifest file
background.js – Background script
The Manifest File
Our manifest file will look rather simple:
{"name": "Live Report","version": "1.0","description": "Live reporting of your opened tabs","manifest_version":2,"background": {"persistent": true,"scripts": ["simperium.js", "background.js"]
},"permissions": ["webNavigation","tabs"
]
}
Paste this code into a blank file and save it as manifest.json.
As you can see, we only need to load the simperium library and our background script. We need to set the persistent option to true, so that Chrome will not unload these files to save memory.
The extension will use the chrome.webNavigation API so we need to set the webNavigation permission. We also need the tabs permission to have access to the title of the tabs.
The Background Script
Create a background.js file and save it next to manifest.json.
This is the core of our extension; let's go through it step by step.
First things first, we need to initialize Simperium:
var simperium = new Simperium('SIMPERIUM_APP_ID', { token : 'SIMPERIUM_ACCESS_TOKEN'});
var data = simperium.bucket('tabs');
data.start();
Don’t forget to replace SIMPERIUM_APP_ID and SIMPERIUM_ACCESS_TOKEN with the correct values you generated earlier.
In this case, we will create a new bucket called “tabs” to store our data.
The chrome.webNavigation and the chrome.tabs API
These APIs contain the events we'll use to detect when a tab is opened, closed or changed.
chrome.tabs.onUpdated will fire when a tab is opened or when its page changes.
chrome.tabs.onRemoved will fire when you close a tab.
These two events seem to cover what we need, however it turns out that chrome.tabs.onUpdated does not fire when a tab is updated with a new page that is in the browser cache.
As a workaround, we can use chrome.webNavigation.onTabReplaced.
According to the documentation: “Fired when the contents of the tab is replaced by a different (usually previously pre-rendered) tab.”
The wording is not rock solid, but the event does work and will help us catch when a tab's content is replaced with a cached page.
With these events, in theory, we could keep track of our tabs ourselves; however, with the events firing multiple times, this would be a tedious task.
Our solution is the chrome.tabs.query method.
chrome.tabs.query(queryInfo, function(tabs){
});
Our callback function receives an array containing all opened tabs. We can also set the queryInfo parameter to narrow the results, but for the purposes of this tutorial we will leave it empty.
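If you ever do want to narrow the results, the same call accepts filter properties, for example:
// Only the active tab in the current window
chrome.tabs.query({ active: true, currentWindow: true }, function(tabs) {
    console.log(tabs[0].title);
});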
Putting It All Together
Let’s take a look at our final code:
var simperium = new Simperium('SIMPERIUM_APP_ID', { token : 'SIMPERIUM_ACCESS_TOKEN'});
var data = simperium.bucket('tabs');
data.start();

chrome.tabs.onUpdated.addListener(function(tabId, changeInfo, tab) {
    chrome.tabs.query({}, function(tabs){
        updateTitles(tabs);
    });
});

chrome.tabs.onRemoved.addListener(function(tabId, removeInfo) {
    chrome.tabs.query({}, function(tabs){
        updateTitles(tabs);
    });
});

chrome.webNavigation.onTabReplaced.addListener(function(e){
    chrome.tabs.query({}, function(tabs){
        updateTitles(tabs);
    });
});

function updateTitles(tabs){
    var titles = [];
    var length = tabs.length;
    for (var i = 0; i < length; i++) {
        titles[i] = tabs[i].title;
    }
    data.update("Tabs", {"Titles" : titles});
}
We use the events mentioned above to catch all tab events and query all opened tabs. To keep things simple, we created the updateTitles function that will go through our tabs array with a simple loop and assign the title value of every element to a new array.
In the last step, we update our Simperium object with our newly created array.
You can use the Browse Data tab in your Simperium Dashboard to verify if data is being changed correctly in your bucket, but we will also create a really simple HTML page to view our data.
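That viewing page isn't reproduced here; a minimal sketch, mirroring the earlier test page (the script URLs and the monitor.js file name are assumptions), could be:
<!DOCTYPE html>
<html>
<head>
    <script src="https://js.simperium.com/v0.1/"></script>
    <script src="//ajax.googleapis.com/ajax/libs/jquery/1.10.2/jquery.min.js"></script>
</head>
<body>
    <div class="tabs"><ul></ul></div>
    <script src="js/monitor.js"></script>
</body>
</html>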
Finally, some Javascript to retrieve the data from Simperium:
var simperium = new Simperium('SIMPERIUM_APP_ID', { token : 'SIMPERIUM_ACCESS_TOKEN'});
var data = simperium.bucket('tabs');
data.start();
data.on('notify', function(id, data) {
    $(".tabs ul").html("");
    var length = data.Titles.length;
    for (var i = 0; i < length; i++) {
        $("<li>"+data.Titles[i]+"</li>").appendTo(".tabs ul");
    }
});
We simply use the Simperium notify event to update our data in real-time. We generate the <li> tags with the titles inside a <ul>, and that's it!
You can see the page automatically updates when you open or close a tab.
Conclusion
In this tutorial we looked at Simperium and tab related events in Chrome. As you can see, it is quite easy to use them together, just don’t forget to set the persistent flag for your background page to true in your manifest file.
Our result might not be the most useful application of these technologies but it certainly helps us understand how easy it is to get started with them. They allow us to create some really interesting and fun applications.
I hope you enjoyed this article and I encourage you to leave a comment if you get stuck or have any questions. Thanks and have fun!
OK, so a couple of weeks ago now, on its very own two-year anniversary, Mark Otto and the rest of the team responsible for the development and maintenance of Bootstrap announced the official release of the framework's third version, and it came on steroids. Let's see what we're getting.
What’s New?
First off, the most important change you're going to find in the new Bootstrap release is the support for responsive websites; as a matter of fact, the responsive module has been removed. Now, from its core, Bootstrap is responsive. More than that, this new version comes with a "Mobile First" approach, meaning that almost everything has been redesigned to start from a lower screen size and scale up (more on that in a bit).
Nearly everything has been redesigned and rebuilt to start from your handheld devices and scale up.
In the look and feel you'll see a lot of changes too, most prominently that the whole style has gone flat, with an optional theme for a 2.0-style look. Additionally, the icons have gone from images to a font, which is really handy when you need different sizes and colors.
Grid System
Let's start talking about the Grid System. Oh, the Grid. As a matter of fact, there are four grid systems in this new version of Bootstrap, each working exactly the same and differentiated only by the screen width at which it operates.
Enabling the Grid
In order for the Grid System to work properly and also to ensure proper rendering and touch zooming, add the viewport meta tag to your document:
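This is the standard tag from the Bootstrap documentation:
<meta name="viewport" content="width=device-width, initial-scale=1.0">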
There are four grid systems in this new version of the framework, with the width of the viewport being the parameter that differentiates them. The widths that set the boundaries between one and another are as follows:
Extra small devices ~ Phones (< 768px)
Small devices ~ Tablets (>= 768px)
Medium devices ~ Desktops (>= 992px)
Large devices ~ Desktops (>= 1200px)
And each of the different supported viewports have a particular class to address it, as follows:
col-xs- ~ Extra small devices
col-sm- ~ Small devices
col-md- ~ Medium devices
col-lg- ~ Large devices
To make use of the Grid System you need a container element with a class of "container", and inside it a second container with a class of "row". Notice how in both cases the "fluid" suffix has disappeared, in contrast with Bootstrap 2. Inside the second container you place your columns.
As I mentioned earlier, this new version of Bootstrap comes with a Mobile First approach. What this means is that columns with an "xs" class are always floated horizontally, no matter the viewport size. If you were to use columns with an "md" class instead, and the viewport happened to be less than 992px wide (even 991px), those columns would stack one below the other with a 100% width, as in the next example.
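A sketch of that markup (three equal "md" columns with placeholder content):
<div class="container">
    <div class="row">
        <div class="col-md-4">Column one</div>
        <div class="col-md-4">Column two</div>
        <div class="col-md-4">Column three</div>
    </div>
</div>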
When this page is viewed at a viewport of 992px or more, it will look like this:
If you would happen to see that document in a viewport of 991px or less, it would look as follows:
You can also combine classes to address your elements at a given viewport size. For instance, if in the following example you’d need the first two columns floated side by side in small devices (sm) and stacked on phones, you could add the col-sm-6 in addition to the col-md-4 class.
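That combination would look something like this (again with placeholder content):
<div class="container">
    <div class="row">
        <div class="col-sm-6 col-md-4">Column one</div>
        <div class="col-sm-6 col-md-4">Column two</div>
        <div class="col-md-4">Column three</div>
    </div>
</div>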
In this case, opening the page in a viewport larger than 991px you’d see three equal columns, one next to the other, just like in the last example. However, if you’d see it in a viewport with a width between 768px and 991px, you’d get the following result:
As in the example above, you can combine and nest columns in a lot of different combinations to create very complex layouts. There’s a lot more to the Grid System in Bootstrap, but going into detail about every aspect of it would take a while to cover, so for a deeper look into it I’d strongly suggest that you go ahead and take a look at the documentation.
Bootstrap CSS
Most of the classes for the Base CSS part of Bootstrap have remained the same, however there are some changes that we must keep in mind when using this new version.
The code as a whole has been re-written and variable names have changed. If you look at the .less files, you'll find that all the variables in the framework have switched from camelCase to hyphens for word separation, and every variable name has been standardized in an "element-state-pseudo state" approach. What this means is that element-specific styles like:
<ul class="unstyled">...</ul>
These are now prefixed with the element they apply to, so in this new version of Bootstrap the previous list becomes:
<ul class="list-unstyled">...</ul>
The same applies to lists with an "inline" style. Some other changes in variable names, reflected in classes we've known since the early days, are those related to size. For instance, with buttons, in version 2.* you'd have:
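Something like this (reconstructed; btn-large and btn-small were the version 2 size classes):
<button class="btn btn-large btn-primary">Large button</button>
<button class="btn btn-small btn-primary">Small button</button>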
These size suffixes have changed to match the overall naming convention, the same one used by the Grid System, so the previous button declaration becomes the following in version three:
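That is, with the new lg and sm suffixes:
<button class="btn btn-lg btn-primary">Large button</button>
<button class="btn btn-sm btn-primary">Small button</button>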
The same applies for input sizes and visibility declarations as well.
Responsive Tables
The overall syntax and layout for tables remain the same in this version of the framework. But, loyal to the whole "Mobile First" paradigm, we now have responsive tables in this iteration of Bootstrap. To take advantage of them, simply wrap the whole table in a container with a class of "table-responsive"; this makes the table scroll horizontally on small devices (< 768px).
In the CSS department, it's in the forms where you'll see the major differences. For starters, every input in a Bootstrap 3 form is now displayed as a "block" element with a 100% width. The "size" classes you can apply to form controls relate to the padding and font-size of the element, not the width; to control that, you need to put the control in a container of the desired width.
The markup for forms has also changed. In its most basic form, in version 2.* a form would look something like the following.
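A reconstruction from the version 2 docs (the field names are placeholders):
<form>
    <label>Email address</label>
    <input type="text" placeholder="Enter email">
    <button type="submit" class="btn">Submit</button>
</form>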
Bootstrap 3 has been created with accessibility in mind; that's the reason for the "role" attribute addition. Note also that the label/input combo is wrapped inside a container with a class of "form-group", and like everything else, this has to do with the responsive nature of the framework. The equivalent version 3 markup looks like this:
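Following the structure from the version 3 documentation:
<form role="form">
    <div class="form-group">
        <label for="email">Email address</label>
        <input type="email" class="form-control" id="email" placeholder="Enter email">
    </div>
    <button type="submit" class="btn btn-default">Submit</button>
</form>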
The search form is gone in this version, and since all inputs and textareas are 100% width by default, special consideration has to be taken with inline forms, however the markup for these is almost identical to that of the previous form.
Note the addition of the class "form-inline" to the form element, and of "sr-only" to the label; this last class has to do with the accessibility part of the framework. The use of a label is optional with inline forms, but it's highly encouraged and doesn't hurt anyone. And by the way, "sr-only" stands for Screen Reader Only, so screen readers will find the label and "say it" to the user.
Lastly, to create a horizontal form you simply set the width of the label with a "col-md-*" (or "col-sm-*", and so on) class plus the corresponding "control-label" class, just as with version two, and then wrap the input in a container with its own column width specified.
There are some other changes that have been made in regard to forms, like the removal of “input-prepend” and “input-append” classes, in favor of “input-group” and “input-group-addon“. However, there’s a lot more to cover yet, so for details on that, please refer to the documentation.
Glyphicons
Another area where you're going to find major changes is in the framework's icons. The icon library includes 40 new glyphs, and not only that, they've switched from images to fonts. So now, instead of adding the two "glyphicons-*" images to your "img" folder, you'll have to add the four "glyphicons" fonts to your "fonts" directory; yes, four of them, which has to do with browser compatibility.
For performance reasons, every icon requires a base class and an icon-specific class. So now, to add a user icon you'd use:
<span class="glyphicon glyphicon-user"></span>
The switch from images to fonts gives you control over icon coloring and sizing, and also allows you to replace the default icons with custom ones you may like. If you're wondering where you might find such font icons, Fontello is a great resource.
JavaScript Components
Although redesigned and recreated, the JavaScript components in Bootstrap 3.0 keep the same names and usage, with a couple of differences, some slight and some not so gentle.
Modals
Perhaps one of the most used plugins in Bootstrap is Modals. You'll find it is quite similar, with the difference being that the containers "modal-header", "modal-body" and "modal-footer" are now wrapped inside a "modal-content" container, itself inside a "modal-dialog" container. So the markup now looks like this:
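A sketch of the new structure (the ellipses stand in for your own content):
<div class="modal fade" id="my-modal">
    <div class="modal-dialog">
        <div class="modal-content">
            <div class="modal-header">...</div>
            <div class="modal-body">...</div>
            <div class="modal-footer">...</div>
        </div>
    </div>
</div>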
Yes, it’s a little more markup, but it improves the responsiveness of the component and also allows it to scroll the whole viewport instead of having a “max-height” parameter.
To trigger them via JavaScript, you’d use the same method as before.
$( '#my-modal' ).modal('show');
The rest of the plugins remain mostly the same. On a special note, the accordion is gone in favor of collapsible panels, they work pretty much the same and have a very similar syntax. With some classes changing a bit, they still require the transitions plugin and don’t require any extra containers.
JavaScript events are now namespaced, but what does that mean to you? Well, in Bootstrap 2, to listen for the moment when some alert in your site was closed, you'd add:
$( '#my-alert' ).bind( 'close', function() {});
Now in this third version, the event name has changed; it is namespaced to the framework, so the previous snippet becomes:
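Using the namespaced event name from the version 3 docs:
$( '#my-alert' ).on( 'close.bs.alert', function() {});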
You can add the "active" class to any item in a list group to highlight it. Also, if you happen to place a badge inside an item, it will be centered vertically and aligned to the right, inside the item. Go ahead and try it.
Panels
Panels are a way to box in some content in your site or application, highlight it and give it a sense of unity. The panel markup is fairly simple, and its contents can be combined with different elements to achieve a unique look and feel.
Panels can have headers and footers and take on a special meaning with the well known contextual classes such as "success", "warning" and "danger". For instance:
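A sketch using the documented panel classes:
<div class="panel panel-success">
    <div class="panel-heading">Panel heading</div>
    <div class="panel-body">Panel content goes here.</div>
    <div class="panel-footer">Panel footer</div>
</div>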
As you can see, panels work well with list groups, and also with non-bordered tables.
Customizer
Also new in this version is the Customizer, in which not only has the look changed, it's far better organized and gives you control over things like the viewport width at which a certain grid system takes control.
As always, you can set all your fonts styles and colors. It’s a huge topic on its own so I’d encourage you to go on your own and mess with it.
Browser Support
Long has been the suffering brought to all of us by Internet Explorer; version six was a total nightmare, and its successors still have a lot of catching up to do. In version 2.* Bootstrap's team still supported version seven of Microsoft's browser. In this new iteration of the framework, support for IE seven is gone, as is support for Mozilla Firefox 3.6 and below.
Specifically, Bootstrap supports the latest version of all the major browsers (Safari, Opera, Chrome, Firefox and IE), on both Windows and Mac where both exist.
For IE, it supports version eight and onward, and although there are some properties that the browser doesn't render, such as "border-radius", the framework is fully functional with only some minor look and feel differences. Also, IE eight requires respond.js for media query support.
To get a detailed list of workarounds and gotchas for the different browsers (cough Internet Explorer cough) look at the official docs.
Conclusion
Since its beginning, Bootstrap has been a great tool for rapid prototyping and creation of great sites and web applications and this third version is no different. If you need just one reason to use it, I would definitely go for the Grid System, with the growth of mobile browsing and the always increasing viewport sizes out there, responsiveness is a must now, more than ever. And in this latest version, that’s the area where Bootstrap shines the most.
I remember working on a Rails app a few years ago when someone floated the idea of using this new service that had appeared on the scene. It was called New Relic, and they were promising to give you more insight into the performance of your Rails app than you could ever get before. We gave it a try and it was impressive; more importantly, it was something the Ruby web development ecosystem truly needed.
Fast forward to now and you’d be hard-pressed to find a Ruby web application that doesn’t have New Relic hooked in. New Relic as a company has continued to provide tools to monitor your Ruby apps, but they’ve also branched out into a number of other languages such as Java, Python and even .Net. But of course as the number of features you provide grows so does the complexity and the amount of documentation out there. It becomes hard to figure out where to start especially if you’re not yet an expert.
Today I thought we could go back to the roots of New Relic and look at how we can get started with the service to monitor a Rails application.
A Basic Rails App
In order to use New Relic we need something to monitor, so let’s set up a basic ‘Hello World’ Rails app.
The app we create will live under ~/projects/tmp/newrelic, and will be called newrelic_rails1. I assume you already have Rails installed:
cd ~/projects/tmp/newrelic
rails new newrelic_rails1
cd newrelic_rails1
There isn’t much for us to do to create our ‘Hello World’ app. We need a new controller:
rails g controller hello
Now we just need a route; we will point the root route of the application at our controller. We also need a view with the words 'Hello World'. Given all this, our config/routes.rb should look like this:
NewrelicRails1::Application.routes.draw do
  root 'hello#index'
end
Our controller (app/controllers/hello_controller.rb) will be as follows:
class HelloController < ApplicationController
  def index
  end
end
And our view (app/views/hello/index.html.erb) will be similar to:
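Something as simple as a single line will do:
<h1>Hello World</h1>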
With Ruby, setting up New Relic is very simple. We add a gem to our Gemfile, run bundle install, drop a config file into the config folder, and we have all we need. In fact, New Relic is pretty good at guiding you through this. All you need to do is log in to your account, and if you haven't deployed a New Relic agent before, it's pretty obvious what to do:
Firstly, we install the New Relic agent gem by adding it to our Gemfile, as per the instructions:
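The gem is newrelic_rpm, so the Gemfile line is:
gem 'newrelic_rpm'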
There is a bunch of JavaScript that got inserted into our pages so that New Relic can monitor browser time. This is one way we can tell that our New Relic integration is working. But it is not the only way, New Relic also creates a log file:
% cat log/newrelic_agent.log
Logfile created on 2013-09-22 16:23:13 +1000 by logger.rb/36483
[09/22/13 16:23:13 +1000 skorks-envato (12424)] INFO : Starting the New Relic agent in "production" environment.
[09/22/13 16:23:13 +1000 skorks-envato (12424)] INFO : To prevent agent startup add a NEWRELIC_ENABLE=false environment variable or modify the "production" section of your newrelic.yml.
[09/22/13 16:23:13 +1000 skorks-envato (12424)] INFO : Reading configuration from config/newrelic.yml
[09/22/13 16:23:13 +1000 skorks-envato (12424)] INFO : Enabling the Request Sampler.
[09/22/13 16:23:13 +1000 skorks-envato (12424)] INFO : Environment: production
[09/22/13 16:23:13 +1000 skorks-envato (12424)] INFO : Dispatcher: webrick
[09/22/13 16:23:13 +1000 skorks-envato (12424)] INFO : Application: My Application
[09/22/13 16:23:13 +1000 skorks-envato (12424)] INFO : Installing ActiveRecord 4 instrumentation
[09/22/13 16:23:13 +1000 skorks-envato (12424)] INFO : Installing Net instrumentation
[09/22/13 16:23:13 +1000 skorks-envato (12424)] INFO : Installing deferred Rack instrumentation
[09/22/13 16:23:13 +1000 skorks-envato (12424)] INFO : Installing Rails 4 Controller instrumentation
[09/22/13 16:23:13 +1000 skorks-envato (12424)] INFO : Installing Rails 4 view instrumentation
[09/22/13 16:23:13 +1000 skorks-envato (12424)] INFO : Installing Rails4 Error instrumentation
[09/22/13 16:23:13 +1000 skorks-envato (12424)] INFO : Finished instrumentation
[09/22/13 16:23:13 +1000 skorks-envato (12424)] INFO : Doing deferred dependency-detection before Rack startup
[09/22/13 16:23:16 +1000 skorks-envato (12424)] INFO : Reporting to: https://rpm.newrelic.com/accounts/303380/applications/2507356
We can also check our New Relic account to make sure a new application has appeared for monitoring:
There are however a few things that are not so nice:
Our application is named ‘My Application’
We accepted all the default configuration values, which may not suit our app
We had to launch our server in production mode (which is only possible because it's a brand new app that doesn't rely on any external infrastructure)
So let us look at our newrelic.yml file in a little bit more detail to see how we can monitor our app performance exactly the way we want it.
Diving in to New Relic Configuration
First of all, the New Relic configuration file is extremely well commented and I encourage you to read the comments for the various configuration parameters to understand what all of them do.
Secondly, New Relic configuration is environment aware, and configuration for all environments is defined in the one newrelic.yml file; this is very similar to how the Rails database.yml file works. We define a bunch of common configuration values and then override the relevant ones in the specific environment blocks, e.g.:
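The generated file follows this shape (the license key is a placeholder):
common: &default_settings
  license_key: '<YOUR LICENSE KEY>'
  app_name: My Application
  monitor_mode: true

development:
  <<: *default_settings
  monitor_mode: false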
We can instantly begin to see how we can fix some of the points that we raised above. If we don’t want to have to launch our app in production mode while we’re tweaking our configuration, all we have to do is enable monitoring in development mode (we will need to remember to switch this off when we’re happy with our configuration as we don’t want development data cluttering up our New Relic account).
We should also override our application name for every environment that we have, to make sure they’re monitored separately and the application name makes sense:
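For example (the app names here simply reuse the project name from earlier):
development:
  <<: *default_settings
  monitor_mode: true
  app_name: newrelic_rails1 (Development)

production:
  <<: *default_settings
  app_name: newrelic_rails1 (Production)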
With just those configuration tweaks, when we start our server in development mode and curl localhost:3000:
We’re now monitoring our application in development mode and our app name is what we expect. If your application is saying that it’s not receiving any data, give it a minute, it takes a little while for the data to start coming through.
The next most interesting (and often the most confusing) configuration value is the Apdex T-value. Unlike most of the other configuration parameters, this value does not live in the newrelic.yml file, but is instead found in the settings for the application within New Relic:
If you want to tweak your Apdex T-value you have to do it here, but what is this parameter and what is the right value to put in it? Well, New Relic explains it in the following way:
Your application’s Apdex T-value is set to 0.5 seconds. That means requests responding in less than 0.5 seconds are satisfying (s), responding between 0.5 seconds and 2.0 seconds are tolerating (t), and responding in more than 2.0 seconds are frustrating (f).
Essentially, New Relic uses the Apdex value to gauge the health of your application as far as performance is concerned, so if many of the requests that are monitored by New Relic take longer than your Apdex value, New Relic will consider your application to be performing poorly and, if you've set up alerts, will notify you of the fact. Basically, you have to figure out how fast you want each server request to be fulfilled by your application. If you're OK with a backend request taking two seconds, you can set your Apdex value to 2.0, but if you need a response to be returned within 100ms then you should set your Apdex value to 0.1.
If you have a new application, you may set the Apdex value to the performance you desire from your application. If your app is an existing one, you may have some metrics regarding how fast it is (or should be) performing, and you can be guided by that. All requests fulfilled by the server in less than the Apdex T-value will be considered satisfying. All requests fulfilled within Apdex * 4 seconds will be considered tolerating (i.e. users can tolerate it). All responses that take longer than Apdex * 4 will be considered frustrating (frustrated users don't tend to stick around). So, set your Apdex T-value in such a way that you actually get useful information out of it; the actual value depends on your domain and what you want to achieve in terms of performance. There is no right or wrong answer.
We will set our Apdex T-value to 100ms (0.1), since all we have is a ‘Hello World’ app, and it should be able to return a response very quickly (even in development mode).
Even More New Relic Configuration
It was a little funny that most of the configuration comes from the newrelic.yml file, but the Apdex T-value is in the application settings, so New Relic now allows you to move all the configuration values from the YAML file into New Relic:
The advantage of this is that you don’t have to redeploy every time you want to tweak your configuration values, so it is definitely something worth considering. We will stick with the YAML file for now.
So what are some of the other useful New Relic parameters we should know about?
Well, there is a set of parameters dealing with the New Relic agent log file:
log_level: info
log_file_path: 'log'
log_file_name: 'newrelic_agent.log'
These have sensible defaults, but if we want the log file to go to a specific place or if we want to see more or less info in the file, we can easily control this. Since we’re just setting up New Relic we will set the log level to debug, to make sure we don’t miss any important information (when we deploy we may want to set it to warn, or even error).
We now get a wealth of information in the log file, which (if read carefully) can give us a lot of insights into how New Relic works:
% cat log/newrelic_agent.log
[09/22/13 17:23:39 +1000 skorks-envato (12925)] INFO : Starting the New Relic agent in "development" environment.
[09/22/13 17:23:39 +1000 skorks-envato (12925)] INFO : To prevent agent startup add a NEWRELIC_ENABLE=false environment variable or modify the "development" section of your newrelic.yml.
[09/22/13 17:23:39 +1000 skorks-envato (12925)] INFO : Reading configuration from config/newrelic.yml
[09/22/13 17:23:39 +1000 skorks-envato (12925)] DEBUG : Not in Rake environment so skipping blacklisted_rake_tasks check: uninitialized constant Rake
[09/22/13 17:23:39 +1000 skorks-envato (12925)] DEBUG : Updating config (add) from NewRelic::Agent::Configuration::YamlSource. Results:
[09/22/13 17:23:39 +1000 skorks-envato (12925)] DEBUG : {...}
[09/22/13 17:23:39 +1000 skorks-envato (12925)] DEBUG : Not in Rake environment so skipping blacklisted_rake_tasks check: uninitialized constant Rake
[09/22/13 17:23:39 +1000 skorks-envato (12925)] DEBUG : Updating config (add) from NewRelic::Agent::Configuration::ManualSource. Results:
[09/22/13 17:23:39 +1000 skorks-envato (12925)] DEBUG : {...}
[09/22/13 17:23:39 +1000 skorks-envato (12925)] DEBUG : Not in Rake environment so skipping blacklisted_rake_tasks check: uninitialized constant Rake
[09/22/13 17:23:39 +1000 skorks-envato (12925)] DEBUG : Installed New Relic Browser Monitoring middleware
[09/22/13 17:23:39 +1000 skorks-envato (12925)] DEBUG : Installed New Relic Agent Hooks middleware
[09/22/13 17:23:39 +1000 skorks-envato (12925)] DEBUG : Agent is configured to use SSL
[09/22/13 17:23:39 +1000 skorks-envato (12925)] DEBUG : Using JSON marshaller
[09/22/13 17:23:39 +1000 skorks-envato (12925)] DEBUG : Transaction tracing threshold is 2.0 seconds.
[09/22/13 17:23:39 +1000 skorks-envato (12925)] DEBUG : Ignoring errors of type 'ActionController::RoutingError'
[09/22/13 17:23:39 +1000 skorks-envato (12925)] DEBUG : Ignoring errors of type 'Sinatra::NotFound'
[09/22/13 17:23:39 +1000 skorks-envato (12925)] DEBUG : Errors will be sent to the New Relic service.
[09/22/13 17:23:39 +1000 skorks-envato (12925)] DEBUG : Ignoring errors of type 'ActionController::RoutingError'
[09/22/13 17:23:39 +1000 skorks-envato (12925)] DEBUG : Ignoring errors of type 'Sinatra::NotFound'
[09/22/13 17:23:39 +1000 skorks-envato (12925)] DEBUG : RequestSampler max_samples set to 1200
[09/22/13 17:23:39 +1000 skorks-envato (12925)] DEBUG : Resetting RequestSampler
[09/22/13 17:23:39 +1000 skorks-envato (12925)] INFO : Enabling the Request Sampler.
[09/22/13 17:23:39 +1000 skorks-envato (12925)] INFO : Environment: development
[09/22/13 17:23:39 +1000 skorks-envato (12925)] INFO : Dispatcher: webrick
[09/22/13 17:23:39 +1000 skorks-envato (12925)] INFO : Application: newrelic_rails1 (Development)
[09/22/13 17:23:39 +1000 skorks-envato (12925)] DEBUG : EnvironmentReport failed to retrieve value for "Plugin List": undefined method `plugins' for #<Rails::Application::Configuration:0x007fb232401a00>
[09/22/13 17:23:39 +1000 skorks-envato (12925)] DEBUG : EnvironmentReport failed to retrieve value for "JRuby version": uninitialized constant NewRelic::EnvironmentReport::JRUBY_VERSION
[09/22/13 17:23:39 +1000 skorks-envato (12925)] DEBUG : EnvironmentReport failed to retrieve value for "Java VM version": uninitialized constant NewRelic::EnvironmentReport::ENV_JAVA
[09/22/13 17:23:39 +1000 skorks-envato (12925)] DEBUG : EnvironmentReport ignoring value for "Rails threadsafe" which came back falsey: nil
[09/22/13 17:23:39 +1000 skorks-envato (12925)] DEBUG : Creating Ruby Agent worker thread.
[09/22/13 17:23:39 +1000 skorks-envato (12925)] DEBUG : Creating New Relic thread: Worker Loop
[09/22/13 17:23:39 +1000 skorks-envato (12925)] DEBUG : New Relic Ruby Agent 3.6.7.152 Initialized: pid = 12925
[09/22/13 17:23:39 +1000 skorks-envato (12925)] DEBUG : Connecting Process to New Relic: bin/rails
[09/22/13 17:23:39 +1000 skorks-envato (12925)] DEBUG : Not in Rake environment so skipping blacklisted_rake_tasks check: uninitialized constant Rake
[09/22/13 17:23:39 +1000 skorks-envato (12925)] DEBUG : Created net/http handle to collector.newrelic.com:443
[09/22/13 17:23:39 +1000 skorks-envato (12925)] DEBUG : Sending request to collector.newrelic.com:443/agent_listener/12/1f69cbd2a641bde79bdb5eb4c86a0ab32360e1f8/get_redirect_host?marshal_format=json
[09/22/13 17:23:39 +1000 skorks-envato (12925)] INFO : Installing ActiveRecord 4 instrumentation
[09/22/13 17:23:39 +1000 skorks-envato (12925)] INFO : Installing Net instrumentation
[09/22/13 17:23:39 +1000 skorks-envato (12925)] INFO : Installing deferred Rack instrumentation
[09/22/13 17:23:39 +1000 skorks-envato (12925)] INFO : Installing Rails 4 Controller instrumentation
[09/22/13 17:23:39 +1000 skorks-envato (12925)] INFO : Installing Rails 4 view instrumentation
[09/22/13 17:23:39 +1000 skorks-envato (12925)] INFO : Installing Rails4 Error instrumentation
[09/22/13 17:23:39 +1000 skorks-envato (12925)] INFO : Finished instrumentation
[09/22/13 17:23:39 +1000 skorks-envato (12925)] DEBUG : Registered NewRelic::Agent::Samplers::CpuSampler for harvest time sampling.
[09/22/13 17:23:39 +1000 skorks-envato (12925)] DEBUG : Registered NewRelic::Agent::Samplers::MemorySampler for harvest time sampling.
[09/22/13 17:23:39 +1000 skorks-envato (12925)] DEBUG : NewRelic::Agent::Samplers::ObjectSampler not supported on this platform.
[09/22/13 17:23:39 +1000 skorks-envato (12925)] DEBUG : NewRelic::Agent::Samplers::DelayedJobSampler not supported on this platform.
[09/22/13 17:23:39 +1000 skorks-envato (12925)] INFO : Doing deferred dependency-detection before Rack startup
[09/22/13 17:23:40 +1000 skorks-envato (12925)] DEBUG : Uncompressed content returned
[09/22/13 17:23:40 +1000 skorks-envato (12925)] DEBUG : Created net/http handle to collector-1.newrelic.com:443
[09/22/13 17:23:40 +1000 skorks-envato (12925)] DEBUG : Sending request to collector-1.newrelic.com:443/agent_listener/12/1f69cbd2a641bde79bdb5eb4c86a0ab32360e1f8/connect?marshal_format=json
[09/22/13 17:23:42 +1000 skorks-envato (12925)] DEBUG : Uncompressed content returned
[09/22/13 17:23:42 +1000 skorks-envato (12925)] DEBUG : Server provided config: {...}
[09/22/13 17:23:42 +1000 skorks-envato (12925)] DEBUG : Not in Rake environment so skipping blacklisted_rake_tasks check: uninitialized constant Rake
[09/22/13 17:23:42 +1000 skorks-envato (12925)] DEBUG : Updating config (add) from NewRelic::Agent::Configuration::ServerSource. Results:
[09/22/13 17:23:42 +1000 skorks-envato (12925)] DEBUG : {...}
[09/22/13 17:23:42 +1000 skorks-envato (12925)] DEBUG : Wiring up Cross Application Tracing to events after finished configuring
[09/22/13 17:23:42 +1000 skorks-envato (12925)] DEBUG : Connected to New Relic Service at collector-1.newrelic.com
[09/22/13 17:23:42 +1000 skorks-envato (12925)] DEBUG : Agent Run = 575257565.
[09/22/13 17:23:42 +1000 skorks-envato (12925)] DEBUG : Connection data = {...}
[09/22/13 17:23:42 +1000 skorks-envato (12925)] INFO : Reporting to: https://rpm.newrelic.com/accounts/303380/applications/2507376
[09/22/13 17:23:42 +1000 skorks-envato (12925)] DEBUG : Browser timing header: "<script type=\\"text/javascript\\">var NREUMQ=NREUMQ||[];NREUMQ.push([\"mark\",\"firstbyte\",new Date().getTime()]);</script>"
[09/22/13 17:23:42 +1000 skorks-envato (12925)] DEBUG : Browser timing static footer: "if (!NREUMQ.f) { NREUMQ.f=function() {\nNREUMQ.push([\"load\",new Date().getTime()]);\nvar e=document.createElement(\"script\");\ne.type=\"text/javascript\";\ne.src=((\"http:\"===document.location.protocol)?\"http:\":\"https:\") + \"//\" +\n \"js-agent.newrelic.com/nr-100.js\";\ndocument.body.appendChild(e);\nif(NREUMQ.a)NREUMQ.a();\n};\nNREUMQ.a=window.onload;window.onload=NREUMQ.f;\n};\n"
[09/22/13 17:23:42 +1000 skorks-envato (12925)] DEBUG : Real User Monitoring is using JSONP protocol
[09/22/13 17:23:42 +1000 skorks-envato (12925)] DEBUG : Reporting performance data every 60 seconds.
[09/22/13 17:23:42 +1000 skorks-envato (12925)] DEBUG : Running worker loop
[09/22/13 17:23:50 +1000 skorks-envato (12925)] DEBUG : Attempting to insert RUM header at beginning of head.
For example we can see that:
We can switch off monitoring even if it’s switched on in the configuration file, by setting an environment variable NEWRELIC_ENABLE=false
We can see that New Relic inserts a bunch of Rack middleware
We’re using Webrick as our server, which is obviously in development mode, but in production it would be good to confirm that New Relic recognises the server that we’re using
New Relic is sending data to collector.newrelic.com:443
The transaction tracer captures detailed data about requests that take too long. The transaction threshold is normally a multiple (x4) of the Apdex value, but it is often useful to divorce these values from each other. You might be happy with an Apdex score of one second, but you may want to capture detailed data about requests that take 1.5 seconds or longer (instead of the four seconds or longer which would happen by default). So you can set this parameter separately:
transaction_tracer:
transaction_threshold: 1.5
The New Relic Developer Mode
One of the configuration values you may have noticed was:
developer_mode: true
This should only be switched on in development (if at all). In development mode, New Relic agent will store performance data about the last 100 requests in memory. You can look at this data at any time by hitting the /newrelic endpoint of your running application:
I hardly ever use it, but it’s there if you need it.
Notifying New Relic of Deployments
Whenever you’re working on the performance of your application, it’s always good to know if a particular deploy has had a positive or negative effect on performance. For this purpose, you can notify New Relic every time you perform a deploy. This way if performance degrades or improves, you’ll be able to see which deploy was the culprit. New Relic provides Capistrano hooks to do this, but I prefer the command line way:
% newrelic deployments -a 'newrelic_rails1 (Development)' -e 'development' -u 'skorks' -r 'abc123'
Recorded deployment to 'newrelic_rails1 (Development)' (2013-09-22 18:19:13 +1000)
The key thing is to correctly supply the application name as configured in the newrelic.yml file.
We will get nice lines on the relevant New Relic graphs to indicate when a deployment occurred.
Conclusion
You now know a whole lot about how New Relic works and how to start using it to monitor a Rails application. But configuring things properly is only half the battle, what kind of metrics will New Relic actually capture for you? And how can you use them to improve the performance of your application? We will look at some of these in a subsequent article. For now, have a go at configuring New Relic for your Rails application (you’ll get a free T-shirt) and if you have any questions don’t forget to leave a comment.
Once you start digging around New Relic, you begin to realise just how many interesting features the service has to help you monitor the performance and health of your application. It was truly difficult to pick just five things to talk about, so rather than focusing on the obvious features, let’s look at some of the less-hyped functionality that New Relic provides and how we can use it in interesting and sometimes unorthodox ways.
When we left you last time, we had a basic ‘Hello World’ Rails application (called newrelic_rails1, living in ~/project/tmp/newrelic). We will continue using this app: we’ll extend it and see if we can use it to demonstrate the New Relic features we’ll be looking at.
Availability Monitoring
This is one New Relic feature that usually doesn’t make the front page of the marketing material. There is not a lot to it, but if you think about it, what’s more important than making sure your app is actually up, running, and accessible by your users?
Firstly, when you set up availability monitoring, your application gets a nice asterisk on your main applications dashboard:
It’s a nice visual reminder, so you can see which apps still need availability monitoring switched on.
Let’s now look at how we can set up availability monitoring and what we can get out of it. Firstly, you need to jump into your application and then go into Settings->Availability Monitoring. You will see something like this:
You need to provide a URL you want New Relic to ping, tick the box, save your changes, and you’re good to go. New Relic will begin hitting your URL every 30 seconds. But the fun doesn’t stop there. New Relic pings your URL via an HTTP HEAD request (and deems everything OK if it receives a 200 response code), but you can supply a response string that you want New Relic to look for, in which case it will perform a GET request and examine the response for the string you provided. This can be very handy if you have a custom ‘health check’ page that you want to hit.
You can also set up email notification if downtime occurs:
Now that you’re monitoring availability, you will have access to a nice report which will visually show you when any downtime has occurred:
In fact, many of your charts (e.g. the application overview) will have this visual indication:
You have to admit that’s some pretty nice functionality for so little effort.
You can, of course, disable and re-enable monitoring (via the New Relic REST API) when you’re performing deploys, to make sure you don’t get spurious downtime events.
Another interesting side effect of this is that if you’re deploying your pet project to Heroku on a single dyno, you can use this ping functionality to prevent your dyno from sleeping, which can otherwise make your site annoyingly slow if you don’t have a lot of traffic.
Custom Error Recording
If unexpected errors occur in your application, New Relic will record them for you and give you a nice graph. Our little ‘Hello World’ app has performed admirably so far, so there is nothing for us to see on that front. But we can purposely break our app and see what New Relic gives us.
Let’s modify our HelloController to raise an error randomly approximately 50% of the time:
class HelloController < ApplicationController
  def index
    if rand(2) == 0
      raise 'Random error'
    end
  end
end
We will now make a few hundred calls to our app and see what happens:
ab -n 300 -c 10 http://127.0.0.1:3000/
Our New Relic error graph now looks much more interesting:
And we can drill down to get some specifics:
As you can see, we can sort and filter our errors, as well as look at errors from web requests and background tasks separately. This is some incredibly powerful stuff to help you diagnose and fix problems with your application. You can, of course, also see the stack trace for each error:
There are services specifically dedicated to capturing errors from your application; some of the best known are Airbrake and Bugsnag. These are paid services used by many applications, but the functionality that New Relic provides just about makes them redundant. In fact, if we could send custom errors to New Relic (rather than letting it capture errors that we hadn’t rescued), we could make a compelling case for not using a separate error collection service (and save some money and get rid of an extra gem in the process).
While New Relic doesn’t document any way of doing this, we can always go to the source to see how hard it would be. It looks like it should be pretty trivial to send custom errors to New Relic, so let’s give it a try. We’ll modify our controller action again to rescue all errors and send a custom error to New Relic:
class HelloController < ApplicationController
  def index
    if rand(2) == 0
      raise 'Random error'
    end
  rescue
    NewRelic::Agent.notice_error(StandardError.new("I caught and reraised an error"))
  end
end
After we make a few more calls and wait for the data to come through we see the following:
It worked: our custom error is coming through! New Relic can definitely act as our error collection service. We are, of course, using a private interface here, which is not very nice, but we can put the notice_error call behind a facade, which will make things a bit easier for us if the interface changes.
An even better approach might be to not treat custom errors like regular errors at all, but instead create a custom metric to track, and then build a custom dashboard to visualise it. This way we’re not using any undocumented functionality and would still get all the benefits – brilliant!
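To make that concrete, here is a minimal sketch of such a facade. ErrorReporter and the metric name are my own inventions, but notice_error and record_metric are real agent calls:
module ErrorReporter
  def self.report(error)
    # the route used above: report it like a regular traced error
    NewRelic::Agent.notice_error(error)
    # the custom-metric route: bump a counter we can chart on a custom dashboard
    NewRelic::Agent.record_metric('Custom/Errors/Caught', 1)
  end
end
A controller’s rescue block would then just call ErrorReporter.report(e), and a change in the agent’s interface only ever touches this one module.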
Key Transaction Tracking
New Relic will normally track your transactions for you:
You will be able to see where your application is spending most of its time (e.g. in the controller, model, database, etc.). However, New Relic will not capture a detailed trace unless the transaction takes longer than Apdex T * 4 seconds. Normally this is OK, but sometimes you have transactions that are much more important to your application or to your business. Perhaps these transactions are extremely high volume or deal with important events like payments. Suffice to say, you need to make sure this type of transaction always performs extremely well.
The thing is, though, when a transaction is this important, it has probably already received quite a lot of love from you and may be performing fairly well. Let’s say you have a transaction with extremely high throughput (it occurs many times per minute). If this transaction is performing optimally, everything is fine, but if its performance degrades even slightly, the sheer volume of traffic means it may have a disproportionately detrimental effect on your application. What you want is something like:
a separate Apdex T value just for this transaction
the ability to receive alerts when the performance of this transaction degrades
a detailed trace every time this transaction performs even slightly non-optimally
This is exactly what Key Transactions give you!
Before we set up a key transaction for our ‘Hello World’ app, we need to create a more interesting transaction: one that will usually perform well but will sometimes perform somewhat badly. We will build the ability to look at car makes and models, and have one particular car make slow the transaction down. Firstly, the route:
NewRelicRails1::Application.routes.draw do
  get 'random_car', to: 'cars#show_random'
  root 'hello#index'
end
We want to be able to get a random car; this will map to the CarsController:
class CarsController < ApplicationController
  def show_random
    @car = Car.offset(rand(Car.count)).first
    if @car.make == 'Ford'
      sleep(2)
    end
  end
end
We get a random car from the database, and if the car’s make is ‘Ford’, we will have a slow transaction on our hands. Of course, we need a Car model:
class Car < ActiveRecord::Base
end
We’ll need to configure our database to use MySQL in development (I did this, but you can stick with SQLite):
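The original snippet isn’t reproduced here, but a typical config/database.yml development section for the mysql2 adapter looks something like this (database name and credentials are placeholders):
development:
  adapter: mysql2
  encoding: utf8
  database: newrelic_rails1_development
  username: root
  password:
  host: 127.0.0.1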
You’ll also need to add the mysql2 gem to the Gemfile if you’ve gone with MySQL. After this, we just need to create and populate the database, restart our server, and we’re good to go:
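One way to do the populating (the seed data below is entirely made up, and it assumes a cars table with make and model columns; all that matters is that some cars are Fords) is a quick db/seeds.rb followed by the usual rake tasks:
# db/seeds.rb -- hypothetical seed data
makes = ['Ford', 'Toyota', 'Honda', 'Mazda']
100.times do |i|
  Car.create!(make: makes.sample, model: "Model #{i}")
end
Then rake db:create db:migrate db:seed, restart the server, and we’re ready.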
You’ll need to hit the URL to make sure New Relic recognises that this transaction exists:
curl localhost:3000/random_car
We’re now ready to monitor this transaction as a key transaction. Firstly, jump into the transaction tab:
Click the ‘Track a Key Transaction’ button and pick our newly created transaction:
We can give our new key transaction a name, pick the Apdex T that we’re happy with, and set up some alerts. When our transaction takes longer than the Apdex T we’ve chosen, New Relic will capture a detailed trace, which we’ll be able to use to figure out where the performance issue is coming from. Let’s make a few calls against our new URL and see what data we get:
ab -n 300 -c 20 http://127.0.0.1:3000/random_car
Hmm, it seems some of our transactions are frustrating our users:
Let’s see if New Relic has captured some transaction traces for us:
Let’s look at one of these traces. This one took around two seconds to respond, but only 10 milliseconds of that was spent on the CPU:
All our SQL statements were fast, so the database is not the issue:
It looks like most of the time is spent in the controller action:
Let’s dig into the trace a little. It looks like the SQL SELECT was fast, and a Car.find was also fast. Then we lose about two seconds, followed by some very fast template rendering:
New Relic has kindly highlighted where we lost those two seconds. We need to look at our controller code after the Car.find call:
class CarsController < ApplicationController
  def show_random
    @car = Car.offset(rand(Car.count)).first
    if @car.make == 'Ford'
      sleep(2)
    end
  end
end
Hmm, the initial SELECT must be the Car.count call, and the Car.find must be due to the Car.offset call. Our big delay is right after this, though. Ahh, look at this: some silly person has put a two-second delay in our code when the make of the car is ‘Ford’. That would explain why our two-second delay happens only some of the time. I had better do a git blame on our repository to find out who put that horrible code in there! On second thought, I had better not, because it might say that it was me.
External Service Call Recording
Whenever you make calls to other services from within your app (e.g. an HTTP request to an API like Twitter), New Relic will monitor these as external calls. These days, a serious application may integrate with a number of external APIs. Often these external services can significantly degrade the performance of your app, especially if you make the calls in-process. New Relic can show you which of your external calls are slowest, which ones you call the most, and which respond the slowest on average. You can also look at the performance of each external service you use individually. Let’s give it a try.
We’ll create an external service of our very own by building a small Sinatra app. Firstly, we install the gem:
gem install sinatra
Create a new file for our service:
touch external_service.rb
And put the following code in there:
require 'sinatra'

get '/hello' do
  sleep_time = rand(2000)/1000.0
  sleep(sleep_time)
  "Hello External World #{sleep_time}!"
end
This service will sleep for a random time (between 0 and 2000 milliseconds) and then return a ‘Hello’ response that includes the time it slept for. Now all we have to do is start it:
ruby external_service.rb
Back in our Rails app we’ll build a new controller to call our external service. We’ll use this route:
NewRelicRails1::Application.routes.draw do
  ...
  get 'external_call', to: 'external_calls#external_call'
  ...
end
Our controller will call our Sinatra service via HTTP:
require 'net/http'
class ExternalCallsController < ApplicationController
  def external_call
    url = URI.parse('http://localhost:4567/hello')
    external_request = Net::HTTP::Get.new(url.to_s)
    external_response = Net::HTTP.start(url.host, url.port) do |http|
      http.request(external_request)
    end
    @result = external_response.body
  end
end
And we need a view to display the results:
<h1><%= @result %></h1>
All we have to do now is make a few calls to our new endpoint:
ab -n 100 -c 10 http://127.0.0.1:3000/external_call
Let’s see what New Relic has produced for us.
New Relic has indeed picked up our new external call. We’ve got the total calls per minute we’re making to the external endpoint, and the total time the external service spent responding. Of course, our chart looks a little sparse, since we only have one external service, which means we don’t have anything to compare against.
We can also get more detailed data about the specific external call as well as where in our app this call is being made from:
We can see when the calls were made, the throughput, and the average response time. This may seem simple, but when you have an app with a lot of external services, this feature can give you a very nice overview of how those services are performing, as well as when and where they are being used. This can help you decide whether to cache certain external service responses where possible, or even to drop particular external services if their performance is not up to scratch. And you no longer have to argue these things based on gut feel and home-baked metrics; you’ll have hard data to prove your point for you.
Scalability and Capacity Analysis
There is nothing more frustrating for a developer than having an application fall over due to a traffic spike. Everything was running smoothly until those extra few hundred users came along, and then your application exploded. You had a feeling this might happen, but couldn’t be sure; the ‘wait and see’ attitude seemed the most pragmatic approach. Well, with New Relic’s capacity and scalability reports, you no longer have to wait and see. You can tell straight away how well your app is scaling: you can run load tests and instantly see whether your application can handle the load. You can observe your application’s response-time trends as your user base grows and predict when you’ll need to add capacity. All of these are truly wonderful things.
First, let’s look at the capacity reports:
Hmm, this one shows a big spike but otherwise nothing. Well, we’re running in development mode, so this is understandable. That spike is from when we made a bunch of concurrent requests a little while ago. As you can see, when we did those concurrent requests, we maxed out our poor lonely Webrick instance. If this were production and that load were constant, our instance would always be 100% busy, which would probably indicate that we need another instance.
The instance analysis report is slightly different:
In our case we don’t get much out of it, but it normally shows the number of instances that are running, and the number of instances we would actually need to handle the load if all instances were 100% busy. So if we were running 10 instances and the concurrent instance load was 2, we could easily halve (or even more than halve) the number of running instances without degrading performance at all. For a small app that runs only a few instances this is no big deal, but for a large application with dozens or hundreds of instances, this can translate into significant cost savings.
And then there are the scalability reports. The response time report is probably the most interesting/important one:
Once again, our graph is very distorted, because it’s a development app that we’ve been playing around with randomly. The idea with this report is that as the throughput of your application increases (more requests per minute), the response time should remain close to constant (i.e. performance does not degrade when there is more traffic). This means you should always see something resembling a flat line here. If your line slopes upwards significantly, your app is probably struggling to handle the traffic, and you may need to look at adding more capacity. Where to add capacity is another question entirely (e.g. database capacity, more servers, etc.). The other two scalability reports can help you answer it. There is the database report:
You can’t expect your database to be unaffected by higher load, so what you should see here is a line that rises slowly as the throughput of your application increases. It is up to you to decide when the database response time becomes unacceptable (i.e. affects the application’s response time too much), but when you do decide that the database responses are too slow, you know it is time to add database capacity. The other report is the CPU:
Once again, you can’t really expect higher throughput not to affect your CPU load; you should see a line that rises slowly with increased throughput. This, together with the capacity reports we talked about earlier, can help you decide when to add more Rails processes/servers to ensure your performance remains decent.
Conclusion
If one or all of these features have raised an eyebrow (or two) for you, the good news is that we’ve only just scratched the surface. Each of them more than deserves an in-depth article of its own. But New Relic also has a number of other features that are potentially even more powerful, including Real User Monitoring, the New Relic Platform, the Thread Profiler, alert thresholds and notifications, and many others. We will try to cover some or maybe even all of these in later tutorials.
For now, try New Relic out: deploy an agent in your favourite language and see if you can discover an outside-of-the-box way of using some of the functionality that New Relic provides. And if you do find some innovative ways to use New Relic, be sure to let everyone know by leaving a comment.
Setting up a new machine can often be an exciting prospect. However, as developers, there are a lot of tools we need that don’t come as standard.
In this post, I’d like to go through some of the techniques I use to help set up my machine quickly, efficiently and with added super powers.
Introduction
After reading this article, you should be able to do the following:
Quickly set up a new machine
Enhance SSH’ing into a Linux box
Easily absorb smart configs from other developers on GitHub
Optionally share your setup with other developers and participate
See how many professional developers maintain their configurations
Before we begin, you’ll need some understanding of Git and using the command line. If you’re not sure what these are, I’d recommend looking over the following first:
What if you could style the Terminal, speed up Mission Control, run g instead of git, have tab autocomplete regardless of filename case, and check for software updates daily rather than just once per week? What if you could automate setting up all of these features with a single script? Sound good? Then this post is for you.
In many respects, setting up a new machine comes down to personal preference. I’m always refactoring and reevaluating, and I’d advise you to do the same. Find out what works best for you, and share your knowledge.
TL;DR: Invest time learning to configure your machine and automate processes; you’ll get that time back tenfold.
Dotfiles, so called because each filename begins with a ., are found in the user’s home directory. These files are created as you install and configure your machine. I think of each dotfile as a superhero, each containing its own super powers. I’m going to go over each superhero dotfile and the powers that lie within. But first…
There’s a lot to be said for the awesomeness of dotfiles: they set up configurations automatically and speed up processes. It may be tempting to clone a repository and run its dotfiles straight away, but I’d advise against this, as the outcome may be undesirable.
Baby Steps
First of all, I’d recommend cloning some existing dotfiles repositories. Doing so will allow you to start to understand the file structure and get an overview of the code. The following are GitHub repos from some top developers who have shared their dotfiles:
It may seem daunting at first glance, but don’t panic; I’ll be going over each dotfile that I use when I set up a new machine. After reading this post, when you’ve got a better understanding of each file and what it can do, I’d recommend creating your own repository and taking advantage of existing dotfiles to build it up. You can then add the files and code that best suit your requirements.
As people generally name their dotfiles repo dotfiles, I set the folder structure up like so:
Here, I’m setting up a main folder called dotfiles, then a folder with the username, and then the repo. The reason I recommend setting it up like this is to avoid confusion. Some of the code is fairly similar, so I find it useful to be able to see easily whose code I’m looking at. For example, if I had four or more repos all named ‘dotfiles’, this would be much more difficult.
Want to know how I output the folder structure like that? I used this awesome thing called tree, installed via the .brew file.
Let’s break each file down and look at what’s going on.
Superhero Dotfiles and Their Super Powers
Dotfiles are split into two main types. Some contain a set of commands and run only once; .osx, for example, runs a list of commands and gives OS X super powers. Other files, such as .bash_profile and .bashrc, run each time you open a new Terminal session and give your Terminal super powers.
Here’s a run down of the dotfiles in my repo and a description of what they can do.
.brew
It’s best to run this file first. Once it has checked that Homebrew is up to date, it is used to install useful tools such as tree.
brew install tree
Instead of having to go to a site and download an app, it’s also possible to automate the installation of some apps using brew-cask.
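For example, a couple of casks (the choice of apps here is purely illustrative, not the original list):
brew cask install google-chrome
brew cask install dropbox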
bootstrap.sh
This file is like turning the key in a car to start the engine.
When run, it will sync the local repo with the one on GitHub, then copy those files to your home folder, overwriting any existing files.
Therefore, before running bootstrap.sh it’s a good idea to backup your existing dotfiles and save them somewhere else. A handy shortcut to get to your dotfiles in the Finder is:
Finder > Cmd + Shift + g > ~
I use an app called TotalFinder, which adds some nice features to the Finder. I find tabbed windows and a shortcut to show and hide hidden files, for example, very useful.
In bootstrap.sh you’ll notice source ~/.bash_profile. This means that if you run bootstrap.sh while you have any Terminal windows open, your new settings will be applied without the need for a restart.
.bash_profile / .bashrc
When you open a new Terminal session, this file is loaded by Bash. It loads in the other dotfiles (.path, .bash_prompt, .exports, .aliases, .functions, .extra) and configures some useful settings, such as auto-correcting typos when using cd completion.
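The loading itself is just a small loop; a typical version, along the lines of the repos mentioned earlier, looks like this:
for file in ~/.{path,bash_prompt,exports,aliases,functions,extra}; do
  # only source files that exist and are readable
  [ -r "$file" ] && source "$file"
done
unset file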
In some instances .bashrc can be loaded, so this file makes sure that .bash_profile is called.
I like my Terminal clean and clutter-free, so with this file I opt not to display the username/computer name at the top by default.
.path
This file speeds up the process of running executable files. Rather than having to cd back and forth across various paths to your executables, you can add those paths to your .path dotfile and then run the executables directly.
Generally, this file isn’t held in the public repo as it can contain sensitive information.
Here’s an example ~/.path file that adds ~/utils to the $PATH:
export PATH="$HOME/utils:$PATH"
.bash_prompt
Using this file, you can customise the various colors of your Bash prompt.
.exports
Sets environment variables, such as making Vim the default editor using export EDITOR="vim". It also increases the amount of history saved, which is useful for backtracking over previous commands you’ve used.
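For example, the two settings just mentioned can be as simple as:
export EDITOR="vim"
# remember far more history than the default
export HISTSIZE=32768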
.aliases
This file contains useful aliases to help you type less. For example, instead of typing ‘cd ..’ you can set it here to be just ‘..’. Starting to like these files yet? :)
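A couple of one-liners in that spirit:
alias ..="cd .."
alias g="git"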
.functions
Similar to aliases, except functions can take arguments.
Earlier, when I mentioned I was looking over different dotfiles repos, I used mkdir to create a directory. After that, I’d then need to cd into that directory.
One example of a function that I find useful is:
# Create a new directory and enter it
function mkd() {
  mkdir -p "$@" && cd "$@"
}
Now you can simply run mkd, and not only have you made the directory, you’re in it as well.
.extra
This file is used for adding your personal information, and it isn’t added to your repository, in order to make sure someone doesn’t accidentally fork your project and then start committing using your details. Something nice to add in here would be your Git credentials.
.gitconfig
This file is used only by Git, for example, when a git command is invoked. The aliases in .aliases run directly in your shell, whereas the aliases here apply to Git subcommands.
In .aliases I have g set to git, and in .gitconfig, s set to status -s.
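The .gitconfig half of that pairing looks like:
[alias]
    s = status -s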
Now instead of running:
git status -s
I can simply run:
g s
.gitignore
Set files that you’d like Git to ignore across the entire system. Yay, no more .DS_Store being accidentally committed!
.gvimrc
A small file that improves readability for gvim.
.hgignore
Similar to .gitignore, but for Mercurial.
.hushlogin
In some instances, for example, when you ssh into a machine, you may be presented with a welcome message. If you’d rather not see it, an empty .hushlogin file in your home directory silences it.
.inputrc
Configures the Readline environment. This controls the way keys work when you’re entering a command into your shell.
An example of how I find this useful is to make tab autocomplete regardless of filename case:
set completion-ignore-case on
.osx
This is my favorite of all the dotfiles. It is run once, manually, and the commands take effect immediately. Depending on what you’ve added to this file, you may need to restart your machine.
Some of the awesome things I love are:
Disable the “Are you sure you want to open this application?” dialog
Check for software updates daily, not just once per week
Disable Notification Center and remove the menu bar icon
Enable access for assistive devices
Set a blazingly fast keyboard repeat rate
Finder: allow quitting via ⌘ + Q; doing so will also hide desktop icons
When performing a search, search the current folder by default
Speed up Mission Control animations
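Under the hood, these are just defaults write commands. Two of the settings above, roughly as they commonly appear in popular dotfiles repos:
# Check for software updates daily, not just once per week
defaults write com.apple.SoftwareUpdate ScheduleFrequency -int 1
# Set a blazingly fast keyboard repeat rate
defaults write NSGlobalDomain KeyRepeat -int 0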
.screenrc
If you use screen, this removes the startup message.
.vimrc
I’m not that familiar with vim. However some of the things you can do with this file include enabling line numbers and adding syntax highlighting.
Sounds like a good idea to me :)
.wgetrc
If you use wget, this adds additional settings, such as changing the timeout to 60 seconds rather than the default 15 minutes. It also sets the number of retries to three, rather than the default 20!
Dotfiles Are Go!
At this point, I’ve gone over all the files and I’m at a stage where I’m happy with everything in my repo. Anything I wasn’t sure about has been commented out.
Now the exciting part! As it stands, we have the dotfiles in a repo, but we need to put them in the correct place so they can be found and used.
Think of it like this: we have Thor’s Hammer, Batman’s Utility Belt, Captain America’s Shield, and Iron Man’s Suit. All of our heroes know how to use these, but without them they’re lost! We need to give our superheroes their weapons so they can use them.
To do this (with my existing dotfiles backed up and my repo all up to date), open your Terminal, cd to the repo, and run:
source bootstrap.sh
Next, cd to ~ and run:
source .osx
Quick restart and… Awesome, super powers are now available!!!
Additional Super Powers
Rupa Z
Do you spend lots of time doing things like this?
cd this/is/the/path/that/i/want/so/i/type/it/all/out/to/get/whereiwant
Rupa’s z solves this: it tracks your most-used (‘frecent’) directories so you can jump to one with a short command such as z whereiwant. To add it, I made the following change in .bash_profile:
# init z https://github.com/rupa/z
. ~/z/z.sh
And also in install-deps.sh:
cd
git clone https://github.com/rupa/z.git
chmod +x ~/z/z.sh
Reverting Things
When you run your dotfiles for the first time, you may find that you don’t like something a piece of code has done. For example, in the .osx file, I wasn’t too keen on what one of the commands did.
With most commands, it’s quite easy to revert a command by simply changing true to false, or vice versa. With others, it’s possible to set things back to the default using defaults delete, for example, defaults delete NSGlobalDomain AppleHighlightColor. In some instances you may also need to restart the machine.
Custom .osx Commands
Now this is for the more advanced dotfile master. As you gain more knowledge and confidence using dotfiles, you may want to include your own code.
If you find yourself manually changing settings on a new machine, those changes are best automated.
Adding your own .osx commands can get a bit tricky!
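The trick I use is to diff the whole defaults database before and after changing a setting by hand:
defaults read > a
# change the setting manually in System Preferences, then:
defaults read > b
diff a b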
This creates files a and b and then displays the difference between them. With this knowledge, you can open file b in Sublime Text 2, search for the bit that changed, and try to work out the command to set it. If you try out this method, good luck!
Conclusion
So, there you have it! Go forth, have fun with dotfiles, look forward to giving your machine super powers, and the next time you need to set up a machine from scratch, you can smile to yourself as the whole process is automated.
Thanks very much for stopping by, please comment below if you have any questions or suggestions.
I’m especially interested to see your own dotfiles repos and any new additions you make, so feel free to add a link to your dotfiles repo in the comments below.
Mobile web development is tough, especially when you're trying to offer native-like experiences to users. Several years ago, a small company called Nitobi took on the effort of simplifying the building of native mobile apps using traditional web development skills. Ambitious and sometimes controversial, the effort known as PhoneGap grew out of this need and won converts left and right.
One of the main masterminds behind the framework is Brian Leroux who, apart from being well-respected for his development skills and incredibly likeable personality, is also one of the savviest mobile developers around. Considering the number of mobile devices PhoneGap targets, you have to be pretty well-versed in a variety of devices and OSs.
Nitobi has since been acquired by Adobe and the PhoneGap codebase donated to the Apache Software Foundation to continue its development as the Apache Cordova project. Brian moved over to Adobe and continues to steward the codebase. In this interview, we'll chat with Brian about how PhoneGap came about and what the future of mobile web holds.
Q Let's start with the usual. Could you give us a quick intro about yourself?
Hello, I'm Brian. I work on Apache Cordova, PhoneGap, and a new CSS library called Topcoat at Adobe. In my spare time I created a code joke site called http://wtfjs.com which kind of follows me around.
Q You were one of the creators of PhoneGap. How did Nitobi decide to build such an ambitious framework?
I've definitely been one of the stewards of PhoneGap, but it is very important for me to say that MANY people have contributed to its creation and growth. No one person really decided to do anything; it was a lot of forces coming together at once. PhoneGap was an outcome of the primordial soup that was the new GitHub model for open source, nascent mobile web browsers, and a new generation of smartphones. We started hacking, and did the whole thing in the open, and eventually more people subscribed to the project philosophy and utility. It grew from there.
Q Now that Adobe has acquired Nitobi, what's the future of PhoneGap?
Adobe acquired Nitobi in 2011! It is a little hard to believe that was close to two years ago already. Since our acquisition we donated the source of PhoneGap to Apache, where it is now known as Cordova. We are constantly improving the project: adding features, polish, performance improvements and new tooling, and recently we shipped a vastly improved plugin architecture. PhoneGap has become as much about tooling and extension as it is a fancy embedded web browser for building apps.
We're also working closely with a new team at Adobe on a CSS library called Topcoat that is designed for building fast and clean apps. Of course, everything in Adobe Edge is growing mobile consciousness as a part of our focus on web technologies. Brackets is great for authoring web centric code. Reflow and Inspect are great new tools helping tame responsive design. We'll see more and deeper integrations between these tools and PhoneGap in the future.
Q There's a lot of confusion about PhoneGap and Cordova. Can you clear things up?
Adobe PhoneGap is a downstream distribution of Apache Cordova. It is the same as the relationship of Safari to WebKit. When Adobe acquired Nitobi the original source of PhoneGap was donated to Apache to continue its open development, and encourage contribution from the wider developer community. It has been really great, and the community has grown exponentially since joining Apache. It was a great move for the project and has really matured the development.
Author Note: Brian goes into more detail about this in this blog post.
Q There have been a number of other similar projects but PhoneGap's received the bulk of the attention. What can you attribute to its popularity?
I think our popularity is owed, in part, to very clearly defined principles and goals. We want the web to be a first class platform and we often state a purpose of the project is to cease to exist. It's a powerful acknowledgement of our intention to get back to web development. This resonates with the web community.
PhoneGap is also just a really good name that communicates the project succinctly. We got lucky there. I'm not sure if it was Brock Whitten or Andre Charland who coined it. Rob Ellis was there, but I doubt he'd remember either. I hated it at first, but after five years of working on the thing I'm sort of used to it!
The adoption of PhoneGap was probably a little bit of dumb luck too. I'd like to think we made some of that luck with a regular release cadence and a strong testing philosophy. We rarely have regressions, and we ship quality releases continuously. That healthy activity has helped to build the confidence of our developer community, and the businesses and organizations that are using PhoneGap today.
Q In terms of mobile, how easy or challenging has it been to use web technologies like HTML & JavaScript to create a platform that builds native apps for mobile devices?
Well, on one hand it is super easy to get started building a web app. On the other hand, web apps can grow complex quickly, and the devices we're talking about don't have a whole lot of horsepower to begin with. Software development is a balancing act. We're balancing all sorts of forces. Skill and code reuse. Adding more features or working on performance.
Q Did you find mobile OS vendors receptive to PhoneGap? What challenges did you have to overcome?
Most mobile operating system vendors are contributing directly to Cordova!
We have friends from Google bringing Chrome Packaged Apps to the fray. Mozilla is ramping up Firefox OS with us. Canonical has hackers working on Ubuntu Phone. Blackberry has a bunch of devs bringing us the Blackberry WebWorks perspective. Intel and Samsung are representing Tizen.
Early in the PhoneGap project, before all this glamorous Apache business <g>, we were temporarily blocked from the App Store by Apple. It was the best thing ever: it brought a tonne of attention to the project. After much kerfuffle, Apple reviewed our code to our mutual satisfaction that we were not in violation of any of the App Store policies, and PhoneGap apps have been shipping there ever since.
Q What are the practical use cases for using PhoneGap and at which point should developers consider going totally native?
Well, if you have an existing investment in web content or web developers, then PhoneGap is worth looking at. If you are looking for portability, then web technologies are obviously useful; this can even mean portability within a single platform, automatically targeting handset and tablet form factors with a single codebase.
I used to say that web technology isn't really the best choice for games, but this depends on the type of game. Mobile games in particular tend to be puzzle, card, or two-dimensional sorting things that do not require immersive graphics. Web technology is surprisingly good for these types of games. Until we get better support for WebGL, I think going native is still compelling. The W3C and browser vendors are very aware of this shortcoming, and it is only a matter of time before the Game Controller, Orientation Lock, Fullscreen API, and the Audio API are fully realized. The console will move into the browser, and monetization will move towards a service model as a result.
Q When should developers look at native versus going pure browser-based for mobile apps?
Well, if you have the time to invest in a particular platform (sometimes proprietary too) then going native is a fine, if costly, route to invest in.
It is as easy to write a crummy native app as it is with web tech, but it is easier to debug these things using the native platform tooling. That tooling integration makes most development environments very comfortable, with great documentation, and distribution is kind of built in. (You get these advantages with PhoneGap too, as we do not hide these details.) I encourage developers to always be learning as much as you can, and native mobile development is super fun stuff to learn.
That said, I'm not as convinced about the business benefits of going native. You inherit a reliance on (often) proprietary tooling and distribution channels which is inherently risky. When the vendor makes a change so do you. If they chose to shut down, deprecate, or otherwise abandon infrastructure you rely on for revenue you will have no say or recourse. I personally would not build a business in that way, but I can also respect that some do, and either way you can use PhoneGap to mitigate that risk.
Q Is browser-based mobile web ready for primetime? If not, what's missing?
Offline is still messy; we have App Cache, but it is really complex and creates a janky user experience. When a new version is available, you have to prompt the user to reload. But I have high hopes for the Navigation Controller effort to fix it.
Push notifications are another thing the web needs to get right. Notifications are crucial for user engagement. Those standards and that support are starting to emerge in desktop web browsers, but we need those capabilities to rise up into the mobile web browsers.
The security model for packaged apps is in need of refinement. But that is happening, and as a result we will win more and better device APIs. Firefox OS and Chrome OS are going to point the way. We're going to do everything we can to help by providing a quick prototyping surface for browsers.
Developer tooling experience needs love. It is getting pretty good, and there is real, healthy competition between Firefox, Chrome, Opera and, to a lesser extent, IE and Safari. Performance instrumentation for monitoring, and especially post-deployment crash reporting, would be particularly nice.
Thank you Brian
I want to thank Brian for taking the time to provide us with the history of PhoneGap and his insights into mobile web. If you're interested in building mobile applications using your web development skills, be sure to check out Apache Cordova and Adobe PhoneGap.
Sails is a JavaScript framework designed to resemble the MVC architecture of frameworks like Ruby on Rails. It makes the process of building Node.js apps easier, especially APIs, single-page apps, and realtime features like chat.
Installation
Installing Sails is quite simple. The prerequisites are Node.js and npm, which comes with Node. Then issue the following command in the terminal:
sudo npm install sails -g
Create a New Project
In order to create a new Sails project, the following command is used:
sails new myNewProject
Sails will generate a new folder named myNewProject and add all the files necessary for a basic application. To see what was generated, go into the myNewProject folder and start the Sails server by issuing the following command in the terminal:
sails lift
Sails’s default port is 1337, so if you visit http://localhost:1337 you should get the Sails default index.html page.
Now, let's have a look at what Sails generated for us. In our myNewProject folder the following files and sub-folders were created:
The assets Folder
The assets folder contains subdirectories for the JavaScript and CSS files that should be loaded at runtime. This is the best place to store auxiliary libraries used by your application.
The public Folder
Contains the files that are publicly available, such as pictures your site uses, the favicon, etc.
The config Folder
This is one of the important folders. Sails is designed to be flexible. It assumes some standard conventions, but it also allows the developer to change the way Sails configures the created app to fit the project’s needs. The following is a list of configuration files present in the config folder:
adapters.js – used to configure the database adapters
application.js – general settings for the application
assets.js – asset settings for CSS and JS
bootstrap.js – code that will be run before the app launches
locales – folder containing translations
policies.js – user rights management configuration
routes.js – the routes for the system
views.js – view-related settings
The Sails.js documentation contains detailed information on each of these.
The views Folder
The application's views are stored in this folder. Looking at its contents, we notice that the views are generated by default as EJS (embedded JavaScript). The views folder also contains views for error handling (404 and 500), the layout file (layout.ejs), and the views for the home controller, which were generated by Sails.
The api Folder
This folder is made up of a bunch of sub-folders:
the adapters folder contains the adapters used by the application to handle database connections
the controllers folder contains the application's controllers
the models folder stores the application's models
the policies folder stores rules for application user access
the services folder stores the API services implemented by the app
Configure the Application
So far we have created our application and taken a look at what was generated by default. Now it's time to configure the application to make it fit our needs.
General Settings
General settings are stored in the config/application.js file. The configurable options for the application are:
application name (appName)
the port on which the app will listen (port)
the application environment; can be either development or production (environment)
the log level for the logger, which can be used to control the size of the log file (log)
Note that setting the app environment to production makes Sails bundle and minify the CSS and JS, which can make debugging harder.
Routes
Application routes are defined in the config/routes.js file. As you'd expect, this file will be the one you work with most often, as you add new controllers to the application.
The routes are exported as follows, in the configuration file:
module.exports.routes = {
  // route to index page of the home controller
  '/': {
    controller: 'home'
  },
  // route to the auth controller, login action
  '/login': {
    controller: 'auth',
    action: 'login'
  },
  // route to the blog controller, add_post action, to add a post to a blog
  // note that we also use the HTTP method/verb before the path
  'post /blog/add': {
    controller: 'blog',
    action: 'add_post'
  },
  // route to get a blog post. The find action will return
  // the database row containing the desired information
  '/blog/:item': {
    controller: 'blog',
    action: 'find'
  }
}
Views
Regarding views, the configurable options are the template engine to be used and whether or not a layout should be used for views.
Models
Models are a representation of the application data stored in a database. Models are defined by using attributes and associations. For instance, the definition of a Person model might look like this:
// Person.js
var Person = {
  name: 'STRING',
  age: 'INTEGER',
  birthDate: 'DATE',
  phoneNumber: 'STRING',
  emailAddress: 'STRING'
};
module.exports = Person;
The communication with the underlying database is done through adapters. Adapters are defined in api/adapters and are configured in the adapters.js file. At the time of writing this article, Sails comes with three adapters: memory, disk, and mysql, but you can write your own adapter (see the documentation for details).
Once you have a model defined, you can operate on it by creating, finding, updating, and destroying records.
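As a quick sketch (the attribute values here are made up), the callback style used throughout Sails at the time looks like this:
// create a record
Person.create({ name: 'John', age: 30 }).done(function(err, person) {
  if (err) return console.log(err);
  console.log('Created person ' + person.id);
});
// find a record by id
Person.find(1).done(function(err, person) {
  console.log(person.name);
});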
Controllers
Controllers are placed in api/controllers. A controller is created using the following command:
sails generate controller comment
This command will generate a CommentController object. Actions are defined inside this object. Actions can also be generated when you issue the generate controller command:
sails generate controller comment create destroy tag like
This will create a Comment controller with actions for create, destroy, tag and like.
Actions receive the request and response objects as parameters, which can be used to read parameters from the URI (via the request object) or to send output to the view (using the response object).
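For instance, a minimal action (the names here are illustrative) might read a parameter from the request and hand a value to the view:
// api/controllers/CommentController.js
module.exports = {
  show: function(req, res) {
    var id = req.param('id');    // read a parameter from the URI
    res.view({ commentId: id }); // expose it to the view
  }
};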
To communicate with the model, a callback is passed to the appropriate model method. For instance, when querying a database with find, the following pattern is used to manipulate the model:
Blog.find(id).done(function(err, blog) {
  // blog is the database record with the specified id
  console.log(blog.content);
});
Views
Views are used to handle the UI of the application. By default, views are handled using EJS, but any other templating library can be used. How to configure views was discussed previously in the Configuration chapter.
Views are defined in the /views directory, and the templates are defined in the /assets/templates folder.
There are mainly four types of views:
server-side views
view partials
layout views
client-side views
Server-Side Views
Their job is to display data when a view is requested by the client. Usually the res.view method responds to the client with the appropriate view. But even if no controller or action exists for a request, Sails will serve the view in the following fashion: /views/:controller/:action.ejs.
The Layout View
The layout can be found in /views/layout.ejs. It is used to load the application assets, such as stylesheets and JavaScript libraries.
Have a look at the specified file:
<!DOCTYPE html>
<html>
  <head>
    <title><%- title %></title>
    <!-- Viewport mobile tag for sensible mobile support -->
    <meta name="viewport" content="width=device-width, initial-scale=1, maximum-scale=1">
    <!-- JavaScript and stylesheets from your public folder are included here -->
    <%- assets.css() %>
    <%- assets.js() %>
  </head>
  <body>
    <%- body %>
    <!-- Templates from your view path are included here -->
    <%- assets.templateLibrary() %>
  </body>
</html>
The lines assets.css() and assets.js() load the CSS and JS assets of our application, and assets.templateLibrary() loads the client templates.
Client-Side Templates
These are defined in the /assets/templates and are loaded as we saw above.
Routes
We discussed how to configure routes in the Configuration chapter.
There are several conventions that Sails follows when routes are handled:
if the URL is not specified in config/routes.js, the default route for a URL is /:controller/:action/:id, with the obvious meanings for controller and action, and with id being the request parameter derived from the URL.
if :action is not specified, Sails will redirect to the appropriate action. Out of the box, the same RESTful route conventions are used as in Backbone.
if the requested controller/action does not exist, Sails will behave as follows:
if a view exists, Sails will render that view
if a view does not exist, but a model exists, Sails will return the JSON form of that model
if none of the above exist, Sails will respond with a 404
Conclusion
Now, I’ve barely scratched the surface of what Sails can do, but stay tuned, as I will follow this up with an in-depth presentation showing you how to build an application using Sails.
Also keep in mind that Sails is currently under development and constantly changing. So make sure to check out the documentation to see what's new.
Browser testing is the bane of our existence. Well, that's a bit of an exaggeration, but not by much. Multiple browser versions and browser fragmentation can make it difficult to get good test coverage for your sites, especially when you factor in the different operating systems developers use to build with.
Over the years, we've relied on a variety of tools to help us with this challenge including virtual machines, tools that simulate browsers and even having multiple devices on hand to work with. It'd be great if there were a way to have one viewport that allowed us to easily test across any major browser and their individual versions without jumping through hoops.
BrowserStack.com aims to offer this via its browser-based virtualization service, and in this article we'll cover the service and how it helps tackle the cross-browser testing problem.
Browsers Inside Your Browser
I mentioned that BrowserStack offers a virtualization service. What most developers think of when they hear that is "virtual machines", and not in a fond way. Virtual machines, while certainly useful, require valuable disk space and resources, and most developers loathe having to run them because of that. BrowserStack takes a different approach by leveraging Adobe Flash to provide a virtualized browser within your own browser. You're not required to install anything, and you get access to real browsers running in the cloud.
To give you an example, using the service I pulled up the Nettuts+ main page via Safari 5.1 on OSX Lion while using Internet Explorer 11.
That's pretty powerful functionality, and the key thing is that it's all done within your browser. And of course, you're not limited in OS choice or browser versions. BrowserStack offers virtualization for:
Windows XP, 7 and 8
OSX Snow Leopard, Lion and Mountain Lion
iOS
Android
Opera Mobile
That's right, they offer mobile browser virtualization. We're in a mobile world, so I'd expect nothing less.
Depending on the operating system you choose, BrowserStack offers up a number of supported browsers for the specific OS including betas and nightlies in some cases.
Yes, even the dreaded IE6 is available. It can't die soon enough.
Apart from the OS and browser options, you can also choose the screen resolution you'd like to test with, which is especially useful for checking out your responsive layouts. Note that BrowserStack also has a complementary service for tackling responsive designs, which generates screenshots for different devices and sizes.
The main point is that there's extensive test coverage here without the need to install anything to use it.
How Does It Work?
The first thing you need to do is register for the service. BrowserStack is a for-pay service, and I think the pricing is very reasonable for the functionality you're getting; and yes, there are a whole lot more features.
Once you've registered and signed in, you'll be at the dashboard which offers a quick start dialog.
This allows you to easily enter the URL you'd like to test and, via the dropdowns, the target OS and browser version. You can fine-tune things via the left panel, which offers screen resolution choices and page rendering speed simulation.
Clicking start kicks off the process of establishing the connection via Flash to the remote server and rendering the virtualized browser:
What I'd like to emphasize here is that this is not a screenshot grabber or some fake session. You have full access to the web page's functionality including menus, buttons, and so on. This also includes the developer tools that come with browsers. Yes, you read correctly. You have access to tools like Firefox Web Developer Tools, the IE F12 Tools and the Chrome Developer Tools. In this screenshot, I'm in a session running Firefox on Mountain Lion and using the Firefox Web Developer Tools.
So not only can you see how your pages will render across browsers but you can also use the existing tools to debug common issues. Very cool!
Going Local
It's definitely awesome to be able to check out your pages once they're publicly available but in most cases, you're going to be developing locally and will want to checkout your pages way before pushing your code to production.
BrowserStack has addressed this by providing a tunneling capability that allows you to test your local pages remotely. It uses a Java applet to act as a proxy between your directory or web server and the cloud-based service. Yes, this means you'll need to install Java, and while I tend not to recommend installing the Java browser plugins, in this case it's a necessity and worthwhile. BrowserStack isn't installing a plugin of its own, though; it's serving an applet that leverages Java's applet browser plugin. Just be sure to disable the browser plugins after you're done testing. A thing to note is that during my testing on Windows 8.1, I needed to use the 32-bit version of the Java JRE, as the 64-bit version didn't seem to work, nor would it install the browser plugins into Firefox or Chrome. To get the 32-bit version, go to Oracle's manual download page. Also be aware that Firefox will not enable the plugin by default, so you'll need to go in and activate it.
Looking at the left panel on the BrowserStack dashboard, you should see a section titled "Local Testing" with two buttons labeled "Web tunnel" and "Command line".
The "Web Tunnel" option leverages the Java applet to establish the tunnel between your computer and the remote service. This could be done at the file system level where you would select a specific directory with your pages or a local server URL (example: localhost). To illustrate this, I’ve installed WAMP on my PC to have a local webserver to use with BrowserStack. WAMP by default also installs phpMyAdmin which is accessible via:
http://localhost:81/phpmyadmin/
I'm using port 81 so as not to conflict with another process I'm running. Clicking the "Web tunnel" option opens the following dialog, letting you know that the applet is loading:
Because Oracle has worked to secure Java, especially the browser plugins, you should be prompted to run the applet. My advice is to never allow any unsigned applet from a website to run on your PC, so I always set my Java security setting to "High". There's also an option called "Very High", but using that will prevent the BrowserStack applet from connecting remotely.
Once the applet is running, you'll be presented with a dialog asking for your local server address or folder.
As you can see, I entered my local URL and it detected the port number. You can also use SSL if you need to. From there, I kick off the connection and I'm able to see my local copy of phpMyAdmin on the BrowserStack remote server.
Now, if you don't want to use the Java applet in the browser, or for some reason it doesn't work, you can use the "Command line" option, which requires you to download a .jar file that is called via the command line to establish the connection:
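The exact command is shown on the dashboard, but it takes roughly this shape (treat the trailing host,port,ssl-flag tuple as an assumption based on the docs of the time; mine pointed at WAMP on port 81):
java -jar BrowserStackTunnel.jar <key> localhost,81,0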
The <key> would be a BrowserStack access key that you'd have to enter. Once the connection is established, you then return to the dashboard to begin testing.
Personally, I prefer the applet approach since it's dead simple. You can get a ton more details on BrowserStack's local testing on this page.
A Whole Lot More
I think you'd agree that, from a browser testing perspective, this is a very cool service that makes cross-browser testing substantially easier, even locally. And it's certainly a viable alternative to virtual machines for those running short on system resources.
The Money Pattern, defined by Martin Fowler and published in Patterns of Enterprise Application Architecture, is a great way to represent value-unit pairs. It is called the Money Pattern because it emerged in a financial context, and we will illustrate its use mainly in that context, using PHP.
A PayPal Like Account
I have no idea how PayPal is actually implemented, but I think it is a good idea to take its functionality as an example. Let me show you what I mean: my PayPal account has two currencies, US Dollars and Euros. It keeps the two values separated, but I can receive money in either currency, I can see my total amount in either of the two currencies, and I can withdraw in either of the two. For the sake of this example, imagine that when we withdraw in one of the currencies, automatic conversion is done if the balance of that specific currency is less than what we want to transfer but there is still enough money in the other currency. Also, we will limit the example to only two currencies.
Getting an Account
If I were to create and use an Account object, I would like to initialize it with an account number.
function testItCanCreateANewAccount() {
    $this->assertInstanceOf("Account", new Account(123));
}
This will obviously fail because we have no Account class, yet.
class Account {
}
Well, writing that in a new "Account.php" file and requiring it in the test, made it pass. However, this is all being done just to make ourselves comfortable with the idea. Next, I am thinking of getting the account’s id.
function testItCanCreateANewAccountWithId() {
    $this->assertEquals(123, (new Account(123))->getId());
}
I actually changed the previous test into this one. There is no reason to keep the first one: it lived its life, meaning it forced me to think about the Account class and actually create it. We can now move on.
class Account {
    private $id;

    function __construct($id) {
        $this->id = $id;
    }

    public function getId() {
        return $this->id;
    }
}
The test is passing and Account is starting to look like a real class.
Currencies
Based on our PayPal analogy, we may want to define a primary and a secondary currency for our account.
private $account;

protected function setUp() {
    $this->account = new Account(123);
}

[...]

function testItCanHavePrimaryAndSecondaryCurrencies() {
    $this->account->setPrimaryCurrency('EUR');
    $this->account->setSecondaryCurrency('USD');
    $this->assertEquals(array('primary' => 'EUR', 'secondary' => 'USD'), $this->account->getCurrencies());
}
Now the above test will force us to write the following code.
class Account {
    private $id;
    private $primaryCurrency;
    private $secondaryCurrency;

    [...]

    function setPrimaryCurrency($currency) {
        $this->primaryCurrency = $currency;
    }

    function setSecondaryCurrency($currency) {
        $this->secondaryCurrency = $currency;
    }

    function getCurrencies() {
        return array('primary' => $this->primaryCurrency, 'secondary' => $this->secondaryCurrency);
    }
}
For the time being, we are keeping currency as a simple string. This may change in the future, but we are not there yet.
Gimme the Money
There are endless reasons not to represent money as a simple value. Floating point calculations, anyone? What about currency fractions: should some exotic currency have 10, 100, or 1000 cents? That is yet another problem we will have to handle. And what about allocating indivisible cents?
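If the floating point worry sounds abstract, this classic two-liner (my addition, not from the original article) shows why storing money as a plain float invites trouble:
// 0.1 and 0.2 have no exact binary representation, so their sum is not exactly 0.3
var_dump(0.1 + 0.2 == 0.3);   // bool(false)
printf("%.17f\n", 0.1 + 0.2); // 0.30000000000000004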
There are just too many exotic problems when working with money to enumerate them all in code, so we will go directly to the solution: the Money Pattern. It is quite a simple pattern, with great advantages and many use cases far beyond the financial domain. Whenever you have to represent a value-unit pair, you should probably consider this pattern.
The Money Pattern is basically a class encapsulating an amount and currency. Then it defines all the mathematical operations on the value with respect to the currency. "allocate()" is a special function to distribute a specific amount of money between two or more recipients.
So, as a user of Money I would like to be able to do this in a test:
class MoneyTest extends PHPUnit_Framework_TestCase {
    function testWeCanCreateAMoneyObject() {
        $money = new Money(100, Currency::USD());
    }
}
But that won’t work yet. We need both Money and Currency. Even more, we need Currency before Money. This will be a simple class, so I will skip testing it for now. I am pretty sure the IDE can generate most of the code for me.
class Currency {
    private $centFactor;
    private $stringRepresentation;

    private function __construct($centFactor, $stringRepresentation) {
        $this->centFactor = $centFactor;
        $this->stringRepresentation = $stringRepresentation;
    }

    public function getCentFactor() {
        return $this->centFactor;
    }

    function getStringRepresentation() {
        return $this->stringRepresentation;
    }

    static function USD() {
        return new self(100, 'USD');
    }

    static function EUR() {
        return new self(100, 'EUR');
    }
}
That’s enough for our example. We have two static functions for USD and EUR currencies. In a real application, we would probably have a general constructor with a parameter and load all the currencies from a database table or, even better, from a text file.
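Purely as an illustration of that idea, and a sketch of mine rather than part of the tutorial's code, a file-backed factory might look something like this, assuming a hypothetical currencies.txt where each line reads like "USD,100":
static function fromCode($code) {
    // Hypothetical lookup: each line of currencies.txt is "CODE,centFactor"
    foreach (file('currencies.txt', FILE_IGNORE_NEW_LINES | FILE_SKIP_EMPTY_LINES) as $line) {
        list($representation, $centFactor) = explode(',', $line);
        if ($representation === $code) {
            return new self((int) $centFactor, $representation);
        }
    }
    throw new Exception("Unknown currency: $code");
}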
Next, include the two new files in the test:
require_once '../Currency.php';
require_once '../Money.php';

class MoneyTest extends PHPUnit_Framework_TestCase {
    function testWeCanCreateAMoneyObject() {
        $money = new Money(100, Currency::USD());
    }
}
This test still fails, but at least it can find Currency now. We continue with a minimal Money implementation. A little bit more than what this test strictly requires since it is, again, mostly auto-generated code.
class Money {
    private $amount;
    private $currency;

    function __construct($amount, Currency $currency) {
        $this->amount = $amount;
        $this->currency = $currency;
    }
}
Please note, we enforce the type Currency for the second parameter in our constructor. This is a nice way to avoid our clients sending in junk as currency.
Comparing Money
The first thing that came to mind once the minimal object was up and running was that I would have to compare Money objects somehow. Then I remembered that PHP is quite smart when it comes to comparing objects, so I wrote this test.
function testItCanTellTwoMoneyObjectsAreEqual() {
    $m1 = new Money(100, Currency::USD());
    $m2 = new Money(100, Currency::USD());
    $this->assertEquals($m1, $m2);
    $this->assertTrue($m1 == $m2);
}
Well, that actually passes. The assertEquals() function can compare the two objects, and even PHP's built-in equality operator "==" tells me what I expect. Nice.
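As a quick aside (my addition), it helps to remember exactly what PHP's two object comparisons do: "==" returns true when both operands are instances of the same class with equal properties, while "===" requires them to be the very same instance:
$m1 = new Money(100, Currency::USD());
$m2 = new Money(100, Currency::USD());
var_dump($m1 == $m2);  // bool(true): same class, equal properties
var_dump($m1 === $m2); // bool(false): two distinct instances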
But, what about if we are interested in one being bigger than the other? To my even greater surprise, the following test also passes without any problems.
function testOneMoneyIsBiggerThanTheOther() {
    $m1 = new Money(200, Currency::USD());
    $m2 = new Money(100, Currency::USD());
    $this->assertGreaterThan($m2, $m1);
    $this->assertTrue($m1 > $m2);
}
Which leads us to…
function testOneMoneyIsLessThanTheOther() {
    $m1 = new Money(100, Currency::USD());
    $m2 = new Money(200, Currency::USD());
    $this->assertLessThan($m2, $m1);
    $this->assertTrue($m1 < $m2);
}
… a test that passes immediately.
Plus, Minus, Multiply
Seeing so much PHP magic actually working with comparisons, I could not resist trying this one.
function testTwoMoneyObjectsCanBeAdded() {
    $m1 = new Money(100, Currency::USD());
    $m2 = new Money(200, Currency::USD());
    $sum = new Money(300, Currency::USD());
    $this->assertEquals($sum, $m1 + $m2);
}
Which fails and says:
Object of class Money could not be converted to int
Hmm. That sounds pretty obvious. At this point we have to make a decision. It would be possible to continue this exercise with even more PHP magic, but that approach would, at some point, turn this tutorial into a PHP cheat sheet instead of a design pattern tutorial. So, let's decide to implement actual methods to add, subtract, and multiply Money objects.
function testTwoMoneyObjectsCanBeAdded() {
    $m1 = new Money(100, Currency::USD());
    $m2 = new Money(200, Currency::USD());
    $sum = new Money(300, Currency::USD());
    $this->assertEquals($sum, $m1->add($m2));
}
This test fails as well, but with an error telling us there is no "add" method on Money.
public function getAmount() {
    return $this->amount;
}

function add($other) {
    return new Money($this->amount + $other->getAmount(), $this->currency);
}
To add two Money objects, we need a way to retrieve the amount of the object passed in as the argument. I prefer to write a getter, but making the class variable public would also be an acceptable solution. But what if we want to add Dollars to Euros?
/**
 * @expectedException Exception
 * @expectedExceptionMessage Both Moneys must be of same currency
 */
function testItThrowsExceptionIfWeTryToAddTwoMoneysWithDifferentCurrency() {
    $m1 = new Money(100, Currency::USD());
    $m2 = new Money(100, Currency::EUR());
    $m1->add($m2);
}
There are several ways to deal with operations on Money objects of different currencies. We will throw an exception and expect it in the test. Alternatively, we could implement a currency conversion mechanism in our application, call it, convert both Money objects into some default currency, and compare them. Or, if we had a more sophisticated currency conversion algorithm, we could always convert from one currency to the other and compare in the converted currency. The thing is, once conversion comes into play, conversion fees have to be considered and things get quite complicated. So let's just throw that exception and move on.
public function getCurrency() {
    return $this->currency;
}

function add(Money $other) {
    $this->ensureSameCurrencyWith($other);
    return new Money($this->amount + $other->getAmount(), $this->currency);
}

private function ensureSameCurrencyWith(Money $other) {
    if ($this->currency != $other->getCurrency())
        throw new Exception("Both Moneys must be of same currency");
}
That’s better. We check whether the currencies differ and throw an exception if they do. I already wrote the check as a separate private method, because I know we will need it in the other mathematical operations as well.
Subtraction and multiplication are very similar to addition, so here is the code and you can find the tests in the attached source code.
function subtract(Money $other) {
    $this->ensureSameCurrencyWith($other);
    if ($other > $this)
        throw new Exception("Subtracted money is more than what we have");
    return new Money($this->amount - $other->getAmount(), $this->currency);
}

function multiplyBy($multiplier, $roundMethod = PHP_ROUND_HALF_UP) {
    $product = round($this->amount * $multiplier, 0, $roundMethod);
    return new Money($product, $this->currency);
}
With subtraction, we have to make sure we have enough money, and with multiplication, we must round up or down so that division (multiplication by a number less than one) does not produce "half cents". We keep our amount in cents, the lowest possible fraction of the currency; we cannot divide it any further.
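The attached tests themselves are not reproduced in the article; sketches consistent with the code above might read like this (the test names are mine):
function testOneMoneyCanBeSubtractedFromAnother() {
    $m1 = new Money(200, Currency::USD());
    $m2 = new Money(50, Currency::USD());
    $this->assertEquals(new Money(150, Currency::USD()), $m1->subtract($m2));
}

function testMultiplicationRoundsToWholeCents() {
    $money = new Money(5, Currency::USD());
    // 5 * 0.5 = 2.5 cents; PHP_ROUND_HALF_UP rounds this to 3
    $this->assertEquals(new Money(3, Currency::USD()), $money->multiplyBy(0.5));
}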
Introducing Currency to Our Account
We have an almost complete Money and Currency. It is time to introduce these objects to Account. We will start with Currency, and change our tests accordingly.
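The revised test is not shown in this excerpt; it presumably changed along these lines, passing Currency objects instead of plain strings (my reconstruction):
function testItCanHavePrimaryAndSecondaryCurrencies() {
    $this->account->setPrimaryCurrency(Currency::EUR());
    $this->account->setSecondaryCurrency(Currency::USD());
    $this->assertEquals(
        array('primary' => Currency::EUR(), 'secondary' => Currency::USD()),
        $this->account->getCurrencies()
    );
}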
Because of PHP's dynamic typing, this test passes without any problems. However, I would like to force the methods in Account to accept Currency objects and nothing else. This is not mandatory, but I find this kind of type hinting extremely useful when someone else needs to understand our code.
function setPrimaryCurrency(Currency $currency) {
    $this->primaryCurrency = $currency;
}

function setSecondaryCurrency(Currency $currency) {
    $this->secondaryCurrency = $currency;
}
Now it is obvious to anyone reading this code for the first time that Account works with Currency.
Introducing Money to Our Account
The two basic actions any account must provide are deposit (adding money to the account) and withdraw (removing money from it). Depositing has a source, and withdrawing has a destination other than our current account. We will not go into the details of implementing these transactions; we will only concentrate on the effects they have on our account. So, we can imagine a test like this for depositing.
function testAccountCanDepositMoney() {
    $this->account->setPrimaryCurrency(Currency::EUR());
    $money = new Money(100, Currency::EUR()); //That's 1 EURO
    $this->account->deposit($money);
    $this->assertEquals($money, $this->account->getPrimaryBalance());
}
This will force us to write quite a lot of implementation code.
class Account {
    private $id;
    private $primaryCurrency;
    private $secondaryCurrency;
    private $secondaryBalance;
    private $primaryBalance;

    function getSecondaryBalance() {
        return $this->secondaryBalance;
    }

    function getPrimaryBalance() {
        return $this->primaryBalance;
    }

    function __construct($id) {
        $this->id = $id;
    }

    [...]

    function deposit(Money $money) {
        $this->primaryCurrency == $money->getCurrency() ? $this->primaryBalance = $money : $this->secondaryBalance = $money;
    }
}
OK, OK. I know, I wrote more production code than was absolutely necessary. But I don't want to bore you to death with baby steps, and I am fairly sure the code for secondaryBalance will work correctly; it was almost entirely generated by the IDE, so I will even skip testing it. While this code makes our test pass, we have to ask ourselves what happens on subsequent deposits. We want our money to be added to the previous balance.
function testSubsequentDepositsAddUpTheMoney() {
    $this->account->setPrimaryCurrency(Currency::EUR());
    $money = new Money(100, Currency::EUR()); //That's 1 EURO
    $this->account->deposit($money); //One euro in the account
    $this->account->deposit($money); //Two euros in the account
    $this->assertEquals($money->multiplyBy(2), $this->account->getPrimaryBalance());
}
Well, that fails. So we have to update our production code.
function deposit(Money $money) {
    if ($this->primaryCurrency == $money->getCurrency()) {
        $this->primaryBalance = $this->primaryBalance ?: new Money(0, $this->primaryCurrency);
        $this->primaryBalance = $this->primaryBalance->add($money);
    } else {
        $this->secondaryBalance = $this->secondaryBalance ?: new Money(0, $this->secondaryCurrency);
        $this->secondaryBalance = $this->secondaryBalance->add($money);
    }
}
This is much better. We are probably done with the deposit method and we can continue with withdraw.
function testAccountCanWithdrawMoneyOfSameCurrency() {
    $this->account->setPrimaryCurrency(Currency::EUR());
    $money = new Money(100, Currency::EUR()); //That's 1 EURO
    $this->account->deposit($money);
    $this->account->withdraw(new Money(70, Currency::EUR()));
    $this->assertEquals(new Money(30, Currency::EUR()), $this->account->getPrimaryBalance());
}
This is just a simple test, and the solution is simple as well.
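The listing is omitted from this excerpt; judging by the validated version shown a little further down, the first cut presumably looked like this:
function withdraw(Money $money) {
    $this->primaryCurrency == $money->getCurrency() ?
        $this->primaryBalance = $this->primaryBalance->subtract($money) :
        $this->secondaryBalance = $this->secondaryBalance->subtract($money);
}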
Well, that works, but what if we try to withdraw in a currency our account does not hold? We should throw an exception for that.
/**
 * @expectedException Exception
 * @expectedExceptionMessage This account has no currency USD
 */
function testThrowsExceptionForInexistentCurrencyOnWithdraw() {
    $this->account->setPrimaryCurrency(Currency::EUR());
    $money = new Money(100, Currency::EUR()); //That's 1 EURO
    $this->account->deposit($money);
    $this->account->withdraw(new Money(70, Currency::USD()));
}
That will also force us to check our currencies.
function withdraw(Money $money) {
    $this->validateCurrencyFor($money);
    $this->primaryCurrency == $money->getCurrency() ?
        $this->primaryBalance = $this->primaryBalance->subtract($money) :
        $this->secondaryBalance = $this->secondaryBalance->subtract($money);
}

private function validateCurrencyFor(Money $money) {
    if (!in_array($money->getCurrency(), $this->getCurrencies()))
        throw new Exception(
            sprintf(
                'This account has no currency %s',
                $money->getCurrency()->getStringRepresentation()
            )
        );
}
But what if we want to withdraw more than what we have? That case was already addressed when we implemented subtraction on Money. Here is the test that proves it.
/**
 * @expectedException Exception
 * @expectedExceptionMessage Subtracted money is more than what we have
 */
function testItThrowsExceptionIfWeTryToSubtractMoreMoneyThanWeHave() {
    $this->account->setPrimaryCurrency(Currency::EUR());
    $money = new Money(100, Currency::EUR()); //That's 1 EURO
    $this->account->deposit($money);
    $this->account->withdraw(new Money(150, Currency::EUR()));
}
Dealing With Withdraw and Exchange
One of the more difficult things to deal with when working with multiple currencies is exchanging between them. The beauty of this design pattern is that it lets us simplify the problem somewhat by isolating and encapsulating it in its own class. While the logic in an Exchange class may be very sophisticated, using it becomes much easier. For the sake of this tutorial, let's imagine we have only some very basic exchange logic: 1 EUR = 1.5 USD.
class Exchange {
    function convert(Money $money, Currency $toCurrency) {
        if ($toCurrency == Currency::EUR() && $money->getCurrency() == Currency::USD())
            return new Money($money->multiplyBy(0.67)->getAmount(), $toCurrency);
        if ($toCurrency == Currency::USD() && $money->getCurrency() == Currency::EUR())
            return new Money($money->multiplyBy(1.5)->getAmount(), $toCurrency);
        return $money;
    }
}
If we convert from EUR to USD, we multiply the value by 1.5; if we convert from USD to EUR, we multiply by 0.67, roughly the inverse of 1.5; otherwise, we presume both currencies are the same, so we do nothing and just return the money. Of course, in reality this would be a much more complicated class.
Now, having an Exchange class, Account can make different decisions when we want to withdraw Money in a currency we do not have enough of. Here is a test that better exemplifies it.
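The original listing lives in the attached source; what follows is a reconstruction of mine, based on the description below and the tutorial's 1 EUR = 1.5 USD rate:
function testItCanWithdrawFromPrimaryByConvertingTheSecondaryBalance() {
    $this->account->setPrimaryCurrency(Currency::USD());
    $this->account->deposit(new Money(100, Currency::USD())); // $1.00
    $this->account->setSecondaryCurrency(Currency::EUR());
    $this->account->deposit(new Money(100, Currency::EUR())); // 1.00 EUR
    $this->account->withdraw(new Money(200, Currency::USD())); // $2.00
    $this->assertEquals(new Money(0, Currency::USD()), $this->account->getPrimaryBalance());
    $this->assertEquals(new Money(34, Currency::EUR()), $this->account->getSecondaryBalance());
}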
We set our account's primary currency to USD and deposit one dollar. Then we set the secondary currency to EUR and deposit one euro. Then we withdraw two dollars. Finally, we expect to be left with zero dollars and 0.34 euros. Of course, this test initially throws an exception, so we have to implement a solution to this dilemma.
function withdraw(Money $money) {
    $this->validateCurrencyFor($money);
    if ($this->primaryCurrency == $money->getCurrency()) {
        if ($this->primaryBalance >= $money) {
            $this->primaryBalance = $this->primaryBalance->subtract($money);
        } else {
            $ourMoney = $this->primaryBalance->add($this->secondaryToPrimary());
            $remainingMoney = $ourMoney->subtract($money);
            $this->primaryBalance = new Money(0, $this->primaryCurrency);
            $this->secondaryBalance = (new Exchange())->convert($remainingMoney, $this->secondaryCurrency);
        }
    } else {
        $this->secondaryBalance = $this->secondaryBalance->subtract($money);
    }
}

private function secondaryToPrimary() {
    return (new Exchange())->convert($this->secondaryBalance, $this->primaryCurrency);
}
Wow, lots of changes had to be made to support this automatic conversion. What happens is this: if we are withdrawing from our primary currency and we don't have enough money, we convert the secondary-currency balance to the primary currency and try the subtraction again. If we still don't have enough, subtracting from $ourMoney will throw the appropriate exception. Otherwise, we set our primary balance to zero, convert the remaining money back to the secondary currency, and set our secondary balance to that value.
It remains up to our account's logic to implement a similar automatic conversion for the secondary currency. We will not implement that symmetrical logic here; if you like the idea, consider it an exercise. Also, think about a more generic private method that could handle the auto-conversion in both cases.
This complex change to our logic also forces us to update one of our earlier tests. Whenever we want to auto-convert, we must have a balance in both currencies, even if one is just zero.
/**
 * @expectedException Exception
 * @expectedExceptionMessage Subtracted money is more than what we have
 */
function testItThrowsExceptionIfWeTryToSubtractMoreMoneyThanWeHave() {
    $this->account->setPrimaryCurrency(Currency::EUR());
    $money = new Money(100, Currency::EUR()); //That's 1 EURO
    $this->account->deposit($money);
    $this->account->setSecondaryCurrency(Currency::USD());
    $money = new Money(0, Currency::USD());
    $this->account->deposit($money);
    $this->account->withdraw(new Money(150, Currency::EUR()));
}
Allocating Money Between Accounts
The last method we need to implement on Money is allocate(). This is the logic that decides what to do when money divided between different accounts cannot be split exactly. For example, if we have 10 cents and we want to allocate them between two accounts in a proportion of 30-70 percent, that is easy: one account gets three cents and the other seven. However, if we want to make the same 30-70 allocation of five cents, we have a problem. The exact allocation would be 1.5 cents in one account and 3.5 in the other. But we cannot divide cents, so we have to implement our own algorithm to allocate the money.
There can be several solutions to this problem; one common algorithm is to add one cent at a time to each account in turn. If an account's balance reaches its exact mathematical share, it is removed from the allocation list and receives no further money. Here is a graphical representation.
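The test itself ships with the attached source code; a reconstruction might look like the following, where anEmptyAccount() is a hypothetical helper that creates an account holding a zero balance (allocate() reads each account's balance, so it must not be null):
function testFiveCentsAreAllocatedThirtySeventyBetweenTwoAccounts() {
    $money = new Money(5, Currency::USD());
    $a1 = $this->anEmptyAccount();
    $a2 = $this->anEmptyAccount();
    $money->allocate($a1, $a2, 30, 70);
    $this->assertEquals(new Money(2, Currency::USD()), $a1->getPrimaryBalance());
    $this->assertEquals(new Money(3, Currency::USD()), $a2->getPrimaryBalance());
}

private function anEmptyAccount() {
    $account = new Account(null);
    $account->setPrimaryCurrency(Currency::USD());
    $account->deposit(new Money(0, Currency::USD()));
    return $account;
}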
We just create a Money object with five cents and two accounts. We call allocate() and expect the two-three split between the two accounts. We also created a helper method to quickly create accounts. The test fails, as expected, but we can make it pass quite easily.
function allocate(Account $a1, Account $a2, $a1Percent, $a2Percent) {
    $exactA1Balance = $this->amount * $a1Percent / 100;
    $exactA2Balance = $this->amount * $a2Percent / 100;
    $oneCent = new Money(1, $this->currency);
    while ($this->amount > 0) {
        if ($a1->getPrimaryBalance()->getAmount() < $exactA1Balance) {
            $a1->deposit($oneCent);
            $this->amount--;
        }
        if ($this->amount <= 0)
            break;
        if ($a2->getPrimaryBalance()->getAmount() < $exactA2Balance) {
            $a2->deposit($oneCent);
            $this->amount--;
        }
    }
}
Well, not the simplest code, but it works correctly, as our passing test proves. The only thing left to do here is to reduce the small duplication inside the while loop.
What I find amazing with this little pattern is the large range of cases where we can apply it.
We are done with our Money Pattern. We saw that it is quite a simple pattern that encapsulates the specifics of the money concept. We also saw that this encapsulation lifts the burden of computation from Account. Account can concentrate on representing the concept at a higher level, from the point of view of the bank, and implement things like connections to account holders, IDs, and transactions. It becomes an orchestrator, not a calculator; Money takes care of the calculations.
What I find amazing about this little pattern is the large range of cases where we can apply it. Basically, every time you have a value-unit pair, you can use it. Imagine you have a weather application and you want to implement a representation for temperature: that would be the equivalent of our Money object, with Fahrenheit or Celsius as the currencies.
Another use case is a mapping application where you want to represent distances between points. You can easily use this pattern to switch between metric and imperial measurements. When you work with simple units, you can drop the Exchange object and implement the conversion logic inside your "Money" object itself.
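To make the analogy concrete, here is a tiny sketch of mine (not from the article) of the same idea applied to temperature, with the scale playing the role of the currency and the conversion living on the object itself:
class Temperature {
    private $degrees;
    private $scale; // 'C' or 'F', the "currency" of this value-unit pair

    function __construct($degrees, $scale) {
        $this->degrees = $degrees;
        $this->scale = $scale;
    }

    // Simple units don't need a separate Exchange object
    function toFahrenheit() {
        return $this->scale === 'F'
            ? $this
            : new Temperature($this->degrees * 9 / 5 + 32, 'F');
    }
}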
So, I hope you enjoyed this tutorial and I am eager to hear about the different ways that you might use this concept. Thank you for reading.
If you’ve been reading this site for a while, then you know who Jeffrey Way is. He’s the man, the myth and the legend behind the stellar growth of Nettuts+ and an influential voice in the web development community. And now he’s tackling online education full steam via Tuts+.
We wanted to catch up with Jeffrey to see how his next great adventure is going. Let’s check it out.
Q Readers want to know, “Where in the world is Jeffrey Way?”
In the last year, much of my energy has been put into the Tuts+ Premium program, and I’m really proud of what we’ve achieved.
I’m still around! I simply decided to adjust my priorities a bit. After building up and maintaining Nettuts+ for five years, I realized that I’d reached the limits of what I was capable of learning in that job. Staying anywhere too long is rarely a good thing, so I chose to step down as editor, and instead focus my attention on other projects.
In the last year, much of my energy has been put into the Tuts+ Premium program, and I’m really proud of what we’ve achieved. Though it’s been tough, we’re now at a point where we’re publishing well over 25 new courses every single month. We’ve released courses on everything from modern WordPress development, to Yeoman, to Ember, to Laravel testing. As I sometimes tease: if you enjoy Dreamweaver, then Lynda.com is a great choice. Otherwise, to instead learn the technologies that working pros use every day, Tuts+ Premium is a really fantastic resource. :)
Q You have one of the biggest fan bases, built on your stellar work on Nettuts+. What prompted the change to Tuts+?
Like I said above, mostly it came down to a personal decision. Life is too short to not experiment with new ideas and roles. So, having managed the site for over five years, the time was right to move on. You have to be careful about falling into a rut, sometimes.
Also, with you and Andrew at the helm, I felt that the site was in perfect hands to reach the next level.
Q The focus of Tuts+ is squarely online courses. How do you see online education complementing and/or disrupting the traditional mediums for education?
The best education on the planet in this sphere is not exclusive to a cold brick building.
What’s particularly nice about online education is that it can be anything you want it to be. While traditional schooling has a tendency to force lesson plans (which I’ve never been a fan of, considering the price tag), when it comes to the online world, you’re in charge. You choose the path.
Do platforms like Tuts+ disrupt the traditional medium? I’d say the answer is a big fat yes. As I tweeted not too long ago, at this point, I can’t imagine an environment where I’d find myself recommending to my future child that he or she should attend university. Perhaps there are merits to the social aspect of college (questionable, though), but, beyond that, I see it as little more than an excellent way to start your life with masses of debt.
If your goal, specifically, is to develop for the web, then the answer is even more obvious. The best education on the planet in this sphere is not exclusive to a cold brick building. It’s widely accessible for free around the web. We’re very fortunate that our community (web development) is so incredibly open about documenting their trials and experiments.
Q I’ve read viewpoints where people, on many occasions, recommend forgoing formal education altogether and encouraging developers to leverage the Internet as their educational resource. Is online education at a point where bypassing a degree in, say, Computer Science is actually viable?
I think we passed that point long ago. Outside of the incredible price tag, the problem with university is the same problem with all forms of traditional schooling: it mandates a “one size fits all” approach to learning. Maybe every eighteen year old doesn’t learn best by waking up at eight in the morning, sitting in a 200+ auditorium for ninety minutes, and then taking multiple choice tests. Gasp – maybe there are ways to learn that don’t fit some college’s rigid curriculum. You are not a bad person if you don’t fit this mold.
Really, though, it all comes down to what type of person you are. I was not a fan of my university experience; however, my personality type virtually guaranteed the experience I had. You might be different. If that’s the case, and you can afford the price tag of admission, then certainly nothing bad could come from it! In those cases, have at it, and use platforms like Tuts+ as a supplement.
Q There’s been some criticism about online education (some valid, some FUD). How do you ensure that the courses you’re providing offer real-world knowledge and value to people who take the courses?
Honestly, it can sometimes be a struggle. The key for me has been to leverage the community that I’ve personally submerged myself in. Twitter is amazing for this. By reaching out to the leaders in the community, I can rest assured that they’ll bring their experience to the courses and material that I might not personally be as well-versed in.
In terms of choosing which courses to publish and what constitutes “real-world knowledge,” well that simply comes down to experience, I think. Generally speaking, I can often refer to the technologies that I, myself, am interested in learning more about. This includes everything from Ember to AngularJS (yes, both), to architecture, and everything in between. At that point, it simply translates to a process of choosing which developer is most qualified to teach those subjects.
Q I recently wrote on the challenges of staying up-to-date with technology. What are your thoughts on how developers can manage the fast and constant changes for the evolving web development space?
Ahh, yes, I’ve written about these challenges myself many times, as well. There’s no denying that ours is an incredibly difficult industry. I’ve often noted that, if I knew how deep the rabbit hole went at the beginning of my development career, I’m not sure that I would do it again. I guess, from that perspective, my naivety was absolutely working in my favor back then!
I certainly don’t want to dissuade the newcomers in the audience. Instead, I’d simply recommend that they be prepared for the long-haul. Development isn’t something that you knuckle down and learn in six months (despite what some infomercials may say). It’s a non-stop battle, not too dissimilar from an RPG. Little by little, your skills level-up. But it’s a slow process. The key is to love it, and to never stop…even when you’re overwhelmed with frustration and confusion.
Q You’ve become one of the biggest advocates for Laravel. What makes Laravel so special to invoke such a passionate dedication to the framework?
If you want to talk about sheer joy of development, I’ll happily put Laravel up against any framework.
Because Laravel makes PHP development fun! There was a period of time, not too long ago, when PHP and its community were, for lack of better words, hated. Seemingly, the headline joke of every day was one that related to how terrible PHP was. Let’s see, what new PHP-slamming blog article will be posted today? While some of these complaints are certainly valid, the truth of the matter is that much of what people hate about PHP has little effect on your average developer’s day-to-day workflow. In fact, most of that vitriol is rooted in the days of PHP 4. The language and community have come so far since then. It’s unfair to continue painting it with that brush.
If you want to talk about sheer joy of development, I’ll happily put Laravel up against any framework. Rails, Django, Express, you name it. Laravel has it all, too. Migrations, Active-Record implementation, clean syntax, testing facilities, elegant routing, etc. Every Laravel developer knows that feeling of realizing that a seemingly difficult task has been reduced to a single method call.
Need to cache a database query to improve performance? You can do that in one line of code. Want to work with queues, without the hassle of a background daemon? Laravel hooks up flawlessly with Iron.io’s push queues. No framework in existence makes it easier. What about things like writing a console command to deploy your application? Yep, with Laravel, we can arrange that in seconds, using custom Artisan commands and the remote component.
The reason why I’m such a cheerleader of Laravel is because I’m continually impressed by its capabilities. It never fails.
Q It seems like Laravel and Symfony have taken the PHP world by storm. How does this impact existing applications based on other frameworks like CodeIgniter? Will we be seeing a developer knowledge gap soon?
I suppose one argument is that it doesn’t affect those applications at all. Projects built upon CodeIgniter may freely stay that way. There’s no mandate that all applications must be upgraded to their nearest modern framework base! But, naturally, we’ll continue to see the decline of CodeIgniter. This is a certainty, and is specifically why I’ve stopped commissioning new CI courses for Tuts+ Premium. We’re interested in modern development; not technologies of 2008. While CodeIgniter was fantastic in its own right, the simple truth is that its time has come to an end.
Symfony and Laravel are the PHP frameworks of the new generation.
Q Along those same lines, how does PHP fit into the picture when so many web developers are preaching the virtues of Node.js, Ruby on Rails and Python with Django? Is PHP adapting to modern needs?
Pick one that feels right to you, and start building things. That’s all that matters.
Perhaps the question could instead be phrased, like so: “Despite the fact that many developers champion newer languages and frameworks, why does PHP continue to dominate, to the point of 80% market share?” Certainly, something must have been done right, yes?
What this all boils down to is that PHP has been around for a long time. It’s not “the new hotness.” It’s not overly sexy. But we get stuff done. I’ve never been more excited for what’s in store for the community and language than today.
But, sure, those other technologies are excellent, too. Pick one that feels right to you, and start building things. That’s all that matters. People focus too much on “us vs. them.”
Q Last question. What would you like to tell your many fans that miss your presence on Nettuts+?
I’m still here! Let’s stay in touch on Twitter. My username is @jeffrey_way.
In Conclusion
Thank you very much Jeffrey, for taking time to do this interview.