Channel: Nettuts+

What’s Hot in 2013: Our Picks


2012 was a fantastic year for new technologies, products, and frameworks in our industry. That said, 2013 is looking to be even better! Recently, I asked our Nettuts+ writing staff to compile a list of the technologies that they’ll be keeping a close eye on. Now these aren’t necessarily brand new, but we expect them to spike in popularity this year!

Jeffrey Way’s Picks

Composer

Composer

Composer is a tool for dependency management, similar to Bundler and NPM. Declare your dependencies within a configuration file, and then run a single command to immediately pull them into your project!
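As a sketch of that configuration file, here is a minimal composer.json (monolog/monolog is just an example dependency, not something this article requires):

```json
{
    "require": {
        "monolog/monolog": "1.*"
    }
}
```

Running composer install then downloads the package into a vendor/ folder and generates an autoloader for you.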

Though it rapidly picked up steam last year, in 2013, I expect to see widespread adoption of Composer from the PHP community.


Laravel 4

Laravel

Laravel will be to the PHP community what Rails was to the Ruby world. It’s an incredibly elegant framework that will surge to the next level in early 2013, with the release of Version 4. Composer support, better testability, easy emailing, and resourceful controllers are just a few new features that you can look forward to. Keep an eye on this one!

Tuts+ Premium Further Learning


PHP 5.5

PHP 5.5

Following the successful release of PHP 5.4 in early 2012, which introduced a plethora of badly needed features, such as a built-in server, traits, and an improved array syntax, version 5.5 will let us play around with generators, support for list() within foreach statements, and, among other things, a vastly simplified password hashing API.


D3

D3

D3 is a fantastic JavaScript-based data visualization library that allows you to bind data to the DOM, and then make transformations to the document. To learn more, refer to D3’s GitHub repository for a massive gallery of examples for visualizing various data sets.


Brackets

Brackets

Brackets is an open-source code editor that takes HTML, CSS, and JavaScript to the extreme: it’s built with those very technologies! As a result, as long as you have a relatively basic understanding of JavaScript, you have the necessary tools to extend the editor as you see fit.

Expect to see this editor give Sublime Text 2 a run for its money in 2013! Until then, here’s a peek at the latest (at the time of this writing) updates to the editor.

Bryan Jones’s Self-Serving Pick

CodeKit 2.0

CodeKit Logo

CodeKit became massively popular in 2012 and is now used on sites like Barackobama.com, Engadget.com, and many more. The 2.0 release coming in the first half of 2013 features a complete UI overhaul, support for more languages and tools, better integration of frameworks and a revolutionary new-project-creation workflow.

Essentially, the goal is to make anyone who’s forced to build a website without CodeKit… cry.

Dan Harper’s Picks

PHP

2013 will be the year of PHP. The year PHP finally makes its comeback and starts to fight against the call of Ruby and Node.

Composer is bringing PHP its long-sought-after package manager. The PHP Framework Interop Group is setting a standard for how PHP should be written, allowing every new and existing framework to grow together and benefit one another. Not to mention the whole host of new features coming to the language with PHP versions 5.4, 5.5 and beyond. It’s hard not to be excited about PHP’s now rosy-looking future.

Tuts+ Premium Further Learning


Meteor

Meteor, a new Node.js-powered framework, is set to revolutionise how you write high-quality dynamic web apps. While it’s still in preview at version 0.5.2 right now, it’s set to hit the version 1 milestone sometime in the new year. It very well may spark a change in the industry like we haven’t seen since the rise of Ruby on Rails. I’m seriously excited for this. I’ll grab the popcorn.


Backbone.js

With browsers getting ever faster, JavaScript is being turned to more and more to provide fast and slick user interfaces for web apps. Backbone is one of the leading libraries for structuring your JS code. With Backbone fast-approaching version 1.0, it’s sure to only achieve more and more success as the year goes on.

Tuts+ Premium Further Learning


Sublime Text 2

There’s just no way you can’t love Sublime. With its command palette, multiple cursors, split panes, and insane levels of customisation and extensibility, it’s really no surprise that Sublime Text 2 has stolen the hearts of thousands of developers away from text editors across every operating system. In 2013, I expect it to continue reigning supreme – with a few exciting updates along the way.

Tuts+ Premium Further Learning


Adobe?

The controversial one. Adobe? The company loathed by anyone who’s written even a single line of HTML? Well, yes. In the past year, Adobe have made it abundantly clear that they’re embracing the future of web technologies. They’ve announced a number of very cool projects, from Brackets, a new take on a text editor for web designers, to Edge Animate, a Flash-like editor for producing rich CSS3 animations, and their CSS FilterLab experiment.

Also, let’s not forget their purchase of PhoneGap and Typekit! Perhaps, by 2014, we’ll have started to forget that Flash websites and Dreamweaver ever existed?

Nikko Bautista’s Picks

Zend Framework 2

Zend Framework 2

Zend Framework 2 was released earlier this year, and it has been a wonderful experience so far. Its adoption of Composer (or Pyrus) to manage its packaging is a huge step in the right direction. I’m hopeful that, in 2013, it will take the crown as the best tool for web developers seeking to build highly-scalable web applications.


Twitter Bootstrap

Twitter Bootstrap

Since its inception in 2011, Twitter Bootstrap has become a standard rapid prototyping framework, used by many developers (including myself) who have no idea how to create a grid layout (or are too lazy to write one). With both developers (@mdo and @fat) moving the whole project into its own open-source organization, I’m looking forward to what the new infrastructure will bring to the project as a whole.


Facebook Open Graph

Facebook Open Graph

In 2011, Facebook released the Facebook Open Graph. The Open Graph has opened Facebook users to a whole lot more, allowing users to share richer stories, based on exactly what they’re doing. From a development point of view, it allows for better integration with Facebook, providing definable stories, which surpass what a simple “Like” can offer.

In 2013, I foresee Facebook’s Open Graph becoming a standard way of sharing different kinds of stories and actions – not just in Facebook, but for any application.


PlayThru

PlayThru

CAPTCHAs have always been the bane of my existence. Their inclusion in any project generally results in a slightly lower conversion rate. Love them or hate them, though, I’ve always deemed them necessary to help fight robots looking to spam your websites.

Enter PlayThru: a CAPTCHA alternative that asks users to play a simple mini-game instead of typing unreadable gibberish. It’s easy to implement, and is nearly uncrackable by existing CAPTCHA-solving tools. In 2013, I can see it being adopted by many of the applications that we use today.


Eden PHP

Eden

Eden is a PHP library that was designed for rapid prototyping. I view it as the Twitter Bootstrap for your PHP code. It’s quite easy to use, offers support for plenty of services, and, best of all, it integrates well with any framework you choose. In 2013, I expect to see it make more of a dent in the PHP scene.

Gabriel Manricks’ Picks

Koding

Koding

Koding is a web development platform that combines all the development tools you need, along with a social aspect, in a single place in the cloud. They offer a complete solution, which includes support for multiple languages (PHP, Python, Ruby, etc.), multiple databases (MySQL, MongoDB), terminal access, a subdomain, and file hosting.

Additionally, they’ve made it social, with a mix of GitHub, Twitter and Stack Overflow. You can view friends’ activity, ask questions, follow topics and post updates. With all of this innovation in a single place, you’re likely wondering how much it’s going to cost. Well, the developers have stated that the product is free for developers, and will remain so.

They are still in beta, so some features are still being worked out, such as one-click apps and options to purchase additional resources. Overall, though, I think this product shows a lot of promise, and may turn into something really great in 2013.


RethinkDB

RethinkDB

RethinkDB is a database system, rebuilt from the ground up for the 21st century. Created in 2009, RethinkDB is an open-source database that, in my opinion, is considerably underrated.

It uses a JSON data model to store everything in documents, and supports atomic updates, JavaScript code directly in queries, upserts, map/reduce functions, and inline sub-queries – and all operations are lock-free. Additionally, it comes with a stunning UI that puts other tools, like phpMyAdmin, to shame. The included admin allows you to run queries (with autocomplete code hinting), view usage graphs, and set up sharding/replication on a per-table basis. Things that are traditionally the most complicated of tasks can be accomplished through the admin’s clean UI.

RethinkDB has automatic failsafe operations for when a node crashes or loses network connectivity, and the entire system is optimized to take advantage of modern SSDs.

Currently, they only provide a package for Ubuntu, but they do offer instructions for getting it set up on Mac OS X. And, of course, they are working on packages for other systems. It will be interesting to see where they take this in 2013.


Stripe

Stripe

Stripe, for the unfamiliar, is a payment processor with the mindset of “built by developers, for developers.” If you’ve ever tried to accept credit card payments with something like PayPal, then you know that it can be a headache to set up: between unclear documentation and fussy APIs, the experience leaves a lot to be desired. Stripe combats this with a dead-simple REST API, webhooks for handling different events, and wrappers for basically every language available.

Stripe recently released “Stripe Connect,” an OAuth 2.0 API that allows you to handle payments and access users’ information, allowing you to create analytical apps and services for Stripe. The single downside to Stripe, currently, is that it’s only available in the U.S. and Canada. That said, the development team have stated that they are trying to branch out to all countries.

Will 2013 be the year that they go global? I guess we will have to wait and see. Until then, you can learn how to use Stripe here on Nettuts+.


Chrome Packaged Apps

Chrome Packaged Apps

Packaged apps are an exciting concept for web technologies and developers alike. Building a web app is a super easy process compared to native OS apps: you lay out your elements (forms, buttons, text, etc.) in markup and style them with CSS. Then, to add functionality, you can use JavaScript to write simple code in a very component-oriented way.

The downside to web apps is the need for a persistent connection, and nearly nonexistent support for native tasks (access to USB devices, writing local files, and so on). Lastly, they are bound to a web browser, which can spoil the effect.

Chrome apps are a mix of both worlds: you get to build apps with access to the features of your operating system, but you do it with HTML, CSS and JavaScript! Chrome offers API-like libraries, which provide you with access to the computer’s resources, and your application is created offline-first. This means that, once installed, there is no requirement for an internet connection; it fully runs outside of the browser.

So where’s the catch? Why haven’t we seen many Chrome apps? Because the platform is still in the preview stage right now. You can certainly build your own apps with it to test yourself, but there is currently no way to package them for distribution. Hopefully, 2013 will bring a new era of hybrid applications, which combine the web’s simplicity with the OS’s power.


CKEditor 4

CKEditor 4

When building a web application, you must consider the different options for improving a user’s experience. A good UI can “make or break” a product, regardless of its functionality. CKEditor is a WYSIWYG editor that allows you to generate HTML code from an easy-to-use interface.

CKEditor 4 was released in late 2012, and comes with a few drastic improvements over its previous version. It now supports inline editing of HTML pages, new UI themes that look great out of the box, and a full API to create your own custom extensions.

When it comes to making products, you shouldn’t waste time creating inputs for your users, only to then process the data and format it for the web. With CKEditor, you can customize every stage of its event-cycle, from what’s in the toolbar, to which format the content should be processed into. CKEditor 4 has only been out for a few short weeks, but, already, there are plugins for syntax highlighting and MS document handling.

This is something that I’m very curious to learn more about.

Claudio Ortolina’s Picks

Ruby 2.0

Ruby 2.0

With the Ruby 2.0 release just around the corner, offering new language features, like named arguments and improved performance, Ruby will certainly be a hot topic for 2013 – especially when it comes to upgrading any application deployed on previous versions.


Rails 4.0

Rails 4.0

Another big release, with important architectural changes (like strong parameters) and a more modular structure that should once again positively impact performance. Keep an eye on this one!


jRuby

jRuby

jRuby is a solid alternative to the default Ruby interpreter (MRI). It’s a mature Ruby implementation on top of the Java Virtual Machine that leverages support for concurrency and integration with Java native libraries and drivers. The latest releases also show extremely good performance; it’s definitely an option when it comes to deploying Ruby applications.


Travis-CI

Travis CI

Continuous integration for testing is increasingly important; Travis makes it possible with a simple cloud-based service. With upcoming support for private projects, it’s going to be a must-use tool for any serious test suite.


Go

Go

The Go language, developed by Google, has rapidly gained momentum in our community, thanks to its simplicity, performance and intuitive design. The recent 1.0 release and Google’s commitment to its future make it a valid option for performance critical services in 2013.

Andrew Burgess’s Picks

Node.js

Node.js

Node is relatively new, as server technologies go, but I’m convinced that the excitement we’ve seen so far is only the beginning. Technologies like Meteor are proof that Node opens up a whole new way of building web apps that’s incredibly difficult to pull off with some of the old faithfuls.

Tuts+ Premium Further Learning


MongoDB (and NoSQL in General)

MongoDB

I recently created a Tuts+ Premium course all about MongoDB. Prior to that, I hadn’t really had a chance to check out any NoSQL technology, but it was love at first site (yes, pun intended). The idea of storing your data in the same way you work with it (JSON) seems so obvious; why weren’t we doing it sooner? While NoSQL isn’t always the right tool for the job, I think you’ll be seeing it used a lot more in the not-so-distant future.
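A tiny illustration of that point, using plain JSON in Python (MongoDB’s document model behaves the same way, with no mapping layer in between; the record here is made up):

```python
import json

# The structure you work with in code is the structure you store.
post = {
    "title": "Hello",
    "tags": ["mongo", "nosql"],
    "comments": [{"by": "Ann", "text": "Nice!"}],
}

stored = json.dumps(post)          # nested data serializes as-is
assert json.loads(stored) == post  # and round-trips unchanged
```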


Responsive Design

Responsive Design

I’m no designer, but I’m certainly a connoisseur of good design. So, lately, I’ve been pretty excited about the hype surrounding responsive design. Once again, it just feels so right. I’ve seen a lot of websites, some pretty high-profile, redesigning with responsive layouts over the last year, and I’m fairly sure this is one trend that won’t be disappearing any time soon.

Keep an eye on Tuts+ in 2013 for a new responsive redesign!


Industry Maturity

industry

While this isn’t a framework or tool, it’s a trend I’ve been noticing for a while – and liking a lot. By maturity, I mainly mean better, closer-to-standardized practices for building web applications. A great article this year that put a lot of it down on paper (so to speak) was Rebecca Murphey’s A Baseline for Front End Developers. Other projects, like Yeoman, encourage developers to build tested, modular projects, and tools like GitHub encourage good code management and history.

This maturing can only be good for the industry, so I welcome it whole-heartedly.


Conclusion

Now that you’ve seen our votes, are there other technologies or releases that you’re anxiously awaiting? Let’s keep the conversation going in the comments below!


An Introduction to Python’s Flask Framework


Flask is a small and powerful web framework for Python. It’s easy to learn and simple to use, enabling you to build your web app in a short amount of time.

In this article, I’ll show you how to build a simple website, containing two static pages with a small amount of dynamic content. While Flask can be used for building complex, database-driven websites, starting with mostly static pages will be useful to introduce a workflow, which we can then generalize to make more complex pages in the future. Upon completion, you’ll be able to use this sequence of steps to jumpstart your next Flask app.


Installing Flask

Before getting started, we need to install Flask. Because systems vary, things can sporadically go wrong during these steps. If they do, do what we all do: Google the error message, or leave a comment describing the problem.

Install virtualenv

We’ll use virtualenv to install Flask. Virtualenv is a useful tool that creates isolated Python development environments where you can do all your development work. Suppose you come across a new Python library that you’d like to try. If you install it system-wide, there is the risk of messing up other libraries that you might have installed. Instead, use virtualenv to create a sandbox, where you can install and use the library without affecting the rest of your system. You can keep using this sandbox for ongoing development work, or you can simply delete it once you’ve finished using it. Either way, your system remains organized and clutter-free.

It’s possible that your system already has virtualenv. Refer to the command line, and try running:

$ virtualenv --version

If you see a version number, you’re good to go and you can skip to the “Install Flask” section. If the command was not found, use easy_install or pip to install virtualenv. If running Linux or Mac OS X, one of the following should work for you:

$ sudo easy_install virtualenv

or:

$ sudo pip install virtualenv

or:

$ sudo apt-get install python-virtualenv

If you don’t have any of these commands installed, there are several tutorials online that will show you how to install them on your system. If you’re running Windows, follow the “Installation Instructions” on this page to get easy_install up and running on your computer.

Install Flask

After installing virtualenv, you can create a new isolated development environment, like so:

$ virtualenv flaskapp

Here, virtualenv creates a folder, flaskapp/, and sets up a clean copy of Python inside for you to use. It also installs the handy package manager, pip.

Enter your newly created development environment and activate it so you can begin working within it.

$ cd flaskapp
$ . bin/activate

Now, you can safely install Flask:

$ pip install Flask

Setting up the Project Structure

Let’s create a couple of folders and files within flaskapp/ to keep our web app organized.

.
├── app
│   ├── static
│   │   ├── css
│   │   ├── img
│   │   └── js
│   ├── templates
│   ├── routes.py
│   └── README.md

Within flaskapp/, create a folder, app/, to contain all your files. Inside app/, create a folder static/; this is where we’ll put our web app’s images, CSS, and JavaScript files, so create folders for each of those, as demonstrated above. Additionally, create another folder, templates/, to store the app’s web templates. Create an empty Python file routes.py for the application logic, such as URL routing.

And no project is complete without a helpful description, so create a README.md file as well.

Now, we know where to put our project’s assets, but how does everything connect together? Let’s take a look at “Fig. 1” below to see the big picture:

Fig. 1

  1. A user issues a request for a domain’s root URL / to go to its home page
  2. routes.py maps the URL / to a Python function
  3. The Python function finds a web template living in the templates/ folder.
  4. A web template will look in the static/ folder for any images, CSS, or JavaScript files it needs as it renders to HTML
  5. Rendered HTML is sent back to routes.py
  6. routes.py sends the HTML back to the browser

We start with a request issued from a web browser. A user types a URL into the address bar. The request hits routes.py, which has code that maps the URL to a function. The function finds a template in the templates/ folder, renders it to HTML, and sends it back to the browser. The function can optionally fetch records from a database and then pass that information on to a web template, but since we’re dealing with mostly static pages in this article, we’ll skip interacting with a database for now.

Now that we know our way around the project structure we set up, let’s get started with making a home page for our web app.


Creating a Home Page

When you write a web app with a couple of pages, it quickly becomes annoying to write the same HTML boilerplate over and over again for each page. Furthermore, what if you need to add a new element to your app, such as a new CSS file? You would have to go into every single page and add it in. This is time consuming and error prone. Wouldn’t it be nice if, instead of repeatedly writing the same HTML boilerplate, you could define your page layout just once, and then use that layout to make new pages with their own content? This is exactly what web templates do!

Web templates are simply text files that contain variables and control flow statements (if..else, for, etc), and end with an .html or .xml extension.

The variables are replaced with your content, when the web template is evaluated. Web templates remove repetition, separate content from design, and make your application easier to maintain. In other, simpler words, web templates are awesome and you should use them! Flask uses the Jinja2 template engine; let’s see how to use it.
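To see that substitution in isolation, here is a minimal sketch using the jinja2 package directly (Flask installs it as a dependency); the template string and values are made up for illustration:

```python
from jinja2 import Template

# A variable ({{ name }}) and a control-flow statement ({% if %})
tmpl = Template("Hello, {{ name }}!{% if admin %} You have admin rights.{% endif %}")

print(tmpl.render(name="reader", admin=False))  # Hello, reader!
```

Flask does exactly this kind of evaluation for you when it renders a template file.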

As a first step, we’ll define our page layout in a skeleton HTML document layout.html and put it inside the templates/ folder:

app/templates/layout.html

<!DOCTYPE html>
<html>
  <head>
    <title>Flask App</title>
  </head>
  <body>
    <header>
      <div class="container">
        <h1 class="logo">Flask App</h1>
      </div>
    </header>
    <div class="container">
      {% block content %}
      {% endblock %}
    </div>
  </body>
</html>

This is simply a regular HTML file…but what’s going on with the {% block content %}{% endblock %} part? To answer this, let’s create another file home.html:

app/templates/home.html

{% extends "layout.html" %}
{% block content %}
  <div class="jumbo">
    <h1>Welcome to the Flask app</h1>
    <h2>This is the home page for the Flask app</h2>
  </div>
{% endblock %} 

The file layout.html defines an empty block, named content, that a child template can fill in. The file home.html is a child template that inherits the markup from layout.html and fills in the “content” block with its own text. In other words, layout.html defines all of the common elements of your site, while each child template customizes it with its own content.
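The extends/block mechanism can be reproduced standalone with jinja2’s DictLoader, an in-memory stand-in for the templates/ folder (the markup here is trimmed down for illustration):

```python
from jinja2 import Environment, DictLoader

# Two tiny templates living in memory instead of in templates/
env = Environment(loader=DictLoader({
    "layout.html": "<body>{% block content %}{% endblock %}</body>",
    "home.html": '{% extends "layout.html" %}'
                 '{% block content %}Home page{% endblock %}',
}))

print(env.get_template("home.html").render())  # <body>Home page</body>
```

The child’s content block is spliced into the parent’s markup, which is precisely what happens when Flask renders home.html.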

This all sounds cool, but how do we actually see this page? How can we type a URL in the browser and “visit” home.html? Let’s refer back to Fig. 1. We just created the template home.html and placed it in the templates/ folder. Now, we need to map a URL to it so we can view it in the browser. Let’s open up routes.py and do this:

app/routes.py

from flask import Flask, render_template

app = Flask(__name__)

@app.route('/')
def home():
  return render_template('home.html')

if __name__ == '__main__':
  app.run(debug=True)

That’s it for routes.py. What did we do?

  1. First, we imported the Flask class and the render_template function.
  2. Next, we created a new instance of the Flask class.
  3. We then mapped the URL / to the function home(). Now, when someone visits this URL, the function home() will execute.
  4. The function home() uses the Flask function render_template() to render the home.html template we just created from the templates/ folder to the browser.
  5. Finally, we use run() to run our app on a local server. We’ll set the debug flag to true, so we can view any applicable error messages if something goes wrong, and so that the local server automatically reloads after we’ve made changes to the code.
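The whole request cycle can also be sanity-checked without a browser, using Flask’s built-in test client. This sketch uses a stand-in route that returns inline HTML, rather than the article’s template-backed home():

```python
from flask import Flask

app = Flask(__name__)

@app.route('/')
def home():
    return '<h1>Welcome to the Flask app</h1>'

# test_client() issues requests against the app without starting a server
client = app.test_client()
response = client.get('/')
print(response.status_code)  # 200
```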

We’re finally ready to see the fruits of our labors. Return to the command line, and type:
$ python routes.py

Visit http://localhost:5000/ in your favorite web browser.

When we visited http://localhost:5000/, routes.py had code in it, which mapped the URL / to the Python function home(). home() found the web template home.html in the templates/ folder, rendered it to HTML, and sent it back to the browser, giving us the screen above.

Pretty neat, but this home page is a bit boring, isn’t it? Let’s make it look better by adding some CSS. Create a file, main.css, within static/css/, and add these rules:

static/css/main.css

body {
  margin: 0;
  padding: 0;
  font-family: "Helvetica Neue", Helvetica, Arial, sans-serif;
  color: #444;
}
/*
 * Create dark grey header with a white logo
 */
header {
  background-color: #2B2B2B;
  height: 35px;
  width: 100%;
  opacity: .9;
  margin-bottom: 10px;
}
header h1.logo {
  margin: 0;
  font-size: 1.7em;
  color: #fff;
  text-transform: uppercase;
  float: left;
}
header h1.logo:hover {
  color: #fff;
  text-decoration: none;
}
/*
 * Center the body content
 */
.container {
  width: 940px;
  margin: 0 auto;
}
div.jumbo {
  padding: 10px 0 30px 0;
  background-color: #eeeeee;
  -webkit-border-radius: 6px;
     -moz-border-radius: 6px;
          border-radius: 6px;
}
h2 {
  font-size: 3em;
  margin-top: 40px;
  text-align: center;
  letter-spacing: -2px;
}
h3 {
  font-size: 1.7em;
  font-weight: 100;
  margin-top: 30px;
  text-align: center;
  letter-spacing: -1px;
  color: #999;
}

Add this stylesheet to the skeleton file layout.html so that the styling applies to all of its child templates by adding this line to its <head> element:

<link rel="stylesheet" href="{{ url_for('static', filename='css/main.css') }}">

We’re using the Flask function, url_for, to generate a URL path for main.css from the static folder. After adding this line in, layout.html should now look like:

app/templates/layout.html

<!DOCTYPE html>
<html>
  <head>
    <title>Flask App</title>
    <link rel="stylesheet" href="{{ url_for('static', filename='css/main.css') }}">
  </head>
  <body>
    <header>
      <div class="container">
        <h1 class="logo">Flask App</h1>
      </div>
    </header>
    <div class="container">
      {% block content %}
      {% endblock %}
    </div>
  </body>
</html>

Let’s switch back to the browser and refresh the page to view the result of the CSS.

That’s more like it! Now, when we visit http://localhost:5000/, routes.py still maps the URL / to the Python function home(), and home() still finds the web template home.html in the templates/ folder. But, since we added the CSS file main.css, the web template home.html looks in static/ to find this asset, before rendering to HTML and being sent back to the browser.

We’ve achieved a lot so far. We started with Fig. 1 by understanding how Flask works, and now we’ve seen how it all plays out, by creating a home page for our web app. Let’s move on and create an About page.


Creating an About Page

In the previous section, we created a web template home.html by extending the skeleton file layout.html. We then mapped the URL / to home.html in routes.py so we could visit it in the browser. We finished things up by adding some styling to make it look pretty. Let’s repeat that process again to create an about page for our web app.

We’ll begin by creating a web template about.html and putting it inside the templates/ folder.

app/templates/about.html

{% extends "layout.html" %}
{% block content %}
  <h2>About us</h2>
  <p>This is a sample app for the Flask tutorial. Don't I look good? Oh stop, you're making me blush.</p>
{% endblock %}

Just like before with home.html, we extend from layout.html, and then fill the content block with our custom content.

In order to visit this page in the browser, we need to map a URL to it. Open up routes.py and add another mapping:

from flask import Flask, render_template

app = Flask(__name__)

@app.route('/')
def home():
  return render_template('home.html')

@app.route('/about')
def about():
  return render_template('about.html')

if __name__ == '__main__':
  app.run(debug=True)

We mapped the URL /about to the function about(). Now we can open up the browser and go to http://localhost:5000/about and check out our newly created page.


Adding Navigation

Most websites have links to their main pages within the header or footer of the document. These links are usually visible across all pages of a website. Let’s open up the skeleton file, layout.html, and add these links so they show up in all of the child templates. Specifically, let’s add a <nav> element inside the <header> element:

app/templates/layout.html

...
<header>
  <div class="container">
    <h1 class="logo">Flask App</h1>
    <nav>
      <ul class="menu">
        <li><a href="{{ url_for('home') }}">Home</a></li>
        <li><a href="{{ url_for('about') }}">About</a></li>
      </ul>
    </nav>
  </div>
</header>
...

Once again, we use the Flask function url_for to generate URLs.
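To see what url_for actually produces, Flask offers test_request_context(), which lets you call it outside a real request. A throwaway sketch (the routes mirror the article’s, but this file is not part of the app):

```python
from flask import Flask, url_for

app = Flask(__name__)

@app.route('/')
def home():
    return 'home'

@app.route('/about')
def about():
    return 'about'

# url_for needs a request (or test) context to build paths
with app.test_request_context():
    print(url_for('home'))   # /
    print(url_for('about'))  # /about
    print(url_for('static', filename='css/main.css'))  # /static/css/main.css
```

Because the paths are generated from function names, renaming a URL rule later doesn’t break the links in your templates.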

Next, add some more style rules to main.css to make these new navigation elements look good:

app/static/css/main.css

...
/*
 * Display navigation links inline
 */
.menu {
  float: right;
  margin-top: 8px;
}
.menu li {
  display: inline;
}
.menu li + li {
  margin-left: 35px;
}
.menu li a {
  color: #999;
  text-decoration: none;
}

Finally, open up the browser and refresh http://localhost:5000/ to see our newly added navigation links.


Conclusion

Over the course of this article, we built a simple web app with two, mostly static, pages. In doing so, we learned a workflow that can be used to create more complex websites with dynamic content. Flask is a simple but powerful framework that enables you to efficiently build web apps. Go ahead – check it out!

Building Ribbit in PHP


In the initial entry in this series, we took care of the UI aspect of our Twitter clone, called Ribbit. Now, we’ll begin coding the application in a number of languages. This lesson will leverage standard PHP (with a homegrown MVC implementation), but, in future articles, we’ll review other implementations, such as Rails or Laravel.

There is a lot to cover, so let’s get started.


Where We Last Left Off


Ribbit

For the unfamiliar, MVC stands for Model-View-Controller. You can think of MVC as Database-HTML-Logic Code. Separating your code into these distinct parts makes it easier to replace one or more of the components without interfering with the rest of your app. As you will see below, this level of abstraction also encourages you to write small, concise functions that rely on lower-level functions.

I like to start with the Model when building this type of application; everything tends to connect to it (signup, posts, etc.). Let’s set up the database.


The Database

We require four tables for this application. They are:

  • Users – holds the users’ info.
  • Ribbits – contains the actual ribbits (posts).
  • Follows – the list of who follows whom.
  • UserAuth – holds the login authentication hashes.

I’ll show you how to create these tables from the terminal. If you use an admin program (such as phpMyAdmin), you can either click the SQL button to enter the commands directly or add the tables through the GUI.

To start, open up a terminal window, and enter the following command:

mysql -u username -h hostAddress -P portNumber -p

If you are running this command on the same machine as MySQL, and the port number was not modified, you may omit the -h and -P arguments; they default to localhost and port 3306, respectively. Once you log in, you can create the database using the following SQL:

CREATE DATABASE Ribbit;
USE Ribbit;

Let’s begin by creating the Users table:

CREATE TABLE Users (
    id              INT NOT NULL AUTO_INCREMENT,
    username        VARCHAR(18) NOT NULL,
    name            VARCHAR(36),
    password        VARCHAR(64),
    created_at      DATETIME,
    email           TEXT,
    gravatar_hash   VARCHAR(32),
    PRIMARY KEY(id, username)
);

This gives us the following table:


Users Table

The next table I want to create is the Ribbits table. This table should have four fields: id, user_id, ribbit and created_at. The SQL code for this table is:

CREATE TABLE Ribbits (
    id            INT NOT NULL AUTO_INCREMENT,
    user_id       INT NOT NULL,
    ribbit        VARCHAR(140),
    created_at    DATETIME,
    PRIMARY KEY(id, user_id)
);

Ribbits Table

This is fairly simple stuff, so I won’t elaborate too much.

Next, the Follows table. This simply holds the ids of both the follower and followee:

CREATE TABLE Follows (
    id            INT NOT NULL AUTO_INCREMENT,
    user_id       INT NOT NULL,
    followee_id   INT,
    PRIMARY KEY(id, user_id)
);

Follows Table

Finally, we have a table called UserAuth. This holds the user’s username and password hash. I opted to store the username rather than the user’s ID because the program already has the username on hand when logging in and signing up (the two times entries are added to this table); using the ID would require an extra query to look it up, and extra queries mean more latency.

In a real-world project, you may want to add another field, such as hash2 or secret. If all you need to authenticate a user is a single hash, then an attacker only has to guess that one value. For example, suppose I randomly enter characters into the hash field of the cookie; with enough users, it might just match someone. But if an attacker has to guess a matching pair of hashes, the chance of success drops dramatically (and further still with three, and so on). To keep things simple, though, I will only use one hash.
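To put a rough number on that intuition, here is a back-of-the-envelope sketch (the token length and alphabet are illustrative, not taken from the app):

```php
<?php
// Illustrative only: probability that a single random guess matches a token.
// Assumes a 12-character token over a 62-symbol alphabet.
$n = pow(62, 12);            // number of possible tokens (~3.2e21)
$oneToken  = 1 / $n;         // chance of guessing one token
$twoTokens = 1 / ($n * $n);  // chance of guessing two independent tokens
var_dump($twoTokens < $oneToken); // bool(true): the pair is far harder to guess
```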

Here’s the SQL code:

CREATE TABLE UserAuth (
    id        INT NOT NULL AUTO_INCREMENT,
    hash      VARCHAR(52) NOT NULL,
    username  VARCHAR(18),
    PRIMARY KEY(id, hash)
);

And this final table looks like the following image:


UserAuth Table

Now that we have all the tables set up, you should have a pretty good idea of how the overall site will work. We can start writing the Model class in our MVC framework.


The Model

Create a file called model.php, and enter the following class declaration:

class Model{
    private $db; // Holds mysqli Variable
    function __construct(){
    	$this->db = new mysqli('localhost', 'user', 'pass', 'Ribbit');
    }
}

This should look familiar if you have written PHP classes in the past. This code creates a class called Model with one private property, named $db, which holds a mysqli object. Inside the constructor, I initialize $db with the connection info for my database. The parameter order is: address, username, password, and database name.

Before we get into any page-specific code, I want to create a few low-level methods that abstract the common SQL operations, such as SELECT and INSERT.

The first function I want to implement is select(). It accepts a string for the table’s name and an array of properties for building the WHERE clause. Here is the entire function, and it should go right after the constructor:

//--- private function for performing standard SELECTs
private function select($table, $arr){
    $query = "SELECT * FROM " . $table;
    $pref = " WHERE ";
    foreach($arr as $key => $value)
    {
        $query .= $pref . $key . "='" . $value . "'";
        $pref = " AND ";
    }
    $query .= ";";
    return $this->db->query($query);
}
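To see what this loop produces, here is a standalone copy of the query-building logic (the function name and sample data are mine, for illustration only):

```php
<?php
// Standalone copy of the loop inside select(), shown here only to
// illustrate the SQL string it builds.
function buildSelect($table, $arr) {
    $query = "SELECT * FROM " . $table;
    $pref = " WHERE ";
    foreach ($arr as $key => $value) {
        $query .= $pref . $key . "='" . $value . "'";
        $pref = " AND ";
    }
    return $query . ";";
}

echo buildSelect("Users", array("username" => "frogger", "name" => "Kermit"));
// SELECT * FROM Users WHERE username='frogger' AND name='Kermit';
```

Note that concatenating raw values into SQL like this is vulnerable to injection; in production code you would use mysqli prepared statements instead.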

The function builds a query string using the table’s name and the array of properties. It then returns a result object which we get by passing the query string through mysqli‘s query() function. The next two functions are very similar; they are the insert() function and the delete() function:

    //--- private function for performing standard INSERTs
    private function insert($table, $arr)
    {
        $query = "INSERT INTO " . $table . " (";
        $pref = "";
        foreach($arr as $key => $value)
        {
            $query .= $pref . $key;
            $pref = ", ";
        }
        $query .= ") VALUES (";
        $pref = "";
        foreach($arr as $key => $value)
        {
            $query .= $pref . "'" . $value . "'";
            $pref = ", ";
        }
        $query .= ");";
        return $this->db->query($query);
    }
    //--- private function for performing standard DELETEs
    private function delete($table, $arr){
        $query = "DELETE FROM " . $table;
        $pref = " WHERE ";
        foreach($arr as $key => $value)
        {
            $query .= $pref . $key . "='" . $value . "'";
            $pref = " AND ";
        }
        $query .= ";";
        return $this->db->query($query);
    }

As you may have guessed, both functions generate a SQL query and return a result. I want to add one more helper function: the exists() function. This will simply check if a row exists in a specified table. Here is the function:

//--- private function for checking if a row exists
private function exists($table, $arr){
    $res = $this->select($table, $arr);
    return $res->num_rows > 0;
}

Before we make the more page-specific functions, we should probably make the actual pages. Save this file and we’ll start on URL routing.


The Router

In an MVC framework, all HTTP requests usually go to a single controller, and the controller determines which function to execute based on the requested URL. We are going to do this with a class called Router. It will accept a string (the requested page) and will return the name of the function that the controller should execute. You can think of it as a phone book that looks up function names instead of numbers.

Here is the completed class’s structure; just save this to a file called router.php:

class Router{
	private $routes;
	function __construct(){
		$this->routes = array();
	}
	public function lookup($query)
	{
		if(array_key_exists($query, $this->routes))
		{
			return $this->routes[$query];
		}
		else
		{
			return false;
		}
	}
}

This class has one private property called routes, which is the “phone book” for our controllers. There’s also a simple function called lookup(), which returns a string if the path exists in the routes property. To save time, I will list all ten routes that our controller will handle:

	function __construct(){
		$this->routes = array(
			"home"     => "indexPage",
			"signup"   => "signUp",
			"login"    => "login",
			"buddies"  => "buddies",
			"ribbit"   => "newRibbit",
			"logout"   => "logout",
			"public"   => "publicPage",
			"profiles" => "profiles",
			"unfollow" => "unfollow",
			"follow"   => "follow"
		);
	}

Each entry follows the format 'url' => 'function name'. For example, if someone goes to ribbit.com/home, the router tells the controller to execute the indexPage() function.
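As a quick standalone sketch of the lookup itself (routes trimmed for brevity):

```php
<?php
// Mini version of Router::lookup(): return the handler name, or false.
$routes = array("home" => "indexPage", "signup" => "signUp");

function lookup($routes, $query) {
    return array_key_exists($query, $routes) ? $routes[$query] : false;
}

echo lookup($routes, "home"); // indexPage
var_dump(lookup($routes, "nope")); // bool(false)
```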

The router is only half the solution; we need to tell Apache to redirect all traffic to the controller. We’ll achieve this by creating a file called .htaccess in the root directory of the site and adding the following to the file:

RewriteEngine On
RewriteRule ^/?Resource/(.*)$ /$1 [L]
RewriteRule ^$ /home [redirect]
RewriteRule ^([a-zA-Z]+)/?([a-zA-Z0-9/]*)$ /app.php?page=$1&query=$2 [L]

This may seem a little intimidating if you’ve never used Apache’s mod_rewrite. But don’t worry; I’ll walk you through it line by line.


The first line tells Apache to enable mod_rewrite; the remaining lines are the rewrite rules. With mod_rewrite, you can take an incoming request with a certain URL and pass the request onto a different file. In our case, we want all requests to be handled by a single file so that we can process them with the controller. The mod_rewrite module also lets us have URLs like ribbit.com/profile/username instead of ribbit.com/profile.php?username=username, making the overall feel of your app more professional.

I said, we want all requests to go to a single file, but that’s really not accurate. We want Apache to normally handle requests for resources like images, CSS files, etc. The first rewrite rule tells Apache to handle requests that start with Resource/ in a regular fashion. It’s a regular expression that takes everything after the word Resource/ (notice the grouping brackets) and uses it as the real URL to the file. So for example: the link ribbit.com/Resource/css/main.css loads the file located at ribbit.com/css/main.css.

The next rule tells Apache to redirect blank requests (i.e. a request to the websites root) to /home.

The word “redirect” in the square brackets at the end of the line tells Apache to actually redirect the browser, as opposed to rewriting one URL to another (like in the previous rule).


The last rule is the one we came for; it takes all requests (other than those that start with Resource/) and sends them to a PHP file called app.php. That is the file that loads the controller and runs the whole application.

The “^” symbol represents the beginning of the string and the “$” represents the end. So the regular expression can be translated into English as: “Take everything from the beginning of the URL until the first slash, and put it in group 1. Then take everything after the slash, and put it in group 2. Finally, pass the link to Apache as if it said app.php?page=group1&query=group2.” The “[L]” that is in the first and third line tells Apache to stop after that line. So if the request is a resource URL, it shouldn’t continue to the next rule; it should break after the first one.
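You can test the pattern from the third rule yourself with PHP’s preg_match() (the sample URL is made up; Apache applies the same pattern internally):

```php
<?php
// Same pattern as the third RewriteRule, applied to a sample request path.
preg_match('#^([a-zA-Z]+)/?([a-zA-Z0-9/]*)$#', 'profiles/frogger', $m);
echo $m[1] . "\n"; // profiles -> becomes page=profiles
echo $m[2] . "\n"; // frogger  -> becomes query=frogger
```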

I hope all that made sense; the following picture better illustrates what’s going on.

If you are still unclear on the actual regular expression, then we have a very nice article that you can read.

Now that we have everything set up URL-wise, let’s create the controller.


The Controller

The controller is where most of the magic happens; all the other pieces of the app, including the model and router, connect through here. Let’s begin by creating a file called controller.php and entering the following:

require("model.php");
require("router.php");
class Controller{
	private $model;
	private $router;
	//Constructor
	function __construct(){
		//initialize private variables
		$this->model = new Model();
		$this->router = new Router();
        //Process Query String
        $queryParams = false;
        if(strlen($_GET['query']) > 0)
        {
            $queryParams = explode("/", $_GET['query']);
        }
        $page = $_GET['page'];
		//Handle Page Load
		$endpoint = $this->router->lookup($page);
		if($endpoint === false)
		{
			header("HTTP/1.0 404 Not Found");
		}
		else
		{
            $this->$endpoint($queryParams);
		}
	}


We first load our model and router files, and we then create a class called Controller. It has two private variables: one for the model and one for the router. Inside the constructor, we initialize these variables and process the query string.

If you remember, the query can contain multiple values (we wrote in the .htaccess file that everything after the first slash gets put in the query, including any slashes that follow). So we split the query string by slashes, allowing us to pass multiple query parameters if needed.

Next, we pass whatever was in the $page variable to the router to determine the function to execute. If the router returns a string, then we will call the specified function and pass it the query parameters. If the router returns false, the controller sends the 404 status code. You can redirect the page to a custom 404 view if you so desire, but I’ll keep things simple.
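The key trick here is PHP’s variable method call: $this->$endpoint(...) invokes whatever method name the string holds. A minimal sketch (the class and sample data are illustrative, not part of the app):

```php
<?php
// Demonstrates variable method dispatch, as used by the controller.
class DemoController {
    public function indexPage($params) {
        return "index with " . count($params) . " params";
    }
}

$endpoint = "indexPage";                  // what the router returned
$queryParams = explode("/", "frogger/2"); // ["frogger", "2"]
$c = new DemoController();
echo $c->$endpoint($queryParams); // index with 2 params
```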

The framework is starting to take shape; you can now call a specific function based on a URL. The next step is to add a few functions to the controller class to take care of the lower-level tasks, such as loading a view and redirecting the page.

The first function simply redirects the browser to a different page. We do this a lot, so it’s a good idea to make a function for it:

private function redirect($url){
    header("Location: /" . $url);
}

The next two functions load a view and a page, respectively:

private function loadView($view, $data = null){
    if (is_array($data))
    {
        extract($data);
    }
    require("Views/" . $view . ".php");
}
private function loadPage($user, $view, $data = null, $flash = false){
    $this->loadView("header", array('User' => $user));
    if ($flash !== false)
    {
        $flash->display();
    }
    $this->loadView($view, $data);
    $this->loadView("footer");
}

The first function loads a single view from the “Views” folder, optionally extracting the variables from the passed array. The second function is the one we will actually call from page handlers: it loads the header and footer (which are the same on all pages) around the specified view, along with any flash message (i.e. an error message, greeting, etc.).
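The extract() call is what lets a view refer to plain variables like $User instead of digging into an array. A tiny sketch (sample data is mine):

```php
<?php
// extract() turns array keys into local variables for the included view.
$data = array('User' => 'frogger', 'count' => 3);
extract($data);
echo $User . "\n";  // frogger
echo $count . "\n"; // 3
```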

There is one last function that we need to implement, and it is required on all pages: the checkAuth() function. This function will check if a user is signed in and, if so, pass the user’s data to the page. Otherwise, it returns false. Here is the function:

private function checkAuth(){
    if(isset($_COOKIE['Auth']))
    {
        return $this->model->userForAuth($_COOKIE['Auth']);
    }
    else
    {
        return false;
    }
}

We first check whether or not the Auth cookie is set. This is where the hash we talked about earlier will be placed. If the cookie exists, then the function tries to verify it with the database, returning either the user on a successful match or false if it’s not in the table.

Now let’s implement that function in the model class.


A Few Odds and Ends

In the Model class, right after the exists() function, add the following function:

public function userForAuth($hash){
    $query = "SELECT Users.* FROM Users JOIN (SELECT username FROM UserAuth WHERE hash = '";
    $query .= $hash . "' LIMIT 1) AS UA WHERE Users.username = UA.username LIMIT 1";
    $res = $this->db->query($query);
    if($res->num_rows > 0)
    {
        return $res->fetch_object();
    }
    else
    {
        return false;
    }
}

If you remember our tables, we have a UserAuth table that contains the hash along with a username. This SQL query retrieves the row that contains the hash from the cookie and returns the user with the matching username.

That’s all we have to do in this class for now. Let’s go back into the controller.php file and implement the Flash class.

In the loadPage() function, there was an option to pass a flash object, a message that appears above all the content.

For example, if an unauthenticated user tries to post something, the app displays a message similar to, “You have to be signed in to perform that action.” There are different kinds of flashes: error, warning, and notice. I decided it is easier to create a Flash class than to pass multiple variables (like msg and type) around. Additionally, the class has the ability to output a flash’s HTML.

Here is the complete Flash class; you can add it to controller.php before the Controller class definition:

class Flash{
    public $msg;
    public $type;
    function __construct($msg, $type)
    {
        $this->msg = $msg;
        $this->type = $type;
    }
    public function display(){
        echo "<div class=\"flash " . $this->type . "\">" . $this->msg . "</div>";
    }
}

This class is straightforward. It has two properties and a function to output the flash’s HTML.
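To see the markup it emits, here is the class again as a self-contained snippet with a sample message:

```php
<?php
// The Flash class from above, repeated so this snippet runs standalone.
class Flash {
    public $msg;
    public $type;
    function __construct($msg, $type) {
        $this->msg = $msg;
        $this->type = $type;
    }
    public function display() {
        echo "<div class=\"flash " . $this->type . "\">" . $this->msg . "</div>";
    }
}

$flash = new Flash("Welcome Back, Kermit", "notice");
$flash->display();
// <div class="flash notice">Welcome Back, Kermit</div>
```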

We now have all the pieces needed to start displaying pages, so let’s create the app.php file. Create the file and insert the following code:

<?php
	require("controller.php");
	$app = new Controller();

And that’s it! The controller reads the request from the GET variable, passes it to the router, and calls the appropriate function. Let’s create some of the views to finally get something displayed in the browser.


The Views

Create a folder in the root of your site called Views. As you may have already guessed, this directory will contain all the actual views. If you are unfamiliar with the concept of a view, you can think of views as files that generate the pieces of HTML that build a page. Basically, we’ll have a view for the header, one for the footer, and one for each page. These pieces combine into the final result (i.e. header + page_view + footer = final_page).

Let’s start with the footer; it is just standard HTML. Create a file called footer.php inside the Views folder and add the following HTML:

</div></div><footer><div class="wrapper">
			Ribbit - A Twitter Clone Tutorial<img src="http://cdn.tutsplus.com/net.tutsplus.com/authors/jeremymcpeak//Resource/gfx/logo-nettuts.png"></div></footer></body></html>

I think this demonstrates two things very well:

  • These are simply pieces of an actual page.
  • To access the images that are in the gfx folder, I added Resource/ to the beginning of the path (to match the mod_rewrite rule).

Next, let's create the header.php file. The header is a bit more complicated because it must determine if the user is signed in. If the user is logged in, it displays the menu bar; otherwise, it displays a login form. Here is the complete header.php file:

<!DOCTYPE html><html><head><link rel="stylesheet/less" href="/Resource/style.less"><script src="/Resource/less.js"></script></head><body><header><div class="wrapper"><img src="http://cdn.tutsplus.com/net.tutsplus.com/authors/jeremymcpeak//Resource/gfx/logo.png"><span>Twitter Clone</span><?php if($User !== false){ ?><nav><a href="/buddies">Your Buddies</a><a href="/public">Public Ribbits</a><a href="/profiles">Profiles</a></nav><form action="/logout" method="get"><input type="submit" id="btnLogOut" value="Log Out"></form><?php }else{ ?><form method="post" action="/login"><input name="username" type="text" placeholder="username"><input name="password" type="password" placeholder="password"><input type="submit" id="btnLogIn" value="Log In"></form><?php } ?></div></header><div id="content"><div class="wrapper">

I'm not going to explain much of the HTML. Overall, this view loads in the CSS style sheet and builds the correct header based on the user's authentication status. This is accomplished with a simple if statement and the variable passed from the controller.

The last view for the homepage is the actual home.php view. This view contains the greeting picture and signup form. Here is the code for home.php:

<img src="http://cdn.tutsplus.com/net.tutsplus.com/authors/jeremymcpeak//Resource/gfx/frog.jpg"><div class="panel right"><h1>New to Ribbit?</h1><p><form action="/signup" method="post"><input name="email" type="text" placeholder="Email"><input name="username" type="text" placeholder="Username"><input name="name" type="text" placeholder="Full Name"><input name="password" type="password" placeholder="Password"><input name="password2" type="password" placeholder="Confirm Password"><input type="submit" value="Create Account"></form></p></div>

Together, these three views complete the homepage. Now let's go write the function for the home page.


The Home Page

We need to write a function in the Controller class called indexPage() to load the home page (this is what we set up in the router class). The following complete function should go in the Controller class after the checkAuth() function:

private function indexPage($params){
  $user = $this->checkAuth();
  if($user !== false) { $this->redirect("buddies"); }
  else
  {
    $flash = false;
    if($params !== false)
    {
      $flashArr = array(
        "0" => new Flash("Your Username and/or Password was incorrect.", "error"),
        "1" => new Flash("There's already a user with that email address.", "error"),
        "2" => new Flash("That username has already been taken.", "error"),
        "3" => new Flash("Passwords don't match.", "error"),
        "4" => new Flash("Your Password must be at least 6 characters long.", "error"),
        "5" => new Flash("You must enter a valid Email address.", "error"),
        "6" => new Flash("You must enter a username.", "error"),
        "7" => new Flash("You have to be signed in to access that page.", "warning")
      );
      $flash = $flashArr[$params[0]];
    }
    $this->loadPage($user, "home", array(), $flash);
  }
}

The first two lines check if the user is already signed in. If so, the function redirects the user to the "buddies" page, where they can read their friends' posts and view their profile. If the user is not signed in, the function continues to load the home page, checking whether there are any flashes to display. For instance, if the user goes to ribbit.com/home/0, this function shows the first error, and so on for the other seven flashes. Afterwards, we call the loadPage() function to display everything on the screen.

At this point, if you have everything set up correctly (i.e. Apache and our code so far), you should be able to go to the root of your site (e.g. localhost) and see the home page.

Congratulations! It's smooth sailing from here on out... well, at least smoother sailing. It's just a matter of repeating the previous steps for the other nine functions that we defined in the router.


Rinse and Repeat

The next logical step is to create the signup function; you can add this right after indexPage():

private function signUp(){
  if($_POST['email'] == "" || strpos($_POST['email'], "@") === false){
    $this->redirect("home/5");
  }
  else if($_POST['username'] == ""){
    $this->redirect("home/6");
  }
  else if(strlen($_POST['password']) < 6)
  {
    $this->redirect("home/4");
  }
  else if($_POST['password'] != $_POST['password2'])
  {
    $this->redirect("home/3");
  }
  else{
    $pass = hash('sha256', $_POST['password']);
    $signupInfo = array(
      'username' => $_POST['username'],
      'email' => $_POST['email'],
      'password' => $pass,
      'name' => $_POST['name']
    );
    $resp = $this->model->signupUser($signupInfo);
    if($resp === true)
    {
      $this->redirect("buddies/1");
    }
    else
    {
      $this->redirect("home/" . $resp);
    }
  }
}

This function goes through a standard signup process by making sure everything checks out. If any of the user's info doesn't pass, the function redirects the user back to the home page with the appropriate error code for the indexPage() function to display.

The checks for existing usernames and email addresses cannot be performed here; they need to happen in the Model class, because they require a connection to the database. Let's go back to the Model class and implement the signupUser() function. You should put this right after the userForAuth() function:

public function signupUser($user){
  $emailCheck = $this->exists("Users", array("email" => $user['email']));
  if($emailCheck){
    return 1;
  }
  else {
    $userCheck = $this->exists("Users", array("username" => $user['username']));
    if($userCheck){
      return 2;
    }
    else{
      $user['created_at'] = date( 'Y-m-d H:i:s');
      $user['gravatar_hash'] = md5(strtolower(trim($user['email'])));
      $this->insert("Users", $user);
      $this->authorizeUser($user);
      return true;
    }
  }
}

We use our exists() function to check the provided email and username, returning an error code if either already exists. If everything passes, we add the final two fields, created_at and gravatar_hash, and insert the user into the database.

Before returning true, we authorize the user: this sets the Auth cookie and inserts the credentials into the UserAuth table. Let's add the authorizeUser() function now:

public function authorizeUser($user){
  $chars = "qazwsxedcrfvtgbyhnujmikolp1234567890QAZWSXEDCRFVTGBYHNUJMIKOLP";
  $hash = sha1($user['username']);
  for($i = 0; $i<12; $i++)
  {
    $hash .= $chars[rand(0, 61)];
  }
  $this->insert("UserAuth", array("hash" => $hash, "username" => $user['username']));
  setcookie("Auth", $hash);
}

This function builds the unique hash for a user on signup and login. This isn't a very secure method of generating hashes, but to keep things simple, I combine the sha1 hash of the username with twelve random alphanumeric characters.

It's good to attach some of the user's info to the hash because it helps make the hashes unique to that user. There is a finite set of character combinations, so with enough users you'll eventually get a collision. But if you add something unique, such as the user's ID, to the hash, then you are guaranteed a unique hash for every user.
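As an aside (my suggestion, not part of the tutorial's code): on PHP 7+, a token of the same 52-character length can come from a cryptographically secure source, which avoids rand()'s predictability:

```php
<?php
// Hardening sketch: 26 random bytes encode to 52 hex characters.
$hash = bin2hex(random_bytes(26));
echo strlen($hash); // 52
```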


Login and Logout

To finish the functions for the home page, let's implement the login() and logout() functions. Add the following to the Controller class, right after the signUp() function:

private function login(){
  $pass = hash('sha256', $_POST['password']);
  $loginInfo = array(
    'username' => $_POST['username'],
    'password' => $pass
  );
  if($this->model->attemptLogin($loginInfo))
  {
    $this->redirect("buddies/0");
  }
  else
  {
    $this->redirect("home/0");
  }
}

This simply takes the POST fields from the login form and attempts to log in. On a successful login, it takes the user to the "buddies" page; otherwise, it redirects back to the homepage to display the appropriate error. Next is the logout() function:

private function logout() {
  $this->model->logoutUser($_COOKIE['Auth']);
  $this->redirect("home");
}

The logout() function is even simpler than login(). It executes one of Model's functions to erase the cookie and remove the entry from the database.

Let's jump over to the Model class and add the functions these two rely on. The first is attemptLogin(), which tries to log the user in and returns true or false; then we have logoutUser():

public function attemptLogin($userInfo){
  if($this->exists("Users", $userInfo)){
    $this->authorizeUser($userInfo);
    return true;
  }
  else{
    return false;
  }
}
public function logoutUser($hash){
  $this->delete("UserAuth", array("hash" => $hash));
  setcookie ("Auth", "", time() - 3600);
}

The Buddies Page

Hang with me; we are getting close to the end! Let's build the "Buddies" page. This page contains your profile information and a list of posts from you and the people you follow. Let's start with the actual view, so create a file called buddies.php in the Views folder and insert the following:

<div id="createRibbit" class="panel right"><h1>Create a Ribbit</h1><p><form action="/ribbit" method="post"><textarea name="text" class="ribbitText"></textarea><input type="submit" value="Ribbit!"></form></p></div><div id="ribbits" class="panel left"><h1>Your Ribbit Profile</h1><div class="ribbitWrapper"><img class="avatar" src="http://www.gravatar.com/avatar/<?php echo $User->gravatar_hash; ?>"><span class="name"><?php echo $User->name; ?></span> @<?php echo $User->username; ?><p><?php echo $userData->ribbit_count . " "; echo ($userData->ribbit_count != 1) ? "Ribbits" : "Ribbit"; ?><span class="spacing"><?php echo $userData->followers . " "; echo ($userData->followers != 1) ? "Followers" : "Follower"; ?></span><span class="spacing"><?php echo $userData->following . " Following"; ?></span><br><?php echo $userData->ribbit; ?></p></div></div><div class="panel left"><h1>Your Ribbit Buddies</h1><?php foreach($fribbits as $ribbit){ ?><div class="ribbitWrapper"><img class="avatar" src="http://www.gravatar.com/avatar/<?php echo $ribbit->gravatar_hash; ?>"><span class="name"><?php echo $ribbit->name; ?></span> @<?php echo $ribbit->username; ?><span class="time"><?php
                    $timeSince = time() - strtotime($ribbit->created_at);
                    if($timeSince < 60)
                    {
                        echo $timeSince . "s";
                    }
                    else if($timeSince < 3600)
                    {
                        echo floor($timeSince / 60) . "m";
                    }
                    else if($timeSince < 86400)
                    {
                        echo floor($timeSince / 3600) . "h";
                    }
                    else{
                        echo floor($timeSince / 86400) . "d";
                    }
                ?></span><p><?php echo $ribbit->ribbit; ?></p></div><?php } ?></div>

The first div is the form for creating new "ribbits". The next div displays the user's profile information, and the last section is the foreach loop that displays each ribbit. Again, I'm not going to go into too much detail for the sake of time, but everything here is pretty straightforward.
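The relative-time logic embedded in the view is worth seeing on its own; here is a standalone copy extracted into a helper function (the function name is mine):

```php
<?php
// Standalone copy of the view's relative-time logic: format a number of
// elapsed seconds as Twitter-style "45s" / "1m" / "2h" / "2d".
function timeSince($seconds) {
    if ($seconds < 60)    return $seconds . "s";
    if ($seconds < 3600)  return floor($seconds / 60) . "m";
    if ($seconds < 86400) return floor($seconds / 3600) . "h";
    return floor($seconds / 86400) . "d";
}

echo timeSince(45) . "\n";     // 45s
echo timeSince(90) . "\n";     // 1m
echo timeSince(7200) . "\n";   // 2h
echo timeSince(172800) . "\n"; // 2d
```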

Now, in the Controller class, we have to add the buddies() function:

private function buddies($params){
  $user = $this->checkAuth();
  if($user === false){ $this->redirect("home/7"); }
  else
  {
    $userData = $this->model->getUserInfo($user);
    $fribbits = $this->model->getFollowersRibbits($user);
    $flash = false;
    if(isset($params[0]))
    {
      $flashArr = array(
        "0" => new Flash("Welcome Back, " . $user->name, "notice"),
        "1" => new Flash("Welcome to Ribbit, Thanks for signing up.", "notice"),
        "2" => new Flash("You have exceeded the 140 character limit for Ribbits", "error")
      );
      $flash = $flashArr[$params[0]];
    }
    $this->loadPage($user, "buddies", array('User' => $user, "userData" => $userData, "fribbits" => $fribbits), $flash);
  }
}

This function follows the same structure as the indexPage() function: we first check if the user is logged in and redirect them to the home page if not.

We then call two functions from the Model class: one to get the user's profile information and one to get the posts from the user's followers.

We have three possible flashes here: one for signup, one for login and one for if the user exceeds the 140 character limit on a new ribbit. Finally, we call the loadPage() function to display everything.

Now, in the Model class, we need to add the two functions we called above. First up is the getUserInfo() function:

public function getUserInfo($user)
{
  $query = "SELECT ribbit_count, IF(ribbit IS NULL, 'You have no Ribbits', ribbit) as ribbit, followers, following ";
  $query .= "FROM (SELECT COUNT(*) AS ribbit_count FROM Ribbits WHERE user_id = " . $user->id . ") AS RC ";
  $query .= "LEFT JOIN (SELECT user_id, ribbit FROM Ribbits WHERE user_id = " . $user->id . " ORDER BY created_at DESC LIMIT 1) AS R ";
  $query .= "ON R.user_id = " . $user->id . " JOIN ( SELECT COUNT(*) AS followers FROM Follows WHERE followee_id = " . $user->id;
  $query .=  ") AS FE JOIN (SELECT COUNT(*) AS following FROM Follows WHERE user_id = " . $user->id . ") AS FR;";
  $res = $this->db->query($query);
  return $res->fetch_object();
}

The function itself is simple: we execute a SQL query and return the result. The query, on the other hand, may seem a bit complex. It combines the information needed for the profile section into a single row: the number of ribbits you've made, your latest ribbit, how many followers you have, and how many people you are following. It essentially runs one SELECT subquery for each of these properties and then joins everything together.

Next, we have the getFollowersRibbits() function, which looks like this:

public function getFollowersRibbits($user)
{
  $query = "SELECT name, username, gravatar_hash, ribbit, Ribbits.created_at FROM Ribbits JOIN (";
  $query .= "SELECT Users.* FROM Users LEFT JOIN (SELECT followee_id FROM Follows WHERE user_id = ";
  $query .= $user->id . " ) AS Follows ON followee_id = id WHERE followee_id = id OR id = " . $user->id;
  $query .= ") AS Users on user_id = Users.id ORDER BY Ribbits.created_at DESC LIMIT 10;";
  $res = $this->db->query($query);
  $fribbits = array();
  while($row = $res->fetch_object())
  {
    array_push($fribbits, $row);
  }
  return $fribbits;
}

Similar to the previous function, the only complicated part here is the query. We need the following information to display for each post: name, username, gravatar image, the actual ribbit, and the date when the ribbit was created. This query sorts through your posts and the posts from the people you follow, and returns the latest ten ribbits to display on the buddies page.

You should now be able to sign up, log in, and view the buddies page. We still can't create ribbits, though, so let's tackle that next.


Posting Your First Ribbit

This step is pretty easy. We don't have a view to work with; we just need a function in the Controller and Model classes. In Controller, add the following function:

private function newRibbit($params){
  $user = $this->checkAuth();
  if($user === false){ $this->redirect("home/7"); }
  else{
    // Note: mysql_real_escape_string() is deprecated and requires a mysql_* connection;
    // in a real application, use mysqli's real_escape_string() or prepared statements.
    $text = mysql_real_escape_string($_POST['text']);
    if(strlen($text) > 140)
    {
      $this->redirect("buddies/2");
    }
    else
    {
      $this->model->postRibbit($user, $text);
      $this->redirect("buddies");
    }
  }
}

Again we start by checking if the user is logged in, and if so, we ensure the post is not over the 140 character limit. We'll then call postRibbit() from the model and redirect back to the buddies page.

Now in the Model class, add the postRibbit() function:

public function postRibbit($user, $text){
  $r = array(
    "ribbit"     => $text,
    "created_at" => date('Y-m-d H:i:s'),
    "user_id"    => $user->id
  );
  $this->insert("Ribbits", $r);
}

We are back to standard queries with this one; just combine the data into an array and insert it with our insert() function. You should now be able to post ribbits, so go try a few. We still have a little more work to do, so come back when you're done.


The Last Two Pages

The next two pages have almost identical functions in the controller so I'm going to post them together:

private function publicPage($params){
  $user = $this->checkAuth();
  if($user === false){ $this->redirect("home/7"); }
  else
  {
    $q = false;
    if(isset($_POST['query']))
    {
      $q = $_POST['query'];
    }
    $ribbits = $this->model->getPublicRibbits($q);
    $this->loadPage($user, "public", array('ribbits' => $ribbits));
  }
}
private function profiles($params){
  $user = $this->checkAuth();
  if($user === false){ $this->redirect("home/7"); }
  else{
    $q = false;
    if(isset($_POST['query']))
    {
      $q = $_POST['query'];
    }
    $profiles = $this->model->getPublicProfiles($user, $q);
    $this->loadPage($user, "profiles", array('profiles' => $profiles));
  }
}

Both functions retrieve an array of data: one fetches ribbits, the other profiles. Each accepts an optional search term via a POSTed query field, and each pulls its data from the Model. Now let's add their corresponding views to the Views folder.

For the ribbits just create a file called public.php and put the following inside:

<div class="panel right">
  <h1>Search Ribbits</h1>
  <p></p>
  <form action="/public" method="post">
    <input name="query" type="text">
    <input type="submit" value="Search!">
  </form>
  <p></p>
</div>
<div id="ribbits" class="panel left">
  <h1>Public Ribbits</h1>
  <?php foreach($ribbits as $ribbit){ ?>
    <div class="ribbitWrapper">
      <img class="avatar" src="http://www.gravatar.com/avatar/<?php echo $ribbit->gravatar_hash; ?>">
      <span class="name"><?php echo $ribbit->name; ?></span> @<?php echo $ribbit->username; ?>
      <span class="time"><?php
        $timeSince = time() - strtotime($ribbit->created_at);
        if($timeSince < 60)
        {
            echo $timeSince . "s";
        }
        else if($timeSince < 3600)
        {
            echo floor($timeSince / 60) . "m";
        }
        else if($timeSince < 86400)
        {
            echo floor($timeSince / 3600) . "h";
        }
        else
        {
            echo floor($timeSince / 86400) . "d";
        }
      ?></span>
      <p><?php echo $ribbit->ribbit; ?></p>
    </div>
  <?php } ?>
</div>

The first div is the ribbit search form, and the second div displays the public ribbits.

And here is the last view which is the profiles.php view:

<div class="panel right">
  <h1>Search for Profiles</h1>
  <p></p>
  <form action="/profiles" method="post">
    <input name="query" type="text">
    <input type="submit" value="Search!">
  </form>
  <p></p>
</div>
<div id="ribbits" class="panel left">
  <h1>Public Profiles</h1>
  <?php foreach($profiles as $user){ ?>
    <div class="ribbitWrapper">
      <img class="avatar" src="http://www.gravatar.com/avatar/<?php echo $user->gravatar_hash; ?>">
      <span class="name"><?php echo $user->name; ?></span> @<?php echo $user->username; ?>
      <span class="time">
        <?php echo $user->followers; echo ($user->followers == 1) ? " follower " : " followers "; ?>
        <a href="<?php echo ($user->followed) ? "unfollow" : "follow"; ?>/<?php echo $user->id; ?>"><?php echo ($user->followed) ? "unfollow" : "follow"; ?></a>
      </span>
      <p><?php echo $user->ribbit; ?></p>
    </div>
  <?php } ?>
</div>

This is very similar to the public.php view.

The last step needed to get these two pages working is to add their dependency functions to the Model class. Let's start with the function to get the public ribbits. Add the following to the Model class:

public function getPublicRibbits($q){
  if($q === false)
  {
    $query = "SELECT name, username, gravatar_hash, ribbit, Ribbits.created_at FROM Ribbits JOIN Users ";
    $query .= "ON user_id = Users.id ORDER BY Ribbits.created_at DESC LIMIT 10;";
  }
  else{
    $query = "SELECT name, username, gravatar_hash, ribbit, Ribbits.created_at FROM Ribbits JOIN Users ";
    $query .= "ON user_id = Users.id WHERE ribbit LIKE \"%" . $q ."%\" ORDER BY Ribbits.created_at DESC LIMIT 10;";
  }
  $res = $this->db->query($query);
  $ribbits = array();
  while($row = $res->fetch_object())
  {
    array_push($ribbits, $row);
  }
  return $ribbits;
}

If a search query was passed, then we only select ribbits that match the provided search. Otherwise, it just takes the ten newest ribbits. The next function is a bit more complicated as we need to make multiple SQL queries. Enter this function to get the public profiles:

public function getPublicProfiles($user, $q){
  if($q === false)
  {
    $query = "SELECT id, name, username, gravatar_hash FROM Users WHERE id != " . $user->id;
    $query .= " ORDER BY created_at DESC LIMIT 10";
  }
  else{
    $query = "SELECT id, name, username, gravatar_hash FROM Users WHERE id != " . $user->id;
    $query .= " AND (name LIKE \"%" . $q . "%\" OR username LIKE \"%" . $q . "%\") ORDER BY created_at DESC LIMIT 10";
  }
  $userRes = $this->db->query($query);
  if($userRes->num_rows > 0){
    $userArr = array();
    $query = "";
    while($row = $userRes->fetch_assoc()){
      $i = $row['id'];
      $query .= "SELECT " . $i . " AS id, followers, IF(ribbit IS NULL, 'This user has no ribbits.', ribbit) ";
      $query .= "AS ribbit, followed FROM (SELECT COUNT(*) as followers FROM Follows WHERE followee_id = " . $i . ") ";
      $query .= "AS F LEFT JOIN (SELECT user_id, ribbit FROM Ribbits WHERE user_id = " . $i;
      $query .= " ORDER BY created_at DESC LIMIT 1) AS R ON R.user_id = " . $i . " JOIN (SELECT COUNT(*) ";
      $query .= "AS followed FROM Follows WHERE followee_id = " . $i . " AND user_id = " . $user->id . ") AS F2 LIMIT 1;";
      $userArr[$i] = $row;
    }
    $this->db->multi_query($query);
    $profiles = array();
    do{
      $row = $this->db->store_result()->fetch_object();
      $i = $row->id;
      $userArr[$i]['followers'] = $row->followers;
      $userArr[$i]['followed'] = $row->followed;
      $userArr[$i]['ribbit'] = $row->ribbit;
      array_push($profiles, (object)$userArr[$i]);
    }while($this->db->next_result());
    return $profiles;
  }
  else
  {
    // Return an empty array so the view's foreach loop has something to iterate
    return array();
  }
}

It's a lot to take in, so I'll go over it slowly. The first if...else statement checks whether or not the user passed a search query, and generates the appropriate SQL to retrieve ten users. We then make sure that the query returned some users and, if so, generate a second query for each user, retrieving their latest ribbit and profile info.

After that, we send all the queries to the database with the multi_query command to minimize unnecessary trips to the database.

Then, we take the results and combine them with the user's information from the first query. All this data is returned to display in the profiles view.

If you have done everything correctly, you should be able to traverse through all the pages and post ribbits. The only thing we have left to do is add the functions to follow and unfollow other people.


Tying up the Loose Ends

There is no view associated with these functions, so these will be quick. Let's start with the functions in the Controller class:

private function follow($params){
  $user = $this->checkAuth();
  if($user === false){ $this->redirect("home/7"); }
  else{
    $this->model->follow($user, $params[0]);
    $this->redirect("profiles");
  }
}
private function unfollow($params){
  $user = $this->checkAuth();
  if($user === false){ $this->redirect("home/7"); }
  else{
    $this->model->unfollow($user, $params[0]);
    $this->redirect("profiles");
  }
}

These functions, as you can probably see, are almost identical. The only difference is that one adds a record to the Follows table and one removes a record. Now let's finish it up with the functions in the Model class:

public function follow($user, $fId){
  $this->insert("Follows", array("user_id" => $user->id, "followee_id" => $fId));
}
public function unfollow($user, $fId){
  $this->delete("Follows", array("user_id" => $user->id, "followee_id" => $fId));
}

These functions are basically the same; they only differ by the methods they call.

The site is now fully operational! The last thing I want to add is another .htaccess file, this time inside the Views folder. Here are its contents:

Order allow,deny
Deny from all

This is not strictly necessary, but it is good to restrict access to private files.
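The Order/Deny syntax above is for Apache 2.2. If your server runs Apache 2.4 or later (an assumption about your environment), the equivalent access-control directive is:

```apache
# Apache 2.4+ replacement for "Order allow,deny / Deny from all"
Require all denied
```

Either way, requests for files inside the Views folder will be refused by the web server.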


Conclusion

This has been a very long article, but we covered a lot! We set up a database, created our very own MVC framework, and built a Twitter clone from scratch!

Please note that, due to length constraints, I had to omit many of the features that you'd find in a real production application, such as Ajax, protection against SQL injection, and a character counter for the Ribbit box (and probably a lot of other things as well). That said, overall, I think we accomplished a great deal!
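If you do extend the project, parameterizing the queries should be the first fix. As a rough sketch only (assuming $db is the same mysqli connection that the Model wraps, and $q holds the user-supplied search term), the public-ribbits search could become a prepared statement, so that $q is bound as data rather than concatenated into the SQL:

```php
<?php
// Sketch, not the tutorial's actual code: a parameterized version of the
// public-ribbits search. Assumes $db is the tutorial's mysqli connection.
$stmt = $db->prepare(
    "SELECT name, username, gravatar_hash, ribbit, Ribbits.created_at
     FROM Ribbits JOIN Users ON user_id = Users.id
     WHERE ribbit LIKE CONCAT('%', ?, '%')
     ORDER BY Ribbits.created_at DESC LIMIT 10"
);
$stmt->bind_param('s', $q); // bound as data, never interpolated into SQL
$stmt->execute();
$ribbits = $stmt->get_result()->fetch_all(MYSQLI_ASSOC); // get_result() requires mysqlnd
```

The same pattern applies to every query in the Model that currently concatenates user input.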

I hope you enjoyed this article; feel free to leave me a comment if you have any thoughts or questions. Thank you for reading!

How I Test


In a recent discussion on Google+, a friend of mine commented, “Test-Driven Development (TDD) and Behavior-Driven Development (BDD) is Ivory Tower BS.” This prompted me to think about my first project, how I felt the same way then, and how I feel about it now. Since that first project, I’ve developed a rhythm of TDD/BDD that not only works for me, but for the client as well.

Ruby on Rails ships with a test suite called Test::Unit, but many developers prefer to use RSpec, Cucumber, or some combination of the two. Personally, I prefer the last option: a combination of both.


RSpec

From the RSpec site:

RSpec is a testing tool for the Ruby programming language. Born under the banner of Behaviour-Driven Development, it is designed to make Test-Driven Development a productive and enjoyable experience.

RSpec provides a powerful DSL that is useful for both unit and integration testing. While I have used RSpec for writing integration tests, I prefer to use it only in a unit testing capacity. Therefore, I will cover how I use RSpec exclusively for unit testing. I recommend reading The RSpec Book by David Chelimsky and others for complete and in-depth RSpec coverage.


Cucumber

Cucumber is an integration and acceptance testing framework that supports Ruby, Java, .NET, Flex, and a host of other web languages and frameworks. Its true power comes from its DSL; not only is it available in plain English, but it has been translated into over forty spoken languages.

With a human-readable acceptance test, you can have the customer sign off on a feature, before writing a single line of code. As with RSpec, I will only be covering Cucumber in the capacity in which I use it. For the complete rundown on Cucumber, check out The Cucumber Book.


The Setup

Let’s first begin a new project, instructing Rails to skip Test Unit. Type the following into a terminal:

rails new how_i_test -T

Within the Gemfile, add:

source 'https://rubygems.org'
...
group :test do
  gem 'capybara'
  gem 'cucumber-rails', require: false
  gem 'database_cleaner'
  gem 'factory_girl_rails'
  gem 'shoulda'
end
group :development, :test do
  gem 'rspec-rails'
end

Here, we've put Cucumber and friends inside the test group block. This ensures that they are loaded only in the Rails test environment. Notice how we load RSpec in both the development and test groups, making it available in both environments. There are a few other gems here, which I'll touch on briefly below. Don't forget to run bundle install to install them.

We need to run these gems’ generators to set them up. You can do that with the following terminal commands:

rails g rspec:install
  create  .rspec
  create  spec
  create  spec/spec_helper.rb
rails g cucumber:install
  create  config/cucumber.yml
  create  script/cucumber
   chmod  script/cucumber
  create  features/step_definitions
  create  features/support
  create  features/support/env.rb
   exist  lib/tasks
  create  lib/tasks/cucumber.rake
    gsub  config/database.yml
    gsub  config/database.yml
   force  config/database.yml

At this point, we could begin writing specs and cukes to test our application, but we can set up a few things to make testing easier. Let’s start in the application.rb file.

module HowITest
  class Application < Rails::Application
    config.generators do |g|
      g.view_specs false
      g.helper_specs false
      g.test_framework :rspec, :fixture => true
      g.fixture_replacement :factory_girl, :dir => 'spec/factories'
    end
  ...
  end
end

Inside the Application class, we override a few of Rails' default generators. With the first two lines, we skip generating view and helper specs.

These tests are not necessary, because we are only using RSpec for unit tests.

The third line informs Rails that we intend to use RSpec as our test framework of choice, and that it should also generate fixtures when generating models. The final line ensures that we use factory_girl for our fixtures, which are created in the spec/factories directory.


Our First Feature

To keep things simple, we'll write a basic feature for signing into our application. For the sake of brevity, I will skip the actual implementation and stick to the testing suite. Here are the contents of features/signing_in.feature:

Feature: Signing In
  In order to use the application
  As a registered user
  I want to sign in through a form
Scenario: Signing in through the form
  Given there is a registered user with email "user@example.com"
  And I am on the sign in page
  When I enter correct credentials
  And I press the sign in button
  Then the flash message should be "Signed in successfully."

When we run this in the terminal with cucumber features/signing_in.feature, we see a lot of output ending with our undefined steps:

Given /^there is a registered user with email "(.*?)"$/ do |arg1|
  pending # express the regexp above with the code you wish you had
end
Given /^I am on the sign in page$/ do
  pending # express the regexp above with the code you wish you had
end
When /^I enter correct credentials$/ do
  pending # express the regexp above with the code you wish you had
end
When /^I press the sign in button$/ do
  pending # express the regexp above with the code you wish you had
end
Then /^the flash message should be "(.*?)"$/ do |arg1|
  pending # express the regexp above with the code you wish you had
end

The next step is to define what we expect each of these steps to do. We express this in features/step_definitions/signing_in_steps.rb, using plain Ruby with Capybara and CSS selectors.

Given /^there is a registered user with email "(.*?)"$/ do |email|
  @user = FactoryGirl.create(:user, email: email)
end
Given /^I am on the sign in page$/ do
  visit sign_in_path
end
When /^I enter correct credentials$/ do
  fill_in "Email", with: @user.email
  fill_in "Password", with: @user.password
end
When /^I press the sign in button$/ do
  click_button "Sign in"
end
Then /^the flash message should be "(.*?)"$/ do |text|
  within(".flash") do
    page.should have_content text
  end
end
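If the regex syntax in these steps is unfamiliar: the quoted portion of the step text lands in a capture group, which Cucumber yields to the block (as email above). A plain-Ruby sketch of the matching, not Cucumber internals:

```ruby
# The step regex captures the quoted value; Cucumber passes each capture
# group to the step's block as an argument.
step = /^there is a registered user with email "(.*?)"$/
match = step.match('there is a registered user with email "user@example.com"')
puts match[1] # prints user@example.com
```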

Within each of the Given, When, and Then blocks, we use the Capybara DSL to define what we expect from each step (except in the first one). In that first Given block, we tell factory_girl to create a user, stored in the @user instance variable for later use. If you run cucumber features/signing_in.feature again, you should see something similar to the following:

Scenario: Signing in through the form                            # features/signing_in.feature:6
    Given there is a registered user with email "user@example.com" # features/step_definitions/signing_in_steps.rb:1
      Factory not registered: user (ArgumentError)
      ./features/step_definitions/signing_in_steps.rb:2:in `/^there is a registered user with email "(.*?)"$/'
      features/signing_in.feature:7:in `Given there is a registered user with email "user@example.com"'

We can see from the error message that our first step fails with an ArgumentError: the user factory is not registered. We could create this factory ourselves, but some of the generator magic we set up earlier will make Rails do it for us. When we generate our user model, we get the user factory for free.

rails g model user email:string password:string
  invoke  active_record
  create    db/migrate/20121218044026_create_users.rb
  create    app/models/user.rb
  invoke    rspec
  create      spec/models/user_spec.rb
  invoke      factory_girl
  create        spec/factories/users.rb

As you can see, the model generator invokes factory_girl and creates the following file:

# spec/factories/users.rb
FactoryGirl.define do
  factory :user do
    email "MyString"
    password "MyString"
  end
end

I won’t go into great depth of factory_girl here, but you can read more in their getting started guide. Don’t forget to run rake db:migrate and rake db:test:prepare to load the new schema. This should get the first step of our feature to pass, and start you down the road of using Cucumber for your integration testing. On each pass of your features, Cucumber will guide you to the pieces that it sees missing to make it pass.


Model Testing with RSpec and Shoulda

I mostly use RSpec to make sure that my models and their methods stay in check. I often use it for some high-level controller testing as well, but that goes into more detail than this guide allows for. We're going to use the same user model that we previously set up with our sign-in feature. Looking back at the output from running the model generator, we can see that we also got user_spec.rb for free. If we run rspec spec/models/user_spec.rb, we should see the following output.

Pending:
  User add some examples to (or delete) /Users/janders/workspace/how_i_test/spec/models/user_spec.rb

And if we open that file, we see:

require 'spec_helper'
describe User do
  pending "add some examples to (or delete) #{__FILE__}"
end

The pending line gives us the output we saw in the terminal. We’ll leverage Shoulda’s ActiveRecord and ActiveModel matchers to ensure our user model matches our business logic.

require 'spec_helper'

describe User do
  context "#fields" do
    it { should respond_to(:email) }
    it { should respond_to(:password) }
    it { should respond_to(:first_name) }
    it { should respond_to(:last_name) }
  end

  context "#validations" do
    it { should validate_presence_of(:email) }
    it { should validate_presence_of(:password) }
    it { should validate_uniqueness_of(:email) }
  end

  context "#associations" do
    it { should have_many(:tasks) }
  end

  describe "#methods" do
    let!(:user) { FactoryGirl.create(:user) }

    it "name should return the users name" do
      user.name.should eql "Testy McTesterson"
    end
  end
end

We set up a few context blocks inside of our first describe block to test things like fields, validations, and associations. While there is no functional difference between a describe and a context block, there is a contextual one: we use describe blocks to set the state of what we are testing, and context blocks to group those tests. This makes our tests more readable and maintainable in the long run.

The first describe allows us to test against the User model in an unmodified state.

We use this unmodified state to test against the database with the Shoulda matchers grouping each by type. The next describe block sets up a user from our previously created user factory. Setting up the user with the let method inside of this block allows us to test an instance of our user model against known attributes.
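A note on let and let!: the block is lazy and memoized, running at most once per example, with the result cached for later accesses (let! simply forces evaluation up front). A plain-Ruby sketch of the idea, not RSpec's actual implementation:

```ruby
# Roughly what `let(:user) { ... }` gives you: lazy, once-per-example
# evaluation with the result cached for subsequent accesses.
calls = 0
memo = nil
user = -> { memo ||= begin calls += 1; "Testy McTesterson" end }

user.call # first access runs the block
user.call # second access reuses the cached value
puts calls # prints 1
```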

Now, when we run rspec spec/models/user_spec.rb, we see that all of our new tests fail.

Failures:

  1) User#methods name should return the users name
     Failure/Error: user.name.should eql "Testy McTesterson"
     NoMethodError:
       undefined method `name' for #<User:0x007ff1d2775170>
     # ./spec/models/user_spec.rb:26:in `block (3 levels) in <top (required)>'

  2) User#validations
     Failure/Error: it { should validate_uniqueness_of(:email) }
       Expected errors to include "has already been taken" when email is set to "arbitrary_string", got no errors
     # ./spec/models/user_spec.rb:15:in `block (3 levels) in <top (required)>'

  3) User#validations
     Failure/Error: it { should validate_presence_of(:password) }
       Expected errors to include "can't be blank" when password is set to nil, got no errors
     # ./spec/models/user_spec.rb:14:in `block (3 levels) in <top (required)>'

  4) User#validations
     Failure/Error: it { should validate_presence_of(:email) }
       Expected errors to include "can't be blank" when email is set to nil, got no errors
     # ./spec/models/user_spec.rb:13:in `block (3 levels) in <top (required)>'

  5) User#associations
     Failure/Error: it { should have_many(:tasks) }
       Expected User to have a has_many association called tasks (no association called tasks)
     # ./spec/models/user_spec.rb:19:in `block (3 levels) in <top (required)>'

  6) User#fields
     Failure/Error: it { should respond_to(:last_name) }
       expected #<User id: nil, email: nil, password: nil, created_at: nil, updated_at: nil> to respond to :last_name
     # ./spec/models/user_spec.rb:9:in `block (3 levels) in <top (required)>'

  7) User#fields
     Failure/Error: it { should respond_to(:first_name) }
       expected #<User id: nil, email: nil, password: nil, created_at: nil, updated_at: nil> to respond to :first_name
     # ./spec/models/user_spec.rb:8:in `block (3 levels) in <top (required)>'

With each of these tests failing, we have the framework we need to add migrations, methods, associations, and validations to our models. As our application evolves, our models expand, and our schema changes, this level of testing protects us against introducing breaking changes.


Conclusion

While we didn't cover too many topics in depth, you should now have a basic understanding of integration and unit testing with Cucumber and RSpec. TDD/BDD is one of those practices that developers either embrace or avoid, but I've found, on more than one occasion, that its benefits far outweigh its costs.

Build Your First Admin Bundle for Laravel


It's hard to deny the fact that the PHP community is excited for Laravel 4. Among other things, the framework leverages the power of Composer, which means it's able to utilize any package or script from Packagist.

In the meantime, Laravel offers "Bundles", which allow us to modularize code for use in future projects. The bundle directory is full of excellent scripts and packages that you can use in your applications. In this lesson, I’ll show you how to build one from scratch!


Wait, What's a Bundle?

Bundles offer an easy way to group related code. If you’re familiar with CodeIgniter, bundles are quite similar to "Sparks". This is apparent when you take a look at the folder structure.

Folder Structure

Creating a bundle is fairly simple. To illustrate the process, we’ll build an admin panel boilerplate that we can use within future projects. Firstly, we need to create an 'admin' directory within our 'bundles' folder. Try to replicate the folder structure from the image above.

Before we begin adding anything to our bundle, we need to register it with Laravel. This is done in your application's bundles.php file. Once you open this file, you should see an array being returned; we simply need to add our bundle and define a handle. This will become the URI at which we access our admin panel.

'admin' => array('handles' => 'admin')

Here, I've named mine "admin," but feel free to call yours whatever you wish.

Once we've got that set up, we need to create a start.php file, where we'll configure a few things, such as our namespaces. If you don't need any of that, your bundle will work as expected without a start file.

Laravel's autoloader class allows us to do a couple of things: map our base controller, and autoload namespaces.

Autoloader::map(array(
    'Admin_Base_Controller' => Bundle::path('admin').'controllers/base.php',
));
Autoloader::namespaces(array(
    'Admin\Models' => Bundle::path('admin').'models',
    'Admin\Libraries' => Bundle::path('admin').'libraries',
));

Namespacing ensures that we don't conflict with any other models or libraries already included in our application. You'll notice that we've opted not to namespace our controllers, to keep things a little simpler.


Publishing Assets

For the admin panel, we'll take advantage of Twitter's Bootstrap, so go grab a copy. We can pop this into a public folder inside our bundle in order to publish to our application later on.

When you're ready to publish them, just run the following command through artisan.

php artisan bundle:publish admin

This will copy the folder structure and files to the bundles directory in our public folder, within the root of the Laravel installation. We can then use this in our bundle's base controller.


Setting up the Base Controller

It's always a smart idea to set up a base controller and extend from there. Here, we can enable RESTful controllers, define the layout, and include any assets. We just need to name this file base.php and pop it into our bundle's controllers directory.

Firstly, let's get some housekeeping out of the way. We'll of course want to use Laravel's restful controllers.

public $restful = true;

And we'll specify a layout that we'll create shortly. If you're not used to controller layouts, then you're in for a treat.

public $layout = 'admin::layouts.main';

The bundle name, followed by two colons, is a paradigm in Laravel we'll be seeing more of in the future, so keep an eye out.

When handling assets within our bundle, we can do things as expected and specify the path from the root of the public folder. Thankfully, Laravel is there to make our lives easier. In our construct, we need to specify the bundle, before adding to our asset containers.

Asset::container('header')->bundle('admin');
Asset::container('footer')->bundle('admin');

If you're unfamiliar with asset containers, don't worry; they're merely sections of a page where you want to house your assets. Here, we'll be including stylesheets in the header, and scripts in the footer.

Now, with that out of the way, we can include our bootstrap styles and scripts easily. Our completed base controller should look similar to:

class Admin_Base_Controller extends Controller {
    public $restful = true;
    public $layout = 'admin::layouts.main';
    public function __construct(){
        parent::__construct();
        Asset::container('header')->bundle('admin');
        Asset::container('header')->add('bootstrap', 'css/bootstrap.min.css');
        Asset::container('footer')->bundle('admin');
        Asset::container('footer')->add('jquery', 'http://code.jquery.com/jquery-latest.min.js');
        Asset::container('footer')->add('bootstrapjs', 'js/bootstrap.min.js');
    }
    /**
     * Catch-all method for requests that can't be matched.
     *
     * @param  string    $method
     * @param  array     $parameters
     * @return Response
     */
    public function __call($method, $parameters){
        return Response::error('404');
    }
}

We've also brought across the catch-all request from the application's base controller to return a 404 response, should a page not be found.

Before we do anything else, let's create the file for that layout, views/layouts/main.blade.php, so we don't encounter any errors later on.


Securing the Bundle

As we're building an admin panel, we're going to want to keep people out. Thankfully, we can use Laravel's built-in Auth class to accomplish this with ease.

First, we need to create our table; I'm going to be using 'admins' as my table name, but you can change it, if you wish. Artisan will generate a migration, and pop it into our bundle's migrations directory. Just run the following in the command line.

php artisan migrate:make admin::create_admins_table

Building the Schema

If you're unfamiliar with the schema builder, I recommend that you take a glance at the documentation. We're going to include a few columns:

  • id – This will auto-increment and become our primary key
  • name
  • username
  • password
  • email
  • role – We won't be taking advantage of this today, but it will allow you to extend the bundle later on

We'll also include the default timestamps, in order to follow best practices.

/**
 * Make changes to the database.
 *
 * @return void
 */
public function up()
{
    Schema::create('admins', function($table)
    {
        $table->increments('id');
        $table->string('name', 200);
        $table->string('username', 32)->unique();
        $table->string('password', 64);
        $table->string('email', 320)->unique();
        $table->string('role', 32);
        $table->timestamps();
    });
}
/**
 * Revert the changes to the database.
 *
 * @return void
 */
public function down()
{
    Schema::drop('admins');
}

Now that we've got our database structure in place, we need to create an associated model for the table. The process is essentially identical to how we'd do it in our main application: we create the file and model based on the singular form of our table name, but we do need to ensure that we namespace it correctly.

namespace Admin\Models;
use \Laravel\Database\Eloquent\Model as Eloquent;
class Admin extends Eloquent {
}

Above, we've ensured that we're using the namespace that we defined in start.php. Also, so we can reference Eloquent correctly, we create an alias.

Extending Auth

To keep our bundle entirely self-contained, we'll need to extend Auth. This will allow us to use a dedicated table for logging in to our admin panel, without interfering with the main application.

Before we create our custom driver, we'll create a configuration file, where you can choose if you'd like to use the username or email columns from the database table.

return array(
    'username' => 'username',
    'password' => 'password',
);

If you want to alter the columns that we'll be using, simply adjust the values here.

We next need to create the driver. Let's call it "AdminAuth" and include it in our libraries folder. Since we're extending Auth, we only need to override a couple of methods to get everything working as intended.

namespace Admin\Libraries;
use Admin\Models\Admin as Admin, Laravel\Auth\Drivers\Eloquent as Eloquent, Laravel\Hash, Laravel\Config;
class AdminAuth extends Eloquent {
/**
 * Get the current user of the application.
 *
 * If the user is a guest, null should be returned.
 *
 * @param  int|object  $token
 * @return mixed|null
 */
public function retrieve($token)
{
    // We return an object here either if the passed token is an integer (ID)
    // or if we are passed a model object of the correct type
    if (filter_var($token, FILTER_VALIDATE_INT) !== false)
    {
        return $this->model()->find($token);
    }
    else if ($token instanceof Admin)
    {
        return $token;
    }
}
/**
 * Attempt to log a user into the application.
 *
 * @param  array $arguments
 * @return bool
 */
public function attempt($arguments = array())
{
    $user = $this->model()->where(function($query) use($arguments)
    {
        $username = Config::get('admin::auth.username');
        $query->where($username, '=', $arguments['username']);
        foreach(array_except($arguments, array('username', 'password', 'remember')) as $column => $val)
        {
            $query->where($column, '=', $val);
        }
    })->first();
    // If the credentials match what is in the database, we will just
    // log the user into the application and remember them if asked.
    $password = $arguments['password'];
    $password_field = Config::get('admin::auth.password', 'password');
    if ( ! is_null($user) and Hash::check($password, $user->{$password_field}))
    {
        return $this->login($user->get_key(), array_get($arguments, 'remember'));
    }
    return false;
}
protected function model(){
    return new Admin;
}
}

Now that we've created the driver, we need to let Laravel know. We can use Auth's extend method to do this in our start.php file.

Auth::extend('adminauth', function() {
    return new Admin\Libraries\AdminAuth;
});

One final thing that we need to do is configure Auth to use this at runtime. We can do this in our base controller's constructor with the following.

Config::set('auth.driver', 'adminauth');

Routes & Controllers

Before we can route to anything, we need to create a controller. Let's create our dashboard controller, which is what we'll see after logging in.

As we'll want this to show up at the root of our bundle (i.e. the handle we defined earlier), we'll need to call this home.php. Laravel uses the 'home' keyword to establish what you want to show up at the root of your application or bundle.

Extend your base controller, and create an index view. For now, simply return 'Hello World' so we can ensure that everything is working okay.

class Admin_Home_Controller extends Admin_Base_Controller {
    public function get_index(){
        return 'Hello World';
    }
}

Now that our controller is set up, we can route to it. Create a routes.php within your bundle, if you haven't already. Just as in our main application, each bundle can have its own routes file, which works identically.

Route::controller(array(
    'admin::home',
));

Here, I've registered the home controller, which Laravel will automatically assign to /. Later, we'll add our login controller to the array.

If you head to /admin (or whatever handle you defined earlier) in your browser, then you should see 'Hello World'.


Building the Login Form

Let’s create the login controller. Rather than extending the base controller, however, we’ll extend Laravel's main controller; the reason behind this decision will become apparent shortly.

Because we're not extending our base controller, we need to set a few things up before beginning – namely RESTful routing, the correct auth driver, and our assets.

class Admin_Login_Controller extends Controller {
    public $restful = true;
    public function __construct(){
        parent::__construct();
        Config::set('auth.driver', 'adminauth');
        Asset::container('header')->bundle('admin');
        Asset::container('header')->add('bootstrap', 'css/bootstrap.min.css');
    }
}

Let's also create our view. We're going to be using Blade – Laravel's templating engine – to speed things up a bit. Within your bundle's views directory, create a 'login' directory and an 'index.blade.php' file within it.

We'll pop in a standard HTML page structure and echo the assets.

<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="utf-8">
    <title>Login</title>
    {{Asset::container('header')->styles()}}
    <!--[if lt IE 9]><script src="http://html5shim.googlecode.com/svn/trunk/html5.js"></script><![endif]-->
</head>
<body>
</body>
</html>

Now, let's make sure that the view is being created in the controller. As we're using restful controllers, we can take advantage of the 'get' verb in our method.

public function get_index(){
    return View::make('admin::login.index');
}

Awesome! We're now good to start building our form, which we can create with the Form class.

{{Form::open()}}
{{Form::label('username', 'Username')}}
{{Form::text('username')}}
{{Form::label('password', 'Password')}}
{{Form::password('password')}}
{{Form::submit('Login', array('class' => 'btn btn-success'))}}
{{Form::token()}}
{{Form::close()}}
Login Form

Above, we created a form that will post to itself (exactly what we want), along with various form elements and labels to go with it. The next step is to process the form.

As we're posting the form to itself and using restful controllers, we just need to create the post_index method and use this to process our login. If you've never used Auth before, then go and have a peek at the documentation before moving on.

public function post_index(){
    $creds = array(
        'username' => Input::get('username'),
        'password' => Input::get('password'),
    );
    if (Auth::attempt($creds)) {
        return Redirect::to(URL::to_action('admin::home@index'));
    } else {
        return Redirect::back()->with('error', true);
    }
}

If the credentials are correct, the user will be redirected to the dashboard. Otherwise, they'll be redirected back with an error that we can check for in the login view. As this is just session data, and not validation errors, we only need to implement a simple check.

@if(Session::get('error'))
    Sorry, your username or password was incorrect.
@endif

We'll also need to log users out; so let's create a get_logout method, and add the following. This will log users out, and then redirect them when visiting /admin/login/logout.

public function get_logout(){
    Auth::logout();
    return Redirect::to(URL::to_action('admin::home@index'));
}

The last thing we should do is add the login controller to our routes file.

Route::controller(array(
    'admin::home',
    'admin::login',
));

Filtering Routes

To stop people from bypassing our login screen, we need to filter our routes to determine whether the visitor is an authorized user. We can create the filter in our routes.php, and attach it in our base controller so that it runs before each route is executed.

Route::filter('auth', function() {
    if (Auth::guest()) return Redirect::to(URL::to_action('admin::login'));
});

At this point, all that's left to do is call this in our base controller's constructor. This is also why the login controller doesn't extend the base controller: if it did, we'd have an infinite redirect loop that would eventually time out.

$this->filter('before', 'auth');

Setting up the Views

Earlier, we created our main.blade.php layout; now, we’re going to do something with it. Let's get an HTML page and our assets being brought in.

<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="utf-8">
    <title>{{$title}}</title>
    {{Asset::container('header')->styles()}}
    <!--[if lt IE 9]><script src="http://html5shim.googlecode.com/svn/trunk/html5.js"></script><![endif]-->
</head>
<body>
    <div class="container">
        {{$content}}
    </div>
    {{Asset::container('footer')->scripts()}}
</body>
</html>

You'll notice that I've also echoed out a couple of variables: $title and $content. We'll be able to use magic methods from our controller to pass data through to these. I've also popped $content inside the container div that Bootstrap will provide the styling for.

Next, let's create the view for our dashboard. As we'll be nesting this, we only need to put the content we want to put into our container.

<h1>Hello</h1>
<p class="lead">This is our dashboard view</p>

Save this as index.blade.php within the views/dashboard directory inside of your bundle.

We now need to set our controller to take advantage of the layout and view files that we just created. Within the get_index method that we created earlier, add the following.

$this->layout->title = 'Dashboard';
$this->layout->nest('content', 'admin::dashboard.index');

title is handled by a magic method, so we can echo it out as a variable in our layout. By using nest, we're able to include a view inside the layout straight from our controller.


Creating a Task

In order to speed things up, Laravel provides us with an easy way to execute code from the command line, called "tasks". It's a good idea to create one that easily adds a new admin to the database.

We simply need to ensure that the file takes on the name of our task, and pop it into our bundle's tasks directory. I'm going to call this setup.php, as we'll use it just after installing the bundle.

use Laravel\CLI\Command as Command;
use Admin\Models\Admin as Admin;
class Admin_Setup_Task {
public function run($arguments){
    if(empty($arguments) || count($arguments) < 5){
        die("Error: Please enter first name, last name, username, email address and password\n");
    }
    Command::run(array('bundle:publish', 'admin'));
    $role = (!isset($arguments[5])) ? 'admin' : $arguments[5];
    $data = array(
        'name' => $arguments[0].' '.$arguments[1],
        'username' => $arguments[2],
        'email' => $arguments[3],
        'password' => Hash::make($arguments[4]),
        'role' => $role,
    );
    $user = Admin::create($data);
    echo ($user) ? 'Admin created successfully!' : 'Error creating admin!';
}
}

Laravel will pass through an array of arguments; we can count these to ensure that we're getting exactly what we want. If not, we'll echo out an error. You'll also notice that we're using the Command class to run bundle:publish. This will allow you to run any command line task built into Laravel inside your application or bundle.

The main thing this task does is grab the arguments passed through to it, hash the password, and insert a new admin into the Admins table. To run this, we need to use the following in the command line.

php artisan admin::setup firstname lastname username email@address.com password

What Now?

In this tutorial, we created a boilerplate admin panel that is quite easy to extend. For example, the role column that we created could allow you to limit what your clients are able to see.

A bundle can be anything from an admin panel, like we built today, to Markdown parsers – or even the entire Zend Framework (I'm not kidding). Everything that we covered here will set you on your way to writing awesome Laravel bundles, which can be published to Laravel's bundle directory.

Learn more about creating Laravel bundles here on Nettuts+.

Getting Started with TypeScript


In late 2012, Microsoft introduced TypeScript, a typed superset of JavaScript that compiles into plain JavaScript. TypeScript focuses on providing useful tools for large scale applications by implementing features such as classes, type annotations, inheritance, modules, and much more! In this tutorial, we will get started with TypeScript, using simple bite-sized code examples, compiling them into JavaScript, and viewing the instant results in a browser.


Installing the Tools

You’ll set up your machine according to your specific platform and needs. Windows and Visual Studio users can simply download the Visual Studio Plugin. If you’re on Windows and don’t have Visual Studio, give Visual Studio Express for Web a try. The TypeScript experience in Visual Studio is currently superior to other code editors.

If you’re on a different platform (or don’t want to use Visual Studio), all you need is a text editor, a browser, and the TypeScript npm package to use TypeScript. Follow these installation instructions:

  1. Install Node Package Manager (npm)
    		$ curl http://npmjs.org/install.sh | sh
    		$ npm --version
    		1.1.70
  2. Install the TypeScript npm package globally in the command line:
    		$ npm install -g typescript
    		$ npm view typescript version
    		npm http GET https://registry.npmjs.org/typescript
    		npm http 304 https://registry.npmjs.org/typescript
    		0.8.1-1
  3. Any modern browser: Chrome is used for this tutorial
  4. Any text editor: Sublime Text is used for this tutorial
  5. Syntax highlighting plugin for text editors

That’s it; we are ready to make a simple “Hello World” application in TypeScript!


Hello World in TypeScript

TypeScript is a superset of ECMAScript 5 (ES5) and incorporates features proposed for ES6. Because of this, any JavaScript program is already a TypeScript program. The TypeScript compiler performs local file transformations on TypeScript programs; hence, the final JavaScript output closely matches the TypeScript input.

First, we will create a basic index.html file and reference an external script file:

<!doctype html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <title>Learning TypeScript</title>
</head>
<body>
    <script src="hello.js"></script>
</body>
</html>

This is a simple “Hello World” application; so, let’s create a file named hello.ts. The *.ts extension designates a TypeScript file. Add the following code to hello.ts:

alert('hello world in TypeScript!');

Next, open the command line interface, navigate to the folder containing hello.ts, and execute the TypeScript compiler with the following command:

tsc hello.ts

The tsc command is the TypeScript compiler, and it immediately generates a new file called hello.js. Our TypeScript application does not use any TypeScript-specific syntax, so we see the same exact JavaScript code in hello.js that we wrote in hello.ts.

Great! Now we can explore TypeScript's features and see how it can help us maintain and author large scale JavaScript applications.


Type Annotations

Type annotations are an optional feature that allows us to check and express our intent in the programs we write. Let's create a simple area() function in a new TypeScript file, called type.ts:

function area(shape: string, width: number, height: number) {
	var area = width * height;
	return "I'm a " + shape + " with an area of " + area + " cm squared.";
}
document.body.innerHTML = area("rectangle", 30, 15);

Next, change the script source in index.html to type.js and run the TypeScript compiler with tsc type.ts. Refresh the page in the browser, and you should see the following:

As shown in the previous code, the type annotations are expressed as part of the function parameters; they indicate what types of values you can pass to the function. For example, the shape parameter is designated as a string value, and width and height are numeric values.

Type annotations, and other TypeScript features, are enforced only at compile-time. If you pass any other types of values to these parameters, the compiler will give you a compile-time error. This behavior is extremely helpful while building large-scale applications. For example, let's purposely pass a string value for the width parameter:

function area(shape: string, width: number, height: number) {
	var area = width * height;
	return "I'm a " + shape + " with an area of " + area + " cm squared.";
}
document.body.innerHTML = area("rectangle", "width", 15); // wrong width type

We know this results in an undesirable outcome, and compiling the file alerts us to the problem with the following error:

$ tsc type.ts
type.ts(6,26): Supplied parameters do not match any signature of call target

Notice that despite this error, the compiler generated the type.js file. The error doesn't stop the TypeScript compiler from generating the corresponding JavaScript, but the compiler does warn us of potential issues. We intend width to be a number; passing anything else results in undesired behavior in our code. Other type annotations include bool or even any.
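As a quick illustrative sketch (not from the tutorial), here's a function that uses those other annotations. Note that the 2012-era compiler accepted bool; current compilers spell the primitive boolean.

```typescript
// "visible" only accepts true/false; "extra" accepts anything at all.
function describeShape(label: string, visible: boolean, extra: any): string {
    return label + " is " + (visible ? "shown" : "hidden") + ", extra: " + extra;
}

console.log(describeShape("rectangle", true, 42));
```

Passing, say, a string for visible would produce the same kind of compile-time error we saw above, while extra deliberately opts out of checking.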


Interfaces

Let's expand our example to include an interface that further describes a shape as an object with an optional color property. Create a new file called interface.ts, and modify the script source in index.html to include interface.js. Type the following code into interface.ts:

interface Shape {
	name: string;
	width: number;
	height: number;
	color?: string;
}
function area(shape : Shape) {
	var area = shape.width * shape.height;
	return "I'm " + shape.name + " with area " + area + " cm squared";
}
console.log( area( {name: "rectangle", width: 30, height: 15} ) );
console.log( area( {name: "square", width: 30, height: 30, color: "blue"} ) );

Interfaces are names given to object types. Not only can we declare an interface, but we can also use it as a type annotation.
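To illustrate that second point, an interface can annotate a variable just as well as a parameter. Here's a small standalone sketch (the Shape interface is redeclared so the snippet compiles on its own):

```typescript
interface Shape {
    name: string;
    width: number;
    height: number;
    color?: string; // optional, so it may be omitted below
}

// The annotation checks the object literal: dropping "name" would not compile.
var box: Shape = { name: "box", width: 10, height: 4 };

console.log(box.name + ": " + box.width * box.height);
```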

Compiling interface.ts results in no errors. To provoke an error, let's append another line of code to interface.ts with a shape that has no name property, and view the result in the browser's console. Append this line to interface.ts:

console.log( area( {width: 30, height: 15} ) );

Now, compile the code with tsc interface.ts. You'll receive an error, but don't worry about that right now. Refresh your browser and look at the console. You'll see something similar to the following screenshot:

Now let's look at the error. It is:

interface.ts(26,13): Supplied parameters do not match any signature of call target:
Could not apply type 'Shape' to argument 1, which is of type '{ width: number; height: number; }'

We see this error because the object passed to area() does not conform to the Shape interface; it needs a name property in order to do so.


Arrow Function Expressions

Understanding the scope of the this keyword is challenging, and TypeScript makes it a little easier by supporting arrow function expressions, a new feature being discussed for ECMAScript 6. Arrow functions preserve the value of this, making it much easier to write and use callback functions. Consider the following code:

var shape = {
	name: "rectangle",
	popup: function() {
		console.log('This inside popup(): ' + this.name);
		setTimeout(function() {
			console.log('This inside setTimeout(): ' + this.name);
			console.log("I'm a " + this.name + "!");
		}, 3000);
	}
};
shape.popup();

The this.name on line seven will clearly be empty, as demonstrated in the browser console:

We can easily fix this issue by using the TypeScript arrow function. Simply replace function() with () =>.

var shape = {
	name: "rectangle",
	popup: function() {
		console.log('This inside popup(): ' + this.name);
		setTimeout( () => {
			console.log('This inside setTimeout(): ' + this.name);
			console.log("I'm a " + this.name + "!");
		}, 3000);
	}
};
shape.popup();

And the results:

Take a peek at the generated JavaScript file. You'll see that the compiler injected a new variable, var _this = this;, and used it in setTimeout()'s callback function to reference the name property.
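Under the hood, the fix is simple lexical capture. Here's a minimal, synchronous sketch of the pattern the compiler produces (illustrative names; the real emit wraps the setTimeout callback the same way):

```typescript
var shape = {
    name: "rectangle",
    label: function (): string {
        var _this = this; // what the compiler injects for an arrow function
        // The inner function can't see the outer `this`, but it closes
        // over the saved copy, so the right `name` is used.
        var callback = function () { return "I'm a " + _this.name + "!"; };
        return callback();
    }
};

console.log(shape.label());
```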


Classes with Public and Private Accessibility Modifiers

TypeScript supports classes, and their implementation closely follows the ECMAScript 6 proposal. Let's create another file, called class.ts, and review the class syntax:

class Shape {
	area: number;
	color: string;
	constructor ( name: string, width: number, height: number ) {
		this.area = width * height;
		this.color = "pink";
	};
	shoutout() {
		return "I'm " + this.color + " " + this.name +  " with an area of " + this.area + " cm squared.";
	}
}
var square = new Shape("square", 30, 30);
console.log( square.shoutout() );
console.log( 'Area of Shape: ' + square.area );
console.log( 'Name of Shape: ' + square.name );
console.log( 'Color of Shape: ' + square.color );
console.log( 'Width of Shape: ' + square.width );
console.log( 'Height of Shape: ' + square.height );

The above Shape class has two properties, area and color, one constructor (aptly named constructor()), as well as a shoutout() method. The constructor arguments (name, width, and height) are local to the constructor. This is why you'll see errors in the browser, as well as from the compiler:

class.ts(12,42): The property 'name' does not exist on value of type 'Shape'
class.ts(20,40): The property 'name' does not exist on value of type 'Shape'
class.ts(22,41): The property 'width' does not exist on value of type 'Shape'
class.ts(23,42): The property 'height' does not exist on value of type 'Shape'

Next, let's explore the public and private accessibility modifiers. Public members can be accessed everywhere, whereas private members are only accessible within the scope of the class body. There is, of course, no feature in JavaScript to enforce privacy, hence private accessibility is only enforced at compile-time and serves as a warning to the developer's original intent of making it private.

As an illustration, let's add the public accessibility modifier to the constructor argument, name, and a private accessibility modifier to the member, color. When we add public or private accessibility to an argument of the constructor, that argument automatically becomes a member of the class with the relevant accessibility modifier.

...
private color: string;
...
constructor ( public name: string, width: number, height: number ) {
...

After recompiling, the console.log call that reads square.color triggers a new error:

class.ts(24,41): The property 'color' does not exist on value of type 'Shape'
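To see both modifiers in one self-contained snippet, here's a small sketch (the class and member names are illustrative, not the tutorial's):

```typescript
class Badge {
    private color: string; // only accessible inside the class body

    // "public name" automatically makes the argument a public member.
    constructor(public name: string) {
        this.color = "pink";
    }

    describe(): string {
        // Private members are fine to use here, inside the class...
        return this.name + " is " + this.color;
    }
}

var badge = new Badge("square");
console.log(badge.describe());
// ...but referencing badge.color out here would be a compile-time error.
```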

Inheritance

Finally, you can extend an existing class and create a derived class from it with the extends keyword. Let's append the following code to the existing file, class.ts, and compile it:

class Shape3D extends Shape {
	volume: number;
	constructor ( public name: string, width: number, height: number, length: number ) {
		super( name, width, height );
		this.volume = length * this.area;
	};
	shoutout() {
		return "I'm " + this.name +  " with a volume of " + this.volume + " cm cube.";
	}
	superShout() {
		return super.shoutout();
	}
}
var cube = new Shape3D("cube", 30, 30, 30);
console.log( cube.shoutout() );
console.log( cube.superShout() );

A few things are happening with the derived Shape3D class:

  • Because it is derived from the Shape class, it inherits the area and color properties.
  • Inside the constructor method, the super method calls the constructor of the base class, Shape, passing the name, width, and height values. Inheritance allows us to reuse the code from Shape, so we can easily calculate this.volume with the inherited area property.
  • The method shoutout() overrides the base class's implementation, and a new method superShout() directly calls the base class's shoutout() method by using the super keyword.

With only a few additional lines of code, we can easily extend a base class to add more specific functionality and make our intention known through TypeScript.


TypeScript Resources

Despite TypeScript's extremely young age, you can find many great resources on the language around the web (including a full course coming to Tuts+ Premium!).


We're Just Getting Started

Trying out TypeScript is easy. If you enjoy a more statically-typed approach for large applications, then TypeScript's features will enforce a familiar, disciplined environment. Although it has been compared to CoffeeScript or Dart, TypeScript is different in that it doesn't replace JavaScript; it adds features to JavaScript.

We have yet to see how TypeScript will evolve, but Microsoft has stated that they will keep its many features (type annotations aside) aligned with ECMAScript 6. So, if you'd like to try out many of the new ES6 features, TypeScript is an excellent way to do so! Go ahead and give it a try!

Best of Tuts+ in December 2012


Each month, we bring together a selection of the best tutorials and articles from across the whole Tuts+ network. Whether you’d like to read the top posts from your favourite site, or would like to start learning something completely new, this is the best place to start!


Psdtuts+ — Photoshop Tutorials

  • Create Biologically Viable Alien Concept Art in Photoshop

    Create Biologically Viable Alien Concept Art in Photoshop

    Designing a plausible alien life form represents a challenge for concept artists. The monstrous creatures used in blockbuster movies and video games can be created relatively easily, as they don’t need to be functional within a real-world context. To design a creature that not only meets a client’s requirements, but is also potentially biologically viable, requires a careful balance between plausibility and visual impact, a balance aided by careful research into the way real animals look and behave. In this tutorial, Alex Ries will explain how to illustrate an alien life form that could potentially exist in real life.

    Visit Article

  • Create a Realistic Photo Composite From a Sketch – Tuts + Premium Tutorial

    Create a Realistic Photo Composite From a Sketch – Tuts + Premium Tutorial

    When mixing photos, it’s often helpful to use digital painting techniques to help improve your scene. In this Tuts+ Premium tutorial, Nat Monney will show you how to combine several photos to create a realistic scene of a locomotive passing through a desert landscape. This tutorial will begin as a sketch; we will then show you how to combine photo manipulation and digital painting techniques to create finished artwork that looks amazingly close to the original sketch. This tutorial is available exclusively to Tuts+ Premium Members.

    Visit Article

  • How to Draw a Camcorder Icon From Scratch in Photoshop

    How to Draw a Camcorder Icon From Scratch in Photoshop

    In this tutorial, we will explain how to draw a camcorder icon from scratch in Photoshop using shape layers, brushes, and layer styles. Let’s get started!

    Visit Article


Nettuts+ — Web Development Tutorials

  • What’s Hot in 2013: Our Picks

    What’s Hot in 2013: Our Picks

    2012 was a fantastic year for new technologies, products, and frameworks in our industry. That said, 2013 is looking to be even better! Recently, I asked our Nettuts+ writing staff to compile a list of the technologies that they’ll be keeping a close eye on. Now these aren’t necessarily brand new, but we expect them to spike in popularity this year!

    Visit Article

  • Build a Twitter Clone From Scratch: The Design

    Build a Twitter Clone From Scratch: The Design

    This article represents the first in a new group effort by the Nettuts+ staff, which covers the process of designing and building a web app from scratch – in multiple languages! We’ll use a fictional Twitter-clone, called Ribbit, as the basis for this series.

    Visit Article

  • Best Practices When Working With JavaScript Templates

    Best Practices When Working With JavaScript Templates

    Maybe you don't need them for simple web apps, but it doesn't take too much complexity before embracing JavaScript templates becomes a good decision. Like any other tool or technique, there are a few best practices that you should keep in mind, when using templates. We’ll take a look at a handful of these practices in this tutorial.

    Visit Article


Vectortuts+ — Illustrator Tutorials


Webdesigntuts+ — Web Design Tutorials


Phototuts+ — Photography Tutorials

  • Jonathan Cherry: Cities and Photography

    Jonathan Cherry: Cities and Photography

    Jonathan Cherry is a young photographer with a passion for taking engaging photographs. His images have offered him opportunities to travel and photograph new locations and communities. He also runs the extremely popular Mull It Over blog, profiling contemporary photographers from around the world. We caught up with him recently to give him the chance to be the interviewee and find out more about his work.

    Visit Article

  • How to Create Stunning Skateboarding Photography

    How to Create Stunning Skateboarding Photography

    You’re more likely to hear skateboarding called an art or a lifestyle than a sport. Skating is creating and style earns respect. If you’re looking for a new and creative outlet to explore, then you should give skateboarding photography a go.

    Visit Article

  • The Complete Guide to Photographing Classical Musicians

    The Complete Guide to Photographing Classical Musicians

    For those of us who regularly take promotional shots of bands and fashion, it can be easy to get stuck in a rut of approaching the same clientele over and over, when actually, the skills used for those shoots can be easily transferred to allow us to broaden our client base and vary our shooting style. I’ve found that working with classical musicians is an absolute pleasure: they turn up on time, and more often than not, they know what they want!

    Visit Article


Cgtuts+ — Computer Graphics Tutorials

  • Learning Autodesk Maya: A Beginners Introduction to the Software, Part 1

    Learning Autodesk Maya: A Beginners Introduction to the Software, Part 1

    If you’ve always wanted to dive into Autodesk Maya but didn’t know where to start, this new two-part tutorial is for you! Besides being one of the premier software packages on the market for film, games, and VFX, Maya’s seemingly endless number of tools, features, and options makes it extremely intimidating to learn (especially for beginners), but don’t let that stop you! Today we’re kicking off part one of Shaun Keenan’s new beginner-friendly tutorial series, where he’ll take you deep into the dark depths of Maya and give you an in-depth look at many of the program’s awesome features.

    Visit Article

  • Building a Complete Human Facial Rig In 3D Studio Max, Part 1: Bone Setup & Controls – Tuts+ Premium

    Building a Complete Human Facial Rig In 3D Studio Max, Part 1: Bone Setup & Controls – Tuts+ Premium

    Today we’re launching the next chapter in our epic series of character rigging tutorials for 3ds Max and Maya from author Soni Kumari. The series has so far covered everything you need to know about rigging complete characters in Maya with two fantastic, in-depth tutorials: Complete Character Rig In Maya & Complete Facial Rig In Maya. And now we’re completing the series for 3D Studio Max.

    Visit Article

  • Creating an Old Weathered Low Poly Post Box in Maya, Part 1: Modeling & Mapping

    Creating an Old Weathered Low Poly Post Box in Maya, Part 1: Modeling & Mapping

    Learn how to create a low poly, highly detailed, old UK post box worn out by the weather, rusty and beaten up – a model suitable for use in the game industry or as an environment filler for your scenes. We’ll take a simple, straightforward, eyeballed approach to the modeling while relying on several reference images. We’ll then dive into the UV mapping and learn how to create a highly detailed PSD network shader in Maya for the Color, Normal & Specular maps.

    Visit Article


Aetuts+ — After Effects Tutorials

  • Quick Tip – How To Automate Simple Track Marker Removal

    Quick Tip – How To Automate Simple Track Marker Removal

    In this tutorial I will be showing you a quick and easy way to remove tracking markers. Often, tracking markers are needed to achieve accurate matchmoves; however, we commonly find ourselves with the laborious task of having to remove these trackers.

    Visit Article

  • How to Create the Spider-Man 2 Title Effect

    How to Create the Spider-Man 2 Title Effect

    In this tutorial we will be recreating the look and feel of the title sequence from the 2004 movie “Spider-Man 2”. First, we will create the “webs” that move across the screen; then we will use expressions and masks to isolate the intermittent shapes that are formed by those webs. Finally, we will use these shapes as alpha mattes for copies of our logo or text.

    Visit Article

  • 3D Building Fragmentation and Compositing – Part 1

    3D Building Fragmentation and Compositing – Part 1

    We’re going to recreate the original camera from a shaky camcorder shot and make a building to line up with the one in the source plate. Using thinking particles and volume breaker to simulate it collapsing, we’ll camera map it and export the sequence for the 2nd part of this tutorial where we’ll integrate our render.

    Visit Article


Audiotuts+ — Audio & Production Tutorials


Wptuts+ — WordPress Tutorials


Mobiletuts+ — Mobile Development Tutorials

  • Learn iOS SDK Development from Scratch!

    Learn iOS SDK Development from Scratch!

    Interested in learning native iOS SDK development? Now is the perfect time to get started! Mobiletuts+ is pleased to announce an in-depth, fully up-to-date session on how to become an iOS SDK developer!

    Visit Article

  • Building a Shopping List Application From Scratch – Part 1

    Building a Shopping List Application From Scratch – Part 1

    In the next two lessons, we will put the knowledge learned in this session into practice by creating a basic shopping list application. Along the way, you will also learn a number of new concepts and patterns, such as creating a custom model class and implementing a custom delegate pattern. We have a lot of ground to cover, so let’s get started!

    Visit Article

  • Creating Your First iOS Application

    Creating Your First iOS Application

    Even though we have already learned quite a bit in this series on iOS development, I am sure you are eager to start building iOS applications that do something cool or useful. In this tutorial, your wish will be granted! Using Xcode, you will create an iOS project from scratch, modify the project’s source code, and run your application on either the iOS Simulator or a physical device!

    Visit Article


Gamedevtuts+ — Game Development

  • 40+ Fantastic Game Development Tutorials From Across the Web

    Fantastic Game Development Tutorials From Across the Web

    The indie gamedev community is awesome: so willing to share tips, tricks, advice, and even detailed tutorials explaining important concepts. Here, I’ve rounded up a few dozen of my favourite examples from around the internet, covering coding, maths, game design, and being a game developer. This is the quality of content that we aspire to achieve at Gamedevtuts+.

    Visit Article

  • Designing a Boss Fight: Lessons Learned From Modern Games

    Designing a Boss Fight: Lessons Learned From Modern Games

    Boss battles have existed since practically the beginning of gaming and they have all followed a similar idea throughout the years: a big baddie that gets in the way of some major objective. In many cases they have had an overbearing role during the game’s story, with ongoing hints of their existence or of the approaching fight with them.

    Visit Article

  • Animating With Asset Sheets: An Alternative to Blitting

    Animating With Asset Sheets: An Alternative to Blitting

    So you’ve got your awesome game in the works, it’s got all sorts of complex physics, epic enemy AI or what-have-you. But it feels lifeless. You want some OOMPH, you want some animation!

    Visit Article


Mactuts+ — Mac & OS X

  • An In-Depth Look at iTunes 11

    An In-Depth Look at iTunes 11

    Alongside the launch of the iPhone 5, iPod Nano, and iPod Touch, Apple promised us an October release of iTunes 11: the biggest update to their all-in-one media management tool since the inclusion of a music store in 2003. As October came and went, Apple failed to deliver on time, instead pushing the release back to “November”, and on the 29th the new iTunes finally shipped. With this major release comes a new interface, seamless iCloud integration, and a major design overhaul. In this article I’ll take you in-depth into the new additions and changes to Apple’s Swiss Army Knife of media players.

    Visit Article

  • Dan Benjamin on Running the Best Podcast Network on The Web

    Dan Benjamin on Running the Best Podcast Network on The Web

    He created “NPR for geeks”, and with massively popular shows including Hypercritical, Build and Analyze, Back to Work, and Amplified, Dan Benjamin is perhaps one of the most prolific technology podcasters around. In addition to his personal talent, Dan’s 5by5 Network is currently home to over thirty shows, with topics ranging from health and fitness to app development. In the late summer of this year, I had a chance to talk to Dan for about an hour about his career, podcasting, and of course, his setup. This is that interview, uncut and in its entirety. Enjoy!

    Visit Article

  • Mactuts+ Quiz #2: Terminal Basics

    Mactuts+ Quiz #2: Terminal Basics

    Terminal is one of our favorite topics here at Mactuts+. It’s a truly advanced application that allows you to take control of your Mac at a level not possible with any other utility. Today we’re going to find out how much you’ve learned. Browse through our Terminal tutorials, then take the quiz below to test your knowledge!

    Visit Article


Crafttuts+ — Craft & Handmade

  • Create a Wondrous Winter Wonderland in a Jam Jar

    Create a Wondrous Winter Wonderland in a Jam Jar

    Everyone has their own ideal winter scene that they visualise, whether it’s in the city or set in a cute little village. This tutorial shows you how to craft your own, using accessible materials such as paper and a jam jar. It’s a Christmas-themed snow globe with a difference!

    Visit Article

  • Make a Wire-Wrapped Word for Your Wall

    Make a Wire-Wrapped Word for Your Wall

    I live in an apartment with all white walls and quite a minimal style, but there’s one thing that I’m stuck with and can’t afford to change – the ugly aging beige intercom handset. I’ve given a lot of thought to how I can hide or disguise it, to no avail. So I came up with a solution: instead of trying to hide it, I decided to create this neon wrapped wire speech bubble to place right next to it. Now the handset doesn’t stick out like a sore thumb anymore; it’s become part of a fun art piece.

    Visit Article

  • Knitting Fundamentals: Learn to Purl

    Knitting Fundamentals: Learn to Purl

    Most beginner knitters learn how to cast-on and how to do the knit stitch right away. Then, when somebody mentions learning the purl stitch, they run for cover. I know, I know, you just started to feel comfortable knitting, and now people want to throw something new at you, and it’s scary. Let me put your mind at ease by saying, I promise, this stitch is not nearly as terrifying as you think. In fact, with a little bit of practice, I’m sure you’ll be purling like a professional in no time!

    Visit Article


FreelanceSwitch — Freelance Jobs & Information

  • 7 Resources New Freelancers Can Use to Figure Out What to Charge

    7 Resources New Freelancers Can Use to Figure Out What to Charge

    It’s possibly the most baffling question that faces new freelancers: What in the heck am I supposed to charge for my work?
    You don’t have a sense of market rates yet. Your prospect doesn’t want to tell you their budget. Figuring out what to charge for your freelance services is intimidating.

    Visit Article

  • How to Get More Fans on Facebook

    How to Get More Fans on Facebook

    Now you’ve created and set up your Facebook Page, your next step is getting people to become Facebook fans and Like your Page. In this post, I show you how to increase likes on Facebook, drive Facebook fan engagement, and overall grow the number of your Facebook page fans. Learn how to get a bunch of Likes on your freelance Facebook page.

    Visit Article

  • How to Build a Strong Yearly Marketing Plan for Your Freelancing Business

    How to Build a Strong Yearly Marketing Plan for Your Freelancing Business

    As we approach the end of the year, it’s a good time to begin thinking about next year’s goals for your freelance business.
    Maybe you’ve played your freelance marketing strategy loose so far in your freelancing business. Or maybe you have put together a roughly functional marketing plan in the past but want to improve upon those past efforts.

    Visit Article

A Beginner’s Guide to HTTP and REST


Hypertext Transfer Protocol (HTTP) is the life of the web. It’s used every time you transfer a document or make an AJAX request. Yet HTTP is surprisingly unfamiliar to many web developers.

This introduction will demonstrate how the set of design principles known as REST underpins HTTP, allowing you to embrace its full power by building interfaces that can be used from nearly any device or operating system.

November, 2010

Why REST?

REST is a simple way to organize interactions between independent systems.

REST is a simple way to organize interactions between independent systems. It’s been growing in popularity since 2005, and it inspires the design of services such as the Twitter API. This is because REST allows you to interact, with minimal overhead, with clients as diverse as mobile phones and other websites. In theory, REST is not tied to the web, but it’s almost always implemented as such, and it was inspired by HTTP. As a result, REST can be used wherever HTTP can.

The alternative is building relatively complex conventions on top of HTTP. Often, this takes the shape of entire new XML-based languages. The most illustrious example is SOAP. You have to learn a completely new set of conventions, but you never use HTTP to its fullest power. Because REST has been inspired by HTTP and plays to its strengths, it is the best way to learn how HTTP works.

After an initial overview, we’ll examine each of the HTTP building blocks: URLs, HTTP verbs and response codes. We’ll also review how to use them in a RESTful way. Along the way, we’ll illustrate the theory with an example application, which simulates the process of keeping track of data related to a company’s clients through a web interface.


HTTP

HTTP is the protocol that allows for sending documents back and forth on the web.

HTTP is the protocol that allows for sending documents back and forth on the web. A protocol is a set of rules that determines which messages can be exchanged, and which messages are appropriate replies to others. Another common protocol is POP3, which you might use to fetch email on your hard disk.

In HTTP, there are two different roles: server and client. In general, the client always initiates the conversation; the server replies. HTTP is text based; that is, messages are essentially bits of text, although the message body can also contain other media. Text usage makes it easy to monitor an HTTP exchange.

HTTP messages are made of a header and a body. The body can often remain empty; it contains data that you want to transmit over the network, in order to use it according to the instructions in the header. The header contains metadata, such as encoding information; but, in the case of a request, it also contains the important HTTP methods. In the REST style, you will find that header data is often more significant than the body.
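
To make this split concrete, here is what a complete request might look like on the wire (the host and payload here are hypothetical, in the spirit of the examples that follow):

```
PUT /clients/robin HTTP/1.1
Host: example.com
Content-Type: application/json
Content-Length: 30

{"address":"Sunset Boulevard"}
```

Everything above the blank line is the header; the JSON after it is the body.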


Spying HTTP at Work

If you use Chrome Developer Tools, or Firefox with the Firebug extension installed, click on the Net panel, and set it to enabled. You will then have the ability to view the details of the HTTP requests as you surf. For example:

Screenshot of Firebug Net panel

Another helpful way to familiarize yourself with HTTP is to use a dedicated client, such as cURL.

cURL is a command line tool that is available on all major operating systems.

Once you have cURL installed, type:

curl -v google.com

This will display the complete HTTP conversation. Requests are preceded by >, while responses are preceded by <.


URLs

URLs are how you identify the things that you want to operate on. We say that each URL identifies a resource. These are exactly the same URLs which are assigned to web pages. In fact, a web page is a type of resource. Let’s take a more exotic example, and consider our sample application, which manages the list of a company’s clients:

/clients

will identify all clients, while

/clients/jim

will identify the client, named ‘Jim’, assuming that he is the only one with that name.

In these examples, we do not generally include the hostname in the URL, as it is irrelevant from the standpoint of how the interface is organized. Nevertheless, the hostname is important to ensure that the resource identifier is unique across the web. We often say that you send the request for a resource to a host. The host is included in the request header separately from the resource path, which appears on the first line of the request:

GET /clients/jim HTTP/1.1
Host: example.com

Resources are best thought of as nouns. For example, the following is not RESTful:

/clients/add

This is because it uses a URL to describe an action. This is a fairly fundamental point in distinguishing RESTful from non-RESTful systems.
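
Put differently: keep nouns in the URL and let the HTTP verb (covered below) carry the action. A quick sketch of the contrast:

```
POST /clients/add      <- not RESTful: the action is baked into the URL
PUT /clients/anne      <- RESTful: the URL names a thing; the verb acts on it
```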

Finally, URLs should be as precise as needed; everything needed to uniquely identify a resource should be in the URL. You should not need to include data identifying the resource in the request. This way, URLs act as a complete map of all the data your application handles.

But how do you specify an action? For example, how do you tell that you want a new client record created instead of retrieved? That is where HTTP verbs come into play.


HTTP Verbs

Each request specifies a certain HTTP verb, or method, in the request header. This is the first all caps word in the request header. For instance,

GET / HTTP/1.1

means the GET method is being used, while

DELETE /clients/anne HTTP/1.1

means the DELETE method is being used.

HTTP verbs tell the server what to do with the data identified by the URL.

HTTP verbs tell the server what to do with the data identified by the URL. The request can optionally contain additional information in its body, which might be required to perform the operation – for instance, data you want to store with the resource. You can supply this data in cURL with the -d option.

If you’ve ever created HTML forms, you’ll be familiar with two of the most important HTTP verbs: GET and POST. But there are far more HTTP verbs available. The most important ones for building RESTful APIs are GET, POST, PUT, and DELETE. Other methods are available, such as HEAD and OPTIONS, but they are rarer (if you want to learn about all the other HTTP methods, the official source is the IETF).

GET

GET is the simplest type of HTTP request method; the one that browsers use each time you click a link or type a URL into the address bar. It instructs the server to transmit the data identified by the URL to the client. Data should never be modified on the server side as a result of a GET request. In this sense, a GET request is read-only, but of course, once the client receives the data, it is free to do any operation with it on its own side – for instance, format it for display.

PUT

A PUT request is used when you wish to create or update the resource identified by the URL. For example,

PUT /clients/robin

might create a client named Robin on the server. You will notice that REST is completely backend agnostic; nothing in the request informs the server how the data should be created – just that it should. This allows you to easily swap the backend technology if the need arises. PUT requests contain the data to use in updating or creating the resource in the body. In cURL, you can attach this data to the request with the -d switch.

curl -v -X PUT http://example.com/clients/robin -d "some text"

DELETE

DELETE should perform the contrary of PUT; it should be used when you want to delete the resource identified by the URL of the request.

curl -v -X DELETE http://example.com/clients/anne

This will delete all data associated with the resource, identified by /clients/anne.

POST

POST is used when the processing you wish to happen on the server should be repeated if the POST request is repeated (that is, POST is not idempotent; more on that below). In addition, POST requests should cause processing of the request body as a subordinate of the URL you are posting to.

In plain words:

POST /clients/

should not cause the resource at /clients/, itself, to be modified, but a resource whose URL starts with /clients/. For instance, it could append a new client to the list, with an id generated by the server.

/clients/some-unique-id

PUT requests can easily be used instead of POST requests, and vice versa. Some systems use only one; some use POST for create operations and PUT for update operations (since with a PUT request you always supply the complete URL); some even use POST for updates and PUT for creates.

Often, POST requests are used to trigger operations on the server, which do not fit into the Create/Update/Delete paradigm; but this, however, is beyond the scope of REST. In our example, we’ll stick with PUT all the way.


Classifying HTTP Methods

Safe and unsafe methods:
Safe methods are those that never modify resources. The only safe method, of the four listed above, is GET. The others are unsafe, because they may result in a modification of the resources.
Idempotent methods:
These methods achieve the same result no matter how many times the request is repeated: they are GET, PUT, and DELETE. The only non-idempotent method is POST. That PUT and DELETE are considered idempotent might be surprising, but it is, in fact, quite easy to explain: repeating a PUT with exactly the same body should modify the resource in a way that leaves it identical to the one described in the previous PUT request: nothing will change! Similarly, it makes no sense to delete a resource twice. It follows that no matter how many times a PUT or DELETE request is repeated, the result should be the same as if it had been done only once.
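
Idempotency is easy to see in miniature. The following shell sketch (no real HTTP involved; a temporary file stands in for a server-side resource) mimics the PUT and DELETE rules:

```shell
# A plain file plays the role of the resource /clients/jim.
resource=$(mktemp)

# "PUT": replace the representation wholesale; repeating it changes nothing.
printf '%s' '{"address":"Sunset Boulevard"}' > "$resource"
printf '%s' '{"address":"Sunset Boulevard"}' > "$resource"
cat "$resource"; echo    # {"address":"Sunset Boulevard"}

# "DELETE": remove the resource; a second delete leaves the same end state.
rm -f "$resource"
rm -f "$resource"
test -e "$resource" || echo "resource gone"
```

However many times either block is repeated, the final state of the "resource" is the same – which is exactly what idempotency promises.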

Remember: it’s you, the programmer, who ultimately decides what happens when a certain HTTP method is used. There is nothing inherent to HTTP implementations that will automatically cause resources to be created, listed, deleted, or updated. You must be careful to apply the HTTP protocol correctly and enforce these semantics yourself.


Representations

The HTTP client and HTTP server exchange information about resources identified by URLs.

We can sum up what we have learned so far in the following way: the HTTP client and HTTP server exchange information about resources identified by URLs.

We say that the request and response contain a representation of the resource. By representation, we mean information, in a certain format, about the state of the resource or how that state should be in the future. Both the header and the body are pieces of the representation.

The HTTP headers, which contain metadata, are tightly defined by the HTTP spec; they can only contain plain text, and must be formatted in a certain manner.

The body can contain data in any format, and this is where the power of HTTP truly shines. You know that you can send plain text, pictures, HTML, and XML in any human language. Through request metadata or different URLs, you can choose between different representations for the same resource. For example, you might send a webpage to browsers and JSON to applications.

The HTTP response should specify the content type of the body. This is done in the header, in the Content-Type field; for instance:

Content-Type: application/json

For simplicity, our example application only sends JSON back and forth, but the application should be architected in such a way that you can easily change the format of the data to cater to different clients or user preferences.


HTTP Client Libraries

cURL is, more often than not, the HTTP client solution of choice for PHP developers.

To experiment with the different request methods, you need a client, which allows you to specify which method to use. Unfortunately, HTML forms do not fit the bill, as they only allow you to make GET and POST requests. In real life, APIs are accessed programmatically through a separate client application, or through JavaScript in the browser.

This is the reason why, in addition to the server, it is essential to have good HTTP client capabilities available in your programming language of choice.

A very popular HTTP client library is, again, cURL. You’ve already seen the cURL command earlier in this tutorial. cURL includes both a standalone command line program and a library that can be used by various programming languages. In particular, cURL is, more often than not, the HTTP client solution of choice for PHP developers. Other languages, such as Python, offer more native HTTP client libraries.


Setting up the Example Application

I want to expose the low-level functionality as much as possible.

Our example PHP application is extremely barebones. I want to expose the low-level functionality as much as possible, without any framework magic. I also did not want to use a real API, such as Twitter’s, because real APIs are subject to unexpected change, require you to set up authentication (which can be a hassle), and, obviously, do not let you study the implementation.

To run the example application, you will need to install PHP 5 and a web server with some mechanism for running PHP. You need at least version 5.2 to have access to the json_encode() and json_decode() functions.

As for servers, the most common choice is still Apache with mod_php, but you’re free to use any alternative that you’re comfortable with. There is a sample Apache configuration, which contains rewrite rules to help you set up the application quickly. All requests to any URL starting with /clients/ must be routed to our server.php file.

In Apache, you need to enable mod_rewrite and put the supplied mod_rewrite configuration somewhere in your Apache configuration, or in your .htaccess file. This way, server.php will respond to all requests coming from the server. The same must be achieved with Nginx, or whichever alternative server you decide to use.


How the Example Application Works

There are two keys to processing requests the REST way. The first key is to initiate different processing depending on the HTTP method – even when the URLs are the same. In PHP, there is a variable in the $_SERVER global array that determines which method has been used to make the request:

$_SERVER['REQUEST_METHOD']

This variable contains the method name as a string, for instance ‘GET‘, ‘PUT‘, and so on.

The other key is to know which URL has been requested. To do this, we use another standard PHP variable:

$_SERVER['REQUEST_URI']

This variable contains the URL starting from the first forward slash. For instance, if the host name is ‘example.com‘, ‘http://example.com/‘ would return ‘/‘, while ‘http://example.com/test/‘ would return ‘/test/‘.
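
The routing logic below hinges on splitting this path into segments. The same split can be sketched in shell using parameter expansion (purely illustrative; the PHP application would use explode() instead):

```shell
uri="/clients/jim"

trimmed="${uri#/}"          # drop the leading slash -> "clients/jim"
resource="${trimmed%%/*}"   # first segment          -> "clients"
rest="${trimmed#*/}"        # second segment         -> "jim"
# (if there is no '/' left, "$rest" simply equals "$trimmed")

echo "$resource"   # clients
echo "$rest"       # jim
```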

Let’s first attempt to determine which URL has been called. We only consider URLs starting with ‘clients‘. All others are invalid.

// $paths holds the URL segments; it could be derived, for example, as:
// $paths = explode('/', trim($_SERVER['REQUEST_URI'], '/'));
$resource = array_shift($paths);
if ($resource == 'clients') {
    $name = array_shift($paths);
    if (empty($name)) {
        $this->handle_base($method);
    } else {
        $this->handle_name($method, $name);
    }
} else {
    // We only handle resources under 'clients'
    header('HTTP/1.1 404 Not Found');
}

We have two possible outcomes:

  • The resource is the client list, in which case we return a complete listing
  • There is a further identifier

If there is a further identifier, we assume it is the client’s name, and, again, forward it to a different function, depending on the method. We use a switch statement, which should be avoided in a real application:

switch ($method) {
    case 'PUT':
        $this->create_contact($name);
        break;
    case 'DELETE':
        $this->delete_contact($name);
        break;
    case 'GET':
        $this->display_contact($name);
        break;
    default:
        header('HTTP/1.1 405 Method Not Allowed');
        header('Allow: GET, PUT, DELETE');
        break;
}

Response Codes

HTTP response codes standardize a way of informing the client about the result of its request.

You might have noticed that the example application uses the PHP header() function, passing some strange-looking strings as arguments. The header() function outputs the HTTP headers and ensures that they are formatted appropriately. Headers should be the first part of the response, so you shouldn’t output anything else before you are done with them. Sometimes, your HTTP server may be configured to add other headers in addition to those you specify in your code.

Headers contain all sort of meta information; for example, the text encoding used in the message body or the MIME type of the body’s content. In this case, we are explicitly specifying the HTTP response codes. HTTP response codes standardize a way of informing the client about the result of its request. By default, PHP returns a 200 response code, which means that the response is successful.

The server should return the most appropriate HTTP response code; this way, the client can attempt to repair its errors, assuming there are any. Most people are familiar with the common 404 Not Found response code; however, there are many more available to fit a wide variety of situations.

Keep in mind that the meaning of an HTTP response code is not extremely precise; this is a consequence of HTTP itself being rather generic. You should attempt to use the response code that most closely matches the situation at hand. That being said, do not worry too much if you cannot find an exact fit.

Here are some HTTP response codes, which are often used with REST:

200 OK

This response code indicates that the request was successful.

201 Created

This indicates that the request was successful and a resource was created. It is used to confirm the success of a PUT or POST request.

400 Bad Request

The request was malformed. This happens especially with POST and PUT requests, when the data does not pass validation, or is in the wrong format.

404 Not Found

This response indicates that the required resource could not be found. This is generally returned to all requests which point to a URL with no corresponding resource.

401 Unauthorized

This error indicates that you need to perform authentication before accessing the resource.

405 Method Not Allowed

The HTTP method used is not supported for this resource.

409 Conflict

This indicates a conflict. For instance, you are using a PUT request to create the same resource twice.

500 Internal Server Error

When all else fails: generally, a 500 response is used when processing fails due to unanticipated circumstances on the server side that cause the server to error out.


Exercising the Example Application

Let’s begin by simply fetching information from the application. We want the details of the client, ‘jim‘, so let’s send a simple GET request to the URL for this resource:

curl -v http://localhost:80/clients/jim

This will display the complete message headers. The last line in the response will be the message body; in this case, it will be JSON containing Jim’s address (remember that omitting a method name will result in a GET request; also replace localhost:80 with the server name and port you are using).

Next, we can obtain the information for all clients at once:

curl -v http://localhost:80/clients/

To create a new client, named Paul…

curl -v -X PUT http://localhost:80/clients/paul -d '{"address":"Sunset Boulevard"}'

and you will receive the list of all clients now containing Paul as a confirmation.

Finally, to delete a client:

curl -v -X DELETE http://localhost:80/clients/anne

You will find that the returned JSON no longer contains any data about Anne.

If you try to retrieve a non-existing client, for example:

curl -v http://localhost:80/clients/jerry

You will obtain a 404 error, while, if you attempt to create a client which already exists:

curl -v -X PUT http://localhost:80/clients/anne

You will instead receive a 409 error.


Conclusion

In general, the less assumptions beyond HTTP you make, the better.

It’s important to remember that HTTP was conceived to communicate between systems, which share nothing but an understanding of the protocol. In general, the less assumptions beyond HTTP you make, the better: this allows the widest range of programs and devices to access your API.

I used PHP in this tutorial, because it is most likely the language most familiar to Nettuts+ readers. That said, PHP, although designed for the web, is probably not the best language to use when working in a REST way, as it handles PUT requests in a completely different fashion than GET and POST.

Beyond PHP, you might consider the following:

Among the applications which attempt to adhere to REST principles, the classic example is the Atom Publishing Protocol, though it’s honestly not used too often in practice. For a modern application, which is built on the philosophy of using HTTP to the fullest, refer to Apache CouchDB.

Have fun!


The First Ever Tuts+ Premium Sale


Those of you who help support the Tuts+ network by becoming Premium members will be well aware of the improvements that we’ve made in the last year, including a full redesign, in-depth screencast courses on the technologies that you most want to learn, and eBook partnerships with Smashing Magazine and Packt Publishing.

To celebrate the new year, we’re offering a sale on our subscription for the very first time!


25% Off

Sale

Sign up before January 31st, and save 25% off the already discounted yearly membership price! You can now enjoy an entire year of Tuts+ Premium content for only $135, which is a cumulative savings of $93 off the monthly price.


Top-Tier Courses

Each course on Tuts+ Premium consists of multi-part screencasts, which teach you a particular tool or technology from the inside-out. We’ve done our best to acquire the best talent available, including Jeremy McPeak, Dan Wellman, Andrew Burgess, Bryan Jones, Rey Bango, Jose Mota, myself (Jeffrey), and many more instructors.

Here’s but a small sampling of our course catalog, all of which is included with your subscription.

  • Go Portable With jQuery Mobile

    Go Portable With jQuery Mobile

    In this course, we’ll review the widgets that come with jQuery Mobile, and how they can be initialized and configured. We’ll focus on the data-attribute method of using the framework, and learn how we can use it without having to write a single line of JavaScript or CSS.

  • CSS Terminology Decoded

    CSS Terminology Decoded

    CSS is an interesting language. It’s quite easy to learn the basics, but truly mastering it is a much more involved process that can take years – despite what your developer friends may tell you.

    Anyone can apply a bit of styling, but do you understand all of the various terminology associated with modern CSS? Concatenation, preprocessors, frameworks, OOCSS, responsive design… the list goes on and on!

  • Simple Sinatra

    Simple Sinatra

    Welcome to “Simple Sinatra.” This course is all about coming to grips with the popular lightweight Ruby web framework, Sinatra. Sinatra stays out of your way by providing a minimal API that allows you to get up and running with a new web application as quickly as possible.

  • Agile Design Patterns

    Agile Design Patterns

    Design patterns are an essential part of software development. At some point in every programmer’s career, he or she will have to dig in and learn how to apply these patterns. Even if they may appear scary at first, they are, in fact, much simpler to understand than you might initially think. In this course, you’ll learn what design patterns are, how each of them is defined, what they are used for, and, of course, how to implement them in PHP!

  • Advanced Backbone Patterns and Techniques

    Advanced Backbone Patterns and Techniques

    Backbone is one of the best JavaScript libraries available, but it’s likely that there are some advanced use cases that you haven’t yet considered. In fact, Backbone’s sparse but stalwart set of features might leave you thinking that it just isn’t the right tool for advanced web applications. Nothing could be further from the truth! In this course, continuing on from Connected to the Backbone, we’ll dig into some of the more advanced patterns, as you learn how to wield Backbone with precision.

  • Test-Driven PHP in Action

    Test-Driven PHP in Action

    In this course, join Radoslaw Benkel, as he takes you through the ins and outs of using PHP’s most popular test-suite, PHPUnit. Along the way, you’ll of course learn how to install it on your system, how to use the various assertions, how to create mocks, and much more!

  • Riding Ruby on Rails

    Riding Ruby on Rails

    Been meaning to learn Ruby on Rails, but felt that it was too hard? Well, not anymore. In this course, José Mota will take you through the process of creating dynamic and creative web applications, using Ruby on Rails. Come join us for the ride!

  • jQuery Plugin Development: Best Practices

    jQuery Plugin Development: Best Practices

    This course will explain the fundamental aspects of writing great jQuery plugins. Some of the topics we’ll cover include how plugins extend jQuery’s prototype to add new methods to the library, understanding the this object inside a plugin, keeping plugins configurable, theming them, handling events and exposing AJAX options for easier implementation by others. Sound fun?

  • Programming .NET

    Programming .NET

    For over ten years, Microsoft’s .NET Framework has been the platform to develop for if you want to target Windows or Windows-based technologies. Getting started with the .NET Framework can be a daunting task, as the .NET Framework class library is pretty darn big. But, thankfully, there are a core set of classes that you can use in any Windows or Web app.

  • Mac App Development

    Mac App Development

    In this course, Bryan Jones, creator of the immensely popular CodeKit app, will teach you the ins and outs of Mac application development.

This list could go on and on. Even better, we’re releasing more courses each month than ever before! If it’s a skill that you want to learn, chances are, we either offer it on the site, or it’s in development by one of our staff instructors.


Want Some Freebie Courses?

  • Perfect Workflow in Sublime Text 2

    Perfect Workflow in Sublime Text 2

    I’m a confessed code editor addict, and have tried them all! I was an early adopter of Coda, a TextMate advocate, even a Vim convert. But all of that changed when I discovered Sublime Text 2, the best code editor available today. Don’t believe me? Let me convince you in this course.

  • 30 Days To Learn HTML and CSS

    30 Days To Learn HTML and CSS

    Even if your goal is not to become a web designer, learning HTML and CSS can be an amazing tool to have in your skill-set – both in the workplace, and at home. If this has been on your to-do list for some time, why don’t you take thirty days and join me? Give me around ten minutes every day, and I’ll teach you the essentials of HTML and CSS. And don’t worry…we start at the beginning!

  • 30 Days to Learn jQuery

    30 Days to Learn jQuery

    You know you need to learn jQuery. Everyone else has, and you’re falling behind! No worries; it’s a brand new year, and what better way to celebrate it than by learning a new skill? Give me thirty minutes every day for the next month, and I’ll transform you into a jQuery pro!


eBooks

In addition to our course catalog, we also regularly add new top-selling eBooks, which are free to download for all Premium members. Some of our most popular selections include:

  • Scalable and Modular Architecture for CSS

    Scalable and Modular Architecture for CSS

    SMACSS (pronounced “smacks”) is more style guide than rigid framework. There is no library within here for you to download or install. SMACSS is a way to examine your design process, and a way to fit those rigid frameworks into a flexible thought process. It is an attempt to document a consistent approach to site development when using CSS. And really, who isn’t building a site with CSS these days?!

  • Finance for Freelancers

    Finance for Freelancers

    Getting on top of your finances doesn’t have to be a headache. Freelance veteran Martha Retallick makes finance and accounting concepts fun and friendly.

  • The Grumpy Programmer’s Guide To Building Testable Applications in PHP

    The Grumpy Programmer’s Guide To Building Testable Applications in PHP

    There are plenty of books that show the aspiring PHP programmer how to use testing tools. But how do you actually build your application in such a way that using the testing tools is easy instead of a constant struggle?

  • The Node Beginner Book

    The Node Beginner Book

    The aim of The Node Beginner Book is to get you started with developing applications for Node.js, teaching you everything you need to know about advanced JavaScript along the way in 69 pages.

  • Digging into WordPress

    Digging into WordPress

    Written by WordPress veterans Chris Coyier and Jeff Starr, Digging into WordPress is 400+ jam-packed pages of everything you need to get the most out of WordPress. WordPress is great right out of the box, but unless you want an ordinary vanilla blog, it is essential to understand the full potential of WordPress and have the right tools to get the job done.

  • Decoding HTML5

    Decoding HTML5

    This book focuses less on the politics of HTML5 (though it does touch on this), and more on the ways to immediately integrate HTML5—and its friends—into your web projects. If you’re in need of a book that will get you up and running with many of the new tags, form elements, and JavaScript APIs as quickly as possible, then this is the book for you.

Beginning in January 2013, we’ll also be releasing two new, entirely free books from Packt Publishing to our members each month.


1 Year Subscription for $135

Every speck of content listed above, as well as dozens upon dozens more courses and ebooks are included as part of your Tuts+ Premium membership. With the sale that we’re running this month, you can access it all – as well as the new content that we have in production (there’s a lot) – for $11 per month!

The best part, though, is that, in addition to all of the courses and eBooks that Premium has to offer, your subscription also helps to support our free educational sites, like Nettuts+, which we’re incredibly passionate about.

I hope you’ll consider signing up! If you have any questions, let me know below, and I’ll do my best to assist! See you inside!

SCRUM: The Story of an Agile Team


Scrum is one of the most heavily used agile techniques. It’s not about coding; instead, it focuses on organization and project management. If you have a few moments, let me tell you about the team I work with, and how we adopted Scrum techniques.


A Little History

Scrum’s roots actually extend beyond the Agile era.

Scrum’s roots actually extend beyond the Agile era. The first mention of this technique can be found in 1986, by Hirotaka Takeuchi and Ikujiro Nonaka, for commercial product development. The first official paper defining Scrum, written by Jeff Sutherland and Ken Schwaber, was presented in 1995.

Scrum’s popularity grew shortly after the 2001 publication of the Agile Manifesto, as well as the book Agile Software Development with Scrum, coauthored by Ken Schwaber and Mike Beedle.


A Few Facts

Scrum defines a set of recommendations, which teams are encouraged to follow. It also defines several actors – or roles, if you prefer that terminology – together with an iterative process of production and periodical planning. There are several tools that accommodate the Scrum process. I will reference a few in this article, but the most powerful tools are the white board and sticky notes.

There is not, and never will be, a list of “Scrum Best Practices,” because team and project context trumps all other considerations. — Mike Cohn

The Roles

Everything starts with the pig and the chicken. The chicken asks the pig if he is interested in jointly opening a restaurant, suggesting they could call it, “Ham-and-Eggs.” The pig answers, “No thanks. I’d be committed, but you’d only be involved!”

That’s Scrum! It specifies a concrete set of roles, which are divided into two groups:

  • Committed – those directly responsible for production and delivery of the final product. These roles include the team as a whole, its members, the scrum master, and the product owner.
  • Involved – represents the other people interested in the project, but who aren’t taking an active or direct part in the production and delivery processes. These roles are typically stakeholders and managers.

This is How We Started

Everything depends on dedication and good will. If you want your team to be efficient, productive, and deliver on time, you need someone to embrace some form of Agile techniques. Scrum may or may not be ideal for you, but it is surely one of the best places to start. Find that someone on your team who is willing to help the others, or you, yourself, can take on the responsibility of introducing Scrum.

You may ask why you should care how another team, like mine, does Scrum. You should care because we all learn how to do Scrum better by hearing stories of how it has been done by others – especially those who are doing it well. – Mike Cohn

The talented team I work with already knew a lot about Agile. We switched from Waterfall development to a more agile process, and released quite frequently. We successfully managed to release every three to six months, having a decently low number of bugs after each release.

But, still, we were far from what we can achieve today. We lacked the process, or rules, that would force us to change our perspective on the product and process. That was the moment when our team manager introduced us to Scrum, a term we, at that time, had never heard of.

This person took the role of the Scrum Master.

The Scrum Master

The Scrum Master is easily one of the most important roles. This person is responsible for creating a bridge between the Product Owner (defined below) and the Team (also defined later). This person usually possesses strong technical knowledge, and actively participates in the development process. He or she also communicates with the Product Owner about the User Stories, and how to organize the Backlog.

The Scrum Master coordinates development processes, but he does not micro-manage (the team is self-organized). At the beginning of the process, however, the Scrum Master might micro-manage part of the team, in order to improve their team interaction and self-organization techniques.

Scrum Masters have more responsibilities, and I’ll cover them as we discuss this process.


Introducing the Term “Sprint”

Personally, we didn’t have any problem with three-to-six-month releases, but I originally couldn’t imagine a more frequent release schedule. I thought it was too fast, and didn’t provide us with the necessary time to integrate and debug our products. But then, our Scrum Master introduced us to the term, sprint.

Sprint: a basic unit of development in Scrum. It can take between one week and one month, and the product is in a stable state after each sprint.

That sounds outrageous! Stable every week? Impossible! But, in actuality, it’s quite possible. First, we reduced our production cycles from three months to one-and-a-half months, and, finally, to a single month. All of this was accomplished without changing our style. However, you’ll need to introduce some rules for sprints shorter than thirty days. There’s no magic rule-set here; the rules must benefit your project.

If I recall correctly, the first significant change in our development process came by the introduction of sprint planning.

Sprint Planning

This is one of the several meetings that Scrum recommends. Before each new sprint, the team, product owner, and scrum master plan the next sprint. This meeting can take a whole day, but shorter sprints likely only need a couple of hours.

Our process is mostly reviewing the product backlog, and deciding upon a subset of user stories that will be included in the next sprint. These decisions are made by direct negotiations between the team, represented by the scrum master, and the product owner.

The Product Owner

Setting the direction of a product by guessing which small feature will provide the most value may be the most difficult task.

This person is responsible for defining the User Stories and maintaining the Product Backlog. He or she is also a bridge between the team and higher management. The product owner evaluates the requests from stakeholders, higher management, users, and other feedback (like bug reports, user surveys, etc).

He or she prioritizes this backlog, providing the maximum value to the stakeholders in the shortest possible time. The product owner achieves this by planning the most valuable user stories that can be completed in a timely manner. This may sound sophisticated – it is! In fact, setting the direction of a product by guessing which small feature will provide the most value may be the most difficult task of the whole process. On the other hand, sometimes it’s rather easy. You may have a thousand users asking for a specific feature. In these cases, the correct choice is obvious.

If those users represent a large portion of your user base, providing that specific feature increases loyalty.

But again, this is a difficult choice. What if you could increase your user base by 100% by implementing a different feature? So, you can either increase your current users’ loyalty, or increase the user base. What is the correct choice? It all depends on the current direction of the business, and the product owner must decide where to take the product.

In the company I work for, these decisions propagate to the team. It’s not a requirement of the Scrum process, but it is especially useful with new teams. Sharing information goes a long way in helping everyone understand why some decisions are made, and why seemingly obvious features may be delayed or dropped.


The Planning Board

I remember the morning it happened: I arrived at the office, only to find our scrum master preparing a makeshift planning board with A4 paper and transparent tape. I had no idea what he was doing. As with every morning, I prepared a pot of coffee, and waited to see.

When he finished, a makeshift white board hung on the wall. It had several columns, forming a rectangular grid, and several colored sticky notes peppered the “board.” That was two years ago.

The board now accommodates the Lean Development Process that we use today. Remember, Agile is all about change and adapting to change. Never blindly follow the rules.

So, what’s on that board?

Columns for the Development Process

There are four main columns:

  • Release Backlog – Where all the stories reside for the current release. Yes, the product is ready for release after each sprint, but that doesn’t necessarily mean that we actually release it. We typically have five-day sprints.
  • Sprint Backlog – When we plan, we negotiate what the product owner wants to complete in the sprint. How do we decide what we can and cannot complete? By estimating the difficulty of each story (see below – Estimating stories). The end result is a set of stories moved from the release backlog to the sprint backlog. The team concentrates on finishing those stories in the upcoming week.
  • Working On – This one is simple. When team members take a story, they add it to this column, to show everyone what they are working on. This is not meant for employee control, but rather for communicating with team members. Everyone should always know what their teammates are working on. In the above image, the small bookmarks stuck on the sticky notes contain my team members’ names.
  • Done – Complete all the things! Yes, this is where completed stories go. However, it’s important to define “what is done.” At the end of an ideal sprint, all stories and bugs from the sprint backlog should be contained within this column.

Tip: Many teams like to split the Working On column into several others to better define different stages of work. We split ours into Design, Development and Testing. Feel free to invent your own steps, according to your process.

The Definition of Done

What is done? When can you confidently state that a story is complete? How do you know?

“Done” must be clearly and precisely defined, so that everyone knows when a story is complete. The definition of “done” can differ from team to team, and even from project to project. There is no exact rule. I recommend that you raise this issue at a team meeting, and decide what determines a complete story. Here are some ideas that you might want to consider:

  • Create a minimalistic design.
  • Create a GUI mockup.
  • Use TDD or ensure that there are unit tests.
  • Write the code.
  • Let another team member manually test your story.
  • The whole system can be built and compiled with the new code.
  • Functional or acceptance tests pass as expected, after the new code is integrated into the system.

There are multiple ideas that can be included in the definition of done. Take what you consider to be necessary for your team and use it. Also, don’t forget to adapt this definition over time. If you notice that your definition is becoming outdated, you may consider removing some elements or adding necessary, but frequently forgotten, ideas.

In the picture above, the green sticky notes describe what we considered to be done, for each part of the process.


Populating the Board

How do we populate the board? That was the question we asked ourselves. Until this point, we did not use sticky notes for planning. We used software to keep track of user stories and bugs, but, other than that, we used nothing. After lunch, our scrum master presented us with a mountain of colored sticky notes. After preparing a dozen notes, he explained them to us.

The User Stories

A user story is a short, one sentence definition of a feature or functionality.

These represent the main features that we want to implement. A user story is a short, one sentence definition of a feature or functionality. It is referred to as a user story, because it is presented from the perspective of a user. Naturally, the term user is the person using our application. This person can be in one or more different categories: a system administrator, restricted user, manager, etc.

An example of such a story might sound something like, “As a user, I want to share a folder with my teammates.”

At that point, we did not have a product owner defined, so our scrum master invented these stories. This is acceptable at the beginning, but I highly recommended that you separate these two roles. Otherwise, how can the scrum master negotiate the sprint backlog with the product owner?

You may think to yourself, “Why negotiate? Isn’t it actually better for a single person to decide what to do and when?” Not quite. The two roles would be influenced by a single person’s views of the system or project. Two people, on the other hand, have a better chance of providing a more objective path for the team, ensuring the end goal (better, more valuable software) is achieved.

The product owner should define user stories; the team should negotiate their execution, and the scrum master represents them.

User stories define everything new that needs to be done; they’re represented by yellow sticky notes on our board.

Bugs, Bugs, Bugs

Tracking your bugs is incredibly important.

We also list bugs on the board. Do you see those red sticky notes? Those are the bugs that we need to fix for the next release.

Different teams treat bugs in different ways. Our team mixes the bugs with the stories, but we always begin a sprint by fixing the bugs.

I know of other teams who pile up bugs for a period of three sprints, and spend every fourth sprint fixing them. Others split sprints into 25/75 for bugs/stories. Further, other teams may work on stories in the morning, and bugs after lunch; it simply depends on the company.

It’s up to you to find the best solution for your team, and of course, keep track of your bugs. Write them on sticky notes so that you can track your system’s issues and the fixes for those issues. Tracking your bugs is incredibly important.

Tasks or Sub-Stories

Tasks are written as simple sentences from the developer’s point of view.

Ideally, each story should be short enough to be completed with relative ease, but splitting stories into other stories can prove difficult. Some projects simply don’t allow for it. Still, you’ll find large stories that several team members can work on. It’s important to divide huge chunks of work into smaller, easier to manage pieces.

One approach splits big stories into Tasks. Tasks are written as simple sentences from the developer’s point of view. For example, the previous folder sharing story might be divided into several tasks, such as: “Create UI for sharing”, “Implement public sharing mechanism”, “Implement access control functionality”, “Add Access Control checkboxes to the UI”, and so on. The point is that you have to think more about the story every time you break it into smaller tasks. Your team will have much greater control over a story, when you analyze each piece.

We use light-blue sticky notes for tasks on our board. Look in the last column (the “Done” column), and you’ll find our tasks under the user story stickies. That particular story was broken into around four tasks.

Technical Tasks

Certain activities must be completed in order to finish a task, story, or the sprint as a whole. These tasks are usually infrastructure related, but you may find that the tasks require changes to the system. This process may or may not be part of a story. For example, you may find that a third party application must be installed, in order to implement your application’s sharing capability. Is this part of our user story? Possibly, but maybe not.

We determined that these tasks should be separated from the actual story. This helped us to better track these extra tasks. On our board, you can find these tasks on green sticky notes. There is one on the sprint backlog, and about three in testing.

The Technical Backlog

For a young team with little experience with Agile and Scrum, it’s helpful to highlight these tasks with a mini-backlog.

This backlog is for infrastructural tasks, such as updating the automated testing system, configuring a new server, and other things, which make our everyday development work easier. These are things that must be completed at some point, but are not directly related to development.

You don’t have to put these items into a separate backlog. In fact, I know of teams who don’t separate them. Our team dropped our technical backlog a few months ago; we decided that infrastructural tasks are as important as other tasks. However, for a young team with little experience in Agile and Scrum, it’s helpful to highlight these tasks with a mini-backlog. So, I recommend that you give it a try. It may work for you, and, if not, then just put Infrastructural Tasks on your planning board, possibly with a different color.


The Big Challenge: Estimation

During a planning meeting, we decide which user stories and bugs from the product backlog (or release backlog in our case) to include in the next sprint. This may sound simple, but, in reality, it’s quite complicated.

The product owner comes forward with a set of stories to work on. This list typically contains more work than what can be accomplished in the sprint. The scrum master, together with the team, has to convince the product owner of what can be done during a sprint. Over time, this process becomes easier, as the product owner learns the approximate velocity of the team. Then again, the team may become more productive with each sprint, thus allowing more stories to be finished. The trick is to have a team who really wants to exceed expectations!

Now, the product owner wants to complete more stories than we can do in a sprint. We need to estimate the amount of work we can do in relation to the stories submitted by the product owner, and we can do this in a variety of ways.

Story Points

Story Points are one of the most common methods for estimating stories, bugs or tasks. They are not necessarily the best approach, but they are still a good way to start.

So what is a story point? At the beginning of the process, the team looks for the simplest story they can find on the board. It doesn’t matter how difficult it is, or how long it takes to complete. When they find that story, they set it as the reference story, worth one point. In some projects, such a story can be as simple as fixing UI elements in ten minutes; whereas, the simplest story for more complex systems may take two hours for three people to complete. Now that you have a baseline, evaluate the other stories and bugs and assign them points.

This can be more difficult than it seems, and there are several point techniques to better help estimate stories. Here are two of them:

  • Use Fibonacci Numbers – 1, 2, 3, 5, 8 (and maybe 13, but a 13-sized story smells too big to me).
  • Use Powers of 2 – 1, 2, 4, 8 (and maybe 16, but this number should be avoided).

You are free to choose whatever you feel most comfortable with. Be agile! Maybe you want to use two points increments, because your tasks are best estimated that way. Bravo to you!
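To make the scale concrete, here is a small, hypothetical Python helper; the scale values come from the list above, but the helper itself is just an illustrative sketch, not part of Scrum:

```python
# Fibonacci-style point scale from the article; the helper below is a
# hypothetical illustration, not an official Scrum practice.
SCALE = [1, 2, 3, 5, 8]

def snap_to_scale(raw):
    """Return the scale value closest to the raw estimate.

    On a tie, min() keeps the earlier (smaller) scale value.
    """
    return min(SCALE, key=lambda p: abs(p - raw))

print(snap_to_scale(4))    # 4 sits between 3 and 5; the tie goes to 3
print(snap_to_scale(7.6))  # closest to 8
```

In practice, of course, the team debates the number rather than computing it; the point of a coarse scale is to forbid false precision like "4" or "6.5".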

Semaphores

Numbers are great, and many technical people love them. You may find, however, that, at some point, programmers begin to associate story points with time. They will think, “It takes me two days to do this. Let’s give it five points.” This is wrong. Estimates go from bad to worse when this happens. Story points should never relate to time.

Using semaphore colors may help alleviate this problem. Instead of assigning numbers to stories, the team can associate those stories with colors. Our team made this change a few months ago, and it greatly helped change our point of view.

Naturally, each color has a different meaning.

  • Green signifies an easy task that can be completed in the next sprint.
  • Yellow refers to a more difficult task – one that requires more effort from several team members. It also has a high chance of completion in the next sprint.
  • Red labels are assigned to stories, which are extremely difficult and may not be finished in a single sprint. There should be few, if any, such stories, but if you adopt one-week sprints, five days is a short time.

T-Shirt Sizes

Numbers may be ugly to you, colors too artistic. It’s time for t-shirt sizes! This technique suggests giving up on comparing story complexity with time of completion. Simply put, instead of numbers, you use sizes like S, M, L, XL, XXL for your stories.

I personally never felt attracted to this kind of estimation, but, hey, some feel that it’s the best way to go. Try it out, if you feel comfortable with the idea.


The product owner, stake holders, and managers have to know what to expect from the end of a sprint. They must decide if the stories that were worked on should be released, and they have to know when features are ready. It’s not a good solution to release every new feature at the end of a product’s development cycle; releasing the most valuable features on a more frequent basis is a considerably better way to go. To achieve this, they must know what will be available in the short term, and their information should stem from the team. Therefore, the team should estimate, as best as possible, the work they can do in a sprint.


Measuring Speed of Development

So you want to see how well you perform in the current sprint? A frequently used method is the burndown chart:

In this chart, we have a five day sprint, and we assumed that we could complete ten points in the sprint. Each data point represents the remaining story points at the end of each day. The green line reveals the ideal path: a steady two points per day. The red line shows our actual performance, or the true speed of development.

It’s not on the planning board picture, but my team used to have an A4 paper sheet positioned above the planning board, with the burndown chart for each sprint. At the conclusion of each day, one of the team members was responsible for calculating the points completed for that day. It’s simple: if programmers move the stories from column to column as they work, finding the remaining unfinished stories is easy.

There are no half-done stories. If a story is done, it is done. If it is not complete, then it is not counted on the burndown chart.
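That daily bookkeeping is simple subtraction; here is a minimal Python sketch, with invented story names and point values, that also applies the all-or-nothing rule:

```python
# Hypothetical five-day, ten-point sprint; story names and points are invented.
stories = {"share a folder": 3, "access control": 5, "fix login bug": 2}
done = {"share a folder"}  # only fully completed stories count

sprint_points = sum(stories.values())
sprint_days = 5

# Ideal line: a steady burn of sprint_points / sprint_days points per day.
ideal = [sprint_points - d * sprint_points / sprint_days
         for d in range(sprint_days + 1)]

# No credit for half-done work: remaining points only shrink when a story is done.
remaining = sprint_points - sum(stories[s] for s in done)
print(f"Remaining after day 1: {remaining} points (ideal: {ideal[1]:.0f})")
```

Plotting `remaining` at the end of each day against the `ideal` line gives exactly the chart described above.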

Of course, you will fail – BIG TIME – at estimating! This is especially true at the beginning. There is no way to avoid this, unfortunately. There is no way to know how many points you can deliver. Your first chart may very well look like:

Our first chart surely looked similar to that. I think we did not even complete half of what we committed to. Why? Well, because estimation is hard. No matter what you do, or how good you are, when someone asks how complicated something you have never done before is, you will have a hard time providing an accurate estimate. Don’t panic! Try your best. With time, these things become easier. At some point, you may be able to estimate short sprints with 70% accuracy. If the sprints are longer, your accuracy will likely be lower, because there will be more stories to estimate and more variables that can go wrong.

When this happens, you adjust. For the next sprint, take four points. Like this:

This is bad again. You were too conservative, and finished early. It’s a natural reaction for the team to adjust, based on the failure of the previous estimation. Still, this is a failure again, but on the other side of the road.

The problem? What do you do after you’ve finished what you were planning for? Another story? How do you put that on the chart? You can’t redraw the original line.

When working with burndown charts, it’s recommended to average the last three to five sprints in order to set the points for the upcoming sprint. Of course, at the beginning, you don’t have such information available, so you won’t be as accurate as you will be in the future. That’s okay.
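That averaging rule is plain arithmetic; a minimal sketch, with invented sprint results:

```python
# Hypothetical record of points actually finished in recent sprints.
completed_points = [6, 9, 8, 10, 9]

# Average the last three to five sprints to set the next commitment.
window = completed_points[-5:]
next_commitment = round(sum(window) / len(window))
print(f"Suggested commitment for the next sprint: {next_commitment} points")
```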

After some time, your charts will begin to resemble the first example more and more. You will, most of the time, finish all the stories and have a sustained velocity.

Velocity?

This term refers to your speed of development. It relates to how much you can do in a sprint. One of the most important concepts in Agile is to have a consistent velocity. Make the team deliver at a constant pace. With traditional project management, velocity decreases as a project ages. Complexity and rigidity force the speed of development down.

Agile methodologies and techniques aim to maintain a constant pace. Deliver fast now, and deliver faster later. How do they do it? Scrum is one piece of the puzzle. Other important pieces include the techniques that programmers can use to improve their code and development process – for example, XP (eXtreme Programming), pair programming, and TDD (Test Driven Development). All of these, together, can make a team really great.

So we measure this velocity, but what do we actually do with it?

Tip: Measuring velocity is for making better predictions; not for judging a team or its members.

Use velocity to know what your team can do. Use velocity to know what to expect. Use velocity to make your team want more and be better. Never use velocity to judge your team or evaluate the productivity of your programmers.


Always Look Back and Improve

Following the first few sprints, our scrum master gathered the whole team. He began asking us about the good and bad things from the past week. This might be uncomfortable at the beginning, but it’s still incredibly important. Describing what you felt went wrong in the past week will create awareness. And, of course, it is also helpful to highlight what went well!

These meetings are typically referred to as Retrospectives. They offer an opportunity to highlight what went well and what went wrong. Here are some examples from my own retrospectives.

Bad Things

  • Team members were fighting too much
  • Team member X or Y was not collaborative when pair programming
  • We lost too much time with small things, like X or Y
  • We did not pair program all the time
  • We did not write unit tests for module M

When discussing problems, try to put aside your personal feelings and speak about what’s bothering you. This is the only way for the team to resolve the issue. The most important thing is to immediately propose a solution for each problem. Don’t simply let the list be written and forgotten; stay for a few minutes, and have the whole team think about what can be done to avoid these problems next week.

Good Things

  • We finished on time
  • We were able to talk without fighting
  • Some of us became more receptive to suggestions and ideas
  • We wrote all the code with TDD

Again, highlight as many good things as possible – especially at the beginning or with junior programmers. For example, having all the code written with TDD may be a big achievement for a junior team. Make them feel really good about it, make them want to do it more and more. The same is true for a senior team; they simply have other things to highlight (TDD is done by reflex).


I Want to See What You Did This Sprint

The demo is for showing stakeholders (and the product owner) the progress of the project.

This heading comes from the words of my scrum master. At that point, he was also the product owner. Before the end of a sprint, he’d ask us to present what we had accomplished. We prepared a demo – a working example in a controlled environment.

Scrum proposes these demos at the end of each sprint. These should be done before the retrospective meeting that we discussed above. The team should prepare a special environment, and ensure that the product is capable of showcasing the features completed in this sprint. The demo is for showing stakeholders (and the product owner) the progress of the project.

You may ask yourself why I mentioned a controlled environment, when our product should be production-ready at the end of each sprint. Yes, the product should be as close to production-ready as possible, but that does not mean that the feature, itself, is ready. Often, there will be features which are too big to fit within a single sprint. The product will remain stable, but the feature will not quite be ready. When stakeholders see the demo, they want to review the feature and what it can do. In many cases, to showcase some functionality of unfinished features, special environments must first be prepared.

Additionally, based on these demos, the product owner may determine that a bigger feature is good enough, and that a new version of the product should be published and sent to the users. Most of the time, even if a feature is not quite complete, a release will help the project gain valuable user feedback, and focus the completion of the feature in a way that will satisfy as many users as possible.

Well, this sounds quite simple. You’re an agile team, you keep your tests always on green, and your product is in a stable state. How difficult can it be to prepare a quick demo? It’s more difficult than you might think!

Our team needed, if I remember correctly, more than five attempts before we managed to correctly prepare the demo. Luckily for us, the stakeholders were not included in these first failed demos!


Still, We Need More Guidance

In these meetings, each team member must answer three questions.

That was the moment when our scrum master proposed holding a meeting every day. Yes! Every day, every morning, at an exact hour!

I find this to be a very good thing for new teams – for people who are not yet comfortable with one another, or with the project. These daily meetings, called the Daily Scrum, are held with the team at a specified time every day, before any work is done for that day. In my team, we set the time to 10 AM each morning, which was difficult to do. Nonetheless, it was the correct decision.

The daily scrum is a short and simple meeting (not more than fifteen minutes). The scope of it is to help team members see who is doing what, and determine where the problems and bottlenecks in the development project are.

Tip: Because we want to ensure that these meetings remain short, we stand up. People usually get tired after 15 minutes of standing, which is perfect! If you find coworkers searching for places to sit and rest, your meeting has likely gone on too long.

In these meetings, each team member must answer three questions:

  • What did you do yesterday? – A short answer is expected – two or three sentences at most.
  • What are you planning to do today? – The same type of short answer; something like “I will work on this story today.”
  • Are there any problems with your process? If yes, what are they? Can they be quickly solved? – This answer should highlight the problems, and the solutions if they are known. No detailed discussion should take place during the meeting; the scrum master should take note of the problem and work toward solving it, together with the team, after the meeting is adjourned.

Solving the problems and impediments in the way of developers should be high priority for the team, so that they can continue with their development as soon as possible. Often, the person who had the problem is capable of solving it in a timely manner. Other times, he or she requires the help of a teammate. And other times, the problem can be so serious that the team will have to stop development and concentrate exclusively on solving the one thing that prevents them from continuing their work.

I remember my team encountering these huge road-blocks on several different occasions. There were tasks and stories which seemed quite obvious at first sight but, after a pair or a single programmer had the chance to dig into the problem, the obvious became confusing and wrong. We discovered several times that a third-party library could not provide us with the necessary functionality, and ended up concentrating all of our efforts on finding another, more capable library – or even implementing a solution ourselves.

The majority of our project is written in PHP. At some point, we had to interface our project with VMWare. We reviewed the official libraries for the VMWare API, and found that there are Java and Perl versions. There’s also an unofficial Ruby option. We were sure that we could use one of them, and simply make some exec() calls from PHP to capture the output as a string. We assumed that parsing from there would be a piece of cake.

It turned out that this was next to impossible. None of the API libraries worked quite as we expected. Some were abandoned or incomplete, and their output was nearly impossible to parse. Ultimately, we were forced to do something that nobody had ever done before: implement a VMWare API library in PHP. Unfortunately, there was no other reasonably acceptable way to do it.

This problem was massive; it set back the initial plans by weeks! Of course, our product owner was immediately notified, and, together with him, we planned a new schedule, and developed new stories, which included the creation of this API library.

More often than not, your problems will be much smaller. People might get stuck on some more sophisticated logic but, many times, by the following morning, they already have ideas and solutions. Other times, they will simply be heading down the wrong road, and a teammate will need to help get them back on track. These represent your typical issues.


Conclusion

Here we are at the conclusion. At least, this is how my team got started with Scrum. Some rules were very useful; others less so. Further, some rules were only useful for a short time, while others are still respected religiously by our team.

I can only recommend that you embrace the agile process, try out Scrum, and form your own conclusions. I’m sure that you’ll find bits and pieces to adopt for your team. Be agile, adapt it to your style of work, to your projects and your personalities, and don’t be afraid of adding your own custom rules. After all, Agile is about adaptation, not blindly following a set of pre-determined rules!

For more top-shelf eBooks, courses, and tutorials, like this one, be sure to consider signing up for Tuts+ Premium!

Building Ribbit in Rails


Welcome to the next installment in our Twitter clone series! In this tutorial, we’ll build Ribbit from scratch, not using PHP, but with Ruby on Rails. Let’s get started!

Example

One quick service announcement before we get started: we won’t be styling the UI for the application in this tutorial; that was done in Build a Twitter Clone From Scratch: The Design. I’ll let you know if we have to tweak anything from that article.


Step 0: Setting up the Environment

First things first: I’m using Ruby 1.9.3 (p194) and Rails version 3.2.8 for this tutorial. Make sure that you’re running the same versions. It doesn’t matter how you install Ruby; you can use RVM ( tutorial ), rbenv, or just a regular Ruby installation. No matter the approach, each installer gives you the gem binary, which you can then use to install Rails. Just use this command:

    gem install rails

This installs the latest version of Rails, and now that it’s installed, we can start building our app. Keep in mind that Rails is driven from the command line; you’ll need to be comfortable in the terminal to be comfortable in this tutorial.


Step 1: Creating the Rails App

We begin by generating the project. In the command line, navigate to whatever directory you want the new project to reside in. Then, run this:

    rails new ribbitApp

This single command generates multiple files inside a folder, called ribbitApp. This is what Rails gives us to start with; it even installed the gems required for the project.

Let’s cd into that directory and initialize a git repo.

    cd ribbitApp
    git init

One of the Rails-generated files is .gitignore. If you’re on a Mac, you’ll probably want to add the following line to this file – just to keep things clean:

    .DS_Store

Now, we’re ready to make our first commit!

    git add .
    git commit -m 'initial rails app'

Step 2: Prepping the UI

The interface tutorial introduced you to Ribbit’s images and stylesheet. Download those assets and copy the gfx folder and less.js and style.less files into your app’s public directory.

Let’s add a rule to style.less: the style for our flash messages. Paste this at the bottom of the file:

    .flash {
        padding: 10px;
        margin: 20px 0;

        &.error {
            background: #ffefef;
            color: #4c1717;
            border: 1px solid #4c1717;
        }

        &.warning {
            background: #ffe4c1;
            color: #79420d;
            border: 1px solid #79420d;
        }

        &.notice {
            background: #efffd7;
            color: #8ba015;
            border: 1px solid #8ba015;
        }
    }

That’s all! Let’s make another commit:

    git add .
    git commit -m 'Add flash styling'

Now, let’s create the layout. This is the HTML that wraps the main content of every page – essentially, the header and footer. A Rails app stores this in app/views/layouts/application.html.erb. Get rid of everything in this file, and add the following code:

    <!DOCTYPE html>
    <html>
    <head>
        <link rel="stylesheet/less" href="/style.less">
        <script src="/less.js"></script>
    </head>
    <body>
        <header>
            <div class="wrapper">
                <img src="/gfx/logo.png">
                <span>Twitter Clone</span>
            </div>
        </header>
        <div id="content">
            <div class="wrapper">
                <% flash.each do |name, msg| %>
                    <%= content_tag :div, msg, class: "flash #{name}" %>
                <% end %>
                <%= yield %>
            </div>
        </div>
        <footer>
            <div class="wrapper">
                Ribbit - A Twitter Clone Tutorial
                <img src="/gfx/logo-nettuts.png">
            </div>
        </footer>
    </body>
    </html>

There are three things you should notice about this. First, every URL to a public asset (images, stylesheet, JavaScript) begins with a forward slash (/). This is so that we can still load the assets when we’re in “deeper” routes. Second, notice the markup for displaying the flash messages. This displays flash messages when they exist. And third, note the <%= yield %>; this is where we insert other “sub”-templates.

Okay, let’s commit this:

    git add .
    git commit -m 'Edit application.html.erb'

Step 3: Creating Users

Of course, we can’t have a Twitter clone without users, so let’s add that functionality. This can be a complex feature, but Rails makes it a little easier for us.

We naturally don’t want to store our users’ passwords as plain text; that’s a huge security risk. Instead, we’ll rely on Rails to automate an encryption process for our passwords. We begin by opening our Gemfile and searching for these lines:

    # To use ActiveModel has_secure_password
    # gem 'bcrypt-ruby', '~> 3.0.0'

Using has_secure_password is exactly what we want to do, so un-comment that second line. Save the file and head back to the command line. Now we have to run bundle install to install the bcrypt gem.

We can create our user resource, once the gem has been installed.

A Rails resource is basically a model, its associated controller, and a few other files.

We create a resource by using the rails generate (or rails g) command:

    rails generate resource user username name email password_digest avatar_url

We pass several parameters to this command, resulting in a resource, called user; its model has the following five fields:

  • username: a unique username, the equivalent of a Twitter handle.
  • name: the user’s actual name.
  • email: their email address.
  • password_digest: the encrypted version of their password.
  • avatar_url: the path to their avatar image.

We now need to migrate our database so that it’s set up to store users. We accomplish this by running the following command:

    rake db:migrate

This creates our users table. With Rails, though, we don’t interact with the database directly; instead, we use the ActiveRecord ORM. We need to add some code to app/models/user.rb, so open that file.

When Rails generated this file, it added a call to the attr_accessible method. This method determines which properties are readable and writable on this class’ instances. By default, all the aforementioned properties are accessible, but we want to change that:

    attr_accessible :avatar_url, :email, :name, :password, :password_confirmation, :username

Most of these should make sense, but what’s with password and password_confirmation? After all, we have only a password_digest field in our database. This is part of the Rails magic that I mentioned earlier. We can set a password and password_confirmation field on a User instance. If they match, the password will be encrypted and stored in the database. But to enable this functionality, we need to add another line to our User model class:
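As a rough illustration of that flow – and only an illustration, since has_secure_password actually uses salted bcrypt hashes rather than the bare SHA-256 digest used here – a plain-Ruby sketch might look like this:

```ruby
require 'digest'

# Illustration only: the password and confirmation must match, and
# only a digest (never the plain password) is kept for storage.
def build_digest(password, confirmation)
  return nil unless password == confirmation
  Digest::SHA256.hexdigest(password)
end

puts build_digest("s3cret", "s3cret").nil?  # => false (digest produced)
puts build_digest("s3cret", "typo").nil?    # => true  (nothing stored)
```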

    has_secure_password

Next, we want to incorporate validation into our model. Calling has_secure_password takes care of the password fields, so we’ll deal with the email, username, and name fields.

The name field is simple: we just want to ensure that it is present.

    validates :name, presence: true

For the username field, we want to ensure that it exists and is unique; no two users can have the same username:

    validates :username, uniqueness: true, presence: true

Finally, the email field not only needs to exist and be unique, but it also needs to match a regular expression:

    validates :email, uniqueness: true, presence: true, format: { with: /^[\w.+-]+@([\w]+\.)+\w+$/ }

This is a very simplistic email regex, but it should do for our purposes.
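You can sanity-check a pattern like this outside of Rails; the addresses below are made up for the example:

```ruby
# Mirrors the email format rule above (with the dot in the domain escaped).
EMAIL_REGEX = /^[\w.+-]+@([\w]+\.)+\w+$/

puts "jane.doe+news@example.co.uk".match?(EMAIL_REGEX)  # => true
puts "not-an-email".match?(EMAIL_REGEX)                 # => false
```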

Now, what about that avatar_url field? We want to use the user’s email address and pull their associated Gravatar, so we have to generate this URL. We could do this on the fly, but it will be more efficient to store it in the database. First, we need to make sure that we have a clean email address. Let’s add a method to our User class:

private
def prep_email
    self.email = self.email.strip.downcase if self.email
end

That private keyword means that all the methods defined after the keyword are defined as private methods; they cannot be accessed from outside the class (on instances).

This prep_email method trims the whitespace at the beginning and end of the string, and then converts all characters to lowercase.

This is necessary because we’re going to generate a hash for this value.
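In isolation, the cleanup looks like this (the address is hypothetical):

```ruby
# What prep_email does to a raw address: trim whitespace, then downcase.
email = "  Jane.Doe@Example.COM  "
puts email.strip.downcase  # => "jane.doe@example.com"
```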

We want this method to run just before the validation process; add the following line of code near the top of the class.

    before_validation :prep_email

Next, let’s generate the URL for the avatar by writing another method (put it under the one above):

    def create_avatar_url
        self.avatar_url = "http://www.gravatar.com/avatar/#{Digest::MD5.hexdigest(self.email)}?s=50"
    end

We need to call this method before we save the user to the database. Add this call to the top of the file:

    before_save :create_avatar_url

And that’s it! Now we have the mechanics of our user functionality in place. Before writing the UI for creating users, let’s commit our work so far:

    git add .
    git commit -m 'Create user resource'

Step 5: Writing The User UI

Building a UI in Rails means that we need to be aware of the routes that render our views. If you open config/routes.rb, you’ll find a line like this:

    resources :users

This was added to the routes.rb file when we generated the user resource, and it sets up the default REST routes. Right now, the route we’re interested in is the route that displays the form for creating new users: /users/new. When someone goes to this route, the new method on the users controller will execute, so that’s where we’ll start.

The users controller is found in app/controllers/users_controller.rb. It has no methods by default, so let’s add the new method within the UsersController class.

    def new
        @user = User.new
    end

As you know, Ruby instance variables begin with @ – making @user available from inside our view. Let’s head over to the view by creating a file, named new.html.erb, in the app/views/users folder. Here’s what goes in that view:

    <img src="/gfx/frog.jpg">
    <div class="panel right">
        <h1>New to Ribbit?</h1>
        <%= form_for @user do |f| %>
            <% if @user.errors.any? %>
                <ul>
                <% @user.errors.full_messages.each do |message| %>
                    <li><%= message %></li>
                <% end %>
                </ul>
            <% end %>
            <%= f.text_field :email, placeholder: "email" %>
            <%= f.text_field :username, placeholder: "username" %>
            <%= f.text_field :name, placeholder: "name" %>
            <%= f.password_field :password, placeholder: "password" %>
            <%= f.password_field :password_confirmation, placeholder: "password" %>
            <%= f.submit "Create Account" %>
        <% end %>
    </div>

This is actually the view that serves as the home page view, when a user is not logged in. We use the form_for helper method to create a form and pass it the @user variable as a parameter. Then, inside the form (which is inside the Ruby block), we first print out errors. Of course, there won’t be any errors on the page the first time around.

However, any input that fails our validation rules results in an error message that is displayed in a list item.

Then, we have the fields for our user properties. Since this design doesn’t have any labels for the text boxes, I’ve put what would be label text as the fields’ placeholder (using the placeholder attribute). These won’t display in older browsers, but that’s not relevant to our main goal here.

Now, what happens when the user clicks the “Create Account” button? This form will POST to the /users route, resulting in the execution of the create method in the users controller. Back to that controller, then:

def create
  @user = User.new(params[:user])
  if @user.save
    redirect_to @user, notice: "Thank you for signing up for Ribbit!"
  else
    render 'new'
  end
end

We start by creating a new user, passing the new method the values from our form. Then, we call the save method. This method first validates the input; if the data is in the correct format, the method inserts the record into the database and returns true. Otherwise, it returns false.
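The save contract described here – validate first, then persist and return true, or return false – can be illustrated with a toy class (this is not ActiveRecord, just a sketch of the idea):

```ruby
# A toy model illustrating the save contract: validation errors are
# collected, and save returns false when any exist.
class MiniRecord
  attr_reader :name, :errors

  def initialize(name)
    @name = name
    @errors = []
  end

  def save
    @errors << "name can't be blank" if @name.to_s.strip.empty?
    return false unless @errors.empty?
    # a real implementation would insert the record into the database here
    true
  end
end

puts MiniRecord.new("Jane").save  # => true
puts MiniRecord.new("").save      # => false
```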

If @user.save returns true, we redirect the viewer to… the @user object itself?

This actually redirects to the path for that user, which will be /users/. If @user.save returns false, we re-render the /users/new path and display any validation errors. We also pre-populate the form fields with the user’s previously provided information. Clever, eh?

Well, if we direct the viewer to their new user profile, we need to create that page next. This triggers the show method in the users controller, so we’ll add that first:

def show
  @user = User.find(params[:id])
end

This method looks at the id number in the route (for example, /users/4) and finds the associated user in the database. Just as before, we can now use this @user variable from the view.

Create the app/views/users/show.html.erb file, and add the following code:

    <div id="createRibbit" class="panel right">
        <h1>Create a Ribbit</h1>
        <p>
            <form>
                <textarea name="text" class="ribbitText"></textarea>
                <input type="submit" value="Ribbit!">
            </form>
        </p>
    </div>
    <div id="ribbits" class="panel left">
        <h1>Your Ribbit Profile</h1>
        <div class="ribbitWrapper">
            <img class="avatar" src="<%= @user.avatar_url %>">
            <span class="name"><%= @user.name %></span> @<%= @user.username %>
            <p>
                XX Ribbits
                <span class="spacing">XX Followers</span>
                <span class="spacing">XX Following</span>
            </p>
        </div>
    </div>
    <div class="panel left">
        <h1>Your Ribbits</h1>
        <div class="ribbitWrapper">
            Ribbits coming . . .
        </div>
    </div>

You’ll notice that we have a few placeholders in this view. First, there’s the form for creating a new ribbit. Then there are the follower and following counts, and the list of your ribbits. We’ll come to all of this soon.

There’s one more thing to do in this step: we want the root route (/) to show the new user form for the time being. Open the config/routes.rb file again, and add this line:

    root to: 'users#new'

This simply makes the root route call the new method in the users controller. Now, we just need to delete the public/index.html file which overrides this configuration. After you delete that, run the following in the command line:

    rails server

You could also run rails s to achieve the same results. As you would expect, this starts the rails server. You can now point your browser to localhost:3000/, and you should see the following:

Now, fill in the form and click “Create Account.” You should be sent to the user profile, like this:

Great! Now we have our user accounts working. Let’s commit this:

    git add .
    git commit -m 'User form and profile pages'

Step 6: Adding Session Support

Even though we implemented the user feature, a user cannot log in just yet. So, let’s add session support next.

You may have recognized the way we’ve created user accounts.

I’ve taken this general method from Railscast episode 250. That episode also demonstrates how to create session support, and I’ll use that approach for Ribbit.

We start by creating a controller to manage our sessions. We won’t actually store sessions in the database, but we do need to be able to set and unset session variables. A controller is the correct way to do that.

    rails generate controller sessions new create destroy

Here, we create a new controller, called sessions. We also tell it to generate the new, create, and destroy methods. Of course, it won’t fill in these methods, but it will create their “shell” for us.

Now, let’s open the app/controllers/sessions_controller.rb file. The new method is fine as is, but the create method needs some attention. This method executes after the user enters their credentials and clicks “Log In.” Add the following code:

    def create
        user = User.find_by_username(params[:username])
        if user && user.authenticate(params[:password])
            session[:user_id] = user.id
            redirect_to root_url, notice: "Logged in!"
        else
            flash[:error] = "Wrong Username or Password."
            redirect_to root_url
        end
    end

We use the find_by_username method on the User class to retrieve the user with the provided username. Then, we call the authenticate method, passing it the password. This method was added as part of the has_secure_password feature. If the user’s credentials pass muster, we set the user_id session variable to the user’s ID. Finally, we redirect to the root route with the message “Logged in!”.

If the user’s credentials fail to authenticate, we simply redirect to the root route and set the flash error message to “Wrong Username or Password.”

Logging out fires the destroy method. It’s a really simple method:

    def destroy
        session[:user_id] = nil
        redirect_to root_url, notice: "Logged out."
    end

This code is fairly self-explanatory; just get rid of that session variable and redirect to the root.

Rails helped us out once again and added three routes for these methods, found in config/routes.rb:

    get "sessions/new"
    get "sessions/create"
    get "sessions/destroy"

We want to change the sessions/create route to POST, like this:

    post "sessions/create"

We’re almost ready to add the login form to our views. But first, let’s create a helper method that allows us to quickly retrieve the currently logged-in user. We’ll put this in the application controller so that we can access it from any view file. The path to the application controller is app/controllers/application_controller.rb.

private
def current_user
    @current_user ||= User.find(session[:user_id]) if session[:user_id]
end
helper_method :current_user

We pass User.find the session[:user_id] variable that we set in the sessions controller. The call to helper_method is what makes this a helper method that we can call from the view.
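The ||= in current_user is a memoization idiom: the expensive lookup runs only the first time the method is called. A plain-Ruby sketch of the same pattern (hypothetical class, with a counter standing in for the User.find database call):

```ruby
# The lookup body runs once; subsequent calls reuse the cached value.
class SessionHelper
  def initialize
    @lookups = 0
  end

  def current_user
    @current_user ||= begin
      @lookups += 1        # stands in for the User.find database call
      "user-record"
    end
  end

  attr_reader :lookups
end

helper = SessionHelper.new
helper.current_user
helper.current_user
puts helper.lookups  # => 1
```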

Now, we can open our app/views/layouts/application.html.erb file and add the login form. See the <span>Twitter Clone</span> in the <header> element? The following code goes right after that:

    <% if current_user %>
        <%= link_to "Log Out", sessions_destroy_path %>
    <% else %>
        <%= form_tag sessions_create_path do %>
            <%= text_field_tag :username, nil, placeholder: "username" %>
            <%= password_field_tag :password, nil, placeholder: "password" %>
            <%= submit_tag "Log In" %>
        <% end %>
    <% end %>

If the user is logged in (or, if a non-nil value is returned from current_user), we’ll display a logout link, but we’ll add more links here later. You might not have seen the link_to method before; it takes the provided text and URL and generates a hyperlink.

If no user is logged in, we use the form_tag method to create a form that posts to the sessions_create_path.

Note that we can’t use the form_for method because we don’t have an instance object for this form (like with our user object). The text_field_tag and password_field_tag methods accept the same parameters: the name for the field as a symbol, the value for the field (nil in this case), and then an options object. We’re just setting a placeholder value here.

There’s a bit of a glitch in our session support: a user is not automatically logged in, after they create a new user account. We can fix this by adding a single line to the create method in the UserController class. Right after the if @user.save line, add:

    session[:user_id] = @user.id

Believe it or not, the above line of code finishes the session feature. You should now be able to re-start the Rails server and log in. Try to log out and back in again. The only visible difference between the two states is the absence of the login form when you’re logged in. But we’ll add more later!

Let’s commit this:

    git add .
    git commit -m 'users are now logged in upon creation'

Step 7: Creating Ribbits

Now we’re finally ready to get to the point of our application: creating Ribbits (our version of tweets). We begin by creating the ribbit resource:

    rails g resource ribbit content:text user_id:integer
    rake db:migrate

This resource only needs two fields: the actual content of the ribbit, and the id of the user who created it. We’ll migrate the database to create the new table. Then, we’ll make a few modifications to the new Ribbit model. Open app/models/ribbit.rb and add the following:

class Ribbit < ActiveRecord::Base
  default_scope order: 'created_at DESC'
  attr_accessible :content, :user_id
  belongs_to :user
  validates :content, length: { maximum: 140 }
end

The default_scope call is important; it orders a list of ribbits from the most recent to the least recent. The belongs_to method creates an association between this Ribbit class and the User class, giving each ribbit object a user property.

Finally, we have a validates call, which ensures that our ribbits don’t exceed 140 characters.
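The 140-character rule itself is trivial to express outside of Rails; here it is as a plain predicate (a hypothetical helper, not ActiveRecord's validation machinery):

```ruby
# A ribbit's content is valid only when it fits within 140 characters.
MAX_RIBBIT_LENGTH = 140

def ribbit_content_valid?(content)
  content.length <= MAX_RIBBIT_LENGTH
end

puts ribbit_content_valid?("Hello, Ribbit!")  # => true
puts ribbit_content_valid?("x" * 141)         # => false
```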

Oh, yeah: the flip side of the belongs_to statement. In the User class (app/models/user.rb), we want to add this line:

    has_many :ribbits

This completes the association; now each user can have many ribbits.

We want users to have the ability to create ribbits from their profile page. As you’ll recall, we have a form in that template. So let’s replace that form in app/view/users/show.html.erb, as well as make a few other changes. Here’s what you should end up with:

    <% if current_user %>
        <div id="createRibbit" class="panel right">
            <h1>Create a Ribbit</h1>
            <p>
                <%= form_for @ribbit do |f| %>
                    <%= f.text_area :content, class: 'ribbitText' %>
                    <%= f.submit "Ribbit!" %>
                <% end %>
            </p>
        </div>
    <% end %>
    <div id="ribbits" class="panel left">
        <h1>Your Ribbit Profile</h1>
        <div class="ribbitWrapper">
            <img class="avatar" src="<%= @user.avatar_url %>">
            <span class="name"><%= @user.name %></span> @<%= @user.username %>
            <p>
                <%= @user.ribbits.size %> Ribbits
                <span class="spacing">XX Followers</span>
                <span class="spacing">XX Following</span>
            </p>
        </div>
    </div>
    <div class="panel left">
        <h1>Your Ribbits</h1>
        <% @user.ribbits.each do |ribbit| %>
            <div class="ribbitWrapper">
                <img class="avatar" src="<%= @user.avatar_url %>">
                <span class="name"><%= @user.name %></span>
                @<%= @user.username %>
                <span class="time"><%= time_ago_in_words(ribbit.created_at) %></span>
                <p><%= ribbit.content %></p>
            </div>
        <% end %>
    </div>

There are three areas that we change with this code. First, we remove the HTML form at the top and replace it with a call to the form_for helper function. Of course, it makes sense that we only need a text area for the content (we already know the current user’s ID). Note that form_for accepts a @ribbit as the parameter, and we need to add that object to the users_controller#show method (app/controllers/users_controller.rb):

    def show
        @user = User.find(params[:id])
        @ribbit = Ribbit.new
    end

Notice that we wrap the whole form section (the <div id="createRibbit">) with an if statement. If there’s no current user (meaning no one is logged in), we won’t show the ribbit form.

Next, we want to display the number of the user’s ribbits. That number appears just above their follower count. Remember that our user instance has a ribbits property. So, we can replace our filler text with this:

<%= @user.ribbits.size %> Ribbits

We need to show the user’s ribbits. We can loop over that same ribbits array, and display each ribbit in turn. That’s the final part of the code above.

Lastly (at least as far as ribbit creation is concerned), we need to modify the create method in the ribbits controller (app/controllers/ribbits_controller.rb). The method executes when the user clicks the “Ribbit!” button.

    def create
        @ribbit = Ribbit.new(params[:ribbit])
        @ribbit.user_id = current_user.id

        if @ribbit.save
            redirect_to current_user
        else
            flash[:error] = "Problem!"
            redirect_to current_user
        end
    end

I know the “Problem!” error message isn’t very descriptive, but it will do for our simple application. Really, the only error that could occur is a ribbit longer than 140 characters.

So, give it a try: start the server (rails server), log in, go to your profile page (http://localhost:3000/users/, but of course, any user profile page will do), write a ribbit, and click “Ribbit!”. The new ribbit should display in the ribbit list on your profile page.

Okay, let’s commit these changes:

    git add .
    git commit -m 'ribbit functionality created'

Step 8: Creating the Public Ribbits Page

Next up, we want to create a public page that lists the ribbits of all users. Logically, that should be the ribbits index view, found at /ribbits. The controller method for this is ribbits_controller#index. It’s actually a very simple method:

    def index
        @ribbits = Ribbit.all(include: :user)
        @ribbit = Ribbit.new
    end

The first line fetches all the ribbits and their associated users, and the second line creates the new ribbit instance (this page will have a ribbit form).

The other step, of course, is the template (app/view/ribbits/index.html.erb). It’s similar to the user profile template:

<% if current_user %>
    <div id="createRibbit" class="panel right">
        <h1>Create a Ribbit</h1>
        <p>
            <%= form_for @ribbit do |f| %>
                <%= f.text_area :content, class: 'ribbitText' %>
                <%= f.submit "Ribbit!" %>
            <% end %>
        </p>
    </div>
<% end %>
<div class="panel left">
    <h1>Public Ribbits</h1>
    <% @ribbits.each do |ribbit| %>
        <div class="ribbitWrapper">
            <a href="<%= user_path ribbit.user %>">
                <img class="avatar" src="<%= ribbit.user.avatar_url %>">
                <span class="name"><%= ribbit.user.name %></span>
            </a> @<%= ribbit.user.username %>
            <span class="time"><%= time_ago_in_words(ribbit.created_at) %></span>
            <p><%= ribbit.content %></p>
        </div>
    <% end %>
</div>

In this template, the avatar image and the user’s name are wrapped in a link that points to the user’s profile page. Let’s also add a link to the public ribbits page. Just before the logout link in app/view/layouts/application.html.erb, add this:

<%= link_to "Public Ribbits", ribbits_path %>

Finally:

    git add .
    git commit -m 'added the public ribbits page'

Step 9: Following Other Users

It wouldn’t be a Twitter clone if we couldn’t follow other users, so let’s work on that feature next.

This is a little tricky at first. Think about it: our User records need to follow other User records and be followed by other User records. It’s a many-to-many, self-joining association. This means we’ll need an association class, which we’ll call “Relationship”. Start by creating this resource:

    rails g resource relationship follower_id:integer followed_id:integer
    rake db:migrate

This model only needs two fields: the follower’s ID, and the followed’s ID (note: the terminology here can get a little confusing. In our case, I’m using the term “followed” to mean the user being, well, followed).

Next up, we want to relate the User model with this Relationship model, which we do from both sides. First, in the Relationship class (app/models/relationship.rb), we want to add these two lines:

    belongs_to :follower, class_name: "User"
    belongs_to :followed, class_name: "User"

The first line relates a User record to the follower_id field, and the second line relates a User record to the followed_id field. It’s important to include the class name, because Rails can’t infer the class from the property names (‘follower’ and ‘followed’). It can, however, infer the correct database fields (follower_id and followed_id) from those names.

Now, in the User class (app/model/user.rb), we have to first connect each user model to its associated relationships:

    has_many :follower_relationships, class_name: "Relationship", foreign_key: "followed_id"
    has_many :followed_relationships, class_name: "Relationship", foreign_key: "follower_id"

We need two associations, because each user has two sets of relationships: all the people following them, and all the people they follow. And no, those foreign keys shouldn’t be switched: the follower_relationships association is responsible for all of a user’s followers, so it needs the followed_id foreign key.

Then, we can use those relationships to get to the followers on the other side of them:

    has_many :followers, through: :follower_relationships
    has_many :followeds, through: :followed_relationships

These give our user records the followers and followeds methods, which return arrays of a user’s followers and of the people they follow, respectively.

Finally, let’s add two methods to our user model that help us with the UI:

    def following? user
        self.followeds.include? user
    end

    def follow user
        Relationship.create follower_id: self.id, followed_id: user.id
    end
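To see how all of these pieces fit together, here’s a plain-Ruby sketch of the whole self-joining setup (hypothetical classes, no ActiveRecord; a constant array stands in for the relationships table):

```ruby
# A Relationship is just a (follower_id, followed_id) pair, as in the migration.
Relationship = Struct.new(:follower_id, :followed_id)

class User
  ALL = {}            # registry standing in for the users table
  RELATIONSHIPS = []  # standing in for the relationships table

  attr_reader :id

  def initialize(id)
    @id = id
    ALL[id] = self
  end

  def follow(user)
    RELATIONSHIPS << Relationship.new(id, user.id)
  end

  # People who follow me: rows where I'm the followed_id.
  def followers
    RELATIONSHIPS.select { |r| r.followed_id == id }.map { |r| ALL[r.follower_id] }
  end

  # People I follow: rows where I'm the follower_id.
  def followeds
    RELATIONSHIPS.select { |r| r.follower_id == id }.map { |r| ALL[r.followed_id] }
  end

  def following?(user)
    followeds.include?(user)
  end
end
```

After `a.follow(b)`, one pair is recorded; `b.followers` then contains `a`, and `a.following?(b)` is true — the same single table read from both directions.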

Let’s commit these changes before tweaking the UI.

    git add .
    git commit -m 'created user relationships infrastructure'

Now, the UI is all on the user profile pages, which is app/views/users/show.html.erb. We’ll start with something simple: the follower and following count. See where we have this?

<span class="spacing">XX Followers</span><span class="spacing">XX Following</span>

We’ll replace these placeholder values, like so:

<span class="spacing"><%= @user.followers.count %> Followers</span><span class="spacing"><%= @user.followeds.count %> Following</span>

We have the follow/unfollow button under these counts, but there are a few states to consider. First, we don’t want to show any button if the user is viewing their own profile or isn’t logged in. Second, we want to display an “Unfollow” button if the user already follows this profile’s owner.

<% if current_user and @user != current_user %>
    <% if current_user.following? @user %>
        <%= form_tag relationship_path(@relationship), method: :delete do %>
            <%= submit_tag "Unfollow" %>
        <% end %>
    <% else %>
        <%= form_for @relationship do %>
            <%= hidden_field_tag :followed_id, @user.id %>
            <%= submit_tag "Follow" %>
        <% end %>
    <% end %>
<% end %>

A Rails resource is basically a model, its associated controller, and a few other files.

Put this code just under the paragraph that holds the above spans.

The forms are the more complex parts here. First, if the current user already follows the viewed user, we’ll use form_tag to create a form that goes to the relationship_path. Of course, we can’t forget to set the method as delete because we’re deleting a relationship.

If the current user doesn’t follow the viewed user, we’ll create a form_for the current relationship. We’ll simply use a hidden field to determine which user to follow.

If you’re paying attention, you’ll know that something’s missing: the ability to manipulate a relationship instance from this view. We need a Relationship instance. If the current user doesn’t already follow this user, we need to create a blank relationship. Otherwise, we need to have a relationship on hand to delete! Back to app/controllers/users_controller.rb, and add the following to the show method:

@relationship = Relationship.where(
    follower_id: current_user.id,
    followed_id: @user.id
).first_or_initialize if current_user

This is a bit different from the usual way of finding or creating a record. This initializes a blank Relationship instance if no records are found that match the where parameters. Of course, we only want to do this if there is a current_user.
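The semantics of first_or_initialize can be sketched in plain Ruby (a hypothetical helper over an array of attribute hashes, standing in for the relationships table):

```ruby
# Return the first record matching attrs; otherwise build (but don't save)
# a new, unsaved record carrying those attributes.
def first_or_initialize(records, attrs)
  records.find { |r| attrs.all? { |key, value| r[key] == value } } || attrs.dup
end

rows = [{ follower_id: 1, followed_id: 2 }]
first_or_initialize(rows, follower_id: 1, followed_id: 2)  # the existing row
first_or_initialize(rows, follower_id: 3, followed_id: 2)  # a fresh, unsaved hash
```

Either way, the view ends up with a relationship object it can delete (if it exists) or submit for creation (if it doesn’t).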

The routes for this model are enabled by the resources :relationships line in config/routes.rb, so we don’t have to worry about that.

Now, in app/controllers/relationships_controller.rb, we’ll start with the create method:

    def create
        @relationship = Relationship.new
        @relationship.followed_id = params[:followed_id]
        @relationship.follower_id = current_user.id

        if @relationship.save
            redirect_to User.find params[:followed_id]
        else
            flash[:error] = "Couldn't Follow"
            redirect_to root_url
        end
    end

Pretty standard stuff by now, right? We’ll create the relationship, save it, and redirect back to the user’s profile.

The destroy method is also simple:

    def destroy
        @relationship = Relationship.find(params[:id])
        @relationship.destroy
        # params[:id] is the relationship's id, not a user's, so redirect
        # using the destroyed record's followed_id to reach the profile page.
        redirect_to user_path @relationship.followed_id
    end

Now, create another user (or four) and have a few users follow other users. You should see the text of the follow buttons change, as well as the follower / following count.

Great! Now we can commit this feature:

    git add .
    git commit -m 'Following other users is now working'

Step 10: Creating a Few Other Pages

There are a few other simple pages that we want to add. First, let’s create a page to list all the registered users. This would be a great place to find new friends, see their pages, and eventually follow them. Logically, this should be the /users route, so we’ll use the UsersController#index method:

    def index
        @users = User.all
    end

Now, for app/views/users/index.html.erb:

<div id="ribbits" class="panel left">
    <h1>Public Profile</h1>
    <% @users.each do |user| %>
        <div class="ribbitWrapper">
            <a href="<%= user_path user %>">
                <img class="avatar" src="<%= user.avatar_url %>">
                <span class="name"><%= user.name %></span>
            </a> @<%= user.username %>
            <p>
                <%= user.ribbits.size %> Ribbits
                <span class="spacing"><%= user.followers.count %> Followers</span>
                <span class="spacing"><%= user.followeds.count %> Following</span>
            </p>
            <% if user.ribbits.first %>
                <p><%= user.ribbits.first.content %></p>
            <% end %>
        </div>
    <% end %>
</div>

Finally, let’s add a link to this page to the top of our template. Let’s also add a link to the logged-in user’s profile. Right beside the “Public Ribbits” link, add:

<%= link_to "Public Profiles", users_path %><%= link_to "My Profile", current_user %>

Next is the buddies page. This is where a user goes to view the ribbits of the people they follow; we’ll also redirect users to this page when they’re logged in and view the home page.

Strangely, finding the correct place in the code for this page is a bit tricky. After all, each page in our Rails app must be based on a method in one of our controllers. Best practice dictates that each controller exposes the seven RESTful methods that control a resource. In this case, we want to view the ribbits of a subset of users, which, at least to me, seems to be a bit of an edge case. Here’s how we’ll handle it: let’s create a buddies method in the UsersController:

    def buddies
        if current_user
            @ribbit = Ribbit.new
            buddies_ids = current_user.followeds.map(&:id).push(current_user.id)
            @ribbits = Ribbit.find_all_by_user_id buddies_ids
        else
            redirect_to root_url
        end
    end

Obviously, there’s nothing to show if a user isn’t logged in, so we’ll check the current_user. If we’re not logged in, we redirect to the root URL (/). Otherwise, we create a new ribbit (for our new ribbit form).

We then need to find all ribbits from the current user and the people they follow.

We can map the followeds array to just the user ids, push in the current user’s own id, and finally retrieve the ribbits from those users.
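The id-gathering step, extracted into runnable plain Ruby (the OpenStruct users are hypothetical stand-ins for followed User records):

```ruby
require "ostruct"

# Stand-ins for current_user.followeds
followeds = [OpenStruct.new(id: 2), OpenStruct.new(id: 3)]
current_user_id = 1

# map(&:id) pulls out each followed user's id; push appends our own.
buddy_ids = followeds.map(&:id).push(current_user_id)
# buddy_ids => [2, 3, 1]
```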

Let’s store the template in app/views/users/buddies.html.erb; it’s very similar to our public ribbits template:

<% if current_user %><div id="createRibbit" class="panel right"><h1>Create a Ribbit</h1><p><%= form_for @ribbit do |f| %><%= f.textarea :content, class: 'ribbitText' %><%= f.submit "Ribbit!" %><% end %></p></div><% end %><div class="panel left"><h1>Buddies' Ribbits</h1><% @ribbits.each do |ribbit| %><div class="ribbitWrapper"><a href="<%= user_path ribbit.user %>"><img class="avatar" src="<%= ribbit.user.avatar_url %>"><span class="name"><%= ribbit.user.name %></span></a>
                @<%= ribbit.user.username %><span class="time"><%= time_ago_in_words(ribbit.created_at) %></span><p> <%= ribbit.content %> </p></div><% end %></div>

We need to make a route for this method, in order to use it. Open config/routes.rb and add the following:

    get 'buddies', to: 'users#buddies', as: 'buddies'

Now, we can go to /buddies and see the page!

There’s something else we want to do here, however. If a logged-in user visits the root URL, we should redirect them to /buddies. Remember, the root route currently points to the new method:

    def new
        @user = User.new
    end

Let’s change it to this:

    def new
        if current_user
            redirect_to buddies_path
        else
            @user = User.new
        end
    end

We should also add a link to the buddies page in app/views/layouts/application.html.erb:

<%= link_to "Buddies' Ribbits", buddies_path %>

And now, we’ll commit these changes:

    git add .
    git commit -m 'added buddies page'

Step 11: Deploying to Heroku

The last step is to deploy the application. We’ll use Heroku.

The last step is to deploy the application. We’ll use Heroku. I’m going to assume that you have a Heroku account, and that you’ve installed the Heroku toolbelt (the command line tools).

We run into a problem before we even begin! We’ve been using a SQLite database, because Rails uses SQLite by default. However, Heroku doesn’t support SQLite; it uses PostgreSQL. We have to make a change to our Gemfile, a change that actually breaks our local copy of the app (unless you install and configure a PostgreSQL server). Here’s my compromise: I’ll show you how to do it here, and you can play with my deployed version, but you don’t have to make the change on your local project.

Thankfully, switching Rails to PostgreSQL is very simple. In our Gemfile, there’s a line that looks like this:

    gem "sqlite3"

Change that line to this:

    gem "pg"

We now must install this gem locally, in order to update Gemfile.lock. We do this by running:

    bundle install

And we commit:

    git add .
    git commit -m 'updated Gemfile with Postgres'

We can now create our Heroku application. In our project directory, run:

    heroku create
    git push heroku master

And finally:

    heroku open

That opens your browser with your deployed Heroku application. You can play with my deployed copy.


That’s It!

And there you go! We just built a really simple Twitter clone. Sure, there are dozens of features we could add to this, but we nailed the most important pieces: users, ribbits, and followings.

Remember, if Rails isn’t your racket, check out the other tutorials in this series! We have a whole line-up of Ribbit tutorials using different languages and frameworks in the pipes. Stay tuned!

A RequireJS, Backbone, and Bower Starter Template


Switching to a modular approach to writing JavaScript is unfortunately a more difficult process than we might hope. Once you understand the concept of AMD, you then have to figure out the logistics: how do you set up RequireJS? What about non-AMD libraries? What about dependency management? What about configuration and optimization?


Using This Starter Template

The repo for the video tutorial should give you an excellent starting point when beginning new RequireJS + Backbone projects. Once you’re comfortable with the process, be sure to also consider Yeoman, which offers RequireJS support.


Quick Setup

First, of course, download this repo. Then, from the Terminal (assuming Node.js is installed), install RequireJS.

npm install requirejs

Next, we need an easy way to deal with dependency management. We’ll use Bower, from the guys at Twitter.

npm install bower

Let’s now install the dependencies for this project. I’m assuming that we’re building a Backbone project, so I’ve listed RequireJS, jQuery, Underscore, and Backbone as dependencies.

bower install

Please note that we’re using the AMD versions of both Backbone and Underscore to make the setup process as easy as possible.

When ready to build the project, run:

build/build.sh

This will create a new `dist` directory, copy the files over, run the r.js optimizer on the assets, and clean up the file structure a bit for production. Refer to app.build.js for configuration options.

CSS Imports

If you’re not using a preprocessor, feel free to modularize your stylesheets, and @import them into a master stylesheet. During the build process, r.js will merge these files together, so that you don’t have to worry about any performance hits from using @import.

Why 2013 is the Year of PHP


2012 was an excellent year for the PHP community, thanks to the many badly needed features that were added in version 5.4, as well as the countless projects advancing PHP to the next level.

In this article, I’d like to review a handful of the issues that people had with PHP in the past, and provide a glimpse at why 2013 just may be the year of PHP!


Why the Hostility?

This may come as a surprise to you, but many people have negative feelings toward PHP developers, and the language as a whole. You likely know exactly what I mean, if you’ve considered learning Ruby in the past couple of years, due to some sense of peer pressure.

However, before you make any changes, you have to ask yourself: “Why does PHP have such a stigma?”

Well, like many of life’s important questions, there is no clear-cut answer. After a bit of searching online for arguments against PHP, you’ll find that roughly eighty percent of them are rooted in ignorance, in one form or another.

Roughly eighty percent of the arguments against PHP are rooted in ignorance.

The Beginners

There are the beginners, who don’t really know how PHP works. This results in questions, like “Why can’t you listen for button events with PHP?,” and similar questions about AJAX.

One Language to Rule Them All

Next, you have the folks who don’t know any language or framework other than the one that they currently use. These are the types of people who make arguments, such as “Rails is much easier than PHP,” and things like that.

Fighting PHP 4

The third form of misconception comes from the people who haven’t kept up with PHP’s advances over the years. Instead, they’re still fighting the language, as it existed years and years ago. This results in statements, like: “PHP isn’t object oriented” or “PHP sucks because it doesn’t support namespacing.” You get the idea.

Scaling

Lastly, we have the more intelligent developers who believe that “PHP can’t scale” or “PHP has no standards,” which is completely false. Scaling has less to do with the language, and more with the server and how your app is structured. As for standards? Well, it only takes a quick Google search for PHP-FIG.

What is the PHP-FIG?

“The idea behind the group is for project representatives to talk about the commonalities between our projects and find ways we can work together. Our main audience is each other, but we’re very aware that the rest of the PHP community is watching. If other folks want to adopt what we’re doing they are welcome to do so, but that is not the aim.”

It’s an unfortunate truth that some of the arguments that permeate the web are either completely false or simply outdated.


PHP Isn’t Perfect

There’s truth in every criticism, however.

There’s truth in every criticism, however. PHP isn’t perfect. When it comes to its implementation of core features and functions, PHP is inconsistent. These arguments are entirely valid.

These inconsistencies are not without reason, though. PHP started out as what we would today call a templating language. Since then, it has gone through multiple paradigm shifts, transforming into a procedural language, like C, and then into the fully OOP language that we enjoy today. Along the way, best practices have emerged, and different people have been in control of what is added. This results in a lot of “different” kinds of code in one language. Now you might ask, “Why not just deprecate the bad parts?”

The answer to this question is the same as to why we are still building sites for old versions of Internet Explorer. Don’t get me wrong; I would love to just drop it, but massive changes like this can’t be done without a bit of time. Hopefully, over time, PHP will advance further into OOP, and begin converting its objects to use their functions with the dot notation, rather than the admittedly awkward -> syntax. So, instead of array_push($arr, "Value");, you would write something, like $arr.push("Value");.

Don’t worry; things like this have been happening slowly. Just look at the new PHP 5.5 features. The old function-oriented MySQL add-on has been deprecated, in favor of the newer object-oriented approach.


The Present

Now with the past covered, let’s move up to the present. There are a handful of really cool projects and movements, some of which borrow ideas from other languages, in order to propel PHP to the next level.

Let’s consider the following:


Composer

The PHP community can now stop reinventing the wheel over and over again, thanks to Composer.

Inspired by tools, like Bundler and NPM, the PHP community can now stop reinventing the wheel over and over again, thanks to Composer. Node.js was the first language that made me feel comfortable with using packages. If you’ve used it before, then you know what I mean. Packages are installed locally to your project’s directory, it’s easy to find documentation for most of the plugins, and it’s relatively simple to submit your own packages.

PEAR?

PHP did offer an alternative for years, PEAR, but it wasn’t overly intuitive or easy to use. It felt bulky for something that ultimately fetched plain-text files. Further, it installed all packages globally, which forced you to tell people which packages you used when distributing your source code. As you might guess, this resulted in mismatched versions and other problems of that nature.

If you so desire, you can pick and choose your components.

Composer fixes all of this, thanks to locally stored packages, and the ability to create per-project dependency files. This means you can easily distribute your project with this dependency file, and others can use their own copy of Composer to automatically download all specified dependencies, while simultaneously keeping them up to date.

Additionally, Composer is a light application – written in PHP, itself – and comes with an autoloader feature. This works off of the PSR-0 standard (mentioned above), which will automatically load your dependencies as you need them, so your application remains as clean as possible.

All of these features are a definite improvement; however, without community adoption, they mean nothing. I’m happy to report that Composer has been very well received. Big projects, such as Symfony and Laravel, have already published their components to Packagist, the main Composer repository. Having a framework split into components means that you can easily build your own custom framework to match your liking. In other words, no more bloated frameworks. If you so desire, you can pick and choose your components.

Need an example? You could take the database component from Laravel, and pair it with the templating component from the Symfony framework. In fact, the Laravel framework, itself, leverages many well-tested Symfony components. Why rebuild the wheel, when you can instead focus your efforts on other areas?


Laravel

Even if you do have issues with some of PHP’s inconsistencies, Laravel abstracts nearly all of it.

Now this wouldn’t be an article about PHP’s future without discussing Laravel in a bit more detail. We’re often asked why Nettuts+ seems to be pushing Laravel as much as it has been. This is the wrong question. Instead, ask “Why not?”

Even if you do have issues with some of PHP’s inconsistencies, Laravel abstracts nearly all of it, providing you with the feel and elegance of a language, like Ruby, but with the ease of PHP.

Laravel comes with Eloquent, an ORM that completely rethinks everything to do with databases. With the plain MySQL functions in PHP, what you get back from the database is a resource, which you then have to run through a function to fetch the results. In Laravel, everything is returned as standard PHP: you’re given objects, which you can modify and save. You can do things such as combining results from multiple tables to save on database calls (referred to as eager loading), and it’s laughably simple to do things like validation and custom queries. As a bonus, if you don’t like SQL, all of this can be done in an OOP style, using simple and readable methods, such as find and delete.

We’ve only seen the tip of the iceberg with what Eloquent brings to the table, but, already, you can see the improvements. Laravel brings this kind of innovation to nearly every field of PHP, including things like templating, routing, migrations, RESTful classes, and more. The best part, though, is that, with each new release, Laravel’s creator, Taylor Otwell, continues to raise the bar.

If you’d like to learn more about Laravel, I recommend the Tuts+ Premium course, Laravel Essentials, taught by our very own Jeffrey Way. I’m not saying this as a part of the Nettuts+ staff, but as a person who watched the series. I can honestly say that I had zero knowledge of Laravel going in, and Jeffrey did an excellent job of covering as much as possible.

Ultimately, it’s not really about the framework, but the community support. As long as there is support for a project, it will be updated and will remain relevant. If you’re worried about how long it will remain popular, then, simply by actively using it, you are improving its odds!


PHP 5.4 / 5.5

The next thing that I’d like to discuss is the updates to PHP that were released in 2012. With the release of version 5.4 came a plethora of excellent new features. For a full overview of the updates, you can take a look at these two articles here on Nettuts+: 5.4 article, 5.5 article.

But, for a quick recap of my favorites:

Traits

  • Traits add the ability to create Class “partials,” which allows you to create consistent objects without re-writing everything over and over.

Generators

  • Generators let you do some cool things with lists of data, as well as allow you to benefit from all the features that come with lazy-evaluation.

CLI Web Server

  • Another great addition is the built-in web server, which allows you to test your applications with different versions of PHP, without the need for something like Apache.

Dereferencing

  • Dereferencing is not a major addition, but it’s nice to be able to reference child elements without the use of functions. This includes things like accessing individual characters of a constant by using only square bracket notation.

The New Password Hashing API

  • With the new API, you are given the ability to both encrypt strings, as well as verify and strengthen passwords – all without any knowledge of bcrypt or any other hashing algorithm.

These represent just a few of the new improvements, and there is a whole list of things currently being discussed for the next version, scheduled for release later this year.
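As a point of comparison, the lazy-evaluation behavior that generators bring to PHP 5.5 can be illustrated with Ruby’s lazy enumerators (a Ruby analogy, not PHP code): values are produced on demand rather than materialized up front.

```ruby
# An infinite sequence of even numbers; nothing is computed until asked for.
evens = (1..Float::INFINITY).lazy.map { |n| n * 2 }

first_five = evens.first(5)
# first_five => [2, 4, 6, 8, 10]
```

A PHP generator achieves the same effect by yielding one value per iteration, so you can work with large or unbounded sequences without building them in memory.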


Test Driven Development

Finally, let’s talk a bit about testing your code. While admittedly a bit late to the game, in 2012, our community saw widespread adoption of the test-driven development methodology. I could make up a growth percentage, but a better indication is simply to look around on different dev sites and forums; you’ll surely see a spike! When it comes to testing in PHP, PHPUnit is the well-accepted standard.

Why is it Important?

Think about your project before diving in, like a cowboy.

Many times, you set out to write some code, but you lose something in the translation: you plan one thing, but when implementing it, you lose a bit of the integrity or functionality. Another common problem arises when writing code for large projects: you end up with multiple classes and files that each have their own dependencies. What you’re left with is an “intertwined evolution” of functionality that can prove difficult to track and maintain. Like a game of Jenga, updating one piece may break another, crippling your application. These are just two example problems, but there are certainly others.

How Does TDD Help?

Well, you write clear tests before writing any production code. This means that, when you get to writing your actual code, it is forced to conform to your original plan. Not only that, but, down the line, all dependencies will be tracked in your tests. If you update a bit of code and inadvertently break one of the tests, you will immediately be notified.

Yes, setting up these tests requires an extra step, but so does thinking before you speak. Does anyone question the benefits of that? Of course not. The same is true for tests: think about your project before diving in, like a cowboy.
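The flow looks like this in miniature. The example below uses Ruby’s Minitest purely as an analogue (in PHP you would write the equivalent with PHPUnit); the test pins down the planned behavior, and the production code is then written to satisfy it.

```ruby
require "minitest/autorun"

# Production code, written after (and constrained by) the tests below:
# trim a message to a 140-character limit.
def truncate_message(text, limit = 140)
  text.length <= limit ? text : text[0, limit]
end

class TruncateMessageTest < Minitest::Test
  def test_short_text_passes_through_unchanged
    assert_equal "hello", truncate_message("hello")
  end

  def test_long_text_is_cut_to_the_limit
    assert_equal 140, truncate_message("x" * 200).length
  end
end
```

If a later refactor accidentally changes the limit or drops the pass-through case, these tests fail immediately, which is exactly the safety net described above.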

Additional Learning

Conclusion

It’s an exciting time to be a PHP developer. Many of the inherent problems have been, or are being, fixed. As for the other issues, well, those are easily remedied with a good framework and testing.

So what do you think? Are you getting on board? Disagree with me? If so, let’s continue the discussion below!

Source Maps 101


In today’s modern workflow, the code that we author in our development environments is considerably different from the production code, after running it through compilation, minification, concatenation, or various other optimization processes.

This is where source maps come into play, by pointing out the exact mapping in our production code to the original authored code. In this introductory tutorial, we’ll take a simple project, and run it through various JavaScript compilers for the purposes of playing with source maps in the browser.


What are Source Maps?

Source maps offer a language-agnostic way of mapping production code to the original code that was authored.

Source maps offer a language-agnostic way of mapping production code to the original code that was authored in your development environment. When we ultimately look at the code-base, generated and prepared for production, it becomes very challenging to locate exactly where the line mapping to our original authored code is. However, during compilation, a source map stores this information, so that, when we query a line section, it will return the exact location in the original file to us! This offers a huge advantage for the developer, as the code then becomes readable – and even debuggable!
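As a toy illustration of the core idea (this is not the real source map format, which encodes its mappings compactly as Base64-VLQ strings), think of a lookup table from generated positions back to original positions; the positions and file name below are hypothetical:

```ruby
# Hypothetical, simplified mapping: generated (line, column) pairs point back
# to positions in the original authored file.
SOURCE_MAP = {
  [1, 0]  => { file: "script.coffee", line: 3, column: 2 },
  [1, 25] => { file: "script.coffee", line: 7, column: 4 },
}

# What a debugger conceptually does when you inspect a minified stack frame.
def original_position(line, column)
  SOURCE_MAP[[line, column]]
end
```

Here, `original_position(1, 25)` would report script.coffee, line 7 — the file you actually wrote, rather than the minified output.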

In this tutorial, we’ll take a very simple bit of JavaScript and SASS code, run them through various compilers, and then view our original files in the browser with the help of source maps. Go ahead and download the demo files and let’s get started!


Browsers

Please note that, while writing this article, Chrome (Version 23) supports JavaScript Source Maps, and even SASS Source Maps. Firefox should also gain support in the near future, as it’s currently in an active stage of development. With that word of caution out of the way, let’s now see how we can take advantage of source maps in the browser!

Source Maps in Chrome

First, we must enable support in Chrome, using the following simple steps:

  • Open Chrome Developer Tools: View -> Developer -> Developer Tools
  • Click the “Settings” cog in the bottom-right corner
  • Choose “General,” and select “Enable source maps”

Setup

If you’d like to work along with this tutorial, download the demo and open the “start” directory. The files and directory structure are quite basic, with some simple JavaScript in scripts/script.js. You should be able to open index.html and even add some CSS color names or hex values to amend the background color.

$ tree
.
├── index.html
├── scripts
│   ├── jquery.d.ts
│   ├── script.coffee.coffee
│   ├── script.js
│   └── script.typescript.ts
└── styles
    ├── style.css
    └── style.sass

Have a look through the simple script files in plain JavaScript, TypeScript or CoffeeScript. Using various JavaScript compilers, we’ll create a production-ready version, as well as generate the corresponding source maps.

In the following sections, we’ll use five different ways to generate a compiled and minified script.js, along with the associated source map. You can either choose to test out all of the options, or simply go with the compiler that you are already familiar with. These options include:

  1. Closure Compiler
  2. GruntJS with JSMin
  3. UglifyJS 2
  4. CoffeeScript Redux
  5. TypeScript

Option A: Closure Compiler

Closure Compiler, by Google, is a tool for optimizing JavaScript. It does this by analyzing your code, removing irrelevant bits, and then minifying the rest. On top of that, it can also generate source maps.

Let’s use the following steps to create an optimized version of script.js, using the Closure compiler:

  1. Download the latest Closure compiler.
  2. Transfer the file, compiler.jar, to the directory, scripts.
  3. Navigate to the directory, scripts, from the command line, and execute the following, so that an optimized, production-ready script.closure.js file will be created:
    java -jar compiler.jar --js script.js --js_output_file script.closure.js
  4. Ensure that index.html is now linked with the newly created file, scripts/script.closure.js, by uncommenting Option A.

When we open index.html within the browser and navigate to the Source Panel in the developer tools, only the optimized version of script.closure.js is referenced; we have no way of relating it back to our original, properly indented code. Let’s next create the source map file by executing the following command in the scripts directory:

java -jar compiler.jar --js script.js --create_source_map script.closure.js.map --source_map_format=V3 --js_output_file script.closure.js

Notice that Closure Compiler takes in two options, --create_source_map and --source_map_format, to create a source map file, script.closure.js.map, with source map version 3. Next, append the source mapping url to the end of the compiled script file, script.closure.js, so that the optimized file contains the source map location information:

//@ sourceMappingURL=script.closure.js.map

Now, when we view the project in the browser, the “scripts” directory, under the Source Panel of the developer tools, will show both the original file as well as the optimized version, script.closure.js. Although the browser is of course using the optimized file that we originally referenced in index.html, source maps allow us to create a connection to the original file.

Also, do try it out with breakpoints for debugging, but keep in mind that watch expressions and variables are not yet available with source maps. Hopefully, they will be in the future!


Option B: GruntJS Task for JSMin

If you already use Grunt.js for build processes, then the Grunt plugin for JSMin source maps will come in handy. Not only will it optimize your code, but it will also create the source map!

The following steps will demonstrate how to create an optimized version of script.js with the Grunt JSMin plugin:

  1. Install Grunt.js and initiate a gruntfile, grunt.js, within the root of the “start” directory:
    $ npm install -g grunt
    $ npm view grunt version
    npm http GET https://registry.npmjs.org/grunt
    npm http 200 https://registry.npmjs.org/grunt
    0.3.17
    $ grunt init:gruntfile
  2. Install the Grunt plugin grunt-jsmin-sourcemap; when you do, a directory, called node_modules/grunt-jsmin-sourcemap will be created:
    $ npm install grunt-jsmin-sourcemap
  3. Edit the newly created grunt.js file to only contain the jsmin-sourcemap task – to keep things as simple as possible.
    module.exports = function(grunt) {
      grunt.loadNpmTasks('grunt-jsmin-sourcemap');
      grunt.initConfig({
        'jsmin-sourcemap': {
          all: {
            src: ['scripts/script.js'],
            dest: 'scripts/script.jsmin-grunt.js',
            destMap: 'scripts/script.jsmin-grunt.js.map'
          }
        }
      });
      grunt.registerTask('default', 'jsmin-sourcemap');
    };
  4. Return to the command line, and run grunt; this will execute the jsmin-sourcemap task, as the default task is stated as such within the grunt.js file:
    $ grunt
    Running "jsmin-sourcemap:all" (jsmin-sourcemap) task
    Done, without errors.
  5. In the newly created source map file, script.jsmin-grunt.js.map, ensure that the source is "sources":["script.js"].
  6. Uncomment Option B to link to the newly created file, script.jsmin-grunt.js, within index.html, and open it up in the browser.

With Grunt and the plugin, jsmin-sourcemap, the build process created two files: the optimized script file with the source mapping url at the bottom, as well as a source map. You will need both of these in order to view the original source in the browser.


Option C: UglifyJS

UglifyJS2 is another JavaScript parser, minifier and compressor. Similar to the two alternatives above, UglifyJS2 will create an optimized script file, appended with a source mapping url, as well as a source map file that will contain the mapping to the original file. To use UglifyJS, execute the following in the command line of the “start” directory:

  1. Install the NPM module, uglify-js, locally; a directory, called node_modules/uglify-js, will be created.
    $ npm install uglify-js
    $ npm view uglify-js version
    2.2.3
    $ cd scripts/
  2. Within the “scripts” directory, we’ll execute the command to create an optimized version, as well as a source map, using the options, --source-map and --output, to name the output files.
    uglifyjs --source-map script.uglify.js.map --output script.uglify.js script.js
  3. Lastly, ensure that index.html is correctly linked to the script, script.uglify.js.

Option D: CoffeeScript Redux

For the previous three options, we only required a one-step optimization, from the original code to the optimized JavaScript. However, for languages like CoffeeScript, we need a two-step process: CoffeeScript > JavaScript > optimized JavaScript. In this section, we will explore how to create Multi-Level Source Maps with CoffeeScript and the CoffeeScript Redux compiler.

Step 1: CoffeeScript to Plain JavaScript

Navigate to the directory, “start,” in the command line. In the following steps, we will map the optimized script file back to the CoffeeScript:

  1. Install CoffeeScript as a global npm package
  2. Compile the CoffeeScript file, script.coffee.coffee, to create a plain JavaScript version, using the following command:
    $ coffee -c scripts/script.coffee.coffee
  3. Install CoffeeScript Redux:
    $ git clone https://github.com/michaelficarra/CoffeeScriptRedux.git coffee-redux
    $ cd coffee-redux
    $ npm install
    $ make -j test
    $ cd ..
  4. Next, we will create a source map file, script.coffee.js.map, that will hold the mapping information from the generated JavaScript back to the CoffeeScript file:
    $ coffee-redux/bin/coffee --source-map -i scripts/script.coffee.coffee > scripts/script.coffee.js.map
  5. Ensure that the generated JavaScript file, script.coffee.js, has the source mapping url right at the end with the following line:
    //@ sourceMappingURL=script.coffee.js.map
  6. Ensure that the source map file, script.coffee.js.map, has the correct reference file as "file":"script.coffee.coffee", and source file as "sources":["script.coffee.coffee"]

Step 2: Plain JavaScript to Minified JavaScript

  1. Finally, we will use UglifyJS once again to minify the generated JavaScript, as well as create a source map. This time, it will take in a source map so that we can refer back to the original CoffeeScript file. Execute the following command in the “scripts” directory:
    $ cd scripts/
    $ uglifyjs script.coffee.js -o script.coffee.min.js --source-map script.coffee.min.js.map --in-source-map script.coffee.js.map
  2. Finally, ensure that the source map file, script.coffee.min.js.map, has the correct reference file as "file":"script.coffee.min.js", and the correct sources as "sources":["script.coffee.coffee"].

Option E: TypeScript

TypeScript, just like CoffeeScript, also requires a two-step process: TypeScript > Plain JavaScript > Minified JavaScript. Because the script uses a jQuery plugin, we need two TypeScript files, which are already provided: script.typescript.ts and jquery.d.ts.

Step 1: TypeScript to Plain JavaScript

Navigate to the “scripts” directory from the command line, and execute the following command:

$ tsc script.typescript.ts -sourcemap

The above command will create a new JavaScript file, called script.typescript.js, with the source mapping url at the bottom: //@ sourceMappingURL=script.typescript.js.map. With this single command, it will also create the map file, script.typescript.js.map.

Step 2: Plain JavaScript to Minified JavaScript

As with the CoffeeScript example, the next step is to use UglifyJS.

$ uglifyjs script.typescript.js -o script.typescript.min.js --source-map script.typescript.min.js.map --in-source-map script.typescript.js.map

Finally, ensure that index.html links to the correct script file, scripts/script.typescript.min.js, and open it up in the browser!


Source Maps for SASS

Beyond JavaScript, currently, Chrome also supports SASS or SCSS source maps. For SASS source mapping, let’s amend a few settings in Chrome, and then compile SASS to CSS with debug parameters:

  1. Before changing any settings, notice that, upon inspecting an element from developer tools, it will only show us the CSS file reference. This isn’t overly helpful.
  2. Go to chrome://flags/.
  3. Enable Developer Tools experiments.
  4. Open Dev Tools > Setting > Experiments > Check “Support for SASS”.
  5. Compile SASS with the following debug parameters in the “styles” directory. This will prepend each CSS ruleset with @media -sass-debug-info, containing the filename and the line number.
    $ cd styles/
    $ sass --debug-info --watch style.sass:style.css
  6. Be sure to restart the developer tools, and refresh the page.
  7. Now, when we inspect an element, we can access the original SASS file!

Beyond simply viewing the SASS file, if you are running LiveReload in the background and make changes to the SASS file, the page will also update to reflect the changes. For example, let’s open up the project in Firefox, and inspect the page, using the Firebug extension.


Information Within a Source Map

If we view any of the *.map files, we’ll find the mapping information from the original file to the optimized file. The structure of a source map is typically in the JSON format, using the Version 3 specification. It will usually contain the following five properties:

  1. version: Version number of the source map – typically “3.”
  2. file: Name of the optimized file.
  3. sources: Names of the original files.
  4. names: Symbols used for mapping.
  5. mappings: Mapping data.
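Putting those five properties together, a version-3 source map might look something like the following sketch. Note that the values here are purely illustrative – in particular, the mappings string is a Base64 VLQ-encoded series of offsets, not something you would write by hand:

```json
{
    "version": 3,
    "file": "script.closure.js",
    "sources": ["script.js"],
    "names": ["setBackground", "color", "document"],
    "mappings": "AAAA,aAAcA,SAAQC"
}
```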

Additional Resources

Source maps are still very much under active development, but, already, there are some great resources available around the web. Be sure to consider the following, if you’d like to learn more.


Conclusion

I hope that the above walk-through, using multiple compilers, has demonstrated the potential of source maps. Although the functionality is currently limited, hopefully, in the future, we’ll have full debugging capability, including access to variables and expressions.

PSR-Huh?


If you’re an avid PHP developer, it’s quite likely that you’ve come across the abbreviation, PSR, which stands for “PHP Standards Recommendation.” At the time of this writing, there are four of them: PSR-0 to PSR-3. Let’s take a look at what these are, and why you should care (and participate).


A Brief History

PHP has never truly had a uniform standard for writing code. Those who maintain various codebases commit time to writing their own naming conventions and coding style guidelines. Some of these developers choose to inherit a well-documented standard, such as PEAR or Zend Framework; yet others choose to write standards completely from scratch.


The Framework Interoperability Group

Do not hesitate to open a new topic in the mailing list.

At the php|tek conference in 2009, people representing various projects discussed their options for working between projects. It surely comes as no surprise that sticking to a set of standards between codebases was the main agenda item.

Until recently, they labeled themselves as the “PHP Standards Group”; now, they operate under the umbrella of the Framework Interoperability Group (FIG), as they felt the former didn’t accurately describe the group’s intentions. Even though the name of this group explicitly refers to frameworks, developers representing all sorts of projects have been accepted as voting members.

The FIG intends to host a cross-section of the PHP ecosystem, not exclusively framework developers. For example, the Symfony, Lithium and CakePHP frameworks each have a representative as a voting member, but the same goes for PyroCMS, phpDocumentor, and even Composer.

The voting members can start or participate in votes, however, anyone else (including you!) can become a PHP-FIG community member by subscribing to the PHP-FIG mailing list.

This mailing list is where discussions, votes, suggestions and feedback take place.

The Goal

The goal of the FIG is to create a dialogue between project representatives, with the aim of finding ways to work together (interoperability). At the time of this writing, that dialogue has spawned four PHP Standards Recommendations: PSR-0 to PSR-3.

Those recommendations are free and can be adopted by anyone, though no one is obligated to do so. In fact, voting members are not required to implement any of the PSRs in the projects that they represent!


PSR-0: Autoloader Standard

PSR-0 is a huge step forward for reusable code.

Remember how you used to specify many require statements to reference every class that you needed? Thankfully, this pattern changed with PHP 5’s magic __autoload() function. PHP 5.1.2 made autoloading even better by introducing spl_autoload(), which allows you to register a chain of autoloading functions with spl_autoload_register().

No matter how good the autoloading functionality is, it does not define how to implement it with existing codebases. For example, library X might approach the directory and classname structure differently than library Y, but you might want to use both!

Writing a proper autoloader that knows where to look for all possible fully-qualified names, as well as tests all file extensions (.class.php, .inc.php, .php, etc.), will quickly become a mess. Without a standard to define how to name classes and where to place them, autoloader interoperability would still be a pipe dream.

Meet PSR-0: a standards recommendation that describes “the mandatory requirements that must be adhered to for autoloader interoperability.”

PSR-0 is a huge step forward for reusable code. If a project follows the PSR-0 standard and its components are loosely coupled, you can simply take those components, place them within a ‘vendor’ directory, and use a PSR-0 compliant autoloader to include those components. Or, even better, let Composer do that for you!

For example, this is exactly what Laravel does with two Symfony Components (Console and HttpFoundation).

The FIG has an example implementation of a PSR-0 compliant autoloader function that can be registered to spl_autoload_register(), but you are free to use any of the more flexible implementations, such as the decoupled Symfony ClassLoader or Composer’s autoloader.
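The core of any such autoloader is a purely mechanical rule: strip a leading namespace separator, convert the remaining namespace separators (and, within the class name itself, underscores) into directory separators, and append .php. To make the rule concrete, here is that transformation sketched in JavaScript – purely as an illustration, since a real autoloader would of course be written in PHP:

```javascript
// PSR-0 class-name-to-file-path rule, sketched outside of PHP for illustration.
function psr0Path(className) {
    // Strip a leading namespace separator, e.g. "\Foo\Bar" -> "Foo\Bar"
    className = className.replace(/^\\/, '');

    // Split the fully-qualified name into its namespace and class portions
    var lastSep = className.lastIndexOf('\\');
    var namespace = lastSep === -1 ? '' : className.slice(0, lastSep);
    var klass = lastSep === -1 ? className : className.slice(lastSep + 1);

    // Namespace separators become directory separators...
    var path = namespace ? namespace.split('\\').join('/') + '/' : '';

    // ...and so do underscores, but only within the class name itself
    return path + klass.split('_').join('/') + '.php';
}

psr0Path('Symfony\\Component\\HttpFoundation\\Request');
// → 'Symfony/Component/HttpFoundation/Request.php'
psr0Path('Twig_Extension_Core');
// → 'Twig/Extension/Core.php'
```

A PSR-0 compliant autoloader simply requires the resulting path, which is why libraries that follow the standard can be dropped into a shared ‘vendor’ directory and loaded by any compliant loader.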


PSR-1: Basic Coding Standard

PSR-1 focuses on a basic coding standard.

There was a lengthy period of low-activity in the FIG after PSR-0’s acceptance. In fact, it took a good year and a half before the PSR-1 and PSR-2 documents were approved.

PSR-1 focuses on a basic coding standard. It refrains from being too detailed, and does so by limiting itself to a set of ground rules to “ensure a high level of technical interoperability between shared PHP code”.

  • Only use the <?php and <?= tags.
  • Only use UTF-8 without BOM for PHP code.
  • Separate side-effects (generate output, access a database etc.) and declarations.
  • Enforce PSR-0.
  • Class names must be defined in StudlyCaps.
  • Class constants must be defined in upper case with underscore separators.
  • Method names must be defined in camelCase.

The ground rules focus on naming conventions and file structure. This ensures that all shared PHP code behaves and looks the same way at a high level. Imagine an application that uses numerous third-party components, and they all use different naming conventions and character encodings. That would be a mess!


PSR-2: Coding Style Guide

PSR-2’s purpose is to have a single style guide for PHP code that results in uniformly formatted shared code.

PSR-2 extends and expands PSR-1’s basic coding standards. Its purpose is to have a single style guide for PHP code that results in uniformly formatted shared code.

The coding style guide’s rules were decided upon after an extensive survey given to the FIG voting members.

The rules in PSR-2, agreed upon by the voting members, do not necessarily reflect the preferences of every PHP developer. Since its beginning, however, the PHP-FIG has stated that its recommendations are for the FIG itself; others choosing to adopt them is simply a positive outcome.

The full PSR-2 specification can be found in the fig-standards repository. Be sure to give it a read.

In an ideal world, every PHP project would adopt the recommendations found in PSR-1 and PSR-2. However, due to taste (e.g. “Naming convention x looks better than y!”, “Tabs over spaces!”) and historical segmentation between various coding styles, only a sparse number of PHP projects have adopted PSR-1 and PSR-2 in their entirety.


PSR-3: Logger Interface

PSR-3 describes a shared logging interface.

PHP Standard Recommendation #3 is the most recent addition to the accepted FIG-standards. It was accepted on December 27, 2012 with an impressive vote count of 18 to 0 (8 voting members did not cast a vote).

PSR-3 describes a shared logging interface, incorporating the eight Syslog levels of The Syslog Protocol (RFC 5424): debug, info, notice, warning, error, critical, alert and emergency.

Those eight Syslog levels are defined as method names, which accept two parameters: a string with a log $message and an optional $context array with additional data that does not fit well in the former string. Implementers may then replace placeholders in $message with values from $context.
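The PSR-3 document suggests a {placeholder} convention for those context values. The replacement logic is simple enough to sketch in a few lines – shown here in JavaScript purely for illustration, since any real implementation would live inside a PHP logger:

```javascript
// Placeholder interpolation in the spirit of PSR-3's {placeholder} convention,
// sketched in JavaScript for illustration only.
function interpolate(message, context) {
    context = context || {};
    return message.replace(/\{(\w+)\}/g, function (match, key) {
        // Leave unknown placeholders untouched
        return key in context ? String(context[key]) : match;
    });
}

interpolate('User {username} created', { username: 'bolivar' });
// → 'User bolivar created'
```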

A shared interface standard, like PSR-3, results in frameworks, libraries and other applications being able to type hint on that shared interface, allowing developers to choose a preferred implementation.

In other words: if a logging component is known to adhere to PSR-3, it can simply be swapped with a different PSR-3 compliant logging component. This assures maximum interoperability between calls to logger implementations.

Monolog recently implemented PSR-3. It’s therefore known to implement the Psr\Log\LoggerInterface and its associated guidelines found in the PSR-3 document.


Criticism

The PHP-FIG is doing a great job of centralizing PHP standards.

Some people say that the PSRs go too far in defining a global set of standards for coding style – something that is more subjective than objective. Others feel that a shared interface is too specific and not flexible. But criticism comes naturally with any standards initiative. People don’t like how identifiers are supposed to be named, which indentation is used, or how the standards are formed.

Most of the criticism and reflection takes place from the sideline, after the standards are accepted – even though the standards form in the mailing list (which is open for everyone to join in and leave feedback, comments or suggestions). Do not hesitate to open a new topic in the mailing list, if you think you have something valuable to add.

It’s also important to keep in mind that it’s not the specific combination of rules that contributes to the benefit of the recommended standards, but rather having one consistent set of rules shared among various codebases.

By everyone shirking their own preferences, we have one consistent style from the outside-in, meaning I can use ANY package – not just whichever happens to be camelCase.

- Phil Sturgeon in the PHP-FIG mailing list


The Future

The FIG intends to host a cross-section of the PHP ecosystem, not only framework developers.

With a growing number of influential voting members and four accepted standards, the FIG is certainly gaining more traction in the PHP community. PSR-0 already has widespread usage, and hopefully PSR-1 and PSR-2 will follow suit to achieve more uniformity in shared PHP code.

With the shared interface defined in PSR-3, the Framework Interoperability Group took a new turn in recommending standards. They are still heading in that direction, as the contents of new shared interfaces are being discussed on the mailing list.

Currently, there is an interesting discussion about the proposal of an HTTP Message Package, which holds shared interfaces for implementing an HTTP client. There are also various discussions proposing a shared Cache interface; but, as of now, activity there seems to be low.

No matter what the outcome of those proposals will be, the PHP-FIG is doing a great job of centralizing PHP standards. They are without a doubt influencing the PHP ecosphere in a positive manner, and hopefully their efforts will obtain a more prominent place in the PHP programming language.

Remember: currently, they still operate under the name of the Framework Interoperability Group, and have no intentions whatsoever to tell you – Joe the Programmer – how to build your applications. They merely recommend a set of standards that anyone can adopt.


Better Workflow in PHP With Composer, Namespacing, and PHPUnit

Important Considerations When Building Single Page Web Apps


Single page web applications – or SPAs, as they are commonly referred to – are quickly becoming the de facto standard for web app development. The fact that a major part of the app runs inside a single web page makes it very interesting and appealing, and the accelerated growth of browser capabilities pushes us closer to the day when all apps run entirely in the browser.

Technically, most web pages already are SPAs; it’s the complexity of a page that differentiates a web page from a web app. In my opinion, a page becomes an app when you incorporate workflows, CRUD operations, and state management around certain tasks. You’re working with a SPA when each of these tasks takes place on the same page (using AJAX for client/server communication, of course).

Let’s start with this common understanding, and dive into some of the more important things that should be considered when building SPAs.


There are numerous points to consider before building a new app; to make matters worse, the expansive web development landscape can be intimidating at the outset. I have been in those unsettling shoes, but fortunately, the past few years have brought consensus on the tools and techniques that make the application development experience as enjoyable and productive as possible.

Most apps consist of both client and server-side pieces; although this article focuses mostly on the client-side portion of an app, I’ll provide a few server-side pointers toward the end of this article.

There is a colorful mix of technologies on the client-side, as well as several libraries and practices that enable a productive app development experience. This can be summarized, using the following word cloud.

Important Considerations - checklist

I will expand on each of the points above in the following sections.


Picking an Application Framework

There is an abundance of frameworks to choose from. Here’s but a handful of the most popular, all of which appear later in this section: Backbone, CanJS, BatmanJS, EmberJS, AngularJS, and Meteor.

Choosing a framework is easily one of the most important choices you will make for your app. Certainly, you’ll want to choose the best framework for your team and app. Each of the above frameworks incorporate the MVC design pattern (in some form or another). As such, it’s quite common to refer to them as MVC frameworks. If we had to order these frameworks on a scale of complexity, learning curve and feature set, from left to right, it might look like:

App Frameworks

Although dissimilar in their implementation and level of sophistication, all the aforementioned frameworks provide some common abstractions, such as:

Just looking at the past five years, there has been an explosive growth in libraries, tools and practices.

  • Model: a wrapper around a JSON data structure with support for property getters/setters and property change notification.
  • Collection: a collection of models. Provides notifications when a model is added, removed, or changed in the collection.
  • Events: a standard pattern to subscribe to and publish notifications.
  • View: A backing object for a DOM fragment with support for listening to DOM events relative to the DOM fragment. The View has access to the corresponding Model instance. In some frameworks, there is also a Controller that orchestrates changes between the View and Model.
  • Routing: Navigation within an app via URLs. Relies on the browser history API.
  • Syncing: Persisting model changes via Ajax calls.
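As a rough, framework-agnostic sketch, the Model and Events abstractions above boil down to an attribute bag that publishes change notifications. The names below are invented for illustration; each framework’s real API differs:

```javascript
// Framework-agnostic sketch of the Model + Events abstractions described above.
function Model(attributes) {
    this.attributes = attributes || {};
    this.listeners = {};
}

// Events: subscribe to a named notification
Model.prototype.on = function (event, callback) {
    (this.listeners[event] = this.listeners[event] || []).push(callback);
};

// Property getter/setter with change notification
Model.prototype.get = function (key) {
    return this.attributes[key];
};
Model.prototype.set = function (key, value) {
    this.attributes[key] = value;
    (this.listeners.change || []).forEach(function (cb) { cb(key, value); });
};

// Usage: listen for property changes, much as a framework's View would
var todo = new Model({ title: 'Write article' });
todo.on('change', function (key, value) {
    console.log(key + ' changed to: ' + value);
});
todo.set('title', 'Edit article'); // the change listener fires
```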

More advanced frameworks, like CanJS, BatmanJS, EmberJS and AngularJS, expand on these basic features by providing support for automatic data-binding and client-side templates. The templates are data-bound and keep the view in sync with any changes to the model. If you decide to pick an advanced framework, you will certainly get a lot of out-of-the-box features, but it also expects you to build your app in a certain way.

Of all the previously listed frameworks, Meteor is the only full-stack framework. It provides tools not only for client-side development, but it also provides you with a server-side piece, via NodeJS, and end-to-end model synchronization, via MongoDB. This means that, when you save a model on the client, it automatically persists in MongoDB. This is a fantastic option, if you run a Node backend and use MongoDB for persistence.

Based on the complexity of your app, you should pick the framework that makes you the most productive. There certainly will be a learning curve, but that’s a one-time toll you pay for express-lane development. Be sure to carve out some time to evaluate these frameworks, based on a representative use-case.

Note: If you want to learn more about these frameworks from their creators, listen to these videos from ThroneJS.


Client-Side Templates

The most popular JavaScript-based templating systems are Underscore templates and Handlebars.

Some of the advanced frameworks from the previous section offer built-in templating systems.

For example, EmberJS has built-in support for Handlebars. However, you do have to consider a templating engine if you decide to use a lean framework, such as Backbone. Underscore is an excellent starting point, if you have limited templating requirements. Otherwise, Handlebars works great for more advanced projects. It also offers many built-in features for more expressive templates.
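At their core, all of these engines interpolate data into placeholders within a string of markup. A toy version of that idea – Handlebars-style {{name}} placeholders, with none of the escaping, helpers or pre-compilation that real engines provide – might look like:

```javascript
// Toy string-template function in the spirit of Underscore/Handlebars
// interpolation; real engines add escaping, helpers and pre-compilation.
function render(template, data) {
    return template.replace(/\{\{(\w+)\}\}/g, function (match, key) {
        return key in data ? String(data[key]) : '';
    });
}

render('<li>{{title}} ({{status}})</li>', { title: 'Source Maps 101', status: 'published' });
// → '<li>Source Maps 101 (published)</li>'
```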

If you find that you require a large number of client-side templates, you can save some computation time by pre-compiling the templates on the server. Pre-compilation gives you plain JavaScript functions that you invoke to improve the load time of the page. Handlebars supports pre-compilation, making it worth the time and effort to fully explore.

ExpressJS users can even use the same templating engine on the client as on the server, giving you the benefit of sharing your templates between both the client and server.


Modular Development

Using a preprocessor requires an extra step in your build process.

JavaScript code is traditionally added to the page, via the <script /> element. You typically list libraries and other dependencies first, and then list the code that references those dependencies. This style works well, when you only need to include a few files; however, it will quickly become a nightmare to maintain, as you include additional scripts.

One solution to this problem is to treat each script file as a Module, and identify it by a name or relative file path. Using these semantics, and with the support of libraries, like RequireJS and Browserify, you can build your app using a module-based system.

The module thus becomes a way to identify the functionality within the app. You can organize these modules, using a certain folder structure that groups them based on a particular feature or functionality. Modules help in managing your application’s scripts, and it also eliminates global dependencies that must be included with <script /> elements before the application scripts. For libraries that are not AMD compatible, RequireJS offers a shim feature that exposes non-AMD scripts as modules.
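For instance, a shim configuration might look like the following sketch. The module names, paths, dependencies and exported globals here are illustrative; consult each library’s documentation for the real values:

```javascript
// Sketch of a RequireJS configuration that shims a non-AMD library.
// The names and paths below are illustrative, not prescriptive.
var config = {
    paths: {
        'backbone': 'lib/backbone',
        'underscore': 'lib/underscore'
    },
    shim: {
        // Declare what the non-AMD script depends on,
        // and which global symbol it exports as its module value
        'backbone': {
            deps: ['underscore', 'jquery'],
            exports: 'Backbone'
        }
    }
};
// In a real app, you would pass this to require.config(config);
```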

There are currently two types of module-based systems: AMD (Asynchronous Module Definition) and CommonJS.

In AMD, each module contains a single top-level define() statement that lists all required dependencies, and an export function that exposes the module’s functionality. Here’s an example:

define([
    // listing out the dependencies (relative paths)
    'require',
    'features/module/BaseView',
    'utils/formatters'
], function(require, BaseView, formatters) { // Export function that takes in the dependencies and returns some object
    // do something here
    // An explicit require; this works because 'require' is listed as a dependency above
    var myModule = require('common/myModule');
    // Object exposing some functionality
    return { ... };
});

CommonJS module names are based on either a relative file path or a built-in module lookup process. There is no define() function in any module, and dependencies are explicitly stated by calls to require(). A module exposes its functionality, via the module.exports object, which each module automatically creates. Here’s a CommonJS example:

var fs = require('fs'), // standard or built-in modules
    path = require('path'),
    formatters = require('./utils/formatters'); // relative file path as module name
// Export my code
module.exports = { ... };

The CommonJS module style is more prevalent in NodeJS applications, where it makes sense to skip the define() call – you are working with a file-system based module lookup. Interestingly, you can do the same in a browser, using Browserify.


Package Management

Performance should be on your mind as you build and add features to your app.

Most apps have at least one dependency, be it a library or some other third party piece of code. You’ll find that you need some way to manage those dependencies as their number increases, and you need to insulate yourself from any breaking changes that newer versions of those dependencies may introduce.

Package management identifies all the dependencies in your app with specific names and versions. It gives you greater control over your dependencies, and ensures that everyone on your team is using an identical version of the library. The packages that your app needs are usually listed in a single file that contains a library’s version and name. Some of the common package managers for different tech stacks are:

  • Linux: Aptitude
  • .NET: Nuget
  • PERL: CPAN
  • Ruby: Gems
  • PHP: Composer
  • Node: NPM
  • Java: Maven and Gradle

Although package management is more of a server-side ability, it’s gaining popularity in client-side development circles. Twitter introduced Bower, a browser package manager similar to NPM for Node. Bower lists the client-side dependencies in component.json, and they are downloaded by running the bower CLI tool. For example, to install jQuery, from the Terminal, you would run:

bower install jquery

The ability to control a project’s dependencies makes development more predictable, and provides a clear list of the libraries that an app requires. If you consider consolidating your libraries in the future, doing so will be easier with your package listing file.
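For reference, a Bower package listing is just a small JSON file; a minimal component.json might look like the following (the names and versions are illustrative):

```json
{
    "name": "my-app",
    "dependencies": {
        "jquery": "~1.8.2",
        "underscore": "~1.4.3"
    }
}
```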


Unit and Integration Testing

It goes without saying that unit testing is a critical part of app development. It ensures that features continue to work as you refactor code, introduce libraries, and make sweeping changes to your app. Without unit tests, it will prove difficult to know when something fails, due to a minor code change. Coupled with end-to-end integration testing, it can be a powerful tool, when making architectural changes.

On the client-side, Jasmine, Mocha and QUnit are the most popular testing frameworks. Jasmine and Mocha support a more Behavior-Driven Development (BDD) style, where the tests read like English statements. QUnit, on the other hand, is a more traditional unit testing framework, offering an assertion-style API.

Jasmine, Mocha and QUnit run tests in a single browser.

If you want to gather test results from multiple browsers, you can try a tool, like Testacular, which runs your tests in several browsers at once.

To take testing the whole nine yards, you’ll likely want to have integration tests in your app, using Selenium and Cucumber/Capybara. Cucumber allows you to write tests (aka features) in an English-like syntax, called Gherkin, which can even be shared with the business folks. Each test statement in your Cucumber file is backed by executable code that you can write in Ruby, JavaScript or any of the other supported languages.

Executing a Cucumber feature file runs your executable code, which in turn tests the app and ensures that all business functionality has been properly implemented. Having an executable feature file is invaluable for a large project, but it might be overkill for smaller projects. It definitely requires a bit of effort to write and maintain these Cucumber scripts, so it really boils down to a team’s decision.


UI Considerations

Having a good working knowledge of CSS will help you achieve innovative designs in HTML.

The UI is my favorite portion of an app; it’s one of the things that immediately differentiates your product from the competition. Although apps differ in their purpose and look and feel, there are a few common responsibilities that most apps have. UI design and architecture is a fairly intensive topic, but it’s worth mentioning a few design points:

  • Form Handling: use different input controls (numeric inputs, email, date picker, color picker, autocomplete), validations on form submit, highlight errors in form inputs, and propagating server-side errors on the client.
  • Formatting: apply custom formats to numbers and other values.
  • Error Handling: propagate different kinds of client and server errors. Craft the text for different nuances in errors, maintain an error dictionary and fill placeholders with runtime values.
  • Alerts and Notifications: tell the user about important events and activities, and show system messages coming from the server.
  • Custom Controls: capture unique interaction patterns in the app as controls that can be reused. Identify the inputs and outputs from the control without coupling with a specific part of the app.
  • Grid System: build layouts using a grid system, like Compass Susy, 960gs, or CSS Grid. The grid system will also help in creating responsive layouts for different form factors.
  • UI Pattern Library: get comfortable with common UI patterns. Use Quince for reference.
  • Layered Graphics: understand the intricacies of CSS, the box models, floats, positioning, etc. Having a good working knowledge of CSS will help you achieve innovative designs in HTML.
  • Internationalization: adapt a site to different locales. Detect the locale using the Accept-Language HTTP header or through a round-trip to gather more info from the client.
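The formatting point above is usually handled by a small shared helper. As one hypothetical example (not tied to any particular library), a digit-grouping formatter might look like:

```javascript
// Format 1234567.5 as "1,234,567.50" -- an illustrative helper
function formatNumber(value) {
    var parts = value.toFixed(2).split('.');
    // Insert a comma before every group of three digits
    parts[0] = parts[0].replace(/\B(?=(\d{3})+(?!\d))/g, ',');
    return parts.join('.');
}

console.log(formatNumber(1234567.5)); // "1,234,567.50"
```

Centralizing formatters like this one keeps display rules consistent across views and makes them easy to unit test.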

CSS Preprocessors

CSS is a deceptively simple language. Its constructs are basic, yet it can be very unwieldy to manage, especially when the same values appear across many selectors and properties. It’s not uncommon to reuse a set of colors throughout a CSS file, but doing so introduces repetition, and changing those repeated values increases the potential for human error.

CSS preprocessors solve this problem and help to organize, refactor and share common code. Features, such as variables, functions, mixins and partials, make it easy to maintain CSS. For example, you could store the value of a common color within a variable, and then use that variable wherever you want to use its value.
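In SASS’s SCSS syntax, for instance, that shared color becomes a variable, and a mixin captures a repeated block of rules (the names below are illustrative):

```scss
// Shared values live in one place
$brand-color: #2a6496;

// A mixin captures a repeated block of rules
@mixin rounded($radius: 4px) {
    border-radius: $radius;
}

.button {
    background: $brand-color;
    @include rounded;
}

.panel-header {
    color: $brand-color;
    @include rounded(2px);
}
```

Changing $brand-color in one place now updates every rule that uses it.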

Using a preprocessor requires an extra step in your build process: you have to generate the final CSS.

There are, however, tools which auto-compile your files, and you can also find libraries that simplify stylesheet development. SASS and Stylus are two popular preprocessors that offer corresponding helper libraries. These libraries also make it easy to build grid-based systems and create a responsive page layout that adapts to different form factors (tablets and phones).

Although CSS preprocessors make it easy to build CSS with shared rules, you still have the responsibility of structuring it well, and isolating related rules into their own files. Some principles from SMACSS and OOCSS can serve as a great guide during this process.

Scalable and Modular Architecture for CSS is included as part of a Tuts+ Premium membership.


Version Control

If you know a hip developer, then you’re probably aware that Git is the reigning champion of all version control systems (VCS). I won’t go into all the details of why Git is superior, but suffice it to say that branching and merging (two very common activities during development) are mostly hassle free.

A close parallel to Git, in terms of philosophy, is Mercurial (hg), although it is not as popular as Git. The next best alternative is the long-standing Subversion. The choice of VCS is greatly dependent on your company standards, and, to some extent, your team. However, if you are part of a small task force, Git is easily the preferred option.


Browser Considerations

It goes without saying that unit testing is a critical part of app development.

There are a variety of browsers that we must support. Libraries, like jQuery and Zepto, already abstract the DOM manipulation API, but there are other differences in JavaScript and CSS, which require extra effort on our parts. The following guidelines can help you manage these differences:

  • Use a tool, like Sauce Labs or BrowserStack to test the website on multiple browsers and operating systems.
  • Use polyfills and shims, such as es5shim and Modernizr to detect if the browser supports a given feature before calling the API.
  • Use CSS resets, such as Normalize, Blueprint, and Eric Meyer’s Reset to start with a clean slate look on all browsers.
  • Use vendor prefixes (-webkit-, -moz-, -ms-) on CSS properties to support different rendering engines.
  • Use browser compatibility charts, such as findmebyIP and canIuse.
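The feature-detection idea behind shims like Modernizr boils down to checking for an API before using it. A minimal hand-rolled sketch (the helper name is our own, not Modernizr’s API):

```javascript
// Returns true only when `scope` exists and exposes `feature`
function hasFeature(scope, feature) {
    return typeof scope !== 'undefined' && scope !== null &&
        typeof scope[feature] !== 'undefined';
}

// In a browser, you might write:
//   if (hasFeature(window, 'localStorage')) { ... } else { /* fall back */ }
```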

Managing browser differences may involve a bit of trial and error; Google and StackOverflow can be your two best friends, when you find yourself in a browser-induced jam.


Libraries

There are a few libraries that you might want to consider:


Minification

Before deploying your application, it’s a good idea to combine all of your scripts into a single file; the same can be said for your CSS. This step is generally referred to as minification, and it aims to reduce the number of HTTP requests and the size of your scripts.

You can minify JavaScript and CSS with tools such as the RequireJS optimizer, UglifyJS, and Jammit. Some of these can also combine your images and icons into a single sprite sheet for even more optimization.

Editor’s Note: I recommend that you use Grunt or Yeoman (which uses Grunt) to easily build and deploy your applications.

Tools of the Trade

Twitter introduced Bower, a browser package manager similar to NPM for Node.

I would be remiss if I did not mention the tools for building SPAs. The following lists a few:

  • JSHint to catch lint issues in your JavaScript files. This tool can catch syntactic problems, such as missing semicolons, and can enforce a certain code style on the project.
  • Instead of starting a project from scratch, consider a tool, such as Yeoman to quickly build the initial scaffolding for the project. It provides built-in support for CSS preprocessors (like SASS, Less and Stylus), compiling CoffeeScript files to JavaScript and watching for file changes. It also prepares your app for deployment by minifying and optimizing your assets. Like Yeoman, there are other tools to consider, such as MimosaJS and Middleman.
  • If you’re looking for a make-like tool for JavaScript, look no further than Grunt. It is an extensible build tool that can handle a variety of tasks. Yeoman uses Grunt to handle all of its tasks.
  • Nodemon for auto-starting a Node program each time a file changes. A similar tool is forever.
  • Code editors, such as Sublime Text, Vim, and JetBrains WebStorm.
  • Command line tools, such as ZSH or BASH. Master the shell, because it can be very effective, especially when working with tools like Yeoman, Grunt, Bower and NPM.
  • Homebrew is a simple package manager for installing utilities.

Performance Considerations

CSS preprocessors make it easy to build CSS with shared rules.

Rather than treating this as an after-thought, performance should be on your mind as you build and add features to your app. If you encounter a performance issue, you should first profile the app. The Webkit inspector offers a built-in profiler that can provide a comprehensive report for CPU, memory and rendering bottlenecks. The profiler helps you isolate the issue, which you can then fix and optimize. Refer to the Chrome Developer Tools for in-depth coverage of the Chrome web inspector.

Some common performance improvements include:

  • Simplify CSS selectors to minimize recalculation and layout costs.
  • Minimize DOM manipulations and remove unnecessary elements.
  • Avoid data bindings when the number of DOM elements runs into the hundreds.
  • Clean up event handlers in view instances that are no longer needed.
  • Try to generate most of the HTML on the server-side. Once on the client, create the backing view with the existing DOM element.
  • Have region-specific servers for faster turnaround.
  • Use CDNs for serving libraries and static assets.
  • Analyze your web page with tools like YSlow and take actions outlined in the report.

The above is only a cursory list. Visit Html5Rocks for more comprehensive performance coverage.
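The handler-cleanup point in the list above comes down to one pattern: record every subscription a view makes, and undo them all when the view is discarded. A framework-agnostic sketch, assuming any event emitter with on/removeListener methods:

```javascript
function View(bus) {
    this.bus = bus;
    this.subscriptions = [];
}

// Subscribe, and remember the handler so it can be removed later
View.prototype.listen = function (event, handler) {
    this.bus.on(event, handler);
    this.subscriptions.push({ event: event, handler: handler });
};

// Tear down everything this view registered
View.prototype.dispose = function () {
    var bus = this.bus;
    this.subscriptions.forEach(function (sub) {
        bus.removeListener(sub.event, sub.handler);
    });
    this.subscriptions = [];
};
```

Calling dispose() when a view leaves the screen keeps stale handlers from leaking memory or firing against detached DOM elements.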


Auditing and Google Analytics

If you plan on tracking your app’s usage or gathering audit trails around certain workflows, Google Analytics (GA) is probably your best solution. By including a simple GA script on each page with your tracking code, you can gather a variety of your app’s metrics. You can also set up goals at the Google Analytics website. This fairly extensive topic is worth investigating, if tracking and auditing is an important concern.


Keeping up With the Joneses

The world of web development changes quickly. Just looking at the past five years, there has been an explosive growth in libraries, tools and practices. The best way to keep tabs on the web’s evolution is to subscribe to blogs (like this one), newsletters and just being curious:


Operations Management

The client-side, although looking like a large piece of the stack, is actually only half of the equation. The other half is the server, which may also be referred to as operations management. Although beyond the scope of this article, these operations can include:

  • continuous integration, using build servers such as TeamCity, Jenkins, and Hudson.
  • persistence, data redundancy, failover and disaster recovery.
  • caching data in-memory and invalidating the cache at regular intervals.
  • handling roles and permissions and validating user requests.
  • scaling under heavy load.
  • security, SSL certificates, and exploit-testing.
  • password management.
  • support, monitoring and reporting tools.
  • deployment and staging.

Summary

As you can see, developing an app and bringing it to production involves a variety of modern technologies. We focused primarily on client-side development, but don’t forget the server-side portion of the app. Separately, they’re useless, but together, you have the necessary layers for a working application.

With so much to learn, you wouldn’t be alone if you feel overwhelmed. Just stick with it, and don’t stop! You’ll get there soon enough.

Ruby on Rails Study Guide: The History of Rails


Ruby on Rails – or simply, Rails – is an open source, rapid web development framework, with a continuous goal of maximizing developer happiness and productivity. Created nearly a decade ago, Rails today forms the backbone of many of the most popular applications on the web, and has an incredibly vibrant and passionate community. In this study guide segment, we’ll review the history of Ruby on Rails.

Study Guides: When applying for a programming job, you’ll often be presented with a quiz that intends to determine your level of knowledge and experience in a given subject. The various articles in this series provide condensed solutions to the questions that you might expect to see on such tests.

The Foundation of Rails

Rails was created in 2003 by David Heinemeier Hansson, while working on the code base for Basecamp, a project management tool by 37signals. David extracted Ruby on Rails and officially released it as open source code in July of 2004. Despite rapid iteration of the Rails code base throughout the years, it has stuck to three basic principles:

  • Ruby Programming Language
  • Model-View-Controller Architecture
  • Programmer Happiness

The Ruby Programming Language

Ruby on Rails is written in the programming language, Ruby, which was created by Yukihiro Matsumoto a.k.a. Matz in 1995. Matz created Ruby from some of his favorite programming languages, such as Lisp, Perl, and Ada, while placing significant emphasis on “trying to make Ruby natural, not simple.” David, himself, fell in love with Ruby upon first using it.

A big part of what makes Ruby so special to work with is just how much expression you can pack into few lines of code.

Eventually, there was a huge surge in Ruby’s popularity in the mid 2000s. Much of its success can be attributed to the popularity of Rails.

Model-View-Controller Architecture

Baked into the architecture of Rails is the software pattern, referred to as MVC (Model-View-Controller). This provides a clean isolation among the business logic in the Model, the user interface through the Views, as well as the processors handling all sorts of user requests in the Controller. This also makes for easier code maintenance.

Programmer Happiness

Rails heavily emphasizes “Convention over Configuration.”

Rails was created with the goal of increasing programmers’ happiness and productivity levels. In short, with Rails you can get started with a full-stack web application by quickly creating pages, templates and even query functions. Rails heavily emphasizes “Convention over Configuration.” This means that a programmer only needs to specify and code out the non-standard parts of a program. Even though Rails comes with its own set of tools and settings, you’re certainly not limited to them. Developers are free to configure their apps however they wish, though adopting conventions is certainly recommended.


A Look Back

As we look back at the history of Rails, let’s review some of the more significant releases over the years.

  1. Rails 1.0 (Dec 2005) – Mostly polishing up and closing pending tickets from the first release along with the inclusion of Scriptaculous 1.5 and Prototype 1.4.
  2. Rails 1.2 (Jan 2007) – REST support and a new-found appreciation for HTTP
  3. Rails 2.0 (Dec 2007) – Better routing resources, multiview, HTTP basic authentication, cookie store sessions
  4. Rails 2.2 (Nov 2008) – i18n, thread safety, connection pooling, Ruby 1.9, JRuby
  5. Rails 2.3 (Mar 2009) – Templates, Engines, Rack
  6. Rails 3.0 (Aug 2010) – New query engine, new router, controller-like mailer, CSRF protection
  7. Rails 3.1 (Aug 2011) – jQuery, SASS, CoffeeScript, Sprockets with Assets Pipeline
  8. Rails 3.2 (Jan 2012) – Journey routing engine, faster development mode, automatic query explains, tagged logging for multi-user applications

Over the years, Rails has indeed made it easier for beginners to dive into web development, as well as build large complex applications – some of which include Twitter (at one point), GitHub and, of course, 37signals’ very own Basecamp. Although it has often been criticized for performance and bloat, Rails continues its iterations along with an ever-growing developer community and a vibrant ecosystem.

Rails is even offered by many hacker schools today, as part of their curriculum for web development.


A Peek Ahead

For updates on Rails’ development in the future, or even a deeper look back to learn how the various technologies were integrated in past versions, be sure to review the following links:

  1. Release Notes
  2. Documentation

As we look ahead, the core team and many contributors are putting the finishing touches on Rails 4.0. Stay tuned to Nettuts+, where we’ll dig into everything that this new release has to offer!

From Scrum to Lean


While Scrum’s primary goal is organization and project management, Lean is more about optimizing processes in order to quickly produce quality products. It can be your first step toward adopting Agile principles, or it may be something that your team evolves to, when Scrum isn’t enough. I invite you to read my team’s story, and how we evolved from Scrum to a more Lean-ish development process.


A Little History

Lean is a set of principles defined by the Japanese automobile manufacturing industry in the 1980s. John Krafcik, a quality engineer at the Toyota-GM joint venture NUMMI, coined the term while observing the processes and tools used to eliminate waste in mass automobile production. It wasn’t until 2003 that Mary and Tom Poppendieck introduced Lean as a software development process in their book, Lean Software Development: An Agile Toolkit.

Whereas Scrum is a set of rules and roles, Lean is a set of principles and concepts with a handful of tools. Both are considered Agile techniques, and they share the same ideology of delivering fast, while reducing defects and errors. I always emphasize Agile’s adaptability, but can’t ignore the fact that Scrum presents itself as a mandatory set of rules. In fact, Scrum’s religious fans would shout blasphemy for not following Scrum’s rules to the letter.

Lean, on the other hand, is more open; its followers present the process as a set of highly adaptable recommendations.

It encourages the team or company to make decisions, and it adapts to the decisions and every-day surprises that your team and company face.

As my team matured and exploited Scrum, we felt that some aspects of it held us back. We became a fairly disciplined and homogenous team. Some meetings were not appropriate for us anymore, and we started to realize that daily meetings were not efficient. We learned that problems should be solved faster, and we felt the need to avoid these procedures that held us back.

We moved on.


The Two Major Concepts of Lean

Lean exposes two major concepts at its core: eliminating waste and improving the flow of work.

Eliminating Waste

If something’s going to break, it will break on Friday.

Anything that stands in the way of production is waste. This includes lost time, leftover materials, and an unused work force. Defects in the final product are waste; it wastes time to fix them, wastes money to replace them, and wastes resources to find other solutions.

The concept of waste nicely translates to the world of software development. Waste can be described by late deliveries, bugs, and programmers having nothing to do (do not confuse this with “programmers should program eight hours a day without pause or YouTube”).

Improving Flow

Toyota’s Lean concept concentrates on the production flow. In a production plant, the flow is a chain of procedures that transform the raw materials into their final products. These procedures can be very different from one another and take different amounts of time to complete, but they can each be improved to make them more efficient. It’s a constant battle finding and improving bottlenecks in the process.

An ideal flow is one in which each step takes the same amount of time and effort to produce the same amount of products.

This does not mean that each process should cost the same amount of money, but each process should be able to complete with the same amount of ease.

Ironically, Scrum was the tool that eventually led us to realize the waste in our project. While we adhered to pure Scrum in some areas of our production, we started to identify bugs and/or delays that we could easily avoid by taking a different approach.

We knew little about Lean at that point. We read a few articles on the subject, but we did the wrong thing and ignored them, because we were so focused on our Scrum process. We were nearly convinced that Scrum was the Holy Grail, but our “Scrum machine gun” started to misfire. We needed to open our minds.


The Principles of Lean

For software development, Lean’s principles were adapted into the following seven.

1– Eliminate Waste

The change from Scrum to Lean was actually liberating.

In software development, you can find and eliminate waste by recognizing the things that need to be improved. In a pragmatic sense, anything that is not a direct value for the customer is waste. To provide a few examples: waste is a bug, a comment in the code, an unused (or rarely used) feature, teams waiting on other teams, taking a personal call… you get the idea. Anything holding you, your team, or your product back is a waste, and you should take the appropriate actions to remove it.

I remember one of our problems was the frequent need to react faster than two sprints. Developing in sprints and respecting Scrum’s rules prohibits you from changing the stories assigned to the current sprint. A team needs to adapt when a user finds a bug and needs a fix, or when the most important customer wants a feature that can easily be completed in two days. Scrum is just not flexible enough in these cases.

2– Amplify Learning

Place a high value on education.

You have waste, and you naturally want less waste in the future. But why is there waste? It more than likely comes from a team member that doesn’t quite know how to approach a particular problem. That’s all right; nobody knows everything. Place a high value on education.

Identify the areas that need the most improvement (knowledge-wise) and start training. The more you and your team know, the easier it is to reduce waste.

For example, learning Test Driven Development (TDD) can reduce the number of bugs in your code. If you have problems with integrating different teams’ modules, you may want to learn what Continuous Integration means and implement a suitable solution.

You can also learn from user feedback; this allows you to learn how users use your product. A frequently used feature may only be accessible by navigating through five menus. That’s a sign of waste! You may initially blame such waste on programmers and designers (and you may be correct), but users tend to use your software in ways that you never intended. Learning from your users helps eliminate this kind of waste. It also helps you eliminate wrong or incomplete requirements. User feedback can drive a product on paths you would never have otherwise considered.

At some point, my team identified that certain modules could have been better written had we known more about the domain in the beginning. Of course, there is no way to turn back time, and rewriting a huge chunk of code is not practical. We decided to invest time to learn about the domain when tasked with a more complex feature. This may require a few hours or even weeks, depending on the complexity of the problem.

3– Decide as Late as Possible

Every decision has a cost. It may not be immediate and material, but the cost is there. Deferring decisions helps you fully prepare for the problems that you need to face. Probably the most common example of delayed decision is with database design.

Decisions don’t have to be technical in nature; communicating with customers helps you make decisions that impact the way you approach your products’ features.

But users don’t always know what they want. By deferring feature decisions until the users actually need the feature, you will have more information about the problem and can provide the necessary functionality.

Choose the Agile methodology that works best for you and your team.

Fourteen years ago, the team made a decision to store the application’s configuration in a MySQL database. They made this decision at the beginning of the project, and now the current team (my team) has a difficult burden to carry. Fortunately, that product is no longer in active development, but we still have to maintain it from time to time. What should be a simple task ends up being monumentally difficult.

On the bright side, we learned from our predecessors’ mistakes. We make programming, architectural and project decisions as late as possible. In fact, we learned this hard lesson before adopting Lean. Write open and decoupled code, and create a design that is persistence- and configuration-agnostic. It is certainly more difficult to do, but ultimately saves you a lot of time in the future.

Some time ago, we added a feature to our product that compresses data on the disk. We knew it would be useful, and wanted to add it to our product as quickly as possible. We started with simple functionality, avoiding decisions regarding options and configuration until a later time. Users began providing feedback after a few months, and we took that information to make our decisions on the feature’s options and configuration. We modified the feature in less than a day, and not a single user has complained or requested more functionality. It was an easy modification to make; we wrote the code knowing that we’d make a modification in the future.

4– Deliver as Fast as Possible

We live in a constantly changing world. The programs we write today are for computers that will be obsolete in two years. Moore’s law is still valid, and it will continue to be so.

The speed of delivery is everything in this fast-paced world.

Delivering a product in three years puts you behind the pack, so it’s very important to give value to your customers as soon as possible. History has proven that an incomplete product with an acceptable number of bugs is better than nothing. Plus, you gain invaluable user feedback.

Our company had a dream: deliver after each sprint. Naturally, that is impractical in most cases. Our users did not want an updated product every week or month. So, while we strive to release each version of our code, we don’t. We learned that “fast” is what the user perceives – not what we are physically able to do. In our product’s industry, fast means regular updates every few months and critical bug fixes within days. This is how our users perceive “fast”; other types of products and industries have different definitions of “fast.”

5– Empower the Team

Programmers used to be resources encased in cubicles, silently performing their tasks for their company. This was the prominent image of a programmer in the late 1990s, but that’s certainly no longer the case.

History demonstrated that that approach, along with traditional waterfall project management, is not suitable for software.

It was so bad at one point that only around 5% of all software projects were actually delivered. Million-dollar businesses and products were failing 95% of the time, leading to huge losses.

Lean identified that unmotivated programmers caused that waste. But why the lack of motivation? Well, programmers and development teams were not listened to. The company set tasks and micromanaged the employees who were seen only as resources producing source code.

Lean encourages managers to listen to programmers, and it encourages programmers to teach their managers the process of software production. It also encourages programmers to work directly with clients and users. This does not mean the developers do everything, but it does give them the power to influence the product’s evolution. Surprisingly, having that feeling of, “You know that great feature the users love? It was my idea!” is a big motivational factor.

But don’t think this only works for large teams; our team is small. Our company has only a handful of developers, so we have always been close to our users. That relationship has allowed us to influence our product beyond what our managers might have initially envisioned.

6– Build Integrity

Anything that stands in the way of production is waste.

Integrity is about the robustness of your product, and it’s how customers see your product as a whole. Integrity is about UI uniformity, reliability and security, and the user feeling safe using your product. Integrity is the complete image the user creates for your product. For example, UI integrity involves the look and feel between pages, screens, modules or even between the UI of your system and the company’s web site.

But integrity can also be observed and practiced at the source code level. Integrity can mean that your modules, classes and other pieces are written in a similar way. You use the same principles, patterns and techniques throughout your code base – even between different teams. Integrity means that you have to frequently refactor your code. It is continuous and endless work; you should strive for it, though you may never fully reach it.

Maintaining integrity in our source code is a difficult task. We realized that finding duplicate code is the most difficult thing to do, and by duplication, I don’t mean a couple of lines of duplicate code in the same method or class.

It is not even about searching in different modules for the exact same code; it’s about finding those pieces of common logic, extracting them into their own classes, and using them in several places.

Finding logical duplication is very difficult and requires intimate knowledge of the source code.

I’ve been on the same project for more than a year, and I am still surprised when I find duplicate code. But that’s a good thing; we reached a point when we actually see these things and take action on them. Since we started actively removing duplicate high-level logic, our code quality increased. We’re one step closer to achieving integrity.

7– See the Whole

Users don’t always know what they want.

When you create your application, you have to think about the third party components that you rely on in order to develop your product, as well as the other third parties your product communicates with. Your application needs to integrate with the design of a device or the operating system on a desktop PC. Isn’t it easier to use an app that integrates with your smartphone’s notification system and whose UI reflects the OS’ UI?

Yes, seeing the whole is not that easy. You have to detach yourself from the tiny details and look at your product from afar. One of the memorable moments in our product’s development was when we realized that we had to rely on what other programmers produced in other projects. The kernel of the system, programmed by others, is one of these third party components that we rely on.

At one point, that third party component changed. We could have just applied band-aids to our application, or we could’ve taken the easy route and blamed the programmers that wrote that component. Instead, we took the problem by the horns and fixed it in the third party kernel. Seeing the whole and working with it can be messy, but it can make the difference between a product and a great product.


Kanban – A Great Tool

There are several tools and techniques to make Lean work. I prefer Kanban, a board-based tool that is similar to Scrum’s planning board. Imagine a Kanban board as a double-funnel.

On the left is the never-ending list of stories that we need to address. All the finished stories pile up on the right, and the manager or product owner determines when to publish a new release, based on this list.

In the middle is our effective Kanban process. The product should be in a stable and release-ready state when we complete a story. This doesn’t necessarily mean that a feature is done; the product may have a few partially implemented features. But the product’s stability, security and robustness should be production quality.

More Flexibility with Releases

We were doing quite well with our current sprint. It was a usual Monday, a calm day with little excitement, but we started to see some problems by Wednesday. That’s OK, it happens all the time. Overcoming these difficulties, however, required some extra time. Our product owner agreed to let us continue working on the current feature and extend the current sprint by three or four extra days.

Friday came with a surprise. You know the rule: if something’s going to break, it will break on Friday. An important potential client required a certain feature before signing a contract with the company. We had to react (and fast!). A new release was mandatory… But wait! We were in the middle of a sprint. The product should be release-ready by the end of the sprint. What do we do? Scrum would say to build the new feature in the next sprint, but we were already late with the current sprint! That was the moment when we started to realize that we had to think smaller than an individual sprint. We needed to be able to adapt faster, and to release sooner if necessary.


The Kanban Board

A Kanban board looks quite similar to a Scrum planning board, but with a few additions in order to better accommodate the Lean process.

The first column on the left is the full backlog: everything that we need to do at some point. On the far right, you have the other funnel, containing all the completed (but not released) stories.

In the middle is our process. These columns can differ depending on each team and process. It’s usually recommended to have at least one column for the next few tasks, and another column for the stories currently in development. The above image demonstrates several more columns to better present the development process.

The To-Do column lists the tasks that we need to complete. Then, we have Design, where developers work on designing the current stories. The fourth column is Development, the actual coding. Finally, the Testing column lists the tasks waiting for review by another teammate.

Limit Work in Progress

Nobody knows everything.

If you compare this Kanban board with a scrum planning board, you will immediately notice the obvious differences. First, each column has a number, representing the maximum number of stories that are allowed to reside in that column. We have four to-do’s, four in design, three in development and two in testing. The backlog and completed tasks have no such limit.

Each column’s value must be defined by the team. In the above image, I assigned arbitrary numbers to the columns; your numbers may differ significantly. Also, the numbers are not final. Adapt the numbers when you identify and remove bottlenecks.
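To make the limits concrete, here is a minimal sketch of such a board in Python. The class name, column names and limits are all illustrative assumptions, not part of any Kanban tool; the point is simply that a column refuses new stories once it is at its maximum:

```python
class KanbanBoard:
    """A toy Kanban board that enforces per-column WIP limits."""

    def __init__(self, limits):
        # limits maps column name -> max stories; None means unlimited,
        # as with the backlog and completed columns.
        self.limits = dict(limits)
        self.columns = {name: [] for name in limits}

    def add(self, column, story):
        self._require_capacity(column)
        self.columns[column].append(story)

    def move(self, story, src, dst):
        # A story may only move forward if the target column has room.
        self._require_capacity(dst)
        self.columns[src].remove(story)
        self.columns[dst].append(story)

    def _require_capacity(self, column):
        limit = self.limits[column]
        if limit is not None and len(self.columns[column]) >= limit:
            raise RuntimeError(
                "WIP limit reached in '%s' - reallocate effort instead" % column
            )


# The limits from the example board above.
board = KanbanBoard({
    "Backlog": None, "To-Do": 4, "Design": 4,
    "Development": 3, "Testing": 2, "Completed": None,
})
```

Trying to `move()` a fourth story into Development raises an error – which is the board’s way of telling the team to reallocate people rather than pile up work.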

Dynamically Allocate Resources

One of the most amazing qualities of Kanban and Lean is the importance of collaboration and effort reallocation. As you can see on the board, each column of our process contains cards with people on them. There are eight developers on the example board – quite a large team! The board always presents the current status of who is doing what at any given time.

According to our board, there are three individuals working on design, two pairs working on development, and one developer testing. Stories move to the next column when the work in the current column is complete, and depending on the type of development and organization of the team, the same developer can continue working on the same story as it moves through the process.

Let’s presume that we have specialized people. So the three designers’ primary function is design, the four developers write code, and the lonely tester primarily tests the product/feature. If a designer finishes a story, the story moves to development and another story from the to-do list is pulled into design.

This is a normal process. A story was moved from design to development, and now development is at its maximum stories. But what if another designer finishes another story? That gives the developer team four stories – an unwanted situation.

Lean wants to avoid congestion. It is forbidden to move a story to the next column if doing so would exceed the column’s maximum. In this case, resources need to be reallocated: the designer who finished his task must choose what to do next. His first option is to pull another task from the to-do column, but he cannot, because he needs to pass his newly finished task to the development team (which he cannot do). His only other option is to start working on a development story. He may not be the best developer, but his efforts will help maintain the process flow.
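The designer’s decision can be expressed as a simple rule. The sketch below is a hypothetical illustration – the helper function, column names and data shapes are made up for this article – assuming each column is a list of stories with a numeric limit:

```python
def next_assignment(columns, limits, worker, finished, src, dst):
    """Decide what a worker does after finishing a story in `src`.

    If the downstream column `dst` has room, the finished story moves
    on and the worker pulls fresh work into `src`. If `dst` is at its
    maximum, the story stays put and the worker goes to help downstream.
    """
    if len(columns[dst]) < limits[dst]:
        columns[src].remove(finished)
        columns[dst].append(finished)
        return "%s: pull a new story into %s" % (worker, src)
    # Forbidden to move the story - keep the flow going by helping out.
    return "%s: help the team working in %s" % (worker, dst)
```

With Development already holding its maximum of three stories, a designer finishing a story is told to go help the developers; once Development has room, the story moves on and the designer pulls new work instead.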

If the tester finishes the last story in his column, he could help the designer on his development task.

This is great! The team can reorganize on the fly! Do you see a waste? Do you see a bottleneck in the flow? Take immediate action!

Once a story in the development column is completed, the tester returns to testing, the designer to designing, and the developers pick up the story the designer and tester were working on. But keep in mind that you do not have to follow that exact prescription; one of the developers could start designing once the designer and tester finished developing their story. It’s up to you!

Our board is now back to normal.

Welcome Scrum-Ban

It is forbidden to move a story to the next column if it exceeds the column’s maximum.

I watched with nostalgia as our scrum master dismantled our board. Piece by piece, he tore down our beloved board, which then became a mountain of crumpled paper.

Another colleague entered the room, a few huge sheets of fresh white paper in his hands. Our board was about to be reborn into something different, something better suited for our needs. After the paper sheets were on the wall, we started an ad-hoc meeting to define the columns we needed for our process. We then debated the number of stories that should be in each column. After everything was carefully painted and arranged on the wall, we experienced that strange feeling… sadness for the old but happiness for the new.

We did something that many people call Scrum-Ban. We kept some Scrum concepts, such as the scrum master and product owner roles, and we still estimate and evaluate the stories. But we now focus on Lean and Kanban, preserving flow, and discovering and fixing waste and bottlenecks.

The change from Scrum to Lean was actually liberating. Team members became much more friendly with one another, and we understood that we should offer help as soon as there is nothing in our column. This feeling that developers matter made us think about the project as a whole; we care more for the project than we ever have before.


Final Thoughts

Lean was not always considered Agile. Even today, some Agilists refuse to recognize it as an Agile methodology. But more and more programmers accept Lean as one of the ultimate Agile methodologies.

As one of my wise colleagues pointed out, Lean and Kanban allow you to follow this methodology on your own. So, if you are a lone developer and need some tools to make your life easier, try out some free Kanban tools.

The AgileZen website offers a free account, letting you track a single project.

I found it to be one of the best free online Kanban tools; I even use it every day for tracking and planning the progress of the articles and courses that I provide for Tuts+. Of course, you can always upgrade your AgileZen account, if you need to track more projects.

In this article, we reviewed Lean and Kanban as an evolution of Scrum. Does this mean that Lean is better than Scrum? Absolutely not! It depends on the projects and people you work with. As always, choose the Agile methodology that works best for you and your team.

Announcing the Mobile Bundle!


If you love building for mobile, then you’ll love the Mobile Bundle! We’ve filled it with 39 fantastic items from our marketplaces, and we’ve knocked down the price to $20 for 7 days only!


The Mobile Bundle Is Now on Sale

This bundle includes more than $500 worth of highly rated items from ThemeForest, GraphicRiver, VideoHive, PhotoDune, 3DOcean, and CodeCanyon. All of the items were selected by either our review team or by the Marketplace community with a single goal in mind: to help us all build better websites and apps for mobile phones.

Grab these 39 files. They’ll only be around as a bundle until the 31st of January AEDT.

tuts big image

Due to the exclusive nature of the Mobile Bundle, the bundle items are purchased ‘as-is’, meaning no bundle files are eligible for item support.

Get the Bundle Here!
