    Modifying Admin Post Lists In WordPress

Have you ever created a custom post type and then found that only the titles and dates of your posts are displayed in the admin lists? While WordPress will add taxonomies for you, that’s the most it can do. Adding relevant at-a-glance information is easy; in this article, we’ll look at how to modify admin post lists in WordPress.

To make sure we’re on the same page, an admin list is the table of posts shown in the admin section when you click on “Posts,” “Pages” or another custom post type.

Before we delve in, it is worth noting that admin tables are created using the WP_List_Table class. Jeremy Desvaux de Marigny has written a great article on native admin tables that explains how to make these from scratch. In this article, we’ll focus on how to extend existing tables. We’ll do this using an example from a theme that we recently built, named Rock Band. Rock Band includes event management, which means that we needed some custom event-specific interface elements and details to make the admin section more useful!

Creating A Custom Post Type

This process is fairly straightforward and is documented well in “The Complete Guide to Custom Post Types.” All we need is a definition of the labels that we’re going to use and a few settings. Open up your functions.php file and drop the following into it.

```php
add_action( 'init', 'bs_post_types' );

function bs_post_types() {
    $labels = array(
        'name'               => __( 'Events', THEMENAME ),
        'singular_name'      => __( 'Event', THEMENAME ),
        'add_new'            => __( 'Add New', THEMENAME ),
        'add_new_item'       => __( 'Add New Event', THEMENAME ),
        'edit_item'          => __( 'Edit Event', THEMENAME ),
        'new_item'           => __( 'New Event', THEMENAME ),
        'all_items'          => __( 'All Events', THEMENAME ),
        'view_item'          => __( 'View Event', THEMENAME ),
        'search_items'       => __( 'Search Events', THEMENAME ),
        'not_found'          => __( 'No events found', THEMENAME ),
        'not_found_in_trash' => __( 'No events found in Trash', THEMENAME ),
        'menu_name'          => __( 'Events', THEMENAME ),
    );

    $supports = array( 'title', 'editor' );

    $slug = get_theme_mod( 'event_permalink' );
    $slug = ( empty( $slug ) ) ? 'event' : $slug;

    $args = array(
        'labels'             => $labels,
        'public'             => true,
        'publicly_queryable' => true,
        'show_ui'            => true,
        'show_in_menu'       => true,
        'query_var'          => true,
        'rewrite'            => array( 'slug' => $slug ),
        'capability_type'    => 'post',
        'has_archive'        => true,
        'hierarchical'       => false,
        'menu_position'      => null,
        'supports'           => $supports,
    );

    register_post_type( 'event', $args );
}
```

Quick Tip

By pulling the permalink from the theme settings, you can make sure that users of your theme are able to set their own permalinks. This is important for multilingual websites, on which administrators might want to make sure that URLs are readable by their users.

What we get is the default post list. It’s better than nothing, but it has no at-a-glance information at all. Event venue, start time and ticket status would be great additions, so let’s get cracking!

Adding Custom Table Headers

Throughout this whole process, we will never have to touch the WP_List_Table class directly. This is wonderful news! Because we’ll be doing everything with hooks, our code will be nice and modular, and easily customizable. Adding the header is as simple as modifying the value of an array. This sounds like a job for a filter!
```php
add_filter( 'manage_event_posts_columns', 'bs_event_table_head' );

function bs_event_table_head( $defaults ) {
    $defaults['event_date']    = 'Event Date';
    $defaults['ticket_status'] = 'Ticket Status';
    $defaults['venue']         = 'Venue';
    $defaults['author']        = 'Added By';

    return $defaults;
}
```

Note the name of the filter: It corresponds to the name of the post type we have created. This means you can modify the table of any post type, not only your custom ones. Just use manage_post_posts_columns to modify the columns for the table of regular posts. Once this code has been placed in our functions file, you should see the four new table headers. The fields don’t have any content yet; it is for us to decide what goes in them.

Fill ’er Up!

Adding data for each column is about as “complex” as it was to create the columns.

```php
add_action( 'manage_event_posts_custom_column', 'bs_event_table_content', 10, 2 );

function bs_event_table_content( $column_name, $post_id ) {
    if ( $column_name == 'event_date' ) {
        $event_date = get_post_meta( $post_id, '_bs_meta_event_date', true );
        echo date( _x( 'F d, Y', 'Event date format', 'textdomain' ), strtotime( $event_date ) );
    }
    if ( $column_name == 'ticket_status' ) {
        $status = get_post_meta( $post_id, '_bs_meta_event_ticket_status', true );
        echo $status;
    }
    if ( $column_name == 'venue' ) {
        echo get_post_meta( $post_id, '_bs_meta_event_venue', true );
    }
}
```

As is obvious from the structure of this function, it gets called separately for each column. Because of this, we need to check which column is currently being displayed and then spit out the corresponding data. The data we need for this is stored in the postmeta table:

- The event’s date is stored using the _bs_meta_event_date key.
- The ticket’s status uses the _bs_meta_event_ticket_status key.
- The venue is stored using the _bs_meta_event_venue meta key.

Because these are all postmeta values, we just need to use the get_post_meta() function to retrieve them. With the exception of the date, we can echo these values right away.

This brings us to an important point. You are not restricted to showing individual snippets of data or showing links. Whatever you output will be shown. With sufficient time, you could attach a calendar to the dates, which would be shown on hover. You could create flyout menus that open up on click, and so on.

As you can see, this is much better. The event’s date, ticket status, venue and author can be seen, which makes this table actually informative, rather than just a way to get to edit pages. However, we can do more.

Ordering Columns

Enabling column ordering takes two steps but is fairly straightforward. First, use a filter to specify which of your columns should be sortable by adding it to an array. Then, create a filter for each column to modify the query when a user clicks to sort the column.
```php
add_filter( 'manage_edit-event_sortable_columns', 'bs_event_table_sorting' );

function bs_event_table_sorting( $columns ) {
    $columns['event_date']    = 'event_date';
    $columns['ticket_status'] = 'ticket_status';
    $columns['venue']         = 'venue';

    return $columns;
}

add_filter( 'request', 'bs_event_date_column_orderby' );

function bs_event_date_column_orderby( $vars ) {
    if ( isset( $vars['orderby'] ) && 'event_date' == $vars['orderby'] ) {
        $vars = array_merge( $vars, array(
            'meta_key' => '_bs_meta_event_date',
            'orderby'  => 'meta_value',
        ) );
    }

    return $vars;
}

add_filter( 'request', 'bs_ticket_status_column_orderby' );

function bs_ticket_status_column_orderby( $vars ) {
    if ( isset( $vars['orderby'] ) && 'ticket_status' == $vars['orderby'] ) {
        $vars = array_merge( $vars, array(
            'meta_key' => '_bs_meta_event_ticket_status',
            'orderby'  => 'meta_value',
        ) );
    }

    return $vars;
}

add_filter( 'request', 'bs_venue_column_orderby' );

function bs_venue_column_orderby( $vars ) {
    if ( isset( $vars['orderby'] ) && 'venue' == $vars['orderby'] ) {
        $vars = array_merge( $vars, array(
            'meta_key' => '_bs_meta_event_venue',
            'orderby'  => 'meta_value',
        ) );
    }

    return $vars;
}
```

Here is what’s happening in each of these cases. Whenever posts are listed, an array of arguments is passed that determines what is shown — things like how many to show per page, which post type to display, and so on. WordPress knows how to construct the array of arguments for each of its built-in features. When we say, “order by venue,” WordPress doesn’t know what this means. Results are ordered before they are displayed, not after the fact. Therefore, WordPress needs to know what order to pull posts in before it actually retrieves them. Thus, we tell WordPress which meta_key to filter by and how to treat the values (meta_value for strings, meta_value_num for integers).

As with displaying data, you can go nuts here. You can use all of the arguments that WP_Query takes to perform taxonomy filtering, meta field queries and so on. By adding the code above, we can now click to order based on date, status and venue. We’re almost there. One more thing would help out a lot, especially when dealing with hundreds of events.

Data Filtering

Setting up the filters is analogous to setting up ordering. First, we tell WordPress which controls we want to use. Then, we need to make sure those controls actually do something. Let’s get started.

```php
add_action( 'restrict_manage_posts', 'bs_event_table_filtering' );

function bs_event_table_filtering() {
    global $wpdb;

    $screen = get_current_screen();

    if ( $screen->post_type == 'event' ) {
        $dates = $wpdb->get_results( "
            SELECT EXTRACT(YEAR FROM meta_value) as year, EXTRACT(MONTH FROM meta_value) as month
            FROM $wpdb->postmeta
            WHERE meta_key = '_bs_meta_event_date'
            AND post_id IN ( SELECT ID FROM $wpdb->posts WHERE post_type = 'event' AND post_status != 'trash' )
            GROUP BY year, month
        " );

        echo '<select name="event_date">';
        echo '<option value="">' . __( 'Show all event dates', 'textdomain' ) . '</option>';
        foreach ( $dates as $date ) {
            $month    = ( strlen( $date->month ) == 1 ) ? 0 . $date->month : $date->month;
            $value    = $date->year . '-' . $month . '-' . '01 00:00:00';
            $name     = date( 'F Y', strtotime( $value ) );
            $selected = ( ! empty( $_GET['event_date'] ) AND $_GET['event_date'] == $value ) ? 'selected="selected"' : '';
            echo '<option value="' . $value . '" ' . $selected . '>' . $name . '</option>';
        }
        echo '</select>';

        $ticket_statuses = get_ticket_statuses();

        echo '<select name="ticket_status">';
        echo '<option value="">' . __( 'Show all ticket statuses', 'textdomain' ) . '</option>';
        foreach ( $ticket_statuses as $value => $name ) {
            $selected = ( ! empty( $_GET['ticket_status'] ) AND $_GET['ticket_status'] == $value ) ? 'selected="selected"' : '';
            echo '<option value="' . $value . '" ' . $selected . '>' . $name . '</option>';
        }
        echo '</select>';
    }
}
```

I know, this is a bit scarier! Initially, all we are doing is making sure that we add filters to the right page. As you can see from the hook, this is not specific to the post’s type, so we need to check manually. Once we’re sure that we’re on the events page, we add two controls: a selector for event dates and a selector for ticket statuses.

We have one custom function in there, get_ticket_statuses(), which is used to retrieve a list of ticket statuses. These are all defined by the user, so describing how it works would be overkill. Suffice it to say that it returns an array with the key-value pairs that we need for the selector.

Once this is done, the table will reach its final form. We now have our filters along the top, but they don’t work yet. Let’s fix that, shall we? Filtering data is simply a matter of adding arguments to the query again. This time, instead of ordering our data, we’ll add parameters to narrow down or broaden our returned list of posts.

```php
add_filter( 'parse_query', 'bs_event_table_filter' );

function bs_event_table_filter( $query ) {
    if ( is_admin() AND $query->query['post_type'] == 'event' ) {
        $qv = &$query->query_vars;
        $qv['meta_query'] = array();

        if ( ! empty( $_GET['event_date'] ) ) {
            $start_time = strtotime( $_GET['event_date'] );
            $end_time   = mktime( 0, 0, 0, date( 'n', $start_time ) + 1, date( 'j', $start_time ), date( 'Y', $start_time ) );
            $end_date   = date( 'Y-m-d H:i:s', $end_time );

            $qv['meta_query'][] = array(
                'key'     => '_bs_meta_event_date',
                'value'   => array( $_GET['event_date'], $end_date ),
                'compare' => 'BETWEEN',
                'type'    => 'DATETIME',
            );
        }

        if ( ! empty( $_GET['ticket_status'] ) ) {
            $qv['meta_query'][] = array(
                'key'     => '_bs_meta_event_ticket_status',
                'value'   => $_GET['ticket_status'],
                'compare' => '=',
                'type'    => 'CHAR',
            );
        }

        if ( ! empty( $_GET['orderby'] ) AND $_GET['orderby'] == 'event_date' ) {
            $qv['orderby']  = 'meta_value';
            $qv['meta_key'] = '_bs_meta_event_date';
            $qv['order']    = strtoupper( $_GET['order'] );
        }
    }
}
```

For each filter, we need to add rules to the query. When we’re filtering by ticket status, we need to add a meta_query. This will return only results for which the custom field key is _bs_meta_event_ticket_status and the value is the given ticket’s status. Once this final piece of the puzzle is added, we will have a customized WordPress admin list, complete with filtering, ordering and custom data. Well done!

Overview

Adding custom data to a table is a great way to draw information to the attention of users. Plugin developers can hook their functionality into posts without touching any other functionality, and theme authors can add advanced information about custom post types and other things to relevant places. Showing the right information in the right place can make a huge difference in the salability and likability of any product.

That being said, don’t overuse your newfound power. Don’t add fields just because you can, especially to WordPress’ main tables. Don’t forget that others know about this, too, and many developers of SEO plugins and similar products already add their own columns to posts. If you’re going to add things to the default post types, I suggest including settings to enable and disable them.

If you’ve used these techniques in one of your products or are wondering how to show some tidbit of information in a table, let us know in the comments!

(al, il)

© Daniel Pataki for Smashing Magazine, 2013.

    Speed Up Your Mobile Website With Varnish

Imagine that you have just written a post on your blog, tweeted about it and watched it get retweeted by some popular Twitter users, sending hundreds of people to your blog at once. Your excitement at seeing so many visitors talk about your post turns to dismay as they start to tweet that your website is down — a database connection error is shown.

Or perhaps you have been working hard to generate interest in your startup. One day, out of the blue, a celebrity tweets about how much they love your product. The person’s followers all seem to click at once, and many of them find that the domain isn’t responding, or when they try to sign up for the trial, the page times out. Despite your apologies on Twitter, many of the visitors move on with their day, and you lose much of the momentum of that initial tweet.

These scenarios are fairly common, and I have noticed in my own work that when content becomes popular via social networks, the proportion of mobile devices that access that content is higher than usual, because many people use their mobile devices, rather than desktop applications, to access Twitter and other social networks. Many of these mobile users access the Web via slow data connections and crowded public Wi-Fi. So, anything you can do to ensure that your website loads quickly will benefit those users.

In this article, I’ll show you the Varnish Web application accelerator, a free and simple tool that makes a world of difference when a lot of people land on your website all at once.

Introducing The Magic

For the majority of websites, even those whose content is updated daily, a large number of visitors are served exactly the same content. Images, CSS and JavaScript, which we expect not to change very much — but also content stored in a database using a blogging platform or content management system (CMS) — are often served to visitors in exactly the same way every time.

Visitors coming to a blog from Twitter would likely all be served exactly the same content — including not only images, JavaScript and CSS, but also content that is created with PHP and with queries to the database before being served as a page to the browser. Each request for that blog post would require not only the Web server that serves the file (for example, Apache), but also PHP scripts, a connection to the database, and queries run against database tables.

The number of database connections that can be made and the number of Apache processes that can run are always limited. The greater the number of visitors, the less memory available and the slower each request becomes. Ultimately, users will start to see database connection errors, or the website will just seem to hang, with pages not loading as the server struggles to keep up with demand.

This is where an HTTP cache like Varnish comes in. Instead of requests from browsers directly hitting your Web server, making the server create and serve the pages requested, requests would first hit the cache. If the requested page is in the cache, then it is served directly from memory, never touching Apache or the database. If the page is not in the cache, then the request is handed over to Apache as usual, whereupon Apache will create and serve the page, which is then stored in the cache, ready for the next request. Serving a page from memory is a lot faster than serving it from disk via Apache.
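To make that hit-or-miss flow concrete, here is a toy sketch in plain JavaScript (not Varnish’s own configuration language; the cache object and function names are purely illustrative):

```javascript
// Toy illustration of the cache flow described above (not Varnish code).
// "handleRequest" and "fetchFromBackend" are hypothetical names.
var cache = {};

function handleRequest(url, fetchFromBackend) {
  if (cache[url]) {
    return cache[url]; // hit: served from memory, the back end is never touched
  }
  var response = fetchFromBackend(url); // miss: Apache/PHP/MySQL build the page once
  cache[url] = response;                // store it, ready for the next request
  return response;
}
```

Varnish does this in memory and at the HTTP layer, with rules about what is safe to cache, but the basic decision is the same.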
In addition, the page never needs to touch PHP or the database, leaving those processes free to handle traffic that does require a database connection or some processing. For example, in our second scenario of a startup being mentioned by a celebrity, the majority of people clicking through would check out only a few pages of the website — all of those pages could be in the cache and served from memory. The few who go on to sign up would find that the registration form works well, because the server-side code and database connection are not bogged down by people pouring in from Twitter.

How Does It Work?

The diagram below shows how a blog post might be served when all requests go to the Apache Web server. This example shows five browsers all requesting the same page, which uses PHP and MySQL. Every HTTP request is served by Apache — images, CSS, JavaScript and HTML files. If a file is PHP, then it is parsed by PHP. And if content is required from the database, then a database connection is made, SQL queries are run, and the page is assembled from the returned data before being served to the browser via Apache.

If we place Varnish in front of Apache, we would instead see the following: If the page and assets requested are already cached, then Varnish serves them from memory — Apache, PHP and MySQL would never be touched. If a browser requests something that is not cached, then Varnish hands it over to Apache so that it can do the job detailed above. The key point is that Apache needs to do that job only once, because the result is then stored in memory, and when a second request is made, Varnish can serve it.

The tool has other benefits. In Varnish terminology, when you configure Apache as your Web server, you are configuring a “back end.” Varnish allows you to configure multiple back ends. So, you might want to run two Web servers — for example, using Apache for PHP pages while serving static assets (such as CSS files) from nginx. You can set this up in Varnish, which will pass the request through to the correct server. In this tutorial, we will look at the simplest use case.

I’m Sold! How Do I Get Started?

Varnish is really easy to install and configure. You will need root, or sudo, access to your server to install things on it. Therefore, your website needs to be hosted on a virtual private server (VPS) or the like. You can get a VPS very inexpensively these days, and Varnish is a big reason to choose a VPS over shared hosting.

Some CMSes have plugins that work with Varnish or that integrate it in the control panel — usually to make clearing the cache easier. But you can put Varnish in front of any CMS or any static website, without any particular integration with other systems.

I’ll walk you through installing Varnish, assuming that you already run Apache as a Web server on your system. I run Debian Linux, but packages for other distributions are available. (The paths to files on the system will vary with the Linux distribution.) Before starting, check that Apache is serving your website as expected. If the server is brand new or you are trying out Varnish on a local virtual machine, make sure to configure a virtual host and that you can view a test page on the server using a browser.

Install Varnish

Installation instructions for various platforms are in Varnish’s documentation. I am using Debian Wheezy; so, as root, I followed the instructions for Debian. Once Varnish is installed, you will see the following line in the terminal, telling you that it has started successfully.
```
[ ok ] Starting HTTP accelerator: varnishd.
```

By default, Apache listens for requests on port 80. This is where incoming HTTP requests go. Because we want Varnish to essentially sit in front of Apache, we need to configure Varnish to listen on port 80 and change Apache to a different port — usually 8080. We then tell Varnish where Apache is.

Reconfigure Apache

To change the port that Apache listens on, open the file /etc/apache2/ports.conf as root, and find the following lines:

```
NameVirtualHost *:80
Listen 80
```

Change these lines to this:

```
NameVirtualHost *:8080
Listen 8080
```

If you see the following lines, just change 80 to 8080 in the same way.

```
NameVirtualHost 127.0.0.1:80
Listen 80
```

Save this file and open your default virtual host file, which should be in /etc/apache2/sites-available. In this file, find the following line:

```
<VirtualHost *:80>
```

Change it to this:

```
<VirtualHost *:8080>
```

You will also need to make this change to any other virtual hosts you have set up.

Configure Varnish

Open the file /etc/default/varnish, and scroll down to the uncommented section that starts with DAEMON_OPTS. Edit this so that it looks like the following block, which will make Varnish listen on port 80.

```
DAEMON_OPTS="-a :80 \
-T localhost:1234 \
-f /etc/varnish/default.vcl \
-S /etc/varnish/secret \
-s malloc,256m"
```

Open the file /etc/varnish/default.vcl, and check that the default back end is set to port 8080, because this is where Apache will be now.

```
backend default {
  .host = "127.0.0.1";
  .port = "8080";
}
```

Restart Apache and Varnish as root with the following commands:

```
service apache2 restart
service varnish restart
```

Check that your test website is still available. If it is, then you’ll probably be wondering how to test that it is being served from Varnish. There are a few ways to do this. The simplest is to use cURL. In the command line, type the following:

```
curl http://yoursite.com --head
```

The response should be something like Via: 1.1 varnish.

You can also look at the statistics generated by Varnish. In the command line, type varnishstat, and watch the hit rate increase as you refresh your page in the browser. Varnish refers to something it can serve as a “hit” and something it passes to Apache or another back end as a “miss.”

Another useful tool is varnishtop. Type varnishtop -i txurl in the command line, and refresh your page in the browser. This tool shows you which files are being served by Varnish.

Purging The Cache

Now that pages are being cached, if you change an HTML or CSS file, you won’t see the changes immediately. This trips me up all of the time. I know that a cache is in front of Apache, yet every so often I still have that baffled moment of “Where are my changes?!” Type varnishadm "ban.url ." in the command line to clear the entire cache.

You can also control Varnish over HTTP. Plugins are available, such as Varnish HTTP Purge for WordPress, that you can configure to purge the cache directly from the administration area.

Some Simple Customizations

You’ll probably want to know a few things about how Varnish works by default in order to tweak it. Configuring it as described above should cause most basic assets and pages to be served from the cache, once those assets have been cached in memory. Varnish will only cache things that are safe to do so, and it might not cache some common things that you think it would. A good example is cookies. In its default configuration, Varnish will not cache content if a cookie is set.
So, if your website serves different content to logged-in users, such as personalized content, you wouldn’t want to serve everyone content that is meant for one user. However, you’d probably want to ignore some cookies, such as for analytics. If the website does not serve any personalized content, then the only cookies you would probably care about are those set for your admin area — it would be inconvenient if Varnish cached the admin area and you couldn’t see changes.

Let’s edit /etc/varnish/default.vcl. Assuming your admin area is at /admin, you would add the following:

```
sub vcl_recv {
  if ( !( req.url ~ "^/admin/" ) ) {
    unset req.http.Cookie;
  }
}
```

Some cookies might be important — for example, logged-in users should get uncached content. So, you don’t want to eliminate all cookies. A trip to the land of regular expressions is required to identify the cookies we’ll need. Many recipes for doing this can be found with a quick search online. For analytics cookies, you could add the following.

```
sub vcl_recv {
  // Remove has_js and Google Analytics __* cookies.
  set req.http.Cookie = regsuball(req.http.Cookie, "(^|;\s*)(_[_a-z]+|has_js)=[^;]*", "");
  // Remove a ";" prefix, if present.
  set req.http.Cookie = regsub(req.http.Cookie, "^;\s*", "");
}
```

Varnish has a section in its documentation on “Cookies.” In most cases, configuring Varnish as described above and removing analytics cookies will dramatically speed up your website. Once Varnish is up and running and you are familiar with the logs, you can start to tweak the configuration and get more performance from the cache.

Next Steps

To learn more, go through Varnish’s documentation. You should understand enough of Varnish’s basics by now to try some of the examples. The section on “Achieving a High Hit Rate” is well worth a read for the simple tips on tweaking your configuration. Keep calm and try Varnish to optimize mobile websites.

(al, ea, il)

© Rachel Andrew for Smashing Magazine, 2013.

    Four Ways To Build A Mobile Application, Part 1: Native iOS

The mobile application development landscape is filled with many ways to build a mobile app. Among the most popular are native iOS, native Android, PhoneGap and Appcelerator Titanium. This article marks the start of a series of four articles covering these technologies. The series will provide an overview of how to build a simple mobile application using each of these four approaches. Because few developers have had the opportunity to develop for mobile using a variety of tools, this series is intended to broaden your scope. We’ll start with some background and then dig into iOS.

I’ve built the same simple application with each technology to demonstrate the basic concepts of development and the differences between the platforms and development tools. The purpose of this series is not to convert you to a particular technology, but rather to provide some insight into how applications are created with these various tools, highlighting some of the common terms and concepts in each environment.

FasTip is a simple application to calculate tips. Because this is a simple example, it uses the standard UI controls of each platform. The screenshots above show the application running as native iOS, PhoneGap and native Android applications. Appcelerator Titanium uses native controls, so it looks the same as the native iOS and Android applications.

Our application has two screens: a main screen where the tips are calculated, and a settings screen that enables the user to set a tip percentage. To keep things simple and straightforward, we’ll use the default styles of each environment. The source code for each app is available on GitHub.

Native iOS Development

Most applications in Apple’s App Store are written in the Objective-C programming language, and developers typically use Xcode to develop their applications.

Obtaining the Tools

To build an iOS app, you must use Mac OS X; other operating systems are not supported. The development tools that you’ll need, iOS 7 SDK and Xcode 5, are free of charge, and you can run the app that you build in the iOS simulator, which is part of the iOS SDK. To run your app on a real device and make it available in Apple’s App Store, you must pay $99 per year.

- “About Xcode,” iOS Developer Library, Apple
- “iOS Dev Center,” Apple
- “iOS Developer Program,” Apple

Creating a New Project

Once you have installed Xcode, you’ll want to create a new project. Choose “Create a new Xcode project” from the welcome screen or via File → New Project in the menu. For a simple application such as this one, “Single View” is appropriate. Upon clicking “Next,” you will be presented with a dialog to enter some basic information about your application.

The value that you enter in the “Class Prefix” option tells Xcode to attach that unique prefix to every class that you generate with Xcode. Because Objective-C does not support “namespacing,” as found in Java, attaching a unique prefix to your classes will avoid naming conflicts. The “Devices” setting lets you restrict your application to run only on an iPhone or an iPad; the “universal” option will enable the application to run on both.

Navigation Controllers and View Controllers

The screen functionality of iOS applications is grouped into what are known as view controllers. Our application will have two view controllers: one for the main screen and one for the settings screen. A view controller contains the logic needed to interact with the controls on a screen.
It also interacts with another component called the navigation controller, which in turn provides the mechanism for moving between view controllers. A navigation controller provides the navigation bar, which appears at the top of each screen. The view controllers are pushed onto a stack of views that are managed by the navigation controller as the user moves from screen to screen.

Storyboards: Building the User Experience Visually

Starting with iOS 5, Xcode has had storyboards, which enable developers to quickly lay out a series of view controllers and define the content for each. Here’s our sample application in a storyboard: the container on the left represents the navigation controller, which enables the user to move from screen to screen, and the two objects on the right represent the two screens, or view controllers, that make up our app.

The arrow leading from the main screen to the settings screen is referred to as a segue, and it indicates the transition from screen to screen. A new segue is created by selecting the button in the originating view and then, while the Control key is pressed, dragging the mouse to the destination view controller. Apple’s documentation provides more detail about this process.

In the example above, we can see that a text field has been selected, and the property panel is used to adjust the various attributes of the controls.

When this application was created, the “universal” app option was selected, enabling the app to run on both an iPhone and iPad. As a result, two versions of the storyboard file have been created. When the app is running on an iPhone or iPod Touch, the _iPhone version of the file will be used, and the _iPad version will be used for iPads. This allows a different layout to be used for the iPad’s larger display. The view controller will automatically load the appropriate layout. Keep in mind that if your storyboards expose different sets of controls for the iPad and the iPhone, then you must account for this in the code for your view controller.

In addition to directly positioning items at particular coordinates on the screen, you can also use the Auto Layout system that was introduced in iOS 6. This enables you to define constraints in the relationships between controls in the view. The storyboard editor enables you to create and edit these constraints. The constraints can also be manipulated programmatically. The Auto Layout mechanism is quite sophisticated and a bit daunting to use at first. Apple has an extensive guide on Auto Layout in its documentation.

Associating Storyboards With Your Code

To access the storyboard objects from the code, you must define the relationships between them. Connecting items from the storyboard to your code via Xcode is not obvious if you’re used to other development environments. Before you can do this, you must first create a view controller to hold these associations. This can be done with the following steps:

1. Choose File → New File. In the dialog that appears, choose “Objective-C class.”
2. In the next dialog, give your class a name and ensure that it inherits from UIViewController.
3. Upon clicking “Next,” you’ll be asked to confirm where in the project the file should be saved. For a simple project, picking the main directory of the app is fine.
4. Upon clicking “Next,” you’ll see that a new set of files has been created for your view controller.

Now, associate that newly created view controller with the view controller in your storyboard. With the storyboard open, click on the view controller.
In the “Identity Inspector” panel, pick the “Class” that this view controller is to be associated with. Once this process is completed, the code for your view controller will be properly referenced by the storyboard entry.

To reference the controls that you’ve dragged onto a storyboard from your Objective-C code, you’ll need to define these relationships. The storyboard editor has an “assistant editor” view to help with this. Basically, it’s a split-pane view that shows both the storyboard and your code. In this example, we’ll reference a button that’s already been placed on the storyboard:

1. First, ensure that you’ve completed the steps above to associate the view controller class with the corresponding view controller in the storyboard.
2. Choose the assistant editor by clicking its icon in the toolbar. A split-pane view will open, with the storyboard on the left and your view controller class on the right.
3. Select the button in your storyboard and, while holding down the Control key, drag from the button to the interface area of your code.
4. The resulting dialog will enable you to create an “outlet” for the button in your code. Simply give the button a name, and click the “Connect” button in the dialog. You may now reference the button in the view controller from your code.

Let’s hook up a method to be invoked when a person taps on the button. Select the button again, and use the same Control-and-drag maneuver to drop a reference into the interface section of your view controller. This time, in the dialog box that appears, we’ll associate an “action,” rather than an outlet. Choose “Action” from the “Connection” drop-down menu, and enter a name. For the “Event,” use the default of “Touch Up Inside,” and press the “Connect” button.

Note that your class now has an interface with two entries in it:

```objc
@interface FTSettingsViewController ()
@property (weak, nonatomic) IBOutlet UIButton *myButton;
- (IBAction)tappedMyButton:(id)sender;
@end
```

The IBOutlet item is used to identify anything that you’re referencing from the storyboard, and the IBAction is used to identify actions that come from the storyboard. Notice also that Xcode has an empty method where you can place the code to be run when the user taps on the control:

```objc
- (IBAction)tappedMyButton:(id)sender {
}
```

The process above does take some getting used to and could certainly be made more intuitive. After some practice, it will get less awkward. You might find it useful to bookmark the section of the Xcode documentation that describes how to “Connect User Interface Objects to Your Code.” As we’ll see later, you can also add objects to the view and manipulate their properties programmatically. In fact, applications of even moderate complexity typically perform a lot of manipulation in code. For complex apps, some developers eschew the storyboard and use the code-based alternative almost entirely.

Getting Into the Code

For even the most basic of applications to function, some code must be written. So far in the storyboard, we’ve laid out our user interface and the interactions between the view controllers. But no code has been written to perform the calculations, to persist the settings of the tip percentage and so on. That is all done by you, the developer, in Objective-C. When an application is running, its overall lifecycle is handled by something called an “application delegate.” Various methods in this delegate are called when key events in the application’s lifecycle occur.
These events could be any of the following:

- The application is started.
- The application is moved to the background.
- The application is brought to the foreground.
- The application is about to be terminated.
- A push notification arrives.

The events above are handled in a file called AppDelegate. For our sample application, the default handling of these events is just fine; we don’t need to take any special action. The documentation has an overview of the application’s lifecycle and of responding to changes in an app’s state.

The next area of attention is the view controller. Just as with the application delegate, the view controller has its own lifecycle. The view controller’s lifecycle includes methods that are invoked when the following happens:

- The view controller has been loaded.
- The view controller is about to appear or has appeared on the screen.
- The view controller is about to disappear or has disappeared from the screen.
- The bounds of the view have changed (for example, because the device has been rotated) and the view will be laid out again.

The main code for our application is in the FTViewController.m file. Here is the first bit of code that initializes our screen:

```objc
- (void)viewWillAppear:(BOOL)animated {
    // Restore any default tip percentage if available
    NSUserDefaults *defaults = [NSUserDefaults standardUserDefaults];
    float tipPercentage = [defaults floatForKey:@"tipPercentage"];
    if (tipPercentage > 0) {
        _tipPercentage = tipPercentage;
    } else {
        _tipPercentage = 15.0;
    }
    self.tipAmountLabel.text = [NSString stringWithFormat:@"%0.2f%%", _tipPercentage];
}
```

In this application, we want to use whatever tip percentage value was stored in the past. To do this, we can use NSUserDefaults, which is a persistent data store to hold settings and preferences for an application. Keep in mind that these values are not encrypted in any way, so this is not the best place to store sensitive data, such as passwords. A Keychain API is provided in the iOS SDK to store such data. In the code above, we’re attempting to retrieve the tipPercentage setting. If that’s not found, we’ll just default to 15%.

When the user taps the “Calculate Tip” button, the following code is run:

```objc
- (IBAction)didTapCalculate:(id)sender {
    float checkAmount, tipAmount, totalAmount;

    if (self.checkAmountTextField.text.length > 0) {
        checkAmount = [self.checkAmountTextField.text floatValue];
        tipAmount   = checkAmount * (_tipPercentage / 100);
        totalAmount = checkAmount + tipAmount;

        self.tipAmountLabel.text   = [NSString stringWithFormat:@"$%0.2f", tipAmount];
        self.totalAmountLabel.text = [NSString stringWithFormat:@"$%0.2f", totalAmount];
    }

    [self.checkAmountTextField resignFirstResponder];
}
```

We’re simply reading the value that the user has inputted in the “Amount” field and then calculating the tip’s value. Note how the stringWithFormat method is used to display the tipAmount value as a currency value.

When the user taps the “Settings” button in the navigation controller, the segue that we established in the storyboard will push the settings view controller onto the stack. A separate view controller file, FTSettingsViewController, will now handle the interactions on this screen.
Pressing the “Done” button on this screen will run the following code:

```objc
- (IBAction)didTapDone:(id)sender {
    float tipPercentage;
    tipPercentage = [self.tipPercentageTextField.text floatValue];
    if (tipPercentage > 0) {
        NSUserDefaults *defaults = [NSUserDefaults standardUserDefaults];
        [defaults setFloat:tipPercentage forKey:@"tipPercentage"];
        [defaults synchronize];
        [[self navigationController] popViewControllerAnimated:YES];
    } else {
        [[[UIAlertView alloc] initWithTitle:@"Invalid input"
                                    message:@"Percentage must be a decimal value"
                                   delegate:nil
                          cancelButtonTitle:@"ok"
                          otherButtonTitles:nil] show];
    }
}
```

Here we’re retrieving the value from the text field and making sure that the inputted value is greater than 0. If it is, then we use NSUserDefaults to persist the setting. Calling the synchronize method is what will actually save the values to storage. After we’ve saved the value, we use the popViewControllerAnimated method on the navigation controller to remove the settings view and return to the prior screen. Note that if the user does not fill in the percentage correctly, then they will be shown the standard iOS UIAlertView dialog and will remain on the settings screen.

In the section above on view controllers and storyboards, I mentioned that the controls in a view can be manipulated programmatically. While that was not necessary for our application, the following is a snippet of code that creates a button and adds it to a particular location on the screen:

```objc
CGRect buttonRect = CGRectMake(100, 75, 150, 80);
UIButton *myButton = [UIButton buttonWithType:UIButtonTypeRoundedRect];
myButton.frame = buttonRect;
[myButton setTitle:@"Click me!" forState:UIControlStateNormal];
[self.view addSubview:myButton];
```

Generally speaking, all of the controls that you place in a view extend from an ancestor class named UIView. As such, buttons, labels, text-input fields and so on are all UIViews. One instance of a UIView is in the view controller. This can be referenced in your view controller’s code as self.view. The iOS SDK positions items in a view based on a frame, also referred to as a CGRect, which is a structure that contains the x and y coordinates of the item, as well as the width and height of the object. Note in the code above that the button is instantiated and assigned a frame (location and size) and then added to the view controller’s view.

Running and Debugging an iOS Application

When Xcode and the iOS SDK are installed, so is the iOS simulator, which simulates an iOS device directly on your machine. Xcode has a drop-down menu that allows you to select different device configurations. Pressing the “Run” button in the upper-left corner will build the app and then run it in the chosen simulator. Using this menu, you can switch between iPhones and iPads of different sizes, as well as between Retina and non-Retina versions of each device.

Debugging is done simply by clicking in the left margin of the code editor, where the line numbers appear. When the execution of your app reaches the breakpoint, the app will stop, and the variable values in effect at that moment will appear below the code editor.

Some things, such as push notifications, cannot readily be tested in the simulator. For these things, you will need to test on a device, which requires you to register as an Apple developer for $99 a year. Once you have joined, you can plug in your device with a USB cable. Xcode will prompt you for your credentials and will offer to “provision” the device for you.
Once the device is recognized, it will be shown in the same menu that allows you to switch between device simulators. In Xcode, by going to Window → Organizer in the menu, you can display a tool that enables you to manage all of the devices visible in Xcode and to examine crash logs and more. The Organizer window also lets you take and export screenshots of your application.

Summary

Thus far, we’ve seen the basics of developing a simple native iOS application. Most applications are more complex than this, but these are the basic building blocks:

- Xcode: the development environment
- Storyboards: for laying out and configuring the user interface
- View controllers: provide the basic logic for interacting with each of the views defined in the storyboards
- Navigation controllers: enable the user to navigate between the different views

Learning Resources

To get started with iOS development, you might want to consult these useful resources:

- iOS Programming: The Big Nerd Ranch Guide, Joe Conway and Aaron Hillegass. This book is excellent. It guides you through both Objective-C and iOS development and will hold your interest with some nice functional examples.
- Objective-C Programming: The Big Nerd Ranch Guide, Aaron Hillegass. This book provides more detailed information about Objective-C, should you wish to delve into the language’s features, beyond what is covered in iOS Programming.
- “Coding Together: Developing iOS 6 Apps for iPhone and iPad,” Stanford University. This series of videos is available through iTunes U.
- “WWDC 2013 Session Videos,” Apple. After each Worldwide Developers Conference, Apple publishes videos of all of the sessions. iOS developers at every level will find something here.
- “iOS 7 Design Resources,” Apple. The documentation for iOS is quite good, and Apple has many well-written guides on key features of the iOS SDK.
- Ray Wenderlich: Tutorials for iPhone / iOS Developers and Gamers. Ray provides a great series of tutorials, and new content is added regularly. Premium tutorials are also available at cost.

This concludes the first segment of our tour of app development. I hope it has given you some insight into the basic concepts behind native app development on iOS. The next article in this series will cover how to build the same application using native Android development tools.

(al, ea)

© Peter Traeg for Smashing Magazine, 2013.

    An Introduction To Full-Stack JavaScript

Nowadays, with any Web app you build, you have dozens of architectural decisions to make. And you want to make the right ones: You want to use technologies that allow for rapid development, constant iteration, maximal efficiency, speed, robustness and more. You want to be lean and you want to be agile. You want to use technologies that will help you succeed in the short and long term. And those technologies are not always easy to pick out.

In my experience, full-stack JavaScript hits all the marks. You’ve probably seen it around; perhaps you’ve considered its usefulness and even debated it with friends. But have you tried it yourself? In this post, I’ll give you an overview of why full-stack JavaScript might be right for you and how it works its magic. I’ll introduce its components piece by piece. But first, a short note on how we got to where we are today.

Why I Use JavaScript

I’ve been a Web developer since 1998. Back then, we used Perl for most of our server-side development; but even since then, we’ve had JavaScript on the client side. Web server technologies have changed immensely since then: We went through wave after wave of languages and technologies, such as PHP, ASP, JSP, .NET, Ruby, Python, just to name a few. Developers began to realize that using two different languages for the client and server environments complicates things.

In the early era of PHP and ASP, when template engines were just an idea, developers embedded application code in their HTML. Seeing embedded scripts like this was not uncommon:

```php
<?php if ($login == true){ ?>
  alert("Welcome");
<?php } ?>
```

Or, even worse:

```php
var users_deleted = [];
<?php
  $arr_ids = array(1,2,3,4);
  foreach($arr_ids as $value){
?>
  users_deleted.push("<?php echo $value; ?>");
<?php } ?>
```

For starters, there were the typical errors and confusing statements between languages, such as for and foreach. Furthermore, writing code like this on the server and on the client to handle the same data structure is uncomfortable even today (unless, of course, you have a development team with engineers dedicated to the front end and engineers for the back end — but even if they can share information, they wouldn’t be able to collaborate on each other’s code):

```javascript
$.ajax({
  url: "/json.php",
  success: function(data){
    var x;
    for (x in data) {
      alert("fruit:" + x + " points:" + data[x]);
    }
  }
});
```

The initial attempts to unify under a single language were to create client components on the server and compile them to JavaScript. This didn’t work as expected, and most of those projects failed (for example, ASP MVC replacing ASP.NET Web forms, and GWT arguably being replaced in the near future by Polymer). But the idea was great, in essence: a single language on the client and the server, enabling us to reuse components and resources (and this is the keyword: resources).

The answer was simple: Put JavaScript on the server. JavaScript was actually born server-side in Netscape Enterprise Server, but the language simply wasn’t ready at the time. After years of trial and error, Node.js finally emerged, which not only put JavaScript on the server, but also promoted the idea of non-blocking programming, bringing it from the world of nginx, thanks to the Node creator’s nginx background, and (wisely) keeping it simple, thanks to JavaScript’s event-loop nature.
(In a sentence, non-blocking programming aims to put time-consuming tasks off to the side, usually by specifying what should be done when these tasks are completed, and allowing the processor to handle other requests in the meantime.)

Node.js changed the way we handle I/O access forever. As Web developers, we were used to the following lines when accessing databases (I/O):

```javascript
var resultset = db.query("SELECT * FROM 'table'");
drawTable(resultset);
```

This line essentially blocks your code, because your program stops running until your database driver has a resultset to return. In the meantime, your platform’s infrastructure provides the means for concurrency, usually using threads and forks.

With Node.js and non-blocking programming, we’re given more control over program flow. Now (even if you still have parallel execution hidden by your database (I/O) driver), you can define what the program should do in the meantime and what it will do when you receive the resultset:

```javascript
db.query("SELECT * FROM 'table'", function(resultset){
  drawTable(resultset);
});
doSomeThingElse();
```

With this snippet, we’ve defined two program flows: The first handles our actions just after sending the database query, while the second handles our actions just after we receive our resultset using a simple callback. This is an elegant and powerful way to manage concurrency. As they say, “Everything runs in parallel — except your code.” Thus, your code will be easy to write, read, understand and maintain, all without your losing control over program flow.

These ideas weren’t new at the time — so, why did they become so popular with Node.js? Simple: Non-blocking programming can be achieved in several ways. Perhaps the easiest is to use callbacks and an event loop. In most languages, that’s not an easy task: While callbacks are a common feature in some other languages, an event loop is not, and you’ll often find yourself grappling with external libraries (for example, Python with Tornado). But in JavaScript, callbacks are built into the language, as is the event loop, and almost every programmer who has even dabbled in JavaScript is familiar with them (or at least has used them, even if they don’t quite understand what the event loop is). Suddenly, every startup on Earth could reuse developers (i.e. resources) on both the client and server side, solving the “Python Guru Needed” job posting problem.

So, now we have an incredibly fast platform (thanks to non-blocking programming), with a programming language that’s incredibly easy to use (thanks to JavaScript). But is it enough? Will it last? I’m sure JavaScript will have an important place in the future. Let me tell you why.

Functional Programming

JavaScript was the first programming language to bring the functional paradigm to the masses (of course, Lisp came first, but most programmers have never built a production-ready application using it). Lisp and Self, JavaScript’s main influences, are full of innovative ideas that can free our minds to explore new techniques, patterns and paradigms. And they all carry over to JavaScript. Take a look at monads, Church numerals or even (for a more practical example) Underscore’s collections functions, which can save you lines and lines of code.

Dynamic Objects and Prototypal Inheritance

Object-oriented programming without classes (and without endless hierarchies of classes) allows for fast development — just create objects, add methods and use them, as in the sketch below.
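As a minimal sketch of that style (the object and property names here are invented for the example):

```javascript
// Create an object, add behavior, and inherit from it; no class declarations needed.
var band = {
  greet: function () {
    return "Hello from " + this.name;
  }
};

// rockBand inherits from band via its prototype and is extended on the fly.
var rockBand = Object.create(band);
rockBand.name = "The Prototypes";
rockBand.play = function () {
  return this.name + " is playing";
};

console.log(rockBand.greet()); // "Hello from The Prototypes"
console.log(rockBand.play());  // "The Prototypes is playing"
```

Any instance can be reshaped at runtime in the same way, which is what the next point relies on.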
More importantly, it reduces refactoring time during maintenance tasks by enabling the programmer to modify instances of objects, instead of classes. This speed and flexibility pave the way for rapid development.

JavaScript Is the Internet

JavaScript was designed for the Internet. It’s been here since the beginning, and it’s not going away. All attempts to destroy it have failed; recall, for instance, the downfall of Java applets, VBScript’s replacement by Microsoft’s TypeScript (which compiles to JavaScript), and Flash’s demise at the hands of the mobile market and HTML5. Replacing JavaScript without breaking millions of Web pages is impossible, so our goal going forward should be to improve it. And no one is better suited for the job than Technical Committee 39 of ECMA.

Sure, alternatives to JavaScript are born every day, like CoffeeScript, TypeScript and the millions of languages that compile to JavaScript. These alternatives might be useful for development stages (via source maps), but they will fail to supplant JavaScript in the long run for two reasons: Their communities will never be bigger, and their best features will be adopted by ECMAScript (i.e. JavaScript). JavaScript is not an assembly language: It’s a high-level programming language with source code that you can understand — so, you should understand it.

End-to-End JavaScript: Node.js And MongoDB

We’ve covered the reasons to use JavaScript. Next, we’ll look at JavaScript as a reason to use Node.js and MongoDB.

Node.js

Node.js is a platform for building fast and scalable network applications — that’s pretty much what the Node.js website says. But Node.js is more than that: It’s the hottest JavaScript runtime environment around right now, used by a ton of applications and libraries — even browser libraries are now running on Node.js. More importantly, this fast server-side execution allows developers to focus on more complex problems, such as Natural for natural language processing. Even if you don’t plan to write your main server application with Node.js, you can use tools built on top of Node.js to improve your development process; for example, Bower for front-end package management, Mocha for unit testing, Grunt for automated build tasks and even Brackets for full-text code editing.

So, if you’re going to write JavaScript applications for the server or the client, you should become familiar with Node.js, because you will need it daily. Some interesting alternatives exist, but none have even 10% of Node.js’ community.

MongoDB

MongoDB is a NoSQL document-based database that uses JavaScript as its query language (but is not written in JavaScript), thus completing our end-to-end JavaScript platform. But that’s not even the main reason to choose this database. MongoDB is schema-less, enabling you to persist objects in a flexible way and, thus, adapt quickly to changes in requirements. Plus, it’s highly scalable and based on map-reduce, making it suitable for big data applications. MongoDB is so flexible that it can be used as a schema-less document database, a relational data store (although it lacks transactions, which can only be emulated) and even as a key-value store for caching responses, like Memcached and Redis.

Server Componentization With Express

Server-side componentization is never easy. But with Express (and Connect) came the idea of “middleware.” In my opinion, middleware is the best way to define components on the server. If you want to compare it to a known pattern, it’s pretty close to pipes and filters.
The basic idea is that your component is part of a pipeline. The pipeline processes a request (i.e. the input) and generates a response (i.e. the output), but your component isn’t responsible for the entire response. Instead, it modifies only what it needs to and then delegates to the next piece in the pipeline. When the last piece of the pipeline finishes processing, the response is sent back to the client. We refer to these pieces of the pipeline as middleware. Clearly, we can create two kinds of middleware:

- Intermediates: An intermediate processes the request and the response but is not fully responsible for the response itself and so delegates to the next middleware.
- Finals: A final has full responsibility over the final response. It processes and modifies the request and the response but doesn’t need to delegate to the next middleware. In practice, delegating to the next middleware anyway will allow for architectural flexibility (i.e. for adding more middleware later), even if that middleware doesn’t exist (in which case, the response would go straight to the client).

As a concrete example, consider a “user manager” component on the server, sketched in code below. In terms of middleware, we’d have both finals and intermediates. For our finals, we’d have such features as creating a user and listing users. But before we can perform those actions, we need our intermediates for authentication (because we don’t want unauthenticated requests coming in and creating users). Once we’ve created these authentication intermediates, we can just plug them in anywhere that we want to turn a previously unauthenticated feature into an authenticated feature.
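Here is a minimal sketch of that user-manager idea using Express. The authentication check and the sample data are invented for illustration; the (req, res, next) middleware signature and the routing calls are standard Express.

```javascript
var express = require("express");
var app = express();

// Intermediate middleware: not responsible for the response;
// it either delegates with next() or rejects the request.
function authenticate(req, res, next) {
  if (req.headers["x-api-key"] === "our-secret-key") { // hypothetical check
    return next(); // hand the request to the next piece of the pipeline
  }
  res.status(401).send("Unauthorized");
}

// Final middleware: fully responsible for the response.
function listUsers(req, res) {
  res.json([ { name: "Ada" }, { name: "Grace" } ]); // stand-in data
}

// Plug the intermediate in front of any feature that needs authentication.
app.get("/users", authenticate, listUsers);

app.listen(3000);
```

Because the intermediate delegates with next(), adding more middleware later (logging, rate limiting) does not require touching the finals.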
Depending on the size of your application, the decision could be as easy as looking at the “features used” to “features available” ratio, which will give you a big hint. Styling is a challenge as well, but again, we can count on frameworks to bail us out. For CSS, Twitter Bootstrap is a good choice because it offers a complete set of styles that are both ready to use out of the box and easy to customize. Bootstrap was created in the LESS language, and it’s open source, so we can modify it if need be. It comes with a ton of UX controls that are well documented. Plus, a customization model enables you to create your own. It is definitely the right tool for the job. Best Practices: Grunt, Mocha, Chai, RequireJS and CoverJS Finally, we should define some best practices, as well as mention how to implement and maintain them. Typically, my solution centers on several tools, which themselves are based on Node.js. Mocha and Chai These tools enable you to improve your development process by applying test-driven development (TDD) or behavior-driven development (BDD), creating the infrastructure to organize your unit tests and a runner to automatically run them. Plenty of unit test frameworks exist for JavaScript. Why use Mocha? The short answer is that it’s flexible and complete. The long answer is that it has two important features (interfaces and reporters) and one significant absence (assertions). Allow me to explain: Interfaces Maybe you’re used to TDD concepts of suites and unit tests, or perhaps you prefer BDD ideas of behavior specifications with describe and should. Mocha lets you use both approaches. Reporters Running your tests will generate reports of the results, and you can format these results using various reporters. For example, if you need to feed a continuous integration server, you’ll find a reporter to do just that. Lack of an assertion library Far from being a problem, Mocha was designed to let you use the assertion library of your choice, giving you even more flexibility. You have plenty of options, and this is where Chai comes into play. Chai is a flexible assertion library that lets you use any of the three major assertion styles: assert This is the classic assertion style from old-school TDD. For example: assert.equal(variable, "value"); expect This chainable assertion style is most commonly used in BDD. For example: expect(variable).to.equal("value"); should This is also used in BDD, but I prefer expect because should often sounds repetitive (i.e. with the behavior specification of “it (should do something…)”). For example: variable.should.equal("value"); Chai combines perfectly with Mocha. Using just these two libraries, you can write your tests in TDD, BDD or any style imaginable. Grunt Grunt enables you to automate build tasks, anything from simple copying and concatenation of files to template precompilation, style language (i.e. Sass and LESS) compilation, unit testing (with Mocha), linting and code minification (for example, with UglifyJS or Closure Compiler). You can add your own automated tasks to Grunt or search the registry, where hundreds of plugins are available (once again, using a tool with a great community behind it pays off). Grunt can also monitor your files and trigger actions when any are modified. RequireJS RequireJS might sound like just another way to load modules with the AMD API, but I assure you that it is much more than that.
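Before digging into RequireJS, here is how Mocha and Chai fit together in practice. A minimal BDD-style sketch; the Validator under test is hypothetical:

var chai = require('chai'),
    expect = chai.expect;

describe('Validator', function() {
    it('accepts purely alphabetical names', function() {
        var validator = new Validator();
        expect(validator.checkName('Ada')).to.equal(true);
    });

    it('rejects names containing digits', function() {
        var validator = new Validator();
        expect(validator.checkName('Ada99')).to.equal(false);
    });
});

Running mocha from the command line picks up these specs and prints the results with whichever reporter you have configured.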
With RequireJS, you can define dependencies and hierarchies on your modules and let the RequireJS library load them for you. It also provides an easy way to avoid global variable space pollution by defining all of your modules inside functions. This makes the modules reusable, unlike namespaced modules. Think about it: If you define a module like Demoapp.helloWorldModule and you want to port it to Firstapp.helloWorldModule, then you would need to change every reference to the Demoapp namespace in order to make it portable. RequireJS will also help you embrace the dependency injection pattern. Suppose you have a component that needs an instance of the main application object (a singleton). Using RequireJS, you realize that you shouldn’t use a global variable to store it, and that you can’t have an instance as a RequireJS dependency. So, instead, you need to require this dependency in your module constructor. Let’s see an example. In main.js:

define(["App", "module"], function(App, Module){
    var app = new App();
    var module = new Module({ app: app });
    return app;
});

In module.js:

define([], function(){
    var module = function(options){
        this.app = options.app;
    };
    module.prototype.useApp = function(){
        this.app.performAction();
    };
    return module;
});

Note that we cannot define the module with a dependency on main.js without creating a circular reference. CoverJS Code coverage is a metric for evaluating your tests. As the name implies, it tells you how much of your code is covered by your current test suite. CoverJS measures your tests’ code coverage by instrumenting statements (instead of lines of code, like JSCoverage) in your code and generating an instrumented version of the code. It can also generate reports to feed your continuous integration server. Conclusion Full-stack JavaScript isn’t the answer to every problem. But its community and technology will carry you a long way. With JavaScript, you can create scalable, maintainable applications, unified under a single language. There’s no doubt, it’s a force to be reckoned with. (al, ea) © Alejandro Hernandez for Smashing Magazine, 2013.

  • Smashing Magazine
    The Future Of Video In Web Design

       Federico was the only other kid on the block with a dedicated ISDN line, so I gave him a call. It had taken six hours of interminable waiting (peppered with frantic bouts of cursing), but I had just watched 60 choppy seconds of the original Macintosh TV commercial in Firefox, and I had to tell someone. It blew my mind. Video on the Web has improved quite a bit since that first jittery low-res commercial I watched on my Quadra 605 back in 7th grade. But for the most part, videos are still separate from the Web, cordoned off by iframes and Flash and bottled up in little windows in the center of the page. They’re a missed opportunity for Web designers everywhere. But how do you integrate video into an app or a marketing page? What would it look like, and how do you implement it? In this article, you will find inspiration, how-tos and a few technical goodies to get you started with modern video on the Web. When Video Leaves Its Cage Video combined with animation is a powerful tool for innovative and compelling user experiences. Imagine interactive screencasts and tutorials in which DOM elements flow and move around the page in sync with the instructor. Why not combine video with animation to walk new users through your app? Or what about including videos of your product on your marketing page, instead of static JPEGs? Getting carried away is easy — video can become little more than sophisticated blink tags if you’re not careful. But there are plenty of beautiful, inspiring examples of video tightly integrated in a design. Apple’s new marketing page for the Mac Pro is a stunning example of video reaching out from its cage into the surrounding content. The new Mac Pro is featured in the center of the page, and as you scroll, it swoops and spins and disassembles itself. Supporting copy fades in to describe what you are seeing. A static screenshot of the new landing page doesn’t do the new Mac Pro justice. (larger view) Another great example of interactive video is Adrian Holovaty’s Soundslice. Soundslice is filled with YouTube videos of music sliced and diced into tablature (or tabs), which is notation that guitar players use to learn music. The musical bars at the bottom stay in sync with the video. (larger view) When you watch a music video, the tabs are animated at the bottom in time with the music, so that you can play along with your guitar. You can even slow down the video or loop selections to practice difficult sections, and the tab animation will stay in sync. How Do You Add Video To A Design? If you venture into video and animation in your next project, you won’t have many resources to lean on for implementation. No canonical, easy-to-use, open-source library for syncing video with animation exists, so every implementation is a bit different. Should you use a JavaScript animation library or pure CSS keyframes and transitions? Should you host the videos yourself and take advantage of HTML5’s video tag events or use YouTube or Vimeo? And then how exactly do you tie animations to a video? Together, we will explore answers to the above-mentioned questions and more as we build our own micro JavaScript framework. Charlie.js provides an easy-to-use API for building pages with synchronized video and CSS3 animation. Charlie.js, named in honor of Charlie Chaplin. (Image source) The best way to learn is by doing, so let’s dive in. What Does Charlie.js Do? We need a way to create animations and then trigger them at certain moments in a video. 
We also need to pause the animations if the video stops, and we’ll need a way to handle the user jumping around to different times in the video. To limit the scope of this article, we’ll have Charlie.js use only CSS animations. JavaScript animation libraries are more flexible and powerful than CSS animations, but wrapping one’s head around the straightforward, declarative syntax of keyframes is pretty easy, and the effects are hardware-accelerated. Sticking with only CSS animations is a pretty good choice for small projects. To keep it simple, Charlie.js will support only one video per page. As we go through the exercise of building this library, remember that we’re using the framework just to learn about CSS animation and video on the Web. The goal is to learn, not to create production-quality code. Define The API For our little framework, defining the API first makes sense. In other words, we need to figure out how someone would use the library and then write the JavaScript to implement the API. A video and animation library could work in many ways, but the main interface puzzle is to figure out how to couple the animation to the video. How should a developer specify which animations should appear on which elements and at which times they should start in the video? One option is to suck down the data in JSON or XML. The opposite solution is to have no separate data files and to put all of the configuration into pure JavaScript function calls. Both are fine, but there is a middle road. Normally, CSS animation is defined in a style sheet. Ideally, that’s where it should be defined for Charlie.js, not in a JSON file. It just makes sense, and doing it this way has advantages. When the animation is in a style sheet, rather than a JavaScript or JSON file, you can test it without loading the entire video and animation library. The animations are coupled to an element with data attributes. The data attributes define the animation names and also specify the start times. Let’s say you have a CSS animation named fade for dialing down the opacity, and another named fling for moving elements off the page. And you want a div on the page to use both animations three seconds into the video. Your markup would look like this:

<div class="charlie" data-animations="fade, fling" data-times="3, 3">…</div>

Charlie.js will see this and know to run the fade and fling animations once the video hits three seconds. The fade and fling animations are defined in a style sheet that is linked to the document. Here is what the fade animation might look like (browser prefixes are excluded here but are required for Chrome and Safari): .fade { animation-name: fade; animation-duration: 3s; animation-timing-function: linear; animation-iteration-count: 1; animation-direction: normal; animation-fill-mode: forwards; } @keyframes fade { 0% { opacity: 1; } 100% { opacity: 0; } } The .fade class is what Charlie.js applies to the animated element, which will trigger the fade animation. Host The Videos: HTML5 Vs. Flash And Silverlight With the API out of the way, the next decision is how to host the video. The host will determine what kind of container the video is stuffed into, and the container will determine what is possible with the video. Video embedded with Flash or Silverlight will limit your design options, so the video-hosting service should ideally support HTML5’s video tag. The video tag is easier to style and move around on the page. You can apply CSS filters and transforms and even use CSS animation on the video itself.
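Because the video element is an ordinary DOM node, you can already do quite a lot to it from JavaScript. A small sketch; the selector and values are made up:

var video = document.querySelector('video');

// CSS transforms and filters apply to video like any other element
// (prefixed here for the WebKit browsers of the time).
video.style.webkitTransform = 'rotate(2deg) scale(1.05)';
video.style.webkitFilter = 'grayscale(0.4)';

// Playback itself is scriptable, too.
video.playbackRate = 0.5; // slow motion
video.play();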
Plus, the standard media events are robust and provide plenty of places and ways to hook your code into the video. The big downside of the video tag is compatibility. It doesn’t work in Internet Explorer 8. What kinds of video-hosting should Charlie.js support? Building a library that supports multiple hosting options is feasible. For example, Popcorn.js (an awesome library for syncing content with video) supports several hosting options and APIs. But to keep it simple, our little library will support only a vanilla video tag. Anything in an iframe or Flash container won’t be supported. That’s nice for Charlie.js, but what if you are stuck supporting old browsers and have to deal with a video stuffed in an iframe? Most video-hosting companies have decent APIs. At the very least, you should be able to use those APIs to sync up your animation — you’ll just be stuck working with an embedded Flash object. YouTube and Vimeo are the most popular services, and both offer extensive APIs. Wistia is another great option but less well known. If you want to use a pure video tag but don’t want to host the video yourself, take a look at Vid.ly. Once you upload your video, Vid.ly will encode it in every format you need and give you a universal URL that you can use in your video tag, which will automatically choose the correct video type according to the user agent. Heads Up The JavaScript in most of these snippets uses Underscore; stuff like _.forEach and _.toArray are utility functions from that library. Underscore encourages a functional style of programming that might look strange if you’ve never seen it before, but a little time invested in learning Underscore can save you a lot of time and lines of code. It’s worth checking out. For this article, you’ll find comments in the code to tell you what’s going on, and it should be pretty easy to understand. One other caveat: The code here will run in most modern browsers, but no attempt has been made to make this completely cross-browser compatible. If your business really needs CSS animation to be synced with video and to run in almost every browser, then this library will not help you out. But for my business, and perhaps for yours, supporting only modern browsers is fine. And even with this restriction, plenty of material here is still worth learning. Control CSS Animations With JavaScript JavaScript is the glue between video and CSS animation. There is no way to couple an animation to a video purely with CSS. Animation doesn’t start until a style is applied, and CSS gives you only so many ways to trigger extra styles (such as :hover). In order to sync animation to video, we will need to pause, stop, resume, skip to the middle, and even reverse running animations. All of this is possible with JavaScript. So, the first step is to get the CSS animation out of the style sheet and into JavaScript. Every CSS animation has two parts. The first part is the keyframe and the properties used to configure how the animation behaves, such as duration, iteration and direction. The second part is what triggers the animation. Charlie.js will need to find both parts in the style sheets. The first thing we need is a utility function to search through style sheets that are loaded on the page. findRules = function(matches){ //document.stylesheets is not an array by default. // It's a StyleSheetList. toArray converts it to an actual array.
var styleSheets = _.toArray(document.styleSheets), rules = []; // forEach iterates through a list, in this case passing //every sheet in styleSheets to the next forEach _.forEach(styleSheets, function(sheet){ //This foreach iterates through each rule in the style sheet //and checks if it passes the matches function. _.forEach(_.toArray(sheet.cssRules), function(rule){ if (matches(rule)){ rules.push(rule); } }); }); return rules; } The findRules function iterates through every rule of every style sheet and returns a list of rules that match the passed-in matches function. To get all of the keyframe rules, we pass in a function to findRules that checks whether the rule is a keyframe: // A little code to handle prefixed properties var KEYFRAMES_RULE = window.CSSRule.KEYFRAMES_RULE || window.CSSRule.WEBKIT_KEYFRAMES_RULE || window.CSSRule.MOZ_KEYFRAMES_RULE || window.CSSRule.O_KEYFRAMES_RULE || window.CSSRule.MS_KEYFRAMES_RULE, ... var keyframeRules = findRules(function(rule){ return KEYFRAMES_RULE === rule.type; }), ... At this point, we have the keyframes in JavaScript, but we still need the rest of the animation styles that define duration, iterations, direction and so on. To find all of these classes, we again use the findRules function to go through every rule in every style sheet. This time, though, the matches function that we’ll pass in will check to see whether the rule has an animationName property. ... var animationStyleRules = findRules(function(rule){ return rule.style && rule.style[animationName(rule.style)]; }); ... The animationName function is there to handle the prefixes, because the animationName property still requires prefixes in some browsers. That function looks like this: ... if (style.animationName) { name = "animationName"; } else if (style.webkitAnimationName) { name = "webkitAnimationName"; } else if (style.mozAnimationName) { name = "mozAnimationName"; } else if (style.oAnimationName) { name = "oAnimationName"; } else if (style.msAnimationName) { name = "msAnimationName"; } else { name = ""; } return name; ... Once the correct prefix has been determined, the name is cached and used for future look-ups. Once the keyframes and animation styles have been collected, they get stuffed into an instance of a helper class and stored for Charlie.js to use later. var CSSAnimations = function(keyframes, cssRules){ this.keyframes = keyframes; this.cssRules = cssRules; }; Get The Timing Information From The Data Attributes Timing information is attached to the element that will be animated using a data attribute (remember that we decided this when we were defining the API). So, we need to crawl the document and pull out the information. Any element that will be animated is marked with the class of charlie, which makes it pretty easy to find the data attributes we are looking for. var scrapeAnimationData = function() { /* Grab the data from the DOM. */ var data = {}; _.forEach( //loop through every element that should be animated document.getElementsByClassName("charlie"), //for each element, pull off the info from the dataset function(element) { /* * Creates an object of animation name: time, e.g. * * { swoopy: [ * { element: domElement, * time: 6522 }, * { element: anotherElement, * time: 7834 }] * } */ // var names = element.dataset.animations.split(/\s*,\s*/), times = element.dataset.times.split(/\s*,\s*/), // creates an array of arrays, each one called a "tuple" // basically ties the time to the // animation name, so it looks like this: //[["zippy", 1], ["fade", 2] ...
] tuples = _.zip(names, times); /* * turn the tuples into an object, * which is a little easier to work with. * We end up with an object that looks like this: * { * fade: [ {element: domElement, time: "1.2s"}, ... ], * fling: [ {element: domelement, time: "2.4s"}, ... ] * } * So, we can reuse an animation on different elements * at different times. */ _.forEach(tuples, function(tuple){ var name = tuple[0], time = tuple[1]; data[name] = data[name] || []; data[name].push({ element: element, time: time }) }); }); return data; }, This stores all of the timing information in an object with the animation’s name as the key, followed by a list of times and elements. This object is used to create several Animation objects, which are then stuffed into various data structures to make it easy and fast to look up which animations should be running in the big loop. The requestAnimationFrame Loop The heart of Charlie.js is a loop that runs whenever the video runs. The loop is created with requestAnimationFrame. tick: function(time){ if (this.running){ this.frameID = requestAnimationFrame(this.tick.bind(this)); this.controller.startAnimations(time, video.currentTime); } } The requestAnimationFrame function is specifically designed to optimize any kind of animation, such as DOM manipulations, painting to the canvas, and WebGL. It’s a tighter loop than anything you can get with setTimeout, and it’s calibrated to bundle animation steps into a single reflow, thus performing better. It’s also better for battery usage and will completely stop running when the user switches tabs. The loop starts when the video starts and stops when the video stops. Charlie.js also needs to know whether the video ends or jumps to the middle somewhere. Each of those events requires a slightly different response. video.addEventListener("play", this.start.bind(this), false); video.addEventListener("ended", this.ended.bind(this), false); video.addEventListener("pause", this.stop.bind(this), false); video.addEventListener("seeked", this.seeked.bind(this), false); As the video plays, the loop keeps ticking. Each tick runs this code: // allow precision to one tenth of a second var seconds = roundTime(videoTime), me = this; //resume any paused animations me.resumeAnimations(); /* start up any animations that should be running at this second. * Don't start any that are already running */ if (me.bySeconds[seconds]){ var animations = me.bySeconds[seconds], notRunning = _.filter(animations, function(animation){ return !_.contains(me.running, animation); }); /* requestAnimationFrame happens more than * every tenth of a second, so this code will run * multiple times for each animation starting time */ _.forEach(notRunning, function(animation){ animation.start(); me.running.push(animation); }); } Everything we have done up to this point has been to support these few lines of code. The seconds variable is just the video.currentTime value rounded to the nearest tenth of a second. The bySeconds property is created from the time data that is scraped from the HTML — it’s just a quick way to grab a list of animations to start at a given time. The running array is a list of animations that are currently running. The requestAnimationFrame loop is really fast and runs many, many times a second, and Charlie.js only supports a resolution of one tenth of a second. So, if one animation starts at the 2-second mark, then requestAnimationFrame will try to start it several times until the video has progressed to the next tenth of a second. 
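The roundTime helper referenced in that snippet is not shown in the article’s excerpts. An implementation consistent with the description, an assumption rather than Charlie.js source, could be as small as this:

var roundTime = function(time) {
    // 2.468 -> 2.5: clamp video.currentTime to tenth-of-a-second resolution
    return Math.round(time * 10) / 10;
};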
To prevent animations from starting over and over again during that tenth of a second, they get put into the running array so that we know what is running and don’t start it again unnecessarily. To start a CSS animation, just add the animation properties to an element’s style. The easiest way to do this is to just add the animation class to the element’s classList, and that is exactly what the animation’s start method does. start: function(){ var me = this; //The name of the animation is the same as the class name by convention. me.element.classList.add(me.name); onAnimationEnd(me.element, function(){ me.reset(); }); } Pause And Resume Animations When the video stops, the animations should stop with it. There is a pretty straightforward way to do this using CSS animations: We just set the animationPlayState property of the element to paused. ... //method on the animation object pause: function(){ this.element.style.webkitAnimationPlayState = "paused"; this.element.style.mozAnimationPlayState = "paused"; this.element.style.oAnimationPlayState = "paused"; this.element.style.animationPlayState = "paused"; }, resume: function(){ this.element.style.webkitAnimationPlayState = "running"; this.element.style.mozAnimationPlayState = "running"; this.element.style.oAnimationPlayState = "running"; this.element.style.animationPlayState = "running"; } ... //called on the video "pause" event while(animation = me.running.pop()){ animation.pause(); //keep track of paused animations so we can resume them later ... me.paused.push(animation); } The only trick here is to keep track of which animations have been paused, so that they can resume once the video starts again, like so: while (animation = me.paused.pop()){ animation.resume(); me.running.push(animation); } How To Start An Animation In The Middle What if someone skips ahead in the video and jumps right into the middle of an animation? How do you start a CSS animation in the middle? The animationDelay property is exactly what we need. Normally, animationDelay is set to a positive number. If you want an animation to start three seconds after the animation style has been applied, then you’d set animationDelay to 3s. But if you set animationDelay to a negative number, then it will jump to the middle of the animation. For example, if an animation lasts three seconds, and you want the animation to start two seconds in, then set the animationDelay property to -2s. Whenever a user scrubs to the middle of the video, Charlie.js will need to stop all of the animations that are currently running, figure out what should be running, and then set the appropriate animationDelay values. Here is the high-level code: // 1. go through each to start // 2. set the animation delay so that it starts at the right spot // 3. start 'em up. var me = this, seconds = roundTime(videoTime), toStart = animationsToStart(me, seconds); // go through each animation to start _.forEach(toStart, function(animation){ //set the delay to start the animation at the right place setDelay(animation, seconds); //start it up animation.start(); /* If the video is playing right now, then let the animation * keep playing. Otherwise, pause the animation until * the video resumes. */ if (playNow) { me.running.push(animation); } else { me.paused.push(animation); animation.pause(); } }); The animationsToStart function loops through a sorted list of animations and looks for anything that should be running.
If the end time is greater than the current time and the start time is less than the current time, then the animation should be started. var animationsToStart = function(me, seconds) { var toStart = []; for (var i = 0; i < me.timeModel.length; i++) { var animation = me.timeModel[i]; if (animation.startsAt > seconds) { break; } if (animation.endsAt > seconds) { toStart.push(animation); } } return toStart; }; The timeModel is a list of animations sorted by the times when the animations should end. This code loops through that list and looks for animations that start before the current time and end after the current time. The toStart array represents all of the animations that should be running right now. Those values get passed up to the higher-level code, which then computes and sets the delay in the setDelay function. setDelay = function(animation, seconds) { var delay = -(seconds - animation.startsAt); delay = delay < 0 ? delay : 0; animation.element.style.webkitAnimationDelay = delay + "s"; animation.element.style.animationDelay = delay + "s"; }; The seconds parameter is the current time in the video. Let’s say that the video is at 30 seconds, that the animation starts at 24 seconds and that it lasts for 10 seconds. If we set the delay to -6s, then it will start the animation 6 seconds in and will last another 4 seconds. Look At The Code For Yourself We’ve covered here how to use requestAnimationFrame to create a tight, optimized loop for animations, how to scrape keyframes and animation styles from the style sheet, how to start and stop animations with the video, and even how to start CSS animations in the middle. But to get to the point, we’ve skipped over quite a bit of glue code. Charlie.js is only a couple of hundred lines of code, and it is open source and commented thoroughly. You are welcome to grab the code and read it. You can even use it if you want, with a few caveats:

Charlie.js was made for educational purposes. It was made to be read and for you to learn about CSS animations, videos, requestAnimationFrame, etc. Don’t just plug it into your production code unless you really know what you are doing.
Cross-browser support for animation is pretty good, and Charlie.js tries to be friendly to all the browsers when it can be. However, it hasn’t been heavily tested.
It eats up CPU, even if the video is paused. (This has something to do with CSS animations still rendering.)
The user can’t drag the seek bar while the video is unpaused. If they do, then the animations will start and overlap each other.
Charlie.js does not respond to changes in frame rate. So, if the video stutters or you want to slow down the rate of the video, then the animations will fall out of sync. Also, you can’t run video backwards.
Animations won’t start if the current tab isn’t set to the video, due to requestAnimationFrame not running unless the video tab is active. This could confuse users who switch back and forth between tabs.

Some of these limitations can be fixed pretty easily, but Charlie.js was made for a very limited use case. I’ve put together a demo of Charlie.js in action so that you can see what it can do. The future of video in Web design is filled with possibilities, and I for one can’t wait to see what happens. Additional Resources A demo of Charlie.js See what you can do with video and CSS3 animation.
“CSS3 Animation,” Can I Use…
“How Does the New Mac Pro Site Work,” Sean Fioritto
“Syncing Content With HTML5 Video,” Christian Heilmann, Smashing Magazine
“Controlling CSS Animations and Transitions With JavaScript,” CSS-Tricks
“Adrian Holovaty Talks Soundslice” (video), 37signals
“100 Riffs: A Brief History of Rock n’ Roll,” Soundslice (an amazing demonstration of Soundslice)
“HTML5 Video With Filters and SVG” (video), idibidiart
“requestAnimationFrame for Smart Animating,” Paul Irish
(al, ea, il) © Sean Fioritto for Smashing Magazine, 2013.

  • Smashing Magazine
    An In-Depth Introduction To Ember.js

   With the release of Ember.js 1.0, it’s just about time to consider giving it a try. This article aims to introduce Ember.js to newcomers who want to learn about this framework. Users often say that the learning curve is steep, but once you’ve overcome the difficulties, then Ember.js is tremendous. This happened to me as well. While the official guides are more accurate and up to date than ever (for real!), this post is my attempt to make things even smoother for beginners. First, we will clarify the main concepts of the framework. Next, we’ll go in depth with a step-by-step tutorial that teaches you how to build a simple Web app with Ember.js and Ember-Data, which is Ember’s data storage layer. Then, we will see how views and components help with handling user interactions. Finally, we will dig a little more into Ember-Data and template precompiling. Ember’s famous little mascot, Tomster. (Image credits) The unstyled demo below will help you follow each step of the tutorial. The enhanced demo is basically the same but with a lot more CSS and animations and a fully responsive UX when displayed on small screens. Unstyled demo Source code Enhanced demo

Table of Contents

Definitions of main concepts
Let’s build a simple CRUD
  Sketch our app
  What you’ll need to get started
  Our files directory structure
  Precompile templates or not?
  Set up the model with Ember-Data’s FixtureAdapter
  Instantiate the router
  The application template
  The users route
  Object vs. array controller
  Displaying the number of users
  Computed properties
  Redirecting from the index page
  Single user route
  Edit a user
  Our first action
  TransitionTo or TransitionToRoute?
  Saving user modifications
  Delete a user
  Create a user
  Format data with helpers
  Format data with bound helpers
  Switch to the LocalStorage adapter
Playing with views
  jQuery and the didInsertElement
  Side panel components with className bindings
  Modals with layout and event bubbling
What is Ember-Data
  The store
  Adapters
  What about not using Ember-Data?
What is Handlebars template precompiling?
  Template naming conventions
  Precompiling with Grunt
  Precompiling with Rails
Conclusion
Tools, tips and resources
Acknowledgments

Definitions Of Main Concepts The diagram below illustrates how routes, controllers, views, templates and models interact with each other. Let’s define these concepts. And if you’d like to learn more, check the relevant sections of the official guides: Models, The Router, Controllers, Views, Components, Templates and Helpers. Models Suppose our application handles a collection of users. Well, those users and their information would be the model. Think of them as the database data. Models may be retrieved and updated by implementing AJAX callbacks inside your routes, or you can rely on Ember-Data (a data-storage abstraction layer) to greatly simplify the retrieval, updating and persistence of models over a REST API. The Router There is the Router, and then there are routes. The Router is just a synopsis of all of your routes. Routes are the URL representations of your application’s objects (for example, a posts route will render a collection of posts). The goal of routes is to query the model, from their model hook, to make it available in the controller and in the template. Routes can also be used to set properties in controllers, to execute events and actions, and to connect a particular template to a particular controller.
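In code, the Router and a route’s model hook look roughly like this. A sketch; the users resource is hypothetical, and the store call assumes the Ember-Data layer described later:

App.Router.map(function() {
    this.resource('users');
});

App.UsersRoute = Ember.Route.extend({
    model: function() {
        // query the model here; whatever is returned becomes
        // available to the controller and the template
        return this.store.find('user');
    }
});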
Last but not least, the model hook can return promises so that you can implement a LoadingRoute, which will wait for the model to resolve asynchronously over the network. Controllers At first, a controller gets a model from a route. Then, it makes the bridge between the model and the view or template. Let’s say you need a convenient method or function for switching from editing mode to normal mode. Methods such as goIntoEditMode() and closeEditMode() would be perfect, and that’s exactly what controllers can be used for. Controllers are auto-generated by Ember.js if you don’t declare them. For example, you can create a user template with a UserRoute; and, if you don’t create a UserController (because you have nothing special to do with it), then Ember.js will generate one for you internally (in memory). The Ember Inspector extension for Chrome can help you track those magic controllers. Views Views represent particular parts of your application (the visual parts that the user can see in the browser). A View is associated with a Controller, a Handlebars template and a Route. The difference between views and templates can be tricky. You will find yourself dealing with views when you want to handle events or handle some custom interactions that are impossible to manage from templates. They have a very convenient didInsertElement hook, through which you can play with jQuery very easily. Furthermore, they become extremely useful when you need to build reusable views, such as modals, popovers, date-pickers and autocomplete fields. Components A Component is a completely isolated View that has no access to the surrounding context. It’s a great way to build reusable components for your apps. A Twitter button, a custom select box and reusable charts are all great examples of components. In fact, they’re such a great idea that the W3C is actually working with the Ember team on a custom element specification. Templates Simply put, a template is the view’s HTML markup. It prints the model data and automatically updates itself when the model changes. Ember.js uses Handlebars, a lightweight templating engine also maintained by the Ember team. It has the usual templating logic, like if and else, loops and formatting helpers, that kind of stuff. Templates may be precompiled (if you want to cleanly organize them as separate .hbs or .handlebars files) or directly written in <script> tags in your HTML page. Jump to the section on template precompiling to dig into the subject. Helpers Handlebars helpers are functions that modify data before it is rendered on the screen — for example, to format dates better than Mon Jul 29 2013 13:37:39 GMT+0200 (CEST). In your template, the date could be written as {{date}}. Let’s say you have a formatDate helper (which converts dates into something more elegant, like “One month ago” or “29 July 2013”). In this case, you could use it like so: {{formatDate date}}. Components? Helpers? Views? HELP! The Ember.js forum has an answer and StackOverflow has a response that should alleviate your headache. Let’s Build An App In this section, we’ll build a real app, a simple interface for managing a group of users (a CRUD app).
Here’s what we’ll do: look at the architecture we’re aiming for; get you started with the dependencies, file structure, etc.; set up the model with Ember-Data’s FixtureAdapter; see how routes, controllers, views and templates interact with each other; finally, replace the FixtureAdapter with the LSAdapter to persist data in the browser’s local storage. Sketch Our App We need a basic view to render a group of users (see 1 below). We need a single-user view to see its data (2). We need to be able to edit and delete a given user’s data (3). Finally, we need a way to create a new user; for this, we will reuse the edit form. Ember.js strongly relies on naming conventions. So, if you want the page /foo in your app, you will have the following: a foo template, a FooRoute, a FooController, and a FooView. Learn more about Ember’s naming conventions in the guides. What You’ll Need to Get Started You will need: jQuery, Ember.js itself (obviously), Handlebars (i.e. Ember’s templating engine), Ember-Data (i.e. Ember’s data-persistence abstraction layer).

/* /index.html */
…
<script src="js/libs/jquery.js"></script>
<script src="js/libs/handlebars.js"></script>
<script src="js/libs/ember.js"></script>
<script src="js/libs/ember-data.js"></script>
<script>
  // your code
</script>
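As a preview of the first tutorial step, the // your code placeholder above begins with the application object and a model backed by Ember-Data’s FixtureAdapter. A sketch; the attributes and fixture values are made up:

App = Ember.Application.create();

// Develop against in-memory fixtures first; swap the adapter later.
App.ApplicationAdapter = DS.FixtureAdapter;

App.User = DS.Model.extend({
    name: DS.attr('string'),
    email: DS.attr('string')
});

App.User.FIXTURES = [
    { id: 1, name: 'Sponge Bob', email: 'bob@bikinibottom.sea' },
    { id: 2, name: 'John David', email: 'john@david.com' }
];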

  • Smashing Magazine
    Killer Responsive Layouts With CSS Regions

   As Web designers, we are largely constrained by the layout features available to us. Content placed inside a container will often naturally extend the container vertically, wrapping the content. If a design requires elements to remain a certain height, then our options are limited. In these cases, we can only add a scroll bar or hide the overflow. The CSS Regions specification provides a new option. Support Regions are a new part of the CSS specification, so not all browsers have implemented them, and in some cases you might have to enable a flag to use them. They have recently gained support in iOS 7 and Safari 6.1+ (which includes Safari 7). Adobe maintains a list of supported browsers and instructions on enabling regions and other features. However, support for regions is constantly growing. For a robust list of which browsers have implemented regions and the various features available, see Adobe’s “CSS Regions Support” page. Regions 101 CSS regions enable us to disperse content across multiple containing elements. They provide a flow, which consists of content that may appear within multiple elements, and a region chain, which is the collection of elements the flow is spread across. Once these elements have been defined, the flow dynamically fills the elements in the region chain. We can then size our containers vertically without worrying about the content getting cut off, because it simply overflows into the next element in the chain. This creates new opportunities for layout with responsive design. To use regions, start by creating a named flow; simply add the CSS property flow-into to your content element, with the value of your flow’s name. Then, for each region through which you want the content to flow, apply the CSS property flow-from with the same flow name value. The content element will then flow through the region elements. Current implementations in browsers require the property to be prefixed, but we are using the unprefixed version here. #myContent { flow-into: myNamedFlow; } .myRegion { flow-from: myNamedFlow; } Your HTML would contain a content element and the scaffolding of all of the regions that this content will flow through. When you use regions, the content element will not be visible in its original location and any HTML already in your region elements will disappear, replaced by the content being flowed into them. Because of this, we can have placeholder or fallback content within our region elements.

<div id="myContent">All of the content that will flow through the regions…</div>
<div class="myRegion">Fallback content</div>
<div class="myRegion"></div>

When using regions, the content being flowed is not a child of the region elements. You are only changing where the content is displayed. According to the DOM, everything remains the same, so the content does not inherit styles from the region in which it lives. Instead, the specification defines a CSS pseudo-selector, ::region(), which allows you to style the content within a region. Apply the pseudo-element to the region’s selector and then pass a selector as an argument, specifying the elements that will be styled within a particular region. .myRegion::region(p){ /*styles for all the paragraphs flowing inside our regions*/ } Responsive Design With Regions Responsive design is the technique of creating malleable layouts that stretch and change according to the given context. Frequently, designers will make elements flexible with percentages and media queries to adapt a layout to different screen sizes. Responsive design adapts content to every screen without requiring the designer to completely overhaul the design or code.
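Given that uneven support, it is worth feature-detecting regions before building a layout on them. A hedged sketch; it assumes the CSSOM property names (flowInto, plus vendor-prefixed variants) exposed by the implementations of the time:

function supportsRegions() {
    var style = document.documentElement.style;
    return ('flowInto' in style) ||
           ('webkitFlowInto' in style) ||
           ('msFlowInto' in style);
}

if (!supportsRegions()) {
    // fall back to a single-container layout
    document.documentElement.className += ' no-regions';
}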
Regions facilitate responsive design in several ways. First, you no longer have to rely on height: auto for every element to ensure content fits. Instead, you can allow the content to flow into different elements within the layout. This means that the content does not dictate the layout, but rather adapts to the intended design. You can still use height: auto on the last region in the chain to ensure that it extends to display all remaining content. You can see this technique in the CodePen example below. See the Pen Region Auto Height by CJ Gammon (@cjgammon) on CodePen. Regions And Events You can use JavaScript events with regions to manage your layout and to ensure that content is displayed properly. The regions specification defines events that you can use to respond to certain conditions. The regionoversetchange event is dispatched when the regionOverset property changes for any region. This can occur when a user resizes the page, stretching out the container element so that the content no longer flows into certain regions. The value of regionOverset is either fit, overset or empty. A value of empty specifies no content inside the region. The regionOverset property is set to overset when the last region in the chain is unable to display all of the remaining content, making some of the content unreadable. The fit value sets content to fit within the region properly, either completely (if earlier in the chain) or partially (if it is the last region in the chain). How you respond to these events will depend on the design, content and other aspects of your layout. These events could be used to dynamically add or remove regions or to apply a class that changes the layout. You can see an example of the former technique in the CodePen below. Note: Some implementations call the event regionlayoutupdate, instead of regionoversetchange, based on an earlier version of the specification. See the Pen okmGu by CJ Gammon (@cjgammon) on CodePen. Regions And Media Queries Regions are defined entirely in CSS, making them easy to use in combination with media queries. In addition to resizing and positioning elements, you can completely change which elements are defined as regions. You can also set a region to display: none, which will cause it to be skipped entirely in the region chain. This capability makes it easy to remove particular regions from a layout without worrying about the continuity of the content. You can also use this technique to display whole new templates with completely different layouts, without ever changing the content. Regions And Break Properties Regions also extend break properties from the multi-column layout specification, which you can use to define how content breaks within your regions. You can apply these properties to elements within the flow either to always break or to avoid breaking a region relative to the element. Using the value region for break-before or break-after will always force a region to break before or after the element, respectively. The value avoid-region can be used for break-before, break-after or break-inside to prevent regions from breaking before, after or inside the element. This technique is useful for keeping related elements grouped together and for preventing important elements from being split. The demo below shows images along the right column and long descriptive text flowing along the left. 
If you reduce the width of your browser, then media queries will change the layout, causing the images to redistribute over the narrower single-column structure. Applying break-after: region to the image containers ensures that a new region break will occur after each image in the image flow. Note: Some implementations use non-standard regions-specific break properties with a region prefix; for example, region-break-before or, with a vendor prefix, -webkit-region-break-before. The break-after property is applied to regions with media queries. Regions And Viewport Units Viewport units enable you to use the window (or viewport) as the basis for sizing elements, which creates a consistent aspect ratio and harmony in the layout. You can simulate pages or blocks that break up the content cohesively. A potential pitfall of this approach is that, if you use the aspect ratio of the device to size containers, defining both the width and the height, then your content might no longer fit inside the containers. You could, however, use regions to break up the content while respecting the variable-sized elements across different screen sizes. You can see this technique being applied in the “Demo for National Geographic Orphan Elephants.” On this website, images and text are alternated to maintain the height of the viewport. We use regions to flow the content through all of the text sections, and we adjust them when the user shrinks the screen. Regions being used with viewport units. Notice how the image fits the window exactly. (Large view) The typical navigation paradigm for magazines and books on a tablet is pagination — i.e. enabling the user to swipe or tap to page through the content. As a designer, you want these pages to respond to a variety of screen sizes. Regions are particularly useful for this kind of layout, because you can size columns using viewport units and create a variety of different layouts that enable content to flow across the columns. An example of this done in HTML is shown in the video below: The Kindle Cloud Reader website has a similar two-page spread but uses JavaScript to manage the layout. Implementing this kind of layout in JavaScript requires significant development overhead, and manipulating the DOM so heavily will usually incur a performance penalty. You can use regions to bring these capabilities natively to the browser, increasing the website’s performance while reducing development time. Debugging When working with regions, it’s helpful to have tools to easily manage and debug various features. In Chrome Developer Tools, you can enable debugging features specific to regions. Detailed instructions on enabling these tools can be found in Christian Cantrell’s post “Web Inspector Support for CSS Regions.” With these features, you can find all of the named flows in a document, find the content and region chain associated with each named flow, and get visual cues for whether content fits in a region based on the regionOverset property. Webkit Nightly also has some helpful visual cues. When you open the Web Inspector and inspect a region’s container, you will see a region number and links between the region containers showing the flow of the content. Webkit Nightly allows you to inspect region containers, showing their number and the flow chain. Further Reading Regions open up many new opportunities for designing responsively and ensuring that content looks great at any size. 
One responsive website whose unique layout was created with regions is Adobe’s demo for a bike company, created with Edge Reflow. Follow @adobeweb for the latest updates on regions and other new Web features. Also, be sure to check out Adobe’s CodePen collection, which shows regions in use; you may want to fork one or more of the examples to explore different ways to use regions. For more on regions, visit Adobe’s Web Platform Team Blog, which often provides updates about the specification and implementations. Full details can be found in the CSS Regions specification, which outlines all of the topics covered here and more. You can also find more information and examples in the “Regions” section of Adobe & HTML. Front page image credits: Adobe & HTML (al, il) © CJ Gammon for Smashing Magazine, 2013.

  • Smashing Magazine
    Get Up And Running With Grunt

   In this article, we’ll explore how to use Grunt in a project to speed up and change the way you develop websites. We’ll look briefly at what Grunt can do, before jumping into how to set up and use its various plugins to do all of the heavy lifting in a project. We’ll then look at how to build a simple input validator, using Sass as a preprocessor, how to use grunt-cssc and CssMin to combine and minify our CSS, how to use HTMLHint to make sure our HTML is written correctly, and how to build our compressed assets on the fly. Lastly, we’ll look at using UglifyJS to reduce the size of our JavaScript and ensure that our website uses as little bandwidth as possible. Grunt.js is a JavaScript task runner that helps you perform repetitive tasks such as minification, compilation, unit testing or linting. Getting Started With Grunt Most developers would agree that the speed and pace of JavaScript development over the last few years has been pretty astounding. Whether with frameworks such as Backbone.js and Ember.js or with communities such as JS Bin, the development of this language is changing not only the way we experience websites as users but also the way we build them. When you are working with JavaScript, you will likely need to execute multiple tasks regularly. While this is pretty much a given in most projects, it’s a time-consuming and repetitive way to work. Being in such an active community, you would assume that tools are available to automate and speed up this process. This is where Grunt comes in. What Is Grunt? Built on top of Node.js, Grunt is a task-based command-line tool that speeds up workflows by reducing the effort required to prepare assets for production. It does this by wrapping up jobs into tasks that are compiled automatically as you go along. Basically, you can use Grunt on most tasks that you consider to be grunt work and would normally have to manually configure and run yourself. While earlier versions came bundled with plugins like JSHint and Uglify, the most recent release (version 0.4) relies on plugins for everything. What kind of tasks? Well, the list is exhaustive. Suffice it to say, Grunt can handle most things you throw at it, from minifying to concatenating JavaScript. It can also be used for a range of tasks unrelated to JavaScript, such as compiling CSS from LESS and Sass. We’ve even used it with blink(1) to notify us when a build fails. Why Use Grunt? One of the best things about Grunt is the consistency it brings to teams. If you work collaboratively, you’ll know how frustrating inconsistency in the code can be. Grunt enables teams to work with a unified set of commands, thus ensuring that everyone on the team is writing code to the same standard. After all, nothing is more frustrating than a build that fails because of little inconsistencies in how a team of developers writes code. Grunt also has an incredibly active community of developers, with new plugins being released regularly. The barrier to entry is relatively low because a vast range of tools and automated tasks are already available to use. Setting Up The first thing to do in order to use Grunt is to set up Node.js. (If you know nothing about Node.js, don’t worry — it merely needs to be installed in order for Grunt to be able to run.)
Once Node.js is installed, run this command: $ npm install -g grunt-cli To make sure Grunt has been properly installed, you can run the following command: $ grunt --version The next step is to create a package.json and a gruntfile.js file in the root directory of your project. Creating the package.json File The JSON file enables us to track and install all of our development dependencies. Then, anyone who works on the project will have the most current dependencies, which ultimately helps to keep the development environments in sync. Create a file in the root of your project that contains the following: { "name" : "SampleGrunt", "version" : "0.1.0", "author" : "Brandon Random", "private" : true, "devDependencies" : { "grunt" : "~0.4.0" } } Once you have done this, run the following command: $ npm install This tells npm which dependencies to install and places them in a node_modules folder. Creating the gruntfile.js File Gruntfile.js is essentially made up of a wrapper function that takes grunt as an argument. module.exports = function(grunt){ grunt.initConfig({ pkg: grunt.file.readJSON('package.json') }); grunt.registerTask('default', []); }; You are now set up to run Grunt from the command line at the root of your project. But if you do so at this stage, you will get the following warning: $ grunt > Task "default" not found. Use --force to continue. We’d get this because we haven’t specified any tasks or dependencies yet other than Grunt. So, let’s do that. But first, let’s look at how to extend the package.json file. Extending the package.json File The best thing about working with Node.js is that it can find packages and install them in one go, simply based on the contents of the package file. To install all of the new dependencies, just add this to the file: { "name" : "SampleGrunt", "version" : "0.1.0", "author" : "Mike Cunsolo", "private" : true, "devDependencies" : { "grunt" : "~0.4.0", "grunt-contrib-cssmin": "*", "grunt-contrib-sass": "*", "grunt-contrib-uglify": "*", "grunt-contrib-watch": "*", "grunt-cssc": "*", "grunt-htmlhint": "*", "matchdep": "*" } } And to complete the process? You guessed it: $ npm install Loading npm Tasks In Grunt Now that the packages have been installed, they have to be loaded in Grunt before we can do anything with them. We can load all of the tasks automatically with a single line of code, using the matchdep dependency. This is a boon for development because now the dependency list will be included only in the package file. At the top of gruntfile.js, above grunt.initConfig, paste this: require("matchdep").filterDev("grunt-*").forEach(grunt.loadNpmTasks); Without matchdep, we would have to write grunt.loadNpmTasks("grunt-task-name"); for each dependency, which would quickly add up as we find and install other plugins. Because the plugins are loaded into Grunt, we may start specifying options. 
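Putting the pieces together, gruntfile.js at this point looks like the sketch below; the config object is still nearly empty, and plugin options will be added to it step by step:

module.exports = function(grunt) {
    "use strict";

    // Load every grunt-* dependency from package.json automatically.
    require("matchdep").filterDev("grunt-*").forEach(grunt.loadNpmTasks);

    grunt.initConfig({
        pkg: grunt.file.readJSON('package.json')
        // plugin configurations will go here
    });

    grunt.registerTask('default', []);
};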
First off is the HTML file (index.html), which contains the following:

<label for="firstname">Enter your first name</label>
<input id="firstname" name="firstname" type="text" placeholder="Enter your first name">
<p id="namevalidation" class="validation"></p>

Validating With HTMLHint Add this configuration to grunt.initConfig: htmlhint: { build: { options: { 'tag-pair': true, 'tagname-lowercase': true, 'attr-lowercase': true, 'attr-value-double-quotes': true, 'doctype-first': true, 'spec-char-escape': true, 'id-unique': true, 'head-script-disabled': true, 'style-disabled': true }, src: ['index.html'] } } A plugin is typically configured like this: the plugin’s name (without the grunt-contrib-/grunt- prefix), then one or more targets of your choosing (which can be used to create custom options for the plugin for different files), an options object, and the files it affects. Now, when we run grunt htmlhint from the terminal, it will check through the source file and make sure that our HTML has no errors! However, manually typing this command several times an hour would get tedious pretty quickly. Automate Tasks That Run Every Time A File Is Saved The watch task can run a unique set of tasks according to the file being saved, using targets. Add this configuration to grunt.initConfig: watch: { html: { files: ['index.html'], tasks: ['htmlhint'] } } Then, run grunt watch in the terminal. Now, try adding a comment to index.html. You’ll notice that when the file is saved, validation is automatic! This is a boon for development because it means that watch will silently validate as you write code, and it will fail if the code hasn’t passed the relevant tests (and it will tell you what the problem is). Note that grunt watch will keep running until the terminal is closed or until it is stopped (Control + C on a Mac). Keeping The JavaScript As Lean As Possible Let’s set up a JavaScript file to validate a user’s name. To keep this as simple as possible, we’ll check only for non-alphabetical characters. We’ll also use the strict mode of JavaScript, which prevents us from writing valid but poor-quality JavaScript. Paste the following into assets/js/base.js: function Validator() { "use strict"; } Validator.prototype.checkName = function(name) { "use strict"; return (/[^a-z]/i.test(name) === false); }; window.addEventListener('load', function(){ "use strict"; document.getElementById('firstname').addEventListener('blur', function(){ var _this = this; var validator = new Validator(); var validation = document.getElementById('namevalidation'); if (validator.checkName(_this.value) === true) { validation.innerHTML = 'Looks good! :)'; validation.className = "validation yep"; _this.className = "yep"; } else { validation.innerHTML = 'Looks bad! :('; validation.className = "validation nope"; _this.className = "nope"; } }); }); Let’s use UglifyJS to minify this source file. Add this to grunt.initConfig: uglify: { build: { files: { 'build/js/base.min.js': ['assets/js/base.js'] } } } UglifyJS compresses all of the variable and function names in our source file to take up as little space as possible, and then trims out white space and comments — extremely useful for production JavaScript. Again, we have to set up a watch task to build our Uglify’ed JavaScript. Add this to the watch configuration: watch: { js: { files: ['assets/js/base.js'], tasks: ['uglify'] } } Building CSS From Sass Source Files Sass is incredibly useful for working with CSS, especially on a team. Less code is usually written in the source file because Sass can generate large CSS code blocks with such things as functions and variables.
Building CSS From Sass Source Files Sass is incredibly useful for working with CSS, especially on a team. Less code is usually written in the source file because Sass can generate large CSS code blocks with such things as functions and variables. Walking through Sass itself is a little beyond the scope of this article; so, if you are not comfortable with learning a preprocessor at this stage, you can skip this section. But we will cover a very simple use case, using variables, one mixin and the Sassy CSS (SCSS) syntax, which is very similar to CSS! Grunt’s Sass plugin requires the Sass gem. You will need to install Ruby on your system (it comes preinstalled on OS X). You can check whether Ruby is installed with this terminal command: ruby -v Install Sass by running the following: gem install sass Depending on your configuration, you might need to run this command via sudo — i.e. sudo gem install sass — at which point you will be asked for your password. When Sass is installed, create a new directory named sass inside the assets directory. Create a new file named master.scss in this directory, and paste the following in it: @mixin prefix($property, $value, $prefixes: webkit moz ms o spec) { @each $p in $prefixes { @if $p == spec { #{$property}: $value; } @else { -#{$p}-#{$property}: $value; } } } $input_field: #999; $input_focus: #559ab9; $validation_passed: #8aba56; $validation_failed: #ba5656; $bg_colour: #f4f4f4; $box_colour: #fff; $border_style: 1px solid; $border_radius: 4px; html { background: $bg_colour; } body { width: 720px; padding: 40px; margin: 80px auto; background: $box_colour; box-shadow: 0 1px 3px rgba(0, 0, 0, .1); border-radius: $border_radius; font-family: sans-serif; } input[type="text"] { @include prefix(appearance, none, webkit moz); @include prefix(transition, border .3s ease); border-radius: $border_radius; border: $border_style $input_field; width: 220px; } input[type="text"]:focus { border-color: $input_focus; outline: 0; } label, input[type="text"], .validation { line-height: 1; font-size: 1em; padding: 10px; display: inline; margin-right: 20px; } input.yep { border-color: $validation_passed; } input.nope { border-color: $validation_failed; } p.yep { color: $validation_passed; } p.nope { color: $validation_failed; } You will notice that the SCSS syntax looks a lot more like CSS than conventional Sass. This style sheet makes use of two Sass features: mixins and variables. A mixin constructs a block of CSS based on some parameters passed to it, much like a function would, and variables allow common fragments of CSS to be defined once and then reused. Variables are especially useful for hex colours; we can build a palette that can be changed in one place, which makes tweaking aspects of a design very fast. The mixin is used to prefix rules such as appearance and transition, and it reduces bulk in the file itself; for example, @include prefix(appearance, none, webkit moz) compiles to just the two declarations -webkit-appearance: none; and -moz-appearance: none;. When working with a large style sheet, anything that can be done to reduce the number of lines will make the file easier to read when a team member other than you wants to update a style. In addition to Sass, grunt-cssc combines CSS rules together, ensuring that the generated CSS has minimal repetition. This can be very useful in medium- to large-scale projects in which a lot of styles are repeated. However, the output file is not always the smallest possible. This is where the cssmin task comes in. It not only trims out white space, but also transforms colors to their shortest possible values (so, white would become #fff).
Add these tasks to gruntfile.js: cssc: { build: { options: { consolidateViaDeclarations: true, consolidateViaSelectors: true, consolidateMediaQueries: true }, files: { 'build/css/master.css': 'build/css/master.css' } } }, cssmin: { build: { src: 'build/css/master.css', dest: 'build/css/master.css' } }, sass: { build: { files: { 'build/css/master.css': 'assets/sass/master.scss' } } } Now that we have something in place to handle style sheets, these tasks should also be run automatically. The build directory is created automatically by Grunt to house all of the production scripts, CSS and (if this were a full website) compressed images. This means that the contents of the assets directory may be heavily commented and may contain more documentation files for development purposes; then, the build directory would strip all of that out, leaving the assets as optimized as possible. We’re going to define a new set of tasks for working with CSS. Add this line to gruntfile.js, below the default task: grunt.registerTask('buildcss', ['sass', 'cssc', 'cssmin']); Now, when grunt buildcss is run, all of the CSS-related tasks will be executed one after another. This is much tidier than running grunt sass, then grunt cssc, then grunt cssmin. All we have to do now is update the watch configuration so that this gets run automatically. watch: { css: { files: ['assets/sass/**/*.scss'], tasks: ['buildcss'] } } This path might look a little strange to you. Basically, it recursively checks any directory in our assets/sass directory for .scss files, which allows us to create as many Sass source files as we want, without having to add the paths to gruntfile.js. After adding this, gruntfile.js should look like this: module.exports = function(grunt){ "use strict"; require("matchdep").filterDev("grunt-*").forEach(grunt.loadNpmTasks); grunt.initConfig({ pkg: grunt.file.readJSON('package.json'), cssc: { build: { options: { consolidateViaDeclarations: true, consolidateViaSelectors: true, consolidateMediaQueries: true }, files: { 'build/css/master.css': 'build/css/master.css' } } }, cssmin: { build: { src: 'build/css/master.css', dest: 'build/css/master.css' } }, sass: { build: { files: { 'build/css/master.css': 'assets/sass/master.scss' } } }, watch: { html: { files: ['index.html'], tasks: ['htmlhint'] }, js: { files: ['assets/js/base.js'], tasks: ['uglify'] }, css: { files: ['assets/sass/**/*.scss'], tasks: ['buildcss'] } }, htmlhint: { build: { options: { 'tag-pair': true, // Force tags to have a closing pair 'tagname-lowercase': true, // Force tags to be lowercase 'attr-lowercase': true, // Force attribute names to be lowercase, e.g. <DIV> is invalid 'attr-value-double-quotes': true, // Force attributes to have double quotes rather than single 'doctype-first': true, // Force the DOCTYPE declaration to come first in the document 'spec-char-escape': true, // Force special characters to be escaped 'id-unique': true, // Prevent using the same ID multiple times in a document 'head-script-disabled': true, // Prevent script tags being loaded in the <head> for performance reasons 'style-disabled': true // Prevent style tags.
CSS should be loaded through <link> tags }, src: ['index.html'] } }, uglify: { build: { files: { 'build/js/base.min.js': ['assets/js/base.js'] } } } }); grunt.registerTask('default', []); grunt.registerTask('buildcss', ['sass', 'cssc', 'cssmin']); }; We should now have a static HTML page, along with an assets directory with the Sass and JavaScript source, and a build directory with the optimized CSS and JavaScript inside, along with the package.json and gruntfile.js files. By now, you should have a pretty solid foundation for exploring Grunt further. As mentioned, an incredibly active community of developers is building front-end plugins. My advice is to head on over to the plugin library and explore the more than 300 plugins. (al) © Mike Cunsolo for Smashing Magazine, 2013.

  • Smashing Magazine
    Automate Your Responsive Images With Mobify.js

       Responsive images are one of the biggest sources of frustration in the Web development community. With good reason, too: The average size of pages has grown from 1 MB to a staggering 1.5 MB in the last year alone. Images account for more than 60% of that growth, and this percentage will only go up. Much of that page weight could be reduced if images were conditionally optimized based on device width, pixel density and modern image formats (such as WebP). These reductions would result in faster loading times and in users who are more engaged and who would stick around longer. But the debate isn’t about whether to optimize images for different devices, but about how to go about doing so. In an ideal world, we would continue using the img tag, and the browser would download exactly what it needs based on the width of the device and the layout of the page. However, no functionality like that currently exists. One way to get functionality similar to that would be to change the src attribute of img elements on the fly with JavaScript, but the lookahead pre-parser (or preloader) prevents this from being a viable option. The first step to overcoming this problem is to create a markup-based solution that allows for alternate image sources to be delivered based on a device’s capabilities. This was solved with the introduction of the picture element, created by the W3C Responsive Images Community Group (although no browser currently implements it natively). However, the picture element introduces a whole new problem: Developers must now generate a separate asset for every image at every breakpoint. What developers really need is a solution that automatically generates small images for small devices from a single high-resolution image. Ideally, this automated solution would make only one request per image and would be 100% semantic and backwards-compatible. The Image API in Mobify.js provides that solution. The picture Element As The Upcoming Best Practice The picture element is currently the frontrunner to replace the img element because it enables developers to specify different images for different screen resolutions in order to solve the problem of both performance and art direction (although the new srcN proposal is worth looking into). The typical set-up involves defining breakpoints, generating images for each breakpoint and then writing the picture markup for the image. Let’s see how we can make the following image responsive using a workflow that includes the picture element: We’ll use a baseline of 320, 512, 1024 and 2048 pixels. First, we need to generate a copy of each image for those different resolutions, either by using a command-line interface (CLI) tool such as Image Optim or by saving them with Photoshop’s “Save for web” feature. Then, we would use markup along these lines: <picture> <source src="responsive-obama-320.png"> <source src="responsive-obama-512.png" media="(min-width: 512px)"> <source src="responsive-obama-1024.png" media="(min-width: 1024px)"> <source src="responsive-obama-2048.png" media="(min-width: 2048px)"> <img src="responsive-obama.png"> </picture> One problem with this markup is that, in its current configuration, our image would not be optimized for mobile devices. Here is the same image scaled down to 320 pixels wide: Identifying the people in this photo is difficult. To better cater to the smaller screen size, we need to use the power of art direction to crop this photo for small screens: Because this file isn’t simply a scaled-down version of the original, the name of the file should be given a different structure (so, responsive-obama-mobile.png, instead of responsive-obama-320.png): <source src="responsive-obama-mobile.png"> But what if we want to account for high-DPI (dots per inch) devices?
The picture element’s specification has a srcset attribute that allows us to easily specify different images for different pixel ratios. Below is roughly what our markup would look like if we used the picture element: <picture> <source src="responsive-obama-mobile.png" srcset="responsive-obama-mobile-2x.png 2x"> <source src="responsive-obama-512.png" media="(min-width: 512px)" srcset="responsive-obama-1024.png 2x"> <source src="responsive-obama-1024.png" media="(min-width: 1024px)" srcset="responsive-obama-2048.png 2x"> <source src="responsive-obama-2048.png" media="(min-width: 2048px)" srcset="responsive-obama-4096.png 2x"> <img src="responsive-obama.png"> </picture> Here we have introduced a couple of new files (responsive-obama-mobile-2x.png and responsive-obama-4096.png) that must also be generated. At this point, we’ll have six different copies of the same image. Let’s take this a step further. What if we want to conditionally load our images in a more modern format, such as WebP, according to whether the browser supports it? Suddenly, the total number of files we must generate increases from 6 to 12. Let’s be honest: No one wants to generate multiple versions of every image for various resolutions and have to constantly update those versions in the markup. We need automation! The Ideal Responsive Image Workflow The ideal workflow is one that allows developers to upload images in the highest resolution possible while still using the img element in such a way that it automatically resizes and compresses the images for different browsers. The img element is great because it is a simple tag for solving a simple problem: displaying images for users on the Web. Continuing to use this element in a way that is performant and backwards-compatible would be ideal. Then, when the need for art direction arises and scaling down images is not enough, we could use the picture element; the branching logic built into its syntax is perfect for that use case. This ideal workflow is possible using the responsive Image API in Mobify.js. Mobify.js is an open-source library that improves responsive websites by providing responsive images, JavaScript and CSS optimization, adaptive templating and more. The Image API automatically resizes and compresses img and picture elements and, if needed, does it without changing a single line of markup in the back end. Simply upload your high-resolution assets and let the API take care of the rest. Automatically Make Images Responsive Without Changing The Back End The problem of responsive images is a hard one to solve because of the lookahead pre-parser, which prevents us from changing the src attribute of an img element on the fly with JavaScript in a performant way. The pre-parser is a feature of browsers that starts downloading resources as fast as possible by spawning a separate thread outside of the main rendering thread and whose only job is to locate resources and download them in parallel. The way the pre-parser works made a lot of sense prior to responsive design, but in our multi-device world, images in the markup are not necessarily the images we want users to download; thus, we need to start thinking of APIs that allow developers to control resource loading without sacrificing the benefits of the pre-parser. For more details on this subject, consider reading Steve Souders’ “I <3 Image Bytes.” One way that many developers avoid the pre-parser is by manually changing the src attribute of each img into data-src, which tricks the pre-parser into not noticing those images, and then changing data-src back to src with JavaScript (see the sketch below). With the Capturing API in Mobify.js, we can avoid this approach entirely, allowing us to be performant while remaining completely semantic (no noscript or data-src hacks needed). The Capturing technique stops the pre-parser from initially downloading the resources in the page, but it doesn’t prevent parallel downloads.
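For comparison, a generic version of the data-src workaround mentioned above might look like the following. This is a sketch only — it is not Mobify.js code, and the resizing URL scheme is invented for illustration:

// Markup would carry data-src instead of src, e.g. <img data-src="forest.jpg">
document.addEventListener("DOMContentLoaded", function () {
    var images = document.querySelectorAll("img[data-src]");
    for (var i = 0; i < images.length; i++) {
        var img = images[i];
        // Pick a source based on the screen width (hypothetical URL scheme)
        img.src = "/resize/" + screen.width + "/" + img.getAttribute("data-src");
        img.removeAttribute("data-src");
    }
});

The drawback is that such markup is no longer semantic and breaks entirely without JavaScript — exactly the compromises Capturing avoids.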
Using Mobify.js’ Image API in conjunction with Capturing, we are able to have automatic responsive images with a single JavaScript tag. Here is what the API call looks like: Mobify.Capture.init(function(capture){ var capturedDoc = capture.capturedDoc; var images = capturedDoc.querySelectorAll('img, picture'); Mobify.ResizeImages.resize(images, capturedDoc); capture.renderCapturedDoc(); }); This takes any image on the page and rewrites the src to the following schema: http://ir0.mobify.com/<format><quality>/<maximum width>/<maximum height>/<original image URL> For example, if this API was running on the latest version of Chrome for Android, with a screen 320 CSS pixels wide and a device pixel ratio of 2, then the following image… … would be rewritten to this: The image of the forest would be resized to 640 pixels wide, and, because Chrome supports WebP, we would take advantage of that in order to reduce the size of the image even further. After the first request, the image would be cached on Mobify’s CDN for the next time it is needed in that particular size and format. Because this image of the forest does not require any art direction, we can continue using the img element. You can see an example of automatic image resizing for yourself. Feel free to open your Web inspector to confirm that the original images do not download! Using this solution, we simplify our workflow. We only upload a high-resolution asset for each image, and then sit back and let the API take care of resizing them automatically. No proxy in the middle, no changing of any attributes — just a single line of JavaScript that is copied to the website. Go ahead and try it out by copying and pasting the following line of code at the top of your head element. (Please note that it must go before any other tag that loads an external resource.) !function(a,b,c,d,e){function g(a,c,d,e){var f=b.getElementsByTagName("script")[0];a.src=e,a.id=c,a.setAttribute("class",d),f.parentNode.insertBefore(a,f)}a.Mobify={points:[+new Date]};var f=/((; )|#|&|^)mobify=(\d)/.exec(location.hash+"; "+b.cookie);if(f&&f[3]){if(!+f[3])return}else if(!c())return;b.write('<plaintext style="display:none">'),setTimeout(function(){var c=a.Mobify=a.Mobify||{};c.capturing=!0;var f=b.createElement("script"),h="mobify",i=function(){var c=new Date;c.setTime(c.getTime()+3e5),b.cookie="mobify=0; expires="+c.toGMTString()+"; path=/",a.location=a.location.href};f.onload=function(){if(e)if("string"==typeof e){var c=b.createElement("script");c.onerror=i,g(c,"main-executable",h,mainUrl)}else a.Mobify.mainExecutable=e.toString(),e()},f.onerror=i,g(f,"mobify-js",h,d)})}(window,document,function(){var a=/webkit|msie\s10|(firefox)[\/\s](\d+)|(opera)[\s\S]*version[\/\s](\d+)|3ds/i.exec(navigator.userAgent);return a?a[1]&&+a[2]<4?!1:a[3]&&+a[4]<11?!1:!0:!1}, // path to mobify.js "//cdn.mobify.com/mobifyjs/build/mobify-2.0.0.min.js", // calls to APIs go here function() { var capturing = window.Mobify && window.Mobify.capturing || false; if (capturing) { Mobify.Capture.init(function(capture){ var capturedDoc = capture.capturedDoc; var images = capturedDoc.querySelectorAll("img, picture"); Mobify.ResizeImages.resize(images); // Render source DOM to document capture.renderCapturedDoc(); }); } }); (Please note that this script does not have a single point of failure. If Mobify.js fails to load, then the script will opt out and your website will load as normal.
If the image-resizing servers are down or if you are in a development environment and the images are not publicly accessible, then the original images will load.) You can also make use of the full documentation. Browser support for the snippet above is as follows: all WebKit/Blink-based browsers, Firefox 4+, Opera 11+ and Internet Explorer 10+. Resizing img elements automatically is great for the majority of use cases. But, as demonstrated in the Obama example, art direction is necessary for certain types of images. How can we continue using the picture element for art direction without having to maintain six versions of the same image? The Image API will also resize picture elements, meaning that you can use the picture element for its greatest strength (art direction) and leave the resizing up to the API. Resizing picture Elements While automating the sizes of images for different browsers is possible, automating art direction is impossible. The picture element is the best possible solution for specifying different images at different breakpoints, due to the robust branching logic built into its defined syntax (although, as mentioned before, srcN is a more recent proposal that offers very similar features). But, as mentioned, writing the markup for the picture element and creating six assets for each image gets very complicated. When using the Image API in conjunction with the picture element, we can simplify the markup significantly, to something like this: <picture> <source src="responsive-obama-mobile.png"> <source src="responsive-obama.png" media="(min-width: 512px)"> <img src="responsive-obama.png"> </picture> The source elements here will be automatically rewritten in the same way that the img elements were in the previous example. Also, note that the markup above does not require noscript to be used for the fallback image to prevent a second request, because Capturing allows you to keep the markup semantic. Mobify.js also allows for a modified picture element, which is useful for explicitly defining how wide images should be at different breakpoints, instead of having to rely on the width of devices. For example, if you have an image that is half the width of a tablet’s window, then specifying the width of the image according to the maximum width of the browser would generate an image that is larger than necessary. To solve this problem, the Image API allows for alternate picture markup that enables us to override the width of each source element, instead of specifying a different src attribute for each breakpoint. For example, we could write an element along these lines: <picture data-src="responsive-obama.png"> <source src="responsive-obama-mobile.png"> <source media="(min-width: 512px)"> <source media="(min-width: 1024px)" data-width="512"> <source media="(min-width: 2048px)" data-width="1024"> <img src="responsive-obama.png"> </picture> Notice the use of the data-src attribute on the picture element. This gives us a high-resolution original image as a starting point, which we can use to resize into assets for other breakpoints. Let's break down how this would actually work in the browser: If the browser is between 0 and 511 pixels wide (i.e. a smartphone), then use responsive-obama-mobile.png (for the purpose of art direction). If the browser is between 512 and 1023 pixels wide, then use responsive-obama.png, because src is not specified in the source element corresponding to that media query. Automatically determine the width because data-width isn't specified. If the browser is between 1024 and 2047 pixels wide, then use responsive-obama.png, because src is not specified in the source element corresponding to that media query. Resize to 512 pixels wide, as specified in the data-width attribute.
If the browser is 2048 pixels or wider, then use responsive-obama.png, because src is not specified in the source element corresponding to that media query. Resize to 1024 pixels wide, as specified in the data-width attribute. If JavaScript isn't supported, then fall back to the regular old img tag. The Image API will run on each picture element, rewriting each source so that it points to an appropriately resized image. The Picture polyfill (included in Mobify.js) would then run and select the appropriate image according to the media queries. It will also work well when browser vendors implement the picture element natively. See a page that uses the modified picture element markup for yourself. Using the Image API without Capturing One caveat with Capturing is that it requires the script to be inserted in the head element, which is a blocking JavaScript call that can delay the initial downloading of resources. The total length of the delay on first load is approximately 0.5 seconds on a device with a 3G connection (i.e. including the DNS lookup, downloading and Capturing), less on 4G or Wi-Fi, and about 60 milliseconds on subsequent requests (since the library will have been cached). But this minor penalty is a small price to pay in exchange for being easy to use, backwards-compatible and semantic. To use the Image API without Capturing, in order to avoid the blocking JavaScript request, you need to change the src attribute of all of your img elements to x-src (you might also want to add the appropriate noscript tags if you're concerned about browsers on which JavaScript has been disabled) and paste the following asynchronous script right before the closing head tag: <script src="//cdn.mobify.com/mobifyjs/build/mobify-2.0.0.min.js" async></script> <script> var intervalId = setInterval(function(){ if (window.Mobify) { var images = document.querySelectorAll('img[x-src], picture'); if (images.length > 0) { Mobify.ResizeImages.resize(images); } // When the document has finished loading, stop checking for new images if (Mobify.Utils.domIsReady()) { clearInterval(intervalId); } } }, 100); </script> This loads Mobify.js asynchronously and, once it is available, starts loading the images as the document loads (it does not need to wait for the entire document to finish loading before kicking off image requests). Using The Image API For Web Apps If you are using a client-side JavaScript model-view-controller (MVC) framework, such as Backbone or AngularJS, you could still use Mobify.js’ Image API. First, include the Mobify.js library in your app: <script src="//cdn.mobify.com/mobifyjs/build/mobify-2.0.0.min.js"></script> Then, rewrite image URLs with the method outlined in Mobify.js’ documentation: Mobify.ResizeImages.getImageUrl(url) This method accepts an absolute URL and returns the URL to the resized image. The easiest way to pass images into this method is by creating a template helper (for example, {{image_resize '/obama.png' }} in Handlebars.js) that executes the getImageUrl method in order to generate the image’s URL automatically.
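A minimal Handlebars helper along those lines might look like the sketch below. Only Mobify.ResizeImages.getImageUrl comes from the documentation; the registration code itself is an assumption:

// Register an "image_resize" helper that routes an image URL
// through the resizing service before it lands in the template.
Handlebars.registerHelper("image_resize", function (url) {
    // getImageUrl accepts an absolute URL and returns the resized-image URL
    return new Handlebars.SafeString(Mobify.ResizeImages.getImageUrl(url));
});

// In a template: <img src="{{image_resize '/obama.png'}}">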
Using Your Own Image-Resizing Back End Images are resized through Mobify's Performance Suite resizing servers, which provide support for automatic resizing, WebP, CDN caching and more. There is a default limit on how many images you can convert for free per month, but if you're driving a large volume of traffic, then give Mobify a shout and we'll find a way to help. The API also allows you to use a different image-resizing service, such as Sencha.io Src, or your own back-end service. How Can Browser Vendors Better Support Responsive Images? The WebKit team has recently implemented the srcset attribute, and so will Blink and Gecko in the coming months. This is a huge step in the right direction, as it means that browser vendors are taking the responsive-images problem seriously. However, it doesn't solve the art-direction problem, nor does it prevent the issue of needing to generate multiple assets at different resolutions. The developer community recently got together to discuss the responsive-images problem. One of the more interesting proposals discussed was Client Hints from Ilya Grigorik, which involves sending device properties such as DPR, width and height in the headers of each request. I like this solution because it allows us to continue using the img tag as per usual and requires the picture element (or srcN) only when we need branching logic for art direction — although valid concerns have been raised about adding additional HTTP headers and about using content negotiation to solve this problem. More importantly, for established websites with thousands of images, it may not be so easy to route those images through a server that can resize using the headers provided by Client Hints. This could be solved by rewriting images at the Web server level, or with a proxy, but both of those can be problematic to set up. In my opinion, this is something we should be able to handle on the client through greater control over resource loading. If developers had greater control over resource loading, then responsive images would be a much simpler problem to tackle. The reason why so many responsive-image solutions out there are proxy-based is that the images must be rewritten before the document arrives at the browser, to accommodate the pre-parser's attempt to download images as quickly as possible. But proxies can be very problematic in terms of security and scalability, and, really, if we had an easy way to interact with the pre-parser, then many proxy-based solutions would be redundant. How can we get greater control over resource loading while still getting all of the benefits of the pre-parser? The key thing here is that we do not want to simply turn off the pre-parser — its ability to download assets in parallel is a huge win and one of the biggest performance improvements introduced into browsers. (Please note that the Capturing API does not prevent parallel downloads.) One idea is to provide a beforeload event that fires before each resource on a page is loaded. This event is actually available when one uses Safari browser extensions, and in some browsers it is available in a very limited capacity. If we could use this event to control resource loading in a way that works with the pre-parsing thread, then Capturing would no longer be needed. Here is a basic example of how you might use the beforeload event, if it worked as described: function rewriteImgs(event) { if (event.target.tagName === "IMG") { var img = event.target; img.src = "//ir0.mobify.com/" + screen.width + "/" + img.src; } } document.addEventListener("beforeload", rewriteImgs, true); The key challenge is to somehow get the pre-parser to play nice with JavaScript that is executed in the main rendering loop. There is currently a new system being developed in browsers called the Service Worker, which is intended to allow developers to intercept network requests in order to help build Web applications that work offline. However, the current implementation does not allow for intercepting requests on the initial load.
This is because loading an external script which controls resource loading would have to block the loading of other resources — but I believe it could be modified to do so in a way that does not sacrifice performance through the use of inline scripts. Conclusion While there are many solutions to the problem of responsive images, the one that automates as much work as possible while still allowing for art direction will be the solution that drives the future of Web development. Consider using Mobify.js to automate responsive images today if you are after a solution that does the following: requires you to generate only one high-resolution image for each asset, letting the API take care of serving smaller images based on device conditions (width, WebP support, etc.); makes only one request per image; allows for 100% semantic and backwards-compatible markup that doesn't require changes to your back end (if using Capturing); has a simplified picture element that is automatically resized, so you can focus on using it only for art direction. (Front page image credits: Creating High-Performance Mobile Websites) (al, ea, il) © Shawn Jansepar for Smashing Magazine, 2013.

  • Smashing Magazine
    Smart, Effective Strategies To Design Marketing Campaigns

       Ever since I’ve been involved in the Web, I’ve been fascinated by little things that make a big impact. It’s one of the reasons why I started collecting and blogging about these details, which could in some way help others grow an audience. One recurring topic early on was launch and landing pages and the strategies that creators use to expand the reach of their websites, which led to a Smashing Magazine post titled “Elements of a Viral Launch Page.” Another interesting recurring topic is the campaign page, which you’ll find either embedded in an existing website (Foursquare’s Game of Cones and Dropbox’s Great Space Race) or as a completely independent website (Iubenda’s Orwell Test) that redirects traffic back to the source. Such campaigns have varying goals, such as to drive traffic, to raise awareness or simply to get a single person’s attention. In this post, you’ll learn what to look out for when creating your own small campaign and how these elements fit together in existing campaigns around the Web. What Do We Mean By “Campaign”? The word “campaign” is traditionally defined as a military or political operation that is confined to a particular area or involves a specific type of fighting and that is intended to achieve a particular goal. That’s exactly what we mean here: A campaign is a sustained effort that is slightly beyond your day-to-day business but still connected to it in some way. Interestingly, a campaign can be carried out with little effort, if you closely monitor what is going on around you and your brand. In the wake of PRISM, we recently ran a small campaign called Orwell Test, trying to redirect some of the attention to Iubenda, an app that generates privacy policies. This post contains my observations and a framework for coming up with new campaigns, which you can integrate in your own marketing activities. Thus, it includes some subjective opinion to complete the picture. 10 Things To Look Out For After the Orwell Test campaign, I kept thinking about some of the reasons why it worked and how to come up with even better ways to get the word out about our startup. Obviously, the main driver here was the timing with PRISM and people’s emotional response to its outrageous reach. (Anger spreads faster and wider than joy on social networks, according to a scientific study on Weibo.) The campaign asked a simple question: Is your company any better than the governments that collect data without informing people? This way, we made a connection with what is happening right now. We made a connection to something that people are very upset about (we’ll discuss why that’s important in the section “A Little Viral Theory” near the end), and we challenged them to think about their own behavior. So, what things can you take advantage of, and what are some great campaigns that do? The Elements
Relevance: Tell a story. What is happening now?
Cause: Contribute to a cause.
Popular issue: Geek out on a popular or niche topic.
Competition: Host a competition.
Stats, stats, stats: Infographics, shareable data, etc.
Content
Partnerships
Targeting someone or something
Creativity and innovation
Related fields: Branch out into something related.
This list is not exhaustive, but it’s a good framework to start with, and the examples below will provide some perspective.
To understand what your campaign should look like and to make it perfect, you will have to do a bit more homework and incorporate a few additional elements: analyze your target users, set a goal for the campaign, and add viral elements. Other Key Elements
Persona: Who are your target users? Build a persona.
Goal: What goal would you like to achieve?
Virality: Understand the viral loop and what causes it, to maximize your reach. (Again, we’ll cover this in the “A Little Viral Theory” section.)
Now we’re ready to look at some examples. 1. Relevance One of the easiest things to leverage is relevance. Something might be happening out there that you can add your own perspective to. The more relevant or innovative your contribution, the more attention you will be able to draw. Orwell Test was born out of pure timing and our eagerness to work on something that people might appreciate. Another project that did an interesting job of capturing attention because of its content and name recognition was Prism Break, which was started by a Japanese developer and designer and posted in Reddit’s technology section and which has since taken on a life of its own. Responding to short-lived events — for example, via Twitter — also has potential. Sketch When Adobe was considering discontinuing Fireworks (which it did), Sketch invited people to look at its alternative, which it was offering at a 50% discount. Sketch’s tweet resulted in 33 favorites and 299 retweets! Tweetbot The same thing happened with Tweetbot, which discounted its flagship app when Twitter killed off some versions of its TweetDeck app. Above is a humorous tweet by Paul Haddad, a member of the Tweetbot team. The tweet got 420 retweets and 152 favorites! 2. Cause Another approach is to announce your support of a recurring event or cause. Movember Movember is a movement in which men grow moustaches in November to raise awareness about prostate cancer and to fund organizations that fight the disease. The movement is important to New Relic, which decided to donate $10 to the cause with every registration. New Relic ended up donating around $55,000 to the Movember Foundation and the Susan G. Komen foundation. If we do the math, that’s 5,500 new customers, if all of the registrations were legitimate. The campaign had a single landing page at newrelic.com/movember, organized into various sections. (It has since been removed, unfortunately, but might come back in a new version. Do visit the 404 page there, though — a great example of an actionable landing page.) The campaign’s home was a one-pager that explained the concept of Movember and why New Relic is supporting it (the Movember Foundation is one of its clients). Visitors were invited to sign up below each of those sections (“Create your FREE account!”). To remind people to share, the website also asked visitors to consider spreading the word below the different sections. The page’s header contained the usual Facebook, Google+ and Twitter buttons. The last section offered the visitor the alternative of supporting a female-oriented cause. Notice how the copy taps into the altruism theme discussed later. It’s hard not to participate when you can support by “just sharing.” 3. Popular Issue Being relevant could also mean building the campaign on something very popular that people identify with (a form of relevance). Below are some campaigns based on very popular issues.
Crowdfunding is all the rage these days (deservedly so), and all aspects of it are being discussed in blogs and social media and even traditional media. Nintendo played with this concept for the launch of its Game & Wario game for the Wii U. Crowdfarter With Crowdfarter, Nintendo built its very own Wario-themed Kickstarter-like campaign, featuring “executive updates” by Wario himself. Visitors could preorder the game, and the page explained everything they needed to know about the upcoming release. The pages were structured to obviously reflect Kickstarter (and similar websites): video at the top, social elements along the right, and details just below. The most important part was that the goals and donation element were connected to the sharing elements. That is, visitors could “buy” into the campaign by sharing in social media. When the campaign had been shared a certain number of times, Wario badges would be released for downloading. The last prize was a video of the gameplay. Makers Nike did something similar by tapping into a popular current topic. It built and released its Makers app in July 2013. Visitors could download the app and educate themselves on how to work with materials in an environmentally sustainable way. Quite a few sharing elements were built into this campaign. The campaign lives at its own domain, nikemakers.com, not a subdomain of nike.com. The website was built with Tumblr to appeal to people who love the platform. It also includes a video explaining how this campaign could turn into a full-blown movement. (By the way, check out Tomasz Tunguz’s post on how to start a movement for your product.) Marauders Map The makers of Circle, a local social network app, created a small campaign website based on a Harry Potter-style marauders map to explain what its service does, which is to track nearby friends. (The marauders map was apparently the result of a Circle hackathon.) 4. Competition People love to compete. And people love the chance to win something, however improbable the chance of actually winning might be. A competition is always an interesting basis for a campaign. The Great Space Race Dropbox understands the viral loop better than anyone. When it started out, it promised free additional storage for users who referred their friends as well as for the friends themselves. This not only incentivized existing users to share, but also enticed potential users to sign up because they would clearly be getting a better deal than anyone else. Earlier this year, Dropbox decided to run the Great Space Race, which would give free Dropbox space to everyone at a student’s school (more precisely, an extra 3 GB to the student for two years, plus the additional space their school had earned). The campaign hinged on two elements: acquiring users and activating those users. Every invitation would earn two points towards free space for the student’s school, and new users would unlock four more points by completing the “Get Started on Dropbox” guide. The points would be converted into free space for everyone at the school for two years (to a maximum of 25 GB). Right below the simple instructions appeared the school rankings for the student’s own country, and below that the international rankings, so that the student could see where they fit in and how far they had to go. (The competition is over, but the page is still available.) 
Game of Cones Foursquare built a campaign named Game of Cones, launched around the time when the “Red Wedding” episode of Game of Thrones aired, which dominated social media for a while. Foursquare’s competition was a battle in which the ice cream shop with the most check-ins in New York City or San Francisco would win the Iron Cone. Foursquare combined a competition with a very popular phenomenon, Game of Thrones, working with HBO to bring this campaign to its users. The sharing aspect was a combination of rooting for the various “houses” — for example, Bi-Rite and Smitten — and sharing via the hash tag #SummerIsComing. Users were incentivized to share by “choosing their allegiance.” Getaround Racer Getaround repurposed a racing game into a competition for its brand in which users could “drive with style, like 007.” The winner spent a day in an Aston Martin car, with a custom-tailored suit, had a five-star dinner and spent the night in a luxurious hotel. Users could participate in the competition by sharing their racing time in the game on Facebook. 5. Infographics, Compilations And POV An infographic is a great campaign tool, as we all know. It compiles a few interesting facts into an easily digestible (and, thus, shareable) format. One of my all-time favorites is the Web Trend Map by iA. iA has become well known for these infographics, repurposing metro maps into what could even be considered as posters. Time Machine Foursquare’s Time Machine was an interesting partnership (with Samsung’s Galaxy S4) and a good example of emotional design. The time machine took users back in time in an interactive, visual way, showing what they’ve been up to and the places they’ve visited. If the user changed cities, the machine would take them there as well, changing locations with a small spaceship animation. After the user had completed and seen their own history, they could check out some interesting new places around them. To finish off, they could see a colorful infographic of their own journey, which they could easily share. Flurry In a post titled “Why Your Marketing Campaign Sucks” on TechCrunch, Mark Suster singles out Flurry as an exceptional campaign marketer, and he introduces the term “point-of-view marketing,” which a campaign creator needs to follow in order to succeed. The gist is that it’s not all about you, but rather about why the campaign is newsworthy. Flurry’s campaign was a simple blog post, “Christmas 2012 Shatters More Smart Device and App Download Records.” As Suster puts it: “Flurry doesn’t talk about all of their analytics features and functions. They offer a point-of-view about their market. And they back it up with data. And journalists eat that shit up because it has all that they’re looking for: facts, charts, an angle, news, something that their readers care about, etc.” The takeaway here is, are you painting the big picture, or just talking about yourself? 6. Content Words. Images. Content. These form an integral subset of all of the other categories we’re talking about. All of them matter, the color choices, contrast, symmetry, balance. Yet they stand as a category of their own. Justin Jackson perfectly captured the idea that content can be the message in his piece “This Is a Web Page.” He opens with the heart of the matter: “There’s not much here. Just words.” Those words garnered over 200,000 page views in just two weeks. 7. Partnership We’ve seen partnership campaigns before. Here’s one GE pulled off on BuzzFeed. 
Flight Mode GE partnered with BuzzFeed to promote GE Aviation at the Paris Air Show. With BuzzFeed, GE created sponsored content related to aviation, and it provided a novel way to navigate the content, called “Flight Mode,” whereby users could fly towards the content they wanted to read. It might not have revolutionized online reading, but it was a memorable campaign. KitKat and Android Most of you have probably seen this. The campaign launched after I had finished the final draft of this article, but I just had to squeeze it in. Google and Nestlé have come together to promote the next version of Android, 4.4, by naming it Android KitKat, continuing Google’s convention of naming major Android releases after desserts. This dessert campaign includes an actual dessert: Android will be featured on KitKat packaging, and customers can win little prizes. KitKat’s campaign website. Android’s campaign website. 8. Narrow Target An effective campaign could just as well target an extremely narrow niche audience. “How Much to Make an App” Ooomf is a service for people who are looking for developers and designers to make an idea happen. It recently released a side project and campaign titled “How Much to Make an App.” The website helps users estimate the costs of a project. It targets anyone who types “How much does it cost to make an app” into a search engine. Additionally, Web professionals may send their customers this way to get an idea of what’s involved. Sharing is enabled by the handy nature of the tool, which explains what is expected of the visitor to make their idea a reality. In general, the more useful something is to someone, the more likely they will share it. Ooomf added two links to its website, one on the main landing page at the top, and a call to action to “submit your project” right on the website (plus a “learn more” link). “Please Feature Us, TechCrunch” Here’s a company that tried to increase its chances of being covered by TechCrunch by targeting the publication with a campaign page titled “Please Feature Us TC.” The campaign never got Hipvite the coverage it sought; however, according to TJ Tan on Dribbble, it gave the company a nice flow of registrations (the app no longer exists). In an email to me, Tan explained that Hipvite’s primary goal was to attract early adopters to try out the product; the TechCrunch feature would merely have been a great bonus. Nearbox The team behind Nearbox wanted to meet up with Andrew Chen to discuss “growth hacking” in their area of interest (mobile). They sent him the Snapchat image above, along with the website they built. This is a great example of how to target an individual in a small campaign. They even asked people to tweet about it: “Those guys from @Nearbox really want to meet @andrewchen http://bit.ly/YzKesU.” It’s hard to turn down people who do things like this. 9. Creativity And Innovation This next creative campaign is still running. Hollywood and Vines This campaign, called Hollywood and Vines, consists of a compilation of six-second Vine clips submitted from all over the world. Sharing is encouraged via the usual social channels and with an email like this: “Airbnb creates the world’s first film made entirely of Vines. Check out the project at http://www.hollywoodandvines.com/ #AirbnbHV.” Solved by Flexbox Creativity also means coming up with a great campaign name. “Solved by Flexbox” is a great name to support the CSS Flexbox campaign.
10. Related Area One last option worth mentioning is to choose an area of focus that isn’t necessarily your core business, but where you can add unique insight or get a lot of views. Speakeasy When Speakeasy launched its product to book locations for events, instead of a typical launch party, it threw 50 parties by getting promoters to use its ticketing platform for the same weekend. The incentive for promoters posed a problem; Speakeasy wanted to host a liquor giveaway, but that wasn’t viable because of the high turnaround time that spirit brands usually have. So, it decided to make its own vodka brand and advertise it in its emails (and on a fake website), without any explanation. The campaign’s results were impressive: media coverage in New York and Toronto, one weekend with 45 parties in three cities, and 5,000 new users. Litense I’ll conclude with our newest campaign at Iubenda, named Litense. We tackled a subject related to what we do, open-source licenses, and added our expertise. Because we have some experience in creating legal documents, we attempted to redesign open-source licenses to make them instantly understandable and easy to use, both for their creators and their users. We picked the catchiest name possible, Litense, made the licenses easy to scan and read, with the help of icons, and added a few descriptive sentences about what you can do with the licenses. To make it even easier, we host the licenses on our own servers. All the user has to do is link to the license, which will pop open in a modal window on top of their page. At the bottom of the page, we list the various licenses, with an overview to help visitors make sense of the various clauses; it also contains the only link back to Iubenda’s website. We hope the service provides value to a lot of people, people who might need Iubenda’s other services at some point. We hope the project will draw a bit of attention to Iubenda’s vision of reworking legal documents for the Web. A Little Viral Theory Now that you know the ingredients of a campaign, let’s look at the recipe to make it effective. You’ll usually want to follow a few rules to make as big an impact as possible. For starters, you’ll need to bake in a bit of a viral loop. In short, you’ll need to get a viral coefficient above 1: viral coefficient = (average number of users invited by each active user who invites someone) × (proportion of invited users who actually join or become active) × (proportion of active users who invite others) For example, if each inviting user invites 5 people on average, 20% of invitees become active, and half of all active users invite others, the coefficient is 5 × 0.2 × 0.5 = 0.5 — below 1, so each generation of sharing shrinks instead of compounding. Ultimately, it comes down to eliciting a strong emotional driver that will get visitors to share. (If you’re interested in emotional design, check out Aarron Walter’s book on the topic and my earlier article). What’s the strongest driver on social networks? Anger. Higher virality is achieved by eliciting strong emotions. Here are a few things you can do: Make your content visual. Use images, infographics, videos, GIFs. Visual content is more engaging, faster to consume and faster to activate the response you seek. Make the content interactive. Personalize the content. Create emotional stacking with lists. Emotions aren’t the only thing that raises the viral coefficient. Other factors that motivate people to share are social benefit or ego appeasement.
Exclusivity: The viewer wants to feel in the know.
Altruism: The user wants to do something good.
Self-image: How does the shared product represent the user?
Convenience: Make sharing as easy as possible.
Now that we’ve covered the theory, let’s look at a recent and brilliant execution of it.
One Second on the Internet Designly’s One Second on the Internet campaign taps into so many of the themes above, and the website has been shared like crazy. See for yourself with a live Twitter search. Let’s review five reasons why this campaign is so effective:
Visual content: The highly visual content centers on the blocks of bright logos. The page has the feel of an infographic, because the size of each block correlates to the number of actions performed on that platform in one second.
Interactivity: The interaction is subtle yet important. The number in the top left of each section adjusts to the time the visitor has spent on the website relative to the block being viewed.
Personal: The page is not necessarily personalized, but it surely is personal. Designly is talking about everyday things: voting on Reddit, posting to Instagram, posting to Tumblr, Skyping, tweeting, uploading to Dropbox, searching on Google, watching YouTube videos, liking on Facebook, sending email. Hey, I do that!
Emotional stacking: The page starts with the smallest block (Reddit) and finishes with the biggest (Facebook). Then, we get a few mind-boggling (and very shareable) facts, before the invitation to click that email button.
Convenient sharing: Notice how the sharing buttons stay with you the whole time and how Designly’s URL sits in the bottom-right of the page?
Now What? Now it’s your turn. Go create a great campaign with what you’ve seen here (and use one of Iubenda’s newly released licenses!). Also, don’t forget to tell us in the comments about the last thing that left a lasting impression on you. And feel free to link to any campaigns you may have done. Further Reading and Links Here are some links from the article you shouldn’t miss:
Full views of some of the campaigns: Great Space Race, Movember, etc.
“Why Your Marketing Campaign Sucks,” Mark Suster, TechCrunch
“The Secret Recipe for Viral Content Marketing Success,” Kelsey Libert, The Moz Blog
“How to Make That One Thing Go Viral,” Upworthy
“Elements of a Viral Launch Page,” Simon Schmid, Smashing Magazine
“Key Ingredients to Make Your App Go Viral,” Carla White, Smashing Magazine
“30,000 Newsletter Subscribers in 12 Months (Startup Lessons Learned),” Gregory Ciotti
Influence: Science and Practice, Robert B. Cialdini
Yes!: 50 Scientifically Proven Ways to Be Persuasive, Noah J. Goldstein, Steve J. Martin and Robert B. Cialdini
(al, ea) © Simon Schmid for Smashing Magazine, 2013.

  • Smashing Magazine
    So We Wanted To Build A File Uploader… (A Case Study)

       One day I discovered that I needed to design an API that would upload files from a client to a server. I work at Mail.Ru, the Russian Web mail provider, and deal with JavaScript in all its aspects. A basic feature of any Web mail service is, of course, attaching a file to an email. Mail.ru is no exception: We used to have a Flash uploader, which was rather good but still had some problems. HTML markup, graphics, business logic and even localization were all built into it, and it made the uploader pretty bloated. Furthermore, only a Flash developer could make any changes to it. We realized that we needed to build something new and different. This article covers all of our steps in creating what we consider to be a better tool for the job. Anyone who has ever written a Flash uploader knows the problems it always brings:
Cookies for authentication are difficult to manage, because their behavior in Flash is erratic and depends on the browser and operating system (i.e. cookies are not shared between HTTP requests and FileReference upload/download). Officially, Flash supports cookies only in IE; they will not be shared among other browsers, or they will be retrieved from IE. There are also assumptions that Flash reads cookies from Internet Explorer, although this is not officially confirmed.
Proxy settings are quite inconvenient to update; with Flash, they are always retrieved from IE, independent of the browser used.
Errors #2038 and #2048 — elusive errors that appear in some combinations of network settings, browser and Flash Player version.
AdBlock and the like (no comment).
So, we decided that it was the right time for a change. Here’s a list of features that we wanted to have with a new approach to this problem:
Select multiple files;
Get file information (name, size and MIME type);
Preview images before uploading;
Resize, crop and rotate images client-side;
Upload results to the server, plus CORS;
Make it independent of external libraries;
Make it extensible.
Over the last four years, we’ve all read heated debates about various features and options of HTML5, including the File API. Many publications touch on this API, and we have a few functioning examples of it. One would think, “Here’s a tool that solves the problem.” But is it as easy as it looks? Well, let’s look at the browser statistics for Mail.Ru. We have selected only browser versions that support the File API, although in some cases these browsers do not provide full support for the API. The diagram shows that a whopping 87% of browsers indeed support the File API:
Chrome 10+
Firefox 3.6+
Opera 11.10+
Safari 5.4+
IE 10+
Also, we shouldn’t forget about mobile browsers, which are becoming more popular by the day. Take iOS 6+, for example, which already supports the File API. However, 87% is not 100%, and in our case it wasn’t feasible to entirely abandon Flash at this point. So, our task evolved to building a tool that combines two techniques (File API and Flash) and that lets the developer kind of… ignore the way files are actually uploaded. During the development process, we decided to combine all preliminary development into a separate library (a unified API) that would work independent of the environment and could be used wherever you like, not only in our service. So let’s go into detail on a few specifics of the development process and see what we’ve built, how we built it and what we’ve learned along the way. Retrieve File List Basics first. Here is how files are received in HTML5. Very simple.
var input = document.getElementById("file"); input.addEventListener("change", function (){ var files = input.files; }, false); But what do you do if you have only Flash support and no File API? The basic idea that we had for users with Flash support was to make all interactions go through Flash. You couldn’t simply call up a file-selection dialog. Due to the security policy, the dialog would open only after the Flash object has been clicked. This is why the Flash object would be positioned above your target input. Then, you would attach a mouseover event handler to the document, and put the Flash object into the input’s parent element when the user hovers over it. The user would click the Flash object, open the file-selection dialog and select a file. Data would be transferred from Flash to JavaScript using ExternalInterface. The JavaScript would bind the data received with the input element and emulate the change event. [[Flash]] --> jsFunc([{ id: "346515436346", // unique identifier name: "hello-world.png", // file name type: "image/png", // mime-type size: 43325 // file size }, { // etc. }]) All further interactions between JavaScript and Flash are performed through the only available method in Flash. The first argument is a command name. The second parameter is an object with two mandatory fields: the file id and the callback function. The callback is called from Flash once the command is executed. flash.cmd("imageTransform", { id: "346515436346", // file identification matrix: { }, // transformation matrix callback: "__UNIQ_NAME__" }); The combination of the two methods results in the API, which is very similar to native JavaScript. The only difference is in the way files are received. Now we use the API method because the input has the files property only when the browser supports HTML5 and the File API. In the case of Flash, the list is taken from the data associated with it. var input = document.getElementById("file"); FileAPI.event.on(input, "change", function (){ var files = FileAPI.getFiles(input); }); Filter Usually, file uploading comes with a set of restrictions. The most common restrictions are on file size, image type and dimensions (width and height). If you look around at solutions to this issue, you’ll notice that validation is usually done on the server, and the user would receive an error message if the file doesn’t match any restrictions. I tried to solve this problem in another way, by validating files on the client side — before the file has started uploading. What’s the catch? The catch is that when we initially get the list of files, we have only the bare minimum of information about the files: name, size and type. To get more detailed information, we need to actually read the files. To do that, we can use FileReader. So if we play around with FileReader, we’ll probably come up with the following filtering technique: FileAPI.filterFiles(files, function (file, info){ if( /^image/.test(file.type) ){ return info.width > 320 && info.height > 240; } else if( file.size ){ return file.size < 20 * FileAPI.MB; } return false; }, function (files, rejected){ if( files.length > 0 ){ // ... } }); You can get the file’s dimensions “out of the box,” as well as a way to collect all of the data you’ll need: FileAPI.addInfoReader(/^audio/, function (file, callback){ // collect required information // and call it back callback( false, // or error message { artist: "...", album: "...", title: "...", ... } ); });
} ); }); Process Images In developing the API, we also wanted a convenient and powerful tool that would allow us to work with images — to create previews, crop, rotate and resize, for example — and whose functionality would be supported in both HTML5 and Flash. Flash First, we needed to understand how to do this via Flash — that is, what to send to JavaScript to build the image. As we of course know, this is usually done using the data URI. Flash reads the file as Base64 and transfers it to JavaScript. So we add data:image/png;base64 to it, and use this string as the src. A happy ending? Unfortunately, IE 6 and 7 do not support the data URI, and IE 8+, which supports the data URI, cannot process more than 32 KB. In this case, JavaScript would create a second Flash object and transfer the Base64-encoded content into it. This Flash object would restore the image. HTML5 In the case of HTML5, we would get the original image first, and then perform all required transformations using the canvas. Getting the original image can be done in one of two ways. The first is to read the file as a data URI using FileReader. The second is to use URL.createObjectURL to create a link to the file, which is bound to the current tab. Of course, the second option is good and is enough to generate a preview, but not all browsers support it fully. Opera 12, for example, does not support the accompanying URL.revokeObjectURL, which informs the browser that there is no need to keep a link to the file anymore. When we combine all of these methods, we get the FileAPI.Image class: crop(x, y, width, height) resize(width, [height]) rotate(deg) preview(width, height) — crop and resize get(callback) — get the final image All of these methods fill the transformation matrix, which is applied only when the get() method is called. Transformations are performed using the HTML5 canvas or in Flash (when the file is uploaded through the Flash interface). Here is our description of the matrix: { // parameters of the fragment of the original sx: Number, sy: Number, sw: Number, sh: Number, // destination size dw: Number, dh: Number, deg: Number } And here is a short example: FileAPI.Image(imageFile) // returns a FileAPI.Image instance .crop(300, 300) // crop the image width and height .resize(100, 100) // resize to 100x100px .get(function (err, img){ if( !err ){ // Append the result to a DOM node. images.appendChild(img); } }); Resize Digital cameras emerged long ago and are still very popular. Some cost about $20 to $30 and can take photos with a resolution of 10 MP and up. We tried to downsize photos taken with such cameras, and this is what we ended up with: As you can see, the quality is rather poor. However, if we first resize the image by half, and then do that again several times until we get the desired dimensions, then the quality is much better. This method is actually quite old, and is in fact a consequence of “nearest neighbor” interpolation: when we downsize an image in a single step, we lose image quality really “quickly.” The difference is evident: Apply a slight sharpening effect, and the image will be ideal. We also tried other variations, such as bicubic interpolation and the Lanczos algorithm. The result was a bit better, but the process took more time: 1.5 seconds versus 200 to 300 milliseconds. This method also yielded the same results in canvas and Flash. 
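To illustrate the idea, here is a minimal sketch of such stepwise downscaling with canvas. This is our own simplified example, not the library’s actual code:

function downscale(img, targetWidth, targetHeight) {
    // Start by drawing the original image onto a canvas.
    var source = document.createElement('canvas');
    source.width = img.width;
    source.height = img.height;
    source.getContext('2d').drawImage(img, 0, 0);

    // Halve the dimensions repeatedly instead of resizing in one big step.
    while (source.width / 2 >= targetWidth) {
        var half = document.createElement('canvas');
        half.width = source.width / 2;
        half.height = source.height / 2;
        half.getContext('2d').drawImage(source, 0, 0, half.width, half.height);
        source = half;
    }

    // One final pass to reach the exact target size.
    var result = document.createElement('canvas');
    result.width = targetWidth;
    result.height = targetHeight;
    result.getContext('2d').drawImage(source, 0, 0, source.width, source.height, 0, 0, targetWidth, targetHeight);
    return result;
}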
Uploading Files Now let’s sum up our various options for uploading a file to the server. iframe Yes, we still use this many years later: At first, we create a form element with a nested iframe inside. (The form’s target attribute and the name of the iframe should be the same.) After that, we move the input[type="file"] into it, because if you put a clone there, it will turn up empty. To illustrate this issue, imagine that you load a file via iframe. We could use something like this: var inp = document.getElementById('photo'); var form = getIFrameFormTransport(); form.appendChild(inp.cloneNode(true)); // send a "clone" form.submit(); However, such an input would be “empty” in IE, i.e. it wouldn’t contain the selected file, which is why we need to “send” the original input and replace it with a clone. That is also why we subscribe to events via API methods: so that they are preserved during cloning. Then, we call form.submit(), and the contents of the form are sent through the iframe. We’ll get the results using JSONP. var inp = document.getElementById('photo'); var cloneInp = inp.cloneNode(true); var form = getIFrameFormTransport(); // Insert the "clone" after the "original" inp.parentNode.insertBefore(cloneInp, inp); form.appendChild(inp); // Send the "original" form.submit(); Yes, erratic indeed. Flash In principle, everything is quite simple: JavaScript calls the method from the Flash object and passes the ID of the file to be uploaded. Flash, in turn, duplicates all states and events in JavaScript. XMLHttpRequest and FormData Now we can send binary data, not just text data. This is easy: // collect data to be sent var form = new FormData(); form.append("foo", "bar"); // the first parameter is the name of the POST parameter, form.append("attach", file); // the second parameter is a string, file or Blob // send to server var xhr = new XMLHttpRequest(); xhr.open("POST", "/upload", true); xhr.send(form); What if, for example, we want to send not a file, but canvas data? There are two options. The first, which is the easiest and most correct, is to convert the canvas to a Blob: canvasToBlob(canvas, function (blob){ var form = new FormData(); form.append("foo", "bar"); form.append("attach", blob, "filename.png"); // not all browsers support the third parameter // ... }); As you can see, this trick is not universal. In case canvas doesn’t have Canvas.toBlob() (or it cannot be implemented), we will choose another option. This option is also good for browsers that do not support FormData. The point is to create the multipart request manually and then send it to the server. The code for the canvas would look like this: var dataURL = canvas.toDataURL("image/png"); // or the result from FileReader var base64 = dataURL.replace(/^data:[^,]+,/, ""); // cut off the beginning var binaryString = window.atob(base64); // decode Base64 // now put together the multipart body, nothing complicated var uniq = '1234567890'; var data = [ '--_'+ uniq , 'Content-Disposition: form-data; name="my-file"; filename="hello-world.png"' , 'Content-Type: image/png' , '' , binaryString , '--_'+ uniq +'--' ].join('\r\n'); var xhr = new XMLHttpRequest(); xhr.open('POST', '/upload', true); xhr.setRequestHeader('Content-Type', 'multipart/form-data; boundary=_'+uniq); if( xhr.sendAsBinary ){ xhr.sendAsBinary(data); } else { var bytes = Array.prototype.map.call(data, function(c){ return c.charCodeAt(0) & 0xff; }); xhr.send(new Uint8Array(bytes).buffer); } Finally, our efforts result in the following method: var xhr = FileAPI.upload({ url: '/upload', data: { foo: 'bar' }, headers: { 'Session-Id': '...' 
}, files: { images: imageFiles, others: otherFiles }, imageTransform: { maxWidth: 1024, maxHeight: 768 }, upload: function (xhr){}, progress: function (event, file){}, complete: function (err, xhr, file){}, fileupload: function (file, xhr){}, fileprogress: function (event, file){}, filecomplete: function (err, xhr, file){} }); This has a lot of parameters, but the most important one is imageTransform. It transforms images on the client, and it operates via both Flash and HTML5. And that’s not even half of the story. We can have multiple imageTransforms: { huge: { maxWidth: 800, maxHeight: 600, rotate: 90 }, medium: { width: 320, height: 240, preview: true }, small: { width: 100, height: 120, preview: true } } This means that three copies (besides the original) will be sent to the server. What for? If you can transfer load from the server to the client, it’s a good idea to do so; the server should probably only minimally validate input files. First, you not only remove load from the server, but also avoid duplicating logic on it, moving that logic completely to the client. Second, if the original file doesn’t have to be uploaded to the server, we save bandwidth. In addition, there are often cases in which further processing on the server isn’t possible, such as integration with third-party services (Amazon S3, for example). In our experience, it’s OK to move additional logic that previously was managed server-side to the client. The upload function also calls back with an XMLHttpRequest-like object; that is, it exposes some properties and methods of XMLHttpRequest, such as: status HTTP status code statusText HTTP status text responseText server’s reply getResponseHeader(name) get a header of the server’s reply getAllResponseHeaders() get all headers abort() abort the upload Although HTML5 allows you to upload several files in one request, standard Flash allows only file-by-file uploading. Moreover, in our opinion, uploading files in a batch proved not to be a good idea. For one, Flash doesn’t support it, and we wanted identical behavior for both Flash and HTML5. Second, the user might simply run out of memory and the browser would fail. The XMLHttpRequest that is passed to these callbacks is, in fact, a proxy XMLHttpRequest: its methods and properties reflect the state of the file currently being uploaded. Final Word I’ll end with a small example of how we let users upload files using drag’n’drop: if( FileAPI.support.dnd ){ // element where you can drop the files var el = document.getElementById("el"); // subscribe to events associated with drag’n’drop FileAPI.event.dnd(el, function (over){ // method will be activated when you enter/leave the element if( over ){ el.classList.add("dropzone_hover"); } else { el.classList.remove("dropzone_hover"); } }, function (dropFiles){ // the user has dropped the files FileAPI.upload({ url: "/upload", files: { attaches: dropFiles }, complete: function (err, xhr){ if( !err ){ // files are uploaded } } }); }); } It took us quite some time to develop the library: we worked on it for about five months, as a side project alongside our regular work. The main headache was caused by the little details that differ between browsers. Chrome, Firefox and IE 10+ were just fine, but Safari and Opera behaved very differently from version to version, including inconsistencies between the Windows and Mac platforms. Still, the main challenge was to actually combine all three technologies — iframe, Flash and HTML5 — to create a bulletproof file uploader. 
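Conceptually, picking a transport comes down to feature detection. The following is a simplified sketch of the idea, not the library’s actual logic; hasFlash() stands in for whatever Flash-detection routine is available:

// Prefer the File API where available; fall back to Flash, then to an iframe.
var supportsFileAPI = !!(window.File && window.FileReader && window.FormData);
var transport = supportsFileAPI ? "xhr" : (hasFlash() ? "flash" : "iframe");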
The library is available on GitHub, and we’ve published documentation as well. Bug reports and pull requests are more than welcome! Useful Links FileAPI (and demo) on GitHub Mail.Ru on GitHub (find Tarantool, fest and much else there) “HTML5 Form Features,” Can I Use…? See the support for input[type="file"] and the multiple attribute. “File API,” Can I Use…? “FileReader,” Mozilla Developer Network “URL.createObjectURL” and “URL.revokeObjectURL,” Mozilla Developer Network “XMLHttpRequest,” Mozilla Developer Network “FormData,” Mozilla Developer Network This article has been reviewed and edited by Andrew Sumin, a front-end engineer on the Mail.Ru front-end team. (al ea il) © Konstantin Lebedev for Smashing Magazine, 2013.

  • Smashing Magazine
    Thinking Inside The Box With Vanilla JavaScript

       During the past four or five years of blogging regularly and doing research for other writing projects, I’ve come across probably thousands of articles on JavaScript. To me, it seems that a big chunk of these articles can be divided into two very general categories: jQuery; Theory and concept articles focused on things like IIFEs, closures and design patterns. Yes, I’ve likely stumbled upon a ton of other articles that don’t fall into either of these categories or that are more specific. But somehow it feels that most of the ones that really get pushed in the community fall under one of the two categories above. I think those articles are great, and I hope we see more of them. But sometimes the simplest JavaScript features are sitting right under our noses and we just haven’t had a lot of exposure to them. I’m talking about native, more-or-less cross-browser features that have been in the language for some time. So, in this article, I won’t be talking about jQuery, and I won’t be looking at structural code concepts or patterns. Instead, I’m going to introduce you to some pure JavaScript features that you can use today and that you might not have ever considered before. insertAdjacentHTML() Years ago, Microsoft introduced a method called insertAdjacentHTML() as a way to insert a specified string of text as HTML or XML into a specific place in the DOM. This feature has been available in Internet Explorer (IE) since version 4. Let’s see how it works. Suppose you have the following HTML: <div id="box1"> <p>Some example text</p> </div> <div id="box2"> <p>Some example text</p> </div> And suppose you want to insert another snippet of HTML between #box1 and #box2. You can do this quite easily using insertAdjacentHTML(): var box2 = document.getElementById("box2"); box2.insertAdjacentHTML('beforebegin', '<p>This gets inserted.</p>'); With that, the generated DOM ends up like this: <div id="box1"> <p>Some example text</p> </div> <p>This gets inserted.</p> <div id="box2"> <p>Some example text</p> </div> View a simple demo. The insertAdjacentHTML() method takes two parameters. The first defines where you want to place the HTML, relative to the targeted element (in this case, the #box2 element). This may be one of the following four string values: beforebegin The HTML would be placed immediately before the element, as a sibling. afterbegin The HTML would be placed inside the element, before its first child. beforeend The HTML would be placed inside the element, after its last child. afterend The HTML would be placed immediately after the element, as a sibling. Again, these are string values, not keywords, so they must be placed inside of single or double quotes. The second parameter is the string you want to insert, also placed in quotes (or a variable holding a previously defined string). Note that it should be a string, not a DOM element or element collection; so, it could just be text, with no actual markup. insertAdjacentHTML() has, as outlined in a post on Mozilla Hacks, a couple of advantages over something more conventional, like innerHTML: It does not corrupt the existing DOM elements, and it performs better. And if you’re wondering why this one hasn’t received a lot of attention so far, despite being well supported in all in-use versions of IE, the reason is probably that, as mentioned in the Mozilla Hacks article, it was not added to Firefox until version 8. Because all other major browsers support this, and Firefox users have been auto-updating since version 5, it’s quite safe to use. 
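As a quick usage example of our own (with a hypothetical #my-list element), ‘beforeend’ makes appending items cheap because, unlike innerHTML +=, it does not re-serialize and re-parse the element’s existing children:

var list = document.getElementById("my-list"); // hypothetical <ul> element

// Append three items without touching the existing children.
for (var i = 1; i <= 3; i++) {
  list.insertAdjacentHTML('beforeend', '<li>Item ' + i + '</li>');
}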
For more on this method: “insertAdjacentHTML(),” in the “DOM Parsing and Serialization” specification, WHATWG “Element.insertAdjacentHTML,” Mozilla Developer Network getBoundingClientRect() You can obtain the coordinates and, by extension, the dimensions of any element on the page using another lesser-known method, the getBoundingClientRect() method. Here’s an example of how it might be used: var box = document.getElementById('box'), x, y, w; x = box.getBoundingClientRect().left; y = box.getBoundingClientRect().top; if (box.getBoundingClientRect().width) { w = box.getBoundingClientRect().width; // for modern browsers } else { w = box.offsetWidth; // for oldIE } console.log(x, y, w); View a demo. Here, we’ve targeted an element with an ID of box, and we’re accessing three properties of the object returned by getBoundingClientRect() for the #box element. Here’s a summary of six fairly self-explanatory properties that this method exposes: top How many pixels the top edge of the element is from the topmost edge of the viewport left How many pixels the left edge of the element is from the leftmost edge of the viewport right How many pixels the right edge of the element is from the leftmost edge of the viewport bottom How many pixels the bottom edge of the element is from the topmost edge of the viewport width The width of the element height The height of the element All of these properties are read-only. And notice that the coordinate properties (top, left, right and bottom) are all relative to the top-left of the viewport. What about the if/else in the example from above? IE 6 to 8 don’t support the width and height properties; so, if you want full cross-browser support for those, you’ll have to use offsetWidth and/or offsetHeight. As with insertAdjacentHTML(), despite the lack of support for width and height, this method has been supported in IE since ancient times, and it has support everywhere else that’s relevant, so it’s pretty safe to use. I will concede something here: Getting the coordinates of an element using offset-based values (such as offsetWidth) is actually faster than using getBoundingClientRect(). Note, however, that offset-based values will always round to the nearest integer, whereas getBoundingClientRect()’s properties will return fractional values. For more info: “Element.getBoundingClientRect,” Mozilla Developer Network “getBoundingClientRect Is Awesome,” John Resig The HTMLTableElement API If you’ve ever manipulated elements on the fly with JavaScript, then you’ve likely used methods such as createElement, removeChild, parentNode and related features. And you can manipulate HTML tables in this way, too. But you may not realize that there is a very specific API for creating and manipulating HTML tables with JavaScript, and it has very good browser support. Let’s take a quick look at some of the methods and properties available with this API. All of the following methods are available to be used on any HTML table element: insertRow() deleteRow() insertCell() deleteCell() createCaption() deleteCaption() createTHead() deleteTHead() And then there are the following properties: caption tHead tFoot rows rows.cells With these features, we can create an entire table, including rows, cells, a caption and cell content. Here’s an example: var table = document.createElement('table'), tbody = document.createElement('tbody'), i, rowcount; table.appendChild(tbody); for (i = 0; i <= 2; i += 1) { rowcount = i + 1; tbody.insertRow(i); tbody.rows[i].insertCell(0); tbody.rows[i].insertCell(1); tbody.rows[i].cells[0].appendChild(document.createTextNode('row ' + rowcount + ', cell 1')); tbody.rows[i].cells[1].appendChild(document.createTextNode('row ' + rowcount + ', cell 2')); } table.createCaption(); table.caption.appendChild(document.createTextNode('A table generated with the table API')); document.body.appendChild(table); View a demo. The script above combines some customary core DOM methods with methods and properties of the HTMLTableElement API. 
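For comparison, here is roughly what just one row with a single cell looks like when built with only the generic DOM methods (our own sketch):

var table = document.createElement('table'),
    tbody = document.createElement('tbody'),
    row   = document.createElement('tr'),
    cell  = document.createElement('td');

// Every level of the table has to be created and wired up by hand.
cell.appendChild(document.createTextNode('row 1, cell 1'));
row.appendChild(cell);
tbody.appendChild(row);
table.appendChild(tbody);
document.body.appendChild(table);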
The same code written without the table API might be considerably more complex and, thus, harder to read and maintain. Once again, these table-related features have support all the way back to IE 7 (and probably earlier) and everywhere else that’s relevant, so feel free to use these methods and properties where you see fit. For more info: “HTMLTableElement,” Mozilla Developer Network “Tabular Data,” in the “HTML” specification, WHATWG Wrapping Up This discussion of specific native JavaScript features has been a reminder of sorts. We can easily become comfortable with the features of a language that we know well, without looking deeper into the language’s syntax for simpler and more maintainable ways to solve our problems. So, from time to time, look inside the box, so to speak. That is, investigate all that vanilla JavaScript has to offer, and try not to rely too much on plugins and libraries, which can unnecessarily bloat your code. (Credits of image on front page: nyuhuhuu) (al ea) © Louis Lazaris for Smashing Magazine, 2013.

  • Smashing Magazine
    Addressing The Responsive Images Performance Problem: A Case Study

       Five-inch mobile devices are on the market that have the same screen resolution as 50-inch TVs. We have users with unlimited high-speed broadband as well as users who pay money for each megabyte transferred. Responsive design for images is about optimizing the process of serving images to users. In this article, we will share our responsive image technique, the “padding-bottom” technique, which we researched and implemented on the mobile version of the Swedish news website Aftonbladet. The techniques presented here are the result of a few weeks of research that we did in October 2012. We were fortunate enough to be a part of the team that built a new responsive mobile website for Aftonbladet. Aftonbladet is Sweden’s largest website, and its mobile version gets about 3 million unique users and up to 100 million page views per week. With that many users, we felt it was our responsibility to make a fast and well-optimized website. Saving just 100 KB of image data per page view would translate into a lot of terabytes of data traffic saved in Sweden per year. We started out by researching other responsive image techniques, but because none of them was a perfect match, we ended up combining some of the best hacks into our own solution. Please note that this project covered only a responsive mobile website; but do not worry — the technique presented here applies to all types of responsive websites. The Specification We started out by creating a simple specification in order to select a suitable responsive image solution. The solution had to: be easy to cache; multiserve images. Let’s go through these requirements and see what they mean. Easy to Cache With a website that gets traffic peaks of over 10,000 requests per second, we wanted to keep the server logic as simple as possible. This means that we didn’t want to use server-side device detection or a cookie-based solution that serves multiple versions of the HTML. We needed a single HTML file to be served to all users, although manipulating the HTML with JavaScript after it has loaded is acceptable. The same rules apply to images; we needed to be able to put the images on a content delivery network (CDN), and we did not want any dynamics in the image-serving logic. Multiserving Images We wanted to serve different image versions to different devices. One big complaint about our previous mobile website was that high-DPI iPhones and Android devices did not get the high-resolution images they deserved. So, we wanted to improve image quality, but only for the devices that were capable of displaying it. Loading Images With JavaScript JavaScript, if placed in the footer where it should be, will load after the HTML and CSS have been parsed. This means that, if JavaScript is responsible for loading images, we can’t take advantage of the browser’s preloader, and so an image will start downloading a fair bit later than normal. This is not good, of course, and it reveals another problem: The page might reflow every time the JavaScript inserts an image into the DOM. Reflowing happens when the browser recalculates the dimensions of the elements on the page and redraws them. We have set up a demo page and a video that demonstrate this effect. Note that the demo page has an inserted delay of 500 milliseconds between each image in order to simulate a slow connection speed. As you can see from the video, another very annoying effect is that the user will likely get lost in the reflowing when returning to a page with the “Back” button. 
This is actually a serious problem for websites such as Aftonbladet. Having a functional “Back” button will keep users longer on the website. The reflowing problem would not really be present on a website that is not responsive, because we would be able to set a width and height in pixels on the image tag: <img src="image.jpg" width="320" height="180" alt=""> One important aspect of responsive Web design is to remove those hardcoded attributes and to make images fluid, with CSS: img { max-width: 100%; } No More max-width: 100% We needed to find a solution whereby we could reserve space for an image with only HTML and CSS and, thus, avoid reflowing. That is, when the JavaScript inserts an image into the page, it would just be inserted in the reserved area, and we would avoid reflowing. So, we threw out one of the cornerstones of responsive Web design, img { max-width: 100% }, and searched for another solution that could reserve space for a responsive image. What we needed was something that specifies just the aspect ratio of the image and lets the height shrink with the width. And we found a solution. The Padding-Bottom Hack This technique is based on something called intrinsic ratios, but because none of our team’s members could remember, understand or pronounce the term “intrinsic,” we just called it the “padding-bottom hack.” Many people learned about this feature back in 2009 in A List Apart’s article “Creating Intrinsic Ratios for Video,” by Thierry Koblentz, and the technique is often used to embed third-party content and media on responsive websites. With the technique, we define the height as a measure relative to the width. Padding and margin have such intrinsic properties, and we can use them to create aspect ratios for elements that do not have any content in them. Because padding has this capability, we can set padding-bottom to be relative to the width of an element. If we also set height to be 0, we’ll get what we want. .img-container { padding-bottom: 56.25%; /* 16:9 ratio */ height: 0; background-color: black; } The next step is to place an image inside the container and make sure it fills up the container. To do this, we need to position the image absolutely inside the container, like so: .img-container { position: relative; padding-bottom: 56.25%; /* 16:9 ratio */ height: 0; overflow: hidden; } .img-container img { position: absolute; top: 0; left: 0; width: 100%; height: 100%; } The container reserves the space needed for the image. Now we can tweak our demo, applying the padding-bottom hack, and the user will no longer get lost in the reflowing that we saw earlier. Also, the “Back” button functions as expected. See the new video and demo. This technique improves the user experience of the website quite a bit over the traditional max-width approach, but the experienced reader will by now have noticed two things: We need to know the aspect ratio of the image before we load it. Images could be scaled up larger than their original size. To handle the aspect ratios, you need either to have a content management system with which you can control the templates or to have a limited, fixed set of aspect ratios for images. If you have something in between, whereby you cannot affect how image tags are rendered, then this method will probably be hard to use. At Aftonbladet, we decided to calculate the padding-bottom percentage on the server and print it out as an inline style in the HTML, as you will see in the following code snippets. 
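For example, for an image whose master file is 640×360 pixels, the server-rendered template could emit the ratio inline like this (a hypothetical rendering, with a made-up class name):

<!-- 360 / 640 = 0.5625, so the container reserves a 16:9 box -->
<div class="img-container" style="padding-bottom: 56.25%;">
  <!-- the image is loaded into this reserved box later -->
</div>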
For the second problem, we found that, for our use case, letting the image scale up if needed (and losing some quality) was actually better than setting a fixed maximum width for the image. Choosing An Image-Loading Technique Now that we’ve allowed ourselves to load images with JavaScript, because we’ve minimized the reflowing, we can set up the requirements for this: The resulting HTML should be a single img tag. The DOM elements should be minimal. It should execute as quickly as possible. It should not break when JavaScript is disabled. Based on this simple specification, we created an inline vanilla JavaScript, based on the “noscript” technique. The idea is to add the information about different image sizes to the HTML as data attributes in a noscript tag. The content of the noscript tag would be an img tag and would be shown to browsers that have JavaScript turned off. Let’s look at the markup: <div class="lazy-load" style="padding-bottom: 56.25%;"> <noscript data-src-small="small.jpg" data-src-medium="medium.jpg" data-src-high="high.jpg" data-alt="Image description"> <img src="small.jpg" alt="Image description"> </noscript> </div> The job of the JavaScript, then, is to parse the content of the page, identify images that should be lazy-loaded, check the size of the device’s screen and pick the correct image. The following code would look for images to load and insert them into the DOM. It is important that the JavaScript be inline and load as soon as possible after the HTML. The script would also retrieve the alt text from the noscript tag and insert it into the newly created img tag. var lazyloadImage = function (imageContainer) { var imageVersion = getImageVersion(); if (!imageContainer || !imageContainer.children) { return; } var img = imageContainer.children[0]; if (img) { var imgSRC = img.getAttribute("data-src-" + imageVersion); var altTxt = img.getAttribute("data-alt"); if (imgSRC) { var imageElement = new Image(); imageElement.src = imgSRC; imageElement.setAttribute("alt", altTxt ? altTxt : ""); imageContainer.appendChild(imageElement); imageContainer.removeChild(imageContainer.children[0]); } } }, lazyLoadedImages = document.getElementsByClassName("lazy-load"); for (var i = 0; i < lazyLoadedImages.length; i++) { lazyloadImage(lazyLoadedImages[i]); } Picking The Perfect Image So far, the techniques described here generally apply to any website that implements responsive images. The last step, selecting the image to send to the browser, is different in the way that it has to be adapted to the needs of the website. Many factors need to be considered when choosing the optimal image to send to a particular device, such as screen size, network speed, cacheability, overall page weight and the user’s preference. The website we built for Aftonbladet mainly targets mobile browsers, and we were lucky enough to have a lot of statistics on the average user’s behavior. By analyzing the numbers, we could identify some trends. First, the vast majority hold their device in portrait mode. For reading and browsing articles, portrait mode is the natural choice. And while screen size varies a lot, over 99% of the traffic we analyzed represents devices with a viewport width of either 320 or 360 pixels. Second, most of the visiting devices have high-density screens, with a native width of 480, 640, 720 or 1080 pixels. The highest resolutions come from newer phones, such as the Galaxy S4 and Xperia Z; while a 1080 pixel-wide image looks great on those phones, tests showed that a 720 pixel-wide image would look good enough, with less of a bandwidth cost. After analyzing the data, we settled on three versions for each image: small (optimized for a 320 pixel-wide screen), medium (optimized for a 640 pixel-wide screen), high (optimized for a 720 pixel-wide screen). 
(Devices without JavaScript would get the small image.) We believe these settings are reasonable for a mobile website, but a fully responsive website that targets all kinds of devices would probably benefit from different settings. We give the versions logical names, instead of specifying media queries in the markup. We chose to do it this way for flexibility. This makes it easier to evolve the JavaScript in the future to, for example, adapt to network speed or enable a user setting that overrides which image to use. In its simplest form, the engine for selecting image versions could be implemented as in the following example (although, to support Internet Explorer, we’d need another function as a workaround for the absence of window.devicePixelRatio; a sketch follows the example). var getImageVersion = function() { var devicePixelRatio = getDevicePixelRatio(); /* Function defined elsewhere. */ var width = window.innerWidth * devicePixelRatio; if (width > 640) { return "high"; } else if (width > 320) { return "medium"; } else { return "small"; // default version } }; 
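The getDevicePixelRatio() helper itself is not shown here; a minimal version could look like the sketch below (our own assumption: the IE branch uses the screen.deviceXDPI and screen.logicalXDPI properties that older IE exposes instead of window.devicePixelRatio):

var getDevicePixelRatio = function () {
    if (window.devicePixelRatio !== undefined) {
        return window.devicePixelRatio;
    }
    // Older IE reports physical and logical DPI; their ratio approximates it.
    if (window.screen.deviceXDPI && window.screen.logicalXDPI) {
        return window.screen.deviceXDPI / window.screen.logicalXDPI;
    }
    return 1; // safe default
};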
We also tried to take the screen’s height into account when selecting the right image. In theory, this would have been a nice complement to make sure that the image is well suited to the device. But when we tested the theory in the real world, we soon found too many edge cases that made it impossible to detect the height in a way that was good enough. Apps that embed Web views inside scroll views reported a seemingly random height between 0 and 3000 pixels, and features such as Samsung’s TouchWiz system, which has a split-screen mode, made us abandon the screen’s height as a reliable value for choosing image sizes. We created a simple demo page that has all of the JavaScript and CSS needed. But keep in mind that our code is targeted at mobile devices, so it doesn’t work out of the box in, say, old Internet Explorer browsers. Making Smaller Images Large beautiful images use up a lot of bandwidth. Downloading a lot of 720 pixel-wide images on a cellular network can be both slow and costly for the user. While a new image format such as WebP would help some users, it is not supported by enough browsers to be viable as the only solution. Fortunately, thanks to research by Daan Jobsis, we can take advantage of the fact that a high compression rate doesn’t affect the perceived quality of an image very much if the image’s dimensions are larger than displayed or if the image is displayed at its native size on a high-density screen. With aggressive JPEG compression, it is, therefore, possible to maintain a reasonable download size while still having images look beautiful on high-density displays. We already had an image server that could generate scaled, cropped and compressed images on the fly, so it was just a matter of choosing the right settings. This is also one reason why we didn’t include an image version for 480-pixel screens. Scaling down a 640 pixel-wide image with a high compression level made for a better-looking image at a smaller size than we could achieve with an image that had the native resolution of the 480-pixel screen. In this case, we decided that making the device scale the image to fit was worth it. Red Areas Don’t Compress Well A high compression rate is no silver bullet, though. Some images look terrible when compressed, especially ones with prominent bright-red areas, in which JPEG compression artifacts would be clearly visible and could spoil the overall impression of the image. Unfortunately, the editors at Aftonbladet have a fondness for images with prominent bright-red areas, which made our task just a little more challenging. These two images are saved with a 30% quality setting. While the image on the left might be passable even on a normal screen, the red circle in the right image looks bad even on a high-density screen. Finding a Compromise We could solve this problem in a few ways. We could increase the dimensions and compression of the images even more, which would make the artifacts less visible. This would mean that the browser has to scale images while rendering the page, which could have a performance impact. It would also require larger source images, which in practice are not always available. Another solution would be to let the editors choose how compression should be handled for each image, but this would add another step to the editors’ workflow, and we would need to educate them on the intricacies of how image compression, size, performance and mobile devices work together. In the end, we settled on a compromise and avoided using high compression rates for important images and image galleries, instances where the image is the center of attention. In these cases, we also make sure to load only the visible images, so as not to waste the user’s bandwidth. The teaser images on Aftonbladet’s section pages (left) work really well with high compression levels, while a full-screen image gallery (right) benefits from higher-quality images. Generating the different images can be problematic. We have the luxury of an existing back end that can scale, compress and crop images on demand. Rolling your own might entail a lot of work, but implementing it as, for example, a WordPress plugin (using the WP_Image_Editor class) would actually be pretty straightforward. 

  • Smashing Magazine
    Introducing Responsive Web Typography With FlowType.JS

       It’s our great pleasure to support active members of the Web design and development community. Today, we’re proud to present FlowType.JS, which allows a perfect character count per line at any screen width. This article is yet another special of our series of various tools, libraries and techniques that we’ve published here on Smashing Magazine: LiveStyle, PrefixFree, Foundation, Sisyphus.js, GuideGuide, Gridpak, JS Bin, CSSComb and Jelly Navigation Menu. — Ed. While working on an image-heavy site for Simple Focus, a couple of our designers, John Wilson and Casey Zumwalt, noticed how images always scaled perfectly. Pull the corner of the browser window and the images expand to fill the space. Push back the corner, they shrink and fall into place. The line length of hypertext, on the other hand, changes based on its parent element’s width, which has a negative effect on readability. “Wouldn’t it be nice,” John asked, “if text worked more like images?” Casey assured him that it could, with a jQuery plugin, if only they could figure out the math. “In a fluid layout, browser width and typographic measure are linked: the wider the viewport, the more characters per line.” – Trent Walton Simple Focus is mainly a design firm, so like most programming ideas we have, we didn’t do anything with it. Then, a few weeks later, John was rereading Trent Walton’s article about fluid type and was inspired to try and figure it out. An hour later, we had a working prototype and were kicking the tires internally. Within two weeks, FlowType.JS was fully developed and ready to be sent into the world. Here’s the process of how we got there: Technically Speaking FlowType.JS, when boiled down, is nothing more than some clever math wrapped in a jQuery plugin, with some options for controlling font sizes to accomplish a certain line length. Let’s take a deeper look into the code to better understand what’s going on: The Basic Math As you will see below, it’s pretty simple stuff. First, we need to measure the width of an element in order to set a base number, which will be the key to the rest of the equation. Then we divide that base by a number that resolves to a reasonable font-size. For example, if an element measures at 1000px and we divide it by 50, we end up with 20px, which is a reasonable font-size. Line-height is another simple equation based on the font-size. Let’s say we choose a line-height of 1.45 times the font-size for readability. This equation is easy: font-size multiplied by 1.45 equals the recommended line-height. The Prototype An initial prototype shows us the idea actually works: $(window).ready( function() { var $fontSize = $(window).width() / 50; $('element').css({ 'font-size': $fontSize + 'px', 'line-height': ($fontSize * 1.45) + 'px' }); }); $(window).resize( function() { var $fontSize = $(window).width() / 50; $('element').css({ 'font-size': $fontSize + 'px', 'line-height': ($fontSize * 1.45) + 'px' }); }); If you were paying attention, you may have noticed that there’s one major problem with the code: the math is based on the window’s width, not the element’s width. This causes problems with breakpoints where elements resize to a larger dimension: the text gets smaller while the element becomes wider. Improved Code Revising the code to measure the element’s width instead of the window’s fixed this problem. 
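A minimal version of that element-based revision might look like this (our own sketch, without the threshold options discussed next):

$('element').each(function() {
   var $el = $(this),
       fontSize = $el.width() / 50; // base the math on the element, not the window

   $el.css({
      'font-size'   : fontSize + 'px',
      'line-height' : (fontSize * 1.45) + 'px'
   });
});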
During this simple update, we also decided to start including options for maximum and minimum thresholds for font-sizes and element width, since a very narrow column would cause the font-size to become too small to read. Read more about these thresholds. Sharing the revised code here would make this article entirely too long, as it included several ‘if’ statements as well as duplicate code. Inefficient, to say the least. With that said, at least it had options and worked well. But we’re focused on design, remember? So we wanted to get a little advice from some friends before we put something out there that could make us look like noobs. A Little Help from Friends Almost ready to launch, FlowType.JS was reviewed by several peers. Dave Rupert suggested we make sure it performs well by creating a demo page with several instances and lots of text. We put that together and held our breath, and fortunately it worked very well. Then we asked Giovanni DiFeterici for his feedback. Giovanni surprised us by refactoring and condensing all the ‘if’ statements into two lines of code. In the end, the compressed version of FlowType.JS can be as low as 450 bytes. We also got advice from plenty of other generous friends on everything all the way down to spell-checking the demo site. The Final Code The final code is phenomenally simple. A few options and variables are set simultaneously, a base function called changes is where all the magic happens, and two simple calls invoke changes: one sets the font-size on load, and another recalculates it on window resize. Take a look at the code here: (function($) { $.fn.flowtype = function(options) { var settings = $.extend({ maximum : 9999, minimum : 1, maxFont : 9999, minFont : 1, fontRatio : 35, lineRatio : 1.45 }, options), changes = function(el) { var $el = $(el), elw = $el.width(), width = elw > settings.maximum ? settings.maximum : elw < settings.minimum ? settings.minimum : elw, fontBase = width / settings.fontRatio, fontSize = fontBase > settings.maxFont ? settings.maxFont : fontBase < settings.minFont ? settings.minFont : fontBase; $el.css({ 'font-size': fontSize + 'px', 'line-height': (fontSize * settings.lineRatio) + 'px' }); }; return this.each(function() { var that = this; $(window).resize(function(){ changes(that); }); changes(this); }); }; }(jQuery)); How It Works and Fallback As you can see, the code applies the newly calculated numbers as inline CSS to the element that is selected. Because this new CSS is inline, it overwrites whatever you have set in your linked stylesheets, creating a natural fallback in case a user has JavaScript disabled. You’ll want to configure the settings based on the font choices you make, since the math works out differently based on the size of the font you choose. Implementation FlowType.JS was built as a jQuery plugin, so getting started is easy. All you need to do is call FlowType.JS and configure a few settings based on your design. $('body').flowtype({ minimum : 500, maximum : 1200, minFont : 12, maxFont : 40, fontRatio : 30, lineRatio : 1.45 }); Full instructions are on our demo site. If jQuery isn’t your thing, one GitHub community member has already ported it to native JavaScript. Nothing Is Ever Finished We have more ideas for ways to improve the plugin, but we are treating it as an experiment first and foremost. It solves a common problem in responsive design, where line-length and line-height aren’t ideal between breakpoints. Regardless, there have been some questions raised about FlowType.JS by many smart developers and designers. One question that we’ve been asked is centered on typographical theory: should a design start with font-size or element width when optimizing text for legibility? I think the best answer is that it’s a judgement call: reading the text in your design is the best way to determine what’s most legible. 
We’ve simply written a tool to help you accomplish what you want with your designs. Another is about accessibility: doesn’t this tool disable text zoom, thus making sites less accessible? We’re aware of this behavior, but users are able to zoom beyond 200% and see the font size increase. For now, simply remember to take your audience into consideration when designing with FlowType.JS. Remember, like any utility, it’s not a cure-all for the challenges of Web design. We’re just trying to contribute a small idea to the Web design and development community, and we welcome feedback over at GitHub. (il, ea) © JD Graffam for Smashing Magazine, 2013.

  • Smashing Magazine
    Challenging CSS Best Practices

       Editor’s Note: This article features techniques that are used in practice by Yahoo! and questions coding techniques that we are used to today. You might be interested in reading Decoupling HTML From CSS by Jonathan Snook, On HTML Elements Identifiers by Tim Huegdon and Atomic Design With Sass by Robin Rendle as well. Please keep in mind: some of the mentioned techniques are not considered to be best practices. When it comes to CSS, I believe that the sacred principle of “separation of concerns” (SoC) has led us to accept bloat, obsolescence, redundancy, poor caching and more. Now, I’m convinced that the only way to improve how we author style sheets is by moving away from this principle. For those of you who have never heard of the SoC principle in the context of Web design, it relates to something commonly known as the “separation of the three layers”: structure, presentation, behavior. It is about dividing these concerns into separate resources: an HTML document, one or more cascading style sheets and one or more JavaScript files. But when it comes to the presentational layer, “best practice” goes way beyond the separation of resources. CSS authors thrive on styling documents entirely through style sheets, an approach that has been sanctified by Dave Shea’s excellent project CSS Zen Garden. CSS Zen Garden is what most — if not all — developers consider to be the standard for how to author style sheets. The Standard To help me illustrate issues related to today’s best practices, I’ll use a very common pattern: the media object. Its combination of markup and CSS will be our starting point. Markup In our markup, a wrapper (div.media) contains an image wrapped in a link (a.img), followed by a div (div.bd): <div class="media"> <a href="https://twitter.com/thierrykoblentz" class="img"> <img src="thierry.jpg" alt="me" width="40" /> </a> <div class="bd"> @thierrykoblentz 14 minutes ago </div> </div> CSS Let’s give a 10-pixel margin to the wrapper and style both the wrapper and div.bd as block-formatting contexts (BFC). In other words, the wrapper will contain the floated link, and the content of div.bd will not wrap around said link. A gutter between the image and text is created with a 10-pixel margin (on the float): .media { margin: 10px; } .media, .bd { overflow: hidden; _overflow: visible; zoom: 1; } .media .img { float: left; margin-right: 10px; } .media .img img { display: block; } Result Here is the presentation of the wrapper, with the image in the link and the blob of text: @thierrykoblentz 14 minutes ago A New Requirement Comes In Suppose we now need to be able to display the image on the other side of the text as well. Markup Thanks to the magic of BFC, all we need to do is change the styles of the link. For this, we use a new class, imgExt: <div class="media"> <a href="https://twitter.com/thierrykoblentz" class="img imgExt"> <img src="thierry.jpg" alt="me" width="40" /> </a> <div class="bd"> @thierrykoblentz 14 minutes ago </div> </div> CSS We’ll add an extra rule to float the link to the right and change its margin: .media { margin: 10px; } .media, .bd { overflow: hidden; _overflow: visible; zoom: 1; } .media .img { float: left; margin-right: 10px; } .media .img img { display: block; } .media .imgExt { float: right; margin-left: 10px; } Result The image is now displayed on the opposite side: @thierrykoblentz 14 minutes ago One More Requirement Comes In Suppose we now need to make the text smaller when this module is inside the right rail of the page. To do that, we create a new rule, using #rightRail as a contextual selector: Markup Our module is now inside a div#rightRail container: <div id="rightRail"> <div class="media"> <a href="https://twitter.com/thierrykoblentz" class="img"> <img src="thierry.jpg" alt="me" width="40" /> </a> <div class="bd"> @thierrykoblentz 14 minutes ago </div> </div> </div> CSS Again, we create an extra rule, this time using a descendant selector, #rightRail .bd. 
.media { margin: 10px; } .media, .bd { overflow: hidden; _overflow: visible; zoom: 1; } .media .img { float: left; margin-right: 10px; } .media .img img { display: block; } .media .imgExt { float: right; margin-left: 10px; } #rightRail .bd { font-size: smaller; } Result Here is our original module, showing inside div#rightRail: @thierrykoblentz 14 minutes ago What’s Wrong With This Model? Simple changes to the style of our module have resulted in new rules in the style sheet. There must be a way to style things without always having to write more CSS rules. We are grouping selectors for common styles (.media, .bd {}). Grouping selectors, rather than using a class associated with these styles, will lead to more CSS. Of our six rules, four are context-based. Rules that are context-specific are hard to maintain. Styles related to such rules are not very reusable. RTL and LTR interfaces become complicated. To change direction, we’d need to overwrite some of our styles (i.e. write more rules). For example: .rtl .media .img { margin-right: auto; /* reset */ float: right; margin-left: 10px; } .rtl .media .imgExt { margin-left: auto; /* reset */ float: left; margin-right: 10px; } Meet Atomic Cascading Style Sheet a·tom·ic /ə’tämik/ of or forming a single irreducible unit or component in a larger system. As we all know, the smaller the unit, the more reusable it is. “Treat code like Lego. Break code into the smallest little blocks possible.” — @csswizardry (via @stubbornella) #btconf — Smashing Magazine (@smashingmag) May 27, 2013 To break down styles into irreducible units, we can map classes to a single style, rather than many. This will result in a more granular palette of rules, which in turn improves reusability. Let’s revisit the media object using this new approach. Markup We are using five classes, none of which are related to content: <div class="Bfc M-10"> <a href="https://twitter.com/thierrykoblentz" class="Fl-start Mend-10"> <img src="thierry.jpg" alt="me" width="40" /> </a> <div class="Bfc Fz-s"> @thierrykoblentz 14 minutes ago </div> </div> CSS Each class is associated with one particular style. For the most part, this means we have one declaration per rule. .Bfc { overflow: hidden; zoom: 1; } .M-10 { margin: 10px; } .Fl-start { float: left; } .Mend-10 { margin-right: 10px; } .Fz-s { font-size: smaller; } Result @thierrykoblentz 14 minutes ago What Is This About? Let’s ignore the class names for now and focus on what this does (or does not): No contextual styling We do not use contextual or descendant selectors, which means that our style sheet has no dead weight. Directions (left and right) are “abstracted.” Rather than overwriting styles, we serve a RTL style sheet that contains rules such as these: .Fl-start { float: right; } .Mend-10 { margin-left: 10px; } Same classes, same properties, different values. But the most important thing to notice here is that we are styling via markup. We have changed the context in which we style our modules. We are now editing HTML templates instead of style sheets. I believe that this approach is a game-changer because it narrows the scope dramatically. We are styling not in the global scope (the style sheet), but at the module and block level. We can change the style of a module without worrying about breaking something else on the page. And we can do this without adding any rule to the style sheet, let alone creating a new class and rule: .someBasicStyleForThisElementHere {...} We get no redundancy. Selectors are not duplicated, and styles belong to a single rule instead of being part of many. For example, the style sheets that this page links to contain 72 float declarations. 
Also, abandoning a style — for example, deciding to always keep the image on the left side of the module — does not make any of our rules obsolete. Sound Good? Not sold yet? I hear you saying, “This goes against every single rule in the book. This is no better than inline styling. And your class names are not only cryptic, but unsemantic, too!” Fair enough. Let’s address these concerns. Regarding Unsemantic Class Names If you check the W3C’s “Tips for Webmasters,” where it says “Good names don’t change,” you’ll see that the argument is about maintenance, not semantics per se. All it says is that changing styles is easier in a CSS file than in multiple HTML files. .border4px would be a bad name only if changing the style of an element required us to change the declaration that that class name is associated with. In other words: .border4px { border-width: 2px; } Regarding Cryptic Class Names For the most part, these class names follow the syntax of Zen Coding — see the “Zen Coding Cheat Sheet” (PDF) — now renamed Emmet. In other words, they are simple abbreviations. There are exceptions for styles associated with direction (left and right) and styles that involve a combination of declarations. For example, Bfc stands for “block-formatting context.” Regarding Mimicking Inline Styles Hopefully, the diagram below clears things up: Inline styles versus Atomic CSS. Specificity The technique is not as specific as @style. It lowers style weight because rules rely on a single class, as opposed to rules like .parent .bd {}, which clocks in at 0.0.2.0 (see “CSS Specificity: Things You Should Know”). Verbosity Most classes are abbreviations of declarations (for example, M-10 versus margin: 10px). Some classes, such as Bfc, refer to more than one style (see “Mapping” in the diagram above). Other classes use “start” and “end” keywords, rather than left and right values (see “Abstraction” in the diagram above). Here are the advantages of @style: Scope Styles are “sandboxed” to the nodes they are attached to. Portability Because the styles are “encapsulated,” you can move modules around without losing their styles. Of course, we still need the style sheet; however, because we are making context irrelevant, modules can live anywhere on a page, website or even network. The Path To Bloat Because the styles of our module are tied only to presentational class names, they can be anything we want them to be. For example, if we need to create a simple two-column layout, all we need to do is replace the link with a div in our template. That would look like this: <div class="Bfc M-10"> <div class="Fl-start Mend-10 W-50">column 1</div> <div class="Bfc">column 2</div> </div> And we would need only one extra rule in the style sheet: .Bfc { overflow: hidden; zoom: 1; } .M-10 { margin: 10px; } .Fl-start { float: left; } .Mend-10 { margin-right: 10px; } .Fz-s { font-size: smaller; } .W-50 { width: 50%; } Compare this to the traditional way: <div class="wrapper"> <div class="sidebar">column 1</div> <div class="content">column 2</div> </div> This would require us to create three new classes, to add an extra rule and to group selectors. .wrapper, .content, .media, .bd { overflow: hidden; _overflow: visible; zoom: 1; } .sidebar { width: 50%; } .sidebar, .media .img { float: left; margin-right: 10px; } .media .img img { display: block; } I think the code above pretty well demonstrates the price we pay for following the SoC principle. In my experience, all it does is grow style sheets. Moreover, the larger the files, the more complex the rules and selectors become. And then no one would dare edit the existing rules: We leave alone rules that we suspect to be obsolete for fear of breaking something. 
We create new rules, rather than modify existing ones, because we are not sure the latter is 100% safe. In other words, we make things worse because we can get away with bloat. Nowadays, people are accustomed to very large style sheets, and many authors think they come with the territory. Rather than fighting bloat, they use tools (i.e. preprocessors) to help them deal with it. Chris Eppstein tells us: “LinkedIn has over 1,100 Sass files (230k lines of SCSS) and over 90 web developers writing Sass every day.” CSS Bloat vs. HTML Bloat Let’s face it: the data has to live somewhere. Consider these two blocks: <div class="wrapper"> and <div class="Bfc M-10"> In many cases, the “semantic” class name makes up more bytes than the presentational class names (.wrapper versus .Bfc). But I do not think this is a real concern compared to what most apps onboard these days via data- attributes. This is where gzip comes into play, because the high redundancy in class names across a document would achieve better compression. And the same is true of style sheets, in which we have many redundant sequences: .M-1 { margin: 1px; } .M-2 { margin: 2px; } .M-4 { margin: 4px; } .M-6 { margin: 6px; } .M-8 { margin: 8px; } etc. Caching Presentational rules do not change. Style sheets made from such rules mature into tool sets in which authors can find everything they need. By their nature, they stop growing and become immutable, and immutable is cache-friendly. No More .button Class? The technique I’m discussing here is not about banning “semantic” class names or rules that group many declarations. The idea is to reevaluate the benefits of the common approach, rather than adopting it as the de facto technique for styling Web pages. In other words, we are restricting the “component” approach to the few cases in which it makes the most sense. For example, you may find the following rules in our style sheets — rules that set styles for which we do not create simple classes, or rules that ensure cross-browser support: .button { display: inline-block; *display: inline; zoom: 1; font: bold 16px/2em Arial; height: 2em; box-shadow: inset 1px 1px 2px 0px #fff; background: -webkit-gradient(linear, left top, left bottom, color-stop(0.05, #ededed), color-stop(1, #dfdfdf)); background: linear-gradient(center top, #ededed 5%, #dfdfdf 100%); filter: progid:DXImageTransform.Microsoft.gradient(startColorstr='#ededed', endColorstr='#dfdfdf'); background-color: #ededed; color: #777; text-decoration: none; text-align: center; text-shadow: 1px 1px 2px #ffffff; border-radius: 4px; border: 2px solid #dcdcdc; } .modal { position: fixed; top: 50%; left: 50%; -webkit-transform: translate(-50%,-50%); -ms-transform: translate(-50%,-50%); transform: translate(-50%,-50%); *width: 600px; *margin-left: -300px; *top: 50px; } @media \0screen { .modal { width: 600px; margin-left: -300px; top: 50px; } } On the other hand, you would not see rules like the ones below (i.e. styles bound to particular modules), because we prefer to apply these same styles using multiple classes: one for font size, one for color, one for floats, etc. .news-module { font-size: 14px; color: #555; float: left; width: 50%; padding: 10px; margin-right: 10px; } .testimonial { font-size: 16px; font-style: italic; color: #222; padding: 10px; } Do We Include Every Possible Style In Our Style Sheet? The idea is to have a pool of rules that authors can choose from to style anything they want. Styles that are common enough across a website would become part of the style sheet. 
Do We Include Every Possible Style In Our Style Sheet?

The idea is to have a pool of rules that authors can choose from to style anything they want. Styles that are common enough across a website would become part of the style sheet. If a style is too specific, then we'd rely on @style (the style attribute). In other words, we'd prefer to pollute the markup rather than the style sheet. The primary goal is to create a sheet made of rules that address various design patterns, from a basic rule that floats an element to "helper" classes.

/**
 * one liner with ellipsis
 * 1. we inherit hyphens:auto from body, which would break "Ell" in table cells
 */
.Ell {
    max-width: 100%;
    white-space: nowrap;
    overflow: hidden;
    text-overflow: ellipsis;
    -webkit-hyphens: none; /* 1 */
    -ms-hyphens: none;
    -o-hyphens: none;
    hyphens: none;
}

/**
 * kinda line-clamp
 * two lines according to default font-size and line-height
 */
.LineClamp {
    display: -webkit-box;
    -webkit-line-clamp: 2;
    -webkit-box-orient: vertical;
    font-size: 13px;
    line-height: 1.25;
    max-height: 32px;
    _height: 32px;
    overflow: hidden;
}

/**
 * reveals a hidden element on :hover or :focus
 * visibility can be forced by applying the class "RevealNested-on"
 * IE8+
 */
:root .NestedHidden { opacity: 0; }
:root .NestedHidden:focus,
:root .RevealNested:hover .NestedHidden,
:root .RevealNested-on .NestedHidden { opacity: 1; }

How Does This Scale?

We have just released a brand new My Yahoo, which relies heavily on this technique. This is how it compares to a few other Yahoo products (after gzip'ing):

Property             CSS assets
answers.yahoo.com    30.1 KB
sports.yahoo.com     67.4 KB
omg.yahoo.com        46.2 KB
yahoo.com            45.9 KB
my.yahoo.com         21.3 KB

Our style sheet weighs 17.9 KB (about 3 KB of which are property-specific), and it is shareable (unlike the style sheets of other properties). The reason for this is that none of the rules it contains relate to content.

Wrapping Up

Because presentational class names have always been deemed "out of bounds," we — the community — have not really investigated what their use entails. In fact, in the name of best practice, we've dismissed every opportunity to explore their potential benefits. Here at Yahoo, @renatoiwa, @StevenRCarlson and I are developing projects with this new CSS architecture. The code appears to be predictable, reusable, maintainable and scalable. These are the results we've experienced so far:

Less bloat: We can build entire modules without adding a single line to the style sheets.

Faster development: Styles are driven by classes that are not related to content, so we can copy and paste existing modules to get started.

RTL interface for free: Using start and end keywords makes a lot of sense. It saves us from having to write extra rules for RTL context (see the sketch below).

Better caching: A huge chunk of CSS can be shared across products and properties.

Very little maintenance (on the CSS side): Only a small set of rules are meant to change over time.

Less abstraction: There is no need to look for rules in a style sheet to figure out the styling of a template. It's all in the markup.

Third-party development: A third party can hand us a template without having to attach a style sheet (or a style block) to it. No custom rules from third parties means no risk of breakage due to rules that have not been properly namespaced.

(Note that if maintenance is easier on the CSS side than on the HTML side, then the reason is simply that we can cheat on the CSS side by not cleaning up rules. But if we were required to keep things lean and clean, then the pain would be the same.)
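To make the "RTL for free" point concrete, here is a hedged sketch (not Yahoo's actual build output) of how direction-agnostic class names could compile to two different style sheets:

/* LTR style sheet */
.Fl-start { float: left; }
.Mend-10 { margin-right: 10px; }

/* RTL style sheet: same class names, mirrored values; the markup does not change */
.Fl-start { float: right; }
.Mend-10 { margin-left: 10px; }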
Final Note

I was at a meetup a couple of weeks ago, where I heard Colt McAnlis say, "Tools, not rules." A quick search for this idiom returned this: "We all need to be open to new learnings, new approaches, new best practices and we need to be able to share them." (al, ea) © Thierry Koblentz for Smashing Magazine, 2013.

  • Smashing Magazine
    Designing For Emotion With Hover Effects

Of the many factors that must be considered in Web design, emotional interaction is an important, but frequently neglected, component. In the real world, we experience the sensual interaction of design all the time. Reflect for a moment on the emotional engagement of slipping behind the wheel of a powerful luxury car: the welcoming embrace of the driving seat, the tactile experience of running your hands over the leather on the steering wheel, the subtle gleam reflected in the controls. There is no technical requirement for any of these finely crafted details: The vehicle would perform equally well without them. Yet this singular focus on sensual and emotional engagement is what separates luxury goods from all others and what inspires deep loyalty from customers.

This drive for emotional design can be discovered in the most surprising places. Take the power light on the next-to-last generation of the Apple MacBook. The company deserves credit for helping thousands of users avoid entanglements with power cords through the MagSafe connector, but the deeper emotional intimacy is held in the tiniest of details: the power status light on the front of the laptop. When in sleep mode, the light pulses, and not randomly: It does so 10 times a minute, the breathing rate of a resting human being. The blink rate of Apple's MacBook in sleep mode falls within the average breathing rate of a resting adult. (Image source: Michael Stillwell)

To take another example: The front of a car is not two headlights and a grill. It is a face, one with its own character and communication. Look at cars and vans marketed to suburban mothers, on which lines tend to be round, curved, welcoming and friendly. The front of the vehicle often demonstrates the neotenic effect: We regard "eyes" (headlights) that are larger than the body as being cute, safe, loveable. Compare this to the visual communication of vehicles marketed to men, particularly sports cars: Those lines are angular and aggressive, right down to the slitted eyes. Note how a sports car's front interacts with you on an emotional level. (Image source: GabboT)

Designing For Emotion

We can strive to achieve the same emotional engagement on websites — a promise to delight, surprise and affect users without resorting to manipulation or being too saccharine. While the digital realm lacks many sensual cues, it is possible to impart an emotional experience through a careful selection of color, stroke and typography. CSS transitions help us to make interactions more human: Rather than flicking from one state to another, we can ease the motion of an element over a few hundred milliseconds to make a website feel more inviting. Paradoxically, the same techniques can also make a website feel faster, especially if we use the opportunity to preload content. In creating such experiences, we must avoid the mistakes of the past: The engagement should encourage the visitor to explore the website, but should never hide important components such as navigation. Rewards for exploration should be treated as a bonus, rather than a required interaction. Any information shared should also be accessible to those who don't have the time or ability to use the interface.

Surprise In Boxes

(Please see the demo at "Hover Effect on Images From Different Directions Using Pure CSS," on CodePen.) One way to design for emotional surprise and stimulation might be to present different panes of information according to the way the user hovers over a responsive image.
For this example, I'll use a photo of the spiral galaxy NGC 1309, added to an HTML5 page:

<img src="…" alt="">

(Note that I've intentionally left the alt value of the image blank. We'll return to that attribute shortly.) The information panels are created from four span elements, with the entire group wrapped in a div tag that includes a class:

<div class="multi-hover">
    <span>Spiral Galaxy NGC 1309</span>
    <span>Approximately 120 million light years from Earth</span>
    <span>Home to several Cepheid variable stars</span>
    <span>Member of the Eridanus galactic cloud</span>
    <img src="…" alt="">
</div>

We'll write the CSS transition code sans vendor prefixes: Internet Explorer 10 does not use prefixes for animation, Firefox no longer requires them, Chrome is not far behind, and a piece of JavaScript magic such as Lea Verou's -prefix-free will take care of those browsers that still do.

.multi-hover {
    position: relative;
    font-family: Orbitron, sans-serif;
    max-width: 500px;
    line-height: 0;
}
.multi-hover img {
    max-width: 100%;
}
.multi-hover span {
    position: absolute;
    width: 100%;
    height: 100%;
    line-height: 1.5;
    font-weight: 100;
    text-align: center;
    box-sizing: border-box;
    font-size: 3em;
    transition: .3s linear;
    color: white;
    padding: 15%;
    opacity: 0;
}

The CSS takes advantage of the rule that absolutely positioned elements inside relative containers will be transformed relative to their parent. Because the image determines the height and width of the div, the span elements will always be exactly the same size, protected from growing further by the use of box-sizing, max-width and line-height: 0. In this example, I'm using Orbitron by The League of Moveable Type as an appropriate typeface. Next, locate the span elements, so that each lies just over the inner edge of the image. We'll do so by writing offsets from the containing div in percentages, to keep everything responsive:

.multi-hover span:nth-child(1) { top: 0; left: 90%; background: hsla(0,70%,50%,0.6); }   /* right panel */
.multi-hover span:nth-child(2) { top: -90%; left: 0; background: hsla(90,70%,50%,0.6); }  /* top panel */
.multi-hover span:nth-child(3) { top: 0; left: -90%; background: hsla(180,70%,50%,0.6); } /* left panel */
.multi-hover span:nth-child(4) { top: 90%; left: 0; background: hsla(270,70%,50%,0.6); }  /* bottom panel */

As you'll see in a moment, the order of the panel declarations matters. The result looks something like this: Positioned span elements with text.

To clip the outside edges of the panels, use overflow: hidden on the containing div:

.multi-hover {
    position: relative;
    overflow: hidden;
    font-family: Orbitron, sans-serif;
    …
}

Now, the result appears like this: The colored sections we can see for now will function as the "hit areas" of our panels. Increasing the size of these areas will increase the panel's ability to respond to quicker and broader mouse movements, but will also increase the overlap between them, making it more likely that a different panel will be activated than the one expected. Finally, we'll hide the panels entirely by setting their opacity to 0 and moving them on hover, taking advantage of the fact that transparent elements still respond to mouse events.
.multi-hover span {
    position: absolute;
    width: 100%;
    height: 100%;
    line-height: 1.5;
    font-weight: 100;
    z-index: 2;
    text-align: center;
    box-sizing: border-box;
    font-size: 3em;
    transition: .3s linear;
    color: white;
    padding: 15%;
    opacity: 0;
}
.multi-hover span:hover {
    opacity: 1;
}
.multi-hover span:nth-child(odd):hover {
    left: 0;
    z-index: 3;
}
.multi-hover span:nth-child(even):hover {
    top: 0;
    z-index: 3;
}

The odd and even declarations set each panel to the opposite side of the box, positioning them entirely over the image and completing the design. Note that this interface pattern requires exploratory mouse movement from the outside of the box inwards to activate each panel; alternately, a completely "in box" exploration model could be created by lowering the z-index of the :hover states to 1.

Accessibility Testing

Making the panels invisible brings up the issue of accessibility. Partially sighted users might be able to see and interact with the panels, but blind users obviously will not. Screen readers treat such content as being truly "invisible" (leaving it, therefore, unread), depending on the context; content that is set to display: none is usually left unread, for example. However, opacity does not trigger this behavior. On a Mac, you can easily test this by activating VoiceOver and having it read the page we've created in the browser:

Command + F5 to start VoiceOver,
Control + Option + A to read Web page content,
Command + F5 to stop VoiceOver.

You'll hear the span content being read in the order that it appears on the page; all that's missing is a description of the image at the end:

Spiral Galaxy NGC 1309
Approximately 120 million light years from Earth
Home to several Cepheid variable stars
Member of the Eridanus galactic cloud

Note that this only treats the visual aspects of accessibility. There are other important areas (cognitive, motor and language) that I will leave unaddressed in this example for the sake of space but that have been detailed by other Smashing Magazine authors.

Adding Touch Support

The majority of mobile devices that depend on touch interfaces do not support a pure "hover" state: An element is either "in touch" or not. With rare exceptions, there is no registration of a fingertip being just above the screen of a mobile device. A brief touch might be interpreted as a hover event by mobile browsers, but this behavior is not perfectly predictable. As such, our interface will not work on most tablets or phones, at least as it currently exists. To solve this issue, we'll add a little JavaScript (by way of jQuery) to the page:

function is_touch_device() {
    return !!('ontouchstart' in window) || !!('onmsgesturechange' in window);
}

$(document).ready(function() {
    if (is_touch_device()) {
        $('span').unbind('mouseenter mouseleave touchend touchstart');
        $('span').bind('touchstart', function() {
            $('span').removeClass('hover');
            $(this).addClass('hover');
        });
    }
});

The JavaScript applies a class of hover to any span element that is touched. So, we just need to alter our CSS declarations to make this class equivalent to the :hover event:

.multi-hover span:hover {
    opacity: 1;
}
.multi-hover span:nth-child(odd):hover {
    left: 0;
    z-index: 3;
}
.multi-hover span:nth-child(odd).hover {
    left: 0;
    z-index: 1;
}
.multi-hover span:nth-child(even):hover {
    top: 0;
    z-index: 3;
}
.multi-hover span:nth-child(even).hover {
    top: 0;
    z-index: 1;
}

Note that in the mobile version, the extended panel goes "under" the level of those that are retracted, allowing them to be tapped.
Disengaging the copy controls on handheld devices might also be wise. This is not some futile pursuit of DRM, but a practical response to the fact that longer touch times on mobile devices can bring up copy prompts that could get in the way of the user interface:

.multi-hover span {
    -ms-touch-action: none;
    -webkit-touch-callout: none;
    -webkit-user-select: none;
}

This user interface pattern of exploration on a mobile device may now be described as "tap on edge." This could be further enhanced by increasing the overlap of the original position of the span elements in an @media query to provide larger hotspots, making the panels easier to activate, along with further improvements for smartphones. This code is not the only way to achieve this effect either: Ana Tudor has written an alternative technique using CSS transforms and Sass.

Conclusion

Crafting an element of surprise on Web pages can raise visitor engagement without obfuscating important content, sidelining mobile visitors or disadvantaging users who require accessibility features. Naturally, this must always be balanced with the need to guide users through the website: Visitors will only be surprised by the effects described here if they explore the page for themselves, or are led to it. How much users should be led and how much opportunity they should be given to discover a delight on their own initiative is a central question of user experience design. (al) (ea) © Dudley Storey for Smashing Magazine, 2013.

  • Smashing Magazine
    Analyzing Network Characteristics Using JavaScript And The DOM, Part 2

In Part 1 of this series, we had a look at how the underlying protocols of the Web work, and how we can use JavaScript to estimate their performance characteristics. In this second part, we'll look at DNS, IPv6 and the new W3C specification for the NavigationTiming API.

DNS Explained

Every device attached to the Internet is identified by a numeric address known as an IP address. The two forms of IP addresses seen on the open Internet are IPv4, which is a 32-bit number often represented as a series of four decimal numbers separated by dots, e.g. 80.72.139.101, and IPv6, which is a 128-bit number represented as a series of multiple hexadecimal numbers separated by colons, e.g. 2607:f298:1:103::c8c:a407. These addresses are good for computers to understand; they take up a fixed number of bytes and can easily be processed, but they're hard for humans to remember. They're also not very good for branding, and are often tied to a geographic location or an infrastructure service provider (like an ISP or a hosting provider). To get around these shortcomings, the "Domain Name System" was invented. In its simplest form, DNS creates a mapping between a human-readable name, like "www.smashingmagazine.com", and its machine-readable address (80.72.139.101). DNS can hold much more information, but this is all that's important for this article. For now, we'll focus on DNS latency, and how we can measure it using JavaScript from the browser. DNS latency is important because the browser needs to do a DNS lookup for every unique hostname that it needs to download resources from — even if multiple hostnames map to the same IP address.

Measuring DNS Lookup Times

The simple way to measure DNS lookup time from JavaScript would be to first measure the latency to a host using its IP address, and then measure it again using its hostname. The difference between the two should give us DNS lookup time. We use the methods developed in Part 1 to measure latency. The problem with this approach is that if the browser has already done a DNS lookup on this hostname, then that lookup will be cached, and we won't really get a difference. What we need is a wildcard DNS record, and a Web server listening on it. Carlos Bueno did a great write-up about this on the YDN blog a few years ago, and built the code that boomerang uses. Before we look at the code, let's take a quick look at how DNS lookups work with the following (simplified) diagram: From left to right: The client here is the browser, the DNS server is (typically) the user's ISP, the Root name server knows where to look for most domains (or who to ask if it doesn't know about them), and finally the authoritative server, which is the DNS server of the website owner. Each of these layers has its own cache, and that cache generally sticks around for as long as the authoritative server's TTL (known as "Time To Live") says it should; but not all servers follow the spec (and that's a complete topic in itself). Now, let's look at the code:

var dns_time;

function start() {
    var gen_url, img, t_start, t_dns, t_http,
        random = Math.floor(Math.random()*(2147483647)).toString(36);

    // 1. Create a random hostname within our domain
    gen_url = "http://*.foo.com/".replace(/\*/, random);

    var A_loaded = function() {
        t_dns = new Date().getTime() - t_start;

        // 3. Load another image from the same host (see step 2 below)
        img = new Image();
        img.onload = B_loaded;
        t_start = new Date().getTime();
        img.src = gen_url + "image-l.gif?t=" + (new Date().getTime()) + Math.random();
    };

    var B_loaded = function() {
        t_http = new Date().getTime() - t_start;
        img = null;

        // 4. DNS time is the time to load the image with uncached DNS
        //    minus the time to load the image with cached DNS
        dns_time = t_dns - t_http;
    };

    // 2. Load an image from the random hostname
    img = new Image();
    img.onload = A_loaded;
    t_start = new Date().getTime();
    img.src = gen_url + "image-l.gif?t=" + (new Date().getTime()) + Math.random();
}

Let's step through the code quickly. What we've done here is the following:

1. Create a random hostname prefixed to our wildcard domain. This makes sure that the hostname lookup isn't cached by anyone.
2. Load an image from this host and measure the time it takes for it to load.
3. Load another image from the same host and measure the time it takes to load.
4. Calculate the difference between the two measured times.

The first measured time period includes DNS lookup time, TCP handshake time and network latency. The second time measured includes network latency. There are two downsides to this approach, though. First, it measures the worst-case DNS lookup time, i.e. the time it takes to do a DNS lookup if your hostname isn't cached by any intermediate DNS server. In practice, this isn't always the case. There isn't an easy way to get around that without the help of browsers, and we'll get to that at the end of this article. Also, what probably makes it hard for most people to implement is setting up a wildcard DNS record. This isn't always possible if you don't control your DNS servers. Many shared hosting providers won't let you set up a wildcard DNS record. The only thing you can do in this case is to move hosting providers, or at least DNS providers.

Measuring IPv6 Support And Latency

Technically, measuring IPv6 shouldn't really be a separate topic; however, even a decade after its introduction, IPv6 adoption is still fairly low. ISPs have been holding back because not too many websites offer IPv6 support, and website owners have been holding back because not too many of their users have IPv6 support, and they're not sure how it will impact performance or user experience. The IPv6 test in boomerang helps you determine if your users have IPv6 support and how their IPv6 latency compares to IPv4 latency. It doesn't check to see if their IPv6 support is broken or not (but see Google's IPv6 test page if you'd like to know that). There are two parts to the IPv6 test: First, we check to see if we can connect to a host using its IPv6 address, and if we can, we measure how long it takes. Next, we try to connect to a hostname that only resolves to an IPv6 address. The first test tells us if the user's network can make IPv6 connections. The second tells us if their DNS server can look up AAAA records. We need to run the test in this order because we'd be unable to correctly test DNS if connections at the IP level fail. The code is very similar to the DNS test, except we don't need a wildcard DNS record:

var ipv6_url = "http://[2600:1234::d155]/image-l.gif",
    host_url = "http://ipv6.foo.com/image-l.gif",
    timeout = 1200,
    ipv6_latency = 'NA',
    dnsv6_latency = 'NA',
    timers = {
        ipv6: { start: null, end: null },
        host: { start: null, end: null }
    };

var img,
    rnd = "?t=" + (new Date().getTime()) + Math.random(),
    timer = 0;

img = new Image();

function HOST_loaded() {
    // 4. When image loads, record its time
    timers['host'].end = new Date().getTime();
    clearTimeout(timer);
    img.onload = img.onerror = null;
    img = null;

    // 5. Calculate latency
    done();
}

function error(which) {
    // 6. If any image fails to load or times out, terminate the test immediately
    timers[which].supported = false;
    clearTimeout(timer);
    img.onload = img.onerror = null;
    img = null;
    done();
}

function done() {
    if (timers['ipv6'].end !== null) {
        ipv6_latency = timers.ipv6.end - timers.ipv6.start;
    }
    if (timers['host'].end !== null) {
        dnsv6_latency = timers.host.end - timers.host.start;
    }
}

img.onload = function() {
    // 2. When image loads, record its time
    timers['ipv6'].end = new Date().getTime();
    clearTimeout(timer);

    // 3. Then load image with hostname that only resolves to ipv6 address
    img = new Image();
    img.onload = HOST_loaded;
    img.onerror = function() { error('host'); };
    timer = setTimeout(function() { error('host'); }, timeout);
    timers['host'].start = new Date().getTime();
    img.src = host_url + rnd;
};

img.onerror = function() { error('ipv6'); };
timer = setTimeout(function() { error('ipv6'); }, timeout);
timers['ipv6'].start = new Date().getTime();

// 1. Load image with ipv6 address
img.src = ipv6_url + rnd;

Yes, this code can be refactored to make it smaller, but that would make it harder to explain. This is what we do:

1. We first load an image from a host using its IPv6 address. This checks to see that we can make a network connection to an IPv6 address. If your network, browser or OS don't support IPv6, this will fail and the onerror event fires.
2. If the image loads up, we know that IPv6 connections are supported. We record the time that we'll use to measure latency later.
3. Then we try to load up an image using a hostname that only resolves to an IPv6 address. It's important that this hostname does not resolve to an IPv4 address, or this test might pass even if the DNS server cannot handle IPv6.
4. If this succeeds, we know that our DNS server can look up and return AAAA (the IPv6 equivalent of A) records. We record the time.
5. And then go ahead and calculate the latency. We can compare this with our IPv4 latency and DNS latency. This would also be an appropriate place to call any callback function to say that the test has completed.
6. If any of the image loads fired an onerror event or if they timed out, we terminate the test immediately. In that case, any tests that haven't run have their corresponding variable (ipv6_latency or dnsv6_latency) set to "NA", indicating no support.

There are other ways to test IPv6 support with help from the server side; for example, have your server set a cookie stating whether it was loaded via IPv4 or IPv6. This only works well if your testing page and your image page are on the same domain.

The NavigationTiming API

The NavigationTiming API is an interface provided by many modern browsers that gives JavaScript developers detailed information about the time the browser spent in the various states of downloading a page. The specification is still in a draft state, but as of the date of this article, Internet Explorer, Chrome and Firefox support it. Safari and Opera do not currently support the API. JavaScript developers get access to the NavigationTiming object through window.performance.timing. Try this now. If you're using Chrome, IE 9+ or Firefox 8+, open a Web console and inspect the contents of window.performance.timing. The diagram below explains the order of events whose time shows up in the object.
Let's look at a few of them:

Page Load Time: We get the full page load time by taking the difference between loadEventEnd and navigationStart. The latter tells us when the user initiated the page load, either by clicking a link, or entering it into their browser's URL bar. The former tells us when the onload event finished. If we're not interested in the execution time of the onload event, we could use loadEventStart instead.

Network/Application Latency: Network latency is the time from the browser initiating download to the time the first byte showed up. Now, part of this latency could be attributed to the application doing something before sending out bytes, but there's no way to know that from the client side. We use the difference between requestStart and responseStart.

TCP Connect Time: TCP connect time is the difference between connectStart and connectEnd; however, if the connection is over SSL, then this includes the time to negotiate an SSL handshake. You'd need to take that into account, and use secureConnectionStart instead of connectEnd if it exists, and if you care about the difference.

DNS Latency: DNS latency is the difference between domainLookupStart and domainLookupEnd.

Important: We use a combination of times from window.performance.timing to determine each one of these. While this looks good for the most part, and really tells you what your users experience, there are a few caveats to be aware of. If DNS is already cached, then DNS latency will be 0. Similarly, if the browser uses a persistent TCP connection, then TCP connect time will be 0. If the document is read out of cache, then network latency will be 0. Keep these points in mind and use them to determine what fraction of your users makes effective use of available application caches.
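Pulling those four measurements together in JavaScript, here is a minimal sketch (run it after the load event has finished, since loadEventEnd is 0 until then):

var t = window.performance && window.performance.timing;

if (t) {
    // Full page load: from user-initiated navigation to the end of onload
    var page_load = t.loadEventEnd - t.navigationStart;

    // Network/application latency: request sent until first byte received
    var latency = t.responseStart - t.requestStart;

    // TCP connect time (includes the SSL handshake when the connection is over SSL)
    var connect = t.connectEnd - t.connectStart;

    // DNS lookup time (0 if the hostname was already cached)
    var dns = t.domainLookupEnd - t.domainLookupStart;
}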
The Navigation Timing interface provides us with many more timers, but several of them are restricted by the browser's same origin policy. These include details about redirects and unloading of the previous page. Other timers related to the DOM already have equivalent JavaScript events, namely the readystatechange, DOMContentLoaded and load events.

The Network Information API

Another interesting network-related API is the Network Information API. While not strictly performance related, it does help make guesses at expected network performance. This API is currently only supported by Android devices, and is exposed via the navigator.connection.type object. In particular, it tells you whether the device is currently using Ethernet, Wi-Fi, 2G or 3G. An article I highly recommend reading is David Calhoun's piece, which shows some good examples of Optimizing Based On Connection Speed. Both the article and the comments are useful reading.

Summary

While the Navigation Timing API provides easy access to accurate page timing information, it is still insufficient to draw a complete picture. There is still some benefit to estimating various performance characteristics using the techniques mentioned earlier in this series. Whether we need to support browsers that do not currently implement Navigation Timing, get information about resources not included in the current page, find out more about the user's network bandwidth, or learn whether their support for IPv6 is better or worse than their support for IPv4 — a combination of methods gives us the best all-round picture. All of the techniques presented here were developed while writing Boomerang, though not all of them have made it into the code yet.

References

The following links helped in writing this article and may be referred to for more information on specific topics:

The Domain Name System: Wikipedia article explaining what DNS is and how it works.
RFC 1035: DNS Specification: One of the RFCs about DNS, this one details the specification and implementation.
Wildcard DNS records: Wikipedia article explaining Wildcard DNS.
IPv4: Wikipedia article about revision 4 of the IP addressing protocol.
IPv6: Wikipedia article about revision 6 of the IP addressing protocol.
Google's IPv6 test page: Tells you if your browser, OS and network provider support IPv6 and whether that support works correctly or not.
Hurricane Electric's IPv6 tunnels: Create an IPv6 tunnel over your IPv4 network to give yourself IPv6 support (most useful for testing) before your ISP does.
The NavTiming Specification: W3C draft specification for the browser NavTiming API.
Network Information API: W3C draft specification for the Network Information API.
"Using navigator.connection on Android" written by David Calhoun.

(Credits of image on frontpage: Vlasta Juricek) (il) (ea) © Philip Tellis for Smashing Magazine, 2013.

  • Smashing Magazine
    How Optimized Are Your Images? Meet ImageOptim-CLI, a Batch Compression Tool

Exporting images for the Web from one's favorite graphics software is something many of us have done hundreds of times. Our eyes fixate on an image's preview, carefully adjusting the quality and optimization settings until we've found that sweet spot, where the file size and quality are both the best they can possibly be. After exporting the image — usually using a feature called "Save for the Web" — and having gone to all that care and effort, we would be forgiven for thinking that our image is in the best shape possible. That's not always the case, of course. In fact, much more data is usually left in such files, data that browsers have to download despite not requiring or even using it, data that keeps our users waiting just a bit longer than necessary. Thankfully, a number of popular tools can help us optimize images even further, but which should we use? We assumed, for a time at least, that our graphics editing software properly optimized our files, but what do we really know about our image optimization tools?

Image Optimization Tools

If you're not currently using any image optimization tool, I would urge you to choose one. Any is better than none. Regardless of which you choose, you will likely speed up your website and keep users happy. To inform our work, I ran the most popular image optimization tools over a varied sample of images (kindly donated by Daan Jobsis via his "Retina Revolution" article), and I've published the results on GitHub. The report shows us how much data each tool saves and how much quality was lost statistically. However, how great a loss in quality is noticeable and how much is acceptable will vary from person to person, project to project and image to image.

Aim For The Biggest Gains

I've been using ImageOptim for many years, with ImageAlpha and JPEGmini joining it more recently. With this trio, we have a specialist in JPEGs, another in PNGs, and a great all-round application, ImageOptim, which also supports GIF and other formats. Each uses different techniques to deliver impressive savings, but they complement each other when combined to offer better savings still.

ImageOptim: ImageOptim beats any single lossless optimizer by bundling all of them. It works by finding the best combination of compression parameters and removes unnecessary comments and color profiles.

ImageAlpha: ImageAlpha is unique in its lossy conversion of PNG24 to PNG8, delivering savings many times bigger than popular PNG optimizers such as Smush.it and TinyPNG. The conversion even maintains alpha-transparency in all browsers, including on iOS and even in IE 6.

JPEGmini: JPEGmini is a "patent-pending photo recompression technology, which significantly reduces the size of photographs without affecting their perceptual quality." The creators claim it reduces a file's size by up to 80%, while maintaining quality that is visually identical to the original. The savings are quite remarkable, but you will need to purchase the software to use it without restriction.

Prioritize Convenience

In terms of performance, the comparative data is reassuring, and to date I've been happy with my decisions. But there's a real problem: all of these tools are GUI applications for OS X. This has some benefits because everything is local. You don't need to upload and download files to a Web server, so there's no risk of the service being temporarily unavailable. This also means that your images don't need to leave your machine either.
But at some point ahead of every launch, I had to remember to open each application, manually process new images, then wait for the tool to finish, before doing the same in the next application. This soon gets tedious: We need to automate! This is why (with James Stout and Kornel Lesiński) I've created ImageOptim-CLI, automated image optimization from the command line interface (CLI).

ImageOptim-CLI

Though other image optimization tools are available from the command line, ImageOptim-CLI exists because the current benchmarks suggest that ImageOptim, ImageAlpha and JPEGmini currently outperform those alternatives over lossless and lossy optimizations. I wanted to take advantage of this. Given a folder or other set of images, ImageOptim-CLI automates the process of optimizing them with ImageAlpha, JPEGmini and ImageOptim. In one command, we can run our chosen images through all three optimizers — giving us automated, multi-stage image optimization right from the command line. This gives us the levels of optimization of all three applications, with the convenience of the command line, opening up all kinds of possibilities for integration with other utilities:

Integrate it with Alfred workflows.
Extend OS X with folder actions and more using Automator.
Optimize images whenever they change with the Guard RubyGem.
Ensure that images are optimized when you Git commit (see the sketch at the end of this article).

Do you know of other ways to integrate image optimization in your workflow? If so, please share your ideas in the comments.

Installation and Usage

The CLI can be downloaded as a ZIP archive or cloned using Git, but the easiest way is by running this:

npm install -g imageoptim-cli

Running all three applications before closing them afterwards can be achieved with this:

imageoptim --image-alpha --jpeg-mini --quit --directory ~/Sites/MyProject

Or you can do it with the equivalent shorthand format:

imageoptim -a -j -q -d ~/Sites/MyProject

You will find more installation and usage examples on the project page on GitHub.

Case Study: Myspace

Earlier this week, I visited Myspace and found that 4.1 MB of data was transferred to my machine. With the home page's beautiful magazine-style layout, it's no surprise that roughly 76% (or 3.1 MB) of that were images. I was curious whether any data could be saved by running the images through ImageOptim-CLI. So, I recorded the video below to show the tool being installed and then run over Myspace's home page. As you can see, the total size of images before running the command was 3,186 KB, and ImageOptim-CLI was able to remove 986 KB of data, while preserving 99.93% of image quality.

grunt-imageoptim

There is a companion Grunt plugin for ImageOptim-CLI, called grunt-imageoptim, which offers full support for the optimization of folders and collections of images. It can also be paired with grunt-contrib-watch to run whenever any images are modified in your project. Smashing Magazine has a great article for those who want to get up and running with Grunt.

Summary

Image optimization is an essential step in a designer's workflow, and with so many tools to choose from, there's bound to be one that suits your needs. Data should bear heavily in your decision, so that you reap bigger rewards, but choose one that is convenient — using a weak tool every time is better than using a strong tool sometimes. You'll rarely make a decision in your career that doesn't have some kind of trade-off, and this is no different.
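As a rough illustration of the Git integration mentioned above, a pre-commit hook could run the CLI over your project's image folder before every commit. This is only a sketch: the hook location is standard Git, but the images/ path is an assumption; point -d at wherever your project keeps its images.

#!/bin/sh
# .git/hooks/pre-commit (make the file executable)
# Run ImageAlpha (-a) and JPEGmini (-j), quitting the apps when done (-q);
# abort the commit if optimization fails.
imageoptim -a -j -q -d images || exit 1

# Re-stage anything the optimizers rewrote
git add images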
Resources

ImageOptim
ImageAlpha
JPEGmini
ImageOptim-CLI
grunt-imageoptim

If you've made it this far, I thank you for reading and welcome your questions, comments and ideas. (al, ea) © Jamie for Smashing Magazine, 2013.

  • Smashing Magazine
    Introducing Jelly Navigation Menu: When Canvas Meets PaperJS

It's our great pleasure to support active members of the Web design and development community. Today, we're proud to present the Jelly Navigation Menu that shows the power of PaperJS and TweenJS when used together. This article is yet another golden nugget of our series of various tools, libraries and techniques that we've published here on Smashing Magazine: LiveStyle, PrefixFree, Foundation, Sisyphus.js, GuideGuide, Gridpak, JS Bin and CSSComb. — Ed.

There is no doubt that the Web helps designers and developers find the best inspiration and resources for their projects. Even though there are a bunch of different tutorials and tips available online, I feel that HTML5 canvas techniques are missing the most. Good news: I had the chance to fill this wide gap. In this article, I would like to share my experience and story of how I brought the "Jelly Navigation Menu" to life. Credits go to Capptivate.co and Ashleigh Brennan's icons — they were my inspiration for this project.

Before We Start

The source code for this project was originally written in CoffeeScript — I believe it's a better way to express and share JavaScript code. I will refer to the CoffeeScript source in code sections within this post, and you'll also notice links to CodePens that have been rewritten in JavaScript and represent local parts of the code as well. I recommend downloading the source code on GitHub so you can easily follow me while I explain the necessary code in detail. I used PaperJS for the canvas graphics and TweenJS for the animations. Both of them tend to freak out some folks, but don't worry, they are really intuitive and easy to understand. If you'd like to learn how to set up PaperJS and TweenJS environments, you can fork this cool bootstrap pen for online fun or this git repo if you want to experiment locally. A preview of the Jelly Navigation Menu. See Full Demo

First Step: Changing The Section Shape

Our first aim is to change the menu section shape by manipulating the curves. Every object is made up of anchor points. These points are connected with each other by curves. So each point has "In" and "Out" handles to define the location and direction of specific curves. Folks who work with vector editors should feel comfortable with this step. In Paper.js, paths are represented by a sequence of segments that are connected by curves. A segment consists of a point and two handles, defining the location and direction of the curves. See the handles in action. All we need to do is to change the handleOut position of the top-left and bottom-right points. To achieve this, I wrote simple so-called "toppie" and "bottie" functions:

toppie: (amount) ->
  @base.segments[1].handleOut.y = amount
  @base.segments[1].handleOut.x = (@wh/2)

bottie: (amount) ->
  @base.segments[3].handleOut.y = amount
  @base.segments[3].handleOut.x = - @wh/2

# @wh/2 is section center.
# @base variable holds section's rectangle path.

It's important to set the handle's X position to exactly the middle of the section, so that the curve will turn out to be symmetrical. See Demo #1
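For context, @base above is the section's rectangle path. A minimal PaperJS setup for it might look like this — a hypothetical sketch with placeholder size and color values, not the theme's real ones. (For a Path.Rectangle, segments[1] is the top-left point and segments[3] the bottom-right one, which is what toppie and bottie manipulate.)

# assumed setup: names mirror the article, values are placeholders
@wh = view.size.width
@base = new Path.Rectangle
  point: [0, 0]
  size: [@wh, 300]
@base.fillColor = '#8e7cc3'

# bend the top edge by 40px; the curve stays symmetrical
@toppie 40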
Second Step: Calculating The Scrolling Speed

So the next thing that needs to be done is to calculate the scrolling speed and direction, and then pass this data to the bottie and toppie functions. We can listen to the container's scrolling event and determine the current scrolling position (in my case the "container" is a #wrapper element, whereas it is a window object in the pen examples).

# get current scroll value
window.PaperSections.next = window.PaperSections.$container.scrollTop()

# and calculate the difference with previous scroll position
window.PaperSections.scrollSpeed = (window.PaperSections.next - window.PaperSections.prev)

# to make it all work, all we have left to do is to save current scroll position to prev variable
window.PaperSections.prev = window.PaperSections.next

This is repeated for every scrolling event. In this code snippet, window.PaperSections is just a global variable. I also made a few minor additions in my implementation:

A silly coefficient to increase scroll speed by multiplying it by 1.2 (I just played around with it).
I sliced the scroll speed result by its maximum, so that it is not larger than sectionHeight/2.
I also added a direction coefficient (it could be 1 or -1; you can change it in dat.gui on the top right of the page). This way you can control the reaction direction of sections.

Here is the final code:

if window.PaperSections.i % 4 is 0
  direction = if window.PaperSections.invertScroll then -1 else 1
  window.PaperSections.next = window.PaperSections.$container.scrollTop()
  window.PaperSections.scrollSpeed = direction * window.PaperSections.slice 1.2*(window.PaperSections.next - window.PaperSections.prev), window.PaperSections.data.sectionheight/2
  window.PaperSections.prev = window.PaperSections.next

window.PaperSections.i++

In this example, if window.PaperSections.i % 4 is 0 helps us react on every fourth scroll event — similar to a filter. That function lives in window.PaperSections.scrollControl. That's it! We're almost done! It couldn't be any easier, right? Try out the scrolling here. See the demo.

Step Three: Make It Jelly!

In this final step, we need to animate the toppie and bottie functions to 0 with TweenJS' elastic easing every time the scrolling stops.

3.1 Determine When Scrolling Stops

To do this, let's add the setTimeout function to our window.PaperSections.scrollControl function (or scroll) with a 50ms delay. Each time the scrolling event fires, the timeout is cleared, except for the last one: once scrolling stops, the code in our timeout will execute.

clearTimeout window.PaperSections.timeOut
window.PaperSections.timeOut = setTimeout ->
  window.PaperSections.$container.trigger 'stopScroll'
  window.PaperSections.i = 0
  window.PaperSections.prev = window.PaperSections.$container.scrollTop()
, 50

The main focus here is the 'stopScroll' event triggered on window.PaperSections.$container. We can subscribe to it and launch the animation appropriately. The other two lines of code are simply being used to reset helper variables for later scrollSpeed calculations. See Demo #2
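The pens linked throughout are plain JavaScript; a rough JS equivalent of the debounce above (an assumed translation, not copied from the pens) would be:

var timeOut;
window.PaperSections.$container.on('scroll', function () {
    clearTimeout(timeOut);
    timeOut = setTimeout(function () {
        window.PaperSections.$container.trigger('stopScroll');
        window.PaperSections.i = 0;
        window.PaperSections.prev = window.PaperSections.$container.scrollTop();
    }, 50);
});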
3.2 Animate Point's handleOut To "0"

Next, we'll write the translatePointY function to bring our jelly animation to life. This function will take the object as a parameter with the following key-value sets:

{
  # point to move (our handleOut point)
  point: @base.segments[1].handleOut,
  # destination point
  to: 0,
  # duration of animation
  duration: duration
}

The function body is made up of the following:

translatePointY: (o) ->
  # create new tween (from point position) to (options.to position, with duration)
  mTW = new TWEEN.Tween(new Point(o.point)).to(new Point(o.to), o.duration)

  # set easing to Elastic Out
  mTW.easing TWEEN.Easing.Elastic.Out

  # on each update set point's Y to current animation point
  mTW.onUpdate -> o.point.y = @y

  # finally start the tween
  mTW.start()

The TWEEN.update() function also has to be added to every frame of the PaperJS animation loop:

onFrame = -> TWEEN.update()

Also, we need to stop all animations on scrolling. I added the following line to the scroll listener function:

TWEEN.removeAll()

Finally, we need to subscribe to the stopScroll event and launch the animations by calling our translatePointY function:

window.PaperSections.$container.on 'stopScroll', =>
  # calculate animation duration
  duration = window.PaperSections.slice(Math.abs(window.PaperSections.scrollSpeed*25), 1400) or 3000

  # launch animation for top left point
  @translatePointY(
    point: @base.segments[1].handleOut
    to: 0
    duration: duration
  ).then =>
    # clear scroll speed variable after animation has finished
    # without it section will jump a little when new scroll event fires
    window.PaperSections.scrollSpeed = 0

  # launch animation for bottom right point
  @translatePointY
    point: @base.segments[3].handleOut
    to: 0
    duration: duration

Et voilà! You can preview the final demo here: See Demo #3

Note: In my source code of the translatePointY function, I added a deferred object for chaining, an optional easing and an onUpdate function. It is omitted here for the sake of simplicity.

In Conclusion

Last but not least, a class for the sections has to be added. Of course, you can make as many instances of it as you like; you just need to define the initial Y offset and colors. Also, you will need to make sure that the section in your layout has the same height as the section in canvas. Then we can just apply translate3d to both on the scroll event and during animations. This will cause HTML sections to move properly, just like the canvas sections, and hence produce a realistic animation. The reason why we need to use translate3d instead of translateY is to make sure that WebKit rendering engines use hardware acceleration while rendering them, so we do not drop out of the 60fps animation budget. But keep your eyes wide open if your sections contain any text. 3D transforms will drop anti-aliasing from subpixel to grayscale, so it may look a bit blurry!

Feedback

I look forward to your thoughts, questions and/or your feedback to the Jelly Navigation Menu in the comments section below. You can also reach out to me via Twitter anytime! (vf) (il) © Oleg Solomka for Smashing Magazine, 2013.

  • Smashing Magazine
    A Guide To Designing Touch Keyboards (With Cheat Sheet)

Touch devices have rightfully been praised for generally being much more intuitive than the decades-old computer mouse and keyboard. Users interact directly with touch interfaces, which narrows the gap between human act and software response. Yet typing on mobile devices — in particular on smartphones — is quite the horror story. It's slow, painful and error-prone. The obvious culprits are keyboard character size and proximity of the keys, but there are many other important aspects to consider, including:

using auto-correct dictionaries appropriately,
auto-capitalizing relevantly,
hinting at the input type,
honoring the tab sequence,
invoking custom keyboards consistently.

During a recent large-scale 1:1 mobile usability study of 18 of the largest mobile commerce websites, we observed how certain features and limitations of modern touch keyboards can collide with the user's expectations of how to fill out a form. When this happens, users quickly grow frustrated, as one form-field validation error pops up after another or, worse yet, the user gets stuck and ultimately abandons the website. When faced with a suboptimal touch keyboard implementation, users lose confidence in the website, and some even doubt their own ability to fill out a form on a smartphone. Clearly, a good mobile experience requires good form usability, and implementing touch keyboards is a key part of that. In this article, we will look a bit deeper into the usability issues surrounding touch keyboards, including five design guidelines that will alleviate some of these pains. The guidelines are an excerpt from the 147 guidelines in the M-Commerce Usability Report. We previously looked into 10 guidelines for mobile e-commerce here on Smashing Magazine; these 5 guidelines on touch keyboards are more generic and apply to any mobile website on which the user interacts with a touch keyboard. Furthermore, we've also benchmarked the mobile websites of the top 50 online retailers against these five guidelines and found that an astounding 98% get one or more of these wrong, and 70% of the top mobile websites get at least two wrong (as of 31 July 2013). While some of the guidelines might seem obvious at first, clearly we all need to pay better attention when so many multi-million dollar e-commerce stores get them wrong.

1. Disable Auto-Correction When The Dictionary Is Weak (92% Get It Wrong)

Issue: Poor auto-correction is frustrating when users actually notice it, and can be detrimental when they do not. Auto-correction often works very poorly for abbreviations, street names, email addresses and other words that are not in the dictionary. This caused significant issues throughout testing and resulted in a great deal of erroneous data being submitted as test subjects completed their purchases. As this subject typed in the street name "westheimer" on the website for Toys'R'Us, the phone incorrectly auto-corrected to "weathermen" (left). However, the subject did not notice this, submitted the form and received a validation error (right). One major problem with auto-correction is that users often fail to notice the correction (because they are focused on what they are typing instead of what they have typed). This is fine if the correction is correct, but it can be detrimental if it is wrong. For example, in multiple instances during testing, valid addresses were auto-corrected to invalid ones and submitted because the subject didn't notice the auto-correction.
On websites without address validators, this resulted in orders being shipped to wrong addresses, unless the subject was particularly attentive on the order-review page (after all, auto-corrected data often looks very similar to the intended input, making users less likely to notice the error). Of course, auto-correction fails miserably in the address field not just in edge cases (such as with "weathermen"), but with common (and standardized) abbreviations, such as "Rd" being auto-corrected to "Ed." That being said, auto-correction did prove helpful in other scenarios when it corrected invalid data to valid data. Disabling auto-correction on all fields (such as comment fields), therefore, is not recommended. Instead, use discretion, and disable it on fields for which the dictionaries are weak. This typically includes proper names of various sorts (streets, cities, persons) and other identifiers (email addresses, coupon codes, etc.). While seemingly simple, in practice this is by far the most neglected part of touch keyboard usability; almost every single top mobile commerce website gets this one wrong. The benchmark reveals that 92% of them haven't disabled auto-correction on the address field. Given the severity of the problems caused by auto-correction on address and email fields, it's astonishing how few actually disable auto-correction here. You can disable auto-correction by adding the autocorrect attribute to the input tag and setting it to off, like so:

<input type="text" autocorrect="off">

2. Show The Appropriate Keyboard Layout (60% Get It Wrong)

Issue: Inappropriate keyboard layouts slow down typing, and users generally mistype long number sequences on standard keyboards because of the small hit area and the close proximity of the numeric keys. One of the main limitations of touch keyboards on smartphones is their size. The letters themselves are minuscule. In fact, a character on an iPhone 4 in portrait mode measures 4 × 5.9 millimeters. Compare this to Apple's own design guidelines, which recommend that all clickable interface elements be at least 6.85 × 6.85 millimeters, because anything below that would yield very poor click accuracy. (Microsoft and Nokia also recommend a minimum hit area of approximately 7 × 7 millimeters.) Predictably, this results in misspellings. But by changing an attribute or two in the code of your input fields, you can instruct the user's phone to automatically show a keyboard optimized for the requested input. For example, you can invoke a numeric keyboard for a credit-card field, a phone keyboard for a telephone field, and an email keyboard for an email address. This saves the user from having to switch from the traditional keyboard layout, and, in the case of numeric inputs, minimizes typos because these dedicated keyboards have much larger keys, thus reducing the chance of accidental taps. The credit-card input on Best Buy invokes the standard keyboard layout, so the user has to first switch to the numbers and special characters view (middle) and then type out the 16 digits without a single typo. This was a difficult task for many subjects, who looked to and from their card and phone while trying to hit the miniature buttons stacked against one another. Throughout testing, multiple subjects noticed these dedicated keyboards and commented on them approvingly. In fact, on iOS, the hit area of a key is 471% larger on the numeric keyboard than on the traditional keyboard (209 × 108 pixels versus 52 × 76 pixels).
More importantly, we recorded significantly fewer typos in numeric inputs when a numeric keyboard layout was displayed. This led to fewer validation errors, which, in turn, resulted in a better and more seamless shopping experience on those websites. This was especially true of long sequences, such as phone and credit-card numbers. On the left, a subject accidentally hits the dash button instead of the "1" button due to the small size and proximity of buttons on the standard keyboard layout. A number-optimized keyboard layout would have been more appropriate. On the right, when the user fills out the "Day phone" field on GAP's website, a special phone-optimized keyboard appears, showing buttons that are 471% larger than those on the traditional keyboard. Another benefit of dedicated keyboard layouts is that they indicate the required input, which is helpful if the label is out of sight or the user is unsure of what to enter. However, note that numeric keyboard layouts can be limiting because they do not allow the user to enter alphabetic characters and allow only a few, if any, special characters or separators. Invoking these keyboards on fields where they are the best match, therefore, is important, which includes phone numbers, ZIP codes, credit-card numbers and credit-card security codes. Similarly, make sure your formatting examples are actually possible to replicate using the invoked keyboard. Typing a phone number according to the example format given here ("555-555-5555") is not possible on iOS because the keyboard layout does not include the dash character. (Interestingly, entering it on Android is possible, which goes to show why testing on multiple platforms helps to ensure that you don't require formatting that is only possible on some.) Given these substantial usability benefits, one would think that these dedicated keyboards are widely used. Yet, 60% of the top mobile commerce websites do not invoke them for one or more of the layouts for email addresses (email keyboard), phone numbers (phone keyboard) or credit-card numbers (numeric keyboard). Technically, there are a few ways to invoke the numeric keyboard layouts, and also slight distinctions between them (i.e. phone versus number), with slightly different behaviors across platforms (iOS, Android, etc.). In general, two HTML attributes will invoke a numeric keyboard layout, namely the type and pattern attributes. The type attribute carries semantic meaning and should be used only when an appropriate type is available for your input, which is the case for phone numbers and email addresses. For numeric inputs, however, providing a pattern attribute instead is recommended. (Note that you might want to add a novalidate attribute if you only want the browser to invoke the keyboard and not enforce this format.)

For any phone fields, use this:

<input type="tel">

For any other fields where you want to invoke a numeric keyboard, use this:

<input type="text" pattern="\d*" novalidate>

For any email fields, use this:

<input type="email">

As mentioned, there are some distinctions between the types of numeric keyboard layouts, as well as some differences between mobile platforms. For example, on iOS, the code provided above to invoke the phone layout would produce a keyboard that allows the user to enter digits and a small set of phone-related special characters and separators, whereas the code for invoking the numeric keyboard would allow the user to enter only digits.
As mentioned, there are some distinctions between the types of numeric keyboard layouts, as well as some differences between mobile platforms. For example, on iOS, the code provided above to invoke the phone layout produces a keyboard that allows the user to enter digits and a small set of phone-related special characters and separators, whereas the code for invoking the numeric keyboard allows the user to enter only digits. Meanwhile, on Android, the phone keyboard layout is also invoked, but with vastly more special characters, allowing for richer formatting of the phone number. Yet the numeric keyboard invoked by the pattern attribute isn’t supported by Android at all; instead, it simply invokes the regular alphanumeric keyboard.

While you could use type="number" to invoke a numeric keyboard on both iOS and Android, setting the type to number carries semantic meaning that in many cases wouldn’t be appropriate (for example, a credit-card number is a numeric sequence, not a number). Therefore, we recommend the more defensive strategy of using pattern="\d*", which produces an enhanced experience on iOS while having no implications on other platforms that do not yet support this behavior. (Of course, if the field does represent a number, such as a price or quantity, then type="number" should obviously be used.)

3. Invoke Keyboard Layouts Consistently (54% Get It Wrong)

Issue: When one field invokes a dedicated keyboard layout but other similar fields do not, users are confused and begin to question the type of input requested by the field without the dedicated keyboard.

Invoking the appropriate keyboard layout for a particular input field is great (see the previous recommendation), but be sure to do it consistently throughout your website, or else you could greatly confuse the user. In other words, if a ZIP code field invokes a numeric keyboard, then similar input fields should have the same behavior.

While this might sound obvious, many websites fail to invoke dedicated keyboard layouts consistently. For instance, the flower store FTD (pictured above) invokes a numeric keyboard for the credit-card number but not for the very next security-code field, even though both values are always numeric.

Of the top 50 grossing online retailers, 54% get this wrong on their mobile websites, where one or more of the telephone, credit-card or CVV fields don’t invoke a numeric touch keyboard. These 54% break down as follows (in absolute numbers): 24% invoke the numeric keyboard for none of these three numeric inputs (which, although consistent, is consistently bad), and the remaining 30% (including FTD) are inconsistent, with the numeric keyboard layout being invoked on only some of the fields.

Even more surprising is just how confused some of the test subjects were by this during the usability tests. They began questioning their initial interpretation of individual fields, thinking that perhaps something else was required. For example, upon seeing the standard keyboard layout for the “Card security code” (pictured on FTD’s website above), the subjects began wondering whether this was the three-digit code on the back of their credit card or one of the many other strings printed on the card.
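To illustrate, a checkout form that follows this recommendation would invoke a number-friendly keyboard on all three of the fields checked in the benchmark (a sketch; the field names are hypothetical):

<input type="tel" name="phone" />
<input type="text" pattern="\d*" name="card-number" />
<input type="text" pattern="\d*" name="card-security-code" />

Whichever combination of attributes you settle on, the key point is that all similar fields behave identically.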
4. Honor The Behavior Of The “Next” And “Previous” Buttons (4% Get It Wrong)

Issue: Users are either vexed or confused when “Next” and “Previous” buttons take them to fields that are illogically sequenced.

During testing, the subjects struggled with websites that failed to honor the behavior of the “Next” and “Previous” buttons. The expected behavior is straightforward: when the user clicks the “Next” button, they expect to be taken to the next logical field in the form, without any changes and without the form being submitted. The same goes for the “Previous” button, just in the opposite direction, of course. This goes beyond just having the right tab sequence (although that is a good start).

Things often go awry when dealing with dynamic fields that depend on the user’s prior selection. In these instances, we’ve seen users’ data get deleted or the tab sequence violated. One must be particularly careful with custom form interfaces, too. For example, on the Disney Store website, a custom-designed state selector isn’t part of the tabbing sequence (because it technically isn’t an input element), and so users are sent right past the state field.

After filling in her ZIP code here (left), the subject hit the “Next” button, which correctly took her to the “Location Type” drop-down menu (right). But, as shown, the website cleared the subject’s previously inputted data. Obviously, data should persist when the “Next” and “Previous” buttons are used.

These buttons essentially function as the mobile version of keyboard tabbing; therefore, they should adopt the same sequential principles as desktop tabbing. They should provide a fast way to get from one field to the next without having to use a pointer (whether a mouse or a finger). This is particularly important on mobile because screen space is so limited when the keyboard is open that the next field might be partially obstructed, making the “Next” button even more convenient to use.

So, while the “Next” and “Previous” buttons might not be used by all users, the consequence of dishonoring their behavior is significant. Luckily, most websites get this one right. As long as the code is clean, mobile browsers will, by default, set the tabbing sequence according to the order in which the fields appear in the markup. Of the top mobile websites, only 4% get this wrong.

5. Disable Auto-Capitalization Where Appropriate (38% Get It Wrong)

Issue: Nearly all subjects believed their email address had to be in lowercase, so auto-capitalizing this data adds needless friction to the process.

The default behavior of smartphones is to auto-capitalize the first letter in standard text fields, which is usually desirable. However, disabling this auto-capitalization is preferable in a few cases, especially for email addresses, which most test subjects wanted to be in lowercase.

This subject noticed the capital “J” and went back to replace it with a lowercase “j” because he was unsure whether the capitalized version would work.

Multiple times during testing, subjects noticed an uppercase letter and made an explicit effort to replace it with the lowercase equivalent. Most explained that they were unsure whether uppercase characters were allowed or whether email addresses in general were case-sensitive. On websites that had disabled auto-capitalization in the email field, no subject ever actively capitalized the first character.

Disabling auto-capitalization for email and other appropriate fields (such as URLs) is recommended, then. Among the top mobile commerce websites, 38% don’t disable auto-capitalization on email address fields, leaving them as plain-text input fields and leaving less technically inclined users in doubt.

Auto-capitalization can be disabled by adding the autocapitalize attribute to the tag and setting it to off, like so:

<input type="text" autocapitalize="off" />

Of course, for email fields, you should set the type attribute to email:

<input type="email" />

On iOS, setting the type to email will automatically disable auto-capitalization. However, still set the autocapitalize attribute, because that will also work on iOS and might be needed on other platforms that do not yet support the email input type or that implement it differently.
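Putting the recommendations together, an email field that invokes the email keyboard and disables both auto-correction and auto-capitalization would look something like this:

<input type="email" autocorrect="off" autocapitalize="off" />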
Testing And The Cheat Sheet

While these fundamentals might seem obvious at first, remember that 98% of the world’s largest mobile commerce websites violate at least one of them (see the complete list), and 70% get two or more of these “basic” touch keyboard guidelines wrong. In fact, 24% haven’t optimized their inputs for touch keyboards at all, whether by omitting basic keyboard layouts (phone, email, numeric), invoking them inconsistently (or consistently poorly), not disabling auto-correction where appropriate, or not disabling auto-capitalization for email fields.

One reason for this lack of compliance might be that very thorough testing would be required to spot all of the pitfalls across a large website; hence the third recommendation, of invoking keyboard layouts consistently, which in an ideal world shouldn’t even need to be mentioned. Another reason, mentioned in a prior Smashing Magazine article, is that mobile and touch interfaces represent a relatively new platform, with an entirely new interaction method that requires attention to a myriad of small details that we as Web designers and developers are not yet accustomed to actively looking for and designing for.

For this reason, we’ll end this article with a cheat sheet of the most common pitfalls when working with input fields for touch interfaces, along with copy-and-pastable code and a mobile touch-optimized demo of fields that invoke the correct keyboards, which you can use as a checklist when designing and developing mobile- and tablet-optimized websites.

Interactive cheat sheet for touch keyboards, with mobile-optimized demo

These fields are typically included in the following types of forms: account registration, account sign-in, search, surveys, the entire checkout process, comment forms and contact forms. We recommend searching your entire code repository to catch every single instance of them.

(al) (ea)

© Christian Holst for Smashing Magazine, 2013.
